diff --git a/README.md b/README.md index a53f0d99..2f098ec9 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ repository](https://github.com/NethermindEth/horus-checker) for full documentation. ## Disclaimer -Kindly note, Horus is a tool consisting of two separate components: Horus-Check, released under the AGPLv3 license, and Horus-Compiler, released under the Cairo Toolkit License. When "Horus" is referenced, the reference is to the two components jointly. +Kindly note, Horus is a tool consisting of two separate components: Horus-Check, released under the AGPLv3 license, and Horus-Compile, released under the Cairo Toolkit License. When "Horus" is referenced, the reference is to the two components jointly. Horus is currently in the alpha stage and no guarantee is being given as to the accuracy and/or completeness of any of the outputs the tool may generate. The tool is provided on an 'as is' basis, without warranties or conditions of any kind, either express or implied, including without limitation as to the outputs of the verification process and the security of any system verified using Horus. As per the relevant licenses, to the fullest extent permitted by the law, Nethermind disclaims any liability in connection with your use of Horus and/or any of its outputs. @@ -11,6 +11,6 @@ Please also note that the terminology used by Horus, including but not limited t For the avoidance of doubt, the outputs generated by Horus and/or your usage thereof shall not be considered or relied upon as any form of financial, investment, tax, legal, regulatory, or other advice. -Horus-Check is licensed under AGPLv3 (Copyright (C) 2023 Nethermind). For more information on the dependencies, please see [Licence-Masterfile](https://github.com/NethermindEth/horus-checker/blob/master/LICENSE). +Horus-Check is licensed under AGPLv3 (Copyright (C) 2023 Nethermind). For more information on the dependencies, please see [here](https://github.com/NethermindEth/horus-checker/blob/master/LICENSE). -Horus-Compiler is licensed under the Cairo Toolkit License (Copyright (C) 2023 Nethermind), pursuant to an exception granted to Nethermind by Starkware Industries Ltd. For more information on the dependencies please see [Licence-Masterfile2](https://github.com/NethermindEth/horus-compile/blob/master/LICENSE). \ No newline at end of file +Horus-Compile is licensed under the Cairo Toolkit License (Copyright (C) 2023 Nethermind), pursuant to an exception granted to Nethermind by Starkware Industries Ltd. For more information on the dependencies please see [here](https://github.com/NethermindEth/horus-compile/blob/master/LICENSE). \ No newline at end of file diff --git a/env/bin/Activate.ps1 b/env/bin/Activate.ps1 new file mode 100644 index 00000000..2fb3852c --- /dev/null +++ b/env/bin/Activate.ps1 @@ -0,0 +1,241 @@ +<# +.Synopsis +Activate a Python virtual environment for the current PowerShell session. + +.Description +Pushes the python executable for a virtual environment to the front of the +$Env:PATH environment variable and sets the prompt to signify that you are +in a Python virtual environment. Makes use of the command line switches as +well as the `pyvenv.cfg` file values present in the virtual environment. + +.Parameter VenvDir +Path to the directory that contains the virtual environment to activate. The +default value for this is the parent of the directory that the Activate.ps1 +script is located within. + +.Parameter Prompt +The prompt prefix to display when this virtual environment is activated. 
By +default, this prompt is the name of the virtual environment folder (VenvDir) +surrounded by parentheses and followed by a single space (ie. '(.venv) '). + +.Example +Activate.ps1 +Activates the Python virtual environment that contains the Activate.ps1 script. + +.Example +Activate.ps1 -Verbose +Activates the Python virtual environment that contains the Activate.ps1 script, +and shows extra information about the activation as it executes. + +.Example +Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv +Activates the Python virtual environment located in the specified location. + +.Example +Activate.ps1 -Prompt "MyPython" +Activates the Python virtual environment that contains the Activate.ps1 script, +and prefixes the current prompt with the specified string (surrounded in +parentheses) while the virtual environment is active. + +.Notes +On Windows, it may be required to enable this Activate.ps1 script by setting the +execution policy for the user. You can do this by issuing the following PowerShell +command: + +PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser + +For more information on Execution Policies: +https://go.microsoft.com/fwlink/?LinkID=135170 + +#> +Param( + [Parameter(Mandatory = $false)] + [String] + $VenvDir, + [Parameter(Mandatory = $false)] + [String] + $Prompt +) + +<# Function declarations --------------------------------------------------- #> + +<# +.Synopsis +Remove all shell session elements added by the Activate script, including the +addition of the virtual environment's Python executable from the beginning of +the PATH variable. + +.Parameter NonDestructive +If present, do not remove this function from the global namespace for the +session. + +#> +function global:deactivate ([switch]$NonDestructive) { + # Revert to original values + + # The prior prompt: + if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) { + Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt + Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT + } + + # The prior PYTHONHOME: + if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) { + Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME + Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME + } + + # The prior PATH: + if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) { + Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH + Remove-Item -Path Env:_OLD_VIRTUAL_PATH + } + + # Just remove the VIRTUAL_ENV altogether: + if (Test-Path -Path Env:VIRTUAL_ENV) { + Remove-Item -Path env:VIRTUAL_ENV + } + + # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether: + if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) { + Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force + } + + # Leave deactivate function in the global namespace if requested: + if (-not $NonDestructive) { + Remove-Item -Path function:deactivate + } +} + +<# +.Description +Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the +given folder, and returns them in a map. + +For each line in the pyvenv.cfg file, if that line can be parsed into exactly +two strings separated by `=` (with any amount of whitespace surrounding the =) +then it is considered a `key = value` line. The left hand string is the key, +the right hand is the value. + +If the value starts with a `'` or a `"` then the first and last character is +stripped from the value before being captured. + +.Parameter ConfigDir +Path to the directory that contains the `pyvenv.cfg` file. 
+#> +function Get-PyVenvConfig( + [String] + $ConfigDir +) { + Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg" + + # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue). + $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue + + # An empty map will be returned if no config file is found. + $pyvenvConfig = @{ } + + if ($pyvenvConfigPath) { + + Write-Verbose "File exists, parse `key = value` lines" + $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath + + $pyvenvConfigContent | ForEach-Object { + $keyval = $PSItem -split "\s*=\s*", 2 + if ($keyval[0] -and $keyval[1]) { + $val = $keyval[1] + + # Remove extraneous quotations around a string value. + if ("'""".Contains($val.Substring(0, 1))) { + $val = $val.Substring(1, $val.Length - 2) + } + + $pyvenvConfig[$keyval[0]] = $val + Write-Verbose "Adding Key: '$($keyval[0])'='$val'" + } + } + } + return $pyvenvConfig +} + + +<# Begin Activate script --------------------------------------------------- #> + +# Determine the containing directory of this script +$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition +$VenvExecDir = Get-Item -Path $VenvExecPath + +Write-Verbose "Activation script is located in path: '$VenvExecPath'" +Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)" +Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)" + +# Set values required in priority: CmdLine, ConfigFile, Default +# First, get the location of the virtual environment, it might not be +# VenvExecDir if specified on the command line. +if ($VenvDir) { + Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values" +} +else { + Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir." + $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/") + Write-Verbose "VenvDir=$VenvDir" +} + +# Next, read the `pyvenv.cfg` file to determine any required value such +# as `prompt`. +$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir + +# Next, set the prompt from the command line, or the config file, or +# just use the name of the virtual environment folder. +if ($Prompt) { + Write-Verbose "Prompt specified as argument, using '$Prompt'" +} +else { + Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value" + if ($pyvenvCfg -and $pyvenvCfg['prompt']) { + Write-Verbose " Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'" + $Prompt = $pyvenvCfg['prompt']; + } + else { + Write-Verbose " Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virutal environment)" + Write-Verbose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'" + $Prompt = Split-Path -Path $venvDir -Leaf + } +} + +Write-Verbose "Prompt = '$Prompt'" +Write-Verbose "VenvDir='$VenvDir'" + +# Deactivate any currently active virtual environment, but leave the +# deactivate function in place. +deactivate -nondestructive + +# Now set the environment variable VIRTUAL_ENV, used by many tools to determine +# that there is an activated venv. 
+$env:VIRTUAL_ENV = $VenvDir + +if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) { + + Write-Verbose "Setting prompt to '$Prompt'" + + # Set the prompt to include the env name + # Make sure _OLD_VIRTUAL_PROMPT is global + function global:_OLD_VIRTUAL_PROMPT { "" } + Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT + New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt + + function global:prompt { + Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) " + _OLD_VIRTUAL_PROMPT + } +} + +# Clear PYTHONHOME +if (Test-Path -Path Env:PYTHONHOME) { + Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME + Remove-Item -Path Env:PYTHONHOME +} + +# Add the venv to the PATH +Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH +$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH" diff --git a/env/bin/activate b/env/bin/activate new file mode 100644 index 00000000..86b47f10 --- /dev/null +++ b/env/bin/activate @@ -0,0 +1,76 @@ +# This file must be used with "source bin/activate" *from bash* +# you cannot run it directly + +deactivate () { + # reset old environment variables + if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then + PATH="${_OLD_VIRTUAL_PATH:-}" + export PATH + unset _OLD_VIRTUAL_PATH + fi + if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then + PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}" + export PYTHONHOME + unset _OLD_VIRTUAL_PYTHONHOME + fi + + # This should detect bash and zsh, which have a hash command that must + # be called to get it to forget past commands. Without forgetting + # past commands the $PATH changes we made may not be respected + if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then + hash -r + fi + + if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then + PS1="${_OLD_VIRTUAL_PS1:-}" + export PS1 + unset _OLD_VIRTUAL_PS1 + fi + + unset VIRTUAL_ENV + if [ ! "${1:-}" = "nondestructive" ] ; then + # Self destruct! + unset -f deactivate + fi +} + +# unset irrelevant variables +deactivate nondestructive + +VIRTUAL_ENV="/mnt/c/Users/elija/Documents/projects/horus-compile/env" +export VIRTUAL_ENV + +_OLD_VIRTUAL_PATH="$PATH" +PATH="$VIRTUAL_ENV/bin:$PATH" +export PATH + +# unset PYTHONHOME if set +# this will fail if PYTHONHOME is set to the empty string (which is bad anyway) +# could use `if (set -u; : $PYTHONHOME) ;` in bash +if [ -n "${PYTHONHOME:-}" ] ; then + _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}" + unset PYTHONHOME +fi + +if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then + _OLD_VIRTUAL_PS1="${PS1:-}" + if [ "x(env) " != x ] ; then + PS1="(env) ${PS1:-}" + else + if [ "`basename \"$VIRTUAL_ENV\"`" = "__" ] ; then + # special case for Aspen magic directories + # see https://aspen.io/ + PS1="[`basename \`dirname \"$VIRTUAL_ENV\"\``] $PS1" + else + PS1="(`basename \"$VIRTUAL_ENV\"`)$PS1" + fi + fi + export PS1 +fi + +# This should detect bash and zsh, which have a hash command that must +# be called to get it to forget past commands. Without forgetting +# past commands the $PATH changes we made may not be respected +if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then + hash -r +fi diff --git a/env/bin/activate.csh b/env/bin/activate.csh new file mode 100644 index 00000000..76b6ff7f --- /dev/null +++ b/env/bin/activate.csh @@ -0,0 +1,37 @@ +# This file must be used with "source bin/activate.csh" *from csh*. +# You cannot run it directly. +# Created by Davide Di Blasi . 
+# Ported to Python 3.3 venv by Andrew Svetlov + +alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; test "\!:*" != "nondestructive" && unalias deactivate' + +# Unset irrelevant variables. +deactivate nondestructive + +setenv VIRTUAL_ENV "/mnt/c/Users/elija/Documents/projects/horus-compile/env" + +set _OLD_VIRTUAL_PATH="$PATH" +setenv PATH "$VIRTUAL_ENV/bin:$PATH" + + +set _OLD_VIRTUAL_PROMPT="$prompt" + +if (! "$?VIRTUAL_ENV_DISABLE_PROMPT") then + if ("env" != "") then + set env_name = "env" + else + if (`basename "VIRTUAL_ENV"` == "__") then + # special case for Aspen magic directories + # see https://aspen.io/ + set env_name = `basename \`dirname "$VIRTUAL_ENV"\`` + else + set env_name = `basename "$VIRTUAL_ENV"` + endif + endif + set prompt = "[$env_name] $prompt" + unset env_name +endif + +alias pydoc python -m pydoc + +rehash diff --git a/env/bin/activate.fish b/env/bin/activate.fish new file mode 100644 index 00000000..ef4c2fc7 --- /dev/null +++ b/env/bin/activate.fish @@ -0,0 +1,75 @@ +# This file must be used with ". bin/activate.fish" *from fish* (http://fishshell.org) +# you cannot run it directly + +function deactivate -d "Exit virtualenv and return to normal shell environment" + # reset old environment variables + if test -n "$_OLD_VIRTUAL_PATH" + set -gx PATH $_OLD_VIRTUAL_PATH + set -e _OLD_VIRTUAL_PATH + end + if test -n "$_OLD_VIRTUAL_PYTHONHOME" + set -gx PYTHONHOME $_OLD_VIRTUAL_PYTHONHOME + set -e _OLD_VIRTUAL_PYTHONHOME + end + + if test -n "$_OLD_FISH_PROMPT_OVERRIDE" + functions -e fish_prompt + set -e _OLD_FISH_PROMPT_OVERRIDE + functions -c _old_fish_prompt fish_prompt + functions -e _old_fish_prompt + end + + set -e VIRTUAL_ENV + if test "$argv[1]" != "nondestructive" + # Self destruct! + functions -e deactivate + end +end + +# unset irrelevant variables +deactivate nondestructive + +set -gx VIRTUAL_ENV "/mnt/c/Users/elija/Documents/projects/horus-compile/env" + +set -gx _OLD_VIRTUAL_PATH $PATH +set -gx PATH "$VIRTUAL_ENV/bin" $PATH + +# unset PYTHONHOME if set +if set -q PYTHONHOME + set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME + set -e PYTHONHOME +end + +if test -z "$VIRTUAL_ENV_DISABLE_PROMPT" + # fish uses a function instead of an env var to generate the prompt. + + # save the current fish_prompt function as the function _old_fish_prompt + functions -c fish_prompt _old_fish_prompt + + # with the original prompt function renamed, we can override with our own. + function fish_prompt + # Save the return status of the last command + set -l old_status $status + + # Prompt override? + if test -n "(env) " + printf "%s%s" "(env) " (set_color normal) + else + # ...Otherwise, prepend env + set -l _checkbase (basename "$VIRTUAL_ENV") + if test $_checkbase = "__" + # special case for Aspen magic directories + # see https://aspen.io/ + printf "%s[%s]%s " (set_color -b blue white) (basename (dirname "$VIRTUAL_ENV")) (set_color normal) + else + printf "%s(%s)%s" (set_color -b blue white) (basename "$VIRTUAL_ENV") (set_color normal) + end + end + + # Restore the return status of the previous command. + echo "exit $old_status" | . 
+ _old_fish_prompt + end + + set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV" +end diff --git a/env/bin/autopep8 b/env/bin/autopep8 new file mode 100644 index 00000000..e0c61b7d --- /dev/null +++ b/env/bin/autopep8 @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from autopep8 import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/base58 b/env/bin/base58 new file mode 100644 index 00000000..cc72f8ed --- /dev/null +++ b/env/bin/base58 @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from base58.__main__ import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/black b/env/bin/black new file mode 100644 index 00000000..e9c9d16e --- /dev/null +++ b/env/bin/black @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from black import patched_main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(patched_main()) diff --git a/env/bin/blackd b/env/bin/blackd new file mode 100644 index 00000000..03dded57 --- /dev/null +++ b/env/bin/blackd @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from blackd import patched_main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(patched_main()) diff --git a/env/bin/cairo-compile b/env/bin/cairo-compile new file mode 100644 index 00000000..e3bcd212 --- /dev/null +++ b/env/bin/cairo-compile @@ -0,0 +1,10 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 + +import os +import sys + +sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../../..')) +from starkware.cairo.lang.compiler.cairo_compile import main # noqa + +if __name__ == '__main__': + sys.exit(main()) diff --git a/env/bin/cairo-format b/env/bin/cairo-format new file mode 100644 index 00000000..66222a0c --- /dev/null +++ b/env/bin/cairo-format @@ -0,0 +1,10 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 + +import os +import sys + +sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../../..')) +from starkware.cairo.lang.compiler.cairo_format import main # noqa + +if __name__ == '__main__': + sys.exit(main()) diff --git a/env/bin/cairo-hash-program b/env/bin/cairo-hash-program new file mode 100644 index 00000000..04f26eb3 --- /dev/null +++ b/env/bin/cairo-hash-program @@ -0,0 +1,10 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 + +import os +import sys + +sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../../..')) +from starkware.cairo.bootloaders.hash_program import main # noqa + +if __name__ == '__main__': + sys.exit(main()) diff --git a/env/bin/cairo-migrate b/env/bin/cairo-migrate new file mode 100644 index 00000000..bb55b174 --- /dev/null +++ b/env/bin/cairo-migrate @@ -0,0 +1,10 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 + +import os +import sys + +sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../../..')) +from starkware.cairo.lang.migrators.migrator import main # noqa + +if __name__ == '__main__': + sys.exit(main()) 
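A note on the generated `env/bin` wrappers committed above: the pip-style scripts (`autopep8`, `base58`, `black`, `blackd`, …) all instantiate one template — import the callable a package declares as its console-script entry point, normalize `argv[0]`, and exit with the callable's return value — while the `cairo-*` scripts instead splice a script-relative directory onto `sys.path` before importing `main`. A minimal sketch of how a `name = module:function` spec (for example `doesitcache = cachecontrol._cmd:main`, from the `entry_points.txt` later in this diff) resolves to the callable such a wrapper hard-codes; the helper below is illustrative, not pip's actual internals, and running it assumes `cachecontrol` is installed:

```python
# Illustrative resolution of a console-script spec such as
# "doesitcache = cachecontrol._cmd:main" (see entry_points.txt below).
# This mimics what the generated wrappers hard-code; it is not pip's internals.
import importlib
import sys


def load_entry_point(spec: str):
    """Resolve 'name = module.path:func' to the callable a wrapper would exit with."""
    _, target = (part.strip() for part in spec.split("=", 1))
    module_name, func_name = target.split(":", 1)
    return getattr(importlib.import_module(module_name), func_name)


if __name__ == "__main__":
    main = load_entry_point("doesitcache = cachecontrol._cmd:main")
    sys.exit(main())
```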
diff --git a/env/bin/cairo-reconstruct-traceback b/env/bin/cairo-reconstruct-traceback new file mode 100644 index 00000000..021e45ba --- /dev/null +++ b/env/bin/cairo-reconstruct-traceback @@ -0,0 +1,10 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 + +import os +import sys + +sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../../..')) +from starkware.cairo.lang.vm.reconstruct_traceback import main # noqa + +if __name__ == '__main__': + sys.exit(main()) diff --git a/env/bin/cairo-run b/env/bin/cairo-run new file mode 100644 index 00000000..a6d13812 --- /dev/null +++ b/env/bin/cairo-run @@ -0,0 +1,10 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 + +import os +import sys + +sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../../..')) +from starkware.cairo.lang.vm.cairo_run import main # noqa + +if __name__ == '__main__': + sys.exit(main()) diff --git a/env/bin/cairo-sharp b/env/bin/cairo-sharp new file mode 100644 index 00000000..e6ab696b --- /dev/null +++ b/env/bin/cairo-sharp @@ -0,0 +1,10 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 + +import os +import sys + +sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../../..')) +from starkware.cairo.sharp.sharp_client import main # noqa + +if __name__ == '__main__': + sys.exit(main()) diff --git a/env/bin/dmypy b/env/bin/dmypy new file mode 100644 index 00000000..6f2b4fe5 --- /dev/null +++ b/env/bin/dmypy @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from mypy.dmypy.client import console_entry +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(console_entry()) diff --git a/env/bin/doesitcache b/env/bin/doesitcache new file mode 100644 index 00000000..08a2ecb5 --- /dev/null +++ b/env/bin/doesitcache @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from cachecontrol._cmd import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/dul-receive-pack b/env/bin/dul-receive-pack new file mode 100644 index 00000000..cb57c486 --- /dev/null +++ b/env/bin/dul-receive-pack @@ -0,0 +1,30 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# dul-receive-pack - git-receive-pack in python +# Copyright (C) 2008 John Carr +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +from dulwich.porcelain import receive_pack +import os +import sys + +if len(sys.argv) < 2: + sys.stderr.write("usage: %s \n" % os.path.basename(sys.argv[0])) + sys.exit(1) + +sys.exit(receive_pack(sys.argv[1])) diff --git a/env/bin/dul-upload-pack b/env/bin/dul-upload-pack new file mode 100644 index 00000000..42c686f7 --- /dev/null +++ b/env/bin/dul-upload-pack @@ -0,0 +1,30 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# dul-upload-pack - git-upload-pack in python +# Copyright (C) 2008 John Carr +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +from dulwich.porcelain import upload_pack +import os +import sys + +if len(sys.argv) < 2: + sys.stderr.write("usage: %s \n" % os.path.basename(sys.argv[0])) + sys.exit(1) + +sys.exit(upload_pack(sys.argv[1])) diff --git a/env/bin/dulwich b/env/bin/dulwich new file mode 100644 index 00000000..5e5fd115 --- /dev/null +++ b/env/bin/dulwich @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from dulwich.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/f2py b/env/bin/f2py new file mode 100644 index 00000000..e2031990 --- /dev/null +++ b/env/bin/f2py @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from numpy.f2py.f2py2e import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/f2py3 b/env/bin/f2py3 new file mode 100644 index 00000000..e2031990 --- /dev/null +++ b/env/bin/f2py3 @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from numpy.f2py.f2py2e import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/f2py3.8 b/env/bin/f2py3.8 new file mode 100644 index 00000000..e2031990 --- /dev/null +++ b/env/bin/f2py3.8 @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from numpy.f2py.f2py2e import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/horus-compile b/env/bin/horus-compile new file mode 100644 index 00000000..71808abd --- /dev/null +++ b/env/bin/horus-compile @@ -0,0 +1,6 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +import sys +from horus.compiler.horus_compile import run + +if __name__ == '__main__': + sys.exit(run()) diff --git a/env/bin/isort 
b/env/bin/isort new file mode 100644 index 00000000..9785746a --- /dev/null +++ b/env/bin/isort @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from isort.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/isort-identify-imports b/env/bin/isort-identify-imports new file mode 100644 index 00000000..da9ccb26 --- /dev/null +++ b/env/bin/isort-identify-imports @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from isort.main import identify_imports_main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(identify_imports_main()) diff --git a/env/bin/isympy b/env/bin/isympy new file mode 100644 index 00000000..74347fbb --- /dev/null +++ b/env/bin/isympy @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from isympy import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/jsonschema b/env/bin/jsonschema new file mode 100644 index 00000000..8b6210b4 --- /dev/null +++ b/env/bin/jsonschema @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from jsonschema.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/keyring b/env/bin/keyring new file mode 100644 index 00000000..e501a1f8 --- /dev/null +++ b/env/bin/keyring @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from keyring.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/mypy b/env/bin/mypy new file mode 100644 index 00000000..d55b2924 --- /dev/null +++ b/env/bin/mypy @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from mypy.__main__ import console_entry +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(console_entry()) diff --git a/env/bin/mypyc b/env/bin/mypyc new file mode 100644 index 00000000..4fec1a7b --- /dev/null +++ b/env/bin/mypyc @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from mypyc.__main__ import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/netaddr b/env/bin/netaddr new file mode 100644 index 00000000..1646aa8f --- /dev/null +++ b/env/bin/netaddr @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from netaddr.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/normalizer b/env/bin/normalizer new file mode 100644 index 00000000..bab6ae7d --- /dev/null +++ b/env/bin/normalizer @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: 
utf-8 -*- +import re +import sys +from charset_normalizer.cli.normalizer import cli_detect +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(cli_detect()) diff --git a/env/bin/pip b/env/bin/pip new file mode 100644 index 00000000..d3abd073 --- /dev/null +++ b/env/bin/pip @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/pip3 b/env/bin/pip3 new file mode 100644 index 00000000..d3abd073 --- /dev/null +++ b/env/bin/pip3 @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/pip3.8 b/env/bin/pip3.8 new file mode 100644 index 00000000..d3abd073 --- /dev/null +++ b/env/bin/pip3.8 @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/pipdeptree b/env/bin/pipdeptree new file mode 100644 index 00000000..f0f9cf65 --- /dev/null +++ b/env/bin/pipdeptree @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from pipdeptree import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/pkginfo b/env/bin/pkginfo new file mode 100644 index 00000000..2bcf16b4 --- /dev/null +++ b/env/bin/pkginfo @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pkginfo.commandline import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/poetry b/env/bin/poetry new file mode 100644 index 00000000..88302dc7 --- /dev/null +++ b/env/bin/poetry @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from poetry.console.application import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/py.test b/env/bin/py.test new file mode 100644 index 00000000..4d3783dd --- /dev/null +++ b/env/bin/py.test @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from pytest import console_main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(console_main()) diff --git a/env/bin/pycodestyle b/env/bin/pycodestyle new file mode 100644 index 00000000..34a9798b --- /dev/null +++ b/env/bin/pycodestyle @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from pycodestyle import _main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(_main()) diff --git 
a/env/bin/pytest b/env/bin/pytest new file mode 100644 index 00000000..4d3783dd --- /dev/null +++ b/env/bin/pytest @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python +# -*- coding: utf-8 -*- +import re +import sys +from pytest import console_main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(console_main()) diff --git a/env/bin/python b/env/bin/python new file mode 120000 index 00000000..b8a0adbb --- /dev/null +++ b/env/bin/python @@ -0,0 +1 @@ +python3 \ No newline at end of file diff --git a/env/bin/python3 b/env/bin/python3 new file mode 120000 index 00000000..ae65fdaa --- /dev/null +++ b/env/bin/python3 @@ -0,0 +1 @@ +/usr/bin/python3 \ No newline at end of file diff --git a/env/bin/starknet b/env/bin/starknet new file mode 100644 index 00000000..d1d25731 --- /dev/null +++ b/env/bin/starknet @@ -0,0 +1,12 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 + +import os +import sys +import asyncio + + +sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../..')) +from starkware.starknet.cli.starknet_cli import main # noqa + +if __name__ == '__main__': + sys.exit(asyncio.run(main())) diff --git a/env/bin/starknet-compile b/env/bin/starknet-compile new file mode 100644 index 00000000..f18ddebf --- /dev/null +++ b/env/bin/starknet-compile @@ -0,0 +1,10 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 + +import os +import sys + +sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../..')) +from starkware.starknet.compiler.compile import main # noqa + +if __name__ == '__main__': + sys.exit(main()) diff --git a/env/bin/stubgen b/env/bin/stubgen new file mode 100644 index 00000000..e88a1527 --- /dev/null +++ b/env/bin/stubgen @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from mypy.stubgen import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/stubtest b/env/bin/stubtest new file mode 100644 index 00000000..44a34ec4 --- /dev/null +++ b/env/bin/stubtest @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from mypy.stubtest import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/env/bin/virtualenv b/env/bin/virtualenv new file mode 100644 index 00000000..12d66f7e --- /dev/null +++ b/env/bin/virtualenv @@ -0,0 +1,8 @@ +#!/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from virtualenv.__main__ import run_with_catch +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(run_with_catch()) diff --git a/env/bin/z3 b/env/bin/z3 new file mode 100644 index 00000000..e87ee04d Binary files /dev/null and b/env/bin/z3 differ diff --git a/env/lib/python3.8/site-packages/6b397dd64e00b5aff23d__mypyc.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/6b397dd64e00b5aff23d__mypyc.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..54f94f8c Binary files /dev/null and b/env/lib/python3.8/site-packages/6b397dd64e00b5aff23d__mypyc.cpython-38-x86_64-linux-gnu.so differ diff --git 
a/env/lib/python3.8/site-packages/74cdc94b2b24dafac2a2__mypyc.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/74cdc94b2b24dafac2a2__mypyc.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..6ffad75a Binary files /dev/null and b/env/lib/python3.8/site-packages/74cdc94b2b24dafac2a2__mypyc.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/INSTALLER b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/LICENSE.txt b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/LICENSE.txt new file mode 100644 index 00000000..d8b3b56d --- /dev/null +++ b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/LICENSE.txt @@ -0,0 +1,13 @@ +Copyright 2012-2021 Eric Larson + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/METADATA b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/METADATA new file mode 100644 index 00000000..02c07cd5 --- /dev/null +++ b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/METADATA @@ -0,0 +1,78 @@ +Metadata-Version: 2.1 +Name: CacheControl +Version: 0.12.11 +Summary: httplib2 caching for requests +Home-page: https://github.com/ionrock/cachecontrol +Author: Eric Larson +Author-email: eric@ionrock.org +License: UNKNOWN +Keywords: requests http caching web +Platform: UNKNOWN +Classifier: Development Status :: 4 - Beta +Classifier: Environment :: Web Environment +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Topic :: Internet :: WWW/HTTP +Requires-Python: >=3.6 +License-File: LICENSE.txt +Requires-Dist: requests +Requires-Dist: msgpack (>=0.5.2) +Provides-Extra: filecache +Requires-Dist: lockfile (>=0.9) ; extra == 'filecache' +Provides-Extra: redis +Requires-Dist: redis (>=2.10.5) ; extra == 'redis' + +.. + SPDX-FileCopyrightText: SPDX-FileCopyrightText: 2015 Eric Larson + + SPDX-License-Identifier: Apache-2.0 + +============== + CacheControl +============== + +.. image:: https://img.shields.io/pypi/v/cachecontrol.svg + :target: https://pypi.python.org/pypi/cachecontrol + :alt: Latest Version + +.. image:: https://travis-ci.org/ionrock/cachecontrol.png?branch=master + :target: https://travis-ci.org/ionrock/cachecontrol + +CacheControl is a port of the caching algorithms in httplib2_ for use with +requests_ session object. 
+ +It was written because httplib2's better support for caching is often +mitigated by its lack of thread safety. The same is true of requests in +terms of caching. + + +Quickstart +========== + +.. code-block:: python + + import requests + + from cachecontrol import CacheControl + + + sess = requests.session() + cached_sess = CacheControl(sess) + + response = cached_sess.get('http://google.com') + +If the URL contains any caching based headers, it will cache the +result in a simple dictionary. + +For more info, check out the docs_ + +.. _docs: http://cachecontrol.readthedocs.org/en/latest/ +.. _httplib2: https://github.com/httplib2/httplib2 +.. _requests: http://docs.python-requests.org/ + + diff --git a/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/RECORD b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/RECORD new file mode 100644 index 00000000..981a9a82 --- /dev/null +++ b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/RECORD @@ -0,0 +1,34 @@ +../../../bin/doesitcache,sha256=nzM-lTAFm0-w1UUXWgy7rW8ZXS7cRLxI4pIHYT-7PmU,267 +CacheControl-0.12.11.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +CacheControl-0.12.11.dist-info/LICENSE.txt,sha256=hu7uh74qQ_P_H1ZJb0UfaSQ5JvAl_tuwM2ZsMExMFhs,558 +CacheControl-0.12.11.dist-info/METADATA,sha256=m33i3CrYUScQ6HTJJFZpiwg8XRjusnDbM_PAXF1xd3c,2218 +CacheControl-0.12.11.dist-info/RECORD,, +CacheControl-0.12.11.dist-info/WHEEL,sha256=WzZ8cwjh8l0jtULNjYq1Hpr-WCqCRgPr--TX4P5I1Wo,110 +CacheControl-0.12.11.dist-info/entry_points.txt,sha256=HjCekaRCv8kfNqP5WehMR29IWxIA5VrhoOeKrCykCLc,56 +CacheControl-0.12.11.dist-info/top_level.txt,sha256=vGYWzpbe3h6gkakV4f7iCK2x3KyK3oMkV5pe5v25-d4,13 +cachecontrol/__init__.py,sha256=hrxlv3q7upsfyMw8k3gQ9vagBax1pYHSGGqYlZ0Zk0M,465 +cachecontrol/__pycache__/__init__.cpython-38.pyc,, +cachecontrol/__pycache__/_cmd.cpython-38.pyc,, +cachecontrol/__pycache__/adapter.cpython-38.pyc,, +cachecontrol/__pycache__/cache.cpython-38.pyc,, +cachecontrol/__pycache__/compat.cpython-38.pyc,, +cachecontrol/__pycache__/controller.cpython-38.pyc,, +cachecontrol/__pycache__/filewrapper.cpython-38.pyc,, +cachecontrol/__pycache__/heuristics.cpython-38.pyc,, +cachecontrol/__pycache__/serialize.cpython-38.pyc,, +cachecontrol/__pycache__/wrapper.cpython-38.pyc,, +cachecontrol/_cmd.py,sha256=HjFJdGgPOLsvS_5e8BvqYrweXJj1gR7dSsqCB1X24uw,1326 +cachecontrol/adapter.py,sha256=du8CsHKttAWL9-pWmSvyNDVzHrH-qfiSOgo6fcanM-0,5021 +cachecontrol/cache.py,sha256=Tty45fOjH40fColTGkqKQvQQmbYsMpk-nCyfLcv2vG4,1535 +cachecontrol/caches/__init__.py,sha256=h-1cUmOz6mhLsjTjOrJ8iPejpGdLCyG4lzTftfGZvLg,242 +cachecontrol/caches/__pycache__/__init__.cpython-38.pyc,, +cachecontrol/caches/__pycache__/file_cache.cpython-38.pyc,, +cachecontrol/caches/__pycache__/redis_cache.cpython-38.pyc,, +cachecontrol/caches/file_cache.py,sha256=GpexcE29LoY4MaZwPUTcUBZaDdcsjqyLxZFznk8Hbr4,5271 +cachecontrol/caches/redis_cache.py,sha256=bGKAU0IOcJUFmU_blBI3w6zKj1L4IyfDPmLNyzjp_y4,1021 +cachecontrol/compat.py,sha256=JOVKyIibp8nNq3jAbv7nXBsNOtXHVEKPh5u8r2qLGVo,730 +cachecontrol/controller.py,sha256=6jyT4j4LFsIvfDeSFD6xKK15RCOInwE2Nr8Ug7ICtww,16404 +cachecontrol/filewrapper.py,sha256=X4BAQOO26GNOR7nH_fhTzAfeuct2rBQcx_15MyFBpcs,3946 +cachecontrol/heuristics.py,sha256=8kAyuZLSCyEIgQr6vbUwfhpqg9ows4mM0IV6DWazevI,4154 +cachecontrol/serialize.py,sha256=qbMVbnUSDuQRlJWTn6-CKrXYShaUn9dSigtvtAaI3xA,7076 +cachecontrol/wrapper.py,sha256=X3-KMZ20Ho3VtqyVaXclpeQpFzokR5NE8tZSfvKVaB8,774 diff --git 
a/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/WHEEL b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/WHEEL new file mode 100644 index 00000000..b733a60d --- /dev/null +++ b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.0) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/entry_points.txt b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/entry_points.txt new file mode 100644 index 00000000..7c31574e --- /dev/null +++ b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[console_scripts] +doesitcache = cachecontrol._cmd:main + diff --git a/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/top_level.txt b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/top_level.txt new file mode 100644 index 00000000..af37ac62 --- /dev/null +++ b/env/lib/python3.8/site-packages/CacheControl-0.12.11.dist-info/top_level.txt @@ -0,0 +1 @@ +cachecontrol diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/AES.py b/env/lib/python3.8/site-packages/Crypto/Cipher/AES.py new file mode 100644 index 00000000..13bd7eaa --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/AES.py @@ -0,0 +1,250 @@ +# -*- coding: utf-8 -*- +# +# Cipher/AES.py : AES +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== +""" +Module's constants for the modes of operation supported with AES: + +:var MODE_ECB: :ref:`Electronic Code Book (ECB) ` +:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) ` +:var MODE_CFB: :ref:`Cipher FeedBack (CFB) ` +:var MODE_OFB: :ref:`Output FeedBack (OFB) ` +:var MODE_CTR: :ref:`CounTer Mode (CTR) ` +:var MODE_OPENPGP: :ref:`OpenPGP Mode ` +:var MODE_CCM: :ref:`Counter with CBC-MAC (CCM) Mode ` +:var MODE_EAX: :ref:`EAX Mode ` +:var MODE_GCM: :ref:`Galois Counter Mode (GCM) ` +:var MODE_SIV: :ref:`Syntethic Initialization Vector (SIV) ` +:var MODE_OCB: :ref:`Offset Code Book (OCB) ` +""" + +import sys + +from Crypto.Cipher import _create_cipher +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + c_size_t, c_uint8_ptr) + +from Crypto.Util import _cpu_features +from Crypto.Random import get_random_bytes + + +_cproto = """ + int AES_start_operation(const uint8_t key[], + size_t key_len, + void **pResult); + int AES_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int AES_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int AES_stop_operation(void *state); + """ + + +# Load portable AES +_raw_aes_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_aes", + _cproto) + +# Try to load AES with AES NI instructions +try: + _raw_aesni_lib = None + if _cpu_features.have_aes_ni(): + _raw_aesni_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_aesni", + _cproto.replace("AES", + "AESNI")) +# _raw_aesni may not have been compiled in +except OSError: + pass + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a handle to a low-level + base cipher. It will absorb named parameters in the process.""" + + use_aesni = dict_parameters.pop("use_aesni", True) + + try: + key = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + if len(key) not in key_size: + raise ValueError("Incorrect AES key length (%d bytes)" % len(key)) + + if use_aesni and _raw_aesni_lib: + start_operation = _raw_aesni_lib.AESNI_start_operation + stop_operation = _raw_aesni_lib.AESNI_stop_operation + else: + start_operation = _raw_aes_lib.AES_start_operation + stop_operation = _raw_aes_lib.AES_stop_operation + + cipher = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + cipher.address_of()) + if result: + raise ValueError("Error %X while instantiating the AES cipher" + % result) + return SmartPointer(cipher.get(), stop_operation) + + +def _derive_Poly1305_key_pair(key, nonce): + """Derive a tuple (r, s, nonce) for a Poly1305 MAC. + + If nonce is ``None``, a new 16-byte nonce is generated. + """ + + if len(key) != 32: + raise ValueError("Poly1305 with AES requires a 32-byte key") + + if nonce is None: + nonce = get_random_bytes(16) + elif len(nonce) != 16: + raise ValueError("Poly1305 with AES requires a 16-byte nonce") + + s = new(key[:16], MODE_ECB).encrypt(nonce) + return key[16:], s, nonce + + +def new(key, mode, *args, **kwargs): + """Create a new AES cipher. + + :param key: + The secret key to use in the symmetric cipher. + + It must be 16, 24 or 32 bytes long (respectively for *AES-128*, + *AES-192* or *AES-256*). + + For ``MODE_SIV`` only, it doubles to 32, 48, or 64 bytes. + :type key: bytes/bytearray/memoryview + + :param mode: + The chaining mode to use for encryption or decryption. + If in doubt, use ``MODE_EAX``. 
+ :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 16 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 16 bytes long for encryption + and 18 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_CCM``, ``MODE_EAX``, ``MODE_GCM``, + ``MODE_SIV``, ``MODE_OCB``, and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key (except possibly for ``MODE_SIV``, see below). + + For ``MODE_EAX``, ``MODE_GCM`` and ``MODE_SIV`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CCM``, its length must be in the range **[7..13]**. + Bear in mind that with CCM there is a trade-off between nonce + length and maximum message size. Recommendation: **11** bytes. + + For ``MODE_OCB``, its length must be in the range **[1..15]** + (recommended: **15**). + + For ``MODE_CTR``, its length must be in the range **[0..15]** + (recommended: **8**). + + For ``MODE_SIV``, the nonce is optional, if it is not specified, + then no nonce is being used, which renders the encryption + deterministic. + + If not provided, for modes other than ``MODE_SIV```, a random + byte string of the recommended length is used (you must then + read its value with the :attr:`nonce` attribute). + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``, ``MODE_GCM``, ``MODE_OCB``, ``MODE_CCM``) + Length of the authentication tag, in bytes. + + It must be even and in the range **[4..16]**. + The recommended value (and the default, if not specified) is **16**. + + * **msg_len** : (*integer*) -- + (Only ``MODE_CCM``). Length of the message to (de)cipher. + If not specified, ``encrypt`` must be called with the entire message. + Similarly, ``decrypt`` can only be called once. + + * **assoc_len** : (*integer*) -- + (Only ``MODE_CCM``). Length of the associated data. + If not specified, all associated data is buffered internally, + which may represent a problem for very large messages. + + * **initial_value** : (*integer* or *bytes/bytearray/memoryview*) -- + (Only ``MODE_CTR``). + The initial value for the counter. If not present, the cipher will + start counting from 0. The value is incremented by one for each block. + The counter number is encoded in big endian mode. + + * **counter** : (*object*) -- + Instance of ``Crypto.Util.Counter``, which allows full customization + of the counter block. This parameter is incompatible to both ``nonce`` + and ``initial_value``. + + * **use_aesni** : (*boolean*) -- + Use Intel AES-NI hardware extensions (default: use if available). + + :Return: an AES object, of the applicable mode. 
+ """ + + kwargs["add_aes_modes"] = True + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_CCM = 8 +MODE_EAX = 9 +MODE_SIV = 10 +MODE_GCM = 11 +MODE_OCB = 12 + +# Size of a data block (in bytes) +block_size = 16 +# Size of a key (in bytes) +key_size = (16, 24, 32) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/AES.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/AES.pyi new file mode 100644 index 00000000..8f655cf6 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/AES.pyi @@ -0,0 +1,47 @@ +from typing import Union, Tuple, Optional, Dict + +from Crypto.Cipher._mode_ecb import EcbMode +from Crypto.Cipher._mode_cbc import CbcMode +from Crypto.Cipher._mode_cfb import CfbMode +from Crypto.Cipher._mode_ofb import OfbMode +from Crypto.Cipher._mode_ctr import CtrMode +from Crypto.Cipher._mode_openpgp import OpenPgpMode +from Crypto.Cipher._mode_ccm import CcmMode +from Crypto.Cipher._mode_eax import EaxMode +from Crypto.Cipher._mode_gcm import GcmMode +from Crypto.Cipher._mode_siv import SivMode +from Crypto.Cipher._mode_ocb import OcbMode + +AESMode = int + +MODE_ECB: AESMode +MODE_CBC: AESMode +MODE_CFB: AESMode +MODE_OFB: AESMode +MODE_CTR: AESMode +MODE_OPENPGP: AESMode +MODE_CCM: AESMode +MODE_EAX: AESMode +MODE_GCM: AESMode +MODE_SIV: AESMode +MODE_OCB: AESMode + +Buffer = Union[bytes, bytearray, memoryview] + +def new(key: Buffer, + mode: AESMode, + iv : Buffer = ..., + IV : Buffer = ..., + nonce : Buffer = ..., + segment_size : int = ..., + mac_len : int = ..., + assoc_len : int = ..., + initial_value : Union[int, Buffer] = ..., + counter : Dict = ..., + use_aesni : bool = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, + OpenPgpMode, CcmMode, EaxMode, GcmMode, + SivMode, OcbMode]: ... + +block_size: int +key_size: Tuple[int, int, int] diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/ARC2.py b/env/lib/python3.8/site-packages/Crypto/Cipher/ARC2.py new file mode 100644 index 00000000..0ba7e335 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/ARC2.py @@ -0,0 +1,175 @@ +# -*- coding: utf-8 -*- +# +# Cipher/ARC2.py : ARC2.py +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== +""" +Module's constants for the modes of operation supported with ARC2: + +:var MODE_ECB: :ref:`Electronic Code Book (ECB) ` +:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) ` +:var MODE_CFB: :ref:`Cipher FeedBack (CFB) ` +:var MODE_OFB: :ref:`Output FeedBack (OFB) ` +:var MODE_CTR: :ref:`CounTer Mode (CTR) ` +:var MODE_OPENPGP: :ref:`OpenPGP Mode ` +:var MODE_EAX: :ref:`EAX Mode ` +""" + +import sys + +from Crypto.Cipher import _create_cipher +from Crypto.Util.py3compat import byte_string +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + c_size_t, c_uint8_ptr) + +_raw_arc2_lib = load_pycryptodome_raw_lib( + "Crypto.Cipher._raw_arc2", + """ + int ARC2_start_operation(const uint8_t key[], + size_t key_len, + size_t effective_key_len, + void **pResult); + int ARC2_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int ARC2_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int ARC2_stop_operation(void *state); + """ + ) + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a handle to a low-level + base cipher. It will absorb named parameters in the process.""" + + try: + key = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + effective_keylen = dict_parameters.pop("effective_keylen", 1024) + + if len(key) not in key_size: + raise ValueError("Incorrect ARC2 key length (%d bytes)" % len(key)) + + if not (40 <= effective_keylen <= 1024): + raise ValueError("'effective_key_len' must be at least 40 and no larger than 1024 " + "(not %d)" % effective_keylen) + + start_operation = _raw_arc2_lib.ARC2_start_operation + stop_operation = _raw_arc2_lib.ARC2_stop_operation + + cipher = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + c_size_t(effective_keylen), + cipher.address_of()) + if result: + raise ValueError("Error %X while instantiating the ARC2 cipher" + % result) + + return SmartPointer(cipher.get(), stop_operation) + + +def new(key, mode, *args, **kwargs): + """Create a new RC2 cipher. + + :param key: + The secret key to use in the symmetric cipher. + Its length can vary from 5 to 128 bytes; the actual search space + (and the cipher strength) can be reduced with the ``effective_keylen`` parameter. + :type key: bytes, bytearray, memoryview + + :param mode: + The chaining mode to use for encryption or decryption. + :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 8 bytes long for encryption + and 10 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key. + + For ``MODE_EAX`` there are no + restrictions on its length (recommended: **16** bytes). 
+ + For ``MODE_CTR``, its length must be in the range **[0..7]**. + + If not provided for ``MODE_EAX``, a random byte string is generated (you + can read it back via the ``nonce`` attribute). + + * **effective_keylen** (*integer*) -- + Optional. Maximum strength in bits of the actual key used by the ARC2 algorithm. + If the supplied ``key`` parameter is longer (in bits) of the value specified + here, it will be weakened to match it. + If not specified, no limitation is applied. + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``) + Length of the authentication tag, in bytes. + It must be no longer than 8 (default). + + * **initial_value** : (*integer*) -- + (Only ``MODE_CTR``). The initial value for the counter within + the counter block. By default it is **0**. + + :Return: an ARC2 object, of the applicable mode. + """ + + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_EAX = 9 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = range(5, 128 + 1) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/ARC2.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/ARC2.pyi new file mode 100644 index 00000000..055c4240 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/ARC2.pyi @@ -0,0 +1,35 @@ +from typing import Union, Dict, Iterable + +from Crypto.Cipher._mode_ecb import EcbMode +from Crypto.Cipher._mode_cbc import CbcMode +from Crypto.Cipher._mode_cfb import CfbMode +from Crypto.Cipher._mode_ofb import OfbMode +from Crypto.Cipher._mode_ctr import CtrMode +from Crypto.Cipher._mode_openpgp import OpenPgpMode +from Crypto.Cipher._mode_eax import EaxMode + +ARC2Mode = int + +MODE_ECB: ARC2Mode +MODE_CBC: ARC2Mode +MODE_CFB: ARC2Mode +MODE_OFB: ARC2Mode +MODE_CTR: ARC2Mode +MODE_OPENPGP: ARC2Mode +MODE_EAX: ARC2Mode + +Buffer = Union[bytes, bytearray, memoryview] + +def new(key: Buffer, + mode: ARC2Mode, + iv : Buffer = ..., + IV : Buffer = ..., + nonce : Buffer = ..., + segment_size : int = ..., + mac_len : int = ..., + initial_value : Union[int, Buffer] = ..., + counter : Dict = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, OpenPgpMode]: ... + +block_size: int +key_size: Iterable[int] diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/ARC4.py b/env/lib/python3.8/site-packages/Crypto/Cipher/ARC4.py new file mode 100644 index 00000000..7150ea6e --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/ARC4.py @@ -0,0 +1,137 @@ +# -*- coding: utf-8 -*- +# +# Cipher/ARC4.py : ARC4 +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+from Crypto.Util.py3compat import b
+
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer,
+                                  create_string_buffer, get_raw_buffer,
+                                  SmartPointer, c_size_t, c_uint8_ptr)
+
+
+_raw_arc4_lib = load_pycryptodome_raw_lib("Crypto.Cipher._ARC4", """
+                    int ARC4_stream_encrypt(void *rc4State, const uint8_t in[],
+                                            uint8_t out[], size_t len);
+                    int ARC4_stream_init(uint8_t *key, size_t keylen,
+                                         void **pRc4State);
+                    int ARC4_stream_destroy(void *rc4State);
+                    """)
+
+
+class ARC4Cipher:
+    """ARC4 cipher object. Do not create it directly. Use
+    :func:`Crypto.Cipher.ARC4.new` instead.
+    """
+
+    def __init__(self, key, *args, **kwargs):
+        """Initialize an ARC4 cipher object.
+
+        See also `new()` at the module level."""
+
+        if len(args) > 0:
+            ndrop = args[0]
+            args = args[1:]
+        else:
+            ndrop = kwargs.pop('drop', 0)
+
+        if len(key) not in key_size:
+            raise ValueError("Incorrect ARC4 key length (%d bytes)" %
+                             len(key))
+
+        self._state = VoidPointer()
+        result = _raw_arc4_lib.ARC4_stream_init(c_uint8_ptr(key),
+                                                c_size_t(len(key)),
+                                                self._state.address_of())
+        if result != 0:
+            raise ValueError("Error %d while creating the ARC4 cipher"
+                             % result)
+        self._state = SmartPointer(self._state.get(),
+                                   _raw_arc4_lib.ARC4_stream_destroy)
+
+        if ndrop > 0:
+            # This is OK even if the cipher is used for decryption,
+            # since encrypt and decrypt are actually the same thing
+            # with ARC4.
+            self.encrypt(b'\x00' * ndrop)
+
+        self.block_size = 1
+        self.key_size = len(key)
+
+    def encrypt(self, plaintext):
+        """Encrypt a piece of data.
+
+        :param plaintext: The data to encrypt, of any size.
+        :type plaintext: bytes, bytearray, memoryview
+        :returns: the encrypted byte string, of the same length as the
+          plaintext.
+        """
+
+        ciphertext = create_string_buffer(len(plaintext))
+        result = _raw_arc4_lib.ARC4_stream_encrypt(self._state.get(),
+                                                   c_uint8_ptr(plaintext),
+                                                   ciphertext,
+                                                   c_size_t(len(plaintext)))
+        if result:
+            raise ValueError("Error %d while encrypting with RC4" % result)
+        return get_raw_buffer(ciphertext)
+
+    def decrypt(self, ciphertext):
+        """Decrypt a piece of data.
+
+        :param ciphertext: The data to decrypt, of any size.
+        :type ciphertext: bytes, bytearray, memoryview
+        :returns: the decrypted byte string, of the same length as the
+          ciphertext.
+        """
+
+        try:
+            return self.encrypt(ciphertext)
+        except ValueError as e:
+            raise ValueError(str(e).replace("enc", "dec"))
+
+
+def new(key, *args, **kwargs):
+    """Create a new ARC4 cipher.
+
+    :param key:
+        The secret key to use in the symmetric cipher.
+        Its length must be in the range ``[5..256]``.
+        The recommended length is 16 bytes.
+    :type key: bytes, bytearray, memoryview
+
+    :Keyword Arguments:
+        *   *drop* (``integer``) --
+            The number of bytes to discard from the initial part of
+            the keystream. This initial part has been found to be
+            distinguishable from random data (when it should not be)
+            and to be correlated with the key.
+
+            The recommended value is 3072_ bytes. The default value is 0.
+
+    :Return: an `ARC4Cipher` object
+
+    ..
_3072: http://eprint.iacr.org/2002/067.pdf + """ + return ARC4Cipher(key, *args, **kwargs) + +# Size of a data block (in bytes) +block_size = 1 +# Size of a key (in bytes) +key_size = range(5, 256+1) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/ARC4.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/ARC4.pyi new file mode 100644 index 00000000..2e75d6f8 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/ARC4.pyi @@ -0,0 +1,16 @@ +from typing import Any, Union, Iterable + +Buffer = Union[bytes, bytearray, memoryview] + +class ARC4Cipher: + block_size: int + key_size: int + + def __init__(self, key: Buffer, *args: Any, **kwargs: Any) -> None: ... + def encrypt(self, plaintext: Buffer) -> bytes: ... + def decrypt(self, ciphertext: Buffer) -> bytes: ... + +def new(key: Buffer, drop : int = ...) -> ARC4Cipher: ... + +block_size: int +key_size: Iterable[int] diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/Blowfish.py b/env/lib/python3.8/site-packages/Crypto/Cipher/Blowfish.py new file mode 100644 index 00000000..6005ffe2 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/Blowfish.py @@ -0,0 +1,159 @@ +# -*- coding: utf-8 -*- +# +# Cipher/Blowfish.py : Blowfish +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== +""" +Module's constants for the modes of operation supported with Blowfish: + +:var MODE_ECB: :ref:`Electronic Code Book (ECB) ` +:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) ` +:var MODE_CFB: :ref:`Cipher FeedBack (CFB) ` +:var MODE_OFB: :ref:`Output FeedBack (OFB) ` +:var MODE_CTR: :ref:`CounTer Mode (CTR) ` +:var MODE_OPENPGP: :ref:`OpenPGP Mode ` +:var MODE_EAX: :ref:`EAX Mode ` +""" + +import sys + +from Crypto.Cipher import _create_cipher +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, c_size_t, + c_uint8_ptr) + +_raw_blowfish_lib = load_pycryptodome_raw_lib( + "Crypto.Cipher._raw_blowfish", + """ + int Blowfish_start_operation(const uint8_t key[], + size_t key_len, + void **pResult); + int Blowfish_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int Blowfish_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int Blowfish_stop_operation(void *state); + """ + ) + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a smart pointer to + a low-level base cipher. 
It will absorb named parameters in + the process.""" + + try: + key = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + if len(key) not in key_size: + raise ValueError("Incorrect Blowfish key length (%d bytes)" % len(key)) + + start_operation = _raw_blowfish_lib.Blowfish_start_operation + stop_operation = _raw_blowfish_lib.Blowfish_stop_operation + + void_p = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + void_p.address_of()) + if result: + raise ValueError("Error %X while instantiating the Blowfish cipher" + % result) + return SmartPointer(void_p.get(), stop_operation) + + +def new(key, mode, *args, **kwargs): + """Create a new Blowfish cipher + + :param key: + The secret key to use in the symmetric cipher. + Its length can vary from 5 to 56 bytes. + :type key: bytes, bytearray, memoryview + + :param mode: + The chaining mode to use for encryption or decryption. + :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 8 bytes long for encryption + and 10 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key. + + For ``MODE_EAX`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CTR``, its length must be in the range **[0..7]**. + + If not provided for ``MODE_EAX``, a random byte string is generated (you + can read it back via the ``nonce`` attribute). + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``) + Length of the authentication tag, in bytes. + It must be no longer than 8 (default). + + * **initial_value** : (*integer*) -- + (Only ``MODE_CTR``). The initial value for the counter within + the counter block. By default it is **0**. + + :Return: a Blowfish object, of the applicable mode. 
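+
+    Example (an illustrative sketch, not part of the original
+    documentation; ``Crypto.Util.Padding`` is assumed to be available
+    for block alignment):
+
+    >>> from Crypto.Cipher import Blowfish
+    >>> from Crypto.Random import get_random_bytes
+    >>> from Crypto.Util.Padding import pad, unpad
+    >>> key = get_random_bytes(16)
+    >>> cipher = Blowfish.new(key, Blowfish.MODE_CBC)   # random 8-byte IV
+    >>> ct = cipher.encrypt(pad(b"message", Blowfish.block_size))
+    >>> dec = Blowfish.new(key, Blowfish.MODE_CBC, iv=cipher.iv)
+    >>> unpad(dec.decrypt(ct), Blowfish.block_size)
+    b'message'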
+ """ + + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_EAX = 9 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = range(4, 56 + 1) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/Blowfish.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/Blowfish.pyi new file mode 100644 index 00000000..eff9da9c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/Blowfish.pyi @@ -0,0 +1,35 @@ +from typing import Union, Dict, Iterable + +from Crypto.Cipher._mode_ecb import EcbMode +from Crypto.Cipher._mode_cbc import CbcMode +from Crypto.Cipher._mode_cfb import CfbMode +from Crypto.Cipher._mode_ofb import OfbMode +from Crypto.Cipher._mode_ctr import CtrMode +from Crypto.Cipher._mode_openpgp import OpenPgpMode +from Crypto.Cipher._mode_eax import EaxMode + +BlowfishMode = int + +MODE_ECB: BlowfishMode +MODE_CBC: BlowfishMode +MODE_CFB: BlowfishMode +MODE_OFB: BlowfishMode +MODE_CTR: BlowfishMode +MODE_OPENPGP: BlowfishMode +MODE_EAX: BlowfishMode + +Buffer = Union[bytes, bytearray, memoryview] + +def new(key: Buffer, + mode: BlowfishMode, + iv : Buffer = ..., + IV : Buffer = ..., + nonce : Buffer = ..., + segment_size : int = ..., + mac_len : int = ..., + initial_value : Union[int, Buffer] = ..., + counter : Dict = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, OpenPgpMode]: ... + +block_size: int +key_size: Iterable[int] diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/CAST.py b/env/lib/python3.8/site-packages/Crypto/Cipher/CAST.py new file mode 100644 index 00000000..c7e82c1c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/CAST.py @@ -0,0 +1,159 @@ +# -*- coding: utf-8 -*- +# +# Cipher/CAST.py : CAST +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== +""" +Module's constants for the modes of operation supported with CAST: + +:var MODE_ECB: :ref:`Electronic Code Book (ECB) ` +:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) ` +:var MODE_CFB: :ref:`Cipher FeedBack (CFB) ` +:var MODE_OFB: :ref:`Output FeedBack (OFB) ` +:var MODE_CTR: :ref:`CounTer Mode (CTR) ` +:var MODE_OPENPGP: :ref:`OpenPGP Mode ` +:var MODE_EAX: :ref:`EAX Mode ` +""" + +import sys + +from Crypto.Cipher import _create_cipher +from Crypto.Util.py3compat import byte_string +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + c_size_t, c_uint8_ptr) + +_raw_cast_lib = load_pycryptodome_raw_lib( + "Crypto.Cipher._raw_cast", + """ + int CAST_start_operation(const uint8_t key[], + size_t key_len, + void **pResult); + int CAST_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CAST_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CAST_stop_operation(void *state); + """) + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a handle to a low-level + base cipher. It will absorb named parameters in the process.""" + + try: + key = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + if len(key) not in key_size: + raise ValueError("Incorrect CAST key length (%d bytes)" % len(key)) + + start_operation = _raw_cast_lib.CAST_start_operation + stop_operation = _raw_cast_lib.CAST_stop_operation + + cipher = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + cipher.address_of()) + if result: + raise ValueError("Error %X while instantiating the CAST cipher" + % result) + + return SmartPointer(cipher.get(), stop_operation) + + +def new(key, mode, *args, **kwargs): + """Create a new CAST cipher + + :param key: + The secret key to use in the symmetric cipher. + Its length can vary from 5 to 16 bytes. + :type key: bytes, bytearray, memoryview + + :param mode: + The chaining mode to use for encryption or decryption. + :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 8 bytes long for encryption + and 10 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key. + + For ``MODE_EAX`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CTR``, its length must be in the range **[0..7]**. + + If not provided for ``MODE_EAX``, a random byte string is generated (you + can read it back via the ``nonce`` attribute). + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. 
+ + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``) + Length of the authentication tag, in bytes. + It must be no longer than 8 (default). + + * **initial_value** : (*integer*) -- + (Only ``MODE_CTR``). The initial value for the counter within + the counter block. By default it is **0**. + + :Return: a CAST object, of the applicable mode. + """ + + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_EAX = 9 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = range(5, 16 + 1) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/CAST.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/CAST.pyi new file mode 100644 index 00000000..a0cb6af3 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/CAST.pyi @@ -0,0 +1,35 @@ +from typing import Union, Dict, Iterable + +from Crypto.Cipher._mode_ecb import EcbMode +from Crypto.Cipher._mode_cbc import CbcMode +from Crypto.Cipher._mode_cfb import CfbMode +from Crypto.Cipher._mode_ofb import OfbMode +from Crypto.Cipher._mode_ctr import CtrMode +from Crypto.Cipher._mode_openpgp import OpenPgpMode +from Crypto.Cipher._mode_eax import EaxMode + +CASTMode = int + +MODE_ECB: CASTMode +MODE_CBC: CASTMode +MODE_CFB: CASTMode +MODE_OFB: CASTMode +MODE_CTR: CASTMode +MODE_OPENPGP: CASTMode +MODE_EAX: CASTMode + +Buffer = Union[bytes, bytearray, memoryview] + +def new(key: Buffer, + mode: CASTMode, + iv : Buffer = ..., + IV : Buffer = ..., + nonce : Buffer = ..., + segment_size : int = ..., + mac_len : int = ..., + initial_value : Union[int, Buffer] = ..., + counter : Dict = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, OpenPgpMode]: ... + +block_size: int +key_size : Iterable[int] diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20.py b/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20.py new file mode 100644 index 00000000..9bd2252c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20.py @@ -0,0 +1,287 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +from Crypto.Random import get_random_bytes + +from Crypto.Util.py3compat import _copy_bytes +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + create_string_buffer, + get_raw_buffer, VoidPointer, + SmartPointer, c_size_t, + c_uint8_ptr, c_ulong, + is_writeable_buffer) + +_raw_chacha20_lib = load_pycryptodome_raw_lib("Crypto.Cipher._chacha20", + """ + int chacha20_init(void **pState, + const uint8_t *key, + size_t keySize, + const uint8_t *nonce, + size_t nonceSize); + + int chacha20_destroy(void *state); + + int chacha20_encrypt(void *state, + const uint8_t in[], + uint8_t out[], + size_t len); + + int chacha20_seek(void *state, + unsigned long block_high, + unsigned long block_low, + unsigned offset); + int hchacha20( const uint8_t key[32], + const uint8_t nonce16[16], + uint8_t subkey[32]); + """) + + +def _HChaCha20(key, nonce): + + assert(len(key) == 32) + assert(len(nonce) == 16) + + subkey = bytearray(32) + result = _raw_chacha20_lib.hchacha20( + c_uint8_ptr(key), + c_uint8_ptr(nonce), + c_uint8_ptr(subkey)) + if result: + raise ValueError("Error %d when deriving subkey with HChaCha20" % result) + + return subkey + + +class ChaCha20Cipher(object): + """ChaCha20 (or XChaCha20) cipher object. + Do not create it directly. Use :py:func:`new` instead. + + :var nonce: The nonce with length 8, 12 or 24 bytes + :vartype nonce: bytes + """ + + block_size = 1 + + def __init__(self, key, nonce): + """Initialize a ChaCha20/XChaCha20 cipher object + + See also `new()` at the module level.""" + + self.nonce = _copy_bytes(None, None, nonce) + + # XChaCha20 requires a key derivation with HChaCha20 + # See 2.3 in https://tools.ietf.org/html/draft-arciszewski-xchacha-03 + if len(nonce) == 24: + key = _HChaCha20(key, nonce[:16]) + nonce = b'\x00' * 4 + nonce[16:] + self._name = "XChaCha20" + else: + self._name = "ChaCha20" + nonce = self.nonce + + self._next = ( self.encrypt, self.decrypt ) + + self._state = VoidPointer() + result = _raw_chacha20_lib.chacha20_init( + self._state.address_of(), + c_uint8_ptr(key), + c_size_t(len(key)), + nonce, + c_size_t(len(nonce))) + if result: + raise ValueError("Error %d instantiating a %s cipher" % (result, + self._name)) + self._state = SmartPointer(self._state.get(), + _raw_chacha20_lib.chacha20_destroy) + + def encrypt(self, plaintext, output=None): + """Encrypt a piece of data. + + Args: + plaintext(bytes/bytearray/memoryview): The data to encrypt, of any size. + Keyword Args: + output(bytes/bytearray/memoryview): The location where the ciphertext + is written to. If ``None``, the ciphertext is returned. + Returns: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. 
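+
+        A minimal sketch of the in-place variant (illustrative, not
+        part of the original documentation):
+
+        >>> from Crypto.Cipher import ChaCha20
+        >>> from Crypto.Random import get_random_bytes
+        >>> cipher = ChaCha20.new(key=get_random_bytes(32))
+        >>> data = b"stream me"
+        >>> buf = bytearray(len(data))
+        >>> cipher.encrypt(data, output=buf)    # ciphertext written into buf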
+        """
+
+        if self.encrypt not in self._next:
+            raise TypeError("Cipher object can only be used for decryption")
+        self._next = ( self.encrypt, )
+        return self._encrypt(plaintext, output)
+
+    def _encrypt(self, plaintext, output):
+        """Encrypt without FSM checks"""
+
+        if output is None:
+            ciphertext = create_string_buffer(len(plaintext))
+        else:
+            ciphertext = output
+
+            if not is_writeable_buffer(output):
+                raise TypeError("output must be a bytearray or a writeable memoryview")
+
+            if len(plaintext) != len(output):
+                raise ValueError("output must have the same length as the input"
+                                 " (%d bytes)" % len(plaintext))
+
+        result = _raw_chacha20_lib.chacha20_encrypt(
+                                         self._state.get(),
+                                         c_uint8_ptr(plaintext),
+                                         c_uint8_ptr(ciphertext),
+                                         c_size_t(len(plaintext)))
+        if result:
+            raise ValueError("Error %d while encrypting with %s" % (result, self._name))
+
+        if output is None:
+            return get_raw_buffer(ciphertext)
+        else:
+            return None
+
+    def decrypt(self, ciphertext, output=None):
+        """Decrypt a piece of data.
+
+        Args:
+          ciphertext(bytes/bytearray/memoryview): The data to decrypt, of any size.
+        Keyword Args:
+          output(bytes/bytearray/memoryview): The location where the plaintext
+            is written to. If ``None``, the plaintext is returned.
+        Returns:
+          If ``output`` is ``None``, the plaintext is returned as ``bytes``.
+          Otherwise, ``None``.
+        """
+
+        if self.decrypt not in self._next:
+            raise TypeError("Cipher object can only be used for encryption")
+        self._next = ( self.decrypt, )
+
+        try:
+            return self._encrypt(ciphertext, output)
+        except ValueError as e:
+            raise ValueError(str(e).replace("enc", "dec"))
+
+    def seek(self, position):
+        """Seek to a certain position in the key stream.
+
+        Args:
+          position (integer):
+            The absolute position within the key stream, in bytes.
+        """
+
+        position, offset = divmod(position, 64)
+        block_low = position & 0xFFFFFFFF
+        block_high = position >> 32
+
+        result = _raw_chacha20_lib.chacha20_seek(
+                                         self._state.get(),
+                                         c_ulong(block_high),
+                                         c_ulong(block_low),
+                                         offset
+                                         )
+        if result:
+            raise ValueError("Error %d while seeking with %s" % (result, self._name))
+
+
+def _derive_Poly1305_key_pair(key, nonce):
+    """Derive a tuple (r, s, nonce) for a Poly1305 MAC.
+
+    If nonce is ``None``, a new 12-byte nonce is generated.
+    """
+
+    if len(key) != 32:
+        raise ValueError("Poly1305 with ChaCha20 requires a 32-byte key")
+
+    if nonce is None:
+        padded_nonce = nonce = get_random_bytes(12)
+    elif len(nonce) == 8:
+        # See RFC7539, 2.6: [...] ChaCha20 as specified here requires a 96-bit
+        # nonce. So if the provided nonce is only 64-bit, then the first 32
+        # bits of the nonce will be set to a constant number.
+        # This will usually be zero, but for protocols with multiple senders
+        # it may be different for each sender; it should, however, be the same
+        # for all invocations of the function with the same key by a
+        # particular sender.
+        padded_nonce = b'\x00\x00\x00\x00' + nonce
+    elif len(nonce) == 12:
+        padded_nonce = nonce
+    else:
+        raise ValueError("Poly1305 with ChaCha20 requires an 8- or 12-byte nonce")
+
+    rs = new(key=key, nonce=padded_nonce).encrypt(b'\x00' * 32)
+    return rs[:16], rs[16:], nonce
+
+
+def new(**kwargs):
+    """Create a new ChaCha20 or XChaCha20 cipher
+
+    Keyword Args:
+        key (bytes/bytearray/memoryview): The secret key to use.
+            It must be 32 bytes long.
+        nonce (bytes/bytearray/memoryview): A mandatory value that
+            must never be reused for any other encryption
+            done with this key.
+
+            For ChaCha20, it must be 8 or 12 bytes long.
+ + For XChaCha20, it must be 24 bytes long. + + If not provided, 8 bytes will be randomly generated + (you can find them back in the ``nonce`` attribute). + + :Return: a :class:`Crypto.Cipher.ChaCha20.ChaCha20Cipher` object + """ + + try: + key = kwargs.pop("key") + except KeyError as e: + raise TypeError("Missing parameter %s" % e) + + nonce = kwargs.pop("nonce", None) + if nonce is None: + nonce = get_random_bytes(8) + + if len(key) != 32: + raise ValueError("ChaCha20/XChaCha20 key must be 32 bytes long") + + if len(nonce) not in (8, 12, 24): + raise ValueError("Nonce must be 8/12 bytes(ChaCha20) or 24 bytes (XChaCha20)") + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return ChaCha20Cipher(key, nonce) + +# Size of a data block (in bytes) +block_size = 1 + +# Size of a key (in bytes) +key_size = 32 diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20.pyi new file mode 100644 index 00000000..3d00a1d4 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20.pyi @@ -0,0 +1,25 @@ +from typing import Union, overload + +Buffer = Union[bytes, bytearray, memoryview] + +def _HChaCha20(key: Buffer, nonce: Buffer) -> bytearray: ... + +class ChaCha20Cipher: + block_size: int + nonce: bytes + + def __init__(self, key: Buffer, nonce: Buffer) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + def seek(self, position: int) -> None: ... + +def new(key: Buffer, nonce: Buffer = ...) -> ChaCha20Cipher: ... + +block_size: int +key_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20_Poly1305.py b/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20_Poly1305.py new file mode 100644 index 00000000..21ddca3e --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20_Poly1305.py @@ -0,0 +1,336 @@ +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from binascii import unhexlify + +from Crypto.Cipher import ChaCha20 +from Crypto.Cipher.ChaCha20 import _HChaCha20 +from Crypto.Hash import Poly1305, BLAKE2s + +from Crypto.Random import get_random_bytes + +from Crypto.Util.number import long_to_bytes +from Crypto.Util.py3compat import _copy_bytes, bord +from Crypto.Util._raw_api import is_buffer + + +def _enum(**enums): + return type('Enum', (), enums) + + +_CipherStatus = _enum(PROCESSING_AUTH_DATA=1, + PROCESSING_CIPHERTEXT=2, + PROCESSING_DONE=3) + + +class ChaCha20Poly1305Cipher(object): + """ChaCha20-Poly1305 and XChaCha20-Poly1305 cipher object. + Do not create it directly. Use :py:func:`new` instead. + + :var nonce: The nonce with length 8, 12 or 24 bytes + :vartype nonce: byte string + """ + + def __init__(self, key, nonce): + """Initialize a ChaCha20-Poly1305 AEAD cipher object + + See also `new()` at the module level.""" + + self.nonce = _copy_bytes(None, None, nonce) + + self._next = (self.update, self.encrypt, self.decrypt, self.digest, + self.verify) + + self._authenticator = Poly1305.new(key=key, nonce=nonce, cipher=ChaCha20) + + self._cipher = ChaCha20.new(key=key, nonce=nonce) + self._cipher.seek(64) # Block counter starts at 1 + + self._len_aad = 0 + self._len_ct = 0 + self._mac_tag = None + self._status = _CipherStatus.PROCESSING_AUTH_DATA + + def update(self, data): + """Protect the associated data. + + Associated data (also known as *additional authenticated data* - AAD) + is the piece of the message that must stay in the clear, while + still allowing the receiver to verify its integrity. + An example is packet headers. + + The associated data (possibly split into multiple segments) is + fed into :meth:`update` before any call to :meth:`decrypt` or :meth:`encrypt`. + If there is no associated data, :meth:`update` is not called. + + :param bytes/bytearray/memoryview assoc_data: + A piece of associated data. There are no restrictions on its size. + """ + + if self.update not in self._next: + raise TypeError("update() method cannot be called") + + self._len_aad += len(data) + self._authenticator.update(data) + + def _pad_aad(self): + + assert(self._status == _CipherStatus.PROCESSING_AUTH_DATA) + if self._len_aad & 0x0F: + self._authenticator.update(b'\x00' * (16 - (self._len_aad & 0x0F))) + self._status = _CipherStatus.PROCESSING_CIPHERTEXT + + def encrypt(self, plaintext, output=None): + """Encrypt a piece of data. + + Args: + plaintext(bytes/bytearray/memoryview): The data to encrypt, of any size. + Keyword Args: + output(bytes/bytearray/memoryview): The location where the ciphertext + is written to. If ``None``, the ciphertext is returned. + Returns: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. 
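+
+        A short AEAD sketch (illustrative, not part of the original
+        documentation; ``new`` is this module's factory function):
+
+        >>> from Crypto.Cipher import ChaCha20_Poly1305
+        >>> from Crypto.Random import get_random_bytes
+        >>> key = get_random_bytes(32)
+        >>> cipher = ChaCha20_Poly1305.new(key=key)   # random 12-byte nonce
+        >>> cipher.update(b"header")                  # authenticated, not encrypted
+        >>> ct, tag = cipher.encrypt_and_digest(b"payload")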
+ """ + + if self.encrypt not in self._next: + raise TypeError("encrypt() method cannot be called") + + if self._status == _CipherStatus.PROCESSING_AUTH_DATA: + self._pad_aad() + + self._next = (self.encrypt, self.digest) + + result = self._cipher.encrypt(plaintext, output=output) + self._len_ct += len(plaintext) + if output is None: + self._authenticator.update(result) + else: + self._authenticator.update(output) + return result + + def decrypt(self, ciphertext, output=None): + """Decrypt a piece of data. + + Args: + ciphertext(bytes/bytearray/memoryview): The data to decrypt, of any size. + Keyword Args: + output(bytes/bytearray/memoryview): The location where the plaintext + is written to. If ``None``, the plaintext is returned. + Returns: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if self.decrypt not in self._next: + raise TypeError("decrypt() method cannot be called") + + if self._status == _CipherStatus.PROCESSING_AUTH_DATA: + self._pad_aad() + + self._next = (self.decrypt, self.verify) + + self._len_ct += len(ciphertext) + self._authenticator.update(ciphertext) + return self._cipher.decrypt(ciphertext, output=output) + + def _compute_mac(self): + """Finalize the cipher (if not done already) and return the MAC.""" + + if self._mac_tag: + assert(self._status == _CipherStatus.PROCESSING_DONE) + return self._mac_tag + + assert(self._status != _CipherStatus.PROCESSING_DONE) + + if self._status == _CipherStatus.PROCESSING_AUTH_DATA: + self._pad_aad() + + if self._len_ct & 0x0F: + self._authenticator.update(b'\x00' * (16 - (self._len_ct & 0x0F))) + + self._status = _CipherStatus.PROCESSING_DONE + + self._authenticator.update(long_to_bytes(self._len_aad, 8)[::-1]) + self._authenticator.update(long_to_bytes(self._len_ct, 8)[::-1]) + self._mac_tag = self._authenticator.digest() + return self._mac_tag + + def digest(self): + """Compute the *binary* authentication tag (MAC). + + :Return: the MAC tag, as 16 ``bytes``. + """ + + if self.digest not in self._next: + raise TypeError("digest() method cannot be called") + self._next = (self.digest,) + + return self._compute_mac() + + def hexdigest(self): + """Compute the *printable* authentication tag (MAC). + + This method is like :meth:`digest`. + + :Return: the MAC tag, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* authentication tag (MAC). + + The receiver invokes this method at the very end, to + check if the associated data (if any) and the decrypted + messages are valid. + + :param bytes/bytearray/memoryview received_mac_tag: + This is the 16-byte *binary* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if self.verify not in self._next: + raise TypeError("verify() cannot be called" + " when encrypting a message") + self._next = (self.verify,) + + secret = get_random_bytes(16) + + self._compute_mac() + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, + data=self._mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, + data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* authentication tag (MAC). + + This method is like :meth:`verify`. + + :param string hex_mac_tag: + This is the *printable* MAC. + :Raises ValueError: + if the MAC does not match. 
The message has been tampered with
+            or the key is incorrect.
+        """
+
+        self.verify(unhexlify(hex_mac_tag))
+
+    def encrypt_and_digest(self, plaintext):
+        """Perform :meth:`encrypt` and :meth:`digest` in one step.
+
+        :param plaintext: The data to encrypt, of any size.
+        :type plaintext: bytes/bytearray/memoryview
+        :return: a tuple with two ``bytes`` objects:
+
+            - the ciphertext, of equal length as the plaintext
+            - the 16-byte MAC tag
+        """
+
+        return self.encrypt(plaintext), self.digest()
+
+    def decrypt_and_verify(self, ciphertext, received_mac_tag):
+        """Perform :meth:`decrypt` and :meth:`verify` in one step.
+
+        :param ciphertext: The piece of data to decrypt.
+        :type ciphertext: bytes/bytearray/memoryview
+        :param bytes received_mac_tag:
+            This is the 16-byte *binary* MAC, as received from the sender.
+        :return: the decrypted data (as ``bytes``)
+        :raises ValueError:
+            if the MAC does not match. The message has been tampered with
+            or the key is incorrect.
+        """
+
+        plaintext = self.decrypt(ciphertext)
+        self.verify(received_mac_tag)
+        return plaintext
+
+
+def new(**kwargs):
+    """Create a new ChaCha20-Poly1305 or XChaCha20-Poly1305 AEAD cipher.
+
+    :keyword key: The secret key to use. It must be 32 bytes long.
+    :type key: byte string
+
+    :keyword nonce:
+        A value that must never be reused for any other encryption
+        done with this key.
+
+        For ChaCha20-Poly1305, it must be 8 or 12 bytes long.
+
+        For XChaCha20-Poly1305, it must be 24 bytes long.
+
+        If not provided, 12 ``bytes`` will be generated randomly
+        (you can find them back in the ``nonce`` attribute).
+    :type nonce: bytes, bytearray, memoryview
+
+    :Return: a :class:`Crypto.Cipher.ChaCha20_Poly1305.ChaCha20Poly1305Cipher` object
+    """
+
+    try:
+        key = kwargs.pop("key")
+    except KeyError as e:
+        raise TypeError("Missing parameter %s" % e)
+
+    if len(key) != 32:
+        raise ValueError("Key must be 32 bytes long")
+
+    nonce = kwargs.pop("nonce", None)
+    if nonce is None:
+        nonce = get_random_bytes(12)
+
+    if len(nonce) in (8, 12):
+        pass
+    elif len(nonce) == 24:
+        key = _HChaCha20(key, nonce[:16])
+        nonce = b'\x00\x00\x00\x00' + nonce[16:]
+    else:
+        raise ValueError("Nonce must be 8, 12 or 24 bytes long")
+
+    if not is_buffer(nonce):
+        raise TypeError("nonce must be bytes, bytearray or memoryview")
+
+    if kwargs:
+        raise TypeError("Unknown parameters: " + str(kwargs))
+
+    return ChaCha20Poly1305Cipher(key, nonce)
+
+
+# Size of a key (in bytes)
+key_size = 32
diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20_Poly1305.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20_Poly1305.pyi
new file mode 100644
index 00000000..ef0450f0
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Cipher/ChaCha20_Poly1305.pyi
@@ -0,0 +1,28 @@
+from typing import Union, Tuple, overload
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class ChaCha20Poly1305Cipher:
+    nonce: bytes
+
+    def __init__(self, key: Buffer, nonce: Buffer) -> None: ...
+    def update(self, data: Buffer) -> None: ...
+    @overload
+    def encrypt(self, plaintext: Buffer) -> bytes: ...
+    @overload
+    def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ...
+    @overload
+    def decrypt(self, plaintext: Buffer) -> bytes: ...
+    @overload
+    def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def verify(self, received_mac_tag: Buffer) -> None: ...
+ def hexverify(self, received_mac_tag: str) -> None: ... + def encrypt_and_digest(self, plaintext: Buffer) -> Tuple[bytes, bytes]: ... + def decrypt_and_verify(self, ciphertext: Buffer, received_mac_tag: Buffer) -> bytes: ... + +def new(key: Buffer, nonce: Buffer = ...) -> ChaCha20Poly1305Cipher: ... + +block_size: int +key_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/DES.py b/env/lib/python3.8/site-packages/Crypto/Cipher/DES.py new file mode 100644 index 00000000..5cc286ae --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/DES.py @@ -0,0 +1,158 @@ +# -*- coding: utf-8 -*- +# +# Cipher/DES.py : DES +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== +""" +Module's constants for the modes of operation supported with Single DES: + +:var MODE_ECB: :ref:`Electronic Code Book (ECB) ` +:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) ` +:var MODE_CFB: :ref:`Cipher FeedBack (CFB) ` +:var MODE_OFB: :ref:`Output FeedBack (OFB) ` +:var MODE_CTR: :ref:`CounTer Mode (CTR) ` +:var MODE_OPENPGP: :ref:`OpenPGP Mode ` +:var MODE_EAX: :ref:`EAX Mode ` +""" + +import sys + +from Crypto.Cipher import _create_cipher +from Crypto.Util.py3compat import byte_string +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + c_size_t, c_uint8_ptr) + +_raw_des_lib = load_pycryptodome_raw_lib( + "Crypto.Cipher._raw_des", + """ + int DES_start_operation(const uint8_t key[], + size_t key_len, + void **pResult); + int DES_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int DES_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int DES_stop_operation(void *state); + """) + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a handle to a low-level + base cipher. It will absorb named parameters in the process.""" + + try: + key = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + if len(key) != key_size: + raise ValueError("Incorrect DES key length (%d bytes)" % len(key)) + + start_operation = _raw_des_lib.DES_start_operation + stop_operation = _raw_des_lib.DES_stop_operation + + cipher = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + cipher.address_of()) + if result: + raise ValueError("Error %X while instantiating the DES cipher" + % result) + return SmartPointer(cipher.get(), stop_operation) + + +def new(key, mode, *args, **kwargs): + """Create a new DES cipher. + + :param key: + The secret key to use in the symmetric cipher. + It must be 8 byte long. 
The parity bits will be ignored. + :type key: bytes/bytearray/memoryview + + :param mode: + The chaining mode to use for encryption or decryption. + :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*byte string*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 8 bytes long for encryption + and 10 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*byte string*) -- + (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key. + + For ``MODE_EAX`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CTR``, its length must be in the range **[0..7]**. + + If not provided for ``MODE_EAX``, a random byte string is generated (you + can read it back via the ``nonce`` attribute). + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``) + Length of the authentication tag, in bytes. + It must be no longer than 8 (default). + + * **initial_value** : (*integer*) -- + (Only ``MODE_CTR``). The initial value for the counter within + the counter block. By default it is **0**. + + :Return: a DES object, of the applicable mode. + """ + + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_EAX = 9 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = 8 diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/DES.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/DES.pyi new file mode 100644 index 00000000..1047f132 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/DES.pyi @@ -0,0 +1,35 @@ +from typing import Union, Dict, Iterable + +from Crypto.Cipher._mode_ecb import EcbMode +from Crypto.Cipher._mode_cbc import CbcMode +from Crypto.Cipher._mode_cfb import CfbMode +from Crypto.Cipher._mode_ofb import OfbMode +from Crypto.Cipher._mode_ctr import CtrMode +from Crypto.Cipher._mode_openpgp import OpenPgpMode +from Crypto.Cipher._mode_eax import EaxMode + +DESMode = int + +MODE_ECB: DESMode +MODE_CBC: DESMode +MODE_CFB: DESMode +MODE_OFB: DESMode +MODE_CTR: DESMode +MODE_OPENPGP: DESMode +MODE_EAX: DESMode + +Buffer = Union[bytes, bytearray, memoryview] + +def new(key: Buffer, + mode: DESMode, + iv : Buffer = ..., + IV : Buffer = ..., + nonce : Buffer = ..., + segment_size : int = ..., + mac_len : int = ..., + initial_value : Union[int, Buffer] = ..., + counter : Dict = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, OpenPgpMode]: ... 
+ +block_size: int +key_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/DES3.py b/env/lib/python3.8/site-packages/Crypto/Cipher/DES3.py new file mode 100644 index 00000000..c0d93671 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/DES3.py @@ -0,0 +1,187 @@ +# -*- coding: utf-8 -*- +# +# Cipher/DES3.py : DES3 +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== +""" +Module's constants for the modes of operation supported with Triple DES: + +:var MODE_ECB: :ref:`Electronic Code Book (ECB) ` +:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) ` +:var MODE_CFB: :ref:`Cipher FeedBack (CFB) ` +:var MODE_OFB: :ref:`Output FeedBack (OFB) ` +:var MODE_CTR: :ref:`CounTer Mode (CTR) ` +:var MODE_OPENPGP: :ref:`OpenPGP Mode ` +:var MODE_EAX: :ref:`EAX Mode ` +""" + +import sys + +from Crypto.Cipher import _create_cipher +from Crypto.Util.py3compat import byte_string, bchr, bord, bstr +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + c_size_t) + +_raw_des3_lib = load_pycryptodome_raw_lib( + "Crypto.Cipher._raw_des3", + """ + int DES3_start_operation(const uint8_t key[], + size_t key_len, + void **pResult); + int DES3_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int DES3_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int DES3_stop_operation(void *state); + """) + + +def adjust_key_parity(key_in): + """Set the parity bits in a TDES key. + + :param key_in: the TDES key whose bits need to be adjusted + :type key_in: byte string + + :returns: a copy of ``key_in``, with the parity bits correctly set + :rtype: byte string + + :raises ValueError: if the TDES key is not 16 or 24 bytes long + :raises ValueError: if the TDES key degenerates into Single DES + """ + + def parity_byte(key_byte): + parity = 1 + for i in range(1, 8): + parity ^= (key_byte >> i) & 1 + return (key_byte & 0xFE) | parity + + if len(key_in) not in key_size: + raise ValueError("Not a valid TDES key") + + key_out = b"".join([ bchr(parity_byte(bord(x))) for x in key_in ]) + + if key_out[:8] == key_out[8:16] or key_out[-16:-8] == key_out[-8:]: + raise ValueError("Triple DES key degenerates to single DES") + + return key_out + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a handle to a low-level base cipher. 
+ It will absorb named parameters in the process.""" + + try: + key_in = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + key = adjust_key_parity(bstr(key_in)) + + start_operation = _raw_des3_lib.DES3_start_operation + stop_operation = _raw_des3_lib.DES3_stop_operation + + cipher = VoidPointer() + result = start_operation(key, + c_size_t(len(key)), + cipher.address_of()) + if result: + raise ValueError("Error %X while instantiating the TDES cipher" + % result) + return SmartPointer(cipher.get(), stop_operation) + + +def new(key, mode, *args, **kwargs): + """Create a new Triple DES cipher. + + :param key: + The secret key to use in the symmetric cipher. + It must be 16 or 24 byte long. The parity bits will be ignored. + :type key: bytes/bytearray/memoryview + + :param mode: + The chaining mode to use for encryption or decryption. + :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 8 bytes long for encryption + and 10 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key. + + For ``MODE_EAX`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CTR``, its length must be in the range **[0..7]**. + + If not provided for ``MODE_EAX``, a random byte string is generated (you + can read it back via the ``nonce`` attribute). + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``) + Length of the authentication tag, in bytes. + It must be no longer than 8 (default). + + * **initial_value** : (*integer*) -- + (Only ``MODE_CTR``). The initial value for the counter within + the counter block. By default it is **0**. + + :Return: a Triple DES object, of the applicable mode. 
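+
+    Example (an illustrative sketch, not part of the original
+    documentation; the retry loop discards the rare random keys that
+    would degenerate into Single DES):
+
+    >>> from Crypto.Cipher import DES3
+    >>> from Crypto.Random import get_random_bytes
+    >>> while True:
+    ...     try:
+    ...         key = DES3.adjust_key_parity(get_random_bytes(24))
+    ...         break
+    ...     except ValueError:
+    ...         pass
+    >>> cipher = DES3.new(key, DES3.MODE_CFB)   # random 8-byte IV
+    >>> ct = cipher.encrypt(b"plaintext")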
+ """ + + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_EAX = 9 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = (16, 24) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/DES3.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/DES3.pyi new file mode 100644 index 00000000..a89db9c0 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/DES3.pyi @@ -0,0 +1,37 @@ +from typing import Union, Dict, Tuple + +from Crypto.Cipher._mode_ecb import EcbMode +from Crypto.Cipher._mode_cbc import CbcMode +from Crypto.Cipher._mode_cfb import CfbMode +from Crypto.Cipher._mode_ofb import OfbMode +from Crypto.Cipher._mode_ctr import CtrMode +from Crypto.Cipher._mode_openpgp import OpenPgpMode +from Crypto.Cipher._mode_eax import EaxMode + +def adjust_key_parity(key_in: bytes) -> bytes: ... + +DES3Mode = int + +MODE_ECB: DES3Mode +MODE_CBC: DES3Mode +MODE_CFB: DES3Mode +MODE_OFB: DES3Mode +MODE_CTR: DES3Mode +MODE_OPENPGP: DES3Mode +MODE_EAX: DES3Mode + +Buffer = Union[bytes, bytearray, memoryview] + +def new(key: Buffer, + mode: DES3Mode, + iv : Buffer = ..., + IV : Buffer = ..., + nonce : Buffer = ..., + segment_size : int = ..., + mac_len : int = ..., + initial_value : Union[int, Buffer] = ..., + counter : Dict = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, OpenPgpMode]: ... + +block_size: int +key_size: Tuple[int, int] diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/PKCS1_OAEP.py b/env/lib/python3.8/site-packages/Crypto/Cipher/PKCS1_OAEP.py new file mode 100644 index 00000000..57a982b8 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/PKCS1_OAEP.py @@ -0,0 +1,239 @@ +# -*- coding: utf-8 -*- +# +# Cipher/PKCS1_OAEP.py : PKCS#1 OAEP +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Crypto.Signature.pss import MGF1 +import Crypto.Hash.SHA1 + +from Crypto.Util.py3compat import bord, _copy_bytes +import Crypto.Util.number +from Crypto.Util.number import ceil_div, bytes_to_long, long_to_bytes +from Crypto.Util.strxor import strxor +from Crypto import Random + +class PKCS1OAEP_Cipher: + """Cipher object for PKCS#1 v1.5 OAEP. + Do not create directly: use :func:`new` instead.""" + + def __init__(self, key, hashAlgo, mgfunc, label, randfunc): + """Initialize this PKCS#1 OAEP cipher object. + + :Parameters: + key : an RSA key object + If a private half is given, both encryption and decryption are possible. + If a public half is given, only encryption is possible. 
+ hashAlgo : hash object + The hash function to use. This can be a module under `Crypto.Hash` + or an existing hash object created from any of such modules. If not specified, + `Crypto.Hash.SHA1` is used. + mgfunc : callable + A mask generation function that accepts two parameters: a string to + use as seed, and the lenth of the mask to generate, in bytes. + If not specified, the standard MGF1 consistent with ``hashAlgo`` is used (a safe choice). + label : bytes/bytearray/memoryview + A label to apply to this particular encryption. If not specified, + an empty string is used. Specifying a label does not improve + security. + randfunc : callable + A function that returns random bytes. + + :attention: Modify the mask generation function only if you know what you are doing. + Sender and receiver must use the same one. + """ + self._key = key + + if hashAlgo: + self._hashObj = hashAlgo + else: + self._hashObj = Crypto.Hash.SHA1 + + if mgfunc: + self._mgf = mgfunc + else: + self._mgf = lambda x,y: MGF1(x,y,self._hashObj) + + self._label = _copy_bytes(None, None, label) + self._randfunc = randfunc + + def can_encrypt(self): + """Legacy function to check if you can call :meth:`encrypt`. + + .. deprecated:: 3.0""" + return self._key.can_encrypt() + + def can_decrypt(self): + """Legacy function to check if you can call :meth:`decrypt`. + + .. deprecated:: 3.0""" + return self._key.can_decrypt() + + def encrypt(self, message): + """Encrypt a message with PKCS#1 OAEP. + + :param message: + The message to encrypt, also known as plaintext. It can be of + variable length, but not longer than the RSA modulus (in bytes) + minus 2, minus twice the hash output size. + For instance, if you use RSA 2048 and SHA-256, the longest message + you can encrypt is 190 byte long. + :type message: bytes/bytearray/memoryview + + :returns: The ciphertext, as large as the RSA modulus. + :rtype: bytes + + :raises ValueError: + if the message is too long. + """ + + # See 7.1.1 in RFC3447 + modBits = Crypto.Util.number.size(self._key.n) + k = ceil_div(modBits, 8) # Convert from bits to bytes + hLen = self._hashObj.digest_size + mLen = len(message) + + # Step 1b + ps_len = k - mLen - 2 * hLen - 2 + if ps_len < 0: + raise ValueError("Plaintext is too long.") + # Step 2a + lHash = self._hashObj.new(self._label).digest() + # Step 2b + ps = b'\x00' * ps_len + # Step 2c + db = lHash + ps + b'\x01' + _copy_bytes(None, None, message) + # Step 2d + ros = self._randfunc(hLen) + # Step 2e + dbMask = self._mgf(ros, k-hLen-1) + # Step 2f + maskedDB = strxor(db, dbMask) + # Step 2g + seedMask = self._mgf(maskedDB, hLen) + # Step 2h + maskedSeed = strxor(ros, seedMask) + # Step 2i + em = b'\x00' + maskedSeed + maskedDB + # Step 3a (OS2IP) + em_int = bytes_to_long(em) + # Step 3b (RSAEP) + m_int = self._key._encrypt(em_int) + # Step 3c (I2OSP) + c = long_to_bytes(m_int, k) + return c + + def decrypt(self, ciphertext): + """Decrypt a message with PKCS#1 OAEP. + + :param ciphertext: The encrypted message. + :type ciphertext: bytes/bytearray/memoryview + + :returns: The original message (plaintext). + :rtype: bytes + + :raises ValueError: + if the ciphertext has the wrong length, or if decryption + fails the integrity check (in which case, the decryption + key is probably wrong). + :raises TypeError: + if the RSA key has no private half (i.e. you are trying + to decrypt using a public key). 
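+
+        A round-trip sketch (not part of the original docs; it assumes a fresh
+        2048-bit key, so ``RSA.generate`` may take a moment):
+
+        >>> from Crypto.Cipher import PKCS1_OAEP
+        >>> from Crypto.PublicKey import RSA
+        >>> key = RSA.generate(2048)
+        >>> ct = PKCS1_OAEP.new(key.publickey()).encrypt(b'secret')
+        >>> PKCS1_OAEP.new(key).decrypt(ct)
+        b'secret'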
+ """ + + # See 7.1.2 in RFC3447 + modBits = Crypto.Util.number.size(self._key.n) + k = ceil_div(modBits,8) # Convert from bits to bytes + hLen = self._hashObj.digest_size + + # Step 1b and 1c + if len(ciphertext) != k or k Any: ... + +class HashLikeModule(Protocol): + digest_size : int + @staticmethod + def new(data: Optional[bytes] = ...) -> Any: ... + +HashLike = Union[HashLikeClass, HashLikeModule] + +Buffer = Union[bytes, bytearray, memoryview] + +class PKCS1OAEP_Cipher: + def __init__(self, + key: RsaKey, + hashAlgo: HashLike, + mgfunc: Callable[[bytes, int], bytes], + label: Buffer, + randfunc: Callable[[int], bytes]) -> None: ... + def can_encrypt(self) -> bool: ... + def can_decrypt(self) -> bool: ... + def encrypt(self, message: Buffer) -> bytes: ... + def decrypt(self, ciphertext: Buffer) -> bytes: ... + +def new(key: RsaKey, + hashAlgo: Optional[HashLike] = ..., + mgfunc: Optional[Callable[[bytes, int], bytes]] = ..., + label: Optional[Buffer] = ..., + randfunc: Optional[Callable[[int], bytes]] = ...) -> PKCS1OAEP_Cipher: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/PKCS1_v1_5.py b/env/lib/python3.8/site-packages/Crypto/Cipher/PKCS1_v1_5.py new file mode 100644 index 00000000..d0d474a6 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/PKCS1_v1_5.py @@ -0,0 +1,217 @@ +# -*- coding: utf-8 -*- +# +# Cipher/PKCS1-v1_5.py : PKCS#1 v1.5 +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +__all__ = ['new', 'PKCS115_Cipher'] + +from Crypto import Random +from Crypto.Util.number import bytes_to_long, long_to_bytes +from Crypto.Util.py3compat import bord, is_bytes, _copy_bytes + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, c_size_t, + c_uint8_ptr) + + +_raw_pkcs1_decode = load_pycryptodome_raw_lib("Crypto.Cipher._pkcs1_decode", + """ + int pkcs1_decode(const uint8_t *em, size_t len_em, + const uint8_t *sentinel, size_t len_sentinel, + size_t expected_pt_len, + uint8_t *output); + """) + + +def _pkcs1_decode(em, sentinel, expected_pt_len, output): + if len(em) != len(output): + raise ValueError("Incorrect output length") + + ret = _raw_pkcs1_decode.pkcs1_decode(c_uint8_ptr(em), + c_size_t(len(em)), + c_uint8_ptr(sentinel), + c_size_t(len(sentinel)), + c_size_t(expected_pt_len), + c_uint8_ptr(output)) + return ret + + +class PKCS115_Cipher: + """This cipher can perform PKCS#1 v1.5 RSA encryption or decryption. + Do not instantiate directly. Use :func:`Crypto.Cipher.PKCS1_v1_5.new` instead.""" + + def __init__(self, key, randfunc): + """Initialize this PKCS#1 v1.5 cipher object. 
+ + :Parameters: + key : an RSA key object + If a private half is given, both encryption and decryption are possible. + If a public half is given, only encryption is possible. + randfunc : callable + Function that returns random bytes. + """ + + self._key = key + self._randfunc = randfunc + + def can_encrypt(self): + """Return True if this cipher object can be used for encryption.""" + return self._key.can_encrypt() + + def can_decrypt(self): + """Return True if this cipher object can be used for decryption.""" + return self._key.can_decrypt() + + def encrypt(self, message): + """Produce the PKCS#1 v1.5 encryption of a message. + + This function is named ``RSAES-PKCS1-V1_5-ENCRYPT``, and it is specified in + `section 7.2.1 of RFC8017 + `_. + + :param message: + The message to encrypt, also known as plaintext. It can be of + variable length, but not longer than the RSA modulus (in bytes) minus 11. + :type message: bytes/bytearray/memoryview + + :Returns: A byte string, the ciphertext in which the message is encrypted. + It is as long as the RSA modulus (in bytes). + + :Raises ValueError: + If the RSA key length is not sufficiently long to deal with the given + message. + """ + + # See 7.2.1 in RFC8017 + k = self._key.size_in_bytes() + mLen = len(message) + + # Step 1 + if mLen > k - 11: + raise ValueError("Plaintext is too long.") + # Step 2a + ps = [] + while len(ps) != k - mLen - 3: + new_byte = self._randfunc(1) + if bord(new_byte[0]) == 0x00: + continue + ps.append(new_byte) + ps = b"".join(ps) + assert(len(ps) == k - mLen - 3) + # Step 2b + em = b'\x00\x02' + ps + b'\x00' + _copy_bytes(None, None, message) + # Step 3a (OS2IP) + em_int = bytes_to_long(em) + # Step 3b (RSAEP) + m_int = self._key._encrypt(em_int) + # Step 3c (I2OSP) + c = long_to_bytes(m_int, k) + return c + + def decrypt(self, ciphertext, sentinel, expected_pt_len=0): + r"""Decrypt a PKCS#1 v1.5 ciphertext. + + This is the function ``RSAES-PKCS1-V1_5-DECRYPT`` specified in + `section 7.2.2 of RFC8017 + `_. + + Args: + ciphertext (bytes/bytearray/memoryview): + The ciphertext that contains the message to recover. + sentinel (any type): + The object to return whenever an error is detected. + expected_pt_len (integer): + The length the plaintext is known to have, or 0 if unknown. + + Returns (byte string): + It is either the original message or the ``sentinel`` (in case of an error). + + .. warning:: + PKCS#1 v1.5 decryption is intrinsically vulnerable to timing + attacks (see `Bleichenbacher's`__ attack). + **Use PKCS#1 OAEP instead**. + + This implementation attempts to mitigate the risk + with some constant-time constructs. + However, they are not sufficient by themselves: the type of protocol you + implement and the way you handle errors make a big difference. + + Specifically, you should make it very hard for the (malicious) + party that submitted the ciphertext to quickly understand if decryption + succeeded or not. + + To this end, it is recommended that your protocol only encrypts + plaintexts of fixed length (``expected_pt_len``), + that ``sentinel`` is a random byte string of the same length, + and that processing continues for as long + as possible even if ``sentinel`` is returned (i.e. in case of + incorrect decryption). + + .. 
__: https://dx.doi.org/10.1007/BFb0055716 + """ + + # See 7.2.2 in RFC8017 + k = self._key.size_in_bytes() + + # Step 1 + if len(ciphertext) != k: + raise ValueError("Ciphertext with incorrect length (not %d bytes)" % k) + + # Step 2a (O2SIP) + ct_int = bytes_to_long(ciphertext) + + # Step 2b (RSADP) + m_int = self._key._decrypt(ct_int) + + # Complete step 2c (I2OSP) + em = long_to_bytes(m_int, k) + + # Step 3 (not constant time when the sentinel is not a byte string) + output = bytes(bytearray(k)) + if not is_bytes(sentinel) or len(sentinel) > k: + size = _pkcs1_decode(em, b'', expected_pt_len, output) + if size < 0: + return sentinel + else: + return output[size:] + + # Step 3 (somewhat constant time) + size = _pkcs1_decode(em, sentinel, expected_pt_len, output) + return output[size:] + + +def new(key, randfunc=None): + """Create a cipher for performing PKCS#1 v1.5 encryption or decryption. + + :param key: + The key to use to encrypt or decrypt the message. This is a `Crypto.PublicKey.RSA` object. + Decryption is only possible if *key* is a private RSA key. + :type key: RSA key object + + :param randfunc: + Function that return random bytes. + The default is :func:`Crypto.Random.get_random_bytes`. + :type randfunc: callable + + :returns: A cipher object `PKCS115_Cipher`. + """ + + if randfunc is None: + randfunc = Random.get_random_bytes + return PKCS115_Cipher(key, randfunc) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/PKCS1_v1_5.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/PKCS1_v1_5.pyi new file mode 100644 index 00000000..1719f01a --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/PKCS1_v1_5.pyi @@ -0,0 +1,20 @@ +from typing import Callable, Union, Any, Optional, TypeVar + +from Crypto.PublicKey.RSA import RsaKey + +Buffer = Union[bytes, bytearray, memoryview] +T = TypeVar('T') + +class PKCS115_Cipher: + def __init__(self, + key: RsaKey, + randfunc: Callable[[int], bytes]) -> None: ... + def can_encrypt(self) -> bool: ... + def can_decrypt(self) -> bool: ... + def encrypt(self, message: Buffer) -> bytes: ... + def decrypt(self, ciphertext: Buffer, + sentinel: T, + expected_pt_len: Optional[int] = ...) -> Union[bytes, T]: ... + +def new(key: RsaKey, + randfunc: Optional[Callable[[int], bytes]] = ...) -> PKCS115_Cipher: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/Salsa20.py b/env/lib/python3.8/site-packages/Crypto/Cipher/Salsa20.py new file mode 100644 index 00000000..62d0b292 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/Salsa20.py @@ -0,0 +1,167 @@ +# -*- coding: utf-8 -*- +# +# Cipher/Salsa20.py : Salsa20 stream cipher (http://cr.yp.to/snuffle.html) +# +# Contributed by Fabrizio Tarizzo . +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+from Crypto.Util.py3compat import _copy_bytes
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
+                                  create_string_buffer,
+                                  get_raw_buffer, VoidPointer,
+                                  SmartPointer, c_size_t,
+                                  c_uint8_ptr, is_writeable_buffer)
+
+from Crypto.Random import get_random_bytes
+
+_raw_salsa20_lib = load_pycryptodome_raw_lib("Crypto.Cipher._Salsa20",
+                """
+                int Salsa20_stream_init(uint8_t *key, size_t keylen,
+                                        uint8_t *nonce, size_t nonce_len,
+                                        void **pSalsaState);
+                int Salsa20_stream_destroy(void *salsaState);
+                int Salsa20_stream_encrypt(void *salsaState,
+                                           const uint8_t in[],
+                                           uint8_t out[], size_t len);
+                """)
+
+
+class Salsa20Cipher:
+    """Salsa20 cipher object. Do not create it directly. Use :py:func:`new`
+    instead.
+
+    :var nonce: The nonce with length 8
+    :vartype nonce: byte string
+    """
+
+    def __init__(self, key, nonce):
+        """Initialize a Salsa20 cipher object
+
+        See also `new()` at the module level."""
+
+        if len(key) not in key_size:
+            raise ValueError("Incorrect key length for Salsa20 (%d bytes)" % len(key))
+
+        if len(nonce) != 8:
+            raise ValueError("Incorrect nonce length for Salsa20 (%d bytes)" %
+                             len(nonce))
+
+        self.nonce = _copy_bytes(None, None, nonce)
+
+        self._state = VoidPointer()
+        result = _raw_salsa20_lib.Salsa20_stream_init(
+                        c_uint8_ptr(key),
+                        c_size_t(len(key)),
+                        c_uint8_ptr(nonce),
+                        c_size_t(len(nonce)),
+                        self._state.address_of())
+        if result:
+            raise ValueError("Error %d instantiating a Salsa20 cipher" % result)
+        self._state = SmartPointer(self._state.get(),
+                                   _raw_salsa20_lib.Salsa20_stream_destroy)
+
+        self.block_size = 1
+        self.key_size = len(key)
+
+    def encrypt(self, plaintext, output=None):
+        """Encrypt a piece of data.
+
+        Args:
+          plaintext(bytes/bytearray/memoryview): The data to encrypt, of any size.
+        Keyword Args:
+          output(bytes/bytearray/memoryview): The location where the ciphertext
+            is written to. If ``None``, the ciphertext is returned.
+        Returns:
+          If ``output`` is ``None``, the ciphertext is returned as ``bytes``.
+          Otherwise, ``None``.
+        """
+
+        if output is None:
+            ciphertext = create_string_buffer(len(plaintext))
+        else:
+            ciphertext = output
+
+            if not is_writeable_buffer(output):
+                raise TypeError("output must be a bytearray or a writeable memoryview")
+
+            if len(plaintext) != len(output):
+                raise ValueError("output must have the same length as the input"
+                                 " (%d bytes)" % len(plaintext))
+
+        result = _raw_salsa20_lib.Salsa20_stream_encrypt(
+                                        self._state.get(),
+                                        c_uint8_ptr(plaintext),
+                                        c_uint8_ptr(ciphertext),
+                                        c_size_t(len(plaintext)))
+        if result:
+            raise ValueError("Error %d while encrypting with Salsa20" % result)
+
+        if output is None:
+            return get_raw_buffer(ciphertext)
+        else:
+            return None
+
+    def decrypt(self, ciphertext, output=None):
+        """Decrypt a piece of data.
+
+        Args:
+          ciphertext(bytes/bytearray/memoryview): The data to decrypt, of any size.
+        Keyword Args:
+          output(bytes/bytearray/memoryview): The location where the plaintext
+            is written to. If ``None``, the plaintext is returned.
+        Returns:
+          If ``output`` is ``None``, the plaintext is returned as ``bytes``.
+          Otherwise, ``None``.
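+
+        A round-trip sketch (not from the original docstring; the key is
+        generated on the spot, and the auto-generated nonce is reused to
+        build the decrypting cipher):
+
+        >>> from Crypto.Cipher import Salsa20
+        >>> from Crypto.Random import get_random_bytes
+        >>> key = get_random_bytes(32)
+        >>> enc = Salsa20.new(key)
+        >>> ct = enc.encrypt(b'attack at dawn')
+        >>> Salsa20.new(key, nonce=enc.nonce).decrypt(ct)
+        b'attack at dawn'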
+ """ + + try: + return self.encrypt(ciphertext, output=output) + except ValueError as e: + raise ValueError(str(e).replace("enc", "dec")) + + +def new(key, nonce=None): + """Create a new Salsa20 cipher + + :keyword key: The secret key to use. It must be 16 or 32 bytes long. + :type key: bytes/bytearray/memoryview + + :keyword nonce: + A value that must never be reused for any other encryption + done with this key. It must be 8 bytes long. + + If not provided, a random byte string will be generated (you can read + it back via the ``nonce`` attribute of the returned object). + :type nonce: bytes/bytearray/memoryview + + :Return: a :class:`Crypto.Cipher.Salsa20.Salsa20Cipher` object + """ + + if nonce is None: + nonce = get_random_bytes(8) + + return Salsa20Cipher(key, nonce) + +# Size of a data block (in bytes) +block_size = 1 + +# Size of a key (in bytes) +key_size = (16, 32) + diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/Salsa20.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/Salsa20.pyi new file mode 100644 index 00000000..9178f0d5 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/Salsa20.pyi @@ -0,0 +1,27 @@ +from typing import Union, Tuple, Optional, overload + + +Buffer = Union[bytes, bytearray, memoryview] + +class Salsa20Cipher: + nonce: bytes + block_size: int + key_size: int + + def __init__(self, + key: Buffer, + nonce: Buffer) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + +def new(key: Buffer, nonce: Optional[Buffer] = ...) -> Salsa20Cipher: ... + +block_size: int +key_size: Tuple[int, int] + diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_ARC4.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_ARC4.abi3.so new file mode 100644 index 00000000..675eb600 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_ARC4.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_EKSBlowfish.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_EKSBlowfish.py new file mode 100644 index 00000000..a844fae4 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_EKSBlowfish.py @@ -0,0 +1,131 @@ +# =================================================================== +# +# Copyright (c) 2019, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import sys + +from Crypto.Cipher import _create_cipher +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, c_size_t, + c_uint8_ptr, c_uint) + +_raw_blowfish_lib = load_pycryptodome_raw_lib( + "Crypto.Cipher._raw_eksblowfish", + """ + int EKSBlowfish_start_operation(const uint8_t key[], + size_t key_len, + const uint8_t salt[16], + size_t salt_len, + unsigned cost, + unsigned invert, + void **pResult); + int EKSBlowfish_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int EKSBlowfish_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int EKSBlowfish_stop_operation(void *state); + """ + ) + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a smart pointer to + a low-level base cipher. It will absorb named parameters in + the process.""" + + try: + key = dict_parameters.pop("key") + salt = dict_parameters.pop("salt") + cost = dict_parameters.pop("cost") + except KeyError as e: + raise TypeError("Missing EKSBlowfish parameter: " + str(e)) + invert = dict_parameters.pop("invert", True) + + if len(key) not in key_size: + raise ValueError("Incorrect EKSBlowfish key length (%d bytes)" % len(key)) + + start_operation = _raw_blowfish_lib.EKSBlowfish_start_operation + stop_operation = _raw_blowfish_lib.EKSBlowfish_stop_operation + + void_p = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + c_uint8_ptr(salt), + c_size_t(len(salt)), + c_uint(cost), + c_uint(int(invert)), + void_p.address_of()) + if result: + raise ValueError("Error %X while instantiating the EKSBlowfish cipher" + % result) + return SmartPointer(void_p.get(), stop_operation) + + +def new(key, mode, salt, cost, invert): + """Create a new EKSBlowfish cipher + + Args: + + key (bytes, bytearray, memoryview): + The secret key to use in the symmetric cipher. + Its length can vary from 0 to 72 bytes. + + mode (one of the supported ``MODE_*`` constants): + The chaining mode to use for encryption or decryption. + + salt (bytes, bytearray, memoryview): + The salt that bcrypt uses to thwart rainbow table attacks + + cost (integer): + The complexity factor in bcrypt + + invert (bool): + If ``False``, in the inner loop use ``ExpandKey`` first over the salt + and then over the key, as defined in + the `original bcrypt specification `_. + If ``True``, reverse the order, as in the first implementation of + `bcrypt` in OpenBSD. 
+
+    :Return: an EKSBlowfish object
+    """
+
+    kwargs = { 'salt':salt, 'cost':cost, 'invert':invert }
+    return _create_cipher(sys.modules[__name__], key, mode, **kwargs)
+
+
+MODE_ECB = 1
+
+# Size of a data block (in bytes)
+block_size = 8
+# Size of a key (in bytes)
+key_size = range(0, 72 + 1)
diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_EKSBlowfish.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_EKSBlowfish.pyi
new file mode 100644
index 00000000..95db3794
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_EKSBlowfish.pyi
@@ -0,0 +1,16 @@
+from typing import Union, Iterable
+
+from Crypto.Cipher._mode_ecb import EcbMode
+
+MODE_ECB: int
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+def new(key: Buffer,
+        mode: int,
+        salt: Buffer,
+        cost: int,
+        invert: bool) -> EcbMode: ...
+
+block_size: int
+key_size: Iterable[int]
diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_Salsa20.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_Salsa20.abi3.so
new file mode 100644
index 00000000..2b97f5d5
Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_Salsa20.abi3.so differ
diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/__init__.py b/env/lib/python3.8/site-packages/Crypto/Cipher/__init__.py
new file mode 100644
index 00000000..ba2d4855
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Cipher/__init__.py
@@ -0,0 +1,79 @@
+#
+# A block cipher is instantiated as a combination of:
+# 1. A base cipher (such as AES)
+# 2. A mode of operation (such as CBC)
+#
+# Both items are implemented as C modules.
+#
+# The API of #1 is (replace "AES" with the name of the actual cipher):
+# - AES_start_operation(key) --> base_cipher_state
+# - AES_encrypt(base_cipher_state, in, out, length)
+# - AES_decrypt(base_cipher_state, in, out, length)
+# - AES_stop_operation(base_cipher_state)
+#
+# Where base_cipher_state is AES_State, a struct with BlockBase (set of
+# pointers to encrypt/decrypt/stop) followed by cipher-specific data.
+#
+# The API of #2 is (replace "CBC" with the name of the actual mode):
+# - CBC_start_operation(base_cipher_state) --> mode_state
+# - CBC_encrypt(mode_state, in, out, length)
+# - CBC_decrypt(mode_state, in, out, length)
+# - CBC_stop_operation(mode_state)
+#
+# where mode_state is a pointer to base_cipher_state plus mode-specific data.
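+#
+# As an illustration (not part of the original comment): a call such as
+# AES.new(key, AES.MODE_CBC, iv) forwards to
+# _create_cipher(Crypto.Cipher.AES, key, MODE_CBC, iv) below, which stores
+# the positional iv under kwargs["IV"] and hands everything to
+# _create_cbc_cipher, pairing the AES base cipher with the CBC mode wrapper.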
+ +import os + +from Crypto.Cipher._mode_ecb import _create_ecb_cipher +from Crypto.Cipher._mode_cbc import _create_cbc_cipher +from Crypto.Cipher._mode_cfb import _create_cfb_cipher +from Crypto.Cipher._mode_ofb import _create_ofb_cipher +from Crypto.Cipher._mode_ctr import _create_ctr_cipher +from Crypto.Cipher._mode_openpgp import _create_openpgp_cipher +from Crypto.Cipher._mode_ccm import _create_ccm_cipher +from Crypto.Cipher._mode_eax import _create_eax_cipher +from Crypto.Cipher._mode_siv import _create_siv_cipher +from Crypto.Cipher._mode_gcm import _create_gcm_cipher +from Crypto.Cipher._mode_ocb import _create_ocb_cipher + +_modes = { 1:_create_ecb_cipher, + 2:_create_cbc_cipher, + 3:_create_cfb_cipher, + 5:_create_ofb_cipher, + 6:_create_ctr_cipher, + 7:_create_openpgp_cipher, + 9:_create_eax_cipher + } + +_extra_modes = { 8:_create_ccm_cipher, + 10:_create_siv_cipher, + 11:_create_gcm_cipher, + 12:_create_ocb_cipher + } + +def _create_cipher(factory, key, mode, *args, **kwargs): + + kwargs["key"] = key + + modes = dict(_modes) + if kwargs.pop("add_aes_modes", False): + modes.update(_extra_modes) + if not mode in modes: + raise ValueError("Mode not supported") + + if args: + if mode in (8, 9, 10, 11, 12): + if len(args) > 1: + raise TypeError("Too many arguments for this mode") + kwargs["nonce"] = args[0] + elif mode in (2, 3, 5, 7): + if len(args) > 1: + raise TypeError("Too many arguments for this mode") + kwargs["IV"] = args[0] + elif mode == 6: + if len(args) > 0: + raise TypeError("Too many arguments for this mode") + elif mode == 1: + raise TypeError("IV is not meaningful for the ECB mode") + + return modes[mode](factory, **kwargs) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/__init__.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/__init__.pyi new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_chacha20.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_chacha20.abi3.so new file mode 100644 index 00000000..1549d638 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_chacha20.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cbc.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cbc.py new file mode 100644 index 00000000..79c871ac --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cbc.py @@ -0,0 +1,293 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Ciphertext Block Chaining (CBC) mode. +""" + +__all__ = ['CbcMode'] + +from Crypto.Util.py3compat import _copy_bytes +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, + create_string_buffer, get_raw_buffer, + SmartPointer, c_size_t, c_uint8_ptr, + is_writeable_buffer) + +from Crypto.Random import get_random_bytes + +raw_cbc_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_cbc", """ + int CBC_start_operation(void *cipher, + const uint8_t iv[], + size_t iv_len, + void **pResult); + int CBC_encrypt(void *cbcState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CBC_decrypt(void *cbcState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CBC_stop_operation(void *state); + """ + ) + + +class CbcMode(object): + """*Cipher-Block Chaining (CBC)*. + + Each of the ciphertext blocks depends on the current + and all previous plaintext blocks. + + An Initialization Vector (*IV*) is required. + + See `NIST SP800-38A`_ , Section 6.2 . + + .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf + + :undocumented: __init__ + """ + + def __init__(self, block_cipher, iv): + """Create a new block cipher, configured in CBC mode. + + :Parameters: + block_cipher : C pointer + A smart pointer to the low-level block cipher instance. + + iv : bytes/bytearray/memoryview + The initialization vector to use for encryption or decryption. + It is as long as the cipher block. + + **The IV must be unpredictable**. Ideally it is picked randomly. + + Reusing the *IV* for encryptions performed with the same key + compromises confidentiality. + """ + + self._state = VoidPointer() + result = raw_cbc_lib.CBC_start_operation(block_cipher.get(), + c_uint8_ptr(iv), + c_size_t(len(iv)), + self._state.address_of()) + if result: + raise ValueError("Error %d while instantiating the CBC mode" + % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the cipher mode + self._state = SmartPointer(self._state.get(), + raw_cbc_lib.CBC_stop_operation) + + # Memory allocated for the underlying block cipher is now owed + # by the cipher mode + block_cipher.release() + + self.block_size = len(iv) + """The block size of the underlying cipher, in bytes.""" + + self.iv = _copy_bytes(None, None, iv) + """The Initialization Vector originally used to create the object. + The value does not change.""" + + self.IV = self.iv + """Alias for `iv`""" + + self._next = [ self.encrypt, self.decrypt ] + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. 
+ + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + That also means that you cannot reuse an object for encrypting + or decrypting other data with the same key. + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + Its lenght must be multiple of the cipher block size. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if self.encrypt not in self._next: + raise TypeError("encrypt() cannot be called after decrypt()") + self._next = [ self.encrypt ] + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_cbc_lib.CBC_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + if result == 3: + raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size) + raise ValueError("Error %d while encrypting in CBC mode" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + Its length must be multiple of the cipher block size. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if self.decrypt not in self._next: + raise TypeError("decrypt() cannot be called after encrypt()") + self._next = [ self.decrypt ] + + if output is None: + plaintext = create_string_buffer(len(ciphertext)) + else: + plaintext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(ciphertext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_cbc_lib.CBC_decrypt(self._state.get(), + c_uint8_ptr(ciphertext), + c_uint8_ptr(plaintext), + c_size_t(len(ciphertext))) + if result: + if result == 3: + raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size) + raise ValueError("Error %d while decrypting in CBC mode" % result) + + if output is None: + return get_raw_buffer(plaintext) + else: + return None + + +def _create_cbc_cipher(factory, **kwargs): + """Instantiate a cipher object that performs CBC encryption/decryption. 
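+
+    This helper is normally reached through a cipher module's public
+    ``new()``. A sketch of the equivalent call (not from the original
+    docstring; the key is generated here only for illustration):
+
+    >>> from Crypto.Cipher import AES
+    >>> from Crypto.Random import get_random_bytes
+    >>> cipher = AES.new(get_random_bytes(16), AES.MODE_CBC)  # random IV, readable via cipher.iv
+    >>> ct = cipher.encrypt(b'sixteen byte msg')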
+ + :Parameters: + factory : module + The underlying block cipher, a module from ``Crypto.Cipher``. + + :Keywords: + iv : bytes/bytearray/memoryview + The IV to use for CBC. + + IV : bytes/bytearray/memoryview + Alias for ``iv``. + + Any other keyword will be passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present). + """ + + cipher_state = factory._create_base_cipher(kwargs) + iv = kwargs.pop("IV", None) + IV = kwargs.pop("iv", None) + + if (None, None) == (iv, IV): + iv = get_random_bytes(factory.block_size) + if iv is not None: + if IV is not None: + raise TypeError("You must either use 'iv' or 'IV', not both") + else: + iv = IV + + if len(iv) != factory.block_size: + raise ValueError("Incorrect IV length (it must be %d bytes long)" % + factory.block_size) + + if kwargs: + raise TypeError("Unknown parameters for CBC: %s" % str(kwargs)) + + return CbcMode(cipher_state, iv) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cbc.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cbc.pyi new file mode 100644 index 00000000..8b9fb16d --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cbc.pyi @@ -0,0 +1,25 @@ +from typing import Union, overload + +from Crypto.Util._raw_api import SmartPointer + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['CbcMode'] + +class CbcMode(object): + block_size: int + iv: Buffer + IV: Buffer + + def __init__(self, + block_cipher: SmartPointer, + iv: Buffer) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ccm.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ccm.py new file mode 100644 index 00000000..64077de4 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ccm.py @@ -0,0 +1,650 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Counter with CBC-MAC (CCM) mode. +""" + +__all__ = ['CcmMode'] + +import struct +from binascii import unhexlify + +from Crypto.Util.py3compat import (byte_string, bord, + _copy_bytes) +from Crypto.Util._raw_api import is_writeable_buffer + +from Crypto.Util.strxor import strxor +from Crypto.Util.number import long_to_bytes + +from Crypto.Hash import BLAKE2s +from Crypto.Random import get_random_bytes + + +def enum(**enums): + return type('Enum', (), enums) + +MacStatus = enum(NOT_STARTED=0, PROCESSING_AUTH_DATA=1, PROCESSING_PLAINTEXT=2) + + +class CcmMode(object): + """Counter with CBC-MAC (CCM). + + This is an Authenticated Encryption with Associated Data (`AEAD`_) mode. + It provides both confidentiality and authenticity. + + The header of the message may be left in the clear, if needed, and it will + still be subject to authentication. The decryption step tells the receiver + if the message comes from a source that really knowns the secret key. + Additionally, decryption detects if any part of the message - including the + header - has been modified or corrupted. + + This mode requires a nonce. The nonce shall never repeat for two + different messages encrypted with the same key, but it does not need + to be random. + Note that there is a trade-off between the size of the nonce and the + maximum size of a single message you can encrypt. + + It is important to use a large nonce if the key is reused across several + messages and the nonce is chosen randomly. + + It is acceptable to us a short nonce if the key is only used a few times or + if the nonce is taken from a counter. + + The following table shows the trade-off when the nonce is chosen at + random. The column on the left shows how many messages it takes + for the keystream to repeat **on average**. In practice, you will want to + stop using the key way before that. + + +--------------------+---------------+-------------------+ + | Avg. # of messages | nonce | Max. message | + | before keystream | size | size | + | repeats | (bytes) | (bytes) | + +====================+===============+===================+ + | 2^52 | 13 | 64K | + +--------------------+---------------+-------------------+ + | 2^48 | 12 | 16M | + +--------------------+---------------+-------------------+ + | 2^44 | 11 | 4G | + +--------------------+---------------+-------------------+ + | 2^40 | 10 | 1T | + +--------------------+---------------+-------------------+ + | 2^36 | 9 | 64P | + +--------------------+---------------+-------------------+ + | 2^32 | 8 | 16E | + +--------------------+---------------+-------------------+ + + This mode is only available for ciphers that operate on 128 bits blocks + (e.g. AES but not TDES). + + See `NIST SP800-38C`_ or RFC3610_. + + .. _`NIST SP800-38C`: http://csrc.nist.gov/publications/nistpubs/800-38C/SP800-38C.pdf + .. _RFC3610: https://tools.ietf.org/html/rfc3610 + .. 
_AEAD: http://blog.cryptographyengineering.com/2012/05/how-to-choose-authenticated-encryption.html + + :undocumented: __init__ + """ + + def __init__(self, factory, key, nonce, mac_len, msg_len, assoc_len, + cipher_params): + + self.block_size = factory.block_size + """The block size of the underlying cipher, in bytes.""" + + self.nonce = _copy_bytes(None, None, nonce) + """The nonce used for this cipher instance""" + + self._factory = factory + self._key = _copy_bytes(None, None, key) + self._mac_len = mac_len + self._msg_len = msg_len + self._assoc_len = assoc_len + self._cipher_params = cipher_params + + self._mac_tag = None # Cache for MAC tag + + if self.block_size != 16: + raise ValueError("CCM mode is only available for ciphers" + " that operate on 128 bits blocks") + + # MAC tag length (Tlen) + if mac_len not in (4, 6, 8, 10, 12, 14, 16): + raise ValueError("Parameter 'mac_len' must be even" + " and in the range 4..16 (not %d)" % mac_len) + + # Nonce value + if not (nonce and 7 <= len(nonce) <= 13): + raise ValueError("Length of parameter 'nonce' must be" + " in the range 7..13 bytes") + + # Create MAC object (the tag will be the last block + # bytes worth of ciphertext) + self._mac = self._factory.new(key, + factory.MODE_CBC, + iv=b'\x00' * 16, + **cipher_params) + self._mac_status = MacStatus.NOT_STARTED + self._t = None + + # Allowed transitions after initialization + self._next = [self.update, self.encrypt, self.decrypt, + self.digest, self.verify] + + # Cumulative lengths + self._cumul_assoc_len = 0 + self._cumul_msg_len = 0 + + # Cache for unaligned associated data/plaintext. + # This is a list with byte strings, but when the MAC starts, + # it will become a binary string no longer than the block size. + self._cache = [] + + # Start CTR cipher, by formatting the counter (A.3) + q = 15 - len(nonce) # length of Q, the encoded message length + self._cipher = self._factory.new(key, + self._factory.MODE_CTR, + nonce=struct.pack("B", q - 1) + self.nonce, + **cipher_params) + + # S_0, step 6 in 6.1 for j=0 + self._s_0 = self._cipher.encrypt(b'\x00' * 16) + + # Try to start the MAC + if None not in (assoc_len, msg_len): + self._start_mac() + + def _start_mac(self): + + assert(self._mac_status == MacStatus.NOT_STARTED) + assert(None not in (self._assoc_len, self._msg_len)) + assert(isinstance(self._cache, list)) + + # Formatting control information and nonce (A.2.1) + q = 15 - len(self.nonce) # length of Q, the encoded message length + flags = (64 * (self._assoc_len > 0) + 8 * ((self._mac_len - 2) // 2) + + (q - 1)) + b_0 = struct.pack("B", flags) + self.nonce + long_to_bytes(self._msg_len, q) + + # Formatting associated data (A.2.2) + # Encoded 'a' is concatenated with the associated data 'A' + assoc_len_encoded = b'' + if self._assoc_len > 0: + if self._assoc_len < (2 ** 16 - 2 ** 8): + enc_size = 2 + elif self._assoc_len < (2 ** 32): + assoc_len_encoded = b'\xFF\xFE' + enc_size = 4 + else: + assoc_len_encoded = b'\xFF\xFF' + enc_size = 8 + assoc_len_encoded += long_to_bytes(self._assoc_len, enc_size) + + # b_0 and assoc_len_encoded must be processed first + self._cache.insert(0, b_0) + self._cache.insert(1, assoc_len_encoded) + + # Process all the data cached so far + first_data_to_mac = b"".join(self._cache) + self._cache = b"" + self._mac_status = MacStatus.PROCESSING_AUTH_DATA + self._update(first_data_to_mac) + + def _pad_cache_and_update(self): + + assert(self._mac_status != MacStatus.NOT_STARTED) + assert(len(self._cache) < self.block_size) + + # Associated data is 
concatenated with the least number + # of zero bytes (possibly none) to reach alignment to + # the 16 byte boundary (A.2.3) + len_cache = len(self._cache) + if len_cache > 0: + self._update(b'\x00' * (self.block_size - len_cache)) + + def update(self, assoc_data): + """Protect associated data + + If there is any associated data, the caller has to invoke + this function one or more times, before using + ``decrypt`` or ``encrypt``. + + By *associated data* it is meant any data (e.g. packet headers) that + will not be encrypted and will be transmitted in the clear. + However, the receiver is still able to detect any modification to it. + In CCM, the *associated data* is also called + *additional authenticated data* (AAD). + + If there is no associated data, this method must not be called. + + The caller may split associated data in segments of any size, and + invoke this method multiple times, each time with the next segment. + + :Parameters: + assoc_data : bytes/bytearray/memoryview + A piece of associated data. There are no restrictions on its size. + """ + + if self.update not in self._next: + raise TypeError("update() can only be called" + " immediately after initialization") + + self._next = [self.update, self.encrypt, self.decrypt, + self.digest, self.verify] + + self._cumul_assoc_len += len(assoc_data) + if self._assoc_len is not None and \ + self._cumul_assoc_len > self._assoc_len: + raise ValueError("Associated data is too long") + + self._update(assoc_data) + return self + + def _update(self, assoc_data_pt=b""): + """Update the MAC with associated data or plaintext + (without FSM checks)""" + + # If MAC has not started yet, we just park the data into a list. + # If the data is mutable, we create a copy and store that instead. + if self._mac_status == MacStatus.NOT_STARTED: + if is_writeable_buffer(assoc_data_pt): + assoc_data_pt = _copy_bytes(None, None, assoc_data_pt) + self._cache.append(assoc_data_pt) + return + + assert(len(self._cache) < self.block_size) + + if len(self._cache) > 0: + filler = min(self.block_size - len(self._cache), + len(assoc_data_pt)) + self._cache += _copy_bytes(None, filler, assoc_data_pt) + assoc_data_pt = _copy_bytes(filler, None, assoc_data_pt) + + if len(self._cache) < self.block_size: + return + + # The cache is exactly one block + self._t = self._mac.encrypt(self._cache) + self._cache = b"" + + update_len = len(assoc_data_pt) // self.block_size * self.block_size + self._cache = _copy_bytes(update_len, None, assoc_data_pt) + if update_len > 0: + self._t = self._mac.encrypt(assoc_data_pt[:update_len])[-16:] + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + This method can be called only **once** if ``msg_len`` was + not passed at initialization. + + If ``msg_len`` was given, the data to encrypt can be broken + up in two or more pieces and `encrypt` can be called + multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. 
+ :Return: + If ``output`` is ``None``, the ciphertext as ``bytes``. + Otherwise, ``None``. + """ + + if self.encrypt not in self._next: + raise TypeError("encrypt() can only be called after" + " initialization or an update()") + self._next = [self.encrypt, self.digest] + + # No more associated data allowed from now + if self._assoc_len is None: + assert(isinstance(self._cache, list)) + self._assoc_len = sum([len(x) for x in self._cache]) + if self._msg_len is not None: + self._start_mac() + else: + if self._cumul_assoc_len < self._assoc_len: + raise ValueError("Associated data is too short") + + # Only once piece of plaintext accepted if message length was + # not declared in advance + if self._msg_len is None: + self._msg_len = len(plaintext) + self._start_mac() + self._next = [self.digest] + + self._cumul_msg_len += len(plaintext) + if self._cumul_msg_len > self._msg_len: + raise ValueError("Message is too long") + + if self._mac_status == MacStatus.PROCESSING_AUTH_DATA: + # Associated data is concatenated with the least number + # of zero bytes (possibly none) to reach alignment to + # the 16 byte boundary (A.2.3) + self._pad_cache_and_update() + self._mac_status = MacStatus.PROCESSING_PLAINTEXT + + self._update(plaintext) + return self._cipher.encrypt(plaintext, output=output) + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + This method can be called only **once** if ``msg_len`` was + not passed at initialization. + + If ``msg_len`` was given, the data to decrypt can be + broken up in two or more pieces and `decrypt` can be + called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext as ``bytes``. + Otherwise, ``None``. 
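+
+        A complete CCM round trip, as a sketch (not from the original
+        docstring; AES is used because CCM requires a 128-bit block cipher,
+        and the auto-generated nonce is reused on the decrypting side):
+
+        >>> from Crypto.Cipher import AES
+        >>> from Crypto.Random import get_random_bytes
+        >>> key = get_random_bytes(16)
+        >>> enc = AES.new(key, AES.MODE_CCM)
+        >>> ct, tag = enc.encrypt_and_digest(b'payload')
+        >>> dec = AES.new(key, AES.MODE_CCM, nonce=enc.nonce)
+        >>> dec.decrypt_and_verify(ct, tag)
+        b'payload'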
+ """ + + if self.decrypt not in self._next: + raise TypeError("decrypt() can only be called" + " after initialization or an update()") + self._next = [self.decrypt, self.verify] + + # No more associated data allowed from now + if self._assoc_len is None: + assert(isinstance(self._cache, list)) + self._assoc_len = sum([len(x) for x in self._cache]) + if self._msg_len is not None: + self._start_mac() + else: + if self._cumul_assoc_len < self._assoc_len: + raise ValueError("Associated data is too short") + + # Only once piece of ciphertext accepted if message length was + # not declared in advance + if self._msg_len is None: + self._msg_len = len(ciphertext) + self._start_mac() + self._next = [self.verify] + + self._cumul_msg_len += len(ciphertext) + if self._cumul_msg_len > self._msg_len: + raise ValueError("Message is too long") + + if self._mac_status == MacStatus.PROCESSING_AUTH_DATA: + # Associated data is concatenated with the least number + # of zero bytes (possibly none) to reach alignment to + # the 16 byte boundary (A.2.3) + self._pad_cache_and_update() + self._mac_status = MacStatus.PROCESSING_PLAINTEXT + + # Encrypt is equivalent to decrypt with the CTR mode + plaintext = self._cipher.encrypt(ciphertext, output=output) + if output is None: + self._update(plaintext) + else: + self._update(output) + return plaintext + + def digest(self): + """Compute the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method returns the MAC that shall be sent to the receiver, + together with the ciphertext. + + :Return: the MAC, as a byte string. + """ + + if self.digest not in self._next: + raise TypeError("digest() cannot be called when decrypting" + " or validating a message") + self._next = [self.digest] + return self._digest() + + def _digest(self): + if self._mac_tag: + return self._mac_tag + + if self._assoc_len is None: + assert(isinstance(self._cache, list)) + self._assoc_len = sum([len(x) for x in self._cache]) + if self._msg_len is not None: + self._start_mac() + else: + if self._cumul_assoc_len < self._assoc_len: + raise ValueError("Associated data is too short") + + if self._msg_len is None: + self._msg_len = 0 + self._start_mac() + + if self._cumul_msg_len != self._msg_len: + raise ValueError("Message is too short") + + # Both associated data and payload are concatenated with the least + # number of zero bytes (possibly none) that align it to the + # 16 byte boundary (A.2.2 and A.2.3) + self._pad_cache_and_update() + + # Step 8 in 6.1 (T xor MSB_Tlen(S_0)) + self._mac_tag = strxor(self._t, self._s_0)[:self._mac_len] + + return self._mac_tag + + def hexdigest(self): + """Compute the *printable* MAC tag. + + This method is like `digest`. + + :Return: the MAC, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method checks if the decrypted message is indeed valid + (that is, if the key is correct) and it has not been + tampered with while in transit. + + :Parameters: + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. 
+ """ + + if self.verify not in self._next: + raise TypeError("verify() cannot be called" + " when encrypting a message") + self._next = [self.verify] + + self._digest() + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=self._mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* MAC tag. + + This method is like `verify`. + + :Parameters: + hex_mac_tag : string + This is the *printable* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext, output=None): + """Perform encrypt() and digest() in one step. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + a tuple with two items: + + - the ciphertext, as ``bytes`` + - the MAC tag, as ``bytes`` + + The first item becomes ``None`` when the ``output`` parameter + specified a location for the result. + """ + + return self.encrypt(plaintext, output=output), self.digest() + + def decrypt_and_verify(self, ciphertext, received_mac_tag, output=None): + """Perform decrypt() and verify() in one step. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: the plaintext as ``bytes`` or ``None`` when the ``output`` + parameter specified a location for the result. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + plaintext = self.decrypt(ciphertext, output=output) + self.verify(received_mac_tag) + return plaintext + + +def _create_ccm_cipher(factory, **kwargs): + """Create a new block cipher, configured in CCM mode. + + :Parameters: + factory : module + A symmetric cipher module from `Crypto.Cipher` (like + `Crypto.Cipher.AES`). + + :Keywords: + key : bytes/bytearray/memoryview + The secret key to use in the symmetric cipher. + + nonce : bytes/bytearray/memoryview + A value that must never be reused for any other encryption. + + Its length must be in the range ``[7..13]``. + 11 or 12 bytes are reasonable values in general. Bear in + mind that with CCM there is a trade-off between nonce length and + maximum message size. + + If not specified, a 11 byte long random string is used. + + mac_len : integer + Length of the MAC, in bytes. It must be even and in + the range ``[4..16]``. The default is 16. + + msg_len : integer + Length of the message to (de)cipher. + If not specified, ``encrypt`` or ``decrypt`` may only be called once. + + assoc_len : integer + Length of the associated data. + If not specified, all data is internally buffered. 
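+
+    Example (an illustrative sketch only; assumes the ``AES`` factory).
+    Declaring ``msg_len`` and ``assoc_len`` up front allows the data to
+    be fed in several pieces:
+
+    >>> from Crypto.Cipher import AES
+    >>> from Crypto.Random import get_random_bytes
+    >>> c = AES.new(get_random_bytes(16), AES.MODE_CCM,
+    ...             nonce=get_random_bytes(11), msg_len=8, assoc_len=4)
+    >>> _ = c.update(b'head')                         # 4 bytes, as declared
+    >>> ct = c.encrypt(b'part') + c.encrypt(b'two!')  # 8 bytes in total
+    >>> tag = c.digest()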
+    """
+
+    try:
+        key = kwargs.pop("key")
+    except KeyError as e:
+        raise TypeError("Missing parameter: " + str(e))
+
+    nonce = kwargs.pop("nonce", None)  # N
+    if nonce is None:
+        nonce = get_random_bytes(11)
+    mac_len = kwargs.pop("mac_len", factory.block_size)
+    msg_len = kwargs.pop("msg_len", None)      # p
+    assoc_len = kwargs.pop("assoc_len", None)  # a
+    cipher_params = dict(kwargs)
+
+    return CcmMode(factory, key, nonce, mac_len, msg_len,
+                   assoc_len, cipher_params)
diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ccm.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ccm.pyi
new file mode 100644
index 00000000..4b9f6200
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ccm.pyi
@@ -0,0 +1,47 @@
+from types import ModuleType
+from typing import Union, overload, Dict, Tuple, Optional
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+__all__ = ['CcmMode']
+
+class CcmMode(object):
+    block_size: int
+    nonce: bytes
+
+    def __init__(self,
+                 factory: ModuleType,
+                 key: Buffer,
+                 nonce: Buffer,
+                 mac_len: int,
+                 msg_len: int,
+                 assoc_len: int,
+                 cipher_params: Dict) -> None: ...
+
+    def update(self, assoc_data: Buffer) -> CcmMode: ...
+
+    @overload
+    def encrypt(self, plaintext: Buffer) -> bytes: ...
+    @overload
+    def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ...
+    @overload
+    def decrypt(self, plaintext: Buffer) -> bytes: ...
+    @overload
+    def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ...
+
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def verify(self, received_mac_tag: Buffer) -> None: ...
+    def hexverify(self, hex_mac_tag: str) -> None: ...
+
+    @overload
+    def encrypt_and_digest(self,
+                           plaintext: Buffer) -> Tuple[bytes, bytes]: ...
+    @overload
+    def encrypt_and_digest(self,
+                           plaintext: Buffer,
+                           output: Buffer) -> Tuple[None, bytes]: ...
+    def decrypt_and_verify(self,
+                           ciphertext: Buffer,
+                           received_mac_tag: Buffer,
+                           output: Optional[Union[bytearray, memoryview]] = ...) -> bytes: ...
diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cfb.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cfb.py
new file mode 100644
index 00000000..b3ee1c74
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cfb.py
@@ -0,0 +1,293 @@
+# -*- coding: utf-8 -*-
+#
+# Cipher/mode_cfb.py : CFB mode
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""
+Cipher Feedback (CFB) mode.
+"""
+
+__all__ = ['CfbMode']
+
+from Crypto.Util.py3compat import _copy_bytes
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer,
+                                  create_string_buffer, get_raw_buffer,
+                                  SmartPointer, c_size_t, c_uint8_ptr,
+                                  is_writeable_buffer)
+
+from Crypto.Random import get_random_bytes
+
+raw_cfb_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_cfb","""
+                    int CFB_start_operation(void *cipher,
+                                            const uint8_t iv[],
+                                            size_t iv_len,
+                                            size_t segment_len, /* In bytes */
+                                            void **pResult);
+                    int CFB_encrypt(void *cfbState,
+                                    const uint8_t *in,
+                                    uint8_t *out,
+                                    size_t data_len);
+                    int CFB_decrypt(void *cfbState,
+                                    const uint8_t *in,
+                                    uint8_t *out,
+                                    size_t data_len);
+                    int CFB_stop_operation(void *state);"""
+                    )
+
+
+class CfbMode(object):
+    """*Cipher FeedBack (CFB)*.
+
+    This mode is similar to CBC, but it transforms
+    the underlying block cipher into a stream cipher.
+
+    Plaintext and ciphertext are processed in *segments*
+    of **s** bits. The mode is therefore sometimes
+    labelled **s**-bit CFB.
+
+    An Initialization Vector (*IV*) is required.
+
+    See `NIST SP800-38A`_ , Section 6.3.
+
+    .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
+
+    :undocumented: __init__
+    """
+
+    def __init__(self, block_cipher, iv, segment_size):
+        """Create a new block cipher, configured in CFB mode.
+
+        :Parameters:
+          block_cipher : C pointer
+            A smart pointer to the low-level block cipher instance.
+
+          iv : bytes/bytearray/memoryview
+            The initialization vector to use for encryption or decryption.
+            It is as long as the cipher block.
+
+            **The IV must be unpredictable**. Ideally it is picked randomly.
+
+            Reusing the *IV* for encryptions performed with the same key
+            compromises confidentiality.
+
+          segment_size : integer
+            The number of bytes the plaintext and ciphertext are segmented in.
+        """
+
+        self._state = VoidPointer()
+        result = raw_cfb_lib.CFB_start_operation(block_cipher.get(),
+                                                 c_uint8_ptr(iv),
+                                                 c_size_t(len(iv)),
+                                                 c_size_t(segment_size),
+                                                 self._state.address_of())
+        if result:
+            raise ValueError("Error %d while instantiating the CFB mode" % result)
+
+        # Ensure that object disposal of this Python object will (eventually)
+        # free the memory allocated by the raw library for the cipher mode
+        self._state = SmartPointer(self._state.get(),
+                                   raw_cfb_lib.CFB_stop_operation)
+
+        # Memory allocated for the underlying block cipher is now owned
+        # by the cipher mode
+        block_cipher.release()
+
+        self.block_size = len(iv)
+        """The block size of the underlying cipher, in bytes."""
+
+        self.iv = _copy_bytes(None, None, iv)
+        """The Initialization Vector originally used to create the object.
+           The value does not change."""
+
+        self.IV = self.iv
+        """Alias for `iv`"""
+
+        self._next = [ self.encrypt, self.decrypt ]
+
+    def encrypt(self, plaintext, output=None):
+        """Encrypt data with the key and the parameters set at initialization.
+
+        A cipher object is stateful: once you have encrypted a message
+        you cannot encrypt (or decrypt) another message using the same
+        object.
+
+        The data to encrypt can be broken up in two or
+        more pieces and `encrypt` can be called multiple times.
+
+        That is, the statement:
+
+            >>> c.encrypt(a) + c.encrypt(b)
+
+        is equivalent to:
+
+            >>> c.encrypt(a+b)
+
+        This function does not add any padding to the plaintext.
+
+        :Parameters:
+          plaintext : bytes/bytearray/memoryview
+            The piece of data to encrypt.
+            It can be of any length.
+ :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if self.encrypt not in self._next: + raise TypeError("encrypt() cannot be called after decrypt()") + self._next = [ self.encrypt ] + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_cfb_lib.CFB_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + raise ValueError("Error %d while encrypting in CFB mode" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if self.decrypt not in self._next: + raise TypeError("decrypt() cannot be called after encrypt()") + self._next = [ self.decrypt ] + + if output is None: + plaintext = create_string_buffer(len(ciphertext)) + else: + plaintext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(ciphertext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_cfb_lib.CFB_decrypt(self._state.get(), + c_uint8_ptr(ciphertext), + c_uint8_ptr(plaintext), + c_size_t(len(ciphertext))) + if result: + raise ValueError("Error %d while decrypting in CFB mode" % result) + + if output is None: + return get_raw_buffer(plaintext) + else: + return None + + +def _create_cfb_cipher(factory, **kwargs): + """Instantiate a cipher object that performs CFB encryption/decryption. + + :Parameters: + factory : module + The underlying block cipher, a module from ``Crypto.Cipher``. + + :Keywords: + iv : bytes/bytearray/memoryview + The IV to use for CFB. + + IV : bytes/bytearray/memoryview + Alias for ``iv``. + + segment_size : integer + The number of bit the plaintext and ciphertext are segmented in. + If not present, the default is 8. + + Any other keyword will be passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present). 
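+
+    Example (an illustrative sketch only; assumes the ``AES`` factory):
+
+    >>> from Crypto.Cipher import AES
+    >>> from Crypto.Random import get_random_bytes
+    >>> key = get_random_bytes(16)
+    >>> enc = AES.new(key, AES.MODE_CFB, segment_size=8)   # 8-bit CFB
+    >>> ct = enc.encrypt(b'no padding needed')
+    >>> dec = AES.new(key, AES.MODE_CFB, iv=enc.iv, segment_size=8)
+    >>> dec.decrypt(ct)
+    b'no padding needed'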
+ """ + + cipher_state = factory._create_base_cipher(kwargs) + + iv = kwargs.pop("IV", None) + IV = kwargs.pop("iv", None) + + if (None, None) == (iv, IV): + iv = get_random_bytes(factory.block_size) + if iv is not None: + if IV is not None: + raise TypeError("You must either use 'iv' or 'IV', not both") + else: + iv = IV + + if len(iv) != factory.block_size: + raise ValueError("Incorrect IV length (it must be %d bytes long)" % + factory.block_size) + + segment_size_bytes, rem = divmod(kwargs.pop("segment_size", 8), 8) + if segment_size_bytes == 0 or rem != 0: + raise ValueError("'segment_size' must be positive and multiple of 8 bits") + + if kwargs: + raise TypeError("Unknown parameters for CFB: %s" % str(kwargs)) + return CfbMode(cipher_state, iv, segment_size_bytes) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cfb.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cfb.pyi new file mode 100644 index 00000000..e13a909b --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_cfb.pyi @@ -0,0 +1,26 @@ +from typing import Union, overload + +from Crypto.Util._raw_api import SmartPointer + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['CfbMode'] + + +class CfbMode(object): + block_size: int + iv: Buffer + IV: Buffer + + def __init__(self, + block_cipher: SmartPointer, + iv: Buffer, + segment_size: int) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ctr.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ctr.py new file mode 100644 index 00000000..15c7e838 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ctr.py @@ -0,0 +1,393 @@ +# -*- coding: utf-8 -*- +# +# Cipher/mode_ctr.py : CTR mode +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +""" +Counter (CTR) mode. 
+"""
+
+__all__ = ['CtrMode']
+
+import struct
+
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer,
+                                  create_string_buffer, get_raw_buffer,
+                                  SmartPointer, c_size_t, c_uint8_ptr,
+                                  is_writeable_buffer)
+
+from Crypto.Random import get_random_bytes
+from Crypto.Util.py3compat import _copy_bytes, is_native_int
+from Crypto.Util.number import long_to_bytes
+
+raw_ctr_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_ctr", """
+                    int CTR_start_operation(void *cipher,
+                                            uint8_t initialCounterBlock[],
+                                            size_t initialCounterBlock_len,
+                                            size_t prefix_len,
+                                            unsigned counter_len,
+                                            unsigned littleEndian,
+                                            void **pResult);
+                    int CTR_encrypt(void *ctrState,
+                                    const uint8_t *in,
+                                    uint8_t *out,
+                                    size_t data_len);
+                    int CTR_decrypt(void *ctrState,
+                                    const uint8_t *in,
+                                    uint8_t *out,
+                                    size_t data_len);
+                    int CTR_stop_operation(void *ctrState);"""
+                    )
+
+
+class CtrMode(object):
+    """*CounTeR (CTR)* mode.
+
+    This mode is very similar to ECB, in that
+    encryption of one block is done independently of all other blocks.
+
+    Unlike ECB, the block *position* contributes to the encryption
+    and no information leaks about symbol frequency.
+
+    Each message block is associated with a *counter* which
+    must be unique across all messages that get encrypted
+    with the same key (not just within the same message).
+    The counter is as big as the block size.
+
+    Counters can be generated in several ways. The most
+    straightforward one is to choose an *initial counter block*
+    (which can be made public, similarly to the *IV* for the
+    other modes) and increment its lowest **m** bits by one
+    (modulo *2^m*) for each block. In most cases, **m** is
+    chosen to be half the block size.
+
+    See `NIST SP800-38A`_, Section 6.5 (for the mode) and
+    Appendix B (for how to manage the *initial counter block*).
+
+    .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
+
+    :undocumented: __init__
+    """
+
+    def __init__(self, block_cipher, initial_counter_block,
+                 prefix_len, counter_len, little_endian):
+        """Create a new block cipher, configured in CTR mode.
+
+        :Parameters:
+          block_cipher : C pointer
+            A smart pointer to the low-level block cipher instance.
+
+          initial_counter_block : bytes/bytearray/memoryview
+            The initial plaintext to use to generate the key stream.
+
+            It is as large as the cipher block, and it embeds
+            the initial value of the counter.
+
+            This value must not be reused.
+            It shall contain a nonce or a random component.
+            Reusing the *initial counter block* for encryptions
+            performed with the same key compromises confidentiality.
+
+          prefix_len : integer
+            The number of bytes at the beginning of the counter block
+            that never change.
+
+          counter_len : integer
+            The length in bytes of the counter embedded in the counter
+            block.
+
+          little_endian : boolean
+            True if the counter in the counter block is an integer encoded
+            in little endian mode. If False, it is big endian.
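+
+        Example of the resulting layout (an illustrative sketch, not
+        upstream documentation): a 4-byte fixed prefix followed by a
+        12-byte big endian counter starting at 1.
+
+        >>> from Crypto.Util.number import long_to_bytes
+        >>> prefix = b'fixd'                        # prefix_len = 4
+        >>> block = prefix + long_to_bytes(1, 12)   # counter_len = 12
+        >>> len(block)                              # one AES block
+        16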
+ """ + + if len(initial_counter_block) == prefix_len + counter_len: + self.nonce = _copy_bytes(None, prefix_len, initial_counter_block) + """Nonce; not available if there is a fixed suffix""" + + self._state = VoidPointer() + result = raw_ctr_lib.CTR_start_operation(block_cipher.get(), + c_uint8_ptr(initial_counter_block), + c_size_t(len(initial_counter_block)), + c_size_t(prefix_len), + counter_len, + little_endian, + self._state.address_of()) + if result: + raise ValueError("Error %X while instantiating the CTR mode" + % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the cipher mode + self._state = SmartPointer(self._state.get(), + raw_ctr_lib.CTR_stop_operation) + + # Memory allocated for the underlying block cipher is now owed + # by the cipher mode + block_cipher.release() + + self.block_size = len(initial_counter_block) + """The block size of the underlying cipher, in bytes.""" + + self._next = [self.encrypt, self.decrypt] + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if self.encrypt not in self._next: + raise TypeError("encrypt() cannot be called after decrypt()") + self._next = [self.encrypt] + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ctr_lib.CTR_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + if result == 0x60002: + raise OverflowError("The counter has wrapped around in" + " CTR mode") + raise ValueError("Error %X while encrypting in CTR mode" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. 
+        :Keywords:
+          output : bytearray/memoryview
+            The location where the plaintext must be written to.
+            If ``None``, the plaintext is returned.
+        :Return:
+            If ``output`` is ``None``, the plaintext is returned as ``bytes``.
+            Otherwise, ``None``.
+        """
+
+        if self.decrypt not in self._next:
+            raise TypeError("decrypt() cannot be called after encrypt()")
+        self._next = [self.decrypt]
+
+        if output is None:
+            plaintext = create_string_buffer(len(ciphertext))
+        else:
+            plaintext = output
+
+            if not is_writeable_buffer(output):
+                raise TypeError("output must be a bytearray or a writeable memoryview")
+
+            if len(ciphertext) != len(output):
+                raise ValueError("output must have the same length as the input"
+                                 " (%d bytes)" % len(plaintext))
+
+        result = raw_ctr_lib.CTR_decrypt(self._state.get(),
+                                         c_uint8_ptr(ciphertext),
+                                         c_uint8_ptr(plaintext),
+                                         c_size_t(len(ciphertext)))
+        if result:
+            if result == 0x60002:
+                raise OverflowError("The counter has wrapped around in"
+                                    " CTR mode")
+            raise ValueError("Error %X while decrypting in CTR mode" % result)
+
+        if output is None:
+            return get_raw_buffer(plaintext)
+        else:
+            return None
+
+
+def _create_ctr_cipher(factory, **kwargs):
+    """Instantiate a cipher object that performs CTR encryption/decryption.
+
+    :Parameters:
+      factory : module
+        The underlying block cipher, a module from ``Crypto.Cipher``.
+
+    :Keywords:
+      nonce : bytes/bytearray/memoryview
+        The fixed part at the beginning of the counter block - the rest is
+        the counter number that gets increased when processing the next block.
+        The nonce must be such that no two messages are encrypted under the
+        same key and the same nonce.
+
+        The nonce must be shorter than the block size (it can have
+        zero length; the counter is then as long as the block).
+
+        If this parameter is not present, a random nonce will be created with
+        length equal to half the block size. No random nonce shorter than
+        64 bits will be created though - you must really think through all
+        security consequences of using such a short block size.
+
+      initial_value : positive integer or bytes/bytearray/memoryview
+        The initial value for the counter. If not present, the cipher will
+        start counting from 0. The value is incremented by one for each block.
+        The counter number is encoded in big endian mode.
+
+      counter : object
+        Instance of ``Crypto.Util.Counter``, which allows full customization
+        of the counter block. This parameter is incompatible with both
+        ``nonce`` and ``initial_value``.
+
+    Any other keyword will be passed to the underlying block cipher.
+    See the relevant documentation for details (at least ``key`` will need
+    to be present).
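+
+    Example (an illustrative sketch only; assumes the ``AES`` factory):
+
+    >>> from Crypto.Cipher import AES
+    >>> from Crypto.Random import get_random_bytes
+    >>> key = get_random_bytes(16)
+    >>> enc = AES.new(key, AES.MODE_CTR, nonce=b'noncexyz', initial_value=5)
+    >>> ct = enc.encrypt(b'stream of any length')
+    >>> dec = AES.new(key, AES.MODE_CTR, nonce=b'noncexyz', initial_value=5)
+    >>> dec.decrypt(ct)
+    b'stream of any length'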
+ """ + + cipher_state = factory._create_base_cipher(kwargs) + + counter = kwargs.pop("counter", None) + nonce = kwargs.pop("nonce", None) + initial_value = kwargs.pop("initial_value", None) + if kwargs: + raise TypeError("Invalid parameters for CTR mode: %s" % str(kwargs)) + + if counter is not None and (nonce, initial_value) != (None, None): + raise TypeError("'counter' and 'nonce'/'initial_value'" + " are mutually exclusive") + + if counter is None: + # Crypto.Util.Counter is not used + if nonce is None: + if factory.block_size < 16: + raise TypeError("Impossible to create a safe nonce for short" + " block sizes") + nonce = get_random_bytes(factory.block_size // 2) + else: + if len(nonce) >= factory.block_size: + raise ValueError("Nonce is too long") + + # What is not nonce is counter + counter_len = factory.block_size - len(nonce) + + if initial_value is None: + initial_value = 0 + + if is_native_int(initial_value): + if (1 << (counter_len * 8)) - 1 < initial_value: + raise ValueError("Initial counter value is too large") + initial_counter_block = nonce + long_to_bytes(initial_value, counter_len) + else: + if len(initial_value) != counter_len: + raise ValueError("Incorrect length for counter byte string (%d bytes, expected %d)" % + (len(initial_value), counter_len)) + initial_counter_block = nonce + initial_value + + return CtrMode(cipher_state, + initial_counter_block, + len(nonce), # prefix + counter_len, + False) # little_endian + + # Crypto.Util.Counter is used + + # 'counter' used to be a callable object, but now it is + # just a dictionary for backward compatibility. + _counter = dict(counter) + try: + counter_len = _counter.pop("counter_len") + prefix = _counter.pop("prefix") + suffix = _counter.pop("suffix") + initial_value = _counter.pop("initial_value") + little_endian = _counter.pop("little_endian") + except KeyError: + raise TypeError("Incorrect counter object" + " (use Crypto.Util.Counter.new)") + + # Compute initial counter block + words = [] + while initial_value > 0: + words.append(struct.pack('B', initial_value & 255)) + initial_value >>= 8 + words += [b'\x00'] * max(0, counter_len - len(words)) + if not little_endian: + words.reverse() + initial_counter_block = prefix + b"".join(words) + suffix + + if len(initial_counter_block) != factory.block_size: + raise ValueError("Size of the counter block (%d bytes) must match" + " block size (%d)" % (len(initial_counter_block), + factory.block_size)) + + return CtrMode(cipher_state, initial_counter_block, + len(prefix), counter_len, little_endian) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ctr.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ctr.pyi new file mode 100644 index 00000000..ce708557 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ctr.pyi @@ -0,0 +1,27 @@ +from typing import Union, overload + +from Crypto.Util._raw_api import SmartPointer + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['CtrMode'] + +class CtrMode(object): + block_size: int + nonce: bytes + + def __init__(self, + block_cipher: SmartPointer, + initial_counter_block: Buffer, + prefix_len: int, + counter_len: int, + little_endian: bool) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... 
+ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_eax.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_eax.py new file mode 100644 index 00000000..d5fb1351 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_eax.py @@ -0,0 +1,408 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +EAX mode. +""" + +__all__ = ['EaxMode'] + +import struct +from binascii import unhexlify + +from Crypto.Util.py3compat import byte_string, bord, _copy_bytes + +from Crypto.Util._raw_api import is_buffer + +from Crypto.Util.strxor import strxor +from Crypto.Util.number import long_to_bytes, bytes_to_long + +from Crypto.Hash import CMAC, BLAKE2s +from Crypto.Random import get_random_bytes + + +class EaxMode(object): + """*EAX* mode. + + This is an Authenticated Encryption with Associated Data + (`AEAD`_) mode. It provides both confidentiality and authenticity. + + The header of the message may be left in the clear, if needed, + and it will still be subject to authentication. + + The decryption step tells the receiver if the message comes + from a source that really knowns the secret key. + Additionally, decryption detects if any part of the message - + including the header - has been modified or corrupted. + + This mode requires a *nonce*. + + This mode is only available for ciphers that operate on 64 or + 128 bits blocks. + + There are no official standards defining EAX. + The implementation is based on `a proposal`__ that + was presented to NIST. + + .. _AEAD: http://blog.cryptographyengineering.com/2012/05/how-to-choose-authenticated-encryption.html + .. 
__: http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/eax/eax-spec.pdf + + :undocumented: __init__ + """ + + def __init__(self, factory, key, nonce, mac_len, cipher_params): + """EAX cipher mode""" + + self.block_size = factory.block_size + """The block size of the underlying cipher, in bytes.""" + + self.nonce = _copy_bytes(None, None, nonce) + """The nonce originally used to create the object.""" + + self._mac_len = mac_len + self._mac_tag = None # Cache for MAC tag + + # Allowed transitions after initialization + self._next = [self.update, self.encrypt, self.decrypt, + self.digest, self.verify] + + # MAC tag length + if not (4 <= self._mac_len <= self.block_size): + raise ValueError("Parameter 'mac_len' must not be larger than %d" + % self.block_size) + + # Nonce cannot be empty and must be a byte string + if len(self.nonce) == 0: + raise ValueError("Nonce cannot be empty in EAX mode") + if not is_buffer(nonce): + raise TypeError("nonce must be bytes, bytearray or memoryview") + + self._omac = [ + CMAC.new(key, + b'\x00' * (self.block_size - 1) + struct.pack('B', i), + ciphermod=factory, + cipher_params=cipher_params) + for i in range(0, 3) + ] + + # Compute MAC of nonce + self._omac[0].update(self.nonce) + self._signer = self._omac[1] + + # MAC of the nonce is also the initial counter for CTR encryption + counter_int = bytes_to_long(self._omac[0].digest()) + self._cipher = factory.new(key, + factory.MODE_CTR, + initial_value=counter_int, + nonce=b"", + **cipher_params) + + def update(self, assoc_data): + """Protect associated data + + If there is any associated data, the caller has to invoke + this function one or more times, before using + ``decrypt`` or ``encrypt``. + + By *associated data* it is meant any data (e.g. packet headers) that + will not be encrypted and will be transmitted in the clear. + However, the receiver is still able to detect any modification to it. + + If there is no associated data, this method must not be called. + + The caller may split associated data in segments of any size, and + invoke this method multiple times, each time with the next segment. + + :Parameters: + assoc_data : bytes/bytearray/memoryview + A piece of associated data. There are no restrictions on its size. + """ + + if self.update not in self._next: + raise TypeError("update() can only be called" + " immediately after initialization") + + self._next = [self.update, self.encrypt, self.decrypt, + self.digest, self.verify] + + self._signer.update(assoc_data) + return self + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext as ``bytes``. + Otherwise, ``None``. 
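+
+        Example (an illustrative sketch only; assumes the ``AES``
+        factory and shows the full AEAD round trip):
+
+        >>> from Crypto.Cipher import AES
+        >>> from Crypto.Random import get_random_bytes
+        >>> key = get_random_bytes(16)
+        >>> enc = AES.new(key, AES.MODE_EAX)
+        >>> _ = enc.update(b'public header')
+        >>> ct, tag = enc.encrypt_and_digest(b'secret payload')
+        >>> dec = AES.new(key, AES.MODE_EAX, nonce=enc.nonce)
+        >>> _ = dec.update(b'public header')
+        >>> dec.decrypt_and_verify(ct, tag)
+        b'secret payload'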
+ """ + + if self.encrypt not in self._next: + raise TypeError("encrypt() can only be called after" + " initialization or an update()") + self._next = [self.encrypt, self.digest] + ct = self._cipher.encrypt(plaintext, output=output) + if output is None: + self._omac[2].update(ct) + else: + self._omac[2].update(output) + return ct + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext as ``bytes``. + Otherwise, ``None``. + """ + + if self.decrypt not in self._next: + raise TypeError("decrypt() can only be called" + " after initialization or an update()") + self._next = [self.decrypt, self.verify] + self._omac[2].update(ciphertext) + return self._cipher.decrypt(ciphertext, output=output) + + def digest(self): + """Compute the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method returns the MAC that shall be sent to the receiver, + together with the ciphertext. + + :Return: the MAC, as a byte string. + """ + + if self.digest not in self._next: + raise TypeError("digest() cannot be called when decrypting" + " or validating a message") + self._next = [self.digest] + + if not self._mac_tag: + tag = b'\x00' * self.block_size + for i in range(3): + tag = strxor(tag, self._omac[i].digest()) + self._mac_tag = tag[:self._mac_len] + + return self._mac_tag + + def hexdigest(self): + """Compute the *printable* MAC tag. + + This method is like `digest`. + + :Return: the MAC, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method checks if the decrypted message is indeed valid + (that is, if the key is correct) and it has not been + tampered with while in transit. + + :Parameters: + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Raises MacMismatchError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if self.verify not in self._next: + raise TypeError("verify() cannot be called" + " when encrypting a message") + self._next = [self.verify] + + if not self._mac_tag: + tag = b'\x00' * self.block_size + for i in range(3): + tag = strxor(tag, self._omac[i].digest()) + self._mac_tag = tag[:self._mac_len] + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=self._mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* MAC tag. + + This method is like `verify`. 
+ + :Parameters: + hex_mac_tag : string + This is the *printable* MAC, as received from the sender. + :Raises MacMismatchError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext, output=None): + """Perform encrypt() and digest() in one step. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + a tuple with two items: + + - the ciphertext, as ``bytes`` + - the MAC tag, as ``bytes`` + + The first item becomes ``None`` when the ``output`` parameter + specified a location for the result. + """ + + return self.encrypt(plaintext, output=output), self.digest() + + def decrypt_and_verify(self, ciphertext, received_mac_tag, output=None): + """Perform decrypt() and verify() in one step. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: the plaintext as ``bytes`` or ``None`` when the ``output`` + parameter specified a location for the result. + :Raises MacMismatchError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + pt = self.decrypt(ciphertext, output=output) + self.verify(received_mac_tag) + return pt + + +def _create_eax_cipher(factory, **kwargs): + """Create a new block cipher, configured in EAX mode. + + :Parameters: + factory : module + A symmetric cipher module from `Crypto.Cipher` (like + `Crypto.Cipher.AES`). + + :Keywords: + key : bytes/bytearray/memoryview + The secret key to use in the symmetric cipher. + + nonce : bytes/bytearray/memoryview + A value that must never be reused for any other encryption. + There are no restrictions on its length, but it is recommended to use + at least 16 bytes. + + The nonce shall never repeat for two different messages encrypted with + the same key, but it does not need to be random. + + If not specified, a 16 byte long random string is used. + + mac_len : integer + Length of the MAC, in bytes. It must be no larger than the cipher + block bytes (which is the default). + """ + + try: + key = kwargs.pop("key") + nonce = kwargs.pop("nonce", None) + if nonce is None: + nonce = get_random_bytes(16) + mac_len = kwargs.pop("mac_len", factory.block_size) + except KeyError as e: + raise TypeError("Missing parameter: " + str(e)) + + return EaxMode(factory, key, nonce, mac_len, kwargs) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_eax.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_eax.pyi new file mode 100644 index 00000000..cbfa467e --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_eax.pyi @@ -0,0 +1,45 @@ +from types import ModuleType +from typing import Any, Union, Tuple, Dict, overload, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['EaxMode'] + +class EaxMode(object): + block_size: int + nonce: bytes + + def __init__(self, + factory: ModuleType, + key: Buffer, + nonce: Buffer, + mac_len: int, + cipher_params: Dict) -> None: ... + + def update(self, assoc_data: Buffer) -> EaxMode: ... 
+ + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, received_mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + + @overload + def encrypt_and_digest(self, + plaintext: Buffer) -> Tuple[bytes, bytes]: ... + @overload + def encrypt_and_digest(self, + plaintext: Buffer, + output: Buffer) -> Tuple[None, bytes]: ... + def decrypt_and_verify(self, + ciphertext: Buffer, + received_mac_tag: Buffer, + output: Optional[Union[bytearray, memoryview]] = ...) -> bytes: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ecb.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ecb.py new file mode 100644 index 00000000..37833572 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ecb.py @@ -0,0 +1,220 @@ +# -*- coding: utf-8 -*- +# +# Cipher/mode_ecb.py : ECB mode +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +""" +Electronic Code Book (ECB) mode. +""" + +__all__ = [ 'EcbMode' ] + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, create_string_buffer, + get_raw_buffer, SmartPointer, + c_size_t, c_uint8_ptr, + is_writeable_buffer) + +raw_ecb_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_ecb", """ + int ECB_start_operation(void *cipher, + void **pResult); + int ECB_encrypt(void *ecbState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int ECB_decrypt(void *ecbState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int ECB_stop_operation(void *state); + """ + ) + + +class EcbMode(object): + """*Electronic Code Book (ECB)*. + + This is the simplest encryption mode. Each of the plaintext blocks + is directly encrypted into a ciphertext block, independently of + any other block. + + This mode is dangerous because it exposes frequency of symbols + in your plaintext. Other modes (e.g. *CBC*) should be used instead. + + See `NIST SP800-38A`_ , Section 6.1. + + .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf + + :undocumented: __init__ + """ + + def __init__(self, block_cipher): + """Create a new block cipher, configured in ECB mode. + + :Parameters: + block_cipher : C pointer + A smart pointer to the low-level block cipher instance. 
+ """ + self.block_size = block_cipher.block_size + + self._state = VoidPointer() + result = raw_ecb_lib.ECB_start_operation(block_cipher.get(), + self._state.address_of()) + if result: + raise ValueError("Error %d while instantiating the ECB mode" + % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the cipher + # mode + self._state = SmartPointer(self._state.get(), + raw_ecb_lib.ECB_stop_operation) + + # Memory allocated for the underlying block cipher is now owned + # by the cipher mode + block_cipher.release() + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key set at initialization. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + The length must be multiple of the cipher block length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ecb_lib.ECB_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + if result == 3: + raise ValueError("Data must be aligned to block boundary in ECB mode") + raise ValueError("Error %d while encrypting in ECB mode" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key set at initialization. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + The length must be multiple of the cipher block length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. 
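+
+        Example (an illustrative sketch only; assumes the ``AES``
+        factory and the ``Crypto.Util.Padding`` helpers):
+
+        >>> from Crypto.Cipher import AES
+        >>> from Crypto.Util.Padding import pad, unpad
+        >>> from Crypto.Random import get_random_bytes
+        >>> key = get_random_bytes(16)
+        >>> ct = AES.new(key, AES.MODE_ECB).encrypt(pad(b'msg', 16))
+        >>> unpad(AES.new(key, AES.MODE_ECB).decrypt(ct), 16)
+        b'msg'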
+ """ + + if output is None: + plaintext = create_string_buffer(len(ciphertext)) + else: + plaintext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(ciphertext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ecb_lib.ECB_decrypt(self._state.get(), + c_uint8_ptr(ciphertext), + c_uint8_ptr(plaintext), + c_size_t(len(ciphertext))) + if result: + if result == 3: + raise ValueError("Data must be aligned to block boundary in ECB mode") + raise ValueError("Error %d while decrypting in ECB mode" % result) + + if output is None: + return get_raw_buffer(plaintext) + else: + return None + + +def _create_ecb_cipher(factory, **kwargs): + """Instantiate a cipher object that performs ECB encryption/decryption. + + :Parameters: + factory : module + The underlying block cipher, a module from ``Crypto.Cipher``. + + All keywords are passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present""" + + cipher_state = factory._create_base_cipher(kwargs) + cipher_state.block_size = factory.block_size + if kwargs: + raise TypeError("Unknown parameters for ECB: %s" % str(kwargs)) + return EcbMode(cipher_state) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ecb.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ecb.pyi new file mode 100644 index 00000000..1772b23c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ecb.pyi @@ -0,0 +1,19 @@ +from typing import Union, overload + +from Crypto.Util._raw_api import SmartPointer + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = [ 'EcbMode' ] + +class EcbMode(object): + def __init__(self, block_cipher: SmartPointer) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_gcm.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_gcm.py new file mode 100644 index 00000000..da8e337a --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_gcm.py @@ -0,0 +1,620 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+"""
+Galois/Counter Mode (GCM).
+"""
+
+__all__ = ['GcmMode']
+
+from binascii import unhexlify
+
+from Crypto.Util.py3compat import bord, _copy_bytes
+
+from Crypto.Util._raw_api import is_buffer
+
+from Crypto.Util.number import long_to_bytes, bytes_to_long
+from Crypto.Hash import BLAKE2s
+from Crypto.Random import get_random_bytes
+
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer,
+                                  create_string_buffer, get_raw_buffer,
+                                  SmartPointer, c_size_t, c_uint8_ptr)
+
+from Crypto.Util import _cpu_features
+
+
+# C API by module implementing GHASH
+_ghash_api_template = """
+    int ghash_%imp%(uint8_t y_out[16],
+                    const uint8_t block_data[],
+                    size_t len,
+                    const uint8_t y_in[16],
+                    const void *exp_key);
+    int ghash_expand_%imp%(const uint8_t h[16],
+                           void **ghash_tables);
+    int ghash_destroy_%imp%(void *ghash_tables);
+"""
+
+def _build_impl(lib, postfix):
+    from collections import namedtuple
+
+    funcs = ( "ghash", "ghash_expand", "ghash_destroy" )
+    GHASH_Imp = namedtuple('_GHash_Imp', funcs)
+    try:
+        imp_funcs = [ getattr(lib, x + "_" + postfix) for x in funcs ]
+    except AttributeError:      # Make sphinx stop complaining with its mocklib
+        imp_funcs = [ None ] * 3
+    params = dict(zip(funcs, imp_funcs))
+    return GHASH_Imp(**params)
+
+
+def _get_ghash_portable():
+    api = _ghash_api_template.replace("%imp%", "portable")
+    lib = load_pycryptodome_raw_lib("Crypto.Hash._ghash_portable", api)
+    result = _build_impl(lib, "portable")
+    return result
+_ghash_portable = _get_ghash_portable()
+
+
+def _get_ghash_clmul():
+    """Return None if CLMUL implementation is not available"""
+
+    if not _cpu_features.have_clmul():
+        return None
+    try:
+        api = _ghash_api_template.replace("%imp%", "clmul")
+        lib = load_pycryptodome_raw_lib("Crypto.Hash._ghash_clmul", api)
+        result = _build_impl(lib, "clmul")
+    except OSError:
+        result = None
+    return result
+_ghash_clmul = _get_ghash_clmul()
+
+
+class _GHASH(object):
+    """GHASH function defined in NIST SP 800-38D, Algorithm 2.
+
+    If X_1, X_2, ..., X_m are the blocks of input data, the function
+    computes:
+
+       X_1*H^{m} + X_2*H^{m-1} + ... + X_m*H
+
+    in the Galois field GF(2^128) using the reducing polynomial
+    (x^128 + x^7 + x^2 + x + 1).
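+
+    A pure-Python sketch of the same computation (illustrative and
+    unoptimized; the class below delegates to a C implementation):
+
+    >>> def gf128_mul(x, y):                  # SP 800-38D, Algorithm 1
+    ...     R = 0xE1 << 120
+    ...     z, v = 0, y
+    ...     for i in range(128):
+    ...         if (x >> (127 - i)) & 1:
+    ...             z ^= v
+    ...         v = (v >> 1) ^ R if v & 1 else v >> 1
+    ...     return z
+    >>> def ghash(h_bytes, data):             # len(data) % 16 == 0
+    ...     h = int.from_bytes(h_bytes, 'big')
+    ...     y = 0
+    ...     for i in range(0, len(data), 16):
+    ...         y = gf128_mul(y ^ int.from_bytes(data[i:i+16], 'big'), h)
+    ...     return y.to_bytes(16, 'big')
+    >>> gf128_mul(1 << 127, 12345) == 12345   # 1 << 127 is the identity
+    True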
+ """ + + def __init__(self, subkey, ghash_c): + assert len(subkey) == 16 + + self.ghash_c = ghash_c + + self._exp_key = VoidPointer() + result = ghash_c.ghash_expand(c_uint8_ptr(subkey), + self._exp_key.address_of()) + if result: + raise ValueError("Error %d while expanding the GHASH key" % result) + + self._exp_key = SmartPointer(self._exp_key.get(), + ghash_c.ghash_destroy) + + # create_string_buffer always returns a string of zeroes + self._last_y = create_string_buffer(16) + + def update(self, block_data): + assert len(block_data) % 16 == 0 + + result = self.ghash_c.ghash(self._last_y, + c_uint8_ptr(block_data), + c_size_t(len(block_data)), + self._last_y, + self._exp_key.get()) + if result: + raise ValueError("Error %d while updating GHASH" % result) + + return self + + def digest(self): + return get_raw_buffer(self._last_y) + + +def enum(**enums): + return type('Enum', (), enums) + + +MacStatus = enum(PROCESSING_AUTH_DATA=1, PROCESSING_CIPHERTEXT=2) + + +class GcmMode(object): + """Galois Counter Mode (GCM). + + This is an Authenticated Encryption with Associated Data (`AEAD`_) mode. + It provides both confidentiality and authenticity. + + The header of the message may be left in the clear, if needed, and it will + still be subject to authentication. The decryption step tells the receiver + if the message comes from a source that really knowns the secret key. + Additionally, decryption detects if any part of the message - including the + header - has been modified or corrupted. + + This mode requires a *nonce*. + + This mode is only available for ciphers that operate on 128 bits blocks + (e.g. AES but not TDES). + + See `NIST SP800-38D`_. + + .. _`NIST SP800-38D`: http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf + .. _AEAD: http://blog.cryptographyengineering.com/2012/05/how-to-choose-authenticated-encryption.html + + :undocumented: __init__ + """ + + def __init__(self, factory, key, nonce, mac_len, cipher_params, ghash_c): + + self.block_size = factory.block_size + if self.block_size != 16: + raise ValueError("GCM mode is only available for ciphers" + " that operate on 128 bits blocks") + + if len(nonce) == 0: + raise ValueError("Nonce cannot be empty") + + if not is_buffer(nonce): + raise TypeError("Nonce must be bytes, bytearray or memoryview") + + # See NIST SP 800 38D, 5.2.1.1 + if len(nonce) > 2**64 - 1: + raise ValueError("Nonce exceeds maximum length") + + + self.nonce = _copy_bytes(None, None, nonce) + """Nonce""" + + self._factory = factory + self._key = _copy_bytes(None, None, key) + self._tag = None # Cache for MAC tag + + self._mac_len = mac_len + if not (4 <= mac_len <= 16): + raise ValueError("Parameter 'mac_len' must be in the range 4..16") + + # Allowed transitions after initialization + self._next = [self.update, self.encrypt, self.decrypt, + self.digest, self.verify] + + self._no_more_assoc_data = False + + # Length of associated data + self._auth_len = 0 + + # Length of the ciphertext or plaintext + self._msg_len = 0 + + # Step 1 in SP800-38D, Algorithm 4 (encryption) - Compute H + # See also Algorithm 5 (decryption) + hash_subkey = factory.new(key, + self._factory.MODE_ECB, + **cipher_params + ).encrypt(b'\x00' * 16) + + # Step 2 - Compute J0 + if len(self.nonce) == 12: + j0 = self.nonce + b"\x00\x00\x00\x01" + else: + fill = (16 - (len(nonce) % 16)) % 16 + 8 + ghash_in = (self.nonce + + b'\x00' * fill + + long_to_bytes(8 * len(nonce), 8)) + j0 = _GHASH(hash_subkey, ghash_c).update(ghash_in).digest() + + # Step 3 - Prepare GCTR cipher for 
encryption/decryption + nonce_ctr = j0[:12] + iv_ctr = (bytes_to_long(j0) + 1) & 0xFFFFFFFF + self._cipher = factory.new(key, + self._factory.MODE_CTR, + initial_value=iv_ctr, + nonce=nonce_ctr, + **cipher_params) + + # Step 5 - Bootstrat GHASH + self._signer = _GHASH(hash_subkey, ghash_c) + + # Step 6 - Prepare GCTR cipher for GMAC + self._tag_cipher = factory.new(key, + self._factory.MODE_CTR, + initial_value=j0, + nonce=b"", + **cipher_params) + + # Cache for data to authenticate + self._cache = b"" + + self._status = MacStatus.PROCESSING_AUTH_DATA + + def update(self, assoc_data): + """Protect associated data + + If there is any associated data, the caller has to invoke + this function one or more times, before using + ``decrypt`` or ``encrypt``. + + By *associated data* it is meant any data (e.g. packet headers) that + will not be encrypted and will be transmitted in the clear. + However, the receiver is still able to detect any modification to it. + In GCM, the *associated data* is also called + *additional authenticated data* (AAD). + + If there is no associated data, this method must not be called. + + The caller may split associated data in segments of any size, and + invoke this method multiple times, each time with the next segment. + + :Parameters: + assoc_data : bytes/bytearray/memoryview + A piece of associated data. There are no restrictions on its size. + """ + + if self.update not in self._next: + raise TypeError("update() can only be called" + " immediately after initialization") + + self._next = [self.update, self.encrypt, self.decrypt, + self.digest, self.verify] + + self._update(assoc_data) + self._auth_len += len(assoc_data) + + # See NIST SP 800 38D, 5.2.1.1 + if self._auth_len > 2**64 - 1: + raise ValueError("Additional Authenticated Data exceeds maximum length") + + return self + + def _update(self, data): + assert(len(self._cache) < 16) + + if len(self._cache) > 0: + filler = min(16 - len(self._cache), len(data)) + self._cache += _copy_bytes(None, filler, data) + data = data[filler:] + + if len(self._cache) < 16: + return + + # The cache is exactly one block + self._signer.update(self._cache) + self._cache = b"" + + update_len = len(data) // 16 * 16 + self._cache = _copy_bytes(update_len, None, data) + if update_len > 0: + self._signer.update(data[:update_len]) + + def _pad_cache_and_update(self): + assert(len(self._cache) < 16) + + # The authenticated data A is concatenated to the minimum + # number of zero bytes (possibly none) such that the + # - ciphertext C is aligned to the 16 byte boundary. + # See step 5 in section 7.1 + # - ciphertext C is aligned to the 16 byte boundary. + # See step 6 in section 7.2 + len_cache = len(self._cache) + if len_cache > 0: + self._update(b'\x00' * (16 - len_cache)) + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. 
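
Recapping the nonce handling implemented in `__init__` above (SP 800-38D, Algorithm 4, step 2): a 96-bit nonce becomes J0 directly with a trailing counter of 1; any other length is zero-padded, suffixed with its bit length, and GHASHed. A sketch reusing the `ghash` function from the earlier note, where `h` is E_K(0^128):

```python
def derive_j0(h: bytes, nonce: bytes) -> bytes:
    if len(nonce) == 12:                       # fast path: 96-bit nonce
        return nonce + b"\x00\x00\x00\x01"
    fill = (16 - len(nonce) % 16) % 16 + 8     # pad, leaving room for the length
    blob = nonce + b"\x00" * fill + (8 * len(nonce)).to_bytes(8, "big")
    return ghash(h, blob)                      # blob is 16-byte aligned

# The payload CTR cipher then starts at inc32(J0), exactly as above:
#   nonce=j0[:12], initial_value=(bytes_to_long(j0) + 1) & 0xFFFFFFFF
```
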
+ If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext as ``bytes``. + Otherwise, ``None``. + """ + + if self.encrypt not in self._next: + raise TypeError("encrypt() can only be called after" + " initialization or an update()") + self._next = [self.encrypt, self.digest] + + ciphertext = self._cipher.encrypt(plaintext, output=output) + + if self._status == MacStatus.PROCESSING_AUTH_DATA: + self._pad_cache_and_update() + self._status = MacStatus.PROCESSING_CIPHERTEXT + + self._update(ciphertext if output is None else output) + self._msg_len += len(plaintext) + + # See NIST SP 800 38D, 5.2.1.1 + if self._msg_len > 2**39 - 256: + raise ValueError("Plaintext exceeds maximum length") + + return ciphertext + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext as ``bytes``. + Otherwise, ``None``. + """ + + if self.decrypt not in self._next: + raise TypeError("decrypt() can only be called" + " after initialization or an update()") + self._next = [self.decrypt, self.verify] + + if self._status == MacStatus.PROCESSING_AUTH_DATA: + self._pad_cache_and_update() + self._status = MacStatus.PROCESSING_CIPHERTEXT + + self._update(ciphertext) + self._msg_len += len(ciphertext) + + return self._cipher.decrypt(ciphertext, output=output) + + def digest(self): + """Compute the *binary* MAC tag in an AEAD mode. + + The caller invokes this function at the very end. + + This method returns the MAC that shall be sent to the receiver, + together with the ciphertext. + + :Return: the MAC, as a byte string. + """ + + if self.digest not in self._next: + raise TypeError("digest() cannot be called when decrypting" + " or validating a message") + self._next = [self.digest] + + return self._compute_mac() + + def _compute_mac(self): + """Compute MAC without any FSM checks.""" + + if self._tag: + return self._tag + + # Step 5 in NIST SP 800-38D, Algorithm 4 - Compute S + self._pad_cache_and_update() + self._update(long_to_bytes(8 * self._auth_len, 8)) + self._update(long_to_bytes(8 * self._msg_len, 8)) + s_tag = self._signer.digest() + + # Step 6 - Compute T + self._tag = self._tag_cipher.encrypt(s_tag)[:self._mac_len] + + return self._tag + + def hexdigest(self): + """Compute the *printable* MAC tag. + + This method is like `digest`. + + :Return: the MAC, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method checks if the decrypted message is indeed valid + (that is, if the key is correct) and it has not been + tampered with while in transit. 
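
On the plaintext cap enforced in `encrypt()` above: the CTR block counter is 32 bits wide and J0 itself is reserved for the tag, leaving 2^32 - 2 one-block counter values for the payload. That is where the 2^39 - 256 figure of SP 800-38D, section 5.2.1.1 comes from:

```python
# 2**32 - 2 payload blocks of 128 bits each (SP 800-38D, 5.2.1.1):
assert (2**32 - 2) * 128 == 2**39 - 256
```
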
+ + :Parameters: + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if self.verify not in self._next: + raise TypeError("verify() cannot be called" + " when encrypting a message") + self._next = [self.verify] + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, + data=self._compute_mac()) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, + data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* MAC tag. + + This method is like `verify`. + + :Parameters: + hex_mac_tag : string + This is the *printable* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext, output=None): + """Perform encrypt() and digest() in one step. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + a tuple with two items: + + - the ciphertext, as ``bytes`` + - the MAC tag, as ``bytes`` + + The first item becomes ``None`` when the ``output`` parameter + specified a location for the result. + """ + + return self.encrypt(plaintext, output=output), self.digest() + + def decrypt_and_verify(self, ciphertext, received_mac_tag, output=None): + """Perform decrypt() and verify() in one step. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + received_mac_tag : byte string + This is the *binary* MAC, as received from the sender. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: the plaintext as ``bytes`` or ``None`` when the ``output`` + parameter specified a location for the result. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + plaintext = self.decrypt(ciphertext, output=output) + self.verify(received_mac_tag) + return plaintext + + +def _create_gcm_cipher(factory, **kwargs): + """Create a new block cipher, configured in Galois Counter Mode (GCM). + + :Parameters: + factory : module + A block cipher module, taken from `Crypto.Cipher`. + The cipher must have block length of 16 bytes. + GCM has been only defined for `Crypto.Cipher.AES`. + + :Keywords: + key : bytes/bytearray/memoryview + The secret key to use in the symmetric cipher. + It must be 16 (e.g. *AES-128*), 24 (e.g. *AES-192*) + or 32 (e.g. *AES-256*) bytes long. + + nonce : bytes/bytearray/memoryview + A value that must never be reused for any other encryption. + + There are no restrictions on its length, + but it is recommended to use at least 16 bytes. + + The nonce shall never repeat for two + different messages encrypted with the same key, + but it does not need to be random. + + If not provided, a 16 byte nonce will be randomly created. + + mac_len : integer + Length of the MAC, in bytes. + It must be no larger than 16 bytes (which is the default). 
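
For reference, a typical round trip through the GCM mode defined above, driven via the public AES API (illustrative values, not part of this diff):

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
enc = AES.new(key, AES.MODE_GCM)               # random 16-byte nonce generated
enc.update(b"header")                          # authenticated, not encrypted
ciphertext, tag = enc.encrypt_and_digest(b"attack at dawn")

dec = AES.new(key, AES.MODE_GCM, nonce=enc.nonce)
dec.update(b"header")
plaintext = dec.decrypt_and_verify(ciphertext, tag)  # ValueError on tampering
assert plaintext == b"attack at dawn"
```
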
+ """ + + try: + key = kwargs.pop("key") + except KeyError as e: + raise TypeError("Missing parameter:" + str(e)) + + nonce = kwargs.pop("nonce", None) + if nonce is None: + nonce = get_random_bytes(16) + mac_len = kwargs.pop("mac_len", 16) + + # Not documented - only used for testing + use_clmul = kwargs.pop("use_clmul", True) + if use_clmul and _ghash_clmul: + ghash_c = _ghash_clmul + else: + ghash_c = _ghash_portable + + return GcmMode(factory, key, nonce, mac_len, kwargs, ghash_c) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_gcm.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_gcm.pyi new file mode 100644 index 00000000..8912955f --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_gcm.pyi @@ -0,0 +1,45 @@ +from types import ModuleType +from typing import Union, Tuple, Dict, overload, Optional + +__all__ = ['GcmMode'] + +Buffer = Union[bytes, bytearray, memoryview] + +class GcmMode(object): + block_size: int + nonce: Buffer + + def __init__(self, + factory: ModuleType, + key: Buffer, + nonce: Buffer, + mac_len: int, + cipher_params: Dict) -> None: ... + + def update(self, assoc_data: Buffer) -> GcmMode: ... + + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, received_mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + + @overload + def encrypt_and_digest(self, + plaintext: Buffer) -> Tuple[bytes, bytes]: ... + @overload + def encrypt_and_digest(self, + plaintext: Buffer, + output: Buffer) -> Tuple[None, bytes]: ... + def decrypt_and_verify(self, + ciphertext: Buffer, + received_mac_tag: Buffer, + output: Optional[Union[bytearray, memoryview]] = ...) -> bytes: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ocb.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ocb.py new file mode 100644 index 00000000..27758b11 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ocb.py @@ -0,0 +1,525 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Offset Codebook (OCB) mode. + +OCB is Authenticated Encryption with Associated Data (AEAD) cipher mode +designed by Prof. Phillip Rogaway and specified in `RFC7253`_. + +The algorithm provides both authenticity and privacy, it is very efficient, +it uses only one key and it can be used in online mode (so that encryption +or decryption can start before the end of the message is available). + +This module implements the third and last variant of OCB (OCB3) and it only +works in combination with a 128-bit block symmetric cipher, like AES. + +OCB is patented in US but `free licenses`_ exist for software implementations +meant for non-military purposes. + +Example: + >>> from Crypto.Cipher import AES + >>> from Crypto.Random import get_random_bytes + >>> + >>> key = get_random_bytes(32) + >>> cipher = AES.new(key, AES.MODE_OCB) + >>> plaintext = b"Attack at dawn" + >>> ciphertext, mac = cipher.encrypt_and_digest(plaintext) + >>> # Deliver cipher.nonce, ciphertext and mac + ... + >>> cipher = AES.new(key, AES.MODE_OCB, nonce=nonce) + >>> try: + >>> plaintext = cipher.decrypt_and_verify(ciphertext, mac) + >>> except ValueError: + >>> print "Invalid message" + >>> else: + >>> print plaintext + +:undocumented: __package__ + +.. _RFC7253: http://www.rfc-editor.org/info/rfc7253 +.. _free licenses: http://web.cs.ucdavis.edu/~rogaway/ocb/license.htm +""" + +import struct +from binascii import unhexlify + +from Crypto.Util.py3compat import bord, _copy_bytes +from Crypto.Util.number import long_to_bytes, bytes_to_long +from Crypto.Util.strxor import strxor + +from Crypto.Hash import BLAKE2s +from Crypto.Random import get_random_bytes + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, + create_string_buffer, get_raw_buffer, + SmartPointer, c_size_t, c_uint8_ptr, + is_buffer) + +_raw_ocb_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_ocb", """ + int OCB_start_operation(void *cipher, + const uint8_t *offset_0, + size_t offset_0_len, + void **pState); + int OCB_encrypt(void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int OCB_decrypt(void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int OCB_update(void *state, + const uint8_t *in, + size_t data_len); + int OCB_digest(void *state, + uint8_t *tag, + size_t tag_len); + int OCB_stop_operation(void *state); + """) + + +class OcbMode(object): + """Offset Codebook (OCB) mode. 
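
The doctest in the module docstring above is Python 2 syntax (`print "Invalid message"`) and reads a `nonce` variable that is never assigned; a working Python 3 rendering of the same flow:

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)
cipher = AES.new(key, AES.MODE_OCB)
nonce = cipher.nonce                        # must travel with the message
ciphertext, mac = cipher.encrypt_and_digest(b"Attack at dawn")

cipher = AES.new(key, AES.MODE_OCB, nonce=nonce)
try:
    plaintext = cipher.decrypt_and_verify(ciphertext, mac)
except ValueError:
    print("Invalid message")
else:
    print(plaintext)
```
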
+ + :undocumented: __init__ + """ + + def __init__(self, factory, nonce, mac_len, cipher_params): + + if factory.block_size != 16: + raise ValueError("OCB mode is only available for ciphers" + " that operate on 128 bits blocks") + + self.block_size = 16 + """The block size of the underlying cipher, in bytes.""" + + self.nonce = _copy_bytes(None, None, nonce) + """Nonce used for this session.""" + if len(nonce) not in range(1, 16): + raise ValueError("Nonce must be at most 15 bytes long") + if not is_buffer(nonce): + raise TypeError("Nonce must be bytes, bytearray or memoryview") + + self._mac_len = mac_len + if not 8 <= mac_len <= 16: + raise ValueError("MAC tag must be between 8 and 16 bytes long") + + # Cache for MAC tag + self._mac_tag = None + + # Cache for unaligned associated data + self._cache_A = b"" + + # Cache for unaligned ciphertext/plaintext + self._cache_P = b"" + + # Allowed transitions after initialization + self._next = [self.update, self.encrypt, self.decrypt, + self.digest, self.verify] + + # Compute Offset_0 + params_without_key = dict(cipher_params) + key = params_without_key.pop("key") + nonce = (struct.pack('B', self._mac_len << 4 & 0xFF) + + b'\x00' * (14 - len(nonce)) + + b'\x01' + self.nonce) + + bottom_bits = bord(nonce[15]) & 0x3F # 6 bits, 0..63 + top_bits = bord(nonce[15]) & 0xC0 # 2 bits + + ktop_cipher = factory.new(key, + factory.MODE_ECB, + **params_without_key) + ktop = ktop_cipher.encrypt(struct.pack('15sB', + nonce[:15], + top_bits)) + + stretch = ktop + strxor(ktop[:8], ktop[1:9]) # 192 bits + offset_0 = long_to_bytes(bytes_to_long(stretch) >> + (64 - bottom_bits), 24)[8:] + + # Create low-level cipher instance + raw_cipher = factory._create_base_cipher(cipher_params) + if cipher_params: + raise TypeError("Unknown keywords: " + str(cipher_params)) + + self._state = VoidPointer() + result = _raw_ocb_lib.OCB_start_operation(raw_cipher.get(), + offset_0, + c_size_t(len(offset_0)), + self._state.address_of()) + if result: + raise ValueError("Error %d while instantiating the OCB mode" + % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the cipher mode + self._state = SmartPointer(self._state.get(), + _raw_ocb_lib.OCB_stop_operation) + + # Memory allocated for the underlying block cipher is now owed + # by the cipher mode + raw_cipher.release() + + def _update(self, assoc_data, assoc_data_len): + result = _raw_ocb_lib.OCB_update(self._state.get(), + c_uint8_ptr(assoc_data), + c_size_t(assoc_data_len)) + if result: + raise ValueError("Error %d while computing MAC in OCB mode" % result) + + def update(self, assoc_data): + """Process the associated data. + + If there is any associated data, the caller has to invoke + this method one or more times, before using + ``decrypt`` or ``encrypt``. + + By *associated data* it is meant any data (e.g. packet headers) that + will not be encrypted and will be transmitted in the clear. + However, the receiver shall still able to detect modifications. + + If there is no associated data, this method must not be called. + + The caller may split associated data in segments of any size, and + invoke this method multiple times, each time with the next segment. + + :Parameters: + assoc_data : bytes/bytearray/memoryview + A piece of associated data. 
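
A pure-Python restatement of the Offset_0 computation in `__init__` above (RFC 7253, section 4.2), assuming `ecb_encrypt` is a callable that encrypts exactly one 16-byte block:

```python
def ocb_offset0(ecb_encrypt, nonce: bytes, mac_len: int) -> bytes:
    assert 1 <= len(nonce) <= 15
    # Format the nonce into one block: 7-bit tag length, zero padding,
    # a 1 marker bit, then the nonce itself.
    padded = (bytes([mac_len << 4 & 0xFF])
              + b"\x00" * (14 - len(nonce)) + b"\x01" + nonce)
    bottom = padded[15] & 0x3F                       # low 6 bits: shift amount
    ktop = ecb_encrypt(padded[:15] + bytes([padded[15] & 0xC0]))
    stretch = ktop + bytes(a ^ b for a, b in zip(ktop[:8], ktop[1:9]))  # 192 bits
    shifted = int.from_bytes(stretch, "big") >> (64 - bottom)
    return shifted.to_bytes(24, "big")[8:]           # 128-bit window of Stretch
```
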
+ """ + + if self.update not in self._next: + raise TypeError("update() can only be called" + " immediately after initialization") + + self._next = [self.encrypt, self.decrypt, self.digest, + self.verify, self.update] + + if len(self._cache_A) > 0: + filler = min(16 - len(self._cache_A), len(assoc_data)) + self._cache_A += _copy_bytes(None, filler, assoc_data) + assoc_data = assoc_data[filler:] + + if len(self._cache_A) < 16: + return self + + # Clear the cache, and proceeding with any other aligned data + self._cache_A, seg = b"", self._cache_A + self.update(seg) + + update_len = len(assoc_data) // 16 * 16 + self._cache_A = _copy_bytes(update_len, None, assoc_data) + self._update(assoc_data, update_len) + return self + + def _transcrypt_aligned(self, in_data, in_data_len, + trans_func, trans_desc): + + out_data = create_string_buffer(in_data_len) + result = trans_func(self._state.get(), + in_data, + out_data, + c_size_t(in_data_len)) + if result: + raise ValueError("Error %d while %sing in OCB mode" + % (result, trans_desc)) + return get_raw_buffer(out_data) + + def _transcrypt(self, in_data, trans_func, trans_desc): + # Last piece to encrypt/decrypt + if in_data is None: + out_data = self._transcrypt_aligned(self._cache_P, + len(self._cache_P), + trans_func, + trans_desc) + self._cache_P = b"" + return out_data + + # Try to fill up the cache, if it already contains something + prefix = b"" + if len(self._cache_P) > 0: + filler = min(16 - len(self._cache_P), len(in_data)) + self._cache_P += _copy_bytes(None, filler, in_data) + in_data = in_data[filler:] + + if len(self._cache_P) < 16: + # We could not manage to fill the cache, so there is certainly + # no output yet. + return b"" + + # Clear the cache, and proceeding with any other aligned data + prefix = self._transcrypt_aligned(self._cache_P, + len(self._cache_P), + trans_func, + trans_desc) + self._cache_P = b"" + + # Process data in multiples of the block size + trans_len = len(in_data) // 16 * 16 + result = self._transcrypt_aligned(c_uint8_ptr(in_data), + trans_len, + trans_func, + trans_desc) + if prefix: + result = prefix + result + + # Left-over + self._cache_P = _copy_bytes(trans_len, None, in_data) + + return result + + def encrypt(self, plaintext=None): + """Encrypt the next piece of plaintext. + + After the entire plaintext has been passed (but before `digest`), + you **must** call this method one last time with no arguments to collect + the final piece of ciphertext. + + If possible, use the method `encrypt_and_digest` instead. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The next piece of data to encrypt or ``None`` to signify + that encryption has finished and that any remaining ciphertext + has to be produced. + :Return: + the ciphertext, as a byte string. + Its length may not match the length of the *plaintext*. + """ + + if self.encrypt not in self._next: + raise TypeError("encrypt() can only be called after" + " initialization or an update()") + + if plaintext is None: + self._next = [self.digest] + else: + self._next = [self.encrypt] + return self._transcrypt(plaintext, _raw_ocb_lib.OCB_encrypt, "encrypt") + + def decrypt(self, ciphertext=None): + """Decrypt the next piece of ciphertext. + + After the entire ciphertext has been passed (but before `verify`), + you **must** call this method one last time with no arguments to collect + the remaining piece of plaintext. + + If possible, use the method `decrypt_and_verify` instead. 
+ + :Parameters: + ciphertext : bytes/bytearray/memoryview + The next piece of data to decrypt or ``None`` to signify + that decryption has finished and that any remaining plaintext + has to be produced. + :Return: + the plaintext, as a byte string. + Its length may not match the length of the *ciphertext*. + """ + + if self.decrypt not in self._next: + raise TypeError("decrypt() can only be called after" + " initialization or an update()") + + if ciphertext is None: + self._next = [self.verify] + else: + self._next = [self.decrypt] + return self._transcrypt(ciphertext, + _raw_ocb_lib.OCB_decrypt, + "decrypt") + + def _compute_mac_tag(self): + + if self._mac_tag is not None: + return + + if self._cache_A: + self._update(self._cache_A, len(self._cache_A)) + self._cache_A = b"" + + mac_tag = create_string_buffer(16) + result = _raw_ocb_lib.OCB_digest(self._state.get(), + mac_tag, + c_size_t(len(mac_tag)) + ) + if result: + raise ValueError("Error %d while computing digest in OCB mode" + % result) + self._mac_tag = get_raw_buffer(mac_tag)[:self._mac_len] + + def digest(self): + """Compute the *binary* MAC tag. + + Call this method after the final `encrypt` (the one with no arguments) + to obtain the MAC tag. + + The MAC tag is needed by the receiver to determine authenticity + of the message. + + :Return: the MAC, as a byte string. + """ + + if self.digest not in self._next: + raise TypeError("digest() cannot be called now for this cipher") + + assert(len(self._cache_P) == 0) + + self._next = [self.digest] + + if self._mac_tag is None: + self._compute_mac_tag() + + return self._mac_tag + + def hexdigest(self): + """Compute the *printable* MAC tag. + + This method is like `digest`. + + :Return: the MAC, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* MAC tag. + + Call this method after the final `decrypt` (the one with no arguments) + to check if the message is authentic and valid. + + :Parameters: + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if self.verify not in self._next: + raise TypeError("verify() cannot be called now for this cipher") + + assert(len(self._cache_P) == 0) + + self._next = [self.verify] + + if self._mac_tag is None: + self._compute_mac_tag() + + secret = get_random_bytes(16) + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=self._mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* MAC tag. + + This method is like `verify`. + + :Parameters: + hex_mac_tag : string + This is the *printable* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext): + """Encrypt the message and create the MAC tag in one step. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The entire message to encrypt. 
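
`verify()` above never compares the two tags directly: both are hashed under a one-time random BLAKE2s key and the digests are compared, so even a short-circuiting `==` cannot leak, through timing, where the received tag first diverges from the expected one. The same pattern appears in the GCM and SIV files of this diff; distilled:

```python
from Crypto.Hash import BLAKE2s
from Crypto.Random import get_random_bytes

def tags_match(expected: bytes, received: bytes) -> bool:
    # Hash both tags under a fresh random key; comparing the hashes
    # reveals nothing useful about the expected tag's bytes.
    secret = get_random_bytes(16)
    h1 = BLAKE2s.new(digest_bits=160, key=secret, data=expected)
    h2 = BLAKE2s.new(digest_bits=160, key=secret, data=received)
    return h1.digest() == h2.digest()
```
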
+ :Return: + a tuple with two byte strings: + + - the encrypted data + - the MAC + """ + + return self.encrypt(plaintext) + self.encrypt(), self.digest() + + def decrypt_and_verify(self, ciphertext, received_mac_tag): + """Decrypted the message and verify its authenticity in one step. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The entire message to decrypt. + received_mac_tag : byte string + This is the *binary* MAC, as received from the sender. + + :Return: the decrypted data (byte string). + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + plaintext = self.decrypt(ciphertext) + self.decrypt() + self.verify(received_mac_tag) + return plaintext + + +def _create_ocb_cipher(factory, **kwargs): + """Create a new block cipher, configured in OCB mode. + + :Parameters: + factory : module + A symmetric cipher module from `Crypto.Cipher` + (like `Crypto.Cipher.AES`). + + :Keywords: + nonce : bytes/bytearray/memoryview + A value that must never be reused for any other encryption. + Its length can vary from 1 to 15 bytes. + If not specified, a random 15 bytes long nonce is generated. + + mac_len : integer + Length of the MAC, in bytes. + It must be in the range ``[8..16]``. + The default is 16 (128 bits). + + Any other keyword will be passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present). + """ + + try: + nonce = kwargs.pop("nonce", None) + if nonce is None: + nonce = get_random_bytes(15) + mac_len = kwargs.pop("mac_len", 16) + except KeyError as e: + raise TypeError("Keyword missing: " + str(e)) + + return OcbMode(factory, nonce, mac_len, kwargs) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ocb.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ocb.pyi new file mode 100644 index 00000000..a1909fc3 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ocb.pyi @@ -0,0 +1,36 @@ +from types import ModuleType +from typing import Union, Any, Optional, Tuple, Dict, overload + +Buffer = Union[bytes, bytearray, memoryview] + +class OcbMode(object): + block_size: int + nonce: Buffer + + def __init__(self, + factory: ModuleType, + nonce: Buffer, + mac_len: int, + cipher_params: Dict) -> None: ... + + def update(self, assoc_data: Buffer) -> OcbMode: ... + + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, received_mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + + def encrypt_and_digest(self, + plaintext: Buffer) -> Tuple[bytes, bytes]: ... + def decrypt_and_verify(self, + ciphertext: Buffer, + received_mac_tag: Buffer) -> bytes: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ofb.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ofb.py new file mode 100644 index 00000000..958f6d00 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ofb.py @@ -0,0 +1,282 @@ +# -*- coding: utf-8 -*- +# +# Cipher/mode_ofb.py : OFB mode +# +# =================================================================== +# The contents of this file are dedicated to the public domain. 
To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +""" +Output Feedback (CFB) mode. +""" + +__all__ = ['OfbMode'] + +from Crypto.Util.py3compat import _copy_bytes +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, + create_string_buffer, get_raw_buffer, + SmartPointer, c_size_t, c_uint8_ptr, + is_writeable_buffer) + +from Crypto.Random import get_random_bytes + +raw_ofb_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_ofb", """ + int OFB_start_operation(void *cipher, + const uint8_t iv[], + size_t iv_len, + void **pResult); + int OFB_encrypt(void *ofbState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int OFB_decrypt(void *ofbState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int OFB_stop_operation(void *state); + """ + ) + + +class OfbMode(object): + """*Output FeedBack (OFB)*. + + This mode is very similar to CBC, but it + transforms the underlying block cipher into a stream cipher. + + The keystream is the iterated block encryption of the + previous ciphertext block. + + An Initialization Vector (*IV*) is required. + + See `NIST SP800-38A`_ , Section 6.4. + + .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf + + :undocumented: __init__ + """ + + def __init__(self, block_cipher, iv): + """Create a new block cipher, configured in OFB mode. + + :Parameters: + block_cipher : C pointer + A smart pointer to the low-level block cipher instance. + + iv : bytes/bytearray/memoryview + The initialization vector to use for encryption or decryption. + It is as long as the cipher block. + + **The IV must be a nonce, to to be reused for any other + message**. It shall be a nonce or a random value. + + Reusing the *IV* for encryptions performed with the same key + compromises confidentiality. + """ + + self._state = VoidPointer() + result = raw_ofb_lib.OFB_start_operation(block_cipher.get(), + c_uint8_ptr(iv), + c_size_t(len(iv)), + self._state.address_of()) + if result: + raise ValueError("Error %d while instantiating the OFB mode" + % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the cipher mode + self._state = SmartPointer(self._state.get(), + raw_ofb_lib.OFB_stop_operation) + + # Memory allocated for the underlying block cipher is now owed + # by the cipher mode + block_cipher.release() + + self.block_size = len(iv) + """The block size of the underlying cipher, in bytes.""" + + self.iv = _copy_bytes(None, None, iv) + """The Initialization Vector originally used to create the object. 
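
Three corrections to the prose above: the module docstring should read "Output Feedback (OFB) mode"; the block that OFB feeds back is the previous keystream (output) block, not the previous ciphertext block (ciphertext feedback is CFB); and "to to be reused" in the `__init__` docstring should read "not to be reused". Because the keystream depends only on the key and IV, encryption and decryption coincide, and IV reuse under one key exposes the XOR of plaintexts. A sketch of the recurrence from NIST SP 800-38A, section 6.4, assuming `ecb_encrypt` encrypts exactly one block:

```python
def ofb_keystream_xor(ecb_encrypt, iv: bytes, data: bytes) -> bytes:
    # O_0 = IV; O_i = E_K(O_{i-1}); C_i = P_i XOR O_i.
    # The *output* block is fed back, so the same function decrypts.
    out, feedback = bytearray(), iv
    for off in range(0, len(data), len(iv)):
        feedback = ecb_encrypt(feedback)
        chunk = data[off:off + len(iv)]            # final chunk may be short
        out += bytes(p ^ k for p, k in zip(chunk, feedback))
    return bytes(out)
```
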
+ The value does not change.""" + + self.IV = self.iv + """Alias for `iv`""" + + self._next = [ self.encrypt, self.decrypt ] + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if self.encrypt not in self._next: + raise TypeError("encrypt() cannot be called after decrypt()") + self._next = [ self.encrypt ] + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ofb_lib.OFB_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + raise ValueError("Error %d while encrypting in OFB mode" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext is written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. 
+ """ + + if self.decrypt not in self._next: + raise TypeError("decrypt() cannot be called after encrypt()") + self._next = [ self.decrypt ] + + if output is None: + plaintext = create_string_buffer(len(ciphertext)) + else: + plaintext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(ciphertext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ofb_lib.OFB_decrypt(self._state.get(), + c_uint8_ptr(ciphertext), + c_uint8_ptr(plaintext), + c_size_t(len(ciphertext))) + if result: + raise ValueError("Error %d while decrypting in OFB mode" % result) + + if output is None: + return get_raw_buffer(plaintext) + else: + return None + + +def _create_ofb_cipher(factory, **kwargs): + """Instantiate a cipher object that performs OFB encryption/decryption. + + :Parameters: + factory : module + The underlying block cipher, a module from ``Crypto.Cipher``. + + :Keywords: + iv : bytes/bytearray/memoryview + The IV to use for OFB. + + IV : bytes/bytearray/memoryview + Alias for ``iv``. + + Any other keyword will be passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present). + """ + + cipher_state = factory._create_base_cipher(kwargs) + iv = kwargs.pop("IV", None) + IV = kwargs.pop("iv", None) + + if (None, None) == (iv, IV): + iv = get_random_bytes(factory.block_size) + if iv is not None: + if IV is not None: + raise TypeError("You must either use 'iv' or 'IV', not both") + else: + iv = IV + + if len(iv) != factory.block_size: + raise ValueError("Incorrect IV length (it must be %d bytes long)" % + factory.block_size) + + if kwargs: + raise TypeError("Unknown parameters for OFB: %s" % str(kwargs)) + + return OfbMode(cipher_state, iv) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ofb.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ofb.pyi new file mode 100644 index 00000000..60f7f004 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_ofb.pyi @@ -0,0 +1,25 @@ +from typing import Union, overload + +from Crypto.Util._raw_api import SmartPointer + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['OfbMode'] + +class OfbMode(object): + block_size: int + iv: Buffer + IV: Buffer + + def __init__(self, + block_cipher: SmartPointer, + iv: Buffer) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_openpgp.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_openpgp.py new file mode 100644 index 00000000..d079d591 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_openpgp.py @@ -0,0 +1,206 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +OpenPGP mode. +""" + +__all__ = ['OpenPgpMode'] + +from Crypto.Util.py3compat import _copy_bytes +from Crypto.Random import get_random_bytes + +class OpenPgpMode(object): + """OpenPGP mode. + + This mode is a variant of CFB, and it is only used in PGP and + OpenPGP_ applications. If in doubt, use another mode. + + An Initialization Vector (*IV*) is required. + + Unlike CFB, the *encrypted* IV (not the IV itself) is + transmitted to the receiver. + + The IV is a random data block. For legacy reasons, two of its bytes are + duplicated to act as a checksum for the correctness of the key, which is now + known to be insecure and is ignored. The encrypted IV is therefore 2 bytes + longer than the clean IV. + + .. _OpenPGP: http://tools.ietf.org/html/rfc4880 + + :undocumented: __init__ + """ + + def __init__(self, factory, key, iv, cipher_params): + + #: The block size of the underlying cipher, in bytes. + self.block_size = factory.block_size + + self._done_first_block = False # True after the first encryption + + # Instantiate a temporary cipher to process the IV + IV_cipher = factory.new( + key, + factory.MODE_CFB, + IV=b'\x00' * self.block_size, + segment_size=self.block_size * 8, + **cipher_params) + + iv = _copy_bytes(None, None, iv) + + # The cipher will be used for... + if len(iv) == self.block_size: + # ... encryption + self._encrypted_IV = IV_cipher.encrypt(iv + iv[-2:]) + elif len(iv) == self.block_size + 2: + # ... decryption + self._encrypted_IV = iv + # Last two bytes are for a deprecated "quick check" feature that + # should not be used. (https://eprint.iacr.org/2005/033) + iv = IV_cipher.decrypt(iv)[:-2] + else: + raise ValueError("Length of IV must be %d or %d bytes" + " for MODE_OPENPGP" + % (self.block_size, self.block_size + 2)) + + self.iv = self.IV = iv + + # Instantiate the cipher for the real PGP data + self._cipher = factory.new( + key, + factory.MODE_CFB, + IV=self._encrypted_IV[-self.block_size:], + segment_size=self.block_size * 8, + **cipher_params) + + def encrypt(self, plaintext): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. 
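
Restating the IV handling in `__init__` above for the AES case (16-byte blocks): the IV, with its last two bytes repeated as the legacy checksum, is CFB-encrypted under an all-zero IV, and the data cipher is then re-seeded ("resynced") with the tail of that encrypted IV. `key` and `iv` below are assumed to be fresh 16-byte values:

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
iv = get_random_bytes(16)
iv_cipher = AES.new(key, AES.MODE_CFB, IV=b"\x00" * 16, segment_size=128)
encrypted_iv = iv_cipher.encrypt(iv + iv[-2:])   # 18 bytes travel on the wire
# "Resync": the data cipher is seeded with the last block of the encrypted IV.
data_cipher = AES.new(key, AES.MODE_CFB,
                      IV=encrypted_iv[-16:], segment_size=128)
```
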
+ + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + + :Return: + the encrypted data, as a byte string. + It is as long as *plaintext* with one exception: + when encrypting the first message chunk, + the encypted IV is prepended to the returned ciphertext. + """ + + res = self._cipher.encrypt(plaintext) + if not self._done_first_block: + res = self._encrypted_IV + res + self._done_first_block = True + return res + + def decrypt(self, ciphertext): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + + :Return: the decrypted data (byte string). + """ + + return self._cipher.decrypt(ciphertext) + + +def _create_openpgp_cipher(factory, **kwargs): + """Create a new block cipher, configured in OpenPGP mode. + + :Parameters: + factory : module + The module. + + :Keywords: + key : bytes/bytearray/memoryview + The secret key to use in the symmetric cipher. + + IV : bytes/bytearray/memoryview + The initialization vector to use for encryption or decryption. + + For encryption, the IV must be as long as the cipher block size. + + For decryption, it must be 2 bytes longer (it is actually the + *encrypted* IV which was prefixed to the ciphertext). + """ + + iv = kwargs.pop("IV", None) + IV = kwargs.pop("iv", None) + + if (None, None) == (iv, IV): + iv = get_random_bytes(factory.block_size) + if iv is not None: + if IV is not None: + raise TypeError("You must either use 'iv' or 'IV', not both") + else: + iv = IV + + try: + key = kwargs.pop("key") + except KeyError as e: + raise TypeError("Missing component: " + str(e)) + + return OpenPgpMode(factory, key, iv, kwargs) diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_openpgp.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_openpgp.pyi new file mode 100644 index 00000000..14b81058 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_openpgp.pyi @@ -0,0 +1,20 @@ +from types import ModuleType +from typing import Union, Dict + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['OpenPgpMode'] + +class OpenPgpMode(object): + block_size: int + iv: Union[bytes, bytearray, memoryview] + IV: Union[bytes, bytearray, memoryview] + + def __init__(self, + factory: ModuleType, + key: Buffer, + iv: Buffer, + cipher_params: Dict) -> None: ... + def encrypt(self, plaintext: Buffer) -> bytes: ... + def decrypt(self, plaintext: Buffer) -> bytes: ... + diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_siv.py b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_siv.py new file mode 100644 index 00000000..d1eca2a7 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_siv.py @@ -0,0 +1,392 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. 
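
A typical round trip through the OpenPGP mode defined above, using illustrative values (for a 16-byte block cipher the encrypted IV prefix is 18 bytes):

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
enc = AES.new(key, AES.MODE_OPENPGP)          # random IV generated internally
ct = enc.encrypt(b"PGP payload")              # first chunk: encrypted IV + data
eiv, body = ct[:18], ct[18:]

dec = AES.new(key, AES.MODE_OPENPGP, IV=eiv)  # pass back the *encrypted* IV
assert dec.decrypt(body) == b"PGP payload"
```
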
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Synthetic Initialization Vector (SIV) mode. +""" + +__all__ = ['SivMode'] + +from binascii import hexlify, unhexlify + +from Crypto.Util.py3compat import bord, _copy_bytes + +from Crypto.Util._raw_api import is_buffer + +from Crypto.Util.number import long_to_bytes, bytes_to_long +from Crypto.Protocol.KDF import _S2V +from Crypto.Hash import BLAKE2s +from Crypto.Random import get_random_bytes + + +class SivMode(object): + """Synthetic Initialization Vector (SIV). + + This is an Authenticated Encryption with Associated Data (`AEAD`_) mode. + It provides both confidentiality and authenticity. + + The header of the message may be left in the clear, if needed, and it will + still be subject to authentication. The decryption step tells the receiver + if the message comes from a source that really knowns the secret key. + Additionally, decryption detects if any part of the message - including the + header - has been modified or corrupted. + + Unlike other AEAD modes such as CCM, EAX or GCM, accidental reuse of a + nonce is not catastrophic for the confidentiality of the message. The only + effect is that an attacker can tell when the same plaintext (and same + associated data) is protected with the same key. + + The length of the MAC is fixed to the block size of the underlying cipher. + The key size is twice the length of the key of the underlying cipher. + + This mode is only available for AES ciphers. + + +--------------------+---------------+-------------------+ + | Cipher | SIV MAC size | SIV key length | + | | (bytes) | (bytes) | + +====================+===============+===================+ + | AES-128 | 16 | 32 | + +--------------------+---------------+-------------------+ + | AES-192 | 16 | 48 | + +--------------------+---------------+-------------------+ + | AES-256 | 16 | 64 | + +--------------------+---------------+-------------------+ + + See `RFC5297`_ and the `original paper`__. + + .. _RFC5297: https://tools.ietf.org/html/rfc5297 + .. _AEAD: http://blog.cryptographyengineering.com/2012/05/how-to-choose-authenticated-encryption.html + .. 
__: http://www.cs.ucdavis.edu/~rogaway/papers/keywrap.pdf + + :undocumented: __init__ + """ + + def __init__(self, factory, key, nonce, kwargs): + + self.block_size = factory.block_size + """The block size of the underlying cipher, in bytes.""" + + self._factory = factory + + self._cipher_params = kwargs + + if len(key) not in (32, 48, 64): + raise ValueError("Incorrect key length (%d bytes)" % len(key)) + + if nonce is not None: + if not is_buffer(nonce): + raise TypeError("When provided, the nonce must be bytes, bytearray or memoryview") + + if len(nonce) == 0: + raise ValueError("When provided, the nonce must be non-empty") + + self.nonce = _copy_bytes(None, None, nonce) + """Public attribute is only available in case of non-deterministic + encryption.""" + + subkey_size = len(key) // 2 + + self._mac_tag = None # Cache for MAC tag + self._kdf = _S2V(key[:subkey_size], + ciphermod=factory, + cipher_params=self._cipher_params) + self._subkey_cipher = key[subkey_size:] + + # Purely for the purpose of verifying that cipher_params are OK + factory.new(key[:subkey_size], factory.MODE_ECB, **kwargs) + + # Allowed transitions after initialization + self._next = [self.update, self.encrypt, self.decrypt, + self.digest, self.verify] + + def _create_ctr_cipher(self, v): + """Create a new CTR cipher from V in SIV mode""" + + v_int = bytes_to_long(v) + q = v_int & 0xFFFFFFFFFFFFFFFF7FFFFFFF7FFFFFFF + return self._factory.new( + self._subkey_cipher, + self._factory.MODE_CTR, + initial_value=q, + nonce=b"", + **self._cipher_params) + + def update(self, component): + """Protect one associated data component + + For SIV, the associated data is a sequence (*vector*) of non-empty + byte strings (*components*). + + This method consumes the next component. It must be called + once for each of the components that constitue the associated data. + + Note that the components have clear boundaries, so that: + + >>> cipher.update(b"builtin") + >>> cipher.update(b"securely") + + is not equivalent to: + + >>> cipher.update(b"built") + >>> cipher.update(b"insecurely") + + If there is no associated data, this method must not be called. + + :Parameters: + component : bytes/bytearray/memoryview + The next associated data component. + """ + + if self.update not in self._next: + raise TypeError("update() can only be called" + " immediately after initialization") + + self._next = [self.update, self.encrypt, self.decrypt, + self.digest, self.verify] + + return self._kdf.update(component) + + def encrypt(self, plaintext): + """ + For SIV, encryption and MAC authentication must take place at the same + point. This method shall not be used. + + Use `encrypt_and_digest` instead. + """ + + raise TypeError("encrypt() not allowed for SIV mode." + " Use encrypt_and_digest() instead.") + + def decrypt(self, ciphertext): + """ + For SIV, decryption and verification must take place at the same + point. This method shall not be used. + + Use `decrypt_and_verify` instead. + """ + + raise TypeError("decrypt() not allowed for SIV mode." + " Use decrypt_and_verify() instead.") + + def digest(self): + """Compute the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method returns the MAC that shall be sent to the receiver, + together with the ciphertext. + + :Return: the MAC, as a byte string. 
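
The masking constant in `_create_ctr_cipher` above implements the counter clamping of RFC 5297: the most significant bit of each of the last two 32-bit words of V is cleared before V is used as the initial CTR block, so implementations may increment the counter as a plain 32- or 64-bit integer without carrying into the preserved bits. Distilled:

```python
def siv_initial_counter(v: bytes) -> int:
    # Q = V AND (1^64 || 0 1^31 || 0 1^31), per RFC 5297.
    return int.from_bytes(v, "big") & 0xFFFFFFFFFFFFFFFF7FFFFFFF7FFFFFFF
```
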
+ """ + + if self.digest not in self._next: + raise TypeError("digest() cannot be called when decrypting" + " or validating a message") + self._next = [self.digest] + if self._mac_tag is None: + self._mac_tag = self._kdf.derive() + return self._mac_tag + + def hexdigest(self): + """Compute the *printable* MAC tag. + + This method is like `digest`. + + :Return: the MAC, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method checks if the decrypted message is indeed valid + (that is, if the key is correct) and it has not been + tampered with while in transit. + + :Parameters: + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if self.verify not in self._next: + raise TypeError("verify() cannot be called" + " when encrypting a message") + self._next = [self.verify] + + if self._mac_tag is None: + self._mac_tag = self._kdf.derive() + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=self._mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* MAC tag. + + This method is like `verify`. + + :Parameters: + hex_mac_tag : string + This is the *printable* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext, output=None): + """Perform encrypt() and digest() in one step. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + a tuple with two items: + + - the ciphertext, as ``bytes`` + - the MAC tag, as ``bytes`` + + The first item becomes ``None`` when the ``output`` parameter + specified a location for the result. + """ + + if self.encrypt not in self._next: + raise TypeError("encrypt() can only be called after" + " initialization or an update()") + + self._next = [ self.digest ] + + # Compute V (MAC) + if hasattr(self, 'nonce'): + self._kdf.update(self.nonce) + self._kdf.update(plaintext) + self._mac_tag = self._kdf.derive() + + cipher = self._create_ctr_cipher(self._mac_tag) + + return cipher.encrypt(plaintext, output=output), self._mac_tag + + def decrypt_and_verify(self, ciphertext, mac_tag, output=None): + """Perform decryption and verification in one step. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + You cannot reuse an object for encrypting + or decrypting other data with the same key. + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. 
+            If ``None``, the plaintext is returned.
+        :Return: the plaintext as ``bytes`` or ``None`` when the ``output``
+            parameter specified a location for the result.
+        :Raises ValueError:
+            if the MAC does not match. The message has been tampered with
+            or the key is incorrect.
+        """
+
+        if self.decrypt not in self._next:
+            raise TypeError("decrypt() can only be called"
+                            " after initialization or an update()")
+        self._next = [self.verify]
+
+        # Take the MAC and start the cipher for decryption
+        self._cipher = self._create_ctr_cipher(mac_tag)
+
+        plaintext = self._cipher.decrypt(ciphertext, output=output)
+
+        if hasattr(self, 'nonce'):
+            self._kdf.update(self.nonce)
+        self._kdf.update(plaintext if output is None else output)
+        self.verify(mac_tag)
+
+        return plaintext
+
+
+def _create_siv_cipher(factory, **kwargs):
+    """Create a new block cipher, configured in
+    Synthetic Initialization Vector (SIV) mode.
+
+    :Parameters:
+
+      factory : object
+        A symmetric cipher module from `Crypto.Cipher`
+        (like `Crypto.Cipher.AES`).
+
+    :Keywords:
+
+      key : bytes/bytearray/memoryview
+        The secret key to use in the symmetric cipher.
+        It must be 32, 48 or 64 bytes long.
+        If AES is the chosen cipher, the variants *AES-128*,
+        *AES-192* or *AES-256* will be used internally.
+
+      nonce : bytes/bytearray/memoryview
+        For deterministic encryption, it is not present.
+
+        Otherwise, it is a value that must never be reused
+        for encrypting messages under this key.
+
+        There are no restrictions on its length,
+        but it is recommended to use at least 16 bytes.
+    """
+
+    try:
+        key = kwargs.pop("key")
+    except KeyError as e:
+        raise TypeError("Missing parameter: " + str(e))
+
+    nonce = kwargs.pop("nonce", None)
+
+    return SivMode(factory, key, nonce, kwargs)
diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_siv.pyi b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_siv.pyi
new file mode 100644
index 00000000..2934f23d
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Cipher/_mode_siv.pyi
@@ -0,0 +1,38 @@
+from types import ModuleType
+from typing import Union, Tuple, Dict, Optional, overload
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+__all__ = ['SivMode']
+
+class SivMode(object):
+    block_size: int
+    nonce: bytes
+
+    def __init__(self,
+                 factory: ModuleType,
+                 key: Buffer,
+                 nonce: Buffer,
+                 kwargs: Dict) -> None: ...
+
+    def update(self, component: Buffer) -> SivMode: ...
+
+    def encrypt(self, plaintext: Buffer) -> bytes: ...
+    def decrypt(self, plaintext: Buffer) -> bytes: ...
+
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def verify(self, received_mac_tag: Buffer) -> None: ...
+    def hexverify(self, hex_mac_tag: str) -> None: ...
+
+    @overload
+    def encrypt_and_digest(self,
+                           plaintext: Buffer) -> Tuple[bytes, bytes]: ...
+    @overload
+    def encrypt_and_digest(self,
+                           plaintext: Buffer,
+                           output: Buffer) -> Tuple[None, bytes]: ...
+    def decrypt_and_verify(self,
+                           ciphertext: Buffer,
+                           received_mac_tag: Buffer,
+                           output: Optional[Union[bytearray, memoryview]] = ...) -> bytes: ...
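For orientation, here is a minimal usage sketch of the SIV mode implemented above. SIV is reached through the regular cipher factory (for example ``AES.new`` with ``AES.MODE_SIV``); the key, nonce and messages below are illustrative only.

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)    # split internally into two 16-byte subkeys
nonce = get_random_bytes(16)  # optional: omit it for deterministic encryption

# Encrypt: feed associated data via update(), then one-shot encrypt_and_digest()
cipher = AES.new(key, AES.MODE_SIV, nonce=nonce)
cipher.update(b"associated data")
ciphertext, tag = cipher.encrypt_and_digest(b"secret payload")

# Decrypt: a fresh object with the same key, nonce and associated data
cipher = AES.new(key, AES.MODE_SIV, nonce=nonce)
cipher.update(b"associated data")
plaintext = cipher.decrypt_and_verify(ciphertext, tag)  # raises ValueError on a bad tag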
diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_pkcs1_decode.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_pkcs1_decode.abi3.so new file mode 100644 index 00000000..551a4c23 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_pkcs1_decode.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_aes.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_aes.abi3.so new file mode 100644 index 00000000..42c79195 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_aes.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_aesni.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_aesni.abi3.so new file mode 100644 index 00000000..35701e41 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_aesni.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_arc2.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_arc2.abi3.so new file mode 100644 index 00000000..acca2551 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_arc2.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_blowfish.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_blowfish.abi3.so new file mode 100644 index 00000000..2f4ddcfe Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_blowfish.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_cast.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_cast.abi3.so new file mode 100644 index 00000000..e8e5722e Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_cast.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_cbc.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_cbc.abi3.so new file mode 100644 index 00000000..7ef5187c Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_cbc.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_cfb.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_cfb.abi3.so new file mode 100644 index 00000000..45e31e12 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_cfb.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ctr.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ctr.abi3.so new file mode 100644 index 00000000..b12ffc4c Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ctr.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_des.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_des.abi3.so new file mode 100644 index 00000000..84d47d1c Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_des.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_des3.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_des3.abi3.so new file mode 100644 index 00000000..c11ef96d Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_des3.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ecb.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ecb.abi3.so new file mode 100644 index 00000000..44f0b0dd Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ecb.abi3.so differ diff --git 
a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_eksblowfish.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_eksblowfish.abi3.so new file mode 100644 index 00000000..acb074fe Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_eksblowfish.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ocb.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ocb.abi3.so new file mode 100644 index 00000000..0c88d184 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ocb.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ofb.abi3.so b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ofb.abi3.so new file mode 100644 index 00000000..eeb94e71 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Cipher/_raw_ofb.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2b.py b/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2b.py new file mode 100644 index 00000000..a00e0b41 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2b.py @@ -0,0 +1,247 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from binascii import unhexlify + +from Crypto.Util.py3compat import bord, tobytes + +from Crypto.Random import get_random_bytes +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_blake2b_lib = load_pycryptodome_raw_lib("Crypto.Hash._BLAKE2b", + """ + int blake2b_init(void **state, + const uint8_t *key, + size_t key_size, + size_t digest_size); + int blake2b_destroy(void *state); + int blake2b_update(void *state, + const uint8_t *buf, + size_t len); + int blake2b_digest(const void *state, + uint8_t digest[64]); + int blake2b_copy(const void *src, void *dst); + """) + + +class BLAKE2b_Hash(object): + """A BLAKE2b hash object. + Do not instantiate directly. Use the :func:`new` function. 
+ + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The internal block size of the hash algorithm in bytes. + block_size = 64 + + def __init__(self, data, key, digest_bytes, update_after_digest): + + # The size of the resulting hash in bytes. + self.digest_size = digest_bytes + + self._update_after_digest = update_after_digest + self._digest_done = False + + # See https://tools.ietf.org/html/rfc7693 + if digest_bytes in (20, 32, 48, 64) and not key: + self.oid = "1.3.6.1.4.1.1722.12.2.1." + str(digest_bytes) + + state = VoidPointer() + result = _raw_blake2b_lib.blake2b_init(state.address_of(), + c_uint8_ptr(key), + c_size_t(len(key)), + c_size_t(digest_bytes) + ) + if result: + raise ValueError("Error %d while instantiating BLAKE2b" % result) + self._state = SmartPointer(state.get(), + _raw_blake2b_lib.blake2b_destroy) + if data: + self.update(data) + + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (bytes/bytearray/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_blake2b_lib.blake2b_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing BLAKE2b data" % result) + return self + + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(64) + result = _raw_blake2b_lib.blake2b_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while creating BLAKE2b digest" % result) + + self._digest_done = True + + return get_raw_buffer(bfr)[:self.digest_size] + + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in tuple(self.digest())]) + + + def verify(self, mac_tag): + """Verify that a given **binary** MAC (computed by another party) + is valid. + + Args: + mac_tag (bytes/bytearray/memoryview): the expected MAC of the message. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + secret = get_random_bytes(16) + + mac1 = new(digest_bits=160, key=secret, data=mac_tag) + mac2 = new(digest_bits=160, key=secret, data=self.digest()) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + + def hexverify(self, hex_mac_tag): + """Verify that a given **printable** MAC (computed by another party) + is valid. + + Args: + hex_mac_tag (string): the expected MAC of the message, as a hexadecimal string. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + self.verify(unhexlify(tobytes(hex_mac_tag))) + + + def new(self, **kwargs): + """Return a new instance of a BLAKE2b hash object. + See :func:`new`. 
+ """ + + if "digest_bytes" not in kwargs and "digest_bits" not in kwargs: + kwargs["digest_bytes"] = self.digest_size + + return new(**kwargs) + + +def new(**kwargs): + """Create a new hash object. + + Args: + data (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`BLAKE2b_Hash.update`. + digest_bytes (integer): + Optional. The size of the digest, in bytes (1 to 64). Default is 64. + digest_bits (integer): + Optional and alternative to ``digest_bytes``. + The size of the digest, in bits (8 to 512, in steps of 8). + Default is 512. + key (bytes/bytearray/memoryview): + Optional. The key to use to compute the MAC (1 to 64 bytes). + If not specified, no key will be used. + update_after_digest (boolean): + Optional. By default, a hash object cannot be updated anymore after + the digest is computed. When this flag is ``True``, such check + is no longer enforced. + + Returns: + A :class:`BLAKE2b_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + + digest_bytes = kwargs.pop("digest_bytes", None) + digest_bits = kwargs.pop("digest_bits", None) + if None not in (digest_bytes, digest_bits): + raise TypeError("Only one digest parameter must be provided") + if (None, None) == (digest_bytes, digest_bits): + digest_bytes = 64 + if digest_bytes is not None: + if not (1 <= digest_bytes <= 64): + raise ValueError("'digest_bytes' not in range 1..64") + else: + if not (8 <= digest_bits <= 512) or (digest_bits % 8): + raise ValueError("'digest_bytes' not in range 8..512, " + "with steps of 8") + digest_bytes = digest_bits // 8 + + key = kwargs.pop("key", b"") + if len(key) > 64: + raise ValueError("BLAKE2s key cannot exceed 64 bytes") + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return BLAKE2b_Hash(data, key, digest_bytes, update_after_digest) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2b.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2b.pyi new file mode 100644 index 00000000..d37c3749 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2b.pyi @@ -0,0 +1,32 @@ +from typing import Any, Union +from types import ModuleType + +Buffer = Union[bytes, bytearray, memoryview] + +class BLAKE2b_Hash(object): + block_size: int + digest_size: int + oid: str + + def __init__(self, + data: Buffer, + key: Buffer, + digest_bytes: bytes, + update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> BLAKE2b_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + def new(self, + data: Buffer = ..., + digest_bytes: int = ..., + digest_bits: int = ..., + key: Buffer = ..., + update_after_digest: bool = ...) -> BLAKE2b_Hash: ... + +def new(data: Buffer = ..., + digest_bytes: int = ..., + digest_bits: int = ..., + key: Buffer = ..., + update_after_digest: bool = ...) -> BLAKE2b_Hash: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2s.py b/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2s.py new file mode 100644 index 00000000..9b25c4a5 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2s.py @@ -0,0 +1,247 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from binascii import unhexlify + +from Crypto.Util.py3compat import bord, tobytes + +from Crypto.Random import get_random_bytes +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_blake2s_lib = load_pycryptodome_raw_lib("Crypto.Hash._BLAKE2s", + """ + int blake2s_init(void **state, + const uint8_t *key, + size_t key_size, + size_t digest_size); + int blake2s_destroy(void *state); + int blake2s_update(void *state, + const uint8_t *buf, + size_t len); + int blake2s_digest(const void *state, + uint8_t digest[32]); + int blake2s_copy(const void *src, void *dst); + """) + + +class BLAKE2s_Hash(object): + """A BLAKE2s hash object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The internal block size of the hash algorithm in bytes. + block_size = 32 + + def __init__(self, data, key, digest_bytes, update_after_digest): + + # The size of the resulting hash in bytes. + self.digest_size = digest_bytes + + self._update_after_digest = update_after_digest + self._digest_done = False + + # See https://tools.ietf.org/html/rfc7693 + if digest_bytes in (16, 20, 28, 32) and not key: + self.oid = "1.3.6.1.4.1.1722.12.2.2." + str(digest_bytes) + + state = VoidPointer() + result = _raw_blake2s_lib.blake2s_init(state.address_of(), + c_uint8_ptr(key), + c_size_t(len(key)), + c_size_t(digest_bytes) + ) + if result: + raise ValueError("Error %d while instantiating BLAKE2s" % result) + self._state = SmartPointer(state.get(), + _raw_blake2s_lib.blake2s_destroy) + if data: + self.update(data) + + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. 
+ """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_blake2s_lib.blake2s_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing BLAKE2s data" % result) + return self + + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(32) + result = _raw_blake2s_lib.blake2s_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while creating BLAKE2s digest" % result) + + self._digest_done = True + + return get_raw_buffer(bfr)[:self.digest_size] + + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in tuple(self.digest())]) + + + def verify(self, mac_tag): + """Verify that a given **binary** MAC (computed by another party) + is valid. + + Args: + mac_tag (byte string/byte array/memoryview): the expected MAC of the message. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + secret = get_random_bytes(16) + + mac1 = new(digest_bits=160, key=secret, data=mac_tag) + mac2 = new(digest_bits=160, key=secret, data=self.digest()) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + + def hexverify(self, hex_mac_tag): + """Verify that a given **printable** MAC (computed by another party) + is valid. + + Args: + hex_mac_tag (string): the expected MAC of the message, as a hexadecimal string. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + self.verify(unhexlify(tobytes(hex_mac_tag))) + + + def new(self, **kwargs): + """Return a new instance of a BLAKE2s hash object. + See :func:`new`. + """ + + if "digest_bytes" not in kwargs and "digest_bits" not in kwargs: + kwargs["digest_bytes"] = self.digest_size + + return new(**kwargs) + + +def new(**kwargs): + """Create a new hash object. + + Args: + data (byte string/byte array/memoryview): + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`BLAKE2s_Hash.update`. + digest_bytes (integer): + Optional. The size of the digest, in bytes (1 to 32). Default is 32. + digest_bits (integer): + Optional and alternative to ``digest_bytes``. + The size of the digest, in bits (8 to 256, in steps of 8). + Default is 256. + key (byte string): + Optional. The key to use to compute the MAC (1 to 64 bytes). + If not specified, no key will be used. + update_after_digest (boolean): + Optional. By default, a hash object cannot be updated anymore after + the digest is computed. When this flag is ``True``, such check + is no longer enforced. 
+ + Returns: + A :class:`BLAKE2s_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + + digest_bytes = kwargs.pop("digest_bytes", None) + digest_bits = kwargs.pop("digest_bits", None) + if None not in (digest_bytes, digest_bits): + raise TypeError("Only one digest parameter must be provided") + if (None, None) == (digest_bytes, digest_bits): + digest_bytes = 32 + if digest_bytes is not None: + if not (1 <= digest_bytes <= 32): + raise ValueError("'digest_bytes' not in range 1..32") + else: + if not (8 <= digest_bits <= 256) or (digest_bits % 8): + raise ValueError("'digest_bytes' not in range 8..256, " + "with steps of 8") + digest_bytes = digest_bits // 8 + + key = kwargs.pop("key", b"") + if len(key) > 32: + raise ValueError("BLAKE2s key cannot exceed 32 bytes") + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return BLAKE2s_Hash(data, key, digest_bytes, update_after_digest) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2s.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2s.pyi new file mode 100644 index 00000000..374b3a43 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/BLAKE2s.pyi @@ -0,0 +1,26 @@ +from typing import Any, Union + +Buffer = Union[bytes, bytearray, memoryview] + +class BLAKE2s_Hash(object): + block_size: int + digest_size: int + oid: str + + def __init__(self, + data: Buffer, + key: Buffer, + digest_bytes: bytes, + update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> BLAKE2s_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + def new(self, **kwargs: Any) -> BLAKE2s_Hash: ... + +def new(data: Buffer = ..., + digest_bytes: int = ..., + digest_bits: int = ..., + key: Buffer = ..., + update_after_digest: bool = ...) -> BLAKE2s_Hash: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/CMAC.py b/env/lib/python3.8/site-packages/Crypto/Hash/CMAC.py new file mode 100644 index 00000000..7585617b --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/CMAC.py @@ -0,0 +1,302 @@ +# -*- coding: utf-8 -*- +# +# Hash/CMAC.py - Implements the CMAC algorithm +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +from binascii import unhexlify + +from Crypto.Hash import BLAKE2s +from Crypto.Util.strxor import strxor +from Crypto.Util.number import long_to_bytes, bytes_to_long +from Crypto.Util.py3compat import bord, tobytes, _copy_bytes +from Crypto.Random import get_random_bytes + + +# The size of the authentication tag produced by the MAC. +digest_size = None + + +def _shift_bytes(bs, xor_lsb=0): + num = (bytes_to_long(bs) << 1) ^ xor_lsb + return long_to_bytes(num, len(bs))[-len(bs):] + + +class CMAC(object): + """A CMAC hash object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar digest_size: the size in bytes of the resulting MAC tag + :vartype digest_size: integer + """ + + digest_size = None + + def __init__(self, key, msg, ciphermod, cipher_params, mac_len, + update_after_digest): + + self.digest_size = mac_len + + self._key = _copy_bytes(None, None, key) + self._factory = ciphermod + self._cipher_params = cipher_params + self._block_size = bs = ciphermod.block_size + self._mac_tag = None + self._update_after_digest = update_after_digest + + # Section 5.3 of NIST SP 800 38B and Appendix B + if bs == 8: + const_Rb = 0x1B + self._max_size = 8 * (2 ** 21) + elif bs == 16: + const_Rb = 0x87 + self._max_size = 16 * (2 ** 48) + else: + raise TypeError("CMAC requires a cipher with a block size" + " of 8 or 16 bytes, not %d" % bs) + + # Compute sub-keys + zero_block = b'\x00' * bs + self._ecb = ciphermod.new(key, + ciphermod.MODE_ECB, + **self._cipher_params) + L = self._ecb.encrypt(zero_block) + if bord(L[0]) & 0x80: + self._k1 = _shift_bytes(L, const_Rb) + else: + self._k1 = _shift_bytes(L) + if bord(self._k1[0]) & 0x80: + self._k2 = _shift_bytes(self._k1, const_Rb) + else: + self._k2 = _shift_bytes(self._k1) + + # Initialize CBC cipher with zero IV + self._cbc = ciphermod.new(key, + ciphermod.MODE_CBC, + zero_block, + **self._cipher_params) + + # Cache for outstanding data to authenticate + self._cache = bytearray(bs) + self._cache_n = 0 + + # Last piece of ciphertext produced + self._last_ct = zero_block + + # Last block that was encrypted with AES + self._last_pt = None + + # Counter for total message size + self._data_size = 0 + + if msg: + self.update(msg) + + def update(self, msg): + """Authenticate the next chunk of message. 
+
+        Args:
+            msg (byte string/byte array/memoryview): The next chunk of data
+        """
+
+        if self._mac_tag is not None and not self._update_after_digest:
+            raise TypeError("update() cannot be called after digest() or verify()")
+
+        self._data_size += len(msg)
+        bs = self._block_size
+
+        if self._cache_n > 0:
+            filler = min(bs - self._cache_n, len(msg))
+            self._cache[self._cache_n:self._cache_n+filler] = msg[:filler]
+            self._cache_n += filler
+
+            if self._cache_n < bs:
+                return self
+
+            msg = memoryview(msg)[filler:]
+            self._update(self._cache)
+            self._cache_n = 0
+
+        remain = len(msg) % bs
+        if remain > 0:
+            self._update(msg[:-remain])
+            self._cache[:remain] = msg[-remain:]
+        else:
+            self._update(msg)
+        self._cache_n = remain
+        return self
+
+    def _update(self, data_block):
+        """Update a block aligned to the block boundary"""
+
+        bs = self._block_size
+        assert len(data_block) % bs == 0
+
+        if len(data_block) == 0:
+            return
+
+        ct = self._cbc.encrypt(data_block)
+        if len(data_block) == bs:
+            second_last = self._last_ct
+        else:
+            second_last = ct[-bs*2:-bs]
+        self._last_ct = ct[-bs:]
+        self._last_pt = strxor(second_last, data_block[-bs:])
+
+    def copy(self):
+        """Return a copy ("clone") of the CMAC object.
+
+        The copy will have the same internal state as the original CMAC
+        object.
+        This can be used to efficiently compute the MAC tag of byte
+        strings that share a common initial substring.
+
+        :return: An :class:`CMAC`
+        """
+
+        obj = self.__new__(CMAC)
+        obj.__dict__ = self.__dict__.copy()
+        obj._cbc = self._factory.new(self._key,
+                                     self._factory.MODE_CBC,
+                                     self._last_ct,
+                                     **self._cipher_params)
+        obj._cache = self._cache[:]
+        obj._last_ct = self._last_ct[:]
+        return obj
+
+    def digest(self):
+        """Return the **binary** (non-printable) MAC tag of the message
+        that has been authenticated so far.
+
+        :return: The MAC tag, computed over the data processed so far.
+                 Binary form.
+        :rtype: byte string
+        """
+
+        bs = self._block_size
+
+        if self._mac_tag is not None and not self._update_after_digest:
+            return self._mac_tag
+
+        if self._data_size > self._max_size:
+            raise ValueError("MAC is unsafe for this message")
+
+        if self._cache_n == 0 and self._data_size > 0:
+            # Last block was full
+            pt = strxor(self._last_pt, self._k1)
+        else:
+            # Last block is partial (or message length is zero)
+            partial = self._cache[:]
+            partial[self._cache_n:] = b'\x80' + b'\x00' * (bs - self._cache_n - 1)
+            pt = strxor(strxor(self._last_ct, partial), self._k2)
+
+        self._mac_tag = self._ecb.encrypt(pt)[:self.digest_size]
+
+        return self._mac_tag
+
+    def hexdigest(self):
+        """Return the **printable** MAC tag of the message authenticated so far.
+
+        :return: The MAC tag, computed over the data processed so far.
+                 Hexadecimal encoded.
+        :rtype: string
+        """
+
+        return "".join(["%02x" % bord(x)
+                        for x in tuple(self.digest())])
+
+    def verify(self, mac_tag):
+        """Verify that a given **binary** MAC (computed by another party)
+        is valid.
+
+        Args:
+          mac_tag (byte string/byte array/memoryview): the expected MAC of the message.
+
+        Raises:
+            ValueError: if the MAC does not match. It means that the message
+                has been tampered with or that the MAC key is incorrect.
+        """
+
+        secret = get_random_bytes(16)
+
+        mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=mac_tag)
+        mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=self.digest())
+
+        if mac1.digest() != mac2.digest():
+            raise ValueError("MAC check failed")
+
+    def hexverify(self, hex_mac_tag):
+        """Verify that a given **printable** MAC (computed by another party)
+        is valid.
+
+        Args:
+            hex_mac_tag (string): the expected MAC of the message,
+                as a hexadecimal string.
+
+        Raises:
+            ValueError: if the MAC does not match. It means that the message
+                has been tampered with or that the MAC key is incorrect.
+        """
+
+        self.verify(unhexlify(tobytes(hex_mac_tag)))
+
+
+def new(key, msg=None, ciphermod=None, cipher_params=None, mac_len=None,
+        update_after_digest=False):
+    """Create a new MAC object.
+
+    Args:
+        key (byte string/byte array/memoryview):
+            key for the CMAC object.
+            The key must be valid for the underlying cipher algorithm.
+            For instance, it must be 16 bytes long for AES-128.
+        ciphermod (module):
+            A cipher module from :mod:`Crypto.Cipher`.
+            The cipher's block size has to be 128 bits,
+            like :mod:`Crypto.Cipher.AES`, to reduce the probability
+            of collisions.
+        msg (byte string/byte array/memoryview):
+            Optional. The very first chunk of the message to authenticate.
+            It is equivalent to an early call to `CMAC.update`.
+        cipher_params (dict):
+            Optional. A set of parameters to use when instantiating a cipher
+            object.
+        mac_len (integer):
+            Length of the MAC, in bytes.
+            It must be at least 4 bytes long.
+            The default (and recommended) length matches the size of a cipher block.
+        update_after_digest (boolean):
+            Optional. By default, a hash object cannot be updated anymore after
+            the digest is computed. When this flag is ``True``, such check
+            is no longer enforced.
+    Returns:
+        A :class:`CMAC` object
+    """
+
+    if ciphermod is None:
+        raise TypeError("ciphermod must be specified (try AES)")
+
+    cipher_params = {} if cipher_params is None else dict(cipher_params)
+
+    if mac_len is None:
+        mac_len = ciphermod.block_size
+
+    if mac_len < 4:
+        raise ValueError("MAC tag length must be at least 4 bytes long")
+
+    if mac_len > ciphermod.block_size:
+        raise ValueError("MAC tag length cannot be larger than a cipher block (%d) bytes" % ciphermod.block_size)
+
+    return CMAC(key, msg, ciphermod, cipher_params, mac_len,
+                update_after_digest)
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/CMAC.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/CMAC.pyi
new file mode 100644
index 00000000..acdf0553
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/CMAC.pyi
@@ -0,0 +1,30 @@
+from types import ModuleType
+from typing import Union, Dict, Any
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+digest_size: int
+
+class CMAC(object):
+    digest_size: int
+
+    def __init__(self,
+                 key: Buffer,
+                 msg: Buffer,
+                 ciphermod: ModuleType,
+                 cipher_params: Dict[str, Any],
+                 mac_len: int, update_after_digest: bool) -> None: ...
+    def update(self, data: Buffer) -> CMAC: ...
+    def copy(self) -> CMAC: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def verify(self, mac_tag: Buffer) -> None: ...
+    def hexverify(self, hex_mac_tag: str) -> None: ...
+
+
+def new(key: Buffer,
+        msg: Buffer = ...,
+        ciphermod: ModuleType = ...,
+        cipher_params: Dict[str, Any] = ...,
+        mac_len: int = ...,
+        update_after_digest: bool = ...) -> CMAC: ...
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/HMAC.py b/env/lib/python3.8/site-packages/Crypto/Hash/HMAC.py
new file mode 100644
index 00000000..e82bb9d3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/HMAC.py
@@ -0,0 +1,213 @@
+#
+# HMAC.py - Implements the HMAC algorithm as described by RFC 2104.
+#
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1.
Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util.py3compat import bord, tobytes + +from binascii import unhexlify + +from Crypto.Hash import MD5 +from Crypto.Hash import BLAKE2s +from Crypto.Util.strxor import strxor +from Crypto.Random import get_random_bytes + +__all__ = ['new', 'HMAC'] + + +class HMAC(object): + """An HMAC hash object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar digest_size: the size in bytes of the resulting MAC tag + :vartype digest_size: integer + """ + + def __init__(self, key, msg=b"", digestmod=None): + + if digestmod is None: + digestmod = MD5 + + if msg is None: + msg = b"" + + # Size of the MAC tag + self.digest_size = digestmod.digest_size + + self._digestmod = digestmod + + if isinstance(key, memoryview): + key = key.tobytes() + + try: + if len(key) <= digestmod.block_size: + # Step 1 or 2 + key_0 = key + b"\x00" * (digestmod.block_size - len(key)) + else: + # Step 3 + hash_k = digestmod.new(key).digest() + key_0 = hash_k + b"\x00" * (digestmod.block_size - len(hash_k)) + except AttributeError: + # Not all hash types have "block_size" + raise ValueError("Hash type incompatible to HMAC") + + # Step 4 + key_0_ipad = strxor(key_0, b"\x36" * len(key_0)) + + # Start step 5 and 6 + self._inner = digestmod.new(key_0_ipad) + self._inner.update(msg) + + # Step 7 + key_0_opad = strxor(key_0, b"\x5c" * len(key_0)) + + # Start step 8 and 9 + self._outer = digestmod.new(key_0_opad) + + def update(self, msg): + """Authenticate the next chunk of message. + + Args: + data (byte string/byte array/memoryview): The next chunk of data + """ + + self._inner.update(msg) + return self + + def _pbkdf2_hmac_assist(self, first_digest, iterations): + """Carry out the expensive inner loop for PBKDF2-HMAC""" + + result = self._digestmod._pbkdf2_hmac_assist( + self._inner, + self._outer, + first_digest, + iterations) + return result + + def copy(self): + """Return a copy ("clone") of the HMAC object. + + The copy will have the same internal state as the original HMAC + object. + This can be used to efficiently compute the MAC tag of byte + strings that share a common initial substring. 
+
+        :return: An :class:`HMAC`
+        """
+
+        new_hmac = HMAC(b"fake key", digestmod=self._digestmod)
+
+        # Synchronize the state
+        new_hmac._inner = self._inner.copy()
+        new_hmac._outer = self._outer.copy()
+
+        return new_hmac
+
+    def digest(self):
+        """Return the **binary** (non-printable) MAC tag of the message
+        authenticated so far.
+
+        :return: The MAC tag digest, computed over the data processed so far.
+                 Binary form.
+        :rtype: byte string
+        """
+
+        frozen_outer_hash = self._outer.copy()
+        frozen_outer_hash.update(self._inner.digest())
+        return frozen_outer_hash.digest()
+
+    def verify(self, mac_tag):
+        """Verify that a given **binary** MAC (computed by another party)
+        is valid.
+
+        Args:
+          mac_tag (byte string/byte array/memoryview): the expected MAC of the message.
+
+        Raises:
+            ValueError: if the MAC does not match. It means that the message
+                has been tampered with or that the MAC key is incorrect.
+        """
+
+        secret = get_random_bytes(16)
+
+        mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=mac_tag)
+        mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=self.digest())
+
+        if mac1.digest() != mac2.digest():
+            raise ValueError("MAC check failed")
+
+    def hexdigest(self):
+        """Return the **printable** MAC tag of the message authenticated so far.
+
+        :return: The MAC tag, computed over the data processed so far.
+                 Hexadecimal encoded.
+        :rtype: string
+        """
+
+        return "".join(["%02x" % bord(x)
+                        for x in tuple(self.digest())])
+
+    def hexverify(self, hex_mac_tag):
+        """Verify that a given **printable** MAC (computed by another party)
+        is valid.
+
+        Args:
+            hex_mac_tag (string): the expected MAC of the message,
+                as a hexadecimal string.
+
+        Raises:
+            ValueError: if the MAC does not match. It means that the message
+                has been tampered with or that the MAC key is incorrect.
+        """
+
+        self.verify(unhexlify(tobytes(hex_mac_tag)))
+
+
+def new(key, msg=b"", digestmod=None):
+    """Create a new MAC object.
+
+    Args:
+        key (bytes/bytearray/memoryview):
+            key for the MAC object.
+            It must be long enough to match the expected security level of the
+            MAC.
+        msg (bytes/bytearray/memoryview):
+            Optional. The very first chunk of the message to authenticate.
+            It is equivalent to an early call to :meth:`HMAC.update`.
+        digestmod (module):
+            The hash to use to implement the HMAC.
+            Default is :mod:`Crypto.Hash.MD5`.
+
+    Returns:
+        An :class:`HMAC` object
+    """
+
+    return HMAC(key, msg, digestmod)
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/HMAC.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/HMAC.pyi
new file mode 100644
index 00000000..b577230f
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/HMAC.pyi
@@ -0,0 +1,25 @@
+from types import ModuleType
+from typing import Union, Dict
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+digest_size: int
+
+class HMAC(object):
+    digest_size: int
+
+    def __init__(self,
+                 key: Buffer,
+                 msg: Buffer,
+                 digestmod: ModuleType) -> None: ...
+    def update(self, msg: Buffer) -> HMAC: ...
+    def copy(self) -> HMAC: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def verify(self, mac_tag: Buffer) -> None: ...
+    def hexverify(self, hex_mac_tag: str) -> None: ...
+
+
+def new(key: Buffer,
+        msg: Buffer = ...,
+        digestmod: ModuleType = ...) -> HMAC: ...
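A short usage sketch of the HMAC API defined above (key and message are illustrative; SHA256 is selected explicitly rather than relying on the legacy MD5 default):

from Crypto.Hash import HMAC, SHA256

key = b"sixteen byte key"

mac = HMAC.new(key, digestmod=SHA256)
mac.update(b"message to authenticate")
tag = mac.digest()

# The receiver recomputes the tag over the same message and verifies it;
# verify() compares through a randomized BLAKE2s MAC to avoid timing leaks.
verifier = HMAC.new(key, msg=b"message to authenticate", digestmod=SHA256)
verifier.verify(tag)  # raises ValueError if the tag does not match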
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/KMAC128.py b/env/lib/python3.8/site-packages/Crypto/Hash/KMAC128.py new file mode 100644 index 00000000..05061fc2 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/KMAC128.py @@ -0,0 +1,179 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from binascii import unhexlify + +from Crypto.Util.py3compat import bord, tobytes, is_bytes +from Crypto.Random import get_random_bytes + +from . import cSHAKE128, SHA3_256 +from .cSHAKE128 import _bytepad, _encode_str, _right_encode + + +class KMAC_Hash(object): + """A KMAC hash object. + Do not instantiate directly. + Use the :func:`new` function. + """ + + def __init__(self, data, key, mac_len, custom, + oid_variant, cshake, rate): + + # See https://tools.ietf.org/html/rfc8702 + self.oid = "2.16.840.1.101.3.4.2." + oid_variant + self.digest_size = mac_len + + self._mac = None + + partial_newX = _bytepad(_encode_str(tobytes(key)), rate) + self._cshake = cshake._new(partial_newX, custom, b"KMAC") + + if data: + self._cshake.update(data) + + def update(self, data): + """Authenticate the next chunk of message. + + Args: + data (bytes/bytearray/memoryview): The next chunk of the message to + authenticate. + """ + + if self._mac: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + self._cshake.update(data) + return self + + def digest(self): + """Return the **binary** (non-printable) MAC tag of the message. + + :return: The MAC tag. Binary form. + :rtype: byte string + """ + + if not self._mac: + self._cshake.update(_right_encode(self.digest_size * 8)) + self._mac = self._cshake.read(self.digest_size) + + return self._mac + + def hexdigest(self): + """Return the **printable** MAC tag of the message. + + :return: The MAC tag. Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in tuple(self.digest())]) + + def verify(self, mac_tag): + """Verify that a given **binary** MAC (computed by another party) + is valid. 
+ + Args: + mac_tag (bytes/bytearray/memoryview): the expected MAC of the message. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + secret = get_random_bytes(16) + + mac1 = SHA3_256.new(secret + mac_tag) + mac2 = SHA3_256.new(secret + self.digest()) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Verify that a given **printable** MAC (computed by another party) + is valid. + + Args: + hex_mac_tag (string): the expected MAC of the message, as a hexadecimal string. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + self.verify(unhexlify(tobytes(hex_mac_tag))) + + def new(self, **kwargs): + """Return a new instance of a KMAC hash object. + See :func:`new`. + """ + + if "mac_len" not in kwargs: + kwargs["mac_len"] = self.digest_size + + return new(**kwargs) + + +def new(**kwargs): + """Create a new KMAC128 object. + + Args: + key (bytes/bytearray/memoryview): + The key to use to compute the MAC. + It must be at least 128 bits long (16 bytes). + data (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to authenticate. + It is equivalent to an early call to :meth:`KMAC_Hash.update`. + mac_len (integer): + Optional. The size of the authentication tag, in bytes. + Default is 64. Minimum is 8. + custom (bytes/bytearray/memoryview): + Optional. A customization byte string (``S`` in SP 800-185). + + Returns: + A :class:`KMAC_Hash` hash object + """ + + key = kwargs.pop("key", None) + if not is_bytes(key): + raise TypeError("You must pass a key to KMAC128") + if len(key) < 16: + raise ValueError("The key must be at least 128 bits long (16 bytes)") + + data = kwargs.pop("data", None) + + mac_len = kwargs.pop("mac_len", 64) + if mac_len < 8: + raise ValueError("'mac_len' must be 8 bytes or more") + + custom = kwargs.pop("custom", b"") + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return KMAC_Hash(data, key, mac_len, custom, "19", cSHAKE128, 168) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/KMAC128.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/KMAC128.pyi new file mode 100644 index 00000000..8947dabc --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/KMAC128.pyi @@ -0,0 +1,33 @@ +from typing import Union +from types import ModuleType + +Buffer = Union[bytes, bytearray, memoryview] + +class KMAC_Hash(object): + + def __init__(self, + data: Buffer, + key: Buffer, + mac_len: int, + custom: Buffer, + oid_variant: str, + cshake: ModuleType, + rate: int) -> None: ... + + def update(self, data: Buffer) -> KMAC_Hash: ... + + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + def new(self, + data: Buffer = ..., + mac_len: int = ..., + key: Buffer = ..., + custom: Buffer = ...) -> KMAC_Hash: ... + + +def new(key: Buffer, + data: Buffer = ..., + mac_len: int = ..., + custom: Buffer = ...) -> KMAC_Hash: ... 
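As a quick illustration of the KMAC128 API above (all values are made up; the key must be at least 16 bytes and ``mac_len`` defaults to 64):

from Crypto.Hash import KMAC128

key = b"\x01" * 16
mac = KMAC128.new(key=key, mac_len=16, custom=b"example context")
mac.update(b"message to authenticate")
tag = mac.digest()  # 16 bytes, per mac_len

# Verification recomputes the MAC over the same key, data and customization string
KMAC128.new(key=key, mac_len=16, custom=b"example context",
            data=b"message to authenticate").verify(tag)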
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/KMAC256.py b/env/lib/python3.8/site-packages/Crypto/Hash/KMAC256.py new file mode 100644 index 00000000..2be8e2f3 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/KMAC256.py @@ -0,0 +1,74 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util.py3compat import is_bytes + +from .KMAC128 import KMAC_Hash +from . import cSHAKE256 + + +def new(**kwargs): + """Create a new KMAC256 object. + + Args: + key (bytes/bytearray/memoryview): + The key to use to compute the MAC. + It must be at least 256 bits long (32 bytes). + data (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to authenticate. + It is equivalent to an early call to :meth:`KMAC_Hash.update`. + mac_len (integer): + Optional. The size of the authentication tag, in bytes. + Default is 64. Minimum is 8. + custom (bytes/bytearray/memoryview): + Optional. A customization byte string (``S`` in SP 800-185). + + Returns: + A :class:`KMAC_Hash` hash object + """ + + key = kwargs.pop("key", None) + if not is_bytes(key): + raise TypeError("You must pass a key to KMAC256") + if len(key) < 32: + raise ValueError("The key must be at least 256 bits long (32 bytes)") + + data = kwargs.pop("data", None) + + mac_len = kwargs.pop("mac_len", 64) + if mac_len < 8: + raise ValueError("'mac_len' must be 8 bytes or more") + + custom = kwargs.pop("custom", b"") + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return KMAC_Hash(data, key, mac_len, custom, "20", cSHAKE256, 136) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/KMAC256.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/KMAC256.pyi new file mode 100644 index 00000000..86cc5000 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/KMAC256.pyi @@ -0,0 +1,10 @@ +from typing import Union + +from .KMAC128 import KMAC_Hash + +Buffer = Union[bytes, bytearray, memoryview] + +def new(key: Buffer, + data: Buffer = ..., + mac_len: int = ..., + custom: Buffer = ...) -> KMAC_Hash: ... 
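KMAC256 follows the same pattern; only the minimum key size (32 bytes) and the underlying cSHAKE instance differ. A sketch with illustrative values, this time using the hexadecimal helpers:

from Crypto.Hash import KMAC256

key = b"\x02" * 32
mac = KMAC256.new(key=key, data=b"message", custom=b"example context")
hex_tag = mac.hexdigest()  # default 64-byte tag, hex-encoded

KMAC256.new(key=key, data=b"message",
            custom=b"example context").hexverify(hex_tag)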
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/KangarooTwelve.py b/env/lib/python3.8/site-packages/Crypto/Hash/KangarooTwelve.py new file mode 100644 index 00000000..f5358d44 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/KangarooTwelve.py @@ -0,0 +1,262 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util._raw_api import (VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Crypto.Util.number import long_to_bytes +from Crypto.Util.py3compat import bchr + +from .keccak import _raw_keccak_lib + + +def _length_encode(x): + if x == 0: + return b'\x00' + + S = long_to_bytes(x) + return S + bchr(len(S)) + + +# Possible states for a KangarooTwelve instance, which depend on the amount of data processed so far. +SHORT_MSG = 1 # Still within the first 8192 bytes, but it is not certain we will exceed them. +LONG_MSG_S0 = 2 # Still within the first 8192 bytes, and it is certain we will exceed them. +LONG_MSG_SX = 3 # Beyond the first 8192 bytes. +SQUEEZING = 4 # No more data to process. + + +class K12_XOF(object): + """A KangarooTwelve hash object. + Do not instantiate directly. + Use the :func:`new` function. 
+ """ + + def __init__(self, data, custom): + + if custom == None: + custom = b'' + + self._custom = custom + _length_encode(len(custom)) + self._state = SHORT_MSG + self._padding = None # Final padding is only decided in read() + + # Internal hash that consumes FinalNode + self._hash1 = self._create_keccak() + self._length1 = 0 + + # Internal hash that produces CV_i (reset each time) + self._hash2 = None + self._length2 = 0 + + # Incremented by one for each 8192-byte block + self._ctr = 0 + + if data: + self.update(data) + + def _create_keccak(self): + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(32), # 32 bytes of capacity (256 bits) + c_ubyte(12)) # Reduced number of rounds + if result: + raise ValueError("Error %d while instantiating KangarooTwelve" + % result) + return SmartPointer(state.get(), _raw_keccak_lib.keccak_destroy) + + def _update(self, data, hash_obj): + result = _raw_keccak_lib.keccak_absorb(hash_obj.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating KangarooTwelve state" + % result) + + def _squeeze(self, hash_obj, length, padding): + bfr = create_string_buffer(length) + result = _raw_keccak_lib.keccak_squeeze(hash_obj.get(), + bfr, + c_size_t(length), + c_ubyte(padding)) + if result: + raise ValueError("Error %d while extracting from KangarooTwelve" + % result) + + return get_raw_buffer(bfr) + + def _reset(self, hash_obj): + result = _raw_keccak_lib.keccak_reset(hash_obj.get()) + if result: + raise ValueError("Error %d while resetting KangarooTwelve state" + % result) + + def update(self, data): + """Hash the next piece of data. + + .. note:: + For better performance, submit chunks with a length multiple of 8192 bytes. + + Args: + data (byte string/byte array/memoryview): The next chunk of the + message to hash. + """ + + if self._state == SQUEEZING: + raise TypeError("You cannot call 'update' after the first 'read'") + + if self._state == SHORT_MSG: + next_length = self._length1 + len(data) + + if next_length + len(self._custom) <= 8192: + self._length1 = next_length + self._update(data, self._hash1) + return self + + # Switch to tree hashing + self._state = LONG_MSG_S0 + + if self._state == LONG_MSG_S0: + data_mem = memoryview(data) + assert(self._length1 < 8192) + dtc = min(len(data), 8192 - self._length1) + self._update(data_mem[:dtc], self._hash1) + self._length1 += dtc + + if self._length1 < 8192: + return self + + # Finish hashing S_0 and start S_1 + assert(self._length1 == 8192) + + divider = b'\x03' + b'\x00' * 7 + self._update(divider, self._hash1) + self._length1 += 8 + + self._hash2 = self._create_keccak() + self._length2 = 0 + self._ctr = 1 + + self._state = LONG_MSG_SX + return self.update(data_mem[dtc:]) + + # LONG_MSG_SX + assert(self._state == LONG_MSG_SX) + index = 0 + len_data = len(data) + + # All iteractions could actually run in parallel + data_mem = memoryview(data) + while index < len_data: + + new_index = min(index + 8192 - self._length2, len_data) + self._update(data_mem[index:new_index], self._hash2) + self._length2 += new_index - index + index = new_index + + if self._length2 == 8192: + cv_i = self._squeeze(self._hash2, 32, 0x0B) + self._update(cv_i, self._hash1) + self._length1 += 32 + self._reset(self._hash2) + self._length2 = 0 + self._ctr += 1 + + return self + + def read(self, length): + """ + Produce more bytes of the digest. + + .. note:: + You cannot use :meth:`update` anymore after the first call to + :meth:`read`. 
+
+        Args:
+            length (integer): the number of bytes this method must return
+
+        :return: the next piece of XOF output (of the given length)
+        :rtype: byte string
+        """
+
+        custom_was_consumed = False
+
+        if self._state == SHORT_MSG:
+            self._update(self._custom, self._hash1)
+            self._padding = 0x07
+            self._state = SQUEEZING
+
+        if self._state == LONG_MSG_S0:
+            self.update(self._custom)
+            custom_was_consumed = True
+            assert self._state == LONG_MSG_SX
+
+        if self._state == LONG_MSG_SX:
+            if not custom_was_consumed:
+                self.update(self._custom)
+
+            # Is there still some leftover data in hash2?
+            if self._length2 > 0:
+                cv_i = self._squeeze(self._hash2, 32, 0x0B)
+                self._update(cv_i, self._hash1)
+                self._length1 += 32
+                self._reset(self._hash2)
+                self._length2 = 0
+                self._ctr += 1
+
+            trailer = _length_encode(self._ctr - 1) + b'\xFF\xFF'
+            self._update(trailer, self._hash1)
+
+            self._padding = 0x06
+            self._state = SQUEEZING
+
+        return self._squeeze(self._hash1, length, self._padding)
+
+    def new(self, data=None, custom=b''):
+        return type(self)(data, custom)
+
+
+def new(data=None, custom=None):
+    """Return a fresh instance of a KangarooTwelve object.
+
+    Args:
+       data (bytes/bytearray/memoryview):
+        Optional.
+        The very first chunk of the message to hash.
+        It is equivalent to an early call to :meth:`update`.
+       custom (bytes):
+        Optional.
+        A customization byte string.
+
+    :Return: A :class:`K12_XOF` object
+    """
+
+    return K12_XOF(data, custom)
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/KangarooTwelve.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/KangarooTwelve.pyi
new file mode 100644
index 00000000..8b3fd74f
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/KangarooTwelve.pyi
@@ -0,0 +1,16 @@
+from typing import Union, Optional
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class K12_XOF(object):
+    def __init__(self,
+                 data: Optional[Buffer] = ...,
+                 custom: Optional[bytes] = ...) -> None: ...
+    def update(self, data: Buffer) -> K12_XOF: ...
+    def read(self, length: int) -> bytes: ...
+    def new(self,
+            data: Optional[Buffer] = ...,
+            custom: Optional[bytes] = ...) -> K12_XOF: ...
+
+def new(data: Optional[Buffer] = ...,
+        custom: Optional[Buffer] = ...) -> K12_XOF: ...
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/MD2.py b/env/lib/python3.8/site-packages/Crypto/Hash/MD2.py
new file mode 100644
index 00000000..41decbb6
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/MD2.py
@@ -0,0 +1,166 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
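For orientation, a minimal sketch of driving the `K12_XOF` object defined in `KangarooTwelve.py` above as an extendable-output function, assuming this vendored pycryptodome package is importable as `Crypto`; the customization string and message chunks are arbitrary illustrative values.

```python
from Crypto.Hash import KangarooTwelve

# Absorb the message first, then squeeze as many output bytes as
# needed. update() is forbidden after the first read().
h = KangarooTwelve.new(custom=b'email-signature-v1')  # illustrative tag
h.update(b'first chunk of the message')
h.update(b'second chunk of the message')

tag = h.read(32)    # first 32 bytes of XOF output
more = h.read(16)   # a further 16 bytes of the same output stream
print(tag.hex(), more.hex())
```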
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_md2_lib = load_pycryptodome_raw_lib( + "Crypto.Hash._MD2", + """ + int md2_init(void **shaState); + int md2_destroy(void *shaState); + int md2_update(void *hs, + const uint8_t *buf, + size_t len); + int md2_digest(const void *shaState, + uint8_t digest[20]); + int md2_copy(const void *src, void *dst); + """) + + +class MD2Hash(object): + """An MD2 hash object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 16 + # The internal block size of the hash algorithm in bytes. + block_size = 16 + # ASN.1 Object ID + oid = "1.2.840.113549.2.2" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_md2_lib.md2_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating MD2" + % result) + self._state = SmartPointer(state.get(), + _raw_md2_lib.md2_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_md2_lib.md2_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while instantiating MD2" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_md2_lib.md2_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while instantiating MD2" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. 
+
+        :return: A hash object of the same type
+        """
+
+        clone = MD2Hash()
+        result = _raw_md2_lib.md2_copy(self._state.get(),
+                                       clone._state.get())
+        if result:
+            raise ValueError("Error %d while copying MD2" % result)
+        return clone
+
+    def new(self, data=None):
+        return MD2Hash(data)
+
+
+def new(data=None):
+    """Create a new hash object.
+
+    :parameter data:
+        Optional. The very first chunk of the message to hash.
+        It is equivalent to an early call to :meth:`MD2Hash.update`.
+    :type data: bytes/bytearray/memoryview
+
+    :Return: A :class:`MD2Hash` hash object
+    """
+
+    return MD2Hash().new(data)
+
+# The size of the resulting hash in bytes.
+digest_size = MD2Hash.digest_size
+
+# The internal block size of the hash algorithm in bytes.
+block_size = MD2Hash.block_size
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/MD2.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/MD2.pyi
new file mode 100644
index 00000000..95a97a9e
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/MD2.pyi
@@ -0,0 +1,19 @@
+from typing import Union
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class MD2Hash(object):
+    digest_size: int
+    block_size: int
+    oid: str
+
+    def __init__(self, data: Buffer = ...) -> None: ...
+    def update(self, data: Buffer) -> None: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def copy(self) -> MD2Hash: ...
+    def new(self, data: Buffer = ...) -> MD2Hash: ...
+
+def new(data: Buffer = ...) -> MD2Hash: ...
+digest_size: int
+block_size: int
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/MD4.py b/env/lib/python3.8/site-packages/Crypto/Hash/MD4.py
new file mode 100644
index 00000000..be12b192
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/MD4.py
@@ -0,0 +1,185 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+"""
+MD4 is specified in RFC1320_ and produces the 128-bit digest of a message.
+
+    >>> from Crypto.Hash import MD4
+    >>>
+    >>> h = MD4.new()
+    >>> h.update(b'Hello')
+    >>> print(h.hexdigest())
+
+MD4 stands for Message Digest version 4, and it was invented by Rivest in 1990.
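Spelled out as plain Python 3 (matching this python3.8 site-packages tree), the usage shown in the MD4 docstring above looks as follows; MD2 exposes the same `new()`/`update()`/`hexdigest()` surface, and both digests are legacy-only.

```python
from Crypto.Hash import MD2, MD4

# Legacy 16-byte digests; use only for interoperability with old formats.
h = MD2.new()
h.update(b'Hello')
print(h.hexdigest())                  # MD2 digest, hex-encoded

print(MD4.new(b'Hello').hexdigest())  # same API, MD4 digest
```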
+This algorithm is insecure. Do not use it for new designs. + +.. _RFC1320: http://tools.ietf.org/html/rfc1320 +""" + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_md4_lib = load_pycryptodome_raw_lib( + "Crypto.Hash._MD4", + """ + int md4_init(void **shaState); + int md4_destroy(void *shaState); + int md4_update(void *hs, + const uint8_t *buf, + size_t len); + int md4_digest(const void *shaState, + uint8_t digest[20]); + int md4_copy(const void *src, void *dst); + """) + + +class MD4Hash(object): + """Class that implements an MD4 hash + """ + + #: The size of the resulting hash in bytes. + digest_size = 16 + #: The internal block size of the hash algorithm in bytes. + block_size = 64 + #: ASN.1 Object ID + oid = "1.2.840.113549.2.4" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_md4_lib.md4_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating MD4" + % result) + self._state = SmartPointer(state.get(), + _raw_md4_lib.md4_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Repeated calls are equivalent to a single call with the concatenation + of all the arguments. In other words: + + >>> m.update(a); m.update(b) + + is equivalent to: + + >>> m.update(a+b) + + :Parameters: + data : byte string/byte array/memoryview + The next chunk of the message being hashed. + """ + + result = _raw_md4_lib.md4_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while instantiating MD4" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that + has been hashed so far. + + This method does not change the state of the hash object. + You can continue updating the object after calling this function. + + :Return: A byte string of `digest_size` bytes. It may contain non-ASCII + characters, including null bytes. + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_md4_lib.md4_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while instantiating MD4" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been + hashed so far. + + This method does not change the state of the hash object. + + :Return: A string of 2* `digest_size` characters. It contains only + hexadecimal ASCII digits. + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :Return: A hash object of the same type + """ + + clone = MD4Hash() + result = _raw_md4_lib.md4_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying MD4" % result) + return clone + + def new(self, data=None): + return MD4Hash(data) + + +def new(data=None): + """Return a fresh instance of the hash object. + + :Parameters: + data : byte string/byte array/memoryview + The very first chunk of the message to hash. + It is equivalent to an early call to `MD4Hash.update()`. + Optional. 
+ + :Return: A `MD4Hash` object + """ + return MD4Hash().new(data) + +#: The size of the resulting hash in bytes. +digest_size = MD4Hash.digest_size + +#: The internal block size of the hash algorithm in bytes. +block_size = MD4Hash.block_size diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/MD4.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/MD4.pyi new file mode 100644 index 00000000..a9a72953 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/MD4.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class MD4Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> MD4Hash: ... + def new(self, data: Optional[Buffer] = ...) -> MD4Hash: ... + +def new(data: Optional[Buffer] = ...) -> MD4Hash: ... +digest_size: int +block_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/MD5.py b/env/lib/python3.8/site-packages/Crypto/Hash/MD5.py new file mode 100644 index 00000000..554b7772 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/MD5.py @@ -0,0 +1,184 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Crypto.Util.py3compat import * + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_md5_lib = load_pycryptodome_raw_lib("Crypto.Hash._MD5", + """ + #define MD5_DIGEST_SIZE 16 + + int MD5_init(void **shaState); + int MD5_destroy(void *shaState); + int MD5_update(void *hs, + const uint8_t *buf, + size_t len); + int MD5_digest(const void *shaState, + uint8_t digest[MD5_DIGEST_SIZE]); + int MD5_copy(const void *src, void *dst); + + int MD5_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t first_digest[MD5_DIGEST_SIZE], + uint8_t final_digest[MD5_DIGEST_SIZE], + size_t iterations); + """) + +class MD5Hash(object): + """A MD5 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 16 + # The internal block size of the hash algorithm in bytes. 
+    block_size = 64
+    # ASN.1 Object ID
+    oid = "1.2.840.113549.2.5"
+
+    def __init__(self, data=None):
+        state = VoidPointer()
+        result = _raw_md5_lib.MD5_init(state.address_of())
+        if result:
+            raise ValueError("Error %d while instantiating MD5"
+                             % result)
+        self._state = SmartPointer(state.get(),
+                                   _raw_md5_lib.MD5_destroy)
+        if data:
+            self.update(data)
+
+    def update(self, data):
+        """Continue hashing of a message by consuming the next chunk of data.
+
+        Args:
+            data (byte string/byte array/memoryview): The next chunk of the message being hashed.
+        """
+
+        result = _raw_md5_lib.MD5_update(self._state.get(),
+                                         c_uint8_ptr(data),
+                                         c_size_t(len(data)))
+        if result:
+            raise ValueError("Error %d while hashing data with MD5"
+                             % result)
+
+    def digest(self):
+        """Return the **binary** (non-printable) digest of the message that has been hashed so far.
+
+        :return: The hash digest, computed over the data processed so far.
+                 Binary form.
+        :rtype: byte string
+        """
+
+        bfr = create_string_buffer(self.digest_size)
+        result = _raw_md5_lib.MD5_digest(self._state.get(),
+                                         bfr)
+        if result:
+            raise ValueError("Error %d while making MD5 digest"
+                             % result)
+
+        return get_raw_buffer(bfr)
+
+    def hexdigest(self):
+        """Return the **printable** digest of the message that has been hashed so far.
+
+        :return: The hash digest, computed over the data processed so far.
+                 Hexadecimal encoded.
+        :rtype: string
+        """
+
+        return "".join(["%02x" % bord(x) for x in self.digest()])
+
+    def copy(self):
+        """Return a copy ("clone") of the hash object.
+
+        The copy will have the same internal state as the original hash
+        object.
+        This can be used to efficiently compute the digests of strings that
+        share a common initial substring.
+
+        :return: A hash object of the same type
+        """
+
+        clone = MD5Hash()
+        result = _raw_md5_lib.MD5_copy(self._state.get(),
+                                       clone._state.get())
+        if result:
+            raise ValueError("Error %d while copying MD5" % result)
+        return clone
+
+    def new(self, data=None):
+        """Create a fresh MD5 hash object."""
+
+        return MD5Hash(data)
+
+
+def new(data=None):
+    """Create a new hash object.
+
+    :parameter data:
+        Optional. The very first chunk of the message to hash.
+        It is equivalent to an early call to :meth:`MD5Hash.update`.
+    :type data: byte string/byte array/memoryview
+
+    :Return: A :class:`MD5Hash` hash object
+    """
+    return MD5Hash().new(data)
+
+# The size of the resulting hash in bytes.
+digest_size = 16
+
+# The internal block size of the hash algorithm in bytes.
+block_size = 64
+
+
+def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations):
+    """Compute the expensive inner loop in PBKDF-HMAC."""
+
+    assert len(first_digest) == digest_size
+    assert iterations > 0
+
+    bfr = create_string_buffer(digest_size)
+    result = _raw_md5_lib.MD5_pbkdf2_hmac_assist(
+                    inner._state.get(),
+                    outer._state.get(),
+                    first_digest,
+                    bfr,
+                    c_size_t(iterations))
+
+    if result:
+        raise ValueError("Error %d with PBKDF2-HMAC assist for MD5" % result)
+
+    return get_raw_buffer(bfr)
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/MD5.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/MD5.pyi
new file mode 100644
index 00000000..d8195562
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/MD5.pyi
@@ -0,0 +1,19 @@
+from typing import Union
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class MD5Hash(object):
+    digest_size: int
+    block_size: int
+    oid: str
+
+    def __init__(self, data: Buffer = ...) -> None: ...
+    def update(self, data: Buffer) -> None: ...
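As the `update` docstrings above describe, feeding a message in pieces is equivalent to hashing it whole; a small sketch with the MD5 module just defined.

```python
from Crypto.Hash import MD5

h1 = MD5.new()
h1.update(b'Hello ')
h1.update(b'world')            # two calls...

h2 = MD5.new(b'Hello world')   # ...equal one call on the concatenation
assert h1.digest() == h2.digest()
print(h1.hexdigest())
```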
+ def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> MD5Hash: ... + def new(self, data: Buffer = ...) -> MD5Hash: ... + +def new(data: Buffer = ...) -> MD5Hash: ... +digest_size: int +block_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/Poly1305.py b/env/lib/python3.8/site-packages/Crypto/Hash/Poly1305.py new file mode 100644 index 00000000..eb5e0dad --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/Poly1305.py @@ -0,0 +1,217 @@ +# -*- coding: utf-8 -*- +# +# Hash/Poly1305.py - Implements the Poly1305 MAC +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from binascii import unhexlify + +from Crypto.Util.py3compat import bord, tobytes, _copy_bytes + +from Crypto.Hash import BLAKE2s +from Crypto.Random import get_random_bytes +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + + +_raw_poly1305 = load_pycryptodome_raw_lib("Crypto.Hash._poly1305", + """ + int poly1305_init(void **state, + const uint8_t *r, + size_t r_len, + const uint8_t *s, + size_t s_len); + int poly1305_destroy(void *state); + int poly1305_update(void *state, + const uint8_t *in, + size_t len); + int poly1305_digest(const void *state, + uint8_t *digest, + size_t len); + """) + + +class Poly1305_MAC(object): + """An Poly1305 MAC object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar digest_size: the size in bytes of the resulting MAC tag + :vartype digest_size: integer + """ + + digest_size = 16 + + def __init__(self, r, s, data): + + if len(r) != 16: + raise ValueError("Parameter r is not 16 bytes long") + if len(s) != 16: + raise ValueError("Parameter s is not 16 bytes long") + + self._mac_tag = None + + state = VoidPointer() + result = _raw_poly1305.poly1305_init(state.address_of(), + c_uint8_ptr(r), + c_size_t(len(r)), + c_uint8_ptr(s), + c_size_t(len(s)) + ) + if result: + raise ValueError("Error %d while instantiating Poly1305" % result) + self._state = SmartPointer(state.get(), + _raw_poly1305.poly1305_destroy) + if data: + self.update(data) + + def update(self, data): + """Authenticate the next chunk of message. 
+ + Args: + data (byte string/byte array/memoryview): The next chunk of data + """ + + if self._mac_tag: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_poly1305.poly1305_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing Poly1305 data" % result) + return self + + def copy(self): + raise NotImplementedError() + + def digest(self): + """Return the **binary** (non-printable) MAC tag of the message + authenticated so far. + + :return: The MAC tag digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + if self._mac_tag: + return self._mac_tag + + bfr = create_string_buffer(16) + result = _raw_poly1305.poly1305_digest(self._state.get(), + bfr, + c_size_t(len(bfr))) + if result: + raise ValueError("Error %d while creating Poly1305 digest" % result) + + self._mac_tag = get_raw_buffer(bfr) + return self._mac_tag + + def hexdigest(self): + """Return the **printable** MAC tag of the message authenticated so far. + + :return: The MAC tag, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) + for x in tuple(self.digest())]) + + def verify(self, mac_tag): + """Verify that a given **binary** MAC (computed by another party) + is valid. + + Args: + mac_tag (byte string/byte string/memoryview): the expected MAC of the message. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=self.digest()) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Verify that a given **printable** MAC (computed by another party) + is valid. + + Args: + hex_mac_tag (string): the expected MAC of the message, + as a hexadecimal string. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + self.verify(unhexlify(tobytes(hex_mac_tag))) + + + +def new(**kwargs): + """Create a new Poly1305 MAC object. + + Args: + key (bytes/bytearray/memoryview): + The 32-byte key for the Poly1305 object. + cipher (module from ``Crypto.Cipher``): + The cipher algorithm to use for deriving the Poly1305 + key pair *(r, s)*. + It can only be ``Crypto.Cipher.AES`` or ``Crypto.Cipher.ChaCha20``. + nonce (bytes/bytearray/memoryview): + Optional. The non-repeatable value to use for the MAC of this message. + It must be 16 bytes long for ``AES`` and 8 or 12 bytes for ``ChaCha20``. + If not passed, a random nonce is created; you will find it in the + ``nonce`` attribute of the new object. + data (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to authenticate. + It is equivalent to an early call to ``update()``. 
+ + Returns: + A :class:`Poly1305_MAC` object + """ + + cipher = kwargs.pop("cipher", None) + if not hasattr(cipher, '_derive_Poly1305_key_pair'): + raise ValueError("Parameter 'cipher' must be AES or ChaCha20") + + cipher_key = kwargs.pop("key", None) + if cipher_key is None: + raise TypeError("You must pass a parameter 'key'") + + nonce = kwargs.pop("nonce", None) + data = kwargs.pop("data", None) + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + r, s, nonce = cipher._derive_Poly1305_key_pair(cipher_key, nonce) + + new_mac = Poly1305_MAC(r, s, data) + new_mac.nonce = _copy_bytes(None, None, nonce) # nonce may still be just a memoryview + return new_mac diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/Poly1305.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/Poly1305.pyi new file mode 100644 index 00000000..f97a14a0 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/Poly1305.pyi @@ -0,0 +1,24 @@ +from types import ModuleType +from typing import Union + +Buffer = Union[bytes, bytearray, memoryview] + +class Poly1305_MAC(object): + block_size: int + digest_size: int + oid: str + + def __init__(self, + r : int, + s : int, + data : Buffer) -> None: ... + def update(self, data: Buffer) -> Poly1305_MAC: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + +def new(key: Buffer, + cipher: ModuleType, + nonce: Buffer = ..., + data: Buffer = ...) -> Poly1305_MAC: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD.py b/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD.py new file mode 100644 index 00000000..4e802358 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD.py @@ -0,0 +1,26 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
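A minimal sketch of the generate/verify round trip for the `Poly1305` module above, assuming the vendored `Crypto.Cipher.ChaCha20` and `Crypto.Random` modules; the key and message values are illustrative. Note that the auto-generated nonce must travel with the message and tag.

```python
from Crypto.Hash import Poly1305
from Crypto.Cipher import ChaCha20
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)              # shared 32-byte secret
msg = b'message to authenticate'

# Sender: a fresh nonce is created automatically (see new() above).
mac = Poly1305.new(key=key, cipher=ChaCha20, data=msg)
tag = mac.hexdigest()

# Receiver: rebuild the MAC with the same key, cipher and nonce;
# hexverify() raises ValueError on mismatch.
check = Poly1305.new(key=key, cipher=ChaCha20, nonce=mac.nonce, data=msg)
check.hexverify(tag)
```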
+# ===================================================================
+
+# This file exists for backward compatibility with old code that refers to
+# Crypto.Hash.RIPEMD
+
+"""Deprecated alias for `Crypto.Hash.RIPEMD160`"""
+
+from Crypto.Hash.RIPEMD160 import new, block_size, digest_size
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD.pyi
new file mode 100644
index 00000000..e33eb2df
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD.pyi
@@ -0,0 +1,3 @@
+# This file exists for backward compatibility with old code that refers to
+# Crypto.Hash.RIPEMD
+
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD160.py b/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD160.py
new file mode 100644
index 00000000..820b57dd
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD160.py
@@ -0,0 +1,169 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+from Crypto.Util.py3compat import bord
+
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
+                                  VoidPointer, SmartPointer,
+                                  create_string_buffer,
+                                  get_raw_buffer, c_size_t,
+                                  c_uint8_ptr)
+
+_raw_ripemd160_lib = load_pycryptodome_raw_lib(
+                        "Crypto.Hash._RIPEMD160",
+                        """
+                        int ripemd160_init(void **shaState);
+                        int ripemd160_destroy(void *shaState);
+                        int ripemd160_update(void *hs,
+                                             const uint8_t *buf,
+                                             size_t len);
+                        int ripemd160_digest(const void *shaState,
+                                             uint8_t digest[20]);
+                        int ripemd160_copy(const void *src, void *dst);
+                        """)
+
+
+class RIPEMD160Hash(object):
+    """A RIPEMD-160 hash object.
+    Do not instantiate directly.
+    Use the :func:`new` function.
+
+    :ivar oid: ASN.1 Object ID
+    :vartype oid: string
+
+    :ivar block_size: the size in bytes of the internal message block,
+                      input to the compression function
+    :vartype block_size: integer
+
+    :ivar digest_size: the size in bytes of the resulting hash
+    :vartype digest_size: integer
+    """
+
+    # The size of the resulting hash in bytes.
+    digest_size = 20
+    # The internal block size of the hash algorithm in bytes.
+ block_size = 64 + # ASN.1 Object ID + oid = "1.3.36.3.2.1" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_ripemd160_lib.ripemd160_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating RIPEMD160" + % result) + self._state = SmartPointer(state.get(), + _raw_ripemd160_lib.ripemd160_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_ripemd160_lib.ripemd160_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while instantiating ripemd160" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_ripemd160_lib.ripemd160_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while instantiating ripemd160" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = RIPEMD160Hash() + result = _raw_ripemd160_lib.ripemd160_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying ripemd160" % result) + return clone + + def new(self, data=None): + """Create a fresh RIPEMD-160 hash object.""" + + return RIPEMD160Hash(data) + + +def new(data=None): + """Create a new hash object. + + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`RIPEMD160Hash.update`. + :type data: byte string/byte array/memoryview + + :Return: A :class:`RIPEMD160Hash` hash object + """ + + return RIPEMD160Hash().new(data) + +# The size of the resulting hash in bytes. +digest_size = RIPEMD160Hash.digest_size + +# The internal block size of the hash algorithm in bytes. +block_size = RIPEMD160Hash.block_size diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD160.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD160.pyi new file mode 100644 index 00000000..b6194737 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/RIPEMD160.pyi @@ -0,0 +1,19 @@ +from typing import Union + +Buffer = Union[bytes, bytearray, memoryview] + +class RIPEMD160Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Buffer = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> RIPEMD160Hash: ... + def new(self, data: Buffer = ...) -> RIPEMD160Hash: ... + +def new(data: Buffer = ...) -> RIPEMD160Hash: ... 
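The `copy()` method documented in the RIPEMD160 module above enables the common-prefix trick; a short sketch, with prefix and record contents as illustrative values.

```python
from Crypto.Hash import RIPEMD160

prefix = RIPEMD160.new(b'common header|')  # hash the shared prefix once

h1 = prefix.copy()            # clone keeps the absorbed prefix state
h1.update(b'record 1')

h2 = prefix.copy()
h2.update(b'record 2')

print(h1.hexdigest(), h2.hexdigest())
```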
+digest_size: int +block_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHA.py new file mode 100644 index 00000000..0cc141c9 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA.py @@ -0,0 +1,24 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +# This file exists for backward compatibility with old code that refers to +# Crypto.Hash.SHA + +from Crypto.Hash.SHA1 import __doc__, new, block_size, digest_size diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHA.pyi new file mode 100644 index 00000000..4d7d57e2 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA.pyi @@ -0,0 +1,4 @@ +# This file exists for backward compatibility with old code that refers to +# Crypto.Hash.SHA + +from Crypto.Hash.SHA1 import __doc__, new, block_size, digest_size diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA1.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHA1.py new file mode 100644 index 00000000..f79d8257 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA1.py @@ -0,0 +1,185 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
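Since `Crypto.Hash.SHA` above simply re-exports `Crypto.Hash.SHA1`, the two entry points are interchangeable and produce identical digests:

```python
from Crypto.Hash import SHA, SHA1

# SHA is a backward-compatibility alias for SHA1 (see the re-export above).
assert SHA.new(b'abc').digest() == SHA1.new(b'abc').digest()
```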
+# =================================================================== + +from Crypto.Util.py3compat import * + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_sha1_lib = load_pycryptodome_raw_lib("Crypto.Hash._SHA1", + """ + #define SHA1_DIGEST_SIZE 20 + + int SHA1_init(void **shaState); + int SHA1_destroy(void *shaState); + int SHA1_update(void *hs, + const uint8_t *buf, + size_t len); + int SHA1_digest(const void *shaState, + uint8_t digest[SHA1_DIGEST_SIZE]); + int SHA1_copy(const void *src, void *dst); + + int SHA1_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t first_digest[SHA1_DIGEST_SIZE], + uint8_t final_digest[SHA1_DIGEST_SIZE], + size_t iterations); + """) + +class SHA1Hash(object): + """A SHA-1 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 20 + # The internal block size of the hash algorithm in bytes. + block_size = 64 + # ASN.1 Object ID + oid = "1.3.14.3.2.26" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_sha1_lib.SHA1_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating SHA1" + % result) + self._state = SmartPointer(state.get(), + _raw_sha1_lib.SHA1_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_sha1_lib.SHA1_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while instantiating SHA1" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_sha1_lib.SHA1_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while instantiating SHA1" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = SHA1Hash() + result = _raw_sha1_lib.SHA1_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA1" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA-1 hash object.""" + + return SHA1Hash(data) + + +def new(data=None): + """Create a new hash object. + + :parameter data: + Optional. 
The very first chunk of the message to hash.
+        It is equivalent to an early call to :meth:`SHA1Hash.update`.
+    :type data: byte string/byte array/memoryview
+
+    :Return: A :class:`SHA1Hash` hash object
+    """
+    return SHA1Hash().new(data)
+
+
+# The size of the resulting hash in bytes.
+digest_size = SHA1Hash.digest_size
+
+# The internal block size of the hash algorithm in bytes.
+block_size = SHA1Hash.block_size
+
+
+def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations):
+    """Compute the expensive inner loop in PBKDF-HMAC."""
+
+    assert len(first_digest) == digest_size
+    assert iterations > 0
+
+    bfr = create_string_buffer(digest_size)
+    result = _raw_sha1_lib.SHA1_pbkdf2_hmac_assist(
+                    inner._state.get(),
+                    outer._state.get(),
+                    first_digest,
+                    bfr,
+                    c_size_t(iterations))
+
+    if result:
+        raise ValueError("Error %d with PBKDF2-HMAC assist for SHA1" % result)
+
+    return get_raw_buffer(bfr)
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA1.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHA1.pyi
new file mode 100644
index 00000000..d6c8e25e
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA1.pyi
@@ -0,0 +1,19 @@
+from typing import Union, Optional
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class SHA1Hash(object):
+    digest_size: int
+    block_size: int
+    oid: str
+
+    def __init__(self, data: Optional[Buffer] = ...) -> None: ...
+    def update(self, data: Buffer) -> None: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def copy(self) -> SHA1Hash: ...
+    def new(self, data: Optional[Buffer] = ...) -> SHA1Hash: ...
+
+def new(data: Optional[Buffer] = ...) -> SHA1Hash: ...
+digest_size: int
+block_size: int
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA224.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHA224.py
new file mode 100644
index 00000000..f788b063
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA224.py
@@ -0,0 +1,186 @@
+# -*- coding: utf-8 -*-
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
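`_pbkdf2_hmac_assist` above is an internal hook; within this package the public caller is `Crypto.Protocol.KDF.PBKDF2`, which (in releases supporting the `hmac_hash_module` keyword) keeps the iteration loop in C. A hedged sketch; the password, salt, and iteration count are illustrative.

```python
from Crypto.Protocol.KDF import PBKDF2
from Crypto.Hash import SHA1

# PBKDF2 runs `count` HMAC iterations; the C-level assist keeps that
# loop out of the Python interpreter.
key = PBKDF2(b'correct horse battery staple', b'saltsaltsaltsalt',
             dkLen=32, count=100000, hmac_hash_module=SHA1)
print(key.hex())
```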
+# =================================================================== + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_sha224_lib = load_pycryptodome_raw_lib("Crypto.Hash._SHA224", + """ + int SHA224_init(void **shaState); + int SHA224_destroy(void *shaState); + int SHA224_update(void *hs, + const uint8_t *buf, + size_t len); + int SHA224_digest(const void *shaState, + uint8_t *digest, + size_t digest_size); + int SHA224_copy(const void *src, void *dst); + + int SHA224_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t *first_digest, + uint8_t *final_digest, + size_t iterations, + size_t digest_size); + """) + +class SHA224Hash(object): + """A SHA-224 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 28 + # The internal block size of the hash algorithm in bytes. + block_size = 64 + # ASN.1 Object ID + oid = '2.16.840.1.101.3.4.2.4' + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_sha224_lib.SHA224_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating SHA224" + % result) + self._state = SmartPointer(state.get(), + _raw_sha224_lib.SHA224_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_sha224_lib.SHA224_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing data with SHA224" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_sha224_lib.SHA224_digest(self._state.get(), + bfr, + c_size_t(self.digest_size)) + if result: + raise ValueError("Error %d while making SHA224 digest" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = SHA224Hash() + result = _raw_sha224_lib.SHA224_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA224" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA-224 hash object.""" + + return SHA224Hash(data) + + +def new(data=None): + """Create a new hash object. 
+ + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`SHA224Hash.update`. + :type data: byte string/byte array/memoryview + + :Return: A :class:`SHA224Hash` hash object + """ + return SHA224Hash().new(data) + + +# The size of the resulting hash in bytes. +digest_size = SHA224Hash.digest_size + +# The internal block size of the hash algorithm in bytes. +block_size = SHA224Hash.block_size + + +def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): + """Compute the expensive inner loop in PBKDF-HMAC.""" + + assert iterations > 0 + + bfr = create_string_buffer(len(first_digest)); + result = _raw_sha224_lib.SHA224_pbkdf2_hmac_assist( + inner._state.get(), + outer._state.get(), + first_digest, + bfr, + c_size_t(iterations), + c_size_t(len(first_digest))) + + if result: + raise ValueError("Error %d with PBKDF2-HMAC assist for SHA224" % result) + + return get_raw_buffer(bfr) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA224.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHA224.pyi new file mode 100644 index 00000000..613a7f9a --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA224.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA224Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA224Hash: ... + def new(self, data: Optional[Buffer] = ...) -> SHA224Hash: ... + +def new(data: Optional[Buffer] = ...) -> SHA224Hash: ... +digest_size: int +block_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA256.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHA256.py new file mode 100644 index 00000000..957aa37e --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA256.py @@ -0,0 +1,185 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
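As a quick sanity check, the SHA224 module above should agree with CPython's built-in `hashlib` implementation of the same function:

```python
import hashlib
from Crypto.Hash import SHA224

# The vendored C implementation and hashlib compute the same SHA-224.
assert SHA224.new(b'abc').hexdigest() == hashlib.sha224(b'abc').hexdigest()
print(SHA224.new(b'abc').hexdigest())   # 28-byte digest, hex-encoded
```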
+# =================================================================== + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_sha256_lib = load_pycryptodome_raw_lib("Crypto.Hash._SHA256", + """ + int SHA256_init(void **shaState); + int SHA256_destroy(void *shaState); + int SHA256_update(void *hs, + const uint8_t *buf, + size_t len); + int SHA256_digest(const void *shaState, + uint8_t *digest, + size_t digest_size); + int SHA256_copy(const void *src, void *dst); + + int SHA256_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t *first_digest, + uint8_t *final_digest, + size_t iterations, + size_t digest_size); + """) + +class SHA256Hash(object): + """A SHA-256 hash object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 32 + # The internal block size of the hash algorithm in bytes. + block_size = 64 + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.1" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_sha256_lib.SHA256_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating SHA256" + % result) + self._state = SmartPointer(state.get(), + _raw_sha256_lib.SHA256_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_sha256_lib.SHA256_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing data with SHA256" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_sha256_lib.SHA256_digest(self._state.get(), + bfr, + c_size_t(self.digest_size)) + if result: + raise ValueError("Error %d while making SHA256 digest" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = SHA256Hash() + result = _raw_sha256_lib.SHA256_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA256" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA-256 hash object.""" + + return SHA256Hash(data) + +def new(data=None): + """Create a new hash object. 
+ + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`SHA256Hash.update`. + :type data: byte string/byte array/memoryview + + :Return: A :class:`SHA256Hash` hash object + """ + + return SHA256Hash().new(data) + + +# The size of the resulting hash in bytes. +digest_size = SHA256Hash.digest_size + +# The internal block size of the hash algorithm in bytes. +block_size = SHA256Hash.block_size + + +def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): + """Compute the expensive inner loop in PBKDF-HMAC.""" + + assert iterations > 0 + + bfr = create_string_buffer(len(first_digest)); + result = _raw_sha256_lib.SHA256_pbkdf2_hmac_assist( + inner._state.get(), + outer._state.get(), + first_digest, + bfr, + c_size_t(iterations), + c_size_t(len(first_digest))) + + if result: + raise ValueError("Error %d with PBKDF2-HMAC assist for SHA256" % result) + + return get_raw_buffer(bfr) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA256.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHA256.pyi new file mode 100644 index 00000000..cbf21bfb --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA256.pyi @@ -0,0 +1,18 @@ +from typing import Union, Optional + + +class SHA256Hash(object): + digest_size: int + block_size: int + oid: str + def __init__(self, data: Optional[Union[bytes, bytearray, memoryview]]=None) -> None: ... + def update(self, data: Union[bytes, bytearray, memoryview]) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA256Hash: ... + def new(self, data: Optional[Union[bytes, bytearray, memoryview]]=None) -> SHA256Hash: ... + +def new(data: Optional[Union[bytes, bytearray, memoryview]]=None) -> SHA256Hash: ... + +digest_size: int +block_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA384.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHA384.py new file mode 100644 index 00000000..a98fa9a3 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA384.py @@ -0,0 +1,186 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
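A typical streaming use of the SHA256 module above, hashing a file in bounded chunks so memory stays constant; the filename is hypothetical.

```python
from Crypto.Hash import SHA256

h = SHA256.new()
with open('somefile.bin', 'rb') as f:            # hypothetical input file
    for chunk in iter(lambda: f.read(64 * 1024), b''):
        h.update(chunk)                          # any chunking is fine
print(h.hexdigest())
```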
+# =================================================================== + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_sha384_lib = load_pycryptodome_raw_lib("Crypto.Hash._SHA384", + """ + int SHA384_init(void **shaState); + int SHA384_destroy(void *shaState); + int SHA384_update(void *hs, + const uint8_t *buf, + size_t len); + int SHA384_digest(const void *shaState, + uint8_t *digest, + size_t digest_size); + int SHA384_copy(const void *src, void *dst); + + int SHA384_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t *first_digest, + uint8_t *final_digest, + size_t iterations, + size_t digest_size); + """) + +class SHA384Hash(object): + """A SHA-384 hash object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 48 + # The internal block size of the hash algorithm in bytes. + block_size = 128 + # ASN.1 Object ID + oid = '2.16.840.1.101.3.4.2.2' + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_sha384_lib.SHA384_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating SHA384" + % result) + self._state = SmartPointer(state.get(), + _raw_sha384_lib.SHA384_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_sha384_lib.SHA384_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing data with SHA384" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_sha384_lib.SHA384_digest(self._state.get(), + bfr, + c_size_t(self.digest_size)) + if result: + raise ValueError("Error %d while making SHA384 digest" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = SHA384Hash() + result = _raw_sha384_lib.SHA384_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA384" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA-384 hash object.""" + + return SHA384Hash(data) + + +def new(data=None): + """Create a new hash object. 
+ + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`SHA384Hash.update`. + :type data: byte string/byte array/memoryview + + :Return: A :class:`SHA384Hash` hash object + """ + + return SHA384Hash().new(data) + + +# The size of the resulting hash in bytes. +digest_size = SHA384Hash.digest_size + +# The internal block size of the hash algorithm in bytes. +block_size = SHA384Hash.block_size + + +def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): + """Compute the expensive inner loop in PBKDF-HMAC.""" + + assert iterations > 0 + + bfr = create_string_buffer(len(first_digest)); + result = _raw_sha384_lib.SHA384_pbkdf2_hmac_assist( + inner._state.get(), + outer._state.get(), + first_digest, + bfr, + c_size_t(iterations), + c_size_t(len(first_digest))) + + if result: + raise ValueError("Error %d with PBKDF2-HMAC assist for SHA384" % result) + + return get_raw_buffer(bfr) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA384.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHA384.pyi new file mode 100644 index 00000000..c2aab9e0 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA384.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA384Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA384Hash: ... + def new(self, data: Optional[Buffer] = ...) -> SHA384Hash: ... + +def new(data: Optional[Buffer] = ...) -> SHA384Hash: ... +digest_size: int +block_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_224.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_224.py new file mode 100644 index 00000000..54556d00 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_224.py @@ -0,0 +1,174 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Crypto.Hash.keccak import _raw_keccak_lib + +class SHA3_224_Hash(object): + """A SHA3-224 hash object. + Do not instantiate directly. + Use the :func:`new` function. 
+ + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 28 + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.7" + + # Input block size for HMAC + block_size = 144 + + def __init__(self, data, update_after_digest): + self._update_after_digest = update_after_digest + self._digest_done = False + self._padding = 0x06 + + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(self.digest_size * 2), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHA-3/224" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data)) + ) + if result: + raise ValueError("Error %d while updating SHA-3/224" + % result) + return self + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + self._digest_done = True + + bfr = create_string_buffer(self.digest_size) + result = _raw_keccak_lib.keccak_digest(self._state.get(), + bfr, + c_size_t(self.digest_size), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while instantiating SHA-3/224" + % result) + + self._digest_value = get_raw_buffer(bfr) + return self._digest_value + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = self.new() + result = _raw_keccak_lib.keccak_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA3-224" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA3-224 hash object.""" + + return type(self)(data, self._update_after_digest) + + +def new(*args, **kwargs): + """Create a new hash object. + + Args: + data (byte string/byte array/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + update_after_digest (boolean): + Whether :meth:`digest` can be followed by another :meth:`update` + (default: ``False``). 
+ + :Return: A :class:`SHA3_224_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + if len(args) == 1: + if data: + raise ValueError("Initial data for hash specified twice") + data = args[0] + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return SHA3_224_Hash(data, update_after_digest) + +# The size of the resulting hash in bytes. +digest_size = SHA3_224_Hash.digest_size + +# Input block size for HMAC +block_size = 144 diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_224.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_224.pyi new file mode 100644 index 00000000..21808219 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_224.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA3_224_Hash(object): + digest_size: int + block_size: int + oid: str + def __init__(self, data: Optional[Buffer], update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> SHA3_224_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA3_224_Hash: ... + def new(self, data: Optional[Buffer]) -> SHA3_224_Hash: ... + +def new(__data: Buffer = ..., update_after_digest: bool = ...) -> SHA3_224_Hash: ... + +digest_size: int +block_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_256.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_256.py new file mode 100644 index 00000000..b4f11ee1 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_256.py @@ -0,0 +1,174 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Crypto.Hash.keccak import _raw_keccak_lib + +class SHA3_256_Hash(object): + """A SHA3-256 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. 
+ digest_size = 32 + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.8" + + # Input block size for HMAC + block_size = 136 + + def __init__(self, data, update_after_digest): + self._update_after_digest = update_after_digest + self._digest_done = False + self._padding = 0x06 + + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(self.digest_size * 2), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHA-3/256" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data)) + ) + if result: + raise ValueError("Error %d while updating SHA-3/256" + % result) + return self + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + self._digest_done = True + + bfr = create_string_buffer(self.digest_size) + result = _raw_keccak_lib.keccak_digest(self._state.get(), + bfr, + c_size_t(self.digest_size), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while instantiating SHA-3/256" + % result) + + self._digest_value = get_raw_buffer(bfr) + return self._digest_value + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = self.new() + result = _raw_keccak_lib.keccak_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA3-256" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA3-256 hash object.""" + + return type(self)(data, self._update_after_digest) + + +def new(*args, **kwargs): + """Create a new hash object. + + Args: + data (byte string/byte array/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + update_after_digest (boolean): + Whether :meth:`digest` can be followed by another :meth:`update` + (default: ``False``). + + :Return: A :class:`SHA3_256_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + if len(args) == 1: + if data: + raise ValueError("Initial data for hash specified twice") + data = args[0] + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return SHA3_256_Hash(data, update_after_digest) + +# The size of the resulting hash in bytes. 
+digest_size = SHA3_256_Hash.digest_size + +# Input block size for HMAC +block_size = 136 diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_256.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_256.pyi new file mode 100644 index 00000000..88436bd8 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_256.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA3_256_Hash(object): + digest_size: int + block_size: int + oid: str + def __init__(self, data: Optional[Buffer], update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> SHA3_256_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA3_256_Hash: ... + def new(self, data: Optional[Buffer]) -> SHA3_256_Hash: ... + +def new(__data: Buffer = ..., update_after_digest: bool = ...) -> SHA3_256_Hash: ... + +digest_size: int +block_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_384.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_384.py new file mode 100644 index 00000000..12f61ce5 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_384.py @@ -0,0 +1,179 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Crypto.Hash.keccak import _raw_keccak_lib + +class SHA3_384_Hash(object): + """A SHA3-384 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 48 + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.9" + + # Input block size for HMAC + block_size = 104 + + def __init__(self, data, update_after_digest): + self._update_after_digest = update_after_digest + self._digest_done = False + self._padding = 0x06 + + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(self.digest_size * 2), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHA-3/384" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. 
+
+        Args:
+            data (byte string/byte array/memoryview): The next chunk of the message being hashed.
+        """
+
+        if self._digest_done and not self._update_after_digest:
+            raise TypeError("You can only call 'digest' or 'hexdigest' on this object")
+
+        result = _raw_keccak_lib.keccak_absorb(self._state.get(),
+                                               c_uint8_ptr(data),
+                                               c_size_t(len(data)))
+        if result:
+            raise ValueError("Error %d while updating SHA-3/384"
+                             % result)
+        return self
+
+    def digest(self):
+        """Return the **binary** (non-printable) digest of the message that has been hashed so far.
+
+        :return: The hash digest, computed over the data processed so far.
+            Binary form.
+        :rtype: byte string
+        """
+
+        self._digest_done = True
+
+        bfr = create_string_buffer(self.digest_size)
+        result = _raw_keccak_lib.keccak_digest(self._state.get(),
+                                               bfr,
+                                               c_size_t(self.digest_size),
+                                               c_ubyte(self._padding))
+        if result:
+            raise ValueError("Error %d while instantiating SHA-3/384"
+                             % result)
+
+        self._digest_value = get_raw_buffer(bfr)
+        return self._digest_value
+
+    def hexdigest(self):
+        """Return the **printable** digest of the message that has been hashed so far.
+
+        :return: The hash digest, computed over the data processed so far.
+            Hexadecimal encoded.
+        :rtype: string
+        """
+
+        return "".join(["%02x" % bord(x) for x in self.digest()])
+
+    def copy(self):
+        """Return a copy ("clone") of the hash object.
+
+        The copy will have the same internal state as the original hash
+        object.
+        This can be used to efficiently compute the digests of strings that
+        share a common initial substring.
+
+        :return: A hash object of the same type
+        """
+
+        clone = self.new()
+        result = _raw_keccak_lib.keccak_copy(self._state.get(),
+                                             clone._state.get())
+        if result:
+            raise ValueError("Error %d while copying SHA3-384" % result)
+        return clone
+
+    def new(self, data=None):
+        """Create a fresh SHA3-384 hash object."""
+
+        return type(self)(data, self._update_after_digest)
+
+
+def new(*args, **kwargs):
+    """Create a new hash object.
+
+    Args:
+        data (byte string/byte array/memoryview):
+            The very first chunk of the message to hash.
+            It is equivalent to an early call to :meth:`update`.
+        update_after_digest (boolean):
+            Whether :meth:`digest` can be followed by another :meth:`update`
+            (default: ``False``).
+
+    :Return: A :class:`SHA3_384_Hash` hash object
+    """
+
+    data = kwargs.pop("data", None)
+    update_after_digest = kwargs.pop("update_after_digest", False)
+    if len(args) == 1:
+        if data:
+            raise ValueError("Initial data for hash specified twice")
+        data = args[0]
+
+    if kwargs:
+        raise TypeError("Unknown parameters: " + str(kwargs))
+
+    return SHA3_384_Hash(data, update_after_digest)
+
+# The size of the resulting hash in bytes.
+digest_size = SHA3_384_Hash.digest_size
+
+# Input block size for HMAC
+block_size = 104
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_384.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_384.pyi
new file mode 100644
index 00000000..98d00c6c
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_384.pyi
@@ -0,0 +1,19 @@
+from typing import Union, Optional
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class SHA3_384_Hash(object):
+    digest_size: int
+    block_size: int
+    oid: str
+    def __init__(self, data: Optional[Buffer], update_after_digest: bool) -> None: ...
+    def update(self, data: Buffer) -> SHA3_384_Hash: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def copy(self) -> SHA3_384_Hash: ...
+    def new(self, data: Optional[Buffer]) -> SHA3_384_Hash: ...
+
+def new(__data: Buffer = ..., update_after_digest: bool = ...) -> SHA3_384_Hash: ...
+
+digest_size: int
+block_size: int
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_512.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_512.py
new file mode 100644
index 00000000..de8880c7
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_512.py
@@ -0,0 +1,174 @@
+# -*- coding: utf-8 -*-
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+from Crypto.Util.py3compat import bord
+
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
+                                  VoidPointer, SmartPointer,
+                                  create_string_buffer,
+                                  get_raw_buffer, c_size_t,
+                                  c_uint8_ptr, c_ubyte)
+
+from Crypto.Hash.keccak import _raw_keccak_lib
+
+class SHA3_512_Hash(object):
+    """A SHA3-512 hash object.
+    Do not instantiate directly.
+    Use the :func:`new` function.
+
+    :ivar oid: ASN.1 Object ID
+    :vartype oid: string
+
+    :ivar digest_size: the size in bytes of the resulting hash
+    :vartype digest_size: integer
+    """
+
+    # The size of the resulting hash in bytes.
+    digest_size = 64
+
+    # ASN.1 Object ID
+    oid = "2.16.840.1.101.3.4.2.10"
+
+    # Input block size for HMAC
+    block_size = 72
+
+    def __init__(self, data, update_after_digest):
+        self._update_after_digest = update_after_digest
+        self._digest_done = False
+        self._padding = 0x06
+
+        state = VoidPointer()
+        result = _raw_keccak_lib.keccak_init(state.address_of(),
+                                             c_size_t(self.digest_size * 2),
+                                             c_ubyte(24))
+        if result:
+            raise ValueError("Error %d while instantiating SHA-3/512"
+                             % result)
+        self._state = SmartPointer(state.get(),
+                                   _raw_keccak_lib.keccak_destroy)
+        if data:
+            self.update(data)
+
+    def update(self, data):
+        """Continue hashing of a message by consuming the next chunk of data.
+
+        Args:
+            data (byte string/byte array/memoryview): The next chunk of the message being hashed.
+        """
+
+        if self._digest_done and not self._update_after_digest:
+            raise TypeError("You can only call 'digest' or 'hexdigest' on this object")
+
+        result = _raw_keccak_lib.keccak_absorb(self._state.get(),
+                                               c_uint8_ptr(data),
+                                               c_size_t(len(data)))
+        if result:
+            raise ValueError("Error %d while updating SHA-3/512"
+                             % result)
+        return self
+
+    def digest(self):
+        """Return the **binary** (non-printable) digest of the message that has been hashed so far.
+
+        :return: The hash digest, computed over the data processed so far.
+            Binary form.
+        :rtype: byte string
+        """
+
+        self._digest_done = True
+
+        bfr = create_string_buffer(self.digest_size)
+        result = _raw_keccak_lib.keccak_digest(self._state.get(),
+                                               bfr,
+                                               c_size_t(self.digest_size),
+                                               c_ubyte(self._padding))
+        if result:
+            raise ValueError("Error %d while instantiating SHA-3/512"
+                             % result)
+
+        self._digest_value = get_raw_buffer(bfr)
+        return self._digest_value
+
+    def hexdigest(self):
+        """Return the **printable** digest of the message that has been hashed so far.
+
+        :return: The hash digest, computed over the data processed so far.
+            Hexadecimal encoded.
+        :rtype: string
+        """
+
+        return "".join(["%02x" % bord(x) for x in self.digest()])
+
+    def copy(self):
+        """Return a copy ("clone") of the hash object.
+
+        The copy will have the same internal state as the original hash
+        object.
+        This can be used to efficiently compute the digests of strings that
+        share a common initial substring.
+
+        :return: A hash object of the same type
+        """
+
+        clone = self.new()
+        result = _raw_keccak_lib.keccak_copy(self._state.get(),
+                                             clone._state.get())
+        if result:
+            raise ValueError("Error %d while copying SHA3-512" % result)
+        return clone
+
+    def new(self, data=None):
+        """Create a fresh SHA3-512 hash object."""
+
+        return type(self)(data, self._update_after_digest)
+
+
+def new(*args, **kwargs):
+    """Create a new hash object.
+
+    Args:
+        data (byte string/byte array/memoryview):
+            The very first chunk of the message to hash.
+            It is equivalent to an early call to :meth:`update`.
+        update_after_digest (boolean):
+            Whether :meth:`digest` can be followed by another :meth:`update`
+            (default: ``False``).
+
+    :Return: A :class:`SHA3_512_Hash` hash object
+    """
+
+    data = kwargs.pop("data", None)
+    update_after_digest = kwargs.pop("update_after_digest", False)
+    if len(args) == 1:
+        if data:
+            raise ValueError("Initial data for hash specified twice")
+        data = args[0]
+
+    if kwargs:
+        raise TypeError("Unknown parameters: " + str(kwargs))
+
+    return SHA3_512_Hash(data, update_after_digest)
+
+# The size of the resulting hash in bytes.
+digest_size = SHA3_512_Hash.digest_size
+
+# Input block size for HMAC
+block_size = 72
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_512.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_512.pyi
new file mode 100644
index 00000000..cdeec165
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA3_512.pyi
@@ -0,0 +1,19 @@
+from typing import Union, Optional
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class SHA3_512_Hash(object):
+    digest_size: int
+    block_size: int
+    oid: str
+    def __init__(self, data: Optional[Buffer], update_after_digest: bool) -> None: ...
+    def update(self, data: Buffer) -> SHA3_512_Hash: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def copy(self) -> SHA3_512_Hash: ...
+    def new(self, data: Optional[Buffer]) -> SHA3_512_Hash: ...
+
+def new(__data: Buffer = ..., update_after_digest: bool = ...) -> SHA3_512_Hash: ...
+
+digest_size: int
+block_size: int
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA512.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHA512.py
new file mode 100644
index 00000000..403fe45e
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA512.py
@@ -0,0 +1,204 @@
+# -*- coding: utf-8 -*-
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+from Crypto.Util.py3compat import bord
+
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
+                                  VoidPointer, SmartPointer,
+                                  create_string_buffer,
+                                  get_raw_buffer, c_size_t,
+                                  c_uint8_ptr)
+
+_raw_sha512_lib = load_pycryptodome_raw_lib("Crypto.Hash._SHA512",
+                        """
+                        int SHA512_init(void **shaState,
+                                        size_t digest_size);
+                        int SHA512_destroy(void *shaState);
+                        int SHA512_update(void *hs,
+                                          const uint8_t *buf,
+                                          size_t len);
+                        int SHA512_digest(const void *shaState,
+                                          uint8_t *digest,
+                                          size_t digest_size);
+                        int SHA512_copy(const void *src, void *dst);
+
+                        int SHA512_pbkdf2_hmac_assist(const void *inner,
+                                                      const void *outer,
+                                                      const uint8_t *first_digest,
+                                                      uint8_t *final_digest,
+                                                      size_t iterations,
+                                                      size_t digest_size);
+                        """)
+
+class SHA512Hash(object):
+    """A SHA-512 hash object (possibly in its truncated version SHA-512/224 or
+    SHA-512/256).
+    Do not instantiate directly. Use the :func:`new` function.
+
+    :ivar oid: ASN.1 Object ID
+    :vartype oid: string
+
+    :ivar block_size: the size in bytes of the internal message block,
+        input to the compression function
+    :vartype block_size: integer
+
+    :ivar digest_size: the size in bytes of the resulting hash
+    :vartype digest_size: integer
+    """
+
+    # The internal block size of the hash algorithm in bytes.
+    block_size = 128
+
+    def __init__(self, data, truncate):
+        self._truncate = truncate
+
+        if truncate is None:
+            self.oid = "2.16.840.1.101.3.4.2.3"
+            self.digest_size = 64
+        elif truncate == "224":
+            self.oid = "2.16.840.1.101.3.4.2.5"
+            self.digest_size = 28
+        elif truncate == "256":
+            self.oid = "2.16.840.1.101.3.4.2.6"
+            self.digest_size = 32
+        else:
+            raise ValueError("Incorrect truncation length. It must be '224' or '256'.")
+
+        state = VoidPointer()
+        result = _raw_sha512_lib.SHA512_init(state.address_of(),
+                                             c_size_t(self.digest_size))
+        if result:
+            raise ValueError("Error %d while instantiating SHA-512"
+                             % result)
+        self._state = SmartPointer(state.get(),
+                                   _raw_sha512_lib.SHA512_destroy)
+        if data:
+            self.update(data)
+
+    def update(self, data):
+        """Continue hashing of a message by consuming the next chunk of data.
+
+        Args:
+            data (byte string/byte array/memoryview): The next chunk of the message being hashed.
+        """
+
+        result = _raw_sha512_lib.SHA512_update(self._state.get(),
+                                               c_uint8_ptr(data),
+                                               c_size_t(len(data)))
+        if result:
+            raise ValueError("Error %d while hashing data with SHA512"
+                             % result)
+
+    def digest(self):
+        """Return the **binary** (non-printable) digest of the message that has been hashed so far.
+
+        :return: The hash digest, computed over the data processed so far.
+            Binary form.
+ :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_sha512_lib.SHA512_digest(self._state.get(), + bfr, + c_size_t(self.digest_size)) + if result: + raise ValueError("Error %d while making SHA512 digest" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = SHA512Hash(None, self._truncate) + result = _raw_sha512_lib.SHA512_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA512" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA-512 hash object.""" + + return SHA512Hash(data, self._truncate) + + +def new(data=None, truncate=None): + """Create a new hash object. + + Args: + data (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`SHA512Hash.update`. + truncate (string): + Optional. The desired length of the digest. It can be either "224" or + "256". If not present, the digest is 512 bits long. + Passing this parameter is **not** equivalent to simply truncating + the output digest. + + :Return: A :class:`SHA512Hash` hash object + """ + + return SHA512Hash(data, truncate) + + +# The size of the full SHA-512 hash in bytes. +digest_size = 64 + +# The internal block size of the hash algorithm in bytes. +block_size = 128 + + +def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): + """Compute the expensive inner loop in PBKDF-HMAC.""" + + assert iterations > 0 + + bfr = create_string_buffer(len(first_digest)); + result = _raw_sha512_lib.SHA512_pbkdf2_hmac_assist( + inner._state.get(), + outer._state.get(), + first_digest, + bfr, + c_size_t(iterations), + c_size_t(len(first_digest))) + + if result: + raise ValueError("Error %d with PBKDF2-HMAC assist for SHA512" % result) + + return get_raw_buffer(bfr) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHA512.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHA512.pyi new file mode 100644 index 00000000..f219ee94 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHA512.pyi @@ -0,0 +1,22 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA512Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, + data: Optional[Buffer], + truncate: Optional[str]) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA512Hash: ... + def new(self, data: Optional[Buffer] = ...) -> SHA512Hash: ... + +def new(data: Optional[Buffer] = ..., + truncate: Optional[str] = ...) -> SHA512Hash: ... 
+digest_size: int +block_size: int diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE128.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE128.py new file mode 100644 index 00000000..894a41b1 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE128.py @@ -0,0 +1,129 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Crypto.Hash.keccak import _raw_keccak_lib + +class SHAKE128_XOF(object): + """A SHAKE128 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + """ + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.11" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(32), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHAKE128" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + self._is_squeezing = False + self._padding = 0x1F + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._is_squeezing: + raise TypeError("You cannot call 'update' after the first 'read'") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating SHAKE128 state" + % result) + return self + + def read(self, length): + """ + Compute the next piece of XOF output. + + .. note:: + You cannot use :meth:`update` anymore after the first call to + :meth:`read`. 
+ + Args: + length (integer): the amount of bytes this method must return + + :return: the next piece of XOF output (of the given length) + :rtype: byte string + """ + + self._is_squeezing = True + bfr = create_string_buffer(length) + result = _raw_keccak_lib.keccak_squeeze(self._state.get(), + bfr, + c_size_t(length), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while extracting from SHAKE128" + % result) + + return get_raw_buffer(bfr) + + def new(self, data=None): + return type(self)(data=data) + + +def new(data=None): + """Return a fresh instance of a SHAKE128 object. + + Args: + data (bytes/bytearray/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + Optional. + + :Return: A :class:`SHAKE128_XOF` object + """ + + return SHAKE128_XOF(data=data) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE128.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE128.pyi new file mode 100644 index 00000000..f6188816 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE128.pyi @@ -0,0 +1,13 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHAKE128_XOF(object): + oid: str + def __init__(self, + data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> SHAKE128_XOF: ... + def read(self, length: int) -> bytes: ... + def new(self, data: Optional[Buffer] = ...) -> SHAKE128_XOF: ... + +def new(data: Optional[Buffer] = ...) -> SHAKE128_XOF: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE256.py b/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE256.py new file mode 100644 index 00000000..f75b8221 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE256.py @@ -0,0 +1,130 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Crypto.Hash.keccak import _raw_keccak_lib + +class SHAKE256_XOF(object): + """A SHAKE256 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + """ + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.12" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(64), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHAKE256" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + self._is_squeezing = False + self._padding = 0x1F + + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._is_squeezing: + raise TypeError("You cannot call 'update' after the first 'read'") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating SHAKE256 state" + % result) + return self + + def read(self, length): + """ + Compute the next piece of XOF output. + + .. note:: + You cannot use :meth:`update` anymore after the first call to + :meth:`read`. + + Args: + length (integer): the amount of bytes this method must return + + :return: the next piece of XOF output (of the given length) + :rtype: byte string + """ + + self._is_squeezing = True + bfr = create_string_buffer(length) + result = _raw_keccak_lib.keccak_squeeze(self._state.get(), + bfr, + c_size_t(length), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while extracting from SHAKE256" + % result) + + return get_raw_buffer(bfr) + + def new(self, data=None): + return type(self)(data=data) + + +def new(data=None): + """Return a fresh instance of a SHAKE256 object. + + Args: + data (bytes/bytearray/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + Optional. + + :Return: A :class:`SHAKE256_XOF` object + """ + + return SHAKE256_XOF(data=data) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE256.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE256.pyi new file mode 100644 index 00000000..029347af --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/SHAKE256.pyi @@ -0,0 +1,13 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHAKE256_XOF(object): + oid: str + def __init__(self, + data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> SHAKE256_XOF: ... + def read(self, length: int) -> bytes: ... + def new(self, data: Optional[Buffer] = ...) -> SHAKE256_XOF: ... + +def new(data: Optional[Buffer] = ...) -> SHAKE256_XOF: ... 
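Aside: the two SHAKE modules above implement an extendable-output function (XOF) rather than a fixed-size hash, so there is no `digest()`; input is absorbed with `update()` and output of any length is squeezed with `read()`. A minimal usage sketch based on the `new`/`update`/`read` methods defined above, assuming this vendored pycryptodome package is importable (the message bytes are arbitrary examples):

```python
from Crypto.Hash import SHAKE256

xof = SHAKE256.new(data=b"partial ")
xof.update(b"message")   # update() returns self, so calls can be chained

first = xof.read(16)     # first 16 bytes of XOF output
more = xof.read(32)      # squeezing continues from the same internal state

# Once squeezing has started, further absorbing is rejected:
try:
    xof.update(b"too late")
except TypeError as err:
    print(err)           # "You cannot call 'update' after the first 'read'"
```

Because the output is a deterministic stream, two consecutive `read(16)` calls yield the same 32 bytes as a single `read(32)` on a fresh object with the same input.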
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash128.py b/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash128.py new file mode 100644 index 00000000..8fd32837 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash128.py @@ -0,0 +1,138 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util.py3compat import bord, is_bytes, tobytes + +from . import cSHAKE128 +from .cSHAKE128 import _encode_str, _right_encode + + +class TupleHash(object): + """A Tuple hash object. + Do not instantiate directly. + Use the :func:`new` function. + """ + + def __init__(self, custom, cshake, digest_size): + + self.digest_size = digest_size + + self._cshake = cshake._new(b'', custom, b'TupleHash') + self._digest = None + + def update(self, data): + """Authenticate the next byte string in the tuple. + + Args: + data (bytes/bytearray/memoryview): The next byte string. + """ + + if self._digest is not None: + raise TypeError("You cannot call 'update' after 'digest' or 'hexdigest'") + + if not is_bytes(data): + raise TypeError("You can only call 'update' on bytes") + + self._cshake.update(_encode_str(tobytes(data))) + + return self + + def digest(self): + """Return the **binary** (non-printable) digest of the tuple of byte strings. + + :return: The hash digest. Binary form. + :rtype: byte string + """ + + if self._digest is None: + self._cshake.update(_right_encode(self.digest_size * 8)) + self._digest = self._cshake.read(self.digest_size) + + return self._digest + + def hexdigest(self): + """Return the **printable** digest of the tuple of byte strings. + + :return: The hash digest. Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in tuple(self.digest())]) + + def new(self, **kwargs): + """Return a new instance of a TupleHash object. + See :func:`new`. + """ + + if "digest_bytes" not in kwargs and "digest_bits" not in kwargs: + kwargs["digest_bytes"] = self.digest_size + + return new(**kwargs) + + +def new(**kwargs): + """Create a new TupleHash128 object. 
+ + Args: + digest_bytes (integer): + Optional. The size of the digest, in bytes. + Default is 64. Minimum is 8. + digest_bits (integer): + Optional and alternative to ``digest_bytes``. + The size of the digest, in bits (and in steps of 8). + Default is 512. Minimum is 64. + custom (bytes): + Optional. + A customization bytestring (``S`` in SP 800-185). + + :Return: A :class:`TupleHash` object + """ + + digest_bytes = kwargs.pop("digest_bytes", None) + digest_bits = kwargs.pop("digest_bits", None) + if None not in (digest_bytes, digest_bits): + raise TypeError("Only one digest parameter must be provided") + if (None, None) == (digest_bytes, digest_bits): + digest_bytes = 64 + if digest_bytes is not None: + if digest_bytes < 8: + raise ValueError("'digest_bytes' must be at least 8") + else: + if digest_bits < 64 or digest_bits % 8: + raise ValueError("'digest_bytes' must be at least 64 " + "in steps of 8") + digest_bytes = digest_bits // 8 + + custom = kwargs.pop("custom", b'') + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return TupleHash(custom, cSHAKE128, digest_bytes) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash128.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash128.pyi new file mode 100644 index 00000000..3b1e81e5 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash128.pyi @@ -0,0 +1,22 @@ +from typing import Any, Union +from types import ModuleType + +Buffer = Union[bytes, bytearray, memoryview] + +class TupleHash(object): + digest_size: int + def __init__(self, + custom: bytes, + cshake: ModuleType, + digest_size: int) -> None: ... + def update(self, data: Buffer) -> TupleHash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def new(self, + digest_bytes: int = ..., + digest_bits: int = ..., + custom: int = ...) -> TupleHash: ... + +def new(digest_bytes: int = ..., + digest_bits: int = ..., + custom: int = ...) -> TupleHash: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash256.py b/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash256.py new file mode 100644 index 00000000..9b4fba08 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash256.py @@ -0,0 +1,73 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from . import cSHAKE256 +from .TupleHash128 import TupleHash + + +def new(**kwargs): + """Create a new TupleHash256 object. + + Args: + digest_bytes (integer): + Optional. The size of the digest, in bytes. + Default is 64. Minimum is 8. + digest_bits (integer): + Optional and alternative to ``digest_bytes``. + The size of the digest, in bits (and in steps of 8). + Default is 512. Minimum is 64. + custom (bytes): + Optional. + A customization bytestring (``S`` in SP 800-185). + + :Return: A :class:`TupleHash` object + """ + + digest_bytes = kwargs.pop("digest_bytes", None) + digest_bits = kwargs.pop("digest_bits", None) + if None not in (digest_bytes, digest_bits): + raise TypeError("Only one digest parameter must be provided") + if (None, None) == (digest_bytes, digest_bits): + digest_bytes = 64 + if digest_bytes is not None: + if digest_bytes < 8: + raise ValueError("'digest_bytes' must be at least 8") + else: + if digest_bits < 64 or digest_bits % 8: + raise ValueError("'digest_bytes' must be at least 64 " + "in steps of 8") + digest_bytes = digest_bits // 8 + + custom = kwargs.pop("custom", b'') + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return TupleHash(custom, cSHAKE256, digest_bytes) diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash256.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash256.pyi new file mode 100644 index 00000000..82d943f0 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/TupleHash256.pyi @@ -0,0 +1,5 @@ +from .TupleHash128 import TupleHash + +def new(digest_bytes: int = ..., + digest_bits: int = ..., + custom: int = ...) -> TupleHash: ... 
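Aside: unlike a plain hash of concatenated data, TupleHash authenticates a *sequence* of byte strings. Each `update()` call in the modules above wraps its argument with the SP 800-185 string encoding before absorbing it, so element boundaries contribute to the digest. A short sketch of the API just added (the digest size and customization string are illustrative choices, and the package is assumed importable):

```python
from Crypto.Hash import TupleHash128

h1 = TupleHash128.new(digest_bytes=32, custom=b"example")
h1.update(b"ab")
h1.update(b"c")

h2 = TupleHash128.new(digest_bytes=32, custom=b"example")
h2.update(b"a")
h2.update(b"bc")

# Same concatenated bytes, different tuples: the digests differ.
assert h1.hexdigest() != h2.hexdigest()
```

TupleHash256 exposes the same interface but, as its `new()` shows, builds on cSHAKE256 instead of cSHAKE128 for a higher security level.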
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_BLAKE2b.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_BLAKE2b.abi3.so new file mode 100644 index 00000000..d7bc5bf2 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_BLAKE2b.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_BLAKE2s.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_BLAKE2s.abi3.so new file mode 100644 index 00000000..1e7ac6c9 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_BLAKE2s.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_MD2.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_MD2.abi3.so new file mode 100644 index 00000000..7671f603 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_MD2.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_MD4.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_MD4.abi3.so new file mode 100644 index 00000000..ddd85d1b Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_MD4.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_MD5.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_MD5.abi3.so new file mode 100644 index 00000000..a8b66839 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_MD5.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_RIPEMD160.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_RIPEMD160.abi3.so new file mode 100644 index 00000000..d37c515f Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_RIPEMD160.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_SHA1.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_SHA1.abi3.so new file mode 100644 index 00000000..70809849 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_SHA1.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_SHA224.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_SHA224.abi3.so new file mode 100644 index 00000000..5ceeaa5c Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_SHA224.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_SHA256.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_SHA256.abi3.so new file mode 100644 index 00000000..0b4f41b4 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_SHA256.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_SHA384.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_SHA384.abi3.so new file mode 100644 index 00000000..8f879720 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_SHA384.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_SHA512.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_SHA512.abi3.so new file mode 100644 index 00000000..804ee9e1 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_SHA512.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/__init__.py b/env/lib/python3.8/site-packages/Crypto/Hash/__init__.py new file mode 100644 index 00000000..4bda0841 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/__init__.py @@ -0,0 +1,24 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. 
To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +__all__ = ['HMAC', 'MD2', 'MD4', 'MD5', 'RIPEMD160', 'SHA1', + 'SHA224', 'SHA256', 'SHA384', 'SHA512', 'CMAC', 'Poly1305', + 'cSHAKE128', 'cSHAKE256', 'KMAC128', 'KMAC256', + 'TupleHash128', 'TupleHash256', 'KangarooTwelve'] diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/__init__.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/__init__.pyi new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_ghash_clmul.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_ghash_clmul.abi3.so new file mode 100644 index 00000000..77163c13 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_ghash_clmul.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_ghash_portable.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_ghash_portable.abi3.so new file mode 100644 index 00000000..702cd16c Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_ghash_portable.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_keccak.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_keccak.abi3.so new file mode 100644 index 00000000..aaa33d72 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_keccak.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/_poly1305.abi3.so b/env/lib/python3.8/site-packages/Crypto/Hash/_poly1305.abi3.so new file mode 100644 index 00000000..a7950271 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Hash/_poly1305.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE128.py b/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE128.py new file mode 100644 index 00000000..92a4e5ca --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE128.py @@ -0,0 +1,187 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. 
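Every name in the __all__ list above is a submodule exposing a uniform new() constructor; a typical call, using SHA-256 as an arbitrary pick:

    from Crypto.Hash import SHA256

    h = SHA256.new(data=b"abc")
    h.update(b"def")            # incremental hashing: digest of b"abcdef"
    print(h.hexdigest())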
+# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util.py3compat import bchr + +from Crypto.Util._raw_api import (VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Crypto.Util.number import long_to_bytes + +from Crypto.Hash.keccak import _raw_keccak_lib + + +def _left_encode(x): + """Left encode function as defined in NIST SP 800-185""" + + assert (x < (1 << 2040) and x >= 0) + + # Get number of bytes needed to represent this integer. + num = 1 if x == 0 else (x.bit_length() + 7) // 8 + + return bchr(num) + long_to_bytes(x) + + +def _right_encode(x): + """Right encode function as defined in NIST SP 800-185""" + + assert (x < (1 << 2040) and x >= 0) + + # Get number of bytes needed to represent this integer. + num = 1 if x == 0 else (x.bit_length() + 7) // 8 + + return long_to_bytes(x) + bchr(num) + + +def _encode_str(x): + """Encode string function as defined in NIST SP 800-185""" + + bitlen = len(x) * 8 + if bitlen >= (1 << 2040): + raise ValueError("String too large to encode in cSHAKE") + + return _left_encode(bitlen) + x + + +def _bytepad(x, length): + """Zero pad byte string as defined in NIST SP 800-185""" + + to_pad = _left_encode(length) + x + + # Note: this implementation works with byte aligned strings, + # hence no additional bit padding is needed at this point. + npad = (length - len(to_pad) % length) % length + + return to_pad + b'\x00' * npad + + +class cSHAKE_XOF(object): + """A cSHAKE hash object. + Do not instantiate directly. + Use the :func:`new` function. + """ + + def __init__(self, data, custom, capacity, function): + state = VoidPointer() + + if custom or function: + prefix_unpad = _encode_str(function) + _encode_str(custom) + prefix = _bytepad(prefix_unpad, (1600 - capacity)//8) + self._padding = 0x04 + else: + prefix = None + self._padding = 0x1F # for SHAKE + + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(capacity//8), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating cSHAKE" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + self._is_squeezing = False + + if prefix: + self.update(prefix) + + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. 
+ """ + + if self._is_squeezing: + raise TypeError("You cannot call 'update' after the first 'read'") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating %s state" + % (result, self.name)) + return self + + def read(self, length): + """ + Compute the next piece of XOF output. + + .. note:: + You cannot use :meth:`update` anymore after the first call to + :meth:`read`. + + Args: + length (integer): the amount of bytes this method must return + + :return: the next piece of XOF output (of the given length) + :rtype: byte string + """ + + self._is_squeezing = True + bfr = create_string_buffer(length) + result = _raw_keccak_lib.keccak_squeeze(self._state.get(), + bfr, + c_size_t(length), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while extracting from %s" + % (result, self.name)) + + return get_raw_buffer(bfr) + + +def _new(data, custom, function): + # Use Keccak[256] + return cSHAKE_XOF(data, custom, 256, function) + + +def new(data=None, custom=None): + """Return a fresh instance of a cSHAKE128 object. + + Args: + data (bytes/bytearray/memoryview): + Optional. + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + custom (bytes): + Optional. + A customization bytestring (``S`` in SP 800-185). + + :Return: A :class:`cSHAKE_XOF` object + """ + + # Use Keccak[256] + return cSHAKE_XOF(data, custom, 256, b'') diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE128.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE128.pyi new file mode 100644 index 00000000..1452feaf --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE128.pyi @@ -0,0 +1,14 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class cSHAKE_XOF(object): + def __init__(self, + data: Optional[Buffer] = ..., + function: Optional[bytes] = ..., + custom: Optional[bytes] = ...) -> None: ... + def update(self, data: Buffer) -> cSHAKE_XOF: ... + def read(self, length: int) -> bytes: ... + +def new(data: Optional[Buffer] = ..., + custom: Optional[Buffer] = ...) -> cSHAKE_XOF: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE256.py b/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE256.py new file mode 100644 index 00000000..b3b31d6d --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE256.py @@ -0,0 +1,56 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
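A usage sketch for the cSHAKE128 XOF defined above, absorbing data and then squeezing an arbitrary amount of output:

    from Crypto.Hash import cSHAKE128

    xof = cSHAKE128.new(custom=b"my-protocol")  # customization string S
    xof.update(b"some data")
    first = xof.read(16)        # squeeze 16 bytes
    more = xof.read(16)         # keep squeezing; update() is now disallowed

With an empty customization string the object falls back to plain SHAKE128 padding (0x1F), as the _padding logic in __init__ above shows.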
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util._raw_api import c_size_t +from Crypto.Hash.cSHAKE128 import cSHAKE_XOF + + +def _new(data, custom, function): + # Use Keccak[512] + return cSHAKE_XOF(data, custom, 512, function) + + +def new(data=None, custom=None): + """Return a fresh instance of a cSHAKE256 object. + + Args: + data (bytes/bytearray/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + Optional. + custom (bytes): + Optional. + A customization bytestring (``S`` in SP 800-185). + + :Return: A :class:`cSHAKE_XOF` object + """ + + # Use Keccak[512] + return cSHAKE_XOF(data, custom, 512, b'') diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE256.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE256.pyi new file mode 100644 index 00000000..205b816a --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/cSHAKE256.pyi @@ -0,0 +1,8 @@ +from typing import Union, Optional + +from Crypto.Hash.cSHAKE128 import cSHAKE_XOF + +Buffer = Union[bytes, bytearray, memoryview] + +def new(data: Optional[Buffer] = ..., + custom: Optional[Buffer] = ...) -> cSHAKE_XOF: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/keccak.py b/env/lib/python3.8/site-packages/Crypto/Hash/keccak.py new file mode 100644 index 00000000..f3f8bb5a --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Hash/keccak.py @@ -0,0 +1,181 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +from Crypto.Util.py3compat import bord + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +_raw_keccak_lib = load_pycryptodome_raw_lib("Crypto.Hash._keccak", + """ + int keccak_init(void **state, + size_t capacity_bytes, + uint8_t rounds); + int keccak_destroy(void *state); + int keccak_absorb(void *state, + const uint8_t *in, + size_t len); + int keccak_squeeze(const void *state, + uint8_t *out, + size_t len, + uint8_t padding); + int keccak_digest(void *state, + uint8_t *digest, + size_t len, + uint8_t padding); + int keccak_copy(const void *src, void *dst); + int keccak_reset(void *state); + """) + +class Keccak_Hash(object): + """A Keccak hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + def __init__(self, data, digest_bytes, update_after_digest): + # The size of the resulting hash in bytes. + self.digest_size = digest_bytes + + self._update_after_digest = update_after_digest + self._digest_done = False + self._padding = 0x01 + + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(self.digest_size * 2), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating keccak" % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating keccak" % result) + return self + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + self._digest_done = True + bfr = create_string_buffer(self.digest_size) + result = _raw_keccak_lib.keccak_digest(self._state.get(), + bfr, + c_size_t(self.digest_size), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while squeezing keccak" % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def new(self, **kwargs): + """Create a fresh Keccak hash object.""" + + if "digest_bytes" not in kwargs and "digest_bits" not in kwargs: + kwargs["digest_bytes"] = self.digest_size + + return new(**kwargs) + + +def new(**kwargs): + """Create a new hash object. + + Args: + data (bytes/bytearray/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`Keccak_Hash.update`. + digest_bytes (integer): + The size of the digest, in bytes (28, 32, 48, 64). + digest_bits (integer): + The size of the digest, in bits (224, 256, 384, 512). 
+        update_after_digest (boolean):
+            Whether :meth:`Keccak_Hash.digest` can be followed by another
+            :meth:`Keccak_Hash.update` (default: ``False``).
+
+    :Return: A :class:`Keccak_Hash` hash object
+    """
+
+    data = kwargs.pop("data", None)
+    update_after_digest = kwargs.pop("update_after_digest", False)
+
+    digest_bytes = kwargs.pop("digest_bytes", None)
+    digest_bits = kwargs.pop("digest_bits", None)
+    if None not in (digest_bytes, digest_bits):
+        raise TypeError("Only one digest parameter must be provided")
+    if (None, None) == (digest_bytes, digest_bits):
+        raise TypeError("Digest size (bits, bytes) not provided")
+    if digest_bytes is not None:
+        if digest_bytes not in (28, 32, 48, 64):
+            raise ValueError("'digest_bytes' must be: 28, 32, 48 or 64")
+    else:
+        if digest_bits not in (224, 256, 384, 512):
+            raise ValueError("'digest_bits' must be: 224, 256, 384 or 512")
+        digest_bytes = digest_bits // 8
+
+    if kwargs:
+        raise TypeError("Unknown parameters: " + str(kwargs))
+
+    return Keccak_Hash(data, digest_bytes, update_after_digest)
diff --git a/env/lib/python3.8/site-packages/Crypto/Hash/keccak.pyi b/env/lib/python3.8/site-packages/Crypto/Hash/keccak.pyi
new file mode 100644
index 00000000..844d2565
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Hash/keccak.pyi
@@ -0,0 +1,23 @@
+from typing import Union, Any
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class Keccak_Hash(object):
+    digest_size: int
+    def __init__(self,
+                 data: Buffer,
+                 digest_bytes: int,
+                 update_after_digest: bool) -> None: ...
+    def update(self, data: Buffer) -> Keccak_Hash: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def new(self,
+            data: Buffer = ...,
+            digest_bytes: int = ...,
+            digest_bits: int = ...,
+            update_after_digest: bool = ...) -> Keccak_Hash: ...
+
+def new(data: Buffer = ...,
+        digest_bytes: int = ...,
+        digest_bits: int = ...,
+        update_after_digest: bool = ...) -> Keccak_Hash: ...
diff --git a/env/lib/python3.8/site-packages/Crypto/IO/PEM.py b/env/lib/python3.8/site-packages/Crypto/IO/PEM.py
new file mode 100644
index 00000000..4c07b257
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/IO/PEM.py
@@ -0,0 +1,189 @@
+#
+# Util/PEM.py : Privacy Enhanced Mail utilities
+#
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
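A usage sketch for the raw keccak module above. Note the 0x01 domain padding set in Keccak_Hash.__init__: this is pre-standard Keccak (the variant Ethereum uses), so the output differs from FIPS 202 SHA3-256 on the same input:

    from Crypto.Hash import keccak

    k = keccak.new(digest_bits=256)
    k.update(b"hello")
    print(k.hexdigest())        # Keccak-256, not SHA3-256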
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = ['encode', 'decode'] + +import re +from binascii import a2b_base64, b2a_base64, hexlify, unhexlify + +from Crypto.Hash import MD5 +from Crypto.Util.Padding import pad, unpad +from Crypto.Cipher import DES, DES3, AES +from Crypto.Protocol.KDF import PBKDF1 +from Crypto.Random import get_random_bytes +from Crypto.Util.py3compat import tobytes, tostr + + +def encode(data, marker, passphrase=None, randfunc=None): + """Encode a piece of binary data into PEM format. + + Args: + data (byte string): + The piece of binary data to encode. + marker (string): + The marker for the PEM block (e.g. "PUBLIC KEY"). + Note that there is no official master list for all allowed markers. + Still, you can refer to the OpenSSL_ source code. + passphrase (byte string): + If given, the PEM block will be encrypted. The key is derived from + the passphrase. + randfunc (callable): + Random number generation function; it accepts an integer N and returns + a byte string of random data, N bytes long. If not given, a new one is + instantiated. + + Returns: + The PEM block, as a string. + + .. _OpenSSL: https://github.com/openssl/openssl/blob/master/include/openssl/pem.h + """ + + if randfunc is None: + randfunc = get_random_bytes + + out = "-----BEGIN %s-----\n" % marker + if passphrase: + # We only support 3DES for encryption + salt = randfunc(8) + key = PBKDF1(passphrase, salt, 16, 1, MD5) + key += PBKDF1(key + passphrase, salt, 8, 1, MD5) + objenc = DES3.new(key, DES3.MODE_CBC, salt) + out += "Proc-Type: 4,ENCRYPTED\nDEK-Info: DES-EDE3-CBC,%s\n\n" %\ + tostr(hexlify(salt).upper()) + # Encrypt with PKCS#7 padding + data = objenc.encrypt(pad(data, objenc.block_size)) + elif passphrase is not None: + raise ValueError("Empty password") + + # Each BASE64 line can take up to 64 characters (=48 bytes of data) + # b2a_base64 adds a new line character! + chunks = [tostr(b2a_base64(data[i:i + 48])) + for i in range(0, len(data), 48)] + out += "".join(chunks) + out += "-----END %s-----" % marker + return out + + +def _EVP_BytesToKey(data, salt, key_len): + d = [ b'' ] + m = (key_len + 15 ) // 16 + for _ in range(m): + nd = MD5.new(d[-1] + data + salt).digest() + d.append(nd) + return b"".join(d)[:key_len] + + +def decode(pem_data, passphrase=None): + """Decode a PEM block into binary. + + Args: + pem_data (string): + The PEM block. + passphrase (byte string): + If given and the PEM block is encrypted, + the key will be derived from the passphrase. + + Returns: + A tuple with the binary data, the marker string, and a boolean to + indicate if decryption was performed. + + Raises: + ValueError: if decoding fails, if the PEM file is encrypted and no passphrase has + been provided or if the passphrase is incorrect. 
+    """
+
+    # Verify Pre-Encapsulation Boundary
+    r = re.compile(r"\s*-----BEGIN (.*)-----\s+")
+    m = r.match(pem_data)
+    if not m:
+        raise ValueError("Not a valid PEM pre boundary")
+    marker = m.group(1)
+
+    # Verify Post-Encapsulation Boundary
+    r = re.compile(r"-----END (.*)-----\s*$")
+    m = r.search(pem_data)
+    if not m or m.group(1) != marker:
+        raise ValueError("Not a valid PEM post boundary")
+
+    # Remove spaces and split into lines
+    lines = pem_data.replace(" ", '').split()
+
+    # Decrypt, if necessary
+    if lines[1].startswith('Proc-Type:4,ENCRYPTED'):
+        if not passphrase:
+            raise ValueError("PEM is encrypted, but no passphrase available")
+        DEK = lines[2].split(':')
+        if len(DEK) != 2 or DEK[0] != 'DEK-Info':
+            raise ValueError("PEM encryption format not supported.")
+        algo, salt = DEK[1].split(',')
+        salt = unhexlify(tobytes(salt))
+
+        padding = True
+
+        if algo == "DES-CBC":
+            key = _EVP_BytesToKey(passphrase, salt, 8)
+            objdec = DES.new(key, DES.MODE_CBC, salt)
+        elif algo == "DES-EDE3-CBC":
+            key = _EVP_BytesToKey(passphrase, salt, 24)
+            objdec = DES3.new(key, DES3.MODE_CBC, salt)
+        elif algo == "AES-128-CBC":
+            key = _EVP_BytesToKey(passphrase, salt[:8], 16)
+            objdec = AES.new(key, AES.MODE_CBC, salt)
+        elif algo == "AES-192-CBC":
+            key = _EVP_BytesToKey(passphrase, salt[:8], 24)
+            objdec = AES.new(key, AES.MODE_CBC, salt)
+        elif algo == "AES-256-CBC":
+            key = _EVP_BytesToKey(passphrase, salt[:8], 32)
+            objdec = AES.new(key, AES.MODE_CBC, salt)
+        elif algo.lower() == "id-aes256-gcm":
+            key = _EVP_BytesToKey(passphrase, salt[:8], 32)
+            objdec = AES.new(key, AES.MODE_GCM, nonce=salt)
+            padding = False
+        else:
+            raise ValueError("Unsupported PEM encryption algorithm (%s)." % algo)
+        lines = lines[2:]
+    else:
+        objdec = None
+
+    # Decode body
+    data = a2b_base64(''.join(lines[1:-1]))
+    enc_flag = False
+    if objdec:
+        if padding:
+            data = unpad(objdec.decrypt(data), objdec.block_size)
+        else:
+            # There is no tag, so we don't use decrypt_and_verify
+            data = objdec.decrypt(data)
+        enc_flag = True
+
+    return (data, marker, enc_flag)
diff --git a/env/lib/python3.8/site-packages/Crypto/IO/PEM.pyi b/env/lib/python3.8/site-packages/Crypto/IO/PEM.pyi
new file mode 100644
index 00000000..2e324c45
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/IO/PEM.pyi
@@ -0,0 +1,10 @@
+from typing import Tuple, Optional, Callable
+
+def encode(data: bytes,
+           marker: str,
+           passphrase: Optional[bytes] = ...,
+           randfunc: Optional[Callable[[int],bytes]] = ...) -> str: ...
+
+
+def decode(pem_data: str,
+           passphrase: Optional[bytes] = ...) -> Tuple[bytes, str, bool]: ...
diff --git a/env/lib/python3.8/site-packages/Crypto/IO/PKCS8.py b/env/lib/python3.8/site-packages/Crypto/IO/PKCS8.py
new file mode 100644
index 00000000..18dffae3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/IO/PKCS8.py
@@ -0,0 +1,239 @@
+#
+# PublicKey/PKCS8.py : PKCS#8 functions
+#
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
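An illustrative round trip through the encode/decode pair above, with a hypothetical marker string:

    from Crypto.IO import PEM
    from Crypto.Random import get_random_bytes

    blob = get_random_bytes(32)
    pem_str = PEM.encode(blob, "EXAMPLE DATA", passphrase=b"secret")
    data, marker, was_encrypted = PEM.decode(pem_str, passphrase=b"secret")
    assert data == blob and marker == "EXAMPLE DATA" and was_encrypted

Encryption here uses the DES-EDE3-CBC path shown in encode(); decode() detects it from the Proc-Type/DEK-Info headers.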
+# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + + +from Crypto.Util.py3compat import * + +from Crypto.Util.asn1 import ( + DerNull, + DerSequence, + DerObjectId, + DerOctetString, + ) + +from Crypto.IO._PBES import PBES1, PBES2, PbesError + + +__all__ = ['wrap', 'unwrap'] + + +def wrap(private_key, key_oid, passphrase=None, protection=None, + prot_params=None, key_params=DerNull(), randfunc=None): + """Wrap a private key into a PKCS#8 blob (clear or encrypted). + + Args: + + private_key (byte string): + The private key encoded in binary form. The actual encoding is + algorithm specific. In most cases, it is DER. + + key_oid (string): + The object identifier (OID) of the private key to wrap. + It is a dotted string, like ``1.2.840.113549.1.1.1`` (for RSA keys). + + passphrase (bytes string or string): + The secret passphrase from which the wrapping key is derived. + Set it only if encryption is required. + + protection (string): + The identifier of the algorithm to use for securely wrapping the key. + The default value is ``PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC``. + + prot_params (dictionary): + Parameters for the protection algorithm. + + +------------------+-----------------------------------------------+ + | Key | Description | + +==================+===============================================+ + | iteration_count | The KDF algorithm is repeated several times to| + | | slow down brute force attacks on passwords | + | | (called *N* or CPU/memory cost in scrypt). | + | | The default value for PBKDF2 is 1000. | + | | The default value for scrypt is 16384. | + +------------------+-----------------------------------------------+ + | salt_size | Salt is used to thwart dictionary and rainbow | + | | attacks on passwords. The default value is 8 | + | | bytes. | + +------------------+-----------------------------------------------+ + | block_size | *(scrypt only)* Memory-cost (r). The default | + | | value is 8. | + +------------------+-----------------------------------------------+ + | parallelization | *(scrypt only)* CPU-cost (p). The default | + | | value is 1. | + +------------------+-----------------------------------------------+ + + key_params (DER object or None): + The ``parameters`` field to use in the ``AlgorithmIdentifier`` + SEQUENCE. If ``None``, no ``parameters`` field will be added. + By default, the ASN.1 type ``NULL`` is used. + + randfunc (callable): + Random number generation function; it should accept a single integer + N and return a string of random data, N bytes long. + If not specified, a new RNG will be instantiated + from :mod:`Crypto.Random`. + + Return: + The PKCS#8-wrapped private key (possibly encrypted), as a byte string. 
+ """ + + # + # PrivateKeyInfo ::= SEQUENCE { + # version Version, + # privateKeyAlgorithm PrivateKeyAlgorithmIdentifier, + # privateKey PrivateKey, + # attributes [0] IMPLICIT Attributes OPTIONAL + # } + # + if key_params is None: + algorithm = DerSequence([DerObjectId(key_oid)]) + else: + algorithm = DerSequence([DerObjectId(key_oid), key_params]) + + pk_info = DerSequence([ + 0, + algorithm, + DerOctetString(private_key) + ]) + pk_info_der = pk_info.encode() + + if passphrase is None: + return pk_info_der + + if not passphrase: + raise ValueError("Empty passphrase") + + # Encryption with PBES2 + passphrase = tobytes(passphrase) + if protection is None: + protection = 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC' + return PBES2.encrypt(pk_info_der, passphrase, + protection, prot_params, randfunc) + + +def unwrap(p8_private_key, passphrase=None): + """Unwrap a private key from a PKCS#8 blob (clear or encrypted). + + Args: + p8_private_key (byte string): + The private key wrapped into a PKCS#8 blob, DER encoded. + passphrase (byte string or string): + The passphrase to use to decrypt the blob (if it is encrypted). + + Return: + A tuple containing + + #. the algorithm identifier of the wrapped key (OID, dotted string) + #. the private key (byte string, DER encoded) + #. the associated parameters (byte string, DER encoded) or ``None`` + + Raises: + ValueError : if decoding fails + """ + + if passphrase: + passphrase = tobytes(passphrase) + + found = False + try: + p8_private_key = PBES1.decrypt(p8_private_key, passphrase) + found = True + except PbesError as e: + error_str = "PBES1[%s]" % str(e) + except ValueError: + error_str = "PBES1[Invalid]" + + if not found: + try: + p8_private_key = PBES2.decrypt(p8_private_key, passphrase) + found = True + except PbesError as e: + error_str += ",PBES2[%s]" % str(e) + except ValueError: + error_str += ",PBES2[Invalid]" + + if not found: + raise ValueError("Error decoding PKCS#8 (%s)" % error_str) + + pk_info = DerSequence().decode(p8_private_key, nr_elements=(2, 3, 4, 5)) + if len(pk_info) == 2 and not passphrase: + raise ValueError("Not a valid clear PKCS#8 structure " + "(maybe it is encrypted?)") + + # RFC5208, PKCS#8, version is v1(0) + # + # PrivateKeyInfo ::= SEQUENCE { + # version Version, + # privateKeyAlgorithm PrivateKeyAlgorithmIdentifier, + # privateKey PrivateKey, + # attributes [0] IMPLICIT Attributes OPTIONAL + # } + # + # RFC5915, Asymmetric Key Package, version is v2(1) + # + # OneAsymmetricKey ::= SEQUENCE { + # version Version, + # privateKeyAlgorithm PrivateKeyAlgorithmIdentifier, + # privateKey PrivateKey, + # attributes [0] Attributes OPTIONAL, + # ..., + # [[2: publicKey [1] PublicKey OPTIONAL ]], + # ... 
+ # } + + if pk_info[0] == 0: + if len(pk_info) not in (3, 4): + raise ValueError("Not a valid PrivateKeyInfo SEQUENCE") + elif pk_info[0] == 1: + if len(pk_info) not in (3, 4, 5): + raise ValueError("Not a valid PrivateKeyInfo SEQUENCE") + else: + raise ValueError("Not a valid PrivateKeyInfo SEQUENCE") + + algo = DerSequence().decode(pk_info[1], nr_elements=(1, 2)) + algo_oid = DerObjectId().decode(algo[0]).value + if len(algo) == 1: + algo_params = None + else: + try: + DerNull().decode(algo[1]) + algo_params = None + except: + algo_params = algo[1] + + # PrivateKey ::= OCTET STRING + private_key = DerOctetString().decode(pk_info[2]).payload + + # We ignore attributes and (for v2 only) publickey + + return (algo_oid, private_key, algo_params) diff --git a/env/lib/python3.8/site-packages/Crypto/IO/PKCS8.pyi b/env/lib/python3.8/site-packages/Crypto/IO/PKCS8.pyi new file mode 100644 index 00000000..2fed1b74 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/IO/PKCS8.pyi @@ -0,0 +1,14 @@ +from typing import Dict, Tuple, Optional, Union, Callable + +from Crypto.Util.asn1 import DerObject + +def wrap(private_key: bytes, + key_oid: str, + passphrase: Union[bytes, str] = ..., + protection: str = ..., + prot_params: Dict = ..., + key_params: Optional[DerObject] = ..., + randfunc: Optional[Callable[[int],str]] = ...) -> bytes: ... + + +def unwrap(p8_private_key: bytes, passphrase: Optional[Union[bytes, str]] = ...) -> Tuple[str, bytes, Optional[bytes]]: ... diff --git a/env/lib/python3.8/site-packages/Crypto/IO/_PBES.py b/env/lib/python3.8/site-packages/Crypto/IO/_PBES.py new file mode 100644 index 00000000..a47c775e --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/IO/_PBES.py @@ -0,0 +1,435 @@ +# +# PublicKey/_PBES.py : Password-Based Encryption functions +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
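A sketch of wrapping and unwrapping a private key with the functions above; 1.2.840.113549.1.1.1 is the rsaEncryption OID, and the RSA key generation is only a convenient way to obtain a DER-encoded private key:

    from Crypto.PublicKey import RSA
    from Crypto.IO import PKCS8

    rsa_der = RSA.generate(2048).export_key(format='DER')   # PKCS#1 RSAPrivateKey
    wrapped = PKCS8.wrap(rsa_der, '1.2.840.113549.1.1.1', passphrase=b'secret')
    oid, key_der, params = PKCS8.unwrap(wrapped, passphrase=b'secret')
    assert oid == '1.2.840.113549.1.1.1' and key_der == rsa_der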
+# =================================================================== + +from Crypto import Random +from Crypto.Util.asn1 import ( + DerSequence, DerOctetString, + DerObjectId, DerInteger, + ) + +from Crypto.Util.Padding import pad, unpad +from Crypto.Hash import MD5, SHA1, SHA224, SHA256, SHA384, SHA512 +from Crypto.Cipher import DES, ARC2, DES3, AES +from Crypto.Protocol.KDF import PBKDF1, PBKDF2, scrypt + +_OID_PBE_WITH_MD5_AND_DES_CBC = "1.2.840.113549.1.5.3" +_OID_PBE_WITH_MD5_AND_RC2_CBC = "1.2.840.113549.1.5.6" +_OID_PBE_WITH_SHA1_AND_DES_CBC = "1.2.840.113549.1.5.10" +_OID_PBE_WITH_SHA1_AND_RC2_CBC = "1.2.840.113549.1.5.11" + +_OID_PBES2 = "1.2.840.113549.1.5.13" + +_OID_PBKDF2 = "1.2.840.113549.1.5.12" +_OID_SCRYPT = "1.3.6.1.4.1.11591.4.11" + +_OID_HMAC_SHA1 = "1.2.840.113549.2.7" +_OID_HMAC_SHA224 = "1.2.840.113549.2.8" +_OID_HMAC_SHA256 = "1.2.840.113549.2.9" +_OID_HMAC_SHA384 = "1.2.840.113549.2.10" +_OID_HMAC_SHA512 = "1.2.840.113549.2.11" + +_OID_DES_EDE3_CBC = "1.2.840.113549.3.7" +_OID_AES128_CBC = "2.16.840.1.101.3.4.1.2" +_OID_AES192_CBC = "2.16.840.1.101.3.4.1.22" +_OID_AES256_CBC = "2.16.840.1.101.3.4.1.42" + + +class PbesError(ValueError): + pass + +# These are the ASN.1 definitions used by the PBES1/2 logic: +# +# EncryptedPrivateKeyInfo ::= SEQUENCE { +# encryptionAlgorithm EncryptionAlgorithmIdentifier, +# encryptedData EncryptedData +# } +# +# EncryptionAlgorithmIdentifier ::= AlgorithmIdentifier +# +# EncryptedData ::= OCTET STRING +# +# AlgorithmIdentifier ::= SEQUENCE { +# algorithm OBJECT IDENTIFIER, +# parameters ANY DEFINED BY algorithm OPTIONAL +# } +# +# PBEParameter ::= SEQUENCE { +# salt OCTET STRING (SIZE(8)), +# iterationCount INTEGER +# } +# +# PBES2-params ::= SEQUENCE { +# keyDerivationFunc AlgorithmIdentifier {{PBES2-KDFs}}, +# encryptionScheme AlgorithmIdentifier {{PBES2-Encs}} +# } +# +# PBKDF2-params ::= SEQUENCE { +# salt CHOICE { +# specified OCTET STRING, +# otherSource AlgorithmIdentifier {{PBKDF2-SaltSources}} +# }, +# iterationCount INTEGER (1..MAX), +# keyLength INTEGER (1..MAX) OPTIONAL, +# prf AlgorithmIdentifier {{PBKDF2-PRFs}} DEFAULT algid-hmacWithSHA1 +# } +# +# scrypt-params ::= SEQUENCE { +# salt OCTET STRING, +# costParameter INTEGER (1..MAX), +# blockSize INTEGER (1..MAX), +# parallelizationParameter INTEGER (1..MAX), +# keyLength INTEGER (1..MAX) OPTIONAL +# } + +class PBES1(object): + """Deprecated encryption scheme with password-based key derivation + (originally defined in PKCS#5 v1.5, but still present in `v2.0`__). + + .. __: http://www.ietf.org/rfc/rfc2898.txt + """ + + @staticmethod + def decrypt(data, passphrase): + """Decrypt a piece of data using a passphrase and *PBES1*. + + The algorithm to use is automatically detected. + + :Parameters: + data : byte string + The piece of data to decrypt. + passphrase : byte string + The passphrase to use for decrypting the data. + :Returns: + The decrypted data, as a binary string. 
+ """ + + enc_private_key_info = DerSequence().decode(data) + encrypted_algorithm = DerSequence().decode(enc_private_key_info[0]) + encrypted_data = DerOctetString().decode(enc_private_key_info[1]).payload + + pbe_oid = DerObjectId().decode(encrypted_algorithm[0]).value + cipher_params = {} + if pbe_oid == _OID_PBE_WITH_MD5_AND_DES_CBC: + # PBE_MD5_DES_CBC + hashmod = MD5 + ciphermod = DES + elif pbe_oid == _OID_PBE_WITH_MD5_AND_RC2_CBC: + # PBE_MD5_RC2_CBC + hashmod = MD5 + ciphermod = ARC2 + cipher_params['effective_keylen'] = 64 + elif pbe_oid == _OID_PBE_WITH_SHA1_AND_DES_CBC: + # PBE_SHA1_DES_CBC + hashmod = SHA1 + ciphermod = DES + elif pbe_oid == _OID_PBE_WITH_SHA1_AND_RC2_CBC: + # PBE_SHA1_RC2_CBC + hashmod = SHA1 + ciphermod = ARC2 + cipher_params['effective_keylen'] = 64 + else: + raise PbesError("Unknown OID for PBES1") + + pbe_params = DerSequence().decode(encrypted_algorithm[1], nr_elements=2) + salt = DerOctetString().decode(pbe_params[0]).payload + iterations = pbe_params[1] + + key_iv = PBKDF1(passphrase, salt, 16, iterations, hashmod) + key, iv = key_iv[:8], key_iv[8:] + + cipher = ciphermod.new(key, ciphermod.MODE_CBC, iv, **cipher_params) + pt = cipher.decrypt(encrypted_data) + return unpad(pt, cipher.block_size) + + +class PBES2(object): + """Encryption scheme with password-based key derivation + (defined in `PKCS#5 v2.0`__). + + .. __: http://www.ietf.org/rfc/rfc2898.txt.""" + + @staticmethod + def encrypt(data, passphrase, protection, prot_params=None, randfunc=None): + """Encrypt a piece of data using a passphrase and *PBES2*. + + :Parameters: + data : byte string + The piece of data to encrypt. + passphrase : byte string + The passphrase to use for encrypting the data. + protection : string + The identifier of the encryption algorithm to use. + The default value is '``PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC``'. + prot_params : dictionary + Parameters of the protection algorithm. + + +------------------+-----------------------------------------------+ + | Key | Description | + +==================+===============================================+ + | iteration_count | The KDF algorithm is repeated several times to| + | | slow down brute force attacks on passwords | + | | (called *N* or CPU/memory cost in scrypt). | + | | | + | | The default value for PBKDF2 is 1 000. | + | | The default value for scrypt is 16 384. | + +------------------+-----------------------------------------------+ + | salt_size | Salt is used to thwart dictionary and rainbow | + | | attacks on passwords. The default value is 8 | + | | bytes. | + +------------------+-----------------------------------------------+ + | block_size | *(scrypt only)* Memory-cost (r). The default | + | | value is 8. | + +------------------+-----------------------------------------------+ + | parallelization | *(scrypt only)* CPU-cost (p). The default | + | | value is 1. | + +------------------+-----------------------------------------------+ + + + randfunc : callable + Random number generation function; it should accept + a single integer N and return a string of random data, + N bytes long. If not specified, a new RNG will be + instantiated from ``Crypto.Random``. + + :Returns: + The encrypted data, as a binary string. 
+ """ + + if prot_params is None: + prot_params = {} + + if randfunc is None: + randfunc = Random.new().read + + if protection == 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC': + key_size = 24 + module = DES3 + cipher_mode = DES3.MODE_CBC + enc_oid = _OID_DES_EDE3_CBC + elif protection in ('PBKDF2WithHMAC-SHA1AndAES128-CBC', + 'scryptAndAES128-CBC'): + key_size = 16 + module = AES + cipher_mode = AES.MODE_CBC + enc_oid = _OID_AES128_CBC + elif protection in ('PBKDF2WithHMAC-SHA1AndAES192-CBC', + 'scryptAndAES192-CBC'): + key_size = 24 + module = AES + cipher_mode = AES.MODE_CBC + enc_oid = _OID_AES192_CBC + elif protection in ('PBKDF2WithHMAC-SHA1AndAES256-CBC', + 'scryptAndAES256-CBC'): + key_size = 32 + module = AES + cipher_mode = AES.MODE_CBC + enc_oid = _OID_AES256_CBC + else: + raise ValueError("Unknown PBES2 mode") + + # Get random data + iv = randfunc(module.block_size) + salt = randfunc(prot_params.get("salt_size", 8)) + + # Derive key from password + if protection.startswith('PBKDF2'): + count = prot_params.get("iteration_count", 1000) + key = PBKDF2(passphrase, salt, key_size, count) + kdf_info = DerSequence([ + DerObjectId(_OID_PBKDF2), # PBKDF2 + DerSequence([ + DerOctetString(salt), + DerInteger(count) + ]) + ]) + else: + # It must be scrypt + count = prot_params.get("iteration_count", 16384) + scrypt_r = prot_params.get('block_size', 8) + scrypt_p = prot_params.get('parallelization', 1) + key = scrypt(passphrase, salt, key_size, + count, scrypt_r, scrypt_p) + kdf_info = DerSequence([ + DerObjectId(_OID_SCRYPT), # scrypt + DerSequence([ + DerOctetString(salt), + DerInteger(count), + DerInteger(scrypt_r), + DerInteger(scrypt_p) + ]) + ]) + + # Create cipher and use it + cipher = module.new(key, cipher_mode, iv) + encrypted_data = cipher.encrypt(pad(data, cipher.block_size)) + enc_info = DerSequence([ + DerObjectId(enc_oid), + DerOctetString(iv) + ]) + + # Result + enc_private_key_info = DerSequence([ + # encryptionAlgorithm + DerSequence([ + DerObjectId(_OID_PBES2), + DerSequence([ + kdf_info, + enc_info + ]), + ]), + DerOctetString(encrypted_data) + ]) + return enc_private_key_info.encode() + + @staticmethod + def decrypt(data, passphrase): + """Decrypt a piece of data using a passphrase and *PBES2*. + + The algorithm to use is automatically detected. + + :Parameters: + data : byte string + The piece of data to decrypt. + passphrase : byte string + The passphrase to use for decrypting the data. + :Returns: + The decrypted data, as a binary string. 
+ """ + + enc_private_key_info = DerSequence().decode(data, nr_elements=2) + enc_algo = DerSequence().decode(enc_private_key_info[0]) + encrypted_data = DerOctetString().decode(enc_private_key_info[1]).payload + + pbe_oid = DerObjectId().decode(enc_algo[0]).value + if pbe_oid != _OID_PBES2: + raise PbesError("Not a PBES2 object") + + pbes2_params = DerSequence().decode(enc_algo[1], nr_elements=2) + + ### Key Derivation Function selection + kdf_info = DerSequence().decode(pbes2_params[0], nr_elements=2) + kdf_oid = DerObjectId().decode(kdf_info[0]).value + + kdf_key_length = None + + # We only support PBKDF2 or scrypt + if kdf_oid == _OID_PBKDF2: + + pbkdf2_params = DerSequence().decode(kdf_info[1], nr_elements=(2, 3, 4)) + salt = DerOctetString().decode(pbkdf2_params[0]).payload + iteration_count = pbkdf2_params[1] + + left = len(pbkdf2_params) - 2 + idx = 2 + + if left > 0: + try: + kdf_key_length = pbkdf2_params[idx] - 0 + left -= 1 + idx += 1 + except TypeError: + pass + + # Default is HMAC-SHA1 + pbkdf2_prf_oid = "1.2.840.113549.2.7" + if left > 0: + pbkdf2_prf_algo_id = DerSequence().decode(pbkdf2_params[idx]) + pbkdf2_prf_oid = DerObjectId().decode(pbkdf2_prf_algo_id[0]).value + + elif kdf_oid == _OID_SCRYPT: + + scrypt_params = DerSequence().decode(kdf_info[1], nr_elements=(4, 5)) + salt = DerOctetString().decode(scrypt_params[0]).payload + iteration_count, scrypt_r, scrypt_p = [scrypt_params[x] + for x in (1, 2, 3)] + if len(scrypt_params) > 4: + kdf_key_length = scrypt_params[4] + else: + kdf_key_length = None + else: + raise PbesError("Unsupported PBES2 KDF") + + ### Cipher selection + enc_info = DerSequence().decode(pbes2_params[1]) + enc_oid = DerObjectId().decode(enc_info[0]).value + + if enc_oid == _OID_DES_EDE3_CBC: + # DES_EDE3_CBC + ciphermod = DES3 + key_size = 24 + elif enc_oid == _OID_AES128_CBC: + # AES128_CBC + ciphermod = AES + key_size = 16 + elif enc_oid == _OID_AES192_CBC: + # AES192_CBC + ciphermod = AES + key_size = 24 + elif enc_oid == _OID_AES256_CBC: + # AES256_CBC + ciphermod = AES + key_size = 32 + else: + raise PbesError("Unsupported PBES2 cipher") + + if kdf_key_length and kdf_key_length != key_size: + raise PbesError("Mismatch between PBES2 KDF parameters" + " and selected cipher") + + IV = DerOctetString().decode(enc_info[1]).payload + + # Create cipher + if kdf_oid == _OID_PBKDF2: + if pbkdf2_prf_oid == _OID_HMAC_SHA1: + hmac_hash_module = SHA1 + elif pbkdf2_prf_oid == _OID_HMAC_SHA224: + hmac_hash_module = SHA224 + elif pbkdf2_prf_oid == _OID_HMAC_SHA256: + hmac_hash_module = SHA256 + elif pbkdf2_prf_oid == _OID_HMAC_SHA384: + hmac_hash_module = SHA384 + elif pbkdf2_prf_oid == _OID_HMAC_SHA512: + hmac_hash_module = SHA512 + else: + raise PbesError("Unsupported HMAC %s" % pbkdf2_prf_oid) + + key = PBKDF2(passphrase, salt, key_size, iteration_count, + hmac_hash_module=hmac_hash_module) + else: + key = scrypt(passphrase, salt, key_size, iteration_count, + scrypt_r, scrypt_p) + cipher = ciphermod.new(key, ciphermod.MODE_CBC, IV) + + # Decrypt data + pt = cipher.decrypt(encrypted_data) + return unpad(pt, cipher.block_size) diff --git a/env/lib/python3.8/site-packages/Crypto/IO/_PBES.pyi b/env/lib/python3.8/site-packages/Crypto/IO/_PBES.pyi new file mode 100644 index 00000000..a8a34ced --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/IO/_PBES.pyi @@ -0,0 +1,19 @@ +from typing import Dict, Optional, Callable + +class PbesError(ValueError): + ... + +class PBES1(object): + @staticmethod + def decrypt(data: bytes, passphrase: bytes) -> bytes: ... 
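_PBES is an internal helper (PKCS8 above is the intended entry point), but the PBES2 scheme implemented above can be exercised directly; a sketch using the scrypt-based protection mode:

    from Crypto.IO._PBES import PBES2

    ct = PBES2.encrypt(b"secret payload", b"passphrase", 'scryptAndAES256-CBC')
    assert PBES2.decrypt(ct, b"passphrase") == b"secret payload"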
+ +class PBES2(object): + @staticmethod + def encrypt(data: bytes, + passphrase: bytes, + protection: str, + prot_params: Optional[Dict] = ..., + randfunc: Optional[Callable[[int],bytes]] = ...) -> bytes: ... + + @staticmethod + def decrypt(data:bytes, passphrase: bytes) -> bytes: ... diff --git a/env/lib/python3.8/site-packages/Crypto/IO/__init__.py b/env/lib/python3.8/site-packages/Crypto/IO/__init__.py new file mode 100644 index 00000000..85a0d0b1 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/IO/__init__.py @@ -0,0 +1,31 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = ['PEM', 'PKCS8'] diff --git a/env/lib/python3.8/site-packages/Crypto/Math/Numbers.py b/env/lib/python3.8/site-packages/Crypto/Math/Numbers.py new file mode 100644 index 00000000..c2c4483d --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Math/Numbers.py @@ -0,0 +1,42 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = ["Integer"] + +try: + from Crypto.Math._IntegerGMP import IntegerGMP as Integer + from Crypto.Math._IntegerGMP import implementation as _implementation +except (ImportError, OSError, AttributeError): + try: + from Crypto.Math._IntegerCustom import IntegerCustom as Integer + from Crypto.Math._IntegerCustom import implementation as _implementation + except (ImportError, OSError): + from Crypto.Math._IntegerNative import IntegerNative as Integer + _implementation = {} diff --git a/env/lib/python3.8/site-packages/Crypto/Math/Numbers.pyi b/env/lib/python3.8/site-packages/Crypto/Math/Numbers.pyi new file mode 100644 index 00000000..126268cb --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Math/Numbers.pyi @@ -0,0 +1,4 @@ +from Crypto.Math._IntegerBase import IntegerBase + +class Integer(IntegerBase): + pass diff --git a/env/lib/python3.8/site-packages/Crypto/Math/Primality.py b/env/lib/python3.8/site-packages/Crypto/Math/Primality.py new file mode 100644 index 00000000..884c4185 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Math/Primality.py @@ -0,0 +1,369 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Functions to create and test prime numbers. 
+ +:undocumented: __package__ +""" + +from Crypto import Random +from Crypto.Math.Numbers import Integer + +from Crypto.Util.py3compat import iter_range + +COMPOSITE = 0 +PROBABLY_PRIME = 1 + + +def miller_rabin_test(candidate, iterations, randfunc=None): + """Perform a Miller-Rabin primality test on an integer. + + The test is specified in Section C.3.1 of `FIPS PUB 186-4`__. + + :Parameters: + candidate : integer + The number to test for primality. + iterations : integer + The maximum number of iterations to perform before + declaring a candidate a probable prime. + randfunc : callable + An RNG function where bases are taken from. + + :Returns: + ``Primality.COMPOSITE`` or ``Primality.PROBABLY_PRIME``. + + .. __: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf + """ + + if not isinstance(candidate, Integer): + candidate = Integer(candidate) + + if candidate in (1, 2, 3, 5): + return PROBABLY_PRIME + + if candidate.is_even(): + return COMPOSITE + + one = Integer(1) + minus_one = Integer(candidate - 1) + + if randfunc is None: + randfunc = Random.new().read + + # Step 1 and 2 + m = Integer(minus_one) + a = 0 + while m.is_even(): + m >>= 1 + a += 1 + + # Skip step 3 + + # Step 4 + for i in iter_range(iterations): + + # Step 4.1-2 + base = 1 + while base in (one, minus_one): + base = Integer.random_range(min_inclusive=2, + max_inclusive=candidate - 2, + randfunc=randfunc) + assert(2 <= base <= candidate - 2) + + # Step 4.3-4.4 + z = pow(base, m, candidate) + if z in (one, minus_one): + continue + + # Step 4.5 + for j in iter_range(1, a): + z = pow(z, 2, candidate) + if z == minus_one: + break + if z == one: + return COMPOSITE + else: + return COMPOSITE + + # Step 5 + return PROBABLY_PRIME + + +def lucas_test(candidate): + """Perform a Lucas primality test on an integer. + + The test is specified in Section C.3.3 of `FIPS PUB 186-4`__. + + :Parameters: + candidate : integer + The number to test for primality. + + :Returns: + ``Primality.COMPOSITE`` or ``Primality.PROBABLY_PRIME``. + + .. __: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf + """ + + if not isinstance(candidate, Integer): + candidate = Integer(candidate) + + # Step 1 + if candidate in (1, 2, 3, 5): + return PROBABLY_PRIME + if candidate.is_even() or candidate.is_perfect_square(): + return COMPOSITE + + # Step 2 + def alternate(): + value = 5 + while True: + yield value + if value > 0: + value += 2 + else: + value -= 2 + value = -value + + for D in alternate(): + if candidate in (D, -D): + continue + js = Integer.jacobi_symbol(D, candidate) + if js == 0: + return COMPOSITE + if js == -1: + break + # Found D. 
+    # P=1 and Q=(1-D)/4 (note that Q is guaranteed to be an integer)
+
+    # Step 3
+    # This is \delta(n) = n - jacobi(D/n)
+    K = candidate + 1
+    # Step 4
+    r = K.size_in_bits() - 1
+    # Step 5
+    # U_1=1 and V_1=P
+    U_i = Integer(1)
+    V_i = Integer(1)
+    U_temp = Integer(0)
+    V_temp = Integer(0)
+    # Step 6
+    for i in iter_range(r - 1, -1, -1):
+        # Square
+        # U_temp = U_i * V_i % candidate
+        U_temp.set(U_i)
+        U_temp *= V_i
+        U_temp %= candidate
+        # V_temp = (((V_i ** 2 + (U_i ** 2 * D)) * K) >> 1) % candidate
+        V_temp.set(U_i)
+        V_temp *= U_i
+        V_temp *= D
+        V_temp.multiply_accumulate(V_i, V_i)
+        if V_temp.is_odd():
+            V_temp += candidate
+        V_temp >>= 1
+        V_temp %= candidate
+        # Multiply
+        if K.get_bit(i):
+            # U_i = (((U_temp + V_temp) * K) >> 1) % candidate
+            U_i.set(U_temp)
+            U_i += V_temp
+            if U_i.is_odd():
+                U_i += candidate
+            U_i >>= 1
+            U_i %= candidate
+            # V_i = (((V_temp + U_temp * D) * K) >> 1) % candidate
+            V_i.set(V_temp)
+            V_i.multiply_accumulate(U_temp, D)
+            if V_i.is_odd():
+                V_i += candidate
+            V_i >>= 1
+            V_i %= candidate
+        else:
+            U_i.set(U_temp)
+            V_i.set(V_temp)
+    # Step 7
+    if U_i == 0:
+        return PROBABLY_PRIME
+    return COMPOSITE
+
+
+from Crypto.Util.number import sieve_base as _sieve_base_large
+## The optimal number of small primes to use for the sieve
+## is probably dependent on the platform and the candidate size
+_sieve_base = set(_sieve_base_large[:100])
+
+
+def test_probable_prime(candidate, randfunc=None):
+    """Test if a number is prime.
+
+    A number is qualified as prime if it passes a certain
+    number of Miller-Rabin tests (dependent on the size
+    of the number, but such that the probability of a false
+    positive is less than 10^-30) and a single Lucas test.
+
+    For instance, a 1024-bit candidate will need to pass
+    4 Miller-Rabin tests.
+
+    :Parameters:
+      candidate : integer
+        The number to test for primality.
+      randfunc : callable
+        The routine to draw random bytes from to select Miller-Rabin bases.
+    :Returns:
+      ``PROBABLY_PRIME`` if the number is prime with very high probability.
+      ``COMPOSITE`` if the number is a composite.
+      For efficiency reasons, ``COMPOSITE`` is also returned for small primes.
+    """
+
+    if randfunc is None:
+        randfunc = Random.new().read
+
+    if not isinstance(candidate, Integer):
+        candidate = Integer(candidate)
+
+    # First, check trial division by the smallest primes
+    if int(candidate) in _sieve_base:
+        return PROBABLY_PRIME
+    try:
+        # A bare map() would be lazy in Python 3 and never run the
+        # divisibility checks, so iterate explicitly.
+        for small_prime in _sieve_base:
+            candidate.fail_if_divisible_by(small_prime)
+    except ValueError:
+        return COMPOSITE
+
+    # These are the number of Miller-Rabin iterations s.t. p(k, t) < 1E-30,
+    # with p(k, t) being the probability that a randomly chosen k-bit number
+    # is composite but still survives t MR iterations.
+    mr_ranges = ((220, 30), (280, 20), (390, 15), (512, 10),
+                 (620, 7), (740, 6), (890, 5), (1200, 4),
+                 (1700, 3), (3700, 2))
+
+    bit_size = candidate.size_in_bits()
+    try:
+        mr_iterations = list(filter(lambda x: bit_size < x[0],
+                                    mr_ranges))[0][1]
+    except IndexError:
+        mr_iterations = 1
+
+    if miller_rabin_test(candidate, mr_iterations,
+                         randfunc=randfunc) == COMPOSITE:
+        return COMPOSITE
+    if lucas_test(candidate) == COMPOSITE:
+        return COMPOSITE
+    return PROBABLY_PRIME
+
+
+def generate_probable_prime(**kwargs):
+    """Generate a random probable prime.
+
+    The prime will not have any specific properties
+    (e.g. it will not be a *strong* prime).
+
+    Random numbers are evaluated for primality until one
+    passes all tests, consisting of a certain number of
+    Miller-Rabin tests with random bases followed by
+    a single Lucas test.
+
+    The number of Miller-Rabin iterations is chosen such that
+    the probability that the output number is a non-prime is
+    less than 1E-30 (roughly 2^{-100}).
+
+    This approach is compliant with `FIPS PUB 186-4`__.
+
+    :Keywords:
+      exact_bits : integer
+        The desired size in bits of the probable prime.
+        It must be at least 160.
+      randfunc : callable
+        An RNG function where candidate primes are taken from.
+      prime_filter : callable
+        A function that takes an Integer as parameter and returns
+        True if the number can be passed to further primality tests,
+        False if it should be immediately discarded.
+
+    :Return:
+        A probable prime in the range 2^exact_bits > p > 2^(exact_bits-1).
+
+    .. __: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf
+    """
+
+    exact_bits = kwargs.pop("exact_bits", None)
+    randfunc = kwargs.pop("randfunc", None)
+    prime_filter = kwargs.pop("prime_filter", lambda x: True)
+    if kwargs:
+        raise ValueError("Unknown parameters: " + str(kwargs.keys()))
+
+    if exact_bits is None:
+        raise ValueError("Missing exact_bits parameter")
+    if exact_bits < 160:
+        raise ValueError("Prime number is not big enough.")
+
+    if randfunc is None:
+        randfunc = Random.new().read
+
+    result = COMPOSITE
+    while result == COMPOSITE:
+        candidate = Integer.random(exact_bits=exact_bits,
+                                   randfunc=randfunc) | 1
+        if not prime_filter(candidate):
+            continue
+        result = test_probable_prime(candidate, randfunc)
+    return candidate
+
+
+def generate_probable_safe_prime(**kwargs):
+    """Generate a random, probable safe prime.
+
+    Note this operation is much slower than generating a simple prime.
+
+    :Keywords:
+      exact_bits : integer
+        The desired size in bits of the probable safe prime.
+      randfunc : callable
+        An RNG function where candidate primes are taken from.
+
+    :Return:
+        A probable safe prime in the range
+        2^exact_bits > p > 2^(exact_bits-1).
+    """
+
+    exact_bits = kwargs.pop("exact_bits", None)
+    randfunc = kwargs.pop("randfunc", None)
+    if kwargs:
+        raise ValueError("Unknown parameters: " + str(kwargs.keys()))
+
+    if randfunc is None:
+        randfunc = Random.new().read
+
+    result = COMPOSITE
+    while result == COMPOSITE:
+        q = generate_probable_prime(exact_bits=exact_bits - 1, randfunc=randfunc)
+        candidate = q * 2 + 1
+        if candidate.size_in_bits() != exact_bits:
+            continue
+        result = test_probable_prime(candidate, randfunc=randfunc)
+    return candidate
diff --git a/env/lib/python3.8/site-packages/Crypto/Math/Primality.pyi b/env/lib/python3.8/site-packages/Crypto/Math/Primality.pyi
new file mode 100644
index 00000000..78134830
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Math/Primality.pyi
@@ -0,0 +1,18 @@
+from typing import Callable, Optional, Union, Set
+
+PrimeResult = int
+
+COMPOSITE: PrimeResult
+PROBABLY_PRIME: PrimeResult
+
+def miller_rabin_test(candidate: int, iterations: int, randfunc: Optional[Callable[[int],bytes]]=None) -> PrimeResult: ...
+def lucas_test(candidate: int) -> PrimeResult: ...
+_sieve_base: Set[int]
+def test_probable_prime(candidate: int, randfunc: Optional[Callable[[int],bytes]]=None) -> PrimeResult: ...
+def generate_probable_prime(*,
+                            exact_bits: int = ...,
+                            randfunc: Callable[[int],bytes] = ...,
+                            prime_filter: Callable[[int],bool] = ...) -> int: ...
+def generate_probable_safe_prime(*,
+                                 exact_bits: int = ...,
+                                 randfunc: Callable[[int],bytes] = ...) -> int: ...
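Taken together, the generators and the test above form a complete
generate-and-test loop. A minimal usage sketch (illustrative only; it assumes
the vendored package above is importable, and the filter lambda is a
hypothetical example):

    from Crypto.Math.Primality import (PROBABLY_PRIME,
                                       generate_probable_prime,
                                       test_probable_prime)

    # Draw a 512-bit probable prime p with p % 4 == 3 (e.g. for Blum-style
    # moduli); the prime_filter hook rejects unsuitable candidates before
    # any Miller-Rabin iteration is spent on them.
    p = generate_probable_prime(exact_bits=512,
                                prime_filter=lambda c: c % 4 == 3)
    assert test_probable_prime(p) == PROBABLY_PRIME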
diff --git a/env/lib/python3.8/site-packages/Crypto/Math/_IntegerBase.py b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerBase.py new file mode 100644 index 00000000..ec9cb478 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerBase.py @@ -0,0 +1,392 @@ +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import abc + +from Crypto.Util.py3compat import iter_range, bord, bchr, ABC + +from Crypto import Random + + +class IntegerBase(ABC): + + # Conversions + @abc.abstractmethod + def __int__(self): + pass + + @abc.abstractmethod + def __str__(self): + pass + + @abc.abstractmethod + def __repr__(self): + pass + + @abc.abstractmethod + def to_bytes(self, block_size=0, byteorder='big'): + pass + + @staticmethod + @abc.abstractmethod + def from_bytes(byte_string, byteorder='big'): + pass + + # Relations + @abc.abstractmethod + def __eq__(self, term): + pass + + @abc.abstractmethod + def __ne__(self, term): + pass + + @abc.abstractmethod + def __lt__(self, term): + pass + + @abc.abstractmethod + def __le__(self, term): + pass + + @abc.abstractmethod + def __gt__(self, term): + pass + + @abc.abstractmethod + def __ge__(self, term): + pass + + @abc.abstractmethod + def __nonzero__(self): + pass + __bool__ = __nonzero__ + + @abc.abstractmethod + def is_negative(self): + pass + + # Arithmetic operations + @abc.abstractmethod + def __add__(self, term): + pass + + @abc.abstractmethod + def __sub__(self, term): + pass + + @abc.abstractmethod + def __mul__(self, factor): + pass + + @abc.abstractmethod + def __floordiv__(self, divisor): + pass + + @abc.abstractmethod + def __mod__(self, divisor): + pass + + @abc.abstractmethod + def inplace_pow(self, exponent, modulus=None): + pass + + @abc.abstractmethod + def __pow__(self, exponent, modulus=None): + pass + + @abc.abstractmethod + def __abs__(self): + pass + + @abc.abstractmethod + def sqrt(self, modulus=None): + pass + + @abc.abstractmethod + def __iadd__(self, term): + pass + + @abc.abstractmethod + def __isub__(self, term): + pass + + @abc.abstractmethod + def __imul__(self, term): + pass + + 
+    @abc.abstractmethod
+    def __imod__(self, term):
+        pass
+
+    # Boolean/bit operations
+    @abc.abstractmethod
+    def __and__(self, term):
+        pass
+
+    @abc.abstractmethod
+    def __or__(self, term):
+        pass
+
+    @abc.abstractmethod
+    def __rshift__(self, pos):
+        pass
+
+    @abc.abstractmethod
+    def __irshift__(self, pos):
+        pass
+
+    @abc.abstractmethod
+    def __lshift__(self, pos):
+        pass
+
+    @abc.abstractmethod
+    def __ilshift__(self, pos):
+        pass
+
+    @abc.abstractmethod
+    def get_bit(self, n):
+        pass
+
+    # Extra
+    @abc.abstractmethod
+    def is_odd(self):
+        pass
+
+    @abc.abstractmethod
+    def is_even(self):
+        pass
+
+    @abc.abstractmethod
+    def size_in_bits(self):
+        pass
+
+    @abc.abstractmethod
+    def size_in_bytes(self):
+        pass
+
+    @abc.abstractmethod
+    def is_perfect_square(self):
+        pass
+
+    @abc.abstractmethod
+    def fail_if_divisible_by(self, small_prime):
+        pass
+
+    @abc.abstractmethod
+    def multiply_accumulate(self, a, b):
+        pass
+
+    @abc.abstractmethod
+    def set(self, source):
+        pass
+
+    @abc.abstractmethod
+    def inplace_inverse(self, modulus):
+        pass
+
+    @abc.abstractmethod
+    def inverse(self, modulus):
+        pass
+
+    @abc.abstractmethod
+    def gcd(self, term):
+        pass
+
+    @abc.abstractmethod
+    def lcm(self, term):
+        pass
+
+    @staticmethod
+    @abc.abstractmethod
+    def jacobi_symbol(a, n):
+        pass
+
+    @staticmethod
+    def _tonelli_shanks(n, p):
+        """Tonelli-Shanks algorithm for computing the square root
+        of n modulo a prime p.
+
+        n must be in the range [0..p-1].
+        p must be odd.
+
+        The return value r is the square root of n modulo p. If non-zero,
+        another solution will also exist (p-r).
+
+        Note we cannot assume that p is really a prime: if it's not,
+        we can either raise an exception or return the correct value.
+        """
+
+        # See https://rosettacode.org/wiki/Tonelli-Shanks_algorithm
+
+        if n in (0, 1):
+            return n
+
+        if p % 4 == 3:
+            root = pow(n, (p + 1) // 4, p)
+            if pow(root, 2, p) != n:
+                raise ValueError("Cannot compute square root")
+            return root
+
+        s = 1
+        q = (p - 1) // 2
+        while not (q & 1):
+            s += 1
+            q >>= 1
+
+        z = n.__class__(2)
+        while True:
+            euler = pow(z, (p - 1) // 2, p)
+            if euler == 1:
+                z += 1
+                continue
+            if euler == p - 1:
+                break
+            # Most probably p is not a prime
+            raise ValueError("Cannot compute square root")
+
+        m = s
+        c = pow(z, q, p)
+        t = pow(n, q, p)
+        r = pow(n, (q + 1) // 2, p)
+
+        while t != 1:
+            for i in iter_range(0, m):
+                if pow(t, 2**i, p) == 1:
+                    break
+            if i == m:
+                raise ValueError("Cannot compute square root of %d mod %d" % (n, p))
+            b = pow(c, 2**(m - i - 1), p)
+            m = i
+            c = b**2 % p
+            t = (t * b**2) % p
+            r = (r * b) % p
+
+        if pow(r, 2, p) != n:
+            raise ValueError("Cannot compute square root")
+
+        return r
+
+    @classmethod
+    def random(cls, **kwargs):
+        """Generate a random natural integer of a certain size.
+
+        :Keywords:
+          exact_bits : positive integer
+            The length in bits of the resulting random Integer number.
+            The number is guaranteed to fulfil the relation:
+
+                2^bits > result >= 2^(bits - 1)
+
+          max_bits : positive integer
+            The maximum length in bits of the resulting random Integer number.
+            The number is guaranteed to fulfil the relation:
+
+                2^bits > result >= 0
+
+          randfunc : callable
+            A function that returns a random byte string. The length of the
+            byte string is passed as parameter. Optional.
+            If not provided (or ``None``), randomness is read from the system RNG.
+
+        :Return: an Integer object
+        """
+
+        exact_bits = kwargs.pop("exact_bits", None)
+        max_bits = kwargs.pop("max_bits", None)
+        randfunc = kwargs.pop("randfunc", None)
+
+        if randfunc is None:
+            randfunc = Random.new().read
+
+        if exact_bits is None and max_bits is None:
+            raise ValueError("Either 'exact_bits' or 'max_bits' must be specified")
+
+        if exact_bits is not None and max_bits is not None:
+            raise ValueError("'exact_bits' and 'max_bits' are mutually exclusive")
+
+        bits = exact_bits or max_bits
+        bytes_needed = ((bits - 1) // 8) + 1
+        significant_bits_msb = 8 - (bytes_needed * 8 - bits)
+        msb = bord(randfunc(1)[0])
+        if exact_bits is not None:
+            msb |= 1 << (significant_bits_msb - 1)
+        msb &= (1 << significant_bits_msb) - 1
+
+        return cls.from_bytes(bchr(msb) + randfunc(bytes_needed - 1))
+
+    @classmethod
+    def random_range(cls, **kwargs):
+        """Generate a random integer within a given interval.
+
+        :Keywords:
+          min_inclusive : integer
+            The lower end of the interval (inclusive).
+          max_inclusive : integer
+            The higher end of the interval (inclusive).
+          max_exclusive : integer
+            The higher end of the interval (exclusive).
+          randfunc : callable
+            A function that returns a random byte string. The length of the
+            byte string is passed as parameter. Optional.
+            If not provided (or ``None``), randomness is read from the system RNG.
+        :Returns:
+            An Integer randomly taken in the given interval.
+        """
+
+        min_inclusive = kwargs.pop("min_inclusive", None)
+        max_inclusive = kwargs.pop("max_inclusive", None)
+        max_exclusive = kwargs.pop("max_exclusive", None)
+        randfunc = kwargs.pop("randfunc", None)
+
+        if kwargs:
+            raise ValueError("Unknown keywords: " + str(kwargs.keys()))
+        if None not in (max_inclusive, max_exclusive):
+            raise ValueError("max_inclusive and max_exclusive cannot be both"
+                             " specified")
+        if max_exclusive is not None:
+            max_inclusive = max_exclusive - 1
+        if None in (min_inclusive, max_inclusive):
+            raise ValueError("Missing keyword to identify the interval")
+
+        if randfunc is None:
+            randfunc = Random.new().read
+
+        norm_maximum = max_inclusive - min_inclusive
+        bits_needed = cls(norm_maximum).size_in_bits()
+
+        norm_candidate = -1
+        while not 0 <= norm_candidate <= norm_maximum:
+            norm_candidate = cls.random(
+                    max_bits=bits_needed,
+                    randfunc=randfunc
+                    )
+        return norm_candidate + min_inclusive
+
diff --git a/env/lib/python3.8/site-packages/Crypto/Math/_IntegerBase.pyi b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerBase.pyi
new file mode 100644
index 00000000..362c5120
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerBase.pyi
@@ -0,0 +1,61 @@
+from typing import Optional, Union, Callable
+
+RandFunc = Callable[[int],int]
+
+class IntegerBase:
+
+    def __int__(self) -> int: ...
+    def __str__(self) -> str: ...
+    def __repr__(self) -> str: ...
+    def to_bytes(self, block_size: Optional[int]=0, byteorder: str= ...) -> bytes: ...
+    @staticmethod
+    def from_bytes(byte_string: bytes, byteorder: Optional[str] = ...) -> IntegerBase: ...
+    def __eq__(self, term: object) -> bool: ...
+    def __ne__(self, term: object) -> bool: ...
+    def __lt__(self, term: Union[IntegerBase, int]) -> bool: ...
+    def __le__(self, term: Union[IntegerBase, int]) -> bool: ...
+    def __gt__(self, term: Union[IntegerBase, int]) -> bool: ...
+    def __ge__(self, term: Union[IntegerBase, int]) -> bool: ...
+    def __nonzero__(self) -> bool: ...
+    def is_negative(self) -> bool: ...
+    def __add__(self, term: Union[IntegerBase, int]) -> IntegerBase: ...
+ def __sub__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __mul__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __floordiv__(self, divisor: Union[IntegerBase, int]) -> IntegerBase: ... + def __mod__(self, divisor: Union[IntegerBase, int]) -> IntegerBase: ... + def inplace_pow(self, exponent: int, modulus: Optional[Union[IntegerBase, int]]=None) -> IntegerBase: ... + def __pow__(self, exponent: int, modulus: Optional[int]) -> IntegerBase: ... + def __abs__(self) -> IntegerBase: ... + def sqrt(self, modulus: Optional[int]) -> IntegerBase: ... + def __iadd__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __isub__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __imul__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __imod__(self, divisor: Union[IntegerBase, int]) -> IntegerBase: ... + def __and__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __or__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __rshift__(self, pos: Union[IntegerBase, int]) -> IntegerBase: ... + def __irshift__(self, pos: Union[IntegerBase, int]) -> IntegerBase: ... + def __lshift__(self, pos: Union[IntegerBase, int]) -> IntegerBase: ... + def __ilshift__(self, pos: Union[IntegerBase, int]) -> IntegerBase: ... + def get_bit(self, n: int) -> bool: ... + def is_odd(self) -> bool: ... + def is_even(self) -> bool: ... + def size_in_bits(self) -> int: ... + def size_in_bytes(self) -> int: ... + def is_perfect_square(self) -> bool: ... + def fail_if_divisible_by(self, small_prime: Union[IntegerBase, int]) -> None: ... + def multiply_accumulate(self, a: Union[IntegerBase, int], b: Union[IntegerBase, int]) -> IntegerBase: ... + def set(self, source: Union[IntegerBase, int]) -> IntegerBase: ... + def inplace_inverse(self, modulus: Union[IntegerBase, int]) -> IntegerBase: ... + def inverse(self, modulus: Union[IntegerBase, int]) -> IntegerBase: ... + def gcd(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def lcm(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + @staticmethod + def jacobi_symbol(a: Union[IntegerBase, int], n: Union[IntegerBase, int]) -> IntegerBase: ... + @staticmethod + def _tonelli_shanks(n: Union[IntegerBase, int], p: Union[IntegerBase, int]) -> IntegerBase : ... + @classmethod + def random(cls, **kwargs: Union[int,RandFunc]) -> IntegerBase : ... + @classmethod + def random_range(cls, **kwargs: Union[int,RandFunc]) -> IntegerBase : ... + diff --git a/env/lib/python3.8/site-packages/Crypto/Math/_IntegerCustom.py b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerCustom.py new file mode 100644 index 00000000..d6f6f751 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerCustom.py @@ -0,0 +1,118 @@ +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. 
+# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from ._IntegerNative import IntegerNative + +from Crypto.Util.number import long_to_bytes, bytes_to_long + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, + create_string_buffer, + get_raw_buffer, backend, + c_size_t, c_ulonglong) + + +from Crypto.Random.random import getrandbits + +c_defs = """ +int monty_pow(const uint8_t *base, + const uint8_t *exp, + const uint8_t *modulus, + uint8_t *out, + size_t len, + uint64_t seed); +""" + + +_raw_montgomery = load_pycryptodome_raw_lib("Crypto.Math._modexp", c_defs) +implementation = {"library": "custom", "api": backend} + + +class IntegerCustom(IntegerNative): + + @staticmethod + def from_bytes(byte_string, byteorder='big'): + if byteorder == 'big': + pass + elif byteorder == 'little': + byte_string = bytearray(byte_string) + byte_string.reverse() + else: + raise ValueError("Incorrect byteorder") + return IntegerCustom(bytes_to_long(byte_string)) + + def inplace_pow(self, exponent, modulus=None): + exp_value = int(exponent) + if exp_value < 0: + raise ValueError("Exponent must not be negative") + + # No modular reduction + if modulus is None: + self._value = pow(self._value, exp_value) + return self + + # With modular reduction + mod_value = int(modulus) + if mod_value < 0: + raise ValueError("Modulus must be positive") + if mod_value == 0: + raise ZeroDivisionError("Modulus cannot be zero") + + # C extension only works with odd moduli + if (mod_value & 1) == 0: + self._value = pow(self._value, exp_value, mod_value) + return self + + # C extension only works with bases smaller than modulus + if self._value >= mod_value: + self._value %= mod_value + + max_len = len(long_to_bytes(max(self._value, exp_value, mod_value))) + + base_b = long_to_bytes(self._value, max_len) + exp_b = long_to_bytes(exp_value, max_len) + modulus_b = long_to_bytes(mod_value, max_len) + + out = create_string_buffer(max_len) + + error = _raw_montgomery.monty_pow( + out, + base_b, + exp_b, + modulus_b, + c_size_t(max_len), + c_ulonglong(getrandbits(64)) + ) + + if error: + raise ValueError("monty_pow failed with error: %d" % error) + + result = bytes_to_long(get_raw_buffer(out)) + self._value = result + return self diff --git a/env/lib/python3.8/site-packages/Crypto/Math/_IntegerCustom.pyi b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerCustom.pyi new file mode 100644 index 00000000..2dd75c74 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerCustom.pyi @@ -0,0 +1,8 @@ +from typing import Any + +from ._IntegerNative import IntegerNative + +_raw_montgomery = Any + +class IntegerCustom(IntegerNative): + pass diff --git a/env/lib/python3.8/site-packages/Crypto/Math/_IntegerGMP.py 
b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerGMP.py new file mode 100644 index 00000000..f552b71a --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerGMP.py @@ -0,0 +1,762 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import sys + +from Crypto.Util.py3compat import tobytes, is_native_int + +from Crypto.Util._raw_api import (backend, load_lib, + get_raw_buffer, get_c_string, + null_pointer, create_string_buffer, + c_ulong, c_size_t, c_uint8_ptr) + +from ._IntegerBase import IntegerBase + +gmp_defs = """typedef unsigned long UNIX_ULONG; + typedef struct { int a; int b; void *c; } MPZ; + typedef MPZ mpz_t[1]; + typedef UNIX_ULONG mp_bitcnt_t; + + void __gmpz_init (mpz_t x); + void __gmpz_init_set (mpz_t rop, const mpz_t op); + void __gmpz_init_set_ui (mpz_t rop, UNIX_ULONG op); + + UNIX_ULONG __gmpz_get_ui (const mpz_t op); + void __gmpz_set (mpz_t rop, const mpz_t op); + void __gmpz_set_ui (mpz_t rop, UNIX_ULONG op); + void __gmpz_add (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_add_ui (mpz_t rop, const mpz_t op1, UNIX_ULONG op2); + void __gmpz_sub_ui (mpz_t rop, const mpz_t op1, UNIX_ULONG op2); + void __gmpz_addmul (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_addmul_ui (mpz_t rop, const mpz_t op1, UNIX_ULONG op2); + void __gmpz_submul_ui (mpz_t rop, const mpz_t op1, UNIX_ULONG op2); + void __gmpz_import (mpz_t rop, size_t count, int order, size_t size, + int endian, size_t nails, const void *op); + void * __gmpz_export (void *rop, size_t *countp, int order, + size_t size, + int endian, size_t nails, const mpz_t op); + size_t __gmpz_sizeinbase (const mpz_t op, int base); + void __gmpz_sub (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_mul (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_mul_ui (mpz_t rop, const mpz_t op1, UNIX_ULONG op2); + int __gmpz_cmp (const mpz_t op1, const mpz_t op2); + void __gmpz_powm (mpz_t rop, const mpz_t base, const mpz_t exp, const + mpz_t mod); + void __gmpz_powm_ui (mpz_t rop, const mpz_t base, UNIX_ULONG exp, + const mpz_t mod); + 
void __gmpz_pow_ui (mpz_t rop, const mpz_t base, UNIX_ULONG exp); + void __gmpz_sqrt(mpz_t rop, const mpz_t op); + void __gmpz_mod (mpz_t r, const mpz_t n, const mpz_t d); + void __gmpz_neg (mpz_t rop, const mpz_t op); + void __gmpz_abs (mpz_t rop, const mpz_t op); + void __gmpz_and (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_ior (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_clear (mpz_t x); + void __gmpz_tdiv_q_2exp (mpz_t q, const mpz_t n, mp_bitcnt_t b); + void __gmpz_fdiv_q (mpz_t q, const mpz_t n, const mpz_t d); + void __gmpz_mul_2exp (mpz_t rop, const mpz_t op1, mp_bitcnt_t op2); + int __gmpz_tstbit (const mpz_t op, mp_bitcnt_t bit_index); + int __gmpz_perfect_square_p (const mpz_t op); + int __gmpz_jacobi (const mpz_t a, const mpz_t b); + void __gmpz_gcd (mpz_t rop, const mpz_t op1, const mpz_t op2); + UNIX_ULONG __gmpz_gcd_ui (mpz_t rop, const mpz_t op1, + UNIX_ULONG op2); + void __gmpz_lcm (mpz_t rop, const mpz_t op1, const mpz_t op2); + int __gmpz_invert (mpz_t rop, const mpz_t op1, const mpz_t op2); + int __gmpz_divisible_p (const mpz_t n, const mpz_t d); + int __gmpz_divisible_ui_p (const mpz_t n, UNIX_ULONG d); + """ + +if sys.platform == "win32": + raise ImportError("Not using GMP on Windows") + +lib = load_lib("gmp", gmp_defs) +implementation = {"library": "gmp", "api": backend} + +if hasattr(lib, "__mpir_version"): + raise ImportError("MPIR library detected") + +# In order to create a function that returns a pointer to +# a new MPZ structure, we need to break the abstraction +# and know exactly what ffi backend we have +if implementation["api"] == "ctypes": + from ctypes import Structure, c_int, c_void_p, byref + + class _MPZ(Structure): + _fields_ = [('_mp_alloc', c_int), + ('_mp_size', c_int), + ('_mp_d', c_void_p)] + + def new_mpz(): + return byref(_MPZ()) + +else: + # We are using CFFI + from Crypto.Util._raw_api import ffi + + def new_mpz(): + return ffi.new("MPZ*") + + +# Lazy creation of GMP methods +class _GMP(object): + + def __getattr__(self, name): + if name.startswith("mpz_"): + func_name = "__gmpz_" + name[4:] + elif name.startswith("gmp_"): + func_name = "__gmp_" + name[4:] + else: + raise AttributeError("Attribute %s is invalid" % name) + func = getattr(lib, func_name) + setattr(self, name, func) + return func + + +_gmp = _GMP() + + +class IntegerGMP(IntegerBase): + """A fast, arbitrary precision integer""" + + _zero_mpz_p = new_mpz() + _gmp.mpz_init_set_ui(_zero_mpz_p, c_ulong(0)) + + def __init__(self, value): + """Initialize the integer to the given value.""" + + self._mpz_p = new_mpz() + self._initialized = False + + if isinstance(value, float): + raise ValueError("A floating point type is not a natural number") + + if is_native_int(value): + _gmp.mpz_init(self._mpz_p) + self._initialized = True + if value == 0: + return + + tmp = new_mpz() + _gmp.mpz_init(tmp) + + try: + positive = value >= 0 + reduce = abs(value) + slots = (reduce.bit_length() - 1) // 32 + 1 + + while slots > 0: + slots = slots - 1 + _gmp.mpz_set_ui(tmp, + c_ulong(0xFFFFFFFF & (reduce >> (slots * 32)))) + _gmp.mpz_mul_2exp(tmp, tmp, c_ulong(slots * 32)) + _gmp.mpz_add(self._mpz_p, self._mpz_p, tmp) + finally: + _gmp.mpz_clear(tmp) + + if not positive: + _gmp.mpz_neg(self._mpz_p, self._mpz_p) + + elif isinstance(value, IntegerGMP): + _gmp.mpz_init_set(self._mpz_p, value._mpz_p) + self._initialized = True + else: + raise NotImplementedError + + + # Conversions + def __int__(self): + tmp = new_mpz() + _gmp.mpz_init_set(tmp, self._mpz_p) + + try: + value = 0 
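+            # The loop below extracts the magnitude 32 bits at a time:
+            # mpz_get_ui() returns the least-significant limb, masked to
+            # 32 bits and OR-ed into place, then mpz_tdiv_q_2exp() discards
+            # those 32 bits, until the temporary reaches zero.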
+            slot = 0
+            while _gmp.mpz_cmp(tmp, self._zero_mpz_p) != 0:
+                lsb = _gmp.mpz_get_ui(tmp) & 0xFFFFFFFF
+                value |= lsb << (slot * 32)
+                _gmp.mpz_tdiv_q_2exp(tmp, tmp, c_ulong(32))
+                slot = slot + 1
+        finally:
+            _gmp.mpz_clear(tmp)
+
+        if self < 0:
+            value = -value
+        return int(value)
+
+    def __str__(self):
+        return str(int(self))
+
+    def __repr__(self):
+        return "Integer(%s)" % str(self)
+
+    # Only Python 2.x
+    def __hex__(self):
+        return hex(int(self))
+
+    # Only Python 3.x
+    def __index__(self):
+        return int(self)
+
+    def to_bytes(self, block_size=0, byteorder='big'):
+        """Convert the number into a byte string.
+
+        This method encodes the number in network order and prepends
+        as many zero bytes as required. It only works for non-negative
+        values.
+
+        :Parameters:
+          block_size : integer
+            The exact size the output byte string must have.
+            If zero, the string has the minimal length.
+          byteorder : string
+            'big' for big-endian integers (default), 'little' for little-endian.
+        :Returns:
+          A byte string.
+        :Raise ValueError:
+          If the value is negative or if ``block_size`` is
+          provided and the length of the byte string would exceed it.
+        """
+
+        if self < 0:
+            raise ValueError("Conversion only valid for non-negative numbers")
+
+        buf_len = (_gmp.mpz_sizeinbase(self._mpz_p, 2) + 7) // 8
+        if buf_len > block_size > 0:
+            raise ValueError("Number is too big to convert to byte string"
+                             " of prescribed length")
+        buf = create_string_buffer(buf_len)
+
+        _gmp.mpz_export(
+                buf,
+                null_pointer,  # Ignore countp
+                1,             # Big endian
+                c_size_t(1),   # Each word is 1 byte long
+                0,             # Endianness within a word - not relevant
+                c_size_t(0),   # No nails
+                self._mpz_p)
+
+        result = b'\x00' * max(0, block_size - buf_len) + get_raw_buffer(buf)
+        if byteorder == 'big':
+            pass
+        elif byteorder == 'little':
+            result = bytearray(result)
+            result.reverse()
+            result = bytes(result)
+        else:
+            raise ValueError("Incorrect byteorder")
+        return result
+
+    @staticmethod
+    def from_bytes(byte_string, byteorder='big'):
+        """Convert a byte string into a number.
+
+        :Parameters:
+          byte_string : byte string
+            The input number, encoded in network order.
+            It can only be non-negative.
+          byteorder : string
+            'big' for big-endian integers (default), 'little' for little-endian.
+
+        :Return:
+          The ``Integer`` object carrying the same value as the input.
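+
+        A doctest-style sketch (illustrative):
+
+            >>> int(IntegerGMP.from_bytes(b'\x01\x00'))
+            256
+            >>> int(IntegerGMP.from_bytes(b'\x01\x00', byteorder='little'))
+            1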
+ """ + result = IntegerGMP(0) + if byteorder == 'big': + pass + elif byteorder == 'little': + byte_string = bytearray(byte_string) + byte_string.reverse() + else: + raise ValueError("Incorrect byteorder") + _gmp.mpz_import( + result._mpz_p, + c_size_t(len(byte_string)), # Amount of words to read + 1, # Big endian + c_size_t(1), # Each word is 1 byte long + 0, # Endianess within a word - not relevant + c_size_t(0), # No nails + c_uint8_ptr(byte_string)) + return result + + # Relations + def _apply_and_return(self, func, term): + if not isinstance(term, IntegerGMP): + term = IntegerGMP(term) + return func(self._mpz_p, term._mpz_p) + + def __eq__(self, term): + if not (isinstance(term, IntegerGMP) or is_native_int(term)): + return False + return self._apply_and_return(_gmp.mpz_cmp, term) == 0 + + def __ne__(self, term): + if not (isinstance(term, IntegerGMP) or is_native_int(term)): + return True + return self._apply_and_return(_gmp.mpz_cmp, term) != 0 + + def __lt__(self, term): + return self._apply_and_return(_gmp.mpz_cmp, term) < 0 + + def __le__(self, term): + return self._apply_and_return(_gmp.mpz_cmp, term) <= 0 + + def __gt__(self, term): + return self._apply_and_return(_gmp.mpz_cmp, term) > 0 + + def __ge__(self, term): + return self._apply_and_return(_gmp.mpz_cmp, term) >= 0 + + def __nonzero__(self): + return _gmp.mpz_cmp(self._mpz_p, self._zero_mpz_p) != 0 + __bool__ = __nonzero__ + + def is_negative(self): + return _gmp.mpz_cmp(self._mpz_p, self._zero_mpz_p) < 0 + + # Arithmetic operations + def __add__(self, term): + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + try: + term = IntegerGMP(term) + except NotImplementedError: + return NotImplemented + _gmp.mpz_add(result._mpz_p, + self._mpz_p, + term._mpz_p) + return result + + def __sub__(self, term): + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + try: + term = IntegerGMP(term) + except NotImplementedError: + return NotImplemented + _gmp.mpz_sub(result._mpz_p, + self._mpz_p, + term._mpz_p) + return result + + def __mul__(self, term): + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + try: + term = IntegerGMP(term) + except NotImplementedError: + return NotImplemented + _gmp.mpz_mul(result._mpz_p, + self._mpz_p, + term._mpz_p) + return result + + def __floordiv__(self, divisor): + if not isinstance(divisor, IntegerGMP): + divisor = IntegerGMP(divisor) + if _gmp.mpz_cmp(divisor._mpz_p, + self._zero_mpz_p) == 0: + raise ZeroDivisionError("Division by zero") + result = IntegerGMP(0) + _gmp.mpz_fdiv_q(result._mpz_p, + self._mpz_p, + divisor._mpz_p) + return result + + def __mod__(self, divisor): + if not isinstance(divisor, IntegerGMP): + divisor = IntegerGMP(divisor) + comp = _gmp.mpz_cmp(divisor._mpz_p, + self._zero_mpz_p) + if comp == 0: + raise ZeroDivisionError("Division by zero") + if comp < 0: + raise ValueError("Modulus must be positive") + result = IntegerGMP(0) + _gmp.mpz_mod(result._mpz_p, + self._mpz_p, + divisor._mpz_p) + return result + + def inplace_pow(self, exponent, modulus=None): + + if modulus is None: + if exponent < 0: + raise ValueError("Exponent must not be negative") + + # Normal exponentiation + if exponent > 256: + raise ValueError("Exponent is too big") + _gmp.mpz_pow_ui(self._mpz_p, + self._mpz_p, # Base + c_ulong(int(exponent)) + ) + else: + # Modular exponentiation + if not isinstance(modulus, IntegerGMP): + modulus = IntegerGMP(modulus) + if not modulus: + raise ZeroDivisionError("Division by zero") + if modulus.is_negative(): + raise ValueError("Modulus 
must be positive") + if is_native_int(exponent): + if exponent < 0: + raise ValueError("Exponent must not be negative") + if exponent < 65536: + _gmp.mpz_powm_ui(self._mpz_p, + self._mpz_p, + c_ulong(exponent), + modulus._mpz_p) + return self + exponent = IntegerGMP(exponent) + elif exponent.is_negative(): + raise ValueError("Exponent must not be negative") + _gmp.mpz_powm(self._mpz_p, + self._mpz_p, + exponent._mpz_p, + modulus._mpz_p) + return self + + def __pow__(self, exponent, modulus=None): + result = IntegerGMP(self) + return result.inplace_pow(exponent, modulus) + + def __abs__(self): + result = IntegerGMP(0) + _gmp.mpz_abs(result._mpz_p, self._mpz_p) + return result + + def sqrt(self, modulus=None): + """Return the largest Integer that does not + exceed the square root""" + + if modulus is None: + if self < 0: + raise ValueError("Square root of negative value") + result = IntegerGMP(0) + _gmp.mpz_sqrt(result._mpz_p, + self._mpz_p) + else: + if modulus <= 0: + raise ValueError("Modulus must be positive") + modulus = int(modulus) + result = IntegerGMP(self._tonelli_shanks(int(self) % modulus, modulus)) + + return result + + def __iadd__(self, term): + if is_native_int(term): + if 0 <= term < 65536: + _gmp.mpz_add_ui(self._mpz_p, + self._mpz_p, + c_ulong(term)) + return self + if -65535 < term < 0: + _gmp.mpz_sub_ui(self._mpz_p, + self._mpz_p, + c_ulong(-term)) + return self + term = IntegerGMP(term) + _gmp.mpz_add(self._mpz_p, + self._mpz_p, + term._mpz_p) + return self + + def __isub__(self, term): + if is_native_int(term): + if 0 <= term < 65536: + _gmp.mpz_sub_ui(self._mpz_p, + self._mpz_p, + c_ulong(term)) + return self + if -65535 < term < 0: + _gmp.mpz_add_ui(self._mpz_p, + self._mpz_p, + c_ulong(-term)) + return self + term = IntegerGMP(term) + _gmp.mpz_sub(self._mpz_p, + self._mpz_p, + term._mpz_p) + return self + + def __imul__(self, term): + if is_native_int(term): + if 0 <= term < 65536: + _gmp.mpz_mul_ui(self._mpz_p, + self._mpz_p, + c_ulong(term)) + return self + if -65535 < term < 0: + _gmp.mpz_mul_ui(self._mpz_p, + self._mpz_p, + c_ulong(-term)) + _gmp.mpz_neg(self._mpz_p, self._mpz_p) + return self + term = IntegerGMP(term) + _gmp.mpz_mul(self._mpz_p, + self._mpz_p, + term._mpz_p) + return self + + def __imod__(self, divisor): + if not isinstance(divisor, IntegerGMP): + divisor = IntegerGMP(divisor) + comp = _gmp.mpz_cmp(divisor._mpz_p, + divisor._zero_mpz_p) + if comp == 0: + raise ZeroDivisionError("Division by zero") + if comp < 0: + raise ValueError("Modulus must be positive") + _gmp.mpz_mod(self._mpz_p, + self._mpz_p, + divisor._mpz_p) + return self + + # Boolean/bit operations + def __and__(self, term): + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + term = IntegerGMP(term) + _gmp.mpz_and(result._mpz_p, + self._mpz_p, + term._mpz_p) + return result + + def __or__(self, term): + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + term = IntegerGMP(term) + _gmp.mpz_ior(result._mpz_p, + self._mpz_p, + term._mpz_p) + return result + + def __rshift__(self, pos): + result = IntegerGMP(0) + if pos < 0: + raise ValueError("negative shift count") + if pos > 65536: + if self < 0: + return -1 + else: + return 0 + _gmp.mpz_tdiv_q_2exp(result._mpz_p, + self._mpz_p, + c_ulong(int(pos))) + return result + + def __irshift__(self, pos): + if pos < 0: + raise ValueError("negative shift count") + if pos > 65536: + if self < 0: + return -1 + else: + return 0 + _gmp.mpz_tdiv_q_2exp(self._mpz_p, + self._mpz_p, + c_ulong(int(pos))) + return self + + def 
__lshift__(self, pos): + result = IntegerGMP(0) + if not 0 <= pos < 65536: + raise ValueError("Incorrect shift count") + _gmp.mpz_mul_2exp(result._mpz_p, + self._mpz_p, + c_ulong(int(pos))) + return result + + def __ilshift__(self, pos): + if not 0 <= pos < 65536: + raise ValueError("Incorrect shift count") + _gmp.mpz_mul_2exp(self._mpz_p, + self._mpz_p, + c_ulong(int(pos))) + return self + + def get_bit(self, n): + """Return True if the n-th bit is set to 1. + Bit 0 is the least significant.""" + + if self < 0: + raise ValueError("no bit representation for negative values") + if n < 0: + raise ValueError("negative bit count") + if n > 65536: + return 0 + return bool(_gmp.mpz_tstbit(self._mpz_p, + c_ulong(int(n)))) + + # Extra + def is_odd(self): + return _gmp.mpz_tstbit(self._mpz_p, 0) == 1 + + def is_even(self): + return _gmp.mpz_tstbit(self._mpz_p, 0) == 0 + + def size_in_bits(self): + """Return the minimum number of bits that can encode the number.""" + + if self < 0: + raise ValueError("Conversion only valid for non-negative numbers") + return _gmp.mpz_sizeinbase(self._mpz_p, 2) + + def size_in_bytes(self): + """Return the minimum number of bytes that can encode the number.""" + return (self.size_in_bits() - 1) // 8 + 1 + + def is_perfect_square(self): + return _gmp.mpz_perfect_square_p(self._mpz_p) != 0 + + def fail_if_divisible_by(self, small_prime): + """Raise an exception if the small prime is a divisor.""" + + if is_native_int(small_prime): + if 0 < small_prime < 65536: + if _gmp.mpz_divisible_ui_p(self._mpz_p, + c_ulong(small_prime)): + raise ValueError("The value is composite") + return + small_prime = IntegerGMP(small_prime) + if _gmp.mpz_divisible_p(self._mpz_p, + small_prime._mpz_p): + raise ValueError("The value is composite") + + def multiply_accumulate(self, a, b): + """Increment the number by the product of a and b.""" + + if not isinstance(a, IntegerGMP): + a = IntegerGMP(a) + if is_native_int(b): + if 0 < b < 65536: + _gmp.mpz_addmul_ui(self._mpz_p, + a._mpz_p, + c_ulong(b)) + return self + if -65535 < b < 0: + _gmp.mpz_submul_ui(self._mpz_p, + a._mpz_p, + c_ulong(-b)) + return self + b = IntegerGMP(b) + _gmp.mpz_addmul(self._mpz_p, + a._mpz_p, + b._mpz_p) + return self + + def set(self, source): + """Set the Integer to have the given value""" + + if not isinstance(source, IntegerGMP): + source = IntegerGMP(source) + _gmp.mpz_set(self._mpz_p, + source._mpz_p) + return self + + def inplace_inverse(self, modulus): + """Compute the inverse of this number in the ring of + modulo integers. + + Raise an exception if no inverse exists. 
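+
+        For instance (illustrative), modulo 7 the inverse of 3 is 5,
+        since 3 * 5 = 15 = 2 * 7 + 1:
+
+            >>> int(IntegerGMP(3).inplace_inverse(7))
+            5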
+ """ + + if not isinstance(modulus, IntegerGMP): + modulus = IntegerGMP(modulus) + + comp = _gmp.mpz_cmp(modulus._mpz_p, + self._zero_mpz_p) + if comp == 0: + raise ZeroDivisionError("Modulus cannot be zero") + if comp < 0: + raise ValueError("Modulus must be positive") + + result = _gmp.mpz_invert(self._mpz_p, + self._mpz_p, + modulus._mpz_p) + if not result: + raise ValueError("No inverse value can be computed") + return self + + def inverse(self, modulus): + result = IntegerGMP(self) + result.inplace_inverse(modulus) + return result + + def gcd(self, term): + """Compute the greatest common denominator between this + number and another term.""" + + result = IntegerGMP(0) + if is_native_int(term): + if 0 < term < 65535: + _gmp.mpz_gcd_ui(result._mpz_p, + self._mpz_p, + c_ulong(term)) + return result + term = IntegerGMP(term) + _gmp.mpz_gcd(result._mpz_p, self._mpz_p, term._mpz_p) + return result + + def lcm(self, term): + """Compute the least common multiplier between this + number and another term.""" + + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + term = IntegerGMP(term) + _gmp.mpz_lcm(result._mpz_p, self._mpz_p, term._mpz_p) + return result + + @staticmethod + def jacobi_symbol(a, n): + """Compute the Jacobi symbol""" + + if not isinstance(a, IntegerGMP): + a = IntegerGMP(a) + if not isinstance(n, IntegerGMP): + n = IntegerGMP(n) + if n <= 0 or n.is_even(): + raise ValueError("n must be positive odd for the Jacobi symbol") + return _gmp.mpz_jacobi(a._mpz_p, n._mpz_p) + + # Clean-up + def __del__(self): + + try: + if self._mpz_p is not None: + if self._initialized: + _gmp.mpz_clear(self._mpz_p) + + self._mpz_p = None + except AttributeError: + pass diff --git a/env/lib/python3.8/site-packages/Crypto/Math/_IntegerGMP.pyi b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerGMP.pyi new file mode 100644 index 00000000..2181b470 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerGMP.pyi @@ -0,0 +1,3 @@ +from ._IntegerBase import IntegerBase +class IntegerGMP(IntegerBase): + pass diff --git a/env/lib/python3.8/site-packages/Crypto/Math/_IntegerNative.py b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerNative.py new file mode 100644 index 00000000..a8bcb3db --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerNative.py @@ -0,0 +1,395 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from ._IntegerBase import IntegerBase + +from Crypto.Util.number import long_to_bytes, bytes_to_long + + +class IntegerNative(IntegerBase): + """A class to model a natural integer (including zero)""" + + def __init__(self, value): + if isinstance(value, float): + raise ValueError("A floating point type is not a natural number") + try: + self._value = value._value + except AttributeError: + self._value = value + + # Conversions + def __int__(self): + return self._value + + def __str__(self): + return str(int(self)) + + def __repr__(self): + return "Integer(%s)" % str(self) + + # Only Python 2.x + def __hex__(self): + return hex(self._value) + + # Only Python 3.x + def __index__(self): + return int(self._value) + + def to_bytes(self, block_size=0, byteorder='big'): + if self._value < 0: + raise ValueError("Conversion only valid for non-negative numbers") + result = long_to_bytes(self._value, block_size) + if len(result) > block_size > 0: + raise ValueError("Value too large to encode") + if byteorder == 'big': + pass + elif byteorder == 'little': + result = bytearray(result) + result.reverse() + result = bytes(result) + else: + raise ValueError("Incorrect byteorder") + return result + + @classmethod + def from_bytes(cls, byte_string, byteorder='big'): + if byteorder == 'big': + pass + elif byteorder == 'little': + byte_string = bytearray(byte_string) + byte_string.reverse() + else: + raise ValueError("Incorrect byteorder") + return cls(bytes_to_long(byte_string)) + + # Relations + def __eq__(self, term): + if term is None: + return False + return self._value == int(term) + + def __ne__(self, term): + return not self.__eq__(term) + + def __lt__(self, term): + return self._value < int(term) + + def __le__(self, term): + return self.__lt__(term) or self.__eq__(term) + + def __gt__(self, term): + return not self.__le__(term) + + def __ge__(self, term): + return not self.__lt__(term) + + def __nonzero__(self): + return self._value != 0 + __bool__ = __nonzero__ + + def is_negative(self): + return self._value < 0 + + # Arithmetic operations + def __add__(self, term): + try: + return self.__class__(self._value + int(term)) + except (ValueError, AttributeError, TypeError): + return NotImplemented + + def __sub__(self, term): + try: + return self.__class__(self._value - int(term)) + except (ValueError, AttributeError, TypeError): + return NotImplemented + + def __mul__(self, factor): + try: + return self.__class__(self._value * int(factor)) + except (ValueError, AttributeError, TypeError): + return NotImplemented + + def __floordiv__(self, divisor): + return self.__class__(self._value // int(divisor)) + + def __mod__(self, divisor): + divisor_value = int(divisor) + if divisor_value < 0: + raise ValueError("Modulus must be positive") + return self.__class__(self._value % divisor_value) + + def inplace_pow(self, exponent, modulus=None): + exp_value = int(exponent) + if exp_value < 0: + raise 
ValueError("Exponent must not be negative") + + if modulus is not None: + mod_value = int(modulus) + if mod_value < 0: + raise ValueError("Modulus must be positive") + if mod_value == 0: + raise ZeroDivisionError("Modulus cannot be zero") + else: + mod_value = None + self._value = pow(self._value, exp_value, mod_value) + return self + + def __pow__(self, exponent, modulus=None): + result = self.__class__(self) + return result.inplace_pow(exponent, modulus) + + def __abs__(self): + return abs(self._value) + + def sqrt(self, modulus=None): + + value = self._value + if modulus is None: + if value < 0: + raise ValueError("Square root of negative value") + # http://stackoverflow.com/questions/15390807/integer-square-root-in-python + + x = value + y = (x + 1) // 2 + while y < x: + x = y + y = (x + value // x) // 2 + result = x + else: + if modulus <= 0: + raise ValueError("Modulus must be positive") + result = self._tonelli_shanks(self % modulus, modulus) + + return self.__class__(result) + + def __iadd__(self, term): + self._value += int(term) + return self + + def __isub__(self, term): + self._value -= int(term) + return self + + def __imul__(self, term): + self._value *= int(term) + return self + + def __imod__(self, term): + modulus = int(term) + if modulus == 0: + raise ZeroDivisionError("Division by zero") + if modulus < 0: + raise ValueError("Modulus must be positive") + self._value %= modulus + return self + + # Boolean/bit operations + def __and__(self, term): + return self.__class__(self._value & int(term)) + + def __or__(self, term): + return self.__class__(self._value | int(term)) + + def __rshift__(self, pos): + try: + return self.__class__(self._value >> int(pos)) + except OverflowError: + if self._value >= 0: + return 0 + else: + return -1 + + def __irshift__(self, pos): + try: + self._value >>= int(pos) + except OverflowError: + if self._value >= 0: + return 0 + else: + return -1 + return self + + def __lshift__(self, pos): + try: + return self.__class__(self._value << int(pos)) + except OverflowError: + raise ValueError("Incorrect shift count") + + def __ilshift__(self, pos): + try: + self._value <<= int(pos) + except OverflowError: + raise ValueError("Incorrect shift count") + return self + + def get_bit(self, n): + if self._value < 0: + raise ValueError("no bit representation for negative values") + try: + try: + result = (self._value >> n._value) & 1 + if n._value < 0: + raise ValueError("negative bit count") + except AttributeError: + result = (self._value >> n) & 1 + if n < 0: + raise ValueError("negative bit count") + except OverflowError: + result = 0 + return result + + # Extra + def is_odd(self): + return (self._value & 1) == 1 + + def is_even(self): + return (self._value & 1) == 0 + + def size_in_bits(self): + + if self._value < 0: + raise ValueError("Conversion only valid for non-negative numbers") + + if self._value == 0: + return 1 + + bit_size = 0 + tmp = self._value + while tmp: + tmp >>= 1 + bit_size += 1 + + return bit_size + + def size_in_bytes(self): + return (self.size_in_bits() - 1) // 8 + 1 + + def is_perfect_square(self): + if self._value < 0: + return False + if self._value in (0, 1): + return True + + x = self._value // 2 + square_x = x ** 2 + + while square_x > self._value: + x = (square_x + self._value) // (2 * x) + square_x = x ** 2 + + return self._value == x ** 2 + + def fail_if_divisible_by(self, small_prime): + if (self._value % int(small_prime)) == 0: + raise ValueError("Value is composite") + + def multiply_accumulate(self, a, b): + self._value 
+= int(a) * int(b) + return self + + def set(self, source): + self._value = int(source) + + def inplace_inverse(self, modulus): + modulus = int(modulus) + if modulus == 0: + raise ZeroDivisionError("Modulus cannot be zero") + if modulus < 0: + raise ValueError("Modulus cannot be negative") + r_p, r_n = self._value, modulus + s_p, s_n = 1, 0 + while r_n > 0: + q = r_p // r_n + r_p, r_n = r_n, r_p - q * r_n + s_p, s_n = s_n, s_p - q * s_n + if r_p != 1: + raise ValueError("No inverse value can be computed" + str(r_p)) + while s_p < 0: + s_p += modulus + self._value = s_p + return self + + def inverse(self, modulus): + result = self.__class__(self) + result.inplace_inverse(modulus) + return result + + def gcd(self, term): + r_p, r_n = abs(self._value), abs(int(term)) + while r_n > 0: + q = r_p // r_n + r_p, r_n = r_n, r_p - q * r_n + return self.__class__(r_p) + + def lcm(self, term): + term = int(term) + if self._value == 0 or term == 0: + return self.__class__(0) + return self.__class__(abs((self._value * term) // self.gcd(term)._value)) + + @staticmethod + def jacobi_symbol(a, n): + a = int(a) + n = int(n) + + if n <= 0: + raise ValueError("n must be a positive integer") + + if (n & 1) == 0: + raise ValueError("n must be odd for the Jacobi symbol") + + # Step 1 + a = a % n + # Step 2 + if a == 1 or n == 1: + return 1 + # Step 3 + if a == 0: + return 0 + # Step 4 + e = 0 + a1 = a + while (a1 & 1) == 0: + a1 >>= 1 + e += 1 + # Step 5 + if (e & 1) == 0: + s = 1 + elif n % 8 in (1, 7): + s = 1 + else: + s = -1 + # Step 6 + if n % 4 == 3 and a1 % 4 == 3: + s = -s + # Step 7 + n1 = n % a1 + # Step 8 + return s * IntegerNative.jacobi_symbol(n1, a1) diff --git a/env/lib/python3.8/site-packages/Crypto/Math/_IntegerNative.pyi b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerNative.pyi new file mode 100644 index 00000000..3f65a39e --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Math/_IntegerNative.pyi @@ -0,0 +1,3 @@ +from ._IntegerBase import IntegerBase +class IntegerNative(IntegerBase): + pass diff --git a/env/lib/python3.8/site-packages/Crypto/Math/__init__.py b/env/lib/python3.8/site-packages/Crypto/Math/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/Crypto/Math/_modexp.abi3.so b/env/lib/python3.8/site-packages/Crypto/Math/_modexp.abi3.so new file mode 100644 index 00000000..ffc9ffd9 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Math/_modexp.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Protocol/KDF.py b/env/lib/python3.8/site-packages/Crypto/Protocol/KDF.py new file mode 100644 index 00000000..13482655 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Protocol/KDF.py @@ -0,0 +1,574 @@ +# coding=utf-8 +# +# KDF.py : a collection of Key Derivation Functions +# +# Part of the Python Cryptography Toolkit +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================

+import re
+import struct
+from functools import reduce
+
+from Crypto.Util.py3compat import (tobytes, bord, _copy_bytes, iter_range,
+                                   tostr, bchr, bstr)
+
+from Crypto.Hash import SHA1, SHA256, HMAC, CMAC, BLAKE2s
+from Crypto.Util.strxor import strxor
+from Crypto.Random import get_random_bytes
+from Crypto.Util.number import size as bit_size, long_to_bytes, bytes_to_long
+
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
+                                  create_string_buffer,
+                                  get_raw_buffer, c_size_t)
+
+_raw_salsa20_lib = load_pycryptodome_raw_lib("Crypto.Cipher._Salsa20",
+                    """
+                    int Salsa20_8_core(const uint8_t *x, const uint8_t *y,
+                                       uint8_t *out);
+                    """)
+
+_raw_scrypt_lib = load_pycryptodome_raw_lib("Crypto.Protocol._scrypt",
+                    """
+                    typedef int (core_t)(const uint8_t [64], const uint8_t [64], uint8_t [64]);
+                    int scryptROMix(const uint8_t *data_in, uint8_t *data_out,
+                                    size_t data_len, unsigned N, core_t *core);
+                    """)
+
+
+def PBKDF1(password, salt, dkLen, count=1000, hashAlgo=None):
+    """Derive one key from a password (or passphrase).
+
+    This function performs key derivation according to an old version of
+    the PKCS#5 standard (v1.5) or `RFC2898
+    <http://www.ietf.org/rfc/rfc2898.txt>`_.
+
+    Args:
+      password (string):
+        The secret password to generate the key from.
+      salt (byte string):
+        An 8 byte string to use for better protection from dictionary attacks.
+        This value does not need to be kept secret, but it should be randomly
+        chosen for each derivation.
+      dkLen (integer):
+        The length of the desired key. The default is 16 bytes, suitable for
+        instance for :mod:`Crypto.Cipher.AES`.
+      count (integer):
+        The number of iterations to carry out. The recommendation is 1000 or
+        more.
+      hashAlgo (module):
+        The hash algorithm to use, as a module or an object from the :mod:`Crypto.Hash` package.
+        The digest length must be no shorter than ``dkLen``.
+        The default algorithm is :mod:`Crypto.Hash.SHA1`.
+
+    Return:
+        A byte string of length ``dkLen`` that can be used as key.
+    """
+
+    if not hashAlgo:
+        hashAlgo = SHA1
+    password = tobytes(password)
+    pHash = hashAlgo.new(password+salt)
+    digest = pHash.digest_size
+    if dkLen > digest:
+        raise TypeError("Selected hash algorithm has a too short digest (%d bytes)." % digest)
+    if len(salt) != 8:
+        raise ValueError("Salt is not 8 bytes long (%d bytes instead)." % len(salt))
+    for i in iter_range(count-1):
+        pHash = pHash.new(pHash.digest())
+    return pHash.digest()[:dkLen]
+
+
+def PBKDF2(password, salt, dkLen=16, count=1000, prf=None, hmac_hash_module=None):
+    """Derive one or more keys from a password (or passphrase).
+
+    This function performs key derivation according to the PKCS#5 standard (v2.0).
+
+    Args:
+      password (string or byte string):
+        The secret password to generate the key from.
+      salt (string or byte string):
+        A (byte) string to use for better protection from dictionary attacks.
+        This value does not need to be kept secret, but it should be randomly
+        chosen for each derivation. It is recommended to use at least 16 bytes.
+      dkLen (integer):
+        The cumulative length of the keys to produce.
+
+        Due to a flaw in the PBKDF2 design, you should not request more bytes
+        than the ``prf`` can output. For instance, ``dkLen`` should not exceed
+        20 bytes in combination with ``HMAC-SHA1``.
+      count (integer):
+        The number of iterations to carry out. The higher the value, the slower
+        and the more secure the function becomes.
+
+        You should find the maximum number of iterations that keeps the
+        key derivation still acceptable on the slowest hardware you must support.
+
+        Although the default value is 1000, **it is recommended to use at least
+        1000000 (1 million) iterations**.
+      prf (callable):
+        A pseudorandom function. It must be a function that returns a
+        pseudorandom byte string from two parameters: a secret and a salt.
+        The slower the algorithm, the more secure the derivation function.
+        If not specified, **HMAC-SHA1** is used.
+      hmac_hash_module (module):
+        A module from ``Crypto.Hash`` implementing a Merkle-Damgard cryptographic
+        hash, which PBKDF2 must use in combination with HMAC.
+        This parameter is mutually exclusive with ``prf``.
+
+    Return:
+        A byte string of length ``dkLen`` that can be used as key material.
+        If you want multiple keys, just break up this string into segments of the desired length.
+    """
+
+    password = tobytes(password)
+    salt = tobytes(salt)
+
+    if prf and hmac_hash_module:
+        raise ValueError("'prf' and 'hmac_hash_module' are mutually exclusive")
+
+    if prf is None and hmac_hash_module is None:
+        hmac_hash_module = SHA1
+
+    if prf or not hasattr(hmac_hash_module, "_pbkdf2_hmac_assist"):
+        # Generic (and slow) implementation
+
+        if prf is None:
+            prf = lambda p,s: HMAC.new(p, s, hmac_hash_module).digest()
+
+        def link(s):
+            s[0], s[1] = s[1], prf(password, s[1])
+            return s[0]
+
+        key = b''
+        i = 1
+        while len(key) < dkLen:
+            s = [ prf(password, salt + struct.pack(">I", i)) ] * 2
+            key += reduce(strxor, (link(s) for j in range(count)) )
+            i += 1
+
+    else:
+        # Optimized implementation
+        key = b''
+        i = 1
+        while len(key) < dkLen:
+            base = HMAC.new(password, b"", hmac_hash_module)
+            first_digest = base.update(salt + struct.pack(">I", i)).digest()
+            key += base._pbkdf2_hmac_assist(first_digest, count)
+            i += 1
+
+    return key[:dkLen]
+
+
+class _S2V(object):
+    """String-to-vector PRF as defined in `RFC5297`_.
+
+    This class implements a pseudorandom function family
+    based on CMAC that takes as input a vector of strings.
+
+    .. _RFC5297: http://tools.ietf.org/html/rfc5297
+    """
+
+    def __init__(self, key, ciphermod, cipher_params=None):
+        """Initialize the S2V PRF.
+
+        :Parameters:
+          key : byte string
+            A secret that can be used as key for CMACs
+            based on ciphers from ``ciphermod``.
+          ciphermod : module
+            A block cipher module from `Crypto.Cipher`.
+          cipher_params : dictionary
+            A set of extra parameters to use to create a cipher instance.
+        """
+
+        self._key = _copy_bytes(None, None, key)
+        self._ciphermod = ciphermod
+        self._last_string = self._cache = b'\x00' * ciphermod.block_size
+
+        # Max number of update() calls we can process
+        self._n_updates = ciphermod.block_size * 8 - 1
+
+        if cipher_params is None:
+            self._cipher_params = {}
+        else:
+            self._cipher_params = dict(cipher_params)
+
+    @staticmethod
+    def new(key, ciphermod):
+        """Create a new S2V PRF.
+
+        :Parameters:
+          key : byte string
+            A secret that can be used as key for CMACs
+            based on ciphers from ``ciphermod``.
+          ciphermod : module
+            A block cipher module from `Crypto.Cipher`.
+        """
+        return _S2V(key, ciphermod)
+
+    def _double(self, bs):
+        doubled = bytes_to_long(bs)<<1
+        if bord(bs[0]) & 0x80:
+            doubled ^= 0x87
+        return long_to_bytes(doubled, len(bs))[-len(bs):]
+
+    def update(self, item):
+        """Pass the next component of the vector.
+
+        The maximum number of components you can pass is equal to the block
+        length of the cipher (in bits) minus 1.
+
+        :Parameters:
+          item : byte string
+            The next component of the vector.
+        :Raise TypeError: when the limit on the number of components has been reached.
+        """
+
+        if self._n_updates == 0:
+            raise TypeError("Too many components passed to S2V")
+        self._n_updates -= 1
+
+        mac = CMAC.new(self._key,
+                       msg=self._last_string,
+                       ciphermod=self._ciphermod,
+                       cipher_params=self._cipher_params)
+        self._cache = strxor(self._double(self._cache), mac.digest())
+        self._last_string = _copy_bytes(None, None, item)
+
+    def derive(self):
+        """Derive a secret from the vector of components.
+
+        :Return: a byte string, as long as the block length of the cipher.
+        """
+
+        if len(self._last_string) >= 16:
+            # xorend
+            final = self._last_string[:-16] + strxor(self._last_string[-16:], self._cache)
+        else:
+            # zero-pad & xor
+            padded = (self._last_string + b'\x80' + b'\x00' * 15)[:16]
+            final = strxor(padded, self._double(self._cache))
+        mac = CMAC.new(self._key,
+                       msg=final,
+                       ciphermod=self._ciphermod,
+                       cipher_params=self._cipher_params)
+        return mac.digest()
+
+
+def HKDF(master, key_len, salt, hashmod, num_keys=1, context=None):
+    """Derive one or more keys from a master secret using
+    the HMAC-based KDF defined in RFC5869_.
+
+    Args:
+      master (byte string):
+        The unguessable value used by the KDF to generate the other keys.
+        It must be a high-entropy secret, though not necessarily uniform.
+        It must not be a password.
+      salt (byte string):
+        A non-secret, reusable value that strengthens the randomness
+        extraction step.
+        Ideally, it is as long as the digest size of the chosen hash.
+        If empty, a string of zeroes is used.
+      key_len (integer):
+        The length in bytes of every derived key.
+      hashmod (module):
+        A cryptographic hash algorithm from :mod:`Crypto.Hash`.
+        :mod:`Crypto.Hash.SHA512` is a good choice.
+      num_keys (integer):
+        The number of keys to derive. Every key is :data:`key_len` bytes long.
+        The maximum cumulative length of all keys is
+        255 times the digest size.
+      context (byte string):
+        Optional identifier describing what the keys are used for.
+
+    Return:
+        A byte string or a tuple of byte strings.
+
+    .. _RFC5869: http://tools.ietf.org/html/rfc5869
+    """
+
+    output_len = key_len * num_keys
+    if output_len > (255 * hashmod.digest_size):
+        raise ValueError("Too much secret data to derive")
+    if not salt:
+        salt = b'\x00' * hashmod.digest_size
+    if context is None:
+        context = b""
+
+    # Step 1: extract
+    hmac = HMAC.new(salt, master, digestmod=hashmod)
+    prk = hmac.digest()
+
+    # Step 2: expand
+    t = [ b"" ]
+    n = 1
+    tlen = 0
+    while tlen < output_len:
+        hmac = HMAC.new(prk, t[-1] + context + struct.pack('B', n), digestmod=hashmod)
+        t.append(hmac.digest())
+        tlen += hashmod.digest_size
+        n += 1
+    derived_output = b"".join(t)
+    if num_keys == 1:
+        return derived_output[:key_len]
+    kol = [derived_output[idx:idx + key_len]
+           for idx in iter_range(0, output_len, key_len)]
+    return list(kol[:num_keys])
+
+
+def scrypt(password, salt, key_len, N, r, p, num_keys=1):
+    """Derive one or more keys from a passphrase.
+
+    Args:
+      password (string):
+        The secret pass phrase to generate the keys from.
+      salt (string):
+        A string to use for better protection from dictionary attacks.
+        This value does not need to be kept secret,
+        but it should be randomly chosen for each derivation.
+        It is recommended to be at least 16 bytes long.
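+        (For illustration: a suitable salt can be produced with
+        ``Crypto.Random.get_random_bytes(16)``, which this module
+        already imports.)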
+ key_len (integer): + The length in bytes of every derived key. + N (integer): + CPU/Memory cost parameter. It must be a power of 2 and less + than :math:`2^{32}`. + r (integer): + Block size parameter. + p (integer): + Parallelization parameter. + It must be no greater than :math:`(2^{32}-1)/(4r)`. + num_keys (integer): + The number of keys to derive. Every key is :data:`key_len` bytes long. + By default, only 1 key is generated. + The maximum cumulative length of all keys is :math:`(2^{32}-1)*32` + (that is, 128TB). + + A good choice of parameters *(N, r , p)* was suggested + by Colin Percival in his `presentation in 2009`__: + + - *( 2¹⁴, 8, 1 )* for interactive logins (≤100ms) + - *( 2²⁰, 8, 1 )* for file encryption (≤5s) + + Return: + A byte string or a tuple of byte strings. + + .. __: http://www.tarsnap.com/scrypt/scrypt-slides.pdf + """ + + if 2 ** (bit_size(N) - 1) != N: + raise ValueError("N must be a power of 2") + if N >= 2 ** 32: + raise ValueError("N is too big") + if p > ((2 ** 32 - 1) * 32) // (128 * r): + raise ValueError("p or r are too big") + + prf_hmac_sha256 = lambda p, s: HMAC.new(p, s, SHA256).digest() + + stage_1 = PBKDF2(password, salt, p * 128 * r, 1, prf=prf_hmac_sha256) + + scryptROMix = _raw_scrypt_lib.scryptROMix + core = _raw_salsa20_lib.Salsa20_8_core + + # Parallelize into p flows + data_out = [] + for flow in iter_range(p): + idx = flow * 128 * r + buffer_out = create_string_buffer(128 * r) + result = scryptROMix(stage_1[idx : idx + 128 * r], + buffer_out, + c_size_t(128 * r), + N, + core) + if result: + raise ValueError("Error %X while running scrypt" % result) + data_out += [ get_raw_buffer(buffer_out) ] + + dk = PBKDF2(password, + b"".join(data_out), + key_len * num_keys, 1, + prf=prf_hmac_sha256) + + if num_keys == 1: + return dk + + kol = [dk[idx:idx + key_len] + for idx in iter_range(0, key_len * num_keys, key_len)] + return kol + + +def _bcrypt_encode(data): + s = "./ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789" + + bits = [] + for c in data: + bits_c = bin(bord(c))[2:].zfill(8) + bits.append(bstr(bits_c)) + bits = b"".join(bits) + + bits6 = [ bits[idx:idx+6] for idx in range(0, len(bits), 6) ] + + result = [] + for g in bits6[:-1]: + idx = int(g, 2) + result.append(s[idx]) + + g = bits6[-1] + idx = int(g, 2) << (6 - len(g)) + result.append(s[idx]) + result = "".join(result) + + return tobytes(result) + + +def _bcrypt_decode(data): + s = "./ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789" + + bits = [] + for c in tostr(data): + idx = s.find(c) + bits6 = bin(idx)[2:].zfill(6) + bits.append(bits6) + bits = "".join(bits) + + modulo4 = len(data) % 4 + if modulo4 == 1: + raise ValueError("Incorrect length") + elif modulo4 == 2: + bits = bits[:-4] + elif modulo4 == 3: + bits = bits[:-2] + + bits8 = [ bits[idx:idx+8] for idx in range(0, len(bits), 8) ] + + result = [] + for g in bits8: + result.append(bchr(int(g, 2))) + result = b"".join(result) + + return result + + +def _bcrypt_hash(password, cost, salt, constant, invert): + from Crypto.Cipher import _EKSBlowfish + + if len(password) > 72: + raise ValueError("The password is too long. 
It must be 72 bytes at most.")
+
+    if not (4 <= cost <= 31):
+        raise ValueError("bcrypt cost factor must be in the range 4..31")
+
+    cipher = _EKSBlowfish.new(password, _EKSBlowfish.MODE_ECB, salt, cost, invert)
+    ctext = constant
+    for _ in range(64):
+        ctext = cipher.encrypt(ctext)
+    return ctext
+
+
+def bcrypt(password, cost, salt=None):
+    """Hash a password into a key, using the OpenBSD bcrypt protocol.
+
+    Args:
+        password (byte string or string):
+            The secret password or pass phrase.
+            It must be at most 72 bytes long.
+            It must not contain the zero byte.
+            Unicode strings will be encoded as UTF-8.
+        cost (integer):
+            The exponential factor that makes it slower to compute the hash.
+            It must be in the range 4 to 31.
+            A value of at least 12 is recommended.
+        salt (byte string):
+            Optional. Random byte string to thwart dictionary and rainbow table
+            attacks. It must be 16 bytes long.
+            If not passed, a random value is generated.
+
+    Return (byte string):
+        The bcrypt hash
+
+    Raises:
+        ValueError: if password is longer than 72 bytes or if it contains the zero byte
+
+    """
+
+    password = tobytes(password, "utf-8")
+
+    if password.find(bchr(0)[0]) != -1:
+        raise ValueError("The password contains the zero byte")
+
+    if len(password) < 72:
+        password += b"\x00"
+
+    if salt is None:
+        salt = get_random_bytes(16)
+    if len(salt) != 16:
+        raise ValueError("bcrypt salt must be 16 bytes long")
+
+    ctext = _bcrypt_hash(password, cost, salt, b"OrpheanBeholderScryDoubt", True)
+
+    cost_enc = b"$" + bstr(str(cost).zfill(2))
+    salt_enc = b"$" + _bcrypt_encode(salt)
+    hash_enc = _bcrypt_encode(ctext[:-1])   # only use 23 bytes, not 24
+    return b"$2a" + cost_enc + salt_enc + hash_enc
+
+
+def bcrypt_check(password, bcrypt_hash):
+    """Verify if the provided password matches the given bcrypt hash.
+
+    Args:
+        password (byte string or string):
+            The secret password or pass phrase to test.
+            It must be at most 72 bytes long.
+            It must not contain the zero byte.
+            Unicode strings will be encoded as UTF-8.
+        bcrypt_hash (byte string, bytearray):
+            The reference bcrypt hash the password needs to be checked against.
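+            (For illustration: this is the 60-byte string previously
+            returned by ``bcrypt()``, starting with ``b"$2a$"``.)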
+ + Raises: + ValueError: if the password does not match + """ + + bcrypt_hash = tobytes(bcrypt_hash) + + if len(bcrypt_hash) != 60: + raise ValueError("Incorrect length of the bcrypt hash: %d bytes instead of 60" % len(bcrypt_hash)) + + if bcrypt_hash[:4] != b'$2a$': + raise ValueError("Unsupported prefix") + + p = re.compile(br'\$2a\$([0-9][0-9])\$([A-Za-z0-9./]{22,22})([A-Za-z0-9./]{31,31})') + r = p.match(bcrypt_hash) + if not r: + raise ValueError("Incorrect bcrypt hash format") + + cost = int(r.group(1)) + if not (4 <= cost <= 31): + raise ValueError("Incorrect cost") + + salt = _bcrypt_decode(r.group(2)) + + bcrypt_hash2 = bcrypt(password, cost, salt) + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=bcrypt_hash).digest() + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=bcrypt_hash2).digest() + if mac1 != mac2: + raise ValueError("Incorrect bcrypt hash") diff --git a/env/lib/python3.8/site-packages/Crypto/Protocol/KDF.pyi b/env/lib/python3.8/site-packages/Crypto/Protocol/KDF.pyi new file mode 100644 index 00000000..fb004bf3 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Protocol/KDF.pyi @@ -0,0 +1,24 @@ +from types import ModuleType +from typing import Optional, Callable, Tuple, Union, Dict, Any + +RNG = Callable[[int], bytes] + +def PBKDF1(password: str, salt: bytes, dkLen: int, count: Optional[int]=1000, hashAlgo: Optional[ModuleType]=None) -> bytes: ... +def PBKDF2(password: str, salt: bytes, dkLen: Optional[int]=16, count: Optional[int]=1000, prf: Optional[RNG]=None, hmac_hash_module: Optional[ModuleType]=None) -> bytes: ... + +class _S2V(object): + def __init__(self, key: bytes, ciphermod: ModuleType, cipher_params: Optional[Dict[Any, Any]]=None) -> None: ... + + @staticmethod + def new(key: bytes, ciphermod: ModuleType) -> None: ... + def update(self, item: bytes) -> None: ... + def derive(self) -> bytes: ... + +def HKDF(master: bytes, key_len: int, salt: bytes, hashmod: ModuleType, num_keys: Optional[int]=1, context: Optional[bytes]=None) -> Union[bytes, Tuple[bytes, ...]]: ... + +def scrypt(password: str, salt: str, key_len: int, N: int, r: int, p: int, num_keys: Optional[int]=1) -> Union[bytes, Tuple[bytes, ...]]: ... + +def _bcrypt_decode(data: bytes) -> bytes: ... +def _bcrypt_hash(password:bytes , cost: int, salt: bytes, constant:bytes, invert:bool) -> bytes: ... +def bcrypt(password: Union[bytes, str], cost: int, salt: Optional[bytes]=None) -> bytes: ... +def bcrypt_check(password: Union[bytes, str], bcrypt_hash: Union[bytes, bytearray, str]) -> None: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Protocol/SecretSharing.py b/env/lib/python3.8/site-packages/Crypto/Protocol/SecretSharing.py new file mode 100644 index 00000000..a757e7cb --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Protocol/SecretSharing.py @@ -0,0 +1,278 @@ +# +# SecretSharing.py : distribute a secret amongst a group of participants +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+from Crypto.Util.py3compat import is_native_int
+from Crypto.Util import number
+from Crypto.Util.number import long_to_bytes, bytes_to_long
+from Crypto.Random import get_random_bytes as rng
+
+
+def _mult_gf2(f1, f2):
+    """Multiply two polynomials in GF(2)"""
+
+    # Ensure f2 is the smallest
+    if f2 > f1:
+        f1, f2 = f2, f1
+    z = 0
+    while f2:
+        if f2 & 1:
+            z ^= f1
+        f1 <<= 1
+        f2 >>= 1
+    return z
+
+
+def _div_gf2(a, b):
+    """
+    Compute division of polynomials over GF(2).
+    Given a and b, it finds two polynomials q and r such that:
+
+    a = b*q + r with deg(r) < deg(b)
+    """
+
+    if (a < b):
+        return (0, a)
+
+    deg = number.size
+    q = 0
+    r = a
+    d = deg(b)
+    while deg(r) >= d:
+        s = 1 << (deg(r) - d)
+        q ^= s
+        r ^= _mult_gf2(b, s)
+    return (q, r)
+
+
+class _Element(object):
+    """Element of GF(2^128) field"""
+
+    # The irreducible polynomial defining this field is 1+x+x^2+x^7+x^128
+    irr_poly = 1 + 2 + 4 + 128 + 2 ** 128
+
+    def __init__(self, encoded_value):
+        """Initialize the element to a certain value.
+
+        The value passed as parameter is internally encoded as
+        a 128-bit integer, where each bit represents a polynomial
+        coefficient. The LSB is the constant coefficient.
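+
+        For example, the polynomial x^2 + 1 is encoded as the integer 5
+        (binary 101).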
+ """ + + if is_native_int(encoded_value): + self._value = encoded_value + elif len(encoded_value) == 16: + self._value = bytes_to_long(encoded_value) + else: + raise ValueError("The encoded value must be an integer or a 16 byte string") + + def __eq__(self, other): + return self._value == other._value + + def __int__(self): + """Return the field element, encoded as a 128-bit integer.""" + return self._value + + def encode(self): + """Return the field element, encoded as a 16 byte string.""" + return long_to_bytes(self._value, 16) + + def __mul__(self, factor): + + f1 = self._value + f2 = factor._value + + # Make sure that f2 is the smallest, to speed up the loop + if f2 > f1: + f1, f2 = f2, f1 + + if self.irr_poly in (f1, f2): + return _Element(0) + + mask1 = 2 ** 128 + v, z = f1, 0 + while f2: + # if f2 ^ 1: z ^= v + mask2 = int(bin(f2 & 1)[2:] * 128, base=2) + z = (mask2 & (z ^ v)) | ((mask1 - mask2 - 1) & z) + v <<= 1 + # if v & mask1: v ^= self.irr_poly + mask3 = int(bin((v >> 128) & 1)[2:] * 128, base=2) + v = (mask3 & (v ^ self.irr_poly)) | ((mask1 - mask3 - 1) & v) + f2 >>= 1 + return _Element(z) + + def __add__(self, term): + return _Element(self._value ^ term._value) + + def inverse(self): + """Return the inverse of this element in GF(2^128).""" + + # We use the Extended GCD algorithm + # http://en.wikipedia.org/wiki/Polynomial_greatest_common_divisor + + if self._value == 0: + raise ValueError("Inversion of zero") + + r0, r1 = self._value, self.irr_poly + s0, s1 = 1, 0 + while r1 > 0: + q = _div_gf2(r0, r1)[0] + r0, r1 = r1, r0 ^ _mult_gf2(q, r1) + s0, s1 = s1, s0 ^ _mult_gf2(q, s1) + return _Element(s0) + + def __pow__(self, exponent): + result = _Element(self._value) + for _ in range(exponent - 1): + result = result * self + return result + + +class Shamir(object): + """Shamir's secret sharing scheme. + + A secret is split into ``n`` shares, and it is sufficient to collect + ``k`` of them to reconstruct the secret. + """ + + @staticmethod + def split(k, n, secret, ssss=False): + """Split a secret into ``n`` shares. + + The secret can be reconstructed later using just ``k`` shares + out of the original ``n``. + Each share must be kept confidential to the person it was + assigned to. + + Each share is associated to an index (starting from 1). + + Args: + k (integer): + The sufficient number of shares to reconstruct the secret (``k < n``). + n (integer): + The number of shares that this method will create. + secret (byte string): + A byte string of 16 bytes (e.g. the AES 128 key). + ssss (bool): + If ``True``, the shares can be used with the ``ssss`` utility. + Default: ``False``. + + Return (tuples): + ``n`` tuples. A tuple is meant for each participant and it contains two items: + + 1. the unique index (an integer) + 2. the share (a byte string, 16 bytes) + """ + + # + # We create a polynomial with random coefficients in GF(2^128): + # + # p(x) = \sum_{i=0}^{k-1} c_i * x^i + # + # c_0 is the encoded secret + # + + coeffs = [_Element(rng(16)) for i in range(k - 1)] + coeffs.append(_Element(secret)) + + # Each share is y_i = p(x_i) where x_i is the public index + # associated to each of the n users. 
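+        #
+        # (Annotation added for clarity; not in the original source.)
+        # make_share evaluates p(x) with Horner's rule over GF(2^128):
+        # starting from the leading coefficient, each iteration computes
+        # share = share*idx + coeff, so after k steps share equals p(idx).
+        # The last list entry is the secret, i.e. the constant term c_0.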
+ + def make_share(user, coeffs, ssss): + idx = _Element(user) + share = _Element(0) + for coeff in coeffs: + share = idx * share + coeff + if ssss: + share += _Element(user) ** len(coeffs) + return share.encode() + + return [(i, make_share(i, coeffs, ssss)) for i in range(1, n + 1)] + + @staticmethod + def combine(shares, ssss=False): + """Recombine a secret, if enough shares are presented. + + Args: + shares (tuples): + The *k* tuples, each containin the index (an integer) and + the share (a byte string, 16 bytes long) that were assigned to + a participant. + ssss (bool): + If ``True``, the shares were produced by the ``ssss`` utility. + Default: ``False``. + + Return: + The original secret, as a byte string (16 bytes long). + """ + + # + # Given k points (x,y), the interpolation polynomial of degree k-1 is: + # + # L(x) = \sum_{j=0}^{k-1} y_i * l_j(x) + # + # where: + # + # l_j(x) = \prod_{ \overset{0 \le m \le k-1}{m \ne j} } + # \frac{x - x_m}{x_j - x_m} + # + # However, in this case we are purely interested in the constant + # coefficient of L(x). + # + + k = len(shares) + + gf_shares = [] + for x in shares: + idx = _Element(x[0]) + value = _Element(x[1]) + if any(y[0] == idx for y in gf_shares): + raise ValueError("Duplicate share") + if ssss: + value += idx ** k + gf_shares.append((idx, value)) + + result = _Element(0) + for j in range(k): + x_j, y_j = gf_shares[j] + + numerator = _Element(1) + denominator = _Element(1) + + for m in range(k): + x_m = gf_shares[m][0] + if m != j: + numerator *= x_m + denominator *= x_j + x_m + result += y_j * numerator * denominator.inverse() + return result.encode() diff --git a/env/lib/python3.8/site-packages/Crypto/Protocol/SecretSharing.pyi b/env/lib/python3.8/site-packages/Crypto/Protocol/SecretSharing.pyi new file mode 100644 index 00000000..5952c993 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Protocol/SecretSharing.pyi @@ -0,0 +1,22 @@ +from typing import Union, List, Tuple, Optional + +def _mult_gf2(f1: int, f2: int) -> int : ... +def _div_gf2(a: int, b: int) -> int : ... + +class _Element(object): + irr_poly: int + def __init__(self, encoded_value: Union[int, bytes]) -> None: ... + def __eq__(self, other) -> bool: ... + def __int__(self) -> int: ... + def encode(self) -> bytes: ... + def __mul__(self, factor: int) -> _Element: ... + def __add__(self, term: _Element) -> _Element: ... + def inverse(self) -> _Element: ... + def __pow__(self, exponent) -> _Element: ... + +class Shamir(object): + @staticmethod + def split(k: int, n: int, secret: bytes, ssss: Optional[bool]) -> List[Tuple[int, bytes]]: ... + @staticmethod + def combine(shares: List[Tuple[int, bytes]], ssss: Optional[bool]) -> bytes: ... + diff --git a/env/lib/python3.8/site-packages/Crypto/Protocol/__init__.py b/env/lib/python3.8/site-packages/Crypto/Protocol/__init__.py new file mode 100644 index 00000000..efdf034f --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Protocol/__init__.py @@ -0,0 +1,31 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = ['KDF', 'SecretSharing'] diff --git a/env/lib/python3.8/site-packages/Crypto/Protocol/__init__.pyi b/env/lib/python3.8/site-packages/Crypto/Protocol/__init__.pyi new file mode 100644 index 00000000..377ed901 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Protocol/__init__.pyi @@ -0,0 +1 @@ +__all__ = ['KDF.pyi', 'SecretSharing.pyi'] diff --git a/env/lib/python3.8/site-packages/Crypto/Protocol/_scrypt.abi3.so b/env/lib/python3.8/site-packages/Crypto/Protocol/_scrypt.abi3.so new file mode 100644 index 00000000..3cf3eff9 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Protocol/_scrypt.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/DSA.py b/env/lib/python3.8/site-packages/Crypto/PublicKey/DSA.py new file mode 100644 index 00000000..4c7f47b6 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/DSA.py @@ -0,0 +1,682 @@ +# -*- coding: utf-8 -*- +# +# PublicKey/DSA.py : DSA signature primitive +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +__all__ = ['generate', 'construct', 'DsaKey', 'import_key' ] + +import binascii +import struct +import itertools + +from Crypto.Util.py3compat import bchr, bord, tobytes, tostr, iter_range + +from Crypto import Random +from Crypto.IO import PKCS8, PEM +from Crypto.Hash import SHA256 +from Crypto.Util.asn1 import ( + DerObject, DerSequence, + DerInteger, DerObjectId, + DerBitString, + ) + +from Crypto.Math.Numbers import Integer +from Crypto.Math.Primality import (test_probable_prime, COMPOSITE, + PROBABLY_PRIME) + +from Crypto.PublicKey import (_expand_subject_public_key_info, + _create_subject_public_key_info, + _extract_subject_public_key_info) + +# ; The following ASN.1 types are relevant for DSA +# +# SubjectPublicKeyInfo ::= SEQUENCE { +# algorithm AlgorithmIdentifier, +# subjectPublicKey BIT STRING +# } +# +# id-dsa ID ::= { iso(1) member-body(2) us(840) x9-57(10040) x9cm(4) 1 } +# +# ; See RFC3279 +# Dss-Parms ::= SEQUENCE { +# p INTEGER, +# q INTEGER, +# g INTEGER +# } +# +# DSAPublicKey ::= INTEGER +# +# DSSPrivatKey_OpenSSL ::= SEQUENCE +# version INTEGER, +# p INTEGER, +# q INTEGER, +# g INTEGER, +# y INTEGER, +# x INTEGER +# } +# + +class DsaKey(object): + r"""Class defining an actual DSA key. + Do not instantiate directly. + Use :func:`generate`, :func:`construct` or :func:`import_key` instead. + + :ivar p: DSA modulus + :vartype p: integer + + :ivar q: Order of the subgroup + :vartype q: integer + + :ivar g: Generator + :vartype g: integer + + :ivar y: Public key + :vartype y: integer + + :ivar x: Private key + :vartype x: integer + + :undocumented: exportKey, publickey + """ + + _keydata = ['y', 'g', 'p', 'q', 'x'] + + def __init__(self, key_dict): + input_set = set(key_dict.keys()) + public_set = set(('y' , 'g', 'p', 'q')) + if not public_set.issubset(input_set): + raise ValueError("Some DSA components are missing = %s" % + str(public_set - input_set)) + extra_set = input_set - public_set + if extra_set and extra_set != set(('x',)): + raise ValueError("Unknown DSA components = %s" % + str(extra_set - set(('x',)))) + self._key = dict(key_dict) + + def _sign(self, m, k): + if not self.has_private(): + raise TypeError("DSA public key cannot be used for signing") + if not (1 < k < self.q): + raise ValueError("k is not between 2 and q-1") + + x, q, p, g = [self._key[comp] for comp in ['x', 'q', 'p', 'g']] + + blind_factor = Integer.random_range(min_inclusive=1, + max_exclusive=q) + inv_blind_k = (blind_factor * k).inverse(q) + blind_x = x * blind_factor + + r = pow(g, k, p) % q # r = (g**k mod p) mod q + s = (inv_blind_k * (blind_factor * m + blind_x * r)) % q + return map(int, (r, s)) + + def _verify(self, m, sig): + r, s = sig + y, q, p, g = [self._key[comp] for comp in ['y', 'q', 'p', 'g']] + if not (0 < r < q) or not (0 < s < q): + return False + w = Integer(s).inverse(q) + u1 = (w * m) % q + u2 = (w * r) % q + v = (pow(g, u1, p) * pow(y, u2, p) % p) % q + return v == r + + def has_private(self): + """Whether this is a DSA private key""" + + return 'x' in self._key + + def can_encrypt(self): # legacy + return False + + def can_sign(self): # legacy + return True + + def public_key(self): + """A matching DSA public key. 
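+
+        Only the public components (*y*, *g*, *p*, *q*) are copied over;
+        the private value *x* is left out.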
+ + Returns: + a new :class:`DsaKey` object + """ + + public_components = dict((k, self._key[k]) for k in ('y', 'g', 'p', 'q')) + return DsaKey(public_components) + + def __eq__(self, other): + if bool(self.has_private()) != bool(other.has_private()): + return False + + result = True + for comp in self._keydata: + result = result and (getattr(self._key, comp, None) == + getattr(other._key, comp, None)) + return result + + def __ne__(self, other): + return not self.__eq__(other) + + def __getstate__(self): + # DSA key is not pickable + from pickle import PicklingError + raise PicklingError + + def domain(self): + """The DSA domain parameters. + + Returns + tuple : (p,q,g) + """ + + return [int(self._key[comp]) for comp in ('p', 'q', 'g')] + + def __repr__(self): + attrs = [] + for k in self._keydata: + if k == 'p': + bits = Integer(self.p).size_in_bits() + attrs.append("p(%d)" % (bits,)) + elif hasattr(self, k): + attrs.append(k) + if self.has_private(): + attrs.append("private") + # PY3K: This is meant to be text, do not change to bytes (data) + return "<%s @0x%x %s>" % (self.__class__.__name__, id(self), ",".join(attrs)) + + def __getattr__(self, item): + try: + return int(self._key[item]) + except KeyError: + raise AttributeError(item) + + def export_key(self, format='PEM', pkcs8=None, passphrase=None, + protection=None, randfunc=None): + """Export this DSA key. + + Args: + format (string): + The encoding for the output: + + - *'PEM'* (default). ASCII as per `RFC1421`_/ `RFC1423`_. + - *'DER'*. Binary ASN.1 encoding. + - *'OpenSSH'*. ASCII one-liner as per `RFC4253`_. + Only suitable for public keys, not for private keys. + + passphrase (string): + *Private keys only*. The pass phrase to protect the output. + + pkcs8 (boolean): + *Private keys only*. If ``True`` (default), the key is encoded + with `PKCS#8`_. If ``False``, it is encoded in the custom + OpenSSL/OpenSSH container. + + protection (string): + *Only in combination with a pass phrase*. + The encryption scheme to use to protect the output. + + If :data:`pkcs8` takes value ``True``, this is the PKCS#8 + algorithm to use for deriving the secret and encrypting + the private DSA key. + For a complete list of algorithms, see :mod:`Crypto.IO.PKCS8`. + The default is *PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC*. + + If :data:`pkcs8` is ``False``, the obsolete PEM encryption scheme is + used. It is based on MD5 for key derivation, and Triple DES for + encryption. Parameter :data:`protection` is then ignored. + + The combination ``format='DER'`` and ``pkcs8=False`` is not allowed + if a passphrase is present. + + randfunc (callable): + A function that returns random bytes. + By default it is :func:`Crypto.Random.get_random_bytes`. + + Returns: + byte string : the encoded key + + Raises: + ValueError : when the format is unknown or when you try to encrypt a private + key with *DER* format and OpenSSL/OpenSSH. + + .. warning:: + If you don't provide a pass phrase, the private key will be + exported in the clear! + + .. _RFC1421: http://www.ietf.org/rfc/rfc1421.txt + .. _RFC1423: http://www.ietf.org/rfc/rfc1423.txt + .. _RFC4253: http://www.ietf.org/rfc/rfc4253.txt + .. 
_`PKCS#8`: http://www.ietf.org/rfc/rfc5208.txt + """ + + if passphrase is not None: + passphrase = tobytes(passphrase) + + if randfunc is None: + randfunc = Random.get_random_bytes + + if format == 'OpenSSH': + tup1 = [self._key[x].to_bytes() for x in ('p', 'q', 'g', 'y')] + + def func(x): + if (bord(x[0]) & 0x80): + return bchr(0) + x + else: + return x + + tup2 = [func(x) for x in tup1] + keyparts = [b'ssh-dss'] + tup2 + keystring = b''.join( + [struct.pack(">I", len(kp)) + kp for kp in keyparts] + ) + return b'ssh-dss ' + binascii.b2a_base64(keystring)[:-1] + + # DER format is always used, even in case of PEM, which simply + # encodes it into BASE64. + params = DerSequence([self.p, self.q, self.g]) + if self.has_private(): + if pkcs8 is None: + pkcs8 = True + if pkcs8: + if not protection: + protection = 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC' + private_key = DerInteger(self.x).encode() + binary_key = PKCS8.wrap( + private_key, oid, passphrase, + protection, key_params=params, + randfunc=randfunc + ) + if passphrase: + key_type = 'ENCRYPTED PRIVATE' + else: + key_type = 'PRIVATE' + passphrase = None + else: + if format != 'PEM' and passphrase: + raise ValueError("DSA private key cannot be encrypted") + ints = [0, self.p, self.q, self.g, self.y, self.x] + binary_key = DerSequence(ints).encode() + key_type = "DSA PRIVATE" + else: + if pkcs8: + raise ValueError("PKCS#8 is only meaningful for private keys") + + binary_key = _create_subject_public_key_info(oid, + DerInteger(self.y), params) + key_type = "PUBLIC" + + if format == 'DER': + return binary_key + if format == 'PEM': + pem_str = PEM.encode( + binary_key, key_type + " KEY", + passphrase, randfunc + ) + return tobytes(pem_str) + raise ValueError("Unknown key format '%s'. Cannot export the DSA key." 
% format) + + # Backward-compatibility + exportKey = export_key + publickey = public_key + + # Methods defined in PyCrypto that we don't support anymore + + def sign(self, M, K): + raise NotImplementedError("Use module Crypto.Signature.DSS instead") + + def verify(self, M, signature): + raise NotImplementedError("Use module Crypto.Signature.DSS instead") + + def encrypt(self, plaintext, K): + raise NotImplementedError + + def decrypt(self, ciphertext): + raise NotImplementedError + + def blind(self, M, B): + raise NotImplementedError + + def unblind(self, M, B): + raise NotImplementedError + + def size(self): + raise NotImplementedError + + +def _generate_domain(L, randfunc): + """Generate a new set of DSA domain parameters""" + + N = { 1024:160, 2048:224, 3072:256 }.get(L) + if N is None: + raise ValueError("Invalid modulus length (%d)" % L) + + outlen = SHA256.digest_size * 8 + n = (L + outlen - 1) // outlen - 1 # ceil(L/outlen) -1 + b_ = L - 1 - (n * outlen) + + # Generate q (A.1.1.2) + q = Integer(4) + upper_bit = 1 << (N - 1) + while test_probable_prime(q, randfunc) != PROBABLY_PRIME: + seed = randfunc(64) + U = Integer.from_bytes(SHA256.new(seed).digest()) & (upper_bit - 1) + q = U | upper_bit | 1 + + assert(q.size_in_bits() == N) + + # Generate p (A.1.1.2) + offset = 1 + upper_bit = 1 << (L - 1) + while True: + V = [ SHA256.new(seed + Integer(offset + j).to_bytes()).digest() + for j in iter_range(n + 1) ] + V = [ Integer.from_bytes(v) for v in V ] + W = sum([V[i] * (1 << (i * outlen)) for i in iter_range(n)], + (V[n] & ((1 << b_) - 1)) * (1 << (n * outlen))) + + X = Integer(W + upper_bit) # 2^{L-1} < X < 2^{L} + assert(X.size_in_bits() == L) + + c = X % (q * 2) + p = X - (c - 1) # 2q divides (p-1) + if p.size_in_bits() == L and \ + test_probable_prime(p, randfunc) == PROBABLY_PRIME: + break + offset += n + 1 + + # Generate g (A.2.3, index=1) + e = (p - 1) // q + for count in itertools.count(1): + U = seed + b"ggen" + bchr(1) + Integer(count).to_bytes() + W = Integer.from_bytes(SHA256.new(U).digest()) + g = pow(W, e, p) + if g != 1: + break + + return (p, q, g, seed) + + +def generate(bits, randfunc=None, domain=None): + """Generate a new DSA key pair. + + The algorithm follows Appendix A.1/A.2 and B.1 of `FIPS 186-4`_, + respectively for domain generation and key pair generation. + + Args: + bits (integer): + Key length, or size (in bits) of the DSA modulus *p*. + It must be 1024, 2048 or 3072. + + randfunc (callable): + Random number generation function; it accepts a single integer N + and return a string of random data N bytes long. + If not specified, :func:`Crypto.Random.get_random_bytes` is used. + + domain (tuple): + The DSA domain parameters *p*, *q* and *g* as a list of 3 + integers. Size of *p* and *q* must comply to `FIPS 186-4`_. + If not specified, the parameters are created anew. + + Returns: + :class:`DsaKey` : a new DSA key object + + Raises: + ValueError : when **bits** is too little, too big, or not a multiple of 64. + + .. 
_FIPS 186-4: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf + """ + + if randfunc is None: + randfunc = Random.get_random_bytes + + if domain: + p, q, g = map(Integer, domain) + + ## Perform consistency check on domain parameters + # P and Q must be prime + fmt_error = test_probable_prime(p) == COMPOSITE + fmt_error |= test_probable_prime(q) == COMPOSITE + # Verify Lagrange's theorem for sub-group + fmt_error |= ((p - 1) % q) != 0 + fmt_error |= g <= 1 or g >= p + fmt_error |= pow(g, q, p) != 1 + if fmt_error: + raise ValueError("Invalid DSA domain parameters") + else: + p, q, g, _ = _generate_domain(bits, randfunc) + + L = p.size_in_bits() + N = q.size_in_bits() + + if L != bits: + raise ValueError("Mismatch between size of modulus (%d)" + " and 'bits' parameter (%d)" % (L, bits)) + + if (L, N) not in [(1024, 160), (2048, 224), + (2048, 256), (3072, 256)]: + raise ValueError("Lengths of p and q (%d, %d) are not compatible" + "to FIPS 186-3" % (L, N)) + + if not 1 < g < p: + raise ValueError("Incorrent DSA generator") + + # B.1.1 + c = Integer.random(exact_bits=N + 64, randfunc=randfunc) + x = c % (q - 1) + 1 # 1 <= x <= q-1 + y = pow(g, x, p) + + key_dict = { 'y':y, 'g':g, 'p':p, 'q':q, 'x':x } + return DsaKey(key_dict) + + +def construct(tup, consistency_check=True): + """Construct a DSA key from a tuple of valid DSA components. + + Args: + tup (tuple): + A tuple of long integers, with 4 or 5 items + in the following order: + + 1. Public key (*y*). + 2. Sub-group generator (*g*). + 3. Modulus, finite field order (*p*). + 4. Sub-group order (*q*). + 5. Private key (*x*). Optional. + + consistency_check (boolean): + If ``True``, the library will verify that the provided components + fulfil the main DSA properties. + + Raises: + ValueError: when the key being imported fails the most basic DSA validity checks. 
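+            (For instance, with ``consistency_check=True``, a composite
+            *p* or *q*, or a generator outside ``1 < g < p``, is rejected.)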
+ + Returns: + :class:`DsaKey` : a DSA key object + """ + + key_dict = dict(zip(('y', 'g', 'p', 'q', 'x'), map(Integer, tup))) + key = DsaKey(key_dict) + + fmt_error = False + if consistency_check: + # P and Q must be prime + fmt_error = test_probable_prime(key.p) == COMPOSITE + fmt_error |= test_probable_prime(key.q) == COMPOSITE + # Verify Lagrange's theorem for sub-group + fmt_error |= ((key.p - 1) % key.q) != 0 + fmt_error |= key.g <= 1 or key.g >= key.p + fmt_error |= pow(key.g, key.q, key.p) != 1 + # Public key + fmt_error |= key.y <= 0 or key.y >= key.p + if hasattr(key, 'x'): + fmt_error |= key.x <= 0 or key.x >= key.q + fmt_error |= pow(key.g, key.x, key.p) != key.y + + if fmt_error: + raise ValueError("Invalid DSA key components") + + return key + + +# Dss-Parms ::= SEQUENCE { +# p OCTET STRING, +# q OCTET STRING, +# g OCTET STRING +# } +# DSAPublicKey ::= INTEGER -- public key, y + +def _import_openssl_private(encoded, passphrase, params): + if params: + raise ValueError("DSA private key already comes with parameters") + der = DerSequence().decode(encoded, nr_elements=6, only_ints_expected=True) + if der[0] != 0: + raise ValueError("No version found") + tup = [der[comp] for comp in (4, 3, 1, 2, 5)] + return construct(tup) + + +def _import_subjectPublicKeyInfo(encoded, passphrase, params): + + algoid, encoded_key, emb_params = _expand_subject_public_key_info(encoded) + if algoid != oid: + raise ValueError("No DSA subjectPublicKeyInfo") + if params and emb_params: + raise ValueError("Too many DSA parameters") + + y = DerInteger().decode(encoded_key).value + p, q, g = list(DerSequence().decode(params or emb_params)) + tup = (y, g, p, q) + return construct(tup) + + +def _import_x509_cert(encoded, passphrase, params): + + sp_info = _extract_subject_public_key_info(encoded) + return _import_subjectPublicKeyInfo(sp_info, None, params) + + +def _import_pkcs8(encoded, passphrase, params): + if params: + raise ValueError("PKCS#8 already includes parameters") + k = PKCS8.unwrap(encoded, passphrase) + if k[0] != oid: + raise ValueError("No PKCS#8 encoded DSA key") + x = DerInteger().decode(k[1]).value + p, q, g = list(DerSequence().decode(k[2])) + tup = (pow(g, x, p), g, p, q, x) + return construct(tup) + + +def _import_key_der(key_data, passphrase, params): + """Import a DSA key (public or private half), encoded in DER form.""" + + decodings = (_import_openssl_private, + _import_subjectPublicKeyInfo, + _import_x509_cert, + _import_pkcs8) + + for decoding in decodings: + try: + return decoding(key_data, passphrase, params) + except ValueError: + pass + + raise ValueError("DSA key format is not supported") + + +def import_key(extern_key, passphrase=None): + """Import a DSA key. + + Args: + extern_key (string or byte string): + The DSA key to import. + + The following formats are supported for a DSA **public** key: + + - X.509 certificate (binary DER or PEM) + - X.509 ``subjectPublicKeyInfo`` (binary DER or PEM) + - OpenSSH (ASCII one-liner, see `RFC4253`_) + + The following formats are supported for a DSA **private** key: + + - `PKCS#8`_ ``PrivateKeyInfo`` or ``EncryptedPrivateKeyInfo`` + DER SEQUENCE (binary or PEM) + - OpenSSL/OpenSSH custom format (binary or PEM) + + For details about the PEM encoding, see `RFC1421`_/`RFC1423`_. + + passphrase (string): + In case of an encrypted private key, this is the pass phrase + from which the decryption key is derived. + + Encryption may be applied either at the `PKCS#8`_ or at the PEM level. 
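+            (For illustration: a private key exported with ``export_key``
+            and a passphrase must be re-imported with that same passphrase.)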
+ + Returns: + :class:`DsaKey` : a DSA key object + + Raises: + ValueError : when the given key cannot be parsed (possibly because + the pass phrase is wrong). + + .. _RFC1421: http://www.ietf.org/rfc/rfc1421.txt + .. _RFC1423: http://www.ietf.org/rfc/rfc1423.txt + .. _RFC4253: http://www.ietf.org/rfc/rfc4253.txt + .. _PKCS#8: http://www.ietf.org/rfc/rfc5208.txt + """ + + extern_key = tobytes(extern_key) + if passphrase is not None: + passphrase = tobytes(passphrase) + + if extern_key.startswith(b'-----'): + # This is probably a PEM encoded key + (der, marker, enc_flag) = PEM.decode(tostr(extern_key), passphrase) + if enc_flag: + passphrase = None + return _import_key_der(der, passphrase, None) + + if extern_key.startswith(b'ssh-dss '): + # This is probably a public OpenSSH key + keystring = binascii.a2b_base64(extern_key.split(b' ')[1]) + keyparts = [] + while len(keystring) > 4: + length = struct.unpack(">I", keystring[:4])[0] + keyparts.append(keystring[4:4 + length]) + keystring = keystring[4 + length:] + if keyparts[0] == b"ssh-dss": + tup = [Integer.from_bytes(keyparts[x]) for x in (4, 3, 1, 2)] + return construct(tup) + + if len(extern_key) > 0 and bord(extern_key[0]) == 0x30: + # This is probably a DER encoded key + return _import_key_der(extern_key, passphrase, None) + + raise ValueError("DSA key format is not supported") + + +# Backward compatibility +importKey = import_key + +#: `Object ID`_ for a DSA key. +#: +#: id-dsa ID ::= { iso(1) member-body(2) us(840) x9-57(10040) x9cm(4) 1 } +#: +#: .. _`Object ID`: http://www.alvestrand.no/objectid/1.2.840.10040.4.1.html +oid = "1.2.840.10040.4.1" diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/DSA.pyi b/env/lib/python3.8/site-packages/Crypto/PublicKey/DSA.pyi new file mode 100644 index 00000000..354ac1f0 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/DSA.pyi @@ -0,0 +1,31 @@ +from typing import Dict, Tuple, Callable, Union, Optional + +__all__ = ['generate', 'construct', 'DsaKey', 'import_key' ] + +RNG = Callable[[int], bytes] + +class DsaKey(object): + def __init__(self, key_dict: Dict[str, int]) -> None: ... + def has_private(self) -> bool: ... + def can_encrypt(self) -> bool: ... # legacy + def can_sign(self) -> bool: ... # legacy + def public_key(self) -> DsaKey: ... + def __eq__(self, other: object) -> bool: ... + def __ne__(self, other: object) -> bool: ... + def __getstate__(self) -> None: ... + def domain(self) -> Tuple[int, int, int]: ... + def __repr__(self) -> str: ... + def __getattr__(self, item: str) -> int: ... + def export_key(self, format: Optional[str]="PEM", pkcs8: Optional[bool]=None, passphrase: Optional[str]=None, + protection: Optional[str]=None, randfunc: Optional[RNG]=None) -> bytes: ... + # Backward-compatibility + exportKey = export_key + publickey = public_key + +def generate(bits: int, randfunc: Optional[RNG]=None, domain: Optional[Tuple[int, int, int]]=None) -> DsaKey: ... +def construct(tup: Union[Tuple[int, int, int, int], Tuple[int, int, int, int, int]], consistency_check: Optional[bool]=True) -> DsaKey: ... +def import_key(extern_key: Union[str, bytes], passphrase: Optional[str]=None) -> DsaKey: ... 
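+
+# Illustrative usage sketch (added for clarity; not part of the original
+# stubs). The round-trip implied by the signatures above:
+#
+#     from Crypto.PublicKey import DSA
+#     key = DSA.generate(2048)
+#     pub = DSA.import_key(key.public_key().export_key(format="PEM"))
+#     assert not pub.has_private()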
+# Backward compatibility +importKey = import_key + +oid: str diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/ECC.py b/env/lib/python3.8/site-packages/Crypto/PublicKey/ECC.py new file mode 100644 index 00000000..0b605c49 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/ECC.py @@ -0,0 +1,1794 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from __future__ import print_function + +import re +import struct +import binascii +from collections import namedtuple + +from Crypto.Util.py3compat import bord, tobytes, tostr, bchr, is_string +from Crypto.Util.number import bytes_to_long, long_to_bytes + +from Crypto.Math.Numbers import Integer +from Crypto.Util.asn1 import (DerObjectId, DerOctetString, DerSequence, + DerBitString) + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, + SmartPointer, c_size_t, c_uint8_ptr, + c_ulonglong, null_pointer) + +from Crypto.PublicKey import (_expand_subject_public_key_info, + _create_subject_public_key_info, + _extract_subject_public_key_info) + +from Crypto.Hash import SHA512, SHAKE256 + +from Crypto.Random import get_random_bytes +from Crypto.Random.random import getrandbits + + +_ec_lib = load_pycryptodome_raw_lib("Crypto.PublicKey._ec_ws", """ +typedef void EcContext; +typedef void EcPoint; +int ec_ws_new_context(EcContext **pec_ctx, + const uint8_t *modulus, + const uint8_t *b, + const uint8_t *order, + size_t len, + uint64_t seed); +void ec_free_context(EcContext *ec_ctx); +int ec_ws_new_point(EcPoint **pecp, + const uint8_t *x, + const uint8_t *y, + size_t len, + const EcContext *ec_ctx); +void ec_ws_free_point(EcPoint *ecp); +int ec_ws_get_xy(uint8_t *x, + uint8_t *y, + size_t len, + const EcPoint *ecp); +int ec_ws_double(EcPoint *p); +int ec_ws_add(EcPoint *ecpa, EcPoint *ecpb); +int ec_ws_scalar(EcPoint *ecp, + const uint8_t *k, + size_t len, + uint64_t seed); +int ec_ws_clone(EcPoint **pecp2, const EcPoint *ecp); +int ec_ws_cmp(const EcPoint *ecp1, const EcPoint *ecp2); +int ec_ws_neg(EcPoint *p); +""") + +_ed25519_lib = 
load_pycryptodome_raw_lib("Crypto.PublicKey._ed25519", """ +typedef void Point; +int ed25519_new_point(Point **out, + const uint8_t x[32], + const uint8_t y[32], + size_t modsize, + const void *context); +int ed25519_clone(Point **P, const Point *Q); +void ed25519_free_point(Point *p); +int ed25519_cmp(const Point *p1, const Point *p2); +int ed25519_neg(Point *p); +int ed25519_get_xy(uint8_t *xb, uint8_t *yb, size_t modsize, Point *p); +int ed25519_double(Point *p); +int ed25519_add(Point *P1, const Point *P2); +int ed25519_scalar(Point *P, uint8_t *scalar, size_t scalar_len, uint64_t seed); +""") + +_ed448_lib = load_pycryptodome_raw_lib("Crypto.PublicKey._ed448", """ +typedef void EcContext; +typedef void PointEd448; +int ed448_new_context(EcContext **pec_ctx); +void ed448_context(EcContext *ec_ctx); +void ed448_free_context(EcContext *ec_ctx); +int ed448_new_point(PointEd448 **out, + const uint8_t x[56], + const uint8_t y[56], + size_t len, + const EcContext *context); +int ed448_clone(PointEd448 **P, const PointEd448 *Q); +void ed448_free_point(PointEd448 *p); +int ed448_cmp(const PointEd448 *p1, const PointEd448 *p2); +int ed448_neg(PointEd448 *p); +int ed448_get_xy(uint8_t *xb, uint8_t *yb, size_t len, const PointEd448 *p); +int ed448_double(PointEd448 *p); +int ed448_add(PointEd448 *P1, const PointEd448 *P2); +int ed448_scalar(PointEd448 *P, const uint8_t *scalar, size_t scalar_len, uint64_t seed); +""") + + +def lib_func(ecc_obj, func_name): + if ecc_obj._curve.desc == "Ed25519": + result = getattr(_ed25519_lib, "ed25519_" + func_name) + elif ecc_obj._curve.desc == "Ed448": + result = getattr(_ed448_lib, "ed448_" + func_name) + else: + result = getattr(_ec_lib, "ec_ws_" + func_name) + return result + +# +# _curves is a database of curve parameters. Items are indexed by their +# human-friendly name, suchas "P-256". 
Each item has the following fields: +# - p: the prime number that defines the finite field for all modulo operations +# - b: the constant in the Short Weierstrass curve equation +# - order: the number of elements in the group with the generator below +# - Gx the affine coordinate X of the generator point +# - Gy the affine coordinate Y of the generator point +# - G the generator, as an EccPoint object +# - modulus_bits the minimum number of bits for encoding the modulus p +# - oid an ASCII string with the registered ASN.1 Object ID +# - context a raw pointer to memory holding a context for all curve operations (can be NULL) +# - desc an ASCII string describing the curve +# - openssh the ASCII string used in OpenSSH id files for public keys on this curve +# - name the ASCII string which is also a valid key in _curves + + +_Curve = namedtuple("_Curve", "p b order Gx Gy G modulus_bits oid context desc openssh name") +_curves = {} + + +p192_names = ["p192", "NIST P-192", "P-192", "prime192v1", "secp192r1", + "nistp192"] + + +def init_p192(): + p = 0xfffffffffffffffffffffffffffffffeffffffffffffffff + b = 0x64210519e59c80e70fa7e9ab72243049feb8deecc146b9b1 + order = 0xffffffffffffffffffffffff99def836146bc9b1b4d22831 + Gx = 0x188da80eb03090f67cbf20eb43a18800f4ff0afd82ff1012 + Gy = 0x07192b95ffc8da78631011ed6b24cdd573f977a11e794811 + + p192_modulus = long_to_bytes(p, 24) + p192_b = long_to_bytes(b, 24) + p192_order = long_to_bytes(order, 24) + + ec_p192_context = VoidPointer() + result = _ec_lib.ec_ws_new_context(ec_p192_context.address_of(), + c_uint8_ptr(p192_modulus), + c_uint8_ptr(p192_b), + c_uint8_ptr(p192_order), + c_size_t(len(p192_modulus)), + c_ulonglong(getrandbits(64)) + ) + if result: + raise ImportError("Error %d initializing P-192 context" % result) + + context = SmartPointer(ec_p192_context.get(), _ec_lib.ec_free_context) + p192 = _Curve(Integer(p), + Integer(b), + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 192, + "1.2.840.10045.3.1.1", # ANSI X9.62 / SEC2 + context, + "NIST P-192", + "ecdsa-sha2-nistp192", + "p192") + global p192_names + _curves.update(dict.fromkeys(p192_names, p192)) + + +init_p192() +del init_p192 + + +p224_names = ["p224", "NIST P-224", "P-224", "prime224v1", "secp224r1", + "nistp224"] + + +def init_p224(): + p = 0xffffffffffffffffffffffffffffffff000000000000000000000001 + b = 0xb4050a850c04b3abf54132565044b0b7d7bfd8ba270b39432355ffb4 + order = 0xffffffffffffffffffffffffffff16a2e0b8f03e13dd29455c5c2a3d + Gx = 0xb70e0cbd6bb4bf7f321390b94a03c1d356c21122343280d6115c1d21 + Gy = 0xbd376388b5f723fb4c22dfe6cd4375a05a07476444d5819985007e34 + + p224_modulus = long_to_bytes(p, 28) + p224_b = long_to_bytes(b, 28) + p224_order = long_to_bytes(order, 28) + + ec_p224_context = VoidPointer() + result = _ec_lib.ec_ws_new_context(ec_p224_context.address_of(), + c_uint8_ptr(p224_modulus), + c_uint8_ptr(p224_b), + c_uint8_ptr(p224_order), + c_size_t(len(p224_modulus)), + c_ulonglong(getrandbits(64)) + ) + if result: + raise ImportError("Error %d initializing P-224 context" % result) + + context = SmartPointer(ec_p224_context.get(), _ec_lib.ec_free_context) + p224 = _Curve(Integer(p), + Integer(b), + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 224, + "1.3.132.0.33", # SEC 2 + context, + "NIST P-224", + "ecdsa-sha2-nistp224", + "p224") + global p224_names + _curves.update(dict.fromkeys(p224_names, p224)) + + +init_p224() +del init_p224 + + +p256_names = ["p256", "NIST P-256", "P-256", "prime256v1", "secp256r1", + "nistp256"] + + +def init_p256(): + p = 
0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff + b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b + order = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551 + Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296 + Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5 + + p256_modulus = long_to_bytes(p, 32) + p256_b = long_to_bytes(b, 32) + p256_order = long_to_bytes(order, 32) + + ec_p256_context = VoidPointer() + result = _ec_lib.ec_ws_new_context(ec_p256_context.address_of(), + c_uint8_ptr(p256_modulus), + c_uint8_ptr(p256_b), + c_uint8_ptr(p256_order), + c_size_t(len(p256_modulus)), + c_ulonglong(getrandbits(64)) + ) + if result: + raise ImportError("Error %d initializing P-256 context" % result) + + context = SmartPointer(ec_p256_context.get(), _ec_lib.ec_free_context) + p256 = _Curve(Integer(p), + Integer(b), + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 256, + "1.2.840.10045.3.1.7", # ANSI X9.62 / SEC2 + context, + "NIST P-256", + "ecdsa-sha2-nistp256", + "p256") + global p256_names + _curves.update(dict.fromkeys(p256_names, p256)) + + +init_p256() +del init_p256 + + +p384_names = ["p384", "NIST P-384", "P-384", "prime384v1", "secp384r1", + "nistp384"] + + +def init_p384(): + p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffff0000000000000000ffffffff + b = 0xb3312fa7e23ee7e4988e056be3f82d19181d9c6efe8141120314088f5013875ac656398d8a2ed19d2a85c8edd3ec2aef + order = 0xffffffffffffffffffffffffffffffffffffffffffffffffc7634d81f4372ddf581a0db248b0a77aecec196accc52973 + Gx = 0xaa87ca22be8b05378eb1c71ef320ad746e1d3b628ba79b9859f741e082542a385502f25dbf55296c3a545e3872760aB7 + Gy = 0x3617de4a96262c6f5d9e98bf9292dc29f8f41dbd289a147ce9da3113b5f0b8c00a60b1ce1d7e819d7a431d7c90ea0e5F + + p384_modulus = long_to_bytes(p, 48) + p384_b = long_to_bytes(b, 48) + p384_order = long_to_bytes(order, 48) + + ec_p384_context = VoidPointer() + result = _ec_lib.ec_ws_new_context(ec_p384_context.address_of(), + c_uint8_ptr(p384_modulus), + c_uint8_ptr(p384_b), + c_uint8_ptr(p384_order), + c_size_t(len(p384_modulus)), + c_ulonglong(getrandbits(64)) + ) + if result: + raise ImportError("Error %d initializing P-384 context" % result) + + context = SmartPointer(ec_p384_context.get(), _ec_lib.ec_free_context) + p384 = _Curve(Integer(p), + Integer(b), + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 384, + "1.3.132.0.34", # SEC 2 + context, + "NIST P-384", + "ecdsa-sha2-nistp384", + "p384") + global p384_names + _curves.update(dict.fromkeys(p384_names, p384)) + + +init_p384() +del init_p384 + + +p521_names = ["p521", "NIST P-521", "P-521", "prime521v1", "secp521r1", + "nistp521"] + + +def init_p521(): + p = 0x000001ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff + b = 0x00000051953eb9618e1c9a1f929a21a0b68540eea2da725b99b315f3b8b489918ef109e156193951ec7e937b1652c0bd3bb1bf073573df883d2c34f1ef451fd46b503f00 + order = 0x000001fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffa51868783bf2f966b7fcc0148f709a5d03bb5c9b8899c47aebb6fb71e91386409 + Gx = 0x000000c6858e06b70404e9cd9e3ecb662395b4429c648139053fb521f828af606b4d3dbaa14b5e77efe75928fe1dc127a2ffa8de3348b3c1856a429bf97e7e31c2e5bd66 + Gy = 0x0000011839296a789a3bc0045c8a5fb42c7d1bd998f54449579b446817afbd17273e662c97ee72995ef42640c550b9013fad0761353c7086a272c24088be94769fd16650 + + p521_modulus = long_to_bytes(p, 66) + p521_b = 
long_to_bytes(b, 66) + p521_order = long_to_bytes(order, 66) + + ec_p521_context = VoidPointer() + result = _ec_lib.ec_ws_new_context(ec_p521_context.address_of(), + c_uint8_ptr(p521_modulus), + c_uint8_ptr(p521_b), + c_uint8_ptr(p521_order), + c_size_t(len(p521_modulus)), + c_ulonglong(getrandbits(64)) + ) + if result: + raise ImportError("Error %d initializing P-521 context" % result) + + context = SmartPointer(ec_p521_context.get(), _ec_lib.ec_free_context) + p521 = _Curve(Integer(p), + Integer(b), + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 521, + "1.3.132.0.35", # SEC 2 + context, + "NIST P-521", + "ecdsa-sha2-nistp521", + "p521") + global p521_names + _curves.update(dict.fromkeys(p521_names, p521)) + + +init_p521() +del init_p521 + + +ed25519_names = ["ed25519", "Ed25519"] + + +def init_ed25519(): + p = 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed # 2**255 - 19 + order = 0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed + Gx = 0x216936d3cd6e53fec0a4e231fdd6dc5c692cc7609525a7b2c9562d608f25d51a + Gy = 0x6666666666666666666666666666666666666666666666666666666666666658 + + ed25519 = _Curve(Integer(p), + None, + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 255, + "1.3.101.112", # RFC8410 + None, + "Ed25519", # Used throughout; do not change + "ssh-ed25519", + "ed25519") + global ed25519_names + _curves.update(dict.fromkeys(ed25519_names, ed25519)) + + +init_ed25519() +del init_ed25519 + + +ed448_names = ["ed448", "Ed448"] + + +def init_ed448(): + p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffffffffffffffffffffffffffffffffffffffffffffffffffff # 2**448 - 2**224 - 1 + order = 0x3fffffffffffffffffffffffffffffffffffffffffffffffffffffff7cca23e9c44edb49aed63690216cc2728dc58f552378c292ab5844f3 + Gx = 0x4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e + Gy = 0x693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14 + + ed448_context = VoidPointer() + result = _ed448_lib.ed448_new_context(ed448_context.address_of()) + if result: + raise ImportError("Error %d initializing Ed448 context" % result) + + context = SmartPointer(ed448_context.get(), _ed448_lib.ed448_free_context) + + ed448 = _Curve(Integer(p), + None, + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 448, + "1.3.101.113", # RFC8410 + context, + "Ed448", # Used throughout; do not change + None, + "ed448") + global ed448_names + _curves.update(dict.fromkeys(ed448_names, ed448)) + + +init_ed448() +del init_ed448 + + +class UnsupportedEccFeature(ValueError): + pass + + +class EccPoint(object): + """A class to model a point on an Elliptic Curve. 
+ + The class supports operators for: + + * Adding two points: ``R = S + T`` + * In-place addition: ``S += T`` + * Negating a point: ``R = -T`` + * Comparing two points: ``if S == T: ...`` or ``if S != T: ...`` + * Multiplying a point by a scalar: ``R = S*k`` + * In-place multiplication by a scalar: ``T *= k`` + + :ivar x: The affine X-coordinate of the ECC point + :vartype x: integer + + :ivar y: The affine Y-coordinate of the ECC point + :vartype y: integer + + :ivar xy: The tuple with affine X- and Y- coordinates + """ + + def __init__(self, x, y, curve="p256"): + + try: + self._curve = _curves[curve] + except KeyError: + raise ValueError("Unknown curve name %s" % str(curve)) + self._curve_name = curve + + modulus_bytes = self.size_in_bytes() + + xb = long_to_bytes(x, modulus_bytes) + yb = long_to_bytes(y, modulus_bytes) + if len(xb) != modulus_bytes or len(yb) != modulus_bytes: + raise ValueError("Incorrect coordinate length") + + new_point = lib_func(self, "new_point") + free_func = lib_func(self, "free_point") + + self._point = VoidPointer() + try: + context = self._curve.context.get() + except AttributeError: + context = null_pointer + result = new_point(self._point.address_of(), + c_uint8_ptr(xb), + c_uint8_ptr(yb), + c_size_t(modulus_bytes), + context) + + if result: + if result == 15: + raise ValueError("The EC point does not belong to the curve") + raise ValueError("Error %d while instantiating an EC point" % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the EC point + self._point = SmartPointer(self._point.get(), free_func) + + def set(self, point): + clone = lib_func(self, "clone") + free_func = lib_func(self, "free_point") + + self._point = VoidPointer() + result = clone(self._point.address_of(), + point._point.get()) + + if result: + raise ValueError("Error %d while cloning an EC point" % result) + + self._point = SmartPointer(self._point.get(), free_func) + return self + + def __eq__(self, point): + cmp_func = lib_func(self, "cmp") + return 0 == cmp_func(self._point.get(), point._point.get()) + + # Only needed for Python 2 + def __ne__(self, point): + return not self == point + + def __neg__(self): + neg_func = lib_func(self, "neg") + np = self.copy() + result = neg_func(np._point.get()) + if result: + raise ValueError("Error %d while inverting an EC point" % result) + return np + + def copy(self): + """Return a copy of this point.""" + x, y = self.xy + np = EccPoint(x, y, self._curve_name) + return np + + def _is_eddsa(self): + return self._curve.name in ("ed25519", "ed448") + + def is_point_at_infinity(self): + """``True`` if this is the *point-at-infinity*.""" + + if self._is_eddsa(): + return self.x == 0 + else: + return self.xy == (0, 0) + + def point_at_infinity(self): + """Return the *point-at-infinity* for the curve.""" + + if self._is_eddsa(): + return EccPoint(0, 1, self._curve_name) + else: + return EccPoint(0, 0, self._curve_name) + + @property + def x(self): + return self.xy[0] + + @property + def y(self): + return self.xy[1] + + @property + def xy(self): + modulus_bytes = self.size_in_bytes() + xb = bytearray(modulus_bytes) + yb = bytearray(modulus_bytes) + get_xy = lib_func(self, "get_xy") + result = get_xy(c_uint8_ptr(xb), + c_uint8_ptr(yb), + c_size_t(modulus_bytes), + self._point.get()) + if result: + raise ValueError("Error %d while encoding an EC point" % result) + + return (Integer(bytes_to_long(xb)), Integer(bytes_to_long(yb))) + + def size_in_bytes(self): + 
"""Size of each coordinate, in bytes.""" + return (self.size_in_bits() + 7) // 8 + + def size_in_bits(self): + """Size of each coordinate, in bits.""" + return self._curve.modulus_bits + + def double(self): + """Double this point (in-place operation). + + Returns: + This same object (to enable chaining). + """ + + double_func = lib_func(self, "double") + result = double_func(self._point.get()) + if result: + raise ValueError("Error %d while doubling an EC point" % result) + return self + + def __iadd__(self, point): + """Add a second point to this one""" + + add_func = lib_func(self, "add") + result = add_func(self._point.get(), point._point.get()) + if result: + if result == 16: + raise ValueError("EC points are not on the same curve") + raise ValueError("Error %d while adding two EC points" % result) + return self + + def __add__(self, point): + """Return a new point, the addition of this one and another""" + + np = self.copy() + np += point + return np + + def __imul__(self, scalar): + """Multiply this point by a scalar""" + + scalar_func = lib_func(self, "scalar") + if scalar < 0: + raise ValueError("Scalar multiplication is only defined for non-negative integers") + sb = long_to_bytes(scalar) + result = scalar_func(self._point.get(), + c_uint8_ptr(sb), + c_size_t(len(sb)), + c_ulonglong(getrandbits(64))) + if result: + raise ValueError("Error %d during scalar multiplication" % result) + return self + + def __mul__(self, scalar): + """Return a new point, the scalar product of this one""" + + np = self.copy() + np *= scalar + return np + + def __rmul__(self, left_hand): + return self.__mul__(left_hand) + + +# Last piece of initialization +p192_G = EccPoint(_curves['p192'].Gx, _curves['p192'].Gy, "p192") +p192 = _curves['p192']._replace(G=p192_G) +_curves.update(dict.fromkeys(p192_names, p192)) +del p192_G, p192, p192_names + +p224_G = EccPoint(_curves['p224'].Gx, _curves['p224'].Gy, "p224") +p224 = _curves['p224']._replace(G=p224_G) +_curves.update(dict.fromkeys(p224_names, p224)) +del p224_G, p224, p224_names + +p256_G = EccPoint(_curves['p256'].Gx, _curves['p256'].Gy, "p256") +p256 = _curves['p256']._replace(G=p256_G) +_curves.update(dict.fromkeys(p256_names, p256)) +del p256_G, p256, p256_names + +p384_G = EccPoint(_curves['p384'].Gx, _curves['p384'].Gy, "p384") +p384 = _curves['p384']._replace(G=p384_G) +_curves.update(dict.fromkeys(p384_names, p384)) +del p384_G, p384, p384_names + +p521_G = EccPoint(_curves['p521'].Gx, _curves['p521'].Gy, "p521") +p521 = _curves['p521']._replace(G=p521_G) +_curves.update(dict.fromkeys(p521_names, p521)) +del p521_G, p521, p521_names + +ed25519_G = EccPoint(_curves['Ed25519'].Gx, _curves['Ed25519'].Gy, "Ed25519") +ed25519 = _curves['Ed25519']._replace(G=ed25519_G) +_curves.update(dict.fromkeys(ed25519_names, ed25519)) +del ed25519_G, ed25519, ed25519_names + +ed448_G = EccPoint(_curves['Ed448'].Gx, _curves['Ed448'].Gy, "Ed448") +ed448 = _curves['Ed448']._replace(G=ed448_G) +_curves.update(dict.fromkeys(ed448_names, ed448)) +del ed448_G, ed448, ed448_names + + +class EccKey(object): + r"""Class defining an ECC key. + Do not instantiate directly. + Use :func:`generate`, :func:`construct` or :func:`import_key` instead. + + :ivar curve: The name of the curve as defined in the `ECC table`_. + :vartype curve: string + + :ivar pointQ: an ECC point representating the public component. + :vartype pointQ: :class:`EccPoint` + + :ivar d: A scalar that represents the private component + in NIST P curves. It is smaller than the + order of the generator point. 
+    :vartype d: integer
+
+    :ivar seed: A seed that represents the private component
+                in EdDSA curves
+                (Ed25519, 32 bytes; Ed448, 57 bytes).
+    :vartype seed: bytes
+    """
+
+    def __init__(self, **kwargs):
+        """Create a new ECC key
+
+        Keywords:
+          curve : string
+            The name of the curve.
+          d : integer
+            Mandatory for a private key on NIST P curves.
+            It must be in the range ``[1..order-1]``.
+          seed : bytes
+            Mandatory for a private key on the Ed25519 (32 bytes)
+            or Ed448 (57 bytes) curve.
+          point : EccPoint
+            Mandatory for a public key. If provided for a private key,
+            the implementation will NOT check whether it matches ``d``.
+
+          Only one parameter among ``d``, ``seed`` or ``point`` may be used.
+        """
+
+        kwargs_ = dict(kwargs)
+        curve_name = kwargs_.pop("curve", None)
+        self._d = kwargs_.pop("d", None)
+        self._seed = kwargs_.pop("seed", None)
+        self._point = kwargs_.pop("point", None)
+        if curve_name is None and self._point:
+            curve_name = self._point._curve_name
+        if kwargs_:
+            raise TypeError("Unknown parameters: " + str(kwargs_))
+
+        if curve_name not in _curves:
+            raise ValueError("Unsupported curve (%s)" % curve_name)
+        self._curve = _curves[curve_name]
+        self.curve = curve_name
+
+        count = int(self._d is not None) + int(self._seed is not None)
+
+        if count == 0:
+            if self._point is None:
+                raise ValueError("At least one of the parameters 'point', 'd' or 'seed' must be specified")
+            return
+
+        if count == 2:
+            raise ValueError("Parameters d and seed are mutually exclusive")
+
+        # NIST P curves work with d, EdDSA works with seed
+
+        if not self._is_eddsa():
+            if self._seed is not None:
+                raise ValueError("Parameter 'seed' can only be used with Ed25519 or Ed448")
+            self._d = Integer(self._d)
+            if not 1 <= self._d < self._curve.order:
+                raise ValueError("Parameter d must be an integer smaller than the curve order")
+        else:
+            if self._d is not None:
+                raise ValueError("Parameter d can only be used with NIST P curves")
+            # RFC 8032, 5.1.5
+            if self._curve.name == "ed25519":
+                if len(self._seed) != 32:
+                    raise ValueError("Parameter seed must be 32 bytes long for Ed25519")
+                seed_hash = SHA512.new(self._seed).digest()  # h
+                self._prefix = seed_hash[32:]
+                tmp = bytearray(seed_hash[:32])
+                tmp[0] &= 0xF8
+                tmp[31] = (tmp[31] & 0x7F) | 0x40
+            # RFC 8032, 5.2.5
+            elif self._curve.name == "ed448":
+                if len(self._seed) != 57:
+                    raise ValueError("Parameter seed must be 57 bytes long for Ed448")
+                seed_hash = SHAKE256.new(self._seed).read(114)  # h
+                self._prefix = seed_hash[57:]
+                tmp = bytearray(seed_hash[:57])
+                tmp[0] &= 0xFC
+                tmp[55] |= 0x80
+                tmp[56] = 0
+            self._d = Integer.from_bytes(tmp, byteorder='little')
+
+    def _is_eddsa(self):
+        return self._curve.desc in ("Ed25519", "Ed448")
+
+    def __eq__(self, other):
+        if other.has_private() != self.has_private():
+            return False
+
+        return other.pointQ == self.pointQ
+
+    def __repr__(self):
+        if self.has_private():
+            if self._is_eddsa():
+                extra = ", seed=%s" % self._seed.hex()
+            else:
+                extra = ", d=%d" % int(self._d)
+        else:
+            extra = ""
+        x, y = self.pointQ.xy
+        return "EccKey(curve='%s', point_x=%d, point_y=%d%s)" % (self._curve.desc, x, y, extra)
+
+    def has_private(self):
+        """``True`` if this key can be used for making signatures or decrypting data."""
+
+        return self._d is not None
+
+    # ECDSA
+    def _sign(self, z, k):
+        assert 0 < k < self._curve.order
+
+        order = self._curve.order
+        blind = Integer.random_range(min_inclusive=1,
+                                     max_exclusive=order)
+
+        blind_d = self._d * blind
+        inv_blind_k = (blind * k).inverse(order)
+
+        r = 
(self._curve.G * k).x % order + s = inv_blind_k * (blind * z + blind_d * r) % order + return (r, s) + + # ECDSA + def _verify(self, z, rs): + order = self._curve.order + sinv = rs[1].inverse(order) + point1 = self._curve.G * ((sinv * z) % order) + point2 = self.pointQ * ((sinv * rs[0]) % order) + return (point1 + point2).x == rs[0] + + @property + def d(self): + if not self.has_private(): + raise ValueError("This is not a private ECC key") + return self._d + + @property + def seed(self): + if not self.has_private(): + raise ValueError("This is not a private ECC key") + return self._seed + + @property + def pointQ(self): + if self._point is None: + self._point = self._curve.G * self._d + return self._point + + def public_key(self): + """A matching ECC public key. + + Returns: + a new :class:`EccKey` object + """ + + return EccKey(curve=self._curve.desc, point=self.pointQ) + + def _export_SEC1(self, compress): + if self._is_eddsa(): + raise ValueError("SEC1 format is unsupported for EdDSA curves") + + # See 2.2 in RFC5480 and 2.3.3 in SEC1 + # + # The first byte is: + # - 0x02: compressed, only X-coordinate, Y-coordinate is even + # - 0x03: compressed, only X-coordinate, Y-coordinate is odd + # - 0x04: uncompressed, X-coordinate is followed by Y-coordinate + # + # PAI is in theory encoded as 0x00. + + modulus_bytes = self.pointQ.size_in_bytes() + + if compress: + if self.pointQ.y.is_odd(): + first_byte = b'\x03' + else: + first_byte = b'\x02' + public_key = (first_byte + + self.pointQ.x.to_bytes(modulus_bytes)) + else: + public_key = (b'\x04' + + self.pointQ.x.to_bytes(modulus_bytes) + + self.pointQ.y.to_bytes(modulus_bytes)) + return public_key + + def _export_eddsa(self): + x, y = self.pointQ.xy + if self._curve.name == "ed25519": + result = bytearray(y.to_bytes(32, byteorder='little')) + result[31] = ((x & 1) << 7) | result[31] + elif self._curve.name == "ed448": + result = bytearray(y.to_bytes(57, byteorder='little')) + result[56] = (x & 1) << 7 + else: + raise ValueError("Not an EdDSA key to export") + return bytes(result) + + def _export_subjectPublicKeyInfo(self, compress): + if self._is_eddsa(): + oid = self._curve.oid + public_key = self._export_eddsa() + params = None + else: + oid = "1.2.840.10045.2.1" # unrestricted + public_key = self._export_SEC1(compress) + params = DerObjectId(self._curve.oid) + + return _create_subject_public_key_info(oid, + public_key, + params) + + def _export_rfc5915_private_der(self, include_ec_params=True): + + assert self.has_private() + + # ECPrivateKey ::= SEQUENCE { + # version INTEGER { ecPrivkeyVer1(1) } (ecPrivkeyVer1), + # privateKey OCTET STRING, + # parameters [0] ECParameters {{ NamedCurve }} OPTIONAL, + # publicKey [1] BIT STRING OPTIONAL + # } + + # Public key - uncompressed form + modulus_bytes = self.pointQ.size_in_bytes() + public_key = (b'\x04' + + self.pointQ.x.to_bytes(modulus_bytes) + + self.pointQ.y.to_bytes(modulus_bytes)) + + seq = [1, + DerOctetString(self.d.to_bytes(modulus_bytes)), + DerObjectId(self._curve.oid, explicit=0), + DerBitString(public_key, explicit=1)] + + if not include_ec_params: + del seq[2] + + return DerSequence(seq).encode() + + def _export_pkcs8(self, **kwargs): + from Crypto.IO import PKCS8 + + if kwargs.get('passphrase', None) is not None and 'protection' not in kwargs: + raise ValueError("At least the 'protection' parameter should be present") + + if self._is_eddsa(): + oid = self._curve.oid + private_key = DerOctetString(self._seed).encode() + params = None + else: + oid = "1.2.840.10045.2.1" # 
unrestricted + private_key = self._export_rfc5915_private_der(include_ec_params=False) + params = DerObjectId(self._curve.oid) + + result = PKCS8.wrap(private_key, + oid, + key_params=params, + **kwargs) + return result + + def _export_public_pem(self, compress): + from Crypto.IO import PEM + + encoded_der = self._export_subjectPublicKeyInfo(compress) + return PEM.encode(encoded_der, "PUBLIC KEY") + + def _export_private_pem(self, passphrase, **kwargs): + from Crypto.IO import PEM + + encoded_der = self._export_rfc5915_private_der() + return PEM.encode(encoded_der, "EC PRIVATE KEY", passphrase, **kwargs) + + def _export_private_clear_pkcs8_in_clear_pem(self): + from Crypto.IO import PEM + + encoded_der = self._export_pkcs8() + return PEM.encode(encoded_der, "PRIVATE KEY") + + def _export_private_encrypted_pkcs8_in_clear_pem(self, passphrase, **kwargs): + from Crypto.IO import PEM + + assert passphrase + if 'protection' not in kwargs: + raise ValueError("At least the 'protection' parameter should be present") + encoded_der = self._export_pkcs8(passphrase=passphrase, **kwargs) + return PEM.encode(encoded_der, "ENCRYPTED PRIVATE KEY") + + def _export_openssh(self, compress): + if self.has_private(): + raise ValueError("Cannot export OpenSSH private keys") + + desc = self._curve.openssh + + if desc is None: + raise ValueError("Cannot export %s keys as OpenSSH" % self._curve.name) + elif desc == "ssh-ed25519": + public_key = self._export_eddsa() + comps = (tobytes(desc), tobytes(public_key)) + else: + modulus_bytes = self.pointQ.size_in_bytes() + + if compress: + first_byte = 2 + self.pointQ.y.is_odd() + public_key = (bchr(first_byte) + + self.pointQ.x.to_bytes(modulus_bytes)) + else: + public_key = (b'\x04' + + self.pointQ.x.to_bytes(modulus_bytes) + + self.pointQ.y.to_bytes(modulus_bytes)) + + middle = desc.split("-")[2] + comps = (tobytes(desc), tobytes(middle), public_key) + + blob = b"".join([struct.pack(">I", len(x)) + x for x in comps]) + return desc + " " + tostr(binascii.b2a_base64(blob)) + + def export_key(self, **kwargs): + """Export this ECC key. + + Args: + format (string): + The format to use for encoding the key: + + - ``'DER'``. The key will be encoded in ASN.1 DER format (binary). + For a public key, the ASN.1 ``subjectPublicKeyInfo`` structure + defined in `RFC5480`_ will be used. + For a private key, the ASN.1 ``ECPrivateKey`` structure defined + in `RFC5915`_ is used instead (possibly within a PKCS#8 envelope, + see the ``use_pkcs8`` flag below). + - ``'PEM'``. The key will be encoded in a PEM_ envelope (ASCII). + - ``'OpenSSH'``. The key will be encoded in the OpenSSH_ format + (ASCII, public keys only). + - ``'SEC1'``. The public key (i.e., the EC point) will be encoded + into ``bytes`` according to Section 2.3.3 of `SEC1`_ + (which is a subset of the older X9.62 ITU standard). + Only for NIST P-curves. + - ``'raw'``. The public key will be encoded as ``bytes``, + without any metadata. + + * For NIST P-curves: equivalent to ``'SEC1'``. + * For EdDSA curves: ``bytes`` in the format defined in `RFC8032`_. + + passphrase (byte string or string): + The passphrase to use for protecting the private key. + + use_pkcs8 (boolean): + Only relevant for private keys. + + If ``True`` (default and recommended), the `PKCS#8`_ representation + will be used. It must be ``True`` for EdDSA curves. 
+ + protection (string): + When a private key is exported with password-protection + and PKCS#8 (both ``DER`` and ``PEM`` formats), this parameter MUST be + present and be a valid algorithm supported by :mod:`Crypto.IO.PKCS8`. + It is recommended to use ``PBKDF2WithHMAC-SHA1AndAES128-CBC``. + + compress (boolean): + If ``True``, the method returns a more compact representation + of the public key, with the X-coordinate only. + + If ``False`` (default), the method returns the full public key. + + This parameter is ignored for EdDSA curves, as compression is + mandatory. + + .. warning:: + If you don't provide a passphrase, the private key will be + exported in the clear! + + .. note:: + When exporting a private key with password-protection and `PKCS#8`_ + (both ``DER`` and ``PEM`` formats), any extra parameters + to ``export_key()`` will be passed to :mod:`Crypto.IO.PKCS8`. + + .. _PEM: http://www.ietf.org/rfc/rfc1421.txt + .. _`PEM encryption`: http://www.ietf.org/rfc/rfc1423.txt + .. _OpenSSH: http://www.openssh.com/txt/rfc5656.txt + .. _RFC5480: https://tools.ietf.org/html/rfc5480 + .. _SEC1: https://www.secg.org/sec1-v2.pdf + + Returns: + A multi-line string (for ``'PEM'`` and ``'OpenSSH'``) or + ``bytes`` (for ``'DER'``, ``'SEC1'``, and ``'raw'``) with the encoded key. + """ + + args = kwargs.copy() + ext_format = args.pop("format") + if ext_format not in ("PEM", "DER", "OpenSSH", "SEC1", "raw"): + raise ValueError("Unknown format '%s'" % ext_format) + + compress = args.pop("compress", False) + + if self.has_private(): + passphrase = args.pop("passphrase", None) + if is_string(passphrase): + passphrase = tobytes(passphrase) + if not passphrase: + raise ValueError("Empty passphrase") + use_pkcs8 = args.pop("use_pkcs8", True) + + if not use_pkcs8 and self._is_eddsa(): + raise ValueError("'pkcs8' must be True for EdDSA curves") + + if ext_format == "PEM": + if use_pkcs8: + if passphrase: + return self._export_private_encrypted_pkcs8_in_clear_pem(passphrase, **args) + else: + return self._export_private_clear_pkcs8_in_clear_pem() + else: + return self._export_private_pem(passphrase, **args) + elif ext_format == "DER": + # DER + if passphrase and not use_pkcs8: + raise ValueError("Private keys can only be encrpyted with DER using PKCS#8") + if use_pkcs8: + return self._export_pkcs8(passphrase=passphrase, **args) + else: + return self._export_rfc5915_private_der() + else: + raise ValueError("Private keys cannot be exported " + "in the '%s' format" % ext_format) + else: # Public key + if args: + raise ValueError("Unexpected parameters: '%s'" % args) + if ext_format == "PEM": + return self._export_public_pem(compress) + elif ext_format == "DER": + return self._export_subjectPublicKeyInfo(compress) + elif ext_format == "SEC1": + return self._export_SEC1(compress) + elif ext_format == "raw": + if self._curve.name in ('ed25519', 'ed448'): + return self._export_eddsa() + else: + return self._export_SEC1(compress) + else: + return self._export_openssh(compress) + + +def generate(**kwargs): + """Generate a new private key on the given curve. + + Args: + + curve (string): + Mandatory. It must be a curve name defined in the `ECC table`_. + + randfunc (callable): + Optional. The RNG to read randomness from. + If ``None``, :func:`Crypto.Random.get_random_bytes` is used. 
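+
+    .. note::
+        A minimal usage sketch (illustrative only; it assumes this module is
+        importable as ``Crypto.PublicKey.ECC``)::
+
+            from Crypto.PublicKey import ECC
+
+            key = ECC.generate(curve='P-256')    # new NIST P-256 private key
+            pub = key.public_key()               # matching public key
+            pem = key.export_key(format='PEM')   # clear-text PKCS#8 PEM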
+ """ + + curve_name = kwargs.pop("curve") + curve = _curves[curve_name] + randfunc = kwargs.pop("randfunc", get_random_bytes) + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + if _curves[curve_name].name == "ed25519": + seed = randfunc(32) + new_key = EccKey(curve=curve_name, seed=seed) + elif _curves[curve_name].name == "ed448": + seed = randfunc(57) + new_key = EccKey(curve=curve_name, seed=seed) + else: + d = Integer.random_range(min_inclusive=1, + max_exclusive=curve.order, + randfunc=randfunc) + new_key = EccKey(curve=curve_name, d=d) + + return new_key + + +def construct(**kwargs): + """Build a new ECC key (private or public) starting + from some base components. + + In most cases, you will already have an existing key + which you can read in with :func:`import_key` instead + of this function. + + Args: + curve (string): + Mandatory. The name of the elliptic curve, as defined in the `ECC table`_. + + d (integer): + Mandatory for a private key and a NIST P-curve (e.g., P-256): + the integer in the range ``[1..order-1]`` that represents the key. + + seed (bytes): + Mandatory for a private key and an EdDSA curve. + It must be 32 bytes for Ed25519, and 57 bytes for Ed448. + + point_x (integer): + Mandatory for a public key: the X coordinate (affine) of the ECC point. + + point_y (integer): + Mandatory for a public key: the Y coordinate (affine) of the ECC point. + + Returns: + :class:`EccKey` : a new ECC key object + """ + + curve_name = kwargs["curve"] + curve = _curves[curve_name] + point_x = kwargs.pop("point_x", None) + point_y = kwargs.pop("point_y", None) + + if "point" in kwargs: + raise TypeError("Unknown keyword: point") + + if None not in (point_x, point_y): + # ValueError is raised if the point is not on the curve + kwargs["point"] = EccPoint(point_x, point_y, curve_name) + + new_key = EccKey(**kwargs) + + # Validate that the private key matches the public one + # because EccKey will not do that automatically + if new_key.has_private() and 'point' in kwargs: + pub_key = curve.G * new_key.d + if pub_key.xy != (point_x, point_y): + raise ValueError("Private and public ECC keys do not match") + + return new_key + + +def _import_public_der(ec_point, curve_oid=None, curve_name=None): + """Convert an encoded EC point into an EccKey object + + ec_point: byte string with the EC point (SEC1-encoded) + curve_oid: string with the name the curve + curve_name: string with the OID of the curve + + Either curve_id or curve_name must be specified + + """ + + for _curve_name, curve in _curves.items(): + if curve_oid and curve.oid == curve_oid: + break + if curve_name == _curve_name: + break + else: + if curve_oid: + raise UnsupportedEccFeature("Unsupported ECC curve (OID: %s)" % curve_oid) + else: + raise UnsupportedEccFeature("Unsupported ECC curve (%s)" % curve_name) + + # See 2.2 in RFC5480 and 2.3.3 in SEC1 + # The first byte is: + # - 0x02: compressed, only X-coordinate, Y-coordinate is even + # - 0x03: compressed, only X-coordinate, Y-coordinate is odd + # - 0x04: uncompressed, X-coordinate is followed by Y-coordinate + # + # PAI is in theory encoded as 0x00. 
+ + modulus_bytes = curve.p.size_in_bytes() + point_type = bord(ec_point[0]) + + # Uncompressed point + if point_type == 0x04: + if len(ec_point) != (1 + 2 * modulus_bytes): + raise ValueError("Incorrect EC point length") + x = Integer.from_bytes(ec_point[1:modulus_bytes+1]) + y = Integer.from_bytes(ec_point[modulus_bytes+1:]) + # Compressed point + elif point_type in (0x02, 0x03): + if len(ec_point) != (1 + modulus_bytes): + raise ValueError("Incorrect EC point length") + x = Integer.from_bytes(ec_point[1:]) + # Right now, we only support Short Weierstrass curves + y = (x**3 - x*3 + curve.b).sqrt(curve.p) + if point_type == 0x02 and y.is_odd(): + y = curve.p - y + if point_type == 0x03 and y.is_even(): + y = curve.p - y + else: + raise ValueError("Incorrect EC point encoding") + + return construct(curve=_curve_name, point_x=x, point_y=y) + + +def _import_subjectPublicKeyInfo(encoded, *kwargs): + """Convert a subjectPublicKeyInfo into an EccKey object""" + + # See RFC5480 + + # Parse the generic subjectPublicKeyInfo structure + oid, ec_point, params = _expand_subject_public_key_info(encoded) + + nist_p_oids = ( + "1.2.840.10045.2.1", # id-ecPublicKey (unrestricted) + "1.3.132.1.12", # id-ecDH + "1.3.132.1.13" # id-ecMQV + ) + eddsa_oids = { + "1.3.101.112": ("Ed25519", _import_ed25519_public_key), # id-Ed25519 + "1.3.101.113": ("Ed448", _import_ed448_public_key) # id-Ed448 + } + + if oid in nist_p_oids: + # See RFC5480 + + # Parameters are mandatory and encoded as ECParameters + # ECParameters ::= CHOICE { + # namedCurve OBJECT IDENTIFIER + # -- implicitCurve NULL + # -- specifiedCurve SpecifiedECDomain + # } + # implicitCurve and specifiedCurve are not supported (as per RFC) + if not params: + raise ValueError("Missing ECC parameters for ECC OID %s" % oid) + try: + curve_oid = DerObjectId().decode(params).value + except ValueError: + raise ValueError("Error decoding namedCurve") + + # ECPoint ::= OCTET STRING + return _import_public_der(ec_point, curve_oid=curve_oid) + + elif oid in eddsa_oids: + # See RFC8410 + curve_name, import_eddsa_public_key = eddsa_oids[oid] + + # Parameters must be absent + if params: + raise ValueError("Unexpected ECC parameters for ECC OID %s" % oid) + + x, y = import_eddsa_public_key(ec_point) + return construct(point_x=x, point_y=y, curve=curve_name) + else: + raise UnsupportedEccFeature("Unsupported ECC OID: %s" % oid) + + +def _import_rfc5915_der(encoded, passphrase, curve_oid=None): + + # See RFC5915 https://tools.ietf.org/html/rfc5915 + # + # ECPrivateKey ::= SEQUENCE { + # version INTEGER { ecPrivkeyVer1(1) } (ecPrivkeyVer1), + # privateKey OCTET STRING, + # parameters [0] ECParameters {{ NamedCurve }} OPTIONAL, + # publicKey [1] BIT STRING OPTIONAL + # } + + private_key = DerSequence().decode(encoded, nr_elements=(3, 4)) + if private_key[0] != 1: + raise ValueError("Incorrect ECC private key version") + + try: + parameters = DerObjectId(explicit=0).decode(private_key[2]).value + if curve_oid is not None and parameters != curve_oid: + raise ValueError("Curve mismatch") + curve_oid = parameters + except ValueError: + pass + + if curve_oid is None: + raise ValueError("No curve found") + + for curve_name, curve in _curves.items(): + if curve.oid == curve_oid: + break + else: + raise UnsupportedEccFeature("Unsupported ECC curve (OID: %s)" % curve_oid) + + scalar_bytes = DerOctetString().decode(private_key[1]).payload + modulus_bytes = curve.p.size_in_bytes() + if len(scalar_bytes) != modulus_bytes: + raise ValueError("Private key is too small") + d = 
Integer.from_bytes(scalar_bytes) + + # Decode public key (if any) + if len(private_key) == 4: + public_key_enc = DerBitString(explicit=1).decode(private_key[3]).value + public_key = _import_public_der(public_key_enc, curve_oid=curve_oid) + point_x = public_key.pointQ.x + point_y = public_key.pointQ.y + else: + point_x = point_y = None + + return construct(curve=curve_name, d=d, point_x=point_x, point_y=point_y) + + +def _import_pkcs8(encoded, passphrase): + from Crypto.IO import PKCS8 + + algo_oid, private_key, params = PKCS8.unwrap(encoded, passphrase) + + nist_p_oids = ( + "1.2.840.10045.2.1", # id-ecPublicKey (unrestricted) + "1.3.132.1.12", # id-ecDH + "1.3.132.1.13" # id-ecMQV + ) + eddsa_oids = { + "1.3.101.112": "Ed25519", # id-Ed25519 + "1.3.101.113": "Ed448", # id-Ed448 + } + + if algo_oid in nist_p_oids: + curve_oid = DerObjectId().decode(params).value + return _import_rfc5915_der(private_key, passphrase, curve_oid) + elif algo_oid in eddsa_oids: + if params is not None: + raise ValueError("EdDSA ECC private key must not have parameters") + curve_oid = None + seed = DerOctetString().decode(private_key).payload + return construct(curve=eddsa_oids[algo_oid], seed=seed) + else: + raise UnsupportedEccFeature("Unsupported ECC purpose (OID: %s)" % algo_oid) + + +def _import_x509_cert(encoded, *kwargs): + + sp_info = _extract_subject_public_key_info(encoded) + return _import_subjectPublicKeyInfo(sp_info) + + +def _import_der(encoded, passphrase): + + try: + return _import_subjectPublicKeyInfo(encoded, passphrase) + except UnsupportedEccFeature as err: + raise err + except (ValueError, TypeError, IndexError): + pass + + try: + return _import_x509_cert(encoded, passphrase) + except UnsupportedEccFeature as err: + raise err + except (ValueError, TypeError, IndexError): + pass + + try: + return _import_rfc5915_der(encoded, passphrase) + except UnsupportedEccFeature as err: + raise err + except (ValueError, TypeError, IndexError): + pass + + try: + return _import_pkcs8(encoded, passphrase) + except UnsupportedEccFeature as err: + raise err + except (ValueError, TypeError, IndexError): + pass + + raise ValueError("Not an ECC DER key") + + +def _import_openssh_public(encoded): + parts = encoded.split(b' ') + if len(parts) not in (2, 3): + raise ValueError("Not an openssh public key") + + try: + keystring = binascii.a2b_base64(parts[1]) + + keyparts = [] + while len(keystring) > 4: + lk = struct.unpack(">I", keystring[:4])[0] + keyparts.append(keystring[4:4 + lk]) + keystring = keystring[4 + lk:] + + if parts[0] != keyparts[0]: + raise ValueError("Mismatch in openssh public key") + + # NIST P curves + if parts[0].startswith(b"ecdsa-sha2-"): + + for curve_name, curve in _curves.items(): + if curve.openssh is None: + continue + if not curve.openssh.startswith("ecdsa-sha2"): + continue + middle = tobytes(curve.openssh.split("-")[2]) + if keyparts[1] == middle: + break + else: + raise ValueError("Unsupported ECC curve: " + middle) + + ecc_key = _import_public_der(keyparts[2], curve_oid=curve.oid) + + # EdDSA + elif parts[0] == b"ssh-ed25519": + x, y = _import_ed25519_public_key(keyparts[1]) + ecc_key = construct(curve="Ed25519", point_x=x, point_y=y) + else: + raise ValueError("Unsupported SSH key type: " + parts[0]) + + except (IndexError, TypeError, binascii.Error): + raise ValueError("Error parsing SSH key type: " + parts[0]) + + return ecc_key + + +def _import_openssh_private_ecc(data, password): + + from ._openssh import (import_openssh_private_generic, + read_bytes, read_string, 
check_padding)
+
+    key_type, decrypted = import_openssh_private_generic(data, password)
+
+    eddsa_keys = {
+        "ssh-ed25519": ("Ed25519", _import_ed25519_public_key, 32),
+    }
+
+    # https://datatracker.ietf.org/doc/html/draft-miller-ssh-agent-04
+    if key_type.startswith("ecdsa-sha2"):
+
+        ecdsa_curve_name, decrypted = read_string(decrypted)
+        if ecdsa_curve_name not in _curves:
+            raise UnsupportedEccFeature("Unsupported ECC curve %s" % ecdsa_curve_name)
+        curve = _curves[ecdsa_curve_name]
+        modulus_bytes = (curve.modulus_bits + 7) // 8
+
+        public_key, decrypted = read_bytes(decrypted)
+
+        if bord(public_key[0]) != 4:
+            raise ValueError("Only uncompressed OpenSSH EC keys are supported")
+        if len(public_key) != 2 * modulus_bytes + 1:
+            raise ValueError("Incorrect public key length")
+
+        point_x = Integer.from_bytes(public_key[1:1+modulus_bytes])
+        point_y = Integer.from_bytes(public_key[1+modulus_bytes:])
+
+        private_key, decrypted = read_bytes(decrypted)
+        d = Integer.from_bytes(private_key)
+
+        params = {'d': d, 'curve': ecdsa_curve_name}
+
+    elif key_type in eddsa_keys:
+
+        curve_name, import_eddsa_public_key, seed_len = eddsa_keys[key_type]
+
+        public_key, decrypted = read_bytes(decrypted)
+        point_x, point_y = import_eddsa_public_key(public_key)
+
+        private_public_key, decrypted = read_bytes(decrypted)
+        seed = private_public_key[:seed_len]
+
+        params = {'seed': seed, 'curve': curve_name}
+    else:
+        raise ValueError("Unsupported SSH agent key type: " + key_type)
+
+    _, padded = read_string(decrypted)  # Comment
+    check_padding(padded)
+
+    return construct(point_x=point_x, point_y=point_y, **params)
+
+
+def _import_ed25519_public_key(encoded):
+    """Import an Ed25519 ECC public key, encoded as raw bytes as described
+    in RFC8032_.
+
+    Args:
+      encoded (bytes):
+        The Ed25519 public key to import. It must be 32 bytes long.
+
+    Returns:
+        The tuple with the affine X and Y coordinates of the public point.
+
+    Raises:
+      ValueError: when the given key cannot be parsed.
+
+    .. _RFC8032: https://datatracker.ietf.org/doc/html/rfc8032
+    """
+
+    if len(encoded) != 32:
+        raise ValueError("Incorrect length. Only Ed25519 public keys are supported.")
+
+    p = Integer(0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed)  # 2**255 - 19
+    d = 37095705934669439343138083508754565189542113879843219016388785533085940283555
+
+    y = bytearray(encoded)
+    x_lsb = y[31] >> 7
+    y[31] &= 0x7F
+    point_y = Integer.from_bytes(y, byteorder='little')
+    if point_y >= p:
+        raise ValueError("Invalid Ed25519 key (y)")
+    if point_y == 1:
+        return 0, 1
+
+    u = (point_y**2 - 1) % p
+    v = ((point_y**2 % p) * d + 1) % p
+    try:
+        v_inv = v.inverse(p)
+        x2 = (u * v_inv) % p
+        point_x = Integer._tonelli_shanks(x2, p)
+        if (point_x & 1) != x_lsb:
+            point_x = p - point_x
+    except ValueError:
+        raise ValueError("Invalid Ed25519 public key")
+    return point_x, point_y
+
+
+def _import_ed448_public_key(encoded):
+    """Import an Ed448 ECC public key, encoded as raw bytes as described
+    in RFC8032_.
+
+    Args:
+      encoded (bytes):
+        The Ed448 public key to import. It must be 57 bytes long.
+
+    Returns:
+        The tuple with the affine X and Y coordinates of the public point.
+
+    Raises:
+      ValueError: when the given key cannot be parsed.
+
+    .. _RFC8032: https://datatracker.ietf.org/doc/html/rfc8032
+    """
+
+    if len(encoded) != 57:
+        raise ValueError("Incorrect length. 
Only Ed448 public keys are supported.") + + p = Integer(0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffffffffffffffffffffffffffffffffffffffffffffffffffff) # 2**448 - 2**224 - 1 + d = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffffffffffffffffffffffffffffffffffffffffffffffff6756 + + y = encoded[:56] + x_lsb = bord(encoded[56]) >> 7 + point_y = Integer.from_bytes(y, byteorder='little') + if point_y >= p: + raise ValueError("Invalid Ed448 key (y)") + if point_y == 1: + return 0, 1 + + u = (point_y**2 - 1) % p + v = ((point_y**2 % p) * d - 1) % p + try: + v_inv = v.inverse(p) + x2 = (u * v_inv) % p + point_x = Integer._tonelli_shanks(x2, p) + if (point_x & 1) != x_lsb: + point_x = p - point_x + except ValueError: + raise ValueError("Invalid Ed448 public key") + return point_x, point_y + + +def import_key(encoded, passphrase=None, curve_name=None): + """Import an ECC key (public or private). + + Args: + encoded (bytes or multi-line string): + The ECC key to import. + The function will try to automatically detect the right format. + + Supported formats for an ECC **public** key: + + * X.509 certificate: binary (DER) or ASCII (PEM). + * X.509 ``subjectPublicKeyInfo``: binary (DER) or ASCII (PEM). + * SEC1_ (or X9.62), as ``bytes``. NIST P curves only. + You must also provide the ``curve_name`` (with a value from the `ECC table`_) + * OpenSSH line, defined in RFC5656_ and RFC8709_ (ASCII). + This is normally the content of files like ``~/.ssh/id_ecdsa.pub``. + + Supported formats for an ECC **private** key: + + * A binary ``ECPrivateKey`` structure, as defined in `RFC5915`_ (DER). + NIST P curves only. + * A `PKCS#8`_ structure (or the more recent Asymmetric Key Package, RFC5958_): binary (DER) or ASCII (PEM). + * `OpenSSH 6.5`_ and newer versions (ASCII). + + Private keys can be in the clear or password-protected. + + For details about the PEM encoding, see `RFC1421`_/`RFC1423`_. + + passphrase (byte string): + The passphrase to use for decrypting a private key. + Encryption may be applied protected at the PEM level (not recommended) + or at the PKCS#8 level (recommended). + This parameter is ignored if the key in input is not encrypted. + + curve_name (string): + For a SEC1 encoding only. This is the name of the curve, + as defined in the `ECC table`_. + + .. note:: + + To import EdDSA private and public keys, when encoded as raw ``bytes``, use: + + * :func:`Crypto.Signature.eddsa.import_public_key`, or + * :func:`Crypto.Signature.eddsa.import_private_key`. + + Returns: + :class:`EccKey` : a new ECC key object + + Raises: + ValueError: when the given key cannot be parsed (possibly because + the pass phrase is wrong). + + .. _RFC1421: https://datatracker.ietf.org/doc/html/rfc1421 + .. _RFC1423: https://datatracker.ietf.org/doc/html/rfc1423 + .. _RFC5915: https://datatracker.ietf.org/doc/html/rfc5915 + .. _RFC5656: https://datatracker.ietf.org/doc/html/rfc5656 + .. _RFC8709: https://datatracker.ietf.org/doc/html/rfc8709 + .. _RFC5958: https://datatracker.ietf.org/doc/html/rfc5958 + .. _`PKCS#8`: https://datatracker.ietf.org/doc/html/rfc5208 + .. _`OpenSSH 6.5`: https://flak.tedunangst.com/post/new-openssh-key-format-and-bcrypt-pbkdf + .. 
_SEC1: https://www.secg.org/sec1-v2.pdf + """ + + from Crypto.IO import PEM + + encoded = tobytes(encoded) + if passphrase is not None: + passphrase = tobytes(passphrase) + + # PEM + if encoded.startswith(b'-----BEGIN OPENSSH PRIVATE KEY'): + text_encoded = tostr(encoded) + openssh_encoded, marker, enc_flag = PEM.decode(text_encoded, passphrase) + result = _import_openssh_private_ecc(openssh_encoded, passphrase) + return result + + elif encoded.startswith(b'-----'): + + text_encoded = tostr(encoded) + + # Remove any EC PARAMETERS section + # Ignore its content because the curve type must be already given in the key + ecparams_start = "-----BEGIN EC PARAMETERS-----" + ecparams_end = "-----END EC PARAMETERS-----" + text_encoded = re.sub(ecparams_start + ".*?" + ecparams_end, "", + text_encoded, + flags=re.DOTALL) + + der_encoded, marker, enc_flag = PEM.decode(text_encoded, passphrase) + if enc_flag: + passphrase = None + try: + result = _import_der(der_encoded, passphrase) + except UnsupportedEccFeature as uef: + raise uef + except ValueError: + raise ValueError("Invalid DER encoding inside the PEM file") + return result + + # OpenSSH + if encoded.startswith((b'ecdsa-sha2-', b'ssh-ed25519')): + return _import_openssh_public(encoded) + + # DER + if len(encoded) > 0 and bord(encoded[0]) == 0x30: + return _import_der(encoded, passphrase) + + # SEC1 + if len(encoded) > 0 and bord(encoded[0]) in b'\x02\x03\x04': + if curve_name is None: + raise ValueError("No curve name was provided") + return _import_public_der(encoded, curve_name=curve_name) + + raise ValueError("ECC key format is not supported") + + +if __name__ == "__main__": + + import time + + d = 0xc51e4753afdec1e6b6c6a5b992f43f8dd0c7a8933072708b6522468b2ffb06fd + + point = _curves['p256'].G.copy() + count = 3000 + + start = time.time() + for x in range(count): + pointX = point * d + print("(P-256 G)", (time.time() - start) / count * 1000, "ms") + + start = time.time() + for x in range(count): + pointX = pointX * d + print("(P-256 arbitrary point)", (time.time() - start) / count * 1000, "ms") diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/ECC.pyi b/env/lib/python3.8/site-packages/Crypto/PublicKey/ECC.pyi new file mode 100644 index 00000000..89f5a13f --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/ECC.pyi @@ -0,0 +1,66 @@ +from typing import Union, Callable, Optional, NamedTuple, List, Tuple, Dict, NamedTuple, Any + +from Crypto.Math.Numbers import Integer + +RNG = Callable[[int], bytes] + +class UnsupportedEccFeature(ValueError): ... +class EccPoint(object): + def __init__(self, x: Union[int, Integer], y: Union[int, Integer], curve: Optional[str] = ...) -> None: ... + def set(self, point: EccPoint) -> EccPoint: ... + def __eq__(self, point: object) -> bool: ... + def __neg__(self) -> EccPoint: ... + def copy(self) -> EccPoint: ... + def is_point_at_infinity(self) -> bool: ... + def point_at_infinity(self) -> EccPoint: ... + @property + def x(self) -> int: ... + @property + def y(self) -> int: ... + @property + def xy(self) -> Tuple[int, int]: ... + def size_in_bytes(self) -> int: ... + def size_in_bits(self) -> int: ... + def double(self) -> EccPoint: ... + def __iadd__(self, point: EccPoint) -> EccPoint: ... + def __add__(self, point: EccPoint) -> EccPoint: ... + def __imul__(self, scalar: int) -> EccPoint: ... + def __mul__(self, scalar: int) -> EccPoint: ... + +class EccKey(object): + curve: str + def __init__(self, *, curve: str = ..., d: int = ..., point: EccPoint = ...) -> None: ... 
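+    # Note: at runtime ``EccKey.__init__`` also accepts a ``seed: bytes``
+    # keyword for the EdDSA curves (Ed25519, Ed448); this stub omits it.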
+ def __eq__(self, other: object) -> bool: ... + def __repr__(self) -> str: ... + def has_private(self) -> bool: ... + @property + def d(self) -> int: ... + @property + def pointQ(self) -> EccPoint: ... + def public_key(self) -> EccKey: ... + def export_key(self, **kwargs: Union[str, bytes, bool]) -> Union[str,bytes]: ... + + +_Curve = NamedTuple("_Curve", [('p', Integer), + ('order', Integer), + ('b', Integer), + ('Gx', Integer), + ('Gy', Integer), + ('G', EccPoint), + ('modulus_bits', int), + ('oid', str), + ('context', Any), + ('desc', str), + ('openssh', Union[str, None]), + ]) + +_curves : Dict[str, _Curve] + + +def generate(**kwargs: Union[str, RNG]) -> EccKey: ... +def construct(**kwargs: Union[str, int]) -> EccKey: ... +def import_key(encoded: Union[bytes, str], + passphrase: Optional[str]=None, + curve_name:Optional[str]=None) -> EccKey: ... +def _import_ed25519_public_key(encoded: bytes) -> EccKey: ... +def _import_ed448_public_key(encoded: bytes) -> EccKey: ... diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/ElGamal.py b/env/lib/python3.8/site-packages/Crypto/PublicKey/ElGamal.py new file mode 100644 index 00000000..3b108405 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/ElGamal.py @@ -0,0 +1,286 @@ +# +# ElGamal.py : ElGamal encryption/decryption and signatures +# +# Part of the Python Cryptography Toolkit +# +# Originally written by: A.M. Kuchling +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +__all__ = ['generate', 'construct', 'ElGamalKey'] + +from Crypto import Random +from Crypto.Math.Primality import ( generate_probable_safe_prime, + test_probable_prime, COMPOSITE ) +from Crypto.Math.Numbers import Integer + +# Generate an ElGamal key with N bits +def generate(bits, randfunc): + """Randomly generate a fresh, new ElGamal key. + + The key will be safe for use for both encryption and signature + (although it should be used for **only one** purpose). + + Args: + bits (int): + Key length, or size (in bits) of the modulus *p*. + The recommended value is 2048. + randfunc (callable): + Random number generation function; it should accept + a single integer *N* and return a string of random + *N* random bytes. + + Return: + an :class:`ElGamalKey` object + """ + + obj=ElGamalKey() + + # Generate a safe prime p + # See Algorithm 4.86 in Handbook of Applied Cryptography + obj.p = generate_probable_safe_prime(exact_bits=bits, randfunc=randfunc) + q = (obj.p - 1) >> 1 + + # Generate generator g + while 1: + # Choose a square residue; it will generate a cyclic group of order q. 
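+        # (p is a safe prime, so the squares modulo p form the subgroup
+        # of prime order q = (p-1)//2; any square other than 1 is thus a
+        # generator of that subgroup.)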
+        obj.g = pow(Integer.random_range(min_inclusive=2,
+                                         max_exclusive=obj.p,
+                                         randfunc=randfunc), 2, obj.p)
+
+        # We must avoid g=2 because of Bleichenbacher's attack described
+        # in "Generating ElGamal signatures without knowing the secret key",
+        # 1996
+        if obj.g in (1, 2):
+            continue
+
+        # Discard g if it divides p-1 because of the attack described
+        # in Note 11.67 (iii) in HAC
+        if (obj.p - 1) % obj.g == 0:
+            continue
+
+        # g^{-1} must not divide p-1 because of Khadir's attack
+        # described in "Conditions of the generator for forging ElGamal
+        # signature", 2011
+        ginv = obj.g.inverse(obj.p)
+        if (obj.p - 1) % ginv == 0:
+            continue
+
+        # Found
+        break
+
+    # Generate private key x
+    obj.x = Integer.random_range(min_inclusive=2,
+                                 max_exclusive=obj.p-1,
+                                 randfunc=randfunc)
+    # Generate public key y
+    obj.y = pow(obj.g, obj.x, obj.p)
+    return obj
+
+def construct(tup):
+    r"""Construct an ElGamal key from a tuple of valid ElGamal components.
+
+    The modulus *p* must be a prime.
+    The following conditions must apply:
+
+    .. math::
+
+        \begin{align}
+        &1 < g < p-1 \\
+        &g^{p-1} = 1 \text{ mod } p \\
+        &1 < x < p-1 \\
+        &g^x = y \text{ mod } p
+        \end{align}
+
+    Args:
+      tup (tuple):
+        A tuple with either 3 or 4 integers,
+        in the following order:
+
+        1. Modulus (*p*).
+        2. Generator (*g*).
+        3. Public key (*y*).
+        4. Private key (*x*). Optional.
+
+    Raises:
+        ValueError: when the key being imported fails the most basic ElGamal validity checks.
+
+    Returns:
+        an :class:`ElGamalKey` object
+    """
+
+    obj=ElGamalKey()
+    if len(tup) not in [3,4]:
+        raise ValueError('argument for construct() wrong length')
+    for i in range(len(tup)):
+        field = obj._keydata[i]
+        setattr(obj, field, Integer(tup[i]))
+
+    fmt_error = test_probable_prime(obj.p) == COMPOSITE
+    fmt_error |= obj.g<=1 or obj.g>=obj.p
+    fmt_error |= pow(obj.g, obj.p-1, obj.p)!=1
+    fmt_error |= obj.y<1 or obj.y>=obj.p
+    if len(tup)==4:
+        fmt_error |= obj.x<=1 or obj.x>=obj.p
+        fmt_error |= pow(obj.g, obj.x, obj.p)!=obj.y
+
+    if fmt_error:
+        raise ValueError("Invalid ElGamal key components")
+
+    return obj
+
+class ElGamalKey(object):
+    r"""Class defining an ElGamal key.
+    Do not instantiate directly.
+    Use :func:`generate` or :func:`construct` instead.
+
+    :ivar p: Modulus
+    :vartype p: integer
+
+    :ivar g: Generator
+    :vartype g: integer
+
+    :ivar y: Public key component
+    :vartype y: integer
+
+    :ivar x: Private key component
+    :vartype x: integer
+    """
+
+    #: Dictionary of ElGamal parameters.
+    #:
+    #: A public key will only have the following entries:
+    #:
+    #:  - **y**, the public key.
+    #:  - **g**, the generator.
+    #:  - **p**, the modulus.
+    #:
+    #: A private key will also have:
+    #:
+    #:  - **x**, the private key.
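+    #:
+    #: For example, ``construct((p, g, y))`` only fills the public entries,
+    #: while ``construct((p, g, y, x))`` also sets the private one.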
+    _keydata=['p', 'g', 'y', 'x']
+
+    def __init__(self, randfunc=None):
+        if randfunc is None:
+            randfunc = Random.new().read
+        self._randfunc = randfunc
+
+    def _encrypt(self, M, K):
+        a=pow(self.g, K, self.p)
+        b=( pow(self.y, K, self.p)*M ) % self.p
+        return [int(a), int(b)]
+
+    def _decrypt(self, M):
+        if (not hasattr(self, 'x')):
+            raise TypeError('Private key not available in this object')
+        r = Integer.random_range(min_inclusive=2,
+                                 max_exclusive=self.p-1,
+                                 randfunc=self._randfunc)
+        a_blind = (pow(self.g, r, self.p) * M[0]) % self.p
+        ax=pow(a_blind, self.x, self.p)
+        plaintext_blind = (ax.inverse(self.p) * M[1] ) % self.p
+        plaintext = (plaintext_blind * pow(self.y, r, self.p)) % self.p
+        return int(plaintext)
+
+    def _sign(self, M, K):
+        if (not hasattr(self, 'x')):
+            raise TypeError('Private key not available in this object')
+        p1=self.p-1
+        K = Integer(K)
+        if (K.gcd(p1)!=1):
+            raise ValueError('Bad K value: GCD(K,p-1)!=1')
+        a=pow(self.g, K, self.p)
+        t=(Integer(M)-self.x*a) % p1
+        while t<0: t=t+p1
+        b=(t*K.inverse(p1)) % p1
+        return [int(a), int(b)]
+
+    def _verify(self, M, sig):
+        sig = [Integer(x) for x in sig]
+        if sig[0]<1 or sig[0]>self.p-1:
+            return 0
+        v1=pow(self.y, sig[0], self.p)
+        v1=(v1*pow(sig[0], sig[1], self.p)) % self.p
+        v2=pow(self.g, M, self.p)
+        if v1==v2:
+            return 1
+        return 0
+
+    def has_private(self):
+        """Whether this is an ElGamal private key"""
+
+        if hasattr(self, 'x'):
+            return 1
+        else:
+            return 0
+
+    def can_encrypt(self):
+        return True
+
+    def can_sign(self):
+        return True
+
+    def publickey(self):
+        """A matching ElGamal public key.
+
+        Returns:
+            a new :class:`ElGamalKey` object
+        """
+        return construct((self.p, self.g, self.y))
+
+    def __eq__(self, other):
+        if bool(self.has_private()) != bool(other.has_private()):
+            return False
+
+        result = True
+        for comp in self._keydata:
+            result = result and (getattr(self, comp, None) ==
+                                 getattr(other, comp, None))
+        return result
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __getstate__(self):
+        # ElGamal key is not picklable
+        from pickle import PicklingError
+        raise PicklingError
+
+    # Methods defined in PyCrypto that we don't support anymore
+
+    def sign(self, M, K):
+        raise NotImplementedError
+
+    def verify(self, M, signature):
+        raise NotImplementedError
+
+    def encrypt(self, plaintext, K):
+        raise NotImplementedError
+
+    def decrypt(self, ciphertext):
+        raise NotImplementedError
+
+    def blind(self, M, B):
+        raise NotImplementedError
+
+    def unblind(self, M, B):
+        raise NotImplementedError
+
+    def size(self):
+        raise NotImplementedError
diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/ElGamal.pyi b/env/lib/python3.8/site-packages/Crypto/PublicKey/ElGamal.pyi
new file mode 100644
index 00000000..90485313
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/ElGamal.pyi
@@ -0,0 +1,18 @@
+from typing import Callable, Union, Tuple, Optional
+
+__all__ = ['generate', 'construct', 'ElGamalKey']
+
+RNG = Callable[[int], bytes]
+
+def generate(bits: int, randfunc: RNG) -> ElGamalKey: ...
+def construct(tup: Union[Tuple[int, int, int], Tuple[int, int, int, int]]) -> ElGamalKey: ...
+
+class ElGamalKey(object):
+    def __init__(self, randfunc: Optional[RNG]=None) -> None: ...
+    def has_private(self) -> bool: ...
+    def can_encrypt(self) -> bool: ...
+    def can_sign(self) -> bool: ...
+    def publickey(self) -> ElGamalKey: ...
+    def __eq__(self, other: object) -> bool: ...
+    def __ne__(self, other: object) -> bool: ...
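+    # The legacy PyCrypto methods (sign, verify, encrypt, decrypt, blind,
+    # unblind, size) are omitted from this stub; the implementation only
+    # raises NotImplementedError for them.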
+ def __getstate__(self) -> None: ... diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/RSA.py b/env/lib/python3.8/site-packages/Crypto/PublicKey/RSA.py new file mode 100644 index 00000000..0f5e5892 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/RSA.py @@ -0,0 +1,802 @@ +# -*- coding: utf-8 -*- +# =================================================================== +# +# Copyright (c) 2016, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = ['generate', 'construct', 'import_key', + 'RsaKey', 'oid'] + +import binascii +import struct + +from Crypto import Random +from Crypto.Util.py3compat import tobytes, bord, tostr +from Crypto.Util.asn1 import DerSequence, DerNull + +from Crypto.Math.Numbers import Integer +from Crypto.Math.Primality import (test_probable_prime, + generate_probable_prime, COMPOSITE) + +from Crypto.PublicKey import (_expand_subject_public_key_info, + _create_subject_public_key_info, + _extract_subject_public_key_info) + + +class RsaKey(object): + r"""Class defining an actual RSA key. + Do not instantiate directly. + Use :func:`generate`, :func:`construct` or :func:`import_key` instead. + + :ivar n: RSA modulus + :vartype n: integer + + :ivar e: RSA public exponent + :vartype e: integer + + :ivar d: RSA private exponent + :vartype d: integer + + :ivar p: First factor of the RSA modulus + :vartype p: integer + + :ivar q: Second factor of the RSA modulus + :vartype q: integer + + :ivar u: Chinese remainder component (:math:`p^{-1} \text{mod } q`) + :vartype u: integer + + :undocumented: exportKey, publickey + """ + + def __init__(self, **kwargs): + """Build an RSA key. + + :Keywords: + n : integer + The modulus. + e : integer + The public exponent. + d : integer + The private exponent. Only required for private keys. + p : integer + The first factor of the modulus. Only required for private keys. + q : integer + The second factor of the modulus. Only required for private keys. + u : integer + The CRT coefficient (inverse of p modulo q). Only required for + private keys. 
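+
+        As an illustration only (a deliberately tiny, insecure toy key;
+        real components are hundreds of digits long and are normally
+        produced by :func:`generate` or :func:`construct`), the components
+        can be wrapped as :class:`Crypto.Math.Numbers.Integer` values::
+
+            from Crypto.Math.Numbers import Integer
+            key = RsaKey(n=Integer(3233), e=Integer(17), d=Integer(413),
+                         p=Integer(61), q=Integer(53), u=Integer(20))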
+ """ + + input_set = set(kwargs.keys()) + public_set = set(('n', 'e')) + private_set = public_set | set(('p', 'q', 'd', 'u')) + if input_set not in (private_set, public_set): + raise ValueError("Some RSA components are missing") + for component, value in kwargs.items(): + setattr(self, "_" + component, value) + if input_set == private_set: + self._dp = self._d % (self._p - 1) # = (e⁻¹) mod (p-1) + self._dq = self._d % (self._q - 1) # = (e⁻¹) mod (q-1) + + @property + def n(self): + return int(self._n) + + @property + def e(self): + return int(self._e) + + @property + def d(self): + if not self.has_private(): + raise AttributeError("No private exponent available for public keys") + return int(self._d) + + @property + def p(self): + if not self.has_private(): + raise AttributeError("No CRT component 'p' available for public keys") + return int(self._p) + + @property + def q(self): + if not self.has_private(): + raise AttributeError("No CRT component 'q' available for public keys") + return int(self._q) + + @property + def u(self): + if not self.has_private(): + raise AttributeError("No CRT component 'u' available for public keys") + return int(self._u) + + def size_in_bits(self): + """Size of the RSA modulus in bits""" + return self._n.size_in_bits() + + def size_in_bytes(self): + """The minimal amount of bytes that can hold the RSA modulus""" + return (self._n.size_in_bits() - 1) // 8 + 1 + + def _encrypt(self, plaintext): + if not 0 <= plaintext < self._n: + raise ValueError("Plaintext too large") + return int(pow(Integer(plaintext), self._e, self._n)) + + def _decrypt(self, ciphertext): + if not 0 <= ciphertext < self._n: + raise ValueError("Ciphertext too large") + if not self.has_private(): + raise TypeError("This is not a private key") + + # Blinded RSA decryption (to prevent timing attacks): + # Step 1: Generate random secret blinding factor r, + # such that 0 < r < n-1 + r = Integer.random_range(min_inclusive=1, max_exclusive=self._n) + # Step 2: Compute c' = c * r**e mod n + cp = Integer(ciphertext) * pow(r, self._e, self._n) % self._n + # Step 3: Compute m' = c'**d mod n (normal RSA decryption) + m1 = pow(cp, self._dp, self._p) + m2 = pow(cp, self._dq, self._q) + h = ((m2 - m1) * self._u) % self._q + mp = h * self._p + m1 + # Step 4: Compute m = m' * (r**(-1)) mod n + result = (r.inverse(self._n) * mp) % self._n + # Verify no faults occurred + if ciphertext != pow(result, self._e, self._n): + raise ValueError("Fault detected in RSA decryption") + return result + + def has_private(self): + """Whether this is an RSA private key""" + + return hasattr(self, "_d") + + def can_encrypt(self): # legacy + return True + + def can_sign(self): # legacy + return True + + def public_key(self): + """A matching RSA public key. 
+
+        Returns:
+            a new :class:`RsaKey` object
+        """
+        return RsaKey(n=self._n, e=self._e)
+
+    def __eq__(self, other):
+        if self.has_private() != other.has_private():
+            return False
+        if self.n != other.n or self.e != other.e:
+            return False
+        if not self.has_private():
+            return True
+        return (self.d == other.d)
+
+    def __ne__(self, other):
+        return not (self == other)
+
+    def __getstate__(self):
+        # RSA key is not pickable
+        from pickle import PicklingError
+        raise PicklingError
+
+    def __repr__(self):
+        if self.has_private():
+            extra = ", d=%d, p=%d, q=%d, u=%d" % (int(self._d), int(self._p),
+                                                  int(self._q), int(self._u))
+        else:
+            extra = ""
+        return "RsaKey(n=%d, e=%d%s)" % (int(self._n), int(self._e), extra)
+
+    def __str__(self):
+        if self.has_private():
+            key_type = "Private"
+        else:
+            key_type = "Public"
+        return "%s RSA key at 0x%X" % (key_type, id(self))
+
+    def export_key(self, format='PEM', passphrase=None, pkcs=1,
+                   protection=None, randfunc=None):
+        """Export this RSA key.
+
+        Args:
+          format (string):
+            The format to use for wrapping the key:
+
+            - *'PEM'*. (*Default*) Text encoding, done according to `RFC1421`_/`RFC1423`_.
+            - *'DER'*. Binary encoding.
+            - *'OpenSSH'*. Textual encoding, done according to OpenSSH specification.
+              Only suitable for public keys (not private keys).
+
+          passphrase (string):
+            (*For private keys only*) The pass phrase used for protecting the output.
+
+          pkcs (integer):
+            (*For private keys only*) The ASN.1 structure to use for
+            serializing the key. Note that even in case of PEM
+            encoding, there is an inner ASN.1 DER structure.
+
+            With ``pkcs=1`` (*default*), the private key is encoded in a
+            simple `PKCS#1`_ structure (``RSAPrivateKey``).
+
+            With ``pkcs=8``, the private key is encoded in a `PKCS#8`_ structure
+            (``PrivateKeyInfo``).
+
+            .. note::
+                This parameter is ignored for a public key.
+                For DER and PEM, an ASN.1 DER ``SubjectPublicKeyInfo``
+                structure is always used.
+
+          protection (string):
+            (*For private keys only*)
+            The encryption scheme to use for protecting the private key.
+
+            If ``None`` (default), the behavior depends on :attr:`format`:
+
+            - For *'DER'*, the *PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC*
+              scheme is used. The following operations are performed:
+
+                1. A 16 byte Triple DES key is derived from the passphrase
+                   using :func:`Crypto.Protocol.KDF.PBKDF2` with 8 bytes salt,
+                   and 1000 iterations of :mod:`Crypto.Hash.HMAC`.
+                2. The private key is encrypted using CBC.
+                3. The encrypted key is encoded according to PKCS#8.
+
+            - For *'PEM'*, the obsolete PEM encryption scheme is used.
+              It is based on MD5 for key derivation, and Triple DES for encryption.
+
+            Specifying a value for :attr:`protection` is only meaningful for PKCS#8
+            (that is, ``pkcs=8``) and only if a pass phrase is present too.
+
+            The supported schemes for PKCS#8 are listed in the
+            :mod:`Crypto.IO.PKCS8` module (see :attr:`wrap_algo` parameter).
+
+          randfunc (callable):
+            A function that provides random bytes. Only used for PEM encoding.
+            The default is :func:`Crypto.Random.get_random_bytes`.
+
+        Returns:
+          byte string: the encoded key
+
+        Raises:
+          ValueError: when the format is unknown or when you try to encrypt a private
+            key with *DER* format and PKCS#1.
+
+        .. warning::
+            If you don't provide a pass phrase, the private key will be
+            exported in the clear!
+
+        .. _RFC1421: http://www.ietf.org/rfc/rfc1421.txt
+        .. _RFC1423: http://www.ietf.org/rfc/rfc1423.txt
+        .. _`PKCS#1`: http://www.ietf.org/rfc/rfc3447.txt
+        ..
_`PKCS#8`: http://www.ietf.org/rfc/rfc5208.txt + """ + + if passphrase is not None: + passphrase = tobytes(passphrase) + + if randfunc is None: + randfunc = Random.get_random_bytes + + if format == 'OpenSSH': + e_bytes, n_bytes = [x.to_bytes() for x in (self._e, self._n)] + if bord(e_bytes[0]) & 0x80: + e_bytes = b'\x00' + e_bytes + if bord(n_bytes[0]) & 0x80: + n_bytes = b'\x00' + n_bytes + keyparts = [b'ssh-rsa', e_bytes, n_bytes] + keystring = b''.join([struct.pack(">I", len(kp)) + kp for kp in keyparts]) + return b'ssh-rsa ' + binascii.b2a_base64(keystring)[:-1] + + # DER format is always used, even in case of PEM, which simply + # encodes it into BASE64. + if self.has_private(): + binary_key = DerSequence([0, + self.n, + self.e, + self.d, + self.p, + self.q, + self.d % (self.p-1), + self.d % (self.q-1), + Integer(self.q).inverse(self.p) + ]).encode() + if pkcs == 1: + key_type = 'RSA PRIVATE KEY' + if format == 'DER' and passphrase: + raise ValueError("PKCS#1 private key cannot be encrypted") + else: # PKCS#8 + from Crypto.IO import PKCS8 + + if format == 'PEM' and protection is None: + key_type = 'PRIVATE KEY' + binary_key = PKCS8.wrap(binary_key, oid, None, + key_params=DerNull()) + else: + key_type = 'ENCRYPTED PRIVATE KEY' + if not protection: + protection = 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC' + binary_key = PKCS8.wrap(binary_key, oid, + passphrase, protection, + key_params=DerNull()) + passphrase = None + else: + key_type = "PUBLIC KEY" + binary_key = _create_subject_public_key_info(oid, + DerSequence([self.n, + self.e]), + DerNull() + ) + + if format == 'DER': + return binary_key + if format == 'PEM': + from Crypto.IO import PEM + + pem_str = PEM.encode(binary_key, key_type, passphrase, randfunc) + return tobytes(pem_str) + + raise ValueError("Unknown key format '%s'. Cannot export the RSA key." % format) + + # Backward compatibility + exportKey = export_key + publickey = public_key + + # Methods defined in PyCrypto that we don't support anymore + def sign(self, M, K): + raise NotImplementedError("Use module Crypto.Signature.pkcs1_15 instead") + + def verify(self, M, signature): + raise NotImplementedError("Use module Crypto.Signature.pkcs1_15 instead") + + def encrypt(self, plaintext, K): + raise NotImplementedError("Use module Crypto.Cipher.PKCS1_OAEP instead") + + def decrypt(self, ciphertext): + raise NotImplementedError("Use module Crypto.Cipher.PKCS1_OAEP instead") + + def blind(self, M, B): + raise NotImplementedError + + def unblind(self, M, B): + raise NotImplementedError + + def size(self): + raise NotImplementedError + + +def generate(bits, randfunc=None, e=65537): + """Create a new RSA key pair. + + The algorithm closely follows NIST `FIPS 186-4`_ in its + sections B.3.1 and B.3.3. The modulus is the product of + two non-strong probable primes. + Each prime passes a suitable number of Miller-Rabin tests + with random bases and a single Lucas test. + + Args: + bits (integer): + Key length, or size (in bits) of the RSA modulus. + It must be at least 1024, but **2048 is recommended.** + The FIPS standard only defines 1024, 2048 and 3072. + randfunc (callable): + Function that returns random bytes. + The default is :func:`Crypto.Random.get_random_bytes`. + e (integer): + Public RSA exponent. It must be an odd positive integer. + It is typically a small number with very few ones in its + binary representation. + The FIPS standard requires the public exponent to be + at least 65537 (the default). + + Returns: an RSA key object (:class:`RsaKey`, with private key). + + .. 
_FIPS 186-4: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf
+    """
+
+    if bits < 1024:
+        raise ValueError("RSA modulus length must be >= 1024")
+    if e % 2 == 0 or e < 3:
+        raise ValueError("RSA public exponent must be a positive, odd integer larger than 2.")
+
+    if randfunc is None:
+        randfunc = Random.get_random_bytes
+
+    d = n = Integer(1)
+    e = Integer(e)
+
+    while n.size_in_bits() != bits and d < (1 << (bits // 2)):
+        # Generate the prime factors of n: p and q.
+        # By construction, their product is always
+        # 2^{bits-1} < p*q < 2^bits.
+        size_q = bits // 2
+        size_p = bits - size_q
+
+        min_p = min_q = (Integer(1) << (2 * size_q - 1)).sqrt()
+        if size_q != size_p:
+            min_p = (Integer(1) << (2 * size_p - 1)).sqrt()
+
+        def filter_p(candidate):
+            return candidate > min_p and (candidate - 1).gcd(e) == 1
+
+        p = generate_probable_prime(exact_bits=size_p,
+                                    randfunc=randfunc,
+                                    prime_filter=filter_p)
+
+        min_distance = Integer(1) << (bits // 2 - 100)
+
+        def filter_q(candidate):
+            return (candidate > min_q and
+                    (candidate - 1).gcd(e) == 1 and
+                    abs(candidate - p) > min_distance)
+
+        q = generate_probable_prime(exact_bits=size_q,
+                                    randfunc=randfunc,
+                                    prime_filter=filter_q)
+
+        n = p * q
+        lcm = (p - 1).lcm(q - 1)
+        d = e.inverse(lcm)
+
+    if p > q:
+        p, q = q, p
+
+    u = p.inverse(q)
+
+    return RsaKey(n=n, e=e, d=d, p=p, q=q, u=u)
+
+
+def construct(rsa_components, consistency_check=True):
+    r"""Construct an RSA key from a tuple of valid RSA components.
+
+    The modulus **n** must be the product of two primes.
+    The public exponent **e** must be odd and larger than 1.
+
+    In case of a private key, the following equations must apply:
+
+    .. math::
+
+        \begin{align}
+        p*q &= n \\
+        e*d &\equiv 1 ( \text{mod lcm}(p-1, q-1)) \\
+        p*u &\equiv 1 ( \text{mod } q)
+        \end{align}
+
+    Args:
+        rsa_components (tuple):
+            A tuple of integers, with at least 2 and no
+            more than 6 items. The items come in the following order:
+
+            1. RSA modulus *n*.
+            2. Public exponent *e*.
+            3. Private exponent *d*.
+               Only required if the key is private.
+            4. First factor of *n* (*p*).
+               Optional, but the other factor *q* must also be present.
+            5. Second factor of *n* (*q*). Optional.
+            6. CRT coefficient *u*, that is :math:`p^{-1} \text{mod }q`. Optional.
+
+        consistency_check (boolean):
+            If ``True``, the library will verify that the provided components
+            fulfil the main RSA properties.
+
+    Raises:
+        ValueError: when the key being imported fails the most basic RSA validity checks.
+
+    Returns: An RSA key object (:class:`RsaKey`).
+    """
+
+    class InputComps(object):
+        pass
+
+    input_comps = InputComps()
+    for (comp, value) in zip(('n', 'e', 'd', 'p', 'q', 'u'), rsa_components):
+        setattr(input_comps, comp, Integer(value))
+
+    n = input_comps.n
+    e = input_comps.e
+    if not hasattr(input_comps, 'd'):
+        key = RsaKey(n=n, e=e)
+    else:
+        d = input_comps.d
+        if hasattr(input_comps, 'q'):
+            p = input_comps.p
+            q = input_comps.q
+        else:
+            # Compute factors p and q from the private exponent d.
+            # We assume that n has no more than two factors.
+            # See 8.2.2(i) in Handbook of Applied Cryptography.
+            ktot = d * e - 1
+            # The quantity d*e-1 is a multiple of phi(n), even,
+            # and can be represented as t*2^s.
+            t = ktot
+            while t % 2 == 0:
+                t //= 2
+            # Cycle through all multiplicative inverses in Zn.
+            # The algorithm is non-deterministic, but there is a 50% chance
+            # any candidate a leads to successful factoring.
+            # See "Digitalized Signatures and Public Key Functions as Intractable
+            # as Factorization", M.
Rabin, 1979 + spotted = False + a = Integer(2) + while not spotted and a < 100: + k = Integer(t) + # Cycle through all values a^{t*2^i}=a^k + while k < ktot: + cand = pow(a, k, n) + # Check if a^k is a non-trivial root of unity (mod n) + if cand != 1 and cand != (n - 1) and pow(cand, 2, n) == 1: + # We have found a number such that (cand-1)(cand+1)=0 (mod n). + # Either of the terms divides n. + p = Integer(n).gcd(cand + 1) + spotted = True + break + k *= 2 + # This value was not any good... let's try another! + a += 2 + if not spotted: + raise ValueError("Unable to compute factors p and q from exponent d.") + # Found ! + assert ((n % p) == 0) + q = n // p + + if hasattr(input_comps, 'u'): + u = input_comps.u + else: + u = p.inverse(q) + + # Build key object + key = RsaKey(n=n, e=e, d=d, p=p, q=q, u=u) + + # Verify consistency of the key + if consistency_check: + + # Modulus and public exponent must be coprime + if e <= 1 or e >= n: + raise ValueError("Invalid RSA public exponent") + if Integer(n).gcd(e) != 1: + raise ValueError("RSA public exponent is not coprime to modulus") + + # For RSA, modulus must be odd + if not n & 1: + raise ValueError("RSA modulus is not odd") + + if key.has_private(): + # Modulus and private exponent must be coprime + if d <= 1 or d >= n: + raise ValueError("Invalid RSA private exponent") + if Integer(n).gcd(d) != 1: + raise ValueError("RSA private exponent is not coprime to modulus") + # Modulus must be product of 2 primes + if p * q != n: + raise ValueError("RSA factors do not match modulus") + if test_probable_prime(p) == COMPOSITE: + raise ValueError("RSA factor p is composite") + if test_probable_prime(q) == COMPOSITE: + raise ValueError("RSA factor q is composite") + # See Carmichael theorem + phi = (p - 1) * (q - 1) + lcm = phi // (p - 1).gcd(q - 1) + if (e * d % int(lcm)) != 1: + raise ValueError("Invalid RSA condition") + if hasattr(key, 'u'): + # CRT coefficient + if u <= 1 or u >= q: + raise ValueError("Invalid RSA component u") + if (p * u % q) != 1: + raise ValueError("Invalid RSA component u with p") + + return key + + +def _import_pkcs1_private(encoded, *kwargs): + # RSAPrivateKey ::= SEQUENCE { + # version Version, + # modulus INTEGER, -- n + # publicExponent INTEGER, -- e + # privateExponent INTEGER, -- d + # prime1 INTEGER, -- p + # prime2 INTEGER, -- q + # exponent1 INTEGER, -- d mod (p-1) + # exponent2 INTEGER, -- d mod (q-1) + # coefficient INTEGER -- (inverse of q) mod p + # } + # + # Version ::= INTEGER + der = DerSequence().decode(encoded, nr_elements=9, only_ints_expected=True) + if der[0] != 0: + raise ValueError("No PKCS#1 encoding of an RSA private key") + return construct(der[1:6] + [Integer(der[4]).inverse(der[5])]) + + +def _import_pkcs1_public(encoded, *kwargs): + # RSAPublicKey ::= SEQUENCE { + # modulus INTEGER, -- n + # publicExponent INTEGER -- e + # } + der = DerSequence().decode(encoded, nr_elements=2, only_ints_expected=True) + return construct(der) + + +def _import_subjectPublicKeyInfo(encoded, *kwargs): + + algoid, encoded_key, params = _expand_subject_public_key_info(encoded) + if algoid != oid or params is not None: + raise ValueError("No RSA subjectPublicKeyInfo") + return _import_pkcs1_public(encoded_key) + + +def _import_x509_cert(encoded, *kwargs): + + sp_info = _extract_subject_public_key_info(encoded) + return _import_subjectPublicKeyInfo(sp_info) + + +def _import_pkcs8(encoded, passphrase): + from Crypto.IO import PKCS8 + + k = PKCS8.unwrap(encoded, passphrase) + if k[0] != oid: + raise ValueError("No PKCS#8 
encoded RSA key")
+    return _import_keyDER(k[1], passphrase)
+
+
+def _import_keyDER(extern_key, passphrase):
+    """Import an RSA key (public or private half), encoded in DER form."""
+
+    decodings = (_import_pkcs1_private,
+                 _import_pkcs1_public,
+                 _import_subjectPublicKeyInfo,
+                 _import_x509_cert,
+                 _import_pkcs8)
+
+    for decoding in decodings:
+        try:
+            return decoding(extern_key, passphrase)
+        except ValueError:
+            pass
+
+    raise ValueError("RSA key format is not supported")
+
+
+def _import_openssh_private_rsa(data, password):
+
+    from ._openssh import (import_openssh_private_generic,
+                           read_bytes, read_string, check_padding)
+
+    ssh_name, decrypted = import_openssh_private_generic(data, password)
+
+    if ssh_name != "ssh-rsa":
+        raise ValueError("This SSH key is not RSA")
+
+    n, decrypted = read_bytes(decrypted)
+    e, decrypted = read_bytes(decrypted)
+    d, decrypted = read_bytes(decrypted)
+    iqmp, decrypted = read_bytes(decrypted)
+    p, decrypted = read_bytes(decrypted)
+    q, decrypted = read_bytes(decrypted)
+
+    _, padded = read_string(decrypted)  # Comment
+    check_padding(padded)
+
+    build = [Integer.from_bytes(x) for x in (n, e, d, q, p, iqmp)]
+    return construct(build)
+
+
+def import_key(extern_key, passphrase=None):
+    """Import an RSA key (public or private).
+
+    Args:
+      extern_key (string or byte string):
+        The RSA key to import.
+
+        The following formats are supported for an RSA **public key**:
+
+        - X.509 certificate (binary or PEM format)
+        - X.509 ``subjectPublicKeyInfo`` DER SEQUENCE (binary or PEM
+          encoding)
+        - `PKCS#1`_ ``RSAPublicKey`` DER SEQUENCE (binary or PEM encoding)
+        - An OpenSSH line (e.g. the content of ``~/.ssh/id_rsa.pub``, ASCII)
+
+        The following formats are supported for an RSA **private key**:
+
+        - PKCS#1 ``RSAPrivateKey`` DER SEQUENCE (binary or PEM encoding)
+        - `PKCS#8`_ ``PrivateKeyInfo`` or ``EncryptedPrivateKeyInfo``
+          DER SEQUENCE (binary or PEM encoding)
+        - OpenSSH (text format, introduced in `OpenSSH 6.5`_)
+
+        For details about the PEM encoding, see `RFC1421`_/`RFC1423`_.
+
+      passphrase (string or byte string):
+        For private keys only, the pass phrase that encrypts the key.
+
+    Returns: An RSA key object (:class:`RsaKey`).
+
+    Raises:
+      ValueError/IndexError/TypeError:
+        When the given key cannot be parsed (possibly because the pass
+        phrase is wrong).
+
+    .. _RFC1421: http://www.ietf.org/rfc/rfc1421.txt
+    .. _RFC1423: http://www.ietf.org/rfc/rfc1423.txt
+    .. _`PKCS#1`: http://www.ietf.org/rfc/rfc3447.txt
+    .. _`PKCS#8`: http://www.ietf.org/rfc/rfc5208.txt
+    .. _`OpenSSH 6.5`: https://flak.tedunangst.com/post/new-openssh-key-format-and-bcrypt-pbkdf
+    """
+
+    from Crypto.IO import PEM
+
+    extern_key = tobytes(extern_key)
+    if passphrase is not None:
+        passphrase = tobytes(passphrase)
+
+    if extern_key.startswith(b'-----BEGIN OPENSSH PRIVATE KEY'):
+        text_encoded = tostr(extern_key)
+        openssh_encoded, marker, enc_flag = PEM.decode(text_encoded, passphrase)
+        result = _import_openssh_private_rsa(openssh_encoded, passphrase)
+        return result
+
+    if extern_key.startswith(b'-----'):
+        # This is probably a PEM encoded key.
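+        # Note: PEM.decode() consumes the passphrase when the PEM body itself
+        # is encrypted (enc_flag set below), so in that case the passphrase
+        # must not be passed again to the inner DER decoder.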
+ (der, marker, enc_flag) = PEM.decode(tostr(extern_key), passphrase) + if enc_flag: + passphrase = None + return _import_keyDER(der, passphrase) + + if extern_key.startswith(b'ssh-rsa '): + # This is probably an OpenSSH key + keystring = binascii.a2b_base64(extern_key.split(b' ')[1]) + keyparts = [] + while len(keystring) > 4: + length = struct.unpack(">I", keystring[:4])[0] + keyparts.append(keystring[4:4 + length]) + keystring = keystring[4 + length:] + e = Integer.from_bytes(keyparts[1]) + n = Integer.from_bytes(keyparts[2]) + return construct([n, e]) + + if len(extern_key) > 0 and bord(extern_key[0]) == 0x30: + # This is probably a DER encoded key + return _import_keyDER(extern_key, passphrase) + + raise ValueError("RSA key format is not supported") + + +# Backward compatibility +importKey = import_key + +#: `Object ID`_ for the RSA encryption algorithm. This OID often indicates +#: a generic RSA key, even when such key will be actually used for digital +#: signatures. +#: +#: .. _`Object ID`: http://www.alvestrand.no/objectid/1.2.840.113549.1.1.1.html +oid = "1.2.840.113549.1.1.1" diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/RSA.pyi b/env/lib/python3.8/site-packages/Crypto/PublicKey/RSA.pyi new file mode 100644 index 00000000..d436acf8 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/RSA.pyi @@ -0,0 +1,51 @@ +from typing import Callable, Union, Tuple, Optional + +__all__ = ['generate', 'construct', 'import_key', + 'RsaKey', 'oid'] + +RNG = Callable[[int], bytes] + +class RsaKey(object): + def __init__(self, **kwargs: int) -> None: ... + @property + def n(self) -> int: ... + @property + def e(self) -> int: ... + @property + def d(self) -> int: ... + @property + def p(self) -> int: ... + @property + def q(self) -> int: ... + @property + def u(self) -> int: ... + def size_in_bits(self) -> int: ... + def size_in_bytes(self) -> int: ... + def has_private(self) -> bool: ... + def can_encrypt(self) -> bool: ... # legacy + def can_sign(self) -> bool:... # legacy + def public_key(self) -> RsaKey: ... + def __eq__(self, other: object) -> bool: ... + def __ne__(self, other: object) -> bool: ... + def __getstate__(self) -> None: ... + def __repr__(self) -> str: ... + def __str__(self) -> str: ... + def export_key(self, format: Optional[str]="PEM", passphrase: Optional[str]=None, pkcs: Optional[int]=1, + protection: Optional[str]=None, randfunc: Optional[RNG]=None) -> bytes: ... + + # Backward compatibility + exportKey = export_key + publickey = public_key + +def generate(bits: int, randfunc: Optional[RNG]=None, e: Optional[int]=65537) -> RsaKey: ... +def construct(rsa_components: Union[Tuple[int, int], # n, e + Tuple[int, int, int], # n, e, d + Tuple[int, int, int, int, int], # n, e, d, p, q + Tuple[int, int, int, int, int, int]], # n, e, d, p, q, crt_q + consistency_check: Optional[bool]=True) -> RsaKey: ... +def import_key(extern_key: Union[str, bytes], passphrase: Optional[str]=None) -> RsaKey: ... + +# Backward compatibility +importKey = import_key + +oid: str diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/__init__.py b/env/lib/python3.8/site-packages/Crypto/PublicKey/__init__.py new file mode 100644 index 00000000..cf3a2384 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/__init__.py @@ -0,0 +1,94 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. 
To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Crypto.Util.asn1 import (DerSequence, DerInteger, DerBitString, + DerObjectId, DerNull) + + +def _expand_subject_public_key_info(encoded): + """Parse a SubjectPublicKeyInfo structure. + + It returns a triple with: + * OID (string) + * encoded public key (bytes) + * Algorithm parameters (bytes or None) + """ + + # + # SubjectPublicKeyInfo ::= SEQUENCE { + # algorithm AlgorithmIdentifier, + # subjectPublicKey BIT STRING + # } + # + # AlgorithmIdentifier ::= SEQUENCE { + # algorithm OBJECT IDENTIFIER, + # parameters ANY DEFINED BY algorithm OPTIONAL + # } + # + + spki = DerSequence().decode(encoded, nr_elements=2) + algo = DerSequence().decode(spki[0], nr_elements=(1,2)) + algo_oid = DerObjectId().decode(algo[0]) + spk = DerBitString().decode(spki[1]).value + + if len(algo) == 1: + algo_params = None + else: + try: + DerNull().decode(algo[1]) + algo_params = None + except: + algo_params = algo[1] + + return algo_oid.value, spk, algo_params + + +def _create_subject_public_key_info(algo_oid, public_key, params): + + if params is None: + algorithm = DerSequence([DerObjectId(algo_oid)]) + else: + algorithm = DerSequence([DerObjectId(algo_oid), params]) + + spki = DerSequence([algorithm, + DerBitString(public_key) + ]) + return spki.encode() + + +def _extract_subject_public_key_info(x509_certificate): + """Extract subjectPublicKeyInfo from a DER X.509 certificate.""" + + certificate = DerSequence().decode(x509_certificate, nr_elements=3) + tbs_certificate = DerSequence().decode(certificate[0], + nr_elements=range(6, 11)) + + index = 5 + try: + tbs_certificate[0] + 1 + # Version not present + version = 1 + except TypeError: + version = DerInteger(explicit=0).decode(tbs_certificate[0]).value + if version not in (2, 3): + raise ValueError("Incorrect X.509 certificate version") + index = 6 + + return tbs_certificate[index] diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/__init__.pyi b/env/lib/python3.8/site-packages/Crypto/PublicKey/__init__.pyi new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/_ec_ws.abi3.so b/env/lib/python3.8/site-packages/Crypto/PublicKey/_ec_ws.abi3.so new file mode 100644 index 00000000..46ec7955 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/PublicKey/_ec_ws.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/_ed25519.abi3.so b/env/lib/python3.8/site-packages/Crypto/PublicKey/_ed25519.abi3.so new file mode 100644 index 00000000..891ec443 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/PublicKey/_ed25519.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/_ed448.abi3.so 
b/env/lib/python3.8/site-packages/Crypto/PublicKey/_ed448.abi3.so new file mode 100644 index 00000000..a3ddd4bc Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/PublicKey/_ed448.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/_openssh.py b/env/lib/python3.8/site-packages/Crypto/PublicKey/_openssh.py new file mode 100644 index 00000000..88dacfc2 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/_openssh.py @@ -0,0 +1,135 @@ +# =================================================================== +# +# Copyright (c) 2019, Helder Eijs +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# ===================================================================
+
+import struct
+
+from Crypto.Cipher import AES
+from Crypto.Hash import SHA512
+from Crypto.Protocol.KDF import _bcrypt_hash
+from Crypto.Util.strxor import strxor
+from Crypto.Util.py3compat import tostr, bchr, bord
+
+
+def read_int4(data):
+    if len(data) < 4:
+        raise ValueError("Insufficient data")
+    value = struct.unpack(">I", data[:4])[0]
+    return value, data[4:]
+
+
+def read_bytes(data):
+    size, data = read_int4(data)
+    if len(data) < size:
+        raise ValueError("Insufficient data (V)")
+    return data[:size], data[size:]
+
+
+def read_string(data):
+    s, d = read_bytes(data)
+    return tostr(s), d
+
+
+def check_padding(pad):
+    for v, x in enumerate(pad):
+        if bord(x) != ((v + 1) & 0xFF):
+            raise ValueError("Incorrect padding")
+
+
+def import_openssh_private_generic(data, password):
+    # https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/PROTOCOL.key?annotate=HEAD
+    # https://github.com/openssh/openssh-portable/blob/master/sshkey.c
+    # https://coolaj86.com/articles/the-openssh-private-key-format/
+    # https://coolaj86.com/articles/the-ssh-public-key-format/
+
+    if not data.startswith(b'openssh-key-v1\x00'):
+        raise ValueError("Incorrect magic value")
+    data = data[15:]
+
+    ciphername, data = read_string(data)
+    kdfname, data = read_string(data)
+    kdfoptions, data = read_bytes(data)
+    number_of_keys, data = read_int4(data)
+
+    if number_of_keys != 1:
+        raise ValueError("We only handle 1 key at a time")
+
+    _, data = read_string(data)  # Public key
+    encrypted, data = read_bytes(data)
+    if data:
+        raise ValueError("Too much data")
+
+    if len(encrypted) % 8 != 0:
+        raise ValueError("Incorrect payload length")
+
+    # Decrypt if necessary
+    if ciphername == 'none':
+        decrypted = encrypted
+    else:
+        if (ciphername, kdfname) != ('aes256-ctr', 'bcrypt'):
+            raise ValueError("Unsupported encryption scheme %s/%s" % (ciphername, kdfname))
+
+        salt, kdfoptions = read_bytes(kdfoptions)
+        iterations, kdfoptions = read_int4(kdfoptions)
+
+        if len(salt) != 16:
+            raise ValueError("Incorrect salt length")
+        if kdfoptions:
+            raise ValueError("Too much data in kdfoptions")
+
+        pwd_sha512 = SHA512.new(password).digest()
+        # We need 32+16 = 48 bytes, therefore 2 bcrypt outputs are sufficient
+        stripes = []
+        constant = b"OxychromaticBlowfishSwatDynamite"
+        for count in range(1, 3):
+            salt_sha512 = SHA512.new(salt + struct.pack(">I", count)).digest()
+            out_le = _bcrypt_hash(pwd_sha512, 6, salt_sha512, constant, False)
+            out = struct.pack("<IIIIIIII", *struct.unpack(">IIIIIIII", out_le))
+            acc = bytearray(out)
+            for _ in range(1, iterations):
+                out_le = _bcrypt_hash(pwd_sha512, 6, SHA512.new(out).digest(), constant, False)
+                out = struct.pack("<IIIIIIII", *struct.unpack(">IIIIIIII", out_le))
+                strxor(acc, out, output=acc)
+            stripes.append(acc[:24])
+
+        result = b"".join([bchr(a)+bchr(b) for (a, b) in zip(*stripes)])
+
+        cipher = AES.new(result[:32],
+                         AES.MODE_CTR,
+                         nonce=b"",
+                         initial_value=result[32:32+16])
+        decrypted = cipher.decrypt(encrypted)
+
+    checkint1, decrypted = read_int4(decrypted)
+    checkint2, decrypted = read_int4(decrypted)
+    if checkint1 != checkint2:
+        raise ValueError("Incorrect checksum")
+    ssh_name, decrypted = read_string(decrypted)
+
+    return ssh_name, decrypted
diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/_openssh.pyi b/env/lib/python3.8/site-packages/Crypto/PublicKey/_openssh.pyi
new file mode 100644
index 00000000..15f3677a
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/PublicKey/_openssh.pyi
@@ -0,0 +1,7 @@
+from typing import Tuple
+
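+# Stubs for the parsers in _openssh.py: read_int4 consumes a fixed big-endian
+# uint32, while read_bytes/read_string consume one length-prefixed field; each
+# returns the parsed value together with the remaining bytes.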
+def read_int4(data: bytes) -> Tuple[int, bytes]: ... +def read_bytes(data: bytes) -> Tuple[bytes, bytes]: ... +def read_string(data: bytes) -> Tuple[str, bytes]: ... +def check_padding(pad: bytes) -> None: ... +def import_openssh_private_generic(data: bytes, password: bytes) -> Tuple[str, bytes]: ... diff --git a/env/lib/python3.8/site-packages/Crypto/PublicKey/_x25519.abi3.so b/env/lib/python3.8/site-packages/Crypto/PublicKey/_x25519.abi3.so new file mode 100644 index 00000000..afd3ee40 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/PublicKey/_x25519.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Random/__init__.py b/env/lib/python3.8/site-packages/Crypto/Random/__init__.py new file mode 100644 index 00000000..0f83a07e --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Random/__init__.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# +# Random/__init__.py : PyCrypto random number generation +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +__all__ = ['new', 'get_random_bytes'] + +from os import urandom + +class _UrandomRNG(object): + + def read(self, n): + """Return a random byte string of the desired size.""" + return urandom(n) + + def flush(self): + """Method provided for backward compatibility only.""" + pass + + def reinit(self): + """Method provided for backward compatibility only.""" + pass + + def close(self): + """Method provided for backward compatibility only.""" + pass + + +def new(*args, **kwargs): + """Return a file-like object that outputs cryptographically random bytes.""" + return _UrandomRNG() + + +def atfork(): + pass + + +#: Function that returns a random byte string of the desired size. +get_random_bytes = urandom + diff --git a/env/lib/python3.8/site-packages/Crypto/Random/__init__.pyi b/env/lib/python3.8/site-packages/Crypto/Random/__init__.pyi new file mode 100644 index 00000000..ddc5b9b0 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Random/__init__.pyi @@ -0,0 +1,19 @@ +from typing import Any + +__all__ = ['new', 'get_random_bytes'] + +from os import urandom + +class _UrandomRNG(object): + + def read(self, n: int) -> bytes:... + def flush(self) -> None: ... + def reinit(self) -> None: ... + def close(self) -> None: ... + +def new(*args: Any, **kwargs: Any) -> _UrandomRNG: ... + +def atfork() -> None: ... 
+ +get_random_bytes = urandom + diff --git a/env/lib/python3.8/site-packages/Crypto/Random/random.py b/env/lib/python3.8/site-packages/Crypto/Random/random.py new file mode 100644 index 00000000..5389b3bb --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Random/random.py @@ -0,0 +1,138 @@ +# -*- coding: utf-8 -*- +# +# Random/random.py : Strong alternative for the standard 'random' module +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +__all__ = ['StrongRandom', 'getrandbits', 'randrange', 'randint', 'choice', 'shuffle', 'sample'] + +from Crypto import Random + +from Crypto.Util.py3compat import is_native_int + +class StrongRandom(object): + def __init__(self, rng=None, randfunc=None): + if randfunc is None and rng is None: + self._randfunc = None + elif randfunc is not None and rng is None: + self._randfunc = randfunc + elif randfunc is None and rng is not None: + self._randfunc = rng.read + else: + raise ValueError("Cannot specify both 'rng' and 'randfunc'") + + def getrandbits(self, k): + """Return an integer with k random bits.""" + + if self._randfunc is None: + self._randfunc = Random.new().read + mask = (1 << k) - 1 + return mask & bytes_to_long(self._randfunc(ceil_div(k, 8))) + + def randrange(self, *args): + """randrange([start,] stop[, step]): + Return a randomly-selected element from range(start, stop, step).""" + if len(args) == 3: + (start, stop, step) = args + elif len(args) == 2: + (start, stop) = args + step = 1 + elif len(args) == 1: + (stop,) = args + start = 0 + step = 1 + else: + raise TypeError("randrange expected at most 3 arguments, got %d" % (len(args),)) + if (not is_native_int(start) or not is_native_int(stop) or not + is_native_int(step)): + raise TypeError("randrange requires integer arguments") + if step == 0: + raise ValueError("randrange step argument must not be zero") + + num_choices = ceil_div(stop - start, step) + if num_choices < 0: + num_choices = 0 + if num_choices < 1: + raise ValueError("empty range for randrange(%r, %r, %r)" % (start, stop, step)) + + # Pick a random number in the range of possible numbers + r = num_choices + while r >= num_choices: + r = self.getrandbits(size(num_choices)) + + return start + (step * r) + + def randint(self, a, b): + """Return a random integer N such that a <= N <= b.""" + if not is_native_int(a) or not is_native_int(b): + raise TypeError("randint requires integer arguments") + N = self.randrange(a, b+1) + assert a <= N <= b + return N + + def choice(self, seq): + """Return a random element from a (non-empty) sequence. 
+
+        If the sequence is empty, raises IndexError.
+        """
+        if len(seq) == 0:
+            raise IndexError("empty sequence")
+        return seq[self.randrange(len(seq))]
+
+    def shuffle(self, x):
+        """Shuffle the sequence in place."""
+        # Fisher-Yates shuffle.  O(n)
+        # See http://en.wikipedia.org/wiki/Fisher-Yates_shuffle
+        # Working backwards from the end of the array, we choose a random item
+        # from the remaining items until all items have been chosen.
+        for i in range(len(x)-1, 0, -1):  # iterate from len(x)-1 downto 1
+            j = self.randrange(0, i+1)    # choose random j such that 0 <= j <= i
+            x[i], x[j] = x[j], x[i]       # exchange x[i] and x[j]
+
+    def sample(self, population, k):
+        """Return a k-length list of unique elements chosen from the population sequence."""
+
+        num_choices = len(population)
+        if k > num_choices:
+            raise ValueError("sample larger than population")
+
+        retval = []
+        selected = {}  # we emulate a set using a dict here
+        for i in range(k):
+            r = None
+            while r is None or r in selected:
+                r = self.randrange(num_choices)
+            retval.append(population[r])
+            selected[r] = 1
+        return retval
+
+_r = StrongRandom()
+getrandbits = _r.getrandbits
+randrange = _r.randrange
+randint = _r.randint
+choice = _r.choice
+shuffle = _r.shuffle
+sample = _r.sample
+
+# These are at the bottom to avoid problems with recursive imports
+from Crypto.Util.number import ceil_div, bytes_to_long, long_to_bytes, size
+
+# vim:set ts=4 sw=4 sts=4 expandtab:
diff --git a/env/lib/python3.8/site-packages/Crypto/Random/random.pyi b/env/lib/python3.8/site-packages/Crypto/Random/random.pyi
new file mode 100644
index 00000000..f873c4a4
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Random/random.pyi
@@ -0,0 +1,20 @@
+from typing import Callable, Tuple, Union, Sequence, Any, Optional
+
+__all__ = ['StrongRandom', 'getrandbits', 'randrange', 'randint', 'choice', 'shuffle', 'sample']
+
+class StrongRandom(object):
+    def __init__(self, rng: Optional[Any]=None, randfunc: Optional[Callable]=None) -> None: ...  # TODO What is rng?
+    def getrandbits(self, k: int) -> int: ...
+    def randrange(self, start: int, stop: int = ..., step: int = ...) -> int: ...
+    def randint(self, a: int, b: int) -> int: ...
+    def choice(self, seq: Sequence) -> object: ...
+    def shuffle(self, x: Sequence) -> None: ...
+    def sample(self, population: Sequence, k: int) -> list: ...
+
+_r = StrongRandom()
+getrandbits = _r.getrandbits
+randrange = _r.randrange
+randint = _r.randint
+choice = _r.choice
+shuffle = _r.shuffle
+sample = _r.sample
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/__init__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/__init__.py
new file mode 100644
index 00000000..05fc139a
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/__init__.py
@@ -0,0 +1,60 @@
+# -*- coding: utf-8 -*-
+#
+# SelfTest/Cipher/__init__.py: Self-test for cipher modules
+#
+# Written in 2008 by Dwayne C. Litzenberger
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain.  To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""Self-test for cipher modules"""
+
+__revision__ = "$Id$"
+
+def get_tests(config={}):
+    tests = []
+    from Crypto.SelfTest.Cipher import test_AES; tests += test_AES.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_ARC2; tests += test_ARC2.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_ARC4; tests += test_ARC4.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_Blowfish; tests += test_Blowfish.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_CAST; tests += test_CAST.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_DES3; tests += test_DES3.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_DES; tests += test_DES.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_Salsa20; tests += test_Salsa20.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_ChaCha20; tests += test_ChaCha20.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_ChaCha20_Poly1305; tests += test_ChaCha20_Poly1305.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_pkcs1_15; tests += test_pkcs1_15.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_pkcs1_oaep; tests += test_pkcs1_oaep.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_OCB; tests += test_OCB.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_CBC; tests += test_CBC.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_CFB; tests += test_CFB.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_OpenPGP; tests += test_OpenPGP.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_OFB; tests += test_OFB.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_CTR; tests += test_CTR.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_CCM; tests += test_CCM.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_EAX; tests += test_EAX.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_GCM; tests += test_GCM.get_tests(config=config)
+    from Crypto.SelfTest.Cipher import test_SIV; tests += test_SIV.get_tests(config=config)
+    return tests
+
+if __name__ == '__main__':
+    import unittest
+    suite = lambda: unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
+
+# vim:set ts=4 sw=4 sts=4 expandtab:
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/common.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/common.py
new file mode 100644
index 00000000..c5bc755a
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/common.py
@@ -0,0 +1,510 @@
+# -*- coding: utf-8 -*-
+#
+# SelfTest/Cipher/common.py: Common code for Crypto.SelfTest.Cipher
+#
+# Written in 2008 by Dwayne C. Litzenberger
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain.
To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""Self-testing for PyCrypto cipher modules"""
+
+import unittest
+from binascii import a2b_hex, b2a_hex, hexlify
+
+from Crypto.Util.py3compat import b
+from Crypto.Util.strxor import strxor_c
+
+class _NoDefault: pass  # sentinel object
+def _extract(d, k, default=_NoDefault):
+    """Get an item from a dictionary, and remove it from the dictionary."""
+    try:
+        retval = d[k]
+    except KeyError:
+        if default is _NoDefault:
+            raise
+        return default
+    del d[k]
+    return retval
+
+# Generic cipher test case
+class CipherSelfTest(unittest.TestCase):
+
+    def __init__(self, module, params):
+        unittest.TestCase.__init__(self)
+        self.module = module
+
+        # Extract the parameters
+        params = params.copy()
+        self.description = _extract(params, 'description')
+        self.key = b(_extract(params, 'key'))
+        self.plaintext = b(_extract(params, 'plaintext'))
+        self.ciphertext = b(_extract(params, 'ciphertext'))
+        self.module_name = _extract(params, 'module_name', None)
+        self.assoc_data = _extract(params, 'assoc_data', None)
+        self.mac = _extract(params, 'mac', None)
+        if self.assoc_data:
+            self.mac = b(self.mac)
+
+        mode = _extract(params, 'mode', None)
+        self.mode_name = str(mode)
+
+        if mode is not None:
+            # Block cipher
+            self.mode = getattr(self.module, "MODE_" + mode)
+
+            self.iv = _extract(params, 'iv', None)
+            if self.iv is None:
+                self.iv = _extract(params, 'nonce', None)
+            if self.iv is not None:
+                self.iv = b(self.iv)
+
+        else:
+            # Stream cipher
+            self.mode = None
+            self.iv = _extract(params, 'iv', None)
+            if self.iv is not None:
+                self.iv = b(self.iv)
+
+        self.extra_params = params
+
+    def shortDescription(self):
+        return self.description
+
+    def _new(self):
+        params = self.extra_params.copy()
+        key = a2b_hex(self.key)
+
+        old_style = []
+        if self.mode is not None:
+            old_style = [ self.mode ]
+        if self.iv is not None:
+            old_style += [ a2b_hex(self.iv) ]
+
+        return self.module.new(key, *old_style, **params)
+
+    def isMode(self, name):
+        if not hasattr(self.module, "MODE_"+name):
+            return False
+        return self.mode == getattr(self.module, "MODE_"+name)
+
+    def runTest(self):
+        plaintext = a2b_hex(self.plaintext)
+        ciphertext = a2b_hex(self.ciphertext)
+        assoc_data = []
+        if self.assoc_data:
+            assoc_data = [ a2b_hex(b(x)) for x in self.assoc_data]
+
+        ct = None
+        pt = None
+
+        #
+        # Repeat the same encryption or decryption twice and verify
+        # that the result is always the same
+        #
+        for i in range(2):
+            cipher = self._new()
+            decipher = self._new()
+
+            # Only AEAD modes
+            for comp in assoc_data:
+                cipher.update(comp)
+                decipher.update(comp)
+
+            ctX = b2a_hex(cipher.encrypt(plaintext))
+            ptX = b2a_hex(decipher.decrypt(ciphertext))
+
+            if ct:
+                self.assertEqual(ct, ctX)
                self.assertEqual(pt, ptX)
+            ct, pt = ctX, ptX
+
+        self.assertEqual(self.ciphertext, ct)  # encrypt
+        self.assertEqual(self.plaintext, pt)   # decrypt
+
+        if self.mac:
+            mac = b2a_hex(cipher.digest())
+            self.assertEqual(self.mac, mac)
+            decipher.verify(a2b_hex(self.mac))
+
+class CipherStreamingSelfTest(CipherSelfTest):
+
+    def shortDescription(self):
+        desc = self.module_name
+        if self.mode is not None:
+            desc += " in %s mode" % (self.mode_name,)
+        return "%s should behave like a stream cipher" % (desc,)
+
+    def runTest(self):
+        plaintext = a2b_hex(self.plaintext)
+        ciphertext = a2b_hex(self.ciphertext)
+
+        # The cipher should work like a stream cipher
+
+        # Test counter mode encryption, 3 bytes at a time
+        ct3 = []
+        cipher = self._new()
+        for i in range(0, len(plaintext), 3):
+            ct3.append(cipher.encrypt(plaintext[i:i+3]))
+        ct3 = b2a_hex(b("").join(ct3))
+        self.assertEqual(self.ciphertext, ct3)  # encryption (3 bytes at a time)
+
+        # Test counter mode decryption, 3 bytes at a time
+        pt3 = []
+        cipher = self._new()
+        for i in range(0, len(ciphertext), 3):
+            pt3.append(cipher.decrypt(ciphertext[i:i+3]))
+        # PY3K: This is meant to be text, do not change to bytes (data)
+        pt3 = b2a_hex(b("").join(pt3))
+        self.assertEqual(self.plaintext, pt3)  # decryption (3 bytes at a time)
+
+
+class RoundtripTest(unittest.TestCase):
+    def __init__(self, module, params):
+        from Crypto import Random
+        unittest.TestCase.__init__(self)
+        self.module = module
+        self.iv = Random.get_random_bytes(module.block_size)
+        self.key = b(params['key'])
+        self.plaintext = 100 * b(params['plaintext'])
+        self.module_name = params.get('module_name', None)
+
+    def shortDescription(self):
+        return """%s .decrypt() output of .encrypt() should not be garbled""" % (self.module_name,)
+
+    def runTest(self):
+
+        ## ECB mode
+        mode = self.module.MODE_ECB
+        encryption_cipher = self.module.new(a2b_hex(self.key), mode)
+        ciphertext = encryption_cipher.encrypt(self.plaintext)
+        decryption_cipher = self.module.new(a2b_hex(self.key), mode)
+        decrypted_plaintext = decryption_cipher.decrypt(ciphertext)
+        self.assertEqual(self.plaintext, decrypted_plaintext)
+
+
+class IVLengthTest(unittest.TestCase):
+    def __init__(self, module, params):
+        unittest.TestCase.__init__(self)
+        self.module = module
+        self.key = b(params['key'])
+
+    def shortDescription(self):
+        return "Check that all modes except MODE_ECB and MODE_CTR require an IV of the proper length"
+
+    def runTest(self):
+        self.assertRaises(TypeError, self.module.new, a2b_hex(self.key),
+                          self.module.MODE_ECB, b(""))
+
+    def _dummy_counter(self):
+        return "\0" * self.module.block_size
+
+
+class NoDefaultECBTest(unittest.TestCase):
+    def __init__(self, module, params):
+        unittest.TestCase.__init__(self)
+        self.module = module
+        self.key = b(params['key'])
+
+    def runTest(self):
+        self.assertRaises(TypeError, self.module.new, a2b_hex(self.key))
+
+
+class BlockSizeTest(unittest.TestCase):
+    def __init__(self, module, params):
+        unittest.TestCase.__init__(self)
+        self.module = module
+        self.key = a2b_hex(b(params['key']))
+
+    def runTest(self):
+        cipher = self.module.new(self.key, self.module.MODE_ECB)
+        self.assertEqual(cipher.block_size, self.module.block_size)
+
+
+class ByteArrayTest(unittest.TestCase):
+    """Verify we can use bytearray's for encrypting and decrypting"""
+
+    def __init__(self, module, params):
+        unittest.TestCase.__init__(self)
+        self.module = module
+
+        # Extract the parameters
+        params = params.copy()
+        self.description = _extract(params, 'description')
+        self.key =
b(_extract(params, 'key')) + self.plaintext = b(_extract(params, 'plaintext')) + self.ciphertext = b(_extract(params, 'ciphertext')) + self.module_name = _extract(params, 'module_name', None) + self.assoc_data = _extract(params, 'assoc_data', None) + self.mac = _extract(params, 'mac', None) + if self.assoc_data: + self.mac = b(self.mac) + + mode = _extract(params, 'mode', None) + self.mode_name = str(mode) + + if mode is not None: + # Block cipher + self.mode = getattr(self.module, "MODE_" + mode) + + self.iv = _extract(params, 'iv', None) + if self.iv is None: + self.iv = _extract(params, 'nonce', None) + if self.iv is not None: + self.iv = b(self.iv) + else: + # Stream cipher + self.mode = None + self.iv = _extract(params, 'iv', None) + if self.iv is not None: + self.iv = b(self.iv) + + self.extra_params = params + + def _new(self): + params = self.extra_params.copy() + key = a2b_hex(self.key) + + old_style = [] + if self.mode is not None: + old_style = [ self.mode ] + if self.iv is not None: + old_style += [ a2b_hex(self.iv) ] + + return self.module.new(key, *old_style, **params) + + def runTest(self): + + plaintext = a2b_hex(self.plaintext) + ciphertext = a2b_hex(self.ciphertext) + assoc_data = [] + if self.assoc_data: + assoc_data = [ bytearray(a2b_hex(b(x))) for x in self.assoc_data] + + cipher = self._new() + decipher = self._new() + + # Only AEAD modes + for comp in assoc_data: + cipher.update(comp) + decipher.update(comp) + + ct = b2a_hex(cipher.encrypt(bytearray(plaintext))) + pt = b2a_hex(decipher.decrypt(bytearray(ciphertext))) + + self.assertEqual(self.ciphertext, ct) # encrypt + self.assertEqual(self.plaintext, pt) # decrypt + + if self.mac: + mac = b2a_hex(cipher.digest()) + self.assertEqual(self.mac, mac) + decipher.verify(bytearray(a2b_hex(self.mac))) + + +class MemoryviewTest(unittest.TestCase): + """Verify we can use memoryviews for encrypting and decrypting""" + + def __init__(self, module, params): + unittest.TestCase.__init__(self) + self.module = module + + # Extract the parameters + params = params.copy() + self.description = _extract(params, 'description') + self.key = b(_extract(params, 'key')) + self.plaintext = b(_extract(params, 'plaintext')) + self.ciphertext = b(_extract(params, 'ciphertext')) + self.module_name = _extract(params, 'module_name', None) + self.assoc_data = _extract(params, 'assoc_data', None) + self.mac = _extract(params, 'mac', None) + if self.assoc_data: + self.mac = b(self.mac) + + mode = _extract(params, 'mode', None) + self.mode_name = str(mode) + + if mode is not None: + # Block cipher + self.mode = getattr(self.module, "MODE_" + mode) + + self.iv = _extract(params, 'iv', None) + if self.iv is None: + self.iv = _extract(params, 'nonce', None) + if self.iv is not None: + self.iv = b(self.iv) + else: + # Stream cipher + self.mode = None + self.iv = _extract(params, 'iv', None) + if self.iv is not None: + self.iv = b(self.iv) + + self.extra_params = params + + def _new(self): + params = self.extra_params.copy() + key = a2b_hex(self.key) + + old_style = [] + if self.mode is not None: + old_style = [ self.mode ] + if self.iv is not None: + old_style += [ a2b_hex(self.iv) ] + + return self.module.new(key, *old_style, **params) + + def runTest(self): + + plaintext = a2b_hex(self.plaintext) + ciphertext = a2b_hex(self.ciphertext) + assoc_data = [] + if self.assoc_data: + assoc_data = [ memoryview(a2b_hex(b(x))) for x in self.assoc_data] + + cipher = self._new() + decipher = self._new() + + # Only AEAD modes + for comp in assoc_data: + 
cipher.update(comp) + decipher.update(comp) + + ct = b2a_hex(cipher.encrypt(memoryview(plaintext))) + pt = b2a_hex(decipher.decrypt(memoryview(ciphertext))) + + self.assertEqual(self.ciphertext, ct) # encrypt + self.assertEqual(self.plaintext, pt) # decrypt + + if self.mac: + mac = b2a_hex(cipher.digest()) + self.assertEqual(self.mac, mac) + decipher.verify(memoryview(a2b_hex(self.mac))) + + +def make_block_tests(module, module_name, test_data, additional_params=dict()): + tests = [] + extra_tests_added = False + for i in range(len(test_data)): + row = test_data[i] + + # Build the "params" dictionary with + # - plaintext + # - ciphertext + # - key + # - mode (default is ECB) + # - (optionally) description + # - (optionally) any other parameter that this cipher mode requires + params = {} + if len(row) == 3: + (params['plaintext'], params['ciphertext'], params['key']) = row + elif len(row) == 4: + (params['plaintext'], params['ciphertext'], params['key'], params['description']) = row + elif len(row) == 5: + (params['plaintext'], params['ciphertext'], params['key'], params['description'], extra_params) = row + params.update(extra_params) + else: + raise AssertionError("Unsupported tuple size %d" % (len(row),)) + + if not "mode" in params: + params["mode"] = "ECB" + + # Build the display-name for the test + p2 = params.copy() + p_key = _extract(p2, 'key') + p_plaintext = _extract(p2, 'plaintext') + p_ciphertext = _extract(p2, 'ciphertext') + p_mode = _extract(p2, 'mode') + p_description = _extract(p2, 'description', None) + + if p_description is not None: + description = p_description + elif p_mode == 'ECB' and not p2: + description = "p=%s, k=%s" % (p_plaintext, p_key) + else: + description = "p=%s, k=%s, %r" % (p_plaintext, p_key, p2) + name = "%s #%d: %s" % (module_name, i+1, description) + params['description'] = name + params['module_name'] = module_name + params.update(additional_params) + + # Add extra test(s) to the test suite before the current test + if not extra_tests_added: + tests += [ + RoundtripTest(module, params), + IVLengthTest(module, params), + NoDefaultECBTest(module, params), + ByteArrayTest(module, params), + BlockSizeTest(module, params), + ] + extra_tests_added = True + + # Add the current test to the test suite + tests.append(CipherSelfTest(module, params)) + + return tests + +def make_stream_tests(module, module_name, test_data): + tests = [] + extra_tests_added = False + for i in range(len(test_data)): + row = test_data[i] + + # Build the "params" dictionary + params = {} + if len(row) == 3: + (params['plaintext'], params['ciphertext'], params['key']) = row + elif len(row) == 4: + (params['plaintext'], params['ciphertext'], params['key'], params['description']) = row + elif len(row) == 5: + (params['plaintext'], params['ciphertext'], params['key'], params['description'], extra_params) = row + params.update(extra_params) + else: + raise AssertionError("Unsupported tuple size %d" % (len(row),)) + + # Build the display-name for the test + p2 = params.copy() + p_key = _extract(p2, 'key') + p_plaintext = _extract(p2, 'plaintext') + p_ciphertext = _extract(p2, 'ciphertext') + p_description = _extract(p2, 'description', None) + + if p_description is not None: + description = p_description + elif not p2: + description = "p=%s, k=%s" % (p_plaintext, p_key) + else: + description = "p=%s, k=%s, %r" % (p_plaintext, p_key, p2) + name = "%s #%d: %s" % (module_name, i+1, description) + params['description'] = name + params['module_name'] = module_name + + # Add extra test(s) to 
the test suite before the current test + if not extra_tests_added: + tests += [ + ByteArrayTest(module, params), + ] + + tests.append(MemoryviewTest(module, params)) + extra_tests_added = True + + # Add the test to the test suite + tests.append(CipherSelfTest(module, params)) + tests.append(CipherStreamingSelfTest(module, params)) + return tests + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_AES.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_AES.py new file mode 100644 index 00000000..116deec6 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_AES.py @@ -0,0 +1,1351 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/AES.py: Self-test for the AES cipher +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Cipher.AES""" + +from __future__ import print_function + +import unittest +from Crypto.Hash import SHA256 +from Crypto.Cipher import AES +from Crypto.Util.py3compat import * +from binascii import hexlify + +# This is a list of (plaintext, ciphertext, key[, description[, params]]) tuples. 
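+# Each field is a hex-encoded string; make_block_tests (defined above) wraps each row in a +# CipherSelfTest and defaults the mode to ECB when no 'mode' parameter is supplied.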
+test_data = [ + # FIPS PUB 197 test vectors + # http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf + + ('00112233445566778899aabbccddeeff', '69c4e0d86a7b0430d8cdb78070b4c55a', + '000102030405060708090a0b0c0d0e0f', 'FIPS 197 C.1 (AES-128)'), + + ('00112233445566778899aabbccddeeff', 'dda97ca4864cdfe06eaf70a0ec0d7191', + '000102030405060708090a0b0c0d0e0f1011121314151617', + 'FIPS 197 C.2 (AES-192)'), + + ('00112233445566778899aabbccddeeff', '8ea2b7ca516745bfeafc49904b496089', + '000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f', + 'FIPS 197 C.3 (AES-256)'), + + # Rijndael128 test vectors + # Downloaded 2008-09-13 from + # http://www.iaik.tugraz.at/Research/krypto/AES/old/~rijmen/rijndael/testvalues.tar.gz + + # ecb_tbl.txt, KEYSIZE=128 + ('506812a45f08c889b97f5980038b8359', 'd8f532538289ef7d06b506a4fd5be9c9', + '00010203050607080a0b0c0d0f101112', + 'ecb-tbl-128: I=1'), + ('5c6d71ca30de8b8b00549984d2ec7d4b', '59ab30f4d4ee6e4ff9907ef65b1fb68c', + '14151617191a1b1c1e1f202123242526', + 'ecb-tbl-128: I=2'), + ('53f3f4c64f8616e4e7c56199f48f21f6', 'bf1ed2fcb2af3fd41443b56d85025cb1', + '28292a2b2d2e2f30323334353738393a', + 'ecb-tbl-128: I=3'), + ('a1eb65a3487165fb0f1c27ff9959f703', '7316632d5c32233edcb0780560eae8b2', + '3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-128: I=4'), + ('3553ecf0b1739558b08e350a98a39bfa', '408c073e3e2538072b72625e68b8364b', + '50515253555657585a5b5c5d5f606162', + 'ecb-tbl-128: I=5'), + ('67429969490b9711ae2b01dc497afde8', 'e1f94dfa776597beaca262f2f6366fea', + '64656667696a6b6c6e6f707173747576', + 'ecb-tbl-128: I=6'), + ('93385c1f2aec8bed192f5a8e161dd508', 'f29e986c6a1c27d7b29ffd7ee92b75f1', + '78797a7b7d7e7f80828384858788898a', + 'ecb-tbl-128: I=7'), + ('b5bf946be19beb8db3983b5f4c6e8ddb', '131c886a57f8c2e713aba6955e2b55b5', + '8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-128: I=8'), + ('41321ee10e21bd907227c4450ff42324', 'd2ab7662df9b8c740210e5eeb61c199d', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2', + 'ecb-tbl-128: I=9'), + ('00a82f59c91c8486d12c0a80124f6089', '14c10554b2859c484cab5869bbe7c470', + 'b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-128: I=10'), + ('7ce0fd076754691b4bbd9faf8a1372fe', 'db4d498f0a49cf55445d502c1f9ab3b5', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9da', + 'ecb-tbl-128: I=11'), + ('23605a8243d07764541bc5ad355b3129', '6d96fef7d66590a77a77bb2056667f7f', + 'dcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-128: I=12'), + ('12a8cfa23ea764fd876232b4e842bc44', '316fb68edba736c53e78477bf913725c', + 'f0f1f2f3f5f6f7f8fafbfcfdfe010002', + 'ecb-tbl-128: I=13'), + ('bcaf32415e8308b3723e5fdd853ccc80', '6936f2b93af8397fd3a771fc011c8c37', + '04050607090a0b0c0e0f101113141516', + 'ecb-tbl-128: I=14'), + ('89afae685d801ad747ace91fc49adde0', 'f3f92f7a9c59179c1fcc2c2ba0b082cd', + '2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-128: I=15'), + ('f521d07b484357c4a69e76124a634216', '6a95ea659ee3889158e7a9152ff04ebc', + '40414243454647484a4b4c4d4f505152', + 'ecb-tbl-128: I=16'), + ('3e23b3bc065bcc152407e23896d77783', '1959338344e945670678a5d432c90b93', + '54555657595a5b5c5e5f606163646566', + 'ecb-tbl-128: I=17'), + ('79f0fba002be1744670e7e99290d8f52', 'e49bddd2369b83ee66e6c75a1161b394', + '68696a6b6d6e6f70727374757778797a', + 'ecb-tbl-128: I=18'), + ('da23fe9d5bd63e1d72e3dafbe21a6c2a', 'd3388f19057ff704b70784164a74867d', + '7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-128: I=19'), + ('e3f5698ba90b6a022efd7db2c7e6c823', '23aa03e2d5e4cd24f3217e596480d1e1', + 'a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-128: I=20'), + ('bdc2691d4f1b73d2700679c3bcbf9c6e', 
'c84113d68b666ab2a50a8bdb222e91b9', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2', + 'ecb-tbl-128: I=21'), + ('ba74e02093217ee1ba1b42bd5624349a', 'ac02403981cd4340b507963db65cb7b6', + '08090a0b0d0e0f10121314151718191a', + 'ecb-tbl-128: I=22'), + ('b5c593b5851c57fbf8b3f57715e8f680', '8d1299236223359474011f6bf5088414', + '6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-128: I=23'), + ('3da9bd9cec072381788f9387c3bbf4ee', '5a1d6ab8605505f7977e55b9a54d9b90', + '80818283858687888a8b8c8d8f909192', + 'ecb-tbl-128: I=24'), + ('4197f3051121702ab65d316b3c637374', '72e9c2d519cf555e4208805aabe3b258', + '94959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-128: I=25'), + ('9f46c62ec4f6ee3f6e8c62554bc48ab7', 'a8f3e81c4a23a39ef4d745dffe026e80', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9ba', + 'ecb-tbl-128: I=26'), + ('0220673fe9e699a4ebc8e0dbeb6979c8', '546f646449d31458f9eb4ef5483aee6c', + 'bcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-128: I=27'), + ('b2b99171337ded9bc8c2c23ff6f18867', '4dbe4bc84ac797c0ee4efb7f1a07401c', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2', + 'ecb-tbl-128: I=28'), + ('a7facf4e301e984e5efeefd645b23505', '25e10bfb411bbd4d625ac8795c8ca3b3', + 'e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-128: I=29'), + ('f7c762e4a9819160fd7acfb6c4eedcdd', '315637405054ec803614e43def177579', + 'f8f9fafbfdfefe00020304050708090a', + 'ecb-tbl-128: I=30'), + ('9b64fc21ea08709f4915436faa70f1be', '60c5bc8a1410247295c6386c59e572a8', + '0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-128: I=31'), + ('52af2c3de07ee6777f55a4abfc100b3f', '01366fc8ca52dfe055d6a00a76471ba6', + '20212223252627282a2b2c2d2f303132', + 'ecb-tbl-128: I=32'), + ('2fca001224386c57aa3f968cbe2c816f', 'ecc46595516ec612449c3f581e7d42ff', + '34353637393a3b3c3e3f404143444546', + 'ecb-tbl-128: I=33'), + ('4149c73658a4a9c564342755ee2c132f', '6b7ffe4c602a154b06ee9c7dab5331c9', + '48494a4b4d4e4f50525354555758595a', + 'ecb-tbl-128: I=34'), + ('af60005a00a1772f7c07a48a923c23d2', '7da234c14039a240dd02dd0fbf84eb67', + '5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-128: I=35'), + ('6fccbc28363759914b6f0280afaf20c6', 'c7dc217d9e3604ffe7e91f080ecd5a3a', + '70717273757677787a7b7c7d7f808182', + 'ecb-tbl-128: I=36'), + ('7d82a43ddf4fefa2fc5947499884d386', '37785901863f5c81260ea41e7580cda5', + '84858687898a8b8c8e8f909193949596', + 'ecb-tbl-128: I=37'), + ('5d5a990eaab9093afe4ce254dfa49ef9', 'a07b9338e92ed105e6ad720fccce9fe4', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aa', + 'ecb-tbl-128: I=38'), + ('4cd1e2fd3f4434b553aae453f0ed1a02', 'ae0fb9722418cc21a7da816bbc61322c', + 'acadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-128: I=39'), + ('5a2c9a9641d4299125fa1b9363104b5e', 'c826a193080ff91ffb21f71d3373c877', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2', + 'ecb-tbl-128: I=40'), + ('b517fe34c0fa217d341740bfd4fe8dd4', '1181b11b0e494e8d8b0aa6b1d5ac2c48', + 'd4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-128: I=41'), + ('014baf2278a69d331d5180103643e99a', '6743c3d1519ab4f2cd9a78ab09a511bd', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fa', + 'ecb-tbl-128: I=42'), + ('b529bd8164f20d0aa443d4932116841c', 'dc55c076d52bacdf2eefd952946a439d', + 'fcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-128: I=43'), + ('2e596dcbb2f33d4216a1176d5bd1e456', '711b17b590ffc72b5c8e342b601e8003', + '10111213151617181a1b1c1d1f202122', + 'ecb-tbl-128: I=44'), + ('7274a1ea2b7ee2424e9a0e4673689143', '19983bb0950783a537e1339f4aa21c75', + '24252627292a2b2c2e2f303133343536', + 'ecb-tbl-128: I=45'), + ('ae20020bd4f13e9d90140bee3b5d26af', '3ba7762e15554169c0f4fa39164c410c', + '38393a3b3d3e3f40424344454748494a', + 'ecb-tbl-128: I=46'), + 
('baac065da7ac26e855e79c8849d75a02', 'a0564c41245afca7af8aa2e0e588ea89', + '4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-128: I=47'), + ('7c917d8d1d45fab9e2540e28832540cc', '5e36a42a2e099f54ae85ecd92e2381ed', + '60616263656667686a6b6c6d6f707172', + 'ecb-tbl-128: I=48'), + ('bde6f89e16daadb0e847a2a614566a91', '770036f878cd0f6ca2268172f106f2fe', + '74757677797a7b7c7e7f808183848586', + 'ecb-tbl-128: I=49'), + ('c9de163725f1f5be44ebb1db51d07fbc', '7e4e03908b716116443ccf7c94e7c259', + '88898a8b8d8e8f90929394959798999a', + 'ecb-tbl-128: I=50'), + ('3af57a58f0c07dffa669572b521e2b92', '482735a48c30613a242dd494c7f9185d', + '9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-128: I=51'), + ('3d5ebac306dde4604f1b4fbbbfcdae55', 'b4c0f6c9d4d7079addf9369fc081061d', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2', + 'ecb-tbl-128: I=52'), + ('c2dfa91bceb76a1183c995020ac0b556', 'd5810fe0509ac53edcd74f89962e6270', + 'c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-128: I=53'), + ('c70f54305885e9a0746d01ec56c8596b', '03f17a16b3f91848269ecdd38ebb2165', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9ea', + 'ecb-tbl-128: I=54'), + ('c4f81b610e98012ce000182050c0c2b2', 'da1248c3180348bad4a93b4d9856c9df', + 'ecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-128: I=55'), + ('eaab86b1d02a95d7404eff67489f97d4', '3d10d7b63f3452c06cdf6cce18be0c2c', + '00010203050607080a0b0c0d0f101112', + 'ecb-tbl-128: I=56'), + ('7c55bdb40b88870b52bec3738de82886', '4ab823e7477dfddc0e6789018fcb6258', + '14151617191a1b1c1e1f202123242526', + 'ecb-tbl-128: I=57'), + ('ba6eaa88371ff0a3bd875e3f2a975ce0', 'e6478ba56a77e70cfdaa5c843abde30e', + '28292a2b2d2e2f30323334353738393a', + 'ecb-tbl-128: I=58'), + ('08059130c4c24bd30cf0575e4e0373dc', '1673064895fbeaf7f09c5429ff75772d', + '3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-128: I=59'), + ('9a8eab004ef53093dfcf96f57e7eda82', '4488033ae9f2efd0ca9383bfca1a94e9', + '50515253555657585a5b5c5d5f606162', + 'ecb-tbl-128: I=60'), + ('0745b589e2400c25f117b1d796c28129', '978f3b8c8f9d6f46626cac3c0bcb9217', + '64656667696a6b6c6e6f707173747576', + 'ecb-tbl-128: I=61'), + ('2f1777781216cec3f044f134b1b92bbe', 'e08c8a7e582e15e5527f1d9e2eecb236', + '78797a7b7d7e7f80828384858788898a', + 'ecb-tbl-128: I=62'), + ('353a779ffc541b3a3805d90ce17580fc', 'cec155b76ac5ffda4cf4f9ca91e49a7a', + '8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-128: I=63'), + ('1a1eae4415cefcf08c4ac1c8f68bea8f', 'd5ac7165763225dd2a38cdc6862c29ad', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2', + 'ecb-tbl-128: I=64'), + ('e6e7e4e5b0b3b2b5d4d5aaab16111013', '03680fe19f7ce7275452020be70e8204', + 'b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-128: I=65'), + ('f8f9fafbfbf8f9e677767170efe0e1e2', '461df740c9781c388e94bb861ceb54f6', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9da', + 'ecb-tbl-128: I=66'), + ('63626160a1a2a3a445444b4a75727370', '451bd60367f96483042742219786a074', + 'dcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-128: I=67'), + ('717073720605040b2d2c2b2a05fafbf9', 'e4dfa42671a02e57ef173b85c0ea9f2b', + 'f0f1f2f3f5f6f7f8fafbfcfdfe010002', + 'ecb-tbl-128: I=68'), + ('78797a7beae9e8ef3736292891969794', 'ed11b89e76274282227d854700a78b9e', + '04050607090a0b0c0e0f101113141516', + 'ecb-tbl-128: I=69'), + ('838281803231300fdddcdbdaa0afaead', '433946eaa51ea47af33895f2b90b3b75', + '18191a1b1d1e1f20222324252728292a', + 'ecb-tbl-128: I=70'), + ('18191a1bbfbcbdba75747b7a7f78797a', '6bc6d616a5d7d0284a5910ab35022528', + '2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-128: I=71'), + ('848586879b989996a3a2a5a4849b9a99', 'd2a920ecfe919d354b5f49eae9719c98', + '40414243454647484a4b4c4d4f505152', + 'ecb-tbl-128: 
I=72'), + ('0001020322212027cacbf4f551565754', '3a061b17f6a92885efbd0676985b373d', + '54555657595a5b5c5e5f606163646566', + 'ecb-tbl-128: I=73'), + ('cecfcccdafacadb2515057564a454447', 'fadeec16e33ea2f4688499d157e20d8f', + '68696a6b6d6e6f70727374757778797a', + 'ecb-tbl-128: I=74'), + ('92939091cdcecfc813121d1c80878685', '5cdefede59601aa3c3cda36fa6b1fa13', + '7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-128: I=75'), + ('d2d3d0d16f6c6d6259585f5ed1eeefec', '9574b00039844d92ebba7ee8719265f8', + '90919293959697989a9b9c9d9fa0a1a2', + 'ecb-tbl-128: I=76'), + ('acadaeaf878485820f0e1110d5d2d3d0', '9a9cf33758671787e5006928188643fa', + 'a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-128: I=77'), + ('9091929364676619e6e7e0e1757a7b78', '2cddd634c846ba66bb46cbfea4a674f9', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9ca', + 'ecb-tbl-128: I=78'), + ('babbb8b98a89888f74757a7b92959497', 'd28bae029393c3e7e26e9fafbbb4b98f', + 'cccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-128: I=79'), + ('8d8c8f8e6e6d6c633b3a3d3ccad5d4d7', 'ec27529b1bee0a9ab6a0d73ebc82e9b7', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2', + 'ecb-tbl-128: I=80'), + ('86878485010203040808f7f767606162', '3cb25c09472aff6ee7e2b47ccd7ccb17', + 'f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-128: I=81'), + ('8e8f8c8d656667788a8b8c8d010e0f0c', 'dee33103a7283370d725e44ca38f8fe5', + '08090a0b0d0e0f10121314151718191a', + 'ecb-tbl-128: I=82'), + ('c8c9cacb858687807a7b7475e7e0e1e2', '27f9bcd1aac64bffc11e7815702c1a69', + '1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-128: I=83'), + ('6d6c6f6e5053525d8c8d8a8badd2d3d0', '5df534ffad4ed0749a9988e9849d0021', + '30313233353637383a3b3c3d3f404142', + 'ecb-tbl-128: I=84'), + ('28292a2b393a3b3c0607181903040506', 'a48bee75db04fb60ca2b80f752a8421b', + '44454647494a4b4c4e4f505153545556', + 'ecb-tbl-128: I=85'), + ('a5a4a7a6b0b3b28ddbdadddcbdb2b3b0', '024c8cf70bc86ee5ce03678cb7af45f9', + '58595a5b5d5e5f60626364656768696a', + 'ecb-tbl-128: I=86'), + ('323330316467666130313e3f2c2b2a29', '3c19ac0f8a3a3862ce577831301e166b', + '6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-128: I=87'), + ('27262524080b0a05171611100b141516', 'c5e355b796a57421d59ca6be82e73bca', + '80818283858687888a8b8c8d8f909192', + 'ecb-tbl-128: I=88'), + ('040506074142434435340b0aa3a4a5a6', 'd94033276417abfb05a69d15b6e386e2', + '94959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-128: I=89'), + ('242526271112130c61606766bdb2b3b0', '24b36559ea3a9b9b958fe6da3e5b8d85', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9ba', + 'ecb-tbl-128: I=90'), + ('4b4a4948252627209e9f9091cec9c8cb', '20fd4feaa0e8bf0cce7861d74ef4cb72', + 'bcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-128: I=91'), + ('68696a6b6665646b9f9e9998d9e6e7e4', '350e20d5174277b9ec314c501570a11d', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2', + 'ecb-tbl-128: I=92'), + ('34353637c5c6c7c0f0f1eeef7c7b7a79', '87a29d61b7c604d238fe73045a7efd57', + 'e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-128: I=93'), + ('32333031c2c1c13f0d0c0b0a050a0b08', '2c3164c1cc7d0064816bdc0faa362c52', + 'f8f9fafbfdfefe00020304050708090a', + 'ecb-tbl-128: I=94'), + ('cdcccfcebebdbcbbabaaa5a4181f1e1d', '195fe5e8a05a2ed594f6e4400eee10b3', + '0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-128: I=95'), + ('212023223635343ba0a1a6a7445b5a59', 'e4663df19b9a21a5a284c2bd7f905025', + '20212223252627282a2b2c2d2f303132', + 'ecb-tbl-128: I=96'), + ('0e0f0c0da8abaaad2f2e515002050407', '21b88714cfb4e2a933bd281a2c4743fd', + '34353637393a3b3c3e3f404143444546', + 'ecb-tbl-128: I=97'), + ('070605042a2928378e8f8889bdb2b3b0', 'cbfc3980d704fd0fc54378ab84e17870', + '48494a4b4d4e4f50525354555758595a', + 
'ecb-tbl-128: I=98'), + ('cbcac9c893909196a9a8a7a6a5a2a3a0', 'bc5144baa48bdeb8b63e22e03da418ef', + '5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-128: I=99'), + ('80818283c1c2c3cc9c9d9a9b0cf3f2f1', '5a1dbaef1ee2984b8395da3bdffa3ccc', + '70717273757677787a7b7c7d7f808182', + 'ecb-tbl-128: I=100'), + ('1213101125262720fafbe4e5b1b6b7b4', 'f0b11cd0729dfcc80cec903d97159574', + '84858687898a8b8c8e8f909193949596', + 'ecb-tbl-128: I=101'), + ('7f7e7d7c3033320d97969190222d2c2f', '9f95314acfddc6d1914b7f19a9cc8209', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aa', + 'ecb-tbl-128: I=102'), + ('4e4f4c4d484b4a4d81808f8e53545556', '595736f6f0f70914a94e9e007f022519', + 'acadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-128: I=103'), + ('dcdddedfb0b3b2bd15141312a1bebfbc', '1f19f57892cae586fcdfb4c694deb183', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2', + 'ecb-tbl-128: I=104'), + ('93929190282b2a2dc4c5fafb92959497', '540700ee1f6f3dab0b3eddf6caee1ef5', + 'd4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-128: I=105'), + ('f5f4f7f6c4c7c6d9373631307e717073', '14a342a91019a331687a2254e6626ca2', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fa', + 'ecb-tbl-128: I=106'), + ('93929190b6b5b4b364656a6b05020300', '7b25f3c3b2eea18d743ef283140f29ff', + 'fcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-128: I=107'), + ('babbb8b90d0e0f00a4a5a2a3043b3a39', '46c2587d66e5e6fa7f7ca6411ad28047', + '10111213151617181a1b1c1d1f202122', + 'ecb-tbl-128: I=108'), + ('d8d9dadb7f7c7d7a10110e0f787f7e7d', '09470e72229d954ed5ee73886dfeeba9', + '24252627292a2b2c2e2f303133343536', + 'ecb-tbl-128: I=109'), + ('fefffcfdefeced923b3a3d3c6768696a', 'd77c03de92d4d0d79ef8d4824ef365eb', + '38393a3b3d3e3f40424344454748494a', + 'ecb-tbl-128: I=110'), + ('d6d7d4d58a89888f96979899a5a2a3a0', '1d190219f290e0f1715d152d41a23593', + '4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-128: I=111'), + ('18191a1ba8abaaa5303136379b848586', 'a2cd332ce3a0818769616292e87f757b', + '60616263656667686a6b6c6d6f707172', + 'ecb-tbl-128: I=112'), + ('6b6a6968a4a7a6a1d6d72829b0b7b6b5', 'd54afa6ce60fbf9341a3690e21385102', + '74757677797a7b7c7e7f808183848586', + 'ecb-tbl-128: I=113'), + ('000102038a89889755545352a6a9a8ab', '06e5c364ded628a3f5e05e613e356f46', + '88898a8b8d8e8f90929394959798999a', + 'ecb-tbl-128: I=114'), + ('2d2c2f2eb3b0b1b6b6b7b8b9f2f5f4f7', 'eae63c0e62556dac85d221099896355a', + '9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-128: I=115'), + ('979695943536373856575051e09f9e9d', '1fed060e2c6fc93ee764403a889985a2', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2', + 'ecb-tbl-128: I=116'), + ('a4a5a6a7989b9a9db1b0afae7a7d7c7f', 'c25235c1a30fdec1c7cb5c5737b2a588', + 'c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-128: I=117'), + ('c1c0c3c2686b6a55a8a9aeafeae5e4e7', '796dbef95147d4d30873ad8b7b92efc0', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9ea', + 'ecb-tbl-128: I=118'), + ('c1c0c3c2141716118c8d828364636261', 'cbcf0fb34d98d0bd5c22ce37211a46bf', + 'ecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-128: I=119'), + ('93929190cccfcec196979091e0fffefd', '94b44da6466126cafa7c7fd09063fc24', + '00010203050607080a0b0c0d0f101112', + 'ecb-tbl-128: I=120'), + ('b4b5b6b7f9fafbfc25241b1a6e69686b', 'd78c5b5ebf9b4dbda6ae506c5074c8fe', + '14151617191a1b1c1e1f202123242526', + 'ecb-tbl-128: I=121'), + ('868784850704051ac7c6c1c08788898a', '6c27444c27204b043812cf8cf95f9769', + '28292a2b2d2e2f30323334353738393a', + 'ecb-tbl-128: I=122'), + ('f4f5f6f7aaa9a8affdfcf3f277707172', 'be94524ee5a2aa50bba8b75f4c0aebcf', + '3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-128: I=123'), + ('d3d2d1d00605040bc3c2c5c43e010003', 'a0aeaae91ba9f31f51aeb3588cf3a39e', + 
'50515253555657585a5b5c5d5f606162', + 'ecb-tbl-128: I=124'), + ('73727170424140476a6b74750d0a0b08', '275297779c28266ef9fe4c6a13c08488', + '64656667696a6b6c6e6f707173747576', + 'ecb-tbl-128: I=125'), + ('c2c3c0c10a0908f754555253a1aeafac', '86523d92bb8672cb01cf4a77fd725882', + '78797a7b7d7e7f80828384858788898a', + 'ecb-tbl-128: I=126'), + ('6d6c6f6ef8fbfafd82838c8df8fffefd', '4b8327640e9f33322a04dd96fcbf9a36', + '8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-128: I=127'), + ('f5f4f7f684878689a6a7a0a1d2cdcccf', 'ce52af650d088ca559425223f4d32694', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2', + 'ecb-tbl-128: I=128'), + + # ecb_tbl.txt, KEYSIZE=192 + ('2d33eef2c0430a8a9ebf45e809c40bb6', 'dff4945e0336df4c1c56bc700eff837f', + '00010203050607080a0b0c0d0f10111214151617191a1b1c', + 'ecb-tbl-192: I=1'), + ('6aa375d1fa155a61fb72353e0a5a8756', 'b6fddef4752765e347d5d2dc196d1252', + '1e1f20212324252628292a2b2d2e2f30323334353738393a', + 'ecb-tbl-192: I=2'), + ('bc3736518b9490dcb8ed60eb26758ed4', 'd23684e3d963b3afcf1a114aca90cbd6', + '3c3d3e3f41424344464748494b4c4d4e5051525355565758', + 'ecb-tbl-192: I=3'), + ('aa214402b46cffb9f761ec11263a311e', '3a7ac027753e2a18c2ceab9e17c11fd0', + '5a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-192: I=4'), + ('02aea86e572eeab66b2c3af5e9a46fd6', '8f6786bd007528ba26603c1601cdd0d8', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394', + 'ecb-tbl-192: I=5'), + ('e2aef6acc33b965c4fa1f91c75ff6f36', 'd17d073b01e71502e28b47ab551168b3', + '969798999b9c9d9ea0a1a2a3a5a6a7a8aaabacadafb0b1b2', + 'ecb-tbl-192: I=6'), + ('0659df46427162b9434865dd9499f91d', 'a469da517119fab95876f41d06d40ffa', + 'b4b5b6b7b9babbbcbebfc0c1c3c4c5c6c8c9cacbcdcecfd0', + 'ecb-tbl-192: I=7'), + ('49a44239c748feb456f59c276a5658df', '6091aa3b695c11f5c0b6ad26d3d862ff', + 'd2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-192: I=8'), + ('66208f6e9d04525bdedb2733b6a6be37', '70f9e67f9f8df1294131662dc6e69364', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c', + 'ecb-tbl-192: I=9'), + ('3393f8dfc729c97f5480b950bc9666b0', 'd154dcafad8b207fa5cbc95e9996b559', + '0e0f10111314151618191a1b1d1e1f20222324252728292a', + 'ecb-tbl-192: I=10'), + ('606834c8ce063f3234cf1145325dbd71', '4934d541e8b46fa339c805a7aeb9e5da', + '2c2d2e2f31323334363738393b3c3d3e4041424345464748', + 'ecb-tbl-192: I=11'), + ('fec1c04f529bbd17d8cecfcc4718b17f', '62564c738f3efe186e1a127a0c4d3c61', + '4a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-192: I=12'), + ('32df99b431ed5dc5acf8caf6dc6ce475', '07805aa043986eb23693e23bef8f3438', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384', + 'ecb-tbl-192: I=13'), + ('7fdc2b746f3f665296943b83710d1f82', 'df0b4931038bade848dee3b4b85aa44b', + '868788898b8c8d8e90919293959697989a9b9c9d9fa0a1a2', + 'ecb-tbl-192: I=14'), + ('8fba1510a3c5b87e2eaa3f7a91455ca2', '592d5fded76582e4143c65099309477c', + 'a4a5a6a7a9aaabacaeafb0b1b3b4b5b6b8b9babbbdbebfc0', + 'ecb-tbl-192: I=15'), + ('2c9b468b1c2eed92578d41b0716b223b', 'c9b8d6545580d3dfbcdd09b954ed4e92', + 'c2c3c4c5c7c8c9cacccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-192: I=16'), + ('0a2bbf0efc6bc0034f8a03433fca1b1a', '5dccd5d6eb7c1b42acb008201df707a0', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2f4f5f6f7f9fafbfc', + 'ecb-tbl-192: I=17'), + ('25260e1f31f4104d387222e70632504b', 'a2a91682ffeb6ed1d34340946829e6f9', + 'fefe01010304050608090a0b0d0e0f10121314151718191a', + 'ecb-tbl-192: I=18'), + ('c527d25a49f08a5228d338642ae65137', 'e45d185b797000348d9267960a68435d', + '1c1d1e1f21222324262728292b2c2d2e3031323335363738', + 'ecb-tbl-192: I=19'), + 
('3b49fc081432f5890d0e3d87e884a69e', '45e060dae5901cda8089e10d4f4c246b', + '3a3b3c3d3f40414244454647494a4b4c4e4f505153545556', + 'ecb-tbl-192: I=20'), + ('d173f9ed1e57597e166931df2754a083', 'f6951afacc0079a369c71fdcff45df50', + '58595a5b5d5e5f60626364656768696a6c6d6e6f71727374', + 'ecb-tbl-192: I=21'), + ('8c2b7cafa5afe7f13562daeae1adede0', '9e95e00f351d5b3ac3d0e22e626ddad6', + '767778797b7c7d7e80818283858687888a8b8c8d8f909192', + 'ecb-tbl-192: I=22'), + ('aaf4ec8c1a815aeb826cab741339532c', '9cb566ff26d92dad083b51fdc18c173c', + '94959697999a9b9c9e9fa0a1a3a4a5a6a8a9aaabadaeafb0', + 'ecb-tbl-192: I=23'), + ('40be8c5d9108e663f38f1a2395279ecf', 'c9c82766176a9b228eb9a974a010b4fb', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2e4e5e6e7e9eaebec', + 'ecb-tbl-192: I=24'), + ('0c8ad9bc32d43e04716753aa4cfbe351', 'd8e26aa02945881d5137f1c1e1386e88', + '2a2b2c2d2f30313234353637393a3b3c3e3f404143444546', + 'ecb-tbl-192: I=25'), + ('1407b1d5f87d63357c8dc7ebbaebbfee', 'c0e024ccd68ff5ffa4d139c355a77c55', + '48494a4b4d4e4f50525354555758595a5c5d5e5f61626364', + 'ecb-tbl-192: I=26'), + ('e62734d1ae3378c4549e939e6f123416', '0b18b3d16f491619da338640df391d43', + '84858687898a8b8c8e8f90919394959698999a9b9d9e9fa0', + 'ecb-tbl-192: I=27'), + ('5a752cff2a176db1a1de77f2d2cdee41', 'dbe09ac8f66027bf20cb6e434f252efc', + 'a2a3a4a5a7a8a9aaacadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-192: I=28'), + ('a9c8c3a4eabedc80c64730ddd018cd88', '6d04e5e43c5b9cbe05feb9606b6480fe', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2d4d5d6d7d9dadbdc', + 'ecb-tbl-192: I=29'), + ('ee9b3dbbdb86180072130834d305999a', 'dd1d6553b96be526d9fee0fbd7176866', + '1a1b1c1d1f20212224252627292a2b2c2e2f303133343536', + 'ecb-tbl-192: I=30'), + ('a7fa8c3586b8ebde7568ead6f634a879', '0260ca7e3f979fd015b0dd4690e16d2a', + '38393a3b3d3e3f40424344454748494a4c4d4e4f51525354', + 'ecb-tbl-192: I=31'), + ('37e0f4a87f127d45ac936fe7ad88c10a', '9893734de10edcc8a67c3b110b8b8cc6', + '929394959798999a9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-192: I=32'), + ('3f77d8b5d92bac148e4e46f697a535c5', '93b30b750516b2d18808d710c2ee84ef', + '464748494b4c4d4e50515253555657585a5b5c5d5f606162', + 'ecb-tbl-192: I=33'), + ('d25ebb686c40f7e2c4da1014936571ca', '16f65fa47be3cb5e6dfe7c6c37016c0e', + '828384858788898a8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-192: I=34'), + ('4f1c769d1e5b0552c7eca84dea26a549', 'f3847210d5391e2360608e5acb560581', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2b4b5b6b7b9babbbc', + 'ecb-tbl-192: I=35'), + ('8548e2f882d7584d0fafc54372b6633a', '8754462cd223366d0753913e6af2643d', + 'bebfc0c1c3c4c5c6c8c9cacbcdcecfd0d2d3d4d5d7d8d9da', + 'ecb-tbl-192: I=36'), + ('87d7a336cb476f177cd2a51af2a62cdf', '1ea20617468d1b806a1fd58145462017', + 'dcdddedfe1e2e3e4e6e7e8e9ebecedeef0f1f2f3f5f6f7f8', + 'ecb-tbl-192: I=37'), + ('03b1feac668c4e485c1065dfc22b44ee', '3b155d927355d737c6be9dda60136e2e', + 'fafbfcfdfe01000204050607090a0b0c0e0f101113141516', + 'ecb-tbl-192: I=38'), + ('bda15e66819fa72d653a6866aa287962', '26144f7b66daa91b6333dbd3850502b3', + '18191a1b1d1e1f20222324252728292a2c2d2e2f31323334', + 'ecb-tbl-192: I=39'), + ('4d0c7a0d2505b80bf8b62ceb12467f0a', 'e4f9a4ab52ced8134c649bf319ebcc90', + '363738393b3c3d3e40414243454647484a4b4c4d4f505152', + 'ecb-tbl-192: I=40'), + ('626d34c9429b37211330986466b94e5f', 'b9ddd29ac6128a6cab121e34a4c62b36', + '54555657595a5b5c5e5f60616364656668696a6b6d6e6f70', + 'ecb-tbl-192: I=41'), + ('333c3e6bf00656b088a17e5ff0e7f60a', '6fcddad898f2ce4eff51294f5eaaf5c9', + '727374757778797a7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-192: I=42'), + 
('687ed0cdc0d2a2bc8c466d05ef9d2891', 'c9a6fe2bf4028080bea6f7fc417bd7e3', + '90919293959697989a9b9c9d9fa0a1a2a4a5a6a7a9aaabac', + 'ecb-tbl-192: I=43'), + ('487830e78cc56c1693e64b2a6660c7b6', '6a2026846d8609d60f298a9c0673127f', + 'aeafb0b1b3b4b5b6b8b9babbbdbebfc0c2c3c4c5c7c8c9ca', + 'ecb-tbl-192: I=44'), + ('7a48d6b7b52b29392aa2072a32b66160', '2cb25c005e26efea44336c4c97a4240b', + 'cccdcecfd1d2d3d4d6d7d8d9dbdcdddee0e1e2e3e5e6e7e8', + 'ecb-tbl-192: I=45'), + ('907320e64c8c5314d10f8d7a11c8618d', '496967ab8680ddd73d09a0e4c7dcc8aa', + 'eaebecedeff0f1f2f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-192: I=46'), + ('b561f2ca2d6e65a4a98341f3ed9ff533', 'd5af94de93487d1f3a8c577cb84a66a4', + '08090a0b0d0e0f10121314151718191a1c1d1e1f21222324', + 'ecb-tbl-192: I=47'), + ('df769380d212792d026f049e2e3e48ef', '84bdac569cae2828705f267cc8376e90', + '262728292b2c2d2e30313233353637383a3b3c3d3f404142', + 'ecb-tbl-192: I=48'), + ('79f374bc445bdabf8fccb8843d6054c6', 'f7401dda5ad5ab712b7eb5d10c6f99b6', + '44454647494a4b4c4e4f50515354555658595a5b5d5e5f60', + 'ecb-tbl-192: I=49'), + ('4e02f1242fa56b05c68dbae8fe44c9d6', '1c9d54318539ebd4c3b5b7e37bf119f0', + '626364656768696a6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-192: I=50'), + ('cf73c93cbff57ac635a6f4ad2a4a1545', 'aca572d65fb2764cffd4a6eca090ea0d', + '80818283858687888a8b8c8d8f90919294959697999a9b9c', + 'ecb-tbl-192: I=51'), + ('9923548e2875750725b886566784c625', '36d9c627b8c2a886a10ccb36eae3dfbb', + '9e9fa0a1a3a4a5a6a8a9aaabadaeafb0b2b3b4b5b7b8b9ba', + 'ecb-tbl-192: I=52'), + ('4888336b723a022c9545320f836a4207', '010edbf5981e143a81d646e597a4a568', + 'bcbdbebfc1c2c3c4c6c7c8c9cbcccdced0d1d2d3d5d6d7d8', + 'ecb-tbl-192: I=53'), + ('f84d9a5561b0608b1160dee000c41ba8', '8db44d538dc20cc2f40f3067fd298e60', + 'dadbdcdddfe0e1e2e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-192: I=54'), + ('c23192a0418e30a19b45ae3e3625bf22', '930eb53bc71e6ac4b82972bdcd5aafb3', + 'f8f9fafbfdfefe00020304050708090a0c0d0e0f11121314', + 'ecb-tbl-192: I=55'), + ('b84e0690b28b0025381ad82a15e501a7', '6c42a81edcbc9517ccd89c30c95597b4', + '161718191b1c1d1e20212223252627282a2b2c2d2f303132', + 'ecb-tbl-192: I=56'), + ('acef5e5c108876c4f06269f865b8f0b0', 'da389847ad06df19d76ee119c71e1dd3', + '34353637393a3b3c3e3f40414344454648494a4b4d4e4f50', + 'ecb-tbl-192: I=57'), + ('0f1b3603e0f5ddea4548246153a5e064', 'e018fdae13d3118f9a5d1a647a3f0462', + '525354555758595a5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-192: I=58'), + ('fbb63893450d42b58c6d88cd3c1809e3', '2aa65db36264239d3846180fabdfad20', + '70717273757677787a7b7c7d7f80818284858687898a8b8c', + 'ecb-tbl-192: I=59'), + ('4bef736df150259dae0c91354e8a5f92', '1472163e9a4f780f1ceb44b07ecf4fdb', + '8e8f90919394959698999a9b9d9e9fa0a2a3a4a5a7a8a9aa', + 'ecb-tbl-192: I=60'), + ('7d2d46242056ef13d3c3fc93c128f4c7', 'c8273fdc8f3a9f72e91097614b62397c', + 'acadaeafb1b2b3b4b6b7b8b9bbbcbdbec0c1c2c3c5c6c7c8', + 'ecb-tbl-192: I=61'), + ('e9c1ba2df415657a256edb33934680fd', '66c8427dcd733aaf7b3470cb7d976e3f', + 'cacbcccdcfd0d1d2d4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-192: I=62'), + ('e23ee277b0aa0a1dfb81f7527c3514f1', '146131cb17f1424d4f8da91e6f80c1d0', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fafcfdfeff01020304', + 'ecb-tbl-192: I=63'), + ('3e7445b0b63caaf75e4a911e12106b4c', '2610d0ad83659081ae085266a88770dc', + '060708090b0c0d0e10111213151617181a1b1c1d1f202122', + 'ecb-tbl-192: I=64'), + ('767774752023222544455a5be6e1e0e3', '38a2b5a974b0575c5d733917fb0d4570', + '24252627292a2b2c2e2f30313334353638393a3b3d3e3f40', + 'ecb-tbl-192: I=65'), + 
('72737475717e7f7ce9e8ebea696a6b6c', 'e21d401ebc60de20d6c486e4f39a588b', + '424344454748494a4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-192: I=66'), + ('dfdedddc25262728c9c8cfcef1eeefec', 'e51d5f88c670b079c0ca1f0c2c4405a2', + '60616263656667686a6b6c6d6f70717274757677797a7b7c', + 'ecb-tbl-192: I=67'), + ('fffe0100707776755f5e5d5c7675746b', '246a94788a642fb3d1b823c8762380c8', + '7e7f80818384858688898a8b8d8e8f90929394959798999a', + 'ecb-tbl-192: I=68'), + ('e0e1e2e3424140479f9e9190292e2f2c', 'b80c391c5c41a4c3b30c68e0e3d7550f', + '9c9d9e9fa1a2a3a4a6a7a8a9abacadaeb0b1b2b3b5b6b7b8', + 'ecb-tbl-192: I=69'), + ('2120272690efeeed3b3a39384e4d4c4b', 'b77c4754fc64eb9a1154a9af0bb1f21c', + 'babbbcbdbfc0c1c2c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-192: I=70'), + ('ecedeeef5350516ea1a0a7a6a3acadae', 'fb554de520d159a06bf219fc7f34a02f', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9eaecedeeeff1f2f3f4', + 'ecb-tbl-192: I=71'), + ('32333c3d25222320e9e8ebeacecdccc3', 'a89fba152d76b4927beed160ddb76c57', + 'f6f7f8f9fbfcfdfe00010203050607080a0b0c0d0f101112', + 'ecb-tbl-192: I=72'), + ('40414243626160678a8bb4b511161714', '5676eab4a98d2e8473b3f3d46424247c', + '14151617191a1b1c1e1f20212324252628292a2b2d2e2f30', + 'ecb-tbl-192: I=73'), + ('94959293f5fafbf81f1e1d1c7c7f7e79', '4e8f068bd7ede52a639036ec86c33568', + '323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-192: I=74'), + ('bebfbcbd191a1b14cfcec9c8546b6a69', 'f0193c4d7aff1791ee4c07eb4a1824fc', + '50515253555657585a5b5c5d5f60616264656667696a6b6c', + 'ecb-tbl-192: I=75'), + ('2c2d3233898e8f8cbbbab9b8333031ce', 'ac8686eeca9ba761afe82d67b928c33f', + '6e6f70717374757678797a7b7d7e7f80828384858788898a', + 'ecb-tbl-192: I=76'), + ('84858687bfbcbdba37363938fdfafbf8', '5faf8573e33b145b6a369cd3606ab2c9', + '8c8d8e8f91929394969798999b9c9d9ea0a1a2a3a5a6a7a8', + 'ecb-tbl-192: I=77'), + ('828384857669686b909192930b08090e', '31587e9944ab1c16b844ecad0df2e7da', + 'aaabacadafb0b1b2b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-192: I=78'), + ('bebfbcbd9695948b707176779e919093', 'd017fecd91148aba37f6f3068aa67d8a', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9dadcdddedfe1e2e3e4', + 'ecb-tbl-192: I=79'), + ('8b8a85846067666521202322d0d3d2dd', '788ef2f021a73cba2794b616078a8500', + 'e6e7e8e9ebecedeef0f1f2f3f5f6f7f8fafbfcfdfe010002', + 'ecb-tbl-192: I=80'), + ('76777475f1f2f3f4f8f9e6e777707172', '5d1ef20dced6bcbc12131ac7c54788aa', + '04050607090a0b0c0e0f10111314151618191a1b1d1e1f20', + 'ecb-tbl-192: I=81'), + ('a4a5a2a34f404142b4b5b6b727242522', 'b3c8cf961faf9ea05fdde6d1e4d8f663', + '222324252728292a2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-192: I=82'), + ('94959697e1e2e3ec16171011839c9d9e', '143075c70605861c7fac6526199e459f', + '40414243454647484a4b4c4d4f50515254555657595a5b5c', + 'ecb-tbl-192: I=83'), + ('03023d3c06010003dedfdcddfffcfde2', 'a5ae12eade9a87268d898bfc8fc0252a', + '5e5f60616364656668696a6b6d6e6f70727374757778797a', + 'ecb-tbl-192: I=84'), + ('10111213f1f2f3f4cecfc0c1dbdcddde', '0924f7cf2e877a4819f5244a360dcea9', + '7c7d7e7f81828384868788898b8c8d8e9091929395969798', + 'ecb-tbl-192: I=85'), + ('67666160724d4c4f1d1c1f1e73707176', '3d9e9635afcc3e291cc7ab3f27d1c99a', + '9a9b9c9d9fa0a1a2a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-192: I=86'), + ('e6e7e4e5a8abaad584858283909f9e9d', '9d80feebf87510e2b8fb98bb54fd788c', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9cacccdcecfd1d2d3d4', + 'ecb-tbl-192: I=87'), + ('71707f7e565150537d7c7f7e6162636c', '5f9d1a082a1a37985f174002eca01309', + 'd6d7d8d9dbdcdddee0e1e2e3e5e6e7e8eaebecedeff0f1f2', + 'ecb-tbl-192: I=88'), + 
('64656667212223245555aaaa03040506', 'a390ebb1d1403930184a44b4876646e4', + 'f4f5f6f7f9fafbfcfefe01010304050608090a0b0d0e0f10', + 'ecb-tbl-192: I=89'), + ('9e9f9899aba4a5a6cfcecdcc2b28292e', '700fe918981c3195bb6c4bcb46b74e29', + '121314151718191a1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-192: I=90'), + ('c7c6c5c4d1d2d3dc626364653a454447', '907984406f7bf2d17fb1eb15b673d747', + '30313233353637383a3b3c3d3f40414244454647494a4b4c', + 'ecb-tbl-192: I=91'), + ('f6f7e8e9e0e7e6e51d1c1f1e5b585966', 'c32a956dcfc875c2ac7c7cc8b8cc26e1', + '4e4f50515354555658595a5b5d5e5f60626364656768696a', + 'ecb-tbl-192: I=92'), + ('bcbdbebf5d5e5f5868696667f4f3f2f1', '02646e2ebfa9b820cf8424e9b9b6eb51', + '6c6d6e6f71727374767778797b7c7d7e8081828385868788', + 'ecb-tbl-192: I=93'), + ('40414647b0afaead9b9a99989b98999e', '621fda3a5bbd54c6d3c685816bd4ead8', + '8a8b8c8d8f90919294959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-192: I=94'), + ('69686b6a0201001f0f0e0908b4bbbab9', 'd4e216040426dfaf18b152469bc5ac2f', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9babcbdbebfc1c2c3c4', + 'ecb-tbl-192: I=95'), + ('c7c6c9c8d8dfdedd5a5b5859bebdbcb3', '9d0635b9d33b6cdbd71f5d246ea17cc8', + 'c6c7c8c9cbcccdced0d1d2d3d5d6d7d8dadbdcdddfe0e1e2', + 'ecb-tbl-192: I=96'), + ('dedfdcdd787b7a7dfffee1e0b2b5b4b7', '10abad1bd9bae5448808765583a2cc1a', + 'e4e5e6e7e9eaebeceeeff0f1f3f4f5f6f8f9fafbfdfefe00', + 'ecb-tbl-192: I=97'), + ('4d4c4b4a606f6e6dd0d1d2d3fbf8f9fe', '6891889e16544e355ff65a793c39c9a8', + '020304050708090a0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-192: I=98'), + ('b7b6b5b4d7d4d5dae5e4e3e2e1fefffc', 'cc735582e68072c163cd9ddf46b91279', + '20212223252627282a2b2c2d2f30313234353637393a3b3c', + 'ecb-tbl-192: I=99'), + ('cecfb0b1f7f0f1f2aeafacad3e3d3c23', 'c5c68b9aeeb7f878df578efa562f9574', + '3e3f40414344454648494a4b4d4e4f50525354555758595a', + 'ecb-tbl-192: I=100'), + ('cacbc8c9cdcecfc812131c1d494e4f4c', '5f4764395a667a47d73452955d0d2ce8', + '5c5d5e5f61626364666768696b6c6d6e7071727375767778', + 'ecb-tbl-192: I=101'), + ('9d9c9b9ad22d2c2fb1b0b3b20c0f0e09', '701448331f66106cefddf1eb8267c357', + '7a7b7c7d7f80818284858687898a8b8c8e8f909193949596', + 'ecb-tbl-192: I=102'), + ('7a7b787964676659959493924f404142', 'cb3ee56d2e14b4e1941666f13379d657', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aaacadaeafb1b2b3b4', + 'ecb-tbl-192: I=103'), + ('aaaba4a5cec9c8cb1f1e1d1caba8a9a6', '9fe16efd18ab6e1981191851fedb0764', + 'b6b7b8b9bbbcbdbec0c1c2c3c5c6c7c8cacbcccdcfd0d1d2', + 'ecb-tbl-192: I=104'), + ('93929190282b2a2dc4c5fafb92959497', '3dc9ba24e1b223589b147adceb4c8e48', + 'd4d5d6d7d9dadbdcdedfe0e1e3e4e5e6e8e9eaebedeeeff0', + 'ecb-tbl-192: I=105'), + ('efeee9e8ded1d0d339383b3a888b8a8d', '1c333032682e7d4de5e5afc05c3e483c', + 'f2f3f4f5f7f8f9fafcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-192: I=106'), + ('7f7e7d7ca2a1a0af78797e7f112e2f2c', 'd593cc99a95afef7e92038e05a59d00a', + '10111213151617181a1b1c1d1f20212224252627292a2b2c', + 'ecb-tbl-192: I=107'), + ('84859a9b2b2c2d2e868784852625245b', '51e7f96f53b4353923452c222134e1ec', + '2e2f30313334353638393a3b3d3e3f40424344454748494a', + 'ecb-tbl-192: I=108'), + ('b0b1b2b3070405026869666710171615', '4075b357a1a2b473400c3b25f32f81a4', + '4c4d4e4f51525354565758595b5c5d5e6061626365666768', + 'ecb-tbl-192: I=109'), + ('acadaaabbda2a3a00d0c0f0e595a5b5c', '302e341a3ebcd74f0d55f61714570284', + '6a6b6c6d6f70717274757677797a7b7c7e7f808183848586', + 'ecb-tbl-192: I=110'), + ('121310115655544b5253545569666764', '57abdd8231280da01c5042b78cf76522', + '88898a8b8d8e8f90929394959798999a9c9d9e9fa1a2a3a4', + 'ecb-tbl-192: I=111'), + 
('dedfd0d166616063eaebe8e94142434c', '17f9ea7eea17ac1adf0e190fef799e92', + 'a6a7a8a9abacadaeb0b1b2b3b5b6b7b8babbbcbdbfc0c1c2', + 'ecb-tbl-192: I=112'), + ('dbdad9d81417161166677879e0e7e6e5', '2e1bdd563dd87ee5c338dd6d098d0a7a', + 'c4c5c6c7c9cacbcccecfd0d1d3d4d5d6d8d9dadbdddedfe0', + 'ecb-tbl-192: I=113'), + ('6a6b6c6de0efeeed2b2a2928c0c3c2c5', 'eb869996e6f8bfb2bfdd9e0c4504dbb2', + 'e2e3e4e5e7e8e9eaecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-192: I=114'), + ('b1b0b3b21714151a1a1b1c1d5649484b', 'c2e01549e9decf317468b3e018c61ba8', + '00010203050607080a0b0c0d0f10111214151617191a1b1c', + 'ecb-tbl-192: I=115'), + ('39380706a3a4a5a6c4c5c6c77271706f', '8da875d033c01dd463b244a1770f4a22', + '1e1f20212324252628292a2b2d2e2f30323334353738393a', + 'ecb-tbl-192: I=116'), + ('5c5d5e5f1013121539383736e2e5e4e7', '8ba0dcf3a186844f026d022f8839d696', + '3c3d3e3f41424344464748494b4c4d4e5051525355565758', + 'ecb-tbl-192: I=117'), + ('43424544ead5d4d72e2f2c2d64676661', 'e9691ff9a6cc6970e51670a0fd5b88c1', + '5a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-192: I=118'), + ('55545756989b9a65f8f9feff18171615', 'f2baec06faeed30f88ee63ba081a6e5b', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394', + 'ecb-tbl-192: I=119'), + ('05040b0a525554573c3d3e3f4a494847', '9c39d4c459ae5753394d6094adc21e78', + '969798999b9c9d9ea0a1a2a3a5a6a7a8aaabacadafb0b1b2', + 'ecb-tbl-192: I=120'), + ('14151617595a5b5c8584fbfa8e89888b', '6345b532a11904502ea43ba99c6bd2b2', + 'b4b5b6b7b9babbbcbebfc0c1c3c4c5c6c8c9cacbcdcecfd0', + 'ecb-tbl-192: I=121'), + ('7c7d7a7bfdf2f3f029282b2a51525354', '5ffae3061a95172e4070cedce1e428c8', + 'd2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-192: I=122'), + ('38393a3b1e1d1c1341404746c23d3c3e', '0a4566be4cdf9adce5dec865b5ab34cd', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c', + 'ecb-tbl-192: I=123'), + ('8d8c939240474645818083827c7f7e41', 'ca17fcce79b7404f2559b22928f126fb', + '0e0f10111314151618191a1b1d1e1f20222324252728292a', + 'ecb-tbl-192: I=124'), + ('3b3a39381a19181f32333c3d45424340', '97ca39b849ed73a6470a97c821d82f58', + '2c2d2e2f31323334363738393b3c3d3e4041424345464748', + 'ecb-tbl-192: I=125'), + ('f0f1f6f738272625828380817f7c7d7a', '8198cb06bc684c6d3e9b7989428dcf7a', + '4a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-192: I=126'), + ('89888b8a0407061966676061141b1a19', 'f53c464c705ee0f28d9a4c59374928bd', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384', + 'ecb-tbl-192: I=127'), + ('d3d2dddcaaadacaf9c9d9e9fe8ebeae5', '9adb3d4cca559bb98c3e2ed73dbf1154', + '868788898b8c8d8e90919293959697989a9b9c9d9fa0a1a2', + 'ecb-tbl-192: I=128'), + + # ecb_tbl.txt, KEYSIZE=256 + ('834eadfccac7e1b30664b1aba44815ab', '1946dabf6a03a2a2c3d0b05080aed6fc', + '00010203050607080a0b0c0d0f10111214151617191a1b1c1e1f202123242526', + 'ecb-tbl-256: I=1'), + ('d9dc4dba3021b05d67c0518f72b62bf1', '5ed301d747d3cc715445ebdec62f2fb4', + '28292a2b2d2e2f30323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-256: I=2'), + ('a291d86301a4a739f7392173aa3c604c', '6585c8f43d13a6beab6419fc5935b9d0', + '50515253555657585a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-256: I=3'), + ('4264b2696498de4df79788a9f83e9390', '2a5b56a596680fcc0e05f5e0f151ecae', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-256: I=4'), + ('ee9932b3721804d5a83ef5949245b6f6', 'f5d6ff414fd2c6181494d20c37f2b8c4', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-256: I=5'), + ('e6248f55c5fdcbca9cbbb01c88a2ea77', 
'85399c01f59fffb5204f19f8482f00b8', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-256: I=6'), + ('b8358e41b9dff65fd461d55a99266247', '92097b4c88a041ddf98144bc8d22e8e7', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c0e0f101113141516', + 'ecb-tbl-256: I=7'), + ('f0e2d72260af58e21e015ab3a4c0d906', '89bd5b73b356ab412aef9f76cea2d65c', + '18191a1b1d1e1f20222324252728292a2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-256: I=8'), + ('475b8b823ce8893db3c44a9f2a379ff7', '2536969093c55ff9454692f2fac2f530', + '40414243454647484a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-256: I=9'), + ('688f5281945812862f5f3076cf80412f', '07fc76a872843f3f6e0081ee9396d637', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-256: I=10'), + ('08d1d2bc750af553365d35e75afaceaa', 'e38ba8ec2aa741358dcc93e8f141c491', + '90919293959697989a9b9c9d9fa0a1a2a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-256: I=11'), + ('8707121f47cc3efceca5f9a8474950a1', 'd028ee23e4a89075d0b03e868d7d3a42', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9cacccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-256: I=12'), + ('e51aa0b135dba566939c3b6359a980c5', '8cd9423dfc459e547155c5d1d522e540', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-256: I=13'), + ('069a007fc76a459f98baf917fedf9521', '080e9517eb1677719acf728086040ae3', + '08090a0b0d0e0f10121314151718191a1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-256: I=14'), + ('726165c1723fbcf6c026d7d00b091027', '7c1700211a3991fc0ecded0ab3e576b0', + '30313233353637383a3b3c3d3f40414244454647494a4b4c4e4f505153545556', + 'ecb-tbl-256: I=15'), + ('d7c544de91d55cfcde1f84ca382200ce', 'dabcbcc855839251db51e224fbe87435', + '58595a5b5d5e5f60626364656768696a6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-256: I=16'), + ('fed3c9a161b9b5b2bd611b41dc9da357', '68d56fad0406947a4dd27a7448c10f1d', + '80818283858687888a8b8c8d8f90919294959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-256: I=17'), + ('4f634cdc6551043409f30b635832cf82', 'da9a11479844d1ffee24bbf3719a9925', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9babcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-256: I=18'), + ('109ce98db0dfb36734d9f3394711b4e6', '5e4ba572f8d23e738da9b05ba24b8d81', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-256: I=19'), + ('4ea6dfaba2d8a02ffdffa89835987242', 'a115a2065d667e3f0b883837a6e903f8', + '70717273757677787a7b7c7d7f80818284858687898a8b8c8e8f909193949596', + 'ecb-tbl-256: I=20'), + ('5ae094f54af58e6e3cdbf976dac6d9ef', '3e9e90dc33eac2437d86ad30b137e66e', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aaacadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-256: I=21'), + ('764d8e8e0f29926dbe5122e66354fdbe', '01ce82d8fbcdae824cb3c48e495c3692', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2d4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-256: I=22'), + ('3f0418f888cdf29a982bf6b75410d6a9', '0c9cff163ce936faaf083cfd3dea3117', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fafcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-256: I=23'), + ('e4a3e7cb12cdd56aa4a75197a9530220', '5131ba9bd48f2bba85560680df504b52', + '10111213151617181a1b1c1d1f20212224252627292a2b2c2e2f303133343536', + 'ecb-tbl-256: I=24'), + ('211677684aac1ec1a160f44c4ebf3f26', '9dc503bbf09823aec8a977a5ad26ccb2', + '38393a3b3d3e3f40424344454748494a4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-256: I=25'), + ('d21e439ff749ac8f18d6d4b105e03895', '9a6db0c0862e506a9e397225884041d7', + '60616263656667686a6b6c6d6f70717274757677797a7b7c7e7f808183848586', + 'ecb-tbl-256: I=26'), + ('d9f6ff44646c4725bd4c0103ff5552a7', 
'430bf9570804185e1ab6365fc6a6860c', + '88898a8b8d8e8f90929394959798999a9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-256: I=27'), + ('0b1256c2a00b976250cfc5b0c37ed382', '3525ebc02f4886e6a5a3762813e8ce8a', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-256: I=28'), + ('b056447ffc6dc4523a36cc2e972a3a79', '07fa265c763779cce224c7bad671027b', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9eaecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-256: I=29'), + ('5e25ca78f0de55802524d38da3fe4456', 'e8b72b4e8be243438c9fff1f0e205872', + '00010203050607080a0b0c0d0f10111214151617191a1b1c1e1f202123242526', + 'ecb-tbl-256: I=30'), + ('a5bcf4728fa5eaad8567c0dc24675f83', '109d4f999a0e11ace1f05e6b22cbcb50', + '28292a2b2d2e2f30323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-256: I=31'), + ('814e59f97ed84646b78b2ca022e9ca43', '45a5e8d4c3ed58403ff08d68a0cc4029', + '50515253555657585a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-256: I=32'), + ('15478beec58f4775c7a7f5d4395514d7', '196865964db3d417b6bd4d586bcb7634', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-256: I=33'), + ('253548ffca461c67c8cbc78cd59f4756', '60436ad45ac7d30d99195f815d98d2ae', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-256: I=34'), + ('fd7ad8d73b9b0f8cc41600640f503d65', 'bb07a23f0b61014b197620c185e2cd75', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-256: I=35'), + ('06199de52c6cbf8af954cd65830bcd56', '5bc0b2850129c854423aff0751fe343b', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c0e0f101113141516', + 'ecb-tbl-256: I=36'), + ('f17c4ffe48e44c61bd891e257e725794', '7541a78f96738e6417d2a24bd2beca40', + '18191a1b1d1e1f20222324252728292a2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-256: I=37'), + ('9a5b4a402a3e8a59be6bf5cd8154f029', 'b0a303054412882e464591f1546c5b9e', + '40414243454647484a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-256: I=38'), + ('79bd40b91a7e07dc939d441782ae6b17', '778c06d8a355eeee214fcea14b4e0eef', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-256: I=39'), + ('d8ceaaf8976e5fbe1012d8c84f323799', '09614206d15cbace63227d06db6beebb', + '90919293959697989a9b9c9d9fa0a1a2a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-256: I=40'), + ('3316e2751e2e388b083da23dd6ac3fbe', '41b97fb20e427a9fdbbb358d9262255d', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9cacccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-256: I=41'), + ('8b7cfbe37de7dca793521819242c5816', 'c1940f703d845f957652c2d64abd7adf', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-256: I=42'), + ('f23f033c0eebf8ec55752662fd58ce68', 'd2d44fcdae5332343366db297efcf21b', + '08090a0b0d0e0f10121314151718191a1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-256: I=43'), + ('59eb34f6c8bdbacc5fc6ad73a59a1301', 'ea8196b79dbe167b6aa9896e287eed2b', + '30313233353637383a3b3c3d3f40414244454647494a4b4c4e4f505153545556', + 'ecb-tbl-256: I=44'), + ('dcde8b6bd5cf7cc22d9505e3ce81261a', 'd6b0b0c4ba6c7dbe5ed467a1e3f06c2d', + '58595a5b5d5e5f60626364656768696a6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-256: I=45'), + ('e33cf7e524fed781e7042ff9f4b35dc7', 'ec51eb295250c22c2fb01816fb72bcae', + '80818283858687888a8b8c8d8f90919294959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-256: I=46'), + ('27963c8facdf73062867d164df6d064c', 'aded6630a07ce9c7408a155d3bd0d36f', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9babcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-256: I=47'), + ('77b1ce386b551b995f2f2a1da994eef8', 
'697c9245b9937f32f5d1c82319f0363a', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-256: I=48'), + ('f083388b013679efcf0bb9b15d52ae5c', 'aad5ad50c6262aaec30541a1b7b5b19c', + 'f8f9fafbfdfefe00020304050708090a0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-256: I=49'), + ('c5009e0dab55db0abdb636f2600290c8', '7d34b893855341ec625bd6875ac18c0d', + '20212223252627282a2b2c2d2f30313234353637393a3b3c3e3f404143444546', + 'ecb-tbl-256: I=50'), + ('7804881e26cd532d8514d3683f00f1b9', '7ef05105440f83862f5d780e88f02b41', + '48494a4b4d4e4f50525354555758595a5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-256: I=51'), + ('46cddcd73d1eb53e675ca012870a92a3', 'c377c06403382061af2c9c93a8e70df6', + '70717273757677787a7b7c7d7f80818284858687898a8b8c8e8f909193949596', + 'ecb-tbl-256: I=52'), + ('a9fb44062bb07fe130a8e8299eacb1ab', '1dbdb3ffdc052dacc83318853abc6de5', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aaacadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-256: I=53'), + ('2b6ff8d7a5cc3a28a22d5a6f221af26b', '69a6eab00432517d0bf483c91c0963c7', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2d4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-256: I=54'), + ('1a9527c29b8add4b0e3e656dbb2af8b4', '0797f41dc217c80446e1d514bd6ab197', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fafcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-256: I=55'), + ('7f99cf2c75244df015eb4b0c1050aeae', '9dfd76575902a637c01343c58e011a03', + '10111213151617181a1b1c1d1f20212224252627292a2b2c2e2f303133343536', + 'ecb-tbl-256: I=56'), + ('e84ff85b0d9454071909c1381646c4ed', 'acf4328ae78f34b9fa9b459747cc2658', + '38393a3b3d3e3f40424344454748494a4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-256: I=57'), + ('89afd40f99521280d5399b12404f6db4', 'b0479aea12bac4fe2384cf98995150c6', + '60616263656667686a6b6c6d6f70717274757677797a7b7c7e7f808183848586', + 'ecb-tbl-256: I=58'), + ('a09ef32dbc5119a35ab7fa38656f0329', '9dd52789efe3ffb99f33b3da5030109a', + '88898a8b8d8e8f90929394959798999a9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-256: I=59'), + ('61773457f068c376c7829b93e696e716', 'abbb755e4621ef8f1214c19f649fb9fd', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-256: I=60'), + ('a34f0cae726cce41dd498747d891b967', 'da27fb8174357bce2bed0e7354f380f9', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9eaecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-256: I=61'), + ('856f59496c7388ee2d2b1a27b7697847', 'c59a0663f0993838f6e5856593bdc5ef', + '00010203050607080a0b0c0d0f10111214151617191a1b1c1e1f202123242526', + 'ecb-tbl-256: I=62'), + ('cb090c593ef7720bd95908fb93b49df4', 'ed60b264b5213e831607a99c0ce5e57e', + '28292a2b2d2e2f30323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-256: I=63'), + ('a0ac75cd2f1923d460fc4d457ad95baf', 'e50548746846f3eb77b8c520640884ed', + '50515253555657585a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-256: I=64'), + ('2a2b282974777689e8e9eeef525d5c5f', '28282cc7d21d6a2923641e52d188ef0c', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-256: I=65'), + ('909192939390919e0f0e09089788898a', '0dfa5b02abb18e5a815305216d6d4f8e', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-256: I=66'), + ('777675748d8e8f907170777649464744', '7359635c0eecefe31d673395fb46fb99', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-256: I=67'), + ('717073720605040b2d2c2b2a05fafbf9', '73c679f7d5aef2745c9737bb4c47fb36', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c0e0f101113141516', + 'ecb-tbl-256: I=68'), + ('64656667fefdfcc31b1a1d1ca5aaaba8', 
'b192bd472a4d2eafb786e97458967626', + '18191a1b1d1e1f20222324252728292a2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-256: I=69'), + ('dbdad9d86a696867b5b4b3b2c8d7d6d5', '0ec327f6c8a2b147598ca3fde61dc6a4', + '40414243454647484a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-256: I=70'), + ('5c5d5e5fe3e0e1fe31303736333c3d3e', 'fc418eb3c41b859b38d4b6f646629729', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-256: I=71'), + ('545556574b48494673727574546b6a69', '30249e5ac282b1c981ea64b609f3a154', + '90919293959697989a9b9c9d9fa0a1a2a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-256: I=72'), + ('ecedeeefc6c5c4bb56575051f5fafbf8', '5e6e08646d12150776bb43c2d78a9703', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9cacccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-256: I=73'), + ('464744452724252ac9c8cfced2cdcccf', 'faeb3d5de652cd3447dceb343f30394a', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-256: I=74'), + ('e6e7e4e54142435c878681801c131211', 'a8e88706823f6993ef80d05c1c7b2cf0', + '08090a0b0d0e0f10121314151718191a1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-256: I=75'), + ('72737071cfcccdc2f9f8fffe710e0f0c', '8ced86677e6e00a1a1b15968f2d3cce6', + '30313233353637383a3b3c3d3f40414244454647494a4b4c4e4f505153545556', + 'ecb-tbl-256: I=76'), + ('505152537370714ec3c2c5c4010e0f0c', '9fc7c23858be03bdebb84e90db6786a9', + '58595a5b5d5e5f60626364656768696a6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-256: I=77'), + ('a8a9aaab5c5f5e51aeafa8a93d222320', 'b4fbd65b33f70d8cf7f1111ac4649c36', + '80818283858687888a8b8c8d8f90919294959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-256: I=78'), + ('dedfdcddf6f5f4eb10111617fef1f0f3', 'c5c32d5ed03c4b53cc8c1bd0ef0dbbf6', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9babcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-256: I=79'), + ('bdbcbfbe5e5d5c530b0a0d0cfac5c4c7', 'd1a7f03b773e5c212464b63709c6a891', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-256: I=80'), + ('8a8b8889050606f8f4f5f2f3636c6d6e', '6b7161d8745947ac6950438ea138d028', + 'f8f9fafbfdfefe00020304050708090a0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-256: I=81'), + ('a6a7a4a54d4e4f40b2b3b4b539262724', 'fd47a9f7e366ee7a09bc508b00460661', + '20212223252627282a2b2c2d2f30313234353637393a3b3c3e3f404143444546', + 'ecb-tbl-256: I=82'), + ('9c9d9e9fe9eaebf40e0f08099b949596', '00d40b003dc3a0d9310b659b98c7e416', + '48494a4b4d4e4f50525354555758595a5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-256: I=83'), + ('2d2c2f2e1013121dcccdcacbed121310', 'eea4c79dcc8e2bda691f20ac48be0717', + '70717273757677787a7b7c7d7f80818284858687898a8b8c8e8f909193949596', + 'ecb-tbl-256: I=84'), + ('f4f5f6f7edeeefd0eaebecedf7f8f9fa', 'e78f43b11c204403e5751f89d05a2509', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aaacadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-256: I=85'), + ('3d3c3f3e282b2a2573727574150a0b08', 'd0f0e3d1f1244bb979931e38dd1786ef', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2d4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-256: I=86'), + ('b6b7b4b5f8fbfae5b4b5b2b3a0afaead', '042e639dc4e1e4dde7b75b749ea6f765', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fafcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-256: I=87'), + ('b7b6b5b4989b9a95878681809ba4a5a6', 'bc032fdd0efe29503a980a7d07ab46a8', + '10111213151617181a1b1c1d1f20212224252627292a2b2c2e2f303133343536', + 'ecb-tbl-256: I=88'), + ('a8a9aaabe5e6e798e9e8efee4748494a', '0c93ac949c0da6446effb86183b6c910', + '38393a3b3d3e3f40424344454748494a4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-256: I=89'), + ('ecedeeefd9dadbd4b9b8bfbe657a7b78', 
'e0d343e14da75c917b4a5cec4810d7c2', + '60616263656667686a6b6c6d6f70717274757677797a7b7c7e7f808183848586', + 'ecb-tbl-256: I=90'), + ('7f7e7d7c696a6b74cacbcccd929d9c9f', '0eafb821748408279b937b626792e619', + '88898a8b8d8e8f90929394959798999a9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-256: I=91'), + ('08090a0b0605040bfffef9f8b9c6c7c4', 'fa1ac6e02d23b106a1fef18b274a553f', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-256: I=92'), + ('08090a0bf1f2f3ccfcfdfafb68676665', '0dadfe019cd12368075507df33c1a1e9', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9eaecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-256: I=93'), + ('cacbc8c93a393837050403020d121310', '3a0879b414465d9ffbaf86b33a63a1b9', + '00010203050607080a0b0c0d0f10111214151617191a1b1c1e1f202123242526', + 'ecb-tbl-256: I=94'), + ('e9e8ebea8281809f8f8e8988343b3a39', '62199fadc76d0be1805d3ba0b7d914bf', + '28292a2b2d2e2f30323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-256: I=95'), + ('515053524645444bd0d1d6d7340b0a09', '1b06d6c5d333e742730130cf78e719b4', + '50515253555657585a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-256: I=96'), + ('42434041ecefee1193929594c6c9c8cb', 'f1f848824c32e9dcdcbf21580f069329', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-256: I=97'), + ('efeeedecc2c1c0cf76777071455a5b58', '1a09050cbd684f784d8e965e0782f28a', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-256: I=98'), + ('5f5e5d5c3f3c3d221d1c1b1a19161714', '79c2969e7ded2ba7d088f3f320692360', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-256: I=99'), + ('000102034142434c1c1d1a1b8d727371', '091a658a2f7444c16accb669450c7b63', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c0e0f101113141516', + 'ecb-tbl-256: I=100'), + ('8e8f8c8db1b2b38c56575051050a0b08', '97c1e3a72cca65fa977d5ed0e8a7bbfc', + '18191a1b1d1e1f20222324252728292a2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-256: I=101'), + ('a7a6a5a4e8ebeae57f7e7978cad5d4d7', '70c430c6db9a17828937305a2df91a2a', + '40414243454647484a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-256: I=102'), + ('8a8b888994979689454443429f909192', '629553457fbe2479098571c7c903fde8', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-256: I=103'), + ('8c8d8e8fe0e3e2ed45444342f1cecfcc', 'a25b25a61f612669e7d91265c7d476ba', + '90919293959697989a9b9c9d9fa0a1a2a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-256: I=104'), + ('fffefdfc4c4f4e31d8d9dedfb6b9b8bb', 'eb7e4e49b8ae0f024570dda293254fed', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9cacccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-256: I=105'), + ('fdfcfffecccfcec12f2e29286679787b', '38fe15d61cca84516e924adce5014f67', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-256: I=106'), + ('67666564bab9b8a77071767719161714', '3ad208492249108c9f3ebeb167ad0583', + '08090a0b0d0e0f10121314151718191a1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-256: I=107'), + ('9a9b98992d2e2f2084858283245b5a59', '299ba9f9bf5ab05c3580fc26edd1ed12', + '30313233353637383a3b3c3d3f40414244454647494a4b4c4e4f505153545556', + 'ecb-tbl-256: I=108'), + ('a4a5a6a70b0809365c5d5a5b2c232221', '19dc705b857a60fb07717b2ea5717781', + '58595a5b5d5e5f60626364656768696a6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-256: I=109'), + ('464744455754555af3f2f5f4afb0b1b2', 'ffc8aeb885b5efcad06b6dbebf92e76b', + '80818283858687888a8b8c8d8f90919294959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-256: I=110'), + 
('323330317675746b7273747549464744', 'f58900c5e0b385253ff2546250a0142b', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9babcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-256: I=111'), + ('a8a9aaab181b1a15808186872b141516', '2ee67b56280bc462429cee6e3370cbc1', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-256: I=112'), + ('e7e6e5e4202323ddaaabacad343b3a39', '20db650a9c8e9a84ab4d25f7edc8f03f', + 'f8f9fafbfdfefe00020304050708090a0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-256: I=113'), + ('a8a9aaab2221202fedecebea1e010003', '3c36da169525cf818843805f25b78ae5', + '20212223252627282a2b2c2d2f30313234353637393a3b3c3e3f404143444546', + 'ecb-tbl-256: I=114'), + ('f9f8fbfa5f5c5d42424344450e010003', '9a781d960db9e45e37779042fea51922', + '48494a4b4d4e4f50525354555758595a5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-256: I=115'), + ('57565554f5f6f7f89697909120dfdedd', '6560395ec269c672a3c288226efdba77', + '70717273757677787a7b7c7d7f80818284858687898a8b8c8e8f909193949596', + 'ecb-tbl-256: I=116'), + ('f8f9fafbcccfcef1dddcdbda0e010003', '8c772b7a189ac544453d5916ebb27b9a', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aaacadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-256: I=117'), + ('d9d8dbda7073727d80818687c2dddcdf', '77ca5468cc48e843d05f78eed9d6578f', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2d4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-256: I=118'), + ('c5c4c7c6080b0a1588898e8f68676665', '72cdcc71dc82c60d4429c9e2d8195baa', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fafcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-256: I=119'), + ('83828180dcdfded186878081f0cfcecd', '8080d68ce60e94b40b5b8b69eeb35afa', + '10111213151617181a1b1c1d1f20212224252627292a2b2c2e2f303133343536', + 'ecb-tbl-256: I=120'), + ('98999a9bdddedfa079787f7e0a050407', '44222d3cde299c04369d58ac0eba1e8e', + '38393a3b3d3e3f40424344454748494a4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-256: I=121'), + ('cecfcccd4f4c4d429f9e9998dfc0c1c2', '9b8721b0a8dfc691c5bc5885dbfcb27a', + '60616263656667686a6b6c6d6f70717274757677797a7b7c7e7f808183848586', + 'ecb-tbl-256: I=122'), + ('404142436665647b29282f2eaba4a5a6', '0dc015ce9a3a3414b5e62ec643384183', + '88898a8b8d8e8f90929394959798999a9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-256: I=123'), + ('33323130e6e5e4eb23222524dea1a0a3', '705715448a8da412025ce38345c2a148', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-256: I=124'), + ('cfcecdccf6f5f4cbe6e7e0e199969794', 'c32b5b0b6fbae165266c569f4b6ecf0b', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9eaecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-256: I=125'), + ('babbb8b97271707fdcdddadb29363734', '4dca6c75192a01ddca9476af2a521e87', + '00010203050607080a0b0c0d0f10111214151617191a1b1c1e1f202123242526', + 'ecb-tbl-256: I=126'), + ('c9c8cbca4447465926272021545b5a59', '058691e627ecbc36ac07b6db423bd698', + '28292a2b2d2e2f30323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-256: I=127'), + ('050407067477767956575051221d1c1f', '7444527095838fe080fc2bcdd30847eb', + '50515253555657585a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-256: I=128'), + + # FIPS PUB 800-38A test vectors, 2001 edition. Annex F. 
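+    # The three entries below are the ECB examples from NIST SP 800-38A
+    # (2001 edition), Appendix F: each plaintext/ciphertext is the four
+    # 16-byte example blocks concatenated into one 64-byte message, and
+    # each tuple is (plaintext, ciphertext, key, description).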
+
+    ('6bc1bee22e409f96e93d7e117393172a'+'ae2d8a571e03ac9c9eb76fac45af8e51'+
+     '30c81c46a35ce411e5fbc1191a0a52ef'+'f69f2445df4f9b17ad2b417be66c3710',
+     '3ad77bb40d7a3660a89ecaf32466ef97'+'f5d3d58503b9699de785895a96fdbaaf'+
+     '43b1cd7f598ece23881b00e3ed030688'+'7b0c785e27e8ad3f8223207104725dd4',
+     '2b7e151628aed2a6abf7158809cf4f3c',
+     'NIST 800-38A, F.1.1, ECB and AES-128'),
+
+    ('6bc1bee22e409f96e93d7e117393172a'+'ae2d8a571e03ac9c9eb76fac45af8e51'+
+     '30c81c46a35ce411e5fbc1191a0a52ef'+'f69f2445df4f9b17ad2b417be66c3710',
+     'bd334f1d6e45f25ff712a214571fa5cc'+'974104846d0ad3ad7734ecb3ecee4eef'+
+     'ef7afd2270e2e60adce0ba2face6444e'+'9a4b41ba738d6c72fb16691603c18e0e',
+     '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b',
+     'NIST 800-38A, F.1.3, ECB and AES-192'),
+
+    ('6bc1bee22e409f96e93d7e117393172a'+'ae2d8a571e03ac9c9eb76fac45af8e51'+
+     '30c81c46a35ce411e5fbc1191a0a52ef'+'f69f2445df4f9b17ad2b417be66c3710',
+     'f3eed1bdb5d2a03c064b5a7e3db181f8'+'591ccb10d410ed26dc5ba74a31362870'+
+     'b6ed21b99ca6f4f9f153e7b1beafed1d'+'23304b7a39f9f3ff067d8d8f9e24ecc7',
+     '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4',
+     'NIST 800-38A, F.1.5, ECB and AES-256'),
+
+]
+
+test_data_8_lanes = []
+for td in test_data:
+    test_data_8_lanes.append((td[0] * 8, td[1] * 8, td[2], td[3]))
+test_data += test_data_8_lanes
+
+class TestMultipleBlocks(unittest.TestCase):
+
+    def __init__(self, use_aesni):
+        unittest.TestCase.__init__(self)
+        self.use_aesni = use_aesni
+
+    def runTest(self):
+        # Encrypt data which is 8*2+4 blocks long, so as to trigger (for the
+        # AESNI variant) both the path that parallelizes 8 lanes and the one
+        # that processes data serially
+
+        tvs = [
+            (b'a' * 16, 'c0b27011eb15bf144d2fc9fae80ea16d4c231cb230416c5fac02e6835ad9d7d0'),
+            (b'a' * 24, 'df8435ce361a78c535b41dcb57da952abbf9ee5954dc6fbcd75fd00fa626915d'),
+            (b'a' * 32, '211402de6c80db1f92ba255881178e1f70783b8cfd3b37808205e48b80486cd8')
+        ]
+
+        for key, expected in tvs:
+
+            cipher = AES.new(key, AES.MODE_ECB, use_aesni=self.use_aesni)
+
+            pt = b"".join([ tobytes('{0:016x}'.format(x)) for x in range(20) ])
+            ct = cipher.encrypt(pt)
+            self.assertEqual(SHA256.new(ct).hexdigest(), expected)
+
+
+class TestIncompleteBlocks(unittest.TestCase):
+
+    def __init__(self, use_aesni):
+        unittest.TestCase.__init__(self)
+        self.use_aesni = use_aesni
+
+    def runTest(self):
+        # Encrypt data with length not a multiple of 16 bytes
+
+        cipher = AES.new(b'4'*16, AES.MODE_ECB, use_aesni=self.use_aesni)
+
+        for msg_len in range(1, 16):
+            self.assertRaises(ValueError, cipher.encrypt, b'1' * msg_len)
+            self.assertRaises(ValueError, cipher.encrypt, b'1' * (msg_len+16))
+            self.assertRaises(ValueError, cipher.decrypt, b'1' * msg_len)
+            self.assertRaises(ValueError, cipher.decrypt, b'1' * (msg_len+16))
+
+        self.assertEqual(cipher.encrypt(b''), b'')
+        self.assertEqual(cipher.decrypt(b''), b'')
+
+
+class TestOutput(unittest.TestCase):
+
+    def __init__(self, use_aesni):
+        unittest.TestCase.__init__(self)
+        self.use_aesni = use_aesni
+
+    def runTest(self):
+        # Encrypt/Decrypt data and test output parameter
+
+        cipher = AES.new(b'4'*16, AES.MODE_ECB, use_aesni=self.use_aesni)
+
+        pt = b'5' * 16
+        ct = cipher.encrypt(pt)
+
+        output = bytearray(16)
+        res = cipher.encrypt(pt, output=output)
+        self.assertEqual(ct, output)
+        self.assertEqual(res, None)
+
+        res = cipher.decrypt(ct, output=output)
+        self.assertEqual(pt, output)
+        self.assertEqual(res, None)
+
+        output = memoryview(bytearray(16))
+        cipher.encrypt(pt, output=output)
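+        # (The memoryview above is filled in place just like the bytearray;
+        # in both cases encrypt()/decrypt() return None when output= is
+        # supplied, since the result is written into the caller's buffer.)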
+        self.assertEqual(ct, output)
+
+        cipher.decrypt(ct, output=output)
+        self.assertEqual(pt, output)
+
+        self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16)
+        self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16)
+
+        shorter_output = bytearray(15)
+        self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output)
+        self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output)
+
+
+def get_tests(config={}):
+    from Crypto.Util import _cpu_features
+    from .common import make_block_tests
+
+    tests = make_block_tests(AES, "AES", test_data, {'use_aesni': False})
+    tests += [ TestMultipleBlocks(False) ]
+    tests += [ TestIncompleteBlocks(False) ]
+    tests += [ TestOutput(False) ]   # run the output= tests also without AES-NI
+    if _cpu_features.have_aes_ni():
+        # Run tests with AES-NI instructions if they are available.
+        tests += make_block_tests(AES, "AESNI", test_data, {'use_aesni': True})
+        tests += [ TestMultipleBlocks(True) ]
+        tests += [ TestIncompleteBlocks(True) ]
+        tests += [ TestOutput(True) ]
+    else:
+        print("Skipping AESNI tests")
+    return tests
+
+if __name__ == '__main__':
+    import unittest
+    suite = lambda: unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
+
+# vim:set ts=4 sw=4 sts=4 expandtab:
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ARC2.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ARC2.py
new file mode 100644
index 00000000..fd9448c1
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ARC2.py
@@ -0,0 +1,167 @@
+# -*- coding: utf-8 -*-
+#
+# SelfTest/Cipher/ARC2.py: Self-test for the Alleged-RC2 cipher
+#
+# Written in 2008 by Dwayne C. Litzenberger
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""Self-test suite for Crypto.Cipher.ARC2"""
+
+import unittest
+
+from Crypto.Util.py3compat import b, bchr
+
+from Crypto.Cipher import ARC2
+
+# This is a list of (plaintext, ciphertext, key[, description[, extra_params]]) tuples.
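+# (Illustrative sketch: the trailing extra_params dict, when present, is
+# forwarded by the test harness as keyword arguments to ARC2.new(), so a
+# vector tagged with dict(effective_keylen=64) is checked roughly as
+#
+#     cipher = ARC2.new(unhexlify(key), ARC2.MODE_ECB, effective_keylen=64)
+#     assert cipher.encrypt(unhexlify(plaintext)) == unhexlify(ciphertext)
+#
+# with the exact plumbing in SelfTest/Cipher/common.py.)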
+test_data = [ + # Test vectors from RFC 2268 + + # 63-bit effective key length + ('0000000000000000', 'ebb773f993278eff', '0000000000000000', + 'RFC2268-1', dict(effective_keylen=63)), + + # 64-bit effective key length + ('ffffffffffffffff', '278b27e42e2f0d49', 'ffffffffffffffff', + 'RFC2268-2', dict(effective_keylen=64)), + ('1000000000000001', '30649edf9be7d2c2', '3000000000000000', + 'RFC2268-3', dict(effective_keylen=64)), + #('0000000000000000', '61a8a244adacccf0', '88', + # 'RFC2268-4', dict(effective_keylen=64)), + ('0000000000000000', '6ccf4308974c267f', '88bca90e90875a', + 'RFC2268-5', dict(effective_keylen=64)), + ('0000000000000000', '1a807d272bbe5db1', '88bca90e90875a7f0f79c384627bafb2', + 'RFC2268-6', dict(effective_keylen=64)), + + # 128-bit effective key length + ('0000000000000000', '2269552ab0f85ca6', '88bca90e90875a7f0f79c384627bafb2', + "RFC2268-7", dict(effective_keylen=128)), + ('0000000000000000', '5b78d3a43dfff1f1', + '88bca90e90875a7f0f79c384627bafb216f80a6f85920584c42fceb0be255daf1e', + "RFC2268-8", dict(effective_keylen=129)), + + # Test vectors from PyCrypto 2.0.1's testdata.py + # 1024-bit effective key length + ('0000000000000000', '624fb3e887419e48', '5068696c6970476c617373', + 'PCTv201-0'), + ('ffffffffffffffff', '79cadef44c4a5a85', '5068696c6970476c617373', + 'PCTv201-1'), + ('0001020304050607', '90411525b34e4c2c', '5068696c6970476c617373', + 'PCTv201-2'), + ('0011223344556677', '078656aaba61cbfb', '5068696c6970476c617373', + 'PCTv201-3'), + ('0000000000000000', 'd7bcc5dbb4d6e56a', 'ffffffffffffffff', + 'PCTv201-4'), + ('ffffffffffffffff', '7259018ec557b357', 'ffffffffffffffff', + 'PCTv201-5'), + ('0001020304050607', '93d20a497f2ccb62', 'ffffffffffffffff', + 'PCTv201-6'), + ('0011223344556677', 'cb15a7f819c0014d', 'ffffffffffffffff', + 'PCTv201-7'), + ('0000000000000000', '63ac98cdf3843a7a', 'ffffffffffffffff5065746572477265656e6177617953e5ffe553', + 'PCTv201-8'), + ('ffffffffffffffff', '3fb49e2fa12371dd', 'ffffffffffffffff5065746572477265656e6177617953e5ffe553', + 'PCTv201-9'), + ('0001020304050607', '46414781ab387d5f', 'ffffffffffffffff5065746572477265656e6177617953e5ffe553', + 'PCTv201-10'), + ('0011223344556677', 'be09dc81feaca271', 'ffffffffffffffff5065746572477265656e6177617953e5ffe553', + 'PCTv201-11'), + ('0000000000000000', 'e64221e608be30ab', '53e5ffe553', + 'PCTv201-12'), + ('ffffffffffffffff', '862bc60fdcd4d9a9', '53e5ffe553', + 'PCTv201-13'), + ('0001020304050607', '6a34da50fa5e47de', '53e5ffe553', + 'PCTv201-14'), + ('0011223344556677', '584644c34503122c', '53e5ffe553', + 'PCTv201-15'), +] + +class BufferOverflowTest(unittest.TestCase): + # Test a buffer overflow found in older versions of PyCrypto + + def runTest(self): + """ARC2 with keylength > 128""" + key = b("x") * 16384 + self.assertRaises(ValueError, ARC2.new, key, ARC2.MODE_ECB) + +class KeyLength(unittest.TestCase): + + def runTest(self): + ARC2.new(b'\x00' * 16, ARC2.MODE_ECB, effective_keylen=40) + self.assertRaises(ValueError, ARC2.new, bchr(0) * 4, ARC2.MODE_ECB) + self.assertRaises(ValueError, ARC2.new, bchr(0) * 129, ARC2.MODE_ECB) + + self.assertRaises(ValueError, ARC2.new, bchr(0) * 16, ARC2.MODE_ECB, + effective_keylen=39) + self.assertRaises(ValueError, ARC2.new, bchr(0) * 16, ARC2.MODE_ECB, + effective_keylen=1025) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + cipher = ARC2.new(b'4'*16, ARC2.MODE_ECB) + + pt = b'5' * 16 + ct = cipher.encrypt(pt) + + output = bytearray(16) + res = cipher.encrypt(pt, 
output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(16)) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(7) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from Crypto.Cipher import ARC2 + from .common import make_block_tests + + tests = make_block_tests(ARC2, "ARC2", test_data) + tests.append(BufferOverflowTest()) + tests.append(KeyLength()) + tests += [TestOutput()] + + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ARC4.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ARC4.py new file mode 100644 index 00000000..1b9a7d88 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ARC4.py @@ -0,0 +1,466 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/ARC4.py: Self-test for the Alleged-RC4 cipher +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Cipher.ARC4""" + +__revision__ = "$Id$" + +from Crypto.Util.py3compat import * +from Crypto.SelfTest.st_common import * +from binascii import unhexlify + +from Crypto.Cipher import ARC4 + +# This is a list of (plaintext, ciphertext, key[, description]) tuples. +test_data = [ + # Test vectors from Eric Rescorla's message with the subject + # "RC4 compatibility testing", sent to the cipherpunks mailing list on + # September 13, 1994. 
+ # http://cypherpunks.venona.com/date/1994/09/msg00420.html + + ('0123456789abcdef', '75b7878099e0c596', '0123456789abcdef', + 'Test vector 0'), + + ('0000000000000000', '7494c2e7104b0879', '0123456789abcdef', + 'Test vector 1'), + + ('0000000000000000', 'de188941a3375d3a', '0000000000000000', + 'Test vector 2'), + + #('00000000000000000000', 'd6a141a7ec3c38dfbd61', 'ef012345', + # 'Test vector 3'), + + ('01' * 512, + '7595c3e6114a09780c4ad452338e1ffd9a1be9498f813d76533449b6778dcad8' + + 'c78a8d2ba9ac66085d0e53d59c26c2d1c490c1ebbe0ce66d1b6b1b13b6b919b8' + + '47c25a91447a95e75e4ef16779cde8bf0a95850e32af9689444fd377108f98fd' + + 'cbd4e726567500990bcc7e0ca3c4aaa304a387d20f3b8fbbcd42a1bd311d7a43' + + '03dda5ab078896ae80c18b0af66dff319616eb784e495ad2ce90d7f772a81747' + + 'b65f62093b1e0db9e5ba532fafec47508323e671327df9444432cb7367cec82f' + + '5d44c0d00b67d650a075cd4b70dedd77eb9b10231b6b5b741347396d62897421' + + 'd43df9b42e446e358e9c11a9b2184ecbef0cd8e7a877ef968f1390ec9b3d35a5' + + '585cb009290e2fcde7b5ec66d9084be44055a619d9dd7fc3166f9487f7cb2729' + + '12426445998514c15d53a18c864ce3a2b7555793988126520eacf2e3066e230c' + + '91bee4dd5304f5fd0405b35bd99c73135d3d9bc335ee049ef69b3867bf2d7bd1' + + 'eaa595d8bfc0066ff8d31509eb0c6caa006c807a623ef84c3d33c195d23ee320' + + 'c40de0558157c822d4b8c569d849aed59d4e0fd7f379586b4b7ff684ed6a189f' + + '7486d49b9c4bad9ba24b96abf924372c8a8fffb10d55354900a77a3db5f205e1' + + 'b99fcd8660863a159ad4abe40fa48934163ddde542a6585540fd683cbfd8c00f' + + '12129a284deacc4cdefe58be7137541c047126c8d49e2755ab181ab7e940b0c0', + '0123456789abcdef', + "Test vector 4"), +] + +class RFC6229_Tests(unittest.TestCase): + # Test vectors from RFC 6229. Each test vector is a tuple with two items: + # the ARC4 key and a dictionary. The dictionary has keystream offsets as keys + # and the 16-byte keystream starting at the relevant offset as value. 
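+    # (How the vectors are used: test_keystream() below encrypts runs of
+    # zero bytes; for a stream cipher this yields the raw keystream, which
+    # is compared against the 16-byte slices recorded at each offset.)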
+ rfc6229_data = [ + # Page 3 + ( + '0102030405', + { + 0: 'b2 39 63 05 f0 3d c0 27 cc c3 52 4a 0a 11 18 a8', + 16: '69 82 94 4f 18 fc 82 d5 89 c4 03 a4 7a 0d 09 19', + 240: '28 cb 11 32 c9 6c e2 86 42 1d ca ad b8 b6 9e ae', + 256: '1c fc f6 2b 03 ed db 64 1d 77 df cf 7f 8d 8c 93', + 496: '42 b7 d0 cd d9 18 a8 a3 3d d5 17 81 c8 1f 40 41', + 512: '64 59 84 44 32 a7 da 92 3c fb 3e b4 98 06 61 f6', + 752: 'ec 10 32 7b de 2b ee fd 18 f9 27 76 80 45 7e 22', + 768: 'eb 62 63 8d 4f 0b a1 fe 9f ca 20 e0 5b f8 ff 2b', + 1008:'45 12 90 48 e6 a0 ed 0b 56 b4 90 33 8f 07 8d a5', + 1024:'30 ab bc c7 c2 0b 01 60 9f 23 ee 2d 5f 6b b7 df', + 1520:'32 94 f7 44 d8 f9 79 05 07 e7 0f 62 e5 bb ce ea', + 1536:'d8 72 9d b4 18 82 25 9b ee 4f 82 53 25 f5 a1 30', + 2032:'1e b1 4a 0c 13 b3 bf 47 fa 2a 0b a9 3a d4 5b 8b', + 2048:'cc 58 2f 8b a9 f2 65 e2 b1 be 91 12 e9 75 d2 d7', + 3056:'f2 e3 0f 9b d1 02 ec bf 75 aa ad e9 bc 35 c4 3c', + 3072:'ec 0e 11 c4 79 dc 32 9d c8 da 79 68 fe 96 56 81', + 4080:'06 83 26 a2 11 84 16 d2 1f 9d 04 b2 cd 1c a0 50', + 4096:'ff 25 b5 89 95 99 67 07 e5 1f bd f0 8b 34 d8 75' + } + ), + # Page 4 + ( + '01020304050607', + { + 0: '29 3f 02 d4 7f 37 c9 b6 33 f2 af 52 85 fe b4 6b', + 16: 'e6 20 f1 39 0d 19 bd 84 e2 e0 fd 75 20 31 af c1', + 240: '91 4f 02 53 1c 92 18 81 0d f6 0f 67 e3 38 15 4c', + 256: 'd0 fd b5 83 07 3c e8 5a b8 39 17 74 0e c0 11 d5', + 496: '75 f8 14 11 e8 71 cf fa 70 b9 0c 74 c5 92 e4 54', + 512: '0b b8 72 02 93 8d ad 60 9e 87 a5 a1 b0 79 e5 e4', + 752: 'c2 91 12 46 b6 12 e7 e7 b9 03 df ed a1 da d8 66', + 768: '32 82 8f 91 50 2b 62 91 36 8d e8 08 1d e3 6f c2', + 1008:'f3 b9 a7 e3 b2 97 bf 9a d8 04 51 2f 90 63 ef f1', + 1024:'8e cb 67 a9 ba 1f 55 a5 a0 67 e2 b0 26 a3 67 6f', + 1520:'d2 aa 90 2b d4 2d 0d 7c fd 34 0c d4 58 10 52 9f', + 1536:'78 b2 72 c9 6e 42 ea b4 c6 0b d9 14 e3 9d 06 e3', + 2032:'f4 33 2f d3 1a 07 93 96 ee 3c ee 3f 2a 4f f0 49', + 2048:'05 45 97 81 d4 1f da 7f 30 c1 be 7e 12 46 c6 23', + 3056:'ad fd 38 68 b8 e5 14 85 d5 e6 10 01 7e 3d d6 09', + 3072:'ad 26 58 1c 0c 5b e4 5f 4c ea 01 db 2f 38 05 d5', + 4080:'f3 17 2c ef fc 3b 3d 99 7c 85 cc d5 af 1a 95 0c', + 4096:'e7 4b 0b 97 31 22 7f d3 7c 0e c0 8a 47 dd d8 b8' + } + ), + ( + '0102030405060708', + { + 0: '97 ab 8a 1b f0 af b9 61 32 f2 f6 72 58 da 15 a8', + 16: '82 63 ef db 45 c4 a1 86 84 ef 87 e6 b1 9e 5b 09', + 240: '96 36 eb c9 84 19 26 f4 f7 d1 f3 62 bd df 6e 18', + 256: 'd0 a9 90 ff 2c 05 fe f5 b9 03 73 c9 ff 4b 87 0a', + 496: '73 23 9f 1d b7 f4 1d 80 b6 43 c0 c5 25 18 ec 63', + 512: '16 3b 31 99 23 a6 bd b4 52 7c 62 61 26 70 3c 0f', + 752: '49 d6 c8 af 0f 97 14 4a 87 df 21 d9 14 72 f9 66', + 768: '44 17 3a 10 3b 66 16 c5 d5 ad 1c ee 40 c8 63 d0', + 1008:'27 3c 9c 4b 27 f3 22 e4 e7 16 ef 53 a4 7d e7 a4', + 1024:'c6 d0 e7 b2 26 25 9f a9 02 34 90 b2 61 67 ad 1d', + 1520:'1f e8 98 67 13 f0 7c 3d 9a e1 c1 63 ff 8c f9 d3', + 1536:'83 69 e1 a9 65 61 0b e8 87 fb d0 c7 91 62 aa fb', + 2032:'0a 01 27 ab b4 44 84 b9 fb ef 5a bc ae 1b 57 9f', + 2048:'c2 cd ad c6 40 2e 8e e8 66 e1 f3 7b db 47 e4 2c', + 3056:'26 b5 1e a3 7d f8 e1 d6 f7 6f c3 b6 6a 74 29 b3', + 3072:'bc 76 83 20 5d 4f 44 3d c1 f2 9d da 33 15 c8 7b', + 4080:'d5 fa 5a 34 69 d2 9a aa f8 3d 23 58 9d b8 c8 5b', + 4096:'3f b4 6e 2c 8f 0f 06 8e dc e8 cd cd 7d fc 58 62' + } + ), + # Page 5 + ( + '0102030405060708090a', + { + 0: 'ed e3 b0 46 43 e5 86 cc 90 7d c2 18 51 70 99 02', + 16: '03 51 6b a7 8f 41 3b eb 22 3a a5 d4 d2 df 67 11', + 240: '3c fd 6c b5 8e e0 fd de 64 01 76 ad 00 00 04 4d', + 256: '48 53 2b 21 fb 60 79 c9 11 4c 0f fd 9c 04 a1 ad', + 496: '3e 
8c ea 98 01 71 09 97 90 84 b1 ef 92 f9 9d 86', + 512: 'e2 0f b4 9b db 33 7e e4 8b 8d 8d c0 f4 af ef fe', + 752: '5c 25 21 ea cd 79 66 f1 5e 05 65 44 be a0 d3 15', + 768: 'e0 67 a7 03 19 31 a2 46 a6 c3 87 5d 2f 67 8a cb', + 1008:'a6 4f 70 af 88 ae 56 b6 f8 75 81 c0 e2 3e 6b 08', + 1024:'f4 49 03 1d e3 12 81 4e c6 f3 19 29 1f 4a 05 16', + 1520:'bd ae 85 92 4b 3c b1 d0 a2 e3 3a 30 c6 d7 95 99', + 1536:'8a 0f ed db ac 86 5a 09 bc d1 27 fb 56 2e d6 0a', + 2032:'b5 5a 0a 5b 51 a1 2a 8b e3 48 99 c3 e0 47 51 1a', + 2048:'d9 a0 9c ea 3c e7 5f e3 96 98 07 03 17 a7 13 39', + 3056:'55 22 25 ed 11 77 f4 45 84 ac 8c fa 6c 4e b5 fc', + 3072:'7e 82 cb ab fc 95 38 1b 08 09 98 44 21 29 c2 f8', + 4080:'1f 13 5e d1 4c e6 0a 91 36 9d 23 22 be f2 5e 3c', + 4096:'08 b6 be 45 12 4a 43 e2 eb 77 95 3f 84 dc 85 53' + } + ), + ( + '0102030405060708090a0b0c0d0e0f10', + { + 0: '9a c7 cc 9a 60 9d 1e f7 b2 93 28 99 cd e4 1b 97', + 16: '52 48 c4 95 90 14 12 6a 6e 8a 84 f1 1d 1a 9e 1c', + 240: '06 59 02 e4 b6 20 f6 cc 36 c8 58 9f 66 43 2f 2b', + 256: 'd3 9d 56 6b c6 bc e3 01 07 68 15 15 49 f3 87 3f', + 496: 'b6 d1 e6 c4 a5 e4 77 1c ad 79 53 8d f2 95 fb 11', + 512: 'c6 8c 1d 5c 55 9a 97 41 23 df 1d bc 52 a4 3b 89', + 752: 'c5 ec f8 8d e8 97 fd 57 fe d3 01 70 1b 82 a2 59', + 768: 'ec cb e1 3d e1 fc c9 1c 11 a0 b2 6c 0b c8 fa 4d', + 1008:'e7 a7 25 74 f8 78 2a e2 6a ab cf 9e bc d6 60 65', + 1024:'bd f0 32 4e 60 83 dc c6 d3 ce dd 3c a8 c5 3c 16', + 1520:'b4 01 10 c4 19 0b 56 22 a9 61 16 b0 01 7e d2 97', + 1536:'ff a0 b5 14 64 7e c0 4f 63 06 b8 92 ae 66 11 81', + 2032:'d0 3d 1b c0 3c d3 3d 70 df f9 fa 5d 71 96 3e bd', + 2048:'8a 44 12 64 11 ea a7 8b d5 1e 8d 87 a8 87 9b f5', + 3056:'fa be b7 60 28 ad e2 d0 e4 87 22 e4 6c 46 15 a3', + 3072:'c0 5d 88 ab d5 03 57 f9 35 a6 3c 59 ee 53 76 23', + 4080:'ff 38 26 5c 16 42 c1 ab e8 d3 c2 fe 5e 57 2b f8', + 4096:'a3 6a 4c 30 1a e8 ac 13 61 0c cb c1 22 56 ca cc' + } + ), + # Page 6 + ( + '0102030405060708090a0b0c0d0e0f101112131415161718', + { + 0: '05 95 e5 7f e5 f0 bb 3c 70 6e da c8 a4 b2 db 11', + 16: 'df de 31 34 4a 1a f7 69 c7 4f 07 0a ee 9e 23 26', + 240: 'b0 6b 9b 1e 19 5d 13 d8 f4 a7 99 5c 45 53 ac 05', + 256: '6b d2 37 8e c3 41 c9 a4 2f 37 ba 79 f8 8a 32 ff', + 496: 'e7 0b ce 1d f7 64 5a db 5d 2c 41 30 21 5c 35 22', + 512: '9a 57 30 c7 fc b4 c9 af 51 ff da 89 c7 f1 ad 22', + 752: '04 85 05 5f d4 f6 f0 d9 63 ef 5a b9 a5 47 69 82', + 768: '59 1f c6 6b cd a1 0e 45 2b 03 d4 55 1f 6b 62 ac', + 1008:'27 53 cc 83 98 8a fa 3e 16 88 a1 d3 b4 2c 9a 02', + 1024:'93 61 0d 52 3d 1d 3f 00 62 b3 c2 a3 bb c7 c7 f0', + 1520:'96 c2 48 61 0a ad ed fe af 89 78 c0 3d e8 20 5a', + 1536:'0e 31 7b 3d 1c 73 b9 e9 a4 68 8f 29 6d 13 3a 19', + 2032:'bd f0 e6 c3 cc a5 b5 b9 d5 33 b6 9c 56 ad a1 20', + 2048:'88 a2 18 b6 e2 ec e1 e6 24 6d 44 c7 59 d1 9b 10', + 3056:'68 66 39 7e 95 c1 40 53 4f 94 26 34 21 00 6e 40', + 3072:'32 cb 0a 1e 95 42 c6 b3 b8 b3 98 ab c3 b0 f1 d5', + 4080:'29 a0 b8 ae d5 4a 13 23 24 c6 2e 42 3f 54 b4 c8', + 4096:'3c b0 f3 b5 02 0a 98 b8 2a f9 fe 15 44 84 a1 68' + } + ), + ( + '0102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20', + { + 0: 'ea a6 bd 25 88 0b f9 3d 3f 5d 1e 4c a2 61 1d 91', + 16: 'cf a4 5c 9f 7e 71 4b 54 bd fa 80 02 7c b1 43 80', + 240: '11 4a e3 44 de d7 1b 35 f2 e6 0f eb ad 72 7f d8', + 256: '02 e1 e7 05 6b 0f 62 39 00 49 64 22 94 3e 97 b6', + 496: '91 cb 93 c7 87 96 4e 10 d9 52 7d 99 9c 6f 93 6b', + 512: '49 b1 8b 42 f8 e8 36 7c be b5 ef 10 4b a1 c7 cd', + 752: '87 08 4b 3b a7 00 ba de 95 56 10 67 27 45 b3 74', + 768: 'e7 a7 b9 e9 ec 54 0d 5f f4 3b db 12 79 
2d 1b 35', + 1008:'c7 99 b5 96 73 8f 6b 01 8c 76 c7 4b 17 59 bd 90', + 1024:'7f ec 5b fd 9f 9b 89 ce 65 48 30 90 92 d7 e9 58', + 1520:'40 f2 50 b2 6d 1f 09 6a 4a fd 4c 34 0a 58 88 15', + 1536:'3e 34 13 5c 79 db 01 02 00 76 76 51 cf 26 30 73', + 2032:'f6 56 ab cc f8 8d d8 27 02 7b 2c e9 17 d4 64 ec', + 2048:'18 b6 25 03 bf bc 07 7f ba bb 98 f2 0d 98 ab 34', + 3056:'8a ed 95 ee 5b 0d cb fb ef 4e b2 1d 3a 3f 52 f9', + 3072:'62 5a 1a b0 0e e3 9a 53 27 34 6b dd b0 1a 9c 18', + 4080:'a1 3a 7c 79 c7 e1 19 b5 ab 02 96 ab 28 c3 00 b9', + 4096:'f3 e4 c0 a2 e0 2d 1d 01 f7 f0 a7 46 18 af 2b 48' + } + ), + # Page 7 + ( + '833222772a', + { + 0: '80 ad 97 bd c9 73 df 8a 2e 87 9e 92 a4 97 ef da', + 16: '20 f0 60 c2 f2 e5 12 65 01 d3 d4 fe a1 0d 5f c0', + 240: 'fa a1 48 e9 90 46 18 1f ec 6b 20 85 f3 b2 0e d9', + 256: 'f0 da f5 ba b3 d5 96 83 98 57 84 6f 73 fb fe 5a', + 496: '1c 7e 2f c4 63 92 32 fe 29 75 84 b2 96 99 6b c8', + 512: '3d b9 b2 49 40 6c c8 ed ff ac 55 cc d3 22 ba 12', + 752: 'e4 f9 f7 e0 06 61 54 bb d1 25 b7 45 56 9b c8 97', + 768: '75 d5 ef 26 2b 44 c4 1a 9c f6 3a e1 45 68 e1 b9', + 1008:'6d a4 53 db f8 1e 82 33 4a 3d 88 66 cb 50 a1 e3', + 1024:'78 28 d0 74 11 9c ab 5c 22 b2 94 d7 a9 bf a0 bb', + 1520:'ad b8 9c ea 9a 15 fb e6 17 29 5b d0 4b 8c a0 5c', + 1536:'62 51 d8 7f d4 aa ae 9a 7e 4a d5 c2 17 d3 f3 00', + 2032:'e7 11 9b d6 dd 9b 22 af e8 f8 95 85 43 28 81 e2', + 2048:'78 5b 60 fd 7e c4 e9 fc b6 54 5f 35 0d 66 0f ab', + 3056:'af ec c0 37 fd b7 b0 83 8e b3 d7 0b cd 26 83 82', + 3072:'db c1 a7 b4 9d 57 35 8c c9 fa 6d 61 d7 3b 7c f0', + 4080:'63 49 d1 26 a3 7a fc ba 89 79 4f 98 04 91 4f dc', + 4096:'bf 42 c3 01 8c 2f 7c 66 bf de 52 49 75 76 81 15' + } + ), + ( + '1910833222772a', + { + 0: 'bc 92 22 db d3 27 4d 8f c6 6d 14 cc bd a6 69 0b', + 16: '7a e6 27 41 0c 9a 2b e6 93 df 5b b7 48 5a 63 e3', + 240: '3f 09 31 aa 03 de fb 30 0f 06 01 03 82 6f 2a 64', + 256: 'be aa 9e c8 d5 9b b6 81 29 f3 02 7c 96 36 11 81', + 496: '74 e0 4d b4 6d 28 64 8d 7d ee 8a 00 64 b0 6c fe', + 512: '9b 5e 81 c6 2f e0 23 c5 5b e4 2f 87 bb f9 32 b8', + 752: 'ce 17 8f c1 82 6e fe cb c1 82 f5 79 99 a4 61 40', + 768: '8b df 55 cd 55 06 1c 06 db a6 be 11 de 4a 57 8a', + 1008:'62 6f 5f 4d ce 65 25 01 f3 08 7d 39 c9 2c c3 49', + 1024:'42 da ac 6a 8f 9a b9 a7 fd 13 7c 60 37 82 56 82', + 1520:'cc 03 fd b7 91 92 a2 07 31 2f 53 f5 d4 dc 33 d9', + 1536:'f7 0f 14 12 2a 1c 98 a3 15 5d 28 b8 a0 a8 a4 1d', + 2032:'2a 3a 30 7a b2 70 8a 9c 00 fe 0b 42 f9 c2 d6 a1', + 2048:'86 26 17 62 7d 22 61 ea b0 b1 24 65 97 ca 0a e9', + 3056:'55 f8 77 ce 4f 2e 1d db bf 8e 13 e2 cd e0 fd c8', + 3072:'1b 15 56 cb 93 5f 17 33 37 70 5f bb 5d 50 1f c1', + 4080:'ec d0 e9 66 02 be 7f 8d 50 92 81 6c cc f2 c2 e9', + 4096:'02 78 81 fa b4 99 3a 1c 26 20 24 a9 4f ff 3f 61' + } + ), + # Page 8 + ( + '641910833222772a', + { + 0: 'bb f6 09 de 94 13 17 2d 07 66 0c b6 80 71 69 26', + 16: '46 10 1a 6d ab 43 11 5d 6c 52 2b 4f e9 36 04 a9', + 240: 'cb e1 ff f2 1c 96 f3 ee f6 1e 8f e0 54 2c bd f0', + 256: '34 79 38 bf fa 40 09 c5 12 cf b4 03 4b 0d d1 a7', + 496: '78 67 a7 86 d0 0a 71 47 90 4d 76 dd f1 e5 20 e3', + 512: '8d 3e 9e 1c ae fc cc b3 fb f8 d1 8f 64 12 0b 32', + 752: '94 23 37 f8 fd 76 f0 fa e8 c5 2d 79 54 81 06 72', + 768: 'b8 54 8c 10 f5 16 67 f6 e6 0e 18 2f a1 9b 30 f7', + 1008:'02 11 c7 c6 19 0c 9e fd 12 37 c3 4c 8f 2e 06 c4', + 1024:'bd a6 4f 65 27 6d 2a ac b8 f9 02 12 20 3a 80 8e', + 1520:'bd 38 20 f7 32 ff b5 3e c1 93 e7 9d 33 e2 7c 73', + 1536:'d0 16 86 16 86 19 07 d4 82 e3 6c da c8 cf 57 49', + 2032:'97 b0 f0 f2 24 b2 d2 31 71 14 80 8f b0 3a f7 a0', + 
2048:'e5 96 16 e4 69 78 79 39 a0 63 ce ea 9a f9 56 d1', + 3056:'c4 7e 0d c1 66 09 19 c1 11 01 20 8f 9e 69 aa 1f', + 3072:'5a e4 f1 28 96 b8 37 9a 2a ad 89 b5 b5 53 d6 b0', + 4080:'6b 6b 09 8d 0c 29 3b c2 99 3d 80 bf 05 18 b6 d9', + 4096:'81 70 cc 3c cd 92 a6 98 62 1b 93 9d d3 8f e7 b9' + } + ), + ( + '8b37641910833222772a', + { + 0: 'ab 65 c2 6e dd b2 87 60 0d b2 fd a1 0d 1e 60 5c', + 16: 'bb 75 90 10 c2 96 58 f2 c7 2d 93 a2 d1 6d 29 30', + 240: 'b9 01 e8 03 6e d1 c3 83 cd 3c 4c 4d d0 a6 ab 05', + 256: '3d 25 ce 49 22 92 4c 55 f0 64 94 33 53 d7 8a 6c', + 496: '12 c1 aa 44 bb f8 7e 75 e6 11 f6 9b 2c 38 f4 9b', + 512: '28 f2 b3 43 4b 65 c0 98 77 47 00 44 c6 ea 17 0d', + 752: 'bd 9e f8 22 de 52 88 19 61 34 cf 8a f7 83 93 04', + 768: '67 55 9c 23 f0 52 15 84 70 a2 96 f7 25 73 5a 32', + 1008:'8b ab 26 fb c2 c1 2b 0f 13 e2 ab 18 5e ab f2 41', + 1024:'31 18 5a 6d 69 6f 0c fa 9b 42 80 8b 38 e1 32 a2', + 1520:'56 4d 3d ae 18 3c 52 34 c8 af 1e 51 06 1c 44 b5', + 1536:'3c 07 78 a7 b5 f7 2d 3c 23 a3 13 5c 7d 67 b9 f4', + 2032:'f3 43 69 89 0f cf 16 fb 51 7d ca ae 44 63 b2 dd', + 2048:'02 f3 1c 81 e8 20 07 31 b8 99 b0 28 e7 91 bf a7', + 3056:'72 da 64 62 83 22 8c 14 30 08 53 70 17 95 61 6f', + 3072:'4e 0a 8c 6f 79 34 a7 88 e2 26 5e 81 d6 d0 c8 f4', + 4080:'43 8d d5 ea fe a0 11 1b 6f 36 b4 b9 38 da 2a 68', + 4096:'5f 6b fc 73 81 58 74 d9 71 00 f0 86 97 93 57 d8' + } + ), + # Page 9 + ( + 'ebb46227c6cc8b37641910833222772a', + { + 0: '72 0c 94 b6 3e df 44 e1 31 d9 50 ca 21 1a 5a 30', + 16: 'c3 66 fd ea cf 9c a8 04 36 be 7c 35 84 24 d2 0b', + 240: 'b3 39 4a 40 aa bf 75 cb a4 22 82 ef 25 a0 05 9f', + 256: '48 47 d8 1d a4 94 2d bc 24 9d ef c4 8c 92 2b 9f', + 496: '08 12 8c 46 9f 27 53 42 ad da 20 2b 2b 58 da 95', + 512: '97 0d ac ef 40 ad 98 72 3b ac 5d 69 55 b8 17 61', + 752: '3c b8 99 93 b0 7b 0c ed 93 de 13 d2 a1 10 13 ac', + 768: 'ef 2d 67 6f 15 45 c2 c1 3d c6 80 a0 2f 4a db fe', + 1008:'b6 05 95 51 4f 24 bc 9f e5 22 a6 ca d7 39 36 44', + 1024:'b5 15 a8 c5 01 17 54 f5 90 03 05 8b db 81 51 4e', + 1520:'3c 70 04 7e 8c bc 03 8e 3b 98 20 db 60 1d a4 95', + 1536:'11 75 da 6e e7 56 de 46 a5 3e 2b 07 56 60 b7 70', + 2032:'00 a5 42 bb a0 21 11 cc 2c 65 b3 8e bd ba 58 7e', + 2048:'58 65 fd bb 5b 48 06 41 04 e8 30 b3 80 f2 ae de', + 3056:'34 b2 1a d2 ad 44 e9 99 db 2d 7f 08 63 f0 d9 b6', + 3072:'84 a9 21 8f c3 6e 8a 5f 2c cf be ae 53 a2 7d 25', + 4080:'a2 22 1a 11 b8 33 cc b4 98 a5 95 40 f0 54 5f 4a', + 4096:'5b be b4 78 7d 59 e5 37 3f db ea 6c 6f 75 c2 9b' + } + ), + ( + 'c109163908ebe51debb46227c6cc8b37641910833222772a', + { + 0: '54 b6 4e 6b 5a 20 b5 e2 ec 84 59 3d c7 98 9d a7', + 16: 'c1 35 ee e2 37 a8 54 65 ff 97 dc 03 92 4f 45 ce', + 240: 'cf cc 92 2f b4 a1 4a b4 5d 61 75 aa bb f2 d2 01', + 256: '83 7b 87 e2 a4 46 ad 0e f7 98 ac d0 2b 94 12 4f', + 496: '17 a6 db d6 64 92 6a 06 36 b3 f4 c3 7a 4f 46 94', + 512: '4a 5f 9f 26 ae ee d4 d4 a2 5f 63 2d 30 52 33 d9', + 752: '80 a3 d0 1e f0 0c 8e 9a 42 09 c1 7f 4e eb 35 8c', + 768: 'd1 5e 7d 5f fa aa bc 02 07 bf 20 0a 11 77 93 a2', + 1008:'34 96 82 bf 58 8e aa 52 d0 aa 15 60 34 6a ea fa', + 1024:'f5 85 4c db 76 c8 89 e3 ad 63 35 4e 5f 72 75 e3', + 1520:'53 2c 7c ec cb 39 df 32 36 31 84 05 a4 b1 27 9c', + 1536:'ba ef e6 d9 ce b6 51 84 22 60 e0 d1 e0 5e 3b 90', + 2032:'e8 2d 8c 6d b5 4e 3c 63 3f 58 1c 95 2b a0 42 07', + 2048:'4b 16 e5 0a bd 38 1b d7 09 00 a9 cd 9a 62 cb 23', + 3056:'36 82 ee 33 bd 14 8b d9 f5 86 56 cd 8f 30 d9 fb', + 3072:'1e 5a 0b 84 75 04 5d 9b 20 b2 62 86 24 ed fd 9e', + 4080:'63 ed d6 84 fb 82 62 82 fe 52 8f 9c 0e 92 37 bc', + 4096:'e4 dd 2e 98 d6 
96 0f ae 0b 43 54 54 56 74 33 91' + } + ), + # Page 10 + ( + '1ada31d5cf688221c109163908ebe51debb46227c6cc8b37641910833222772a', + { + 0: 'dd 5b cb 00 18 e9 22 d4 94 75 9d 7c 39 5d 02 d3', + 16: 'c8 44 6f 8f 77 ab f7 37 68 53 53 eb 89 a1 c9 eb', + 240: 'af 3e 30 f9 c0 95 04 59 38 15 15 75 c3 fb 90 98', + 256: 'f8 cb 62 74 db 99 b8 0b 1d 20 12 a9 8e d4 8f 0e', + 496: '25 c3 00 5a 1c b8 5d e0 76 25 98 39 ab 71 98 ab', + 512: '9d cb c1 83 e8 cb 99 4b 72 7b 75 be 31 80 76 9c', + 752: 'a1 d3 07 8d fa 91 69 50 3e d9 d4 49 1d ee 4e b2', + 768: '85 14 a5 49 58 58 09 6f 59 6e 4b cd 66 b1 06 65', + 1008:'5f 40 d5 9e c1 b0 3b 33 73 8e fa 60 b2 25 5d 31', + 1024:'34 77 c7 f7 64 a4 1b ac ef f9 0b f1 4f 92 b7 cc', + 1520:'ac 4e 95 36 8d 99 b9 eb 78 b8 da 8f 81 ff a7 95', + 1536:'8c 3c 13 f8 c2 38 8b b7 3f 38 57 6e 65 b7 c4 46', + 2032:'13 c4 b9 c1 df b6 65 79 ed dd 8a 28 0b 9f 73 16', + 2048:'dd d2 78 20 55 01 26 69 8e fa ad c6 4b 64 f6 6e', + 3056:'f0 8f 2e 66 d2 8e d1 43 f3 a2 37 cf 9d e7 35 59', + 3072:'9e a3 6c 52 55 31 b8 80 ba 12 43 34 f5 7b 0b 70', + 4080:'d5 a3 9e 3d fc c5 02 80 ba c4 a6 b5 aa 0d ca 7d', + 4096:'37 0b 1c 1f e6 55 91 6d 97 fd 0d 47 ca 1d 72 b8' + } + ) + ] + + def test_keystream(self): + for tv in self.rfc6229_data: + key = unhexlify(b((tv[0]))) + cipher = ARC4.new(key) + count = 0 + for offset in range(0,4096+1,16): + ct = cipher.encrypt(b('\x00')*16) + expected = tv[1].get(offset) + if expected: + expected = unhexlify(b(expected.replace(" ",''))) + self.assertEqual(ct, expected) + count += 1 + self.assertEqual(count, len(tv[1])) + +class Drop_Tests(unittest.TestCase): + key = b('\xAA')*16 + data = b('\x00')*5000 + + def setUp(self): + self.cipher = ARC4.new(self.key) + + def test_drop256_encrypt(self): + cipher_drop = ARC4.new(self.key, 256) + ct_drop = cipher_drop.encrypt(self.data[:16]) + ct = self.cipher.encrypt(self.data)[256:256+16] + self.assertEqual(ct_drop, ct) + + def test_drop256_decrypt(self): + cipher_drop = ARC4.new(self.key, 256) + pt_drop = cipher_drop.decrypt(self.data[:16]) + pt = self.cipher.decrypt(self.data)[256:256+16] + self.assertEqual(pt_drop, pt) + + +class KeyLength(unittest.TestCase): + + def runTest(self): + self.assertRaises(ValueError, ARC4.new, bchr(0) * 4) + self.assertRaises(ValueError, ARC4.new, bchr(0) * 257) + + +def get_tests(config={}): + from .common import make_stream_tests + tests = make_stream_tests(ARC4, "ARC4", test_data) + tests += list_test_cases(RFC6229_Tests) + tests += list_test_cases(Drop_Tests) + tests.append(KeyLength()) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_Blowfish.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_Blowfish.py new file mode 100644 index 00000000..4ce3a415 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_Blowfish.py @@ -0,0 +1,160 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/test_Blowfish.py: Self-test for the Blowfish cipher +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. 
To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Cipher.Blowfish""" + +import unittest + +from Crypto.Util.py3compat import bchr + +from Crypto.Cipher import Blowfish + +# This is a list of (plaintext, ciphertext, key) tuples. +test_data = [ + # Test vectors from http://www.schneier.com/code/vectors.txt + ('0000000000000000', '4ef997456198dd78', '0000000000000000'), + ('ffffffffffffffff', '51866fd5b85ecb8a', 'ffffffffffffffff'), + ('1000000000000001', '7d856f9a613063f2', '3000000000000000'), + ('1111111111111111', '2466dd878b963c9d', '1111111111111111'), + ('1111111111111111', '61f9c3802281b096', '0123456789abcdef'), + ('0123456789abcdef', '7d0cc630afda1ec7', '1111111111111111'), + ('0000000000000000', '4ef997456198dd78', '0000000000000000'), + ('0123456789abcdef', '0aceab0fc6a0a28d', 'fedcba9876543210'), + ('01a1d6d039776742', '59c68245eb05282b', '7ca110454a1a6e57'), + ('5cd54ca83def57da', 'b1b8cc0b250f09a0', '0131d9619dc1376e'), + ('0248d43806f67172', '1730e5778bea1da4', '07a1133e4a0b2686'), + ('51454b582ddf440a', 'a25e7856cf2651eb', '3849674c2602319e'), + ('42fd443059577fa2', '353882b109ce8f1a', '04b915ba43feb5b6'), + ('059b5e0851cf143a', '48f4d0884c379918', '0113b970fd34f2ce'), + ('0756d8e0774761d2', '432193b78951fc98', '0170f175468fb5e6'), + ('762514b829bf486a', '13f04154d69d1ae5', '43297fad38e373fe'), + ('3bdd119049372802', '2eedda93ffd39c79', '07a7137045da2a16'), + ('26955f6835af609a', 'd887e0393c2da6e3', '04689104c2fd3b2f'), + ('164d5e404f275232', '5f99d04f5b163969', '37d06bb516cb7546'), + ('6b056e18759f5cca', '4a057a3b24d3977b', '1f08260d1ac2465e'), + ('004bd6ef09176062', '452031c1e4fada8e', '584023641aba6176'), + ('480d39006ee762f2', '7555ae39f59b87bd', '025816164629b007'), + ('437540c8698f3cfa', '53c55f9cb49fc019', '49793ebc79b3258f'), + ('072d43a077075292', '7a8e7bfa937e89a3', '4fb05e1515ab73a7'), + ('02fe55778117f12a', 'cf9c5d7a4986adb5', '49e95d6d4ca229bf'), + ('1d9d5c5018f728c2', 'd1abb290658bc778', '018310dc409b26d6'), + ('305532286d6f295a', '55cb3774d13ef201', '1c587f1c13924fef'), + ('0123456789abcdef', 'fa34ec4847b268b2', '0101010101010101'), + ('0123456789abcdef', 'a790795108ea3cae', '1f1f1f1f0e0e0e0e'), + ('0123456789abcdef', 'c39e072d9fac631d', 'e0fee0fef1fef1fe'), + ('ffffffffffffffff', '014933e0cdaff6e4', '0000000000000000'), + ('0000000000000000', 'f21e9a77b71c49bc', 'ffffffffffffffff'), + ('0000000000000000', '245946885754369a', '0123456789abcdef'), + ('ffffffffffffffff', '6b5c5a9c5d9e0a5a', 'fedcba9876543210'), + #('fedcba9876543210', 'f9ad597c49db005e', 'f0'), + #('fedcba9876543210', 'e91d21c1d961a6d6', 'f0e1'), + #('fedcba9876543210', 'e9c2b70a1bc65cf3', 'f0e1d2'), + ('fedcba9876543210', 'be1e639408640f05', 'f0e1d2c3'), + ('fedcba9876543210', 
'b39e44481bdb1e6e', 'f0e1d2c3b4'), + ('fedcba9876543210', '9457aa83b1928c0d', 'f0e1d2c3b4a5'), + ('fedcba9876543210', '8bb77032f960629d', 'f0e1d2c3b4a596'), + ('fedcba9876543210', 'e87a244e2cc85e82', 'f0e1d2c3b4a59687'), + ('fedcba9876543210', '15750e7a4f4ec577', 'f0e1d2c3b4a5968778'), + ('fedcba9876543210', '122ba70b3ab64ae0', 'f0e1d2c3b4a596877869'), + ('fedcba9876543210', '3a833c9affc537f6', 'f0e1d2c3b4a5968778695a'), + ('fedcba9876543210', '9409da87a90f6bf2', 'f0e1d2c3b4a5968778695a4b'), + ('fedcba9876543210', '884f80625060b8b4', 'f0e1d2c3b4a5968778695a4b3c'), + ('fedcba9876543210', '1f85031c19e11968', 'f0e1d2c3b4a5968778695a4b3c2d'), + ('fedcba9876543210', '79d9373a714ca34f', 'f0e1d2c3b4a5968778695a4b3c2d1e'), + ('fedcba9876543210', '93142887ee3be15c', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f'), + ('fedcba9876543210', '03429e838ce2d14b', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f00'), + ('fedcba9876543210', 'a4299e27469ff67b', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f0011'), + ('fedcba9876543210', 'afd5aed1c1bc96a8', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f001122'), + ('fedcba9876543210', '10851c0e3858da9f', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f00112233'), + ('fedcba9876543210', 'e6f51ed79b9db21f', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f0011223344'), + ('fedcba9876543210', '64a6e14afd36b46f', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f001122334455'), + ('fedcba9876543210', '80c7d7d45a5479ad', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f00112233445566'), + ('fedcba9876543210', '05044b62fa52d080', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f0011223344556677'), +] + + +class KeyLength(unittest.TestCase): + + def runTest(self): + self.assertRaises(ValueError, Blowfish.new, bchr(0) * 3, + Blowfish.MODE_ECB) + self.assertRaises(ValueError, Blowfish.new, bchr(0) * 57, + Blowfish.MODE_ECB) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + cipher = Blowfish.new(b'4'*16, Blowfish.MODE_ECB) + + pt = b'5' * 16 + ct = cipher.encrypt(pt) + + output = bytearray(16) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(16)) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(7) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from .common import make_block_tests + tests = make_block_tests(Blowfish, "Blowfish", test_data) + tests.append(KeyLength()) + tests += [TestOutput()] + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CAST.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CAST.py new file mode 100644 index 00000000..ff13bd4a --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CAST.py @@ -0,0 +1,101 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/CAST.py: Self-test for the CAST-128 (CAST5) cipher +# +# Written in 2008 by Dwayne C. 
Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Cipher.CAST""" + +import unittest + +from Crypto.Util.py3compat import bchr + +from Crypto.Cipher import CAST + +# This is a list of (plaintext, ciphertext, key) tuples. +test_data = [ + # Test vectors from RFC 2144, B.1 + ('0123456789abcdef', '238b4fe5847e44b2', + '0123456712345678234567893456789a', + '128-bit key'), + + ('0123456789abcdef', 'eb6a711a2c02271b', + '01234567123456782345', + '80-bit key'), + + ('0123456789abcdef', '7ac816d16e9b302e', + '0123456712', + '40-bit key'), +] + + +class KeyLength(unittest.TestCase): + + def runTest(self): + self.assertRaises(ValueError, CAST.new, bchr(0) * 4, CAST.MODE_ECB) + self.assertRaises(ValueError, CAST.new, bchr(0) * 17, CAST.MODE_ECB) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + cipher = CAST.new(b'4'*16, CAST.MODE_ECB) + + pt = b'5' * 16 + ct = cipher.encrypt(pt) + + output = bytearray(16) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(16)) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(7) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from .common import make_block_tests + + tests = make_block_tests(CAST, "CAST", test_data) + tests.append(KeyLength()) + tests.append(TestOutput()) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CBC.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CBC.py new file mode 100644 index 00000000..374fb5a4 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CBC.py @@ -0,0 +1,556 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.loader import load_test_vectors +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Util.py3compat import tobytes, is_string +from Crypto.Cipher import AES, DES3, DES +from Crypto.Hash import SHAKE128 + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + +class BlockChainingTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + key_192 = get_tag_random("key_192", 24) + iv_128 = get_tag_random("iv_128", 16) + iv_64 = get_tag_random("iv_64", 8) + data_128 = get_tag_random("data_128", 16) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_loopback_64(self): + cipher = DES3.new(self.key_192, self.des3_mode, self.iv_64) + pt = get_tag_random("plaintext", 8 * 100) + ct = cipher.encrypt(pt) + + cipher = DES3.new(self.key_192, self.des3_mode, self.iv_64) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_iv(self): + # If not passed, the iv is created randomly + cipher = AES.new(self.key_128, self.aes_mode) + iv1 = cipher.iv + cipher = AES.new(self.key_128, self.aes_mode) + iv2 = cipher.iv + self.assertNotEqual(iv1, iv2) + self.assertEqual(len(iv1), 16) + + # IV can be passed in uppercase or lowercase + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + ct = cipher.encrypt(self.data_128) + + cipher = AES.new(self.key_128, self.aes_mode, iv=self.iv_128) + self.assertEqual(ct, cipher.encrypt(self.data_128)) + + cipher = AES.new(self.key_128, self.aes_mode, IV=self.iv_128) + self.assertEqual(ct, cipher.encrypt(self.data_128)) + + def test_iv_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_128, self.aes_mode, + iv = u'test1234567890-*') + + def test_only_one_iv(self): + # Only one IV/iv keyword allowed + self.assertRaises(TypeError, AES.new, self.key_128, self.aes_mode, + iv=self.iv_128, IV=self.iv_128) + + def 
test_iv_with_matching_length(self): + self.assertRaises(ValueError, AES.new, self.key_128, self.aes_mode, + b"") + self.assertRaises(ValueError, AES.new, self.key_128, self.aes_mode, + self.iv_128[:15]) + self.assertRaises(ValueError, AES.new, self.key_128, self.aes_mode, + self.iv_128 + b"0") + + def test_block_size_128(self): + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_block_size_64(self): + cipher = DES3.new(self.key_192, self.des3_mode, self.iv_64) + self.assertEqual(cipher.block_size, DES3.block_size) + + def test_unaligned_data_128(self): + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + for wrong_length in range(1,16): + self.assertRaises(ValueError, cipher.encrypt, b"5" * wrong_length) + + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + for wrong_length in range(1,16): + self.assertRaises(ValueError, cipher.decrypt, b"5" * wrong_length) + + def test_unaligned_data_64(self): + cipher = DES3.new(self.key_192, self.des3_mode, self.iv_64) + for wrong_length in range(1,8): + self.assertRaises(ValueError, cipher.encrypt, b"5" * wrong_length) + + cipher = DES3.new(self.key_192, self.des3_mode, self.iv_64) + for wrong_length in range(1,8): + self.assertRaises(ValueError, cipher.decrypt, b"5" * wrong_length) + + def test_IV_iv_attributes(self): + data = get_tag_random("data", 16 * 100) + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + getattr(cipher, func)(data) + self.assertEqual(cipher.iv, self.iv_128) + self.assertEqual(cipher.IV, self.iv_128) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, self.aes_mode, + self.iv_128, 7) + self.assertRaises(TypeError, AES.new, self.key_128, self.aes_mode, + iv=self.iv_128, unknown=7) + # But some are only known by the base cipher (e.g. 
use_aesni consumed by the AES module) + AES.new(self.key_128, self.aes_mode, iv=self.iv_128, use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_bytearray(self): + data = b"1" * 128 + data_ba = bytearray(data) + + # Encrypt + key_ba = bytearray(self.key_128) + iv_ba = bytearray(self.iv_128) + + cipher1 = AES.new(self.key_128, self.aes_mode, self.iv_128) + ref1 = cipher1.encrypt(data) + + cipher2 = AES.new(key_ba, self.aes_mode, iv_ba) + key_ba[:3] = b'\xFF\xFF\xFF' + iv_ba[:3] = b'\xFF\xFF\xFF' + ref2 = cipher2.encrypt(data_ba) + + self.assertEqual(ref1, ref2) + self.assertEqual(cipher1.iv, cipher2.iv) + + # Decrypt + key_ba = bytearray(self.key_128) + iv_ba = bytearray(self.iv_128) + + cipher3 = AES.new(self.key_128, self.aes_mode, self.iv_128) + ref3 = cipher3.decrypt(data) + + cipher4 = AES.new(key_ba, self.aes_mode, iv_ba) + key_ba[:3] = b'\xFF\xFF\xFF' + iv_ba[:3] = b'\xFF\xFF\xFF' + ref4 = cipher4.decrypt(data_ba) + + self.assertEqual(ref3, ref4) + + def test_memoryview(self): + data = b"1" * 128 + data_mv = memoryview(bytearray(data)) + + # Encrypt + key_mv = memoryview(bytearray(self.key_128)) + iv_mv = memoryview(bytearray(self.iv_128)) + + cipher1 = AES.new(self.key_128, self.aes_mode, self.iv_128) + ref1 = cipher1.encrypt(data) + + cipher2 = AES.new(key_mv, self.aes_mode, iv_mv) + key_mv[:3] = b'\xFF\xFF\xFF' + iv_mv[:3] = b'\xFF\xFF\xFF' + ref2 = cipher2.encrypt(data_mv) + + self.assertEqual(ref1, ref2) + self.assertEqual(cipher1.iv, cipher2.iv) + + # Decrypt + key_mv = memoryview(bytearray(self.key_128)) + iv_mv = memoryview(bytearray(self.iv_128)) + + cipher3 = AES.new(self.key_128, self.aes_mode, self.iv_128) + ref3 = cipher3.decrypt(data) + + cipher4 = AES.new(key_mv, self.aes_mode, iv_mv) + key_mv[:3] = b'\xFF\xFF\xFF' + iv_mv[:3] = b'\xFF\xFF\xFF' + ref4 = cipher4.decrypt(data_mv) + + self.assertEqual(ref3, ref4) + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + ct = cipher.encrypt(pt) + + output = bytearray(128) + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + + def test_output_param_same_buffer(self): + + pt = b'5' * 128 + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + ct = cipher.encrypt(pt) + + pt_ba = bytearray(pt) + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + res = cipher.encrypt(pt_ba, output=pt_ba) + self.assertEqual(ct, pt_ba) + self.assertEqual(res, None) + + ct_ba = bytearray(ct) + cipher = 
AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + res = cipher.decrypt(ct_ba, output=ct_ba) + self.assertEqual(pt, ct_ba) + self.assertEqual(res, None) + + + def test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + ct = cipher.encrypt(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + LEN_PT = 128 + + pt = b'5' * LEN_PT + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + ct = cipher.encrypt(pt) + + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0' * LEN_PT) + + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0' * LEN_PT) + + shorter_output = bytearray(LEN_PT - 1) + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +class CbcTests(BlockChainingTests): + aes_mode = AES.MODE_CBC + des3_mode = DES3.MODE_CBC + + +class NistBlockChainingVectors(unittest.TestCase): + + def _do_kat_aes_test(self, file_name): + + test_vectors = load_test_vectors(("Cipher", "AES"), + file_name, + "AES CBC KAT", + { "count" : lambda x: int(x) } ) + if test_vectors is None: + return + + direction = None + for tv in test_vectors: + + # The test vector file contains some directive lines + if is_string(tv): + direction = tv + continue + + self.description = tv.desc + + cipher = AES.new(tv.key, self.aes_mode, tv.iv) + if direction == "[ENCRYPT]": + self.assertEqual(cipher.encrypt(tv.plaintext), tv.ciphertext) + elif direction == "[DECRYPT]": + self.assertEqual(cipher.decrypt(tv.ciphertext), tv.plaintext) + else: + assert False + + # See Section 6.4.2 in AESAVS + def _do_mct_aes_test(self, file_name): + + test_vectors = load_test_vectors(("Cipher", "AES"), + file_name, + "AES CBC Montecarlo", + { "count" : lambda x: int(x) } ) + if test_vectors is None: + return + + direction = None + for tv in test_vectors: + + # The test vector file contains some directive lines + if is_string(tv): + direction = tv + continue + + self.description = tv.desc + cipher = AES.new(tv.key, self.aes_mode, tv.iv) + + if direction == '[ENCRYPT]': + cts = [ tv.iv ] + for count in range(1000): + cts.append(cipher.encrypt(tv.plaintext)) + tv.plaintext = cts[-2] + self.assertEqual(cts[-1], tv.ciphertext) + elif direction == '[DECRYPT]': + pts = [ tv.iv] + for count in range(1000): + pts.append(cipher.decrypt(tv.ciphertext)) + tv.ciphertext = pts[-2] + self.assertEqual(pts[-1], tv.plaintext) + else: + assert False + + def _do_tdes_test(self, file_name): + + test_vectors = load_test_vectors(("Cipher", "TDES"), + file_name, + "TDES CBC KAT", + { "count" : lambda x: int(x) } ) + if test_vectors is None: + return + + direction = None + for tv in test_vectors: + + # The test vector file contains some directive lines + if is_string(tv): + direction = tv + continue + + self.description = tv.desc + if hasattr(tv, "keys"): + cipher = DES.new(tv.keys, self.des_mode, tv.iv) + else: + if tv.key1 != tv.key3: + key = tv.key1 + tv.key2 + tv.key3 # Option 3 + 
else: + key = tv.key1 + tv.key2 # Option 2 + cipher = DES3.new(key, self.des3_mode, tv.iv) + + if direction == "[ENCRYPT]": + self.assertEqual(cipher.encrypt(tv.plaintext), tv.ciphertext) + elif direction == "[DECRYPT]": + self.assertEqual(cipher.decrypt(tv.ciphertext), tv.plaintext) + else: + assert False + + +class NistCbcVectors(NistBlockChainingVectors): + aes_mode = AES.MODE_CBC + des_mode = DES.MODE_CBC + des3_mode = DES3.MODE_CBC + + +# Create one test method per file +nist_aes_kat_mmt_files = ( + # KAT + "CBCGFSbox128.rsp", + "CBCGFSbox192.rsp", + "CBCGFSbox256.rsp", + "CBCKeySbox128.rsp", + "CBCKeySbox192.rsp", + "CBCKeySbox256.rsp", + "CBCVarKey128.rsp", + "CBCVarKey192.rsp", + "CBCVarKey256.rsp", + "CBCVarTxt128.rsp", + "CBCVarTxt192.rsp", + "CBCVarTxt256.rsp", + # MMT + "CBCMMT128.rsp", + "CBCMMT192.rsp", + "CBCMMT256.rsp", + ) +nist_aes_mct_files = ( + "CBCMCT128.rsp", + "CBCMCT192.rsp", + "CBCMCT256.rsp", + ) + +for file_name in nist_aes_kat_mmt_files: + def new_func(self, file_name=file_name): + self._do_kat_aes_test(file_name) + setattr(NistCbcVectors, "test_AES_" + file_name, new_func) + +for file_name in nist_aes_mct_files: + def new_func(self, file_name=file_name): + self._do_mct_aes_test(file_name) + setattr(NistCbcVectors, "test_AES_" + file_name, new_func) +del file_name, new_func + +nist_tdes_files = ( + "TCBCMMT2.rsp", # 2TDES + "TCBCMMT3.rsp", # 3TDES + "TCBCinvperm.rsp", # Single DES + "TCBCpermop.rsp", + "TCBCsubtab.rsp", + "TCBCvarkey.rsp", + "TCBCvartext.rsp", + ) + +for file_name in nist_tdes_files: + def new_func(self, file_name=file_name): + self._do_tdes_test(file_name) + setattr(NistCbcVectors, "test_TDES_" + file_name, new_func) + +# END OF NIST CBC TEST VECTORS + + +class SP800TestVectors(unittest.TestCase): + """Class exercising the CBC test vectors found in Section F.2 + of NIST SP 800-3A""" + + def test_aes_128(self): + key = '2b7e151628aed2a6abf7158809cf4f3c' + iv = '000102030405060708090a0b0c0d0e0f' + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '7649abac8119b246cee98e9b12e9197d' +\ + '5086cb9b507219ee95db113a917678b2' +\ + '73bed6b8e3c1743b7116e69e22229516' +\ + '3ff1caa1681fac09120eca307586e1a7' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_192(self): + key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b' + iv = '000102030405060708090a0b0c0d0e0f' + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '4f021db243bc633d7178183a9fa071e8' +\ + 'b4d9ada9ad7dedf4e5e738763f69145a' +\ + '571b242012fb7ae07fa9baac3df102e0' +\ + '08b0e27988598881d920a9e64f5615cd' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_256(self): + key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4' + iv = 
'000102030405060708090a0b0c0d0e0f' + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = 'f58c4c04d6e5f1ba779eabfb5f7bfbd6' +\ + '9cfc4e967edb808d679f777bc6702c7d' +\ + '39f23369a9d9bacfa530e26304231461' +\ + 'b2eb05e2c39be9fcda6c19078c6a9d1b' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(CbcTests) + if config.get('slow_tests'): + tests += list_test_cases(NistCbcVectors) + tests += list_test_cases(SP800TestVectors) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CCM.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CCM.py new file mode 100644 index 00000000..e8ebc0b1 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CCM.py @@ -0,0 +1,936 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
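
Editor's note (not part of the vendored file): the CBC suite that closes above, in particular `test_bytearray` and `test_memoryview`, pins down a subtle guarantee worth calling out: `AES.new` copies the key and IV out of mutable buffers at construction time, so mutating those buffers afterwards cannot change the keystream. A minimal standalone sketch of that property, assuming PyCryptodome (`pip install pycryptodome`) is installed:

```python
from Crypto.Cipher import AES

key = bytearray(b"K" * 16)
iv = bytearray(b"I" * 16)
pt = b"A" * 16  # one AES block; CBC needs block-aligned input

ref = AES.new(bytes(key), AES.MODE_CBC, bytes(iv)).encrypt(pt)

cipher = AES.new(key, AES.MODE_CBC, iv)   # mutable buffers are accepted...
key[:3] = b"\xff\xff\xff"                 # ...and copied at construction,
iv[:3] = b"\xff\xff\xff"                  # so clobbering them is harmless
assert cipher.encrypt(pt) == ref
```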
+# =================================================================== + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors_wycheproof +from Crypto.Util.py3compat import tobytes, bchr +from Crypto.Cipher import AES +from Crypto.Hash import SHAKE128 + +from Crypto.Util.strxor import strxor + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class CcmTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # If not passed, the nonce is created randomly + cipher = AES.new(self.key_128, AES.MODE_CCM) + nonce1 = cipher.nonce + cipher = AES.new(self.key_128, AES.MODE_CCM) + nonce2 = cipher.nonce + self.assertEqual(len(nonce1), 11) + self.assertNotEqual(nonce1, nonce2) + + cipher = AES.new(self.key_128, AES.MODE_CCM, self.nonce_96) + ct = cipher.encrypt(self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertEqual(ct, cipher.encrypt(self.data)) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CCM, + nonce=u'test12345678') + + def test_nonce_length(self): + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CCM, + nonce=b"") + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CCM, + nonce=bchr(1) * 6) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CCM, + nonce=bchr(1) * 14) + for x in range(7, 13 + 1): + AES.new(self.key_128, AES.MODE_CCM, nonce=bchr(1) * x) + + def test_block_size(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_nonce_attribute(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, a 11 bytes long nonce is randomly generated + nonce1 = AES.new(self.key_128, AES.MODE_CCM).nonce + nonce2 = AES.new(self.key_128, AES.MODE_CCM).nonce + self.assertEqual(len(nonce1), 11) + self.assertNotEqual(nonce1, nonce2) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CCM, + self.nonce_96, 7) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, unknown=7) + + # But some are only known by the base cipher + # (e.g. 
use_aesni consumed by the AES module) + AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_mac_len(self): + # Invalid MAC length + for mac_len in range(3, 17 + 1, 2): + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, mac_len=mac_len) + + # Valid MAC length + for mac_len in range(4, 16 + 1, 2): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + mac_len=mac_len) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), mac_len) + + # Default MAC length + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Crypto.Util.strxor import strxor_c + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_longer_assoc_data_than_declared(self): + # More than zero + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=0) + self.assertRaises(ValueError, cipher.update, b"1") + + # Too large + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=15) + self.assertRaises(ValueError, cipher.update, self.data) + + def test_shorter_assoc_data_than_expected(self): + DATA_LEN = len(self.data) + + # With plaintext + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=DATA_LEN + 1) + cipher.update(self.data) + self.assertRaises(ValueError, cipher.encrypt, self.data) + + # With empty plaintext + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=DATA_LEN + 1) + cipher.update(self.data) + self.assertRaises(ValueError, cipher.digest) + + # With ciphertext + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=DATA_LEN + 1) + cipher.update(self.data) + self.assertRaises(ValueError, cipher.decrypt, self.data) + + # With empty ciphertext + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(self.data) + mac = cipher.digest() + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=DATA_LEN + 1) + 
cipher.update(self.data) + self.assertRaises(ValueError, cipher.verify, mac) + + def test_shorter_and_longer_plaintext_than_declared(self): + DATA_LEN = len(self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=DATA_LEN + 1) + cipher.encrypt(self.data) + self.assertRaises(ValueError, cipher.digest) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=DATA_LEN - 1) + self.assertRaises(ValueError, cipher.encrypt, self.data) + + def test_shorter_ciphertext_than_declared(self): + DATA_LEN = len(self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=DATA_LEN + 1) + cipher.decrypt(ct) + self.assertRaises(ValueError, cipher.verify, mac) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=DATA_LEN - 1) + self.assertRaises(ValueError, cipher.decrypt, ct) + + def test_message_chunks(self): + # Validate that both associated data and plaintext/ciphertext + # can be broken up in chunks of arbitrary length + + auth_data = get_tag_random("authenticated data", 127) + plaintext = get_tag_random("plaintext", 127) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(auth_data) + ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) + + def break_up(data, chunk_length): + return [data[i:i+chunk_length] for i in range(0, len(data), + chunk_length)] + + # Encryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=127, assoc_len=127) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + pt2 = b"" + for chunk in break_up(ciphertext, chunk_length): + pt2 += cipher.decrypt(chunk) + self.assertEqual(plaintext, pt2) + cipher.verify(ref_mac) + + # Decryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=127, assoc_len=127) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + ct2 = b"" + for chunk in break_up(plaintext, chunk_length): + ct2 += cipher.encrypt(chunk) + self.assertEqual(ciphertext, ct2) + self.assertEqual(cipher.digest(), ref_mac) + + def test_bytearray(self): + + # Encrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + data_ba = bytearray(self.data) + + cipher1 = AES.new(self.key_128, + AES.MODE_CCM, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + tag = cipher1.digest() + + cipher2 = AES.new(key_ba, + AES.MODE_CCM, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_ba) + data_ba[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + del data_ba + + cipher4 = AES.new(key_ba, + AES.MODE_CCM, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(bytearray(ct_test), bytearray(tag_test)) + + self.assertEqual(self.data, 
pt_test) + + def test_memoryview(self): + + # Encrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + data_mv = memoryview(bytearray(self.data)) + + cipher1 = AES.new(self.key_128, + AES.MODE_CCM, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + tag = cipher1.digest() + + cipher2 = AES.new(key_mv, + AES.MODE_CCM, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_mv) + data_mv[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + del data_mv + + cipher4 = AES.new(key_mv, + AES.MODE_CCM, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(memoryview(ct_test), memoryview(tag_test)) + + self.assertEqual(self.data, pt_test) + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + tag = cipher.digest() + + output = bytearray(128) + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + res, tag_out = cipher.encrypt_and_digest(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + self.assertEqual(tag, tag_out) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + res = cipher.decrypt_and_verify(ct, tag, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + def test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + + pt = b'5' * 16 + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(15) + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +class CcmFSMTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 
16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 16) + + def test_valid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + for assoc_len in (None, 0): + for msg_len in (None, len(self.data)): + # Verify path INIT->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + # Verify path INIT->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + for assoc_len in (None, len(self.data)): + for msg_len in (None, 0): + # Verify path INIT->UPDATE->DIGEST + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + cipher.update(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + cipher.update(self.data) + cipher.verify(mac) + + def test_valid_full_path(self): + # Fixed authenticated data, fixed plaintext + for assoc_len in (None, len(self.data)): + for msg_len in (None, len(self.data)): + # Verify path INIT->UPDATE->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + cipher.update(self.data) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + cipher.update(self.data) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + mac = cipher.digest() + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_encrypt_or_decrypt(self): + # Only possible if msg_len is declared in advance + for method_name in "encrypt", "decrypt": + for auth_data in (None, b"333", self.data, + self.data + b"3"): + if auth_data is None: + assoc_len = None + else: + assoc_len = len(auth_data) + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + msg_len=64, + assoc_len=assoc_len) + if auth_data is not None: + cipher.update(auth_data) + method = getattr(cipher, method_name) + method(self.data) + method(self.data) + method(self.data) + method(self.data) + + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(self.data) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(self.data) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # encrypt_and_digest + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(self.data) + ct, mac = cipher.encrypt_and_digest(self.data) + + # decrypt_and_verify + 
cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(self.data) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data, pt) + + def test_invalid_multiple_encrypt_decrypt_without_msg_len(self): + # Once per method, with or without assoc. data + for method_name in "encrypt", "decrypt": + for assoc_data_present in (True, False): + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96) + if assoc_data_present: + cipher.update(self.data) + method = getattr(cipher, method_name) + method(self.data) + self.assertRaises(TypeError, method, self.data) + + def test_invalid_mixing_encrypt_decrypt(self): + # Once per method, with or without assoc. data + for method1_name, method2_name in (("encrypt", "decrypt"), + ("decrypt", "encrypt")): + for assoc_data_present in (True, False): + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + msg_len=32) + if assoc_data_present: + cipher.update(self.data) + getattr(cipher, method1_name)(self.data) + self.assertRaises(TypeError, getattr(cipher, method2_name), + self.data) + + def test_invalid_encrypt_or_update_after_digest(self): + for method_name in "encrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.encrypt(self.data) + cipher.digest() + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data) + + def test_invalid_decrypt_or_update_after_verify(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + for method_name in "decrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + +class TestVectors(unittest.TestCase): + """Class exercising the CCM test vectors found in Appendix C + of NIST SP 800-38C and in RFC 3610""" + + # List of test vectors, each made up of: + # - authenticated data + # - plaintext + # - ciphertext + # - MAC + # - AES key + # - nonce + test_vectors_hex = [ + # NIST SP 800 38C + ( '0001020304050607', + '20212223', + '7162015b', + '4dac255d', + '404142434445464748494a4b4c4d4e4f', + '10111213141516'), + ( '000102030405060708090a0b0c0d0e0f', + '202122232425262728292a2b2c2d2e2f', + 'd2a1f0e051ea5f62081a7792073d593d', + '1fc64fbfaccd', + '404142434445464748494a4b4c4d4e4f', + '1011121314151617'), + ( '000102030405060708090a0b0c0d0e0f10111213', + '202122232425262728292a2b2c2d2e2f3031323334353637', + 'e3b201a9f5b71a7a9b1ceaeccd97e70b6176aad9a4428aa5', + '484392fbc1b09951', + '404142434445464748494a4b4c4d4e4f', + '101112131415161718191a1b'), + ( (''.join(["%02X" % (x*16+y) for x in range(0,16) for y in range(0,16)]))*256, + '202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f', + '69915dad1e84c6376a68c2967e4dab615ae0fd1faec44cc484828529463ccf72', + 'b4ac6bec93e8598e7f0dadbcea5b', + '404142434445464748494a4b4c4d4e4f', + '101112131415161718191a1b1c'), + # RFC3610 + ( '0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e', + '588c979a61c663d2f066d0c2c0f989806d5f6b61dac384', + '17e8d12cfdf926e0', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000003020100a0a1a2a3a4a5'), + ( + 
'0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f', + '72c91a36e135f8cf291ca894085c87e3cc15c439c9e43a3b', + 'a091d56e10400916', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000004030201a0a1a2a3a4a5'), + ( '0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20', + '51b1e5f44a197d1da46b0f8e2d282ae871e838bb64da859657', + '4adaa76fbd9fb0c5', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000005040302A0A1A2A3A4A5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e', + 'a28c6865939a9a79faaa5c4c2a9d4a91cdac8c', + '96c861b9c9e61ef1', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000006050403a0a1a2a3a4a5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e1f', + 'dcf1fb7b5d9e23fb9d4e131253658ad86ebdca3e', + '51e83f077d9c2d93', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000007060504a0a1a2a3a4a5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e1f20', + '6fc1b011f006568b5171a42d953d469b2570a4bd87', + '405a0443ac91cb94', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000008070605a0a1a2a3a4a5'), + ( '0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e', + '0135d1b2c95f41d5d1d4fec185d166b8094e999dfed96c', + '048c56602c97acbb7490', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000009080706a0a1a2a3a4a5'), + ( '0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f', + '7b75399ac0831dd2f0bbd75879a2fd8f6cae6b6cd9b7db24', + 'c17b4433f434963f34b4', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '0000000a090807a0a1a2a3a4a5'), + ( '0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20', + '82531a60cc24945a4b8279181ab5c84df21ce7f9b73f42e197', + 'ea9c07e56b5eb17e5f4e', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '0000000b0a0908a0a1a2a3a4a5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e', + '07342594157785152b074098330abb141b947b', + '566aa9406b4d999988dd', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '0000000c0b0a09a0a1a2a3a4a5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e1f', + '676bb20380b0e301e8ab79590a396da78b834934', + 'f53aa2e9107a8b6c022c', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '0000000d0c0b0aa0a1a2a3a4a5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e1f20', + 'c0ffa0d6f05bdb67f24d43a4338d2aa4bed7b20e43', + 'cd1aa31662e7ad65d6db', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '0000000e0d0c0ba0a1a2a3a4a5'), + ( '0be1a88bace018b1', + '08e8cf97d820ea258460e96ad9cf5289054d895ceac47c', + '4cb97f86a2a4689a877947ab8091ef5386a6ffbdd080f8', + 'e78cf7cb0cddd7b3', + 'd7828d13b2b0bdc325a76236df93cc6b', + '00412b4ea9cdbe3c9696766cfa'), + ( '63018f76dc8a1bcb', + '9020ea6f91bdd85afa0039ba4baff9bfb79c7028949cd0ec', + '4ccb1e7ca981befaa0726c55d378061298c85c92814abc33', + 'c52ee81d7d77c08a', + 'd7828d13b2b0bdc325a76236df93cc6b', + '0033568ef7b2633c9696766cfa'), + ( 'aa6cfa36cae86b40', + 'b916e0eacc1c00d7dcec68ec0b3bbb1a02de8a2d1aa346132e', + 'b1d23a2220ddc0ac900d9aa03c61fcf4a559a4417767089708', + 'a776796edb723506', + 'd7828d13b2b0bdc325a76236df93cc6b', + '00103fe41336713c9696766cfa'), + ( 'd0d0735c531e1becf049c244', + '12daac5630efa5396f770ce1a66b21f7b2101c', + '14d253c3967b70609b7cbb7c49916028324526', + '9a6f49975bcadeaf', + 'd7828d13b2b0bdc325a76236df93cc6b', + '00764c63b8058e3c9696766cfa'), + ( '77b60f011c03e1525899bcae', + 'e88b6a46c78d63e52eb8c546efb5de6f75e9cc0d', + '5545ff1a085ee2efbf52b2e04bee1e2336c73e3f', + '762c0c7744fe7e3c', + 'd7828d13b2b0bdc325a76236df93cc6b', + 
'00f8b678094e3b3c9696766cfa'), + ( 'cd9044d2b71fdb8120ea60c0', + '6435acbafb11a82e2f071d7ca4a5ebd93a803ba87f', + '009769ecabdf48625594c59251e6035722675e04c8', + '47099e5ae0704551', + 'd7828d13b2b0bdc325a76236df93cc6b', + '00d560912d3f703c9696766cfa'), + ( 'd85bc7e69f944fb8', + '8a19b950bcf71a018e5e6701c91787659809d67dbedd18', + 'bc218daa947427b6db386a99ac1aef23ade0b52939cb6a', + '637cf9bec2408897c6ba', + 'd7828d13b2b0bdc325a76236df93cc6b', + '0042fff8f1951c3c9696766cfa'), + ( '74a0ebc9069f5b37', + '1761433c37c5a35fc1f39f406302eb907c6163be38c98437', + '5810e6fd25874022e80361a478e3e9cf484ab04f447efff6', + 'f0a477cc2fc9bf548944', + 'd7828d13b2b0bdc325a76236df93cc6b', + '00920f40e56cdc3c9696766cfa'), + ( '44a3aa3aae6475ca', + 'a434a8e58500c6e41530538862d686ea9e81301b5ae4226bfa', + 'f2beed7bc5098e83feb5b31608f8e29c38819a89c8e776f154', + '4d4151a4ed3a8b87b9ce', + 'd7828d13b2b0bdc325a76236df93cc6b', + '0027ca0c7120bc3c9696766cfa'), + ( 'ec46bb63b02520c33c49fd70', + 'b96b49e21d621741632875db7f6c9243d2d7c2', + '31d750a09da3ed7fddd49a2032aabf17ec8ebf', + '7d22c8088c666be5c197', + 'd7828d13b2b0bdc325a76236df93cc6b', + '005b8ccbcd9af83c9696766cfa'), + ( '47a65ac78b3d594227e85e71', + 'e2fcfbb880442c731bf95167c8ffd7895e337076', + 'e882f1dbd38ce3eda7c23f04dd65071eb41342ac', + 'df7e00dccec7ae52987d', + 'd7828d13b2b0bdc325a76236df93cc6b', + '003ebe94044b9a3c9696766cfa'), + ( '6e37a6ef546d955d34ab6059', + 'abf21c0b02feb88f856df4a37381bce3cc128517d4', + 'f32905b88a641b04b9c9ffb58cc390900f3da12ab1', + '6dce9e82efa16da62059', + 'd7828d13b2b0bdc325a76236df93cc6b', + '008d493b30ae8b3c9696766cfa'), + ] + + test_vectors = [[unhexlify(x) for x in tv] for tv in test_vectors_hex] + + def runTest(self): + for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: + # Encrypt + cipher = AES.new(key, AES.MODE_CCM, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + ct2, mac2 = cipher.encrypt_and_digest(pt) + self.assertEqual(ct, ct2) + self.assertEqual(mac, mac2) + + # Decrypt + cipher = AES.new(key, AES.MODE_CCM, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, **extra_params): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._extra_params = extra_params + self._id = "None" + + def setUp(self): + + def filter_tag(group): + return group['tagSize'] // 8 + + self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + "aes_ccm_test.json", + "Wycheproof AES CCM", + group_tag={'tag_size': filter_tag}) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt CCM Test #" + str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_CCM, tv.iv, mac_len=tv.tag_size, + **self._extra_params) + except ValueError as e: + if len(tv.iv) not in range(7, 13 + 1, 2) and "Length of parameter 'nonce'" in str(e): + assert not tv.valid + return + if tv.tag_size not in range(4, 16 + 1, 2) and "Parameter 'mac_len'" in str(e): + assert not tv.valid + return + raise e + + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(ct, tv.ct) + self.assertEqual(tag, tv.tag) + self.warn(tv) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt CCM Test #" + 
str(tv.id)
+
+        try:
+            cipher = AES.new(tv.key, AES.MODE_CCM, tv.iv, mac_len=tv.tag_size,
+                             **self._extra_params)
+        except ValueError as e:
+            if len(tv.iv) not in range(7, 13 + 1, 2) and "Length of parameter 'nonce'" in str(e):
+                assert not tv.valid
+                return
+            if tv.tag_size not in range(4, 16 + 1, 2) and "Parameter 'mac_len'" in str(e):
+                assert not tv.valid
+                return
+            raise e
+
+        cipher.update(tv.aad)
+        try:
+            pt = cipher.decrypt_and_verify(tv.ct, tv.tag)
+        except ValueError:
+            assert not tv.valid
+        else:
+            assert tv.valid
+            self.assertEqual(pt, tv.msg)
+            self.warn(tv)
+
+    def test_corrupt_decrypt(self, tv):
+        self._id = "Wycheproof Corrupt Decrypt CCM Test #" + str(tv.id)
+        if len(tv.iv) not in range(7, 13 + 1, 2) or len(tv.ct) == 0:
+            return
+        cipher = AES.new(tv.key, AES.MODE_CCM, tv.iv, mac_len=tv.tag_size,
+                         **self._extra_params)
+        cipher.update(tv.aad)
+        ct_corrupt = strxor(tv.ct, b"\x00" * (len(tv.ct) - 1) + b"\x01")
+        self.assertRaises(ValueError, cipher.decrypt_and_verify, ct_corrupt, tv.tag)
+
+    def runTest(self):
+
+        for tv in self.tv:
+            self.test_encrypt(tv)
+            self.test_decrypt(tv)
+            self.test_corrupt_decrypt(tv)
+
+
+def get_tests(config={}):
+    wycheproof_warnings = config.get('wycheproof_warnings')
+
+    tests = []
+    tests += list_test_cases(CcmTests)
+    tests += list_test_cases(CcmFSMTests)
+    tests += [TestVectors()]
+    tests += [TestVectorsWycheproof(wycheproof_warnings)]
+
+    return tests
+
+
+if __name__ == '__main__':
+    def suite():
+        # A bare expression would build the suite and discard it; return it
+        # so unittest.main(defaultTest='suite') actually receives a suite.
+        return unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CFB.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CFB.py
new file mode 100644
index 00000000..cb0c3529
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CFB.py
@@ -0,0 +1,411 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
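
Editor's note (not part of the vendored file): the CCM suite that concludes above encodes the mode's parameter envelope: nonces of 7 to 13 bytes, an even `mac_len` between 4 and 16, and a random 11-byte nonce when none is supplied. A minimal round trip under those constraints, assuming PyCryptodome is installed:

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
nonce = get_random_bytes(13)   # any length from 7 to 13 bytes is accepted

enc = AES.new(key, AES.MODE_CCM, nonce=nonce, mac_len=8)  # even, 4..16
enc.update(b"header")          # associated data: authenticated, not encrypted
ct, tag = enc.encrypt_and_digest(b"attack at dawn")

dec = AES.new(key, AES.MODE_CCM, nonce=nonce, mac_len=8)
dec.update(b"header")
pt = dec.decrypt_and_verify(ct, tag)  # raises ValueError if the tag is wrong
assert pt == b"attack at dawn"
```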
+# =================================================================== + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.loader import load_test_vectors +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Util.py3compat import tobytes, is_string +from Crypto.Cipher import AES, DES3, DES +from Crypto.Hash import SHAKE128 + +from Crypto.SelfTest.Cipher.test_CBC import BlockChainingTests + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class CfbTests(BlockChainingTests): + + aes_mode = AES.MODE_CFB + des3_mode = DES3.MODE_CFB + + # Redefine test_unaligned_data_128/64 + + def test_unaligned_data_128(self): + plaintexts = [ b"7777777" ] * 100 + + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=8) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=8) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=128) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=128) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_unaligned_data_64(self): + plaintexts = [ b"7777777" ] * 100 + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=8) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=8) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=64) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=64) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + # Extra + + def test_segment_size_128(self): + for bits in range(8, 129, 8): + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, + segment_size=bits) + + for bits in 0, 7, 9, 127, 129: + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CFB, + self.iv_128, + segment_size=bits) + + def test_segment_size_64(self): + for bits in range(8, 65, 8): + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, + segment_size=bits) + + for bits in 0, 7, 9, 63, 65: + self.assertRaises(ValueError, DES3.new, self.key_192, AES.MODE_CFB, + self.iv_64, + segment_size=bits) + + +class NistCfbVectors(unittest.TestCase): + + def _do_kat_aes_test(self, file_name, segment_size): + + test_vectors = load_test_vectors(("Cipher", "AES"), + file_name, + "AES CFB%d KAT" % segment_size, + { "count" : lambda x: int(x) } ) + if test_vectors is None: + return + + direction = None + for tv in test_vectors: + + # The test vector file contains some directive lines + if is_string(tv): + direction = tv + continue + + self.description = tv.desc + cipher = AES.new(tv.key, AES.MODE_CFB, tv.iv, + segment_size=segment_size) + if direction == "[ENCRYPT]": + self.assertEqual(cipher.encrypt(tv.plaintext), tv.ciphertext) + elif direction == "[DECRYPT]": + self.assertEqual(cipher.decrypt(tv.ciphertext), tv.plaintext) + else: + assert False + + # See Section 6.4.5 in AESAVS + def _do_mct_aes_test(self, file_name, segment_size): + + test_vectors = load_test_vectors(("Cipher", "AES"), + file_name, + "AES CFB%d Montecarlo" % segment_size, + { "count" : lambda x: 
int(x) } )
+        if test_vectors is None:
+            return
+
+        assert(segment_size in (8, 128))
+
+        direction = None
+        for tv in test_vectors:
+
+            # The test vector file contains some directive lines
+            if is_string(tv):
+                direction = tv
+                continue
+
+            self.description = tv.desc
+            cipher = AES.new(tv.key, AES.MODE_CFB, tv.iv,
+                             segment_size=segment_size)
+
+            def get_input(input_text, output_seq, j):
+                # CFB128
+                if segment_size == 128:
+                    if j >= 2:
+                        return output_seq[-2]
+                    return [input_text, tv.iv][j]
+                # CFB8
+                if j == 0:
+                    return input_text
+                elif j <= 16:
+                    return tv.iv[j - 1:j]
+                return output_seq[j - 17]
+
+            if direction == '[ENCRYPT]':
+                cts = []
+                for j in range(1000):
+                    plaintext = get_input(tv.plaintext, cts, j)
+                    cts.append(cipher.encrypt(plaintext))
+                self.assertEqual(cts[-1], tv.ciphertext)
+            elif direction == '[DECRYPT]':
+                pts = []
+                for j in range(1000):
+                    ciphertext = get_input(tv.ciphertext, pts, j)
+                    pts.append(cipher.decrypt(ciphertext))
+                self.assertEqual(pts[-1], tv.plaintext)
+            else:
+                assert False
+
+    def _do_tdes_test(self, file_name, segment_size):
+
+        test_vectors = load_test_vectors(("Cipher", "TDES"),
+                                         file_name,
+                                         "TDES CFB%d KAT" % segment_size,
+                                         { "count" : lambda x: int(x) } )
+        if test_vectors is None:
+            return
+
+        direction = None
+        for tv in test_vectors:
+
+            # The test vector file contains some directive lines
+            if is_string(tv):
+                direction = tv
+                continue
+
+            self.description = tv.desc
+            if hasattr(tv, "keys"):
+                cipher = DES.new(tv.keys, DES.MODE_CFB, tv.iv,
+                                 segment_size=segment_size)
+            else:
+                if tv.key1 != tv.key3:
+                    key = tv.key1 + tv.key2 + tv.key3  # Option 3
+                else:
+                    key = tv.key1 + tv.key2  # Option 2
+                cipher = DES3.new(key, DES3.MODE_CFB, tv.iv,
+                                  segment_size=segment_size)
+            if direction == "[ENCRYPT]":
+                self.assertEqual(cipher.encrypt(tv.plaintext), tv.ciphertext)
+            elif direction == "[DECRYPT]":
+                self.assertEqual(cipher.decrypt(tv.ciphertext), tv.plaintext)
+            else:
+                assert False
+
+
+# Create one test method per file
+nist_aes_kat_mmt_files = (
+    # KAT
+    "CFB?GFSbox128.rsp",
+    "CFB?GFSbox192.rsp",
+    "CFB?GFSbox256.rsp",
+    "CFB?KeySbox128.rsp",
+    "CFB?KeySbox192.rsp",
+    "CFB?KeySbox256.rsp",
+    "CFB?VarKey128.rsp",
+    "CFB?VarKey192.rsp",
+    "CFB?VarKey256.rsp",
+    "CFB?VarTxt128.rsp",
+    "CFB?VarTxt192.rsp",
+    "CFB?VarTxt256.rsp",
+    # MMT
+    "CFB?MMT128.rsp",
+    "CFB?MMT192.rsp",
+    "CFB?MMT256.rsp",
+    )
+nist_aes_mct_files = (
+    "CFB?MCT128.rsp",
+    "CFB?MCT192.rsp",
+    "CFB?MCT256.rsp",
+    )
+
+for file_gen_name in nist_aes_kat_mmt_files:
+    for bits in "8", "128":
+        file_name = file_gen_name.replace("?", bits)
+        def new_func(self, file_name=file_name, bits=bits):
+            self._do_kat_aes_test(file_name, int(bits))
+        setattr(NistCfbVectors, "test_AES_" + file_name, new_func)
+
+for file_gen_name in nist_aes_mct_files:
+    for bits in "8", "128":
+        file_name = file_gen_name.replace("?", bits)
+        def new_func(self, file_name=file_name, bits=bits):
+            self._do_mct_aes_test(file_name, int(bits))
+        setattr(NistCfbVectors, "test_AES_" + file_name, new_func)
+del file_name, new_func
+
+nist_tdes_files = (
+    "TCFB?MMT2.rsp",  # 2TDES
+    "TCFB?MMT3.rsp",  # 3TDES
+    "TCFB?invperm.rsp",  # Single DES
+    "TCFB?permop.rsp",
+    "TCFB?subtab.rsp",
+    "TCFB?varkey.rsp",
+    "TCFB?vartext.rsp",
+    )
+
+for file_gen_name in nist_tdes_files:
+    for bits in "8", "64":
+        file_name = file_gen_name.replace("?", bits)
+        def new_func(self, file_name=file_name, bits=bits):
+            self._do_tdes_test(file_name, int(bits))
+        setattr(NistCfbVectors, "test_TDES_" + file_name, new_func)
+
+# END OF NIST CFB 
TEST VECTORS + + +class SP800TestVectors(unittest.TestCase): + """Class exercising the CFB test vectors found in Section F.3 + of NIST SP 800-3A""" + + def test_aes_128_cfb8(self): + plaintext = '6bc1bee22e409f96e93d7e117393172aae2d' + ciphertext = '3b79424c9c0dd436bace9e0ed4586a4f32b9' + key = '2b7e151628aed2a6abf7158809cf4f3c' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_192_cfb8(self): + plaintext = '6bc1bee22e409f96e93d7e117393172aae2d' + ciphertext = 'cda2521ef0a905ca44cd057cbf0d47a0678a' + key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_256_cfb8(self): + plaintext = '6bc1bee22e409f96e93d7e117393172aae2d' + ciphertext = 'dc1f1a8520a64db55fcc8ac554844e889700' + key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_128_cfb128(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '3b3fd92eb72dad20333449f8e83cfb4a' +\ + 'c8a64537a0b3a93fcde3cdad9f1ce58b' +\ + '26751f67a3cbb140b1808cf187a4f4df' +\ + 'c04b05357c5d1c0eeac4c66f9ff7f2e6' + key = '2b7e151628aed2a6abf7158809cf4f3c' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_192_cfb128(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = 'cdc80d6fddf18cab34c25909c99a4174' +\ + '67ce7f7f81173621961a2b70171d3d7a' +\ + '2e1e8a1dd59b88b1c8e60fed1efac4c9' +\ + 'c05f9f9ca9834fa042ae8fba584b09ff' + key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + 
self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_256_cfb128(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + + ciphertext = 'dc7e84bfda79164b7ecd8486985d3860' +\ + '39ffed143b28b1c832113c6331e5407b' +\ + 'df10132415e54b92a13ed0a8267ae2f9' +\ + '75a385741ab9cef82031623d55b1e471' + key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(CfbTests) + if config.get('slow_tests'): + tests += list_test_cases(NistCfbVectors) + tests += list_test_cases(SP800TestVectors) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CTR.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CTR.py new file mode 100644 index 00000000..6fc43ef7 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_CTR.py @@ -0,0 +1,472 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
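
Editor's note (not part of the vendored file): the CFB suite that concludes above hinges on `segment_size`: it must be a multiple of 8 bits, up to the cipher's block size, and CFB8 versus CFB128 are genuinely different ciphers even for the same key and IV (only the first byte of keystream coincides). A small sketch of both behaviours, assuming PyCryptodome is installed:

```python
from Crypto.Cipher import AES

key = b"K" * 16
iv = b"I" * 16
pt = b"CFB needs no padding"   # stream-like: any plaintext length is fine

ct8 = AES.new(key, AES.MODE_CFB, iv).encrypt(pt)  # segment_size=8 is the default
ct128 = AES.new(key, AES.MODE_CFB, iv, segment_size=128).encrypt(pt)
assert ct8 != ct128            # beyond the first byte the keystreams diverge

try:
    AES.new(key, AES.MODE_CFB, iv, segment_size=9)  # not a multiple of 8
    assert False, "expected ValueError"
except ValueError:
    pass
```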
+# =================================================================== + +import unittest +from binascii import hexlify, unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Util.py3compat import tobytes, bchr +from Crypto.Cipher import AES, DES3 +from Crypto.Hash import SHAKE128, SHA256 +from Crypto.Util import Counter + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + +class CtrTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + key_192 = get_tag_random("key_192", 24) + nonce_32 = get_tag_random("nonce_32", 4) + nonce_64 = get_tag_random("nonce_64", 8) + ctr_64 = Counter.new(32, prefix=nonce_32) + ctr_128 = Counter.new(64, prefix=nonce_64) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_loopback_64(self): + cipher = DES3.new(self.key_192, DES3.MODE_CTR, counter=self.ctr_64) + pt = get_tag_random("plaintext", 8 * 100) + ct = cipher.encrypt(pt) + + cipher = DES3.new(self.key_192, DES3.MODE_CTR, counter=self.ctr_64) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_invalid_counter_parameter(self): + # Counter object is required for ciphers with short block size + self.assertRaises(TypeError, DES3.new, self.key_192, AES.MODE_CTR) + # Positional arguments are not allowed (Counter must be passed as + # keyword) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CTR, self.ctr_128) + + def test_nonce_attribute(self): + # Nonce attribute is the prefix passed to Counter (DES3) + cipher = DES3.new(self.key_192, DES3.MODE_CTR, counter=self.ctr_64) + self.assertEqual(cipher.nonce, self.nonce_32) + + # Nonce attribute is the prefix passed to Counter (AES) + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + self.assertEqual(cipher.nonce, self.nonce_64) + + # Nonce attribute is not defined if suffix is used in Counter + counter = Counter.new(64, prefix=self.nonce_32, suffix=self.nonce_32) + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + self.assertFalse(hasattr(cipher, "nonce")) + + def test_nonce_parameter(self): + # Nonce parameter becomes nonce attribute + cipher1 = AES.new(self.key_128, AES.MODE_CTR, nonce=self.nonce_64) + self.assertEqual(cipher1.nonce, self.nonce_64) + + counter = Counter.new(64, prefix=self.nonce_64, initial_value=0) + cipher2 = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + pt = get_tag_random("plaintext", 65536) + self.assertEqual(cipher1.encrypt(pt), cipher2.encrypt(pt)) + + # Nonce is implicitly created (for AES) when no parameters are passed + nonce1 = AES.new(self.key_128, AES.MODE_CTR).nonce + nonce2 = AES.new(self.key_128, AES.MODE_CTR).nonce + self.assertNotEqual(nonce1, nonce2) + self.assertEqual(len(nonce1), 8) + + # Nonce can be zero-length + cipher = AES.new(self.key_128, AES.MODE_CTR, nonce=b"") + self.assertEqual(b"", cipher.nonce) + cipher.encrypt(b'0'*300) + + # Nonce and Counter are mutually exclusive + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CTR, + counter=self.ctr_128, nonce=self.nonce_64) + + def test_initial_value_parameter(self): + # Test with nonce parameter + cipher1 = AES.new(self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, initial_value=0xFFFF) + counter = 
Counter.new(64, prefix=self.nonce_64, initial_value=0xFFFF) + cipher2 = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + pt = get_tag_random("plaintext", 65536) + self.assertEqual(cipher1.encrypt(pt), cipher2.encrypt(pt)) + + # Test without nonce parameter + cipher1 = AES.new(self.key_128, AES.MODE_CTR, + initial_value=0xFFFF) + counter = Counter.new(64, prefix=cipher1.nonce, initial_value=0xFFFF) + cipher2 = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + pt = get_tag_random("plaintext", 65536) + self.assertEqual(cipher1.encrypt(pt), cipher2.encrypt(pt)) + + # Initial_value and Counter are mutually exclusive + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CTR, + counter=self.ctr_128, initial_value=0) + + def test_initial_value_bytes_parameter(self): + # Same result as when passing an integer + cipher1 = AES.new(self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, + initial_value=b"\x00"*6+b"\xFF\xFF") + cipher2 = AES.new(self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, initial_value=0xFFFF) + pt = get_tag_random("plaintext", 65536) + self.assertEqual(cipher1.encrypt(pt), cipher2.encrypt(pt)) + + # Fail if the iv is too large + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + initial_value=b"5"*17) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, initial_value=b"5"*9) + + # Fail if the iv is too short + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + initial_value=b"5"*15) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, initial_value=b"5"*7) + + def test_iv_with_matching_length(self): + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + counter=Counter.new(120)) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + counter=Counter.new(136)) + + def test_block_size_128(self): + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_block_size_64(self): + cipher = DES3.new(self.key_192, DES3.MODE_CTR, counter=self.ctr_64) + self.assertEqual(cipher.block_size, DES3.block_size) + + def test_unaligned_data_128(self): + plaintexts = [ b"7777777" ] * 100 + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_unaligned_data_64(self): + plaintexts = [ b"7777777" ] * 100 + cipher = DES3.new(self.key_192, AES.MODE_CTR, counter=self.ctr_64) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, AES.MODE_CTR, counter=self.ctr_64) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + cipher = DES3.new(self.key_192, AES.MODE_CTR, counter=self.ctr_64) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, AES.MODE_CTR, counter=self.ctr_64) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CTR, + 7, 
counter=self.ctr_128) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CTR, + counter=self.ctr_128, unknown=7) + # But some are only known by the base cipher (e.g. use_aesni consumed by the AES module) + AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128, use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_wrap_around(self): + # Counter is only 8 bits, so we can only encrypt/decrypt 256 blocks (=4096 bytes) + counter = Counter.new(8, prefix=bchr(9) * 15) + max_bytes = 4096 + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + cipher.encrypt(b'9' * max_bytes) + self.assertRaises(OverflowError, cipher.encrypt, b'9') + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + self.assertRaises(OverflowError, cipher.encrypt, b'9' * (max_bytes + 1)) + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + cipher.decrypt(b'9' * max_bytes) + self.assertRaises(OverflowError, cipher.decrypt, b'9') + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + self.assertRaises(OverflowError, cipher.decrypt, b'9' * (max_bytes + 1)) + + def test_bytearray(self): + data = b"1" * 16 + iv = b"\x00" * 6 + b"\xFF\xFF" + + # Encrypt + cipher1 = AES.new(self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, + initial_value=iv) + ref1 = cipher1.encrypt(data) + + cipher2 = AES.new(self.key_128, AES.MODE_CTR, + nonce=bytearray(self.nonce_64), + initial_value=bytearray(iv)) + ref2 = cipher2.encrypt(bytearray(data)) + + self.assertEqual(ref1, ref2) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + cipher3 = AES.new(self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, + initial_value=iv) + ref3 = cipher3.decrypt(data) + + cipher4 = AES.new(self.key_128, AES.MODE_CTR, + nonce=bytearray(self.nonce_64), + initial_value=bytearray(iv)) + ref4 = cipher4.decrypt(bytearray(data)) + + self.assertEqual(ref3, ref4) + + def test_very_long_data(self): + cipher = AES.new(b'A' * 32, AES.MODE_CTR, nonce=b'') + ct = cipher.encrypt(b'B' * 1000000) + digest = SHA256.new(ct).hexdigest() + self.assertEqual(digest, "96204fc470476561a3a8f3b6fe6d24be85c87510b638142d1d0fb90989f8a6a6") + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + ct = cipher.encrypt(pt) + + output = bytearray(128) + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + def test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + ct = cipher.encrypt(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(b'4'*16, AES.MODE_CTR, 
nonce=self.nonce_64) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + LEN_PT = 128 + + pt = b'5' * LEN_PT + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + ct = cipher.encrypt(pt) + + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0' * LEN_PT) + + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0' * LEN_PT) + + shorter_output = bytearray(LEN_PT - 1) + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +class SP800TestVectors(unittest.TestCase): + """Class exercising the CTR test vectors found in Section F.5 + of NIST SP 800-38A""" + + def test_aes_128(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '874d6191b620e3261bef6864990db6ce' +\ + '9806f66b7970fdff8617187bb9fffdff' +\ + '5ae4df3edbd5d35e5b4f09020db03eab' +\ + '1e031dda2fbe03d1792170a0f3009cee' + key = '2b7e151628aed2a6abf7158809cf4f3c' + counter = Counter.new(nbits=16, + prefix=unhexlify('f0f1f2f3f4f5f6f7f8f9fafbfcfd'), + initial_value=0xfeff) + + key = unhexlify(key) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_192(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '1abc932417521ca24f2b0459fe7e6e0b' +\ + '090339ec0aa6faefd5ccc2c6f4ce8e94' +\ + '1e36b26bd1ebc670d1bd1d665620abf7' +\ + '4f78a7f6d29809585a97daec58c6b050' + key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b' + counter = Counter.new(nbits=16, + prefix=unhexlify('f0f1f2f3f4f5f6f7f8f9fafbfcfd'), + initial_value=0xfeff) + + key = unhexlify(key) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_256(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '601ec313775789a5b7a7f504bbf3d228' +\ + 'f443e3ca4d62b59aca84e990cacaf5c5' +\ + '2b0930daa23de94ce87017ba2d84988d' +\ + 'dfc9c58db67aada613c2dd08457941a6' + key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4' + counter = Counter.new(nbits=16, + prefix=unhexlify('f0f1f2f3f4f5f6f7f8f9fafbfcfd'), + initial_value=0xfeff) + key = unhexlify(key) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.decrypt(ciphertext), 
plaintext)
+
+
+class RFC3686TestVectors(unittest.TestCase):
+
+    # Each item is a test vector with:
+    # - plaintext
+    # - ciphertext
+    # - key (AES 128, 192 or 256 bits)
+    # - counter prefix (4 byte nonce + 8 byte IV)
+    data = (
+            ('53696e676c6520626c6f636b206d7367',
+             'e4095d4fb7a7b3792d6175a3261311b8',
+             'ae6852f8121067cc4bf7a5765577f39e',
+             '000000300000000000000000'),
+            ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f',
+             '5104a106168a72d9790d41ee8edad388eb2e1efc46da57c8fce630df9141be28',
+             '7e24067817fae0d743d6ce1f32539163',
+             '006cb6dbc0543b59da48d90b'),
+            ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20212223',
+             'c1cf48a89f2ffdd9cf4652e9efdb72d74540a42bde6d7836d59a5ceaaef3105325b2072f',
+             '7691be035e5020a8ac6e618529f9a0dc',
+             '00e0017b27777f3f4a1786f0'),
+            ('53696e676c6520626c6f636b206d7367',
+             '4b55384fe259c9c84e7935a003cbe928',
+             '16af5b145fc9f579c175f93e3bfb0eed863d06ccfdb78515',
+             '0000004836733c147d6d93cb'),
+            ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f',
+             '453243fc609b23327edfaafa7131cd9f8490701c5ad4a79cfc1fe0ff42f4fb00',
+             '7c5cb2401b3dc33c19e7340819e0f69c678c3db8e6f6a91a',
+             '0096b03b020c6eadc2cb500d'),
+            ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20212223',
+             '96893fc55e5c722f540b7dd1ddf7e758d288bc95c69165884536c811662f2188abee0935',
+             '02bf391ee8ecb159b959617b0965279bf59b60a786d3e0fe',
+             '0007bdfd5cbd60278dcc0912'),
+            ('53696e676c6520626c6f636b206d7367',
+             '145ad01dbf824ec7560863dc71e3e0c0',
+             '776beff2851db06f4c8a0542c8696f6c6a81af1eec96b4d37fc1d689e6c1c104',
+             '00000060db5672c97aa8f0b2'),
+            ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f',
+             'f05e231b3894612c49ee000b804eb2a9b8306b508f839d6a5530831d9344af1c',
+             'f6d66d6bd52d59bb0796365879eff886c66dd51a5b6a99744b50590c87a23884',
+             '00faac24c1585ef15a43d875'),
+            ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20212223',
+             'eb6c52821d0bbbf7ce7594462aca4faab407df866569fd07f48cc0b583d6071f1ec0e6b8',
+             'ff7a617ce69148e4f1726e2f43581de2aa62d9f805532edff1eed687fb54153d',
+             '001cc5b751a51d70a1c11148')
+    )
+
+    bindata = []
+    for tv in data:
+        bindata.append([unhexlify(x) for x in tv])
+
+    def runTest(self):
+        for pt, ct, key, prefix in self.bindata:
+            counter = Counter.new(32, prefix=prefix)
+            cipher = AES.new(key, AES.MODE_CTR, counter=counter)
+            result = cipher.encrypt(pt)
+            self.assertEqual(hexlify(ct), hexlify(result))
+
+
+def get_tests(config={}):
+    tests = []
+    tests += list_test_cases(CtrTests)
+    tests += list_test_cases(SP800TestVectors)
+    tests += [ RFC3686TestVectors() ]
+    return tests
+
+
+if __name__ == '__main__':
+    suite = lambda: unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ChaCha20.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ChaCha20.py
new file mode 100644
index 00000000..4396ac28
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ChaCha20.py
@@ -0,0 +1,529 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+import os
+import re
+import unittest
+from binascii import hexlify, unhexlify
+
+from Crypto.Util.py3compat import b, tobytes, bchr
+from Crypto.Util.strxor import strxor_c
+from Crypto.SelfTest.st_common import list_test_cases
+
+from Crypto.Cipher import ChaCha20
+
+
+class ChaCha20Test(unittest.TestCase):
+
+    def test_new_positive(self):
+        cipher = ChaCha20.new(key=b("0")*32, nonce=b"0"*8)
+        self.assertEqual(cipher.nonce, b"0" * 8)
+        cipher = ChaCha20.new(key=b("0")*32, nonce=b"0"*12)
+        self.assertEqual(cipher.nonce, b"0" * 12)
+
+    def test_new_negative(self):
+        new = ChaCha20.new
+        self.assertRaises(TypeError, new)
+        self.assertRaises(TypeError, new, nonce=b("0"))
+        self.assertRaises(ValueError, new, nonce=b("0")*8, key=b("0"))
+        self.assertRaises(ValueError, new, nonce=b("0"), key=b("0")*32)
+
+    def test_default_nonce(self):
+        cipher1 = ChaCha20.new(key=bchr(1) * 32)
+        cipher2 = ChaCha20.new(key=bchr(1) * 32)
+        self.assertEqual(len(cipher1.nonce), 8)
+        self.assertNotEqual(cipher1.nonce, cipher2.nonce)
+
+    def test_nonce(self):
+        key = b'A' * 32
+
+        nonce1 = b'P' * 8
+        cipher1 = ChaCha20.new(key=key, nonce=nonce1)
+        self.assertEqual(nonce1, cipher1.nonce)
+
+        nonce2 = b'Q' * 12
+        cipher2 = ChaCha20.new(key=key, nonce=nonce2)
+        self.assertEqual(nonce2, cipher2.nonce)
+
+    def test_either_encrypt_or_decrypt(self):
+        """Verify that a cipher cannot be used for both decrypting and encrypting"""
+
+        c1 = ChaCha20.new(key=b("5") * 32, nonce=b("6") * 8)
+        c1.encrypt(b("8"))
+        self.assertRaises(TypeError, c1.decrypt, b("9"))
+
+        c2 = ChaCha20.new(key=b("5") * 32, nonce=b("6") * 8)
+        c2.decrypt(b("8"))
+        self.assertRaises(TypeError, c2.encrypt, b("9"))
+
+    def test_round_trip(self):
+        pt = b("A") * 1024
+        c1 = ChaCha20.new(key=b("5") * 32, nonce=b("6") * 8)
+        c2 = ChaCha20.new(key=b("5") * 32, nonce=b("6") * 8)
+        ct = c1.encrypt(pt)
+        self.assertEqual(c2.decrypt(ct), pt)
+
+        self.assertEqual(c1.encrypt(b("")), b(""))
+        self.assertEqual(c2.decrypt(b("")), b(""))
+
+    def test_streaming(self):
+        """Verify that an arbitrary number of bytes can be encrypted/decrypted"""
+        from Crypto.Hash import SHA1
+
+        segments = (1, 3, 5, 7, 11, 17, 23)
+        total = sum(segments)
+
+        pt = b("")
+        while len(pt) < total:
+            pt += SHA1.new(pt).digest()
+
+        cipher1 = ChaCha20.new(key=b("7") * 32, nonce=b("t") * 8)
+        ct = cipher1.encrypt(pt)
+
+        cipher2 = ChaCha20.new(key=b("7") * 32, nonce=b("t") * 8)
+        cipher3 = ChaCha20.new(key=b("7") * 32, nonce=b("t") * 8)
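+        # Editor's note (illustrative, not part of the upstream suite): a
+        # ChaCha20 object keeps its keystream offset between calls, so for a
+        # fresh cipher c and any split of a message into parts,
+        #     b"".join(c.encrypt(p) for p in parts)
+        # must equal a single encrypt() call over the whole message. The loop
+        # below checks this invariant for several co-prime segment lengths.
+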
idx = 0 + for segment in segments: + self.assertEqual(cipher2.decrypt(ct[idx:idx+segment]), pt[idx:idx+segment]) + self.assertEqual(cipher3.encrypt(pt[idx:idx+segment]), ct[idx:idx+segment]) + idx += segment + + def test_seek(self): + cipher1 = ChaCha20.new(key=b("9") * 32, nonce=b("e") * 8) + + offset = 64 * 900 + 7 + pt = b("1") * 64 + + cipher1.encrypt(b("0") * offset) + ct1 = cipher1.encrypt(pt) + + cipher2 = ChaCha20.new(key=b("9") * 32, nonce=b("e") * 8) + cipher2.seek(offset) + ct2 = cipher2.encrypt(pt) + + self.assertEqual(ct1, ct2) + + def test_seek_tv(self): + # Test Vector #4, A.1 from + # http://tools.ietf.org/html/draft-nir-cfrg-chacha20-poly1305-04 + key = bchr(0) + bchr(255) + bchr(0) * 30 + nonce = bchr(0) * 8 + cipher = ChaCha20.new(key=key, nonce=nonce) + cipher.seek(64 * 2) + expected_key_stream = unhexlify(b( + "72d54dfbf12ec44b362692df94137f32" + "8fea8da73990265ec1bbbea1ae9af0ca" + "13b25aa26cb4a648cb9b9d1be65b2c09" + "24a66c54d545ec1b7374f4872e99f096" + )) + ct = cipher.encrypt(bchr(0) * len(expected_key_stream)) + self.assertEqual(expected_key_stream, ct) + + def test_rfc7539(self): + # from https://tools.ietf.org/html/rfc7539 Annex A.1 + # Each item is: key, nonce, block #, plaintext, ciphertext + tvs = [ + # Test Vector #1 + ( + "00"*32, + "00"*12, + 0, + "00"*16*4, + "76b8e0ada0f13d90405d6ae55386bd28" + "bdd219b8a08ded1aa836efcc8b770dc7" + "da41597c5157488d7724e03fb8d84a37" + "6a43b8f41518a11cc387b669b2ee6586" + ), + # Test Vector #2 + ( + "00"*31 + "01", + "00"*11 + "02", + 1, + "416e79207375626d697373696f6e2074" + "6f20746865204945544620696e74656e" + "6465642062792074686520436f6e7472" + "696275746f7220666f72207075626c69" + "636174696f6e20617320616c6c206f72" + "2070617274206f6620616e2049455446" + "20496e7465726e65742d447261667420" + "6f722052464320616e6420616e792073" + "746174656d656e74206d616465207769" + "7468696e2074686520636f6e74657874" + "206f6620616e20494554462061637469" + "7669747920697320636f6e7369646572" + "656420616e20224945544620436f6e74" + "7269627574696f6e222e205375636820" + "73746174656d656e747320696e636c75" + "6465206f72616c2073746174656d656e" + "747320696e2049455446207365737369" + "6f6e732c2061732077656c6c20617320" + "7772697474656e20616e6420656c6563" + "74726f6e696320636f6d6d756e696361" + "74696f6e73206d61646520617420616e" + "792074696d65206f7220706c6163652c" + "20776869636820617265206164647265" + "7373656420746f", + "a3fbf07df3fa2fde4f376ca23e827370" + "41605d9f4f4f57bd8cff2c1d4b7955ec" + "2a97948bd3722915c8f3d337f7d37005" + "0e9e96d647b7c39f56e031ca5eb6250d" + "4042e02785ececfa4b4bb5e8ead0440e" + "20b6e8db09d881a7c6132f420e527950" + "42bdfa7773d8a9051447b3291ce1411c" + "680465552aa6c405b7764d5e87bea85a" + "d00f8449ed8f72d0d662ab052691ca66" + "424bc86d2df80ea41f43abf937d3259d" + "c4b2d0dfb48a6c9139ddd7f76966e928" + "e635553ba76c5c879d7b35d49eb2e62b" + "0871cdac638939e25e8a1e0ef9d5280f" + "a8ca328b351c3c765989cbcf3daa8b6c" + "cc3aaf9f3979c92b3720fc88dc95ed84" + "a1be059c6499b9fda236e7e818b04b0b" + "c39c1e876b193bfe5569753f88128cc0" + "8aaa9b63d1a16f80ef2554d7189c411f" + "5869ca52c5b83fa36ff216b9c1d30062" + "bebcfd2dc5bce0911934fda79a86f6e6" + "98ced759c3ff9b6477338f3da4f9cd85" + "14ea9982ccafb341b2384dd902f3d1ab" + "7ac61dd29c6f21ba5b862f3730e37cfd" + "c4fd806c22f221" + ), + # Test Vector #3 + ( + "1c9240a5eb55d38af333888604f6b5f0" + "473917c1402b80099dca5cbc207075c0", + "00"*11 + "02", + 42, + "2754776173206272696c6c69672c2061" + "6e642074686520736c6974687920746f" + "7665730a446964206779726520616e64" + "2067696d626c6520696e207468652077" + 
"6162653a0a416c6c206d696d73792077" + "6572652074686520626f726f676f7665" + "732c0a416e6420746865206d6f6d6520" + "7261746873206f757467726162652e", + "62e6347f95ed87a45ffae7426f27a1df" + "5fb69110044c0d73118effa95b01e5cf" + "166d3df2d721caf9b21e5fb14c616871" + "fd84c54f9d65b283196c7fe4f60553eb" + "f39c6402c42234e32a356b3e764312a6" + "1a5532055716ead6962568f87d3f3f77" + "04c6a8d1bcd1bf4d50d6154b6da731b1" + "87b58dfd728afa36757a797ac188d1" + ) + ] + + for tv in tvs: + key = unhexlify(tv[0]) + nonce = unhexlify(tv[1]) + offset = tv[2] * 64 + pt = unhexlify(tv[3]) + ct_expect = unhexlify(tv[4]) + + cipher = ChaCha20.new(key=key, nonce=nonce) + if offset != 0: + cipher.seek(offset) + ct = cipher.encrypt(pt) + assert(ct == ct_expect) + + +class XChaCha20Test(unittest.TestCase): + + # From https://tools.ietf.org/html/draft-arciszewski-xchacha-03 + + def test_hchacha20(self): + # Section 2.2.1 + + from Crypto.Cipher.ChaCha20 import _HChaCha20 + + key = b"00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f:10:11:12:13:14:15:16:17:18:19:1a:1b:1c:1d:1e:1f" + key = unhexlify(key.replace(b":", b"")) + + nonce = b"00:00:00:09:00:00:00:4a:00:00:00:00:31:41:59:27" + nonce = unhexlify(nonce.replace(b":", b"")) + + subkey = _HChaCha20(key, nonce) + + expected = b"82413b42 27b27bfe d30e4250 8a877d73 a0f9e4d5 8a74a853 c12ec413 26d3ecdc" + expected = unhexlify(expected.replace(b" ", b"")) + + self.assertEqual(subkey, expected) + + def test_nonce(self): + key = b'A' * 32 + nonce = b'P' * 24 + cipher = ChaCha20.new(key=key, nonce=nonce) + self.assertEqual(nonce, cipher.nonce) + + def test_encrypt(self): + # Section A.3.2 + + pt = b""" + 5468652064686f6c65202870726f6e6f756e6365642022646f6c652229206973 + 20616c736f206b6e6f776e2061732074686520417369617469632077696c6420 + 646f672c2072656420646f672c20616e642077686973746c696e6720646f672e + 2049742069732061626f7574207468652073697a65206f662061204765726d61 + 6e20736865706865726420627574206c6f6f6b73206d6f7265206c696b652061 + 206c6f6e672d6c656767656420666f782e205468697320686967686c7920656c + 757369766520616e6420736b696c6c6564206a756d70657220697320636c6173 + 736966696564207769746820776f6c7665732c20636f796f7465732c206a6163 + 6b616c732c20616e6420666f78657320696e20746865207461786f6e6f6d6963 + 2066616d696c792043616e696461652e""" + pt = unhexlify(pt.replace(b"\n", b"").replace(b" ", b"")) + + key = unhexlify(b"808182838485868788898a8b8c8d8e8f909192939495969798999a9b9c9d9e9f") + iv = unhexlify(b"404142434445464748494a4b4c4d4e4f5051525354555658") + + ct = b""" + 7d0a2e6b7f7c65a236542630294e063b7ab9b555a5d5149aa21e4ae1e4fbce87 + ecc8e08a8b5e350abe622b2ffa617b202cfad72032a3037e76ffdcdc4376ee05 + 3a190d7e46ca1de04144850381b9cb29f051915386b8a710b8ac4d027b8b050f + 7cba5854e028d564e453b8a968824173fc16488b8970cac828f11ae53cabd201 + 12f87107df24ee6183d2274fe4c8b1485534ef2c5fbc1ec24bfc3663efaa08bc + 047d29d25043532db8391a8a3d776bf4372a6955827ccb0cdd4af403a7ce4c63 + d595c75a43e045f0cce1f29c8b93bd65afc5974922f214a40b7c402cdb91ae73 + c0b63615cdad0480680f16515a7ace9d39236464328a37743ffc28f4ddb324f4 + d0f5bbdc270c65b1749a6efff1fbaa09536175ccd29fb9e6057b307320d31683 + 8a9c71f70b5b5907a66f7ea49aadc409""" + ct = unhexlify(ct.replace(b"\n", b"").replace(b" ", b"")) + + cipher = ChaCha20.new(key=key, nonce=iv) + cipher.seek(64) # Counter = 1 + ct_test = cipher.encrypt(pt) + self.assertEqual(ct, ct_test) + + +class ByteArrayTest(unittest.TestCase): + """Verify we can encrypt or decrypt bytearrays""" + + def runTest(self): + + data = b"0123" + key = b"9" * 32 + nonce = b"t" * 8 + + # Encryption + data_ba = 
bytearray(data)
+        key_ba = bytearray(key)
+        nonce_ba = bytearray(nonce)
+
+        cipher1 = ChaCha20.new(key=key, nonce=nonce)
+        ct = cipher1.encrypt(data)
+
+        cipher2 = ChaCha20.new(key=key_ba, nonce=nonce_ba)
+        key_ba[:1] = b'\xFF'
+        nonce_ba[:1] = b'\xFF'
+        ct_test = cipher2.encrypt(data_ba)
+
+        self.assertEqual(ct, ct_test)
+        self.assertEqual(cipher1.nonce, cipher2.nonce)
+
+        # Decryption
+        key_ba = bytearray(key)
+        nonce_ba = bytearray(nonce)
+        ct_ba = bytearray(ct)
+
+        cipher3 = ChaCha20.new(key=key_ba, nonce=nonce_ba)
+        key_ba[:1] = b'\xFF'
+        nonce_ba[:1] = b'\xFF'
+        pt_test = cipher3.decrypt(ct_ba)
+
+        self.assertEqual(data, pt_test)
+
+
+class MemoryviewTest(unittest.TestCase):
+    """Verify we can encrypt or decrypt memoryviews"""
+
+    def runTest(self):
+
+        data = b"0123"
+        key = b"9" * 32
+        nonce = b"t" * 8
+
+        # Encryption
+        data_mv = memoryview(bytearray(data))
+        key_mv = memoryview(bytearray(key))
+        nonce_mv = memoryview(bytearray(nonce))
+
+        cipher1 = ChaCha20.new(key=key, nonce=nonce)
+        ct = cipher1.encrypt(data)
+
+        cipher2 = ChaCha20.new(key=key_mv, nonce=nonce_mv)
+        key_mv[:1] = b'\xFF'
+        nonce_mv[:1] = b'\xFF'
+        ct_test = cipher2.encrypt(data_mv)
+
+        self.assertEqual(ct, ct_test)
+        self.assertEqual(cipher1.nonce, cipher2.nonce)
+
+        # Decryption
+        key_mv = memoryview(bytearray(key))
+        nonce_mv = memoryview(bytearray(nonce))
+        ct_mv = memoryview(bytearray(ct))
+
+        cipher3 = ChaCha20.new(key=key_mv, nonce=nonce_mv)
+        key_mv[:1] = b'\xFF'
+        nonce_mv[:1] = b'\xFF'
+        pt_test = cipher3.decrypt(ct_mv)
+
+        self.assertEqual(data, pt_test)
+
+
+class ChaCha20_AGL_NIR(unittest.TestCase):
+
+    # From http://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-04
+    # and http://tools.ietf.org/html/draft-nir-cfrg-chacha20-poly1305-04
+    tv = [
+         ( "00" * 32,
+           "00" * 8,
+           "76b8e0ada0f13d90405d6ae55386bd28bdd219b8a08ded1aa836efcc"
+           "8b770dc7da41597c5157488d7724e03fb8d84a376a43b8f41518a11c"
+           "c387b669b2ee6586"
+           "9f07e7be5551387a98ba977c732d080d"
+           "cb0f29a048e3656912c6533e32ee7aed"
+           "29b721769ce64e43d57133b074d839d5"
+           "31ed1f28510afb45ace10a1f4b794d6f"
+         ),
+         ( "00" * 31 + "01",
+           "00" * 8,
+           "4540f05a9f1fb296d7736e7b208e3c96eb4fe1834688d2604f450952"
+           "ed432d41bbe2a0b6ea7566d2a5d1e7e20d42af2c53d792b1c43fea81"
+           "7e9ad275ae546963"
+           "3aeb5224ecf849929b9d828db1ced4dd"
+           "832025e8018b8160b82284f3c949aa5a"
+           "8eca00bbb4a73bdad192b5c42f73f2fd"
+           "4e273644c8b36125a64addeb006c13a0"
+         ),
+         ( "00" * 32,
+           "00" * 7 + "01",
+           "de9cba7bf3d69ef5e786dc63973f653a0b49e015adbff7134fcb7df1"
+           "37821031e85a050278a7084527214f73efc7fa5b5277062eb7a0433e"
+           "445f41e3"
+         ),
+         ( "00" * 32,
+           "01" + "00" * 7,
+           "ef3fdfd6c61578fbf5cf35bd3dd33b8009631634d21e42ac33960bd1"
+           "38e50d32111e4caf237ee53ca8ad6426194a88545ddc497a0b466e7d"
+           "6bbdb0041b2f586b"
+         ),
+         ( "000102030405060708090a0b0c0d0e0f101112131415161718191a1b"
+           "1c1d1e1f",
+           "0001020304050607",
+           "f798a189f195e66982105ffb640bb7757f579da31602fc93ec01ac56"
+           "f85ac3c134a4547b733b46413042c9440049176905d3be59ea1c53f1"
+           "5916155c2be8241a38008b9a26bc35941e2444177c8ade6689de9526"
+           "4986d95889fb60e84629c9bd9a5acb1cc118be563eb9b3a4a472f82e"
+           "09a7e778492b562ef7130e88dfe031c79db9d4f7c7a899151b9a4750"
+           "32b63fc385245fe054e3dd5a97a5f576fe064025d3ce042c566ab2c5"
+           "07b138db853e3d6959660996546cc9c4a6eafdc777c040d70eaf46f7"
+           "6dad3979e5c5360c3317166a1c894c94a371876a94df7628fe4eaaf2"
+           "ccb27d5aaae0ad7ad0f9d4b6ad3b54098746d4524d38407a6deb3ab7"
+           "8fab78c9"
+         ),
+         ( "00" * 32,
+           "00" * 7 + "02",
+           "c2c64d378cd536374ae204b9ef933fcd"
+           "1a8b2288b3dfa49672ab765b54ee27c7"
+
"8a970e0e955c14f3a88e741b97c286f7" + "5f8fc299e8148362fa198a39531bed6d" + ), + ] + + def runTest(self): + for (key, nonce, stream) in self.tv: + c = ChaCha20.new(key=unhexlify(b(key)), nonce=unhexlify(b(nonce))) + ct = unhexlify(b(stream)) + pt = b("\x00") * len(ct) + self.assertEqual(c.encrypt(pt), ct) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + key = b'4' * 32 + nonce = b'5' * 8 + cipher = ChaCha20.new(key=key, nonce=nonce) + + pt = b'5' * 300 + ct = cipher.encrypt(pt) + + output = bytearray(len(pt)) + cipher = ChaCha20.new(key=key, nonce=nonce) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = ChaCha20.new(key=key, nonce=nonce) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(len(pt))) + cipher = ChaCha20.new(key=key, nonce=nonce) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = ChaCha20.new(key=key, nonce=nonce) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + cipher = ChaCha20.new(key=key, nonce=nonce) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*len(pt)) + + cipher = ChaCha20.new(key=key, nonce=nonce) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*len(pt)) + + shorter_output = bytearray(len(pt) - 1) + + cipher = ChaCha20.new(key=key, nonce=nonce) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + + cipher = ChaCha20.new(key=key, nonce=nonce) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(ChaCha20Test) + tests += list_test_cases(XChaCha20Test) + tests.append(ChaCha20_AGL_NIR()) + tests.append(ByteArrayTest()) + tests.append(MemoryviewTest()) + tests.append(TestOutput()) + + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ChaCha20_Poly1305.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ChaCha20_Poly1305.py new file mode 100644 index 00000000..67440d76 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_ChaCha20_Poly1305.py @@ -0,0 +1,770 @@ +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+import unittest
+from binascii import unhexlify
+
+from Crypto.SelfTest.st_common import list_test_cases
+from Crypto.SelfTest.loader import load_test_vectors_wycheproof
+from Crypto.Util.py3compat import tobytes
+from Crypto.Cipher import ChaCha20_Poly1305
+from Crypto.Hash import SHAKE128
+
+from Crypto.Util._file_system import pycryptodome_filename
+from Crypto.Util.strxor import strxor
+
+
+def get_tag_random(tag, length):
+    return SHAKE128.new(data=tobytes(tag)).read(length)
+
+
+class ChaCha20Poly1305Tests(unittest.TestCase):
+
+    key_256 = get_tag_random("key_256", 32)
+    nonce_96 = get_tag_random("nonce_96", 12)
+    data_128 = get_tag_random("data_128", 16)
+
+    def test_loopback(self):
+        cipher = ChaCha20_Poly1305.new(key=self.key_256,
+                                       nonce=self.nonce_96)
+        pt = get_tag_random("plaintext", 16 * 100)
+        ct = cipher.encrypt(pt)
+
+        cipher = ChaCha20_Poly1305.new(key=self.key_256,
+                                       nonce=self.nonce_96)
+        pt2 = cipher.decrypt(ct)
+        self.assertEqual(pt, pt2)
+
+    def test_nonce(self):
+        # Nonce can only be 8 or 12 bytes
+        cipher = ChaCha20_Poly1305.new(key=self.key_256,
+                                       nonce=b'H' * 8)
+        self.assertEqual(len(cipher.nonce), 8)
+        cipher = ChaCha20_Poly1305.new(key=self.key_256,
+                                       nonce=b'H' * 12)
+        self.assertEqual(len(cipher.nonce), 12)
+
+        # If not passed, the nonce is created randomly
+        cipher = ChaCha20_Poly1305.new(key=self.key_256)
+        nonce1 = cipher.nonce
+        cipher = ChaCha20_Poly1305.new(key=self.key_256)
+        nonce2 = cipher.nonce
+        self.assertEqual(len(nonce1), 12)
+        self.assertNotEqual(nonce1, nonce2)
+
+        cipher = ChaCha20_Poly1305.new(key=self.key_256,
+                                       nonce=self.nonce_96)
+        ct = cipher.encrypt(self.data_128)
+
+        cipher = ChaCha20_Poly1305.new(key=self.key_256,
+                                       nonce=self.nonce_96)
+        self.assertEqual(ct, cipher.encrypt(self.data_128))
+
+    def test_nonce_must_be_bytes(self):
+        self.assertRaises(TypeError,
+                          ChaCha20_Poly1305.new,
+                          key=self.key_256,
+                          nonce=u'test12345678')
+
+    def test_nonce_length(self):
+        # nonce can only be 8 or 12 bytes long
+        self.assertRaises(ValueError,
+                          ChaCha20_Poly1305.new,
+                          key=self.key_256,
+                          nonce=b'0' * 7)
+        self.assertRaises(ValueError,
+                          ChaCha20_Poly1305.new,
+                          key=self.key_256,
+                          nonce=b'')
+
+    def test_block_size(self):
+        # Not based on block ciphers
+        cipher = ChaCha20_Poly1305.new(key=self.key_256,
+                                       nonce=self.nonce_96)
+        self.assertFalse(hasattr(cipher, 'block_size'))
+
+    def test_nonce_attribute(self):
+        cipher = ChaCha20_Poly1305.new(key=self.key_256,
+                                       nonce=self.nonce_96)
+        self.assertEqual(cipher.nonce, self.nonce_96)
+
+        # By default, a 12 bytes long nonce is randomly generated
+        nonce1 = ChaCha20_Poly1305.new(key=self.key_256).nonce
+        nonce2 = ChaCha20_Poly1305.new(key=self.key_256).nonce
+        self.assertEqual(len(nonce1), 12)
+        self.assertNotEqual(nonce1, nonce2)
+
+    def test_unknown_parameters(self):
+        self.assertRaises(TypeError,
+                          ChaCha20_Poly1305.new,
+                          key=self.key_256,
+                          param=9)
+
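+    # Editor's note (illustrative, not part of the upstream suite): for an
+    # AEAD cipher, processing an empty byte string is still well-defined --
+    # encrypt(b"") returns b"", yet a valid tag is produced and it covers
+    # any associated data supplied via update(), e.g.:
+    #     c = ChaCha20_Poly1305.new(key=key, nonce=nonce)
+    #     c.update(aad)
+    #     tag = c.digest()
+    # The tests below exercise these empty-input paths.
+    def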
test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_data_must_be_bytes(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_mac_len(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data_128) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Crypto.Util.strxor import strxor_c + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data_128) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_message_chunks(self): + # Validate that both associated data and plaintext/ciphertext + # can be broken up in chunks of arbitrary length + + auth_data = get_tag_random("authenticated data", 127) + plaintext = get_tag_random("plaintext", 127) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(auth_data) + ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) + + def break_up(data, chunk_length): + return [data[i:i+chunk_length] for i in range(0, len(data), + chunk_length)] + + # Encryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + pt2 = b"" + for chunk in break_up(ciphertext, chunk_length): + pt2 += cipher.decrypt(chunk) + self.assertEqual(plaintext, pt2) + cipher.verify(ref_mac) + + # Decryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + ct2 = b"" + for chunk in break_up(plaintext, chunk_length): + ct2 += cipher.encrypt(chunk) + self.assertEqual(ciphertext, ct2) + self.assertEqual(cipher.digest(), ref_mac) + + def test_bytearray(self): + + # Encrypt + key_ba = bytearray(self.key_256) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data_128) + data_ba = bytearray(self.data_128) + + cipher1 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher1.update(self.data_128) + ct = cipher1.encrypt(self.data_128) + tag = cipher1.digest() + + cipher2 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + key_ba[:3] = b'\xFF\xFF\xFF' + 
nonce_ba[:3] = b'\xFF\xFF\xFF' + cipher2.update(header_ba) + header_ba[:3] = b'\xFF\xFF\xFF' + ct_test = cipher2.encrypt(data_ba) + data_ba[:3] = b'\x99\x99\x99' + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_ba = bytearray(self.key_256) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data_128) + ct_ba = bytearray(ct) + tag_ba = bytearray(tag) + del data_ba + + cipher3 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + key_ba[:3] = b'\xFF\xFF\xFF' + nonce_ba[:3] = b'\xFF\xFF\xFF' + cipher3.update(header_ba) + header_ba[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt(ct_ba) + ct_ba[:3] = b'\xFF\xFF\xFF' + cipher3.verify(tag_ba) + + self.assertEqual(pt_test, self.data_128) + + def test_memoryview(self): + + # Encrypt + key_mv = memoryview(bytearray(self.key_256)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data_128)) + data_mv = memoryview(bytearray(self.data_128)) + + cipher1 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher1.update(self.data_128) + ct = cipher1.encrypt(self.data_128) + tag = cipher1.digest() + + cipher2 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + key_mv[:3] = b'\xFF\xFF\xFF' + nonce_mv[:3] = b'\xFF\xFF\xFF' + cipher2.update(header_mv) + header_mv[:3] = b'\xFF\xFF\xFF' + ct_test = cipher2.encrypt(data_mv) + data_mv[:3] = b'\x99\x99\x99' + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_mv = memoryview(bytearray(self.key_256)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data_128)) + ct_mv = memoryview(bytearray(ct)) + tag_mv = memoryview(bytearray(tag)) + del data_mv + + cipher3 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + key_mv[:3] = b'\xFF\xFF\xFF' + nonce_mv[:3] = b'\xFF\xFF\xFF' + cipher3.update(header_mv) + header_mv[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt(ct_mv) + ct_mv[:3] = b'\x99\x99\x99' + cipher3.verify(tag_mv) + + self.assertEqual(pt_test, self.data_128) + + +class XChaCha20Poly1305Tests(unittest.TestCase): + + def test_encrypt(self): + # From https://tools.ietf.org/html/draft-arciszewski-xchacha-03 + # Section A.3.1 + + pt = b""" + 4c616469657320616e642047656e746c656d656e206f662074686520636c6173 + 73206f66202739393a204966204920636f756c64206f6666657220796f75206f + 6e6c79206f6e652074697020666f7220746865206675747572652c2073756e73 + 637265656e20776f756c642062652069742e""" + pt = unhexlify(pt.replace(b"\n", b"").replace(b" ", b"")) + + aad = unhexlify(b"50515253c0c1c2c3c4c5c6c7") + key = unhexlify(b"808182838485868788898a8b8c8d8e8f909192939495969798999a9b9c9d9e9f") + iv = unhexlify(b"404142434445464748494a4b4c4d4e4f5051525354555657") + + ct = b""" + bd6d179d3e83d43b9576579493c0e939572a1700252bfaccbed2902c21396cbb + 731c7f1b0b4aa6440bf3a82f4eda7e39ae64c6708c54c216cb96b72e1213b452 + 2f8c9ba40db5d945b11b69b982c1bb9e3f3fac2bc369488f76b2383565d3fff9 + 21f9664c97637da9768812f615c68b13b52e""" + ct = unhexlify(ct.replace(b"\n", b"").replace(b" ", b"")) + + tag = unhexlify(b"c0875924c1c7987947deafd8780acf49") + + cipher = ChaCha20_Poly1305.new(key=key, nonce=iv) + cipher.update(aad) + ct_test, tag_test = cipher.encrypt_and_digest(pt) + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + + cipher = 
ChaCha20_Poly1305.new(key=key, nonce=iv) + cipher.update(aad) + cipher.decrypt_and_verify(ct, tag) + + +class ChaCha20Poly1305FSMTests(unittest.TestCase): + + key_256 = get_tag_random("key_256", 32) + nonce_96 = get_tag_random("nonce_96", 12) + data_128 = get_tag_random("data_128", 16) + + def test_valid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + # Verify path INIT->ENCRYPT->DIGEST + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + # Verify path INIT->DECRYPT->VERIFY + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + # Verify path INIT->UPDATE->DIGEST + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + cipher.verify(mac) + + def test_valid_full_path(self): + # Fixed authenticated data, fixed plaintext + # Verify path INIT->UPDATE->ENCRYPT->DIGEST + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + # Verify path INIT->UPDATE->DECRYPT->VERIFY + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + mac = cipher.digest() + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_encrypt_or_decrypt(self): + for method_name in "encrypt", "decrypt": + for auth_data in (None, b"333", self.data_128, + self.data_128 + b"3"): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + if auth_data is not None: + cipher.update(auth_data) + method = getattr(cipher, method_name) + method(self.data_128) + method(self.data_128) + method(self.data_128) + method(self.data_128) + + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # encrypt_and_digest + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + ct, mac = cipher.encrypt_and_digest(self.data_128) + + # decrypt_and_verify + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data_128, pt) + + def test_invalid_mixing_encrypt_decrypt(self): + # Once per method, with or without assoc. 
data + for method1_name, method2_name in (("encrypt", "decrypt"), + ("decrypt", "encrypt")): + for assoc_data_present in (True, False): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + if assoc_data_present: + cipher.update(self.data_128) + getattr(cipher, method1_name)(self.data_128) + self.assertRaises(TypeError, getattr(cipher, method2_name), + self.data_128) + + def test_invalid_encrypt_or_update_after_digest(self): + for method_name in "encrypt", "update": + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.encrypt(self.data_128) + cipher.digest() + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data_128) + + def test_invalid_decrypt_or_update_after_verify(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + for method_name in "decrypt", "update": + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + +def compact(x): + return unhexlify(x.replace(" ", "").replace(":", "")) + + +class TestVectorsRFC(unittest.TestCase): + """Test cases from RFC7539""" + + # AAD, PT, CT, MAC, KEY, NONCE + test_vectors_hex = [ + ( '50 51 52 53 c0 c1 c2 c3 c4 c5 c6 c7', + '4c 61 64 69 65 73 20 61 6e 64 20 47 65 6e 74 6c' + '65 6d 65 6e 20 6f 66 20 74 68 65 20 63 6c 61 73' + '73 20 6f 66 20 27 39 39 3a 20 49 66 20 49 20 63' + '6f 75 6c 64 20 6f 66 66 65 72 20 79 6f 75 20 6f' + '6e 6c 79 20 6f 6e 65 20 74 69 70 20 66 6f 72 20' + '74 68 65 20 66 75 74 75 72 65 2c 20 73 75 6e 73' + '63 72 65 65 6e 20 77 6f 75 6c 64 20 62 65 20 69' + '74 2e', + 'd3 1a 8d 34 64 8e 60 db 7b 86 af bc 53 ef 7e c2' + 'a4 ad ed 51 29 6e 08 fe a9 e2 b5 a7 36 ee 62 d6' + '3d be a4 5e 8c a9 67 12 82 fa fb 69 da 92 72 8b' + '1a 71 de 0a 9e 06 0b 29 05 d6 a5 b6 7e cd 3b 36' + '92 dd bd 7f 2d 77 8b 8c 98 03 ae e3 28 09 1b 58' + 'fa b3 24 e4 fa d6 75 94 55 85 80 8b 48 31 d7 bc' + '3f f4 de f0 8e 4b 7a 9d e5 76 d2 65 86 ce c6 4b' + '61 16', + '1a:e1:0b:59:4f:09:e2:6a:7e:90:2e:cb:d0:60:06:91', + '80 81 82 83 84 85 86 87 88 89 8a 8b 8c 8d 8e 8f' + '90 91 92 93 94 95 96 97 98 99 9a 9b 9c 9d 9e 9f', + '07 00 00 00' + '40 41 42 43 44 45 46 47', + ), + ( 'f3 33 88 86 00 00 00 00 00 00 4e 91', + '49 6e 74 65 72 6e 65 74 2d 44 72 61 66 74 73 20' + '61 72 65 20 64 72 61 66 74 20 64 6f 63 75 6d 65' + '6e 74 73 20 76 61 6c 69 64 20 66 6f 72 20 61 20' + '6d 61 78 69 6d 75 6d 20 6f 66 20 73 69 78 20 6d' + '6f 6e 74 68 73 20 61 6e 64 20 6d 61 79 20 62 65' + '20 75 70 64 61 74 65 64 2c 20 72 65 70 6c 61 63' + '65 64 2c 20 6f 72 20 6f 62 73 6f 6c 65 74 65 64' + '20 62 79 20 6f 74 68 65 72 20 64 6f 63 75 6d 65' + '6e 74 73 20 61 74 20 61 6e 79 20 74 69 6d 65 2e' + '20 49 74 20 69 73 20 69 6e 61 70 70 72 6f 70 72' + '69 61 74 65 20 74 6f 20 75 73 65 20 49 6e 74 65' + '72 6e 65 74 2d 44 72 61 66 74 73 20 61 73 20 72' + '65 66 65 72 65 6e 63 65 20 6d 61 74 
65 72 69 61' + '6c 20 6f 72 20 74 6f 20 63 69 74 65 20 74 68 65' + '6d 20 6f 74 68 65 72 20 74 68 61 6e 20 61 73 20' + '2f e2 80 9c 77 6f 72 6b 20 69 6e 20 70 72 6f 67' + '72 65 73 73 2e 2f e2 80 9d', + '64 a0 86 15 75 86 1a f4 60 f0 62 c7 9b e6 43 bd' + '5e 80 5c fd 34 5c f3 89 f1 08 67 0a c7 6c 8c b2' + '4c 6c fc 18 75 5d 43 ee a0 9e e9 4e 38 2d 26 b0' + 'bd b7 b7 3c 32 1b 01 00 d4 f0 3b 7f 35 58 94 cf' + '33 2f 83 0e 71 0b 97 ce 98 c8 a8 4a bd 0b 94 81' + '14 ad 17 6e 00 8d 33 bd 60 f9 82 b1 ff 37 c8 55' + '97 97 a0 6e f4 f0 ef 61 c1 86 32 4e 2b 35 06 38' + '36 06 90 7b 6a 7c 02 b0 f9 f6 15 7b 53 c8 67 e4' + 'b9 16 6c 76 7b 80 4d 46 a5 9b 52 16 cd e7 a4 e9' + '90 40 c5 a4 04 33 22 5e e2 82 a1 b0 a0 6c 52 3e' + 'af 45 34 d7 f8 3f a1 15 5b 00 47 71 8c bc 54 6a' + '0d 07 2b 04 b3 56 4e ea 1b 42 22 73 f5 48 27 1a' + '0b b2 31 60 53 fa 76 99 19 55 eb d6 31 59 43 4e' + 'ce bb 4e 46 6d ae 5a 10 73 a6 72 76 27 09 7a 10' + '49 e6 17 d9 1d 36 10 94 fa 68 f0 ff 77 98 71 30' + '30 5b ea ba 2e da 04 df 99 7b 71 4d 6c 6f 2c 29' + 'a6 ad 5c b4 02 2b 02 70 9b', + 'ee ad 9d 67 89 0c bb 22 39 23 36 fe a1 85 1f 38', + '1c 92 40 a5 eb 55 d3 8a f3 33 88 86 04 f6 b5 f0' + '47 39 17 c1 40 2b 80 09 9d ca 5c bc 20 70 75 c0', + '00 00 00 00 01 02 03 04 05 06 07 08', + ) + ] + + test_vectors = [[unhexlify(x.replace(" ","").replace(":","")) for x in tv] for tv in test_vectors_hex] + + def runTest(self): + for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: + # Encrypt + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + cipher.update(assoc_data) + ct2, mac2 = cipher.encrypt_and_digest(pt) + self.assertEqual(ct, ct2) + self.assertEqual(mac, mac2) + + # Decrypt + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + cipher.update(assoc_data) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" + + def load_tests(self, filename): + + def filter_tag(group): + return group['tagSize'] // 8 + + def filter_algo(root): + return root['algorithm'] + + result = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + filename, + "Wycheproof ChaCha20-Poly1305", + root_tag={'algo': filter_algo}, + group_tag={'tag_size': filter_tag}) + return result + + def setUp(self): + self.tv = [] + self.tv.extend(self.load_tests("chacha20_poly1305_test.json")) + self.tv.extend(self.load_tests("xchacha20_poly1305_test.json")) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt %s Test #%s" % (tv.algo, tv.id) + + try: + cipher = ChaCha20_Poly1305.new(key=tv.key, nonce=tv.iv) + except ValueError as e: + assert len(tv.iv) not in (8, 12) and "Nonce must be" in str(e) + return + + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(ct, tv.ct) + self.assertEqual(tag, tv.tag) + self.warn(tv) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt %s Test #%s" % (tv.algo, tv.id) + + try: + cipher = ChaCha20_Poly1305.new(key=tv.key, nonce=tv.iv) + except ValueError as e: + assert len(tv.iv) not in (8, 12) and "Nonce must be" in str(e) + return + + cipher.update(tv.aad) + try: + pt = cipher.decrypt_and_verify(tv.ct, tv.tag) + except 
ValueError:
+            assert not tv.valid
+        else:
+            assert tv.valid
+            self.assertEqual(pt, tv.msg)
+            self.warn(tv)
+
+    def test_corrupt_decrypt(self, tv):
+        self._id = "Wycheproof Corrupt Decrypt ChaCha20-Poly1305 Test #" + str(tv.id)
+        if len(tv.iv) == 0 or len(tv.ct) < 1:
+            return
+        cipher = ChaCha20_Poly1305.new(key=tv.key, nonce=tv.iv)
+        cipher.update(tv.aad)
+        ct_corrupt = strxor(tv.ct, b"\x00" * (len(tv.ct) - 1) + b"\x01")
+        self.assertRaises(ValueError, cipher.decrypt_and_verify, ct_corrupt, tv.tag)
+
+    def runTest(self):
+
+        for tv in self.tv:
+            self.test_encrypt(tv)
+            self.test_decrypt(tv)
+            self.test_corrupt_decrypt(tv)
+
+
+class TestOutput(unittest.TestCase):
+
+    def runTest(self):
+        # Encrypt/Decrypt data and test output parameter
+
+        key = b'4' * 32
+        nonce = b'5' * 12
+        cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce)
+
+        pt = b'5' * 16
+        ct = cipher.encrypt(pt)
+
+        output = bytearray(16)
+        cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce)
+        res = cipher.encrypt(pt, output=output)
+        self.assertEqual(ct, output)
+        self.assertEqual(res, None)
+
+        cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce)
+        res = cipher.decrypt(ct, output=output)
+        self.assertEqual(pt, output)
+        self.assertEqual(res, None)
+
+        output = memoryview(bytearray(16))
+        cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce)
+        cipher.encrypt(pt, output=output)
+        self.assertEqual(ct, output)
+
+        cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce)
+        cipher.decrypt(ct, output=output)
+        self.assertEqual(pt, output)
+
+        cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce)
+        self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16)
+
+        cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce)
+        self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16)
+
+        shorter_output = bytearray(7)
+
+        cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce)
+        self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output)
+
+        cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce)
+        self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output)
+
+
+def get_tests(config={}):
+    wycheproof_warnings = config.get('wycheproof_warnings')
+
+    tests = []
+    tests += list_test_cases(ChaCha20Poly1305Tests)
+    tests += list_test_cases(XChaCha20Poly1305Tests)
+    tests += list_test_cases(ChaCha20Poly1305FSMTests)
+    tests += [TestVectorsRFC()]
+    tests += [TestVectorsWycheproof(wycheproof_warnings)]
+    tests += [TestOutput()]
+    return tests
+
+
+if __name__ == '__main__':
+    def suite():
+        return unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_DES.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_DES.py
new file mode 100644
index 00000000..ee261bce
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_DES.py
@@ -0,0 +1,374 @@
+# -*- coding: utf-8 -*-
+#
+# SelfTest/Cipher/DES.py: Self-test for the (Single) DES cipher
+#
+# Written in 2008 by Dwayne C. Litzenberger
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Cipher.DES""" + +import unittest + +from Crypto.Cipher import DES + +# This is a list of (plaintext, ciphertext, key, description) tuples. +SP800_17_B1_KEY = '01' * 8 +SP800_17_B2_PT = '00' * 8 +test_data = [ + # Test vectors from Appendix A of NIST SP 800-17 + # "Modes of Operation Validation System (MOVS): Requirements and Procedures" + # http://csrc.nist.gov/publications/nistpubs/800-17/800-17.pdf + + # Appendix A - "Sample Round Outputs for the DES" + ('0000000000000000', '82dcbafbdeab6602', '10316e028c8f3b4a', + "NIST SP800-17 A"), + + # Table B.1 - Variable Plaintext Known Answer Test + ('8000000000000000', '95f8a5e5dd31d900', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #0'), + ('4000000000000000', 'dd7f121ca5015619', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #1'), + ('2000000000000000', '2e8653104f3834ea', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #2'), + ('1000000000000000', '4bd388ff6cd81d4f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #3'), + ('0800000000000000', '20b9e767b2fb1456', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #4'), + ('0400000000000000', '55579380d77138ef', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #5'), + ('0200000000000000', '6cc5defaaf04512f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #6'), + ('0100000000000000', '0d9f279ba5d87260', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #7'), + ('0080000000000000', 'd9031b0271bd5a0a', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #8'), + ('0040000000000000', '424250b37c3dd951', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #9'), + ('0020000000000000', 'b8061b7ecd9a21e5', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #10'), + ('0010000000000000', 'f15d0f286b65bd28', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #11'), + ('0008000000000000', 'add0cc8d6e5deba1', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #12'), + ('0004000000000000', 'e6d5f82752ad63d1', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #13'), + ('0002000000000000', 'ecbfe3bd3f591a5e', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #14'), + ('0001000000000000', 'f356834379d165cd', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #15'), + ('0000800000000000', '2b9f982f20037fa9', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #16'), + ('0000400000000000', '889de068a16f0be6', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #17'), + ('0000200000000000', 'e19e275d846a1298', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #18'), + ('0000100000000000', '329a8ed523d71aec', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #19'), + ('0000080000000000', 'e7fce22557d23c97', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #20'), + ('0000040000000000', '12a9f5817ff2d65d', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #21'), + ('0000020000000000', 'a484c3ad38dc9c19', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #22'), + ('0000010000000000', 'fbe00a8a1ef8ad72', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #23'), + ('0000008000000000', '750d079407521363', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #24'), + ('0000004000000000', '64feed9c724c2faf', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #25'), + ('0000002000000000', 'f02b263b328e2b60', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 
#26'), + ('0000001000000000', '9d64555a9a10b852', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #27'), + ('0000000800000000', 'd106ff0bed5255d7', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #28'), + ('0000000400000000', 'e1652c6b138c64a5', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #29'), + ('0000000200000000', 'e428581186ec8f46', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #30'), + ('0000000100000000', 'aeb5f5ede22d1a36', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #31'), + ('0000000080000000', 'e943d7568aec0c5c', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #32'), + ('0000000040000000', 'df98c8276f54b04b', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #33'), + ('0000000020000000', 'b160e4680f6c696f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #34'), + ('0000000010000000', 'fa0752b07d9c4ab8', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #35'), + ('0000000008000000', 'ca3a2b036dbc8502', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #36'), + ('0000000004000000', '5e0905517bb59bcf', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #37'), + ('0000000002000000', '814eeb3b91d90726', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #38'), + ('0000000001000000', '4d49db1532919c9f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #39'), + ('0000000000800000', '25eb5fc3f8cf0621', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #40'), + ('0000000000400000', 'ab6a20c0620d1c6f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #41'), + ('0000000000200000', '79e90dbc98f92cca', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #42'), + ('0000000000100000', '866ecedd8072bb0e', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #43'), + ('0000000000080000', '8b54536f2f3e64a8', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #44'), + ('0000000000040000', 'ea51d3975595b86b', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #45'), + ('0000000000020000', 'caffc6ac4542de31', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #46'), + ('0000000000010000', '8dd45a2ddf90796c', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #47'), + ('0000000000008000', '1029d55e880ec2d0', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #48'), + ('0000000000004000', '5d86cb23639dbea9', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #49'), + ('0000000000002000', '1d1ca853ae7c0c5f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #50'), + ('0000000000001000', 'ce332329248f3228', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #51'), + ('0000000000000800', '8405d1abe24fb942', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #52'), + ('0000000000000400', 'e643d78090ca4207', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #53'), + ('0000000000000200', '48221b9937748a23', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #54'), + ('0000000000000100', 'dd7c0bbd61fafd54', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #55'), + ('0000000000000080', '2fbc291a570db5c4', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #56'), + ('0000000000000040', 'e07c30d7e4e26e12', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #57'), + ('0000000000000020', '0953e2258e8e90a1', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #58'), + ('0000000000000010', '5b711bc4ceebf2ee', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #59'), + ('0000000000000008', 'cc083f1e6d9e85f6', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #60'), + ('0000000000000004', 'd2fd8867d50d2dfe', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #61'), + ('0000000000000002', '06e7ea22ce92708f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #62'), + ('0000000000000001', '166b40b44aba4bd6', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #63'), + + # Table B.2 - Variable Key Known Answer Test + (SP800_17_B2_PT, '95a8d72813daa94d', '8001010101010101', + 'NIST SP800-17 B.2 #0'), + (SP800_17_B2_PT, '0eec1487dd8c26d5', '4001010101010101', + 'NIST SP800-17 B.2 #1'), + (SP800_17_B2_PT, '7ad16ffb79c45926', '2001010101010101', + 'NIST 
SP800-17 B.2 #2'), + (SP800_17_B2_PT, 'd3746294ca6a6cf3', '1001010101010101', + 'NIST SP800-17 B.2 #3'), + (SP800_17_B2_PT, '809f5f873c1fd761', '0801010101010101', + 'NIST SP800-17 B.2 #4'), + (SP800_17_B2_PT, 'c02faffec989d1fc', '0401010101010101', + 'NIST SP800-17 B.2 #5'), + (SP800_17_B2_PT, '4615aa1d33e72f10', '0201010101010101', + 'NIST SP800-17 B.2 #6'), + (SP800_17_B2_PT, '2055123350c00858', '0180010101010101', + 'NIST SP800-17 B.2 #7'), + (SP800_17_B2_PT, 'df3b99d6577397c8', '0140010101010101', + 'NIST SP800-17 B.2 #8'), + (SP800_17_B2_PT, '31fe17369b5288c9', '0120010101010101', + 'NIST SP800-17 B.2 #9'), + (SP800_17_B2_PT, 'dfdd3cc64dae1642', '0110010101010101', + 'NIST SP800-17 B.2 #10'), + (SP800_17_B2_PT, '178c83ce2b399d94', '0108010101010101', + 'NIST SP800-17 B.2 #11'), + (SP800_17_B2_PT, '50f636324a9b7f80', '0104010101010101', + 'NIST SP800-17 B.2 #12'), + (SP800_17_B2_PT, 'a8468ee3bc18f06d', '0102010101010101', + 'NIST SP800-17 B.2 #13'), + (SP800_17_B2_PT, 'a2dc9e92fd3cde92', '0101800101010101', + 'NIST SP800-17 B.2 #14'), + (SP800_17_B2_PT, 'cac09f797d031287', '0101400101010101', + 'NIST SP800-17 B.2 #15'), + (SP800_17_B2_PT, '90ba680b22aeb525', '0101200101010101', + 'NIST SP800-17 B.2 #16'), + (SP800_17_B2_PT, 'ce7a24f350e280b6', '0101100101010101', + 'NIST SP800-17 B.2 #17'), + (SP800_17_B2_PT, '882bff0aa01a0b87', '0101080101010101', + 'NIST SP800-17 B.2 #18'), + (SP800_17_B2_PT, '25610288924511c2', '0101040101010101', + 'NIST SP800-17 B.2 #19'), + (SP800_17_B2_PT, 'c71516c29c75d170', '0101020101010101', + 'NIST SP800-17 B.2 #20'), + (SP800_17_B2_PT, '5199c29a52c9f059', '0101018001010101', + 'NIST SP800-17 B.2 #21'), + (SP800_17_B2_PT, 'c22f0a294a71f29f', '0101014001010101', + 'NIST SP800-17 B.2 #22'), + (SP800_17_B2_PT, 'ee371483714c02ea', '0101012001010101', + 'NIST SP800-17 B.2 #23'), + (SP800_17_B2_PT, 'a81fbd448f9e522f', '0101011001010101', + 'NIST SP800-17 B.2 #24'), + (SP800_17_B2_PT, '4f644c92e192dfed', '0101010801010101', + 'NIST SP800-17 B.2 #25'), + (SP800_17_B2_PT, '1afa9a66a6df92ae', '0101010401010101', + 'NIST SP800-17 B.2 #26'), + (SP800_17_B2_PT, 'b3c1cc715cb879d8', '0101010201010101', + 'NIST SP800-17 B.2 #27'), + (SP800_17_B2_PT, '19d032e64ab0bd8b', '0101010180010101', + 'NIST SP800-17 B.2 #28'), + (SP800_17_B2_PT, '3cfaa7a7dc8720dc', '0101010140010101', + 'NIST SP800-17 B.2 #29'), + (SP800_17_B2_PT, 'b7265f7f447ac6f3', '0101010120010101', + 'NIST SP800-17 B.2 #30'), + (SP800_17_B2_PT, '9db73b3c0d163f54', '0101010110010101', + 'NIST SP800-17 B.2 #31'), + (SP800_17_B2_PT, '8181b65babf4a975', '0101010108010101', + 'NIST SP800-17 B.2 #32'), + (SP800_17_B2_PT, '93c9b64042eaa240', '0101010104010101', + 'NIST SP800-17 B.2 #33'), + (SP800_17_B2_PT, '5570530829705592', '0101010102010101', + 'NIST SP800-17 B.2 #34'), + (SP800_17_B2_PT, '8638809e878787a0', '0101010101800101', + 'NIST SP800-17 B.2 #35'), + (SP800_17_B2_PT, '41b9a79af79ac208', '0101010101400101', + 'NIST SP800-17 B.2 #36'), + (SP800_17_B2_PT, '7a9be42f2009a892', '0101010101200101', + 'NIST SP800-17 B.2 #37'), + (SP800_17_B2_PT, '29038d56ba6d2745', '0101010101100101', + 'NIST SP800-17 B.2 #38'), + (SP800_17_B2_PT, '5495c6abf1e5df51', '0101010101080101', + 'NIST SP800-17 B.2 #39'), + (SP800_17_B2_PT, 'ae13dbd561488933', '0101010101040101', + 'NIST SP800-17 B.2 #40'), + (SP800_17_B2_PT, '024d1ffa8904e389', '0101010101020101', + 'NIST SP800-17 B.2 #41'), + (SP800_17_B2_PT, 'd1399712f99bf02e', '0101010101018001', + 'NIST SP800-17 B.2 #42'), + (SP800_17_B2_PT, '14c1d7c1cffec79e', '0101010101014001', + 
'NIST SP800-17 B.2 #43'), + (SP800_17_B2_PT, '1de5279dae3bed6f', '0101010101012001', + 'NIST SP800-17 B.2 #44'), + (SP800_17_B2_PT, 'e941a33f85501303', '0101010101011001', + 'NIST SP800-17 B.2 #45'), + (SP800_17_B2_PT, 'da99dbbc9a03f379', '0101010101010801', + 'NIST SP800-17 B.2 #46'), + (SP800_17_B2_PT, 'b7fc92f91d8e92e9', '0101010101010401', + 'NIST SP800-17 B.2 #47'), + (SP800_17_B2_PT, 'ae8e5caa3ca04e85', '0101010101010201', + 'NIST SP800-17 B.2 #48'), + (SP800_17_B2_PT, '9cc62df43b6eed74', '0101010101010180', + 'NIST SP800-17 B.2 #49'), + (SP800_17_B2_PT, 'd863dbb5c59a91a0', '0101010101010140', + 'NIST SP800-17 B.2 #50'), + (SP800_17_B2_PT, 'a1ab2190545b91d7', '0101010101010120', + 'NIST SP800-17 B.2 #51'), + (SP800_17_B2_PT, '0875041e64c570f7', '0101010101010110', + 'NIST SP800-17 B.2 #52'), + (SP800_17_B2_PT, '5a594528bebef1cc', '0101010101010108', + 'NIST SP800-17 B.2 #53'), + (SP800_17_B2_PT, 'fcdb3291de21f0c0', '0101010101010104', + 'NIST SP800-17 B.2 #54'), + (SP800_17_B2_PT, '869efd7f9f265a09', '0101010101010102', + 'NIST SP800-17 B.2 #55'), +] + +class RonRivestTest(unittest.TestCase): + """ Ronald L. Rivest's DES test, see + http://people.csail.mit.edu/rivest/Destest.txt + ABSTRACT + -------- + + We present a simple way to test the correctness of a DES implementation: + Use the recurrence relation: + + X0 = 9474B8E8C73BCA7D (hexadecimal) + + X(i+1) = IF (i is even) THEN E(Xi,Xi) ELSE D(Xi,Xi) + + to compute a sequence of 64-bit values: X0, X1, X2, ..., X16. Here + E(X,K) denotes the DES encryption of X using key K, and D(X,K) denotes + the DES decryption of X using key K. If you obtain + + X16 = 1B1A2DDB4C642438 + + your implementation does not have any of the 36,568 possible single-fault + errors described herein. + """ + def runTest(self): + from binascii import b2a_hex + + X = [] + X[0:] = [b'\x94\x74\xB8\xE8\xC7\x3B\xCA\x7D'] + + for i in range(16): + c = DES.new(X[i],DES.MODE_ECB) + if not (i&1): # (num&1) returns 1 for odd numbers + X[i+1:] = [c.encrypt(X[i])] # even + else: + X[i+1:] = [c.decrypt(X[i])] # odd + + self.assertEqual(b2a_hex(X[16]), + b2a_hex(b'\x1B\x1A\x2D\xDB\x4C\x64\x24\x38')) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + cipher = DES.new(b'4'*8, DES.MODE_ECB) + + pt = b'5' * 8 + ct = cipher.encrypt(pt) + + output = bytearray(8) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(8)) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*8) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*8) + + shorter_output = bytearray(7) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from .common import make_block_tests + tests = make_block_tests(DES, "DES", test_data) + tests += [RonRivestTest()] + tests += [TestOutput()] + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_DES3.py 
b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_DES3.py new file mode 100644 index 00000000..8d6a6485 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_DES3.py @@ -0,0 +1,195 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/DES3.py: Self-test for the Triple-DES cipher +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Cipher.DES3""" + +import unittest +from binascii import hexlify, unhexlify + +from Crypto.Cipher import DES3 + +from Crypto.Util.strxor import strxor_c +from Crypto.Util.py3compat import bchr, tostr +from Crypto.SelfTest.loader import load_test_vectors +from Crypto.SelfTest.st_common import list_test_cases + +# This is a list of (plaintext, ciphertext, key, description) tuples. +test_data = [ + # Test vector from Appendix B of NIST SP 800-67 + # "Recommendation for the Triple Data Encryption Algorithm (TDEA) Block + # Cipher" + # http://csrc.nist.gov/publications/nistpubs/800-67/SP800-67.pdf + ('54686520717566636b2062726f776e20666f78206a756d70', + 'a826fd8ce53b855fcce21c8112256fe668d5c05dd9b6b900', + '0123456789abcdef23456789abcdef01456789abcdef0123', + 'NIST SP800-67 B.1'), + + # This test is designed to test the DES3 API, not the correctness of the + # output. 
+ ('21e81b7ade88a259', '5c577d4d9b20c0f8', + '9b397ebf81b1181e282f4bb8adbadc6b', 'Two-key 3DES'), +] + +# NIST CAVP test vectors + +nist_tdes_mmt_files = ("TECBMMT2.rsp", "TECBMMT3.rsp") + +for tdes_file in nist_tdes_mmt_files: + + test_vectors = load_test_vectors( + ("Cipher", "TDES"), + tdes_file, + "TDES ECB (%s)" % tdes_file, + {"count": lambda x: int(x)}) or [] + + for index, tv in enumerate(test_vectors): + + # The test vector file contains some directive lines + if isinstance(tv, str): + continue + + key = tv.key1 + tv.key2 + tv.key3 + test_data_item = (tostr(hexlify(tv.plaintext)), + tostr(hexlify(tv.ciphertext)), + tostr(hexlify(key)), + "%s (%s)" % (tdes_file, index)) + test_data.append(test_data_item) + + +class CheckParity(unittest.TestCase): + + def test_parity_option2(self): + before_2k = unhexlify("CABF326FA56734324FFCCABCDEFACABF") + after_2k = DES3.adjust_key_parity(before_2k) + self.assertEqual(after_2k, + unhexlify("CBBF326EA46734324FFDCBBCDFFBCBBF")) + + def test_parity_option3(self): + before_3k = unhexlify("AAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCC") + after_3k = DES3.adjust_key_parity(before_3k) + self.assertEqual(after_3k, + unhexlify("ABABABABABABABABBABABABABABABABACDCDCDCDCDCDCDCD")) + + def test_degradation(self): + sub_key1 = bchr(1) * 8 + sub_key2 = bchr(255) * 8 + + # K1 == K2 + self.assertRaises(ValueError, DES3.adjust_key_parity, + sub_key1 * 2 + sub_key2) + + # K2 == K3 + self.assertRaises(ValueError, DES3.adjust_key_parity, + sub_key1 + sub_key2 * 2) + + # K1 == K2 == K3 + self.assertRaises(ValueError, DES3.adjust_key_parity, + sub_key1 * 3) + + # K1 == K2 (with different parity) + self.assertRaises(ValueError, DES3.adjust_key_parity, + sub_key1 + strxor_c(sub_key1, 1) + sub_key2) + + +class DegenerateToDESTest(unittest.TestCase): + + def runTest(self): + sub_key1 = bchr(1) * 8 + sub_key2 = bchr(255) * 8 + + # K1 == K2 + self.assertRaises(ValueError, DES3.new, + sub_key1 * 2 + sub_key2, + DES3.MODE_ECB) + + # K2 == K3 + self.assertRaises(ValueError, DES3.new, + sub_key1 + sub_key2 * 2, + DES3.MODE_ECB) + + # K1 == K2 == K3 + self.assertRaises(ValueError, DES3.new, + sub_key1 * 3, + DES3.MODE_ECB) + + # K2 == K3 (parity is ignored) + self.assertRaises(ValueError, DES3.new, + sub_key1 + sub_key2 + strxor_c(sub_key2, 0x1), + DES3.MODE_ECB) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + cipher = DES3.new(b'4'*8 + b'G'*8 + b'T'*8, DES3.MODE_ECB) + + pt = b'5' * 16 + ct = cipher.encrypt(pt) + + output = bytearray(16) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(16)) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(7) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from .common import make_block_tests + + tests = [] + tests = make_block_tests(DES3, "DES3", test_data) + tests.append(DegenerateToDESTest()) + tests += list_test_cases(CheckParity) + tests += [TestOutput()] + return tests + + +if __name__ == 
'__main__': + import unittest + + def suite(): + # Must return the suite, or unittest.main(defaultTest='suite') fails + return unittest.TestSuite(get_tests()) + + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_EAX.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_EAX.py new file mode 100644 index 00000000..fe93d719 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_EAX.py @@ -0,0 +1,773 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE.
+# =================================================================== + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors_wycheproof +from Crypto.Util.py3compat import tobytes, bchr +from Crypto.Cipher import AES, DES3 +from Crypto.Hash import SHAKE128 + +from Crypto.Util.strxor import strxor + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class EaxTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + key_192 = get_tag_random("key_192", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data_128 = get_tag_random("data_128", 16) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_loopback_64(self): + cipher = DES3.new(self.key_192, DES3.MODE_EAX, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 8 * 100) + ct = cipher.encrypt(pt) + + cipher = DES3.new(self.key_192, DES3.MODE_EAX, nonce=self.nonce_96) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # If not passed, the nonce is created randomly + cipher = AES.new(self.key_128, AES.MODE_EAX) + nonce1 = cipher.nonce + cipher = AES.new(self.key_128, AES.MODE_EAX) + nonce2 = cipher.nonce + self.assertEqual(len(nonce1), 16) + self.assertNotEqual(nonce1, nonce2) + + cipher = AES.new(self.key_128, AES.MODE_EAX, self.nonce_96) + ct = cipher.encrypt(self.data_128) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertEqual(ct, cipher.encrypt(self.data_128)) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_EAX, + nonce=u'test12345678') + + def test_nonce_length(self): + # nonce can be of any length (but not empty) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_EAX, + nonce=b"") + + for x in range(1, 128): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=bchr(1) * x) + cipher.encrypt(bchr(1)) + + def test_block_size_128(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_block_size_64(self): + cipher = DES3.new(self.key_192, AES.MODE_EAX, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, DES3.block_size) + + def test_nonce_attribute(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, a 16 bytes long nonce is randomly generated + nonce1 = AES.new(self.key_128, AES.MODE_EAX).nonce + nonce2 = AES.new(self.key_128, AES.MODE_EAX).nonce + self.assertEqual(len(nonce1), 16) + self.assertNotEqual(nonce1, nonce2) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_EAX, + self.nonce_96, 7) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_EAX, + nonce=self.nonce_96, unknown=7) + + # But some are only known by the base cipher + # (e.g. 
use_aesni consumed by the AES module) + AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96, + use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_mac_len(self): + # Invalid MAC length + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_EAX, + nonce=self.nonce_96, mac_len=3) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_EAX, + nonce=self.nonce_96, mac_len=16+1) + + # Valid MAC length + for mac_len in range(5, 16 + 1): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96, + mac_len=mac_len) + _, mac = cipher.encrypt_and_digest(self.data_128) + self.assertEqual(len(mac), mac_len) + + # Default MAC length + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data_128) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Crypto.Util.strxor import strxor_c + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data_128) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_message_chunks(self): + # Validate that both associated data and plaintext/ciphertext + # can be broken up in chunks of arbitrary length + + auth_data = get_tag_random("authenticated data", 127) + plaintext = get_tag_random("plaintext", 127) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.update(auth_data) + ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) + + def break_up(data, chunk_length): + return [data[i:i+chunk_length] for i in range(0, len(data), + chunk_length)] + + # Encryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + pt2 = b"" + for chunk in break_up(ciphertext, chunk_length): + pt2 += cipher.decrypt(chunk) + self.assertEqual(plaintext, pt2) + cipher.verify(ref_mac) + + # Decryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + ct2 = b"" + for chunk in break_up(plaintext, chunk_length): + ct2 += cipher.encrypt(chunk) + 
self.assertEqual(ciphertext, ct2) + self.assertEqual(cipher.digest(), ref_mac) + + def test_bytearray(self): + + # Encrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data_128) + data_ba = bytearray(self.data_128) + + cipher1 = AES.new(self.key_128, + AES.MODE_EAX, + nonce=self.nonce_96) + cipher1.update(self.data_128) + ct = cipher1.encrypt(self.data_128) + tag = cipher1.digest() + + cipher2 = AES.new(key_ba, + AES.MODE_EAX, + nonce=nonce_ba) + key_ba[:3] = b'\xFF\xFF\xFF' + nonce_ba[:3] = b'\xFF\xFF\xFF' + cipher2.update(header_ba) + header_ba[:3] = b'\xFF\xFF\xFF' + ct_test = cipher2.encrypt(data_ba) + data_ba[:3] = b'\x99\x99\x99' + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data_128) + ct_ba = bytearray(ct) + tag_ba = bytearray(tag) + del data_ba + + cipher3 = AES.new(key_ba, + AES.MODE_EAX, + nonce=nonce_ba) + key_ba[:3] = b'\xFF\xFF\xFF' + nonce_ba[:3] = b'\xFF\xFF\xFF' + cipher3.update(header_ba) + header_ba[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt(ct_ba) + ct_ba[:3] = b'\xFF\xFF\xFF' + cipher3.verify(tag_ba) + + self.assertEqual(pt_test, self.data_128) + + def test_memoryview(self): + + # Encrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data_128)) + data_mv = memoryview(bytearray(self.data_128)) + + cipher1 = AES.new(self.key_128, + AES.MODE_EAX, + nonce=self.nonce_96) + cipher1.update(self.data_128) + ct = cipher1.encrypt(self.data_128) + tag = cipher1.digest() + + cipher2 = AES.new(key_mv, + AES.MODE_EAX, + nonce=nonce_mv) + key_mv[:3] = b'\xFF\xFF\xFF' + nonce_mv[:3] = b'\xFF\xFF\xFF' + cipher2.update(header_mv) + header_mv[:3] = b'\xFF\xFF\xFF' + ct_test = cipher2.encrypt(data_mv) + data_mv[:3] = b'\x99\x99\x99' + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data_128)) + ct_mv = memoryview(bytearray(ct)) + tag_mv = memoryview(bytearray(tag)) + del data_mv + + cipher3 = AES.new(key_mv, + AES.MODE_EAX, + nonce=nonce_mv) + key_mv[:3] = b'\xFF\xFF\xFF' + nonce_mv[:3] = b'\xFF\xFF\xFF' + cipher3.update(header_mv) + header_mv[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt(ct_mv) + ct_mv[:3] = b'\x99\x99\x99' + cipher3.verify(tag_mv) + + self.assertEqual(pt_test, self.data_128) + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + tag = cipher.digest() + + output = bytearray(128) + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + res, tag_out = cipher.encrypt_and_digest(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + self.assertEqual(tag, tag_out) + + cipher = 
AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + res = cipher.decrypt_and_verify(ct, tag, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + def test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + LEN_PT = 16 + + pt = b'5' * LEN_PT + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0' * LEN_PT) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0' * LEN_PT) + + shorter_output = bytearray(LEN_PT - 1) + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +class EaxFSMTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data_128 = get_tag_random("data_128", 16) + + def test_valid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + # Verify path INIT->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + # Verify path INIT->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + # Verify path INIT->UPDATE->DIGEST + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + cipher.update(self.data_128) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + cipher.update(self.data_128) + cipher.verify(mac) + + def test_valid_full_path(self): + # Fixed authenticated data, fixed plaintext + # Verify path INIT->UPDATE->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + cipher.update(self.data_128) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + # Verify path INIT->UPDATE->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + cipher.update(self.data_128) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + mac = cipher.digest() + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_encrypt_or_decrypt(self): + for method_name in "encrypt", "decrypt": + for auth_data in (None, b"333", self.data_128, + self.data_128 + b"3"): + if auth_data is None: + assoc_len = None + else: 
+ assoc_len = len(auth_data) + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + if auth_data is not None: + cipher.update(auth_data) + method = getattr(cipher, method_name) + method(self.data_128) + method(self.data_128) + method(self.data_128) + method(self.data_128) + + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.update(self.data_128) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.update(self.data_128) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # encrypt_and_digest + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.update(self.data_128) + ct, mac = cipher.encrypt_and_digest(self.data_128) + + # decrypt_and_verify + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.update(self.data_128) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data_128, pt) + + def test_invalid_mixing_encrypt_decrypt(self): + # Once per method, with or without assoc. data + for method1_name, method2_name in (("encrypt", "decrypt"), + ("decrypt", "encrypt")): + for assoc_data_present in (True, False): + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + if assoc_data_present: + cipher.update(self.data_128) + getattr(cipher, method1_name)(self.data_128) + self.assertRaises(TypeError, getattr(cipher, method2_name), + self.data_128) + + def test_invalid_encrypt_or_update_after_digest(self): + for method_name in "encrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.encrypt(self.data_128) + cipher.digest() + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data_128) + + def test_invalid_decrypt_or_update_after_verify(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + for method_name in "decrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + +class TestVectorsPaper(unittest.TestCase): + """Class exercising the EAX test vectors found in + http://www.cs.ucdavis.edu/~rogaway/papers/eax.pdf""" + + test_vectors_hex = [ + ( '6bfb914fd07eae6b', + '', + '', + 'e037830e8389f27b025a2d6527e79d01', + '233952dee4d5ed5f9b9c6d6ff80ff478', + '62EC67F9C3A4A407FCB2A8C49031A8B3' + ), + ( + 'fa3bfd4806eb53fa', + 'f7fb', + '19dd', + '5c4c9331049d0bdab0277408f67967e5', + '91945d3f4dcbee0bf45ef52255f095a4', + 'BECAF043B0A23D843194BA972C66DEBD' + ), + ( '234a3463c1264ac6', + '1a47cb4933', + 'd851d5bae0', + '3a59f238a23e39199dc9266626c40f80', + '01f74ad64077f2e704c0f60ada3dd523', + '70C3DB4F0D26368400A10ED05D2BFF5E' + ), + ( + '33cce2eabff5a79d', + '481c9e39b1', + '632a9d131a', + 'd4c168a4225d8e1ff755939974a7bede', + 'd07cf6cbb7f313bdde66b727afd3c5e8', + '8408DFFF3C1A2B1292DC199E46B7D617' + ), + ( + 
'aeb96eaebe2970e9', + '40d0c07da5e4', + '071dfe16c675', + 'cb0677e536f73afe6a14b74ee49844dd', + '35b6d0580005bbc12b0587124557d2c2', + 'FDB6B06676EEDC5C61D74276E1F8E816' + ), + ( + 'd4482d1ca78dce0f', + '4de3b35c3fc039245bd1fb7d', + '835bb4f15d743e350e728414', + 'abb8644fd6ccb86947c5e10590210a4f', + 'bd8e6e11475e60b268784c38c62feb22', + '6EAC5C93072D8E8513F750935E46DA1B' + ), + ( + '65d2017990d62528', + '8b0a79306c9ce7ed99dae4f87f8dd61636', + '02083e3979da014812f59f11d52630da30', + '137327d10649b0aa6e1c181db617d7f2', + '7c77d6e813bed5ac98baa417477a2e7d', + '1A8C98DCD73D38393B2BF1569DEEFC19' + ), + ( + '54b9f04e6a09189a', + '1bda122bce8a8dbaf1877d962b8592dd2d56', + '2ec47b2c4954a489afc7ba4897edcdae8cc3', + '3b60450599bd02c96382902aef7f832a', + '5fff20cafab119ca2fc73549e20f5b0d', + 'DDE59B97D722156D4D9AFF2BC7559826' + ), + ( + '899a175897561d7e', + '6cf36720872b8513f6eab1a8a44438d5ef11', + '0de18fd0fdd91e7af19f1d8ee8733938b1e8', + 'e7f6d2231618102fdb7fe55ff1991700', + 'a4a4782bcffd3ec5e7ef6d8c34a56123', + 'B781FCF2F75FA5A8DE97A9CA48E522EC' + ), + ( + '126735fcc320d25a', + 'ca40d7446e545ffaed3bd12a740a659ffbbb3ceab7', + 'cb8920f87a6c75cff39627b56e3ed197c552d295a7', + 'cfc46afc253b4652b1af3795b124ab6e', + '8395fcf1e95bebd697bd010bc766aac3', + '22E7ADD93CFC6393C57EC0B3C17D6B44' + ), + ] + + test_vectors = [[unhexlify(x) for x in tv] for tv in test_vectors_hex] + + def runTest(self): + for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: + # Encrypt + cipher = AES.new(key, AES.MODE_EAX, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + ct2, mac2 = cipher.encrypt_and_digest(pt) + self.assertEqual(ct, ct2) + self.assertEqual(mac, mac2) + + # Decrypt + cipher = AES.new(key, AES.MODE_EAX, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" + + def setUp(self): + + def filter_tag(group): + return group['tagSize'] // 8 + + self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + "aes_eax_test.json", + "Wycheproof EAX", + group_tag={'tag_size': filter_tag}) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt EAX Test #" + str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_EAX, tv.iv, mac_len=tv.tag_size) + except ValueError as e: + assert len(tv.iv) == 0 and "Nonce cannot be empty" in str(e) + return + + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(ct, tv.ct) + self.assertEqual(tag, tv.tag) + self.warn(tv) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt EAX Test #" + str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_EAX, tv.iv, mac_len=tv.tag_size) + except ValueError as e: + assert len(tv.iv) == 0 and "Nonce cannot be empty" in str(e) + return + + cipher.update(tv.aad) + try: + pt = cipher.decrypt_and_verify(tv.ct, tv.tag) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + self.warn(tv) + + def test_corrupt_decrypt(self, tv): + self._id = "Wycheproof Corrupt Decrypt EAX Test #" + str(tv.id) + if len(tv.iv) == 0 or len(tv.ct) < 1: + return + cipher = 
AES.new(tv.key, AES.MODE_EAX, tv.iv, mac_len=tv.tag_size) + cipher.update(tv.aad) + ct_corrupt = strxor(tv.ct, b"\x00" * (len(tv.ct) - 1) + b"\x01") + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct_corrupt, tv.tag) + + def runTest(self): + + for tv in self.tv: + self.test_encrypt(tv) + self.test_decrypt(tv) + self.test_corrupt_decrypt(tv) + + +class TestOtherCiphers(unittest.TestCase): + + @classmethod + def create_test(cls, name, factory, key_size): + + def test_template(self, factory=factory, key_size=key_size): + cipher = factory.new(get_tag_random("cipher", key_size), + factory.MODE_EAX, + nonce=b"nonce") + ct, mac = cipher.encrypt_and_digest(b"plaintext") + + cipher = factory.new(get_tag_random("cipher", key_size), + factory.MODE_EAX, + nonce=b"nonce") + pt2 = cipher.decrypt_and_verify(ct, mac) + + self.assertEqual(b"plaintext", pt2) + + setattr(cls, "test_" + name, test_template) + + +from Crypto.Cipher import DES, DES3, ARC2, CAST, Blowfish + +TestOtherCiphers.create_test("DES_" + str(DES.key_size), DES, DES.key_size) +for ks in DES3.key_size: + TestOtherCiphers.create_test("DES3_" + str(ks), DES3, ks) +for ks in ARC2.key_size: + TestOtherCiphers.create_test("ARC2_" + str(ks), ARC2, ks) +for ks in CAST.key_size: + TestOtherCiphers.create_test("CAST_" + str(ks), CAST, ks) +for ks in Blowfish.key_size: + TestOtherCiphers.create_test("Blowfish_" + str(ks), Blowfish, ks) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(EaxTests) + tests += list_test_cases(EaxFSMTests) + tests += [ TestVectorsPaper() ] + tests += [ TestVectorsWycheproof(wycheproof_warnings) ] + tests += list_test_cases(TestOtherCiphers) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_GCM.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_GCM.py new file mode 100644 index 00000000..dd8da2fd --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_GCM.py @@ -0,0 +1,951 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from __future__ import print_function + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors, load_test_vectors_wycheproof + +from Crypto.Util.py3compat import tobytes, bchr +from Crypto.Cipher import AES +from Crypto.Hash import SHAKE128, SHA256 + +from Crypto.Util.strxor import strxor + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class GcmTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # Nonce is optional (a random one will be created) + AES.new(self.key_128, AES.MODE_GCM) + + cipher = AES.new(self.key_128, AES.MODE_GCM, self.nonce_96) + ct = cipher.encrypt(self.data) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertEqual(ct, cipher.encrypt(self.data)) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_GCM, + nonce=u'test12345678') + + def test_nonce_length(self): + # nonce can be of any length (but not empty) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_GCM, + nonce=b"") + + for x in range(1, 128): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=bchr(1) * x) + cipher.encrypt(bchr(1)) + + def test_block_size_128(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_nonce_attribute(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, a 16 bytes long nonce is randomly generated + nonce1 = AES.new(self.key_128, AES.MODE_GCM).nonce + nonce2 = AES.new(self.key_128, AES.MODE_GCM).nonce + self.assertEqual(len(nonce1), 16) + self.assertNotEqual(nonce1, nonce2) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_GCM, + self.nonce_96, 7) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_GCM, + nonce=self.nonce_96, unknown=7) + + # But some are only known by the base cipher + # (e.g.
use_aesni consumed by the AES module) + AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96, + use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_mac_len(self): + # Invalid MAC length + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_GCM, + nonce=self.nonce_96, mac_len=3) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_GCM, + nonce=self.nonce_96, mac_len=16+1) + + # Valid MAC length + for mac_len in range(5, 16 + 1): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96, + mac_len=mac_len) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), mac_len) + + # Default MAC length + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Crypto.Util.strxor import strxor_c + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_message_chunks(self): + # Validate that both associated data and plaintext/ciphertext + # can be broken up in chunks of arbitrary length + + auth_data = get_tag_random("authenticated data", 127) + plaintext = get_tag_random("plaintext", 127) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.update(auth_data) + ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) + + def break_up(data, chunk_length): + return [data[i:i+chunk_length] for i in range(0, len(data), + chunk_length)] + + # Encryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + pt2 = b"" + for chunk in break_up(ciphertext, chunk_length): + pt2 += cipher.decrypt(chunk) + self.assertEqual(plaintext, pt2) + cipher.verify(ref_mac) + + # Decryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + ct2 = b"" + for chunk in break_up(plaintext, chunk_length): + ct2 += cipher.encrypt(chunk) + self.assertEqual(ciphertext, 
ct2) + self.assertEqual(cipher.digest(), ref_mac) + + def test_bytearray(self): + + # Encrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + data_ba = bytearray(self.data) + + cipher1 = AES.new(self.key_128, + AES.MODE_GCM, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + tag = cipher1.digest() + + cipher2 = AES.new(key_ba, + AES.MODE_GCM, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_ba) + data_ba[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + del data_ba + + cipher4 = AES.new(key_ba, + AES.MODE_GCM, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(bytearray(ct_test), bytearray(tag_test)) + + self.assertEqual(self.data, pt_test) + + def test_memoryview(self): + + # Encrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + data_mv = memoryview(bytearray(self.data)) + + cipher1 = AES.new(self.key_128, + AES.MODE_GCM, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + tag = cipher1.digest() + + cipher2 = AES.new(key_mv, + AES.MODE_GCM, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_mv) + data_mv[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + del data_mv + + cipher4 = AES.new(key_mv, + AES.MODE_GCM, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(memoryview(ct_test), memoryview(tag_test)) + + self.assertEqual(self.data, pt_test) + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + tag = cipher.digest() + + output = bytearray(128) + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + res, tag_out = cipher.encrypt_and_digest(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + self.assertEqual(tag, tag_out) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + res = cipher.decrypt_and_verify(ct, tag, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + def test_output_param_memoryview(self): + + pt = b'5' * 
128 + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + LEN_PT = 128 + + pt = b'5' * LEN_PT + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0' * LEN_PT) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0' * LEN_PT) + + shorter_output = bytearray(LEN_PT - 1) + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +class GcmFSMTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_valid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + # Verify path INIT->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + # Verify path INIT->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + # Verify path INIT->UPDATE->DIGEST + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + cipher.update(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.verify(mac) + + def test_valid_full_path(self): + # Fixed authenticated data, fixed plaintext + # Verify path INIT->UPDATE->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + cipher.update(self.data) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + mac = cipher.digest() + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_encrypt_or_decrypt(self): + for method_name in "encrypt", "decrypt": + for auth_data in (None, b"333", self.data, + self.data + b"3"): + if auth_data is None: + assoc_len = None + else: + assoc_len = len(auth_data) + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + if auth_data is not None: + cipher.update(auth_data) + method = getattr(cipher, method_name) + method(self.data) + method(self.data) + method(self.data) + method(self.data) 
+ + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.update(self.data) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.update(self.data) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # encrypt_and_digest + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.update(self.data) + ct, mac = cipher.encrypt_and_digest(self.data) + + # decrypt_and_verify + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.update(self.data) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data, pt) + + def test_invalid_mixing_encrypt_decrypt(self): + # Once per method, with or without assoc. data + for method1_name, method2_name in (("encrypt", "decrypt"), + ("decrypt", "encrypt")): + for assoc_data_present in (True, False): + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + if assoc_data_present: + cipher.update(self.data) + getattr(cipher, method1_name)(self.data) + self.assertRaises(TypeError, getattr(cipher, method2_name), + self.data) + + def test_invalid_encrypt_or_update_after_digest(self): + for method_name in "encrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.encrypt(self.data) + cipher.digest() + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data) + + def test_invalid_decrypt_or_update_after_verify(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + for method_name in "decrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + +class TestVectors(unittest.TestCase): + """Class exercising the GCM test vectors found in + http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-revised-spec.pdf""" + + # List of test vectors, each made up of: + # - authenticated data + # - plaintext + # - ciphertext + # - MAC + # - AES key + # - nonce + test_vectors_hex = [ + ( + '', + '', + '', + '58e2fccefa7e3061367f1d57a4e7455a', + '00000000000000000000000000000000', + '000000000000000000000000' + ), + ( + '', + '00000000000000000000000000000000', + '0388dace60b6a392f328c2b971b2fe78', + 'ab6e47d42cec13bdf53a67b21257bddf', + '00000000000000000000000000000000', + '000000000000000000000000' + ), + ( + '', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b391aafd255', + '42831ec2217774244b7221b784d0d49ce3aa212f2c02a4e035c17e2329aca12e' + + '21d514b25466931c7d8f6a5aac84aa051ba30b396a0aac973d58e091473f5985', + '4d5c2af327cd64a62cf35abd2ba6fab4', + 'feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + 
'1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '42831ec2217774244b7221b784d0d49ce3aa212f2c02a4e035c17e2329aca12e' + + '21d514b25466931c7d8f6a5aac84aa051ba30b396a0aac973d58e091', + '5bc94fbc3221a5db94fae95ae7121a47', + 'feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '61353b4c2806934a777ff51fa22a4755699b2a714fcdc6f83766e5f97b6c7423' + + '73806900e49f24b22b097544d4896b424989b5e1ebac0f07c23f4598', + '3612d2e79e3b0785561be14aaca2fccb', + 'feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbad' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '8ce24998625615b603a033aca13fb894be9112a5c3a211a8ba262a3cca7e2ca7' + + '01e4a9a4fba43c90ccdcb281d48c7c6fd62875d2aca417034c34aee5', + '619cc5aefffe0bfa462af43c1699d050', + 'feffe9928665731c6d6a8f9467308308', + '9313225df88406e555909c5aff5269aa' + + '6a7a9538534f7da1e4c303d2a318a728c3c0c95156809539fcf0e2429a6b5254' + + '16aedbf5a0de6a57a637b39b' + ), + ( + '', + '', + '', + 'cd33b28ac773f74ba00ed1f312572435', + '000000000000000000000000000000000000000000000000', + '000000000000000000000000' + ), + ( + '', + '00000000000000000000000000000000', + '98e7247c07f0fe411c267e4384b0f600', + '2ff58d80033927ab8ef4d4587514f0fb', + '000000000000000000000000000000000000000000000000', + '000000000000000000000000' + ), + ( + '', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b391aafd255', + '3980ca0b3c00e841eb06fac4872a2757859e1ceaa6efd984628593b40ca1e19c' + + '7d773d00c144c525ac619d18c84a3f4718e2448b2fe324d9ccda2710acade256', + '9924a7c8587336bfb118024db8674a14', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '3980ca0b3c00e841eb06fac4872a2757859e1ceaa6efd984628593b40ca1e19c' + + '7d773d00c144c525ac619d18c84a3f4718e2448b2fe324d9ccda2710', + '2519498e80f1478f37ba55bd6d27618c', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '0f10f599ae14a154ed24b36e25324db8c566632ef2bbb34f8347280fc4507057' + + 'fddc29df9a471f75c66541d4d4dad1c9e93a19a58e8b473fa0f062f7', + '65dcc57fcf623a24094fcca40d3533f8', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c', + 'cafebabefacedbad' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + 'd27e88681ce3243c4830165a8fdcf9ff1de9a1d8e6b447ef6ef7b79828666e45' + + '81e79012af34ddd9e2f037589b292db3e67c036745fa22e7e9b7373b', + 'dcf566ff291c25bbb8568fc3d376a6d9', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c', + '9313225df88406e555909c5aff5269aa' + + '6a7a9538534f7da1e4c303d2a318a728c3c0c95156809539fcf0e2429a6b5254' + + '16aedbf5a0de6a57a637b39b' + ), + ( + '', + '', + '', + '530f8afbc74536b9a963b4f1c4cb738b', + 
'0000000000000000000000000000000000000000000000000000000000000000', + '000000000000000000000000' + ), + ( + '', + '00000000000000000000000000000000', + 'cea7403d4d606b6e074ec5d3baf39d18', + 'd0d1c8a799996bf0265b98b5d48ab919', + '0000000000000000000000000000000000000000000000000000000000000000', + '000000000000000000000000' + ), + ( '', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b391aafd255', + '522dc1f099567d07f47f37a32a84427d643a8cdcbfe5c0c97598a2bd2555d1aa' + + '8cb08e48590dbb3da7b08b1056828838c5f61e6393ba7a0abcc9f662898015ad', + 'b094dac5d93471bdec1a502270e3cc6c', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '522dc1f099567d07f47f37a32a84427d643a8cdcbfe5c0c97598a2bd2555d1aa' + + '8cb08e48590dbb3da7b08b1056828838c5f61e6393ba7a0abcc9f662', + '76fc6ece0f4e1768cddf8853bb2d551b', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + 'c3762df1ca787d32ae47c13bf19844cbaf1ae14d0b976afac52ff7d79bba9de0' + + 'feb582d33934a4f0954cc2363bc73f7862ac430e64abe499f47c9b1f', + '3a337dbf46a792c45e454913fe2ea8f2', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbad' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '5a8def2f0c9e53f1f75d7853659e2a20eeb2b22aafde6419a058ab4f6f746bf4' + + '0fc0c3b780f244452da3ebf1c5d82cdea2418997200ef82e44ae7e3f', + 'a44a8266ee1c8eb0c8b5d4cf5ae9f19a', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', + '9313225df88406e555909c5aff5269aa' + + '6a7a9538534f7da1e4c303d2a318a728c3c0c95156809539fcf0e2429a6b5254' + + '16aedbf5a0de6a57a637b39b' + ) + ] + + test_vectors = [[unhexlify(x) for x in tv] for tv in test_vectors_hex] + + def runTest(self): + for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: + + # Encrypt + cipher = AES.new(key, AES.MODE_GCM, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + ct2, mac2 = cipher.encrypt_and_digest(pt) + self.assertEqual(ct, ct2) + self.assertEqual(mac, mac2) + + # Decrypt + cipher = AES.new(key, AES.MODE_GCM, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + +class TestVectorsGueronKrasnov(unittest.TestCase): + """Class exercising the GCM test vectors found in + 'The fragility of AES-GCM authentication algorithm', Gueron, Krasnov + https://eprint.iacr.org/2013/157.pdf""" + + def test_1(self): + key = unhexlify("3da6c536d6295579c0959a7043efb503") + iv = unhexlify("2b926197d34e091ef722db94") + aad = unhexlify("00000000000000000000000000000000" + + "000102030405060708090a0b0c0d0e0f" + + "101112131415161718191a1b1c1d1e1f" + + "202122232425262728292a2b2c2d2e2f" + + "303132333435363738393a3b3c3d3e3f") + digest = unhexlify("69dd586555ce3fcc89663801a71d957b") + + cipher = AES.new(key, AES.MODE_GCM, iv).update(aad) + self.assertEqual(digest, cipher.digest()) + + def test_2(self): + key = 
unhexlify("843ffcf5d2b72694d19ed01d01249412") + iv = unhexlify("dbcca32ebf9b804617c3aa9e") + aad = unhexlify("00000000000000000000000000000000" + + "101112131415161718191a1b1c1d1e1f") + pt = unhexlify("000102030405060708090a0b0c0d0e0f" + + "101112131415161718191a1b1c1d1e1f" + + "202122232425262728292a2b2c2d2e2f" + + "303132333435363738393a3b3c3d3e3f" + + "404142434445464748494a4b4c4d4e4f") + ct = unhexlify("6268c6fa2a80b2d137467f092f657ac0" + + "4d89be2beaa623d61b5a868c8f03ff95" + + "d3dcee23ad2f1ab3a6c80eaf4b140eb0" + + "5de3457f0fbc111a6b43d0763aa422a3" + + "013cf1dc37fe417d1fbfc449b75d4cc5") + digest = unhexlify("3b629ccfbc1119b7319e1dce2cd6fd6d") + + cipher = AES.new(key, AES.MODE_GCM, iv).update(aad) + ct2, digest2 = cipher.encrypt_and_digest(pt) + + self.assertEqual(ct, ct2) + self.assertEqual(digest, digest2) + + +class NISTTestVectorsGCM(unittest.TestCase): + + def __init__(self, a): + self.use_clmul = True + unittest.TestCase.__init__(self, a) + + +class NISTTestVectorsGCM_no_clmul(unittest.TestCase): + + def __init__(self, a): + self.use_clmul = False + unittest.TestCase.__init__(self, a) + + +test_vectors_nist = load_test_vectors( + ("Cipher", "AES"), + "gcmDecrypt128.rsp", + "GCM decrypt", + {"count": lambda x: int(x)}) or [] + +test_vectors_nist += load_test_vectors( + ("Cipher", "AES"), + "gcmEncryptExtIV128.rsp", + "GCM encrypt", + {"count": lambda x: int(x)}) or [] + +for idx, tv in enumerate(test_vectors_nist): + + # The test vector file contains some directive lines + if isinstance(tv, str): + continue + + def single_test(self, tv=tv): + + self.description = tv.desc + cipher = AES.new(tv.key, AES.MODE_GCM, nonce=tv.iv, + mac_len=len(tv.tag), use_clmul=self.use_clmul) + cipher.update(tv.aad) + if "FAIL" in tv.others: + self.assertRaises(ValueError, cipher.decrypt_and_verify, + tv.ct, tv.tag) + else: + pt = cipher.decrypt_and_verify(tv.ct, tv.tag) + self.assertEqual(pt, tv.pt) + + setattr(NISTTestVectorsGCM, "test_%d" % idx, single_test) + setattr(NISTTestVectorsGCM_no_clmul, "test_%d" % idx, single_test) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, **extra_params): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._extra_params = extra_params + self._id = "None" + + def setUp(self): + + def filter_tag(group): + return group['tagSize'] // 8 + + self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + "aes_gcm_test.json", + "Wycheproof GCM", + group_tag={'tag_size': filter_tag}) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt GCM Test #" + str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_GCM, tv.iv, mac_len=tv.tag_size, + **self._extra_params) + except ValueError as e: + if len(tv.iv) == 0 and "Nonce cannot be empty" in str(e): + return + raise e + + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(ct, tv.ct) + self.assertEqual(tag, tv.tag) + self.warn(tv) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt GCM Test #" + str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_GCM, tv.iv, mac_len=tv.tag_size, + **self._extra_params) + except ValueError as e: + if len(tv.iv) == 0 and "Nonce cannot be empty" in str(e): + return + raise e + + cipher.update(tv.aad) + try: + pt = 
cipher.decrypt_and_verify(tv.ct, tv.tag) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + self.warn(tv) + + def test_corrupt_decrypt(self, tv): + self._id = "Wycheproof Corrupt Decrypt GCM Test #" + str(tv.id) + if len(tv.iv) == 0 or len(tv.ct) < 1: + return + cipher = AES.new(tv.key, AES.MODE_GCM, tv.iv, mac_len=tv.tag_size, + **self._extra_params) + cipher.update(tv.aad) + ct_corrupt = strxor(tv.ct, b"\x00" * (len(tv.ct) - 1) + b"\x01") + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct_corrupt, tv.tag) + + def runTest(self): + + for tv in self.tv: + self.test_encrypt(tv) + self.test_decrypt(tv) + self.test_corrupt_decrypt(tv) + + +class TestVariableLength(unittest.TestCase): + + def __init__(self, **extra_params): + unittest.TestCase.__init__(self) + self._extra_params = extra_params + + def runTest(self): + key = b'0' * 16 + h = SHA256.new() + + # Fold every (ciphertext, tag) pair into one running SHA256 digest, + # so that 160 different message lengths can be checked against a + # single known answer below + for length in range(160): + nonce = '{0:04d}'.format(length).encode('utf-8') + data = bchr(length) * length + cipher = AES.new(key, AES.MODE_GCM, nonce=nonce, **self._extra_params) + ct, tag = cipher.encrypt_and_digest(data) + h.update(ct) + h.update(tag) + + self.assertEqual(h.hexdigest(), "7b7eb1ffbe67a2e53a912067c0ec8e62ebc7ce4d83490ea7426941349811bdf4") + + +def get_tests(config={}): + from Crypto.Util import _cpu_features + + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(GcmTests) + tests += list_test_cases(GcmFSMTests) + tests += [TestVectors()] + tests += [TestVectorsWycheproof(wycheproof_warnings)] + tests += list_test_cases(TestVectorsGueronKrasnov) + tests += [TestVariableLength()] + if config.get('slow_tests'): + tests += list_test_cases(NISTTestVectorsGCM) + + if _cpu_features.have_clmul(): + tests += [TestVectorsWycheproof(wycheproof_warnings, use_clmul=False)] + tests += [TestVariableLength(use_clmul=False)] + if config.get('slow_tests'): + tests += list_test_cases(NISTTestVectorsGCM_no_clmul) + else: + print("Skipping test of PCLMULQDQ in AES GCM") + + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_OCB.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_OCB.py new file mode 100644 index 00000000..3a891220 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_OCB.py @@ -0,0 +1,742 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import os +import re +import unittest +from binascii import hexlify, unhexlify + +from Crypto.Util.py3compat import b, tobytes, bchr +from Crypto.Util.strxor import strxor_c +from Crypto.Util.number import long_to_bytes +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Cipher import AES +from Crypto.Hash import SHAKE128 + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class OcbTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct, mac = cipher.encrypt_and_digest(pt) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # Nonce is optional + AES.new(self.key_128, AES.MODE_OCB) + + cipher = AES.new(self.key_128, AES.MODE_OCB, self.nonce_96) + ct = cipher.encrypt(self.data) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertEqual(ct, cipher.encrypt(self.data)) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_OCB, + nonce=u'test12345678') + + def test_nonce_length(self): + # nonce cannot be empty + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, + nonce=b("")) + + # nonce can be up to 15 bytes long + for length in range(1, 16): + AES.new(self.key_128, AES.MODE_OCB, nonce=self.data[:length]) + + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, + nonce=self.data) + + def test_block_size_128(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, AES.block_size) + + # By default, a 15 bytes long nonce is randomly generated + nonce1 = AES.new(self.key_128, AES.MODE_OCB).nonce + nonce2 = AES.new(self.key_128, AES.MODE_OCB).nonce + self.assertEqual(len(nonce1), 15) + self.assertNotEqual(nonce1, nonce2) + + def test_nonce_attribute(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, a 15 bytes long nonce is randomly generated + nonce1 = AES.new(self.key_128, AES.MODE_OCB).nonce + nonce2 = AES.new(self.key_128, AES.MODE_OCB).nonce + self.assertEqual(len(nonce1), 15) + self.assertNotEqual(nonce1, nonce2) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_OCB, + self.nonce_96, 7) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_OCB, + nonce=self.nonce_96, unknown=7) + + # But some are only known by the base cipher + # (e.g. 
use_aesni consumed by the AES module) + AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96, + use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + result = getattr(cipher, func)(b("")) + self.assertEqual(result, b("")) + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.encrypt(b("xyz")) + self.assertRaises(TypeError, cipher.decrypt, b("xyz")) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.decrypt(b("xyz")) + self.assertRaises(TypeError, cipher.encrypt, b("xyz")) + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_mac_len(self): + # Invalid MAC length + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, + nonce=self.nonce_96, mac_len=7) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, + nonce=self.nonce_96, mac_len=16+1) + + # Valid MAC length + for mac_len in range(8, 16 + 1): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96, + mac_len=mac_len) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), mac_len) + + # Default MAC length + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Crypto.Util.strxor import strxor_c + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_message_chunks(self): + # Validate that both associated data and plaintext/ciphertext + # can be broken up in chunks of arbitrary length + + auth_data = get_tag_random("authenticated data", 127) + plaintext = get_tag_random("plaintext", 127) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.update(auth_data) + ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) + + def break_up(data, chunk_length): + return [data[i:i+chunk_length] for i in range(0, len(data), + chunk_length)] + + # Encryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + pt2 = b("") + for chunk in break_up(ciphertext, chunk_length): + pt2 += cipher.decrypt(chunk) + pt2 += cipher.decrypt() + self.assertEqual(plaintext, pt2) + cipher.verify(ref_mac) + + # Decryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + ct2 = b("") + for chunk in break_up(plaintext, chunk_length): + ct2 
+= cipher.encrypt(chunk) + ct2 += cipher.encrypt() + self.assertEqual(ciphertext, ct2) + self.assertEqual(cipher.digest(), ref_mac) + + def test_bytearray(self): + + # Encrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + data_ba = bytearray(self.data) + + cipher1 = AES.new(self.key_128, + AES.MODE_OCB, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + cipher1.encrypt() + tag = cipher1.digest() + + cipher2 = AES.new(key_ba, + AES.MODE_OCB, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_ba) + cipher2.encrypt() + data_ba[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + del data_ba + + cipher4 = AES.new(key_ba, + AES.MODE_OCB, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(bytearray(ct_test), bytearray(tag_test)) + + self.assertEqual(self.data, pt_test) + + def test_memoryview(self): + + # Encrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + data_mv = memoryview(bytearray(self.data)) + + cipher1 = AES.new(self.key_128, + AES.MODE_OCB, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + cipher1.encrypt() + tag = cipher1.digest() + + cipher2 = AES.new(key_mv, + AES.MODE_OCB, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_mv) + cipher2.encrypt() + data_mv[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + del data_mv + + cipher4 = AES.new(key_mv, + AES.MODE_OCB, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(memoryview(ct_test), memoryview(tag_test)) + + self.assertEqual(self.data, pt_test) + + +class OcbFSMTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_valid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + # Verify path INIT->ENCRYPT->ENCRYPT(NONE)->DIGEST + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + ct += cipher.encrypt() + mac = cipher.digest() + + # Verify path INIT->DECRYPT->DECRYPT(NONCE)->VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.decrypt() + cipher.verify(mac) + + def test_invalid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + # Verify path INIT->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_OCB, 
+ nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + self.assertRaises(TypeError, cipher.digest) + + # Verify path INIT->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.decrypt(ct) + self.assertRaises(TypeError, cipher.verify) + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + # Verify path INIT->UPDATE->DIGEST + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.verify(mac) + + def test_valid_full_path(self): + # Fixed authenticated data, fixed plaintext + # Verify path INIT->UPDATE->ENCRYPT->ENCRYPT(NONE)->DIGEST + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + ct = cipher.encrypt(self.data) + ct += cipher.encrypt() + mac = cipher.digest() + + # Verify path INIT->UPDATE->DECRYPT->DECRYPT(NONE)->VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.decrypt(ct) + cipher.decrypt() + cipher.verify(mac) + + def test_invalid_encrypt_after_final(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.encrypt(self.data) + cipher.encrypt() + self.assertRaises(TypeError, cipher.encrypt, self.data) + + def test_invalid_decrypt_after_final(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.decrypt(self.data) + cipher.decrypt() + self.assertRaises(TypeError, cipher.decrypt, self.data) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + mac = cipher.digest() + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_encrypt_or_decrypt(self): + for method_name in "encrypt", "decrypt": + for auth_data in (None, b("333"), self.data, + self.data + b("3")): + if auth_data is None: + assoc_len = None + else: + assoc_len = len(auth_data) + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + if auth_data is not None: + cipher.update(auth_data) + method = getattr(cipher, method_name) + method(self.data) + method(self.data) + method(self.data) + method(self.data) + method() + + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.update(self.data) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.update(self.data) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # encrypt_and_digest + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.update(self.data) + ct, mac = cipher.encrypt_and_digest(self.data) + + # decrypt_and_verify + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.update(self.data) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data, pt) + + def 
test_invalid_mixing_encrypt_decrypt(self): + # Once per method, with or without assoc. data + for method1_name, method2_name in (("encrypt", "decrypt"), + ("decrypt", "encrypt")): + for assoc_data_present in (True, False): + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + if assoc_data_present: + cipher.update(self.data) + getattr(cipher, method1_name)(self.data) + self.assertRaises(TypeError, getattr(cipher, method2_name), + self.data) + + def test_invalid_encrypt_or_update_after_digest(self): + for method_name in "encrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.encrypt(self.data) + cipher.encrypt() + cipher.digest() + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data) + + def test_invalid_decrypt_or_update_after_verify(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + ct += cipher.encrypt() + mac = cipher.digest() + + for method_name in "decrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.decrypt() + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + +class OcbRfc7253Test(unittest.TestCase): + + # Tuple with + # - nonce + # - authenticated data + # - plaintext + # - ciphertext and 16 byte MAC tag + tv1_key = "000102030405060708090A0B0C0D0E0F" + tv1 = ( + ( + "BBAA99887766554433221100", + "", + "", + "785407BFFFC8AD9EDCC5520AC9111EE6" + ), + ( + "BBAA99887766554433221101", + "0001020304050607", + "0001020304050607", + "6820B3657B6F615A5725BDA0D3B4EB3A257C9AF1F8F03009" + ), + ( + "BBAA99887766554433221102", + "0001020304050607", + "", + "81017F8203F081277152FADE694A0A00" + ), + ( + "BBAA99887766554433221103", + "", + "0001020304050607", + "45DD69F8F5AAE72414054CD1F35D82760B2CD00D2F99BFA9" + ), + ( + "BBAA99887766554433221104", + "000102030405060708090A0B0C0D0E0F", + "000102030405060708090A0B0C0D0E0F", + "571D535B60B277188BE5147170A9A22C3AD7A4FF3835B8C5" + "701C1CCEC8FC3358" + ), + ( + "BBAA99887766554433221105", + "000102030405060708090A0B0C0D0E0F", + "", + "8CF761B6902EF764462AD86498CA6B97" + ), + ( + "BBAA99887766554433221106", + "", + "000102030405060708090A0B0C0D0E0F", + "5CE88EC2E0692706A915C00AEB8B2396F40E1C743F52436B" + "DF06D8FA1ECA343D" + ), + ( + "BBAA99887766554433221107", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "1CA2207308C87C010756104D8840CE1952F09673A448A122" + "C92C62241051F57356D7F3C90BB0E07F" + ), + ( + "BBAA99887766554433221108", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "", + "6DC225A071FC1B9F7C69F93B0F1E10DE" + ), + ( + "BBAA99887766554433221109", + "", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "221BD0DE7FA6FE993ECCD769460A0AF2D6CDED0C395B1C3C" + "E725F32494B9F914D85C0B1EB38357FF" + ), + ( + "BBAA9988776655443322110A", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F", + "BD6F6C496201C69296C11EFD138A467ABD3C707924B964DE" + "AFFC40319AF5A48540FBBA186C5553C68AD9F592A79A4240" + ), + ( + "BBAA9988776655443322110B", + 
"000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F", + "", + "FE80690BEE8A485D11F32965BC9D2A32" + ), + ( + "BBAA9988776655443322110C", + "", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F", + "2942BFC773BDA23CABC6ACFD9BFD5835BD300F0973792EF4" + "6040C53F1432BCDFB5E1DDE3BC18A5F840B52E653444D5DF" + ), + ( + "BBAA9988776655443322110D", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "D5CA91748410C1751FF8A2F618255B68A0A12E093FF45460" + "6E59F9C1D0DDC54B65E8628E568BAD7AED07BA06A4A69483" + "A7035490C5769E60" + ), + ( + "BBAA9988776655443322110E", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "", + "C5CD9D1850C141E358649994EE701B68" + ), + ( + "BBAA9988776655443322110F", + "", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "4412923493C57D5DE0D700F753CCE0D1D2D95060122E9F15" + "A5DDBFC5787E50B5CC55EE507BCB084E479AD363AC366B95" + "A98CA5F3000B1479" + ) + ) + + # Tuple with + # - key + # - nonce + # - authenticated data + # - plaintext + # - ciphertext and 12 byte MAC tag + tv2 = ( + "0F0E0D0C0B0A09080706050403020100", + "BBAA9988776655443322110D", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "1792A4E31E0755FB03E31B22116E6C2DDF9EFD6E33D536F1" + "A0124B0A55BAE884ED93481529C76B6AD0C515F4D1CDD4FD" + "AC4F02AA" + ) + + # Tuple with + # - key length + # - MAC tag length + # - Expected output + tv3 = ( + (128, 128, "67E944D23256C5E0B6C61FA22FDF1EA2"), + (192, 128, "F673F2C3E7174AAE7BAE986CA9F29E17"), + (256, 128, "D90EB8E9C977C88B79DD793D7FFA161C"), + (128, 96, "77A3D8E73589158D25D01209"), + (192, 96, "05D56EAD2752C86BE6932C5E"), + (256, 96, "5458359AC23B0CBA9E6330DD"), + (128, 64, "192C9B7BD90BA06A"), + (192, 64, "0066BC6E0EF34E24"), + (256, 64, "7D4EA5D445501CBE"), + ) + + def test1(self): + key = unhexlify(b(self.tv1_key)) + for tv in self.tv1: + nonce, aad, pt, ct = [ unhexlify(b(x)) for x in tv ] + ct, mac_tag = ct[:-16], ct[-16:] + + cipher = AES.new(key, AES.MODE_OCB, nonce=nonce) + cipher.update(aad) + ct2 = cipher.encrypt(pt) + cipher.encrypt() + self.assertEqual(ct, ct2) + self.assertEqual(mac_tag, cipher.digest()) + + cipher = AES.new(key, AES.MODE_OCB, nonce=nonce) + cipher.update(aad) + pt2 = cipher.decrypt(ct) + cipher.decrypt() + self.assertEqual(pt, pt2) + cipher.verify(mac_tag) + + def test2(self): + + key, nonce, aad, pt, ct = [ unhexlify(b(x)) for x in self.tv2 ] + ct, mac_tag = ct[:-12], ct[-12:] + + cipher = AES.new(key, AES.MODE_OCB, nonce=nonce, mac_len=12) + cipher.update(aad) + ct2 = cipher.encrypt(pt) + cipher.encrypt() + self.assertEqual(ct, ct2) + self.assertEqual(mac_tag, cipher.digest()) + + cipher = AES.new(key, AES.MODE_OCB, nonce=nonce, mac_len=12) + cipher.update(aad) + pt2 = cipher.decrypt(ct) + cipher.decrypt() + self.assertEqual(pt, pt2) + cipher.verify(mac_tag) + + def test3(self): + + for keylen, taglen, result in self.tv3: + + key = bchr(0) * (keylen // 8 - 1) + bchr(taglen) + C = b("") + + for i in range(128): + S = bchr(0) * i + + N = long_to_bytes(3 * i + 1, 12) + cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) + cipher.update(S) + C += cipher.encrypt(S) + cipher.encrypt() + cipher.digest() + + N = long_to_bytes(3 * i + 2, 12) + cipher = 
AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) + C += cipher.encrypt(S) + cipher.encrypt() + cipher.digest() + + N = long_to_bytes(3 * i + 3, 12) + cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) + cipher.update(S) + C += cipher.encrypt() + cipher.digest() + + N = long_to_bytes(385, 12) + cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) + cipher.update(C) + result2 = cipher.encrypt() + cipher.digest() + self.assertEqual(unhexlify(b(result)), result2) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(OcbTests) + tests += list_test_cases(OcbFSMTests) + tests += list_test_cases(OcbRfc7253Test) + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_OFB.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_OFB.py new file mode 100644 index 00000000..ec145ada --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_OFB.py @@ -0,0 +1,238 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Util.py3compat import tobytes +from Crypto.Cipher import AES, DES3, DES +from Crypto.Hash import SHAKE128 +from Crypto.SelfTest.loader import load_test_vectors_wycheproof + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + +from Crypto.SelfTest.Cipher.test_CBC import BlockChainingTests + +class OfbTests(BlockChainingTests): + + aes_mode = AES.MODE_OFB + des3_mode = DES3.MODE_OFB + + # Redefine test_unaligned_data_128/64: OFB is a stream mode, so + # encrypting in unaligned chunks must give the same result as + # encrypting the whole buffer in one call + + def test_unaligned_data_128(self): + plaintexts = [ b"7777777" ] * 100 + + cipher = AES.new(self.key_128, AES.MODE_OFB, self.iv_128) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_OFB, self.iv_128) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_unaligned_data_64(self): + plaintexts = [ b"7777777" ] * 100 + cipher = DES3.new(self.key_192, DES3.MODE_OFB, self.iv_64) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, DES3.MODE_OFB, self.iv_64) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + +from Crypto.SelfTest.Cipher.test_CBC import NistBlockChainingVectors + +class NistOfbVectors(NistBlockChainingVectors): + aes_mode = AES.MODE_OFB + des_mode = DES.MODE_OFB + des3_mode = DES3.MODE_OFB + + +# Create one test method per file +nist_aes_kat_mmt_files = ( + # KAT + "OFBGFSbox128.rsp", + "OFBGFSbox192.rsp", + "OFBGFSbox256.rsp", + "OFBKeySbox128.rsp", + "OFBKeySbox192.rsp", + "OFBKeySbox256.rsp", + "OFBVarKey128.rsp", + "OFBVarKey192.rsp", + "OFBVarKey256.rsp", + "OFBVarTxt128.rsp", + "OFBVarTxt192.rsp", + "OFBVarTxt256.rsp", + # MMT + "OFBMMT128.rsp", + "OFBMMT192.rsp", + "OFBMMT256.rsp", + ) +nist_aes_mct_files = ( + "OFBMCT128.rsp", + "OFBMCT192.rsp", + "OFBMCT256.rsp", + ) + +for file_name in nist_aes_kat_mmt_files: + def new_func(self, file_name=file_name): + self._do_kat_aes_test(file_name) + setattr(NistOfbVectors, "test_AES_" + file_name, new_func) + +for file_name in nist_aes_mct_files: + def new_func(self, file_name=file_name): + self._do_mct_aes_test(file_name) + setattr(NistOfbVectors, "test_AES_" + file_name, new_func) +del file_name, new_func + +nist_tdes_files = ( + "TOFBMMT2.rsp", # 2TDES + "TOFBMMT3.rsp", # 3TDES + "TOFBinvperm.rsp", # Single DES + "TOFBpermop.rsp", + "TOFBsubtab.rsp", + "TOFBvarkey.rsp", + "TOFBvartext.rsp", + ) + +for file_name in nist_tdes_files: + def new_func(self, file_name=file_name): + self._do_tdes_test(file_name) + setattr(NistOfbVectors, "test_TDES_" + file_name, new_func) + +# END OF NIST OFB TEST VECTORS + + +class SP800TestVectors(unittest.TestCase): + """Class exercising the OFB test
vectors found in Section F.4 + of NIST SP 800-38A""" + + def test_aes_128(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '3b3fd92eb72dad20333449f8e83cfb4a' +\ + '7789508d16918f03f53c52dac54ed825' +\ + '9740051e9c5fecf64344f7a82260edcc' +\ + '304c6528f659c77866a510d9c1d6ae5e' + key = '2b7e151628aed2a6abf7158809cf4f3c' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext[:-8]), ciphertext[:-8]) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext[:-8]), plaintext[:-8]) + + def test_aes_192(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = 'cdc80d6fddf18cab34c25909c99a4174' +\ + 'fcc28b8d4c63837c09e81700c1100401' +\ + '8d9a9aeac0f6596f559c6d4daf59a5f2' +\ + '6d9f200857ca6c3e9cac524bd9acc92a' + key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext[:-8]), ciphertext[:-8]) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext[:-8]), plaintext[:-8]) + + def test_aes_256(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = 'dc7e84bfda79164b7ecd8486985d3860' +\ + '4febdc6740d20b3ac88f6ad82a4fb08d' +\ + '71ab47a086e86eedf39d1c5bba97c408' +\ + '0126141d67f37be8538f5a8be740e484' + key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext[:-8]), ciphertext[:-8]) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext[:-8]), plaintext[:-8]) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(OfbTests) + if config.get('slow_tests'): + tests += list_test_cases(NistOfbVectors) + tests += list_test_cases(SP800TestVectors) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_OpenPGP.py
b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_OpenPGP.py new file mode 100644 index 00000000..e6cae670 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_OpenPGP.py @@ -0,0 +1,218 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Util.py3compat import tobytes +from Crypto.Cipher import AES, DES3, DES +from Crypto.Hash import SHAKE128 + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +from Crypto.SelfTest.Cipher.test_CBC import BlockChainingTests + +class OpenPGPTests(BlockChainingTests): + + aes_mode = AES.MODE_OPENPGP + des3_mode = DES3.MODE_OPENPGP + + # Redefine test_unaligned_data_128/64 + + key_128 = get_tag_random("key_128", 16) + key_192 = get_tag_random("key_192", 24) + iv_128 = get_tag_random("iv_128", 16) + iv_64 = get_tag_random("iv_64", 8) + data_128 = get_tag_random("data_128", 16) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + eiv, ct = ct[:18], ct[18:] + + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, eiv) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_loopback_64(self): + cipher = DES3.new(self.key_192, DES3.MODE_OPENPGP, self.iv_64) + pt = get_tag_random("plaintext", 8 * 100) + ct = cipher.encrypt(pt) + + eiv, ct = ct[:10], ct[10:] + + cipher = DES3.new(self.key_192, DES3.MODE_OPENPGP, eiv) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_IV_iv_attributes(self): + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + eiv = cipher.encrypt(b"") + self.assertEqual(cipher.iv, self.iv_128) + + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, eiv) + self.assertEqual(cipher.iv, self.iv_128) + + def test_null_encryption_decryption(self): + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + eiv = cipher.encrypt(b"") + + cipher = AES.new(self.key_128, 
AES.MODE_OPENPGP, eiv) + self.assertEqual(cipher.decrypt(b""), b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + eiv = cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, eiv) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_unaligned_data_128(self): + plaintexts = [ b"7777777" ] * 100 + + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_unaligned_data_64(self): + plaintexts = [ b"7777777" ] * 100 + + cipher = DES3.new(self.key_192, DES3.MODE_OPENPGP, self.iv_64) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, DES3.MODE_OPENPGP, self.iv_64) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + # The output= parameter tests from the base class do not apply to + # OPENPGP mode, since encrypt() prepends the encrypted IV to its output + def test_output_param(self): + pass + + def test_output_param_same_buffer(self): + pass + + def test_output_param_memoryview(self): + pass + + def test_output_param_neg(self): + pass + + +class TestVectors(unittest.TestCase): + + def test_aes(self): + # The following test vectors have been generated with gpg v1.4.0. + # The command line used was: + # + # gpg -c -z 0 --cipher-algo AES --passphrase secret_passphrase \ + # --disable-mdc --s2k-mode 0 --output ct pt + # + # As a result, the content of the file 'pt' is encrypted with a key derived + # from 'secret_passphrase' and written to file 'ct'. + # Test vectors must be extracted from 'ct', which is a collection of + # TLVs (see RFC4880 for all details): + # - the encrypted data (with the encrypted IV as prefix) is the payload + # of the TLV with tag 9 (Symmetrical Encrypted Data Packet). + # This is the ciphertext in the test vector. + # - inside the encrypted part, there is a further layer of TLVs. One must + # look for tag 11 (Literal Data Packet); in its payload, after a short + # but time-dependent header, there is the content of file 'pt'. + # In the test vector, the plaintext is the complete set of TLVs that gets + # encrypted. It is not just the content of 'pt'. + # - the key is the leftmost 16 bytes of the SHA1 digest of the password. + # The test vector contains that shortened digest. + # + # Note that encryption uses a clear IV, and decryption an encrypted IV + + plaintext = 'ac18620270744fb4f647426c61636b4361745768697465436174' + ciphertext = 'dc6b9e1f095de609765c59983db5956ae4f63aea7405389d2ebb' + key = '5baa61e4c9b93f3f0682250b6cf8331b' + iv = '3d7d3e62282add7eb203eeba5c800733' + encrypted_iv='fd934601ef49cb58b6d9aebca6056bdb96ef' + + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + key = unhexlify(key) + iv = unhexlify(iv) + encrypted_iv = unhexlify(encrypted_iv) + + cipher = AES.new(key, AES.MODE_OPENPGP, iv) + ct = cipher.encrypt(plaintext) + self.assertEqual(ct[:18], encrypted_iv) + self.assertEqual(ct[18:], ciphertext) + + cipher = AES.new(key, AES.MODE_OPENPGP, encrypted_iv) + pt = cipher.decrypt(ciphertext) + self.assertEqual(pt, plaintext) + + def test_des3(self): + # The following test vectors have been generated with gpg v1.4.0. + # The command line used was: + # gpg -c -z 0 --cipher-algo 3DES --passphrase secret_passphrase \ + # --disable-mdc --s2k-mode 0 --output ct pt + # For an explanation, see test_aes() above.
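+ # In OPENPGP mode the encrypted IV produced by encrypt() is + # block_size + 2 bytes long (18 for AES above, 10 for DES3 here), + # which is why these tests split the ciphertext at those offsets.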
+ + plaintext = 'ac1762037074324fb53ba3596f73656d69746556616c6c6579' + ciphertext = '9979238528357b90e2e0be549cb0b2d5999b9a4a447e5c5c7d' + key = '7ade65b460f5ea9be35f9e14aa883a2048e3824aa616c0b2' + iv='cd47e2afb8b7e4b0' + encrypted_iv='6a7eef0b58050e8b904a' + + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + key = unhexlify(key) + iv = unhexlify(iv) + encrypted_iv = unhexlify(encrypted_iv) + + cipher = DES3.new(key, DES3.MODE_OPENPGP, iv) + ct = cipher.encrypt(plaintext) + self.assertEqual(ct[:10], encrypted_iv) + self.assertEqual(ct[10:], ciphertext) + + cipher = DES3.new(key, DES3.MODE_OPENPGP, encrypted_iv) + pt = cipher.decrypt(ciphertext) + self.assertEqual(pt, plaintext) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(OpenPGPTests) + tests += list_test_cases(TestVectors) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_SIV.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_SIV.py new file mode 100644 index 00000000..a80ddc1e --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_SIV.py @@ -0,0 +1,552 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import json +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors_wycheproof + +from Crypto.Util.py3compat import tobytes, bchr +from Crypto.Cipher import AES +from Crypto.Hash import SHAKE128 + +from Crypto.Util.strxor import strxor + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class SivTests(unittest.TestCase): + + key_256 = get_tag_random("key_256", 32) + key_384 = get_tag_random("key_384", 48) + key_512 = get_tag_random("key_512", 64) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_loopback_128(self): + for key in self.key_256, self.key_384, self.key_512: + cipher = AES.new(key, AES.MODE_SIV, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct, mac = cipher.encrypt_and_digest(pt) + + cipher = AES.new(key, AES.MODE_SIV, nonce=self.nonce_96) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # Deterministic encryption + AES.new(self.key_256, AES.MODE_SIV) + + cipher = AES.new(self.key_256, AES.MODE_SIV, self.nonce_96) + ct1, tag1 = cipher.encrypt_and_digest(self.data) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct2, tag2 = cipher.encrypt_and_digest(self.data) + self.assertEqual(ct1 + tag1, ct2 + tag2) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_256, AES.MODE_SIV, + nonce=u'test12345678') + + def test_nonce_length(self): + # nonce can be of any length (but not empty) + self.assertRaises(ValueError, AES.new, self.key_256, AES.MODE_SIV, + nonce=b"") + + for x in range(1, 128): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=bchr(1) * x) + cipher.encrypt_and_digest(b'\x01') + + def test_block_size_128(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_nonce_attribute(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, no nonce is randomly generated + self.assertFalse(hasattr(AES.new(self.key_256, AES.MODE_SIV), "nonce")) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_256, AES.MODE_SIV, + self.nonce_96, 7) + self.assertRaises(TypeError, AES.new, self.key_256, AES.MODE_SIV, + nonce=self.nonce_96, unknown=7) + + # But some are only known by the base cipher + # (e.g. 
use_aesni consumed by the AES module) + AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96, + use_aesni=False) + + def test_encrypt_excludes_decrypt(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data) + self.assertRaises(TypeError, cipher.decrypt, self.data) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data) + self.assertRaises(TypeError, cipher.decrypt_and_verify, + self.data, self.data) + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt_and_verify, + u'test1234567890-*', b"xxxx") + + def test_mac_len(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Crypto.Util.strxor import strxor_c + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_bytearray(self): + + # Encrypt + key = bytearray(self.key_256) + nonce = bytearray(self.nonce_96) + data = bytearray(self.data) + header = bytearray(self.data) + + cipher1 = AES.new(self.key_256, + AES.MODE_SIV, + nonce=self.nonce_96) + cipher1.update(self.data) + ct, tag = cipher1.encrypt_and_digest(self.data) + + cipher2 = AES.new(key, + AES.MODE_SIV, + nonce=nonce) + key[:3] = b'\xFF\xFF\xFF' + nonce[:3] = b'\xFF\xFF\xFF' + cipher2.update(header) + header[:3] = b'\xFF\xFF\xFF' + ct_test, tag_test = cipher2.encrypt_and_digest(data) + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key = bytearray(self.key_256) + nonce = bytearray(self.nonce_96) + header = bytearray(self.data) + ct_ba = bytearray(ct) + tag_ba = bytearray(tag) + + cipher3 = AES.new(key, + AES.MODE_SIV, + nonce=nonce) + key[:3] = b'\xFF\xFF\xFF' + nonce[:3] = b'\xFF\xFF\xFF' + cipher3.update(header) + header[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt_and_verify(ct_ba, tag_ba) + + self.assertEqual(self.data, pt_test) + + def test_memoryview(self): + + # Encrypt + key = memoryview(bytearray(self.key_256)) + nonce = memoryview(bytearray(self.nonce_96)) + data = memoryview(bytearray(self.data)) + header = memoryview(bytearray(self.data)) + + cipher1 = AES.new(self.key_256, + AES.MODE_SIV, + nonce=self.nonce_96) + cipher1.update(self.data) + ct, tag = cipher1.encrypt_and_digest(self.data) + + cipher2 = AES.new(key, + AES.MODE_SIV, + nonce=nonce) + key[:3] = b'\xFF\xFF\xFF' + nonce[:3] = b'\xFF\xFF\xFF' + cipher2.update(header) + header[:3] = b'\xFF\xFF\xFF' + ct_test, tag_test= cipher2.encrypt_and_digest(data) + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key = 
memoryview(bytearray(self.key_256)) + nonce = memoryview(bytearray(self.nonce_96)) + header = memoryview(bytearray(self.data)) + ct_ba = memoryview(bytearray(ct)) + tag_ba = memoryview(bytearray(tag)) + + cipher3 = AES.new(key, + AES.MODE_SIV, + nonce=nonce) + key[:3] = b'\xFF\xFF\xFF' + nonce[:3] = b'\xFF\xFF\xFF' + cipher3.update(header) + header[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt_and_verify(ct_ba, tag_ba) + + self.assertEqual(self.data, pt_test) + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct, tag = cipher.encrypt_and_digest(pt) + + output = bytearray(128) + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + res, tag_out = cipher.encrypt_and_digest(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + self.assertEqual(tag, tag_out) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + res = cipher.decrypt_and_verify(ct, tag, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + def test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct, tag = cipher.encrypt_and_digest(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.encrypt_and_digest(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, tag, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + LEN_PT = 128 + + pt = b'5' * LEN_PT + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct, tag = cipher.encrypt_and_digest(pt) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt_and_digest, pt, output=b'0' * LEN_PT) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt_and_verify, ct, tag, output=b'0' * LEN_PT) + + shorter_output = bytearray(LEN_PT - 1) + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.encrypt_and_digest, pt, output=shorter_output) + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, tag, output=shorter_output) + + +class SivFSMTests(unittest.TestCase): + + key_256 = get_tag_random("key_256", 32) + nonce_96 = get_tag_random("nonce_96", 12) + data = get_tag_random("data", 128) + + def test_invalid_init_encrypt(self): + # Path INIT->ENCRYPT fails + cipher = AES.new(self.key_256, AES.MODE_SIV, + nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, b"xxx") + + def test_invalid_init_decrypt(self): + # Path INIT->DECRYPT fails + cipher = AES.new(self.key_256, AES.MODE_SIV, + nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, b"xxx") + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + # Verify path INIT->UPDATE->DIGEST + cipher = AES.new(self.key_256, AES.MODE_SIV, + nonce=self.nonce_96) + cipher.update(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = AES.new(self.key_256, AES.MODE_SIV, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.verify(mac) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + 
cipher.digest()
+
+    def test_valid_init_verify(self):
+        # Verify path INIT->VERIFY
+        cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96)
+        mac = cipher.digest()
+
+        cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96)
+        cipher.verify(mac)
+
+    def test_valid_multiple_digest_or_verify(self):
+        # Multiple calls to digest
+        cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96)
+        cipher.update(self.data)
+        first_mac = cipher.digest()
+        for x in range(4):
+            self.assertEqual(first_mac, cipher.digest())
+
+        # Multiple calls to verify
+        cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96)
+        cipher.update(self.data)
+        for x in range(5):
+            cipher.verify(first_mac)
+
+    def test_valid_encrypt_and_digest_decrypt_and_verify(self):
+        # encrypt_and_digest
+        cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96)
+        cipher.update(self.data)
+        ct, mac = cipher.encrypt_and_digest(self.data)
+
+        # decrypt_and_verify
+        cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96)
+        cipher.update(self.data)
+        pt = cipher.decrypt_and_verify(ct, mac)
+        self.assertEqual(self.data, pt)
+
+    def test_invalid_multiple_encrypt_and_digest(self):
+        cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96)
+        ct, tag = cipher.encrypt_and_digest(self.data)
+        self.assertRaises(TypeError, cipher.encrypt_and_digest, b'')
+
+    def test_invalid_multiple_decrypt_and_verify(self):
+        cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96)
+        ct, tag = cipher.encrypt_and_digest(self.data)
+
+        cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96)
+        cipher.decrypt_and_verify(ct, tag)
+        self.assertRaises(TypeError, cipher.decrypt_and_verify, ct, tag)
+
+
+def transform(tv):
+    new_tv = [[unhexlify(x) for x in tv[0].split("-")]]
+    new_tv += [unhexlify(x) for x in tv[1:5]]
+    if tv[5]:
+        nonce = unhexlify(tv[5])
+    else:
+        nonce = None
+    new_tv += [nonce]
+    return new_tv
+
+
+class TestVectors(unittest.TestCase):
+    """Class exercising the SIV test vectors found in RFC5297"""
+
+    # This is a list of tuples with 6 items:
+    #
+    #  1. Header (the associated data)
+    #  2. Plaintext
+    #  3. Ciphertext
+    #  4. MAC tag
+    #  5. AES key
+    #  6. Nonce (or None, for deterministic, nonce-less encryption)
+    #
+    #  A "Header" is a dash ('-') separated sequence of components.
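+    #
+    #  After transform() above, each entry becomes a list:
+    #    [[header_component_1, header_component_2, ...],   # item 1, unhexlified
+    #     plaintext, ciphertext, mac, key,                 # items 2-5, unhexlified
+    #     nonce]                                           # item 6, or None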
+ # + test_vectors_hex = [ + ( + '101112131415161718191a1b1c1d1e1f2021222324252627', + '112233445566778899aabbccddee', + '40c02b9690c4dc04daef7f6afe5c', + '85632d07c6e8f37f950acd320a2ecc93', + 'fffefdfcfbfaf9f8f7f6f5f4f3f2f1f0f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff', + None + ), + ( + '00112233445566778899aabbccddeeffdeaddadadeaddadaffeeddccbbaa9988' + + '7766554433221100-102030405060708090a0', + '7468697320697320736f6d6520706c61696e7465787420746f20656e63727970' + + '74207573696e67205349562d414553', + 'cb900f2fddbe404326601965c889bf17dba77ceb094fa663b7a3f748ba8af829' + + 'ea64ad544a272e9c485b62a3fd5c0d', + '7bdb6e3b432667eb06f4d14bff2fbd0f', + '7f7e7d7c7b7a79787776757473727170404142434445464748494a4b4c4d4e4f', + '09f911029d74e35bd84156c5635688c0' + ), + ] + + test_vectors = [ transform(tv) for tv in test_vectors_hex ] + + def runTest(self): + for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: + + # Encrypt + cipher = AES.new(key, AES.MODE_SIV, nonce=nonce) + for x in assoc_data: + cipher.update(x) + ct2, mac2 = cipher.encrypt_and_digest(pt) + self.assertEqual(ct, ct2) + self.assertEqual(mac, mac2) + + # Decrypt + cipher = AES.new(key, AES.MODE_SIV, nonce=nonce) + for x in assoc_data: + cipher.update(x) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self): + unittest.TestCase.__init__(self) + self._id = "None" + + def setUp(self): + self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + "aes_siv_cmac_test.json", + "Wycheproof AES SIV") + + def shortDescription(self): + return self._id + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt AES-SIV Test #" + str(tv.id) + + cipher = AES.new(tv.key, AES.MODE_SIV) + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(tag + ct, tv.ct) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt AES_SIV Test #" + str(tv.id) + + cipher = AES.new(tv.key, AES.MODE_SIV) + cipher.update(tv.aad) + try: + pt = cipher.decrypt_and_verify(tv.ct[16:], tv.ct[:16]) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + + def runTest(self): + + for tv in self.tv: + self.test_encrypt(tv) + self.test_decrypt(tv) + + +class TestVectorsWycheproof2(unittest.TestCase): + + def __init__(self): + unittest.TestCase.__init__(self) + self._id = "None" + + def setUp(self): + self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + "aead_aes_siv_cmac_test.json", + "Wycheproof AEAD SIV") + + def shortDescription(self): + return self._id + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt AEAD-AES-SIV Test #" + str(tv.id) + + cipher = AES.new(tv.key, AES.MODE_SIV, nonce=tv.iv) + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(ct, tv.ct) + self.assertEqual(tag, tv.tag) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt AEAD-AES-SIV Test #" + str(tv.id) + + cipher = AES.new(tv.key, AES.MODE_SIV, nonce=tv.iv) + cipher.update(tv.aad) + try: + pt = cipher.decrypt_and_verify(tv.ct, tv.tag) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + + def runTest(self): + + for tv in self.tv: + self.test_encrypt(tv) + self.test_decrypt(tv) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(SivTests) + tests += list_test_cases(SivFSMTests) + tests += [ 
TestVectors() ] + tests += [ TestVectorsWycheproof() ] + tests += [ TestVectorsWycheproof2() ] + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_Salsa20.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_Salsa20.py new file mode 100644 index 00000000..a710462e --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_Salsa20.py @@ -0,0 +1,367 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/Salsa20.py: Self-test for the Salsa20 stream cipher +# +# Written in 2013 by Fabrizio Tarizzo +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Cipher.Salsa20""" + +import unittest + +from Crypto.Util.py3compat import bchr + +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Cipher import Salsa20 + +from .common import make_stream_tests + +# This is a list of (plaintext, ciphertext, key[, description[, params]]) +# tuples. 
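+#
+# All fields are hex-encoded strings. Every plaintext below is a run of
+# zero bytes, so each expected ciphertext is also the raw Salsa20 keystream
+# for that key/IV pair. The optional params dictionary supplies extra
+# material for the test, here the hex-encoded 8-byte IV.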
+test_data = [ + # Test vectors are taken from + # http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/verified.test-vectors + ( '00' * 512, + '4dfa5e481da23ea09a31022050859936da52fcee218005164f267cb65f5cfd7f' + + '2b4f97e0ff16924a52df269515110a07f9e460bc65ef95da58f740b7d1dbb0aa' + + 'd64cec189c7eb8c6bbf3d7376c80a481d43e628701f6a27afb9fe23919f24114' + + '8db44f70d7063efcc3dd55a0893a613c3c6fe1c127bd6f59910589293bb6ef9e' + + 'e24819066dee1a64f49b0bbad5988635272b169af861f85df881939f29ada6fd' + + '0241410e8d332ae4798d929434a2630de451ec4e0169694cbaa7ebb121ea6a2b' + + 'da9c1581f429e0a00f7d67e23b730676783b262e8eb43a25f55fb90b3e753aef' + + '8c6713ec66c51881111593ccb3e8cb8f8de124080501eeeb389c4bcb6977cf95' + + '7d5789631eb4554400e1e025935dfa7b3e9039d61bdc58a8697d36815bf1985c' + + 'efdf7ae112e5bb81e37ecf0616ce7147fc08a93a367e08631f23c03b00a8da2f' + + 'aa5024e5c8d30aca43fc2d5082067b21b234bc741d68fb292c6012c3764ccee3' + + '1e364a5403e00cfee338a21a01e7d3cefd5a770ca0ab48c435ea6116435f7ad8' + + '30b217b49f978a68e207ed9f462af7fb195b2115fe8f24f152e4ddc32202d6f2' + + 'b52fafbcfbc202d8a259a611e901d3f62d065eb13f09bbc45cd45119b843efaa' + + 'b375703739daced4dd4059fd71c3c47fc2f9939670fad4a46066adcc6a564578' + + '3308b90ffb72be04a6b147cbe38cc0c3b9267c296a92a7c69873f9f263be9703', + '80000000000000000000000000000000', + '128 bits key, set 1, vector 0', + dict (iv='00'*8)), + + ( '00' * 512, + 'e3be8fdd8beca2e3ea8ef9475b29a6e7003951e1097a5c38d23b7a5fad9f6844' + + 'b22c97559e2723c7cbbd3fe4fc8d9a0744652a83e72a9c461876af4d7ef1a117' + + '8da2b74eef1b6283e7e20166abcae538e9716e4669e2816b6b20c5c356802001' + + 'cc1403a9a117d12a2669f456366d6ebb0f1246f1265150f793cdb4b253e348ae' + + '203d89bc025e802a7e0e00621d70aa36b7e07cb1e7d5b38d5e222b8b0e4b8407' + + '0142b1e29504767d76824850320b5368129fdd74e861b498e3be8d16f2d7d169' + + '57be81f47b17d9ae7c4ff15429a73e10acf250ed3a90a93c711308a74c6216a9' + + 'ed84cd126da7f28e8abf8bb63517e1ca98e712f4fb2e1a6aed9fdc73291faa17' + + '958211c4ba2ebd5838c635edb81f513a91a294e194f1c039aeec657dce40aa7e' + + '7c0af57cacefa40c9f14b71a4b3456a63e162ec7d8d10b8ffb1810d71001b618' + + '2f9f73da53b85405c11f7b2d890fa8ae0c7f2e926d8a98c7ec4e91b65120e988' + + '349631a700c6facec3471cb0413656e75e309456584084d7e12c5b43a41c43ed' + + '9a048abd9b880da65f6a665a20fe7b77cd292fe62cae644b7f7df69f32bdb331' + + '903e6505ce44fdc293920c6a9ec7057e23df7dad298f82ddf4efb7fdc7bfc622' + + '696afcfd0cddcc83c7e77f11a649d79acdc3354e9635ff137e929933a0bd6f53' + + '77efa105a3a4266b7c0d089d08f1e855cc32b15b93784a36e56a76cc64bc8477', + '8000000000000000000000000000000000000000000000000000000000000000', + '256 bits key, set 1, vector 0', + dict (iv='00'*8)), + + ( '00' * 512, + '169060ccb42bea7bee4d8012a02f3635eb7bca12859fa159cd559094b3507db8' + + '01735d1a1300102a9c9415546829cbd2021ba217b39b81d89c55b13d0c603359' + + '3f84159a3c84f4b4f4a0edcd9d38ff261a737909e0b66d68b5cac496f3a5be99' + + 'cb12c321ab711afaab36cc0947955e1a9bb952ed54425e7711279fbc81bb83f5' + + '6e55cea44e6daddb05858a153ea6213b3350c12aa1a83ef2726f09485fa71790' + + 'f9b9f922c7dda1113b1f9d56658ed3402803f511bc1f122601d5e7f0ff036e23' + + '23ef24bb24195b9fd574823cd8a40c29d86bd35c191e2038779ff696c712b6d8' + + '2e7014dbe1ac5d527af076c088c4a8d44317958189f6ef54933a7e0816b5b916' + + 'd8f12ed8afe9422b85e5cc9b8adec9d6cfabe8dbc1082bccc02f5a7266aa074c' + + 'a284e583a35837798cc0e69d4ce937653b8cdd65ce414b89138615ccb165ad19' + + '3c6b9c3d05eef4be921a10ea811fe61d11c6867600188e065daff90b509ec56b' + + 'd41e7e8968c478c78d590c2d2ee24ea009c8f49bc3d81672cfc47895a9e21c9a' + + 
'471ebf8e294bee5d2de436ac8d052bf31111b345f1da23c3a4d13b9fc5f0900a' + + 'a298f98f538973b8fad40d4d159777de2cfe2a3dead1645ddb49794827dba040' + + 'f70a0ff4ecd155e0f033604693a51e2363880e2ecf98699e7174af7c2c6b0fc6' + + '59ae329599a3949272a37b9b2183a0910922a3f325ae124dcbdd735364055ceb', + '09090909090909090909090909090909', + '128 bits key, set 2, vector 9', + dict (iv='00'*8)), + + ( '00' * 512, + '7041e747ceb22ed7812985465f50333124f971da1c5d6efe5ca201b886f31046' + + 'e757e5c3ec914f60ed1f6bce2819b6810953f12b8ba1199bf82d746a8b8a88f1' + + '142002978ec4c35b95dc2c82990f9e847a0ab45f2ca72625f5190c820f29f3aa' + + 'f5f0b5572b06b70a144f2a240c3b3098d4831fa1ce1459f8d1df226a6a79b0ab' + + '41e91799ef31b5ff3d756c19126b19025858ee70fbd69f2be955cb011c005e31' + + '32b271b378f39b0cb594e95c99ce6ff17735a541891845bbf0450afcb4a850b9' + + '4ee90afb713ae7e01295c74381180a3816d7020d5a396c0d97aaa783eaabb6ec' + + '44d5111157f2212d1b1b8fca7893e8b520cd482418c272ab119b569a2b9598eb' + + '355624d12e79adab81153b58cd22eaf1b2a32395dedc4a1c66f4d274070b9800' + + 'ea95766f0245a8295f8aadb36ddbbdfa936417c8dbc6235d19494036964d3e70' + + 'b125b0f800c3d53881d9d11e7970f827c2f9556935cd29e927b0aceb8cae5fd4' + + '0fd88a8854010a33db94c96c98735858f1c5df6844f864feaca8f41539313e7f' + + '3c0610214912cd5e6362197646207e2d64cd5b26c9dfe0822629dcbeb16662e8' + + '9ff5bf5cf2e499138a5e27bd5027329d0e68ddf53103e9e409523662e27f61f6' + + '5cf38c1232023e6a6ef66c315bcb2a4328642faabb7ca1e889e039e7c444b34b' + + 'b3443f596ac730f3df3dfcdb343c307c80f76e43e8898c5e8f43dc3bb280add0', + '0909090909090909090909090909090909090909090909090909090909090909', + '256 bits key, set 2, vector 9', + dict (iv='00'*8)), + + ( '00' * 1024, + '71daee5142d0728b41b6597933ebf467e43279e30978677078941602629cbf68' + + 'b73d6bd2c95f118d2b3e6ec955dabb6dc61c4143bc9a9b32b99dbe6866166dc0' + + '8631b7d6553050303d7252c264d3a90d26c853634813e09ad7545a6ce7e84a5d' + + 'fc75ec43431207d5319970b0faadb0e1510625bb54372c8515e28e2accf0a993' + + '0ad15f431874923d2a59e20d9f2a5367dba6051564f150287debb1db536ff9b0' + + '9ad981f25e5010d85d76ee0c305f755b25e6f09341e0812f95c94f42eead346e' + + '81f39c58c5faa2c88953dc0cac90469db2063cb5cdb22c9eae22afbf0506fca4' + + '1dc710b846fbdfe3c46883dd118f3a5e8b11b6afd9e71680d8666557301a2daa' + + 'fb9496c559784d35a035360885f9b17bd7191977deea932b981ebdb29057ae3c' + + '92cfeff5e6c5d0cb62f209ce342d4e35c69646ccd14e53350e488bb310a32f8b' + + '0248e70acc5b473df537ced3f81a014d4083932bedd62ed0e447b6766cd2604b' + + '706e9b346c4468beb46a34ecf1610ebd38331d52bf33346afec15eefb2a7699e' + + '8759db5a1f636a48a039688e39de34d995df9f27ed9edc8dd795e39e53d9d925' + + 'b278010565ff665269042f05096d94da3433d957ec13d2fd82a0066283d0d1ee' + + 'b81bf0ef133b7fd90248b8ffb499b2414cd4fa003093ff0864575a43749bf596' + + '02f26c717fa96b1d057697db08ebc3fa664a016a67dcef8807577cc3a09385d3' + + 'f4dc79b34364bb3b166ce65fe1dd28e3950fe6fa81063f7b16ce1c0e6daac1f8' + + '188455b77752045e863c9b256ad92bc6e2d08314c5bba191c274f42dfbb3d652' + + 'bb771956555e880f84cd8b827a4c5a52f3a099fa0259bd4aac3efd541f191170' + + '4412d6e85fbcc628b335875b9fef24807f6e1bc66c3186159e1e7f5a13913e02' + + 'd241ce2efdbcaa275039fb14eac5923d17ffbc7f1abd3b45e92127575bfbabf9' + + '3a257ebef0aa1437b326e41b585af572f7239c33b32981a1577a4f629b027e1e' + + 'b49d58cc497e944d79cef44357c2bf25442ab779651e991147bf79d6fd3a8868' + + '0cd3b1748e07fd10d78aceef6db8a5e563570d40127f754146c34a440f2a991a' + + '23fa39d365141f255041f2135c5cba4373452c114da1801bacca38610e3a6524' + + '2b822d32de4ab5a7d3cf9b61b37493c863bd12e2cae10530cddcda2cb7a5436b' + + 
'ef8988d4d24e8cdc31b2d2a3586340bc5141f8f6632d0dd543bfed81eb471ba1' + + 'f3dc2225a15ffddcc03eb48f44e27e2aa390598adf83f15c6608a5f18d4dfcf0' + + 'f547d467a4d70b281c83a595d7660d0b62de78b9cca023cca89d7b1f83484638' + + '0e228c25f049184a612ef5bb3d37454e6cfa5b10dceda619d898a699b3c8981a' + + '173407844bb89b4287bf57dd6600c79e352c681d74b03fa7ea0d7bf6ad69f8a6' + + '8ecb001963bd2dd8a2baa0083ec09751cd9742402ad716be16d5c052304cfca1', + '0F62B5085BAE0154A7FA4DA0F34699EC', + '128 bits key, Set 6, vector# 3', + dict (iv='288FF65DC42B92F9')), + + ( '00' * 1024, + '5e5e71f90199340304abb22a37b6625bf883fb89ce3b21f54a10b81066ef87da' + + '30b77699aa7379da595c77dd59542da208e5954f89e40eb7aa80a84a6176663f' + + 'd910cde567cf1ff60f7040548d8f376bfd1f44c4774aac37410ede7d5c3463fc' + + '4508a603201d8495ad257894e5eb1914b53e8da5e4bf2bc83ac87ce55cc67df7' + + '093d9853d2a83a9c8be969175df7c807a17156df768445dd0874a9271c6537f5' + + 'ce0466473582375f067fa4fcdaf65dbc0139cd75e8c21a482f28c0fb8c3d9f94' + + '22606cc8e88fe28fe73ec3cb10ff0e8cc5f2a49e540f007265c65b7130bfdb98' + + '795b1df9522da46e48b30e55d9f0d787955ece720205b29c85f3ad9be33b4459' + + '7d21b54d06c9a60b04b8e640c64e566e51566730e86cf128ab14174f91bd8981' + + 'a6fb00fe587bbd6c38b5a1dfdb04ea7e61536fd229f957aa9b070ca931358e85' + + '11b92c53c523cb54828fb1513c5636fa9a0645b4a3c922c0db94986d92f314ff' + + '7852c03b231e4dceea5dd8cced621869cff818daf3c270ff3c8be2e5c74be767' + + 'a4e1fdf3327a934fe31e46df5a74ae2021cee021d958c4f615263d99a5ddae7f' + + 'eab45e6eccbafefe4761c57750847b7e75ee2e2f14333c0779ce4678f47b1e1b' + + '760a03a5f17d6e91d4b42313b3f1077ee270e432fe04917ed1fc8babebf7c941' + + '42b80dfb44a28a2a3e59093027606f6860bfb8c2e5897078cfccda7314c70035' + + 'f137de6f05daa035891d5f6f76e1df0fce1112a2ff0ac2bd3534b5d1bf4c7165' + + 'fb40a1b6eacb7f295711c4907ae457514a7010f3a342b4427593d61ba993bc59' + + '8bd09c56b9ee53aac5dd861fa4b4bb53888952a4aa9d8ca8671582de716270e1' + + '97375b3ee49e51fa2bf4ef32015dd9a764d966aa2ae541592d0aa650849e99ca' + + '5c6c39beebf516457cc32fe4c105bff314a12f1ec94bdf4d626f5d9b1cbbde42' + + 'e5733f0885765ba29e2e82c829d312f5fc7e180679ac84826c08d0a644b326d0' + + '44da0fdcc75fa53cfe4ced0437fa4df5a7ecbca8b4cb7c4a9ecf9a60d00a56eb' + + '81da52adc21f508dbb60a9503a3cc94a896616d86020d5b0e5c637329b6d396a' + + '41a21ba2c4a9493cf33fa2d4f10f77d5b12fdad7e478ccfe79b74851fc96a7ca' + + '6320c5efd561a222c0ab0fb44bbda0e42149611d2262bb7d1719150fa798718a' + + '0eec63ee297cad459869c8b0f06c4e2b56cbac03cd2605b2a924efedf85ec8f1' + + '9b0b6c90e7cbd933223ffeb1b3a3f9677657905829294c4c70acdb8b0891b47d' + + '0875d0cd6c0f4efe2917fc44b581ef0d1e4280197065d07da34ab33283364552' + + 'efad0bd9257b059acdd0a6f246812feb69e7e76065f27dbc2eee94da9cc41835' + + 'bf826e36e5cebe5d4d6a37a6a666246290ce51a0c082718ab0ec855668db1add' + + 'a658e5f257e0db39384d02e6145c4c00eaa079098f6d820d872de711b6ed08cf', + '0F62B5085BAE0154A7FA4DA0F34699EC3F92E5388BDE3184D72A7DD02376C91C', + '256 bits key, Set 6, vector# 3', + dict (iv='288FF65DC42B92F9')), + +] + + +class KeyLength(unittest.TestCase): + + def runTest(self): + + nonce = bchr(0) * 8 + for key_length in (15, 30, 33): + key = bchr(1) * key_length + self.assertRaises(ValueError, Salsa20.new, key, nonce) + + +class NonceTests(unittest.TestCase): + + def test_invalid_nonce_length(self): + key = bchr(1) * 16 + self.assertRaises(ValueError, Salsa20.new, key, bchr(0) * 7) + self.assertRaises(ValueError, Salsa20.new, key, bchr(0) * 9) + + def test_default_nonce(self): + + cipher1 = Salsa20.new(bchr(1) * 16) + cipher2 = Salsa20.new(bchr(1) * 16) + self.assertEqual(len(cipher1.nonce), 
8)
+        self.assertNotEqual(cipher1.nonce, cipher2.nonce)
+
+
+class ByteArrayTest(unittest.TestCase):
+    """Verify we can encrypt or decrypt bytearrays"""
+
+    def runTest(self):
+
+        data = b"0123"
+        key = b"9" * 32
+        nonce = b"t" * 8
+
+        # Encryption
+        data_ba = bytearray(data)
+        key_ba = bytearray(key)
+        nonce_ba = bytearray(nonce)
+
+        cipher1 = Salsa20.new(key=key, nonce=nonce)
+        ct = cipher1.encrypt(data)
+
+        cipher2 = Salsa20.new(key=key_ba, nonce=nonce_ba)
+        key_ba[:1] = b'\xFF'
+        nonce_ba[:1] = b'\xFF'
+        ct_test = cipher2.encrypt(data_ba)
+
+        self.assertEqual(ct, ct_test)
+        self.assertEqual(cipher1.nonce, cipher2.nonce)
+
+        # Decryption
+        key_ba = bytearray(key)
+        nonce_ba = bytearray(nonce)
+        ct_ba = bytearray(ct)
+
+        cipher3 = Salsa20.new(key=key_ba, nonce=nonce_ba)
+        key_ba[:1] = b'\xFF'
+        nonce_ba[:1] = b'\xFF'
+        pt_test = cipher3.decrypt(ct_ba)
+
+        self.assertEqual(data, pt_test)
+
+
+class MemoryviewTest(unittest.TestCase):
+    """Verify we can encrypt or decrypt memoryviews"""
+
+    def runTest(self):
+
+        data = b"0123"
+        key = b"9" * 32
+        nonce = b"t" * 8
+
+        # Encryption
+        data_mv = memoryview(bytearray(data))
+        key_mv = memoryview(bytearray(key))
+        nonce_mv = memoryview(bytearray(nonce))
+
+        cipher1 = Salsa20.new(key=key, nonce=nonce)
+        ct = cipher1.encrypt(data)
+
+        cipher2 = Salsa20.new(key=key_mv, nonce=nonce_mv)
+        key_mv[:1] = b'\xFF'
+        nonce_mv[:1] = b'\xFF'
+        ct_test = cipher2.encrypt(data_mv)
+
+        self.assertEqual(ct, ct_test)
+        self.assertEqual(cipher1.nonce, cipher2.nonce)
+
+        # Decryption
+        key_mv = memoryview(bytearray(key))
+        nonce_mv = memoryview(bytearray(nonce))
+        ct_mv = memoryview(bytearray(ct))
+
+        cipher3 = Salsa20.new(key=key_mv, nonce=nonce_mv)
+        key_mv[:1] = b'\xFF'
+        nonce_mv[:1] = b'\xFF'
+        pt_test = cipher3.decrypt(ct_mv)
+
+        self.assertEqual(data, pt_test)
+
+
+class TestOutput(unittest.TestCase):
+
+    def runTest(self):
+        # Encrypt/Decrypt data and test output parameter
+
+        key = b'4' * 32
+        nonce = b'5' * 8
+        cipher = Salsa20.new(key=key, nonce=nonce)
+
+        pt = b'5' * 300
+        ct = cipher.encrypt(pt)
+
+        output = bytearray(len(pt))
+        cipher = Salsa20.new(key=key, nonce=nonce)
+        res = cipher.encrypt(pt, output=output)
+        self.assertEqual(ct, output)
+        self.assertEqual(res, None)
+
+        cipher = Salsa20.new(key=key, nonce=nonce)
+        res = cipher.decrypt(ct, output=output)
+        self.assertEqual(pt, output)
+        self.assertEqual(res, None)
+
+        output = memoryview(bytearray(len(pt)))
+        cipher = Salsa20.new(key=key, nonce=nonce)
+        cipher.encrypt(pt, output=output)
+        self.assertEqual(ct, output)
+
+        cipher = Salsa20.new(key=key, nonce=nonce)
+        cipher.decrypt(ct, output=output)
+        self.assertEqual(pt, output)
+
+        cipher = Salsa20.new(key=key, nonce=nonce)
+        self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*len(pt))
+
+        cipher = Salsa20.new(key=key, nonce=nonce)
+        self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*len(ct))
+
+        shorter_output = bytearray(len(pt) - 1)
+
+        cipher = Salsa20.new(key=key, nonce=nonce)
+        self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output)
+
+        cipher = Salsa20.new(key=key, nonce=nonce)
+        self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output)
+
+
+def get_tests(config={}):
+    tests = make_stream_tests(Salsa20, "Salsa20", test_data)
+    tests.append(KeyLength())
+    tests += list_test_cases(NonceTests)
+    tests.append(ByteArrayTest())
+    tests.append(MemoryviewTest())
+    tests.append(TestOutput())
+
+    return tests
+
+
+if __name__ == '__main__':
+    import unittest
+    suite = lambda: 
unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_pkcs1_15.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_pkcs1_15.py new file mode 100644 index 00000000..e16a5438 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_pkcs1_15.py @@ -0,0 +1,283 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/test_pkcs1_15.py: Self-test for PKCS#1 v1.5 encryption +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from __future__ import print_function + +import unittest + +from Crypto.PublicKey import RSA +from Crypto.SelfTest.st_common import list_test_cases, a2b_hex +from Crypto import Random +from Crypto.Cipher import PKCS1_v1_5 as PKCS +from Crypto.Util.py3compat import b +from Crypto.Util.number import bytes_to_long, long_to_bytes +from Crypto.SelfTest.loader import load_test_vectors_wycheproof + + +def rws(t): + """Remove white spaces, tabs, and new lines from a string""" + for c in ['\n', '\t', ' ']: + t = t.replace(c, '') + return t + + +def t2b(t): + """Convert a text string with bytes in hex form to a byte string""" + clean = b(rws(t)) + if len(clean) % 2 == 1: + raise ValueError("Even number of characters expected") + return a2b_hex(clean) + + +class PKCS1_15_Tests(unittest.TestCase): + + def setUp(self): + self.rng = Random.new().read + self.key1024 = RSA.generate(1024, self.rng) + + # List of tuples with test data for PKCS#1 v1.5. 
+ # Each tuple is made up by: + # Item #0: dictionary with RSA key component, or key to import + # Item #1: plaintext + # Item #2: ciphertext + # Item #3: random data + + _testData = ( + + # + # Generated with openssl 0.9.8o + # + ( + # Private key + '''-----BEGIN RSA PRIVATE KEY----- +MIICXAIBAAKBgQDAiAnvIAOvqVwJTaYzsKnefZftgtXGE2hPJppGsWl78yz9jeXY +W/FxX/gTPURArNhdnhP6n3p2ZaDIBrO2zizbgIXs0IsljTTcr4vnI8fMXzyNUOjA +zP3nzMqZDZK6757XQAobOssMkBFqRWwilT/3DsBhRpl3iMUhF+wvpTSHewIDAQAB +AoGAC4HV/inOrpgTvSab8Wj0riyZgQOZ3U3ZpSlsfR8ra9Ib9Uee3jCYnKscu6Gk +y6zI/cdt8EPJ4PuwAWSNJzbpbVaDvUq25OD+CX8/uRT08yBS4J8TzBitZJTD4lS7 +atdTnKT0Wmwk+u8tDbhvMKwnUHdJLcuIsycts9rwJVapUtkCQQDvDpx2JMun0YKG +uUttjmL8oJ3U0m3ZvMdVwBecA0eebZb1l2J5PvI3EJD97eKe91Nsw8T3lwpoN40k +IocSVDklAkEAzi1HLHE6EzVPOe5+Y0kGvrIYRRhncOb72vCvBZvD6wLZpQgqo6c4 +d3XHFBBQWA6xcvQb5w+VVEJZzw64y25sHwJBAMYReRl6SzL0qA0wIYrYWrOt8JeQ +8mthulcWHXmqTgC6FEXP9Es5GD7/fuKl4wqLKZgIbH4nqvvGay7xXLCXD/ECQH9a +1JYNMtRen5unSAbIOxRcKkWz92F0LKpm9ZW/S9vFHO+mBcClMGoKJHiuQxLBsLbT +NtEZfSJZAeS2sUtn3/0CQDb2M2zNBTF8LlM0nxmh0k9VGm5TVIyBEMcipmvOgqIs +HKukWBcq9f/UOmS0oEhai/6g+Uf7VHJdWaeO5LzuvwU= +-----END RSA PRIVATE KEY-----''', + # Plaintext + '''THIS IS PLAINTEXT\x0A''', + # Ciphertext + '''3f dc fd 3c cd 5c 9b 12 af 65 32 e3 f7 d0 da 36 + 8f 8f d9 e3 13 1c 7f c8 b3 f9 c1 08 e4 eb 79 9c + 91 89 1f 96 3b 94 77 61 99 a4 b1 ee 5d e6 17 c9 + 5d 0a b5 63 52 0a eb 00 45 38 2a fb b0 71 3d 11 + f7 a1 9e a7 69 b3 af 61 c0 bb 04 5b 5d 4b 27 44 + 1f 5b 97 89 ba 6a 08 95 ee 4f a2 eb 56 64 e5 0f + da 7c f9 9a 61 61 06 62 ed a0 bc 5f aa 6c 31 78 + 70 28 1a bb 98 3c e3 6a 60 3c d1 0b 0f 5a f4 75''', + # Random data + '''eb d7 7d 86 a4 35 23 a3 54 7e 02 0b 42 1d + 61 6c af 67 b8 4e 17 56 80 66 36 04 64 34 26 8a + 47 dd 44 b3 1a b2 17 60 f4 91 2e e2 b5 95 64 cc + f9 da c8 70 94 54 86 4c ef 5b 08 7d 18 c4 ab 8d + 04 06 33 8f ca 15 5f 52 60 8a a1 0c f5 08 b5 4c + bb 99 b8 94 25 04 9c e6 01 75 e6 f9 63 7a 65 61 + 13 8a a7 47 77 81 ae 0d b8 2c 4d 50 a5''' + ), + ) + + def testEncrypt1(self): + for test in self._testData: + # Build the key + key = RSA.importKey(test[0]) + # RNG that takes its random numbers from a pool given + # at initialization + class randGen: + def __init__(self, data): + self.data = data + self.idx = 0 + def __call__(self, N): + r = self.data[self.idx:self.idx+N] + self.idx += N + return r + # The real test + cipher = PKCS.new(key, randfunc=randGen(t2b(test[3]))) + ct = cipher.encrypt(b(test[1])) + self.assertEqual(ct, t2b(test[2])) + + def testEncrypt2(self): + # Verify that encryption fail if plaintext is too long + pt = '\x00'*(128-11+1) + cipher = PKCS.new(self.key1024) + self.assertRaises(ValueError, cipher.encrypt, pt) + + def testVerify1(self): + for test in self._testData: + key = RSA.importKey(test[0]) + expected_pt = b(test[1]) + ct = t2b(test[2]) + cipher = PKCS.new(key) + + # The real test + pt = cipher.decrypt(ct, None) + self.assertEqual(pt, expected_pt) + + pt = cipher.decrypt(ct, b'\xFF' * len(expected_pt)) + self.assertEqual(pt, expected_pt) + + def testVerify2(self): + # Verify that decryption fails if ciphertext is not as long as + # RSA modulus + cipher = PKCS.new(self.key1024) + self.assertRaises(ValueError, cipher.decrypt, '\x00'*127, "---") + self.assertRaises(ValueError, cipher.decrypt, '\x00'*129, "---") + + # Verify that decryption fails if there are less then 8 non-zero padding + # bytes + pt = b('\x00\x02' + '\xFF'*7 + '\x00' + '\x45'*118) + pt_int = bytes_to_long(pt) + ct_int = self.key1024._encrypt(pt_int) + ct = long_to_bytes(ct_int, 128) + 
self.assertEqual(b"---", cipher.decrypt(ct, b"---")) + + def testEncryptVerify1(self): + # Encrypt/Verify messages of length [0..RSAlen-11] + # and therefore padding [8..117] + for pt_len in range(0, 128 - 11 + 1): + pt = self.rng(pt_len) + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(pt) + pt2 = cipher.decrypt(ct, b'\xAA' * pt_len) + self.assertEqual(pt, pt2) + + def test_encrypt_verify_exp_pt_len(self): + + cipher = PKCS.new(self.key1024) + pt = b'5' * 16 + ct = cipher.encrypt(pt) + sentinel = b'\xAA' * 16 + + pt_A = cipher.decrypt(ct, sentinel, 16) + self.assertEqual(pt, pt_A) + + pt_B = cipher.decrypt(ct, sentinel, 15) + self.assertEqual(sentinel, pt_B) + + pt_C = cipher.decrypt(ct, sentinel, 17) + self.assertEqual(sentinel, pt_C) + + def testByteArray(self): + pt = b"XER" + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(bytearray(pt)) + pt2 = cipher.decrypt(bytearray(ct), '\xFF' * len(pt)) + self.assertEqual(pt, pt2) + + def testMemoryview(self): + pt = b"XER" + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(memoryview(bytearray(pt))) + pt2 = cipher.decrypt(memoryview(bytearray(ct)), b'\xFF' * len(pt)) + self.assertEqual(pt, pt2) + + def test_return_type(self): + pt = b"XYZ" + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(pt) + self.assertTrue(isinstance(ct, bytes)) + pt2 = cipher.decrypt(ct, b'\xAA' * 3) + self.assertTrue(isinstance(pt2, bytes)) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, skip_slow_tests): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._skip_slow_tests = skip_slow_tests + self._id = "None" + + def load_tests(self, filename): + + def filter_rsa(group): + return RSA.import_key(group['privateKeyPem']) + + result = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + filename, + "Wycheproof PKCS#1v1.5 (%s)" % filename, + group_tag={'rsa_key': filter_rsa} + ) + return result + + def setUp(self): + self.tv = [] + self.tv.extend(self.load_tests("rsa_pkcs1_2048_test.json")) + if not self._skip_slow_tests: + self.tv.extend(self.load_tests("rsa_pkcs1_3072_test.json")) + self.tv.extend(self.load_tests("rsa_pkcs1_4096_test.json")) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt PKCS#1v1.5 Test #%s" % tv.id + sentinel = b'\xAA' * max(3, len(tv.msg)) + cipher = PKCS.new(tv.rsa_key) + try: + pt = cipher.decrypt(tv.ct, sentinel=sentinel) + except ValueError: + assert not tv.valid + else: + if pt == sentinel: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + self.warn(tv) + + def runTest(self): + + for tv in self.tv: + self.test_decrypt(tv) + + +def get_tests(config={}): + skip_slow_tests = not config.get('slow_tests') + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(PKCS1_15_Tests) + tests += [TestVectorsWycheproof(wycheproof_warnings, skip_slow_tests)] + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_pkcs1_oaep.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_pkcs1_oaep.py new file mode 100644 index 00000000..17115819 --- /dev/null 
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Cipher/test_pkcs1_oaep.py @@ -0,0 +1,506 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/test_pkcs1_oaep.py: Self-test for PKCS#1 OAEP encryption +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +import unittest + +from Crypto.SelfTest.st_common import list_test_cases, a2b_hex +from Crypto.SelfTest.loader import load_test_vectors_wycheproof + +from Crypto.PublicKey import RSA +from Crypto.Cipher import PKCS1_OAEP as PKCS +from Crypto.Hash import MD2, MD5, SHA1, SHA256, RIPEMD160, SHA224, SHA384, SHA512 +from Crypto import Random +from Crypto.Signature.pss import MGF1 + +from Crypto.Util.py3compat import b, bchr + + +def rws(t): + """Remove white spaces, tabs, and new lines from a string""" + for c in ['\n', '\t', ' ']: + t = t.replace(c, '') + return t + + +def t2b(t): + """Convert a text string with bytes in hex form to a byte string""" + clean = rws(t) + if len(clean) % 2 == 1: + raise ValueError("Even number of characters expected") + return a2b_hex(clean) + + +class PKCS1_OAEP_Tests(unittest.TestCase): + + def setUp(self): + self.rng = Random.new().read + self.key1024 = RSA.generate(1024, self.rng) + + # List of tuples with test data for PKCS#1 OAEP + # Each tuple is made up by: + # Item #0: dictionary with RSA key component + # Item #1: plaintext + # Item #2: ciphertext + # Item #3: random data (=seed) + # Item #4: hash object + + _testData = ( + + # + # From in oaep-int.txt to be found in + # ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1-vec.zip + # + ( + # Private key + { + 'n':'''bb f8 2f 09 06 82 ce 9c 23 38 ac 2b 9d a8 71 f7 + 36 8d 07 ee d4 10 43 a4 40 d6 b6 f0 74 54 f5 1f + b8 df ba af 03 5c 02 ab 61 ea 48 ce eb 6f cd 48 + 76 ed 52 0d 60 e1 ec 46 19 71 9d 8a 5b 8b 80 7f + af b8 e0 a3 df c7 37 72 3e e6 b4 b7 d9 3a 25 84 + ee 6a 64 9d 06 09 53 74 88 34 b2 45 45 98 39 4e + e0 aa b1 2d 7b 61 a5 1f 52 7a 9a 41 f6 c1 68 7f + e2 53 72 98 ca 2a 8f 59 46 f8 e5 fd 09 1d bd cb''', + # Public key + 'e':'11', + # In the test vector, only p and q were given... 
+ # d is computed offline as e^{-1} mod (p-1)(q-1) + 'd':'''a5dafc5341faf289c4b988db30c1cdf83f31251e0 + 668b42784813801579641b29410b3c7998d6bc465745e5c3 + 92669d6870da2c082a939e37fdcb82ec93edac97ff3ad595 + 0accfbc111c76f1a9529444e56aaf68c56c092cd38dc3bef + 5d20a939926ed4f74a13eddfbe1a1cecc4894af9428c2b7b + 8883fe4463a4bc85b1cb3c1''' + } + , + # Plaintext + '''d4 36 e9 95 69 fd 32 a7 c8 a0 5b bc 90 d3 2c 49''', + # Ciphertext + '''12 53 e0 4d c0 a5 39 7b b4 4a 7a b8 7e 9b f2 a0 + 39 a3 3d 1e 99 6f c8 2a 94 cc d3 00 74 c9 5d f7 + 63 72 20 17 06 9e 52 68 da 5d 1c 0b 4f 87 2c f6 + 53 c1 1d f8 23 14 a6 79 68 df ea e2 8d ef 04 bb + 6d 84 b1 c3 1d 65 4a 19 70 e5 78 3b d6 eb 96 a0 + 24 c2 ca 2f 4a 90 fe 9f 2e f5 c9 c1 40 e5 bb 48 + da 95 36 ad 87 00 c8 4f c9 13 0a de a7 4e 55 8d + 51 a7 4d df 85 d8 b5 0d e9 68 38 d6 06 3e 09 55''', + # Random + '''aa fd 12 f6 59 ca e6 34 89 b4 79 e5 07 6d de c2 + f0 6c b5 8f''', + # Hash + SHA1, + ), + + # + # From in oaep-vect.txt to be found in Example 1.1 + # ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1-vec.zip + # + ( + # Private key + { + 'n':'''a8 b3 b2 84 af 8e b5 0b 38 70 34 a8 60 f1 46 c4 + 91 9f 31 87 63 cd 6c 55 98 c8 ae 48 11 a1 e0 ab + c4 c7 e0 b0 82 d6 93 a5 e7 fc ed 67 5c f4 66 85 + 12 77 2c 0c bc 64 a7 42 c6 c6 30 f5 33 c8 cc 72 + f6 2a e8 33 c4 0b f2 58 42 e9 84 bb 78 bd bf 97 + c0 10 7d 55 bd b6 62 f5 c4 e0 fa b9 84 5c b5 14 + 8e f7 39 2d d3 aa ff 93 ae 1e 6b 66 7b b3 d4 24 + 76 16 d4 f5 ba 10 d4 cf d2 26 de 88 d3 9f 16 fb''', + 'e':'''01 00 01''', + 'd':'''53 33 9c fd b7 9f c8 46 6a 65 5c 73 16 ac a8 5c + 55 fd 8f 6d d8 98 fd af 11 95 17 ef 4f 52 e8 fd + 8e 25 8d f9 3f ee 18 0f a0 e4 ab 29 69 3c d8 3b + 15 2a 55 3d 4a c4 d1 81 2b 8b 9f a5 af 0e 7f 55 + fe 73 04 df 41 57 09 26 f3 31 1f 15 c4 d6 5a 73 + 2c 48 31 16 ee 3d 3d 2d 0a f3 54 9a d9 bf 7c bf + b7 8a d8 84 f8 4d 5b eb 04 72 4d c7 36 9b 31 de + f3 7d 0c f5 39 e9 cf cd d3 de 65 37 29 ea d5 d1 ''' + } + , + # Plaintext + '''66 28 19 4e 12 07 3d b0 3b a9 4c da 9e f9 53 23 + 97 d5 0d ba 79 b9 87 00 4a fe fe 34''', + # Ciphertext + '''35 4f e6 7b 4a 12 6d 5d 35 fe 36 c7 77 79 1a 3f + 7b a1 3d ef 48 4e 2d 39 08 af f7 22 fa d4 68 fb + 21 69 6d e9 5d 0b e9 11 c2 d3 17 4f 8a fc c2 01 + 03 5f 7b 6d 8e 69 40 2d e5 45 16 18 c2 1a 53 5f + a9 d7 bf c5 b8 dd 9f c2 43 f8 cf 92 7d b3 13 22 + d6 e8 81 ea a9 1a 99 61 70 e6 57 a0 5a 26 64 26 + d9 8c 88 00 3f 84 77 c1 22 70 94 a0 d9 fa 1e 8c + 40 24 30 9c e1 ec cc b5 21 00 35 d4 7a c7 2e 8a''', + # Random + '''18 b7 76 ea 21 06 9d 69 77 6a 33 e9 6b ad 48 e1 + dd a0 a5 ef''', + SHA1 + ), + + # + # From in oaep-vect.txt to be found in Example 2.1 + # ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1-vec.zip + # + ( + # Private key + { + 'n':'''01 94 7c 7f ce 90 42 5f 47 27 9e 70 85 1f 25 d5 + e6 23 16 fe 8a 1d f1 93 71 e3 e6 28 e2 60 54 3e + 49 01 ef 60 81 f6 8c 0b 81 41 19 0d 2a e8 da ba + 7d 12 50 ec 6d b6 36 e9 44 ec 37 22 87 7c 7c 1d + 0a 67 f1 4b 16 94 c5 f0 37 94 51 a4 3e 49 a3 2d + de 83 67 0b 73 da 91 a1 c9 9b c2 3b 43 6a 60 05 + 5c 61 0f 0b af 99 c1 a0 79 56 5b 95 a3 f1 52 66 + 32 d1 d4 da 60 f2 0e da 25 e6 53 c4 f0 02 76 6f + 45''', + 'e':'''01 00 01''', + 'd':'''08 23 f2 0f ad b5 da 89 08 8a 9d 00 89 3e 21 fa + 4a 1b 11 fb c9 3c 64 a3 be 0b aa ea 97 fb 3b 93 + c3 ff 71 37 04 c1 9c 96 3c 1d 10 7a ae 99 05 47 + 39 f7 9e 02 e1 86 de 86 f8 7a 6d de fe a6 d8 cc + d1 d3 c8 1a 47 bf a7 25 5b e2 06 01 a4 a4 b2 f0 + 8a 16 7b 5e 27 9d 71 5b 1b 45 5b dd 7e ab 24 59 + 41 d9 76 8b 9a ce fb 3c cd a5 95 2d a3 ce e7 25 + 25 b4 50 16 63 a8 ee 15 c9 e9 92 d9 
24 62 fe 39''' + }, + # Plaintext + '''8f f0 0c aa 60 5c 70 28 30 63 4d 9a 6c 3d 42 c6 + 52 b5 8c f1 d9 2f ec 57 0b ee e7''', + # Ciphertext + '''01 81 af 89 22 b9 fc b4 d7 9d 92 eb e1 98 15 99 + 2f c0 c1 43 9d 8b cd 49 13 98 a0 f4 ad 3a 32 9a + 5b d9 38 55 60 db 53 26 83 c8 b7 da 04 e4 b1 2a + ed 6a ac df 47 1c 34 c9 cd a8 91 ad dc c2 df 34 + 56 65 3a a6 38 2e 9a e5 9b 54 45 52 57 eb 09 9d + 56 2b be 10 45 3f 2b 6d 13 c5 9c 02 e1 0f 1f 8a + bb 5d a0 d0 57 09 32 da cf 2d 09 01 db 72 9d 0f + ef cc 05 4e 70 96 8e a5 40 c8 1b 04 bc ae fe 72 + 0e''', + # Random + '''8c 40 7b 5e c2 89 9e 50 99 c5 3e 8c e7 93 bf 94 + e7 1b 17 82''', + SHA1 + ), + + # + # From in oaep-vect.txt to be found in Example 10.1 + # ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1-vec.zip + # + ( + # Private key + { + 'n':'''ae 45 ed 56 01 ce c6 b8 cc 05 f8 03 93 5c 67 4d + db e0 d7 5c 4c 09 fd 79 51 fc 6b 0c ae c3 13 a8 + df 39 97 0c 51 8b ff ba 5e d6 8f 3f 0d 7f 22 a4 + 02 9d 41 3f 1a e0 7e 4e be 9e 41 77 ce 23 e7 f5 + 40 4b 56 9e 4e e1 bd cf 3c 1f b0 3e f1 13 80 2d + 4f 85 5e b9 b5 13 4b 5a 7c 80 85 ad ca e6 fa 2f + a1 41 7e c3 76 3b e1 71 b0 c6 2b 76 0e de 23 c1 + 2a d9 2b 98 08 84 c6 41 f5 a8 fa c2 6b da d4 a0 + 33 81 a2 2f e1 b7 54 88 50 94 c8 25 06 d4 01 9a + 53 5a 28 6a fe b2 71 bb 9b a5 92 de 18 dc f6 00 + c2 ae ea e5 6e 02 f7 cf 79 fc 14 cf 3b dc 7c d8 + 4f eb bb f9 50 ca 90 30 4b 22 19 a7 aa 06 3a ef + a2 c3 c1 98 0e 56 0c d6 4a fe 77 95 85 b6 10 76 + 57 b9 57 85 7e fd e6 01 09 88 ab 7d e4 17 fc 88 + d8 f3 84 c4 e6 e7 2c 3f 94 3e 0c 31 c0 c4 a5 cc + 36 f8 79 d8 a3 ac 9d 7d 59 86 0e aa da 6b 83 bb''', + 'e':'''01 00 01''', + 'd':'''05 6b 04 21 6f e5 f3 54 ac 77 25 0a 4b 6b 0c 85 + 25 a8 5c 59 b0 bd 80 c5 64 50 a2 2d 5f 43 8e 59 + 6a 33 3a a8 75 e2 91 dd 43 f4 8c b8 8b 9d 5f c0 + d4 99 f9 fc d1 c3 97 f9 af c0 70 cd 9e 39 8c 8d + 19 e6 1d b7 c7 41 0a 6b 26 75 df bf 5d 34 5b 80 + 4d 20 1a dd 50 2d 5c e2 df cb 09 1c e9 99 7b be + be 57 30 6f 38 3e 4d 58 81 03 f0 36 f7 e8 5d 19 + 34 d1 52 a3 23 e4 a8 db 45 1d 6f 4a 5b 1b 0f 10 + 2c c1 50 e0 2f ee e2 b8 8d ea 4a d4 c1 ba cc b2 + 4d 84 07 2d 14 e1 d2 4a 67 71 f7 40 8e e3 05 64 + fb 86 d4 39 3a 34 bc f0 b7 88 50 1d 19 33 03 f1 + 3a 22 84 b0 01 f0 f6 49 ea f7 93 28 d4 ac 5c 43 + 0a b4 41 49 20 a9 46 0e d1 b7 bc 40 ec 65 3e 87 + 6d 09 ab c5 09 ae 45 b5 25 19 01 16 a0 c2 61 01 + 84 82 98 50 9c 1c 3b f3 a4 83 e7 27 40 54 e1 5e + 97 07 50 36 e9 89 f6 09 32 80 7b 52 57 75 1e 79''' + }, + # Plaintext + '''8b ba 6b f8 2a 6c 0f 86 d5 f1 75 6e 97 95 68 70 + b0 89 53 b0 6b 4e b2 05 bc 16 94 ee''', + # Ciphertext + '''53 ea 5d c0 8c d2 60 fb 3b 85 85 67 28 7f a9 15 + 52 c3 0b 2f eb fb a2 13 f0 ae 87 70 2d 06 8d 19 + ba b0 7f e5 74 52 3d fb 42 13 9d 68 c3 c5 af ee + e0 bf e4 cb 79 69 cb f3 82 b8 04 d6 e6 13 96 14 + 4e 2d 0e 60 74 1f 89 93 c3 01 4b 58 b9 b1 95 7a + 8b ab cd 23 af 85 4f 4c 35 6f b1 66 2a a7 2b fc + c7 e5 86 55 9d c4 28 0d 16 0c 12 67 85 a7 23 eb + ee be ff 71 f1 15 94 44 0a ae f8 7d 10 79 3a 87 + 74 a2 39 d4 a0 4c 87 fe 14 67 b9 da f8 52 08 ec + 6c 72 55 79 4a 96 cc 29 14 2f 9a 8b d4 18 e3 c1 + fd 67 34 4b 0c d0 82 9d f3 b2 be c6 02 53 19 62 + 93 c6 b3 4d 3f 75 d3 2f 21 3d d4 5c 62 73 d5 05 + ad f4 cc ed 10 57 cb 75 8f c2 6a ee fa 44 12 55 + ed 4e 64 c1 99 ee 07 5e 7f 16 64 61 82 fd b4 64 + 73 9b 68 ab 5d af f0 e6 3e 95 52 01 68 24 f0 54 + bf 4d 3c 8c 90 a9 7b b6 b6 55 32 84 eb 42 9f cc''', + # Random + '''47 e1 ab 71 19 fe e5 6c 95 ee 5e aa d8 6f 40 d0 + aa 63 bd 33''', + SHA1 + ), + ) + + def testEncrypt1(self): + # Verify encryption using all test vectors + 
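+        # OAEP is randomized, but randGen below replays the fixed seed from
+        # test[3], so encrypt() becomes deterministic and its output can be
+        # compared byte-for-byte with the expected ciphertext in test[2].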
for test in self._testData:
+            # Build the key
+            comps = [int(rws(test[0][x]), 16) for x in ('n', 'e')]
+            key = RSA.construct(comps)
+
+            # RNG that takes its random numbers from a pool given
+            # at initialization
+            class randGen:
+
+                def __init__(self, data):
+                    self.data = data
+                    self.idx = 0
+
+                def __call__(self, N):
+                    r = self.data[self.idx:self.idx+N]
+                    self.idx += N
+                    return r
+
+            # The real test
+            cipher = PKCS.new(key, test[4], randfunc=randGen(t2b(test[3])))
+            ct = cipher.encrypt(t2b(test[1]))
+            self.assertEqual(ct, t2b(test[2]))
+
+    def testEncrypt2(self):
+        # Verify that encryption fails if plaintext is too long
+        pt = '\x00'*(128-2*20-2+1)
+        cipher = PKCS.new(self.key1024)
+        self.assertRaises(ValueError, cipher.encrypt, pt)
+
+    def testDecrypt1(self):
+        # Verify decryption using all test vectors
+        for test in self._testData:
+            # Build the key
+            comps = [int(rws(test[0][x]), 16) for x in ('n', 'e', 'd')]
+            key = RSA.construct(comps)
+            # The real test
+            cipher = PKCS.new(key, test[4])
+            pt = cipher.decrypt(t2b(test[2]))
+            self.assertEqual(pt, t2b(test[1]))
+
+    def testDecrypt2(self):
+        # Simplest possible negative tests
+        for ct_size in (127, 128, 129):
+            cipher = PKCS.new(self.key1024)
+            self.assertRaises(ValueError, cipher.decrypt, bchr(0x00)*ct_size)
+
+    def testEncryptDecrypt1(self):
+        # Encrypt/Decrypt messages of length [0..128-2*20-2]
+        for pt_len in range(0, 128-2*20-2+1):
+            pt = self.rng(pt_len)
+            cipher = PKCS.new(self.key1024)
+            ct = cipher.encrypt(pt)
+            pt2 = cipher.decrypt(ct)
+            self.assertEqual(pt, pt2)
+
+    def testEncryptDecrypt2(self):
+        # Helper function to monitor what's requested from RNG
+        global asked
+
+        def localRng(N):
+            global asked
+            asked += N
+            return self.rng(N)
+
+        # Verify that OAEP is friendly to all hashes
+        for hashmod in (MD2, MD5, SHA1, SHA256, RIPEMD160):
+            # Verify that encrypt() asks for as many random bytes
+            # as the hash output size
+            asked = 0
+            pt = self.rng(40)
+            cipher = PKCS.new(self.key1024, hashmod, randfunc=localRng)
+            ct = cipher.encrypt(pt)
+            self.assertEqual(cipher.decrypt(ct), pt)
+            self.assertEqual(asked, hashmod.digest_size)
+
+    def testEncryptDecrypt3(self):
+        # Verify that OAEP supports labels
+        pt = self.rng(35)
+        xlabel = self.rng(22)
+        cipher = PKCS.new(self.key1024, label=xlabel)
+        ct = cipher.encrypt(pt)
+        self.assertEqual(cipher.decrypt(ct), pt)
+
+    def testEncryptDecrypt4(self):
+        # Verify that encrypt() uses the custom MGF
+        global mgfcalls
+        # Helper function to monitor what's requested from MGF
+
+        def newMGF(seed, maskLen):
+            global mgfcalls
+            mgfcalls += 1
+            return b'\x00' * maskLen
+
+        mgfcalls = 0
+        pt = self.rng(32)
+        cipher = PKCS.new(self.key1024, mgfunc=newMGF)
+        ct = cipher.encrypt(pt)
+        self.assertEqual(mgfcalls, 2)
+        self.assertEqual(cipher.decrypt(ct), pt)
+
+    def testByteArray(self):
+        pt = b("XER")
+        cipher = PKCS.new(self.key1024)
+        ct = cipher.encrypt(bytearray(pt))
+        pt2 = cipher.decrypt(bytearray(ct))
+        self.assertEqual(pt, pt2)
+
+    def testMemoryview(self):
+        pt = b("XER")
+        cipher = PKCS.new(self.key1024)
+        ct = cipher.encrypt(memoryview(bytearray(pt)))
+        pt2 = cipher.decrypt(memoryview(bytearray(ct)))
+        self.assertEqual(pt, pt2)
+
+
+class TestVectorsWycheproof(unittest.TestCase):
+
+    def __init__(self, wycheproof_warnings, skip_slow_tests):
+        unittest.TestCase.__init__(self)
+        self._wycheproof_warnings = wycheproof_warnings
+        self._skip_slow_tests = skip_slow_tests
+        self._id = "None"
+
+    def load_tests(self, filename):
+
+        def filter_rsa(group):
+            return RSA.import_key(group['privateKeyPem'])
+
+        def filter_sha(group):
+            if group['sha'] == "SHA-1":
+                return SHA1
+            elif group['sha'] == "SHA-224":
+                return SHA224
+            elif group['sha'] == "SHA-256":
+                return SHA256
+            elif group['sha'] == "SHA-384":
+                return SHA384
+            elif group['sha'] == "SHA-512":
+                return SHA512
+            else:
+                raise ValueError("Unknown sha " + group['sha'])
+
+        def filter_mgf(group):
+            if group['mgfSha'] == "SHA-1":
+                return lambda x, y: MGF1(x, y, SHA1)
+            elif group['mgfSha'] == "SHA-224":
+                return lambda x, y: MGF1(x, y, SHA224)
+            elif group['mgfSha'] == "SHA-256":
+                return lambda x, y: MGF1(x, y, SHA256)
+            elif group['mgfSha'] == "SHA-384":
+                return lambda x, y: MGF1(x, y, SHA384)
+            elif group['mgfSha'] == "SHA-512":
+                return lambda x, y: MGF1(x, y, SHA512)
+            else:
+                raise ValueError("Unknown mgf/sha " + group['mgfSha'])
+
+        def filter_algo(group):
+            return "%s with MGF1/%s" % (group['sha'], group['mgfSha'])
+
+        result = load_test_vectors_wycheproof(("Cipher", "wycheproof"),
+                                              filename,
+                                              "Wycheproof PKCS#1 OAEP (%s)" % filename,
+                                              group_tag={'rsa_key': filter_rsa,
+                                                         'hash_mod': filter_sha,
+                                                         'mgf': filter_mgf,
+                                                         'algo': filter_algo}
+                                              )
+        return result
+
+    def setUp(self):
+        self.tv = []
+        self.tv.extend(self.load_tests("rsa_oaep_2048_sha1_mgf1sha1_test.json"))
+        self.tv.extend(self.load_tests("rsa_oaep_2048_sha224_mgf1sha1_test.json"))
+        self.tv.extend(self.load_tests("rsa_oaep_2048_sha224_mgf1sha224_test.json"))
+        self.tv.extend(self.load_tests("rsa_oaep_2048_sha256_mgf1sha1_test.json"))
+        self.tv.extend(self.load_tests("rsa_oaep_2048_sha256_mgf1sha256_test.json"))
+        self.tv.extend(self.load_tests("rsa_oaep_2048_sha384_mgf1sha1_test.json"))
+        self.tv.extend(self.load_tests("rsa_oaep_2048_sha384_mgf1sha384_test.json"))
+        self.tv.extend(self.load_tests("rsa_oaep_2048_sha512_mgf1sha1_test.json"))
+        self.tv.extend(self.load_tests("rsa_oaep_2048_sha512_mgf1sha512_test.json"))
+        if not self._skip_slow_tests:
+            self.tv.extend(self.load_tests("rsa_oaep_3072_sha256_mgf1sha1_test.json"))
+            self.tv.extend(self.load_tests("rsa_oaep_3072_sha256_mgf1sha256_test.json"))
+            self.tv.extend(self.load_tests("rsa_oaep_3072_sha512_mgf1sha1_test.json"))
+            self.tv.extend(self.load_tests("rsa_oaep_3072_sha512_mgf1sha512_test.json"))
+            self.tv.extend(self.load_tests("rsa_oaep_4096_sha256_mgf1sha1_test.json"))
+            self.tv.extend(self.load_tests("rsa_oaep_4096_sha256_mgf1sha256_test.json"))
+            self.tv.extend(self.load_tests("rsa_oaep_4096_sha512_mgf1sha1_test.json"))
+            self.tv.extend(self.load_tests("rsa_oaep_4096_sha512_mgf1sha512_test.json"))
+        self.tv.extend(self.load_tests("rsa_oaep_misc_test.json"))
+
+    def shortDescription(self):
+        return self._id
+
+    def warn(self, tv):
+        if tv.warning and self._wycheproof_warnings:
+            import warnings
+            warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment))
+
+    def test_decrypt(self, tv):
+        self._id = "Wycheproof Decrypt %s Test #%s" % (tv.algo, tv.id)
+
+        cipher = PKCS.new(tv.rsa_key, hashAlgo=tv.hash_mod, mgfunc=tv.mgf, label=tv.label)
+        try:
+            pt = cipher.decrypt(tv.ct)
+        except ValueError:
+            assert not tv.valid
+        else:
+            assert tv.valid
+            self.assertEqual(pt, tv.msg)
+            self.warn(tv)
+
+    def runTest(self):
+
+        for tv in self.tv:
+            self.test_decrypt(tv)
+
+
+def get_tests(config={}):
+    skip_slow_tests = not config.get('slow_tests')
+    wycheproof_warnings = config.get('wycheproof_warnings')
+
+    tests = []
+    tests += list_test_cases(PKCS1_OAEP_Tests)
+    tests += [TestVectorsWycheproof(wycheproof_warnings, skip_slow_tests)]
+    return tests
+
+
+if __name__ == '__main__':
+    def suite():
+        return unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
+
+# vim:set ts=4 sw=4 sts=4 expandtab:
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/__init__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/__init__.py
new file mode 100644
index 00000000..008a8103
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/__init__.py
@@ -0,0 +1,61 @@
+# -*- coding: utf-8 -*-
+#
+# SelfTest/Hash/__init__.py: Self-test for hash modules
+#
+# Written in 2008 by Dwayne C. Litzenberger
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""Self-test for hash modules"""
+
+__revision__ = "$Id$"
+
+def get_tests(config={}):
+    tests = []
+    from Crypto.SelfTest.Hash import test_HMAC; tests += test_HMAC.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_CMAC; tests += test_CMAC.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_MD2; tests += test_MD2.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_MD4; tests += test_MD4.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_MD5; tests += test_MD5.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_RIPEMD160; tests += test_RIPEMD160.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_SHA1; tests += test_SHA1.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_SHA224; tests += test_SHA224.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_SHA256; tests += test_SHA256.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_SHA384; tests += test_SHA384.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_SHA512; tests += test_SHA512.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_SHA3_224; tests += test_SHA3_224.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_SHA3_256; tests += test_SHA3_256.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_SHA3_384; tests += test_SHA3_384.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_SHA3_512; tests += test_SHA3_512.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_keccak; tests += test_keccak.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_SHAKE; tests += test_SHAKE.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_BLAKE2; tests += test_BLAKE2.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_Poly1305; tests += test_Poly1305.get_tests(config=config)
+    from Crypto.SelfTest.Hash import test_cSHAKE; tests += test_cSHAKE.get_tests(config=config)
+ from Crypto.SelfTest.Hash import test_KMAC; tests += test_KMAC.get_tests(config=config) + from Crypto.SelfTest.Hash import test_TupleHash; tests += test_TupleHash.get_tests(config=config) + from Crypto.SelfTest.Hash import test_KangarooTwelve; tests += test_KangarooTwelve.get_tests(config=config) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/common.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/common.py new file mode 100644 index 00000000..15786671 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/common.py @@ -0,0 +1,290 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/common.py: Common code for Crypto.SelfTest.Hash +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-testing for PyCrypto hash modules""" + +import re +import sys +import unittest +import binascii +import Crypto.Hash +from binascii import hexlify, unhexlify +from Crypto.Util.py3compat import b, tobytes +from Crypto.Util.strxor import strxor_c + +def t2b(hex_string): + shorter = re.sub(br'\s+', b'', tobytes(hex_string)) + return unhexlify(shorter) + + +class HashDigestSizeSelfTest(unittest.TestCase): + + def __init__(self, hashmod, description, expected, extra_params): + unittest.TestCase.__init__(self) + self.hashmod = hashmod + self.expected = expected + self.description = description + self.extra_params = extra_params + + def shortDescription(self): + return self.description + + def runTest(self): + if "truncate" not in self.extra_params: + self.assertTrue(hasattr(self.hashmod, "digest_size")) + self.assertEqual(self.hashmod.digest_size, self.expected) + h = self.hashmod.new(**self.extra_params) + self.assertTrue(hasattr(h, "digest_size")) + self.assertEqual(h.digest_size, self.expected) + + +class HashSelfTest(unittest.TestCase): + + def __init__(self, hashmod, description, expected, input, extra_params): + unittest.TestCase.__init__(self) + self.hashmod = hashmod + self.expected = expected.lower() + self.input = input + self.description = description + self.extra_params = extra_params + + def shortDescription(self): + return self.description + + def runTest(self): + h = self.hashmod.new(**self.extra_params) + h.update(self.input) + + out1 = binascii.b2a_hex(h.digest()) + out2 = h.hexdigest() + + h = self.hashmod.new(self.input, **self.extra_params) + + out3 = h.hexdigest() + out4 = binascii.b2a_hex(h.digest()) + + # PY3K: hexdigest() should return str(), and digest() bytes + self.assertEqual(self.expected, out1) # h = .new(); h.update(data); h.digest() + if sys.version_info[0] == 2: + self.assertEqual(self.expected, out2) # h = .new(); h.update(data); h.hexdigest() + self.assertEqual(self.expected, out3) # h = .new(data); h.hexdigest() + else: + self.assertEqual(self.expected.decode(), out2) # h = .new(); h.update(data); h.hexdigest() + self.assertEqual(self.expected.decode(), out3) # h = .new(data); h.hexdigest() + self.assertEqual(self.expected, out4) # h = .new(data); h.digest() + + # Verify that the .new() method produces a fresh hash object, except + # for MD5 and SHA1, which are hashlib objects. (But test any .new() + # method that does exist.) 
+ if self.hashmod.__name__ not in ('Crypto.Hash.MD5', 'Crypto.Hash.SHA1') or hasattr(h, 'new'): + h2 = h.new() + h2.update(self.input) + out5 = binascii.b2a_hex(h2.digest()) + self.assertEqual(self.expected, out5) + + +class HashTestOID(unittest.TestCase): + def __init__(self, hashmod, oid, extra_params): + unittest.TestCase.__init__(self) + self.hashmod = hashmod + self.oid = oid + self.extra_params = extra_params + + def runTest(self): + h = self.hashmod.new(**self.extra_params) + self.assertEqual(h.oid, self.oid) + + +class ByteArrayTest(unittest.TestCase): + + def __init__(self, module, extra_params): + unittest.TestCase.__init__(self) + self.module = module + self.extra_params = extra_params + + def runTest(self): + data = b("\x00\x01\x02") + + # Data can be a bytearray (during initialization) + ba = bytearray(data) + + h1 = self.module.new(data, **self.extra_params) + h2 = self.module.new(ba, **self.extra_params) + ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a bytearray (during operation) + ba = bytearray(data) + + h1 = self.module.new(**self.extra_params) + h2 = self.module.new(**self.extra_params) + + h1.update(data) + h2.update(ba) + + ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +class MemoryViewTest(unittest.TestCase): + + def __init__(self, module, extra_params): + unittest.TestCase.__init__(self) + self.module = module + self.extra_params = extra_params + + def runTest(self): + + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in get_mv_ro, get_mv_rw: + + # Data can be a memoryview (during initialization) + mv = get_mv(data) + + h1 = self.module.new(data, **self.extra_params) + h2 = self.module.new(mv, **self.extra_params) + if not mv.readonly: + mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a memoryview (during operation) + mv = get_mv(data) + + h1 = self.module.new(**self.extra_params) + h2 = self.module.new(**self.extra_params) + h1.update(data) + h2.update(mv) + if not mv.readonly: + mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +class MACSelfTest(unittest.TestCase): + + def __init__(self, module, description, result, data, key, params): + unittest.TestCase.__init__(self) + self.module = module + self.result = t2b(result) + self.data = t2b(data) + self.key = t2b(key) + self.params = params + self.description = description + + def shortDescription(self): + return self.description + + def runTest(self): + + result_hex = hexlify(self.result) + + # Verify result + h = self.module.new(self.key, **self.params) + h.update(self.data) + self.assertEqual(self.result, h.digest()) + self.assertEqual(hexlify(self.result).decode('ascii'), h.hexdigest()) + + # Verify that correct MAC does not raise any exception + h.verify(self.result) + h.hexverify(result_hex) + + # Verify that incorrect MAC does raise ValueError exception + wrong_mac = strxor_c(self.result, 255) + self.assertRaises(ValueError, h.verify, wrong_mac) + self.assertRaises(ValueError, h.hexverify, "4556") + + # Verify again, with data passed to new() + h = self.module.new(self.key, self.data, **self.params) + self.assertEqual(self.result, h.digest()) + self.assertEqual(hexlify(self.result).decode('ascii'), h.hexdigest()) + + # Test .copy() + try: + h = self.module.new(self.key, self.data, **self.params) + h2 = h.copy() + h3 = h.copy() + + # Verify that changing the copy does not change the original + 
h2.update(b"bla") + self.assertEqual(h3.digest(), self.result) + + # Verify that both can reach the same state + h.update(b"bla") + self.assertEqual(h.digest(), h2.digest()) + except NotImplementedError: + pass + + # PY3K: Check that hexdigest() returns str and digest() returns bytes + self.assertTrue(isinstance(h.digest(), type(b""))) + self.assertTrue(isinstance(h.hexdigest(), type(""))) + + # PY3K: Check that .hexverify() accepts bytes or str + h.hexverify(h.hexdigest()) + h.hexverify(h.hexdigest().encode('ascii')) + + +def make_hash_tests(module, module_name, test_data, digest_size, oid=None, + extra_params={}): + tests = [] + for i in range(len(test_data)): + row = test_data[i] + (expected, input) = map(tobytes,row[0:2]) + if len(row) < 3: + description = repr(input) + else: + description = row[2] + name = "%s #%d: %s" % (module_name, i+1, description) + tests.append(HashSelfTest(module, name, expected, input, extra_params)) + + name = "%s #%d: digest_size" % (module_name, len(test_data) + 1) + tests.append(HashDigestSizeSelfTest(module, name, digest_size, extra_params)) + + if oid is not None: + tests.append(HashTestOID(module, oid, extra_params)) + + tests.append(ByteArrayTest(module, extra_params)) + + tests.append(MemoryViewTest(module, extra_params)) + + return tests + + +def make_mac_tests(module, module_name, test_data): + tests = [] + for i, row in enumerate(test_data): + if len(row) == 4: + (key, data, results, description, params) = list(row) + [ {} ] + else: + (key, data, results, description, params) = row + name = "%s #%d: %s" % (module_name, i+1, description) + tests.append(MACSelfTest(module, name, results, data, key, params)) + return tests + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_BLAKE2.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_BLAKE2.py new file mode 100644 index 00000000..f953eabd --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_BLAKE2.py @@ -0,0 +1,482 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import os +import re +import unittest +import warnings +from binascii import unhexlify, hexlify + +from Crypto.Util.py3compat import tobytes +from Crypto.Util.strxor import strxor_c +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Hash import BLAKE2b, BLAKE2s + + +class Blake2Test(unittest.TestCase): + + def test_new_positive(self): + + h = self.BLAKE2.new(digest_bits=self.max_bits) + for new_func in self.BLAKE2.new, h.new: + + for dbits in range(8, self.max_bits + 1, 8): + hobj = new_func(digest_bits=dbits) + self.assertEqual(hobj.digest_size, dbits // 8) + + for dbytes in range(1, self.max_bytes + 1): + hobj = new_func(digest_bytes=dbytes) + self.assertEqual(hobj.digest_size, dbytes) + + digest1 = new_func(data=b"\x90", digest_bytes=self.max_bytes).digest() + digest2 = new_func(digest_bytes=self.max_bytes).update(b"\x90").digest() + self.assertEqual(digest1, digest2) + + new_func(data=b"A", key=b"5", digest_bytes=self.max_bytes) + + hobj = h.new() + self.assertEqual(hobj.digest_size, self.max_bytes) + + def test_new_negative(self): + + h = self.BLAKE2.new(digest_bits=self.max_bits) + for new_func in self.BLAKE2.new, h.new: + self.assertRaises(TypeError, new_func, + digest_bytes=self.max_bytes, + digest_bits=self.max_bits) + self.assertRaises(ValueError, new_func, digest_bytes=0) + self.assertRaises(ValueError, new_func, + digest_bytes=self.max_bytes + 1) + self.assertRaises(ValueError, new_func, digest_bits=7) + self.assertRaises(ValueError, new_func, digest_bits=15) + self.assertRaises(ValueError, new_func, + digest_bits=self.max_bits + 1) + self.assertRaises(TypeError, new_func, + digest_bytes=self.max_bytes, + key=u"string") + self.assertRaises(TypeError, new_func, + digest_bytes=self.max_bytes, + data=u"string") + + def test_default_digest_size(self): + digest = self.BLAKE2.new(data=b'abc').digest() + self.assertEqual(len(digest), self.max_bytes) + + def test_update(self): + pieces = [b"\x0A" * 200, b"\x14" * 300] + h = self.BLAKE2.new(digest_bytes=self.max_bytes) + h.update(pieces[0]).update(pieces[1]) + digest = h.digest() + h = self.BLAKE2.new(digest_bytes=self.max_bytes) + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.digest(), digest) + + def test_update_negative(self): + h = self.BLAKE2.new(digest_bytes=self.max_bytes) + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = self.BLAKE2.new(digest_bytes=self.max_bytes) + digest = h.digest() + + # hexdigest does not change the state + self.assertEqual(h.digest(), digest) + # digest returns a byte string + self.assertTrue(isinstance(digest, type(b"digest"))) + + def test_update_after_digest(self): + msg = b"rrrrttt" + + # Normally, update() cannot be done after digest() + h = self.BLAKE2.new(digest_bits=256, data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = self.BLAKE2.new(digest_bits=256, data=msg).digest() + + # With the proper flag, it is allowed + h = self.BLAKE2.new(digest_bits=256, data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... 
and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + def test_hex_digest(self): + mac = self.BLAKE2.new(digest_bits=self.max_bits) + digest = mac.digest() + hexdigest = mac.hexdigest() + + # hexdigest is equivalent to digest + self.assertEqual(hexlify(digest), tobytes(hexdigest)) + # hexdigest does not change the state + self.assertEqual(mac.hexdigest(), hexdigest) + # hexdigest returns a string + self.assertTrue(isinstance(hexdigest, type("digest"))) + + def test_verify(self): + h = self.BLAKE2.new(digest_bytes=self.max_bytes, key=b"4") + mac = h.digest() + h.verify(mac) + wrong_mac = strxor_c(mac, 255) + self.assertRaises(ValueError, h.verify, wrong_mac) + + def test_hexverify(self): + h = self.BLAKE2.new(digest_bytes=self.max_bytes, key=b"4") + mac = h.hexdigest() + h.hexverify(mac) + self.assertRaises(ValueError, h.hexverify, "4556") + + def test_oid(self): + + prefix = "1.3.6.1.4.1.1722.12.2." + self.oid_variant + "." + + for digest_bits in self.digest_bits_oid: + h = self.BLAKE2.new(digest_bits=digest_bits) + self.assertEqual(h.oid, prefix + str(digest_bits // 8)) + + h = self.BLAKE2.new(digest_bits=digest_bits, key=b"secret") + self.assertRaises(AttributeError, lambda: h.oid) + + for digest_bits in (8, self.max_bits): + if digest_bits in self.digest_bits_oid: + continue + self.assertRaises(AttributeError, lambda: h.oid) + + def test_bytearray(self): + + key = b'0' * 16 + data = b"\x00\x01\x02" + + # Data and key can be a bytearray (during initialization) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = self.BLAKE2.new(data=data, key=key) + h2 = self.BLAKE2.new(data=data_ba, key=key_ba) + key_ba[:1] = b'\xFF' + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a bytearray (during operation) + data_ba = bytearray(data) + + h1 = self.BLAKE2.new() + h2 = self.BLAKE2.new() + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + def test_memoryview(self): + + key = b'0' * 16 + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data and key can be a memoryview (during initialization) + key_mv = get_mv(key) + data_mv = get_mv(data) + + h1 = self.BLAKE2.new(data=data, key=key) + h2 = self.BLAKE2.new(data=data_mv, key=key_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + key_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = self.BLAKE2.new() + h2 = self.BLAKE2.new() + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + +class Blake2bTest(Blake2Test): + #: Module + BLAKE2 = BLAKE2b + #: Max output size (in bits) + max_bits = 512 + #: Max output size (in bytes) + max_bytes = 64 + #: Bit size of the digests for which an ASN OID exists + digest_bits_oid = (160, 256, 384, 512) + # http://tools.ietf.org/html/draft-saarinen-blake2-02 + oid_variant = "1" + + +class Blake2sTest(Blake2Test): + #: Module + BLAKE2 = BLAKE2s + #: Max output size (in bits) + max_bits = 256 + #: Max output size (in bytes) + max_bytes = 32 + #: Bit size of the digests for which an ASN OID exists + digest_bits_oid = (128, 160, 224, 256) + # http://tools.ietf.org/html/draft-saarinen-blake2-02 + oid_variant = "2" + + 
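+# Illustrative sketch (editorial addition, not part of the upstream file):
+# the parametrized tests above reduce to two call patterns -- BLAKE2 as a
+# plain hash sized via digest_bytes, and keyed BLAKE2 as a MAC checked with
+# verify(). The key and message literals are arbitrary example values, and
+# this helper is never invoked by the test suite.
+def _blake2_usage_sketch():
+    # Unkeyed: request a 32-byte BLAKE2s digest of a short message.
+    h = BLAKE2s.new(digest_bytes=32, data=b"message")
+    assert len(h.digest()) == 32
+    # Keyed: the same primitive doubles as a MAC; verify() raises
+    # ValueError if the tag does not match.
+    mac = BLAKE2s.new(digest_bytes=32, key=b"secret", data=b"message")
+    tag = mac.digest()
+    BLAKE2s.new(digest_bytes=32, key=b"secret", data=b"message").verify(tag)
+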
+class Blake2OfficialTestVector(unittest.TestCase): + + def _load_tests(self, test_vector_file): + expected = "in" + test_vectors = [] + with open(test_vector_file, "rt") as test_vector_fd: + for line_number, line in enumerate(test_vector_fd): + + if line.strip() == "" or line.startswith("#"): + continue + + res = re.match("%s:\t([0-9A-Fa-f]*)" % expected, line) + if not res: + raise ValueError("Incorrect test vector format (line %d)" + % line_number) + + if res.group(1): + bin_value = unhexlify(tobytes(res.group(1))) + else: + bin_value = b"" + if expected == "in": + input_data = bin_value + expected = "key" + elif expected == "key": + key = bin_value + expected = "hash" + else: + result = bin_value + expected = "in" + test_vectors.append((input_data, key, result)) + return test_vectors + + def setUp(self): + + dir_comps = ("Hash", self.name) + file_name = self.name.lower() + "-test.txt" + self.description = "%s tests" % self.name + + try: + import pycryptodome_test_vectors # type: ignore + except ImportError: + warnings.warn("Warning: skipping extended tests for %s" % self.name, + UserWarning) + self.test_vectors = [] + return + + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + self.test_vectors = self._load_tests(full_file_name) + + def runTest(self): + for (input_data, key, result) in self.test_vectors: + mac = self.BLAKE2.new(key=key, digest_bytes=self.max_bytes) + mac.update(input_data) + self.assertEqual(mac.digest(), result) + + +class Blake2bOfficialTestVector(Blake2OfficialTestVector): + #: Module + BLAKE2 = BLAKE2b + #: Hash name + name = "BLAKE2b" + #: Max digest size + max_bytes = 64 + + +class Blake2sOfficialTestVector(Blake2OfficialTestVector): + #: Module + BLAKE2 = BLAKE2s + #: Hash name + name = "BLAKE2s" + #: Max digest size + max_bytes = 32 + + +class Blake2TestVector1(unittest.TestCase): + + def _load_tests(self, test_vector_file): + test_vectors = [] + with open(test_vector_file, "rt") as test_vector_fd: + for line_number, line in enumerate(test_vector_fd): + if line.strip() == "" or line.startswith("#"): + continue + res = re.match("digest: ([0-9A-Fa-f]*)", line) + if not res: + raise ValueError("Incorrect test vector format (line %d)" + % line_number) + + test_vectors.append(unhexlify(tobytes(res.group(1)))) + return test_vectors + + def setUp(self): + dir_comps = ("Hash", self.name) + file_name = "tv1.txt" + self.description = "%s tests" % self.name + + try: + import pycryptodome_test_vectors + except ImportError: + warnings.warn("Warning: skipping extended tests for %s" % self.name, + UserWarning) + self.test_vectors = [] + return + + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + self.test_vectors = self._load_tests(full_file_name) + + def runTest(self): + + for tv in self.test_vectors: + digest_bytes = len(tv) + next_data = b"" + for _ in range(100): + h = self.BLAKE2.new(digest_bytes=digest_bytes) + h.update(next_data) + next_data = h.digest() + next_data + self.assertEqual(h.digest(), tv) + + +class Blake2bTestVector1(Blake2TestVector1): + #: Module + BLAKE2 = BLAKE2b + #: Hash name + name = "BLAKE2b" + + +class Blake2sTestVector1(Blake2TestVector1): + #: Module + BLAKE2 = BLAKE2s + #: Hash name + name = "BLAKE2s" + + +class Blake2TestVector2(unittest.TestCase): + + def _load_tests(self, test_vector_file): + test_vectors = [] + with open(test_vector_file, "rt") as 
test_vector_fd:
+            for line_number, line in enumerate(test_vector_fd):
+                if line.strip() == "" or line.startswith("#"):
+                    continue
+                res = re.match(r"digest\(([0-9]+)\): ([0-9A-Fa-f]*)", line)
+                if not res:
+                    raise ValueError("Incorrect test vector format (line %d)"
+                                     % line_number)
+                key_size = int(res.group(1))
+                result = unhexlify(tobytes(res.group(2)))
+                test_vectors.append((key_size, result))
+        return test_vectors
+
+    def setUp(self):
+        dir_comps = ("Hash", self.name)
+        file_name = "tv2.txt"
+        self.description = "%s tests" % self.name
+
+        try:
+            import pycryptodome_test_vectors  # type: ignore
+        except ImportError:
+            warnings.warn("Warning: skipping extended tests for %s" % self.name,
+                          UserWarning)
+            self.test_vectors = []
+            return
+
+        init_dir = os.path.dirname(pycryptodome_test_vectors.__file__)
+        full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name)
+        self.test_vectors = self._load_tests(full_file_name)
+
+    def runTest(self):
+
+        for key_size, result in self.test_vectors:
+            next_data = b""
+            for _ in range(100):
+                h = self.BLAKE2.new(digest_bytes=self.max_bytes,
+                                    key=b"A" * key_size)
+                h.update(next_data)
+                next_data = h.digest() + next_data
+            self.assertEqual(h.digest(), result)
+
+
+class Blake2bTestVector2(Blake2TestVector2):
+    #: Module
+    BLAKE2 = BLAKE2b
+    #: Hash name
+    name = "BLAKE2b"
+    #: Max digest size in bytes
+    max_bytes = 64
+
+
+class Blake2sTestVector2(Blake2TestVector2):
+    #: Module
+    BLAKE2 = BLAKE2s
+    #: Hash name
+    name = "BLAKE2s"
+    #: Max digest size in bytes
+    max_bytes = 32
+
+
+def get_tests(config={}):
+    tests = []
+
+    tests += list_test_cases(Blake2bTest)
+    tests.append(Blake2bOfficialTestVector())
+    tests.append(Blake2bTestVector1())
+    tests.append(Blake2bTestVector2())
+
+    tests += list_test_cases(Blake2sTest)
+    tests.append(Blake2sOfficialTestVector())
+    tests.append(Blake2sTestVector1())
+    tests.append(Blake2sTestVector2())
+
+    return tests
+
+
+if __name__ == '__main__':
+    import unittest
+    def suite():
+        return unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_CMAC.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_CMAC.py
new file mode 100644
index 00000000..f4763f21
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_CMAC.py
@@ -0,0 +1,448 @@
+#
+# SelfTest/Hash/CMAC.py: Self-test for the CMAC module
+#
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.CMAC""" + +import json +import unittest +from binascii import unhexlify + +from Crypto.Util.py3compat import tobytes + +from Crypto.Hash import CMAC +from Crypto.Cipher import AES, DES3 +from Crypto.Hash import SHAKE128 + +from Crypto.Util.strxor import strxor + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors_wycheproof + +# This is a list of (key, data, result, description, module) tuples. +test_data = [ + + ## Test vectors from RFC 4493 ## + ## The are also in NIST SP 800 38B D.2 ## + ( '2b7e151628aed2a6abf7158809cf4f3c', + '', + 'bb1d6929e95937287fa37d129b756746', + 'RFC 4493 #1', + AES + ), + + ( '2b7e151628aed2a6abf7158809cf4f3c', + '6bc1bee22e409f96e93d7e117393172a', + '070a16b46b4d4144f79bdd9dd04a287c', + 'RFC 4493 #2', + AES + ), + + ( '2b7e151628aed2a6abf7158809cf4f3c', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411', + 'dfa66747de9ae63030ca32611497c827', + 'RFC 4493 #3', + AES + ), + + ( '2b7e151628aed2a6abf7158809cf4f3c', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411e5fbc1191a0a52ef'+ + 'f69f2445df4f9b17ad2b417be66c3710', + '51f0bebf7e3b9d92fc49741779363cfe', + 'RFC 4493 #4', + AES + ), + + ## The rest of Appendix D of NIST SP 800 38B + ## was not totally correct. + ## Values in Examples 14, 15, 18, and 19 were wrong. 
+ ## The updated test values are published in: + ## http://csrc.nist.gov/publications/nistpubs/800-38B/Updated_CMAC_Examples.pdf + + ( '8e73b0f7da0e6452c810f32b809079e5'+ + '62f8ead2522c6b7b', + '', + 'd17ddf46adaacde531cac483de7a9367', + 'NIST SP 800 38B D.2 Example 5', + AES + ), + + ( '8e73b0f7da0e6452c810f32b809079e5'+ + '62f8ead2522c6b7b', + '6bc1bee22e409f96e93d7e117393172a', + '9e99a7bf31e710900662f65e617c5184', + 'NIST SP 800 38B D.2 Example 6', + AES + ), + + ( '8e73b0f7da0e6452c810f32b809079e5'+ + '62f8ead2522c6b7b', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411', + '8a1de5be2eb31aad089a82e6ee908b0e', + 'NIST SP 800 38B D.2 Example 7', + AES + ), + + ( '8e73b0f7da0e6452c810f32b809079e5'+ + '62f8ead2522c6b7b', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411e5fbc1191a0a52ef'+ + 'f69f2445df4f9b17ad2b417be66c3710', + 'a1d5df0eed790f794d77589659f39a11', + 'NIST SP 800 38B D.2 Example 8', + AES + ), + + ( '603deb1015ca71be2b73aef0857d7781'+ + '1f352c073b6108d72d9810a30914dff4', + '', + '028962f61b7bf89efc6b551f4667d983', + 'NIST SP 800 38B D.3 Example 9', + AES + ), + + ( '603deb1015ca71be2b73aef0857d7781'+ + '1f352c073b6108d72d9810a30914dff4', + '6bc1bee22e409f96e93d7e117393172a', + '28a7023f452e8f82bd4bf28d8c37c35c', + 'NIST SP 800 38B D.3 Example 10', + AES + ), + + ( '603deb1015ca71be2b73aef0857d7781'+ + '1f352c073b6108d72d9810a30914dff4', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411', + 'aaf3d8f1de5640c232f5b169b9c911e6', + 'NIST SP 800 38B D.3 Example 11', + AES + ), + + ( '603deb1015ca71be2b73aef0857d7781'+ + '1f352c073b6108d72d9810a30914dff4', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411e5fbc1191a0a52ef'+ + 'f69f2445df4f9b17ad2b417be66c3710', + 'e1992190549f6ed5696a2c056c315410', + 'NIST SP 800 38B D.3 Example 12', + AES + ), + + ( '8aa83bf8cbda1062'+ + '0bc1bf19fbb6cd58'+ + 'bc313d4a371ca8b5', + '', + 'b7a688e122ffaf95', + 'NIST SP 800 38B D.4 Example 13', + DES3 + ), + + ( '8aa83bf8cbda1062'+ + '0bc1bf19fbb6cd58'+ + 'bc313d4a371ca8b5', + '6bc1bee22e409f96', + '8e8f293136283797', + 'NIST SP 800 38B D.4 Example 14', + DES3 + ), + + ( '8aa83bf8cbda1062'+ + '0bc1bf19fbb6cd58'+ + 'bc313d4a371ca8b5', + '6bc1bee22e409f96'+ + 'e93d7e117393172a'+ + 'ae2d8a57', + '743ddbe0ce2dc2ed', + 'NIST SP 800 38B D.4 Example 15', + DES3 + ), + + ( '8aa83bf8cbda1062'+ + '0bc1bf19fbb6cd58'+ + 'bc313d4a371ca8b5', + '6bc1bee22e409f96'+ + 'e93d7e117393172a'+ + 'ae2d8a571e03ac9c'+ + '9eb76fac45af8e51', + '33e6b1092400eae5', + 'NIST SP 800 38B D.4 Example 16', + DES3 + ), + + ( '4cf15134a2850dd5'+ + '8a3d10ba80570d38', + '', + 'bd2ebf9a3ba00361', + 'NIST SP 800 38B D.7 Example 17', + DES3 + ), + + ( '4cf15134a2850dd5'+ + '8a3d10ba80570d38', + '6bc1bee22e409f96', + '4ff2ab813c53ce83', + 'NIST SP 800 38B D.7 Example 18', + DES3 + ), + + ( '4cf15134a2850dd5'+ + '8a3d10ba80570d38', + '6bc1bee22e409f96'+ + 'e93d7e117393172a'+ + 'ae2d8a57', + '62dd1b471902bd4e', + 'NIST SP 800 38B D.7 Example 19', + DES3 + ), + + ( '4cf15134a2850dd5'+ + '8a3d10ba80570d38', + '6bc1bee22e409f96'+ + 'e93d7e117393172a'+ + 'ae2d8a571e03ac9c'+ + '9eb76fac45af8e51', + '31b1e431dabc4eb8', + 'NIST SP 800 38B D.7 Example 20', + DES3 + ), + +] + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class TestCMAC(unittest.TestCase): + + def test_internal_caching(self): + """Verify that internal 
caching is implemented correctly""" + + data_to_mac = get_tag_random("data_to_mac", 128) + key = get_tag_random("key", 16) + ref_mac = CMAC.new(key, msg=data_to_mac, ciphermod=AES).digest() + + # Break up in chunks of different length + # The result must always be the same + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + chunks = [data_to_mac[i:i+chunk_length] for i in + range(0, len(data_to_mac), chunk_length)] + + mac = CMAC.new(key, ciphermod=AES) + for chunk in chunks: + mac.update(chunk) + self.assertEqual(ref_mac, mac.digest()) + + def test_update_after_digest(self): + msg = b"rrrrttt" + key = b"4" * 16 + + # Normally, update() cannot be done after digest() + h = CMAC.new(key, msg[:4], ciphermod=AES) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = CMAC.new(key, msg, ciphermod=AES).digest() + + # With the proper flag, it is allowed + h2 = CMAC.new(key, msg[:4], ciphermod=AES, update_after_digest=True) + self.assertEqual(h2.digest(), dig1) + # ... and the subsequent digest applies to the entire message + # up to that point + h2.update(msg[4:]) + self.assertEqual(h2.digest(), dig2) + + +class ByteArrayTests(unittest.TestCase): + + def runTest(self): + + key = b"0" * 16 + data = b"\x00\x01\x02" + + # Data and key can be a bytearray (during initialization) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = CMAC.new(key, data, ciphermod=AES) + h2 = CMAC.new(key_ba, data_ba, ciphermod=AES) + key_ba[:1] = b'\xFF' + data_ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a bytearray (during operation) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = CMAC.new(key, ciphermod=AES) + h2 = CMAC.new(key, ciphermod=AES) + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +class MemoryViewTests(unittest.TestCase): + + def runTest(self): + + key = b"0" * 16 + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data and key can be a memoryview (during initialization) + key_mv = get_mv(key) + data_mv = get_mv(data) + + h1 = CMAC.new(key, data, ciphermod=AES) + h2 = CMAC.new(key_mv, data_mv, ciphermod=AES) + if not data_mv.readonly: + key_mv[:1] = b'\xFF' + data_mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = CMAC.new(key, ciphermod=AES) + h2 = CMAC.new(key, ciphermod=AES) + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" + + def setUp(self): + + def filter_tag(group): + return group['tagSize'] // 8 + + self.tv = load_test_vectors_wycheproof(("Hash", "wycheproof"), + "aes_cmac_test.json", + "Wycheproof CMAC", + group_tag={'tag_size': filter_tag}) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_create_mac(self, tv): + self._id = "Wycheproof MAC creation Test #" + str(tv.id) + + try: + tag = CMAC.new(tv.key, tv.msg, ciphermod=AES, mac_len=tv.tag_size).digest() + except 
ValueError as e: + if len(tv.key) not in (16, 24, 32) and "key length" in str(e): + return + raise e + if tv.valid: + self.assertEqual(tag, tv.tag) + self.warn(tv) + + def test_verify_mac(self, tv): + self._id = "Wycheproof MAC verification Test #" + str(tv.id) + + try: + mac = CMAC.new(tv.key, tv.msg, ciphermod=AES, mac_len=tv.tag_size) + except ValueError as e: + if len(tv.key) not in (16, 24, 32) and "key length" in str(e): + return + raise e + try: + mac.verify(tv.tag) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.warn(tv) + + def runTest(self): + + for tv in self.tv: + self.test_create_mac(tv) + self.test_verify_mac(tv) + + +def get_tests(config={}): + global test_data + import types + from .common import make_mac_tests + + wycheproof_warnings = config.get('wycheproof_warnings') + + # Add new() parameters to the back of each test vector + params_test_data = [] + for row in test_data: + t = list(row) + t[4] = dict(ciphermod=t[4]) + params_test_data.append(t) + + tests = make_mac_tests(CMAC, "CMAC", params_test_data) + tests.append(ByteArrayTests()) + tests.append(list_test_cases(TestCMAC)) + tests.append(MemoryViewTests()) + tests += [ TestVectorsWycheproof(wycheproof_warnings) ] + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_HMAC.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_HMAC.py new file mode 100644 index 00000000..26b7b247 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_HMAC.py @@ -0,0 +1,548 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/HMAC.py: Self-test for the HMAC module +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.HMAC""" + +import unittest +from binascii import hexlify +from Crypto.Util.py3compat import tostr, tobytes + +from Crypto.Hash import (HMAC, MD5, SHA1, SHA256, + SHA224, SHA384, SHA512, + RIPEMD160, + SHA3_224, SHA3_256, SHA3_384, SHA3_512) + + +hash_modules = dict(MD5=MD5, SHA1=SHA1, SHA256=SHA256, + SHA224=SHA224, SHA384=SHA384, SHA512=SHA512, + RIPEMD160=RIPEMD160, + SHA3_224=SHA3_224, SHA3_256=SHA3_256, + SHA3_384=SHA3_384, SHA3_512=SHA3_512) + +default_hash = None + +def xl(text): + return tostr(hexlify(tobytes(text))) + +# This is a list of (key, data, results, description) tuples. 
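+# Note on the layout (editorial comment): the `results` element of each
+# tuple is a dict mapping a hash module name -- a key of `hash_modules`
+# above -- to the expected MAC in hex. The special name `default_hash`
+# (bound to None) exercises HMAC's default digestmod, which is MD5;
+# get_tests() below expands every vector into one test per listed hash.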
+test_data = [ + ## Test vectors from RFC 2202 ## + # Test that the default hashmod is MD5 + ('0b' * 16, + '4869205468657265', + dict(default_hash='9294727a3638bb1c13f48ef8158bfc9d'), + 'default-is-MD5'), + + # Test case 1 (MD5) + ('0b' * 16, + '4869205468657265', + dict(MD5='9294727a3638bb1c13f48ef8158bfc9d'), + 'RFC 2202 #1-MD5 (HMAC-MD5)'), + + # Test case 1 (SHA1) + ('0b' * 20, + '4869205468657265', + dict(SHA1='b617318655057264e28bc0b6fb378c8ef146be00'), + 'RFC 2202 #1-SHA1 (HMAC-SHA1)'), + + # Test case 2 + ('4a656665', + '7768617420646f2079612077616e7420666f72206e6f7468696e673f', + dict(MD5='750c783e6ab0b503eaa86e310a5db738', + SHA1='effcdf6ae5eb2fa2d27416d5f184df9c259a7c79'), + 'RFC 2202 #2 (HMAC-MD5/SHA1)'), + + # Test case 3 (MD5) + ('aa' * 16, + 'dd' * 50, + dict(MD5='56be34521d144c88dbb8c733f0e8b3f6'), + 'RFC 2202 #3-MD5 (HMAC-MD5)'), + + # Test case 3 (SHA1) + ('aa' * 20, + 'dd' * 50, + dict(SHA1='125d7342b9ac11cd91a39af48aa17b4f63f175d3'), + 'RFC 2202 #3-SHA1 (HMAC-SHA1)'), + + # Test case 4 + ('0102030405060708090a0b0c0d0e0f10111213141516171819', + 'cd' * 50, + dict(MD5='697eaf0aca3a3aea3a75164746ffaa79', + SHA1='4c9007f4026250c6bc8414f9bf50c86c2d7235da'), + 'RFC 2202 #4 (HMAC-MD5/SHA1)'), + + # Test case 5 (MD5) + ('0c' * 16, + '546573742057697468205472756e636174696f6e', + dict(MD5='56461ef2342edc00f9bab995690efd4c'), + 'RFC 2202 #5-MD5 (HMAC-MD5)'), + + # Test case 5 (SHA1) + # NB: We do not implement hash truncation, so we only test the full hash here. + ('0c' * 20, + '546573742057697468205472756e636174696f6e', + dict(SHA1='4c1a03424b55e07fe7f27be1d58bb9324a9a5a04'), + 'RFC 2202 #5-SHA1 (HMAC-SHA1)'), + + # Test case 6 + ('aa' * 80, + '54657374205573696e67204c6172676572205468616e20426c6f636b2d53697a' + + '65204b6579202d2048617368204b6579204669727374', + dict(MD5='6b1ab7fe4bd7bf8f0b62e6ce61b9d0cd', + SHA1='aa4ae5e15272d00e95705637ce8a3b55ed402112'), + 'RFC 2202 #6 (HMAC-MD5/SHA1)'), + + # Test case 7 + ('aa' * 80, + '54657374205573696e67204c6172676572205468616e20426c6f636b2d53697a' + + '65204b657920616e64204c6172676572205468616e204f6e6520426c6f636b2d' + + '53697a652044617461', + dict(MD5='6f630fad67cda0ee1fb1f562db3aa53e', + SHA1='e8e99d0f45237d786d6bbaa7965c7808bbff1a91'), + 'RFC 2202 #7 (HMAC-MD5/SHA1)'), + + ## Test vectors from RFC 4231 ## + # 4.2. Test Case 1 + ('0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b', + '4869205468657265', + dict(SHA256=''' + b0344c61d8db38535ca8afceaf0bf12b + 881dc200c9833da726e9376c2e32cff7 + '''), + 'RFC 4231 #1 (HMAC-SHA256)'), + + # 4.3. Test Case 2 - Test with a key shorter than the length of the HMAC + # output. + ('4a656665', + '7768617420646f2079612077616e7420666f72206e6f7468696e673f', + dict(SHA256=''' + 5bdcc146bf60754e6a042426089575c7 + 5a003f089d2739839dec58b964ec3843 + '''), + 'RFC 4231 #2 (HMAC-SHA256)'), + + # 4.4. Test Case 3 - Test with a combined length of key and data that is + # larger than 64 bytes (= block-size of SHA-224 and SHA-256). + ('aa' * 20, + 'dd' * 50, + dict(SHA256=''' + 773ea91e36800e46854db8ebd09181a7 + 2959098b3ef8c122d9635514ced565fe + '''), + 'RFC 4231 #3 (HMAC-SHA256)'), + + # 4.5. Test Case 4 - Test with a combined length of key and data that is + # larger than 64 bytes (= block-size of SHA-224 and SHA-256). + ('0102030405060708090a0b0c0d0e0f10111213141516171819', + 'cd' * 50, + dict(SHA256=''' + 82558a389a443c0ea4cc819899f2083a + 85f0faa3e578f8077a2e3ff46729665b + '''), + 'RFC 4231 #4 (HMAC-SHA256)'), + + # 4.6. Test Case 5 - Test with a truncation of output to 128 bits. 
+ # + # Not included because we do not implement hash truncation. + # + + # 4.7. Test Case 6 - Test with a key larger than 128 bytes (= block-size of + # SHA-384 and SHA-512). + ('aa' * 131, + '54657374205573696e67204c6172676572205468616e20426c6f636b2d53697a' + + '65204b6579202d2048617368204b6579204669727374', + dict(SHA256=''' + 60e431591ee0b67f0d8a26aacbf5b77f + 8e0bc6213728c5140546040f0ee37f54 + '''), + 'RFC 4231 #6 (HMAC-SHA256)'), + + # 4.8. Test Case 7 - Test with a key and data that is larger than 128 bytes + # (= block-size of SHA-384 and SHA-512). + ('aa' * 131, + '5468697320697320612074657374207573696e672061206c6172676572207468' + + '616e20626c6f636b2d73697a65206b657920616e642061206c61726765722074' + + '68616e20626c6f636b2d73697a6520646174612e20546865206b6579206e6565' + + '647320746f20626520686173686564206265666f7265206265696e6720757365' + + '642062792074686520484d414320616c676f726974686d2e', + dict(SHA256=''' + 9b09ffa71b942fcb27635fbcd5b0e944 + bfdc63644f0713938a7f51535c3a35e2 + '''), + 'RFC 4231 #7 (HMAC-SHA256)'), + + # Test case 8 (SHA224) + ('4a656665', + '7768617420646f2079612077616e74' + + '20666f72206e6f7468696e673f', + dict(SHA224='a30e01098bc6dbbf45690f3a7e9e6d0f8bbea2a39e6148008fd05e44'), + 'RFC 4634 8.4 SHA224 (HMAC-SHA224)'), + + # Test case 9 (SHA384) + ('4a656665', + '7768617420646f2079612077616e74' + + '20666f72206e6f7468696e673f', + dict(SHA384='af45d2e376484031617f78d2b58a6b1b9c7ef464f5a01b47e42ec3736322445e8e2240ca5e69e2c78b3239ecfab21649'), + 'RFC 4634 8.4 SHA384 (HMAC-SHA384)'), + + # Test case 10 (SHA512) + ('4a656665', + '7768617420646f2079612077616e74' + + '20666f72206e6f7468696e673f', + dict(SHA512='164b7a7bfcf819e2e395fbe73b56e0a387bd64222e831fd610270cd7ea2505549758bf75c05a994a6d034f65f8f0e6fdcaeab1a34d4a6b4b636e070a38bce737'), + 'RFC 4634 8.4 SHA512 (HMAC-SHA512)'), + + # Test case 11 (RIPEMD) + ('0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b', + xl("Hi There"), + dict(RIPEMD160='24cb4bd67d20fc1a5d2ed7732dcc39377f0a5668'), + 'RFC 2286 #1 (HMAC-RIPEMD)'), + + # Test case 12 (RIPEMD) + (xl("Jefe"), + xl("what do ya want for nothing?"), + dict(RIPEMD160='dda6c0213a485a9e24f4742064a7f033b43c4069'), + 'RFC 2286 #2 (HMAC-RIPEMD)'), + + # Test case 13 (RIPEMD) + ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', + 'dd' * 50, + dict(RIPEMD160='b0b105360de759960ab4f35298e116e295d8e7c1'), + 'RFC 2286 #3 (HMAC-RIPEMD)'), + + # Test case 14 (RIPEMD) + ('0102030405060708090a0b0c0d0e0f10111213141516171819', + 'cd' * 50, + dict(RIPEMD160='d5ca862f4d21d5e610e18b4cf1beb97a4365ecf4'), + 'RFC 2286 #4 (HMAC-RIPEMD)'), + + # Test case 15 (RIPEMD) + ('0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c', + xl("Test With Truncation"), + dict(RIPEMD160='7619693978f91d90539ae786500ff3d8e0518e39'), + 'RFC 2286 #5 (HMAC-RIPEMD)'), + + # Test case 16 (RIPEMD) + ('aa' * 80, + xl("Test Using Larger Than Block-Size Key - Hash Key First"), + dict(RIPEMD160='6466ca07ac5eac29e1bd523e5ada7605b791fd8b'), + 'RFC 2286 #6 (HMAC-RIPEMD)'), + + # Test case 17 (RIPEMD) + ('aa' * 80, + xl("Test Using Larger Than Block-Size Key and Larger Than One Block-Size Data"), + dict(RIPEMD160='69ea60798d71616cce5fd0871e23754cd75d5a0a'), + 'RFC 2286 #7 (HMAC-RIPEMD)'), + + # From https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Standards-and-Guidelines/documents/examples/HMAC_SHA3-224.pdf + ( + '000102030405060708090a0b0c0d0e0f' + '101112131415161718191a1b', + xl('Sample message for keylenblocklen'), + dict(SHA3_224='078695eecc227c636ad31d063a15dd05a7e819a66ec6d8de1e193e59'), + 'NIST CSRC Sample #3 (SHA3-224)' + ), + + # 
From https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Standards-and-Guidelines/documents/examples/HMAC_SHA3-256.pdf + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f', + xl('Sample message for keylenblocklen'), + dict(SHA3_256='9bcf2c238e235c3ce88404e813bd2f3a97185ac6f238c63d6229a00b07974258'), + 'NIST CSRC Sample #3 (SHA3-256)' + ), + + # From https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Standards-and-Guidelines/documents/examples/HMAC_SHA3-384.pdf + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f' + '202122232425262728292a2b2c2d2e2f', + xl('Sample message for keylenblocklen'), + dict(SHA3_384='e5ae4c739f455279368ebf36d4f5354c95aa184c899d3870e460ebc288ef1f9470053f73f7c6da2a71bcaec38ce7d6ac'), + 'NIST CSRC Sample #3 (SHA3-384)' + ), + + # From https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Standards-and-Guidelines/documents/examples/HMAC_SHA3-512.pdf + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f'\ + '202122232425262728292a2b2c2d2e2f'\ + '303132333435363738393a3b3c3d3e3f', + xl('Sample message for keylenblocklen'), + dict(SHA3_512='5f464f5e5b7848e3885e49b2c385f0694985d0e38966242dc4a5fe3fea4b37d46b65ceced5dcf59438dd840bab22269f0ba7febdb9fcf74602a35666b2a32915'), + 'NIST CSRC Sample #3 (SHA3-512)' + ), + +] + + +class HMAC_Module_and_Instance_Test(unittest.TestCase): + """Test the HMAC construction and verify that it does not + matter if you initialize it with a hash module or + with an hash instance. + + See https://bugs.launchpad.net/pycrypto/+bug/1209399 + """ + + def __init__(self, hashmods): + """Initialize the test with a dictionary of hash modules + indexed by their names""" + + unittest.TestCase.__init__(self) + self.hashmods = hashmods + self.description = "" + + def shortDescription(self): + return self.description + + def runTest(self): + key = b"\x90\x91\x92\x93" * 4 + payload = b"\x00" * 100 + + for hashname, hashmod in self.hashmods.items(): + if hashmod is None: + continue + self.description = "Test HMAC in combination with " + hashname + one = HMAC.new(key, payload, hashmod).digest() + two = HMAC.new(key, payload, hashmod.new()).digest() + self.assertEqual(one, two) + + +class HMAC_None(unittest.TestCase): + + def runTest(self): + + key = b"\x04" * 20 + one = HMAC.new(key, b"", SHA1).digest() + two = HMAC.new(key, None, SHA1).digest() + self.assertEqual(one, two) + + +class ByteArrayTests(unittest.TestCase): + + def runTest(self): + + key = b"0" * 16 + data = b"\x00\x01\x02" + + # Data and key can be a bytearray (during initialization) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = HMAC.new(key, data) + h2 = HMAC.new(key_ba, data_ba) + key_ba[:1] = b'\xFF' + data_ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a bytearray (during operation) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = HMAC.new(key) + h2 = HMAC.new(key) + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +class MemoryViewTests(unittest.TestCase): + + def runTest(self): + + key = b"0" * 16 + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data and key can be a memoryview (during initialization) + key_mv = get_mv(key) + data_mv = get_mv(data) + + h1 = HMAC.new(key, data) + h2 = HMAC.new(key_mv, data_mv) + if not data_mv.readonly: + key_mv[:1] 
= b'\xFF' + data_mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = HMAC.new(key) + h2 = HMAC.new(key) + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +def get_tests(config={}): + global test_data + import types + from .common import make_mac_tests + + # A test vector contains multiple results, each one for a + # different hash algorithm. + # Here we expand each test vector into multiple ones, + # and add the relevant parameters that will be passed to new() + exp_test_data = [] + for row in test_data: + for modname in row[2].keys(): + t = list(row) + t[2] = row[2][modname] + t.append(dict(digestmod=globals()[modname])) + exp_test_data.append(t) + tests = make_mac_tests(HMAC, "HMAC", exp_test_data) + tests.append(HMAC_Module_and_Instance_Test(hash_modules)) + tests.append(HMAC_None()) + + tests.append(ByteArrayTests()) + tests.append(MemoryViewTests()) + + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_KMAC.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_KMAC.py new file mode 100644 index 00000000..8e9bf702 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_KMAC.py @@ -0,0 +1,346 @@ +import unittest +from binascii import unhexlify, hexlify + +from Crypto.Util.py3compat import tobytes +from Crypto.Util.strxor import strxor_c +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Hash import KMAC128, KMAC256 + + +class KMACTest(unittest.TestCase): + + def new(self, *args, **kwargs): + return self.KMAC.new(key=b'X' * (self.minimum_key_bits // 8), *args, **kwargs) + + def test_new_positive(self): + + key = b'X' * 32 + + h = self.new() + for new_func in self.KMAC.new, h.new: + + for dbytes in range(self.minimum_bytes, 128 + 1): + hobj = new_func(key=key, mac_len=dbytes) + self.assertEqual(hobj.digest_size, dbytes) + + digest1 = new_func(key=key, data=b"\x90").digest() + digest2 = new_func(key=key).update(b"\x90").digest() + self.assertEqual(digest1, digest2) + + new_func(data=b"A", key=key, custom=b"g") + + hobj = h.new(key=key) + self.assertEqual(hobj.digest_size, self.default_bytes) + + def test_new_negative(self): + + h = self.new() + for new_func in self.KMAC.new, h.new: + self.assertRaises(ValueError, new_func, key=b'X'*32, + mac_len=0) + self.assertRaises(ValueError, new_func, key=b'X'*32, + mac_len=self.minimum_bytes - 1) + self.assertRaises(TypeError, new_func, + key=u"string") + self.assertRaises(TypeError, new_func, + data=u"string") + + def test_default_digest_size(self): + digest = self.new(data=b'abc').digest() + self.assertEqual(len(digest), self.default_bytes) + + def test_update(self): + pieces = [b"\x0A" * 200, b"\x14" * 300] + h = self.new() + h.update(pieces[0]).update(pieces[1]) + digest = h.digest() + h = self.new() + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.digest(), digest) + + def test_update_negative(self): + h = self.new() + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = self.new() + digest = h.digest() + + # hexdigest does not change the state + self.assertEqual(h.digest(), digest) + # digest returns a byte string + self.assertTrue(isinstance(digest, type(b"digest"))) + + def test_update_after_digest(self): + msg = b"rrrrttt" + + 
# Normally, update() cannot be done after digest() + h = self.new(mac_len=32, data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, dig1) + + def test_hex_digest(self): + mac = self.new() + digest = mac.digest() + hexdigest = mac.hexdigest() + + # hexdigest is equivalent to digest + self.assertEqual(hexlify(digest), tobytes(hexdigest)) + # hexdigest does not change the state + self.assertEqual(mac.hexdigest(), hexdigest) + # hexdigest returns a string + self.assertTrue(isinstance(hexdigest, type("digest"))) + + def test_verify(self): + h = self.new() + mac = h.digest() + h.verify(mac) + wrong_mac = strxor_c(mac, 255) + self.assertRaises(ValueError, h.verify, wrong_mac) + + def test_hexverify(self): + h = self.new() + mac = h.hexdigest() + h.hexverify(mac) + self.assertRaises(ValueError, h.hexverify, "4556") + + def test_oid(self): + + oid = "2.16.840.1.101.3.4.2." + self.oid_variant + h = self.new() + self.assertEqual(h.oid, oid) + + def test_bytearray(self): + + key = b'0' * 32 + data = b"\x00\x01\x02" + + # Data and key can be a bytearray (during initialization) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = self.KMAC.new(data=data, key=key) + h2 = self.KMAC.new(data=data_ba, key=key_ba) + key_ba[:1] = b'\xFF' + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a bytearray (during operation) + data_ba = bytearray(data) + + h1 = self.new() + h2 = self.new() + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + def test_memoryview(self): + + key = b'0' * 32 + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data and key can be a memoryview (during initialization) + key_mv = get_mv(key) + data_mv = get_mv(data) + + h1 = self.KMAC.new(data=data, key=key) + h2 = self.KMAC.new(data=data_mv, key=key_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + key_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = self.new() + h2 = self.new() + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + +class KMAC128Test(KMACTest): + + KMAC = KMAC128 + + minimum_key_bits = 128 + + minimum_bytes = 8 + default_bytes = 64 + + oid_variant = "19" + + +class KMAC256Test(KMACTest): + + KMAC = KMAC256 + + minimum_key_bits = 256 + + minimum_bytes = 8 + default_bytes = 64 + + oid_variant = "20" + + +class NISTExampleTestVectors(unittest.TestCase): + + # https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Standards-and-Guidelines/documents/examples/KMAC_samples.pdf + test_data = [ + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03", + "", + "E5 78 0B 0D 3E A6 F7 D3 A4 29 C5 70 6A A4 3A 00" + "FA DB D7 D4 96 28 83 9E 31 87 24 3F 45 6E E1 4E", + "Sample #1 NIST", + KMAC128 + ), + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03", + "My Tagged Application", + "3B 1F BA 96 3C D8 B0 B5 9E 8C 1A 6D 71 88 8B 71" + "43 65 1A F8 BA 0A 70 70 C0 97 9E 28 11 32 4A A5", + "Sample #2 NIST", + KMAC128 + ), + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03 04 05 06 07 
08 09 0A 0B 0C 0D 0E 0F" + "10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F" + "20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F" + "30 31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F" + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F" + "60 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F" + "70 71 72 73 74 75 76 77 78 79 7A 7B 7C 7D 7E 7F" + "80 81 82 83 84 85 86 87 88 89 8A 8B 8C 8D 8E 8F" + "90 91 92 93 94 95 96 97 98 99 9A 9B 9C 9D 9E 9F" + "A0 A1 A2 A3 A4 A5 A6 A7 A8 A9 AA AB AC AD AE AF" + "B0 B1 B2 B3 B4 B5 B6 B7 B8 B9 BA BB BC BD BE BF" + "C0 C1 C2 C3 C4 C5 C6 C7", + "My Tagged Application", + "1F 5B 4E 6C CA 02 20 9E 0D CB 5C A6 35 B8 9A 15" + "E2 71 EC C7 60 07 1D FD 80 5F AA 38 F9 72 92 30", + "Sample #3 NIST", + KMAC128 + ), + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03", + "My Tagged Application", + "20 C5 70 C3 13 46 F7 03 C9 AC 36 C6 1C 03 CB 64" + "C3 97 0D 0C FC 78 7E 9B 79 59 9D 27 3A 68 D2 F7" + "F6 9D 4C C3 DE 9D 10 4A 35 16 89 F2 7C F6 F5 95" + "1F 01 03 F3 3F 4F 24 87 10 24 D9 C2 77 73 A8 DD", + "Sample #4 NIST", + KMAC256 + ), + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F" + "10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F" + "20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F" + "30 31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F" + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F" + "60 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F" + "70 71 72 73 74 75 76 77 78 79 7A 7B 7C 7D 7E 7F" + "80 81 82 83 84 85 86 87 88 89 8A 8B 8C 8D 8E 8F" + "90 91 92 93 94 95 96 97 98 99 9A 9B 9C 9D 9E 9F" + "A0 A1 A2 A3 A4 A5 A6 A7 A8 A9 AA AB AC AD AE AF" + "B0 B1 B2 B3 B4 B5 B6 B7 B8 B9 BA BB BC BD BE BF" + "C0 C1 C2 C3 C4 C5 C6 C7", + "", + "75 35 8C F3 9E 41 49 4E 94 97 07 92 7C EE 0A F2" + "0A 3F F5 53 90 4C 86 B0 8F 21 CC 41 4B CF D6 91" + "58 9D 27 CF 5E 15 36 9C BB FF 8B 9A 4C 2E B1 78" + "00 85 5D 02 35 FF 63 5D A8 25 33 EC 6B 75 9B 69", + "Sample #5 NIST", + KMAC256 + ), + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F" + "10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F" + "20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F" + "30 31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F" + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F" + "60 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F" + "70 71 72 73 74 75 76 77 78 79 7A 7B 7C 7D 7E 7F" + "80 81 82 83 84 85 86 87 88 89 8A 8B 8C 8D 8E 8F" + "90 91 92 93 94 95 96 97 98 99 9A 9B 9C 9D 9E 9F" + "A0 A1 A2 A3 A4 A5 A6 A7 A8 A9 AA AB AC AD AE AF" + "B0 B1 B2 B3 B4 B5 B6 B7 B8 B9 BA BB BC BD BE BF" + "C0 C1 C2 C3 C4 C5 C6 C7", + "My Tagged Application", + "B5 86 18 F7 1F 92 E1 D5 6C 1B 8C 55 DD D7 CD 18" + "8B 97 B4 CA 4D 99 83 1E B2 69 9A 83 7D A2 E4 D9" + "70 FB AC FD E5 00 33 AE A5 85 F1 A2 70 85 10 C3" + "2D 07 88 08 01 BD 18 28 98 FE 47 68 76 FC 89 65", + "Sample #6 NIST", + KMAC256 + ), + ] + + def setUp(self): + td = [] + for key, data, custom, mac, text, module in self.test_data: + ni = ( + unhexlify(key.replace(" ", "")), + unhexlify(data.replace(" ", "")), + custom.encode(), + unhexlify(mac.replace(" ", "")), + text, + module + ) + td.append(ni) + self.test_data = td + + def runTest(self): + + 
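+        # Each NIST sample fixes the key, message and customization string
+        # and pins the full-length tag; mac_len is derived from the expected
+        # tag below, so the KMAC128 and KMAC256 vectors share this one loop.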
for key, data, custom, mac, text, module in self.test_data: + h = module.new(data=data, key=key, custom=custom, mac_len=len(mac)) + mac_tag = h.digest() + self.assertEqual(mac_tag, mac, msg=text) + + +def get_tests(config={}): + tests = [] + + tests += list_test_cases(KMAC128Test) + tests += list_test_cases(KMAC256Test) + tests.append(NISTExampleTestVectors()) + + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_KangarooTwelve.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_KangarooTwelve.py new file mode 100644 index 00000000..49aeaad7 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_KangarooTwelve.py @@ -0,0 +1,324 @@ +# =================================================================== +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +"""Self-test suite for Crypto.Hash.KangarooTwelve""" + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Hash import KangarooTwelve as K12 +from Crypto.Util.py3compat import b, bchr + + +class KangarooTwelveTest(unittest.TestCase): + + def test_length_encode(self): + self.assertEqual(K12._length_encode(0), b'\x00') + self.assertEqual(K12._length_encode(12), b'\x0C\x01') + self.assertEqual(K12._length_encode(65538), b'\x01\x00\x02\x03') + + def test_new_positive(self): + + xof1 = K12.new() + xof2 = K12.new(data=b("90")) + xof3 = K12.new().update(b("90")) + + self.assertNotEqual(xof1.read(10), xof2.read(10)) + xof3.read(10) + self.assertEqual(xof2.read(10), xof3.read(10)) + + xof1 = K12.new() + ref = xof1.read(10) + xof2 = K12.new(custom=b("")) + xof3 = K12.new(custom=b("foo")) + + self.assertEqual(ref, xof2.read(10)) + self.assertNotEqual(ref, xof3.read(10)) + + xof1 = K12.new(custom=b("foo")) + xof2 = K12.new(custom=b("foo"), data=b("90")) + xof3 = K12.new(custom=b("foo")).update(b("90")) + + self.assertNotEqual(xof1.read(10), xof2.read(10)) + xof3.read(10) + self.assertEqual(xof2.read(10), xof3.read(10)) + + def test_update(self): + pieces = [bchr(10) * 200, bchr(20) * 300] + h = K12.new() + h.update(pieces[0]).update(pieces[1]) + digest = h.read(10) + h = K12.new() + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.read(10), digest) + + def test_update_negative(self): + h = K12.new() + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = K12.new() + digest = h.read(90) + + # read returns a byte string of the right length + self.assertTrue(isinstance(digest, type(b("digest")))) + self.assertEqual(len(digest), 90) + + def test_update_after_read(self): + mac = K12.new() + mac.update(b("rrrr")) + mac.read(90) + self.assertRaises(TypeError, mac.update, b("ttt")) + + +def txt2bin(txt): + clean = txt.replace(" ", "").replace("\n", "").replace("\r", "") + return unhexlify(clean) + + +def ptn(n): + res = bytearray(n) + pattern = b"".join([bchr(x) for x in range(0, 0xFB)]) + for base in range(0, n - 0xFB, 0xFB): + res[base:base + 0xFB] = pattern + remain = n % 0xFB + if remain: + base = (n // 0xFB) * 0xFB + res[base:] = pattern[:remain] + assert(len(res) == n) + return res + + +def chunked(source, size): + for i in range(0, len(source), size): + yield source[i:i+size] + + +# https://github.com/XKCP/XKCP/blob/master/tests/TestVectors/KangarooTwelve.txt +class KangarooTwelveTV(unittest.TestCase): + + def test_zero_1(self): + tv = """1A C2 D4 50 FC 3B 42 05 D1 9D A7 BF CA 1B 37 51 + 3C 08 03 57 7A C7 16 7F 06 FE 2C E1 F0 EF 39 E5""" + + btv = txt2bin(tv) + res = K12.new().read(32) + self.assertEqual(res, btv) + + def test_zero_2(self): + tv = """1A C2 D4 50 FC 3B 42 05 D1 9D A7 BF CA 1B 37 51 + 3C 08 03 57 7A C7 16 7F 06 FE 2C E1 F0 EF 39 E5 + 42 69 C0 56 B8 C8 2E 48 27 60 38 B6 D2 92 96 6C + C0 7A 3D 46 45 27 2E 31 FF 38 50 81 39 EB 0A 71""" + + btv = txt2bin(tv) + res = K12.new().read(64) + self.assertEqual(res, btv) + + def test_zero_3(self): + tv = """E8 DC 56 36 42 F7 22 8C 84 68 4C 89 84 05 D3 A8 + 34 79 91 58 C0 79 B1 28 80 27 7A 1D 28 E2 FF 6D""" + + btv = txt2bin(tv) + res = K12.new().read(10032) + self.assertEqual(res[-32:], btv) + + def test_ptn_1(self): + tv = """2B DA 92 45 0E 8B 14 7F 8A 7C B6 29 E7 84 A0 58 + EF CA 7C F7 D8 21 8E 02 D3 45 DF AA 65 24 4A 1F""" + + btv = txt2bin(tv) + res = 
K12.new(data=ptn(1)).read(32) + self.assertEqual(res, btv) + + def test_ptn_17(self): + tv = """6B F7 5F A2 23 91 98 DB 47 72 E3 64 78 F8 E1 9B + 0F 37 12 05 F6 A9 A9 3A 27 3F 51 DF 37 12 28 88""" + + btv = txt2bin(tv) + res = K12.new(data=ptn(17)).read(32) + self.assertEqual(res, btv) + + def test_ptn_17_2(self): + tv = """0C 31 5E BC DE DB F6 14 26 DE 7D CF 8F B7 25 D1 + E7 46 75 D7 F5 32 7A 50 67 F3 67 B1 08 EC B6 7C""" + + btv = txt2bin(tv) + res = K12.new(data=ptn(17**2)).read(32) + self.assertEqual(res, btv) + + def test_ptn_17_3(self): + tv = """CB 55 2E 2E C7 7D 99 10 70 1D 57 8B 45 7D DF 77 + 2C 12 E3 22 E4 EE 7F E4 17 F9 2C 75 8F 0D 59 D0""" + + btv = txt2bin(tv) + res = K12.new(data=ptn(17**3)).read(32) + self.assertEqual(res, btv) + + def test_ptn_17_4(self): + tv = """87 01 04 5E 22 20 53 45 FF 4D DA 05 55 5C BB 5C + 3A F1 A7 71 C2 B8 9B AE F3 7D B4 3D 99 98 B9 FE""" + + btv = txt2bin(tv) + data = ptn(17**4) + + # All at once + res = K12.new(data=data).read(32) + self.assertEqual(res, btv) + + # Byte by byte + k12 = K12.new() + for x in data: + k12.update(bchr(x)) + res = k12.read(32) + self.assertEqual(res, btv) + + # Chunks of various prime sizes + for chunk_size in (13, 17, 19, 23, 31): + k12 = K12.new() + for x in chunked(data, chunk_size): + k12.update(x) + res = k12.read(32) + self.assertEqual(res, btv) + + def test_ptn_17_5(self): + tv = """84 4D 61 09 33 B1 B9 96 3C BD EB 5A E3 B6 B0 5C + C7 CB D6 7C EE DF 88 3E B6 78 A0 A8 E0 37 16 82""" + + btv = txt2bin(tv) + data = ptn(17**5) + + # All at once + res = K12.new(data=data).read(32) + self.assertEqual(res, btv) + + # Chunks + k12 = K12.new() + for chunk in chunked(data, 8192): + k12.update(chunk) + res = k12.read(32) + self.assertEqual(res, btv) + + def test_ptn_17_6(self): + tv = """3C 39 07 82 A8 A4 E8 9F A6 36 7F 72 FE AA F1 32 + 55 C8 D9 58 78 48 1D 3C D8 CE 85 F5 8E 88 0A F8""" + + btv = txt2bin(tv) + data = ptn(17**6) + + # All at once + res = K12.new(data=data).read(32) + self.assertEqual(res, btv) + + def test_ptn_c_1(self): + tv = """FA B6 58 DB 63 E9 4A 24 61 88 BF 7A F6 9A 13 30 + 45 F4 6E E9 84 C5 6E 3C 33 28 CA AF 1A A1 A5 83""" + + btv = txt2bin(tv) + custom = ptn(1) + + # All at once + res = K12.new(custom=custom).read(32) + self.assertEqual(res, btv) + + def test_ptn_c_41(self): + tv = """D8 48 C5 06 8C ED 73 6F 44 62 15 9B 98 67 FD 4C + 20 B8 08 AC C3 D5 BC 48 E0 B0 6B A0 A3 76 2E C4""" + + btv = txt2bin(tv) + custom = ptn(41) + + # All at once + res = K12.new(data=b'\xFF', custom=custom).read(32) + self.assertEqual(res, btv) + + def test_ptn_c_41_2(self): + tv = """C3 89 E5 00 9A E5 71 20 85 4C 2E 8C 64 67 0A C0 + 13 58 CF 4C 1B AF 89 44 7A 72 42 34 DC 7C ED 74""" + + btv = txt2bin(tv) + custom = ptn(41**2) + + # All at once + res = K12.new(data=b'\xFF' * 3, custom=custom).read(32) + self.assertEqual(res, btv) + + def test_ptn_c_41_3(self): + tv = """75 D2 F8 6A 2E 64 45 66 72 6B 4F BC FC 56 57 B9 + DB CF 07 0C 7B 0D CA 06 45 0A B2 91 D7 44 3B CF""" + + btv = txt2bin(tv) + custom = ptn(41**3) + + # All at once + res = K12.new(data=b'\xFF' * 7, custom=custom).read(32) + self.assertEqual(res, btv) + + ### + + def test_1(self): + tv = "fd608f91d81904a9916e78a18f65c157a78d63f93d8f6367db0524526a5ea2bb" + + btv = txt2bin(tv) + res = K12.new(data=b'', custom=ptn(100)).read(32) + self.assertEqual(res, btv) + + def test_2(self): + tv4 = "5a4ec9a649f81916d4ce1553492962f7868abf8dd1ceb2f0cb3682ea95cda6a6" + tv3 = "441688fe4fe4ae9425eb3105eb445eb2b3a6f67b66eff8e74ebfbc49371f6d4c" + tv2 = 
"17269a57759af0214c84a0fd9bc851f4d95f80554cfed4e7da8a6ee1ff080131" + tv1 = "33826990c09dc712ba7224f0d9be319e2720de95a4c1afbd2211507dae1c703a" + tv0 = "9f4d3aba908ddc096e4d3a71da954f917b9752f05052b9d26d916a6fbc75bf3e" + + res = K12.new(data=b'A' * (8192 - 4), custom=b'B').read(32) + self.assertEqual(res, txt2bin(tv4)) + + res = K12.new(data=b'A' * (8192 - 3), custom=b'B').read(32) + self.assertEqual(res, txt2bin(tv3)) + + res = K12.new(data=b'A' * (8192 - 2), custom=b'B').read(32) + self.assertEqual(res, txt2bin(tv2)) + + res = K12.new(data=b'A' * (8192 - 1), custom=b'B').read(32) + self.assertEqual(res, txt2bin(tv1)) + + res = K12.new(data=b'A' * (8192 - 0), custom=b'B').read(32) + self.assertEqual(res, txt2bin(tv0)) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(KangarooTwelveTest) + tests += list_test_cases(KangarooTwelveTV) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_MD2.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_MD2.py new file mode 100644 index 00000000..93751687 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_MD2.py @@ -0,0 +1,62 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/MD2.py: Self-test for the MD2 hash function +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.MD2""" + +from Crypto.Util.py3compat import * + +# This is a list of (expected_result, input[, description]) tuples. 
+test_data = [ + # Test vectors from RFC 1319 + ('8350e5a3e24c153df2275c9f80692773', '', "'' (empty string)"), + ('32ec01ec4a6dac72c0ab96fb34c0b5d1', 'a'), + ('da853b0d3f88d99b30283a69e6ded6bb', 'abc'), + ('ab4f496bfb2a530b219ff33031fe06b0', 'message digest'), + + ('4e8ddff3650292ab5a4108c3aa47940b', 'abcdefghijklmnopqrstuvwxyz', + 'a-z'), + + ('da33def2a42df13975352846c30338cd', + 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', + 'A-Z, a-z, 0-9'), + + ('d5976f79d83d3a0dc9806c3c66f3efd8', + '1234567890123456789012345678901234567890123456' + + '7890123456789012345678901234567890', + "'1234567890' * 8"), +] + +def get_tests(config={}): + from Crypto.Hash import MD2 + from .common import make_hash_tests + return make_hash_tests(MD2, "MD2", test_data, + digest_size=16, + oid="1.2.840.113549.2.2") + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_MD4.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_MD4.py new file mode 100644 index 00000000..17b48a77 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_MD4.py @@ -0,0 +1,64 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/MD4.py: Self-test for the MD4 hash function +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.MD4""" + +__revision__ = "$Id$" + +from Crypto.Util.py3compat import * + +# This is a list of (expected_result, input[, description]) tuples. 
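+# For instance, the 'abc' entry below corresponds to the interactive check:
+#
+#   >>> from Crypto.Hash import MD4
+#   >>> MD4.new(b'abc').hexdigest()
+#   'a448017aaf21d8525fc10ae87aa6729d'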
+test_data = [ + # Test vectors from RFC 1320 + ('31d6cfe0d16ae931b73c59d7e0c089c0', '', "'' (empty string)"), + ('bde52cb31de33e46245e05fbdbd6fb24', 'a'), + ('a448017aaf21d8525fc10ae87aa6729d', 'abc'), + ('d9130a8164549fe818874806e1c7014b', 'message digest'), + + ('d79e1c308aa5bbcdeea8ed63df412da9', 'abcdefghijklmnopqrstuvwxyz', + 'a-z'), + + ('043f8582f241db351ce627e153e7f0e4', + 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', + 'A-Z, a-z, 0-9'), + + ('e33b4ddc9c38f2199c3e7b164fcc0536', + '1234567890123456789012345678901234567890123456' + + '7890123456789012345678901234567890', + "'1234567890' * 8"), +] + +def get_tests(config={}): + from Crypto.Hash import MD4 + from .common import make_hash_tests + return make_hash_tests(MD4, "MD4", test_data, + digest_size=16, + oid="1.2.840.113549.2.4") + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_MD5.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_MD5.py new file mode 100644 index 00000000..830ace73 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_MD5.py @@ -0,0 +1,94 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/MD5.py: Self-test for the MD5 hash function +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.MD5""" + +from Crypto.Util.py3compat import * +from Crypto.Hash import MD5 +from binascii import unhexlify +import unittest +from Crypto.SelfTest.st_common import list_test_cases + + +# This is a list of (expected_result, input[, description]) tuples. 
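+# These RFC 1321 vectors can also be reproduced with the standard library,
+# e.g.:
+#
+#   >>> import hashlib
+#   >>> hashlib.md5(b'abc').hexdigest()
+#   '900150983cd24fb0d6963f7d28e17f72'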
+test_data = [ + # Test vectors from RFC 1321 + ('d41d8cd98f00b204e9800998ecf8427e', '', "'' (empty string)"), + ('0cc175b9c0f1b6a831c399e269772661', 'a'), + ('900150983cd24fb0d6963f7d28e17f72', 'abc'), + ('f96b697d7cb7938d525a2f31aaf161d0', 'message digest'), + + ('c3fcd3d76192e4007dfb496cca67e13b', 'abcdefghijklmnopqrstuvwxyz', + 'a-z'), + + ('d174ab98d277d9f5a5611c2c9f419d9f', + 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', + 'A-Z, a-z, 0-9'), + + ('57edf4a22be3c955ac49da2e2107b67a', + '1234567890123456789012345678901234567890123456' + + '7890123456789012345678901234567890', + "'1234567890' * 8"), + + # https://www.cosic.esat.kuleuven.be/nessie/testvectors/hash/md5/Md5-128.unverified.test-vectors + ('57EDF4A22BE3C955AC49DA2E2107B67A', '1234567890' * 8, 'Set 1, vector #7'), + ('7707D6AE4E027C70EEA2A935C2296F21', 'a'*1000000, 'Set 1, vector #8'), +] + + +class Md5IterTest(unittest.TestCase): + + def runTest(self): + message = b("\x00") * 16 + result1 = "4AE71336E44BF9BF79D2752E234818A5".lower() + result2 = "1A83F51285E4D89403D00C46EF8508FE".lower() + + h = MD5.new(message) + message = h.digest() + self.assertEqual(h.hexdigest(), result1) + + for _ in range(99999): + h = MD5.new(message) + message = h.digest() + + self.assertEqual(h.hexdigest(), result2) + + +def get_tests(config={}): + from .common import make_hash_tests + + tests = make_hash_tests(MD5, "MD5", test_data, + digest_size=16, + oid="1.2.840.113549.2.5") + if config.get('slow_tests'): + tests += [ Md5IterTest() ] + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_Poly1305.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_Poly1305.py new file mode 100644 index 00000000..0612d4ed --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_Poly1305.py @@ -0,0 +1,542 @@ +# +# SelfTest/Hash/test_Poly1305.py: Self-test for the Poly1305 module +# +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +"""Self-test suite for Crypto.Hash._Poly1305""" + +import json +import unittest +from binascii import unhexlify, hexlify + +from .common import make_mac_tests +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Hash import Poly1305 +from Crypto.Cipher import AES, ChaCha20 + +from Crypto.Util.py3compat import tobytes +from Crypto.Util.strxor import strxor_c + +# This is a list of (r+s keypair, data, result, description, keywords) tuples. +test_data_basic = [ + ( + "85d6be7857556d337f4452fe42d506a80103808afb0db2fd4abff6af4149f51b", + hexlify(b"Cryptographic Forum Research Group").decode(), + "a8061dc1305136c6c22b8baf0c0127a9", + "RFC7539" + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "0000000000000000000000000000000000000000000000000000000000000000", + "49ec78090e481ec6c26b33b91ccc0307", + "https://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-00#section-7 A", + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "48656c6c6f20776f726c6421", + "a6f745008f81c916a20dcc74eef2b2f0", + "https://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-00#section-7 B", + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "", + "6b657920666f7220506f6c7931333035", + "Generated with pure Python", + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "FF", + "f7e4e0ef4c46d106219da3d1bdaeb3ff", + "Generated with pure Python", + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "FF00", + "7471eceeb22988fc936da1d6e838b70e", + "Generated with pure Python", + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "AA" * 17, + "32590bc07cb2afaccca3f67f122975fe", + "Generated with pure Python", + ), + ( + "00" * 32, + "00" * 64, + "00" * 16, + "RFC7539 A.3 #1", + ), + ( + "0000000000000000000000000000000036e5f6b5c5e06070f0efca96227a863e", + hexlify( + b"Any submission t" + b"o the IETF inten" + b"ded by the Contr" + b"ibutor for publi" + b"cation as all or" + b" part of an IETF" + b" Internet-Draft " + b"or RFC and any s" + b"tatement made wi" + b"thin the context" + b" of an IETF acti" + b"vity is consider" + b"ed an \"IETF Cont" + b"ribution\". Such " + b"statements inclu" + b"de oral statemen" + b"ts in IETF sessi" + b"ons, as well as " + b"written and elec" + b"tronic communica" + b"tions made at an" + b"y time or place," + b" which are addre" + b"ssed to").decode(), + "36e5f6b5c5e06070f0efca96227a863e", + "RFC7539 A.3 #2", + ), + ( + "36e5f6b5c5e06070f0efca96227a863e00000000000000000000000000000000", + hexlify( + b"Any submission t" + b"o the IETF inten" + b"ded by the Contr" + b"ibutor for publi" + b"cation as all or" + b" part of an IETF" + b" Internet-Draft " + b"or RFC and any s" + b"tatement made wi" + b"thin the context" + b" of an IETF acti" + b"vity is consider" + b"ed an \"IETF Cont" + b"ribution\". 
Such " + b"statements inclu" + b"de oral statemen" + b"ts in IETF sessi" + b"ons, as well as " + b"written and elec" + b"tronic communica" + b"tions made at an" + b"y time or place," + b" which are addre" + b"ssed to").decode(), + "f3477e7cd95417af89a6b8794c310cf0", + "RFC7539 A.3 #3", + ), + ( + "1c9240a5eb55d38af333888604f6b5f0473917c1402b80099dca5cbc207075c0", + "2754776173206272696c6c69672c2061" + "6e642074686520736c6974687920746f" + "7665730a446964206779726520616e64" + "2067696d626c6520696e207468652077" + "6162653a0a416c6c206d696d73792077" + "6572652074686520626f726f676f7665" + "732c0a416e6420746865206d6f6d6520" + "7261746873206f757467726162652e", + "4541669a7eaaee61e708dc7cbcc5eb62", + "RFC7539 A.3 #4", + ), + ( + "02" + "00" * 31, + "FF" * 16, + "03" + "00" * 15, + "RFC7539 A.3 #5", + ), + ( + "02" + "00" * 15 + "FF" * 16, + "02" + "00" * 15, + "03" + "00" * 15, + "RFC7539 A.3 #6", + ), + ( + "01" + "00" * 31, + "FF" * 16 + "F0" + "FF" * 15 + "11" + "00" * 15, + "05" + "00" * 15, + "RFC7539 A.3 #7", + ), + ( + "01" + "00" * 31, + "FF" * 16 + "FB" + "FE" * 15 + "01" * 16, + "00" * 16, + "RFC7539 A.3 #8", + ), + ( + "02" + "00" * 31, + "FD" + "FF" * 15, + "FA" + "FF" * 15, + "RFC7539 A.3 #9", + ), + ( + "01 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00" + "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00", + "E3 35 94 D7 50 5E 43 B9 00 00 00 00 00 00 00 00" + "33 94 D7 50 5E 43 79 CD 01 00 00 00 00 00 00 00" + "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00" + "01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00", + "14 00 00 00 00 00 00 00 55 00 00 00 00 00 00 00", + "RFC7539 A.3 #10", + ), + ( + "01 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00" + "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00", + "E3 35 94 D7 50 5E 43 B9 00 00 00 00 00 00 00 00" + "33 94 D7 50 5E 43 79 CD 01 00 00 00 00 00 00 00" + "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00", + "13" + "00" * 15, + "RFC7539 A.3 #11", + ), +] + +# This is a list of (key(k+r), data, result, description, keywords) tuples. 
+test_data_aes = [ + ( + "ec074c835580741701425b623235add6851fc40c3467ac0be05cc20404f3f700", + "f3f6", + "f4c633c3044fc145f84f335cb81953de", + "http://cr.yp.to/mac/poly1305-20050329.pdf", + { 'cipher':AES, 'nonce':unhexlify("fb447350c4e868c52ac3275cf9d4327e") } + ), + ( + "75deaa25c09f208e1dc4ce6b5cad3fbfa0f3080000f46400d0c7e9076c834403", + "", + "dd3fab2251f11ac759f0887129cc2ee7", + "http://cr.yp.to/mac/poly1305-20050329.pdf", + { 'cipher':AES, 'nonce':unhexlify("61ee09218d29b0aaed7e154a2c5509cc") } + ), + ( + "6acb5f61a7176dd320c5c1eb2edcdc7448443d0bb0d21109c89a100b5ce2c208", + "663cea190ffb83d89593f3f476b6bc24" + "d7e679107ea26adb8caf6652d0656136", + "0ee1c16bb73f0f4fd19881753c01cdbe", + "http://cr.yp.to/mac/poly1305-20050329.pdf", + { 'cipher':AES, 'nonce':unhexlify("ae212a55399729595dea458bc621ff0e") } + ), + ( + "e1a5668a4d5b66a5f68cc5424ed5982d12976a08c4426d0ce8a82407c4f48207", + "ab0812724a7f1e342742cbed374d94d1" + "36c6b8795d45b3819830f2c04491faf0" + "990c62e48b8018b2c3e4a0fa3134cb67" + "fa83e158c994d961c4cb21095c1bf9", + "5154ad0d2cb26e01274fc51148491f1b", + "http://cr.yp.to/mac/poly1305-20050329.pdf", + { 'cipher':AES, 'nonce':unhexlify("9ae831e743978d3a23527c7128149e3a") } + ), +] + +test_data_chacha20 = [ + ( + "00" * 32, + "FF" * 15, + "13cc5bbadc36b03a5163928f0bcb65aa", + "RFC7539 A.4 #1", + { 'cipher':ChaCha20, 'nonce':unhexlify("00" * 12) } + ), + ( + "00" * 31 + "01", + "FF" * 15, + "0baf33c1d6df211bdd50a6767e98e00a", + "RFC7539 A.4 #2", + { 'cipher':ChaCha20, 'nonce':unhexlify("00" * 11 + "02") } + ), + ( + "1c 92 40 a5 eb 55 d3 8a f3 33 88 86 04 f6 b5 f0" + "47 39 17 c1 40 2b 80 09 9d ca 5c bc 20 70 75 c0", + "FF" * 15, + "e8b4c6db226cd8939e65e02eebf834ce", + "RFC7539 A.4 #3", + { 'cipher':ChaCha20, 'nonce':unhexlify("00" * 11 + "02") } + ), + ( + "1c 92 40 a5 eb 55 d3 8a f3 33 88 86 04 f6 b5 f0" + "47 39 17 c1 40 2b 80 09 9d ca 5c bc 20 70 75 c0", + "f3 33 88 86 00 00 00 00 00 00 4e 91 00 00 00 00" + "64 a0 86 15 75 86 1a f4 60 f0 62 c7 9b e6 43 bd" + "5e 80 5c fd 34 5c f3 89 f1 08 67 0a c7 6c 8c b2" + "4c 6c fc 18 75 5d 43 ee a0 9e e9 4e 38 2d 26 b0" + "bd b7 b7 3c 32 1b 01 00 d4 f0 3b 7f 35 58 94 cf" + "33 2f 83 0e 71 0b 97 ce 98 c8 a8 4a bd 0b 94 81" + "14 ad 17 6e 00 8d 33 bd 60 f9 82 b1 ff 37 c8 55" + "97 97 a0 6e f4 f0 ef 61 c1 86 32 4e 2b 35 06 38" + "36 06 90 7b 6a 7c 02 b0 f9 f6 15 7b 53 c8 67 e4" + "b9 16 6c 76 7b 80 4d 46 a5 9b 52 16 cd e7 a4 e9" + "90 40 c5 a4 04 33 22 5e e2 82 a1 b0 a0 6c 52 3e" + "af 45 34 d7 f8 3f a1 15 5b 00 47 71 8c bc 54 6a" + "0d 07 2b 04 b3 56 4e ea 1b 42 22 73 f5 48 27 1a" + "0b b2 31 60 53 fa 76 99 19 55 eb d6 31 59 43 4e" + "ce bb 4e 46 6d ae 5a 10 73 a6 72 76 27 09 7a 10" + "49 e6 17 d9 1d 36 10 94 fa 68 f0 ff 77 98 71 30" + "30 5b ea ba 2e da 04 df 99 7b 71 4d 6c 6f 2c 29" + "a6 ad 5c b4 02 2b 02 70 9b 00 00 00 00 00 00 00" + "0c 00 00 00 00 00 00 00 09 01 00 00 00 00 00 00", + "ee ad 9d 67 89 0c bb 22 39 23 36 fe a1 85 1f 38", + "RFC7539 A.5", + { 'cipher':ChaCha20, 'nonce':unhexlify("000000000102030405060708") } + ), +] + + +class Poly1305Test_AES(unittest.TestCase): + + key = b'\x11' * 32 + + def test_new_positive(self): + + data = b'r' * 100 + + h1 = Poly1305.new(key=self.key, cipher=AES) + self.assertEqual(h1.digest_size, 16) + self.assertEqual(len(h1.nonce), 16) + d1 = h1.update(data).digest() + self.assertEqual(len(d1), 16) + + h2 = Poly1305.new(key=self.key, nonce=h1.nonce, data=data, cipher=AES) + d2 = h2.digest() + self.assertEqual(h1.nonce, h2.nonce) + self.assertEqual(d1, d2) + + def test_new_negative(self): + from 
Crypto.Cipher import DES3 + + self.assertRaises(ValueError, Poly1305.new, key=self.key[:31], cipher=AES) + self.assertRaises(ValueError, Poly1305.new, key=self.key, cipher=DES3) + self.assertRaises(ValueError, Poly1305.new, key=self.key, nonce=b'1' * 15, cipher=AES) + self.assertRaises(TypeError, Poly1305.new, key=u"2" * 32, cipher=AES) + self.assertRaises(TypeError, Poly1305.new, key=self.key, data=u"2" * 100, cipher=AES) + + def test_update(self): + pieces = [b"\x0A" * 200, b"\x14" * 300] + h1 = Poly1305.new(key=self.key, cipher=AES) + h1.update(pieces[0]).update(pieces[1]) + d1 = h1.digest() + + h2 = Poly1305.new(key=self.key, cipher=AES, nonce=h1.nonce) + h2.update(pieces[0] + pieces[1]) + d2 = h2.digest() + self.assertEqual(d1, d2) + + def test_update_negative(self): + h = Poly1305.new(key=self.key, cipher=AES) + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = Poly1305.new(key=self.key, cipher=AES) + digest = h.digest() + + # hexdigest does not change the state + self.assertEqual(h.digest(), digest) + # digest returns a byte string + self.assertTrue(isinstance(digest, type(b"digest"))) + + def test_update_after_digest(self): + msg=b"rrrrttt" + + # Normally, update() cannot be done after digest() + h = Poly1305.new(key=self.key, data=msg[:4], cipher=AES) + h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + + def test_hex_digest(self): + mac = Poly1305.new(key=self.key, cipher=AES) + digest = mac.digest() + hexdigest = mac.hexdigest() + + # hexdigest is equivalent to digest + self.assertEqual(hexlify(digest), tobytes(hexdigest)) + # hexdigest does not change the state + self.assertEqual(mac.hexdigest(), hexdigest) + # hexdigest returns a string + self.assertTrue(isinstance(hexdigest, type("digest"))) + + def test_verify(self): + h = Poly1305.new(key=self.key, cipher=AES) + mac = h.digest() + h.verify(mac) + wrong_mac = strxor_c(mac, 255) + self.assertRaises(ValueError, h.verify, wrong_mac) + + def test_hexverify(self): + h = Poly1305.new(key=self.key, cipher=AES) + mac = h.hexdigest() + h.hexverify(mac) + self.assertRaises(ValueError, h.hexverify, "4556") + + def test_bytearray(self): + + data = b"\x00\x01\x02" + h0 = Poly1305.new(key=self.key, data=data, cipher=AES) + d_ref = h0.digest() + + # Data and key can be a bytearray (during initialization) + key_ba = bytearray(self.key) + data_ba = bytearray(data) + + h1 = Poly1305.new(key=self.key, data=data, cipher=AES, nonce=h0.nonce) + h2 = Poly1305.new(key=key_ba, data=data_ba, cipher=AES, nonce=h0.nonce) + key_ba[:1] = b'\xFF' + data_ba[:1] = b'\xEE' + + self.assertEqual(h1.digest(), d_ref) + self.assertEqual(h2.digest(), d_ref) + + # Data can be a bytearray (during operation) + data_ba = bytearray(data) + + h1 = Poly1305.new(key=self.key, cipher=AES) + h2 = Poly1305.new(key=self.key, cipher=AES, nonce=h1.nonce) + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + def test_memoryview(self): + + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data and key can be a memoryview (during initialization) + key_mv = get_mv(self.key) + data_mv = get_mv(data) + + h1 = Poly1305.new(key=self.key, data=data, cipher=AES) + h2 = Poly1305.new(key=key_mv, data=data_mv, cipher=AES, + nonce=h1.nonce) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + key_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), 
h2.digest()) + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = Poly1305.new(key=self.key, cipher=AES) + h2 = Poly1305.new(key=self.key, cipher=AES, nonce=h1.nonce) + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + +class Poly1305Test_ChaCha20(unittest.TestCase): + + key = b'\x11' * 32 + + def test_new_positive(self): + data = b'r' * 100 + + h1 = Poly1305.new(key=self.key, cipher=ChaCha20) + self.assertEqual(h1.digest_size, 16) + self.assertEqual(len(h1.nonce), 12) + + h2 = Poly1305.new(key=self.key, cipher=ChaCha20, nonce = b'8' * 8) + self.assertEqual(len(h2.nonce), 8) + self.assertEqual(h2.nonce, b'8' * 8) + + def test_new_negative(self): + + self.assertRaises(ValueError, Poly1305.new, key=self.key, nonce=b'1' * 7, cipher=ChaCha20) + + +# +# make_mac_tests() expect a new() function with signature new(key, data, +# **kwargs), and we need to adapt Poly1305's, as it only uses keywords +# +class Poly1305_New(object): + + @staticmethod + def new(key, *data, **kwds): + _kwds = dict(kwds) + if len(data) == 1: + _kwds['data'] = data[0] + _kwds['key'] = key + return Poly1305.new(**_kwds) + + +class Poly1305_Basic(object): + + @staticmethod + def new(key, *data, **kwds): + from Crypto.Hash.Poly1305 import Poly1305_MAC + + if len(data) == 1: + msg = data[0] + else: + msg = None + + return Poly1305_MAC(key[:16], key[16:], msg) + + +class Poly1305AES_MC(unittest.TestCase): + + def runTest(self): + tag = unhexlify(b"fb447350c4e868c52ac3275cf9d4327e") + + msg = b'' + for msg_len in range(5000 + 1): + key = tag + strxor_c(tag, 0xFF) + nonce = tag[::-1] + if msg_len > 0: + msg = msg + tobytes(tag[0]) + auth = Poly1305.new(key=key, nonce=nonce, cipher=AES, data=msg) + tag = auth.digest() + + # Compare against output of original DJB's poly1305aes-20050218 + self.assertEqual("CDFA436DDD629C7DC20E1128530BAED2", auth.hexdigest().upper()) + + +def get_tests(config={}): + tests = make_mac_tests(Poly1305_Basic, "Poly1305", test_data_basic) + tests += make_mac_tests(Poly1305_New, "Poly1305", test_data_aes) + tests += make_mac_tests(Poly1305_New, "Poly1305", test_data_chacha20) + tests += [ Poly1305AES_MC() ] + tests += list_test_cases(Poly1305Test_AES) + tests += list_test_cases(Poly1305Test_ChaCha20) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_RIPEMD160.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_RIPEMD160.py new file mode 100644 index 00000000..153c5700 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_RIPEMD160.py @@ -0,0 +1,71 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_RIPEMD160.py: Self-test for the RIPEMD-160 hash function +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. 
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +#"""Self-test suite for Crypto.Hash.RIPEMD160""" + +from Crypto.Util.py3compat import * + +# This is a list of (expected_result, input[, description]) tuples. +test_data = [ + # Test vectors downloaded 2008-09-12 from + # http://homes.esat.kuleuven.be/~bosselae/ripemd160.html + ('9c1185a5c5e9fc54612808977ee8f548b2258d31', '', "'' (empty string)"), + ('0bdc9d2d256b3ee9daae347be6f4dc835a467ffe', 'a'), + ('8eb208f7e05d987a9b044a8e98c6b087f15a0bfc', 'abc'), + ('5d0689ef49d2fae572b881b123a85ffa21595f36', 'message digest'), + + ('f71c27109c692c1b56bbdceb5b9d2865b3708dbc', + 'abcdefghijklmnopqrstuvwxyz', + 'a-z'), + + ('12a053384a9c0c88e405a06c27dcf49ada62eb2b', + 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq', + 'abcdbcd...pnopq'), + + ('b0e20b6e3116640286ed3a87a5713079b21f5189', + 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', + 'A-Z, a-z, 0-9'), + + ('9b752e45573d4b39f4dbd3323cab82bf63326bfb', + '1234567890' * 8, + "'1234567890' * 8"), + + ('52783243c1697bdbe16d37f97f68f08325dc1528', + 'a' * 10**6, + '"a" * 10**6'), +] + +def get_tests(config={}): + from Crypto.Hash import RIPEMD160 + from .common import make_hash_tests + return make_hash_tests(RIPEMD160, "RIPEMD160", test_data, + digest_size=20, + oid="1.3.36.3.2.1") + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA1.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA1.py new file mode 100644 index 00000000..a883a44b --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA1.py @@ -0,0 +1,84 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/SHA1.py: Self-test for the SHA-1 hash function +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-test suite for Crypto.Hash.SHA""" + +from binascii import hexlify + +from Crypto.SelfTest.loader import load_test_vectors + +# Test vectors from various sources +# This is a list of (expected_result, input[, description]) tuples. +test_data_various = [ + # FIPS PUB 180-2, A.1 - "One-Block Message" + ('a9993e364706816aba3e25717850c26c9cd0d89d', 'abc'), + + # FIPS PUB 180-2, A.2 - "Multi-Block Message" + ('84983e441c3bd26ebaae4aa1f95129e5e54670f1', + 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq'), + + # FIPS PUB 180-2, A.3 - "Long Message" +# ('34aa973cd4c4daa4f61eeb2bdbad27316534016f', +# 'a' * 10**6, +# '"a" * 10**6'), + + # RFC 3174: Section 7.3, "TEST4" (multiple of 512 bits) + ('dea356a2cddd90c7a7ecedc5ebb563934f460452', + '01234567' * 80, + '"01234567" * 80'), +] + +def get_tests(config={}): + from Crypto.Hash import SHA1 + from .common import make_hash_tests + + tests = [] + + test_vectors = load_test_vectors(("Hash", "SHA1"), + "SHA1ShortMsg.rsp", + "KAT SHA-1", + { "len" : lambda x: int(x) } ) or [] + + test_data = test_data_various[:] + for tv in test_vectors: + try: + if tv.startswith('['): + continue + except AttributeError: + pass + if tv.len == 0: + tv.msg = b"" + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests = make_hash_tests(SHA1, "SHA1", test_data, + digest_size=20, + oid="1.3.14.3.2.26") + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA224.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA224.py new file mode 100644 index 00000000..cf81ad98 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA224.py @@ -0,0 +1,63 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA224.py: Self-test for the SHA-224 hash function +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.SHA224""" + +# Test vectors from various sources +# This is a list of (expected_result, input[, description]) tuples. 
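+# Note: the 'Franz jagt...'/'Frank jagt...' pair below differs in a single
+# letter, yet the two digests share no visible structure -- the avalanche
+# effect at work.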
+test_data = [ + + # RFC 3874: Section 3.1, "Test Vector #1 + ('23097d223405d8228642a477bda255b32aadbce4bda0b3f7e36c9da7', 'abc'), + + # RFC 3874: Section 3.2, "Test Vector #2 + ('75388b16512776cc5dba5da1fd890150b0c6455cb4f58b1952522525', 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq'), + + # RFC 3874: Section 3.3, "Test Vector #3 + ('20794655980c91d8bbb4c1ea97618a4bf03f42581948b2ee4ee7ad67', 'a' * 10**6, "'a' * 10**6"), + + # Examples from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm + ('d14a028c2a3a2bc9476102bb288234c415a2b01f828ea62ac5b3e42f', ''), + + ('49b08defa65e644cbf8a2dd9270bdededabc741997d1dadd42026d7b', + 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern'), + + ('58911e7fccf2971a7d07f93162d8bd13568e71aa8fc86fc1fe9043d1', + 'Frank jagt im komplett verwahrlosten Taxi quer durch Bayern'), + +] + +def get_tests(config={}): + from Crypto.Hash import SHA224 + from .common import make_hash_tests + return make_hash_tests(SHA224, "SHA224", test_data, + digest_size=28, + oid='2.16.840.1.101.3.4.2.4') + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA256.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA256.py new file mode 100644 index 00000000..bb99326d --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA256.py @@ -0,0 +1,94 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA256.py: Self-test for the SHA-256 hash function +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.SHA256""" + +import unittest +from Crypto.Util.py3compat import * + +class LargeSHA256Test(unittest.TestCase): + def runTest(self): + """SHA256: 512/520 MiB test""" + from Crypto.Hash import SHA256 + zeros = bchr(0x00) * (1024*1024) + + h = SHA256.new(zeros) + for i in range(511): + h.update(zeros) + + # This test vector is from PyCrypto's old testdata.py file. + self.assertEqual('9acca8e8c22201155389f65abbf6bc9723edc7384ead80503839f49dcc56d767', h.hexdigest()) # 512 MiB + + for i in range(8): + h.update(zeros) + + # This test vector is from PyCrypto's old testdata.py file. + self.assertEqual('abf51ad954b246009dfe5a50ecd582fd5b8f1b8b27f30393853c3ef721e7fa6e', h.hexdigest()) # 520 MiB + +def get_tests(config={}): + # Test vectors from FIPS PUB 180-2 + # This is a list of (expected_result, input[, description]) tuples. 
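+    # The 55-byte entry below sits exactly on a padding boundary: with the
+    # mandatory 0x80 byte and the 8-byte length field, a 55-byte message
+    # fills a single 64-byte SHA-256 block (55 + 1 + 8 == 64) -- presumably
+    # the case that the old PyCrypto bug mentioned below tripped over.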
+ test_data = [ + # FIPS PUB 180-2, B.1 - "One-Block Message" + ('ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad', + 'abc'), + + # FIPS PUB 180-2, B.2 - "Multi-Block Message" + ('248d6a61d20638b8e5c026930c3e6039a33ce45964ff2167f6ecedd419db06c1', + 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq'), + + # FIPS PUB 180-2, B.3 - "Long Message" + ('cdc76e5c9914fb9281a1c7e284d73e67f1809a48a497200e046d39ccc7112cd0', + 'a' * 10**6, + '"a" * 10**6'), + + # Test for an old PyCrypto bug. + ('f7fd017a3c721ce7ff03f3552c0813adcc48b7f33f07e5e2ba71e23ea393d103', + 'This message is precisely 55 bytes long, to test a bug.', + 'Length = 55 (mod 64)'), + + # Example from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm + ('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', ''), + + ('d32b568cd1b96d459e7291ebf4b25d007f275c9f13149beeb782fac0716613f8', + 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern'), + ] + + from Crypto.Hash import SHA256 + from .common import make_hash_tests + tests = make_hash_tests(SHA256, "SHA256", test_data, + digest_size=32, + oid="2.16.840.1.101.3.4.2.1") + + if config.get('slow_tests'): + tests += [LargeSHA256Test()] + + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA384.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA384.py new file mode 100644 index 00000000..c682eb43 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA384.py @@ -0,0 +1,61 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA.py: Self-test for the SHA-384 hash function +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.SHA384""" + +# Test vectors from various sources +# This is a list of (expected_result, input[, description]) tuples. 
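+# Reminder: SHA-384 is the SHA-512 compression function with a different
+# initial hash value, truncated to 48 bytes -- hence digest_size=48 in
+# get_tests() below.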
+test_data = [ + + # RFC 4634: Section Page 8.4, "Test 1" + ('cb00753f45a35e8bb5a03d699ac65007272c32ab0eded1631a8b605a43ff5bed8086072ba1e7cc2358baeca134c825a7', 'abc'), + + # RFC 4634: Section Page 8.4, "Test 2.2" + ('09330c33f71147e83d192fc782cd1b4753111b173b3b05d22fa08086e3b0f712fcc7c71a557e2db966c3e9fa91746039', 'abcdefghbcdefghicdefghijdefghijkefghijklfghijklmghijklmnhijklmnoijklmnopjklmnopqklmnopqrlmnopqrsmnopqrstnopqrstu'), + + # RFC 4634: Section Page 8.4, "Test 3" + ('9d0e1809716474cb086e834e310a4a1ced149e9c00f248527972cec5704c2a5b07b8b3dc38ecc4ebae97ddd87f3d8985', 'a' * 10**6, "'a' * 10**6"), + + # Taken from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm + ('38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b', ''), + + # Example from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm + ('71e8383a4cea32d6fd6877495db2ee353542f46fa44bc23100bca48f3366b84e809f0708e81041f427c6d5219a286677', + 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern'), + +] + +def get_tests(config={}): + from Crypto.Hash import SHA384 + from .common import make_hash_tests + return make_hash_tests(SHA384, "SHA384", test_data, + digest_size=48, + oid='2.16.840.1.101.3.4.2.2') + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_224.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_224.py new file mode 100644 index 00000000..f92147a7 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_224.py @@ -0,0 +1,79 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA3_224.py: Self-test for the SHA-3/224 hash function +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.SHA3_224""" + +import unittest +from binascii import hexlify + +from Crypto.SelfTest.loader import load_test_vectors +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Hash import SHA3_224 as SHA3 +from Crypto.Util.py3compat import b + + +class APITest(unittest.TestCase): + + def test_update_after_digest(self): + msg=b("rrrrttt") + + # Normally, update() cannot be done after digest() + h = SHA3.new(data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = SHA3.new(data=msg).digest() + + # With the proper flag, it is allowed + h = SHA3.new(data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... 
and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + +def get_tests(config={}): + from .common import make_hash_tests + + tests = [] + + test_vectors = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHA3-224.txt", + "KAT SHA-3 224", + { "len" : lambda x: int(x) } ) or [] + + test_data = [] + for tv in test_vectors: + if tv.len == 0: + tv.msg = b("") + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests += make_hash_tests(SHA3, "SHA3_224", test_data, + digest_size=SHA3.digest_size, + oid="2.16.840.1.101.3.4.2.7") + tests += list_test_cases(APITest) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_256.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_256.py new file mode 100644 index 00000000..432c9321 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_256.py @@ -0,0 +1,80 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA3_256.py: Self-test for the SHA-3/256 hash function +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.SHA3_256""" + +import unittest +from binascii import hexlify + +from Crypto.SelfTest.loader import load_test_vectors +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Hash import SHA3_256 as SHA3 +from Crypto.Util.py3compat import b + + +class APITest(unittest.TestCase): + + def test_update_after_digest(self): + msg=b("rrrrttt") + + # Normally, update() cannot be done after digest() + h = SHA3.new(data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = SHA3.new(data=msg).digest() + + # With the proper flag, it is allowed + h = SHA3.new(data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... 
and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + +def get_tests(config={}): + from .common import make_hash_tests + + tests = [] + + test_vectors = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHA3-256.txt", + "KAT SHA-3 256", + { "len" : lambda x: int(x) } ) or [] + + test_data = [] + for tv in test_vectors: + if tv.len == 0: + tv.msg = b("") + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + + tests += make_hash_tests(SHA3, "SHA3_256", test_data, + digest_size=SHA3.digest_size, + oid="2.16.840.1.101.3.4.2.8") + tests += list_test_cases(APITest) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_384.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_384.py new file mode 100644 index 00000000..b0ba1bfe --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_384.py @@ -0,0 +1,79 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA3_384.py: Self-test for the SHA-3/384 hash function +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.SHA3_384""" + +import unittest +from binascii import hexlify + +from Crypto.SelfTest.loader import load_test_vectors +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Hash import SHA3_384 as SHA3 +from Crypto.Util.py3compat import b + + +class APITest(unittest.TestCase): + + def test_update_after_digest(self): + msg=b("rrrrttt") + + # Normally, update() cannot be done after digest() + h = SHA3.new(data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = SHA3.new(data=msg).digest() + + # With the proper flag, it is allowed + h = SHA3.new(data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... 
and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + +def get_tests(config={}): + from .common import make_hash_tests + + tests = [] + + test_vectors = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHA3-384.txt", + "KAT SHA-3 384", + { "len" : lambda x: int(x) } ) or [] + + test_data = [] + for tv in test_vectors: + if tv.len == 0: + tv.msg = b("") + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests += make_hash_tests(SHA3, "SHA3_384", test_data, + digest_size=SHA3.digest_size, + oid="2.16.840.1.101.3.4.2.9") + tests += list_test_cases(APITest) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_512.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_512.py new file mode 100644 index 00000000..7d1007a6 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA3_512.py @@ -0,0 +1,79 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA3_512.py: Self-test for the SHA-3/512 hash function +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.SHA3_512""" + +import unittest +from binascii import hexlify + +from Crypto.SelfTest.loader import load_test_vectors +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Hash import SHA3_512 as SHA3 +from Crypto.Util.py3compat import b + + +class APITest(unittest.TestCase): + + def test_update_after_digest(self): + msg=b("rrrrttt") + + # Normally, update() cannot be done after digest() + h = SHA3.new(data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = SHA3.new(data=msg).digest() + + # With the proper flag, it is allowed + h = SHA3.new(data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... 
and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + +def get_tests(config={}): + from .common import make_hash_tests + + tests = [] + + test_vectors = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHA3-512.txt", + "KAT SHA-3 512", + { "len" : lambda x: int(x) } ) or [] + + test_data = [] + for tv in test_vectors: + if tv.len == 0: + tv.msg = b("") + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests += make_hash_tests(SHA3, "SHA3_512", test_data, + digest_size=SHA3.digest_size, + oid="2.16.840.1.101.3.4.2.10") + tests += list_test_cases(APITest) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA512.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA512.py new file mode 100644 index 00000000..20961aca --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHA512.py @@ -0,0 +1,140 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA512.py: Self-test for the SHA-512 hash function +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.SHA512""" + +from binascii import hexlify + +from Crypto.Hash import SHA512 +from .common import make_hash_tests +from Crypto.SelfTest.loader import load_test_vectors + +# Test vectors from various sources +# This is a list of (expected_result, input[, description]) tuples. 
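+# Illustrative aside (an editorial sketch, not part of the vendored file):
+# each KAT entry below can be checked by hashing the input and comparing
+# against the expected hex digest, roughly what make_hash_tests() automates.
+def _check_kat_entry(entry):
+    """Hypothetical helper: verify one (expected_result, input[, description]) tuple."""
+    from Crypto.Hash import SHA512
+    expected, message = entry[0], entry[1]
+    if isinstance(message, str):
+        message = message.encode('latin-1')     # the vectors below use str inputs
+    return SHA512.new(data=message).hexdigest() == expected
+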
+test_data_512_other = [ + + # RFC 4634: Section Page 8.4, "Test 1" + ('ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f', 'abc'), + + # RFC 4634: Section Page 8.4, "Test 2.1" + ('8e959b75dae313da8cf4f72814fc143f8f7779c6eb9f7fa17299aeadb6889018501d289e4900f7e4331b99dec4b5433ac7d329eeb6dd26545e96e55b874be909', 'abcdefghbcdefghicdefghijdefghijkefghijklfghijklmghijklmnhijklmnoijklmnopjklmnopqklmnopqrlmnopqrsmnopqrstnopqrstu'), + + # RFC 4634: Section Page 8.4, "Test 3" + ('e718483d0ce769644e2e42c7bc15b4638e1f98b13b2044285632a803afa973ebde0ff244877ea60a4cb0432ce577c31beb009c5c2c49aa2e4eadb217ad8cc09b', 'a' * 10**6, "'a' * 10**6"), + + # Taken from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm + ('cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e', ''), + + ('af9ed2de700433b803240a552b41b5a472a6ef3fe1431a722b2063c75e9f07451f67a28e37d09cde769424c96aea6f8971389db9e1993d6c565c3c71b855723c', 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern'), +] + + +def get_tests_SHA512(): + + test_vectors = load_test_vectors(("Hash", "SHA2"), + "SHA512ShortMsg.rsp", + "KAT SHA-512", + {"len": lambda x: int(x)}) or [] + + test_data = test_data_512_other[:] + for tv in test_vectors: + try: + if tv.startswith('['): + continue + except AttributeError: + pass + if tv.len == 0: + tv.msg = b"" + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests = make_hash_tests(SHA512, "SHA512", test_data, + digest_size=64, + oid="2.16.840.1.101.3.4.2.3") + return tests + + +def get_tests_SHA512_224(): + + test_vectors = load_test_vectors(("Hash", "SHA2"), + "SHA512_224ShortMsg.rsp", + "KAT SHA-512/224", + {"len": lambda x: int(x)}) or [] + + test_data = [] + for tv in test_vectors: + try: + if tv.startswith('['): + continue + except AttributeError: + pass + if tv.len == 0: + tv.msg = b"" + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests = make_hash_tests(SHA512, "SHA512/224", test_data, + digest_size=28, + oid="2.16.840.1.101.3.4.2.5", + extra_params={ "truncate" : "224" }) + return tests + + +def get_tests_SHA512_256(): + + test_vectors = load_test_vectors(("Hash", "SHA2"), + "SHA512_256ShortMsg.rsp", + "KAT SHA-512/256", + {"len": lambda x: int(x)}) or [] + + test_data = [] + for tv in test_vectors: + try: + if tv.startswith('['): + continue + except AttributeError: + pass + if tv.len == 0: + tv.msg = b"" + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests = make_hash_tests(SHA512, "SHA512/256", test_data, + digest_size=32, + oid="2.16.840.1.101.3.4.2.6", + extra_params={ "truncate" : "256" }) + return tests + + +def get_tests(config={}): + + tests = [] + tests += get_tests_SHA512() + tests += get_tests_SHA512_224() + tests += get_tests_SHA512_256() + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHAKE.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHAKE.py new file mode 100644 index 00000000..29bd34ed --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_SHAKE.py @@ -0,0 +1,143 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test suite for Crypto.Hash.SHAKE128 and SHAKE256""" + +import unittest +from binascii import hexlify, unhexlify + +from Crypto.SelfTest.loader import load_test_vectors +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Hash import SHAKE128, SHAKE256 +from Crypto.Util.py3compat import b, bchr, bord, tobytes + +class SHAKETest(unittest.TestCase): + + def test_new_positive(self): + + xof1 = self.shake.new() + xof2 = self.shake.new(data=b("90")) + xof3 = self.shake.new().update(b("90")) + + self.assertNotEqual(xof1.read(10), xof2.read(10)) + xof3.read(10) + self.assertEqual(xof2.read(10), xof3.read(10)) + + def test_update(self): + pieces = [bchr(10) * 200, bchr(20) * 300] + h = self.shake.new() + h.update(pieces[0]).update(pieces[1]) + digest = h.read(10) + h = self.shake.new() + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.read(10), digest) + + def test_update_negative(self): + h = self.shake.new() + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = self.shake.new() + digest = h.read(90) + + # read returns a byte string of the right length + self.assertTrue(isinstance(digest, type(b("digest")))) + self.assertEqual(len(digest), 90) + + def test_update_after_read(self): + mac = self.shake.new() + mac.update(b("rrrr")) + mac.read(90) + self.assertRaises(TypeError, mac.update, b("ttt")) + + +class SHAKE128Test(SHAKETest): + shake = SHAKE128 + + +class SHAKE256Test(SHAKETest): + shake = SHAKE256 + + +class SHAKEVectors(unittest.TestCase): + pass + + +test_vectors_128 = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHAKE128.txt", + "Short Messages KAT SHAKE128", + { "len" : lambda x: int(x) } ) or [] + +for idx, tv in enumerate(test_vectors_128): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = SHAKE128.new(data=data) + digest = hobj.read(len(result)) + self.assertEqual(digest, result) + + setattr(SHAKEVectors, "test_128_%d" % idx, new_test) + + +test_vectors_256 = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHAKE256.txt", + "Short Messages KAT SHAKE256", + { "len" : lambda x: int(x) } ) or [] 
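+# Illustrative aside (an editorial sketch, not part of the vendored file):
+# SHAKE objects are extendable-output functions, so read() streams: each
+# call returns the next slice of one continuous output.
+def _demo_shake_read():
+    """Hypothetical helper: successive read() calls continue one output stream."""
+    from Crypto.Hash import SHAKE128
+    xof = SHAKE128.new(data=b"90")
+    first = xof.read(16)                        # output bytes 0..15
+    second = xof.read(16)                       # output bytes 16..31
+    whole = SHAKE128.new(data=b"90").read(32)   # same 32 bytes in one call
+    assert first + second == whole
+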
+ +for idx, tv in enumerate(test_vectors_256): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = SHAKE256.new(data=data) + digest = hobj.read(len(result)) + self.assertEqual(digest, result) + + setattr(SHAKEVectors, "test_256_%d" % idx, new_test) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(SHAKE128Test) + tests += list_test_cases(SHAKE256Test) + tests += list_test_cases(SHAKEVectors) + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_TupleHash.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_TupleHash.py new file mode 100644 index 00000000..803dc72c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_TupleHash.py @@ -0,0 +1,286 @@ +import unittest +from binascii import unhexlify, hexlify + +from Crypto.Util.py3compat import tobytes +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Hash import TupleHash128, TupleHash256 + + +class TupleHashTest(unittest.TestCase): + + def new(self, *args, **kwargs): + return self.TupleHash.new(*args, **kwargs) + + def test_new_positive(self): + + h = self.new() + for new_func in self.TupleHash.new, h.new: + + for dbits in range(64, 1024 + 1, 8): + hobj = new_func(digest_bits=dbits) + self.assertEqual(hobj.digest_size * 8, dbits) + + for dbytes in range(8, 128 + 1): + hobj = new_func(digest_bytes=dbytes) + self.assertEqual(hobj.digest_size, dbytes) + + hobj = h.new() + self.assertEqual(hobj.digest_size, self.default_bytes) + + def test_new_negative(self): + + h = self.new() + for new_func in self.TupleHash.new, h.new: + self.assertRaises(TypeError, new_func, + digest_bytes=self.minimum_bytes, + digest_bits=self.minimum_bits) + self.assertRaises(ValueError, new_func, digest_bytes=0) + self.assertRaises(ValueError, new_func, + digest_bits=self.minimum_bits + 7) + self.assertRaises(ValueError, new_func, + digest_bits=self.minimum_bits - 8) + self.assertRaises(ValueError, new_func, + digest_bits=self.minimum_bytes - 1) + + def test_default_digest_size(self): + digest = self.new().digest() + self.assertEqual(len(digest), self.default_bytes) + + def test_update(self): + h = self.new() + h.update(b'') + h.digest() + + h = self.new() + h.update(b'') + h.update(b'STRING1') + h.update(b'STRING2') + mac1 = h.digest() + + h = self.new() + h.update(b'STRING1') + h.update(b'STRING2') + mac2 = h.digest() + + self.assertNotEqual(mac1, mac2) + + def test_update_negative(self): + h = self.new() + self.assertRaises(TypeError, h.update, u"string") + self.assertRaises(TypeError, h.update, None) + + def test_digest(self): + h = self.new() + digest = h.digest() + + # hexdigest does not change the state + self.assertEqual(h.digest(), digest) + # digest returns a byte string + self.assertTrue(isinstance(digest, type(b"digest"))) + + def test_update_after_digest(self): + msg = b"rrrrttt" + + # Normally, update() cannot be done after digest() + h = self.new() + h.update(msg) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, dig1) + + def test_hex_digest(self): + mac = self.new() + digest = mac.digest() + hexdigest = mac.hexdigest() + + # hexdigest is equivalent to digest + self.assertEqual(hexlify(digest), tobytes(hexdigest)) + # hexdigest does not change the state + self.assertEqual(mac.hexdigest(), hexdigest) + # hexdigest returns a string + 
self.assertTrue(isinstance(hexdigest, type("digest"))) + + def test_bytearray(self): + + data = b"\x00\x01\x02" + + # Data can be a bytearray (during operation) + data_ba = bytearray(data) + + h1 = self.new() + h2 = self.new() + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + def test_memoryview(self): + + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = self.new() + h2 = self.new() + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + +class TupleHash128Test(TupleHashTest): + + TupleHash = TupleHash128 + + minimum_bytes = 8 + default_bytes = 64 + + minimum_bits = 64 + default_bits = 512 + + +class TupleHash256Test(TupleHashTest): + + TupleHash = TupleHash256 + + minimum_bytes = 8 + default_bytes = 64 + + minimum_bits = 64 + default_bits = 512 + + +class NISTExampleTestVectors(unittest.TestCase): + + # http://csrc.nist.gov/groups/ST/toolkit/documents/Examples/TupleHash_samples.pdf + test_data = [ + ( + ( + "00 01 02", + "10 11 12 13 14 15", + ), + "", + "C5 D8 78 6C 1A FB 9B 82 11 1A B3 4B 65 B2 C0 04" + "8F A6 4E 6D 48 E2 63 26 4C E1 70 7D 3F FC 8E D1", + "KMAC128 Sample #1 NIST", + TupleHash128 + ), + ( + ( + "00 01 02", + "10 11 12 13 14 15", + ), + "My Tuple App", + "75 CD B2 0F F4 DB 11 54 E8 41 D7 58 E2 41 60 C5" + "4B AE 86 EB 8C 13 E7 F5 F4 0E B3 55 88 E9 6D FB", + "KMAC128 Sample #2 NIST", + TupleHash128 + ), + ( + ( + "00 01 02", + "10 11 12 13 14 15", + "20 21 22 23 24 25 26 27 28", + ), + "My Tuple App", + "E6 0F 20 2C 89 A2 63 1E DA 8D 4C 58 8C A5 FD 07" + "F3 9E 51 51 99 8D EC CF 97 3A DB 38 04 BB 6E 84", + "KMAC128 Sample #3 NIST", + TupleHash128 + ), + ( + ( + "00 01 02", + "10 11 12 13 14 15", + ), + "", + "CF B7 05 8C AC A5 E6 68 F8 1A 12 A2 0A 21 95 CE" + "97 A9 25 F1 DB A3 E7 44 9A 56 F8 22 01 EC 60 73" + "11 AC 26 96 B1 AB 5E A2 35 2D F1 42 3B DE 7B D4" + "BB 78 C9 AE D1 A8 53 C7 86 72 F9 EB 23 BB E1 94", + "KMAC256 Sample #4 NIST", + TupleHash256 + ), + ( + ( + "00 01 02", + "10 11 12 13 14 15", + ), + "My Tuple App", + "14 7C 21 91 D5 ED 7E FD 98 DB D9 6D 7A B5 A1 16" + "92 57 6F 5F E2 A5 06 5F 3E 33 DE 6B BA 9F 3A A1" + "C4 E9 A0 68 A2 89 C6 1C 95 AA B3 0A EE 1E 41 0B" + "0B 60 7D E3 62 0E 24 A4 E3 BF 98 52 A1 D4 36 7E", + "KMAC256 Sample #5 NIST", + TupleHash256 + ), + ( + ( + "00 01 02", + "10 11 12 13 14 15", + "20 21 22 23 24 25 26 27 28", + ), + "My Tuple App", + "45 00 0B E6 3F 9B 6B FD 89 F5 47 17 67 0F 69 A9" + "BC 76 35 91 A4 F0 5C 50 D6 88 91 A7 44 BC C6 E7" + "D6 D5 B5 E8 2C 01 8D A9 99 ED 35 B0 BB 49 C9 67" + "8E 52 6A BD 8E 85 C1 3E D2 54 02 1D B9 E7 90 CE", + "KMAC256 Sample #6 NIST", + TupleHash256 + ), + + + + ] + + def setUp(self): + td = [] + for tv_in in self.test_data: + tv_out = [None] * len(tv_in) + + tv_out[0] = [] + for string in tv_in[0]: + tv_out[0].append(unhexlify(string.replace(" ", ""))) + + tv_out[1] = tobytes(tv_in[1]) # Custom + tv_out[2] = unhexlify(tv_in[2].replace(" ", "")) + tv_out[3] = tv_in[3] + tv_out[4] = tv_in[4] + td.append(tv_out) + self.test_data = td + + def runTest(self): + + for data, custom, digest, text, module in self.test_data: + hd = module.new(custom=custom, digest_bytes=len(digest)) + for string in data: + hd.update(string) + self.assertEqual(hd.digest(), 
digest, msg=text) + + +def get_tests(config={}): + tests = [] + + tests += list_test_cases(TupleHash128Test) + tests += list_test_cases(TupleHash256Test) + tests.append(NISTExampleTestVectors()) + + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_cSHAKE.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_cSHAKE.py new file mode 100644 index 00000000..72ad3411 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_cSHAKE.py @@ -0,0 +1,178 @@ +# =================================================================== +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +"""Self-test suite for Crypto.Hash.cSHAKE128 and cSHAKE256""" + +import unittest + +from Crypto.SelfTest.loader import load_test_vectors +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Hash import cSHAKE128, cSHAKE256, SHAKE128, SHAKE256 +from Crypto.Util.py3compat import b, bchr, tobytes + + +class cSHAKETest(unittest.TestCase): + + def test_left_encode(self): + from Crypto.Hash.cSHAKE128 import _left_encode + self.assertEqual(_left_encode(0), b'\x01\x00') + self.assertEqual(_left_encode(1), b'\x01\x01') + self.assertEqual(_left_encode(256), b'\x02\x01\x00') + + def test_bytepad(self): + from Crypto.Hash.cSHAKE128 import _bytepad + self.assertEqual(_bytepad(b'', 4), b'\x01\x04\x00\x00') + self.assertEqual(_bytepad(b'A', 4), b'\x01\x04A\x00') + self.assertEqual(_bytepad(b'AA', 4), b'\x01\x04AA') + self.assertEqual(_bytepad(b'AAA', 4), b'\x01\x04AAA\x00\x00\x00') + self.assertEqual(_bytepad(b'AAAA', 4), b'\x01\x04AAAA\x00\x00') + self.assertEqual(_bytepad(b'AAAAA', 4), b'\x01\x04AAAAA\x00') + self.assertEqual(_bytepad(b'AAAAAA', 4), b'\x01\x04AAAAAA') + self.assertEqual(_bytepad(b'AAAAAAA', 4), b'\x01\x04AAAAAAA\x00\x00\x00') + + def test_new_positive(self): + + xof1 = self.cshake.new() + xof2 = self.cshake.new(data=b("90")) + xof3 = self.cshake.new().update(b("90")) + + self.assertNotEqual(xof1.read(10), xof2.read(10)) + xof3.read(10) + self.assertEqual(xof2.read(10), xof3.read(10)) + + xof1 = self.cshake.new() + ref = xof1.read(10) + xof2 = self.cshake.new(custom=b("")) + xof3 = self.cshake.new(custom=b("foo")) + + self.assertEqual(ref, xof2.read(10)) + self.assertNotEqual(ref, xof3.read(10)) + + xof1 = self.cshake.new(custom=b("foo")) + xof2 = self.cshake.new(custom=b("foo"), data=b("90")) + xof3 = self.cshake.new(custom=b("foo")).update(b("90")) + + self.assertNotEqual(xof1.read(10), xof2.read(10)) + xof3.read(10) + self.assertEqual(xof2.read(10), xof3.read(10)) + + def test_update(self): + pieces = [bchr(10) * 200, bchr(20) * 300] + h = self.cshake.new() + h.update(pieces[0]).update(pieces[1]) + digest = h.read(10) + h = self.cshake.new() + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.read(10), digest) + + def test_update_negative(self): + h = self.cshake.new() + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = self.cshake.new() + digest = h.read(90) + + # read returns a byte string of the right length + self.assertTrue(isinstance(digest, type(b("digest")))) + self.assertEqual(len(digest), 90) + + def test_update_after_read(self): + mac = self.cshake.new() + mac.update(b("rrrr")) + mac.read(90) + self.assertRaises(TypeError, mac.update, b("ttt")) + + def test_shake(self): + # When no customization string is passed, results must match SHAKE + for digest_len in range(64): + xof1 = self.cshake.new(b'TEST') + xof2 = self.shake.new(b'TEST') + self.assertEqual(xof1.read(digest_len), xof2.read(digest_len)) + + +class cSHAKE128Test(cSHAKETest): + cshake = cSHAKE128 + shake = SHAKE128 + + +class cSHAKE256Test(cSHAKETest): + cshake = cSHAKE256 + shake = SHAKE256 + + +class cSHAKEVectors(unittest.TestCase): + pass + + +vector_files = [("ShortMsgSamples_cSHAKE128.txt", "Short Message Samples cSHAKE128", "128_cshake", cSHAKE128), + ("ShortMsgSamples_cSHAKE256.txt", "Short Message Samples cSHAKE256", "256_cshake", cSHAKE256), + ("CustomMsgSamples_cSHAKE128.txt", "Custom Message Samples cSHAKE128", "custom_128_cshake", cSHAKE128), + 
("CustomMsgSamples_cSHAKE256.txt", "Custom Message Samples cSHAKE256", "custom_256_cshake", cSHAKE256), + ] + +for file, descr, tag, test_class in vector_files: + + test_vectors = load_test_vectors(("Hash", "SHA3"), file, descr, + {"len": lambda x: int(x), + "nlen": lambda x: int(x), + "slen": lambda x: int(x)}) or [] + + for idx, tv in enumerate(test_vectors): + if getattr(tv, "len", 0) == 0: + data = b("") + else: + data = tobytes(tv.msg) + assert(tv.len == len(tv.msg)*8) + if getattr(tv, "nlen", 0) != 0: + raise ValueError("Unsupported cSHAKE test vector") + if getattr(tv, "slen", 0) == 0: + custom = b("") + else: + custom = tobytes(tv.s) + assert(tv.slen == len(tv.s)*8) + + def new_test(self, data=data, result=tv.md, custom=custom, test_class=test_class): + hobj = test_class.new(data=data, custom=custom) + digest = hobj.read(len(result)) + self.assertEqual(digest, result) + + setattr(cSHAKEVectors, "test_%s_%d" % (tag, idx), new_test) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(cSHAKE128Test) + tests += list_test_cases(cSHAKE256Test) + tests += list_test_cases(cSHAKEVectors) + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_keccak.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_keccak.py new file mode 100644 index 00000000..54cdf27d --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Hash/test_keccak.py @@ -0,0 +1,250 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +"""Self-test suite for Crypto.Hash.keccak""" + +import unittest +from binascii import hexlify, unhexlify + +from Crypto.SelfTest.loader import load_test_vectors +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Hash import keccak +from Crypto.Util.py3compat import b, tobytes, bchr + +class KeccakTest(unittest.TestCase): + + def test_new_positive(self): + + for digest_bits in (224, 256, 384, 512): + hobj = keccak.new(digest_bits=digest_bits) + self.assertEqual(hobj.digest_size, digest_bits // 8) + + hobj2 = hobj.new() + self.assertEqual(hobj2.digest_size, digest_bits // 8) + + for digest_bytes in (28, 32, 48, 64): + hobj = keccak.new(digest_bytes=digest_bytes) + self.assertEqual(hobj.digest_size, digest_bytes) + + hobj2 = hobj.new() + self.assertEqual(hobj2.digest_size, digest_bytes) + + def test_new_positive2(self): + + digest1 = keccak.new(data=b("\x90"), digest_bytes=64).digest() + digest2 = keccak.new(digest_bytes=64).update(b("\x90")).digest() + self.assertEqual(digest1, digest2) + + def test_new_negative(self): + + # keccak.new needs digest size + self.assertRaises(TypeError, keccak.new) + + h = keccak.new(digest_bits=512) + + # Either bits or bytes can be specified + self.assertRaises(TypeError, keccak.new, + digest_bytes=64, + digest_bits=512) + + # Range + self.assertRaises(ValueError, keccak.new, digest_bytes=0) + self.assertRaises(ValueError, keccak.new, digest_bytes=1) + self.assertRaises(ValueError, keccak.new, digest_bytes=65) + self.assertRaises(ValueError, keccak.new, digest_bits=0) + self.assertRaises(ValueError, keccak.new, digest_bits=1) + self.assertRaises(ValueError, keccak.new, digest_bits=513) + + def test_update(self): + pieces = [bchr(10) * 200, bchr(20) * 300] + h = keccak.new(digest_bytes=64) + h.update(pieces[0]).update(pieces[1]) + digest = h.digest() + h = keccak.new(digest_bytes=64) + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.digest(), digest) + + def test_update_negative(self): + h = keccak.new(digest_bytes=64) + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = keccak.new(digest_bytes=64) + digest = h.digest() + + # hexdigest does not change the state + self.assertEqual(h.digest(), digest) + # digest returns a byte string + self.assertTrue(isinstance(digest, type(b("digest")))) + + def test_hex_digest(self): + mac = keccak.new(digest_bits=512) + digest = mac.digest() + hexdigest = mac.hexdigest() + + # hexdigest is equivalent to digest + self.assertEqual(hexlify(digest), tobytes(hexdigest)) + # hexdigest does not change the state + self.assertEqual(mac.hexdigest(), hexdigest) + # hexdigest returns a string + self.assertTrue(isinstance(hexdigest, type("digest"))) + + def test_update_after_digest(self): + msg=b("rrrrttt") + + # Normally, update() cannot be done after digest() + h = keccak.new(digest_bits=512, data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = keccak.new(digest_bits=512, data=msg).digest() + + # With the proper flag, it is allowed + h = keccak.new(digest_bits=512, data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... 
and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + +class KeccakVectors(unittest.TestCase): + pass + + # TODO: add ExtremelyLong tests + + +test_vectors_224 = load_test_vectors(("Hash", "keccak"), + "ShortMsgKAT_224.txt", + "Short Messages KAT 224", + {"len": lambda x: int(x)}) or [] + +test_vectors_224 += load_test_vectors(("Hash", "keccak"), + "LongMsgKAT_224.txt", + "Long Messages KAT 224", + {"len": lambda x: int(x)}) or [] + +for idx, tv in enumerate(test_vectors_224): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = keccak.new(digest_bits=224, data=data) + self.assertEqual(hobj.digest(), result) + + setattr(KeccakVectors, "test_224_%d" % idx, new_test) + +# --- + +test_vectors_256 = load_test_vectors(("Hash", "keccak"), + "ShortMsgKAT_256.txt", + "Short Messages KAT 256", + { "len" : lambda x: int(x) } ) or [] + +test_vectors_256 += load_test_vectors(("Hash", "keccak"), + "LongMsgKAT_256.txt", + "Long Messages KAT 256", + { "len" : lambda x: int(x) } ) or [] + +for idx, tv in enumerate(test_vectors_256): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = keccak.new(digest_bits=256, data=data) + self.assertEqual(hobj.digest(), result) + + setattr(KeccakVectors, "test_256_%d" % idx, new_test) + + +# --- + +test_vectors_384 = load_test_vectors(("Hash", "keccak"), + "ShortMsgKAT_384.txt", + "Short Messages KAT 384", + {"len": lambda x: int(x)}) or [] + +test_vectors_384 += load_test_vectors(("Hash", "keccak"), + "LongMsgKAT_384.txt", + "Long Messages KAT 384", + {"len": lambda x: int(x)}) or [] + +for idx, tv in enumerate(test_vectors_384): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = keccak.new(digest_bits=384, data=data) + self.assertEqual(hobj.digest(), result) + + setattr(KeccakVectors, "test_384_%d" % idx, new_test) + +# --- + +test_vectors_512 = load_test_vectors(("Hash", "keccak"), + "ShortMsgKAT_512.txt", + "Short Messages KAT 512", + {"len": lambda x: int(x)}) or [] + +test_vectors_512 += load_test_vectors(("Hash", "keccak"), + "LongMsgKAT_512.txt", + "Long Messages KAT 512", + {"len": lambda x: int(x)}) or [] + +for idx, tv in enumerate(test_vectors_512): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = keccak.new(digest_bits=512, data=data) + self.assertEqual(hobj.digest(), result) + + setattr(KeccakVectors, "test_512_%d" % idx, new_test) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(KeccakTest) + tests += list_test_cases(KeccakVectors) + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/IO/__init__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/IO/__init__.py new file mode 100644 index 00000000..c04a2a78 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/IO/__init__.py @@ -0,0 +1,47 @@ +# +# SelfTest/IO/__init__.py: Self-test for input/output module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test for I/O""" + +def get_tests(config={}): + tests = [] + from Crypto.SelfTest.IO import test_PKCS8; tests += test_PKCS8.get_tests(config=config) + from Crypto.SelfTest.IO import test_PBES; tests += test_PBES.get_tests(config=config) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + + diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/IO/test_PBES.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/IO/test_PBES.py new file mode 100644 index 00000000..b2a4f94a --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/IO/test_PBES.py @@ -0,0 +1,93 @@ +# +# SelfTest/IO/test_PBES.py: Self-test for the _PBES module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-tests for Crypto.IO._PBES module""" + +import unittest +from Crypto.Util.py3compat import * + +from Crypto.IO._PBES import PBES2 + + +class TestPBES2(unittest.TestCase): + + def setUp(self): + self.ref = b("Test data") + self.passphrase = b("Passphrase") + + def test1(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + def test2(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'PBKDF2WithHMAC-SHA1AndAES128-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + def test3(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'PBKDF2WithHMAC-SHA1AndAES192-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + def test4(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'scryptAndAES128-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + def test5(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'scryptAndAES192-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + def test6(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'scryptAndAES256-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + +def get_tests(config={}): + from Crypto.SelfTest.st_common import list_test_cases + listTests = [] + listTests += list_test_cases(TestPBES2) + return listTests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/IO/test_PKCS8.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/IO/test_PKCS8.py new file mode 100644 index 00000000..cf91d69c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/IO/test_PKCS8.py @@ -0,0 +1,425 @@ +# +# SelfTest/IO/test_PKCS8.py: Self-test for the PKCS8 module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-tests for Crypto.IO.PKCS8 module""" + +import unittest +from binascii import unhexlify + +from Crypto.Util.py3compat import * +from Crypto.IO import PKCS8 + +from Crypto.Util.asn1 import DerNull + +oid_key = '1.2.840.113549.1.1.1' + +# Original RSA key (in DER format) +# hexdump -v -e '32/1 "%02x" "\n"' key.der +clear_key=""" +308201ab020100025a00b94a7f7075ab9e79e8196f47be707781e80dd965cf16 +0c951a870b71783b6aaabbd550c0e65e5a3dfe15b8620009f6d7e5efec42a3f0 +6fe20faeebb0c356e79cdec6db4dd427e82d8ae4a5b90996227b8ba54ccfc4d2 +5c08050203010001025a00afa09c70d528299b7552fe766b5d20f9a221d66938 +c3b68371d48515359863ff96f0978d700e08cd6fd3d8a3f97066fc2e0d5f78eb +3a50b8e17ba297b24d1b8e9cdfd18d608668198d724ad15863ef0329195dee89 +3f039395022d0ebe0518df702a8b25954301ec60a97efdcec8eaa4f2e76ca7e8 +8dfbc3f7e0bb83f9a0e8dc47c0f8c746e9df6b022d0c9195de13f09b7be1fdd7 +1f56ae7d973e08bd9fd2c3dfd8936bb05be9cc67bd32d663c7f00d70932a0be3 +c24f022d0ac334eb6cabf1933633db007b763227b0d9971a9ea36aca8b669ec9 +4fcf16352f6b3dcae28e4bd6137db4ddd3022d0400a09f15ee7b351a2481cb03 +09920905c236d09c87afd3022f3afc2a19e3b746672b635238956ee7e6dd62d5 +022d0cd88ed14fcfbda5bbf0257f700147137bbab9c797af7df866704b889aa3 +7e2e93df3ff1a0fd3490111dcdbc4c +""" + +# Same key as above, wrapped in PKCS#8 but w/o password +# +# openssl pkcs8 -topk8 -inform DER -nocrypt -in key.der -outform DER -out keyp8.der +# hexdump -v -e '32/1 "%02x" "\n"' keyp8.der +wrapped_clear_key=""" +308201c5020100300d06092a864886f70d0101010500048201af308201ab0201 +00025a00b94a7f7075ab9e79e8196f47be707781e80dd965cf160c951a870b71 +783b6aaabbd550c0e65e5a3dfe15b8620009f6d7e5efec42a3f06fe20faeebb0 +c356e79cdec6db4dd427e82d8ae4a5b90996227b8ba54ccfc4d25c0805020301 +0001025a00afa09c70d528299b7552fe766b5d20f9a221d66938c3b68371d485 +15359863ff96f0978d700e08cd6fd3d8a3f97066fc2e0d5f78eb3a50b8e17ba2 +97b24d1b8e9cdfd18d608668198d724ad15863ef0329195dee893f039395022d +0ebe0518df702a8b25954301ec60a97efdcec8eaa4f2e76ca7e88dfbc3f7e0bb +83f9a0e8dc47c0f8c746e9df6b022d0c9195de13f09b7be1fdd71f56ae7d973e +08bd9fd2c3dfd8936bb05be9cc67bd32d663c7f00d70932a0be3c24f022d0ac3 +34eb6cabf1933633db007b763227b0d9971a9ea36aca8b669ec94fcf16352f6b +3dcae28e4bd6137db4ddd3022d0400a09f15ee7b351a2481cb0309920905c236 +d09c87afd3022f3afc2a19e3b746672b635238956ee7e6dd62d5022d0cd88ed1 +4fcfbda5bbf0257f700147137bbab9c797af7df866704b889aa37e2e93df3ff1 +a0fd3490111dcdbc4c +""" + +### +# +# The key above will now be encrypted with different algorithms. +# The password is always 'TestTest'. 
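+#
+# Illustrative aside (an editorial sketch, not part of the vendored file):
+# in outline, the wrap/unwrap round trip exercised further below is
+#
+#     from Crypto.IO import PKCS8
+#     wrapped = PKCS8.wrap(txt2bin(clear_key), oid_key, b"TestTest",
+#                          protection='PBKDF2WithHMAC-SHA1AndAES128-CBC')
+#     algo_oid, recovered, params = PKCS8.unwrap(wrapped, b"TestTest")
+#     assert recovered == txt2bin(clear_key)
+#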
+# +# Each item in the wrapped_enc_keys list contains: +# * wrap algorithm +# * iteration count +# * Salt +# * IV +# * Expected result +### +wrapped_enc_keys = [] + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -outform DER -out keyenc.der -v2 des3 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC', +2048, +"47EA7227D8B22E2F", # IV +"E3F7A838AB911A4D", # Salt +""" +30820216304006092a864886f70d01050d3033301b06092a864886f70d01050c +300e0408e3f7a838ab911a4d02020800301406082a864886f70d0307040847ea +7227d8b22e2f048201d0ea388b374d2d0e4ceb7a5139f850fdff274884a6e6c0 +64326e09d00dbba9018834edb5a51a6ae3d1806e6e91eebf33788ce71fee0637 +a2ebf58859dd32afc644110c390274a6128b50c39b8d907823810ec471bada86 +6f5b75d8ea04ad310fad2e73621696db8e426cd511ee93ec1714a1a7db45e036 +4bf20d178d1f16bbb250b32c2d200093169d588de65f7d99aad9ddd0104b44f1 +326962e1520dfac3c2a800e8a14f678dff2b3d0bb23f69da635bf2a643ac934e +219a447d2f4460b67149e860e54f365da130763deefa649c72b0dcd48966a2d3 +4a477444782e3e66df5a582b07bbb19778a79bd355074ce331f4a82eb966b0c4 +52a09eab6116f2722064d314ae433b3d6e81d2436e93fdf446112663cde93b87 +9c8be44beb45f18e2c78fee9b016033f01ecda51b9b142091fa69f65ab784d2c +5ad8d34be6f7f1464adfc1e0ef3f7848f40d3bdea4412758f2fcb655c93d8f4d +f6fa48fc5aa4b75dd1c017ab79ac9d737233a6d668f5364ccf47786debd37334 +9c10c9e6efbe78430a61f71c89948aa32cdc3cc7338cf994147819ce7ab23450 +c8f7d9b94c3bb377d17a3fa204b601526317824b142ff6bc843fa7815ece89c0 +839573f234dac8d80cc571a045353d61db904a4398d8ef3df5ac +""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -outform DER -out keyenc.der +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'skip encryption', # pbeWithMD5AndDES-CBC, only decoding is supported +-1, +"", +"", +""" +308201f1301b06092a864886f70d010503300e0408f9b990c89af1d41b020208 +00048201d0c6267fe8592903891933d559e71a7ca68b2e39150f19daca0f7921 +52f97e249d72f670d5140e9150433310ed7c7ee51927693fd39884cb9551cea5 +a7b746f7edf199f8787d4787a35dad930d7db057b2118851211b645ac8b90fa6 +b0e7d49ac8567cbd5fff226e87aa9129a0f52c45e9307752e8575c3b0ff756b7 +31fda6942d15ecb6b27ea19370ccc79773f47891e80d22b440d81259c4c28eac +e0ca839524116bcf52d8c566e49a95ddb0e5493437279a770a39fd333f3fca91 +55884fad0ba5aaf273121f893059d37dd417da7dcfd0d6fa7494968f13b2cc95 +65633f2c891340193e5ec00e4ee0b0e90b3b93da362a4906360845771ade1754 +9df79140be5993f3424c012598eadd3e7c7c0b4db2c72cf103d7943a5cf61420 +93370b9702386c3dd4eb0a47f34b579624a46a108b2d13921fa1b367495fe345 +6aa128aa70f8ca80ae13eb301e96c380724ce67c54380bbea2316c1faf4d058e +b4ca2e23442047606b9bc4b3bf65b432cb271bea4eb35dd3eb360d3be8612a87 +a50e96a2264490aeabdc07c6e78e5dbf4fe3388726d0e2a228346bf3c2907d68 +2a6276b22ae883fb30fa611f4e4193e7a08480fcd7db48308bacbd72bf4807aa +11fd394859f97d22982f7fe890b2e2a0f7e7ffb693 +""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v1 PBE-SHA1-RC2-64 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'skip encryption', # pbeWithSHA1AndRC2-CBC, only decoding is supported +-1, +"", +"", +""" +308201f1301b06092a864886f70d01050b300e04083ee943bdae185008020208 +00048201d0e4614d9371d3ff10ceabc2f6a7a13a0f449f9a714144e46518ea55 +e3e6f0cde24031d01ef1f37ec40081449ef01914faf45983dde0d2bc496712de +8dd15a5527dff4721d9016c13f34fb93e3ce68577e30146266d71b539f854e56 +753a192cf126ed4812734d86f81884374f1100772f78d0646e9946407637c565 
+d070acab413c55952f7237437f2e48cae7fa0ff8d370de2bf446dd08049a3663 +d9c813ac197468c02e2b687e7ca994cf7f03f01b6eca87dbfed94502c2094157 +ea39f73fe4e591df1a68b04d19d9adab90bb9898467c1464ad20bf2b8fb9a5ff +d3ec91847d1c67fd768a4b9cfb46572eccc83806601372b6fad0243f58f623b7 +1c5809dea0feb8278fe27e5560eed8448dc93f5612f546e5dd7c5f6404365eb2 +5bf3396814367ae8b15c5c432b57eaed1f882c05c7f6517ee9e42b87b7b8d071 +9d6125d1b52f7b2cca1f6bd5f584334bf90bce1a7d938274cafe27b68e629698 +b16e27ae528db28593af9adcfccbebb3b9e1f2af5cd5531b51968389caa6c091 +e7de1f1b96f0d258e54e540d961a7c0ef51fda45d6da5fddd33e9bbfd3a5f8d7 +d7ab2e971de495cddbc86d38444fee9f0ac097b00adaf7802dabe0cff5b43b45 +4f26b7b547016f89be52676866189911c53e2f2477""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v1 PBE-MD5-RC2-64 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'skip encryption', # pbeWithMD5AndRC2-CBC, only decoding is supported +-1, +"", +"", +""" +308201f1301b06092a864886f70d010506300e0408f5cd2fee56d9b4b8020208 +00048201d086454942d6166a19d6b108465bd111e7080911f573d54b1369c676 +df28600e84936bfec04f91023ff16499e2e07178c340904f12ffa6886ab66228 +32bf43c2bff5a0ed14e765918cf5fc543ad49566246f7eb3fc044fa5a9c25f40 +8fc8c8296b91658d3bb1067c0aba008c4fefd9e2bcdbbbd63fdc8085482bccf4 +f150cec9a084259ad441a017e5d81a1034ef2484696a7a50863836d0eeda45cd +8cee8ecabfed703f8d9d4bbdf3a767d32a0ccdc38550ee2928d7fe3fa27eda5b +5c7899e75ad55d076d2c2d3c37d6da3d95236081f9671dab9a99afdb1cbc890e +332d1a91105d9a8ce08b6027aa07367bd1daec3059cb51f5d896124da16971e4 +0ca4bcadb06c854bdf39f42dd24174011414e51626d198775eff3449a982df7b +ace874e77e045eb6d7c3faef0750792b29a068a6291f7275df1123fac5789c51 +27ace42836d81633faf9daf38f6787fff0394ea484bbcd465b57d4dbee3cf8df +b77d1db287b3a6264c466805be5a4fe85cfbca180699859280f2dd8e2c2c10b5 +7a7d2ac670c6039d41952fbb0e4f99b560ebe1d020e1b96d02403283819c00cc +529c51f0b0101555e4c58002ba3c6e3c12e3fde1aec94382792e96d9666a2b33 +3dc397b22ecab67ee38a552fec29a1d4ff8719c748""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v1 PBE-SHA1-DES +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'skip encryption', # pbeWithSHA1AndDES-CBC, only decoding is supported +-1, +"", +"", +""" +308201f1301b06092a864886f70d01050a300e04089bacc9cf1e8f734e020208 +00048201d03e502f3ceafe8fd19ab2939576bfdded26d719b2441db1459688f5 +9673218b41ec1f739edf1e460bd927bc28470c87b2d4fc8ea02ba17b47a63c49 +c5c1bee40529dadfd3ef8b4472c730bc136678c78abfb34670ec9d7dcd17ee3f +892f93f2629e6e0f4b24ecb9f954069bf722f466dece3913bb6abbd2c471d9a5 +c5eea89b14aaccda43d30b0dd0f6eb6e9850d9747aa8aa8414c383ad01c374ee +26d3552abec9ba22669cc9622ccf2921e3d0c8ecd1a70e861956de0bec6104b5 +b649ac994970c83f8a9e84b14a7dff7843d4ca3dd4af87cea43b5657e15ae0b5 +a940ce5047f006ab3596506600724764f23757205fe374fee04911336d655acc +03e159ec27789191d1517c4f3f9122f5242d44d25eab8f0658cafb928566ca0e +8f6589aa0c0ab13ca7a618008ae3eafd4671ee8fe0b562e70b3623b0e2a16eee +97fd388087d2e03530c9fe7db6e52eccc7c48fd701ede35e08922861a9508d12 +bc8bbf24f0c6bee6e63dbcb489b603d4c4a78ce45bf2eab1d5d10456c42a65a8 +3a606f4e4b9b46eb13b57f2624b651859d3d2d5192b45dbd5a2ead14ff20ca76 +48f321309aa56d8c0c4a192b580821cc6c70c75e6f19d1c5414da898ec4dd39d +b0eb93d6ba387a80702dfd2db610757ba340f63230 +""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v2 aes128 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# 
+wrapped_enc_keys.append(( +'PBKDF2WithHMAC-SHA1AndAES128-CBC', +2048, +"4F66EE5D3BCD531FE6EBF4B4E73016B8", # IV +"479F25156176C53A", # Salt +""" +3082021f304906092a864886f70d01050d303c301b06092a864886f70d01050c +300e0408479f25156176c53a02020800301d060960864801650304010204104f +66ee5d3bcd531fe6ebf4b4e73016b8048201d0e33cfa560423f589d097d21533 +3b880a5ebac5b2ac58b4e73b0d787aee7764f034fe34ca1d1bd845c0a7c3316f +afbfb2129e03dcaf5a5031394206492828dacef1e04639bee5935e0f46114202 +10bc6c37182f4889be11c5d0486c398f4be952e5740f65de9d8edeb275e2b406 +e19bc29ad5ebb97fa536344fc3d84c7e755696f12b810898de4e6f069b8a81c8 +0aab0d45d7d062303aaa4a10c2ce84fdb5a03114039cfe138e38bb15b2ced717 +93549cdad85e730b14d9e2198b663dfdc8d04a4349eb3de59b076ad40b116d4a +25ed917c576bc7c883c95ef0f1180e28fc9981bea069594c309f1aa1b253ceab +a2f0313bb1372bcb51a745056be93d77a1f235a762a45e8856512d436b2ca0f7 +dd60fbed394ba28978d2a2b984b028529d0a58d93aba46c6bbd4ac1e4013cbaa +63b00988bc5f11ccc40141c346762d2b28f64435d4be98ec17c1884985e3807e +e550db606600993efccf6de0dfc2d2d70b5336a3b018fa415d6bdd59f5777118 +16806b7bc17c4c7e20ad7176ebfa5a1aa3f6bc10f04b77afd443944642ac9cca +d740e082b4a3bbb8bafdd34a0b3c5f2f3c2aceccccdccd092b78994b845bfa61 +706c3b9df5165ed1dbcbf1244fe41fc9bf993f52f7658e2f87e1baaeacb0f562 +9d905c +""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v2 aes192 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'PBKDF2WithHMAC-SHA1AndAES192-CBC', +2048, +"5CFC2A4FF7B63201A4A8A5B021148186", # IV +"D718541C264944CE", # Salt +""" +3082021f304906092a864886f70d01050d303c301b06092a864886f70d01050c +300e0408d718541c264944ce02020800301d060960864801650304011604105c +fc2a4ff7b63201a4a8a5b021148186048201d08e74aaa21b8bcfb15b9790fe95 +b0e09ddb0f189b6fb1682fdb9f122b804650ddec3c67a1df093a828b3e5fbcc6 +286abbcc5354c482fd796d972e919ca8a5eba1eaa2293af1d648013ddad72106 +75622264dfba55dafdda39e338f058f1bdb9846041ffff803797d3fdf3693135 +8a192729ea8346a7e5e58e925a2e2e4af0818581859e8215d87370eb4194a5ff +bae900857d4c591dbc651a241865a817eaede9987c9f9ae4f95c0bf930eea88c +4d7596e535ffb7ca369988aba75027a96b9d0bc9c8b0b75f359067fd145a378b +02aaa15e9db7a23176224da48a83249005460cc6e429168657f2efa8b1af7537 +d7d7042f2d683e8271b21d591090963eeb57aea6172f88da139e1614d6a7d1a2 +1002d5a7a93d6d21156e2b4777f6fc069287a85a1538c46b7722ccde591ab55c +630e1ceeb1ac42d1b41f3f654e9da86b5efced43775ea68b2594e50e4005e052 +0fe753c0898120c2c07265367ff157f6538a1e4080d6f9d1ca9eb51939c9574e +f2e4e1e87c1434affd5808563cddd376776dbbf790c6a40028f311a8b58dafa2 +0970ed34acd6e3e89d063987893b2b9570ddb8cc032b05a723bba9444933ebf3 +c624204be72f4190e0245197d0cb772bec933fd8442445f9a28bd042d5a3a1e9 +9a8a07 +""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v2 aes192 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'PBKDF2WithHMAC-SHA1AndAES256-CBC', +2048, +"323351F94462AC563E053A056252C2C4", # IV +"02A6CD0D12E727B5", # Salt +""" +3082021f304906092a864886f70d01050d303c301b06092a864886f70d01050c +300e040802a6cd0d12e727b502020800301d060960864801650304012a041032 +3351f94462ac563e053a056252c2c4048201d07f4ef1c7be21aae738a20c5632 +b8bdbbb9083b6e7f68822267b1f481fd27fdafd61a90660de6e4058790e4c912 +bf3f319a7c37e6eb3d956daaa143865020d554bf6215e8d7492359aaeef45d6e +d85a686ed26c0bf7c18d071d827a86f0b73e1db0c0e7f3d42201544093302a90 +551ad530692468c47ac15c69500b8ca67d4a17b64d15cecc035ae50b768a36cf 
+07c395afa091e9e6f86f665455fbdc1b21ad79c0908b73da5de75a9b43508d5d +44dc97a870cd3cd9f01ca24452e9b11c1b4982946702cfcbfda5b2fcc0203fb5 +0b52a115760bd635c94d4c95ac2c640ee9a04ffaf6ccff5a8d953dd5d88ca478 +c377811c521f2191639c643d657a9e364af88bb7c14a356c2b0b4870a23c2f54 +d41f8157afff731471dccc6058b15e1151bcf84b39b5e622a3a1d65859c912a5 +591b85e034a1f6af664f030a6bfc8c3d20c70f32b54bcf4da9c2da83cef49cf8 +e9a74f0e5d358fe50b88acdce6a9db9a7ad61536212fc5f877ebfc7957b8bda4 +b1582a0f10d515a20ee06cf768db9c977aa6fbdca7540d611ff953012d009dac +e8abd059f8e8ffea637c9c7721f817aaf0bb23403e26a0ef0ff0e2037da67d41 +af728481f53443551a9bff4cea023164e9622b5441a309e1f4bff98e5bf76677 +8d7cd9 +""" +)) + +def txt2bin(inputs): + s = b('').join([b(x) for x in inputs if not (x in '\n\r\t ')]) + return unhexlify(s) + +class Rng: + def __init__(self, output): + self.output=output + self.idx=0 + def __call__(self, n): + output = self.output[self.idx:self.idx+n] + self.idx += n + return output + +class PKCS8_Decrypt(unittest.TestCase): + + def setUp(self): + self.oid_key = oid_key + self.clear_key = txt2bin(clear_key) + self.wrapped_clear_key = txt2bin(wrapped_clear_key) + self.wrapped_enc_keys = [] + for t in wrapped_enc_keys: + self.wrapped_enc_keys.append(( + t[0], + t[1], + txt2bin(t[2]), + txt2bin(t[3]), + txt2bin(t[4]) + )) + + ### NO ENCRYTION + + def test1(self): + """Verify unwrapping w/o encryption""" + res1, res2, res3 = PKCS8.unwrap(self.wrapped_clear_key) + self.assertEqual(res1, self.oid_key) + self.assertEqual(res2, self.clear_key) + + def test2(self): + """Verify wrapping w/o encryption""" + wrapped = PKCS8.wrap(self.clear_key, self.oid_key) + res1, res2, res3 = PKCS8.unwrap(wrapped) + self.assertEqual(res1, self.oid_key) + self.assertEqual(res2, self.clear_key) + + ## ENCRYPTION + + def test3(self): + """Verify unwrapping with encryption""" + + for t in self.wrapped_enc_keys: + res1, res2, res3 = PKCS8.unwrap(t[4], b("TestTest")) + self.assertEqual(res1, self.oid_key) + self.assertEqual(res2, self.clear_key) + + def test4(self): + """Verify wrapping with encryption""" + + for t in self.wrapped_enc_keys: + if t[0] == 'skip encryption': + continue + rng = Rng(t[2]+t[3]) + params = { 'iteration_count':t[1] } + wrapped = PKCS8.wrap( + self.clear_key, + self.oid_key, + b("TestTest"), + protection=t[0], + prot_params=params, + key_params=DerNull(), + randfunc=rng) + self.assertEqual(wrapped, t[4]) + +def get_tests(config={}): + from Crypto.SelfTest.st_common import list_test_cases + listTests = [] + listTests += list_test_cases(PKCS8_Decrypt) + return listTests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/__init__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/__init__.py new file mode 100644 index 00000000..18e83d10 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/__init__.py @@ -0,0 +1,49 @@ +# +# SelfTest/Math/__init__.py: Self-test for math module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test for Math""" + +def get_tests(config={}): + tests = [] + from Crypto.SelfTest.Math import test_Numbers + from Crypto.SelfTest.Math import test_Primality + from Crypto.SelfTest.Math import test_modexp + tests += test_Numbers.get_tests(config=config) + tests += test_Primality.get_tests(config=config) + tests += test_modexp.get_tests(config=config) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/test_Numbers.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/test_Numbers.py new file mode 100644 index 00000000..924eca48 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/test_Numbers.py @@ -0,0 +1,797 @@ +# +# SelfTest/Math/test_Numbers.py: Self-test for Numbers module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +"""Self-test for Math.Numbers""" + +import sys +import unittest + +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Util.py3compat import * + +from Crypto.Math._IntegerNative import IntegerNative + + +class TestIntegerBase(unittest.TestCase): + + def setUp(self): + raise NotImplementedError("To be implemented") + + def Integers(self, *arg): + return map(self.Integer, arg) + + def test_init_and_equality(self): + Integer = self.Integer + + v1 = Integer(23) + v2 = Integer(v1) + v3 = Integer(-9) + self.assertRaises(ValueError, Integer, 1.0) + + v4 = Integer(10**10) + v5 = Integer(-10**10) + + v6 = Integer(0xFFFF) + v7 = Integer(0xFFFFFFFF) + v8 = Integer(0xFFFFFFFFFFFFFFFF) + + self.assertEqual(v1, v1) + self.assertEqual(v1, 23) + self.assertEqual(v1, v2) + self.assertEqual(v3, -9) + self.assertEqual(v4, 10 ** 10) + self.assertEqual(v5, -10 ** 10) + self.assertEqual(v6, 0xFFFF) + self.assertEqual(v7, 0xFFFFFFFF) + self.assertEqual(v8, 0xFFFFFFFFFFFFFFFF) + + self.assertFalse(v1 == v4) + + # Init and comparison between Integer's + v6 = Integer(v1) + self.assertEqual(v1, v6) + + self.assertFalse(Integer(0) == None) + + def test_conversion_to_int(self): + v1, v2 = self.Integers(-23, 2 ** 1000) + self.assertEqual(int(v1), -23) + self.assertEqual(int(v2), 2 ** 1000) + + def test_equality_with_ints(self): + v1, v2, v3 = self.Integers(23, -89, 2 ** 1000) + self.assertTrue(v1 == 23) + self.assertTrue(v2 == -89) + self.assertFalse(v1 == 24) + self.assertTrue(v3 == 2 ** 1000) + + def test_conversion_to_str(self): + v1, v2, v3, v4 = self.Integers(20, 0, -20, 2 ** 1000) + self.assertTrue(str(v1) == "20") + self.assertTrue(str(v2) == "0") + self.assertTrue(str(v3) == "-20") + self.assertTrue(str(v4) == "10715086071862673209484250490600018105614048117055336074437503883703510511249361224931983788156958581275946729175531468251871452856923140435984577574698574803934567774824230985421074605062371141877954182153046474983581941267398767559165543946077062914571196477686542167660429831652624386837205668069376") + + def test_repr(self): + v1, v2 = self.Integers(-1, 2**80) + self.assertEqual(repr(v1), "Integer(-1)") + self.assertEqual(repr(v2), "Integer(1208925819614629174706176)") + + def test_conversion_to_bytes(self): + Integer = self.Integer + + v1 = Integer(0x17) + self.assertEqual(b("\x17"), v1.to_bytes()) + + v2 = Integer(0xFFFE) + self.assertEqual(b("\xFF\xFE"), v2.to_bytes()) + self.assertEqual(b("\x00\xFF\xFE"), v2.to_bytes(3)) + self.assertRaises(ValueError, v2.to_bytes, 1) + + self.assertEqual(b("\xFE\xFF"), v2.to_bytes(byteorder='little')) + self.assertEqual(b("\xFE\xFF\x00"), v2.to_bytes(3, byteorder='little')) + + v3 = Integer(-90) + self.assertRaises(ValueError, v3.to_bytes) + self.assertRaises(ValueError, v3.to_bytes, byteorder='bittle') + + def test_conversion_from_bytes(self): + Integer = self.Integer + + v1 = Integer.from_bytes(b"\x00") + self.assertTrue(isinstance(v1, Integer)) + self.assertEqual(0, v1) + + v2 = Integer.from_bytes(b"\x00\x01") + self.assertEqual(1, v2) + + v3 = Integer.from_bytes(b"\xFF\xFF") + self.assertEqual(0xFFFF, v3) + + v4 = Integer.from_bytes(b"\x00\x01", 'big') + self.assertEqual(1, v4) + + v5 = Integer.from_bytes(b"\x00\x01", byteorder='big') + self.assertEqual(1, v5) + + v6 = Integer.from_bytes(b"\x00\x01", byteorder='little') + self.assertEqual(0x0100, v6) + + self.assertRaises(ValueError, Integer.from_bytes, b'\x09', 'bittle') + + def test_inequality(self): + # Test 
Integer!=Integer and Integer!=int
+        v1, v2, v3, v4 = self.Integers(89, 89, 90, -8)
+        self.assertTrue(v1 != v3)
+        self.assertTrue(v1 != 90)
+        self.assertFalse(v1 != v2)
+        self.assertFalse(v1 != 89)
+        self.assertTrue(v1 != v4)
+        self.assertTrue(v4 != v1)
+        self.assertTrue(self.Integer(0) != None)
+
+    def test_more_than(self):
+        # Test Integer>Integer and Integer>int
+        v1, v2, v3, v4, v5 = self.Integers(13, 13, 14, -8, 2 ** 10)
+        self.assertTrue(v3 > v1)
+        self.assertTrue(v3 > 13)
+        self.assertFalse(v1 > v1)
+        self.assertFalse(v1 > v2)
+        self.assertFalse(v1 > 13)
+        self.assertTrue(v1 > v4)
+        self.assertFalse(v4 > v1)
+        self.assertTrue(v5 > v1)
+        self.assertFalse(v1 > v5)
+
+    def test_more_than_or_equal(self):
+        # Test Integer>=Integer and Integer>=int
+        v1, v2, v3, v4 = self.Integers(13, 13, 14, -4)
+        self.assertTrue(v3 >= v1)
+        self.assertTrue(v3 >= 13)
+        self.assertTrue(v1 >= v2)
+        self.assertTrue(v1 >= v1)
+        self.assertTrue(v1 >= 13)
+        self.assertFalse(v4 >= v1)
+
+    def test_bool(self):
+        v1, v2, v3, v4 = self.Integers(0, 10, -9, 2 ** 10)
+        self.assertFalse(v1)
+        self.assertFalse(bool(v1))
+        self.assertTrue(v2)
+        self.assertTrue(bool(v2))
+        self.assertTrue(v3)
+        self.assertTrue(v4)
+
+    def test_is_negative(self):
+        v1, v2, v3, v4, v5 = self.Integers(-3 ** 100, -3, 0, 3, 3**100)
+        self.assertTrue(v1.is_negative())
+        self.assertTrue(v2.is_negative())
+        self.assertFalse(v4.is_negative())
+        self.assertFalse(v5.is_negative())
+
+    def test_addition(self):
+        # Test Integer+Integer and Integer+int
+        v1, v2, v3 = self.Integers(7, 90, -7)
+        self.assertTrue(isinstance(v1 + v2, self.Integer))
+        self.assertEqual(v1 + v2, 97)
+        self.assertEqual(v1 + 90, 97)
+        self.assertEqual(v1 + v3, 0)
+        self.assertEqual(v1 + (-7), 0)
+        self.assertEqual(v1 + 2 ** 10, 2 ** 10 + 7)
+
+    def test_subtraction(self):
+        # Test Integer-Integer and Integer-int
+        v1, v2, v3 = self.Integers(7, 90, -7)
+        self.assertTrue(isinstance(v1 - v2, self.Integer))
+        self.assertEqual(v2 - v1, 83)
+        self.assertEqual(v2 - 7, 83)
+        self.assertEqual(v2 - v3, 97)
+        self.assertEqual(v1 - (-7), 14)
+        self.assertEqual(v1 - 2 ** 10, 7 - 2 ** 10)
+
+    def test_multiplication(self):
+        # Test Integer*Integer and Integer*int
+        v1, v2, v3, v4 = self.Integers(4, 5, -2, 2 ** 10)
+        self.assertTrue(isinstance(v1 * v2, self.Integer))
+        self.assertEqual(v1 * v2, 20)
+        self.assertEqual(v1 * 5, 20)
+        self.assertEqual(v1 * -2, -8)
+        self.assertEqual(v1 * 2 ** 10, 4 * (2 ** 10))
+
+    def test_floor_div(self):
+        v1, v2, v3 = self.Integers(3, 8, 2 ** 80)
+        self.assertTrue(isinstance(v1 // v2, self.Integer))
+        self.assertEqual(v2 // v1, 2)
+        self.assertEqual(v2 // 3, 2)
+        self.assertEqual(v2 // -3, -3)
+        self.assertEqual(v3 // 2 ** 79, 2)
+        self.assertRaises(ZeroDivisionError, lambda: v1 // 0)
+
+    def test_remainder(self):
+        # Test Integer%Integer and Integer%int
+        v1, v2, v3 = self.Integers(23, 5, -4)
+        self.assertTrue(isinstance(v1 % v2, self.Integer))
+        self.assertEqual(v1 % v2, 3)
+        self.assertEqual(v1 % 5, 3)
+        self.assertEqual(v3 % 5, 1)
+        self.assertEqual(v1 % 2 ** 10, 23)
+        self.assertRaises(ZeroDivisionError, lambda: v1 % 0)
+        self.assertRaises(ValueError, lambda: v1 % -6)
+
+    def test_simple_exponentiation(self):
+        v1, v2, v3 = self.Integers(4, 3, -2)
+        self.assertTrue(isinstance(v1 ** v2, self.Integer))
+        self.assertEqual(v1 ** v2, 64)
+        self.assertEqual(pow(v1, v2), 64)
+        self.assertEqual(v1 ** 3, 64)
+        self.assertEqual(pow(v1, 3), 64)
+        self.assertEqual(v3 ** 2, 4)
+        self.assertEqual(v3 ** 3, -8)
+
+        self.assertRaises(ValueError, pow, v1, -3)
+
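+    # Three-argument pow() must accept any mix of Integer and int
+    # operands, and reject negative exponents and non-positive moduli:
+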
+    def test_modular_exponentiation(self):
+        v1, v2, v3 = self.Integers(23, 5, 17)
+
+        self.assertTrue(isinstance(pow(v1, v2, v3), self.Integer))
+        self.assertEqual(pow(v1, v2, v3), 7)
+        self.assertEqual(pow(v1, 5, v3), 7)
+        self.assertEqual(pow(v1, v2, 17), 7)
+        self.assertEqual(pow(v1, 5, 17), 7)
+        self.assertEqual(pow(v1, 0, 17), 1)
+        self.assertEqual(pow(v1, 1, 2 ** 80), 23)
+        self.assertEqual(pow(v1, 2 ** 80, 89298), 17689)
+
+        self.assertRaises(ZeroDivisionError, pow, v1, 5, 0)
+        self.assertRaises(ValueError, pow, v1, 5, -4)
+        self.assertRaises(ValueError, pow, v1, -3, 8)
+
+    def test_inplace_exponentiation(self):
+        v1 = self.Integer(4)
+        v1.inplace_pow(2)
+        self.assertEqual(v1, 16)
+
+        v1 = self.Integer(4)
+        v1.inplace_pow(2, 15)
+        self.assertEqual(v1, 1)
+
+    def test_abs(self):
+        v1, v2, v3, v4, v5 = self.Integers(-2 ** 100, -2, 0, 2, 2 ** 100)
+        self.assertEqual(abs(v1), 2 ** 100)
+        self.assertEqual(abs(v2), 2)
+        self.assertEqual(abs(v3), 0)
+        self.assertEqual(abs(v4), 2)
+        self.assertEqual(abs(v5), 2 ** 100)
+
+    def test_sqrt(self):
+        v1, v2, v3, v4 = self.Integers(-2, 0, 49, 10**100)
+
+        self.assertRaises(ValueError, v1.sqrt)
+        self.assertEqual(v2.sqrt(), 0)
+        self.assertEqual(v3.sqrt(), 7)
+        self.assertEqual(v4.sqrt(), 10**50)
+
+    def test_sqrt_module(self):
+
+        # Invalid modulus (non-positive)
+        self.assertRaises(ValueError, self.Integer(5).sqrt, 0)
+        self.assertRaises(ValueError, self.Integer(5).sqrt, -1)
+
+        # Simple cases
+        assert self.Integer(0).sqrt(5) == 0
+        assert self.Integer(1).sqrt(5) in (1, 4)
+
+        # Test with all quadratic residues in several fields
+        for p in (11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53):
+            for i in range(0, p):
+                square = i**2 % p
+                res = self.Integer(square).sqrt(p)
+                assert res in (i, p - i)
+
+        # 2 is a non-quadratic residue in Z_11
+        self.assertRaises(ValueError, self.Integer(2).sqrt, 11)
+
+        # 10 is not a prime
+        self.assertRaises(ValueError, self.Integer(4).sqrt, 10)
+
+        # 4 and 7 are the square roots of 5 modulo 11
+        assert self.Integer(5 - 11).sqrt(11) in (4, 7)
+        assert self.Integer(5 + 11).sqrt(11) in (4, 7)
+
+    def test_in_place_add(self):
+        v1, v2 = self.Integers(10, 20)
+
+        v1 += v2
+        self.assertEqual(v1, 30)
+        v1 += 10
+        self.assertEqual(v1, 40)
+        v1 += -1
+        self.assertEqual(v1, 39)
+        v1 += 2 ** 1000
+        self.assertEqual(v1, 39 + 2 ** 1000)
+
+    def test_in_place_sub(self):
+        v1, v2 = self.Integers(10, 20)
+
+        v1 -= v2
+        self.assertEqual(v1, -10)
+        v1 -= -100
+        self.assertEqual(v1, 90)
+        v1 -= 90000
+        self.assertEqual(v1, -89910)
+        v1 -= -100000
+        self.assertEqual(v1, 10090)
+
+    def test_in_place_mul(self):
+        v1, v2 = self.Integers(3, 5)
+
+        v1 *= v2
+        self.assertEqual(v1, 15)
+        v1 *= 2
+        self.assertEqual(v1, 30)
+        v1 *= -2
+        self.assertEqual(v1, -60)
+        v1 *= 2 ** 1000
+        self.assertEqual(v1, -60 * (2 ** 1000))
+
+    def test_in_place_modulus(self):
+        v1, v2 = self.Integers(20, 7)
+
+        v1 %= v2
+        self.assertEqual(v1, 6)
+        v1 %= 2 ** 1000
+        self.assertEqual(v1, 6)
+        v1 %= 2
+        self.assertEqual(v1, 0)
+
+        def t():
+            v3 = self.Integer(9)
+            v3 %= 0
+        self.assertRaises(ZeroDivisionError, t)
+
+    def test_and(self):
+        v1, v2, v3 = self.Integers(0xF4, 0x31, -0xF)
+        self.assertTrue(isinstance(v1 & v2, self.Integer))
+        self.assertEqual(v1 & v2, 0x30)
+        self.assertEqual(v1 & 0x31, 0x30)
+        self.assertEqual(v1 & v3, 0xF0)
+        self.assertEqual(v1 & -0xF, 0xF0)
+        self.assertEqual(v3 & -0xF, -0xF)
+        self.assertEqual(v2 & (2 ** 1000 + 0x31), 0x31)
+
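+    # Like AND above, bitwise OR follows Python's two's-complement
+    # semantics for negative operands:
+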
+    def test_or(self):
+        v1, v2, v3 = self.Integers(0x40, 0x82, -0xF)
+        self.assertTrue(isinstance(v1 | v2, self.Integer))
+        self.assertEqual(v1 | v2, 0xC2)
+        self.assertEqual(v1 | 0x82, 0xC2)
+        self.assertEqual(v2 | v3, -0xD)
+        self.assertEqual(v2 | 2 ** 1000, 2 ** 1000 + 0x82)
+
+    def test_right_shift(self):
+        v1, v2, v3 = self.Integers(0x10, 1, -0x10)
+        self.assertEqual(v1 >> 0, v1)
+        self.assertTrue(isinstance(v1 >> v2, self.Integer))
+        self.assertEqual(v1 >> v2, 0x08)
+        self.assertEqual(v1 >> 1, 0x08)
+        self.assertRaises(ValueError, lambda: v1 >> -1)
+        self.assertEqual(v1 >> (2 ** 1000), 0)
+
+        self.assertEqual(v3 >> 1, -0x08)
+        self.assertEqual(v3 >> (2 ** 1000), -1)
+
+    def test_in_place_right_shift(self):
+        v1, v2, v3 = self.Integers(0x10, 1, -0x10)
+        v1 >>= 0
+        self.assertEqual(v1, 0x10)
+        v1 >>= 1
+        self.assertEqual(v1, 0x08)
+        v1 >>= v2
+        self.assertEqual(v1, 0x04)
+        v3 >>= 1
+        self.assertEqual(v3, -0x08)
+
+        def l():
+            v4 = self.Integer(0x90)
+            v4 >>= -1
+        self.assertRaises(ValueError, l)
+
+        def m1():
+            v4 = self.Integer(0x90)
+            v4 >>= 2 ** 1000
+            return v4
+        self.assertEqual(0, m1())
+
+        def m2():
+            v4 = self.Integer(-1)
+            v4 >>= 2 ** 1000
+            return v4
+        self.assertEqual(-1, m2())
+
+    def _test_left_shift(self):
+        v1, v2, v3 = self.Integers(0x10, 1, -0x10)
+        self.assertEqual(v1 << 0, v1)
+        self.assertTrue(isinstance(v1 << v2, self.Integer))
+        self.assertEqual(v1 << v2, 0x20)
+        self.assertEqual(v1 << 1, 0x20)
+        self.assertEqual(v3 << 1, -0x20)
+        self.assertRaises(ValueError, lambda: v1 << -1)
+        self.assertRaises(ValueError, lambda: v1 << (2 ** 1000))
+
+    def test_in_place_left_shift(self):
+        v1, v2, v3 = self.Integers(0x10, 1, -0x10)
+        v1 <<= 0
+        self.assertEqual(v1, 0x10)
+        v1 <<= 1
+        self.assertEqual(v1, 0x20)
+        v1 <<= v2
+        self.assertEqual(v1, 0x40)
+        v3 <<= 1
+        self.assertEqual(v3, -0x20)
+
+        def l():
+            v4 = self.Integer(0x90)
+            v4 <<= -1
+        self.assertRaises(ValueError, l)
+
+        def m():
+            v4 = self.Integer(0x90)
+            v4 <<= 2 ** 1000
+        self.assertRaises(ValueError, m)
+
+    def test_get_bit(self):
+        v1, v2, v3 = self.Integers(0x102, -3, 1)
+        self.assertEqual(v1.get_bit(0), 0)
+        self.assertEqual(v1.get_bit(1), 1)
+        self.assertEqual(v1.get_bit(v3), 1)
+        self.assertEqual(v1.get_bit(8), 1)
+        self.assertEqual(v1.get_bit(9), 0)
+
+        self.assertRaises(ValueError, v1.get_bit, -1)
+        self.assertEqual(v1.get_bit(2 ** 1000), 0)
+
+        self.assertRaises(ValueError, v2.get_bit, -1)
+        self.assertRaises(ValueError, v2.get_bit, 0)
+        self.assertRaises(ValueError, v2.get_bit, 1)
+        self.assertRaises(ValueError, v2.get_bit, 2 ** 1000)
+
+    def test_odd_even(self):
+        v1, v2, v3, v4, v5 = self.Integers(0, 4, 17, -4, -17)
+
+        self.assertTrue(v1.is_even())
+        self.assertTrue(v2.is_even())
+        self.assertFalse(v3.is_even())
+        self.assertTrue(v4.is_even())
+        self.assertFalse(v5.is_even())
+
+        self.assertFalse(v1.is_odd())
+        self.assertFalse(v2.is_odd())
+        self.assertTrue(v3.is_odd())
+        self.assertFalse(v4.is_odd())
+        self.assertTrue(v5.is_odd())
+
+    def test_size_in_bits(self):
+        v1, v2, v3, v4 = self.Integers(0, 1, 0x100, -90)
+        self.assertEqual(v1.size_in_bits(), 1)
+        self.assertEqual(v2.size_in_bits(), 1)
+        self.assertEqual(v3.size_in_bits(), 9)
+        self.assertRaises(ValueError, v4.size_in_bits)
+
+    def test_size_in_bytes(self):
+        v1, v2, v3, v4, v5, v6 = self.Integers(0, 1, 0xFF, 0x1FF, 0x10000, -9)
+        self.assertEqual(v1.size_in_bytes(), 1)
+        self.assertEqual(v2.size_in_bytes(), 1)
+        self.assertEqual(v3.size_in_bytes(), 1)
+        self.assertEqual(v4.size_in_bytes(), 2)
+        self.assertEqual(v5.size_in_bytes(), 3)
+        self.assertRaises(ValueError, v6.size_in_bytes)
+
+    def test_perfect_square(self):
+
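+        # Negative numbers are never perfect squares; 0 and 1 are the
+        # trivial cases, and n qualifies exactly when isqrt(n)**2 == n.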
self.assertFalse(self.Integer(-9).is_perfect_square()) + self.assertTrue(self.Integer(0).is_perfect_square()) + self.assertTrue(self.Integer(1).is_perfect_square()) + self.assertFalse(self.Integer(2).is_perfect_square()) + self.assertFalse(self.Integer(3).is_perfect_square()) + self.assertTrue(self.Integer(4).is_perfect_square()) + self.assertTrue(self.Integer(39*39).is_perfect_square()) + self.assertFalse(self.Integer(39*39+1).is_perfect_square()) + + for x in range(100, 1000): + self.assertFalse(self.Integer(x**2+1).is_perfect_square()) + self.assertTrue(self.Integer(x**2).is_perfect_square()) + + def test_fail_if_divisible_by(self): + v1, v2, v3 = self.Integers(12, -12, 4) + + # No failure expected + v1.fail_if_divisible_by(7) + v2.fail_if_divisible_by(7) + v2.fail_if_divisible_by(2 ** 80) + + # Failure expected + self.assertRaises(ValueError, v1.fail_if_divisible_by, 4) + self.assertRaises(ValueError, v1.fail_if_divisible_by, v3) + + def test_multiply_accumulate(self): + v1, v2, v3 = self.Integers(4, 3, 2) + v1.multiply_accumulate(v2, v3) + self.assertEqual(v1, 10) + v1.multiply_accumulate(v2, 2) + self.assertEqual(v1, 16) + v1.multiply_accumulate(3, v3) + self.assertEqual(v1, 22) + v1.multiply_accumulate(1, -2) + self.assertEqual(v1, 20) + v1.multiply_accumulate(-2, 1) + self.assertEqual(v1, 18) + v1.multiply_accumulate(1, 2 ** 1000) + self.assertEqual(v1, 18 + 2 ** 1000) + v1.multiply_accumulate(2 ** 1000, 1) + self.assertEqual(v1, 18 + 2 ** 1001) + + def test_set(self): + v1, v2 = self.Integers(3, 6) + v1.set(v2) + self.assertEqual(v1, 6) + v1.set(9) + self.assertEqual(v1, 9) + v1.set(-2) + self.assertEqual(v1, -2) + v1.set(2 ** 1000) + self.assertEqual(v1, 2 ** 1000) + + def test_inverse(self): + v1, v2, v3, v4, v5, v6 = self.Integers(2, 5, -3, 0, 723872, 3433) + + self.assertTrue(isinstance(v1.inverse(v2), self.Integer)) + self.assertEqual(v1.inverse(v2), 3) + self.assertEqual(v1.inverse(5), 3) + self.assertEqual(v3.inverse(5), 3) + self.assertEqual(v5.inverse(92929921), 58610507) + self.assertEqual(v6.inverse(9912), 5353) + + self.assertRaises(ValueError, v2.inverse, 10) + self.assertRaises(ValueError, v1.inverse, -3) + self.assertRaises(ValueError, v4.inverse, 10) + self.assertRaises(ZeroDivisionError, v2.inverse, 0) + + def test_inplace_inverse(self): + v1, v2 = self.Integers(2, 5) + + v1.inplace_inverse(v2) + self.assertEqual(v1, 3) + + def test_gcd(self): + v1, v2, v3, v4 = self.Integers(6, 10, 17, -2) + self.assertTrue(isinstance(v1.gcd(v2), self.Integer)) + self.assertEqual(v1.gcd(v2), 2) + self.assertEqual(v1.gcd(10), 2) + self.assertEqual(v1.gcd(v3), 1) + self.assertEqual(v1.gcd(-2), 2) + self.assertEqual(v4.gcd(6), 2) + + def test_lcm(self): + v1, v2, v3, v4, v5 = self.Integers(6, 10, 17, -2, 0) + self.assertTrue(isinstance(v1.lcm(v2), self.Integer)) + self.assertEqual(v1.lcm(v2), 30) + self.assertEqual(v1.lcm(10), 30) + self.assertEqual(v1.lcm(v3), 102) + self.assertEqual(v1.lcm(-2), 6) + self.assertEqual(v4.lcm(6), 6) + self.assertEqual(v1.lcm(0), 0) + self.assertEqual(v5.lcm(0), 0) + + def test_jacobi_symbol(self): + + data = ( + (1001, 1, 1), + (19, 45, 1), + (8, 21, -1), + (5, 21, 1), + (610, 987, -1), + (1001, 9907, -1), + (5, 3439601197, -1) + ) + + js = self.Integer.jacobi_symbol + + # Jacobi symbol is always 1 for k==1 or n==1 + for k in range(1, 30): + self.assertEqual(js(k, 1), 1) + for n in range(1, 30, 2): + self.assertEqual(js(1, n), 1) + + # Fail if n is not positive odd + self.assertRaises(ValueError, js, 6, -2) + self.assertRaises(ValueError, js, 6, -1) 
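+        # The Jacobi symbol (k/n) is defined only for odd n >= 1, so a
+        # zero or even second argument must be rejected as well: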
+ self.assertRaises(ValueError, js, 6, 0) + self.assertRaises(ValueError, js, 0, 0) + self.assertRaises(ValueError, js, 6, 2) + self.assertRaises(ValueError, js, 6, 4) + self.assertRaises(ValueError, js, 6, 6) + self.assertRaises(ValueError, js, 6, 8) + + for tv in data: + self.assertEqual(js(tv[0], tv[1]), tv[2]) + self.assertEqual(js(self.Integer(tv[0]), tv[1]), tv[2]) + self.assertEqual(js(tv[0], self.Integer(tv[1])), tv[2]) + + def test_jacobi_symbol_wikipedia(self): + + # Test vectors from https://en.wikipedia.org/wiki/Jacobi_symbol + tv = [ + (3, [(1, 1), (2, -1), (3, 0), (4, 1), (5, -1), (6, 0), (7, 1), (8, -1), (9, 0), (10, 1), (11, -1), (12, 0), (13, 1), (14, -1), (15, 0), (16, 1), (17, -1), (18, 0), (19, 1), (20, -1), (21, 0), (22, 1), (23, -1), (24, 0), (25, 1), (26, -1), (27, 0), (28, 1), (29, -1), (30, 0)]), + (5, [(1, 1), (2, -1), (3, -1), (4, 1), (5, 0), (6, 1), (7, -1), (8, -1), (9, 1), (10, 0), (11, 1), (12, -1), (13, -1), (14, 1), (15, 0), (16, 1), (17, -1), (18, -1), (19, 1), (20, 0), (21, 1), (22, -1), (23, -1), (24, 1), (25, 0), (26, 1), (27, -1), (28, -1), (29, 1), (30, 0)]), + (7, [(1, 1), (2, 1), (3, -1), (4, 1), (5, -1), (6, -1), (7, 0), (8, 1), (9, 1), (10, -1), (11, 1), (12, -1), (13, -1), (14, 0), (15, 1), (16, 1), (17, -1), (18, 1), (19, -1), (20, -1), (21, 0), (22, 1), (23, 1), (24, -1), (25, 1), (26, -1), (27, -1), (28, 0), (29, 1), (30, 1)]), + (9, [(1, 1), (2, 1), (3, 0), (4, 1), (5, 1), (6, 0), (7, 1), (8, 1), (9, 0), (10, 1), (11, 1), (12, 0), (13, 1), (14, 1), (15, 0), (16, 1), (17, 1), (18, 0), (19, 1), (20, 1), (21, 0), (22, 1), (23, 1), (24, 0), (25, 1), (26, 1), (27, 0), (28, 1), (29, 1), (30, 0)]), + (11, [(1, 1), (2, -1), (3, 1), (4, 1), (5, 1), (6, -1), (7, -1), (8, -1), (9, 1), (10, -1), (11, 0), (12, 1), (13, -1), (14, 1), (15, 1), (16, 1), (17, -1), (18, -1), (19, -1), (20, 1), (21, -1), (22, 0), (23, 1), (24, -1), (25, 1), (26, 1), (27, 1), (28, -1), (29, -1), (30, -1)]), + (13, [(1, 1), (2, -1), (3, 1), (4, 1), (5, -1), (6, -1), (7, -1), (8, -1), (9, 1), (10, 1), (11, -1), (12, 1), (13, 0), (14, 1), (15, -1), (16, 1), (17, 1), (18, -1), (19, -1), (20, -1), (21, -1), (22, 1), (23, 1), (24, -1), (25, 1), (26, 0), (27, 1), (28, -1), (29, 1), (30, 1)]), + (15, [(1, 1), (2, 1), (3, 0), (4, 1), (5, 0), (6, 0), (7, -1), (8, 1), (9, 0), (10, 0), (11, -1), (12, 0), (13, -1), (14, -1), (15, 0), (16, 1), (17, 1), (18, 0), (19, 1), (20, 0), (21, 0), (22, -1), (23, 1), (24, 0), (25, 0), (26, -1), (27, 0), (28, -1), (29, -1), (30, 0)]), + (17, [(1, 1), (2, 1), (3, -1), (4, 1), (5, -1), (6, -1), (7, -1), (8, 1), (9, 1), (10, -1), (11, -1), (12, -1), (13, 1), (14, -1), (15, 1), (16, 1), (17, 0), (18, 1), (19, 1), (20, -1), (21, 1), (22, -1), (23, -1), (24, -1), (25, 1), (26, 1), (27, -1), (28, -1), (29, -1), (30, 1)]), + (19, [(1, 1), (2, -1), (3, -1), (4, 1), (5, 1), (6, 1), (7, 1), (8, -1), (9, 1), (10, -1), (11, 1), (12, -1), (13, -1), (14, -1), (15, -1), (16, 1), (17, 1), (18, -1), (19, 0), (20, 1), (21, -1), (22, -1), (23, 1), (24, 1), (25, 1), (26, 1), (27, -1), (28, 1), (29, -1), (30, 1)]), + (21, [(1, 1), (2, -1), (3, 0), (4, 1), (5, 1), (6, 0), (7, 0), (8, -1), (9, 0), (10, -1), (11, -1), (12, 0), (13, -1), (14, 0), (15, 0), (16, 1), (17, 1), (18, 0), (19, -1), (20, 1), (21, 0), (22, 1), (23, -1), (24, 0), (25, 1), (26, 1), (27, 0), (28, 0), (29, -1), (30, 0)]), + (23, [(1, 1), (2, 1), (3, 1), (4, 1), (5, -1), (6, 1), (7, -1), (8, 1), (9, 1), (10, -1), (11, -1), (12, 1), (13, 1), (14, -1), (15, -1), (16, 1), (17, -1), (18, 1), (19, -1), (20, -1), 
(21, -1), (22, -1), (23, 0), (24, 1), (25, 1), (26, 1), (27, 1), (28, -1), (29, 1), (30, -1)]), + (25, [(1, 1), (2, 1), (3, 1), (4, 1), (5, 0), (6, 1), (7, 1), (8, 1), (9, 1), (10, 0), (11, 1), (12, 1), (13, 1), (14, 1), (15, 0), (16, 1), (17, 1), (18, 1), (19, 1), (20, 0), (21, 1), (22, 1), (23, 1), (24, 1), (25, 0), (26, 1), (27, 1), (28, 1), (29, 1), (30, 0)]), + (27, [(1, 1), (2, -1), (3, 0), (4, 1), (5, -1), (6, 0), (7, 1), (8, -1), (9, 0), (10, 1), (11, -1), (12, 0), (13, 1), (14, -1), (15, 0), (16, 1), (17, -1), (18, 0), (19, 1), (20, -1), (21, 0), (22, 1), (23, -1), (24, 0), (25, 1), (26, -1), (27, 0), (28, 1), (29, -1), (30, 0)]), + (29, [(1, 1), (2, -1), (3, -1), (4, 1), (5, 1), (6, 1), (7, 1), (8, -1), (9, 1), (10, -1), (11, -1), (12, -1), (13, 1), (14, -1), (15, -1), (16, 1), (17, -1), (18, -1), (19, -1), (20, 1), (21, -1), (22, 1), (23, 1), (24, 1), (25, 1), (26, -1), (27, -1), (28, 1), (29, 0), (30, 1)]), + ] + + js = self.Integer.jacobi_symbol + + for n, kj in tv: + for k, j in kj: + self.assertEqual(js(k, n), j) + + def test_hex(self): + v1, = self.Integers(0x10) + self.assertEqual(hex(v1), "0x10") + + +class TestIntegerInt(TestIntegerBase): + + def setUp(self): + self.Integer = IntegerNative + + +class testIntegerRandom(unittest.TestCase): + + def test_random_exact_bits(self): + + for _ in range(1000): + a = IntegerNative.random(exact_bits=8) + self.assertFalse(a < 128) + self.assertFalse(a >= 256) + + for bits_value in range(1024, 1024 + 8): + a = IntegerNative.random(exact_bits=bits_value) + self.assertFalse(a < 2**(bits_value - 1)) + self.assertFalse(a >= 2**bits_value) + + def test_random_max_bits(self): + + flag = False + for _ in range(1000): + a = IntegerNative.random(max_bits=8) + flag = flag or a < 128 + self.assertFalse(a>=256) + self.assertTrue(flag) + + for bits_value in range(1024, 1024 + 8): + a = IntegerNative.random(max_bits=bits_value) + self.assertFalse(a >= 2**bits_value) + + def test_random_bits_custom_rng(self): + + class CustomRNG(object): + def __init__(self): + self.counter = 0 + + def __call__(self, size): + self.counter += size + return bchr(0) * size + + custom_rng = CustomRNG() + a = IntegerNative.random(exact_bits=32, randfunc=custom_rng) + self.assertEqual(custom_rng.counter, 4) + + def test_random_range(self): + + func = IntegerNative.random_range + + for x in range(200): + a = func(min_inclusive=1, max_inclusive=15) + self.assertTrue(1 <= a <= 15) + + for x in range(200): + a = func(min_inclusive=1, max_exclusive=15) + self.assertTrue(1 <= a < 15) + + self.assertRaises(ValueError, func, min_inclusive=1, max_inclusive=2, + max_exclusive=3) + self.assertRaises(ValueError, func, max_inclusive=2, max_exclusive=3) + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestIntegerInt) + + try: + from Crypto.Math._IntegerGMP import IntegerGMP + + class TestIntegerGMP(TestIntegerBase): + def setUp(self): + self.Integer = IntegerGMP + + tests += list_test_cases(TestIntegerGMP) + except (ImportError, OSError) as e: + if sys.platform == "win32": + sys.stdout.write("Skipping GMP tests on Windows\n") + else: + sys.stdout.write("Skipping GMP tests (%s)\n" % str(e) ) + + try: + from Crypto.Math._IntegerCustom import IntegerCustom + + class TestIntegerCustomModexp(TestIntegerBase): + def setUp(self): + self.Integer = IntegerCustom + + tests += list_test_cases(TestIntegerCustomModexp) + except (ImportError, OSError) as e: + sys.stdout.write("Skipping custom modexp tests (%s)\n" % str(e) ) + + tests += list_test_cases(testIntegerRandom) + return 
tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/test_Primality.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/test_Primality.py new file mode 100644 index 00000000..38344f35 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/test_Primality.py @@ -0,0 +1,118 @@ +# +# SelfTest/Math/test_Primality.py: Self-test for Primality module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# ===================================================================
+
+"""Self-test for Math.Primality"""
+
+import unittest
+
+from Crypto.SelfTest.st_common import list_test_cases
+
+from Crypto.Util.py3compat import *
+
+from Crypto.Math.Numbers import Integer
+from Crypto.Math.Primality import (
+    PROBABLY_PRIME, COMPOSITE,
+    miller_rabin_test, lucas_test,
+    test_probable_prime,
+    generate_probable_prime,
+    generate_probable_safe_prime,
+    )
+
+
+class TestPrimality(unittest.TestCase):
+
+    primes = (1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 2**127-1, 175637383534939453397801320455508570374088202376942372758907369518414308188137781042871856139027160010343454418881888953150175357127346872102307696660678617989191485418582475696230580407111841072614783095326672517315988762029036079794994990250662362650625650262324085116467511357592728695033227611029693067539)
+    composites = (0, 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 7*23, (2**19-1)*(2**67-1), 9746347772161,)
+
+    def test_miller_rabin(self):
+        for prime in self.primes:
+            self.assertEqual(miller_rabin_test(prime, 3), PROBABLY_PRIME)
+        for composite in self.composites:
+            self.assertEqual(miller_rabin_test(composite, 3), COMPOSITE)
+        self.assertRaises(ValueError, miller_rabin_test, -1, 3)
+
+    def test_lucas(self):
+        for prime in self.primes:
+            res = lucas_test(prime)
+            self.assertEqual(res, PROBABLY_PRIME)
+        for composite in self.composites:
+            res = lucas_test(composite)
+            self.assertEqual(res, COMPOSITE)
+        self.assertRaises(ValueError, lucas_test, -1)
+
+    def test_is_prime(self):
+        primes = (170141183460469231731687303715884105727,
+                  19175002942688032928599,
+                  1363005552434666078217421284621279933627102780881053358473,
+                  2 ** 521 - 1)
+        for p in primes:
+            self.assertEqual(test_probable_prime(p), PROBABLY_PRIME)
+
+        not_primes = (
+            4754868377601046732119933839981363081972014948522510826417784001,
+            1334733877147062382486934807105197899496002201113849920496510541601,
+            260849323075371835669784094383812120359260783810157225730623388382401,
+        )
+        for np in not_primes:
+            self.assertEqual(test_probable_prime(np), COMPOSITE)
+
+        from Crypto.Util.number import sieve_base
+        for p in sieve_base[:100]:
+            res = test_probable_prime(p)
+            self.assertEqual(res, PROBABLY_PRIME)
+
+    def test_generate_prime_bit_size(self):
+        p = generate_probable_prime(exact_bits=512)
+        self.assertEqual(p.size_in_bits(), 512)
+
+    def test_generate_prime_filter(self):
+        def ending_with_one(number):
+            return number % 10 == 1
+
+        for x in range(20):
+            q = generate_probable_prime(exact_bits=160,
+                                        prime_filter=ending_with_one)
+            self.assertEqual(q % 10, 1)
+
+    def test_generate_safe_prime(self):
+        p = generate_probable_safe_prime(exact_bits=161)
+        self.assertEqual(p.size_in_bits(), 161)
+
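+# A minimal usage sketch of the primality API exercised above
+# (illustrative only; the values are vectors from the tests in this class):
+#
+#   >>> miller_rabin_test(2 ** 127 - 1, 3) == PROBABLY_PRIME
+#   True
+#   >>> lucas_test(9746347772161) == COMPOSITE
+#   True
+#   >>> generate_probable_prime(exact_bits=256).size_in_bits()
+#   256
+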
+def get_tests(config={}):
+    tests = []
+    tests += list_test_cases(TestPrimality)
+    return tests
+
+if __name__ == '__main__':
+    suite = lambda: unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/test_modexp.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/test_modexp.py
new file mode 100644
index 00000000..b9eb8698
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Math/test_modexp.py
@@ -0,0 +1,201 @@
+#
+# SelfTest/Math/test_modexp.py: Self-test for modular exponentiation
+#
+# ===================================================================
+#
+# Copyright (c) 2017, Helder Eijs
+# All rights reserved.
+
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+"""Self-test for the custom modular exponentiation"""
+
+import unittest
+
+from Crypto.SelfTest.st_common import list_test_cases
+
+from Crypto.Util.number import long_to_bytes, bytes_to_long
+
+from Crypto.Util.py3compat import *
+
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
+                                  create_string_buffer,
+                                  get_raw_buffer,
+                                  c_size_t,
+                                  c_ulonglong)
+
+from Crypto.Hash import SHAKE128
+from Crypto.Math.Numbers import Integer
+from Crypto.Math._IntegerCustom import _raw_montgomery
+
+from Crypto.Random.random import StrongRandom
+
+
+def create_rng(tag):
+    rng = StrongRandom(SHAKE128.new(data=tag))
+    return rng
+
+class ExceptionModulus(ValueError):
+    pass
+
+def monty_pow(base, exp, modulus):
+    max_len = len(long_to_bytes(max(base, exp, modulus)))
+
+    base_b, exp_b, modulus_b = [ long_to_bytes(x, max_len) for x in
+                                 (base, exp, modulus) ]
+
+    out = create_string_buffer(max_len)
+    error = _raw_montgomery.monty_pow(
+                out,
+                base_b,
+                exp_b,
+                modulus_b,
+                c_size_t(max_len),
+                c_ulonglong(32)
+                )
+
+    if error == 17:
+        raise ExceptionModulus()
+    if error:
+        raise ValueError("monty_pow failed with error: %d" % error)
+
+    result = bytes_to_long(get_raw_buffer(out))
+    return result
+
+exponent1 = 0x2ce0af628901460a419a08ef950d498b9fd6f271a1a52ac293b86fe5c60efe8e8ba93fa1ebe1eb3d614d2e7b328cb60a2591440e163441a190ecf101ceec245f600fffdcf3f5b3a17a7baeacb96a424db1d7ec985e8ec998bb479fecfffed6a75f9a90fc97062fd973303bce855ad7b8d8272a94025e8532be9aabd54a183f303538d2a7e621b4131d59e823a4625f39bd7d518d7784f7c3a8f19061da74974ff42fa1c063dec2db97d461e291a7d6e721708a5229de166c1246363372854e27f3f08ae274bc16bfd205b028a4d81386494433d516dfbb35f495acba5e4e1d1843cb3c3129b6642a85fc7244ce5845fac071c7f622e4ee12ac43fabeeaa0cd01
+modulus1 =
0xd66691b20071be4d66d4b71032b37fa007cfabf579fcb91e50bfc2753b3f0ce7be74e216aef7e26d4ae180bc20d7bd3ea88a6cbf6f87380e613c8979b5b043b200a8ff8856a3b12875e36e98a7569f3852d028e967551000b02c19e9fa52e83115b89309aabb1e1cf1e2cb6369d637d46775ce4523ea31f64ad2794cbc365dd8a35e007ed3b57695877fbf102dbeb8b3212491398e494314e93726926e1383f8abb5889bea954eb8c0ca1c62c8e9d83f41888095c5e645ed6d32515fe0c58c1368cad84694e18da43668c6f43e61d7c9bca633ddcda7aef5b79bc396d4a9f48e2a9abe0836cc455e435305357228e93d25aaed46b952defae0f57339bf26f5a9 + + +class TestModExp(unittest.TestCase): + + def test_small(self): + self.assertEqual(1, monty_pow(11,12,19)) + + def test_large_1(self): + base = 0xfffffffffffffffffffffffffffffffffffffffffffffffffff + expected = pow(base, exponent1, modulus1) + result = monty_pow(base, exponent1, modulus1) + self.assertEqual(result, expected) + + def test_zero_exp(self): + base = 0xfffffffffffffffffffffffffffffffffffffffffffffffffff + result = monty_pow(base, 0, modulus1) + self.assertEqual(result, 1) + + def test_zero_base(self): + result = monty_pow(0, exponent1, modulus1) + self.assertEqual(result, 0) + + def test_zero_modulus(self): + base = 0xfffffffffffffffffffffffffffffffffffffffffffffffff + self.assertRaises(ExceptionModulus, monty_pow, base, exponent1, 0) + self.assertRaises(ExceptionModulus, monty_pow, 0, 0, 0) + + def test_larger_exponent(self): + base = modulus1 - 0xFFFFFFF + expected = pow(base, modulus1<<64, modulus1) + result = monty_pow(base, modulus1<<64, modulus1) + self.assertEqual(result, expected) + + def test_even_modulus(self): + base = modulus1 >> 4 + self.assertRaises(ExceptionModulus, monty_pow, base, exponent1, modulus1-1) + + def test_several_lengths(self): + prng = SHAKE128.new().update(b('Test')) + for length in range(1, 100): + modulus2 = Integer.from_bytes(prng.read(length)) | 1 + base = Integer.from_bytes(prng.read(length)) % modulus2 + exponent2 = Integer.from_bytes(prng.read(length)) + + expected = pow(base, exponent2, modulus2) + result = monty_pow(base, exponent2, modulus2) + self.assertEqual(result, expected) + + def test_variable_exponent(self): + prng = create_rng(b('Test variable exponent')) + for i in range(20): + for j in range(7): + modulus = prng.getrandbits(8*30) | 1 + base = prng.getrandbits(8*30) % modulus + exponent = prng.getrandbits(i*8+j) + + expected = pow(base, exponent, modulus) + result = monty_pow(base, exponent, modulus) + self.assertEqual(result, expected) + + exponent ^= (1 << (i*8+j)) - 1 + + expected = pow(base, exponent, modulus) + result = monty_pow(base, exponent, modulus) + self.assertEqual(result, expected) + + def test_stress_63(self): + prng = create_rng(b('Test 63')) + length = 63 + for _ in range(2000): + modulus = prng.getrandbits(8*length) | 1 + base = prng.getrandbits(8*length) % modulus + exponent = prng.getrandbits(8*length) + + expected = pow(base, exponent, modulus) + result = monty_pow(base, exponent, modulus) + self.assertEqual(result, expected) + + def test_stress_64(self): + prng = create_rng(b('Test 64')) + length = 64 + for _ in range(2000): + modulus = prng.getrandbits(8*length) | 1 + base = prng.getrandbits(8*length) % modulus + exponent = prng.getrandbits(8*length) + + expected = pow(base, exponent, modulus) + result = monty_pow(base, exponent, modulus) + self.assertEqual(result, expected) + + def test_stress_65(self): + prng = create_rng(b('Test 65')) + length = 65 + for _ in range(2000): + modulus = prng.getrandbits(8*length) | 1 + base = prng.getrandbits(8*length) % modulus + exponent = 
prng.getrandbits(8*length) + + expected = pow(base, exponent, modulus) + result = monty_pow(base, exponent, modulus) + self.assertEqual(result, expected) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestModExp) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/__init__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/__init__.py new file mode 100644 index 00000000..1c1c095c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/__init__.py @@ -0,0 +1,44 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Protocol/__init__.py: Self-tests for Crypto.Protocol +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test for Crypto.Protocol""" + +__revision__ = "$Id$" + +def get_tests(config={}): + tests = [] + from Crypto.SelfTest.Protocol import test_rfc1751; tests += test_rfc1751.get_tests(config=config) + from Crypto.SelfTest.Protocol import test_KDF; tests += test_KDF.get_tests(config=config) + + from Crypto.SelfTest.Protocol import test_SecretSharing; + tests += test_SecretSharing.get_tests(config=config) + + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/test_KDF.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/test_KDF.py new file mode 100644 index 00000000..b2869f80 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/test_KDF.py @@ -0,0 +1,732 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Protocol/test_KDF.py: Self-test for key derivation functions +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+import unittest
+from binascii import unhexlify
+
+from Crypto.Util.py3compat import b, bchr
+
+from Crypto.SelfTest.st_common import list_test_cases
+from Crypto.SelfTest.loader import load_test_vectors_wycheproof
+from Crypto.Hash import SHA1, HMAC, SHA256, MD5, SHA224, SHA384, SHA512
+from Crypto.Cipher import AES, DES3
+
+from Crypto.Protocol.KDF import (PBKDF1, PBKDF2, _S2V, HKDF, scrypt,
+                                 bcrypt, bcrypt_check)
+
+from Crypto.Protocol.KDF import _bcrypt_decode
+
+
+def t2b(t):
+    if t is None:
+        return None
+    t2 = t.replace(" ", "").replace("\n", "")
+    return unhexlify(b(t2))
+
+
+class TestVector(object):
+    pass
+
+
+class PBKDF1_Tests(unittest.TestCase):
+
+    # List of tuples with test data.
+    # Each tuple is made up by:
+    #   Item #0: a pass phrase
+    #   Item #1: salt (8 bytes encoded in hex)
+    #   Item #2: output key length
+    #   Item #3: iterations to use
+    #   Item #4: expected result (encoded in hex)
+    _testData = (
+        # From http://www.di-mgt.com.au/cryptoKDFs.html#examplespbkdf
+        ("password", "78578E5A5D63CB06", 16, 1000, "DC19847E05C64D2FAF10EBFB4A3D2A20"),
+    )
+
+    def test1(self):
+        v = self._testData[0]
+        res = PBKDF1(v[0], t2b(v[1]), v[2], v[3], SHA1)
+        self.assertEqual(res, t2b(v[4]))
+
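+# A minimal PBKDF2 call mirroring one of the RFC 6070 vectors below
+# (illustrative only; the default PRF is HMAC-SHA1, which is what
+# these vectors assume):
+#
+#   >>> from Crypto.Protocol.KDF import PBKDF2
+#   >>> PBKDF2("password", b"salt", 20, 4096).hex()
+#   '4b007901b765489abead49d926f721d065a429c1'
+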
+class PBKDF2_Tests(unittest.TestCase):
+
+    # List of tuples with test data.
+    # Each tuple is made up by:
+    #   Item #0: a pass phrase
+    #   Item #1: salt (encoded in hex)
+    #   Item #2: output key length
+    #   Item #3: iterations to use
+    #   Item #4: hash module
+    #   Item #5: expected result (encoded in hex)
+    _testData = (
+        # From http://www.di-mgt.com.au/cryptoKDFs.html#examplespbkdf
+        ("password", "78578E5A5D63CB06", 24, 2048, SHA1, "BFDE6BE94DF7E11DD409BCE20A0255EC327CB936FFE93643"),
+        # From RFC 6070
+        ("password", "73616c74", 20, 1, SHA1, "0c60c80f961f0e71f3a9b524af6012062fe037a6"),
+        ("password", "73616c74", 20, 2, SHA1, "ea6c014dc72d6f8ccd1ed92ace1d41f0d8de8957"),
+        ("password", "73616c74", 20, 4096, SHA1, "4b007901b765489abead49d926f721d065a429c1"),
+        ("passwordPASSWORDpassword", "73616c7453414c5473616c7453414c5473616c7453414c5473616c7453414c5473616c74",
+         25, 4096, SHA1, "3d2eec4fe41c849b80c8d83662c0e44a8b291a964cf2f07038"),
+        ('pass\x00word', "7361006c74", 16, 4096, SHA1, "56fa6aa75548099dcc37d7f03425e0c3"),
+        # From draft-josefsson-scrypt-kdf-01, Chapter 10
+        ('passwd', '73616c74', 64, 1, SHA256, "55ac046e56e3089fec1691c22544b605f94185216dde0465e68b9d57c20dacbc49ca9cccf179b645991664b39d77ef317c71b845b1e30bd509112041d3a19783"),
+        ('Password', '4e61436c', 64, 80000, SHA256, "4ddcd8f60b98be21830cee5ef22701f9641a4418d04c0414aeff08876b34ab56a1d425a1225833549adb841b51c9b3176a272bdebba1d078478f62b397f33c8d"),
+    )
+
+    def test1(self):
+        # Test only for HMAC-SHA1 as PRF
+
+        def prf_SHA1(p, s):
+            return HMAC.new(p, s, SHA1).digest()
+
+        def prf_SHA256(p, s):
+            return HMAC.new(p, s, SHA256).digest()
+
+        for i in range(len(self._testData)):
+            v = self._testData[i]
+            password = v[0]
+            salt = t2b(v[1])
+            out_len = v[2]
+            iters = v[3]
+            hash_mod = v[4]
+            expected = t2b(v[5])
+
+            if hash_mod is SHA1:
+                res = PBKDF2(password, salt, out_len, iters)
+                self.assertEqual(res, expected)
+
+                res = PBKDF2(password, salt, out_len, iters, prf_SHA1)
+                self.assertEqual(res, expected)
+            else:
+                res = PBKDF2(password, salt, out_len, iters, prf_SHA256)
+                self.assertEqual(res, expected)
+
+    def test2(self):
+        # Verify that prf and hmac_hash_module are mutually exclusive
+        def prf_SHA1(p, s):
+            return HMAC.new(p, s, SHA1).digest()
+
+        self.assertRaises(ValueError, PBKDF2, b("xxx"), b("yyy"), 16, 100,
+                          prf=prf_SHA1, hmac_hash_module=SHA1)
+
+    def test3(self):
+        # Verify that hmac_hash_module works like prf
+
+        password = b("xxx")
+        salt = b("yyy")
+
+        for hashmod in (MD5, SHA1, SHA224, SHA256, SHA384, SHA512):
+
+            pr1 = PBKDF2(password, salt, 16, 100,
+                         prf=lambda p, s: HMAC.new(p, s, hashmod).digest())
+            pr2 = PBKDF2(password, salt, 16, 100, hmac_hash_module=hashmod)
+
+            self.assertEqual(pr1, pr2)
+
+    def test4(self):
+        # Verify that PBKDF2 can take bytes or strings as password or salt
+        k1 = PBKDF2("xxx", b("yyy"), 16, 10)
+        k2 = PBKDF2(b("xxx"), b("yyy"), 16, 10)
+        self.assertEqual(k1, k2)
+
+        k1 = PBKDF2(b("xxx"), "yyy", 16, 10)
+        k2 = PBKDF2(b("xxx"), b("yyy"), 16, 10)
+        self.assertEqual(k1, k2)
+
+
+class S2V_Tests(unittest.TestCase):
+
+    # Sequence of test vectors.
+    # Each test vector is made up by:
+    #   Item #0: a tuple of strings
+    #   Item #1: an AES key
+    #   Item #2: the result
+    #   Item #3: the cipher module S2V is based on
+    #   Everything is hex encoded
+    _testData = [
+
+        # RFC5297, A.1
+        (
+         ('101112131415161718191a1b1c1d1e1f2021222324252627',
+          '112233445566778899aabbccddee'),
+         'fffefdfcfbfaf9f8f7f6f5f4f3f2f1f0',
+         '85632d07c6e8f37f950acd320a2ecc93',
+         AES
+        ),
+
+        # RFC5297, A.2
+        (
+         ('00112233445566778899aabbccddeeffdeaddadadeaddadaffeeddcc' +
+          'bbaa99887766554433221100',
+          '102030405060708090a0',
+          '09f911029d74e35bd84156c5635688c0',
+          '7468697320697320736f6d6520706c61' +
+          '696e7465787420746f20656e63727970' +
+          '74207573696e67205349562d414553'),
+         '7f7e7d7c7b7a79787776757473727170',
+         '7bdb6e3b432667eb06f4d14bff2fbd0f',
+         AES
+        ),
+
+    ]
+
+    def test1(self):
+        """Verify correctness of test vector"""
+        for tv in self._testData:
+            s2v = _S2V.new(t2b(tv[1]), tv[3])
+            for s in tv[0]:
+                s2v.update(t2b(s))
+            result = s2v.derive()
+            self.assertEqual(result, t2b(tv[2]))
+
+    def test2(self):
+        """Verify that no more than 127 (AES) and 63 (TDES)
+        components are accepted."""
+        key = bchr(0) * 8 + bchr(255) * 8
+        for module in (AES, DES3):
+            s2v = _S2V.new(key, module)
+            max_comps = module.block_size * 8 - 1
+            for i in range(max_comps):
+                s2v.update(b("XX"))
+            self.assertRaises(TypeError, s2v.update, b("YY"))
+
+
+class HKDF_Tests(unittest.TestCase):
+
+    # Test vectors from RFC5869, Appendix A
+    # Each tuple is made up by:
+    #   Item #0: hash module
+    #   Item #1: secret
+    #   Item #2: salt
+    #   Item #3: context
+    #   Item #4: key length
+    #   Item #5: expected result
+    _test_vector = (
+        (
+         SHA256,
+         "0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b",
+         "000102030405060708090a0b0c",
+         "f0f1f2f3f4f5f6f7f8f9",
+         42,
+         "3cb25f25faacd57a90434f64d0362f2a" +
+         "2d2d0a90cf1a5a4c5db02d56ecc4c5bf" +
+         "34007208d5b887185865"
+        ),
+        (
+         SHA256,
+         "000102030405060708090a0b0c0d0e0f" +
+         "101112131415161718191a1b1c1d1e1f" +
+         "202122232425262728292a2b2c2d2e2f" +
+         "303132333435363738393a3b3c3d3e3f" +
+         "404142434445464748494a4b4c4d4e4f",
+         "606162636465666768696a6b6c6d6e6f" +
+         "707172737475767778797a7b7c7d7e7f" +
+         "808182838485868788898a8b8c8d8e8f" +
+         "909192939495969798999a9b9c9d9e9f" +
+         "a0a1a2a3a4a5a6a7a8a9aaabacadaeaf",
+         "b0b1b2b3b4b5b6b7b8b9babbbcbdbebf" +
+         "c0c1c2c3c4c5c6c7c8c9cacbcccdcecf" +
+         "d0d1d2d3d4d5d6d7d8d9dadbdcdddedf" +
+         "e0e1e2e3e4e5e6e7e8e9eaebecedeeef" +
+         "f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff",
+         82,
+
"b11e398dc80327a1c8e7f78c596a4934" + + "4f012eda2d4efad8a050cc4c19afa97c" + + "59045a99cac7827271cb41c65e590e09" + + "da3275600c2f09b8367793a9aca3db71" + + "cc30c58179ec3e87c14c01d5c1f3434f" + + "1d87" + ), + ( + SHA256, + "0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b", + None, + None, + 42, + "8da4e775a563c18f715f802a063c5a31" + + "b8a11f5c5ee1879ec3454e5f3c738d2d" + + "9d201395faa4b61a96c8" + ), + ( + SHA1, + "0b0b0b0b0b0b0b0b0b0b0b", + "000102030405060708090a0b0c", + "f0f1f2f3f4f5f6f7f8f9", + 42, + "085a01ea1b10f36933068b56efa5ad81" + + "a4f14b822f5b091568a9cdd4f155fda2" + + "c22e422478d305f3f896" + ), + ( + SHA1, + "000102030405060708090a0b0c0d0e0f" + + "101112131415161718191a1b1c1d1e1f" + + "202122232425262728292a2b2c2d2e2f" + + "303132333435363738393a3b3c3d3e3f" + + "404142434445464748494a4b4c4d4e4f", + "606162636465666768696a6b6c6d6e6f" + + "707172737475767778797a7b7c7d7e7f" + + "808182838485868788898a8b8c8d8e8f" + + "909192939495969798999a9b9c9d9e9f" + + "a0a1a2a3a4a5a6a7a8a9aaabacadaeaf", + "b0b1b2b3b4b5b6b7b8b9babbbcbdbebf" + + "c0c1c2c3c4c5c6c7c8c9cacbcccdcecf" + + "d0d1d2d3d4d5d6d7d8d9dadbdcdddedf" + + "e0e1e2e3e4e5e6e7e8e9eaebecedeeef" + + "f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff", + 82, + "0bd770a74d1160f7c9f12cd5912a06eb" + + "ff6adcae899d92191fe4305673ba2ffe" + + "8fa3f1a4e5ad79f3f334b3b202b2173c" + + "486ea37ce3d397ed034c7f9dfeb15c5e" + + "927336d0441f4c4300e2cff0d0900b52" + + "d3b4" + ), + ( + SHA1, + "0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b", + "", + "", + 42, + "0ac1af7002b3d761d1e55298da9d0506" + + "b9ae52057220a306e07b6b87e8df21d0" + + "ea00033de03984d34918" + ), + ( + SHA1, + "0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c", + None, + "", + 42, + "2c91117204d745f3500d636a62f64f0a" + + "b3bae548aa53d423b0d1f27ebba6f5e5" + + "673a081d70cce7acfc48" + ) + ) + + def test1(self): + for tv in self._test_vector: + secret, salt, info, exp = [ t2b(tv[x]) for x in (1,2,3,5) ] + key_len, hashmod = [ tv[x] for x in (4,0) ] + + output = HKDF(secret, key_len, salt, hashmod, 1, info) + self.assertEqual(output, exp) + + def test2(self): + ref = HKDF(b("XXXXXX"), 12, b("YYYY"), SHA1) + + # Same output, but this time split over 2 keys + key1, key2 = HKDF(b("XXXXXX"), 6, b("YYYY"), SHA1, 2) + self.assertEqual((ref[:6], ref[6:]), (key1, key2)) + + # Same output, but this time split over 3 keys + key1, key2, key3 = HKDF(b("XXXXXX"), 4, b("YYYY"), SHA1, 3) + self.assertEqual((ref[:4], ref[4:8], ref[8:]), (key1, key2, key3)) + + +class scrypt_Tests(unittest.TestCase): + + # Test vectors taken from + # https://tools.ietf.org/html/rfc7914 + # - password + # - salt + # - N + # - r + # - p + data = ( + ( + "", + "", + 16, # 2K + 1, + 1, + """ + 77 d6 57 62 38 65 7b 20 3b 19 ca 42 c1 8a 04 97 + f1 6b 48 44 e3 07 4a e8 df df fa 3f ed e2 14 42 + fc d0 06 9d ed 09 48 f8 32 6a 75 3a 0f c8 1f 17 + e8 d3 e0 fb 2e 0d 36 28 cf 35 e2 0c 38 d1 89 06 + """ + ), + ( + "password", + "NaCl", + 1024, # 1M + 8, + 16, + """ + fd ba be 1c 9d 34 72 00 78 56 e7 19 0d 01 e9 fe + 7c 6a d7 cb c8 23 78 30 e7 73 76 63 4b 37 31 62 + 2e af 30 d9 2e 22 a3 88 6f f1 09 27 9d 98 30 da + c7 27 af b9 4a 83 ee 6d 83 60 cb df a2 cc 06 40 + """ + ), + ( + "pleaseletmein", + "SodiumChloride", + 16384, # 16M + 8, + 1, + """ + 70 23 bd cb 3a fd 73 48 46 1c 06 cd 81 fd 38 eb + fd a8 fb ba 90 4f 8e 3e a9 b5 43 f6 54 5d a1 f2 + d5 43 29 55 61 3f 0f cf 62 d4 97 05 24 2a 9a f9 + e6 1e 85 dc 0d 65 1e 40 df cf 01 7b 45 57 58 87 + """ + ), + ( + "pleaseletmein", + "SodiumChloride", + 1048576, # 1G + 8, + 1, + """ + 21 01 cb 9b 6a 51 1a ae ad db 
be 09 cf 70 f8 81 + ec 56 8d 57 4a 2f fd 4d ab e5 ee 98 20 ad aa 47 + 8e 56 fd 8f 4b a5 d0 9f fa 1c 6d 92 7c 40 f4 c3 + 37 30 40 49 e8 a9 52 fb cb f4 5c 6f a7 7a 41 a4 + """ + ), + ) + + def setUp(self): + new_test_vectors = [] + for tv in self.data: + new_tv = TestVector() + new_tv.P = b(tv[0]) + new_tv.S = b(tv[1]) + new_tv.N = tv[2] + new_tv.r = tv[3] + new_tv.p = tv[4] + new_tv.output = t2b(tv[5]) + new_tv.dkLen = len(new_tv.output) + new_test_vectors.append(new_tv) + self.data = new_test_vectors + + def test2(self): + + for tv in self.data: + try: + output = scrypt(tv.P, tv.S, tv.dkLen, tv.N, tv.r, tv.p) + except ValueError as e: + if " 2 " in str(e) and tv.N >= 1048576: + import warnings + warnings.warn("Not enough memory to unit test scrypt() with N=1048576", RuntimeWarning) + continue + else: + raise e + self.assertEqual(output, tv.output) + + def test3(self): + ref = scrypt(b("password"), b("salt"), 12, 16, 1, 1) + + # Same output, but this time split over 2 keys + key1, key2 = scrypt(b("password"), b("salt"), 6, 16, 1, 1, 2) + self.assertEqual((ref[:6], ref[6:]), (key1, key2)) + + # Same output, but this time split over 3 keys + key1, key2, key3 = scrypt(b("password"), b("salt"), 4, 16, 1, 1, 3) + self.assertEqual((ref[:4], ref[4:8], ref[8:]), (key1, key2, key3)) + + +class bcrypt_Tests(unittest.TestCase): + + def test_negative_cases(self): + self.assertRaises(ValueError, bcrypt, b"1" * 73, 10) + self.assertRaises(ValueError, bcrypt, b"1" * 10, 3) + self.assertRaises(ValueError, bcrypt, b"1" * 10, 32) + self.assertRaises(ValueError, bcrypt, b"1" * 10, 4, salt=b"") + self.assertRaises(ValueError, bcrypt, b"1" * 10, 4, salt=b"1") + self.assertRaises(ValueError, bcrypt, b"1" * 10, 4, salt=b"1" * 17) + self.assertRaises(ValueError, bcrypt, b"1\x00" * 10, 4) + + def test_bytearray_mismatch(self): + ref = bcrypt("pwd", 4) + bcrypt_check("pwd", ref) + bref = bytearray(ref) + bcrypt_check("pwd", bref) + + wrong = ref[:-1] + bchr(bref[-1] ^ 0x01) + self.assertRaises(ValueError, bcrypt_check, "pwd", wrong) + + wrong = b"x" + ref[1:] + self.assertRaises(ValueError, bcrypt_check, "pwd", wrong) + + # https://github.com/patrickfav/bcrypt/wiki/Published-Test-Vectors + + def test_empty_password(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"", 4, b"zVHmKQtGGQob.b/Nc7l9NO", b"$2a$04$zVHmKQtGGQob.b/Nc7l9NO8UlrYcW05FiuCj/SxsFO/ZtiN9.mNzy"), + (b"", 5, b"zVHmKQtGGQob.b/Nc7l9NO", b"$2a$05$zVHmKQtGGQob.b/Nc7l9NOWES.1hkVBgy5IWImh9DOjKNU8atY4Iy"), + (b"", 6, b"zVHmKQtGGQob.b/Nc7l9NO", b"$2a$06$zVHmKQtGGQob.b/Nc7l9NOjOl7l4oz3WSh5fJ6414Uw8IXRAUoiaO"), + (b"", 7, b"zVHmKQtGGQob.b/Nc7l9NO", b"$2a$07$zVHmKQtGGQob.b/Nc7l9NOBsj1dQpBA1HYNGpIETIByoNX9jc.hOi"), + (b"", 8, b"zVHmKQtGGQob.b/Nc7l9NO", b"$2a$08$zVHmKQtGGQob.b/Nc7l9NOiLTUh/9MDpX86/DLyEzyiFjqjBFePgO"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_random_password_and_salt_short_pw(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"<.S.2K(Zq'", 4, b"VYAclAMpaXY/oqAo9yUpku", b"$2a$04$VYAclAMpaXY/oqAo9yUpkuWmoYywaPzyhu56HxXpVltnBIfmO9tgu"), + (b"5.rApO%5jA", 5, b"kVNDrnYKvbNr5AIcxNzeIu", b"$2a$05$kVNDrnYKvbNr5AIcxNzeIuRcyIF5cZk6UrwHGxENbxP5dVv.WQM/G"), + (b"oW++kSrQW^", 6, b"QLKkRMH9Am6irtPeSKN5sO", b"$2a$06$QLKkRMH9Am6irtPeSKN5sObJGr3j47cO6Pdf5JZ0AsJXuze0IbsNm"), + (b"ggJ\\KbTnDG", 7, b"4H896R09bzjhapgCPS/LYu", b"$2a$07$4H896R09bzjhapgCPS/LYuMzAQluVgR5iu/ALF8L8Aln6lzzYXwbq"), 
+ (b"49b0:;VkH/", 8, b"hfvO2retKrSrx5f2RXikWe", b"$2a$08$hfvO2retKrSrx5f2RXikWeFWdtSesPlbj08t/uXxCeZoHRWDz/xFe"), + (b">9N^5jc##'", 9, b"XZLvl7rMB3EvM0c1.JHivu", b"$2a$09$XZLvl7rMB3EvM0c1.JHivuIDPJWeNJPTVrpjZIEVRYYB/mF6cYgJK"), + (b"\\$ch)s4WXp", 10, b"aIjpMOLK5qiS9zjhcHR5TO", b"$2a$10$aIjpMOLK5qiS9zjhcHR5TOU7v2NFDmcsBmSFDt5EHOgp/jeTF3O/q"), + (b"RYoj\\_>2P7", 12, b"esIAHiQAJNNBrsr5V13l7.", b"$2a$12$esIAHiQAJNNBrsr5V13l7.RFWWJI2BZFtQlkFyiWXjou05GyuREZa"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_random_password_and_salt_long_pw(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"^Q&\"]A`%/A(BVGt>QaX0M-#1ghq_+\":Y0CRmY", 5, b"YuQvhokOGVnevctykUYpKu", b"$2a$05$YuQvhokOGVnevctykUYpKutZD2pWeGGYn3auyLOasguMY3/0BbIyq"), + (b"F%uN/j>[GuB7-jB'_Yj!Tnb7Y!u^6)", 6, b"5L3vpQ0tG9O7k5gQ8nAHAe", b"$2a$06$5L3vpQ0tG9O7k5gQ8nAHAe9xxQiOcOLh8LGcI0PLWhIznsDt.S.C6"), + (b"Z>BobP32ub\"Cfe*Q<-q-=tRSjOBh8\\mLNW.", 9, b"nArqOfdCsD9kIbVnAixnwe", b"$2a$09$nArqOfdCsD9kIbVnAixnwe6s8QvyPYWtQBpEXKir2OJF9/oNBsEFe"), + (b"/MH51`!BP&0tj3%YCA;Xk%e3S`o\\EI", 10, b"ePiAc.s.yoBi3B6p1iQUCe", b"$2a$10$ePiAc.s.yoBi3B6p1iQUCezn3mraLwpVJ5XGelVyYFKyp5FZn/y.u"), + (b"ptAP\"mcg6oH.\";c0U2_oll.OKi5?Ui\"^ai#iQH7ZFtNMfs3AROnIncE9\"BNNoEgO[[*Yk8;RQ(#S,;I+aT", + 5, b"wgkOlGNXIVE2fWkT3gyRoO", b"$2a$05$wgkOlGNXIVE2fWkT3gyRoOqWi4gbi1Wv2Q2Jx3xVs3apl1w.Wtj8C"), + (b"M.E1=dt<.L0Q&p;94NfGm_Oo23+Kpl@M5?WIAL.[@/:'S)W96G8N^AWb7_smmC]>7#fGoB", + 6, b"W9zTCl35nEvUukhhFzkKMe", b"$2a$06$W9zTCl35nEvUukhhFzkKMekjT9/pj7M0lihRVEZrX3m8/SBNZRX7i"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_increasing_password_length(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"a", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.l4WvgHIVg17ZawDIrDM2IjlE64GDNQS"), + (b"aa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.AyUxBk.ThHlsLvRTH7IqcG7yVHJ3SXq"), + (b"aaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.BxOVac5xPB6XFdRc/ZrzM9FgZkqmvbW"), + (b"aaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.Qbr209bpCtfl5hN7UQlG/L4xiD3AKau"), + (b"aaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.oWszihPjDZI0ypReKsaDOW1jBl7oOii"), + (b"aaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ./k.Xxn9YiqtV/sxh3EHbnOHd0Qsq27K"), + (b"aaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.PYJqRFQbgRbIjMd5VNKmdKS4sBVOyDe"), + (b"aaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ..VMYfzaw1wP/SGxowpLeGf13fxCCt.q"), + (b"aaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.5B0p054nO5WgAD1n04XslDY/bqY9RJi"), + (b"aaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.INBTgqm7sdlBJDg.J5mLMSRK25ri04y"), + (b"aaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.s3y7CdFD0OR5p6rsZw/eZ.Dla40KLfm"), + (b"aaaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.Jx742Djra6Q7PqJWnTAS.85c28g.Siq"), + (b"aaaaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.oKMXW3EZcPHcUV0ib5vDBnh9HojXnLu"), + (b"aaaaaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.w6nIjWpDPNSH5pZUvLjC1q25ONEQpeS"), + (b"aaaaaaaaaaaaaaa", 4, 
b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.k1b2/r9A/hxdwKEKurg6OCn4MwMdiGq"), + (b"aaaaaaaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.3prCNHVX1Ws.7Hm2bJxFUnQOX9f7DFa"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_non_ascii_characters(self): + # password, cost, salt, bcrypt hash + tvs = [ + ("àèìòùÀÈÌÒÙáéíóúýÁÉÍÓÚÝðÐ", 4, b"D3qS2aoTVyqM7z8v8crLm.", b"$2a$04$D3qS2aoTVyqM7z8v8crLm.3nKt4CzBZJbyFB.ZebmfCvRw7BGs.Xm"), + ("àèìòùÀÈÌÒÙáéíóúýÁÉÍÓÚÝðÐ", 5, b"VA1FujiOCMPkUHQ8kF7IaO", b"$2a$05$VA1FujiOCMPkUHQ8kF7IaOg7NGaNvpxwWzSluQutxEVmbZItRTsAa"), + ("àèìòùÀÈÌÒÙáéíóúýÁÉÍÓÚÝðÐ", 6, b"TXiaNrPeBSz5ugiQlehRt.", b"$2a$06$TXiaNrPeBSz5ugiQlehRt.gwpeDQnXWteQL4z2FulouBr6G7D9KUi"), + ("âêîôûÂÊÎÔÛãñõÃÑÕäëïöüÿ", 4, b"YTn1Qlvps8e1odqMn6G5x.", b"$2a$04$YTn1Qlvps8e1odqMn6G5x.85pqKql6w773EZJAExk7/BatYAI4tyO"), + ("âêîôûÂÊÎÔÛãñõÃÑÕäëïöüÿ", 5, b"C.8k5vJKD2NtfrRI9o17DO", b"$2a$05$C.8k5vJKD2NtfrRI9o17DOfIW0XnwItA529vJnh2jzYTb1QdoY0py"), + ("âêîôûÂÊÎÔÛãñõÃÑÕäëïöüÿ", 6, b"xqfRPj3RYAgwurrhcA6uRO", b"$2a$06$xqfRPj3RYAgwurrhcA6uROtGlXDp/U6/gkoDYHwlubtcVcNft5.vW"), + ("ÄËÏÖÜŸåÅæÆœŒßçÇøØ¢¿¡€", 4, b"y8vGgMmr9EdyxP9rmMKjH.", b"$2a$04$y8vGgMmr9EdyxP9rmMKjH.wv2y3r7yRD79gykQtmb3N3zrwjKsyay"), + ("ÄËÏÖÜŸåÅæÆœŒßçÇøØ¢¿¡€", 5, b"iYH4XIKAOOm/xPQs7xKP1u", b"$2a$05$iYH4XIKAOOm/xPQs7xKP1upD0cWyMn3Jf0ZWiizXbEkVpS41K1dcO"), + ("ÄËÏÖÜŸåÅæÆœŒßçÇøØ¢¿¡€", 6, b"wCOob.D0VV8twafNDB2ape", b"$2a$06$wCOob.D0VV8twafNDB2apegiGD5nqF6Y1e6K95q6Y.R8C4QGd265q"), + ("ΔημοσιεύθηκεστηνΕφημερίδατης", 4, b"E5SQtS6P4568MDXW7cyUp.", b"$2a$04$E5SQtS6P4568MDXW7cyUp.18wfDisKZBxifnPZjAI1d/KTYMfHPYO"), + ("АБбВвГгДдЕеЁёЖжЗзИиЙйКкЛлМмН", 4, b"03e26gQFHhQwRNf81/ww9.", b"$2a$04$03e26gQFHhQwRNf81/ww9.p1UbrNwxpzWjLuT.zpTLH4t/w5WhAhC"), + ("нОоПпРрСсТтУуФфХхЦцЧчШшЩщЪъЫыЬьЭэЮю", 4, b"PHNoJwpXCfe32nUtLv2Upu", b"$2a$04$PHNoJwpXCfe32nUtLv2UpuhJXOzd4k7IdFwnEpYwfJVCZ/f/.8Pje"), + ("電电電島岛島兔兔兎龜龟亀國国国區区区", 4, b"wU4/0i1TmNl2u.1jIwBX.u", b"$2a$04$wU4/0i1TmNl2u.1jIwBX.uZUaOL3Rc5ID7nlQRloQh6q5wwhV/zLW"), + ("诶比伊艾弗豆贝尔维吾艾尺开艾丝维贼德", 4, b"P4kreGLhCd26d4WIy7DJXu", b"$2a$04$P4kreGLhCd26d4WIy7DJXusPkhxLvBouzV6OXkL5EB0jux0osjsry"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_special_case_salt(self): + # password, cost, salt, bcrypt hash + tvs = [ + ("-O_=*N!2JP", 4, b"......................", b"$2a$04$......................JjuKLOX9OOwo5PceZZXSkaLDvdmgb82"), + ("7B[$Q<4b>U", 5, b"......................", b"$2a$05$......................DRiedDQZRL3xq5A5FL8y7/6NM8a2Y5W"), + (">d5-I_8^.h", 6, b"......................", b"$2a$06$......................5Mq1Ng8jgDY.uHNU4h5p/x6BedzNH2W"), + (")V`/UM/]1t", 4, b".OC/.OC/.OC/.OC/.OC/.O", b"$2a$04$.OC/.OC/.OC/.OC/.OC/.OQIvKRDAam.Hm5/IaV/.hc7P8gwwIbmi"), + (":@t2.bWuH]", 5, b".OC/.OC/.OC/.OC/.OC/.O", b"$2a$05$.OC/.OC/.OC/.OC/.OC/.ONDbUvdOchUiKmQORX6BlkPofa/QxW9e"), + ("b(#KljF5s\"", 6, b".OC/.OC/.OC/.OC/.OC/.O", b"$2a$06$.OC/.OC/.OC/.OC/.OC/.OHfTd9e7svOu34vi1PCvOcAEq07ST7.K"), + ("@3YaJ^Xs]*", 4, b"eGA.eGA.eGA.eGA.eGA.e.", b"$2a$04$eGA.eGA.eGA.eGA.eGA.e.stcmvh.R70m.0jbfSFVxlONdj1iws0C"), + ("'\"5\\!k*C(p", 5, b"eGA.eGA.eGA.eGA.eGA.e.", b"$2a$05$eGA.eGA.eGA.eGA.eGA.e.vR37mVSbfdHwu.F0sNMvgn8oruQRghy"), + ("edEu7C?$'W", 6, b"eGA.eGA.eGA.eGA.eGA.e.", 
b"$2a$06$eGA.eGA.eGA.eGA.eGA.e.tSq0FN8MWHQXJXNFnHTPQKtA.n2a..G"), + ("N7dHmg\\PI^", 4, b"999999999999999999999u", b"$2a$04$999999999999999999999uCZfA/pLrlyngNDMq89r1uUk.bQ9icOu"), + ("\"eJuHh!)7*", 5, b"999999999999999999999u", b"$2a$05$999999999999999999999uj8Pfx.ufrJFAoWFLjapYBS5vVEQQ/hK"), + ("ZeDRJ:_tu:", 6, b"999999999999999999999u", b"$2a$06$999999999999999999999u6RB0P9UmbdbQgjoQFEJsrvrKe.BoU6q"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + +class TestVectorsHKDFWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" + + def add_tests(self, filename): + + def filter_algo(root): + algo_name = root['algorithm'] + if algo_name == "HKDF-SHA-1": + return SHA1 + elif algo_name == "HKDF-SHA-256": + return SHA256 + elif algo_name == "HKDF-SHA-384": + return SHA384 + elif algo_name == "HKDF-SHA-512": + return SHA512 + else: + raise ValueError("Unknown algorithm " + algo_name) + + def filter_size(unit): + return int(unit['size']) + + result = load_test_vectors_wycheproof(("Protocol", "wycheproof"), + filename, + "Wycheproof HMAC (%s)" % filename, + root_tag={'hash_module': filter_algo}, + unit_tag={'size': filter_size}) + return result + + def setUp(self): + self.tv = [] + self.add_tests("hkdf_sha1_test.json") + self.add_tests("hkdf_sha256_test.json") + self.add_tests("hkdf_sha384_test.json") + self.add_tests("hkdf_sha512_test.json") + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_verify(self, tv): + self._id = "Wycheproof HKDF Test #%d (%s, %s)" % (tv.id, tv.comment, tv.filename) + + try: + key = HKDF(tv.ikm, tv.size, tv.salt, tv.hash_module, 1, tv.info) + except ValueError: + assert not tv.valid + else: + if key != tv.okm: + assert not tv.valid + else: + assert tv.valid + self.warn(tv) + + def runTest(self): + for tv in self.tv: + self.test_verify(tv) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + if not config.get('slow_tests'): + PBKDF2_Tests._testData = PBKDF2_Tests._testData[:3] + scrypt_Tests.data = scrypt_Tests.data[:3] + + tests = [] + tests += list_test_cases(PBKDF1_Tests) + tests += list_test_cases(PBKDF2_Tests) + tests += list_test_cases(S2V_Tests) + tests += list_test_cases(HKDF_Tests) + tests += [TestVectorsHKDFWycheproof(wycheproof_warnings)] + tests += list_test_cases(scrypt_Tests) + tests += list_test_cases(bcrypt_Tests) + + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/test_SecretSharing.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/test_SecretSharing.py new file mode 100644 index 00000000..0ea58a57 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/test_SecretSharing.py @@ -0,0 +1,267 @@ +# +# SelfTest/Protocol/test_secret_sharing.py: Self-test for secret sharing protocols +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from unittest import main, TestCase, TestSuite +from binascii import unhexlify, hexlify + +from Crypto.Util.py3compat import * +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Protocol.SecretSharing import Shamir, _Element, \ + _mult_gf2, _div_gf2 + +class GF2_Tests(TestCase): + + def test_mult_gf2(self): + # Prove mult by zero + x = _mult_gf2(0,0) + self.assertEqual(x, 0) + + # Prove mult by unity + x = _mult_gf2(34, 1) + self.assertEqual(x, 34) + + z = 3 # (x+1) + y = _mult_gf2(z, z) + self.assertEqual(y, 5) # (x+1)^2 = x^2 + 1 + y = _mult_gf2(y, z) + self.assertEqual(y, 15) # (x+1)^3 = x^3 + x^2 + x + 1 + y = _mult_gf2(y, z) + self.assertEqual(y, 17) # (x+1)^4 = x^4 + 1 + + # Prove linearity works + comps = [1, 4, 128, 2**34] + sum_comps = 1+4+128+2**34 + y = 908 + z = _mult_gf2(sum_comps, y) + w = 0 + for x in comps: + w ^= _mult_gf2(x, y) + self.assertEqual(w, z) + + def test_div_gf2(self): + from Crypto.Util.number import size as deg + + x, y = _div_gf2(567, 7) + self.assertTrue(deg(y) < deg(7)) + + w = _mult_gf2(x, 7) ^ y + self.assertEqual(567, w) + + x, y = _div_gf2(7, 567) + self.assertEqual(x, 0) + self.assertEqual(y, 7) + +class Element_Tests(TestCase): + + def test1(self): + # Test encondings + e = _Element(256) + self.assertEqual(int(e), 256) + self.assertEqual(e.encode(), bchr(0)*14 + b("\x01\x00")) + + e = _Element(bchr(0)*14 + b("\x01\x10")) + self.assertEqual(int(e), 0x110) + self.assertEqual(e.encode(), bchr(0)*14 + b("\x01\x10")) + + # Only 16 byte string are a valid encoding + self.assertRaises(ValueError, _Element, bchr(0)) + + def test2(self): + # Test addition + e = _Element(0x10) + f = _Element(0x0A) + self.assertEqual(int(e+f), 0x1A) + + def test3(self): + # Test multiplication + zero = _Element(0) + one = _Element(1) + two = _Element(2) + + x = _Element(6) * zero + self.assertEqual(int(x), 0) + + x = _Element(6) * one + self.assertEqual(int(x), 6) + + x = _Element(2**127) * two + self.assertEqual(int(x), 1 + 2 + 4 + 128) + + def test4(self): + # Test inversion + one = _Element(1) + + x = one.inverse() + self.assertEqual(int(x), 1) + + x = _Element(82323923) + y = x.inverse() + 
self.assertEqual(int(x * y), 1) + +class Shamir_Tests(TestCase): + + def test1(self): + # Test splitting + shares = Shamir.split(2, 3, bchr(90)*16) + self.assertEqual(len(shares), 3) + for index in range(3): + self.assertEqual(shares[index][0], index+1) + self.assertEqual(len(shares[index][1]), 16) + + def test2(self): + # Test recombine + from itertools import permutations + + test_vectors = ( + (2, "d9fe73909bae28b3757854c0af7ad405", + "1-594ae8964294174d95c33756d2504170", + "2-d897459d29da574eb40e93ec552ffe6e", + "3-5823de9bf0e068b054b5f07a28056b1b", + "4-db2c1f8bff46d748f795da995bd080cb"), + (2, "bf4f902d9a7efafd1f3ffd9291fd5de9", + "1-557bd3b0748064b533469722d1cc7935", + "2-6b2717164783c66d47cd28f2119f14d0", + "3-8113548ba97d58256bb4424251ae300c", + "4-179e9e5a218483ddaeda57539139cf04"), + (3, "ec96aa5c14c9faa699354cf1da74e904", + "1-64579fbf1908d66f7239bf6e2b4e41e1", + "2-6cd9428df8017b52322561e8c672ae3e", + "3-e418776ef5c0579bd9299277374806dd", + "4-ab3f77a0107398d23b323e581bb43f5d", + "5-23fe42431db2b41bd03ecdc7ea8e97ac"), + (3, "44cf249b68b80fcdc27b47be60c2c145", + "1-d6515a3905cd755119b86e311c801e31", + "2-16693d9ac9f10c254036ced5f8917fa3", + "3-84f74338a48476b99bf5e75a84d3a0d1", + "4-3fe8878dc4a5d35811cf3cbcd33dbe52", + "5-ad76f92fa9d0a9c4ca0c1533af7f6132"), + (5, "5398717c982db935d968eebe53a47f5a", + "1-be7be2dd4c068e7ef576aaa1b1c11b01", + "2-f821f5848441cb98b3eb467e2733ee21", + "3-25ee52f53e203f6e29a0297b5ab486b5", + "4-fc9fb58ef74dab947fbf9acd9d5d83cd", + "5-b1949cce46d81552e65f248d3f74cc5c", + "6-d64797f59977c4d4a7956ad916da7699", + "7-ab608a6546a8b9af8820ff832b1135c7"), + (5, "4a78db90fbf35da5545d2fb728e87596", + "1-08daf9a25d8aa184cfbf02b30a0ed6a0", + "2-dda28261e36f0b14168c2cf153fb734e", + "3-e9fdec5505d674a57f9836c417c1ecaa", + "4-4dce5636ae06dee42d2c82e65f06c735", + "5-3963dc118afc2ba798fa1d452b28ef00", + "6-6dfe6ff5b09e94d2f84c382b12f42424", + "7-6faea9d4d4a4e201bf6c90b9000630c3"), + (10, "eccbf6d66d680b49b073c4f1ddf804aa", + "01-7d8ac32fe4ae209ead1f3220fda34466", + "02-f9144e76988aad647d2e61353a6e96d5", + "03-b14c3b80179203363922d60760271c98", + "04-770bb2a8c28f6cee89e00f4d5cc7f861", + "05-6e3d7073ea368334ef67467871c66799", + "06-248792bc74a98ce024477c13c8fb5f8d", + "07-fcea4640d2db820c0604851e293d2487", + "08-2776c36fb714bb1f8525a0be36fc7dba", + "09-6ee7ac8be773e473a4bf75ee5f065762", + "10-33657fc073354cf91d4a68c735aacfc8", + "11-7645c65094a5868bf225c516fdee2d0c", + "12-840485aacb8226631ecd9c70e3018086"), + (10, "377e63bdbb5f7d4dc58a483d035212bb", + "01-32c53260103be431c843b1a633afe3bd", + "02-0107eb16cb8695084d452d2cc50bc7d6", + "03-df1e5c66cd755287fb0446faccd72a06", + "04-361bbcd5d40797f49dfa1898652da197", + "05-160d3ad1512f7dec7fd9344aed318591", + "06-659af6d95df4f25beca4fb9bfee3b7e8", + "07-37f3b208977bad50b3724566b72bfa9d", + "08-6c1de2dfc69c2986142c26a8248eb316", + "09-5e19220837a396bd4bc8cd685ff314c3", + "10-86e7b864fb0f3d628e46d50c1ba92f1c", + "11-065d0082c80b1aea18f4abe0c49df72e", + "12-84a09430c1d20ea9f388f3123c3733a3"), + ) + + def get_share(p): + pos = p.find('-') + return int(p[:pos]), unhexlify(p[pos + 1:]) + + for tv in test_vectors: + k = tv[0] + secret = unhexlify(tv[1]) + max_perms = 10 + for perm, shares_idx in enumerate(permutations(range(2, len(tv)), k)): + if perm > max_perms: + break + shares = [ get_share(tv[x]) for x in shares_idx ] + result = Shamir.combine(shares, True) + self.assertEqual(secret, result) + + def test3(self): + # Loopback split/recombine + secret = unhexlify(b("000102030405060708090a0b0c0d0e0f")) + + shares = Shamir.split(2, 
3, secret) + + secret2 = Shamir.combine(shares[:2]) + self.assertEqual(secret, secret2) + + secret3 = Shamir.combine([ shares[0], shares[2] ]) + self.assertEqual(secret, secret3) + + def test4(self): + # Loopback split/recombine (SSSS) + secret = unhexlify(b("000102030405060708090a0b0c0d0e0f")) + + shares = Shamir.split(2, 3, secret, ssss=True) + + secret2 = Shamir.combine(shares[:2], ssss=True) + self.assertEqual(secret, secret2) + + def test5(self): + # Detect duplicate shares + secret = unhexlify(b("000102030405060708090a0b0c0d0e0f")) + + shares = Shamir.split(2, 3, secret) + self.assertRaises(ValueError, Shamir.combine, (shares[0], shares[0])) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(GF2_Tests) + tests += list_test_cases(Element_Tests) + tests += list_test_cases(Shamir_Tests) + return tests + +if __name__ == '__main__': + suite = lambda: TestSuite(get_tests()) + main(defaultTest='suite') + diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/test_rfc1751.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/test_rfc1751.py new file mode 100644 index 00000000..0878cc51 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Protocol/test_rfc1751.py @@ -0,0 +1,62 @@ +# +# Test script for Crypto.Util.RFC1751. +# +# Part of the Python Cryptography Toolkit +# +# Written by Andrew Kuchling and others +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +__revision__ = "$Id$" + +import binascii +import unittest +from Crypto.Util import RFC1751 +from Crypto.Util.py3compat import * + +test_data = [('EB33F77EE73D4053', 'TIDE ITCH SLOW REIN RULE MOT'), + ('CCAC2AED591056BE4F90FD441C534766', + 'RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE'), + ('EFF81F9BFBC65350920CDD7416DE8009', + 'TROD MUTE TAIL WARM CHAR KONG HAAG CITY BORE O TEAL AWL') + ] + +class RFC1751Test_k2e (unittest.TestCase): + + def runTest (self): + "Check converting keys to English" + for key, words in test_data: + key=binascii.a2b_hex(b(key)) + self.assertEqual(RFC1751.key_to_english(key), words) + +class RFC1751Test_e2k (unittest.TestCase): + + def runTest (self): + "Check converting English strings to keys" + for key, words in test_data: + key=binascii.a2b_hex(b(key)) + self.assertEqual(RFC1751.english_to_key(words), key) + +# class RFC1751Test + +def get_tests(config={}): + return [RFC1751Test_k2e(), RFC1751Test_e2k()] + +if __name__ == "__main__": + unittest.main() diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/__init__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/__init__.py new file mode 100644 index 00000000..437d3e41 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/__init__.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/__init__.py: Self-test for public key crypto +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-test for public-key crypto""" + +import unittest +from Crypto.SelfTest.PublicKey import (test_DSA, test_RSA, + test_ECC_NIST, test_ECC_25519, test_ECC_448, + test_import_DSA, test_import_RSA, + test_import_ECC, test_ElGamal) + + +def get_tests(config={}): + tests = [] + tests += test_DSA.get_tests(config=config) + tests += test_RSA.get_tests(config=config) + tests += test_ECC_NIST.get_tests(config=config) + tests += test_ECC_25519.get_tests(config=config) + tests += test_ECC_448.get_tests(config=config) + + tests += test_import_DSA.get_tests(config=config) + tests += test_import_RSA.get_tests(config=config) + tests += test_import_ECC.get_tests(config=config) + + tests += test_ElGamal.get_tests(config=config) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_DSA.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_DSA.py new file mode 100644 index 00000000..125cf6c4 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_DSA.py @@ -0,0 +1,247 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/test_DSA.py: Self-test for the DSA primitive +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.PublicKey.DSA""" + +import os +from Crypto.Util.py3compat import * + +import unittest +from Crypto.SelfTest.st_common import list_test_cases, a2b_hex, b2a_hex + +def _sws(s): + """Remove whitespace from a text or byte string""" + if isinstance(s,str): + return "".join(s.split()) + else: + return b("").join(s.split()) + +class DSATest(unittest.TestCase): + # Test vector from "Appendix 5. Example of the DSA" of + # "Digital Signature Standard (DSS)", + # U.S. Department of Commerce/National Institute of Standards and Technology + # FIPS 186-2 (+Change Notice), 2000 January 27. 
+ # http://csrc.nist.gov/publications/fips/fips186-2/fips186-2-change1.pdf + + y = _sws("""19131871 d75b1612 a819f29d 78d1b0d7 346f7aa7 7bb62a85 + 9bfd6c56 75da9d21 2d3a36ef 1672ef66 0b8c7c25 5cc0ec74 + 858fba33 f44c0669 9630a76b 030ee333""") + + g = _sws("""626d0278 39ea0a13 413163a5 5b4cb500 299d5522 956cefcb + 3bff10f3 99ce2c2e 71cb9de5 fa24babf 58e5b795 21925c9c + c42e9f6f 464b088c c572af53 e6d78802""") + + p = _sws("""8df2a494 492276aa 3d25759b b06869cb eac0d83a fb8d0cf7 + cbb8324f 0d7882e5 d0762fc5 b7210eaf c2e9adac 32ab7aac + 49693dfb f83724c2 ec0736ee 31c80291""") + + q = _sws("""c773218c 737ec8ee 993b4f2d ed30f48e dace915f""") + + x = _sws("""2070b322 3dba372f de1c0ffc 7b2e3b49 8b260614""") + + k = _sws("""358dad57 1462710f 50e254cf 1a376b2b deaadfbf""") + k_inverse = _sws("""0d516729 8202e49b 4116ac10 4fc3f415 ae52f917""") + m = b2a_hex(b("abc")) + m_hash = _sws("""a9993e36 4706816a ba3e2571 7850c26c 9cd0d89d""") + r = _sws("""8bac1ab6 6410435c b7181f95 b16ab97c 92b341c0""") + s = _sws("""41e2345f 1f56df24 58f426d1 55b4ba2d b6dcd8c8""") + + def setUp(self): + global DSA, Random, bytes_to_long, size + from Crypto.PublicKey import DSA + from Crypto import Random + from Crypto.Util.number import bytes_to_long, inverse, size + + self.dsa = DSA + + def test_generate_1arg(self): + """DSA (default implementation) generated key (1 argument)""" + dsaObj = self.dsa.generate(1024) + self._check_private_key(dsaObj) + pub = dsaObj.public_key() + self._check_public_key(pub) + + def test_generate_2arg(self): + """DSA (default implementation) generated key (2 arguments)""" + dsaObj = self.dsa.generate(1024, Random.new().read) + self._check_private_key(dsaObj) + pub = dsaObj.public_key() + self._check_public_key(pub) + + def test_construct_4tuple(self): + """DSA (default implementation) constructed key (4-tuple)""" + (y, g, p, q) = [bytes_to_long(a2b_hex(param)) for param in (self.y, self.g, self.p, self.q)] + dsaObj = self.dsa.construct((y, g, p, q)) + self._test_verification(dsaObj) + + def test_construct_5tuple(self): + """DSA (default implementation) constructed key (5-tuple)""" + (y, g, p, q, x) = [bytes_to_long(a2b_hex(param)) for param in (self.y, self.g, self.p, self.q, self.x)] + dsaObj = self.dsa.construct((y, g, p, q, x)) + self._test_signing(dsaObj) + self._test_verification(dsaObj) + + def test_construct_bad_key4(self): + (y, g, p, q) = [bytes_to_long(a2b_hex(param)) for param in (self.y, self.g, self.p, self.q)] + tup = (y, g, p+1, q) + self.assertRaises(ValueError, self.dsa.construct, tup) + + tup = (y, g, p, q+1) + self.assertRaises(ValueError, self.dsa.construct, tup) + + tup = (y, 1, p, q) + self.assertRaises(ValueError, self.dsa.construct, tup) + + def test_construct_bad_key5(self): + (y, g, p, q, x) = [bytes_to_long(a2b_hex(param)) for param in (self.y, self.g, self.p, self.q, self.x)] + tup = (y, g, p, q, x+1) + self.assertRaises(ValueError, self.dsa.construct, tup) + + tup = (y, g, p, q, q+10) + self.assertRaises(ValueError, self.dsa.construct, tup) + + def _check_private_key(self, dsaObj): + # Check capabilities + self.assertEqual(1, dsaObj.has_private()) + self.assertEqual(1, dsaObj.can_sign()) + self.assertEqual(0, dsaObj.can_encrypt()) + + # Sanity check key data + self.assertEqual(1, dsaObj.p > dsaObj.q) # p > q + self.assertEqual(160, size(dsaObj.q)) # size(q) == 160 bits + self.assertEqual(0, (dsaObj.p - 1) % dsaObj.q) # q is a divisor of p-1 + self.assertEqual(dsaObj.y, pow(dsaObj.g, dsaObj.x, dsaObj.p)) # y == g**x mod p + self.assertEqual(1, 0 < dsaObj.x < 
dsaObj.q) # 0 < x < q + + def _check_public_key(self, dsaObj): + k = bytes_to_long(a2b_hex(self.k)) + m_hash = bytes_to_long(a2b_hex(self.m_hash)) + + # Check capabilities + self.assertEqual(0, dsaObj.has_private()) + self.assertEqual(1, dsaObj.can_sign()) + self.assertEqual(0, dsaObj.can_encrypt()) + + # Check that private parameters are all missing + self.assertEqual(0, hasattr(dsaObj, 'x')) + + # Sanity check key data + self.assertEqual(1, dsaObj.p > dsaObj.q) # p > q + self.assertEqual(160, size(dsaObj.q)) # size(q) == 160 bits + self.assertEqual(0, (dsaObj.p - 1) % dsaObj.q) # q is a divisor of p-1 + + # Public-only key objects should raise an error when .sign() is called + self.assertRaises(TypeError, dsaObj._sign, m_hash, k) + + # Check __eq__ and __ne__ + self.assertEqual(dsaObj.public_key() == dsaObj.public_key(),True) # assert_ + self.assertEqual(dsaObj.public_key() != dsaObj.public_key(),False) # assertFalse + + self.assertEqual(dsaObj.public_key(), dsaObj.publickey()) + + def _test_signing(self, dsaObj): + k = bytes_to_long(a2b_hex(self.k)) + m_hash = bytes_to_long(a2b_hex(self.m_hash)) + r = bytes_to_long(a2b_hex(self.r)) + s = bytes_to_long(a2b_hex(self.s)) + (r_out, s_out) = dsaObj._sign(m_hash, k) + self.assertEqual((r, s), (r_out, s_out)) + + def _test_verification(self, dsaObj): + m_hash = bytes_to_long(a2b_hex(self.m_hash)) + r = bytes_to_long(a2b_hex(self.r)) + s = bytes_to_long(a2b_hex(self.s)) + self.assertTrue(dsaObj._verify(m_hash, (r, s))) + self.assertFalse(dsaObj._verify(m_hash + 1, (r, s))) + + def test_repr(self): + (y, g, p, q) = [bytes_to_long(a2b_hex(param)) for param in (self.y, self.g, self.p, self.q)] + dsaObj = self.dsa.construct((y, g, p, q)) + repr(dsaObj) + + +class DSADomainTest(unittest.TestCase): + + def test_domain1(self): + """Verify we can generate new keys in a given domain""" + dsa_key_1 = DSA.generate(1024) + domain_params = dsa_key_1.domain() + + dsa_key_2 = DSA.generate(1024, domain=domain_params) + self.assertEqual(dsa_key_1.p, dsa_key_2.p) + self.assertEqual(dsa_key_1.q, dsa_key_2.q) + self.assertEqual(dsa_key_1.g, dsa_key_2.g) + + self.assertEqual(dsa_key_1.domain(), dsa_key_2.domain()) + + def _get_weak_domain(self): + + from Crypto.Math.Numbers import Integer + from Crypto.Math import Primality + + p = Integer(4) + while p.size_in_bits() != 1024 or Primality.test_probable_prime(p) != Primality.PROBABLY_PRIME: + q1 = Integer.random(exact_bits=80) + q2 = Integer.random(exact_bits=80) + q = q1 * q2 + z = Integer.random(exact_bits=1024-160) + p = z * q + 1 + + h = Integer(2) + g = 1 + while g == 1: + g = pow(h, z, p) + h += 1 + + return (p, q, g) + + + def test_generate_error_weak_domain(self): + """Verify that domain parameters with composite q are rejected""" + + domain_params = self._get_weak_domain() + self.assertRaises(ValueError, DSA.generate, 1024, domain=domain_params) + + + def test_construct_error_weak_domain(self): + """Verify that domain parameters with composite q are rejected""" + + from Crypto.Math.Numbers import Integer + + p, q, g = self._get_weak_domain() + y = pow(g, 89, p) + self.assertRaises(ValueError, DSA.construct, (y, g, p, q)) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(DSATest) + tests += list_test_cases(DSADomainTest) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ECC_25519.py 
b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ECC_25519.py new file mode 100644 index 00000000..305c077c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ECC_25519.py @@ -0,0 +1,327 @@ +# =================================================================== +# +# Copyright (c) 2022, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors + +from Crypto.PublicKey import ECC +from Crypto.PublicKey.ECC import EccPoint, _curves, EccKey + +from Crypto.Math.Numbers import Integer + +from Crypto.Hash import SHAKE128 + + +class TestEccPoint_Ed25519(unittest.TestCase): + + Gxy = {"x": 15112221349535400772501151409588531511454012693041857206046113283949847762202, + "y": 46316835694926478169428394003475163141307993866256225615783033603165251855960} + + G2xy = {"x": 24727413235106541002554574571675588834622768167397638456726423682521233608206, + "y": 15549675580280190176352668710449542251549572066445060580507079593062643049417} + + G3xy = {"x": 46896733464454938657123544595386787789046198280132665686241321779790909858396, + "y": 8324843778533443976490377120369201138301417226297555316741202210403726505172} + + pointG = EccPoint(Gxy['x'], Gxy['y'], curve="Ed25519") + pointG2 = EccPoint(G2xy['x'], G2xy['y'], curve="Ed25519") + pointG3 = EccPoint(G3xy['x'], G3xy['y'], curve="Ed25519") + + def test_init_xy(self): + EccPoint(self.Gxy['x'], self.Gxy['y'], curve="Ed25519") + + # Neutral point + pai = EccPoint(0, 1, curve="Ed25519") + self.assertEqual(pai.x, 0) + self.assertEqual(pai.y, 1) + self.assertEqual(pai.xy, (0, 1)) + + # G + bp = self.pointG.copy() + self.assertEqual(bp.x, 15112221349535400772501151409588531511454012693041857206046113283949847762202) + self.assertEqual(bp.y, 46316835694926478169428394003475163141307993866256225615783033603165251855960) + self.assertEqual(bp.xy, (bp.x, bp.y)) + + # 2G + bp2 = self.pointG2.copy() + self.assertEqual(bp2.x, 24727413235106541002554574571675588834622768167397638456726423682521233608206) + self.assertEqual(bp2.y, 
15549675580280190176352668710449542251549572066445060580507079593062643049417) + self.assertEqual(bp2.xy, (bp2.x, bp2.y)) + + # 5G + EccPoint(x=33467004535436536005251147249499675200073690106659565782908757308821616914995, + y=43097193783671926753355113395909008640284023746042808659097434958891230611693, + curve="Ed25519") + + # Catch if point is not on the curve + self.assertRaises(ValueError, EccPoint, 34, 35, curve="Ed25519") + + def test_set(self): + pointW = EccPoint(0, 1, curve="Ed25519") + pointW.set(self.pointG) + self.assertEqual(pointW.x, self.pointG.x) + self.assertEqual(pointW.y, self.pointG.y) + + def test_copy(self): + pointW = self.pointG.copy() + self.assertEqual(pointW.x, self.pointG.x) + self.assertEqual(pointW.y, self.pointG.y) + + def test_equal(self): + pointH = self.pointG.copy() + pointI = self.pointG2.copy() + self.assertEqual(self.pointG, pointH) + self.assertNotEqual(self.pointG, pointI) + + def test_pai(self): + pai = EccPoint(0, 1, curve="Ed25519") + self.failUnless(pai.is_point_at_infinity()) + self.assertEqual(pai, pai.point_at_infinity()) + + def test_negate(self): + negG = -self.pointG + sum = self.pointG + negG + self.failUnless(sum.is_point_at_infinity()) + + def test_addition(self): + self.assertEqual(self.pointG + self.pointG2, self.pointG3) + self.assertEqual(self.pointG2 + self.pointG, self.pointG3) + self.assertEqual(self.pointG2 + self.pointG.point_at_infinity(), self.pointG2) + self.assertEqual(self.pointG.point_at_infinity() + self.pointG2, self.pointG2) + + G5 = self.pointG2 + self.pointG3 + self.assertEqual(G5.x, 33467004535436536005251147249499675200073690106659565782908757308821616914995) + self.assertEqual(G5.y, 43097193783671926753355113395909008640284023746042808659097434958891230611693) + + def test_inplace_addition(self): + pointH = self.pointG.copy() + pointH += self.pointG + self.assertEqual(pointH, self.pointG2) + pointH += self.pointG + self.assertEqual(pointH, self.pointG3) + pointH += self.pointG.point_at_infinity() + self.assertEqual(pointH, self.pointG3) + + def test_doubling(self): + pointH = self.pointG.copy() + pointH.double() + self.assertEqual(pointH.x, self.pointG2.x) + self.assertEqual(pointH.y, self.pointG2.y) + + # 2*0 + pai = self.pointG.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + def test_scalar_multiply(self): + d = 0 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0) + self.assertEqual(pointH.y, 1) + + d = 1 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG.x) + self.assertEqual(pointH.y, self.pointG.y) + + d = 2 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG2.x) + self.assertEqual(pointH.y, self.pointG2.y) + + d = 3 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG3.x) + self.assertEqual(pointH.y, self.pointG3.y) + + d = 4 + pointH = d * self.pointG + self.assertEqual(pointH.x, 14582954232372986451776170844943001818709880559417862259286374126315108956272) + self.assertEqual(pointH.y, 32483318716863467900234833297694612235682047836132991208333042722294373421359) + + d = 5 + pointH = d * self.pointG + self.assertEqual(pointH.x, 33467004535436536005251147249499675200073690106659565782908757308821616914995) + self.assertEqual(pointH.y, 43097193783671926753355113395909008640284023746042808659097434958891230611693) + + d = 10 + pointH = d * self.pointG + self.assertEqual(pointH.x, 43500613248243327786121022071801015118933854441360174117148262713429272820047) + self.assertEqual(pointH.y, 
45005105423099817237495816771148012388779685712352441364231470781391834741548) + + d = 20 + pointH = d * self.pointG + self.assertEqual(pointH.x, 46694936775300686710656303283485882876784402425210400817529601134760286812591) + self.assertEqual(pointH.y, 8786390172762935853260670851718824721296437982862763585171334833968259029560) + + d = 255 + pointH = d * self.pointG + self.assertEqual(pointH.x, 36843863416400016952258312492144504209624961884991522125275155377549541182230) + self.assertEqual(pointH.y, 22327030283879720808995671630924669697661065034121040761798775626517750047180) + + d = 256 + pointH = d * self.pointG + self.assertEqual(pointH.x, 42740085206947573681423002599456489563927820004573071834350074001818321593686) + self.assertEqual(pointH.y, 6935684722522267618220753829624209639984359598320562595061366101608187623111) + + def test_sizes(self): + self.assertEqual(self.pointG.size_in_bits(), 255) + self.assertEqual(self.pointG.size_in_bytes(), 32) + + +class TestEccKey_Ed25519(unittest.TestCase): + + def test_private_key(self): + seed = unhexlify("9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60") + Px = 38815646466658113194383306759739515082307681141926459231621296960732224964046 + Py = 11903303657706407974989296177215005343713679411332034699907763981919547054807 + + key = EccKey(curve="Ed25519", seed=seed) + self.assertEqual(key.seed, seed) + self.assertEqual(key.d, 36144925721603087658594284515452164870581325872720374094707712194495455132720) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, Px) + self.assertEqual(key.pointQ.y, Py) + + point = EccPoint(Px, Py, "ed25519") + key = EccKey(curve="Ed25519", seed=seed, point=point) + self.assertEqual(key.d, 36144925721603087658594284515452164870581325872720374094707712194495455132720) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="ed25519", seed=seed) + + # Must not accept d parameter + self.assertRaises(ValueError, EccKey, curve="ed25519", d=1) + + def test_public_key(self): + point = EccPoint(_curves['ed25519'].Gx, _curves['ed25519'].Gy, curve='ed25519') + key = EccKey(curve="ed25519", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + priv_key = EccKey(curve="ed25519", seed=b'H'*32) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_seed(self): + self.assertRaises(ValueError, lambda: EccKey(curve="ed25519", seed=b'H' * 31)) + + def test_equality(self): + private_key = ECC.construct(seed=b'H'*32, curve="Ed25519") + private_key2 = ECC.construct(seed=b'H'*32, curve="ed25519") + private_key3 = ECC.construct(seed=b'C'*32, curve="Ed25519") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + +class TestEccModule_Ed25519(unittest.TestCase): + + def test_generate(self): + key = ECC.generate(curve="Ed25519") + self.assertTrue(key.has_private()) + point = EccPoint(_curves['Ed25519'].Gx, _curves['Ed25519'].Gy, curve="Ed25519") * key.d + self.assertEqual(key.pointQ, point) + + # Always random + key2 = ECC.generate(curve="Ed25519") + self.assertNotEqual(key, 
key2) + + # Other names + ECC.generate(curve="Ed25519") + + # Random source + key1 = ECC.generate(curve="Ed25519", randfunc=SHAKE128.new().read) + key2 = ECC.generate(curve="Ed25519", randfunc=SHAKE128.new().read) + self.assertEqual(key1, key2) + + def test_construct(self): + seed = unhexlify("9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60") + Px = 38815646466658113194383306759739515082307681141926459231621296960732224964046 + Py = 11903303657706407974989296177215005343713679411332034699907763981919547054807 + d = 36144925721603087658594284515452164870581325872720374094707712194495455132720 + point = EccPoint(Px, Py, curve="Ed25519") + + # Private key only + key = ECC.construct(curve="Ed25519", seed=seed) + self.assertEqual(key.pointQ, point) + self.assertTrue(key.has_private()) + + # Public key only + key = ECC.construct(curve="Ed25519", point_x=Px, point_y=Py) + self.assertEqual(key.pointQ, point) + self.assertFalse(key.has_private()) + + # Private and public key + key = ECC.construct(curve="Ed25519", seed=seed, point_x=Px, point_y=Py) + self.assertEqual(key.pointQ, point) + self.assertTrue(key.has_private()) + + # Other names + key = ECC.construct(curve="ed25519", seed=seed) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['ed25519'].Gx, point_y=_curves['ed25519'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="Ed25519", **coord) + self.assertRaises(ValueError, ECC.construct, curve="Ed25519", d=2, **coordG) + self.assertRaises(ValueError, ECC.construct, curve="Ed25519", seed=b'H'*31) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestEccPoint_Ed25519) + tests += list_test_cases(TestEccKey_Ed25519) + tests += list_test_cases(TestEccModule_Ed25519) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ECC_448.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ECC_448.py new file mode 100644 index 00000000..68bcaa3c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ECC_448.py @@ -0,0 +1,327 @@ +# =================================================================== +# +# Copyright (c) 2022, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors + +from Crypto.PublicKey import ECC +from Crypto.PublicKey.ECC import EccPoint, _curves, EccKey + +from Crypto.Math.Numbers import Integer + +from Crypto.Hash import SHAKE128 + + +class TestEccPoint_Ed448(unittest.TestCase): + + Gxy = {"x": 0x4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e, + "y": 0x693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14} + + G2xy = {"x": 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa955555555555555555555555555555555555555555555555555555555, + "y": 0xae05e9634ad7048db359d6205086c2b0036ed7a035884dd7b7e36d728ad8c4b80d6565833a2a3098bbbcb2bed1cda06bdaeafbcdea9386ed} + + G3xy = {"x": 0x865886b9108af6455bd64316cb6943332241b8b8cda82c7e2ba077a4a3fcfe8daa9cbf7f6271fd6e862b769465da8575728173286ff2f8f, + "y": 0xe005a8dbd5125cf706cbda7ad43aa6449a4a8d952356c3b9fce43c82ec4e1d58bb3a331bdb6767f0bffa9a68fed02dafb822ac13588ed6fc} + + pointG = EccPoint(Gxy['x'], Gxy['y'], curve="Ed448") + pointG2 = EccPoint(G2xy['x'], G2xy['y'], curve="Ed448") + pointG3 = EccPoint(G3xy['x'], G3xy['y'], curve="Ed448") + + def test_init_xy(self): + EccPoint(self.Gxy['x'], self.Gxy['y'], curve="Ed448") + + # Neutral point + pai = EccPoint(0, 1, curve="Ed448") + self.assertEqual(pai.x, 0) + self.assertEqual(pai.y, 1) + self.assertEqual(pai.xy, (0, 1)) + + # G + bp = self.pointG.copy() + self.assertEqual(bp.x, 0x4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e) + self.assertEqual(bp.y, 0x693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14) + self.assertEqual(bp.xy, (bp.x, bp.y)) + + # 2G + bp2 = self.pointG2.copy() + self.assertEqual(bp2.x, 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa955555555555555555555555555555555555555555555555555555555) + self.assertEqual(bp2.y, 0xae05e9634ad7048db359d6205086c2b0036ed7a035884dd7b7e36d728ad8c4b80d6565833a2a3098bbbcb2bed1cda06bdaeafbcdea9386ed) + self.assertEqual(bp2.xy, (bp2.x, bp2.y)) + + # 5G + EccPoint(x=0x7a9f9335a48dcb0e2ba7601eedb50def80cbcf728562ada756d761e8958812808bc0d57a920c3c96f07b2d8cefc6f950d0a99d1092030034, + y=0xadfd751a2517edd3b9109ce4fd580ade260ca1823ab18fced86551f7b698017127d7a4ee59d2b33c58405512881f225443b4731472f435eb, + curve="Ed448") + + # Catch if point is not on the curve + self.assertRaises(ValueError, EccPoint, 34, 35, curve="Ed448") + + def test_set(self): + pointW = EccPoint(0, 1, curve="Ed448") + pointW.set(self.pointG) + self.assertEqual(pointW.x, self.pointG.x) + self.assertEqual(pointW.y, self.pointG.y) + + def test_copy(self): + pointW = self.pointG.copy() + self.assertEqual(pointW.x, self.pointG.x) + 
self.assertEqual(pointW.y, self.pointG.y) + + def test_equal(self): + pointH = self.pointG.copy() + pointI = self.pointG2.copy() + self.assertEqual(self.pointG, pointH) + self.assertNotEqual(self.pointG, pointI) + + def test_pai(self): + pai = EccPoint(0, 1, curve="Ed448") + self.failUnless(pai.is_point_at_infinity()) + self.assertEqual(pai, pai.point_at_infinity()) + + def test_negate(self): + negG = -self.pointG + sum = self.pointG + negG + self.failUnless(sum.is_point_at_infinity()) + + def test_addition(self): + self.assertEqual(self.pointG + self.pointG2, self.pointG3) + self.assertEqual(self.pointG2 + self.pointG, self.pointG3) + self.assertEqual(self.pointG2 + self.pointG.point_at_infinity(), self.pointG2) + self.assertEqual(self.pointG.point_at_infinity() + self.pointG2, self.pointG2) + + G5 = self.pointG2 + self.pointG3 + self.assertEqual(G5.x, 0x7a9f9335a48dcb0e2ba7601eedb50def80cbcf728562ada756d761e8958812808bc0d57a920c3c96f07b2d8cefc6f950d0a99d1092030034) + self.assertEqual(G5.y, 0xadfd751a2517edd3b9109ce4fd580ade260ca1823ab18fced86551f7b698017127d7a4ee59d2b33c58405512881f225443b4731472f435eb) + + def test_inplace_addition(self): + pointH = self.pointG.copy() + pointH += self.pointG + self.assertEqual(pointH, self.pointG2) + pointH += self.pointG + self.assertEqual(pointH, self.pointG3) + pointH += self.pointG.point_at_infinity() + self.assertEqual(pointH, self.pointG3) + + def test_doubling(self): + pointH = self.pointG.copy() + pointH.double() + self.assertEqual(pointH.x, self.pointG2.x) + self.assertEqual(pointH.y, self.pointG2.y) + + # 2*0 + pai = self.pointG.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + def test_scalar_multiply(self): + d = 0 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0) + self.assertEqual(pointH.y, 1) + + d = 1 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG.x) + self.assertEqual(pointH.y, self.pointG.y) + + d = 2 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG2.x) + self.assertEqual(pointH.y, self.pointG2.y) + + d = 3 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG3.x) + self.assertEqual(pointH.y, self.pointG3.y) + + d = 4 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0x49dcbc5c6c0cce2c1419a17226f929ea255a09cf4e0891c693fda4be70c74cc301b7bdf1515dd8ba21aee1798949e120e2ce42ac48ba7f30) + self.assertEqual(pointH.y, 0xd49077e4accde527164b33a5de021b979cb7c02f0457d845c90dc3227b8a5bc1c0d8f97ea1ca9472b5d444285d0d4f5b32e236f86de51839) + + d = 5 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0x7a9f9335a48dcb0e2ba7601eedb50def80cbcf728562ada756d761e8958812808bc0d57a920c3c96f07b2d8cefc6f950d0a99d1092030034) + self.assertEqual(pointH.y, 0xadfd751a2517edd3b9109ce4fd580ade260ca1823ab18fced86551f7b698017127d7a4ee59d2b33c58405512881f225443b4731472f435eb) + + d = 10 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0x77486f9d19f6411cdd35d30d1c3235f71936452c787e5c034134d3e8172278aca61622bc805761ce3dab65118a0122d73b403165d0ed303d) + self.assertEqual(pointH.y, 0x4d2fea0b026be11024f1f0fe7e94e618e8ac17381ada1d1bf7ee293a68ff5d0bf93c1997dc1aabdc0c7e6381428d85b6b1954a89e4cddf67) + + d = 20 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0x3c236422354600fe6763defcc1503737e4ed89e262d0de3ec1e552020f2a56fe3b9e1e012d021072598c3c2821e18268bb8fb8339c0d1216) + self.assertEqual(pointH.y, 0xb555b9721f630ccb05fc466de4c74d3d2781e69eca88e1b040844f04cab39fd946f91c688fa42402bb38fb9c3e61231017020b219b4396e1) + + d = 255 + pointH = d * 
self.pointG + self.assertEqual(pointH.x, 0xbeb7f8388b05cd9c1aa2e3c0dcf31e2b563659361826225390e7748654f627d5c36cbe627e9019936b56d15d4dad7c337c09bac64ff4197f) + self.assertEqual(pointH.y, 0x1e37312b2dd4e9440c43c6e7725fc4fa3d11e582d4863f1d018e28f50c0efdb1f53f9b01ada7c87fa162b1f0d72401015d57613d25f1ad53) + + d = 256 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0xf19c34feb56730e3e2be761ac0a2a2b24853b281dda019fc35a5ab58e3696beb39609ae756b0d20fb7ccf0d79aaf5f3bca2e4fdb25bfac1c) + self.assertEqual(pointH.y, 0x3beb69cc9111bffcaddc61d363ce6fe5dd44da4aadce78f52e92e985d5442344ced72c4611ed0daac9f4f5661eab73d7a12d25ce8a30241e) + + def test_sizes(self): + self.assertEqual(self.pointG.size_in_bits(), 448) + self.assertEqual(self.pointG.size_in_bytes(), 56) + + +class TestEccKey_Ed448(unittest.TestCase): + + def test_private_key(self): + seed = unhexlify("4adf5d37ac6785e83e99a924f92676d366a78690af59c92b6bdf14f9cdbcf26fdad478109607583d633b60078d61d51d81b7509c5433b0d4c9") + Px = 0x72a01eea003a35f9ac44231dc4aae2a382f351d80bf32508175b0855edcf389aa2bbf308dd961ce361a6e7c2091bc78957f6ebcf3002a617 + Py = 0x9e0d08d84586e9aeefecacb41d049b831f1a3ee0c3eada63e34557b30702b50ab59fb372feff7c30b8cbb7dd51afbe88444ec56238722ec1 + + key = EccKey(curve="Ed448", seed=seed) + self.assertEqual(key.seed, seed) + self.assertEqual(key.d, 0xb07cf179604f83433186e5178760c759c15125ee54ff6f8dcde46e872b709ac82ed0bd0a4e036d774034dcb18a9fb11894657a1485895f80) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, Px) + self.assertEqual(key.pointQ.y, Py) + + point = EccPoint(Px, Py, "ed448") + key = EccKey(curve="Ed448", seed=seed, point=point) + self.assertEqual(key.d, 0xb07cf179604f83433186e5178760c759c15125ee54ff6f8dcde46e872b709ac82ed0bd0a4e036d774034dcb18a9fb11894657a1485895f80) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="ed448", seed=seed) + + # Must not accept d parameter + self.assertRaises(ValueError, EccKey, curve="ed448", d=1) + + def test_public_key(self): + point = EccPoint(_curves['ed448'].Gx, _curves['ed448'].Gy, curve='ed448') + key = EccKey(curve="ed448", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + priv_key = EccKey(curve="ed448", seed=b'H'*57) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_seed(self): + self.assertRaises(ValueError, lambda: EccKey(curve="ed448", seed=b'H' * 56)) + + def test_equality(self): + private_key = ECC.construct(seed=b'H'*57, curve="Ed448") + private_key2 = ECC.construct(seed=b'H'*57, curve="ed448") + private_key3 = ECC.construct(seed=b'C'*57, curve="Ed448") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + +class TestEccModule_Ed448(unittest.TestCase): + + def test_generate(self): + key = ECC.generate(curve="Ed448") + self.assertTrue(key.has_private()) + point = EccPoint(_curves['Ed448'].Gx, _curves['Ed448'].Gy, curve="Ed448") * key.d + self.assertEqual(key.pointQ, point) + + # Always random + key2 = ECC.generate(curve="Ed448") + self.assertNotEqual(key, key2) + + # Other names + ECC.generate(curve="Ed448") 
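+
+        # Note (our annotation): the determinism check below relies on
+        # Ed448 key generation consuming bytes only from randfunc, so two
+        # freshly created, identical SHAKE128 streams must produce the
+        # same key pair.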
+ + # Random source + key1 = ECC.generate(curve="Ed448", randfunc=SHAKE128.new().read) + key2 = ECC.generate(curve="Ed448", randfunc=SHAKE128.new().read) + self.assertEqual(key1, key2) + + def test_construct(self): + seed = unhexlify("4adf5d37ac6785e83e99a924f92676d366a78690af59c92b6bdf14f9cdbcf26fdad478109607583d633b60078d61d51d81b7509c5433b0d4c9") + Px = 0x72a01eea003a35f9ac44231dc4aae2a382f351d80bf32508175b0855edcf389aa2bbf308dd961ce361a6e7c2091bc78957f6ebcf3002a617 + Py = 0x9e0d08d84586e9aeefecacb41d049b831f1a3ee0c3eada63e34557b30702b50ab59fb372feff7c30b8cbb7dd51afbe88444ec56238722ec1 + d = 0xb07cf179604f83433186e5178760c759c15125ee54ff6f8dcde46e872b709ac82ed0bd0a4e036d774034dcb18a9fb11894657a1485895f80 + point = EccPoint(Px, Py, curve="Ed448") + + # Private key only + key = ECC.construct(curve="Ed448", seed=seed) + self.assertEqual(key.pointQ, point) + self.assertTrue(key.has_private()) + + # Public key only + key = ECC.construct(curve="Ed448", point_x=Px, point_y=Py) + self.assertEqual(key.pointQ, point) + self.assertFalse(key.has_private()) + + # Private and public key + key = ECC.construct(curve="Ed448", seed=seed, point_x=Px, point_y=Py) + self.assertEqual(key.pointQ, point) + self.assertTrue(key.has_private()) + + # Other names + key = ECC.construct(curve="ed448", seed=seed) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['ed448'].Gx, point_y=_curves['ed448'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="Ed448", **coord) + self.assertRaises(ValueError, ECC.construct, curve="Ed448", d=2, **coordG) + self.assertRaises(ValueError, ECC.construct, curve="Ed448", seed=b'H'*58) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestEccPoint_Ed448) + tests += list_test_cases(TestEccKey_Ed448) + tests += list_test_cases(TestEccModule_Ed448) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ECC_NIST.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ECC_NIST.py new file mode 100644 index 00000000..fc13f2d6 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ECC_NIST.py @@ -0,0 +1,1389 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors + +from Crypto.PublicKey import ECC +from Crypto.PublicKey.ECC import EccPoint, _curves, EccKey + +from Crypto.Math.Numbers import Integer + + +class TestEccPoint(unittest.TestCase): + + def test_mix(self): + + p1 = ECC.generate(curve='P-256').pointQ + p2 = ECC.generate(curve='P-384').pointQ + + try: + p1 + p2 + assert(False) + except ValueError as e: + assert "not on the same curve" in str(e) + + try: + p1 += p2 + assert(False) + except ValueError as e: + assert "not on the same curve" in str(e) + + def test_repr(self): + p1 = ECC.construct(curve='P-256', + d=75467964919405407085864614198393977741148485328036093939970922195112333446269, + point_x=20573031766139722500939782666697015100983491952082159880539639074939225934381, + point_y=108863130203210779921520632367477406025152638284581252625277850513266505911389) + self.assertEqual(repr(p1), "EccKey(curve='NIST P-256', point_x=20573031766139722500939782666697015100983491952082159880539639074939225934381, point_y=108863130203210779921520632367477406025152638284581252625277850513266505911389, d=75467964919405407085864614198393977741148485328036093939970922195112333446269)") + + +class TestEccPoint_NIST_P192(unittest.TestCase): + """Tests defined in section 4.1 of https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.9073&rep=rep1&type=pdf""" + + pointS = EccPoint( + 0xd458e7d127ae671b0c330266d246769353a012073e97acf8, + 0x325930500d851f336bddc050cf7fb11b5673a1645086df3b, + curve='p192') + + pointT = EccPoint( + 0xf22c4395213e9ebe67ddecdd87fdbd01be16fb059b9753a4, + 0x264424096af2b3597796db48f8dfb41fa9cecc97691a9c79, + curve='p192') + + def test_set(self): + pointW = EccPoint(0, 0) + pointW.set(self.pointS) + self.assertEqual(pointW, self.pointS) + + def test_copy(self): + pointW = self.pointS.copy() + self.assertEqual(pointW, self.pointS) + pointW.set(self.pointT) + self.assertEqual(pointW, self.pointT) + self.assertNotEqual(self.pointS, self.pointT) + + def test_negate(self): + negS = -self.pointS + sum = self.pointS + negS + self.assertEqual(sum, self.pointS.point_at_infinity()) + + def test_addition(self): + pointRx = 0x48e1e4096b9b8e5ca9d0f1f077b8abf58e843894de4d0290 + pointRy = 0x408fa77c797cd7dbfb16aa48a3648d3d63c94117d7b6aa4b + + pointR = self.pointS + self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS + pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai + self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai + pai + self.assertEqual(pointR, pai) + + def test_inplace_addition(self): + pointRx = 0x48e1e4096b9b8e5ca9d0f1f077b8abf58e843894de4d0290 + pointRy = 0x408fa77c797cd7dbfb16aa48a3648d3d63c94117d7b6aa4b + + pointR = 
self.pointS.copy() + pointR += self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS.copy() + pointR += pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai.copy() + pointR += self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai.copy() + pointR += pai + self.assertEqual(pointR, pai) + + def test_doubling(self): + pointRx = 0x30c5bc6b8c7da25354b373dc14dd8a0eba42d25a3f6e6962 + pointRy = 0x0dde14bc4249a721c407aedbf011e2ddbbcb2968c9d889cf + + pointR = self.pointS.copy() + pointR.double() + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 2*0 + pai = self.pointS.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + # S + S + pointR = self.pointS.copy() + pointR += pointR + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_scalar_multiply(self): + d = 0xa78a236d60baec0c5dd41b33a542463a8255391af64c74ee + pointRx = 0x1faee4205a4f669d2d0a8f25e3bcec9a62a6952965bf6d31 + pointRy = 0x5ff2cdfa508a2581892367087c696f179e7a4d7e8260fb06 + + pointR = self.pointS * d + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 0*S + pai = self.pointS.point_at_infinity() + pointR = self.pointS * 0 + self.assertEqual(pointR, pai) + + # -1*S + self.assertRaises(ValueError, lambda: self.pointS * -1) + + # Reverse order + pointR = d * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pointR = Integer(d) * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_joint_scalar_multiply(self): + d = 0xa78a236d60baec0c5dd41b33a542463a8255391af64c74ee + e = 0xc4be3d53ec3089e71e4de8ceab7cce889bc393cd85b972bc + pointRx = 0x019f64eed8fa9b72b7dfea82c17c9bfa60ecb9e1778b5bde + pointRy = 0x16590c5fcd8655fa4ced33fb800e2a7e3c61f35d83503644 + + pointR = self.pointS * d + self.pointT * e + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_sizes(self): + self.assertEqual(self.pointS.size_in_bits(), 192) + self.assertEqual(self.pointS.size_in_bytes(), 24) + + +class TestEccPoint_NIST_P224(unittest.TestCase): + """Tests defined in section 4.2 of https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.9073&rep=rep1&type=pdf""" + + pointS = EccPoint( + 0x6eca814ba59a930843dc814edd6c97da95518df3c6fdf16e9a10bb5b, + 0xef4b497f0963bc8b6aec0ca0f259b89cd80994147e05dc6b64d7bf22, + curve='p224') + + pointT = EccPoint( + 0xb72b25aea5cb03fb88d7e842002969648e6ef23c5d39ac903826bd6d, + 0xc42a8a4d34984f0b71b5b4091af7dceb33ea729c1a2dc8b434f10c34, + curve='p224') + + def test_set(self): + pointW = EccPoint(0, 0) + pointW.set(self.pointS) + self.assertEqual(pointW, self.pointS) + + def test_copy(self): + pointW = self.pointS.copy() + self.assertEqual(pointW, self.pointS) + pointW.set(self.pointT) + self.assertEqual(pointW, self.pointT) + self.assertNotEqual(self.pointS, self.pointT) + + def test_negate(self): + negS = -self.pointS + sum = self.pointS + negS + self.assertEqual(sum, self.pointS.point_at_infinity()) + + def test_addition(self): + pointRx = 0x236f26d9e84c2f7d776b107bd478ee0a6d2bcfcaa2162afae8d2fd15 + pointRy = 0xe53cc0a7904ce6c3746f6a97471297a0b7d5cdf8d536ae25bb0fda70 + + pointR = self.pointS + self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR 
= self.pointS + pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai + self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai + pai + self.assertEqual(pointR, pai) + + def test_inplace_addition(self): + pointRx = 0x236f26d9e84c2f7d776b107bd478ee0a6d2bcfcaa2162afae8d2fd15 + pointRy = 0xe53cc0a7904ce6c3746f6a97471297a0b7d5cdf8d536ae25bb0fda70 + + pointR = self.pointS.copy() + pointR += self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS.copy() + pointR += pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai.copy() + pointR += self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai.copy() + pointR += pai + self.assertEqual(pointR, pai) + + def test_doubling(self): + pointRx = 0xa9c96f2117dee0f27ca56850ebb46efad8ee26852f165e29cb5cdfc7 + pointRy = 0xadf18c84cf77ced4d76d4930417d9579207840bf49bfbf5837dfdd7d + + pointR = self.pointS.copy() + pointR.double() + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 2*0 + pai = self.pointS.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + # S + S + pointR = self.pointS.copy() + pointR += pointR + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_scalar_multiply(self): + d = 0xa78ccc30eaca0fcc8e36b2dd6fbb03df06d37f52711e6363aaf1d73b + pointRx = 0x96a7625e92a8d72bff1113abdb95777e736a14c6fdaacc392702bca4 + pointRy = 0x0f8e5702942a3c5e13cd2fd5801915258b43dfadc70d15dbada3ed10 + + pointR = self.pointS * d + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 0*S + pai = self.pointS.point_at_infinity() + pointR = self.pointS * 0 + self.assertEqual(pointR, pai) + + # -1*S + self.assertRaises(ValueError, lambda: self.pointS * -1) + + # Reverse order + pointR = d * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pointR = Integer(d) * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_joing_scalar_multiply(self): + d = 0xa78ccc30eaca0fcc8e36b2dd6fbb03df06d37f52711e6363aaf1d73b + e = 0x54d549ffc08c96592519d73e71e8e0703fc8177fa88aa77a6ed35736 + pointRx = 0xdbfe2958c7b2cda1302a67ea3ffd94c918c5b350ab838d52e288c83e + pointRy = 0x2f521b83ac3b0549ff4895abcc7f0c5a861aacb87acbc5b8147bb18b + + pointR = self.pointS * d + self.pointT * e + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_sizes(self): + self.assertEqual(self.pointS.size_in_bits(), 224) + self.assertEqual(self.pointS.size_in_bytes(), 28) + + +class TestEccPoint_NIST_P256(unittest.TestCase): + """Tests defined in section 4.3 of https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.9073&rep=rep1&type=pdf""" + + pointS = EccPoint( + 0xde2444bebc8d36e682edd27e0f271508617519b3221a8fa0b77cab3989da97c9, + 0xc093ae7ff36e5380fc01a5aad1e66659702de80f53cec576b6350b243042a256) + + pointT = EccPoint( + 0x55a8b00f8da1d44e62f6b3b25316212e39540dc861c89575bb8cf92e35e0986b, + 0x5421c3209c2d6c704835d82ac4c3dd90f61a8a52598b9e7ab656e9d8c8b24316) + + def test_set(self): + pointW = EccPoint(0, 0) + pointW.set(self.pointS) + self.assertEqual(pointW, self.pointS) + + def test_copy(self): + pointW = self.pointS.copy() + self.assertEqual(pointW, self.pointS) + pointW.set(self.pointT) + self.assertEqual(pointW, self.pointT) + self.assertNotEqual(self.pointS, self.pointT) + + 
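+    # Negation on a short-Weierstrass curve keeps x and flips the sign of
+    # y mod p, so S + (-S) must be the point at infinity -- the group
+    # identity that test_negate checks for below.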
def test_negate(self): + negS = -self.pointS + sum = self.pointS + negS + self.assertEqual(sum, self.pointS.point_at_infinity()) + + def test_addition(self): + pointRx = 0x72b13dd4354b6b81745195e98cc5ba6970349191ac476bd4553cf35a545a067e + pointRy = 0x8d585cbb2e1327d75241a8a122d7620dc33b13315aa5c9d46d013011744ac264 + + pointR = self.pointS + self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS + pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai + self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai + pai + self.assertEqual(pointR, pai) + + def test_inplace_addition(self): + pointRx = 0x72b13dd4354b6b81745195e98cc5ba6970349191ac476bd4553cf35a545a067e + pointRy = 0x8d585cbb2e1327d75241a8a122d7620dc33b13315aa5c9d46d013011744ac264 + + pointR = self.pointS.copy() + pointR += self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS.copy() + pointR += pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai.copy() + pointR += self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai.copy() + pointR += pai + self.assertEqual(pointR, pai) + + def test_doubling(self): + pointRx = 0x7669e6901606ee3ba1a8eef1e0024c33df6c22f3b17481b82a860ffcdb6127b0 + pointRy = 0xfa878162187a54f6c39f6ee0072f33de389ef3eecd03023de10ca2c1db61d0c7 + + pointR = self.pointS.copy() + pointR.double() + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 2*0 + pai = self.pointS.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + # S + S + pointR = self.pointS.copy() + pointR += pointR + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_scalar_multiply(self): + d = 0xc51e4753afdec1e6b6c6a5b992f43f8dd0c7a8933072708b6522468b2ffb06fd + pointRx = 0x51d08d5f2d4278882946d88d83c97d11e62becc3cfc18bedacc89ba34eeca03f + pointRy = 0x75ee68eb8bf626aa5b673ab51f6e744e06f8fcf8a6c0cf3035beca956a7b41d5 + + pointR = self.pointS * d + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 0*S + pai = self.pointS.point_at_infinity() + pointR = self.pointS * 0 + self.assertEqual(pointR, pai) + + # -1*S + self.assertRaises(ValueError, lambda: self.pointS * -1) + + # Reverse order + pointR = d * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pointR = Integer(d) * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_joing_scalar_multiply(self): + d = 0xc51e4753afdec1e6b6c6a5b992f43f8dd0c7a8933072708b6522468b2ffb06fd + e = 0xd37f628ece72a462f0145cbefe3f0b355ee8332d37acdd83a358016aea029db7 + pointRx = 0xd867b4679221009234939221b8046245efcf58413daacbeff857b8588341f6b8 + pointRy = 0xf2504055c03cede12d22720dad69c745106b6607ec7e50dd35d54bd80f615275 + + pointR = self.pointS * d + self.pointT * e + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_sizes(self): + self.assertEqual(self.pointS.size_in_bits(), 256) + self.assertEqual(self.pointS.size_in_bytes(), 32) + + +class TestEccPoint_NIST_P384(unittest.TestCase): + """Tests defined in section 4.4 of https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.9073&rep=rep1&type=pdf""" + + pointS = EccPoint( + 
0xfba203b81bbd23f2b3be971cc23997e1ae4d89e69cb6f92385dda82768ada415ebab4167459da98e62b1332d1e73cb0e, + 0x5ffedbaefdeba603e7923e06cdb5d0c65b22301429293376d5c6944e3fa6259f162b4788de6987fd59aed5e4b5285e45, + "p384") + + pointT = EccPoint( + 0xaacc05202e7fda6fc73d82f0a66220527da8117ee8f8330ead7d20ee6f255f582d8bd38c5a7f2b40bcdb68ba13d81051, + 0x84009a263fefba7c2c57cffa5db3634d286131afc0fca8d25afa22a7b5dce0d9470da89233cee178592f49b6fecb5092, + "p384") + + def test_set(self): + pointW = EccPoint(0, 0, "p384") + pointW.set(self.pointS) + self.assertEqual(pointW, self.pointS) + + def test_copy(self): + pointW = self.pointS.copy() + self.assertEqual(pointW, self.pointS) + pointW.set(self.pointT) + self.assertEqual(pointW, self.pointT) + self.assertNotEqual(self.pointS, self.pointT) + + def test_negate(self): + negS = -self.pointS + sum = self.pointS + negS + self.assertEqual(sum, self.pointS.point_at_infinity()) + + def test_addition(self): + pointRx = 0x12dc5ce7acdfc5844d939f40b4df012e68f865b89c3213ba97090a247a2fc009075cf471cd2e85c489979b65ee0b5eed + pointRy = 0x167312e58fe0c0afa248f2854e3cddcb557f983b3189b67f21eee01341e7e9fe67f6ee81b36988efa406945c8804a4b0 + + pointR = self.pointS + self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS + pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai + self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai + pai + self.assertEqual(pointR, pai) + + def _test_inplace_addition(self): + pointRx = 0x72b13dd4354b6b81745195e98cc5ba6970349191ac476bd4553cf35a545a067e + pointRy = 0x8d585cbb2e1327d75241a8a122d7620dc33b13315aa5c9d46d013011744ac264 + + pointR = self.pointS.copy() + pointR += self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS.copy() + pointR += pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai.copy() + pointR += self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai.copy() + pointR += pai + self.assertEqual(pointR, pai) + + def test_doubling(self): + pointRx = 0x2a2111b1e0aa8b2fc5a1975516bc4d58017ff96b25e1bdff3c229d5fac3bacc319dcbec29f9478f42dee597b4641504c + pointRy = 0xfa2e3d9dc84db8954ce8085ef28d7184fddfd1344b4d4797343af9b5f9d837520b450f726443e4114bd4e5bdb2f65ddd + + pointR = self.pointS.copy() + pointR.double() + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 2*0 + pai = self.pointS.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + # S + S + pointR = self.pointS.copy() + pointR += pointR + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_scalar_multiply(self): + d = 0xa4ebcae5a665983493ab3e626085a24c104311a761b5a8fdac052ed1f111a5c44f76f45659d2d111a61b5fdd97583480 + pointRx = 0xe4f77e7ffeb7f0958910e3a680d677a477191df166160ff7ef6bb5261f791aa7b45e3e653d151b95dad3d93ca0290ef2 + pointRy = 0xac7dee41d8c5f4a7d5836960a773cfc1376289d3373f8cf7417b0c6207ac32e913856612fc9ff2e357eb2ee05cf9667f + + pointR = self.pointS * d + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 0*S + pai = self.pointS.point_at_infinity() + pointR = self.pointS * 0 + self.assertEqual(pointR, pai) + + # -1*S + self.assertRaises(ValueError, lambda: self.pointS * -1) + + def test_joing_scalar_multiply(self): + d = 
0xa4ebcae5a665983493ab3e626085a24c104311a761b5a8fdac052ed1f111a5c44f76f45659d2d111a61b5fdd97583480 + e = 0xafcf88119a3a76c87acbd6008e1349b29f4ba9aa0e12ce89bcfcae2180b38d81ab8cf15095301a182afbc6893e75385d + pointRx = 0x917ea28bcd641741ae5d18c2f1bd917ba68d34f0f0577387dc81260462aea60e2417b8bdc5d954fc729d211db23a02dc + pointRy = 0x1a29f7ce6d074654d77b40888c73e92546c8f16a5ff6bcbd307f758d4aee684beff26f6742f597e2585c86da908f7186 + + pointR = self.pointS * d + self.pointT * e + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_sizes(self): + self.assertEqual(self.pointS.size_in_bits(), 384) + self.assertEqual(self.pointS.size_in_bytes(), 48) + + +class TestEccPoint_NIST_P521(unittest.TestCase): + """Tests defined in section 4.5 of https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.9073&rep=rep1&type=pdf""" + + pointS = EccPoint( + 0x000001d5c693f66c08ed03ad0f031f937443458f601fd098d3d0227b4bf62873af50740b0bb84aa157fc847bcf8dc16a8b2b8bfd8e2d0a7d39af04b089930ef6dad5c1b4, + 0x00000144b7770963c63a39248865ff36b074151eac33549b224af5c8664c54012b818ed037b2b7c1a63ac89ebaa11e07db89fcee5b556e49764ee3fa66ea7ae61ac01823, + "p521") + + pointT = EccPoint( + 0x000000f411f2ac2eb971a267b80297ba67c322dba4bb21cec8b70073bf88fc1ca5fde3ba09e5df6d39acb2c0762c03d7bc224a3e197feaf760d6324006fe3be9a548c7d5, + 0x000001fdf842769c707c93c630df6d02eff399a06f1b36fb9684f0b373ed064889629abb92b1ae328fdb45534268384943f0e9222afe03259b32274d35d1b9584c65e305, + "p521") + + def test_set(self): + pointW = EccPoint(0, 0) + pointW.set(self.pointS) + self.assertEqual(pointW, self.pointS) + + def test_copy(self): + pointW = self.pointS.copy() + self.assertEqual(pointW, self.pointS) + pointW.set(self.pointT) + self.assertEqual(pointW, self.pointT) + self.assertNotEqual(self.pointS, self.pointT) + + def test_negate(self): + negS = -self.pointS + sum = self.pointS + negS + self.assertEqual(sum, self.pointS.point_at_infinity()) + + def test_addition(self): + pointRx = 0x000001264ae115ba9cbc2ee56e6f0059e24b52c8046321602c59a339cfb757c89a59c358a9a8e1f86d384b3f3b255ea3f73670c6dc9f45d46b6a196dc37bbe0f6b2dd9e9 + pointRy = 0x00000062a9c72b8f9f88a271690bfa017a6466c31b9cadc2fc544744aeb817072349cfddc5ad0e81b03f1897bd9c8c6efbdf68237dc3bb00445979fb373b20c9a967ac55 + + pointR = self.pointS + self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS + pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai + self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai + pai + self.assertEqual(pointR, pai) + + def test_inplace_addition(self): + pointRx = 0x000001264ae115ba9cbc2ee56e6f0059e24b52c8046321602c59a339cfb757c89a59c358a9a8e1f86d384b3f3b255ea3f73670c6dc9f45d46b6a196dc37bbe0f6b2dd9e9 + pointRy = 0x00000062a9c72b8f9f88a271690bfa017a6466c31b9cadc2fc544744aeb817072349cfddc5ad0e81b03f1897bd9c8c6efbdf68237dc3bb00445979fb373b20c9a967ac55 + + pointR = self.pointS.copy() + pointR += self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS.copy() + pointR += pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai.copy() + pointR += self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai.copy() + pointR += pai + self.assertEqual(pointR, pai) + + def test_doubling(self): + pointRx = 
0x0000012879442f2450c119e7119a5f738be1f1eba9e9d7c6cf41b325d9ce6d643106e9d61124a91a96bcf201305a9dee55fa79136dc700831e54c3ca4ff2646bd3c36bc6 + pointRy = 0x0000019864a8b8855c2479cbefe375ae553e2393271ed36fadfc4494fc0583f6bd03598896f39854abeae5f9a6515a021e2c0eef139e71de610143f53382f4104dccb543 + + pointR = self.pointS.copy() + pointR.double() + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 2*0 + pai = self.pointS.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + # S + S + pointR = self.pointS.copy() + pointR += pointR + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_scalar_multiply(self): + d = 0x000001eb7f81785c9629f136a7e8f8c674957109735554111a2a866fa5a166699419bfa9936c78b62653964df0d6da940a695c7294d41b2d6600de6dfcf0edcfc89fdcb1 + pointRx = 0x00000091b15d09d0ca0353f8f96b93cdb13497b0a4bb582ae9ebefa35eee61bf7b7d041b8ec34c6c00c0c0671c4ae063318fb75be87af4fe859608c95f0ab4774f8c95bb + pointRy = 0x00000130f8f8b5e1abb4dd94f6baaf654a2d5810411e77b7423965e0c7fd79ec1ae563c207bd255ee9828eb7a03fed565240d2cc80ddd2cecbb2eb50f0951f75ad87977f + + pointR = self.pointS * d + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 0*S + pai = self.pointS.point_at_infinity() + pointR = self.pointS * 0 + self.assertEqual(pointR, pai) + + # -1*S + self.assertRaises(ValueError, lambda: self.pointS * -1) + + def test_joing_scalar_multiply(self): + d = 0x000001eb7f81785c9629f136a7e8f8c674957109735554111a2a866fa5a166699419bfa9936c78b62653964df0d6da940a695c7294d41b2d6600de6dfcf0edcfc89fdcb1 + e = 0x00000137e6b73d38f153c3a7575615812608f2bab3229c92e21c0d1c83cfad9261dbb17bb77a63682000031b9122c2f0cdab2af72314be95254de4291a8f85f7c70412e3 + pointRx = 0x0000009d3802642b3bea152beb9e05fba247790f7fc168072d363340133402f2585588dc1385d40ebcb8552f8db02b23d687cae46185b27528adb1bf9729716e4eba653d + pointRy = 0x0000000fe44344e79da6f49d87c1063744e5957d9ac0a505bafa8281c9ce9ff25ad53f8da084a2deb0923e46501de5797850c61b229023dd9cf7fc7f04cd35ebb026d89d + + pointR = self.pointS * d + pointR += self.pointT * e + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_sizes(self): + self.assertEqual(self.pointS.size_in_bits(), 521) + self.assertEqual(self.pointS.size_in_bytes(), 66) + + +class TestEccPoint_PAI_P192(unittest.TestCase): + """Test vectors from http://point-at-infinity.org/ecc/nisttv""" + + curve = _curves['p192'] + pointG = EccPoint(curve.Gx, curve.Gy, "p192") + + +tv_pai = load_test_vectors(("PublicKey", "ECC"), + "point-at-infinity.org-P192.txt", + "P-192 tests from point-at-infinity.org", + {"k": lambda k: int(k), + "x": lambda x: int(x, 16), + "y": lambda y: int(y, 16)}) or [] +for tv in tv_pai: + def new_test(self, scalar=tv.k, x=tv.x, y=tv.y): + result = self.pointG * scalar + self.assertEqual(result.x, x) + self.assertEqual(result.y, y) + setattr(TestEccPoint_PAI_P192, "test_%d" % tv.count, new_test) + + +class TestEccPoint_PAI_P224(unittest.TestCase): + """Test vectors from http://point-at-infinity.org/ecc/nisttv""" + + curve = _curves['p224'] + pointG = EccPoint(curve.Gx, curve.Gy, "p224") + + +tv_pai = load_test_vectors(("PublicKey", "ECC"), + "point-at-infinity.org-P224.txt", + "P-224 tests from point-at-infinity.org", + {"k": lambda k: int(k), + "x": lambda x: int(x, 16), + "y": lambda y: int(y, 16)}) or [] +for tv in tv_pai: + def new_test(self, scalar=tv.k, x=tv.x, y=tv.y): + result = self.pointG * scalar + self.assertEqual(result.x, x) + 
self.assertEqual(result.y, y) + setattr(TestEccPoint_PAI_P224, "test_%d" % tv.count, new_test) + + +class TestEccPoint_PAI_P256(unittest.TestCase): + """Test vectors from http://point-at-infinity.org/ecc/nisttv""" + + curve = _curves['p256'] + pointG = EccPoint(curve.Gx, curve.Gy, "p256") + + +tv_pai = load_test_vectors(("PublicKey", "ECC"), + "point-at-infinity.org-P256.txt", + "P-256 tests from point-at-infinity.org", + {"k": lambda k: int(k), + "x": lambda x: int(x, 16), + "y": lambda y: int(y, 16)}) or [] +for tv in tv_pai: + def new_test(self, scalar=tv.k, x=tv.x, y=tv.y): + result = self.pointG * scalar + self.assertEqual(result.x, x) + self.assertEqual(result.y, y) + setattr(TestEccPoint_PAI_P256, "test_%d" % tv.count, new_test) + + +class TestEccPoint_PAI_P384(unittest.TestCase): + """Test vectors from http://point-at-infinity.org/ecc/nisttv""" + + curve = _curves['p384'] + pointG = EccPoint(curve.Gx, curve.Gy, "p384") + + +tv_pai = load_test_vectors(("PublicKey", "ECC"), + "point-at-infinity.org-P384.txt", + "P-384 tests from point-at-infinity.org", + {"k": lambda k: int(k), + "x": lambda x: int(x, 16), + "y": lambda y: int(y, 16)}) or [] +for tv in tv_pai: + def new_test(self, scalar=tv.k, x=tv.x, y=tv.y): + result = self.pointG * scalar + self.assertEqual(result.x, x) + self.assertEqual(result.y, y) + setattr(TestEccPoint_PAI_P384, "test_%d" % tv.count, new_test) + + +class TestEccPoint_PAI_P521(unittest.TestCase): + """Test vectors from http://point-at-infinity.org/ecc/nisttv""" + + curve = _curves['p521'] + pointG = EccPoint(curve.Gx, curve.Gy, "p521") + + +tv_pai = load_test_vectors(("PublicKey", "ECC"), + "point-at-infinity.org-P521.txt", + "P-521 tests from point-at-infinity.org", + {"k": lambda k: int(k), + "x": lambda x: int(x, 16), + "y": lambda y: int(y, 16)}) or [] +for tv in tv_pai: + def new_test(self, scalar=tv.k, x=tv.x, y=tv.y): + result = self.pointG * scalar + self.assertEqual(result.x, x) + self.assertEqual(result.y, y) + setattr(TestEccPoint_PAI_P521, "test_%d" % tv.count, new_test) + + +class TestEccKey_P192(unittest.TestCase): + + def test_private_key(self): + + key = EccKey(curve="P-192", d=1) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, _curves['p192'].Gx) + self.assertEqual(key.pointQ.y, _curves['p192'].Gy) + + point = EccPoint(_curves['p192'].Gx, _curves['p192'].Gy, curve='P-192') + key = EccKey(curve="P-192", d=1, point=point) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="secp192r1", d=1) + key = EccKey(curve="prime192v1", d=1) + + def test_public_key(self): + + point = EccPoint(_curves['p192'].Gx, _curves['p192'].Gy, curve='P-192') + key = EccKey(curve="P-192", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + + priv_key = EccKey(curve="P-192", d=3) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_curve(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-193", d=1)) + + def test_invalid_d(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-192", d=0)) + self.assertRaises(ValueError, lambda: EccKey(curve="P-192", + d=_curves['p192'].order)) + + def test_equality(self): + + private_key = ECC.construct(d=3, curve="P-192") + private_key2 = ECC.construct(d=3, curve="P-192") + private_key3 = ECC.construct(d=4, 
curve="P-192") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + +class TestEccKey_P224(unittest.TestCase): + + def test_private_key(self): + + key = EccKey(curve="P-224", d=1) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, _curves['p224'].Gx) + self.assertEqual(key.pointQ.y, _curves['p224'].Gy) + + point = EccPoint(_curves['p224'].Gx, _curves['p224'].Gy, curve='P-224') + key = EccKey(curve="P-224", d=1, point=point) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="secp224r1", d=1) + key = EccKey(curve="prime224v1", d=1) + + def test_public_key(self): + + point = EccPoint(_curves['p224'].Gx, _curves['p224'].Gy, curve='P-224') + key = EccKey(curve="P-224", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + + priv_key = EccKey(curve="P-224", d=3) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_curve(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-225", d=1)) + + def test_invalid_d(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-224", d=0)) + self.assertRaises(ValueError, lambda: EccKey(curve="P-224", + d=_curves['p224'].order)) + + def test_equality(self): + + private_key = ECC.construct(d=3, curve="P-224") + private_key2 = ECC.construct(d=3, curve="P-224") + private_key3 = ECC.construct(d=4, curve="P-224") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + +class TestEccKey_P256(unittest.TestCase): + + def test_private_key(self): + + key = EccKey(curve="P-256", d=1) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, _curves['p256'].Gx) + self.assertEqual(key.pointQ.y, _curves['p256'].Gy) + + point = EccPoint(_curves['p256'].Gx, _curves['p256'].Gy) + key = EccKey(curve="P-256", d=1, point=point) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="secp256r1", d=1) + key = EccKey(curve="prime256v1", d=1) + + # Must not accept d parameter + self.assertRaises(ValueError, EccKey, curve="p256", seed=b'H'*32) + + def test_public_key(self): + + point = EccPoint(_curves['p256'].Gx, _curves['p256'].Gy) + key = EccKey(curve="P-256", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + + priv_key = EccKey(curve="P-256", d=3) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_curve(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-257", d=1)) + + def test_invalid_d(self): + self.assertRaises(ValueError, 
lambda: EccKey(curve="P-256", d=0)) + self.assertRaises(ValueError, lambda: EccKey(curve="P-256", d=_curves['p256'].order)) + + def test_equality(self): + + private_key = ECC.construct(d=3, curve="P-256") + private_key2 = ECC.construct(d=3, curve="P-256") + private_key3 = ECC.construct(d=4, curve="P-256") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + +class TestEccKey_P384(unittest.TestCase): + + def test_private_key(self): + + p384 = _curves['p384'] + + key = EccKey(curve="P-384", d=1) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, p384.Gx) + self.assertEqual(key.pointQ.y, p384.Gy) + + point = EccPoint(p384.Gx, p384.Gy, "p384") + key = EccKey(curve="P-384", d=1, point=point) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="p384", d=1) + key = EccKey(curve="secp384r1", d=1) + key = EccKey(curve="prime384v1", d=1) + + def test_public_key(self): + + p384 = _curves['p384'] + point = EccPoint(p384.Gx, p384.Gy, 'p384') + key = EccKey(curve="P-384", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + + priv_key = EccKey(curve="P-384", d=3) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_curve(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-385", d=1)) + + def test_invalid_d(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-384", d=0)) + self.assertRaises(ValueError, lambda: EccKey(curve="P-384", + d=_curves['p384'].order)) + + def test_equality(self): + + private_key = ECC.construct(d=3, curve="P-384") + private_key2 = ECC.construct(d=3, curve="P-384") + private_key3 = ECC.construct(d=4, curve="P-384") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + +class TestEccKey_P521(unittest.TestCase): + + def test_private_key(self): + + p521 = _curves['p521'] + + key = EccKey(curve="P-521", d=1) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, p521.Gx) + self.assertEqual(key.pointQ.y, p521.Gy) + + point = EccPoint(p521.Gx, p521.Gy, "p521") + key = EccKey(curve="P-521", d=1, point=point) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="p521", d=1) + key = EccKey(curve="secp521r1", d=1) + key = EccKey(curve="prime521v1", d=1) + + def test_public_key(self): + + p521 = _curves['p521'] + point = EccPoint(p521.Gx, p521.Gy, 'p521') + key = EccKey(curve="P-384", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + + priv_key = EccKey(curve="P-521", d=3) + pub_key = priv_key.public_key() + 
self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_curve(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-522", d=1)) + + def test_invalid_d(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-521", d=0)) + self.assertRaises(ValueError, lambda: EccKey(curve="P-521", + d=_curves['p521'].order)) + + def test_equality(self): + + private_key = ECC.construct(d=3, curve="P-521") + private_key2 = ECC.construct(d=3, curve="P-521") + private_key3 = ECC.construct(d=4, curve="P-521") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + +class TestEccModule_P192(unittest.TestCase): + + def test_generate(self): + + key = ECC.generate(curve="P-192") + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, EccPoint(_curves['p192'].Gx, + _curves['p192'].Gy, + "P-192") * key.d, + "p192") + + # Other names + ECC.generate(curve="secp192r1") + ECC.generate(curve="prime192v1") + + def test_construct(self): + + key = ECC.construct(curve="P-192", d=1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, _curves['p192'].G) + + key = ECC.construct(curve="P-192", point_x=_curves['p192'].Gx, + point_y=_curves['p192'].Gy) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, _curves['p192'].G) + + # Other names + ECC.construct(curve="p192", d=1) + ECC.construct(curve="secp192r1", d=1) + ECC.construct(curve="prime192v1", d=1) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['p192'].Gx, point_y=_curves['p192'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="P-192", **coord) + self.assertRaises(ValueError, ECC.construct, curve="P-192", d=2, **coordG) + + +class TestEccModule_P224(unittest.TestCase): + + def test_generate(self): + + key = ECC.generate(curve="P-224") + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, EccPoint(_curves['p224'].Gx, + _curves['p224'].Gy, + "P-224") * key.d, + "p224") + + # Other names + ECC.generate(curve="secp224r1") + ECC.generate(curve="prime224v1") + + def test_construct(self): + + key = ECC.construct(curve="P-224", d=1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, _curves['p224'].G) + + key = ECC.construct(curve="P-224", point_x=_curves['p224'].Gx, + point_y=_curves['p224'].Gy) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, _curves['p224'].G) + + # Other names + ECC.construct(curve="p224", d=1) + ECC.construct(curve="secp224r1", d=1) + ECC.construct(curve="prime224v1", d=1) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['p224'].Gx, point_y=_curves['p224'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="P-224", **coord) + self.assertRaises(ValueError, ECC.construct, curve="P-224", d=2, **coordG) + + +class TestEccModule_P256(unittest.TestCase): + + def test_generate(self): + + key = ECC.generate(curve="P-256") + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, EccPoint(_curves['p256'].Gx, + _curves['p256'].Gy) * key.d, + "p256") + + # Other names + ECC.generate(curve="secp256r1") + ECC.generate(curve="prime256v1") + + def 
test_construct(self): + + key = ECC.construct(curve="P-256", d=1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, _curves['p256'].G) + + key = ECC.construct(curve="P-256", point_x=_curves['p256'].Gx, + point_y=_curves['p256'].Gy) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, _curves['p256'].G) + + # Other names + ECC.construct(curve="p256", d=1) + ECC.construct(curve="secp256r1", d=1) + ECC.construct(curve="prime256v1", d=1) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['p256'].Gx, point_y=_curves['p256'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="P-256", **coord) + self.assertRaises(ValueError, ECC.construct, curve="P-256", d=2, **coordG) + + +class TestEccModule_P384(unittest.TestCase): + + def test_generate(self): + + curve = _curves['p384'] + key = ECC.generate(curve="P-384") + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, EccPoint(curve.Gx, curve.Gy, "p384") * key.d) + + # Other names + ECC.generate(curve="secp384r1") + ECC.generate(curve="prime384v1") + + def test_construct(self): + + curve = _curves['p384'] + key = ECC.construct(curve="P-384", d=1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, _curves['p384'].G) + + key = ECC.construct(curve="P-384", point_x=curve.Gx, point_y=curve.Gy) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, curve.G) + + # Other names + ECC.construct(curve="p384", d=1) + ECC.construct(curve="secp384r1", d=1) + ECC.construct(curve="prime384v1", d=1) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['p384'].Gx, point_y=_curves['p384'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="P-384", **coord) + self.assertRaises(ValueError, ECC.construct, curve="P-384", d=2, **coordG) + + +class TestEccModule_P521(unittest.TestCase): + + def test_generate(self): + + curve = _curves['p521'] + key = ECC.generate(curve="P-521") + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, EccPoint(curve.Gx, curve.Gy, "p521") * key.d) + + # Other names + ECC.generate(curve="secp521r1") + ECC.generate(curve="prime521v1") + + def test_construct(self): + + curve = _curves['p521'] + key = ECC.construct(curve="P-521", d=1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, _curves['p521'].G) + + key = ECC.construct(curve="P-521", point_x=curve.Gx, point_y=curve.Gy) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, curve.G) + + # Other names + ECC.construct(curve="p521", d=1) + ECC.construct(curve="secp521r1", d=1) + ECC.construct(curve="prime521v1", d=1) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['p521'].Gx, point_y=_curves['p521'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="P-521", **coord) + self.assertRaises(ValueError, ECC.construct, curve="P-521", d=2, **coordG) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestEccPoint) + tests += list_test_cases(TestEccPoint_NIST_P192) + tests += list_test_cases(TestEccPoint_NIST_P224) + tests += list_test_cases(TestEccPoint_NIST_P256) + tests += list_test_cases(TestEccPoint_NIST_P384) + tests += list_test_cases(TestEccPoint_NIST_P521) + tests += list_test_cases(TestEccPoint_PAI_P192) + tests += list_test_cases(TestEccPoint_PAI_P224) + tests += list_test_cases(TestEccPoint_PAI_P256) + tests += list_test_cases(TestEccPoint_PAI_P384) + tests 
+= list_test_cases(TestEccPoint_PAI_P521) + tests += list_test_cases(TestEccKey_P192) + tests += list_test_cases(TestEccKey_P224) + tests += list_test_cases(TestEccKey_P256) + tests += list_test_cases(TestEccKey_P384) + tests += list_test_cases(TestEccKey_P521) + tests += list_test_cases(TestEccModule_P192) + tests += list_test_cases(TestEccModule_P224) + tests += list_test_cases(TestEccModule_P256) + tests += list_test_cases(TestEccModule_P384) + tests += list_test_cases(TestEccModule_P521) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ElGamal.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ElGamal.py new file mode 100644 index 00000000..0b394aef --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_ElGamal.py @@ -0,0 +1,217 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/test_ElGamal.py: Self-test for the ElGamal primitive +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Crypto.PublicKey.ElGamal""" + +__revision__ = "$Id$" + +import unittest +from Crypto.SelfTest.st_common import list_test_cases, a2b_hex, b2a_hex +from Crypto import Random +from Crypto.PublicKey import ElGamal +from Crypto.Util.number import bytes_to_long +from Crypto.Util.py3compat import * + +class ElGamalTest(unittest.TestCase): + + # + # Test vectors + # + # There seem to be no real ElGamal test vectors available in the + # public domain. The following test vectors have been generated + # with libgcrypt 1.5.0. 
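+    # For reference, each vector is expected to satisfy the textbook
+    # ElGamal relations (all fields parsed as base-16 integers):
+    #
+    #     y   = pow(g, x, p)               # public key
+    #     ct1 = pow(g, k, p)               # ephemeral value
+    #     ct2 = (pt * pow(y, k, p)) % p    # masked message
+    #
+    # and, for signatures, verification checks
+    # pow(g, h, p) == (pow(y, sig1, p) * pow(sig1, sig2, p)) % p.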
+ # + # Encryption + tve=[ + { + # 256 bits + 'p' :'BA4CAEAAED8CBE952AFD2126C63EB3B345D65C2A0A73D2A3AD4138B6D09BD933', + 'g' :'05', + 'y' :'60D063600ECED7C7C55146020E7A31C4476E9793BEAED420FEC9E77604CAE4EF', + 'x' :'1D391BA2EE3C37FE1BA175A69B2C73A11238AD77675932', + 'k' :'F5893C5BAB4131264066F57AB3D8AD89E391A0B68A68A1', + 'pt' :'48656C6C6F207468657265', + 'ct1':'32BFD5F487966CEA9E9356715788C491EC515E4ED48B58F0F00971E93AAA5EC7', + 'ct2':'7BE8FBFF317C93E82FCEF9BD515284BA506603FEA25D01C0CB874A31F315EE68' + }, + + { + # 512 bits + 'p' :'F1B18AE9F7B4E08FDA9A04832F4E919D89462FD31BF12F92791A93519F75076D6CE3942689CDFF2F344CAFF0F82D01864F69F3AECF566C774CBACF728B81A227', + 'g' :'07', + 'y' :'688628C676E4F05D630E1BE39D0066178CA7AA83836B645DE5ADD359B4825A12B02EF4252E4E6FA9BEC1DB0BE90F6D7C8629CABB6E531F472B2664868156E20C', + 'x' :'14E60B1BDFD33436C0DA8A22FDC14A2CCDBBED0627CE68', + 'k' :'38DBF14E1F319BDA9BAB33EEEADCAF6B2EA5250577ACE7', + 'pt' :'48656C6C6F207468657265', + 'ct1':'290F8530C2CC312EC46178724F196F308AD4C523CEABB001FACB0506BFED676083FE0F27AC688B5C749AB3CB8A80CD6F7094DBA421FB19442F5A413E06A9772B', + 'ct2':'1D69AAAD1DC50493FB1B8E8721D621D683F3BF1321BE21BC4A43E11B40C9D4D9C80DE3AAC2AB60D31782B16B61112E68220889D53C4C3136EE6F6CE61F8A23A0' + } + ] + + # Signature + tvs=[ + { + # 256 bits + 'p' :'D2F3C41EA66530838A704A48FFAC9334F4701ECE3A97CEE4C69DD01AE7129DD7', + 'g' :'05', + 'y' :'C3F9417DC0DAFEA6A05C1D2333B7A95E63B3F4F28CC962254B3256984D1012E7', + 'x' :'165E4A39BE44D5A2D8B1332D416BC559616F536BC735BB', + 'k' :'C7F0C794A7EAD726E25A47FF8928013680E73C51DD3D7D99BFDA8F492585928F', + 'h' :'48656C6C6F207468657265', + 'sig1':'35CA98133779E2073EF31165AFCDEB764DD54E96ADE851715495F9C635E1E7C2', + 'sig2':'0135B88B1151279FE5D8078D4FC685EE81177EE9802AB123A73925FC1CB059A7', + }, + { + # 512 bits + 'p' :'E24CF3A4B8A6AF749DCA6D714282FE4AABEEE44A53BB6ED15FBE32B5D3C3EF9CC4124A2ECA331F3C1C1B667ACA3766825217E7B5F9856648D95F05330C6A19CF', + 'g' :'0B', + 'y' :'2AD3A1049CA5D4ED207B2431C79A8719BB4073D4A94E450EA6CEE8A760EB07ADB67C0D52C275EE85D7B52789061EE45F2F37D9B2AE522A51C28329766BFE68AC', + 'x' :'16CBB4F46D9ECCF24FF9F7E63CAA3BD8936341555062AB', + 'k' :'8A3D89A4E429FD2476D7D717251FB79BF900FFE77444E6BB8299DC3F84D0DD57ABAB50732AE158EA52F5B9E7D8813E81FD9F79470AE22F8F1CF9AEC820A78C69', + 'h' :'48656C6C6F207468657265', + 'sig1':'BE001AABAFFF976EC9016198FBFEA14CBEF96B000CCC0063D3324016F9E91FE80D8F9325812ED24DDB2B4D4CF4430B169880B3CE88313B53255BD4EC0378586F', + 'sig2':'5E266F3F837BA204E3BBB6DBECC0611429D96F8C7CE8F4EFDF9D4CB681C2A954468A357BF4242CEC7418B51DFC081BCD21299EF5B5A0DDEF3A139A1817503DDE', + } + ] + + def test_generate_180(self): + self._test_random_key(180) + + def test_encryption(self): + for tv in self.tve: + d = self.convert_tv(tv, True) + key = ElGamal.construct(d['key']) + ct = key._encrypt(d['pt'], d['k']) + self.assertEqual(ct[0], d['ct1']) + self.assertEqual(ct[1], d['ct2']) + + def test_decryption(self): + for tv in self.tve: + d = self.convert_tv(tv, True) + key = ElGamal.construct(d['key']) + pt = key._decrypt((d['ct1'], d['ct2'])) + self.assertEqual(pt, d['pt']) + + def test_signing(self): + for tv in self.tvs: + d = self.convert_tv(tv, True) + key = ElGamal.construct(d['key']) + sig1, sig2 = key._sign(d['h'], d['k']) + self.assertEqual(sig1, d['sig1']) + self.assertEqual(sig2, d['sig2']) + + def test_verification(self): + for tv in self.tvs: + d = self.convert_tv(tv, True) + key = ElGamal.construct(d['key']) + # Positive test + res = key._verify( d['h'], (d['sig1'],d['sig2']) ) + self.assertTrue(res) + # 
Negative test
+            res = key._verify( d['h'], (d['sig1']+1,d['sig2']) )
+            self.assertFalse(res)
+
+    def test_bad_key3(self):
+        tup = tup0 = list(self.convert_tv(self.tvs[0], 1)['key'])[:3]
+        tup[0] += 1        # p += 1 (not prime)
+        self.assertRaises(ValueError, ElGamal.construct, tup)
+
+        tup = tup0
+        tup[1] = 1         # g = 1
+        self.assertRaises(ValueError, ElGamal.construct, tup)
+
+        tup = tup0
+        tup[2] = tup[0]*2  # y = 2*p
+        self.assertRaises(ValueError, ElGamal.construct, tup)
+
+    def test_bad_key4(self):
+        tup = tup0 = list(self.convert_tv(self.tvs[0], 1)['key'])
+        tup[3] += 1        # x += 1
+        self.assertRaises(ValueError, ElGamal.construct, tup)
+
+    def convert_tv(self, tv, as_longs=0):
+        """Convert a test vector from textual form (hexadecimal ASCII)
+        to either integers or byte strings."""
+        key_comps = 'p','g','y','x'
+        tv2 = {}
+        for c in tv.keys():
+            tv2[c] = a2b_hex(tv[c])
+            if as_longs or c in key_comps or c in ('sig1','sig2'):
+                tv2[c] = bytes_to_long(tv2[c])
+        tv2['key']=[]
+        for c in key_comps:
+            tv2['key'] += [tv2[c]]
+            del tv2[c]
+        return tv2
+
+    def _test_random_key(self, bits):
+        elgObj = ElGamal.generate(bits, Random.new().read)
+        self._check_private_key(elgObj)
+        self._exercise_primitive(elgObj)
+        pub = elgObj.publickey()
+        self._check_public_key(pub)
+        self._exercise_public_primitive(elgObj)
+
+    def _check_private_key(self, elgObj):
+
+        # Check capabilities
+        self.assertTrue(elgObj.has_private())
+
+        # Sanity check key data
+        self.assertTrue(1 < elgObj.g < (elgObj.p - 1))
+        self.assertEqual(pow(elgObj.g, elgObj.p - 1, elgObj.p), 1)
+        self.assertTrue(1 < elgObj.x < (elgObj.p - 1))
+        self.assertEqual(pow(elgObj.g, elgObj.x, elgObj.p), elgObj.y)
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_RSA.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_RSA.py
new file mode 100644
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_RSA.py
+# -*- coding: utf-8 -*-
+#
+# SelfTest/PublicKey/test_RSA.py: Self-test for the RSA primitive
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""Self-test suite for Crypto.PublicKey.RSA"""
+
+__revision__ = "$Id$"
+
+import os
+import pickle
+from pickle import PicklingError
+from Crypto.Util.py3compat import *
+
+import unittest
+from Crypto.SelfTest.st_common import list_test_cases, a2b_hex, b2a_hex
+
+class RSATest(unittest.TestCase):
+    # Test vectors from "RSA-OAEP and RSA-PSS test vectors (.zip file)"
+    # ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1-vec.zip
+    # See RSADSI's PKCS#1 page at
+    # http://www.rsa.com/rsalabs/node.asp?id=2125
+
+    # from oaep-int.txt
+
+    # TODO: PyCrypto treats the message as starting *after* the leading "00"
+    # TODO: That behaviour should probably be changed in the future.
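+    # In raw form the vectors are tied together by the schoolbook RSA maps
+    # (a quick, informal way to sanity-check them outside the suite):
+    #
+    #     ct = pow(pt, e, n)    # the value _encrypt returns
+    #     pt = pow(ct, d, n)    # the value _decrypt returns
+    #
+    # where pt, ct and n are the hex blocks below read as big-endian integers.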
+ plaintext = """ + eb 7a 19 ac e9 e3 00 63 50 e3 29 50 4b 45 e2 + ca 82 31 0b 26 dc d8 7d 5c 68 f1 ee a8 f5 52 67 + c3 1b 2e 8b b4 25 1f 84 d7 e0 b2 c0 46 26 f5 af + f9 3e dc fb 25 c9 c2 b3 ff 8a e1 0e 83 9a 2d db + 4c dc fe 4f f4 77 28 b4 a1 b7 c1 36 2b aa d2 9a + b4 8d 28 69 d5 02 41 21 43 58 11 59 1b e3 92 f9 + 82 fb 3e 87 d0 95 ae b4 04 48 db 97 2f 3a c1 4f + 7b c2 75 19 52 81 ce 32 d2 f1 b7 6d 4d 35 3e 2d + """ + + ciphertext = """ + 12 53 e0 4d c0 a5 39 7b b4 4a 7a b8 7e 9b f2 a0 + 39 a3 3d 1e 99 6f c8 2a 94 cc d3 00 74 c9 5d f7 + 63 72 20 17 06 9e 52 68 da 5d 1c 0b 4f 87 2c f6 + 53 c1 1d f8 23 14 a6 79 68 df ea e2 8d ef 04 bb + 6d 84 b1 c3 1d 65 4a 19 70 e5 78 3b d6 eb 96 a0 + 24 c2 ca 2f 4a 90 fe 9f 2e f5 c9 c1 40 e5 bb 48 + da 95 36 ad 87 00 c8 4f c9 13 0a de a7 4e 55 8d + 51 a7 4d df 85 d8 b5 0d e9 68 38 d6 06 3e 09 55 + """ + + modulus = """ + bb f8 2f 09 06 82 ce 9c 23 38 ac 2b 9d a8 71 f7 + 36 8d 07 ee d4 10 43 a4 40 d6 b6 f0 74 54 f5 1f + b8 df ba af 03 5c 02 ab 61 ea 48 ce eb 6f cd 48 + 76 ed 52 0d 60 e1 ec 46 19 71 9d 8a 5b 8b 80 7f + af b8 e0 a3 df c7 37 72 3e e6 b4 b7 d9 3a 25 84 + ee 6a 64 9d 06 09 53 74 88 34 b2 45 45 98 39 4e + e0 aa b1 2d 7b 61 a5 1f 52 7a 9a 41 f6 c1 68 7f + e2 53 72 98 ca 2a 8f 59 46 f8 e5 fd 09 1d bd cb + """ + + e = 0x11 # public exponent + + prime_factor = """ + c9 7f b1 f0 27 f4 53 f6 34 12 33 ea aa d1 d9 35 + 3f 6c 42 d0 88 66 b1 d0 5a 0f 20 35 02 8b 9d 86 + 98 40 b4 16 66 b4 2e 92 ea 0d a3 b4 32 04 b5 cf + ce 33 52 52 4d 04 16 a5 a4 41 e7 00 af 46 15 03 + """ + + def setUp(self): + global RSA, Random, bytes_to_long + from Crypto.PublicKey import RSA + from Crypto import Random + from Crypto.Util.number import bytes_to_long, inverse + self.n = bytes_to_long(a2b_hex(self.modulus)) + self.p = bytes_to_long(a2b_hex(self.prime_factor)) + + # Compute q, d, and u from n, e, and p + self.q = self.n // self.p + self.d = inverse(self.e, (self.p-1)*(self.q-1)) + self.u = inverse(self.p, self.q) # u = e**-1 (mod q) + + self.rsa = RSA + + def test_generate_1arg(self): + """RSA (default implementation) generated key (1 argument)""" + rsaObj = self.rsa.generate(1024) + self._check_private_key(rsaObj) + self._exercise_primitive(rsaObj) + pub = rsaObj.public_key() + self._check_public_key(pub) + self._exercise_public_primitive(rsaObj) + + def test_generate_2arg(self): + """RSA (default implementation) generated key (2 arguments)""" + rsaObj = self.rsa.generate(1024, Random.new().read) + self._check_private_key(rsaObj) + self._exercise_primitive(rsaObj) + pub = rsaObj.public_key() + self._check_public_key(pub) + self._exercise_public_primitive(rsaObj) + + def test_generate_3args(self): + rsaObj = self.rsa.generate(1024, Random.new().read,e=65537) + self._check_private_key(rsaObj) + self._exercise_primitive(rsaObj) + pub = rsaObj.public_key() + self._check_public_key(pub) + self._exercise_public_primitive(rsaObj) + self.assertEqual(65537,rsaObj.e) + + def test_construct_2tuple(self): + """RSA (default implementation) constructed key (2-tuple)""" + pub = self.rsa.construct((self.n, self.e)) + self._check_public_key(pub) + self._check_encryption(pub) + + def test_construct_3tuple(self): + """RSA (default implementation) constructed key (3-tuple)""" + rsaObj = self.rsa.construct((self.n, self.e, self.d)) + self._check_encryption(rsaObj) + self._check_decryption(rsaObj) + + def test_construct_4tuple(self): + """RSA (default implementation) constructed key (4-tuple)""" + rsaObj = self.rsa.construct((self.n, self.e, self.d, self.p)) + self._check_encryption(rsaObj) + 
self._check_decryption(rsaObj) + + def test_construct_5tuple(self): + """RSA (default implementation) constructed key (5-tuple)""" + rsaObj = self.rsa.construct((self.n, self.e, self.d, self.p, self.q)) + self._check_private_key(rsaObj) + self._check_encryption(rsaObj) + self._check_decryption(rsaObj) + + def test_construct_6tuple(self): + """RSA (default implementation) constructed key (6-tuple)""" + rsaObj = self.rsa.construct((self.n, self.e, self.d, self.p, self.q, self.u)) + self._check_private_key(rsaObj) + self._check_encryption(rsaObj) + self._check_decryption(rsaObj) + + def test_construct_bad_key2(self): + tup = (self.n, 1) + self.assertRaises(ValueError, self.rsa.construct, tup) + + # An even modulus is wrong + tup = (self.n+1, self.e) + self.assertRaises(ValueError, self.rsa.construct, tup) + + def test_construct_bad_key3(self): + tup = (self.n, self.e, self.d+1) + self.assertRaises(ValueError, self.rsa.construct, tup) + + def test_construct_bad_key5(self): + tup = (self.n, self.e, self.d, self.p, self.p) + self.assertRaises(ValueError, self.rsa.construct, tup) + + tup = (self.p*self.p, self.e, self.p, self.p) + self.assertRaises(ValueError, self.rsa.construct, tup) + + tup = (self.p*self.p, 3, self.p, self.q) + self.assertRaises(ValueError, self.rsa.construct, tup) + + def test_construct_bad_key6(self): + tup = (self.n, self.e, self.d, self.p, self.q, 10) + self.assertRaises(ValueError, self.rsa.construct, tup) + + from Crypto.Util.number import inverse + tup = (self.n, self.e, self.d, self.p, self.q, inverse(self.q, self.p)) + self.assertRaises(ValueError, self.rsa.construct, tup) + + def test_factoring(self): + rsaObj = self.rsa.construct([self.n, self.e, self.d]) + self.assertTrue(rsaObj.p==self.p or rsaObj.p==self.q) + self.assertTrue(rsaObj.q==self.p or rsaObj.q==self.q) + self.assertTrue(rsaObj.q*rsaObj.p == self.n) + + self.assertRaises(ValueError, self.rsa.construct, [self.n, self.e, self.n-1]) + + def test_repr(self): + rsaObj = self.rsa.construct((self.n, self.e, self.d, self.p, self.q)) + repr(rsaObj) + + def test_serialization(self): + """RSA keys are unpickable""" + + rsa_key = self.rsa.generate(1024) + self.assertRaises(PicklingError, pickle.dumps, rsa_key) + + def test_raw_rsa_boundary(self): + # The argument of every RSA raw operation (encrypt/decrypt) must be + # non-negative and no larger than the modulus + rsa_obj = self.rsa.generate(1024) + + self.assertRaises(ValueError, rsa_obj._decrypt, rsa_obj.n) + self.assertRaises(ValueError, rsa_obj._encrypt, rsa_obj.n) + + self.assertRaises(ValueError, rsa_obj._decrypt, -1) + self.assertRaises(ValueError, rsa_obj._encrypt, -1) + + def test_size(self): + pub = self.rsa.construct((self.n, self.e)) + self.assertEqual(pub.size_in_bits(), 1024) + self.assertEqual(pub.size_in_bytes(), 128) + + def _check_private_key(self, rsaObj): + from Crypto.Math.Numbers import Integer + + # Check capabilities + self.assertEqual(1, rsaObj.has_private()) + + # Sanity check key data + self.assertEqual(rsaObj.n, rsaObj.p * rsaObj.q) # n = pq + lcm = int(Integer(rsaObj.p-1).lcm(rsaObj.q-1)) + self.assertEqual(1, rsaObj.d * rsaObj.e % lcm) # ed = 1 (mod LCM(p-1, q-1)) + self.assertEqual(1, rsaObj.p * rsaObj.u % rsaObj.q) # pu = 1 (mod q) + self.assertEqual(1, rsaObj.p > 1) # p > 1 + self.assertEqual(1, rsaObj.q > 1) # q > 1 + self.assertEqual(1, rsaObj.e > 1) # e > 1 + self.assertEqual(1, rsaObj.d > 1) # d > 1 + + def _check_public_key(self, rsaObj): + ciphertext = a2b_hex(self.ciphertext) + + # Check capabilities + self.assertEqual(0, 
rsaObj.has_private()) + + # Check rsaObj.[ne] -> rsaObj.[ne] mapping + self.assertEqual(rsaObj.n, rsaObj.n) + self.assertEqual(rsaObj.e, rsaObj.e) + + # Check that private parameters are all missing + self.assertEqual(0, hasattr(rsaObj, 'd')) + self.assertEqual(0, hasattr(rsaObj, 'p')) + self.assertEqual(0, hasattr(rsaObj, 'q')) + self.assertEqual(0, hasattr(rsaObj, 'u')) + + # Sanity check key data + self.assertEqual(1, rsaObj.e > 1) # e > 1 + + # Public keys should not be able to sign or decrypt + self.assertRaises(TypeError, rsaObj._decrypt, + bytes_to_long(ciphertext)) + + # Check __eq__ and __ne__ + self.assertEqual(rsaObj.public_key() == rsaObj.public_key(),True) # assert_ + self.assertEqual(rsaObj.public_key() != rsaObj.public_key(),False) # assertFalse + + self.assertEqual(rsaObj.publickey(), rsaObj.public_key()) + + def _exercise_primitive(self, rsaObj): + # Since we're using a randomly-generated key, we can't check the test + # vector, but we can make sure encryption and decryption are inverse + # operations. + ciphertext = bytes_to_long(a2b_hex(self.ciphertext)) + + # Test decryption + plaintext = rsaObj._decrypt(ciphertext) + + # Test encryption (2 arguments) + new_ciphertext2 = rsaObj._encrypt(plaintext) + self.assertEqual(ciphertext, new_ciphertext2) + + def _exercise_public_primitive(self, rsaObj): + plaintext = a2b_hex(self.plaintext) + + # Test encryption (2 arguments) + new_ciphertext2 = rsaObj._encrypt(bytes_to_long(plaintext)) + + def _check_encryption(self, rsaObj): + plaintext = a2b_hex(self.plaintext) + ciphertext = a2b_hex(self.ciphertext) + + # Test encryption + new_ciphertext2 = rsaObj._encrypt(bytes_to_long(plaintext)) + self.assertEqual(bytes_to_long(ciphertext), new_ciphertext2) + + def _check_decryption(self, rsaObj): + plaintext = bytes_to_long(a2b_hex(self.plaintext)) + ciphertext = bytes_to_long(a2b_hex(self.ciphertext)) + + # Test plain decryption + new_plaintext = rsaObj._decrypt(ciphertext) + self.assertEqual(plaintext, new_plaintext) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(RSATest) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_import_DSA.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_import_DSA.py new file mode 100644 index 00000000..266b46f0 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_import_DSA.py @@ -0,0 +1,554 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/test_import_DSA.py: Self-test for importing DSA keys +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +import unittest +import re + +from Crypto.PublicKey import DSA +from Crypto.SelfTest.st_common import * +from Crypto.Util.py3compat import * + +from binascii import unhexlify + +class ImportKeyTests(unittest.TestCase): + + y = 92137165128186062214622779787483327510946462589285775188003362705875131352591574106484271700740858696583623951844732128165434284507709057439633739849986759064015013893156866539696757799934634945787496920169462601722830899660681779448742875054459716726855443681559131362852474817534616736104831095601710736729 + p = 162452170958135306109773853318304545923250830605675936228618290525164105310663722368377131295055868997377338797580997938253236213714988311430600065853662861806894003694743806769284131194035848116051021923956699231855223389086646903420682639786976554552864568460372266462812137447840653688476258666833303658691 + q = 988791743931120302950649732173330531512663554851 + g = 85583152299197514738065570254868711517748965097380456700369348466136657764813442044039878840094809620913085570225318356734366886985903212775602770761953571967834823306046501307810937486758039063386311593890777319935391363872375452381836756832784184928202587843258855704771836753434368484556809100537243908232 + x = 540873410045082450874416847965843801027716145253 + + def setUp(self): + + # It is easier to write test vectors in text form, + # and convert them to byte strigs dynamically here + for mname, mvalue in ImportKeyTests.__dict__.items(): + if mname[:4] in ('der_', 'pem_', 'ssh_'): + if mname[:4] == 'der_': + mvalue = unhexlify(tobytes(mvalue)) + mvalue = tobytes(mvalue) + setattr(self, mname, mvalue) + + # 1. SubjectPublicKeyInfo + der_public=\ + '308201b73082012b06072a8648ce3804013082011e02818100e756ee1717f4b6'+\ + '794c7c214724a19763742c45572b4b3f8ff3b44f3be9f44ce039a2757695ec91'+\ + '5697da74ef914fcd1b05660e2419c761d639f45d2d79b802dbd23e7ab8b81b47'+\ + '9a380e1f30932584ba2a0b955032342ebc83cb5ca906e7b0d7cd6fe656cecb4c'+\ + '8b5a77123a8c6750a481e3b06057aff6aa6eba620b832d60c3021500ad32f48c'+\ + 'd3ae0c45a198a61fa4b5e20320763b2302818079dfdc3d614fe635fceb7eaeae'+\ + '3718dc2efefb45282993ac6749dc83c223d8c1887296316b3b0b54466cf444f3'+\ + '4b82e3554d0b90a778faaf1306f025dae6a3e36c7f93dd5bac4052b92370040a'+\ + 'ca70b8d5820599711900efbc961812c355dd9beffe0981da85c5548074b41c56'+\ + 'ae43fd300d89262e4efd89943f99a651b03888038185000281810083352a69a1'+\ + '32f34843d2a0eb995bff4e2f083a73f0049d2c91ea2f0ce43d144abda48199e4'+\ + 'b003c570a8af83303d45105f606c5c48d925a40ed9c2630c2fa4cdbf838539de'+\ + 'b9a29f919085f2046369f627ca84b2cb1e2c7940564b670f963ab1164d4e2ca2'+\ + 'bf6ffd39f12f548928bf4d2d1b5e6980b4f1be4c92a91986fba559' + + def testImportKey1(self): + key_obj = DSA.importKey(self.der_public) + self.assertFalse(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + + def testExportKey1(self): + tup = (self.y, self.g, self.p, self.q) + key = DSA.construct(tup) + encoded = key.export_key('DER') + self.assertEqual(self.der_public, encoded) + + # 2. 
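+
+    # [Editor's sketch, not in the original suite] The DER round-trip that
+    # testImportKey1/testExportKey1 above exercise, condensed into one
+    # helper; the name is ours, the API calls are the ones used above.
+    @staticmethod
+    def _sketch_roundtrip_public_der(y, g, p, q):
+        from Crypto.PublicKey import DSA  # local import: keeps sketch self-contained
+        key = DSA.construct((y, g, p, q))       # public key: no x component
+        der = key.export_key('DER')             # SubjectPublicKeyInfo encoding
+        restored = DSA.import_key(der)
+        assert not restored.has_private()
+        assert (restored.y, restored.g, restored.p, restored.q) == (y, g, p, q)
+        return der
+
+    # The PEM form of the same public key (case 2) follows: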
+ pem_public="""\ +-----BEGIN PUBLIC KEY----- +MIIBtzCCASsGByqGSM44BAEwggEeAoGBAOdW7hcX9LZ5THwhRyShl2N0LEVXK0s/ +j/O0Tzvp9EzgOaJ1dpXskVaX2nTvkU/NGwVmDiQZx2HWOfRdLXm4AtvSPnq4uBtH +mjgOHzCTJYS6KguVUDI0LryDy1ypBuew181v5lbOy0yLWncSOoxnUKSB47BgV6/2 +qm66YguDLWDDAhUArTL0jNOuDEWhmKYfpLXiAyB2OyMCgYB539w9YU/mNfzrfq6u +NxjcLv77RSgpk6xnSdyDwiPYwYhyljFrOwtURmz0RPNLguNVTQuQp3j6rxMG8CXa +5qPjbH+T3VusQFK5I3AECspwuNWCBZlxGQDvvJYYEsNV3Zvv/gmB2oXFVIB0tBxW +rkP9MA2JJi5O/YmUP5mmUbA4iAOBhQACgYEAgzUqaaEy80hD0qDrmVv/Ti8IOnPw +BJ0skeovDOQ9FEq9pIGZ5LADxXCor4MwPUUQX2BsXEjZJaQO2cJjDC+kzb+DhTne +uaKfkZCF8gRjafYnyoSyyx4seUBWS2cPljqxFk1OLKK/b/058S9UiSi/TS0bXmmA +tPG+TJKpGYb7pVk= +-----END PUBLIC KEY-----""" + + def testImportKey2(self): + for pem in (self.pem_public, tostr(self.pem_public)): + key_obj = DSA.importKey(pem) + self.assertFalse(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + + def testExportKey2(self): + tup = (self.y, self.g, self.p, self.q) + key = DSA.construct(tup) + encoded = key.export_key('PEM') + self.assertEqual(self.pem_public, encoded) + + # 3. OpenSSL/OpenSSH format + der_private=\ + '308201bb02010002818100e756ee1717f4b6794c7c214724a19763742c45572b'+\ + '4b3f8ff3b44f3be9f44ce039a2757695ec915697da74ef914fcd1b05660e2419'+\ + 'c761d639f45d2d79b802dbd23e7ab8b81b479a380e1f30932584ba2a0b955032'+\ + '342ebc83cb5ca906e7b0d7cd6fe656cecb4c8b5a77123a8c6750a481e3b06057'+\ + 'aff6aa6eba620b832d60c3021500ad32f48cd3ae0c45a198a61fa4b5e2032076'+\ + '3b2302818079dfdc3d614fe635fceb7eaeae3718dc2efefb45282993ac6749dc'+\ + '83c223d8c1887296316b3b0b54466cf444f34b82e3554d0b90a778faaf1306f0'+\ + '25dae6a3e36c7f93dd5bac4052b92370040aca70b8d5820599711900efbc9618'+\ + '12c355dd9beffe0981da85c5548074b41c56ae43fd300d89262e4efd89943f99'+\ + 'a651b038880281810083352a69a132f34843d2a0eb995bff4e2f083a73f0049d'+\ + '2c91ea2f0ce43d144abda48199e4b003c570a8af83303d45105f606c5c48d925'+\ + 'a40ed9c2630c2fa4cdbf838539deb9a29f919085f2046369f627ca84b2cb1e2c'+\ + '7940564b670f963ab1164d4e2ca2bf6ffd39f12f548928bf4d2d1b5e6980b4f1'+\ + 'be4c92a91986fba55902145ebd9a3f0b82069d98420986b314215025756065' + + def testImportKey3(self): + key_obj = DSA.importKey(self.der_private) + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey3(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + encoded = key.export_key('DER', pkcs8=False) + self.assertEqual(self.der_private, encoded) + + # 4. 
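+
+    # [Editor's sketch, not in the original suite] Cases 3 and 4 use the
+    # legacy OpenSSL container rather than PKCS#8, selected with
+    # pkcs8=False exactly as in testExportKey3/testExportKey4:
+    @staticmethod
+    def _sketch_export_legacy(y, g, p, q, x):
+        from Crypto.PublicKey import DSA  # local import: keeps sketch self-contained
+        key = DSA.construct((y, g, p, q, x))
+        der = key.export_key('DER', pkcs8=False)  # OpenSSL DSAPrivateKey DER
+        pem = key.export_key('PEM', pkcs8=False)  # "BEGIN DSA PRIVATE KEY"
+        return der, pem
+
+    # The PEM form of the same private key (case 4) follows: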
+ pem_private="""\ +-----BEGIN DSA PRIVATE KEY----- +MIIBuwIBAAKBgQDnVu4XF/S2eUx8IUckoZdjdCxFVytLP4/ztE876fRM4DmidXaV +7JFWl9p075FPzRsFZg4kGcdh1jn0XS15uALb0j56uLgbR5o4Dh8wkyWEuioLlVAy +NC68g8tcqQbnsNfNb+ZWzstMi1p3EjqMZ1CkgeOwYFev9qpuumILgy1gwwIVAK0y +9IzTrgxFoZimH6S14gMgdjsjAoGAed/cPWFP5jX8636urjcY3C7++0UoKZOsZ0nc +g8Ij2MGIcpYxazsLVEZs9ETzS4LjVU0LkKd4+q8TBvAl2uaj42x/k91brEBSuSNw +BArKcLjVggWZcRkA77yWGBLDVd2b7/4JgdqFxVSAdLQcVq5D/TANiSYuTv2JlD+Z +plGwOIgCgYEAgzUqaaEy80hD0qDrmVv/Ti8IOnPwBJ0skeovDOQ9FEq9pIGZ5LAD +xXCor4MwPUUQX2BsXEjZJaQO2cJjDC+kzb+DhTneuaKfkZCF8gRjafYnyoSyyx4s +eUBWS2cPljqxFk1OLKK/b/058S9UiSi/TS0bXmmAtPG+TJKpGYb7pVkCFF69mj8L +ggadmEIJhrMUIVAldWBl +-----END DSA PRIVATE KEY-----""" + + def testImportKey4(self): + for pem in (self.pem_private, tostr(self.pem_private)): + key_obj = DSA.importKey(pem) + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey4(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + encoded = key.export_key('PEM', pkcs8=False) + self.assertEqual(self.pem_private, encoded) + + # 5. PKCS8 (unencrypted) + der_pkcs8=\ + '3082014a0201003082012b06072a8648ce3804013082011e02818100e756ee17'+\ + '17f4b6794c7c214724a19763742c45572b4b3f8ff3b44f3be9f44ce039a27576'+\ + '95ec915697da74ef914fcd1b05660e2419c761d639f45d2d79b802dbd23e7ab8'+\ + 'b81b479a380e1f30932584ba2a0b955032342ebc83cb5ca906e7b0d7cd6fe656'+\ + 'cecb4c8b5a77123a8c6750a481e3b06057aff6aa6eba620b832d60c3021500ad'+\ + '32f48cd3ae0c45a198a61fa4b5e20320763b2302818079dfdc3d614fe635fceb'+\ + '7eaeae3718dc2efefb45282993ac6749dc83c223d8c1887296316b3b0b54466c'+\ + 'f444f34b82e3554d0b90a778faaf1306f025dae6a3e36c7f93dd5bac4052b923'+\ + '70040aca70b8d5820599711900efbc961812c355dd9beffe0981da85c5548074'+\ + 'b41c56ae43fd300d89262e4efd89943f99a651b03888041602145ebd9a3f0b82'+\ + '069d98420986b314215025756065' + + def testImportKey5(self): + key_obj = DSA.importKey(self.der_pkcs8) + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey5(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + encoded = key.export_key('DER') + self.assertEqual(self.der_pkcs8, encoded) + encoded = key.export_key('DER', pkcs8=True) + self.assertEqual(self.der_pkcs8, encoded) + + # 6. 
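+
+    # [Editor's sketch, not in the original suite] PKCS#8 is the default
+    # private-key container, so export_key('DER') and
+    # export_key('DER', pkcs8=True) agree -- the equality that
+    # testExportKey5 asserts against the reference bytes:
+    @staticmethod
+    def _sketch_pkcs8_is_default(y, g, p, q, x):
+        from Crypto.PublicKey import DSA  # local import: keeps sketch self-contained
+        key = DSA.construct((y, g, p, q, x))
+        assert key.export_key('DER') == key.export_key('DER', pkcs8=True)
+        return key.export_key('DER')
+
+    # The PEM form of the same PKCS#8 key (case 6) follows: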
+ pem_pkcs8="""\ +-----BEGIN PRIVATE KEY----- +MIIBSgIBADCCASsGByqGSM44BAEwggEeAoGBAOdW7hcX9LZ5THwhRyShl2N0LEVX +K0s/j/O0Tzvp9EzgOaJ1dpXskVaX2nTvkU/NGwVmDiQZx2HWOfRdLXm4AtvSPnq4 +uBtHmjgOHzCTJYS6KguVUDI0LryDy1ypBuew181v5lbOy0yLWncSOoxnUKSB47Bg +V6/2qm66YguDLWDDAhUArTL0jNOuDEWhmKYfpLXiAyB2OyMCgYB539w9YU/mNfzr +fq6uNxjcLv77RSgpk6xnSdyDwiPYwYhyljFrOwtURmz0RPNLguNVTQuQp3j6rxMG +8CXa5qPjbH+T3VusQFK5I3AECspwuNWCBZlxGQDvvJYYEsNV3Zvv/gmB2oXFVIB0 +tBxWrkP9MA2JJi5O/YmUP5mmUbA4iAQWAhRevZo/C4IGnZhCCYazFCFQJXVgZQ== +-----END PRIVATE KEY-----""" + + def testImportKey6(self): + for pem in (self.pem_pkcs8, tostr(self.pem_pkcs8)): + key_obj = DSA.importKey(pem) + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey6(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + encoded = key.export_key('PEM') + self.assertEqual(self.pem_pkcs8, encoded) + encoded = key.export_key('PEM', pkcs8=True) + self.assertEqual(self.pem_pkcs8, encoded) + + # 7. OpenSSH/RFC4253 + ssh_pub="""ssh-dss AAAAB3NzaC1kc3MAAACBAOdW7hcX9LZ5THwhRyShl2N0LEVXK0s/j/O0Tzvp9EzgOaJ1dpXskVaX2nTvkU/NGwVmDiQZx2HWOfRdLXm4AtvSPnq4uBtHmjgOHzCTJYS6KguVUDI0LryDy1ypBuew181v5lbOy0yLWncSOoxnUKSB47BgV6/2qm66YguDLWDDAAAAFQCtMvSM064MRaGYph+kteIDIHY7IwAAAIB539w9YU/mNfzrfq6uNxjcLv77RSgpk6xnSdyDwiPYwYhyljFrOwtURmz0RPNLguNVTQuQp3j6rxMG8CXa5qPjbH+T3VusQFK5I3AECspwuNWCBZlxGQDvvJYYEsNV3Zvv/gmB2oXFVIB0tBxWrkP9MA2JJi5O/YmUP5mmUbA4iAAAAIEAgzUqaaEy80hD0qDrmVv/Ti8IOnPwBJ0skeovDOQ9FEq9pIGZ5LADxXCor4MwPUUQX2BsXEjZJaQO2cJjDC+kzb+DhTneuaKfkZCF8gRjafYnyoSyyx4seUBWS2cPljqxFk1OLKK/b/058S9UiSi/TS0bXmmAtPG+TJKpGYb7pVk=""" + + def testImportKey7(self): + for ssh in (self.ssh_pub, tostr(self.ssh_pub)): + key_obj = DSA.importKey(ssh) + self.assertFalse(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + + def testExportKey7(self): + tup = (self.y, self.g, self.p, self.q) + key = DSA.construct(tup) + encoded = key.export_key('OpenSSH') + self.assertEqual(self.ssh_pub, encoded) + + # 8. 
Encrypted OpenSSL/OpenSSH + pem_private_encrypted="""\ +-----BEGIN DSA PRIVATE KEY----- +Proc-Type: 4,ENCRYPTED +DEK-Info: AES-128-CBC,70B6908939D65E9F2EB999E8729788CE + +4V6GHRDpCrdZ8MBjbyp5AlGUrjvr2Pn2e2zVxy5RBt4FBj9/pa0ae0nnyUPMLSUU +kKyOR0topRYTVRLElm4qVrb5uNZ3hRwfbklr+pSrB7O9eHz9V5sfOQxyODS07JxK +k1OdOs70/ouMXLF9EWfAZOmWUccZKHNblUwg1p1UrZIz5jXw4dUE/zqhvXh6d+iC +ADsICaBCjCrRQJKDp50h3+ndQjkYBKVH+pj8TiQ79U7lAvdp3+iMghQN6YXs9mdI +gFpWw/f97oWM4GHZFqHJ+VSMNFjBiFhAvYV587d7Lk4dhD8sCfbxj42PnfRgUItc +nnPqHxmhMQozBWzYM4mQuo3XbF2WlsNFbOzFVyGhw1Bx1s91qvXBVWJh2ozrW0s6 +HYDV7ZkcTml/4kjA/d+mve6LZ8kuuR1qCiZx6rkffhh1gDN/1Xz3HVvIy/dQ+h9s +5zp7PwUoWbhqp3WCOr156P6gR8qo7OlT6wMh33FSXK/mxikHK136fV2shwTKQVII +rJBvXpj8nACUmi7scKuTWGeUoXa+dwTZVVe+b+L2U1ZM7+h/neTJiXn7u99PFUwu +xVJtxaV37m3aXxtCsPnbBg== +-----END DSA PRIVATE KEY-----""" + + def testImportKey8(self): + for pem in (self.pem_private_encrypted, tostr(self.pem_private_encrypted)): + key_obj = DSA.importKey(pem, "PWDTEST") + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey8(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + encoded = key.export_key('PEM', pkcs8=False, passphrase="PWDTEST") + key = DSA.importKey(encoded, "PWDTEST") + self.assertEqual(self.y, key.y) + self.assertEqual(self.p, key.p) + self.assertEqual(self.q, key.q) + self.assertEqual(self.g, key.g) + self.assertEqual(self.x, key.x) + + # 9. Encrypted PKCS8 + # pbeWithMD5AndDES-CBC + pem_pkcs8_encrypted="""\ +-----BEGIN ENCRYPTED PRIVATE KEY----- +MIIBcTAbBgkqhkiG9w0BBQMwDgQI0GC3BJ/jSw8CAggABIIBUHc1cXZpExIE9tC7 +7ryiW+5ihtF2Ekurq3e408GYSAu5smJjN2bvQXmzRFBz8W38K8eMf1sbWroZ4+zn +kZSbb9nSm5kAa8lR2+oF2k+WRswMR/PTC3f/D9STO2X0QxdrzKgIHEcSGSHp5jTx +aVvbkCDHo9vhBTl6S3ogZ48As/MEro76+9igUwJ1jNhIQZPJ7e20QH5qDpQFFJN4 +CKl2ENSEuwGiqBszItFy4dqH0g63ZGZV/xt9wSO9Rd7SK/EbA/dklOxBa5Y/VItM +gnIhs9XDMoGYyn6F023EicNJm6g/bVQk81BTTma4tm+12TKGdYm+QkeZvCOMZylr +Wv67cKwO3cAXt5C3QXMDgYR64XvuaT5h7C0igMp2afSXJlnbHEbFxQVJlv83T4FM +eZ4k+NQDbEL8GiHmFxzDWQAuPPZKJWEEEV2p/To+WOh+kSDHQw== +-----END ENCRYPTED PRIVATE KEY-----""" + + def testImportKey9(self): + for pem in (self.pem_pkcs8_encrypted, tostr(self.pem_pkcs8_encrypted)): + key_obj = DSA.importKey(pem, "PWDTEST") + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + # 10. 
Encrypted PKCS8 + # pkcs5PBES2 / + # pkcs5PBKDF2 (rounds=1000, salt=D725BF1B6B8239F4) / + # des-EDE3-CBC (iv=27A1C66C42AFEECE) + # + der_pkcs8_encrypted=\ + '30820196304006092a864886f70d01050d3033301b06092a864886f70d01050c'+\ + '300e0408d725bf1b6b8239f4020203e8301406082a864886f70d0307040827a1'+\ + 'c66c42afeece048201505cacfde7bf8edabb3e0d387950dc872662ea7e9b1ed4'+\ + '400d2e7e6186284b64668d8d0328c33a9d9397e6f03df7cb68268b0a06b4e22f'+\ + '7d132821449ecf998a8b696dbc6dd2b19e66d7eb2edfeb4153c1771d49702395'+\ + '4f36072868b5fcccf93413a5ac4b2eb47d4b3f681c6bd67ae363ed776f45ae47'+\ + '174a00098a7c930a50f820b227ddf50f9742d8e950d02586ff2dac0e3c372248'+\ + 'e5f9b6a7a02f4004f20c87913e0f7b52bccc209b95d478256a890b31d4c9adec'+\ + '21a4d157a179a93a3dad06f94f3ce486b46dfa7fc15fd852dd7680bbb2f17478'+\ + '7e71bd8dbaf81eca7518d76c1d26256e95424864ba45ca5d47d7c5a421be02fa'+\ + 'b94ab01e18593f66cf9094eb5c94b9ecf3aa08b854a195cf87612fbe5e96c426'+\ + '2b0d573e52dc71ba3f5e468c601e816c49b7d32c698b22175e89aaef0c443770'+\ + '5ef2f88a116d99d8e2869a4fd09a771b84b49e4ccb79aadcb1c9' + + def testImportKey10(self): + key_obj = DSA.importKey(self.der_pkcs8_encrypted, "PWDTEST") + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey10(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + randfunc = BytesIO(unhexlify(b("27A1C66C42AFEECE") + b("D725BF1B6B8239F4"))).read + encoded = key.export_key('DER', pkcs8=True, passphrase="PWDTEST", randfunc=randfunc) + self.assertEqual(self.der_pkcs8_encrypted, encoded) + + # ---- + + def testImportError1(self): + self.assertRaises(ValueError, DSA.importKey, self.der_pkcs8_encrypted, "wrongpwd") + + def testExportError2(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + self.assertRaises(ValueError, key.export_key, 'DER', pkcs8=False, passphrase="PWDTEST") + + def test_import_key(self): + """Verify importKey is an alias to import_key""" + + key_obj = DSA.import_key(self.der_public) + self.assertFalse(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + + def test_exportKey(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + self.assertEqual(key.exportKey(), key.export_key()) + + + def test_import_empty(self): + self.assertRaises(ValueError, DSA.import_key, b'') + + +class ImportKeyFromX509Cert(unittest.TestCase): + + def test_x509v1(self): + + # Sample V1 certificate with a 1024 bit DSA key + x509_v1_cert = """ +-----BEGIN CERTIFICATE----- +MIIDUjCCArsCAQIwDQYJKoZIhvcNAQEFBQAwfjENMAsGA1UEChMEQWNtZTELMAkG +A1UECxMCUkQxHDAaBgkqhkiG9w0BCQEWDXNwYW1AYWNtZS5vcmcxEzARBgNVBAcT +Ck1ldHJvcG9saXMxETAPBgNVBAgTCE5ldyBZb3JrMQswCQYDVQQGEwJVUzENMAsG +A1UEAxMEdGVzdDAeFw0xNDA3MTEyMDM4NDNaFw0xNzA0MDYyMDM4NDNaME0xCzAJ +BgNVBAYTAlVTMREwDwYDVQQIEwhOZXcgWW9yazENMAsGA1UEChMEQWNtZTELMAkG +A1UECxMCUkQxDzANBgNVBAMTBnBvbGFuZDCCAbYwggErBgcqhkjOOAQBMIIBHgKB +gQDOrN4Ox4+t3T6wKeHfhzArhcrNEFMQ4Ss+4PIKyimDy9Bn64WPkL1B/9dvYIga +23GLu6tVJmXo6EdJnVOHEMhr99EeOwuDWWeP7Awq7RSlKEejokr4BEzMTW/tExSD +cO6/GI7xzh0eTH+VTTPDfyrJMYCkh0rJAfCP+5xrmPNetwIVALtXYOV1yoRrzJ2Q +M5uEjidH6GiZAoGAfUqA1SAm5g5U68SILMVX9l5rq0OpB0waBMpJQ31/R/yXNDqo +c3gGWZTOJFU4IzwNpGhrGNADUByz/lc1SAOAdEJIr0JVrhbGewQjB4pWqoLGbBKz 
+RoavTNDc/zD7SYa12evWDHADwvlXoeQg+lWop1zS8OqaDC7aLGKpWN3/m8kDgYQA +AoGAKoirPAfcp1rbbl4y2FFAIktfW8f4+T7d2iKSg73aiVfujhNOt1Zz1lfC0NI2 +eonLWO3tAM4XGKf1TLjb5UXngGn40okPsaA81YE6ZIKm20ywjlOY3QkAEdMaLVY3 +9PJvM8RGB9m7pLKxyHfGMfF40MVN4222zKeGp7xhM0CNiCUwDQYJKoZIhvcNAQEF +BQADgYEAfbNZfpYa2KlALEM1FZnwvQDvJHntHz8LdeJ4WM7CXDlKi67wY2HKM30w +s2xej75imkVOFd1kF2d0A8sjfriXLVIt1Hwq9ANZomhu4Edx0xpH8tqdh/bDtnM2 +TmduZNY9OWkb07h0CtWD6Zt8fhRllVsSSrlWd/2or7FXNC5weFQ= +-----END CERTIFICATE----- + """.strip() + + # DSA public key as dumped by openssl + y_str = """ +2a:88:ab:3c:07:dc:a7:5a:db:6e:5e:32:d8:51:40: +22:4b:5f:5b:c7:f8:f9:3e:dd:da:22:92:83:bd:da: +89:57:ee:8e:13:4e:b7:56:73:d6:57:c2:d0:d2:36: +7a:89:cb:58:ed:ed:00:ce:17:18:a7:f5:4c:b8:db: +e5:45:e7:80:69:f8:d2:89:0f:b1:a0:3c:d5:81:3a: +64:82:a6:db:4c:b0:8e:53:98:dd:09:00:11:d3:1a: +2d:56:37:f4:f2:6f:33:c4:46:07:d9:bb:a4:b2:b1: +c8:77:c6:31:f1:78:d0:c5:4d:e3:6d:b6:cc:a7:86: +a7:bc:61:33:40:8d:88:25 + """ + p_str = """ +00:ce:ac:de:0e:c7:8f:ad:dd:3e:b0:29:e1:df:87: +30:2b:85:ca:cd:10:53:10:e1:2b:3e:e0:f2:0a:ca: +29:83:cb:d0:67:eb:85:8f:90:bd:41:ff:d7:6f:60: +88:1a:db:71:8b:bb:ab:55:26:65:e8:e8:47:49:9d: +53:87:10:c8:6b:f7:d1:1e:3b:0b:83:59:67:8f:ec: +0c:2a:ed:14:a5:28:47:a3:a2:4a:f8:04:4c:cc:4d: +6f:ed:13:14:83:70:ee:bf:18:8e:f1:ce:1d:1e:4c: +7f:95:4d:33:c3:7f:2a:c9:31:80:a4:87:4a:c9:01: +f0:8f:fb:9c:6b:98:f3:5e:b7 + """ + q_str = """ +00:bb:57:60:e5:75:ca:84:6b:cc:9d:90:33:9b:84: +8e:27:47:e8:68:99 + """ + g_str = """ +7d:4a:80:d5:20:26:e6:0e:54:eb:c4:88:2c:c5:57: +f6:5e:6b:ab:43:a9:07:4c:1a:04:ca:49:43:7d:7f: +47:fc:97:34:3a:a8:73:78:06:59:94:ce:24:55:38: +23:3c:0d:a4:68:6b:18:d0:03:50:1c:b3:fe:57:35: +48:03:80:74:42:48:af:42:55:ae:16:c6:7b:04:23: +07:8a:56:aa:82:c6:6c:12:b3:46:86:af:4c:d0:dc: +ff:30:fb:49:86:b5:d9:eb:d6:0c:70:03:c2:f9:57: +a1:e4:20:fa:55:a8:a7:5c:d2:f0:ea:9a:0c:2e:da: +2c:62:a9:58:dd:ff:9b:c9 + """ + + key = DSA.importKey(x509_v1_cert) + for comp_name in ('y', 'p', 'q', 'g'): + comp_str = locals()[comp_name + "_str"] + comp = int(re.sub("[^0-9a-f]", "", comp_str), 16) + self.assertEqual(getattr(key, comp_name), comp) + self.assertFalse(key.has_private()) + + def test_x509v3(self): + + # Sample V3 certificate with a 1024 bit DSA key + x509_v3_cert = """ +-----BEGIN CERTIFICATE----- +MIIFhjCCA26gAwIBAgIBAzANBgkqhkiG9w0BAQsFADBhMQswCQYDVQQGEwJVUzEL +MAkGA1UECAwCTUQxEjAQBgNVBAcMCUJhbHRpbW9yZTEQMA4GA1UEAwwHVGVzdCBD +QTEfMB0GCSqGSIb3DQEJARYQdGVzdEBleGFtcGxlLmNvbTAeFw0xNDA3MTMyMDUz +MjBaFw0xNzA0MDgyMDUzMjBaMEAxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJNRDES +MBAGA1UEBwwJQmFsdGltb3JlMRAwDgYDVQQDDAdhdXN0cmlhMIIBtjCCASsGByqG +SM44BAEwggEeAoGBALfd8gyEpVPA0ZI69Kp3nyJcu5N0ZZ3K1K9hleQLNqKEcZOh +7a/C2J1TPdmHTLJ0rAwBZ1nWxnARSgRphziGDFspKCYQwYcSMz8KoFgvXbXpuchy +oFACiQ2LqZnc5MakuLQtLcQciSYGYj3zmZdYMoa904F1aDWr+DxQI6DVC3/bAhUA +hqXMCJ6fQK3G2O9S3/CC/yVZXCsCgYBRXROl3R2khX7l10LQjDEgo3B1IzjXU/jP +McMBl6XO+nBJXxr/scbq8Ajiv7LTnGpSjgryHtvfj887kfvo8QbSS3kp3vq5uSqI +ui7E7r3jguWaLj616AG1HWOctXJUjqsiabZwsp2h09gHTzmHEXBOmiARu8xFxKAH +xsuo7onAbwOBhAACgYBylWjWSnKHE8mHx1A5m/0GQx6xnhWIe3+MJAnEhRGxA2J4 +SCsfWU0OwglIQToh1z5uUU9oDi9cYgNPBevOFRnDhc2yaJY6VAYnI+D+6J5IU6Yd +0iaG/iSc4sV4bFr0axcPpse3SN0XaQxiKeSFBfFnoMqL+dd9Gb3QPZSllBcVD6OB +1TCB0jAdBgNVHQ4EFgQUx5wN0Puotv388M9Tp/fsPbZpzAUwHwYDVR0jBBgwFoAU +a0hkif3RMaraiWtsOOZZlLu9wJwwCQYDVR0TBAIwADALBgNVHQ8EBAMCBeAwSgYD +VR0RBEMwQYILZXhhbXBsZS5jb22CD3d3dy5leGFtcGxlLmNvbYIQbWFpbC5leGFt +cGxlLmNvbYIPZnRwLmV4YW1wbGUuY29tMCwGCWCGSAGG+EIBDQQfFh1PcGVuU1NM +IEdlbmVyYXRlZCBDZXJ0aWZpY2F0ZTANBgkqhkiG9w0BAQsFAAOCAgEAyWf1TiJI 
+aNEIA9o/PG8/JiGASTS2/HBVTJbkq03k6NkJVk/GxC1DPziTUJ+CdWlHWcAi1EOW +Ach3QxNDRrVfCOfCMDgElIO1094/reJgdFYG00LRi8QkRJuxANV7YS4tLudhyHJC +kR2lhdMNmEuzWK+s2y+5cLrdm7qdvdENQCcV67uvGPx4sc+EaE7x13SczKjWBtbo +QCs6JTOW+EkPRl4Zo27K4OIZ43/J+GxvwU9QUVH3wPVdbbLNw+QeTFBYMTEcxyc4 +kv50HPBFaithziXBFyvdIs19FjkFzu0Uz/e0zb1+vMzQlJMD94HVOrMnIj5Sb2cL +KKdYXS4uhxFJmdV091Xur5JkYYwEzuaGav7J3zOzYutrIGTgDluLCvA+VQkRcTsy +jZ065SkY/v+38QHp+cmm8WRluupJTs8wYzVp6Fu0iFaaK7ztFmaZmHpiPIfDFjva +aCIgzzT5NweJd/b71A2SyzHXJ14zBXsr1PMylMp2TpHIidhuuNuQL6I0HaollB4M +Z3FsVBMhVDw4Z76qnFPr8mZE2tar33hSlJI/3pS/bBiukuBk8U7VB0X8OqaUnP3C +7b2Z4G8GtqDVcKGMzkvMjT4n9rKd/Le+qHSsQOGO9W/0LB7UDAZSwUsfAPnoBgdS +5t9tIomLCOstByXi+gGZue1TcdCa3Ph4kO0= +-----END CERTIFICATE----- + """.strip() + + # DSA public key as dumped by openssl + y_str = """ +72:95:68:d6:4a:72:87:13:c9:87:c7:50:39:9b:fd: +06:43:1e:b1:9e:15:88:7b:7f:8c:24:09:c4:85:11: +b1:03:62:78:48:2b:1f:59:4d:0e:c2:09:48:41:3a: +21:d7:3e:6e:51:4f:68:0e:2f:5c:62:03:4f:05:eb: +ce:15:19:c3:85:cd:b2:68:96:3a:54:06:27:23:e0: +fe:e8:9e:48:53:a6:1d:d2:26:86:fe:24:9c:e2:c5: +78:6c:5a:f4:6b:17:0f:a6:c7:b7:48:dd:17:69:0c: +62:29:e4:85:05:f1:67:a0:ca:8b:f9:d7:7d:19:bd: +d0:3d:94:a5:94:17:15:0f + """ + p_str = """ +00:b7:dd:f2:0c:84:a5:53:c0:d1:92:3a:f4:aa:77: +9f:22:5c:bb:93:74:65:9d:ca:d4:af:61:95:e4:0b: +36:a2:84:71:93:a1:ed:af:c2:d8:9d:53:3d:d9:87: +4c:b2:74:ac:0c:01:67:59:d6:c6:70:11:4a:04:69: +87:38:86:0c:5b:29:28:26:10:c1:87:12:33:3f:0a: +a0:58:2f:5d:b5:e9:b9:c8:72:a0:50:02:89:0d:8b: +a9:99:dc:e4:c6:a4:b8:b4:2d:2d:c4:1c:89:26:06: +62:3d:f3:99:97:58:32:86:bd:d3:81:75:68:35:ab: +f8:3c:50:23:a0:d5:0b:7f:db + """ + q_str = """ +00:86:a5:cc:08:9e:9f:40:ad:c6:d8:ef:52:df:f0: +82:ff:25:59:5c:2b + """ + g_str = """ +51:5d:13:a5:dd:1d:a4:85:7e:e5:d7:42:d0:8c:31: +20:a3:70:75:23:38:d7:53:f8:cf:31:c3:01:97:a5: +ce:fa:70:49:5f:1a:ff:b1:c6:ea:f0:08:e2:bf:b2: +d3:9c:6a:52:8e:0a:f2:1e:db:df:8f:cf:3b:91:fb: +e8:f1:06:d2:4b:79:29:de:fa:b9:b9:2a:88:ba:2e: +c4:ee:bd:e3:82:e5:9a:2e:3e:b5:e8:01:b5:1d:63: +9c:b5:72:54:8e:ab:22:69:b6:70:b2:9d:a1:d3:d8: +07:4f:39:87:11:70:4e:9a:20:11:bb:cc:45:c4:a0: +07:c6:cb:a8:ee:89:c0:6f + """ + + key = DSA.importKey(x509_v3_cert) + for comp_name in ('y', 'p', 'q', 'g'): + comp_str = locals()[comp_name + "_str"] + comp = int(re.sub("[^0-9a-f]", "", comp_str), 16) + self.assertEqual(getattr(key, comp_name), comp) + self.assertFalse(key.has_private()) + + +if __name__ == '__main__': + unittest.main() + +def get_tests(config={}): + tests = [] + tests += list_test_cases(ImportKeyTests) + tests += list_test_cases(ImportKeyFromX509Cert) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_import_ECC.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_import_ECC.py new file mode 100644 index 00000000..f9222c86 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_import_ECC.py @@ -0,0 +1,2643 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import os +import errno +import warnings +import unittest +from binascii import unhexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Util.py3compat import bord, tostr, FileNotFoundError +from Crypto.Util.asn1 import DerSequence, DerBitString +from Crypto.Util.number import bytes_to_long +from Crypto.Hash import SHAKE128 + +from Crypto.PublicKey import ECC + +try: + import pycryptodome_test_vectors # type: ignore + test_vectors_available = True +except ImportError: + test_vectors_available = False + + +class MissingTestVectorException(ValueError): + pass + + +def load_file(file_name, mode="rb"): + results = None + + try: + if not test_vectors_available: + raise FileNotFoundError(errno.ENOENT, + os.strerror(errno.ENOENT), + file_name) + + dir_comps = ("PublicKey", "ECC") + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + with open(full_file_name, mode) as file_in: + results = file_in.read() + + except FileNotFoundError: + warnings.warn("Warning: skipping extended tests for ECC", + UserWarning, + stacklevel=2) + + if results is None: + raise MissingTestVectorException("Missing %s" % file_name) + + return results + + +def compact(lines): + ext = b"".join(lines) + return unhexlify(tostr(ext).replace(" ", "").replace(":", "")) + + +def create_ref_keys_p192(): + key_len = 24 + key_lines = load_file("ecc_p192.txt").splitlines() + private_key_d = bytes_to_long(compact(key_lines[2:4])) + public_key_xy = compact(key_lines[5:9]) + assert bord(public_key_xy[0]) == 4 # Uncompressed + public_key_x = bytes_to_long(public_key_xy[1:key_len+1]) + public_key_y = bytes_to_long(public_key_xy[key_len+1:]) + + return (ECC.construct(curve="P-192", d=private_key_d), + ECC.construct(curve="P-192", point_x=public_key_x, point_y=public_key_y)) + + +def create_ref_keys_p224(): + key_len = 28 + key_lines = load_file("ecc_p224.txt").splitlines() + private_key_d = bytes_to_long(compact(key_lines[2:4])) + public_key_xy = compact(key_lines[5:9]) + assert bord(public_key_xy[0]) == 4 # Uncompressed + public_key_x = bytes_to_long(public_key_xy[1:key_len+1]) + public_key_y = bytes_to_long(public_key_xy[key_len+1:]) + + return (ECC.construct(curve="P-224", d=private_key_d), + ECC.construct(curve="P-224", point_x=public_key_x, point_y=public_key_y)) + + +def create_ref_keys_p256(): + key_len = 32 + key_lines = load_file("ecc_p256.txt").splitlines() + 
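+
+    # [Editor's sketch, not in the original helpers] How each
+    # create_ref_keys_* function above splits the SEC1 point: the leading
+    # byte 0x04 marks an uncompressed point, followed by X and Y, each
+    # exactly key_len bytes (Python 3 indexing assumed; the suite itself
+    # uses bord() for Python 2 compatibility).
+    def _split_uncompressed_point(public_key_xy, key_len):
+        from Crypto.Util.number import bytes_to_long  # local import: sketch only
+        assert public_key_xy[0] == 4                  # uncompressed marker
+        x = bytes_to_long(public_key_xy[1:key_len + 1])
+        y = bytes_to_long(public_key_xy[key_len + 1:])
+        return x, y
+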
private_key_d = bytes_to_long(compact(key_lines[2:5])) + public_key_xy = compact(key_lines[6:11]) + assert bord(public_key_xy[0]) == 4 # Uncompressed + public_key_x = bytes_to_long(public_key_xy[1:key_len+1]) + public_key_y = bytes_to_long(public_key_xy[key_len+1:]) + + return (ECC.construct(curve="P-256", d=private_key_d), + ECC.construct(curve="P-256", point_x=public_key_x, point_y=public_key_y)) + + +def create_ref_keys_p384(): + key_len = 48 + key_lines = load_file("ecc_p384.txt").splitlines() + private_key_d = bytes_to_long(compact(key_lines[2:6])) + public_key_xy = compact(key_lines[7:14]) + assert bord(public_key_xy[0]) == 4 # Uncompressed + public_key_x = bytes_to_long(public_key_xy[1:key_len+1]) + public_key_y = bytes_to_long(public_key_xy[key_len+1:]) + + return (ECC.construct(curve="P-384", d=private_key_d), + ECC.construct(curve="P-384", point_x=public_key_x, point_y=public_key_y)) + + +def create_ref_keys_p521(): + key_len = 66 + key_lines = load_file("ecc_p521.txt").splitlines() + private_key_d = bytes_to_long(compact(key_lines[2:7])) + public_key_xy = compact(key_lines[8:17]) + assert bord(public_key_xy[0]) == 4 # Uncompressed + public_key_x = bytes_to_long(public_key_xy[1:key_len+1]) + public_key_y = bytes_to_long(public_key_xy[key_len+1:]) + + return (ECC.construct(curve="P-521", d=private_key_d), + ECC.construct(curve="P-521", point_x=public_key_x, point_y=public_key_y)) + + +def create_ref_keys_ed25519(): + key_lines = load_file("ecc_ed25519.txt").splitlines() + seed = compact(key_lines[5:8]) + key = ECC.construct(curve="Ed25519", seed=seed) + return (key, key.public_key()) + + +def create_ref_keys_ed448(): + key_lines = load_file("ecc_ed448.txt").splitlines() + seed = compact(key_lines[6:10]) + key = ECC.construct(curve="Ed448", seed=seed) + return (key, key.public_key()) + + +# Create reference key pair +# ref_private, ref_public = create_ref_keys_p521() + +def get_fixed_prng(): + return SHAKE128.new().update(b"SEED").read + + +def extract_bitstring_from_spki(data): + seq = DerSequence() + seq.decode(data) + bs = DerBitString() + bs.decode(seq[1]) + return bs.value + + +class TestImport(unittest.TestCase): + + def test_empty(self): + self.assertRaises(ValueError, ECC.import_key, b"") + + +class TestImport_P192(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_P192, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p192() + + def test_import_public_der(self): + key_file = load_file("ecc_p192_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_sec1_uncompressed(self): + key_file = load_file("ecc_p192_public.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P192') + self.assertEqual(self.ref_public, key) + + def test_import_sec1_compressed(self): + key_file = load_file("ecc_p192_public_compressed.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P192') + self.assertEqual(self.ref_public, key) + + def test_import_rfc5915_der(self): + key_file = load_file("ecc_p192_private.der") + + key = ECC._import_rfc5915_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = 
ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_clear(self): + key_file = load_file("ecc_p192_private_p8_clear.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_in_pem_clear(self): + key_file = load_file("ecc_p192_private_p8_clear.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_p192_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_p192_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_p192_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_p192_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_p192_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256", "aes256_gcm": + key_file = load_file("ecc_p192_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_p192_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + +class TestImport_P224(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_P224, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p224() + + def test_import_public_der(self): + key_file = load_file("ecc_p224_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_sec1_uncompressed(self): + key_file = load_file("ecc_p224_public.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P224') + self.assertEqual(self.ref_public, key) + + def test_import_sec1_compressed(self): + key_file = load_file("ecc_p224_public_compressed.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P224') + self.assertEqual(self.ref_public, key) + + def test_import_rfc5915_der(self): + key_file = load_file("ecc_p224_private.der") + + key = ECC._import_rfc5915_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_clear(self): + key_file = load_file("ecc_p224_private_p8_clear.der") + + key = 
ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_in_pem_clear(self): + key_file = load_file("ecc_p224_private_p8_clear.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_p224_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_p224_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_p224_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_p224_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_p224_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256", "aes256_gcm": + key_file = load_file("ecc_p224_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_p224_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + +class TestImport_P256(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_P256, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p256() + + def test_import_public_der(self): + key_file = load_file("ecc_p256_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_sec1_uncompressed(self): + key_file = load_file("ecc_p256_public.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P256') + self.assertEqual(self.ref_public, key) + + def test_import_sec1_compressed(self): + key_file = load_file("ecc_p256_public_compressed.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P256') + self.assertEqual(self.ref_public, key) + + def test_import_rfc5915_der(self): + key_file = load_file("ecc_p256_private.der") + + key = ECC._import_rfc5915_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_clear(self): + key_file = load_file("ecc_p256_private_p8_clear.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_in_pem_clear(self): 
+ key_file = load_file("ecc_p256_private_p8_clear.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_p256_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_p256_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_p256_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_p256_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_p256_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_with_ecparams(self): + key_file = load_file("ecc_p256_private_ecparams.pem") + key = ECC.import_key(key_file) + # We just check if the import succeeds + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256", "aes256_gcm": + key_file = load_file("ecc_p256_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_p256_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_public(self): + key_file = load_file("ecc_p256_public_openssh.txt") + + key = ECC._import_openssh_public(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_private_clear(self): + key_file = load_file("ecc_p256_private_openssh.pem") + key_file_old = load_file("ecc_p256_private_openssh_old.pem") + + key = ECC.import_key(key_file) + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + def test_import_openssh_private_password(self): + key_file = load_file("ecc_p256_private_openssh_pwd.pem") + key_file_old = load_file("ecc_p256_private_openssh_pwd_old.pem") + + key = ECC.import_key(key_file, b"password") + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + +class TestImport_P384(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_P384, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p384() + + def test_import_public_der(self): + key_file = load_file("ecc_p384_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_sec1_uncompressed(self): + key_file = load_file("ecc_p384_public.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P384') + self.assertEqual(self.ref_public, key) + + def test_import_sec1_compressed(self): + key_file = 
load_file("ecc_p384_public_compressed.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P384') + self.assertEqual(self.ref_public, key) + + def test_import_rfc5915_der(self): + key_file = load_file("ecc_p384_private.der") + + key = ECC._import_rfc5915_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_clear(self): + key_file = load_file("ecc_p384_private_p8_clear.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_in_pem_clear(self): + key_file = load_file("ecc_p384_private_p8_clear.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_p384_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_p384_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_p384_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_p384_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_p384_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256", "aes256_gcm": + key_file = load_file("ecc_p384_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_p384_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_public(self): + key_file = load_file("ecc_p384_public_openssh.txt") + + key = ECC._import_openssh_public(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_private_clear(self): + key_file = load_file("ecc_p384_private_openssh.pem") + key_file_old = load_file("ecc_p384_private_openssh_old.pem") + + key = ECC.import_key(key_file) + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + def test_import_openssh_private_password(self): + key_file = load_file("ecc_p384_private_openssh_pwd.pem") + key_file_old = load_file("ecc_p384_private_openssh_pwd_old.pem") + + key = ECC.import_key(key_file, b"password") + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + +class TestImport_P521(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_P521, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = 
create_ref_keys_p521() + + def test_import_public_der(self): + key_file = load_file("ecc_p521_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_sec1_uncompressed(self): + key_file = load_file("ecc_p521_public.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P521') + self.assertEqual(self.ref_public, key) + + def test_import_sec1_compressed(self): + key_file = load_file("ecc_p521_public_compressed.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P521') + self.assertEqual(self.ref_public, key) + + def test_import_rfc5915_der(self): + key_file = load_file("ecc_p521_private.der") + + key = ECC._import_rfc5915_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_clear(self): + key_file = load_file("ecc_p521_private_p8_clear.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_in_pem_clear(self): + key_file = load_file("ecc_p521_private_p8_clear.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_p521_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_p521_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_p521_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_p521_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_p521_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256", "aes256_gcm": + key_file = load_file("ecc_p521_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_p521_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_public(self): + key_file = load_file("ecc_p521_public_openssh.txt") + + key = ECC._import_openssh_public(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_private_clear(self): + key_file = load_file("ecc_p521_private_openssh.pem") + key_file_old = 
load_file("ecc_p521_private_openssh_old.pem") + + key = ECC.import_key(key_file) + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + def test_import_openssh_private_password(self): + key_file = load_file("ecc_p521_private_openssh_pwd.pem") + key_file_old = load_file("ecc_p521_private_openssh_pwd_old.pem") + + key = ECC.import_key(key_file, b"password") + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + +class TestExport_P192(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_P192, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p192() + + def test_export_public_der_uncompressed(self): + key_file = load_file("ecc_p192_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(False) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_der_compressed(self): + key_file = load_file("ecc_p192_public.der") + pub_key = ECC.import_key(key_file) + key_file_compressed = pub_key.export_key(format="DER", compress=True) + + key_file_compressed_ref = load_file("ecc_p192_public_compressed.der") + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_public_sec1_uncompressed(self): + key_file = load_file("ecc_p192_public.der") + value = extract_bitstring_from_spki(key_file) + + encoded = self.ref_public.export_key(format="SEC1") + self.assertEqual(value, encoded) + + def test_export_public_sec1_compressed(self): + key_file = load_file("ecc_p192_public.der") + encoded = self.ref_public.export_key(format="SEC1", compress=True) + + key_file_compressed_ref = load_file("ecc_p192_public_compressed.der") + value = extract_bitstring_from_spki(key_file_compressed_ref) + self.assertEqual(value, encoded) + + def test_export_rfc5915_private_der(self): + key_file = load_file("ecc_p192_private.der") + + encoded = self.ref_private._export_rfc5915_private_der() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_p192_private_p8_clear.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem_uncompressed(self): + key_file = load_file("ecc_p192_public.pem", "rt").strip() + + encoded = self.ref_private._export_public_pem(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + encoded = 
self.ref_public.export_key(format="PEM", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_pem_compressed(self): + key_file = load_file("ecc_p192_public.pem", "rt").strip() + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="PEM", compress=True) + key_file_compressed_ref = load_file("ecc_p192_public_compressed.pem", "rt").strip() + + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_p192_private.pem", "rt").strip() + + encoded = self.ref_private._export_private_pem(None) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private._export_private_pem(passphrase=b"secret") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "EC PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + use_pkcs8=False) + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_private_pkcs8_and_pem_1(self): + # PKCS8 inside PEM with both unencrypted + key_file = load_file("ecc_p192_private_p8_clear.pem", "rt").strip() + + encoded = self.ref_private._export_private_clear_pkcs8_in_clear_pem() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_and_pem_2(self): + # PKCS8 inside PEM with PKCS8 encryption + encoded = self.ref_private._export_private_encrypted_pkcs8_in_clear_pem("secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + # --- + + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase=b"secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def 
test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.ref_private.export_key(format="PEM", passphrase="secret", + use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # DER format but no PKCS#8 + self.assertRaises(ValueError, self.ref_private.export_key, format="DER", + passphrase="secret", + use_pkcs8=False, + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # Incorrect parameters for public keys + self.assertRaises(ValueError, self.ref_public.export_key, format="DER", + use_pkcs8=False) + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + def test_compressed_curve(self): + + # Compressed P-192 curve (Y-point is even) + pem1 = """-----BEGIN EC PRIVATE KEY----- + MF8CAQEEGHvhXmIW95JxZYfd4AUPu9BwknjuvS36aqAKBggqhkjOPQMBAaE0AzIA + BLJZCyTu35DQIlqvMlBynn3k1Ig+dWfg/brRhHecxptrbloqFSP8ITw0CwbGF+2X + 5g== + -----END EC PRIVATE KEY-----""" + + # Compressed P-192 curve (Y-point is odd) + pem2 = """-----BEGIN EC PRIVATE KEY----- + MF8CAQEEGA3rAotUaWl7d47eX6tz9JmLzOMJwl13XaAKBggqhkjOPQMBAaE0AzIA + BG4tHlTBBBGokcWmGm2xubVB0NvPC/Ou5AYwivs+3iCxmEjsymVAj6iiuX2Lxr6g + /Q== + -----END EC PRIVATE KEY-----""" + + key1 = ECC.import_key(pem1) + low16 = int(key1.pointQ.y % 65536) + self.assertEqual(low16, 0x97E6) + + key2 = ECC.import_key(pem2) + low16 = int(key2.pointQ.y % 65536) + self.assertEqual(low16, 0xA0FD) + + +class TestExport_P224(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_P224, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p224() + + def test_export_public_der_uncompressed(self): + key_file = load_file("ecc_p224_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(False) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_der_compressed(self): + key_file = load_file("ecc_p224_public.der") + pub_key = ECC.import_key(key_file) + key_file_compressed = pub_key.export_key(format="DER", compress=True) + + key_file_compressed_ref = load_file("ecc_p224_public_compressed.der") + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_public_sec1_uncompressed(self): + key_file = load_file("ecc_p224_public.der") + value = extract_bitstring_from_spki(key_file) + + encoded = self.ref_public.export_key(format="SEC1") + self.assertEqual(value, encoded) + + def test_export_public_sec1_compressed(self): + key_file = load_file("ecc_p224_public.der") + encoded = self.ref_public.export_key(format="SEC1", compress=True) + + key_file_compressed_ref = load_file("ecc_p224_public_compressed.der") + value = extract_bitstring_from_spki(key_file_compressed_ref) + self.assertEqual(value, encoded) + + def test_export_rfc5915_private_der(self): + key_file = load_file("ecc_p224_private.der") + + encoded = self.ref_private._export_rfc5915_private_der() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", use_pkcs8=False) + 
self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_p224_private_p8_clear.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem_uncompressed(self): + key_file = load_file("ecc_p224_public.pem", "rt").strip() + + encoded = self.ref_private._export_public_pem(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="PEM", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_pem_compressed(self): + key_file = load_file("ecc_p224_public.pem", "rt").strip() + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="PEM", compress=True) + key_file_compressed_ref = load_file("ecc_p224_public_compressed.pem", "rt").strip() + + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_p224_private.pem", "rt").strip() + + encoded = self.ref_private._export_private_pem(None) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private._export_private_pem(passphrase=b"secret") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "EC PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + use_pkcs8=False) + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_private_pkcs8_and_pem_1(self): + # PKCS8 inside PEM with both unencrypted + key_file = load_file("ecc_p224_private_p8_clear.pem", "rt").strip() + + encoded = self.ref_private._export_private_clear_pkcs8_in_clear_pem() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_and_pem_2(self): + # PKCS8 inside PEM with PKCS8 encryption + encoded = self.ref_private._export_private_encrypted_pkcs8_in_clear_pem("secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = 
self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + # --- + + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase=b"secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.ref_private.export_key(format="PEM", passphrase="secret", + use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # DER format but no PKCS#8 + self.assertRaises(ValueError, self.ref_private.export_key, format="DER", + passphrase="secret", + use_pkcs8=False, + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # Incorrect parameters for public keys + self.assertRaises(ValueError, self.ref_public.export_key, format="DER", + use_pkcs8=False) + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + def test_compressed_curve(self): + + # Compressed P-224 curve (Y-point is even) + pem1 = """-----BEGIN EC PRIVATE KEY----- + MGgCAQEEHPYicBNI9nd6wDKAX2l+f3A0Q+KWUQeMqSt5GoOgBwYFK4EEACGhPAM6 + AATCL6rUIDT14zXKoS5GQUMDP/tpc+1iI/FyEZikt2roKDkhU5q08srmqaysbfJN + eUr7Xf1lnCVGag== + -----END EC PRIVATE KEY-----""" + + # Compressed P-224 curve (Y-point is odd) + pem2 = """-----BEGIN EC PRIVATE KEY----- + MGgCAQEEHEFjbaVPLJ3ngZyCibCvT0RLUqSlHjC5Z3e0FtugBwYFK4EEACGhPAM6 + AAT5IvL2V6m48y1JLMGr6ZbnOqNKP9hMf9mxyVkk6/SaRoBoJVkXrNIpYL0P7DS7 + QF8E/OGeZRwvow== + -----END EC PRIVATE KEY-----""" + + key1 = ECC.import_key(pem1) + low16 = int(key1.pointQ.y % 65536) + self.assertEqual(low16, 0x466A) + + key2 = ECC.import_key(pem2) + low16 = int(key2.pointQ.y % 65536) + self.assertEqual(low16, 0x2FA3) + + +class TestExport_P256(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_P256, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p256() + + def test_export_public_der_uncompressed(self): + key_file = load_file("ecc_p256_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(False) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, 
encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_der_compressed(self): + key_file = load_file("ecc_p256_public.der") + pub_key = ECC.import_key(key_file) + key_file_compressed = pub_key.export_key(format="DER", compress=True) + + key_file_compressed_ref = load_file("ecc_p256_public_compressed.der") + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_public_sec1_uncompressed(self): + key_file = load_file("ecc_p256_public.der") + value = extract_bitstring_from_spki(key_file) + + encoded = self.ref_public.export_key(format="SEC1") + self.assertEqual(value, encoded) + + def test_export_public_sec1_compressed(self): + key_file = load_file("ecc_p256_public.der") + encoded = self.ref_public.export_key(format="SEC1", compress=True) + + key_file_compressed_ref = load_file("ecc_p256_public_compressed.der") + value = extract_bitstring_from_spki(key_file_compressed_ref) + self.assertEqual(value, encoded) + + def test_export_rfc5915_private_der(self): + key_file = load_file("ecc_p256_private.der") + + encoded = self.ref_private._export_rfc5915_private_der() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_p256_private_p8_clear.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem_uncompressed(self): + key_file = load_file("ecc_p256_public.pem", "rt").strip() + + encoded = self.ref_private._export_public_pem(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="PEM", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_pem_compressed(self): + key_file = load_file("ecc_p256_public.pem", "rt").strip() + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="PEM", compress=True) + key_file_compressed_ref = load_file("ecc_p256_public_compressed.pem", "rt").strip() + + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_p256_private.pem", "rt").strip() + + encoded = self.ref_private._export_private_pem(None) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private._export_private_pem(passphrase=b"secret") + + # This should prove that the output is password-protected 
+ self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "EC PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + use_pkcs8=False) + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_private_pkcs8_and_pem_1(self): + # PKCS8 inside PEM with both unencrypted + key_file = load_file("ecc_p256_private_p8_clear.pem", "rt").strip() + + encoded = self.ref_private._export_private_clear_pkcs8_in_clear_pem() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_and_pem_2(self): + # PKCS8 inside PEM with PKCS8 encryption + encoded = self.ref_private._export_private_encrypted_pkcs8_in_clear_pem("secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_openssh_uncompressed(self): + key_file = load_file("ecc_p256_public_openssh.txt", "rt") + + encoded = self.ref_public._export_openssh(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="OpenSSH") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="OpenSSH", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_openssh_compressed(self): + key_file = load_file("ecc_p256_public_openssh.txt", "rt") + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="OpenSSH", compress=True) + assert len(key_file) > len(key_file_compressed) + self.assertEqual(pub_key, ECC.import_key(key_file_compressed)) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + # --- + + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase=b"secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.ref_private.export_key(format="PEM", 
passphrase="secret", + use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # DER format but no PKCS#8 + self.assertRaises(ValueError, self.ref_private.export_key, format="DER", + passphrase="secret", + use_pkcs8=False, + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # Incorrect parameters for public keys + self.assertRaises(ValueError, self.ref_public.export_key, format="DER", + use_pkcs8=False) + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # No private keys with OpenSSH + self.assertRaises(ValueError, self.ref_private.export_key, format="OpenSSH", + passphrase="secret") + + + def test_compressed_curve(self): + + # Compressed P-256 curve (Y-point is even) + pem1 = """-----BEGIN EC PRIVATE KEY----- + MFcCAQEEIHTuc09jC51xXomV6MVCDN+DpAAvSmaJWZPTEHM6D5H1oAoGCCqGSM49 + AwEHoSQDIgACWFuGbHe8yJ43rir7PMTE9w8vHz0BSpXHq90Xi7/s+a0= + -----END EC PRIVATE KEY-----""" + + # Compressed P-256 curve (Y-point is odd) + pem2 = """-----BEGIN EC PRIVATE KEY----- + MFcCAQEEIFggiPN9SQP+FAPTCPp08fRUz7rHp2qNBRcBJ1DXhb3ZoAoGCCqGSM49 + AwEHoSQDIgADLpph1trTIlVfa8NJvlMUPyWvL+wP+pW3BJITUL/wj9A= + -----END EC PRIVATE KEY-----""" + + key1 = ECC.import_key(pem1) + low16 = int(key1.pointQ.y % 65536) + self.assertEqual(low16, 0xA6FC) + + key2 = ECC.import_key(pem2) + low16 = int(key2.pointQ.y % 65536) + self.assertEqual(low16, 0x6E57) + + +class TestExport_P384(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_P384, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p384() + + def test_export_public_der_uncompressed(self): + key_file = load_file("ecc_p384_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(False) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_der_compressed(self): + key_file = load_file("ecc_p384_public.der") + pub_key = ECC.import_key(key_file) + key_file_compressed = pub_key.export_key(format="DER", compress=True) + + key_file_compressed_ref = load_file("ecc_p384_public_compressed.der") + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_public_sec1_uncompressed(self): + key_file = load_file("ecc_p384_public.der") + value = extract_bitstring_from_spki(key_file) + + encoded = self.ref_public.export_key(format="SEC1") + self.assertEqual(value, encoded) + + def test_export_public_sec1_compressed(self): + key_file = load_file("ecc_p384_public.der") + encoded = self.ref_public.export_key(format="SEC1", compress=True) + + key_file_compressed_ref = load_file("ecc_p384_public_compressed.der") + value = extract_bitstring_from_spki(key_file_compressed_ref) + self.assertEqual(value, encoded) + + def test_export_rfc5915_private_der(self): + key_file = load_file("ecc_p384_private.der") + + encoded = self.ref_private._export_rfc5915_private_der() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_clear(self): + key_file = 
load_file("ecc_p384_private_p8_clear.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem_uncompressed(self): + key_file = load_file("ecc_p384_public.pem", "rt").strip() + + encoded = self.ref_private._export_public_pem(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="PEM", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_pem_compressed(self): + key_file = load_file("ecc_p384_public.pem", "rt").strip() + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="PEM", compress=True) + key_file_compressed_ref = load_file("ecc_p384_public_compressed.pem", "rt").strip() + + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_p384_private.pem", "rt").strip() + + encoded = self.ref_private._export_private_pem(None) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private._export_private_pem(passphrase=b"secret") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "EC PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + use_pkcs8=False) + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_private_pkcs8_and_pem_1(self): + # PKCS8 inside PEM with both unencrypted + key_file = load_file("ecc_p384_private_p8_clear.pem", "rt").strip() + + encoded = self.ref_private._export_private_clear_pkcs8_in_clear_pem() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_and_pem_2(self): + # PKCS8 inside PEM with PKCS8 encryption + encoded = self.ref_private._export_private_encrypted_pkcs8_in_clear_pem("secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = 
ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_openssh_uncompressed(self): + key_file = load_file("ecc_p384_public_openssh.txt", "rt") + + encoded = self.ref_public._export_openssh(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="OpenSSH") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="OpenSSH", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_openssh_compressed(self): + key_file = load_file("ecc_p384_public_openssh.txt", "rt") + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="OpenSSH", compress=True) + assert len(key_file) > len(key_file_compressed) + self.assertEqual(pub_key, ECC.import_key(key_file_compressed)) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + # --- + + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase=b"secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.ref_private.export_key(format="PEM", passphrase="secret", + use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # DER format but no PKCS#8 + self.assertRaises(ValueError, self.ref_private.export_key, format="DER", + passphrase="secret", + use_pkcs8=False, + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # Incorrect parameters for public keys + self.assertRaises(ValueError, self.ref_public.export_key, format="DER", + use_pkcs8=False) + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # No private keys with OpenSSH + self.assertRaises(ValueError, self.ref_private.export_key, format="OpenSSH", + passphrase="secret") + + def test_compressed_curve(self): + + # Compressed P-384 curve (Y-point is even) + # openssl ecparam -name secp384p1 -genkey -noout -conv_form compressed -out /tmp/a.pem + # openssl ec -in /tmp/a.pem -text -noout + pem1 = """-----BEGIN EC PRIVATE KEY----- +MIGkAgEBBDAM0lEIhvXuekK2SWtdbgOcZtBaxa9TxfpO/GcDFZLCJ3JVXaTgwken +QT+C+XLtD6WgBwYFK4EEACKhZANiAATs0kZMhFDu8DoBC21jrSDPyAUn4aXZ/DM4 +ylhDfWmb4LEbeszXceIzfhIUaaGs5y1xXaqf5KXTiAAYx2pKUzAAM9lcGUHCGKJG +k4AgUmVJON29XoUilcFrzjDmuye3B6Q= +-----END EC PRIVATE KEY-----""" + + 
# Compressed P-384 curve (Y-point is odd) + pem2 = """-----BEGIN EC PRIVATE KEY----- +MIGkAgEBBDDHPFTslYLltE16fHdSDTtE/2HTmd3M8mqy5MttAm4wZ833KXiGS9oe +kFdx9sNV0KygBwYFK4EEACKhZANiAASLIE5RqVMtNhtBH/u/p/ifqOAlKnK/+RrQ +YC46ZRsnKNayw3wATdPjgja7L/DSII3nZK0G6KOOVwJBznT/e+zudUJYhZKaBLRx +/bgXyxUtYClOXxb1Y/5N7txLstYRyP0= +-----END EC PRIVATE KEY-----""" + + key1 = ECC.import_key(pem1) + low16 = int(key1.pointQ.y % 65536) + self.assertEqual(low16, 0x07a4) + + key2 = ECC.import_key(pem2) + low16 = int(key2.pointQ.y % 65536) + self.assertEqual(low16, 0xc8fd) + + +class TestExport_P521(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_P521, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p521() + + def test_export_public_der_uncompressed(self): + key_file = load_file("ecc_p521_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(False) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_der_compressed(self): + key_file = load_file("ecc_p521_public.der") + pub_key = ECC.import_key(key_file) + key_file_compressed = pub_key.export_key(format="DER", compress=True) + + key_file_compressed_ref = load_file("ecc_p521_public_compressed.der") + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_public_sec1_uncompressed(self): + key_file = load_file("ecc_p521_public.der") + value = extract_bitstring_from_spki(key_file) + + encoded = self.ref_public.export_key(format="SEC1") + self.assertEqual(value, encoded) + + encoded = self.ref_public.export_key(format="raw") + self.assertEqual(value, encoded) + + def test_export_public_sec1_compressed(self): + key_file = load_file("ecc_p521_public.der") + encoded = self.ref_public.export_key(format="SEC1", compress=True) + + key_file_compressed_ref = load_file("ecc_p521_public_compressed.der") + value = extract_bitstring_from_spki(key_file_compressed_ref) + self.assertEqual(value, encoded) + + encoded = self.ref_public.export_key(format="raw", compress=True) + self.assertEqual(value, encoded) + + def test_export_rfc5915_private_der(self): + key_file = load_file("ecc_p521_private.der") + + encoded = self.ref_private._export_rfc5915_private_der() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_p521_private_p8_clear.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem_uncompressed(self): + 
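+ # The exported PEM must match the stored reference key exactly.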
key_file = load_file("ecc_p521_public.pem", "rt").strip() + + encoded = self.ref_private._export_public_pem(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="PEM", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_pem_compressed(self): + key_file = load_file("ecc_p521_public.pem", "rt").strip() + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="PEM", compress=True) + key_file_compressed_ref = load_file("ecc_p521_public_compressed.pem", "rt").strip() + + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_p521_private.pem", "rt").strip() + + encoded = self.ref_private._export_private_pem(None) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private._export_private_pem(passphrase=b"secret") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "EC PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + use_pkcs8=False) + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_private_pkcs8_and_pem_1(self): + # PKCS8 inside PEM with both unencrypted + key_file = load_file("ecc_p521_private_p8_clear.pem", "rt").strip() + + encoded = self.ref_private._export_private_clear_pkcs8_in_clear_pem() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_and_pem_2(self): + # PKCS8 inside PEM with PKCS8 encryption + encoded = self.ref_private._export_private_encrypted_pkcs8_in_clear_pem("secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_openssh_uncompressed(self): + key_file = load_file("ecc_p521_public_openssh.txt", "rt") + + encoded = self.ref_public._export_openssh(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="OpenSSH") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="OpenSSH", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_openssh_compressed(self): + key_file = load_file("ecc_p521_public_openssh.txt", "rt") + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="OpenSSH", compress=True) + assert len(key_file) > len(key_file_compressed) + self.assertEqual(pub_key, ECC.import_key(key_file_compressed)) + + def test_prng(self): + # Test that password-protected containers 
use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + # --- + + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase=b"secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.ref_private.export_key(format="PEM", passphrase="secret", + use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # DER format but no PKCS#8 + self.assertRaises(ValueError, self.ref_private.export_key, format="DER", + passphrase="secret", + use_pkcs8=False, + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # Incorrect parameters for public keys + self.assertRaises(ValueError, self.ref_public.export_key, format="DER", + use_pkcs8=False) + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # No private keys with OpenSSH + self.assertRaises(ValueError, self.ref_private.export_key, format="OpenSSH", + passphrase="secret") + + def test_compressed_curve(self): + + # Compressed P-521 curve (Y-point is even) + # openssl ecparam -name secp521r1 -genkey -noout -conv_form compressed -out /tmp/a.pem + # openssl ec -in /tmp/a.pem -text -noout + pem1 = """-----BEGIN EC PRIVATE KEY----- +MIHcAgEBBEIAnm1CEjVjvNfXEN730p+D6su5l+mOztdc5XmTEoti+s2R4GQ4mAv3 +0zYLvyklvOHw0+yy8d0cyGEJGb8T3ZVKmg2gBwYFK4EEACOhgYkDgYYABAHzjTI1 +ckxQ3Togi0LAxiG0PucdBBBs5oIy3df95xv6SInp70z+4qQ2EltEmdNMssH8eOrl +M5CYdZ6nbcHMVaJUvQEzTrYxvFjOgJiOd+E9eBWbLkbMNqsh1UKVO6HbMbW0ohCI +uGxO8tM6r3w89/qzpG2SvFM/fvv3mIR30wSZDD84qA== +-----END EC PRIVATE KEY-----""" + + # Compressed P-521 curve (Y-point is odd) + pem2 = """-----BEGIN EC PRIVATE KEY----- +MIHcAgEBBEIB84OfhJluLBRLn3+cC/RQ37C2SfQVP/t0gQK2tCsTf5avRcWYRrOJ +PmX9lNnkC0Hobd75QFRmdxrB0Wd1/M4jZOWgBwYFK4EEACOhgYkDgYYABAAMZcdJ +1YLCGHt3bHCEzdidVy6+brlJIbv1aQ9fPQLF7WKNv4c8w3H8d5a2+SDZilBOsk5c +6cNJDMz2ExWQvxl4CwDJtJGt1+LHVKFGy73NANqVxMbRu+2F8lOxkNp/ziFTbVyV +vv6oYkMIIi7r5oQWAiQDrR2mlrrFDL9V7GH/r8SWQw== +-----END EC PRIVATE KEY-----""" + + key1 = ECC.import_key(pem1) + low16 = int(key1.pointQ.y % 65536) + self.assertEqual(low16, 0x38a8) + + key2 = ECC.import_key(pem2) + low16 = int(key2.pointQ.y % 65536) + self.assertEqual(low16, 0x9643) + + +class TestImport_Ed25519(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_Ed25519, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = 
create_ref_keys_ed25519() + + def test_import_public_der(self): + key_file = load_file("ecc_ed25519_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_pkcs8_der(self): + key_file = load_file("ecc_ed25519_private.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_ed25519_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_ed25519_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_ed25519_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_ed25519_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_ed25519_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256": + key_file = load_file("ecc_ed25519_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_ed25519_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_public(self): + key_file = load_file("ecc_ed25519_public_openssh.txt") + key = ECC._import_openssh_public(key_file) + self.assertFalse(key.has_private()) + key = ECC.import_key(key_file) + self.assertFalse(key.has_private()) + + def test_import_openssh_private_clear(self): + key_file = load_file("ecc_ed25519_private_openssh.pem") + key = ECC.import_key(key_file) # success is the absence of an exception + + def test_import_openssh_private_password(self): + key_file = load_file("ecc_ed25519_private_openssh_pwd.pem") + key = ECC.import_key(key_file, b"password") # success is the absence of an exception + + +class TestExport_Ed25519(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_Ed25519, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_ed25519() + + def test_export_public_der(self): + key_file = load_file("ecc_ed25519_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(True) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_sec1(self): + self.assertRaises(ValueError, self.ref_public.export_key, format="SEC1") + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_ed25519_private.der") + + encoded = 
self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + self.assertRaises(ValueError, self.ref_private.export_key, + format="DER", use_pkcs8=False) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem(self): + key_file_ref = load_file("ecc_ed25519_public.pem", "rt").strip() + key_file = self.ref_public.export_key(format="PEM").strip() + self.assertEqual(key_file_ref, key_file) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_ed25519_private.pem", "rt").strip() + encoded = self.ref_private.export_key(format="PEM").strip() + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private.export_key(format="PEM", + passphrase=b"secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_openssh(self): + key_file = load_file("ecc_ed25519_public_openssh.txt", "rt") + public_key = ECC.import_key(key_file) + key_file = " ".join(key_file.split(' ')[:2]) # remove comment + + encoded = public_key._export_openssh(False) + self.assertEqual(key_file, encoded.strip()) + + encoded = public_key.export_key(format="OpenSSH") + self.assertEqual(key_file, encoded.strip()) + + def test_export_raw(self): + encoded = self.ref_public.export_key(format='raw') + self.assertEqual(encoded, unhexlify(b'bc85b8cf585d20a4de47e84d1cb6183f63d9ba96223fcbc886e363ffdea20cff')) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase=b"secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + 
passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # No private keys with OpenSSH + self.assertRaises(ValueError, self.ref_private.export_key, format="OpenSSH", + passphrase="secret") + + +class TestImport_Ed448(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_Ed448, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_ed448() + + def test_import_public_der(self): + key_file = load_file("ecc_ed448_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_pkcs8_der(self): + key_file = load_file("ecc_ed448_private.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_ed448_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_ed448_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_ed448_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_ed448_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_ed448_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256": + key_file = load_file("ecc_ed448_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_ed448_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + +class TestExport_Ed448(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_Ed448, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_ed448() + + def test_export_public_der(self): + key_file = load_file("ecc_ed448_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(True) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_sec1(self): + self.assertRaises(ValueError, self.ref_public.export_key, format="SEC1") + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_ed448_private.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded 
= self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + self.assertRaises(ValueError, self.ref_private.export_key, + format="DER", use_pkcs8=False) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem(self): + key_file_ref = load_file("ecc_ed448_public.pem", "rt").strip() + key_file = self.ref_public.export_key(format="PEM").strip() + self.assertEqual(key_file_ref, key_file) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_ed448_private.pem", "rt").strip() + encoded = self.ref_private.export_key(format="PEM").strip() + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private.export_key(format="PEM", + passphrase=b"secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_openssh(self): + # Not supported + self.assertRaises(ValueError, self.ref_public.export_key, format="OpenSSH") + + def test_export_raw(self): + encoded = self.ref_public.export_key(format='raw') + self.assertEqual(encoded, unhexlify(b'899014ddc0a0e1260cfc1085afdf952019e9fd63372e3e366e26dad32b176624884330a14617237e3081febd9d1a15069e7499433d2f55dd80')) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase=b"secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # No private keys with OpenSSH + self.assertRaises(ValueError, self.ref_private.export_key, format="OpenSSH", + passphrase="secret") + + 
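+# The curve- and EdDSA-specific suites below depend on reference keys from the +# optional pycryptodome_test_vectors package; when a vector file is missing, +# MissingTestVectorException causes the whole group to be skipped.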
+def get_tests(config={}): + tests = [] + tests += list_test_cases(TestImport) + try: + tests += list_test_cases(TestImport_P192) + tests += list_test_cases(TestImport_P224) + tests += list_test_cases(TestImport_P256) + tests += list_test_cases(TestImport_P384) + tests += list_test_cases(TestImport_P521) + tests += list_test_cases(TestImport_Ed25519) + tests += list_test_cases(TestImport_Ed448) + + tests += list_test_cases(TestExport_P192) + tests += list_test_cases(TestExport_P224) + tests += list_test_cases(TestExport_P256) + tests += list_test_cases(TestExport_P384) + tests += list_test_cases(TestExport_P521) + tests += list_test_cases(TestExport_Ed25519) + tests += list_test_cases(TestExport_Ed448) + + except MissingTestVectorException: + pass + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_import_RSA.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_import_RSA.py new file mode 100644 index 00000000..fa92fb0a --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/PublicKey/test_import_RSA.py @@ -0,0 +1,590 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/test_import_RSA.py: Self-test for importing RSA keys +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +import os +import re +import errno +import warnings +import unittest + +from Crypto.PublicKey import RSA +from Crypto.SelfTest.st_common import a2b_hex, list_test_cases +from Crypto.Util.py3compat import b, tostr, FileNotFoundError +from Crypto.Util.number import inverse +from Crypto.Util import asn1 + +try: + import pycryptodome_test_vectors # type: ignore + test_vectors_available = True +except ImportError: + test_vectors_available = False + + +def load_file(file_name, mode="rb"): + results = None + + try: + if not test_vectors_available: + raise FileNotFoundError(errno.ENOENT, + os.strerror(errno.ENOENT), + file_name) + + dir_comps = ("PublicKey", "RSA") + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + with open(full_file_name, mode) as file_in: + results = file_in.read() + + except FileNotFoundError: + warnings.warn("Warning: skipping extended tests for RSA", + UserWarning, + stacklevel=2) + + return results + + +def der2pem(der, text='PUBLIC'): + import binascii + chunks = [binascii.b2a_base64(der[i:i+48]) for i in range(0, len(der), 48)] + pem = b('-----BEGIN %s KEY-----\n' % text) + pem += b('').join(chunks) + pem += b('-----END %s KEY-----' % text) + return pem + + +class ImportKeyTests(unittest.TestCase): + # 512-bit RSA key generated with openssl + rsaKeyPEM = u'''-----BEGIN RSA PRIVATE KEY----- +MIIBOwIBAAJBAL8eJ5AKoIsjURpcEoGubZMxLD7+kT+TLr7UkvEtFrRhDDKMtuII +q19FrL4pUIMymPMSLBn3hJLe30Dw48GQM4UCAwEAAQJACUSDEp8RTe32ftq8IwG8 +Wojl5mAd1wFiIOrZ/Uv8b963WJOJiuQcVN29vxU5+My9GPZ7RA3hrDBEAoHUDPrI +OQIhAPIPLz4dphiD9imAkivY31Rc5AfHJiQRA7XixTcjEkojAiEAyh/pJHks/Mlr ++rdPNEpotBjfV4M4BkgGAA/ipcmaAjcCIQCHvhwwKVBLzzTscT2HeUdEeBMoiXXK +JACAr3sJQJGxIQIgarRp+m1WSKV1MciwMaTOnbU7wxFs9DP1pva76lYBzgUCIQC9 +n0CnZCJ6IZYqSt0H5N7+Q+2Ro64nuwV/OSQfM6sBwQ== +-----END RSA PRIVATE KEY-----''' + + # As above, but this is actually an unencrypted PKCS#8 key + rsaKeyPEM8 = u'''-----BEGIN PRIVATE KEY----- +MIIBVQIBADANBgkqhkiG9w0BAQEFAASCAT8wggE7AgEAAkEAvx4nkAqgiyNRGlwS +ga5tkzEsPv6RP5MuvtSS8S0WtGEMMoy24girX0WsvilQgzKY8xIsGfeEkt7fQPDj +wZAzhQIDAQABAkAJRIMSnxFN7fZ+2rwjAbxaiOXmYB3XAWIg6tn9S/xv3rdYk4mK +5BxU3b2/FTn4zL0Y9ntEDeGsMEQCgdQM+sg5AiEA8g8vPh2mGIP2KYCSK9jfVFzk +B8cmJBEDteLFNyMSSiMCIQDKH+kkeSz8yWv6t080Smi0GN9XgzgGSAYAD+KlyZoC +NwIhAIe+HDApUEvPNOxxPYd5R0R4EyiJdcokAICvewlAkbEhAiBqtGn6bVZIpXUx +yLAxpM6dtTvDEWz0M/Wm9rvqVgHOBQIhAL2fQKdkInohlipK3Qfk3v5D7ZGjrie7 +BX85JB8zqwHB +-----END PRIVATE KEY-----''' + + # The same RSA private key as in rsaKeyPEM, but now encrypted + rsaKeyEncryptedPEM = ( + + # PEM encryption + # With DES and passphrase 'test' + ('test', u'''-----BEGIN RSA PRIVATE KEY----- +Proc-Type: 4,ENCRYPTED +DEK-Info: DES-CBC,AF8F9A40BD2FA2FC + +Ckl9ex1kaVEWhYC2QBmfaF+YPiR4NFkRXA7nj3dcnuFEzBnY5XULupqQpQI3qbfA +u8GYS7+b3toWWiHZivHbAAUBPDIZG9hKDyB9Sq2VMARGsX1yW1zhNvZLIiVJzUHs +C6NxQ1IJWOXzTew/xM2I26kPwHIvadq+/VaT8gLQdjdH0jOiVNaevjWnLgrn1mLP +BCNRMdcexozWtAFNNqSzfW58MJL2OdMi21ED184EFytIc1BlB+FZiGZduwKGuaKy +9bMbdb/1PSvsSzPsqW7KSSrTw6MgJAFJg6lzIYvR5F4poTVBxwBX3+EyEmShiaNY +IRX3TgQI0IjrVuLmvlZKbGWP18FXj7I7k9tSsNOOzllTTdq3ny5vgM3A+ynfAaxp +dysKznQ6P+IoqML1WxAID4aGRMWka+uArOJ148Rbj9s= +-----END RSA PRIVATE KEY-----'''), + + # PKCS8 encryption + ('winter', u'''-----BEGIN ENCRYPTED PRIVATE KEY----- +MIIBpjBABgkqhkiG9w0BBQ0wMzAbBgkqhkiG9w0BBQwwDgQIeZIsbW3O+JcCAggA +MBQGCCqGSIb3DQMHBAgSM2p0D8FilgSCAWBhFyP2tiGKVpGj3mO8qIBzinU60ApR 
+3unvP+N6j7LVgnV2lFGaXbJ6a1PbQXe+2D6DUyBLo8EMXrKKVLqOMGkFMHc0UaV6 +R6MmrsRDrbOqdpTuVRW+NVd5J9kQQh4xnfU/QrcPPt7vpJvSf4GzG0n666Ki50OV +M/feuVlIiyGXY6UWdVDpcOV72cq02eNUs/1JWdh2uEBvA9fCL0c07RnMrdT+CbJQ +NjJ7f8ULtp7xvR9O3Al/yJ4Wv3i4VxF1f3MCXzhlUD4I0ONlr0kJWgeQ80q/cWhw +ntvgJwnCn2XR1h6LA8Wp+0ghDTsL2NhJpWd78zClGhyU4r3hqu1XDjoXa7YCXCix +jCV15+ViDJzlNCwg+W6lRg18sSLkCT7alviIE0U5tHc6UPbbHwT5QqAxAABaP+nZ +CGqJGyiwBzrKebjgSm/KRd4C91XqcsysyH2kKPfT51MLAoD4xelOURBP +-----END ENCRYPTED PRIVATE KEY-----''' + ), + ) + + rsaPublicKeyPEM = u'''-----BEGIN PUBLIC KEY----- +MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL8eJ5AKoIsjURpcEoGubZMxLD7+kT+T +Lr7UkvEtFrRhDDKMtuIIq19FrL4pUIMymPMSLBn3hJLe30Dw48GQM4UCAwEAAQ== +-----END PUBLIC KEY-----''' + + # Obtained using 'ssh-keygen -i -m PKCS8 -f rsaPublicKeyPEM' + rsaPublicKeyOpenSSH = b('''ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAQQC/HieQCqCLI1EaXBKBrm2TMSw+/pE/ky6+1JLxLRa0YQwyjLbiCKtfRay+KVCDMpjzEiwZ94SS3t9A8OPBkDOF comment\n''') + + # The private key, in PKCS#1 format encoded with DER + rsaKeyDER = a2b_hex( + '''3082013b020100024100bf1e27900aa08b23511a5c1281ae6d93312c3efe + 913f932ebed492f12d16b4610c328cb6e208ab5f45acbe2950833298f312 + 2c19f78492dedf40f0e3c190338502030100010240094483129f114dedf6 + 7edabc2301bc5a88e5e6601dd7016220ead9fd4bfc6fdeb75893898ae41c + 54ddbdbf1539f8ccbd18f67b440de1ac30440281d40cfac839022100f20f + 2f3e1da61883f62980922bd8df545ce407c726241103b5e2c53723124a23 + 022100ca1fe924792cfcc96bfab74f344a68b418df578338064806000fe2 + a5c99a023702210087be1c3029504bcf34ec713d877947447813288975ca + 240080af7b094091b12102206ab469fa6d5648a57531c8b031a4ce9db53b + c3116cf433f5a6f6bbea5601ce05022100bd9f40a764227a21962a4add07 + e4defe43ed91a3ae27bb057f39241f33ab01c1 + '''.replace(" ","")) + + # The private key, in unencrypted PKCS#8 format encoded with DER + rsaKeyDER8 = a2b_hex( + '''30820155020100300d06092a864886f70d01010105000482013f3082013 + b020100024100bf1e27900aa08b23511a5c1281ae6d93312c3efe913f932 + ebed492f12d16b4610c328cb6e208ab5f45acbe2950833298f3122c19f78 + 492dedf40f0e3c190338502030100010240094483129f114dedf67edabc2 + 301bc5a88e5e6601dd7016220ead9fd4bfc6fdeb75893898ae41c54ddbdb + f1539f8ccbd18f67b440de1ac30440281d40cfac839022100f20f2f3e1da + 61883f62980922bd8df545ce407c726241103b5e2c53723124a23022100c + a1fe924792cfcc96bfab74f344a68b418df578338064806000fe2a5c99a0 + 23702210087be1c3029504bcf34ec713d877947447813288975ca240080a + f7b094091b12102206ab469fa6d5648a57531c8b031a4ce9db53bc3116cf + 433f5a6f6bbea5601ce05022100bd9f40a764227a21962a4add07e4defe4 + 3ed91a3ae27bb057f39241f33ab01c1 + '''.replace(" ","")) + + rsaPublicKeyDER = a2b_hex( + '''305c300d06092a864886f70d0101010500034b003048024100bf1e27900a + a08b23511a5c1281ae6d93312c3efe913f932ebed492f12d16b4610c328c + b6e208ab5f45acbe2950833298f3122c19f78492dedf40f0e3c190338502 + 03010001 + '''.replace(" ","")) + + n = int('BF 1E 27 90 0A A0 8B 23 51 1A 5C 12 81 AE 6D 93 31 2C 3E FE 91 3F 93 2E BE D4 92 F1 2D 16 B4 61 0C 32 8C B6 E2 08 AB 5F 45 AC BE 29 50 83 32 98 F3 12 2C 19 F7 84 92 DE DF 40 F0 E3 C1 90 33 85'.replace(" ",""),16) + e = 65537 + d = int('09 44 83 12 9F 11 4D ED F6 7E DA BC 23 01 BC 5A 88 E5 E6 60 1D D7 01 62 20 EA D9 FD 4B FC 6F DE B7 58 93 89 8A E4 1C 54 DD BD BF 15 39 F8 CC BD 18 F6 7B 44 0D E1 AC 30 44 02 81 D4 0C FA C8 39'.replace(" ",""),16) + p = int('00 F2 0F 2F 3E 1D A6 18 83 F6 29 80 92 2B D8 DF 54 5C E4 07 C7 26 24 11 03 B5 E2 C5 37 23 12 4A 23'.replace(" ",""),16) + q = int('00 CA 1F E9 24 79 2C FC C9 6B FA B7 4F 34 4A 68 B4 18 DF 57 83 38 06 48 06 00 0F E2 A5 C9 9A 02 37'.replace(" ",""),16) + 
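+    # The two CRT coefficients below are modular inverses of one prime
+    # modulo the other, so (as a quick sanity check one can run by hand):
+    #     (qInv * q) % p == 1
+    #     (pInv * p) % q == 1
+    # Either value can be recomputed from p and q with
+    # Crypto.Util.number.inverse(), exactly as done for pInv below.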
+    # This is qInv (q^{-1} mod p). fastmath and slowmath use pInv
+    # (p^{-1} mod q) instead!
+    qInv = int('00 BD 9F 40 A7 64 22 7A 21 96 2A 4A DD 07 E4 DE FE 43 ED 91 A3 AE 27 BB 05 7F 39 24 1F 33 AB 01 C1'.replace(" ",""),16)
+    pInv = inverse(p,q)
+
+    def testImportKey1(self):
+        """Verify import of RSAPrivateKey DER SEQUENCE"""
+        key = RSA.importKey(self.rsaKeyDER)
+        self.assertTrue(key.has_private())
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+        self.assertEqual(key.d, self.d)
+        self.assertEqual(key.p, self.p)
+        self.assertEqual(key.q, self.q)
+
+    def testImportKey2(self):
+        """Verify import of SubjectPublicKeyInfo DER SEQUENCE"""
+        key = RSA.importKey(self.rsaPublicKeyDER)
+        self.assertFalse(key.has_private())
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+
+    def testImportKey3unicode(self):
+        """Verify import of RSAPrivateKey DER SEQUENCE, encoded with PEM as unicode"""
+        key = RSA.importKey(self.rsaKeyPEM)
+        self.assertTrue(key.has_private())
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+        self.assertEqual(key.d, self.d)
+        self.assertEqual(key.p, self.p)
+        self.assertEqual(key.q, self.q)
+
+    def testImportKey3bytes(self):
+        """Verify import of RSAPrivateKey DER SEQUENCE, encoded with PEM as byte string"""
+        key = RSA.importKey(b(self.rsaKeyPEM))
+        self.assertTrue(key.has_private())
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+        self.assertEqual(key.d, self.d)
+        self.assertEqual(key.p, self.p)
+        self.assertEqual(key.q, self.q)
+
+    def testImportKey4unicode(self):
+        """Verify import of SubjectPublicKeyInfo DER SEQUENCE, encoded with PEM as unicode"""
+        key = RSA.importKey(self.rsaPublicKeyPEM)
+        self.assertFalse(key.has_private())
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+
+    def testImportKey4bytes(self):
+        """Verify import of SubjectPublicKeyInfo DER SEQUENCE, encoded with PEM as byte string"""
+        key = RSA.importKey(b(self.rsaPublicKeyPEM))
+        self.assertFalse(key.has_private())
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+
+    def testImportKey5(self):
+        """Verifies that the imported key is still a valid RSA pair"""
+        key = RSA.importKey(self.rsaKeyPEM)
+        idem = key._encrypt(key._decrypt(89))
+        self.assertEqual(idem, 89)
+
+    def testImportKey6(self):
+        """Verifies that the imported key is still a valid RSA pair"""
+        key = RSA.importKey(self.rsaKeyDER)
+        idem = key._encrypt(key._decrypt(65))
+        self.assertEqual(idem, 65)
+
+    def testImportKey7(self):
+        """Verify import of OpenSSH public key"""
+        key = RSA.importKey(self.rsaPublicKeyOpenSSH)
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+
+    def testImportKey8(self):
+        """Verify import of encrypted PrivateKeyInfo DER SEQUENCE"""
+        for t in self.rsaKeyEncryptedPEM:
+            key = RSA.importKey(t[1], t[0])
+            self.assertTrue(key.has_private())
+            self.assertEqual(key.n, self.n)
+            self.assertEqual(key.e, self.e)
+            self.assertEqual(key.d, self.d)
+            self.assertEqual(key.p, self.p)
+            self.assertEqual(key.q, self.q)
+
+    def testImportKey9(self):
+        """Verify import of unencrypted PrivateKeyInfo DER SEQUENCE"""
+        key = RSA.importKey(self.rsaKeyDER8)
+        self.assertTrue(key.has_private())
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+        self.assertEqual(key.d, self.d)
+        self.assertEqual(key.p, self.p)
+        self.assertEqual(key.q, self.q)
+
+    def testImportKey10(self):
+        """Verify import of unencrypted PrivateKeyInfo DER
SEQUENCE, encoded with PEM""" + key = RSA.importKey(self.rsaKeyPEM8) + self.assertTrue(key.has_private()) + self.assertEqual(key.n, self.n) + self.assertEqual(key.e, self.e) + self.assertEqual(key.d, self.d) + self.assertEqual(key.p, self.p) + self.assertEqual(key.q, self.q) + + def testImportKey11(self): + """Verify import of RSAPublicKey DER SEQUENCE""" + der = asn1.DerSequence([17, 3]).encode() + key = RSA.importKey(der) + self.assertEqual(key.n, 17) + self.assertEqual(key.e, 3) + + def testImportKey12(self): + """Verify import of RSAPublicKey DER SEQUENCE, encoded with PEM""" + der = asn1.DerSequence([17, 3]).encode() + pem = der2pem(der) + key = RSA.importKey(pem) + self.assertEqual(key.n, 17) + self.assertEqual(key.e, 3) + + def test_import_key_windows_cr_lf(self): + pem_cr_lf = "\r\n".join(self.rsaKeyPEM.splitlines()) + key = RSA.importKey(pem_cr_lf) + self.assertEqual(key.n, self.n) + self.assertEqual(key.e, self.e) + self.assertEqual(key.d, self.d) + self.assertEqual(key.p, self.p) + self.assertEqual(key.q, self.q) + + def test_import_empty(self): + self.assertRaises(ValueError, RSA.import_key, b"") + + ### + def testExportKey1(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + derKey = key.export_key("DER") + self.assertEqual(derKey, self.rsaKeyDER) + + def testExportKey2(self): + key = RSA.construct([self.n, self.e]) + derKey = key.export_key("DER") + self.assertEqual(derKey, self.rsaPublicKeyDER) + + def testExportKey3(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + pemKey = key.export_key("PEM") + self.assertEqual(pemKey, b(self.rsaKeyPEM)) + + def testExportKey4(self): + key = RSA.construct([self.n, self.e]) + pemKey = key.export_key("PEM") + self.assertEqual(pemKey, b(self.rsaPublicKeyPEM)) + + def testExportKey5(self): + key = RSA.construct([self.n, self.e]) + openssh_1 = key.export_key("OpenSSH").split() + openssh_2 = self.rsaPublicKeyOpenSSH.split() + self.assertEqual(openssh_1[0], openssh_2[0]) + self.assertEqual(openssh_1[1], openssh_2[1]) + + def testExportKey7(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + derKey = key.export_key("DER", pkcs=8) + self.assertEqual(derKey, self.rsaKeyDER8) + + def testExportKey8(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + pemKey = key.export_key("PEM", pkcs=8) + self.assertEqual(pemKey, b(self.rsaKeyPEM8)) + + def testExportKey9(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + self.assertRaises(ValueError, key.export_key, "invalid-format") + + def testExportKey10(self): + # Export and re-import the encrypted key. It must match. + # PEM envelope, PKCS#1, old PEM encryption + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + outkey = key.export_key('PEM', 'test') + self.assertTrue(tostr(outkey).find('4,ENCRYPTED')!=-1) + self.assertTrue(tostr(outkey).find('BEGIN RSA PRIVATE KEY')!=-1) + inkey = RSA.importKey(outkey, 'test') + self.assertEqual(key.n, inkey.n) + self.assertEqual(key.e, inkey.e) + self.assertEqual(key.d, inkey.d) + + def testExportKey11(self): + # Export and re-import the encrypted key. It must match. 
+        # PEM envelope, PKCS#1, old PEM encryption
+        key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv])
+        outkey = key.export_key('PEM', 'test', pkcs=1)
+        self.assertTrue(tostr(outkey).find('4,ENCRYPTED')!=-1)
+        self.assertTrue(tostr(outkey).find('BEGIN RSA PRIVATE KEY')!=-1)
+        inkey = RSA.importKey(outkey, 'test')
+        self.assertEqual(key.n, inkey.n)
+        self.assertEqual(key.e, inkey.e)
+        self.assertEqual(key.d, inkey.d)
+
+    def testExportKey12(self):
+        # Export and re-import the encrypted key. It must match.
+        # PEM envelope, PKCS#8, old PEM encryption
+        key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv])
+        outkey = key.export_key('PEM', 'test', pkcs=8)
+        self.assertTrue(tostr(outkey).find('4,ENCRYPTED')!=-1)
+        self.assertTrue(tostr(outkey).find('BEGIN PRIVATE KEY')!=-1)
+        inkey = RSA.importKey(outkey, 'test')
+        self.assertEqual(key.n, inkey.n)
+        self.assertEqual(key.e, inkey.e)
+        self.assertEqual(key.d, inkey.d)
+
+    def testExportKey13(self):
+        # Export and re-import the encrypted key. It must match.
+        # PEM envelope, PKCS#8, PKCS#8 encryption
+        key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv])
+        outkey = key.export_key('PEM', 'test', pkcs=8,
+                                protection='PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC')
+        self.assertTrue(tostr(outkey).find('4,ENCRYPTED')==-1)
+        self.assertTrue(tostr(outkey).find('BEGIN ENCRYPTED PRIVATE KEY')!=-1)
+        inkey = RSA.importKey(outkey, 'test')
+        self.assertEqual(key.n, inkey.n)
+        self.assertEqual(key.e, inkey.e)
+        self.assertEqual(key.d, inkey.d)
+
+    def testExportKey14(self):
+        # Export and re-import the encrypted key. It must match.
+        # DER envelope, PKCS#8, PKCS#8 encryption
+        key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv])
+        outkey = key.export_key('DER', 'test', pkcs=8)
+        inkey = RSA.importKey(outkey, 'test')
+        self.assertEqual(key.n, inkey.n)
+        self.assertEqual(key.e, inkey.e)
+        self.assertEqual(key.d, inkey.d)
+
+    def testExportKey15(self):
+        # Verify that an error condition is detected when trying to
+        # use a password with DER encoding and PKCS#1.
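+        # (PKCS#1 defines no encrypted DER container: password protection is
+        # only available via the PEM envelope or via PKCS#8, so a call such
+        # as key.export_key('DER', passphrase='test', pkcs=1) is expected to
+        # raise ValueError, as asserted below.)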
+ key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + self.assertRaises(ValueError, key.export_key, 'DER', 'test', 1) + + def test_import_key(self): + """Verify that import_key is an alias to importKey""" + key = RSA.import_key(self.rsaPublicKeyDER) + self.assertFalse(key.has_private()) + self.assertEqual(key.n, self.n) + self.assertEqual(key.e, self.e) + + def test_import_key_ba_mv(self): + """Verify that import_key can be used on bytearrays and memoryviews""" + key = RSA.import_key(bytearray(self.rsaPublicKeyDER)) + key = RSA.import_key(memoryview(self.rsaPublicKeyDER)) + + def test_exportKey(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + self.assertEqual(key.export_key(), key.exportKey()) + + +class ImportKeyFromX509Cert(unittest.TestCase): + + def test_x509v1(self): + + # Sample V1 certificate with a 1024 bit RSA key + x509_v1_cert = """ +-----BEGIN CERTIFICATE----- +MIICOjCCAaMCAQEwDQYJKoZIhvcNAQEEBQAwfjENMAsGA1UEChMEQWNtZTELMAkG +A1UECxMCUkQxHDAaBgkqhkiG9w0BCQEWDXNwYW1AYWNtZS5vcmcxEzARBgNVBAcT +Ck1ldHJvcG9saXMxETAPBgNVBAgTCE5ldyBZb3JrMQswCQYDVQQGEwJVUzENMAsG +A1UEAxMEdGVzdDAeFw0xNDA3MTExOTU3MjRaFw0xNzA0MDYxOTU3MjRaME0xCzAJ +BgNVBAYTAlVTMREwDwYDVQQIEwhOZXcgWW9yazENMAsGA1UEChMEQWNtZTELMAkG +A1UECxMCUkQxDzANBgNVBAMTBmxhdHZpYTCBnzANBgkqhkiG9w0BAQEFAAOBjQAw +gYkCgYEAyG+kytdRj3TFbRmHDYp3TXugVQ81chew0qeOxZWOz80IjtWpgdOaCvKW +NCuc8wUR9BWrEQW+39SaRMLiQfQtyFSQZijc3nsEBu/Lo4uWZ0W/FHDRVSvkJA/V +Ex5NL5ikI+wbUeCV5KajGNDalZ8F1pk32+CBs8h1xNx5DyxuEHUCAwEAATANBgkq +hkiG9w0BAQQFAAOBgQCVQF9Y//Q4Psy+umEM38pIlbZ2hxC5xNz/MbVPwuCkNcGn +KYNpQJP+JyVTsPpO8RLZsAQDzRueMI3S7fbbwTzAflN0z19wvblvu93xkaBytVok +9VBAH28olVhy9b1MMeg2WOt5sUEQaFNPnwwsyiY9+HsRpvpRnPSQF+kyYVsshQ== +-----END CERTIFICATE----- + """.strip() + + # RSA public key as dumped by openssl + exponent = 65537 + modulus_str = """ +00:c8:6f:a4:ca:d7:51:8f:74:c5:6d:19:87:0d:8a: +77:4d:7b:a0:55:0f:35:72:17:b0:d2:a7:8e:c5:95: +8e:cf:cd:08:8e:d5:a9:81:d3:9a:0a:f2:96:34:2b: +9c:f3:05:11:f4:15:ab:11:05:be:df:d4:9a:44:c2: +e2:41:f4:2d:c8:54:90:66:28:dc:de:7b:04:06:ef: +cb:a3:8b:96:67:45:bf:14:70:d1:55:2b:e4:24:0f: +d5:13:1e:4d:2f:98:a4:23:ec:1b:51:e0:95:e4:a6: +a3:18:d0:da:95:9f:05:d6:99:37:db:e0:81:b3:c8: +75:c4:dc:79:0f:2c:6e:10:75 + """ + modulus = int(re.sub("[^0-9a-f]","", modulus_str), 16) + + key = RSA.importKey(x509_v1_cert) + self.assertEqual(key.e, exponent) + self.assertEqual(key.n, modulus) + self.assertFalse(key.has_private()) + + def test_x509v3(self): + + # Sample V3 certificate with a 1024 bit RSA key + x509_v3_cert = """ +-----BEGIN CERTIFICATE----- +MIIEcjCCAlqgAwIBAgIBATANBgkqhkiG9w0BAQsFADBhMQswCQYDVQQGEwJVUzEL +MAkGA1UECAwCTUQxEjAQBgNVBAcMCUJhbHRpbW9yZTEQMA4GA1UEAwwHVGVzdCBD +QTEfMB0GCSqGSIb3DQEJARYQdGVzdEBleGFtcGxlLmNvbTAeFw0xNDA3MTIwOTM1 +MTJaFw0xNzA0MDcwOTM1MTJaMEQxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJNRDES +MBAGA1UEBwwJQmFsdGltb3JlMRQwEgYDVQQDDAtUZXN0IFNlcnZlcjCBnzANBgkq +hkiG9w0BAQEFAAOBjQAwgYkCgYEA/S7GJV2OcFdyNMQ4K75KrYFtMEn3VnEFdPHa +jyS37XlMxSh0oS4GeTGVUCJInl5Cpsv8WQdh03FfeOdvzp5IZ46OcjeOPiWnmjgl +2G5j7e2bDH7RSchGV+OD6Fb1Agvuu2/9iy8fdf3rPQ/7eAddzKUrzwacVbnW+tg2 +QtSXKRcCAwEAAaOB1TCB0jAdBgNVHQ4EFgQU/WwCX7FfWMIPDFfJ+I8a2COG+l8w +HwYDVR0jBBgwFoAUa0hkif3RMaraiWtsOOZZlLu9wJwwCQYDVR0TBAIwADALBgNV +HQ8EBAMCBeAwSgYDVR0RBEMwQYILZXhhbXBsZS5jb22CD3d3dy5leGFtcGxlLmNv +bYIQbWFpbC5leGFtcGxlLmNvbYIPZnRwLmV4YW1wbGUuY29tMCwGCWCGSAGG+EIB +DQQfFh1PcGVuU1NMIEdlbmVyYXRlZCBDZXJ0aWZpY2F0ZTANBgkqhkiG9w0BAQsF +AAOCAgEAvO6xfdsGbnoK4My3eJthodTAjMjPwFVY133LH04QLcCv54TxKhtUg1fi 
+PgdjVe1HpTytPBfXy2bSZbXAN0abZCtw1rYrnn7o1g2pN8iypVq3zVn0iMTzQzxs +zEPO3bpR/UhNSf90PmCsS5rqZpAAnXSaAy1ClwHWk/0eG2pYkhE1m1ABVMN2lsAW +e9WxGk6IFqaI9O37NYQwmEypMs4DC+ECJEvbPFiqi3n0gbXCZJJ6omDA5xJldaYK +Oa7KR3s/qjBsu9UAiWpLBuFoSTHIF2aeRKRFmUdmzwo43eVPep65pY6eQ4AdL2RF +rqEuINbGlzI5oQyYhu71IwB+iPZXaZZPlwjLgOsuad/p2hOgDb5WxUi8FnDPursQ +ujfpIpmrOP/zpvvQWnwePI3lI+5n41kTBSbefXEdv6rXpHk3QRzB90uPxnXPdxSC +16ASA8bQT5an/1AgoE3k9CrcD2K0EmgaX0YI0HUhkyzbkg34EhpWJ6vvRUbRiNRo +9cIbt/ya9Y9u0Ja8GLXv6dwX0l0IdJMkL8KifXUFAVCujp1FBrr/gdmwQn8itANy ++qbnWSxmOvtaY0zcaFAcONuHva0h51/WqXOMO1eb8PhR4HIIYU8p1oBwQp7dSni8 +THDi1F+GG5PsymMDj5cWK42f+QzjVw5PrVmFqqrrEoMlx8DWh5Y= +-----END CERTIFICATE----- +""".strip() + + # RSA public key as dumped by openssl + exponent = 65537 + modulus_str = """ +00:fd:2e:c6:25:5d:8e:70:57:72:34:c4:38:2b:be: +4a:ad:81:6d:30:49:f7:56:71:05:74:f1:da:8f:24: +b7:ed:79:4c:c5:28:74:a1:2e:06:79:31:95:50:22: +48:9e:5e:42:a6:cb:fc:59:07:61:d3:71:5f:78:e7: +6f:ce:9e:48:67:8e:8e:72:37:8e:3e:25:a7:9a:38: +25:d8:6e:63:ed:ed:9b:0c:7e:d1:49:c8:46:57:e3: +83:e8:56:f5:02:0b:ee:bb:6f:fd:8b:2f:1f:75:fd: +eb:3d:0f:fb:78:07:5d:cc:a5:2b:cf:06:9c:55:b9: +d6:fa:d8:36:42:d4:97:29:17 + """ + modulus = int(re.sub("[^0-9a-f]","", modulus_str), 16) + + key = RSA.importKey(x509_v3_cert) + self.assertEqual(key.e, exponent) + self.assertEqual(key.n, modulus) + self.assertFalse(key.has_private()) + + +class TestImport_2048(unittest.TestCase): + + def test_import_openssh_public(self): + key_file_ref = load_file("rsa2048_private.pem") + key_file = load_file("rsa2048_public_openssh.txt") + + # Skip test if test vectors are not installed + if None in (key_file_ref, key_file): + return + + key_ref = RSA.import_key(key_file_ref).public_key() + key = RSA.import_key(key_file) + self.assertEqual(key_ref, key) + + def test_import_openssh_private_clear(self): + key_file = load_file("rsa2048_private_openssh.pem") + key_file_old = load_file("rsa2048_private_openssh_old.pem") + + # Skip test if test vectors are not installed + if None in (key_file_old, key_file): + return + + key = RSA.import_key(key_file) + key_old = RSA.import_key(key_file_old) + + self.assertEqual(key, key_old) + + def test_import_openssh_private_password(self): + key_file = load_file("rsa2048_private_openssh_pwd.pem") + key_file_old = load_file("rsa2048_private_openssh_pwd_old.pem") + + # Skip test if test vectors are not installed + if None in (key_file_old, key_file): + return + + key = RSA.import_key(key_file, b"password") + key_old = RSA.import_key(key_file_old) + self.assertEqual(key, key_old) + + +if __name__ == '__main__': + unittest.main() + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(ImportKeyTests) + tests += list_test_cases(ImportKeyFromX509Cert) + tests += list_test_cases(TestImport_2048) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Random/__init__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Random/__init__.py new file mode 100644 index 00000000..53061cc0 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Random/__init__.py @@ -0,0 +1,39 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Random/__init__.py: Self-test for random number generation modules +# +# Written in 2008 by Dwayne C. 
Litzenberger
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""Self-test for random number generators"""
+
+__revision__ = "$Id$"
+
+def get_tests(config={}):
+    tests = []
+    from Crypto.SelfTest.Random import test_random; tests += test_random.get_tests(config=config)
+    return tests
+
+if __name__ == '__main__':
+    import unittest
+    suite = lambda: unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
+
+# vim:set ts=4 sw=4 sts=4 expandtab:
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Random/test_random.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Random/test_random.py
new file mode 100644
index 00000000..8fadc535
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Random/test_random.py
@@ -0,0 +1,167 @@
+# -*- coding: utf-8 -*-
+#
+# SelfTest/Random/test_random.py: Self-test for the Crypto.Random.new() function
+#
+# Written in 2008 by Dwayne C. Litzenberger
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""Self-test suite for Crypto.Random.new()"""
+
+import sys
+import unittest
+from Crypto.Util.py3compat import b
+
+class SimpleTest(unittest.TestCase):
+    def runTest(self):
+        """Crypto.Random.new()"""
+        # Import the Random module and try to use it
+        from Crypto import Random
+        randobj = Random.new()
+        x = randobj.read(16)
+        y = randobj.read(16)
+        self.assertNotEqual(x, y)
+        z = Random.get_random_bytes(16)
+        self.assertNotEqual(x, z)
+        self.assertNotEqual(y, z)
+        # Test the Random.random module, which
+        # implements a subset of Python's random API
+        # Not implemented:
+        # seed(), getstate(), setstate(), jumpahead()
+        # random(), uniform(), triangular(), betavariate()
+        # expovariate(), gammavariate(), gauss(),
+        # lognormvariate(), normalvariate(),
+        # vonmisesvariate(), paretovariate()
+        # weibullvariate()
+        # WichmannHill(), whseed(), SystemRandom()
+        from Crypto.Random import random
+        x = random.getrandbits(16*8)
+        y = random.getrandbits(16*8)
+        self.assertNotEqual(x, y)
+        # Test randrange
+        if x>y:
+            start = y
+            stop = x
+        else:
+            start = x
+            stop = y
+        for step in range(1,10):
+            x = random.randrange(start,stop,step)
+            y = random.randrange(start,stop,step)
+            self.assertNotEqual(x, y)
+            self.assertEqual(start <= x < stop, True)
+            self.assertEqual(start <= y < stop, True)
+            self.assertEqual((x - start) % step, 0)
+            self.assertEqual((y - start) % step, 0)
+        for i in range(10):
+            self.assertEqual(random.randrange(1,2), 1)
+        self.assertRaises(ValueError, random.randrange, start, start)
+        self.assertRaises(ValueError, random.randrange, stop, start, step)
+        self.assertRaises(TypeError, random.randrange, start, stop, step, step)
+        self.assertRaises(TypeError, random.randrange, start, stop, "1")
+        self.assertRaises(TypeError, random.randrange, "1", stop, step)
+        self.assertRaises(TypeError, random.randrange, 1, "2", step)
+        self.assertRaises(ValueError, random.randrange, start, stop, 0)
+        # Test randint
+        x = random.randint(start,stop)
+        y = random.randint(start,stop)
+        self.assertNotEqual(x, y)
+        self.assertEqual(start <= x <= stop, True)
+        self.assertEqual(start <= y <= stop, True)
+        for i in range(10):
+            self.assertEqual(random.randint(1,1), 1)
+        self.assertRaises(ValueError, random.randint, stop, start)
+        self.assertRaises(TypeError, random.randint, start, stop, step)
+        self.assertRaises(TypeError, random.randint, "1", stop)
+        self.assertRaises(TypeError, random.randint, 1, "2")
+        # Test choice
+        seq = range(10000)
+        x = random.choice(seq)
+        y = random.choice(seq)
+        self.assertNotEqual(x, y)
+        self.assertEqual(x in seq, True)
+        self.assertEqual(y in seq, True)
+        for i in range(10):
+            self.assertEqual(random.choice((1,2,3)) in (1,2,3), True)
+        self.assertEqual(random.choice([1,2,3]) in [1,2,3], True)
+        if sys.version_info[0] == 3:
+            self.assertEqual(random.choice(bytearray(b('123'))) in bytearray(b('123')), True)
+        self.assertEqual(1, random.choice([1]))
+        self.assertRaises(IndexError, random.choice, [])
+        self.assertRaises(TypeError, random.choice, 1)
+        # Test shuffle. Lacks random parameter to specify function.
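+        # (Unlike the standard library's random.shuffle(), the version in
+        # Crypto.Random.random takes no optional 'random' callable, so the
+        # shuffling source cannot be overridden; the checks below only verify
+        # that the result is a permutation of the input.)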
+ # Make copies of seq + seq = range(500) + x = list(seq) + y = list(seq) + random.shuffle(x) + random.shuffle(y) + self.assertNotEqual(x, y) + self.assertEqual(len(seq), len(x)) + self.assertEqual(len(seq), len(y)) + for i in range(len(seq)): + self.assertEqual(x[i] in seq, True) + self.assertEqual(y[i] in seq, True) + self.assertEqual(seq[i] in x, True) + self.assertEqual(seq[i] in y, True) + z = [1] + random.shuffle(z) + self.assertEqual(z, [1]) + if sys.version_info[0] == 3: + z = bytearray(b('12')) + random.shuffle(z) + self.assertEqual(b('1') in z, True) + self.assertRaises(TypeError, random.shuffle, b('12')) + self.assertRaises(TypeError, random.shuffle, 1) + self.assertRaises(TypeError, random.shuffle, "11") + self.assertRaises(TypeError, random.shuffle, (1,2)) + # 2to3 wraps a list() around it, alas - but I want to shoot + # myself in the foot here! :D + # if sys.version_info[0] == 3: + # self.assertRaises(TypeError, random.shuffle, range(3)) + # Test sample + x = random.sample(seq, 20) + y = random.sample(seq, 20) + self.assertNotEqual(x, y) + for i in range(20): + self.assertEqual(x[i] in seq, True) + self.assertEqual(y[i] in seq, True) + z = random.sample([1], 1) + self.assertEqual(z, [1]) + z = random.sample((1,2,3), 1) + self.assertEqual(z[0] in (1,2,3), True) + z = random.sample("123", 1) + self.assertEqual(z[0] in "123", True) + z = random.sample(range(3), 1) + self.assertEqual(z[0] in range(3), True) + if sys.version_info[0] == 3: + z = random.sample(b("123"), 1) + self.assertEqual(z[0] in b("123"), True) + z = random.sample(bytearray(b("123")), 1) + self.assertEqual(z[0] in bytearray(b("123")), True) + self.assertRaises(TypeError, random.sample, 1) + +def get_tests(config={}): + return [SimpleTest()] + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/__init__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/__init__.py new file mode 100644 index 00000000..83cf0f39 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/__init__.py @@ -0,0 +1,41 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Signature/__init__.py: Self-test for signature modules +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test for signature modules""" + +import unittest +from . 
import test_pkcs1_15, test_pss, test_dss, test_eddsa + + +def get_tests(config={}): + tests = [] + tests += test_pkcs1_15.get_tests(config=config) + tests += test_pss.get_tests(config=config) + tests += test_dss.get_tests(config=config) + tests += test_eddsa.get_tests(config=config) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_dss.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_dss.py new file mode 100644 index 00000000..d3f8dfce --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_dss.py @@ -0,0 +1,1369 @@ +# +# SelfTest/Signature/test_dss.py: Self-test for DSS signatures +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import re +import unittest +from binascii import hexlify, unhexlify + +from Crypto.Util.py3compat import tobytes, bord, bchr + +from Crypto.Hash import (SHA1, SHA224, SHA256, SHA384, SHA512, + SHA3_224, SHA3_256, SHA3_384, SHA3_512) +from Crypto.Signature import DSS +from Crypto.PublicKey import DSA, ECC +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors, load_test_vectors_wycheproof +from Crypto.Util.number import bytes_to_long, long_to_bytes + + +def t2b(hexstring): + ws = hexstring.replace(" ", "").replace("\n", "") + return unhexlify(tobytes(ws)) + + +def t2l(hexstring): + ws = hexstring.replace(" ", "").replace("\n", "") + return int(ws, 16) + + +def load_hash_by_name(hash_name): + return __import__("Crypto.Hash." + hash_name, globals(), locals(), ["new"]) + + +class StrRNG: + + def __init__(self, randomness): + length = len(randomness) + self._idx = 0 + # Fix required to get the right K (see how randint() works!) 
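+        # (What the fix does: the FIPS signer draws the nonce through a
+        # randint-style call whose inclusive lower bound is 1, so the raw
+        # value read from randfunc is effectively offset by +1. Storing
+        # K - 1 here therefore makes the generated nonce come out as exactly
+        # the K listed in the test vector.)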
+ self._randomness = long_to_bytes(bytes_to_long(randomness) - 1, length) + + def __call__(self, n): + out = self._randomness[self._idx:self._idx + n] + self._idx += n + return out + + +class FIPS_DSA_Tests(unittest.TestCase): + + # 1st 1024 bit key from SigGen.txt + P = 0xa8f9cd201e5e35d892f85f80e4db2599a5676a3b1d4f190330ed3256b26d0e80a0e49a8fffaaad2a24f472d2573241d4d6d6c7480c80b4c67bb4479c15ada7ea8424d2502fa01472e760241713dab025ae1b02e1703a1435f62ddf4ee4c1b664066eb22f2e3bf28bb70a2a76e4fd5ebe2d1229681b5b06439ac9c7e9d8bde283 + Q = 0xf85f0f83ac4df7ea0cdf8f469bfeeaea14156495 + G = 0x2b3152ff6c62f14622b8f48e59f8af46883b38e79b8c74deeae9df131f8b856e3ad6c8455dab87cc0da8ac973417ce4f7878557d6cdf40b35b4a0ca3eb310c6a95d68ce284ad4e25ea28591611ee08b8444bd64b25f3f7c572410ddfb39cc728b9c936f85f419129869929cdb909a6a3a99bbe089216368171bd0ba81de4fe33 + X = 0xc53eae6d45323164c7d07af5715703744a63fc3a + Y = 0x313fd9ebca91574e1c2eebe1517c57e0c21b0209872140c5328761bbb2450b33f1b18b409ce9ab7c4cd8fda3391e8e34868357c199e16a6b2eba06d6749def791d79e95d3a4d09b24c392ad89dbf100995ae19c01062056bb14bce005e8731efde175f95b975089bdcdaea562b32786d96f5a31aedf75364008ad4fffebb970b + + key_pub = DSA.construct((Y, G, P, Q)) + key_priv = DSA.construct((Y, G, P, Q, X)) + + def shortDescription(self): + return "FIPS DSA Tests" + + def test_loopback(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv, 'fips-186-3') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub, 'fips-186-3') + verifier.verify(hashed_msg, signature) + + def test_negative_unapproved_hashes(self): + """Verify that unapproved hashes are rejected""" + + from Crypto.Hash import RIPEMD160 + + self.description = "Unapproved hash (RIPEMD160) test" + hash_obj = RIPEMD160.new() + signer = DSS.new(self.key_priv, 'fips-186-3') + self.assertRaises(ValueError, signer.sign, hash_obj) + self.assertRaises(ValueError, signer.verify, hash_obj, b"\x00" * 40) + + def test_negative_unknown_modes_encodings(self): + """Verify that unknown modes/encodings are rejected""" + + self.description = "Unknown mode test" + self.assertRaises(ValueError, DSS.new, self.key_priv, 'fips-186-0') + + self.description = "Unknown encoding test" + self.assertRaises(ValueError, DSS.new, self.key_priv, 'fips-186-3', 'xml') + + def test_asn1_encoding(self): + """Verify ASN.1 encoding""" + + self.description = "ASN.1 encoding test" + hash_obj = SHA1.new() + signer = DSS.new(self.key_priv, 'fips-186-3', 'der') + signature = signer.sign(hash_obj) + + # Verify that output looks like a DER SEQUENCE + self.assertEqual(bord(signature[0]), 48) + signer.verify(hash_obj, signature) + + # Verify that ASN.1 parsing fails as expected + signature = bchr(7) + signature[1:] + self.assertRaises(ValueError, signer.verify, hash_obj, signature) + + def test_sign_verify(self): + """Verify public/private method""" + + self.description = "can_sign() test" + signer = DSS.new(self.key_priv, 'fips-186-3') + self.assertTrue(signer.can_sign()) + + signer = DSS.new(self.key_pub, 'fips-186-3') + self.assertFalse(signer.can_sign()) + + try: + signer.sign(SHA256.new(b'xyz')) + except TypeError as e: + msg = str(e) + else: + msg = "" + self.assertTrue("Private key is needed" in msg) + + +class FIPS_DSA_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_verify = load_test_vectors(("Signature", "DSA"), + "FIPS_186_3_SigVer.rsp", + "Signature Verification 186-3", + {'result': lambda x: x}) or [] + +for idx, tv in enumerate(test_vectors_verify): + + if isinstance(tv, str): + res = re.match(r"\[mod 
= L=([0-9]+), N=([0-9]+), ([a-zA-Z0-9-]+)\]", tv) + assert(res) + hash_name = res.group(3).replace("-", "") + hash_module = load_hash_by_name(hash_name) + continue + + if hasattr(tv, "p"): + modulus = tv.p + generator = tv.g + suborder = tv.q + continue + + hash_obj = hash_module.new(tv.msg) + + comps = [bytes_to_long(x) for x in (tv.y, generator, modulus, suborder)] + key = DSA.construct(comps, False) # type: ignore + verifier = DSS.new(key, 'fips-186-3') + + def positive_test(self, verifier=verifier, hash_obj=hash_obj, signature=tv.r+tv.s): + verifier.verify(hash_obj, signature) + + def negative_test(self, verifier=verifier, hash_obj=hash_obj, signature=tv.r+tv.s): + self.assertRaises(ValueError, verifier.verify, hash_obj, signature) + + if tv.result == 'p': + setattr(FIPS_DSA_Tests_KAT, "test_verify_positive_%d" % idx, positive_test) + else: + setattr(FIPS_DSA_Tests_KAT, "test_verify_negative_%d" % idx, negative_test) + + +test_vectors_sign = load_test_vectors(("Signature", "DSA"), + "FIPS_186_3_SigGen.txt", + "Signature Creation 186-3", + {}) or [] + +for idx, tv in enumerate(test_vectors_sign): + + if isinstance(tv, str): + res = re.match(r"\[mod = L=([0-9]+), N=([0-9]+), ([a-zA-Z0-9-]+)\]", tv) + assert(res) + hash_name = res.group(3).replace("-", "") + hash_module = load_hash_by_name(hash_name) + continue + + if hasattr(tv, "p"): + modulus = tv.p + generator = tv.g + suborder = tv.q + continue + + hash_obj = hash_module.new(tv.msg) + comps_dsa = [bytes_to_long(x) for x in (tv.y, generator, modulus, suborder, tv.x)] + key = DSA.construct(comps_dsa, False) # type: ignore + signer = DSS.new(key, 'fips-186-3', randfunc=StrRNG(tv.k)) + + def new_test(self, signer=signer, hash_obj=hash_obj, signature=tv.r+tv.s): + self.assertEqual(signer.sign(hash_obj), signature) + setattr(FIPS_DSA_Tests_KAT, "test_sign_%d" % idx, new_test) + + +class FIPS_ECDSA_Tests(unittest.TestCase): + + key_priv = ECC.generate(curve="P-256") + key_pub = key_priv.public_key() + + def shortDescription(self): + return "FIPS ECDSA Tests" + + def test_loopback(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv, 'fips-186-3') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub, 'fips-186-3') + verifier.verify(hashed_msg, signature) + + def test_negative_unapproved_hashes(self): + """Verify that unapproved hashes are rejected""" + + from Crypto.Hash import SHA1 + + self.description = "Unapproved hash (SHA-1) test" + hash_obj = SHA1.new() + signer = DSS.new(self.key_priv, 'fips-186-3') + self.assertRaises(ValueError, signer.sign, hash_obj) + self.assertRaises(ValueError, signer.verify, hash_obj, b"\x00" * 40) + + def test_negative_eddsa_key(self): + key = ECC.generate(curve="ed25519") + self.assertRaises(ValueError, DSS.new, key, 'fips-186-3') + + def test_sign_verify(self): + """Verify public/private method""" + + self.description = "can_sign() test" + signer = DSS.new(self.key_priv, 'fips-186-3') + self.assertTrue(signer.can_sign()) + + signer = DSS.new(self.key_pub, 'fips-186-3') + self.assertFalse(signer.can_sign()) + self.assertRaises(TypeError, signer.sign, SHA256.new(b'xyz')) + + try: + signer.sign(SHA256.new(b'xyz')) + except TypeError as e: + msg = str(e) + else: + msg = "" + self.assertTrue("Private key is needed" in msg) + + def test_negative_unknown_modes_encodings(self): + """Verify that unknown modes/encodings are rejected""" + + self.description = "Unknown mode test" + self.assertRaises(ValueError, DSS.new, self.key_priv, 'fips-186-0') + + self.description = 
"Unknown encoding test" + self.assertRaises(ValueError, DSS.new, self.key_priv, 'fips-186-3', 'xml') + + def test_asn1_encoding(self): + """Verify ASN.1 encoding""" + + self.description = "ASN.1 encoding test" + hash_obj = SHA256.new() + signer = DSS.new(self.key_priv, 'fips-186-3', 'der') + signature = signer.sign(hash_obj) + + # Verify that output looks like a DER SEQUENCE + self.assertEqual(bord(signature[0]), 48) + signer.verify(hash_obj, signature) + + # Verify that ASN.1 parsing fails as expected + signature = bchr(7) + signature[1:] + self.assertRaises(ValueError, signer.verify, hash_obj, signature) + + +class FIPS_ECDSA_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_verify = load_test_vectors(("Signature", "ECDSA"), + "SigVer.rsp", + "ECDSA Signature Verification 186-3", + {'result': lambda x: x, + 'qx': lambda x: int(x, 16), + 'qy': lambda x: int(x, 16), + }) or [] +test_vectors_verify += load_test_vectors(("Signature", "ECDSA"), + "SigVer_TruncatedSHAs.rsp", + "ECDSA Signature Verification 186-3", + {'result': lambda x: x, + 'qx': lambda x: int(x, 16), + 'qy': lambda x: int(x, 16), + }) or [] + + +for idx, tv in enumerate(test_vectors_verify): + + if isinstance(tv, str): + res = re.match(r"\[(P-[0-9]+),(SHA-[0-9]+)\]", tv) + assert res + curve_name = res.group(1) + hash_name = res.group(2).replace("-", "") + if hash_name in ("SHA512224", "SHA512256"): + truncate = hash_name[-3:] + hash_name = hash_name[:-3] + else: + truncate = None + hash_module = load_hash_by_name(hash_name) + continue + + if truncate is None: + hash_obj = hash_module.new(tv.msg) + else: + hash_obj = hash_module.new(tv.msg, truncate=truncate) + ecc_key = ECC.construct(curve=curve_name, point_x=tv.qx, point_y=tv.qy) + verifier = DSS.new(ecc_key, 'fips-186-3') + + def positive_test(self, verifier=verifier, hash_obj=hash_obj, signature=tv.r+tv.s): + verifier.verify(hash_obj, signature) + + def negative_test(self, verifier=verifier, hash_obj=hash_obj, signature=tv.r+tv.s): + self.assertRaises(ValueError, verifier.verify, hash_obj, signature) + + if tv.result.startswith('p'): + setattr(FIPS_ECDSA_Tests_KAT, "test_verify_positive_%d" % idx, positive_test) + else: + setattr(FIPS_ECDSA_Tests_KAT, "test_verify_negative_%d" % idx, negative_test) + + +test_vectors_sign = load_test_vectors(("Signature", "ECDSA"), + "SigGen.txt", + "ECDSA Signature Verification 186-3", + {'d': lambda x: int(x, 16)}) or [] + +for idx, tv in enumerate(test_vectors_sign): + + if isinstance(tv, str): + res = re.match(r"\[(P-[0-9]+),(SHA-[0-9]+)\]", tv) + assert res + curve_name = res.group(1) + hash_name = res.group(2).replace("-", "") + hash_module = load_hash_by_name(hash_name) + continue + + hash_obj = hash_module.new(tv.msg) + ecc_key = ECC.construct(curve=curve_name, d=tv.d) + signer = DSS.new(ecc_key, 'fips-186-3', randfunc=StrRNG(tv.k)) + + def sign_test(self, signer=signer, hash_obj=hash_obj, signature=tv.r+tv.s): + self.assertEqual(signer.sign(hash_obj), signature) + setattr(FIPS_ECDSA_Tests_KAT, "test_sign_%d" % idx, sign_test) + + +class Det_DSA_Tests(unittest.TestCase): + """Tests from rfc6979""" + + # Each key is (p, q, g, x, y, desc) + keys = [ + ( + """ + 86F5CA03DCFEB225063FF830A0C769B9DD9D6153AD91D7CE27F787C43278B447 + E6533B86B18BED6E8A48B784A14C252C5BE0DBF60B86D6385BD2F12FB763ED88 + 73ABFD3F5BA2E0A8C0A59082EAC056935E529DAF7C610467899C77ADEDFC846C + 881870B7B19B2B58F9BE0521A17002E3BDD6B86685EE90B3D9A1B02B782B1779""", + "996F967F6C8E388D9E28D01E205FBA957A5698B1", + """ + 
07B0F92546150B62514BB771E2A0C0CE387F03BDA6C56B505209FF25FD3C133D + 89BBCD97E904E09114D9A7DEFDEADFC9078EA544D2E401AEECC40BB9FBBF78FD + 87995A10A1C27CB7789B594BA7EFB5C4326A9FE59A070E136DB77175464ADCA4 + 17BE5DCE2F40D10A46A3A3943F26AB7FD9C0398FF8C76EE0A56826A8A88F1DBD""", + "411602CB19A6CCC34494D79D98EF1E7ED5AF25F7", + """ + 5DF5E01DED31D0297E274E1691C192FE5868FEF9E19A84776454B100CF16F653 + 92195A38B90523E2542EE61871C0440CB87C322FC4B4D2EC5E1E7EC766E1BE8D + 4CE935437DC11C3C8FD426338933EBFE739CB3465F4D3668C5E473508253B1E6 + 82F65CBDC4FAE93C2EA212390E54905A86E2223170B44EAA7DA5DD9FFCFB7F3B""", + "DSA1024" + ), + ( + """ + 9DB6FB5951B66BB6FE1E140F1D2CE5502374161FD6538DF1648218642F0B5C48 + C8F7A41AADFA187324B87674FA1822B00F1ECF8136943D7C55757264E5A1A44F + FE012E9936E00C1D3E9310B01C7D179805D3058B2A9F4BB6F9716BFE6117C6B5 + B3CC4D9BE341104AD4A80AD6C94E005F4B993E14F091EB51743BF33050C38DE2 + 35567E1B34C3D6A5C0CEAA1A0F368213C3D19843D0B4B09DCB9FC72D39C8DE41 + F1BF14D4BB4563CA28371621CAD3324B6A2D392145BEBFAC748805236F5CA2FE + 92B871CD8F9C36D3292B5509CA8CAA77A2ADFC7BFD77DDA6F71125A7456FEA15 + 3E433256A2261C6A06ED3693797E7995FAD5AABBCFBE3EDA2741E375404AE25B""", + "F2C3119374CE76C9356990B465374A17F23F9ED35089BD969F61C6DDE9998C1F", + """ + 5C7FF6B06F8F143FE8288433493E4769C4D988ACE5BE25A0E24809670716C613 + D7B0CEE6932F8FAA7C44D2CB24523DA53FBE4F6EC3595892D1AA58C4328A06C4 + 6A15662E7EAA703A1DECF8BBB2D05DBE2EB956C142A338661D10461C0D135472 + 085057F3494309FFA73C611F78B32ADBB5740C361C9F35BE90997DB2014E2EF5 + AA61782F52ABEB8BD6432C4DD097BC5423B285DAFB60DC364E8161F4A2A35ACA + 3A10B1C4D203CC76A470A33AFDCBDD92959859ABD8B56E1725252D78EAC66E71 + BA9AE3F1DD2487199874393CD4D832186800654760E1E34C09E4D155179F9EC0 + DC4473F996BDCE6EED1CABED8B6F116F7AD9CF505DF0F998E34AB27514B0FFE7""", + "69C7548C21D0DFEA6B9A51C9EAD4E27C33D3B3F180316E5BCAB92C933F0E4DBC", + """ + 667098C654426C78D7F8201EAC6C203EF030D43605032C2F1FA937E5237DBD94 + 9F34A0A2564FE126DC8B715C5141802CE0979C8246463C40E6B6BDAA2513FA61 + 1728716C2E4FD53BC95B89E69949D96512E873B9C8F8DFD499CC312882561ADE + CB31F658E934C0C197F2C4D96B05CBAD67381E7B768891E4DA3843D24D94CDFB + 5126E9B8BF21E8358EE0E0A30EF13FD6A664C0DCE3731F7FB49A4845A4FD8254 + 687972A2D382599C9BAC4E0ED7998193078913032558134976410B89D2C171D1 + 23AC35FD977219597AA7D15C1A9A428E59194F75C721EBCBCFAE44696A499AFA + 74E04299F132026601638CB87AB79190D4A0986315DA8EEC6561C938996BEADF""", + "DSA2048" + ), + ] + + # This is a sequence of items: + # message, k, r, s, hash module + signatures = [ + ( + "sample", + "7BDB6B0FF756E1BB5D53583EF979082F9AD5BD5B", + "2E1A0C2562B2912CAAF89186FB0F42001585DA55", + "29EFB6B0AFF2D7A68EB70CA313022253B9A88DF5", + SHA1, + 'DSA1024' + ), + ( + "sample", + "562097C06782D60C3037BA7BE104774344687649", + "4BC3B686AEA70145856814A6F1BB53346F02101E", + "410697B92295D994D21EDD2F4ADA85566F6F94C1", + SHA224, + 'DSA1024' + ), + ( + "sample", + "519BA0546D0C39202A7D34D7DFA5E760B318BCFB", + "81F2F5850BE5BC123C43F71A3033E9384611C545", + "4CDD914B65EB6C66A8AAAD27299BEE6B035F5E89", + SHA256, + 'DSA1024' + ), + ( + "sample", + "95897CD7BBB944AA932DBC579C1C09EB6FCFC595", + "07F2108557EE0E3921BC1774F1CA9B410B4CE65A", + "54DF70456C86FAC10FAB47C1949AB83F2C6F7595", + SHA384, + 'DSA1024' + ), + ( + "sample", + "09ECE7CA27D0F5A4DD4E556C9DF1D21D28104F8B", + "16C3491F9B8C3FBBDD5E7A7B667057F0D8EE8E1B", + "02C36A127A7B89EDBB72E4FFBC71DABC7D4FC69C", + SHA512, + 'DSA1024' + ), + ( + "test", + "5C842DF4F9E344EE09F056838B42C7A17F4A6433", + "42AB2052FD43E123F0607F115052A67DCD9C5C77", + "183916B0230D45B9931491D4C6B0BD2FB4AAF088", 
+ SHA1, + 'DSA1024' + ), + ( + "test", + "4598B8EFC1A53BC8AECD58D1ABBB0C0C71E67297", + "6868E9964E36C1689F6037F91F28D5F2C30610F2", + "49CEC3ACDC83018C5BD2674ECAAD35B8CD22940F", + SHA224, + 'DSA1024' + ), + ( + "test", + "5A67592E8128E03A417B0484410FB72C0B630E1A", + "22518C127299B0F6FDC9872B282B9E70D0790812", + "6837EC18F150D55DE95B5E29BE7AF5D01E4FE160", + SHA256, + 'DSA1024' + ), + ( + "test", + "220156B761F6CA5E6C9F1B9CF9C24BE25F98CD89", + "854CF929B58D73C3CBFDC421E8D5430CD6DB5E66", + "91D0E0F53E22F898D158380676A871A157CDA622", + SHA384, + 'DSA1024' + ), + ( + "test", + "65D2C2EEB175E370F28C75BFCDC028D22C7DBE9C", + "8EA47E475BA8AC6F2D821DA3BD212D11A3DEB9A0", + "7C670C7AD72B6C050C109E1790008097125433E8", + SHA512, + 'DSA1024' + ), + ( + "sample", + "888FA6F7738A41BDC9846466ABDB8174C0338250AE50CE955CA16230F9CBD53E", + "3A1B2DBD7489D6ED7E608FD036C83AF396E290DBD602408E8677DAABD6E7445A", + "D26FCBA19FA3E3058FFC02CA1596CDBB6E0D20CB37B06054F7E36DED0CDBBCCF", + SHA1, + 'DSA2048' + ), + ( + "sample", + "BC372967702082E1AA4FCE892209F71AE4AD25A6DFD869334E6F153BD0C4D806", + "DC9F4DEADA8D8FF588E98FED0AB690FFCE858DC8C79376450EB6B76C24537E2C", + "A65A9C3BC7BABE286B195D5DA68616DA8D47FA0097F36DD19F517327DC848CEC", + SHA224, + 'DSA2048' + ), + ( + "sample", + "8926A27C40484216F052F4427CFD5647338B7B3939BC6573AF4333569D597C52", + "EACE8BDBBE353C432A795D9EC556C6D021F7A03F42C36E9BC87E4AC7932CC809", + "7081E175455F9247B812B74583E9E94F9EA79BD640DC962533B0680793A38D53", + SHA256, + 'DSA2048' + ), + ( + "sample", + "C345D5AB3DA0A5BCB7EC8F8FB7A7E96069E03B206371EF7D83E39068EC564920", + "B2DA945E91858834FD9BF616EBAC151EDBC4B45D27D0DD4A7F6A22739F45C00B", + "19048B63D9FD6BCA1D9BAE3664E1BCB97F7276C306130969F63F38FA8319021B", + SHA384, + 'DSA2048' + ), + ( + "sample", + "5A12994431785485B3F5F067221517791B85A597B7A9436995C89ED0374668FC", + "2016ED092DC5FB669B8EFB3D1F31A91EECB199879BE0CF78F02BA062CB4C942E", + "D0C76F84B5F091E141572A639A4FB8C230807EEA7D55C8A154A224400AFF2351", + SHA512, + 'DSA2048' + ), + ( + "test", + "6EEA486F9D41A037B2C640BC5645694FF8FF4B98D066A25F76BE641CCB24BA4F", + "C18270A93CFC6063F57A4DFA86024F700D980E4CF4E2CB65A504397273D98EA0", + "414F22E5F31A8B6D33295C7539C1C1BA3A6160D7D68D50AC0D3A5BEAC2884FAA", + SHA1, + 'DSA2048' + ), + ( + "test", + "06BD4C05ED74719106223BE33F2D95DA6B3B541DAD7BFBD7AC508213B6DA6670", + "272ABA31572F6CC55E30BF616B7A265312018DD325BE031BE0CC82AA17870EA3", + "E9CC286A52CCE201586722D36D1E917EB96A4EBDB47932F9576AC645B3A60806", + SHA224, + 'DSA2048' + ), + ( + "test", + "1D6CE6DDA1C5D37307839CD03AB0A5CBB18E60D800937D67DFB4479AAC8DEAD7", + "8190012A1969F9957D56FCCAAD223186F423398D58EF5B3CEFD5A4146A4476F0", + "7452A53F7075D417B4B013B278D1BB8BBD21863F5E7B1CEE679CF2188E1AB19E", + SHA256, + 'DSA2048' + ), + ( + "test", + "206E61F73DBE1B2DC8BE736B22B079E9DACD974DB00EEBBC5B64CAD39CF9F91C", + "239E66DDBE8F8C230A3D071D601B6FFBDFB5901F94D444C6AF56F732BEB954BE", + "6BD737513D5E72FE85D1C750E0F73921FE299B945AAD1C802F15C26A43D34961", + SHA384, + 'DSA2048' + ), + ( + "test", + "AFF1651E4CD6036D57AA8B2A05CCF1A9D5A40166340ECBBDC55BE10B568AA0AA", + "89EC4BB1400ECCFF8E7D9AA515CD1DE7803F2DAFF09693EE7FD1353E90A68307", + "C9F0BDABCC0D880BB137A994CC7F3980CE91CC10FAF529FC46565B15CEA854E1", + SHA512, + 'DSA2048' + ) + ] + + def setUp(self): + # Convert DSA key components from hex strings to integers + # Each key is (p, q, g, x, y, desc) + + from collections import namedtuple + + TestKey = namedtuple('TestKey', 'p q g x y') + new_keys = {} + for k in self.keys: + tk = TestKey(*[t2l(y) for y in k[:-1]]) + 
new_keys[k[-1]] = tk + self.keys = new_keys + + # Convert signature encoding + TestSig = namedtuple('TestSig', 'message nonce result module test_key') + new_signatures = [] + for message, nonce, r, s, module, test_key in self.signatures: + tsig = TestSig( + tobytes(message), + t2l(nonce), + t2b(r) + t2b(s), + module, + self.keys[test_key] + ) + new_signatures.append(tsig) + self.signatures = new_signatures + + def test1(self): + q = 0x4000000000000000000020108A2E0CC0D99F8A5EF + x = 0x09A4D6792295A7F730FC3F2B49CBC0F62E862272F + p = 2 * q + 1 + y = pow(2, x, p) + key = DSA.construct([pow(y, 2, p), 2, p, q, x], False) + signer = DSS.new(key, 'deterministic-rfc6979') + + # Test _int2octets + self.assertEqual(hexlify(signer._int2octets(x)), + b'009a4d6792295a7f730fc3f2b49cbc0f62e862272f') + + # Test _bits2octets + h1 = SHA256.new(b"sample").digest() + self.assertEqual(hexlify(signer._bits2octets(h1)), + b'01795edf0d54db760f156d0dac04c0322b3a204224') + + def test2(self): + + for sig in self.signatures: + tk = sig.test_key + key = DSA.construct([tk.y, tk.g, tk.p, tk.q, tk.x], False) + signer = DSS.new(key, 'deterministic-rfc6979') + + hash_obj = sig.module.new(sig.message) + result = signer.sign(hash_obj) + self.assertEqual(sig.result, result) + + +class Det_ECDSA_Tests(unittest.TestCase): + + key_priv_p192 = ECC.construct(curve="P-192", d=0x6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4) + key_pub_p192 = key_priv_p192.public_key() + + key_priv_p224 = ECC.construct(curve="P-224", d=0xF220266E1105BFE3083E03EC7A3A654651F45E37167E88600BF257C1) + key_pub_p224 = key_priv_p224.public_key() + + key_priv_p256 = ECC.construct(curve="P-256", d=0xC9AFA9D845BA75166B5C215767B1D6934E50C3DB36E89B127B8A622B120F6721) + key_pub_p256 = key_priv_p256.public_key() + + key_priv_p384 = ECC.construct(curve="P-384", d=0x6B9D3DAD2E1B8C1C05B19875B6659F4DE23C3B667BF297BA9AA47740787137D896D5724E4C70A825F872C9EA60D2EDF5) + key_pub_p384 = key_priv_p384.public_key() + + key_priv_p521 = ECC.construct(curve="P-521", d=0x0FAD06DAA62BA3B25D2FB40133DA757205DE67F5BB0018FEE8C86E1B68C7E75CAA896EB32F1F47C70855836A6D16FCC1466F6D8FBEC67DB89EC0C08B0E996B83538) + key_pub_p521 = key_priv_p521.public_key() + + # This is a sequence of items: + # message, k, r, s, hash module + # taken from RFC6979 + signatures_p192_ = ( + ( + "sample", + "37D7CA00D2C7B0E5E412AC03BD44BA837FDD5B28CD3B0021", + "98C6BD12B23EAF5E2A2045132086BE3EB8EBD62ABF6698FF", + "57A22B07DEA9530F8DE9471B1DC6624472E8E2844BC25B64", + SHA1 + ), + ( + "sample", + "4381526B3FC1E7128F202E194505592F01D5FF4C5AF015D8", + "A1F00DAD97AEEC91C95585F36200C65F3C01812AA60378F5", + "E07EC1304C7C6C9DEBBE980B9692668F81D4DE7922A0F97A", + SHA224 + ), + ( + "sample", + "32B1B6D7D42A05CB449065727A84804FB1A3E34D8F261496", + "4B0B8CE98A92866A2820E20AA6B75B56382E0F9BFD5ECB55", + "CCDB006926EA9565CBADC840829D8C384E06DE1F1E381B85", + SHA256 + ), + ( + "sample", + "4730005C4FCB01834C063A7B6760096DBE284B8252EF4311", + "DA63BF0B9ABCF948FBB1E9167F136145F7A20426DCC287D5", + "C3AA2C960972BD7A2003A57E1C4C77F0578F8AE95E31EC5E", + SHA384 + ), + ( + "sample", + "A2AC7AB055E4F20692D49209544C203A7D1F2C0BFBC75DB1", + "4D60C5AB1996BD848343B31C00850205E2EA6922DAC2E4B8", + "3F6E837448F027A1BF4B34E796E32A811CBB4050908D8F67", + SHA512 + ), + ( + "test", + "D9CF9C3D3297D3260773A1DA7418DB5537AB8DD93DE7FA25", + "0F2141A0EBBC44D2E1AF90A50EBCFCE5E197B3B7D4DE036D", + "EB18BC9E1F3D7387500CB99CF5F7C157070A8961E38700B7", + SHA1 + ), + ( + "test", + "F5DC805F76EF851800700CCE82E7B98D8911B7D510059FBE", + 
"6945A1C1D1B2206B8145548F633BB61CEF04891BAF26ED34", + "B7FB7FDFC339C0B9BD61A9F5A8EAF9BE58FC5CBA2CB15293", + SHA224 + ), + ( + "test", + "5C4CE89CF56D9E7C77C8585339B006B97B5F0680B4306C6C", + "3A718BD8B4926C3B52EE6BBE67EF79B18CB6EB62B1AD97AE", + "5662E6848A4A19B1F1AE2F72ACD4B8BBE50F1EAC65D9124F", + SHA256 + ), + ( + "test", + "5AFEFB5D3393261B828DB6C91FBC68C230727B030C975693", + "B234B60B4DB75A733E19280A7A6034BD6B1EE88AF5332367", + "7994090B2D59BB782BE57E74A44C9A1C700413F8ABEFE77A", + SHA384 + ), + ( + "test", + "0758753A5254759C7CFBAD2E2D9B0792EEE44136C9480527", + "FE4F4AE86A58B6507946715934FE2D8FF9D95B6B098FE739", + "74CF5605C98FBA0E1EF34D4B5A1577A7DCF59457CAE52290", + SHA512 + ) + ) + + signatures_p224_ = ( + ( + "sample", + "7EEFADD91110D8DE6C2C470831387C50D3357F7F4D477054B8B426BC", + "22226F9D40A96E19C4A301CE5B74B115303C0F3A4FD30FC257FB57AC", + "66D1CDD83E3AF75605DD6E2FEFF196D30AA7ED7A2EDF7AF475403D69", + SHA1 + ), + ( + "sample", + "C1D1F2F10881088301880506805FEB4825FE09ACB6816C36991AA06D", + "1CDFE6662DDE1E4A1EC4CDEDF6A1F5A2FB7FBD9145C12113E6ABFD3E", + "A6694FD7718A21053F225D3F46197CA699D45006C06F871808F43EBC", + SHA224 + ), + ( + "sample", + "AD3029E0278F80643DE33917CE6908C70A8FF50A411F06E41DEDFCDC", + "61AA3DA010E8E8406C656BC477A7A7189895E7E840CDFE8FF42307BA", + "BC814050DAB5D23770879494F9E0A680DC1AF7161991BDE692B10101", + SHA256 + ), + ( + "sample", + "52B40F5A9D3D13040F494E83D3906C6079F29981035C7BD51E5CAC40", + "0B115E5E36F0F9EC81F1325A5952878D745E19D7BB3EABFABA77E953", + "830F34CCDFE826CCFDC81EB4129772E20E122348A2BBD889A1B1AF1D", + SHA384 + ), + ( + "sample", + "9DB103FFEDEDF9CFDBA05184F925400C1653B8501BAB89CEA0FBEC14", + "074BD1D979D5F32BF958DDC61E4FB4872ADCAFEB2256497CDAC30397", + "A4CECA196C3D5A1FF31027B33185DC8EE43F288B21AB342E5D8EB084", + SHA512 + ), + ( + "test", + "2519178F82C3F0E4F87ED5883A4E114E5B7A6E374043D8EFD329C253", + "DEAA646EC2AF2EA8AD53ED66B2E2DDAA49A12EFD8356561451F3E21C", + "95987796F6CF2062AB8135271DE56AE55366C045F6D9593F53787BD2", + SHA1 + ), + ( + "test", + "DF8B38D40DCA3E077D0AC520BF56B6D565134D9B5F2EAE0D34900524", + "C441CE8E261DED634E4CF84910E4C5D1D22C5CF3B732BB204DBEF019", + "902F42847A63BDC5F6046ADA114953120F99442D76510150F372A3F4", + SHA224 + ), + ( + "test", + "FF86F57924DA248D6E44E8154EB69F0AE2AEBAEE9931D0B5A969F904", + "AD04DDE87B84747A243A631EA47A1BA6D1FAA059149AD2440DE6FBA6", + "178D49B1AE90E3D8B629BE3DB5683915F4E8C99FDF6E666CF37ADCFD", + SHA256 + ), + ( + "test", + "7046742B839478C1B5BD31DB2E862AD868E1A45C863585B5F22BDC2D", + "389B92682E399B26518A95506B52C03BC9379A9DADF3391A21FB0EA4", + "414A718ED3249FF6DBC5B50C27F71F01F070944DA22AB1F78F559AAB", + SHA384 + ), + ( + "test", + "E39C2AA4EA6BE2306C72126D40ED77BF9739BB4D6EF2BBB1DCB6169D", + "049F050477C5ADD858CAC56208394B5A55BAEBBE887FDF765047C17C", + "077EB13E7005929CEFA3CD0403C7CDCC077ADF4E44F3C41B2F60ECFF", + SHA512 + ) + ) + + signatures_p256_ = ( + ( + "sample", + "882905F1227FD620FBF2ABF21244F0BA83D0DC3A9103DBBEE43A1FB858109DB4", + "61340C88C3AAEBEB4F6D667F672CA9759A6CCAA9FA8811313039EE4A35471D32", + "6D7F147DAC089441BB2E2FE8F7A3FA264B9C475098FDCF6E00D7C996E1B8B7EB", + SHA1 + ), + ( + "sample", + "103F90EE9DC52E5E7FB5132B7033C63066D194321491862059967C715985D473", + "53B2FFF5D1752B2C689DF257C04C40A587FABABB3F6FC2702F1343AF7CA9AA3F", + "B9AFB64FDC03DC1A131C7D2386D11E349F070AA432A4ACC918BEA988BF75C74C", + SHA224 + ), + ( + "sample", + "A6E3C57DD01ABE90086538398355DD4C3B17AA873382B0F24D6129493D8AAD60", + "EFD48B2AACB6A8FD1140DD9CD45E81D69D2C877B56AAF991C34D0EA84EAF3716", + 
"F7CB1C942D657C41D436C7A1B6E29F65F3E900DBB9AFF4064DC4AB2F843ACDA8", + SHA256 + ), + ( + "sample", + "09F634B188CEFD98E7EC88B1AA9852D734D0BC272F7D2A47DECC6EBEB375AAD4", + "0EAFEA039B20E9B42309FB1D89E213057CBF973DC0CFC8F129EDDDC800EF7719", + "4861F0491E6998B9455193E34E7B0D284DDD7149A74B95B9261F13ABDE940954", + SHA384 + ), + ( + "sample", + "5FA81C63109BADB88C1F367B47DA606DA28CAD69AA22C4FE6AD7DF73A7173AA5", + "8496A60B5E9B47C825488827E0495B0E3FA109EC4568FD3F8D1097678EB97F00", + "2362AB1ADBE2B8ADF9CB9EDAB740EA6049C028114F2460F96554F61FAE3302FE", + SHA512 + ), + ( + "test", + "8C9520267C55D6B980DF741E56B4ADEE114D84FBFA2E62137954164028632A2E", + "0CBCC86FD6ABD1D99E703E1EC50069EE5C0B4BA4B9AC60E409E8EC5910D81A89", + "01B9D7B73DFAA60D5651EC4591A0136F87653E0FD780C3B1BC872FFDEAE479B1", + SHA1 + ), + ( + "test", + "669F4426F2688B8BE0DB3A6BD1989BDAEFFF84B649EEB84F3DD26080F667FAA7", + "C37EDB6F0AE79D47C3C27E962FA269BB4F441770357E114EE511F662EC34A692", + "C820053A05791E521FCAAD6042D40AEA1D6B1A540138558F47D0719800E18F2D", + SHA224 + ), + ( + "test", + "D16B6AE827F17175E040871A1C7EC3500192C4C92677336EC2537ACAEE0008E0", + "F1ABB023518351CD71D881567B1EA663ED3EFCF6C5132B354F28D3B0B7D38367", + "019F4113742A2B14BD25926B49C649155F267E60D3814B4C0CC84250E46F0083", + SHA256 + ), + ( + "test", + "16AEFFA357260B04B1DD199693960740066C1A8F3E8EDD79070AA914D361B3B8", + "83910E8B48BB0C74244EBDF7F07A1C5413D61472BD941EF3920E623FBCCEBEB6", + "8DDBEC54CF8CD5874883841D712142A56A8D0F218F5003CB0296B6B509619F2C", + SHA384 + ), + ( + "test", + "6915D11632ACA3C40D5D51C08DAF9C555933819548784480E93499000D9F0B7F", + "461D93F31B6540894788FD206C07CFA0CC35F46FA3C91816FFF1040AD1581A04", + "39AF9F15DE0DB8D97E72719C74820D304CE5226E32DEDAE67519E840D1194E55", + SHA512 + ) + ) + + signatures_p384_ = ( + ( + "sample", + "4471EF7518BB2C7C20F62EAE1C387AD0C5E8E470995DB4ACF694466E6AB096630F29E5938D25106C3C340045A2DB01A7", + "EC748D839243D6FBEF4FC5C4859A7DFFD7F3ABDDF72014540C16D73309834FA37B9BA002899F6FDA3A4A9386790D4EB2", + "A3BCFA947BEEF4732BF247AC17F71676CB31A847B9FF0CBC9C9ED4C1A5B3FACF26F49CA031D4857570CCB5CA4424A443", + SHA1 + ), + ( + "sample", + "A4E4D2F0E729EB786B31FC20AD5D849E304450E0AE8E3E341134A5C1AFA03CAB8083EE4E3C45B06A5899EA56C51B5879", + "42356E76B55A6D9B4631C865445DBE54E056D3B3431766D0509244793C3F9366450F76EE3DE43F5A125333A6BE060122", + "9DA0C81787064021E78DF658F2FBB0B042BF304665DB721F077A4298B095E4834C082C03D83028EFBF93A3C23940CA8D", + SHA224 + ), + ( + "sample", + "180AE9F9AEC5438A44BC159A1FCB277C7BE54FA20E7CF404B490650A8ACC414E375572342863C899F9F2EDF9747A9B60", + "21B13D1E013C7FA1392D03C5F99AF8B30C570C6F98D4EA8E354B63A21D3DAA33BDE1E888E63355D92FA2B3C36D8FB2CD", + "F3AA443FB107745BF4BD77CB3891674632068A10CA67E3D45DB2266FA7D1FEEBEFDC63ECCD1AC42EC0CB8668A4FA0AB0", + SHA256 + ), + ( + "sample", + "94ED910D1A099DAD3254E9242AE85ABDE4BA15168EAF0CA87A555FD56D10FBCA2907E3E83BA95368623B8C4686915CF9", + "94EDBB92A5ECB8AAD4736E56C691916B3F88140666CE9FA73D64C4EA95AD133C81A648152E44ACF96E36DD1E80FABE46", + "99EF4AEB15F178CEA1FE40DB2603138F130E740A19624526203B6351D0A3A94FA329C145786E679E7B82C71A38628AC8", + SHA384 + ), + ( + "sample", + "92FC3C7183A883E24216D1141F1A8976C5B0DD797DFA597E3D7B32198BD35331A4E966532593A52980D0E3AAA5E10EC3", + "ED0959D5880AB2D869AE7F6C2915C6D60F96507F9CB3E047C0046861DA4A799CFE30F35CC900056D7C99CD7882433709", + "512C8CCEEE3890A84058CE1E22DBC2198F42323CE8ACA9135329F03C068E5112DC7CC3EF3446DEFCEB01A45C2667FDD5", + SHA512 + ), + ( + "test", + 
"66CC2C8F4D303FC962E5FF6A27BD79F84EC812DDAE58CF5243B64A4AD8094D47EC3727F3A3C186C15054492E30698497", + "4BC35D3A50EF4E30576F58CD96CE6BF638025EE624004A1F7789A8B8E43D0678ACD9D29876DAF46638645F7F404B11C7", + "D5A6326C494ED3FF614703878961C0FDE7B2C278F9A65FD8C4B7186201A2991695BA1C84541327E966FA7B50F7382282", + SHA1 + ), + ( + "test", + "18FA39DB95AA5F561F30FA3591DC59C0FA3653A80DAFFA0B48D1A4C6DFCBFF6E3D33BE4DC5EB8886A8ECD093F2935726", + "E8C9D0B6EA72A0E7837FEA1D14A1A9557F29FAA45D3E7EE888FC5BF954B5E62464A9A817C47FF78B8C11066B24080E72", + "07041D4A7A0379AC7232FF72E6F77B6DDB8F09B16CCE0EC3286B2BD43FA8C6141C53EA5ABEF0D8231077A04540A96B66", + SHA224 + ), + ( + "test", + "0CFAC37587532347DC3389FDC98286BBA8C73807285B184C83E62E26C401C0FAA48DD070BA79921A3457ABFF2D630AD7", + "6D6DEFAC9AB64DABAFE36C6BF510352A4CC27001263638E5B16D9BB51D451559F918EEDAF2293BE5B475CC8F0188636B", + "2D46F3BECBCC523D5F1A1256BF0C9B024D879BA9E838144C8BA6BAEB4B53B47D51AB373F9845C0514EEFB14024787265", + SHA256 + ), + ( + "test", + "015EE46A5BF88773ED9123A5AB0807962D193719503C527B031B4C2D225092ADA71F4A459BC0DA98ADB95837DB8312EA", + "8203B63D3C853E8D77227FB377BCF7B7B772E97892A80F36AB775D509D7A5FEB0542A7F0812998DA8F1DD3CA3CF023DB", + "DDD0760448D42D8A43AF45AF836FCE4DE8BE06B485E9B61B827C2F13173923E06A739F040649A667BF3B828246BAA5A5", + SHA384 + ), + ( + "test", + "3780C4F67CB15518B6ACAE34C9F83568D2E12E47DEAB6C50A4E4EE5319D1E8CE0E2CC8A136036DC4B9C00E6888F66B6C", + "A0D5D090C9980FAF3C2CE57B7AE951D31977DD11C775D314AF55F76C676447D06FB6495CD21B4B6E340FC236584FB277", + "976984E59B4C77B0E8E4460DCA3D9F20E07B9BB1F63BEEFAF576F6B2E8B224634A2092CD3792E0159AD9CEE37659C736", + SHA512 + ), + ) + + signatures_p521_ = ( + ( + "sample", + "0089C071B419E1C2820962321787258469511958E80582E95D8378E0C2CCDB3CB42BEDE42F50E3FA3C71F5A76724281D31D9C89F0F91FC1BE4918DB1C03A5838D0F9", + "00343B6EC45728975EA5CBA6659BBB6062A5FF89EEA58BE3C80B619F322C87910FE092F7D45BB0F8EEE01ED3F20BABEC079D202AE677B243AB40B5431D497C55D75D", + "00E7B0E675A9B24413D448B8CC119D2BF7B2D2DF032741C096634D6D65D0DBE3D5694625FB9E8104D3B842C1B0E2D0B98BEA19341E8676AEF66AE4EBA3D5475D5D16", + SHA1 + ), + ( + "sample", + "0121415EC2CD7726330A61F7F3FA5DE14BE9436019C4DB8CB4041F3B54CF31BE0493EE3F427FB906393D895A19C9523F3A1D54BB8702BD4AA9C99DAB2597B92113F3", + "01776331CFCDF927D666E032E00CF776187BC9FDD8E69D0DABB4109FFE1B5E2A30715F4CC923A4A5E94D2503E9ACFED92857B7F31D7152E0F8C00C15FF3D87E2ED2E", + "0050CB5265417FE2320BBB5A122B8E1A32BD699089851128E360E620A30C7E17BA41A666AF126CE100E5799B153B60528D5300D08489CA9178FB610A2006C254B41F", + SHA224 + ), + ( + "sample", + "00EDF38AFCAAECAB4383358B34D67C9F2216C8382AAEA44A3DAD5FDC9C32575761793FEF24EB0FC276DFC4F6E3EC476752F043CF01415387470BCBD8678ED2C7E1A0", + "01511BB4D675114FE266FC4372B87682BAECC01D3CC62CF2303C92B3526012659D16876E25C7C1E57648F23B73564D67F61C6F14D527D54972810421E7D87589E1A7", + "004A171143A83163D6DF460AAF61522695F207A58B95C0644D87E52AA1A347916E4F7A72930B1BC06DBE22CE3F58264AFD23704CBB63B29B931F7DE6C9D949A7ECFC", + SHA256 + ), + ( + "sample", + "01546A108BC23A15D6F21872F7DED661FA8431DDBD922D0DCDB77CC878C8553FFAD064C95A920A750AC9137E527390D2D92F153E66196966EA554D9ADFCB109C4211", + "01EA842A0E17D2DE4F92C15315C63DDF72685C18195C2BB95E572B9C5136CA4B4B576AD712A52BE9730627D16054BA40CC0B8D3FF035B12AE75168397F5D50C67451", + "01F21A3CEE066E1961025FB048BD5FE2B7924D0CD797BABE0A83B66F1E35EEAF5FDE143FA85DC394A7DEE766523393784484BDF3E00114A1C857CDE1AA203DB65D61", + SHA384 + ), + ( + "sample", + 
"01DAE2EA071F8110DC26882D4D5EAE0621A3256FC8847FB9022E2B7D28E6F10198B1574FDD03A9053C08A1854A168AA5A57470EC97DD5CE090124EF52A2F7ECBFFD3", + "00C328FAFCBD79DD77850370C46325D987CB525569FB63C5D3BC53950E6D4C5F174E25A1EE9017B5D450606ADD152B534931D7D4E8455CC91F9B15BF05EC36E377FA", + "00617CCE7CF5064806C467F678D3B4080D6F1CC50AF26CA209417308281B68AF282623EAA63E5B5C0723D8B8C37FF0777B1A20F8CCB1DCCC43997F1EE0E44DA4A67A", + SHA512 + ), + ( + "test", + "00BB9F2BF4FE1038CCF4DABD7139A56F6FD8BB1386561BD3C6A4FC818B20DF5DDBA80795A947107A1AB9D12DAA615B1ADE4F7A9DC05E8E6311150F47F5C57CE8B222", + "013BAD9F29ABE20DE37EBEB823C252CA0F63361284015A3BF430A46AAA80B87B0693F0694BD88AFE4E661FC33B094CD3B7963BED5A727ED8BD6A3A202ABE009D0367", + "01E9BB81FF7944CA409AD138DBBEE228E1AFCC0C890FC78EC8604639CB0DBDC90F717A99EAD9D272855D00162EE9527567DD6A92CBD629805C0445282BBC916797FF", + SHA1 + ), + ( + "test", + "0040D09FCF3C8A5F62CF4FB223CBBB2B9937F6B0577C27020A99602C25A01136987E452988781484EDBBCF1C47E554E7FC901BC3085E5206D9F619CFF07E73D6F706", + "01C7ED902E123E6815546065A2C4AF977B22AA8EADDB68B2C1110E7EA44D42086BFE4A34B67DDC0E17E96536E358219B23A706C6A6E16BA77B65E1C595D43CAE17FB", + "0177336676304FCB343CE028B38E7B4FBA76C1C1B277DA18CAD2A8478B2A9A9F5BEC0F3BA04F35DB3E4263569EC6AADE8C92746E4C82F8299AE1B8F1739F8FD519A4", + SHA224 + ), + ( + "test", + "001DE74955EFAABC4C4F17F8E84D881D1310B5392D7700275F82F145C61E843841AF09035BF7A6210F5A431A6A9E81C9323354A9E69135D44EBD2FCAA7731B909258", + "000E871C4A14F993C6C7369501900C4BC1E9C7B0B4BA44E04868B30B41D8071042EB28C4C250411D0CE08CD197E4188EA4876F279F90B3D8D74A3C76E6F1E4656AA8", + "00CD52DBAA33B063C3A6CD8058A1FB0A46A4754B034FCC644766CA14DA8CA5CA9FDE00E88C1AD60CCBA759025299079D7A427EC3CC5B619BFBC828E7769BCD694E86", + SHA256 + ), + ( + "test", + "01F1FC4A349A7DA9A9E116BFDD055DC08E78252FF8E23AC276AC88B1770AE0B5DCEB1ED14A4916B769A523CE1E90BA22846AF11DF8B300C38818F713DADD85DE0C88", + "014BEE21A18B6D8B3C93FAB08D43E739707953244FDBE924FA926D76669E7AC8C89DF62ED8975C2D8397A65A49DCC09F6B0AC62272741924D479354D74FF6075578C", + "0133330865C067A0EAF72362A65E2D7BC4E461E8C8995C3B6226A21BD1AA78F0ED94FE536A0DCA35534F0CD1510C41525D163FE9D74D134881E35141ED5E8E95B979", + SHA384 + ), + ( + "test", + "016200813020EC986863BEDFC1B121F605C1215645018AEA1A7B215A564DE9EB1B38A67AA1128B80CE391C4FB71187654AAA3431027BFC7F395766CA988C964DC56D", + "013E99020ABF5CEE7525D16B69B229652AB6BDF2AFFCAEF38773B4B7D08725F10CDB93482FDCC54EDCEE91ECA4166B2A7C6265EF0CE2BD7051B7CEF945BABD47EE6D", + "01FBD0013C674AA79CB39849527916CE301C66EA7CE8B80682786AD60F98F7E78A19CA69EFF5C57400E3B3A0AD66CE0978214D13BAF4E9AC60752F7B155E2DE4DCE3", + SHA512 + ), + ) + + signatures_p192 = [] + for a, b, c, d, e in signatures_p192_: + new_tv = (tobytes(a), unhexlify(b), unhexlify(c), unhexlify(d), e) + signatures_p192.append(new_tv) + + signatures_p224 = [] + for a, b, c, d, e in signatures_p224_: + new_tv = (tobytes(a), unhexlify(b), unhexlify(c), unhexlify(d), e) + signatures_p224.append(new_tv) + + signatures_p256 = [] + for a, b, c, d, e in signatures_p256_: + new_tv = (tobytes(a), unhexlify(b), unhexlify(c), unhexlify(d), e) + signatures_p256.append(new_tv) + + signatures_p384 = [] + for a, b, c, d, e in signatures_p384_: + new_tv = (tobytes(a), unhexlify(b), unhexlify(c), unhexlify(d), e) + signatures_p384.append(new_tv) + + signatures_p521 = [] + for a, b, c, d, e in signatures_p521_: + new_tv = (tobytes(a), unhexlify(b), unhexlify(c), unhexlify(d), e) + signatures_p521.append(new_tv) + + def shortDescription(self): + return "Deterministic ECDSA Tests" + + def 
test_loopback_p192(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv_p192, 'deterministic-rfc6979') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub_p192, 'deterministic-rfc6979') + verifier.verify(hashed_msg, signature) + + def test_loopback_p224(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv_p224, 'deterministic-rfc6979') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub_p224, 'deterministic-rfc6979') + verifier.verify(hashed_msg, signature) + + def test_loopback_p256(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv_p256, 'deterministic-rfc6979') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub_p256, 'deterministic-rfc6979') + verifier.verify(hashed_msg, signature) + + def test_loopback_p384(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv_p384, 'deterministic-rfc6979') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub_p384, 'deterministic-rfc6979') + verifier.verify(hashed_msg, signature) + + def test_loopback_p521(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv_p521, 'deterministic-rfc6979') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub_p521, 'deterministic-rfc6979') + verifier.verify(hashed_msg, signature) + + def test_data_rfc6979_p192(self): + signer = DSS.new(self.key_priv_p192, 'deterministic-rfc6979') + for message, k, r, s, module in self.signatures_p192: + hash_obj = module.new(message) + result = signer.sign(hash_obj) + self.assertEqual(r + s, result) + + def test_data_rfc6979_p224(self): + signer = DSS.new(self.key_priv_p224, 'deterministic-rfc6979') + for message, k, r, s, module in self.signatures_p224: + hash_obj = module.new(message) + result = signer.sign(hash_obj) + self.assertEqual(r + s, result) + + def test_data_rfc6979_p256(self): + signer = DSS.new(self.key_priv_p256, 'deterministic-rfc6979') + for message, k, r, s, module in self.signatures_p256: + hash_obj = module.new(message) + result = signer.sign(hash_obj) + self.assertEqual(r + s, result) + + def test_data_rfc6979_p384(self): + signer = DSS.new(self.key_priv_p384, 'deterministic-rfc6979') + for message, k, r, s, module in self.signatures_p384: + hash_obj = module.new(message) + result = signer.sign(hash_obj) + self.assertEqual(r + s, result) + + def test_data_rfc6979_p521(self): + signer = DSS.new(self.key_priv_p521, 'deterministic-rfc6979') + for message, k, r, s, module in self.signatures_p521: + hash_obj = module.new(message) + result = signer.sign(hash_obj) + self.assertEqual(r + s, result) + + +def get_hash_module(hash_name): + if hash_name == "SHA-512": + hash_module = SHA512 + elif hash_name == "SHA-512/224": + hash_module = SHA512.new(truncate="224") + elif hash_name == "SHA-512/256": + hash_module = SHA512.new(truncate="256") + elif hash_name == "SHA-384": + hash_module = SHA384 + elif hash_name == "SHA-256": + hash_module = SHA256 + elif hash_name == "SHA-224": + hash_module = SHA224 + elif hash_name == "SHA-1": + hash_module = SHA1 + elif hash_name == "SHA3-224": + hash_module = SHA3_224 + elif hash_name == "SHA3-256": + hash_module = SHA3_256 + elif hash_name == "SHA3-384": + hash_module = SHA3_384 + elif hash_name == "SHA3-512": + hash_module = SHA3_512 + else: + raise ValueError("Unknown hash algorithm: " + hash_name) + return hash_module + + +class TestVectorsDSAWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, 
slow_tests): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._slow_tests = slow_tests + self._id = "None" + self.tv = [] + + def setUp(self): + + def filter_dsa(group): + return DSA.import_key(group['keyPem']) + + def filter_sha(group): + return get_hash_module(group['sha']) + + def filter_type(group): + sig_type = group['type'] + if sig_type != 'DsaVerify': + raise ValueError("Unknown signature type " + sig_type) + return sig_type + + result = load_test_vectors_wycheproof(("Signature", "wycheproof"), + "dsa_test.json", + "Wycheproof DSA signature", + group_tag={'key': filter_dsa, + 'hash_module': filter_sha, + 'sig_type': filter_type}) + self.tv += result + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_verify(self, tv): + self._id = "Wycheproof DSA Test #" + str(tv.id) + + hashed_msg = tv.hash_module.new(tv.msg) + signer = DSS.new(tv.key, 'fips-186-3', encoding='der') + try: + signature = signer.verify(hashed_msg, tv.sig) + except ValueError as e: + if tv.warning: + return + assert not tv.valid + else: + assert tv.valid + self.warn(tv) + + def runTest(self): + for tv in self.tv: + self.test_verify(tv) + + +class TestVectorsECDSAWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, slow_tests): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._slow_tests = slow_tests + self._id = "None" + + def add_tests(self, filename): + + def filter_ecc(group): + # These are the only curves we accept to skip + if group['key']['curve'] in ('secp224k1', 'secp256k1', + 'brainpoolP224r1', 'brainpoolP224t1', + 'brainpoolP256r1', 'brainpoolP256t1', + 'brainpoolP320r1', 'brainpoolP320t1', + 'brainpoolP384r1', 'brainpoolP384t1', + 'brainpoolP512r1', 'brainpoolP512t1', + ): + return None + return ECC.import_key(group['keyPem']) + + def filter_sha(group): + return get_hash_module(group['sha']) + + def filter_encoding(group): + encoding_name = group['type'] + if encoding_name == "EcdsaVerify": + return "der" + elif encoding_name == "EcdsaP1363Verify": + return "binary" + else: + raise ValueError("Unknown signature type " + encoding_name) + + result = load_test_vectors_wycheproof(("Signature", "wycheproof"), + filename, + "Wycheproof ECDSA signature (%s)" % filename, + group_tag={'key': filter_ecc, + 'hash_module': filter_sha, + 'encoding': filter_encoding, + }) + self.tv += result + + def setUp(self): + self.tv = [] + self.add_tests("ecdsa_secp224r1_sha224_p1363_test.json") + self.add_tests("ecdsa_secp224r1_sha224_test.json") + if self._slow_tests: + self.add_tests("ecdsa_secp224r1_sha256_p1363_test.json") + self.add_tests("ecdsa_secp224r1_sha256_test.json") + self.add_tests("ecdsa_secp224r1_sha3_224_test.json") + self.add_tests("ecdsa_secp224r1_sha3_256_test.json") + self.add_tests("ecdsa_secp224r1_sha3_512_test.json") + self.add_tests("ecdsa_secp224r1_sha512_p1363_test.json") + self.add_tests("ecdsa_secp224r1_sha512_test.json") + self.add_tests("ecdsa_secp256r1_sha256_p1363_test.json") + self.add_tests("ecdsa_secp256r1_sha256_test.json") + self.add_tests("ecdsa_secp256r1_sha3_256_test.json") + self.add_tests("ecdsa_secp256r1_sha3_512_test.json") + self.add_tests("ecdsa_secp256r1_sha512_p1363_test.json") + self.add_tests("ecdsa_secp256r1_sha512_test.json") + if self._slow_tests: + self.add_tests("ecdsa_secp384r1_sha3_384_test.json") + 
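
The `Det_ECDSA_Tests` vectors above come from RFC 6979, where the ECDSA nonce `k` is derived deterministically from the private key and the message digest instead of a fresh random value. A self-contained sketch of what `test_data_rfc6979_p256` checks follows; the private scalar `d` is the NIST P-256 test key from RFC 6979, appendix A.2.5 (constructed earlier in this file as `key_priv_p256`), and the expected `r`/`s` are taken from the `signatures_p256_` table above.

```python
# Standalone sketch of the RFC 6979 check performed by Det_ECDSA_Tests.
# d is the NIST P-256 test key from RFC 6979, appendix A.2.5.
from binascii import unhexlify
from Crypto.PublicKey import ECC
from Crypto.Hash import SHA256
from Crypto.Signature import DSS

key = ECC.construct(
    curve='P-256',
    d=0xC9AFA9D845BA75166B5C215767B1D6934E50C3DB36E89B127B8A622B120F6721)

# 'deterministic-rfc6979' derives the nonce k from the key and the digest,
# so the same (key, message) pair always produces the same signature.
signer = DSS.new(key, 'deterministic-rfc6979')
signature = signer.sign(SHA256.new(b'sample'))

# r and s for ("sample", SHA256) from the signatures_p256_ table above;
# the default 'binary' encoding concatenates them as r || s.
r = 'EFD48B2AACB6A8FD1140DD9CD45E81D69D2C877B56AAF991C34D0EA84EAF3716'
s = 'F7CB1C942D657C41D436C7A1B6E29F65F3E900DBB9AFF4064DC4AB2F843ACDA8'
assert signature == unhexlify(r + s)
```
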
self.add_tests("ecdsa_secp384r1_sha3_512_test.json") + self.add_tests("ecdsa_secp384r1_sha384_p1363_test.json") + self.add_tests("ecdsa_secp384r1_sha384_test.json") + self.add_tests("ecdsa_secp384r1_sha512_p1363_test.json") + self.add_tests("ecdsa_secp384r1_sha512_test.json") + if self._slow_tests: + self.add_tests("ecdsa_secp521r1_sha3_512_test.json") + self.add_tests("ecdsa_secp521r1_sha512_p1363_test.json") + self.add_tests("ecdsa_secp521r1_sha512_test.json") + self.add_tests("ecdsa_test.json") + self.add_tests("ecdsa_webcrypto_test.json") + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_verify(self, tv): + self._id = "Wycheproof ECDSA Test #%d (%s, %s)" % (tv.id, tv.comment, tv.filename) + + # Skip tests with unsupported curves + if tv.key is None: + return + + hashed_msg = tv.hash_module.new(tv.msg) + signer = DSS.new(tv.key, 'fips-186-3', encoding=tv.encoding) + try: + signature = signer.verify(hashed_msg, tv.sig) + except ValueError as e: + if tv.warning: + return + if tv.comment == "k*G has a large x-coordinate": + return + assert not tv.valid + else: + assert tv.valid + self.warn(tv) + + def runTest(self): + for tv in self.tv: + self.test_verify(tv) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(FIPS_DSA_Tests) + tests += list_test_cases(FIPS_ECDSA_Tests) + tests += list_test_cases(Det_DSA_Tests) + tests += list_test_cases(Det_ECDSA_Tests) + + slow_tests = config.get('slow_tests') + if slow_tests: + tests += list_test_cases(FIPS_DSA_Tests_KAT) + tests += list_test_cases(FIPS_ECDSA_Tests_KAT) + + tests += [TestVectorsDSAWycheproof(wycheproof_warnings, slow_tests)] + tests += [TestVectorsECDSAWycheproof(wycheproof_warnings, slow_tests)] + + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_eddsa.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_eddsa.py new file mode 100644 index 00000000..6a9a9b0c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_eddsa.py @@ -0,0 +1,578 @@ +# +# Copyright (c) 2022, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Crypto.PublicKey import ECC +from Crypto.Signature import eddsa +from Crypto.Hash import SHA512, SHAKE256 +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors_wycheproof +from Crypto.Util.number import bytes_to_long + +rfc8032_tv_str = ( + # 7.1 Ed25519 + ( + "9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60", + "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a", + "", + None, + "", + "e5564300c360ac729086e2cc806e828a" + "84877f1eb8e5d974d873e06522490155" + "5fb8821590a33bacc61e39701cf9b46b" + "d25bf5f0595bbe24655141438e7a100b" + ), + ( + "4ccd089b28ff96da9db6c346ec114e0f5b8a319f35aba624da8cf6ed4fb8a6fb", + "3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c", + "72", + None, + "", + "92a009a9f0d4cab8720e820b5f642540" + "a2b27b5416503f8fb3762223ebdb69da" + "085ac1e43e15996e458f3613d0f11d8c" + "387b2eaeb4302aeeb00d291612bb0c00" + ), + ( + "c5aa8df43f9f837bedb7442f31dcb7b166d38535076f094b85ce3a2e0b4458f7", + "fc51cd8e6218a1a38da47ed00230f0580816ed13ba3303ac5deb911548908025", + "af82", + None, + "", + "6291d657deec24024827e69c3abe01a3" + "0ce548a284743a445e3680d7db5ac3ac" + "18ff9b538d16f290ae67f760984dc659" + "4a7c15e9716ed28dc027beceea1ec40a" + ), + ( + "f5e5767cf153319517630f226876b86c8160cc583bc013744c6bf255f5cc0ee5", + "278117fc144c72340f67d0f2316e8386ceffbf2b2428c9c51fef7c597f1d426e", + "08b8b2b733424243760fe426a4b54908" + "632110a66c2f6591eabd3345e3e4eb98" + "fa6e264bf09efe12ee50f8f54e9f77b1" + "e355f6c50544e23fb1433ddf73be84d8" + "79de7c0046dc4996d9e773f4bc9efe57" + "38829adb26c81b37c93a1b270b20329d" + "658675fc6ea534e0810a4432826bf58c" + "941efb65d57a338bbd2e26640f89ffbc" + "1a858efcb8550ee3a5e1998bd177e93a" + "7363c344fe6b199ee5d02e82d522c4fe" + "ba15452f80288a821a579116ec6dad2b" + "3b310da903401aa62100ab5d1a36553e" + "06203b33890cc9b832f79ef80560ccb9" + "a39ce767967ed628c6ad573cb116dbef" + "efd75499da96bd68a8a97b928a8bbc10" + "3b6621fcde2beca1231d206be6cd9ec7" + "aff6f6c94fcd7204ed3455c68c83f4a4" + "1da4af2b74ef5c53f1d8ac70bdcb7ed1" + "85ce81bd84359d44254d95629e9855a9" + "4a7c1958d1f8ada5d0532ed8a5aa3fb2" + "d17ba70eb6248e594e1a2297acbbb39d" + "502f1a8c6eb6f1ce22b3de1a1f40cc24" + "554119a831a9aad6079cad88425de6bd" + "e1a9187ebb6092cf67bf2b13fd65f270" + "88d78b7e883c8759d2c4f5c65adb7553" + "878ad575f9fad878e80a0c9ba63bcbcc" + "2732e69485bbc9c90bfbd62481d9089b" + "eccf80cfe2df16a2cf65bd92dd597b07" + "07e0917af48bbb75fed413d238f5555a" + "7a569d80c3414a8d0859dc65a46128ba" + "b27af87a71314f318c782b23ebfe808b" + "82b0ce26401d2e22f04d83d1255dc51a" + "ddd3b75a2b1ae0784504df543af8969b" + "e3ea7082ff7fc9888c144da2af58429e" + "c96031dbcad3dad9af0dcbaaaf268cb8" + "fcffead94f3c7ca495e056a9b47acdb7" + "51fb73e666c6c655ade8297297d07ad1" + "ba5e43f1bca32301651339e22904cc8c" + "42f58c30c04aafdb038dda0847dd988d" + 
"cda6f3bfd15c4b4c4525004aa06eeff8" + "ca61783aacec57fb3d1f92b0fe2fd1a8" + "5f6724517b65e614ad6808d6f6ee34df" + "f7310fdc82aebfd904b01e1dc54b2927" + "094b2db68d6f903b68401adebf5a7e08" + "d78ff4ef5d63653a65040cf9bfd4aca7" + "984a74d37145986780fc0b16ac451649" + "de6188a7dbdf191f64b5fc5e2ab47b57" + "f7f7276cd419c17a3ca8e1b939ae49e4" + "88acba6b965610b5480109c8b17b80e1" + "b7b750dfc7598d5d5011fd2dcc5600a3" + "2ef5b52a1ecc820e308aa342721aac09" + "43bf6686b64b2579376504ccc493d97e" + "6aed3fb0f9cd71a43dd497f01f17c0e2" + "cb3797aa2a2f256656168e6c496afc5f" + "b93246f6b1116398a346f1a641f3b041" + "e989f7914f90cc2c7fff357876e506b5" + "0d334ba77c225bc307ba537152f3f161" + "0e4eafe595f6d9d90d11faa933a15ef1" + "369546868a7f3a45a96768d40fd9d034" + "12c091c6315cf4fde7cb68606937380d" + "b2eaaa707b4c4185c32eddcdd306705e" + "4dc1ffc872eeee475a64dfac86aba41c" + "0618983f8741c5ef68d3a101e8a3b8ca" + "c60c905c15fc910840b94c00a0b9d0", + None, + "", + "0aab4c900501b3e24d7cdf4663326a3a" + "87df5e4843b2cbdb67cbf6e460fec350" + "aa5371b1508f9f4528ecea23c436d94b" + "5e8fcd4f681e30a6ac00a9704a188a03" + ), + # 7.2 Ed25519ctx + ( + "0305334e381af78f141cb666f6199f57" + "bc3495335a256a95bd2a55bf546663f6", + "dfc9425e4f968f7f0c29f0259cf5f9ae" + "d6851c2bb4ad8bfb860cfee0ab248292", + "f726936d19c800494e3fdaff20b276a8", + None, + "666f6f", + "55a4cc2f70a54e04288c5f4cd1e45a7b" + "b520b36292911876cada7323198dd87a" + "8b36950b95130022907a7fb7c4e9b2d5" + "f6cca685a587b4b21f4b888e4e7edb0d" + ), + ( + "0305334e381af78f141cb666f6199f57" + "bc3495335a256a95bd2a55bf546663f6", + "dfc9425e4f968f7f0c29f0259cf5f9ae" + "d6851c2bb4ad8bfb860cfee0ab248292", + "f726936d19c800494e3fdaff20b276a8", + None, + "626172", + "fc60d5872fc46b3aa69f8b5b4351d580" + "8f92bcc044606db097abab6dbcb1aee3" + "216c48e8b3b66431b5b186d1d28f8ee1" + "5a5ca2df6668346291c2043d4eb3e90d" + ), + ( + "0305334e381af78f141cb666f6199f57" + "bc3495335a256a95bd2a55bf546663f6", + "dfc9425e4f968f7f0c29f0259cf5f9ae" + "d6851c2bb4ad8bfb860cfee0ab248292", + "508e9e6882b979fea900f62adceaca35", + None, + "666f6f", + "8b70c1cc8310e1de20ac53ce28ae6e72" + "07f33c3295e03bb5c0732a1d20dc6490" + "8922a8b052cf99b7c4fe107a5abb5b2c" + "4085ae75890d02df26269d8945f84b0b" + ), + ( + "ab9c2853ce297ddab85c993b3ae14bca" + "d39b2c682beabc27d6d4eb20711d6560", + "0f1d1274943b91415889152e893d80e9" + "3275a1fc0b65fd71b4b0dda10ad7d772", + "f726936d19c800494e3fdaff20b276a8", + None, + "666f6f", + "21655b5f1aa965996b3f97b3c849eafb" + "a922a0a62992f73b3d1b73106a84ad85" + "e9b86a7b6005ea868337ff2d20a7f5fb" + "d4cd10b0be49a68da2b2e0dc0ad8960f" + ), + # 7.3 Ed25519ph + ( + "833fe62409237b9d62ec77587520911e" + "9a759cec1d19755b7da901b96dca3d42", + "ec172b93ad5e563bf4932c70e1245034" + "c35467ef2efd4d64ebf819683467e2bf", + "616263", + SHA512, + "", + "98a70222f0b8121aa9d30f813d683f80" + "9e462b469c7ff87639499bb94e6dae41" + "31f85042463c2a355a2003d062adf5aa" + "a10b8c61e636062aaad11c2a26083406" + ), + # 7.4 Ed448 + ( + "6c82a562cb808d10d632be89c8513ebf6c929f34ddfa8c9f63c9960ef6e348a3" + "528c8a3fcc2f044e39a3fc5b94492f8f032e7549a20098f95b", + "5fd7449b59b461fd2ce787ec616ad46a1da1342485a70e1f8a0ea75d80e96778" + "edf124769b46c7061bd6783df1e50f6cd1fa1abeafe8256180", + "", + None, + "", + "533a37f6bbe457251f023c0d88f976ae2dfb504a843e34d2074fd823d41a591f" + "2b233f034f628281f2fd7a22ddd47d7828c59bd0a21bfd3980ff0d2028d4b18a" + "9df63e006c5d1c2d345b925d8dc00b4104852db99ac5c7cdda8530a113a0f4db" + "b61149f05a7363268c71d95808ff2e652600" + ), + ( + "c4eab05d357007c632f3dbb48489924d552b08fe0c353a0d4a1f00acda2c463a" + 
"fbea67c5e8d2877c5e3bc397a659949ef8021e954e0a12274e", + "43ba28f430cdff456ae531545f7ecd0ac834a55d9358c0372bfa0c6c6798c086" + "6aea01eb00742802b8438ea4cb82169c235160627b4c3a9480", + "03", + None, + "", + "26b8f91727bd62897af15e41eb43c377efb9c610d48f2335cb0bd0087810f435" + "2541b143c4b981b7e18f62de8ccdf633fc1bf037ab7cd779805e0dbcc0aae1cb" + "cee1afb2e027df36bc04dcecbf154336c19f0af7e0a6472905e799f1953d2a0f" + "f3348ab21aa4adafd1d234441cf807c03a00", + ), + ( + "c4eab05d357007c632f3dbb48489924d552b08fe0c353a0d4a1f00acda2c463a" + "fbea67c5e8d2877c5e3bc397a659949ef8021e954e0a12274e", + "43ba28f430cdff456ae531545f7ecd0ac834a55d9358c0372bfa0c6c6798c086" + "6aea01eb00742802b8438ea4cb82169c235160627b4c3a9480", + "03", + None, + "666f6f", + "d4f8f6131770dd46f40867d6fd5d5055de43541f8c5e35abbcd001b32a89f7d2" + "151f7647f11d8ca2ae279fb842d607217fce6e042f6815ea000c85741de5c8da" + "1144a6a1aba7f96de42505d7a7298524fda538fccbbb754f578c1cad10d54d0d" + "5428407e85dcbc98a49155c13764e66c3c00", + ), + ( + "cd23d24f714274e744343237b93290f511f6425f98e64459ff203e8985083ffd" + "f60500553abc0e05cd02184bdb89c4ccd67e187951267eb328", + "dcea9e78f35a1bf3499a831b10b86c90aac01cd84b67a0109b55a36e9328b1e3" + "65fce161d71ce7131a543ea4cb5f7e9f1d8b00696447001400", + "0c3e544074ec63b0265e0c", + None, + "", + "1f0a8888ce25e8d458a21130879b840a9089d999aaba039eaf3e3afa090a09d3" + "89dba82c4ff2ae8ac5cdfb7c55e94d5d961a29fe0109941e00b8dbdeea6d3b05" + "1068df7254c0cdc129cbe62db2dc957dbb47b51fd3f213fb8698f064774250a5" + "028961c9bf8ffd973fe5d5c206492b140e00", + ), + ( + "258cdd4ada32ed9c9ff54e63756ae582fb8fab2ac721f2c8e676a72768513d93" + "9f63dddb55609133f29adf86ec9929dccb52c1c5fd2ff7e21b", + "3ba16da0c6f2cc1f30187740756f5e798d6bc5fc015d7c63cc9510ee3fd44adc" + "24d8e968b6e46e6f94d19b945361726bd75e149ef09817f580", + "64a65f3cdedcdd66811e2915", + None, + "", + "7eeeab7c4e50fb799b418ee5e3197ff6bf15d43a14c34389b59dd1a7b1b85b4a" + "e90438aca634bea45e3a2695f1270f07fdcdf7c62b8efeaf00b45c2c96ba457e" + "b1a8bf075a3db28e5c24f6b923ed4ad747c3c9e03c7079efb87cb110d3a99861" + "e72003cbae6d6b8b827e4e6c143064ff3c00", + ), + ( + "7ef4e84544236752fbb56b8f31a23a10e42814f5f55ca037cdcc11c64c9a3b29" + "49c1bb60700314611732a6c2fea98eebc0266a11a93970100e", + "b3da079b0aa493a5772029f0467baebee5a8112d9d3a22532361da294f7bb381" + "5c5dc59e176b4d9f381ca0938e13c6c07b174be65dfa578e80", + "64a65f3cdedcdd66811e2915e7", + None, + "", + "6a12066f55331b6c22acd5d5bfc5d71228fbda80ae8dec26bdd306743c5027cb" + "4890810c162c027468675ecf645a83176c0d7323a2ccde2d80efe5a1268e8aca" + "1d6fbc194d3f77c44986eb4ab4177919ad8bec33eb47bbb5fc6e28196fd1caf5" + "6b4e7e0ba5519234d047155ac727a1053100", + ), + ( + "d65df341ad13e008567688baedda8e9dcdc17dc024974ea5b4227b6530e339bf" + "f21f99e68ca6968f3cca6dfe0fb9f4fab4fa135d5542ea3f01", + "df9705f58edbab802c7f8363cfe5560ab1c6132c20a9f1dd163483a26f8ac53a" + "39d6808bf4a1dfbd261b099bb03b3fb50906cb28bd8a081f00", + "bd0f6a3747cd561bdddf4640a332461a4a30a12a434cd0bf40d766d9c6d458e5" + "512204a30c17d1f50b5079631f64eb3112182da3005835461113718d1a5ef944", + None, + "", + "554bc2480860b49eab8532d2a533b7d578ef473eeb58c98bb2d0e1ce488a98b1" + "8dfde9b9b90775e67f47d4a1c3482058efc9f40d2ca033a0801b63d45b3b722e" + "f552bad3b4ccb667da350192b61c508cf7b6b5adadc2c8d9a446ef003fb05cba" + "5f30e88e36ec2703b349ca229c2670833900", + ), + ( + "2ec5fe3c17045abdb136a5e6a913e32ab75ae68b53d2fc149b77e504132d3756" + "9b7e766ba74a19bd6162343a21c8590aa9cebca9014c636df5", + "79756f014dcfe2079f5dd9e718be4171e2ef2486a08f25186f6bff43a9936b9b" + 
"fe12402b08ae65798a3d81e22e9ec80e7690862ef3d4ed3a00", + "15777532b0bdd0d1389f636c5f6b9ba734c90af572877e2d272dd078aa1e567c" + "fa80e12928bb542330e8409f3174504107ecd5efac61ae7504dabe2a602ede89" + "e5cca6257a7c77e27a702b3ae39fc769fc54f2395ae6a1178cab4738e543072f" + "c1c177fe71e92e25bf03e4ecb72f47b64d0465aaea4c7fad372536c8ba516a60" + "39c3c2a39f0e4d832be432dfa9a706a6e5c7e19f397964ca4258002f7c0541b5" + "90316dbc5622b6b2a6fe7a4abffd96105eca76ea7b98816af0748c10df048ce0" + "12d901015a51f189f3888145c03650aa23ce894c3bd889e030d565071c59f409" + "a9981b51878fd6fc110624dcbcde0bf7a69ccce38fabdf86f3bef6044819de11", + None, + "", + "c650ddbb0601c19ca11439e1640dd931f43c518ea5bea70d3dcde5f4191fe53f" + "00cf966546b72bcc7d58be2b9badef28743954e3a44a23f880e8d4f1cfce2d7a" + "61452d26da05896f0a50da66a239a8a188b6d825b3305ad77b73fbac0836ecc6" + "0987fd08527c1a8e80d5823e65cafe2a3d00", + ), + ( + "872d093780f5d3730df7c212664b37b8a0f24f56810daa8382cd4fa3f77634ec" + "44dc54f1c2ed9bea86fafb7632d8be199ea165f5ad55dd9ce8", + "a81b2e8a70a5ac94ffdbcc9badfc3feb0801f258578bb114ad44ece1ec0e799d" + "a08effb81c5d685c0c56f64eecaef8cdf11cc38737838cf400", + "6ddf802e1aae4986935f7f981ba3f0351d6273c0a0c22c9c0e8339168e675412" + "a3debfaf435ed651558007db4384b650fcc07e3b586a27a4f7a00ac8a6fec2cd" + "86ae4bf1570c41e6a40c931db27b2faa15a8cedd52cff7362c4e6e23daec0fbc" + "3a79b6806e316efcc7b68119bf46bc76a26067a53f296dafdbdc11c77f7777e9" + "72660cf4b6a9b369a6665f02e0cc9b6edfad136b4fabe723d2813db3136cfde9" + "b6d044322fee2947952e031b73ab5c603349b307bdc27bc6cb8b8bbd7bd32321" + "9b8033a581b59eadebb09b3c4f3d2277d4f0343624acc817804728b25ab79717" + "2b4c5c21a22f9c7839d64300232eb66e53f31c723fa37fe387c7d3e50bdf9813" + "a30e5bb12cf4cd930c40cfb4e1fc622592a49588794494d56d24ea4b40c89fc0" + "596cc9ebb961c8cb10adde976a5d602b1c3f85b9b9a001ed3c6a4d3b1437f520" + "96cd1956d042a597d561a596ecd3d1735a8d570ea0ec27225a2c4aaff26306d1" + "526c1af3ca6d9cf5a2c98f47e1c46db9a33234cfd4d81f2c98538a09ebe76998" + "d0d8fd25997c7d255c6d66ece6fa56f11144950f027795e653008f4bd7ca2dee" + "85d8e90f3dc315130ce2a00375a318c7c3d97be2c8ce5b6db41a6254ff264fa6" + "155baee3b0773c0f497c573f19bb4f4240281f0b1f4f7be857a4e59d416c06b4" + "c50fa09e1810ddc6b1467baeac5a3668d11b6ecaa901440016f389f80acc4db9" + "77025e7f5924388c7e340a732e554440e76570f8dd71b7d640b3450d1fd5f041" + "0a18f9a3494f707c717b79b4bf75c98400b096b21653b5d217cf3565c9597456" + "f70703497a078763829bc01bb1cbc8fa04eadc9a6e3f6699587a9e75c94e5bab" + "0036e0b2e711392cff0047d0d6b05bd2a588bc109718954259f1d86678a579a3" + "120f19cfb2963f177aeb70f2d4844826262e51b80271272068ef5b3856fa8535" + "aa2a88b2d41f2a0e2fda7624c2850272ac4a2f561f8f2f7a318bfd5caf969614" + "9e4ac824ad3460538fdc25421beec2cc6818162d06bbed0c40a387192349db67" + "a118bada6cd5ab0140ee273204f628aad1c135f770279a651e24d8c14d75a605" + "9d76b96a6fd857def5e0b354b27ab937a5815d16b5fae407ff18222c6d1ed263" + "be68c95f32d908bd895cd76207ae726487567f9a67dad79abec316f683b17f2d" + "02bf07e0ac8b5bc6162cf94697b3c27cd1fea49b27f23ba2901871962506520c" + "392da8b6ad0d99f7013fbc06c2c17a569500c8a7696481c1cd33e9b14e40b82e" + "79a5f5db82571ba97bae3ad3e0479515bb0e2b0f3bfcd1fd33034efc6245eddd" + "7ee2086ddae2600d8ca73e214e8c2b0bdb2b047c6a464a562ed77b73d2d841c4" + "b34973551257713b753632efba348169abc90a68f42611a40126d7cb21b58695" + "568186f7e569d2ff0f9e745d0487dd2eb997cafc5abf9dd102e62ff66cba87", + None, + "", + "e301345a41a39a4d72fff8df69c98075a0cc082b802fc9b2b6bc503f926b65bd" + "df7f4c8f1cb49f6396afc8a70abe6d8aef0db478d4c6b2970076c6a0484fe76d" + 
"76b3a97625d79f1ce240e7c576750d295528286f719b413de9ada3e8eb78ed57" + "3603ce30d8bb761785dc30dbc320869e1a00" + ), + # 7.5 Ed448ph + ( + "833fe62409237b9d62ec77587520911e9a759cec1d19755b7da901b96dca3d42" + "ef7822e0d5104127dc05d6dbefde69e3ab2cec7c867c6e2c49", + "259b71c19f83ef77a7abd26524cbdb3161b590a48f7d17de3ee0ba9c52beb743" + "c09428a131d6b1b57303d90d8132c276d5ed3d5d01c0f53880", + "616263", + SHAKE256, + "", + "822f6901f7480f3d5f562c592994d9693602875614483256505600bbc281ae38" + "1f54d6bce2ea911574932f52a4e6cadd78769375ec3ffd1b801a0d9b3f4030cd" + "433964b6457ea39476511214f97469b57dd32dbc560a9a94d00bff07620464a3" + "ad203df7dc7ce360c3cd3696d9d9fab90f00" + ), + ( + "833fe62409237b9d62ec77587520911e9a759cec1d19755b7da901b96dca3d42" + "ef7822e0d5104127dc05d6dbefde69e3ab2cec7c867c6e2c49", + "259b71c19f83ef77a7abd26524cbdb3161b590a48f7d17de3ee0ba9c52beb743" + "c09428a131d6b1b57303d90d8132c276d5ed3d5d01c0f53880", + "616263", + SHAKE256, + "666f6f", + "c32299d46ec8ff02b54540982814dce9a05812f81962b649d528095916a2aa48" + "1065b1580423ef927ecf0af5888f90da0f6a9a85ad5dc3f280d91224ba9911a3" + "653d00e484e2ce232521481c8658df304bb7745a73514cdb9bf3e15784ab7128" + "4f8d0704a608c54a6b62d97beb511d132100", + ), +) + + +rfc8032_tv_bytes = [] +for tv_str in rfc8032_tv_str: + rfc8032_tv_bytes.append([unhexlify(i) if isinstance(i, str) else i for i in tv_str]) + + +class TestEdDSA(unittest.TestCase): + + def test_sign(self): + for sk, _, msg, hashmod, ctx, exp_signature in rfc8032_tv_bytes: + key = eddsa.import_private_key(sk) + signer = eddsa.new(key, 'rfc8032', context=ctx) + if hashmod is None: + # PureEdDSA + signature = signer.sign(msg) + else: + # HashEdDSA + hashobj = hashmod.new(msg) + signature = signer.sign(hashobj) + self.assertEqual(exp_signature, signature) + + def test_verify(self): + for _, pk, msg, hashmod, ctx, exp_signature in rfc8032_tv_bytes: + key = eddsa.import_public_key(pk) + verifier = eddsa.new(key, 'rfc8032', context=ctx) + if hashmod is None: + # PureEdDSA + verifier.verify(msg, exp_signature) + else: + # HashEdDSA + hashobj = hashmod.new(msg) + verifier.verify(hashobj, exp_signature) + + def test_negative(self): + key = ECC.generate(curve="ed25519") + self.assertRaises(ValueError, eddsa.new, key, 'rfc9999') + + nist_key = ECC.generate(curve="p256") + self.assertRaises(ValueError, eddsa.new, nist_key, 'rfc8032') + + +class TestExport_Ed25519(unittest.TestCase): + + def test_raw(self): + key = ECC.generate(curve="Ed25519") + x, y = key.pointQ.xy + raw = bytearray(key._export_eddsa()) + sign_x = raw[31] >> 7 + raw[31] &= 0x7F + yt = bytes_to_long(raw[::-1]) + self.assertEqual(y, yt) + self.assertEqual(x & 1, sign_x) + + key = ECC.construct(point_x=0, point_y=1, curve="Ed25519") + out = key._export_eddsa() + self.assertEqual(b'\x01' + b'\x00' * 31, out) + + +class TestExport_Ed448(unittest.TestCase): + + def test_raw(self): + key = ECC.generate(curve="Ed448") + x, y = key.pointQ.xy + raw = bytearray(key._export_eddsa()) + sign_x = raw[56] >> 7 + raw[56] &= 0x7F + yt = bytes_to_long(raw[::-1]) + self.assertEqual(y, yt) + self.assertEqual(x & 1, sign_x) + + key = ECC.construct(point_x=0, point_y=1, curve="Ed448") + out = key._export_eddsa() + self.assertEqual(b'\x01' + b'\x00' * 56, out) + + +class TestImport_Ed25519(unittest.TestCase): + + def test_raw(self): + Px = 24407857220263921307776619664228778204996144802740950419837658238229122415920 + Py = 56480760040633817885061096979765646085062883740629155052073094891081309750690 + encoded = b'\xa2\x05\xd6\x00\xe1 
\xe1\xc0\xff\x96\xee?V\x8e\xba/\xd3\x89\x06\xd7\xc4c\xe8$\xc2d\xd7a1\xfa\xde|' + key = eddsa.import_public_key(encoded) + self.assertEqual(Py, key.pointQ.y) + self.assertEqual(Px, key.pointQ.x) + + encoded = b'\x01' + b'\x00' * 31 + key = eddsa.import_public_key(encoded) + self.assertEqual(1, key.pointQ.y) + self.assertEqual(0, key.pointQ.x) + + +class TestImport_Ed448(unittest.TestCase): + + def test_raw(self): + Px = 0x153f42025aba3b0daecaa5cd79458b3146c7c9378c16c17b4a59bc3561113d90c169045bc12966c3f93e140c2ca0a3acc33d9205b9daf9b1 + Py = 0x38f5c0015d3dedd576c232810dd90373b5b1d631a12894c043b7be529cbae03ede177d8fa490b56131dbcb2465d2aba777ef839fc1719b25 + encoded = unhexlify("259b71c19f83ef77a7abd26524cbdb31" + "61b590a48f7d17de3ee0ba9c52beb743" + "c09428a131d6b1b57303d90d8132c276" + "d5ed3d5d01c0f53880") + key = eddsa.import_public_key(encoded) + self.assertEqual(Py, key.pointQ.y) + self.assertEqual(Px, key.pointQ.x) + + encoded = b'\x01' + b'\x00' * 56 + key = eddsa.import_public_key(encoded) + self.assertEqual(1, key.pointQ.y) + self.assertEqual(0, key.pointQ.x) + + +class TestVectorsEdDSAWycheproof(unittest.TestCase): + + def add_tests(self, filename): + + def pk(group): + elem = group['key']['pk'] + return unhexlify(elem) + + def sk(group): + elem = group['key']['sk'] + return unhexlify(elem) + + result = load_test_vectors_wycheproof(("Signature", "wycheproof"), + filename, + "Wycheproof ECDSA signature (%s)" + % filename, + group_tag={'pk': pk, 'sk': sk}) + self.tv += result + + def setUp(self): + self.tv = [] + self.add_tests("eddsa_test.json") + self.add_tests("ed448_test.json") + + def test_sign(self, tv): + if not tv.valid: + return + + self._id = "Wycheproof EdDSA Sign Test #%d (%s, %s)" % (tv.id, tv.comment, tv.filename) + key = eddsa.import_private_key(tv.sk) + signer = eddsa.new(key, 'rfc8032') + signature = signer.sign(tv.msg) + self.assertEqual(signature, tv.sig) + + def test_verify(self, tv): + self._id = "Wycheproof EdDSA Verify Test #%d (%s, %s)" % (tv.id, tv.comment, tv.filename) + key = eddsa.import_public_key(tv.pk) + verifier = eddsa.new(key, 'rfc8032') + try: + verifier.verify(tv.msg, tv.sig) + except ValueError: + assert not tv.valid + else: + assert tv.valid + + def runTest(self): + for tv in self.tv: + self.test_sign(tv) + self.test_verify(tv) + + +def get_tests(config={}): + + tests = [] + tests += list_test_cases(TestExport_Ed25519) + tests += list_test_cases(TestExport_Ed448) + tests += list_test_cases(TestImport_Ed25519) + tests += list_test_cases(TestImport_Ed448) + tests += list_test_cases(TestEdDSA) + tests += [TestVectorsEdDSAWycheproof()] + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_pkcs1_15.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_pkcs1_15.py new file mode 100644 index 00000000..8e2c6ee6 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_pkcs1_15.py @@ -0,0 +1,348 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import json +import unittest +from binascii import unhexlify + +from Crypto.Util.py3compat import bchr +from Crypto.Util.number import bytes_to_long +from Crypto.Util.strxor import strxor +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors, load_test_vectors_wycheproof + +from Crypto.Hash import (SHA1, SHA224, SHA256, SHA384, SHA512, SHA3_384, + SHA3_224, SHA3_256, SHA3_512) +from Crypto.PublicKey import RSA +from Crypto.Signature import pkcs1_15 +from Crypto.Signature import PKCS1_v1_5 + +from Crypto.Util._file_system import pycryptodome_filename +from Crypto.Util.strxor import strxor + + +def load_hash_by_name(hash_name): + return __import__("Crypto.Hash." 
+ hash_name, globals(), locals(), ["new"]) + + +class FIPS_PKCS1_Verify_Tests(unittest.TestCase): + + def shortDescription(self): + return "FIPS PKCS1 Tests (Verify)" + + def test_can_sign(self): + test_public_key = RSA.generate(1024).public_key() + verifier = pkcs1_15.new(test_public_key) + self.assertEqual(verifier.can_sign(), False) + + +class FIPS_PKCS1_Verify_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_verify = load_test_vectors(("Signature", "PKCS1-v1.5"), + "SigVer15_186-3.rsp", + "Signature Verification 186-3", + {'shaalg': lambda x: x, + 'd': lambda x: int(x), + 'result': lambda x: x}) or [] + + +for count, tv in enumerate(test_vectors_verify): + if isinstance(tv, str): + continue + if hasattr(tv, "n"): + modulus = tv.n + continue + + hash_module = load_hash_by_name(tv.shaalg.upper()) + hash_obj = hash_module.new(tv.msg) + public_key = RSA.construct([bytes_to_long(x) for x in (modulus, tv.e)]) # type: ignore + verifier = pkcs1_15.new(public_key) + + def positive_test(self, hash_obj=hash_obj, verifier=verifier, signature=tv.s): + verifier.verify(hash_obj, signature) + + def negative_test(self, hash_obj=hash_obj, verifier=verifier, signature=tv.s): + self.assertRaises(ValueError, verifier.verify, hash_obj, signature) + + if tv.result == 'f': + setattr(FIPS_PKCS1_Verify_Tests_KAT, "test_negative_%d" % count, negative_test) + else: + setattr(FIPS_PKCS1_Verify_Tests_KAT, "test_positive_%d" % count, positive_test) + + +class FIPS_PKCS1_Sign_Tests(unittest.TestCase): + + def shortDescription(self): + return "FIPS PKCS1 Tests (Sign)" + + def test_can_sign(self): + test_private_key = RSA.generate(1024) + signer = pkcs1_15.new(test_private_key) + self.assertEqual(signer.can_sign(), True) + + +class FIPS_PKCS1_Sign_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_sign = load_test_vectors(("Signature", "PKCS1-v1.5"), + "SigGen15_186-2.txt", + "Signature Generation 186-2", + {'shaalg': lambda x: x}) or [] + +test_vectors_sign += load_test_vectors(("Signature", "PKCS1-v1.5"), + "SigGen15_186-3.txt", + "Signature Generation 186-3", + {'shaalg': lambda x: x}) or [] + +for count, tv in enumerate(test_vectors_sign): + if isinstance(tv, str): + continue + if hasattr(tv, "n"): + modulus = tv.n + continue + if hasattr(tv, "e"): + private_key = RSA.construct([bytes_to_long(x) for x in (modulus, tv.e, tv.d)]) # type: ignore + signer = pkcs1_15.new(private_key) + continue + + hash_module = load_hash_by_name(tv.shaalg.upper()) + hash_obj = hash_module.new(tv.msg) + + def new_test(self, hash_obj=hash_obj, signer=signer, result=tv.s): + signature = signer.sign(hash_obj) + self.assertEqual(signature, result) + + setattr(FIPS_PKCS1_Sign_Tests_KAT, "test_%d" % count, new_test) + + +class PKCS1_15_NoParams(unittest.TestCase): + """Verify that PKCS#1 v1.5 signatures pass even without NULL parameters in + the algorithm identifier (PyCrypto/LP bug #1119552).""" + + rsakey = """-----BEGIN RSA PRIVATE KEY----- + MIIBOwIBAAJBAL8eJ5AKoIsjURpcEoGubZMxLD7+kT+TLr7UkvEtFrRhDDKMtuII + q19FrL4pUIMymPMSLBn3hJLe30Dw48GQM4UCAwEAAQJACUSDEp8RTe32ftq8IwG8 + Wojl5mAd1wFiIOrZ/Uv8b963WJOJiuQcVN29vxU5+My9GPZ7RA3hrDBEAoHUDPrI + OQIhAPIPLz4dphiD9imAkivY31Rc5AfHJiQRA7XixTcjEkojAiEAyh/pJHks/Mlr + +rdPNEpotBjfV4M4BkgGAA/ipcmaAjcCIQCHvhwwKVBLzzTscT2HeUdEeBMoiXXK + JACAr3sJQJGxIQIgarRp+m1WSKV1MciwMaTOnbU7wxFs9DP1pva76lYBzgUCIQC9 + n0CnZCJ6IZYqSt0H5N7+Q+2Ro64nuwV/OSQfM6sBwQ== + -----END RSA PRIVATE KEY-----""" + + msg = b"This is a test\x0a" + + # PKCS1 v1.5 signature of the message computed using SHA-1. 
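
For orientation, here is a minimal sketch of the `pkcs1_15` API that these FIPS KAT loops and regression classes exercise. The key size mirrors the `test_can_sign` cases above; the hash choice is arbitrary.

```python
# Minimal PKCS#1 v1.5 round trip (sketch; 1024-bit key as in the tests above)
from Crypto.PublicKey import RSA
from Crypto.Hash import SHA256
from Crypto.Signature import pkcs1_15

key = RSA.generate(1024)
h = SHA256.new(b'This is a test\x0a')

signature = pkcs1_15.new(key).sign(h)

# Verification uses only the public half; a tampered message or signature
# makes verify() raise ValueError, which is what the negative KAT cases
# assert with assertRaises.
pkcs1_15.new(key.public_key()).verify(h, signature)
```
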
+ # The digestAlgorithm SEQUENCE does NOT contain the NULL parameter. + sig_str = "a287a13517f716e72fb14eea8e33a8db4a4643314607e7ca3e3e28"\ + "1893db74013dda8b855fd99f6fecedcb25fcb7a434f35cd0a101f8"\ + "b19348e0bd7b6f152dfc" + signature = unhexlify(sig_str) + + def runTest(self): + verifier = pkcs1_15.new(RSA.importKey(self.rsakey)) + hashed = SHA1.new(self.msg) + verifier.verify(hashed, self.signature) + + +class PKCS1_Legacy_Module_Tests(unittest.TestCase): + """Verify that the legacy module Crypto.Signature.PKCS1_v1_5 + behaves as expected. The only difference is that the verify() + method returns True/False and does not raise exceptions.""" + + def shortDescription(self): + return "Test legacy Crypto.Signature.PKCS1_v1_5" + + def runTest(self): + key = RSA.importKey(PKCS1_15_NoParams.rsakey) + hashed = SHA1.new(b"Test") + good_signature = PKCS1_v1_5.new(key).sign(hashed) + verifier = PKCS1_v1_5.new(key.public_key()) + + self.assertEqual(verifier.verify(hashed, good_signature), True) + + # Flip a few bits in the signature + bad_signature = strxor(good_signature, bchr(1) * len(good_signature)) + self.assertEqual(verifier.verify(hashed, bad_signature), False) + + +class PKCS1_All_Hashes_Tests(unittest.TestCase): + + def shortDescription(self): + return "Test PKCS#1v1.5 signature in combination with all hashes" + + def runTest(self): + + key = RSA.generate(1024) + signer = pkcs1_15.new(key) + hash_names = ("MD2", "MD4", "MD5", "RIPEMD160", "SHA1", + "SHA224", "SHA256", "SHA384", "SHA512", + "SHA3_224", "SHA3_256", "SHA3_384", "SHA3_512") + + for name in hash_names: + hashed = load_hash_by_name(name).new(b"Test") + signer.sign(hashed) + + from Crypto.Hash import BLAKE2b, BLAKE2s + for hash_size in (20, 32, 48, 64): + hashed_b = BLAKE2b.new(digest_bytes=hash_size, data=b"Test") + signer.sign(hashed_b) + for hash_size in (16, 20, 28, 32): + hashed_s = BLAKE2s.new(digest_bytes=hash_size, data=b"Test") + signer.sign(hashed_s) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" + + def setUp(self): + self.tv = [] + self.add_tests("rsa_sig_gen_misc_test.json") + self.add_tests("rsa_signature_2048_sha224_test.json") + self.add_tests("rsa_signature_2048_sha256_test.json") + self.add_tests("rsa_signature_2048_sha384_test.json") + self.add_tests("rsa_signature_2048_sha3_224_test.json") + self.add_tests("rsa_signature_2048_sha3_256_test.json") + self.add_tests("rsa_signature_2048_sha3_384_test.json") + self.add_tests("rsa_signature_2048_sha3_512_test.json") + self.add_tests("rsa_signature_2048_sha512_test.json") + self.add_tests("rsa_signature_2048_sha512_224_test.json") + self.add_tests("rsa_signature_2048_sha512_256_test.json") + self.add_tests("rsa_signature_3072_sha256_test.json") + self.add_tests("rsa_signature_3072_sha384_test.json") + self.add_tests("rsa_signature_3072_sha3_256_test.json") + self.add_tests("rsa_signature_3072_sha3_384_test.json") + self.add_tests("rsa_signature_3072_sha3_512_test.json") + self.add_tests("rsa_signature_3072_sha512_test.json") + self.add_tests("rsa_signature_3072_sha512_256_test.json") + self.add_tests("rsa_signature_4096_sha384_test.json") + self.add_tests("rsa_signature_4096_sha512_test.json") + self.add_tests("rsa_signature_4096_sha512_256_test.json") + self.add_tests("rsa_signature_test.json") + + def add_tests(self, filename): + + def filter_rsa(group): + return RSA.import_key(group['keyPem']) + + def 
filter_sha(group): + hash_name = group['sha'] + if hash_name == "SHA-512": + return SHA512 + elif hash_name == "SHA-512/224": + return SHA512.new(truncate="224") + elif hash_name == "SHA-512/256": + return SHA512.new(truncate="256") + elif hash_name == "SHA3-512": + return SHA3_512 + elif hash_name == "SHA-384": + return SHA384 + elif hash_name == "SHA3-384": + return SHA3_384 + elif hash_name == "SHA-256": + return SHA256 + elif hash_name == "SHA3-256": + return SHA3_256 + elif hash_name == "SHA-224": + return SHA224 + elif hash_name == "SHA3-224": + return SHA3_224 + elif hash_name == "SHA-1": + return SHA1 + else: + raise ValueError("Unknown hash algorithm: " + hash_name) + + def filter_type(group): + type_name = group['type'] + if type_name not in ("RsassaPkcs1Verify", "RsassaPkcs1Generate"): + raise ValueError("Unknown type name " + type_name) + + result = load_test_vectors_wycheproof(("Signature", "wycheproof"), + filename, + "Wycheproof PKCS#1v1.5 signature (%s)" % filename, + group_tag={'rsa_key': filter_rsa, + 'hash_mod': filter_sha, + 'type': filter_type}) + return result + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_verify(self, tv): + self._id = "Wycheproof RSA PKCS$#1 Test #" + str(tv.id) + + hashed_msg = tv.hash_module.new(tv.msg) + signer = pkcs1_15.new(tv.key) + try: + signature = signer.verify(hashed_msg, tv.sig) + except ValueError as e: + if tv.warning: + return + assert not tv.valid + else: + assert tv.valid + self.warn(tv) + + def runTest(self): + for tv in self.tv: + self.test_verify(tv) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(FIPS_PKCS1_Verify_Tests) + tests += list_test_cases(FIPS_PKCS1_Sign_Tests) + tests += list_test_cases(PKCS1_15_NoParams) + tests += list_test_cases(PKCS1_Legacy_Module_Tests) + tests += list_test_cases(PKCS1_All_Hashes_Tests) + tests += [ TestVectorsWycheproof(wycheproof_warnings) ] + + if config.get('slow_tests'): + tests += list_test_cases(FIPS_PKCS1_Verify_Tests_KAT) + tests += list_test_cases(FIPS_PKCS1_Sign_Tests_KAT) + + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_pss.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_pss.py new file mode 100644 index 00000000..535474be --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Signature/test_pss.py @@ -0,0 +1,377 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. 
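
Before the PSS test classes, a minimal sketch of the `pss` API they exercise may help. The 2048-bit key and SHA-256 are arbitrary choices; by default `pss.new()` uses MGF1 with the message's hash and a random salt as long as the digest, and the tests below override that via `salt_bytes`, `rand_func`, and `mask_func`.

```python
# Minimal RSASSA-PSS round trip (sketch; key size and hash chosen arbitrarily)
from Crypto.PublicKey import RSA
from Crypto.Hash import SHA256
from Crypto.Signature import pss

key = RSA.generate(2048)
h = SHA256.new(b'AAA')

# PSS is randomized: signing the same digest twice yields different
# signatures, but both verify against the same public key.
signature = pss.new(key).sign(h)

# verify() raises ValueError on any mismatch (wrong salt length, bad
# padding, altered message), mirroring the negative tests below.
pss.new(key.public_key()).verify(h, signature)
```
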
+# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest + +from Crypto.Util.py3compat import b, bchr +from Crypto.Util.number import bytes_to_long +from Crypto.Util.strxor import strxor +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.SelfTest.loader import load_test_vectors, load_test_vectors_wycheproof + +from Crypto.Hash import SHA1, SHA224, SHA256, SHA384, SHA512 +from Crypto.PublicKey import RSA +from Crypto.Signature import pss +from Crypto.Signature import PKCS1_PSS + +from Crypto.Signature.pss import MGF1 + + +def load_hash_by_name(hash_name): + return __import__("Crypto.Hash." + hash_name, globals(), locals(), ["new"]) + + +class PRNG(object): + + def __init__(self, stream): + self.stream = stream + self.idx = 0 + + def __call__(self, rnd_size): + result = self.stream[self.idx:self.idx + rnd_size] + self.idx += rnd_size + return result + + +class PSS_Tests(unittest.TestCase): + + rsa_key = b'-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAsvI34FgiTK8+txBvmooNGpNwk23YTU51dwNZi5yha3W4lA/Q\nvcZrDalkmD7ekWQwnduxVKa6pRSI13KBgeUOIqJoGXSWhntEtY3FEwvWOHW5AE7Q\njUzTzCiYT6TVaCcpa/7YLai+p6ai2g5f5Zfh4jSawa9uYeuggFygQq4IVW796MgV\nyqxYMM/arEj+/sKz3Viua9Rp9fFosertCYCX4DUTgW0mX9bwEnEOgjSI3pLOPXz1\n8vx+DRZS5wMCmwCUa0sKonLn3cAUPq+sGix7+eo7T0Z12MU8ud7IYVX/75r3cXiF\nPaYE2q8Le0kgOApIXbb+x74x0rNgyIh1yGygkwIDAQABAoIBABz4t1A0pLT6qHI2\nEIOaNz3mwhK0dZEqkz0GB1Dhtoax5ATgvKCFB98J3lYB08IBURe1snOsnMpOVUtg\naBRSM+QqnCUG6bnzKjAkuFP5liDE+oNQv1YpKp9CsUovuzdmI8Au3ewihl+ZTIN2\nUVNYMEOR1b5m+z2SSwWNOYsiJwpBrT7zkpdlDyjat7FiiPhMMIMXjhQFVxURMIcB\njUBtPzGvV/PG90cVDWi1wRGeeP1dDqti/jsnvykQ15KW1MqGrpeNKRmDdTy/Ucl1\nWIoYklKw3U456lgZ/rDTDB818+Tlnk35z4yF7d5ANPM8CKfqOPcnO1BCKVFzf4eq\n54wvUtkCgYEA1Zv2lp06l7rXMsvNtyYQjbFChezRDRnPwZmN4NCdRtTgGG1G0Ryd\nYz6WWoPGqZp0b4LAaaHd3W2GTcpXF8WXMKfMX1W+tMAxMozfsXRKMcHoypwuS5wT\nfJRXJCG4pvd57AB0iVUEJW2we+uGKU5Zxcx//id2nXGCpoRyViIplQsCgYEA1nVC\neHupHChht0Fh4N09cGqZHZzuwXjOUMzR3Vsfz+4WzVS3NvIgN4g5YgmQFOeKwo5y\niRq5yvubcNdFvf85eHWClg0zPAyxJCVUWigCrrOanGEhJo6re4idJvNVzu4Ucg0v\n6B3SJ1HsCda+ZSNz24bSyqRep8A+RoAaoVSFx5kCgYEAn3RvXPs9s+obnqWYiPF3\nRe5etE6Vt2vfNKwFxx6zaR6bsmBQjuUHcABWiHb6I71S0bMPI0tbrWGG8ibrYKl1\nNTLtUvVVCOS3VP7oNTWT9RTFTAnOXU7DFSo+6o/poWn3r36ff6zhDXeWWMr2OXtt\ndEQ1/2lCGEGVv+v61eVmmQUCgYABFHITPTwqwiFL1O5zPWnzyPWgaovhOYSAb6eW\n38CXQXGn8wdBJZL39J2lWrr4//l45VK6UgIhfYbY2JynSkO10ZGow8RARygVMILu\nOUlaK9lZdDvAf/NpGdUAvzTtZ9F+iYZ2OsA2JnlzyzsGM1l//3vMPWukmJk3ral0\nqoJJ8QKBgGRG3eVHnIegBbFVuMDp2NTcfuSuDVUQ1fGAwtPiFa8u81IodJnMk2pq\niXu2+0ytNA/M+SVrAnE2AgIzcaJbtr0p2srkuVM7KMWnG1vWFNjtXN8fAhf/joOv\nD+NmPL/N4uE57e40tbiU/H7KdyZaDt+5QiTmdhuyAe6CBjKsF2jy\n-----END RSA PRIVATE KEY-----' + msg = b'AAA' + tag = 
b'\x00[c5\xd8\xb0\x8b!D\x81\x83\x07\xc0\xdd\xb9\xb4\xb2`\x92\xe7\x02\xf1\xe1P\xea\xc3\xf0\xe3>\xddX5\xdd\x8e\xc5\x89\xef\xf3\xc2\xdc\xfeP\x02\x7f\x12+\xc9\xaf\xbb\xec\xfe\xb0\xa5\xb9\x08\x11P\x8fL\xee5\x9b\xb0k{=_\xd2\x14\xfb\x01R\xb7\xfe\x14}b\x03\x8d5Y\x89~}\xfc\xf2l\xd01-\xbd\xeb\x11\xcdV\x11\xe9l\x19k/o5\xa2\x0f\x15\xe7Q$\t=\xec\x1dAB\x19\xa5P\x9a\xaf\xa3G\x86"\xd6~\xf0j\xfcqkbs\x13\x84b\xe4\xbdm(\xed`\xa4F\xfb\x8f.\xe1\x8c)/_\x9eS\x98\xa4v\xb8\xdc\xfe\xf7/D\x18\x19\xb3T\x97:\xe2\x96s\xe8<\xa2\xb4\xb9\xf8/' + + def test_positive_1(self): + key = RSA.import_key(self.rsa_key) + h = SHA256.new(self.msg) + verifier = pss.new(key) + verifier.verify(h, self.tag) + + def test_negative_1(self): + key = RSA.import_key(self.rsa_key) + h = SHA256.new(self.msg + b'A') + verifier = pss.new(key) + tag = bytearray(self.tag) + self.assertRaises(ValueError, verifier.verify, h, tag) + + def test_negative_2(self): + key = RSA.import_key(self.rsa_key) + h = SHA256.new(self.msg) + verifier = pss.new(key, salt_bytes=1000) + tag = bytearray(self.tag) + self.assertRaises(ValueError, verifier.verify, h, tag) + + +class FIPS_PKCS1_Verify_Tests(unittest.TestCase): + + def shortDescription(self): + return "FIPS PKCS1 Tests (Verify)" + + def verify_positive(self, hashmod, message, public_key, salt, signature): + prng = PRNG(salt) + hashed = hashmod.new(message) + verifier = pss.new(public_key, salt_bytes=len(salt), rand_func=prng) + verifier.verify(hashed, signature) + + def verify_negative(self, hashmod, message, public_key, salt, signature): + prng = PRNG(salt) + hashed = hashmod.new(message) + verifier = pss.new(public_key, salt_bytes=len(salt), rand_func=prng) + self.assertRaises(ValueError, verifier.verify, hashed, signature) + + def test_can_sign(self): + test_public_key = RSA.generate(1024).public_key() + verifier = pss.new(test_public_key) + self.assertEqual(verifier.can_sign(), False) + + +class FIPS_PKCS1_Verify_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_verify = load_test_vectors(("Signature", "PKCS1-PSS"), + "SigVerPSS_186-3.rsp", + "Signature Verification 186-3", + {'shaalg': lambda x: x, + 'result': lambda x: x}) or [] + + +for count, tv in enumerate(test_vectors_verify): + if isinstance(tv, str): + continue + if hasattr(tv, "n"): + modulus = tv.n + continue + if hasattr(tv, "p"): + continue + + hash_module = load_hash_by_name(tv.shaalg.upper()) + hash_obj = hash_module.new(tv.msg) + public_key = RSA.construct([bytes_to_long(x) for x in (modulus, tv.e)]) # type: ignore + if tv.saltval != b("\x00"): + prng = PRNG(tv.saltval) + verifier = pss.new(public_key, salt_bytes=len(tv.saltval), rand_func=prng) + else: + verifier = pss.new(public_key, salt_bytes=0) + + def positive_test(self, hash_obj=hash_obj, verifier=verifier, signature=tv.s): + verifier.verify(hash_obj, signature) + + def negative_test(self, hash_obj=hash_obj, verifier=verifier, signature=tv.s): + self.assertRaises(ValueError, verifier.verify, hash_obj, signature) + + if tv.result == 'p': + setattr(FIPS_PKCS1_Verify_Tests_KAT, "test_positive_%d" % count, positive_test) + else: + setattr(FIPS_PKCS1_Verify_Tests_KAT, "test_negative_%d" % count, negative_test) + + +class FIPS_PKCS1_Sign_Tests(unittest.TestCase): + + def shortDescription(self): + return "FIPS PKCS1 Tests (Sign)" + + def test_can_sign(self): + test_private_key = RSA.generate(1024) + signer = pss.new(test_private_key) + self.assertEqual(signer.can_sign(), True) + + +class FIPS_PKCS1_Sign_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_sign = 
load_test_vectors(("Signature", "PKCS1-PSS"), + "SigGenPSS_186-2.txt", + "Signature Generation 186-2", + {'shaalg': lambda x: x}) or [] + +test_vectors_sign += load_test_vectors(("Signature", "PKCS1-PSS"), + "SigGenPSS_186-3.txt", + "Signature Generation 186-3", + {'shaalg': lambda x: x}) or [] + +for count, tv in enumerate(test_vectors_sign): + if isinstance(tv, str): + continue + if hasattr(tv, "n"): + modulus = tv.n + continue + if hasattr(tv, "e"): + private_key = RSA.construct([bytes_to_long(x) for x in (modulus, tv.e, tv.d)]) # type: ignore + continue + + hash_module = load_hash_by_name(tv.shaalg.upper()) + hash_obj = hash_module.new(tv.msg) + if tv.saltval != b("\x00"): + prng = PRNG(tv.saltval) + signer = pss.new(private_key, salt_bytes=len(tv.saltval), rand_func=prng) + else: + signer = pss.new(private_key, salt_bytes=0) + + def new_test(self, hash_obj=hash_obj, signer=signer, result=tv.s): + signature = signer.sign(hash_obj) + self.assertEqual(signature, result) + + setattr(FIPS_PKCS1_Sign_Tests_KAT, "test_%d" % count, new_test) + + +class PKCS1_Legacy_Module_Tests(unittest.TestCase): + """Verify that the legacy module Crypto.Signature.PKCS1_PSS + behaves as expected. The only difference is that the verify() + method returns True/False and does not raise exceptions.""" + + def shortDescription(self): + return "Test legacy Crypto.Signature.PKCS1_PSS" + + def runTest(self): + key = RSA.generate(1024) + hashed = SHA1.new(b("Test")) + good_signature = PKCS1_PSS.new(key).sign(hashed) + verifier = PKCS1_PSS.new(key.public_key()) + + self.assertEqual(verifier.verify(hashed, good_signature), True) + + # Flip a few bits in the signature + bad_signature = strxor(good_signature, bchr(1) * len(good_signature)) + self.assertEqual(verifier.verify(hashed, bad_signature), False) + + +class PKCS1_All_Hashes_Tests(unittest.TestCase): + + def shortDescription(self): + return "Test PKCS#1 PSS signature in combination with all hashes" + + def runTest(self): + + key = RSA.generate(1280) + signer = pss.new(key) + hash_names = ("MD2", "MD4", "MD5", "RIPEMD160", "SHA1", + "SHA224", "SHA256", "SHA384", "SHA512", + "SHA3_224", "SHA3_256", "SHA3_384", "SHA3_512") + + for name in hash_names: + hashed = load_hash_by_name(name).new(b("Test")) + signer.sign(hashed) + + from Crypto.Hash import BLAKE2b, BLAKE2s + for hash_size in (20, 32, 48, 64): + hashed_b = BLAKE2b.new(digest_bytes=hash_size, data=b("Test")) + signer.sign(hashed_b) + for hash_size in (16, 20, 28, 32): + hashed_s = BLAKE2s.new(digest_bytes=hash_size, data=b("Test")) + signer.sign(hashed_s) + + +def get_hash_module(hash_name): + if hash_name == "SHA-512": + hash_module = SHA512 + elif hash_name == "SHA-512/224": + hash_module = SHA512.new(truncate="224") + elif hash_name == "SHA-512/256": + hash_module = SHA512.new(truncate="256") + elif hash_name == "SHA-384": + hash_module = SHA384 + elif hash_name == "SHA-256": + hash_module = SHA256 + elif hash_name == "SHA-224": + hash_module = SHA224 + elif hash_name == "SHA-1": + hash_module = SHA1 + else: + raise ValueError("Unknown hash algorithm: " + hash_name) + return hash_module + + +class TestVectorsPSSWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" + + def add_tests(self, filename): + + def filter_rsa(group): + return RSA.import_key(group['keyPem']) + + def filter_sha(group): + return get_hash_module(group['sha']) + + def filter_type(group): + type_name = 
group['type']
+            if type_name not in ("RsassaPssVerify", ):
+                raise ValueError("Unknown type name " + type_name)
+
+        def filter_slen(group):
+            return group['sLen']
+
+        def filter_mgf(group):
+            mgf = group['mgf']
+            if mgf not in ("MGF1", ):
+                raise ValueError("Unknown MGF " + mgf)
+            mgf1_hash = get_hash_module(group['mgfSha'])
+
+            def mgf(x, y, mh=mgf1_hash):
+                return MGF1(x, y, mh)
+
+            return mgf
+
+        result = load_test_vectors_wycheproof(("Signature", "wycheproof"),
+                                              filename,
+                                              "Wycheproof PSS signature (%s)" % filename,
+                                              group_tag={'key': filter_rsa,
+                                                         'hash_module': filter_sha,
+                                                         'sLen': filter_slen,
+                                                         'mgf': filter_mgf,
+                                                         'type': filter_type})
+        self.tv += result
+
+    def setUp(self):
+        self.tv = []
+        self.add_tests("rsa_pss_2048_sha1_mgf1_20_test.json")
+        self.add_tests("rsa_pss_2048_sha256_mgf1_0_test.json")
+        self.add_tests("rsa_pss_2048_sha256_mgf1_32_test.json")
+        self.add_tests("rsa_pss_2048_sha512_256_mgf1_28_test.json")
+        self.add_tests("rsa_pss_2048_sha512_256_mgf1_32_test.json")
+        self.add_tests("rsa_pss_3072_sha256_mgf1_32_test.json")
+        self.add_tests("rsa_pss_4096_sha256_mgf1_32_test.json")
+        self.add_tests("rsa_pss_4096_sha512_mgf1_32_test.json")
+        self.add_tests("rsa_pss_misc_test.json")
+
+    def shortDescription(self):
+        return self._id
+
+    def warn(self, tv):
+        if tv.warning and self._wycheproof_warnings:
+            import warnings
+            warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment))
+
+    def test_verify(self, tv):
+        self._id = "Wycheproof RSA PSS Test #%d (%s)" % (tv.id, tv.comment)
+
+        hashed_msg = tv.hash_module.new(tv.msg)
+        verifier = pss.new(tv.key, mask_func=tv.mgf, salt_bytes=tv.sLen)
+        try:
+            verifier.verify(hashed_msg, tv.sig)
+        except ValueError:
+            if tv.warning:
+                return
+            assert not tv.valid
+        else:
+            assert tv.valid
+            self.warn(tv)
+
+    def runTest(self):
+        for tv in self.tv:
+            self.test_verify(tv)
+
+
+def get_tests(config={}):
+    wycheproof_warnings = config.get('wycheproof_warnings')
+
+    tests = []
+    tests += list_test_cases(PSS_Tests)
+    tests += list_test_cases(FIPS_PKCS1_Verify_Tests)
+    tests += list_test_cases(FIPS_PKCS1_Sign_Tests)
+    tests += list_test_cases(PKCS1_Legacy_Module_Tests)
+    tests += list_test_cases(PKCS1_All_Hashes_Tests)
+
+    if config.get('slow_tests'):
+        tests += list_test_cases(FIPS_PKCS1_Verify_Tests_KAT)
+        tests += list_test_cases(FIPS_PKCS1_Sign_Tests_KAT)
+
+    tests += [TestVectorsPSSWycheproof(wycheproof_warnings)]
+
+    return tests
+
+
+if __name__ == '__main__':
+    def suite():
+        return unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/__init__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/__init__.py
new file mode 100644
index 00000000..ee993db6
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/__init__.py
@@ -0,0 +1,46 @@
+# -*- coding: utf-8 -*-
+#
+# SelfTest/Util/__init__.py: Self-test for utility modules
+#
+# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test for utility modules""" + +__revision__ = "$Id$" + +import os + +def get_tests(config={}): + tests = [] + from Crypto.SelfTest.Util import test_number; tests += test_number.get_tests(config=config) + from Crypto.SelfTest.Util import test_Counter; tests += test_Counter.get_tests(config=config) + from Crypto.SelfTest.Util import test_Padding; tests += test_Padding.get_tests(config=config) + from Crypto.SelfTest.Util import test_strxor; tests += test_strxor.get_tests(config=config) + from Crypto.SelfTest.Util import test_asn1; tests += test_asn1.get_tests(config=config) + from Crypto.SelfTest.Util import test_rfc1751; tests += test_rfc1751.get_tests(config=config) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_Counter.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_Counter.py new file mode 100644 index 00000000..8837a32c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_Counter.py @@ -0,0 +1,67 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Util/test_Counter: Self-test for the Crypto.Util.Counter module +# +# Written in 2009 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-tests for Crypto.Util.Counter""" + +from Crypto.Util.py3compat import * + +import unittest + +class CounterTests(unittest.TestCase): + def setUp(self): + global Counter + from Crypto.Util import Counter + + def test_BE(self): + """Big endian""" + c = Counter.new(128) + c = Counter.new(128, little_endian=False) + + def test_LE(self): + """Little endian""" + c = Counter.new(128, little_endian=True) + + def test_nbits(self): + c = Counter.new(nbits=128) + self.assertRaises(ValueError, Counter.new, 129) + + def test_prefix(self): + c = Counter.new(128, prefix=b("xx")) + + def test_suffix(self): + c = Counter.new(128, suffix=b("xx")) + + def test_iv(self): + c = Counter.new(128, initial_value=2) + self.assertRaises(ValueError, Counter.new, 16, initial_value=0x1FFFF) + +def get_tests(config={}): + from Crypto.SelfTest.st_common import list_test_cases + return list_test_cases(CounterTests) + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_Padding.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_Padding.py new file mode 100644 index 00000000..12e2ec61 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_Padding.py @@ -0,0 +1,154 @@ +# +# SelfTest/Util/test_Padding.py: Self-test for padding functions +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import unittest +from binascii import unhexlify as uh + +from Crypto.Util.py3compat import * +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Util.Padding import pad, unpad + +class PKCS7_Tests(unittest.TestCase): + + def test1(self): + padded = pad(b(""), 4) + self.assertTrue(padded == uh(b("04040404"))) + padded = pad(b(""), 4, 'pkcs7') + self.assertTrue(padded == uh(b("04040404"))) + back = unpad(padded, 4) + self.assertTrue(back == b("")) + + def test2(self): + padded = pad(uh(b("12345678")), 4) + self.assertTrue(padded == uh(b("1234567804040404"))) + back = unpad(padded, 4) + self.assertTrue(back == uh(b("12345678"))) + + def test3(self): + padded = pad(uh(b("123456")), 4) + self.assertTrue(padded == uh(b("12345601"))) + back = unpad(padded, 4) + self.assertTrue(back == uh(b("123456"))) + + def test4(self): + padded = pad(uh(b("1234567890")), 4) + self.assertTrue(padded == uh(b("1234567890030303"))) + back = unpad(padded, 4) + self.assertTrue(back == uh(b("1234567890"))) + + def testn1(self): + self.assertRaises(ValueError, pad, uh(b("12")), 4, 'pkcs8') + + def testn2(self): + self.assertRaises(ValueError, unpad, b("\0\0\0"), 4) + self.assertRaises(ValueError, unpad, b(""), 4) + + def testn3(self): + self.assertRaises(ValueError, unpad, b("123456\x02"), 4) + self.assertRaises(ValueError, unpad, b("123456\x00"), 4) + self.assertRaises(ValueError, unpad, b("123456\x05\x05\x05\x05\x05"), 4) + +class X923_Tests(unittest.TestCase): + + def test1(self): + padded = pad(b(""), 4, 'x923') + self.assertTrue(padded == uh(b("00000004"))) + back = unpad(padded, 4, 'x923') + self.assertTrue(back == b("")) + + def test2(self): + padded = pad(uh(b("12345678")), 4, 'x923') + self.assertTrue(padded == uh(b("1234567800000004"))) + back = unpad(padded, 4, 'x923') + self.assertTrue(back == uh(b("12345678"))) + + def test3(self): + padded = pad(uh(b("123456")), 4, 'x923') + self.assertTrue(padded == uh(b("12345601"))) + back = unpad(padded, 4, 'x923') + self.assertTrue(back == uh(b("123456"))) + + def test4(self): + padded = pad(uh(b("1234567890")), 4, 'x923') + self.assertTrue(padded == uh(b("1234567890000003"))) + back = unpad(padded, 4, 'x923') + self.assertTrue(back == uh(b("1234567890"))) + + def testn1(self): + self.assertRaises(ValueError, unpad, b("123456\x02"), 4, 'x923') + self.assertRaises(ValueError, unpad, b("123456\x00"), 4, 'x923') + self.assertRaises(ValueError, unpad, b("123456\x00\x00\x00\x00\x05"), 4, 'x923') + self.assertRaises(ValueError, unpad, b(""), 4, 'x923') + +class ISO7816_Tests(unittest.TestCase): + + def test1(self): + padded = pad(b(""), 4, 'iso7816') + self.assertTrue(padded == uh(b("80000000"))) + back = unpad(padded, 4, 'iso7816') + self.assertTrue(back == b("")) + + def test2(self): + padded = pad(uh(b("12345678")), 4, 'iso7816') + self.assertTrue(padded == uh(b("1234567880000000"))) + back = unpad(padded, 4, 'iso7816') + self.assertTrue(back == uh(b("12345678"))) + + def test3(self): + padded = pad(uh(b("123456")), 4, 'iso7816') + self.assertTrue(padded == uh(b("12345680"))) + #import pdb; pdb.set_trace() + back = unpad(padded, 4, 'iso7816') + self.assertTrue(back == uh(b("123456"))) + + def test4(self): + padded = pad(uh(b("1234567890")), 4, 'iso7816') + self.assertTrue(padded == uh(b("1234567890800000"))) + back = unpad(padded, 4, 'iso7816') + self.assertTrue(back == uh(b("1234567890"))) + + def testn1(self): + self.assertRaises(ValueError, unpad, b("123456\x81"), 4, 'iso7816') 
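+        # an empty input cannot contain the 0x80 delimiter that ISO 7816-4 requires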
+ self.assertRaises(ValueError, unpad, b(""), 4, 'iso7816') + +def get_tests(config={}): + tests = [] + tests += list_test_cases(PKCS7_Tests) + tests += list_test_cases(X923_Tests) + tests += list_test_cases(ISO7816_Tests) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_asn1.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_asn1.py new file mode 100644 index 00000000..68292f30 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_asn1.py @@ -0,0 +1,784 @@ +# +# SelfTest/Util/test_asn.py: Self-test for the Crypto.Util.asn1 module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +"""Self-tests for Crypto.Util.asn1""" + +import unittest + +from Crypto.Util.py3compat import * +from Crypto.Util.asn1 import (DerObject, DerSetOf, DerInteger, + DerBitString, + DerObjectId, DerNull, DerOctetString, + DerSequence) + +class DerObjectTests(unittest.TestCase): + + def testObjInit1(self): + # Fail with invalid tag format (must be 1 byte) + self.assertRaises(ValueError, DerObject, b('\x00\x99')) + # Fail with invalid implicit tag (must be <0x1F) + self.assertRaises(ValueError, DerObject, 0x1F) + + # ------ + + def testObjEncode1(self): + # No payload + der = DerObject(b('\x02')) + self.assertEqual(der.encode(), b('\x02\x00')) + # Small payload (primitive) + der.payload = b('\x45') + self.assertEqual(der.encode(), b('\x02\x01\x45')) + # Invariant + self.assertEqual(der.encode(), b('\x02\x01\x45')) + # Initialize with numerical tag + der = DerObject(0x04) + der.payload = b('\x45') + self.assertEqual(der.encode(), b('\x04\x01\x45')) + # Initialize with constructed type + der = DerObject(b('\x10'), constructed=True) + self.assertEqual(der.encode(), b('\x30\x00')) + + def testObjEncode2(self): + # Initialize with payload + der = DerObject(0x03, b('\x12\x12')) + self.assertEqual(der.encode(), b('\x03\x02\x12\x12')) + + def testObjEncode3(self): + # Long payload + der = DerObject(b('\x10')) + der.payload = b("0")*128 + self.assertEqual(der.encode(), b('\x10\x81\x80' + "0"*128)) + + def testObjEncode4(self): + # Implicit tags (constructed) + der = DerObject(0x10, implicit=1, constructed=True) + der.payload = b('ppll') + self.assertEqual(der.encode(), b('\xa1\x04ppll')) + # Implicit tags (primitive) + der = DerObject(0x02, implicit=0x1E, constructed=False) + der.payload = b('ppll') + self.assertEqual(der.encode(), b('\x9E\x04ppll')) + + def testObjEncode5(self): + # Encode type with explicit tag + der = DerObject(0x10, explicit=5) + der.payload = b("xxll") + self.assertEqual(der.encode(), b("\xa5\x06\x10\x04xxll")) + + # ----- + + def testObjDecode1(self): + # Decode short payload + der = DerObject(0x02) + der.decode(b('\x02\x02\x01\x02')) + self.assertEqual(der.payload, b("\x01\x02")) + self.assertEqual(der._tag_octet, 0x02) + + def testObjDecode2(self): + # Decode long payload + der = DerObject(0x02) + der.decode(b('\x02\x81\x80' + "1"*128)) + self.assertEqual(der.payload, b("1")*128) + self.assertEqual(der._tag_octet, 0x02) + + def testObjDecode3(self): + # Decode payload with too much data gives error + der = DerObject(0x02) + self.assertRaises(ValueError, der.decode, b('\x02\x02\x01\x02\xFF')) + # Decode payload with too little data gives error + der = DerObject(0x02) + self.assertRaises(ValueError, der.decode, b('\x02\x02\x01')) + + def testObjDecode4(self): + # Decode implicit tag (primitive) + der = DerObject(0x02, constructed=False, implicit=0xF) + self.assertRaises(ValueError, der.decode, b('\x02\x02\x01\x02')) + der.decode(b('\x8F\x01\x00')) + self.assertEqual(der.payload, b('\x00')) + # Decode implicit tag (constructed) + der = DerObject(0x02, constructed=True, implicit=0xF) + self.assertRaises(ValueError, der.decode, b('\x02\x02\x01\x02')) + der.decode(b('\xAF\x01\x00')) + self.assertEqual(der.payload, b('\x00')) + + def testObjDecode5(self): + # Decode payload with unexpected tag gives error + der = DerObject(0x02) + self.assertRaises(ValueError, der.decode, b('\x03\x02\x01\x02')) + + def testObjDecode6(self): + # Arbitrary DER object + der = DerObject() + der.decode(b('\x65\x01\x88')) + 
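+        # tag octet 0x65 = 01|1|00101: application class, constructed, tag number 5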
self.assertEqual(der._tag_octet, 0x65) + self.assertEqual(der.payload, b('\x88')) + + def testObjDecode7(self): + # Decode explicit tag + der = DerObject(0x10, explicit=5) + der.decode(b("\xa5\x06\x10\x04xxll")) + self.assertEqual(der._inner_tag_octet, 0x10) + self.assertEqual(der.payload, b('xxll')) + + # Explicit tag may be 0 + der = DerObject(0x10, explicit=0) + der.decode(b("\xa0\x06\x10\x04xxll")) + self.assertEqual(der._inner_tag_octet, 0x10) + self.assertEqual(der.payload, b('xxll')) + + def testObjDecode8(self): + # Verify that decode returns the object + der = DerObject(0x02) + self.assertEqual(der, der.decode(b('\x02\x02\x01\x02'))) + +class DerIntegerTests(unittest.TestCase): + + def testInit1(self): + der = DerInteger(1) + self.assertEqual(der.encode(), b('\x02\x01\x01')) + + def testEncode1(self): + # Single-byte integers + # Value 0 + der = DerInteger(0) + self.assertEqual(der.encode(), b('\x02\x01\x00')) + # Value 1 + der = DerInteger(1) + self.assertEqual(der.encode(), b('\x02\x01\x01')) + # Value 127 + der = DerInteger(127) + self.assertEqual(der.encode(), b('\x02\x01\x7F')) + + def testEncode2(self): + # Multi-byte integers + # Value 128 + der = DerInteger(128) + self.assertEqual(der.encode(), b('\x02\x02\x00\x80')) + # Value 0x180 + der = DerInteger(0x180) + self.assertEqual(der.encode(), b('\x02\x02\x01\x80')) + # One very long integer + der = DerInteger(2**2048) + self.assertEqual(der.encode(), + b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) + + def testEncode3(self): + # Negative integers + # Value -1 + der = DerInteger(-1) + self.assertEqual(der.encode(), b('\x02\x01\xFF')) + # Value -128 + der = DerInteger(-128) + self.assertEqual(der.encode(), b('\x02\x01\x80')) + # Value + der = DerInteger(-87873) + self.assertEqual(der.encode(), b('\x02\x03\xFE\xA8\xBF')) + + def testEncode4(self): + # Explicit encoding + number = DerInteger(0x34, explicit=3) + self.assertEqual(number.encode(), b('\xa3\x03\x02\x01\x34')) + + # ----- + + def testDecode1(self): + # Single-byte integer + der = DerInteger() + # Value 0 + der.decode(b('\x02\x01\x00')) + self.assertEqual(der.value, 0) + # Value 1 + der.decode(b('\x02\x01\x01')) + self.assertEqual(der.value, 1) + # Value 127 + der.decode(b('\x02\x01\x7F')) + self.assertEqual(der.value, 127) + + def testDecode2(self): + # Multi-byte integer + der = DerInteger() + # Value 0x180L + 
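+        # two content octets are needed, since a lone 0x80 octet would decode as -128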
der.decode(b('\x02\x02\x01\x80')) + self.assertEqual(der.value,0x180) + # One very long integer + der.decode( + b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) + self.assertEqual(der.value,2**2048) + + def testDecode3(self): + # Negative integer + der = DerInteger() + # Value -1 + der.decode(b('\x02\x01\xFF')) + self.assertEqual(der.value, -1) + # Value -32768 + der.decode(b('\x02\x02\x80\x00')) + self.assertEqual(der.value, -32768) + + def testDecode5(self): + # We still accept BER integer format + der = DerInteger() + # Redundant leading zeroes + der.decode(b('\x02\x02\x00\x01')) + self.assertEqual(der.value, 1) + # Redundant leading 0xFF + der.decode(b('\x02\x02\xFF\xFF')) + self.assertEqual(der.value, -1) + # Empty payload + der.decode(b('\x02\x00')) + self.assertEqual(der.value, 0) + + def testDecode6(self): + # Explicit encoding + number = DerInteger(explicit=3) + number.decode(b('\xa3\x03\x02\x01\x34')) + self.assertEqual(number.value, 0x34) + + def testDecode7(self): + # Verify decode returns the DerInteger + der = DerInteger() + self.assertEqual(der, der.decode(b('\x02\x01\x7F'))) + + ### + + def testStrict1(self): + number = DerInteger() + + number.decode(b'\x02\x02\x00\x01') + number.decode(b'\x02\x02\x00\x7F') + self.assertRaises(ValueError, number.decode, b'\x02\x02\x00\x01', strict=True) + self.assertRaises(ValueError, number.decode, b'\x02\x02\x00\x7F', strict=True) + + ### + + def testErrDecode1(self): + # Wide length field + der = DerInteger() + self.assertRaises(ValueError, der.decode, b('\x02\x81\x01\x01')) + + +class DerSequenceTests(unittest.TestCase): + + def testInit1(self): + der = DerSequence([1, DerInteger(2), b('0\x00')]) + self.assertEqual(der.encode(), b('0\x08\x02\x01\x01\x02\x01\x020\x00')) + + def testEncode1(self): + # Empty sequence + der = DerSequence() + self.assertEqual(der.encode(), b('0\x00')) + self.assertFalse(der.hasOnlyInts()) + # One single-byte integer (zero) + der.append(0) + self.assertEqual(der.encode(), b('0\x03\x02\x01\x00')) + self.assertEqual(der.hasInts(),1) + self.assertEqual(der.hasInts(False),1) + self.assertTrue(der.hasOnlyInts()) + self.assertTrue(der.hasOnlyInts(False)) + # Invariant + self.assertEqual(der.encode(), b('0\x03\x02\x01\x00')) + + def testEncode2(self): + # Indexing + der = DerSequence() + der.append(0) + der[0] = 1 + self.assertEqual(len(der),1) + self.assertEqual(der[0],1) + 
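+        # DerSequence also supports negative indexing, like a plain Python list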
self.assertEqual(der[-1],1) + self.assertEqual(der.encode(), b('0\x03\x02\x01\x01')) + # + der[:] = [1] + self.assertEqual(len(der),1) + self.assertEqual(der[0],1) + self.assertEqual(der.encode(), b('0\x03\x02\x01\x01')) + + def testEncode3(self): + # One multi-byte integer (non-zero) + der = DerSequence() + der.append(0x180) + self.assertEqual(der.encode(), b('0\x04\x02\x02\x01\x80')) + + def testEncode4(self): + # One very long integer + der = DerSequence() + der.append(2**2048) + self.assertEqual(der.encode(), b('0\x82\x01\x05')+ + b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) + + def testEncode5(self): + der = DerSequence() + der += 1 + der += b('\x30\x00') + self.assertEqual(der.encode(), b('\x30\x05\x02\x01\x01\x30\x00')) + + def testEncode6(self): + # Two positive integers + der = DerSequence() + der.append(0x180) + der.append(0xFF) + self.assertEqual(der.encode(), b('0\x08\x02\x02\x01\x80\x02\x02\x00\xff')) + self.assertTrue(der.hasOnlyInts()) + self.assertTrue(der.hasOnlyInts(False)) + # Two mixed integers + der = DerSequence() + der.append(2) + der.append(-2) + self.assertEqual(der.encode(), b('0\x06\x02\x01\x02\x02\x01\xFE')) + self.assertEqual(der.hasInts(), 1) + self.assertEqual(der.hasInts(False), 2) + self.assertFalse(der.hasOnlyInts()) + self.assertTrue(der.hasOnlyInts(False)) + # + der.append(0x01) + der[1:] = [9,8] + self.assertEqual(len(der),3) + self.assertEqual(der[1:],[9,8]) + self.assertEqual(der[1:-1],[9]) + self.assertEqual(der.encode(), b('0\x09\x02\x01\x02\x02\x01\x09\x02\x01\x08')) + + def testEncode7(self): + # One integer and another type (already encoded) + der = DerSequence() + der.append(0x180) + der.append(b('0\x03\x02\x01\x05')) + self.assertEqual(der.encode(), b('0\x09\x02\x02\x01\x800\x03\x02\x01\x05')) + self.assertFalse(der.hasOnlyInts()) + + def testEncode8(self): + # One integer and another type (yet to encode) + der = DerSequence() + der.append(0x180) + der.append(DerSequence([5])) + self.assertEqual(der.encode(), b('0\x09\x02\x02\x01\x800\x03\x02\x01\x05')) + self.assertFalse(der.hasOnlyInts()) + + #### + + def testDecode1(self): + # Empty sequence + der = DerSequence() + der.decode(b('0\x00')) + self.assertEqual(len(der),0) + # One single-byte integer (zero) + der.decode(b('0\x03\x02\x01\x00')) + self.assertEqual(len(der),1) + self.assertEqual(der[0],0) + # Invariant + 
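+        # decoding the same input a second time must leave the object in the same state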
der.decode(b('0\x03\x02\x01\x00')) + self.assertEqual(len(der),1) + self.assertEqual(der[0],0) + + def testDecode2(self): + # One single-byte integer (non-zero) + der = DerSequence() + der.decode(b('0\x03\x02\x01\x7f')) + self.assertEqual(len(der),1) + self.assertEqual(der[0],127) + + def testDecode4(self): + # One very long integer + der = DerSequence() + der.decode(b('0\x82\x01\x05')+ + b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) + self.assertEqual(len(der),1) + self.assertEqual(der[0],2**2048) + + def testDecode6(self): + # Two integers + der = DerSequence() + der.decode(b('0\x08\x02\x02\x01\x80\x02\x02\x00\xff')) + self.assertEqual(len(der),2) + self.assertEqual(der[0],0x180) + self.assertEqual(der[1],0xFF) + + def testDecode7(self): + # One integer and 2 other types + der = DerSequence() + der.decode(b('0\x0A\x02\x02\x01\x80\x24\x02\xb6\x63\x12\x00')) + self.assertEqual(len(der),3) + self.assertEqual(der[0],0x180) + self.assertEqual(der[1],b('\x24\x02\xb6\x63')) + self.assertEqual(der[2],b('\x12\x00')) + + def testDecode8(self): + # Only 2 other types + der = DerSequence() + der.decode(b('0\x06\x24\x02\xb6\x63\x12\x00')) + self.assertEqual(len(der),2) + self.assertEqual(der[0],b('\x24\x02\xb6\x63')) + self.assertEqual(der[1],b('\x12\x00')) + self.assertEqual(der.hasInts(), 0) + self.assertEqual(der.hasInts(False), 0) + self.assertFalse(der.hasOnlyInts()) + self.assertFalse(der.hasOnlyInts(False)) + + def testDecode9(self): + # Verify that decode returns itself + der = DerSequence() + self.assertEqual(der, der.decode(b('0\x06\x24\x02\xb6\x63\x12\x00'))) + + ### + + def testErrDecode1(self): + # Not a sequence + der = DerSequence() + self.assertRaises(ValueError, der.decode, b('')) + self.assertRaises(ValueError, der.decode, b('\x00')) + self.assertRaises(ValueError, der.decode, b('\x30')) + + def testErrDecode2(self): + der = DerSequence() + # Too much data + self.assertRaises(ValueError, der.decode, b('\x30\x00\x00')) + + def testErrDecode3(self): + # Wrong length format + der = DerSequence() + # Missing length in sub-item + self.assertRaises(ValueError, der.decode, b('\x30\x04\x02\x01\x01\x00')) + # Valid BER, but invalid DER length + self.assertRaises(ValueError, der.decode, b('\x30\x81\x03\x02\x01\x01')) + self.assertRaises(ValueError, der.decode, b('\x30\x04\x02\x81\x01\x01')) + + def test_expected_nr_elements(self): + der_bin = 
DerSequence([1, 2, 3]).encode() + + DerSequence().decode(der_bin, nr_elements=3) + DerSequence().decode(der_bin, nr_elements=(2,3)) + self.assertRaises(ValueError, DerSequence().decode, der_bin, nr_elements=1) + self.assertRaises(ValueError, DerSequence().decode, der_bin, nr_elements=(4,5)) + + def test_expected_only_integers(self): + + der_bin1 = DerSequence([1, 2, 3]).encode() + der_bin2 = DerSequence([1, 2, DerSequence([3, 4])]).encode() + + DerSequence().decode(der_bin1, only_ints_expected=True) + DerSequence().decode(der_bin1, only_ints_expected=False) + DerSequence().decode(der_bin2, only_ints_expected=False) + self.assertRaises(ValueError, DerSequence().decode, der_bin2, only_ints_expected=True) + + +class DerOctetStringTests(unittest.TestCase): + + def testInit1(self): + der = DerOctetString(b('\xFF')) + self.assertEqual(der.encode(), b('\x04\x01\xFF')) + + def testEncode1(self): + # Empty sequence + der = DerOctetString() + self.assertEqual(der.encode(), b('\x04\x00')) + # Small payload + der.payload = b('\x01\x02') + self.assertEqual(der.encode(), b('\x04\x02\x01\x02')) + + #### + + def testDecode1(self): + # Empty sequence + der = DerOctetString() + der.decode(b('\x04\x00')) + self.assertEqual(der.payload, b('')) + # Small payload + der.decode(b('\x04\x02\x01\x02')) + self.assertEqual(der.payload, b('\x01\x02')) + + def testDecode2(self): + # Verify that decode returns the object + der = DerOctetString() + self.assertEqual(der, der.decode(b('\x04\x00'))) + + def testErrDecode1(self): + # No leftovers allowed + der = DerOctetString() + self.assertRaises(ValueError, der.decode, b('\x04\x01\x01\xff')) + +class DerNullTests(unittest.TestCase): + + def testEncode1(self): + der = DerNull() + self.assertEqual(der.encode(), b('\x05\x00')) + + #### + + def testDecode1(self): + # Empty sequence + der = DerNull() + self.assertEqual(der, der.decode(b('\x05\x00'))) + +class DerObjectIdTests(unittest.TestCase): + + def testInit1(self): + der = DerObjectId("1.1") + self.assertEqual(der.encode(), b('\x06\x01)')) + + def testEncode1(self): + der = DerObjectId('1.2.840.113549.1.1.1') + self.assertEqual(der.encode(), b('\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01')) + # + der = DerObjectId() + der.value = '1.2.840.113549.1.1.1' + self.assertEqual(der.encode(), b('\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01')) + + #### + + def testDecode1(self): + # Empty sequence + der = DerObjectId() + der.decode(b('\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01')) + self.assertEqual(der.value, '1.2.840.113549.1.1.1') + + def testDecode2(self): + # Verify that decode returns the object + der = DerObjectId() + self.assertEqual(der, + der.decode(b('\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01'))) + + def testDecode3(self): + der = DerObjectId() + der.decode(b('\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x00\x01')) + self.assertEqual(der.value, '1.2.840.113549.1.0.1') + + +class DerBitStringTests(unittest.TestCase): + + def testInit1(self): + der = DerBitString(b("\xFF")) + self.assertEqual(der.encode(), b('\x03\x02\x00\xFF')) + + def testInit2(self): + der = DerBitString(DerInteger(1)) + self.assertEqual(der.encode(), b('\x03\x04\x00\x02\x01\x01')) + + def testEncode1(self): + # Empty sequence + der = DerBitString() + self.assertEqual(der.encode(), b('\x03\x01\x00')) + # Small payload + der = DerBitString(b('\x01\x02')) + self.assertEqual(der.encode(), b('\x03\x03\x00\x01\x02')) + # Small payload + der = DerBitString() + der.value = b('\x01\x02') + self.assertEqual(der.encode(), b('\x03\x03\x00\x01\x02')) + + #### + + def 
testDecode1(self): + # Empty sequence + der = DerBitString() + der.decode(b('\x03\x00')) + self.assertEqual(der.value, b('')) + # Small payload + der.decode(b('\x03\x03\x00\x01\x02')) + self.assertEqual(der.value, b('\x01\x02')) + + def testDecode2(self): + # Verify that decode returns the object + der = DerBitString() + self.assertEqual(der, der.decode(b('\x03\x00'))) + + +class DerSetOfTests(unittest.TestCase): + + def testInit1(self): + der = DerSetOf([DerInteger(1), DerInteger(2)]) + self.assertEqual(der.encode(), b('1\x06\x02\x01\x01\x02\x01\x02')) + + def testEncode1(self): + # Empty set + der = DerSetOf() + self.assertEqual(der.encode(), b('1\x00')) + # One single-byte integer (zero) + der.add(0) + self.assertEqual(der.encode(), b('1\x03\x02\x01\x00')) + # Invariant + self.assertEqual(der.encode(), b('1\x03\x02\x01\x00')) + + def testEncode2(self): + # Two integers + der = DerSetOf() + der.add(0x180) + der.add(0xFF) + self.assertEqual(der.encode(), b('1\x08\x02\x02\x00\xff\x02\x02\x01\x80')) + # Initialize with integers + der = DerSetOf([0x180, 0xFF]) + self.assertEqual(der.encode(), b('1\x08\x02\x02\x00\xff\x02\x02\x01\x80')) + + def testEncode3(self): + # One integer and another type (no matter what it is) + der = DerSetOf() + der.add(0x180) + self.assertRaises(ValueError, der.add, b('\x00\x02\x00\x00')) + + def testEncode4(self): + # Only non integers + der = DerSetOf() + der.add(b('\x01\x00')) + der.add(b('\x01\x01\x01')) + self.assertEqual(der.encode(), b('1\x05\x01\x00\x01\x01\x01')) + + #### + + def testDecode1(self): + # Empty sequence + der = DerSetOf() + der.decode(b('1\x00')) + self.assertEqual(len(der),0) + # One single-byte integer (zero) + der.decode(b('1\x03\x02\x01\x00')) + self.assertEqual(len(der),1) + self.assertEqual(list(der),[0]) + + def testDecode2(self): + # Two integers + der = DerSetOf() + der.decode(b('1\x08\x02\x02\x01\x80\x02\x02\x00\xff')) + self.assertEqual(len(der),2) + l = list(der) + self.assertTrue(0x180 in l) + self.assertTrue(0xFF in l) + + def testDecode3(self): + # One integer and 2 other types + der = DerSetOf() + #import pdb; pdb.set_trace() + self.assertRaises(ValueError, der.decode, + b('0\x0A\x02\x02\x01\x80\x24\x02\xb6\x63\x12\x00')) + + def testDecode4(self): + # Verify that decode returns the object + der = DerSetOf() + self.assertEqual(der, + der.decode(b('1\x08\x02\x02\x01\x80\x02\x02\x00\xff'))) + + ### + + def testErrDecode1(self): + # No leftovers allowed + der = DerSetOf() + self.assertRaises(ValueError, der.decode, + b('1\x08\x02\x02\x01\x80\x02\x02\x00\xff\xAA')) + +def get_tests(config={}): + from Crypto.SelfTest.st_common import list_test_cases + listTests = [] + listTests += list_test_cases(DerObjectTests) + listTests += list_test_cases(DerIntegerTests) + listTests += list_test_cases(DerSequenceTests) + listTests += list_test_cases(DerOctetStringTests) + listTests += list_test_cases(DerNullTests) + listTests += list_test_cases(DerObjectIdTests) + listTests += list_test_cases(DerBitStringTests) + listTests += list_test_cases(DerSetOfTests) + return listTests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_number.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_number.py new file mode 100644 index 00000000..bb143f3b --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_number.py @@ -0,0 +1,192 @@ +# -*- coding: 
utf-8 -*- +# +# SelfTest/Util/test_number.py: Self-test for parts of the Crypto.Util.number module +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-tests for (some of) Crypto.Util.number""" + +import math +import unittest + +from Crypto.Util.py3compat import * +from Crypto.SelfTest.st_common import list_test_cases + +from Crypto.Util import number +from Crypto.Util.number import long_to_bytes + + +class MyError(Exception): + """Dummy exception used for tests""" + +# NB: In some places, we compare tuples instead of just output values so that +# if any inputs cause a test failure, we'll be able to tell which ones. + +class MiscTests(unittest.TestCase): + + def test_ceil_div(self): + """Util.number.ceil_div""" + self.assertRaises(TypeError, number.ceil_div, "1", 1) + self.assertRaises(ZeroDivisionError, number.ceil_div, 1, 0) + self.assertRaises(ZeroDivisionError, number.ceil_div, -1, 0) + + # b = 1 + self.assertEqual(0, number.ceil_div(0, 1)) + self.assertEqual(1, number.ceil_div(1, 1)) + self.assertEqual(2, number.ceil_div(2, 1)) + self.assertEqual(3, number.ceil_div(3, 1)) + + # b = 2 + self.assertEqual(0, number.ceil_div(0, 2)) + self.assertEqual(1, number.ceil_div(1, 2)) + self.assertEqual(1, number.ceil_div(2, 2)) + self.assertEqual(2, number.ceil_div(3, 2)) + self.assertEqual(2, number.ceil_div(4, 2)) + self.assertEqual(3, number.ceil_div(5, 2)) + + # b = 3 + self.assertEqual(0, number.ceil_div(0, 3)) + self.assertEqual(1, number.ceil_div(1, 3)) + self.assertEqual(1, number.ceil_div(2, 3)) + self.assertEqual(1, number.ceil_div(3, 3)) + self.assertEqual(2, number.ceil_div(4, 3)) + self.assertEqual(2, number.ceil_div(5, 3)) + self.assertEqual(2, number.ceil_div(6, 3)) + self.assertEqual(3, number.ceil_div(7, 3)) + + # b = 4 + self.assertEqual(0, number.ceil_div(0, 4)) + self.assertEqual(1, number.ceil_div(1, 4)) + self.assertEqual(1, number.ceil_div(2, 4)) + self.assertEqual(1, number.ceil_div(3, 4)) + self.assertEqual(1, number.ceil_div(4, 4)) + self.assertEqual(2, number.ceil_div(5, 4)) + self.assertEqual(2, number.ceil_div(6, 4)) + self.assertEqual(2, number.ceil_div(7, 4)) + self.assertEqual(2, number.ceil_div(8, 4)) + self.assertEqual(3, number.ceil_div(9, 4)) + + def test_getPrime(self): + """Util.number.getPrime""" + self.assertRaises(ValueError, number.getPrime, -100) + self.assertRaises(ValueError, number.getPrime, 0) + self.assertRaises(ValueError, number.getPrime, 1) + + bits = 4 + for i in range(100): + x = number.getPrime(bits) + self.assertEqual(x >= (1 << bits - 1), 1) + self.assertEqual(x < (1 << 
bits), 1) + + bits = 512 + x = number.getPrime(bits) + self.assertNotEqual(x % 2, 0) + self.assertEqual(x >= (1 << bits - 1), 1) + self.assertEqual(x < (1 << bits), 1) + + def test_getStrongPrime(self): + """Util.number.getStrongPrime""" + self.assertRaises(ValueError, number.getStrongPrime, 256) + self.assertRaises(ValueError, number.getStrongPrime, 513) + bits = 512 + x = number.getStrongPrime(bits) + self.assertNotEqual(x % 2, 0) + self.assertEqual(x > (1 << bits-1)-1, 1) + self.assertEqual(x < (1 << bits), 1) + e = 2**16+1 + x = number.getStrongPrime(bits, e) + self.assertEqual(number.GCD(x-1, e), 1) + self.assertNotEqual(x % 2, 0) + self.assertEqual(x > (1 << bits-1)-1, 1) + self.assertEqual(x < (1 << bits), 1) + e = 2**16+2 + x = number.getStrongPrime(bits, e) + self.assertEqual(number.GCD((x-1)>>1, e), 1) + self.assertNotEqual(x % 2, 0) + self.assertEqual(x > (1 << bits-1)-1, 1) + self.assertEqual(x < (1 << bits), 1) + + def test_isPrime(self): + """Util.number.isPrime""" + self.assertEqual(number.isPrime(-3), False) # Regression test: negative numbers should not be prime + self.assertEqual(number.isPrime(-2), False) # Regression test: negative numbers should not be prime + self.assertEqual(number.isPrime(1), False) # Regression test: isPrime(1) caused some versions of PyCrypto to crash. + self.assertEqual(number.isPrime(2), True) + self.assertEqual(number.isPrime(3), True) + self.assertEqual(number.isPrime(4), False) + self.assertEqual(number.isPrime(2**1279-1), True) + self.assertEqual(number.isPrime(-(2**1279-1)), False) # Regression test: negative numbers should not be prime + # test some known gmp pseudo-primes taken from + # http://www.trnicely.net/misc/mpzspsp.html + for composite in (43 * 127 * 211, 61 * 151 * 211, 15259 * 30517, + 346141 * 692281, 1007119 * 2014237, 3589477 * 7178953, + 4859419 * 9718837, 2730439 * 5460877, + 245127919 * 490255837, 963939391 * 1927878781, + 4186358431 * 8372716861, 1576820467 * 3153640933): + self.assertEqual(number.isPrime(int(composite)), False) + + def test_size(self): + self.assertEqual(number.size(2),2) + self.assertEqual(number.size(3),2) + self.assertEqual(number.size(0xa2),8) + self.assertEqual(number.size(0xa2ba40),8*3) + self.assertEqual(number.size(0xa2ba40ee07e3b2bd2f02ce227f36a195024486e49c19cb41bbbdfbba98b22b0e577c2eeaffa20d883a76e65e394c69d4b3c05a1e8fadda27edb2a42bc000fe888b9b32c22d15add0cd76b3e7936e19955b220dd17d4ea904b1ec102b2e4de7751222aa99151024c7cb41cc5ea21d00eeb41f7c800834d2c6e06bce3bce7ea9a5), 1024) + self.assertRaises(ValueError, number.size, -1) + + +class LongTests(unittest.TestCase): + + def test1(self): + self.assertEqual(long_to_bytes(0), b'\x00') + self.assertEqual(long_to_bytes(1), b'\x01') + self.assertEqual(long_to_bytes(0x100), b'\x01\x00') + self.assertEqual(long_to_bytes(0xFF00000000), b'\xFF\x00\x00\x00\x00') + self.assertEqual(long_to_bytes(0xFF00000000), b'\xFF\x00\x00\x00\x00') + self.assertEqual(long_to_bytes(0x1122334455667788), b'\x11\x22\x33\x44\x55\x66\x77\x88') + self.assertEqual(long_to_bytes(0x112233445566778899), b'\x11\x22\x33\x44\x55\x66\x77\x88\x99') + + def test2(self): + self.assertEqual(long_to_bytes(0, 1), b'\x00') + self.assertEqual(long_to_bytes(0, 2), b'\x00\x00') + self.assertEqual(long_to_bytes(1, 3), b'\x00\x00\x01') + self.assertEqual(long_to_bytes(65535, 2), b'\xFF\xFF') + self.assertEqual(long_to_bytes(65536, 2), b'\x00\x01\x00\x00') + self.assertEqual(long_to_bytes(0x100, 1), b'\x01\x00') + self.assertEqual(long_to_bytes(0xFF00000001, 6), b'\x00\xFF\x00\x00\x00\x01') + 
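+        # with blocksize given, the result is left-padded with zero bytes to a multiple of blocksize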
self.assertEqual(long_to_bytes(0xFF00000001, 8), b'\x00\x00\x00\xFF\x00\x00\x00\x01') + self.assertEqual(long_to_bytes(0xFF00000001, 10), b'\x00\x00\x00\x00\x00\xFF\x00\x00\x00\x01') + self.assertEqual(long_to_bytes(0xFF00000001, 11), b'\x00\x00\x00\x00\x00\x00\xFF\x00\x00\x00\x01') + + def test_err1(self): + self.assertRaises(ValueError, long_to_bytes, -1) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(MiscTests) + tests += list_test_cases(LongTests) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_rfc1751.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_rfc1751.py new file mode 100644 index 00000000..af0aa2ba --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_rfc1751.py @@ -0,0 +1,38 @@ +import unittest + +import binascii +from Crypto.Util.RFC1751 import key_to_english, english_to_key + + +class RFC1751_Tests(unittest.TestCase): + + def test1(self): + data = [ + ('EB33F77EE73D4053', 'TIDE ITCH SLOW REIN RULE MOT'), + ('CCAC2AED591056BE4F90FD441C534766', 'RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE'), + ('EFF81F9BFBC65350920CDD7416DE8009', 'TROD MUTE TAIL WARM CHAR KONG HAAG CITY BORE O TEAL AWL') + ] + + for key_hex, words in data: + key_bin = binascii.a2b_hex(key_hex) + + w2 = key_to_english(key_bin) + self.assertEqual(w2, words) + + k2 = english_to_key(words) + self.assertEqual(k2, key_bin) + + def test_error_key_to_english(self): + + self.assertRaises(ValueError, key_to_english, b'0' * 7) + + +def get_tests(config={}): + from Crypto.SelfTest.st_common import list_test_cases + tests = list_test_cases(RFC1751_Tests) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_strxor.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_strxor.py new file mode 100644 index 00000000..c91d38f5 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/Util/test_strxor.py @@ -0,0 +1,280 @@ +# +# SelfTest/Util/test_strxor.py: Self-test for XORing +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify, hexlify + +from Crypto.SelfTest.st_common import list_test_cases +from Crypto.Util.strxor import strxor, strxor_c + + +class StrxorTests(unittest.TestCase): + + def test1(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"383d4ba020573314395b") + result = unhexlify(b"c70ed123c59a7fcb6f12") + self.assertEqual(strxor(term1, term2), result) + self.assertEqual(strxor(term2, term1), result) + + def test2(self): + es = b"" + self.assertEqual(strxor(es, es), es) + + def test3(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + all_zeros = b"\x00" * len(term1) + self.assertEqual(strxor(term1, term1), all_zeros) + + def test_wrong_length(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"ff339a83e5cd4cdf564990") + self.assertRaises(ValueError, strxor, term1, term2) + + def test_bytearray(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term1_ba = bytearray(term1) + term2 = unhexlify(b"383d4ba020573314395b") + result = unhexlify(b"c70ed123c59a7fcb6f12") + + self.assertEqual(strxor(term1_ba, term2), result) + + def test_memoryview(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term1_mv = memoryview(term1) + term2 = unhexlify(b"383d4ba020573314395b") + result = unhexlify(b"c70ed123c59a7fcb6f12") + + self.assertEqual(strxor(term1_mv, term2), result) + + def test_output_bytearray(self): + """Verify result can be stored in pre-allocated memory""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"383d4ba020573314395b") + original_term1 = term1[:] + original_term2 = term2[:] + expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") + output = bytearray(len(term1)) + + result = strxor(term1, term2, output=output) + + self.assertEqual(result, None) + self.assertEqual(output, expected_xor) + self.assertEqual(term1, original_term1) + self.assertEqual(term2, original_term2) + + def test_output_memoryview(self): + """Verify result can be stored in pre-allocated memory""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"383d4ba020573314395b") + original_term1 = term1[:] + original_term2 = term2[:] + expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") + output = memoryview(bytearray(len(term1))) + + result = strxor(term1, term2, output=output) + + self.assertEqual(result, None) + self.assertEqual(output, expected_xor) + self.assertEqual(term1, original_term1) + self.assertEqual(term2, original_term2) + + def test_output_overlapping_bytearray(self): + """Verify result can be stored in overlapping memory""" + + term1 = bytearray(unhexlify(b"ff339a83e5cd4cdf5649")) + term2 = unhexlify(b"383d4ba020573314395b") + original_term2 = term2[:] + expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") + + result = strxor(term1, term2, output=term1) + + self.assertEqual(result, None) + self.assertEqual(term1, expected_xor) + self.assertEqual(term2, original_term2) + + def 
test_output_overlapping_memoryview(self): + """Verify result can be stored in overlapping memory""" + + term1 = memoryview(bytearray(unhexlify(b"ff339a83e5cd4cdf5649"))) + term2 = unhexlify(b"383d4ba020573314395b") + original_term2 = term2[:] + expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") + + result = strxor(term1, term2, output=term1) + + self.assertEqual(result, None) + self.assertEqual(term1, expected_xor) + self.assertEqual(term2, original_term2) + + def test_output_ro_bytes(self): + """Verify result cannot be stored in read-only memory""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"383d4ba020573314395b") + + self.assertRaises(TypeError, strxor, term1, term2, output=term1) + + def test_output_ro_memoryview(self): + """Verify result cannot be stored in read-only memory""" + + term1 = memoryview(unhexlify(b"ff339a83e5cd4cdf5649")) + term2 = unhexlify(b"383d4ba020573314395b") + + self.assertRaises(TypeError, strxor, term1, term2, output=term1) + + def test_output_incorrect_length(self): + """Verify result cannot be stored in memory of incorrect length""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"383d4ba020573314395b") + output = bytearray(len(term1) - 1) + + self.assertRaises(ValueError, strxor, term1, term2, output=output) + + +class Strxor_cTests(unittest.TestCase): + + def test1(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + result = unhexlify(b"be72dbc2a48c0d9e1708") + self.assertEqual(strxor_c(term1, 65), result) + + def test2(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + self.assertEqual(strxor_c(term1, 0), term1) + + def test3(self): + self.assertEqual(strxor_c(b"", 90), b"") + + def test_wrong_range(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + self.assertRaises(ValueError, strxor_c, term1, -1) + self.assertRaises(ValueError, strxor_c, term1, 256) + + def test_bytearray(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term1_ba = bytearray(term1) + result = unhexlify(b"be72dbc2a48c0d9e1708") + + self.assertEqual(strxor_c(term1_ba, 65), result) + + def test_memoryview(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term1_mv = memoryview(term1) + result = unhexlify(b"be72dbc2a48c0d9e1708") + + self.assertEqual(strxor_c(term1_mv, 65), result) + + def test_output_bytearray(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + original_term1 = term1[:] + expected_result = unhexlify(b"be72dbc2a48c0d9e1708") + output = bytearray(len(term1)) + + result = strxor_c(term1, 65, output=output) + + self.assertEqual(result, None) + self.assertEqual(output, expected_result) + self.assertEqual(term1, original_term1) + + def test_output_memoryview(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + original_term1 = term1[:] + expected_result = unhexlify(b"be72dbc2a48c0d9e1708") + output = memoryview(bytearray(len(term1))) + + result = strxor_c(term1, 65, output=output) + + self.assertEqual(result, None) + self.assertEqual(output, expected_result) + self.assertEqual(term1, original_term1) + + def test_output_overlapping_bytearray(self): + """Verify result can be stored in overlapping memory""" + + term1 = bytearray(unhexlify(b"ff339a83e5cd4cdf5649")) + expected_xor = unhexlify(b"be72dbc2a48c0d9e1708") + + result = strxor_c(term1, 65, output=term1) + + self.assertEqual(result, None) + self.assertEqual(term1, expected_xor) + + def test_output_overlapping_memoryview(self): + """Verify result can be stored in overlapping memory""" + + term1 = memoryview(bytearray(unhexlify(b"ff339a83e5cd4cdf5649"))) + 
expected_xor = unhexlify(b"be72dbc2a48c0d9e1708") + + result = strxor_c(term1, 65, output=term1) + + self.assertEqual(result, None) + self.assertEqual(term1, expected_xor) + + def test_output_ro_bytes(self): + """Verify result cannot be stored in read-only memory""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + + self.assertRaises(TypeError, strxor_c, term1, 65, output=term1) + + def test_output_ro_memoryview(self): + """Verify result cannot be stored in read-only memory""" + + term1 = memoryview(unhexlify(b"ff339a83e5cd4cdf5649")) + term2 = unhexlify(b"383d4ba020573314395b") + + self.assertRaises(TypeError, strxor_c, term1, 65, output=term1) + + def test_output_incorrect_length(self): + """Verify result cannot be stored in memory of incorrect length""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + output = bytearray(len(term1) - 1) + + self.assertRaises(ValueError, strxor_c, term1, 65, output=output) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(StrxorTests) + tests += list_test_cases(Strxor_cTests) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/__init__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/__init__.py new file mode 100644 index 00000000..12b7592f --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/__init__.py @@ -0,0 +1,97 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/__init__.py: Self-test for PyCrypto +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self tests + +These tests should perform quickly and can ideally be used every time an +application runs. +""" + +__revision__ = "$Id$" + +import sys +import unittest +from Crypto.Util.py3compat import StringIO + +class SelfTestError(Exception): + def __init__(self, message, result): + Exception.__init__(self, message, result) + self.message = message + self.result = result + +def run(module=None, verbosity=0, stream=None, tests=None, config=None, **kwargs): + """Execute self-tests. + + This raises SelfTestError if any test is unsuccessful. + + You may optionally pass in a sub-module of SelfTest if you only want to + perform some of the tests. 
For example, the following would test only the
+    hash modules:
+
+        Crypto.SelfTest.run(Crypto.SelfTest.Hash)
+
+    """
+
+    if config is None:
+        config = {}
+    suite = unittest.TestSuite()
+    if module is None:
+        if tests is None:
+            tests = get_tests(config=config)
+        suite.addTests(tests)
+    else:
+        if tests is None:
+            suite.addTests(module.get_tests(config=config))
+        else:
+            raise ValueError("'module' and 'tests' arguments are mutually exclusive")
+    if stream is None:
+        kwargs['stream'] = StringIO()
+    else:
+        kwargs['stream'] = stream
+    runner = unittest.TextTestRunner(verbosity=verbosity, **kwargs)
+    result = runner.run(suite)
+    if not result.wasSuccessful():
+        if stream is None:
+            sys.stderr.write(kwargs['stream'].getvalue())
+        raise SelfTestError("Self-test failed", result)
+    return result
+
+def get_tests(config={}):
+    tests = []
+    from Crypto.SelfTest import Cipher;    tests += Cipher.get_tests(config=config)
+    from Crypto.SelfTest import Hash;      tests += Hash.get_tests(config=config)
+    from Crypto.SelfTest import Protocol;  tests += Protocol.get_tests(config=config)
+    from Crypto.SelfTest import PublicKey; tests += PublicKey.get_tests(config=config)
+    from Crypto.SelfTest import Random;    tests += Random.get_tests(config=config)
+    from Crypto.SelfTest import Util;      tests += Util.get_tests(config=config)
+    from Crypto.SelfTest import Signature; tests += Signature.get_tests(config=config)
+    from Crypto.SelfTest import IO;        tests += IO.get_tests(config=config)
+    from Crypto.SelfTest import Math;      tests += Math.get_tests(config=config)
+    return tests
+
+if __name__ == '__main__':
+    suite = lambda: unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
+
+# vim:set ts=4 sw=4 sts=4 expandtab:
diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/__main__.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/__main__.py
new file mode 100644
index 00000000..9ab0912a
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/__main__.py
@@ -0,0 +1,38 @@
+#! /usr/bin/env python
+#
+# __main__.py : Stand-alone loader for PyCryptodome test suite
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# =================================================================== + +from __future__ import print_function + +import sys + +from Crypto import SelfTest + +slow_tests = not "--skip-slow-tests" in sys.argv +if not slow_tests: + print("Skipping slow tests") + +wycheproof_warnings = "--wycheproof-warnings" in sys.argv +if wycheproof_warnings: + print("Printing Wycheproof warnings") + +config = {'slow_tests' : slow_tests, 'wycheproof_warnings' : wycheproof_warnings } +SelfTest.run(stream=sys.stdout, verbosity=1, config=config) diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/loader.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/loader.py new file mode 100644 index 00000000..18be270d --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/loader.py @@ -0,0 +1,206 @@ +# =================================================================== +# +# Copyright (c) 2016, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import os +import re +import json +import errno +import binascii +import warnings +from binascii import unhexlify +from Crypto.Util.py3compat import FileNotFoundError + + +try: + import pycryptodome_test_vectors # type: ignore + test_vectors_available = True +except ImportError: + test_vectors_available = False + + +def _load_tests(dir_comps, file_in, description, conversions): + """Load and parse a test vector file + + Return a list of objects, one per group of adjacent + KV lines or for a single line in the form "[.*]". + + For a group of lines, the object has one attribute per line. 
+ """ + + line_number = 0 + results = [] + + class TestVector(object): + def __init__(self, description, count): + self.desc = description + self.count = count + self.others = [] + + test_vector = None + count = 0 + new_group = True + + while True: + line_number += 1 + line = file_in.readline() + if not line: + if test_vector is not None: + results.append(test_vector) + break + line = line.strip() + + # Skip comments and empty lines + if line.startswith('#') or not line: + new_group = True + continue + + if line.startswith("["): + if test_vector is not None: + results.append(test_vector) + test_vector = None + results.append(line) + continue + + if new_group: + count += 1 + new_group = False + if test_vector is not None: + results.append(test_vector) + test_vector = TestVector("%s (#%d)" % (description, count), count) + + res = re.match("([A-Za-z0-9]+) = ?(.*)", line) + if not res: + test_vector.others += [line] + else: + token = res.group(1).lower() + data = res.group(2).lower() + + conversion = conversions.get(token, None) + if conversion is None: + if len(data) % 2 != 0: + data = "0" + data + setattr(test_vector, token, binascii.unhexlify(data)) + else: + setattr(test_vector, token, conversion(data)) + + # This line is ignored + return results + + +def load_test_vectors(dir_comps, file_name, description, conversions): + """Load and parse a test vector file + + This function returns a list of objects, one per group of adjacent + KV lines or for a single line in the form "[.*]". + + For a group of lines, the object has one attribute per line. + """ + + results = None + + try: + if not test_vectors_available: + raise FileNotFoundError(errno.ENOENT, + os.strerror(errno.ENOENT), + file_name) + + description = "%s test (%s)" % (description, file_name) + + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + with open(full_file_name) as file_in: + results = _load_tests(dir_comps, file_in, description, conversions) + + except FileNotFoundError: + warnings.warn("Warning: skipping extended tests for " + description, + UserWarning, + stacklevel=2) + + return results + + +def load_test_vectors_wycheproof(dir_comps, file_name, description, + root_tag={}, group_tag={}, unit_tag={}): + + result = [] + try: + if not test_vectors_available: + raise FileNotFoundError(errno.ENOENT, + os.strerror(errno.ENOENT), + file_name) + + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + with open(full_file_name) as file_in: + tv_tree = json.load(file_in) + + except FileNotFoundError: + warnings.warn("Warning: skipping extended tests for " + description, + UserWarning, + stacklevel=2) + return result + + class TestVector(object): + pass + + common_root = {} + for k, v in root_tag.items(): + common_root[k] = v(tv_tree) + + for group in tv_tree['testGroups']: + + common_group = {} + for k, v in group_tag.items(): + common_group[k] = v(group) + + for test in group['tests']: + tv = TestVector() + + for k, v in common_root.items(): + setattr(tv, k, v) + for k, v in common_group.items(): + setattr(tv, k, v) + + tv.id = test['tcId'] + tv.comment = test['comment'] + for attr in 'key', 'iv', 'aad', 'msg', 'ct', 'tag', 'label', 'ikm', 'salt', 'info', 'okm', 'sig': + if attr in test: + setattr(tv, attr, unhexlify(test[attr])) + tv.filename = file_name + + for k, v in unit_tag.items(): + setattr(tv, k, v(test)) + + tv.valid = test['result'] != 
"invalid" + tv.warning = test['result'] == "acceptable" + result.append(tv) + + return result + diff --git a/env/lib/python3.8/site-packages/Crypto/SelfTest/st_common.py b/env/lib/python3.8/site-packages/Crypto/SelfTest/st_common.py new file mode 100644 index 00000000..e098d812 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/SelfTest/st_common.py @@ -0,0 +1,55 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/st_common.py: Common functions for SelfTest modules +# +# Written in 2008 by Dwayne C. Litzenberger +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Common functions for SelfTest modules""" + +import unittest +import binascii +from Crypto.Util.py3compat import b + + +def list_test_cases(class_): + """Return a list of TestCase instances given a TestCase class + + This is useful when you have defined test* methods on your TestCase class. + """ + return unittest.TestLoader().loadTestsFromTestCase(class_) + +def strip_whitespace(s): + """Remove whitespace from a text or byte string""" + if isinstance(s,str): + return b("".join(s.split())) + else: + return b("").join(s.split()) + +def a2b_hex(s): + """Convert hexadecimal to binary, ignoring whitespace""" + return binascii.a2b_hex(strip_whitespace(s)) + +def b2a_hex(s): + """Convert binary to hexadecimal""" + # For completeness + return binascii.b2a_hex(s) + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/DSS.py b/env/lib/python3.8/site-packages/Crypto/Signature/DSS.py new file mode 100644 index 00000000..fa848179 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Signature/DSS.py @@ -0,0 +1,403 @@ +# +# Signature/DSS.py : DSS.py +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util.asn1 import DerSequence +from Crypto.Util.number import long_to_bytes +from Crypto.Math.Numbers import Integer + +from Crypto.Hash import HMAC +from Crypto.PublicKey.ECC import EccKey +from Crypto.PublicKey.DSA import DsaKey + +__all__ = ['DssSigScheme', 'new'] + + +class DssSigScheme(object): + """A (EC)DSA signature object. + Do not instantiate directly. + Use :func:`Crypto.Signature.DSS.new`. + """ + + def __init__(self, key, encoding, order): + """Create a new Digital Signature Standard (DSS) object. + + Do not instantiate this object directly, + use `Crypto.Signature.DSS.new` instead. + """ + + self._key = key + self._encoding = encoding + self._order = order + + self._order_bits = self._order.size_in_bits() + self._order_bytes = (self._order_bits - 1) // 8 + 1 + + def can_sign(self): + """Return ``True`` if this signature object can be used + for signing messages.""" + + return self._key.has_private() + + def _compute_nonce(self, msg_hash): + raise NotImplementedError("To be provided by subclasses") + + def _valid_hash(self, msg_hash): + raise NotImplementedError("To be provided by subclasses") + + def sign(self, msg_hash): + """Compute the DSA/ECDSA signature of a message. + + Args: + msg_hash (hash object): + The hash that was carried out over the message. + The object belongs to the :mod:`Crypto.Hash` package. + Under mode ``'fips-186-3'``, the hash must be a FIPS + approved secure hash (SHA-2 or SHA-3). + + :return: The signature as ``bytes`` + :raise ValueError: if the hash algorithm is incompatible to the (EC)DSA key + :raise TypeError: if the (EC)DSA key has no private half + """ + + if not self._key.has_private(): + raise TypeError("Private key is needed to sign") + + if not self._valid_hash(msg_hash): + raise ValueError("Hash is not sufficiently strong") + + # Generate the nonce k (critical!) + nonce = self._compute_nonce(msg_hash) + + # Perform signature using the raw API + z = Integer.from_bytes(msg_hash.digest()[:self._order_bytes]) + sig_pair = self._key._sign(z, nonce) + + # Encode the signature into a single byte string + if self._encoding == 'binary': + output = b"".join([long_to_bytes(x, self._order_bytes) + for x in sig_pair]) + else: + # Dss-sig ::= SEQUENCE { + # r INTEGER, + # s INTEGER + # } + # Ecdsa-Sig-Value ::= SEQUENCE { + # r INTEGER, + # s INTEGER + # } + output = DerSequence(sig_pair).encode() + + return output + + def verify(self, msg_hash, signature): + """Check if a certain (EC)DSA signature is authentic. + + Args: + msg_hash (hash object): + The hash that was carried out over the message. + This is an object belonging to the :mod:`Crypto.Hash` module. + Under mode ``'fips-186-3'``, the hash must be a FIPS + approved secure hash (SHA-2 or SHA-3). + + signature (``bytes``): + The signature that needs to be validated. 
+ + :raise ValueError: if the signature is not authentic + """ + + if not self._valid_hash(msg_hash): + raise ValueError("Hash is not sufficiently strong") + + if self._encoding == 'binary': + if len(signature) != (2 * self._order_bytes): + raise ValueError("The signature is not authentic (length)") + r_prime, s_prime = [Integer.from_bytes(x) + for x in (signature[:self._order_bytes], + signature[self._order_bytes:])] + else: + try: + der_seq = DerSequence().decode(signature, strict=True) + except (ValueError, IndexError): + raise ValueError("The signature is not authentic (DER)") + if len(der_seq) != 2 or not der_seq.hasOnlyInts(): + raise ValueError("The signature is not authentic (DER content)") + r_prime, s_prime = Integer(der_seq[0]), Integer(der_seq[1]) + + if not (0 < r_prime < self._order) or not (0 < s_prime < self._order): + raise ValueError("The signature is not authentic (d)") + + z = Integer.from_bytes(msg_hash.digest()[:self._order_bytes]) + result = self._key._verify(z, (r_prime, s_prime)) + if not result: + raise ValueError("The signature is not authentic") + # Make PyCrypto code to fail + return False + + +class DeterministicDsaSigScheme(DssSigScheme): + # Also applicable to ECDSA + + def __init__(self, key, encoding, order, private_key): + super(DeterministicDsaSigScheme, self).__init__(key, encoding, order) + self._private_key = private_key + + def _bits2int(self, bstr): + """See 2.3.2 in RFC6979""" + + result = Integer.from_bytes(bstr) + q_len = self._order.size_in_bits() + b_len = len(bstr) * 8 + if b_len > q_len: + # Only keep leftmost q_len bits + result >>= (b_len - q_len) + return result + + def _int2octets(self, int_mod_q): + """See 2.3.3 in RFC6979""" + + assert 0 < int_mod_q < self._order + return long_to_bytes(int_mod_q, self._order_bytes) + + def _bits2octets(self, bstr): + """See 2.3.4 in RFC6979""" + + z1 = self._bits2int(bstr) + if z1 < self._order: + z2 = z1 + else: + z2 = z1 - self._order + return self._int2octets(z2) + + def _compute_nonce(self, mhash): + """Generate k in a deterministic way""" + + # See section 3.2 in RFC6979.txt + # Step a + h1 = mhash.digest() + # Step b + mask_v = b'\x01' * mhash.digest_size + # Step c + nonce_k = b'\x00' * mhash.digest_size + + for int_oct in (b'\x00', b'\x01'): + # Step d/f + nonce_k = HMAC.new(nonce_k, + mask_v + int_oct + + self._int2octets(self._private_key) + + self._bits2octets(h1), mhash).digest() + # Step e/g + mask_v = HMAC.new(nonce_k, mask_v, mhash).digest() + + nonce = -1 + while not (0 < nonce < self._order): + # Step h.C (second part) + if nonce != -1: + nonce_k = HMAC.new(nonce_k, mask_v + b'\x00', + mhash).digest() + mask_v = HMAC.new(nonce_k, mask_v, mhash).digest() + + # Step h.A + mask_t = b"" + + # Step h.B + while len(mask_t) < self._order_bytes: + mask_v = HMAC.new(nonce_k, mask_v, mhash).digest() + mask_t += mask_v + + # Step h.C (first part) + nonce = self._bits2int(mask_t) + return nonce + + def _valid_hash(self, msg_hash): + return True + + +class FipsDsaSigScheme(DssSigScheme): + + #: List of L (bit length of p) and N (bit length of q) combinations + #: that are allowed by FIPS 186-3. The security level is provided in + #: Table 2 of FIPS 800-57 (rev3). 
+ _fips_186_3_L_N = ( + (1024, 160), # 80 bits (SHA-1 or stronger) + (2048, 224), # 112 bits (SHA-224 or stronger) + (2048, 256), # 128 bits (SHA-256 or stronger) + (3072, 256) # 256 bits (SHA-512) + ) + + def __init__(self, key, encoding, order, randfunc): + super(FipsDsaSigScheme, self).__init__(key, encoding, order) + self._randfunc = randfunc + + L = Integer(key.p).size_in_bits() + if (L, self._order_bits) not in self._fips_186_3_L_N: + error = ("L/N (%d, %d) is not compliant to FIPS 186-3" + % (L, self._order_bits)) + raise ValueError(error) + + def _compute_nonce(self, msg_hash): + # hash is not used + return Integer.random_range(min_inclusive=1, + max_exclusive=self._order, + randfunc=self._randfunc) + + def _valid_hash(self, msg_hash): + """Verify that SHA-1, SHA-2 or SHA-3 are used""" + return (msg_hash.oid == "1.3.14.3.2.26" or + msg_hash.oid.startswith("2.16.840.1.101.3.4.2.")) + + +class FipsEcDsaSigScheme(DssSigScheme): + + def __init__(self, key, encoding, order, randfunc): + super(FipsEcDsaSigScheme, self).__init__(key, encoding, order) + self._randfunc = randfunc + + def _compute_nonce(self, msg_hash): + return Integer.random_range(min_inclusive=1, + max_exclusive=self._key._curve.order, + randfunc=self._randfunc) + + def _valid_hash(self, msg_hash): + """Verify that the strength of the hash matches or exceeds + the strength of the EC. We fail if the hash is too weak.""" + + modulus_bits = self._key.pointQ.size_in_bits() + + # SHS: SHA-2, SHA-3, truncated SHA-512 + sha224 = ("2.16.840.1.101.3.4.2.4", "2.16.840.1.101.3.4.2.7", "2.16.840.1.101.3.4.2.5") + sha256 = ("2.16.840.1.101.3.4.2.1", "2.16.840.1.101.3.4.2.8", "2.16.840.1.101.3.4.2.6") + sha384 = ("2.16.840.1.101.3.4.2.2", "2.16.840.1.101.3.4.2.9") + sha512 = ("2.16.840.1.101.3.4.2.3", "2.16.840.1.101.3.4.2.10") + shs = sha224 + sha256 + sha384 + sha512 + + try: + result = msg_hash.oid in shs + except AttributeError: + result = False + return result + + +def new(key, mode, encoding='binary', randfunc=None): + """Create a signature object :class:`DssSigScheme` that + can perform (EC)DSA signature or verification. + + .. note:: + Refer to `NIST SP 800 Part 1 Rev 4`_ (or newer release) for an + overview of the recommended key lengths. + + Args: + key (:class:`Crypto.PublicKey.DSA` or :class:`Crypto.PublicKey.ECC`): + The key to use for computing the signature (*private* keys only) + or for verifying one. + For DSA keys, let ``L`` and ``N`` be the bit lengths of the modulus ``p`` + and of ``q``: the pair ``(L,N)`` must appear in the following list, + in compliance to section 4.2 of `FIPS 186-4`_: + + - (1024, 160) *legacy only; do not create new signatures with this* + - (2048, 224) *deprecated; do not create new signatures with this* + - (2048, 256) + - (3072, 256) + + For ECC, only keys over P-224, P-256, P-384, and P-521 are accepted. + + mode (string): + The parameter can take these values: + + - ``'fips-186-3'``. The signature generation is randomized and carried out + according to `FIPS 186-3`_: the nonce ``k`` is taken from the RNG. + - ``'deterministic-rfc6979'``. The signature generation is not + randomized. See RFC6979_. + + encoding (string): + How the signature is encoded. This value determines the output of + :meth:`sign` and the input to :meth:`verify`. + + The following values are accepted: + + - ``'binary'`` (default), the signature is the raw concatenation + of ``r`` and ``s``. It is defined in the IEEE P.1363 standard. + For DSA, the size in bytes of the signature is ``N/4`` bytes + (e.g. 
64 for ``N=256``). + For ECDSA, the signature is always twice the length of a point + coordinate (e.g. 64 bytes for P-256). + + - ``'der'``, the signature is a ASN.1 DER SEQUENCE + with two INTEGERs (``r`` and ``s``). It is defined in RFC3279_. + The size of the signature is variable. + + randfunc (callable): + A function that returns random ``bytes``, of a given length. + If omitted, the internal RNG is used. + Only applicable for the *'fips-186-3'* mode. + + .. _FIPS 186-3: http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf + .. _FIPS 186-4: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf + .. _NIST SP 800 Part 1 Rev 4: http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt1r4.pdf + .. _RFC6979: http://tools.ietf.org/html/rfc6979 + .. _RFC3279: https://tools.ietf.org/html/rfc3279#section-2.2.2 + """ + + # The goal of the 'mode' parameter is to avoid to + # have the current version of the standard as default. + # + # Over time, such version will be superseded by (for instance) + # FIPS 186-4 and it will be odd to have -3 as default. + + if encoding not in ('binary', 'der'): + raise ValueError("Unknown encoding '%s'" % encoding) + + if isinstance(key, EccKey): + order = key._curve.order + private_key_attr = 'd' + if key._curve.name == "ed25519": + raise ValueError("ECC key is not on a NIST P curve") + elif isinstance(key, DsaKey): + order = Integer(key.q) + private_key_attr = 'x' + else: + raise ValueError("Unsupported key type " + str(type(key))) + + if key.has_private(): + private_key = getattr(key, private_key_attr) + else: + private_key = None + + if mode == 'deterministic-rfc6979': + return DeterministicDsaSigScheme(key, encoding, order, private_key) + elif mode == 'fips-186-3': + if isinstance(key, EccKey): + return FipsEcDsaSigScheme(key, encoding, order, randfunc) + else: + return FipsDsaSigScheme(key, encoding, order, randfunc) + else: + raise ValueError("Unknown DSS mode '%s'" % mode) diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/DSS.pyi b/env/lib/python3.8/site-packages/Crypto/Signature/DSS.pyi new file mode 100644 index 00000000..08cad814 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Signature/DSS.pyi @@ -0,0 +1,27 @@ +from typing import Union, Optional, Callable +from typing_extensions import Protocol + +from Crypto.PublicKey.DSA import DsaKey +from Crypto.PublicKey.ECC import EccKey + +class Hash(Protocol): + def digest(self) -> bytes: ... + +__all__ = ['new'] + +class DssSigScheme: + def __init__(self, key: Union[DsaKey, EccKey], encoding: str, order: int) -> None: ... + def can_sign(self) -> bool: ... + def sign(self, msg_hash: Hash) -> bytes: ... + def verify(self, msg_hash: Hash, signature: bytes) -> bool: ... + +class DeterministicDsaSigScheme(DssSigScheme): + def __init__(self, key, encoding, order, private_key) -> None: ... + +class FipsDsaSigScheme(DssSigScheme): + def __init__(self, key: DsaKey, encoding: str, order: int, randfunc: Callable) -> None: ... + +class FipsEcDsaSigScheme(DssSigScheme): + def __init__(self, key: EccKey, encoding: str, order: int, randfunc: Callable) -> None: ... + +def new(key: Union[DsaKey, EccKey], mode: str, encoding: Optional[str]='binary', randfunc: Optional[Callable]=None) -> Union[DeterministicDsaSigScheme, FipsDsaSigScheme, FipsEcDsaSigScheme]: ... 
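The `DSS` module above exposes a single `new()` factory for both DSA and ECDSA. A minimal usage sketch follows, assuming a standard pycryptodome installation; the curve, mode, and message are illustrative choices, not anything mandated by the module:

```python
from Crypto.Hash import SHA256
from Crypto.PublicKey import ECC
from Crypto.Signature import DSS

# Generate an ECDSA key on NIST P-256 and hash the message to sign.
key = ECC.generate(curve='P-256')
msg_hash = SHA256.new(b'message to sign')

# Deterministic ECDSA (RFC 6979): the nonce k is derived from the key
# and the message, so no randfunc is needed at signing time.
signer = DSS.new(key, 'deterministic-rfc6979')
signature = signer.sign(msg_hash)

# verify() raises ValueError on a bad signature rather than returning False.
verifier = DSS.new(key.public_key(), 'deterministic-rfc6979')
try:
    verifier.verify(SHA256.new(b'message to sign'), signature)
    print("The signature is authentic.")
except ValueError:
    print("The signature is not authentic.")
```

With `mode='fips-186-3'` the nonce is instead drawn from `randfunc` (or the internal RNG when it is omitted), which is why that mode accepts the extra parameter.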
diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_PSS.py b/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_PSS.py new file mode 100644 index 00000000..c39d3881 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_PSS.py @@ -0,0 +1,55 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Legacy module for PKCS#1 PSS signatures. + +:undocumented: __package__ +""" + +import types + +from Crypto.Signature import pss + + +def _pycrypto_verify(self, hash_object, signature): + try: + self._verify(hash_object, signature) + except (ValueError, TypeError): + return False + return True + + +def new(rsa_key, mgfunc=None, saltLen=None, randfunc=None): + pkcs1 = pss.new(rsa_key, mask_func=mgfunc, + salt_bytes=saltLen, rand_func=randfunc) + pkcs1._verify = pkcs1.verify + pkcs1.verify = types.MethodType(_pycrypto_verify, pkcs1) + return pkcs1 diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_PSS.pyi b/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_PSS.pyi new file mode 100644 index 00000000..882cc8f7 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_PSS.pyi @@ -0,0 +1,7 @@ +from typing import Optional, Callable + +from Crypto.PublicKey.RSA import RsaKey +from Crypto.Signature.pss import PSS_SigScheme + + +def new(rsa_key: RsaKey, mgfunc: Optional[Callable]=None, saltLen: Optional[int]=None, randfunc: Optional[Callable]=None) -> PSS_SigScheme: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_v1_5.py b/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_v1_5.py new file mode 100644 index 00000000..ac888edb --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_v1_5.py @@ -0,0 +1,53 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. 
Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Legacy module for PKCS#1 v1.5 signatures. + +:undocumented: __package__ +""" + +import types + +from Crypto.Signature import pkcs1_15 + +def _pycrypto_verify(self, hash_object, signature): + try: + self._verify(hash_object, signature) + except (ValueError, TypeError): + return False + return True + +def new(rsa_key): + pkcs1 = pkcs1_15.new(rsa_key) + pkcs1._verify = pkcs1.verify + pkcs1.verify = types.MethodType(_pycrypto_verify, pkcs1) + return pkcs1 + diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_v1_5.pyi b/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_v1_5.pyi new file mode 100644 index 00000000..55b96373 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Signature/PKCS1_v1_5.pyi @@ -0,0 +1,6 @@ +from Crypto.PublicKey.RSA import RsaKey + +from Crypto.Signature.pkcs1_15 import PKCS115_SigScheme + + +def new(rsa_key: RsaKey) -> PKCS115_SigScheme: ... \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/__init__.py b/env/lib/python3.8/site-packages/Crypto/Signature/__init__.py new file mode 100644 index 00000000..11ca64c3 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Signature/__init__.py @@ -0,0 +1,36 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Digital signature protocols + +A collection of standardized protocols to carry out digital signatures. +""" + +__all__ = ['PKCS1_v1_5', 'PKCS1_PSS', 'DSS', 'pkcs1_15', 'pss', 'eddsa'] diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/eddsa.py b/env/lib/python3.8/site-packages/Crypto/Signature/eddsa.py new file mode 100644 index 00000000..97145ca9 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Signature/eddsa.py @@ -0,0 +1,341 @@ +# =================================================================== +# +# Copyright (c) 2022, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Math.Numbers import Integer + +from Crypto.Hash import SHA512, SHAKE256 +from Crypto.Util.py3compat import bchr, is_bytes +from Crypto.PublicKey.ECC import (EccKey, + construct, + _import_ed25519_public_key, + _import_ed448_public_key) + + +def import_public_key(encoded): + """Import an EdDSA ECC public key, when encoded as raw ``bytes`` as described + in RFC8032. + + Args: + encoded (bytes): + The EdDSA public key to import. + It must be 32 bytes for Ed25519, and 57 bytes for Ed448. + + Returns: + :class:`Crypto.PublicKey.EccKey` : a new ECC key object. + + Raises: + ValueError: when the given key cannot be parsed. 
+ """ + + if len(encoded) == 32: + x, y = _import_ed25519_public_key(encoded) + curve_name = "Ed25519" + elif len(encoded) == 57: + x, y = _import_ed448_public_key(encoded) + curve_name = "Ed448" + else: + raise ValueError("Not an EdDSA key (%d bytes)" % len(encoded)) + return construct(curve=curve_name, point_x=x, point_y=y) + + +def import_private_key(encoded): + """Import an EdDSA ECC private key, when encoded as raw ``bytes`` as described + in RFC8032. + + Args: + encoded (bytes): + The EdDSA private key to import. + It must be 32 bytes for Ed25519, and 57 bytes for Ed448. + + Returns: + :class:`Crypto.PublicKey.EccKey` : a new ECC key object. + + Raises: + ValueError: when the given key cannot be parsed. + """ + + if len(encoded) == 32: + curve_name = "ed25519" + elif len(encoded) == 57: + curve_name = "ed448" + else: + raise ValueError("Incorrect length. Only EdDSA private keys are supported.") + + # Note that the private key is truly a sequence of random bytes, + # so we cannot check its correctness in any way. + + return construct(seed=encoded, curve=curve_name) + + +class EdDSASigScheme(object): + """An EdDSA signature object. + Do not instantiate directly. + Use :func:`Crypto.Signature.eddsa.new`. + """ + + def __init__(self, key, context): + """Create a new EdDSA object. + + Do not instantiate this object directly, + use `Crypto.Signature.DSS.new` instead. + """ + + self._key = key + self._context = context + self._A = key._export_eddsa() + self._order = key._curve.order + + def can_sign(self): + """Return ``True`` if this signature object can be used + for signing messages.""" + + return self._key.has_private() + + def sign(self, msg_or_hash): + """Compute the EdDSA signature of a message. + + Args: + msg_or_hash (bytes or a hash object): + The message to sign (``bytes``, in case of *PureEdDSA*) or + the hash that was carried out over the message (hash object, for *HashEdDSA*). + + The hash object must be :class:`Crypto.Hash.SHA512` for Ed25519, + and :class:`Crypto.Hash.SHAKE256` object for Ed448. + + :return: The signature as ``bytes``. It is always 64 bytes for Ed25519, and 114 bytes for Ed448. 
+ :raise TypeError: if the EdDSA key has no private half + """ + + if not self._key.has_private(): + raise TypeError("Private key is needed to sign") + + if self._key._curve.name == "ed25519": + ph = isinstance(msg_or_hash, SHA512.SHA512Hash) + if not (ph or is_bytes(msg_or_hash)): + raise TypeError("'msg_or_hash' must be bytes of a SHA-512 hash") + eddsa_sign_method = self._sign_ed25519 + + elif self._key._curve.name == "ed448": + ph = isinstance(msg_or_hash, SHAKE256.SHAKE256_XOF) + if not (ph or is_bytes(msg_or_hash)): + raise TypeError("'msg_or_hash' must be bytes of a SHAKE256 hash") + eddsa_sign_method = self._sign_ed448 + + else: + raise ValueError("Incorrect curve for EdDSA") + + return eddsa_sign_method(msg_or_hash, ph) + + def _sign_ed25519(self, msg_or_hash, ph): + + if self._context or ph: + flag = int(ph) + # dom2(flag, self._context) + dom2 = b'SigEd25519 no Ed25519 collisions' + bchr(flag) + \ + bchr(len(self._context)) + self._context + else: + dom2 = b'' + + PHM = msg_or_hash.digest() if ph else msg_or_hash + + # See RFC 8032, section 5.1.6 + + # Step 2 + r_hash = SHA512.new(dom2 + self._key._prefix + PHM).digest() + r = Integer.from_bytes(r_hash, 'little') % self._order + # Step 3 + R_pk = EccKey(point=r * self._key._curve.G)._export_eddsa() + # Step 4 + k_hash = SHA512.new(dom2 + R_pk + self._A + PHM).digest() + k = Integer.from_bytes(k_hash, 'little') % self._order + # Step 5 + s = (r + k * self._key.d) % self._order + + return R_pk + s.to_bytes(32, 'little') + + def _sign_ed448(self, msg_or_hash, ph): + + flag = int(ph) + # dom4(flag, self._context) + dom4 = b'SigEd448' + bchr(flag) + \ + bchr(len(self._context)) + self._context + + PHM = msg_or_hash.read(64) if ph else msg_or_hash + + # See RFC 8032, section 5.2.6 + + # Step 2 + r_hash = SHAKE256.new(dom4 + self._key._prefix + PHM).read(114) + r = Integer.from_bytes(r_hash, 'little') % self._order + # Step 3 + R_pk = EccKey(point=r * self._key._curve.G)._export_eddsa() + # Step 4 + k_hash = SHAKE256.new(dom4 + R_pk + self._A + PHM).read(114) + k = Integer.from_bytes(k_hash, 'little') % self._order + # Step 5 + s = (r + k * self._key.d) % self._order + + return R_pk + s.to_bytes(57, 'little') + + def verify(self, msg_or_hash, signature): + """Check if an EdDSA signature is authentic. + + Args: + msg_or_hash (bytes or a hash object): + The message to verify (``bytes``, in case of *PureEdDSA*) or + the hash that was carried out over the message (hash object, for *HashEdDSA*). + + The hash object must be :class:`Crypto.Hash.SHA512` object for Ed25519, + and :class:`Crypto.Hash.SHAKE256` for Ed448. + + signature (``bytes``): + The signature that needs to be validated. + It must be 64 bytes for Ed25519, and 114 bytes for Ed448. 
+ + :raise ValueError: if the signature is not authentic + """ + + if self._key._curve.name == "ed25519": + ph = isinstance(msg_or_hash, SHA512.SHA512Hash) + if not (ph or is_bytes(msg_or_hash)): + raise TypeError("'msg_or_hash' must be bytes of a SHA-512 hash") + eddsa_verify_method = self._verify_ed25519 + + elif self._key._curve.name == "ed448": + ph = isinstance(msg_or_hash, SHAKE256.SHAKE256_XOF) + if not (ph or is_bytes(msg_or_hash)): + raise TypeError("'msg_or_hash' must be bytes of a SHAKE256 hash") + eddsa_verify_method = self._verify_ed448 + + else: + raise ValueError("Incorrect curve for EdDSA") + + return eddsa_verify_method(msg_or_hash, signature, ph) + + def _verify_ed25519(self, msg_or_hash, signature, ph): + + if len(signature) != 64: + raise ValueError("The signature is not authentic (length)") + + if self._context or ph: + flag = int(ph) + dom2 = b'SigEd25519 no Ed25519 collisions' + bchr(flag) + \ + bchr(len(self._context)) + self._context + else: + dom2 = b'' + + PHM = msg_or_hash.digest() if ph else msg_or_hash + + # Section 5.1.7 + + # Step 1 + try: + R = import_public_key(signature[:32]).pointQ + except ValueError: + raise ValueError("The signature is not authentic (R)") + s = Integer.from_bytes(signature[32:], 'little') + if s > self._order: + raise ValueError("The signature is not authentic (S)") + # Step 2 + k_hash = SHA512.new(dom2 + signature[:32] + self._A + PHM).digest() + k = Integer.from_bytes(k_hash, 'little') % self._order + # Step 3 + point1 = s * 8 * self._key._curve.G + # OPTIMIZE: with double-scalar multiplication, with no SCA + # countermeasures because it is public values + point2 = 8 * R + k * 8 * self._key.pointQ + if point1 != point2: + raise ValueError("The signature is not authentic") + + def _verify_ed448(self, msg_or_hash, signature, ph): + + if len(signature) != 114: + raise ValueError("The signature is not authentic (length)") + + flag = int(ph) + # dom4(flag, self._context) + dom4 = b'SigEd448' + bchr(flag) + \ + bchr(len(self._context)) + self._context + + PHM = msg_or_hash.read(64) if ph else msg_or_hash + + # Section 5.2.7 + + # Step 1 + try: + R = import_public_key(signature[:57]).pointQ + except ValueError: + raise ValueError("The signature is not authentic (R)") + s = Integer.from_bytes(signature[57:], 'little') + if s > self._order: + raise ValueError("The signature is not authentic (S)") + # Step 2 + k_hash = SHAKE256.new(dom4 + signature[:57] + self._A + PHM).read(114) + k = Integer.from_bytes(k_hash, 'little') % self._order + # Step 3 + point1 = s * 8 * self._key._curve.G + # OPTIMIZE: with double-scalar multiplication, with no SCA + # countermeasures because it is public values + point2 = 8 * R + k * 8 * self._key.pointQ + if point1 != point2: + raise ValueError("The signature is not authentic") + + +def new(key, mode, context=None): + """Create a signature object :class:`EdDSASigScheme` that + can perform or verify an EdDSA signature. + + Args: + key (:class:`Crypto.PublicKey.ECC` object: + The key to use for computing the signature (*private* keys only) + or for verifying one. + The key must be on the curve ``Ed25519`` or ``Ed448``. + + mode (string): + This parameter must be ``'rfc8032'``. + + context (bytes): + Up to 255 bytes of `context `_, + which is a constant byte string to segregate different protocols or + different applications of the same key. 
+ """ + + if not isinstance(key, EccKey) or not key._is_eddsa(): + raise ValueError("EdDSA can only be used with EdDSA keys") + + if mode != 'rfc8032': + raise ValueError("Mode must be 'rfc8032'") + + if context is None: + context = b'' + elif len(context) > 255: + raise ValueError("Context for EdDSA must not be longer than 255 bytes") + + return EdDSASigScheme(key, context) diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/eddsa.pyi b/env/lib/python3.8/site-packages/Crypto/Signature/eddsa.pyi new file mode 100644 index 00000000..3842ff78 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Signature/eddsa.pyi @@ -0,0 +1,21 @@ +from typing import Union, Optional +from typing_extensions import Protocol +from Crypto.PublicKey.ECC import EccKey + +class Hash(Protocol): + def digest(self) -> bytes: ... + +class XOF(Protocol): + def read(self, len: int) -> bytes: ... + +def import_public_key(encoded: bytes) -> EccKey: ... +def import_private_key(encoded: bytes) -> EccKey: ... + +class EdDSASigScheme(object): + + def __init__(self, key: EccKey, context: bytes) -> None: ... + def can_sign(self) -> bool: ... + def sign(self, msg_or_hash: Union[bytes, Hash, XOF]) -> bytes: ... + def verify(self, msg_or_hash: Union[bytes, Hash, XOF], signature: bytes) -> None: ... + +def new(key: EccKey, mode: bytes, context: Optional[bytes]=None) -> EdDSASigScheme: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/pkcs1_15.py b/env/lib/python3.8/site-packages/Crypto/Signature/pkcs1_15.py new file mode 100644 index 00000000..54a4bf7c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Signature/pkcs1_15.py @@ -0,0 +1,222 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import Crypto.Util.number +from Crypto.Util.number import ceil_div, bytes_to_long, long_to_bytes +from Crypto.Util.asn1 import DerSequence, DerNull, DerOctetString, DerObjectId + +class PKCS115_SigScheme: + """A signature object for ``RSASSA-PKCS1-v1_5``. + Do not instantiate directly. + Use :func:`Crypto.Signature.pkcs1_15.new`. 
+ """ + + def __init__(self, rsa_key): + """Initialize this PKCS#1 v1.5 signature scheme object. + + :Parameters: + rsa_key : an RSA key object + Creation of signatures is only possible if this is a *private* + RSA key. Verification of signatures is always possible. + """ + self._key = rsa_key + + def can_sign(self): + """Return ``True`` if this object can be used to sign messages.""" + return self._key.has_private() + + def sign(self, msg_hash): + """Create the PKCS#1 v1.5 signature of a message. + + This function is also called ``RSASSA-PKCS1-V1_5-SIGN`` and + it is specified in + `section 8.2.1 of RFC8017 `_. + + :parameter msg_hash: + This is an object from the :mod:`Crypto.Hash` package. + It has been used to digest the message to sign. + :type msg_hash: hash object + + :return: the signature encoded as a *byte string*. + :raise ValueError: if the RSA key is not long enough for the given hash algorithm. + :raise TypeError: if the RSA key has no private half. + """ + + # See 8.2.1 in RFC3447 + modBits = Crypto.Util.number.size(self._key.n) + k = ceil_div(modBits,8) # Convert from bits to bytes + + # Step 1 + em = _EMSA_PKCS1_V1_5_ENCODE(msg_hash, k) + # Step 2a (OS2IP) + em_int = bytes_to_long(em) + # Step 2b (RSASP1) + m_int = self._key._decrypt(em_int) + # Step 2c (I2OSP) + signature = long_to_bytes(m_int, k) + return signature + + def verify(self, msg_hash, signature): + """Check if the PKCS#1 v1.5 signature over a message is valid. + + This function is also called ``RSASSA-PKCS1-V1_5-VERIFY`` and + it is specified in + `section 8.2.2 of RFC8037 `_. + + :parameter msg_hash: + The hash that was carried out over the message. This is an object + belonging to the :mod:`Crypto.Hash` module. + :type parameter: hash object + + :parameter signature: + The signature that needs to be validated. + :type signature: byte string + + :raise ValueError: if the signature is not valid. + """ + + # See 8.2.2 in RFC3447 + modBits = Crypto.Util.number.size(self._key.n) + k = ceil_div(modBits, 8) # Convert from bits to bytes + + # Step 1 + if len(signature) != k: + raise ValueError("Invalid signature") + # Step 2a (O2SIP) + signature_int = bytes_to_long(signature) + # Step 2b (RSAVP1) + em_int = self._key._encrypt(signature_int) + # Step 2c (I2OSP) + em1 = long_to_bytes(em_int, k) + # Step 3 + try: + possible_em1 = [ _EMSA_PKCS1_V1_5_ENCODE(msg_hash, k, True) ] + # MD2/4/5 hashes always require NULL params in AlgorithmIdentifier. + # For all others, it is optional. + try: + algorithm_is_md = msg_hash.oid.startswith('1.2.840.113549.2.') + except AttributeError: + algorithm_is_md = False + if not algorithm_is_md: # MD2/MD4/MD5 + possible_em1.append(_EMSA_PKCS1_V1_5_ENCODE(msg_hash, k, False)) + except ValueError: + raise ValueError("Invalid signature") + # Step 4 + # By comparing the full encodings (as opposed to checking each + # of its components one at a time) we avoid attacks to the padding + # scheme like Bleichenbacher's (see http://www.mail-archive.com/cryptography@metzdowd.com/msg06537). + # + if em1 not in possible_em1: + raise ValueError("Invalid signature") + pass + + +def _EMSA_PKCS1_V1_5_ENCODE(msg_hash, emLen, with_hash_parameters=True): + """ + Implement the ``EMSA-PKCS1-V1_5-ENCODE`` function, as defined + in PKCS#1 v2.1 (RFC3447, 9.2). + + ``_EMSA-PKCS1-V1_5-ENCODE`` actually accepts the message ``M`` as input, + and hash it internally. Here, we expect that the message has already + been hashed instead. 
+
+    :Parameters:
+      msg_hash : hash object
+        The hash object that holds the digest of the message being signed.
+      emLen : int
+        The length the final encoding must have, in bytes.
+      with_hash_parameters : bool
+        If True (default), include NULL parameters for the hash
+        algorithm in the ``digestAlgorithm`` SEQUENCE.
+
+    :attention: the early standard (RFC2313) stated that ``DigestInfo``
+        had to be BER-encoded. This means that old signatures
+        might have length tags in indefinite form, which
+        is not supported in DER. Such encoding cannot be
+        reproduced by this function.
+
+    :Return: An ``emLen`` byte long string that encodes the hash.
+    """
+
+    # First, build the ASN.1 DER object DigestInfo:
+    #
+    #   DigestInfo ::= SEQUENCE {
+    #       digestAlgorithm AlgorithmIdentifier,
+    #       digest OCTET STRING
+    #   }
+    #
+    # where digestAlgorithm identifies the hash function and shall be an
+    # algorithm ID with an OID in the set PKCS1-v1-5DigestAlgorithms.
+    #
+    #   PKCS1-v1-5DigestAlgorithms    ALGORITHM-IDENTIFIER ::= {
+    #       { OID id-md2      PARAMETERS NULL }|
+    #       { OID id-md5      PARAMETERS NULL }|
+    #       { OID id-sha1     PARAMETERS NULL }|
+    #       { OID id-sha256   PARAMETERS NULL }|
+    #       { OID id-sha384   PARAMETERS NULL }|
+    #       { OID id-sha512   PARAMETERS NULL }
+    #   }
+    #
+    # Appendix B.1 also says that for SHA-1/-2 algorithms, the parameters
+    # should be omitted. They may be present, but when they are, they shall
+    # have NULL value.
+
+    digestAlgo = DerSequence([ DerObjectId(msg_hash.oid).encode() ])
+
+    if with_hash_parameters:
+        digestAlgo.append(DerNull().encode())
+
+    digest = DerOctetString(msg_hash.digest())
+    digestInfo = DerSequence([
+                    digestAlgo.encode(),
+                    digest.encode()
+                ]).encode()
+
+    # We need at least 11 bytes for the remaining data: 3 fixed bytes and
+    # at least 8 bytes of padding.
+    if emLen < len(digestInfo) + 11:
+        raise TypeError("Selected hash algorithm has a too long digest.")
+
+    # Step 2
+    PS = b'\xFF' * (emLen - len(digestInfo) - 3)
+    # Step 5
+    return b'\x00\x01' + PS + b'\x00' + digestInfo
+
+
+def new(rsa_key):
+    """Create a signature object for creating or verifying PKCS#1 v1.5 signatures.
+
+    :parameter rsa_key:
+        The RSA key to use for signing or verifying the message.
+        This is a :class:`Crypto.PublicKey.RSA` object.
+        Signing is only possible when ``rsa_key`` is a **private** RSA key.
+    :type rsa_key: RSA object
+
+    :return: a :class:`PKCS115_SigScheme` signature object
+    """
+    return PKCS115_SigScheme(rsa_key)
diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/pkcs1_15.pyi b/env/lib/python3.8/site-packages/Crypto/Signature/pkcs1_15.pyi
new file mode 100644
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Signature/pkcs1_15.pyi
@@ -0,0 +1,16 @@
+from typing import Optional
+from typing_extensions import Protocol
+from Crypto.PublicKey.RSA import RsaKey
+
+class Hash(Protocol):
+    def digest(self) -> bytes: ...
+
+class PKCS115_SigScheme:
+    def __init__(self, rsa_key: RsaKey) -> None: ...
+    def can_sign(self) -> bool: ...
+    def sign(self, msg_hash: Hash) -> bytes: ...
+    def verify(self, msg_hash: Hash, signature: bytes) -> None: ...
+
+def _EMSA_PKCS1_V1_5_ENCODE(msg_hash: Hash, emLen: int, with_hash_parameters: Optional[bool]=True) -> bytes: ...
+
+def new(rsa_key: RsaKey) -> PKCS115_SigScheme: ...
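A usage sketch for the scheme above (editorial, not part of the vendored file; key size and message are arbitrary): hash first, then sign with the private key and verify with the public one.

```python
from Crypto.PublicKey import RSA
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA256

key = RSA.generate(2048)
h = SHA256.new(b'To be signed')

signature = pkcs1_15.new(key).sign(h)       # signing needs the private key

try:
    pkcs1_15.new(key.publickey()).verify(h, signature)
    print("The signature is valid.")
except ValueError:
    print("The signature is not valid.")
```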
diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/pss.py b/env/lib/python3.8/site-packages/Crypto/Signature/pss.py
new file mode 100644
index 00000000..5f34ace7
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Signature/pss.py
@@ -0,0 +1,386 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+from Crypto.Util.py3compat import bchr, bord, iter_range
+import Crypto.Util.number
+from Crypto.Util.number import (ceil_div,
+                                long_to_bytes,
+                                bytes_to_long
+                                )
+from Crypto.Util.strxor import strxor
+from Crypto import Random
+
+
+class PSS_SigScheme:
+    """A signature object for ``RSASSA-PSS``.
+    Do not instantiate directly.
+    Use :func:`Crypto.Signature.pss.new`.
+    """
+
+    def __init__(self, key, mgfunc, saltLen, randfunc):
+        """Initialize this PKCS#1 PSS signature scheme object.
+
+        :Parameters:
+          key : an RSA key object
+            If a private half is given, both signature and
+            verification are possible.
+            If a public half is given, only verification is possible.
+          mgfunc : callable
+            A mask generation function that accepts two parameters:
+            a string to use as seed, and the length of the mask to
+            generate, in bytes.
+          saltLen : integer
+            Length of the salt, in bytes.
+          randfunc : callable
+            A function that returns random bytes.
+        """
+
+        self._key = key
+        self._saltLen = saltLen
+        self._mgfunc = mgfunc
+        self._randfunc = randfunc
+
+    def can_sign(self):
+        """Return ``True`` if this object can be used to sign messages."""
+        return self._key.has_private()
+
+    def sign(self, msg_hash):
+        """Create the PKCS#1 PSS signature of a message.
+
+        This function is also called ``RSASSA-PSS-SIGN`` and
+        it is specified in
+        `section 8.1.1 of RFC8017 <https://tools.ietf.org/html/rfc8017>`_.
+
+        :parameter msg_hash:
+            This is an object from the :mod:`Crypto.Hash` package.
+            It has been used to digest the message to sign.
+        :type msg_hash: hash object
+
+        :return: the signature encoded as a *byte string*.
+        :raise ValueError: if the RSA key is not long enough for the given hash algorithm.
+        :raise TypeError: if the RSA key has no private half.
+        """
+
+        # Set defaults for salt length and mask generation function
+        if self._saltLen is None:
+            sLen = msg_hash.digest_size
+        else:
+            sLen = self._saltLen
+
+        if self._mgfunc is None:
+            mgf = lambda x, y: MGF1(x, y, msg_hash)
+        else:
+            mgf = self._mgfunc
+
+        modBits = Crypto.Util.number.size(self._key.n)
+
+        # See 8.1.1 in RFC3447
+        k = ceil_div(modBits, 8)  # k is length in bytes of the modulus
+        # Step 1
+        em = _EMSA_PSS_ENCODE(msg_hash, modBits-1, self._randfunc, mgf, sLen)
+        # Step 2a (OS2IP)
+        em_int = bytes_to_long(em)
+        # Step 2b (RSASP1)
+        m_int = self._key._decrypt(em_int)
+        # Step 2c (I2OSP)
+        signature = long_to_bytes(m_int, k)
+        return signature
+
+    def verify(self, msg_hash, signature):
+        """Check if the PKCS#1 PSS signature over a message is valid.
+
+        This function is also called ``RSASSA-PSS-VERIFY`` and
+        it is specified in
+        `section 8.1.2 of RFC8017 <https://tools.ietf.org/html/rfc8017>`_.
+
+        :parameter msg_hash:
+            The hash that was carried out over the message. This is an object
+            belonging to the :mod:`Crypto.Hash` module.
+        :type msg_hash: hash object
+
+        :parameter signature:
+            The signature that needs to be validated.
+        :type signature: bytes
+
+        :raise ValueError: if the signature is not valid.
+ """ + + # Set defaults for salt length and mask generation function + if self._saltLen is None: + sLen = msg_hash.digest_size + else: + sLen = self._saltLen + if self._mgfunc: + mgf = self._mgfunc + else: + mgf = lambda x, y: MGF1(x, y, msg_hash) + + modBits = Crypto.Util.number.size(self._key.n) + + # See 8.1.2 in RFC3447 + k = ceil_div(modBits, 8) # Convert from bits to bytes + # Step 1 + if len(signature) != k: + raise ValueError("Incorrect signature") + # Step 2a (O2SIP) + signature_int = bytes_to_long(signature) + # Step 2b (RSAVP1) + em_int = self._key._encrypt(signature_int) + # Step 2c (I2OSP) + emLen = ceil_div(modBits - 1, 8) + em = long_to_bytes(em_int, emLen) + # Step 3/4 + _EMSA_PSS_VERIFY(msg_hash, em, modBits-1, mgf, sLen) + + +def MGF1(mgfSeed, maskLen, hash_gen): + """Mask Generation Function, described in `B.2.1 of RFC8017 + `_. + + :param mfgSeed: + seed from which the mask is generated + :type mfgSeed: byte string + + :param maskLen: + intended length in bytes of the mask + :type maskLen: integer + + :param hash_gen: + A module or a hash object from :mod:`Crypto.Hash` + :type hash_object: + + :return: the mask, as a *byte string* + """ + + T = b"" + for counter in iter_range(ceil_div(maskLen, hash_gen.digest_size)): + c = long_to_bytes(counter, 4) + hobj = hash_gen.new() + hobj.update(mgfSeed + c) + T = T + hobj.digest() + assert(len(T) >= maskLen) + return T[:maskLen] + + +def _EMSA_PSS_ENCODE(mhash, emBits, randFunc, mgf, sLen): + r""" + Implement the ``EMSA-PSS-ENCODE`` function, as defined + in PKCS#1 v2.1 (RFC3447, 9.1.1). + + The original ``EMSA-PSS-ENCODE`` actually accepts the message ``M`` + as input, and hash it internally. Here, we expect that the message + has already been hashed instead. + + :Parameters: + mhash : hash object + The hash object that holds the digest of the message being signed. + emBits : int + Maximum length of the final encoding, in bits. + randFunc : callable + An RNG function that accepts as only parameter an int, and returns + a string of random bytes, to be used as salt. + mgf : callable + A mask generation function that accepts two parameters: a string to + use as seed, and the lenth of the mask to generate, in bytes. + sLen : int + Length of the salt, in bytes. + + :Return: An ``emLen`` byte long string that encodes the hash + (with ``emLen = \ceil(emBits/8)``). + + :Raise ValueError: + When digest or salt length are too big. + """ + + emLen = ceil_div(emBits, 8) + + # Bitmask of digits that fill up + lmask = 0 + for i in iter_range(8*emLen-emBits): + lmask = lmask >> 1 | 0x80 + + # Step 1 and 2 have been already done + # Step 3 + if emLen < mhash.digest_size+sLen+2: + raise ValueError("Digest or salt length are too long" + " for given key size.") + # Step 4 + salt = randFunc(sLen) + # Step 5 + m_prime = bchr(0)*8 + mhash.digest() + salt + # Step 6 + h = mhash.new() + h.update(m_prime) + # Step 7 + ps = bchr(0)*(emLen-sLen-mhash.digest_size-2) + # Step 8 + db = ps + bchr(1) + salt + # Step 9 + dbMask = mgf(h.digest(), emLen-mhash.digest_size-1) + # Step 10 + maskedDB = strxor(db, dbMask) + # Step 11 + maskedDB = bchr(bord(maskedDB[0]) & ~lmask) + maskedDB[1:] + # Step 12 + em = maskedDB + h.digest() + bchr(0xBC) + return em + + +def _EMSA_PSS_VERIFY(mhash, em, emBits, mgf, sLen): + """ + Implement the ``EMSA-PSS-VERIFY`` function, as defined + in PKCS#1 v2.1 (RFC3447, 9.1.2). + + ``EMSA-PSS-VERIFY`` actually accepts the message ``M`` as input, + and hash it internally. 
+
+
+def _EMSA_PSS_VERIFY(mhash, em, emBits, mgf, sLen):
+    """
+    Implement the ``EMSA-PSS-VERIFY`` function, as defined
+    in PKCS#1 v2.1 (RFC3447, 9.1.2).
+
+    ``EMSA-PSS-VERIFY`` actually accepts the message ``M`` as input,
+    and hashes it internally. Here, we expect that the message has already
+    been hashed instead.
+
+    :Parameters:
+      mhash : hash object
+        The hash object that holds the digest of the message to be verified.
+      em : string
+        The signature to verify, therefore proving that the sender really
+        signed the message that was received.
+      emBits : int
+        Length of the final encoding (em), in bits.
+      mgf : callable
+        A mask generation function that accepts two parameters: a string to
+        use as seed, and the length of the mask to generate, in bytes.
+      sLen : int
+        Length of the salt, in bytes.
+
+    :Raise ValueError:
+        When the encoding is inconsistent, or the digest or salt lengths
+        are too big.
+    """
+
+    emLen = ceil_div(emBits, 8)
+
+    # Bitmask of digits that fill up
+    lmask = 0
+    for i in iter_range(8*emLen-emBits):
+        lmask = lmask >> 1 | 0x80
+
+    # Step 1 and 2 have been already done
+    # Step 3
+    if emLen < mhash.digest_size+sLen+2:
+        raise ValueError("Incorrect signature")
+    # Step 4
+    if ord(em[-1:]) != 0xBC:
+        raise ValueError("Incorrect signature")
+    # Step 5
+    maskedDB = em[:emLen-mhash.digest_size-1]
+    h = em[emLen-mhash.digest_size-1:-1]
+    # Step 6
+    if lmask & bord(em[0]):
+        raise ValueError("Incorrect signature")
+    # Step 7
+    dbMask = mgf(h, emLen-mhash.digest_size-1)
+    # Step 8
+    db = strxor(maskedDB, dbMask)
+    # Step 9
+    db = bchr(bord(db[0]) & ~lmask) + db[1:]
+    # Step 10
+    if not db.startswith(bchr(0)*(emLen-mhash.digest_size-sLen-2) + bchr(1)):
+        raise ValueError("Incorrect signature")
+    # Step 11
+    if sLen > 0:
+        salt = db[-sLen:]
+    else:
+        salt = b""
+    # Step 12
+    m_prime = bchr(0)*8 + mhash.digest() + salt
+    # Step 13
+    hobj = mhash.new()
+    hobj.update(m_prime)
+    hp = hobj.digest()
+    # Step 14
+    if h != hp:
+        raise ValueError("Incorrect signature")
+
+
+def new(rsa_key, **kwargs):
+    """Create an object for making or verifying PKCS#1 PSS signatures.
+
+    :parameter rsa_key:
+        The RSA key to use for signing or verifying the message.
+        This is a :class:`Crypto.PublicKey.RSA` object.
+        Signing is only possible when ``rsa_key`` is a **private** RSA key.
+    :type rsa_key: RSA object
+
+    :Keyword Arguments:
+
+        *   *mask_func* (``callable``) --
+            A function that returns the mask (as `bytes`).
+            It must accept two parameters: a seed (as `bytes`)
+            and the length of the data to return.
+
+            If not specified, it will be the function :func:`MGF1` defined in
+            `RFC8017 <https://tools.ietf.org/html/rfc8017>`_ and
+            combined with the same hash algorithm applied to the
+            message to sign or verify.
+
+            If you want to use a different function, for instance still :func:`MGF1`
+            but together with another hash, you can do::
+
+                from Crypto.Hash import SHA256
+                from Crypto.Signature.pss import MGF1
+                mgf = lambda x, y: MGF1(x, y, SHA256)
+
+        *   *salt_bytes* (``integer``) --
+            Length of the salt, in bytes.
+            It is a value between 0 and ``emLen - hLen - 2``, where ``emLen``
+            is the size of the RSA modulus and ``hLen`` is the size of the digest
+            applied to the message to sign or verify.
+
+            The salt is generated internally, you don't need to provide it.
+
+            If not specified, the salt length will be ``hLen``.
+            If it is zero, the signature scheme becomes deterministic.
+
+            Note that in some implementations such as OpenSSL the default
+            salt length is ``emLen - hLen - 2`` (even though it is not more
+            secure than ``hLen``).
+
+        *   *rand_func* (``callable``) --
+            A function that returns random ``bytes``, of the desired length.
+            The default is :func:`Crypto.Random.get_random_bytes`.
+
+    :return: a :class:`PSS_SigScheme` signature object
+    """
+
+    mask_func = kwargs.pop("mask_func", None)
+    salt_len = kwargs.pop("salt_bytes", None)
+    rand_func = kwargs.pop("rand_func", None)
+    if rand_func is None:
+        rand_func = Random.get_random_bytes
+    if kwargs:
+        raise ValueError("Unknown keywords: " + str(kwargs.keys()))
+    return PSS_SigScheme(rsa_key, mask_func, salt_len, rand_func)
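A usage sketch for the factory above (editorial, not part of the vendored file; the key and message are arbitrary). Passing ``salt_bytes=0`` removes the random salt and makes the scheme deterministic, which is handy for reproducible tests:

```python
from Crypto.PublicKey import RSA
from Crypto.Signature import pss
from Crypto.Hash import SHA256

key = RSA.generate(2048)
h = SHA256.new(b'Message for PSS')

signature = pss.new(key).sign(h)              # random salt of length hLen

verifier = pss.new(key.publickey())
verifier.verify(h, signature)                 # raises ValueError if invalid

det_sig = pss.new(key, salt_bytes=0).sign(h)  # deterministic variant
assert det_sig == pss.new(key, salt_bytes=0).sign(h)
```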
diff --git a/env/lib/python3.8/site-packages/Crypto/Signature/pss.pyi b/env/lib/python3.8/site-packages/Crypto/Signature/pss.pyi
new file mode 100644
index 00000000..4d216ca9
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Signature/pss.pyi
@@ -0,0 +1,30 @@
+from typing import Union, Callable, Optional
+from typing_extensions import Protocol
+
+from Crypto.PublicKey.RSA import RsaKey
+
+
+class Hash(Protocol):
+    def digest(self) -> bytes: ...
+    def update(self, bytes) -> None: ...
+
+
+class HashModule(Protocol):
+    @staticmethod
+    def new(data: Optional[bytes]) -> Hash: ...
+
+
+MaskFunction = Callable[[bytes, int, Union[Hash, HashModule]], bytes]
+RndFunction = Callable[[int], bytes]
+
+class PSS_SigScheme:
+    def __init__(self, key: RsaKey, mgfunc: RndFunction, saltLen: int, randfunc: RndFunction) -> None: ...
+    def can_sign(self) -> bool: ...
+    def sign(self, msg_hash: Hash) -> bytes: ...
+    def verify(self, msg_hash: Hash, signature: bytes) -> None: ...
+
+
+MGF1 : MaskFunction
+def _EMSA_PSS_ENCODE(mhash: Hash, emBits: int, randFunc: RndFunction, mgf: MaskFunction, sLen: int) -> str: ...
+def _EMSA_PSS_VERIFY(mhash: Hash, em: str, emBits: int, mgf: MaskFunction, sLen: int) -> None: ...
+def new(rsa_key: RsaKey, **kwargs: Union[MaskFunction, RndFunction, int]) -> PSS_SigScheme: ...
diff --git a/env/lib/python3.8/site-packages/Crypto/Util/Counter.py b/env/lib/python3.8/site-packages/Crypto/Util/Counter.py
new file mode 100644
index 00000000..c67bc95d
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Util/Counter.py
@@ -0,0 +1,77 @@
+# -*- coding: ascii -*-
+#
+# Util/Counter.py : Fast counter for use with CTR-mode ciphers
+#
+# Written in 2008 by Dwayne C. Litzenberger
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+def new(nbits, prefix=b"", suffix=b"", initial_value=1, little_endian=False, allow_wraparound=False):
+    """Create a stateful counter block function suitable for CTR encryption modes.
+
+    Each call to the function returns the next counter block.
+    Each counter block is made up of three parts:
+
+    +------+--------------+-------+
+    |prefix| counter value|postfix|
+    +------+--------------+-------+
+
+    The counter value is incremented by 1 at each call.
+ + Args: + nbits (integer): + Length of the desired counter value, in bits. It must be a multiple of 8. + prefix (byte string): + The constant prefix of the counter block. By default, no prefix is + used. + suffix (byte string): + The constant postfix of the counter block. By default, no suffix is + used. + initial_value (integer): + The initial value of the counter. Default value is 1. + Its length in bits must not exceed the argument ``nbits``. + little_endian (boolean): + If ``True``, the counter number will be encoded in little endian format. + If ``False`` (default), in big endian format. + allow_wraparound (boolean): + This parameter is ignored. + Returns: + An object that can be passed with the :data:`counter` parameter to a CTR mode + cipher. + + It must hold that *len(prefix) + nbits//8 + len(suffix)* matches the + block size of the underlying block cipher. + """ + + if (nbits % 8) != 0: + raise ValueError("'nbits' must be a multiple of 8") + + iv_bl = initial_value.bit_length() + if iv_bl > nbits: + raise ValueError("Initial value takes %d bits but it is longer than " + "the counter (%d bits)" % + (iv_bl, nbits)) + + # Ignore wraparound + return {"counter_len": nbits // 8, + "prefix": prefix, + "suffix": suffix, + "initial_value": initial_value, + "little_endian": little_endian + } diff --git a/env/lib/python3.8/site-packages/Crypto/Util/Counter.pyi b/env/lib/python3.8/site-packages/Crypto/Util/Counter.pyi new file mode 100644 index 00000000..fa2ffdd8 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/Counter.pyi @@ -0,0 +1,5 @@ +from typing import Optional, Union, Dict + +def new(nbits: int, prefix: Optional[bytes]=..., suffix: Optional[bytes]=..., initial_value: Optional[int]=1, + little_endian: Optional[bool]=False, allow_wraparound: Optional[bool]=False) -> \ + Dict[str, Union[int, bytes, bool]]: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Util/Padding.py b/env/lib/python3.8/site-packages/Crypto/Util/Padding.py new file mode 100644 index 00000000..da69e559 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/Padding.py @@ -0,0 +1,108 @@ +# +# Util/Padding.py : Functions to manage padding +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
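Before the padding utilities, a sketch of how the counter factory above is consumed (editorial; the key and nonce are made up). The dict returned by ``Counter.new`` is passed to a CTR-mode cipher via the ``counter=`` parameter, and ``len(prefix) + nbits//8 + len(suffix)`` must equal the cipher's block size:

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util import Counter

key = get_random_bytes(16)
nonce = get_random_bytes(8)          # 8-byte prefix + 64-bit counter = 16-byte AES block
ctr = Counter.new(64, prefix=nonce, initial_value=1)

ciphertext = AES.new(key, AES.MODE_CTR, counter=ctr).encrypt(b'secret payload')

ctr = Counter.new(64, prefix=nonce, initial_value=1)   # fresh counter for decryption
assert AES.new(key, AES.MODE_CTR, counter=ctr).decrypt(ciphertext) == b'secret payload'
```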
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = [ 'pad', 'unpad' ] + +from Crypto.Util.py3compat import * + + +def pad(data_to_pad, block_size, style='pkcs7'): + """Apply standard padding. + + Args: + data_to_pad (byte string): + The data that needs to be padded. + block_size (integer): + The block boundary to use for padding. The output length is guaranteed + to be a multiple of :data:`block_size`. + style (string): + Padding algorithm. It can be *'pkcs7'* (default), *'iso7816'* or *'x923'*. + + Return: + byte string : the original data with the appropriate padding added at the end. + """ + + padding_len = block_size-len(data_to_pad)%block_size + if style == 'pkcs7': + padding = bchr(padding_len)*padding_len + elif style == 'x923': + padding = bchr(0)*(padding_len-1) + bchr(padding_len) + elif style == 'iso7816': + padding = bchr(128) + bchr(0)*(padding_len-1) + else: + raise ValueError("Unknown padding style") + return data_to_pad + padding + + +def unpad(padded_data, block_size, style='pkcs7'): + """Remove standard padding. + + Args: + padded_data (byte string): + A piece of data with padding that needs to be stripped. + block_size (integer): + The block boundary to use for padding. The input length + must be a multiple of :data:`block_size`. + style (string): + Padding algorithm. It can be *'pkcs7'* (default), *'iso7816'* or *'x923'*. + Return: + byte string : data without padding. + Raises: + ValueError: if the padding is incorrect. + """ + + pdata_len = len(padded_data) + if pdata_len == 0: + raise ValueError("Zero-length input cannot be unpadded") + if pdata_len % block_size: + raise ValueError("Input data is not padded") + if style in ('pkcs7', 'x923'): + padding_len = bord(padded_data[-1]) + if padding_len<1 or padding_len>min(block_size, pdata_len): + raise ValueError("Padding is incorrect.") + if style == 'pkcs7': + if padded_data[-padding_len:]!=bchr(padding_len)*padding_len: + raise ValueError("PKCS#7 padding is incorrect.") + else: + if padded_data[-padding_len:-1]!=bchr(0)*(padding_len-1): + raise ValueError("ANSI X.923 padding is incorrect.") + elif style == 'iso7816': + padding_len = pdata_len - padded_data.rfind(bchr(128)) + if padding_len<1 or padding_len>min(block_size, pdata_len): + raise ValueError("Padding is incorrect.") + if padding_len>1 and padded_data[1-padding_len:]!=bchr(0)*(padding_len-1): + raise ValueError("ISO 7816-4 padding is incorrect.") + else: + raise ValueError("Unknown padding style") + return padded_data[:-padding_len] + diff --git a/env/lib/python3.8/site-packages/Crypto/Util/Padding.pyi b/env/lib/python3.8/site-packages/Crypto/Util/Padding.pyi new file mode 100644 index 00000000..4d8d30df --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/Padding.pyi @@ -0,0 +1,6 @@ +from typing import Optional + +__all__ = [ 'pad', 'unpad' ] + +def pad(data_to_pad: bytes, block_size: int, style: Optional[str]='pkcs7') -> bytes: ... 
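A round-trip with ``pad`` above and its inverse ``unpad`` (stubbed just below) makes the contract concrete. An editorial sketch using the AES block size of 16:

```python
from Crypto.Util.Padding import pad, unpad

data = b'\x01\x02\x03'
padded = pad(data, 16)                  # PKCS#7 by default
assert padded == data + b'\x0d' * 13    # 13 padding bytes, each of value 13
assert unpad(padded, 16) == data

# ISO 7816-4: a 0x80 marker followed by zero bytes
assert pad(data, 16, style='iso7816') == data + b'\x80' + b'\x00' * 12
```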
+def unpad(padded_data: bytes, block_size: int, style: Optional[str]='pkcs7') -> bytes: ... \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/Crypto/Util/RFC1751.py b/env/lib/python3.8/site-packages/Crypto/Util/RFC1751.py new file mode 100644 index 00000000..9ed52d2c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/RFC1751.py @@ -0,0 +1,386 @@ +# rfc1751.py : Converts between 128-bit strings and a human-readable +# sequence of words, as defined in RFC1751: "A Convention for +# Human-Readable 128-bit Keys", by Daniel L. McDonald. +# +# Part of the Python Cryptography Toolkit +# +# Written by Andrew M. Kuchling and others +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from __future__ import print_function + +import binascii + +from Crypto.Util.py3compat import bord, bchr + +binary = {0: '0000', 1: '0001', 2: '0010', 3: '0011', 4: '0100', 5: '0101', + 6: '0110', 7: '0111', 8: '1000', 9: '1001', 10: '1010', 11: '1011', + 12: '1100', 13: '1101', 14: '1110', 15: '1111'} + + +def _key2bin(s): + "Convert a key into a string of binary digits" + kl = map(lambda x: bord(x), s) + kl = map(lambda x: binary[x >> 4] + binary[x & 15], kl) + return ''.join(kl) + + +def _extract(key, start, length): + """Extract a bitstring(2.x)/bytestring(2.x) from a string of binary digits, and return its + numeric value.""" + + result = 0 + for y in key[start:start+length]: + result = result * 2 + ord(y) - 48 + return result + + +def key_to_english(key): + """Transform an arbitrary key into a string containing English words. + + Example:: + + >>> from Crypto.Util.RFC1751 import key_to_english + >>> key_to_english(b'66666666') + 'RAM LOIS GOAD CREW CARE HIT' + + Args: + key (byte string): + The key to convert. Its length must be a multiple of 8. + Return: + A string of English words. + """ + + if len(key) % 8 != 0: + raise ValueError('The length of the key must be a multiple of 8.') + + english = '' + for index in range(0, len(key), 8): # Loop over 8-byte subkeys + subkey = key[index:index + 8] + # Compute the parity of the key + skbin = _key2bin(subkey) + p = 0 + for i in range(0, 64, 2): + p = p + _extract(skbin, i, 2) + # Append parity bits to the subkey + skbin = _key2bin(subkey + bchr((p << 6) & 255)) + for i in range(0, 64, 11): + english = english + wordlist[_extract(skbin, i, 11)] + ' ' + + return english.strip() + + +def english_to_key(s): + """Transform a string into a corresponding key. 
+ + Example:: + + >>> from Crypto.Util.RFC1751 import english_to_key + >>> english_to_key('RAM LOIS GOAD CREW CARE HIT') + b'66666666' + + Args: + s (string): the string with the words separated by whitespace; + the number of words must be a multiple of 6. + Return: + A byte string. + """ + + L = s.upper().split() + key = b'' + for index in range(0, len(L), 6): + sublist = L[index:index + 6] + char = 9 * [0] + bits = 0 + for i in sublist: + index = wordlist.index(i) + shift = (8 - (bits + 11) % 8) % 8 + y = index << shift + cl, cc, cr = (y >> 16), (y >> 8) & 0xff, y & 0xff + if (shift > 5): + char[bits >> 3] = char[bits >> 3] | cl + char[(bits >> 3) + 1] = char[(bits >> 3) + 1] | cc + char[(bits >> 3) + 2] = char[(bits >> 3) + 2] | cr + elif shift > -3: + char[bits >> 3] = char[bits >> 3] | cc + char[(bits >> 3) + 1] = char[(bits >> 3) + 1] | cr + else: + char[bits >> 3] = char[bits >> 3] | cr + bits = bits + 11 + + subkey = b'' + for y in char: + subkey = subkey + bchr(y) + + # Check the parity of the resulting key + skbin = _key2bin(subkey) + p = 0 + for i in range(0, 64, 2): + p = p + _extract(skbin, i, 2) + if (p & 3) != _extract(skbin, 64, 2): + raise ValueError("Parity error in resulting key") + key = key + subkey[0:8] + return key + + +wordlist = [ + "A", "ABE", "ACE", "ACT", "AD", "ADA", "ADD", + "AGO", "AID", "AIM", "AIR", "ALL", "ALP", "AM", "AMY", "AN", "ANA", + "AND", "ANN", "ANT", "ANY", "APE", "APS", "APT", "ARC", "ARE", "ARK", + "ARM", "ART", "AS", "ASH", "ASK", "AT", "ATE", "AUG", "AUK", "AVE", + "AWE", "AWK", "AWL", "AWN", "AX", "AYE", "BAD", "BAG", "BAH", "BAM", + "BAN", "BAR", "BAT", "BAY", "BE", "BED", "BEE", "BEG", "BEN", "BET", + "BEY", "BIB", "BID", "BIG", "BIN", "BIT", "BOB", "BOG", "BON", "BOO", + "BOP", "BOW", "BOY", "BUB", "BUD", "BUG", "BUM", "BUN", "BUS", "BUT", + "BUY", "BY", "BYE", "CAB", "CAL", "CAM", "CAN", "CAP", "CAR", "CAT", + "CAW", "COD", "COG", "COL", "CON", "COO", "COP", "COT", "COW", "COY", + "CRY", "CUB", "CUE", "CUP", "CUR", "CUT", "DAB", "DAD", "DAM", "DAN", + "DAR", "DAY", "DEE", "DEL", "DEN", "DES", "DEW", "DID", "DIE", "DIG", + "DIN", "DIP", "DO", "DOE", "DOG", "DON", "DOT", "DOW", "DRY", "DUB", + "DUD", "DUE", "DUG", "DUN", "EAR", "EAT", "ED", "EEL", "EGG", "EGO", + "ELI", "ELK", "ELM", "ELY", "EM", "END", "EST", "ETC", "EVA", "EVE", + "EWE", "EYE", "FAD", "FAN", "FAR", "FAT", "FAY", "FED", "FEE", "FEW", + "FIB", "FIG", "FIN", "FIR", "FIT", "FLO", "FLY", "FOE", "FOG", "FOR", + "FRY", "FUM", "FUN", "FUR", "GAB", "GAD", "GAG", "GAL", "GAM", "GAP", + "GAS", "GAY", "GEE", "GEL", "GEM", "GET", "GIG", "GIL", "GIN", "GO", + "GOT", "GUM", "GUN", "GUS", "GUT", "GUY", "GYM", "GYP", "HA", "HAD", + "HAL", "HAM", "HAN", "HAP", "HAS", "HAT", "HAW", "HAY", "HE", "HEM", + "HEN", "HER", "HEW", "HEY", "HI", "HID", "HIM", "HIP", "HIS", "HIT", + "HO", "HOB", "HOC", "HOE", "HOG", "HOP", "HOT", "HOW", "HUB", "HUE", + "HUG", "HUH", "HUM", "HUT", "I", "ICY", "IDA", "IF", "IKE", "ILL", + "INK", "INN", "IO", "ION", "IQ", "IRA", "IRE", "IRK", "IS", "IT", + "ITS", "IVY", "JAB", "JAG", "JAM", "JAN", "JAR", "JAW", "JAY", "JET", + "JIG", "JIM", "JO", "JOB", "JOE", "JOG", "JOT", "JOY", "JUG", "JUT", + "KAY", "KEG", "KEN", "KEY", "KID", "KIM", "KIN", "KIT", "LA", "LAB", + "LAC", "LAD", "LAG", "LAM", "LAP", "LAW", "LAY", "LEA", "LED", "LEE", + "LEG", "LEN", "LEO", "LET", "LEW", "LID", "LIE", "LIN", "LIP", "LIT", + "LO", "LOB", "LOG", "LOP", "LOS", "LOT", "LOU", "LOW", "LOY", "LUG", + "LYE", "MA", "MAC", "MAD", "MAE", "MAN", "MAO", "MAP", "MAT", "MAW", + "MAY", "ME", "MEG", 
"MEL", "MEN", "MET", "MEW", "MID", "MIN", "MIT", + "MOB", "MOD", "MOE", "MOO", "MOP", "MOS", "MOT", "MOW", "MUD", "MUG", + "MUM", "MY", "NAB", "NAG", "NAN", "NAP", "NAT", "NAY", "NE", "NED", + "NEE", "NET", "NEW", "NIB", "NIL", "NIP", "NIT", "NO", "NOB", "NOD", + "NON", "NOR", "NOT", "NOV", "NOW", "NU", "NUN", "NUT", "O", "OAF", + "OAK", "OAR", "OAT", "ODD", "ODE", "OF", "OFF", "OFT", "OH", "OIL", + "OK", "OLD", "ON", "ONE", "OR", "ORB", "ORE", "ORR", "OS", "OTT", + "OUR", "OUT", "OVA", "OW", "OWE", "OWL", "OWN", "OX", "PA", "PAD", + "PAL", "PAM", "PAN", "PAP", "PAR", "PAT", "PAW", "PAY", "PEA", "PEG", + "PEN", "PEP", "PER", "PET", "PEW", "PHI", "PI", "PIE", "PIN", "PIT", + "PLY", "PO", "POD", "POE", "POP", "POT", "POW", "PRO", "PRY", "PUB", + "PUG", "PUN", "PUP", "PUT", "QUO", "RAG", "RAM", "RAN", "RAP", "RAT", + "RAW", "RAY", "REB", "RED", "REP", "RET", "RIB", "RID", "RIG", "RIM", + "RIO", "RIP", "ROB", "ROD", "ROE", "RON", "ROT", "ROW", "ROY", "RUB", + "RUE", "RUG", "RUM", "RUN", "RYE", "SAC", "SAD", "SAG", "SAL", "SAM", + "SAN", "SAP", "SAT", "SAW", "SAY", "SEA", "SEC", "SEE", "SEN", "SET", + "SEW", "SHE", "SHY", "SIN", "SIP", "SIR", "SIS", "SIT", "SKI", "SKY", + "SLY", "SO", "SOB", "SOD", "SON", "SOP", "SOW", "SOY", "SPA", "SPY", + "SUB", "SUD", "SUE", "SUM", "SUN", "SUP", "TAB", "TAD", "TAG", "TAN", + "TAP", "TAR", "TEA", "TED", "TEE", "TEN", "THE", "THY", "TIC", "TIE", + "TIM", "TIN", "TIP", "TO", "TOE", "TOG", "TOM", "TON", "TOO", "TOP", + "TOW", "TOY", "TRY", "TUB", "TUG", "TUM", "TUN", "TWO", "UN", "UP", + "US", "USE", "VAN", "VAT", "VET", "VIE", "WAD", "WAG", "WAR", "WAS", + "WAY", "WE", "WEB", "WED", "WEE", "WET", "WHO", "WHY", "WIN", "WIT", + "WOK", "WON", "WOO", "WOW", "WRY", "WU", "YAM", "YAP", "YAW", "YE", + "YEA", "YES", "YET", "YOU", "ABED", "ABEL", "ABET", "ABLE", "ABUT", + "ACHE", "ACID", "ACME", "ACRE", "ACTA", "ACTS", "ADAM", "ADDS", + "ADEN", "AFAR", "AFRO", "AGEE", "AHEM", "AHOY", "AIDA", "AIDE", + "AIDS", "AIRY", "AJAR", "AKIN", "ALAN", "ALEC", "ALGA", "ALIA", + "ALLY", "ALMA", "ALOE", "ALSO", "ALTO", "ALUM", "ALVA", "AMEN", + "AMES", "AMID", "AMMO", "AMOK", "AMOS", "AMRA", "ANDY", "ANEW", + "ANNA", "ANNE", "ANTE", "ANTI", "AQUA", "ARAB", "ARCH", "AREA", + "ARGO", "ARID", "ARMY", "ARTS", "ARTY", "ASIA", "ASKS", "ATOM", + "AUNT", "AURA", "AUTO", "AVER", "AVID", "AVIS", "AVON", "AVOW", + "AWAY", "AWRY", "BABE", "BABY", "BACH", "BACK", "BADE", "BAIL", + "BAIT", "BAKE", "BALD", "BALE", "BALI", "BALK", "BALL", "BALM", + "BAND", "BANE", "BANG", "BANK", "BARB", "BARD", "BARE", "BARK", + "BARN", "BARR", "BASE", "BASH", "BASK", "BASS", "BATE", "BATH", + "BAWD", "BAWL", "BEAD", "BEAK", "BEAM", "BEAN", "BEAR", "BEAT", + "BEAU", "BECK", "BEEF", "BEEN", "BEER", + "BEET", "BELA", "BELL", "BELT", "BEND", "BENT", "BERG", "BERN", + "BERT", "BESS", "BEST", "BETA", "BETH", "BHOY", "BIAS", "BIDE", + "BIEN", "BILE", "BILK", "BILL", "BIND", "BING", "BIRD", "BITE", + "BITS", "BLAB", "BLAT", "BLED", "BLEW", "BLOB", "BLOC", "BLOT", + "BLOW", "BLUE", "BLUM", "BLUR", "BOAR", "BOAT", "BOCA", "BOCK", + "BODE", "BODY", "BOGY", "BOHR", "BOIL", "BOLD", "BOLO", "BOLT", + "BOMB", "BONA", "BOND", "BONE", "BONG", "BONN", "BONY", "BOOK", + "BOOM", "BOON", "BOOT", "BORE", "BORG", "BORN", "BOSE", "BOSS", + "BOTH", "BOUT", "BOWL", "BOYD", "BRAD", "BRAE", "BRAG", "BRAN", + "BRAY", "BRED", "BREW", "BRIG", "BRIM", "BROW", "BUCK", "BUDD", + "BUFF", "BULB", "BULK", "BULL", "BUNK", "BUNT", "BUOY", "BURG", + "BURL", "BURN", "BURR", "BURT", "BURY", "BUSH", "BUSS", "BUST", + "BUSY", "BYTE", "CADY", "CAFE", 
"CAGE", "CAIN", "CAKE", "CALF", + "CALL", "CALM", "CAME", "CANE", "CANT", "CARD", "CARE", "CARL", + "CARR", "CART", "CASE", "CASH", "CASK", "CAST", "CAVE", "CEIL", + "CELL", "CENT", "CERN", "CHAD", "CHAR", "CHAT", "CHAW", "CHEF", + "CHEN", "CHEW", "CHIC", "CHIN", "CHOU", "CHOW", "CHUB", "CHUG", + "CHUM", "CITE", "CITY", "CLAD", "CLAM", "CLAN", "CLAW", "CLAY", + "CLOD", "CLOG", "CLOT", "CLUB", "CLUE", "COAL", "COAT", "COCA", + "COCK", "COCO", "CODA", "CODE", "CODY", "COED", "COIL", "COIN", + "COKE", "COLA", "COLD", "COLT", "COMA", "COMB", "COME", "COOK", + "COOL", "COON", "COOT", "CORD", "CORE", "CORK", "CORN", "COST", + "COVE", "COWL", "CRAB", "CRAG", "CRAM", "CRAY", "CREW", "CRIB", + "CROW", "CRUD", "CUBA", "CUBE", "CUFF", "CULL", "CULT", "CUNY", + "CURB", "CURD", "CURE", "CURL", "CURT", "CUTS", "DADE", "DALE", + "DAME", "DANA", "DANE", "DANG", "DANK", "DARE", "DARK", "DARN", + "DART", "DASH", "DATA", "DATE", "DAVE", "DAVY", "DAWN", "DAYS", + "DEAD", "DEAF", "DEAL", "DEAN", "DEAR", "DEBT", "DECK", "DEED", + "DEEM", "DEER", "DEFT", "DEFY", "DELL", "DENT", "DENY", "DESK", + "DIAL", "DICE", "DIED", "DIET", "DIME", "DINE", "DING", "DINT", + "DIRE", "DIRT", "DISC", "DISH", "DISK", "DIVE", "DOCK", "DOES", + "DOLE", "DOLL", "DOLT", "DOME", "DONE", "DOOM", "DOOR", "DORA", + "DOSE", "DOTE", "DOUG", "DOUR", "DOVE", "DOWN", "DRAB", "DRAG", + "DRAM", "DRAW", "DREW", "DRUB", "DRUG", "DRUM", "DUAL", "DUCK", + "DUCT", "DUEL", "DUET", "DUKE", "DULL", "DUMB", "DUNE", "DUNK", + "DUSK", "DUST", "DUTY", "EACH", "EARL", "EARN", "EASE", "EAST", + "EASY", "EBEN", "ECHO", "EDDY", "EDEN", "EDGE", "EDGY", "EDIT", + "EDNA", "EGAN", "ELAN", "ELBA", "ELLA", "ELSE", "EMIL", "EMIT", + "EMMA", "ENDS", "ERIC", "EROS", "EVEN", "EVER", "EVIL", "EYED", + "FACE", "FACT", "FADE", "FAIL", "FAIN", "FAIR", "FAKE", "FALL", + "FAME", "FANG", "FARM", "FAST", "FATE", "FAWN", "FEAR", "FEAT", + "FEED", "FEEL", "FEET", "FELL", "FELT", "FEND", "FERN", "FEST", + "FEUD", "FIEF", "FIGS", "FILE", "FILL", "FILM", "FIND", "FINE", + "FINK", "FIRE", "FIRM", "FISH", "FISK", "FIST", "FITS", "FIVE", + "FLAG", "FLAK", "FLAM", "FLAT", "FLAW", "FLEA", "FLED", "FLEW", + "FLIT", "FLOC", "FLOG", "FLOW", "FLUB", "FLUE", "FOAL", "FOAM", + "FOGY", "FOIL", "FOLD", "FOLK", "FOND", "FONT", "FOOD", "FOOL", + "FOOT", "FORD", "FORE", "FORK", "FORM", "FORT", "FOSS", "FOUL", + "FOUR", "FOWL", "FRAU", "FRAY", "FRED", "FREE", "FRET", "FREY", + "FROG", "FROM", "FUEL", "FULL", "FUME", "FUND", "FUNK", "FURY", + "FUSE", "FUSS", "GAFF", "GAGE", "GAIL", "GAIN", "GAIT", "GALA", + "GALE", "GALL", "GALT", "GAME", "GANG", "GARB", "GARY", "GASH", + "GATE", "GAUL", "GAUR", "GAVE", "GAWK", "GEAR", "GELD", "GENE", + "GENT", "GERM", "GETS", "GIBE", "GIFT", "GILD", "GILL", "GILT", + "GINA", "GIRD", "GIRL", "GIST", "GIVE", "GLAD", "GLEE", "GLEN", + "GLIB", "GLOB", "GLOM", "GLOW", "GLUE", "GLUM", "GLUT", "GOAD", + "GOAL", "GOAT", "GOER", "GOES", "GOLD", "GOLF", "GONE", "GONG", + "GOOD", "GOOF", "GORE", "GORY", "GOSH", "GOUT", "GOWN", "GRAB", + "GRAD", "GRAY", "GREG", "GREW", "GREY", "GRID", "GRIM", "GRIN", + "GRIT", "GROW", "GRUB", "GULF", "GULL", "GUNK", "GURU", "GUSH", + "GUST", "GWEN", "GWYN", "HAAG", "HAAS", "HACK", "HAIL", "HAIR", + "HALE", "HALF", "HALL", "HALO", "HALT", "HAND", "HANG", "HANK", + "HANS", "HARD", "HARK", "HARM", "HART", "HASH", "HAST", "HATE", + "HATH", "HAUL", "HAVE", "HAWK", "HAYS", "HEAD", "HEAL", "HEAR", + "HEAT", "HEBE", "HECK", "HEED", "HEEL", "HEFT", "HELD", "HELL", + "HELM", "HERB", "HERD", "HERE", "HERO", "HERS", "HESS", "HEWN", + "HICK", "HIDE", 
"HIGH", "HIKE", "HILL", "HILT", "HIND", "HINT", + "HIRE", "HISS", "HIVE", "HOBO", "HOCK", "HOFF", "HOLD", "HOLE", + "HOLM", "HOLT", "HOME", "HONE", "HONK", "HOOD", "HOOF", "HOOK", + "HOOT", "HORN", "HOSE", "HOST", "HOUR", "HOVE", "HOWE", "HOWL", + "HOYT", "HUCK", "HUED", "HUFF", "HUGE", "HUGH", "HUGO", "HULK", + "HULL", "HUNK", "HUNT", "HURD", "HURL", "HURT", "HUSH", "HYDE", + "HYMN", "IBIS", "ICON", "IDEA", "IDLE", "IFFY", "INCA", "INCH", + "INTO", "IONS", "IOTA", "IOWA", "IRIS", "IRMA", "IRON", "ISLE", + "ITCH", "ITEM", "IVAN", "JACK", "JADE", "JAIL", "JAKE", "JANE", + "JAVA", "JEAN", "JEFF", "JERK", "JESS", "JEST", "JIBE", "JILL", + "JILT", "JIVE", "JOAN", "JOBS", "JOCK", "JOEL", "JOEY", "JOHN", + "JOIN", "JOKE", "JOLT", "JOVE", "JUDD", "JUDE", "JUDO", "JUDY", + "JUJU", "JUKE", "JULY", "JUNE", "JUNK", "JUNO", "JURY", "JUST", + "JUTE", "KAHN", "KALE", "KANE", "KANT", "KARL", "KATE", "KEEL", + "KEEN", "KENO", "KENT", "KERN", "KERR", "KEYS", "KICK", "KILL", + "KIND", "KING", "KIRK", "KISS", "KITE", "KLAN", "KNEE", "KNEW", + "KNIT", "KNOB", "KNOT", "KNOW", "KOCH", "KONG", "KUDO", "KURD", + "KURT", "KYLE", "LACE", "LACK", "LACY", "LADY", "LAID", "LAIN", + "LAIR", "LAKE", "LAMB", "LAME", "LAND", "LANE", "LANG", "LARD", + "LARK", "LASS", "LAST", "LATE", "LAUD", "LAVA", "LAWN", "LAWS", + "LAYS", "LEAD", "LEAF", "LEAK", "LEAN", "LEAR", "LEEK", "LEER", + "LEFT", "LEND", "LENS", "LENT", "LEON", "LESK", "LESS", "LEST", + "LETS", "LIAR", "LICE", "LICK", "LIED", "LIEN", "LIES", "LIEU", + "LIFE", "LIFT", "LIKE", "LILA", "LILT", "LILY", "LIMA", "LIMB", + "LIME", "LIND", "LINE", "LINK", "LINT", "LION", "LISA", "LIST", + "LIVE", "LOAD", "LOAF", "LOAM", "LOAN", "LOCK", "LOFT", "LOGE", + "LOIS", "LOLA", "LONE", "LONG", "LOOK", "LOON", "LOOT", "LORD", + "LORE", "LOSE", "LOSS", "LOST", "LOUD", "LOVE", "LOWE", "LUCK", + "LUCY", "LUGE", "LUKE", "LULU", "LUND", "LUNG", "LURA", "LURE", + "LURK", "LUSH", "LUST", "LYLE", "LYNN", "LYON", "LYRA", "MACE", + "MADE", "MAGI", "MAID", "MAIL", "MAIN", "MAKE", "MALE", "MALI", + "MALL", "MALT", "MANA", "MANN", "MANY", "MARC", "MARE", "MARK", + "MARS", "MART", "MARY", "MASH", "MASK", "MASS", "MAST", "MATE", + "MATH", "MAUL", "MAYO", "MEAD", "MEAL", "MEAN", "MEAT", "MEEK", + "MEET", "MELD", "MELT", "MEMO", "MEND", "MENU", "MERT", "MESH", + "MESS", "MICE", "MIKE", "MILD", "MILE", "MILK", "MILL", "MILT", + "MIMI", "MIND", "MINE", "MINI", "MINK", "MINT", "MIRE", "MISS", + "MIST", "MITE", "MITT", "MOAN", "MOAT", "MOCK", "MODE", "MOLD", + "MOLE", "MOLL", "MOLT", "MONA", "MONK", "MONT", "MOOD", "MOON", + "MOOR", "MOOT", "MORE", "MORN", "MORT", "MOSS", "MOST", "MOTH", + "MOVE", "MUCH", "MUCK", "MUDD", "MUFF", "MULE", "MULL", "MURK", + "MUSH", "MUST", "MUTE", "MUTT", "MYRA", "MYTH", "NAGY", "NAIL", + "NAIR", "NAME", "NARY", "NASH", "NAVE", "NAVY", "NEAL", "NEAR", + "NEAT", "NECK", "NEED", "NEIL", "NELL", "NEON", "NERO", "NESS", + "NEST", "NEWS", "NEWT", "NIBS", "NICE", "NICK", "NILE", "NINA", + "NINE", "NOAH", "NODE", "NOEL", "NOLL", "NONE", "NOOK", "NOON", + "NORM", "NOSE", "NOTE", "NOUN", "NOVA", "NUDE", "NULL", "NUMB", + "OATH", "OBEY", "OBOE", "ODIN", "OHIO", "OILY", "OINT", "OKAY", + "OLAF", "OLDY", "OLGA", "OLIN", "OMAN", "OMEN", "OMIT", "ONCE", + "ONES", "ONLY", "ONTO", "ONUS", "ORAL", "ORGY", "OSLO", "OTIS", + "OTTO", "OUCH", "OUST", "OUTS", "OVAL", "OVEN", "OVER", "OWLY", + "OWNS", "QUAD", "QUIT", "QUOD", "RACE", "RACK", "RACY", "RAFT", + "RAGE", "RAID", "RAIL", "RAIN", "RAKE", "RANK", "RANT", "RARE", + "RASH", "RATE", "RAVE", "RAYS", "READ", "REAL", "REAM", "REAR", + 
"RECK", "REED", "REEF", "REEK", "REEL", "REID", "REIN", "RENA", + "REND", "RENT", "REST", "RICE", "RICH", "RICK", "RIDE", "RIFT", + "RILL", "RIME", "RING", "RINK", "RISE", "RISK", "RITE", "ROAD", + "ROAM", "ROAR", "ROBE", "ROCK", "RODE", "ROIL", "ROLL", "ROME", + "ROOD", "ROOF", "ROOK", "ROOM", "ROOT", "ROSA", "ROSE", "ROSS", + "ROSY", "ROTH", "ROUT", "ROVE", "ROWE", "ROWS", "RUBE", "RUBY", + "RUDE", "RUDY", "RUIN", "RULE", "RUNG", "RUNS", "RUNT", "RUSE", + "RUSH", "RUSK", "RUSS", "RUST", "RUTH", "SACK", "SAFE", "SAGE", + "SAID", "SAIL", "SALE", "SALK", "SALT", "SAME", "SAND", "SANE", + "SANG", "SANK", "SARA", "SAUL", "SAVE", "SAYS", "SCAN", "SCAR", + "SCAT", "SCOT", "SEAL", "SEAM", "SEAR", "SEAT", "SEED", "SEEK", + "SEEM", "SEEN", "SEES", "SELF", "SELL", "SEND", "SENT", "SETS", + "SEWN", "SHAG", "SHAM", "SHAW", "SHAY", "SHED", "SHIM", "SHIN", + "SHOD", "SHOE", "SHOT", "SHOW", "SHUN", "SHUT", "SICK", "SIDE", + "SIFT", "SIGH", "SIGN", "SILK", "SILL", "SILO", "SILT", "SINE", + "SING", "SINK", "SIRE", "SITE", "SITS", "SITU", "SKAT", "SKEW", + "SKID", "SKIM", "SKIN", "SKIT", "SLAB", "SLAM", "SLAT", "SLAY", + "SLED", "SLEW", "SLID", "SLIM", "SLIT", "SLOB", "SLOG", "SLOT", + "SLOW", "SLUG", "SLUM", "SLUR", "SMOG", "SMUG", "SNAG", "SNOB", + "SNOW", "SNUB", "SNUG", "SOAK", "SOAR", "SOCK", "SODA", "SOFA", + "SOFT", "SOIL", "SOLD", "SOME", "SONG", "SOON", "SOOT", "SORE", + "SORT", "SOUL", "SOUR", "SOWN", "STAB", "STAG", "STAN", "STAR", + "STAY", "STEM", "STEW", "STIR", "STOW", "STUB", "STUN", "SUCH", + "SUDS", "SUIT", "SULK", "SUMS", "SUNG", "SUNK", "SURE", "SURF", + "SWAB", "SWAG", "SWAM", "SWAN", "SWAT", "SWAY", "SWIM", "SWUM", + "TACK", "TACT", "TAIL", "TAKE", "TALE", "TALK", "TALL", "TANK", + "TASK", "TATE", "TAUT", "TEAL", "TEAM", "TEAR", "TECH", "TEEM", + "TEEN", "TEET", "TELL", "TEND", "TENT", "TERM", "TERN", "TESS", + "TEST", "THAN", "THAT", "THEE", "THEM", "THEN", "THEY", "THIN", + "THIS", "THUD", "THUG", "TICK", "TIDE", "TIDY", "TIED", "TIER", + "TILE", "TILL", "TILT", "TIME", "TINA", "TINE", "TINT", "TINY", + "TIRE", "TOAD", "TOGO", "TOIL", "TOLD", "TOLL", "TONE", "TONG", + "TONY", "TOOK", "TOOL", "TOOT", "TORE", "TORN", "TOTE", "TOUR", + "TOUT", "TOWN", "TRAG", "TRAM", "TRAY", "TREE", "TREK", "TRIG", + "TRIM", "TRIO", "TROD", "TROT", "TROY", "TRUE", "TUBA", "TUBE", + "TUCK", "TUFT", "TUNA", "TUNE", "TUNG", "TURF", "TURN", "TUSK", + "TWIG", "TWIN", "TWIT", "ULAN", "UNIT", "URGE", "USED", "USER", + "USES", "UTAH", "VAIL", "VAIN", "VALE", "VARY", "VASE", "VAST", + "VEAL", "VEDA", "VEIL", "VEIN", "VEND", "VENT", "VERB", "VERY", + "VETO", "VICE", "VIEW", "VINE", "VISE", "VOID", "VOLT", "VOTE", + "WACK", "WADE", "WAGE", "WAIL", "WAIT", "WAKE", "WALE", "WALK", + "WALL", "WALT", "WAND", "WANE", "WANG", "WANT", "WARD", "WARM", + "WARN", "WART", "WASH", "WAST", "WATS", "WATT", "WAVE", "WAVY", + "WAYS", "WEAK", "WEAL", "WEAN", "WEAR", "WEED", "WEEK", "WEIR", + "WELD", "WELL", "WELT", "WENT", "WERE", "WERT", "WEST", "WHAM", + "WHAT", "WHEE", "WHEN", "WHET", "WHOA", "WHOM", "WICK", "WIFE", + "WILD", "WILL", "WIND", "WINE", "WING", "WINK", "WINO", "WIRE", + "WISE", "WISH", "WITH", "WOLF", "WONT", "WOOD", "WOOL", "WORD", + "WORE", "WORK", "WORM", "WORN", "WOVE", "WRIT", "WYNN", "YALE", + "YANG", "YANK", "YARD", "YARN", "YAWL", "YAWN", "YEAH", "YEAR", + "YELL", "YOGA", "YOKE" ] diff --git a/env/lib/python3.8/site-packages/Crypto/Util/RFC1751.pyi b/env/lib/python3.8/site-packages/Crypto/Util/RFC1751.pyi new file mode 100644 index 00000000..6ad07ff6 --- /dev/null +++ 
b/env/lib/python3.8/site-packages/Crypto/Util/RFC1751.pyi @@ -0,0 +1,7 @@ +from typing import Dict, List + +binary: Dict[int, str] +wordlist: List[str] + +def key_to_english(key: bytes) -> str: ... +def english_to_key(s: str) -> bytes: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Util/__init__.py b/env/lib/python3.8/site-packages/Crypto/Util/__init__.py new file mode 100644 index 00000000..f12214d3 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/__init__.py @@ -0,0 +1,41 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Miscellaneous modules + +Contains useful modules that don't belong into any of the +other Crypto.* subpackages. + +======================== ============================================= +Module Description +======================== ============================================= +`Crypto.Util.number` Number-theoretic functions (primality testing, etc.) +`Crypto.Util.Counter` Fast counter functions for CTR cipher modes. +`Crypto.Util.RFC1751` Converts between 128-bit keys and human-readable + strings of words. +`Crypto.Util.asn1` Minimal support for ASN.1 DER encoding +`Crypto.Util.Padding` Set of functions for adding and removing padding. +======================== ============================================= + +:undocumented: _galois, _number_new, cpuid, py3compat, _raw_api +""" + +__all__ = ['RFC1751', 'number', 'strxor', 'asn1', 'Counter', 'Padding'] + diff --git a/env/lib/python3.8/site-packages/Crypto/Util/_cpu_features.py b/env/lib/python3.8/site-packages/Crypto/Util/_cpu_features.py new file mode 100644 index 00000000..b3039b58 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/_cpu_features.py @@ -0,0 +1,46 @@ +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. 
+# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util._raw_api import load_pycryptodome_raw_lib + + +_raw_cpuid_lib = load_pycryptodome_raw_lib("Crypto.Util._cpuid_c", + """ + int have_aes_ni(void); + int have_clmul(void); + """) + + +def have_aes_ni(): + return _raw_cpuid_lib.have_aes_ni() + + +def have_clmul(): + return _raw_cpuid_lib.have_clmul() diff --git a/env/lib/python3.8/site-packages/Crypto/Util/_cpu_features.pyi b/env/lib/python3.8/site-packages/Crypto/Util/_cpu_features.pyi new file mode 100644 index 00000000..10e669e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/_cpu_features.pyi @@ -0,0 +1,2 @@ +def have_aes_ni() -> int: ... +def have_clmul() -> int: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Util/_cpuid_c.abi3.so b/env/lib/python3.8/site-packages/Crypto/Util/_cpuid_c.abi3.so new file mode 100644 index 00000000..6665a81c Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Util/_cpuid_c.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Util/_file_system.py b/env/lib/python3.8/site-packages/Crypto/Util/_file_system.py new file mode 100644 index 00000000..1cb0c4bf --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/_file_system.py @@ -0,0 +1,54 @@ +# =================================================================== +# +# Copyright (c) 2016, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# ===================================================================
+
+import os
+
+
+def pycryptodome_filename(dir_comps, filename):
+    """Return the complete file name for the module
+
+    dir_comps : list of string
+        The list of directory names in the PyCryptodome package.
+        The first element must be "Crypto".
+
+    filename : string
+        The filename (including extension) in the target directory.
+    """
+
+    if dir_comps[0] != "Crypto":
+        raise ValueError("Only available for modules under 'Crypto'")
+
+    dir_comps = list(dir_comps[1:]) + [filename]
+
+    util_lib, _ = os.path.split(os.path.abspath(__file__))
+    root_lib = os.path.join(util_lib, "..")
+
+    return os.path.join(root_lib, *dir_comps)
+
diff --git a/env/lib/python3.8/site-packages/Crypto/Util/_file_system.pyi b/env/lib/python3.8/site-packages/Crypto/Util/_file_system.pyi
new file mode 100644
index 00000000..d54a1268
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Util/_file_system.pyi
@@ -0,0 +1,4 @@
+from typing import List
+
+
+def pycryptodome_filename(dir_comps: List[str], filename: str) -> str: ...
\ No newline at end of file
diff --git a/env/lib/python3.8/site-packages/Crypto/Util/_raw_api.py b/env/lib/python3.8/site-packages/Crypto/Util/_raw_api.py
new file mode 100644
index 00000000..f026e8f2
--- /dev/null
+++ b/env/lib/python3.8/site-packages/Crypto/Util/_raw_api.py
@@ -0,0 +1,319 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# =================================================================== + +import os +import abc +import sys +from Crypto.Util.py3compat import byte_string +from Crypto.Util._file_system import pycryptodome_filename + +# +# List of file suffixes for Python extensions +# +if sys.version_info[0] < 3: + + import imp + extension_suffixes = [] + for ext, mod, typ in imp.get_suffixes(): + if typ == imp.C_EXTENSION: + extension_suffixes.append(ext) + +else: + + from importlib import machinery + extension_suffixes = machinery.EXTENSION_SUFFIXES + +# Which types with buffer interface we support (apart from byte strings) +_buffer_type = (bytearray, memoryview) + + +class _VoidPointer(object): + @abc.abstractmethod + def get(self): + """Return the memory location we point to""" + return + + @abc.abstractmethod + def address_of(self): + """Return a raw pointer to this pointer""" + return + + +try: + # Starting from v2.18, pycparser (used by cffi for in-line ABI mode) + # stops working correctly when PYOPTIMIZE==2 or the parameter -OO is + # passed. In that case, we fall back to ctypes. + # Note that PyPy ships with an old version of pycparser so we can keep + # using cffi there. + # See https://github.com/Legrandin/pycryptodome/issues/228 + if '__pypy__' not in sys.builtin_module_names and sys.flags.optimize == 2: + raise ImportError("CFFI with optimize=2 fails due to pycparser bug.") + + from cffi import FFI + + ffi = FFI() + null_pointer = ffi.NULL + uint8_t_type = ffi.typeof(ffi.new("const uint8_t*")) + + _Array = ffi.new("uint8_t[1]").__class__.__bases__ + + def load_lib(name, cdecl): + """Load a shared library and return a handle to it. + + @name, either an absolute path or the name of a library + in the system search path. + + @cdecl, the C function declarations. 
+ """ + + if hasattr(ffi, "RTLD_DEEPBIND") and not os.getenv('PYCRYPTODOME_DISABLE_DEEPBIND'): + lib = ffi.dlopen(name, ffi.RTLD_DEEPBIND) + else: + lib = ffi.dlopen(name) + ffi.cdef(cdecl) + return lib + + def c_ulong(x): + """Convert a Python integer to unsigned long""" + return x + + c_ulonglong = c_ulong + c_uint = c_ulong + c_ubyte = c_ulong + + def c_size_t(x): + """Convert a Python integer to size_t""" + return x + + def create_string_buffer(init_or_size, size=None): + """Allocate the given amount of bytes (initially set to 0)""" + + if isinstance(init_or_size, bytes): + size = max(len(init_or_size) + 1, size) + result = ffi.new("uint8_t[]", size) + result[:] = init_or_size + else: + if size: + raise ValueError("Size must be specified once only") + result = ffi.new("uint8_t[]", init_or_size) + return result + + def get_c_string(c_string): + """Convert a C string into a Python byte sequence""" + return ffi.string(c_string) + + def get_raw_buffer(buf): + """Convert a C buffer into a Python byte sequence""" + return ffi.buffer(buf)[:] + + def c_uint8_ptr(data): + if isinstance(data, _buffer_type): + # This only works for cffi >= 1.7 + return ffi.cast(uint8_t_type, ffi.from_buffer(data)) + elif byte_string(data) or isinstance(data, _Array): + return data + else: + raise TypeError("Object type %s cannot be passed to C code" % type(data)) + + class VoidPointer_cffi(_VoidPointer): + """Model a newly allocated pointer to void""" + + def __init__(self): + self._pp = ffi.new("void *[1]") + + def get(self): + return self._pp[0] + + def address_of(self): + return self._pp + + def VoidPointer(): + return VoidPointer_cffi() + + backend = "cffi" + +except ImportError: + + import ctypes + from ctypes import (CDLL, c_void_p, byref, c_ulong, c_ulonglong, c_size_t, + create_string_buffer, c_ubyte, c_uint) + from ctypes.util import find_library + from ctypes import Array as _Array + + null_pointer = None + cached_architecture = [] + + def c_ubyte(c): + if not (0 <= c < 256): + raise OverflowError() + return ctypes.c_ubyte(c) + + def load_lib(name, cdecl): + if not cached_architecture: + # platform.architecture() creates a subprocess, so caching the + # result makes successive imports faster. + import platform + cached_architecture[:] = platform.architecture() + bits, linkage = cached_architecture + if "." 
not in name and not linkage.startswith("Win"): + full_name = find_library(name) + if full_name is None: + raise OSError("Cannot load library '%s'" % name) + name = full_name + return CDLL(name) + + def get_c_string(c_string): + return c_string.value + + def get_raw_buffer(buf): + return buf.raw + + # ---- Get raw pointer --- + + _c_ssize_t = ctypes.c_ssize_t + + _PyBUF_SIMPLE = 0 + _PyObject_GetBuffer = ctypes.pythonapi.PyObject_GetBuffer + _PyBuffer_Release = ctypes.pythonapi.PyBuffer_Release + _py_object = ctypes.py_object + _c_ssize_p = ctypes.POINTER(_c_ssize_t) + + # See Include/object.h for CPython + # and https://github.com/pallets/click/blob/master/src/click/_winconsole.py + class _Py_buffer(ctypes.Structure): + _fields_ = [ + ('buf', c_void_p), + ('obj', ctypes.py_object), + ('len', _c_ssize_t), + ('itemsize', _c_ssize_t), + ('readonly', ctypes.c_int), + ('ndim', ctypes.c_int), + ('format', ctypes.c_char_p), + ('shape', _c_ssize_p), + ('strides', _c_ssize_p), + ('suboffsets', _c_ssize_p), + ('internal', c_void_p) + ] + + # Extra field for CPython 2.6/2.7 + if sys.version_info[0] == 2: + _fields_.insert(-1, ('smalltable', _c_ssize_t * 2)) + + def c_uint8_ptr(data): + if byte_string(data) or isinstance(data, _Array): + return data + elif isinstance(data, _buffer_type): + obj = _py_object(data) + buf = _Py_buffer() + _PyObject_GetBuffer(obj, byref(buf), _PyBUF_SIMPLE) + try: + buffer_type = ctypes.c_ubyte * buf.len + return buffer_type.from_address(buf.buf) + finally: + _PyBuffer_Release(byref(buf)) + else: + raise TypeError("Object type %s cannot be passed to C code" % type(data)) + + # --- + + class VoidPointer_ctypes(_VoidPointer): + """Model a newly allocated pointer to void""" + + def __init__(self): + self._p = c_void_p() + + def get(self): + return self._p + + def address_of(self): + return byref(self._p) + + def VoidPointer(): + return VoidPointer_ctypes() + + backend = "ctypes" + + +class SmartPointer(object): + """Class to hold a non-managed piece of memory""" + + def __init__(self, raw_pointer, destructor): + self._raw_pointer = raw_pointer + self._destructor = destructor + + def get(self): + return self._raw_pointer + + def release(self): + rp, self._raw_pointer = self._raw_pointer, None + return rp + + def __del__(self): + try: + if self._raw_pointer is not None: + self._destructor(self._raw_pointer) + self._raw_pointer = None + except AttributeError: + pass + + +def load_pycryptodome_raw_lib(name, cdecl): + """Load a shared library and return a handle to it. + + @name, the name of the library expressed as a PyCryptodome module, + for instance Crypto.Cipher._raw_cbc. + + @cdecl, the C function declarations. 
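+
+    For example, Crypto.Cipher._raw_cbc may resolve on disk to
+    Crypto/Cipher/_raw_cbc.abi3.so, depending on which of the
+    interpreter's extension suffixes is present.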
+ """ + + split = name.split(".") + dir_comps, basename = split[:-1], split[-1] + attempts = [] + for ext in extension_suffixes: + try: + filename = basename + ext + full_name = pycryptodome_filename(dir_comps, filename) + if not os.path.isfile(full_name): + attempts.append("Not found '%s'" % filename) + continue + return load_lib(full_name, cdecl) + except OSError as exp: + attempts.append("Cannot load '%s': %s" % (filename, str(exp))) + raise OSError("Cannot load native module '%s': %s" % (name, ", ".join(attempts))) + + +def is_buffer(x): + """Return True if object x supports the buffer interface""" + return isinstance(x, (bytes, bytearray, memoryview)) + + +def is_writeable_buffer(x): + return (isinstance(x, bytearray) or + (isinstance(x, memoryview) and not x.readonly)) diff --git a/env/lib/python3.8/site-packages/Crypto/Util/_raw_api.pyi b/env/lib/python3.8/site-packages/Crypto/Util/_raw_api.pyi new file mode 100644 index 00000000..2bc53016 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/_raw_api.pyi @@ -0,0 +1,27 @@ +from typing import Any, Optional, Union + +def load_lib(name: str, cdecl: str) -> Any : ... +def c_ulong(x: int ) -> Any : ... +def c_ulonglong(x: int ) -> Any : ... +def c_size_t(x: int) -> Any : ... +def create_string_buffer(init_or_size: Union[bytes,int], size: Optional[int]) -> Any : ... +def get_c_string(c_string: Any) -> bytes : ... +def get_raw_buffer(buf: Any) -> bytes : ... +def c_uint8_ptr(data: Union[bytes, memoryview, bytearray]) -> Any : ... + +class VoidPointer(object): + def get(self) -> Any : ... + def address_of(self) -> Any : ... + +class SmartPointer(object): + def __init__(self, raw_pointer: Any, destructor: Any) -> None : ... + def get(self) -> Any : ... + def release(self) -> Any : ... + +backend : str +null_pointer : Any +ffi: Any + +def load_pycryptodome_raw_lib(name: str, cdecl: str) -> Any : ... +def is_buffer(x: Any) -> bool : ... +def is_writeable_buffer(x: Any) -> bool : ... diff --git a/env/lib/python3.8/site-packages/Crypto/Util/_strxor.abi3.so b/env/lib/python3.8/site-packages/Crypto/Util/_strxor.abi3.so new file mode 100644 index 00000000..97e42459 Binary files /dev/null and b/env/lib/python3.8/site-packages/Crypto/Util/_strxor.abi3.so differ diff --git a/env/lib/python3.8/site-packages/Crypto/Util/asn1.py b/env/lib/python3.8/site-packages/Crypto/Util/asn1.py new file mode 100644 index 00000000..c4571d49 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/asn1.py @@ -0,0 +1,939 @@ +# -*- coding: ascii -*- +# +# Util/asn1.py : Minimal support for ASN.1 DER binary encoding. +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +import struct + +from Crypto.Util.py3compat import byte_string, b, bchr, bord + +from Crypto.Util.number import long_to_bytes, bytes_to_long + +__all__ = ['DerObject', 'DerInteger', 'DerOctetString', 'DerNull', + 'DerSequence', 'DerObjectId', 'DerBitString', 'DerSetOf'] + + +def _is_number(x, only_non_negative=False): + test = 0 + try: + test = x + test + except TypeError: + return False + return not only_non_negative or x >= 0 + + +class BytesIO_EOF(object): + """This class differs from BytesIO in that a ValueError exception is + raised whenever EOF is reached.""" + + def __init__(self, initial_bytes): + self._buffer = initial_bytes + self._index = 0 + self._bookmark = None + + def set_bookmark(self): + self._bookmark = self._index + + def data_since_bookmark(self): + assert self._bookmark is not None + return self._buffer[self._bookmark:self._index] + + def remaining_data(self): + return len(self._buffer) - self._index + + def read(self, length): + new_index = self._index + length + if new_index > len(self._buffer): + raise ValueError("Not enough data for DER decoding: expected %d bytes and found %d" % (new_index, len(self._buffer))) + + result = self._buffer[self._index:new_index] + self._index = new_index + return result + + def read_byte(self): + return bord(self.read(1)[0]) + + +class DerObject(object): + """Base class for defining a single DER object. + + This class should never be directly instantiated. + """ + + def __init__(self, asn1Id=None, payload=b'', implicit=None, + constructed=False, explicit=None): + """Initialize the DER object according to a specific ASN.1 type. + + :Parameters: + asn1Id : integer + The universal DER tag number for this object + (e.g. 0x10 for a SEQUENCE). + If None, the tag is not known yet. + + payload : byte string + The initial payload of the object (that it, + the content octets). + If not specified, the payload is empty. + + implicit : integer + The IMPLICIT tag number to use for the encoded object. + It overrides the universal tag *asn1Id*. + + constructed : bool + True when the ASN.1 type is *constructed*. + False when it is *primitive*. + + explicit : integer + The EXPLICIT tag number to use for the encoded object. + """ + + if asn1Id is None: + # The tag octet will be read in with ``decode`` + self._tag_octet = None + return + asn1Id = self._convertTag(asn1Id) + + self.payload = payload + + # In a BER/DER identifier octet: + # * bits 4-0 contain the tag value + # * bit 5 is set if the type is 'constructed' + # and unset if 'primitive' + # * bits 7-6 depend on the encoding class + # + # Class | Bit 7, Bit 6 + # ---------------------------------- + # universal | 0 0 + # application | 0 1 + # context-spec | 1 0 (default for IMPLICIT/EXPLICIT) + # private | 1 1 + # + if None not in (explicit, implicit): + raise ValueError("Explicit and implicit tags are" + " mutually exclusive") + + if implicit is not None: + self._tag_octet = 0x80 | 0x20 * constructed | self._convertTag(implicit) + return + + if explicit is not None: + self._tag_octet = 0xA0 | self._convertTag(explicit) + self._inner_tag_octet = 0x20 * constructed | asn1Id + return + + self._tag_octet = 0x20 * constructed | asn1Id + + def _convertTag(self, tag): + """Check if *tag* is a real DER tag. + Convert it from a character to number if necessary. 
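+
+        Only the low-tag-number form is supported, i.e. tags in the
+        range 0..30; the value 0x1F (31) would introduce the
+        multi-octet form and is rejected.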
+ """ + if not _is_number(tag): + if len(tag) == 1: + tag = bord(tag[0]) + # Ensure that tag is a low tag + if not (_is_number(tag) and 0 <= tag < 0x1F): + raise ValueError("Wrong DER tag") + return tag + + @staticmethod + def _definite_form(length): + """Build length octets according to BER/DER + definite form. + """ + if length > 127: + encoding = long_to_bytes(length) + return bchr(len(encoding) + 128) + encoding + return bchr(length) + + def encode(self): + """Return this DER element, fully encoded as a binary byte string.""" + + # Concatenate identifier octets, length octets, + # and contents octets + + output_payload = self.payload + + # In case of an EXTERNAL tag, first encode the inner + # element. + if hasattr(self, "_inner_tag_octet"): + output_payload = (bchr(self._inner_tag_octet) + + self._definite_form(len(self.payload)) + + self.payload) + + return (bchr(self._tag_octet) + + self._definite_form(len(output_payload)) + + output_payload) + + def _decodeLen(self, s): + """Decode DER length octets from a file.""" + + length = s.read_byte() + + if length > 127: + encoded_length = s.read(length & 0x7F) + if bord(encoded_length[0]) == 0: + raise ValueError("Invalid DER: length has leading zero") + length = bytes_to_long(encoded_length) + if length <= 127: + raise ValueError("Invalid DER: length in long form but smaller than 128") + + return length + + def decode(self, der_encoded, strict=False): + """Decode a complete DER element, and re-initializes this + object with it. + + Args: + der_encoded (byte string): A complete DER element. + + Raises: + ValueError: in case of parsing errors. + """ + + if not byte_string(der_encoded): + raise ValueError("Input is not a byte string") + + s = BytesIO_EOF(der_encoded) + self._decodeFromStream(s, strict) + + # There shouldn't be other bytes left + if s.remaining_data() > 0: + raise ValueError("Unexpected extra data after the DER structure") + + return self + + def _decodeFromStream(self, s, strict): + """Decode a complete DER element from a file.""" + + idOctet = s.read_byte() + if self._tag_octet is not None: + if idOctet != self._tag_octet: + raise ValueError("Unexpected DER tag") + else: + self._tag_octet = idOctet + length = self._decodeLen(s) + self.payload = s.read(length) + + # In case of an EXTERNAL tag, further decode the inner + # element. + if hasattr(self, "_inner_tag_octet"): + p = BytesIO_EOF(self.payload) + inner_octet = p.read_byte() + if inner_octet != self._inner_tag_octet: + raise ValueError("Unexpected internal DER tag") + length = self._decodeLen(p) + self.payload = p.read(length) + + # There shouldn't be other bytes left + if p.remaining_data() > 0: + raise ValueError("Unexpected extra data after the DER structure") + + +class DerInteger(DerObject): + """Class to model a DER INTEGER. + + An example of encoding is:: + + >>> from Crypto.Util.asn1 import DerInteger + >>> from binascii import hexlify, unhexlify + >>> int_der = DerInteger(9) + >>> print hexlify(int_der.encode()) + + which will show ``020109``, the DER encoding of 9. + + And for decoding:: + + >>> s = unhexlify(b'020109') + >>> try: + >>> int_der = DerInteger() + >>> int_der.decode(s) + >>> print int_der.value + >>> except ValueError: + >>> print "Not a valid DER INTEGER" + + the output will be ``9``. + + :ivar value: The integer value + :vartype value: integer + """ + + def __init__(self, value=0, implicit=None, explicit=None): + """Initialize the DER object as an INTEGER. + + :Parameters: + value : integer + The value of the integer. 
+ + implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for INTEGER (2). + """ + + DerObject.__init__(self, 0x02, b'', implicit, + False, explicit) + self.value = value # The integer value + + def encode(self): + """Return the DER INTEGER, fully encoded as a + binary string.""" + + number = self.value + self.payload = b'' + while True: + self.payload = bchr(int(number & 255)) + self.payload + if 128 <= number <= 255: + self.payload = bchr(0x00) + self.payload + if -128 <= number <= 255: + break + number >>= 8 + return DerObject.encode(self) + + def decode(self, der_encoded, strict=False): + """Decode a complete DER INTEGER DER, and re-initializes this + object with it. + + Args: + der_encoded (byte string): A complete INTEGER DER element. + + Raises: + ValueError: in case of parsing errors. + """ + + return DerObject.decode(self, der_encoded, strict=strict) + + def _decodeFromStream(self, s, strict): + """Decode a complete DER INTEGER from a file.""" + + # Fill up self.payload + DerObject._decodeFromStream(self, s, strict) + + if strict: + if len(self.payload) == 0: + raise ValueError("Invalid encoding for DER INTEGER: empty payload") + if len(self.payload) >= 2 and struct.unpack('>H', self.payload[:2])[0] < 0x80: + raise ValueError("Invalid encoding for DER INTEGER: leading zero") + + # Derive self.value from self.payload + self.value = 0 + bits = 1 + for i in self.payload: + self.value *= 256 + self.value += bord(i) + bits <<= 8 + if self.payload and bord(self.payload[0]) & 0x80: + self.value -= bits + + +class DerSequence(DerObject): + """Class to model a DER SEQUENCE. + + This object behaves like a dynamic Python sequence. + + Sub-elements that are INTEGERs behave like Python integers. + + Any other sub-element is a binary string encoded as a complete DER + sub-element (TLV). + + An example of encoding is: + + >>> from Crypto.Util.asn1 import DerSequence, DerInteger + >>> from binascii import hexlify, unhexlify + >>> obj_der = unhexlify('070102') + >>> seq_der = DerSequence([4]) + >>> seq_der.append(9) + >>> seq_der.append(obj_der.encode()) + >>> print hexlify(seq_der.encode()) + + which will show ``3009020104020109070102``, the DER encoding of the + sequence containing ``4``, ``9``, and the object with payload ``02``. + + For decoding: + + >>> s = unhexlify(b'3009020104020109070102') + >>> try: + >>> seq_der = DerSequence() + >>> seq_der.decode(s) + >>> print len(seq_der) + >>> print seq_der[0] + >>> print seq_der[:] + >>> except ValueError: + >>> print "Not a valid DER SEQUENCE" + + the output will be:: + + 3 + 4 + [4, 9, b'\x07\x01\x02'] + + """ + + def __init__(self, startSeq=None, implicit=None): + """Initialize the DER object as a SEQUENCE. + + :Parameters: + startSeq : Python sequence + A sequence whose element are either integers or + other DER objects. + + implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for SEQUENCE (16). 
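+
+        The identifier octet on the wire is 0x30: tag 16 (0x10) with
+        the constructed bit (0x20) set.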
+ """ + + DerObject.__init__(self, 0x10, b'', implicit, True) + if startSeq is None: + self._seq = [] + else: + self._seq = startSeq + + # A few methods to make it behave like a python sequence + + def __delitem__(self, n): + del self._seq[n] + + def __getitem__(self, n): + return self._seq[n] + + def __setitem__(self, key, value): + self._seq[key] = value + + def __setslice__(self, i, j, sequence): + self._seq[i:j] = sequence + + def __delslice__(self, i, j): + del self._seq[i:j] + + def __getslice__(self, i, j): + return self._seq[max(0, i):max(0, j)] + + def __len__(self): + return len(self._seq) + + def __iadd__(self, item): + self._seq.append(item) + return self + + def append(self, item): + self._seq.append(item) + return self + + def hasInts(self, only_non_negative=True): + """Return the number of items in this sequence that are + integers. + + Args: + only_non_negative (boolean): + If ``True``, negative integers are not counted in. + """ + + items = [x for x in self._seq if _is_number(x, only_non_negative)] + return len(items) + + def hasOnlyInts(self, only_non_negative=True): + """Return ``True`` if all items in this sequence are integers + or non-negative integers. + + This function returns False is the sequence is empty, + or at least one member is not an integer. + + Args: + only_non_negative (boolean): + If ``True``, the presence of negative integers + causes the method to return ``False``.""" + return self._seq and self.hasInts(only_non_negative) == len(self._seq) + + def encode(self): + """Return this DER SEQUENCE, fully encoded as a + binary string. + + Raises: + ValueError: if some elements in the sequence are neither integers + nor byte strings. + """ + self.payload = b'' + for item in self._seq: + if byte_string(item): + self.payload += item + elif _is_number(item): + self.payload += DerInteger(item).encode() + else: + self.payload += item.encode() + return DerObject.encode(self) + + def decode(self, der_encoded, strict=False, nr_elements=None, only_ints_expected=False): + """Decode a complete DER SEQUENCE, and re-initializes this + object with it. + + Args: + der_encoded (byte string): + A complete SEQUENCE DER element. + nr_elements (None or integer or list of integers): + The number of members the SEQUENCE can have + only_ints_expected (boolean): + Whether the SEQUENCE is expected to contain only integers. + strict (boolean): + Whether decoding must check for strict DER compliancy. + + Raises: + ValueError: in case of parsing errors. + + DER INTEGERs are decoded into Python integers. Any other DER + element is not decoded. Its validity is not checked. 
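+
+        For example, nr_elements=2 requires exactly two members, while
+        nr_elements=(2, 3) accepts either two or three.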
+ """ + + self._nr_elements = nr_elements + result = DerObject.decode(self, der_encoded, strict=strict) + + if only_ints_expected and not self.hasOnlyInts(): + raise ValueError("Some members are not INTEGERs") + + return result + + def _decodeFromStream(self, s, strict): + """Decode a complete DER SEQUENCE from a file.""" + + self._seq = [] + + # Fill up self.payload + DerObject._decodeFromStream(self, s, strict) + + # Add one item at a time to self.seq, by scanning self.payload + p = BytesIO_EOF(self.payload) + while p.remaining_data() > 0: + p.set_bookmark() + + der = DerObject() + der._decodeFromStream(p, strict) + + # Parse INTEGERs differently + if der._tag_octet != 0x02: + self._seq.append(p.data_since_bookmark()) + else: + derInt = DerInteger() + #import pdb; pdb.set_trace() + data = p.data_since_bookmark() + derInt.decode(data, strict=strict) + self._seq.append(derInt.value) + + ok = True + if self._nr_elements is not None: + try: + ok = len(self._seq) in self._nr_elements + except TypeError: + ok = len(self._seq) == self._nr_elements + + if not ok: + raise ValueError("Unexpected number of members (%d)" + " in the sequence" % len(self._seq)) + + +class DerOctetString(DerObject): + """Class to model a DER OCTET STRING. + + An example of encoding is: + + >>> from Crypto.Util.asn1 import DerOctetString + >>> from binascii import hexlify, unhexlify + >>> os_der = DerOctetString(b'\\xaa') + >>> os_der.payload += b'\\xbb' + >>> print hexlify(os_der.encode()) + + which will show ``0402aabb``, the DER encoding for the byte string + ``b'\\xAA\\xBB'``. + + For decoding: + + >>> s = unhexlify(b'0402aabb') + >>> try: + >>> os_der = DerOctetString() + >>> os_der.decode(s) + >>> print hexlify(os_der.payload) + >>> except ValueError: + >>> print "Not a valid DER OCTET STRING" + + the output will be ``aabb``. + + :ivar payload: The content of the string + :vartype payload: byte string + """ + + def __init__(self, value=b'', implicit=None): + """Initialize the DER object as an OCTET STRING. + + :Parameters: + value : byte string + The initial payload of the object. + If not specified, the payload is empty. + + implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for OCTET STRING (4). + """ + DerObject.__init__(self, 0x04, value, implicit, False) + + +class DerNull(DerObject): + """Class to model a DER NULL element.""" + + def __init__(self): + """Initialize the DER object as a NULL.""" + + DerObject.__init__(self, 0x05, b'', None, False) + + +class DerObjectId(DerObject): + """Class to model a DER OBJECT ID. + + An example of encoding is: + + >>> from Crypto.Util.asn1 import DerObjectId + >>> from binascii import hexlify, unhexlify + >>> oid_der = DerObjectId("1.2") + >>> oid_der.value += ".840.113549.1.1.1" + >>> print hexlify(oid_der.encode()) + + which will show ``06092a864886f70d010101``, the DER encoding for the + RSA Object Identifier ``1.2.840.113549.1.1.1``. + + For decoding: + + >>> s = unhexlify(b'06092a864886f70d010101') + >>> try: + >>> oid_der = DerObjectId() + >>> oid_der.decode(s) + >>> print oid_der.value + >>> except ValueError: + >>> print "Not a valid DER OBJECT ID" + + the output will be ``1.2.840.113549.1.1.1``. + + :ivar value: The Object ID (OID), a dot separated list of integers + :vartype value: string + """ + + def __init__(self, value='', implicit=None, explicit=None): + """Initialize the DER object as an OBJECT ID. + + :Parameters: + value : string + The initial Object Identifier (e.g. "1.2.0.0.6.2"). 
+ implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for OBJECT ID (6). + explicit : integer + The EXPLICIT tag to use for the encoded object. + """ + DerObject.__init__(self, 0x06, b'', implicit, False, explicit) + self.value = value + + def encode(self): + """Return the DER OBJECT ID, fully encoded as a + binary string.""" + + comps = [int(x) for x in self.value.split(".")] + if len(comps) < 2: + raise ValueError("Not a valid Object Identifier string") + self.payload = bchr(40*comps[0]+comps[1]) + for v in comps[2:]: + if v == 0: + enc = [0] + else: + enc = [] + while v: + enc.insert(0, (v & 0x7F) | 0x80) + v >>= 7 + enc[-1] &= 0x7F + self.payload += b''.join([bchr(x) for x in enc]) + return DerObject.encode(self) + + def decode(self, der_encoded, strict=False): + """Decode a complete DER OBJECT ID, and re-initializes this + object with it. + + Args: + der_encoded (byte string): + A complete DER OBJECT ID. + strict (boolean): + Whether decoding must check for strict DER compliancy. + + Raises: + ValueError: in case of parsing errors. + """ + + return DerObject.decode(self, der_encoded, strict) + + def _decodeFromStream(self, s, strict): + """Decode a complete DER OBJECT ID from a file.""" + + # Fill up self.payload + DerObject._decodeFromStream(self, s, strict) + + # Derive self.value from self.payload + p = BytesIO_EOF(self.payload) + comps = [str(x) for x in divmod(p.read_byte(), 40)] + v = 0 + while p.remaining_data(): + c = p.read_byte() + v = v*128 + (c & 0x7F) + if not (c & 0x80): + comps.append(str(v)) + v = 0 + self.value = '.'.join(comps) + + +class DerBitString(DerObject): + """Class to model a DER BIT STRING. + + An example of encoding is: + + >>> from Crypto.Util.asn1 import DerBitString + >>> bs_der = DerBitString(b'\\xAA') + >>> bs_der.value += b'\\xBB' + >>> print(bs_der.encode().hex()) + + which will show ``030300aabb``, the DER encoding for the bit string + ``b'\\xAA\\xBB'``. + + For decoding: + + >>> s = bytes.fromhex('030300aabb') + >>> try: + >>> bs_der = DerBitString() + >>> bs_der.decode(s) + >>> print(bs_der.value.hex()) + >>> except ValueError: + >>> print "Not a valid DER BIT STRING" + + the output will be ``aabb``. + + :ivar value: The content of the string + :vartype value: byte string + """ + + def __init__(self, value=b'', implicit=None, explicit=None): + """Initialize the DER object as a BIT STRING. + + :Parameters: + value : byte string or DER object + The initial, packed bit string. + If not specified, the bit string is empty. + implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for OCTET STRING (3). + explicit : integer + The EXPLICIT tag to use for the encoded object. + """ + DerObject.__init__(self, 0x03, b'', implicit, False, explicit) + + # The bitstring value (packed) + if isinstance(value, DerObject): + self.value = value.encode() + else: + self.value = value + + def encode(self): + """Return the DER BIT STRING, fully encoded as a + byte string.""" + + # Add padding count byte + self.payload = b'\x00' + self.value + return DerObject.encode(self) + + def decode(self, der_encoded, strict=False): + """Decode a complete DER BIT STRING, and re-initializes this + object with it. + + Args: + der_encoded (byte string): a complete DER BIT STRING. + strict (boolean): + Whether decoding must check for strict DER compliancy. + + Raises: + ValueError: in case of parsing errors. 
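+
+        The first content octet of a BIT STRING is the count of unused
+        bits in the final byte; this class only handles byte-aligned
+        strings, so that octet must be zero (e.g. 03 03 00 AA BB).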
+ """ + + return DerObject.decode(self, der_encoded, strict) + + def _decodeFromStream(self, s, strict): + """Decode a complete DER BIT STRING DER from a file.""" + + # Fill-up self.payload + DerObject._decodeFromStream(self, s, strict) + + if self.payload and bord(self.payload[0]) != 0: + raise ValueError("Not a valid BIT STRING") + + # Fill-up self.value + self.value = b'' + # Remove padding count byte + if self.payload: + self.value = self.payload[1:] + + +class DerSetOf(DerObject): + """Class to model a DER SET OF. + + An example of encoding is: + + >>> from Crypto.Util.asn1 import DerBitString + >>> from binascii import hexlify, unhexlify + >>> so_der = DerSetOf([4,5]) + >>> so_der.add(6) + >>> print hexlify(so_der.encode()) + + which will show ``3109020104020105020106``, the DER encoding + of a SET OF with items 4,5, and 6. + + For decoding: + + >>> s = unhexlify(b'3109020104020105020106') + >>> try: + >>> so_der = DerSetOf() + >>> so_der.decode(s) + >>> print [x for x in so_der] + >>> except ValueError: + >>> print "Not a valid DER SET OF" + + the output will be ``[4, 5, 6]``. + """ + + def __init__(self, startSet=None, implicit=None): + """Initialize the DER object as a SET OF. + + :Parameters: + startSet : container + The initial set of integers or DER encoded objects. + implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for SET OF (17). + """ + DerObject.__init__(self, 0x11, b'', implicit, True) + self._seq = [] + + # All elements must be of the same type (and therefore have the + # same leading octet) + self._elemOctet = None + + if startSet: + for e in startSet: + self.add(e) + + def __getitem__(self, n): + return self._seq[n] + + def __iter__(self): + return iter(self._seq) + + def __len__(self): + return len(self._seq) + + def add(self, elem): + """Add an element to the set. + + Args: + elem (byte string or integer): + An element of the same type of objects already in the set. + It can be an integer or a DER encoded object. + """ + + if _is_number(elem): + eo = 0x02 + elif isinstance(elem, DerObject): + eo = self._tag_octet + else: + eo = bord(elem[0]) + + if self._elemOctet != eo: + if self._elemOctet is not None: + raise ValueError("New element does not belong to the set") + self._elemOctet = eo + + if elem not in self._seq: + self._seq.append(elem) + + def decode(self, der_encoded, strict=False): + """Decode a complete SET OF DER element, and re-initializes this + object with it. + + DER INTEGERs are decoded into Python integers. Any other DER + element is left undecoded; its validity is not checked. + + Args: + der_encoded (byte string): a complete DER BIT SET OF. + strict (boolean): + Whether decoding must check for strict DER compliancy. + + Raises: + ValueError: in case of parsing errors. 
+ """ + + return DerObject.decode(self, der_encoded, strict) + + def _decodeFromStream(self, s, strict): + """Decode a complete DER SET OF from a file.""" + + self._seq = [] + + # Fill up self.payload + DerObject._decodeFromStream(self, s, strict) + + # Add one item at a time to self.seq, by scanning self.payload + p = BytesIO_EOF(self.payload) + setIdOctet = -1 + while p.remaining_data() > 0: + p.set_bookmark() + + der = DerObject() + der._decodeFromStream(p, strict) + + # Verify that all members are of the same type + if setIdOctet < 0: + setIdOctet = der._tag_octet + else: + if setIdOctet != der._tag_octet: + raise ValueError("Not all elements are of the same DER type") + + # Parse INTEGERs differently + if setIdOctet != 0x02: + self._seq.append(p.data_since_bookmark()) + else: + derInt = DerInteger() + derInt.decode(p.data_since_bookmark(), strict) + self._seq.append(derInt.value) + # end + + def encode(self): + """Return this SET OF DER element, fully encoded as a + binary string. + """ + + # Elements in the set must be ordered in lexicographic order + ordered = [] + for item in self._seq: + if _is_number(item): + bys = DerInteger(item).encode() + elif isinstance(item, DerObject): + bys = item.encode() + else: + bys = item + ordered.append(bys) + ordered.sort() + self.payload = b''.join(ordered) + return DerObject.encode(self) diff --git a/env/lib/python3.8/site-packages/Crypto/Util/asn1.pyi b/env/lib/python3.8/site-packages/Crypto/Util/asn1.pyi new file mode 100644 index 00000000..dac023be --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/asn1.pyi @@ -0,0 +1,74 @@ +from typing import Optional, Sequence, Union, Set, Iterable + +__all__ = ['DerObject', 'DerInteger', 'DerOctetString', 'DerNull', + 'DerSequence', 'DerObjectId', 'DerBitString', 'DerSetOf'] + +# TODO: Make the encoded DerObjects their own type, so that DerSequence and +# DerSetOf can check their contents better + +class BytesIO_EOF: + def __init__(self, initial_bytes: bytes) -> None: ... + def set_bookmark(self) -> None: ... + def data_since_bookmark(self) -> bytes: ... + def remaining_data(self) -> int: ... + def read(self, length: int) -> bytes: ... + def read_byte(self) -> bytes: ... + +class DerObject: + payload: bytes + def __init__(self, asn1Id: Optional[int]=None, payload: Optional[bytes]=..., implicit: Optional[int]=None, + constructed: Optional[bool]=False, explicit: Optional[int]=None) -> None: ... + def encode(self) -> bytes: ... + def decode(self, der_encoded: bytes, strict: Optional[bool]=False) -> DerObject: ... + +class DerInteger(DerObject): + value: int + def __init__(self, value: Optional[int]= 0, implicit: Optional[int]=None, explicit: Optional[int]=None) -> None: ... + def encode(self) -> bytes: ... + def decode(self, der_encoded: bytes, strict: Optional[bool]=False) -> DerInteger: ... + +class DerSequence(DerObject): + def __init__(self, startSeq: Optional[Sequence[Union[int, DerInteger, DerObject]]]=None, implicit: Optional[int]=None) -> None: ... + def __delitem__(self, n: int) -> None: ... + def __getitem__(self, n: int) -> None: ... + def __setitem__(self, key: int, value: DerObject) -> None: ... + def __setslice__(self, i: int, j: int, sequence: Sequence) -> None: ... + def __delslice__(self, i: int, j: int) -> None: ... + def __getslice__(self, i: int, j: int) -> DerSequence: ... + def __len__(self) -> int: ... + def __iadd__(self, item: DerObject) -> DerSequence: ... + def append(self, item: DerObject) -> DerSequence: ... 
+ def hasInts(self, only_non_negative: Optional[bool]=True) -> int: ... + def hasOnlyInts(self, only_non_negative: Optional[bool]=True) -> bool: ... + def encode(self) -> bytes: ... + def decode(self, der_encoded: bytes, strict: Optional[bool]=False, nr_elements: Optional[int]=None, only_ints_expected: Optional[bool]=False) -> DerSequence: ... + +class DerOctetString(DerObject): + payload: bytes + def __init__(self, value: Optional[bytes]=..., implicit: Optional[int]=None) -> None: ... + +class DerNull(DerObject): + def __init__(self) -> None: ... + +class DerObjectId(DerObject): + value: str + def __init__(self, value: Optional[str]=..., implicit: Optional[int]=None, explicit: Optional[int]=None) -> None: ... + def encode(self) -> bytes: ... + def decode(self, der_encoded: bytes, strict: Optional[bool]=False) -> DerObjectId: ... + +class DerBitString(DerObject): + value: bytes + def __init__(self, value: Optional[bytes]=..., implicit: Optional[int]=None, explicit: Optional[int]=None) -> None: ... + def encode(self) -> bytes: ... + def decode(self, der_encoded: bytes, strict: Optional[bool]=False) -> DerBitString: ... + +DerSetElement = Union[bytes, int] + +class DerSetOf(DerObject): + def __init__(self, startSet: Optional[Set[DerSetElement]]=None, implicit: Optional[int]=None) -> None: ... + def __getitem__(self, n: int) -> DerSetElement: ... + def __iter__(self) -> Iterable: ... + def __len__(self) -> int: ... + def add(self, elem: DerSetElement) -> None: ... + def decode(self, der_encoded: bytes, strict: Optional[bool]=False) -> DerObject: ... + def encode(self) -> bytes: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Util/number.py b/env/lib/python3.8/site-packages/Crypto/Util/number.py new file mode 100644 index 00000000..279ffe0c --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/number.py @@ -0,0 +1,1500 @@ +# +# number.py : Number-theoretic functions +# +# Part of the Python Cryptography Toolkit +# +# Written by Andrew M. Kuchling, Barry A. Warsaw, and others +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# ===================================================================
+#
+
+import math
+import sys
+import struct
+from Crypto import Random
+from Crypto.Util.py3compat import iter_range
+
+# Backward compatibility
+_fastmath = None
+
+
+def ceil_div(n, d):
+    """Return ceil(n/d), that is, the smallest integer r such that r*d >= n"""
+
+    if d == 0:
+        raise ZeroDivisionError()
+    if (n < 0) or (d < 0):
+        raise ValueError("Non positive values")
+    r, q = divmod(n, d)
+    if (n != 0) and (q != 0):
+        r += 1
+    return r
+
+
+def size (N):
+    """Returns the size of the number N in bits."""
+
+    if N < 0:
+        raise ValueError("Size in bits only available for non-negative numbers")
+
+    bits = 0
+    while N >> bits:
+        bits += 1
+    return bits
+
+
+def getRandomInteger(N, randfunc=None):
+    """Return a random number at most N bits long.
+
+    If :data:`randfunc` is omitted, then :meth:`Random.get_random_bytes` is used.
+
+    .. deprecated:: 3.0
+        This function is for internal use only and may be renamed or removed in
+        the future. Use :func:`Crypto.Random.random.getrandbits` instead.
+    """
+
+    if randfunc is None:
+        randfunc = Random.get_random_bytes
+
+    S = randfunc(N>>3)
+    odd_bits = N % 8
+    if odd_bits != 0:
+        rand_bits = ord(randfunc(1)) >> (8-odd_bits)
+        S = struct.pack('B', rand_bits) + S
+    value = bytes_to_long(S)
+    return value
+
+def getRandomRange(a, b, randfunc=None):
+    """Return a random number *n* so that *a <= n < b*.
+
+    If :data:`randfunc` is omitted, then :meth:`Random.get_random_bytes` is used.
+
+    .. deprecated:: 3.0
+        This function is for internal use only and may be renamed or removed in
+        the future. Use :func:`Crypto.Random.random.randrange` instead.
+    """
+
+    range_ = b - a - 1
+    bits = size(range_)
+    value = getRandomInteger(bits, randfunc)
+    while value > range_:
+        value = getRandomInteger(bits, randfunc)
+    return a + value
+
+def getRandomNBitInteger(N, randfunc=None):
+    """Return a random number with exactly N-bits,
+    i.e. a random number between 2**(N-1) and (2**N)-1.
+
+    If :data:`randfunc` is omitted, then :meth:`Random.get_random_bytes` is used.
+
+    .. deprecated:: 3.0
+        This function is for internal use only and may be renamed or removed in
+        the future.
+    """
+
+    value = getRandomInteger (N-1, randfunc)
+    value |= 2 ** (N-1)  # Ensure high bit is set
+    assert size(value) >= N
+    return value
+
+def GCD(x,y):
+    """Greatest Common Divisor of :data:`x` and :data:`y`.
+    """
+
+    x = abs(x) ; y = abs(y)
+    while x > 0:
+        x, y = y % x, x
+    return y
+
+def inverse(u, v):
+    """The inverse of :data:`u` *mod* :data:`v`."""
+
+    u3, v3 = u, v
+    u1, v1 = 1, 0
+    while v3 > 0:
+        q = u3 // v3
+        u1, v1 = v1, u1 - v1*q
+        u3, v3 = v3, u3 - v3*q
+    while u1<0:
+        u1 = u1 + v
+    return u1
+
+# Given a number of bits to generate and a random generation function,
+# find a prime number of the appropriate size.
+
+def getPrime(N, randfunc=None):
+    """Return a random N-bit prime number.
+
+    N must be an integer larger than 1.
+    If randfunc is omitted, then :meth:`Random.get_random_bytes` is used.
+    """
+    if randfunc is None:
+        randfunc = Random.get_random_bytes
+
+    if N < 2:
+        raise ValueError("N must be larger than 1")
+
+    while True:
+        number = getRandomNBitInteger(N, randfunc) | 1
+        if isPrime(number, randfunc=randfunc):
+            break
+    return number
+
+
+def _rabinMillerTest(n, rounds, randfunc=None):
+    """_rabinMillerTest(n:long, rounds:int, randfunc:callable):int
+    Tests if n is prime.
+    Returns 0 when n is definitely composite.
+    Returns 1 when n is probably prime.
+    Returns 2 when n is definitely prime.
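+    The test writes n - 1 as 2**b * m with m odd (for example, with
+    n = 221, n - 1 = 220 = 2**2 * 55, so b = 2 and m = 55) and, for each
+    random base a, examines a**m mod n followed by up to b successive
+    squarings of it.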
+ + If randfunc is omitted, then Random.new().read is used. + + This function is for internal use only and may be renamed or removed in + the future. + """ + # check special cases (n==2, n even, n < 2) + if n < 3 or (n & 1) == 0: + return n == 2 + # n might be very large so it might be beneficial to precalculate n-1 + n_1 = n - 1 + # determine m and b so that 2**b * m = n - 1 and b maximal + b = 0 + m = n_1 + while (m & 1) == 0: + b += 1 + m >>= 1 + + tested = [] + # we need to do at most n-2 rounds. + for i in iter_range (min (rounds, n-2)): + # randomly choose a < n and make sure it hasn't been tested yet + a = getRandomRange (2, n, randfunc) + while a in tested: + a = getRandomRange (2, n, randfunc) + tested.append (a) + # do the rabin-miller test + z = pow (a, m, n) # (a**m) % n + if z == 1 or z == n_1: + continue + composite = 1 + for r in iter_range(b): + z = (z * z) % n + if z == 1: + return 0 + elif z == n_1: + composite = 0 + break + if composite: + return 0 + return 1 + +def getStrongPrime(N, e=0, false_positive_prob=1e-6, randfunc=None): + r""" + Return a random strong *N*-bit prime number. + In this context, *p* is a strong prime if *p-1* and *p+1* have at + least one large prime factor. + + Args: + N (integer): the exact length of the strong prime. + It must be a multiple of 128 and > 512. + e (integer): if provided, the returned prime (minus 1) + will be coprime to *e* and thus suitable for RSA where + *e* is the public exponent. + false_positive_prob (float): + The statistical probability for the result not to be actually a + prime. It defaults to 10\ :sup:`-6`. + Note that the real probability of a false-positive is far less. This is + just the mathematically provable limit. + randfunc (callable): + A function that takes a parameter *N* and that returns + a random byte string of such length. + If omitted, :func:`Crypto.Random.get_random_bytes` is used. + Return: + The new strong prime. + + .. deprecated:: 3.0 + This function is for internal use only and may be renamed or removed in + the future. + """ + + # This function was implemented following the + # instructions found in the paper: + # "FAST GENERATION OF RANDOM, STRONG RSA PRIMES" + # by Robert D. 
Silverman
+    #   RSA Laboratories
+    #   May 17, 1997
+    #   which by the time of writing could be freely downloaded here:
+    #   http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.2713&rep=rep1&type=pdf
+
+    if randfunc is None:
+        randfunc = Random.get_random_bytes
+
+    # Use the accelerator if available
+    if _fastmath is not None:
+        return _fastmath.getStrongPrime(long(N), long(e), false_positive_prob,
+                                        randfunc)
+
+    if (N < 512) or ((N % 128) != 0):
+        raise ValueError ("bits must be multiple of 128 and > 512")
+
+    rabin_miller_rounds = int(math.ceil(-math.log(false_positive_prob)/math.log(4)))
+
+    # calculate range for X
+    #   lower_bound = sqrt(2) * 2^{511 + 128*x}
+    #   upper_bound = 2^{512 + 128*x} - 1
+    x = (N - 512) >> 7
+    # We need to approximate the sqrt(2) in the lower_bound by an integer
+    # expression because floating point math overflows with these numbers
+    lower_bound = (14142135623730950489 * (2 ** (511 + 128*x))) // 10000000000000000000
+    upper_bound = (1 << (512 + 128*x)) - 1
+    # Randomly choose X in calculated range
+    X = getRandomRange (lower_bound, upper_bound, randfunc)
+
+    # generate p1 and p2
+    p = [0, 0]
+    for i in (0, 1):
+        # randomly choose 101-bit y
+        y = getRandomNBitInteger (101, randfunc)
+        # initialize the field for sieving
+        field = [0] * 5 * len (sieve_base)
+        # sieve the field
+        for prime in sieve_base:
+            offset = y % prime
+            for j in iter_range((prime - offset) % prime, len (field), prime):
+                field[j] = 1
+
+        # look for suitable p[i] starting at y
+        result = 0
+        for j in range(len(field)):
+            composite = field[j]
+            # look for next candidate
+            if composite:
+                continue
+            tmp = y + j
+            result = _rabinMillerTest (tmp, rabin_miller_rounds)
+            if result > 0:
+                p[i] = tmp
+                break
+        if result == 0:
+            raise RuntimeError ("Couldn't find prime in field. "
+                                "Developer: Increase field_size")
+
+    # Calculate R
+    #     R = (p2^{-1} mod p1) * p2 - (p1^{-1} mod p2) * p1
+    tmp1 = inverse (p[1], p[0]) * p[1]  # (p2^-1 mod p1)*p2
+    tmp2 = inverse (p[0], p[1]) * p[0]  # (p1^-1 mod p2)*p1
+    R = tmp1 - tmp2  # (p2^-1 mod p1)*p2 - (p1^-1 mod p2)*p1
+
+    # search for final prime number starting by Y0
+    #    Y0 = X + (R - X mod p1p2)
+    increment = p[0] * p[1]
+    X = X + (R - (X % increment))
+    while 1:
+        is_possible_prime = 1
+        # first check candidate against sieve_base
+        for prime in sieve_base:
+            if (X % prime) == 0:
+                is_possible_prime = 0
+                break
+        # if e is given make sure that e and X-1 are coprime
+        # this is not necessarily a strong prime criterion but useful when
+        # creating them for RSA where the p-1 and q-1 should be coprime to
+        # the public exponent e
+        if e and is_possible_prime:
+            if e & 1:
+                if GCD(e, X-1) != 1:
+                    is_possible_prime = 0
+            else:
+                if GCD(e, (X-1) // 2) != 1:
+                    is_possible_prime = 0
+
+        # do some Rabin-Miller-Tests
+        if is_possible_prime:
+            result = _rabinMillerTest (X, rabin_miller_rounds)
+            if result > 0:
+                break
+        X += increment
+        # abort when X has more bits than requested
+        # TODO: maybe we shouldn't abort but rather start over.
+        if X >= 1 << N:
+            raise RuntimeError ("Couldn't find prime in field. "
+                                "Developer: Increase field_size")
+    return X
+
+def isPrime(N, false_positive_prob=1e-6, randfunc=None):
+    r"""Test if a number *N* is a prime.
+
+    Args:
+        false_positive_prob (float):
+          The statistical probability for the result not to be actually a
+          prime. It defaults to 10\ :sup:`-6`.
+          Note that the real probability of a false-positive is far less.
+          This is just the mathematically provable limit.
+        randfunc (callable):
+          A function that takes a parameter *N* and that returns
+          a random byte string of such length.
+          If omitted, :func:`Crypto.Random.get_random_bytes` is used.
+
+    Return:
+        `True` if the input is indeed prime.
+    """
+
+    if randfunc is None:
+        randfunc = Random.get_random_bytes
+
+    if _fastmath is not None:
+        return _fastmath.isPrime(long(N), false_positive_prob, randfunc)
+
+    if N < 3 or N & 1 == 0:
+        return N == 2
+    for p in sieve_base:
+        if N == p:
+            return 1
+        if N % p == 0:
+            return 0
+
+    rounds = int(math.ceil(-math.log(false_positive_prob)/math.log(4)))
+    return _rabinMillerTest(N, rounds, randfunc)
+
+
+# Improved conversion functions contributed by Barry Warsaw, after
+# careful benchmarking
+
+import struct
+
+def long_to_bytes(n, blocksize=0):
+    """Convert a positive integer to a byte string using big endian encoding.
+
+    If :data:`blocksize` is absent or zero, the byte string will
+    be of minimal length.
+
+    Otherwise, the length of the byte string is guaranteed to be a multiple
+    of :data:`blocksize`. If necessary, zeroes (``\\x00``) are added at the left.
+
+    .. note::
+        In Python 3, if you are sure that :data:`n` can fit into
+        :data:`blocksize` bytes, you can simply use the native method instead::
+
+            >>> n.to_bytes(blocksize, 'big')
+
+        For instance::
+
+            >>> n = 80
+            >>> n.to_bytes(2, 'big')
+            b'\\x00P'
+
+        However, and unlike this ``long_to_bytes()`` function,
+        an ``OverflowError`` exception is raised if :data:`n` does not fit.
+    """
+
+    if n < 0 or blocksize < 0:
+        raise ValueError("Values must be non-negative")
+
+    result = []
+    pack = struct.pack
+
+    # Fill the first block independently from the value of n
+    bsr = blocksize
+    while bsr >= 8:
+        result.insert(0, pack('>Q', n & 0xFFFFFFFFFFFFFFFF))
+        n = n >> 64
+        bsr -= 8
+
+    while bsr >= 4:
+        result.insert(0, pack('>I', n & 0xFFFFFFFF))
+        n = n >> 32
+        bsr -= 4
+
+    while bsr > 0:
+        result.insert(0, pack('>B', n & 0xFF))
+        n = n >> 8
+        bsr -= 1
+
+    if n == 0:
+        if len(result) == 0:
+            bresult = b'\x00'
+        else:
+            bresult = b''.join(result)
+    else:
+        # The encoded number exceeds the block size
+        while n > 0:
+            result.insert(0, pack('>Q', n & 0xFFFFFFFFFFFFFFFF))
+            n = n >> 64
+        result[0] = result[0].lstrip(b'\x00')
+        bresult = b''.join(result)
+        # bresult has minimum length here
+        if blocksize > 0:
+            target_len = ((len(bresult) - 1) // blocksize + 1) * blocksize
+            bresult = b'\x00' * (target_len - len(bresult)) + bresult
+
+    return bresult
+
+
+def bytes_to_long(s):
+    """Convert a byte string to a long integer (big endian).
+
+    In Python 3.2+, use the native method instead::
+
+        >>> int.from_bytes(s, 'big')
+
+    For instance::
+
+        >>> int.from_bytes(b'\x00P', 'big')
+        80
+
+    This is (essentially) the inverse of :func:`long_to_bytes`.
+    """
+    acc = 0
+
+    unpack = struct.unpack
+
+    # Up to Python 2.7.4, struct.unpack can't work with bytearrays nor
+    # memoryviews
+    if sys.version_info[0:3] < (2, 7, 4):
+        if isinstance(s, bytearray):
+            s = bytes(s)
+        elif isinstance(s, memoryview):
+            s = s.tobytes()
+
+    length = len(s)
+    if length % 4:
+        extra = (4 - length % 4)
+        s = b'\x00' * extra + s
+        length = length + extra
+    for i in range(0, length, 4):
+        acc = (acc << 32) + unpack('>I', s[i:i+4])[0]
+    return acc
+
+
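+# A quick round-trip check of the two converters (illustrative):
+#
+#   bytes_to_long(b'\x00P') == 80
+#   long_to_bytes(80) == b'P'
+#   long_to_bytes(80, 2) == b'\x00P'
+#
+# Leading zero bytes are preserved only when a blocksize is requested.
+
+# For backwards compatibility...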
+import warnings +def long2str(n, blocksize=0): + warnings.warn("long2str() has been replaced by long_to_bytes()") + return long_to_bytes(n, blocksize) +def str2long(s): + warnings.warn("str2long() has been replaced by bytes_to_long()") + return bytes_to_long(s) + + +# The first 10000 primes used for checking primality. +# This should be enough to eliminate most of the odd +# numbers before needing to do a Rabin-Miller test at all. +sieve_base = ( + 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, + 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, + 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, + 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, + 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, + 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, + 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, + 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, + 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, + 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, + 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, + 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, + 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, + 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, + 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, + 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, + 947, 953, 967, 971, 977, 983, 991, 997, 1009, 1013, + 1019, 1021, 1031, 1033, 1039, 1049, 1051, 1061, 1063, 1069, + 1087, 1091, 1093, 1097, 1103, 1109, 1117, 1123, 1129, 1151, + 1153, 1163, 1171, 1181, 1187, 1193, 1201, 1213, 1217, 1223, + 1229, 1231, 1237, 1249, 1259, 1277, 1279, 1283, 1289, 1291, + 1297, 1301, 1303, 1307, 1319, 1321, 1327, 1361, 1367, 1373, + 1381, 1399, 1409, 1423, 1427, 1429, 1433, 1439, 1447, 1451, + 1453, 1459, 1471, 1481, 1483, 1487, 1489, 1493, 1499, 1511, + 1523, 1531, 1543, 1549, 1553, 1559, 1567, 1571, 1579, 1583, + 1597, 1601, 1607, 1609, 1613, 1619, 1621, 1627, 1637, 1657, + 1663, 1667, 1669, 1693, 1697, 1699, 1709, 1721, 1723, 1733, + 1741, 1747, 1753, 1759, 1777, 1783, 1787, 1789, 1801, 1811, + 1823, 1831, 1847, 1861, 1867, 1871, 1873, 1877, 1879, 1889, + 1901, 1907, 1913, 1931, 1933, 1949, 1951, 1973, 1979, 1987, + 1993, 1997, 1999, 2003, 2011, 2017, 2027, 2029, 2039, 2053, + 2063, 2069, 2081, 2083, 2087, 2089, 2099, 2111, 2113, 2129, + 2131, 2137, 2141, 2143, 2153, 2161, 2179, 2203, 2207, 2213, + 2221, 2237, 2239, 2243, 2251, 2267, 2269, 2273, 2281, 2287, + 2293, 2297, 2309, 2311, 2333, 2339, 2341, 2347, 2351, 2357, + 2371, 2377, 2381, 2383, 2389, 2393, 2399, 2411, 2417, 2423, + 2437, 2441, 2447, 2459, 2467, 2473, 2477, 2503, 2521, 2531, + 2539, 2543, 2549, 2551, 2557, 2579, 2591, 2593, 2609, 2617, + 2621, 2633, 2647, 2657, 2659, 2663, 2671, 2677, 2683, 2687, + 2689, 2693, 2699, 2707, 2711, 2713, 2719, 2729, 2731, 2741, + 2749, 2753, 2767, 2777, 2789, 2791, 2797, 2801, 2803, 2819, + 2833, 2837, 2843, 2851, 2857, 2861, 2879, 2887, 2897, 2903, + 2909, 2917, 2927, 2939, 2953, 2957, 2963, 2969, 2971, 2999, + 3001, 3011, 3019, 3023, 3037, 3041, 3049, 3061, 3067, 3079, + 3083, 3089, 3109, 3119, 3121, 3137, 3163, 3167, 3169, 3181, + 3187, 3191, 3203, 3209, 3217, 3221, 3229, 3251, 3253, 3257, + 3259, 3271, 3299, 3301, 3307, 3313, 3319, 3323, 3329, 3331, + 3343, 3347, 3359, 3361, 3371, 3373, 3389, 3391, 3407, 3413, + 3433, 3449, 3457, 3461, 3463, 3467, 3469, 3491, 3499, 3511, + 3517, 3527, 3529, 3533, 3539, 3541, 3547, 3557, 3559, 3571, + 3581, 3583, 3593, 3607, 3613, 3617, 3623, 3631, 3637, 3643, + 3659, 3671, 3673, 3677, 3691, 3697, 3701, 3709, 3719, 3727, + 3733, 3739, 3761, 3767, 3769, 3779, 3793, 3797, 3803, 3821, + 3823, 3833, 
3847, 3851, 3853, 3863, 3877, 3881, 3889, 3907, + 3911, 3917, 3919, 3923, 3929, 3931, 3943, 3947, 3967, 3989, + 4001, 4003, 4007, 4013, 4019, 4021, 4027, 4049, 4051, 4057, + 4073, 4079, 4091, 4093, 4099, 4111, 4127, 4129, 4133, 4139, + 4153, 4157, 4159, 4177, 4201, 4211, 4217, 4219, 4229, 4231, + 4241, 4243, 4253, 4259, 4261, 4271, 4273, 4283, 4289, 4297, + 4327, 4337, 4339, 4349, 4357, 4363, 4373, 4391, 4397, 4409, + 4421, 4423, 4441, 4447, 4451, 4457, 4463, 4481, 4483, 4493, + 4507, 4513, 4517, 4519, 4523, 4547, 4549, 4561, 4567, 4583, + 4591, 4597, 4603, 4621, 4637, 4639, 4643, 4649, 4651, 4657, + 4663, 4673, 4679, 4691, 4703, 4721, 4723, 4729, 4733, 4751, + 4759, 4783, 4787, 4789, 4793, 4799, 4801, 4813, 4817, 4831, + 4861, 4871, 4877, 4889, 4903, 4909, 4919, 4931, 4933, 4937, + 4943, 4951, 4957, 4967, 4969, 4973, 4987, 4993, 4999, 5003, + 5009, 5011, 5021, 5023, 5039, 5051, 5059, 5077, 5081, 5087, + 5099, 5101, 5107, 5113, 5119, 5147, 5153, 5167, 5171, 5179, + 5189, 5197, 5209, 5227, 5231, 5233, 5237, 5261, 5273, 5279, + 5281, 5297, 5303, 5309, 5323, 5333, 5347, 5351, 5381, 5387, + 5393, 5399, 5407, 5413, 5417, 5419, 5431, 5437, 5441, 5443, + 5449, 5471, 5477, 5479, 5483, 5501, 5503, 5507, 5519, 5521, + 5527, 5531, 5557, 5563, 5569, 5573, 5581, 5591, 5623, 5639, + 5641, 5647, 5651, 5653, 5657, 5659, 5669, 5683, 5689, 5693, + 5701, 5711, 5717, 5737, 5741, 5743, 5749, 5779, 5783, 5791, + 5801, 5807, 5813, 5821, 5827, 5839, 5843, 5849, 5851, 5857, + 5861, 5867, 5869, 5879, 5881, 5897, 5903, 5923, 5927, 5939, + 5953, 5981, 5987, 6007, 6011, 6029, 6037, 6043, 6047, 6053, + 6067, 6073, 6079, 6089, 6091, 6101, 6113, 6121, 6131, 6133, + 6143, 6151, 6163, 6173, 6197, 6199, 6203, 6211, 6217, 6221, + 6229, 6247, 6257, 6263, 6269, 6271, 6277, 6287, 6299, 6301, + 6311, 6317, 6323, 6329, 6337, 6343, 6353, 6359, 6361, 6367, + 6373, 6379, 6389, 6397, 6421, 6427, 6449, 6451, 6469, 6473, + 6481, 6491, 6521, 6529, 6547, 6551, 6553, 6563, 6569, 6571, + 6577, 6581, 6599, 6607, 6619, 6637, 6653, 6659, 6661, 6673, + 6679, 6689, 6691, 6701, 6703, 6709, 6719, 6733, 6737, 6761, + 6763, 6779, 6781, 6791, 6793, 6803, 6823, 6827, 6829, 6833, + 6841, 6857, 6863, 6869, 6871, 6883, 6899, 6907, 6911, 6917, + 6947, 6949, 6959, 6961, 6967, 6971, 6977, 6983, 6991, 6997, + 7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, + 7109, 7121, 7127, 7129, 7151, 7159, 7177, 7187, 7193, 7207, + 7211, 7213, 7219, 7229, 7237, 7243, 7247, 7253, 7283, 7297, + 7307, 7309, 7321, 7331, 7333, 7349, 7351, 7369, 7393, 7411, + 7417, 7433, 7451, 7457, 7459, 7477, 7481, 7487, 7489, 7499, + 7507, 7517, 7523, 7529, 7537, 7541, 7547, 7549, 7559, 7561, + 7573, 7577, 7583, 7589, 7591, 7603, 7607, 7621, 7639, 7643, + 7649, 7669, 7673, 7681, 7687, 7691, 7699, 7703, 7717, 7723, + 7727, 7741, 7753, 7757, 7759, 7789, 7793, 7817, 7823, 7829, + 7841, 7853, 7867, 7873, 7877, 7879, 7883, 7901, 7907, 7919, + 7927, 7933, 7937, 7949, 7951, 7963, 7993, 8009, 8011, 8017, + 8039, 8053, 8059, 8069, 8081, 8087, 8089, 8093, 8101, 8111, + 8117, 8123, 8147, 8161, 8167, 8171, 8179, 8191, 8209, 8219, + 8221, 8231, 8233, 8237, 8243, 8263, 8269, 8273, 8287, 8291, + 8293, 8297, 8311, 8317, 8329, 8353, 8363, 8369, 8377, 8387, + 8389, 8419, 8423, 8429, 8431, 8443, 8447, 8461, 8467, 8501, + 8513, 8521, 8527, 8537, 8539, 8543, 8563, 8573, 8581, 8597, + 8599, 8609, 8623, 8627, 8629, 8641, 8647, 8663, 8669, 8677, + 8681, 8689, 8693, 8699, 8707, 8713, 8719, 8731, 8737, 8741, + 8747, 8753, 8761, 8779, 8783, 8803, 8807, 8819, 8821, 8831, + 8837, 8839, 8849, 8861, 8863, 
8867, 8887, 8893, 8923, 8929, + 8933, 8941, 8951, 8963, 8969, 8971, 8999, 9001, 9007, 9011, + 9013, 9029, 9041, 9043, 9049, 9059, 9067, 9091, 9103, 9109, + 9127, 9133, 9137, 9151, 9157, 9161, 9173, 9181, 9187, 9199, + 9203, 9209, 9221, 9227, 9239, 9241, 9257, 9277, 9281, 9283, + 9293, 9311, 9319, 9323, 9337, 9341, 9343, 9349, 9371, 9377, + 9391, 9397, 9403, 9413, 9419, 9421, 9431, 9433, 9437, 9439, + 9461, 9463, 9467, 9473, 9479, 9491, 9497, 9511, 9521, 9533, + 9539, 9547, 9551, 9587, 9601, 9613, 9619, 9623, 9629, 9631, + 9643, 9649, 9661, 9677, 9679, 9689, 9697, 9719, 9721, 9733, + 9739, 9743, 9749, 9767, 9769, 9781, 9787, 9791, 9803, 9811, + 9817, 9829, 9833, 9839, 9851, 9857, 9859, 9871, 9883, 9887, + 9901, 9907, 9923, 9929, 9931, 9941, 9949, 9967, 9973, 10007, + 10009, 10037, 10039, 10061, 10067, 10069, 10079, 10091, 10093, 10099, + 10103, 10111, 10133, 10139, 10141, 10151, 10159, 10163, 10169, 10177, + 10181, 10193, 10211, 10223, 10243, 10247, 10253, 10259, 10267, 10271, + 10273, 10289, 10301, 10303, 10313, 10321, 10331, 10333, 10337, 10343, + 10357, 10369, 10391, 10399, 10427, 10429, 10433, 10453, 10457, 10459, + 10463, 10477, 10487, 10499, 10501, 10513, 10529, 10531, 10559, 10567, + 10589, 10597, 10601, 10607, 10613, 10627, 10631, 10639, 10651, 10657, + 10663, 10667, 10687, 10691, 10709, 10711, 10723, 10729, 10733, 10739, + 10753, 10771, 10781, 10789, 10799, 10831, 10837, 10847, 10853, 10859, + 10861, 10867, 10883, 10889, 10891, 10903, 10909, 10937, 10939, 10949, + 10957, 10973, 10979, 10987, 10993, 11003, 11027, 11047, 11057, 11059, + 11069, 11071, 11083, 11087, 11093, 11113, 11117, 11119, 11131, 11149, + 11159, 11161, 11171, 11173, 11177, 11197, 11213, 11239, 11243, 11251, + 11257, 11261, 11273, 11279, 11287, 11299, 11311, 11317, 11321, 11329, + 11351, 11353, 11369, 11383, 11393, 11399, 11411, 11423, 11437, 11443, + 11447, 11467, 11471, 11483, 11489, 11491, 11497, 11503, 11519, 11527, + 11549, 11551, 11579, 11587, 11593, 11597, 11617, 11621, 11633, 11657, + 11677, 11681, 11689, 11699, 11701, 11717, 11719, 11731, 11743, 11777, + 11779, 11783, 11789, 11801, 11807, 11813, 11821, 11827, 11831, 11833, + 11839, 11863, 11867, 11887, 11897, 11903, 11909, 11923, 11927, 11933, + 11939, 11941, 11953, 11959, 11969, 11971, 11981, 11987, 12007, 12011, + 12037, 12041, 12043, 12049, 12071, 12073, 12097, 12101, 12107, 12109, + 12113, 12119, 12143, 12149, 12157, 12161, 12163, 12197, 12203, 12211, + 12227, 12239, 12241, 12251, 12253, 12263, 12269, 12277, 12281, 12289, + 12301, 12323, 12329, 12343, 12347, 12373, 12377, 12379, 12391, 12401, + 12409, 12413, 12421, 12433, 12437, 12451, 12457, 12473, 12479, 12487, + 12491, 12497, 12503, 12511, 12517, 12527, 12539, 12541, 12547, 12553, + 12569, 12577, 12583, 12589, 12601, 12611, 12613, 12619, 12637, 12641, + 12647, 12653, 12659, 12671, 12689, 12697, 12703, 12713, 12721, 12739, + 12743, 12757, 12763, 12781, 12791, 12799, 12809, 12821, 12823, 12829, + 12841, 12853, 12889, 12893, 12899, 12907, 12911, 12917, 12919, 12923, + 12941, 12953, 12959, 12967, 12973, 12979, 12983, 13001, 13003, 13007, + 13009, 13033, 13037, 13043, 13049, 13063, 13093, 13099, 13103, 13109, + 13121, 13127, 13147, 13151, 13159, 13163, 13171, 13177, 13183, 13187, + 13217, 13219, 13229, 13241, 13249, 13259, 13267, 13291, 13297, 13309, + 13313, 13327, 13331, 13337, 13339, 13367, 13381, 13397, 13399, 13411, + 13417, 13421, 13441, 13451, 13457, 13463, 13469, 13477, 13487, 13499, + 13513, 13523, 13537, 13553, 13567, 13577, 13591, 13597, 13613, 13619, + 13627, 13633, 13649, 13669, 13679, 13681, 
13687, 13691, 13693, 13697, + 13709, 13711, 13721, 13723, 13729, 13751, 13757, 13759, 13763, 13781, + 13789, 13799, 13807, 13829, 13831, 13841, 13859, 13873, 13877, 13879, + 13883, 13901, 13903, 13907, 13913, 13921, 13931, 13933, 13963, 13967, + 13997, 13999, 14009, 14011, 14029, 14033, 14051, 14057, 14071, 14081, + 14083, 14087, 14107, 14143, 14149, 14153, 14159, 14173, 14177, 14197, + 14207, 14221, 14243, 14249, 14251, 14281, 14293, 14303, 14321, 14323, + 14327, 14341, 14347, 14369, 14387, 14389, 14401, 14407, 14411, 14419, + 14423, 14431, 14437, 14447, 14449, 14461, 14479, 14489, 14503, 14519, + 14533, 14537, 14543, 14549, 14551, 14557, 14561, 14563, 14591, 14593, + 14621, 14627, 14629, 14633, 14639, 14653, 14657, 14669, 14683, 14699, + 14713, 14717, 14723, 14731, 14737, 14741, 14747, 14753, 14759, 14767, + 14771, 14779, 14783, 14797, 14813, 14821, 14827, 14831, 14843, 14851, + 14867, 14869, 14879, 14887, 14891, 14897, 14923, 14929, 14939, 14947, + 14951, 14957, 14969, 14983, 15013, 15017, 15031, 15053, 15061, 15073, + 15077, 15083, 15091, 15101, 15107, 15121, 15131, 15137, 15139, 15149, + 15161, 15173, 15187, 15193, 15199, 15217, 15227, 15233, 15241, 15259, + 15263, 15269, 15271, 15277, 15287, 15289, 15299, 15307, 15313, 15319, + 15329, 15331, 15349, 15359, 15361, 15373, 15377, 15383, 15391, 15401, + 15413, 15427, 15439, 15443, 15451, 15461, 15467, 15473, 15493, 15497, + 15511, 15527, 15541, 15551, 15559, 15569, 15581, 15583, 15601, 15607, + 15619, 15629, 15641, 15643, 15647, 15649, 15661, 15667, 15671, 15679, + 15683, 15727, 15731, 15733, 15737, 15739, 15749, 15761, 15767, 15773, + 15787, 15791, 15797, 15803, 15809, 15817, 15823, 15859, 15877, 15881, + 15887, 15889, 15901, 15907, 15913, 15919, 15923, 15937, 15959, 15971, + 15973, 15991, 16001, 16007, 16033, 16057, 16061, 16063, 16067, 16069, + 16073, 16087, 16091, 16097, 16103, 16111, 16127, 16139, 16141, 16183, + 16187, 16189, 16193, 16217, 16223, 16229, 16231, 16249, 16253, 16267, + 16273, 16301, 16319, 16333, 16339, 16349, 16361, 16363, 16369, 16381, + 16411, 16417, 16421, 16427, 16433, 16447, 16451, 16453, 16477, 16481, + 16487, 16493, 16519, 16529, 16547, 16553, 16561, 16567, 16573, 16603, + 16607, 16619, 16631, 16633, 16649, 16651, 16657, 16661, 16673, 16691, + 16693, 16699, 16703, 16729, 16741, 16747, 16759, 16763, 16787, 16811, + 16823, 16829, 16831, 16843, 16871, 16879, 16883, 16889, 16901, 16903, + 16921, 16927, 16931, 16937, 16943, 16963, 16979, 16981, 16987, 16993, + 17011, 17021, 17027, 17029, 17033, 17041, 17047, 17053, 17077, 17093, + 17099, 17107, 17117, 17123, 17137, 17159, 17167, 17183, 17189, 17191, + 17203, 17207, 17209, 17231, 17239, 17257, 17291, 17293, 17299, 17317, + 17321, 17327, 17333, 17341, 17351, 17359, 17377, 17383, 17387, 17389, + 17393, 17401, 17417, 17419, 17431, 17443, 17449, 17467, 17471, 17477, + 17483, 17489, 17491, 17497, 17509, 17519, 17539, 17551, 17569, 17573, + 17579, 17581, 17597, 17599, 17609, 17623, 17627, 17657, 17659, 17669, + 17681, 17683, 17707, 17713, 17729, 17737, 17747, 17749, 17761, 17783, + 17789, 17791, 17807, 17827, 17837, 17839, 17851, 17863, 17881, 17891, + 17903, 17909, 17911, 17921, 17923, 17929, 17939, 17957, 17959, 17971, + 17977, 17981, 17987, 17989, 18013, 18041, 18043, 18047, 18049, 18059, + 18061, 18077, 18089, 18097, 18119, 18121, 18127, 18131, 18133, 18143, + 18149, 18169, 18181, 18191, 18199, 18211, 18217, 18223, 18229, 18233, + 18251, 18253, 18257, 18269, 18287, 18289, 18301, 18307, 18311, 18313, + 18329, 18341, 18353, 18367, 18371, 18379, 18397, 18401, 18413, 
18427, + 18433, 18439, 18443, 18451, 18457, 18461, 18481, 18493, 18503, 18517, + 18521, 18523, 18539, 18541, 18553, 18583, 18587, 18593, 18617, 18637, + 18661, 18671, 18679, 18691, 18701, 18713, 18719, 18731, 18743, 18749, + 18757, 18773, 18787, 18793, 18797, 18803, 18839, 18859, 18869, 18899, + 18911, 18913, 18917, 18919, 18947, 18959, 18973, 18979, 19001, 19009, + 19013, 19031, 19037, 19051, 19069, 19073, 19079, 19081, 19087, 19121, + 19139, 19141, 19157, 19163, 19181, 19183, 19207, 19211, 19213, 19219, + 19231, 19237, 19249, 19259, 19267, 19273, 19289, 19301, 19309, 19319, + 19333, 19373, 19379, 19381, 19387, 19391, 19403, 19417, 19421, 19423, + 19427, 19429, 19433, 19441, 19447, 19457, 19463, 19469, 19471, 19477, + 19483, 19489, 19501, 19507, 19531, 19541, 19543, 19553, 19559, 19571, + 19577, 19583, 19597, 19603, 19609, 19661, 19681, 19687, 19697, 19699, + 19709, 19717, 19727, 19739, 19751, 19753, 19759, 19763, 19777, 19793, + 19801, 19813, 19819, 19841, 19843, 19853, 19861, 19867, 19889, 19891, + 19913, 19919, 19927, 19937, 19949, 19961, 19963, 19973, 19979, 19991, + 19993, 19997, 20011, 20021, 20023, 20029, 20047, 20051, 20063, 20071, + 20089, 20101, 20107, 20113, 20117, 20123, 20129, 20143, 20147, 20149, + 20161, 20173, 20177, 20183, 20201, 20219, 20231, 20233, 20249, 20261, + 20269, 20287, 20297, 20323, 20327, 20333, 20341, 20347, 20353, 20357, + 20359, 20369, 20389, 20393, 20399, 20407, 20411, 20431, 20441, 20443, + 20477, 20479, 20483, 20507, 20509, 20521, 20533, 20543, 20549, 20551, + 20563, 20593, 20599, 20611, 20627, 20639, 20641, 20663, 20681, 20693, + 20707, 20717, 20719, 20731, 20743, 20747, 20749, 20753, 20759, 20771, + 20773, 20789, 20807, 20809, 20849, 20857, 20873, 20879, 20887, 20897, + 20899, 20903, 20921, 20929, 20939, 20947, 20959, 20963, 20981, 20983, + 21001, 21011, 21013, 21017, 21019, 21023, 21031, 21059, 21061, 21067, + 21089, 21101, 21107, 21121, 21139, 21143, 21149, 21157, 21163, 21169, + 21179, 21187, 21191, 21193, 21211, 21221, 21227, 21247, 21269, 21277, + 21283, 21313, 21317, 21319, 21323, 21341, 21347, 21377, 21379, 21383, + 21391, 21397, 21401, 21407, 21419, 21433, 21467, 21481, 21487, 21491, + 21493, 21499, 21503, 21517, 21521, 21523, 21529, 21557, 21559, 21563, + 21569, 21577, 21587, 21589, 21599, 21601, 21611, 21613, 21617, 21647, + 21649, 21661, 21673, 21683, 21701, 21713, 21727, 21737, 21739, 21751, + 21757, 21767, 21773, 21787, 21799, 21803, 21817, 21821, 21839, 21841, + 21851, 21859, 21863, 21871, 21881, 21893, 21911, 21929, 21937, 21943, + 21961, 21977, 21991, 21997, 22003, 22013, 22027, 22031, 22037, 22039, + 22051, 22063, 22067, 22073, 22079, 22091, 22093, 22109, 22111, 22123, + 22129, 22133, 22147, 22153, 22157, 22159, 22171, 22189, 22193, 22229, + 22247, 22259, 22271, 22273, 22277, 22279, 22283, 22291, 22303, 22307, + 22343, 22349, 22367, 22369, 22381, 22391, 22397, 22409, 22433, 22441, + 22447, 22453, 22469, 22481, 22483, 22501, 22511, 22531, 22541, 22543, + 22549, 22567, 22571, 22573, 22613, 22619, 22621, 22637, 22639, 22643, + 22651, 22669, 22679, 22691, 22697, 22699, 22709, 22717, 22721, 22727, + 22739, 22741, 22751, 22769, 22777, 22783, 22787, 22807, 22811, 22817, + 22853, 22859, 22861, 22871, 22877, 22901, 22907, 22921, 22937, 22943, + 22961, 22963, 22973, 22993, 23003, 23011, 23017, 23021, 23027, 23029, + 23039, 23041, 23053, 23057, 23059, 23063, 23071, 23081, 23087, 23099, + 23117, 23131, 23143, 23159, 23167, 23173, 23189, 23197, 23201, 23203, + 23209, 23227, 23251, 23269, 23279, 23291, 23293, 23297, 23311, 23321, + 23327, 23333, 
23339, 23357, 23369, 23371, 23399, 23417, 23431, 23447, + 23459, 23473, 23497, 23509, 23531, 23537, 23539, 23549, 23557, 23561, + 23563, 23567, 23581, 23593, 23599, 23603, 23609, 23623, 23627, 23629, + 23633, 23663, 23669, 23671, 23677, 23687, 23689, 23719, 23741, 23743, + 23747, 23753, 23761, 23767, 23773, 23789, 23801, 23813, 23819, 23827, + 23831, 23833, 23857, 23869, 23873, 23879, 23887, 23893, 23899, 23909, + 23911, 23917, 23929, 23957, 23971, 23977, 23981, 23993, 24001, 24007, + 24019, 24023, 24029, 24043, 24049, 24061, 24071, 24077, 24083, 24091, + 24097, 24103, 24107, 24109, 24113, 24121, 24133, 24137, 24151, 24169, + 24179, 24181, 24197, 24203, 24223, 24229, 24239, 24247, 24251, 24281, + 24317, 24329, 24337, 24359, 24371, 24373, 24379, 24391, 24407, 24413, + 24419, 24421, 24439, 24443, 24469, 24473, 24481, 24499, 24509, 24517, + 24527, 24533, 24547, 24551, 24571, 24593, 24611, 24623, 24631, 24659, + 24671, 24677, 24683, 24691, 24697, 24709, 24733, 24749, 24763, 24767, + 24781, 24793, 24799, 24809, 24821, 24841, 24847, 24851, 24859, 24877, + 24889, 24907, 24917, 24919, 24923, 24943, 24953, 24967, 24971, 24977, + 24979, 24989, 25013, 25031, 25033, 25037, 25057, 25073, 25087, 25097, + 25111, 25117, 25121, 25127, 25147, 25153, 25163, 25169, 25171, 25183, + 25189, 25219, 25229, 25237, 25243, 25247, 25253, 25261, 25301, 25303, + 25307, 25309, 25321, 25339, 25343, 25349, 25357, 25367, 25373, 25391, + 25409, 25411, 25423, 25439, 25447, 25453, 25457, 25463, 25469, 25471, + 25523, 25537, 25541, 25561, 25577, 25579, 25583, 25589, 25601, 25603, + 25609, 25621, 25633, 25639, 25643, 25657, 25667, 25673, 25679, 25693, + 25703, 25717, 25733, 25741, 25747, 25759, 25763, 25771, 25793, 25799, + 25801, 25819, 25841, 25847, 25849, 25867, 25873, 25889, 25903, 25913, + 25919, 25931, 25933, 25939, 25943, 25951, 25969, 25981, 25997, 25999, + 26003, 26017, 26021, 26029, 26041, 26053, 26083, 26099, 26107, 26111, + 26113, 26119, 26141, 26153, 26161, 26171, 26177, 26183, 26189, 26203, + 26209, 26227, 26237, 26249, 26251, 26261, 26263, 26267, 26293, 26297, + 26309, 26317, 26321, 26339, 26347, 26357, 26371, 26387, 26393, 26399, + 26407, 26417, 26423, 26431, 26437, 26449, 26459, 26479, 26489, 26497, + 26501, 26513, 26539, 26557, 26561, 26573, 26591, 26597, 26627, 26633, + 26641, 26647, 26669, 26681, 26683, 26687, 26693, 26699, 26701, 26711, + 26713, 26717, 26723, 26729, 26731, 26737, 26759, 26777, 26783, 26801, + 26813, 26821, 26833, 26839, 26849, 26861, 26863, 26879, 26881, 26891, + 26893, 26903, 26921, 26927, 26947, 26951, 26953, 26959, 26981, 26987, + 26993, 27011, 27017, 27031, 27043, 27059, 27061, 27067, 27073, 27077, + 27091, 27103, 27107, 27109, 27127, 27143, 27179, 27191, 27197, 27211, + 27239, 27241, 27253, 27259, 27271, 27277, 27281, 27283, 27299, 27329, + 27337, 27361, 27367, 27397, 27407, 27409, 27427, 27431, 27437, 27449, + 27457, 27479, 27481, 27487, 27509, 27527, 27529, 27539, 27541, 27551, + 27581, 27583, 27611, 27617, 27631, 27647, 27653, 27673, 27689, 27691, + 27697, 27701, 27733, 27737, 27739, 27743, 27749, 27751, 27763, 27767, + 27773, 27779, 27791, 27793, 27799, 27803, 27809, 27817, 27823, 27827, + 27847, 27851, 27883, 27893, 27901, 27917, 27919, 27941, 27943, 27947, + 27953, 27961, 27967, 27983, 27997, 28001, 28019, 28027, 28031, 28051, + 28057, 28069, 28081, 28087, 28097, 28099, 28109, 28111, 28123, 28151, + 28163, 28181, 28183, 28201, 28211, 28219, 28229, 28277, 28279, 28283, + 28289, 28297, 28307, 28309, 28319, 28349, 28351, 28387, 28393, 28403, + 28409, 28411, 28429, 28433, 28439, 
28447, 28463, 28477, 28493, 28499, + 28513, 28517, 28537, 28541, 28547, 28549, 28559, 28571, 28573, 28579, + 28591, 28597, 28603, 28607, 28619, 28621, 28627, 28631, 28643, 28649, + 28657, 28661, 28663, 28669, 28687, 28697, 28703, 28711, 28723, 28729, + 28751, 28753, 28759, 28771, 28789, 28793, 28807, 28813, 28817, 28837, + 28843, 28859, 28867, 28871, 28879, 28901, 28909, 28921, 28927, 28933, + 28949, 28961, 28979, 29009, 29017, 29021, 29023, 29027, 29033, 29059, + 29063, 29077, 29101, 29123, 29129, 29131, 29137, 29147, 29153, 29167, + 29173, 29179, 29191, 29201, 29207, 29209, 29221, 29231, 29243, 29251, + 29269, 29287, 29297, 29303, 29311, 29327, 29333, 29339, 29347, 29363, + 29383, 29387, 29389, 29399, 29401, 29411, 29423, 29429, 29437, 29443, + 29453, 29473, 29483, 29501, 29527, 29531, 29537, 29567, 29569, 29573, + 29581, 29587, 29599, 29611, 29629, 29633, 29641, 29663, 29669, 29671, + 29683, 29717, 29723, 29741, 29753, 29759, 29761, 29789, 29803, 29819, + 29833, 29837, 29851, 29863, 29867, 29873, 29879, 29881, 29917, 29921, + 29927, 29947, 29959, 29983, 29989, 30011, 30013, 30029, 30047, 30059, + 30071, 30089, 30091, 30097, 30103, 30109, 30113, 30119, 30133, 30137, + 30139, 30161, 30169, 30181, 30187, 30197, 30203, 30211, 30223, 30241, + 30253, 30259, 30269, 30271, 30293, 30307, 30313, 30319, 30323, 30341, + 30347, 30367, 30389, 30391, 30403, 30427, 30431, 30449, 30467, 30469, + 30491, 30493, 30497, 30509, 30517, 30529, 30539, 30553, 30557, 30559, + 30577, 30593, 30631, 30637, 30643, 30649, 30661, 30671, 30677, 30689, + 30697, 30703, 30707, 30713, 30727, 30757, 30763, 30773, 30781, 30803, + 30809, 30817, 30829, 30839, 30841, 30851, 30853, 30859, 30869, 30871, + 30881, 30893, 30911, 30931, 30937, 30941, 30949, 30971, 30977, 30983, + 31013, 31019, 31033, 31039, 31051, 31063, 31069, 31079, 31081, 31091, + 31121, 31123, 31139, 31147, 31151, 31153, 31159, 31177, 31181, 31183, + 31189, 31193, 31219, 31223, 31231, 31237, 31247, 31249, 31253, 31259, + 31267, 31271, 31277, 31307, 31319, 31321, 31327, 31333, 31337, 31357, + 31379, 31387, 31391, 31393, 31397, 31469, 31477, 31481, 31489, 31511, + 31513, 31517, 31531, 31541, 31543, 31547, 31567, 31573, 31583, 31601, + 31607, 31627, 31643, 31649, 31657, 31663, 31667, 31687, 31699, 31721, + 31723, 31727, 31729, 31741, 31751, 31769, 31771, 31793, 31799, 31817, + 31847, 31849, 31859, 31873, 31883, 31891, 31907, 31957, 31963, 31973, + 31981, 31991, 32003, 32009, 32027, 32029, 32051, 32057, 32059, 32063, + 32069, 32077, 32083, 32089, 32099, 32117, 32119, 32141, 32143, 32159, + 32173, 32183, 32189, 32191, 32203, 32213, 32233, 32237, 32251, 32257, + 32261, 32297, 32299, 32303, 32309, 32321, 32323, 32327, 32341, 32353, + 32359, 32363, 32369, 32371, 32377, 32381, 32401, 32411, 32413, 32423, + 32429, 32441, 32443, 32467, 32479, 32491, 32497, 32503, 32507, 32531, + 32533, 32537, 32561, 32563, 32569, 32573, 32579, 32587, 32603, 32609, + 32611, 32621, 32633, 32647, 32653, 32687, 32693, 32707, 32713, 32717, + 32719, 32749, 32771, 32779, 32783, 32789, 32797, 32801, 32803, 32831, + 32833, 32839, 32843, 32869, 32887, 32909, 32911, 32917, 32933, 32939, + 32941, 32957, 32969, 32971, 32983, 32987, 32993, 32999, 33013, 33023, + 33029, 33037, 33049, 33053, 33071, 33073, 33083, 33091, 33107, 33113, + 33119, 33149, 33151, 33161, 33179, 33181, 33191, 33199, 33203, 33211, + 33223, 33247, 33287, 33289, 33301, 33311, 33317, 33329, 33331, 33343, + 33347, 33349, 33353, 33359, 33377, 33391, 33403, 33409, 33413, 33427, + 33457, 33461, 33469, 33479, 33487, 33493, 33503, 33521, 
33529, 33533, + 33547, 33563, 33569, 33577, 33581, 33587, 33589, 33599, 33601, 33613, + 33617, 33619, 33623, 33629, 33637, 33641, 33647, 33679, 33703, 33713, + 33721, 33739, 33749, 33751, 33757, 33767, 33769, 33773, 33791, 33797, + 33809, 33811, 33827, 33829, 33851, 33857, 33863, 33871, 33889, 33893, + 33911, 33923, 33931, 33937, 33941, 33961, 33967, 33997, 34019, 34031, + 34033, 34039, 34057, 34061, 34123, 34127, 34129, 34141, 34147, 34157, + 34159, 34171, 34183, 34211, 34213, 34217, 34231, 34253, 34259, 34261, + 34267, 34273, 34283, 34297, 34301, 34303, 34313, 34319, 34327, 34337, + 34351, 34361, 34367, 34369, 34381, 34403, 34421, 34429, 34439, 34457, + 34469, 34471, 34483, 34487, 34499, 34501, 34511, 34513, 34519, 34537, + 34543, 34549, 34583, 34589, 34591, 34603, 34607, 34613, 34631, 34649, + 34651, 34667, 34673, 34679, 34687, 34693, 34703, 34721, 34729, 34739, + 34747, 34757, 34759, 34763, 34781, 34807, 34819, 34841, 34843, 34847, + 34849, 34871, 34877, 34883, 34897, 34913, 34919, 34939, 34949, 34961, + 34963, 34981, 35023, 35027, 35051, 35053, 35059, 35069, 35081, 35083, + 35089, 35099, 35107, 35111, 35117, 35129, 35141, 35149, 35153, 35159, + 35171, 35201, 35221, 35227, 35251, 35257, 35267, 35279, 35281, 35291, + 35311, 35317, 35323, 35327, 35339, 35353, 35363, 35381, 35393, 35401, + 35407, 35419, 35423, 35437, 35447, 35449, 35461, 35491, 35507, 35509, + 35521, 35527, 35531, 35533, 35537, 35543, 35569, 35573, 35591, 35593, + 35597, 35603, 35617, 35671, 35677, 35729, 35731, 35747, 35753, 35759, + 35771, 35797, 35801, 35803, 35809, 35831, 35837, 35839, 35851, 35863, + 35869, 35879, 35897, 35899, 35911, 35923, 35933, 35951, 35963, 35969, + 35977, 35983, 35993, 35999, 36007, 36011, 36013, 36017, 36037, 36061, + 36067, 36073, 36083, 36097, 36107, 36109, 36131, 36137, 36151, 36161, + 36187, 36191, 36209, 36217, 36229, 36241, 36251, 36263, 36269, 36277, + 36293, 36299, 36307, 36313, 36319, 36341, 36343, 36353, 36373, 36383, + 36389, 36433, 36451, 36457, 36467, 36469, 36473, 36479, 36493, 36497, + 36523, 36527, 36529, 36541, 36551, 36559, 36563, 36571, 36583, 36587, + 36599, 36607, 36629, 36637, 36643, 36653, 36671, 36677, 36683, 36691, + 36697, 36709, 36713, 36721, 36739, 36749, 36761, 36767, 36779, 36781, + 36787, 36791, 36793, 36809, 36821, 36833, 36847, 36857, 36871, 36877, + 36887, 36899, 36901, 36913, 36919, 36923, 36929, 36931, 36943, 36947, + 36973, 36979, 36997, 37003, 37013, 37019, 37021, 37039, 37049, 37057, + 37061, 37087, 37097, 37117, 37123, 37139, 37159, 37171, 37181, 37189, + 37199, 37201, 37217, 37223, 37243, 37253, 37273, 37277, 37307, 37309, + 37313, 37321, 37337, 37339, 37357, 37361, 37363, 37369, 37379, 37397, + 37409, 37423, 37441, 37447, 37463, 37483, 37489, 37493, 37501, 37507, + 37511, 37517, 37529, 37537, 37547, 37549, 37561, 37567, 37571, 37573, + 37579, 37589, 37591, 37607, 37619, 37633, 37643, 37649, 37657, 37663, + 37691, 37693, 37699, 37717, 37747, 37781, 37783, 37799, 37811, 37813, + 37831, 37847, 37853, 37861, 37871, 37879, 37889, 37897, 37907, 37951, + 37957, 37963, 37967, 37987, 37991, 37993, 37997, 38011, 38039, 38047, + 38053, 38069, 38083, 38113, 38119, 38149, 38153, 38167, 38177, 38183, + 38189, 38197, 38201, 38219, 38231, 38237, 38239, 38261, 38273, 38281, + 38287, 38299, 38303, 38317, 38321, 38327, 38329, 38333, 38351, 38371, + 38377, 38393, 38431, 38447, 38449, 38453, 38459, 38461, 38501, 38543, + 38557, 38561, 38567, 38569, 38593, 38603, 38609, 38611, 38629, 38639, + 38651, 38653, 38669, 38671, 38677, 38693, 38699, 38707, 38711, 38713, + 38723, 
38729, 38737, 38747, 38749, 38767, 38783, 38791, 38803, 38821, + 38833, 38839, 38851, 38861, 38867, 38873, 38891, 38903, 38917, 38921, + 38923, 38933, 38953, 38959, 38971, 38977, 38993, 39019, 39023, 39041, + 39043, 39047, 39079, 39089, 39097, 39103, 39107, 39113, 39119, 39133, + 39139, 39157, 39161, 39163, 39181, 39191, 39199, 39209, 39217, 39227, + 39229, 39233, 39239, 39241, 39251, 39293, 39301, 39313, 39317, 39323, + 39341, 39343, 39359, 39367, 39371, 39373, 39383, 39397, 39409, 39419, + 39439, 39443, 39451, 39461, 39499, 39503, 39509, 39511, 39521, 39541, + 39551, 39563, 39569, 39581, 39607, 39619, 39623, 39631, 39659, 39667, + 39671, 39679, 39703, 39709, 39719, 39727, 39733, 39749, 39761, 39769, + 39779, 39791, 39799, 39821, 39827, 39829, 39839, 39841, 39847, 39857, + 39863, 39869, 39877, 39883, 39887, 39901, 39929, 39937, 39953, 39971, + 39979, 39983, 39989, 40009, 40013, 40031, 40037, 40039, 40063, 40087, + 40093, 40099, 40111, 40123, 40127, 40129, 40151, 40153, 40163, 40169, + 40177, 40189, 40193, 40213, 40231, 40237, 40241, 40253, 40277, 40283, + 40289, 40343, 40351, 40357, 40361, 40387, 40423, 40427, 40429, 40433, + 40459, 40471, 40483, 40487, 40493, 40499, 40507, 40519, 40529, 40531, + 40543, 40559, 40577, 40583, 40591, 40597, 40609, 40627, 40637, 40639, + 40693, 40697, 40699, 40709, 40739, 40751, 40759, 40763, 40771, 40787, + 40801, 40813, 40819, 40823, 40829, 40841, 40847, 40849, 40853, 40867, + 40879, 40883, 40897, 40903, 40927, 40933, 40939, 40949, 40961, 40973, + 40993, 41011, 41017, 41023, 41039, 41047, 41051, 41057, 41077, 41081, + 41113, 41117, 41131, 41141, 41143, 41149, 41161, 41177, 41179, 41183, + 41189, 41201, 41203, 41213, 41221, 41227, 41231, 41233, 41243, 41257, + 41263, 41269, 41281, 41299, 41333, 41341, 41351, 41357, 41381, 41387, + 41389, 41399, 41411, 41413, 41443, 41453, 41467, 41479, 41491, 41507, + 41513, 41519, 41521, 41539, 41543, 41549, 41579, 41593, 41597, 41603, + 41609, 41611, 41617, 41621, 41627, 41641, 41647, 41651, 41659, 41669, + 41681, 41687, 41719, 41729, 41737, 41759, 41761, 41771, 41777, 41801, + 41809, 41813, 41843, 41849, 41851, 41863, 41879, 41887, 41893, 41897, + 41903, 41911, 41927, 41941, 41947, 41953, 41957, 41959, 41969, 41981, + 41983, 41999, 42013, 42017, 42019, 42023, 42043, 42061, 42071, 42073, + 42083, 42089, 42101, 42131, 42139, 42157, 42169, 42179, 42181, 42187, + 42193, 42197, 42209, 42221, 42223, 42227, 42239, 42257, 42281, 42283, + 42293, 42299, 42307, 42323, 42331, 42337, 42349, 42359, 42373, 42379, + 42391, 42397, 42403, 42407, 42409, 42433, 42437, 42443, 42451, 42457, + 42461, 42463, 42467, 42473, 42487, 42491, 42499, 42509, 42533, 42557, + 42569, 42571, 42577, 42589, 42611, 42641, 42643, 42649, 42667, 42677, + 42683, 42689, 42697, 42701, 42703, 42709, 42719, 42727, 42737, 42743, + 42751, 42767, 42773, 42787, 42793, 42797, 42821, 42829, 42839, 42841, + 42853, 42859, 42863, 42899, 42901, 42923, 42929, 42937, 42943, 42953, + 42961, 42967, 42979, 42989, 43003, 43013, 43019, 43037, 43049, 43051, + 43063, 43067, 43093, 43103, 43117, 43133, 43151, 43159, 43177, 43189, + 43201, 43207, 43223, 43237, 43261, 43271, 43283, 43291, 43313, 43319, + 43321, 43331, 43391, 43397, 43399, 43403, 43411, 43427, 43441, 43451, + 43457, 43481, 43487, 43499, 43517, 43541, 43543, 43573, 43577, 43579, + 43591, 43597, 43607, 43609, 43613, 43627, 43633, 43649, 43651, 43661, + 43669, 43691, 43711, 43717, 43721, 43753, 43759, 43777, 43781, 43783, + 43787, 43789, 43793, 43801, 43853, 43867, 43889, 43891, 43913, 43933, + 43943, 43951, 43961, 43963, 
43969, 43973, 43987, 43991, 43997, 44017, + 44021, 44027, 44029, 44041, 44053, 44059, 44071, 44087, 44089, 44101, + 44111, 44119, 44123, 44129, 44131, 44159, 44171, 44179, 44189, 44201, + 44203, 44207, 44221, 44249, 44257, 44263, 44267, 44269, 44273, 44279, + 44281, 44293, 44351, 44357, 44371, 44381, 44383, 44389, 44417, 44449, + 44453, 44483, 44491, 44497, 44501, 44507, 44519, 44531, 44533, 44537, + 44543, 44549, 44563, 44579, 44587, 44617, 44621, 44623, 44633, 44641, + 44647, 44651, 44657, 44683, 44687, 44699, 44701, 44711, 44729, 44741, + 44753, 44771, 44773, 44777, 44789, 44797, 44809, 44819, 44839, 44843, + 44851, 44867, 44879, 44887, 44893, 44909, 44917, 44927, 44939, 44953, + 44959, 44963, 44971, 44983, 44987, 45007, 45013, 45053, 45061, 45077, + 45083, 45119, 45121, 45127, 45131, 45137, 45139, 45161, 45179, 45181, + 45191, 45197, 45233, 45247, 45259, 45263, 45281, 45289, 45293, 45307, + 45317, 45319, 45329, 45337, 45341, 45343, 45361, 45377, 45389, 45403, + 45413, 45427, 45433, 45439, 45481, 45491, 45497, 45503, 45523, 45533, + 45541, 45553, 45557, 45569, 45587, 45589, 45599, 45613, 45631, 45641, + 45659, 45667, 45673, 45677, 45691, 45697, 45707, 45737, 45751, 45757, + 45763, 45767, 45779, 45817, 45821, 45823, 45827, 45833, 45841, 45853, + 45863, 45869, 45887, 45893, 45943, 45949, 45953, 45959, 45971, 45979, + 45989, 46021, 46027, 46049, 46051, 46061, 46073, 46091, 46093, 46099, + 46103, 46133, 46141, 46147, 46153, 46171, 46181, 46183, 46187, 46199, + 46219, 46229, 46237, 46261, 46271, 46273, 46279, 46301, 46307, 46309, + 46327, 46337, 46349, 46351, 46381, 46399, 46411, 46439, 46441, 46447, + 46451, 46457, 46471, 46477, 46489, 46499, 46507, 46511, 46523, 46549, + 46559, 46567, 46573, 46589, 46591, 46601, 46619, 46633, 46639, 46643, + 46649, 46663, 46679, 46681, 46687, 46691, 46703, 46723, 46727, 46747, + 46751, 46757, 46769, 46771, 46807, 46811, 46817, 46819, 46829, 46831, + 46853, 46861, 46867, 46877, 46889, 46901, 46919, 46933, 46957, 46993, + 46997, 47017, 47041, 47051, 47057, 47059, 47087, 47093, 47111, 47119, + 47123, 47129, 47137, 47143, 47147, 47149, 47161, 47189, 47207, 47221, + 47237, 47251, 47269, 47279, 47287, 47293, 47297, 47303, 47309, 47317, + 47339, 47351, 47353, 47363, 47381, 47387, 47389, 47407, 47417, 47419, + 47431, 47441, 47459, 47491, 47497, 47501, 47507, 47513, 47521, 47527, + 47533, 47543, 47563, 47569, 47581, 47591, 47599, 47609, 47623, 47629, + 47639, 47653, 47657, 47659, 47681, 47699, 47701, 47711, 47713, 47717, + 47737, 47741, 47743, 47777, 47779, 47791, 47797, 47807, 47809, 47819, + 47837, 47843, 47857, 47869, 47881, 47903, 47911, 47917, 47933, 47939, + 47947, 47951, 47963, 47969, 47977, 47981, 48017, 48023, 48029, 48049, + 48073, 48079, 48091, 48109, 48119, 48121, 48131, 48157, 48163, 48179, + 48187, 48193, 48197, 48221, 48239, 48247, 48259, 48271, 48281, 48299, + 48311, 48313, 48337, 48341, 48353, 48371, 48383, 48397, 48407, 48409, + 48413, 48437, 48449, 48463, 48473, 48479, 48481, 48487, 48491, 48497, + 48523, 48527, 48533, 48539, 48541, 48563, 48571, 48589, 48593, 48611, + 48619, 48623, 48647, 48649, 48661, 48673, 48677, 48679, 48731, 48733, + 48751, 48757, 48761, 48767, 48779, 48781, 48787, 48799, 48809, 48817, + 48821, 48823, 48847, 48857, 48859, 48869, 48871, 48883, 48889, 48907, + 48947, 48953, 48973, 48989, 48991, 49003, 49009, 49019, 49031, 49033, + 49037, 49043, 49057, 49069, 49081, 49103, 49109, 49117, 49121, 49123, + 49139, 49157, 49169, 49171, 49177, 49193, 49199, 49201, 49207, 49211, + 49223, 49253, 49261, 49277, 49279, 49297, 49307, 
49331, 49333, 49339, + 49363, 49367, 49369, 49391, 49393, 49409, 49411, 49417, 49429, 49433, + 49451, 49459, 49463, 49477, 49481, 49499, 49523, 49529, 49531, 49537, + 49547, 49549, 49559, 49597, 49603, 49613, 49627, 49633, 49639, 49663, + 49667, 49669, 49681, 49697, 49711, 49727, 49739, 49741, 49747, 49757, + 49783, 49787, 49789, 49801, 49807, 49811, 49823, 49831, 49843, 49853, + 49871, 49877, 49891, 49919, 49921, 49927, 49937, 49939, 49943, 49957, + 49991, 49993, 49999, 50021, 50023, 50033, 50047, 50051, 50053, 50069, + 50077, 50087, 50093, 50101, 50111, 50119, 50123, 50129, 50131, 50147, + 50153, 50159, 50177, 50207, 50221, 50227, 50231, 50261, 50263, 50273, + 50287, 50291, 50311, 50321, 50329, 50333, 50341, 50359, 50363, 50377, + 50383, 50387, 50411, 50417, 50423, 50441, 50459, 50461, 50497, 50503, + 50513, 50527, 50539, 50543, 50549, 50551, 50581, 50587, 50591, 50593, + 50599, 50627, 50647, 50651, 50671, 50683, 50707, 50723, 50741, 50753, + 50767, 50773, 50777, 50789, 50821, 50833, 50839, 50849, 50857, 50867, + 50873, 50891, 50893, 50909, 50923, 50929, 50951, 50957, 50969, 50971, + 50989, 50993, 51001, 51031, 51043, 51047, 51059, 51061, 51071, 51109, + 51131, 51133, 51137, 51151, 51157, 51169, 51193, 51197, 51199, 51203, + 51217, 51229, 51239, 51241, 51257, 51263, 51283, 51287, 51307, 51329, + 51341, 51343, 51347, 51349, 51361, 51383, 51407, 51413, 51419, 51421, + 51427, 51431, 51437, 51439, 51449, 51461, 51473, 51479, 51481, 51487, + 51503, 51511, 51517, 51521, 51539, 51551, 51563, 51577, 51581, 51593, + 51599, 51607, 51613, 51631, 51637, 51647, 51659, 51673, 51679, 51683, + 51691, 51713, 51719, 51721, 51749, 51767, 51769, 51787, 51797, 51803, + 51817, 51827, 51829, 51839, 51853, 51859, 51869, 51871, 51893, 51899, + 51907, 51913, 51929, 51941, 51949, 51971, 51973, 51977, 51991, 52009, + 52021, 52027, 52051, 52057, 52067, 52069, 52081, 52103, 52121, 52127, + 52147, 52153, 52163, 52177, 52181, 52183, 52189, 52201, 52223, 52237, + 52249, 52253, 52259, 52267, 52289, 52291, 52301, 52313, 52321, 52361, + 52363, 52369, 52379, 52387, 52391, 52433, 52453, 52457, 52489, 52501, + 52511, 52517, 52529, 52541, 52543, 52553, 52561, 52567, 52571, 52579, + 52583, 52609, 52627, 52631, 52639, 52667, 52673, 52691, 52697, 52709, + 52711, 52721, 52727, 52733, 52747, 52757, 52769, 52783, 52807, 52813, + 52817, 52837, 52859, 52861, 52879, 52883, 52889, 52901, 52903, 52919, + 52937, 52951, 52957, 52963, 52967, 52973, 52981, 52999, 53003, 53017, + 53047, 53051, 53069, 53077, 53087, 53089, 53093, 53101, 53113, 53117, + 53129, 53147, 53149, 53161, 53171, 53173, 53189, 53197, 53201, 53231, + 53233, 53239, 53267, 53269, 53279, 53281, 53299, 53309, 53323, 53327, + 53353, 53359, 53377, 53381, 53401, 53407, 53411, 53419, 53437, 53441, + 53453, 53479, 53503, 53507, 53527, 53549, 53551, 53569, 53591, 53593, + 53597, 53609, 53611, 53617, 53623, 53629, 53633, 53639, 53653, 53657, + 53681, 53693, 53699, 53717, 53719, 53731, 53759, 53773, 53777, 53783, + 53791, 53813, 53819, 53831, 53849, 53857, 53861, 53881, 53887, 53891, + 53897, 53899, 53917, 53923, 53927, 53939, 53951, 53959, 53987, 53993, + 54001, 54011, 54013, 54037, 54049, 54059, 54083, 54091, 54101, 54121, + 54133, 54139, 54151, 54163, 54167, 54181, 54193, 54217, 54251, 54269, + 54277, 54287, 54293, 54311, 54319, 54323, 54331, 54347, 54361, 54367, + 54371, 54377, 54401, 54403, 54409, 54413, 54419, 54421, 54437, 54443, + 54449, 54469, 54493, 54497, 54499, 54503, 54517, 54521, 54539, 54541, + 54547, 54559, 54563, 54577, 54581, 54583, 54601, 54617, 54623, 54629, + 
54631, 54647, 54667, 54673, 54679, 54709, 54713, 54721, 54727, 54751, + 54767, 54773, 54779, 54787, 54799, 54829, 54833, 54851, 54869, 54877, + 54881, 54907, 54917, 54919, 54941, 54949, 54959, 54973, 54979, 54983, + 55001, 55009, 55021, 55049, 55051, 55057, 55061, 55073, 55079, 55103, + 55109, 55117, 55127, 55147, 55163, 55171, 55201, 55207, 55213, 55217, + 55219, 55229, 55243, 55249, 55259, 55291, 55313, 55331, 55333, 55337, + 55339, 55343, 55351, 55373, 55381, 55399, 55411, 55439, 55441, 55457, + 55469, 55487, 55501, 55511, 55529, 55541, 55547, 55579, 55589, 55603, + 55609, 55619, 55621, 55631, 55633, 55639, 55661, 55663, 55667, 55673, + 55681, 55691, 55697, 55711, 55717, 55721, 55733, 55763, 55787, 55793, + 55799, 55807, 55813, 55817, 55819, 55823, 55829, 55837, 55843, 55849, + 55871, 55889, 55897, 55901, 55903, 55921, 55927, 55931, 55933, 55949, + 55967, 55987, 55997, 56003, 56009, 56039, 56041, 56053, 56081, 56087, + 56093, 56099, 56101, 56113, 56123, 56131, 56149, 56167, 56171, 56179, + 56197, 56207, 56209, 56237, 56239, 56249, 56263, 56267, 56269, 56299, + 56311, 56333, 56359, 56369, 56377, 56383, 56393, 56401, 56417, 56431, + 56437, 56443, 56453, 56467, 56473, 56477, 56479, 56489, 56501, 56503, + 56509, 56519, 56527, 56531, 56533, 56543, 56569, 56591, 56597, 56599, + 56611, 56629, 56633, 56659, 56663, 56671, 56681, 56687, 56701, 56711, + 56713, 56731, 56737, 56747, 56767, 56773, 56779, 56783, 56807, 56809, + 56813, 56821, 56827, 56843, 56857, 56873, 56891, 56893, 56897, 56909, + 56911, 56921, 56923, 56929, 56941, 56951, 56957, 56963, 56983, 56989, + 56993, 56999, 57037, 57041, 57047, 57059, 57073, 57077, 57089, 57097, + 57107, 57119, 57131, 57139, 57143, 57149, 57163, 57173, 57179, 57191, + 57193, 57203, 57221, 57223, 57241, 57251, 57259, 57269, 57271, 57283, + 57287, 57301, 57329, 57331, 57347, 57349, 57367, 57373, 57383, 57389, + 57397, 57413, 57427, 57457, 57467, 57487, 57493, 57503, 57527, 57529, + 57557, 57559, 57571, 57587, 57593, 57601, 57637, 57641, 57649, 57653, + 57667, 57679, 57689, 57697, 57709, 57713, 57719, 57727, 57731, 57737, + 57751, 57773, 57781, 57787, 57791, 57793, 57803, 57809, 57829, 57839, + 57847, 57853, 57859, 57881, 57899, 57901, 57917, 57923, 57943, 57947, + 57973, 57977, 57991, 58013, 58027, 58031, 58043, 58049, 58057, 58061, + 58067, 58073, 58099, 58109, 58111, 58129, 58147, 58151, 58153, 58169, + 58171, 58189, 58193, 58199, 58207, 58211, 58217, 58229, 58231, 58237, + 58243, 58271, 58309, 58313, 58321, 58337, 58363, 58367, 58369, 58379, + 58391, 58393, 58403, 58411, 58417, 58427, 58439, 58441, 58451, 58453, + 58477, 58481, 58511, 58537, 58543, 58549, 58567, 58573, 58579, 58601, + 58603, 58613, 58631, 58657, 58661, 58679, 58687, 58693, 58699, 58711, + 58727, 58733, 58741, 58757, 58763, 58771, 58787, 58789, 58831, 58889, + 58897, 58901, 58907, 58909, 58913, 58921, 58937, 58943, 58963, 58967, + 58979, 58991, 58997, 59009, 59011, 59021, 59023, 59029, 59051, 59053, + 59063, 59069, 59077, 59083, 59093, 59107, 59113, 59119, 59123, 59141, + 59149, 59159, 59167, 59183, 59197, 59207, 59209, 59219, 59221, 59233, + 59239, 59243, 59263, 59273, 59281, 59333, 59341, 59351, 59357, 59359, + 59369, 59377, 59387, 59393, 59399, 59407, 59417, 59419, 59441, 59443, + 59447, 59453, 59467, 59471, 59473, 59497, 59509, 59513, 59539, 59557, + 59561, 59567, 59581, 59611, 59617, 59621, 59627, 59629, 59651, 59659, + 59663, 59669, 59671, 59693, 59699, 59707, 59723, 59729, 59743, 59747, + 59753, 59771, 59779, 59791, 59797, 59809, 59833, 59863, 59879, 59887, + 59921, 59929, 59951, 
59957, 59971, 59981, 59999, 60013, 60017, 60029, + 60037, 60041, 60077, 60083, 60089, 60091, 60101, 60103, 60107, 60127, + 60133, 60139, 60149, 60161, 60167, 60169, 60209, 60217, 60223, 60251, + 60257, 60259, 60271, 60289, 60293, 60317, 60331, 60337, 60343, 60353, + 60373, 60383, 60397, 60413, 60427, 60443, 60449, 60457, 60493, 60497, + 60509, 60521, 60527, 60539, 60589, 60601, 60607, 60611, 60617, 60623, + 60631, 60637, 60647, 60649, 60659, 60661, 60679, 60689, 60703, 60719, + 60727, 60733, 60737, 60757, 60761, 60763, 60773, 60779, 60793, 60811, + 60821, 60859, 60869, 60887, 60889, 60899, 60901, 60913, 60917, 60919, + 60923, 60937, 60943, 60953, 60961, 61001, 61007, 61027, 61031, 61043, + 61051, 61057, 61091, 61099, 61121, 61129, 61141, 61151, 61153, 61169, + 61211, 61223, 61231, 61253, 61261, 61283, 61291, 61297, 61331, 61333, + 61339, 61343, 61357, 61363, 61379, 61381, 61403, 61409, 61417, 61441, + 61463, 61469, 61471, 61483, 61487, 61493, 61507, 61511, 61519, 61543, + 61547, 61553, 61559, 61561, 61583, 61603, 61609, 61613, 61627, 61631, + 61637, 61643, 61651, 61657, 61667, 61673, 61681, 61687, 61703, 61717, + 61723, 61729, 61751, 61757, 61781, 61813, 61819, 61837, 61843, 61861, + 61871, 61879, 61909, 61927, 61933, 61949, 61961, 61967, 61979, 61981, + 61987, 61991, 62003, 62011, 62017, 62039, 62047, 62053, 62057, 62071, + 62081, 62099, 62119, 62129, 62131, 62137, 62141, 62143, 62171, 62189, + 62191, 62201, 62207, 62213, 62219, 62233, 62273, 62297, 62299, 62303, + 62311, 62323, 62327, 62347, 62351, 62383, 62401, 62417, 62423, 62459, + 62467, 62473, 62477, 62483, 62497, 62501, 62507, 62533, 62539, 62549, + 62563, 62581, 62591, 62597, 62603, 62617, 62627, 62633, 62639, 62653, + 62659, 62683, 62687, 62701, 62723, 62731, 62743, 62753, 62761, 62773, + 62791, 62801, 62819, 62827, 62851, 62861, 62869, 62873, 62897, 62903, + 62921, 62927, 62929, 62939, 62969, 62971, 62981, 62983, 62987, 62989, + 63029, 63031, 63059, 63067, 63073, 63079, 63097, 63103, 63113, 63127, + 63131, 63149, 63179, 63197, 63199, 63211, 63241, 63247, 63277, 63281, + 63299, 63311, 63313, 63317, 63331, 63337, 63347, 63353, 63361, 63367, + 63377, 63389, 63391, 63397, 63409, 63419, 63421, 63439, 63443, 63463, + 63467, 63473, 63487, 63493, 63499, 63521, 63527, 63533, 63541, 63559, + 63577, 63587, 63589, 63599, 63601, 63607, 63611, 63617, 63629, 63647, + 63649, 63659, 63667, 63671, 63689, 63691, 63697, 63703, 63709, 63719, + 63727, 63737, 63743, 63761, 63773, 63781, 63793, 63799, 63803, 63809, + 63823, 63839, 63841, 63853, 63857, 63863, 63901, 63907, 63913, 63929, + 63949, 63977, 63997, 64007, 64013, 64019, 64033, 64037, 64063, 64067, + 64081, 64091, 64109, 64123, 64151, 64153, 64157, 64171, 64187, 64189, + 64217, 64223, 64231, 64237, 64271, 64279, 64283, 64301, 64303, 64319, + 64327, 64333, 64373, 64381, 64399, 64403, 64433, 64439, 64451, 64453, + 64483, 64489, 64499, 64513, 64553, 64567, 64577, 64579, 64591, 64601, + 64609, 64613, 64621, 64627, 64633, 64661, 64663, 64667, 64679, 64693, + 64709, 64717, 64747, 64763, 64781, 64783, 64793, 64811, 64817, 64849, + 64853, 64871, 64877, 64879, 64891, 64901, 64919, 64921, 64927, 64937, + 64951, 64969, 64997, 65003, 65011, 65027, 65029, 65033, 65053, 65063, + 65071, 65089, 65099, 65101, 65111, 65119, 65123, 65129, 65141, 65147, + 65167, 65171, 65173, 65179, 65183, 65203, 65213, 65239, 65257, 65267, + 65269, 65287, 65293, 65309, 65323, 65327, 65353, 65357, 65371, 65381, + 65393, 65407, 65413, 65419, 65423, 65437, 65447, 65449, 65479, 65497, + 65519, 65521, 65537, 65539, 65543, 65551, 
65557, 65563, 65579, 65581, + 65587, 65599, 65609, 65617, 65629, 65633, 65647, 65651, 65657, 65677, + 65687, 65699, 65701, 65707, 65713, 65717, 65719, 65729, 65731, 65761, + 65777, 65789, 65809, 65827, 65831, 65837, 65839, 65843, 65851, 65867, + 65881, 65899, 65921, 65927, 65929, 65951, 65957, 65963, 65981, 65983, + 65993, 66029, 66037, 66041, 66047, 66067, 66071, 66083, 66089, 66103, + 66107, 66109, 66137, 66161, 66169, 66173, 66179, 66191, 66221, 66239, + 66271, 66293, 66301, 66337, 66343, 66347, 66359, 66361, 66373, 66377, + 66383, 66403, 66413, 66431, 66449, 66457, 66463, 66467, 66491, 66499, + 66509, 66523, 66529, 66533, 66541, 66553, 66569, 66571, 66587, 66593, + 66601, 66617, 66629, 66643, 66653, 66683, 66697, 66701, 66713, 66721, + 66733, 66739, 66749, 66751, 66763, 66791, 66797, 66809, 66821, 66841, + 66851, 66853, 66863, 66877, 66883, 66889, 66919, 66923, 66931, 66943, + 66947, 66949, 66959, 66973, 66977, 67003, 67021, 67033, 67043, 67049, + 67057, 67061, 67073, 67079, 67103, 67121, 67129, 67139, 67141, 67153, + 67157, 67169, 67181, 67187, 67189, 67211, 67213, 67217, 67219, 67231, + 67247, 67261, 67271, 67273, 67289, 67307, 67339, 67343, 67349, 67369, + 67391, 67399, 67409, 67411, 67421, 67427, 67429, 67433, 67447, 67453, + 67477, 67481, 67489, 67493, 67499, 67511, 67523, 67531, 67537, 67547, + 67559, 67567, 67577, 67579, 67589, 67601, 67607, 67619, 67631, 67651, + 67679, 67699, 67709, 67723, 67733, 67741, 67751, 67757, 67759, 67763, + 67777, 67783, 67789, 67801, 67807, 67819, 67829, 67843, 67853, 67867, + 67883, 67891, 67901, 67927, 67931, 67933, 67939, 67943, 67957, 67961, + 67967, 67979, 67987, 67993, 68023, 68041, 68053, 68059, 68071, 68087, + 68099, 68111, 68113, 68141, 68147, 68161, 68171, 68207, 68209, 68213, + 68219, 68227, 68239, 68261, 68279, 68281, 68311, 68329, 68351, 68371, + 68389, 68399, 68437, 68443, 68447, 68449, 68473, 68477, 68483, 68489, + 68491, 68501, 68507, 68521, 68531, 68539, 68543, 68567, 68581, 68597, + 68611, 68633, 68639, 68659, 68669, 68683, 68687, 68699, 68711, 68713, + 68729, 68737, 68743, 68749, 68767, 68771, 68777, 68791, 68813, 68819, + 68821, 68863, 68879, 68881, 68891, 68897, 68899, 68903, 68909, 68917, + 68927, 68947, 68963, 68993, 69001, 69011, 69019, 69029, 69031, 69061, + 69067, 69073, 69109, 69119, 69127, 69143, 69149, 69151, 69163, 69191, + 69193, 69197, 69203, 69221, 69233, 69239, 69247, 69257, 69259, 69263, + 69313, 69317, 69337, 69341, 69371, 69379, 69383, 69389, 69401, 69403, + 69427, 69431, 69439, 69457, 69463, 69467, 69473, 69481, 69491, 69493, + 69497, 69499, 69539, 69557, 69593, 69623, 69653, 69661, 69677, 69691, + 69697, 69709, 69737, 69739, 69761, 69763, 69767, 69779, 69809, 69821, + 69827, 69829, 69833, 69847, 69857, 69859, 69877, 69899, 69911, 69929, + 69931, 69941, 69959, 69991, 69997, 70001, 70003, 70009, 70019, 70039, + 70051, 70061, 70067, 70079, 70099, 70111, 70117, 70121, 70123, 70139, + 70141, 70157, 70163, 70177, 70181, 70183, 70199, 70201, 70207, 70223, + 70229, 70237, 70241, 70249, 70271, 70289, 70297, 70309, 70313, 70321, + 70327, 70351, 70373, 70379, 70381, 70393, 70423, 70429, 70439, 70451, + 70457, 70459, 70481, 70487, 70489, 70501, 70507, 70529, 70537, 70549, + 70571, 70573, 70583, 70589, 70607, 70619, 70621, 70627, 70639, 70657, + 70663, 70667, 70687, 70709, 70717, 70729, 70753, 70769, 70783, 70793, + 70823, 70841, 70843, 70849, 70853, 70867, 70877, 70879, 70891, 70901, + 70913, 70919, 70921, 70937, 70949, 70951, 70957, 70969, 70979, 70981, + 70991, 70997, 70999, 71011, 71023, 71039, 71059, 71069, 71081, 
71089, + 71119, 71129, 71143, 71147, 71153, 71161, 71167, 71171, 71191, 71209, + 71233, 71237, 71249, 71257, 71261, 71263, 71287, 71293, 71317, 71327, + 71329, 71333, 71339, 71341, 71347, 71353, 71359, 71363, 71387, 71389, + 71399, 71411, 71413, 71419, 71429, 71437, 71443, 71453, 71471, 71473, + 71479, 71483, 71503, 71527, 71537, 71549, 71551, 71563, 71569, 71593, + 71597, 71633, 71647, 71663, 71671, 71693, 71699, 71707, 71711, 71713, + 71719, 71741, 71761, 71777, 71789, 71807, 71809, 71821, 71837, 71843, + 71849, 71861, 71867, 71879, 71881, 71887, 71899, 71909, 71917, 71933, + 71941, 71947, 71963, 71971, 71983, 71987, 71993, 71999, 72019, 72031, + 72043, 72047, 72053, 72073, 72077, 72089, 72091, 72101, 72103, 72109, + 72139, 72161, 72167, 72169, 72173, 72211, 72221, 72223, 72227, 72229, + 72251, 72253, 72269, 72271, 72277, 72287, 72307, 72313, 72337, 72341, + 72353, 72367, 72379, 72383, 72421, 72431, 72461, 72467, 72469, 72481, + 72493, 72497, 72503, 72533, 72547, 72551, 72559, 72577, 72613, 72617, + 72623, 72643, 72647, 72649, 72661, 72671, 72673, 72679, 72689, 72701, + 72707, 72719, 72727, 72733, 72739, 72763, 72767, 72797, 72817, 72823, + 72859, 72869, 72871, 72883, 72889, 72893, 72901, 72907, 72911, 72923, + 72931, 72937, 72949, 72953, 72959, 72973, 72977, 72997, 73009, 73013, + 73019, 73037, 73039, 73043, 73061, 73063, 73079, 73091, 73121, 73127, + 73133, 73141, 73181, 73189, 73237, 73243, 73259, 73277, 73291, 73303, + 73309, 73327, 73331, 73351, 73361, 73363, 73369, 73379, 73387, 73417, + 73421, 73433, 73453, 73459, 73471, 73477, 73483, 73517, 73523, 73529, + 73547, 73553, 73561, 73571, 73583, 73589, 73597, 73607, 73609, 73613, + 73637, 73643, 73651, 73673, 73679, 73681, 73693, 73699, 73709, 73721, + 73727, 73751, 73757, 73771, 73783, 73819, 73823, 73847, 73849, 73859, + 73867, 73877, 73883, 73897, 73907, 73939, 73943, 73951, 73961, 73973, + 73999, 74017, 74021, 74027, 74047, 74051, 74071, 74077, 74093, 74099, + 74101, 74131, 74143, 74149, 74159, 74161, 74167, 74177, 74189, 74197, + 74201, 74203, 74209, 74219, 74231, 74257, 74279, 74287, 74293, 74297, + 74311, 74317, 74323, 74353, 74357, 74363, 74377, 74381, 74383, 74411, + 74413, 74419, 74441, 74449, 74453, 74471, 74489, 74507, 74509, 74521, + 74527, 74531, 74551, 74561, 74567, 74573, 74587, 74597, 74609, 74611, + 74623, 74653, 74687, 74699, 74707, 74713, 74717, 74719, 74729, 74731, + 74747, 74759, 74761, 74771, 74779, 74797, 74821, 74827, 74831, 74843, + 74857, 74861, 74869, 74873, 74887, 74891, 74897, 74903, 74923, 74929, + 74933, 74941, 74959, 75011, 75013, 75017, 75029, 75037, 75041, 75079, + 75083, 75109, 75133, 75149, 75161, 75167, 75169, 75181, 75193, 75209, + 75211, 75217, 75223, 75227, 75239, 75253, 75269, 75277, 75289, 75307, + 75323, 75329, 75337, 75347, 75353, 75367, 75377, 75389, 75391, 75401, + 75403, 75407, 75431, 75437, 75479, 75503, 75511, 75521, 75527, 75533, + 75539, 75541, 75553, 75557, 75571, 75577, 75583, 75611, 75617, 75619, + 75629, 75641, 75653, 75659, 75679, 75683, 75689, 75703, 75707, 75709, + 75721, 75731, 75743, 75767, 75773, 75781, 75787, 75793, 75797, 75821, + 75833, 75853, 75869, 75883, 75913, 75931, 75937, 75941, 75967, 75979, + 75983, 75989, 75991, 75997, 76001, 76003, 76031, 76039, 76079, 76081, + 76091, 76099, 76103, 76123, 76129, 76147, 76157, 76159, 76163, 76207, + 76213, 76231, 76243, 76249, 76253, 76259, 76261, 76283, 76289, 76303, + 76333, 76343, 76367, 76369, 76379, 76387, 76403, 76421, 76423, 76441, + 76463, 76471, 76481, 76487, 76493, 76507, 76511, 76519, 76537, 76541, + 76543, 76561, 
76579, 76597, 76603, 76607, 76631, 76649, 76651, 76667, + 76673, 76679, 76697, 76717, 76733, 76753, 76757, 76771, 76777, 76781, + 76801, 76819, 76829, 76831, 76837, 76847, 76871, 76873, 76883, 76907, + 76913, 76919, 76943, 76949, 76961, 76963, 76991, 77003, 77017, 77023, + 77029, 77041, 77047, 77069, 77081, 77093, 77101, 77137, 77141, 77153, + 77167, 77171, 77191, 77201, 77213, 77237, 77239, 77243, 77249, 77261, + 77263, 77267, 77269, 77279, 77291, 77317, 77323, 77339, 77347, 77351, + 77359, 77369, 77377, 77383, 77417, 77419, 77431, 77447, 77471, 77477, + 77479, 77489, 77491, 77509, 77513, 77521, 77527, 77543, 77549, 77551, + 77557, 77563, 77569, 77573, 77587, 77591, 77611, 77617, 77621, 77641, + 77647, 77659, 77681, 77687, 77689, 77699, 77711, 77713, 77719, 77723, + 77731, 77743, 77747, 77761, 77773, 77783, 77797, 77801, 77813, 77839, + 77849, 77863, 77867, 77893, 77899, 77929, 77933, 77951, 77969, 77977, + 77983, 77999, 78007, 78017, 78031, 78041, 78049, 78059, 78079, 78101, + 78121, 78137, 78139, 78157, 78163, 78167, 78173, 78179, 78191, 78193, + 78203, 78229, 78233, 78241, 78259, 78277, 78283, 78301, 78307, 78311, + 78317, 78341, 78347, 78367, 78401, 78427, 78437, 78439, 78467, 78479, + 78487, 78497, 78509, 78511, 78517, 78539, 78541, 78553, 78569, 78571, + 78577, 78583, 78593, 78607, 78623, 78643, 78649, 78653, 78691, 78697, + 78707, 78713, 78721, 78737, 78779, 78781, 78787, 78791, 78797, 78803, + 78809, 78823, 78839, 78853, 78857, 78877, 78887, 78889, 78893, 78901, + 78919, 78929, 78941, 78977, 78979, 78989, 79031, 79039, 79043, 79063, + 79087, 79103, 79111, 79133, 79139, 79147, 79151, 79153, 79159, 79181, + 79187, 79193, 79201, 79229, 79231, 79241, 79259, 79273, 79279, 79283, + 79301, 79309, 79319, 79333, 79337, 79349, 79357, 79367, 79379, 79393, + 79397, 79399, 79411, 79423, 79427, 79433, 79451, 79481, 79493, 79531, + 79537, 79549, 79559, 79561, 79579, 79589, 79601, 79609, 79613, 79621, + 79627, 79631, 79633, 79657, 79669, 79687, 79691, 79693, 79697, 79699, + 79757, 79769, 79777, 79801, 79811, 79813, 79817, 79823, 79829, 79841, + 79843, 79847, 79861, 79867, 79873, 79889, 79901, 79903, 79907, 79939, + 79943, 79967, 79973, 79979, 79987, 79997, 79999, 80021, 80039, 80051, + 80071, 80077, 80107, 80111, 80141, 80147, 80149, 80153, 80167, 80173, + 80177, 80191, 80207, 80209, 80221, 80231, 80233, 80239, 80251, 80263, + 80273, 80279, 80287, 80309, 80317, 80329, 80341, 80347, 80363, 80369, + 80387, 80407, 80429, 80447, 80449, 80471, 80473, 80489, 80491, 80513, + 80527, 80537, 80557, 80567, 80599, 80603, 80611, 80621, 80627, 80629, + 80651, 80657, 80669, 80671, 80677, 80681, 80683, 80687, 80701, 80713, + 80737, 80747, 80749, 80761, 80777, 80779, 80783, 80789, 80803, 80809, + 80819, 80831, 80833, 80849, 80863, 80897, 80909, 80911, 80917, 80923, + 80929, 80933, 80953, 80963, 80989, 81001, 81013, 81017, 81019, 81023, + 81031, 81041, 81043, 81047, 81049, 81071, 81077, 81083, 81097, 81101, + 81119, 81131, 81157, 81163, 81173, 81181, 81197, 81199, 81203, 81223, + 81233, 81239, 81281, 81283, 81293, 81299, 81307, 81331, 81343, 81349, + 81353, 81359, 81371, 81373, 81401, 81409, 81421, 81439, 81457, 81463, + 81509, 81517, 81527, 81533, 81547, 81551, 81553, 81559, 81563, 81569, + 81611, 81619, 81629, 81637, 81647, 81649, 81667, 81671, 81677, 81689, + 81701, 81703, 81707, 81727, 81737, 81749, 81761, 81769, 81773, 81799, + 81817, 81839, 81847, 81853, 81869, 81883, 81899, 81901, 81919, 81929, + 81931, 81937, 81943, 81953, 81967, 81971, 81973, 82003, 82007, 82009, + 82013, 82021, 82031, 82037, 82039, 
82051, 82067, 82073, 82129, 82139, + 82141, 82153, 82163, 82171, 82183, 82189, 82193, 82207, 82217, 82219, + 82223, 82231, 82237, 82241, 82261, 82267, 82279, 82301, 82307, 82339, + 82349, 82351, 82361, 82373, 82387, 82393, 82421, 82457, 82463, 82469, + 82471, 82483, 82487, 82493, 82499, 82507, 82529, 82531, 82549, 82559, + 82561, 82567, 82571, 82591, 82601, 82609, 82613, 82619, 82633, 82651, + 82657, 82699, 82721, 82723, 82727, 82729, 82757, 82759, 82763, 82781, + 82787, 82793, 82799, 82811, 82813, 82837, 82847, 82883, 82889, 82891, + 82903, 82913, 82939, 82963, 82981, 82997, 83003, 83009, 83023, 83047, + 83059, 83063, 83071, 83077, 83089, 83093, 83101, 83117, 83137, 83177, + 83203, 83207, 83219, 83221, 83227, 83231, 83233, 83243, 83257, 83267, + 83269, 83273, 83299, 83311, 83339, 83341, 83357, 83383, 83389, 83399, + 83401, 83407, 83417, 83423, 83431, 83437, 83443, 83449, 83459, 83471, + 83477, 83497, 83537, 83557, 83561, 83563, 83579, 83591, 83597, 83609, + 83617, 83621, 83639, 83641, 83653, 83663, 83689, 83701, 83717, 83719, + 83737, 83761, 83773, 83777, 83791, 83813, 83833, 83843, 83857, 83869, + 83873, 83891, 83903, 83911, 83921, 83933, 83939, 83969, 83983, 83987, + 84011, 84017, 84047, 84053, 84059, 84061, 84067, 84089, 84121, 84127, + 84131, 84137, 84143, 84163, 84179, 84181, 84191, 84199, 84211, 84221, + 84223, 84229, 84239, 84247, 84263, 84299, 84307, 84313, 84317, 84319, + 84347, 84349, 84377, 84389, 84391, 84401, 84407, 84421, 84431, 84437, + 84443, 84449, 84457, 84463, 84467, 84481, 84499, 84503, 84509, 84521, + 84523, 84533, 84551, 84559, 84589, 84629, 84631, 84649, 84653, 84659, + 84673, 84691, 84697, 84701, 84713, 84719, 84731, 84737, 84751, 84761, + 84787, 84793, 84809, 84811, 84827, 84857, 84859, 84869, 84871, 84913, + 84919, 84947, 84961, 84967, 84977, 84979, 84991, 85009, 85021, 85027, + 85037, 85049, 85061, 85081, 85087, 85091, 85093, 85103, 85109, 85121, + 85133, 85147, 85159, 85193, 85199, 85201, 85213, 85223, 85229, 85237, + 85243, 85247, 85259, 85297, 85303, 85313, 85331, 85333, 85361, 85363, + 85369, 85381, 85411, 85427, 85429, 85439, 85447, 85451, 85453, 85469, + 85487, 85513, 85517, 85523, 85531, 85549, 85571, 85577, 85597, 85601, + 85607, 85619, 85621, 85627, 85639, 85643, 85661, 85667, 85669, 85691, + 85703, 85711, 85717, 85733, 85751, 85781, 85793, 85817, 85819, 85829, + 85831, 85837, 85843, 85847, 85853, 85889, 85903, 85909, 85931, 85933, + 85991, 85999, 86011, 86017, 86027, 86029, 86069, 86077, 86083, 86111, + 86113, 86117, 86131, 86137, 86143, 86161, 86171, 86179, 86183, 86197, + 86201, 86209, 86239, 86243, 86249, 86257, 86263, 86269, 86287, 86291, + 86293, 86297, 86311, 86323, 86341, 86351, 86353, 86357, 86369, 86371, + 86381, 86389, 86399, 86413, 86423, 86441, 86453, 86461, 86467, 86477, + 86491, 86501, 86509, 86531, 86533, 86539, 86561, 86573, 86579, 86587, + 86599, 86627, 86629, 86677, 86689, 86693, 86711, 86719, 86729, 86743, + 86753, 86767, 86771, 86783, 86813, 86837, 86843, 86851, 86857, 86861, + 86869, 86923, 86927, 86929, 86939, 86951, 86959, 86969, 86981, 86993, + 87011, 87013, 87037, 87041, 87049, 87071, 87083, 87103, 87107, 87119, + 87121, 87133, 87149, 87151, 87179, 87181, 87187, 87211, 87221, 87223, + 87251, 87253, 87257, 87277, 87281, 87293, 87299, 87313, 87317, 87323, + 87337, 87359, 87383, 87403, 87407, 87421, 87427, 87433, 87443, 87473, + 87481, 87491, 87509, 87511, 87517, 87523, 87539, 87541, 87547, 87553, + 87557, 87559, 87583, 87587, 87589, 87613, 87623, 87629, 87631, 87641, + 87643, 87649, 87671, 87679, 87683, 87691, 87697, 87701, 
87719, 87721, + 87739, 87743, 87751, 87767, 87793, 87797, 87803, 87811, 87833, 87853, + 87869, 87877, 87881, 87887, 87911, 87917, 87931, 87943, 87959, 87961, + 87973, 87977, 87991, 88001, 88003, 88007, 88019, 88037, 88069, 88079, + 88093, 88117, 88129, 88169, 88177, 88211, 88223, 88237, 88241, 88259, + 88261, 88289, 88301, 88321, 88327, 88337, 88339, 88379, 88397, 88411, + 88423, 88427, 88463, 88469, 88471, 88493, 88499, 88513, 88523, 88547, + 88589, 88591, 88607, 88609, 88643, 88651, 88657, 88661, 88663, 88667, + 88681, 88721, 88729, 88741, 88747, 88771, 88789, 88793, 88799, 88801, + 88807, 88811, 88813, 88817, 88819, 88843, 88853, 88861, 88867, 88873, + 88883, 88897, 88903, 88919, 88937, 88951, 88969, 88993, 88997, 89003, + 89009, 89017, 89021, 89041, 89051, 89057, 89069, 89071, 89083, 89087, + 89101, 89107, 89113, 89119, 89123, 89137, 89153, 89189, 89203, 89209, + 89213, 89227, 89231, 89237, 89261, 89269, 89273, 89293, 89303, 89317, + 89329, 89363, 89371, 89381, 89387, 89393, 89399, 89413, 89417, 89431, + 89443, 89449, 89459, 89477, 89491, 89501, 89513, 89519, 89521, 89527, + 89533, 89561, 89563, 89567, 89591, 89597, 89599, 89603, 89611, 89627, + 89633, 89653, 89657, 89659, 89669, 89671, 89681, 89689, 89753, 89759, + 89767, 89779, 89783, 89797, 89809, 89819, 89821, 89833, 89839, 89849, + 89867, 89891, 89897, 89899, 89909, 89917, 89923, 89939, 89959, 89963, + 89977, 89983, 89989, 90001, 90007, 90011, 90017, 90019, 90023, 90031, + 90053, 90059, 90067, 90071, 90073, 90089, 90107, 90121, 90127, 90149, + 90163, 90173, 90187, 90191, 90197, 90199, 90203, 90217, 90227, 90239, + 90247, 90263, 90271, 90281, 90289, 90313, 90353, 90359, 90371, 90373, + 90379, 90397, 90401, 90403, 90407, 90437, 90439, 90469, 90473, 90481, + 90499, 90511, 90523, 90527, 90529, 90533, 90547, 90583, 90599, 90617, + 90619, 90631, 90641, 90647, 90659, 90677, 90679, 90697, 90703, 90709, + 90731, 90749, 90787, 90793, 90803, 90821, 90823, 90833, 90841, 90847, + 90863, 90887, 90901, 90907, 90911, 90917, 90931, 90947, 90971, 90977, + 90989, 90997, 91009, 91019, 91033, 91079, 91081, 91097, 91099, 91121, + 91127, 91129, 91139, 91141, 91151, 91153, 91159, 91163, 91183, 91193, + 91199, 91229, 91237, 91243, 91249, 91253, 91283, 91291, 91297, 91303, + 91309, 91331, 91367, 91369, 91373, 91381, 91387, 91393, 91397, 91411, + 91423, 91433, 91453, 91457, 91459, 91463, 91493, 91499, 91513, 91529, + 91541, 91571, 91573, 91577, 91583, 91591, 91621, 91631, 91639, 91673, + 91691, 91703, 91711, 91733, 91753, 91757, 91771, 91781, 91801, 91807, + 91811, 91813, 91823, 91837, 91841, 91867, 91873, 91909, 91921, 91939, + 91943, 91951, 91957, 91961, 91967, 91969, 91997, 92003, 92009, 92033, + 92041, 92051, 92077, 92083, 92107, 92111, 92119, 92143, 92153, 92173, + 92177, 92179, 92189, 92203, 92219, 92221, 92227, 92233, 92237, 92243, + 92251, 92269, 92297, 92311, 92317, 92333, 92347, 92353, 92357, 92363, + 92369, 92377, 92381, 92383, 92387, 92399, 92401, 92413, 92419, 92431, + 92459, 92461, 92467, 92479, 92489, 92503, 92507, 92551, 92557, 92567, + 92569, 92581, 92593, 92623, 92627, 92639, 92641, 92647, 92657, 92669, + 92671, 92681, 92683, 92693, 92699, 92707, 92717, 92723, 92737, 92753, + 92761, 92767, 92779, 92789, 92791, 92801, 92809, 92821, 92831, 92849, + 92857, 92861, 92863, 92867, 92893, 92899, 92921, 92927, 92941, 92951, + 92957, 92959, 92987, 92993, 93001, 93047, 93053, 93059, 93077, 93083, + 93089, 93097, 93103, 93113, 93131, 93133, 93139, 93151, 93169, 93179, + 93187, 93199, 93229, 93239, 93241, 93251, 93253, 93257, 93263, 93281, + 93283, 
93287, 93307, 93319, 93323, 93329, 93337, 93371, 93377, 93383, + 93407, 93419, 93427, 93463, 93479, 93481, 93487, 93491, 93493, 93497, + 93503, 93523, 93529, 93553, 93557, 93559, 93563, 93581, 93601, 93607, + 93629, 93637, 93683, 93701, 93703, 93719, 93739, 93761, 93763, 93787, + 93809, 93811, 93827, 93851, 93871, 93887, 93889, 93893, 93901, 93911, + 93913, 93923, 93937, 93941, 93949, 93967, 93971, 93979, 93983, 93997, + 94007, 94009, 94033, 94049, 94057, 94063, 94079, 94099, 94109, 94111, + 94117, 94121, 94151, 94153, 94169, 94201, 94207, 94219, 94229, 94253, + 94261, 94273, 94291, 94307, 94309, 94321, 94327, 94331, 94343, 94349, + 94351, 94379, 94397, 94399, 94421, 94427, 94433, 94439, 94441, 94447, + 94463, 94477, 94483, 94513, 94529, 94531, 94541, 94543, 94547, 94559, + 94561, 94573, 94583, 94597, 94603, 94613, 94621, 94649, 94651, 94687, + 94693, 94709, 94723, 94727, 94747, 94771, 94777, 94781, 94789, 94793, + 94811, 94819, 94823, 94837, 94841, 94847, 94849, 94873, 94889, 94903, + 94907, 94933, 94949, 94951, 94961, 94993, 94999, 95003, 95009, 95021, + 95027, 95063, 95071, 95083, 95087, 95089, 95093, 95101, 95107, 95111, + 95131, 95143, 95153, 95177, 95189, 95191, 95203, 95213, 95219, 95231, + 95233, 95239, 95257, 95261, 95267, 95273, 95279, 95287, 95311, 95317, + 95327, 95339, 95369, 95383, 95393, 95401, 95413, 95419, 95429, 95441, + 95443, 95461, 95467, 95471, 95479, 95483, 95507, 95527, 95531, 95539, + 95549, 95561, 95569, 95581, 95597, 95603, 95617, 95621, 95629, 95633, + 95651, 95701, 95707, 95713, 95717, 95723, 95731, 95737, 95747, 95773, + 95783, 95789, 95791, 95801, 95803, 95813, 95819, 95857, 95869, 95873, + 95881, 95891, 95911, 95917, 95923, 95929, 95947, 95957, 95959, 95971, + 95987, 95989, 96001, 96013, 96017, 96043, 96053, 96059, 96079, 96097, + 96137, 96149, 96157, 96167, 96179, 96181, 96199, 96211, 96221, 96223, + 96233, 96259, 96263, 96269, 96281, 96289, 96293, 96323, 96329, 96331, + 96337, 96353, 96377, 96401, 96419, 96431, 96443, 96451, 96457, 96461, + 96469, 96479, 96487, 96493, 96497, 96517, 96527, 96553, 96557, 96581, + 96587, 96589, 96601, 96643, 96661, 96667, 96671, 96697, 96703, 96731, + 96737, 96739, 96749, 96757, 96763, 96769, 96779, 96787, 96797, 96799, + 96821, 96823, 96827, 96847, 96851, 96857, 96893, 96907, 96911, 96931, + 96953, 96959, 96973, 96979, 96989, 96997, 97001, 97003, 97007, 97021, + 97039, 97073, 97081, 97103, 97117, 97127, 97151, 97157, 97159, 97169, + 97171, 97177, 97187, 97213, 97231, 97241, 97259, 97283, 97301, 97303, + 97327, 97367, 97369, 97373, 97379, 97381, 97387, 97397, 97423, 97429, + 97441, 97453, 97459, 97463, 97499, 97501, 97511, 97523, 97547, 97549, + 97553, 97561, 97571, 97577, 97579, 97583, 97607, 97609, 97613, 97649, + 97651, 97673, 97687, 97711, 97729, 97771, 97777, 97787, 97789, 97813, + 97829, 97841, 97843, 97847, 97849, 97859, 97861, 97871, 97879, 97883, + 97919, 97927, 97931, 97943, 97961, 97967, 97973, 97987, 98009, 98011, + 98017, 98041, 98047, 98057, 98081, 98101, 98123, 98129, 98143, 98179, + 98207, 98213, 98221, 98227, 98251, 98257, 98269, 98297, 98299, 98317, + 98321, 98323, 98327, 98347, 98369, 98377, 98387, 98389, 98407, 98411, + 98419, 98429, 98443, 98453, 98459, 98467, 98473, 98479, 98491, 98507, + 98519, 98533, 98543, 98561, 98563, 98573, 98597, 98621, 98627, 98639, + 98641, 98663, 98669, 98689, 98711, 98713, 98717, 98729, 98731, 98737, + 98773, 98779, 98801, 98807, 98809, 98837, 98849, 98867, 98869, 98873, + 98887, 98893, 98897, 98899, 98909, 98911, 98927, 98929, 98939, 98947, + 98953, 98963, 98981, 98993, 
98999, 99013, 99017, 99023, 99041, 99053, + 99079, 99083, 99089, 99103, 99109, 99119, 99131, 99133, 99137, 99139, + 99149, 99173, 99181, 99191, 99223, 99233, 99241, 99251, 99257, 99259, + 99277, 99289, 99317, 99347, 99349, 99367, 99371, 99377, 99391, 99397, + 99401, 99409, 99431, 99439, 99469, 99487, 99497, 99523, 99527, 99529, + 99551, 99559, 99563, 99571, 99577, 99581, 99607, 99611, 99623, 99643, + 99661, 99667, 99679, 99689, 99707, 99709, 99713, 99719, 99721, 99733, + 99761, 99767, 99787, 99793, 99809, 99817, 99823, 99829, 99833, 99839, + 99859, 99871, 99877, 99881, 99901, 99907, 99923, 99929, 99961, 99971, + 99989, 99991, 100003, 100019, 100043, 100049, 100057, 100069, 100103, 100109, +100129, 100151, 100153, 100169, 100183, 100189, 100193, 100207, 100213, 100237, +100267, 100271, 100279, 100291, 100297, 100313, 100333, 100343, 100357, 100361, +100363, 100379, 100391, 100393, 100403, 100411, 100417, 100447, 100459, 100469, +100483, 100493, 100501, 100511, 100517, 100519, 100523, 100537, 100547, 100549, +100559, 100591, 100609, 100613, 100621, 100649, 100669, 100673, 100693, 100699, +100703, 100733, 100741, 100747, 100769, 100787, 100799, 100801, 100811, 100823, +100829, 100847, 100853, 100907, 100913, 100927, 100931, 100937, 100943, 100957, +100981, 100987, 100999, 101009, 101021, 101027, 101051, 101063, 101081, 101089, +101107, 101111, 101113, 101117, 101119, 101141, 101149, 101159, 101161, 101173, +101183, 101197, 101203, 101207, 101209, 101221, 101267, 101273, 101279, 101281, +101287, 101293, 101323, 101333, 101341, 101347, 101359, 101363, 101377, 101383, +101399, 101411, 101419, 101429, 101449, 101467, 101477, 101483, 101489, 101501, +101503, 101513, 101527, 101531, 101533, 101537, 101561, 101573, 101581, 101599, +101603, 101611, 101627, 101641, 101653, 101663, 101681, 101693, 101701, 101719, +101723, 101737, 101741, 101747, 101749, 101771, 101789, 101797, 101807, 101833, +101837, 101839, 101863, 101869, 101873, 101879, 101891, 101917, 101921, 101929, +101939, 101957, 101963, 101977, 101987, 101999, 102001, 102013, 102019, 102023, +102031, 102043, 102059, 102061, 102071, 102077, 102079, 102101, 102103, 102107, +102121, 102139, 102149, 102161, 102181, 102191, 102197, 102199, 102203, 102217, +102229, 102233, 102241, 102251, 102253, 102259, 102293, 102299, 102301, 102317, +102329, 102337, 102359, 102367, 102397, 102407, 102409, 102433, 102437, 102451, +102461, 102481, 102497, 102499, 102503, 102523, 102533, 102539, 102547, 102551, +102559, 102563, 102587, 102593, 102607, 102611, 102643, 102647, 102653, 102667, +102673, 102677, 102679, 102701, 102761, 102763, 102769, 102793, 102797, 102811, +102829, 102841, 102859, 102871, 102877, 102881, 102911, 102913, 102929, 102931, +102953, 102967, 102983, 103001, 103007, 103043, 103049, 103067, 103069, 103079, +103087, 103091, 103093, 103099, 103123, 103141, 103171, 103177, 103183, 103217, +103231, 103237, 103289, 103291, 103307, 103319, 103333, 103349, 103357, 103387, +103391, 103393, 103399, 103409, 103421, 103423, 103451, 103457, 103471, 103483, +103511, 103529, 103549, 103553, 103561, 103567, 103573, 103577, 103583, 103591, +103613, 103619, 103643, 103651, 103657, 103669, 103681, 103687, 103699, 103703, +103723, 103769, 103787, 103801, 103811, 103813, 103837, 103841, 103843, 103867, +103889, 103903, 103913, 103919, 103951, 103963, 103967, 103969, 103979, 103981, +103991, 103993, 103997, 104003, 104009, 104021, 104033, 104047, 104053, 104059, +104087, 104089, 104107, 104113, 104119, 104123, 104147, 104149, 104161, 104173, +104179, 104183, 
104207, 104231, 104233, 104239, 104243, 104281, 104287, 104297, +104309, 104311, 104323, 104327, 104347, 104369, 104381, 104383, 104393, 104399, +104417, 104459, 104471, 104473, 104479, 104491, 104513, 104527, 104537, 104543, +104549, 104551, 104561, 104579, 104593, 104597, 104623, 104639, 104651, 104659, +104677, 104681, 104683, 104693, 104701, 104707, 104711, 104717, 104723, 104729, +) diff --git a/env/lib/python3.8/site-packages/Crypto/Util/number.pyi b/env/lib/python3.8/site-packages/Crypto/Util/number.pyi new file mode 100644 index 00000000..f8680bf8 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/number.pyi @@ -0,0 +1,19 @@ +from typing import List, Optional, Callable + + +def ceil_div(n: int, d: int) -> int: ... +def size (N: int) -> int: ... +def getRandomInteger(N: int, randfunc: Optional[Callable]=None) -> int: ... +def getRandomRange(a: int, b: int, randfunc: Optional[Callable]=None) -> int: ... +def getRandomNBitInteger(N: int, randfunc: Optional[Callable]=None) -> int: ... +def GCD(x: int,y: int) -> int: ... +def inverse(u: int, v: int) -> int: ... +def getPrime(N: int, randfunc: Optional[Callable]=None) -> int: ... +def getStrongPrime(N: int, e: Optional[int]=0, false_positive_prob: Optional[float]=1e-6, randfunc: Optional[Callable]=None) -> int: ... +def isPrime(N: int, false_positive_prob: Optional[float]=1e-6, randfunc: Optional[Callable]=None) -> bool: ... +def long_to_bytes(n: int, blocksize: Optional[int]=0) -> bytes: ... +def bytes_to_long(s: bytes) -> int: ... +def long2str(n: int, blocksize: Optional[int]=0) -> bytes: ... +def str2long(s: bytes) -> int: ... + +sieve_base: List[int] diff --git a/env/lib/python3.8/site-packages/Crypto/Util/py3compat.py b/env/lib/python3.8/site-packages/Crypto/Util/py3compat.py new file mode 100644 index 00000000..36083578 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/py3compat.py @@ -0,0 +1,174 @@ +# -*- coding: utf-8 -*- +# +# Util/py3compat.py : Compatibility code for handling Py3k / Python 2.x +# +# Written in 2010 by Thorsten Behrens +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Compatibility code for handling string/bytes changes from Python 2.x to Py3k + +In Python 2.x, strings (of type ''str'') contain binary data, including encoded +Unicode text (e.g. UTF-8). The separate type ''unicode'' holds Unicode text. +Unicode literals are specified via the u'...' prefix. Indexing or slicing +either type always produces a string of the same type as the original. +Data read from a file is always of '''str'' type. + +In Python 3.x, strings (type ''str'') may only contain Unicode text. 
The u'...' +prefix and the ''unicode'' type are now redundant. A new type (called +''bytes'') has to be used for binary data (including any particular +''encoding'' of a string). The b'...' prefix allows one to specify a binary +literal. Indexing or slicing a string produces another string. Slicing a byte +string produces another byte string, but the indexing operation produces an +integer. Data read from a file is of '''str'' type if the file was opened in +text mode, or of ''bytes'' type otherwise. + +Since PyCrypto aims at supporting both Python 2.x and 3.x, the following helper +functions are used to keep the rest of the library as independent as possible +from the actual Python version. + +In general, the code should always deal with binary strings, and use integers +instead of 1-byte character strings. + +b(s) + Take a text string literal (with no prefix or with u'...' prefix) and + make a byte string. +bchr(c) + Take an integer and make a 1-character byte string. +bord(c) + Take the result of indexing on a byte string and make an integer. +tobytes(s) + Take a text string, a byte string, or a sequence of character taken from + a byte string and make a byte string. +""" + +import sys +import abc + + +if sys.version_info[0] == 2: + def b(s): + return s + def bchr(s): + return chr(s) + def bstr(s): + return str(s) + def bord(s): + return ord(s) + def tobytes(s, encoding="latin-1"): + if isinstance(s, unicode): + return s.encode(encoding) + elif isinstance(s, str): + return s + elif isinstance(s, bytearray): + return bytes(s) + elif isinstance(s, memoryview): + return s.tobytes() + else: + return ''.join(s) + def tostr(bs): + return bs + def byte_string(s): + return isinstance(s, str) + + from StringIO import StringIO + BytesIO = StringIO + + from sys import maxint + + iter_range = xrange + + def is_native_int(x): + return isinstance(x, (int, long)) + + def is_string(x): + return isinstance(x, basestring) + + def is_bytes(x): + return isinstance(x, str) or \ + isinstance(x, bytearray) or \ + isinstance(x, memoryview) + + ABC = abc.ABCMeta('ABC', (object,), {'__slots__': ()}) + + FileNotFoundError = IOError + +else: + def b(s): + return s.encode("latin-1") # utf-8 would cause some side-effects we don't want + def bchr(s): + return bytes([s]) + def bstr(s): + if isinstance(s,str): + return bytes(s,"latin-1") + else: + return bytes(s) + def bord(s): + return s + def tobytes(s, encoding="latin-1"): + if isinstance(s, bytes): + return s + elif isinstance(s, bytearray): + return bytes(s) + elif isinstance(s,str): + return s.encode(encoding) + elif isinstance(s, memoryview): + return s.tobytes() + else: + return bytes([s]) + def tostr(bs): + return bs.decode("latin-1") + def byte_string(s): + return isinstance(s, bytes) + + from io import BytesIO + from io import StringIO + from sys import maxsize as maxint + + iter_range = range + + def is_native_int(x): + return isinstance(x, int) + + def is_string(x): + return isinstance(x, str) + + def is_bytes(x): + return isinstance(x, bytes) or \ + isinstance(x, bytearray) or \ + isinstance(x, memoryview) + + from abc import ABC + + FileNotFoundError = FileNotFoundError + + +def _copy_bytes(start, end, seq): + """Return an immutable copy of a sequence (byte string, byte array, memoryview) + in a certain interval [start:seq]""" + + if isinstance(seq, memoryview): + return seq[start:end].tobytes() + elif isinstance(seq, bytearray): + return bytes(seq[start:end]) + else: + return seq[start:end] + +del sys +del abc diff --git 
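A minimal sketch of how the helpers documented above behave on Python 3 (illustrative only, not part of the vendored file; assumes the module is importable as Crypto.Util.py3compat):

    # Sketch: py3compat helpers on Python 3.
    from Crypto.Util.py3compat import b, bchr, bord, tobytes, tostr, is_bytes

    assert b("abc") == b"abc"            # text literal -> latin-1 encoded bytes
    assert bchr(65) == b"A"              # integer -> 1-character byte string
    assert bord(b"A"[0]) == 65           # indexing bytes already yields an int on Py3
    assert tobytes("abc") == b"abc"      # str/bytearray/memoryview -> bytes
    assert tostr(b"abc") == "abc"        # bytes -> text via latin-1
    assert is_bytes(bytearray(b"xy"))    # bytearray counts as binary data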
a/env/lib/python3.8/site-packages/Crypto/Util/py3compat.pyi b/env/lib/python3.8/site-packages/Crypto/Util/py3compat.pyi new file mode 100644 index 00000000..74e04a2b --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/py3compat.pyi @@ -0,0 +1,33 @@ +from typing import Union, Any, Optional, IO + +Buffer = Union[bytes, bytearray, memoryview] + +import sys + +def b(s: str) -> bytes: ... +def bchr(s: int) -> bytes: ... +def bord(s: bytes) -> int: ... +def tobytes(s: Union[bytes, str]) -> bytes: ... +def tostr(b: bytes) -> str: ... +def bytestring(x: Any) -> bool: ... + +def is_native_int(s: Any) -> bool: ... +def is_string(x: Any) -> bool: ... +def is_bytes(x: Any) -> bool: ... + +def BytesIO(b: bytes) -> IO[bytes]: ... +def StringIO(s: str) -> IO[str]: ... + +if sys.version_info[0] == 2: + from sys import maxint + iter_range = xrange + +else: + from sys import maxsize as maxint + iter_range = range + +class FileNotFoundError: + def __init__(self, err: int, msg: str, filename: str) -> None: + pass + +def _copy_bytes(start: Optional[int], end: Optional[int], seq: Buffer) -> bytes: ... diff --git a/env/lib/python3.8/site-packages/Crypto/Util/strxor.py b/env/lib/python3.8/site-packages/Crypto/Util/strxor.py new file mode 100644 index 00000000..362db6e6 --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/strxor.py @@ -0,0 +1,146 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, c_size_t, + create_string_buffer, get_raw_buffer, + c_uint8_ptr, is_writeable_buffer) + +_raw_strxor = load_pycryptodome_raw_lib( + "Crypto.Util._strxor", + """ + void strxor(const uint8_t *in1, + const uint8_t *in2, + uint8_t *out, size_t len); + void strxor_c(const uint8_t *in, + uint8_t c, + uint8_t *out, + size_t len); + """) + + +def strxor(term1, term2, output=None): + """From two byte strings of equal length, + create a third one which is the byte-by-byte XOR of the two. + + Args: + term1 (bytes/bytearray/memoryview): + The first byte string to XOR. 
+ term2 (bytes/bytearray/memoryview): + The second byte string to XOR. + output (bytearray/memoryview): + The location where the result will be written to. + It must have the same length as ``term1`` and ``term2``. + If ``None``, the result is returned. + :Return: + If ``output`` is ``None``, a new byte string with the result. + Otherwise ``None``. + + .. note:: + ``term1`` and ``term2`` must have the same length. + """ + + if len(term1) != len(term2): + raise ValueError("Only byte strings of equal length can be xored") + + if output is None: + result = create_string_buffer(len(term1)) + else: + # Note: output may overlap with either input + result = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(term1) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(term1)) + + _raw_strxor.strxor(c_uint8_ptr(term1), + c_uint8_ptr(term2), + c_uint8_ptr(result), + c_size_t(len(term1))) + + if output is None: + return get_raw_buffer(result) + else: + return None + + +def strxor_c(term, c, output=None): + """From a byte string, create a second one of equal length + where each byte is XOR-red with the same value. + + Args: + term(bytes/bytearray/memoryview): + The byte string to XOR. + c (int): + Every byte in the string will be XOR-ed with this value. + It must be between 0 and 255 (included). + output (None or bytearray/memoryview): + The location where the result will be written to. + It must have the same length as ``term``. + If ``None``, the result is returned. + + Return: + If ``output`` is ``None``, a new ``bytes`` string with the result. + Otherwise ``None``. + """ + + if not 0 <= c < 256: + raise ValueError("c must be in range(256)") + + if output is None: + result = create_string_buffer(len(term)) + else: + # Note: output may overlap with either input + result = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(term) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(term)) + + _raw_strxor.strxor_c(c_uint8_ptr(term), + c, + c_uint8_ptr(result), + c_size_t(len(term)) + ) + + if output is None: + return get_raw_buffer(result) + else: + return None + + +def _strxor_direct(term1, term2, result): + """Very fast XOR - check conditions!""" + _raw_strxor.strxor(term1, term2, result, c_size_t(len(term1))) diff --git a/env/lib/python3.8/site-packages/Crypto/Util/strxor.pyi b/env/lib/python3.8/site-packages/Crypto/Util/strxor.pyi new file mode 100644 index 00000000..ca896f3b --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/Util/strxor.pyi @@ -0,0 +1,6 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +def strxor(term1: bytes, term2: bytes, output: Optional[Buffer]=...) -> bytes: ... +def strxor_c(term: bytes, c: int, output: Optional[Buffer]=...) -> bytes: ... 
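A short usage sketch for strxor and strxor_c as defined above (assumes the compiled Crypto.Util._strxor extension is importable, as it would be in this environment):

    # Sketch: byte-wise XOR with Crypto.Util.strxor.
    from Crypto.Util.strxor import strxor, strxor_c

    x = bytes([0x00, 0x01, 0x02, 0x03])
    y = bytes([0xFF, 0xFF, 0xFF, 0xFF])
    assert strxor(x, y) == bytes([0xFF, 0xFE, 0xFD, 0xFC])   # new bytes returned

    out = bytearray(4)                        # caller-provided output buffer
    assert strxor(x, y, output=out) is None   # result is written in place
    assert bytes(out) == bytes([0xFF, 0xFE, 0xFD, 0xFC])

    assert strxor_c(b"AAAA", 0x20) == b"aaaa" # XOR every byte with one value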
diff --git a/env/lib/python3.8/site-packages/Crypto/__init__.py b/env/lib/python3.8/site-packages/Crypto/__init__.py new file mode 100644 index 00000000..9c2f83bc --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/__init__.py @@ -0,0 +1,6 @@ +__all__ = ['Cipher', 'Hash', 'Protocol', 'PublicKey', 'Util', 'Signature', + 'IO', 'Math'] + +version_info = (3, 15, '0') + +__version__ = ".".join([str(x) for x in version_info]) diff --git a/env/lib/python3.8/site-packages/Crypto/__init__.pyi b/env/lib/python3.8/site-packages/Crypto/__init__.pyi new file mode 100644 index 00000000..bc73446d --- /dev/null +++ b/env/lib/python3.8/site-packages/Crypto/__init__.pyi @@ -0,0 +1,4 @@ +from typing import Tuple, Union + +version_info : Tuple[int, int, Union[int, str]] +__version__ : str diff --git a/env/lib/python3.8/site-packages/Crypto/py.typed b/env/lib/python3.8/site-packages/Crypto/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/LICENSE new file mode 100644 index 00000000..2f1b8e15 --- /dev/null +++ b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/LICENSE @@ -0,0 +1,20 @@ +Copyright (c) 2017-2021 Ingy döt Net +Copyright (c) 2006-2016 Kirill Simonov + +Permission is hereby granted, free of charge, to any person obtaining a copy of +this software and associated documentation files (the "Software"), to deal in +the Software without restriction, including without limitation the rights to +use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies +of the Software, and to permit persons to whom the Software is furnished to do +so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
diff --git a/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/METADATA b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/METADATA new file mode 100644 index 00000000..9a910762 --- /dev/null +++ b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/METADATA @@ -0,0 +1,46 @@ +Metadata-Version: 2.1 +Name: PyYAML +Version: 6.0 +Summary: YAML parser and emitter for Python +Home-page: https://pyyaml.org/ +Author: Kirill Simonov +Author-email: xi@resolvent.net +License: MIT +Download-URL: https://pypi.org/project/PyYAML/ +Project-URL: Bug Tracker, https://github.com/yaml/pyyaml/issues +Project-URL: CI, https://github.com/yaml/pyyaml/actions +Project-URL: Documentation, https://pyyaml.org/wiki/PyYAMLDocumentation +Project-URL: Mailing lists, http://lists.sourceforge.net/lists/listinfo/yaml-core +Project-URL: Source Code, https://github.com/yaml/pyyaml +Platform: Any +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Cython +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Text Processing :: Markup +Requires-Python: >=3.6 +License-File: LICENSE + +YAML is a data serialization format designed for human readability +and interaction with scripting languages. PyYAML is a YAML parser +and emitter for Python. + +PyYAML features a complete YAML 1.1 parser, Unicode support, pickle +support, capable extension API, and sensible error messages. PyYAML +supports standard YAML tags and provides Python-specific tags that +allow to represent an arbitrary Python object. + +PyYAML is applicable for a broad range of tasks from complex +configuration files to object serialization and persistence. 
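The parse/emit round trip this description refers to, as a minimal sketch (the document and keys below are made up for illustration):

    # Sketch: parse YAML into Python objects and emit it back.
    import yaml

    text = "name: horus\ntags: [verification, cairo]\n"
    data = yaml.safe_load(text)                   # -> dict with native Python values
    assert data == {"name": "horus", "tags": ["verification", "cairo"]}
    print(yaml.safe_dump(data, sort_keys=False))  # emits equivalent YAML text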
+ diff --git a/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/RECORD b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/RECORD new file mode 100644 index 00000000..7105b9f0 --- /dev/null +++ b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/RECORD @@ -0,0 +1,43 @@ +PyYAML-6.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +PyYAML-6.0.dist-info/LICENSE,sha256=jTko-dxEkP1jVwfLiOsmvXZBAqcoKVQwfT5RZ6V36KQ,1101 +PyYAML-6.0.dist-info/METADATA,sha256=QmHx9kGp_0yezQCXYaft4eEFeJ6W4oyFfYwHDLP1kdg,2006 +PyYAML-6.0.dist-info/RECORD,, +PyYAML-6.0.dist-info/WHEEL,sha256=RiwktpmF40OphKd3_aIG01PzIOQlJ7dpBn3cFSc9vak,217 +PyYAML-6.0.dist-info/top_level.txt,sha256=rpj0IVMTisAjh_1vG3Ccf9v5jpCQwAz6cD1IVU5ZdhQ,11 +_yaml/__init__.py,sha256=04Ae_5osxahpJHa3XBZUAf4wi6XX32gR8D6X6p64GEA,1402 +_yaml/__pycache__/__init__.cpython-38.pyc,, +yaml/__init__.py,sha256=NDS7S8XgA72-hY6LRmGzUWTPvzGzjWVrWk-OGA-77AA,12309 +yaml/__pycache__/__init__.cpython-38.pyc,, +yaml/__pycache__/composer.cpython-38.pyc,, +yaml/__pycache__/constructor.cpython-38.pyc,, +yaml/__pycache__/cyaml.cpython-38.pyc,, +yaml/__pycache__/dumper.cpython-38.pyc,, +yaml/__pycache__/emitter.cpython-38.pyc,, +yaml/__pycache__/error.cpython-38.pyc,, +yaml/__pycache__/events.cpython-38.pyc,, +yaml/__pycache__/loader.cpython-38.pyc,, +yaml/__pycache__/nodes.cpython-38.pyc,, +yaml/__pycache__/parser.cpython-38.pyc,, +yaml/__pycache__/reader.cpython-38.pyc,, +yaml/__pycache__/representer.cpython-38.pyc,, +yaml/__pycache__/resolver.cpython-38.pyc,, +yaml/__pycache__/scanner.cpython-38.pyc,, +yaml/__pycache__/serializer.cpython-38.pyc,, +yaml/__pycache__/tokens.cpython-38.pyc,, +yaml/_yaml.cpython-38-x86_64-linux-gnu.so,sha256=lMaKSmQZy3WNZSmmU0Wg5Y5ZAs-HR5vItyGVUIsp8Rg,2847784 +yaml/composer.py,sha256=_Ko30Wr6eDWUeUpauUGT3Lcg9QPBnOPVlTnIMRGJ9FM,4883 +yaml/constructor.py,sha256=kNgkfaeLUkwQYY_Q6Ff1Tz2XVw_pG1xVE9Ak7z-viLA,28639 +yaml/cyaml.py,sha256=6ZrAG9fAYvdVe2FK_w0hmXoG7ZYsoYUwapG8CiC72H0,3851 +yaml/dumper.py,sha256=PLctZlYwZLp7XmeUdwRuv4nYOZ2UBnDIUy8-lKfLF-o,2837 +yaml/emitter.py,sha256=jghtaU7eFwg31bG0B7RZea_29Adi9CKmXq_QjgQpCkQ,43006 +yaml/error.py,sha256=Ah9z-toHJUbE9j-M8YpxgSRM5CgLCcwVzJgLLRF2Fxo,2533 +yaml/events.py,sha256=50_TksgQiE4up-lKo_V-nBy-tAIxkIPQxY5qDhKCeHw,2445 +yaml/loader.py,sha256=UVa-zIqmkFSCIYq_PgSGm4NSJttHY2Rf_zQ4_b1fHN0,2061 +yaml/nodes.py,sha256=gPKNj8pKCdh2d4gr3gIYINnPOaOxGhJAUiYhGRnPE84,1440 +yaml/parser.py,sha256=ilWp5vvgoHFGzvOZDItFoGjD6D42nhlZrZyjAwa0oJo,25495 +yaml/reader.py,sha256=0dmzirOiDG4Xo41RnuQS7K9rkY3xjHiVasfDMNTqCNw,6794 +yaml/representer.py,sha256=IuWP-cAW9sHKEnS0gCqSa894k1Bg4cgTxaDwIcbRQ-Y,14190 +yaml/resolver.py,sha256=9L-VYfm4mWHxUD1Vg4X7rjDRK_7VZd6b92wzq7Y2IKY,9004 +yaml/scanner.py,sha256=YEM3iLZSaQwXcQRg2l2R4MdT0zGP2F9eHkKGKnHyWQY,51279 +yaml/serializer.py,sha256=ChuFgmhU01hj4xgI8GaKv6vfM2Bujwa9i7d2FAHj7cA,4165 +yaml/tokens.py,sha256=lTQIzSVw8Mg9wv459-TjiOQe6wVziqaRlqX2_89rp54,2573 diff --git a/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/WHEEL new file mode 100644 index 00000000..34ce4b8a --- /dev/null +++ b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/WHEEL @@ -0,0 +1,8 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.0) +Root-Is-Purelib: false +Tag: cp38-cp38-manylinux_2_5_x86_64 +Tag: cp38-cp38-manylinux1_x86_64 +Tag: cp38-cp38-manylinux_2_12_x86_64 +Tag: cp38-cp38-manylinux2010_x86_64 + diff --git a/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/top_level.txt 
b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/top_level.txt new file mode 100644 index 00000000..e6475e91 --- /dev/null +++ b/env/lib/python3.8/site-packages/PyYAML-6.0.dist-info/top_level.txt @@ -0,0 +1,2 @@ +_yaml +yaml diff --git a/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/INSTALLER b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/LICENSE b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/LICENSE new file mode 100644 index 00000000..a343a622 --- /dev/null +++ b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/LICENSE @@ -0,0 +1,25 @@ +Copyright 2012-2018 Dmitry Shachnev +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. +3. Neither the name of the University nor the names of its contributors may be + used to endorse or promote products derived from this software without + specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY +DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON +ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/METADATA b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/METADATA new file mode 100644 index 00000000..f0323fd1 --- /dev/null +++ b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/METADATA @@ -0,0 +1,114 @@ +Metadata-Version: 2.1 +Name: SecretStorage +Version: 3.3.3 +Summary: Python bindings to FreeDesktop.org Secret Service API +Home-page: https://github.com/mitya57/secretstorage +Author: Dmitry Shachnev +Author-email: mitya57@gmail.com +License: BSD 3-Clause License +Platform: Linux +Classifier: Development Status :: 5 - Production/Stable +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: POSIX +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Topic :: Security +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Requires-Python: >=3.6 +Description-Content-Type: text/x-rst +License-File: LICENSE +Requires-Dist: cryptography (>=2.0) +Requires-Dist: jeepney (>=0.6) + +.. image:: https://github.com/mitya57/secretstorage/workflows/tests/badge.svg + :target: https://github.com/mitya57/secretstorage/actions + :alt: GitHub Actions status +.. image:: https://codecov.io/gh/mitya57/secretstorage/branch/master/graph/badge.svg + :target: https://codecov.io/gh/mitya57/secretstorage + :alt: Coverage status +.. image:: https://readthedocs.org/projects/secretstorage/badge/?version=latest + :target: https://secretstorage.readthedocs.io/en/latest/ + :alt: ReadTheDocs status + +Module description +================== + +This module provides a way for securely storing passwords and other secrets. + +It uses D-Bus `Secret Service`_ API that is supported by GNOME Keyring, +KWallet (since version 5.97) and KeePassXC. + +The main classes provided are ``secretstorage.Item``, representing a secret +item (that has a *label*, a *secret* and some *attributes*) and +``secretstorage.Collection``, a place items are stored in. + +SecretStorage supports most of the functions provided by Secret Service, +including creating and deleting items and collections, editing items, +locking and unlocking collections (asynchronous unlocking is also supported). + +The documentation can be found on `secretstorage.readthedocs.io`_. + +.. _`Secret Service`: https://specifications.freedesktop.org/secret-service/ +.. _`secretstorage.readthedocs.io`: https://secretstorage.readthedocs.io/en/latest/ + +Building the module +=================== + +.. note:: + SecretStorage 3.x supports Python 3.6 and newer versions. + If you have an older version of Python, install SecretStorage 2.x:: + + pip install "SecretStorage < 3" + +SecretStorage requires these packages to work: + +* Jeepney_ +* `python-cryptography`_ + +To build SecretStorage, use this command:: + + python3 setup.py build + +If you have Sphinx_ installed, you can also build the documentation:: + + python3 setup.py build_sphinx + +.. _Jeepney: https://pypi.org/project/jeepney/ +.. _`python-cryptography`: https://pypi.org/project/cryptography/ +.. _Sphinx: http://sphinx-doc.org/ + +Testing the module +================== + +First, make sure that you have the Secret Service daemon installed. 
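A minimal sketch of the Item and Collection classes described above (requires a running Secret Service daemon and a D-Bus session; the label, attributes, and secret below are made-up examples):

    # Sketch: store and read back a secret via the default collection.
    import secretstorage

    conn = secretstorage.dbus_init()
    collection = secretstorage.get_default_collection(conn)
    if collection.is_locked():
        collection.unlock()
    item = collection.create_item(
        "example item",               # label
        {"application": "example"},   # attributes
        b"s3cret",                    # the secret itself
    )
    assert item.get_secret() == b"s3cret"
    item.delete()                     # clean up the test item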
+The `GNOME Keyring`_ is the reference server-side implementation for the +Secret Service specification. + +.. _`GNOME Keyring`: https://download.gnome.org/sources/gnome-keyring/ + +Then, start the daemon and unlock the ``default`` collection, if needed. +The testsuite will fail to run if the ``default`` collection exists and is +locked. If it does not exist, the testsuite can also use the temporary +``session`` collection, as provided by the GNOME Keyring. + +Then, run the Python unittest module:: + + python3 -m unittest discover -s tests + +If you want to run the tests in an isolated or headless environment, run +this command in a D-Bus session:: + + dbus-run-session -- python3 -m unittest discover -s tests + +Get the code +============ + +SecretStorage is available under BSD license. The source code can be found +on GitHub_. + +.. _GitHub: https://github.com/mitya57/secretstorage diff --git a/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/RECORD b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/RECORD new file mode 100644 index 00000000..c55c1d7f --- /dev/null +++ b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/RECORD @@ -0,0 +1,21 @@ +SecretStorage-3.3.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +SecretStorage-3.3.3.dist-info/LICENSE,sha256=cPa_yndjPDXvohgyjtpUhtcFTCkU1hggmA43h5dSCiU,1504 +SecretStorage-3.3.3.dist-info/METADATA,sha256=ZScD5voEgjme04wlw9OZigESMxLa2xG_eaIeZ_IMqJI,4027 +SecretStorage-3.3.3.dist-info/RECORD,, +SecretStorage-3.3.3.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +SecretStorage-3.3.3.dist-info/top_level.txt,sha256=hveSi1OWGaEt3kEVbjmZ0M-ASPxi6y-nTPVa-d3c0B4,14 +secretstorage/__init__.py,sha256=W1p-HB1Qh12Dv6K8ml0Wj_MzN09_dEeezJjQZAHf-O4,3364 +secretstorage/__pycache__/__init__.cpython-38.pyc,, +secretstorage/__pycache__/collection.cpython-38.pyc,, +secretstorage/__pycache__/defines.cpython-38.pyc,, +secretstorage/__pycache__/dhcrypto.cpython-38.pyc,, +secretstorage/__pycache__/exceptions.cpython-38.pyc,, +secretstorage/__pycache__/item.cpython-38.pyc,, +secretstorage/__pycache__/util.cpython-38.pyc,, +secretstorage/collection.py,sha256=lHwSOkFO5sRspgcUBoBI8ZG2au2bcUSO6X64ksVdnsQ,9436 +secretstorage/defines.py,sha256=DzUrEWzSvBlN8kK2nVXnLGlCZv7HWNyfN1AYqRmjKGE,807 +secretstorage/dhcrypto.py,sha256=BiuDoNvNmd8i7Ul4ENouRnbqFE3SUmTUSAn6RVvn7Tg,2578 +secretstorage/exceptions.py,sha256=1uUZXTua4jRZf4PKDIT2SVWcSKP2lP97s8r3eJZudio,1655 +secretstorage/item.py,sha256=3niFSjOzwrB2hV1jrg78RXgBsTrpw44852VpZHXUpeE,5813 +secretstorage/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +secretstorage/util.py,sha256=vHu01QaooMQ5sRdRDFX9pg7rrzfJWF9vg0plm3Zg0Po,6755 diff --git a/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/WHEEL b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/top_level.txt b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/top_level.txt new file mode 100644 index 00000000..0ec6ae86 --- /dev/null +++ b/env/lib/python3.8/site-packages/SecretStorage-3.3.3.dist-info/top_level.txt @@ -0,0 +1 @@ +secretstorage diff --git a/env/lib/python3.8/site-packages/_black_version.py 
b/env/lib/python3.8/site-packages/_black_version.py new file mode 100644 index 00000000..20e33b8d --- /dev/null +++ b/env/lib/python3.8/site-packages/_black_version.py @@ -0,0 +1 @@ +version = "22.10.0" diff --git a/env/lib/python3.8/site-packages/_cffi_backend.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/_cffi_backend.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..a06c3509 Binary files /dev/null and b/env/lib/python3.8/site-packages/_cffi_backend.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/_distutils_hack/__init__.py b/env/lib/python3.8/site-packages/_distutils_hack/__init__.py new file mode 100644 index 00000000..f987a536 --- /dev/null +++ b/env/lib/python3.8/site-packages/_distutils_hack/__init__.py @@ -0,0 +1,222 @@ +# don't import any costly modules +import sys +import os + + +is_pypy = '__pypy__' in sys.builtin_module_names + + +def warn_distutils_present(): + if 'distutils' not in sys.modules: + return + if is_pypy and sys.version_info < (3, 7): + # PyPy for 3.6 unconditionally imports distutils, so bypass the warning + # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250 + return + import warnings + + warnings.warn( + "Distutils was imported before Setuptools, but importing Setuptools " + "also replaces the `distutils` module in `sys.modules`. This may lead " + "to undesirable behaviors or errors. To avoid these issues, avoid " + "using distutils directly, ensure that setuptools is installed in the " + "traditional way (e.g. not an editable install), and/or make sure " + "that setuptools is always imported before distutils." + ) + + +def clear_distutils(): + if 'distutils' not in sys.modules: + return + import warnings + + warnings.warn("Setuptools is replacing distutils.") + mods = [ + name + for name in sys.modules + if name == "distutils" or name.startswith("distutils.") + ] + for name in mods: + del sys.modules[name] + + +def enabled(): + """ + Allow selection of distutils by environment variable. + """ + which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'local') + return which == 'local' + + +def ensure_local_distutils(): + import importlib + + clear_distutils() + + # With the DistutilsMetaFinder in place, + # perform an import to cause distutils to be + # loaded from setuptools._distutils. Ref #2906. + with shim(): + importlib.import_module('distutils') + + # check that submodules load as expected + core = importlib.import_module('distutils.core') + assert '_distutils' in core.__file__, core.__file__ + assert 'setuptools._distutils.log' not in sys.modules + + +def do_override(): + """ + Ensure that the local copy of distutils is preferred over stdlib. + + See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401 + for more motivation. + """ + if enabled(): + warn_distutils_present() + ensure_local_distutils() + + +class _TrivialRe: + def __init__(self, *patterns): + self._patterns = patterns + + def match(self, string): + return all(pat in string for pat in self._patterns) + + +class DistutilsMetaFinder: + def find_spec(self, fullname, path, target=None): + # optimization: only consider top level modules and those + # found in the CPython test suite. 
+ if path is not None and not fullname.startswith('test.'): + return + + method_name = 'spec_for_{fullname}'.format(**locals()) + method = getattr(self, method_name, lambda: None) + return method() + + def spec_for_distutils(self): + if self.is_cpython(): + return + + import importlib + import importlib.abc + import importlib.util + + try: + mod = importlib.import_module('setuptools._distutils') + except Exception: + # There are a couple of cases where setuptools._distutils + # may not be present: + # - An older Setuptools without a local distutils is + # taking precedence. Ref #2957. + # - Path manipulation during sitecustomize removes + # setuptools from the path but only after the hook + # has been loaded. Ref #2980. + # In either case, fall back to stdlib behavior. + return + + class DistutilsLoader(importlib.abc.Loader): + def create_module(self, spec): + mod.__name__ = 'distutils' + return mod + + def exec_module(self, module): + pass + + return importlib.util.spec_from_loader( + 'distutils', DistutilsLoader(), origin=mod.__file__ + ) + + @staticmethod + def is_cpython(): + """ + Suppress supplying distutils for CPython (build and tests). + Ref #2965 and #3007. + """ + return os.path.isfile('pybuilddir.txt') + + def spec_for_pip(self): + """ + Ensure stdlib distutils when running under pip. + See pypa/pip#8761 for rationale. + """ + if self.pip_imported_during_build(): + return + clear_distutils() + self.spec_for_distutils = lambda: None + + @classmethod + def pip_imported_during_build(cls): + """ + Detect if pip is being imported in a build script. Ref #2355. + """ + import traceback + + return any( + cls.frame_file_is_setup(frame) for frame, line in traceback.walk_stack(None) + ) + + @staticmethod + def frame_file_is_setup(frame): + """ + Return True if the indicated frame suggests a setup.py file. + """ + # some frames may not have __file__ (#2940) + return frame.f_globals.get('__file__', '').endswith('setup.py') + + def spec_for_sensitive_tests(self): + """ + Ensure stdlib distutils when running select tests under CPython. 
+ + python/cpython#91169 + """ + clear_distutils() + self.spec_for_distutils = lambda: None + + sensitive_tests = ( + [ + 'test.test_distutils', + 'test.test_peg_generator', + 'test.test_importlib', + ] + if sys.version_info < (3, 10) + else [ + 'test.test_distutils', + ] + ) + + +for name in DistutilsMetaFinder.sensitive_tests: + setattr( + DistutilsMetaFinder, + f'spec_for_{name}', + DistutilsMetaFinder.spec_for_sensitive_tests, + ) + + +DISTUTILS_FINDER = DistutilsMetaFinder() + + +def add_shim(): + DISTUTILS_FINDER in sys.meta_path or insert_shim() + + +class shim: + def __enter__(self): + insert_shim() + + def __exit__(self, exc, value, tb): + remove_shim() + + +def insert_shim(): + sys.meta_path.insert(0, DISTUTILS_FINDER) + + +def remove_shim(): + try: + sys.meta_path.remove(DISTUTILS_FINDER) + except ValueError: + pass diff --git a/env/lib/python3.8/site-packages/_distutils_hack/override.py b/env/lib/python3.8/site-packages/_distutils_hack/override.py new file mode 100644 index 00000000..2cc433a4 --- /dev/null +++ b/env/lib/python3.8/site-packages/_distutils_hack/override.py @@ -0,0 +1 @@ +__import__('_distutils_hack').do_override() diff --git a/env/lib/python3.8/site-packages/_pyrsistent_version.py b/env/lib/python3.8/site-packages/_pyrsistent_version.py new file mode 100644 index 00000000..5daae67f --- /dev/null +++ b/env/lib/python3.8/site-packages/_pyrsistent_version.py @@ -0,0 +1 @@ +__version__ = '0.19.2' diff --git a/env/lib/python3.8/site-packages/_pytest/__init__.py b/env/lib/python3.8/site-packages/_pytest/__init__.py new file mode 100644 index 00000000..8a406c5c --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/__init__.py @@ -0,0 +1,9 @@ +__all__ = ["__version__", "version_tuple"] + +try: + from ._version import version as __version__, version_tuple +except ImportError: # pragma: no cover + # broken installation, we don't even try + # unknown only works because we do poor mans version compare + __version__ = "unknown" + version_tuple = (0, 0, "unknown") # type:ignore[assignment] diff --git a/env/lib/python3.8/site-packages/_pytest/_argcomplete.py b/env/lib/python3.8/site-packages/_pytest/_argcomplete.py new file mode 100644 index 00000000..120f09ff --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_argcomplete.py @@ -0,0 +1,116 @@ +"""Allow bash-completion for argparse with argcomplete if installed. + +Needs argcomplete>=0.5.6 for python 3.2/3.3 (older versions fail +to find the magic string, so _ARGCOMPLETE env. var is never set, and +this does not need special code). + +Function try_argcomplete(parser) should be called directly before +the call to ArgumentParser.parse_args(). + +The filescompleter is what you normally would use on the positional +arguments specification, in order to get "dirname/" after "dirn" +instead of the default "dirname ": + + optparser.add_argument(Config._file_or_dir, nargs='*').completer=filescompleter + +Other, application specific, completers should go in the file +doing the add_argument calls as they need to be specified as .completer +attributes as well. (If argcomplete is not installed, the function the +attribute points to will not be used). + +SPEEDUP +======= + +The generic argcomplete script for bash-completion +(/etc/bash_completion.d/python-argcomplete.sh) +uses a python program to determine startup script generated by pip. 
+You can speed up completion somewhat by changing this script to include + # PYTHON_ARGCOMPLETE_OK +so the python-argcomplete-check-easy-install-script does not +need to be called to find the entry point of the code and see if that is +marked with PYTHON_ARGCOMPLETE_OK. + +INSTALL/DEBUGGING +================= + +To include this support in another application that has setup.py generated +scripts: + +- Add the line: + # PYTHON_ARGCOMPLETE_OK + near the top of the main python entry point. + +- Include in the file calling parse_args(): + from _argcomplete import try_argcomplete, filescompleter + Call try_argcomplete just before parse_args(), and optionally add + filescompleter to the positional arguments' add_argument(). + +If things do not work right away: + +- Switch on argcomplete debugging with (also helpful when doing custom + completers): + export _ARC_DEBUG=1 + +- Run: + python-argcomplete-check-easy-install-script $(which appname) + echo $? + will echo 0 if the magic line has been found, 1 if not. + +- Sometimes it helps to find early on errors using: + _ARGCOMPLETE=1 _ARC_DEBUG=1 appname + which should throw a KeyError: 'COMPLINE' (which is properly set by the + global argcomplete script). +""" +import argparse +import os +import sys +from glob import glob +from typing import Any +from typing import List +from typing import Optional + + +class FastFilesCompleter: + """Fast file completer class.""" + + def __init__(self, directories: bool = True) -> None: + self.directories = directories + + def __call__(self, prefix: str, **kwargs: Any) -> List[str]: + # Only called on non option completions. + if os.path.sep in prefix[1:]: + prefix_dir = len(os.path.dirname(prefix) + os.path.sep) + else: + prefix_dir = 0 + completion = [] + globbed = [] + if "*" not in prefix and "?" not in prefix: + # We are on unix, otherwise no bash. + if not prefix or prefix[-1] == os.path.sep: + globbed.extend(glob(prefix + ".*")) + prefix += "*" + globbed.extend(glob(prefix)) + for x in sorted(globbed): + if os.path.isdir(x): + x += "/" + # Append stripping the prefix (like bash, not like compgen). 
+ completion.append(x[prefix_dir:]) + return completion + + +if os.environ.get("_ARGCOMPLETE"): + try: + import argcomplete.completers + except ImportError: + sys.exit(-1) + filescompleter: Optional[FastFilesCompleter] = FastFilesCompleter() + + def try_argcomplete(parser: argparse.ArgumentParser) -> None: + argcomplete.autocomplete(parser, always_complete_options=False) + +else: + + def try_argcomplete(parser: argparse.ArgumentParser) -> None: + pass + + filescompleter = None diff --git a/env/lib/python3.8/site-packages/_pytest/_code/__init__.py b/env/lib/python3.8/site-packages/_pytest/_code/__init__.py new file mode 100644 index 00000000..511d0dde --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_code/__init__.py @@ -0,0 +1,22 @@ +"""Python inspection/code generation API.""" +from .code import Code +from .code import ExceptionInfo +from .code import filter_traceback +from .code import Frame +from .code import getfslineno +from .code import Traceback +from .code import TracebackEntry +from .source import getrawcode +from .source import Source + +__all__ = [ + "Code", + "ExceptionInfo", + "filter_traceback", + "Frame", + "getfslineno", + "getrawcode", + "Traceback", + "TracebackEntry", + "Source", +] diff --git a/env/lib/python3.8/site-packages/_pytest/_code/code.py b/env/lib/python3.8/site-packages/_pytest/_code/code.py new file mode 100644 index 00000000..97985def --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_code/code.py @@ -0,0 +1,1292 @@ +import ast +import inspect +import os +import re +import sys +import traceback +from inspect import CO_VARARGS +from inspect import CO_VARKEYWORDS +from io import StringIO +from pathlib import Path +from traceback import format_exception_only +from types import CodeType +from types import FrameType +from types import TracebackType +from typing import Any +from typing import Callable +from typing import ClassVar +from typing import Dict +from typing import Generic +from typing import Iterable +from typing import List +from typing import Mapping +from typing import Optional +from typing import overload +from typing import Pattern +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union +from weakref import ref + +import attr +import pluggy + +import _pytest +from _pytest._code.source import findsource +from _pytest._code.source import getrawcode +from _pytest._code.source import getstatementrange_ast +from _pytest._code.source import Source +from _pytest._io import TerminalWriter +from _pytest._io.saferepr import safeformat +from _pytest._io.saferepr import saferepr +from _pytest.compat import final +from _pytest.compat import get_real_func +from _pytest.deprecated import check_ispytest +from _pytest.pathlib import absolutepath +from _pytest.pathlib import bestrelpath + +if TYPE_CHECKING: + from typing_extensions import Literal + from typing_extensions import SupportsIndex + from weakref import ReferenceType + + _TracebackStyle = Literal["long", "short", "line", "no", "native", "value", "auto"] + +if sys.version_info[:2] < (3, 11): + from exceptiongroup import BaseExceptionGroup + + +class Code: + """Wrapper around Python code objects.""" + + __slots__ = ("raw",) + + def __init__(self, obj: CodeType) -> None: + self.raw = obj + + @classmethod + def from_function(cls, obj: object) -> "Code": + return cls(getrawcode(obj)) + + def __eq__(self, other): + return self.raw == other.raw + + # Ignore 
type because of https://github.com/python/mypy/issues/4266. + __hash__ = None # type: ignore + + @property + def firstlineno(self) -> int: + return self.raw.co_firstlineno - 1 + + @property + def name(self) -> str: + return self.raw.co_name + + @property + def path(self) -> Union[Path, str]: + """Return a path object pointing to source code, or an ``str`` in + case of ``OSError`` / non-existing file.""" + if not self.raw.co_filename: + return "" + try: + p = absolutepath(self.raw.co_filename) + # maybe don't try this checking + if not p.exists(): + raise OSError("path check failed.") + return p + except OSError: + # XXX maybe try harder like the weird logic + # in the standard lib [linecache.updatecache] does? + return self.raw.co_filename + + @property + def fullsource(self) -> Optional["Source"]: + """Return a _pytest._code.Source object for the full source file of the code.""" + full, _ = findsource(self.raw) + return full + + def source(self) -> "Source": + """Return a _pytest._code.Source object for the code object's source only.""" + # return source only for that part of code + return Source(self.raw) + + def getargs(self, var: bool = False) -> Tuple[str, ...]: + """Return a tuple with the argument names for the code object. + + If 'var' is set True also return the names of the variable and + keyword arguments when present. + """ + # Handy shortcut for getting args. + raw = self.raw + argcount = raw.co_argcount + if var: + argcount += raw.co_flags & CO_VARARGS + argcount += raw.co_flags & CO_VARKEYWORDS + return raw.co_varnames[:argcount] + + +class Frame: + """Wrapper around a Python frame holding f_locals and f_globals + in which expressions can be evaluated.""" + + __slots__ = ("raw",) + + def __init__(self, frame: FrameType) -> None: + self.raw = frame + + @property + def lineno(self) -> int: + return self.raw.f_lineno - 1 + + @property + def f_globals(self) -> Dict[str, Any]: + return self.raw.f_globals + + @property + def f_locals(self) -> Dict[str, Any]: + return self.raw.f_locals + + @property + def code(self) -> Code: + return Code(self.raw.f_code) + + @property + def statement(self) -> "Source": + """Statement this frame is at.""" + if self.code.fullsource is None: + return Source("") + return self.code.fullsource.getstatement(self.lineno) + + def eval(self, code, **vars): + """Evaluate 'code' in the frame. + + 'vars' are optional additional local variables. + + Returns the result of the evaluation. + """ + f_locals = self.f_locals.copy() + f_locals.update(vars) + return eval(code, self.f_globals, f_locals) + + def repr(self, object: object) -> str: + """Return a 'safe' (non-recursive, one-line) string repr for 'object'.""" + return saferepr(object) + + def getargs(self, var: bool = False): + """Return a list of tuples (name, value) for all arguments. + + If 'var' is set True, also include the variable and keyword arguments + when present. 
+ """ + retval = [] + for arg in self.code.getargs(var): + try: + retval.append((arg, self.f_locals[arg])) + except KeyError: + pass # this can occur when using Psyco + return retval + + +class TracebackEntry: + """A single entry in a Traceback.""" + + __slots__ = ("_rawentry", "_excinfo", "_repr_style") + + def __init__( + self, + rawentry: TracebackType, + excinfo: Optional["ReferenceType[ExceptionInfo[BaseException]]"] = None, + ) -> None: + self._rawentry = rawentry + self._excinfo = excinfo + self._repr_style: Optional['Literal["short", "long"]'] = None + + @property + def lineno(self) -> int: + return self._rawentry.tb_lineno - 1 + + def set_repr_style(self, mode: "Literal['short', 'long']") -> None: + assert mode in ("short", "long") + self._repr_style = mode + + @property + def frame(self) -> Frame: + return Frame(self._rawentry.tb_frame) + + @property + def relline(self) -> int: + return self.lineno - self.frame.code.firstlineno + + def __repr__(self) -> str: + return "" % (self.frame.code.path, self.lineno + 1) + + @property + def statement(self) -> "Source": + """_pytest._code.Source object for the current statement.""" + source = self.frame.code.fullsource + assert source is not None + return source.getstatement(self.lineno) + + @property + def path(self) -> Union[Path, str]: + """Path to the source code.""" + return self.frame.code.path + + @property + def locals(self) -> Dict[str, Any]: + """Locals of underlying frame.""" + return self.frame.f_locals + + def getfirstlinesource(self) -> int: + return self.frame.code.firstlineno + + def getsource( + self, astcache: Optional[Dict[Union[str, Path], ast.AST]] = None + ) -> Optional["Source"]: + """Return failing source code.""" + # we use the passed in astcache to not reparse asttrees + # within exception info printing + source = self.frame.code.fullsource + if source is None: + return None + key = astnode = None + if astcache is not None: + key = self.frame.code.path + if key is not None: + astnode = astcache.get(key, None) + start = self.getfirstlinesource() + try: + astnode, _, end = getstatementrange_ast( + self.lineno, source, astnode=astnode + ) + except SyntaxError: + end = self.lineno + 1 + else: + if key is not None and astcache is not None: + astcache[key] = astnode + return source[start:end] + + source = property(getsource) + + def ishidden(self) -> bool: + """Return True if the current frame has a var __tracebackhide__ + resolving to True. + + If __tracebackhide__ is a callable, it gets called with the + ExceptionInfo instance and can decide whether to hide the traceback. + + Mostly for internal use. + """ + tbh: Union[ + bool, Callable[[Optional[ExceptionInfo[BaseException]]], bool] + ] = False + for maybe_ns_dct in (self.frame.f_locals, self.frame.f_globals): + # in normal cases, f_locals and f_globals are dictionaries + # however via `exec(...)` / `eval(...)` they can be other types + # (even incorrect types!). + # as such, we suppress all exceptions while accessing __tracebackhide__ + try: + tbh = maybe_ns_dct["__tracebackhide__"] + except Exception: + pass + else: + break + if tbh and callable(tbh): + return tbh(None if self._excinfo is None else self._excinfo()) + return tbh + + def __str__(self) -> str: + name = self.frame.code.name + try: + line = str(self.statement).lstrip() + except KeyboardInterrupt: + raise + except BaseException: + line = "???" + # This output does not quite match Python's repr for traceback entries, + # but changing it to do so would break certain plugins. 
See + # https://github.com/pytest-dev/pytest/pull/7535/ for details. + return " File %r:%d in %s\n %s\n" % ( + str(self.path), + self.lineno + 1, + name, + line, + ) + + @property + def name(self) -> str: + """co_name of underlying code.""" + return self.frame.code.raw.co_name + + +class Traceback(List[TracebackEntry]): + """Traceback objects encapsulate and offer higher level access to Traceback entries.""" + + def __init__( + self, + tb: Union[TracebackType, Iterable[TracebackEntry]], + excinfo: Optional["ReferenceType[ExceptionInfo[BaseException]]"] = None, + ) -> None: + """Initialize from given python traceback object and ExceptionInfo.""" + self._excinfo = excinfo + if isinstance(tb, TracebackType): + + def f(cur: TracebackType) -> Iterable[TracebackEntry]: + cur_: Optional[TracebackType] = cur + while cur_ is not None: + yield TracebackEntry(cur_, excinfo=excinfo) + cur_ = cur_.tb_next + + super().__init__(f(tb)) + else: + super().__init__(tb) + + def cut( + self, + path: Optional[Union["os.PathLike[str]", str]] = None, + lineno: Optional[int] = None, + firstlineno: Optional[int] = None, + excludepath: Optional["os.PathLike[str]"] = None, + ) -> "Traceback": + """Return a Traceback instance wrapping part of this Traceback. + + By providing any combination of path, lineno and firstlineno, the + first frame to start the to-be-returned traceback is determined. + + This allows cutting the first part of a Traceback instance e.g. + for formatting reasons (removing some uninteresting bits that deal + with handling of the exception/traceback). + """ + path_ = None if path is None else os.fspath(path) + excludepath_ = None if excludepath is None else os.fspath(excludepath) + for x in self: + code = x.frame.code + codepath = code.path + if path is not None and str(codepath) != path_: + continue + if ( + excludepath is not None + and isinstance(codepath, Path) + and excludepath_ in (str(p) for p in codepath.parents) # type: ignore[operator] + ): + continue + if lineno is not None and x.lineno != lineno: + continue + if firstlineno is not None and x.frame.code.firstlineno != firstlineno: + continue + return Traceback(x._rawentry, self._excinfo) + return self + + @overload + def __getitem__(self, key: "SupportsIndex") -> TracebackEntry: + ... + + @overload + def __getitem__(self, key: slice) -> "Traceback": + ... + + def __getitem__( + self, key: Union["SupportsIndex", slice] + ) -> Union[TracebackEntry, "Traceback"]: + if isinstance(key, slice): + return self.__class__(super().__getitem__(key)) + else: + return super().__getitem__(key) + + def filter( + self, fn: Callable[[TracebackEntry], bool] = lambda x: not x.ishidden() + ) -> "Traceback": + """Return a Traceback instance with certain items removed + + fn is a function that gets a single argument, a TracebackEntry + instance, and should return True when the item should be added + to the Traceback, False when not. + + By default this removes all the TracebackEntries which are hidden + (see ishidden() above). 
+ """ + return Traceback(filter(fn, self), self._excinfo) + + def getcrashentry(self) -> TracebackEntry: + """Return last non-hidden traceback entry that lead to the exception of a traceback.""" + for i in range(-1, -len(self) - 1, -1): + entry = self[i] + if not entry.ishidden(): + return entry + return self[-1] + + def recursionindex(self) -> Optional[int]: + """Return the index of the frame/TracebackEntry where recursion originates if + appropriate, None if no recursion occurred.""" + cache: Dict[Tuple[Any, int, int], List[Dict[str, Any]]] = {} + for i, entry in enumerate(self): + # id for the code.raw is needed to work around + # the strange metaprogramming in the decorator lib from pypi + # which generates code objects that have hash/value equality + # XXX needs a test + key = entry.frame.code.path, id(entry.frame.code.raw), entry.lineno + # print "checking for recursion at", key + values = cache.setdefault(key, []) + if values: + f = entry.frame + loc = f.f_locals + for otherloc in values: + if otherloc == loc: + return i + values.append(entry.frame.f_locals) + return None + + +E = TypeVar("E", bound=BaseException, covariant=True) + + +@final +@attr.s(repr=False, init=False, auto_attribs=True) +class ExceptionInfo(Generic[E]): + """Wraps sys.exc_info() objects and offers help for navigating the traceback.""" + + _assert_start_repr: ClassVar = "AssertionError('assert " + + _excinfo: Optional[Tuple[Type["E"], "E", TracebackType]] + _striptext: str + _traceback: Optional[Traceback] + + def __init__( + self, + excinfo: Optional[Tuple[Type["E"], "E", TracebackType]], + striptext: str = "", + traceback: Optional[Traceback] = None, + *, + _ispytest: bool = False, + ) -> None: + check_ispytest(_ispytest) + self._excinfo = excinfo + self._striptext = striptext + self._traceback = traceback + + @classmethod + def from_exc_info( + cls, + exc_info: Tuple[Type[E], E, TracebackType], + exprinfo: Optional[str] = None, + ) -> "ExceptionInfo[E]": + """Return an ExceptionInfo for an existing exc_info tuple. + + .. warning:: + + Experimental API + + :param exprinfo: + A text string helping to determine if we should strip + ``AssertionError`` from the output. Defaults to the exception + message/``__str__()``. + """ + _striptext = "" + if exprinfo is None and isinstance(exc_info[1], AssertionError): + exprinfo = getattr(exc_info[1], "msg", None) + if exprinfo is None: + exprinfo = saferepr(exc_info[1]) + if exprinfo and exprinfo.startswith(cls._assert_start_repr): + _striptext = "AssertionError: " + + return cls(exc_info, _striptext, _ispytest=True) + + @classmethod + def from_current( + cls, exprinfo: Optional[str] = None + ) -> "ExceptionInfo[BaseException]": + """Return an ExceptionInfo matching the current traceback. + + .. warning:: + + Experimental API + + :param exprinfo: + A text string helping to determine if we should strip + ``AssertionError`` from the output. Defaults to the exception + message/``__str__()``. 
+ """ + tup = sys.exc_info() + assert tup[0] is not None, "no current exception" + assert tup[1] is not None, "no current exception" + assert tup[2] is not None, "no current exception" + exc_info = (tup[0], tup[1], tup[2]) + return ExceptionInfo.from_exc_info(exc_info, exprinfo) + + @classmethod + def for_later(cls) -> "ExceptionInfo[E]": + """Return an unfilled ExceptionInfo.""" + return cls(None, _ispytest=True) + + def fill_unfilled(self, exc_info: Tuple[Type[E], E, TracebackType]) -> None: + """Fill an unfilled ExceptionInfo created with ``for_later()``.""" + assert self._excinfo is None, "ExceptionInfo was already filled" + self._excinfo = exc_info + + @property + def type(self) -> Type[E]: + """The exception class.""" + assert ( + self._excinfo is not None + ), ".type can only be used after the context manager exits" + return self._excinfo[0] + + @property + def value(self) -> E: + """The exception value.""" + assert ( + self._excinfo is not None + ), ".value can only be used after the context manager exits" + return self._excinfo[1] + + @property + def tb(self) -> TracebackType: + """The exception raw traceback.""" + assert ( + self._excinfo is not None + ), ".tb can only be used after the context manager exits" + return self._excinfo[2] + + @property + def typename(self) -> str: + """The type name of the exception.""" + assert ( + self._excinfo is not None + ), ".typename can only be used after the context manager exits" + return self.type.__name__ + + @property + def traceback(self) -> Traceback: + """The traceback.""" + if self._traceback is None: + self._traceback = Traceback(self.tb, excinfo=ref(self)) + return self._traceback + + @traceback.setter + def traceback(self, value: Traceback) -> None: + self._traceback = value + + def __repr__(self) -> str: + if self._excinfo is None: + return "" + return "<{} {} tblen={}>".format( + self.__class__.__name__, saferepr(self._excinfo[1]), len(self.traceback) + ) + + def exconly(self, tryshort: bool = False) -> str: + """Return the exception as a string. + + When 'tryshort' resolves to True, and the exception is an + AssertionError, only the actual exception part of the exception + representation is returned (so 'AssertionError: ' is removed from + the beginning). + """ + lines = format_exception_only(self.type, self.value) + text = "".join(lines) + text = text.rstrip() + if tryshort: + if text.startswith(self._striptext): + text = text[len(self._striptext) :] + return text + + def errisinstance( + self, exc: Union[Type[BaseException], Tuple[Type[BaseException], ...]] + ) -> bool: + """Return True if the exception is an instance of exc. + + Consider using ``isinstance(excinfo.value, exc)`` instead. + """ + return isinstance(self.value, exc) + + def _getreprcrash(self) -> "ReprFileLocation": + exconly = self.exconly(tryshort=True) + entry = self.traceback.getcrashentry() + path, lineno = entry.frame.code.raw.co_filename, entry.lineno + return ReprFileLocation(path, lineno + 1, exconly) + + def getrepr( + self, + showlocals: bool = False, + style: "_TracebackStyle" = "long", + abspath: bool = False, + tbfilter: bool = True, + funcargs: bool = False, + truncate_locals: bool = True, + chain: bool = True, + ) -> Union["ReprExceptionInfo", "ExceptionChainRepr"]: + """Return str()able representation of this exception info. + + :param bool showlocals: + Show locals per traceback entry. + Ignored if ``style=="native"``. + + :param str style: + long|short|no|native|value traceback style. 
+ + :param bool abspath: + If paths should be changed to absolute or left unchanged. + + :param bool tbfilter: + Hide entries that contain a local variable ``__tracebackhide__==True``. + Ignored if ``style=="native"``. + + :param bool funcargs: + Show fixtures ("funcargs" for legacy purposes) per traceback entry. + + :param bool truncate_locals: + With ``showlocals==True``, make sure locals can be safely represented as strings. + + :param bool chain: + If chained exceptions in Python 3 should be shown. + + .. versionchanged:: 3.9 + + Added the ``chain`` parameter. + """ + if style == "native": + return ReprExceptionInfo( + ReprTracebackNative( + traceback.format_exception( + self.type, self.value, self.traceback[0]._rawentry + ) + ), + self._getreprcrash(), + ) + + fmt = FormattedExcinfo( + showlocals=showlocals, + style=style, + abspath=abspath, + tbfilter=tbfilter, + funcargs=funcargs, + truncate_locals=truncate_locals, + chain=chain, + ) + return fmt.repr_excinfo(self) + + def match(self, regexp: Union[str, Pattern[str]]) -> "Literal[True]": + """Check whether the regular expression `regexp` matches the string + representation of the exception using :func:`python:re.search`. + + If it matches `True` is returned, otherwise an `AssertionError` is raised. + """ + __tracebackhide__ = True + value = str(self.value) + msg = f"Regex pattern did not match.\n Regex: {regexp!r}\n Input: {value!r}" + if regexp == value: + msg += "\n Did you mean to `re.escape()` the regex?" + assert re.search(regexp, value), msg + # Return True to allow for "assert excinfo.match()". + return True + + +@attr.s(auto_attribs=True) +class FormattedExcinfo: + """Presenting information about failing Functions and Generators.""" + + # for traceback entries + flow_marker: ClassVar = ">" + fail_marker: ClassVar = "E" + + showlocals: bool = False + style: "_TracebackStyle" = "long" + abspath: bool = True + tbfilter: bool = True + funcargs: bool = False + truncate_locals: bool = True + chain: bool = True + astcache: Dict[Union[str, Path], ast.AST] = attr.ib( + factory=dict, init=False, repr=False + ) + + def _getindent(self, source: "Source") -> int: + # Figure out indent for the given source. 
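+        # Strategy: measure the leading whitespace of the last statement,
+        # falling back to the raw last line, then to 0; the extra 4 below
+        # places continuation output inside the statement body.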
+ try: + s = str(source.getstatement(len(source) - 1)) + except KeyboardInterrupt: + raise + except BaseException: + try: + s = str(source[-1]) + except KeyboardInterrupt: + raise + except BaseException: + return 0 + return 4 + (len(s) - len(s.lstrip())) + + def _getentrysource(self, entry: TracebackEntry) -> Optional["Source"]: + source = entry.getsource(self.astcache) + if source is not None: + source = source.deindent() + return source + + def repr_args(self, entry: TracebackEntry) -> Optional["ReprFuncArgs"]: + if self.funcargs: + args = [] + for argname, argvalue in entry.frame.getargs(var=True): + args.append((argname, saferepr(argvalue))) + return ReprFuncArgs(args) + return None + + def get_source( + self, + source: Optional["Source"], + line_index: int = -1, + excinfo: Optional[ExceptionInfo[BaseException]] = None, + short: bool = False, + ) -> List[str]: + """Return formatted and marked up source lines.""" + lines = [] + if source is None or line_index >= len(source.lines): + source = Source("???") + line_index = 0 + if line_index < 0: + line_index += len(source) + space_prefix = " " + if short: + lines.append(space_prefix + source.lines[line_index].strip()) + else: + for line in source.lines[:line_index]: + lines.append(space_prefix + line) + lines.append(self.flow_marker + " " + source.lines[line_index]) + for line in source.lines[line_index + 1 :]: + lines.append(space_prefix + line) + if excinfo is not None: + indent = 4 if short else self._getindent(source) + lines.extend(self.get_exconly(excinfo, indent=indent, markall=True)) + return lines + + def get_exconly( + self, + excinfo: ExceptionInfo[BaseException], + indent: int = 4, + markall: bool = False, + ) -> List[str]: + lines = [] + indentstr = " " * indent + # Get the real exception information out. + exlines = excinfo.exconly(tryshort=True).split("\n") + failindent = self.fail_marker + indentstr[1:] + for line in exlines: + lines.append(failindent + line) + if not markall: + failindent = indentstr + return lines + + def repr_locals(self, locals: Mapping[str, object]) -> Optional["ReprLocals"]: + if self.showlocals: + lines = [] + keys = [loc for loc in locals if loc[0] != "@"] + keys.sort() + for name in keys: + value = locals[name] + if name == "__builtins__": + lines.append("__builtins__ = ") + else: + # This formatting could all be handled by the + # _repr() function, which is only reprlib.Repr in + # disguise, so is very configurable. 
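+                    # saferepr() caps the size of the repr and never raises;
+                    # safeformat() pretty-prints with no size limit.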
+ if self.truncate_locals: + str_repr = saferepr(value) + else: + str_repr = safeformat(value) + # if len(str_repr) < 70 or not isinstance(value, (list, tuple, dict)): + lines.append(f"{name:<10} = {str_repr}") + # else: + # self._line("%-10s =\\" % (name,)) + # # XXX + # pprint.pprint(value, stream=self.excinfowriter) + return ReprLocals(lines) + return None + + def repr_traceback_entry( + self, + entry: TracebackEntry, + excinfo: Optional[ExceptionInfo[BaseException]] = None, + ) -> "ReprEntry": + lines: List[str] = [] + style = entry._repr_style if entry._repr_style is not None else self.style + if style in ("short", "long"): + source = self._getentrysource(entry) + if source is None: + source = Source("???") + line_index = 0 + else: + line_index = entry.lineno - entry.getfirstlinesource() + short = style == "short" + reprargs = self.repr_args(entry) if not short else None + s = self.get_source(source, line_index, excinfo, short=short) + lines.extend(s) + if short: + message = "in %s" % (entry.name) + else: + message = excinfo and excinfo.typename or "" + entry_path = entry.path + path = self._makepath(entry_path) + reprfileloc = ReprFileLocation(path, entry.lineno + 1, message) + localsrepr = self.repr_locals(entry.locals) + return ReprEntry(lines, reprargs, localsrepr, reprfileloc, style) + elif style == "value": + if excinfo: + lines.extend(str(excinfo.value).split("\n")) + return ReprEntry(lines, None, None, None, style) + else: + if excinfo: + lines.extend(self.get_exconly(excinfo, indent=4)) + return ReprEntry(lines, None, None, None, style) + + def _makepath(self, path: Union[Path, str]) -> str: + if not self.abspath and isinstance(path, Path): + try: + np = bestrelpath(Path.cwd(), path) + except OSError: + return str(path) + if len(np) < len(str(path)): + return np + return str(path) + + def repr_traceback(self, excinfo: ExceptionInfo[BaseException]) -> "ReprTraceback": + traceback = excinfo.traceback + if self.tbfilter: + traceback = traceback.filter() + + if isinstance(excinfo.value, RecursionError): + traceback, extraline = self._truncate_recursive_traceback(traceback) + else: + extraline = None + + last = traceback[-1] + entries = [] + if self.style == "value": + reprentry = self.repr_traceback_entry(last, excinfo) + entries.append(reprentry) + return ReprTraceback(entries, None, style=self.style) + + for index, entry in enumerate(traceback): + einfo = (last == entry) and excinfo or None + reprentry = self.repr_traceback_entry(entry, einfo) + entries.append(reprentry) + return ReprTraceback(entries, extraline, style=self.style) + + def _truncate_recursive_traceback( + self, traceback: Traceback + ) -> Tuple[Traceback, Optional[str]]: + """Truncate the given recursive traceback trying to find the starting + point of the recursion. + + The detection is done by going through each traceback entry and + finding the point in which the locals of the frame are equal to the + locals of a previous frame (see ``recursionindex()``). + + Handle the situation where the recursion process might raise an + exception (for example comparing numpy arrays using equality raises a + TypeError), in which case we do our best to warn the user of the + error and show a limited traceback. + """ + try: + recursionindex = traceback.recursionindex() + except Exception as e: + max_frames = 10 + extraline: Optional[str] = ( + "!!! 
Recursion error detected, but an error occurred locating the origin of recursion.\n" + " The following exception happened when comparing locals in the stack frame:\n" + " {exc_type}: {exc_msg}\n" + " Displaying first and last {max_frames} stack frames out of {total}." + ).format( + exc_type=type(e).__name__, + exc_msg=str(e), + max_frames=max_frames, + total=len(traceback), + ) + # Type ignored because adding two instances of a List subtype + # currently incorrectly has type List instead of the subtype. + traceback = traceback[:max_frames] + traceback[-max_frames:] # type: ignore + else: + if recursionindex is not None: + extraline = "!!! Recursion detected (same locals & position)" + traceback = traceback[: recursionindex + 1] + else: + extraline = None + + return traceback, extraline + + def repr_excinfo( + self, excinfo: ExceptionInfo[BaseException] + ) -> "ExceptionChainRepr": + repr_chain: List[ + Tuple[ReprTraceback, Optional[ReprFileLocation], Optional[str]] + ] = [] + e: Optional[BaseException] = excinfo.value + excinfo_: Optional[ExceptionInfo[BaseException]] = excinfo + descr = None + seen: Set[int] = set() + while e is not None and id(e) not in seen: + seen.add(id(e)) + if excinfo_: + # Fall back to native traceback as a temporary workaround until + # full support for exception groups added to ExceptionInfo. + # See https://github.com/pytest-dev/pytest/issues/9159 + if isinstance(e, BaseExceptionGroup): + reprtraceback: Union[ + ReprTracebackNative, ReprTraceback + ] = ReprTracebackNative( + traceback.format_exception( + type(excinfo_.value), + excinfo_.value, + excinfo_.traceback[0]._rawentry, + ) + ) + else: + reprtraceback = self.repr_traceback(excinfo_) + reprcrash: Optional[ReprFileLocation] = ( + excinfo_._getreprcrash() if self.style != "value" else None + ) + else: + # Fallback to native repr if the exception doesn't have a traceback: + # ExceptionInfo objects require a full traceback to work. + reprtraceback = ReprTracebackNative( + traceback.format_exception(type(e), e, None) + ) + reprcrash = None + + repr_chain += [(reprtraceback, reprcrash, descr)] + if e.__cause__ is not None and self.chain: + e = e.__cause__ + excinfo_ = ( + ExceptionInfo.from_exc_info((type(e), e, e.__traceback__)) + if e.__traceback__ + else None + ) + descr = "The above exception was the direct cause of the following exception:" + elif ( + e.__context__ is not None and not e.__suppress_context__ and self.chain + ): + e = e.__context__ + excinfo_ = ( + ExceptionInfo.from_exc_info((type(e), e, e.__traceback__)) + if e.__traceback__ + else None + ) + descr = "During handling of the above exception, another exception occurred:" + else: + e = None + repr_chain.reverse() + return ExceptionChainRepr(repr_chain) + + +@attr.s(eq=False, auto_attribs=True) +class TerminalRepr: + def __str__(self) -> str: + # FYI this is called from pytest-xdist's serialization of exception + # information. + io = StringIO() + tw = TerminalWriter(file=io) + self.toterminal(tw) + return io.getvalue().strip() + + def __repr__(self) -> str: + return f"<{self.__class__} instance at {id(self):0x}>" + + def toterminal(self, tw: TerminalWriter) -> None: + raise NotImplementedError() + + +# This class is abstract -- only subclasses are instantiated. +@attr.s(eq=False) +class ExceptionRepr(TerminalRepr): + # Provided by subclasses. 
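+    # ``reprcrash`` locates the failing line, ``reprtraceback`` carries the
+    # formatted entries; concrete subclasses assign both.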
+ reprcrash: Optional["ReprFileLocation"] + reprtraceback: "ReprTraceback" + + def __attrs_post_init__(self) -> None: + self.sections: List[Tuple[str, str, str]] = [] + + def addsection(self, name: str, content: str, sep: str = "-") -> None: + self.sections.append((name, content, sep)) + + def toterminal(self, tw: TerminalWriter) -> None: + for name, content, sep in self.sections: + tw.sep(sep, name) + tw.line(content) + + +@attr.s(eq=False, auto_attribs=True) +class ExceptionChainRepr(ExceptionRepr): + chain: Sequence[Tuple["ReprTraceback", Optional["ReprFileLocation"], Optional[str]]] + + def __attrs_post_init__(self) -> None: + super().__attrs_post_init__() + # reprcrash and reprtraceback of the outermost (the newest) exception + # in the chain. + self.reprtraceback = self.chain[-1][0] + self.reprcrash = self.chain[-1][1] + + def toterminal(self, tw: TerminalWriter) -> None: + for element in self.chain: + element[0].toterminal(tw) + if element[2] is not None: + tw.line("") + tw.line(element[2], yellow=True) + super().toterminal(tw) + + +@attr.s(eq=False, auto_attribs=True) +class ReprExceptionInfo(ExceptionRepr): + reprtraceback: "ReprTraceback" + reprcrash: "ReprFileLocation" + + def toterminal(self, tw: TerminalWriter) -> None: + self.reprtraceback.toterminal(tw) + super().toterminal(tw) + + +@attr.s(eq=False, auto_attribs=True) +class ReprTraceback(TerminalRepr): + reprentries: Sequence[Union["ReprEntry", "ReprEntryNative"]] + extraline: Optional[str] + style: "_TracebackStyle" + + entrysep: ClassVar = "_ " + + def toterminal(self, tw: TerminalWriter) -> None: + # The entries might have different styles. + for i, entry in enumerate(self.reprentries): + if entry.style == "long": + tw.line("") + entry.toterminal(tw) + if i < len(self.reprentries) - 1: + next_entry = self.reprentries[i + 1] + if ( + entry.style == "long" + or entry.style == "short" + and next_entry.style == "long" + ): + tw.sep(self.entrysep) + + if self.extraline: + tw.line(self.extraline) + + +class ReprTracebackNative(ReprTraceback): + def __init__(self, tblines: Sequence[str]) -> None: + self.style = "native" + self.reprentries = [ReprEntryNative(tblines)] + self.extraline = None + + +@attr.s(eq=False, auto_attribs=True) +class ReprEntryNative(TerminalRepr): + lines: Sequence[str] + + style: ClassVar["_TracebackStyle"] = "native" + + def toterminal(self, tw: TerminalWriter) -> None: + tw.write("".join(self.lines)) + + +@attr.s(eq=False, auto_attribs=True) +class ReprEntry(TerminalRepr): + lines: Sequence[str] + reprfuncargs: Optional["ReprFuncArgs"] + reprlocals: Optional["ReprLocals"] + reprfileloc: Optional["ReprFileLocation"] + style: "_TracebackStyle" + + def _write_entry_lines(self, tw: TerminalWriter) -> None: + """Write the source code portions of a list of traceback entries with syntax highlighting. + + Usually entries are lines like these: + + " x = 1" + "> assert x == 2" + "E assert 1 == 2" + + This function takes care of rendering the "source" portions of it (the lines without + the "E" prefix) using syntax highlighting, taking care to not highlighting the ">" + character, as doing so might break line continuations. 
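+
+        For the example above, the split is roughly::
+
+            indents       == ["  ", "> "]
+            source_lines  == ["  x = 1", "  assert x == 2"]
+            failure_lines == ["E   assert 1 == 2"]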
+ """ + + if not self.lines: + return + + # separate indents and source lines that are not failures: we want to + # highlight the code but not the indentation, which may contain markers + # such as "> assert 0" + fail_marker = f"{FormattedExcinfo.fail_marker} " + indent_size = len(fail_marker) + indents: List[str] = [] + source_lines: List[str] = [] + failure_lines: List[str] = [] + for index, line in enumerate(self.lines): + is_failure_line = line.startswith(fail_marker) + if is_failure_line: + # from this point on all lines are considered part of the failure + failure_lines.extend(self.lines[index:]) + break + else: + if self.style == "value": + source_lines.append(line) + else: + indents.append(line[:indent_size]) + source_lines.append(line[indent_size:]) + + tw._write_source(source_lines, indents) + + # failure lines are always completely red and bold + for line in failure_lines: + tw.line(line, bold=True, red=True) + + def toterminal(self, tw: TerminalWriter) -> None: + if self.style == "short": + assert self.reprfileloc is not None + self.reprfileloc.toterminal(tw) + self._write_entry_lines(tw) + if self.reprlocals: + self.reprlocals.toterminal(tw, indent=" " * 8) + return + + if self.reprfuncargs: + self.reprfuncargs.toterminal(tw) + + self._write_entry_lines(tw) + + if self.reprlocals: + tw.line("") + self.reprlocals.toterminal(tw) + if self.reprfileloc: + if self.lines: + tw.line("") + self.reprfileloc.toterminal(tw) + + def __str__(self) -> str: + return "{}\n{}\n{}".format( + "\n".join(self.lines), self.reprlocals, self.reprfileloc + ) + + +@attr.s(eq=False, auto_attribs=True) +class ReprFileLocation(TerminalRepr): + path: str = attr.ib(converter=str) + lineno: int + message: str + + def toterminal(self, tw: TerminalWriter) -> None: + # Filename and lineno output for each entry, using an output format + # that most editors understand. + msg = self.message + i = msg.find("\n") + if i != -1: + msg = msg[:i] + tw.write(self.path, bold=True, red=True) + tw.line(f":{self.lineno}: {msg}") + + +@attr.s(eq=False, auto_attribs=True) +class ReprLocals(TerminalRepr): + lines: Sequence[str] + + def toterminal(self, tw: TerminalWriter, indent="") -> None: + for line in self.lines: + tw.line(indent + line) + + +@attr.s(eq=False, auto_attribs=True) +class ReprFuncArgs(TerminalRepr): + args: Sequence[Tuple[str, object]] + + def toterminal(self, tw: TerminalWriter) -> None: + if self.args: + linesofar = "" + for name, value in self.args: + ns = f"{name} = {value}" + if len(ns) + len(linesofar) + 2 > tw.fullwidth: + if linesofar: + tw.line(linesofar) + linesofar = ns + else: + if linesofar: + linesofar += ", " + ns + else: + linesofar = ns + if linesofar: + tw.line(linesofar) + tw.line("") + + +def getfslineno(obj: object) -> Tuple[Union[str, Path], int]: + """Return source location (path, lineno) for the given object. + + If the source cannot be determined return ("", -1). + + The line number is 0-based. + """ + # xxx let decorators etc specify a sane ordering + # NOTE: this used to be done in _pytest.compat.getfslineno, initially added + # in 6ec13a2b9. It ("place_as") appears to be something very custom. 
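+    # "place_as" lets wrappers and decorated objects report the source
+    # location of the object they stand in for rather than their own.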
+ obj = get_real_func(obj) + if hasattr(obj, "place_as"): + obj = obj.place_as # type: ignore[attr-defined] + + try: + code = Code.from_function(obj) + except TypeError: + try: + fn = inspect.getsourcefile(obj) or inspect.getfile(obj) # type: ignore[arg-type] + except TypeError: + return "", -1 + + fspath = fn and absolutepath(fn) or "" + lineno = -1 + if fspath: + try: + _, lineno = findsource(obj) + except OSError: + pass + return fspath, lineno + + return code.path, code.firstlineno + + +# Relative paths that we use to filter traceback entries from appearing to the user; +# see filter_traceback. +# note: if we need to add more paths than what we have now we should probably use a list +# for better maintenance. + +_PLUGGY_DIR = Path(pluggy.__file__.rstrip("oc")) +# pluggy is either a package or a single module depending on the version +if _PLUGGY_DIR.name == "__init__.py": + _PLUGGY_DIR = _PLUGGY_DIR.parent +_PYTEST_DIR = Path(_pytest.__file__).parent + + +def filter_traceback(entry: TracebackEntry) -> bool: + """Return True if a TracebackEntry instance should be included in tracebacks. + + We hide traceback entries of: + + * dynamically generated code (no code to show up for it); + * internal traceback from pytest or its internal libraries, py and pluggy. + """ + # entry.path might sometimes return a str object when the entry + # points to dynamically generated code. + # See https://bitbucket.org/pytest-dev/py/issues/71. + raw_filename = entry.frame.code.raw.co_filename + is_generated = "<" in raw_filename and ">" in raw_filename + if is_generated: + return False + + # entry.path might point to a non-existing file, in which case it will + # also return a str object. See #1133. + p = Path(entry.path) + + parents = p.parents + if _PLUGGY_DIR in parents: + return False + if _PYTEST_DIR in parents: + return False + + return True diff --git a/env/lib/python3.8/site-packages/_pytest/_code/source.py b/env/lib/python3.8/site-packages/_pytest/_code/source.py new file mode 100644 index 00000000..208cfb80 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_code/source.py @@ -0,0 +1,217 @@ +import ast +import inspect +import textwrap +import tokenize +import types +import warnings +from bisect import bisect_right +from typing import Iterable +from typing import Iterator +from typing import List +from typing import Optional +from typing import overload +from typing import Tuple +from typing import Union + + +class Source: + """An immutable object holding a source code fragment. + + When using Source(...), the source lines are deindented. + """ + + def __init__(self, obj: object = None) -> None: + if not obj: + self.lines: List[str] = [] + elif isinstance(obj, Source): + self.lines = obj.lines + elif isinstance(obj, (tuple, list)): + self.lines = deindent(x.rstrip("\n") for x in obj) + elif isinstance(obj, str): + self.lines = deindent(obj.split("\n")) + else: + try: + rawcode = getrawcode(obj) + src = inspect.getsource(rawcode) + except TypeError: + src = inspect.getsource(obj) # type: ignore[arg-type] + self.lines = deindent(src.split("\n")) + + def __eq__(self, other: object) -> bool: + if not isinstance(other, Source): + return NotImplemented + return self.lines == other.lines + + # Ignore type because of https://github.com/python/mypy/issues/4266. + __hash__ = None # type: ignore + + @overload + def __getitem__(self, key: int) -> str: + ... + + @overload + def __getitem__(self, key: slice) -> "Source": + ... 
+ + def __getitem__(self, key: Union[int, slice]) -> Union[str, "Source"]: + if isinstance(key, int): + return self.lines[key] + else: + if key.step not in (None, 1): + raise IndexError("cannot slice a Source with a step") + newsource = Source() + newsource.lines = self.lines[key.start : key.stop] + return newsource + + def __iter__(self) -> Iterator[str]: + return iter(self.lines) + + def __len__(self) -> int: + return len(self.lines) + + def strip(self) -> "Source": + """Return new Source object with trailing and leading blank lines removed.""" + start, end = 0, len(self) + while start < end and not self.lines[start].strip(): + start += 1 + while end > start and not self.lines[end - 1].strip(): + end -= 1 + source = Source() + source.lines[:] = self.lines[start:end] + return source + + def indent(self, indent: str = " " * 4) -> "Source": + """Return a copy of the source object with all lines indented by the + given indent-string.""" + newsource = Source() + newsource.lines = [(indent + line) for line in self.lines] + return newsource + + def getstatement(self, lineno: int) -> "Source": + """Return Source statement which contains the given linenumber + (counted from 0).""" + start, end = self.getstatementrange(lineno) + return self[start:end] + + def getstatementrange(self, lineno: int) -> Tuple[int, int]: + """Return (start, end) tuple which spans the minimal statement region + which containing the given lineno.""" + if not (0 <= lineno < len(self)): + raise IndexError("lineno out of range") + ast, start, end = getstatementrange_ast(lineno, self) + return start, end + + def deindent(self) -> "Source": + """Return a new Source object deindented.""" + newsource = Source() + newsource.lines[:] = deindent(self.lines) + return newsource + + def __str__(self) -> str: + return "\n".join(self.lines) + + +# +# helper functions +# + + +def findsource(obj) -> Tuple[Optional[Source], int]: + try: + sourcelines, lineno = inspect.findsource(obj) + except Exception: + return None, -1 + source = Source() + source.lines = [line.rstrip() for line in sourcelines] + return source, lineno + + +def getrawcode(obj: object, trycall: bool = True) -> types.CodeType: + """Return code object for given function.""" + try: + return obj.__code__ # type: ignore[attr-defined,no-any-return] + except AttributeError: + pass + if trycall: + call = getattr(obj, "__call__", None) + if call and not isinstance(obj, type): + return getrawcode(call, trycall=False) + raise TypeError(f"could not get code object for {obj!r}") + + +def deindent(lines: Iterable[str]) -> List[str]: + return textwrap.dedent("\n".join(lines)).splitlines() + + +def get_statement_startend2(lineno: int, node: ast.AST) -> Tuple[int, Optional[int]]: + # Flatten all statements and except handlers into one lineno-list. + # AST's line numbers start indexing at 1. + values: List[int] = [] + for x in ast.walk(node): + if isinstance(x, (ast.stmt, ast.ExceptHandler)): + # Before Python 3.8, the lineno of a decorated class or function pointed at the decorator. + # Since Python 3.8, the lineno points to the class/def, so need to include the decorators. + if isinstance(x, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)): + for d in x.decorator_list: + values.append(d.lineno - 1) + values.append(x.lineno - 1) + for name in ("finalbody", "orelse"): + val: Optional[List[ast.stmt]] = getattr(x, name, None) + if val: + # Treat the finally/orelse part as its own statement. 
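+                    # val[0].lineno - 1 is the 0-based first body line;
+                    # the extra -1 puts the boundary on the preceding
+                    # "else:"/"finally:" line itself.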
+ values.append(val[0].lineno - 1 - 1) + values.sort() + insert_index = bisect_right(values, lineno) + start = values[insert_index - 1] + if insert_index >= len(values): + end = None + else: + end = values[insert_index] + return start, end + + +def getstatementrange_ast( + lineno: int, + source: Source, + assertion: bool = False, + astnode: Optional[ast.AST] = None, +) -> Tuple[ast.AST, int, int]: + if astnode is None: + content = str(source) + # See #4260: + # Don't produce duplicate warnings when compiling source to find AST. + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + astnode = ast.parse(content, "source", "exec") + + start, end = get_statement_startend2(lineno, astnode) + # We need to correct the end: + # - ast-parsing strips comments + # - there might be empty lines + # - we might have lesser indented code blocks at the end + if end is None: + end = len(source.lines) + + if end > start + 1: + # Make sure we don't span differently indented code blocks + # by using the BlockFinder helper used which inspect.getsource() uses itself. + block_finder = inspect.BlockFinder() + # If we start with an indented line, put blockfinder to "started" mode. + block_finder.started = source.lines[start][0].isspace() + it = ((x + "\n") for x in source.lines[start:end]) + try: + for tok in tokenize.generate_tokens(lambda: next(it)): + block_finder.tokeneater(*tok) + except (inspect.EndOfBlock, IndentationError): + end = block_finder.last + start + except Exception: + pass + + # The end might still point to a comment or empty line, correct it. + while end: + line = source.lines[end - 1].lstrip() + if line.startswith("#") or not line: + end -= 1 + else: + break + return astnode, start, end diff --git a/env/lib/python3.8/site-packages/_pytest/_io/__init__.py b/env/lib/python3.8/site-packages/_pytest/_io/__init__.py new file mode 100644 index 00000000..db001e91 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_io/__init__.py @@ -0,0 +1,8 @@ +from .terminalwriter import get_terminal_width +from .terminalwriter import TerminalWriter + + +__all__ = [ + "TerminalWriter", + "get_terminal_width", +] diff --git a/env/lib/python3.8/site-packages/_pytest/_io/saferepr.py b/env/lib/python3.8/site-packages/_pytest/_io/saferepr.py new file mode 100644 index 00000000..c7018722 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_io/saferepr.py @@ -0,0 +1,180 @@ +import pprint +import reprlib +from typing import Any +from typing import Dict +from typing import IO +from typing import Optional + + +def _try_repr_or_str(obj: object) -> str: + try: + return repr(obj) + except (KeyboardInterrupt, SystemExit): + raise + except BaseException: + return f'{type(obj).__name__}("{obj}")' + + +def _format_repr_exception(exc: BaseException, obj: object) -> str: + try: + exc_info = _try_repr_or_str(exc) + except (KeyboardInterrupt, SystemExit): + raise + except BaseException as exc: + exc_info = f"unpresentable exception ({_try_repr_or_str(exc)})" + return "<[{} raised in repr()] {} object at 0x{:x}>".format( + exc_info, type(obj).__name__, id(obj) + ) + + +def _ellipsize(s: str, maxsize: int) -> str: + if len(s) > maxsize: + i = max(0, (maxsize - 3) // 2) + j = max(0, maxsize - 3 - i) + return s[:i] + "..." + s[len(s) - j :] + return s + + +class SafeRepr(reprlib.Repr): + """ + repr.Repr that limits the resulting size of repr() and includes + information on exceptions raised during the call. 
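+
+    For instance (illustrative), an object whose ``__repr__`` raises is
+    rendered as ``<[ValueError(...) raised in repr()] Foo object at 0x...>``
+    instead of letting the exception propagate.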
+ """ + + def __init__(self, maxsize: Optional[int], use_ascii: bool = False) -> None: + """ + :param maxsize: + If not None, will truncate the resulting repr to that specific size, using ellipsis + somewhere in the middle to hide the extra text. + If None, will not impose any size limits on the returning repr. + """ + super().__init__() + # ``maxstring`` is used by the superclass, and needs to be an int; using a + # very large number in case maxsize is None, meaning we want to disable + # truncation. + self.maxstring = maxsize if maxsize is not None else 1_000_000_000 + self.maxsize = maxsize + self.use_ascii = use_ascii + + def repr(self, x: object) -> str: + try: + if self.use_ascii: + s = ascii(x) + else: + s = super().repr(x) + + except (KeyboardInterrupt, SystemExit): + raise + except BaseException as exc: + s = _format_repr_exception(exc, x) + if self.maxsize is not None: + s = _ellipsize(s, self.maxsize) + return s + + def repr_instance(self, x: object, level: int) -> str: + try: + s = repr(x) + except (KeyboardInterrupt, SystemExit): + raise + except BaseException as exc: + s = _format_repr_exception(exc, x) + if self.maxsize is not None: + s = _ellipsize(s, self.maxsize) + return s + + +def safeformat(obj: object) -> str: + """Return a pretty printed string for the given object. + + Failing __repr__ functions of user instances will be represented + with a short exception info. + """ + try: + return pprint.pformat(obj) + except Exception as exc: + return _format_repr_exception(exc, obj) + + +# Maximum size of overall repr of objects to display during assertion errors. +DEFAULT_REPR_MAX_SIZE = 240 + + +def saferepr( + obj: object, maxsize: Optional[int] = DEFAULT_REPR_MAX_SIZE, use_ascii: bool = False +) -> str: + """Return a size-limited safe repr-string for the given object. + + Failing __repr__ functions of user instances will be represented + with a short exception info and 'saferepr' generally takes + care to never raise exceptions itself. + + This function is a wrapper around the Repr/reprlib functionality of the + stdlib. + """ + + return SafeRepr(maxsize, use_ascii).repr(obj) + + +def saferepr_unlimited(obj: object, use_ascii: bool = True) -> str: + """Return an unlimited-size safe repr-string for the given object. + + As with saferepr, failing __repr__ functions of user instances + will be represented with a short exception info. + + This function is a wrapper around simple repr. + + Note: a cleaner solution would be to alter ``saferepr``this way + when maxsize=None, but that might affect some other code. + """ + try: + if use_ascii: + return ascii(obj) + return repr(obj) + except Exception as exc: + return _format_repr_exception(exc, obj) + + +class AlwaysDispatchingPrettyPrinter(pprint.PrettyPrinter): + """PrettyPrinter that always dispatches (regardless of width).""" + + def _format( + self, + object: object, + stream: IO[str], + indent: int, + allowance: int, + context: Dict[int, Any], + level: int, + ) -> None: + # Type ignored because _dispatch is private. + p = self._dispatch.get(type(object).__repr__, None) # type: ignore[attr-defined] + + objid = id(object) + if objid in context or p is None: + # Type ignored because _format is private. 
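+            # No custom dispatch entry for this type, or the object is
+            # already being printed (self-reference): defer to the stock
+            # width-aware implementation.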
+ super()._format( # type: ignore[misc] + object, + stream, + indent, + allowance, + context, + level, + ) + return + + context[objid] = 1 + p(self, object, stream, indent, allowance, context, level + 1) + del context[objid] + + +def _pformat_dispatch( + object: object, + indent: int = 1, + width: int = 80, + depth: Optional[int] = None, + *, + compact: bool = False, +) -> str: + return AlwaysDispatchingPrettyPrinter( + indent=indent, width=width, depth=depth, compact=compact + ).pformat(object) diff --git a/env/lib/python3.8/site-packages/_pytest/_io/terminalwriter.py b/env/lib/python3.8/site-packages/_pytest/_io/terminalwriter.py new file mode 100644 index 00000000..379035d8 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_io/terminalwriter.py @@ -0,0 +1,233 @@ +"""Helper functions for writing to terminals and files.""" +import os +import shutil +import sys +from typing import Optional +from typing import Sequence +from typing import TextIO + +from .wcwidth import wcswidth +from _pytest.compat import final + + +# This code was initially copied from py 1.8.1, file _io/terminalwriter.py. + + +def get_terminal_width() -> int: + width, _ = shutil.get_terminal_size(fallback=(80, 24)) + + # The Windows get_terminal_size may be bogus, let's sanify a bit. + if width < 40: + width = 80 + + return width + + +def should_do_markup(file: TextIO) -> bool: + if os.environ.get("PY_COLORS") == "1": + return True + if os.environ.get("PY_COLORS") == "0": + return False + if "NO_COLOR" in os.environ: + return False + if "FORCE_COLOR" in os.environ: + return True + return ( + hasattr(file, "isatty") and file.isatty() and os.environ.get("TERM") != "dumb" + ) + + +@final +class TerminalWriter: + _esctable = dict( + black=30, + red=31, + green=32, + yellow=33, + blue=34, + purple=35, + cyan=36, + white=37, + Black=40, + Red=41, + Green=42, + Yellow=43, + Blue=44, + Purple=45, + Cyan=46, + White=47, + bold=1, + light=2, + blink=5, + invert=7, + ) + + def __init__(self, file: Optional[TextIO] = None) -> None: + if file is None: + file = sys.stdout + if hasattr(file, "isatty") and file.isatty() and sys.platform == "win32": + try: + import colorama + except ImportError: + pass + else: + file = colorama.AnsiToWin32(file).stream + assert file is not None + self._file = file + self.hasmarkup = should_do_markup(file) + self._current_line = "" + self._terminal_width: Optional[int] = None + self.code_highlight = True + + @property + def fullwidth(self) -> int: + if self._terminal_width is not None: + return self._terminal_width + return get_terminal_width() + + @fullwidth.setter + def fullwidth(self, value: int) -> None: + self._terminal_width = value + + @property + def width_of_current_line(self) -> int: + """Return an estimate of the width so far in the current line.""" + return wcswidth(self._current_line) + + def markup(self, text: str, **markup: bool) -> str: + for name in markup: + if name not in self._esctable: + raise ValueError(f"unknown markup: {name!r}") + if self.hasmarkup: + esc = [self._esctable[name] for name, on in markup.items() if on] + if esc: + text = "".join("\x1b[%sm" % cod for cod in esc) + text + "\x1b[0m" + return text + + def sep( + self, + sepchar: str, + title: Optional[str] = None, + fullwidth: Optional[int] = None, + **markup: bool, + ) -> None: + if fullwidth is None: + fullwidth = self.fullwidth + # The goal is to have the line be as long as possible + # under the condition that len(line) <= fullwidth. 
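+        # e.g. sep("=", "section") renders "=== section ===" stretched to
+        # the full terminal width (illustrative).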
+ if sys.platform == "win32": + # If we print in the last column on windows we are on a + # new line but there is no way to verify/neutralize this + # (we may not know the exact line width). + # So let's be defensive to avoid empty lines in the output. + fullwidth -= 1 + if title is not None: + # we want 2 + 2*len(fill) + len(title) <= fullwidth + # i.e. 2 + 2*len(sepchar)*N + len(title) <= fullwidth + # 2*len(sepchar)*N <= fullwidth - len(title) - 2 + # N <= (fullwidth - len(title) - 2) // (2*len(sepchar)) + N = max((fullwidth - len(title) - 2) // (2 * len(sepchar)), 1) + fill = sepchar * N + line = f"{fill} {title} {fill}" + else: + # we want len(sepchar)*N <= fullwidth + # i.e. N <= fullwidth // len(sepchar) + line = sepchar * (fullwidth // len(sepchar)) + # In some situations there is room for an extra sepchar at the right, + # in particular if we consider that with a sepchar like "_ " the + # trailing space is not important at the end of the line. + if len(line) + len(sepchar.rstrip()) <= fullwidth: + line += sepchar.rstrip() + + self.line(line, **markup) + + def write(self, msg: str, *, flush: bool = False, **markup: bool) -> None: + if msg: + current_line = msg.rsplit("\n", 1)[-1] + if "\n" in msg: + self._current_line = current_line + else: + self._current_line += current_line + + msg = self.markup(msg, **markup) + + try: + self._file.write(msg) + except UnicodeEncodeError: + # Some environments don't support printing general Unicode + # strings, due to misconfiguration or otherwise; in that case, + # print the string escaped to ASCII. + # When the Unicode situation improves we should consider + # letting the error propagate instead of masking it (see #7475 + # for one brief attempt). + msg = msg.encode("unicode-escape").decode("ascii") + self._file.write(msg) + + if flush: + self.flush() + + def line(self, s: str = "", **markup: bool) -> None: + self.write(s, **markup) + self.write("\n") + + def flush(self) -> None: + self._file.flush() + + def _write_source(self, lines: Sequence[str], indents: Sequence[str] = ()) -> None: + """Write lines of source code possibly highlighted. + + Keeping this private for now because the API is clunky. We should discuss how + to evolve the terminal writer so we can have more precise color support, for example + being able to write part of a line in one color and the rest in another, and so on. + """ + if indents and len(indents) != len(lines): + raise ValueError( + "indents size ({}) should have same size as lines ({})".format( + len(indents), len(lines) + ) + ) + if not indents: + indents = [""] * len(lines) + source = "\n".join(lines) + new_lines = self._highlight(source).splitlines() + for indent, new_line in zip(indents, new_lines): + self.line(indent + new_line) + + def _highlight(self, source: str) -> str: + """Highlight the given source code if we have markup support.""" + from _pytest.config.exceptions import UsageError + + if not self.hasmarkup or not self.code_highlight: + return source + try: + from pygments.formatters.terminal import TerminalFormatter + from pygments.lexers.python import PythonLexer + from pygments import highlight + import pygments.util + except ImportError: + return source + else: + try: + highlighted: str = highlight( + source, + PythonLexer(), + TerminalFormatter( + bg=os.getenv("PYTEST_THEME_MODE", "dark"), + style=os.getenv("PYTEST_THEME"), + ), + ) + return highlighted + except pygments.util.ClassNotFound: + raise UsageError( + "PYTEST_THEME environment variable had an invalid value: '{}'. 
" + "Only valid pygment styles are allowed.".format( + os.getenv("PYTEST_THEME") + ) + ) + except pygments.util.OptionError: + raise UsageError( + "PYTEST_THEME_MODE environment variable had an invalid value: '{}'. " + "The only allowed values are 'dark' and 'light'.".format( + os.getenv("PYTEST_THEME_MODE") + ) + ) diff --git a/env/lib/python3.8/site-packages/_pytest/_io/wcwidth.py b/env/lib/python3.8/site-packages/_pytest/_io/wcwidth.py new file mode 100644 index 00000000..e5c7bf4d --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_io/wcwidth.py @@ -0,0 +1,55 @@ +import unicodedata +from functools import lru_cache + + +@lru_cache(100) +def wcwidth(c: str) -> int: + """Determine how many columns are needed to display a character in a terminal. + + Returns -1 if the character is not printable. + Returns 0, 1 or 2 for other characters. + """ + o = ord(c) + + # ASCII fast path. + if 0x20 <= o < 0x07F: + return 1 + + # Some Cf/Zp/Zl characters which should be zero-width. + if ( + o == 0x0000 + or 0x200B <= o <= 0x200F + or 0x2028 <= o <= 0x202E + or 0x2060 <= o <= 0x2063 + ): + return 0 + + category = unicodedata.category(c) + + # Control characters. + if category == "Cc": + return -1 + + # Combining characters with zero width. + if category in ("Me", "Mn"): + return 0 + + # Full/Wide east asian characters. + if unicodedata.east_asian_width(c) in ("F", "W"): + return 2 + + return 1 + + +def wcswidth(s: str) -> int: + """Determine how many columns are needed to display a string in a terminal. + + Returns -1 if the string contains non-printable characters. + """ + width = 0 + for c in unicodedata.normalize("NFC", s): + wc = wcwidth(c) + if wc < 0: + return -1 + width += wc + return width diff --git a/env/lib/python3.8/site-packages/_pytest/_py/__init__.py b/env/lib/python3.8/site-packages/_pytest/_py/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/_pytest/_py/error.py b/env/lib/python3.8/site-packages/_pytest/_py/error.py new file mode 100644 index 00000000..0b8f2d53 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_py/error.py @@ -0,0 +1,109 @@ +"""create errno-specific classes for IO or os calls.""" +from __future__ import annotations + +import errno +import os +import sys +from typing import Callable +from typing import TYPE_CHECKING +from typing import TypeVar + +if TYPE_CHECKING: + from typing_extensions import ParamSpec + + P = ParamSpec("P") + +R = TypeVar("R") + + +class Error(EnvironmentError): + def __repr__(self) -> str: + return "{}.{} {!r}: {} ".format( + self.__class__.__module__, + self.__class__.__name__, + self.__class__.__doc__, + " ".join(map(str, self.args)), + # repr(self.args) + ) + + def __str__(self) -> str: + s = "[{}]: {}".format( + self.__class__.__doc__, + " ".join(map(str, self.args)), + ) + return s + + +_winerrnomap = { + 2: errno.ENOENT, + 3: errno.ENOENT, + 17: errno.EEXIST, + 18: errno.EXDEV, + 13: errno.EBUSY, # empty cd drive, but ENOMEDIUM seems unavailiable + 22: errno.ENOTDIR, + 20: errno.ENOTDIR, + 267: errno.ENOTDIR, + 5: errno.EACCES, # anything better? +} + + +class ErrorMaker: + """lazily provides Exception classes for each possible POSIX errno + (as defined per the 'errno' module). All such instances + subclass EnvironmentError. 
+ """ + + _errno2class: dict[int, type[Error]] = {} + + def __getattr__(self, name: str) -> type[Error]: + if name[0] == "_": + raise AttributeError(name) + eno = getattr(errno, name) + cls = self._geterrnoclass(eno) + setattr(self, name, cls) + return cls + + def _geterrnoclass(self, eno: int) -> type[Error]: + try: + return self._errno2class[eno] + except KeyError: + clsname = errno.errorcode.get(eno, "UnknownErrno%d" % (eno,)) + errorcls = type( + clsname, + (Error,), + {"__module__": "py.error", "__doc__": os.strerror(eno)}, + ) + self._errno2class[eno] = errorcls + return errorcls + + def checked_call( + self, func: Callable[P, R], *args: P.args, **kwargs: P.kwargs + ) -> R: + """Call a function and raise an errno-exception if applicable.""" + __tracebackhide__ = True + try: + return func(*args, **kwargs) + except Error: + raise + except OSError as value: + if not hasattr(value, "errno"): + raise + errno = value.errno + if sys.platform == "win32": + try: + cls = self._geterrnoclass(_winerrnomap[errno]) + except KeyError: + raise value + else: + # we are not on Windows, or we got a proper OSError + cls = self._geterrnoclass(errno) + + raise cls(f"{func.__name__}{args!r}") + + +_error_maker = ErrorMaker() +checked_call = _error_maker.checked_call + + +def __getattr__(attr: str) -> type[Error]: + return getattr(_error_maker, attr) # type: ignore[no-any-return] diff --git a/env/lib/python3.8/site-packages/_pytest/_py/path.py b/env/lib/python3.8/site-packages/_pytest/_py/path.py new file mode 100644 index 00000000..00f15152 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_py/path.py @@ -0,0 +1,1474 @@ +"""local path implementation.""" +from __future__ import annotations + +import atexit +import fnmatch +import importlib.util +import io +import os +import posixpath +import sys +import uuid +import warnings +from contextlib import contextmanager +from os.path import abspath +from os.path import dirname +from os.path import exists +from os.path import isabs +from os.path import isdir +from os.path import isfile +from os.path import islink +from os.path import normpath +from stat import S_ISDIR +from stat import S_ISLNK +from stat import S_ISREG +from typing import Any +from typing import Callable +from typing import overload +from typing import TYPE_CHECKING + +from . import error + +if TYPE_CHECKING: + from typing import Literal + +# Moved from local.py. +iswin32 = sys.platform == "win32" or (getattr(os, "_name", False) == "nt") + + +class Checkers: + _depend_on_existence = "exists", "link", "dir", "file" + + def __init__(self, path): + self.path = path + + def dotfile(self): + return self.path.basename.startswith(".") + + def ext(self, arg): + if not arg.startswith("."): + arg = "." 
+ arg + return self.path.ext == arg + + def basename(self, arg): + return self.path.basename == arg + + def basestarts(self, arg): + return self.path.basename.startswith(arg) + + def relto(self, arg): + return self.path.relto(arg) + + def fnmatch(self, arg): + return self.path.fnmatch(arg) + + def endswith(self, arg): + return str(self.path).endswith(arg) + + def _evaluate(self, kw): + from .._code.source import getrawcode + + for name, value in kw.items(): + invert = False + meth = None + try: + meth = getattr(self, name) + except AttributeError: + if name[:3] == "not": + invert = True + try: + meth = getattr(self, name[3:]) + except AttributeError: + pass + if meth is None: + raise TypeError(f"no {name!r} checker available for {self.path!r}") + try: + if getrawcode(meth).co_argcount > 1: + if (not meth(value)) ^ invert: + return False + else: + if bool(value) ^ bool(meth()) ^ invert: + return False + except (error.ENOENT, error.ENOTDIR, error.EBUSY): + # EBUSY feels not entirely correct, + # but its kind of necessary since ENOMEDIUM + # is not accessible in python + for name in self._depend_on_existence: + if name in kw: + if kw.get(name): + return False + name = "not" + name + if name in kw: + if not kw.get(name): + return False + return True + + _statcache: Stat + + def _stat(self) -> Stat: + try: + return self._statcache + except AttributeError: + try: + self._statcache = self.path.stat() + except error.ELOOP: + self._statcache = self.path.lstat() + return self._statcache + + def dir(self): + return S_ISDIR(self._stat().mode) + + def file(self): + return S_ISREG(self._stat().mode) + + def exists(self): + return self._stat() + + def link(self): + st = self.path.lstat() + return S_ISLNK(st.mode) + + +class NeverRaised(Exception): + pass + + +class Visitor: + def __init__(self, fil, rec, ignore, bf, sort): + if isinstance(fil, str): + fil = FNMatcher(fil) + if isinstance(rec, str): + self.rec: Callable[[LocalPath], bool] = FNMatcher(rec) + elif not hasattr(rec, "__call__") and rec: + self.rec = lambda path: True + else: + self.rec = rec + self.fil = fil + self.ignore = ignore + self.breadthfirst = bf + self.optsort = sort and sorted or (lambda x: x) + + def gen(self, path): + try: + entries = path.listdir() + except self.ignore: + return + rec = self.rec + dirs = self.optsort( + [p for p in entries if p.check(dir=1) and (rec is None or rec(p))] + ) + if not self.breadthfirst: + for subdir in dirs: + for p in self.gen(subdir): + yield p + for p in self.optsort(entries): + if self.fil is None or self.fil(p): + yield p + if self.breadthfirst: + for subdir in dirs: + for p in self.gen(subdir): + yield p + + +class FNMatcher: + def __init__(self, pattern): + self.pattern = pattern + + def __call__(self, path): + pattern = self.pattern + + if ( + pattern.find(path.sep) == -1 + and iswin32 + and pattern.find(posixpath.sep) != -1 + ): + # Running on Windows, the pattern has no Windows path separators, + # and the pattern has one or more Posix path separators. Replace + # the Posix path separators with the Windows path separator. + pattern = pattern.replace(posixpath.sep, path.sep) + + if pattern.find(path.sep) == -1: + name = path.basename + else: + name = str(path) # path.strpath # XXX svn? + if not os.path.isabs(pattern): + pattern = "*" + path.sep + pattern + return fnmatch.fnmatch(name, pattern) + + +def map_as_list(func, iter): + return list(map(func, iter)) + + +class Stat: + if TYPE_CHECKING: + + @property + def size(self) -> int: + ... + + @property + def mtime(self) -> float: + ... 
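+
+    # Every other field (mode, uid, gid, ...) is proxied from the wrapped
+    # os.stat_result by __getattr__ below, which prefixes "st_".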
+ + def __getattr__(self, name: str) -> Any: + return getattr(self._osstatresult, "st_" + name) + + def __init__(self, path, osstatresult): + self.path = path + self._osstatresult = osstatresult + + @property + def owner(self): + if iswin32: + raise NotImplementedError("XXX win32") + import pwd + + entry = error.checked_call(pwd.getpwuid, self.uid) + return entry[0] + + @property + def group(self): + """Return group name of file.""" + if iswin32: + raise NotImplementedError("XXX win32") + import grp + + entry = error.checked_call(grp.getgrgid, self.gid) + return entry[0] + + def isdir(self): + return S_ISDIR(self._osstatresult.st_mode) + + def isfile(self): + return S_ISREG(self._osstatresult.st_mode) + + def islink(self): + self.path.lstat() + return S_ISLNK(self._osstatresult.st_mode) + + +def getuserid(user): + import pwd + + if not isinstance(user, int): + user = pwd.getpwnam(user)[2] + return user + + +def getgroupid(group): + import grp + + if not isinstance(group, int): + group = grp.getgrnam(group)[2] + return group + + +class LocalPath: + """Object oriented interface to os.path and other local filesystem + related information. + """ + + class ImportMismatchError(ImportError): + """raised on pyimport() if there is a mismatch of __file__'s""" + + sep = os.sep + + def __init__(self, path=None, expanduser=False): + """Initialize and return a local Path instance. + + Path can be relative to the current directory. + If path is None it defaults to the current working directory. + If expanduser is True, tilde-expansion is performed. + Note that Path instances always carry an absolute path. + Note also that passing in a local path object will simply return + the exact same path object. Use new() to get a new copy. + """ + if path is None: + self.strpath = error.checked_call(os.getcwd) + else: + try: + path = os.fspath(path) + except TypeError: + raise ValueError( + "can only pass None, Path instances " + "or non-empty strings to LocalPath" + ) + if expanduser: + path = os.path.expanduser(path) + self.strpath = abspath(path) + + if sys.platform != "win32": + + def chown(self, user, group, rec=0): + """Change ownership to the given user and group. + user and group may be specified by a number or + by a name. if rec is True change ownership + recursively. 
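+
+            e.g. ``p.chown("www-data", "www-data", rec=1)`` (illustrative).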
+ """ + uid = getuserid(user) + gid = getgroupid(group) + if rec: + for x in self.visit(rec=lambda x: x.check(link=0)): + if x.check(link=0): + error.checked_call(os.chown, str(x), uid, gid) + error.checked_call(os.chown, str(self), uid, gid) + + def readlink(self) -> str: + """Return value of a symbolic link.""" + # https://github.com/python/mypy/issues/12278 + return error.checked_call(os.readlink, self.strpath) # type: ignore[arg-type,return-value] + + def mklinkto(self, oldname): + """Posix style hard link to another name.""" + error.checked_call(os.link, str(oldname), str(self)) + + def mksymlinkto(self, value, absolute=1): + """Create a symbolic link with the given value (pointing to another name).""" + if absolute: + error.checked_call(os.symlink, str(value), self.strpath) + else: + base = self.common(value) + # with posix local paths '/' is always a common base + relsource = self.__class__(value).relto(base) + reldest = self.relto(base) + n = reldest.count(self.sep) + target = self.sep.join(("..",) * n + (relsource,)) + error.checked_call(os.symlink, target, self.strpath) + + def __div__(self, other): + return self.join(os.fspath(other)) + + __truediv__ = __div__ # py3k + + @property + def basename(self): + """Basename part of path.""" + return self._getbyspec("basename")[0] + + @property + def dirname(self): + """Dirname part of path.""" + return self._getbyspec("dirname")[0] + + @property + def purebasename(self): + """Pure base name of the path.""" + return self._getbyspec("purebasename")[0] + + @property + def ext(self): + """Extension of the path (including the '.').""" + return self._getbyspec("ext")[0] + + def read_binary(self): + """Read and return a bytestring from reading the path.""" + with self.open("rb") as f: + return f.read() + + def read_text(self, encoding): + """Read and return a Unicode string from reading the path.""" + with self.open("r", encoding=encoding) as f: + return f.read() + + def read(self, mode="r"): + """Read and return a bytestring from reading the path.""" + with self.open(mode) as f: + return f.read() + + def readlines(self, cr=1): + """Read and return a list of lines from the path. if cr is False, the + newline will be removed from the end of each line.""" + mode = "r" + + if not cr: + content = self.read(mode) + return content.split("\n") + else: + f = self.open(mode) + try: + return f.readlines() + finally: + f.close() + + def load(self): + """(deprecated) return object unpickled from self.read()""" + f = self.open("rb") + try: + import pickle + + return error.checked_call(pickle.load, f) + finally: + f.close() + + def move(self, target): + """Move this path to target.""" + if target.relto(self): + raise error.EINVAL(target, "cannot move path into a subdirectory of itself") + try: + self.rename(target) + except error.EXDEV: # invalid cross-device link + self.copy(target) + self.remove() + + def fnmatch(self, pattern): + """Return true if the basename/fullname matches the glob-'pattern'. + + valid pattern characters:: + + * matches everything + ? matches any single character + [seq] matches any character in seq + [!seq] matches any char not in seq + + If the pattern contains a path-separator then the full path + is used for pattern matching and a '*' is prepended to the + pattern. + + if the pattern doesn't contain a path-separator the pattern + is only matched against the basename. + """ + return FNMatcher(pattern)(self) + + def relto(self, relpath): + """Return a string which is the relative part of the path + to the given 'relpath'. 
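+
+        e.g. ``LocalPath("/a/b/c").relto("/a")`` returns ``"b/c"`` on posix
+        (illustrative).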
+ """ + if not isinstance(relpath, (str, LocalPath)): + raise TypeError(f"{relpath!r}: not a string or path object") + strrelpath = str(relpath) + if strrelpath and strrelpath[-1] != self.sep: + strrelpath += self.sep + # assert strrelpath[-1] == self.sep + # assert strrelpath[-2] != self.sep + strself = self.strpath + if sys.platform == "win32" or getattr(os, "_name", None) == "nt": + if os.path.normcase(strself).startswith(os.path.normcase(strrelpath)): + return strself[len(strrelpath) :] + elif strself.startswith(strrelpath): + return strself[len(strrelpath) :] + return "" + + def ensure_dir(self, *args): + """Ensure the path joined with args is a directory.""" + return self.ensure(*args, **{"dir": True}) + + def bestrelpath(self, dest): + """Return a string which is a relative path from self + (assumed to be a directory) to dest such that + self.join(bestrelpath) == dest and if not such + path can be determined return dest. + """ + try: + if self == dest: + return os.curdir + base = self.common(dest) + if not base: # can be the case on windows + return str(dest) + self2base = self.relto(base) + reldest = dest.relto(base) + if self2base: + n = self2base.count(self.sep) + 1 + else: + n = 0 + lst = [os.pardir] * n + if reldest: + lst.append(reldest) + target = dest.sep.join(lst) + return target + except AttributeError: + return str(dest) + + def exists(self): + return self.check() + + def isdir(self): + return self.check(dir=1) + + def isfile(self): + return self.check(file=1) + + def parts(self, reverse=False): + """Return a root-first list of all ancestor directories + plus the path itself. + """ + current = self + lst = [self] + while 1: + last = current + current = current.dirpath() + if last == current: + break + lst.append(current) + if not reverse: + lst.reverse() + return lst + + def common(self, other): + """Return the common part shared with the other path + or None if there is no common part. + """ + last = None + for x, y in zip(self.parts(), other.parts()): + if x != y: + return last + last = x + return last + + def __add__(self, other): + """Return new path object with 'other' added to the basename""" + return self.new(basename=self.basename + str(other)) + + def visit(self, fil=None, rec=None, ignore=NeverRaised, bf=False, sort=False): + """Yields all paths below the current one + + fil is a filter (glob pattern or callable), if not matching the + path will not be yielded, defaulting to None (everything is + returned) + + rec is a filter (glob pattern or callable) that controls whether + a node is descended, defaulting to None + + ignore is an Exception class that is ignoredwhen calling dirlist() + on any of the paths (by default, all exceptions are reported) + + bf if True will cause a breadthfirst search instead of the + default depthfirst. Default: False + + sort if True will sort entries within each directory level. 
+ """ + yield from Visitor(fil, rec, ignore, bf, sort).gen(self) + + def _sortlist(self, res, sort): + if sort: + if hasattr(sort, "__call__"): + warnings.warn( + DeprecationWarning( + "listdir(sort=callable) is deprecated and breaks on python3" + ), + stacklevel=3, + ) + res.sort(sort) + else: + res.sort() + + def __fspath__(self): + return self.strpath + + def __hash__(self): + s = self.strpath + if iswin32: + s = s.lower() + return hash(s) + + def __eq__(self, other): + s1 = os.fspath(self) + try: + s2 = os.fspath(other) + except TypeError: + return False + if iswin32: + s1 = s1.lower() + try: + s2 = s2.lower() + except AttributeError: + return False + return s1 == s2 + + def __ne__(self, other): + return not (self == other) + + def __lt__(self, other): + return os.fspath(self) < os.fspath(other) + + def __gt__(self, other): + return os.fspath(self) > os.fspath(other) + + def samefile(self, other): + """Return True if 'other' references the same file as 'self'.""" + other = os.fspath(other) + if not isabs(other): + other = abspath(other) + if self == other: + return True + if not hasattr(os.path, "samefile"): + return False + return error.checked_call(os.path.samefile, self.strpath, other) + + def remove(self, rec=1, ignore_errors=False): + """Remove a file or directory (or a directory tree if rec=1). + if ignore_errors is True, errors while removing directories will + be ignored. + """ + if self.check(dir=1, link=0): + if rec: + # force remove of readonly files on windows + if iswin32: + self.chmod(0o700, rec=1) + import shutil + + error.checked_call( + shutil.rmtree, self.strpath, ignore_errors=ignore_errors + ) + else: + error.checked_call(os.rmdir, self.strpath) + else: + if iswin32: + self.chmod(0o700) + error.checked_call(os.remove, self.strpath) + + def computehash(self, hashtype="md5", chunksize=524288): + """Return hexdigest of hashvalue for this file.""" + try: + try: + import hashlib as mod + except ImportError: + if hashtype == "sha1": + hashtype = "sha" + mod = __import__(hashtype) + hash = getattr(mod, hashtype)() + except (AttributeError, ImportError): + raise ValueError(f"Don't know how to compute {hashtype!r} hash") + f = self.open("rb") + try: + while 1: + buf = f.read(chunksize) + if not buf: + return hash.hexdigest() + hash.update(buf) + finally: + f.close() + + def new(self, **kw): + """Create a modified version of this path. + the following keyword arguments modify various path parts:: + + a:/some/path/to/a/file.ext + xx drive + xxxxxxxxxxxxxxxxx dirname + xxxxxxxx basename + xxxx purebasename + xxx ext + """ + obj = object.__new__(self.__class__) + if not kw: + obj.strpath = self.strpath + return obj + drive, dirname, basename, purebasename, ext = self._getbyspec( + "drive,dirname,basename,purebasename,ext" + ) + if "basename" in kw: + if "purebasename" in kw or "ext" in kw: + raise ValueError("invalid specification %r" % kw) + else: + pb = kw.setdefault("purebasename", purebasename) + try: + ext = kw["ext"] + except KeyError: + pass + else: + if ext and not ext.startswith("."): + ext = "." 
+ ext + kw["basename"] = pb + ext + + if "dirname" in kw and not kw["dirname"]: + kw["dirname"] = drive + else: + kw.setdefault("dirname", dirname) + kw.setdefault("sep", self.sep) + obj.strpath = normpath("%(dirname)s%(sep)s%(basename)s" % kw) + return obj + + def _getbyspec(self, spec: str) -> list[str]: + """See new for what 'spec' can be.""" + res = [] + parts = self.strpath.split(self.sep) + + args = filter(None, spec.split(",")) + for name in args: + if name == "drive": + res.append(parts[0]) + elif name == "dirname": + res.append(self.sep.join(parts[:-1])) + else: + basename = parts[-1] + if name == "basename": + res.append(basename) + else: + i = basename.rfind(".") + if i == -1: + purebasename, ext = basename, "" + else: + purebasename, ext = basename[:i], basename[i:] + if name == "purebasename": + res.append(purebasename) + elif name == "ext": + res.append(ext) + else: + raise ValueError("invalid part specification %r" % name) + return res + + def dirpath(self, *args, **kwargs): + """Return the directory path joined with any given path arguments.""" + if not kwargs: + path = object.__new__(self.__class__) + path.strpath = dirname(self.strpath) + if args: + path = path.join(*args) + return path + return self.new(basename="").join(*args, **kwargs) + + def join(self, *args: os.PathLike[str], abs: bool = False) -> LocalPath: + """Return a new path by appending all 'args' as path + components. if abs=1 is used restart from root if any + of the args is an absolute path. + """ + sep = self.sep + strargs = [os.fspath(arg) for arg in args] + strpath = self.strpath + if abs: + newargs: list[str] = [] + for arg in reversed(strargs): + if isabs(arg): + strpath = arg + strargs = newargs + break + newargs.insert(0, arg) + # special case for when we have e.g. strpath == "/" + actual_sep = "" if strpath.endswith(sep) else sep + for arg in strargs: + arg = arg.strip(sep) + if iswin32: + # allow unix style paths even on windows. + arg = arg.strip("/") + arg = arg.replace("/", sep) + strpath = strpath + actual_sep + arg + actual_sep = sep + obj = object.__new__(self.__class__) + obj.strpath = normpath(strpath) + return obj + + def open(self, mode="r", ensure=False, encoding=None): + """Return an opened file with the given mode. + + If ensure is True, create parent directories if needed. + """ + if ensure: + self.dirpath().ensure(dir=1) + if encoding: + return error.checked_call(io.open, self.strpath, mode, encoding=encoding) + return error.checked_call(open, self.strpath, mode) + + def _fastjoin(self, name): + child = object.__new__(self.__class__) + child.strpath = self.strpath + self.sep + name + return child + + def islink(self): + return islink(self.strpath) + + def check(self, **kw): + """Check a path for existence and properties. + + Without arguments, return True if the path exists, otherwise False. 
+ + valid checkers:: + + file=1 # is a file + file=0 # is not a file (may not even exist) + dir=1 # is a dir + link=1 # is a link + exists=1 # exists + + You can specify multiple checker definitions, for example:: + + path.check(file=1, link=1) # a link pointing to a file + """ + if not kw: + return exists(self.strpath) + if len(kw) == 1: + if "dir" in kw: + return not kw["dir"] ^ isdir(self.strpath) + if "file" in kw: + return not kw["file"] ^ isfile(self.strpath) + if not kw: + kw = {"exists": 1} + return Checkers(self)._evaluate(kw) + + _patternchars = set("*?[" + os.path.sep) + + def listdir(self, fil=None, sort=None): + """List directory contents, possibly filter by the given fil func + and possibly sorted. + """ + if fil is None and sort is None: + names = error.checked_call(os.listdir, self.strpath) + return map_as_list(self._fastjoin, names) + if isinstance(fil, str): + if not self._patternchars.intersection(fil): + child = self._fastjoin(fil) + if exists(child.strpath): + return [child] + return [] + fil = FNMatcher(fil) + names = error.checked_call(os.listdir, self.strpath) + res = [] + for name in names: + child = self._fastjoin(name) + if fil is None or fil(child): + res.append(child) + self._sortlist(res, sort) + return res + + def size(self) -> int: + """Return size of the underlying file object""" + return self.stat().size + + def mtime(self) -> float: + """Return last modification time of the path.""" + return self.stat().mtime + + def copy(self, target, mode=False, stat=False): + """Copy path to target. + + If mode is True, will copy copy permission from path to target. + If stat is True, copy permission, last modification + time, last access time, and flags from path to target. + """ + if self.check(file=1): + if target.check(dir=1): + target = target.join(self.basename) + assert self != target + copychunked(self, target) + if mode: + copymode(self.strpath, target.strpath) + if stat: + copystat(self, target) + else: + + def rec(p): + return p.check(link=0) + + for x in self.visit(rec=rec): + relpath = x.relto(self) + newx = target.join(relpath) + newx.dirpath().ensure(dir=1) + if x.check(link=1): + newx.mksymlinkto(x.readlink()) + continue + elif x.check(file=1): + copychunked(x, newx) + elif x.check(dir=1): + newx.ensure(dir=1) + if mode: + copymode(x.strpath, newx.strpath) + if stat: + copystat(x, newx) + + def rename(self, target): + """Rename this path to target.""" + target = os.fspath(target) + return error.checked_call(os.rename, self.strpath, target) + + def dump(self, obj, bin=1): + """Pickle object into path location""" + f = self.open("wb") + import pickle + + try: + error.checked_call(pickle.dump, obj, f, bin) + finally: + f.close() + + def mkdir(self, *args): + """Create & return the directory joined with args.""" + p = self.join(*args) + error.checked_call(os.mkdir, os.fspath(p)) + return p + + def write_binary(self, data, ensure=False): + """Write binary data into path. If ensure is True create + missing parent directories. + """ + if ensure: + self.dirpath().ensure(dir=1) + with self.open("wb") as f: + f.write(data) + + def write_text(self, data, encoding, ensure=False): + """Write text data into path using the specified encoding. + If ensure is True create missing parent directories. + """ + if ensure: + self.dirpath().ensure(dir=1) + with self.open("w", encoding=encoding) as f: + f.write(data) + + def write(self, data, mode="w", ensure=False): + """Write data into path. If ensure is True create + missing parent directories. 
+ """ + if ensure: + self.dirpath().ensure(dir=1) + if "b" in mode: + if not isinstance(data, bytes): + raise ValueError("can only process bytes") + else: + if not isinstance(data, str): + if not isinstance(data, bytes): + data = str(data) + else: + data = data.decode(sys.getdefaultencoding()) + f = self.open(mode) + try: + f.write(data) + finally: + f.close() + + def _ensuredirs(self): + parent = self.dirpath() + if parent == self: + return self + if parent.check(dir=0): + parent._ensuredirs() + if self.check(dir=0): + try: + self.mkdir() + except error.EEXIST: + # race condition: file/dir created by another thread/process. + # complain if it is not a dir + if self.check(dir=0): + raise + return self + + def ensure(self, *args, **kwargs): + """Ensure that an args-joined path exists (by default as + a file). if you specify a keyword argument 'dir=True' + then the path is forced to be a directory path. + """ + p = self.join(*args) + if kwargs.get("dir", 0): + return p._ensuredirs() + else: + p.dirpath()._ensuredirs() + if not p.check(file=1): + p.open("w").close() + return p + + @overload + def stat(self, raising: Literal[True] = ...) -> Stat: + ... + + @overload + def stat(self, raising: Literal[False]) -> Stat | None: + ... + + def stat(self, raising: bool = True) -> Stat | None: + """Return an os.stat() tuple.""" + if raising: + return Stat(self, error.checked_call(os.stat, self.strpath)) + try: + return Stat(self, os.stat(self.strpath)) + except KeyboardInterrupt: + raise + except Exception: + return None + + def lstat(self) -> Stat: + """Return an os.lstat() tuple.""" + return Stat(self, error.checked_call(os.lstat, self.strpath)) + + def setmtime(self, mtime=None): + """Set modification time for the given path. if 'mtime' is None + (the default) then the file's mtime is set to current time. + + Note that the resolution for 'mtime' is platform dependent. + """ + if mtime is None: + return error.checked_call(os.utime, self.strpath, mtime) + try: + return error.checked_call(os.utime, self.strpath, (-1, mtime)) + except error.EINVAL: + return error.checked_call(os.utime, self.strpath, (self.atime(), mtime)) + + def chdir(self): + """Change directory to self and return old current directory""" + try: + old = self.__class__() + except error.ENOENT: + old = None + error.checked_call(os.chdir, self.strpath) + return old + + @contextmanager + def as_cwd(self): + """ + Return a context manager, which changes to the path's dir during the + managed "with" context. + On __enter__ it returns the old dir, which might be ``None``. + """ + old = self.chdir() + try: + yield old + finally: + if old is not None: + old.chdir() + + def realpath(self): + """Return a new path which contains no symbolic links.""" + return self.__class__(os.path.realpath(self.strpath)) + + def atime(self): + """Return last access time of the path.""" + return self.stat().atime + + def __repr__(self): + return "local(%r)" % self.strpath + + def __str__(self): + """Return string representation of the Path.""" + return self.strpath + + def chmod(self, mode, rec=0): + """Change permissions to the given mode. If mode is an + integer it directly encodes the os-specific modes. + if rec is True perform recursively. 
+ """ + if not isinstance(mode, int): + raise TypeError(f"mode {mode!r} must be an integer") + if rec: + for x in self.visit(rec=rec): + error.checked_call(os.chmod, str(x), mode) + error.checked_call(os.chmod, self.strpath, mode) + + def pypkgpath(self): + """Return the Python package path by looking for the last + directory upwards which still contains an __init__.py. + Return None if a pkgpath can not be determined. + """ + pkgpath = None + for parent in self.parts(reverse=True): + if parent.isdir(): + if not parent.join("__init__.py").exists(): + break + if not isimportable(parent.basename): + break + pkgpath = parent + return pkgpath + + def _ensuresyspath(self, ensuremode, path): + if ensuremode: + s = str(path) + if ensuremode == "append": + if s not in sys.path: + sys.path.append(s) + else: + if s != sys.path[0]: + sys.path.insert(0, s) + + def pyimport(self, modname=None, ensuresyspath=True): + """Return path as an imported python module. + + If modname is None, look for the containing package + and construct an according module name. + The module will be put/looked up in sys.modules. + if ensuresyspath is True then the root dir for importing + the file (taking __init__.py files into account) will + be prepended to sys.path if it isn't there already. + If ensuresyspath=="append" the root dir will be appended + if it isn't already contained in sys.path. + if ensuresyspath is False no modification of syspath happens. + + Special value of ensuresyspath=="importlib" is intended + purely for using in pytest, it is capable only of importing + separate .py files outside packages, e.g. for test suite + without any __init__.py file. It effectively allows having + same-named test modules in different places and offers + mild opt-in via this option. Note that it works only in + recent versions of python. + """ + if not self.check(): + raise error.ENOENT(self) + + if ensuresyspath == "importlib": + if modname is None: + modname = self.purebasename + spec = importlib.util.spec_from_file_location(modname, str(self)) + if spec is None or spec.loader is None: + raise ImportError( + f"Can't find module {modname} at location {str(self)}" + ) + mod = importlib.util.module_from_spec(spec) + spec.loader.exec_module(mod) + return mod + + pkgpath = None + if modname is None: + pkgpath = self.pypkgpath() + if pkgpath is not None: + pkgroot = pkgpath.dirpath() + names = self.new(ext="").relto(pkgroot).split(self.sep) + if names[-1] == "__init__": + names.pop() + modname = ".".join(names) + else: + pkgroot = self.dirpath() + modname = self.purebasename + + self._ensuresyspath(ensuresyspath, pkgroot) + __import__(modname) + mod = sys.modules[modname] + if self.basename == "__init__.py": + return mod # we don't check anything as we might + # be in a namespace package ... 
too icky to check + modfile = mod.__file__ + assert modfile is not None + if modfile[-4:] in (".pyc", ".pyo"): + modfile = modfile[:-1] + elif modfile.endswith("$py.class"): + modfile = modfile[:-9] + ".py" + if modfile.endswith(os.path.sep + "__init__.py"): + if self.basename != "__init__.py": + modfile = modfile[:-12] + try: + issame = self.samefile(modfile) + except error.ENOENT: + issame = False + if not issame: + ignore = os.getenv("PY_IGNORE_IMPORTMISMATCH") + if ignore != "1": + raise self.ImportMismatchError(modname, modfile, self) + return mod + else: + try: + return sys.modules[modname] + except KeyError: + # we have a custom modname, do a pseudo-import + import types + + mod = types.ModuleType(modname) + mod.__file__ = str(self) + sys.modules[modname] = mod + try: + with open(str(self), "rb") as f: + exec(f.read(), mod.__dict__) + except BaseException: + del sys.modules[modname] + raise + return mod + + def sysexec(self, *argv: os.PathLike[str], **popen_opts: Any) -> str: + """Return stdout text from executing a system child process, + where the 'self' path points to executable. + The process is directly invoked and not through a system shell. + """ + from subprocess import Popen, PIPE + + popen_opts.pop("stdout", None) + popen_opts.pop("stderr", None) + proc = Popen( + [str(self)] + [str(arg) for arg in argv], + **popen_opts, + stdout=PIPE, + stderr=PIPE, + ) + stdout: str | bytes + stdout, stderr = proc.communicate() + ret = proc.wait() + if isinstance(stdout, bytes): + stdout = stdout.decode(sys.getdefaultencoding()) + if ret != 0: + if isinstance(stderr, bytes): + stderr = stderr.decode(sys.getdefaultencoding()) + raise RuntimeError( + ret, + ret, + str(self), + stdout, + stderr, + ) + return stdout + + @classmethod + def sysfind(cls, name, checker=None, paths=None): + """Return a path object found by looking at the systems + underlying PATH specification. If the checker is not None + it will be invoked to filter matching paths. If a binary + cannot be found, None is returned + Note: This is probably not working on plain win32 systems + but may work on cygwin. + """ + if isabs(name): + p = local(name) + if p.check(file=1): + return p + else: + if paths is None: + if iswin32: + paths = os.environ["Path"].split(";") + if "" not in paths and "." not in paths: + paths.append(".") + try: + systemroot = os.environ["SYSTEMROOT"] + except KeyError: + pass + else: + paths = [ + path.replace("%SystemRoot%", systemroot) for path in paths + ] + else: + paths = os.environ["PATH"].split(":") + tryadd = [] + if iswin32: + tryadd += os.environ["PATHEXT"].split(os.pathsep) + tryadd.append("") + + for x in paths: + for addext in tryadd: + p = local(x).join(name, abs=True) + addext + try: + if p.check(file=1): + if checker: + if not checker(p): + continue + return p + except error.EACCES: + pass + return None + + @classmethod + def _gethomedir(cls): + try: + x = os.environ["HOME"] + except KeyError: + try: + x = os.environ["HOMEDRIVE"] + os.environ["HOMEPATH"] + except KeyError: + return None + return cls(x) + + # """ + # special class constructors for local filesystem paths + # """ + @classmethod + def get_temproot(cls): + """Return the system's temporary directory + (where tempfiles are usually created in) + """ + import tempfile + + return local(tempfile.gettempdir()) + + @classmethod + def mkdtemp(cls, rootdir=None): + """Return a Path object pointing to a fresh new temporary directory + (which we created ourself). 
+ """ + import tempfile + + if rootdir is None: + rootdir = cls.get_temproot() + return cls(error.checked_call(tempfile.mkdtemp, dir=str(rootdir))) + + @classmethod + def make_numbered_dir( + cls, prefix="session-", rootdir=None, keep=3, lock_timeout=172800 + ): # two days + """Return unique directory with a number greater than the current + maximum one. The number is assumed to start directly after prefix. + if keep is true directories with a number less than (maxnum-keep) + will be removed. If .lock files are used (lock_timeout non-zero), + algorithm is multi-process safe. + """ + if rootdir is None: + rootdir = cls.get_temproot() + + nprefix = prefix.lower() + + def parse_num(path): + """Parse the number out of a path (if it matches the prefix)""" + nbasename = path.basename.lower() + if nbasename.startswith(nprefix): + try: + return int(nbasename[len(nprefix) :]) + except ValueError: + pass + + def create_lockfile(path): + """Exclusively create lockfile. Throws when failed""" + mypid = os.getpid() + lockfile = path.join(".lock") + if hasattr(lockfile, "mksymlinkto"): + lockfile.mksymlinkto(str(mypid)) + else: + fd = error.checked_call( + os.open, str(lockfile), os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644 + ) + with os.fdopen(fd, "w") as f: + f.write(str(mypid)) + return lockfile + + def atexit_remove_lockfile(lockfile): + """Ensure lockfile is removed at process exit""" + mypid = os.getpid() + + def try_remove_lockfile(): + # in a fork() situation, only the last process should + # remove the .lock, otherwise the other processes run the + # risk of seeing their temporary dir disappear. For now + # we remove the .lock in the parent only (i.e. we assume + # that the children finish before the parent). + if os.getpid() != mypid: + return + try: + lockfile.remove() + except error.Error: + pass + + atexit.register(try_remove_lockfile) + + # compute the maximum number currently in use with the prefix + lastmax = None + while True: + maxnum = -1 + for path in rootdir.listdir(): + num = parse_num(path) + if num is not None: + maxnum = max(maxnum, num) + + # make the new directory + try: + udir = rootdir.mkdir(prefix + str(maxnum + 1)) + if lock_timeout: + lockfile = create_lockfile(udir) + atexit_remove_lockfile(lockfile) + except (error.EEXIST, error.ENOENT, error.EBUSY): + # race condition (1): another thread/process created the dir + # in the meantime - try again + # race condition (2): another thread/process spuriously acquired + # lock treating empty directory as candidate + # for removal - try again + # race condition (3): another thread/process tried to create the lock at + # the same time (happened in Python 3.3 on Windows) + # https://ci.appveyor.com/project/pytestbot/py/build/1.0.21/job/ffi85j4c0lqwsfwa + if lastmax == maxnum: + raise + lastmax = maxnum + continue + break + + def get_mtime(path): + """Read file modification time""" + try: + return path.lstat().mtime + except error.Error: + pass + + garbage_prefix = prefix + "garbage-" + + def is_garbage(path): + """Check if path denotes directory scheduled for removal""" + bn = path.basename + return bn.startswith(garbage_prefix) + + # prune old directories + udir_time = get_mtime(udir) + if keep and udir_time: + for path in rootdir.listdir(): + num = parse_num(path) + if num is not None and num <= (maxnum - keep): + try: + # try acquiring lock to remove directory as exclusive user + if lock_timeout: + create_lockfile(path) + except (error.EEXIST, error.ENOENT, error.EBUSY): + path_time = get_mtime(path) + if not path_time: + # assume 
directory doesn't exist now + continue + if abs(udir_time - path_time) < lock_timeout: + # assume directory with lockfile exists + # and lock timeout hasn't expired yet + continue + + # path dir locked for exclusive use + # and scheduled for removal to avoid another thread/process + # treating it as a new directory or removal candidate + garbage_path = rootdir.join(garbage_prefix + str(uuid.uuid4())) + try: + path.rename(garbage_path) + garbage_path.remove(rec=1) + except KeyboardInterrupt: + raise + except Exception: # this might be error.Error, WindowsError ... + pass + if is_garbage(path): + try: + path.remove(rec=1) + except KeyboardInterrupt: + raise + except Exception: # this might be error.Error, WindowsError ... + pass + + # make link... + try: + username = os.environ["USER"] # linux, et al + except KeyError: + try: + username = os.environ["USERNAME"] # windows + except KeyError: + username = "current" + + src = str(udir) + dest = src[: src.rfind("-")] + "-" + username + try: + os.unlink(dest) + except OSError: + pass + try: + os.symlink(src, dest) + except (OSError, AttributeError, NotImplementedError): + pass + + return udir + + +def copymode(src, dest): + """Copy permission from src to dst.""" + import shutil + + shutil.copymode(src, dest) + + +def copystat(src, dest): + """Copy permission, last modification time, + last access time, and flags from src to dst.""" + import shutil + + shutil.copystat(str(src), str(dest)) + + +def copychunked(src, dest): + chunksize = 524288 # half a meg of bytes + fsrc = src.open("rb") + try: + fdest = dest.open("wb") + try: + while 1: + buf = fsrc.read(chunksize) + if not buf: + break + fdest.write(buf) + finally: + fdest.close() + finally: + fsrc.close() + + +def isimportable(name): + if name and (name[0].isalpha() or name[0] == "_"): + name = name.replace("_", "") + return not name or name.isalnum() + + +local = LocalPath diff --git a/env/lib/python3.8/site-packages/_pytest/_version.py b/env/lib/python3.8/site-packages/_pytest/_version.py new file mode 100644 index 00000000..62023fe9 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/_version.py @@ -0,0 +1,5 @@ +# coding: utf-8 +# file generated by setuptools_scm +# don't change, don't track in version control +__version__ = version = '7.2.0' +__version_tuple__ = version_tuple = (7, 2, 0) diff --git a/env/lib/python3.8/site-packages/_pytest/assertion/__init__.py b/env/lib/python3.8/site-packages/_pytest/assertion/__init__.py new file mode 100644 index 00000000..a46e5813 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/assertion/__init__.py @@ -0,0 +1,181 @@ +"""Support for presenting detailed information in failing assertions.""" +import sys +from typing import Any +from typing import Generator +from typing import List +from typing import Optional +from typing import TYPE_CHECKING + +from _pytest.assertion import rewrite +from _pytest.assertion import truncate +from _pytest.assertion import util +from _pytest.assertion.rewrite import assertstate_key +from _pytest.config import Config +from _pytest.config import hookimpl +from _pytest.config.argparsing import Parser +from _pytest.nodes import Item + +if TYPE_CHECKING: + from _pytest.main import Session + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("debugconfig") + group.addoption( + "--assert", + action="store", + dest="assertmode", + choices=("rewrite", "plain"), + default="rewrite", + metavar="MODE", + help=( + "Control assertion debugging tools.\n" + "'plain' performs no assertion 
debugging.\n" + "'rewrite' (the default) rewrites assert statements in test modules" + " on import to provide assert expression information." + ), + ) + parser.addini( + "enable_assertion_pass_hook", + type="bool", + default=False, + help="Enables the pytest_assertion_pass hook. " + "Make sure to delete any previously generated pyc cache files.", + ) + + +def register_assert_rewrite(*names: str) -> None: + """Register one or more module names to be rewritten on import. + + This function will make sure that this module or all modules inside + the package will get their assert statements rewritten. + Thus you should make sure to call this before the module is + actually imported, usually in your __init__.py if you are a plugin + using a package. + + :param names: The module names to register. + """ + for name in names: + if not isinstance(name, str): + msg = "expected module names as *args, got {0} instead" # type: ignore[unreachable] + raise TypeError(msg.format(repr(names))) + for hook in sys.meta_path: + if isinstance(hook, rewrite.AssertionRewritingHook): + importhook = hook + break + else: + # TODO(typing): Add a protocol for mark_rewrite() and use it + # for importhook and for PytestPluginManager.rewrite_hook. + importhook = DummyRewriteHook() # type: ignore + importhook.mark_rewrite(*names) + + +class DummyRewriteHook: + """A no-op import hook for when rewriting is disabled.""" + + def mark_rewrite(self, *names: str) -> None: + pass + + +class AssertionState: + """State for the assertion plugin.""" + + def __init__(self, config: Config, mode) -> None: + self.mode = mode + self.trace = config.trace.root.get("assertion") + self.hook: Optional[rewrite.AssertionRewritingHook] = None + + +def install_importhook(config: Config) -> rewrite.AssertionRewritingHook: + """Try to install the rewrite hook, raise SystemError if it fails.""" + config.stash[assertstate_key] = AssertionState(config, "rewrite") + config.stash[assertstate_key].hook = hook = rewrite.AssertionRewritingHook(config) + sys.meta_path.insert(0, hook) + config.stash[assertstate_key].trace("installed rewrite import hook") + + def undo() -> None: + hook = config.stash[assertstate_key].hook + if hook is not None and hook in sys.meta_path: + sys.meta_path.remove(hook) + + config.add_cleanup(undo) + return hook + + +def pytest_collection(session: "Session") -> None: + # This hook is only called when test modules are collected + # so for example not in the managing process of pytest-xdist + # (which does not collect test modules). + assertstate = session.config.stash.get(assertstate_key, None) + if assertstate: + if assertstate.hook is not None: + assertstate.hook.set_session(session) + + +@hookimpl(tryfirst=True, hookwrapper=True) +def pytest_runtest_protocol(item: Item) -> Generator[None, None, None]: + """Setup the pytest_assertrepr_compare and pytest_assertion_pass hooks. + + The rewrite module will use util._reprcompare if it exists to use custom + reporting via the pytest_assertrepr_compare hook. This sets up this custom + comparison for the test. + """ + + ihook = item.ihook + + def callbinrepr(op, left: object, right: object) -> Optional[str]: + """Call the pytest_assertrepr_compare hook and prepare the result. + + This uses the first result from the hook and then ensures the + following: + * Overly verbose explanations are truncated unless configured otherwise + (eg. if running in verbose mode). + * Embedded newlines are escaped to help util.format_explanation() + later. 
+ * If the rewrite mode is used embedded %-characters are replaced + to protect later % formatting. + + The result can be formatted by util.format_explanation() for + pretty printing. + """ + hook_result = ihook.pytest_assertrepr_compare( + config=item.config, op=op, left=left, right=right + ) + for new_expl in hook_result: + if new_expl: + new_expl = truncate.truncate_if_required(new_expl, item) + new_expl = [line.replace("\n", "\\n") for line in new_expl] + res = "\n~".join(new_expl) + if item.config.getvalue("assertmode") == "rewrite": + res = res.replace("%", "%%") + return res + return None + + saved_assert_hooks = util._reprcompare, util._assertion_pass + util._reprcompare = callbinrepr + util._config = item.config + + if ihook.pytest_assertion_pass.get_hookimpls(): + + def call_assertion_pass_hook(lineno: int, orig: str, expl: str) -> None: + ihook.pytest_assertion_pass(item=item, lineno=lineno, orig=orig, expl=expl) + + util._assertion_pass = call_assertion_pass_hook + + yield + + util._reprcompare, util._assertion_pass = saved_assert_hooks + util._config = None + + +def pytest_sessionfinish(session: "Session") -> None: + assertstate = session.config.stash.get(assertstate_key, None) + if assertstate: + if assertstate.hook is not None: + assertstate.hook.set_session(None) + + +def pytest_assertrepr_compare( + config: Config, op: str, left: Any, right: Any +) -> Optional[List[str]]: + return util.assertrepr_compare(config=config, op=op, left=left, right=right) diff --git a/env/lib/python3.8/site-packages/_pytest/assertion/rewrite.py b/env/lib/python3.8/site-packages/_pytest/assertion/rewrite.py new file mode 100644 index 00000000..63f9dd8f --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/assertion/rewrite.py @@ -0,0 +1,1107 @@ +"""Rewrite assertion AST to produce nice error messages.""" +import ast +import errno +import functools +import importlib.abc +import importlib.machinery +import importlib.util +import io +import itertools +import marshal +import os +import struct +import sys +import tokenize +import types +from pathlib import Path +from pathlib import PurePath +from typing import Callable +from typing import Dict +from typing import IO +from typing import Iterable +from typing import Iterator +from typing import List +from typing import Optional +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from _pytest._io.saferepr import DEFAULT_REPR_MAX_SIZE +from _pytest._io.saferepr import saferepr +from _pytest._version import version +from _pytest.assertion import util +from _pytest.assertion.util import ( # noqa: F401 + format_explanation as _format_explanation, +) +from _pytest.config import Config +from _pytest.main import Session +from _pytest.pathlib import absolutepath +from _pytest.pathlib import fnmatch_ex +from _pytest.stash import StashKey + +if TYPE_CHECKING: + from _pytest.assertion import AssertionState + + +assertstate_key = StashKey["AssertionState"]() + + +# pytest caches rewritten pycs in pycache dirs +PYTEST_TAG = f"{sys.implementation.cache_tag}-pytest-{version}" +PYC_EXT = ".py" + (__debug__ and "c" or "o") +PYC_TAIL = "." 
+ PYTEST_TAG + PYC_EXT + + +class AssertionRewritingHook(importlib.abc.MetaPathFinder, importlib.abc.Loader): + """PEP302/PEP451 import hook which rewrites asserts.""" + + def __init__(self, config: Config) -> None: + self.config = config + try: + self.fnpats = config.getini("python_files") + except ValueError: + self.fnpats = ["test_*.py", "*_test.py"] + self.session: Optional[Session] = None + self._rewritten_names: Dict[str, Path] = {} + self._must_rewrite: Set[str] = set() + # flag to guard against trying to rewrite a pyc file while we are already writing another pyc file, + # which might result in infinite recursion (#3506) + self._writing_pyc = False + self._basenames_to_check_rewrite = {"conftest"} + self._marked_for_rewrite_cache: Dict[str, bool] = {} + self._session_paths_checked = False + + def set_session(self, session: Optional[Session]) -> None: + self.session = session + self._session_paths_checked = False + + # Indirection so we can mock calls to find_spec originated from the hook during testing + _find_spec = importlib.machinery.PathFinder.find_spec + + def find_spec( + self, + name: str, + path: Optional[Sequence[Union[str, bytes]]] = None, + target: Optional[types.ModuleType] = None, + ) -> Optional[importlib.machinery.ModuleSpec]: + if self._writing_pyc: + return None + state = self.config.stash[assertstate_key] + if self._early_rewrite_bailout(name, state): + return None + state.trace("find_module called for: %s" % name) + + # Type ignored because mypy is confused about the `self` binding here. + spec = self._find_spec(name, path) # type: ignore + if ( + # the import machinery could not find a file to import + spec is None + # this is a namespace package (without `__init__.py`) + # there's nothing to rewrite there + or spec.origin is None + # we can only rewrite source files + or not isinstance(spec.loader, importlib.machinery.SourceFileLoader) + # if the file doesn't exist, we can't rewrite it + or not os.path.exists(spec.origin) + ): + return None + else: + fn = spec.origin + + if not self._should_rewrite(name, fn, state): + return None + + return importlib.util.spec_from_file_location( + name, + fn, + loader=self, + submodule_search_locations=spec.submodule_search_locations, + ) + + def create_module( + self, spec: importlib.machinery.ModuleSpec + ) -> Optional[types.ModuleType]: + return None # default behaviour is fine + + def exec_module(self, module: types.ModuleType) -> None: + assert module.__spec__ is not None + assert module.__spec__.origin is not None + fn = Path(module.__spec__.origin) + state = self.config.stash[assertstate_key] + + self._rewritten_names[module.__name__] = fn + + # The requested module looks like a test file, so rewrite it. This is + # the most magical part of the process: load the source, rewrite the + # asserts, and load the rewritten source. We also cache the rewritten + # module code in a special pyc. We must be aware of the possibility of + # concurrent pytest processes rewriting and loading pycs. To avoid + # tricky race conditions, we maintain the following invariant: The + # cached pyc is always a complete, valid pyc. Operations on it must be + # atomic. POSIX's atomic rename comes in handy. 
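+ # For example (illustrative, assuming CPython 3.8 and the vendored
+ # pytest 7.2.0 above): test_foo.py would typically be cached as
+ # __pycache__/test_foo.cpython-38-pytest-7.2.0.pyc next to the source.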
+ write = not sys.dont_write_bytecode + cache_dir = get_cache_dir(fn) + if write: + ok = try_makedirs(cache_dir) + if not ok: + write = False + state.trace(f"read only directory: {cache_dir}") + + cache_name = fn.name[:-3] + PYC_TAIL + pyc = cache_dir / cache_name + # Notice that even if we're in a read-only directory, I'm going + # to check for a cached pyc. This may not be optimal... + co = _read_pyc(fn, pyc, state.trace) + if co is None: + state.trace(f"rewriting {fn!r}") + source_stat, co = _rewrite_test(fn, self.config) + if write: + self._writing_pyc = True + try: + _write_pyc(state, co, source_stat, pyc) + finally: + self._writing_pyc = False + else: + state.trace(f"found cached rewritten pyc for {fn}") + exec(co, module.__dict__) + + def _early_rewrite_bailout(self, name: str, state: "AssertionState") -> bool: + """A fast way to get out of rewriting modules. + + Profiling has shown that the call to PathFinder.find_spec (inside of + the find_spec from this class) is a major slowdown, so, this method + tries to filter what we're sure won't be rewritten before getting to + it. + """ + if self.session is not None and not self._session_paths_checked: + self._session_paths_checked = True + for initial_path in self.session._initialpaths: + # Make something as c:/projects/my_project/path.py -> + # ['c:', 'projects', 'my_project', 'path.py'] + parts = str(initial_path).split(os.path.sep) + # add 'path' to basenames to be checked. + self._basenames_to_check_rewrite.add(os.path.splitext(parts[-1])[0]) + + # Note: conftest already by default in _basenames_to_check_rewrite. + parts = name.split(".") + if parts[-1] in self._basenames_to_check_rewrite: + return False + + # For matching the name it must be as if it was a filename. + path = PurePath(*parts).with_suffix(".py") + + for pat in self.fnpats: + # if the pattern contains subdirectories ("tests/**.py" for example) we can't bail out based + # on the name alone because we need to match against the full path + if os.path.dirname(pat): + return False + if fnmatch_ex(pat, path): + return False + + if self._is_marked_for_rewrite(name, state): + return False + + state.trace(f"early skip of rewriting module: {name}") + return True + + def _should_rewrite(self, name: str, fn: str, state: "AssertionState") -> bool: + # always rewrite conftest files + if os.path.basename(fn) == "conftest.py": + state.trace(f"rewriting conftest file: {fn!r}") + return True + + if self.session is not None: + if self.session.isinitpath(absolutepath(fn)): + state.trace(f"matched test file (was specified on cmdline): {fn!r}") + return True + + # modules not passed explicitly on the command line are only + # rewritten if they match the naming convention for test files + fn_path = PurePath(fn) + for pat in self.fnpats: + if fnmatch_ex(pat, fn_path): + state.trace(f"matched test file {fn!r}") + return True + + return self._is_marked_for_rewrite(name, state) + + def _is_marked_for_rewrite(self, name: str, state: "AssertionState") -> bool: + try: + return self._marked_for_rewrite_cache[name] + except KeyError: + for marked in self._must_rewrite: + if name == marked or name.startswith(marked + "."): + state.trace(f"matched marked file {name!r} (from {marked!r})") + self._marked_for_rewrite_cache[name] = True + return True + + self._marked_for_rewrite_cache[name] = False + return False + + def mark_rewrite(self, *names: str) -> None: + """Mark import names as needing to be rewritten. + + The named module or package as well as any nested modules will + be rewritten on import. 
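+
+ For example (illustrative): mark_rewrite("mypkg") covers both "mypkg"
+ and nested modules such as "mypkg.utils", since names are matched on
+ the dotted prefix.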
+ """ + already_imported = ( + set(names).intersection(sys.modules).difference(self._rewritten_names) + ) + for name in already_imported: + mod = sys.modules[name] + if not AssertionRewriter.is_rewrite_disabled( + mod.__doc__ or "" + ) and not isinstance(mod.__loader__, type(self)): + self._warn_already_imported(name) + self._must_rewrite.update(names) + self._marked_for_rewrite_cache.clear() + + def _warn_already_imported(self, name: str) -> None: + from _pytest.warning_types import PytestAssertRewriteWarning + + self.config.issue_config_time_warning( + PytestAssertRewriteWarning( + "Module already imported so cannot be rewritten: %s" % name + ), + stacklevel=5, + ) + + def get_data(self, pathname: Union[str, bytes]) -> bytes: + """Optional PEP302 get_data API.""" + with open(pathname, "rb") as f: + return f.read() + + if sys.version_info >= (3, 10): + + def get_resource_reader(self, name: str) -> importlib.abc.TraversableResources: # type: ignore + if sys.version_info < (3, 11): + from importlib.readers import FileReader + else: + from importlib.resources.readers import FileReader + + return FileReader( # type:ignore[no-any-return] + types.SimpleNamespace(path=self._rewritten_names[name]) + ) + + +def _write_pyc_fp( + fp: IO[bytes], source_stat: os.stat_result, co: types.CodeType +) -> None: + # Technically, we don't have to have the same pyc format as + # (C)Python, since these "pycs" should never be seen by builtin + # import. However, there's little reason to deviate. + fp.write(importlib.util.MAGIC_NUMBER) + # https://www.python.org/dev/peps/pep-0552/ + flags = b"\x00\x00\x00\x00" + fp.write(flags) + # as of now, bytecode header expects 32-bit numbers for size and mtime (#4903) + mtime = int(source_stat.st_mtime) & 0xFFFFFFFF + size = source_stat.st_size & 0xFFFFFFFF + # " bool: + proc_pyc = f"{pyc}.{os.getpid()}" + try: + with open(proc_pyc, "wb") as fp: + _write_pyc_fp(fp, source_stat, co) + except OSError as e: + state.trace(f"error writing pyc file at {proc_pyc}: errno={e.errno}") + return False + + try: + os.replace(proc_pyc, pyc) + except OSError as e: + state.trace(f"error writing pyc file at {pyc}: {e}") + # we ignore any failure to write the cache file + # there are many reasons, permission-denied, pycache dir being a + # file etc. + return False + return True + + +def _rewrite_test(fn: Path, config: Config) -> Tuple[os.stat_result, types.CodeType]: + """Read and rewrite *fn* and return the code object.""" + stat = os.stat(fn) + source = fn.read_bytes() + strfn = str(fn) + tree = ast.parse(source, filename=strfn) + rewrite_asserts(tree, source, strfn, config) + co = compile(tree, strfn, "exec", dont_inherit=True) + return stat, co + + +def _read_pyc( + source: Path, pyc: Path, trace: Callable[[str], None] = lambda x: None +) -> Optional[types.CodeType]: + """Possibly read a pytest pyc containing rewritten code. + + Return rewritten code if successful or None if not. + """ + try: + fp = open(pyc, "rb") + except OSError: + return None + with fp: + try: + stat_result = os.stat(source) + mtime = int(stat_result.st_mtime) + size = stat_result.st_size + data = fp.read(16) + except OSError as e: + trace(f"_read_pyc({source}): OSError {e}") + return None + # Check for invalid or out of date pyc file. 
+ if len(data) != (16): + trace("_read_pyc(%s): invalid pyc (too short)" % source) + return None + if data[:4] != importlib.util.MAGIC_NUMBER: + trace("_read_pyc(%s): invalid pyc (bad magic number)" % source) + return None + if data[4:8] != b"\x00\x00\x00\x00": + trace("_read_pyc(%s): invalid pyc (unsupported flags)" % source) + return None + mtime_data = data[8:12] + if int.from_bytes(mtime_data, "little") != mtime & 0xFFFFFFFF: + trace("_read_pyc(%s): out of date" % source) + return None + size_data = data[12:16] + if int.from_bytes(size_data, "little") != size & 0xFFFFFFFF: + trace("_read_pyc(%s): invalid pyc (incorrect size)" % source) + return None + try: + co = marshal.load(fp) + except Exception as e: + trace(f"_read_pyc({source}): marshal.load error {e}") + return None + if not isinstance(co, types.CodeType): + trace("_read_pyc(%s): not a code object" % source) + return None + return co + + +def rewrite_asserts( + mod: ast.Module, + source: bytes, + module_path: Optional[str] = None, + config: Optional[Config] = None, +) -> None: + """Rewrite the assert statements in mod.""" + AssertionRewriter(module_path, config, source).run(mod) + + +def _saferepr(obj: object) -> str: + r"""Get a safe repr of an object for assertion error messages. + + The assertion formatting (util.format_explanation()) requires + newlines to be escaped since they are a special character for it. + Normally assertion.util.format_explanation() does this but for a + custom repr it is possible to contain one of the special escape + sequences, especially '\n{' and '\n}' are likely to be present in + JSON reprs. + """ + maxsize = _get_maxsize_for_saferepr(util._config) + return saferepr(obj, maxsize=maxsize).replace("\n", "\\n") + + +def _get_maxsize_for_saferepr(config: Optional[Config]) -> Optional[int]: + """Get `maxsize` configuration for saferepr based on the given config object.""" + verbosity = config.getoption("verbose") if config is not None else 0 + if verbosity >= 2: + return None + if verbosity >= 1: + return DEFAULT_REPR_MAX_SIZE * 10 + return DEFAULT_REPR_MAX_SIZE + + +def _format_assertmsg(obj: object) -> str: + r"""Format the custom assertion message given. + + For strings this simply replaces newlines with '\n~' so that + util.format_explanation() will preserve them instead of escaping + newlines. For other objects saferepr() is used first. + """ + # reprlib appears to have a bug which means that if a string + # contains a newline it gets escaped, however if an object has a + # .__repr__() which contains newlines it does not get escaped. + # However in either case we want to preserve the newline. 
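+ # For example (illustrative): a message "left\nright" becomes
+ # "left\n~right", and "%" is doubled to "%%" so that the later
+ # %-formatting pass leaves it untouched.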
+ replaces = [("\n", "\n~"), ("%", "%%")] + if not isinstance(obj, str): + obj = saferepr(obj) + replaces.append(("\\n", "\n~")) + + for r1, r2 in replaces: + obj = obj.replace(r1, r2) + + return obj + + +def _should_repr_global_name(obj: object) -> bool: + if callable(obj): + return False + + try: + return not hasattr(obj, "__name__") + except Exception: + return True + + +def _format_boolop(explanations: Iterable[str], is_or: bool) -> str: + explanation = "(" + (is_or and " or " or " and ").join(explanations) + ")" + return explanation.replace("%", "%%") + + +def _call_reprcompare( + ops: Sequence[str], + results: Sequence[bool], + expls: Sequence[str], + each_obj: Sequence[object], +) -> str: + for i, res, expl in zip(range(len(ops)), results, expls): + try: + done = not res + except Exception: + done = True + if done: + break + if util._reprcompare is not None: + custom = util._reprcompare(ops[i], each_obj[i], each_obj[i + 1]) + if custom is not None: + return custom + return expl + + +def _call_assertion_pass(lineno: int, orig: str, expl: str) -> None: + if util._assertion_pass is not None: + util._assertion_pass(lineno, orig, expl) + + +def _check_if_assertion_pass_impl() -> bool: + """Check if any plugins implement the pytest_assertion_pass hook + in order not to generate explanation unnecessarily (might be expensive).""" + return True if util._assertion_pass else False + + +UNARY_MAP = {ast.Not: "not %s", ast.Invert: "~%s", ast.USub: "-%s", ast.UAdd: "+%s"} + +BINOP_MAP = { + ast.BitOr: "|", + ast.BitXor: "^", + ast.BitAnd: "&", + ast.LShift: "<<", + ast.RShift: ">>", + ast.Add: "+", + ast.Sub: "-", + ast.Mult: "*", + ast.Div: "/", + ast.FloorDiv: "//", + ast.Mod: "%%", # escaped for string formatting + ast.Eq: "==", + ast.NotEq: "!=", + ast.Lt: "<", + ast.LtE: "<=", + ast.Gt: ">", + ast.GtE: ">=", + ast.Pow: "**", + ast.Is: "is", + ast.IsNot: "is not", + ast.In: "in", + ast.NotIn: "not in", + ast.MatMult: "@", +} + + +def traverse_node(node: ast.AST) -> Iterator[ast.AST]: + """Recursively yield node and all its children in depth-first order.""" + yield node + for child in ast.iter_child_nodes(node): + yield from traverse_node(child) + + +@functools.lru_cache(maxsize=1) +def _get_assertion_exprs(src: bytes) -> Dict[int, str]: + """Return a mapping from {lineno: "assertion test expression"}.""" + ret: Dict[int, str] = {} + + depth = 0 + lines: List[str] = [] + assert_lineno: Optional[int] = None + seen_lines: Set[int] = set() + + def _write_and_reset() -> None: + nonlocal depth, lines, assert_lineno, seen_lines + assert assert_lineno is not None + ret[assert_lineno] = "".join(lines).rstrip().rstrip("\\") + depth = 0 + lines = [] + assert_lineno = None + seen_lines = set() + + tokens = tokenize.tokenize(io.BytesIO(src).readline) + for tp, source, (lineno, offset), _, line in tokens: + if tp == tokenize.NAME and source == "assert": + assert_lineno = lineno + elif assert_lineno is not None: + # keep track of depth for the assert-message `,` lookup + if tp == tokenize.OP and source in "([{": + depth += 1 + elif tp == tokenize.OP and source in ")]}": + depth -= 1 + + if not lines: + lines.append(line[offset:]) + seen_lines.add(lineno) + # a non-nested comma separates the expression from the message + elif depth == 0 and tp == tokenize.OP and source == ",": + # one line assert with message + if lineno in seen_lines and len(lines) == 1: + offset_in_trimmed = offset + len(lines[-1]) - len(line) + lines[-1] = lines[-1][:offset_in_trimmed] + # multi-line assert with message + elif lineno in 
seen_lines: + lines[-1] = lines[-1][:offset] + # multi line assert with escapd newline before message + else: + lines.append(line[:offset]) + _write_and_reset() + elif tp in {tokenize.NEWLINE, tokenize.ENDMARKER}: + _write_and_reset() + elif lines and lineno not in seen_lines: + lines.append(line) + seen_lines.add(lineno) + + return ret + + +class AssertionRewriter(ast.NodeVisitor): + """Assertion rewriting implementation. + + The main entrypoint is to call .run() with an ast.Module instance, + this will then find all the assert statements and rewrite them to + provide intermediate values and a detailed assertion error. See + http://pybites.blogspot.be/2011/07/behind-scenes-of-pytests-new-assertion.html + for an overview of how this works. + + The entry point here is .run() which will iterate over all the + statements in an ast.Module and for each ast.Assert statement it + finds call .visit() with it. Then .visit_Assert() takes over and + is responsible for creating new ast statements to replace the + original assert statement: it rewrites the test of an assertion + to provide intermediate values and replace it with an if statement + which raises an assertion error with a detailed explanation in + case the expression is false and calls pytest_assertion_pass hook + if expression is true. + + For this .visit_Assert() uses the visitor pattern to visit all the + AST nodes of the ast.Assert.test field, each visit call returning + an AST node and the corresponding explanation string. During this + state is kept in several instance attributes: + + :statements: All the AST statements which will replace the assert + statement. + + :variables: This is populated by .variable() with each variable + used by the statements so that they can all be set to None at + the end of the statements. + + :variable_counter: Counter to create new unique variables needed + by statements. Variables are created using .variable() and + have the form of "@py_assert0". + + :expl_stmts: The AST statements which will be executed to get + data from the assertion. This is the code which will construct + the detailed assertion message that is used in the AssertionError + or for the pytest_assertion_pass hook. + + :explanation_specifiers: A dict filled by .explanation_param() + with %-formatting placeholders and their corresponding + expressions to use in the building of an assertion message. + This is used by .pop_format_context() to build a message. + + :stack: A stack of the explanation_specifiers dicts maintained by + .push_format_context() and .pop_format_context() which allows + to build another %-formatted string while already building one. + + This state is reset on every new assert statement visited and used + by the other visitors. + """ + + def __init__( + self, module_path: Optional[str], config: Optional[Config], source: bytes + ) -> None: + super().__init__() + self.module_path = module_path + self.config = config + if config is not None: + self.enable_assertion_pass_hook = config.getini( + "enable_assertion_pass_hook" + ) + else: + self.enable_assertion_pass_hook = False + self.source = source + + def run(self, mod: ast.Module) -> None: + """Find all assert statements in *mod* and rewrite them.""" + if not mod.body: + # Nothing to do. + return + + # We'll insert some special imports at the top of the module, but after any + # docstrings and __future__ imports, so first figure out where that is. 
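+ # The effect is roughly (illustrative; "@" names are not valid Python
+ # source, which is exactly what keeps them from clashing with user code):
+ # import builtins as @py_builtins
+ # import _pytest.assertion.rewrite as @pytest_ar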
+ doc = getattr(mod, "docstring", None) + expect_docstring = doc is None + if doc is not None and self.is_rewrite_disabled(doc): + return + pos = 0 + lineno = 1 + for item in mod.body: + if ( + expect_docstring + and isinstance(item, ast.Expr) + and isinstance(item.value, ast.Str) + ): + doc = item.value.s + if self.is_rewrite_disabled(doc): + return + expect_docstring = False + elif ( + isinstance(item, ast.ImportFrom) + and item.level == 0 + and item.module == "__future__" + ): + pass + else: + break + pos += 1 + # Special case: for a decorated function, set the lineno to that of the + # first decorator, not the `def`. Issue #4984. + if isinstance(item, ast.FunctionDef) and item.decorator_list: + lineno = item.decorator_list[0].lineno + else: + lineno = item.lineno + # Now actually insert the special imports. + if sys.version_info >= (3, 10): + aliases = [ + ast.alias("builtins", "@py_builtins", lineno=lineno, col_offset=0), + ast.alias( + "_pytest.assertion.rewrite", + "@pytest_ar", + lineno=lineno, + col_offset=0, + ), + ] + else: + aliases = [ + ast.alias("builtins", "@py_builtins"), + ast.alias("_pytest.assertion.rewrite", "@pytest_ar"), + ] + imports = [ + ast.Import([alias], lineno=lineno, col_offset=0) for alias in aliases + ] + mod.body[pos:pos] = imports + + # Collect asserts. + nodes: List[ast.AST] = [mod] + while nodes: + node = nodes.pop() + for name, field in ast.iter_fields(node): + if isinstance(field, list): + new: List[ast.AST] = [] + for i, child in enumerate(field): + if isinstance(child, ast.Assert): + # Transform assert. + new.extend(self.visit(child)) + else: + new.append(child) + if isinstance(child, ast.AST): + nodes.append(child) + setattr(node, name, new) + elif ( + isinstance(field, ast.AST) + # Don't recurse into expressions as they can't contain + # asserts. + and not isinstance(field, ast.expr) + ): + nodes.append(field) + + @staticmethod + def is_rewrite_disabled(docstring: str) -> bool: + return "PYTEST_DONT_REWRITE" in docstring + + def variable(self) -> str: + """Get a new variable.""" + # Use a character invalid in python identifiers to avoid clashing. + name = "@py_assert" + str(next(self.variable_counter)) + self.variables.append(name) + return name + + def assign(self, expr: ast.expr) -> ast.Name: + """Give *expr* a name.""" + name = self.variable() + self.statements.append(ast.Assign([ast.Name(name, ast.Store())], expr)) + return ast.Name(name, ast.Load()) + + def display(self, expr: ast.expr) -> ast.expr: + """Call saferepr on the expression.""" + return self.helper("_saferepr", expr) + + def helper(self, name: str, *args: ast.expr) -> ast.expr: + """Call a helper in this module.""" + py_name = ast.Name("@pytest_ar", ast.Load()) + attr = ast.Attribute(py_name, name, ast.Load()) + return ast.Call(attr, list(args), []) + + def builtin(self, name: str) -> ast.Attribute: + """Return the builtin called *name*.""" + builtin_name = ast.Name("@py_builtins", ast.Load()) + return ast.Attribute(builtin_name, name, ast.Load()) + + def explanation_param(self, expr: ast.expr) -> str: + """Return a new named %-formatting placeholder for expr. + + This creates a %-formatting placeholder for expr in the + current formatting context, e.g. ``%(py0)s``. The placeholder + and expr are placed in the current format context so that it + can be used on the next call to .pop_format_context(). 
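+
+ For example, a placeholder looks like "%(py2)s" (illustrative; the
+ number comes from the same counter that names the @py_assert
+ variables).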
+ """ + specifier = "py" + str(next(self.variable_counter)) + self.explanation_specifiers[specifier] = expr + return "%(" + specifier + ")s" + + def push_format_context(self) -> None: + """Create a new formatting context. + + The format context is used for when an explanation wants to + have a variable value formatted in the assertion message. In + this case the value required can be added using + .explanation_param(). Finally .pop_format_context() is used + to format a string of %-formatted values as added by + .explanation_param(). + """ + self.explanation_specifiers: Dict[str, ast.expr] = {} + self.stack.append(self.explanation_specifiers) + + def pop_format_context(self, expl_expr: ast.expr) -> ast.Name: + """Format the %-formatted string with current format context. + + The expl_expr should be an str ast.expr instance constructed from + the %-placeholders created by .explanation_param(). This will + add the required code to format said string to .expl_stmts and + return the ast.Name instance of the formatted string. + """ + current = self.stack.pop() + if self.stack: + self.explanation_specifiers = self.stack[-1] + keys = [ast.Str(key) for key in current.keys()] + format_dict = ast.Dict(keys, list(current.values())) + form = ast.BinOp(expl_expr, ast.Mod(), format_dict) + name = "@py_format" + str(next(self.variable_counter)) + if self.enable_assertion_pass_hook: + self.format_variables.append(name) + self.expl_stmts.append(ast.Assign([ast.Name(name, ast.Store())], form)) + return ast.Name(name, ast.Load()) + + def generic_visit(self, node: ast.AST) -> Tuple[ast.Name, str]: + """Handle expressions we don't have custom code for.""" + assert isinstance(node, ast.expr) + res = self.assign(node) + return res, self.explanation_param(self.display(res)) + + def visit_Assert(self, assert_: ast.Assert) -> List[ast.stmt]: + """Return the AST statements to replace the ast.Assert instance. + + This rewrites the test of an assertion to provide + intermediate values and replace it with an if statement which + raises an assertion error with a detailed explanation in case + the expression is false. + """ + if isinstance(assert_.test, ast.Tuple) and len(assert_.test.elts) >= 1: + from _pytest.warning_types import PytestAssertRewriteWarning + import warnings + + # TODO: This assert should not be needed. + assert self.module_path is not None + warnings.warn_explicit( + PytestAssertRewriteWarning( + "assertion is always true, perhaps remove parentheses?" + ), + category=None, + filename=self.module_path, + lineno=assert_.lineno, + ) + + self.statements: List[ast.stmt] = [] + self.variables: List[str] = [] + self.variable_counter = itertools.count() + + if self.enable_assertion_pass_hook: + self.format_variables: List[str] = [] + + self.stack: List[Dict[str, ast.expr]] = [] + self.expl_stmts: List[ast.stmt] = [] + self.push_format_context() + # Rewrite assert into a bunch of statements. 
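+        # Editor's illustration (a sketch, not the exact generated code): an
+        # input such as ``assert x == y, "msg"`` conceptually becomes
+        #
+        #     @py_assert1 = x == y
+        #     if not @py_assert1:
+        #         raise AssertionError(<formatted "msg" + explanation>)
+        #
+        # with the code below also recording intermediate values for the
+        # explanation and optionally calling the pytest_assertion_pass hook.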
+ top_condition, explanation = self.visit(assert_.test) + + negation = ast.UnaryOp(ast.Not(), top_condition) + + if self.enable_assertion_pass_hook: # Experimental pytest_assertion_pass hook + msg = self.pop_format_context(ast.Str(explanation)) + + # Failed + if assert_.msg: + assertmsg = self.helper("_format_assertmsg", assert_.msg) + gluestr = "\n>assert " + else: + assertmsg = ast.Str("") + gluestr = "assert " + err_explanation = ast.BinOp(ast.Str(gluestr), ast.Add(), msg) + err_msg = ast.BinOp(assertmsg, ast.Add(), err_explanation) + err_name = ast.Name("AssertionError", ast.Load()) + fmt = self.helper("_format_explanation", err_msg) + exc = ast.Call(err_name, [fmt], []) + raise_ = ast.Raise(exc, None) + statements_fail = [] + statements_fail.extend(self.expl_stmts) + statements_fail.append(raise_) + + # Passed + fmt_pass = self.helper("_format_explanation", msg) + orig = _get_assertion_exprs(self.source)[assert_.lineno] + hook_call_pass = ast.Expr( + self.helper( + "_call_assertion_pass", + ast.Num(assert_.lineno), + ast.Str(orig), + fmt_pass, + ) + ) + # If any hooks implement assert_pass hook + hook_impl_test = ast.If( + self.helper("_check_if_assertion_pass_impl"), + self.expl_stmts + [hook_call_pass], + [], + ) + statements_pass = [hook_impl_test] + + # Test for assertion condition + main_test = ast.If(negation, statements_fail, statements_pass) + self.statements.append(main_test) + if self.format_variables: + variables = [ + ast.Name(name, ast.Store()) for name in self.format_variables + ] + clear_format = ast.Assign(variables, ast.NameConstant(None)) + self.statements.append(clear_format) + + else: # Original assertion rewriting + # Create failure message. + body = self.expl_stmts + self.statements.append(ast.If(negation, body, [])) + if assert_.msg: + assertmsg = self.helper("_format_assertmsg", assert_.msg) + explanation = "\n>assert " + explanation + else: + assertmsg = ast.Str("") + explanation = "assert " + explanation + template = ast.BinOp(assertmsg, ast.Add(), ast.Str(explanation)) + msg = self.pop_format_context(template) + fmt = self.helper("_format_explanation", msg) + err_name = ast.Name("AssertionError", ast.Load()) + exc = ast.Call(err_name, [fmt], []) + raise_ = ast.Raise(exc, None) + + body.append(raise_) + + # Clear temporary variables by setting them to None. + if self.variables: + variables = [ast.Name(name, ast.Store()) for name in self.variables] + clear = ast.Assign(variables, ast.NameConstant(None)) + self.statements.append(clear) + # Fix locations (line numbers/column offsets). + for stmt in self.statements: + for node in traverse_node(stmt): + ast.copy_location(node, assert_) + return self.statements + + def visit_Name(self, name: ast.Name) -> Tuple[ast.Name, str]: + # Display the repr of the name if it's a local variable or + # _should_repr_global_name() thinks it's acceptable. 
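+        # Editor's illustration (sketch): for ``assert x`` with a local ``x``,
+        # the explanation embeds repr(x); for a global such as a module name,
+        # it falls back to the bare name unless the helper opts in.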
+ locs = ast.Call(self.builtin("locals"), [], []) + inlocs = ast.Compare(ast.Str(name.id), [ast.In()], [locs]) + dorepr = self.helper("_should_repr_global_name", name) + test = ast.BoolOp(ast.Or(), [inlocs, dorepr]) + expr = ast.IfExp(test, self.display(name), ast.Str(name.id)) + return name, self.explanation_param(expr) + + def visit_BoolOp(self, boolop: ast.BoolOp) -> Tuple[ast.Name, str]: + res_var = self.variable() + expl_list = self.assign(ast.List([], ast.Load())) + app = ast.Attribute(expl_list, "append", ast.Load()) + is_or = int(isinstance(boolop.op, ast.Or)) + body = save = self.statements + fail_save = self.expl_stmts + levels = len(boolop.values) - 1 + self.push_format_context() + # Process each operand, short-circuiting if needed. + for i, v in enumerate(boolop.values): + if i: + fail_inner: List[ast.stmt] = [] + # cond is set in a prior loop iteration below + self.expl_stmts.append(ast.If(cond, fail_inner, [])) # noqa + self.expl_stmts = fail_inner + self.push_format_context() + res, expl = self.visit(v) + body.append(ast.Assign([ast.Name(res_var, ast.Store())], res)) + expl_format = self.pop_format_context(ast.Str(expl)) + call = ast.Call(app, [expl_format], []) + self.expl_stmts.append(ast.Expr(call)) + if i < levels: + cond: ast.expr = res + if is_or: + cond = ast.UnaryOp(ast.Not(), cond) + inner: List[ast.stmt] = [] + self.statements.append(ast.If(cond, inner, [])) + self.statements = body = inner + self.statements = save + self.expl_stmts = fail_save + expl_template = self.helper("_format_boolop", expl_list, ast.Num(is_or)) + expl = self.pop_format_context(expl_template) + return ast.Name(res_var, ast.Load()), self.explanation_param(expl) + + def visit_UnaryOp(self, unary: ast.UnaryOp) -> Tuple[ast.Name, str]: + pattern = UNARY_MAP[unary.op.__class__] + operand_res, operand_expl = self.visit(unary.operand) + res = self.assign(ast.UnaryOp(unary.op, operand_res)) + return res, pattern % (operand_expl,) + + def visit_BinOp(self, binop: ast.BinOp) -> Tuple[ast.Name, str]: + symbol = BINOP_MAP[binop.op.__class__] + left_expr, left_expl = self.visit(binop.left) + right_expr, right_expl = self.visit(binop.right) + explanation = f"({left_expl} {symbol} {right_expl})" + res = self.assign(ast.BinOp(left_expr, binop.op, right_expr)) + return res, explanation + + def visit_Call(self, call: ast.Call) -> Tuple[ast.Name, str]: + new_func, func_expl = self.visit(call.func) + arg_expls = [] + new_args = [] + new_kwargs = [] + for arg in call.args: + res, expl = self.visit(arg) + arg_expls.append(expl) + new_args.append(res) + for keyword in call.keywords: + res, expl = self.visit(keyword.value) + new_kwargs.append(ast.keyword(keyword.arg, res)) + if keyword.arg: + arg_expls.append(keyword.arg + "=" + expl) + else: # **args have `arg` keywords with an .arg of None + arg_expls.append("**" + expl) + + expl = "{}({})".format(func_expl, ", ".join(arg_expls)) + new_call = ast.Call(new_func, new_args, new_kwargs) + res = self.assign(new_call) + res_expl = self.explanation_param(self.display(res)) + outer_expl = f"{res_expl}\n{{{res_expl} = {expl}\n}}" + return res, outer_expl + + def visit_Starred(self, starred: ast.Starred) -> Tuple[ast.Starred, str]: + # A Starred node can appear in a function call. 
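+        # e.g. ``assert f(*args)`` -- the starred value is visited like any
+        # other expression and its explanation is prefixed with "*" below.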
+        res, expl = self.visit(starred.value)
+        new_starred = ast.Starred(res, starred.ctx)
+        return new_starred, "*" + expl
+
+    def visit_Attribute(self, attr: ast.Attribute) -> Tuple[ast.Name, str]:
+        if not isinstance(attr.ctx, ast.Load):
+            return self.generic_visit(attr)
+        value, value_expl = self.visit(attr.value)
+        res = self.assign(ast.Attribute(value, attr.attr, ast.Load()))
+        res_expl = self.explanation_param(self.display(res))
+        pat = "%s\n{%s = %s.%s\n}"
+        expl = pat % (res_expl, res_expl, value_expl, attr.attr)
+        return res, expl
+
+    def visit_Compare(self, comp: ast.Compare) -> Tuple[ast.expr, str]:
+        self.push_format_context()
+        left_res, left_expl = self.visit(comp.left)
+        if isinstance(comp.left, (ast.Compare, ast.BoolOp)):
+            left_expl = f"({left_expl})"
+        res_variables = [self.variable() for i in range(len(comp.ops))]
+        load_names = [ast.Name(v, ast.Load()) for v in res_variables]
+        store_names = [ast.Name(v, ast.Store()) for v in res_variables]
+        it = zip(range(len(comp.ops)), comp.ops, comp.comparators)
+        expls = []
+        syms = []
+        results = [left_res]
+        for i, op, next_operand in it:
+            next_res, next_expl = self.visit(next_operand)
+            if isinstance(next_operand, (ast.Compare, ast.BoolOp)):
+                next_expl = f"({next_expl})"
+            results.append(next_res)
+            sym = BINOP_MAP[op.__class__]
+            syms.append(ast.Str(sym))
+            expl = f"{left_expl} {sym} {next_expl}"
+            expls.append(ast.Str(expl))
+            res_expr = ast.Compare(left_res, [op], [next_res])
+            self.statements.append(ast.Assign([store_names[i]], res_expr))
+            left_res, left_expl = next_res, next_expl
+        # Use pytest.assertion.util._reprcompare if that's available.
+        expl_call = self.helper(
+            "_call_reprcompare",
+            ast.Tuple(syms, ast.Load()),
+            ast.Tuple(load_names, ast.Load()),
+            ast.Tuple(expls, ast.Load()),
+            ast.Tuple(results, ast.Load()),
+        )
+        if len(comp.ops) > 1:
+            res: ast.expr = ast.BoolOp(ast.And(), load_names)
+        else:
+            res = load_names[0]
+        return res, self.explanation_param(self.pop_format_context(expl_call))
+
+
+def try_makedirs(cache_dir: Path) -> bool:
+    """Attempt to create the given directory and its sub-directories if they do not exist.
+
+    Returns True if successful or if it already exists.
+    """
+    try:
+        os.makedirs(cache_dir, exist_ok=True)
+    except (FileNotFoundError, NotADirectoryError, FileExistsError):
+        # One of the path components was not a directory:
+        # - we're in a zip file
+        # - it is a file
+        return False
+    except PermissionError:
+        return False
+    except OSError as e:
+        # as of now, EROFS doesn't have an equivalent OSError-subclass
+        if e.errno == errno.EROFS:
+            return False
+        raise
+    return True
+
+
+def get_cache_dir(file_path: Path) -> Path:
+    """Return the cache directory to write .pyc files for the given .py file path."""
+    if sys.version_info >= (3, 8) and sys.pycache_prefix:
+        # given:
+        #   prefix = '/tmp/pycs'
+        #   path = '/home/user/proj/test_app.py'
+        # we want:
+        #   '/tmp/pycs/home/user/proj'
+        return Path(sys.pycache_prefix) / Path(*file_path.parts[1:-1])
+    else:
+        # classic pycache directory
+        return file_path.parent / "__pycache__"
diff --git a/env/lib/python3.8/site-packages/_pytest/assertion/truncate.py b/env/lib/python3.8/site-packages/_pytest/assertion/truncate.py
new file mode 100644
index 00000000..ce148dca
--- /dev/null
+++ b/env/lib/python3.8/site-packages/_pytest/assertion/truncate.py
@@ -0,0 +1,94 @@
+"""Utilities for truncating assertion output.
+
+Current default behaviour is to truncate assertion explanations at
+~8 terminal lines, unless running in "-vv" mode or running on CI.
+""" +from typing import List +from typing import Optional + +from _pytest.assertion import util +from _pytest.nodes import Item + + +DEFAULT_MAX_LINES = 8 +DEFAULT_MAX_CHARS = 8 * 80 +USAGE_MSG = "use '-vv' to show" + + +def truncate_if_required( + explanation: List[str], item: Item, max_length: Optional[int] = None +) -> List[str]: + """Truncate this assertion explanation if the given test item is eligible.""" + if _should_truncate_item(item): + return _truncate_explanation(explanation) + return explanation + + +def _should_truncate_item(item: Item) -> bool: + """Whether or not this test item is eligible for truncation.""" + verbose = item.config.option.verbose + return verbose < 2 and not util.running_on_ci() + + +def _truncate_explanation( + input_lines: List[str], + max_lines: Optional[int] = None, + max_chars: Optional[int] = None, +) -> List[str]: + """Truncate given list of strings that makes up the assertion explanation. + + Truncates to either 8 lines, or 640 characters - whichever the input reaches + first. The remaining lines will be replaced by a usage message. + """ + + if max_lines is None: + max_lines = DEFAULT_MAX_LINES + if max_chars is None: + max_chars = DEFAULT_MAX_CHARS + + # Check if truncation required + input_char_count = len("".join(input_lines)) + if len(input_lines) <= max_lines and input_char_count <= max_chars: + return input_lines + + # Truncate first to max_lines, and then truncate to max_chars if max_chars + # is exceeded. + truncated_explanation = input_lines[:max_lines] + truncated_explanation = _truncate_by_char_count(truncated_explanation, max_chars) + + # Add ellipsis to final line + truncated_explanation[-1] = truncated_explanation[-1] + "..." + + # Append useful message to explanation + truncated_line_count = len(input_lines) - len(truncated_explanation) + truncated_line_count += 1 # Account for the part-truncated final line + msg = "...Full output truncated" + if truncated_line_count == 1: + msg += f" ({truncated_line_count} line hidden)" + else: + msg += f" ({truncated_line_count} lines hidden)" + msg += f", {USAGE_MSG}" + truncated_explanation.extend(["", str(msg)]) + return truncated_explanation + + +def _truncate_by_char_count(input_lines: List[str], max_chars: int) -> List[str]: + # Check if truncation required + if len("".join(input_lines)) <= max_chars: + return input_lines + + # Find point at which input length exceeds total allowed length + iterated_char_count = 0 + for iterated_index, input_line in enumerate(input_lines): + if iterated_char_count + len(input_line) > max_chars: + break + iterated_char_count += len(input_line) + + # Create truncated explanation with modified final line + truncated_result = input_lines[:iterated_index] + final_line = input_lines[iterated_index] + if final_line: + final_line_truncate_point = max_chars - iterated_char_count + final_line = final_line[:final_line_truncate_point] + truncated_result.append(final_line) + return truncated_result diff --git a/env/lib/python3.8/site-packages/_pytest/assertion/util.py b/env/lib/python3.8/site-packages/_pytest/assertion/util.py new file mode 100644 index 00000000..fc5dfdbd --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/assertion/util.py @@ -0,0 +1,522 @@ +"""Utilities for assertion debugging.""" +import collections.abc +import os +import pprint +from typing import AbstractSet +from typing import Any +from typing import Callable +from typing import Iterable +from typing import List +from typing import Mapping +from typing import Optional +from typing import 
Sequence +from unicodedata import normalize + +import _pytest._code +from _pytest import outcomes +from _pytest._io.saferepr import _pformat_dispatch +from _pytest._io.saferepr import saferepr +from _pytest._io.saferepr import saferepr_unlimited +from _pytest.config import Config + +# The _reprcompare attribute on the util module is used by the new assertion +# interpretation code and assertion rewriter to detect this plugin was +# loaded and in turn call the hooks defined here as part of the +# DebugInterpreter. +_reprcompare: Optional[Callable[[str, object, object], Optional[str]]] = None + +# Works similarly as _reprcompare attribute. Is populated with the hook call +# when pytest_runtest_setup is called. +_assertion_pass: Optional[Callable[[int, str, str], None]] = None + +# Config object which is assigned during pytest_runtest_protocol. +_config: Optional[Config] = None + + +def format_explanation(explanation: str) -> str: + r"""Format an explanation. + + Normally all embedded newlines are escaped, however there are + three exceptions: \n{, \n} and \n~. The first two are intended + cover nested explanations, see function and attribute explanations + for examples (.visit_Call(), visit_Attribute()). The last one is + for when one explanation needs to span multiple lines, e.g. when + displaying diffs. + """ + lines = _split_explanation(explanation) + result = _format_lines(lines) + return "\n".join(result) + + +def _split_explanation(explanation: str) -> List[str]: + r"""Return a list of individual lines in the explanation. + + This will return a list of lines split on '\n{', '\n}' and '\n~'. + Any other newlines will be escaped and appear in the line as the + literal '\n' characters. + """ + raw_lines = (explanation or "").split("\n") + lines = [raw_lines[0]] + for values in raw_lines[1:]: + if values and values[0] in ["{", "}", "~", ">"]: + lines.append(values) + else: + lines[-1] += "\\n" + values + return lines + + +def _format_lines(lines: Sequence[str]) -> List[str]: + """Format the individual lines. + + This will replace the '{', '}' and '~' characters of our mini formatting + language with the proper 'where ...', 'and ...' and ' + ...' text, taking + care of indentation along the way. + + Return a list of formatted lines. 
+    """
+    result = list(lines[:1])
+    stack = [0]
+    stackcnt = [0]
+    for line in lines[1:]:
+        if line.startswith("{"):
+            if stackcnt[-1]:
+                s = "and   "
+            else:
+                s = "where "
+            stack.append(len(result))
+            stackcnt[-1] += 1
+            stackcnt.append(0)
+            result.append(" +" + "  " * (len(stack) - 1) + s + line[1:])
+        elif line.startswith("}"):
+            stack.pop()
+            stackcnt.pop()
+            result[stack[-1]] += line[1:]
+        else:
+            assert line[0] in ["~", ">"]
+            stack[-1] += 1
+            indent = len(stack) if line.startswith("~") else len(stack) - 1
+            result.append("  " * indent + line[1:])
+    assert len(stack) == 1
+    return result
+
+
+def issequence(x: Any) -> bool:
+    return isinstance(x, collections.abc.Sequence) and not isinstance(x, str)
+
+
+def istext(x: Any) -> bool:
+    return isinstance(x, str)
+
+
+def isdict(x: Any) -> bool:
+    return isinstance(x, dict)
+
+
+def isset(x: Any) -> bool:
+    return isinstance(x, (set, frozenset))
+
+
+def isnamedtuple(obj: Any) -> bool:
+    return isinstance(obj, tuple) and getattr(obj, "_fields", None) is not None
+
+
+def isdatacls(obj: Any) -> bool:
+    return getattr(obj, "__dataclass_fields__", None) is not None
+
+
+def isattrs(obj: Any) -> bool:
+    return getattr(obj, "__attrs_attrs__", None) is not None
+
+
+def isiterable(obj: Any) -> bool:
+    try:
+        iter(obj)
+        return not istext(obj)
+    except TypeError:
+        return False
+
+
+def has_default_eq(
+    obj: object,
+) -> bool:
+    """Check if an instance of an object contains the default eq
+
+    First, we check if the object's __eq__ attribute has __code__;
+    if so, we check the equality of the method's code filename (__code__.co_filename)
+    against the default one generated by the dataclass and attr modules.
+    For dataclasses the default co_filename is "<string>"; for attrs classes,
+    the __eq__ should contain "attrs eq generated".
+    """
+    # inspired from https://github.com/willmcgugan/rich/blob/07d51ffc1aee6f16bd2e5a25b4e82850fb9ed778/rich/pretty.py#L68
+    if hasattr(obj.__eq__, "__code__") and hasattr(obj.__eq__.__code__, "co_filename"):
+        code_filename = obj.__eq__.__code__.co_filename
+
+        if isattrs(obj):
+            return "attrs generated eq" in code_filename
+
+        return code_filename == "<string>"  # data class
+    return True
+
+
+def assertrepr_compare(
+    config, op: str, left: Any, right: Any, use_ascii: bool = False
+) -> Optional[List[str]]:
+    """Return specialised explanations for some operators/operands."""
+    verbose = config.getoption("verbose")
+
+    # Strings which normalize equal are often hard to distinguish when printed; use ascii() to make this easier.
+    # See issue #3246.
+    use_ascii = (
+        isinstance(left, str)
+        and isinstance(right, str)
+        and normalize("NFD", left) == normalize("NFD", right)
+    )
+
+    if verbose > 1:
+        left_repr = saferepr_unlimited(left, use_ascii=use_ascii)
+        right_repr = saferepr_unlimited(right, use_ascii=use_ascii)
+    else:
+        # XXX: "15 chars indentation" is wrong
+        #      ("E       AssertionError: assert "); should use term width.
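+        # e.g. for op "==" this leaves (80 - 15 - 2 - 2) // 2 == 30 chars per side.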
+ maxsize = ( + 80 - 15 - len(op) - 2 + ) // 2 # 15 chars indentation, 1 space around op + + left_repr = saferepr(left, maxsize=maxsize, use_ascii=use_ascii) + right_repr = saferepr(right, maxsize=maxsize, use_ascii=use_ascii) + + summary = f"{left_repr} {op} {right_repr}" + + explanation = None + try: + if op == "==": + explanation = _compare_eq_any(left, right, verbose) + elif op == "not in": + if istext(left) and istext(right): + explanation = _notin_text(left, right, verbose) + except outcomes.Exit: + raise + except Exception: + explanation = [ + "(pytest_assertion plugin: representation of details failed: {}.".format( + _pytest._code.ExceptionInfo.from_current()._getreprcrash() + ), + " Probably an object has a faulty __repr__.)", + ] + + if not explanation: + return None + + return [summary] + explanation + + +def _compare_eq_any(left: Any, right: Any, verbose: int = 0) -> List[str]: + explanation = [] + if istext(left) and istext(right): + explanation = _diff_text(left, right, verbose) + else: + from _pytest.python_api import ApproxBase + + if isinstance(left, ApproxBase) or isinstance(right, ApproxBase): + # Although the common order should be obtained == expected, this ensures both ways + approx_side = left if isinstance(left, ApproxBase) else right + other_side = right if isinstance(left, ApproxBase) else left + + explanation = approx_side._repr_compare(other_side) + elif type(left) == type(right) and ( + isdatacls(left) or isattrs(left) or isnamedtuple(left) + ): + # Note: unlike dataclasses/attrs, namedtuples compare only the + # field values, not the type or field names. But this branch + # intentionally only handles the same-type case, which was often + # used in older code bases before dataclasses/attrs were available. + explanation = _compare_eq_cls(left, right, verbose) + elif issequence(left) and issequence(right): + explanation = _compare_eq_sequence(left, right, verbose) + elif isset(left) and isset(right): + explanation = _compare_eq_set(left, right, verbose) + elif isdict(left) and isdict(right): + explanation = _compare_eq_dict(left, right, verbose) + + if isiterable(left) and isiterable(right): + expl = _compare_eq_iterable(left, right, verbose) + explanation.extend(expl) + + return explanation + + +def _diff_text(left: str, right: str, verbose: int = 0) -> List[str]: + """Return the explanation for the diff between text. + + Unless --verbose is used this will skip leading and trailing + characters which are identical to keep the diff minimal. 
+ """ + from difflib import ndiff + + explanation: List[str] = [] + + if verbose < 1: + i = 0 # just in case left or right has zero length + for i in range(min(len(left), len(right))): + if left[i] != right[i]: + break + if i > 42: + i -= 10 # Provide some context + explanation = [ + "Skipping %s identical leading characters in diff, use -v to show" % i + ] + left = left[i:] + right = right[i:] + if len(left) == len(right): + for i in range(len(left)): + if left[-i] != right[-i]: + break + if i > 42: + i -= 10 # Provide some context + explanation += [ + "Skipping {} identical trailing " + "characters in diff, use -v to show".format(i) + ] + left = left[:-i] + right = right[:-i] + keepends = True + if left.isspace() or right.isspace(): + left = repr(str(left)) + right = repr(str(right)) + explanation += ["Strings contain only whitespace, escaping them using repr()"] + # "right" is the expected base against which we compare "left", + # see https://github.com/pytest-dev/pytest/issues/3333 + explanation += [ + line.strip("\n") + for line in ndiff(right.splitlines(keepends), left.splitlines(keepends)) + ] + return explanation + + +def _surrounding_parens_on_own_lines(lines: List[str]) -> None: + """Move opening/closing parenthesis/bracket to own lines.""" + opening = lines[0][:1] + if opening in ["(", "[", "{"]: + lines[0] = " " + lines[0][1:] + lines[:] = [opening] + lines + closing = lines[-1][-1:] + if closing in [")", "]", "}"]: + lines[-1] = lines[-1][:-1] + "," + lines[:] = lines + [closing] + + +def _compare_eq_iterable( + left: Iterable[Any], right: Iterable[Any], verbose: int = 0 +) -> List[str]: + if verbose <= 0 and not running_on_ci(): + return ["Use -v to get more diff"] + # dynamic import to speedup pytest + import difflib + + left_formatting = pprint.pformat(left).splitlines() + right_formatting = pprint.pformat(right).splitlines() + + # Re-format for different output lengths. 
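+    # (Illustration: pprint may render one side on a single line and the other
+    # across many; when the line counts differ, both sides are re-rendered with
+    # the dispatching formatter so the ndiff below lines up.)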
+ lines_left = len(left_formatting) + lines_right = len(right_formatting) + if lines_left != lines_right: + left_formatting = _pformat_dispatch(left).splitlines() + right_formatting = _pformat_dispatch(right).splitlines() + + if lines_left > 1 or lines_right > 1: + _surrounding_parens_on_own_lines(left_formatting) + _surrounding_parens_on_own_lines(right_formatting) + + explanation = ["Full diff:"] + # "right" is the expected base against which we compare "left", + # see https://github.com/pytest-dev/pytest/issues/3333 + explanation.extend( + line.rstrip() for line in difflib.ndiff(right_formatting, left_formatting) + ) + return explanation + + +def _compare_eq_sequence( + left: Sequence[Any], right: Sequence[Any], verbose: int = 0 +) -> List[str]: + comparing_bytes = isinstance(left, bytes) and isinstance(right, bytes) + explanation: List[str] = [] + len_left = len(left) + len_right = len(right) + for i in range(min(len_left, len_right)): + if left[i] != right[i]: + if comparing_bytes: + # when comparing bytes, we want to see their ascii representation + # instead of their numeric values (#5260) + # using a slice gives us the ascii representation: + # >>> s = b'foo' + # >>> s[0] + # 102 + # >>> s[0:1] + # b'f' + left_value = left[i : i + 1] + right_value = right[i : i + 1] + else: + left_value = left[i] + right_value = right[i] + + explanation += [f"At index {i} diff: {left_value!r} != {right_value!r}"] + break + + if comparing_bytes: + # when comparing bytes, it doesn't help to show the "sides contain one or more + # items" longer explanation, so skip it + + return explanation + + len_diff = len_left - len_right + if len_diff: + if len_diff > 0: + dir_with_more = "Left" + extra = saferepr(left[len_right]) + else: + len_diff = 0 - len_diff + dir_with_more = "Right" + extra = saferepr(right[len_left]) + + if len_diff == 1: + explanation += [f"{dir_with_more} contains one more item: {extra}"] + else: + explanation += [ + "%s contains %d more items, first extra item: %s" + % (dir_with_more, len_diff, extra) + ] + return explanation + + +def _compare_eq_set( + left: AbstractSet[Any], right: AbstractSet[Any], verbose: int = 0 +) -> List[str]: + explanation = [] + diff_left = left - right + diff_right = right - left + if diff_left: + explanation.append("Extra items in the left set:") + for item in diff_left: + explanation.append(saferepr(item)) + if diff_right: + explanation.append("Extra items in the right set:") + for item in diff_right: + explanation.append(saferepr(item)) + return explanation + + +def _compare_eq_dict( + left: Mapping[Any, Any], right: Mapping[Any, Any], verbose: int = 0 +) -> List[str]: + explanation: List[str] = [] + set_left = set(left) + set_right = set(right) + common = set_left.intersection(set_right) + same = {k: left[k] for k in common if left[k] == right[k]} + if same and verbose < 2: + explanation += ["Omitting %s identical items, use -vv to show" % len(same)] + elif same: + explanation += ["Common items:"] + explanation += pprint.pformat(same).splitlines() + diff = {k for k in common if left[k] != right[k]} + if diff: + explanation += ["Differing items:"] + for k in diff: + explanation += [saferepr({k: left[k]}) + " != " + saferepr({k: right[k]})] + extra_left = set_left - set_right + len_extra_left = len(extra_left) + if len_extra_left: + explanation.append( + "Left contains %d more item%s:" + % (len_extra_left, "" if len_extra_left == 1 else "s") + ) + explanation.extend( + pprint.pformat({k: left[k] for k in extra_left}).splitlines() + ) + extra_right = 
set_right - set_left + len_extra_right = len(extra_right) + if len_extra_right: + explanation.append( + "Right contains %d more item%s:" + % (len_extra_right, "" if len_extra_right == 1 else "s") + ) + explanation.extend( + pprint.pformat({k: right[k] for k in extra_right}).splitlines() + ) + return explanation + + +def _compare_eq_cls(left: Any, right: Any, verbose: int) -> List[str]: + if not has_default_eq(left): + return [] + if isdatacls(left): + import dataclasses + + all_fields = dataclasses.fields(left) + fields_to_check = [info.name for info in all_fields if info.compare] + elif isattrs(left): + all_fields = left.__attrs_attrs__ + fields_to_check = [field.name for field in all_fields if getattr(field, "eq")] + elif isnamedtuple(left): + fields_to_check = left._fields + else: + assert False + + indent = " " + same = [] + diff = [] + for field in fields_to_check: + if getattr(left, field) == getattr(right, field): + same.append(field) + else: + diff.append(field) + + explanation = [] + if same or diff: + explanation += [""] + if same and verbose < 2: + explanation.append("Omitting %s identical items, use -vv to show" % len(same)) + elif same: + explanation += ["Matching attributes:"] + explanation += pprint.pformat(same).splitlines() + if diff: + explanation += ["Differing attributes:"] + explanation += pprint.pformat(diff).splitlines() + for field in diff: + field_left = getattr(left, field) + field_right = getattr(right, field) + explanation += [ + "", + "Drill down into differing attribute %s:" % field, + ("%s%s: %r != %r") % (indent, field, field_left, field_right), + ] + explanation += [ + indent + line + for line in _compare_eq_any(field_left, field_right, verbose) + ] + return explanation + + +def _notin_text(term: str, text: str, verbose: int = 0) -> List[str]: + index = text.find(term) + head = text[:index] + tail = text[index + len(term) :] + correct_text = head + tail + diff = _diff_text(text, correct_text, verbose) + newdiff = ["%s is contained here:" % saferepr(term, maxsize=42)] + for line in diff: + if line.startswith("Skipping"): + continue + if line.startswith("- "): + continue + if line.startswith("+ "): + newdiff.append(" " + line[2:]) + else: + newdiff.append(line) + return newdiff + + +def running_on_ci() -> bool: + """Check if we're currently running on a CI system.""" + env_vars = ["CI", "BUILD_NUMBER"] + return any(var in os.environ for var in env_vars) diff --git a/env/lib/python3.8/site-packages/_pytest/cacheprovider.py b/env/lib/python3.8/site-packages/_pytest/cacheprovider.py new file mode 100644 index 00000000..777c1b0b --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/cacheprovider.py @@ -0,0 +1,580 @@ +"""Implementation of the cache provider.""" +# This plugin was not named "cache" to avoid conflicts with the external +# pytest-cache version. 
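+# Editor's note -- a minimal usage sketch of the `cache` fixture defined below
+# (illustrative only; "example/key" is a made-up key, not part of this module):
+#
+#     def test_remember(cache):
+#         last = cache.get("example/key", None)
+#         cache.set("example/key", 42)
+#         assert last is None or last == 42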
+import json
+import os
+from pathlib import Path
+from typing import Dict
+from typing import Generator
+from typing import Iterable
+from typing import List
+from typing import Optional
+from typing import Set
+from typing import Union
+
+import attr
+
+from .pathlib import resolve_from_str
+from .pathlib import rm_rf
+from .reports import CollectReport
+from _pytest import nodes
+from _pytest._io import TerminalWriter
+from _pytest.compat import final
+from _pytest.config import Config
+from _pytest.config import ExitCode
+from _pytest.config import hookimpl
+from _pytest.config.argparsing import Parser
+from _pytest.deprecated import check_ispytest
+from _pytest.fixtures import fixture
+from _pytest.fixtures import FixtureRequest
+from _pytest.main import Session
+from _pytest.python import Module
+from _pytest.python import Package
+from _pytest.reports import TestReport
+
+
+README_CONTENT = """\
+# pytest cache directory #
+
+This directory contains data from pytest's cache plugin,
+which provides the `--lf` and `--ff` options, as well as the `cache` fixture.
+
+**Do not** commit this to version control.
+
+See [the docs](https://docs.pytest.org/en/stable/how-to/cache.html) for more information.
+"""
+
+CACHEDIR_TAG_CONTENT = b"""\
+Signature: 8a477f597d28d172789f06886806bc55
+# This file is a cache directory tag created by pytest.
+# For information about cache directory tags, see:
+#   https://bford.info/cachedir/spec.html
+"""
+
+
+@final
+@attr.s(init=False, auto_attribs=True)
+class Cache:
+    _cachedir: Path = attr.ib(repr=False)
+    _config: Config = attr.ib(repr=False)
+
+    # Sub-directory under cache-dir for directories created by `mkdir()`.
+    _CACHE_PREFIX_DIRS = "d"
+
+    # Sub-directory under cache-dir for values created by `set()`.
+    _CACHE_PREFIX_VALUES = "v"
+
+    def __init__(
+        self, cachedir: Path, config: Config, *, _ispytest: bool = False
+    ) -> None:
+        check_ispytest(_ispytest)
+        self._cachedir = cachedir
+        self._config = config
+
+    @classmethod
+    def for_config(cls, config: Config, *, _ispytest: bool = False) -> "Cache":
+        """Create the Cache instance for a Config.
+
+        :meta private:
+        """
+        check_ispytest(_ispytest)
+        cachedir = cls.cache_dir_from_config(config, _ispytest=True)
+        if config.getoption("cacheclear") and cachedir.is_dir():
+            cls.clear_cache(cachedir, _ispytest=True)
+        return cls(cachedir, config, _ispytest=True)
+
+    @classmethod
+    def clear_cache(cls, cachedir: Path, _ispytest: bool = False) -> None:
+        """Clear the sub-directories used to hold cached directories and values.
+
+        :meta private:
+        """
+        check_ispytest(_ispytest)
+        for prefix in (cls._CACHE_PREFIX_DIRS, cls._CACHE_PREFIX_VALUES):
+            d = cachedir / prefix
+            if d.is_dir():
+                rm_rf(d)
+
+    @staticmethod
+    def cache_dir_from_config(config: Config, *, _ispytest: bool = False) -> Path:
+        """Get the path to the cache directory for a Config.
+
+        :meta private:
+        """
+        check_ispytest(_ispytest)
+        return resolve_from_str(config.getini("cache_dir"), config.rootpath)
+
+    def warn(self, fmt: str, *, _ispytest: bool = False, **args: object) -> None:
+        """Issue a cache warning.
+
+        :meta private:
+        """
+        check_ispytest(_ispytest)
+        import warnings
+        from _pytest.warning_types import PytestCacheWarning
+
+        warnings.warn(
+            PytestCacheWarning(fmt.format(**args) if args else fmt),
+            self._config.hook,
+            stacklevel=3,
+        )
+
+    def mkdir(self, name: str) -> Path:
+        """Return a directory path object with the given name.
+
+        If the directory does not yet exist, it will be created.  You can use
+        it to manage files to e.g. store/retrieve database dumps across test
+        sessions.
+
+        .. versionadded:: 7.0
+
+        :param name:
+            Must be a string not containing a ``/`` separator.
+            Make sure the name contains your plugin or application
+            identifiers to prevent clashes with other cache users.
+        """
+        path = Path(name)
+        if len(path.parts) > 1:
+            raise ValueError("name is not allowed to contain path separators")
+        res = self._cachedir.joinpath(self._CACHE_PREFIX_DIRS, path)
+        res.mkdir(exist_ok=True, parents=True)
+        return res
+
+    def _getvaluepath(self, key: str) -> Path:
+        return self._cachedir.joinpath(self._CACHE_PREFIX_VALUES, Path(key))
+
+    def get(self, key: str, default):
+        """Return the cached value for the given key.
+
+        If no value was yet cached or the value cannot be read, the specified
+        default is returned.
+
+        :param key:
+            Must be a ``/`` separated value. Usually the first
+            name is the name of your plugin or your application.
+        :param default:
+            The value to return in case of a cache-miss or invalid cache value.
+        """
+        path = self._getvaluepath(key)
+        try:
+            with path.open("r", encoding="UTF-8") as f:
+                return json.load(f)
+        except (ValueError, OSError):
+            return default
+
+    def set(self, key: str, value: object) -> None:
+        """Save value for the given key.
+
+        :param key:
+            Must be a ``/`` separated value. Usually the first
+            name is the name of your plugin or your application.
+        :param value:
+            Must be of any combination of basic python types,
+            including nested types like lists of dictionaries.
+        """
+        path = self._getvaluepath(key)
+        try:
+            if path.parent.is_dir():
+                cache_dir_exists_already = True
+            else:
+                cache_dir_exists_already = self._cachedir.exists()
+                path.parent.mkdir(exist_ok=True, parents=True)
+        except OSError:
+            self.warn("could not create cache path {path}", path=path, _ispytest=True)
+            return
+        if not cache_dir_exists_already:
+            self._ensure_supporting_files()
+        data = json.dumps(value, ensure_ascii=False, indent=2)
+        try:
+            f = path.open("w", encoding="UTF-8")
+        except OSError:
+            self.warn("cache could not write path {path}", path=path, _ispytest=True)
+        else:
+            with f:
+                f.write(data)
+
+    def _ensure_supporting_files(self) -> None:
+        """Create supporting files in the cache dir that are not really part of the cache."""
+        readme_path = self._cachedir / "README.md"
+        readme_path.write_text(README_CONTENT, encoding="UTF-8")
+
+        gitignore_path = self._cachedir.joinpath(".gitignore")
+        msg = "# Created by pytest automatically.\n*\n"
+        gitignore_path.write_text(msg, encoding="UTF-8")
+
+        cachedir_tag_path = self._cachedir.joinpath("CACHEDIR.TAG")
+        cachedir_tag_path.write_bytes(CACHEDIR_TAG_CONTENT)
+
+
+class LFPluginCollWrapper:
+    def __init__(self, lfplugin: "LFPlugin") -> None:
+        self.lfplugin = lfplugin
+        self._collected_at_least_one_failure = False
+
+    @hookimpl(hookwrapper=True)
+    def pytest_make_collect_report(self, collector: nodes.Collector):
+        if isinstance(collector, Session):
+            out = yield
+            res: CollectReport = out.get_result()
+
+            # Sort any lf-paths to the beginning.
+            lf_paths = self.lfplugin._last_failed_paths
+
+            res.result = sorted(
+                res.result,
+                # use stable sort to prioritize last failed
+                key=lambda x: x.path in lf_paths,
+                reverse=True,
+            )
+            return
+
+        elif isinstance(collector, Module):
+            if collector.path in self.lfplugin._last_failed_paths:
+                out = yield
+                res = out.get_result()
+                result = res.result
+                lastfailed = self.lfplugin.lastfailed
+
+                # Only filter with known failures.
+ if not self._collected_at_least_one_failure: + if not any(x.nodeid in lastfailed for x in result): + return + self.lfplugin.config.pluginmanager.register( + LFPluginCollSkipfiles(self.lfplugin), "lfplugin-collskip" + ) + self._collected_at_least_one_failure = True + + session = collector.session + result[:] = [ + x + for x in result + if x.nodeid in lastfailed + # Include any passed arguments (not trivial to filter). + or session.isinitpath(x.path) + # Keep all sub-collectors. + or isinstance(x, nodes.Collector) + ] + return + yield + + +class LFPluginCollSkipfiles: + def __init__(self, lfplugin: "LFPlugin") -> None: + self.lfplugin = lfplugin + + @hookimpl + def pytest_make_collect_report( + self, collector: nodes.Collector + ) -> Optional[CollectReport]: + # Packages are Modules, but _last_failed_paths only contains + # test-bearing paths and doesn't try to include the paths of their + # packages, so don't filter them. + if isinstance(collector, Module) and not isinstance(collector, Package): + if collector.path not in self.lfplugin._last_failed_paths: + self.lfplugin._skipped_files += 1 + + return CollectReport( + collector.nodeid, "passed", longrepr=None, result=[] + ) + return None + + +class LFPlugin: + """Plugin which implements the --lf (run last-failing) option.""" + + def __init__(self, config: Config) -> None: + self.config = config + active_keys = "lf", "failedfirst" + self.active = any(config.getoption(key) for key in active_keys) + assert config.cache + self.lastfailed: Dict[str, bool] = config.cache.get("cache/lastfailed", {}) + self._previously_failed_count: Optional[int] = None + self._report_status: Optional[str] = None + self._skipped_files = 0 # count skipped files during collection due to --lf + + if config.getoption("lf"): + self._last_failed_paths = self.get_last_failed_paths() + config.pluginmanager.register( + LFPluginCollWrapper(self), "lfplugin-collwrapper" + ) + + def get_last_failed_paths(self) -> Set[Path]: + """Return a set with all Paths()s of the previously failed nodeids.""" + rootpath = self.config.rootpath + result = {rootpath / nodeid.split("::")[0] for nodeid in self.lastfailed} + return {x for x in result if x.exists()} + + def pytest_report_collectionfinish(self) -> Optional[str]: + if self.active and self.config.getoption("verbose") >= 0: + return "run-last-failure: %s" % self._report_status + return None + + def pytest_runtest_logreport(self, report: TestReport) -> None: + if (report.when == "call" and report.passed) or report.skipped: + self.lastfailed.pop(report.nodeid, None) + elif report.failed: + self.lastfailed[report.nodeid] = True + + def pytest_collectreport(self, report: CollectReport) -> None: + passed = report.outcome in ("passed", "skipped") + if passed: + if report.nodeid in self.lastfailed: + self.lastfailed.pop(report.nodeid) + self.lastfailed.update((item.nodeid, True) for item in report.result) + else: + self.lastfailed[report.nodeid] = True + + @hookimpl(hookwrapper=True, tryfirst=True) + def pytest_collection_modifyitems( + self, config: Config, items: List[nodes.Item] + ) -> Generator[None, None, None]: + yield + + if not self.active: + return + + if self.lastfailed: + previously_failed = [] + previously_passed = [] + for item in items: + if item.nodeid in self.lastfailed: + previously_failed.append(item) + else: + previously_passed.append(item) + self._previously_failed_count = len(previously_failed) + + if not previously_failed: + # Running a subset of all tests with recorded failures + # only outside of it. 
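+                # e.g. ``pytest tests/test_a.py --lf`` when all recorded
+                # failures live in tests/test_b.py (hypothetical paths).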
+ self._report_status = "%d known failures not in selected tests" % ( + len(self.lastfailed), + ) + else: + if self.config.getoption("lf"): + items[:] = previously_failed + config.hook.pytest_deselected(items=previously_passed) + else: # --failedfirst + items[:] = previously_failed + previously_passed + + noun = "failure" if self._previously_failed_count == 1 else "failures" + suffix = " first" if self.config.getoption("failedfirst") else "" + self._report_status = "rerun previous {count} {noun}{suffix}".format( + count=self._previously_failed_count, suffix=suffix, noun=noun + ) + + if self._skipped_files > 0: + files_noun = "file" if self._skipped_files == 1 else "files" + self._report_status += " (skipped {files} {files_noun})".format( + files=self._skipped_files, files_noun=files_noun + ) + else: + self._report_status = "no previously failed tests, " + if self.config.getoption("last_failed_no_failures") == "none": + self._report_status += "deselecting all items." + config.hook.pytest_deselected(items=items[:]) + items[:] = [] + else: + self._report_status += "not deselecting items." + + def pytest_sessionfinish(self, session: Session) -> None: + config = self.config + if config.getoption("cacheshow") or hasattr(config, "workerinput"): + return + + assert config.cache is not None + saved_lastfailed = config.cache.get("cache/lastfailed", {}) + if saved_lastfailed != self.lastfailed: + config.cache.set("cache/lastfailed", self.lastfailed) + + +class NFPlugin: + """Plugin which implements the --nf (run new-first) option.""" + + def __init__(self, config: Config) -> None: + self.config = config + self.active = config.option.newfirst + assert config.cache is not None + self.cached_nodeids = set(config.cache.get("cache/nodeids", [])) + + @hookimpl(hookwrapper=True, tryfirst=True) + def pytest_collection_modifyitems( + self, items: List[nodes.Item] + ) -> Generator[None, None, None]: + yield + + if self.active: + new_items: Dict[str, nodes.Item] = {} + other_items: Dict[str, nodes.Item] = {} + for item in items: + if item.nodeid not in self.cached_nodeids: + new_items[item.nodeid] = item + else: + other_items[item.nodeid] = item + + items[:] = self._get_increasing_order( + new_items.values() + ) + self._get_increasing_order(other_items.values()) + self.cached_nodeids.update(new_items) + else: + self.cached_nodeids.update(item.nodeid for item in items) + + def _get_increasing_order(self, items: Iterable[nodes.Item]) -> List[nodes.Item]: + return sorted(items, key=lambda item: item.path.stat().st_mtime, reverse=True) # type: ignore[no-any-return] + + def pytest_sessionfinish(self) -> None: + config = self.config + if config.getoption("cacheshow") or hasattr(config, "workerinput"): + return + + if config.getoption("collectonly"): + return + + assert config.cache is not None + config.cache.set("cache/nodeids", sorted(self.cached_nodeids)) + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("general") + group.addoption( + "--lf", + "--last-failed", + action="store_true", + dest="lf", + help="Rerun only the tests that failed " + "at the last run (or all if none failed)", + ) + group.addoption( + "--ff", + "--failed-first", + action="store_true", + dest="failedfirst", + help="Run all tests, but run the last failures first. 
" + "This may re-order tests and thus lead to " + "repeated fixture setup/teardown.", + ) + group.addoption( + "--nf", + "--new-first", + action="store_true", + dest="newfirst", + help="Run tests from new files first, then the rest of the tests " + "sorted by file mtime", + ) + group.addoption( + "--cache-show", + action="append", + nargs="?", + dest="cacheshow", + help=( + "Show cache contents, don't perform collection or tests. " + "Optional argument: glob (default: '*')." + ), + ) + group.addoption( + "--cache-clear", + action="store_true", + dest="cacheclear", + help="Remove all cache contents at start of test run", + ) + cache_dir_default = ".pytest_cache" + if "TOX_ENV_DIR" in os.environ: + cache_dir_default = os.path.join(os.environ["TOX_ENV_DIR"], cache_dir_default) + parser.addini("cache_dir", default=cache_dir_default, help="Cache directory path") + group.addoption( + "--lfnf", + "--last-failed-no-failures", + action="store", + dest="last_failed_no_failures", + choices=("all", "none"), + default="all", + help="Which tests to run with no previously (known) failures", + ) + + +def pytest_cmdline_main(config: Config) -> Optional[Union[int, ExitCode]]: + if config.option.cacheshow: + from _pytest.main import wrap_session + + return wrap_session(config, cacheshow) + return None + + +@hookimpl(tryfirst=True) +def pytest_configure(config: Config) -> None: + config.cache = Cache.for_config(config, _ispytest=True) + config.pluginmanager.register(LFPlugin(config), "lfplugin") + config.pluginmanager.register(NFPlugin(config), "nfplugin") + + +@fixture +def cache(request: FixtureRequest) -> Cache: + """Return a cache object that can persist state between testing sessions. + + cache.get(key, default) + cache.set(key, value) + + Keys must be ``/`` separated strings, where the first part is usually the + name of your plugin or application to avoid clashes with other cache users. + + Values can be any object handled by the json stdlib module. + """ + assert request.config.cache is not None + return request.config.cache + + +def pytest_report_header(config: Config) -> Optional[str]: + """Display cachedir with --cache-show and if non-default.""" + if config.option.verbose > 0 or config.getini("cache_dir") != ".pytest_cache": + assert config.cache is not None + cachedir = config.cache._cachedir + # TODO: evaluate generating upward relative paths + # starting with .., ../.. 
if sensible + + try: + displaypath = cachedir.relative_to(config.rootpath) + except ValueError: + displaypath = cachedir + return f"cachedir: {displaypath}" + return None + + +def cacheshow(config: Config, session: Session) -> int: + from pprint import pformat + + assert config.cache is not None + + tw = TerminalWriter() + tw.line("cachedir: " + str(config.cache._cachedir)) + if not config.cache._cachedir.is_dir(): + tw.line("cache is empty") + return 0 + + glob = config.option.cacheshow[0] + if glob is None: + glob = "*" + + dummy = object() + basedir = config.cache._cachedir + vdir = basedir / Cache._CACHE_PREFIX_VALUES + tw.sep("-", "cache values for %r" % glob) + for valpath in sorted(x for x in vdir.rglob(glob) if x.is_file()): + key = str(valpath.relative_to(vdir)) + val = config.cache.get(key, dummy) + if val is dummy: + tw.line("%s contains unreadable content, will be ignored" % key) + else: + tw.line("%s contains:" % key) + for line in pformat(val).splitlines(): + tw.line(" " + line) + + ddir = basedir / Cache._CACHE_PREFIX_DIRS + if ddir.is_dir(): + contents = sorted(ddir.rglob(glob)) + tw.sep("-", "cache directories for %r" % glob) + for p in contents: + # if p.is_dir(): + # print("%s/" % p.relative_to(basedir)) + if p.is_file(): + key = str(p.relative_to(basedir)) + tw.line(f"{key} is a file of length {p.stat().st_size:d}") + return 0 diff --git a/env/lib/python3.8/site-packages/_pytest/capture.py b/env/lib/python3.8/site-packages/_pytest/capture.py new file mode 100644 index 00000000..6131a46d --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/capture.py @@ -0,0 +1,1014 @@ +"""Per-test stdout/stderr capturing mechanism.""" +import contextlib +import functools +import io +import os +import sys +from io import UnsupportedOperation +from tempfile import TemporaryFile +from typing import Any +from typing import AnyStr +from typing import Generator +from typing import Generic +from typing import Iterator +from typing import Optional +from typing import TextIO +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from _pytest.compat import final +from _pytest.config import Config +from _pytest.config import hookimpl +from _pytest.config.argparsing import Parser +from _pytest.deprecated import check_ispytest +from _pytest.fixtures import fixture +from _pytest.fixtures import SubRequest +from _pytest.nodes import Collector +from _pytest.nodes import File +from _pytest.nodes import Item + +if TYPE_CHECKING: + from typing_extensions import Literal + + _CaptureMethod = Literal["fd", "sys", "no", "tee-sys"] + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("general") + group._addoption( + "--capture", + action="store", + default="fd", + metavar="method", + choices=["fd", "sys", "no", "tee-sys"], + help="Per-test capturing method: one of fd|sys|no|tee-sys", + ) + group._addoption( + "-s", + action="store_const", + const="no", + dest="capture", + help="Shortcut for --capture=no", + ) + + +def _colorama_workaround() -> None: + """Ensure colorama is imported so that it attaches to the correct stdio + handles on Windows. + + colorama uses the terminal on import time. So if something does the + first import of colorama while I/O capture is active, colorama will + fail in various ways. + """ + if sys.platform.startswith("win32"): + try: + import colorama # noqa: F401 + except ImportError: + pass + + +def _windowsconsoleio_workaround(stream: TextIO) -> None: + """Workaround for Windows Unicode console handling. 
+ + Python 3.6 implemented Unicode console handling for Windows. This works + by reading/writing to the raw console handle using + ``{Read,Write}ConsoleW``. + + The problem is that we are going to ``dup2`` over the stdio file + descriptors when doing ``FDCapture`` and this will ``CloseHandle`` the + handles used by Python to write to the console. Though there is still some + weirdness and the console handle seems to only be closed randomly and not + on the first call to ``CloseHandle``, or maybe it gets reopened with the + same handle value when we suspend capturing. + + The workaround in this case will reopen stdio with a different fd which + also means a different handle by replicating the logic in + "Py_lifecycle.c:initstdio/create_stdio". + + :param stream: + In practice ``sys.stdout`` or ``sys.stderr``, but given + here as parameter for unittesting purposes. + + See https://github.com/pytest-dev/py/issues/103. + """ + if not sys.platform.startswith("win32") or hasattr(sys, "pypy_version_info"): + return + + # Bail out if ``stream`` doesn't seem like a proper ``io`` stream (#2666). + if not hasattr(stream, "buffer"): # type: ignore[unreachable] + return + + buffered = hasattr(stream.buffer, "raw") + raw_stdout = stream.buffer.raw if buffered else stream.buffer # type: ignore[attr-defined] + + if not isinstance(raw_stdout, io._WindowsConsoleIO): # type: ignore[attr-defined] + return + + def _reopen_stdio(f, mode): + if not buffered and mode[0] == "w": + buffering = 0 + else: + buffering = -1 + + return io.TextIOWrapper( + open(os.dup(f.fileno()), mode, buffering), + f.encoding, + f.errors, + f.newlines, + f.line_buffering, + ) + + sys.stdin = _reopen_stdio(sys.stdin, "rb") + sys.stdout = _reopen_stdio(sys.stdout, "wb") + sys.stderr = _reopen_stdio(sys.stderr, "wb") + + +@hookimpl(hookwrapper=True) +def pytest_load_initial_conftests(early_config: Config): + ns = early_config.known_args_namespace + if ns.capture == "fd": + _windowsconsoleio_workaround(sys.stdout) + _colorama_workaround() + pluginmanager = early_config.pluginmanager + capman = CaptureManager(ns.capture) + pluginmanager.register(capman, "capturemanager") + + # Make sure that capturemanager is properly reset at final shutdown. + early_config.add_cleanup(capman.stop_global_capturing) + + # Finally trigger conftest loading but while capturing (issue #93). + capman.start_global_capturing() + outcome = yield + capman.suspend_global_capture() + if outcome.excinfo is not None: + out, err = capman.read_global_capture() + sys.stdout.write(out) + sys.stderr.write(err) + + +# IO Helpers. + + +class EncodedFile(io.TextIOWrapper): + __slots__ = () + + @property + def name(self) -> str: + # Ensure that file.name is a string. Workaround for a Python bug + # fixed in >=3.7.4: https://bugs.python.org/issue36015 + return repr(self.buffer) + + @property + def mode(self) -> str: + # TextIOWrapper doesn't expose a mode, but at least some of our + # tests check it. 
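+        # (Illustration: an underlying binary mode of "rb+" is reported as "r+".)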
+        return self.buffer.mode.replace("b", "")
+
+
+class CaptureIO(io.TextIOWrapper):
+    def __init__(self) -> None:
+        super().__init__(io.BytesIO(), encoding="UTF-8", newline="", write_through=True)
+
+    def getvalue(self) -> str:
+        assert isinstance(self.buffer, io.BytesIO)
+        return self.buffer.getvalue().decode("UTF-8")
+
+
+class TeeCaptureIO(CaptureIO):
+    def __init__(self, other: TextIO) -> None:
+        self._other = other
+        super().__init__()
+
+    def write(self, s: str) -> int:
+        super().write(s)
+        return self._other.write(s)
+
+
+class DontReadFromInput:
+    encoding = None
+
+    def read(self, *args):
+        raise OSError(
+            "pytest: reading from stdin while output is captured! Consider using `-s`."
+        )
+
+    readline = read
+    readlines = read
+    __next__ = read
+
+    def __iter__(self):
+        return self
+
+    def fileno(self) -> int:
+        raise UnsupportedOperation("redirected stdin is pseudofile, has no fileno()")
+
+    def flush(self) -> None:
+        raise UnsupportedOperation("redirected stdin is pseudofile, has no flush()")
+
+    def isatty(self) -> bool:
+        return False
+
+    def close(self) -> None:
+        pass
+
+    def readable(self) -> bool:
+        return False
+
+    def seek(self, offset: int) -> int:
+        raise UnsupportedOperation("redirected stdin is pseudofile, has no seek(int)")
+
+    def seekable(self) -> bool:
+        return False
+
+    def tell(self) -> int:
+        raise UnsupportedOperation("redirected stdin is pseudofile, has no tell()")
+
+    def truncate(self, size: int) -> None:
+        raise UnsupportedOperation("cannot truncate stdin")
+
+    def write(self, *args) -> None:
+        raise UnsupportedOperation("cannot write to stdin")
+
+    def writelines(self, *args) -> None:
+        raise UnsupportedOperation("cannot write to stdin")
+
+    def writable(self) -> bool:
+        return False
+
+    @property
+    def buffer(self):
+        return self
+
+
+# Capture classes.
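+# Editor's note -- an orientation sketch (not part of the upstream module):
+# the capture classes below share a small start/snap/suspend/resume/done
+# state machine, e.g. for stdout:
+#
+#     cap = SysCapture(1)   # replaces sys.stdout with a CaptureIO buffer
+#     cap.start()           # state "initialized" -> "started"
+#     print("hello")
+#     cap.snap()            # -> "hello\n"; the buffer is reset
+#     cap.done()            # restores sys.stdout; state "done"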
+
+
+patchsysdict = {0: "stdin", 1: "stdout", 2: "stderr"}
+
+
+class NoCapture:
+    EMPTY_BUFFER = None
+    __init__ = start = done = suspend = resume = lambda *args: None
+
+
+class SysCaptureBinary:
+
+    EMPTY_BUFFER = b""
+
+    def __init__(self, fd: int, tmpfile=None, *, tee: bool = False) -> None:
+        name = patchsysdict[fd]
+        self._old = getattr(sys, name)
+        self.name = name
+        if tmpfile is None:
+            if name == "stdin":
+                tmpfile = DontReadFromInput()
+            else:
+                tmpfile = CaptureIO() if not tee else TeeCaptureIO(self._old)
+        self.tmpfile = tmpfile
+        self._state = "initialized"
+
+    def repr(self, class_name: str) -> str:
+        return "<{} {} _old={} _state={!r} tmpfile={!r}>".format(
+            class_name,
+            self.name,
+            hasattr(self, "_old") and repr(self._old) or "<UNSET>",
+            self._state,
+            self.tmpfile,
+        )
+
+    def __repr__(self) -> str:
+        return "<{} {} _old={} _state={!r} tmpfile={!r}>".format(
+            self.__class__.__name__,
+            self.name,
+            hasattr(self, "_old") and repr(self._old) or "<UNSET>",
+            self._state,
+            self.tmpfile,
+        )
+
+    def _assert_state(self, op: str, states: Tuple[str, ...]) -> None:
+        assert (
+            self._state in states
+        ), "cannot {} in state {!r}: expected one of {}".format(
+            op, self._state, ", ".join(states)
+        )
+
+    def start(self) -> None:
+        self._assert_state("start", ("initialized",))
+        setattr(sys, self.name, self.tmpfile)
+        self._state = "started"
+
+    def snap(self):
+        self._assert_state("snap", ("started", "suspended"))
+        self.tmpfile.seek(0)
+        res = self.tmpfile.buffer.read()
+        self.tmpfile.seek(0)
+        self.tmpfile.truncate()
+        return res
+
+    def done(self) -> None:
+        self._assert_state("done", ("initialized", "started", "suspended", "done"))
+        if self._state == "done":
+            return
+        setattr(sys, self.name, self._old)
+        del self._old
+        self.tmpfile.close()
+        self._state = "done"
+
+    def suspend(self) -> None:
+        self._assert_state("suspend", ("started", "suspended"))
+        setattr(sys, self.name, self._old)
+        self._state = "suspended"
+
+    def resume(self) -> None:
+        self._assert_state("resume", ("started", "suspended"))
+        if self._state == "started":
+            return
+        setattr(sys, self.name, self.tmpfile)
+        self._state = "started"
+
+    def writeorg(self, data) -> None:
+        self._assert_state("writeorg", ("started", "suspended"))
+        self._old.flush()
+        self._old.buffer.write(data)
+        self._old.buffer.flush()
+
+
+class SysCapture(SysCaptureBinary):
+    EMPTY_BUFFER = ""  # type: ignore[assignment]
+
+    def snap(self):
+        res = self.tmpfile.getvalue()
+        self.tmpfile.seek(0)
+        self.tmpfile.truncate()
+        return res
+
+    def writeorg(self, data):
+        self._assert_state("writeorg", ("started", "suspended"))
+        self._old.write(data)
+        self._old.flush()
+
+
+class FDCaptureBinary:
+    """Capture IO to/from a given OS-level file descriptor.
+
+    snap() produces `bytes`.
+    """
+
+    EMPTY_BUFFER = b""
+
+    def __init__(self, targetfd: int) -> None:
+        self.targetfd = targetfd
+
+        try:
+            os.fstat(targetfd)
+        except OSError:
+            # FD capturing is conceptually simple -- create a temporary file,
+            # redirect the FD to it, redirect back when done. But when the
+            # target FD is invalid it throws a wrench into this lovely scheme.
+            #
+            # Tests themselves shouldn't care if the FD is valid, FD capturing
+            # should work regardless of external circumstances. So falling back
+            # to just sys capturing is not a good option.
+            #
+            # Further complications are the need to support suspend() and the
+            # possibility of FD reuse (e.g. the tmpfile getting the very same
+            # target FD). The following approach is robust, I believe.
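+            # (Sketch of the idea: park the invalid target FD on os.devnull so
+            # that the dup2()-based save/restore below always operates on a
+            # valid descriptor.)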
+ self.targetfd_invalid: Optional[int] = os.open(os.devnull, os.O_RDWR) + os.dup2(self.targetfd_invalid, targetfd) + else: + self.targetfd_invalid = None + self.targetfd_save = os.dup(targetfd) + + if targetfd == 0: + self.tmpfile = open(os.devnull, encoding="utf-8") + self.syscapture = SysCapture(targetfd) + else: + self.tmpfile = EncodedFile( + TemporaryFile(buffering=0), + encoding="utf-8", + errors="replace", + newline="", + write_through=True, + ) + if targetfd in patchsysdict: + self.syscapture = SysCapture(targetfd, self.tmpfile) + else: + self.syscapture = NoCapture() + + self._state = "initialized" + + def __repr__(self) -> str: + return "<{} {} oldfd={} _state={!r} tmpfile={!r}>".format( + self.__class__.__name__, + self.targetfd, + self.targetfd_save, + self._state, + self.tmpfile, + ) + + def _assert_state(self, op: str, states: Tuple[str, ...]) -> None: + assert ( + self._state in states + ), "cannot {} in state {!r}: expected one of {}".format( + op, self._state, ", ".join(states) + ) + + def start(self) -> None: + """Start capturing on targetfd using memorized tmpfile.""" + self._assert_state("start", ("initialized",)) + os.dup2(self.tmpfile.fileno(), self.targetfd) + self.syscapture.start() + self._state = "started" + + def snap(self): + self._assert_state("snap", ("started", "suspended")) + self.tmpfile.seek(0) + res = self.tmpfile.buffer.read() + self.tmpfile.seek(0) + self.tmpfile.truncate() + return res + + def done(self) -> None: + """Stop capturing, restore streams, return original capture file, + seeked to position zero.""" + self._assert_state("done", ("initialized", "started", "suspended", "done")) + if self._state == "done": + return + os.dup2(self.targetfd_save, self.targetfd) + os.close(self.targetfd_save) + if self.targetfd_invalid is not None: + if self.targetfd_invalid != self.targetfd: + os.close(self.targetfd) + os.close(self.targetfd_invalid) + self.syscapture.done() + self.tmpfile.close() + self._state = "done" + + def suspend(self) -> None: + self._assert_state("suspend", ("started", "suspended")) + if self._state == "suspended": + return + self.syscapture.suspend() + os.dup2(self.targetfd_save, self.targetfd) + self._state = "suspended" + + def resume(self) -> None: + self._assert_state("resume", ("started", "suspended")) + if self._state == "started": + return + self.syscapture.resume() + os.dup2(self.tmpfile.fileno(), self.targetfd) + self._state = "started" + + def writeorg(self, data): + """Write to original file descriptor.""" + self._assert_state("writeorg", ("started", "suspended")) + os.write(self.targetfd_save, data) + + +class FDCapture(FDCaptureBinary): + """Capture IO to/from a given OS-level file descriptor. + + snap() produces text. + """ + + # Ignore type because it doesn't match the type in the superclass (bytes). + EMPTY_BUFFER = "" # type: ignore + + def snap(self): + self._assert_state("snap", ("started", "suspended")) + self.tmpfile.seek(0) + res = self.tmpfile.read() + self.tmpfile.seek(0) + self.tmpfile.truncate() + return res + + def writeorg(self, data): + """Write to original file descriptor.""" + super().writeorg(data.encode("utf-8")) # XXX use encoding of original stream + + +# MultiCapture + + +# This class was a namedtuple, but due to mypy limitation[0] it could not be +# made generic, so was replaced by a regular class which tries to emulate the +# pertinent parts of a namedtuple. If the mypy limitation is ever lifted, can +# make it a namedtuple again. 
+# [0]: https://github.com/python/mypy/issues/685
+@final
+@functools.total_ordering
+class CaptureResult(Generic[AnyStr]):
+    """The result of :meth:`CaptureFixture.readouterr`."""
+
+    __slots__ = ("out", "err")
+
+    def __init__(self, out: AnyStr, err: AnyStr) -> None:
+        self.out: AnyStr = out
+        self.err: AnyStr = err
+
+    def __len__(self) -> int:
+        return 2
+
+    def __iter__(self) -> Iterator[AnyStr]:
+        return iter((self.out, self.err))
+
+    def __getitem__(self, item: int) -> AnyStr:
+        return tuple(self)[item]
+
+    def _replace(
+        self, *, out: Optional[AnyStr] = None, err: Optional[AnyStr] = None
+    ) -> "CaptureResult[AnyStr]":
+        return CaptureResult(
+            out=self.out if out is None else out, err=self.err if err is None else err
+        )
+
+    def count(self, value: AnyStr) -> int:
+        return tuple(self).count(value)
+
+    def index(self, value) -> int:
+        return tuple(self).index(value)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, (CaptureResult, tuple)):
+            return NotImplemented
+        return tuple(self) == tuple(other)
+
+    def __hash__(self) -> int:
+        return hash(tuple(self))
+
+    def __lt__(self, other: object) -> bool:
+        if not isinstance(other, (CaptureResult, tuple)):
+            return NotImplemented
+        return tuple(self) < tuple(other)
+
+    def __repr__(self) -> str:
+        return f"CaptureResult(out={self.out!r}, err={self.err!r})"
+
+
+class MultiCapture(Generic[AnyStr]):
+    _state = None
+    _in_suspended = False
+
+    def __init__(self, in_, out, err) -> None:
+        self.in_ = in_
+        self.out = out
+        self.err = err
+
+    def __repr__(self) -> str:
+        return "<MultiCapture out={!r} err={!r} in_={!r} _state={!r} _in_suspended={!r}>".format(
+            self.out,
+            self.err,
+            self.in_,
+            self._state,
+            self._in_suspended,
+        )
+
+    def start_capturing(self) -> None:
+        self._state = "started"
+        if self.in_:
+            self.in_.start()
+        if self.out:
+            self.out.start()
+        if self.err:
+            self.err.start()
+
+    def pop_outerr_to_orig(self) -> Tuple[AnyStr, AnyStr]:
+        """Pop current snapshot out/err capture and flush to orig streams."""
+        out, err = self.readouterr()
+        if out:
+            self.out.writeorg(out)
+        if err:
+            self.err.writeorg(err)
+        return out, err
+
+    def suspend_capturing(self, in_: bool = False) -> None:
+        self._state = "suspended"
+        if self.out:
+            self.out.suspend()
+        if self.err:
+            self.err.suspend()
+        if in_ and self.in_:
+            self.in_.suspend()
+            self._in_suspended = True
+
+    def resume_capturing(self) -> None:
+        self._state = "started"
+        if self.out:
+            self.out.resume()
+        if self.err:
+            self.err.resume()
+        if self._in_suspended:
+            self.in_.resume()
+            self._in_suspended = False
+
+    def stop_capturing(self) -> None:
+        """Stop capturing and reset capturing streams."""
+        if self._state == "stopped":
+            raise ValueError("was already stopped")
+        self._state = "stopped"
+        if self.out:
+            self.out.done()
+        if self.err:
+            self.err.done()
+        if self.in_:
+            self.in_.done()
+
+    def is_started(self) -> bool:
+        """Whether actively capturing -- not suspended or stopped."""
+        return self._state == "started"
+
+    def readouterr(self) -> CaptureResult[AnyStr]:
+        out = self.out.snap() if self.out else ""
+        err = self.err.snap() if self.err else ""
+        return CaptureResult(out, err)
+
+
+def _get_multicapture(method: "_CaptureMethod") -> MultiCapture[str]:
+    if method == "fd":
+        return MultiCapture(in_=FDCapture(0), out=FDCapture(1), err=FDCapture(2))
+    elif method == "sys":
+        return MultiCapture(in_=SysCapture(0), out=SysCapture(1), err=SysCapture(2))
+    elif method == "no":
+        return MultiCapture(in_=None, out=None, err=None)
+    elif method == "tee-sys":
+        return MultiCapture(
+            in_=None, out=SysCapture(1, tee=True), err=SysCapture(2, tee=True)
+        )
+    raise ValueError(f"unknown capturing method: {method!r}")
+
+
+# CaptureManager and CaptureFixture
+
+
+class CaptureManager:
+    """The capture plugin.
+
+    Makes sure that the appropriate capture method is enabled/disabled during
+    collection and each test phase (setup, call, teardown). After each of
+    those points, the captured output is obtained and attached to the
+    collection/runtest report.
+
+    There are two levels of capture:
+
+    * global: enabled by default and can be suppressed by the ``-s``
+      option. This is always enabled/disabled during collection and each test
+      phase.
+
+    * fixture: when a test function or one of its fixtures depends on the
+      ``capsys`` or ``capfd`` fixtures. In this case special handling is
+      needed to ensure the fixtures take precedence over the global capture.
+    """
+
+    def __init__(self, method: "_CaptureMethod") -> None:
+        self._method = method
+        self._global_capturing: Optional[MultiCapture[str]] = None
+        self._capture_fixture: Optional[CaptureFixture[Any]] = None
+
+    def __repr__(self) -> str:
+        return "<CaptureManager _method={!r} _global_capturing={!r} _capture_fixture={!r}>".format(
+            self._method, self._global_capturing, self._capture_fixture
+        )
+
+    def is_capturing(self) -> Union[str, bool]:
+        if self.is_globally_capturing():
+            return "global"
+        if self._capture_fixture:
+            return "fixture %s" % self._capture_fixture.request.fixturename
+        return False
+
+    # Global capturing control
+
+    def is_globally_capturing(self) -> bool:
+        return self._method != "no"
+
+    def start_global_capturing(self) -> None:
+        assert self._global_capturing is None
+        self._global_capturing = _get_multicapture(self._method)
+        self._global_capturing.start_capturing()
+
+    def stop_global_capturing(self) -> None:
+        if self._global_capturing is not None:
+            self._global_capturing.pop_outerr_to_orig()
+            self._global_capturing.stop_capturing()
+            self._global_capturing = None
+
+    def resume_global_capture(self) -> None:
+        # During teardown of the python process, and on rare occasions, capture
+        # attributes can be `None` while trying to resume global capture.
+        if self._global_capturing is not None:
+            self._global_capturing.resume_capturing()
+
+    def suspend_global_capture(self, in_: bool = False) -> None:
+        if self._global_capturing is not None:
+            self._global_capturing.suspend_capturing(in_=in_)
+
+    def suspend(self, in_: bool = False) -> None:
+        # Need to undo local capsys-et-al if it exists before disabling global capture.
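+        # The fixture capture is layered on top of the global capture, so it
+        # must be suspended first; resume() below restores them in reverse order.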
+ self.suspend_fixture() + self.suspend_global_capture(in_) + + def resume(self) -> None: + self.resume_global_capture() + self.resume_fixture() + + def read_global_capture(self) -> CaptureResult[str]: + assert self._global_capturing is not None + return self._global_capturing.readouterr() + + # Fixture Control + + def set_fixture(self, capture_fixture: "CaptureFixture[Any]") -> None: + if self._capture_fixture: + current_fixture = self._capture_fixture.request.fixturename + requested_fixture = capture_fixture.request.fixturename + capture_fixture.request.raiseerror( + "cannot use {} and {} at the same time".format( + requested_fixture, current_fixture + ) + ) + self._capture_fixture = capture_fixture + + def unset_fixture(self) -> None: + self._capture_fixture = None + + def activate_fixture(self) -> None: + """If the current item is using ``capsys`` or ``capfd``, activate + them so they take precedence over the global capture.""" + if self._capture_fixture: + self._capture_fixture._start() + + def deactivate_fixture(self) -> None: + """Deactivate the ``capsys`` or ``capfd`` fixture of this item, if any.""" + if self._capture_fixture: + self._capture_fixture.close() + + def suspend_fixture(self) -> None: + if self._capture_fixture: + self._capture_fixture._suspend() + + def resume_fixture(self) -> None: + if self._capture_fixture: + self._capture_fixture._resume() + + # Helper context managers + + @contextlib.contextmanager + def global_and_fixture_disabled(self) -> Generator[None, None, None]: + """Context manager to temporarily disable global and current fixture capturing.""" + do_fixture = self._capture_fixture and self._capture_fixture._is_started() + if do_fixture: + self.suspend_fixture() + do_global = self._global_capturing and self._global_capturing.is_started() + if do_global: + self.suspend_global_capture() + try: + yield + finally: + if do_global: + self.resume_global_capture() + if do_fixture: + self.resume_fixture() + + @contextlib.contextmanager + def item_capture(self, when: str, item: Item) -> Generator[None, None, None]: + self.resume_global_capture() + self.activate_fixture() + try: + yield + finally: + self.deactivate_fixture() + self.suspend_global_capture(in_=False) + + out, err = self.read_global_capture() + item.add_report_section(when, "stdout", out) + item.add_report_section(when, "stderr", err) + + # Hooks + + @hookimpl(hookwrapper=True) + def pytest_make_collect_report(self, collector: Collector): + if isinstance(collector, File): + self.resume_global_capture() + outcome = yield + self.suspend_global_capture() + out, err = self.read_global_capture() + rep = outcome.get_result() + if out: + rep.sections.append(("Captured stdout", out)) + if err: + rep.sections.append(("Captured stderr", err)) + else: + yield + + @hookimpl(hookwrapper=True) + def pytest_runtest_setup(self, item: Item) -> Generator[None, None, None]: + with self.item_capture("setup", item): + yield + + @hookimpl(hookwrapper=True) + def pytest_runtest_call(self, item: Item) -> Generator[None, None, None]: + with self.item_capture("call", item): + yield + + @hookimpl(hookwrapper=True) + def pytest_runtest_teardown(self, item: Item) -> Generator[None, None, None]: + with self.item_capture("teardown", item): + yield + + @hookimpl(tryfirst=True) + def pytest_keyboard_interrupt(self) -> None: + self.stop_global_capturing() + + @hookimpl(tryfirst=True) + def pytest_internalerror(self) -> None: + self.stop_global_capturing() + + +class CaptureFixture(Generic[AnyStr]): + """Object returned by the 
:fixture:`capsys`, :fixture:`capsysbinary`,
+    :fixture:`capfd` and :fixture:`capfdbinary` fixtures."""
+
+    def __init__(
+        self, captureclass, request: SubRequest, *, _ispytest: bool = False
+    ) -> None:
+        check_ispytest(_ispytest)
+        self.captureclass = captureclass
+        self.request = request
+        self._capture: Optional[MultiCapture[AnyStr]] = None
+        self._captured_out = self.captureclass.EMPTY_BUFFER
+        self._captured_err = self.captureclass.EMPTY_BUFFER
+
+    def _start(self) -> None:
+        if self._capture is None:
+            self._capture = MultiCapture(
+                in_=None,
+                out=self.captureclass(1),
+                err=self.captureclass(2),
+            )
+            self._capture.start_capturing()
+
+    def close(self) -> None:
+        if self._capture is not None:
+            out, err = self._capture.pop_outerr_to_orig()
+            self._captured_out += out
+            self._captured_err += err
+            self._capture.stop_capturing()
+            self._capture = None
+
+    def readouterr(self) -> CaptureResult[AnyStr]:
+        """Read and return the captured output so far, resetting the internal
+        buffer.
+
+        :returns:
+            The captured content as a namedtuple with ``out`` and ``err``
+            string attributes.
+        """
+        captured_out, captured_err = self._captured_out, self._captured_err
+        if self._capture is not None:
+            out, err = self._capture.readouterr()
+            captured_out += out
+            captured_err += err
+        self._captured_out = self.captureclass.EMPTY_BUFFER
+        self._captured_err = self.captureclass.EMPTY_BUFFER
+        return CaptureResult(captured_out, captured_err)
+
+    def _suspend(self) -> None:
+        """Suspend this fixture's own capturing temporarily."""
+        if self._capture is not None:
+            self._capture.suspend_capturing()
+
+    def _resume(self) -> None:
+        """Resume this fixture's own capturing temporarily."""
+        if self._capture is not None:
+            self._capture.resume_capturing()
+
+    def _is_started(self) -> bool:
+        """Whether actively capturing -- not disabled or closed."""
+        if self._capture is not None:
+            return self._capture.is_started()
+        return False
+
+    @contextlib.contextmanager
+    def disabled(self) -> Generator[None, None, None]:
+        """Temporarily disable capturing while inside the ``with`` block."""
+        capmanager = self.request.config.pluginmanager.getplugin("capturemanager")
+        with capmanager.global_and_fixture_disabled():
+            yield
+
+
+# The fixtures.
+
+
+@fixture
+def capsys(request: SubRequest) -> Generator[CaptureFixture[str], None, None]:
+    r"""Enable text capturing of writes to ``sys.stdout`` and ``sys.stderr``.
+
+    The captured output is made available via ``capsys.readouterr()`` method
+    calls, which return a ``(out, err)`` namedtuple.
+    ``out`` and ``err`` will be ``text`` objects.
+
+    Returns an instance of :class:`CaptureFixture[str] <pytest.CaptureFixture>`.
+
+    Example:
+
+    .. code-block:: python
+
+        def test_output(capsys):
+            print("hello")
+            captured = capsys.readouterr()
+            assert captured.out == "hello\n"
+    """
+    capman = request.config.pluginmanager.getplugin("capturemanager")
+    capture_fixture = CaptureFixture[str](SysCapture, request, _ispytest=True)
+    capman.set_fixture(capture_fixture)
+    capture_fixture._start()
+    yield capture_fixture
+    capture_fixture.close()
+    capman.unset_fixture()
+
+
+@fixture
+def capsysbinary(request: SubRequest) -> Generator[CaptureFixture[bytes], None, None]:
+    r"""Enable bytes capturing of writes to ``sys.stdout`` and ``sys.stderr``.
+
+    The captured output is made available via ``capsysbinary.readouterr()``
+    method calls, which return a ``(out, err)`` namedtuple.
+    ``out`` and ``err`` will be ``bytes`` objects.
+
+    Returns an instance of :class:`CaptureFixture[bytes] <pytest.CaptureFixture>`.
+
+    Example:
+
+    .. code-block:: python
+
+        def test_output(capsysbinary):
+            print("hello")
+            captured = capsysbinary.readouterr()
+            assert captured.out == b"hello\n"
+    """
+    capman = request.config.pluginmanager.getplugin("capturemanager")
+    capture_fixture = CaptureFixture[bytes](SysCaptureBinary, request, _ispytest=True)
+    capman.set_fixture(capture_fixture)
+    capture_fixture._start()
+    yield capture_fixture
+    capture_fixture.close()
+    capman.unset_fixture()
+
+
+@fixture
+def capfd(request: SubRequest) -> Generator[CaptureFixture[str], None, None]:
+    r"""Enable text capturing of writes to file descriptors ``1`` and ``2``.
+
+    The captured output is made available via ``capfd.readouterr()`` method
+    calls, which return a ``(out, err)`` namedtuple.
+    ``out`` and ``err`` will be ``text`` objects.
+
+    Returns an instance of :class:`CaptureFixture[str] <pytest.CaptureFixture>`.
+
+    Example:
+
+    .. code-block:: python
+
+        def test_system_echo(capfd):
+            os.system('echo "hello"')
+            captured = capfd.readouterr()
+            assert captured.out == "hello\n"
+    """
+    capman = request.config.pluginmanager.getplugin("capturemanager")
+    capture_fixture = CaptureFixture[str](FDCapture, request, _ispytest=True)
+    capman.set_fixture(capture_fixture)
+    capture_fixture._start()
+    yield capture_fixture
+    capture_fixture.close()
+    capman.unset_fixture()
+
+
+@fixture
+def capfdbinary(request: SubRequest) -> Generator[CaptureFixture[bytes], None, None]:
+    r"""Enable bytes capturing of writes to file descriptors ``1`` and ``2``.
+
+    The captured output is made available via ``capfdbinary.readouterr()``
+    method calls, which return a ``(out, err)`` namedtuple.
+    ``out`` and ``err`` will be ``bytes`` objects.
+
+    Returns an instance of :class:`CaptureFixture[bytes] <pytest.CaptureFixture>`.
+
+    Example:
+
+    .. code-block:: python
+
+        def test_system_echo(capfdbinary):
+            os.system('echo "hello"')
+            captured = capfdbinary.readouterr()
+            assert captured.out == b"hello\n"
+
+    """
+    capman = request.config.pluginmanager.getplugin("capturemanager")
+    capture_fixture = CaptureFixture[bytes](FDCaptureBinary, request, _ispytest=True)
+    capman.set_fixture(capture_fixture)
+    capture_fixture._start()
+    yield capture_fixture
+    capture_fixture.close()
+    capman.unset_fixture()
diff --git a/env/lib/python3.8/site-packages/_pytest/compat.py b/env/lib/python3.8/site-packages/_pytest/compat.py
new file mode 100644
index 00000000..211407b2
--- /dev/null
+++ b/env/lib/python3.8/site-packages/_pytest/compat.py
@@ -0,0 +1,417 @@
+"""Python version compatibility code."""
+import enum
+import functools
+import inspect
+import os
+import sys
+from inspect import Parameter
+from inspect import signature
+from pathlib import Path
+from typing import Any
+from typing import Callable
+from typing import Generic
+from typing import NoReturn
+from typing import Optional
+from typing import Tuple
+from typing import TYPE_CHECKING
+from typing import TypeVar
+from typing import Union
+
+import attr
+
+import py
+
+# fmt: off
+# Workaround for https://github.com/sphinx-doc/sphinx/issues/10351.
+# If `overload` is imported from `compat` instead of from `typing`,
+# Sphinx doesn't recognize it as `overload` and the API docs for
+# overloaded functions look good again. But type checkers handle
+# it fine.
+# fmt: on
+if True:
+    from typing import overload as overload
+
+if TYPE_CHECKING:
+    from typing_extensions import Final
+
+
+_T = TypeVar("_T")
+_S = TypeVar("_S")
+
+#: constant to prepare valuing pylib path replacements/lazy proxies later on
+# intended for removal in pytest 8.0 or 9.0
+
+# fmt: off
+# intentional space to create a fake difference for the verification
+LEGACY_PATH = py.path. local
+# fmt: on
+
+
+def legacy_path(path: Union[str, "os.PathLike[str]"]) -> LEGACY_PATH:
+    """Internal wrapper to prepare lazy proxies for legacy_path instances"""
+    return LEGACY_PATH(path)
+
+
+# fmt: off
+# Singleton type for NOTSET, as described in:
+# https://www.python.org/dev/peps/pep-0484/#support-for-singleton-types-in-unions
+class NotSetType(enum.Enum):
+    token = 0
+NOTSET: "Final" = NotSetType.token  # noqa: E305
+# fmt: on
+
+if sys.version_info >= (3, 8):
+    import importlib.metadata
+
+    importlib_metadata = importlib.metadata
+else:
+    import importlib_metadata as importlib_metadata  # noqa: F401
+
+
+def _format_args(func: Callable[..., Any]) -> str:
+    return str(signature(func))
+
+
+def is_generator(func: object) -> bool:
+    genfunc = inspect.isgeneratorfunction(func)
+    return genfunc and not iscoroutinefunction(func)
+
+
+def iscoroutinefunction(func: object) -> bool:
+    """Return True if func is a coroutine function (a function defined with async
+    def syntax, and doesn't contain yield), or a function decorated with
+    @asyncio.coroutine.
+
+    Note: copied and modified from Python 3.5's builtin coroutines.py to avoid
+    importing asyncio directly, which in turn also initializes the "logging"
+    module as a side-effect (see issue #8).
+    """
+    return inspect.iscoroutinefunction(func) or getattr(func, "_is_coroutine", False)
+
+
+def is_async_function(func: object) -> bool:
+    """Return True if the given function seems to be an async function or
+    an async generator."""
+    return iscoroutinefunction(func) or inspect.isasyncgenfunction(func)
+
+
+def getlocation(function, curdir: Optional[str] = None) -> str:
+    function = get_real_func(function)
+    fn = Path(inspect.getfile(function))
+    lineno = function.__code__.co_firstlineno
+    if curdir is not None:
+        try:
+            relfn = fn.relative_to(curdir)
+        except ValueError:
+            pass
+        else:
+            return "%s:%d" % (relfn, lineno + 1)
+    return "%s:%d" % (fn, lineno + 1)
+
+
+def num_mock_patch_args(function) -> int:
+    """Return number of arguments used up by mock arguments (if any)."""
+    patchings = getattr(function, "patchings", None)
+    if not patchings:
+        return 0
+
+    mock_sentinel = getattr(sys.modules.get("mock"), "DEFAULT", object())
+    ut_mock_sentinel = getattr(sys.modules.get("unittest.mock"), "DEFAULT", object())
+
+    return len(
+        [
+            p
+            for p in patchings
+            if not p.attribute_name
+            and (p.new is mock_sentinel or p.new is ut_mock_sentinel)
+        ]
+    )
+
+
+def getfuncargnames(
+    function: Callable[..., Any],
+    *,
+    name: str = "",
+    is_method: bool = False,
+    cls: Optional[type] = None,
+) -> Tuple[str, ...]:
+    """Return the names of a function's mandatory arguments.
+
+    Should return the names of all function arguments that:
+    * Aren't bound to an instance or type as in instance or class methods.
+    * Don't have default values.
+    * Aren't bound with functools.partial.
+    * Aren't replaced with mocks.
+
+    The is_method and cls arguments indicate that the function should
+    be treated as a bound method even though it's not, unless (in the
+    case of cls only) the function is a static method.
+ + The name parameter should be the original name in which the function was collected. + """ + # TODO(RonnyPfannschmidt): This function should be refactored when we + # revisit fixtures. The fixture mechanism should ask the node for + # the fixture names, and not try to obtain directly from the + # function object well after collection has occurred. + + # The parameters attribute of a Signature object contains an + # ordered mapping of parameter names to Parameter instances. This + # creates a tuple of the names of the parameters that don't have + # defaults. + try: + parameters = signature(function).parameters + except (ValueError, TypeError) as e: + from _pytest.outcomes import fail + + fail( + f"Could not determine arguments of {function!r}: {e}", + pytrace=False, + ) + + arg_names = tuple( + p.name + for p in parameters.values() + if ( + p.kind is Parameter.POSITIONAL_OR_KEYWORD + or p.kind is Parameter.KEYWORD_ONLY + ) + and p.default is Parameter.empty + ) + if not name: + name = function.__name__ + + # If this function should be treated as a bound method even though + # it's passed as an unbound method or function, remove the first + # parameter name. + if is_method or ( + # Not using `getattr` because we don't want to resolve the staticmethod. + # Not using `cls.__dict__` because we want to check the entire MRO. + cls + and not isinstance( + inspect.getattr_static(cls, name, default=None), staticmethod + ) + ): + arg_names = arg_names[1:] + # Remove any names that will be replaced with mocks. + if hasattr(function, "__wrapped__"): + arg_names = arg_names[num_mock_patch_args(function) :] + return arg_names + + +def get_default_arg_names(function: Callable[..., Any]) -> Tuple[str, ...]: + # Note: this code intentionally mirrors the code at the beginning of + # getfuncargnames, to get the arguments which were excluded from its result + # because they had default values. + return tuple( + p.name + for p in signature(function).parameters.values() + if p.kind in (Parameter.POSITIONAL_OR_KEYWORD, Parameter.KEYWORD_ONLY) + and p.default is not Parameter.empty + ) + + +_non_printable_ascii_translate_table = { + i: f"\\x{i:02x}" for i in range(128) if i not in range(32, 127) +} +_non_printable_ascii_translate_table.update( + {ord("\t"): "\\t", ord("\r"): "\\r", ord("\n"): "\\n"} +) + + +def _translate_non_printable(s: str) -> str: + return s.translate(_non_printable_ascii_translate_table) + + +STRING_TYPES = bytes, str + + +def _bytes_to_ascii(val: bytes) -> str: + return val.decode("ascii", "backslashreplace") + + +def ascii_escaped(val: Union[bytes, str]) -> str: + r"""If val is pure ASCII, return it as an str, otherwise, escape + bytes objects into a sequence of escaped bytes: + + b'\xc3\xb4\xc5\xd6' -> r'\xc3\xb4\xc5\xd6' + + and escapes unicode objects into a sequence of escaped unicode + ids, e.g.: + + r'4\nV\U00043efa\x0eMXWB\x1e\u3028\u15fd\xcd\U0007d944' + + Note: + The obvious "v.decode('unicode-escape')" will return + valid UTF-8 unicode if it finds them in bytes, but we + want to return escaped bytes for any byte, even if they match + a UTF-8 string. + """ + if isinstance(val, bytes): + ret = _bytes_to_ascii(val) + else: + ret = val.encode("unicode_escape").decode("ascii") + return _translate_non_printable(ret) + + +@attr.s +class _PytestWrapper: + """Dummy wrapper around a function object for internal use only. 
+ + Used to correctly unwrap the underlying function object when we are + creating fixtures, because we wrap the function object ourselves with a + decorator to issue warnings when the fixture function is called directly. + """ + + obj = attr.ib() + + +def get_real_func(obj): + """Get the real function object of the (possibly) wrapped object by + functools.wraps or functools.partial.""" + start_obj = obj + for i in range(100): + # __pytest_wrapped__ is set by @pytest.fixture when wrapping the fixture function + # to trigger a warning if it gets called directly instead of by pytest: we don't + # want to unwrap further than this otherwise we lose useful wrappings like @mock.patch (#3774) + new_obj = getattr(obj, "__pytest_wrapped__", None) + if isinstance(new_obj, _PytestWrapper): + obj = new_obj.obj + break + new_obj = getattr(obj, "__wrapped__", None) + if new_obj is None: + break + obj = new_obj + else: + from _pytest._io.saferepr import saferepr + + raise ValueError( + ("could not find real function of {start}\nstopped at {current}").format( + start=saferepr(start_obj), current=saferepr(obj) + ) + ) + if isinstance(obj, functools.partial): + obj = obj.func + return obj + + +def get_real_method(obj, holder): + """Attempt to obtain the real function object that might be wrapping + ``obj``, while at the same time returning a bound method to ``holder`` if + the original object was a bound method.""" + try: + is_method = hasattr(obj, "__func__") + obj = get_real_func(obj) + except Exception: # pragma: no cover + return obj + if is_method and hasattr(obj, "__get__") and callable(obj.__get__): + obj = obj.__get__(holder) + return obj + + +def getimfunc(func): + try: + return func.__func__ + except AttributeError: + return func + + +def safe_getattr(object: Any, name: str, default: Any) -> Any: + """Like getattr but return default upon any Exception or any OutcomeException. + + Attribute access can potentially fail for 'evil' Python objects. + See issue #214. + It catches OutcomeException because of #2490 (issue #580), new outcomes + are derived from BaseException instead of Exception (for more details + check #2707). + """ + from _pytest.outcomes import TEST_OUTCOME + + try: + return getattr(object, name, default) + except TEST_OUTCOME: + return default + + +def safe_isclass(obj: object) -> bool: + """Ignore any exception via isinstance on Python 3.""" + try: + return inspect.isclass(obj) + except Exception: + return False + + +if TYPE_CHECKING: + if sys.version_info >= (3, 8): + from typing import final as final + else: + from typing_extensions import final as final +elif sys.version_info >= (3, 8): + from typing import final as final +else: + + def final(f): + return f + + +if sys.version_info >= (3, 8): + from functools import cached_property as cached_property +else: + from typing import Type + + class cached_property(Generic[_S, _T]): + __slots__ = ("func", "__doc__") + + def __init__(self, func: Callable[[_S], _T]) -> None: + self.func = func + self.__doc__ = func.__doc__ + + @overload + def __get__( + self, instance: None, owner: Optional[Type[_S]] = ... + ) -> "cached_property[_S, _T]": + ... + + @overload + def __get__(self, instance: _S, owner: Optional[Type[_S]] = ...) -> _T: + ... + + def __get__(self, instance, owner=None): + if instance is None: + return self + value = instance.__dict__[self.func.__name__] = self.func(instance) + return value + + +# Perform exhaustiveness checking. 
+#
+# Consider this example:
+#
+#     MyUnion = Union[int, str]
+#
+#     def handle(x: MyUnion) -> int:
+#         if isinstance(x, int):
+#             return 1
+#         elif isinstance(x, str):
+#             return 2
+#         else:
+#             raise Exception('unreachable')
+#
+# Now suppose we add a new variant:
+#
+#     MyUnion = Union[int, str, bytes]
+#
+# After doing this, we must remember to go and update the handle
+# function to handle the new variant.
+#
+# With `assert_never` we can do better:
+#
+#     # raise Exception('unreachable')
+#     return assert_never(x)
+#
+# Now, if we forget to handle the new variant, the type-checker will emit a
+# compile-time error, instead of the runtime error we would have gotten
+# previously.
+#
+# This also works for Enums (if you use `is` to compare) and Literals.
+def assert_never(value: NoReturn) -> NoReturn:
+    assert False, f"Unhandled value: {value} ({type(value).__name__})"
diff --git a/env/lib/python3.8/site-packages/_pytest/config/__init__.py b/env/lib/python3.8/site-packages/_pytest/config/__init__.py
new file mode 100644
index 00000000..25f156f8
--- /dev/null
+++ b/env/lib/python3.8/site-packages/_pytest/config/__init__.py
@@ -0,0 +1,1745 @@
+"""Command line options, ini-file and conftest.py processing."""
+import argparse
+import collections.abc
+import copy
+import enum
+import glob
+import inspect
+import os
+import re
+import shlex
+import sys
+import types
+import warnings
+from functools import lru_cache
+from pathlib import Path
+from textwrap import dedent
+from types import FunctionType
+from types import TracebackType
+from typing import Any
+from typing import Callable
+from typing import cast
+from typing import Dict
+from typing import Generator
+from typing import IO
+from typing import Iterable
+from typing import Iterator
+from typing import List
+from typing import Optional
+from typing import Sequence
+from typing import Set
+from typing import TextIO
+from typing import Tuple
+from typing import Type
+from typing import TYPE_CHECKING
+from typing import Union
+
+import attr
+from pluggy import HookimplMarker
+from pluggy import HookspecMarker
+from pluggy import PluginManager
+
+import _pytest._code
+import _pytest.deprecated
+import _pytest.hookspec
+from .exceptions import PrintHelp as PrintHelp
+from .exceptions import UsageError as UsageError
+from .findpaths import determine_setup
+from _pytest._code import ExceptionInfo
+from _pytest._code import filter_traceback
+from _pytest._io import TerminalWriter
+from _pytest.compat import final
+from _pytest.compat import importlib_metadata
+from _pytest.outcomes import fail
+from _pytest.outcomes import Skipped
+from _pytest.pathlib import absolutepath
+from _pytest.pathlib import bestrelpath
+from _pytest.pathlib import import_path
+from _pytest.pathlib import ImportMode
+from _pytest.pathlib import resolve_package_path
+from _pytest.stash import Stash
+from _pytest.warning_types import PytestConfigWarning
+from _pytest.warning_types import warn_explicit_for
+
+if TYPE_CHECKING:
+
+    from _pytest._code.code import _TracebackStyle
+    from _pytest.terminal import TerminalReporter
+    from .argparsing import Argument
+
+
+_PluggyPlugin = object
+"""A type to represent plugin objects.
+
+Plugins can be any namespace, so we can't narrow it down much, but we use an
+alias to make the intent clear.
+
+Ideally this type would be provided by pluggy itself.
+"""
+
+
+hookimpl = HookimplMarker("pytest")
+hookspec = HookspecMarker("pytest")
+
+
+@final
+class ExitCode(enum.IntEnum):
+    """Encodes the valid exit codes used by pytest.
+ + Currently users and plugins may supply other exit codes as well. + + .. versionadded:: 5.0 + """ + + #: Tests passed. + OK = 0 + #: Tests failed. + TESTS_FAILED = 1 + #: pytest was interrupted. + INTERRUPTED = 2 + #: An internal error got in the way. + INTERNAL_ERROR = 3 + #: pytest was misused. + USAGE_ERROR = 4 + #: pytest couldn't find tests. + NO_TESTS_COLLECTED = 5 + + +class ConftestImportFailure(Exception): + def __init__( + self, + path: Path, + excinfo: Tuple[Type[Exception], Exception, TracebackType], + ) -> None: + super().__init__(path, excinfo) + self.path = path + self.excinfo = excinfo + + def __str__(self) -> str: + return "{}: {} (from {})".format( + self.excinfo[0].__name__, self.excinfo[1], self.path + ) + + +def filter_traceback_for_conftest_import_failure( + entry: _pytest._code.TracebackEntry, +) -> bool: + """Filter tracebacks entries which point to pytest internals or importlib. + + Make a special case for importlib because we use it to import test modules and conftest files + in _pytest.pathlib.import_path. + """ + return filter_traceback(entry) and "importlib" not in str(entry.path).split(os.sep) + + +def main( + args: Optional[Union[List[str], "os.PathLike[str]"]] = None, + plugins: Optional[Sequence[Union[str, _PluggyPlugin]]] = None, +) -> Union[int, ExitCode]: + """Perform an in-process test run. + + :param args: List of command line arguments. + :param plugins: List of plugin objects to be auto-registered during initialization. + + :returns: An exit code. + """ + try: + try: + config = _prepareconfig(args, plugins) + except ConftestImportFailure as e: + exc_info = ExceptionInfo.from_exc_info(e.excinfo) + tw = TerminalWriter(sys.stderr) + tw.line(f"ImportError while loading conftest '{e.path}'.", red=True) + exc_info.traceback = exc_info.traceback.filter( + filter_traceback_for_conftest_import_failure + ) + exc_repr = ( + exc_info.getrepr(style="short", chain=False) + if exc_info.traceback + else exc_info.exconly() + ) + formatted_tb = str(exc_repr) + for line in formatted_tb.splitlines(): + tw.line(line.rstrip(), red=True) + return ExitCode.USAGE_ERROR + else: + try: + ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main( + config=config + ) + try: + return ExitCode(ret) + except ValueError: + return ret + finally: + config._ensure_unconfigure() + except UsageError as e: + tw = TerminalWriter(sys.stderr) + for msg in e.args: + tw.line(f"ERROR: {msg}\n", red=True) + return ExitCode.USAGE_ERROR + + +def console_main() -> int: + """The CLI entry point of pytest. + + This function is not meant for programmable use; use `main()` instead. + """ + # https://docs.python.org/3/library/signal.html#note-on-sigpipe + try: + code = main() + sys.stdout.flush() + return code + except BrokenPipeError: + # Python flushes standard streams on exit; redirect remaining output + # to devnull to avoid another BrokenPipeError at shutdown + devnull = os.open(os.devnull, os.O_WRONLY) + os.dup2(devnull, sys.stdout.fileno()) + return 1 # Python exits with error code 1 on EPIPE + + +class cmdline: # compatibility namespace + main = staticmethod(main) + + +def filename_arg(path: str, optname: str) -> str: + """Argparse type validator for filename arguments. + + :path: Path of filename. + :optname: Name of the option. + """ + if os.path.isdir(path): + raise UsageError(f"{optname} must be a filename, given: {path}") + return path + + +def directory_arg(path: str, optname: str) -> str: + """Argparse type validator for directory arguments. + + :path: Path of directory. 
+    :optname: Name of the option.
+    """
+    if not os.path.isdir(path):
+        raise UsageError(f"{optname} must be a directory, given: {path}")
+    return path
+
+
+# Plugins that cannot be disabled via "-p no:X" currently.
+essential_plugins = (
+    "mark",
+    "main",
+    "runner",
+    "fixtures",
+    "helpconfig",  # Provides -p.
+)
+
+default_plugins = essential_plugins + (
+    "python",
+    "terminal",
+    "debugging",
+    "unittest",
+    "capture",
+    "skipping",
+    "legacypath",
+    "tmpdir",
+    "monkeypatch",
+    "recwarn",
+    "pastebin",
+    "nose",
+    "assertion",
+    "junitxml",
+    "doctest",
+    "cacheprovider",
+    "freeze_support",
+    "setuponly",
+    "setupplan",
+    "stepwise",
+    "warnings",
+    "logging",
+    "reports",
+    "python_path",
+    *(["unraisableexception", "threadexception"] if sys.version_info >= (3, 8) else []),
+    "faulthandler",
+)
+
+builtin_plugins = set(default_plugins)
+builtin_plugins.add("pytester")
+builtin_plugins.add("pytester_assertions")
+
+
+def get_config(
+    args: Optional[List[str]] = None,
+    plugins: Optional[Sequence[Union[str, _PluggyPlugin]]] = None,
+) -> "Config":
+    # subsequent calls to main will create a fresh instance
+    pluginmanager = PytestPluginManager()
+    config = Config(
+        pluginmanager,
+        invocation_params=Config.InvocationParams(
+            args=args or (),
+            plugins=plugins,
+            dir=Path.cwd(),
+        ),
+    )
+
+    if args is not None:
+        # Handle any "-p no:plugin" args.
+        pluginmanager.consider_preparse(args, exclude_only=True)
+
+    for spec in default_plugins:
+        pluginmanager.import_plugin(spec)
+
+    return config
+
+
+def get_plugin_manager() -> "PytestPluginManager":
+    """Obtain a new instance of the
+    :py:class:`pytest.PytestPluginManager`, with default plugins
+    already loaded.
+
+    This function can be used by integrations with other tools, like hooking
+    into pytest to run tests inside an IDE.
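+
+    Example (illustrative only)::
+
+        pm = get_plugin_manager()
+        assert pm.hasplugin("capture")  # default plugins come pre-registered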
+ """ + return get_config().pluginmanager + + +def _prepareconfig( + args: Optional[Union[List[str], "os.PathLike[str]"]] = None, + plugins: Optional[Sequence[Union[str, _PluggyPlugin]]] = None, +) -> "Config": + if args is None: + args = sys.argv[1:] + elif isinstance(args, os.PathLike): + args = [os.fspath(args)] + elif not isinstance(args, list): + msg = ( # type:ignore[unreachable] + "`args` parameter expected to be a list of strings, got: {!r} (type: {})" + ) + raise TypeError(msg.format(args, type(args))) + + config = get_config(args, plugins) + pluginmanager = config.pluginmanager + try: + if plugins: + for plugin in plugins: + if isinstance(plugin, str): + pluginmanager.consider_pluginarg(plugin) + else: + pluginmanager.register(plugin) + config = pluginmanager.hook.pytest_cmdline_parse( + pluginmanager=pluginmanager, args=args + ) + return config + except BaseException: + config._ensure_unconfigure() + raise + + +def _get_directory(path: Path) -> Path: + """Get the directory of a path - itself if already a directory.""" + if path.is_file(): + return path.parent + else: + return path + + +def _get_legacy_hook_marks( + method: Any, + hook_type: str, + opt_names: Tuple[str, ...], +) -> Dict[str, bool]: + if TYPE_CHECKING: + # abuse typeguard from importlib to avoid massive method type union thats lacking a alias + assert inspect.isroutine(method) + known_marks: set[str] = {m.name for m in getattr(method, "pytestmark", [])} + must_warn: list[str] = [] + opts: dict[str, bool] = {} + for opt_name in opt_names: + opt_attr = getattr(method, opt_name, AttributeError) + if opt_attr is not AttributeError: + must_warn.append(f"{opt_name}={opt_attr}") + opts[opt_name] = True + elif opt_name in known_marks: + must_warn.append(f"{opt_name}=True") + opts[opt_name] = True + else: + opts[opt_name] = False + if must_warn: + hook_opts = ", ".join(must_warn) + message = _pytest.deprecated.HOOK_LEGACY_MARKING.format( + type=hook_type, + fullname=method.__qualname__, + hook_opts=hook_opts, + ) + warn_explicit_for(cast(FunctionType, method), message) + return opts + + +@final +class PytestPluginManager(PluginManager): + """A :py:class:`pluggy.PluginManager ` with + additional pytest-specific functionality: + + * Loading plugins from the command line, ``PYTEST_PLUGINS`` env variable and + ``pytest_plugins`` global variables found in plugins being loaded. + * ``conftest.py`` loading during start-up. + """ + + def __init__(self) -> None: + import _pytest.assertion + + super().__init__("pytest") + + # -- State related to local conftest plugins. + # All loaded conftest modules. + self._conftest_plugins: Set[types.ModuleType] = set() + # All conftest modules applicable for a directory. + # This includes the directory's own conftest modules as well + # as those of its parent directories. + self._dirpath2confmods: Dict[Path, List[types.ModuleType]] = {} + # Cutoff directory above which conftests are no longer discovered. + self._confcutdir: Optional[Path] = None + # If set, conftest loading is skipped. + self._noconftest = False + + # _getconftestmodules()'s call to _get_directory() causes a stat + # storm when it's called potentially thousands of times in a test + # session (#9478), often with the same path, so cache it. 
+        self._get_directory = lru_cache(256)(_get_directory)
+
+        self._duplicatepaths: Set[Path] = set()
+
+        # Plugins that were explicitly skipped with pytest.skip.
+        # List of (module name, skip reason).
+        # Previously we would issue a warning when a plugin was skipped, but
+        # since we refactored warnings as first-class citizens of Config, they
+        # are just stored here to be used later.
+        self.skipped_plugins: List[Tuple[str, str]] = []
+
+        self.add_hookspecs(_pytest.hookspec)
+        self.register(self)
+        if os.environ.get("PYTEST_DEBUG"):
+            err: IO[str] = sys.stderr
+            encoding: str = getattr(err, "encoding", "utf8")
+            try:
+                err = open(
+                    os.dup(err.fileno()),
+                    mode=err.mode,
+                    buffering=1,
+                    encoding=encoding,
+                )
+            except Exception:
+                pass
+            self.trace.root.setwriter(err.write)
+            self.enable_tracing()
+
+        # Config._consider_importhook will set a real object if required.
+        self.rewrite_hook = _pytest.assertion.DummyRewriteHook()
+        # Used to know when we are importing conftests after the pytest_configure stage.
+        self._configured = False
+
+    def parse_hookimpl_opts(self, plugin: _PluggyPlugin, name: str):
+        # pytest hooks are always prefixed with "pytest_",
+        # so we avoid accessing possibly non-readable attributes
+        # (see issue #1073).
+        if not name.startswith("pytest_"):
+            return
+        # Ignore names which can not be hooks.
+        if name == "pytest_plugins":
+            return
+
+        opts = super().parse_hookimpl_opts(plugin, name)
+        if opts is not None:
+            return opts
+
+        method = getattr(plugin, name)
+        # Consider only actual functions for hooks (#3775).
+        if not inspect.isroutine(method):
+            return
+        # Collect unmarked hooks as long as they have the ``pytest_`` prefix.
+        return _get_legacy_hook_marks(
+            method, "impl", ("tryfirst", "trylast", "optionalhook", "hookwrapper")
+        )
+
+    def parse_hookspec_opts(self, module_or_class, name: str):
+        opts = super().parse_hookspec_opts(module_or_class, name)
+        if opts is None:
+            method = getattr(module_or_class, name)
+            if name.startswith("pytest_"):
+                opts = _get_legacy_hook_marks(
+                    method,
+                    "spec",
+                    ("firstresult", "historic"),
+                )
+        return opts
+
+    def register(
+        self, plugin: _PluggyPlugin, name: Optional[str] = None
+    ) -> Optional[str]:
+        if name in _pytest.deprecated.DEPRECATED_EXTERNAL_PLUGINS:
+            warnings.warn(
+                PytestConfigWarning(
+                    "{} plugin has been merged into the core, "
+                    "please remove it from your requirements.".format(
+                        name.replace("_", "-")
+                    )
+                )
+            )
+            return None
+        ret: Optional[str] = super().register(plugin, name)
+        if ret:
+            self.hook.pytest_plugin_registered.call_historic(
+                kwargs=dict(plugin=plugin, manager=self)
+            )
+
+            if isinstance(plugin, types.ModuleType):
+                self.consider_module(plugin)
+        return ret
+
+    def getplugin(self, name: str):
+        # Support deprecated naming because plugins (xdist e.g.) use it.
+        plugin: Optional[_PluggyPlugin] = self.get_plugin(name)
+        return plugin
+
+    def hasplugin(self, name: str) -> bool:
+        """Return whether a plugin with the given name is registered."""
+        return bool(self.get_plugin(name))
+
+    def pytest_configure(self, config: "Config") -> None:
+        """:meta private:"""
+        # XXX now that the pluginmanager exposes hookimpl(tryfirst...)
+        # we should remove tryfirst/trylast as markers.
+        config.addinivalue_line(
+            "markers",
+            "tryfirst: mark a hook implementation function such that the "
+            "plugin machinery will try to call it first/as early as possible. 
" + "DEPRECATED, use @pytest.hookimpl(tryfirst=True) instead.", + ) + config.addinivalue_line( + "markers", + "trylast: mark a hook implementation function such that the " + "plugin machinery will try to call it last/as late as possible. " + "DEPRECATED, use @pytest.hookimpl(trylast=True) instead.", + ) + self._configured = True + + # + # Internal API for local conftest plugin handling. + # + def _set_initial_conftests( + self, namespace: argparse.Namespace, rootpath: Path + ) -> None: + """Load initial conftest files given a preparsed "namespace". + + As conftest files may add their own command line options which have + arguments ('--my-opt somepath') we might get some false positives. + All builtin and 3rd party plugins will have been loaded, however, so + common options will not confuse our logic here. + """ + current = Path.cwd() + self._confcutdir = ( + absolutepath(current / namespace.confcutdir) + if namespace.confcutdir + else None + ) + self._noconftest = namespace.noconftest + self._using_pyargs = namespace.pyargs + testpaths = namespace.file_or_dir + foundanchor = False + for testpath in testpaths: + path = str(testpath) + # remove node-id syntax + i = path.find("::") + if i != -1: + path = path[:i] + anchor = absolutepath(current / path) + if anchor.exists(): # we found some file object + self._try_load_conftest(anchor, namespace.importmode, rootpath) + foundanchor = True + if not foundanchor: + self._try_load_conftest(current, namespace.importmode, rootpath) + + def _is_in_confcutdir(self, path: Path) -> bool: + """Whether a path is within the confcutdir. + + When false, should not load conftest. + """ + if self._confcutdir is None: + return True + return path not in self._confcutdir.parents + + def _try_load_conftest( + self, anchor: Path, importmode: Union[str, ImportMode], rootpath: Path + ) -> None: + self._getconftestmodules(anchor, importmode, rootpath) + # let's also consider test* subdirs + if anchor.is_dir(): + for x in anchor.glob("test*"): + if x.is_dir(): + self._getconftestmodules(x, importmode, rootpath) + + def _getconftestmodules( + self, path: Path, importmode: Union[str, ImportMode], rootpath: Path + ) -> Sequence[types.ModuleType]: + if self._noconftest: + return [] + + directory = self._get_directory(path) + + # Optimization: avoid repeated searches in the same directory. + # Assumes always called with same importmode and rootpath. + existing_clist = self._dirpath2confmods.get(directory) + if existing_clist is not None: + return existing_clist + + # XXX these days we may rather want to use config.rootpath + # and allow users to opt into looking into the rootdir parent + # directories instead of requiring to specify confcutdir. 
+ clist = [] + for parent in reversed((directory, *directory.parents)): + if self._is_in_confcutdir(parent): + conftestpath = parent / "conftest.py" + if conftestpath.is_file(): + mod = self._importconftest(conftestpath, importmode, rootpath) + clist.append(mod) + self._dirpath2confmods[directory] = clist + return clist + + def _rget_with_confmod( + self, + name: str, + path: Path, + importmode: Union[str, ImportMode], + rootpath: Path, + ) -> Tuple[types.ModuleType, Any]: + modules = self._getconftestmodules(path, importmode, rootpath=rootpath) + for mod in reversed(modules): + try: + return mod, getattr(mod, name) + except AttributeError: + continue + raise KeyError(name) + + def _importconftest( + self, conftestpath: Path, importmode: Union[str, ImportMode], rootpath: Path + ) -> types.ModuleType: + existing = self.get_plugin(str(conftestpath)) + if existing is not None: + return cast(types.ModuleType, existing) + + pkgpath = resolve_package_path(conftestpath) + if pkgpath is None: + _ensure_removed_sysmodule(conftestpath.stem) + + try: + mod = import_path(conftestpath, mode=importmode, root=rootpath) + except Exception as e: + assert e.__traceback__ is not None + exc_info = (type(e), e, e.__traceback__) + raise ConftestImportFailure(conftestpath, exc_info) from e + + self._check_non_top_pytest_plugins(mod, conftestpath) + + self._conftest_plugins.add(mod) + dirpath = conftestpath.parent + if dirpath in self._dirpath2confmods: + for path, mods in self._dirpath2confmods.items(): + if dirpath in path.parents or path == dirpath: + assert mod not in mods + mods.append(mod) + self.trace(f"loading conftestmodule {mod!r}") + self.consider_conftest(mod) + return mod + + def _check_non_top_pytest_plugins( + self, + mod: types.ModuleType, + conftestpath: Path, + ) -> None: + if ( + hasattr(mod, "pytest_plugins") + and self._configured + and not self._using_pyargs + ): + msg = ( + "Defining 'pytest_plugins' in a non-top-level conftest is no longer supported:\n" + "It affects the entire test suite instead of just below the conftest as expected.\n" + " {}\n" + "Please move it to a top level conftest file at the rootdir:\n" + " {}\n" + "For more information, visit:\n" + " https://docs.pytest.org/en/stable/deprecations.html#pytest-plugins-in-non-top-level-conftest-files" + ) + fail(msg.format(conftestpath, self._confcutdir), pytrace=False) + + # + # API for bootstrapping plugin loading + # + # + + def consider_preparse( + self, args: Sequence[str], *, exclude_only: bool = False + ) -> None: + """:meta private:""" + i = 0 + n = len(args) + while i < n: + opt = args[i] + i += 1 + if isinstance(opt, str): + if opt == "-p": + try: + parg = args[i] + except IndexError: + return + i += 1 + elif opt.startswith("-p"): + parg = opt[2:] + else: + continue + if exclude_only and not parg.startswith("no:"): + continue + self.consider_pluginarg(parg) + + def consider_pluginarg(self, arg: str) -> None: + """:meta private:""" + if arg.startswith("no:"): + name = arg[3:] + if name in essential_plugins: + raise UsageError("plugin %s cannot be disabled" % name) + + # PR #4304: remove stepwise if cacheprovider is blocked. + if name == "cacheprovider": + self.set_blocked("stepwise") + self.set_blocked("pytest_stepwise") + + self.set_blocked(name) + if not name.startswith("pytest_"): + self.set_blocked("pytest_" + name) + else: + name = arg + # Unblock the plugin. None indicates that it has been blocked. + # There is no interface with pluggy for this. 
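+            # E.g. "-p no:cacheprovider -p cacheprovider" first blocks the
+            # plugin (storing None) and then unblocks it here before import.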
+            if self._name2plugin.get(name, -1) is None:
+                del self._name2plugin[name]
+            if not name.startswith("pytest_"):
+                if self._name2plugin.get("pytest_" + name, -1) is None:
+                    del self._name2plugin["pytest_" + name]
+            self.import_plugin(arg, consider_entry_points=True)
+
+    def consider_conftest(self, conftestmodule: types.ModuleType) -> None:
+        """:meta private:"""
+        self.register(conftestmodule, name=conftestmodule.__file__)
+
+    def consider_env(self) -> None:
+        """:meta private:"""
+        self._import_plugin_specs(os.environ.get("PYTEST_PLUGINS"))
+
+    def consider_module(self, mod: types.ModuleType) -> None:
+        """:meta private:"""
+        self._import_plugin_specs(getattr(mod, "pytest_plugins", []))
+
+    def _import_plugin_specs(
+        self, spec: Union[None, types.ModuleType, str, Sequence[str]]
+    ) -> None:
+        plugins = _get_plugin_specs_as_list(spec)
+        for import_spec in plugins:
+            self.import_plugin(import_spec)
+
+    def import_plugin(self, modname: str, consider_entry_points: bool = False) -> None:
+        """Import a plugin with ``modname``.
+
+        If ``consider_entry_points`` is True, entry point names are also
+        considered to find a plugin.
+        """
+        # Most often modname refers to builtin modules, e.g. "pytester",
+        # "terminal" or "capture". Those plugins are registered under their
+        # basename for historic purposes but must be imported with the
+        # _pytest prefix.
+        assert isinstance(modname, str), (
+            "module name as text required, got %r" % modname
+        )
+        if self.is_blocked(modname) or self.get_plugin(modname) is not None:
+            return
+
+        importspec = "_pytest." + modname if modname in builtin_plugins else modname
+        self.rewrite_hook.mark_rewrite(importspec)
+
+        if consider_entry_points:
+            loaded = self.load_setuptools_entrypoints("pytest11", name=modname)
+            if loaded:
+                return
+
+        try:
+            __import__(importspec)
+        except ImportError as e:
+            raise ImportError(
+                f'Error importing plugin "{modname}": {e.args[0]}'
+            ).with_traceback(e.__traceback__) from e
+
+        except Skipped as e:
+            self.skipped_plugins.append((modname, e.msg or ""))
+        else:
+            mod = sys.modules[importspec]
+            self.register(mod, modname)
+
+
+def _get_plugin_specs_as_list(
+    specs: Union[None, types.ModuleType, str, Sequence[str]]
+) -> List[str]:
+    """Parse a plugins specification into a list of plugin names."""
+    # None means empty.
+    if specs is None:
+        return []
+    # Workaround for #3899 - a submodule which happens to be called "pytest_plugins".
+    if isinstance(specs, types.ModuleType):
+        return []
+    # Comma-separated list.
+    if isinstance(specs, str):
+        return specs.split(",") if specs else []
+    # Direct specification.
+    if isinstance(specs, collections.abc.Sequence):
+        return list(specs)
+    raise UsageError(
+        "Plugins may be specified as a sequence or a ','-separated string of plugin names. Got: %r"
+        % specs
+    )
+
+
+def _ensure_removed_sysmodule(modname: str) -> None:
+    try:
+        del sys.modules[modname]
+    except KeyError:
+        pass
+
+
+class Notset:
+    def __repr__(self):
+        return "<NOTSET>"
+
+
+notset = Notset()
+
+
+def _iter_rewritable_modules(package_files: Iterable[str]) -> Iterator[str]:
+    """Given an iterable of file names in a source distribution, return the "names" that should
+    be marked for assertion rewrite.
+
+    For example the package "pytest_mock/__init__.py" should be added as "pytest_mock" in
+    the assertion rewrite mechanism.
+
+    This function has to deal with dist-info based distributions and egg based distributions
+    (which are still very much in use for "editable" installs).
+
+    Here are the file names as seen in a dist-info based distribution:
+
+        pytest_mock/__init__.py
+        pytest_mock/_version.py
+        pytest_mock/plugin.py
+        pytest_mock.egg-info/PKG-INFO
+
+    Here are the file names as seen in an egg based distribution:
+
+        src/pytest_mock/__init__.py
+        src/pytest_mock/_version.py
+        src/pytest_mock/plugin.py
+        src/pytest_mock.egg-info/PKG-INFO
+        LICENSE
+        setup.py
+
+    We have to take into account these two distribution flavors in order to determine which
+    names should be considered for assertion rewriting.
+
+    More information:
+        https://github.com/pytest-dev/pytest-mock/issues/167
+    """
+    package_files = list(package_files)
+    seen_some = False
+    for fn in package_files:
+        is_simple_module = "/" not in fn and fn.endswith(".py")
+        is_package = fn.count("/") == 1 and fn.endswith("__init__.py")
+        if is_simple_module:
+            module_name, _ = os.path.splitext(fn)
+            # we ignore "setup.py" at the root of the distribution
+            # as well as editable installation finder modules made by setuptools
+            if module_name != "setup" and not module_name.startswith("__editable__"):
+                seen_some = True
+                yield module_name
+        elif is_package:
+            package_name = os.path.dirname(fn)
+            seen_some = True
+            yield package_name
+
+    if not seen_some:
+        # At this point we did not find any packages or modules suitable for assertion
+        # rewriting, so we try again by stripping the first path component (to account for
+        # "src" based source trees for example).
+        # This approach lets us have the common case continue to be fast, as egg-distributions
+        # are rarer.
+        new_package_files = []
+        for fn in package_files:
+            parts = fn.split("/")
+            new_fn = "/".join(parts[1:])
+            if new_fn:
+                new_package_files.append(new_fn)
+        if new_package_files:
+            yield from _iter_rewritable_modules(new_package_files)
+
+
+def _args_converter(args: Iterable[str]) -> Tuple[str, ...]:
+    return tuple(args)
+
+
+@final
+class Config:
+    """Access to configuration values, pluginmanager and plugin hooks.
+
+    :param PytestPluginManager pluginmanager:
+        A pytest PluginManager.
+
+    :param InvocationParams invocation_params:
+        Object containing parameters regarding the :func:`pytest.main`
+        invocation.
+    """
+
+    @final
+    @attr.s(frozen=True, auto_attribs=True)
+    class InvocationParams:
+        """Holds parameters passed during :func:`pytest.main`.
+
+        The object attributes are read-only.
+
+        .. versionadded:: 5.1
+
+        .. note::
+
+            Note that the environment variable ``PYTEST_ADDOPTS`` and the ``addopts``
+            ini option are handled by pytest, and are not included in the ``args`` attribute.
+
+            Plugins accessing ``InvocationParams`` must be aware of that.
+        """
+
+        args: Tuple[str, ...] = attr.ib(converter=_args_converter)
+        """The command-line arguments as passed to :func:`pytest.main`."""
+        plugins: Optional[Sequence[Union[str, _PluggyPlugin]]]
+        """Extra plugins, might be `None`."""
+        dir: Path
+        """The directory from which :func:`pytest.main` was invoked."""
+
+    class ArgsSource(enum.Enum):
+        """Indicates the source of the test arguments.
+
+        .. versionadded:: 7.2
+        """
+
+        #: Command line arguments.
+        ARGS = enum.auto()
+        #: Invocation directory.
+        INCOVATION_DIR = enum.auto()
+        #: 'testpaths' configuration value.
+ TESTPATHS = enum.auto() + + def __init__( + self, + pluginmanager: PytestPluginManager, + *, + invocation_params: Optional[InvocationParams] = None, + ) -> None: + from .argparsing import Parser, FILE_OR_DIR + + if invocation_params is None: + invocation_params = self.InvocationParams( + args=(), plugins=None, dir=Path.cwd() + ) + + self.option = argparse.Namespace() + """Access to command line option as attributes. + + :type: argparse.Namespace + """ + + self.invocation_params = invocation_params + """The parameters with which pytest was invoked. + + :type: InvocationParams + """ + + _a = FILE_OR_DIR + self._parser = Parser( + usage=f"%(prog)s [options] [{_a}] [{_a}] [...]", + processopt=self._processopt, + _ispytest=True, + ) + self.pluginmanager = pluginmanager + """The plugin manager handles plugin registration and hook invocation. + + :type: PytestPluginManager + """ + + self.stash = Stash() + """A place where plugins can store information on the config for their + own use. + + :type: Stash + """ + # Deprecated alias. Was never public. Can be removed in a few releases. + self._store = self.stash + + from .compat import PathAwareHookProxy + + self.trace = self.pluginmanager.trace.root.get("config") + self.hook = PathAwareHookProxy(self.pluginmanager.hook) + self._inicache: Dict[str, Any] = {} + self._override_ini: Sequence[str] = () + self._opt2dest: Dict[str, str] = {} + self._cleanup: List[Callable[[], None]] = [] + self.pluginmanager.register(self, "pytestconfig") + self._configured = False + self.hook.pytest_addoption.call_historic( + kwargs=dict(parser=self._parser, pluginmanager=self.pluginmanager) + ) + + if TYPE_CHECKING: + from _pytest.cacheprovider import Cache + + self.cache: Optional[Cache] = None + + @property + def rootpath(self) -> Path: + """The path to the :ref:`rootdir `. + + :type: pathlib.Path + + .. versionadded:: 6.1 + """ + return self._rootpath + + @property + def inipath(self) -> Optional[Path]: + """The path to the :ref:`configfile `. + + :type: Optional[pathlib.Path] + + .. versionadded:: 6.1 + """ + return self._inipath + + def add_cleanup(self, func: Callable[[], None]) -> None: + """Add a function to be called when the config object gets out of + use (usually coinciding with pytest_unconfigure).""" + self._cleanup.append(func) + + def _do_configure(self) -> None: + assert not self._configured + self._configured = True + with warnings.catch_warnings(): + warnings.simplefilter("default") + self.hook.pytest_configure.call_historic(kwargs=dict(config=self)) + + def _ensure_unconfigure(self) -> None: + if self._configured: + self._configured = False + self.hook.pytest_unconfigure(config=self) + self.hook.pytest_configure._call_history = [] + while self._cleanup: + fin = self._cleanup.pop() + fin() + + def get_terminal_writer(self) -> TerminalWriter: + terminalreporter: TerminalReporter = self.pluginmanager.get_plugin( + "terminalreporter" + ) + return terminalreporter._tw + + def pytest_cmdline_parse( + self, pluginmanager: PytestPluginManager, args: List[str] + ) -> "Config": + try: + self.parse(args) + except UsageError: + + # Handle --version and --help here in a minimal fashion. + # This gets done via helpconfig normally, but its + # pytest_cmdline_main is not called in case of errors. 
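+            # (Illustrative, not part of the original comment:) e.g. a run such
+            # as `pytest --version --bogus-flag` fails argument parsing, but the
+            # branch below still prints the version before the UsageError is
+            # re-raised.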
+ if getattr(self.option, "version", False) or "--version" in args: + from _pytest.helpconfig import showversion + + showversion(self) + elif ( + getattr(self.option, "help", False) or "--help" in args or "-h" in args + ): + self._parser._getparser().print_help() + sys.stdout.write( + "\nNOTE: displaying only minimal help due to UsageError.\n\n" + ) + + raise + + return self + + def notify_exception( + self, + excinfo: ExceptionInfo[BaseException], + option: Optional[argparse.Namespace] = None, + ) -> None: + if option and getattr(option, "fulltrace", False): + style: _TracebackStyle = "long" + else: + style = "native" + excrepr = excinfo.getrepr( + funcargs=True, showlocals=getattr(option, "showlocals", False), style=style + ) + res = self.hook.pytest_internalerror(excrepr=excrepr, excinfo=excinfo) + if not any(res): + for line in str(excrepr).split("\n"): + sys.stderr.write("INTERNALERROR> %s\n" % line) + sys.stderr.flush() + + def cwd_relative_nodeid(self, nodeid: str) -> str: + # nodeid's are relative to the rootpath, compute relative to cwd. + if self.invocation_params.dir != self.rootpath: + fullpath = self.rootpath / nodeid + nodeid = bestrelpath(self.invocation_params.dir, fullpath) + return nodeid + + @classmethod + def fromdictargs(cls, option_dict, args) -> "Config": + """Constructor usable for subprocesses.""" + config = get_config(args) + config.option.__dict__.update(option_dict) + config.parse(args, addopts=False) + for x in config.option.plugins: + config.pluginmanager.consider_pluginarg(x) + return config + + def _processopt(self, opt: "Argument") -> None: + for name in opt._short_opts + opt._long_opts: + self._opt2dest[name] = opt.dest + + if hasattr(opt, "default"): + if not hasattr(self.option, opt.dest): + setattr(self.option, opt.dest, opt.default) + + @hookimpl(trylast=True) + def pytest_load_initial_conftests(self, early_config: "Config") -> None: + self.pluginmanager._set_initial_conftests( + early_config.known_args_namespace, rootpath=early_config.rootpath + ) + + def _initini(self, args: Sequence[str]) -> None: + ns, unknown_args = self._parser.parse_known_and_unknown_args( + args, namespace=copy.copy(self.option) + ) + rootpath, inipath, inicfg = determine_setup( + ns.inifilename, + ns.file_or_dir + unknown_args, + rootdir_cmd_arg=ns.rootdir or None, + config=self, + ) + self._rootpath = rootpath + self._inipath = inipath + self.inicfg = inicfg + self._parser.extra_info["rootdir"] = str(self.rootpath) + self._parser.extra_info["inifile"] = str(self.inipath) + self._parser.addini("addopts", "Extra command line options", "args") + self._parser.addini("minversion", "Minimally required pytest version") + self._parser.addini( + "required_plugins", + "Plugins that must be present for pytest to run", + type="args", + default=[], + ) + self._override_ini = ns.override_ini or () + + def _consider_importhook(self, args: Sequence[str]) -> None: + """Install the PEP 302 import hook if using assertion rewriting. + + Needs to parse the --assert= option from the commandline + and find all the installed plugins to mark them for rewriting + by the importhook. 
+ """ + ns, unknown_args = self._parser.parse_known_and_unknown_args(args) + mode = getattr(ns, "assertmode", "plain") + if mode == "rewrite": + import _pytest.assertion + + try: + hook = _pytest.assertion.install_importhook(self) + except SystemError: + mode = "plain" + else: + self._mark_plugins_for_rewrite(hook) + self._warn_about_missing_assertion(mode) + + def _mark_plugins_for_rewrite(self, hook) -> None: + """Given an importhook, mark for rewrite any top-level + modules or packages in the distribution package for + all pytest plugins.""" + self.pluginmanager.rewrite_hook = hook + + if os.environ.get("PYTEST_DISABLE_PLUGIN_AUTOLOAD"): + # We don't autoload from setuptools entry points, no need to continue. + return + + package_files = ( + str(file) + for dist in importlib_metadata.distributions() + if any(ep.group == "pytest11" for ep in dist.entry_points) + for file in dist.files or [] + ) + + for name in _iter_rewritable_modules(package_files): + hook.mark_rewrite(name) + + def _validate_args(self, args: List[str], via: str) -> List[str]: + """Validate known args.""" + self._parser._config_source_hint = via # type: ignore + try: + self._parser.parse_known_and_unknown_args( + args, namespace=copy.copy(self.option) + ) + finally: + del self._parser._config_source_hint # type: ignore + + return args + + def _preparse(self, args: List[str], addopts: bool = True) -> None: + if addopts: + env_addopts = os.environ.get("PYTEST_ADDOPTS", "") + if len(env_addopts): + args[:] = ( + self._validate_args(shlex.split(env_addopts), "via PYTEST_ADDOPTS") + + args + ) + self._initini(args) + if addopts: + args[:] = ( + self._validate_args(self.getini("addopts"), "via addopts config") + args + ) + + self.known_args_namespace = self._parser.parse_known_args( + args, namespace=copy.copy(self.option) + ) + self._checkversion() + self._consider_importhook(args) + self.pluginmanager.consider_preparse(args, exclude_only=False) + if not os.environ.get("PYTEST_DISABLE_PLUGIN_AUTOLOAD"): + # Don't autoload from setuptools entry point. Only explicitly specified + # plugins are going to be loaded. + self.pluginmanager.load_setuptools_entrypoints("pytest11") + self.pluginmanager.consider_env() + + self.known_args_namespace = self._parser.parse_known_args( + args, namespace=copy.copy(self.known_args_namespace) + ) + + self._validate_plugins() + self._warn_about_skipped_plugins() + + if self.known_args_namespace.strict: + self.issue_config_time_warning( + _pytest.deprecated.STRICT_OPTION, stacklevel=2 + ) + + if self.known_args_namespace.confcutdir is None and self.inipath is not None: + confcutdir = str(self.inipath.parent) + self.known_args_namespace.confcutdir = confcutdir + try: + self.hook.pytest_load_initial_conftests( + early_config=self, args=args, parser=self._parser + ) + except ConftestImportFailure as e: + if self.known_args_namespace.help or self.known_args_namespace.version: + # we don't want to prevent --help/--version to work + # so just let is pass and print a warning at the end + self.issue_config_time_warning( + PytestConfigWarning(f"could not load initial conftests: {e.path}"), + stacklevel=2, + ) + else: + raise + + @hookimpl(hookwrapper=True) + def pytest_collection(self) -> Generator[None, None, None]: + # Validate invalid ini keys after collection is done so we take in account + # options added by late-loading conftest files. 
+ yield + self._validate_config_options() + + def _checkversion(self) -> None: + import pytest + + minver = self.inicfg.get("minversion", None) + if minver: + # Imported lazily to improve start-up time. + from packaging.version import Version + + if not isinstance(minver, str): + raise pytest.UsageError( + "%s: 'minversion' must be a single value" % self.inipath + ) + + if Version(minver) > Version(pytest.__version__): + raise pytest.UsageError( + "%s: 'minversion' requires pytest-%s, actual pytest-%s'" + % ( + self.inipath, + minver, + pytest.__version__, + ) + ) + + def _validate_config_options(self) -> None: + for key in sorted(self._get_unknown_ini_keys()): + self._warn_or_fail_if_strict(f"Unknown config option: {key}\n") + + def _validate_plugins(self) -> None: + required_plugins = sorted(self.getini("required_plugins")) + if not required_plugins: + return + + # Imported lazily to improve start-up time. + from packaging.version import Version + from packaging.requirements import InvalidRequirement, Requirement + + plugin_info = self.pluginmanager.list_plugin_distinfo() + plugin_dist_info = {dist.project_name: dist.version for _, dist in plugin_info} + + missing_plugins = [] + for required_plugin in required_plugins: + try: + req = Requirement(required_plugin) + except InvalidRequirement: + missing_plugins.append(required_plugin) + continue + + if req.name not in plugin_dist_info: + missing_plugins.append(required_plugin) + elif not req.specifier.contains( + Version(plugin_dist_info[req.name]), prereleases=True + ): + missing_plugins.append(required_plugin) + + if missing_plugins: + raise UsageError( + "Missing required plugins: {}".format(", ".join(missing_plugins)), + ) + + def _warn_or_fail_if_strict(self, message: str) -> None: + if self.known_args_namespace.strict_config: + raise UsageError(message) + + self.issue_config_time_warning(PytestConfigWarning(message), stacklevel=3) + + def _get_unknown_ini_keys(self) -> List[str]: + parser_inicfg = self._parser._inidict + return [name for name in self.inicfg if name not in parser_inicfg] + + def parse(self, args: List[str], addopts: bool = True) -> None: + # Parse given cmdline arguments into this config object. + assert not hasattr( + self, "args" + ), "can only parse cmdline args at most once per Config object" + self.hook.pytest_addhooks.call_historic( + kwargs=dict(pluginmanager=self.pluginmanager) + ) + self._preparse(args, addopts=addopts) + # XXX deprecated hook: + self.hook.pytest_cmdline_preparse(config=self, args=args) + self._parser.after_preparse = True # type: ignore + try: + source = Config.ArgsSource.ARGS + args = self._parser.parse_setoption( + args, self.option, namespace=self.option + ) + if not args: + if self.invocation_params.dir == self.rootpath: + source = Config.ArgsSource.TESTPATHS + testpaths: List[str] = self.getini("testpaths") + if self.known_args_namespace.pyargs: + args = testpaths + else: + args = [] + for path in testpaths: + args.extend(sorted(glob.iglob(path, recursive=True))) + if not args: + source = Config.ArgsSource.INCOVATION_DIR + args = [str(self.invocation_params.dir)] + self.args = args + self.args_source = source + except PrintHelp: + pass + + def issue_config_time_warning(self, warning: Warning, stacklevel: int) -> None: + """Issue and handle a warning during the "configure" stage. + + During ``pytest_configure`` we can't capture warnings using the ``catch_warnings_for_item`` + function because it is not possible to have hookwrappers around ``pytest_configure``. 
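+
+        A typical call (an illustrative sketch, not an excerpt from the pytest
+        docs) from a plugin's ``pytest_configure`` hook::
+
+            config.issue_config_time_warning(
+                PytestConfigWarning("plugin X is in beta"), stacklevel=2
+            )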
+ + This function is mainly intended for plugins that need to issue warnings during + ``pytest_configure`` (or similar stages). + + :param warning: The warning instance. + :param stacklevel: stacklevel forwarded to warnings.warn. + """ + if self.pluginmanager.is_blocked("warnings"): + return + + cmdline_filters = self.known_args_namespace.pythonwarnings or [] + config_filters = self.getini("filterwarnings") + + with warnings.catch_warnings(record=True) as records: + warnings.simplefilter("always", type(warning)) + apply_warning_filters(config_filters, cmdline_filters) + warnings.warn(warning, stacklevel=stacklevel) + + if records: + frame = sys._getframe(stacklevel - 1) + location = frame.f_code.co_filename, frame.f_lineno, frame.f_code.co_name + self.hook.pytest_warning_recorded.call_historic( + kwargs=dict( + warning_message=records[0], + when="config", + nodeid="", + location=location, + ) + ) + + def addinivalue_line(self, name: str, line: str) -> None: + """Add a line to an ini-file option. The option must have been + declared but might not yet be set in which case the line becomes + the first line in its value.""" + x = self.getini(name) + assert isinstance(x, list) + x.append(line) # modifies the cached list inline + + def getini(self, name: str): + """Return configuration value from an :ref:`ini file `. + + If the specified name hasn't been registered through a prior + :func:`parser.addini ` call (usually from a + plugin), a ValueError is raised. + """ + try: + return self._inicache[name] + except KeyError: + self._inicache[name] = val = self._getini(name) + return val + + # Meant for easy monkeypatching by legacypath plugin. + # Can be inlined back (with no cover removed) once legacypath is gone. + def _getini_unknown_type(self, name: str, type: str, value: Union[str, List[str]]): + msg = f"unknown configuration type: {type}" + raise ValueError(msg, value) # pragma: no cover + + def _getini(self, name: str): + try: + description, type, default = self._parser._inidict[name] + except KeyError as e: + raise ValueError(f"unknown configuration value: {name!r}") from e + override_value = self._get_override_ini_value(name) + if override_value is None: + try: + value = self.inicfg[name] + except KeyError: + if default is not None: + return default + if type is None: + return "" + return [] + else: + value = override_value + # Coerce the values based on types. + # + # Note: some coercions are only required if we are reading from .ini files, because + # the file format doesn't contain type information, but when reading from toml we will + # get either str or list of str values (see _parse_ini_config_from_pyproject_toml). + # For example: + # + # ini: + # a_line_list = "tests acceptance" + # in this case, we need to split the string to obtain a list of strings. + # + # toml: + # a_line_list = ["tests", "acceptance"] + # in this case, we already have a list ready to use. + # + if type == "paths": + # TODO: This assert is probably not valid in all cases. 
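+            # (Illustrative example:) for a "paths"-typed option such as
+            # `pythonpath = src tests` in an ini file, each entry is resolved
+            # relative to the directory containing that ini file.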
+ assert self.inipath is not None + dp = self.inipath.parent + input_values = shlex.split(value) if isinstance(value, str) else value + return [dp / x for x in input_values] + elif type == "args": + return shlex.split(value) if isinstance(value, str) else value + elif type == "linelist": + if isinstance(value, str): + return [t for t in map(lambda x: x.strip(), value.split("\n")) if t] + else: + return value + elif type == "bool": + return _strtobool(str(value).strip()) + elif type == "string": + return value + elif type is None: + return value + else: + return self._getini_unknown_type(name, type, value) + + def _getconftest_pathlist( + self, name: str, path: Path, rootpath: Path + ) -> Optional[List[Path]]: + try: + mod, relroots = self.pluginmanager._rget_with_confmod( + name, path, self.getoption("importmode"), rootpath + ) + except KeyError: + return None + assert mod.__file__ is not None + modpath = Path(mod.__file__).parent + values: List[Path] = [] + for relroot in relroots: + if isinstance(relroot, os.PathLike): + relroot = Path(relroot) + else: + relroot = relroot.replace("/", os.sep) + relroot = absolutepath(modpath / relroot) + values.append(relroot) + return values + + def _get_override_ini_value(self, name: str) -> Optional[str]: + value = None + # override_ini is a list of "ini=value" options. + # Always use the last item if multiple values are set for same ini-name, + # e.g. -o foo=bar1 -o foo=bar2 will set foo to bar2. + for ini_config in self._override_ini: + try: + key, user_ini_value = ini_config.split("=", 1) + except ValueError as e: + raise UsageError( + "-o/--override-ini expects option=value style (got: {!r}).".format( + ini_config + ) + ) from e + else: + if key == name: + value = user_ini_value + return value + + def getoption(self, name: str, default=notset, skip: bool = False): + """Return command line option value. + + :param name: Name of the option. You may also specify + the literal ``--OPT`` option instead of the "dest" option name. + :param default: Default value if no option of that name exists. + :param skip: If True, raise pytest.skip if option does not exists + or has a None value. + """ + name = self._opt2dest.get(name, name) + try: + val = getattr(self.option, name) + if val is None and skip: + raise AttributeError(name) + return val + except AttributeError as e: + if default is not notset: + return default + if skip: + import pytest + + pytest.skip(f"no {name!r} option found") + raise ValueError(f"no option named {name!r}") from e + + def getvalue(self, name: str, path=None): + """Deprecated, use getoption() instead.""" + return self.getoption(name) + + def getvalueorskip(self, name: str, path=None): + """Deprecated, use getoption(skip=True) instead.""" + return self.getoption(name, skip=True) + + def _warn_about_missing_assertion(self, mode: str) -> None: + if not _assertion_supported(): + if mode == "plain": + warning_text = ( + "ASSERTIONS ARE NOT EXECUTED" + " and FAILING TESTS WILL PASS. Are you" + " using python -O?" 
+ ) + else: + warning_text = ( + "assertions not in test modules or" + " plugins will be ignored" + " because assert statements are not executed " + "by the underlying Python interpreter " + "(are you using python -O?)\n" + ) + self.issue_config_time_warning( + PytestConfigWarning(warning_text), + stacklevel=3, + ) + + def _warn_about_skipped_plugins(self) -> None: + for module_name, msg in self.pluginmanager.skipped_plugins: + self.issue_config_time_warning( + PytestConfigWarning(f"skipped plugin {module_name!r}: {msg}"), + stacklevel=2, + ) + + +def _assertion_supported() -> bool: + try: + assert False + except AssertionError: + return True + else: + return False # type: ignore[unreachable] + + +def create_terminal_writer( + config: Config, file: Optional[TextIO] = None +) -> TerminalWriter: + """Create a TerminalWriter instance configured according to the options + in the config object. + + Every code which requires a TerminalWriter object and has access to a + config object should use this function. + """ + tw = TerminalWriter(file=file) + + if config.option.color == "yes": + tw.hasmarkup = True + elif config.option.color == "no": + tw.hasmarkup = False + + if config.option.code_highlight == "yes": + tw.code_highlight = True + elif config.option.code_highlight == "no": + tw.code_highlight = False + + return tw + + +def _strtobool(val: str) -> bool: + """Convert a string representation of truth to True or False. + + True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values + are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if + 'val' is anything else. + + .. note:: Copied from distutils.util. + """ + val = val.lower() + if val in ("y", "yes", "t", "true", "on", "1"): + return True + elif val in ("n", "no", "f", "false", "off", "0"): + return False + else: + raise ValueError(f"invalid truth value {val!r}") + + +@lru_cache(maxsize=50) +def parse_warning_filter( + arg: str, *, escape: bool +) -> Tuple["warnings._ActionKind", str, Type[Warning], str, int]: + """Parse a warnings filter string. + + This is copied from warnings._setoption with the following changes: + + * Does not apply the filter. + * Escaping is optional. + * Raises UsageError so we get nice error messages on failure. 
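+
+    For example (an illustrative addition), with ``escape=False`` the string
+    ``"ignore::DeprecationWarning:mymod"`` parses to
+    ``("ignore", "", DeprecationWarning, "mymod", 0)``.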
+ """ + __tracebackhide__ = True + error_template = dedent( + f"""\ + while parsing the following warning configuration: + + {arg} + + This error occurred: + + {{error}} + """ + ) + + parts = arg.split(":") + if len(parts) > 5: + doc_url = ( + "https://docs.python.org/3/library/warnings.html#describing-warning-filters" + ) + error = dedent( + f"""\ + Too many fields ({len(parts)}), expected at most 5 separated by colons: + + action:message:category:module:line + + For more information please consult: {doc_url} + """ + ) + raise UsageError(error_template.format(error=error)) + + while len(parts) < 5: + parts.append("") + action_, message, category_, module, lineno_ = (s.strip() for s in parts) + try: + action: "warnings._ActionKind" = warnings._getaction(action_) # type: ignore[attr-defined] + except warnings._OptionError as e: + raise UsageError(error_template.format(error=str(e))) + try: + category: Type[Warning] = _resolve_warning_category(category_) + except Exception: + exc_info = ExceptionInfo.from_current() + exception_text = exc_info.getrepr(style="native") + raise UsageError(error_template.format(error=exception_text)) + if message and escape: + message = re.escape(message) + if module and escape: + module = re.escape(module) + r"\Z" + if lineno_: + try: + lineno = int(lineno_) + if lineno < 0: + raise ValueError("number is negative") + except ValueError as e: + raise UsageError( + error_template.format(error=f"invalid lineno {lineno_!r}: {e}") + ) + else: + lineno = 0 + return action, message, category, module, lineno + + +def _resolve_warning_category(category: str) -> Type[Warning]: + """ + Copied from warnings._getcategory, but changed so it lets exceptions (specially ImportErrors) + propagate so we can get access to their tracebacks (#9218). + """ + __tracebackhide__ = True + if not category: + return Warning + + if "." not in category: + import builtins as m + + klass = category + else: + module, _, klass = category.rpartition(".") + m = __import__(module, None, None, [klass]) + cat = getattr(m, klass) + if not issubclass(cat, Warning): + raise UsageError(f"{cat} is not a Warning subclass") + return cast(Type[Warning], cat) + + +def apply_warning_filters( + config_filters: Iterable[str], cmdline_filters: Iterable[str] +) -> None: + """Applies pytest-configured filters to the warnings module""" + # Filters should have this precedence: cmdline options, config. + # Filters should be applied in the inverse order of precedence. 
+ for arg in config_filters: + warnings.filterwarnings(*parse_warning_filter(arg, escape=False)) + + for arg in cmdline_filters: + warnings.filterwarnings(*parse_warning_filter(arg, escape=True)) diff --git a/env/lib/python3.8/site-packages/_pytest/config/argparsing.py b/env/lib/python3.8/site-packages/_pytest/config/argparsing.py new file mode 100644 index 00000000..d3f01916 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/config/argparsing.py @@ -0,0 +1,551 @@ +import argparse +import os +import sys +import warnings +from gettext import gettext +from typing import Any +from typing import Callable +from typing import cast +from typing import Dict +from typing import List +from typing import Mapping +from typing import NoReturn +from typing import Optional +from typing import Sequence +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +import _pytest._io +from _pytest.compat import final +from _pytest.config.exceptions import UsageError +from _pytest.deprecated import ARGUMENT_PERCENT_DEFAULT +from _pytest.deprecated import ARGUMENT_TYPE_STR +from _pytest.deprecated import ARGUMENT_TYPE_STR_CHOICE +from _pytest.deprecated import check_ispytest + +if TYPE_CHECKING: + from typing_extensions import Literal + +FILE_OR_DIR = "file_or_dir" + + +@final +class Parser: + """Parser for command line arguments and ini-file values. + + :ivar extra_info: Dict of generic param -> value to display in case + there's an error processing the command line arguments. + """ + + prog: Optional[str] = None + + def __init__( + self, + usage: Optional[str] = None, + processopt: Optional[Callable[["Argument"], None]] = None, + *, + _ispytest: bool = False, + ) -> None: + check_ispytest(_ispytest) + self._anonymous = OptionGroup("Custom options", parser=self, _ispytest=True) + self._groups: List[OptionGroup] = [] + self._processopt = processopt + self._usage = usage + self._inidict: Dict[str, Tuple[str, Optional[str], Any]] = {} + self._ininames: List[str] = [] + self.extra_info: Dict[str, Any] = {} + + def processoption(self, option: "Argument") -> None: + if self._processopt: + if option.dest: + self._processopt(option) + + def getgroup( + self, name: str, description: str = "", after: Optional[str] = None + ) -> "OptionGroup": + """Get (or create) a named option Group. + + :param name: Name of the option group. + :param description: Long description for --help output. + :param after: Name of another group, used for ordering --help output. + :returns: The option group. + + The returned group object has an ``addoption`` method with the same + signature as :func:`parser.addoption ` but + will be shown in the respective group in the output of + ``pytest --help``. + """ + for group in self._groups: + if group.name == name: + return group + group = OptionGroup(name, description, parser=self, _ispytest=True) + i = 0 + for i, grp in enumerate(self._groups): + if grp.name == after: + break + self._groups.insert(i + 1, group) + return group + + def addoption(self, *opts: str, **attrs: Any) -> None: + """Register a command line option. + + :param opts: + Option names, can be short or long options. + :param attrs: + Same attributes as the argparse library's :py:func:`add_argument() + ` function accepts. + + After command line parsing, options are available on the pytest config + object via ``config.option.NAME`` where ``NAME`` is usually set + by passing a ``dest`` attribute, for example + ``addoption("--long", dest="NAME", ...)``. 
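+
+        For example (an illustrative sketch, not taken from the pytest docs),
+        a plugin could register a flag from its ``pytest_addoption`` hook::
+
+            def pytest_addoption(parser):
+                parser.addoption(
+                    "--runslow", action="store_true", default=False,
+                    help="also run tests marked as slow",
+                )
+
+        and read it back later with ``config.getoption("runslow")``.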
+ """ + self._anonymous.addoption(*opts, **attrs) + + def parse( + self, + args: Sequence[Union[str, "os.PathLike[str]"]], + namespace: Optional[argparse.Namespace] = None, + ) -> argparse.Namespace: + from _pytest._argcomplete import try_argcomplete + + self.optparser = self._getparser() + try_argcomplete(self.optparser) + strargs = [os.fspath(x) for x in args] + return self.optparser.parse_args(strargs, namespace=namespace) + + def _getparser(self) -> "MyOptionParser": + from _pytest._argcomplete import filescompleter + + optparser = MyOptionParser(self, self.extra_info, prog=self.prog) + groups = self._groups + [self._anonymous] + for group in groups: + if group.options: + desc = group.description or group.name + arggroup = optparser.add_argument_group(desc) + for option in group.options: + n = option.names() + a = option.attrs() + arggroup.add_argument(*n, **a) + file_or_dir_arg = optparser.add_argument(FILE_OR_DIR, nargs="*") + # bash like autocompletion for dirs (appending '/') + # Type ignored because typeshed doesn't know about argcomplete. + file_or_dir_arg.completer = filescompleter # type: ignore + return optparser + + def parse_setoption( + self, + args: Sequence[Union[str, "os.PathLike[str]"]], + option: argparse.Namespace, + namespace: Optional[argparse.Namespace] = None, + ) -> List[str]: + parsedoption = self.parse(args, namespace=namespace) + for name, value in parsedoption.__dict__.items(): + setattr(option, name, value) + return cast(List[str], getattr(parsedoption, FILE_OR_DIR)) + + def parse_known_args( + self, + args: Sequence[Union[str, "os.PathLike[str]"]], + namespace: Optional[argparse.Namespace] = None, + ) -> argparse.Namespace: + """Parse the known arguments at this point. + + :returns: An argparse namespace object. + """ + return self.parse_known_and_unknown_args(args, namespace=namespace)[0] + + def parse_known_and_unknown_args( + self, + args: Sequence[Union[str, "os.PathLike[str]"]], + namespace: Optional[argparse.Namespace] = None, + ) -> Tuple[argparse.Namespace, List[str]]: + """Parse the known arguments at this point, and also return the + remaining unknown arguments. + + :returns: + A tuple containing an argparse namespace object for the known + arguments, and a list of the unknown arguments. + """ + optparser = self._getparser() + strargs = [os.fspath(x) for x in args] + return optparser.parse_known_args(strargs, namespace=namespace) + + def addini( + self, + name: str, + help: str, + type: Optional[ + "Literal['string', 'paths', 'pathlist', 'args', 'linelist', 'bool']" + ] = None, + default: Any = None, + ) -> None: + """Register an ini-file option. + + :param name: + Name of the ini-variable. + :param type: + Type of the variable. Can be: + + * ``string``: a string + * ``bool``: a boolean + * ``args``: a list of strings, separated as in a shell + * ``linelist``: a list of strings, separated by line breaks + * ``paths``: a list of :class:`pathlib.Path`, separated as in a shell + * ``pathlist``: a list of ``py.path``, separated as in a shell + + .. versionadded:: 7.0 + The ``paths`` variable type. + + Defaults to ``string`` if ``None`` or not passed. + :param default: + Default value if no ini-file option exists but is queried. + + The value of ini-variables can be retrieved via a call to + :py:func:`config.getini(name) `. 
+ """ + assert type in (None, "string", "paths", "pathlist", "args", "linelist", "bool") + self._inidict[name] = (help, type, default) + self._ininames.append(name) + + +class ArgumentError(Exception): + """Raised if an Argument instance is created with invalid or + inconsistent arguments.""" + + def __init__(self, msg: str, option: Union["Argument", str]) -> None: + self.msg = msg + self.option_id = str(option) + + def __str__(self) -> str: + if self.option_id: + return f"option {self.option_id}: {self.msg}" + else: + return self.msg + + +class Argument: + """Class that mimics the necessary behaviour of optparse.Option. + + It's currently a least effort implementation and ignoring choices + and integer prefixes. + + https://docs.python.org/3/library/optparse.html#optparse-standard-option-types + """ + + _typ_map = {"int": int, "string": str, "float": float, "complex": complex} + + def __init__(self, *names: str, **attrs: Any) -> None: + """Store params in private vars for use in add_argument.""" + self._attrs = attrs + self._short_opts: List[str] = [] + self._long_opts: List[str] = [] + if "%default" in (attrs.get("help") or ""): + warnings.warn(ARGUMENT_PERCENT_DEFAULT, stacklevel=3) + try: + typ = attrs["type"] + except KeyError: + pass + else: + # This might raise a keyerror as well, don't want to catch that. + if isinstance(typ, str): + if typ == "choice": + warnings.warn( + ARGUMENT_TYPE_STR_CHOICE.format(typ=typ, names=names), + stacklevel=4, + ) + # argparse expects a type here take it from + # the type of the first element + attrs["type"] = type(attrs["choices"][0]) + else: + warnings.warn( + ARGUMENT_TYPE_STR.format(typ=typ, names=names), stacklevel=4 + ) + attrs["type"] = Argument._typ_map[typ] + # Used in test_parseopt -> test_parse_defaultgetter. + self.type = attrs["type"] + else: + self.type = typ + try: + # Attribute existence is tested in Config._processopt. + self.default = attrs["default"] + except KeyError: + pass + self._set_opt_strings(names) + dest: Optional[str] = attrs.get("dest") + if dest: + self.dest = dest + elif self._long_opts: + self.dest = self._long_opts[0][2:].replace("-", "_") + else: + try: + self.dest = self._short_opts[0][1:] + except IndexError as e: + self.dest = "???" # Needed for the error repr. + raise ArgumentError("need a long or short option", self) from e + + def names(self) -> List[str]: + return self._short_opts + self._long_opts + + def attrs(self) -> Mapping[str, Any]: + # Update any attributes set by processopt. + attrs = "default dest help".split() + attrs.append(self.dest) + for attr in attrs: + try: + self._attrs[attr] = getattr(self, attr) + except AttributeError: + pass + if self._attrs.get("help"): + a = self._attrs["help"] + a = a.replace("%default", "%(default)s") + # a = a.replace('%prog', '%(prog)s') + self._attrs["help"] = a + return self._attrs + + def _set_opt_strings(self, opts: Sequence[str]) -> None: + """Directly from optparse. + + Might not be necessary as this is passed to argparse later on. 
+ """ + for opt in opts: + if len(opt) < 2: + raise ArgumentError( + "invalid option string %r: " + "must be at least two characters long" % opt, + self, + ) + elif len(opt) == 2: + if not (opt[0] == "-" and opt[1] != "-"): + raise ArgumentError( + "invalid short option string %r: " + "must be of the form -x, (x any non-dash char)" % opt, + self, + ) + self._short_opts.append(opt) + else: + if not (opt[0:2] == "--" and opt[2] != "-"): + raise ArgumentError( + "invalid long option string %r: " + "must start with --, followed by non-dash" % opt, + self, + ) + self._long_opts.append(opt) + + def __repr__(self) -> str: + args: List[str] = [] + if self._short_opts: + args += ["_short_opts: " + repr(self._short_opts)] + if self._long_opts: + args += ["_long_opts: " + repr(self._long_opts)] + args += ["dest: " + repr(self.dest)] + if hasattr(self, "type"): + args += ["type: " + repr(self.type)] + if hasattr(self, "default"): + args += ["default: " + repr(self.default)] + return "Argument({})".format(", ".join(args)) + + +class OptionGroup: + """A group of options shown in its own section.""" + + def __init__( + self, + name: str, + description: str = "", + parser: Optional[Parser] = None, + *, + _ispytest: bool = False, + ) -> None: + check_ispytest(_ispytest) + self.name = name + self.description = description + self.options: List[Argument] = [] + self.parser = parser + + def addoption(self, *opts: str, **attrs: Any) -> None: + """Add an option to this group. + + If a shortened version of a long option is specified, it will + be suppressed in the help. ``addoption('--twowords', '--two-words')`` + results in help showing ``--two-words`` only, but ``--twowords`` gets + accepted **and** the automatic destination is in ``args.twowords``. + + :param opts: + Option names, can be short or long options. + :param attrs: + Same attributes as the argparse library's :py:func:`add_argument() + ` function accepts. + """ + conflict = set(opts).intersection( + name for opt in self.options for name in opt.names() + ) + if conflict: + raise ValueError("option names %s already added" % conflict) + option = Argument(*opts, **attrs) + self._addoption_instance(option, shortupper=False) + + def _addoption(self, *opts: str, **attrs: Any) -> None: + option = Argument(*opts, **attrs) + self._addoption_instance(option, shortupper=True) + + def _addoption_instance(self, option: "Argument", shortupper: bool = False) -> None: + if not shortupper: + for opt in option._short_opts: + if opt[0] == "-" and opt[1].islower(): + raise ValueError("lowercase shortoptions reserved") + if self.parser: + self.parser.processoption(option) + self.options.append(option) + + +class MyOptionParser(argparse.ArgumentParser): + def __init__( + self, + parser: Parser, + extra_info: Optional[Dict[str, Any]] = None, + prog: Optional[str] = None, + ) -> None: + self._parser = parser + super().__init__( + prog=prog, + usage=parser._usage, + add_help=False, + formatter_class=DropShorterLongHelpFormatter, + allow_abbrev=False, + ) + # extra_info is a dict of (param -> value) to display if there's + # an usage error to provide more contextual information to the user. + self.extra_info = extra_info if extra_info else {} + + def error(self, message: str) -> NoReturn: + """Transform argparse error message into UsageError.""" + msg = f"{self.prog}: error: {message}" + + if hasattr(self._parser, "_config_source_hint"): + # Type ignored because the attribute is set dynamically. 
+ msg = f"{msg} ({self._parser._config_source_hint})" # type: ignore + + raise UsageError(self.format_usage() + msg) + + # Type ignored because typeshed has a very complex type in the superclass. + def parse_args( # type: ignore + self, + args: Optional[Sequence[str]] = None, + namespace: Optional[argparse.Namespace] = None, + ) -> argparse.Namespace: + """Allow splitting of positional arguments.""" + parsed, unrecognized = self.parse_known_args(args, namespace) + if unrecognized: + for arg in unrecognized: + if arg and arg[0] == "-": + lines = ["unrecognized arguments: %s" % (" ".join(unrecognized))] + for k, v in sorted(self.extra_info.items()): + lines.append(f" {k}: {v}") + self.error("\n".join(lines)) + getattr(parsed, FILE_OR_DIR).extend(unrecognized) + return parsed + + if sys.version_info[:2] < (3, 9): # pragma: no cover + # Backport of https://github.com/python/cpython/pull/14316 so we can + # disable long --argument abbreviations without breaking short flags. + def _parse_optional( + self, arg_string: str + ) -> Optional[Tuple[Optional[argparse.Action], str, Optional[str]]]: + if not arg_string: + return None + if not arg_string[0] in self.prefix_chars: + return None + if arg_string in self._option_string_actions: + action = self._option_string_actions[arg_string] + return action, arg_string, None + if len(arg_string) == 1: + return None + if "=" in arg_string: + option_string, explicit_arg = arg_string.split("=", 1) + if option_string in self._option_string_actions: + action = self._option_string_actions[option_string] + return action, option_string, explicit_arg + if self.allow_abbrev or not arg_string.startswith("--"): + option_tuples = self._get_option_tuples(arg_string) + if len(option_tuples) > 1: + msg = gettext( + "ambiguous option: %(option)s could match %(matches)s" + ) + options = ", ".join(option for _, option, _ in option_tuples) + self.error(msg % {"option": arg_string, "matches": options}) + elif len(option_tuples) == 1: + (option_tuple,) = option_tuples + return option_tuple + if self._negative_number_matcher.match(arg_string): + if not self._has_negative_number_optionals: + return None + if " " in arg_string: + return None + return None, arg_string, None + + +class DropShorterLongHelpFormatter(argparse.HelpFormatter): + """Shorten help for long options that differ only in extra hyphens. + + - Collapse **long** options that are the same except for extra hyphens. + - Shortcut if there are only two options and one of them is a short one. + - Cache result on the action object as this is called at least 2 times. + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + # Use more accurate terminal width. 
+ if "width" not in kwargs: + kwargs["width"] = _pytest._io.get_terminal_width() + super().__init__(*args, **kwargs) + + def _format_action_invocation(self, action: argparse.Action) -> str: + orgstr = super()._format_action_invocation(action) + if orgstr and orgstr[0] != "-": # only optional arguments + return orgstr + res: Optional[str] = getattr(action, "_formatted_action_invocation", None) + if res: + return res + options = orgstr.split(", ") + if len(options) == 2 and (len(options[0]) == 2 or len(options[1]) == 2): + # a shortcut for '-h, --help' or '--abc', '-a' + action._formatted_action_invocation = orgstr # type: ignore + return orgstr + return_list = [] + short_long: Dict[str, str] = {} + for option in options: + if len(option) == 2 or option[2] == " ": + continue + if not option.startswith("--"): + raise ArgumentError( + 'long optional argument without "--": [%s]' % (option), option + ) + xxoption = option[2:] + shortened = xxoption.replace("-", "") + if shortened not in short_long or len(short_long[shortened]) < len( + xxoption + ): + short_long[shortened] = xxoption + # now short_long has been filled out to the longest with dashes + # **and** we keep the right option ordering from add_argument + for option in options: + if len(option) == 2 or option[2] == " ": + return_list.append(option) + if option[2:] == short_long.get(option.replace("-", "")): + return_list.append(option.replace(" ", "=", 1)) + formatted_action_invocation = ", ".join(return_list) + action._formatted_action_invocation = formatted_action_invocation # type: ignore + return formatted_action_invocation + + def _split_lines(self, text, width): + """Wrap lines after splitting on original newlines. + + This allows to have explicit line breaks in the help text. + """ + import textwrap + + lines = [] + for line in text.splitlines(): + lines.extend(textwrap.wrap(line.strip(), width)) + return lines diff --git a/env/lib/python3.8/site-packages/_pytest/config/compat.py b/env/lib/python3.8/site-packages/_pytest/config/compat.py new file mode 100644 index 00000000..ba267d21 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/config/compat.py @@ -0,0 +1,71 @@ +import functools +import warnings +from pathlib import Path +from typing import Optional + +from ..compat import LEGACY_PATH +from ..compat import legacy_path +from ..deprecated import HOOK_LEGACY_PATH_ARG +from _pytest.nodes import _check_path + +# hookname: (Path, LEGACY_PATH) +imply_paths_hooks = { + "pytest_ignore_collect": ("collection_path", "path"), + "pytest_collect_file": ("file_path", "path"), + "pytest_pycollect_makemodule": ("module_path", "path"), + "pytest_report_header": ("start_path", "startdir"), + "pytest_report_collectionfinish": ("start_path", "startdir"), +} + + +class PathAwareHookProxy: + """ + this helper wraps around hook callers + until pluggy supports fixingcalls, this one will do + + it currently doesn't return full hook caller proxies for fixed hooks, + this may have to be changed later depending on bugs + """ + + def __init__(self, hook_caller): + self.__hook_caller = hook_caller + + def __dir__(self): + return dir(self.__hook_caller) + + def __getattr__(self, key, _wraps=functools.wraps): + hook = getattr(self.__hook_caller, key) + if key not in imply_paths_hooks: + self.__dict__[key] = hook + return hook + else: + path_var, fspath_var = imply_paths_hooks[key] + + @_wraps(hook) + def fixed_hook(**kw): + + path_value: Optional[Path] = kw.pop(path_var, None) + fspath_value: Optional[LEGACY_PATH] = kw.pop(fspath_var, None) + if 
fspath_value is not None: + warnings.warn( + HOOK_LEGACY_PATH_ARG.format( + pylib_path_arg=fspath_var, pathlib_path_arg=path_var + ), + stacklevel=2, + ) + if path_value is not None: + if fspath_value is not None: + _check_path(path_value, fspath_value) + else: + fspath_value = legacy_path(path_value) + else: + assert fspath_value is not None + path_value = Path(fspath_value) + + kw[path_var] = path_value + kw[fspath_var] = fspath_value + return hook(**kw) + + fixed_hook.__name__ = key + self.__dict__[key] = fixed_hook + return fixed_hook diff --git a/env/lib/python3.8/site-packages/_pytest/config/exceptions.py b/env/lib/python3.8/site-packages/_pytest/config/exceptions.py new file mode 100644 index 00000000..4f1320e7 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/config/exceptions.py @@ -0,0 +1,11 @@ +from _pytest.compat import final + + +@final +class UsageError(Exception): + """Error in pytest usage or invocation.""" + + +class PrintHelp(Exception): + """Raised when pytest should print its help to skip the rest of the + argument parsing and validation.""" diff --git a/env/lib/python3.8/site-packages/_pytest/config/findpaths.py b/env/lib/python3.8/site-packages/_pytest/config/findpaths.py new file mode 100644 index 00000000..43c23677 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/config/findpaths.py @@ -0,0 +1,218 @@ +import os +import sys +from pathlib import Path +from typing import Dict +from typing import Iterable +from typing import List +from typing import Optional +from typing import Sequence +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +import iniconfig + +from .exceptions import UsageError +from _pytest.outcomes import fail +from _pytest.pathlib import absolutepath +from _pytest.pathlib import commonpath + +if TYPE_CHECKING: + from . import Config + + +def _parse_ini_config(path: Path) -> iniconfig.IniConfig: + """Parse the given generic '.ini' file using legacy IniConfig parser, returning + the parsed object. + + Raise UsageError if the file cannot be parsed. + """ + try: + return iniconfig.IniConfig(str(path)) + except iniconfig.ParseError as exc: + raise UsageError(str(exc)) from exc + + +def load_config_dict_from_file( + filepath: Path, +) -> Optional[Dict[str, Union[str, List[str]]]]: + """Load pytest configuration from the given file path, if supported. + + Return None if the file does not contain valid pytest configuration. + """ + + # Configuration from ini files are obtained from the [pytest] section, if present. + if filepath.suffix == ".ini": + iniconfig = _parse_ini_config(filepath) + + if "pytest" in iniconfig: + return dict(iniconfig["pytest"].items()) + else: + # "pytest.ini" files are always the source of configuration, even if empty. + if filepath.name == "pytest.ini": + return {} + + # '.cfg' files are considered if they contain a "[tool:pytest]" section. + elif filepath.suffix == ".cfg": + iniconfig = _parse_ini_config(filepath) + + if "tool:pytest" in iniconfig.sections: + return dict(iniconfig["tool:pytest"].items()) + elif "pytest" in iniconfig.sections: + # If a setup.cfg contains a "[pytest]" section, we raise a failure to indicate users that + # plain "[pytest]" sections in setup.cfg files is no longer supported (#3086). + fail(CFG_PYTEST_SECTION.format(filename="setup.cfg"), pytrace=False) + + # '.toml' files are considered if they contain a [tool.pytest.ini_options] table. 
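+    # (Illustrative:) e.g. a pyproject.toml containing
+    #
+    #   [tool.pytest.ini_options]
+    #   addopts = "-ra"
+    #   testpaths = ["tests"]
+    #
+    # yields {"addopts": "-ra", "testpaths": ["tests"]} after make_scalar().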
+ elif filepath.suffix == ".toml": + if sys.version_info >= (3, 11): + import tomllib + else: + import tomli as tomllib + + toml_text = filepath.read_text(encoding="utf-8") + try: + config = tomllib.loads(toml_text) + except tomllib.TOMLDecodeError as exc: + raise UsageError(f"{filepath}: {exc}") from exc + + result = config.get("tool", {}).get("pytest", {}).get("ini_options", None) + if result is not None: + # TOML supports richer data types than ini files (strings, arrays, floats, ints, etc), + # however we need to convert all scalar values to str for compatibility with the rest + # of the configuration system, which expects strings only. + def make_scalar(v: object) -> Union[str, List[str]]: + return v if isinstance(v, list) else str(v) + + return {k: make_scalar(v) for k, v in result.items()} + + return None + + +def locate_config( + args: Iterable[Path], +) -> Tuple[Optional[Path], Optional[Path], Dict[str, Union[str, List[str]]]]: + """Search in the list of arguments for a valid ini-file for pytest, + and return a tuple of (rootdir, inifile, cfg-dict).""" + config_names = [ + "pytest.ini", + ".pytest.ini", + "pyproject.toml", + "tox.ini", + "setup.cfg", + ] + args = [x for x in args if not str(x).startswith("-")] + if not args: + args = [Path.cwd()] + for arg in args: + argpath = absolutepath(arg) + for base in (argpath, *argpath.parents): + for config_name in config_names: + p = base / config_name + if p.is_file(): + ini_config = load_config_dict_from_file(p) + if ini_config is not None: + return base, p, ini_config + return None, None, {} + + +def get_common_ancestor(paths: Iterable[Path]) -> Path: + common_ancestor: Optional[Path] = None + for path in paths: + if not path.exists(): + continue + if common_ancestor is None: + common_ancestor = path + else: + if common_ancestor in path.parents or path == common_ancestor: + continue + elif path in common_ancestor.parents: + common_ancestor = path + else: + shared = commonpath(path, common_ancestor) + if shared is not None: + common_ancestor = shared + if common_ancestor is None: + common_ancestor = Path.cwd() + elif common_ancestor.is_file(): + common_ancestor = common_ancestor.parent + return common_ancestor + + +def get_dirs_from_args(args: Iterable[str]) -> List[Path]: + def is_option(x: str) -> bool: + return x.startswith("-") + + def get_file_part_from_node_id(x: str) -> str: + return x.split("::")[0] + + def get_dir_from_path(path: Path) -> Path: + if path.is_dir(): + return path + return path.parent + + def safe_exists(path: Path) -> bool: + # This can throw on paths that contain characters unrepresentable at the OS level, + # or with invalid syntax on Windows (https://bugs.python.org/issue35306) + try: + return path.exists() + except OSError: + return False + + # These look like paths but may not exist + possible_paths = ( + absolutepath(get_file_part_from_node_id(arg)) + for arg in args + if not is_option(arg) + ) + + return [get_dir_from_path(path) for path in possible_paths if safe_exists(path)] + + +CFG_PYTEST_SECTION = "[pytest] section in {filename} files is no longer supported, change to [tool:pytest] instead." 
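+
+# (Illustrative note, not part of the original module:) given args such as
+# ["tests/test_foo.py"], determine_setup() below takes the common ancestor of
+# the argument directories, walks upward looking for pytest.ini, .pytest.ini,
+# pyproject.toml, tox.ini or setup.cfg via locate_config(), and falls back to
+# a setup.py-based or cwd-based rootdir when no config file is found.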
+ + +def determine_setup( + inifile: Optional[str], + args: Sequence[str], + rootdir_cmd_arg: Optional[str] = None, + config: Optional["Config"] = None, +) -> Tuple[Path, Optional[Path], Dict[str, Union[str, List[str]]]]: + rootdir = None + dirs = get_dirs_from_args(args) + if inifile: + inipath_ = absolutepath(inifile) + inipath: Optional[Path] = inipath_ + inicfg = load_config_dict_from_file(inipath_) or {} + if rootdir_cmd_arg is None: + rootdir = inipath_.parent + else: + ancestor = get_common_ancestor(dirs) + rootdir, inipath, inicfg = locate_config([ancestor]) + if rootdir is None and rootdir_cmd_arg is None: + for possible_rootdir in (ancestor, *ancestor.parents): + if (possible_rootdir / "setup.py").is_file(): + rootdir = possible_rootdir + break + else: + if dirs != [ancestor]: + rootdir, inipath, inicfg = locate_config(dirs) + if rootdir is None: + if config is not None: + cwd = config.invocation_params.dir + else: + cwd = Path.cwd() + rootdir = get_common_ancestor([cwd, ancestor]) + is_fs_root = os.path.splitdrive(str(rootdir))[1] == "/" + if is_fs_root: + rootdir = ancestor + if rootdir_cmd_arg: + rootdir = absolutepath(os.path.expandvars(rootdir_cmd_arg)) + if not rootdir.is_dir(): + raise UsageError( + "Directory '{}' not found. Check your '--rootdir' option.".format( + rootdir + ) + ) + assert rootdir is not None + return rootdir, inipath, inicfg or {} diff --git a/env/lib/python3.8/site-packages/_pytest/debugging.py b/env/lib/python3.8/site-packages/_pytest/debugging.py new file mode 100644 index 00000000..a3f80802 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/debugging.py @@ -0,0 +1,391 @@ +"""Interactive debugging with PDB, the Python Debugger.""" +import argparse +import functools +import sys +import types +import unittest +from typing import Any +from typing import Callable +from typing import Generator +from typing import List +from typing import Optional +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import Union + +from _pytest import outcomes +from _pytest._code import ExceptionInfo +from _pytest.config import Config +from _pytest.config import ConftestImportFailure +from _pytest.config import hookimpl +from _pytest.config import PytestPluginManager +from _pytest.config.argparsing import Parser +from _pytest.config.exceptions import UsageError +from _pytest.nodes import Node +from _pytest.reports import BaseReport + +if TYPE_CHECKING: + from _pytest.capture import CaptureManager + from _pytest.runner import CallInfo + + +def _validate_usepdb_cls(value: str) -> Tuple[str, str]: + """Validate syntax of --pdbcls option.""" + try: + modname, classname = value.split(":") + except ValueError as e: + raise argparse.ArgumentTypeError( + f"{value!r} is not in the format 'modname:classname'" + ) from e + return (modname, classname) + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("general") + group._addoption( + "--pdb", + dest="usepdb", + action="store_true", + help="Start the interactive Python debugger on errors or KeyboardInterrupt", + ) + group._addoption( + "--pdbcls", + dest="usepdb_cls", + metavar="modulename:classname", + type=_validate_usepdb_cls, + help="Specify a custom interactive Python debugger for use with --pdb." 
+ "For example: --pdbcls=IPython.terminal.debugger:TerminalPdb", + ) + group._addoption( + "--trace", + dest="trace", + action="store_true", + help="Immediately break when running each test", + ) + + +def pytest_configure(config: Config) -> None: + import pdb + + if config.getvalue("trace"): + config.pluginmanager.register(PdbTrace(), "pdbtrace") + if config.getvalue("usepdb"): + config.pluginmanager.register(PdbInvoke(), "pdbinvoke") + + pytestPDB._saved.append( + (pdb.set_trace, pytestPDB._pluginmanager, pytestPDB._config) + ) + pdb.set_trace = pytestPDB.set_trace + pytestPDB._pluginmanager = config.pluginmanager + pytestPDB._config = config + + # NOTE: not using pytest_unconfigure, since it might get called although + # pytest_configure was not (if another plugin raises UsageError). + def fin() -> None: + ( + pdb.set_trace, + pytestPDB._pluginmanager, + pytestPDB._config, + ) = pytestPDB._saved.pop() + + config.add_cleanup(fin) + + +class pytestPDB: + """Pseudo PDB that defers to the real pdb.""" + + _pluginmanager: Optional[PytestPluginManager] = None + _config: Optional[Config] = None + _saved: List[ + Tuple[Callable[..., None], Optional[PytestPluginManager], Optional[Config]] + ] = [] + _recursive_debug = 0 + _wrapped_pdb_cls: Optional[Tuple[Type[Any], Type[Any]]] = None + + @classmethod + def _is_capturing(cls, capman: Optional["CaptureManager"]) -> Union[str, bool]: + if capman: + return capman.is_capturing() + return False + + @classmethod + def _import_pdb_cls(cls, capman: Optional["CaptureManager"]): + if not cls._config: + import pdb + + # Happens when using pytest.set_trace outside of a test. + return pdb.Pdb + + usepdb_cls = cls._config.getvalue("usepdb_cls") + + if cls._wrapped_pdb_cls and cls._wrapped_pdb_cls[0] == usepdb_cls: + return cls._wrapped_pdb_cls[1] + + if usepdb_cls: + modname, classname = usepdb_cls + + try: + __import__(modname) + mod = sys.modules[modname] + + # Handle --pdbcls=pdb:pdb.Pdb (useful e.g. with pdbpp). + parts = classname.split(".") + pdb_cls = getattr(mod, parts[0]) + for part in parts[1:]: + pdb_cls = getattr(pdb_cls, part) + except Exception as exc: + value = ":".join((modname, classname)) + raise UsageError( + f"--pdbcls: could not import {value!r}: {exc}" + ) from exc + else: + import pdb + + pdb_cls = pdb.Pdb + + wrapped_cls = cls._get_pdb_wrapper_class(pdb_cls, capman) + cls._wrapped_pdb_cls = (usepdb_cls, wrapped_cls) + return wrapped_cls + + @classmethod + def _get_pdb_wrapper_class(cls, pdb_cls, capman: Optional["CaptureManager"]): + import _pytest.config + + # Type ignored because mypy doesn't support "dynamic" + # inheritance like this. 
+ class PytestPdbWrapper(pdb_cls): # type: ignore[valid-type,misc] + _pytest_capman = capman + _continued = False + + def do_debug(self, arg): + cls._recursive_debug += 1 + ret = super().do_debug(arg) + cls._recursive_debug -= 1 + return ret + + def do_continue(self, arg): + ret = super().do_continue(arg) + if cls._recursive_debug == 0: + assert cls._config is not None + tw = _pytest.config.create_terminal_writer(cls._config) + tw.line() + + capman = self._pytest_capman + capturing = pytestPDB._is_capturing(capman) + if capturing: + if capturing == "global": + tw.sep(">", "PDB continue (IO-capturing resumed)") + else: + tw.sep( + ">", + "PDB continue (IO-capturing resumed for %s)" + % capturing, + ) + assert capman is not None + capman.resume() + else: + tw.sep(">", "PDB continue") + assert cls._pluginmanager is not None + cls._pluginmanager.hook.pytest_leave_pdb(config=cls._config, pdb=self) + self._continued = True + return ret + + do_c = do_cont = do_continue + + def do_quit(self, arg): + """Raise Exit outcome when quit command is used in pdb. + + This is a bit of a hack - it would be better if BdbQuit + could be handled, but this would require to wrap the + whole pytest run, and adjust the report etc. + """ + ret = super().do_quit(arg) + + if cls._recursive_debug == 0: + outcomes.exit("Quitting debugger") + + return ret + + do_q = do_quit + do_exit = do_quit + + def setup(self, f, tb): + """Suspend on setup(). + + Needed after do_continue resumed, and entering another + breakpoint again. + """ + ret = super().setup(f, tb) + if not ret and self._continued: + # pdb.setup() returns True if the command wants to exit + # from the interaction: do not suspend capturing then. + if self._pytest_capman: + self._pytest_capman.suspend_global_capture(in_=True) + return ret + + def get_stack(self, f, t): + stack, i = super().get_stack(f, t) + if f is None: + # Find last non-hidden frame. + i = max(0, len(stack) - 1) + while i and stack[i][0].f_locals.get("__tracebackhide__", False): + i -= 1 + return stack, i + + return PytestPdbWrapper + + @classmethod + def _init_pdb(cls, method, *args, **kwargs): + """Initialize PDB debugging, dropping any IO capturing.""" + import _pytest.config + + if cls._pluginmanager is None: + capman: Optional[CaptureManager] = None + else: + capman = cls._pluginmanager.getplugin("capturemanager") + if capman: + capman.suspend(in_=True) + + if cls._config: + tw = _pytest.config.create_terminal_writer(cls._config) + tw.line() + + if cls._recursive_debug == 0: + # Handle header similar to pdb.set_trace in py37+. 
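+                # (Illustrative:) pdb.set_trace(header="...") forwards a header
+                # kwarg here; when given, it replaces the usual
+                # "PDB set_trace (IO-capturing turned off)" banner below.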
+ header = kwargs.pop("header", None) + if header is not None: + tw.sep(">", header) + else: + capturing = cls._is_capturing(capman) + if capturing == "global": + tw.sep(">", f"PDB {method} (IO-capturing turned off)") + elif capturing: + tw.sep( + ">", + "PDB %s (IO-capturing turned off for %s)" + % (method, capturing), + ) + else: + tw.sep(">", f"PDB {method}") + + _pdb = cls._import_pdb_cls(capman)(**kwargs) + + if cls._pluginmanager: + cls._pluginmanager.hook.pytest_enter_pdb(config=cls._config, pdb=_pdb) + return _pdb + + @classmethod + def set_trace(cls, *args, **kwargs) -> None: + """Invoke debugging via ``Pdb.set_trace``, dropping any IO capturing.""" + frame = sys._getframe().f_back + _pdb = cls._init_pdb("set_trace", *args, **kwargs) + _pdb.set_trace(frame) + + +class PdbInvoke: + def pytest_exception_interact( + self, node: Node, call: "CallInfo[Any]", report: BaseReport + ) -> None: + capman = node.config.pluginmanager.getplugin("capturemanager") + if capman: + capman.suspend_global_capture(in_=True) + out, err = capman.read_global_capture() + sys.stdout.write(out) + sys.stdout.write(err) + assert call.excinfo is not None + + if not isinstance(call.excinfo.value, unittest.SkipTest): + _enter_pdb(node, call.excinfo, report) + + def pytest_internalerror(self, excinfo: ExceptionInfo[BaseException]) -> None: + tb = _postmortem_traceback(excinfo) + post_mortem(tb) + + +class PdbTrace: + @hookimpl(hookwrapper=True) + def pytest_pyfunc_call(self, pyfuncitem) -> Generator[None, None, None]: + wrap_pytest_function_for_tracing(pyfuncitem) + yield + + +def wrap_pytest_function_for_tracing(pyfuncitem): + """Change the Python function object of the given Function item by a + wrapper which actually enters pdb before calling the python function + itself, effectively leaving the user in the pdb prompt in the first + statement of the function.""" + _pdb = pytestPDB._init_pdb("runcall") + testfunction = pyfuncitem.obj + + # we can't just return `partial(pdb.runcall, testfunction)` because (on + # python < 3.7.4) runcall's first param is `func`, which means we'd get + # an exception if one of the kwargs to testfunction was called `func`. + @functools.wraps(testfunction) + def wrapper(*args, **kwargs): + func = functools.partial(testfunction, *args, **kwargs) + _pdb.runcall(func) + + pyfuncitem.obj = wrapper + + +def maybe_wrap_pytest_function_for_tracing(pyfuncitem): + """Wrap the given pytestfunct item for tracing support if --trace was given in + the command line.""" + if pyfuncitem.config.getvalue("trace"): + wrap_pytest_function_for_tracing(pyfuncitem) + + +def _enter_pdb( + node: Node, excinfo: ExceptionInfo[BaseException], rep: BaseReport +) -> BaseReport: + # XXX we re-use the TerminalReporter's terminalwriter + # because this seems to avoid some encoding related troubles + # for not completely clear reasons. 
+    tw = node.config.pluginmanager.getplugin("terminalreporter")._tw
+    tw.line()
+
+    showcapture = node.config.option.showcapture
+
+    for sectionname, content in (
+        ("stdout", rep.capstdout),
+        ("stderr", rep.capstderr),
+        ("log", rep.caplog),
+    ):
+        if showcapture in (sectionname, "all") and content:
+            tw.sep(">", "captured " + sectionname)
+            if content[-1:] == "\n":
+                content = content[:-1]
+            tw.line(content)
+
+    tw.sep(">", "traceback")
+    rep.toterminal(tw)
+    tw.sep(">", "entering PDB")
+    tb = _postmortem_traceback(excinfo)
+    rep._pdbshown = True  # type: ignore[attr-defined]
+    post_mortem(tb)
+    return rep
+
+
+def _postmortem_traceback(excinfo: ExceptionInfo[BaseException]) -> types.TracebackType:
+    from doctest import UnexpectedException
+
+    if isinstance(excinfo.value, UnexpectedException):
+        # A doctest.UnexpectedException is not useful for post_mortem.
+        # Use the underlying exception instead:
+        return excinfo.value.exc_info[2]
+    elif isinstance(excinfo.value, ConftestImportFailure):
+        # A config.ConftestImportFailure is not useful for post_mortem.
+        # Use the underlying exception instead:
+        return excinfo.value.excinfo[2]
+    else:
+        assert excinfo._excinfo is not None
+        return excinfo._excinfo[2]
+
+
+def post_mortem(t: types.TracebackType) -> None:
+    p = pytestPDB._init_pdb("post_mortem")
+    p.reset()
+    p.interaction(None, t)
+    if p.quitting:
+        outcomes.exit("Quitting debugger")
diff --git a/env/lib/python3.8/site-packages/_pytest/deprecated.py b/env/lib/python3.8/site-packages/_pytest/deprecated.py
new file mode 100644
index 00000000..b9c10df7
--- /dev/null
+++ b/env/lib/python3.8/site-packages/_pytest/deprecated.py
@@ -0,0 +1,146 @@
+"""Deprecation messages and bits of code used elsewhere in the codebase that
+are planned to be removed in the next pytest release.
+
+Keeping it in a central location makes it easy to track what is deprecated and should
+be removed when the time comes.
+
+All constants defined in this module should be either instances of
+:class:`PytestWarning`, or :class:`UnformattedWarning`
+in case of warnings which need to format their messages.
+"""
+from warnings import warn
+
+from _pytest.warning_types import PytestDeprecationWarning
+from _pytest.warning_types import PytestRemovedIn8Warning
+from _pytest.warning_types import UnformattedWarning
+
+# Set of plugins which have been integrated into the core; we use this list to
+# ignore them during registration to avoid conflicts.
+DEPRECATED_EXTERNAL_PLUGINS = {
+    "pytest_catchlog",
+    "pytest_capturelog",
+    "pytest_faulthandler",
+}
+
+NOSE_SUPPORT = UnformattedWarning(
+    PytestRemovedIn8Warning,
+    "Support for nose tests is deprecated and will be removed in a future release.\n"
+    "{nodeid} is using nose method: `{method}` ({stage})\n"
+    "See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose",
+)
+
+NOSE_SUPPORT_METHOD = UnformattedWarning(
+    PytestRemovedIn8Warning,
+    "Support for nose tests is deprecated and will be removed in a future release.\n"
+    "{nodeid} is using nose-specific method: `{method}(self)`\n"
+    "To remove this warning, rename it to `{method}_method(self)`\n"
+    "See docs: https://docs.pytest.org/en/stable/deprecations.html#support-for-tests-written-for-nose",
+)
+
+
+# This can be* removed in pytest 8, but it's harmless and common, so no rush to remove.
+# * If you're in the future: "could have been".
+YIELD_FIXTURE = PytestDeprecationWarning(
+    "@pytest.yield_fixture is deprecated.\n"
+    "Use @pytest.fixture instead; they are the same."
+) + +WARNING_CMDLINE_PREPARSE_HOOK = PytestRemovedIn8Warning( + "The pytest_cmdline_preparse hook is deprecated and will be removed in a future release. \n" + "Please use pytest_load_initial_conftests hook instead." +) + +FSCOLLECTOR_GETHOOKPROXY_ISINITPATH = PytestRemovedIn8Warning( + "The gethookproxy() and isinitpath() methods of FSCollector and Package are deprecated; " + "use self.session.gethookproxy() and self.session.isinitpath() instead. " +) + +STRICT_OPTION = PytestRemovedIn8Warning( + "The --strict option is deprecated, use --strict-markers instead." +) + +# This deprecation is never really meant to be removed. +PRIVATE = PytestDeprecationWarning("A private pytest class or function was used.") + +ARGUMENT_PERCENT_DEFAULT = PytestRemovedIn8Warning( + 'pytest now uses argparse. "%default" should be changed to "%(default)s"', +) + +ARGUMENT_TYPE_STR_CHOICE = UnformattedWarning( + PytestRemovedIn8Warning, + "`type` argument to addoption() is the string {typ!r}." + " For choices this is optional and can be omitted, " + " but when supplied should be a type (for example `str` or `int`)." + " (options: {names})", +) + +ARGUMENT_TYPE_STR = UnformattedWarning( + PytestRemovedIn8Warning, + "`type` argument to addoption() is the string {typ!r}, " + " but when supplied should be a type (for example `str` or `int`)." + " (options: {names})", +) + + +HOOK_LEGACY_PATH_ARG = UnformattedWarning( + PytestRemovedIn8Warning, + "The ({pylib_path_arg}: py.path.local) argument is deprecated, please use ({pathlib_path_arg}: pathlib.Path)\n" + "see https://docs.pytest.org/en/latest/deprecations.html" + "#py-path-local-arguments-for-hooks-replaced-with-pathlib-path", +) + +NODE_CTOR_FSPATH_ARG = UnformattedWarning( + PytestRemovedIn8Warning, + "The (fspath: py.path.local) argument to {node_type_name} is deprecated. " + "Please use the (path: pathlib.Path) argument instead.\n" + "See https://docs.pytest.org/en/latest/deprecations.html" + "#fspath-argument-for-node-constructors-replaced-with-pathlib-path", +) + +WARNS_NONE_ARG = PytestRemovedIn8Warning( + "Passing None has been deprecated.\n" + "See https://docs.pytest.org/en/latest/how-to/capture-warnings.html" + "#additional-use-cases-of-warnings-in-tests" + " for alternatives in common use cases." +) + +KEYWORD_MSG_ARG = UnformattedWarning( + PytestRemovedIn8Warning, + "pytest.{func}(msg=...) is now deprecated, use pytest.{func}(reason=...) instead", +) + +INSTANCE_COLLECTOR = PytestRemovedIn8Warning( + "The pytest.Instance collector type is deprecated and is no longer used. " + "See https://docs.pytest.org/en/latest/deprecations.html#the-pytest-instance-collector", +) +HOOK_LEGACY_MARKING = UnformattedWarning( + PytestDeprecationWarning, + "The hook{type} {fullname} uses old-style configuration options (marks or attributes).\n" + "Please use the pytest.hook{type}({hook_opts}) decorator instead\n" + " to configure the hooks.\n" + " See https://docs.pytest.org/en/latest/deprecations.html" + "#configuring-hook-specs-impls-using-markers", +) + +# You want to make some `__init__` or function "private". +# +# def my_private_function(some, args): +# ... +# +# Do this: +# +# def my_private_function(some, args, *, _ispytest: bool = False): +# check_ispytest(_ispytest) +# ... +# +# Change all internal/allowed calls to +# +# my_private_function(some, args, _ispytest=True) +# +# All other calls will get the default _ispytest=False and trigger +# the warning (possibly error in the future). 
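+#
+# (Editor's note, illustrative sketch, not upstream code: reusing the
+# hypothetical `my_private_function` from the comment above, an external
+# caller would observe the warning like this:
+#
+#     import warnings
+#
+#     with warnings.catch_warnings(record=True) as caught:
+#         warnings.simplefilter("always")
+#         my_private_function(some, args)  # no _ispytest=True
+#     assert any(w.category is PytestDeprecationWarning for w in caught)
+#
+# while internal calls passing _ispytest=True stay silent.)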
+ + +def check_ispytest(ispytest: bool) -> None: + if not ispytest: + warn(PRIVATE, stacklevel=3) diff --git a/env/lib/python3.8/site-packages/_pytest/doctest.py b/env/lib/python3.8/site-packages/_pytest/doctest.py new file mode 100644 index 00000000..771f0890 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/doctest.py @@ -0,0 +1,752 @@ +"""Discover and run doctests in modules and test files.""" +import bdb +import inspect +import os +import platform +import sys +import traceback +import types +import warnings +from contextlib import contextmanager +from pathlib import Path +from typing import Any +from typing import Callable +from typing import Dict +from typing import Generator +from typing import Iterable +from typing import List +from typing import Optional +from typing import Pattern +from typing import Sequence +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import Union + +from _pytest import outcomes +from _pytest._code.code import ExceptionInfo +from _pytest._code.code import ReprFileLocation +from _pytest._code.code import TerminalRepr +from _pytest._io import TerminalWriter +from _pytest.compat import safe_getattr +from _pytest.config import Config +from _pytest.config.argparsing import Parser +from _pytest.fixtures import fixture +from _pytest.fixtures import FixtureRequest +from _pytest.nodes import Collector +from _pytest.nodes import Item +from _pytest.outcomes import OutcomeException +from _pytest.outcomes import skip +from _pytest.pathlib import fnmatch_ex +from _pytest.pathlib import import_path +from _pytest.python import Module +from _pytest.python_api import approx +from _pytest.warning_types import PytestWarning + +if TYPE_CHECKING: + import doctest + +DOCTEST_REPORT_CHOICE_NONE = "none" +DOCTEST_REPORT_CHOICE_CDIFF = "cdiff" +DOCTEST_REPORT_CHOICE_NDIFF = "ndiff" +DOCTEST_REPORT_CHOICE_UDIFF = "udiff" +DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE = "only_first_failure" + +DOCTEST_REPORT_CHOICES = ( + DOCTEST_REPORT_CHOICE_NONE, + DOCTEST_REPORT_CHOICE_CDIFF, + DOCTEST_REPORT_CHOICE_NDIFF, + DOCTEST_REPORT_CHOICE_UDIFF, + DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE, +) + +# Lazy definition of runner class +RUNNER_CLASS = None +# Lazy definition of output checker class +CHECKER_CLASS: Optional[Type["doctest.OutputChecker"]] = None + + +def pytest_addoption(parser: Parser) -> None: + parser.addini( + "doctest_optionflags", + "Option flags for doctests", + type="args", + default=["ELLIPSIS"], + ) + parser.addini( + "doctest_encoding", "Encoding used for doctest files", default="utf-8" + ) + group = parser.getgroup("collect") + group.addoption( + "--doctest-modules", + action="store_true", + default=False, + help="Run doctests in all .py modules", + dest="doctestmodules", + ) + group.addoption( + "--doctest-report", + type=str.lower, + default="udiff", + help="Choose another output format for diffs on doctest failure", + choices=DOCTEST_REPORT_CHOICES, + dest="doctestreport", + ) + group.addoption( + "--doctest-glob", + action="append", + default=[], + metavar="pat", + help="Doctests file matching pattern, default: test*.txt", + dest="doctestglob", + ) + group.addoption( + "--doctest-ignore-import-errors", + action="store_true", + default=False, + help="Ignore doctest ImportErrors", + dest="doctest_ignore_import_errors", + ) + group.addoption( + "--doctest-continue-on-failure", + action="store_true", + default=False, + help="For a given doctest, continue to run after the first failure", + 
dest="doctest_continue_on_failure", + ) + + +def pytest_unconfigure() -> None: + global RUNNER_CLASS + + RUNNER_CLASS = None + + +def pytest_collect_file( + file_path: Path, + parent: Collector, +) -> Optional[Union["DoctestModule", "DoctestTextfile"]]: + config = parent.config + if file_path.suffix == ".py": + if config.option.doctestmodules and not any( + (_is_setup_py(file_path), _is_main_py(file_path)) + ): + mod: DoctestModule = DoctestModule.from_parent(parent, path=file_path) + return mod + elif _is_doctest(config, file_path, parent): + txt: DoctestTextfile = DoctestTextfile.from_parent(parent, path=file_path) + return txt + return None + + +def _is_setup_py(path: Path) -> bool: + if path.name != "setup.py": + return False + contents = path.read_bytes() + return b"setuptools" in contents or b"distutils" in contents + + +def _is_doctest(config: Config, path: Path, parent: Collector) -> bool: + if path.suffix in (".txt", ".rst") and parent.session.isinitpath(path): + return True + globs = config.getoption("doctestglob") or ["test*.txt"] + return any(fnmatch_ex(glob, path) for glob in globs) + + +def _is_main_py(path: Path) -> bool: + return path.name == "__main__.py" + + +class ReprFailDoctest(TerminalRepr): + def __init__( + self, reprlocation_lines: Sequence[Tuple[ReprFileLocation, Sequence[str]]] + ) -> None: + self.reprlocation_lines = reprlocation_lines + + def toterminal(self, tw: TerminalWriter) -> None: + for reprlocation, lines in self.reprlocation_lines: + for line in lines: + tw.line(line) + reprlocation.toterminal(tw) + + +class MultipleDoctestFailures(Exception): + def __init__(self, failures: Sequence["doctest.DocTestFailure"]) -> None: + super().__init__() + self.failures = failures + + +def _init_runner_class() -> Type["doctest.DocTestRunner"]: + import doctest + + class PytestDoctestRunner(doctest.DebugRunner): + """Runner to collect failures. + + Note that the out variable in this case is a list instead of a + stdout-like object. 
+ """ + + def __init__( + self, + checker: Optional["doctest.OutputChecker"] = None, + verbose: Optional[bool] = None, + optionflags: int = 0, + continue_on_failure: bool = True, + ) -> None: + super().__init__(checker=checker, verbose=verbose, optionflags=optionflags) + self.continue_on_failure = continue_on_failure + + def report_failure( + self, + out, + test: "doctest.DocTest", + example: "doctest.Example", + got: str, + ) -> None: + failure = doctest.DocTestFailure(test, example, got) + if self.continue_on_failure: + out.append(failure) + else: + raise failure + + def report_unexpected_exception( + self, + out, + test: "doctest.DocTest", + example: "doctest.Example", + exc_info: Tuple[Type[BaseException], BaseException, types.TracebackType], + ) -> None: + if isinstance(exc_info[1], OutcomeException): + raise exc_info[1] + if isinstance(exc_info[1], bdb.BdbQuit): + outcomes.exit("Quitting debugger") + failure = doctest.UnexpectedException(test, example, exc_info) + if self.continue_on_failure: + out.append(failure) + else: + raise failure + + return PytestDoctestRunner + + +def _get_runner( + checker: Optional["doctest.OutputChecker"] = None, + verbose: Optional[bool] = None, + optionflags: int = 0, + continue_on_failure: bool = True, +) -> "doctest.DocTestRunner": + # We need this in order to do a lazy import on doctest + global RUNNER_CLASS + if RUNNER_CLASS is None: + RUNNER_CLASS = _init_runner_class() + # Type ignored because the continue_on_failure argument is only defined on + # PytestDoctestRunner, which is lazily defined so can't be used as a type. + return RUNNER_CLASS( # type: ignore + checker=checker, + verbose=verbose, + optionflags=optionflags, + continue_on_failure=continue_on_failure, + ) + + +class DoctestItem(Item): + def __init__( + self, + name: str, + parent: "Union[DoctestTextfile, DoctestModule]", + runner: Optional["doctest.DocTestRunner"] = None, + dtest: Optional["doctest.DocTest"] = None, + ) -> None: + super().__init__(name, parent) + self.runner = runner + self.dtest = dtest + self.obj = None + self.fixture_request: Optional[FixtureRequest] = None + + @classmethod + def from_parent( # type: ignore + cls, + parent: "Union[DoctestTextfile, DoctestModule]", + *, + name: str, + runner: "doctest.DocTestRunner", + dtest: "doctest.DocTest", + ): + # incompatible signature due to imposed limits on subclass + """The public named constructor.""" + return super().from_parent(name=name, parent=parent, runner=runner, dtest=dtest) + + def setup(self) -> None: + if self.dtest is not None: + self.fixture_request = _setup_fixtures(self) + globs = dict(getfixture=self.fixture_request.getfixturevalue) + for name, value in self.fixture_request.getfixturevalue( + "doctest_namespace" + ).items(): + globs[name] = value + self.dtest.globs.update(globs) + + def runtest(self) -> None: + assert self.dtest is not None + assert self.runner is not None + _check_all_skipped(self.dtest) + self._disable_output_capturing_for_darwin() + failures: List["doctest.DocTestFailure"] = [] + # Type ignored because we change the type of `out` from what + # doctest expects. + self.runner.run(self.dtest, out=failures) # type: ignore[arg-type] + if failures: + raise MultipleDoctestFailures(failures) + + def _disable_output_capturing_for_darwin(self) -> None: + """Disable output capturing. 
Otherwise, stdout is lost to doctest (#985).""" + if platform.system() != "Darwin": + return + capman = self.config.pluginmanager.getplugin("capturemanager") + if capman: + capman.suspend_global_capture(in_=True) + out, err = capman.read_global_capture() + sys.stdout.write(out) + sys.stderr.write(err) + + # TODO: Type ignored -- breaks Liskov Substitution. + def repr_failure( # type: ignore[override] + self, + excinfo: ExceptionInfo[BaseException], + ) -> Union[str, TerminalRepr]: + import doctest + + failures: Optional[ + Sequence[Union[doctest.DocTestFailure, doctest.UnexpectedException]] + ] = None + if isinstance( + excinfo.value, (doctest.DocTestFailure, doctest.UnexpectedException) + ): + failures = [excinfo.value] + elif isinstance(excinfo.value, MultipleDoctestFailures): + failures = excinfo.value.failures + + if failures is None: + return super().repr_failure(excinfo) + + reprlocation_lines = [] + for failure in failures: + example = failure.example + test = failure.test + filename = test.filename + if test.lineno is None: + lineno = None + else: + lineno = test.lineno + example.lineno + 1 + message = type(failure).__name__ + # TODO: ReprFileLocation doesn't expect a None lineno. + reprlocation = ReprFileLocation(filename, lineno, message) # type: ignore[arg-type] + checker = _get_checker() + report_choice = _get_report_choice(self.config.getoption("doctestreport")) + if lineno is not None: + assert failure.test.docstring is not None + lines = failure.test.docstring.splitlines(False) + # add line numbers to the left of the error message + assert test.lineno is not None + lines = [ + "%03d %s" % (i + test.lineno + 1, x) for (i, x) in enumerate(lines) + ] + # trim docstring error lines to 10 + lines = lines[max(example.lineno - 9, 0) : example.lineno + 1] + else: + lines = [ + "EXAMPLE LOCATION UNKNOWN, not showing all tests of that example" + ] + indent = ">>>" + for line in example.source.splitlines(): + lines.append(f"??? {indent} {line}") + indent = "..." 
+ if isinstance(failure, doctest.DocTestFailure): + lines += checker.output_difference( + example, failure.got, report_choice + ).split("\n") + else: + inner_excinfo = ExceptionInfo.from_exc_info(failure.exc_info) + lines += ["UNEXPECTED EXCEPTION: %s" % repr(inner_excinfo.value)] + lines += [ + x.strip("\n") for x in traceback.format_exception(*failure.exc_info) + ] + reprlocation_lines.append((reprlocation, lines)) + return ReprFailDoctest(reprlocation_lines) + + def reportinfo(self) -> Tuple[Union["os.PathLike[str]", str], Optional[int], str]: + assert self.dtest is not None + return self.path, self.dtest.lineno, "[doctest] %s" % self.name + + +def _get_flag_lookup() -> Dict[str, int]: + import doctest + + return dict( + DONT_ACCEPT_TRUE_FOR_1=doctest.DONT_ACCEPT_TRUE_FOR_1, + DONT_ACCEPT_BLANKLINE=doctest.DONT_ACCEPT_BLANKLINE, + NORMALIZE_WHITESPACE=doctest.NORMALIZE_WHITESPACE, + ELLIPSIS=doctest.ELLIPSIS, + IGNORE_EXCEPTION_DETAIL=doctest.IGNORE_EXCEPTION_DETAIL, + COMPARISON_FLAGS=doctest.COMPARISON_FLAGS, + ALLOW_UNICODE=_get_allow_unicode_flag(), + ALLOW_BYTES=_get_allow_bytes_flag(), + NUMBER=_get_number_flag(), + ) + + +def get_optionflags(parent): + optionflags_str = parent.config.getini("doctest_optionflags") + flag_lookup_table = _get_flag_lookup() + flag_acc = 0 + for flag in optionflags_str: + flag_acc |= flag_lookup_table[flag] + return flag_acc + + +def _get_continue_on_failure(config): + continue_on_failure = config.getvalue("doctest_continue_on_failure") + if continue_on_failure: + # We need to turn off this if we use pdb since we should stop at + # the first failure. + if config.getvalue("usepdb"): + continue_on_failure = False + return continue_on_failure + + +class DoctestTextfile(Module): + obj = None + + def collect(self) -> Iterable[DoctestItem]: + import doctest + + # Inspired by doctest.testfile; ideally we would use it directly, + # but it doesn't support passing a custom checker. 
+ encoding = self.config.getini("doctest_encoding") + text = self.path.read_text(encoding) + filename = str(self.path) + name = self.path.name + globs = {"__name__": "__main__"} + + optionflags = get_optionflags(self) + + runner = _get_runner( + verbose=False, + optionflags=optionflags, + checker=_get_checker(), + continue_on_failure=_get_continue_on_failure(self.config), + ) + + parser = doctest.DocTestParser() + test = parser.get_doctest(text, globs, name, filename, 0) + if test.examples: + yield DoctestItem.from_parent( + self, name=test.name, runner=runner, dtest=test + ) + + +def _check_all_skipped(test: "doctest.DocTest") -> None: + """Raise pytest.skip() if all examples in the given DocTest have the SKIP + option set.""" + import doctest + + all_skipped = all(x.options.get(doctest.SKIP, False) for x in test.examples) + if all_skipped: + skip("all tests skipped by +SKIP option") + + +def _is_mocked(obj: object) -> bool: + """Return if an object is possibly a mock object by checking the + existence of a highly improbable attribute.""" + return ( + safe_getattr(obj, "pytest_mock_example_attribute_that_shouldnt_exist", None) + is not None + ) + + +@contextmanager +def _patch_unwrap_mock_aware() -> Generator[None, None, None]: + """Context manager which replaces ``inspect.unwrap`` with a version + that's aware of mock objects and doesn't recurse into them.""" + real_unwrap = inspect.unwrap + + def _mock_aware_unwrap( + func: Callable[..., Any], *, stop: Optional[Callable[[Any], Any]] = None + ) -> Any: + try: + if stop is None or stop is _is_mocked: + return real_unwrap(func, stop=_is_mocked) + _stop = stop + return real_unwrap(func, stop=lambda obj: _is_mocked(obj) or _stop(func)) + except Exception as e: + warnings.warn( + "Got %r when unwrapping %r. This is usually caused " + "by a violation of Python's object protocol; see e.g. " + "https://github.com/pytest-dev/pytest/issues/5080" % (e, func), + PytestWarning, + ) + raise + + inspect.unwrap = _mock_aware_unwrap + try: + yield + finally: + inspect.unwrap = real_unwrap + + +class DoctestModule(Module): + def collect(self) -> Iterable[DoctestItem]: + import doctest + + class MockAwareDocTestFinder(doctest.DocTestFinder): + """A hackish doctest finder that overrides stdlib internals to fix a stdlib bug. + + https://github.com/pytest-dev/pytest/issues/3456 + https://bugs.python.org/issue25532 + """ + + def _find_lineno(self, obj, source_lines): + """Doctest code does not take into account `@property`, this + is a hackish way to fix it. https://bugs.python.org/issue17446 + + Wrapped Doctests will need to be unwrapped so the correct + line number is returned. This will be reported upstream. #8796 + """ + if isinstance(obj, property): + obj = getattr(obj, "fget", obj) + + if hasattr(obj, "__wrapped__"): + # Get the main obj in case of it being wrapped + obj = inspect.unwrap(obj) + + # Type ignored because this is a private function. + return super()._find_lineno( # type:ignore[misc] + obj, + source_lines, + ) + + def _find( + self, tests, obj, name, module, source_lines, globs, seen + ) -> None: + if _is_mocked(obj): + return + with _patch_unwrap_mock_aware(): + + # Type ignored because this is a private function. 
+                    super()._find(  # type:ignore[misc]
+                        tests, obj, name, module, source_lines, globs, seen
+                    )
+
+        if self.path.name == "conftest.py":
+            module = self.config.pluginmanager._importconftest(
+                self.path,
+                self.config.getoption("importmode"),
+                rootpath=self.config.rootpath,
+            )
+        else:
+            try:
+                module = import_path(
+                    self.path,
+                    root=self.config.rootpath,
+                    mode=self.config.getoption("importmode"),
+                )
+            except ImportError:
+                if self.config.getvalue("doctest_ignore_import_errors"):
+                    skip("unable to import module %r" % self.path)
+                else:
+                    raise
+        # Uses internal doctest module parsing mechanism.
+        finder = MockAwareDocTestFinder()
+        optionflags = get_optionflags(self)
+        runner = _get_runner(
+            verbose=False,
+            optionflags=optionflags,
+            checker=_get_checker(),
+            continue_on_failure=_get_continue_on_failure(self.config),
+        )
+
+        for test in finder.find(module, module.__name__):
+            if test.examples:  # skip empty doctests
+                yield DoctestItem.from_parent(
+                    self, name=test.name, runner=runner, dtest=test
+                )
+
+
+def _setup_fixtures(doctest_item: DoctestItem) -> FixtureRequest:
+    """Used by DoctestTextfile and DoctestItem to setup fixture information."""
+
+    def func() -> None:
+        pass
+
+    doctest_item.funcargs = {}  # type: ignore[attr-defined]
+    fm = doctest_item.session._fixturemanager
+    doctest_item._fixtureinfo = fm.getfixtureinfo(  # type: ignore[attr-defined]
+        node=doctest_item, func=func, cls=None, funcargs=False
+    )
+    fixture_request = FixtureRequest(doctest_item, _ispytest=True)
+    fixture_request._fillfixtures()
+    return fixture_request
+
+
+def _init_checker_class() -> Type["doctest.OutputChecker"]:
+    import doctest
+    import re
+
+    class LiteralsOutputChecker(doctest.OutputChecker):
+        # Based on doctest_nose_plugin.py from the nltk project
+        # (https://github.com/nltk/nltk) and on the "numtest" doctest extension
+        # by Sebastien Boisgerault (https://github.com/boisgera/numtest).
+
+        _unicode_literal_re = re.compile(r"(\W|^)[uU]([rR]?[\'\"])", re.UNICODE)
+        _bytes_literal_re = re.compile(r"(\W|^)[bB]([rR]?[\'\"])", re.UNICODE)
+        _number_re = re.compile(
+            r"""
+            (?P<number>
+              (?P<mantissa>
+                (?P<integer1> [+-]?\d*)\.(?P<fraction>\d+)
+                |
+                (?P<integer2> [+-]?\d+)\.
+              )
+              (?:
+                [Ee]
+                (?P<exponent1> [+-]?\d+)
+              )?
+              |
+              (?P<integer3> [+-]?\d+)
+              (?:
+                [Ee]
+                (?P<exponent2> [+-]?\d+)
+              )
+            )
+            """,
+            re.VERBOSE,
+        )
+
+        def check_output(self, want: str, got: str, optionflags: int) -> bool:
+            if super().check_output(want, got, optionflags):
+                return True
+
+            allow_unicode = optionflags & _get_allow_unicode_flag()
+            allow_bytes = optionflags & _get_allow_bytes_flag()
+            allow_number = optionflags & _get_number_flag()
+
+            if not allow_unicode and not allow_bytes and not allow_number:
+                return False
+
+            def remove_prefixes(regex: Pattern[str], txt: str) -> str:
+                return re.sub(regex, r"\1\2", txt)
+
+            if allow_unicode:
+                want = remove_prefixes(self._unicode_literal_re, want)
+                got = remove_prefixes(self._unicode_literal_re, got)
+
+            if allow_bytes:
+                want = remove_prefixes(self._bytes_literal_re, want)
+                got = remove_prefixes(self._bytes_literal_re, got)
+
+            if allow_number:
+                got = self._remove_unwanted_precision(want, got)
+
+            return super().check_output(want, got, optionflags)
+
+        def _remove_unwanted_precision(self, want: str, got: str) -> str:
+            wants = list(self._number_re.finditer(want))
+            gots = list(self._number_re.finditer(got))
+            if len(wants) != len(gots):
+                return got
+            offset = 0
+            for w, g in zip(wants, gots):
+                fraction: Optional[str] = w.group("fraction")
+                exponent: Optional[str] = w.group("exponent1")
+                if exponent is None:
+                    exponent = w.group("exponent2")
+                precision = 0 if fraction is None else len(fraction)
+                if exponent is not None:
+                    precision -= int(exponent)
+                if float(w.group()) == approx(float(g.group()), abs=10**-precision):
+                    # They're close enough. Replace the text we actually
+                    # got with the text we want, so that it will match when we
+                    # check the string literally.
+                    got = (
+                        got[: g.start() + offset] + w.group() + got[g.end() + offset :]
+                    )
+                    offset += w.end() - w.start() - (g.end() - g.start())
+            return got
+
+    return LiteralsOutputChecker
+
+
+def _get_checker() -> "doctest.OutputChecker":
+    """Return a doctest.OutputChecker subclass that supports some
+    additional options:
+
+    * ALLOW_UNICODE and ALLOW_BYTES options to ignore u'' and b''
+      prefixes (respectively) in string literals. Useful when the same
+      doctest should run in Python 2 and Python 3.
+
+    * NUMBER to ignore floating-point differences smaller than the
+      precision of the literal number in the doctest.
+
+    An inner class is used to avoid importing "doctest" at the module
+    level.
+    """
+    global CHECKER_CLASS
+    if CHECKER_CLASS is None:
+        CHECKER_CLASS = _init_checker_class()
+    return CHECKER_CLASS()
+
+
+def _get_allow_unicode_flag() -> int:
+    """Register and return the ALLOW_UNICODE flag."""
+    import doctest
+
+    return doctest.register_optionflag("ALLOW_UNICODE")
+
+
+def _get_allow_bytes_flag() -> int:
+    """Register and return the ALLOW_BYTES flag."""
+    import doctest
+
+    return doctest.register_optionflag("ALLOW_BYTES")
+
+
+def _get_number_flag() -> int:
+    """Register and return the NUMBER flag."""
+    import doctest
+
+    return doctest.register_optionflag("NUMBER")
+
+
+def _get_report_choice(key: str) -> int:
+    """Return the actual `doctest` module flag value.
+
+    We want to do it as late as possible to avoid importing `doctest` and all
+    its dependencies when parsing options, as it adds overhead and breaks tests.
+ """ + import doctest + + return { + DOCTEST_REPORT_CHOICE_UDIFF: doctest.REPORT_UDIFF, + DOCTEST_REPORT_CHOICE_CDIFF: doctest.REPORT_CDIFF, + DOCTEST_REPORT_CHOICE_NDIFF: doctest.REPORT_NDIFF, + DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE: doctest.REPORT_ONLY_FIRST_FAILURE, + DOCTEST_REPORT_CHOICE_NONE: 0, + }[key] + + +@fixture(scope="session") +def doctest_namespace() -> Dict[str, Any]: + """Fixture that returns a :py:class:`dict` that will be injected into the + namespace of doctests. + + Usually this fixture is used in conjunction with another ``autouse`` fixture: + + .. code-block:: python + + @pytest.fixture(autouse=True) + def add_np(doctest_namespace): + doctest_namespace["np"] = numpy + + For more details: :ref:`doctest_namespace`. + """ + return dict() diff --git a/env/lib/python3.8/site-packages/_pytest/faulthandler.py b/env/lib/python3.8/site-packages/_pytest/faulthandler.py new file mode 100644 index 00000000..b9c92558 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/faulthandler.py @@ -0,0 +1,97 @@ +import io +import os +import sys +from typing import Generator +from typing import TextIO + +import pytest +from _pytest.config import Config +from _pytest.config.argparsing import Parser +from _pytest.nodes import Item +from _pytest.stash import StashKey + + +fault_handler_stderr_key = StashKey[TextIO]() +fault_handler_originally_enabled_key = StashKey[bool]() + + +def pytest_addoption(parser: Parser) -> None: + help = ( + "Dump the traceback of all threads if a test takes " + "more than TIMEOUT seconds to finish" + ) + parser.addini("faulthandler_timeout", help, default=0.0) + + +def pytest_configure(config: Config) -> None: + import faulthandler + + stderr_fd_copy = os.dup(get_stderr_fileno()) + config.stash[fault_handler_stderr_key] = open(stderr_fd_copy, "w") + config.stash[fault_handler_originally_enabled_key] = faulthandler.is_enabled() + faulthandler.enable(file=config.stash[fault_handler_stderr_key]) + + +def pytest_unconfigure(config: Config) -> None: + import faulthandler + + faulthandler.disable() + # Close the dup file installed during pytest_configure. + if fault_handler_stderr_key in config.stash: + config.stash[fault_handler_stderr_key].close() + del config.stash[fault_handler_stderr_key] + if config.stash.get(fault_handler_originally_enabled_key, False): + # Re-enable the faulthandler if it was originally enabled. + faulthandler.enable(file=get_stderr_fileno()) + + +def get_stderr_fileno() -> int: + try: + fileno = sys.stderr.fileno() + # The Twisted Logger will return an invalid file descriptor since it is not backed + # by an FD. So, let's also forward this to the same code path as with pytest-xdist. + if fileno == -1: + raise AttributeError() + return fileno + except (AttributeError, io.UnsupportedOperation): + # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file. + # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors + # This is potentially dangerous, but the best we can do. 
+ return sys.__stderr__.fileno() + + +def get_timeout_config_value(config: Config) -> float: + return float(config.getini("faulthandler_timeout") or 0.0) + + +@pytest.hookimpl(hookwrapper=True, trylast=True) +def pytest_runtest_protocol(item: Item) -> Generator[None, None, None]: + timeout = get_timeout_config_value(item.config) + stderr = item.config.stash[fault_handler_stderr_key] + if timeout > 0 and stderr is not None: + import faulthandler + + faulthandler.dump_traceback_later(timeout, file=stderr) + try: + yield + finally: + faulthandler.cancel_dump_traceback_later() + else: + yield + + +@pytest.hookimpl(tryfirst=True) +def pytest_enter_pdb() -> None: + """Cancel any traceback dumping due to timeout before entering pdb.""" + import faulthandler + + faulthandler.cancel_dump_traceback_later() + + +@pytest.hookimpl(tryfirst=True) +def pytest_exception_interact() -> None: + """Cancel any traceback dumping due to an interactive exception being + raised.""" + import faulthandler + + faulthandler.cancel_dump_traceback_later() diff --git a/env/lib/python3.8/site-packages/_pytest/fixtures.py b/env/lib/python3.8/site-packages/_pytest/fixtures.py new file mode 100644 index 00000000..d79895c2 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/fixtures.py @@ -0,0 +1,1672 @@ +import functools +import inspect +import os +import sys +import warnings +from collections import defaultdict +from collections import deque +from contextlib import suppress +from pathlib import Path +from types import TracebackType +from typing import Any +from typing import Callable +from typing import cast +from typing import Dict +from typing import Generator +from typing import Generic +from typing import Iterable +from typing import Iterator +from typing import List +from typing import MutableMapping +from typing import NoReturn +from typing import Optional +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +import attr + +import _pytest +from _pytest import nodes +from _pytest._code import getfslineno +from _pytest._code.code import FormattedExcinfo +from _pytest._code.code import TerminalRepr +from _pytest._io import TerminalWriter +from _pytest.compat import _format_args +from _pytest.compat import _PytestWrapper +from _pytest.compat import assert_never +from _pytest.compat import final +from _pytest.compat import get_real_func +from _pytest.compat import get_real_method +from _pytest.compat import getfuncargnames +from _pytest.compat import getimfunc +from _pytest.compat import getlocation +from _pytest.compat import is_generator +from _pytest.compat import NOTSET +from _pytest.compat import overload +from _pytest.compat import safe_getattr +from _pytest.config import _PluggyPlugin +from _pytest.config import Config +from _pytest.config.argparsing import Parser +from _pytest.deprecated import check_ispytest +from _pytest.deprecated import YIELD_FIXTURE +from _pytest.mark import Mark +from _pytest.mark import ParameterSet +from _pytest.mark.structures import MarkDecorator +from _pytest.outcomes import fail +from _pytest.outcomes import TEST_OUTCOME +from _pytest.pathlib import absolutepath +from _pytest.pathlib import bestrelpath +from _pytest.scope import HIGH_SCOPES +from _pytest.scope import Scope +from _pytest.stash import StashKey + + +if TYPE_CHECKING: + from typing import Deque + + from _pytest.scope import _ScopeName + from _pytest.main import Session + from 
_pytest.python import CallSpec2 + from _pytest.python import Metafunc + + +# The value of the fixture -- return/yield of the fixture function (type variable). +FixtureValue = TypeVar("FixtureValue") +# The type of the fixture function (type variable). +FixtureFunction = TypeVar("FixtureFunction", bound=Callable[..., object]) +# The type of a fixture function (type alias generic in fixture value). +_FixtureFunc = Union[ + Callable[..., FixtureValue], Callable[..., Generator[FixtureValue, None, None]] +] +# The type of FixtureDef.cached_result (type alias generic in fixture value). +_FixtureCachedResult = Union[ + Tuple[ + # The result. + FixtureValue, + # Cache key. + object, + None, + ], + Tuple[ + None, + # Cache key. + object, + # Exc info if raised. + Tuple[Type[BaseException], BaseException, TracebackType], + ], +] + + +@attr.s(frozen=True, auto_attribs=True) +class PseudoFixtureDef(Generic[FixtureValue]): + cached_result: "_FixtureCachedResult[FixtureValue]" + _scope: Scope + + +def pytest_sessionstart(session: "Session") -> None: + session._fixturemanager = FixtureManager(session) + + +def get_scope_package(node, fixturedef: "FixtureDef[object]"): + import pytest + + cls = pytest.Package + current = node + fixture_package_name = "{}/{}".format(fixturedef.baseid, "__init__.py") + while current and ( + type(current) is not cls or fixture_package_name != current.nodeid + ): + current = current.parent + if current is None: + return node.session + return current + + +def get_scope_node( + node: nodes.Node, scope: Scope +) -> Optional[Union[nodes.Item, nodes.Collector]]: + import _pytest.python + + if scope is Scope.Function: + return node.getparent(nodes.Item) + elif scope is Scope.Class: + return node.getparent(_pytest.python.Class) + elif scope is Scope.Module: + return node.getparent(_pytest.python.Module) + elif scope is Scope.Package: + return node.getparent(_pytest.python.Package) + elif scope is Scope.Session: + return node.getparent(_pytest.main.Session) + else: + assert_never(scope) + + +# Used for storing artificial fixturedefs for direct parametrization. +name2pseudofixturedef_key = StashKey[Dict[str, "FixtureDef[Any]"]]() + + +def add_funcarg_pseudo_fixture_def( + collector: nodes.Collector, metafunc: "Metafunc", fixturemanager: "FixtureManager" +) -> None: + # This function will transform all collected calls to functions + # if they use direct funcargs (i.e. direct parametrization) + # because we want later test execution to be able to rely on + # an existing FixtureDef structure for all arguments. + # XXX we can probably avoid this algorithm if we modify CallSpec2 + # to directly care for creating the fixturedefs within its methods. + if not metafunc._calls[0].funcargs: + # This function call does not have direct parametrization. + return + # Collect funcargs of all callspecs into a list of values. + arg2params: Dict[str, List[object]] = {} + arg2scope: Dict[str, Scope] = {} + for callspec in metafunc._calls: + for argname, argvalue in callspec.funcargs.items(): + assert argname not in callspec.params + callspec.params[argname] = argvalue + arg2params_list = arg2params.setdefault(argname, []) + callspec.indices[argname] = len(arg2params_list) + arg2params_list.append(argvalue) + if argname not in arg2scope: + scope = callspec._arg2scope.get(argname, Scope.Function) + arg2scope[argname] = scope + callspec.funcargs.clear() + + # Register artificial FixtureDef's so that later at test execution + # time we can rely on a proper FixtureDef to exist for fixture setup. 
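+    # (Editor's note, illustrative, not upstream code: "direct
+    # parametrization" means argnames that name no declared fixture, e.g.
+    #
+    #     @pytest.mark.parametrize("x", [1, 2])
+    #     def test_foo(x): ...
+    #
+    # The loop below manufactures a FixtureDef for "x" so the fixture setup
+    # machinery can treat it like any other fixture.)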
+ arg2fixturedefs = metafunc._arg2fixturedefs + for argname, valuelist in arg2params.items(): + # If we have a scope that is higher than function, we need + # to make sure we only ever create an according fixturedef on + # a per-scope basis. We thus store and cache the fixturedef on the + # node related to the scope. + scope = arg2scope[argname] + node = None + if scope is not Scope.Function: + node = get_scope_node(collector, scope) + if node is None: + assert scope is Scope.Class and isinstance( + collector, _pytest.python.Module + ) + # Use module-level collector for class-scope (for now). + node = collector + if node is None: + name2pseudofixturedef = None + else: + default: Dict[str, FixtureDef[Any]] = {} + name2pseudofixturedef = node.stash.setdefault( + name2pseudofixturedef_key, default + ) + if name2pseudofixturedef is not None and argname in name2pseudofixturedef: + arg2fixturedefs[argname] = [name2pseudofixturedef[argname]] + else: + fixturedef = FixtureDef( + fixturemanager=fixturemanager, + baseid="", + argname=argname, + func=get_direct_param_fixture_func, + scope=arg2scope[argname], + params=valuelist, + unittest=False, + ids=None, + ) + arg2fixturedefs[argname] = [fixturedef] + if name2pseudofixturedef is not None: + name2pseudofixturedef[argname] = fixturedef + + +def getfixturemarker(obj: object) -> Optional["FixtureFunctionMarker"]: + """Return fixturemarker or None if it doesn't exist or raised + exceptions.""" + return cast( + Optional[FixtureFunctionMarker], + safe_getattr(obj, "_pytestfixturefunction", None), + ) + + +# Parametrized fixture key, helper alias for code below. +_Key = Tuple[object, ...] + + +def get_parametrized_fixture_keys(item: nodes.Item, scope: Scope) -> Iterator[_Key]: + """Return list of keys for all parametrized arguments which match + the specified scope.""" + assert scope is not Scope.Function + try: + callspec = item.callspec # type: ignore[attr-defined] + except AttributeError: + pass + else: + cs: CallSpec2 = callspec + # cs.indices.items() is random order of argnames. Need to + # sort this so that different calls to + # get_parametrized_fixture_keys will be deterministic. + for argname, param_index in sorted(cs.indices.items()): + if cs._arg2scope[argname] != scope: + continue + if scope is Scope.Session: + key: _Key = (argname, param_index) + elif scope is Scope.Package: + key = (argname, param_index, item.path.parent) + elif scope is Scope.Module: + key = (argname, param_index, item.path) + elif scope is Scope.Class: + item_cls = item.cls # type: ignore[attr-defined] + key = (argname, param_index, item.path, item_cls) + else: + assert_never(scope) + yield key + + +# Algorithm for sorting on a per-parametrized resource setup basis. +# It is called for Session scope first and performs sorting +# down to the lower scopes such as to minimize number of "high scope" +# setups and teardowns. 
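+#
+# (Editor's note, illustrative toy example, not upstream code: suppose items
+# t1 and t3 share a module-scoped parametrized key K while t2 does not. The
+# collected order [t1, t2, t3] would set K up twice; reorder_items regroups
+# them as [t1, t3, t2] so K is set up and torn down only once.)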
+ + +def reorder_items(items: Sequence[nodes.Item]) -> List[nodes.Item]: + argkeys_cache: Dict[Scope, Dict[nodes.Item, Dict[_Key, None]]] = {} + items_by_argkey: Dict[Scope, Dict[_Key, Deque[nodes.Item]]] = {} + for scope in HIGH_SCOPES: + d: Dict[nodes.Item, Dict[_Key, None]] = {} + argkeys_cache[scope] = d + item_d: Dict[_Key, Deque[nodes.Item]] = defaultdict(deque) + items_by_argkey[scope] = item_d + for item in items: + keys = dict.fromkeys(get_parametrized_fixture_keys(item, scope), None) + if keys: + d[item] = keys + for key in keys: + item_d[key].append(item) + items_dict = dict.fromkeys(items, None) + return list( + reorder_items_atscope(items_dict, argkeys_cache, items_by_argkey, Scope.Session) + ) + + +def fix_cache_order( + item: nodes.Item, + argkeys_cache: Dict[Scope, Dict[nodes.Item, Dict[_Key, None]]], + items_by_argkey: Dict[Scope, Dict[_Key, "Deque[nodes.Item]"]], +) -> None: + for scope in HIGH_SCOPES: + for key in argkeys_cache[scope].get(item, []): + items_by_argkey[scope][key].appendleft(item) + + +def reorder_items_atscope( + items: Dict[nodes.Item, None], + argkeys_cache: Dict[Scope, Dict[nodes.Item, Dict[_Key, None]]], + items_by_argkey: Dict[Scope, Dict[_Key, "Deque[nodes.Item]"]], + scope: Scope, +) -> Dict[nodes.Item, None]: + if scope is Scope.Function or len(items) < 3: + return items + ignore: Set[Optional[_Key]] = set() + items_deque = deque(items) + items_done: Dict[nodes.Item, None] = {} + scoped_items_by_argkey = items_by_argkey[scope] + scoped_argkeys_cache = argkeys_cache[scope] + while items_deque: + no_argkey_group: Dict[nodes.Item, None] = {} + slicing_argkey = None + while items_deque: + item = items_deque.popleft() + if item in items_done or item in no_argkey_group: + continue + argkeys = dict.fromkeys( + (k for k in scoped_argkeys_cache.get(item, []) if k not in ignore), None + ) + if not argkeys: + no_argkey_group[item] = None + else: + slicing_argkey, _ = argkeys.popitem() + # We don't have to remove relevant items from later in the + # deque because they'll just be ignored. + matching_items = [ + i for i in scoped_items_by_argkey[slicing_argkey] if i in items + ] + for i in reversed(matching_items): + fix_cache_order(i, argkeys_cache, items_by_argkey) + items_deque.appendleft(i) + break + if no_argkey_group: + no_argkey_group = reorder_items_atscope( + no_argkey_group, argkeys_cache, items_by_argkey, scope.next_lower() + ) + for item in no_argkey_group: + items_done[item] = None + ignore.add(slicing_argkey) + return items_done + + +def get_direct_param_fixture_func(request: "FixtureRequest") -> Any: + return request.param + + +@attr.s(slots=True, auto_attribs=True) +class FuncFixtureInfo: + # Original function argument names. + argnames: Tuple[str, ...] + # Argnames that function immediately requires. These include argnames + + # fixture names specified via usefixtures and via autouse=True in fixture + # definitions. + initialnames: Tuple[str, ...] + names_closure: List[str] + name2fixturedefs: Dict[str, Sequence["FixtureDef[Any]"]] + + def prune_dependency_tree(self) -> None: + """Recompute names_closure from initialnames and name2fixturedefs. + + Can only reduce names_closure, which means that the new closure will + always be a subset of the old one. The order is preserved. + + This method is needed because direct parametrization may shadow some + of the fixtures that were included in the originally built dependency + tree. In this way the dependency tree can get pruned, and the closure + of argnames may get reduced. 
+ """ + closure: Set[str] = set() + working_set = set(self.initialnames) + while working_set: + argname = working_set.pop() + # Argname may be smth not included in the original names_closure, + # in which case we ignore it. This currently happens with pseudo + # FixtureDefs which wrap 'get_direct_param_fixture_func(request)'. + # So they introduce the new dependency 'request' which might have + # been missing in the original tree (closure). + if argname not in closure and argname in self.names_closure: + closure.add(argname) + if argname in self.name2fixturedefs: + working_set.update(self.name2fixturedefs[argname][-1].argnames) + + self.names_closure[:] = sorted(closure, key=self.names_closure.index) + + +class FixtureRequest: + """A request for a fixture from a test or fixture function. + + A request object gives access to the requesting test context and has + an optional ``param`` attribute in case the fixture is parametrized + indirectly. + """ + + def __init__(self, pyfuncitem, *, _ispytest: bool = False) -> None: + check_ispytest(_ispytest) + self._pyfuncitem = pyfuncitem + #: Fixture for which this request is being performed. + self.fixturename: Optional[str] = None + self._scope = Scope.Function + self._fixture_defs: Dict[str, FixtureDef[Any]] = {} + fixtureinfo: FuncFixtureInfo = pyfuncitem._fixtureinfo + self._arg2fixturedefs = fixtureinfo.name2fixturedefs.copy() + self._arg2index: Dict[str, int] = {} + self._fixturemanager: FixtureManager = pyfuncitem.session._fixturemanager + # Notes on the type of `param`: + # -`request.param` is only defined in parametrized fixtures, and will raise + # AttributeError otherwise. Python typing has no notion of "undefined", so + # this cannot be reflected in the type. + # - Technically `param` is only (possibly) defined on SubRequest, not + # FixtureRequest, but the typing of that is still in flux so this cheats. + # - In the future we might consider using a generic for the param type, but + # for now just using Any. + self.param: Any + + @property + def scope(self) -> "_ScopeName": + """Scope string, one of "function", "class", "module", "package", "session".""" + return self._scope.value + + @property + def fixturenames(self) -> List[str]: + """Names of all active fixtures in this request.""" + result = list(self._pyfuncitem._fixtureinfo.names_closure) + result.extend(set(self._fixture_defs).difference(result)) + return result + + @property + def node(self): + """Underlying collection node (depends on current request scope).""" + return self._getscopeitem(self._scope) + + def _getnextfixturedef(self, argname: str) -> "FixtureDef[Any]": + fixturedefs = self._arg2fixturedefs.get(argname, None) + if fixturedefs is None: + # We arrive here because of a dynamic call to + # getfixturevalue(argname) usage which was naturally + # not known at parsing/collection time. + assert self._pyfuncitem.parent is not None + parentid = self._pyfuncitem.parent.nodeid + fixturedefs = self._fixturemanager.getfixturedefs(argname, parentid) + # TODO: Fix this type ignore. Either add assert or adjust types. + # Can this be None here? + self._arg2fixturedefs[argname] = fixturedefs # type: ignore[assignment] + # fixturedefs list is immutable so we maintain a decreasing index. 
+ index = self._arg2index.get(argname, 0) - 1 + if fixturedefs is None or (-index > len(fixturedefs)): + raise FixtureLookupError(argname, self) + self._arg2index[argname] = index + return fixturedefs[index] + + @property + def config(self) -> Config: + """The pytest config object associated with this request.""" + return self._pyfuncitem.config # type: ignore[no-any-return] + + @property + def function(self): + """Test function object if the request has a per-function scope.""" + if self.scope != "function": + raise AttributeError( + f"function not available in {self.scope}-scoped context" + ) + return self._pyfuncitem.obj + + @property + def cls(self): + """Class (can be None) where the test function was collected.""" + if self.scope not in ("class", "function"): + raise AttributeError(f"cls not available in {self.scope}-scoped context") + clscol = self._pyfuncitem.getparent(_pytest.python.Class) + if clscol: + return clscol.obj + + @property + def instance(self): + """Instance (can be None) on which test function was collected.""" + # unittest support hack, see _pytest.unittest.TestCaseFunction. + try: + return self._pyfuncitem._testcase + except AttributeError: + function = getattr(self, "function", None) + return getattr(function, "__self__", None) + + @property + def module(self): + """Python module object where the test function was collected.""" + if self.scope not in ("function", "class", "module"): + raise AttributeError(f"module not available in {self.scope}-scoped context") + return self._pyfuncitem.getparent(_pytest.python.Module).obj + + @property + def path(self) -> Path: + """Path where the test function was collected.""" + if self.scope not in ("function", "class", "module", "package"): + raise AttributeError(f"path not available in {self.scope}-scoped context") + # TODO: Remove ignore once _pyfuncitem is properly typed. + return self._pyfuncitem.path # type: ignore + + @property + def keywords(self) -> MutableMapping[str, Any]: + """Keywords/markers dictionary for the underlying node.""" + node: nodes.Node = self.node + return node.keywords + + @property + def session(self) -> "Session": + """Pytest session object.""" + return self._pyfuncitem.session # type: ignore[no-any-return] + + def addfinalizer(self, finalizer: Callable[[], object]) -> None: + """Add finalizer/teardown function to be called without arguments after + the last test within the requesting test context finished execution.""" + # XXX usually this method is shadowed by fixturedef specific ones. + self._addfinalizer(finalizer, scope=self.scope) + + def _addfinalizer(self, finalizer: Callable[[], object], scope) -> None: + node = self._getscopeitem(scope) + node.addfinalizer(finalizer) + + def applymarker(self, marker: Union[str, MarkDecorator]) -> None: + """Apply a marker to a single test function invocation. + + This method is useful if you don't want to have a keyword/marker + on all function invocations. + + :param marker: + An object created by a call to ``pytest.mark.NAME(...)``. + """ + self.node.add_marker(marker) + + def raiseerror(self, msg: Optional[str]) -> NoReturn: + """Raise a FixtureLookupError exception. + + :param msg: + An optional custom error message. 
+ """ + raise self._fixturemanager.FixtureLookupError(None, self, msg) + + def _fillfixtures(self) -> None: + item = self._pyfuncitem + fixturenames = getattr(item, "fixturenames", self.fixturenames) + for argname in fixturenames: + if argname not in item.funcargs: + item.funcargs[argname] = self.getfixturevalue(argname) + + def getfixturevalue(self, argname: str) -> Any: + """Dynamically run a named fixture function. + + Declaring fixtures via function argument is recommended where possible. + But if you can only decide whether to use another fixture at test + setup time, you may use this function to retrieve it inside a fixture + or test function body. + + This method can be used during the test setup phase or the test run + phase, but during the test teardown phase a fixture's value may not + be available. + + :param argname: + The fixture name. + :raises pytest.FixtureLookupError: + If the given fixture could not be found. + """ + fixturedef = self._get_active_fixturedef(argname) + assert fixturedef.cached_result is not None, ( + f'The fixture value for "{argname}" is not available. ' + "This can happen when the fixture has already been torn down." + ) + return fixturedef.cached_result[0] + + def _get_active_fixturedef( + self, argname: str + ) -> Union["FixtureDef[object]", PseudoFixtureDef[object]]: + try: + return self._fixture_defs[argname] + except KeyError: + try: + fixturedef = self._getnextfixturedef(argname) + except FixtureLookupError: + if argname == "request": + cached_result = (self, [0], None) + return PseudoFixtureDef(cached_result, Scope.Function) + raise + # Remove indent to prevent the python3 exception + # from leaking into the call. + self._compute_fixture_value(fixturedef) + self._fixture_defs[argname] = fixturedef + return fixturedef + + def _get_fixturestack(self) -> List["FixtureDef[Any]"]: + current = self + values: List[FixtureDef[Any]] = [] + while isinstance(current, SubRequest): + values.append(current._fixturedef) # type: ignore[has-type] + current = current._parent_request + values.reverse() + return values + + def _compute_fixture_value(self, fixturedef: "FixtureDef[object]") -> None: + """Create a SubRequest based on "self" and call the execute method + of the given FixtureDef object. + + This will force the FixtureDef object to throw away any previous + results and compute a new fixture value, which will be stored into + the FixtureDef object itself. + """ + # prepare a subrequest object before calling fixture function + # (latter managed by fixturedef) + argname = fixturedef.argname + funcitem = self._pyfuncitem + scope = fixturedef._scope + try: + callspec = funcitem.callspec + except AttributeError: + callspec = None + if callspec is not None and argname in callspec.params: + param = callspec.params[argname] + param_index = callspec.indices[argname] + # If a parametrize invocation set a scope it will override + # the static scope defined with the fixture function. 
+ with suppress(KeyError): + scope = callspec._arg2scope[argname] + else: + param = NOTSET + param_index = 0 + has_params = fixturedef.params is not None + fixtures_not_supported = getattr(funcitem, "nofuncargs", False) + if has_params and fixtures_not_supported: + msg = ( + "{name} does not support fixtures, maybe unittest.TestCase subclass?\n" + "Node id: {nodeid}\n" + "Function type: {typename}" + ).format( + name=funcitem.name, + nodeid=funcitem.nodeid, + typename=type(funcitem).__name__, + ) + fail(msg, pytrace=False) + if has_params: + frame = inspect.stack()[3] + frameinfo = inspect.getframeinfo(frame[0]) + source_path = absolutepath(frameinfo.filename) + source_lineno = frameinfo.lineno + try: + source_path_str = str( + source_path.relative_to(funcitem.config.rootpath) + ) + except ValueError: + source_path_str = str(source_path) + msg = ( + "The requested fixture has no parameter defined for test:\n" + " {}\n\n" + "Requested fixture '{}' defined in:\n{}" + "\n\nRequested here:\n{}:{}".format( + funcitem.nodeid, + fixturedef.argname, + getlocation(fixturedef.func, funcitem.config.rootpath), + source_path_str, + source_lineno, + ) + ) + fail(msg, pytrace=False) + + subrequest = SubRequest( + self, scope, param, param_index, fixturedef, _ispytest=True + ) + + # Check if a higher-level scoped fixture accesses a lower level one. + subrequest._check_scope(argname, self._scope, scope) + try: + # Call the fixture function. + fixturedef.execute(request=subrequest) + finally: + self._schedule_finalizers(fixturedef, subrequest) + + def _schedule_finalizers( + self, fixturedef: "FixtureDef[object]", subrequest: "SubRequest" + ) -> None: + # If fixture function failed it might have registered finalizers. + subrequest.node.addfinalizer(lambda: fixturedef.finish(request=subrequest)) + + def _check_scope( + self, + argname: str, + invoking_scope: Scope, + requested_scope: Scope, + ) -> None: + if argname == "request": + return + if invoking_scope > requested_scope: + # Try to report something helpful. + text = "\n".join(self._factorytraceback()) + fail( + f"ScopeMismatch: You tried to access the {requested_scope.value} scoped " + f"fixture {argname} with a {invoking_scope.value} scoped request object, " + f"involved factories:\n{text}", + pytrace=False, + ) + + def _factorytraceback(self) -> List[str]: + lines = [] + for fixturedef in self._get_fixturestack(): + factory = fixturedef.func + fs, lineno = getfslineno(factory) + if isinstance(fs, Path): + session: Session = self._pyfuncitem.session + p = bestrelpath(session.path, fs) + else: + p = fs + args = _format_args(factory) + lines.append("%s:%d: def %s%s" % (p, lineno + 1, factory.__name__, args)) + return lines + + def _getscopeitem( + self, scope: Union[Scope, "_ScopeName"] + ) -> Union[nodes.Item, nodes.Collector]: + if isinstance(scope, str): + scope = Scope(scope) + if scope is Scope.Function: + # This might also be a non-function Item despite its attribute name. + node: Optional[Union[nodes.Item, nodes.Collector]] = self._pyfuncitem + elif scope is Scope.Package: + # FIXME: _fixturedef is not defined on FixtureRequest (this class), + # but on FixtureRequest (a subclass). + node = get_scope_package(self._pyfuncitem, self._fixturedef) # type: ignore[attr-defined] + else: + node = get_scope_node(self._pyfuncitem, scope) + if node is None and scope is Scope.Class: + # Fallback to function item itself. 
+            node = self._pyfuncitem
+        assert node, 'Could not obtain a node for scope "{}" for function {!r}'.format(
+            scope, self._pyfuncitem
+        )
+        return node
+
+    def __repr__(self) -> str:
+        return "<FixtureRequest for %r>" % (self.node)
+
+
+@final
+class SubRequest(FixtureRequest):
+    """A sub request for handling getting a fixture from a test function/fixture."""
+
+    def __init__(
+        self,
+        request: "FixtureRequest",
+        scope: Scope,
+        param: Any,
+        param_index: int,
+        fixturedef: "FixtureDef[object]",
+        *,
+        _ispytest: bool = False,
+    ) -> None:
+        check_ispytest(_ispytest)
+        self._parent_request = request
+        self.fixturename = fixturedef.argname
+        if param is not NOTSET:
+            self.param = param
+        self.param_index = param_index
+        self._scope = scope
+        self._fixturedef = fixturedef
+        self._pyfuncitem = request._pyfuncitem
+        self._fixture_defs = request._fixture_defs
+        self._arg2fixturedefs = request._arg2fixturedefs
+        self._arg2index = request._arg2index
+        self._fixturemanager = request._fixturemanager
+
+    def __repr__(self) -> str:
+        return f"<SubRequest {self.fixturename!r} for {self._pyfuncitem!r}>"
+
+    def addfinalizer(self, finalizer: Callable[[], object]) -> None:
+        """Add finalizer/teardown function to be called without arguments after
+        the last test within the requesting test context finished execution."""
+        self._fixturedef.addfinalizer(finalizer)
+
+    def _schedule_finalizers(
+        self, fixturedef: "FixtureDef[object]", subrequest: "SubRequest"
+    ) -> None:
+        # If the executing fixturedef was not explicitly requested in the argument list (via
+        # getfixturevalue inside the fixture call) then ensure this fixture def will be finished
+        # first.
+        if fixturedef.argname not in self.fixturenames:
+            fixturedef.addfinalizer(
+                functools.partial(self._fixturedef.finish, request=self)
+            )
+        super()._schedule_finalizers(fixturedef, subrequest)
+
+
+@final
+class FixtureLookupError(LookupError):
+    """Could not return a requested fixture (missing or invalid)."""
+
+    def __init__(
+        self, argname: Optional[str], request: FixtureRequest, msg: Optional[str] = None
+    ) -> None:
+        self.argname = argname
+        self.request = request
+        self.fixturestack = request._get_fixturestack()
+        self.msg = msg
+
+    def formatrepr(self) -> "FixtureLookupErrorRepr":
+        tblines: List[str] = []
+        addline = tblines.append
+        stack = [self.request._pyfuncitem.obj]
+        stack.extend(map(lambda x: x.func, self.fixturestack))
+        msg = self.msg
+        if msg is not None:
+            # The last fixture raised an error, let's present
+            # it at the requesting side.
+            stack = stack[:-1]
+        for function in stack:
+            fspath, lineno = getfslineno(function)
+            try:
+                lines, _ = inspect.getsourcelines(get_real_func(function))
+            except (OSError, IndexError, TypeError):
+                error_msg = "file %s, line %s: source code not available"
+                addline(error_msg % (fspath, lineno + 1))
+            else:
+                addline(f"file {fspath}, line {lineno + 1}")
+                for i, line in enumerate(lines):
+                    line = line.rstrip()
+                    addline("  " + line)
+                    if line.lstrip().startswith("def"):
+                        break
+
+        if msg is None:
+            fm = self.request._fixturemanager
+            available = set()
+            parentid = self.request._pyfuncitem.parent.nodeid
+            for name, fixturedefs in fm._arg2fixturedefs.items():
+                faclist = list(fm._matchfactories(fixturedefs, parentid))
+                if faclist:
+                    available.add(name)
+            if self.argname in available:
+                msg = " recursive dependency involving fixture '{}' detected".format(
+                    self.argname
+                )
+            else:
+                msg = f"fixture '{self.argname}' not found"
+            msg += "\n available fixtures: {}".format(", ".join(sorted(available)))
+            msg += "\n use 'pytest --fixtures [testpath]' for help on them."
+ + return FixtureLookupErrorRepr(fspath, lineno, tblines, msg, self.argname) + + +class FixtureLookupErrorRepr(TerminalRepr): + def __init__( + self, + filename: Union[str, "os.PathLike[str]"], + firstlineno: int, + tblines: Sequence[str], + errorstring: str, + argname: Optional[str], + ) -> None: + self.tblines = tblines + self.errorstring = errorstring + self.filename = filename + self.firstlineno = firstlineno + self.argname = argname + + def toterminal(self, tw: TerminalWriter) -> None: + # tw.line("FixtureLookupError: %s" %(self.argname), red=True) + for tbline in self.tblines: + tw.line(tbline.rstrip()) + lines = self.errorstring.split("\n") + if lines: + tw.line( + f"{FormattedExcinfo.fail_marker} {lines[0].strip()}", + red=True, + ) + for line in lines[1:]: + tw.line( + f"{FormattedExcinfo.flow_marker} {line.strip()}", + red=True, + ) + tw.line() + tw.line("%s:%d" % (os.fspath(self.filename), self.firstlineno + 1)) + + +def fail_fixturefunc(fixturefunc, msg: str) -> NoReturn: + fs, lineno = getfslineno(fixturefunc) + location = f"{fs}:{lineno + 1}" + source = _pytest._code.Source(fixturefunc) + fail(msg + ":\n\n" + str(source.indent()) + "\n" + location, pytrace=False) + + +def call_fixture_func( + fixturefunc: "_FixtureFunc[FixtureValue]", request: FixtureRequest, kwargs +) -> FixtureValue: + if is_generator(fixturefunc): + fixturefunc = cast( + Callable[..., Generator[FixtureValue, None, None]], fixturefunc + ) + generator = fixturefunc(**kwargs) + try: + fixture_result = next(generator) + except StopIteration: + raise ValueError(f"{request.fixturename} did not yield a value") from None + finalizer = functools.partial(_teardown_yield_fixture, fixturefunc, generator) + request.addfinalizer(finalizer) + else: + fixturefunc = cast(Callable[..., FixtureValue], fixturefunc) + fixture_result = fixturefunc(**kwargs) + return fixture_result + + +def _teardown_yield_fixture(fixturefunc, it) -> None: + """Execute the teardown of a fixture function by advancing the iterator + after the yield and ensure the iteration ends (if not it means there is + more than one yield in the function).""" + try: + next(it) + except StopIteration: + pass + else: + fail_fixturefunc(fixturefunc, "fixture function has more than one 'yield'") + + +def _eval_scope_callable( + scope_callable: "Callable[[str, Config], _ScopeName]", + fixture_name: str, + config: Config, +) -> "_ScopeName": + try: + # Type ignored because there is no typing mechanism to specify + # keyword arguments, currently. 
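+        # A dynamic scope callable might look like this (hypothetical example,
+        # not part of this module):
+        #     def dynamic_scope(*, fixture_name: str, config: Config) -> str:
+        #         return "session" if config.getoption("--reuse-db", None) else "function"
+        # It must accept (fixture_name, config) as keyword arguments and
+        # return one of the valid scope names.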
+ result = scope_callable(fixture_name=fixture_name, config=config) # type: ignore[call-arg] + except Exception as e: + raise TypeError( + "Error evaluating {} while defining fixture '{}'.\n" + "Expected a function with the signature (*, fixture_name, config)".format( + scope_callable, fixture_name + ) + ) from e + if not isinstance(result, str): + fail( + "Expected {} to return a 'str' while defining fixture '{}', but it returned:\n" + "{!r}".format(scope_callable, fixture_name, result), + pytrace=False, + ) + return result + + +@final +class FixtureDef(Generic[FixtureValue]): + """A container for a fixture definition.""" + + def __init__( + self, + fixturemanager: "FixtureManager", + baseid: Optional[str], + argname: str, + func: "_FixtureFunc[FixtureValue]", + scope: Union[Scope, "_ScopeName", Callable[[str, Config], "_ScopeName"], None], + params: Optional[Sequence[object]], + unittest: bool = False, + ids: Optional[ + Union[Tuple[Optional[object], ...], Callable[[Any], Optional[object]]] + ] = None, + ) -> None: + self._fixturemanager = fixturemanager + # The "base" node ID for the fixture. + # + # This is a node ID prefix. A fixture is only available to a node (e.g. + # a `Function` item) if the fixture's baseid is a parent of the node's + # nodeid (see the `iterparentnodeids` function for what constitutes a + # "parent" and a "prefix" in this context). + # + # For a fixture found in a Collector's object (e.g. a `Module`s module, + # a `Class`'s class), the baseid is the Collector's nodeid. + # + # For a fixture found in a conftest plugin, the baseid is the conftest's + # directory path relative to the rootdir. + # + # For other plugins, the baseid is the empty string (always matches). + self.baseid = baseid or "" + # Whether the fixture was found from a node or a conftest in the + # collection tree. Will be false for fixtures defined in non-conftest + # plugins. + self.has_location = baseid is not None + # The fixture factory function. + self.func = func + # The name by which the fixture may be requested. + self.argname = argname + if scope is None: + scope = Scope.Function + elif callable(scope): + scope = _eval_scope_callable(scope, argname, fixturemanager.config) + if isinstance(scope, str): + scope = Scope.from_user( + scope, descr=f"Fixture '{func.__name__}'", where=baseid + ) + self._scope = scope + # If the fixture is directly parametrized, the parameter values. + self.params: Optional[Sequence[object]] = params + # If the fixture is directly parametrized, a tuple of explicit IDs to + # assign to the parameter values, or a callable to generate an ID given + # a parameter value. + self.ids = ids + # The names requested by the fixtures. + self.argnames = getfuncargnames(func, name=argname, is_method=unittest) + # Whether the fixture was collected from a unittest TestCase class. + # Note that it really only makes sense to define autouse fixtures in + # unittest TestCases. + self.unittest = unittest + # If the fixture was executed, the current value of the fixture. + # Can change if the fixture is executed with different parameters. 
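+        # It is stored as a (result, cache_key, exc_info) triple; see
+        # FixtureDef.execute() and pytest_fixture_setup() in this file for
+        # how the cache is filled and consulted.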
+        self.cached_result: Optional[_FixtureCachedResult[FixtureValue]] = None
+        self._finalizers: List[Callable[[], object]] = []
+
+    @property
+    def scope(self) -> "_ScopeName":
+        """Scope string, one of "function", "class", "module", "package", "session"."""
+        return self._scope.value
+
+    def addfinalizer(self, finalizer: Callable[[], object]) -> None:
+        self._finalizers.append(finalizer)
+
+    def finish(self, request: SubRequest) -> None:
+        exc = None
+        try:
+            while self._finalizers:
+                try:
+                    func = self._finalizers.pop()
+                    func()
+                except BaseException as e:
+                    # XXX Only first exception will be seen by user,
+                    #     ideally all should be reported.
+                    if exc is None:
+                        exc = e
+            if exc:
+                raise exc
+        finally:
+            ihook = request.node.ihook
+            ihook.pytest_fixture_post_finalizer(fixturedef=self, request=request)
+            # Even if finalization fails, we invalidate the cached fixture
+            # value and remove all finalizers because they may be bound methods
+            # which will keep instances alive.
+            self.cached_result = None
+            self._finalizers = []
+
+    def execute(self, request: SubRequest) -> FixtureValue:
+        # Get required arguments and register our own finish()
+        # with their finalization.
+        for argname in self.argnames:
+            fixturedef = request._get_active_fixturedef(argname)
+            if argname != "request":
+                # PseudoFixtureDef is only for "request".
+                assert isinstance(fixturedef, FixtureDef)
+                fixturedef.addfinalizer(functools.partial(self.finish, request=request))
+
+        my_cache_key = self.cache_key(request)
+        if self.cached_result is not None:
+            # note: comparison with `==` can fail (or be expensive) for e.g.
+            # numpy arrays (#6497).
+            cache_key = self.cached_result[1]
+            if my_cache_key is cache_key:
+                if self.cached_result[2] is not None:
+                    _, val, tb = self.cached_result[2]
+                    raise val.with_traceback(tb)
+                else:
+                    result = self.cached_result[0]
+                    return result
+            # We have a previous but differently parametrized fixture instance
+            # so we need to tear it down before creating a new one.
+            self.finish(request)
+            assert self.cached_result is None
+
+        ihook = request.node.ihook
+        result = ihook.pytest_fixture_setup(fixturedef=self, request=request)
+        return result
+
+    def cache_key(self, request: SubRequest) -> object:
+        return request.param_index if not hasattr(request, "param") else request.param
+
+    def __repr__(self) -> str:
+        return "<FixtureDef argname={!r} scope={!r} baseid={!r}>".format(
+            self.argname, self.scope, self.baseid
+        )
+
+
+def resolve_fixture_function(
+    fixturedef: FixtureDef[FixtureValue], request: FixtureRequest
+) -> "_FixtureFunc[FixtureValue]":
+    """Get the actual callable that can be called to obtain the fixture
+    value, dealing with unittest-specific instances and bound methods."""
+    fixturefunc = fixturedef.func
+    if fixturedef.unittest:
+        if request.instance is not None:
+            # Bind the unbound method to the TestCase instance.
+            fixturefunc = fixturedef.func.__get__(request.instance)  # type: ignore[union-attr]
+    else:
+        # The fixture function needs to be bound to the actual
+        # request.instance so that code working with "fixturedef" behaves
+        # as expected.
+        if request.instance is not None:
+            # Handle the case where fixture is defined not in a test class, but some other class
+            # (for example a plugin class with a fixture), see #2270.
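+            # (E.g. a fixture defined on a plugin object registered with the
+            # plugin manager: its __self__ is the plugin instance rather than
+            # request.instance, so the already-bound method is returned as-is.)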
+ if hasattr(fixturefunc, "__self__") and not isinstance( + request.instance, fixturefunc.__self__.__class__ # type: ignore[union-attr] + ): + return fixturefunc + fixturefunc = getimfunc(fixturedef.func) + if fixturefunc != fixturedef.func: + fixturefunc = fixturefunc.__get__(request.instance) # type: ignore[union-attr] + return fixturefunc + + +def pytest_fixture_setup( + fixturedef: FixtureDef[FixtureValue], request: SubRequest +) -> FixtureValue: + """Execution of fixture setup.""" + kwargs = {} + for argname in fixturedef.argnames: + fixdef = request._get_active_fixturedef(argname) + assert fixdef.cached_result is not None + result, arg_cache_key, exc = fixdef.cached_result + request._check_scope(argname, request._scope, fixdef._scope) + kwargs[argname] = result + + fixturefunc = resolve_fixture_function(fixturedef, request) + my_cache_key = fixturedef.cache_key(request) + try: + result = call_fixture_func(fixturefunc, request, kwargs) + except TEST_OUTCOME: + exc_info = sys.exc_info() + assert exc_info[0] is not None + fixturedef.cached_result = (None, my_cache_key, exc_info) + raise + fixturedef.cached_result = (result, my_cache_key, None) + return result + + +def _ensure_immutable_ids( + ids: Optional[Union[Sequence[Optional[object]], Callable[[Any], Optional[object]]]] +) -> Optional[Union[Tuple[Optional[object], ...], Callable[[Any], Optional[object]]]]: + if ids is None: + return None + if callable(ids): + return ids + return tuple(ids) + + +def _params_converter( + params: Optional[Iterable[object]], +) -> Optional[Tuple[object, ...]]: + return tuple(params) if params is not None else None + + +def wrap_function_to_error_out_if_called_directly( + function: FixtureFunction, + fixture_marker: "FixtureFunctionMarker", +) -> FixtureFunction: + """Wrap the given fixture function so we can raise an error about it being called directly, + instead of used as an argument in a test function.""" + message = ( + 'Fixture "{name}" called directly. Fixtures are not meant to be called directly,\n' + "but are created automatically when test functions request them as parameters.\n" + "See https://docs.pytest.org/en/stable/explanation/fixtures.html for more information about fixtures, and\n" + "https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code." + ).format(name=fixture_marker.name or function.__name__) + + @functools.wraps(function) + def result(*args, **kwargs): + fail(message, pytrace=False) + + # Keep reference to the original function in our own custom attribute so we don't unwrap + # further than this point and lose useful wrappings like @mock.patch (#3774). 
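+    # (get_real_func() in _pytest.compat stops unwrapping when it finds this
+    # __pytest_wrapped__ attribute, so the original function stays reachable.)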
+ result.__pytest_wrapped__ = _PytestWrapper(function) # type: ignore[attr-defined] + + return cast(FixtureFunction, result) + + +@final +@attr.s(frozen=True, auto_attribs=True) +class FixtureFunctionMarker: + scope: "Union[_ScopeName, Callable[[str, Config], _ScopeName]]" + params: Optional[Tuple[object, ...]] = attr.ib(converter=_params_converter) + autouse: bool = False + ids: Optional[ + Union[Tuple[Optional[object], ...], Callable[[Any], Optional[object]]] + ] = attr.ib( + default=None, + converter=_ensure_immutable_ids, + ) + name: Optional[str] = None + + def __call__(self, function: FixtureFunction) -> FixtureFunction: + if inspect.isclass(function): + raise ValueError("class fixtures not supported (maybe in the future)") + + if getattr(function, "_pytestfixturefunction", False): + raise ValueError( + "fixture is being applied more than once to the same function" + ) + + function = wrap_function_to_error_out_if_called_directly(function, self) + + name = self.name or function.__name__ + if name == "request": + location = getlocation(function) + fail( + "'request' is a reserved word for fixtures, use another name:\n {}".format( + location + ), + pytrace=False, + ) + + # Type ignored because https://github.com/python/mypy/issues/2087. + function._pytestfixturefunction = self # type: ignore[attr-defined] + return function + + +@overload +def fixture( + fixture_function: FixtureFunction, + *, + scope: "Union[_ScopeName, Callable[[str, Config], _ScopeName]]" = ..., + params: Optional[Iterable[object]] = ..., + autouse: bool = ..., + ids: Optional[ + Union[Sequence[Optional[object]], Callable[[Any], Optional[object]]] + ] = ..., + name: Optional[str] = ..., +) -> FixtureFunction: + ... + + +@overload +def fixture( # noqa: F811 + fixture_function: None = ..., + *, + scope: "Union[_ScopeName, Callable[[str, Config], _ScopeName]]" = ..., + params: Optional[Iterable[object]] = ..., + autouse: bool = ..., + ids: Optional[ + Union[Sequence[Optional[object]], Callable[[Any], Optional[object]]] + ] = ..., + name: Optional[str] = None, +) -> FixtureFunctionMarker: + ... + + +def fixture( # noqa: F811 + fixture_function: Optional[FixtureFunction] = None, + *, + scope: "Union[_ScopeName, Callable[[str, Config], _ScopeName]]" = "function", + params: Optional[Iterable[object]] = None, + autouse: bool = False, + ids: Optional[ + Union[Sequence[Optional[object]], Callable[[Any], Optional[object]]] + ] = None, + name: Optional[str] = None, +) -> Union[FixtureFunctionMarker, FixtureFunction]: + """Decorator to mark a fixture factory function. + + This decorator can be used, with or without parameters, to define a + fixture function. + + The name of the fixture function can later be referenced to cause its + invocation ahead of running tests: test modules or classes can use the + ``pytest.mark.usefixtures(fixturename)`` marker. + + Test functions can directly use fixture names as input arguments in which + case the fixture instance returned from the fixture function will be + injected. + + Fixtures can provide their values to test functions using ``return`` or + ``yield`` statements. When using ``yield`` the code block after the + ``yield`` statement is executed as teardown code regardless of the test + outcome, and must yield exactly once. + + :param scope: + The scope for which this fixture is shared; one of ``"function"`` + (default), ``"class"``, ``"module"``, ``"package"`` or ``"session"``. 
+
+        This parameter may also be a callable which receives ``(fixture_name, config)``
+        as parameters, and must return a ``str`` with one of the values mentioned above.
+
+        See :ref:`dynamic scope` in the docs for more information.
+
+    :param params:
+        An optional list of parameters which will cause multiple invocations
+        of the fixture function and all of the tests using it. The current
+        parameter is available in ``request.param``.
+
+    :param autouse:
+        If True, the fixture func is activated for all tests that can see it.
+        If False (the default), an explicit reference is needed to activate
+        the fixture.
+
+    :param ids:
+        Sequence of ids each corresponding to the params so that they are
+        part of the test id. If no ids are provided they will be generated
+        automatically from the params.
+
+    :param name:
+        The name of the fixture. This defaults to the name of the decorated
+        function. If a fixture is used in the same module in which it is
+        defined, the function name of the fixture will be shadowed by the
+        function arg that requests the fixture; one way to resolve this is to
+        name the decorated function ``fixture_<fixturename>`` and then use
+        ``@pytest.fixture(name='<fixturename>')``.
+    """
+    fixture_marker = FixtureFunctionMarker(
+        scope=scope,
+        params=params,
+        autouse=autouse,
+        ids=ids,
+        name=name,
+    )
+
+    # Direct decoration.
+    if fixture_function:
+        return fixture_marker(fixture_function)
+
+    return fixture_marker
+
+
+def yield_fixture(
+    fixture_function=None,
+    *args,
+    scope="function",
+    params=None,
+    autouse=False,
+    ids=None,
+    name=None,
+):
+    """(Return a) decorator to mark a yield-fixture factory function.
+
+    .. deprecated:: 3.0
+        Use :py:func:`pytest.fixture` directly instead.
+    """
+    warnings.warn(YIELD_FIXTURE, stacklevel=2)
+    return fixture(
+        fixture_function,
+        *args,
+        scope=scope,
+        params=params,
+        autouse=autouse,
+        ids=ids,
+        name=name,
+    )
+
+
+@fixture(scope="session")
+def pytestconfig(request: FixtureRequest) -> Config:
+    """Session-scoped fixture that returns the session's :class:`pytest.Config`
+    object.
+
+    Example::
+
+        def test_foo(pytestconfig):
+            if pytestconfig.getoption("verbose") > 0:
+                ...
+
+    """
+    return request.config
+
+
+def pytest_addoption(parser: Parser) -> None:
+    parser.addini(
+        "usefixtures",
+        type="args",
+        default=[],
+        help="List of default fixtures to be used with this project",
+    )
+
+
+class FixtureManager:
+    """pytest fixture definitions and information is stored and managed
+    from this class.
+
+    During collection fm.parsefactories() is called multiple times to parse
+    fixture function definitions into FixtureDef objects and internal
+    data structures.
+
+    During collection of test functions, metafunc-mechanics instantiate
+    a FuncFixtureInfo object which is cached per node/func-name.
+    This FuncFixtureInfo object is later retrieved by Function nodes
+    which themselves offer a fixturenames attribute.
+
+    The FuncFixtureInfo object holds information about fixtures and FixtureDefs
+    relevant for a particular function. An initial list of fixtures is
+    assembled like this:
+
+    - ini-defined usefixtures
+    - autouse-marked fixtures along the collection chain up from the function
+    - usefixtures markers at module/class/function level
+    - test function funcargs
+
+    Subsequently the funcfixtureinfo.fixturenames attribute is computed
+    as the closure of the fixtures needed to setup the initial fixtures,
+    i.e. fixtures needed by fixture functions themselves are appended
+    to the fixturenames list.
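+
+    For example (an illustration, not code from this module): if a test
+    requests fixture ``b`` and ``b`` itself requests fixture ``a``, the
+    initial names are just ``(b,)`` (assuming no usefixtures or autouse
+    entries) while the computed closure also contains ``a``.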
+ + Upon the test-setup phases all fixturenames are instantiated, retrieved + by a lookup of their FuncFixtureInfo. + """ + + FixtureLookupError = FixtureLookupError + FixtureLookupErrorRepr = FixtureLookupErrorRepr + + def __init__(self, session: "Session") -> None: + self.session = session + self.config: Config = session.config + self._arg2fixturedefs: Dict[str, List[FixtureDef[Any]]] = {} + self._holderobjseen: Set[object] = set() + # A mapping from a nodeid to a list of autouse fixtures it defines. + self._nodeid_autousenames: Dict[str, List[str]] = { + "": self.config.getini("usefixtures"), + } + session.config.pluginmanager.register(self, "funcmanage") + + def _get_direct_parametrize_args(self, node: nodes.Node) -> List[str]: + """Return all direct parametrization arguments of a node, so we don't + mistake them for fixtures. + + Check https://github.com/pytest-dev/pytest/issues/5036. + + These things are done later as well when dealing with parametrization + so this could be improved. + """ + parametrize_argnames: List[str] = [] + for marker in node.iter_markers(name="parametrize"): + if not marker.kwargs.get("indirect", False): + p_argnames, _ = ParameterSet._parse_parametrize_args( + *marker.args, **marker.kwargs + ) + parametrize_argnames.extend(p_argnames) + + return parametrize_argnames + + def getfixtureinfo( + self, node: nodes.Node, func, cls, funcargs: bool = True + ) -> FuncFixtureInfo: + if funcargs and not getattr(node, "nofuncargs", False): + argnames = getfuncargnames(func, name=node.name, cls=cls) + else: + argnames = () + + usefixtures = tuple( + arg for mark in node.iter_markers(name="usefixtures") for arg in mark.args + ) + initialnames = usefixtures + argnames + fm = node.session._fixturemanager + initialnames, names_closure, arg2fixturedefs = fm.getfixtureclosure( + initialnames, node, ignore_args=self._get_direct_parametrize_args(node) + ) + return FuncFixtureInfo(argnames, initialnames, names_closure, arg2fixturedefs) + + def pytest_plugin_registered(self, plugin: _PluggyPlugin) -> None: + nodeid = None + try: + p = absolutepath(plugin.__file__) # type: ignore[attr-defined] + except AttributeError: + pass + else: + # Construct the base nodeid which is later used to check + # what fixtures are visible for particular tests (as denoted + # by their test id). + if p.name.startswith("conftest.py"): + try: + nodeid = str(p.parent.relative_to(self.config.rootpath)) + except ValueError: + nodeid = "" + if nodeid == ".": + nodeid = "" + if os.sep != nodes.SEP: + nodeid = nodeid.replace(os.sep, nodes.SEP) + + self.parsefactories(plugin, nodeid) + + def _getautousenames(self, nodeid: str) -> Iterator[str]: + """Return the names of autouse fixtures applicable to nodeid.""" + for parentnodeid in nodes.iterparentnodeids(nodeid): + basenames = self._nodeid_autousenames.get(parentnodeid) + if basenames: + yield from basenames + + def getfixtureclosure( + self, + fixturenames: Tuple[str, ...], + parentnode: nodes.Node, + ignore_args: Sequence[str] = (), + ) -> Tuple[Tuple[str, ...], List[str], Dict[str, Sequence[FixtureDef[Any]]]]: + # Collect the closure of all fixtures, starting with the given + # fixturenames as the initial set. As we have to visit all + # factory definitions anyway, we also return an arg2fixturedefs + # mapping so that the caller can reuse it and does not have + # to re-discover fixturedefs again for each fixturename + # (discovering matching fixtures for a given name/node is expensive). 
+ + parentid = parentnode.nodeid + fixturenames_closure = list(self._getautousenames(parentid)) + + def merge(otherlist: Iterable[str]) -> None: + for arg in otherlist: + if arg not in fixturenames_closure: + fixturenames_closure.append(arg) + + merge(fixturenames) + + # At this point, fixturenames_closure contains what we call "initialnames", + # which is a set of fixturenames the function immediately requests. We + # need to return it as well, so save this. + initialnames = tuple(fixturenames_closure) + + arg2fixturedefs: Dict[str, Sequence[FixtureDef[Any]]] = {} + lastlen = -1 + while lastlen != len(fixturenames_closure): + lastlen = len(fixturenames_closure) + for argname in fixturenames_closure: + if argname in ignore_args: + continue + if argname in arg2fixturedefs: + continue + fixturedefs = self.getfixturedefs(argname, parentid) + if fixturedefs: + arg2fixturedefs[argname] = fixturedefs + merge(fixturedefs[-1].argnames) + + def sort_by_scope(arg_name: str) -> Scope: + try: + fixturedefs = arg2fixturedefs[arg_name] + except KeyError: + return Scope.Function + else: + return fixturedefs[-1]._scope + + fixturenames_closure.sort(key=sort_by_scope, reverse=True) + return initialnames, fixturenames_closure, arg2fixturedefs + + def pytest_generate_tests(self, metafunc: "Metafunc") -> None: + """Generate new tests based on parametrized fixtures used by the given metafunc""" + + def get_parametrize_mark_argnames(mark: Mark) -> Sequence[str]: + args, _ = ParameterSet._parse_parametrize_args(*mark.args, **mark.kwargs) + return args + + for argname in metafunc.fixturenames: + # Get the FixtureDefs for the argname. + fixture_defs = metafunc._arg2fixturedefs.get(argname) + if not fixture_defs: + # Will raise FixtureLookupError at setup time if not parametrized somewhere + # else (e.g @pytest.mark.parametrize) + continue + + # If the test itself parametrizes using this argname, give it + # precedence. + if any( + argname in get_parametrize_mark_argnames(mark) + for mark in metafunc.definition.iter_markers("parametrize") + ): + continue + + # In the common case we only look at the fixture def with the + # closest scope (last in the list). But if the fixture overrides + # another fixture, while requesting the super fixture, keep going + # in case the super fixture is parametrized (#1953). + for fixturedef in reversed(fixture_defs): + # Fixture is parametrized, apply it and stop. + if fixturedef.params is not None: + metafunc.parametrize( + argname, + fixturedef.params, + indirect=True, + scope=fixturedef.scope, + ids=fixturedef.ids, + ) + break + + # Not requesting the overridden super fixture, stop. + if argname not in fixturedef.argnames: + break + + # Try next super fixture, if any. + + def pytest_collection_modifyitems(self, items: List[nodes.Item]) -> None: + # Separate parametrized setups. + items[:] = reorder_items(items) + + def parsefactories( + self, node_or_obj, nodeid=NOTSET, unittest: bool = False + ) -> None: + if nodeid is not NOTSET: + holderobj = node_or_obj + else: + holderobj = node_or_obj.obj + nodeid = node_or_obj.nodeid + if holderobj in self._holderobjseen: + return + + self._holderobjseen.add(holderobj) + autousenames = [] + for name in dir(holderobj): + # ugly workaround for one of the fspath deprecated property of node + # todo: safely generalize + if isinstance(holderobj, nodes.Node) and name == "fspath": + continue + + # The attribute can be an arbitrary descriptor, so the attribute + # access below can raise. safe_getatt() ignores such exceptions. 
+ obj = safe_getattr(holderobj, name, None) + marker = getfixturemarker(obj) + if not isinstance(marker, FixtureFunctionMarker): + # Magic globals with __getattr__ might have got us a wrong + # fixture attribute. + continue + + if marker.name: + name = marker.name + + # During fixture definition we wrap the original fixture function + # to issue a warning if called directly, so here we unwrap it in + # order to not emit the warning when pytest itself calls the + # fixture function. + obj = get_real_method(obj, holderobj) + + fixture_def = FixtureDef( + fixturemanager=self, + baseid=nodeid, + argname=name, + func=obj, + scope=marker.scope, + params=marker.params, + unittest=unittest, + ids=marker.ids, + ) + + faclist = self._arg2fixturedefs.setdefault(name, []) + if fixture_def.has_location: + faclist.append(fixture_def) + else: + # fixturedefs with no location are at the front + # so this inserts the current fixturedef after the + # existing fixturedefs from external plugins but + # before the fixturedefs provided in conftests. + i = len([f for f in faclist if not f.has_location]) + faclist.insert(i, fixture_def) + if marker.autouse: + autousenames.append(name) + + if autousenames: + self._nodeid_autousenames.setdefault(nodeid or "", []).extend(autousenames) + + def getfixturedefs( + self, argname: str, nodeid: str + ) -> Optional[Sequence[FixtureDef[Any]]]: + """Get a list of fixtures which are applicable to the given node id. + + :param str argname: Name of the fixture to search for. + :param str nodeid: Full node id of the requesting test. + :rtype: Sequence[FixtureDef] + """ + try: + fixturedefs = self._arg2fixturedefs[argname] + except KeyError: + return None + return tuple(self._matchfactories(fixturedefs, nodeid)) + + def _matchfactories( + self, fixturedefs: Iterable[FixtureDef[Any]], nodeid: str + ) -> Iterator[FixtureDef[Any]]: + parentnodeids = set(nodes.iterparentnodeids(nodeid)) + for fixturedef in fixturedefs: + if fixturedef.baseid in parentnodeids: + yield fixturedef diff --git a/env/lib/python3.8/site-packages/_pytest/freeze_support.py b/env/lib/python3.8/site-packages/_pytest/freeze_support.py new file mode 100644 index 00000000..9f8ea231 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/freeze_support.py @@ -0,0 +1,44 @@ +"""Provides a function to report all internal modules for using freezing +tools.""" +import types +from typing import Iterator +from typing import List +from typing import Union + + +def freeze_includes() -> List[str]: + """Return a list of module names used by pytest that should be + included by cx_freeze.""" + import _pytest + + result = list(_iter_all_modules(_pytest)) + return result + + +def _iter_all_modules( + package: Union[str, types.ModuleType], + prefix: str = "", +) -> Iterator[str]: + """Iterate over the names of all modules that can be found in the given + package, recursively. + + >>> import _pytest + >>> list(_iter_all_modules(_pytest)) + ['_pytest._argcomplete', '_pytest._code.code', ...] + """ + import os + import pkgutil + + if isinstance(package, str): + path = package + else: + # Type ignored because typeshed doesn't define ModuleType.__path__ + # (only defined on packages). + package_path = package.__path__ # type: ignore[attr-defined] + path, prefix = package_path[0], package.__name__ + "." 
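+    # pkgutil.iter_modules() yields (module_finder, name, ispkg) tuples for
+    # every module found under the given search path; recurse into packages.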
+    for _, name, is_package in pkgutil.iter_modules([path]):
+        if is_package:
+            for m in _iter_all_modules(os.path.join(path, name), prefix=name + "."):
+                yield prefix + m
+        else:
+            yield prefix + name
diff --git a/env/lib/python3.8/site-packages/_pytest/helpconfig.py b/env/lib/python3.8/site-packages/_pytest/helpconfig.py
new file mode 100644
index 00000000..151bc6df
--- /dev/null
+++ b/env/lib/python3.8/site-packages/_pytest/helpconfig.py
@@ -0,0 +1,265 @@
+"""Version info, help messages, tracing configuration."""
+import os
+import sys
+from argparse import Action
+from typing import List
+from typing import Optional
+from typing import Union
+
+import pytest
+from _pytest.config import Config
+from _pytest.config import ExitCode
+from _pytest.config import PrintHelp
+from _pytest.config.argparsing import Parser
+
+
+class HelpAction(Action):
+    """An argparse Action that will raise an exception in order to skip the
+    rest of the argument parsing when --help is passed.
+
+    This prevents argparse from quitting due to missing required arguments
+    when any are defined, for example by ``pytest_addoption``.
+    This is similar to the way that the builtin argparse --help option is
+    implemented by raising SystemExit.
+    """
+
+    def __init__(self, option_strings, dest=None, default=False, help=None):
+        super().__init__(
+            option_strings=option_strings,
+            dest=dest,
+            const=True,
+            default=default,
+            nargs=0,
+            help=help,
+        )
+
+    def __call__(self, parser, namespace, values, option_string=None):
+        setattr(namespace, self.dest, self.const)
+
+        # We should only skip the rest of the parsing after preparse is done.
+        if getattr(parser._parser, "after_preparse", False):
+            raise PrintHelp
+
+
+def pytest_addoption(parser: Parser) -> None:
+    group = parser.getgroup("debugconfig")
+    group.addoption(
+        "--version",
+        "-V",
+        action="count",
+        default=0,
+        dest="version",
+        help="Display pytest version and information about plugins. "
+        "When given twice, also display information about plugins.",
+    )
+    group._addoption(
+        "-h",
+        "--help",
+        action=HelpAction,
+        dest="help",
+        help="Show help message and configuration info",
+    )
+    group._addoption(
+        "-p",
+        action="append",
+        dest="plugins",
+        default=[],
+        metavar="name",
+        help="Early-load given plugin module name or entry point (multi-allowed). "
+        "To avoid loading of plugins, use the `no:` prefix, e.g. "
+        "`no:doctest`.",
+    )
+    group.addoption(
+        "--traceconfig",
+        "--trace-config",
+        action="store_true",
+        default=False,
+        help="Trace considerations of conftest.py files",
+    )
+    group.addoption(
+        "--debug",
+        action="store",
+        nargs="?",
+        const="pytestdebug.log",
+        dest="debug",
+        metavar="DEBUG_FILE_NAME",
+        help="Store internal tracing debug information in this log file. "
+        "This file is opened with 'w' and truncated as a result, care advised. "
+        "Default: pytestdebug.log.",
+    )
+    group._addoption(
+        "-o",
+        "--override-ini",
+        dest="override_ini",
+        action="append",
+        help='Override ini option with "option=value" style, '
+        "e.g. `-o xfail_strict=True -o cache_dir=cache`.",
+    )
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_cmdline_parse():
+    outcome = yield
+    config: Config = outcome.get_result()
+
+    if config.option.debug:
+        # --debug | --debug <file> was provided.
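+        # config.option.debug holds the chosen log file name; it defaults to
+        # "pytestdebug.log" via the option's const= value when --debug is
+        # given without an argument.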
+ path = config.option.debug + debugfile = open(path, "w") + debugfile.write( + "versions pytest-%s, " + "python-%s\ncwd=%s\nargs=%s\n\n" + % ( + pytest.__version__, + ".".join(map(str, sys.version_info)), + os.getcwd(), + config.invocation_params.args, + ) + ) + config.trace.root.setwriter(debugfile.write) + undo_tracing = config.pluginmanager.enable_tracing() + sys.stderr.write("writing pytest debug information to %s\n" % path) + + def unset_tracing() -> None: + debugfile.close() + sys.stderr.write("wrote pytest debug information to %s\n" % debugfile.name) + config.trace.root.setwriter(None) + undo_tracing() + + config.add_cleanup(unset_tracing) + + +def showversion(config: Config) -> None: + if config.option.version > 1: + sys.stdout.write( + "This is pytest version {}, imported from {}\n".format( + pytest.__version__, pytest.__file__ + ) + ) + plugininfo = getpluginversioninfo(config) + if plugininfo: + for line in plugininfo: + sys.stdout.write(line + "\n") + else: + sys.stdout.write(f"pytest {pytest.__version__}\n") + + +def pytest_cmdline_main(config: Config) -> Optional[Union[int, ExitCode]]: + if config.option.version > 0: + showversion(config) + return 0 + elif config.option.help: + config._do_configure() + showhelp(config) + config._ensure_unconfigure() + return 0 + return None + + +def showhelp(config: Config) -> None: + import textwrap + + reporter = config.pluginmanager.get_plugin("terminalreporter") + tw = reporter._tw + tw.write(config._parser.optparser.format_help()) + tw.line() + tw.line( + "[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:" + ) + tw.line() + + columns = tw.fullwidth # costly call + indent_len = 24 # based on argparse's max_help_position=24 + indent = " " * indent_len + for name in config._parser._ininames: + help, type, default = config._parser._inidict[name] + if type is None: + type = "string" + if help is None: + raise TypeError(f"help argument cannot be None for {name}") + spec = f"{name} ({type}):" + tw.write(" %s" % spec) + spec_len = len(spec) + if spec_len > (indent_len - 3): + # Display help starting at a new line. + tw.line() + helplines = textwrap.wrap( + help, + columns, + initial_indent=indent, + subsequent_indent=indent, + break_on_hyphens=False, + ) + + for line in helplines: + tw.line(line) + else: + # Display help starting after the spec, following lines indented. 
+ tw.write(" " * (indent_len - spec_len - 2)) + wrapped = textwrap.wrap(help, columns - indent_len, break_on_hyphens=False) + + if wrapped: + tw.line(wrapped[0]) + for line in wrapped[1:]: + tw.line(indent + line) + + tw.line() + tw.line("Environment variables:") + vars = [ + ("PYTEST_ADDOPTS", "Extra command line options"), + ("PYTEST_PLUGINS", "Comma-separated plugins to load during startup"), + ("PYTEST_DISABLE_PLUGIN_AUTOLOAD", "Set to disable plugin auto-loading"), + ("PYTEST_DEBUG", "Set to enable debug tracing of pytest's internals"), + ] + for name, help in vars: + tw.line(f" {name:<24} {help}") + tw.line() + tw.line() + + tw.line("to see available markers type: pytest --markers") + tw.line("to see available fixtures type: pytest --fixtures") + tw.line( + "(shown according to specified file_or_dir or current dir " + "if not specified; fixtures with leading '_' are only shown " + "with the '-v' option" + ) + + for warningreport in reporter.stats.get("warnings", []): + tw.line("warning : " + warningreport.message, red=True) + return + + +conftest_options = [("pytest_plugins", "list of plugin names to load")] + + +def getpluginversioninfo(config: Config) -> List[str]: + lines = [] + plugininfo = config.pluginmanager.list_plugin_distinfo() + if plugininfo: + lines.append("setuptools registered plugins:") + for plugin, dist in plugininfo: + loc = getattr(plugin, "__file__", repr(plugin)) + content = f"{dist.project_name}-{dist.version} at {loc}" + lines.append(" " + content) + return lines + + +def pytest_report_header(config: Config) -> List[str]: + lines = [] + if config.option.debug or config.option.traceconfig: + lines.append(f"using: pytest-{pytest.__version__}") + + verinfo = getpluginversioninfo(config) + if verinfo: + lines.extend(verinfo) + + if config.option.traceconfig: + lines.append("active plugins:") + items = config.pluginmanager.list_name_plugin() + for name, plugin in items: + if hasattr(plugin, "__file__"): + r = plugin.__file__ + else: + r = repr(plugin) + lines.append(f" {name:<20}: {r}") + return lines diff --git a/env/lib/python3.8/site-packages/_pytest/hookspec.py b/env/lib/python3.8/site-packages/_pytest/hookspec.py new file mode 100644 index 00000000..cc0828dd --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/hookspec.py @@ -0,0 +1,972 @@ +"""Hook specifications for pytest plugins which are invoked by pytest itself +and by builtin plugins.""" +from pathlib import Path +from typing import Any +from typing import Dict +from typing import List +from typing import Mapping +from typing import Optional +from typing import Sequence +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +from pluggy import HookspecMarker + +from _pytest.deprecated import WARNING_CMDLINE_PREPARSE_HOOK + +if TYPE_CHECKING: + import pdb + import warnings + from typing_extensions import Literal + + from _pytest._code.code import ExceptionRepr + from _pytest.code import ExceptionInfo + from _pytest.config import Config + from _pytest.config import ExitCode + from _pytest.config import PytestPluginManager + from _pytest.config import _PluggyPlugin + from _pytest.config.argparsing import Parser + from _pytest.fixtures import FixtureDef + from _pytest.fixtures import SubRequest + from _pytest.main import Session + from _pytest.nodes import Collector + from _pytest.nodes import Item + from _pytest.outcomes import Exit + from _pytest.python import Class + from _pytest.python import Function + from _pytest.python import Metafunc + from _pytest.python 
+    from _pytest.reports import CollectReport
+    from _pytest.reports import TestReport
+    from _pytest.runner import CallInfo
+    from _pytest.terminal import TerminalReporter
+    from _pytest.compat import LEGACY_PATH
+
+
+hookspec = HookspecMarker("pytest")
+
+# -------------------------------------------------------------------------
+# Initialization hooks called for every plugin
+# -------------------------------------------------------------------------
+
+
+@hookspec(historic=True)
+def pytest_addhooks(pluginmanager: "PytestPluginManager") -> None:
+    """Called at plugin registration time to allow adding new hooks via a call to
+    ``pluginmanager.add_hookspecs(module_or_class, prefix)``.
+
+    :param pytest.PytestPluginManager pluginmanager: The pytest plugin manager.
+
+    .. note::
+        This hook is incompatible with ``hookwrapper=True``.
+    """
+
+
+@hookspec(historic=True)
+def pytest_plugin_registered(
+    plugin: "_PluggyPlugin", manager: "PytestPluginManager"
+) -> None:
+    """A new pytest plugin got registered.
+
+    :param plugin: The plugin module or instance.
+    :param pytest.PytestPluginManager manager: pytest plugin manager.
+
+    .. note::
+        This hook is incompatible with ``hookwrapper=True``.
+    """
+
+
+@hookspec(historic=True)
+def pytest_addoption(parser: "Parser", pluginmanager: "PytestPluginManager") -> None:
+    """Register argparse-style options and ini-style config values,
+    called once at the beginning of a test run.
+
+    .. note::
+
+        This function should be implemented only in plugins or ``conftest.py``
+        files situated at the tests root directory due to how pytest
+        :ref:`discovers plugins during startup <pluginorder>`.
+
+    :param pytest.Parser parser:
+        To add command line options, call
+        :py:func:`parser.addoption(...) <pytest.Parser.addoption>`.
+        To add ini-file values call :py:func:`parser.addini(...)
+        <pytest.Parser.addini>`.
+
+    :param pytest.PytestPluginManager pluginmanager:
+        The pytest plugin manager, which can be used to install :py:func:`hookspec`'s
+        or :py:func:`hookimpl`'s and allow one plugin to call another plugin's hooks
+        to change how command line options are added.
+
+    Options can later be accessed through the
+    :py:class:`config <pytest.Config>` object, respectively:
+
+    - :py:func:`config.getoption(name) <pytest.Config.getoption>` to
+      retrieve the value of a command line option.
+
+    - :py:func:`config.getini(name) <pytest.Config.getini>` to retrieve
+      a value read from an ini-style file.
+
+    The config object is passed around on many internal objects via the ``.config``
+    attribute or can be retrieved as the ``pytestconfig`` fixture.
+
+    .. note::
+        This hook is incompatible with ``hookwrapper=True``.
+    """
+
+
+@hookspec(historic=True)
+def pytest_configure(config: "Config") -> None:
+    """Allow plugins and conftest files to perform initial configuration.
+
+    This hook is called for every plugin and initial conftest file
+    after command line options have been parsed.
+
+    After that, the hook is called for other conftest files as they are
+    imported.
+
+    .. note::
+        This hook is incompatible with ``hookwrapper=True``.
+
+    :param pytest.Config config: The pytest config object.
+    """
+
+
+# -------------------------------------------------------------------------
+# Bootstrapping hooks called for plugins registered early enough:
+# internal and 3rd party plugins.
+# -------------------------------------------------------------------------
+
+
+@hookspec(firstresult=True)
+def pytest_cmdline_parse(
+    pluginmanager: "PytestPluginManager", args: List[str]
+) -> Optional["Config"]:
+    """Return an initialized :class:`~pytest.Config`, parsing the specified args.
+ + Stops at first non-None result, see :ref:`firstresult`. + + .. note:: + This hook will only be called for plugin classes passed to the + ``plugins`` arg when using `pytest.main`_ to perform an in-process + test run. + + :param pluginmanager: The pytest plugin manager. + :param args: List of arguments passed on the command line. + :returns: A pytest config object. + """ + + +@hookspec(warn_on_impl=WARNING_CMDLINE_PREPARSE_HOOK) +def pytest_cmdline_preparse(config: "Config", args: List[str]) -> None: + """(**Deprecated**) modify command line arguments before option parsing. + + This hook is considered deprecated and will be removed in a future pytest version. Consider + using :hook:`pytest_load_initial_conftests` instead. + + .. note:: + This hook will not be called for ``conftest.py`` files, only for setuptools plugins. + + :param config: The pytest config object. + :param args: Arguments passed on the command line. + """ + + +@hookspec(firstresult=True) +def pytest_cmdline_main(config: "Config") -> Optional[Union["ExitCode", int]]: + """Called for performing the main command line action. The default + implementation will invoke the configure hooks and runtest_mainloop. + + Stops at first non-None result, see :ref:`firstresult`. + + :param config: The pytest config object. + :returns: The exit code. + """ + + +def pytest_load_initial_conftests( + early_config: "Config", parser: "Parser", args: List[str] +) -> None: + """Called to implement the loading of initial conftest files ahead + of command line option parsing. + + .. note:: + This hook will not be called for ``conftest.py`` files, only for setuptools plugins. + + :param early_config: The pytest config object. + :param args: Arguments passed on the command line. + :param parser: To add command line options. + """ + + +# ------------------------------------------------------------------------- +# collection hooks +# ------------------------------------------------------------------------- + + +@hookspec(firstresult=True) +def pytest_collection(session: "Session") -> Optional[object]: + """Perform the collection phase for the given session. + + Stops at first non-None result, see :ref:`firstresult`. + The return value is not used, but only stops further processing. + + The default collection phase is this (see individual hooks for full details): + + 1. Starting from ``session`` as the initial collector: + + 1. ``pytest_collectstart(collector)`` + 2. ``report = pytest_make_collect_report(collector)`` + 3. ``pytest_exception_interact(collector, call, report)`` if an interactive exception occurred + 4. For each collected node: + + 1. If an item, ``pytest_itemcollected(item)`` + 2. If a collector, recurse into it. + + 5. ``pytest_collectreport(report)`` + + 2. ``pytest_collection_modifyitems(session, config, items)`` + + 1. ``pytest_deselected(items)`` for any deselected items (may be called multiple times) + + 3. ``pytest_collection_finish(session)`` + 4. Set ``session.items`` to the list of collected items + 5. Set ``session.testscollected`` to the number of collected items + + You can implement this hook to only perform some action before collection, + for example the terminal plugin uses it to start displaying the collection + counter (and returns `None`). + + :param session: The pytest session object. + """ + + +def pytest_collection_modifyitems( + session: "Session", config: "Config", items: List["Item"] +) -> None: + """Called after collection has been performed. May filter or re-order + the items in-place. 
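+
+    For example, a ``conftest.py`` hook implementation along these lines
+    (a minimal sketch) runs the collected tests in reverse order::
+
+        def pytest_collection_modifyitems(session, config, items):
+            items.reverse()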
+
+    :param session: The pytest session object.
+    :param config: The pytest config object.
+    :param items: List of item objects.
+    """
+
+
+def pytest_collection_finish(session: "Session") -> None:
+    """Called after collection has been performed and modified.
+
+    :param session: The pytest session object.
+    """
+
+
+@hookspec(firstresult=True)
+def pytest_ignore_collect(
+    collection_path: Path, path: "LEGACY_PATH", config: "Config"
+) -> Optional[bool]:
+    """Return True to prevent considering this path for collection.
+
+    This hook is consulted for all files and directories prior to calling
+    more specific hooks.
+
+    Stops at first non-None result, see :ref:`firstresult`.
+
+    :param collection_path: The path to analyze.
+    :param path: The path to analyze (deprecated).
+    :param config: The pytest config object.
+
+    .. versionchanged:: 7.0.0
+        The ``collection_path`` parameter was added as a :class:`pathlib.Path`
+        equivalent of the ``path`` parameter. The ``path`` parameter
+        has been deprecated.
+    """
+
+
+def pytest_collect_file(
+    file_path: Path, path: "LEGACY_PATH", parent: "Collector"
+) -> "Optional[Collector]":
+    """Create a :class:`~pytest.Collector` for the given path, or None if not relevant.
+
+    The new node needs to have the specified ``parent`` as a parent.
+
+    :param file_path: The path to analyze.
+    :param path: The path to collect (deprecated).
+
+    .. versionchanged:: 7.0.0
+        The ``file_path`` parameter was added as a :class:`pathlib.Path`
+        equivalent of the ``path`` parameter. The ``path`` parameter
+        has been deprecated.
+    """
+
+
+# logging hooks for collection
+
+
+def pytest_collectstart(collector: "Collector") -> None:
+    """Collector starts collecting.
+
+    :param collector:
+        The collector.
+    """
+
+
+def pytest_itemcollected(item: "Item") -> None:
+    """We just collected a test item.
+
+    :param item:
+        The item.
+    """
+
+
+def pytest_collectreport(report: "CollectReport") -> None:
+    """Collector finished collecting.
+
+    :param report:
+        The collect report.
+    """
+
+
+def pytest_deselected(items: Sequence["Item"]) -> None:
+    """Called for deselected test items, e.g. by keyword.
+
+    May be called multiple times.
+
+    :param items:
+        The items.
+    """
+
+
+@hookspec(firstresult=True)
+def pytest_make_collect_report(collector: "Collector") -> "Optional[CollectReport]":
+    """Perform :func:`collector.collect() <pytest.Collector.collect>` and return
+    a :class:`~pytest.CollectReport`.
+
+    Stops at first non-None result, see :ref:`firstresult`.
+
+    :param collector:
+        The collector.
+    """
+
+
+# -------------------------------------------------------------------------
+# Python test function related hooks
+# -------------------------------------------------------------------------
+
+
+@hookspec(firstresult=True)
+def pytest_pycollect_makemodule(
+    module_path: Path, path: "LEGACY_PATH", parent
+) -> Optional["Module"]:
+    """Return a :class:`pytest.Module` collector or None for the given path.
+
+    This hook will be called for each matching test module path.
+    The :hook:`pytest_collect_file` hook needs to be used if you want to
+    create test modules for files that do not match as a test module.
+
+    Stops at first non-None result, see :ref:`firstresult`.
+
+    :param module_path: The path of the module to collect.
+    :param path: The path of the module to collect (deprecated).
+
+    .. versionchanged:: 7.0.0
+        The ``module_path`` parameter was added as a :class:`pathlib.Path`
+        equivalent of the ``path`` parameter.
+
+        The ``path`` parameter has been deprecated in favor of ``fspath``.
+ """ + + +@hookspec(firstresult=True) +def pytest_pycollect_makeitem( + collector: Union["Module", "Class"], name: str, obj: object +) -> Union[None, "Item", "Collector", List[Union["Item", "Collector"]]]: + """Return a custom item/collector for a Python object in a module, or None. + + Stops at first non-None result, see :ref:`firstresult`. + + :param collector: + The module/class collector. + :param name: + The name of the object in the module/class. + :param obj: + The object. + :returns: + The created items/collectors. + """ + + +@hookspec(firstresult=True) +def pytest_pyfunc_call(pyfuncitem: "Function") -> Optional[object]: + """Call underlying test function. + + Stops at first non-None result, see :ref:`firstresult`. + + :param pyfuncitem: + The function item. + """ + + +def pytest_generate_tests(metafunc: "Metafunc") -> None: + """Generate (multiple) parametrized calls to a test function. + + :param metafunc: + The :class:`~pytest.Metafunc` helper for the test function. + """ + + +@hookspec(firstresult=True) +def pytest_make_parametrize_id( + config: "Config", val: object, argname: str +) -> Optional[str]: + """Return a user-friendly string representation of the given ``val`` + that will be used by @pytest.mark.parametrize calls, or None if the hook + doesn't know about ``val``. + + The parameter name is available as ``argname``, if required. + + Stops at first non-None result, see :ref:`firstresult`. + + :param config: The pytest config object. + :param val: The parametrized value. + :param str argname: The automatic parameter name produced by pytest. + """ + + +# ------------------------------------------------------------------------- +# runtest related hooks +# ------------------------------------------------------------------------- + + +@hookspec(firstresult=True) +def pytest_runtestloop(session: "Session") -> Optional[object]: + """Perform the main runtest loop (after collection finished). + + The default hook implementation performs the runtest protocol for all items + collected in the session (``session.items``), unless the collection failed + or the ``collectonly`` pytest option is set. + + If at any point :py:func:`pytest.exit` is called, the loop is + terminated immediately. + + If at any point ``session.shouldfail`` or ``session.shouldstop`` are set, the + loop is terminated after the runtest protocol for the current item is finished. + + :param session: The pytest session object. + + Stops at first non-None result, see :ref:`firstresult`. + The return value is not used, but only stops further processing. + """ + + +@hookspec(firstresult=True) +def pytest_runtest_protocol( + item: "Item", nextitem: "Optional[Item]" +) -> Optional[object]: + """Perform the runtest protocol for a single test item. 
+
+    The default runtest protocol is this (see individual hooks for full details):
+
+    - ``pytest_runtest_logstart(nodeid, location)``
+
+    - Setup phase:
+        - ``call = pytest_runtest_setup(item)`` (wrapped in ``CallInfo(when="setup")``)
+        - ``report = pytest_runtest_makereport(item, call)``
+        - ``pytest_runtest_logreport(report)``
+        - ``pytest_exception_interact(call, report)`` if an interactive exception occurred
+
+    - Call phase, if the setup passed and the ``setuponly`` pytest option is not set:
+        - ``call = pytest_runtest_call(item)`` (wrapped in ``CallInfo(when="call")``)
+        - ``report = pytest_runtest_makereport(item, call)``
+        - ``pytest_runtest_logreport(report)``
+        - ``pytest_exception_interact(call, report)`` if an interactive exception occurred
+
+    - Teardown phase:
+        - ``call = pytest_runtest_teardown(item, nextitem)`` (wrapped in ``CallInfo(when="teardown")``)
+        - ``report = pytest_runtest_makereport(item, call)``
+        - ``pytest_runtest_logreport(report)``
+        - ``pytest_exception_interact(call, report)`` if an interactive exception occurred
+
+    - ``pytest_runtest_logfinish(nodeid, location)``
+
+    :param item: Test item for which the runtest protocol is performed.
+    :param nextitem: The scheduled-to-be-next test item (or None if this is the end my friend).
+
+    Stops at first non-None result, see :ref:`firstresult`.
+    The return value is not used, but only stops further processing.
+    """
+
+
+def pytest_runtest_logstart(
+    nodeid: str, location: Tuple[str, Optional[int], str]
+) -> None:
+    """Called at the start of running the runtest protocol for a single item.
+
+    See :hook:`pytest_runtest_protocol` for a description of the runtest protocol.
+
+    :param nodeid: Full node ID of the item.
+    :param location: A tuple of ``(filename, lineno, testname)``.
+    """
+
+
+def pytest_runtest_logfinish(
+    nodeid: str, location: Tuple[str, Optional[int], str]
+) -> None:
+    """Called at the end of running the runtest protocol for a single item.
+
+    See :hook:`pytest_runtest_protocol` for a description of the runtest protocol.
+
+    :param nodeid: Full node ID of the item.
+    :param location: A tuple of ``(filename, lineno, testname)``.
+    """
+
+
+def pytest_runtest_setup(item: "Item") -> None:
+    """Called to perform the setup phase for a test item.
+
+    The default implementation runs ``setup()`` on ``item`` and all of its
+    parents (which haven't been setup yet). This includes obtaining the
+    values of fixtures required by the item (which haven't been obtained
+    yet).
+
+    :param item:
+        The item.
+    """
+
+
+def pytest_runtest_call(item: "Item") -> None:
+    """Called to run the test for test item (the call phase).
+
+    The default implementation calls ``item.runtest()``.
+
+    :param item:
+        The item.
+    """
+
+
+def pytest_runtest_teardown(item: "Item", nextitem: Optional["Item"]) -> None:
+    """Called to perform the teardown phase for a test item.
+
+    The default implementation runs the finalizers and calls ``teardown()``
+    on ``item`` and all of its parents (which need to be torn down). This
+    includes running the teardown phase of fixtures required by the item (if
+    they go out of scope).
+
+    :param item:
+        The item.
+    :param nextitem:
+        The scheduled-to-be-next test item (None if no further test item is
+        scheduled). This argument is used to perform exact teardowns, i.e.
+        calling just enough finalizers so that nextitem only needs to call
+        setup functions.
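+
+    For illustration, a minimal ``conftest.py`` sketch that merely observes the
+    teardown phase (the printed message is hypothetical; hook implementations
+    may accept a subset of the parameters):
+
+    .. code-block:: python
+
+        def pytest_runtest_teardown(item, nextitem):
+            next_id = nextitem.nodeid if nextitem is not None else None
+            print(f"tearing down {item.nodeid}; next item: {next_id}")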
+    """
+
+
+@hookspec(firstresult=True)
+def pytest_runtest_makereport(
+    item: "Item", call: "CallInfo[None]"
+) -> Optional["TestReport"]:
+    """Called to create a :class:`~pytest.TestReport` for each of
+    the setup, call and teardown runtest phases of a test item.
+
+    See :hook:`pytest_runtest_protocol` for a description of the runtest protocol.
+
+    :param item: The item.
+    :param call: The :class:`~pytest.CallInfo` for the phase.
+
+    Stops at first non-None result, see :ref:`firstresult`.
+    """
+
+
+def pytest_runtest_logreport(report: "TestReport") -> None:
+    """Process the :class:`~pytest.TestReport` produced for each
+    of the setup, call and teardown runtest phases of an item.
+
+    See :hook:`pytest_runtest_protocol` for a description of the runtest protocol.
+    """
+
+
+@hookspec(firstresult=True)
+def pytest_report_to_serializable(
+    config: "Config",
+    report: Union["CollectReport", "TestReport"],
+) -> Optional[Dict[str, Any]]:
+    """Serialize the given report object into a data structure suitable for
+    sending over the wire, e.g. converted to JSON.
+
+    :param config: The pytest config object.
+    :param report: The report.
+    """
+
+
+@hookspec(firstresult=True)
+def pytest_report_from_serializable(
+    config: "Config",
+    data: Dict[str, Any],
+) -> Optional[Union["CollectReport", "TestReport"]]:
+    """Restore a report object previously serialized with
+    :hook:`pytest_report_to_serializable`.
+
+    :param config: The pytest config object.
+    """
+
+
+# -------------------------------------------------------------------------
+# Fixture related hooks
+# -------------------------------------------------------------------------
+
+
+@hookspec(firstresult=True)
+def pytest_fixture_setup(
+    fixturedef: "FixtureDef[Any]", request: "SubRequest"
+) -> Optional[object]:
+    """Perform fixture setup execution.
+
+    :param fixturedef:
+        The fixture definition object.
+    :param request:
+        The fixture request object.
+    :returns:
+        The return value of the call to the fixture function.
+
+    Stops at first non-None result, see :ref:`firstresult`.
+
+    .. note::
+        If the fixture function returns None, other implementations of
+        this hook function will continue to be called, according to the
+        behavior of the :ref:`firstresult` option.
+    """
+
+
+def pytest_fixture_post_finalizer(
+    fixturedef: "FixtureDef[Any]", request: "SubRequest"
+) -> None:
+    """Called after fixture teardown, but before the cache is cleared, so
+    the fixture result ``fixturedef.cached_result`` is still available (not
+    ``None``).
+
+    :param fixturedef:
+        The fixture definition object.
+    :param request:
+        The fixture request object.
+    """
+
+
+# -------------------------------------------------------------------------
+# test session related hooks
+# -------------------------------------------------------------------------
+
+
+def pytest_sessionstart(session: "Session") -> None:
+    """Called after the ``Session`` object has been created and before performing collection
+    and entering the run test loop.
+
+    :param session: The pytest session object.
+    """
+
+
+def pytest_sessionfinish(
+    session: "Session",
+    exitstatus: Union[int, "ExitCode"],
+) -> None:
+    """Called after whole test run finished, right before returning the exit status to the system.
+
+    :param session: The pytest session object.
+    :param exitstatus: The status which pytest will return to the system.
+    """
+
+
+def pytest_unconfigure(config: "Config") -> None:
+    """Called before test process is exited.
+
+    :param config: The pytest config object.
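+
+    For illustration, a minimal ``conftest.py`` sketch that reports state at
+    exit (the ``_my_results`` attribute is hypothetical):
+
+    .. code-block:: python
+
+        def pytest_unconfigure(config):
+            results = getattr(config, "_my_results", None)  # hypothetical
+            if results is not None:
+                print(f"recorded {len(results)} results")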
+ """ + + +# ------------------------------------------------------------------------- +# hooks for customizing the assert methods +# ------------------------------------------------------------------------- + + +def pytest_assertrepr_compare( + config: "Config", op: str, left: object, right: object +) -> Optional[List[str]]: + """Return explanation for comparisons in failing assert expressions. + + Return None for no custom explanation, otherwise return a list + of strings. The strings will be joined by newlines but any newlines + *in* a string will be escaped. Note that all but the first line will + be indented slightly, the intention is for the first line to be a summary. + + :param config: The pytest config object. + :param op: The operator, e.g. `"=="`, `"!="`, `"not in"`. + :param left: The left operand. + :param right: The right operand. + """ + + +def pytest_assertion_pass(item: "Item", lineno: int, orig: str, expl: str) -> None: + """Called whenever an assertion passes. + + .. versionadded:: 5.0 + + Use this hook to do some processing after a passing assertion. + The original assertion information is available in the `orig` string + and the pytest introspected assertion information is available in the + `expl` string. + + This hook must be explicitly enabled by the ``enable_assertion_pass_hook`` + ini-file option: + + .. code-block:: ini + + [pytest] + enable_assertion_pass_hook=true + + You need to **clean the .pyc** files in your project directory and interpreter libraries + when enabling this option, as assertions will require to be re-written. + + :param item: pytest item object of current test. + :param lineno: Line number of the assert statement. + :param orig: String with the original assertion. + :param expl: String with the assert explanation. + """ + + +# ------------------------------------------------------------------------- +# Hooks for influencing reporting (invoked from _pytest_terminal). +# ------------------------------------------------------------------------- + + +def pytest_report_header( + config: "Config", start_path: Path, startdir: "LEGACY_PATH" +) -> Union[str, List[str]]: + """Return a string or list of strings to be displayed as header info for terminal reporting. + + :param config: The pytest config object. + :param start_path: The starting dir. + :param startdir: The starting dir (deprecated). + + .. note:: + + Lines returned by a plugin are displayed before those of plugins which + ran before it. + If you want to have your line(s) displayed first, use + :ref:`trylast=True `. + + .. note:: + + This function should be implemented only in plugins or ``conftest.py`` + files situated at the tests root directory due to how pytest + :ref:`discovers plugins during startup `. + + .. versionchanged:: 7.0.0 + The ``start_path`` parameter was added as a :class:`pathlib.Path` + equivalent of the ``startdir`` parameter. The ``startdir`` parameter + has been deprecated. + """ + + +def pytest_report_collectionfinish( + config: "Config", + start_path: Path, + startdir: "LEGACY_PATH", + items: Sequence["Item"], +) -> Union[str, List[str]]: + """Return a string or list of strings to be displayed after collection + has finished successfully. + + These strings will be displayed after the standard "collected X items" message. + + .. versionadded:: 3.2 + + :param config: The pytest config object. + :param start_path: The starting dir. + :param startdir: The starting dir (deprecated). 
+    :param items: List of pytest items that are going to be executed; this list should not be modified.
+
+    .. note::
+
+        Lines returned by a plugin are displayed before those of plugins which
+        ran before it.
+        If you want to have your line(s) displayed first, use
+        :ref:`trylast=True `.
+
+    .. versionchanged:: 7.0.0
+        The ``start_path`` parameter was added as a :class:`pathlib.Path`
+        equivalent of the ``startdir`` parameter. The ``startdir`` parameter
+        has been deprecated.
+    """
+
+
+@hookspec(firstresult=True)
+def pytest_report_teststatus(
+    report: Union["CollectReport", "TestReport"], config: "Config"
+) -> Tuple[str, str, Union[str, Mapping[str, bool]]]:
+    """Return result-category, shortletter and verbose word for status
+    reporting.
+
+    The result-category is a category in which to count the result, for
+    example "passed", "skipped", "error" or the empty string.
+
+    The shortletter is shown as testing progresses, for example ".", "s",
+    "E" or the empty string.
+
+    The verbose word is shown as testing progresses in verbose mode, for
+    example "PASSED", "SKIPPED", "ERROR" or the empty string.
+
+    pytest may style these implicitly according to the report outcome.
+    To provide explicit styling, return a tuple for the verbose word,
+    for example ``"rerun", "R", ("RERUN", {"yellow": True})``.
+
+    :param report: The report object whose status is to be returned.
+    :param config: The pytest config object.
+    :returns: The test status.
+
+    Stops at first non-None result, see :ref:`firstresult`.
+    """
+
+
+def pytest_terminal_summary(
+    terminalreporter: "TerminalReporter",
+    exitstatus: "ExitCode",
+    config: "Config",
+) -> None:
+    """Add a section to terminal summary reporting.
+
+    :param terminalreporter: The internal terminal reporter object.
+    :param exitstatus: The exit status that will be reported back to the OS.
+    :param config: The pytest config object.
+
+    .. versionadded:: 4.2
+        The ``config`` parameter.
+    """
+
+
+@hookspec(historic=True)
+def pytest_warning_recorded(
+    warning_message: "warnings.WarningMessage",
+    when: "Literal['config', 'collect', 'runtest']",
+    nodeid: str,
+    location: Optional[Tuple[str, int, str]],
+) -> None:
+    """Process a warning captured by the internal pytest warnings plugin.
+
+    :param warning_message:
+        The captured warning. This is the same object produced by :py:func:`warnings.catch_warnings`, and contains
+        the same attributes as the parameters of :py:func:`warnings.showwarning`.
+
+    :param when:
+        Indicates when the warning was captured. Possible values:
+
+        * ``"config"``: during pytest configuration/initialization stage.
+        * ``"collect"``: during test collection.
+        * ``"runtest"``: during test execution.
+
+    :param nodeid:
+        Full id of the item.
+
+    :param location:
+        When available, holds information about the execution context of the captured
+        warning (filename, linenumber, function). ``function`` evaluates to ``<module>``
+        when the execution context is at the module level.
+
+    .. versionadded:: 6.0
+    """
+
+
+# -------------------------------------------------------------------------
+# Hooks for influencing skipping
+# -------------------------------------------------------------------------
+
+
+def pytest_markeval_namespace(config: "Config") -> Dict[str, Any]:
+    """Called when constructing the globals dictionary used for
+    evaluating string conditions in xfail/skipif markers.
+ + This is useful when the condition for a marker requires + objects that are expensive or impossible to obtain during + collection time, which is required by normal boolean + conditions. + + .. versionadded:: 6.2 + + :param config: The pytest config object. + :returns: A dictionary of additional globals to add. + """ + + +# ------------------------------------------------------------------------- +# error handling and internal debugging hooks +# ------------------------------------------------------------------------- + + +def pytest_internalerror( + excrepr: "ExceptionRepr", + excinfo: "ExceptionInfo[BaseException]", +) -> Optional[bool]: + """Called for internal errors. + + Return True to suppress the fallback handling of printing an + INTERNALERROR message directly to sys.stderr. + + :param excrepr: The exception repr object. + :param excinfo: The exception info. + """ + + +def pytest_keyboard_interrupt( + excinfo: "ExceptionInfo[Union[KeyboardInterrupt, Exit]]", +) -> None: + """Called for keyboard interrupt. + + :param excinfo: The exception info. + """ + + +def pytest_exception_interact( + node: Union["Item", "Collector"], + call: "CallInfo[Any]", + report: Union["CollectReport", "TestReport"], +) -> None: + """Called when an exception was raised which can potentially be + interactively handled. + + May be called during collection (see :hook:`pytest_make_collect_report`), + in which case ``report`` is a :class:`CollectReport`. + + May be called during runtest of an item (see :hook:`pytest_runtest_protocol`), + in which case ``report`` is a :class:`TestReport`. + + This hook is not called if the exception that was raised is an internal + exception like ``skip.Exception``. + + :param node: + The item or collector. + :param call: + The call information. Contains the exception. + :param report: + The collection or test report. + """ + + +def pytest_enter_pdb(config: "Config", pdb: "pdb.Pdb") -> None: + """Called upon pdb.set_trace(). + + Can be used by plugins to take special action just before the python + debugger enters interactive mode. + + :param config: The pytest config object. + :param pdb: The Pdb instance. + """ + + +def pytest_leave_pdb(config: "Config", pdb: "pdb.Pdb") -> None: + """Called when leaving pdb (e.g. with continue after pdb.set_trace()). + + Can be used by plugins to take special action just after the python + debugger leaves interactive mode. + + :param config: The pytest config object. + :param pdb: The Pdb instance. + """ diff --git a/env/lib/python3.8/site-packages/_pytest/junitxml.py b/env/lib/python3.8/site-packages/_pytest/junitxml.py new file mode 100644 index 00000000..7a5170f3 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/junitxml.py @@ -0,0 +1,699 @@ +"""Report test results in JUnit-XML format, for use with Jenkins and build +integration servers. + +Based on initial code from Ross Lawley. 
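+
+A typical invocation that produces a report through this plugin (the report
+file name is illustrative; the ``--junitxml`` option is defined below):
+
+    pytest --junitxml=report.xml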
+
+Output conforms to
+https://github.com/jenkinsci/xunit-plugin/blob/master/src/main/resources/org/jenkinsci/plugins/xunit/types/model/xsd/junit-10.xsd
+"""
+import functools
+import os
+import platform
+import re
+import xml.etree.ElementTree as ET
+from datetime import datetime
+from typing import Callable
+from typing import Dict
+from typing import List
+from typing import Match
+from typing import Optional
+from typing import Tuple
+from typing import Union
+
+import pytest
+from _pytest import nodes
+from _pytest import timing
+from _pytest._code.code import ExceptionRepr
+from _pytest._code.code import ReprFileLocation
+from _pytest.config import Config
+from _pytest.config import filename_arg
+from _pytest.config.argparsing import Parser
+from _pytest.fixtures import FixtureRequest
+from _pytest.reports import TestReport
+from _pytest.stash import StashKey
+from _pytest.terminal import TerminalReporter
+
+
+xml_key = StashKey["LogXML"]()
+
+
+def bin_xml_escape(arg: object) -> str:
+    r"""Visually escape invalid XML characters.
+
+    For example, transforms
+        'hello\aworld\b'
+    into
+        'hello#x07world#x08'
+    Note that the #xABs are *not* XML escapes - missing the ampersand &#38;.
+    The idea is to escape visually for the user rather than for XML itself.
+    """
+
+    def repl(matchobj: Match[str]) -> str:
+        i = ord(matchobj.group())
+        if i <= 0xFF:
+            return "#x%02X" % i
+        else:
+            return "#x%04X" % i
+
+    # The spec range of valid chars is:
+    # Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
+    # For an unknown(?) reason, we disallow #x7F (DEL) as well.
+    illegal_xml_re = (
+        "[^\u0009\u000A\u000D\u0020-\u007E\u0080-\uD7FF\uE000-\uFFFD\u10000-\u10FFFF]"
+    )
+    return re.sub(illegal_xml_re, repl, str(arg))
+
+
+def merge_family(left, right) -> None:
+    result = {}
+    for kl, vl in left.items():
+        for kr, vr in right.items():
+            if not isinstance(vl, list):
+                raise TypeError(type(vl))
+            result[kl] = vl + vr
+    left.update(result)
+
+
+families = {}
+families["_base"] = {"testcase": ["classname", "name"]}
+families["_base_legacy"] = {"testcase": ["file", "line", "url"]}
+
+# xUnit 1.x inherits legacy attributes.
+families["xunit1"] = families["_base"].copy()
+merge_family(families["xunit1"], families["_base_legacy"])
+
+# xUnit 2.x uses strict base attributes.
+families["xunit2"] = families["_base"] + + +class _NodeReporter: + def __init__(self, nodeid: Union[str, TestReport], xml: "LogXML") -> None: + self.id = nodeid + self.xml = xml + self.add_stats = self.xml.add_stats + self.family = self.xml.family + self.duration = 0.0 + self.properties: List[Tuple[str, str]] = [] + self.nodes: List[ET.Element] = [] + self.attrs: Dict[str, str] = {} + + def append(self, node: ET.Element) -> None: + self.xml.add_stats(node.tag) + self.nodes.append(node) + + def add_property(self, name: str, value: object) -> None: + self.properties.append((str(name), bin_xml_escape(value))) + + def add_attribute(self, name: str, value: object) -> None: + self.attrs[str(name)] = bin_xml_escape(value) + + def make_properties_node(self) -> Optional[ET.Element]: + """Return a Junit node containing custom properties, if any.""" + if self.properties: + properties = ET.Element("properties") + for name, value in self.properties: + properties.append(ET.Element("property", name=name, value=value)) + return properties + return None + + def record_testreport(self, testreport: TestReport) -> None: + names = mangle_test_address(testreport.nodeid) + existing_attrs = self.attrs + classnames = names[:-1] + if self.xml.prefix: + classnames.insert(0, self.xml.prefix) + attrs: Dict[str, str] = { + "classname": ".".join(classnames), + "name": bin_xml_escape(names[-1]), + "file": testreport.location[0], + } + if testreport.location[1] is not None: + attrs["line"] = str(testreport.location[1]) + if hasattr(testreport, "url"): + attrs["url"] = testreport.url + self.attrs = attrs + self.attrs.update(existing_attrs) # Restore any user-defined attributes. + + # Preserve legacy testcase behavior. + if self.family == "xunit1": + return + + # Filter out attributes not permitted by this test family. + # Including custom attributes because they are not valid here. 
+ temp_attrs = {} + for key in self.attrs.keys(): + if key in families[self.family]["testcase"]: + temp_attrs[key] = self.attrs[key] + self.attrs = temp_attrs + + def to_xml(self) -> ET.Element: + testcase = ET.Element("testcase", self.attrs, time="%.3f" % self.duration) + properties = self.make_properties_node() + if properties is not None: + testcase.append(properties) + testcase.extend(self.nodes) + return testcase + + def _add_simple(self, tag: str, message: str, data: Optional[str] = None) -> None: + node = ET.Element(tag, message=message) + node.text = bin_xml_escape(data) + self.append(node) + + def write_captured_output(self, report: TestReport) -> None: + if not self.xml.log_passing_tests and report.passed: + return + + content_out = report.capstdout + content_log = report.caplog + content_err = report.capstderr + if self.xml.logging == "no": + return + content_all = "" + if self.xml.logging in ["log", "all"]: + content_all = self._prepare_content(content_log, " Captured Log ") + if self.xml.logging in ["system-out", "out-err", "all"]: + content_all += self._prepare_content(content_out, " Captured Out ") + self._write_content(report, content_all, "system-out") + content_all = "" + if self.xml.logging in ["system-err", "out-err", "all"]: + content_all += self._prepare_content(content_err, " Captured Err ") + self._write_content(report, content_all, "system-err") + content_all = "" + if content_all: + self._write_content(report, content_all, "system-out") + + def _prepare_content(self, content: str, header: str) -> str: + return "\n".join([header.center(80, "-"), content, ""]) + + def _write_content(self, report: TestReport, content: str, jheader: str) -> None: + tag = ET.Element(jheader) + tag.text = bin_xml_escape(content) + self.append(tag) + + def append_pass(self, report: TestReport) -> None: + self.add_stats("passed") + + def append_failure(self, report: TestReport) -> None: + # msg = str(report.longrepr.reprtraceback.extraline) + if hasattr(report, "wasxfail"): + self._add_simple("skipped", "xfail-marked test passes unexpectedly") + else: + assert report.longrepr is not None + reprcrash: Optional[ReprFileLocation] = getattr( + report.longrepr, "reprcrash", None + ) + if reprcrash is not None: + message = reprcrash.message + else: + message = str(report.longrepr) + message = bin_xml_escape(message) + self._add_simple("failure", message, str(report.longrepr)) + + def append_collect_error(self, report: TestReport) -> None: + # msg = str(report.longrepr.reprtraceback.extraline) + assert report.longrepr is not None + self._add_simple("error", "collection failure", str(report.longrepr)) + + def append_collect_skipped(self, report: TestReport) -> None: + self._add_simple("skipped", "collection skipped", str(report.longrepr)) + + def append_error(self, report: TestReport) -> None: + assert report.longrepr is not None + reprcrash: Optional[ReprFileLocation] = getattr( + report.longrepr, "reprcrash", None + ) + if reprcrash is not None: + reason = reprcrash.message + else: + reason = str(report.longrepr) + + if report.when == "teardown": + msg = f'failed on teardown with "{reason}"' + else: + msg = f'failed on setup with "{reason}"' + self._add_simple("error", bin_xml_escape(msg), str(report.longrepr)) + + def append_skipped(self, report: TestReport) -> None: + if hasattr(report, "wasxfail"): + xfailreason = report.wasxfail + if xfailreason.startswith("reason: "): + xfailreason = xfailreason[8:] + xfailreason = bin_xml_escape(xfailreason) + skipped = ET.Element("skipped", 
type="pytest.xfail", message=xfailreason) + self.append(skipped) + else: + assert isinstance(report.longrepr, tuple) + filename, lineno, skipreason = report.longrepr + if skipreason.startswith("Skipped: "): + skipreason = skipreason[9:] + details = f"{filename}:{lineno}: {skipreason}" + + skipped = ET.Element("skipped", type="pytest.skip", message=skipreason) + skipped.text = bin_xml_escape(details) + self.append(skipped) + self.write_captured_output(report) + + def finalize(self) -> None: + data = self.to_xml() + self.__dict__.clear() + # Type ignored because mypy doesn't like overriding a method. + # Also the return value doesn't match... + self.to_xml = lambda: data # type: ignore[assignment] + + +def _warn_incompatibility_with_xunit2( + request: FixtureRequest, fixture_name: str +) -> None: + """Emit a PytestWarning about the given fixture being incompatible with newer xunit revisions.""" + from _pytest.warning_types import PytestWarning + + xml = request.config.stash.get(xml_key, None) + if xml is not None and xml.family not in ("xunit1", "legacy"): + request.node.warn( + PytestWarning( + "{fixture_name} is incompatible with junit_family '{family}' (use 'legacy' or 'xunit1')".format( + fixture_name=fixture_name, family=xml.family + ) + ) + ) + + +@pytest.fixture +def record_property(request: FixtureRequest) -> Callable[[str, object], None]: + """Add extra properties to the calling test. + + User properties become part of the test report and are available to the + configured reporters, like JUnit XML. + + The fixture is callable with ``name, value``. The value is automatically + XML-encoded. + + Example:: + + def test_function(record_property): + record_property("example_key", 1) + """ + _warn_incompatibility_with_xunit2(request, "record_property") + + def append_property(name: str, value: object) -> None: + request.node.user_properties.append((name, value)) + + return append_property + + +@pytest.fixture +def record_xml_attribute(request: FixtureRequest) -> Callable[[str, object], None]: + """Add extra xml attributes to the tag for the calling test. + + The fixture is callable with ``name, value``. The value is + automatically XML-encoded. + """ + from _pytest.warning_types import PytestExperimentalApiWarning + + request.node.warn( + PytestExperimentalApiWarning("record_xml_attribute is an experimental feature") + ) + + _warn_incompatibility_with_xunit2(request, "record_xml_attribute") + + # Declare noop + def add_attr_noop(name: str, value: object) -> None: + pass + + attr_func = add_attr_noop + + xml = request.config.stash.get(xml_key, None) + if xml is not None: + node_reporter = xml.node_reporter(request.node.nodeid) + attr_func = node_reporter.add_attribute + + return attr_func + + +def _check_record_param_type(param: str, v: str) -> None: + """Used by record_testsuite_property to check that the given parameter name is of the proper + type.""" + __tracebackhide__ = True + if not isinstance(v, str): + msg = "{param} parameter needs to be a string, but {g} given" # type: ignore[unreachable] + raise TypeError(msg.format(param=param, g=type(v).__name__)) + + +@pytest.fixture(scope="session") +def record_testsuite_property(request: FixtureRequest) -> Callable[[str, object], None]: + """Record a new ```` tag as child of the root ````. + + This is suitable to writing global information regarding the entire test + suite, and is compatible with ``xunit2`` JUnit family. + + This is a ``session``-scoped fixture which is called with ``(name, value)``. Example: + + .. 
code-block:: python + + def test_foo(record_testsuite_property): + record_testsuite_property("ARCH", "PPC") + record_testsuite_property("STORAGE_TYPE", "CEPH") + + :param name: + The property name. + :param value: + The property value. Will be converted to a string. + + .. warning:: + + Currently this fixture **does not work** with the + `pytest-xdist `__ plugin. See + :issue:`7767` for details. + """ + + __tracebackhide__ = True + + def record_func(name: str, value: object) -> None: + """No-op function in case --junitxml was not passed in the command-line.""" + __tracebackhide__ = True + _check_record_param_type("name", name) + + xml = request.config.stash.get(xml_key, None) + if xml is not None: + record_func = xml.add_global_property # noqa + return record_func + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("terminal reporting") + group.addoption( + "--junitxml", + "--junit-xml", + action="store", + dest="xmlpath", + metavar="path", + type=functools.partial(filename_arg, optname="--junitxml"), + default=None, + help="Create junit-xml style report file at given path", + ) + group.addoption( + "--junitprefix", + "--junit-prefix", + action="store", + metavar="str", + default=None, + help="Prepend prefix to classnames in junit-xml output", + ) + parser.addini( + "junit_suite_name", "Test suite name for JUnit report", default="pytest" + ) + parser.addini( + "junit_logging", + "Write captured log messages to JUnit report: " + "one of no|log|system-out|system-err|out-err|all", + default="no", + ) + parser.addini( + "junit_log_passing_tests", + "Capture log information for passing tests to JUnit report: ", + type="bool", + default=True, + ) + parser.addini( + "junit_duration_report", + "Duration time to report: one of total|call", + default="total", + ) # choices=['total', 'call']) + parser.addini( + "junit_family", + "Emit XML for schema: one of legacy|xunit1|xunit2", + default="xunit2", + ) + + +def pytest_configure(config: Config) -> None: + xmlpath = config.option.xmlpath + # Prevent opening xmllog on worker nodes (xdist). + if xmlpath and not hasattr(config, "workerinput"): + junit_family = config.getini("junit_family") + config.stash[xml_key] = LogXML( + xmlpath, + config.option.junitprefix, + config.getini("junit_suite_name"), + config.getini("junit_logging"), + config.getini("junit_duration_report"), + junit_family, + config.getini("junit_log_passing_tests"), + ) + config.pluginmanager.register(config.stash[xml_key]) + + +def pytest_unconfigure(config: Config) -> None: + xml = config.stash.get(xml_key, None) + if xml: + del config.stash[xml_key] + config.pluginmanager.unregister(xml) + + +def mangle_test_address(address: str) -> List[str]: + path, possible_open_bracket, params = address.partition("[") + names = path.split("::") + # Convert file path to dotted path. + names[0] = names[0].replace(nodes.SEP, ".") + names[0] = re.sub(r"\.py$", "", names[0]) + # Put any params back. 
+ names[-1] += possible_open_bracket + params + return names + + +class LogXML: + def __init__( + self, + logfile, + prefix: Optional[str], + suite_name: str = "pytest", + logging: str = "no", + report_duration: str = "total", + family="xunit1", + log_passing_tests: bool = True, + ) -> None: + logfile = os.path.expanduser(os.path.expandvars(logfile)) + self.logfile = os.path.normpath(os.path.abspath(logfile)) + self.prefix = prefix + self.suite_name = suite_name + self.logging = logging + self.log_passing_tests = log_passing_tests + self.report_duration = report_duration + self.family = family + self.stats: Dict[str, int] = dict.fromkeys( + ["error", "passed", "failure", "skipped"], 0 + ) + self.node_reporters: Dict[ + Tuple[Union[str, TestReport], object], _NodeReporter + ] = {} + self.node_reporters_ordered: List[_NodeReporter] = [] + self.global_properties: List[Tuple[str, str]] = [] + + # List of reports that failed on call but teardown is pending. + self.open_reports: List[TestReport] = [] + self.cnt_double_fail_tests = 0 + + # Replaces convenience family with real family. + if self.family == "legacy": + self.family = "xunit1" + + def finalize(self, report: TestReport) -> None: + nodeid = getattr(report, "nodeid", report) + # Local hack to handle xdist report order. + workernode = getattr(report, "node", None) + reporter = self.node_reporters.pop((nodeid, workernode)) + if reporter is not None: + reporter.finalize() + + def node_reporter(self, report: Union[TestReport, str]) -> _NodeReporter: + nodeid: Union[str, TestReport] = getattr(report, "nodeid", report) + # Local hack to handle xdist report order. + workernode = getattr(report, "node", None) + + key = nodeid, workernode + + if key in self.node_reporters: + # TODO: breaks for --dist=each + return self.node_reporters[key] + + reporter = _NodeReporter(nodeid, self) + + self.node_reporters[key] = reporter + self.node_reporters_ordered.append(reporter) + + return reporter + + def add_stats(self, key: str) -> None: + if key in self.stats: + self.stats[key] += 1 + + def _opentestcase(self, report: TestReport) -> _NodeReporter: + reporter = self.node_reporter(report) + reporter.record_testreport(report) + return reporter + + def pytest_runtest_logreport(self, report: TestReport) -> None: + """Handle a setup/call/teardown report, generating the appropriate + XML tags as necessary. + + Note: due to plugins like xdist, this hook may be called in interlaced + order with reports from other nodes. For example: + + Usual call order: + -> setup node1 + -> call node1 + -> teardown node1 + -> setup node2 + -> call node2 + -> teardown node2 + + Possible call order in xdist: + -> setup node1 + -> call node1 + -> setup node2 + -> call node2 + -> teardown node2 + -> teardown node1 + """ + close_report = None + if report.passed: + if report.when == "call": # ignore setup/teardown + reporter = self._opentestcase(report) + reporter.append_pass(report) + elif report.failed: + if report.when == "teardown": + # The following vars are needed when xdist plugin is used. + report_wid = getattr(report, "worker_id", None) + report_ii = getattr(report, "item_index", None) + close_report = next( + ( + rep + for rep in self.open_reports + if ( + rep.nodeid == report.nodeid + and getattr(rep, "item_index", None) == report_ii + and getattr(rep, "worker_id", None) == report_wid + ) + ), + None, + ) + if close_report: + # We need to open new testcase in case we have failure in + # call and error in teardown in order to follow junit + # schema. 
+                    self.finalize(close_report)
+                    self.cnt_double_fail_tests += 1
+            reporter = self._opentestcase(report)
+            if report.when == "call":
+                reporter.append_failure(report)
+                self.open_reports.append(report)
+                if not self.log_passing_tests:
+                    reporter.write_captured_output(report)
+            else:
+                reporter.append_error(report)
+        elif report.skipped:
+            reporter = self._opentestcase(report)
+            reporter.append_skipped(report)
+        self.update_testcase_duration(report)
+        if report.when == "teardown":
+            reporter = self._opentestcase(report)
+            reporter.write_captured_output(report)
+
+            for propname, propvalue in report.user_properties:
+                reporter.add_property(propname, str(propvalue))
+
+            self.finalize(report)
+            report_wid = getattr(report, "worker_id", None)
+            report_ii = getattr(report, "item_index", None)
+            close_report = next(
+                (
+                    rep
+                    for rep in self.open_reports
+                    if (
+                        rep.nodeid == report.nodeid
+                        and getattr(rep, "item_index", None) == report_ii
+                        and getattr(rep, "worker_id", None) == report_wid
+                    )
+                ),
+                None,
+            )
+            if close_report:
+                self.open_reports.remove(close_report)
+
+    def update_testcase_duration(self, report: TestReport) -> None:
+        """Accumulate total duration for nodeid from given report and update
+        the Junit.testcase with the new total if already created."""
+        if self.report_duration == "total" or report.when == self.report_duration:
+            reporter = self.node_reporter(report)
+            reporter.duration += getattr(report, "duration", 0.0)
+
+    def pytest_collectreport(self, report: TestReport) -> None:
+        if not report.passed:
+            reporter = self._opentestcase(report)
+            if report.failed:
+                reporter.append_collect_error(report)
+            else:
+                reporter.append_collect_skipped(report)
+
+    def pytest_internalerror(self, excrepr: ExceptionRepr) -> None:
+        reporter = self.node_reporter("internal")
+        reporter.attrs.update(classname="pytest", name="internal")
+        reporter._add_simple("error", "internal error", str(excrepr))
+
+    def pytest_sessionstart(self) -> None:
+        self.suite_start_time = timing.time()
+
+    def pytest_sessionfinish(self) -> None:
+        dirname = os.path.dirname(os.path.abspath(self.logfile))
+        if not os.path.isdir(dirname):
+            os.makedirs(dirname)
+
+        with open(self.logfile, "w", encoding="utf-8") as logfile:
+            suite_stop_time = timing.time()
+            suite_time_delta = suite_stop_time - self.suite_start_time
+
+            numtests = (
+                self.stats["passed"]
+                + self.stats["failure"]
+                + self.stats["skipped"]
+                + self.stats["error"]
+                - self.cnt_double_fail_tests
+            )
+            logfile.write('<?xml version="1.0" encoding="utf-8"?>')
+
+            suite_node = ET.Element(
+                "testsuite",
+                name=self.suite_name,
+                errors=str(self.stats["error"]),
+                failures=str(self.stats["failure"]),
+                skipped=str(self.stats["skipped"]),
+                tests=str(numtests),
+                time="%.3f" % suite_time_delta,
+                timestamp=datetime.fromtimestamp(self.suite_start_time).isoformat(),
+                hostname=platform.node(),
+            )
+            global_properties = self._get_global_properties_node()
+            if global_properties is not None:
+                suite_node.append(global_properties)
+            for node_reporter in self.node_reporters_ordered:
+                suite_node.append(node_reporter.to_xml())
+            testsuites = ET.Element("testsuites")
+            testsuites.append(suite_node)
+            logfile.write(ET.tostring(testsuites, encoding="unicode"))
+
+    def pytest_terminal_summary(self, terminalreporter: TerminalReporter) -> None:
+        terminalreporter.write_sep("-", f"generated xml file: {self.logfile}")
+
+    def add_global_property(self, name: str, value: object) -> None:
+        __tracebackhide__ = True
+        _check_record_param_type("name", name)
+        self.global_properties.append((name, 
bin_xml_escape(value))) + + def _get_global_properties_node(self) -> Optional[ET.Element]: + """Return a Junit node containing custom properties, if any.""" + if self.global_properties: + properties = ET.Element("properties") + for name, value in self.global_properties: + properties.append(ET.Element("property", name=name, value=value)) + return properties + return None diff --git a/env/lib/python3.8/site-packages/_pytest/legacypath.py b/env/lib/python3.8/site-packages/_pytest/legacypath.py new file mode 100644 index 00000000..f71e7e96 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/legacypath.py @@ -0,0 +1,479 @@ +"""Add backward compatibility support for the legacy py path type.""" +import shlex +import subprocess +from pathlib import Path +from typing import List +from typing import Optional +from typing import TYPE_CHECKING +from typing import Union + +import attr +from iniconfig import SectionWrapper + +from _pytest.cacheprovider import Cache +from _pytest.compat import final +from _pytest.compat import LEGACY_PATH +from _pytest.compat import legacy_path +from _pytest.config import Config +from _pytest.config import hookimpl +from _pytest.config import PytestPluginManager +from _pytest.deprecated import check_ispytest +from _pytest.fixtures import fixture +from _pytest.fixtures import FixtureRequest +from _pytest.main import Session +from _pytest.monkeypatch import MonkeyPatch +from _pytest.nodes import Collector +from _pytest.nodes import Item +from _pytest.nodes import Node +from _pytest.pytester import HookRecorder +from _pytest.pytester import Pytester +from _pytest.pytester import RunResult +from _pytest.terminal import TerminalReporter +from _pytest.tmpdir import TempPathFactory + +if TYPE_CHECKING: + from typing_extensions import Final + + import pexpect + + +@final +class Testdir: + """ + Similar to :class:`Pytester`, but this class works with legacy legacy_path objects instead. + + All methods just forward to an internal :class:`Pytester` instance, converting results + to `legacy_path` objects as necessary. 
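+
+    A typical usage sketch of the corresponding ``testdir`` fixture (the
+    generated test file is illustrative):
+
+    .. code-block:: python
+
+        def test_example(testdir):
+            testdir.makepyfile("def test_ok(): assert True")
+            result = testdir.runpytest()
+            result.assert_outcomes(passed=1)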
+ """ + + __test__ = False + + CLOSE_STDIN: "Final" = Pytester.CLOSE_STDIN + TimeoutExpired: "Final" = Pytester.TimeoutExpired + + def __init__(self, pytester: Pytester, *, _ispytest: bool = False) -> None: + check_ispytest(_ispytest) + self._pytester = pytester + + @property + def tmpdir(self) -> LEGACY_PATH: + """Temporary directory where tests are executed.""" + return legacy_path(self._pytester.path) + + @property + def test_tmproot(self) -> LEGACY_PATH: + return legacy_path(self._pytester._test_tmproot) + + @property + def request(self): + return self._pytester._request + + @property + def plugins(self): + return self._pytester.plugins + + @plugins.setter + def plugins(self, plugins): + self._pytester.plugins = plugins + + @property + def monkeypatch(self) -> MonkeyPatch: + return self._pytester._monkeypatch + + def make_hook_recorder(self, pluginmanager) -> HookRecorder: + """See :meth:`Pytester.make_hook_recorder`.""" + return self._pytester.make_hook_recorder(pluginmanager) + + def chdir(self) -> None: + """See :meth:`Pytester.chdir`.""" + return self._pytester.chdir() + + def finalize(self) -> None: + """See :meth:`Pytester._finalize`.""" + return self._pytester._finalize() + + def makefile(self, ext, *args, **kwargs) -> LEGACY_PATH: + """See :meth:`Pytester.makefile`.""" + if ext and not ext.startswith("."): + # pytester.makefile is going to throw a ValueError in a way that + # testdir.makefile did not, because + # pathlib.Path is stricter suffixes than py.path + # This ext arguments is likely user error, but since testdir has + # allowed this, we will prepend "." as a workaround to avoid breaking + # testdir usage that worked before + ext = "." + ext + return legacy_path(self._pytester.makefile(ext, *args, **kwargs)) + + def makeconftest(self, source) -> LEGACY_PATH: + """See :meth:`Pytester.makeconftest`.""" + return legacy_path(self._pytester.makeconftest(source)) + + def makeini(self, source) -> LEGACY_PATH: + """See :meth:`Pytester.makeini`.""" + return legacy_path(self._pytester.makeini(source)) + + def getinicfg(self, source: str) -> SectionWrapper: + """See :meth:`Pytester.getinicfg`.""" + return self._pytester.getinicfg(source) + + def makepyprojecttoml(self, source) -> LEGACY_PATH: + """See :meth:`Pytester.makepyprojecttoml`.""" + return legacy_path(self._pytester.makepyprojecttoml(source)) + + def makepyfile(self, *args, **kwargs) -> LEGACY_PATH: + """See :meth:`Pytester.makepyfile`.""" + return legacy_path(self._pytester.makepyfile(*args, **kwargs)) + + def maketxtfile(self, *args, **kwargs) -> LEGACY_PATH: + """See :meth:`Pytester.maketxtfile`.""" + return legacy_path(self._pytester.maketxtfile(*args, **kwargs)) + + def syspathinsert(self, path=None) -> None: + """See :meth:`Pytester.syspathinsert`.""" + return self._pytester.syspathinsert(path) + + def mkdir(self, name) -> LEGACY_PATH: + """See :meth:`Pytester.mkdir`.""" + return legacy_path(self._pytester.mkdir(name)) + + def mkpydir(self, name) -> LEGACY_PATH: + """See :meth:`Pytester.mkpydir`.""" + return legacy_path(self._pytester.mkpydir(name)) + + def copy_example(self, name=None) -> LEGACY_PATH: + """See :meth:`Pytester.copy_example`.""" + return legacy_path(self._pytester.copy_example(name)) + + def getnode(self, config: Config, arg) -> Optional[Union[Item, Collector]]: + """See :meth:`Pytester.getnode`.""" + return self._pytester.getnode(config, arg) + + def getpathnode(self, path): + """See :meth:`Pytester.getpathnode`.""" + return self._pytester.getpathnode(path) + + def genitems(self, colitems: 
List[Union[Item, Collector]]) -> List[Item]:
+        """See :meth:`Pytester.genitems`."""
+        return self._pytester.genitems(colitems)
+
+    def runitem(self, source):
+        """See :meth:`Pytester.runitem`."""
+        return self._pytester.runitem(source)
+
+    def inline_runsource(self, source, *cmdlineargs):
+        """See :meth:`Pytester.inline_runsource`."""
+        return self._pytester.inline_runsource(source, *cmdlineargs)
+
+    def inline_genitems(self, *args):
+        """See :meth:`Pytester.inline_genitems`."""
+        return self._pytester.inline_genitems(*args)
+
+    def inline_run(self, *args, plugins=(), no_reraise_ctrlc: bool = False):
+        """See :meth:`Pytester.inline_run`."""
+        return self._pytester.inline_run(
+            *args, plugins=plugins, no_reraise_ctrlc=no_reraise_ctrlc
+        )
+
+    def runpytest_inprocess(self, *args, **kwargs) -> RunResult:
+        """See :meth:`Pytester.runpytest_inprocess`."""
+        return self._pytester.runpytest_inprocess(*args, **kwargs)
+
+    def runpytest(self, *args, **kwargs) -> RunResult:
+        """See :meth:`Pytester.runpytest`."""
+        return self._pytester.runpytest(*args, **kwargs)
+
+    def parseconfig(self, *args) -> Config:
+        """See :meth:`Pytester.parseconfig`."""
+        return self._pytester.parseconfig(*args)
+
+    def parseconfigure(self, *args) -> Config:
+        """See :meth:`Pytester.parseconfigure`."""
+        return self._pytester.parseconfigure(*args)
+
+    def getitem(self, source, funcname="test_func"):
+        """See :meth:`Pytester.getitem`."""
+        return self._pytester.getitem(source, funcname)
+
+    def getitems(self, source):
+        """See :meth:`Pytester.getitems`."""
+        return self._pytester.getitems(source)
+
+    def getmodulecol(self, source, configargs=(), withinit=False):
+        """See :meth:`Pytester.getmodulecol`."""
+        return self._pytester.getmodulecol(
+            source, configargs=configargs, withinit=withinit
+        )
+
+    def collect_by_name(
+        self, modcol: Collector, name: str
+    ) -> Optional[Union[Item, Collector]]:
+        """See :meth:`Pytester.collect_by_name`."""
+        return self._pytester.collect_by_name(modcol, name)
+
+    def popen(
+        self,
+        cmdargs,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.PIPE,
+        stdin=CLOSE_STDIN,
+        **kw,
+    ):
+        """See :meth:`Pytester.popen`."""
+        return self._pytester.popen(cmdargs, stdout, stderr, stdin, **kw)
+
+    def run(self, *cmdargs, timeout=None, stdin=CLOSE_STDIN) -> RunResult:
+        """See :meth:`Pytester.run`."""
+        return self._pytester.run(*cmdargs, timeout=timeout, stdin=stdin)
+
+    def runpython(self, script) -> RunResult:
+        """See :meth:`Pytester.runpython`."""
+        return self._pytester.runpython(script)
+
+    def runpython_c(self, command):
+        """See :meth:`Pytester.runpython_c`."""
+        return self._pytester.runpython_c(command)
+
+    def runpytest_subprocess(self, *args, timeout=None) -> RunResult:
+        """See :meth:`Pytester.runpytest_subprocess`."""
+        return self._pytester.runpytest_subprocess(*args, timeout=timeout)
+
+    def spawn_pytest(
+        self, string: str, expect_timeout: float = 10.0
+    ) -> "pexpect.spawn":
+        """See :meth:`Pytester.spawn_pytest`."""
+        return self._pytester.spawn_pytest(string, expect_timeout=expect_timeout)
+
+    def spawn(self, cmd: str, expect_timeout: float = 10.0) -> "pexpect.spawn":
+        """See :meth:`Pytester.spawn`."""
+        return self._pytester.spawn(cmd, expect_timeout=expect_timeout)
+
+    def __repr__(self) -> str:
+        return f"<Testdir {self.tmpdir!r}>"
+
+    def __str__(self) -> str:
+        return str(self.tmpdir)
+
+
+class LegacyTestdirPlugin:
+    @staticmethod
+    @fixture
+    def testdir(pytester: Pytester) -> Testdir:
+        """
+        Identical to :fixture:`pytester`, and provides an instance whose methods return
+        legacy ``LEGACY_PATH`` 
objects instead when applicable. + + New code should avoid using :fixture:`testdir` in favor of :fixture:`pytester`. + """ + return Testdir(pytester, _ispytest=True) + + +@final +@attr.s(init=False, auto_attribs=True) +class TempdirFactory: + """Backward compatibility wrapper that implements :class:`py.path.local` + for :class:`TempPathFactory`. + + .. note:: + These days, it is preferred to use ``tmp_path_factory``. + + :ref:`About the tmpdir and tmpdir_factory fixtures`. + + """ + + _tmppath_factory: TempPathFactory + + def __init__( + self, tmppath_factory: TempPathFactory, *, _ispytest: bool = False + ) -> None: + check_ispytest(_ispytest) + self._tmppath_factory = tmppath_factory + + def mktemp(self, basename: str, numbered: bool = True) -> LEGACY_PATH: + """Same as :meth:`TempPathFactory.mktemp`, but returns a :class:`py.path.local` object.""" + return legacy_path(self._tmppath_factory.mktemp(basename, numbered).resolve()) + + def getbasetemp(self) -> LEGACY_PATH: + """Same as :meth:`TempPathFactory.getbasetemp`, but returns a :class:`py.path.local` object.""" + return legacy_path(self._tmppath_factory.getbasetemp().resolve()) + + +class LegacyTmpdirPlugin: + @staticmethod + @fixture(scope="session") + def tmpdir_factory(request: FixtureRequest) -> TempdirFactory: + """Return a :class:`pytest.TempdirFactory` instance for the test session.""" + # Set dynamically by pytest_configure(). + return request.config._tmpdirhandler # type: ignore + + @staticmethod + @fixture + def tmpdir(tmp_path: Path) -> LEGACY_PATH: + """Return a temporary directory path object which is unique to each test + function invocation, created as a sub directory of the base temporary + directory. + + By default, a new base temporary directory is created each test session, + and old bases are removed after 3 sessions, to aid in debugging. If + ``--basetemp`` is used then it is cleared each session. See :ref:`base + temporary directory`. + + The returned object is a `legacy_path`_ object. + + .. note:: + These days, it is preferred to use ``tmp_path``. + + :ref:`About the tmpdir and tmpdir_factory fixtures`. + + .. _legacy_path: https://py.readthedocs.io/en/latest/path.html + """ + return legacy_path(tmp_path) + + +def Cache_makedir(self: Cache, name: str) -> LEGACY_PATH: + """Return a directory path object with the given name. + + Same as :func:`mkdir`, but returns a legacy py path instance. + """ + return legacy_path(self.mkdir(name)) + + +def FixtureRequest_fspath(self: FixtureRequest) -> LEGACY_PATH: + """(deprecated) The file system path of the test module which collected this test.""" + return legacy_path(self.path) + + +def TerminalReporter_startdir(self: TerminalReporter) -> LEGACY_PATH: + """The directory from which pytest was invoked. + + Prefer to use ``startpath`` which is a :class:`pathlib.Path`. + + :type: LEGACY_PATH + """ + return legacy_path(self.startpath) + + +def Config_invocation_dir(self: Config) -> LEGACY_PATH: + """The directory from which pytest was invoked. + + Prefer to use :attr:`invocation_params.dir `, + which is a :class:`pathlib.Path`. + + :type: LEGACY_PATH + """ + return legacy_path(str(self.invocation_params.dir)) + + +def Config_rootdir(self: Config) -> LEGACY_PATH: + """The path to the :ref:`rootdir `. + + Prefer to use :attr:`rootpath`, which is a :class:`pathlib.Path`. + + :type: LEGACY_PATH + """ + return legacy_path(str(self.rootpath)) + + +def Config_inifile(self: Config) -> Optional[LEGACY_PATH]: + """The path to the :ref:`configfile `. 
+ + Prefer to use :attr:`inipath`, which is a :class:`pathlib.Path`. + + :type: Optional[LEGACY_PATH] + """ + return legacy_path(str(self.inipath)) if self.inipath else None + + +def Session_stardir(self: Session) -> LEGACY_PATH: + """The path from which pytest was invoked. + + Prefer to use ``startpath`` which is a :class:`pathlib.Path`. + + :type: LEGACY_PATH + """ + return legacy_path(self.startpath) + + +def Config__getini_unknown_type( + self, name: str, type: str, value: Union[str, List[str]] +): + if type == "pathlist": + # TODO: This assert is probably not valid in all cases. + assert self.inipath is not None + dp = self.inipath.parent + input_values = shlex.split(value) if isinstance(value, str) else value + return [legacy_path(str(dp / x)) for x in input_values] + else: + raise ValueError(f"unknown configuration type: {type}", value) + + +def Node_fspath(self: Node) -> LEGACY_PATH: + """(deprecated) returns a legacy_path copy of self.path""" + return legacy_path(self.path) + + +def Node_fspath_set(self: Node, value: LEGACY_PATH) -> None: + self.path = Path(value) + + +@hookimpl(tryfirst=True) +def pytest_load_initial_conftests(early_config: Config) -> None: + """Monkeypatch legacy path attributes in several classes, as early as possible.""" + mp = MonkeyPatch() + early_config.add_cleanup(mp.undo) + + # Add Cache.makedir(). + mp.setattr(Cache, "makedir", Cache_makedir, raising=False) + + # Add FixtureRequest.fspath property. + mp.setattr(FixtureRequest, "fspath", property(FixtureRequest_fspath), raising=False) + + # Add TerminalReporter.startdir property. + mp.setattr( + TerminalReporter, "startdir", property(TerminalReporter_startdir), raising=False + ) + + # Add Config.{invocation_dir,rootdir,inifile} properties. + mp.setattr(Config, "invocation_dir", property(Config_invocation_dir), raising=False) + mp.setattr(Config, "rootdir", property(Config_rootdir), raising=False) + mp.setattr(Config, "inifile", property(Config_inifile), raising=False) + + # Add Session.startdir property. + mp.setattr(Session, "startdir", property(Session_stardir), raising=False) + + # Add pathlist configuration type. + mp.setattr(Config, "_getini_unknown_type", Config__getini_unknown_type) + + # Add Node.fspath property. + mp.setattr(Node, "fspath", property(Node_fspath, Node_fspath_set), raising=False) + + +@hookimpl +def pytest_configure(config: Config) -> None: + """Installs the LegacyTmpdirPlugin if the ``tmpdir`` plugin is also installed.""" + if config.pluginmanager.has_plugin("tmpdir"): + mp = MonkeyPatch() + config.add_cleanup(mp.undo) + # Create TmpdirFactory and attach it to the config object. + # + # This is to comply with existing plugins which expect the handler to be + # available at pytest_configure time, but ideally should be moved entirely + # to the tmpdir_factory session fixture. + try: + tmp_path_factory = config._tmp_path_factory # type: ignore[attr-defined] + except AttributeError: + # tmpdir plugin is blocked. + pass + else: + _tmpdirhandler = TempdirFactory(tmp_path_factory, _ispytest=True) + mp.setattr(config, "_tmpdirhandler", _tmpdirhandler, raising=False) + + config.pluginmanager.register(LegacyTmpdirPlugin, "legacypath-tmpdir") + + +@hookimpl +def pytest_plugin_registered(plugin: object, manager: PytestPluginManager) -> None: + # pytester is not loaded by default and is commonly loaded from a conftest, + # so checking for it in `pytest_configure` is not enough. 
+ is_pytester = plugin is manager.get_plugin("pytester") + if is_pytester and not manager.is_registered(LegacyTestdirPlugin): + manager.register(LegacyTestdirPlugin, "legacypath-pytester") diff --git a/env/lib/python3.8/site-packages/_pytest/logging.py b/env/lib/python3.8/site-packages/_pytest/logging.py new file mode 100644 index 00000000..f9091399 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/logging.py @@ -0,0 +1,830 @@ +"""Access and control log capturing.""" +import io +import logging +import os +import re +from contextlib import contextmanager +from contextlib import nullcontext +from io import StringIO +from pathlib import Path +from typing import AbstractSet +from typing import Dict +from typing import Generator +from typing import List +from typing import Mapping +from typing import Optional +from typing import Tuple +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +from _pytest import nodes +from _pytest._io import TerminalWriter +from _pytest.capture import CaptureManager +from _pytest.compat import final +from _pytest.config import _strtobool +from _pytest.config import Config +from _pytest.config import create_terminal_writer +from _pytest.config import hookimpl +from _pytest.config import UsageError +from _pytest.config.argparsing import Parser +from _pytest.deprecated import check_ispytest +from _pytest.fixtures import fixture +from _pytest.fixtures import FixtureRequest +from _pytest.main import Session +from _pytest.stash import StashKey +from _pytest.terminal import TerminalReporter + +if TYPE_CHECKING: + logging_StreamHandler = logging.StreamHandler[StringIO] + + from typing_extensions import Literal +else: + logging_StreamHandler = logging.StreamHandler + +DEFAULT_LOG_FORMAT = "%(levelname)-8s %(name)s:%(filename)s:%(lineno)d %(message)s" +DEFAULT_LOG_DATE_FORMAT = "%H:%M:%S" +_ANSI_ESCAPE_SEQ = re.compile(r"\x1b\[[\d;]+m") +caplog_handler_key = StashKey["LogCaptureHandler"]() +caplog_records_key = StashKey[Dict[str, List[logging.LogRecord]]]() + + +def _remove_ansi_escape_sequences(text: str) -> str: + return _ANSI_ESCAPE_SEQ.sub("", text) + + +class ColoredLevelFormatter(logging.Formatter): + """A logging formatter which colorizes the %(levelname)..s part of the + log format passed to __init__.""" + + LOGLEVEL_COLOROPTS: Mapping[int, AbstractSet[str]] = { + logging.CRITICAL: {"red"}, + logging.ERROR: {"red", "bold"}, + logging.WARNING: {"yellow"}, + logging.WARN: {"yellow"}, + logging.INFO: {"green"}, + logging.DEBUG: {"purple"}, + logging.NOTSET: set(), + } + LEVELNAME_FMT_REGEX = re.compile(r"%\(levelname\)([+-.]?\d*(?:\.\d+)?s)") + + def __init__(self, terminalwriter: TerminalWriter, *args, **kwargs) -> None: + super().__init__(*args, **kwargs) + self._terminalwriter = terminalwriter + self._original_fmt = self._style._fmt + self._level_to_fmt_mapping: Dict[int, str] = {} + + for level, color_opts in self.LOGLEVEL_COLOROPTS.items(): + self.add_color_level(level, *color_opts) + + def add_color_level(self, level: int, *color_opts: str) -> None: + """Add or update color opts for a log level. + + :param level: + Log level to apply a style to, e.g. ``logging.INFO``. + :param color_opts: + ANSI escape sequence color options. Capitalized colors indicates + background color, i.e. ``'green', 'Yellow', 'bold'`` will give bold + green text on yellow background. + + .. warning:: + This is an experimental API. 
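+
+        A usage sketch (``tw`` stands in for an existing
+        :class:`TerminalWriter` instance; the color choice is arbitrary):
+
+        .. code-block:: python
+
+            formatter = ColoredLevelFormatter(tw, "%(levelname)-8s %(message)s")
+            formatter.add_color_level(logging.INFO, "cyan")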
+ """ + + assert self._fmt is not None + levelname_fmt_match = self.LEVELNAME_FMT_REGEX.search(self._fmt) + if not levelname_fmt_match: + return + levelname_fmt = levelname_fmt_match.group() + + formatted_levelname = levelname_fmt % {"levelname": logging.getLevelName(level)} + + # add ANSI escape sequences around the formatted levelname + color_kwargs = {name: True for name in color_opts} + colorized_formatted_levelname = self._terminalwriter.markup( + formatted_levelname, **color_kwargs + ) + self._level_to_fmt_mapping[level] = self.LEVELNAME_FMT_REGEX.sub( + colorized_formatted_levelname, self._fmt + ) + + def format(self, record: logging.LogRecord) -> str: + fmt = self._level_to_fmt_mapping.get(record.levelno, self._original_fmt) + self._style._fmt = fmt + return super().format(record) + + +class PercentStyleMultiline(logging.PercentStyle): + """A logging style with special support for multiline messages. + + If the message of a record consists of multiple lines, this style + formats the message as if each line were logged separately. + """ + + def __init__(self, fmt: str, auto_indent: Union[int, str, bool, None]) -> None: + super().__init__(fmt) + self._auto_indent = self._get_auto_indent(auto_indent) + + @staticmethod + def _get_auto_indent(auto_indent_option: Union[int, str, bool, None]) -> int: + """Determine the current auto indentation setting. + + Specify auto indent behavior (on/off/fixed) by passing in + extra={"auto_indent": [value]} to the call to logging.log() or + using a --log-auto-indent [value] command line or the + log_auto_indent [value] config option. + + Default behavior is auto-indent off. + + Using the string "True" or "on" or the boolean True as the value + turns auto indent on, using the string "False" or "off" or the + boolean False or the int 0 turns it off, and specifying a + positive integer fixes the indentation position to the value + specified. + + Any other values for the option are invalid, and will silently be + converted to the default. + + :param None|bool|int|str auto_indent_option: + User specified option for indentation from command line, config + or extra kwarg. Accepts int, bool or str. str option accepts the + same range of values as boolean config options, as well as + positive integers represented in str form. + + :returns: + Indentation value, which can be + -1 (automatically determine indentation) or + 0 (auto-indent turned off) or + >0 (explicitly set indentation position). + """ + + if auto_indent_option is None: + return 0 + elif isinstance(auto_indent_option, bool): + if auto_indent_option: + return -1 + else: + return 0 + elif isinstance(auto_indent_option, int): + return int(auto_indent_option) + elif isinstance(auto_indent_option, str): + try: + return int(auto_indent_option) + except ValueError: + pass + try: + if _strtobool(auto_indent_option): + return -1 + except ValueError: + return 0 + + return 0 + + def format(self, record: logging.LogRecord) -> str: + if "\n" in record.message: + if hasattr(record, "auto_indent"): + # Passed in from the "extra={}" kwarg on the call to logging.log(). + auto_indent = self._get_auto_indent(record.auto_indent) # type: ignore[attr-defined] + else: + auto_indent = self._auto_indent + + if auto_indent: + lines = record.message.splitlines() + formatted = self._fmt % {**record.__dict__, "message": lines[0]} + + if auto_indent < 0: + indentation = _remove_ansi_escape_sequences(formatted).find( + lines[0] + ) + else: + # Optimizes logging by allowing a fixed indentation. 
+                    indentation = auto_indent
+                lines[0] = formatted
+                return ("\n" + " " * indentation).join(lines)
+        return self._fmt % record.__dict__
+
+
+def get_option_ini(config: Config, *names: str):
+    for name in names:
+        ret = config.getoption(name)  # 'default' arg won't work as expected
+        if ret is None:
+            ret = config.getini(name)
+        if ret:
+            return ret
+
+
+def pytest_addoption(parser: Parser) -> None:
+    """Add options to control log capturing."""
+    group = parser.getgroup("logging")
+
+    def add_option_ini(option, dest, default=None, type=None, **kwargs):
+        parser.addini(
+            dest, default=default, type=type, help="Default value for " + option
+        )
+        group.addoption(option, dest=dest, **kwargs)
+
+    add_option_ini(
+        "--log-level",
+        dest="log_level",
+        default=None,
+        metavar="LEVEL",
+        help=(
+            "Level of messages to catch/display."
+            " Not set by default, so it depends on the root/parent log handler's"
+            ' effective level, where it is "WARNING" by default.'
+        ),
+    )
+    add_option_ini(
+        "--log-format",
+        dest="log_format",
+        default=DEFAULT_LOG_FORMAT,
+        help="Log format used by the logging module",
+    )
+    add_option_ini(
+        "--log-date-format",
+        dest="log_date_format",
+        default=DEFAULT_LOG_DATE_FORMAT,
+        help="Log date format used by the logging module",
+    )
+    parser.addini(
+        "log_cli",
+        default=False,
+        type="bool",
+        help='Enable log display during test run (also known as "live logging")',
+    )
+    add_option_ini(
+        "--log-cli-level", dest="log_cli_level", default=None, help="CLI logging level"
+    )
+    add_option_ini(
+        "--log-cli-format",
+        dest="log_cli_format",
+        default=None,
+        help="Log format used by the logging module",
+    )
+    add_option_ini(
+        "--log-cli-date-format",
+        dest="log_cli_date_format",
+        default=None,
+        help="Log date format used by the logging module",
+    )
+    add_option_ini(
+        "--log-file",
+        dest="log_file",
+        default=None,
+        help="Path to a file where logging will be written to",
+    )
+    add_option_ini(
+        "--log-file-level",
+        dest="log_file_level",
+        default=None,
+        help="Log file logging level",
+    )
+    add_option_ini(
+        "--log-file-format",
+        dest="log_file_format",
+        default=DEFAULT_LOG_FORMAT,
+        help="Log format used by the logging module",
+    )
+    add_option_ini(
+        "--log-file-date-format",
+        dest="log_file_date_format",
+        default=DEFAULT_LOG_DATE_FORMAT,
+        help="Log date format used by the logging module",
+    )
+    add_option_ini(
+        "--log-auto-indent",
+        dest="log_auto_indent",
+        default=None,
+        help="Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.",
+    )
+
+
+_HandlerType = TypeVar("_HandlerType", bound=logging.Handler)
+
+
+# Not using @contextmanager for performance reasons.
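The comment above is why `catching_logs` (next) is a hand-written class: a generator-based context manager pays `contextlib` overhead on every test phase. For comparison, a `@contextmanager` version with the same install/restore logic as the class below would look roughly like this (a sketch, not pytest's code):

```python
import logging
from contextlib import contextmanager

@contextmanager
def catching_logs_cm(handler: logging.Handler, level=None):
    """Generator equivalent of the catching_logs class below (sketch only)."""
    root_logger = logging.getLogger()
    if level is not None:
        handler.setLevel(level)
    root_logger.addHandler(handler)
    orig_level = root_logger.level
    if level is not None:
        # Lower the root level if needed so records actually reach the handler.
        root_logger.setLevel(min(orig_level, level))
    try:
        yield handler
    finally:
        if level is not None:
            root_logger.setLevel(orig_level)
        root_logger.removeHandler(handler)
```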
+class catching_logs: + """Context manager that prepares the whole logging machinery properly.""" + + __slots__ = ("handler", "level", "orig_level") + + def __init__(self, handler: _HandlerType, level: Optional[int] = None) -> None: + self.handler = handler + self.level = level + + def __enter__(self): + root_logger = logging.getLogger() + if self.level is not None: + self.handler.setLevel(self.level) + root_logger.addHandler(self.handler) + if self.level is not None: + self.orig_level = root_logger.level + root_logger.setLevel(min(self.orig_level, self.level)) + return self.handler + + def __exit__(self, type, value, traceback): + root_logger = logging.getLogger() + if self.level is not None: + root_logger.setLevel(self.orig_level) + root_logger.removeHandler(self.handler) + + +class LogCaptureHandler(logging_StreamHandler): + """A logging handler that stores log records and the log text.""" + + def __init__(self) -> None: + """Create a new log handler.""" + super().__init__(StringIO()) + self.records: List[logging.LogRecord] = [] + + def emit(self, record: logging.LogRecord) -> None: + """Keep the log records in a list in addition to the log text.""" + self.records.append(record) + super().emit(record) + + def reset(self) -> None: + self.records = [] + self.stream = StringIO() + + def clear(self) -> None: + self.records.clear() + self.stream = StringIO() + + def handleError(self, record: logging.LogRecord) -> None: + if logging.raiseExceptions: + # Fail the test if the log message is bad (emit failed). + # The default behavior of logging is to print "Logging error" + # to stderr with the call stack and some extra details. + # pytest wants to make such mistakes visible during testing. + raise + + +@final +class LogCaptureFixture: + """Provides access and control of log capturing.""" + + def __init__(self, item: nodes.Node, *, _ispytest: bool = False) -> None: + check_ispytest(_ispytest) + self._item = item + self._initial_handler_level: Optional[int] = None + # Dict of log name -> log level. + self._initial_logger_levels: Dict[Optional[str], int] = {} + + def _finalize(self) -> None: + """Finalize the fixture. + + This restores the log levels changed by :meth:`set_level`. + """ + # Restore log levels. + if self._initial_handler_level is not None: + self.handler.setLevel(self._initial_handler_level) + for logger_name, level in self._initial_logger_levels.items(): + logger = logging.getLogger(logger_name) + logger.setLevel(level) + + @property + def handler(self) -> LogCaptureHandler: + """Get the logging handler used by the fixture.""" + return self._item.stash[caplog_handler_key] + + def get_records( + self, when: "Literal['setup', 'call', 'teardown']" + ) -> List[logging.LogRecord]: + """Get the logging records for one of the possible test phases. + + :param when: + Which test phase to obtain the records from. + Valid values are: "setup", "call" and "teardown". + + :returns: The list of captured records at the given stage. + + .. versionadded:: 3.4 + """ + return self._item.stash[caplog_records_key].get(when, []) + + @property + def text(self) -> str: + """The formatted log text.""" + return _remove_ansi_escape_sequences(self.handler.stream.getvalue()) + + @property + def records(self) -> List[logging.LogRecord]: + """The list of log records.""" + return self.handler.records + + @property + def record_tuples(self) -> List[Tuple[str, int, str]]: + """A list of a stripped down version of log records intended + for use in assertion comparison. 
+ + The format of the tuple is: + + (logger_name, log_level, message) + """ + return [(r.name, r.levelno, r.getMessage()) for r in self.records] + + @property + def messages(self) -> List[str]: + """A list of format-interpolated log messages. + + Unlike 'records', which contains the format string and parameters for + interpolation, log messages in this list are all interpolated. + + Unlike 'text', which contains the output from the handler, log + messages in this list are unadorned with levels, timestamps, etc, + making exact comparisons more reliable. + + Note that traceback or stack info (from :func:`logging.exception` or + the `exc_info` or `stack_info` arguments to the logging functions) is + not included, as this is added by the formatter in the handler. + + .. versionadded:: 3.7 + """ + return [r.getMessage() for r in self.records] + + def clear(self) -> None: + """Reset the list of log records and the captured log text.""" + self.handler.clear() + + def set_level(self, level: Union[int, str], logger: Optional[str] = None) -> None: + """Set the level of a logger for the duration of a test. + + .. versionchanged:: 3.4 + The levels of the loggers changed by this function will be + restored to their initial values at the end of the test. + + :param level: The level. + :param logger: The logger to update. If not given, the root logger. + """ + logger_obj = logging.getLogger(logger) + # Save the original log-level to restore it during teardown. + self._initial_logger_levels.setdefault(logger, logger_obj.level) + logger_obj.setLevel(level) + if self._initial_handler_level is None: + self._initial_handler_level = self.handler.level + self.handler.setLevel(level) + + @contextmanager + def at_level( + self, level: Union[int, str], logger: Optional[str] = None + ) -> Generator[None, None, None]: + """Context manager that sets the level for capturing of logs. After + the end of the 'with' statement the level is restored to its original + value. + + :param level: The level. + :param logger: The logger to update. If not given, the root logger. + """ + logger_obj = logging.getLogger(logger) + orig_level = logger_obj.level + logger_obj.setLevel(level) + handler_orig_level = self.handler.level + self.handler.setLevel(level) + try: + yield + finally: + logger_obj.setLevel(orig_level) + self.handler.setLevel(handler_orig_level) + + +@fixture +def caplog(request: FixtureRequest) -> Generator[LogCaptureFixture, None, None]: + """Access and control log capturing. 
+ + Captured logs are available through the following properties/methods:: + + * caplog.messages -> list of format-interpolated log messages + * caplog.text -> string containing formatted log output + * caplog.records -> list of logging.LogRecord instances + * caplog.record_tuples -> list of (logger_name, level, message) tuples + * caplog.clear() -> clear captured records and formatted log output string + """ + result = LogCaptureFixture(request.node, _ispytest=True) + yield result + result._finalize() + + +def get_log_level_for_setting(config: Config, *setting_names: str) -> Optional[int]: + for setting_name in setting_names: + log_level = config.getoption(setting_name) + if log_level is None: + log_level = config.getini(setting_name) + if log_level: + break + else: + return None + + if isinstance(log_level, str): + log_level = log_level.upper() + try: + return int(getattr(logging, log_level, log_level)) + except ValueError as e: + # Python logging does not recognise this as a logging level + raise UsageError( + "'{}' is not recognized as a logging level name for " + "'{}'. Please consider passing the " + "logging level num instead.".format(log_level, setting_name) + ) from e + + +# run after terminalreporter/capturemanager are configured +@hookimpl(trylast=True) +def pytest_configure(config: Config) -> None: + config.pluginmanager.register(LoggingPlugin(config), "logging-plugin") + + +class LoggingPlugin: + """Attaches to the logging module and captures log messages for each test.""" + + def __init__(self, config: Config) -> None: + """Create a new plugin to capture log messages. + + The formatter can be safely shared across all handlers so + create a single one for the entire test session here. + """ + self._config = config + + # Report logging. + self.formatter = self._create_formatter( + get_option_ini(config, "log_format"), + get_option_ini(config, "log_date_format"), + get_option_ini(config, "log_auto_indent"), + ) + self.log_level = get_log_level_for_setting(config, "log_level") + self.caplog_handler = LogCaptureHandler() + self.caplog_handler.setFormatter(self.formatter) + self.report_handler = LogCaptureHandler() + self.report_handler.setFormatter(self.formatter) + + # File logging. + self.log_file_level = get_log_level_for_setting(config, "log_file_level") + log_file = get_option_ini(config, "log_file") or os.devnull + if log_file != os.devnull: + directory = os.path.dirname(os.path.abspath(log_file)) + if not os.path.isdir(directory): + os.makedirs(directory) + + self.log_file_handler = _FileHandler(log_file, mode="w", encoding="UTF-8") + log_file_format = get_option_ini(config, "log_file_format", "log_format") + log_file_date_format = get_option_ini( + config, "log_file_date_format", "log_date_format" + ) + + log_file_formatter = logging.Formatter( + log_file_format, datefmt=log_file_date_format + ) + self.log_file_handler.setFormatter(log_file_formatter) + + # CLI/live logging. + self.log_cli_level = get_log_level_for_setting( + config, "log_cli_level", "log_level" + ) + if self._log_cli_enabled(): + terminal_reporter = config.pluginmanager.get_plugin("terminalreporter") + capture_manager = config.pluginmanager.get_plugin("capturemanager") + # if capturemanager plugin is disabled, live logging still works. 
+ self.log_cli_handler: Union[ + _LiveLoggingStreamHandler, _LiveLoggingNullHandler + ] = _LiveLoggingStreamHandler(terminal_reporter, capture_manager) + else: + self.log_cli_handler = _LiveLoggingNullHandler() + log_cli_formatter = self._create_formatter( + get_option_ini(config, "log_cli_format", "log_format"), + get_option_ini(config, "log_cli_date_format", "log_date_format"), + get_option_ini(config, "log_auto_indent"), + ) + self.log_cli_handler.setFormatter(log_cli_formatter) + + def _create_formatter(self, log_format, log_date_format, auto_indent): + # Color option doesn't exist if terminal plugin is disabled. + color = getattr(self._config.option, "color", "no") + if color != "no" and ColoredLevelFormatter.LEVELNAME_FMT_REGEX.search( + log_format + ): + formatter: logging.Formatter = ColoredLevelFormatter( + create_terminal_writer(self._config), log_format, log_date_format + ) + else: + formatter = logging.Formatter(log_format, log_date_format) + + formatter._style = PercentStyleMultiline( + formatter._style._fmt, auto_indent=auto_indent + ) + + return formatter + + def set_log_path(self, fname: str) -> None: + """Set the filename parameter for Logging.FileHandler(). + + Creates parent directory if it does not exist. + + .. warning:: + This is an experimental API. + """ + fpath = Path(fname) + + if not fpath.is_absolute(): + fpath = self._config.rootpath / fpath + + if not fpath.parent.exists(): + fpath.parent.mkdir(exist_ok=True, parents=True) + + # https://github.com/python/mypy/issues/11193 + stream: io.TextIOWrapper = fpath.open(mode="w", encoding="UTF-8") # type: ignore[assignment] + old_stream = self.log_file_handler.setStream(stream) + if old_stream: + old_stream.close() + + def _log_cli_enabled(self): + """Return whether live logging is enabled.""" + enabled = self._config.getoption( + "--log-cli-level" + ) is not None or self._config.getini("log_cli") + if not enabled: + return False + + terminal_reporter = self._config.pluginmanager.get_plugin("terminalreporter") + if terminal_reporter is None: + # terminal reporter is disabled e.g. by pytest-xdist. + return False + + return True + + @hookimpl(hookwrapper=True, tryfirst=True) + def pytest_sessionstart(self) -> Generator[None, None, None]: + self.log_cli_handler.set_when("sessionstart") + + with catching_logs(self.log_cli_handler, level=self.log_cli_level): + with catching_logs(self.log_file_handler, level=self.log_file_level): + yield + + @hookimpl(hookwrapper=True, tryfirst=True) + def pytest_collection(self) -> Generator[None, None, None]: + self.log_cli_handler.set_when("collection") + + with catching_logs(self.log_cli_handler, level=self.log_cli_level): + with catching_logs(self.log_file_handler, level=self.log_file_level): + yield + + @hookimpl(hookwrapper=True) + def pytest_runtestloop(self, session: Session) -> Generator[None, None, None]: + if session.config.option.collectonly: + yield + return + + if self._log_cli_enabled() and self._config.getoption("verbose") < 1: + # The verbose flag is needed to avoid messy test progress output. + self._config.option.verbose = 1 + + with catching_logs(self.log_cli_handler, level=self.log_cli_level): + with catching_logs(self.log_file_handler, level=self.log_file_level): + yield # Run all the tests. 
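With the per-phase wrapping above in place, the `caplog` fixture defined earlier exposes whatever the `catching_logs` blocks captured. A typical test using `at_level` and `record_tuples` (the test body and logger name are illustrative):

```python
import logging

def test_warning_is_captured(caplog):
    with caplog.at_level(logging.WARNING, logger="app"):
        logging.getLogger("app").warning("disk %d%% full", 93)
    # record_tuples interpolates the message via LogRecord.getMessage(), see above.
    assert caplog.record_tuples == [("app", logging.WARNING, "disk 93% full")]
```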
+ + @hookimpl + def pytest_runtest_logstart(self) -> None: + self.log_cli_handler.reset() + self.log_cli_handler.set_when("start") + + @hookimpl + def pytest_runtest_logreport(self) -> None: + self.log_cli_handler.set_when("logreport") + + def _runtest_for(self, item: nodes.Item, when: str) -> Generator[None, None, None]: + """Implement the internals of the pytest_runtest_xxx() hooks.""" + with catching_logs( + self.caplog_handler, + level=self.log_level, + ) as caplog_handler, catching_logs( + self.report_handler, + level=self.log_level, + ) as report_handler: + caplog_handler.reset() + report_handler.reset() + item.stash[caplog_records_key][when] = caplog_handler.records + item.stash[caplog_handler_key] = caplog_handler + + yield + + log = report_handler.stream.getvalue().strip() + item.add_report_section(when, "log", log) + + @hookimpl(hookwrapper=True) + def pytest_runtest_setup(self, item: nodes.Item) -> Generator[None, None, None]: + self.log_cli_handler.set_when("setup") + + empty: Dict[str, List[logging.LogRecord]] = {} + item.stash[caplog_records_key] = empty + yield from self._runtest_for(item, "setup") + + @hookimpl(hookwrapper=True) + def pytest_runtest_call(self, item: nodes.Item) -> Generator[None, None, None]: + self.log_cli_handler.set_when("call") + + yield from self._runtest_for(item, "call") + + @hookimpl(hookwrapper=True) + def pytest_runtest_teardown(self, item: nodes.Item) -> Generator[None, None, None]: + self.log_cli_handler.set_when("teardown") + + yield from self._runtest_for(item, "teardown") + del item.stash[caplog_records_key] + del item.stash[caplog_handler_key] + + @hookimpl + def pytest_runtest_logfinish(self) -> None: + self.log_cli_handler.set_when("finish") + + @hookimpl(hookwrapper=True, tryfirst=True) + def pytest_sessionfinish(self) -> Generator[None, None, None]: + self.log_cli_handler.set_when("sessionfinish") + + with catching_logs(self.log_cli_handler, level=self.log_cli_level): + with catching_logs(self.log_file_handler, level=self.log_file_level): + yield + + @hookimpl + def pytest_unconfigure(self) -> None: + # Close the FileHandler explicitly. + # (logging.shutdown might have lost the weakref?!) + self.log_file_handler.close() + + +class _FileHandler(logging.FileHandler): + """A logging FileHandler with pytest tweaks.""" + + def handleError(self, record: logging.LogRecord) -> None: + # Handled by LogCaptureHandler. + pass + + +class _LiveLoggingStreamHandler(logging_StreamHandler): + """A logging StreamHandler used by the live logging feature: it will + write a newline before the first log message in each test. + + During live logging we must also explicitly disable stdout/stderr + capturing otherwise it will get captured and won't appear in the + terminal. + """ + + # Officially stream needs to be a IO[str], but TerminalReporter + # isn't. So force it. 
+ stream: TerminalReporter = None # type: ignore + + def __init__( + self, + terminal_reporter: TerminalReporter, + capture_manager: Optional[CaptureManager], + ) -> None: + super().__init__(stream=terminal_reporter) # type: ignore[arg-type] + self.capture_manager = capture_manager + self.reset() + self.set_when(None) + self._test_outcome_written = False + + def reset(self) -> None: + """Reset the handler; should be called before the start of each test.""" + self._first_record_emitted = False + + def set_when(self, when: Optional[str]) -> None: + """Prepare for the given test phase (setup/call/teardown).""" + self._when = when + self._section_name_shown = False + if when == "start": + self._test_outcome_written = False + + def emit(self, record: logging.LogRecord) -> None: + ctx_manager = ( + self.capture_manager.global_and_fixture_disabled() + if self.capture_manager + else nullcontext() + ) + with ctx_manager: + if not self._first_record_emitted: + self.stream.write("\n") + self._first_record_emitted = True + elif self._when in ("teardown", "finish"): + if not self._test_outcome_written: + self._test_outcome_written = True + self.stream.write("\n") + if not self._section_name_shown and self._when: + self.stream.section("live log " + self._when, sep="-", bold=True) + self._section_name_shown = True + super().emit(record) + + def handleError(self, record: logging.LogRecord) -> None: + # Handled by LogCaptureHandler. + pass + + +class _LiveLoggingNullHandler(logging.NullHandler): + """A logging handler used when live logging is disabled.""" + + def reset(self) -> None: + pass + + def set_when(self, when: str) -> None: + pass + + def handleError(self, record: logging.LogRecord) -> None: + # Handled by LogCaptureHandler. + pass diff --git a/env/lib/python3.8/site-packages/_pytest/main.py b/env/lib/python3.8/site-packages/_pytest/main.py new file mode 100644 index 00000000..61fb7eaa --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/main.py @@ -0,0 +1,902 @@ +"""Core implementation of the testing process: init, session, runtest loop.""" +import argparse +import fnmatch +import functools +import importlib +import os +import sys +from pathlib import Path +from typing import Callable +from typing import Dict +from typing import FrozenSet +from typing import Iterator +from typing import List +from typing import Optional +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import Union + +import attr + +import _pytest._code +from _pytest import nodes +from _pytest.compat import final +from _pytest.compat import overload +from _pytest.config import Config +from _pytest.config import directory_arg +from _pytest.config import ExitCode +from _pytest.config import hookimpl +from _pytest.config import PytestPluginManager +from _pytest.config import UsageError +from _pytest.config.argparsing import Parser +from _pytest.fixtures import FixtureManager +from _pytest.outcomes import exit +from _pytest.pathlib import absolutepath +from _pytest.pathlib import bestrelpath +from _pytest.pathlib import fnmatch_ex +from _pytest.pathlib import visit +from _pytest.reports import CollectReport +from _pytest.reports import TestReport +from _pytest.runner import collect_one_node +from _pytest.runner import SetupState + + +if TYPE_CHECKING: + from typing_extensions import Literal + + +def pytest_addoption(parser: Parser) -> None: + parser.addini( + "norecursedirs", + "Directory patterns to avoid for recursion", + 
type="args", + default=[ + "*.egg", + ".*", + "_darcs", + "build", + "CVS", + "dist", + "node_modules", + "venv", + "{arch}", + ], + ) + parser.addini( + "testpaths", + "Directories to search for tests when no files or directories are given on the " + "command line", + type="args", + default=[], + ) + group = parser.getgroup("general", "Running and selection options") + group._addoption( + "-x", + "--exitfirst", + action="store_const", + dest="maxfail", + const=1, + help="Exit instantly on first error or failed test", + ) + group = parser.getgroup("pytest-warnings") + group.addoption( + "-W", + "--pythonwarnings", + action="append", + help="Set which warnings to report, see -W option of Python itself", + ) + parser.addini( + "filterwarnings", + type="linelist", + help="Each line specifies a pattern for " + "warnings.filterwarnings. " + "Processed after -W/--pythonwarnings.", + ) + group._addoption( + "--maxfail", + metavar="num", + action="store", + type=int, + dest="maxfail", + default=0, + help="Exit after first num failures or errors", + ) + group._addoption( + "--strict-config", + action="store_true", + help="Any warnings encountered while parsing the `pytest` section of the " + "configuration file raise errors", + ) + group._addoption( + "--strict-markers", + action="store_true", + help="Markers not registered in the `markers` section of the configuration " + "file raise errors", + ) + group._addoption( + "--strict", + action="store_true", + help="(Deprecated) alias to --strict-markers", + ) + group._addoption( + "-c", + metavar="file", + type=str, + dest="inifilename", + help="Load configuration from `file` instead of trying to locate one of the " + "implicit configuration files", + ) + group._addoption( + "--continue-on-collection-errors", + action="store_true", + default=False, + dest="continue_on_collection_errors", + help="Force test execution even if collection errors occur", + ) + group._addoption( + "--rootdir", + action="store", + dest="rootdir", + help="Define root directory for tests. 
Can be relative path: 'root_dir', './root_dir', " + "'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: " + "'$HOME/root_dir'.", + ) + + group = parser.getgroup("collect", "collection") + group.addoption( + "--collectonly", + "--collect-only", + "--co", + action="store_true", + help="Only collect tests, don't execute them", + ) + group.addoption( + "--pyargs", + action="store_true", + help="Try to interpret all arguments as Python packages", + ) + group.addoption( + "--ignore", + action="append", + metavar="path", + help="Ignore path during collection (multi-allowed)", + ) + group.addoption( + "--ignore-glob", + action="append", + metavar="path", + help="Ignore path pattern during collection (multi-allowed)", + ) + group.addoption( + "--deselect", + action="append", + metavar="nodeid_prefix", + help="Deselect item (via node id prefix) during collection (multi-allowed)", + ) + group.addoption( + "--confcutdir", + dest="confcutdir", + default=None, + metavar="dir", + type=functools.partial(directory_arg, optname="--confcutdir"), + help="Only load conftest.py's relative to specified dir", + ) + group.addoption( + "--noconftest", + action="store_true", + dest="noconftest", + default=False, + help="Don't load any conftest.py files", + ) + group.addoption( + "--keepduplicates", + "--keep-duplicates", + action="store_true", + dest="keepduplicates", + default=False, + help="Keep duplicate tests", + ) + group.addoption( + "--collect-in-virtualenv", + action="store_true", + dest="collect_in_virtualenv", + default=False, + help="Don't ignore tests in a local virtualenv directory", + ) + group.addoption( + "--import-mode", + default="prepend", + choices=["prepend", "append", "importlib"], + dest="importmode", + help="Prepend/append to sys.path when importing test modules and conftest " + "files. Default: prepend.", + ) + + group = parser.getgroup("debugconfig", "test session debugging and configuration") + group.addoption( + "--basetemp", + dest="basetemp", + default=None, + type=validate_basetemp, + metavar="dir", + help=( + "Base temporary directory for this test run. 
" + "(Warning: this directory is removed if it exists.)" + ), + ) + + +def validate_basetemp(path: str) -> str: + # GH 7119 + msg = "basetemp must not be empty, the current working directory or any parent directory of it" + + # empty path + if not path: + raise argparse.ArgumentTypeError(msg) + + def is_ancestor(base: Path, query: Path) -> bool: + """Return whether query is an ancestor of base.""" + if base == query: + return True + return query in base.parents + + # check if path is an ancestor of cwd + if is_ancestor(Path.cwd(), Path(path).absolute()): + raise argparse.ArgumentTypeError(msg) + + # check symlinks for ancestors + if is_ancestor(Path.cwd().resolve(), Path(path).resolve()): + raise argparse.ArgumentTypeError(msg) + + return path + + +def wrap_session( + config: Config, doit: Callable[[Config, "Session"], Optional[Union[int, ExitCode]]] +) -> Union[int, ExitCode]: + """Skeleton command line program.""" + session = Session.from_config(config) + session.exitstatus = ExitCode.OK + initstate = 0 + try: + try: + config._do_configure() + initstate = 1 + config.hook.pytest_sessionstart(session=session) + initstate = 2 + session.exitstatus = doit(config, session) or 0 + except UsageError: + session.exitstatus = ExitCode.USAGE_ERROR + raise + except Failed: + session.exitstatus = ExitCode.TESTS_FAILED + except (KeyboardInterrupt, exit.Exception): + excinfo = _pytest._code.ExceptionInfo.from_current() + exitstatus: Union[int, ExitCode] = ExitCode.INTERRUPTED + if isinstance(excinfo.value, exit.Exception): + if excinfo.value.returncode is not None: + exitstatus = excinfo.value.returncode + if initstate < 2: + sys.stderr.write(f"{excinfo.typename}: {excinfo.value.msg}\n") + config.hook.pytest_keyboard_interrupt(excinfo=excinfo) + session.exitstatus = exitstatus + except BaseException: + session.exitstatus = ExitCode.INTERNAL_ERROR + excinfo = _pytest._code.ExceptionInfo.from_current() + try: + config.notify_exception(excinfo, config.option) + except exit.Exception as exc: + if exc.returncode is not None: + session.exitstatus = exc.returncode + sys.stderr.write(f"{type(exc).__name__}: {exc}\n") + else: + if isinstance(excinfo.value, SystemExit): + sys.stderr.write("mainloop: caught unexpected SystemExit!\n") + + finally: + # Explicitly break reference cycle. 
+ excinfo = None # type: ignore + os.chdir(session.startpath) + if initstate >= 2: + try: + config.hook.pytest_sessionfinish( + session=session, exitstatus=session.exitstatus + ) + except exit.Exception as exc: + if exc.returncode is not None: + session.exitstatus = exc.returncode + sys.stderr.write(f"{type(exc).__name__}: {exc}\n") + config._ensure_unconfigure() + return session.exitstatus + + +def pytest_cmdline_main(config: Config) -> Union[int, ExitCode]: + return wrap_session(config, _main) + + +def _main(config: Config, session: "Session") -> Optional[Union[int, ExitCode]]: + """Default command line protocol for initialization, session, + running tests and reporting.""" + config.hook.pytest_collection(session=session) + config.hook.pytest_runtestloop(session=session) + + if session.testsfailed: + return ExitCode.TESTS_FAILED + elif session.testscollected == 0: + return ExitCode.NO_TESTS_COLLECTED + return None + + +def pytest_collection(session: "Session") -> None: + session.perform_collect() + + +def pytest_runtestloop(session: "Session") -> bool: + if session.testsfailed and not session.config.option.continue_on_collection_errors: + raise session.Interrupted( + "%d error%s during collection" + % (session.testsfailed, "s" if session.testsfailed != 1 else "") + ) + + if session.config.option.collectonly: + return True + + for i, item in enumerate(session.items): + nextitem = session.items[i + 1] if i + 1 < len(session.items) else None + item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) + if session.shouldfail: + raise session.Failed(session.shouldfail) + if session.shouldstop: + raise session.Interrupted(session.shouldstop) + return True + + +def _in_venv(path: Path) -> bool: + """Attempt to detect if ``path`` is the root of a Virtual Environment by + checking for the existence of the appropriate activate script.""" + bindir = path.joinpath("Scripts" if sys.platform.startswith("win") else "bin") + try: + if not bindir.is_dir(): + return False + except OSError: + return False + activates = ( + "activate", + "activate.csh", + "activate.fish", + "Activate", + "Activate.bat", + "Activate.ps1", + ) + return any(fname.name in activates for fname in bindir.iterdir()) + + +def pytest_ignore_collect(collection_path: Path, config: Config) -> Optional[bool]: + ignore_paths = config._getconftest_pathlist( + "collect_ignore", path=collection_path.parent, rootpath=config.rootpath + ) + ignore_paths = ignore_paths or [] + excludeopt = config.getoption("ignore") + if excludeopt: + ignore_paths.extend(absolutepath(x) for x in excludeopt) + + if collection_path in ignore_paths: + return True + + ignore_globs = config._getconftest_pathlist( + "collect_ignore_glob", path=collection_path.parent, rootpath=config.rootpath + ) + ignore_globs = ignore_globs or [] + excludeglobopt = config.getoption("ignore_glob") + if excludeglobopt: + ignore_globs.extend(absolutepath(x) for x in excludeglobopt) + + if any(fnmatch.fnmatch(str(collection_path), str(glob)) for glob in ignore_globs): + return True + + allow_in_venv = config.getoption("collect_in_virtualenv") + if not allow_in_venv and _in_venv(collection_path): + return True + return None + + +def pytest_collection_modifyitems(items: List[nodes.Item], config: Config) -> None: + deselect_prefixes = tuple(config.getoption("deselect") or []) + if not deselect_prefixes: + return + + remaining = [] + deselected = [] + for colitem in items: + if colitem.nodeid.startswith(deselect_prefixes): + deselected.append(colitem) + else: + 
remaining.append(colitem) + + if deselected: + config.hook.pytest_deselected(items=deselected) + items[:] = remaining + + +class FSHookProxy: + def __init__(self, pm: PytestPluginManager, remove_mods) -> None: + self.pm = pm + self.remove_mods = remove_mods + + def __getattr__(self, name: str): + x = self.pm.subset_hook_caller(name, remove_plugins=self.remove_mods) + self.__dict__[name] = x + return x + + +class Interrupted(KeyboardInterrupt): + """Signals that the test run was interrupted.""" + + __module__ = "builtins" # For py3. + + +class Failed(Exception): + """Signals a stop as failed test run.""" + + +@attr.s(slots=True, auto_attribs=True) +class _bestrelpath_cache(Dict[Path, str]): + path: Path + + def __missing__(self, path: Path) -> str: + r = bestrelpath(self.path, path) + self[path] = r + return r + + +@final +class Session(nodes.FSCollector): + Interrupted = Interrupted + Failed = Failed + # Set on the session by runner.pytest_sessionstart. + _setupstate: SetupState + # Set on the session by fixtures.pytest_sessionstart. + _fixturemanager: FixtureManager + exitstatus: Union[int, ExitCode] + + def __init__(self, config: Config) -> None: + super().__init__( + path=config.rootpath, + fspath=None, + parent=None, + config=config, + session=self, + nodeid="", + ) + self.testsfailed = 0 + self.testscollected = 0 + self.shouldstop: Union[bool, str] = False + self.shouldfail: Union[bool, str] = False + self.trace = config.trace.root.get("collection") + self._initialpaths: FrozenSet[Path] = frozenset() + + self._bestrelpathcache: Dict[Path, str] = _bestrelpath_cache(config.rootpath) + + self.config.pluginmanager.register(self, name="session") + + @classmethod + def from_config(cls, config: Config) -> "Session": + session: Session = cls._create(config=config) + return session + + def __repr__(self) -> str: + return "<%s %s exitstatus=%r testsfailed=%d testscollected=%d>" % ( + self.__class__.__name__, + self.name, + getattr(self, "exitstatus", ""), + self.testsfailed, + self.testscollected, + ) + + @property + def startpath(self) -> Path: + """The path from which pytest was invoked. + + .. versionadded:: 7.0.0 + """ + return self.config.invocation_params.dir + + def _node_location_to_relpath(self, node_path: Path) -> str: + # bestrelpath is a quite slow function. + return self._bestrelpathcache[node_path] + + @hookimpl(tryfirst=True) + def pytest_collectstart(self) -> None: + if self.shouldfail: + raise self.Failed(self.shouldfail) + if self.shouldstop: + raise self.Interrupted(self.shouldstop) + + @hookimpl(tryfirst=True) + def pytest_runtest_logreport( + self, report: Union[TestReport, CollectReport] + ) -> None: + if report.failed and not hasattr(report, "wasxfail"): + self.testsfailed += 1 + maxfail = self.config.getvalue("maxfail") + if maxfail and self.testsfailed >= maxfail: + self.shouldfail = "stopping after %d failures" % (self.testsfailed) + + pytest_collectreport = pytest_runtest_logreport + + def isinitpath(self, path: Union[str, "os.PathLike[str]"]) -> bool: + # Optimization: Path(Path(...)) is much slower than isinstance. + path_ = path if isinstance(path, Path) else Path(path) + return path_ in self._initialpaths + + def gethookproxy(self, fspath: "os.PathLike[str]"): + # Optimization: Path(Path(...)) is much slower than isinstance. + path = fspath if isinstance(fspath, Path) else Path(fspath) + pm = self.config.pluginmanager + # Check if we have the common case of running + # hooks with all conftest.py files. 
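The `_bestrelpath_cache` above is a small `Dict` subclass that memoizes `bestrelpath` results through `__missing__`, since the session resolves node locations over and over. A sketch of how that cache behaves (paths are illustrative); the conftest-subsetting code continues below:

```python
from pathlib import Path
from _pytest.main import _bestrelpath_cache

cache = _bestrelpath_cache(path=Path("/repo"))
print(cache[Path("/repo/tests/test_x.py")])  # computed via bestrelpath: tests/test_x.py
print(cache[Path("/repo/tests/test_x.py")])  # second lookup is a plain dict hit
```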
+ my_conftestmodules = pm._getconftestmodules( + path, + self.config.getoption("importmode"), + rootpath=self.config.rootpath, + ) + remove_mods = pm._conftest_plugins.difference(my_conftestmodules) + if remove_mods: + # One or more conftests are not in use at this fspath. + from .config.compat import PathAwareHookProxy + + proxy = PathAwareHookProxy(FSHookProxy(pm, remove_mods)) + else: + # All plugins are active for this fspath. + proxy = self.config.hook + return proxy + + def _recurse(self, direntry: "os.DirEntry[str]") -> bool: + if direntry.name == "__pycache__": + return False + fspath = Path(direntry.path) + ihook = self.gethookproxy(fspath.parent) + if ihook.pytest_ignore_collect(collection_path=fspath, config=self.config): + return False + norecursepatterns = self.config.getini("norecursedirs") + if any(fnmatch_ex(pat, fspath) for pat in norecursepatterns): + return False + return True + + def _collectfile( + self, fspath: Path, handle_dupes: bool = True + ) -> Sequence[nodes.Collector]: + assert ( + fspath.is_file() + ), "{!r} is not a file (isdir={!r}, exists={!r}, islink={!r})".format( + fspath, fspath.is_dir(), fspath.exists(), fspath.is_symlink() + ) + ihook = self.gethookproxy(fspath) + if not self.isinitpath(fspath): + if ihook.pytest_ignore_collect(collection_path=fspath, config=self.config): + return () + + if handle_dupes: + keepduplicates = self.config.getoption("keepduplicates") + if not keepduplicates: + duplicate_paths = self.config.pluginmanager._duplicatepaths + if fspath in duplicate_paths: + return () + else: + duplicate_paths.add(fspath) + + return ihook.pytest_collect_file(file_path=fspath, parent=self) # type: ignore[no-any-return] + + @overload + def perform_collect( + self, args: Optional[Sequence[str]] = ..., genitems: "Literal[True]" = ... + ) -> Sequence[nodes.Item]: + ... + + @overload + def perform_collect( # noqa: F811 + self, args: Optional[Sequence[str]] = ..., genitems: bool = ... + ) -> Sequence[Union[nodes.Item, nodes.Collector]]: + ... + + def perform_collect( # noqa: F811 + self, args: Optional[Sequence[str]] = None, genitems: bool = True + ) -> Sequence[Union[nodes.Item, nodes.Collector]]: + """Perform the collection phase for this session. + + This is called by the default :hook:`pytest_collection` hook + implementation; see the documentation of this hook for more details. + For testing purposes, it may also be called directly on a fresh + ``Session``. + + This function normally recursively expands any collectors collected + from the session to their items, and only items are returned. For + testing purposes, this may be suppressed by passing ``genitems=False``, + in which case the return value contains these collectors unexpanded, + and ``session.items`` is empty. 
+ """ + if args is None: + args = self.config.args + + self.trace("perform_collect", self, args) + self.trace.root.indent += 1 + + self._notfound: List[Tuple[str, Sequence[nodes.Collector]]] = [] + self._initial_parts: List[Tuple[Path, List[str]]] = [] + self.items: List[nodes.Item] = [] + + hook = self.config.hook + + items: Sequence[Union[nodes.Item, nodes.Collector]] = self.items + try: + initialpaths: List[Path] = [] + for arg in args: + fspath, parts = resolve_collection_argument( + self.config.invocation_params.dir, + arg, + as_pypath=self.config.option.pyargs, + ) + self._initial_parts.append((fspath, parts)) + initialpaths.append(fspath) + self._initialpaths = frozenset(initialpaths) + rep = collect_one_node(self) + self.ihook.pytest_collectreport(report=rep) + self.trace.root.indent -= 1 + if self._notfound: + errors = [] + for arg, collectors in self._notfound: + if collectors: + errors.append( + f"not found: {arg}\n(no name {arg!r} in any of {collectors!r})" + ) + else: + errors.append(f"found no collectors for {arg}") + + raise UsageError(*errors) + if not genitems: + items = rep.result + else: + if rep.passed: + for node in rep.result: + self.items.extend(self.genitems(node)) + + self.config.pluginmanager.check_pending() + hook.pytest_collection_modifyitems( + session=self, config=self.config, items=items + ) + finally: + hook.pytest_collection_finish(session=self) + + self.testscollected = len(items) + return items + + def collect(self) -> Iterator[Union[nodes.Item, nodes.Collector]]: + from _pytest.python import Package + + # Keep track of any collected nodes in here, so we don't duplicate fixtures. + node_cache1: Dict[Path, Sequence[nodes.Collector]] = {} + node_cache2: Dict[Tuple[Type[nodes.Collector], Path], nodes.Collector] = {} + + # Keep track of any collected collectors in matchnodes paths, so they + # are not collected more than once. + matchnodes_cache: Dict[Tuple[Type[nodes.Collector], str], CollectReport] = {} + + # Dirnames of pkgs with dunder-init files. + pkg_roots: Dict[str, Package] = {} + + for argpath, names in self._initial_parts: + self.trace("processing argument", (argpath, names)) + self.trace.root.indent += 1 + + # Start with a Session root, and delve to argpath item (dir or file) + # and stack all Packages found on the way. + # No point in finding packages when collecting doctests. + if not self.config.getoption("doctestmodules", False): + pm = self.config.pluginmanager + for parent in (argpath, *argpath.parents): + if not pm._is_in_confcutdir(argpath): + break + + if parent.is_dir(): + pkginit = parent / "__init__.py" + if pkginit.is_file() and pkginit not in node_cache1: + col = self._collectfile(pkginit, handle_dupes=False) + if col: + if isinstance(col[0], Package): + pkg_roots[str(parent)] = col[0] + node_cache1[col[0].path] = [col[0]] + + # If it's a directory argument, recurse and look for any Subpackages. + # Let the Package collector deal with subnodes, don't collect here. + if argpath.is_dir(): + assert not names, f"invalid arg {(argpath, names)!r}" + + seen_dirs: Set[Path] = set() + for direntry in visit(str(argpath), self._recurse): + if not direntry.is_file(): + continue + + path = Path(direntry.path) + dirpath = path.parent + + if dirpath not in seen_dirs: + # Collect packages first. + seen_dirs.add(dirpath) + pkginit = dirpath / "__init__.py" + if pkginit.exists(): + for x in self._collectfile(pkginit): + yield x + if isinstance(x, Package): + pkg_roots[str(dirpath)] = x + if str(dirpath) in pkg_roots: + # Do not collect packages here. 
+ continue + + for x in self._collectfile(path): + key2 = (type(x), x.path) + if key2 in node_cache2: + yield node_cache2[key2] + else: + node_cache2[key2] = x + yield x + else: + assert argpath.is_file() + + if argpath in node_cache1: + col = node_cache1[argpath] + else: + collect_root = pkg_roots.get(str(argpath.parent), self) + col = collect_root._collectfile(argpath, handle_dupes=False) + if col: + node_cache1[argpath] = col + + matching = [] + work: List[ + Tuple[Sequence[Union[nodes.Item, nodes.Collector]], Sequence[str]] + ] = [(col, names)] + while work: + self.trace("matchnodes", col, names) + self.trace.root.indent += 1 + + matchnodes, matchnames = work.pop() + for node in matchnodes: + if not matchnames: + matching.append(node) + continue + if not isinstance(node, nodes.Collector): + continue + key = (type(node), node.nodeid) + if key in matchnodes_cache: + rep = matchnodes_cache[key] + else: + rep = collect_one_node(node) + matchnodes_cache[key] = rep + if rep.passed: + submatchnodes = [] + for r in rep.result: + # TODO: Remove parametrized workaround once collection structure contains + # parametrization. + if ( + r.name == matchnames[0] + or r.name.split("[")[0] == matchnames[0] + ): + submatchnodes.append(r) + if submatchnodes: + work.append((submatchnodes, matchnames[1:])) + else: + # Report collection failures here to avoid failing to run some test + # specified in the command line because the module could not be + # imported (#134). + node.ihook.pytest_collectreport(report=rep) + + self.trace("matchnodes finished -> ", len(matching), "nodes") + self.trace.root.indent -= 1 + + if not matching: + report_arg = "::".join((str(argpath), *names)) + self._notfound.append((report_arg, col)) + continue + + # If __init__.py was the only file requested, then the matched + # node will be the corresponding Package (by default), and the + # first yielded item will be the __init__ Module itself, so + # just use that. If this special case isn't taken, then all the + # files in the package will be yielded. + if argpath.name == "__init__.py" and isinstance(matching[0], Package): + try: + yield next(iter(matching[0].collect())) + except StopIteration: + # The package collects nothing with only an __init__.py + # file in it, which gets ignored by the default + # "python_files" option. 
+                        pass
+                    continue
+
+                yield from matching
+
+            self.trace.root.indent -= 1
+
+    def genitems(
+        self, node: Union[nodes.Item, nodes.Collector]
+    ) -> Iterator[nodes.Item]:
+        self.trace("genitems", node)
+        if isinstance(node, nodes.Item):
+            node.ihook.pytest_itemcollected(item=node)
+            yield node
+        else:
+            assert isinstance(node, nodes.Collector)
+            rep = collect_one_node(node)
+            if rep.passed:
+                for subnode in rep.result:
+                    yield from self.genitems(subnode)
+            node.ihook.pytest_collectreport(report=rep)
+
+
+def search_pypath(module_name: str) -> str:
+    """Search sys.path for the given dotted module name, and return its file system path."""
+    try:
+        spec = importlib.util.find_spec(module_name)
+    # AttributeError: looks like package module, but actually filename
+    # ImportError: module does not exist
+    # ValueError: not a module name
+    except (AttributeError, ImportError, ValueError):
+        return module_name
+    if spec is None or spec.origin is None or spec.origin == "namespace":
+        return module_name
+    elif spec.submodule_search_locations:
+        return os.path.dirname(spec.origin)
+    else:
+        return spec.origin
+
+
+def resolve_collection_argument(
+    invocation_path: Path, arg: str, *, as_pypath: bool = False
+) -> Tuple[Path, List[str]]:
+    """Parse path arguments optionally containing selection parts and return (fspath, names).
+
+    Command-line arguments can point to files and/or directories, and optionally contain
+    parts for specific tests selection, for example:
+
+        "pkg/tests/test_foo.py::TestClass::test_foo"
+
+    This function ensures the path exists, and returns a tuple:
+
+        (Path("/full/path/to/pkg/tests/test_foo.py"), ["TestClass", "test_foo"])
+
+    When as_pypath is True, expects that the command-line argument actually contains
+    module paths instead of file-system paths:
+
+        "pkg.tests.test_foo::TestClass::test_foo"
+
+    In which case we search sys.path for a matching module, and then return the *path* to the
+    found module.
+
+    If the path doesn't exist, raise UsageError.
+    If the path is a directory and selection parts are present, raise UsageError.
+ """ + base, squacket, rest = str(arg).partition("[") + strpath, *parts = base.split("::") + if parts: + parts[-1] = f"{parts[-1]}{squacket}{rest}" + if as_pypath: + strpath = search_pypath(strpath) + fspath = invocation_path / strpath + fspath = absolutepath(fspath) + if not fspath.exists(): + msg = ( + "module or package not found: {arg} (missing __init__.py?)" + if as_pypath + else "file or directory not found: {arg}" + ) + raise UsageError(msg.format(arg=arg)) + if parts and fspath.is_dir(): + msg = ( + "package argument cannot contain :: selection parts: {arg}" + if as_pypath + else "directory argument cannot contain :: selection parts: {arg}" + ) + raise UsageError(msg.format(arg=arg)) + return fspath, parts diff --git a/env/lib/python3.8/site-packages/_pytest/mark/__init__.py b/env/lib/python3.8/site-packages/_pytest/mark/__init__.py new file mode 100644 index 00000000..6717d113 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/mark/__init__.py @@ -0,0 +1,266 @@ +"""Generic mechanism for marking and selecting python functions.""" +from typing import AbstractSet +from typing import Collection +from typing import List +from typing import Optional +from typing import TYPE_CHECKING +from typing import Union + +import attr + +from .expression import Expression +from .expression import ParseError +from .structures import EMPTY_PARAMETERSET_OPTION +from .structures import get_empty_parameterset_mark +from .structures import Mark +from .structures import MARK_GEN +from .structures import MarkDecorator +from .structures import MarkGenerator +from .structures import ParameterSet +from _pytest.config import Config +from _pytest.config import ExitCode +from _pytest.config import hookimpl +from _pytest.config import UsageError +from _pytest.config.argparsing import Parser +from _pytest.stash import StashKey + +if TYPE_CHECKING: + from _pytest.nodes import Item + + +__all__ = [ + "MARK_GEN", + "Mark", + "MarkDecorator", + "MarkGenerator", + "ParameterSet", + "get_empty_parameterset_mark", +] + + +old_mark_config_key = StashKey[Optional[Config]]() + + +def param( + *values: object, + marks: Union[MarkDecorator, Collection[Union[MarkDecorator, Mark]]] = (), + id: Optional[str] = None, +) -> ParameterSet: + """Specify a parameter in `pytest.mark.parametrize`_ calls or + :ref:`parametrized fixtures `. + + .. code-block:: python + + @pytest.mark.parametrize( + "test_input,expected", + [ + ("3+5", 8), + pytest.param("6*9", 42, marks=pytest.mark.xfail), + ], + ) + def test_eval(test_input, expected): + assert eval(test_input) == expected + + :param values: Variable args of the values of the parameter set, in order. + :param marks: A single mark or a list of marks to be applied to this parameter set. + :param id: The id to attribute to this parameter set. + """ + return ParameterSet.param(*values, marks=marks, id=id) + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("general") + group._addoption( + "-k", + action="store", + dest="keyword", + default="", + metavar="EXPRESSION", + help="Only run tests which match the given substring expression. " + "An expression is a Python evaluatable expression " + "where all names are substring-matched against test names " + "and their parent classes. Example: -k 'test_method or test_" + "other' matches all test functions and classes whose name " + "contains 'test_method' or 'test_other', while -k 'not test_method' " + "matches those that don't contain 'test_method' in their names. 
" + "-k 'not test_method and not test_other' will eliminate the matches. " + "Additionally keywords are matched to classes and functions " + "containing extra names in their 'extra_keyword_matches' set, " + "as well as functions which have names assigned directly to them. " + "The matching is case-insensitive.", + ) + + group._addoption( + "-m", + action="store", + dest="markexpr", + default="", + metavar="MARKEXPR", + help="Only run tests matching given mark expression. " + "For example: -m 'mark1 and not mark2'.", + ) + + group.addoption( + "--markers", + action="store_true", + help="show markers (builtin, plugin and per-project ones).", + ) + + parser.addini("markers", "Markers for test functions", "linelist") + parser.addini(EMPTY_PARAMETERSET_OPTION, "Default marker for empty parametersets") + + +@hookimpl(tryfirst=True) +def pytest_cmdline_main(config: Config) -> Optional[Union[int, ExitCode]]: + import _pytest.config + + if config.option.markers: + config._do_configure() + tw = _pytest.config.create_terminal_writer(config) + for line in config.getini("markers"): + parts = line.split(":", 1) + name = parts[0] + rest = parts[1] if len(parts) == 2 else "" + tw.write("@pytest.mark.%s:" % name, bold=True) + tw.line(rest) + tw.line() + config._ensure_unconfigure() + return 0 + + return None + + +@attr.s(slots=True, auto_attribs=True) +class KeywordMatcher: + """A matcher for keywords. + + Given a list of names, matches any substring of one of these names. The + string inclusion check is case-insensitive. + + Will match on the name of colitem, including the names of its parents. + Only matches names of items which are either a :class:`Class` or a + :class:`Function`. + + Additionally, matches on names in the 'extra_keyword_matches' set of + any item, as well as names directly assigned to test functions. + """ + + _names: AbstractSet[str] + + @classmethod + def from_item(cls, item: "Item") -> "KeywordMatcher": + mapped_names = set() + + # Add the names of the current item and any parent items. + import pytest + + for node in item.listchain(): + if not isinstance(node, pytest.Session): + mapped_names.add(node.name) + + # Add the names added as extra keywords to current or parent items. + mapped_names.update(item.listextrakeywords()) + + # Add the names attached to the current function through direct assignment. + function_obj = getattr(item, "function", None) + if function_obj: + mapped_names.update(function_obj.__dict__) + + # Add the markers to the keywords as we no longer handle them correctly. + mapped_names.update(mark.name for mark in item.iter_markers()) + + return cls(mapped_names) + + def __call__(self, subname: str) -> bool: + subname = subname.lower() + names = (name.lower() for name in self._names) + + for name in names: + if subname in name: + return True + return False + + +def deselect_by_keyword(items: "List[Item]", config: Config) -> None: + keywordexpr = config.option.keyword.lstrip() + if not keywordexpr: + return + + expr = _parse_expression(keywordexpr, "Wrong expression passed to '-k'") + + remaining = [] + deselected = [] + for colitem in items: + if not expr.evaluate(KeywordMatcher.from_item(colitem)): + deselected.append(colitem) + else: + remaining.append(colitem) + + if deselected: + config.hook.pytest_deselected(items=deselected) + items[:] = remaining + + +@attr.s(slots=True, auto_attribs=True) +class MarkMatcher: + """A matcher for markers which are present. + + Tries to match on any marker names, attached to the given colitem. 
+    """
+
+    own_mark_names: AbstractSet[str]
+
+    @classmethod
+    def from_item(cls, item: "Item") -> "MarkMatcher":
+        mark_names = {mark.name for mark in item.iter_markers()}
+        return cls(mark_names)
+
+    def __call__(self, name: str) -> bool:
+        return name in self.own_mark_names
+
+
+def deselect_by_mark(items: "List[Item]", config: Config) -> None:
+    matchexpr = config.option.markexpr
+    if not matchexpr:
+        return
+
+    expr = _parse_expression(matchexpr, "Wrong expression passed to '-m'")
+    remaining: List[Item] = []
+    deselected: List[Item] = []
+    for item in items:
+        if expr.evaluate(MarkMatcher.from_item(item)):
+            remaining.append(item)
+        else:
+            deselected.append(item)
+    if deselected:
+        config.hook.pytest_deselected(items=deselected)
+        items[:] = remaining
+
+
+def _parse_expression(expr: str, exc_message: str) -> Expression:
+    try:
+        return Expression.compile(expr)
+    except ParseError as e:
+        raise UsageError(f"{exc_message}: {expr}: {e}") from None
+
+
+def pytest_collection_modifyitems(items: "List[Item]", config: Config) -> None:
+    deselect_by_keyword(items, config)
+    deselect_by_mark(items, config)
+
+
+def pytest_configure(config: Config) -> None:
+    config.stash[old_mark_config_key] = MARK_GEN._config
+    MARK_GEN._config = config
+
+    empty_parameterset = config.getini(EMPTY_PARAMETERSET_OPTION)
+
+    if empty_parameterset not in ("skip", "xfail", "fail_at_collect", None, ""):
+        raise UsageError(
+            "{!s} must be one of skip, xfail or fail_at_collect"
+            " but it is {!r}".format(EMPTY_PARAMETERSET_OPTION, empty_parameterset)
+        )
+
+
+def pytest_unconfigure(config: Config) -> None:
+    MARK_GEN._config = config.stash.get(old_mark_config_key, None)
diff --git a/env/lib/python3.8/site-packages/_pytest/mark/expression.py b/env/lib/python3.8/site-packages/_pytest/mark/expression.py
new file mode 100644
index 00000000..0a2e7c65
--- /dev/null
+++ b/env/lib/python3.8/site-packages/_pytest/mark/expression.py
@@ -0,0 +1,222 @@
+r"""Evaluate match expressions, as used by `-k` and `-m`.
+
+The grammar is:
+
+expression: expr? EOF
+expr: and_expr ('or' and_expr)*
+and_expr: not_expr ('and' not_expr)*
+not_expr: 'not' not_expr | '(' expr ')' | ident
+ident: (\w|:|\+|-|\.|\[|\]|\\|/)+
+
+The semantics are:
+
+- Empty expression evaluates to False.
+- ident evaluates to True or False according to a provided matcher function.
+- or/and/not evaluate according to the usual boolean semantics.
+"""
+import ast
+import enum
+import re
+import types
+from typing import Callable
+from typing import Iterator
+from typing import Mapping
+from typing import NoReturn
+from typing import Optional
+from typing import Sequence
+
+import attr
+
+
+__all__ = [
+    "Expression",
+    "ParseError",
+]
+
+
+class TokenType(enum.Enum):
+    LPAREN = "left parenthesis"
+    RPAREN = "right parenthesis"
+    OR = "or"
+    AND = "and"
+    NOT = "not"
+    IDENT = "identifier"
+    EOF = "end of input"
+
+
+@attr.s(frozen=True, slots=True, auto_attribs=True)
+class Token:
+    type: TokenType
+    value: str
+    pos: int
+
+
+class ParseError(Exception):
+    """The expression contains invalid syntax.
+
+    :param column: The column in the line where the error occurred (1-based).
+    :param message: A description of the error.
+ """ + + def __init__(self, column: int, message: str) -> None: + self.column = column + self.message = message + + def __str__(self) -> str: + return f"at column {self.column}: {self.message}" + + +class Scanner: + __slots__ = ("tokens", "current") + + def __init__(self, input: str) -> None: + self.tokens = self.lex(input) + self.current = next(self.tokens) + + def lex(self, input: str) -> Iterator[Token]: + pos = 0 + while pos < len(input): + if input[pos] in (" ", "\t"): + pos += 1 + elif input[pos] == "(": + yield Token(TokenType.LPAREN, "(", pos) + pos += 1 + elif input[pos] == ")": + yield Token(TokenType.RPAREN, ")", pos) + pos += 1 + else: + match = re.match(r"(:?\w|:|\+|-|\.|\[|\]|\\|/)+", input[pos:]) + if match: + value = match.group(0) + if value == "or": + yield Token(TokenType.OR, value, pos) + elif value == "and": + yield Token(TokenType.AND, value, pos) + elif value == "not": + yield Token(TokenType.NOT, value, pos) + else: + yield Token(TokenType.IDENT, value, pos) + pos += len(value) + else: + raise ParseError( + pos + 1, + f'unexpected character "{input[pos]}"', + ) + yield Token(TokenType.EOF, "", pos) + + def accept(self, type: TokenType, *, reject: bool = False) -> Optional[Token]: + if self.current.type is type: + token = self.current + if token.type is not TokenType.EOF: + self.current = next(self.tokens) + return token + if reject: + self.reject((type,)) + return None + + def reject(self, expected: Sequence[TokenType]) -> NoReturn: + raise ParseError( + self.current.pos + 1, + "expected {}; got {}".format( + " OR ".join(type.value for type in expected), + self.current.type.value, + ), + ) + + +# True, False and None are legal match expression identifiers, +# but illegal as Python identifiers. To fix this, this prefix +# is added to identifiers in the conversion to Python AST. +IDENT_PREFIX = "$" + + +def expression(s: Scanner) -> ast.Expression: + if s.accept(TokenType.EOF): + ret: ast.expr = ast.NameConstant(False) + else: + ret = expr(s) + s.accept(TokenType.EOF, reject=True) + return ast.fix_missing_locations(ast.Expression(ret)) + + +def expr(s: Scanner) -> ast.expr: + ret = and_expr(s) + while s.accept(TokenType.OR): + rhs = and_expr(s) + ret = ast.BoolOp(ast.Or(), [ret, rhs]) + return ret + + +def and_expr(s: Scanner) -> ast.expr: + ret = not_expr(s) + while s.accept(TokenType.AND): + rhs = not_expr(s) + ret = ast.BoolOp(ast.And(), [ret, rhs]) + return ret + + +def not_expr(s: Scanner) -> ast.expr: + if s.accept(TokenType.NOT): + return ast.UnaryOp(ast.Not(), not_expr(s)) + if s.accept(TokenType.LPAREN): + ret = expr(s) + s.accept(TokenType.RPAREN, reject=True) + return ret + ident = s.accept(TokenType.IDENT) + if ident: + return ast.Name(IDENT_PREFIX + ident.value, ast.Load()) + s.reject((TokenType.NOT, TokenType.LPAREN, TokenType.IDENT)) + + +class MatcherAdapter(Mapping[str, bool]): + """Adapts a matcher function to a locals mapping as required by eval().""" + + def __init__(self, matcher: Callable[[str], bool]) -> None: + self.matcher = matcher + + def __getitem__(self, key: str) -> bool: + return self.matcher(key[len(IDENT_PREFIX) :]) + + def __iter__(self) -> Iterator[str]: + raise NotImplementedError() + + def __len__(self) -> int: + raise NotImplementedError() + + +class Expression: + """A compiled match expression as used by -k and -m. + + The expression can be evaluated against different matchers. 
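+
+    A minimal usage sketch::
+
+        expr = Expression.compile("foo and not bar")
+        assert expr.evaluate(lambda name: name == "foo")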
+ """ + + __slots__ = ("code",) + + def __init__(self, code: types.CodeType) -> None: + self.code = code + + @classmethod + def compile(self, input: str) -> "Expression": + """Compile a match expression. + + :param input: The input expression - one line. + """ + astexpr = expression(Scanner(input)) + code: types.CodeType = compile( + astexpr, + filename="", + mode="eval", + ) + return Expression(code) + + def evaluate(self, matcher: Callable[[str], bool]) -> bool: + """Evaluate the match expression. + + :param matcher: + Given an identifier, should return whether it matches or not. + Should be prepared to handle arbitrary strings as input. + + :returns: Whether the expression matches or not. + """ + ret: bool = eval(self.code, {"__builtins__": {}}, MatcherAdapter(matcher)) + return ret diff --git a/env/lib/python3.8/site-packages/_pytest/mark/structures.py b/env/lib/python3.8/site-packages/_pytest/mark/structures.py new file mode 100644 index 00000000..5186c9ea --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/mark/structures.py @@ -0,0 +1,613 @@ +import collections.abc +import inspect +import warnings +from typing import Any +from typing import Callable +from typing import Collection +from typing import Iterable +from typing import Iterator +from typing import List +from typing import Mapping +from typing import MutableMapping +from typing import NamedTuple +from typing import Optional +from typing import overload +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +import attr + +from .._code import getfslineno +from ..compat import ascii_escaped +from ..compat import final +from ..compat import NOTSET +from ..compat import NotSetType +from _pytest.config import Config +from _pytest.deprecated import check_ispytest +from _pytest.outcomes import fail +from _pytest.warning_types import PytestUnknownMarkWarning + +if TYPE_CHECKING: + from ..nodes import Node + + +EMPTY_PARAMETERSET_OPTION = "empty_parameter_set_mark" + + +def istestfunc(func) -> bool: + return callable(func) and getattr(func, "__name__", "") != "" + + +def get_empty_parameterset_mark( + config: Config, argnames: Sequence[str], func +) -> "MarkDecorator": + from ..nodes import Collector + + fs, lineno = getfslineno(func) + reason = "got empty parameter set %r, function %s at %s:%d" % ( + argnames, + func.__name__, + fs, + lineno, + ) + + requested_mark = config.getini(EMPTY_PARAMETERSET_OPTION) + if requested_mark in ("", None, "skip"): + mark = MARK_GEN.skip(reason=reason) + elif requested_mark == "xfail": + mark = MARK_GEN.xfail(reason=reason, run=False) + elif requested_mark == "fail_at_collect": + f_name = func.__name__ + _, lineno = getfslineno(func) + raise Collector.CollectError( + "Empty parameter set in '%s' at line %d" % (f_name, lineno + 1) + ) + else: + raise LookupError(requested_mark) + return mark + + +class ParameterSet(NamedTuple): + values: Sequence[Union[object, NotSetType]] + marks: Collection[Union["MarkDecorator", "Mark"]] + id: Optional[str] + + @classmethod + def param( + cls, + *values: object, + marks: Union["MarkDecorator", Collection[Union["MarkDecorator", "Mark"]]] = (), + id: Optional[str] = None, + ) -> "ParameterSet": + if isinstance(marks, MarkDecorator): + marks = (marks,) + else: + assert isinstance(marks, collections.abc.Collection) + + if id is not None: + if not isinstance(id, str): + raise TypeError(f"Expected id to be a string, got 
{type(id)}: {id!r}") + id = ascii_escaped(id) + return cls(values, marks, id) + + @classmethod + def extract_from( + cls, + parameterset: Union["ParameterSet", Sequence[object], object], + force_tuple: bool = False, + ) -> "ParameterSet": + """Extract from an object or objects. + + :param parameterset: + A legacy style parameterset that may or may not be a tuple, + and may or may not be wrapped into a mess of mark objects. + + :param force_tuple: + Enforce tuple wrapping so single argument tuple values + don't get decomposed and break tests. + """ + + if isinstance(parameterset, cls): + return parameterset + if force_tuple: + return cls.param(parameterset) + else: + # TODO: Refactor to fix this type-ignore. Currently the following + # passes type-checking but crashes: + # + # @pytest.mark.parametrize(('x', 'y'), [1, 2]) + # def test_foo(x, y): pass + return cls(parameterset, marks=[], id=None) # type: ignore[arg-type] + + @staticmethod + def _parse_parametrize_args( + argnames: Union[str, Sequence[str]], + argvalues: Iterable[Union["ParameterSet", Sequence[object], object]], + *args, + **kwargs, + ) -> Tuple[Sequence[str], bool]: + if isinstance(argnames, str): + argnames = [x.strip() for x in argnames.split(",") if x.strip()] + force_tuple = len(argnames) == 1 + else: + force_tuple = False + return argnames, force_tuple + + @staticmethod + def _parse_parametrize_parameters( + argvalues: Iterable[Union["ParameterSet", Sequence[object], object]], + force_tuple: bool, + ) -> List["ParameterSet"]: + return [ + ParameterSet.extract_from(x, force_tuple=force_tuple) for x in argvalues + ] + + @classmethod + def _for_parametrize( + cls, + argnames: Union[str, Sequence[str]], + argvalues: Iterable[Union["ParameterSet", Sequence[object], object]], + func, + config: Config, + nodeid: str, + ) -> Tuple[Sequence[str], List["ParameterSet"]]: + argnames, force_tuple = cls._parse_parametrize_args(argnames, argvalues) + parameters = cls._parse_parametrize_parameters(argvalues, force_tuple) + del argvalues + + if parameters: + # Check all parameter sets have the correct number of values. + for param in parameters: + if len(param.values) != len(argnames): + msg = ( + '{nodeid}: in "parametrize" the number of names ({names_len}):\n' + " {names}\n" + "must be equal to the number of values ({values_len}):\n" + " {values}" + ) + fail( + msg.format( + nodeid=nodeid, + values=param.values, + names=argnames, + names_len=len(argnames), + values_len=len(param.values), + ), + pytrace=False, + ) + else: + # Empty parameter set (likely computed at runtime): create a single + # parameter set with NOTSET values, with the "empty parameter set" mark applied to it. + mark = get_empty_parameterset_mark(config, argnames, func) + parameters.append( + ParameterSet(values=(NOTSET,) * len(argnames), marks=[mark], id=None) + ) + return argnames, parameters + + +@final +@attr.s(frozen=True, init=False, auto_attribs=True) +class Mark: + #: Name of the mark. + name: str + #: Positional arguments of the mark decorator. + args: Tuple[Any, ...] + #: Keyword arguments of the mark decorator. + kwargs: Mapping[str, Any] + + #: Source Mark for ids with parametrize Marks. + _param_ids_from: Optional["Mark"] = attr.ib(default=None, repr=False) + #: Resolved/generated ids with parametrize Marks. 
+ _param_ids_generated: Optional[Sequence[str]] = attr.ib(default=None, repr=False) + + def __init__( + self, + name: str, + args: Tuple[Any, ...], + kwargs: Mapping[str, Any], + param_ids_from: Optional["Mark"] = None, + param_ids_generated: Optional[Sequence[str]] = None, + *, + _ispytest: bool = False, + ) -> None: + """:meta private:""" + check_ispytest(_ispytest) + # Weirdness to bypass frozen=True. + object.__setattr__(self, "name", name) + object.__setattr__(self, "args", args) + object.__setattr__(self, "kwargs", kwargs) + object.__setattr__(self, "_param_ids_from", param_ids_from) + object.__setattr__(self, "_param_ids_generated", param_ids_generated) + + def _has_param_ids(self) -> bool: + return "ids" in self.kwargs or len(self.args) >= 4 + + def combined_with(self, other: "Mark") -> "Mark": + """Return a new Mark which is a combination of this + Mark and another Mark. + + Combines by appending args and merging kwargs. + + :param Mark other: The mark to combine with. + :rtype: Mark + """ + assert self.name == other.name + + # Remember source of ids with parametrize Marks. + param_ids_from: Optional[Mark] = None + if self.name == "parametrize": + if other._has_param_ids(): + param_ids_from = other + elif self._has_param_ids(): + param_ids_from = self + + return Mark( + self.name, + self.args + other.args, + dict(self.kwargs, **other.kwargs), + param_ids_from=param_ids_from, + _ispytest=True, + ) + + +# A generic parameter designating an object to which a Mark may +# be applied -- a test function (callable) or class. +# Note: a lambda is not allowed, but this can't be represented. +Markable = TypeVar("Markable", bound=Union[Callable[..., object], type]) + + +@attr.s(init=False, auto_attribs=True) +class MarkDecorator: + """A decorator for applying a mark on test functions and classes. + + ``MarkDecorators`` are created with ``pytest.mark``:: + + mark1 = pytest.mark.NAME # Simple MarkDecorator + mark2 = pytest.mark.NAME(name1=value) # Parametrized MarkDecorator + + and can then be applied as decorators to test functions:: + + @mark2 + def test_function(): + pass + + When a ``MarkDecorator`` is called, it does the following: + + 1. If called with a single class as its only positional argument and no + additional keyword arguments, it attaches the mark to the class so it + gets applied automatically to all test cases found in that class. + + 2. If called with a single function as its only positional argument and + no additional keyword arguments, it attaches the mark to the function, + containing all the arguments already stored internally in the + ``MarkDecorator``. + + 3. When called in any other case, it returns a new ``MarkDecorator`` + instance with the original ``MarkDecorator``'s content updated with + the arguments passed to this call. + + Note: The rules above prevent a ``MarkDecorator`` from storing only a + single function or class reference as its positional argument with no + additional keyword or positional arguments. You can work around this by + using `with_args()`. 
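+
+    A short illustration (``my_mark`` and ``some_function`` are
+    placeholder names)::
+
+        # Store a lone callable as a mark argument rather than
+        # decorating it.
+        mark = pytest.mark.my_mark.with_args(some_function)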
+ """ + + mark: Mark + + def __init__(self, mark: Mark, *, _ispytest: bool = False) -> None: + """:meta private:""" + check_ispytest(_ispytest) + self.mark = mark + + @property + def name(self) -> str: + """Alias for mark.name.""" + return self.mark.name + + @property + def args(self) -> Tuple[Any, ...]: + """Alias for mark.args.""" + return self.mark.args + + @property + def kwargs(self) -> Mapping[str, Any]: + """Alias for mark.kwargs.""" + return self.mark.kwargs + + @property + def markname(self) -> str: + """:meta private:""" + return self.name # for backward-compat (2.4.1 had this attr) + + def with_args(self, *args: object, **kwargs: object) -> "MarkDecorator": + """Return a MarkDecorator with extra arguments added. + + Unlike calling the MarkDecorator, with_args() can be used even + if the sole argument is a callable/class. + """ + mark = Mark(self.name, args, kwargs, _ispytest=True) + return MarkDecorator(self.mark.combined_with(mark), _ispytest=True) + + # Type ignored because the overloads overlap with an incompatible + # return type. Not much we can do about that. Thankfully mypy picks + # the first match so it works out even if we break the rules. + @overload + def __call__(self, arg: Markable) -> Markable: # type: ignore[misc] + pass + + @overload + def __call__(self, *args: object, **kwargs: object) -> "MarkDecorator": + pass + + def __call__(self, *args: object, **kwargs: object): + """Call the MarkDecorator.""" + if args and not kwargs: + func = args[0] + is_class = inspect.isclass(func) + if len(args) == 1 and (istestfunc(func) or is_class): + store_mark(func, self.mark) + return func + return self.with_args(*args, **kwargs) + + +def get_unpacked_marks( + obj: Union[object, type], + *, + consider_mro: bool = True, +) -> List[Mark]: + """Obtain the unpacked marks that are stored on an object. + + If obj is a class and consider_mro is true, return marks applied to + this class and all of its super-classes in MRO order. If consider_mro + is false, only return marks applied directly to this class. + """ + if isinstance(obj, type): + if not consider_mro: + mark_lists = [obj.__dict__.get("pytestmark", [])] + else: + mark_lists = [x.__dict__.get("pytestmark", []) for x in obj.__mro__] + mark_list = [] + for item in mark_lists: + if isinstance(item, list): + mark_list.extend(item) + else: + mark_list.append(item) + else: + mark_attribute = getattr(obj, "pytestmark", []) + if isinstance(mark_attribute, list): + mark_list = mark_attribute + else: + mark_list = [mark_attribute] + return list(normalize_mark_list(mark_list)) + + +def normalize_mark_list( + mark_list: Iterable[Union[Mark, MarkDecorator]] +) -> Iterable[Mark]: + """ + Normalize an iterable of Mark or MarkDecorator objects into a list of marks + by retrieving the `mark` attribute on MarkDecorator instances. + + :param mark_list: marks to normalize + :returns: A new list of the extracted Mark objects + """ + for mark in mark_list: + mark_obj = getattr(mark, "mark", mark) + if not isinstance(mark_obj, Mark): + raise TypeError(f"got {repr(mark_obj)} instead of Mark") + yield mark_obj + + +def store_mark(obj, mark: Mark) -> None: + """Store a Mark on an object. + + This is used to implement the Mark declarations/decorators correctly. + """ + assert isinstance(mark, Mark), mark + # Always reassign name to avoid updating pytestmark in a reference that + # was only borrowed. + obj.pytestmark = [*get_unpacked_marks(obj, consider_mro=False), mark] + + +# Typing for builtin pytest marks. 
This is cheating; it gives builtin marks +# special privilege, and breaks modularity. But practicality beats purity... +if TYPE_CHECKING: + from _pytest.scope import _ScopeName + + class _SkipMarkDecorator(MarkDecorator): + @overload # type: ignore[override,misc,no-overload-impl] + def __call__(self, arg: Markable) -> Markable: + ... + + @overload + def __call__(self, reason: str = ...) -> "MarkDecorator": + ... + + class _SkipifMarkDecorator(MarkDecorator): + def __call__( # type: ignore[override] + self, + condition: Union[str, bool] = ..., + *conditions: Union[str, bool], + reason: str = ..., + ) -> MarkDecorator: + ... + + class _XfailMarkDecorator(MarkDecorator): + @overload # type: ignore[override,misc,no-overload-impl] + def __call__(self, arg: Markable) -> Markable: + ... + + @overload + def __call__( + self, + condition: Union[str, bool] = ..., + *conditions: Union[str, bool], + reason: str = ..., + run: bool = ..., + raises: Union[Type[BaseException], Tuple[Type[BaseException], ...]] = ..., + strict: bool = ..., + ) -> MarkDecorator: + ... + + class _ParametrizeMarkDecorator(MarkDecorator): + def __call__( # type: ignore[override] + self, + argnames: Union[str, Sequence[str]], + argvalues: Iterable[Union[ParameterSet, Sequence[object], object]], + *, + indirect: Union[bool, Sequence[str]] = ..., + ids: Optional[ + Union[ + Iterable[Union[None, str, float, int, bool]], + Callable[[Any], Optional[object]], + ] + ] = ..., + scope: Optional[_ScopeName] = ..., + ) -> MarkDecorator: + ... + + class _UsefixturesMarkDecorator(MarkDecorator): + def __call__(self, *fixtures: str) -> MarkDecorator: # type: ignore[override] + ... + + class _FilterwarningsMarkDecorator(MarkDecorator): + def __call__(self, *filters: str) -> MarkDecorator: # type: ignore[override] + ... + + +@final +class MarkGenerator: + """Factory for :class:`MarkDecorator` objects - exposed as + a ``pytest.mark`` singleton instance. + + Example:: + + import pytest + + @pytest.mark.slowtest + def test_function(): + pass + + applies a 'slowtest' :class:`Mark` on ``test_function``. + """ + + # See TYPE_CHECKING above. + if TYPE_CHECKING: + skip: _SkipMarkDecorator + skipif: _SkipifMarkDecorator + xfail: _XfailMarkDecorator + parametrize: _ParametrizeMarkDecorator + usefixtures: _UsefixturesMarkDecorator + filterwarnings: _FilterwarningsMarkDecorator + + def __init__(self, *, _ispytest: bool = False) -> None: + check_ispytest(_ispytest) + self._config: Optional[Config] = None + self._markers: Set[str] = set() + + def __getattr__(self, name: str) -> MarkDecorator: + """Generate a new :class:`MarkDecorator` with the given name.""" + if name[0] == "_": + raise AttributeError("Marker name must NOT start with underscore") + + if self._config is not None: + # We store a set of markers as a performance optimisation - if a mark + # name is in the set we definitely know it, but a mark may be known and + # not in the set. We therefore start by updating the set! + if name not in self._markers: + for line in self._config.getini("markers"): + # example lines: "skipif(condition): skip the given test if..." + # or "hypothesis: tests which use Hypothesis", so to get the + # marker name we split on both `:` and `(`. + marker = line.split(":")[0].split("(")[0].strip() + self._markers.add(marker) + + # If the name is not in the set of known marks after updating, + # then it really is time to issue a warning or an error. 
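+            # (Illustration: registering the name under the `markers` ini
+            # option, e.g. `markers = slow: marks slow tests`, keeps the
+            # warning/error below from firing.)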
+ if name not in self._markers: + if self._config.option.strict_markers or self._config.option.strict: + fail( + f"{name!r} not found in `markers` configuration option", + pytrace=False, + ) + + # Raise a specific error for common misspellings of "parametrize". + if name in ["parameterize", "parametrise", "parameterise"]: + __tracebackhide__ = True + fail(f"Unknown '{name}' mark, did you mean 'parametrize'?") + + warnings.warn( + "Unknown pytest.mark.%s - is this a typo? You can register " + "custom marks to avoid this warning - for details, see " + "https://docs.pytest.org/en/stable/how-to/mark.html" % name, + PytestUnknownMarkWarning, + 2, + ) + + return MarkDecorator(Mark(name, (), {}, _ispytest=True), _ispytest=True) + + +MARK_GEN = MarkGenerator(_ispytest=True) + + +@final +class NodeKeywords(MutableMapping[str, Any]): + __slots__ = ("node", "parent", "_markers") + + def __init__(self, node: "Node") -> None: + self.node = node + self.parent = node.parent + self._markers = {node.name: True} + + def __getitem__(self, key: str) -> Any: + try: + return self._markers[key] + except KeyError: + if self.parent is None: + raise + return self.parent.keywords[key] + + def __setitem__(self, key: str, value: Any) -> None: + self._markers[key] = value + + # Note: we could've avoided explicitly implementing some of the methods + # below and use the collections.abc fallback, but that would be slow. + + def __contains__(self, key: object) -> bool: + return ( + key in self._markers + or self.parent is not None + and key in self.parent.keywords + ) + + def update( # type: ignore[override] + self, + other: Union[Mapping[str, Any], Iterable[Tuple[str, Any]]] = (), + **kwds: Any, + ) -> None: + self._markers.update(other) + self._markers.update(kwds) + + def __delitem__(self, key: str) -> None: + raise ValueError("cannot delete key in keywords dict") + + def __iter__(self) -> Iterator[str]: + # Doesn't need to be fast. + yield from self._markers + if self.parent is not None: + for keyword in self.parent.keywords: + # self._marks and self.parent.keywords can have duplicates. + if keyword not in self._markers: + yield keyword + + def __len__(self) -> int: + # Doesn't need to be fast. + return sum(1 for keyword in self) + + def __repr__(self) -> str: + return f"" diff --git a/env/lib/python3.8/site-packages/_pytest/monkeypatch.py b/env/lib/python3.8/site-packages/_pytest/monkeypatch.py new file mode 100644 index 00000000..c6e29ac7 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/monkeypatch.py @@ -0,0 +1,416 @@ +"""Monkeypatching and mocking functionality.""" +import os +import re +import sys +import warnings +from contextlib import contextmanager +from typing import Any +from typing import Generator +from typing import List +from typing import MutableMapping +from typing import Optional +from typing import overload +from typing import Tuple +from typing import TypeVar +from typing import Union + +from _pytest.compat import final +from _pytest.fixtures import fixture +from _pytest.warning_types import PytestWarning + +RE_IMPORT_ERROR_NAME = re.compile(r"^No module named (.*)$") + + +K = TypeVar("K") +V = TypeVar("V") + + +@fixture +def monkeypatch() -> Generator["MonkeyPatch", None, None]: + """A convenient fixture for monkey-patching. 
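+
+    A minimal sketch (names are illustrative)::
+
+        def test_app_env(monkeypatch):
+            monkeypatch.setenv("APP_ENV", "test")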
+ + The fixture provides these methods to modify objects, dictionaries, or + :data:`os.environ`: + + * :meth:`monkeypatch.setattr(obj, name, value, raising=True) ` + * :meth:`monkeypatch.delattr(obj, name, raising=True) ` + * :meth:`monkeypatch.setitem(mapping, name, value) ` + * :meth:`monkeypatch.delitem(obj, name, raising=True) ` + * :meth:`monkeypatch.setenv(name, value, prepend=None) ` + * :meth:`monkeypatch.delenv(name, raising=True) ` + * :meth:`monkeypatch.syspath_prepend(path) ` + * :meth:`monkeypatch.chdir(path) ` + * :meth:`monkeypatch.context() ` + + All modifications will be undone after the requesting test function or + fixture has finished. The ``raising`` parameter determines if a :class:`KeyError` + or :class:`AttributeError` will be raised if the set/deletion operation does not have the + specified target. + + To undo modifications done by the fixture in a contained scope, + use :meth:`context() `. + """ + mpatch = MonkeyPatch() + yield mpatch + mpatch.undo() + + +def resolve(name: str) -> object: + # Simplified from zope.dottedname. + parts = name.split(".") + + used = parts.pop(0) + found: object = __import__(used) + for part in parts: + used += "." + part + try: + found = getattr(found, part) + except AttributeError: + pass + else: + continue + # We use explicit un-nesting of the handling block in order + # to avoid nested exceptions. + try: + __import__(used) + except ImportError as ex: + expected = str(ex).split()[-1] + if expected == used: + raise + else: + raise ImportError(f"import error in {used}: {ex}") from ex + found = annotated_getattr(found, part, used) + return found + + +def annotated_getattr(obj: object, name: str, ann: str) -> object: + try: + obj = getattr(obj, name) + except AttributeError as e: + raise AttributeError( + "{!r} object at {} has no attribute {!r}".format( + type(obj).__name__, ann, name + ) + ) from e + return obj + + +def derive_importpath(import_path: str, raising: bool) -> Tuple[str, object]: + if not isinstance(import_path, str) or "." not in import_path: + raise TypeError(f"must be absolute import path string, not {import_path!r}") + module, attr = import_path.rsplit(".", 1) + target = resolve(module) + if raising: + annotated_getattr(target, attr, ann=module) + return attr, target + + +class Notset: + def __repr__(self) -> str: + return "" + + +notset = Notset() + + +@final +class MonkeyPatch: + """Helper to conveniently monkeypatch attributes/items/environment + variables/syspath. + + Returned by the :fixture:`monkeypatch` fixture. + + .. versionchanged:: 6.2 + Can now also be used directly as `pytest.MonkeyPatch()`, for when + the fixture is not available. In this case, use + :meth:`with MonkeyPatch.context() as mp: ` or remember to call + :meth:`undo` explicitly. + """ + + def __init__(self) -> None: + self._setattr: List[Tuple[object, str, object]] = [] + self._setitem: List[Tuple[MutableMapping[Any, Any], object, object]] = [] + self._cwd: Optional[str] = None + self._savesyspath: Optional[List[str]] = None + + @classmethod + @contextmanager + def context(cls) -> Generator["MonkeyPatch", None, None]: + """Context manager that returns a new :class:`MonkeyPatch` object + which undoes any patching done inside the ``with`` block upon exit. + + Example: + + .. 
code-block:: python + + import functools + + + def test_partial(monkeypatch): + with monkeypatch.context() as m: + m.setattr(functools, "partial", 3) + + Useful in situations where it is desired to undo some patches before the test ends, + such as mocking ``stdlib`` functions that might break pytest itself if mocked (for examples + of this see :issue:`3290`). + """ + m = cls() + try: + yield m + finally: + m.undo() + + @overload + def setattr( + self, + target: str, + name: object, + value: Notset = ..., + raising: bool = ..., + ) -> None: + ... + + @overload + def setattr( + self, + target: object, + name: str, + value: object, + raising: bool = ..., + ) -> None: + ... + + def setattr( + self, + target: Union[str, object], + name: Union[object, str], + value: object = notset, + raising: bool = True, + ) -> None: + """ + Set attribute value on target, memorizing the old value. + + For example: + + .. code-block:: python + + import os + + monkeypatch.setattr(os, "getcwd", lambda: "/") + + The code above replaces the :func:`os.getcwd` function by a ``lambda`` which + always returns ``"/"``. + + For convenience, you can specify a string as ``target`` which + will be interpreted as a dotted import path, with the last part + being the attribute name: + + .. code-block:: python + + monkeypatch.setattr("os.getcwd", lambda: "/") + + Raises :class:`AttributeError` if the attribute does not exist, unless + ``raising`` is set to False. + + **Where to patch** + + ``monkeypatch.setattr`` works by (temporarily) changing the object that a name points to with another one. + There can be many names pointing to any individual object, so for patching to work you must ensure + that you patch the name used by the system under test. + + See the section :ref:`Where to patch ` in the :mod:`unittest.mock` + docs for a complete explanation, which is meant for :func:`unittest.mock.patch` but + applies to ``monkeypatch.setattr`` as well. + """ + __tracebackhide__ = True + import inspect + + if isinstance(value, Notset): + if not isinstance(target, str): + raise TypeError( + "use setattr(target, name, value) or " + "setattr(target, value) with target being a dotted " + "import string" + ) + value = name + name, target = derive_importpath(target, raising) + else: + if not isinstance(name, str): + raise TypeError( + "use setattr(target, name, value) with name being a string or " + "setattr(target, value) with target being a dotted " + "import string" + ) + + oldval = getattr(target, name, notset) + if raising and oldval is notset: + raise AttributeError(f"{target!r} has no attribute {name!r}") + + # avoid class descriptors like staticmethod/classmethod + if inspect.isclass(target): + oldval = target.__dict__.get(name, notset) + self._setattr.append((target, name, oldval)) + setattr(target, name, value) + + def delattr( + self, + target: Union[object, str], + name: Union[str, Notset] = notset, + raising: bool = True, + ) -> None: + """Delete attribute ``name`` from ``target``. + + If no ``name`` is specified and ``target`` is a string + it will be interpreted as a dotted import path with the + last part being the attribute name. + + Raises AttributeError it the attribute does not exist, unless + ``raising`` is set to False. 
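+
+        A minimal sketch (the dotted target is illustrative)::
+
+            monkeypatch.delattr("os.getcwd")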
+ """ + __tracebackhide__ = True + import inspect + + if isinstance(name, Notset): + if not isinstance(target, str): + raise TypeError( + "use delattr(target, name) or " + "delattr(target) with target being a dotted " + "import string" + ) + name, target = derive_importpath(target, raising) + + if not hasattr(target, name): + if raising: + raise AttributeError(name) + else: + oldval = getattr(target, name, notset) + # Avoid class descriptors like staticmethod/classmethod. + if inspect.isclass(target): + oldval = target.__dict__.get(name, notset) + self._setattr.append((target, name, oldval)) + delattr(target, name) + + def setitem(self, dic: MutableMapping[K, V], name: K, value: V) -> None: + """Set dictionary entry ``name`` to value.""" + self._setitem.append((dic, name, dic.get(name, notset))) + dic[name] = value + + def delitem(self, dic: MutableMapping[K, V], name: K, raising: bool = True) -> None: + """Delete ``name`` from dict. + + Raises ``KeyError`` if it doesn't exist, unless ``raising`` is set to + False. + """ + if name not in dic: + if raising: + raise KeyError(name) + else: + self._setitem.append((dic, name, dic.get(name, notset))) + del dic[name] + + def setenv(self, name: str, value: str, prepend: Optional[str] = None) -> None: + """Set environment variable ``name`` to ``value``. + + If ``prepend`` is a character, read the current environment variable + value and prepend the ``value`` adjoined with the ``prepend`` + character. + """ + if not isinstance(value, str): + warnings.warn( # type: ignore[unreachable] + PytestWarning( + "Value of environment variable {name} type should be str, but got " + "{value!r} (type: {type}); converted to str implicitly".format( + name=name, value=value, type=type(value).__name__ + ) + ), + stacklevel=2, + ) + value = str(value) + if prepend and name in os.environ: + value = value + prepend + os.environ[name] + self.setitem(os.environ, name, value) + + def delenv(self, name: str, raising: bool = True) -> None: + """Delete ``name`` from the environment. + + Raises ``KeyError`` if it does not exist, unless ``raising`` is set to + False. + """ + environ: MutableMapping[str, str] = os.environ + self.delitem(environ, name, raising=raising) + + def syspath_prepend(self, path) -> None: + """Prepend ``path`` to ``sys.path`` list of import locations.""" + + if self._savesyspath is None: + self._savesyspath = sys.path[:] + sys.path.insert(0, str(path)) + + # https://github.com/pypa/setuptools/blob/d8b901bc/docs/pkg_resources.txt#L162-L171 + # this is only needed when pkg_resources was already loaded by the namespace package + if "pkg_resources" in sys.modules: + from pkg_resources import fixup_namespace_packages + + fixup_namespace_packages(str(path)) + + # A call to syspathinsert() usually means that the caller wants to + # import some dynamically created files, thus with python3 we + # invalidate its import caches. + # This is especially important when any namespace package is in use, + # since then the mtime based FileFinder cache (that gets created in + # this case already) gets not invalidated when writing the new files + # quickly afterwards. + from importlib import invalidate_caches + + invalidate_caches() + + def chdir(self, path: Union[str, "os.PathLike[str]"]) -> None: + """Change the current working directory to the specified path. + + :param path: + The path to change into. + """ + if self._cwd is None: + self._cwd = os.getcwd() + os.chdir(path) + + def undo(self) -> None: + """Undo previous changes. + + This call consumes the undo stack. 
Calling it a second time has no + effect unless you do more monkeypatching after the undo call. + + There is generally no need to call `undo()`, since it is + called automatically during tear-down. + + .. note:: + The same `monkeypatch` fixture is used across a + single test function invocation. If `monkeypatch` is used both by + the test function itself and one of the test fixtures, + calling `undo()` will undo all of the changes made in + both functions. + + Prefer to use :meth:`context() ` instead. + """ + for obj, name, value in reversed(self._setattr): + if value is not notset: + setattr(obj, name, value) + else: + delattr(obj, name) + self._setattr[:] = [] + for dictionary, key, value in reversed(self._setitem): + if value is notset: + try: + del dictionary[key] + except KeyError: + pass # Was already deleted, so we have the desired state. + else: + dictionary[key] = value + self._setitem[:] = [] + if self._savesyspath is not None: + sys.path[:] = self._savesyspath + self._savesyspath = None + + if self._cwd is not None: + os.chdir(self._cwd) + self._cwd = None diff --git a/env/lib/python3.8/site-packages/_pytest/nodes.py b/env/lib/python3.8/site-packages/_pytest/nodes.py new file mode 100644 index 00000000..cfb9b5a3 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/nodes.py @@ -0,0 +1,771 @@ +import os +import warnings +from inspect import signature +from pathlib import Path +from typing import Any +from typing import Callable +from typing import cast +from typing import Iterable +from typing import Iterator +from typing import List +from typing import MutableMapping +from typing import Optional +from typing import overload +from typing import Set +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +import _pytest._code +from _pytest._code import getfslineno +from _pytest._code.code import ExceptionInfo +from _pytest._code.code import TerminalRepr +from _pytest.compat import cached_property +from _pytest.compat import LEGACY_PATH +from _pytest.config import Config +from _pytest.config import ConftestImportFailure +from _pytest.deprecated import FSCOLLECTOR_GETHOOKPROXY_ISINITPATH +from _pytest.deprecated import NODE_CTOR_FSPATH_ARG +from _pytest.mark.structures import Mark +from _pytest.mark.structures import MarkDecorator +from _pytest.mark.structures import NodeKeywords +from _pytest.outcomes import fail +from _pytest.pathlib import absolutepath +from _pytest.pathlib import commonpath +from _pytest.stash import Stash +from _pytest.warning_types import PytestWarning + +if TYPE_CHECKING: + # Imported here due to circular import. + from _pytest.main import Session + from _pytest._code.code import _TracebackStyle + + +SEP = "/" + +tracebackcutdir = Path(_pytest.__file__).parent + + +def iterparentnodeids(nodeid: str) -> Iterator[str]: + """Return the parent node IDs of a given node ID, inclusive. + + For the node ID + + "testing/code/test_excinfo.py::TestFormattedExcinfo::test_repr_source" + + the result would be + + "" + "testing" + "testing/code" + "testing/code/test_excinfo.py" + "testing/code/test_excinfo.py::TestFormattedExcinfo" + "testing/code/test_excinfo.py::TestFormattedExcinfo::test_repr_source" + + Note that / components are only considered until the first ::. + """ + pos = 0 + first_colons: Optional[int] = nodeid.find("::") + if first_colons == -1: + first_colons = None + # The root Session node - always present. + yield "" + # Eagerly consume SEP parts until first colons. 
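+    # (Illustration: for "a/b/c.py::T::m" this pass yields "a" and "a/b";
+    # the "::" pass below yields "a/b/c.py" and "a/b/c.py::T".)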
+ while True: + at = nodeid.find(SEP, pos, first_colons) + if at == -1: + break + if at > 0: + yield nodeid[:at] + pos = at + len(SEP) + # Eagerly consume :: parts. + while True: + at = nodeid.find("::", pos) + if at == -1: + break + if at > 0: + yield nodeid[:at] + pos = at + len("::") + # The node ID itself. + if nodeid: + yield nodeid + + +def _check_path(path: Path, fspath: LEGACY_PATH) -> None: + if Path(fspath) != path: + raise ValueError( + f"Path({fspath!r}) != {path!r}\n" + "if both path and fspath are given they need to be equal" + ) + + +def _imply_path( + node_type: Type["Node"], + path: Optional[Path], + fspath: Optional[LEGACY_PATH], +) -> Path: + if fspath is not None: + warnings.warn( + NODE_CTOR_FSPATH_ARG.format( + node_type_name=node_type.__name__, + ), + stacklevel=6, + ) + if path is not None: + if fspath is not None: + _check_path(path, fspath) + return path + else: + assert fspath is not None + return Path(fspath) + + +_NodeType = TypeVar("_NodeType", bound="Node") + + +class NodeMeta(type): + def __call__(self, *k, **kw): + msg = ( + "Direct construction of {name} has been deprecated, please use {name}.from_parent.\n" + "See " + "https://docs.pytest.org/en/stable/deprecations.html#node-construction-changed-to-node-from-parent" + " for more details." + ).format(name=f"{self.__module__}.{self.__name__}") + fail(msg, pytrace=False) + + def _create(self, *k, **kw): + try: + return super().__call__(*k, **kw) + except TypeError: + sig = signature(getattr(self, "__init__")) + known_kw = {k: v for k, v in kw.items() if k in sig.parameters} + from .warning_types import PytestDeprecationWarning + + warnings.warn( + PytestDeprecationWarning( + f"{self} is not using a cooperative constructor and only takes {set(known_kw)}.\n" + "See https://docs.pytest.org/en/stable/deprecations.html" + "#constructors-of-custom-pytest-node-subclasses-should-take-kwargs " + "for more details." + ) + ) + + return super().__call__(*k, **known_kw) + + +class Node(metaclass=NodeMeta): + """Base class for Collector and Item, the components of the test + collection tree. + + Collector subclasses have children; Items are leaf nodes. + """ + + # Implemented in the legacypath plugin. + #: A ``LEGACY_PATH`` copy of the :attr:`path` attribute. Intended for usage + #: for methods not migrated to ``pathlib.Path`` yet, such as + #: :meth:`Item.reportinfo`. Will be deprecated in a future release, prefer + #: using :attr:`path` instead. + fspath: LEGACY_PATH + + # Use __slots__ to make attribute access faster. + # Note that __dict__ is still available. + __slots__ = ( + "name", + "parent", + "config", + "session", + "path", + "_nodeid", + "_store", + "__dict__", + ) + + def __init__( + self, + name: str, + parent: "Optional[Node]" = None, + config: Optional[Config] = None, + session: "Optional[Session]" = None, + fspath: Optional[LEGACY_PATH] = None, + path: Optional[Path] = None, + nodeid: Optional[str] = None, + ) -> None: + #: A unique name within the scope of the parent node. + self.name: str = name + + #: The parent collector node. + self.parent = parent + + if config: + #: The pytest config object. + self.config: Config = config + else: + if not parent: + raise TypeError("config or parent must be provided") + self.config = parent.config + + if session: + #: The pytest session this node is part of. 
+ self.session: Session = session + else: + if not parent: + raise TypeError("session or parent must be provided") + self.session = parent.session + + if path is None and fspath is None: + path = getattr(parent, "path", None) + #: Filesystem path where this node was collected from (can be None). + self.path: Path = _imply_path(type(self), path, fspath=fspath) + + # The explicit annotation is to avoid publicly exposing NodeKeywords. + #: Keywords/markers collected from all scopes. + self.keywords: MutableMapping[str, Any] = NodeKeywords(self) + + #: The marker objects belonging to this node. + self.own_markers: List[Mark] = [] + + #: Allow adding of extra keywords to use for matching. + self.extra_keyword_matches: Set[str] = set() + + if nodeid is not None: + assert "::()" not in nodeid + self._nodeid = nodeid + else: + if not self.parent: + raise TypeError("nodeid or parent must be provided") + self._nodeid = self.parent.nodeid + "::" + self.name + + #: A place where plugins can store information on the node for their + #: own use. + self.stash: Stash = Stash() + # Deprecated alias. Was never public. Can be removed in a few releases. + self._store = self.stash + + @classmethod + def from_parent(cls, parent: "Node", **kw): + """Public constructor for Nodes. + + This indirection got introduced in order to enable removing + the fragile logic from the node constructors. + + Subclasses can use ``super().from_parent(...)`` when overriding the + construction. + + :param parent: The parent node of this Node. + """ + if "config" in kw: + raise TypeError("config is not a valid argument for from_parent") + if "session" in kw: + raise TypeError("session is not a valid argument for from_parent") + return cls._create(parent=parent, **kw) + + @property + def ihook(self): + """fspath-sensitive hook proxy used to call pytest hooks.""" + return self.session.gethookproxy(self.path) + + def __repr__(self) -> str: + return "<{} {}>".format(self.__class__.__name__, getattr(self, "name", None)) + + def warn(self, warning: Warning) -> None: + """Issue a warning for this Node. + + Warnings will be displayed after the test session, unless explicitly suppressed. + + :param Warning warning: + The warning instance to issue. + + :raises ValueError: If ``warning`` instance is not a subclass of Warning. + + Example usage: + + .. code-block:: python + + node.warn(PytestWarning("some message")) + node.warn(UserWarning("some message")) + + .. versionchanged:: 6.2 + Any subclass of :class:`Warning` is now accepted, rather than only + :class:`PytestWarning ` subclasses. + """ + # enforce type checks here to avoid getting a generic type error later otherwise. + if not isinstance(warning, Warning): + raise ValueError( + "warning must be an instance of Warning or subclass, got {!r}".format( + warning + ) + ) + path, lineno = get_fslocation_from_item(self) + assert lineno is not None + warnings.warn_explicit( + warning, + category=None, + filename=str(path), + lineno=lineno + 1, + ) + + # Methods for ordering nodes. + + @property + def nodeid(self) -> str: + """A ::-separated string denoting its collection tree address.""" + return self._nodeid + + def __hash__(self) -> int: + return hash(self._nodeid) + + def setup(self) -> None: + pass + + def teardown(self) -> None: + pass + + def listchain(self) -> List["Node"]: + """Return list of all parent collectors up to self, starting from + the root of collection tree. + + :returns: The nodes. 
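+
+        For a typical test function the chain is, roughly,
+        ``[Session, Module, Class, Function]``.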
+ """ + chain = [] + item: Optional[Node] = self + while item is not None: + chain.append(item) + item = item.parent + chain.reverse() + return chain + + def add_marker( + self, marker: Union[str, MarkDecorator], append: bool = True + ) -> None: + """Dynamically add a marker object to the node. + + :param marker: + The marker. + :param append: + Whether to append the marker, or prepend it. + """ + from _pytest.mark import MARK_GEN + + if isinstance(marker, MarkDecorator): + marker_ = marker + elif isinstance(marker, str): + marker_ = getattr(MARK_GEN, marker) + else: + raise ValueError("is not a string or pytest.mark.* Marker") + self.keywords[marker_.name] = marker_ + if append: + self.own_markers.append(marker_.mark) + else: + self.own_markers.insert(0, marker_.mark) + + def iter_markers(self, name: Optional[str] = None) -> Iterator[Mark]: + """Iterate over all markers of the node. + + :param name: If given, filter the results by the name attribute. + :returns: An iterator of the markers of the node. + """ + return (x[1] for x in self.iter_markers_with_node(name=name)) + + def iter_markers_with_node( + self, name: Optional[str] = None + ) -> Iterator[Tuple["Node", Mark]]: + """Iterate over all markers of the node. + + :param name: If given, filter the results by the name attribute. + :returns: An iterator of (node, mark) tuples. + """ + for node in reversed(self.listchain()): + for mark in node.own_markers: + if name is None or getattr(mark, "name", None) == name: + yield node, mark + + @overload + def get_closest_marker(self, name: str) -> Optional[Mark]: + ... + + @overload + def get_closest_marker(self, name: str, default: Mark) -> Mark: + ... + + def get_closest_marker( + self, name: str, default: Optional[Mark] = None + ) -> Optional[Mark]: + """Return the first marker matching the name, from closest (for + example function) to farther level (for example module level). + + :param default: Fallback return value if no marker was found. + :param name: Name to filter by. + """ + return next(self.iter_markers(name=name), default) + + def listextrakeywords(self) -> Set[str]: + """Return a set of all extra keywords in self and any parents.""" + extra_keywords: Set[str] = set() + for item in self.listchain(): + extra_keywords.update(item.extra_keyword_matches) + return extra_keywords + + def listnames(self) -> List[str]: + return [x.name for x in self.listchain()] + + def addfinalizer(self, fin: Callable[[], object]) -> None: + """Register a function to be called without arguments when this node is + finalized. + + This method can only be called when this node is active + in a setup chain, for example during self.setup(). + """ + self.session._setupstate.addfinalizer(fin, self) + + def getparent(self, cls: Type[_NodeType]) -> Optional[_NodeType]: + """Get the next parent node (including self) which is an instance of + the given class. + + :param cls: The node class to search for. + :returns: The node, if found. 
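+
+        A minimal sketch::
+
+            module = item.getparent(pytest.Module)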
+ """ + current: Optional[Node] = self + while current and not isinstance(current, cls): + current = current.parent + assert current is None or isinstance(current, cls) + return current + + def _prunetraceback(self, excinfo: ExceptionInfo[BaseException]) -> None: + pass + + def _repr_failure_py( + self, + excinfo: ExceptionInfo[BaseException], + style: "Optional[_TracebackStyle]" = None, + ) -> TerminalRepr: + from _pytest.fixtures import FixtureLookupError + + if isinstance(excinfo.value, ConftestImportFailure): + excinfo = ExceptionInfo.from_exc_info(excinfo.value.excinfo) + if isinstance(excinfo.value, fail.Exception): + if not excinfo.value.pytrace: + style = "value" + if isinstance(excinfo.value, FixtureLookupError): + return excinfo.value.formatrepr() + if self.config.getoption("fulltrace", False): + style = "long" + else: + tb = _pytest._code.Traceback([excinfo.traceback[-1]]) + self._prunetraceback(excinfo) + if len(excinfo.traceback) == 0: + excinfo.traceback = tb + if style == "auto": + style = "long" + # XXX should excinfo.getrepr record all data and toterminal() process it? + if style is None: + if self.config.getoption("tbstyle", "auto") == "short": + style = "short" + else: + style = "long" + + if self.config.getoption("verbose", 0) > 1: + truncate_locals = False + else: + truncate_locals = True + + # excinfo.getrepr() formats paths relative to the CWD if `abspath` is False. + # It is possible for a fixture/test to change the CWD while this code runs, which + # would then result in the user seeing confusing paths in the failure message. + # To fix this, if the CWD changed, always display the full absolute path. + # It will be better to just always display paths relative to invocation_dir, but + # this requires a lot of plumbing (#6428). + try: + abspath = Path(os.getcwd()) != self.config.invocation_params.dir + except OSError: + abspath = True + + return excinfo.getrepr( + funcargs=True, + abspath=abspath, + showlocals=self.config.getoption("showlocals", False), + style=style, + tbfilter=False, # pruned already, or in --fulltrace mode. + truncate_locals=truncate_locals, + ) + + def repr_failure( + self, + excinfo: ExceptionInfo[BaseException], + style: "Optional[_TracebackStyle]" = None, + ) -> Union[str, TerminalRepr]: + """Return a representation of a collection or test failure. + + .. seealso:: :ref:`non-python tests` + + :param excinfo: Exception information for the failure. + """ + return self._repr_failure_py(excinfo, style) + + +def get_fslocation_from_item(node: "Node") -> Tuple[Union[str, Path], Optional[int]]: + """Try to extract the actual location from a node, depending on available attributes: + + * "location": a pair (path, lineno) + * "obj": a Python object that the node wraps. + * "fspath": just a path + + :rtype: A tuple of (str|Path, int) with filename and line number. + """ + # See Item.location. 
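+    # (Illustration: roughly ("tests/test_x.py", 11, "test_x") for a
+    # collected Function; the line number is 0-based.)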
+ location: Optional[Tuple[str, Optional[int], str]] = getattr(node, "location", None) + if location is not None: + return location[:2] + obj = getattr(node, "obj", None) + if obj is not None: + return getfslineno(obj) + return getattr(node, "fspath", "unknown location"), -1 + + +class Collector(Node): + """Collector instances create children through collect() and thus + iteratively build a tree.""" + + class CollectError(Exception): + """An error during collection, contains a custom message.""" + + def collect(self) -> Iterable[Union["Item", "Collector"]]: + """Return a list of children (items and collectors) for this + collection node.""" + raise NotImplementedError("abstract") + + # TODO: This omits the style= parameter which breaks Liskov Substitution. + def repr_failure( # type: ignore[override] + self, excinfo: ExceptionInfo[BaseException] + ) -> Union[str, TerminalRepr]: + """Return a representation of a collection failure. + + :param excinfo: Exception information for the failure. + """ + if isinstance(excinfo.value, self.CollectError) and not self.config.getoption( + "fulltrace", False + ): + exc = excinfo.value + return str(exc.args[0]) + + # Respect explicit tbstyle option, but default to "short" + # (_repr_failure_py uses "long" with "fulltrace" option always). + tbstyle = self.config.getoption("tbstyle", "auto") + if tbstyle == "auto": + tbstyle = "short" + + return self._repr_failure_py(excinfo, style=tbstyle) + + def _prunetraceback(self, excinfo: ExceptionInfo[BaseException]) -> None: + if hasattr(self, "path"): + traceback = excinfo.traceback + ntraceback = traceback.cut(path=self.path) + if ntraceback == traceback: + ntraceback = ntraceback.cut(excludepath=tracebackcutdir) + excinfo.traceback = ntraceback.filter() + + +def _check_initialpaths_for_relpath(session: "Session", path: Path) -> Optional[str]: + for initial_path in session._initialpaths: + if commonpath(path, initial_path) == initial_path: + rel = str(path.relative_to(initial_path)) + return "" if rel == "." 
else rel + return None + + +class FSCollector(Collector): + def __init__( + self, + fspath: Optional[LEGACY_PATH] = None, + path_or_parent: Optional[Union[Path, Node]] = None, + path: Optional[Path] = None, + name: Optional[str] = None, + parent: Optional[Node] = None, + config: Optional[Config] = None, + session: Optional["Session"] = None, + nodeid: Optional[str] = None, + ) -> None: + if path_or_parent: + if isinstance(path_or_parent, Node): + assert parent is None + parent = cast(FSCollector, path_or_parent) + elif isinstance(path_or_parent, Path): + assert path is None + path = path_or_parent + + path = _imply_path(type(self), path, fspath=fspath) + if name is None: + name = path.name + if parent is not None and parent.path != path: + try: + rel = path.relative_to(parent.path) + except ValueError: + pass + else: + name = str(rel) + name = name.replace(os.sep, SEP) + self.path = path + + if session is None: + assert parent is not None + session = parent.session + + if nodeid is None: + try: + nodeid = str(self.path.relative_to(session.config.rootpath)) + except ValueError: + nodeid = _check_initialpaths_for_relpath(session, path) + + if nodeid and os.sep != SEP: + nodeid = nodeid.replace(os.sep, SEP) + + super().__init__( + name=name, + parent=parent, + config=config, + session=session, + nodeid=nodeid, + path=path, + ) + + @classmethod + def from_parent( + cls, + parent, + *, + fspath: Optional[LEGACY_PATH] = None, + path: Optional[Path] = None, + **kw, + ): + """The public constructor.""" + return super().from_parent(parent=parent, fspath=fspath, path=path, **kw) + + def gethookproxy(self, fspath: "os.PathLike[str]"): + warnings.warn(FSCOLLECTOR_GETHOOKPROXY_ISINITPATH, stacklevel=2) + return self.session.gethookproxy(fspath) + + def isinitpath(self, path: Union[str, "os.PathLike[str]"]) -> bool: + warnings.warn(FSCOLLECTOR_GETHOOKPROXY_ISINITPATH, stacklevel=2) + return self.session.isinitpath(path) + + +class File(FSCollector): + """Base class for collecting tests from a file. + + :ref:`non-python tests`. + """ + + +class Item(Node): + """A basic test invocation item. + + Note that for a single function there might be multiple test invocation items. + """ + + nextitem = None + + def __init__( + self, + name, + parent=None, + config: Optional[Config] = None, + session: Optional["Session"] = None, + nodeid: Optional[str] = None, + **kw, + ) -> None: + # The first two arguments are intentionally passed positionally, + # to keep plugins who define a node type which inherits from + # (pytest.Item, pytest.File) working (see issue #8435). + # They can be made kwargs when the deprecation above is done. + super().__init__( + name, + parent, + config=config, + session=session, + nodeid=nodeid, + **kw, + ) + self._report_sections: List[Tuple[str, str, str]] = [] + + #: A list of tuples (name, value) that holds user defined properties + #: for this test. + self.user_properties: List[Tuple[str, object]] = [] + + self._check_item_and_collector_diamond_inheritance() + + def _check_item_and_collector_diamond_inheritance(self) -> None: + """ + Check if the current type inherits from both File and Collector + at the same time, emitting a warning accordingly (#8447). + """ + cls = type(self) + + # We inject an attribute in the type to avoid issuing this warning + # for the same class more than once, which is not helpful. + # It is a hack, but was deemed acceptable in order to avoid + # flooding the user in the common case. 
+ attr_name = "_pytest_diamond_inheritance_warning_shown" + if getattr(cls, attr_name, False): + return + setattr(cls, attr_name, True) + + problems = ", ".join( + base.__name__ for base in cls.__bases__ if issubclass(base, Collector) + ) + if problems: + warnings.warn( + f"{cls.__name__} is an Item subclass and should not be a collector, " + f"however its bases {problems} are collectors.\n" + "Please split the Collectors and the Item into separate node types.\n" + "Pytest Doc example: https://docs.pytest.org/en/latest/example/nonpython.html\n" + "example pull request on a plugin: https://github.com/asmeurer/pytest-flakes/pull/40/", + PytestWarning, + ) + + def runtest(self) -> None: + """Run the test case for this item. + + Must be implemented by subclasses. + + .. seealso:: :ref:`non-python tests` + """ + raise NotImplementedError("runtest must be implemented by Item subclass") + + def add_report_section(self, when: str, key: str, content: str) -> None: + """Add a new report section, similar to what's done internally to add + stdout and stderr captured output:: + + item.add_report_section("call", "stdout", "report section contents") + + :param str when: + One of the possible capture states, ``"setup"``, ``"call"``, ``"teardown"``. + :param str key: + Name of the section, can be customized at will. Pytest uses ``"stdout"`` and + ``"stderr"`` internally. + :param str content: + The full contents as a string. + """ + if content: + self._report_sections.append((when, key, content)) + + def reportinfo(self) -> Tuple[Union["os.PathLike[str]", str], Optional[int], str]: + """Get location information for this item for test reports. + + Returns a tuple with three elements: + + - The path of the test (default ``self.path``) + - The line number of the test (default ``None``) + - A name of the test to be shown (default ``""``) + + .. seealso:: :ref:`non-python tests` + """ + return self.path, None, "" + + @cached_property + def location(self) -> Tuple[str, Optional[int], str]: + location = self.reportinfo() + path = absolutepath(os.fspath(location[0])) + relfspath = self.session._node_location_to_relpath(path) + assert type(location[2]) is str + return (relfspath, location[1], location[2]) diff --git a/env/lib/python3.8/site-packages/_pytest/nose.py b/env/lib/python3.8/site-packages/_pytest/nose.py new file mode 100644 index 00000000..273bd045 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/nose.py @@ -0,0 +1,50 @@ +"""Run testsuites written for nose.""" +import warnings + +from _pytest.config import hookimpl +from _pytest.deprecated import NOSE_SUPPORT +from _pytest.fixtures import getfixturemarker +from _pytest.nodes import Item +from _pytest.python import Function +from _pytest.unittest import TestCaseFunction + + +@hookimpl(trylast=True) +def pytest_runtest_setup(item: Item) -> None: + if not isinstance(item, Function): + return + # Don't do nose style setup/teardown on direct unittest style classes. + if isinstance(item, TestCaseFunction): + return + + # Capture the narrowed type of item for the teardown closure, + # see https://github.com/python/mypy/issues/2608 + func = item + + call_optional(func.obj, "setup", func.nodeid) + func.addfinalizer(lambda: call_optional(func.obj, "teardown", func.nodeid)) + + # NOTE: Module- and class-level fixtures are handled in python.py + # with `pluginmanager.has_plugin("nose")` checks. + # It would have been nicer to implement them outside of core, but + # it's not straightforward. 
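+    #
+    # A hedged illustration (not part of pytest): the shim above would
+    # invoke plain nose-style `setup`/`teardown` like these:
+    #
+    #     class TestLegacy:
+    #         def setup(self):
+    #             self.resource = object()
+    #
+    #         def teardown(self):
+    #             self.resource = None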
+ + +def call_optional(obj: object, name: str, nodeid: str) -> bool: + method = getattr(obj, name, None) + if method is None: + return False + is_fixture = getfixturemarker(method) is not None + if is_fixture: + return False + if not callable(method): + return False + # Warn about deprecation of this plugin. + method_name = getattr(method, "__name__", str(method)) + warnings.warn( + NOSE_SUPPORT.format(nodeid=nodeid, method=method_name, stage=name), stacklevel=2 + ) + # If there are any problems allow the exception to raise rather than + # silently ignoring it. + method() + return True diff --git a/env/lib/python3.8/site-packages/_pytest/outcomes.py b/env/lib/python3.8/site-packages/_pytest/outcomes.py new file mode 100644 index 00000000..e46b663d --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/outcomes.py @@ -0,0 +1,308 @@ +"""Exception classes and constants handling test outcomes as well as +functions creating them.""" +import sys +import warnings +from typing import Any +from typing import Callable +from typing import cast +from typing import NoReturn +from typing import Optional +from typing import Type +from typing import TypeVar + +from _pytest.deprecated import KEYWORD_MSG_ARG + +TYPE_CHECKING = False # Avoid circular import through compat. + +if TYPE_CHECKING: + from typing_extensions import Protocol +else: + # typing.Protocol is only available starting from Python 3.8. It is also + # available from typing_extensions, but we don't want a runtime dependency + # on that. So use a dummy runtime implementation. + from typing import Generic + + Protocol = Generic + + +class OutcomeException(BaseException): + """OutcomeException and its subclass instances indicate and contain info + about test and collection outcomes.""" + + def __init__(self, msg: Optional[str] = None, pytrace: bool = True) -> None: + if msg is not None and not isinstance(msg, str): + error_msg = ( # type: ignore[unreachable] + "{} expected string as 'msg' parameter, got '{}' instead.\n" + "Perhaps you meant to use a mark?" + ) + raise TypeError(error_msg.format(type(self).__name__, type(msg).__name__)) + super().__init__(msg) + self.msg = msg + self.pytrace = pytrace + + def __repr__(self) -> str: + if self.msg is not None: + return self.msg + return f"<{self.__class__.__name__} instance>" + + __str__ = __repr__ + + +TEST_OUTCOME = (OutcomeException, Exception) + + +class Skipped(OutcomeException): + # XXX hackish: on 3k we fake to live in the builtins + # in order to have Skipped exception printing shorter/nicer + __module__ = "builtins" + + def __init__( + self, + msg: Optional[str] = None, + pytrace: bool = True, + allow_module_level: bool = False, + *, + _use_item_location: bool = False, + ) -> None: + super().__init__(msg=msg, pytrace=pytrace) + self.allow_module_level = allow_module_level + # If true, the skip location is reported as the item's location, + # instead of the place that raises the exception/calls skip(). + self._use_item_location = _use_item_location + + +class Failed(OutcomeException): + """Raised from an explicit call to pytest.fail().""" + + __module__ = "builtins" + + +class Exit(Exception): + """Raised for immediate program exits (no tracebacks/summaries).""" + + def __init__( + self, msg: str = "unknown reason", returncode: Optional[int] = None + ) -> None: + self.msg = msg + self.returncode = returncode + super().__init__(msg) + + +# Elaborate hack to work around https://github.com/python/mypy/issues/2087. +# Ideally would just be `exit.Exception = Exit` etc. 
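+#
+# The practical effect of the decorator defined below: the raised exception
+# class is reachable from each helper, e.g. ``pytest.skip.Exception is
+# Skipped``, so callers can catch outcome exceptions without importing
+# private modules.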
+ +_F = TypeVar("_F", bound=Callable[..., object]) +_ET = TypeVar("_ET", bound=Type[BaseException]) + + +class _WithException(Protocol[_F, _ET]): + Exception: _ET + __call__: _F + + +def _with_exception(exception_type: _ET) -> Callable[[_F], _WithException[_F, _ET]]: + def decorate(func: _F) -> _WithException[_F, _ET]: + func_with_exception = cast(_WithException[_F, _ET], func) + func_with_exception.Exception = exception_type + return func_with_exception + + return decorate + + +# Exposed helper methods. + + +@_with_exception(Exit) +def exit( + reason: str = "", returncode: Optional[int] = None, *, msg: Optional[str] = None +) -> NoReturn: + """Exit testing process. + + :param reason: + The message to show as the reason for exiting pytest. reason has a default value + only because `msg` is deprecated. + + :param returncode: + Return code to be used when exiting pytest. + + :param msg: + Same as ``reason``, but deprecated. Will be removed in a future version, use ``reason`` instead. + """ + __tracebackhide__ = True + from _pytest.config import UsageError + + if reason and msg: + raise UsageError( + "cannot pass reason and msg to exit(), `msg` is deprecated, use `reason`." + ) + if not reason: + if msg is None: + raise UsageError("exit() requires a reason argument") + warnings.warn(KEYWORD_MSG_ARG.format(func="exit"), stacklevel=2) + reason = msg + raise Exit(reason, returncode) + + +@_with_exception(Skipped) +def skip( + reason: str = "", *, allow_module_level: bool = False, msg: Optional[str] = None +) -> NoReturn: + """Skip an executing test with the given message. + + This function should be called only during testing (setup, call or teardown) or + during collection by using the ``allow_module_level`` flag. This function can + be called in doctests as well. + + :param reason: + The message to show the user as reason for the skip. + + :param allow_module_level: + Allows this function to be called at module level, skipping the rest + of the module. Defaults to False. + + :param msg: + Same as ``reason``, but deprecated. Will be removed in a future version, use ``reason`` instead. + + .. note:: + It is better to use the :ref:`pytest.mark.skipif ref` marker when + possible to declare a test to be skipped under certain conditions + like mismatching platforms or dependencies. + Similarly, use the ``# doctest: +SKIP`` directive (see :py:data:`doctest.SKIP`) + to skip a doctest statically. + """ + __tracebackhide__ = True + reason = _resolve_msg_to_reason("skip", reason, msg) + raise Skipped(msg=reason, allow_module_level=allow_module_level) + + +@_with_exception(Failed) +def fail(reason: str = "", pytrace: bool = True, msg: Optional[str] = None) -> NoReturn: + """Explicitly fail an executing test with the given message. + + :param reason: + The message to show the user as reason for the failure. + + :param pytrace: + If False, msg represents the full failure information and no + python traceback will be reported. + + :param msg: + Same as ``reason``, but deprecated. Will be removed in a future version, use ``reason`` instead. + """ + __tracebackhide__ = True + reason = _resolve_msg_to_reason("fail", reason, msg) + raise Failed(msg=reason, pytrace=pytrace) + + +def _resolve_msg_to_reason( + func_name: str, reason: str, msg: Optional[str] = None +) -> str: + """ + Handles converting the deprecated msg parameter if provided into + reason, raising a deprecation warning. This function will be removed + when the optional msg argument is removed from here in future. 
+ + :param str func_name: + The name of the offending function, this is formatted into the deprecation message. + + :param str reason: + The reason= passed into either pytest.fail() or pytest.skip() + + :param str msg: + The msg= passed into either pytest.fail() or pytest.skip(). This will + be converted into reason if it is provided to allow pytest.skip(msg=) or + pytest.fail(msg=) to continue working in the interim period. + + :returns: + The value to use as reason. + + """ + __tracebackhide__ = True + if msg is not None: + + if reason: + from pytest import UsageError + + raise UsageError( + f"Passing both ``reason`` and ``msg`` to pytest.{func_name}(...) is not permitted." + ) + warnings.warn(KEYWORD_MSG_ARG.format(func=func_name), stacklevel=3) + reason = msg + return reason + + +class XFailed(Failed): + """Raised from an explicit call to pytest.xfail().""" + + +@_with_exception(XFailed) +def xfail(reason: str = "") -> NoReturn: + """Imperatively xfail an executing test or setup function with the given reason. + + This function should be called only during testing (setup, call or teardown). + + :param reason: + The message to show the user as reason for the xfail. + + .. note:: + It is better to use the :ref:`pytest.mark.xfail ref` marker when + possible to declare a test to be xfailed under certain conditions + like known bugs or missing features. + """ + __tracebackhide__ = True + raise XFailed(reason) + + +def importorskip( + modname: str, minversion: Optional[str] = None, reason: Optional[str] = None +) -> Any: + """Import and return the requested module ``modname``, or skip the + current test if the module cannot be imported. + + :param modname: + The name of the module to import. + :param minversion: + If given, the imported module's ``__version__`` attribute must be at + least this minimal version, otherwise the test is still skipped. + :param reason: + If given, this reason is shown as the message when the module cannot + be imported. + + :returns: + The imported module. This should be assigned to its canonical name. + + Example:: + + docutils = pytest.importorskip("docutils") + """ + import warnings + + __tracebackhide__ = True + compile(modname, "", "eval") # to catch syntaxerrors + + with warnings.catch_warnings(): + # Make sure to ignore ImportWarnings that might happen because + # of existing directories with the same name we're trying to + # import but without a __init__.py file. + warnings.simplefilter("ignore") + try: + __import__(modname) + except ImportError as exc: + if reason is None: + reason = f"could not import {modname!r}: {exc}" + raise Skipped(reason, allow_module_level=True) from None + mod = sys.modules[modname] + if minversion is None: + return mod + verattr = getattr(mod, "__version__", None) + if minversion is not None: + # Imported lazily to improve start-up time. 
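+        # ``packaging`` is a declared dependency of pytest, so this local
+        # import is expected to succeed at runtime.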
+ from packaging.version import Version + + if verattr is None or Version(verattr) < Version(minversion): + raise Skipped( + "module %r has __version__ %r, required is: %r" + % (modname, verattr, minversion), + allow_module_level=True, + ) + return mod diff --git a/env/lib/python3.8/site-packages/_pytest/pastebin.py b/env/lib/python3.8/site-packages/_pytest/pastebin.py new file mode 100644 index 00000000..22c7a622 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/pastebin.py @@ -0,0 +1,110 @@ +"""Submit failure or test session information to a pastebin service.""" +import tempfile +from io import StringIO +from typing import IO +from typing import Union + +import pytest +from _pytest.config import Config +from _pytest.config import create_terminal_writer +from _pytest.config.argparsing import Parser +from _pytest.stash import StashKey +from _pytest.terminal import TerminalReporter + + +pastebinfile_key = StashKey[IO[bytes]]() + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("terminal reporting") + group._addoption( + "--pastebin", + metavar="mode", + action="store", + dest="pastebin", + default=None, + choices=["failed", "all"], + help="Send failed|all info to bpaste.net pastebin service", + ) + + +@pytest.hookimpl(trylast=True) +def pytest_configure(config: Config) -> None: + if config.option.pastebin == "all": + tr = config.pluginmanager.getplugin("terminalreporter") + # If no terminal reporter plugin is present, nothing we can do here; + # this can happen when this function executes in a worker node + # when using pytest-xdist, for example. + if tr is not None: + # pastebin file will be UTF-8 encoded binary file. + config.stash[pastebinfile_key] = tempfile.TemporaryFile("w+b") + oldwrite = tr._tw.write + + def tee_write(s, **kwargs): + oldwrite(s, **kwargs) + if isinstance(s, str): + s = s.encode("utf-8") + config.stash[pastebinfile_key].write(s) + + tr._tw.write = tee_write + + +def pytest_unconfigure(config: Config) -> None: + if pastebinfile_key in config.stash: + pastebinfile = config.stash[pastebinfile_key] + # Get terminal contents and delete file. + pastebinfile.seek(0) + sessionlog = pastebinfile.read() + pastebinfile.close() + del config.stash[pastebinfile_key] + # Undo our patching in the terminal reporter. + tr = config.pluginmanager.getplugin("terminalreporter") + del tr._tw.__dict__["write"] + # Write summary. + tr.write_sep("=", "Sending information to Paste Service") + pastebinurl = create_new_paste(sessionlog) + tr.write_line("pastebin session-log: %s\n" % pastebinurl) + + +def create_new_paste(contents: Union[str, bytes]) -> str: + """Create a new paste using the bpaste.net service. + + :contents: Paste contents string. + :returns: URL to the pasted contents, or an error message. 
+ """ + import re + from urllib.request import urlopen + from urllib.parse import urlencode + + params = {"code": contents, "lexer": "text", "expiry": "1week"} + url = "https://bpa.st" + try: + response: str = ( + urlopen(url, data=urlencode(params).encode("ascii")).read().decode("utf-8") + ) + except OSError as exc_info: # urllib errors + return "bad response: %s" % exc_info + m = re.search(r'href="/raw/(\w+)"', response) + if m: + return f"{url}/show/{m.group(1)}" + else: + return "bad response: invalid format ('" + response + "')" + + +def pytest_terminal_summary(terminalreporter: TerminalReporter) -> None: + if terminalreporter.config.option.pastebin != "failed": + return + if "failed" in terminalreporter.stats: + terminalreporter.write_sep("=", "Sending information to Paste Service") + for rep in terminalreporter.stats["failed"]: + try: + msg = rep.longrepr.reprtraceback.reprentries[-1].reprfileloc + except AttributeError: + msg = terminalreporter._getfailureheadline(rep) + file = StringIO() + tw = create_terminal_writer(terminalreporter.config, file) + rep.toterminal(tw) + s = file.getvalue() + assert len(s) + pastebinurl = create_new_paste(s) + terminalreporter.write_line(f"{msg} --> {pastebinurl}") diff --git a/env/lib/python3.8/site-packages/_pytest/pathlib.py b/env/lib/python3.8/site-packages/_pytest/pathlib.py new file mode 100644 index 00000000..c5a411b5 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/pathlib.py @@ -0,0 +1,735 @@ +import atexit +import contextlib +import fnmatch +import importlib.util +import itertools +import os +import shutil +import sys +import uuid +import warnings +from enum import Enum +from errno import EBADF +from errno import ELOOP +from errno import ENOENT +from errno import ENOTDIR +from functools import partial +from os.path import expanduser +from os.path import expandvars +from os.path import isabs +from os.path import sep +from pathlib import Path +from pathlib import PurePath +from posixpath import sep as posix_sep +from types import ModuleType +from typing import Callable +from typing import Dict +from typing import Iterable +from typing import Iterator +from typing import Optional +from typing import Set +from typing import TypeVar +from typing import Union + +from _pytest.compat import assert_never +from _pytest.outcomes import skip +from _pytest.warning_types import PytestWarning + +LOCK_TIMEOUT = 60 * 60 * 24 * 3 + + +_AnyPurePath = TypeVar("_AnyPurePath", bound=PurePath) + +# The following function, variables and comments were +# copied from cpython 3.9 Lib/pathlib.py file. + +# EBADF - guard against macOS `stat` throwing EBADF +_IGNORED_ERRORS = (ENOENT, ENOTDIR, EBADF, ELOOP) + +_IGNORED_WINERRORS = ( + 21, # ERROR_NOT_READY - drive exists but is not accessible + 1921, # ERROR_CANT_RESOLVE_FILENAME - fix for broken symlink pointing to itself +) + + +def _ignore_error(exception): + return ( + getattr(exception, "errno", None) in _IGNORED_ERRORS + or getattr(exception, "winerror", None) in _IGNORED_WINERRORS + ) + + +def get_lock_path(path: _AnyPurePath) -> _AnyPurePath: + return path.joinpath(".lock") + + +def on_rm_rf_error(func, path: str, exc, *, start_path: Path) -> bool: + """Handle known read-only errors during rmtree. + + The returned value is used only by our own tests. + """ + exctype, excvalue = exc[:2] + + # Another process removed the file in the middle of the "rm_rf" (xdist for example). 
+ # More context: https://github.com/pytest-dev/pytest/issues/5974#issuecomment-543799018 + if isinstance(excvalue, FileNotFoundError): + return False + + if not isinstance(excvalue, PermissionError): + warnings.warn( + PytestWarning(f"(rm_rf) error removing {path}\n{exctype}: {excvalue}") + ) + return False + + if func not in (os.rmdir, os.remove, os.unlink): + if func not in (os.open,): + warnings.warn( + PytestWarning( + "(rm_rf) unknown function {} when removing {}:\n{}: {}".format( + func, path, exctype, excvalue + ) + ) + ) + return False + + # Chmod + retry. + import stat + + def chmod_rw(p: str) -> None: + mode = os.stat(p).st_mode + os.chmod(p, mode | stat.S_IRUSR | stat.S_IWUSR) + + # For files, we need to recursively go upwards in the directories to + # ensure they all are also writable. + p = Path(path) + if p.is_file(): + for parent in p.parents: + chmod_rw(str(parent)) + # Stop when we reach the original path passed to rm_rf. + if parent == start_path: + break + chmod_rw(str(path)) + + func(path) + return True + + +def ensure_extended_length_path(path: Path) -> Path: + """Get the extended-length version of a path (Windows). + + On Windows, by default, the maximum length of a path (MAX_PATH) is 260 + characters, and operations on paths longer than that fail. But it is possible + to overcome this by converting the path to "extended-length" form before + performing the operation: + https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file#maximum-path-length-limitation + + On Windows, this function returns the extended-length absolute version of path. + On other platforms it returns path unchanged. + """ + if sys.platform.startswith("win32"): + path = path.resolve() + path = Path(get_extended_length_path_str(str(path))) + return path + + +def get_extended_length_path_str(path: str) -> str: + """Convert a path to a Windows extended length path.""" + long_path_prefix = "\\\\?\\" + unc_long_path_prefix = "\\\\?\\UNC\\" + if path.startswith((long_path_prefix, unc_long_path_prefix)): + return path + # UNC + if path.startswith("\\\\"): + return unc_long_path_prefix + path[2:] + return long_path_prefix + path + + +def rm_rf(path: Path) -> None: + """Remove the path contents recursively, even if some elements + are read-only.""" + path = ensure_extended_length_path(path) + onerror = partial(on_rm_rf_error, start_path=path) + shutil.rmtree(str(path), onerror=onerror) + + +def find_prefixed(root: Path, prefix: str) -> Iterator[Path]: + """Find all elements in root that begin with the prefix, case insensitive.""" + l_prefix = prefix.lower() + for x in root.iterdir(): + if x.name.lower().startswith(l_prefix): + yield x + + +def extract_suffixes(iter: Iterable[PurePath], prefix: str) -> Iterator[str]: + """Return the parts of the paths following the prefix. + + :param iter: Iterator over path names. + :param prefix: Expected prefix of the path names. + """ + p_len = len(prefix) + for p in iter: + yield p.name[p_len:] + + +def find_suffixes(root: Path, prefix: str) -> Iterator[str]: + """Combine find_prefixes and extract_suffixes.""" + return extract_suffixes(find_prefixed(root, prefix), prefix) + + +def parse_num(maybe_num) -> int: + """Parse number path suffixes, returns -1 on error.""" + try: + return int(maybe_num) + except ValueError: + return -1 + + +def _force_symlink( + root: Path, target: Union[str, PurePath], link_to: Union[str, Path] +) -> None: + """Helper to create the current symlink. 
+ + It's full of race conditions that are reasonably OK to ignore + for the context of best effort linking to the latest test run. + + The presumption being that in case of much parallelism + the inaccuracy is going to be acceptable. + """ + current_symlink = root.joinpath(target) + try: + current_symlink.unlink() + except OSError: + pass + try: + current_symlink.symlink_to(link_to) + except Exception: + pass + + +def make_numbered_dir(root: Path, prefix: str, mode: int = 0o700) -> Path: + """Create a directory with an increased number as suffix for the given prefix.""" + for i in range(10): + # try up to 10 times to create the folder + max_existing = max(map(parse_num, find_suffixes(root, prefix)), default=-1) + new_number = max_existing + 1 + new_path = root.joinpath(f"{prefix}{new_number}") + try: + new_path.mkdir(mode=mode) + except Exception: + pass + else: + _force_symlink(root, prefix + "current", new_path) + return new_path + else: + raise OSError( + "could not create numbered dir with prefix " + "{prefix} in {root} after 10 tries".format(prefix=prefix, root=root) + ) + + +def create_cleanup_lock(p: Path) -> Path: + """Create a lock to prevent premature folder cleanup.""" + lock_path = get_lock_path(p) + try: + fd = os.open(str(lock_path), os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644) + except FileExistsError as e: + raise OSError(f"cannot create lockfile in {p}") from e + else: + pid = os.getpid() + spid = str(pid).encode() + os.write(fd, spid) + os.close(fd) + if not lock_path.is_file(): + raise OSError("lock path got renamed after successful creation") + return lock_path + + +def register_cleanup_lock_removal(lock_path: Path, register=atexit.register): + """Register a cleanup function for removing a lock, by default on atexit.""" + pid = os.getpid() + + def cleanup_on_exit(lock_path: Path = lock_path, original_pid: int = pid) -> None: + current_pid = os.getpid() + if current_pid != original_pid: + # fork + return + try: + lock_path.unlink() + except OSError: + pass + + return register(cleanup_on_exit) + + +def maybe_delete_a_numbered_dir(path: Path) -> None: + """Remove a numbered directory if its lock can be obtained and it does + not seem to be in use.""" + path = ensure_extended_length_path(path) + lock_path = None + try: + lock_path = create_cleanup_lock(path) + parent = path.parent + + garbage = parent.joinpath(f"garbage-{uuid.uuid4()}") + path.rename(garbage) + rm_rf(garbage) + except OSError: + # known races: + # * other process did a cleanup at the same time + # * deletable folder was found + # * process cwd (Windows) + return + finally: + # If we created the lock, ensure we remove it even if we failed + # to properly remove the numbered dir. + if lock_path is not None: + try: + lock_path.unlink() + except OSError: + pass + + +def ensure_deletable(path: Path, consider_lock_dead_if_created_before: float) -> bool: + """Check if `path` is deletable based on whether the lock file is expired.""" + if path.is_symlink(): + return False + lock = get_lock_path(path) + try: + if not lock.is_file(): + return True + except OSError: + # we might not have access to the lock file at all, in this case assume + # we don't have access to the entire directory (#7491). 
+ return False + try: + lock_time = lock.stat().st_mtime + except Exception: + return False + else: + if lock_time < consider_lock_dead_if_created_before: + # We want to ignore any errors while trying to remove the lock such as: + # - PermissionDenied, like the file permissions have changed since the lock creation; + # - FileNotFoundError, in case another pytest process got here first; + # and any other cause of failure. + with contextlib.suppress(OSError): + lock.unlink() + return True + return False + + +def try_cleanup(path: Path, consider_lock_dead_if_created_before: float) -> None: + """Try to cleanup a folder if we can ensure it's deletable.""" + if ensure_deletable(path, consider_lock_dead_if_created_before): + maybe_delete_a_numbered_dir(path) + + +def cleanup_candidates(root: Path, prefix: str, keep: int) -> Iterator[Path]: + """List candidates for numbered directories to be removed - follows py.path.""" + max_existing = max(map(parse_num, find_suffixes(root, prefix)), default=-1) + max_delete = max_existing - keep + paths = find_prefixed(root, prefix) + paths, paths2 = itertools.tee(paths) + numbers = map(parse_num, extract_suffixes(paths2, prefix)) + for path, number in zip(paths, numbers): + if number <= max_delete: + yield path + + +def cleanup_numbered_dir( + root: Path, prefix: str, keep: int, consider_lock_dead_if_created_before: float +) -> None: + """Cleanup for lock driven numbered directories.""" + for path in cleanup_candidates(root, prefix, keep): + try_cleanup(path, consider_lock_dead_if_created_before) + for path in root.glob("garbage-*"): + try_cleanup(path, consider_lock_dead_if_created_before) + + +def make_numbered_dir_with_cleanup( + root: Path, + prefix: str, + keep: int, + lock_timeout: float, + mode: int, +) -> Path: + """Create a numbered dir with a cleanup lock and remove old ones.""" + e = None + for i in range(10): + try: + p = make_numbered_dir(root, prefix, mode) + lock_path = create_cleanup_lock(p) + register_cleanup_lock_removal(lock_path) + except Exception as exc: + e = exc + else: + consider_lock_dead_if_created_before = p.stat().st_mtime - lock_timeout + # Register a cleanup for program exit + atexit.register( + cleanup_numbered_dir, + root, + prefix, + keep, + consider_lock_dead_if_created_before, + ) + return p + assert e is not None + raise e + + +def resolve_from_str(input: str, rootpath: Path) -> Path: + input = expanduser(input) + input = expandvars(input) + if isabs(input): + return Path(input) + else: + return rootpath.joinpath(input) + + +def fnmatch_ex(pattern: str, path: Union[str, "os.PathLike[str]"]) -> bool: + """A port of FNMatcher from py.path.common which works with PurePath() instances. + + The difference between this algorithm and PurePath.match() is that the + latter matches "**" glob expressions for each part of the path, while + this algorithm uses the whole path instead. + + For example: + "tests/foo/bar/doc/test_foo.py" matches pattern "tests/**/doc/test*.py" + with this algorithm, but not with PurePath.match(). + + This algorithm was ported to keep backward-compatibility with existing + settings which assume paths match according this logic. + + References: + * https://bugs.python.org/issue29249 + * https://bugs.python.org/issue34731 + """ + path = PurePath(path) + iswin32 = sys.platform.startswith("win") + + if iswin32 and sep not in pattern and posix_sep in pattern: + # Running on Windows, the pattern has no Windows path separators, + # and the pattern has one or more Posix path separators. 
Replace + # the Posix path separators with the Windows path separator. + pattern = pattern.replace(posix_sep, sep) + + if sep not in pattern: + name = path.name + else: + name = str(path) + if path.is_absolute() and not os.path.isabs(pattern): + pattern = f"*{os.sep}{pattern}" + return fnmatch.fnmatch(name, pattern) + + +def parts(s: str) -> Set[str]: + parts = s.split(sep) + return {sep.join(parts[: i + 1]) or sep for i in range(len(parts))} + + +def symlink_or_skip(src, dst, **kwargs): + """Make a symlink, or skip the test in case symlinks are not supported.""" + try: + os.symlink(str(src), str(dst), **kwargs) + except OSError as e: + skip(f"symlinks not supported: {e}") + + +class ImportMode(Enum): + """Possible values for `mode` parameter of `import_path`.""" + + prepend = "prepend" + append = "append" + importlib = "importlib" + + +class ImportPathMismatchError(ImportError): + """Raised on import_path() if there is a mismatch of __file__'s. + + This can happen when `import_path` is called multiple times with different filenames that has + the same basename but reside in packages + (for example "/tests1/test_foo.py" and "/tests2/test_foo.py"). + """ + + +def import_path( + p: Union[str, "os.PathLike[str]"], + *, + mode: Union[str, ImportMode] = ImportMode.prepend, + root: Path, +) -> ModuleType: + """Import and return a module from the given path, which can be a file (a module) or + a directory (a package). + + The import mechanism used is controlled by the `mode` parameter: + + * `mode == ImportMode.prepend`: the directory containing the module (or package, taking + `__init__.py` files into account) will be put at the *start* of `sys.path` before + being imported with `__import__. + + * `mode == ImportMode.append`: same as `prepend`, but the directory will be appended + to the end of `sys.path`, if not already in `sys.path`. + + * `mode == ImportMode.importlib`: uses more fine control mechanisms provided by `importlib` + to import the module, which avoids having to use `__import__` and muck with `sys.path` + at all. It effectively allows having same-named test modules in different places. + + :param root: + Used as an anchor when mode == ImportMode.importlib to obtain + a unique name for the module being imported so it can safely be stored + into ``sys.modules``. + + :raises ImportPathMismatchError: + If after importing the given `path` and the module `__file__` + are different. Only raised in `prepend` and `append` modes. 
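+
+    Illustrative usage (a sketch; the path shown is hypothetical)::
+
+        mod = import_path(
+            "tests/test_foo.py", mode=ImportMode.importlib, root=Path.cwd()
+        )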
+ """ + mode = ImportMode(mode) + + path = Path(p) + + if not path.exists(): + raise ImportError(path) + + if mode is ImportMode.importlib: + module_name = module_name_from_path(path, root) + + for meta_importer in sys.meta_path: + spec = meta_importer.find_spec(module_name, [str(path.parent)]) + if spec is not None: + break + else: + spec = importlib.util.spec_from_file_location(module_name, str(path)) + + if spec is None: + raise ImportError(f"Can't find module {module_name} at location {path}") + mod = importlib.util.module_from_spec(spec) + sys.modules[module_name] = mod + spec.loader.exec_module(mod) # type: ignore[union-attr] + insert_missing_modules(sys.modules, module_name) + return mod + + pkg_path = resolve_package_path(path) + if pkg_path is not None: + pkg_root = pkg_path.parent + names = list(path.with_suffix("").relative_to(pkg_root).parts) + if names[-1] == "__init__": + names.pop() + module_name = ".".join(names) + else: + pkg_root = path.parent + module_name = path.stem + + # Change sys.path permanently: restoring it at the end of this function would cause surprising + # problems because of delayed imports: for example, a conftest.py file imported by this function + # might have local imports, which would fail at runtime if we restored sys.path. + if mode is ImportMode.append: + if str(pkg_root) not in sys.path: + sys.path.append(str(pkg_root)) + elif mode is ImportMode.prepend: + if str(pkg_root) != sys.path[0]: + sys.path.insert(0, str(pkg_root)) + else: + assert_never(mode) + + importlib.import_module(module_name) + + mod = sys.modules[module_name] + if path.name == "__init__.py": + return mod + + ignore = os.environ.get("PY_IGNORE_IMPORTMISMATCH", "") + if ignore != "1": + module_file = mod.__file__ + if module_file is None: + raise ImportPathMismatchError(module_name, module_file, path) + + if module_file.endswith((".pyc", ".pyo")): + module_file = module_file[:-1] + if module_file.endswith(os.path.sep + "__init__.py"): + module_file = module_file[: -(len(os.path.sep + "__init__.py"))] + + try: + is_same = _is_same(str(path), module_file) + except FileNotFoundError: + is_same = False + + if not is_same: + raise ImportPathMismatchError(module_name, module_file, path) + + return mod + + +# Implement a special _is_same function on Windows which returns True if the two filenames +# compare equal, to circumvent os.path.samefile returning False for mounts in UNC (#7678). +if sys.platform.startswith("win"): + + def _is_same(f1: str, f2: str) -> bool: + return Path(f1) == Path(f2) or os.path.samefile(f1, f2) + +else: + + def _is_same(f1: str, f2: str) -> bool: + return os.path.samefile(f1, f2) + + +def module_name_from_path(path: Path, root: Path) -> str: + """ + Return a dotted module name based on the given path, anchored on root. + + For example: path="projects/src/tests/test_foo.py" and root="/projects", the + resulting module name will be "src.tests.test_foo". + """ + path = path.with_suffix("") + try: + relative_path = path.relative_to(root) + except ValueError: + # If we can't get a relative path to root, use the full path, except + # for the first part ("d:\\" or "/" depending on the platform, for example). + path_parts = path.parts[1:] + else: + # Use the parts for the relative path to the root path. + path_parts = relative_path.parts + + return ".".join(path_parts) + + +def insert_missing_modules(modules: Dict[str, ModuleType], module_name: str) -> None: + """ + Used by ``import_path`` to create intermediate modules when using mode=importlib. 
+ + When we want to import a module as "src.tests.test_foo" for example, we need + to create empty modules "src" and "src.tests" after inserting "src.tests.test_foo", + otherwise "src.tests.test_foo" is not importable by ``__import__``. + """ + module_parts = module_name.split(".") + while module_name: + if module_name not in modules: + try: + # If sys.meta_path is empty, calling import_module will issue + # a warning and raise ModuleNotFoundError. To avoid the + # warning, we check sys.meta_path explicitly and raise the error + # ourselves to fall back to creating a dummy module. + if not sys.meta_path: + raise ModuleNotFoundError + importlib.import_module(module_name) + except ModuleNotFoundError: + module = ModuleType( + module_name, + doc="Empty module created by pytest's importmode=importlib.", + ) + modules[module_name] = module + module_parts.pop(-1) + module_name = ".".join(module_parts) + + +def resolve_package_path(path: Path) -> Optional[Path]: + """Return the Python package path by looking for the last + directory upwards which still contains an __init__.py. + + Returns None if it can not be determined. + """ + result = None + for parent in itertools.chain((path,), path.parents): + if parent.is_dir(): + if not parent.joinpath("__init__.py").is_file(): + break + if not parent.name.isidentifier(): + break + result = parent + return result + + +def visit( + path: Union[str, "os.PathLike[str]"], recurse: Callable[["os.DirEntry[str]"], bool] +) -> Iterator["os.DirEntry[str]"]: + """Walk a directory recursively, in breadth-first order. + + Entries at each directory level are sorted. + """ + + # Skip entries with symlink loops and other brokenness, so the caller doesn't + # have to deal with it. + entries = [] + for entry in os.scandir(path): + try: + entry.is_file() + except OSError as err: + if _ignore_error(err): + continue + raise + entries.append(entry) + + entries.sort(key=lambda entry: entry.name) + + yield from entries + + for entry in entries: + if entry.is_dir() and recurse(entry): + yield from visit(entry.path, recurse) + + +def absolutepath(path: Union[Path, str]) -> Path: + """Convert a path to an absolute path using os.path.abspath. + + Prefer this over Path.resolve() (see #6523). + Prefer this over Path.absolute() (not public, doesn't normalize). + """ + return Path(os.path.abspath(str(path))) + + +def commonpath(path1: Path, path2: Path) -> Optional[Path]: + """Return the common part shared with the other path, or None if there is + no common part. + + If one path is relative and one is absolute, returns None. + """ + try: + return Path(os.path.commonpath((str(path1), str(path2)))) + except ValueError: + return None + + +def bestrelpath(directory: Path, dest: Path) -> str: + """Return a string which is a relative path from directory to dest such + that directory/bestrelpath == dest. + + The paths must be either both absolute or both relative. + + If no such path can be determined, returns dest. + """ + assert isinstance(directory, Path) + assert isinstance(dest, Path) + if dest == directory: + return os.curdir + # Find the longest common directory. + base = commonpath(directory, dest) + # Can be the case on Windows for two absolute paths on different drives. + # Can be the case for two relative paths without common prefix. + # Can be the case for a relative path and an absolute path. + if not base: + return str(dest) + reldirectory = directory.relative_to(base) + reldest = dest.relative_to(base) + return os.path.join( + # Back from directory to base. 
+        *([os.pardir] * len(reldirectory.parts)),
+        # Forward from base to dest.
+        *reldest.parts,
+    )
+
+
+# Originates from py.path.local.copy(), with significant trims and adjustments.
+# TODO(py38): Replace with shutil.copytree(..., symlinks=True, dirs_exist_ok=True)
+def copytree(source: Path, target: Path) -> None:
+    """Recursively copy a source directory to target."""
+    assert source.is_dir()
+    for entry in visit(source, recurse=lambda entry: not entry.is_symlink()):
+        x = Path(entry)
+        relpath = x.relative_to(source)
+        newx = target / relpath
+        newx.parent.mkdir(exist_ok=True)
+        if x.is_symlink():
+            newx.symlink_to(os.readlink(x))
+        elif x.is_file():
+            shutil.copyfile(x, newx)
+        elif x.is_dir():
+            newx.mkdir(exist_ok=True)
diff --git a/env/lib/python3.8/site-packages/_pytest/py.typed b/env/lib/python3.8/site-packages/_pytest/py.typed
new file mode 100644
index 00000000..e69de29b
diff --git a/env/lib/python3.8/site-packages/_pytest/pytester.py b/env/lib/python3.8/site-packages/_pytest/pytester.py
new file mode 100644
index 00000000..a9299944
--- /dev/null
+++ b/env/lib/python3.8/site-packages/_pytest/pytester.py
@@ -0,0 +1,1787 @@
+"""(Disabled by default) support for testing pytest and pytest plugins.
+
+PYTEST_DONT_REWRITE
+"""
+import collections.abc
+import contextlib
+import gc
+import importlib
+import os
+import platform
+import re
+import shutil
+import subprocess
+import sys
+import traceback
+from fnmatch import fnmatch
+from io import StringIO
+from pathlib import Path
+from typing import Any
+from typing import Callable
+from typing import Dict
+from typing import Generator
+from typing import IO
+from typing import Iterable
+from typing import List
+from typing import Optional
+from typing import overload
+from typing import Sequence
+from typing import TextIO
+from typing import Tuple
+from typing import Type
+from typing import TYPE_CHECKING
+from typing import Union
+from weakref import WeakKeyDictionary
+
+from iniconfig import IniConfig
+from iniconfig import SectionWrapper
+
+from _pytest import timing
+from _pytest._code import Source
+from _pytest.capture import _get_multicapture
+from _pytest.compat import final
+from _pytest.compat import NOTSET
+from _pytest.compat import NotSetType
+from _pytest.config import _PluggyPlugin
+from _pytest.config import Config
+from _pytest.config import ExitCode
+from _pytest.config import hookimpl
+from _pytest.config import main
+from _pytest.config import PytestPluginManager
+from _pytest.config.argparsing import Parser
+from _pytest.deprecated import check_ispytest
+from _pytest.fixtures import fixture
+from _pytest.fixtures import FixtureRequest
+from _pytest.main import Session
+from _pytest.monkeypatch import MonkeyPatch
+from _pytest.nodes import Collector
+from _pytest.nodes import Item
+from _pytest.outcomes import fail
+from _pytest.outcomes import importorskip
+from _pytest.outcomes import skip
+from _pytest.pathlib import bestrelpath
+from _pytest.pathlib import copytree
+from _pytest.pathlib import make_numbered_dir
+from _pytest.reports import CollectReport
+from _pytest.reports import TestReport
+from _pytest.tmpdir import TempPathFactory
+from _pytest.warning_types import PytestWarning
+
+
+if TYPE_CHECKING:
+    from typing_extensions import Final
+    from typing_extensions import Literal
+
+    import pexpect
+
+
+pytest_plugins = ["pytester_assertions"]
+
+
+IGNORE_PAM = [  # filenames added when obtaining details about the current user
+    "/var/lib/sss/mc/passwd"
+]
+
+
+def pytest_addoption(parser: Parser) -> None:
parser.addoption( + "--lsof", + action="store_true", + dest="lsof", + default=False, + help="Run FD checks if lsof is available", + ) + + parser.addoption( + "--runpytest", + default="inprocess", + dest="runpytest", + choices=("inprocess", "subprocess"), + help=( + "Run pytest sub runs in tests using an 'inprocess' " + "or 'subprocess' (python -m main) method" + ), + ) + + parser.addini( + "pytester_example_dir", help="Directory to take the pytester example files from" + ) + + +def pytest_configure(config: Config) -> None: + if config.getvalue("lsof"): + checker = LsofFdLeakChecker() + if checker.matching_platform(): + config.pluginmanager.register(checker) + + config.addinivalue_line( + "markers", + "pytester_example_path(*path_segments): join the given path " + "segments to `pytester_example_dir` for this test.", + ) + + +class LsofFdLeakChecker: + def get_open_files(self) -> List[Tuple[str, str]]: + out = subprocess.run( + ("lsof", "-Ffn0", "-p", str(os.getpid())), + stdout=subprocess.PIPE, + stderr=subprocess.DEVNULL, + check=True, + text=True, + ).stdout + + def isopen(line: str) -> bool: + return line.startswith("f") and ( + "deleted" not in line + and "mem" not in line + and "txt" not in line + and "cwd" not in line + ) + + open_files = [] + + for line in out.split("\n"): + if isopen(line): + fields = line.split("\0") + fd = fields[0][1:] + filename = fields[1][1:] + if filename in IGNORE_PAM: + continue + if filename.startswith("/"): + open_files.append((fd, filename)) + + return open_files + + def matching_platform(self) -> bool: + try: + subprocess.run(("lsof", "-v"), check=True) + except (OSError, subprocess.CalledProcessError): + return False + else: + return True + + @hookimpl(hookwrapper=True, tryfirst=True) + def pytest_runtest_protocol(self, item: Item) -> Generator[None, None, None]: + lines1 = self.get_open_files() + yield + if hasattr(sys, "pypy_version_info"): + gc.collect() + lines2 = self.get_open_files() + + new_fds = {t[0] for t in lines2} - {t[0] for t in lines1} + leaked_files = [t for t in lines2 if t[0] in new_fds] + if leaked_files: + error = [ + "***** %s FD leakage detected" % len(leaked_files), + *(str(f) for f in leaked_files), + "*** Before:", + *(str(f) for f in lines1), + "*** After:", + *(str(f) for f in lines2), + "***** %s FD leakage detected" % len(leaked_files), + "*** function %s:%s: %s " % item.location, + "See issue #2366", + ] + item.warn(PytestWarning("\n".join(error))) + + +# used at least by pytest-xdist plugin + + +@fixture +def _pytest(request: FixtureRequest) -> "PytestArg": + """Return a helper which offers a gethookrecorder(hook) method which + returns a HookRecorder instance which helps to make assertions about called + hooks.""" + return PytestArg(request) + + +class PytestArg: + def __init__(self, request: FixtureRequest) -> None: + self._request = request + + def gethookrecorder(self, hook) -> "HookRecorder": + hookrecorder = HookRecorder(hook._pm) + self._request.addfinalizer(hookrecorder.finish_recording) + return hookrecorder + + +def get_public_names(values: Iterable[str]) -> List[str]: + """Only return names from iterator values without a leading underscore.""" + return [x for x in values if x[0] != "_"] + + +@final +class RecordedHookCall: + """A recorded call to a hook. + + The arguments to the hook call are set as attributes. + For example: + + .. code-block:: python + + calls = hook_recorder.getcalls("pytest_runtest_setup") + # Suppose pytest_runtest_setup was called once with `item=an_item`. 
+
+        assert calls[0].item is an_item
+    """
+
+    def __init__(self, name: str, kwargs) -> None:
+        self.__dict__.update(kwargs)
+        self._name = name
+
+    def __repr__(self) -> str:
+        d = self.__dict__.copy()
+        del d["_name"]
+        return f"<RecordedHookCall {self._name!r}(**{d!r})>"
+
+    if TYPE_CHECKING:
+        # The class has undetermined attributes, this tells mypy about it.
+        def __getattr__(self, key: str):
+            ...
+
+
+@final
+class HookRecorder:
+    """Record all hooks called in a plugin manager.
+
+    Hook recorders are created by :class:`Pytester`.
+
+    This wraps all the hook calls in the plugin manager, recording each call
+    before propagating the normal calls.
+    """
+
+    def __init__(
+        self, pluginmanager: PytestPluginManager, *, _ispytest: bool = False
+    ) -> None:
+        check_ispytest(_ispytest)
+
+        self._pluginmanager = pluginmanager
+        self.calls: List[RecordedHookCall] = []
+        self.ret: Optional[Union[int, ExitCode]] = None
+
+        def before(hook_name: str, hook_impls, kwargs) -> None:
+            self.calls.append(RecordedHookCall(hook_name, kwargs))
+
+        def after(outcome, hook_name: str, hook_impls, kwargs) -> None:
+            pass
+
+        self._undo_wrapping = pluginmanager.add_hookcall_monitoring(before, after)
+
+    def finish_recording(self) -> None:
+        self._undo_wrapping()
+
+    def getcalls(self, names: Union[str, Iterable[str]]) -> List[RecordedHookCall]:
+        """Get all recorded calls to hooks with the given names (or name)."""
+        if isinstance(names, str):
+            names = names.split()
+        return [call for call in self.calls if call._name in names]
+
+    def assert_contains(self, entries: Sequence[Tuple[str, str]]) -> None:
+        __tracebackhide__ = True
+        i = 0
+        entries = list(entries)
+        backlocals = sys._getframe(1).f_locals
+        while entries:
+            name, check = entries.pop(0)
+            for ind, call in enumerate(self.calls[i:]):
+                if call._name == name:
+                    print("NAMEMATCH", name, call)
+                    if eval(check, backlocals, call.__dict__):
+                        print("CHECKERMATCH", repr(check), "->", call)
+                    else:
+                        print("NOCHECKERMATCH", repr(check), "-", call)
+                        continue
+                    i += ind + 1
+                    break
+                print("NONAMEMATCH", name, "with", call)
+            else:
+                fail(f"could not find {name!r} check {check!r}")
+
+    def popcall(self, name: str) -> RecordedHookCall:
+        __tracebackhide__ = True
+        for i, call in enumerate(self.calls):
+            if call._name == name:
+                del self.calls[i]
+                return call
+        lines = [f"could not find call {name!r}, in:"]
+        lines.extend(["  %s" % x for x in self.calls])
+        fail("\n".join(lines))
+
+    def getcall(self, name: str) -> RecordedHookCall:
+        values = self.getcalls(name)
+        assert len(values) == 1, (name, values)
+        return values[0]
+
+    # functionality for test reports
+
+    @overload
+    def getreports(
+        self,
+        names: "Literal['pytest_collectreport']",
+    ) -> Sequence[CollectReport]:
+        ...
+
+    @overload
+    def getreports(
+        self,
+        names: "Literal['pytest_runtest_logreport']",
+    ) -> Sequence[TestReport]:
+        ...
+
+    @overload
+    def getreports(
+        self,
+        names: Union[str, Iterable[str]] = (
+            "pytest_collectreport",
+            "pytest_runtest_logreport",
+        ),
+    ) -> Sequence[Union[CollectReport, TestReport]]:
+        ...
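+
+    # The overloads above only narrow the return type for callers passing a
+    # single literal hook name; the implementation below handles all accepted
+    # forms uniformly.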
+ + def getreports( + self, + names: Union[str, Iterable[str]] = ( + "pytest_collectreport", + "pytest_runtest_logreport", + ), + ) -> Sequence[Union[CollectReport, TestReport]]: + return [x.report for x in self.getcalls(names)] + + def matchreport( + self, + inamepart: str = "", + names: Union[str, Iterable[str]] = ( + "pytest_runtest_logreport", + "pytest_collectreport", + ), + when: Optional[str] = None, + ) -> Union[CollectReport, TestReport]: + """Return a testreport whose dotted import path matches.""" + values = [] + for rep in self.getreports(names=names): + if not when and rep.when != "call" and rep.passed: + # setup/teardown passing reports - let's ignore those + continue + if when and rep.when != when: + continue + if not inamepart or inamepart in rep.nodeid.split("::"): + values.append(rep) + if not values: + raise ValueError( + "could not find test report matching %r: " + "no test reports at all!" % (inamepart,) + ) + if len(values) > 1: + raise ValueError( + "found 2 or more testreports matching {!r}: {}".format( + inamepart, values + ) + ) + return values[0] + + @overload + def getfailures( + self, + names: "Literal['pytest_collectreport']", + ) -> Sequence[CollectReport]: + ... + + @overload + def getfailures( + self, + names: "Literal['pytest_runtest_logreport']", + ) -> Sequence[TestReport]: + ... + + @overload + def getfailures( + self, + names: Union[str, Iterable[str]] = ( + "pytest_collectreport", + "pytest_runtest_logreport", + ), + ) -> Sequence[Union[CollectReport, TestReport]]: + ... + + def getfailures( + self, + names: Union[str, Iterable[str]] = ( + "pytest_collectreport", + "pytest_runtest_logreport", + ), + ) -> Sequence[Union[CollectReport, TestReport]]: + return [rep for rep in self.getreports(names) if rep.failed] + + def getfailedcollections(self) -> Sequence[CollectReport]: + return self.getfailures("pytest_collectreport") + + def listoutcomes( + self, + ) -> Tuple[ + Sequence[TestReport], + Sequence[Union[CollectReport, TestReport]], + Sequence[Union[CollectReport, TestReport]], + ]: + passed = [] + skipped = [] + failed = [] + for rep in self.getreports( + ("pytest_collectreport", "pytest_runtest_logreport") + ): + if rep.passed: + if rep.when == "call": + assert isinstance(rep, TestReport) + passed.append(rep) + elif rep.skipped: + skipped.append(rep) + else: + assert rep.failed, f"Unexpected outcome: {rep!r}" + failed.append(rep) + return passed, skipped, failed + + def countoutcomes(self) -> List[int]: + return [len(x) for x in self.listoutcomes()] + + def assertoutcome(self, passed: int = 0, skipped: int = 0, failed: int = 0) -> None: + __tracebackhide__ = True + from _pytest.pytester_assertions import assertoutcome + + outcomes = self.listoutcomes() + assertoutcome( + outcomes, + passed=passed, + skipped=skipped, + failed=failed, + ) + + def clear(self) -> None: + self.calls[:] = [] + + +@fixture +def linecomp() -> "LineComp": + """A :class: `LineComp` instance for checking that an input linearly + contains a sequence of strings.""" + return LineComp() + + +@fixture(name="LineMatcher") +def LineMatcher_fixture(request: FixtureRequest) -> Type["LineMatcher"]: + """A reference to the :class: `LineMatcher`. + + This is instantiable with a list of lines (without their trailing newlines). + This is useful for testing large texts, such as the output of commands. 
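+
+    Illustrative usage (a sketch)::
+
+        def test_greeting(LineMatcher):
+            matcher = LineMatcher(["hello world", "goodbye"])
+            matcher.fnmatch_lines(["hello*"])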
+    """
+    return LineMatcher
+
+
+@fixture
+def pytester(
+    request: FixtureRequest, tmp_path_factory: TempPathFactory, monkeypatch: MonkeyPatch
+) -> "Pytester":
+    """
+    Facilities to write tests/configuration files, execute pytest in isolation, and match
+    against expected output, perfect for black-box testing of pytest plugins.
+
+    It attempts to isolate the test run from external factors as much as possible, modifying
+    the current working directory to ``path`` and environment variables during initialization.
+
+    It is particularly useful for testing plugins. It is similar to the :fixture:`tmp_path`
+    fixture but provides methods which aid in testing pytest itself.
+    """
+    return Pytester(request, tmp_path_factory, monkeypatch, _ispytest=True)
+
+
+@fixture
+def _sys_snapshot() -> Generator[None, None, None]:
+    snappaths = SysPathsSnapshot()
+    snapmods = SysModulesSnapshot()
+    yield
+    snapmods.restore()
+    snappaths.restore()
+
+
+@fixture
+def _config_for_test() -> Generator[Config, None, None]:
+    from _pytest.config import get_config
+
+    config = get_config()
+    yield config
+    config._ensure_unconfigure()  # cleanup, e.g. capman closing tmpfiles.
+
+
+# Regex to match the session duration string in the summary: "74.34s".
+rex_session_duration = re.compile(r"\d+\.\d\ds")
+# Regex to match all the counts and phrases in the summary line: "34 passed, 111 skipped".
+rex_outcome = re.compile(r"(\d+) (\w+)")
+
+
+@final
+class RunResult:
+    """The result of running a command from :class:`~pytest.Pytester`."""
+
+    def __init__(
+        self,
+        ret: Union[int, ExitCode],
+        outlines: List[str],
+        errlines: List[str],
+        duration: float,
+    ) -> None:
+        try:
+            self.ret: Union[int, ExitCode] = ExitCode(ret)
+            """The return value."""
+        except ValueError:
+            self.ret = ret
+        self.outlines = outlines
+        """List of lines captured from stdout."""
+        self.errlines = errlines
+        """List of lines captured from stderr."""
+        self.stdout = LineMatcher(outlines)
+        """:class:`~pytest.LineMatcher` of stdout.
+
+        Use e.g. :func:`str(stdout) <LineMatcher.__str__>` to reconstruct stdout, or the commonly used
+        :func:`stdout.fnmatch_lines() <LineMatcher.fnmatch_lines>` method.
+        """
+        self.stderr = LineMatcher(errlines)
+        """:class:`~pytest.LineMatcher` of stderr."""
+        self.duration = duration
+        """Duration in seconds."""
+
+    def __repr__(self) -> str:
+        return (
+            "<RunResult ret=%s len(stdout.lines)=%d len(stderr.lines)=%d duration=%.2fs>"
+            % (self.ret, len(self.stdout.lines), len(self.stderr.lines), self.duration)
+        )
+
+    def parseoutcomes(self) -> Dict[str, int]:
+        """Return a dictionary of outcome noun -> count from parsing the terminal
+        output that the test process produced.
+
+        The returned nouns will always be in plural form::
+
+            ======= 1 failed, 1 passed, 1 warning, 1 error in 0.13s ====
+
+        Will return ``{"failed": 1, "passed": 1, "warnings": 1, "errors": 1}``.
+        """
+        return self.parse_summary_nouns(self.outlines)
+
+    @classmethod
+    def parse_summary_nouns(cls, lines) -> Dict[str, int]:
+        """Extract the nouns from a pytest terminal summary line.
+
+        It always returns the plural noun for consistency::
+
+            ======= 1 failed, 1 passed, 1 warning, 1 error in 0.13s ====
+
+        Will return ``{"failed": 1, "passed": 1, "warnings": 1, "errors": 1}``.
+ """ + for line in reversed(lines): + if rex_session_duration.search(line): + outcomes = rex_outcome.findall(line) + ret = {noun: int(count) for (count, noun) in outcomes} + break + else: + raise ValueError("Pytest terminal summary report not found") + + to_plural = { + "warning": "warnings", + "error": "errors", + } + return {to_plural.get(k, k): v for k, v in ret.items()} + + def assert_outcomes( + self, + passed: int = 0, + skipped: int = 0, + failed: int = 0, + errors: int = 0, + xpassed: int = 0, + xfailed: int = 0, + warnings: Optional[int] = None, + deselected: Optional[int] = None, + ) -> None: + """ + Assert that the specified outcomes appear with the respective + numbers (0 means it didn't occur) in the text output from a test run. + + ``warnings`` and ``deselected`` are only checked if not None. + """ + __tracebackhide__ = True + from _pytest.pytester_assertions import assert_outcomes + + outcomes = self.parseoutcomes() + assert_outcomes( + outcomes, + passed=passed, + skipped=skipped, + failed=failed, + errors=errors, + xpassed=xpassed, + xfailed=xfailed, + warnings=warnings, + deselected=deselected, + ) + + +class CwdSnapshot: + def __init__(self) -> None: + self.__saved = os.getcwd() + + def restore(self) -> None: + os.chdir(self.__saved) + + +class SysModulesSnapshot: + def __init__(self, preserve: Optional[Callable[[str], bool]] = None) -> None: + self.__preserve = preserve + self.__saved = dict(sys.modules) + + def restore(self) -> None: + if self.__preserve: + self.__saved.update( + (k, m) for k, m in sys.modules.items() if self.__preserve(k) + ) + sys.modules.clear() + sys.modules.update(self.__saved) + + +class SysPathsSnapshot: + def __init__(self) -> None: + self.__saved = list(sys.path), list(sys.meta_path) + + def restore(self) -> None: + sys.path[:], sys.meta_path[:] = self.__saved + + +@final +class Pytester: + """ + Facilities to write tests/configuration files, execute pytest in isolation, and match + against expected output, perfect for black-box testing of pytest plugins. + + It attempts to isolate the test run from external factors as much as possible, modifying + the current working directory to :attr:`path` and environment variables during initialization. + """ + + __test__ = False + + CLOSE_STDIN: "Final" = NOTSET + + class TimeoutExpired(Exception): + pass + + def __init__( + self, + request: FixtureRequest, + tmp_path_factory: TempPathFactory, + monkeypatch: MonkeyPatch, + *, + _ispytest: bool = False, + ) -> None: + check_ispytest(_ispytest) + self._request = request + self._mod_collections: WeakKeyDictionary[ + Collector, List[Union[Item, Collector]] + ] = WeakKeyDictionary() + if request.function: + name: str = request.function.__name__ + else: + name = request.node.name + self._name = name + self._path: Path = tmp_path_factory.mktemp(name, numbered=True) + #: A list of plugins to use with :py:meth:`parseconfig` and + #: :py:meth:`runpytest`. Initially this is an empty list but plugins can + #: be added to the list. The type of items to add to the list depends on + #: the method using them so refer to them for details. 
+        self.plugins: List[Union[str, _PluggyPlugin]] = []
+        self._cwd_snapshot = CwdSnapshot()
+        self._sys_path_snapshot = SysPathsSnapshot()
+        self._sys_modules_snapshot = self.__take_sys_modules_snapshot()
+        self.chdir()
+        self._request.addfinalizer(self._finalize)
+        self._method = self._request.config.getoption("--runpytest")
+        self._test_tmproot = tmp_path_factory.mktemp(f"tmp-{name}", numbered=True)
+
+        self._monkeypatch = mp = monkeypatch
+        mp.setenv("PYTEST_DEBUG_TEMPROOT", str(self._test_tmproot))
+        # Ensure no unexpected caching via tox.
+        mp.delenv("TOX_ENV_DIR", raising=False)
+        # Discard outer pytest options.
+        mp.delenv("PYTEST_ADDOPTS", raising=False)
+        # Ensure no user config is used.
+        tmphome = str(self.path)
+        mp.setenv("HOME", tmphome)
+        mp.setenv("USERPROFILE", tmphome)
+        # Do not use colors for inner runs by default.
+        mp.setenv("PY_COLORS", "0")
+
+    @property
+    def path(self) -> Path:
+        """Temporary directory path used to create files/run tests from, etc."""
+        return self._path
+
+    def __repr__(self) -> str:
+        return f"<Pytester {self.path!r}>"
+
+    def _finalize(self) -> None:
+        """
+        Clean up global state artifacts.
+
+        Some methods modify the global interpreter state and this tries to
+        clean this up. It does not remove the temporary directory however so
+        it can be looked at after the test run has finished.
+        """
+        self._sys_modules_snapshot.restore()
+        self._sys_path_snapshot.restore()
+        self._cwd_snapshot.restore()
+
+    def __take_sys_modules_snapshot(self) -> SysModulesSnapshot:
+        # Some zope modules used by twisted-related tests keep internal state
+        # and can't be deleted; we had some trouble in the past with
+        # `zope.interface` for example.
+        #
+        # Preserve readline due to https://bugs.python.org/issue41033.
+        # pexpect issues a SIGWINCH.
+        def preserve_module(name):
+            return name.startswith(("zope", "readline"))
+
+        return SysModulesSnapshot(preserve=preserve_module)
+
+    def make_hook_recorder(self, pluginmanager: PytestPluginManager) -> HookRecorder:
+        """Create a new :class:`HookRecorder` for a :class:`PytestPluginManager`."""
+        pluginmanager.reprec = reprec = HookRecorder(pluginmanager, _ispytest=True)
+        self._request.addfinalizer(reprec.finish_recording)
+        return reprec
+
+    def chdir(self) -> None:
+        """Cd into the temporary directory.
+
+        This is done automatically upon instantiation.
+        """
+        os.chdir(self.path)
+
+    def _makefile(
+        self,
+        ext: str,
+        lines: Sequence[Union[Any, bytes]],
+        files: Dict[str, str],
+        encoding: str = "utf-8",
+    ) -> Path:
+        items = list(files.items())
+
+        if ext and not ext.startswith("."):
+            raise ValueError(
+                f"pytester.makefile expects a file extension, try .{ext} instead of {ext}"
+            )
+
+        def to_text(s: Union[Any, bytes]) -> str:
+            return s.decode(encoding) if isinstance(s, bytes) else str(s)
+
+        if lines:
+            source = "\n".join(to_text(x) for x in lines)
+            basename = self._name
+            items.insert(0, (basename, source))
+
+        ret = None
+        for basename, value in items:
+            p = self.path.joinpath(basename).with_suffix(ext)
+            p.parent.mkdir(parents=True, exist_ok=True)
+            source_ = Source(value)
+            source = "\n".join(to_text(line) for line in source_.lines)
+            p.write_text(source.strip(), encoding=encoding)
+            if ret is None:
+                ret = p
+        assert ret is not None
+        return ret
+
+    def makefile(self, ext: str, *args: str, **kwargs: str) -> Path:
+        r"""Create new text file(s) in the test directory.
+
+        :param ext:
+            The extension the file(s) should use, including the dot, e.g. `.py`.
+        :param args:
+            All args are treated as strings and joined using newlines.
+ The result is written as contents to the file. The name of the
+ file is based on the test function requesting this fixture.
+ :param kwargs:
+ Each keyword is the name of a file, while the value of it will
+ be written as contents of the file.
+ :returns:
+ The first created file.
+
+ Examples:
+
+ .. code-block:: python
+
+ pytester.makefile(".txt", "line1", "line2")
+
+ pytester.makefile(".ini", pytest="[pytest]\naddopts=-rs\n")
+
+ To create binary files, use :meth:`pathlib.Path.write_bytes` directly:
+
+ .. code-block:: python
+
+ filename = pytester.path.joinpath("foo.bin")
+ filename.write_bytes(b"...")
+ """
+ return self._makefile(ext, args, kwargs)
+
+ def makeconftest(self, source: str) -> Path:
+ """Write a conftest.py file.
+
+ :param source: The contents.
+ :returns: The conftest.py file.
+ """
+ return self.makepyfile(conftest=source)
+
+ def makeini(self, source: str) -> Path:
+ """Write a tox.ini file.
+
+ :param source: The contents.
+ :returns: The tox.ini file.
+ """
+ return self.makefile(".ini", tox=source)
+
+ def getinicfg(self, source: str) -> SectionWrapper:
+ """Return the pytest section from the tox.ini config file."""
+ p = self.makeini(source)
+ return IniConfig(str(p))["pytest"]
+
+ def makepyprojecttoml(self, source: str) -> Path:
+ """Write a pyproject.toml file.
+
+ :param source: The contents.
+ :returns: The pyproject.toml file.
+
+ .. versionadded:: 6.0
+ """
+ return self.makefile(".toml", pyproject=source)
+
+ def makepyfile(self, *args, **kwargs) -> Path:
+ r"""Shortcut for .makefile() with a .py extension.
+
+ Defaults to the test name with a '.py' extension, e.g. test_foobar.py, overwriting
+ existing files.
+
+ Examples:
+
+ .. code-block:: python
+
+ def test_something(pytester):
+ # Initial file is created test_something.py.
+ pytester.makepyfile("foobar")
+ # To create multiple files, pass kwargs accordingly.
+ pytester.makepyfile(custom="foobar")
+ # At this point, both 'test_something.py' & 'custom.py' exist in the test directory.
+
+ """
+ return self._makefile(".py", args, kwargs)
+
+ def maketxtfile(self, *args, **kwargs) -> Path:
+ r"""Shortcut for .makefile() with a .txt extension.
+
+ Defaults to the test name with a '.txt' extension, e.g. test_foobar.txt, overwriting
+ existing files.
+
+ Examples:
+
+ .. code-block:: python
+
+ def test_something(pytester):
+ # Initial file is created test_something.txt.
+ pytester.maketxtfile("foobar")
+ # To create multiple files, pass kwargs accordingly.
+ pytester.maketxtfile(custom="foobar")
+ # At this point, both 'test_something.txt' & 'custom.txt' exist in the test directory.
+
+ """
+ return self._makefile(".txt", args, kwargs)
+
+ def syspathinsert(
+ self, path: Optional[Union[str, "os.PathLike[str]"]] = None
+ ) -> None:
+ """Prepend a directory to sys.path, defaults to :attr:`path`.
+
+ This is undone automatically when this object dies at the end of each
+ test.
+
+ :param path:
+ The path.
+ """
+ if path is None:
+ path = self.path
+
+ self._monkeypatch.syspath_prepend(str(path))
+
+ def mkdir(self, name: Union[str, "os.PathLike[str]"]) -> Path:
+ """Create a new (sub)directory.
+
+ :param name:
+ The name of the directory, relative to the pytester path.
+ :returns:
+ The created directory.
+ """
+ p = self.path / name
+ p.mkdir()
+ return p
+
+ def mkpydir(self, name: Union[str, "os.PathLike[str]"]) -> Path:
+ """Create a new python package.
+
+ This creates a (sub)directory with an empty ``__init__.py`` file so it
+ gets recognised as a Python package.
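+
+ A short sketch of the resulting layout (the package name here is
+ purely illustrative):
+
+ .. code-block:: python
+
+ pkg = pytester.mkpydir("mypkg")
+ assert pkg.joinpath("__init__.py").is_file()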
+ """ + p = self.path / name + p.mkdir() + p.joinpath("__init__.py").touch() + return p + + def copy_example(self, name: Optional[str] = None) -> Path: + """Copy file from project's directory into the testdir. + + :param name: + The name of the file to copy. + :return: + Path to the copied directory (inside ``self.path``). + """ + example_dir_ = self._request.config.getini("pytester_example_dir") + if example_dir_ is None: + raise ValueError("pytester_example_dir is unset, can't copy examples") + example_dir: Path = self._request.config.rootpath / example_dir_ + + for extra_element in self._request.node.iter_markers("pytester_example_path"): + assert extra_element.args + example_dir = example_dir.joinpath(*extra_element.args) + + if name is None: + func_name = self._name + maybe_dir = example_dir / func_name + maybe_file = example_dir / (func_name + ".py") + + if maybe_dir.is_dir(): + example_path = maybe_dir + elif maybe_file.is_file(): + example_path = maybe_file + else: + raise LookupError( + f"{func_name} can't be found as module or package in {example_dir}" + ) + else: + example_path = example_dir.joinpath(name) + + if example_path.is_dir() and not example_path.joinpath("__init__.py").is_file(): + copytree(example_path, self.path) + return self.path + elif example_path.is_file(): + result = self.path.joinpath(example_path.name) + shutil.copy(example_path, result) + return result + else: + raise LookupError( + f'example "{example_path}" is not found as a file or directory' + ) + + def getnode( + self, config: Config, arg: Union[str, "os.PathLike[str]"] + ) -> Union[Collector, Item]: + """Get the collection node of a file. + + :param config: + A pytest config. + See :py:meth:`parseconfig` and :py:meth:`parseconfigure` for creating it. + :param arg: + Path to the file. + :returns: + The node. + """ + session = Session.from_config(config) + assert "::" not in str(arg) + p = Path(os.path.abspath(arg)) + config.hook.pytest_sessionstart(session=session) + res = session.perform_collect([str(p)], genitems=False)[0] + config.hook.pytest_sessionfinish(session=session, exitstatus=ExitCode.OK) + return res + + def getpathnode( + self, path: Union[str, "os.PathLike[str]"] + ) -> Union[Collector, Item]: + """Return the collection node of a file. + + This is like :py:meth:`getnode` but uses :py:meth:`parseconfigure` to + create the (configured) pytest Config instance. + + :param path: + Path to the file. + :returns: + The node. + """ + path = Path(path) + config = self.parseconfigure(path) + session = Session.from_config(config) + x = bestrelpath(session.path, path) + config.hook.pytest_sessionstart(session=session) + res = session.perform_collect([x], genitems=False)[0] + config.hook.pytest_sessionfinish(session=session, exitstatus=ExitCode.OK) + return res + + def genitems(self, colitems: Sequence[Union[Item, Collector]]) -> List[Item]: + """Generate all test items from a collection node. + + This recurses into the collection node and returns a list of all the + test items contained within. + + :param colitems: + The collection nodes. + :returns: + The collected items. + """ + session = colitems[0].session + result: List[Item] = [] + for colitem in colitems: + result.extend(session.genitems(colitem)) + return result + + def runitem(self, source: str) -> Any: + """Run the "test_func" Item. + + The calling test instance (class containing the test method) must + provide a ``.getrunner()`` method which should return a runner which + can run the test protocol for a single item, e.g. 
+ :py:func:`_pytest.runner.runtestprotocol`. + """ + # used from runner functional tests + item = self.getitem(source) + # the test class where we are called from wants to provide the runner + testclassinstance = self._request.instance + runner = testclassinstance.getrunner() + return runner(item) + + def inline_runsource(self, source: str, *cmdlineargs) -> HookRecorder: + """Run a test module in process using ``pytest.main()``. + + This run writes "source" into a temporary file and runs + ``pytest.main()`` on it, returning a :py:class:`HookRecorder` instance + for the result. + + :param source: The source code of the test module. + :param cmdlineargs: Any extra command line arguments to use. + """ + p = self.makepyfile(source) + values = list(cmdlineargs) + [p] + return self.inline_run(*values) + + def inline_genitems(self, *args) -> Tuple[List[Item], HookRecorder]: + """Run ``pytest.main(['--collectonly'])`` in-process. + + Runs the :py:func:`pytest.main` function to run all of pytest inside + the test process itself like :py:meth:`inline_run`, but returns a + tuple of the collected items and a :py:class:`HookRecorder` instance. + """ + rec = self.inline_run("--collect-only", *args) + items = [x.item for x in rec.getcalls("pytest_itemcollected")] + return items, rec + + def inline_run( + self, + *args: Union[str, "os.PathLike[str]"], + plugins=(), + no_reraise_ctrlc: bool = False, + ) -> HookRecorder: + """Run ``pytest.main()`` in-process, returning a HookRecorder. + + Runs the :py:func:`pytest.main` function to run all of pytest inside + the test process itself. This means it can return a + :py:class:`HookRecorder` instance which gives more detailed results + from that run than can be done by matching stdout/stderr from + :py:meth:`runpytest`. + + :param args: + Command line arguments to pass to :py:func:`pytest.main`. + :param plugins: + Extra plugin instances the ``pytest.main()`` instance should use. + :param no_reraise_ctrlc: + Typically we reraise keyboard interrupts from the child run. If + True, the KeyboardInterrupt exception is captured. + """ + # (maybe a cpython bug?) the importlib cache sometimes isn't updated + # properly between file creation and inline_run (especially if imports + # are interspersed with file creation) + importlib.invalidate_caches() + + plugins = list(plugins) + finalizers = [] + try: + # Any sys.module or sys.path changes done while running pytest + # inline should be reverted after the test run completes to avoid + # clashing with later inline tests run within the same pytest test, + # e.g. just because they use matching test module names. + finalizers.append(self.__take_sys_modules_snapshot().restore) + finalizers.append(SysPathsSnapshot().restore) + + # Important note: + # - our tests should not leave any other references/registrations + # laying around other than possibly loaded test modules + # referenced from sys.modules, as nothing will clean those up + # automatically + + rec = [] + + class Collect: + def pytest_configure(x, config: Config) -> None: + rec.append(self.make_hook_recorder(config.pluginmanager)) + + plugins.append(Collect()) + ret = main([str(x) for x in args], plugins=plugins) + if len(rec) == 1: + reprec = rec.pop() + else: + + class reprec: # type: ignore + pass + + reprec.ret = ret + + # Typically we reraise keyboard interrupts from the child run + # because it's our user requesting interruption of the testing. 
+ if ret == ExitCode.INTERRUPTED and not no_reraise_ctrlc: + calls = reprec.getcalls("pytest_keyboard_interrupt") + if calls and calls[-1].excinfo.type == KeyboardInterrupt: + raise KeyboardInterrupt() + return reprec + finally: + for finalizer in finalizers: + finalizer() + + def runpytest_inprocess( + self, *args: Union[str, "os.PathLike[str]"], **kwargs: Any + ) -> RunResult: + """Return result of running pytest in-process, providing a similar + interface to what self.runpytest() provides.""" + syspathinsert = kwargs.pop("syspathinsert", False) + + if syspathinsert: + self.syspathinsert() + now = timing.time() + capture = _get_multicapture("sys") + capture.start_capturing() + try: + try: + reprec = self.inline_run(*args, **kwargs) + except SystemExit as e: + ret = e.args[0] + try: + ret = ExitCode(e.args[0]) + except ValueError: + pass + + class reprec: # type: ignore + ret = ret + + except Exception: + traceback.print_exc() + + class reprec: # type: ignore + ret = ExitCode(3) + + finally: + out, err = capture.readouterr() + capture.stop_capturing() + sys.stdout.write(out) + sys.stderr.write(err) + + assert reprec.ret is not None + res = RunResult( + reprec.ret, out.splitlines(), err.splitlines(), timing.time() - now + ) + res.reprec = reprec # type: ignore + return res + + def runpytest( + self, *args: Union[str, "os.PathLike[str]"], **kwargs: Any + ) -> RunResult: + """Run pytest inline or in a subprocess, depending on the command line + option "--runpytest" and return a :py:class:`~pytest.RunResult`.""" + new_args = self._ensure_basetemp(args) + if self._method == "inprocess": + return self.runpytest_inprocess(*new_args, **kwargs) + elif self._method == "subprocess": + return self.runpytest_subprocess(*new_args, **kwargs) + raise RuntimeError(f"Unrecognized runpytest option: {self._method}") + + def _ensure_basetemp( + self, args: Sequence[Union[str, "os.PathLike[str]"]] + ) -> List[Union[str, "os.PathLike[str]"]]: + new_args = list(args) + for x in new_args: + if str(x).startswith("--basetemp"): + break + else: + new_args.append("--basetemp=%s" % self.path.parent.joinpath("basetemp")) + return new_args + + def parseconfig(self, *args: Union[str, "os.PathLike[str]"]) -> Config: + """Return a new pytest :class:`pytest.Config` instance from given + commandline args. + + This invokes the pytest bootstrapping code in _pytest.config to create a + new :py:class:`pytest.PytestPluginManager` and call the + :hook:`pytest_cmdline_parse` hook to create a new :class:`pytest.Config` + instance. + + If :attr:`plugins` has been populated they should be plugin modules + to be registered with the plugin manager. + """ + import _pytest.config + + new_args = self._ensure_basetemp(args) + new_args = [str(x) for x in new_args] + + config = _pytest.config._prepareconfig(new_args, self.plugins) # type: ignore[arg-type] + # we don't know what the test will do with this half-setup config + # object and thus we make sure it gets unconfigured properly in any + # case (otherwise capturing could still be active, for example) + self._request.addfinalizer(config._ensure_unconfigure) + return config + + def parseconfigure(self, *args: Union[str, "os.PathLike[str]"]) -> Config: + """Return a new pytest configured Config instance. + + Returns a new :py:class:`pytest.Config` instance like + :py:meth:`parseconfig`, but also calls the :hook:`pytest_configure` + hook. 
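+
+ A minimal sketch of the difference (the ``-q`` flag is just an
+ illustrative argument):
+
+ .. code-block:: python
+
+ config = pytester.parseconfigure("-q")
+ # unlike parseconfig(), the pytest_configure hook has now run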
+ """ + config = self.parseconfig(*args) + config._do_configure() + return config + + def getitem( + self, source: Union[str, "os.PathLike[str]"], funcname: str = "test_func" + ) -> Item: + """Return the test item for a test function. + + Writes the source to a python file and runs pytest's collection on + the resulting module, returning the test item for the requested + function name. + + :param source: + The module source. + :param funcname: + The name of the test function for which to return a test item. + :returns: + The test item. + """ + items = self.getitems(source) + for item in items: + if item.name == funcname: + return item + assert 0, "{!r} item not found in module:\n{}\nitems: {}".format( + funcname, source, items + ) + + def getitems(self, source: Union[str, "os.PathLike[str]"]) -> List[Item]: + """Return all test items collected from the module. + + Writes the source to a Python file and runs pytest's collection on + the resulting module, returning all test items contained within. + """ + modcol = self.getmodulecol(source) + return self.genitems([modcol]) + + def getmodulecol( + self, + source: Union[str, "os.PathLike[str]"], + configargs=(), + *, + withinit: bool = False, + ): + """Return the module collection node for ``source``. + + Writes ``source`` to a file using :py:meth:`makepyfile` and then + runs the pytest collection on it, returning the collection node for the + test module. + + :param source: + The source code of the module to collect. + + :param configargs: + Any extra arguments to pass to :py:meth:`parseconfigure`. + + :param withinit: + Whether to also write an ``__init__.py`` file to the same + directory to ensure it is a package. + """ + if isinstance(source, os.PathLike): + path = self.path.joinpath(source) + assert not withinit, "not supported for paths" + else: + kw = {self._name: str(source)} + path = self.makepyfile(**kw) + if withinit: + self.makepyfile(__init__="#") + self.config = config = self.parseconfigure(path, *configargs) + return self.getnode(config, path) + + def collect_by_name( + self, modcol: Collector, name: str + ) -> Optional[Union[Item, Collector]]: + """Return the collection node for name from the module collection. + + Searches a module collection node for a collection node matching the + given name. + + :param modcol: A module collection node; see :py:meth:`getmodulecol`. + :param name: The name of the node to return. + """ + if modcol not in self._mod_collections: + self._mod_collections[modcol] = list(modcol.collect()) + for colitem in self._mod_collections[modcol]: + if colitem.name == name: + return colitem + return None + + def popen( + self, + cmdargs: Sequence[Union[str, "os.PathLike[str]"]], + stdout: Union[int, TextIO] = subprocess.PIPE, + stderr: Union[int, TextIO] = subprocess.PIPE, + stdin: Union[NotSetType, bytes, IO[Any], int] = CLOSE_STDIN, + **kw, + ): + """Invoke :py:class:`subprocess.Popen`. + + Calls :py:class:`subprocess.Popen` making sure the current working + directory is in ``PYTHONPATH``. + + You probably want to use :py:meth:`run` instead. 
+ """ + env = os.environ.copy() + env["PYTHONPATH"] = os.pathsep.join( + filter(None, [os.getcwd(), env.get("PYTHONPATH", "")]) + ) + kw["env"] = env + + if stdin is self.CLOSE_STDIN: + kw["stdin"] = subprocess.PIPE + elif isinstance(stdin, bytes): + kw["stdin"] = subprocess.PIPE + else: + kw["stdin"] = stdin + + popen = subprocess.Popen(cmdargs, stdout=stdout, stderr=stderr, **kw) + if stdin is self.CLOSE_STDIN: + assert popen.stdin is not None + popen.stdin.close() + elif isinstance(stdin, bytes): + assert popen.stdin is not None + popen.stdin.write(stdin) + + return popen + + def run( + self, + *cmdargs: Union[str, "os.PathLike[str]"], + timeout: Optional[float] = None, + stdin: Union[NotSetType, bytes, IO[Any], int] = CLOSE_STDIN, + ) -> RunResult: + """Run a command with arguments. + + Run a process using :py:class:`subprocess.Popen` saving the stdout and + stderr. + + :param cmdargs: + The sequence of arguments to pass to :py:class:`subprocess.Popen`, + with path-like objects being converted to :py:class:`str` + automatically. + :param timeout: + The period in seconds after which to timeout and raise + :py:class:`Pytester.TimeoutExpired`. + :param stdin: + Optional standard input. + + - If it is :py:attr:`CLOSE_STDIN` (Default), then this method calls + :py:class:`subprocess.Popen` with ``stdin=subprocess.PIPE``, and + the standard input is closed immediately after the new command is + started. + + - If it is of type :py:class:`bytes`, these bytes are sent to the + standard input of the command. + + - Otherwise, it is passed through to :py:class:`subprocess.Popen`. + For further information in this case, consult the document of the + ``stdin`` parameter in :py:class:`subprocess.Popen`. + :returns: + The result. + """ + __tracebackhide__ = True + + cmdargs = tuple(os.fspath(arg) for arg in cmdargs) + p1 = self.path.joinpath("stdout") + p2 = self.path.joinpath("stderr") + print("running:", *cmdargs) + print(" in:", Path.cwd()) + + with p1.open("w", encoding="utf8") as f1, p2.open("w", encoding="utf8") as f2: + now = timing.time() + popen = self.popen( + cmdargs, + stdin=stdin, + stdout=f1, + stderr=f2, + close_fds=(sys.platform != "win32"), + ) + if popen.stdin is not None: + popen.stdin.close() + + def handle_timeout() -> None: + __tracebackhide__ = True + + timeout_message = ( + "{seconds} second timeout expired running:" + " {command}".format(seconds=timeout, command=cmdargs) + ) + + popen.kill() + popen.wait() + raise self.TimeoutExpired(timeout_message) + + if timeout is None: + ret = popen.wait() + else: + try: + ret = popen.wait(timeout) + except subprocess.TimeoutExpired: + handle_timeout() + + with p1.open(encoding="utf8") as f1, p2.open(encoding="utf8") as f2: + out = f1.read().splitlines() + err = f2.read().splitlines() + + self._dump_lines(out, sys.stdout) + self._dump_lines(err, sys.stderr) + + with contextlib.suppress(ValueError): + ret = ExitCode(ret) + return RunResult(ret, out, err, timing.time() - now) + + def _dump_lines(self, lines, fp): + try: + for line in lines: + print(line, file=fp) + except UnicodeEncodeError: + print(f"couldn't print to {fp} because of encoding") + + def _getpytestargs(self) -> Tuple[str, ...]: + return sys.executable, "-mpytest" + + def runpython(self, script: "os.PathLike[str]") -> RunResult: + """Run a python script using sys.executable as interpreter.""" + return self.run(sys.executable, script) + + def runpython_c(self, command: str) -> RunResult: + """Run ``python -c "command"``.""" + return self.run(sys.executable, "-c", command) + + 
def runpytest_subprocess(
+ self, *args: Union[str, "os.PathLike[str]"], timeout: Optional[float] = None
+ ) -> RunResult:
+ """Run pytest as a subprocess with given arguments.
+
+ Any plugins added to the :py:attr:`plugins` list will be added using the
+ ``-p`` command line option. Additionally ``--basetemp`` is used to put
+ any temporary files and directories in a numbered directory prefixed
+ with "runpytest-" to not conflict with the normal numbered pytest
+ location for temporary files and directories.
+
+ :param args:
+ The sequence of arguments to pass to the pytest subprocess.
+ :param timeout:
+ The period in seconds after which to timeout and raise
+ :py:class:`Pytester.TimeoutExpired`.
+ :returns:
+ The result.
+ """
+ __tracebackhide__ = True
+ p = make_numbered_dir(root=self.path, prefix="runpytest-", mode=0o700)
+ args = ("--basetemp=%s" % p,) + args
+ plugins = [x for x in self.plugins if isinstance(x, str)]
+ if plugins:
+ args = ("-p", plugins[0]) + args
+ args = self._getpytestargs() + args
+ return self.run(*args, timeout=timeout)
+
+ def spawn_pytest(
+ self, string: str, expect_timeout: float = 10.0
+ ) -> "pexpect.spawn":
+ """Run pytest using pexpect.
+
+ This makes sure to use the right pytest and sets up the temporary
+ directory locations.
+
+ The pexpect child is returned.
+ """
+ basetemp = self.path / "temp-pexpect"
+ basetemp.mkdir(mode=0o700)
+ invoke = " ".join(map(str, self._getpytestargs()))
+ cmd = f"{invoke} --basetemp={basetemp} {string}"
+ return self.spawn(cmd, expect_timeout=expect_timeout)
+
+ def spawn(self, cmd: str, expect_timeout: float = 10.0) -> "pexpect.spawn":
+ """Run a command using pexpect.
+
+ The pexpect child is returned.
+ """
+ pexpect = importorskip("pexpect", "3.0")
+ if hasattr(sys, "pypy_version_info") and "64" in platform.machine():
+ skip("pypy-64 bit not supported")
+ if not hasattr(pexpect, "spawn"):
+ skip("pexpect.spawn not available")
+ logfile = self.path.joinpath("spawn.out").open("wb")
+
+ child = pexpect.spawn(cmd, logfile=logfile, timeout=expect_timeout)
+ self._request.addfinalizer(logfile.close)
+ return child
+
+
+class LineComp:
+ def __init__(self) -> None:
+ self.stringio = StringIO()
+ """:class:`python:io.StringIO()` instance used for input."""
+
+ def assert_contains_lines(self, lines2: Sequence[str]) -> None:
+ """Assert that ``lines2`` are contained (linearly) in :attr:`stringio`'s value.
+
+ Lines are matched using :func:`LineMatcher.fnmatch_lines <pytest.LineMatcher.fnmatch_lines>`.
+ """
+ __tracebackhide__ = True
+ val = self.stringio.getvalue()
+ self.stringio.truncate(0)
+ self.stringio.seek(0)
+ lines1 = val.split("\n")
+ LineMatcher(lines1).fnmatch_lines(lines2)
+
+
+class LineMatcher:
+ """Flexible matching of text.
+
+ This is a convenience class to test large texts like the output of
+ commands.
+
+ The constructor takes a list of lines without their trailing newlines, i.e.
+ ``text.splitlines()``.
+ """
+
+ def __init__(self, lines: List[str]) -> None:
+ self.lines = lines
+ self._log_output: List[str] = []
+
+ def __str__(self) -> str:
+ """Return the entire original text.
+
+ .. versionadded:: 6.2
+ You can use :meth:`str` in older versions.
+ """ + return "\n".join(self.lines) + + def _getlines(self, lines2: Union[str, Sequence[str], Source]) -> Sequence[str]: + if isinstance(lines2, str): + lines2 = Source(lines2) + if isinstance(lines2, Source): + lines2 = lines2.strip().lines + return lines2 + + def fnmatch_lines_random(self, lines2: Sequence[str]) -> None: + """Check lines exist in the output in any order (using :func:`python:fnmatch.fnmatch`).""" + __tracebackhide__ = True + self._match_lines_random(lines2, fnmatch) + + def re_match_lines_random(self, lines2: Sequence[str]) -> None: + """Check lines exist in the output in any order (using :func:`python:re.match`).""" + __tracebackhide__ = True + self._match_lines_random(lines2, lambda name, pat: bool(re.match(pat, name))) + + def _match_lines_random( + self, lines2: Sequence[str], match_func: Callable[[str, str], bool] + ) -> None: + __tracebackhide__ = True + lines2 = self._getlines(lines2) + for line in lines2: + for x in self.lines: + if line == x or match_func(x, line): + self._log("matched: ", repr(line)) + break + else: + msg = "line %r not found in output" % line + self._log(msg) + self._fail(msg) + + def get_lines_after(self, fnline: str) -> Sequence[str]: + """Return all lines following the given line in the text. + + The given line can contain glob wildcards. + """ + for i, line in enumerate(self.lines): + if fnline == line or fnmatch(line, fnline): + return self.lines[i + 1 :] + raise ValueError("line %r not found in output" % fnline) + + def _log(self, *args) -> None: + self._log_output.append(" ".join(str(x) for x in args)) + + @property + def _log_text(self) -> str: + return "\n".join(self._log_output) + + def fnmatch_lines( + self, lines2: Sequence[str], *, consecutive: bool = False + ) -> None: + """Check lines exist in the output (using :func:`python:fnmatch.fnmatch`). + + The argument is a list of lines which have to match and can use glob + wildcards. If they do not match a pytest.fail() is called. The + matches and non-matches are also shown as part of the error message. + + :param lines2: String patterns to match. + :param consecutive: Match lines consecutively? + """ + __tracebackhide__ = True + self._match_lines(lines2, fnmatch, "fnmatch", consecutive=consecutive) + + def re_match_lines( + self, lines2: Sequence[str], *, consecutive: bool = False + ) -> None: + """Check lines exist in the output (using :func:`python:re.match`). + + The argument is a list of lines which have to match using ``re.match``. + If they do not match a pytest.fail() is called. + + The matches and non-matches are also shown as part of the error message. + + :param lines2: string patterns to match. + :param consecutive: match lines consecutively? + """ + __tracebackhide__ = True + self._match_lines( + lines2, + lambda name, pat: bool(re.match(pat, name)), + "re.match", + consecutive=consecutive, + ) + + def _match_lines( + self, + lines2: Sequence[str], + match_func: Callable[[str, str], bool], + match_nickname: str, + *, + consecutive: bool = False, + ) -> None: + """Underlying implementation of ``fnmatch_lines`` and ``re_match_lines``. + + :param Sequence[str] lines2: + List of string patterns to match. The actual format depends on + ``match_func``. + :param match_func: + A callable ``match_func(line, pattern)`` where line is the + captured line from stdout/stderr and pattern is the matching + pattern. + :param str match_nickname: + The nickname for the match function that will be logged to stdout + when a match occurs. + :param consecutive: + Match lines consecutively? 
+ """ + if not isinstance(lines2, collections.abc.Sequence): + raise TypeError(f"invalid type for lines2: {type(lines2).__name__}") + lines2 = self._getlines(lines2) + lines1 = self.lines[:] + extralines = [] + __tracebackhide__ = True + wnick = len(match_nickname) + 1 + started = False + for line in lines2: + nomatchprinted = False + while lines1: + nextline = lines1.pop(0) + if line == nextline: + self._log("exact match:", repr(line)) + started = True + break + elif match_func(nextline, line): + self._log("%s:" % match_nickname, repr(line)) + self._log( + "{:>{width}}".format("with:", width=wnick), repr(nextline) + ) + started = True + break + else: + if consecutive and started: + msg = f"no consecutive match: {line!r}" + self._log(msg) + self._log( + "{:>{width}}".format("with:", width=wnick), repr(nextline) + ) + self._fail(msg) + if not nomatchprinted: + self._log( + "{:>{width}}".format("nomatch:", width=wnick), repr(line) + ) + nomatchprinted = True + self._log("{:>{width}}".format("and:", width=wnick), repr(nextline)) + extralines.append(nextline) + else: + msg = f"remains unmatched: {line!r}" + self._log(msg) + self._fail(msg) + self._log_output = [] + + def no_fnmatch_line(self, pat: str) -> None: + """Ensure captured lines do not match the given pattern, using ``fnmatch.fnmatch``. + + :param str pat: The pattern to match lines. + """ + __tracebackhide__ = True + self._no_match_line(pat, fnmatch, "fnmatch") + + def no_re_match_line(self, pat: str) -> None: + """Ensure captured lines do not match the given pattern, using ``re.match``. + + :param str pat: The regular expression to match lines. + """ + __tracebackhide__ = True + self._no_match_line( + pat, lambda name, pat: bool(re.match(pat, name)), "re.match" + ) + + def _no_match_line( + self, pat: str, match_func: Callable[[str, str], bool], match_nickname: str + ) -> None: + """Ensure captured lines does not have a the given pattern, using ``fnmatch.fnmatch``. + + :param str pat: The pattern to match lines. + """ + __tracebackhide__ = True + nomatch_printed = False + wnick = len(match_nickname) + 1 + for line in self.lines: + if match_func(line, pat): + msg = f"{match_nickname}: {pat!r}" + self._log(msg) + self._log("{:>{width}}".format("with:", width=wnick), repr(line)) + self._fail(msg) + else: + if not nomatch_printed: + self._log("{:>{width}}".format("nomatch:", width=wnick), repr(pat)) + nomatch_printed = True + self._log("{:>{width}}".format("and:", width=wnick), repr(line)) + self._log_output = [] + + def _fail(self, msg: str) -> None: + __tracebackhide__ = True + log_text = self._log_text + self._log_output = [] + fail(log_text) + + def str(self) -> str: + """Return the entire original text.""" + return str(self) diff --git a/env/lib/python3.8/site-packages/_pytest/pytester_assertions.py b/env/lib/python3.8/site-packages/_pytest/pytester_assertions.py new file mode 100644 index 00000000..657e4db5 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/pytester_assertions.py @@ -0,0 +1,75 @@ +"""Helper plugin for pytester; should not be loaded on its own.""" +# This plugin contains assertions used by pytester. pytester cannot +# contain them itself, since it is imported by the `pytest` module, +# hence cannot be subject to assertion rewriting, which requires a +# module to not be already imported. 
+from typing import Dict +from typing import Optional +from typing import Sequence +from typing import Tuple +from typing import Union + +from _pytest.reports import CollectReport +from _pytest.reports import TestReport + + +def assertoutcome( + outcomes: Tuple[ + Sequence[TestReport], + Sequence[Union[CollectReport, TestReport]], + Sequence[Union[CollectReport, TestReport]], + ], + passed: int = 0, + skipped: int = 0, + failed: int = 0, +) -> None: + __tracebackhide__ = True + + realpassed, realskipped, realfailed = outcomes + obtained = { + "passed": len(realpassed), + "skipped": len(realskipped), + "failed": len(realfailed), + } + expected = {"passed": passed, "skipped": skipped, "failed": failed} + assert obtained == expected, outcomes + + +def assert_outcomes( + outcomes: Dict[str, int], + passed: int = 0, + skipped: int = 0, + failed: int = 0, + errors: int = 0, + xpassed: int = 0, + xfailed: int = 0, + warnings: Optional[int] = None, + deselected: Optional[int] = None, +) -> None: + """Assert that the specified outcomes appear with the respective + numbers (0 means it didn't occur) in the text output from a test run.""" + __tracebackhide__ = True + + obtained = { + "passed": outcomes.get("passed", 0), + "skipped": outcomes.get("skipped", 0), + "failed": outcomes.get("failed", 0), + "errors": outcomes.get("errors", 0), + "xpassed": outcomes.get("xpassed", 0), + "xfailed": outcomes.get("xfailed", 0), + } + expected = { + "passed": passed, + "skipped": skipped, + "failed": failed, + "errors": errors, + "xpassed": xpassed, + "xfailed": xfailed, + } + if warnings is not None: + obtained["warnings"] = outcomes.get("warnings", 0) + expected["warnings"] = warnings + if deselected is not None: + obtained["deselected"] = outcomes.get("deselected", 0) + expected["deselected"] = deselected + assert obtained == expected diff --git a/env/lib/python3.8/site-packages/_pytest/python.py b/env/lib/python3.8/site-packages/_pytest/python.py new file mode 100644 index 00000000..1e30d42c --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/python.py @@ -0,0 +1,1835 @@ +"""Python test discovery, setup and run of test functions.""" +import enum +import fnmatch +import inspect +import itertools +import os +import sys +import types +import warnings +from collections import Counter +from collections import defaultdict +from functools import partial +from pathlib import Path +from typing import Any +from typing import Callable +from typing import Dict +from typing import Generator +from typing import Iterable +from typing import Iterator +from typing import List +from typing import Mapping +from typing import Optional +from typing import Pattern +from typing import Sequence +from typing import Set +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +import attr + +import _pytest +from _pytest import fixtures +from _pytest import nodes +from _pytest._code import filter_traceback +from _pytest._code import getfslineno +from _pytest._code.code import ExceptionInfo +from _pytest._code.code import TerminalRepr +from _pytest._io import TerminalWriter +from _pytest._io.saferepr import saferepr +from _pytest.compat import ascii_escaped +from _pytest.compat import assert_never +from _pytest.compat import final +from _pytest.compat import get_default_arg_names +from _pytest.compat import get_real_func +from _pytest.compat import getimfunc +from _pytest.compat import getlocation +from _pytest.compat import is_async_function +from _pytest.compat import is_generator +from 
_pytest.compat import LEGACY_PATH +from _pytest.compat import NOTSET +from _pytest.compat import safe_getattr +from _pytest.compat import safe_isclass +from _pytest.compat import STRING_TYPES +from _pytest.config import Config +from _pytest.config import ExitCode +from _pytest.config import hookimpl +from _pytest.config.argparsing import Parser +from _pytest.deprecated import check_ispytest +from _pytest.deprecated import FSCOLLECTOR_GETHOOKPROXY_ISINITPATH +from _pytest.deprecated import INSTANCE_COLLECTOR +from _pytest.deprecated import NOSE_SUPPORT_METHOD +from _pytest.fixtures import FuncFixtureInfo +from _pytest.main import Session +from _pytest.mark import MARK_GEN +from _pytest.mark import ParameterSet +from _pytest.mark.structures import get_unpacked_marks +from _pytest.mark.structures import Mark +from _pytest.mark.structures import MarkDecorator +from _pytest.mark.structures import normalize_mark_list +from _pytest.outcomes import fail +from _pytest.outcomes import skip +from _pytest.pathlib import bestrelpath +from _pytest.pathlib import fnmatch_ex +from _pytest.pathlib import import_path +from _pytest.pathlib import ImportPathMismatchError +from _pytest.pathlib import parts +from _pytest.pathlib import visit +from _pytest.scope import Scope +from _pytest.warning_types import PytestCollectionWarning +from _pytest.warning_types import PytestReturnNotNoneWarning +from _pytest.warning_types import PytestUnhandledCoroutineWarning + +if TYPE_CHECKING: + from typing_extensions import Literal + + from _pytest.scope import _ScopeName + + +_PYTEST_DIR = Path(_pytest.__file__).parent + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("general") + group.addoption( + "--fixtures", + "--funcargs", + action="store_true", + dest="showfixtures", + default=False, + help="Show available fixtures, sorted by plugin appearance " + "(fixtures with leading '_' are only shown with '-v')", + ) + group.addoption( + "--fixtures-per-test", + action="store_true", + dest="show_fixtures_per_test", + default=False, + help="Show fixtures per test", + ) + parser.addini( + "python_files", + type="args", + # NOTE: default is also used in AssertionRewritingHook. + default=["test_*.py", "*_test.py"], + help="Glob-style file patterns for Python test module discovery", + ) + parser.addini( + "python_classes", + type="args", + default=["Test"], + help="Prefixes or glob names for Python test class discovery", + ) + parser.addini( + "python_functions", + type="args", + default=["test"], + help="Prefixes or glob names for Python test function and method discovery", + ) + parser.addini( + "disable_test_id_escaping_and_forfeit_all_rights_to_community_support", + type="bool", + default=False, + help="Disable string escape non-ASCII characters, might cause unwanted " + "side effects(use at your own risk)", + ) + + +def pytest_cmdline_main(config: Config) -> Optional[Union[int, ExitCode]]: + if config.option.showfixtures: + showfixtures(config) + return 0 + if config.option.show_fixtures_per_test: + show_fixtures_per_test(config) + return 0 + return None + + +def pytest_generate_tests(metafunc: "Metafunc") -> None: + for marker in metafunc.definition.iter_markers(name="parametrize"): + metafunc.parametrize(*marker.args, **marker.kwargs, _param_mark=marker) + + +def pytest_configure(config: Config) -> None: + config.addinivalue_line( + "markers", + "parametrize(argnames, argvalues): call a test function multiple " + "times passing in different arguments in turn. 
argvalues generally " + "needs to be a list of values if argnames specifies only one name " + "or a list of tuples of values if argnames specifies multiple names. " + "Example: @parametrize('arg1', [1,2]) would lead to two calls of the " + "decorated test function, one with arg1=1 and another with arg1=2." + "see https://docs.pytest.org/en/stable/how-to/parametrize.html for more info " + "and examples.", + ) + config.addinivalue_line( + "markers", + "usefixtures(fixturename1, fixturename2, ...): mark tests as needing " + "all of the specified fixtures. see " + "https://docs.pytest.org/en/stable/explanation/fixtures.html#usefixtures ", + ) + + +def async_warn_and_skip(nodeid: str) -> None: + msg = "async def functions are not natively supported and have been skipped.\n" + msg += ( + "You need to install a suitable plugin for your async framework, for example:\n" + ) + msg += " - anyio\n" + msg += " - pytest-asyncio\n" + msg += " - pytest-tornasync\n" + msg += " - pytest-trio\n" + msg += " - pytest-twisted" + warnings.warn(PytestUnhandledCoroutineWarning(msg.format(nodeid))) + skip(reason="async def function and no async plugin installed (see warnings)") + + +@hookimpl(trylast=True) +def pytest_pyfunc_call(pyfuncitem: "Function") -> Optional[object]: + testfunction = pyfuncitem.obj + if is_async_function(testfunction): + async_warn_and_skip(pyfuncitem.nodeid) + funcargs = pyfuncitem.funcargs + testargs = {arg: funcargs[arg] for arg in pyfuncitem._fixtureinfo.argnames} + result = testfunction(**testargs) + if hasattr(result, "__await__") or hasattr(result, "__aiter__"): + async_warn_and_skip(pyfuncitem.nodeid) + elif result is not None: + warnings.warn( + PytestReturnNotNoneWarning( + f"Expected None, but {pyfuncitem.nodeid} returned {result!r}, which will be an error in a " + "future version of pytest. Did you mean to use `assert` instead of `return`?" + ) + ) + return True + + +def pytest_collect_file(file_path: Path, parent: nodes.Collector) -> Optional["Module"]: + if file_path.suffix == ".py": + if not parent.session.isinitpath(file_path): + if not path_matches_patterns( + file_path, parent.config.getini("python_files") + ["__init__.py"] + ): + return None + ihook = parent.session.gethookproxy(file_path) + module: Module = ihook.pytest_pycollect_makemodule( + module_path=file_path, parent=parent + ) + return module + return None + + +def path_matches_patterns(path: Path, patterns: Iterable[str]) -> bool: + """Return whether path matches any of the patterns in the list of globs given.""" + return any(fnmatch_ex(pattern, path) for pattern in patterns) + + +def pytest_pycollect_makemodule(module_path: Path, parent) -> "Module": + if module_path.name == "__init__.py": + pkg: Package = Package.from_parent(parent, path=module_path) + return pkg + mod: Module = Module.from_parent(parent, path=module_path) + return mod + + +@hookimpl(trylast=True) +def pytest_pycollect_makeitem( + collector: Union["Module", "Class"], name: str, obj: object +) -> Union[None, nodes.Item, nodes.Collector, List[Union[nodes.Item, nodes.Collector]]]: + assert isinstance(collector, (Class, Module)), type(collector) + # Nothing was collected elsewhere, let's do it here. + if safe_isclass(obj): + if collector.istestclass(obj, name): + klass: Class = Class.from_parent(collector, name=name, obj=obj) + return klass + elif collector.istestfunction(obj, name): + # mock seems to store unbound methods (issue473), normalize it. 
+ obj = getattr(obj, "__func__", obj) + # We need to try and unwrap the function if it's a functools.partial + # or a functools.wrapped. + # We mustn't if it's been wrapped with mock.patch (python 2 only). + if not (inspect.isfunction(obj) or inspect.isfunction(get_real_func(obj))): + filename, lineno = getfslineno(obj) + warnings.warn_explicit( + message=PytestCollectionWarning( + "cannot collect %r because it is not a function." % name + ), + category=None, + filename=str(filename), + lineno=lineno + 1, + ) + elif getattr(obj, "__test__", True): + if is_generator(obj): + res: Function = Function.from_parent(collector, name=name) + reason = "yield tests were removed in pytest 4.0 - {name} will be ignored".format( + name=name + ) + res.add_marker(MARK_GEN.xfail(run=False, reason=reason)) + res.warn(PytestCollectionWarning(reason)) + return res + else: + return list(collector._genfunctions(name, obj)) + return None + + +class PyobjMixin(nodes.Node): + """this mix-in inherits from Node to carry over the typing information + + as its intended to always mix in before a node + its position in the mro is unaffected""" + + _ALLOW_MARKERS = True + + @property + def module(self): + """Python module object this node was collected from (can be None).""" + node = self.getparent(Module) + return node.obj if node is not None else None + + @property + def cls(self): + """Python class object this node was collected from (can be None).""" + node = self.getparent(Class) + return node.obj if node is not None else None + + @property + def instance(self): + """Python instance object the function is bound to. + + Returns None if not a test method, e.g. for a standalone test function, + a staticmethod, a class or a module. + """ + node = self.getparent(Function) + return getattr(node.obj, "__self__", None) if node is not None else None + + @property + def obj(self): + """Underlying Python object.""" + obj = getattr(self, "_obj", None) + if obj is None: + self._obj = obj = self._getobj() + # XXX evil hack + # used to avoid Function marker duplication + if self._ALLOW_MARKERS: + self.own_markers.extend(get_unpacked_marks(self.obj)) + # This assumes that `obj` is called before there is a chance + # to add custom keys to `self.keywords`, so no fear of overriding. + self.keywords.update((mark.name, mark) for mark in self.own_markers) + return obj + + @obj.setter + def obj(self, value): + self._obj = value + + def _getobj(self): + """Get the underlying Python object. May be overwritten by subclasses.""" + # TODO: Improve the type of `parent` such that assert/ignore aren't needed. + assert self.parent is not None + obj = self.parent.obj # type: ignore[attr-defined] + return getattr(obj, self.name) + + def getmodpath(self, stopatmodule: bool = True, includemodule: bool = False) -> str: + """Return Python path relative to the containing module.""" + chain = self.listchain() + chain.reverse() + parts = [] + for node in chain: + name = node.name + if isinstance(node, Module): + name = os.path.splitext(name)[0] + if stopatmodule: + if includemodule: + parts.append(name) + break + parts.append(name) + parts.reverse() + return ".".join(parts) + + def reportinfo(self) -> Tuple[Union["os.PathLike[str]", str], Optional[int], str]: + # XXX caching? 
+ obj = self.obj + compat_co_firstlineno = getattr(obj, "compat_co_firstlineno", None) + if isinstance(compat_co_firstlineno, int): + # nose compatibility + file_path = sys.modules[obj.__module__].__file__ + assert file_path is not None + if file_path.endswith(".pyc"): + file_path = file_path[:-1] + path: Union["os.PathLike[str]", str] = file_path + lineno = compat_co_firstlineno + else: + path, lineno = getfslineno(obj) + modpath = self.getmodpath() + assert isinstance(lineno, int) + return path, lineno, modpath + + +# As an optimization, these builtin attribute names are pre-ignored when +# iterating over an object during collection -- the pytest_pycollect_makeitem +# hook is not called for them. +# fmt: off +class _EmptyClass: pass # noqa: E701 +IGNORED_ATTRIBUTES = frozenset.union( # noqa: E305 + frozenset(), + # Module. + dir(types.ModuleType("empty_module")), + # Some extra module attributes the above doesn't catch. + {"__builtins__", "__file__", "__cached__"}, + # Class. + dir(_EmptyClass), + # Instance. + dir(_EmptyClass()), +) +del _EmptyClass +# fmt: on + + +class PyCollector(PyobjMixin, nodes.Collector): + def funcnamefilter(self, name: str) -> bool: + return self._matches_prefix_or_glob_option("python_functions", name) + + def isnosetest(self, obj: object) -> bool: + """Look for the __test__ attribute, which is applied by the + @nose.tools.istest decorator. + """ + # We explicitly check for "is True" here to not mistakenly treat + # classes with a custom __getattr__ returning something truthy (like a + # function) as test classes. + return safe_getattr(obj, "__test__", False) is True + + def classnamefilter(self, name: str) -> bool: + return self._matches_prefix_or_glob_option("python_classes", name) + + def istestfunction(self, obj: object, name: str) -> bool: + if self.funcnamefilter(name) or self.isnosetest(obj): + if isinstance(obj, staticmethod): + # staticmethods need to be unwrapped. + obj = safe_getattr(obj, "__func__", False) + return callable(obj) and fixtures.getfixturemarker(obj) is None + else: + return False + + def istestclass(self, obj: object, name: str) -> bool: + return self.classnamefilter(name) or self.isnosetest(obj) + + def _matches_prefix_or_glob_option(self, option_name: str, name: str) -> bool: + """Check if the given name matches the prefix or glob-pattern defined + in ini configuration.""" + for option in self.config.getini(option_name): + if name.startswith(option): + return True + # Check that name looks like a glob-string before calling fnmatch + # because this is called for every name in each collected module, + # and fnmatch is somewhat expensive to call. + elif ("*" in option or "?" in option or "[" in option) and fnmatch.fnmatch( + name, option + ): + return True + return False + + def collect(self) -> Iterable[Union[nodes.Item, nodes.Collector]]: + if not getattr(self.obj, "__test__", True): + return [] + + # Avoid random getattrs and peek in the __dict__ instead. + dicts = [getattr(self.obj, "__dict__", {})] + if isinstance(self.obj, type): + for basecls in self.obj.__mro__: + dicts.append(basecls.__dict__) + + # In each class, nodes should be definition ordered. + # __dict__ is definition ordered. + seen: Set[str] = set() + dict_values: List[List[Union[nodes.Item, nodes.Collector]]] = [] + ihook = self.ihook + for dic in dicts: + values: List[Union[nodes.Item, nodes.Collector]] = [] + # Note: seems like the dict can change during iteration - + # be careful not to remove the list() without consideration. 
+ for name, obj in list(dic.items()): + if name in IGNORED_ATTRIBUTES: + continue + if name in seen: + continue + seen.add(name) + res = ihook.pytest_pycollect_makeitem( + collector=self, name=name, obj=obj + ) + if res is None: + continue + elif isinstance(res, list): + values.extend(res) + else: + values.append(res) + dict_values.append(values) + + # Between classes in the class hierarchy, reverse-MRO order -- nodes + # inherited from base classes should come before subclasses. + result = [] + for values in reversed(dict_values): + result.extend(values) + return result + + def _genfunctions(self, name: str, funcobj) -> Iterator["Function"]: + modulecol = self.getparent(Module) + assert modulecol is not None + module = modulecol.obj + clscol = self.getparent(Class) + cls = clscol and clscol.obj or None + + definition = FunctionDefinition.from_parent(self, name=name, callobj=funcobj) + fixtureinfo = definition._fixtureinfo + + # pytest_generate_tests impls call metafunc.parametrize() which fills + # metafunc._calls, the outcome of the hook. + metafunc = Metafunc( + definition=definition, + fixtureinfo=fixtureinfo, + config=self.config, + cls=cls, + module=module, + _ispytest=True, + ) + methods = [] + if hasattr(module, "pytest_generate_tests"): + methods.append(module.pytest_generate_tests) + if cls is not None and hasattr(cls, "pytest_generate_tests"): + methods.append(cls().pytest_generate_tests) + self.ihook.pytest_generate_tests.call_extra(methods, dict(metafunc=metafunc)) + + if not metafunc._calls: + yield Function.from_parent(self, name=name, fixtureinfo=fixtureinfo) + else: + # Add funcargs() as fixturedefs to fixtureinfo.arg2fixturedefs. + fm = self.session._fixturemanager + fixtures.add_funcarg_pseudo_fixture_def(self, metafunc, fm) + + # Add_funcarg_pseudo_fixture_def may have shadowed some fixtures + # with direct parametrization, so make sure we update what the + # function really needs. + fixtureinfo.prune_dependency_tree() + + for callspec in metafunc._calls: + subname = f"{name}[{callspec.id}]" + yield Function.from_parent( + self, + name=subname, + callspec=callspec, + fixtureinfo=fixtureinfo, + keywords={callspec.id: True}, + originalname=name, + ) + + +class Module(nodes.File, PyCollector): + """Collector for test classes and functions.""" + + def _getobj(self): + return self._importtestmodule() + + def collect(self) -> Iterable[Union[nodes.Item, nodes.Collector]]: + self._inject_setup_module_fixture() + self._inject_setup_function_fixture() + self.session._fixturemanager.parsefactories(self) + return super().collect() + + def _inject_setup_module_fixture(self) -> None: + """Inject a hidden autouse, module scoped fixture into the collected module object + that invokes setUpModule/tearDownModule if either or both are available. + + Using a fixture to invoke this methods ensures we play nicely and unsurprisingly with + other fixtures (#517). + """ + has_nose = self.config.pluginmanager.has_plugin("nose") + setup_module = _get_first_non_fixture_func( + self.obj, ("setUpModule", "setup_module") + ) + if setup_module is None and has_nose: + # The name "setup" is too common - only treat as fixture if callable. 
+ setup_module = _get_first_non_fixture_func(self.obj, ("setup",)) + if not callable(setup_module): + setup_module = None + teardown_module = _get_first_non_fixture_func( + self.obj, ("tearDownModule", "teardown_module") + ) + if teardown_module is None and has_nose: + teardown_module = _get_first_non_fixture_func(self.obj, ("teardown",)) + # Same as "setup" above - only treat as fixture if callable. + if not callable(teardown_module): + teardown_module = None + + if setup_module is None and teardown_module is None: + return + + @fixtures.fixture( + autouse=True, + scope="module", + # Use a unique name to speed up lookup. + name=f"_xunit_setup_module_fixture_{self.obj.__name__}", + ) + def xunit_setup_module_fixture(request) -> Generator[None, None, None]: + if setup_module is not None: + _call_with_optional_argument(setup_module, request.module) + yield + if teardown_module is not None: + _call_with_optional_argument(teardown_module, request.module) + + self.obj.__pytest_setup_module = xunit_setup_module_fixture + + def _inject_setup_function_fixture(self) -> None: + """Inject a hidden autouse, function scoped fixture into the collected module object + that invokes setup_function/teardown_function if either or both are available. + + Using a fixture to invoke this methods ensures we play nicely and unsurprisingly with + other fixtures (#517). + """ + setup_function = _get_first_non_fixture_func(self.obj, ("setup_function",)) + teardown_function = _get_first_non_fixture_func( + self.obj, ("teardown_function",) + ) + if setup_function is None and teardown_function is None: + return + + @fixtures.fixture( + autouse=True, + scope="function", + # Use a unique name to speed up lookup. + name=f"_xunit_setup_function_fixture_{self.obj.__name__}", + ) + def xunit_setup_function_fixture(request) -> Generator[None, None, None]: + if request.instance is not None: + # in this case we are bound to an instance, so we need to let + # setup_method handle this + yield + return + if setup_function is not None: + _call_with_optional_argument(setup_function, request.function) + yield + if teardown_function is not None: + _call_with_optional_argument(teardown_function, request.function) + + self.obj.__pytest_setup_function = xunit_setup_function_fixture + + def _importtestmodule(self): + # We assume we are only called once per module. 
+ importmode = self.config.getoption("--import-mode") + try: + mod = import_path(self.path, mode=importmode, root=self.config.rootpath) + except SyntaxError as e: + raise self.CollectError( + ExceptionInfo.from_current().getrepr(style="short") + ) from e + except ImportPathMismatchError as e: + raise self.CollectError( + "import file mismatch:\n" + "imported module %r has this __file__ attribute:\n" + " %s\n" + "which is not the same as the test file we want to collect:\n" + " %s\n" + "HINT: remove __pycache__ / .pyc files and/or use a " + "unique basename for your test file modules" % e.args + ) from e + except ImportError as e: + exc_info = ExceptionInfo.from_current() + if self.config.getoption("verbose") < 2: + exc_info.traceback = exc_info.traceback.filter(filter_traceback) + exc_repr = ( + exc_info.getrepr(style="short") + if exc_info.traceback + else exc_info.exconly() + ) + formatted_tb = str(exc_repr) + raise self.CollectError( + "ImportError while importing test module '{path}'.\n" + "Hint: make sure your test modules/packages have valid Python names.\n" + "Traceback:\n" + "{traceback}".format(path=self.path, traceback=formatted_tb) + ) from e + except skip.Exception as e: + if e.allow_module_level: + raise + raise self.CollectError( + "Using pytest.skip outside of a test will skip the entire module. " + "If that's your intention, pass `allow_module_level=True`. " + "If you want to skip a specific test or an entire class, " + "use the @pytest.mark.skip or @pytest.mark.skipif decorators." + ) from e + self.config.pluginmanager.consider_module(mod) + return mod + + +class Package(Module): + def __init__( + self, + fspath: Optional[LEGACY_PATH], + parent: nodes.Collector, + # NOTE: following args are unused: + config=None, + session=None, + nodeid=None, + path=Optional[Path], + ) -> None: + # NOTE: Could be just the following, but kept as-is for compat. + # nodes.FSCollector.__init__(self, fspath, parent=parent) + session = parent.session + nodes.FSCollector.__init__( + self, + fspath=fspath, + path=path, + parent=parent, + config=config, + session=session, + nodeid=nodeid, + ) + self.name = self.path.parent.name + + def setup(self) -> None: + # Not using fixtures to call setup_module here because autouse fixtures + # from packages are not called automatically (#4085). 
+ setup_module = _get_first_non_fixture_func( + self.obj, ("setUpModule", "setup_module") + ) + if setup_module is not None: + _call_with_optional_argument(setup_module, self.obj) + + teardown_module = _get_first_non_fixture_func( + self.obj, ("tearDownModule", "teardown_module") + ) + if teardown_module is not None: + func = partial(_call_with_optional_argument, teardown_module, self.obj) + self.addfinalizer(func) + + def gethookproxy(self, fspath: "os.PathLike[str]"): + warnings.warn(FSCOLLECTOR_GETHOOKPROXY_ISINITPATH, stacklevel=2) + return self.session.gethookproxy(fspath) + + def isinitpath(self, path: Union[str, "os.PathLike[str]"]) -> bool: + warnings.warn(FSCOLLECTOR_GETHOOKPROXY_ISINITPATH, stacklevel=2) + return self.session.isinitpath(path) + + def _recurse(self, direntry: "os.DirEntry[str]") -> bool: + if direntry.name == "__pycache__": + return False + fspath = Path(direntry.path) + ihook = self.session.gethookproxy(fspath.parent) + if ihook.pytest_ignore_collect(collection_path=fspath, config=self.config): + return False + norecursepatterns = self.config.getini("norecursedirs") + if any(fnmatch_ex(pat, fspath) for pat in norecursepatterns): + return False + return True + + def _collectfile( + self, fspath: Path, handle_dupes: bool = True + ) -> Sequence[nodes.Collector]: + assert ( + fspath.is_file() + ), "{!r} is not a file (isdir={!r}, exists={!r}, islink={!r})".format( + fspath, fspath.is_dir(), fspath.exists(), fspath.is_symlink() + ) + ihook = self.session.gethookproxy(fspath) + if not self.session.isinitpath(fspath): + if ihook.pytest_ignore_collect(collection_path=fspath, config=self.config): + return () + + if handle_dupes: + keepduplicates = self.config.getoption("keepduplicates") + if not keepduplicates: + duplicate_paths = self.config.pluginmanager._duplicatepaths + if fspath in duplicate_paths: + return () + else: + duplicate_paths.add(fspath) + + return ihook.pytest_collect_file(file_path=fspath, parent=self) # type: ignore[no-any-return] + + def collect(self) -> Iterable[Union[nodes.Item, nodes.Collector]]: + this_path = self.path.parent + init_module = this_path / "__init__.py" + if init_module.is_file() and path_matches_patterns( + init_module, self.config.getini("python_files") + ): + yield Module.from_parent(self, path=init_module) + pkg_prefixes: Set[Path] = set() + for direntry in visit(str(this_path), recurse=self._recurse): + path = Path(direntry.path) + + # We will visit our own __init__.py file, in which case we skip it. + if direntry.is_file(): + if direntry.name == "__init__.py" and path.parent == this_path: + continue + + parts_ = parts(direntry.path) + if any( + str(pkg_prefix) in parts_ and pkg_prefix / "__init__.py" != path + for pkg_prefix in pkg_prefixes + ): + continue + + if direntry.is_file(): + yield from self._collectfile(path) + elif not direntry.is_dir(): + # Broken symlink or invalid/missing file. 
+                continue
+            elif path.joinpath("__init__.py").is_file():
+                pkg_prefixes.add(path)
+
+
+def _call_with_optional_argument(func, arg) -> None:
+    """Call the given function with the given argument if func accepts one argument,
+    otherwise call func without arguments."""
+    arg_count = func.__code__.co_argcount
+    if inspect.ismethod(func):
+        arg_count -= 1
+    if arg_count:
+        func(arg)
+    else:
+        func()
+
+
+def _get_first_non_fixture_func(obj: object, names: Iterable[str]) -> Optional[object]:
+    """Return the attribute from the given object to be used as a setup/teardown
+    xunit-style function, but only if not marked as a fixture to avoid calling it twice."""
+    for name in names:
+        meth: Optional[object] = getattr(obj, name, None)
+        if meth is not None and fixtures.getfixturemarker(meth) is None:
+            return meth
+    return None
+
+
+class Class(PyCollector):
+    """Collector for test methods."""
+
+    @classmethod
+    def from_parent(cls, parent, *, name, obj=None, **kw):
+        """The public constructor."""
+        return super().from_parent(name=name, parent=parent, **kw)
+
+    def newinstance(self):
+        return self.obj()
+
+    def collect(self) -> Iterable[Union[nodes.Item, nodes.Collector]]:
+        if not safe_getattr(self.obj, "__test__", True):
+            return []
+        if hasinit(self.obj):
+            assert self.parent is not None
+            self.warn(
+                PytestCollectionWarning(
+                    "cannot collect test class %r because it has a "
+                    "__init__ constructor (from: %s)"
+                    % (self.obj.__name__, self.parent.nodeid)
+                )
+            )
+            return []
+        elif hasnew(self.obj):
+            assert self.parent is not None
+            self.warn(
+                PytestCollectionWarning(
+                    "cannot collect test class %r because it has a "
+                    "__new__ constructor (from: %s)"
+                    % (self.obj.__name__, self.parent.nodeid)
+                )
+            )
+            return []
+
+        self._inject_setup_class_fixture()
+        self._inject_setup_method_fixture()
+
+        self.session._fixturemanager.parsefactories(self.newinstance(), self.nodeid)
+
+        return super().collect()
+
+    def _inject_setup_class_fixture(self) -> None:
+        """Inject a hidden autouse, class scoped fixture into the collected class object
+        that invokes setup_class/teardown_class if either or both are available.
+
+        Using a fixture to invoke these methods ensures we play nicely and unsurprisingly with
+        other fixtures (#517).
+        """
+        setup_class = _get_first_non_fixture_func(self.obj, ("setup_class",))
+        teardown_class = getattr(self.obj, "teardown_class", None)
+        if setup_class is None and teardown_class is None:
+            return
+
+        @fixtures.fixture(
+            autouse=True,
+            scope="class",
+            # Use a unique name to speed up lookup.
+            name=f"_xunit_setup_class_fixture_{self.obj.__qualname__}",
+        )
+        def xunit_setup_class_fixture(cls) -> Generator[None, None, None]:
+            if setup_class is not None:
+                func = getimfunc(setup_class)
+                _call_with_optional_argument(func, self.obj)
+            yield
+            if teardown_class is not None:
+                func = getimfunc(teardown_class)
+                _call_with_optional_argument(func, self.obj)
+
+        self.obj.__pytest_setup_class = xunit_setup_class_fixture
+
+    def _inject_setup_method_fixture(self) -> None:
+        """Inject a hidden autouse, function scoped fixture into the collected class object
+        that invokes setup_method/teardown_method if either or both are available.
+
+        Using a fixture to invoke these methods ensures we play nicely and unsurprisingly with
+        other fixtures (#517).
+ """ + has_nose = self.config.pluginmanager.has_plugin("nose") + setup_name = "setup_method" + setup_method = _get_first_non_fixture_func(self.obj, (setup_name,)) + emit_nose_setup_warning = False + if setup_method is None and has_nose: + setup_name = "setup" + emit_nose_setup_warning = True + setup_method = _get_first_non_fixture_func(self.obj, (setup_name,)) + teardown_name = "teardown_method" + teardown_method = getattr(self.obj, teardown_name, None) + emit_nose_teardown_warning = False + if teardown_method is None and has_nose: + teardown_name = "teardown" + emit_nose_teardown_warning = True + teardown_method = getattr(self.obj, teardown_name, None) + if setup_method is None and teardown_method is None: + return + + @fixtures.fixture( + autouse=True, + scope="function", + # Use a unique name to speed up lookup. + name=f"_xunit_setup_method_fixture_{self.obj.__qualname__}", + ) + def xunit_setup_method_fixture(self, request) -> Generator[None, None, None]: + method = request.function + if setup_method is not None: + func = getattr(self, setup_name) + _call_with_optional_argument(func, method) + if emit_nose_setup_warning: + warnings.warn( + NOSE_SUPPORT_METHOD.format( + nodeid=request.node.nodeid, method="setup" + ), + stacklevel=2, + ) + yield + if teardown_method is not None: + func = getattr(self, teardown_name) + _call_with_optional_argument(func, method) + if emit_nose_teardown_warning: + warnings.warn( + NOSE_SUPPORT_METHOD.format( + nodeid=request.node.nodeid, method="teardown" + ), + stacklevel=2, + ) + + self.obj.__pytest_setup_method = xunit_setup_method_fixture + + +class InstanceDummy: + """Instance used to be a node type between Class and Function. It has been + removed in pytest 7.0. Some plugins exist which reference `pytest.Instance` + only to ignore it; this dummy class keeps them working. This will be removed + in pytest 8.""" + + +def __getattr__(name: str) -> object: + if name == "Instance": + warnings.warn(INSTANCE_COLLECTOR, 2) + return InstanceDummy + raise AttributeError(f"module {__name__} has no attribute {name}") + + +def hasinit(obj: object) -> bool: + init: object = getattr(obj, "__init__", None) + if init: + return init != object.__init__ + return False + + +def hasnew(obj: object) -> bool: + new: object = getattr(obj, "__new__", None) + if new: + return new != object.__new__ + return False + + +@final +@attr.s(frozen=True, auto_attribs=True, slots=True) +class IdMaker: + """Make IDs for a parametrization.""" + + # The argnames of the parametrization. + argnames: Sequence[str] + # The ParameterSets of the parametrization. + parametersets: Sequence[ParameterSet] + # Optionally, a user-provided callable to make IDs for parameters in a + # ParameterSet. + idfn: Optional[Callable[[Any], Optional[object]]] + # Optionally, explicit IDs for ParameterSets by index. + ids: Optional[Sequence[Optional[object]]] + # Optionally, the pytest config. + # Used for controlling ASCII escaping, and for calling the + # :hook:`pytest_make_parametrize_id` hook. + config: Optional[Config] + # Optionally, the ID of the node being parametrized. + # Used only for clearer error messages. + nodeid: Optional[str] + # Optionally, the ID of the function being parametrized. + # Used only for clearer error messages. + func_name: Optional[str] + + def make_unique_parameterset_ids(self) -> List[str]: + """Make a unique identifier for each ParameterSet, that may be used to + identify the parametrization in a node ID. 
+
+        Format is <prm_1_token>-...-<prm_n_token>[counter], where prm_x_token is
+        - user-provided id, if given
+        - else an id derived from the value, applicable for certain types
+        - else <argname><parameterset index>
+        The counter suffix is appended only in case a string wouldn't be unique
+        otherwise.
+        """
+        resolved_ids = list(self._resolve_ids())
+        # All IDs must be unique!
+        if len(resolved_ids) != len(set(resolved_ids)):
+            # Record the number of occurrences of each ID.
+            id_counts = Counter(resolved_ids)
+            # Map the ID to its next suffix.
+            id_suffixes: Dict[str, int] = defaultdict(int)
+            # Suffix non-unique IDs to make them unique.
+            for index, id in enumerate(resolved_ids):
+                if id_counts[id] > 1:
+                    resolved_ids[index] = f"{id}{id_suffixes[id]}"
+                    id_suffixes[id] += 1
+        return resolved_ids
+
+    def _resolve_ids(self) -> Iterable[str]:
+        """Resolve IDs for all ParameterSets (may contain duplicates)."""
+        for idx, parameterset in enumerate(self.parametersets):
+            if parameterset.id is not None:
+                # ID provided directly - pytest.param(..., id="...")
+                yield parameterset.id
+            elif self.ids and idx < len(self.ids) and self.ids[idx] is not None:
+                # ID provided in the IDs list - parametrize(..., ids=[...]).
+                yield self._idval_from_value_required(self.ids[idx], idx)
+            else:
+                # ID not provided - generate it.
+                yield "-".join(
+                    self._idval(val, argname, idx)
+                    for val, argname in zip(parameterset.values, self.argnames)
+                )
+
+    def _idval(self, val: object, argname: str, idx: int) -> str:
+        """Make an ID for a parameter in a ParameterSet."""
+        idval = self._idval_from_function(val, argname, idx)
+        if idval is not None:
+            return idval
+        idval = self._idval_from_hook(val, argname)
+        if idval is not None:
+            return idval
+        idval = self._idval_from_value(val)
+        if idval is not None:
+            return idval
+        return self._idval_from_argname(argname, idx)
+
+    def _idval_from_function(
+        self, val: object, argname: str, idx: int
+    ) -> Optional[str]:
+        """Try to make an ID for a parameter in a ParameterSet using the
+        user-provided id callable, if given."""
+        if self.idfn is None:
+            return None
+        try:
+            id = self.idfn(val)
+        except Exception as e:
+            prefix = f"{self.nodeid}: " if self.nodeid is not None else ""
+            msg = "error raised while trying to determine id of parameter '{}' at position {}"
+            msg = prefix + msg.format(argname, idx)
+            raise ValueError(msg) from e
+        if id is None:
+            return None
+        return self._idval_from_value(id)
+
+    def _idval_from_hook(self, val: object, argname: str) -> Optional[str]:
+        """Try to make an ID for a parameter in a ParameterSet by calling the
+        :hook:`pytest_make_parametrize_id` hook."""
+        if self.config:
+            id: Optional[str] = self.config.hook.pytest_make_parametrize_id(
+                config=self.config, val=val, argname=argname
+            )
+            return id
+        return None
+
+    def _idval_from_value(self, val: object) -> Optional[str]:
+        """Try to make an ID for a parameter in a ParameterSet from its value,
+        if the value type is supported."""
+        if isinstance(val, STRING_TYPES):
+            return _ascii_escaped_by_config(val, self.config)
+        elif val is None or isinstance(val, (float, int, bool, complex)):
+            return str(val)
+        elif isinstance(val, Pattern):
+            return ascii_escaped(val.pattern)
+        elif val is NOTSET:
+            # Fallback to default. Note that NOTSET is an enum.Enum.
+            pass
+        elif isinstance(val, enum.Enum):
+            return str(val)
+        elif isinstance(getattr(val, "__name__", None), str):
+            # Name of a class, function, module, etc.
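+            # Illustrative: a parametrize value of `len` yields the ID "len",
+            # and a class `MyType` yields "MyType".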
+ name: str = getattr(val, "__name__") + return name + return None + + def _idval_from_value_required(self, val: object, idx: int) -> str: + """Like _idval_from_value(), but fails if the type is not supported.""" + id = self._idval_from_value(val) + if id is not None: + return id + + # Fail. + if self.func_name is not None: + prefix = f"In {self.func_name}: " + elif self.nodeid is not None: + prefix = f"In {self.nodeid}: " + else: + prefix = "" + msg = ( + f"{prefix}ids contains unsupported value {saferepr(val)} (type: {type(val)!r}) at index {idx}. " + "Supported types are: str, bytes, int, float, complex, bool, enum, regex or anything with a __name__." + ) + fail(msg, pytrace=False) + + @staticmethod + def _idval_from_argname(argname: str, idx: int) -> str: + """Make an ID for a parameter in a ParameterSet from the argument name + and the index of the ParameterSet.""" + return str(argname) + str(idx) + + +@final +@attr.s(frozen=True, slots=True, auto_attribs=True) +class CallSpec2: + """A planned parameterized invocation of a test function. + + Calculated during collection for a given test function's Metafunc. + Once collection is over, each callspec is turned into a single Item + and stored in item.callspec. + """ + + # arg name -> arg value which will be passed to the parametrized test + # function (direct parameterization). + funcargs: Dict[str, object] = attr.Factory(dict) + # arg name -> arg value which will be passed to a fixture of the same name + # (indirect parametrization). + params: Dict[str, object] = attr.Factory(dict) + # arg name -> arg index. + indices: Dict[str, int] = attr.Factory(dict) + # Used for sorting parametrized resources. + _arg2scope: Dict[str, Scope] = attr.Factory(dict) + # Parts which will be added to the item's name in `[..]` separated by "-". + _idlist: List[str] = attr.Factory(list) + # Marks which will be applied to the item. + marks: List[Mark] = attr.Factory(list) + + def setmulti( + self, + *, + valtypes: Mapping[str, "Literal['params', 'funcargs']"], + argnames: Iterable[str], + valset: Iterable[object], + id: str, + marks: Iterable[Union[Mark, MarkDecorator]], + scope: Scope, + param_index: int, + ) -> "CallSpec2": + funcargs = self.funcargs.copy() + params = self.params.copy() + indices = self.indices.copy() + arg2scope = self._arg2scope.copy() + for arg, val in zip(argnames, valset): + if arg in params or arg in funcargs: + raise ValueError(f"duplicate {arg!r}") + valtype_for_arg = valtypes[arg] + if valtype_for_arg == "params": + params[arg] = val + elif valtype_for_arg == "funcargs": + funcargs[arg] = val + else: + assert_never(valtype_for_arg) + indices[arg] = param_index + arg2scope[arg] = scope + return CallSpec2( + funcargs=funcargs, + params=params, + arg2scope=arg2scope, + indices=indices, + idlist=[*self._idlist, id], + marks=[*self.marks, *normalize_mark_list(marks)], + ) + + def getparam(self, name: str) -> object: + try: + return self.params[name] + except KeyError as e: + raise ValueError(name) from e + + @property + def id(self) -> str: + return "-".join(self._idlist) + + +@final +class Metafunc: + """Objects passed to the :hook:`pytest_generate_tests` hook. + + They help to inspect a test function and to generate tests according to + test configuration or values specified in the class or module where a + test function is defined. 
+ """ + + def __init__( + self, + definition: "FunctionDefinition", + fixtureinfo: fixtures.FuncFixtureInfo, + config: Config, + cls=None, + module=None, + *, + _ispytest: bool = False, + ) -> None: + check_ispytest(_ispytest) + + #: Access to the underlying :class:`_pytest.python.FunctionDefinition`. + self.definition = definition + + #: Access to the :class:`pytest.Config` object for the test session. + self.config = config + + #: The module object where the test function is defined in. + self.module = module + + #: Underlying Python test function. + self.function = definition.obj + + #: Set of fixture names required by the test function. + self.fixturenames = fixtureinfo.names_closure + + #: Class object where the test function is defined in or ``None``. + self.cls = cls + + self._arg2fixturedefs = fixtureinfo.name2fixturedefs + + # Result of parametrize(). + self._calls: List[CallSpec2] = [] + + def parametrize( + self, + argnames: Union[str, Sequence[str]], + argvalues: Iterable[Union[ParameterSet, Sequence[object], object]], + indirect: Union[bool, Sequence[str]] = False, + ids: Optional[ + Union[Iterable[Optional[object]], Callable[[Any], Optional[object]]] + ] = None, + scope: "Optional[_ScopeName]" = None, + *, + _param_mark: Optional[Mark] = None, + ) -> None: + """Add new invocations to the underlying test function using the list + of argvalues for the given argnames. Parametrization is performed + during the collection phase. If you need to setup expensive resources + see about setting indirect to do it rather than at test setup time. + + Can be called multiple times, in which case each call parametrizes all + previous parametrizations, e.g. + + :: + + unparametrized: t + parametrize ["x", "y"]: t[x], t[y] + parametrize [1, 2]: t[x-1], t[x-2], t[y-1], t[y-2] + + :param argnames: + A comma-separated string denoting one or more argument names, or + a list/tuple of argument strings. + + :param argvalues: + The list of argvalues determines how often a test is invoked with + different argument values. + + If only one argname was specified argvalues is a list of values. + If N argnames were specified, argvalues must be a list of + N-tuples, where each tuple-element specifies a value for its + respective argname. + + :param indirect: + A list of arguments' names (subset of argnames) or a boolean. + If True the list contains all names from the argnames. Each + argvalue corresponding to an argname in this list will + be passed as request.param to its respective argname fixture + function so that it can perform more expensive setups during the + setup phase of a test rather than at collection time. + + :param ids: + Sequence of (or generator for) ids for ``argvalues``, + or a callable to return part of the id for each argvalue. + + With sequences (and generators like ``itertools.count()``) the + returned ids should be of type ``string``, ``int``, ``float``, + ``bool``, or ``None``. + They are mapped to the corresponding index in ``argvalues``. + ``None`` means to use the auto-generated id. + + If it is a callable it will be called for each entry in + ``argvalues``, and the return value is used as part of the + auto-generated id for the whole set (where parts are joined with + dashes ("-")). + This is useful to provide more specific ids for certain items, e.g. + dates. Returning ``None`` will use an auto-generated id. + + If no ids are provided they will be generated automatically from + the argvalues. + + :param scope: + If specified it denotes the scope of the parameters. 
+            The scope is used for grouping tests by parameter instances.
+            It will also override any fixture-function defined scope, allowing
+            to set a dynamic scope using test context or configuration.
+        """
+        argnames, parametersets = ParameterSet._for_parametrize(
+            argnames,
+            argvalues,
+            self.function,
+            self.config,
+            nodeid=self.definition.nodeid,
+        )
+        del argvalues
+
+        if "request" in argnames:
+            fail(
+                "'request' is a reserved name and cannot be used in @pytest.mark.parametrize",
+                pytrace=False,
+            )
+
+        if scope is not None:
+            scope_ = Scope.from_user(
+                scope, descr=f"parametrize() call in {self.function.__name__}"
+            )
+        else:
+            scope_ = _find_parametrized_scope(argnames, self._arg2fixturedefs, indirect)
+
+        self._validate_if_using_arg_names(argnames, indirect)
+
+        arg_values_types = self._resolve_arg_value_types(argnames, indirect)
+
+        # Use any already (possibly) generated ids with parametrize Marks.
+        if _param_mark and _param_mark._param_ids_from:
+            generated_ids = _param_mark._param_ids_from._param_ids_generated
+            if generated_ids is not None:
+                ids = generated_ids
+
+        ids = self._resolve_parameter_set_ids(
+            argnames, ids, parametersets, nodeid=self.definition.nodeid
+        )
+
+        # Store used (possibly generated) ids with parametrize Marks.
+        if _param_mark and _param_mark._param_ids_from and generated_ids is None:
+            object.__setattr__(_param_mark._param_ids_from, "_param_ids_generated", ids)
+
+        # Create the new calls: if parametrize() is applied multiple times (by
+        # stacking the decorator) we accumulate those calls, generating the
+        # cartesian product of all parameter sets.
+        newcalls = []
+        for callspec in self._calls or [CallSpec2()]:
+            for param_index, (param_id, param_set) in enumerate(
+                zip(ids, parametersets)
+            ):
+                newcallspec = callspec.setmulti(
+                    valtypes=arg_values_types,
+                    argnames=argnames,
+                    valset=param_set.values,
+                    id=param_id,
+                    marks=param_set.marks,
+                    scope=scope_,
+                    param_index=param_index,
+                )
+                newcalls.append(newcallspec)
+        self._calls = newcalls
+
+    def _resolve_parameter_set_ids(
+        self,
+        argnames: Sequence[str],
+        ids: Optional[
+            Union[Iterable[Optional[object]], Callable[[Any], Optional[object]]]
+        ],
+        parametersets: Sequence[ParameterSet],
+        nodeid: str,
+    ) -> List[str]:
+        """Resolve the actual ids for the given parameter sets.
+
+        :param argnames:
+            Argument names passed to ``parametrize()``.
+        :param ids:
+            The `ids` parameter of the ``parametrize()`` call (see docs).
+        :param parametersets:
+            The parameter sets, each containing a set of values corresponding
+            to ``argnames``.
+        :param str nodeid:
+            The nodeid of the definition item that generated this
+            parametrization.
+        :returns:
+            List with ids for each parameter set given.
+ """ + if ids is None: + idfn = None + ids_ = None + elif callable(ids): + idfn = ids + ids_ = None + else: + idfn = None + ids_ = self._validate_ids(ids, parametersets, self.function.__name__) + id_maker = IdMaker( + argnames, + parametersets, + idfn, + ids_, + self.config, + nodeid=nodeid, + func_name=self.function.__name__, + ) + return id_maker.make_unique_parameterset_ids() + + def _validate_ids( + self, + ids: Iterable[Optional[object]], + parametersets: Sequence[ParameterSet], + func_name: str, + ) -> List[Optional[object]]: + try: + num_ids = len(ids) # type: ignore[arg-type] + except TypeError: + try: + iter(ids) + except TypeError as e: + raise TypeError("ids must be a callable or an iterable") from e + num_ids = len(parametersets) + + # num_ids == 0 is a special case: https://github.com/pytest-dev/pytest/issues/1849 + if num_ids != len(parametersets) and num_ids != 0: + msg = "In {}: {} parameter sets specified, with different number of ids: {}" + fail(msg.format(func_name, len(parametersets), num_ids), pytrace=False) + + return list(itertools.islice(ids, num_ids)) + + def _resolve_arg_value_types( + self, + argnames: Sequence[str], + indirect: Union[bool, Sequence[str]], + ) -> Dict[str, "Literal['params', 'funcargs']"]: + """Resolve if each parametrized argument must be considered a + parameter to a fixture or a "funcarg" to the function, based on the + ``indirect`` parameter of the parametrized() call. + + :param List[str] argnames: List of argument names passed to ``parametrize()``. + :param indirect: Same as the ``indirect`` parameter of ``parametrize()``. + :rtype: Dict[str, str] + A dict mapping each arg name to either: + * "params" if the argname should be the parameter of a fixture of the same name. + * "funcargs" if the argname should be a parameter to the parametrized test function. + """ + if isinstance(indirect, bool): + valtypes: Dict[str, Literal["params", "funcargs"]] = dict.fromkeys( + argnames, "params" if indirect else "funcargs" + ) + elif isinstance(indirect, Sequence): + valtypes = dict.fromkeys(argnames, "funcargs") + for arg in indirect: + if arg not in argnames: + fail( + "In {}: indirect fixture '{}' doesn't exist".format( + self.function.__name__, arg + ), + pytrace=False, + ) + valtypes[arg] = "params" + else: + fail( + "In {func}: expected Sequence or boolean for indirect, got {type}".format( + type=type(indirect).__name__, func=self.function.__name__ + ), + pytrace=False, + ) + return valtypes + + def _validate_if_using_arg_names( + self, + argnames: Sequence[str], + indirect: Union[bool, Sequence[str]], + ) -> None: + """Check if all argnames are being used, by default values, or directly/indirectly. + + :param List[str] argnames: List of argument names passed to ``parametrize()``. + :param indirect: Same as the ``indirect`` parameter of ``parametrize()``. + :raises ValueError: If validation fails. 
+ """ + default_arg_names = set(get_default_arg_names(self.function)) + func_name = self.function.__name__ + for arg in argnames: + if arg not in self.fixturenames: + if arg in default_arg_names: + fail( + "In {}: function already takes an argument '{}' with a default value".format( + func_name, arg + ), + pytrace=False, + ) + else: + if isinstance(indirect, Sequence): + name = "fixture" if arg in indirect else "argument" + else: + name = "fixture" if indirect else "argument" + fail( + f"In {func_name}: function uses no {name} '{arg}'", + pytrace=False, + ) + + +def _find_parametrized_scope( + argnames: Sequence[str], + arg2fixturedefs: Mapping[str, Sequence[fixtures.FixtureDef[object]]], + indirect: Union[bool, Sequence[str]], +) -> Scope: + """Find the most appropriate scope for a parametrized call based on its arguments. + + When there's at least one direct argument, always use "function" scope. + + When a test function is parametrized and all its arguments are indirect + (e.g. fixtures), return the most narrow scope based on the fixtures used. + + Related to issue #1832, based on code posted by @Kingdread. + """ + if isinstance(indirect, Sequence): + all_arguments_are_fixtures = len(indirect) == len(argnames) + else: + all_arguments_are_fixtures = bool(indirect) + + if all_arguments_are_fixtures: + fixturedefs = arg2fixturedefs or {} + used_scopes = [ + fixturedef[0]._scope + for name, fixturedef in fixturedefs.items() + if name in argnames + ] + # Takes the most narrow scope from used fixtures. + return min(used_scopes, default=Scope.Function) + + return Scope.Function + + +def _ascii_escaped_by_config(val: Union[str, bytes], config: Optional[Config]) -> str: + if config is None: + escape_option = False + else: + escape_option = config.getini( + "disable_test_id_escaping_and_forfeit_all_rights_to_community_support" + ) + # TODO: If escaping is turned off and the user passes bytes, + # will return a bytes. For now we ignore this but the + # code *probably* doesn't handle this case. + return val if escape_option else ascii_escaped(val) # type: ignore + + +def _pretty_fixture_path(func) -> str: + cwd = Path.cwd() + loc = Path(getlocation(func, str(cwd))) + prefix = Path("...", "_pytest") + try: + return str(prefix / loc.relative_to(_PYTEST_DIR)) + except ValueError: + return bestrelpath(cwd, loc) + + +def show_fixtures_per_test(config): + from _pytest.main import wrap_session + + return wrap_session(config, _show_fixtures_per_test) + + +def _show_fixtures_per_test(config: Config, session: Session) -> None: + import _pytest.config + + session.perform_collect() + curdir = Path.cwd() + tw = _pytest.config.create_terminal_writer(config) + verbose = config.getvalue("verbose") + + def get_best_relpath(func) -> str: + loc = getlocation(func, str(curdir)) + return bestrelpath(curdir, Path(loc)) + + def write_fixture(fixture_def: fixtures.FixtureDef[object]) -> None: + argname = fixture_def.argname + if verbose <= 0 and argname.startswith("_"): + return + prettypath = _pretty_fixture_path(fixture_def.func) + tw.write(f"{argname}", green=True) + tw.write(f" -- {prettypath}", yellow=True) + tw.write("\n") + fixture_doc = inspect.getdoc(fixture_def.func) + if fixture_doc: + write_docstring( + tw, fixture_doc.split("\n\n")[0] if verbose <= 0 else fixture_doc + ) + else: + tw.line(" no docstring available", red=True) + + def write_item(item: nodes.Item) -> None: + # Not all items have _fixtureinfo attribute. 
+        info: Optional[FuncFixtureInfo] = getattr(item, "_fixtureinfo", None)
+        if info is None or not info.name2fixturedefs:
+            # This test item does not use any fixtures.
+            return
+        tw.line()
+        tw.sep("-", f"fixtures used by {item.name}")
+        # TODO: Fix this type ignore.
+        tw.sep("-", f"({get_best_relpath(item.function)})")  # type: ignore[attr-defined]
+        # dict key not used in loop but needed for sorting.
+        for _, fixturedefs in sorted(info.name2fixturedefs.items()):
+            assert fixturedefs is not None
+            if not fixturedefs:
+                continue
+            # Last item is expected to be the one used by the test item.
+            write_fixture(fixturedefs[-1])
+
+    for session_item in session.items:
+        write_item(session_item)
+
+
+def showfixtures(config: Config) -> Union[int, ExitCode]:
+    from _pytest.main import wrap_session
+
+    return wrap_session(config, _showfixtures_main)
+
+
+def _showfixtures_main(config: Config, session: Session) -> None:
+    import _pytest.config
+
+    session.perform_collect()
+    curdir = Path.cwd()
+    tw = _pytest.config.create_terminal_writer(config)
+    verbose = config.getvalue("verbose")
+
+    fm = session._fixturemanager
+
+    available = []
+    seen: Set[Tuple[str, str]] = set()
+
+    for argname, fixturedefs in fm._arg2fixturedefs.items():
+        assert fixturedefs is not None
+        if not fixturedefs:
+            continue
+        for fixturedef in fixturedefs:
+            loc = getlocation(fixturedef.func, str(curdir))
+            if (fixturedef.argname, loc) in seen:
+                continue
+            seen.add((fixturedef.argname, loc))
+            available.append(
+                (
+                    len(fixturedef.baseid),
+                    fixturedef.func.__module__,
+                    _pretty_fixture_path(fixturedef.func),
+                    fixturedef.argname,
+                    fixturedef,
+                )
+            )
+
+    available.sort()
+    currentmodule = None
+    for baseid, module, prettypath, argname, fixturedef in available:
+        if currentmodule != module:
+            if not module.startswith("_pytest."):
+                tw.line()
+                tw.sep("-", f"fixtures defined from {module}")
+                currentmodule = module
+        if verbose <= 0 and argname.startswith("_"):
+            continue
+        tw.write(f"{argname}", green=True)
+        if fixturedef.scope != "function":
+            tw.write(" [%s scope]" % fixturedef.scope, cyan=True)
+        tw.write(f" -- {prettypath}", yellow=True)
+        tw.write("\n")
+        doc = inspect.getdoc(fixturedef.func)
+        if doc:
+            write_docstring(tw, doc.split("\n\n")[0] if verbose <= 0 else doc)
+        else:
+            tw.line("    no docstring available", red=True)
+    tw.line()
+
+
+def write_docstring(tw: TerminalWriter, doc: str, indent: str = "    ") -> None:
+    for line in doc.split("\n"):
+        tw.line(indent + line)
+
+
+class Function(PyobjMixin, nodes.Item):
+    """An Item responsible for setting up and executing a Python test function.
+
+    :param name:
+        The full function name, including any decorations like those
+        added by parametrization (``my_func[my_param]``).
+    :param parent:
+        The parent Node.
+    :param config:
+        The pytest Config object.
+    :param callspec:
+        If given, this function has been parametrized and the callspec contains
+        meta information about the parametrization.
+    :param callobj:
+        If given, the object which will be called when the Function is invoked,
+        otherwise the callobj will be obtained from ``parent`` using ``originalname``.
+    :param keywords:
+        Keywords bound to the function object for "-k" matching.
+    :param session:
+        The pytest Session object.
+    :param fixtureinfo:
+        Fixture information already resolved at this fixture node.
+    :param originalname:
+        The attribute name to use for accessing the underlying function object.
+        Defaults to ``name``.
Set this if name is different from the original name, + for example when it contains decorations like those added by parametrization + (``my_func[my_param]``). + """ + + # Disable since functions handle it themselves. + _ALLOW_MARKERS = False + + def __init__( + self, + name: str, + parent, + config: Optional[Config] = None, + callspec: Optional[CallSpec2] = None, + callobj=NOTSET, + keywords: Optional[Mapping[str, Any]] = None, + session: Optional[Session] = None, + fixtureinfo: Optional[FuncFixtureInfo] = None, + originalname: Optional[str] = None, + ) -> None: + super().__init__(name, parent, config=config, session=session) + + if callobj is not NOTSET: + self.obj = callobj + + #: Original function name, without any decorations (for example + #: parametrization adds a ``"[...]"`` suffix to function names), used to access + #: the underlying function object from ``parent`` (in case ``callobj`` is not given + #: explicitly). + #: + #: .. versionadded:: 3.0 + self.originalname = originalname or name + + # Note: when FunctionDefinition is introduced, we should change ``originalname`` + # to a readonly property that returns FunctionDefinition.name. + + self.own_markers.extend(get_unpacked_marks(self.obj)) + if callspec: + self.callspec = callspec + self.own_markers.extend(callspec.marks) + + # todo: this is a hell of a hack + # https://github.com/pytest-dev/pytest/issues/4569 + # Note: the order of the updates is important here; indicates what + # takes priority (ctor argument over function attributes over markers). + # Take own_markers only; NodeKeywords handles parent traversal on its own. + self.keywords.update((mark.name, mark) for mark in self.own_markers) + self.keywords.update(self.obj.__dict__) + if keywords: + self.keywords.update(keywords) + + if fixtureinfo is None: + fixtureinfo = self.session._fixturemanager.getfixtureinfo( + self, self.obj, self.cls, funcargs=True + ) + self._fixtureinfo: FuncFixtureInfo = fixtureinfo + self.fixturenames = fixtureinfo.names_closure + self._initrequest() + + @classmethod + def from_parent(cls, parent, **kw): # todo: determine sound type limitations + """The public constructor.""" + return super().from_parent(parent=parent, **kw) + + def _initrequest(self) -> None: + self.funcargs: Dict[str, object] = {} + self._request = fixtures.FixtureRequest(self, _ispytest=True) + + @property + def function(self): + """Underlying python 'function' object.""" + return getimfunc(self.obj) + + def _getobj(self): + assert self.parent is not None + if isinstance(self.parent, Class): + # Each Function gets a fresh class instance. 
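+            # Illustrative: for `TestFoo::test_bar` this amounts to
+            #     getattr(TestFoo(), "test_bar")
+            # so state stored on `self` in one test never leaks into another.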
+            parent_obj = self.parent.newinstance()
+        else:
+            parent_obj = self.parent.obj  # type: ignore[attr-defined]
+        return getattr(parent_obj, self.originalname)
+
+    @property
+    def _pyfuncitem(self):
+        """(compatonly) for code expecting pytest-2.2 style request objects."""
+        return self
+
+    def runtest(self) -> None:
+        """Execute the underlying test function."""
+        self.ihook.pytest_pyfunc_call(pyfuncitem=self)
+
+    def setup(self) -> None:
+        self._request._fillfixtures()
+
+    def _prunetraceback(self, excinfo: ExceptionInfo[BaseException]) -> None:
+        if hasattr(self, "_obj") and not self.config.getoption("fulltrace", False):
+            code = _pytest._code.Code.from_function(get_real_func(self.obj))
+            path, firstlineno = code.path, code.firstlineno
+            traceback = excinfo.traceback
+            ntraceback = traceback.cut(path=path, firstlineno=firstlineno)
+            if ntraceback == traceback:
+                ntraceback = ntraceback.cut(path=path)
+                if ntraceback == traceback:
+                    ntraceback = ntraceback.filter(filter_traceback)
+                    if not ntraceback:
+                        ntraceback = traceback
+
+            excinfo.traceback = ntraceback.filter()
+            # issue364: mark all but first and last frames to
+            # only show a single-line message for each frame.
+            if self.config.getoption("tbstyle", "auto") == "auto":
+                if len(excinfo.traceback) > 2:
+                    for entry in excinfo.traceback[1:-1]:
+                        entry.set_repr_style("short")
+
+    # TODO: Type ignored -- breaks Liskov Substitution.
+    def repr_failure(  # type: ignore[override]
+        self,
+        excinfo: ExceptionInfo[BaseException],
+    ) -> Union[str, TerminalRepr]:
+        style = self.config.getoption("tbstyle", "auto")
+        if style == "auto":
+            style = "long"
+        return self._repr_failure_py(excinfo, style=style)
+
+
+class FunctionDefinition(Function):
+    """
+    This class is a stopgap solution until we evolve to have actual function
+    definition nodes and manage to get rid of ``metafunc``.
+ """ + + def runtest(self) -> None: + raise RuntimeError("function definitions are not supposed to be run as tests") + + setup = runtest diff --git a/env/lib/python3.8/site-packages/_pytest/python_api.py b/env/lib/python3.8/site-packages/_pytest/python_api.py new file mode 100644 index 00000000..515d437f --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/python_api.py @@ -0,0 +1,993 @@ +import math +import pprint +from collections.abc import Collection +from collections.abc import Sized +from decimal import Decimal +from numbers import Complex +from types import TracebackType +from typing import Any +from typing import Callable +from typing import cast +from typing import Generic +from typing import List +from typing import Mapping +from typing import Optional +from typing import Pattern +from typing import Sequence +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +if TYPE_CHECKING: + from numpy import ndarray + + +import _pytest._code +from _pytest.compat import final +from _pytest.compat import STRING_TYPES +from _pytest.compat import overload +from _pytest.outcomes import fail + + +def _non_numeric_type_error(value, at: Optional[str]) -> TypeError: + at_str = f" at {at}" if at else "" + return TypeError( + "cannot make approximate comparisons to non-numeric values: {!r} {}".format( + value, at_str + ) + ) + + +def _compare_approx( + full_object: object, + message_data: Sequence[Tuple[str, str, str]], + number_of_elements: int, + different_ids: Sequence[object], + max_abs_diff: float, + max_rel_diff: float, +) -> List[str]: + message_list = list(message_data) + message_list.insert(0, ("Index", "Obtained", "Expected")) + max_sizes = [0, 0, 0] + for index, obtained, expected in message_list: + max_sizes[0] = max(max_sizes[0], len(index)) + max_sizes[1] = max(max_sizes[1], len(obtained)) + max_sizes[2] = max(max_sizes[2], len(expected)) + explanation = [ + f"comparison failed. Mismatched elements: {len(different_ids)} / {number_of_elements}:", + f"Max absolute difference: {max_abs_diff}", + f"Max relative difference: {max_rel_diff}", + ] + [ + f"{indexes:<{max_sizes[0]}} | {obtained:<{max_sizes[1]}} | {expected:<{max_sizes[2]}}" + for indexes, obtained, expected in message_list + ] + return explanation + + +# builtin pytest.approx helper + + +class ApproxBase: + """Provide shared utilities for making approximate comparisons between + numbers or sequences of numbers.""" + + # Tell numpy to use our `__eq__` operator instead of its. + __array_ufunc__ = None + __array_priority__ = 100 + + def __init__(self, expected, rel=None, abs=None, nan_ok: bool = False) -> None: + __tracebackhide__ = True + self.expected = expected + self.abs = abs + self.rel = rel + self.nan_ok = nan_ok + self._check_type() + + def __repr__(self) -> str: + raise NotImplementedError + + def _repr_compare(self, other_side: Any) -> List[str]: + return [ + "comparison failed", + f"Obtained: {other_side}", + f"Expected: {self}", + ] + + def __eq__(self, actual) -> bool: + return all( + a == self._approx_scalar(x) for a, x in self._yield_comparisons(actual) + ) + + def __bool__(self): + __tracebackhide__ = True + raise AssertionError( + "approx() is not supported in a boolean context.\nDid you mean: `assert a == approx(b)`?" + ) + + # Ignore type because of https://github.com/python/mypy/issues/4266. 
+ __hash__ = None # type: ignore + + def __ne__(self, actual) -> bool: + return not (actual == self) + + def _approx_scalar(self, x) -> "ApproxScalar": + if isinstance(x, Decimal): + return ApproxDecimal(x, rel=self.rel, abs=self.abs, nan_ok=self.nan_ok) + return ApproxScalar(x, rel=self.rel, abs=self.abs, nan_ok=self.nan_ok) + + def _yield_comparisons(self, actual): + """Yield all the pairs of numbers to be compared. + + This is used to implement the `__eq__` method. + """ + raise NotImplementedError + + def _check_type(self) -> None: + """Raise a TypeError if the expected value is not a valid type.""" + # This is only a concern if the expected value is a sequence. In every + # other case, the approx() function ensures that the expected value has + # a numeric type. For this reason, the default is to do nothing. The + # classes that deal with sequences should reimplement this method to + # raise if there are any non-numeric elements in the sequence. + + +def _recursive_sequence_map(f, x): + """Recursively map a function over a sequence of arbitrary depth""" + if isinstance(x, (list, tuple)): + seq_type = type(x) + return seq_type(_recursive_sequence_map(f, xi) for xi in x) + else: + return f(x) + + +class ApproxNumpy(ApproxBase): + """Perform approximate comparisons where the expected value is numpy array.""" + + def __repr__(self) -> str: + list_scalars = _recursive_sequence_map( + self._approx_scalar, self.expected.tolist() + ) + return f"approx({list_scalars!r})" + + def _repr_compare(self, other_side: "ndarray") -> List[str]: + import itertools + import math + + def get_value_from_nested_list( + nested_list: List[Any], nd_index: Tuple[Any, ...] + ) -> Any: + """ + Helper function to get the value out of a nested list, given an n-dimensional index. + This mimics numpy's indexing, but for raw nested python lists. + """ + value: Any = nested_list + for i in nd_index: + value = value[i] + return value + + np_array_shape = self.expected.shape + approx_side_as_seq = _recursive_sequence_map( + self._approx_scalar, self.expected.tolist() + ) + + if np_array_shape != other_side.shape: + return [ + "Impossible to compare arrays with different shapes.", + f"Shapes: {np_array_shape} and {other_side.shape}", + ] + + number_of_elements = self.expected.size + max_abs_diff = -math.inf + max_rel_diff = -math.inf + different_ids = [] + for index in itertools.product(*(range(i) for i in np_array_shape)): + approx_value = get_value_from_nested_list(approx_side_as_seq, index) + other_value = get_value_from_nested_list(other_side, index) + if approx_value != other_value: + abs_diff = abs(approx_value.expected - other_value) + max_abs_diff = max(max_abs_diff, abs_diff) + if other_value == 0.0: + max_rel_diff = math.inf + else: + max_rel_diff = max(max_rel_diff, abs_diff / abs(other_value)) + different_ids.append(index) + + message_data = [ + ( + str(index), + str(get_value_from_nested_list(other_side, index)), + str(get_value_from_nested_list(approx_side_as_seq, index)), + ) + for index in different_ids + ] + return _compare_approx( + self.expected, + message_data, + number_of_elements, + different_ids, + max_abs_diff, + max_rel_diff, + ) + + def __eq__(self, actual) -> bool: + import numpy as np + + # self.expected is supposed to always be an array here. 
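+        # Illustrative: `actual` may be a scalar, an ndarray, or any array-like
+        # such as a plain list, e.g.
+        #     [0.1 + 0.2, 0.4] == approx(np.array([0.3, 0.4]))
+        # np.asarray() below converts the list before comparison.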
+ + if not np.isscalar(actual): + try: + actual = np.asarray(actual) + except Exception as e: + raise TypeError(f"cannot compare '{actual}' to numpy.ndarray") from e + + if not np.isscalar(actual) and actual.shape != self.expected.shape: + return False + + return super().__eq__(actual) + + def _yield_comparisons(self, actual): + import numpy as np + + # `actual` can either be a numpy array or a scalar, it is treated in + # `__eq__` before being passed to `ApproxBase.__eq__`, which is the + # only method that calls this one. + + if np.isscalar(actual): + for i in np.ndindex(self.expected.shape): + yield actual, self.expected[i].item() + else: + for i in np.ndindex(self.expected.shape): + yield actual[i].item(), self.expected[i].item() + + +class ApproxMapping(ApproxBase): + """Perform approximate comparisons where the expected value is a mapping + with numeric values (the keys can be anything).""" + + def __repr__(self) -> str: + return "approx({!r})".format( + {k: self._approx_scalar(v) for k, v in self.expected.items()} + ) + + def _repr_compare(self, other_side: Mapping[object, float]) -> List[str]: + import math + + approx_side_as_map = { + k: self._approx_scalar(v) for k, v in self.expected.items() + } + + number_of_elements = len(approx_side_as_map) + max_abs_diff = -math.inf + max_rel_diff = -math.inf + different_ids = [] + for (approx_key, approx_value), other_value in zip( + approx_side_as_map.items(), other_side.values() + ): + if approx_value != other_value: + max_abs_diff = max( + max_abs_diff, abs(approx_value.expected - other_value) + ) + max_rel_diff = max( + max_rel_diff, + abs((approx_value.expected - other_value) / approx_value.expected), + ) + different_ids.append(approx_key) + + message_data = [ + (str(key), str(other_side[key]), str(approx_side_as_map[key])) + for key in different_ids + ] + + return _compare_approx( + self.expected, + message_data, + number_of_elements, + different_ids, + max_abs_diff, + max_rel_diff, + ) + + def __eq__(self, actual) -> bool: + try: + if set(actual.keys()) != set(self.expected.keys()): + return False + except AttributeError: + return False + + return super().__eq__(actual) + + def _yield_comparisons(self, actual): + for k in self.expected.keys(): + yield actual[k], self.expected[k] + + def _check_type(self) -> None: + __tracebackhide__ = True + for key, value in self.expected.items(): + if isinstance(value, type(self.expected)): + msg = "pytest.approx() does not support nested dictionaries: key={!r} value={!r}\n full mapping={}" + raise TypeError(msg.format(key, value, pprint.pformat(self.expected))) + + +class ApproxSequenceLike(ApproxBase): + """Perform approximate comparisons where the expected value is a sequence of numbers.""" + + def __repr__(self) -> str: + seq_type = type(self.expected) + if seq_type not in (tuple, list): + seq_type = list + return "approx({!r})".format( + seq_type(self._approx_scalar(x) for x in self.expected) + ) + + def _repr_compare(self, other_side: Sequence[float]) -> List[str]: + import math + + if len(self.expected) != len(other_side): + return [ + "Impossible to compare lists with different sizes.", + f"Lengths: {len(self.expected)} and {len(other_side)}", + ] + + approx_side_as_map = _recursive_sequence_map(self._approx_scalar, self.expected) + + number_of_elements = len(approx_side_as_map) + max_abs_diff = -math.inf + max_rel_diff = -math.inf + different_ids = [] + for i, (approx_value, other_value) in enumerate( + zip(approx_side_as_map, other_side) + ): + if approx_value != other_value: + abs_diff = 
abs(approx_value.expected - other_value) + max_abs_diff = max(max_abs_diff, abs_diff) + if other_value == 0.0: + max_rel_diff = math.inf + else: + max_rel_diff = max(max_rel_diff, abs_diff / abs(other_value)) + different_ids.append(i) + + message_data = [ + (str(i), str(other_side[i]), str(approx_side_as_map[i])) + for i in different_ids + ] + + return _compare_approx( + self.expected, + message_data, + number_of_elements, + different_ids, + max_abs_diff, + max_rel_diff, + ) + + def __eq__(self, actual) -> bool: + try: + if len(actual) != len(self.expected): + return False + except TypeError: + return False + return super().__eq__(actual) + + def _yield_comparisons(self, actual): + return zip(actual, self.expected) + + def _check_type(self) -> None: + __tracebackhide__ = True + for index, x in enumerate(self.expected): + if isinstance(x, type(self.expected)): + msg = "pytest.approx() does not support nested data structures: {!r} at index {}\n full sequence: {}" + raise TypeError(msg.format(x, index, pprint.pformat(self.expected))) + + +class ApproxScalar(ApproxBase): + """Perform approximate comparisons where the expected value is a single number.""" + + # Using Real should be better than this Union, but not possible yet: + # https://github.com/python/typeshed/pull/3108 + DEFAULT_ABSOLUTE_TOLERANCE: Union[float, Decimal] = 1e-12 + DEFAULT_RELATIVE_TOLERANCE: Union[float, Decimal] = 1e-6 + + def __repr__(self) -> str: + """Return a string communicating both the expected value and the + tolerance for the comparison being made. + + For example, ``1.0 ± 1e-6``, ``(3+4j) ± 5e-6 ∠ ±180°``. + """ + # Don't show a tolerance for values that aren't compared using + # tolerances, i.e. non-numerics and infinities. Need to call abs to + # handle complex numbers, e.g. (inf + 1j). + if (not isinstance(self.expected, (Complex, Decimal))) or math.isinf( + abs(self.expected) # type: ignore[arg-type] + ): + return str(self.expected) + + # If a sensible tolerance can't be calculated, self.tolerance will + # raise a ValueError. In this case, display '???'. + try: + vetted_tolerance = f"{self.tolerance:.1e}" + if ( + isinstance(self.expected, Complex) + and self.expected.imag + and not math.isinf(self.tolerance) + ): + vetted_tolerance += " ∠ ±180°" + except ValueError: + vetted_tolerance = "???" + + return f"{self.expected} ± {vetted_tolerance}" + + def __eq__(self, actual) -> bool: + """Return whether the given value is equal to the expected value + within the pre-specified tolerance.""" + asarray = _as_numpy_array(actual) + if asarray is not None: + # Call ``__eq__()`` manually to prevent infinite-recursion with + # numpy<1.13. See #3748. + return all(self.__eq__(a) for a in asarray.flat) + + # Short-circuit exact equality. + if actual == self.expected: + return True + + # If either type is non-numeric, fall back to strict equality. + # NB: we need Complex, rather than just Number, to ensure that __abs__, + # __sub__, and __float__ are defined. + if not ( + isinstance(self.expected, (Complex, Decimal)) + and isinstance(actual, (Complex, Decimal)) + ): + return False + + # Allow the user to control whether NaNs are considered equal to each + # other or not. The abs() calls are for compatibility with complex + # numbers. 
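+        # Illustrative: float("nan") == approx(float("nan"), nan_ok=True) holds;
+        # with the default nan_ok=False, NaN compares unequal to everything,
+        # itself included.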
+ if math.isnan(abs(self.expected)): # type: ignore[arg-type] + return self.nan_ok and math.isnan(abs(actual)) # type: ignore[arg-type] + + # Infinity shouldn't be approximately equal to anything but itself, but + # if there's a relative tolerance, it will be infinite and infinity + # will seem approximately equal to everything. The equal-to-itself + # case would have been short circuited above, so here we can just + # return false if the expected value is infinite. The abs() call is + # for compatibility with complex numbers. + if math.isinf(abs(self.expected)): # type: ignore[arg-type] + return False + + # Return true if the two numbers are within the tolerance. + result: bool = abs(self.expected - actual) <= self.tolerance + return result + + # Ignore type because of https://github.com/python/mypy/issues/4266. + __hash__ = None # type: ignore + + @property + def tolerance(self): + """Return the tolerance for the comparison. + + This could be either an absolute tolerance or a relative tolerance, + depending on what the user specified or which would be larger. + """ + + def set_default(x, default): + return x if x is not None else default + + # Figure out what the absolute tolerance should be. ``self.abs`` is + # either None or a value specified by the user. + absolute_tolerance = set_default(self.abs, self.DEFAULT_ABSOLUTE_TOLERANCE) + + if absolute_tolerance < 0: + raise ValueError( + f"absolute tolerance can't be negative: {absolute_tolerance}" + ) + if math.isnan(absolute_tolerance): + raise ValueError("absolute tolerance can't be NaN.") + + # If the user specified an absolute tolerance but not a relative one, + # just return the absolute tolerance. + if self.rel is None: + if self.abs is not None: + return absolute_tolerance + + # Figure out what the relative tolerance should be. ``self.rel`` is + # either None or a value specified by the user. This is done after + # we've made sure the user didn't ask for an absolute tolerance only, + # because we don't want to raise errors about the relative tolerance if + # we aren't even going to use it. + relative_tolerance = set_default( + self.rel, self.DEFAULT_RELATIVE_TOLERANCE + ) * abs(self.expected) + + if relative_tolerance < 0: + raise ValueError( + f"relative tolerance can't be negative: {relative_tolerance}" + ) + if math.isnan(relative_tolerance): + raise ValueError("relative tolerance can't be NaN.") + + # Return the larger of the relative and absolute tolerances. + return max(relative_tolerance, absolute_tolerance) + + +class ApproxDecimal(ApproxScalar): + """Perform approximate comparisons where the expected value is a Decimal.""" + + DEFAULT_ABSOLUTE_TOLERANCE = Decimal("1e-12") + DEFAULT_RELATIVE_TOLERANCE = Decimal("1e-6") + + +def approx(expected, rel=None, abs=None, nan_ok: bool = False) -> ApproxBase: + """Assert that two numbers (or two ordered sequences of numbers) are equal to each other + within some tolerance. + + Due to the :doc:`python:tutorial/floatingpoint`, numbers that we + would intuitively expect to be equal are not always so:: + + >>> 0.1 + 0.2 == 0.3 + False + + This problem is commonly encountered when writing tests, e.g. when making + sure that floating-point values are what you expect them to be. One way to + deal with this problem is to assert that two floating-point numbers are + equal to within some appropriate tolerance:: + + >>> abs((0.1 + 0.2) - 0.3) < 1e-6 + True + + However, comparisons like this are tedious to write and difficult to + understand. 
Furthermore, absolute comparisons like the one above are + usually discouraged because there's no tolerance that works well for all + situations. ``1e-6`` is good for numbers around ``1``, but too small for + very big numbers and too big for very small ones. It's better to express + the tolerance as a fraction of the expected value, but relative comparisons + like that are even more difficult to write correctly and concisely. + + The ``approx`` class performs floating-point comparisons using a syntax + that's as intuitive as possible:: + + >>> from pytest import approx + >>> 0.1 + 0.2 == approx(0.3) + True + + The same syntax also works for ordered sequences of numbers:: + + >>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6)) + True + + ``numpy`` arrays:: + + >>> import numpy as np # doctest: +SKIP + >>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6])) # doctest: +SKIP + True + + And for a ``numpy`` array against a scalar:: + + >>> import numpy as np # doctest: +SKIP + >>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3) # doctest: +SKIP + True + + Only ordered sequences are supported, because ``approx`` needs + to infer the relative position of the sequences without ambiguity. This means + ``sets`` and other unordered sequences are not supported. + + Finally, dictionary *values* can also be compared:: + + >>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6}) + True + + The comparison will be true if both mappings have the same keys and their + respective values match the expected tolerances. + + **Tolerances** + + By default, ``approx`` considers numbers within a relative tolerance of + ``1e-6`` (i.e. one part in a million) of its expected value to be equal. + This treatment would lead to surprising results if the expected value was + ``0.0``, because nothing but ``0.0`` itself is relatively close to ``0.0``. + To handle this case less surprisingly, ``approx`` also considers numbers + within an absolute tolerance of ``1e-12`` of its expected value to be + equal. Infinity and NaN are special cases. Infinity is only considered + equal to itself, regardless of the relative tolerance. NaN is not + considered equal to anything by default, but you can make it be equal to + itself by setting the ``nan_ok`` argument to True. (This is meant to + facilitate comparing arrays that use NaN to mean "no data".) + + Both the relative and absolute tolerances can be changed by passing + arguments to the ``approx`` constructor:: + + >>> 1.0001 == approx(1) + False + >>> 1.0001 == approx(1, rel=1e-3) + True + >>> 1.0001 == approx(1, abs=1e-3) + True + + If you specify ``abs`` but not ``rel``, the comparison will not consider + the relative tolerance at all. In other words, two numbers that are within + the default relative tolerance of ``1e-6`` will still be considered unequal + if they exceed the specified absolute tolerance. If you specify both + ``abs`` and ``rel``, the numbers will be considered equal if either + tolerance is met:: + + >>> 1 + 1e-8 == approx(1) + True + >>> 1 + 1e-8 == approx(1, abs=1e-12) + False + >>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12) + True + + You can also use ``approx`` to compare nonnumeric types, or dicts and + sequences containing nonnumeric types, in which case it falls back to + strict equality. 
+
+    This can be useful for comparing dicts and sequences that can contain
+    optional values::
+
+        >>> {"required": 1.0000005, "optional": None} == approx({"required": 1, "optional": None})
+        True
+        >>> [None, 1.0000005] == approx([None,1])
+        True
+        >>> ["foo", 1.0000005] == approx([None,1])
+        False
+
+    If you're thinking about using ``approx``, then you might want to know how
+    it compares to other good ways of comparing floating-point numbers. All of
+    these algorithms are based on relative and absolute tolerances and should
+    agree for the most part, but they do have meaningful differences:
+
+    - ``math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0)``: True if the relative
+      tolerance is met w.r.t. either ``a`` or ``b`` or if the absolute
+      tolerance is met. Because the relative tolerance is calculated w.r.t.
+      both ``a`` and ``b``, this test is symmetric (i.e. neither ``a`` nor
+      ``b`` is a "reference value"). You have to specify an absolute tolerance
+      if you want to compare to ``0.0`` because there is no tolerance by
+      default. More information: :py:func:`math.isclose`.
+
+    - ``numpy.isclose(a, b, rtol=1e-5, atol=1e-8)``: True if the difference
+      between ``a`` and ``b`` is less than the sum of the relative tolerance
+      w.r.t. ``b`` and the absolute tolerance. Because the relative tolerance
+      is only calculated w.r.t. ``b``, this test is asymmetric and you can
+      think of ``b`` as the reference value. Support for comparing sequences
+      is provided by :py:func:`numpy.allclose`. More information:
+      :std:doc:`numpy:reference/generated/numpy.isclose`.
+
+    - ``unittest.TestCase.assertAlmostEqual(a, b)``: True if ``a`` and ``b``
+      are within an absolute tolerance of ``1e-7``. No relative tolerance is
+      considered, so this function is not appropriate for very large or very
+      small numbers. Also, it's only available in subclasses of
+      ``unittest.TestCase`` and it's ugly because it doesn't follow PEP8.
+      More information: :py:meth:`unittest.TestCase.assertAlmostEqual`.
+
+    - ``a == pytest.approx(b, rel=1e-6, abs=1e-12)``: True if the relative
+      tolerance is met w.r.t. ``b`` or if the absolute tolerance is met.
+      Because the relative tolerance is only calculated w.r.t. ``b``, this test
+      is asymmetric and you can think of ``b`` as the reference value. In the
+      special case that you explicitly specify an absolute tolerance but not a
+      relative tolerance, only the absolute tolerance is considered.
+
+    .. note::
+
+        ``approx`` can handle numpy arrays, but we recommend the
+        specialised test helpers in :std:doc:`numpy:reference/routines.testing`
+        if you need support for comparisons, NaNs, or ULP-based tolerances.
+
+        To match strings using regex, you can use ``Matches`` from the
+        ``re_assert`` package.
+
+    .. warning::
+
+       .. versionchanged:: 3.2
+
+       In order to avoid inconsistent behavior, :py:exc:`TypeError` is
+       raised for ``>``, ``>=``, ``<`` and ``<=`` comparisons.
+       The example below illustrates the problem::
+
+           assert approx(0.1) > 0.1 + 1e-10  # calls approx(0.1).__gt__(0.1 + 1e-10)
+           assert 0.1 + 1e-10 > approx(0.1)  # calls approx(0.1).__lt__(0.1 + 1e-10)
+
+       In the second example one expects ``approx(0.1).__le__(0.1 + 1e-10)``
+       to be called. But instead, ``approx(0.1).__lt__(0.1 + 1e-10)`` is used
+       for the comparison. This is because the call hierarchy of rich
+       comparisons follows a fixed behavior. More information:
+       :py:meth:`object.__ge__`
+
+    .. versionchanged:: 3.7.1
+       ``approx`` raises ``TypeError`` when it encounters a dict value or
+       sequence element of nonnumeric type.
+
+    .. versionchanged:: 6.1.0
+       ``approx`` falls back to strict equality for nonnumeric types instead
+       of raising ``TypeError``.
+    """
+
+    # Delegate the comparison to a class that knows how to deal with the type
+    # of the expected value (e.g. int, float, list, dict, numpy.array, etc).
+    #
+    # The primary responsibility of these classes is to implement ``__eq__()``
+    # and ``__repr__()``. The former is used to actually check if some
+    # "actual" value is equivalent to the given expected value within the
+    # allowed tolerance. The latter is used to show the user the expected
+    # value and tolerance, in the case that a test failed.
+    #
+    # The actual logic for making approximate comparisons can be found in
+    # ApproxScalar, which is used to compare individual numbers. All of the
+    # other Approx classes eventually delegate to this class. The ApproxBase
+    # class provides some convenient methods and overloads, but isn't really
+    # essential.
+
+    __tracebackhide__ = True
+
+    if isinstance(expected, Decimal):
+        cls: Type[ApproxBase] = ApproxDecimal
+    elif isinstance(expected, Mapping):
+        cls = ApproxMapping
+    elif _is_numpy_array(expected):
+        expected = _as_numpy_array(expected)
+        cls = ApproxNumpy
+    elif (
+        hasattr(expected, "__getitem__")
+        and isinstance(expected, Sized)
+        # Type ignored because the error is wrong -- not unreachable.
+        and not isinstance(expected, STRING_TYPES)  # type: ignore[unreachable]
+    ):
+        cls = ApproxSequenceLike
+    elif (
+        isinstance(expected, Collection)
+        # Type ignored because the error is wrong -- not unreachable.
+        and not isinstance(expected, STRING_TYPES)  # type: ignore[unreachable]
+    ):
+        msg = f"pytest.approx() only supports ordered sequences, but got: {repr(expected)}"
+        raise TypeError(msg)
+    else:
+        cls = ApproxScalar
+
+    return cls(expected, rel, abs, nan_ok)
+
+
+def _is_numpy_array(obj: object) -> bool:
+    """
+    Return true if the given object is implicitly convertible to ndarray,
+    and numpy is already imported.
+    """
+    return _as_numpy_array(obj) is not None
+
+
+def _as_numpy_array(obj: object) -> Optional["ndarray"]:
+    """
+    Return an ndarray if the given object is implicitly convertible to ndarray,
+    and numpy is already imported, otherwise None.
+    """
+    import sys
+
+    np: Any = sys.modules.get("numpy")
+    if np is not None:
+        # avoid infinite recursion on numpy scalars, which have __array__
+        if np.isscalar(obj):
+            return None
+        elif isinstance(obj, np.ndarray):
+            return obj
+        elif hasattr(obj, "__array__") or hasattr(obj, "__array_interface__"):
+            return np.asarray(obj)
+    return None
+
+
+# builtin pytest.raises helper
+
+E = TypeVar("E", bound=BaseException)
+
+
+@overload
+def raises(
+    expected_exception: Union[Type[E], Tuple[Type[E], ...]],
+    *,
+    match: Optional[Union[str, Pattern[str]]] = ...,
+) -> "RaisesContext[E]":
+    ...
+
+
+@overload
+def raises(  # noqa: F811
+    expected_exception: Union[Type[E], Tuple[Type[E], ...]],
+    func: Callable[..., Any],
+    *args: Any,
+    **kwargs: Any,
+) -> _pytest._code.ExceptionInfo[E]:
+    ...
+
+
+def raises(  # noqa: F811
+    expected_exception: Union[Type[E], Tuple[Type[E], ...]], *args: Any, **kwargs: Any
+) -> Union["RaisesContext[E]", _pytest._code.ExceptionInfo[E]]:
+    r"""Assert that a code block/function call raises an exception.
+
+    :param typing.Type[E] | typing.Tuple[typing.Type[E], ...] expected_exception:
+        The expected exception type, or a tuple if multiple possible
+        exception types are expected.
+    :kwparam str | typing.Pattern[str] | None match:
+        If specified, a string containing a regular expression,
+        or a regular expression object, that is tested against the string
+        representation of the exception using :func:`re.search`.
+
+        To match a literal string that may contain :ref:`special characters
+        <re-syntax>`, the pattern can first be escaped with :func:`re.escape`.
+
+        (This is only used when :py:func:`pytest.raises` is used as a context manager,
+        and passed through to the function otherwise.
+        When using :py:func:`pytest.raises` as a function, you can use:
+        ``pytest.raises(Exc, func, match="passed on").match("my pattern")``.)
+
+    .. currentmodule:: _pytest._code
+
+    Use ``pytest.raises`` as a context manager, which will capture the exception of the given
+    type::
+
+        >>> import pytest
+        >>> with pytest.raises(ZeroDivisionError):
+        ...    1/0
+
+    If the code block does not raise the expected exception (``ZeroDivisionError`` in the example
+    above), or no exception at all, the check will fail instead.
+
+    You can also use the keyword argument ``match`` to assert that the
+    exception matches a text or regex::
+
+        >>> with pytest.raises(ValueError, match='must be 0 or None'):
+        ...     raise ValueError("value must be 0 or None")
+
+        >>> with pytest.raises(ValueError, match=r'must be \d+$'):
+        ...     raise ValueError("value must be 42")
+
+    The context manager produces an :class:`ExceptionInfo` object which can be used to inspect the
+    details of the captured exception::
+
+        >>> with pytest.raises(ValueError) as exc_info:
+        ...     raise ValueError("value must be 42")
+        >>> assert exc_info.type is ValueError
+        >>> assert exc_info.value.args[0] == "value must be 42"
+
+    .. note::
+
+       When using ``pytest.raises`` as a context manager, it's worthwhile to
+       note that normal context manager rules apply and that the exception
+       raised *must* be the final line in the scope of the context manager.
+       Lines of code after that, within the scope of the context manager,
+       will not be executed. For example::
+
+           >>> value = 15
+           >>> with pytest.raises(ValueError) as exc_info:
+           ...     if value > 10:
+           ...         raise ValueError("value must be <= 10")
+           ...     assert exc_info.type is ValueError  # this will not execute
+
+       Instead, the following approach must be taken (note the difference in
+       scope)::
+
+           >>> with pytest.raises(ValueError) as exc_info:
+           ...     if value > 10:
+           ...         raise ValueError("value must be <= 10")
+           ...
+           >>> assert exc_info.type is ValueError
+
+    **Using with** ``pytest.mark.parametrize``
+
+    When using :ref:`pytest.mark.parametrize ref`
+    it is possible to parametrize tests such that
+    some runs raise an exception and others do not.
+
+    See :ref:`parametrizing_conditional_raising` for an example.
+
+    **Legacy form**
+
+    It is possible to specify a callable by passing a to-be-called lambda::
+
+        >>> raises(ZeroDivisionError, lambda: 1/0)
+        <ExceptionInfo ...>
+
+    or you can specify an arbitrary callable with arguments::
+
+        >>> def f(x): return 1/x
+        ...
+        >>> raises(ZeroDivisionError, f, 0)
+        <ExceptionInfo ...>
+        >>> raises(ZeroDivisionError, f, x=0)
+        <ExceptionInfo ...>
+
+    The form above is fully supported but discouraged for new code because the
+    context manager form is regarded as more readable and less error-prone.
+
+    .. note::
+        Similar to caught exception objects in Python, explicitly clearing
+        local references to returned ``ExceptionInfo`` objects can
+        help the Python interpreter speed up its garbage collection.
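+
+        For example, a minimal sketch of what clearing such a reference
+        looks like in practice::
+
+            with pytest.raises(ValueError) as exc_info:
+                raise ValueError("boom")
+            assert exc_info.type is ValueError
+            del exc_info  # drop the local reference once it is no longer needed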
+
+        Clearing those references breaks a reference cycle
+        (``ExceptionInfo`` --> caught exception --> frame stack raising
+        the exception --> current frame stack --> local variables -->
+        ``ExceptionInfo``) which makes Python keep all objects referenced
+        from that cycle (including all local variables in the current
+        frame) alive until the next cyclic garbage collection run.
+        More detailed information can be found in the official Python
+        documentation for the ``try`` statement.
+    """
+    __tracebackhide__ = True
+
+    if not expected_exception:
+        raise ValueError(
+            f"Expected an exception type or a tuple of exception types, but got `{expected_exception!r}`. "
+            f"Raising exceptions is already understood as failing the test, so you don't need "
+            f"any special code to say 'this should never raise an exception'."
+        )
+    if isinstance(expected_exception, type):
+        excepted_exceptions: Tuple[Type[E], ...] = (expected_exception,)
+    else:
+        excepted_exceptions = expected_exception
+    for exc in excepted_exceptions:
+        if not isinstance(exc, type) or not issubclass(exc, BaseException):
+            msg = "expected exception must be a BaseException type, not {}"  # type: ignore[unreachable]
+            not_a = exc.__name__ if isinstance(exc, type) else type(exc).__name__
+            raise TypeError(msg.format(not_a))
+
+    message = f"DID NOT RAISE {expected_exception}"
+
+    if not args:
+        match: Optional[Union[str, Pattern[str]]] = kwargs.pop("match", None)
+        if kwargs:
+            msg = "Unexpected keyword arguments passed to pytest.raises: "
+            msg += ", ".join(sorted(kwargs))
+            msg += "\nUse context-manager form instead?"
+            raise TypeError(msg)
+        return RaisesContext(expected_exception, message, match)
+    else:
+        func = args[0]
+        if not callable(func):
+            raise TypeError(f"{func!r} object (type: {type(func)}) must be callable")
+        try:
+            func(*args[1:], **kwargs)
+        except expected_exception as e:
+            # We just caught the exception - there is a traceback.
+            assert e.__traceback__ is not None
+            return _pytest._code.ExceptionInfo.from_exc_info(
+                (type(e), e, e.__traceback__)
+            )
+        fail(message)
+
+
+# This doesn't work with mypy for now. Use fail.Exception instead.
+raises.Exception = fail.Exception  # type: ignore
+
+
+@final
+class RaisesContext(Generic[E]):
+    def __init__(
+        self,
+        expected_exception: Union[Type[E], Tuple[Type[E], ...]],
+        message: str,
+        match_expr: Optional[Union[str, Pattern[str]]] = None,
+    ) -> None:
+        self.expected_exception = expected_exception
+        self.message = message
+        self.match_expr = match_expr
+        self.excinfo: Optional[_pytest._code.ExceptionInfo[E]] = None
+
+    def __enter__(self) -> _pytest._code.ExceptionInfo[E]:
+        self.excinfo = _pytest._code.ExceptionInfo.for_later()
+        return self.excinfo
+
+    def __exit__(
+        self,
+        exc_type: Optional[Type[BaseException]],
+        exc_val: Optional[BaseException],
+        exc_tb: Optional[TracebackType],
+    ) -> bool:
+        __tracebackhide__ = True
+        if exc_type is None:
+            fail(self.message)
+        assert self.excinfo is not None
+        if not issubclass(exc_type, self.expected_exception):
+            return False
+        # Cast to narrow the exception type now that it's verified.
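+        # The issubclass() check above has already verified the type at
+        # runtime; the cast below merely narrows it for the type checker.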
+ exc_info = cast(Tuple[Type[E], E, TracebackType], (exc_type, exc_val, exc_tb)) + self.excinfo.fill_unfilled(exc_info) + if self.match_expr is not None: + self.excinfo.match(self.match_expr) + return True diff --git a/env/lib/python3.8/site-packages/_pytest/python_path.py b/env/lib/python3.8/site-packages/_pytest/python_path.py new file mode 100644 index 00000000..cceabbca --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/python_path.py @@ -0,0 +1,24 @@ +import sys + +import pytest +from pytest import Config +from pytest import Parser + + +def pytest_addoption(parser: Parser) -> None: + parser.addini("pythonpath", type="paths", help="Add paths to sys.path", default=[]) + + +@pytest.hookimpl(tryfirst=True) +def pytest_load_initial_conftests(early_config: Config) -> None: + # `pythonpath = a b` will set `sys.path` to `[a, b, x, y, z, ...]` + for path in reversed(early_config.getini("pythonpath")): + sys.path.insert(0, str(path)) + + +@pytest.hookimpl(trylast=True) +def pytest_unconfigure(config: Config) -> None: + for path in config.getini("pythonpath"): + path_str = str(path) + if path_str in sys.path: + sys.path.remove(path_str) diff --git a/env/lib/python3.8/site-packages/_pytest/recwarn.py b/env/lib/python3.8/site-packages/_pytest/recwarn.py new file mode 100644 index 00000000..d76ea020 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/recwarn.py @@ -0,0 +1,313 @@ +"""Record warnings during test function execution.""" +import re +import warnings +from pprint import pformat +from types import TracebackType +from typing import Any +from typing import Callable +from typing import Generator +from typing import Iterator +from typing import List +from typing import Optional +from typing import Pattern +from typing import Tuple +from typing import Type +from typing import TypeVar +from typing import Union + +from _pytest.compat import final +from _pytest.compat import overload +from _pytest.deprecated import check_ispytest +from _pytest.deprecated import WARNS_NONE_ARG +from _pytest.fixtures import fixture +from _pytest.outcomes import fail + + +T = TypeVar("T") + + +@fixture +def recwarn() -> Generator["WarningsRecorder", None, None]: + """Return a :class:`WarningsRecorder` instance that records all warnings emitted by test functions. + + See https://docs.pytest.org/en/latest/how-to/capture-warnings.html for information + on warning categories. + """ + wrec = WarningsRecorder(_ispytest=True) + with wrec: + warnings.simplefilter("default") + yield wrec + + +@overload +def deprecated_call( + *, match: Optional[Union[str, Pattern[str]]] = ... +) -> "WarningsRecorder": + ... + + +@overload +def deprecated_call( # noqa: F811 + func: Callable[..., T], *args: Any, **kwargs: Any +) -> T: + ... + + +def deprecated_call( # noqa: F811 + func: Optional[Callable[..., Any]] = None, *args: Any, **kwargs: Any +) -> Union["WarningsRecorder", Any]: + """Assert that code produces a ``DeprecationWarning`` or ``PendingDeprecationWarning``. + + This function can be used as a context manager:: + + >>> import warnings + >>> def api_call_v2(): + ... warnings.warn('use v3 of this api', DeprecationWarning) + ... return 200 + + >>> import pytest + >>> with pytest.deprecated_call(): + ... assert api_call_v2() == 200 + + It can also be used by passing a function and ``*args`` and ``**kwargs``, + in which case it will ensure calling ``func(*args, **kwargs)`` produces one of + the warnings types above. The return value is the return value of the function. 
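+
+    For instance, reusing ``api_call_v2`` from the example above, the
+    function form looks like this::
+
+        >>> pytest.deprecated_call(api_call_v2)
+        200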
+ + In the context manager form you may use the keyword argument ``match`` to assert + that the warning matches a text or regex. + + The context manager produces a list of :class:`warnings.WarningMessage` objects, + one for each warning raised. + """ + __tracebackhide__ = True + if func is not None: + args = (func,) + args + return warns((DeprecationWarning, PendingDeprecationWarning), *args, **kwargs) + + +@overload +def warns( + expected_warning: Union[Type[Warning], Tuple[Type[Warning], ...]] = ..., + *, + match: Optional[Union[str, Pattern[str]]] = ..., +) -> "WarningsChecker": + ... + + +@overload +def warns( # noqa: F811 + expected_warning: Union[Type[Warning], Tuple[Type[Warning], ...]], + func: Callable[..., T], + *args: Any, + **kwargs: Any, +) -> T: + ... + + +def warns( # noqa: F811 + expected_warning: Union[Type[Warning], Tuple[Type[Warning], ...]] = Warning, + *args: Any, + match: Optional[Union[str, Pattern[str]]] = None, + **kwargs: Any, +) -> Union["WarningsChecker", Any]: + r"""Assert that code raises a particular class of warning. + + Specifically, the parameter ``expected_warning`` can be a warning class or sequence + of warning classes, and the code inside the ``with`` block must issue at least one + warning of that class or classes. + + This helper produces a list of :class:`warnings.WarningMessage` objects, one for + each warning raised (regardless of whether it is an ``expected_warning`` or not). + + This function can be used as a context manager, which will capture all the raised + warnings inside it:: + + >>> import pytest + >>> with pytest.warns(RuntimeWarning): + ... warnings.warn("my warning", RuntimeWarning) + + In the context manager form you may use the keyword argument ``match`` to assert + that the warning matches a text or regex:: + + >>> with pytest.warns(UserWarning, match='must be 0 or None'): + ... warnings.warn("value must be 0 or None", UserWarning) + + >>> with pytest.warns(UserWarning, match=r'must be \d+$'): + ... warnings.warn("value must be 42", UserWarning) + + >>> with pytest.warns(UserWarning, match=r'must be \d+$'): + ... warnings.warn("this is not here", UserWarning) + Traceback (most recent call last): + ... + Failed: DID NOT WARN. No warnings of type ...UserWarning... were emitted... + + **Using with** ``pytest.mark.parametrize`` + + When using :ref:`pytest.mark.parametrize ref` it is possible to parametrize tests + such that some runs raise a warning and others do not. + + This could be achieved in the same way as with exceptions, see + :ref:`parametrizing_conditional_raising` for an example. + + """ + __tracebackhide__ = True + if not args: + if kwargs: + argnames = ", ".join(sorted(kwargs)) + raise TypeError( + f"Unexpected keyword arguments passed to pytest.warns: {argnames}" + "\nUse context-manager form instead?" + ) + return WarningsChecker(expected_warning, match_expr=match, _ispytest=True) + else: + func = args[0] + if not callable(func): + raise TypeError(f"{func!r} object (type: {type(func)}) must be callable") + with WarningsChecker(expected_warning, _ispytest=True): + return func(*args[1:], **kwargs) + + +class WarningsRecorder(warnings.catch_warnings): # type:ignore[type-arg] + """A context manager to record raised warnings. + + Each recorded warning is an instance of :class:`warnings.WarningMessage`. + + Adapted from `warnings.catch_warnings`. + + .. note:: + ``DeprecationWarning`` and ``PendingDeprecationWarning`` are treated + differently; see :ref:`ensuring_function_triggers`. 
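+
+    A typical use, sketched with a hypothetical test function that receives
+    the ``recwarn`` fixture::
+
+        def test_hello(recwarn):
+            warnings.warn("hello", UserWarning)
+            assert len(recwarn) == 1
+            w = recwarn.pop(UserWarning)
+            assert str(w.message) == "hello"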
+ + """ + + def __init__(self, *, _ispytest: bool = False) -> None: + check_ispytest(_ispytest) + # Type ignored due to the way typeshed handles warnings.catch_warnings. + super().__init__(record=True) # type: ignore[call-arg] + self._entered = False + self._list: List[warnings.WarningMessage] = [] + + @property + def list(self) -> List["warnings.WarningMessage"]: + """The list of recorded warnings.""" + return self._list + + def __getitem__(self, i: int) -> "warnings.WarningMessage": + """Get a recorded warning by index.""" + return self._list[i] + + def __iter__(self) -> Iterator["warnings.WarningMessage"]: + """Iterate through the recorded warnings.""" + return iter(self._list) + + def __len__(self) -> int: + """The number of recorded warnings.""" + return len(self._list) + + def pop(self, cls: Type[Warning] = Warning) -> "warnings.WarningMessage": + """Pop the first recorded warning, raise exception if not exists.""" + for i, w in enumerate(self._list): + if issubclass(w.category, cls): + return self._list.pop(i) + __tracebackhide__ = True + raise AssertionError(f"{cls!r} not found in warning list") + + def clear(self) -> None: + """Clear the list of recorded warnings.""" + self._list[:] = [] + + # Type ignored because it doesn't exactly warnings.catch_warnings.__enter__ + # -- it returns a List but we only emulate one. + def __enter__(self) -> "WarningsRecorder": # type: ignore + if self._entered: + __tracebackhide__ = True + raise RuntimeError(f"Cannot enter {self!r} twice") + _list = super().__enter__() + # record=True means it's None. + assert _list is not None + self._list = _list + warnings.simplefilter("always") + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + if not self._entered: + __tracebackhide__ = True + raise RuntimeError(f"Cannot exit {self!r} without entering first") + + super().__exit__(exc_type, exc_val, exc_tb) + + # Built-in catch_warnings does not reset entered state so we do it + # manually here for this context manager to become reusable. 
+ self._entered = False + + +@final +class WarningsChecker(WarningsRecorder): + def __init__( + self, + expected_warning: Optional[ + Union[Type[Warning], Tuple[Type[Warning], ...]] + ] = Warning, + match_expr: Optional[Union[str, Pattern[str]]] = None, + *, + _ispytest: bool = False, + ) -> None: + check_ispytest(_ispytest) + super().__init__(_ispytest=True) + + msg = "exceptions must be derived from Warning, not %s" + if expected_warning is None: + warnings.warn(WARNS_NONE_ARG, stacklevel=4) + expected_warning_tup = None + elif isinstance(expected_warning, tuple): + for exc in expected_warning: + if not issubclass(exc, Warning): + raise TypeError(msg % type(exc)) + expected_warning_tup = expected_warning + elif issubclass(expected_warning, Warning): + expected_warning_tup = (expected_warning,) + else: + raise TypeError(msg % type(expected_warning)) + + self.expected_warning = expected_warning_tup + self.match_expr = match_expr + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + super().__exit__(exc_type, exc_val, exc_tb) + + __tracebackhide__ = True + + def found_str(): + return pformat([record.message for record in self], indent=2) + + # only check if we're not currently handling an exception + if exc_type is None and exc_val is None and exc_tb is None: + if self.expected_warning is not None: + if not any(issubclass(r.category, self.expected_warning) for r in self): + __tracebackhide__ = True + fail( + f"DID NOT WARN. No warnings of type {self.expected_warning} were emitted.\n" + f"The list of emitted warnings is: {found_str()}." + ) + elif self.match_expr is not None: + for r in self: + if issubclass(r.category, self.expected_warning): + if re.compile(self.match_expr).search(str(r.message)): + break + else: + fail( + f"""\ +DID NOT WARN. No warnings of type {self.expected_warning} matching the regex were emitted. 
+ Regex: {self.match_expr} + Emitted warnings: {found_str()}""" + ) diff --git a/env/lib/python3.8/site-packages/_pytest/reports.py b/env/lib/python3.8/site-packages/_pytest/reports.py new file mode 100644 index 00000000..c35f7087 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/reports.py @@ -0,0 +1,603 @@ +import os +from io import StringIO +from pprint import pprint +from typing import Any +from typing import cast +from typing import Dict +from typing import Iterable +from typing import Iterator +from typing import List +from typing import Mapping +from typing import NoReturn +from typing import Optional +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +import attr + +from _pytest._code.code import ExceptionChainRepr +from _pytest._code.code import ExceptionInfo +from _pytest._code.code import ExceptionRepr +from _pytest._code.code import ReprEntry +from _pytest._code.code import ReprEntryNative +from _pytest._code.code import ReprExceptionInfo +from _pytest._code.code import ReprFileLocation +from _pytest._code.code import ReprFuncArgs +from _pytest._code.code import ReprLocals +from _pytest._code.code import ReprTraceback +from _pytest._code.code import TerminalRepr +from _pytest._io import TerminalWriter +from _pytest.compat import final +from _pytest.config import Config +from _pytest.nodes import Collector +from _pytest.nodes import Item +from _pytest.outcomes import skip + +if TYPE_CHECKING: + from typing_extensions import Literal + + from _pytest.runner import CallInfo + + +def getworkerinfoline(node): + try: + return node._workerinfocache + except AttributeError: + d = node.workerinfo + ver = "%s.%s.%s" % d["version_info"][:3] + node._workerinfocache = s = "[{}] {} -- Python {} {}".format( + d["id"], d["sysplatform"], ver, d["executable"] + ) + return s + + +_R = TypeVar("_R", bound="BaseReport") + + +class BaseReport: + when: Optional[str] + location: Optional[Tuple[str, Optional[int], str]] + longrepr: Union[ + None, ExceptionInfo[BaseException], Tuple[str, int, str], str, TerminalRepr + ] + sections: List[Tuple[str, str]] + nodeid: str + outcome: "Literal['passed', 'failed', 'skipped']" + + def __init__(self, **kw: Any) -> None: + self.__dict__.update(kw) + + if TYPE_CHECKING: + # Can have arbitrary fields given to __init__(). + def __getattr__(self, key: str) -> Any: + ... + + def toterminal(self, out: TerminalWriter) -> None: + if hasattr(self, "node"): + worker_info = getworkerinfoline(self.node) + if worker_info: + out.line(worker_info) + + longrepr = self.longrepr + if longrepr is None: + return + + if hasattr(longrepr, "toterminal"): + longrepr_terminal = cast(TerminalRepr, longrepr) + longrepr_terminal.toterminal(out) + else: + try: + s = str(longrepr) + except UnicodeEncodeError: + s = "" + out.line(s) + + def get_sections(self, prefix: str) -> Iterator[Tuple[str, str]]: + for name, content in self.sections: + if name.startswith(prefix): + yield prefix, content + + @property + def longreprtext(self) -> str: + """Read-only property that returns the full string representation of + ``longrepr``. + + .. versionadded:: 3.0 + """ + file = StringIO() + tw = TerminalWriter(file) + tw.hasmarkup = False + self.toterminal(tw) + exc = file.getvalue() + return exc.strip() + + @property + def caplog(self) -> str: + """Return captured log lines, if log capturing is enabled. + + .. 
versionadded:: 3.5 + """ + return "\n".join( + content for (prefix, content) in self.get_sections("Captured log") + ) + + @property + def capstdout(self) -> str: + """Return captured text from stdout, if capturing is enabled. + + .. versionadded:: 3.0 + """ + return "".join( + content for (prefix, content) in self.get_sections("Captured stdout") + ) + + @property + def capstderr(self) -> str: + """Return captured text from stderr, if capturing is enabled. + + .. versionadded:: 3.0 + """ + return "".join( + content for (prefix, content) in self.get_sections("Captured stderr") + ) + + @property + def passed(self) -> bool: + """Whether the outcome is passed.""" + return self.outcome == "passed" + + @property + def failed(self) -> bool: + """Whether the outcome is failed.""" + return self.outcome == "failed" + + @property + def skipped(self) -> bool: + """Whether the outcome is skipped.""" + return self.outcome == "skipped" + + @property + def fspath(self) -> str: + """The path portion of the reported node, as a string.""" + return self.nodeid.split("::")[0] + + @property + def count_towards_summary(self) -> bool: + """**Experimental** Whether this report should be counted towards the + totals shown at the end of the test session: "1 passed, 1 failure, etc". + + .. note:: + + This function is considered **experimental**, so beware that it is subject to changes + even in patch releases. + """ + return True + + @property + def head_line(self) -> Optional[str]: + """**Experimental** The head line shown with longrepr output for this + report, more commonly during traceback representation during + failures:: + + ________ Test.foo ________ + + + In the example above, the head_line is "Test.foo". + + .. note:: + + This function is considered **experimental**, so beware that it is subject to changes + even in patch releases. + """ + if self.location is not None: + fspath, lineno, domain = self.location + return domain + return None + + def _get_verbose_word(self, config: Config): + _category, _short, verbose = config.hook.pytest_report_teststatus( + report=self, config=config + ) + return verbose + + def _to_json(self) -> Dict[str, Any]: + """Return the contents of this report as a dict of builtin entries, + suitable for serialization. + + This was originally the serialize_report() function from xdist (ca03269). + + Experimental method. + """ + return _report_to_json(self) + + @classmethod + def _from_json(cls: Type[_R], reportdict: Dict[str, object]) -> _R: + """Create either a TestReport or CollectReport, depending on the calling class. + + It is the callers responsibility to know which class to pass here. + + This was originally the serialize_report() function from xdist (ca03269). + + Experimental method. + """ + kwargs = _report_kwargs_from_json(reportdict) + return cls(**kwargs) + + +def _report_unserialization_failure( + type_name: str, report_class: Type[BaseReport], reportdict +) -> NoReturn: + url = "https://github.com/pytest-dev/pytest/issues" + stream = StringIO() + pprint("-" * 100, stream=stream) + pprint("INTERNALERROR: Unknown entry type returned: %s" % type_name, stream=stream) + pprint("report_name: %s" % report_class, stream=stream) + pprint(reportdict, stream=stream) + pprint("Please report this bug at %s" % url, stream=stream) + pprint("-" * 100, stream=stream) + raise RuntimeError(stream.getvalue()) + + +@final +class TestReport(BaseReport): + """Basic test report object (also used for setup and teardown calls if + they fail). + + Reports can contain arbitrary extra attributes. 
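+
+    A sketch of the experimental JSON round-trip described above, where
+    ``report`` stands for any existing instance::
+
+        data = report._to_json()
+        same_report = TestReport._from_json(data)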
+ """ + + __test__ = False + + def __init__( + self, + nodeid: str, + location: Tuple[str, Optional[int], str], + keywords: Mapping[str, Any], + outcome: "Literal['passed', 'failed', 'skipped']", + longrepr: Union[ + None, ExceptionInfo[BaseException], Tuple[str, int, str], str, TerminalRepr + ], + when: "Literal['setup', 'call', 'teardown']", + sections: Iterable[Tuple[str, str]] = (), + duration: float = 0, + user_properties: Optional[Iterable[Tuple[str, object]]] = None, + **extra, + ) -> None: + #: Normalized collection nodeid. + self.nodeid = nodeid + + #: A (filesystempath, lineno, domaininfo) tuple indicating the + #: actual location of a test item - it might be different from the + #: collected one e.g. if a method is inherited from a different module. + self.location: Tuple[str, Optional[int], str] = location + + #: A name -> value dictionary containing all keywords and + #: markers associated with a test invocation. + self.keywords: Mapping[str, Any] = keywords + + #: Test outcome, always one of "passed", "failed", "skipped". + self.outcome = outcome + + #: None or a failure representation. + self.longrepr = longrepr + + #: One of 'setup', 'call', 'teardown' to indicate runtest phase. + self.when = when + + #: User properties is a list of tuples (name, value) that holds user + #: defined properties of the test. + self.user_properties = list(user_properties or []) + + #: Tuples of str ``(heading, content)`` with extra information + #: for the test report. Used by pytest to add text captured + #: from ``stdout``, ``stderr``, and intercepted logging events. May + #: be used by other plugins to add arbitrary information to reports. + self.sections = list(sections) + + #: Time it took to run just the test. + self.duration: float = duration + + self.__dict__.update(extra) + + def __repr__(self) -> str: + return "<{} {!r} when={!r} outcome={!r}>".format( + self.__class__.__name__, self.nodeid, self.when, self.outcome + ) + + @classmethod + def from_item_and_call(cls, item: Item, call: "CallInfo[None]") -> "TestReport": + """Create and fill a TestReport with standard item and call info. + + :param item: The item. + :param call: The call info. + """ + when = call.when + # Remove "collect" from the Literal type -- only for collection calls. 
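+        # For test items, call.when can only be "setup", "call" or
+        # "teardown", which is exactly what the TestReport constructor
+        # accepts.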
+        assert when != "collect"
+        duration = call.duration
+        keywords = {x: 1 for x in item.keywords}
+        excinfo = call.excinfo
+        sections = []
+        if not call.excinfo:
+            outcome: Literal["passed", "failed", "skipped"] = "passed"
+            longrepr: Union[
+                None,
+                ExceptionInfo[BaseException],
+                Tuple[str, int, str],
+                str,
+                TerminalRepr,
+            ] = None
+        else:
+            if not isinstance(excinfo, ExceptionInfo):
+                outcome = "failed"
+                longrepr = excinfo
+            elif isinstance(excinfo.value, skip.Exception):
+                outcome = "skipped"
+                r = excinfo._getreprcrash()
+                if excinfo.value._use_item_location:
+                    path, line = item.reportinfo()[:2]
+                    assert line is not None
+                    longrepr = os.fspath(path), line + 1, r.message
+                else:
+                    longrepr = (str(r.path), r.lineno, r.message)
+            else:
+                outcome = "failed"
+                if call.when == "call":
+                    longrepr = item.repr_failure(excinfo)
+                else:  # exception in setup or teardown
+                    longrepr = item._repr_failure_py(
+                        excinfo, style=item.config.getoption("tbstyle", "auto")
+                    )
+        for rwhen, key, content in item._report_sections:
+            sections.append((f"Captured {key} {rwhen}", content))
+        return cls(
+            item.nodeid,
+            item.location,
+            keywords,
+            outcome,
+            longrepr,
+            when,
+            sections,
+            duration,
+            user_properties=item.user_properties,
+        )
+
+
+@final
+class CollectReport(BaseReport):
+    """Collection report object.
+
+    Reports can contain arbitrary extra attributes.
+    """
+
+    when = "collect"
+
+    def __init__(
+        self,
+        nodeid: str,
+        outcome: "Literal['passed', 'failed', 'skipped']",
+        longrepr: Union[
+            None, ExceptionInfo[BaseException], Tuple[str, int, str], str, TerminalRepr
+        ],
+        result: Optional[List[Union[Item, Collector]]],
+        sections: Iterable[Tuple[str, str]] = (),
+        **extra,
+    ) -> None:
+        #: Normalized collection nodeid.
+        self.nodeid = nodeid
+
+        #: Test outcome, always one of "passed", "failed", "skipped".
+        self.outcome = outcome
+
+        #: None or a failure representation.
+        self.longrepr = longrepr
+
+        #: The collected items and collection nodes.
+        self.result = result or []
+
+        #: Tuples of str ``(heading, content)`` with extra information
+        #: for the test report. Used by pytest to add text captured
+        #: from ``stdout``, ``stderr``, and intercepted logging events. May
+        #: be used by other plugins to add arbitrary information to reports.
+        self.sections = list(sections)
+
+        self.__dict__.update(extra)
+
+    @property
+    def location(self):
+        return (self.fspath, None, self.fspath)
+
+    def __repr__(self) -> str:
+        return "<CollectReport {!r} lenresult={} outcome={!r}>".format(
+            self.nodeid, len(self.result), self.outcome
+        )
+
+
+class CollectErrorRepr(TerminalRepr):
+    def __init__(self, msg: str) -> None:
+        self.longrepr = msg
+
+    def toterminal(self, out: TerminalWriter) -> None:
+        out.line(self.longrepr, red=True)
+
+
+def pytest_report_to_serializable(
+    report: Union[CollectReport, TestReport]
+) -> Optional[Dict[str, Any]]:
+    if isinstance(report, (TestReport, CollectReport)):
+        data = report._to_json()
+        data["$report_type"] = report.__class__.__name__
+        return data
+    # TODO: Check if this is actually reachable.
+ return None # type: ignore[unreachable] + + +def pytest_report_from_serializable( + data: Dict[str, Any], +) -> Optional[Union[CollectReport, TestReport]]: + if "$report_type" in data: + if data["$report_type"] == "TestReport": + return TestReport._from_json(data) + elif data["$report_type"] == "CollectReport": + return CollectReport._from_json(data) + assert False, "Unknown report_type unserialize data: {}".format( + data["$report_type"] + ) + return None + + +def _report_to_json(report: BaseReport) -> Dict[str, Any]: + """Return the contents of this report as a dict of builtin entries, + suitable for serialization. + + This was originally the serialize_report() function from xdist (ca03269). + """ + + def serialize_repr_entry( + entry: Union[ReprEntry, ReprEntryNative] + ) -> Dict[str, Any]: + data = attr.asdict(entry) + for key, value in data.items(): + if hasattr(value, "__dict__"): + data[key] = attr.asdict(value) + entry_data = {"type": type(entry).__name__, "data": data} + return entry_data + + def serialize_repr_traceback(reprtraceback: ReprTraceback) -> Dict[str, Any]: + result = attr.asdict(reprtraceback) + result["reprentries"] = [ + serialize_repr_entry(x) for x in reprtraceback.reprentries + ] + return result + + def serialize_repr_crash( + reprcrash: Optional[ReprFileLocation], + ) -> Optional[Dict[str, Any]]: + if reprcrash is not None: + return attr.asdict(reprcrash) + else: + return None + + def serialize_exception_longrepr(rep: BaseReport) -> Dict[str, Any]: + assert rep.longrepr is not None + # TODO: Investigate whether the duck typing is really necessary here. + longrepr = cast(ExceptionRepr, rep.longrepr) + result: Dict[str, Any] = { + "reprcrash": serialize_repr_crash(longrepr.reprcrash), + "reprtraceback": serialize_repr_traceback(longrepr.reprtraceback), + "sections": longrepr.sections, + } + if isinstance(longrepr, ExceptionChainRepr): + result["chain"] = [] + for repr_traceback, repr_crash, description in longrepr.chain: + result["chain"].append( + ( + serialize_repr_traceback(repr_traceback), + serialize_repr_crash(repr_crash), + description, + ) + ) + else: + result["chain"] = None + return result + + d = report.__dict__.copy() + if hasattr(report.longrepr, "toterminal"): + if hasattr(report.longrepr, "reprtraceback") and hasattr( + report.longrepr, "reprcrash" + ): + d["longrepr"] = serialize_exception_longrepr(report) + else: + d["longrepr"] = str(report.longrepr) + else: + d["longrepr"] = report.longrepr + for name in d: + if isinstance(d[name], os.PathLike): + d[name] = os.fspath(d[name]) + elif name == "result": + d[name] = None # for now + return d + + +def _report_kwargs_from_json(reportdict: Dict[str, Any]) -> Dict[str, Any]: + """Return **kwargs that can be used to construct a TestReport or + CollectReport instance. + + This was originally the serialize_report() function from xdist (ca03269). 
+ """ + + def deserialize_repr_entry(entry_data): + data = entry_data["data"] + entry_type = entry_data["type"] + if entry_type == "ReprEntry": + reprfuncargs = None + reprfileloc = None + reprlocals = None + if data["reprfuncargs"]: + reprfuncargs = ReprFuncArgs(**data["reprfuncargs"]) + if data["reprfileloc"]: + reprfileloc = ReprFileLocation(**data["reprfileloc"]) + if data["reprlocals"]: + reprlocals = ReprLocals(data["reprlocals"]["lines"]) + + reprentry: Union[ReprEntry, ReprEntryNative] = ReprEntry( + lines=data["lines"], + reprfuncargs=reprfuncargs, + reprlocals=reprlocals, + reprfileloc=reprfileloc, + style=data["style"], + ) + elif entry_type == "ReprEntryNative": + reprentry = ReprEntryNative(data["lines"]) + else: + _report_unserialization_failure(entry_type, TestReport, reportdict) + return reprentry + + def deserialize_repr_traceback(repr_traceback_dict): + repr_traceback_dict["reprentries"] = [ + deserialize_repr_entry(x) for x in repr_traceback_dict["reprentries"] + ] + return ReprTraceback(**repr_traceback_dict) + + def deserialize_repr_crash(repr_crash_dict: Optional[Dict[str, Any]]): + if repr_crash_dict is not None: + return ReprFileLocation(**repr_crash_dict) + else: + return None + + if ( + reportdict["longrepr"] + and "reprcrash" in reportdict["longrepr"] + and "reprtraceback" in reportdict["longrepr"] + ): + + reprtraceback = deserialize_repr_traceback( + reportdict["longrepr"]["reprtraceback"] + ) + reprcrash = deserialize_repr_crash(reportdict["longrepr"]["reprcrash"]) + if reportdict["longrepr"]["chain"]: + chain = [] + for repr_traceback_data, repr_crash_data, description in reportdict[ + "longrepr" + ]["chain"]: + chain.append( + ( + deserialize_repr_traceback(repr_traceback_data), + deserialize_repr_crash(repr_crash_data), + description, + ) + ) + exception_info: Union[ + ExceptionChainRepr, ReprExceptionInfo + ] = ExceptionChainRepr(chain) + else: + exception_info = ReprExceptionInfo(reprtraceback, reprcrash) + + for section in reportdict["longrepr"]["sections"]: + exception_info.addsection(*section) + reportdict["longrepr"] = exception_info + + return reportdict diff --git a/env/lib/python3.8/site-packages/_pytest/runner.py b/env/lib/python3.8/site-packages/_pytest/runner.py new file mode 100644 index 00000000..584c3229 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/runner.py @@ -0,0 +1,542 @@ +"""Basic collect and runtest protocol implementations.""" +import bdb +import os +import sys +from typing import Callable +from typing import cast +from typing import Dict +from typing import Generic +from typing import List +from typing import Optional +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import TypeVar +from typing import Union + +import attr + +from .reports import BaseReport +from .reports import CollectErrorRepr +from .reports import CollectReport +from .reports import TestReport +from _pytest import timing +from _pytest._code.code import ExceptionChainRepr +from _pytest._code.code import ExceptionInfo +from _pytest._code.code import TerminalRepr +from _pytest.compat import final +from _pytest.config.argparsing import Parser +from _pytest.deprecated import check_ispytest +from _pytest.nodes import Collector +from _pytest.nodes import Item +from _pytest.nodes import Node +from _pytest.outcomes import Exit +from _pytest.outcomes import OutcomeException +from _pytest.outcomes import Skipped +from _pytest.outcomes import TEST_OUTCOME + +if TYPE_CHECKING: + from typing_extensions import Literal 
+ + from _pytest.main import Session + from _pytest.terminal import TerminalReporter + +# +# pytest plugin hooks. + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("terminal reporting", "Reporting", after="general") + group.addoption( + "--durations", + action="store", + type=int, + default=None, + metavar="N", + help="Show N slowest setup/test durations (N=0 for all)", + ) + group.addoption( + "--durations-min", + action="store", + type=float, + default=0.005, + metavar="N", + help="Minimal duration in seconds for inclusion in slowest list. " + "Default: 0.005.", + ) + + +def pytest_terminal_summary(terminalreporter: "TerminalReporter") -> None: + durations = terminalreporter.config.option.durations + durations_min = terminalreporter.config.option.durations_min + verbose = terminalreporter.config.getvalue("verbose") + if durations is None: + return + tr = terminalreporter + dlist = [] + for replist in tr.stats.values(): + for rep in replist: + if hasattr(rep, "duration"): + dlist.append(rep) + if not dlist: + return + dlist.sort(key=lambda x: x.duration, reverse=True) # type: ignore[no-any-return] + if not durations: + tr.write_sep("=", "slowest durations") + else: + tr.write_sep("=", "slowest %s durations" % durations) + dlist = dlist[:durations] + + for i, rep in enumerate(dlist): + if verbose < 2 and rep.duration < durations_min: + tr.write_line("") + tr.write_line( + "(%s durations < %gs hidden. Use -vv to show these durations.)" + % (len(dlist) - i, durations_min) + ) + break + tr.write_line(f"{rep.duration:02.2f}s {rep.when:<8} {rep.nodeid}") + + +def pytest_sessionstart(session: "Session") -> None: + session._setupstate = SetupState() + + +def pytest_sessionfinish(session: "Session") -> None: + session._setupstate.teardown_exact(None) + + +def pytest_runtest_protocol(item: Item, nextitem: Optional[Item]) -> bool: + ihook = item.ihook + ihook.pytest_runtest_logstart(nodeid=item.nodeid, location=item.location) + runtestprotocol(item, nextitem=nextitem) + ihook.pytest_runtest_logfinish(nodeid=item.nodeid, location=item.location) + return True + + +def runtestprotocol( + item: Item, log: bool = True, nextitem: Optional[Item] = None +) -> List[TestReport]: + hasrequest = hasattr(item, "_request") + if hasrequest and not item._request: # type: ignore[attr-defined] + # This only happens if the item is re-run, as is done by + # pytest-rerunfailures. + item._initrequest() # type: ignore[attr-defined] + rep = call_and_report(item, "setup", log) + reports = [rep] + if rep.passed: + if item.config.getoption("setupshow", False): + show_test_item(item) + if not item.config.getoption("setuponly", False): + reports.append(call_and_report(item, "call", log)) + reports.append(call_and_report(item, "teardown", log, nextitem=nextitem)) + # After all teardown hooks have been called + # want funcargs and request info to go away. 
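+    # (dropping these references lets fixture values be garbage collected
+    # between re-runs)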
+ if hasrequest: + item._request = False # type: ignore[attr-defined] + item.funcargs = None # type: ignore[attr-defined] + return reports + + +def show_test_item(item: Item) -> None: + """Show test function, parameters and the fixtures of the test item.""" + tw = item.config.get_terminal_writer() + tw.line() + tw.write(" " * 8) + tw.write(item.nodeid) + used_fixtures = sorted(getattr(item, "fixturenames", [])) + if used_fixtures: + tw.write(" (fixtures used: {})".format(", ".join(used_fixtures))) + tw.flush() + + +def pytest_runtest_setup(item: Item) -> None: + _update_current_test_var(item, "setup") + item.session._setupstate.setup(item) + + +def pytest_runtest_call(item: Item) -> None: + _update_current_test_var(item, "call") + try: + del sys.last_type + del sys.last_value + del sys.last_traceback + except AttributeError: + pass + try: + item.runtest() + except Exception as e: + # Store trace info to allow postmortem debugging + sys.last_type = type(e) + sys.last_value = e + assert e.__traceback__ is not None + # Skip *this* frame + sys.last_traceback = e.__traceback__.tb_next + raise e + + +def pytest_runtest_teardown(item: Item, nextitem: Optional[Item]) -> None: + _update_current_test_var(item, "teardown") + item.session._setupstate.teardown_exact(nextitem) + _update_current_test_var(item, None) + + +def _update_current_test_var( + item: Item, when: Optional["Literal['setup', 'call', 'teardown']"] +) -> None: + """Update :envvar:`PYTEST_CURRENT_TEST` to reflect the current item and stage. + + If ``when`` is None, delete ``PYTEST_CURRENT_TEST`` from the environment. + """ + var_name = "PYTEST_CURRENT_TEST" + if when: + value = f"{item.nodeid} ({when})" + # don't allow null bytes on environment variables (see #2644, #2957) + value = value.replace("\x00", "(null)") + os.environ[var_name] = value + else: + os.environ.pop(var_name) + + +def pytest_report_teststatus(report: BaseReport) -> Optional[Tuple[str, str, str]]: + if report.when in ("setup", "teardown"): + if report.failed: + # category, shortletter, verbose-word + return "error", "E", "ERROR" + elif report.skipped: + return "skipped", "s", "SKIPPED" + else: + return "", "", "" + return None + + +# +# Implementation + + +def call_and_report( + item: Item, when: "Literal['setup', 'call', 'teardown']", log: bool = True, **kwds +) -> TestReport: + call = call_runtest_hook(item, when, **kwds) + hook = item.ihook + report: TestReport = hook.pytest_runtest_makereport(item=item, call=call) + if log: + hook.pytest_runtest_logreport(report=report) + if check_interactive_exception(call, report): + hook.pytest_exception_interact(node=item, call=call, report=report) + return report + + +def check_interactive_exception(call: "CallInfo[object]", report: BaseReport) -> bool: + """Check whether the call raised an exception that should be reported as + interactive.""" + if call.excinfo is None: + # Didn't raise. + return False + if hasattr(report, "wasxfail"): + # Exception was expected. + return False + if isinstance(call.excinfo.value, (Skipped, bdb.BdbQuit)): + # Special control flow exception. 
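+        # (e.g. pytest.skip() from within a test, or the user quitting the
+        # debugger with "q")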
+ return False + return True + + +def call_runtest_hook( + item: Item, when: "Literal['setup', 'call', 'teardown']", **kwds +) -> "CallInfo[None]": + if when == "setup": + ihook: Callable[..., None] = item.ihook.pytest_runtest_setup + elif when == "call": + ihook = item.ihook.pytest_runtest_call + elif when == "teardown": + ihook = item.ihook.pytest_runtest_teardown + else: + assert False, f"Unhandled runtest hook case: {when}" + reraise: Tuple[Type[BaseException], ...] = (Exit,) + if not item.config.getoption("usepdb", False): + reraise += (KeyboardInterrupt,) + return CallInfo.from_call( + lambda: ihook(item=item, **kwds), when=when, reraise=reraise + ) + + +TResult = TypeVar("TResult", covariant=True) + + +@final +@attr.s(repr=False, init=False, auto_attribs=True) +class CallInfo(Generic[TResult]): + """Result/Exception info of a function invocation.""" + + _result: Optional[TResult] + #: The captured exception of the call, if it raised. + excinfo: Optional[ExceptionInfo[BaseException]] + #: The system time when the call started, in seconds since the epoch. + start: float + #: The system time when the call ended, in seconds since the epoch. + stop: float + #: The call duration, in seconds. + duration: float + #: The context of invocation: "collect", "setup", "call" or "teardown". + when: "Literal['collect', 'setup', 'call', 'teardown']" + + def __init__( + self, + result: Optional[TResult], + excinfo: Optional[ExceptionInfo[BaseException]], + start: float, + stop: float, + duration: float, + when: "Literal['collect', 'setup', 'call', 'teardown']", + *, + _ispytest: bool = False, + ) -> None: + check_ispytest(_ispytest) + self._result = result + self.excinfo = excinfo + self.start = start + self.stop = stop + self.duration = duration + self.when = when + + @property + def result(self) -> TResult: + """The return value of the call, if it didn't raise. + + Can only be accessed if excinfo is None. + """ + if self.excinfo is not None: + raise AttributeError(f"{self!r} has no valid result") + # The cast is safe because an exception wasn't raised, hence + # _result has the expected function return type (which may be + # None, that's why a cast and not an assert). + return cast(TResult, self._result) + + @classmethod + def from_call( + cls, + func: "Callable[[], TResult]", + when: "Literal['collect', 'setup', 'call', 'teardown']", + reraise: Optional[ + Union[Type[BaseException], Tuple[Type[BaseException], ...]] + ] = None, + ) -> "CallInfo[TResult]": + """Call func, wrapping the result in a CallInfo. + + :param func: + The function to call. Called without arguments. + :param when: + The phase in which the function is called. + :param reraise: + Exception or exceptions that shall propagate if raised by the + function, instead of being wrapped in the CallInfo. 
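+
+        For illustration, a minimal invocation might look like this (a
+        standalone sketch, outside the normal hook flow)::
+
+            info = CallInfo.from_call(lambda: 42, when="call")
+            assert info.result == 42 and info.excinfo is None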
+ """ + excinfo = None + start = timing.time() + precise_start = timing.perf_counter() + try: + result: Optional[TResult] = func() + except BaseException: + excinfo = ExceptionInfo.from_current() + if reraise is not None and isinstance(excinfo.value, reraise): + raise + result = None + # use the perf counter + precise_stop = timing.perf_counter() + duration = precise_stop - precise_start + stop = timing.time() + return cls( + start=start, + stop=stop, + duration=duration, + when=when, + result=result, + excinfo=excinfo, + _ispytest=True, + ) + + def __repr__(self) -> str: + if self.excinfo is None: + return f"" + return f"" + + +def pytest_runtest_makereport(item: Item, call: CallInfo[None]) -> TestReport: + return TestReport.from_item_and_call(item, call) + + +def pytest_make_collect_report(collector: Collector) -> CollectReport: + call = CallInfo.from_call(lambda: list(collector.collect()), "collect") + longrepr: Union[None, Tuple[str, int, str], str, TerminalRepr] = None + if not call.excinfo: + outcome: Literal["passed", "skipped", "failed"] = "passed" + else: + skip_exceptions = [Skipped] + unittest = sys.modules.get("unittest") + if unittest is not None: + # Type ignored because unittest is loaded dynamically. + skip_exceptions.append(unittest.SkipTest) # type: ignore + if isinstance(call.excinfo.value, tuple(skip_exceptions)): + outcome = "skipped" + r_ = collector._repr_failure_py(call.excinfo, "line") + assert isinstance(r_, ExceptionChainRepr), repr(r_) + r = r_.reprcrash + assert r + longrepr = (str(r.path), r.lineno, r.message) + else: + outcome = "failed" + errorinfo = collector.repr_failure(call.excinfo) + if not hasattr(errorinfo, "toterminal"): + assert isinstance(errorinfo, str) + errorinfo = CollectErrorRepr(errorinfo) + longrepr = errorinfo + result = call.result if not call.excinfo else None + rep = CollectReport(collector.nodeid, outcome, longrepr, result) + rep.call = call # type: ignore # see collect_one_node + return rep + + +class SetupState: + """Shared state for setting up/tearing down test items or collectors + in a session. + + Suppose we have a collection tree as follows: + + + + + + + + The SetupState maintains a stack. The stack starts out empty: + + [] + + During the setup phase of item1, setup(item1) is called. What it does + is: + + push session to stack, run session.setup() + push mod1 to stack, run mod1.setup() + push item1 to stack, run item1.setup() + + The stack is: + + [session, mod1, item1] + + While the stack is in this shape, it is allowed to add finalizers to + each of session, mod1, item1 using addfinalizer(). + + During the teardown phase of item1, teardown_exact(item2) is called, + where item2 is the next item to item1. What it does is: + + pop item1 from stack, run its teardowns + pop mod1 from stack, run its teardowns + + mod1 was popped because it ended its purpose with item1. The stack is: + + [session] + + During the setup phase of item2, setup(item2) is called. What it does + is: + + push mod2 to stack, run mod2.setup() + push item2 to stack, run item2.setup() + + Stack: + + [session, mod2, item2] + + During the teardown phase of item2, teardown_exact(None) is called, + because item2 is the last item. What it does is: + + pop item2 from stack, run its teardowns + pop mod2 from stack, run its teardowns + pop session from stack, run its teardowns + + Stack: + + [] + + The end! + """ + + def __init__(self) -> None: + # The stack is in the dict insertion order. + self.stack: Dict[ + Node, + Tuple[ + # Node's finalizers. 
+ List[Callable[[], object]], + # Node's exception, if its setup raised. + Optional[Union[OutcomeException, Exception]], + ], + ] = {} + + def setup(self, item: Item) -> None: + """Setup objects along the collector chain to the item.""" + needed_collectors = item.listchain() + + # If a collector fails its setup, fail its entire subtree of items. + # The setup is not retried for each item - the same exception is used. + for col, (finalizers, exc) in self.stack.items(): + assert col in needed_collectors, "previous item was not torn down properly" + if exc: + raise exc + + for col in needed_collectors[len(self.stack) :]: + assert col not in self.stack + # Push onto the stack. + self.stack[col] = ([col.teardown], None) + try: + col.setup() + except TEST_OUTCOME as exc: + self.stack[col] = (self.stack[col][0], exc) + raise exc + + def addfinalizer(self, finalizer: Callable[[], object], node: Node) -> None: + """Attach a finalizer to the given node. + + The node must be currently active in the stack. + """ + assert node and not isinstance(node, tuple) + assert callable(finalizer) + assert node in self.stack, (node, self.stack) + self.stack[node][0].append(finalizer) + + def teardown_exact(self, nextitem: Optional[Item]) -> None: + """Teardown the current stack up until reaching nodes that nextitem + also descends from. + + When nextitem is None (meaning we're at the last item), the entire + stack is torn down. + """ + needed_collectors = nextitem and nextitem.listchain() or [] + exc = None + while self.stack: + if list(self.stack.keys()) == needed_collectors[: len(self.stack)]: + break + node, (finalizers, _) = self.stack.popitem() + while finalizers: + fin = finalizers.pop() + try: + fin() + except TEST_OUTCOME as e: + # XXX Only first exception will be seen by user, + # ideally all should be reported. + if exc is None: + exc = e + if exc: + raise exc + if nextitem is None: + assert not self.stack + + +def collect_one_node(collector: Collector) -> CollectReport: + ihook = collector.ihook + ihook.pytest_collectstart(collector=collector) + rep: CollectReport = ihook.pytest_make_collect_report(collector=collector) + call = rep.__dict__.pop("call", None) + if call and check_interactive_exception(call, rep): + ihook.pytest_exception_interact(node=collector, call=call, report=rep) + return rep diff --git a/env/lib/python3.8/site-packages/_pytest/scope.py b/env/lib/python3.8/site-packages/_pytest/scope.py new file mode 100644 index 00000000..7a746fb9 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/scope.py @@ -0,0 +1,91 @@ +""" +Scope definition and related utilities. + +Those are defined here, instead of in the 'fixtures' module because +their use is spread across many other pytest modules, and centralizing it in 'fixtures' +would cause circular references. + +Also this makes the module light to import, as it should. +""" +from enum import Enum +from functools import total_ordering +from typing import Optional +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from typing_extensions import Literal + + _ScopeName = Literal["session", "package", "module", "class", "function"] + + +@total_ordering +class Scope(Enum): + """ + Represents one of the possible fixture scopes in pytest. + + Scopes are ordered from lower to higher, that is: + + ->>> higher ->>> + + Function < Class < Module < Package < Session + + <<<- lower <<<- + """ + + # Scopes need to be listed from lower to higher. 
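+    # (thanks to @total_ordering and __lt__ below, comparisons such as
+    # Scope.Function < Scope.Session evaluate to True)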
+ Function: "_ScopeName" = "function" + Class: "_ScopeName" = "class" + Module: "_ScopeName" = "module" + Package: "_ScopeName" = "package" + Session: "_ScopeName" = "session" + + def next_lower(self) -> "Scope": + """Return the next lower scope.""" + index = _SCOPE_INDICES[self] + if index == 0: + raise ValueError(f"{self} is the lower-most scope") + return _ALL_SCOPES[index - 1] + + def next_higher(self) -> "Scope": + """Return the next higher scope.""" + index = _SCOPE_INDICES[self] + if index == len(_SCOPE_INDICES) - 1: + raise ValueError(f"{self} is the upper-most scope") + return _ALL_SCOPES[index + 1] + + def __lt__(self, other: "Scope") -> bool: + self_index = _SCOPE_INDICES[self] + other_index = _SCOPE_INDICES[other] + return self_index < other_index + + @classmethod + def from_user( + cls, scope_name: "_ScopeName", descr: str, where: Optional[str] = None + ) -> "Scope": + """ + Given a scope name from the user, return the equivalent Scope enum. Should be used + whenever we want to convert a user provided scope name to its enum object. + + If the scope name is invalid, construct a user friendly message and call pytest.fail. + """ + from _pytest.outcomes import fail + + try: + # Holding this reference is necessary for mypy at the moment. + scope = Scope(scope_name) + except ValueError: + fail( + "{} {}got an unexpected scope value '{}'".format( + descr, f"from {where} " if where else "", scope_name + ), + pytrace=False, + ) + return scope + + +_ALL_SCOPES = list(Scope) +_SCOPE_INDICES = {scope: index for index, scope in enumerate(_ALL_SCOPES)} + + +# Ordered list of scopes which can contain many tests (in practice all except Function). +HIGH_SCOPES = [x for x in Scope if x is not Scope.Function] diff --git a/env/lib/python3.8/site-packages/_pytest/setuponly.py b/env/lib/python3.8/site-packages/_pytest/setuponly.py new file mode 100644 index 00000000..583590d6 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/setuponly.py @@ -0,0 +1,97 @@ +from typing import Generator +from typing import Optional +from typing import Union + +import pytest +from _pytest._io.saferepr import saferepr +from _pytest.config import Config +from _pytest.config import ExitCode +from _pytest.config.argparsing import Parser +from _pytest.fixtures import FixtureDef +from _pytest.fixtures import SubRequest +from _pytest.scope import Scope + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("debugconfig") + group.addoption( + "--setuponly", + "--setup-only", + action="store_true", + help="Only setup fixtures, do not execute tests", + ) + group.addoption( + "--setupshow", + "--setup-show", + action="store_true", + help="Show setup of fixtures while executing tests", + ) + + +@pytest.hookimpl(hookwrapper=True) +def pytest_fixture_setup( + fixturedef: FixtureDef[object], request: SubRequest +) -> Generator[None, None, None]: + yield + if request.config.option.setupshow: + if hasattr(request, "param"): + # Save the fixture parameter so ._show_fixture_action() can + # display it now and during the teardown (in .finish()). 
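+            # ids may be a callable or a sequence; resolve the displayed
+            # parameter the same way the fixture's test IDs are generated.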
+ if fixturedef.ids: + if callable(fixturedef.ids): + param = fixturedef.ids(request.param) + else: + param = fixturedef.ids[request.param_index] + else: + param = request.param + fixturedef.cached_param = param # type: ignore[attr-defined] + _show_fixture_action(fixturedef, "SETUP") + + +def pytest_fixture_post_finalizer(fixturedef: FixtureDef[object]) -> None: + if fixturedef.cached_result is not None: + config = fixturedef._fixturemanager.config + if config.option.setupshow: + _show_fixture_action(fixturedef, "TEARDOWN") + if hasattr(fixturedef, "cached_param"): + del fixturedef.cached_param # type: ignore[attr-defined] + + +def _show_fixture_action(fixturedef: FixtureDef[object], msg: str) -> None: + config = fixturedef._fixturemanager.config + capman = config.pluginmanager.getplugin("capturemanager") + if capman: + capman.suspend_global_capture() + + tw = config.get_terminal_writer() + tw.line() + # Use smaller indentation the higher the scope: Session = 0, Package = 1, etc. + scope_indent = list(reversed(Scope)).index(fixturedef._scope) + tw.write(" " * 2 * scope_indent) + tw.write( + "{step} {scope} {fixture}".format( + step=msg.ljust(8), # align the output to TEARDOWN + scope=fixturedef.scope[0].upper(), + fixture=fixturedef.argname, + ) + ) + + if msg == "SETUP": + deps = sorted(arg for arg in fixturedef.argnames if arg != "request") + if deps: + tw.write(" (fixtures used: {})".format(", ".join(deps))) + + if hasattr(fixturedef, "cached_param"): + tw.write(f"[{saferepr(fixturedef.cached_param, maxsize=42)}]") # type: ignore[attr-defined] + + tw.flush() + + if capman: + capman.resume_global_capture() + + +@pytest.hookimpl(tryfirst=True) +def pytest_cmdline_main(config: Config) -> Optional[Union[int, ExitCode]]: + if config.option.setuponly: + config.option.setupshow = True + return None diff --git a/env/lib/python3.8/site-packages/_pytest/setupplan.py b/env/lib/python3.8/site-packages/_pytest/setupplan.py new file mode 100644 index 00000000..1a4ebdd9 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/setupplan.py @@ -0,0 +1,40 @@ +from typing import Optional +from typing import Union + +import pytest +from _pytest.config import Config +from _pytest.config import ExitCode +from _pytest.config.argparsing import Parser +from _pytest.fixtures import FixtureDef +from _pytest.fixtures import SubRequest + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("debugconfig") + group.addoption( + "--setupplan", + "--setup-plan", + action="store_true", + help="Show what fixtures and tests would be executed but " + "don't execute anything", + ) + + +@pytest.hookimpl(tryfirst=True) +def pytest_fixture_setup( + fixturedef: FixtureDef[object], request: SubRequest +) -> Optional[object]: + # Will return a dummy fixture if the setuponly option is provided. 
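+    # Sketch of the effect: `pytest --setup-plan` (which also enables
+    # setuponly/setupshow via pytest_cmdline_main below) makes this hook
+    # pre-fill cached_result with a (result, cache_key, exc) triple of
+    # (None, key, None), so fixture bodies never run and only the
+    # SETUP/TEARDOWN lines and test node ids are printed.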
+ if request.config.option.setupplan: + my_cache_key = fixturedef.cache_key(request) + fixturedef.cached_result = (None, my_cache_key, None) + return fixturedef.cached_result + return None + + +@pytest.hookimpl(tryfirst=True) +def pytest_cmdline_main(config: Config) -> Optional[Union[int, ExitCode]]: + if config.option.setupplan: + config.option.setuponly = True + config.option.setupshow = True + return None diff --git a/env/lib/python3.8/site-packages/_pytest/skipping.py b/env/lib/python3.8/site-packages/_pytest/skipping.py new file mode 100644 index 00000000..b2044235 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/skipping.py @@ -0,0 +1,296 @@ +"""Support for skip/xfail functions and markers.""" +import os +import platform +import sys +import traceback +from collections.abc import Mapping +from typing import Generator +from typing import Optional +from typing import Tuple +from typing import Type + +import attr + +from _pytest.config import Config +from _pytest.config import hookimpl +from _pytest.config.argparsing import Parser +from _pytest.mark.structures import Mark +from _pytest.nodes import Item +from _pytest.outcomes import fail +from _pytest.outcomes import skip +from _pytest.outcomes import xfail +from _pytest.reports import BaseReport +from _pytest.runner import CallInfo +from _pytest.stash import StashKey + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("general") + group.addoption( + "--runxfail", + action="store_true", + dest="runxfail", + default=False, + help="Report the results of xfail tests as if they were not marked", + ) + + parser.addini( + "xfail_strict", + "Default for the strict parameter of xfail " + "markers when not given explicitly (default: False)", + default=False, + type="bool", + ) + + +def pytest_configure(config: Config) -> None: + if config.option.runxfail: + # yay a hack + import pytest + + old = pytest.xfail + config.add_cleanup(lambda: setattr(pytest, "xfail", old)) + + def nop(*args, **kwargs): + pass + + nop.Exception = xfail.Exception # type: ignore[attr-defined] + setattr(pytest, "xfail", nop) + + config.addinivalue_line( + "markers", + "skip(reason=None): skip the given test function with an optional reason. " + 'Example: skip(reason="no way of currently testing this") skips the ' + "test.", + ) + config.addinivalue_line( + "markers", + "skipif(condition, ..., *, reason=...): " + "skip the given test function if any of the conditions evaluate to True. " + "Example: skipif(sys.platform == 'win32') skips the test if we are on the win32 platform. " + "See https://docs.pytest.org/en/stable/reference/reference.html#pytest-mark-skipif", + ) + config.addinivalue_line( + "markers", + "xfail(condition, ..., *, reason=..., run=True, raises=None, strict=xfail_strict): " + "mark the test function as an expected failure if any of the conditions " + "evaluate to True. Optionally specify a reason for better reporting " + "and run=False if you don't even want to execute the test function. " + "If only specific exception(s) are expected, you can list them in " + "raises, and if the test fails in other ways, it will be reported as " + "a true failure. See https://docs.pytest.org/en/stable/reference/reference.html#pytest-mark-xfail", + ) + + +def evaluate_condition(item: Item, mark: Mark, condition: object) -> Tuple[bool, str]: + """Evaluate a single skipif/xfail condition. + + If an old-style string condition is given, it is eval()'d, otherwise the + condition is bool()'d. 
If this fails, an appropriately formatted pytest.fail + is raised. + + Returns (result, reason). The reason is only relevant if the result is True. + """ + # String condition. + if isinstance(condition, str): + globals_ = { + "os": os, + "sys": sys, + "platform": platform, + "config": item.config, + } + for dictionary in reversed( + item.ihook.pytest_markeval_namespace(config=item.config) + ): + if not isinstance(dictionary, Mapping): + raise ValueError( + "pytest_markeval_namespace() needs to return a dict, got {!r}".format( + dictionary + ) + ) + globals_.update(dictionary) + if hasattr(item, "obj"): + globals_.update(item.obj.__globals__) # type: ignore[attr-defined] + try: + filename = f"<{mark.name} condition>" + condition_code = compile(condition, filename, "eval") + result = eval(condition_code, globals_) + except SyntaxError as exc: + msglines = [ + "Error evaluating %r condition" % mark.name, + " " + condition, + " " + " " * (exc.offset or 0) + "^", + "SyntaxError: invalid syntax", + ] + fail("\n".join(msglines), pytrace=False) + except Exception as exc: + msglines = [ + "Error evaluating %r condition" % mark.name, + " " + condition, + *traceback.format_exception_only(type(exc), exc), + ] + fail("\n".join(msglines), pytrace=False) + + # Boolean condition. + else: + try: + result = bool(condition) + except Exception as exc: + msglines = [ + "Error evaluating %r condition as a boolean" % mark.name, + *traceback.format_exception_only(type(exc), exc), + ] + fail("\n".join(msglines), pytrace=False) + + reason = mark.kwargs.get("reason", None) + if reason is None: + if isinstance(condition, str): + reason = "condition: " + condition + else: + # XXX better be checked at collection time + msg = ( + "Error evaluating %r: " % mark.name + + "you need to specify reason=STRING when using booleans as conditions." + ) + fail(msg, pytrace=False) + + return result, reason + + +@attr.s(slots=True, frozen=True, auto_attribs=True) +class Skip: + """The result of evaluate_skip_marks().""" + + reason: str = "unconditional skip" + + +def evaluate_skip_marks(item: Item) -> Optional[Skip]: + """Evaluate skip and skipif marks on item, returning Skip if triggered.""" + for mark in item.iter_markers(name="skipif"): + if "condition" not in mark.kwargs: + conditions = mark.args + else: + conditions = (mark.kwargs["condition"],) + + # Unconditional. + if not conditions: + reason = mark.kwargs.get("reason", "") + return Skip(reason) + + # If any of the conditions are true. + for condition in conditions: + result, reason = evaluate_condition(item, mark, condition) + if result: + return Skip(reason) + + for mark in item.iter_markers(name="skip"): + try: + return Skip(*mark.args, **mark.kwargs) + except TypeError as e: + raise TypeError(str(e) + " - maybe you meant pytest.mark.skipif?") from None + + return None + + +@attr.s(slots=True, frozen=True, auto_attribs=True) +class Xfail: + """The result of evaluate_xfail_marks().""" + + reason: str + run: bool + strict: bool + raises: Optional[Tuple[Type[BaseException], ...]] + + +def evaluate_xfail_marks(item: Item) -> Optional[Xfail]: + """Evaluate xfail marks on item, returning Xfail if triggered.""" + for mark in item.iter_markers(name="xfail"): + run = mark.kwargs.get("run", True) + strict = mark.kwargs.get("strict", item.config.getini("xfail_strict")) + raises = mark.kwargs.get("raises", None) + if "condition" not in mark.kwargs: + conditions = mark.args + else: + conditions = (mark.kwargs["condition"],) + + # Unconditional. 
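+        # Marker forms this loop distinguishes (an illustrative sketch;
+        # the reasons are made up):
+        #
+        #     @pytest.mark.xfail(reason="flaky")                   # unconditional
+        #     @pytest.mark.xfail(sys.platform == "win32",
+        #                        reason="fails on Windows")        # positional condition
+        #     @pytest.mark.xfail(condition="sys.maxsize > 2**32",
+        #                        reason="64-bit only")             # keyword condition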
+ if not conditions: + reason = mark.kwargs.get("reason", "") + return Xfail(reason, run, strict, raises) + + # If any of the conditions are true. + for condition in conditions: + result, reason = evaluate_condition(item, mark, condition) + if result: + return Xfail(reason, run, strict, raises) + + return None + + +# Saves the xfail mark evaluation. Can be refreshed during call if None. +xfailed_key = StashKey[Optional[Xfail]]() + + +@hookimpl(tryfirst=True) +def pytest_runtest_setup(item: Item) -> None: + skipped = evaluate_skip_marks(item) + if skipped: + raise skip.Exception(skipped.reason, _use_item_location=True) + + item.stash[xfailed_key] = xfailed = evaluate_xfail_marks(item) + if xfailed and not item.config.option.runxfail and not xfailed.run: + xfail("[NOTRUN] " + xfailed.reason) + + +@hookimpl(hookwrapper=True) +def pytest_runtest_call(item: Item) -> Generator[None, None, None]: + xfailed = item.stash.get(xfailed_key, None) + if xfailed is None: + item.stash[xfailed_key] = xfailed = evaluate_xfail_marks(item) + + if xfailed and not item.config.option.runxfail and not xfailed.run: + xfail("[NOTRUN] " + xfailed.reason) + + yield + + # The test run may have added an xfail mark dynamically. + xfailed = item.stash.get(xfailed_key, None) + if xfailed is None: + item.stash[xfailed_key] = xfailed = evaluate_xfail_marks(item) + + +@hookimpl(hookwrapper=True) +def pytest_runtest_makereport(item: Item, call: CallInfo[None]): + outcome = yield + rep = outcome.get_result() + xfailed = item.stash.get(xfailed_key, None) + if item.config.option.runxfail: + pass # don't interfere + elif call.excinfo and isinstance(call.excinfo.value, xfail.Exception): + assert call.excinfo.value.msg is not None + rep.wasxfail = "reason: " + call.excinfo.value.msg + rep.outcome = "skipped" + elif not rep.skipped and xfailed: + if call.excinfo: + raises = xfailed.raises + if raises is not None and not isinstance(call.excinfo.value, raises): + rep.outcome = "failed" + else: + rep.outcome = "skipped" + rep.wasxfail = xfailed.reason + elif call.when == "call": + if xfailed.strict: + rep.outcome = "failed" + rep.longrepr = "[XPASS(strict)] " + xfailed.reason + else: + rep.outcome = "passed" + rep.wasxfail = xfailed.reason + + +def pytest_report_teststatus(report: BaseReport) -> Optional[Tuple[str, str, str]]: + if hasattr(report, "wasxfail"): + if report.skipped: + return "xfailed", "x", "XFAIL" + elif report.passed: + return "xpassed", "X", "XPASS" + return None diff --git a/env/lib/python3.8/site-packages/_pytest/stash.py b/env/lib/python3.8/site-packages/_pytest/stash.py new file mode 100644 index 00000000..e61d75b9 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/stash.py @@ -0,0 +1,112 @@ +from typing import Any +from typing import cast +from typing import Dict +from typing import Generic +from typing import TypeVar +from typing import Union + + +__all__ = ["Stash", "StashKey"] + + +T = TypeVar("T") +D = TypeVar("D") + + +class StashKey(Generic[T]): + """``StashKey`` is an object used as a key to a :class:`Stash`. + + A ``StashKey`` is associated with the type ``T`` of the value of the key. + + A ``StashKey`` is unique and cannot conflict with another key. + """ + + __slots__ = () + + +class Stash: + r"""``Stash`` is a type-safe heterogeneous mutable mapping that + allows keys and value types to be defined separately from + where it (the ``Stash``) is created. + + Usually you will be given an object which has a ``Stash``, for example + :class:`~pytest.Config` or a :class:`~_pytest.nodes.Node`: + + .. 
code-block:: python + + stash: Stash = some_object.stash + + If a module or plugin wants to store data in this ``Stash``, it creates + :class:`StashKey`\s for its keys (at the module level): + + .. code-block:: python + + # At the top-level of the module + some_str_key = StashKey[str]() + some_bool_key = StashKey[bool]() + + To store information: + + .. code-block:: python + + # Value type must match the key. + stash[some_str_key] = "value" + stash[some_bool_key] = True + + To retrieve the information: + + .. code-block:: python + + # The static type of some_str is str. + some_str = stash[some_str_key] + # The static type of some_bool is bool. + some_bool = stash[some_bool_key] + """ + + __slots__ = ("_storage",) + + def __init__(self) -> None: + self._storage: Dict[StashKey[Any], object] = {} + + def __setitem__(self, key: StashKey[T], value: T) -> None: + """Set a value for key.""" + self._storage[key] = value + + def __getitem__(self, key: StashKey[T]) -> T: + """Get the value for key. + + Raises ``KeyError`` if the key wasn't set before. + """ + return cast(T, self._storage[key]) + + def get(self, key: StashKey[T], default: D) -> Union[T, D]: + """Get the value for key, or return default if the key wasn't set + before.""" + try: + return self[key] + except KeyError: + return default + + def setdefault(self, key: StashKey[T], default: T) -> T: + """Return the value of key if already set, otherwise set the value + of key to default and return default.""" + try: + return self[key] + except KeyError: + self[key] = default + return default + + def __delitem__(self, key: StashKey[T]) -> None: + """Delete the value for key. + + Raises ``KeyError`` if the key wasn't set before. + """ + del self._storage[key] + + def __contains__(self, key: StashKey[T]) -> bool: + """Return whether key was set.""" + return key in self._storage + + def __len__(self) -> int: + """Return how many items exist in the stash.""" + return len(self._storage) diff --git a/env/lib/python3.8/site-packages/_pytest/stepwise.py b/env/lib/python3.8/site-packages/_pytest/stepwise.py new file mode 100644 index 00000000..84f1a6ce --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/stepwise.py @@ -0,0 +1,122 @@ +from typing import List +from typing import Optional +from typing import TYPE_CHECKING + +import pytest +from _pytest import nodes +from _pytest.config import Config +from _pytest.config.argparsing import Parser +from _pytest.main import Session +from _pytest.reports import TestReport + +if TYPE_CHECKING: + from _pytest.cacheprovider import Cache + +STEPWISE_CACHE_DIR = "cache/stepwise" + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("general") + group.addoption( + "--sw", + "--stepwise", + action="store_true", + default=False, + dest="stepwise", + help="Exit on test failure and continue from last failing test next time", + ) + group.addoption( + "--sw-skip", + "--stepwise-skip", + action="store_true", + default=False, + dest="stepwise_skip", + help="Ignore the first failing test but stop on the next failing test. " + "Implicitly enables --stepwise.", + ) + + +@pytest.hookimpl +def pytest_configure(config: Config) -> None: + if config.option.stepwise_skip: + # allow --stepwise-skip to work on it's own merits. 
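+        # Usage sketch: after this assignment, `pytest --sw-skip` on its own
+        # behaves like `pytest --sw --sw-skip`: the first failing test is
+        # ignored and the session stops at the next failure.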
+ config.option.stepwise = True + if config.getoption("stepwise"): + config.pluginmanager.register(StepwisePlugin(config), "stepwiseplugin") + + +def pytest_sessionfinish(session: Session) -> None: + if not session.config.getoption("stepwise"): + assert session.config.cache is not None + # Clear the list of failing tests if the plugin is not active. + session.config.cache.set(STEPWISE_CACHE_DIR, []) + + +class StepwisePlugin: + def __init__(self, config: Config) -> None: + self.config = config + self.session: Optional[Session] = None + self.report_status = "" + assert config.cache is not None + self.cache: Cache = config.cache + self.lastfailed: Optional[str] = self.cache.get(STEPWISE_CACHE_DIR, None) + self.skip: bool = config.getoption("stepwise_skip") + + def pytest_sessionstart(self, session: Session) -> None: + self.session = session + + def pytest_collection_modifyitems( + self, config: Config, items: List[nodes.Item] + ) -> None: + if not self.lastfailed: + self.report_status = "no previously failed tests, not skipping." + return + + # check all item nodes until we find a match on last failed + failed_index = None + for index, item in enumerate(items): + if item.nodeid == self.lastfailed: + failed_index = index + break + + # If the previously failed test was not found among the test items, + # do not skip any tests. + if failed_index is None: + self.report_status = "previously failed test not found, not skipping." + else: + self.report_status = f"skipping {failed_index} already passed items." + deselected = items[:failed_index] + del items[:failed_index] + config.hook.pytest_deselected(items=deselected) + + def pytest_runtest_logreport(self, report: TestReport) -> None: + if report.failed: + if self.skip: + # Remove test from the failed ones (if it exists) and unset the skip option + # to make sure the following tests will not be skipped. + if report.nodeid == self.lastfailed: + self.lastfailed = None + + self.skip = False + else: + # Mark test as the last failing and interrupt the test session. + self.lastfailed = report.nodeid + assert self.session is not None + self.session.shouldstop = ( + "Test failed, continuing from this test next run." + ) + + else: + # If the test was actually run and did pass. + if report.when == "call": + # Remove test from the failed ones, if exists. + if report.nodeid == self.lastfailed: + self.lastfailed = None + + def pytest_report_collectionfinish(self) -> Optional[str]: + if self.config.getoption("verbose") >= 0 and self.report_status: + return f"stepwise: {self.report_status}" + return None + + def pytest_sessionfinish(self) -> None: + self.cache.set(STEPWISE_CACHE_DIR, self.lastfailed) diff --git a/env/lib/python3.8/site-packages/_pytest/terminal.py b/env/lib/python3.8/site-packages/_pytest/terminal.py new file mode 100644 index 00000000..d967a3ee --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/terminal.py @@ -0,0 +1,1432 @@ +"""Terminal reporting of the full testing process. + +This is a good source for looking at the various reporting hooks. 
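+
+For example, ``pytest_report_teststatus``, ``pytest_report_header``,
+``pytest_report_collectionfinish`` and ``pytest_terminal_summary`` are all
+implemented or invoked in this module.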
+""" +import argparse +import datetime +import inspect +import platform +import sys +import warnings +from collections import Counter +from functools import partial +from pathlib import Path +from typing import Any +from typing import Callable +from typing import cast +from typing import ClassVar +from typing import Dict +from typing import Generator +from typing import List +from typing import Mapping +from typing import Optional +from typing import Sequence +from typing import Set +from typing import TextIO +from typing import Tuple +from typing import TYPE_CHECKING +from typing import Union + +import attr +import pluggy + +import _pytest._version +from _pytest import nodes +from _pytest import timing +from _pytest._code import ExceptionInfo +from _pytest._code.code import ExceptionRepr +from _pytest._io import TerminalWriter +from _pytest._io.wcwidth import wcswidth +from _pytest.assertion.util import running_on_ci +from _pytest.compat import final +from _pytest.config import _PluggyPlugin +from _pytest.config import Config +from _pytest.config import ExitCode +from _pytest.config import hookimpl +from _pytest.config.argparsing import Parser +from _pytest.nodes import Item +from _pytest.nodes import Node +from _pytest.pathlib import absolutepath +from _pytest.pathlib import bestrelpath +from _pytest.reports import BaseReport +from _pytest.reports import CollectReport +from _pytest.reports import TestReport + +if TYPE_CHECKING: + from typing_extensions import Literal + + from _pytest.main import Session + + +REPORT_COLLECTING_RESOLUTION = 0.5 + +KNOWN_TYPES = ( + "failed", + "passed", + "skipped", + "deselected", + "xfailed", + "xpassed", + "warnings", + "error", +) + +_REPORTCHARS_DEFAULT = "fE" + + +class MoreQuietAction(argparse.Action): + """A modified copy of the argparse count action which counts down and updates + the legacy quiet attribute at the same time. + + Used to unify verbosity handling. + """ + + def __init__( + self, + option_strings: Sequence[str], + dest: str, + default: object = None, + required: bool = False, + help: Optional[str] = None, + ) -> None: + super().__init__( + option_strings=option_strings, + dest=dest, + nargs=0, + default=default, + required=required, + help=help, + ) + + def __call__( + self, + parser: argparse.ArgumentParser, + namespace: argparse.Namespace, + values: Union[str, Sequence[object], None], + option_string: Optional[str] = None, + ) -> None: + new_count = getattr(namespace, self.dest, 0) - 1 + setattr(namespace, self.dest, new_count) + # todo Deprecate config.quiet + namespace.quiet = getattr(namespace, "quiet", 0) + 1 + + +def pytest_addoption(parser: Parser) -> None: + group = parser.getgroup("terminal reporting", "Reporting", after="general") + group._addoption( + "-v", + "--verbose", + action="count", + default=0, + dest="verbose", + help="Increase verbosity", + ) + group._addoption( + "--no-header", + action="store_true", + default=False, + dest="no_header", + help="Disable header", + ) + group._addoption( + "--no-summary", + action="store_true", + default=False, + dest="no_summary", + help="Disable summary", + ) + group._addoption( + "-q", + "--quiet", + action=MoreQuietAction, + default=0, + dest="verbose", + help="Decrease verbosity", + ) + group._addoption( + "--verbosity", + dest="verbose", + type=int, + default=0, + help="Set verbosity. 
Default: 0.", + ) + group._addoption( + "-r", + action="store", + dest="reportchars", + default=_REPORTCHARS_DEFAULT, + metavar="chars", + help="Show extra test summary info as specified by chars: (f)ailed, " + "(E)rror, (s)kipped, (x)failed, (X)passed, " + "(p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. " + "(w)arnings are enabled by default (see --disable-warnings), " + "'N' can be used to reset the list. (default: 'fE').", + ) + group._addoption( + "--disable-warnings", + "--disable-pytest-warnings", + default=False, + dest="disable_warnings", + action="store_true", + help="Disable warnings summary", + ) + group._addoption( + "-l", + "--showlocals", + action="store_true", + dest="showlocals", + default=False, + help="Show locals in tracebacks (disabled by default)", + ) + group._addoption( + "--no-showlocals", + action="store_false", + dest="showlocals", + help="Hide locals in tracebacks (negate --showlocals passed through addopts)", + ) + group._addoption( + "--tb", + metavar="style", + action="store", + dest="tbstyle", + default="auto", + choices=["auto", "long", "short", "no", "line", "native"], + help="Traceback print mode (auto/long/short/line/native/no)", + ) + group._addoption( + "--show-capture", + action="store", + dest="showcapture", + choices=["no", "stdout", "stderr", "log", "all"], + default="all", + help="Controls how captured stdout/stderr/log is shown on failed tests. " + "Default: all.", + ) + group._addoption( + "--fulltrace", + "--full-trace", + action="store_true", + default=False, + help="Don't cut any tracebacks (default is to cut)", + ) + group._addoption( + "--color", + metavar="color", + action="store", + dest="color", + default="auto", + choices=["yes", "no", "auto"], + help="Color terminal output (yes/no/auto)", + ) + group._addoption( + "--code-highlight", + default="yes", + choices=["yes", "no"], + help="Whether code should be highlighted (only if --color is also enabled). " + "Default: yes.", + ) + + parser.addini( + "console_output_style", + help='Console output: "classic", or with additional progress information ' + '("progress" (percentage) | "count")', + default="progress", + ) + + +def pytest_configure(config: Config) -> None: + reporter = TerminalReporter(config, sys.stdout) + config.pluginmanager.register(reporter, "terminalreporter") + if config.option.debug or config.option.traceconfig: + + def mywriter(tags, args): + msg = " ".join(map(str, args)) + reporter.write_line("[traceconfig] " + msg) + + config.trace.root.setprocessor("pytest:config", mywriter) + + +def getreportopt(config: Config) -> str: + reportchars: str = config.option.reportchars + + old_aliases = {"F", "S"} + reportopts = "" + for char in reportchars: + if char in old_aliases: + char = char.lower() + if char == "a": + reportopts = "sxXEf" + elif char == "A": + reportopts = "PpsxXEf" + elif char == "N": + reportopts = "" + elif char not in reportopts: + reportopts += char + + if not config.option.disable_warnings and "w" not in reportopts: + reportopts = "w" + reportopts + elif config.option.disable_warnings and "w" in reportopts: + reportopts = reportopts.replace("w", "") + + return reportopts + + +@hookimpl(trylast=True) # after _pytest.runner +def pytest_report_teststatus(report: BaseReport) -> Tuple[str, str, str]: + letter = "F" + if report.passed: + letter = "." 
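+        # "." = passed, "s" = skipped, "F" = failed; collection/setup/
+        # teardown failures are promoted to "E" (error) just below.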
+ elif report.skipped: + letter = "s" + + outcome: str = report.outcome + if report.when in ("collect", "setup", "teardown") and outcome == "failed": + outcome = "error" + letter = "E" + + return outcome, letter, outcome.upper() + + +@attr.s(auto_attribs=True) +class WarningReport: + """Simple structure to hold warnings information captured by ``pytest_warning_recorded``. + + :ivar str message: + User friendly message about the warning. + :ivar str|None nodeid: + nodeid that generated the warning (see ``get_location``). + :ivar tuple fslocation: + File system location of the source of the warning (see ``get_location``). + """ + + message: str + nodeid: Optional[str] = None + fslocation: Optional[Tuple[str, int]] = None + + count_towards_summary: ClassVar = True + + def get_location(self, config: Config) -> Optional[str]: + """Return the more user-friendly information about the location of a warning, or None.""" + if self.nodeid: + return self.nodeid + if self.fslocation: + filename, linenum = self.fslocation + relpath = bestrelpath(config.invocation_params.dir, absolutepath(filename)) + return f"{relpath}:{linenum}" + return None + + +@final +class TerminalReporter: + def __init__(self, config: Config, file: Optional[TextIO] = None) -> None: + import _pytest.config + + self.config = config + self._numcollected = 0 + self._session: Optional[Session] = None + self._showfspath: Optional[bool] = None + + self.stats: Dict[str, List[Any]] = {} + self._main_color: Optional[str] = None + self._known_types: Optional[List[str]] = None + self.startpath = config.invocation_params.dir + if file is None: + file = sys.stdout + self._tw = _pytest.config.create_terminal_writer(config, file) + self._screen_width = self._tw.fullwidth + self.currentfspath: Union[None, Path, str, int] = None + self.reportchars = getreportopt(config) + self.hasmarkup = self._tw.hasmarkup + self.isatty = file.isatty() + self._progress_nodeids_reported: Set[str] = set() + self._show_progress_info = self._determine_show_progress_info() + self._collect_report_last_write: Optional[float] = None + self._already_displayed_warnings: Optional[int] = None + self._keyboardinterrupt_memo: Optional[ExceptionRepr] = None + + def _determine_show_progress_info(self) -> "Literal['progress', 'count', False]": + """Return whether we should display progress information based on the current config.""" + # do not show progress if we are not capturing output (#3038) + if self.config.getoption("capture", "no") == "no": + return False + # do not show progress if we are showing fixture setup/teardown + if self.config.getoption("setupshow", False): + return False + cfg: str = self.config.getini("console_output_style") + if cfg == "progress": + return "progress" + elif cfg == "count": + return "count" + else: + return False + + @property + def verbosity(self) -> int: + verbosity: int = self.config.option.verbose + return verbosity + + @property + def showheader(self) -> bool: + return self.verbosity >= 0 + + @property + def no_header(self) -> bool: + return bool(self.config.option.no_header) + + @property + def no_summary(self) -> bool: + return bool(self.config.option.no_summary) + + @property + def showfspath(self) -> bool: + if self._showfspath is None: + return self.verbosity >= 0 + return self._showfspath + + @showfspath.setter + def showfspath(self, value: Optional[bool]) -> None: + self._showfspath = value + + @property + def showlongtestinfo(self) -> bool: + return self.verbosity > 0 + + def hasopt(self, char: str) -> bool: + char = {"xfailed": 
"x", "skipped": "s"}.get(char, char) + return char in self.reportchars + + def write_fspath_result(self, nodeid: str, res, **markup: bool) -> None: + fspath = self.config.rootpath / nodeid.split("::")[0] + if self.currentfspath is None or fspath != self.currentfspath: + if self.currentfspath is not None and self._show_progress_info: + self._write_progress_information_filling_space() + self.currentfspath = fspath + relfspath = bestrelpath(self.startpath, fspath) + self._tw.line() + self._tw.write(relfspath + " ") + self._tw.write(res, flush=True, **markup) + + def write_ensure_prefix(self, prefix: str, extra: str = "", **kwargs) -> None: + if self.currentfspath != prefix: + self._tw.line() + self.currentfspath = prefix + self._tw.write(prefix) + if extra: + self._tw.write(extra, **kwargs) + self.currentfspath = -2 + + def ensure_newline(self) -> None: + if self.currentfspath: + self._tw.line() + self.currentfspath = None + + def write(self, content: str, *, flush: bool = False, **markup: bool) -> None: + self._tw.write(content, flush=flush, **markup) + + def flush(self) -> None: + self._tw.flush() + + def write_line(self, line: Union[str, bytes], **markup: bool) -> None: + if not isinstance(line, str): + line = str(line, errors="replace") + self.ensure_newline() + self._tw.line(line, **markup) + + def rewrite(self, line: str, **markup: bool) -> None: + """Rewinds the terminal cursor to the beginning and writes the given line. + + :param erase: + If True, will also add spaces until the full terminal width to ensure + previous lines are properly erased. + + The rest of the keyword arguments are markup instructions. + """ + erase = markup.pop("erase", False) + if erase: + fill_count = self._tw.fullwidth - len(line) - 1 + fill = " " * fill_count + else: + fill = "" + line = str(line) + self._tw.write("\r" + line + fill, **markup) + + def write_sep( + self, + sep: str, + title: Optional[str] = None, + fullwidth: Optional[int] = None, + **markup: bool, + ) -> None: + self.ensure_newline() + self._tw.sep(sep, title, fullwidth, **markup) + + def section(self, title: str, sep: str = "=", **kw: bool) -> None: + self._tw.sep(sep, title, **kw) + + def line(self, msg: str, **kw: bool) -> None: + self._tw.line(msg, **kw) + + def _add_stats(self, category: str, items: Sequence[Any]) -> None: + set_main_color = category not in self.stats + self.stats.setdefault(category, []).extend(items) + if set_main_color: + self._set_main_color() + + def pytest_internalerror(self, excrepr: ExceptionRepr) -> bool: + for line in str(excrepr).split("\n"): + self.write_line("INTERNALERROR> " + line) + return True + + def pytest_warning_recorded( + self, + warning_message: warnings.WarningMessage, + nodeid: str, + ) -> None: + from _pytest.warnings import warning_record_to_str + + fslocation = warning_message.filename, warning_message.lineno + message = warning_record_to_str(warning_message) + + warning_report = WarningReport( + fslocation=fslocation, message=message, nodeid=nodeid + ) + self._add_stats("warnings", [warning_report]) + + def pytest_plugin_registered(self, plugin: _PluggyPlugin) -> None: + if self.config.option.traceconfig: + msg = f"PLUGIN registered: {plugin}" + # XXX This event may happen during setup/teardown time + # which unfortunately captures our output here + # which garbles our output if we use self.write_line. 
+ self.write_line(msg) + + def pytest_deselected(self, items: Sequence[Item]) -> None: + self._add_stats("deselected", items) + + def pytest_runtest_logstart( + self, nodeid: str, location: Tuple[str, Optional[int], str] + ) -> None: + # Ensure that the path is printed before the + # 1st test of a module starts running. + if self.showlongtestinfo: + line = self._locationline(nodeid, *location) + self.write_ensure_prefix(line, "") + self.flush() + elif self.showfspath: + self.write_fspath_result(nodeid, "") + self.flush() + + def pytest_runtest_logreport(self, report: TestReport) -> None: + self._tests_ran = True + rep = report + res: Tuple[ + str, str, Union[str, Tuple[str, Mapping[str, bool]]] + ] = self.config.hook.pytest_report_teststatus(report=rep, config=self.config) + category, letter, word = res + if not isinstance(word, tuple): + markup = None + else: + word, markup = word + self._add_stats(category, [rep]) + if not letter and not word: + # Probably passed setup/teardown. + return + running_xdist = hasattr(rep, "node") + if markup is None: + was_xfail = hasattr(report, "wasxfail") + if rep.passed and not was_xfail: + markup = {"green": True} + elif rep.passed and was_xfail: + markup = {"yellow": True} + elif rep.failed: + markup = {"red": True} + elif rep.skipped: + markup = {"yellow": True} + else: + markup = {} + if self.verbosity <= 0: + self._tw.write(letter, **markup) + else: + self._progress_nodeids_reported.add(rep.nodeid) + line = self._locationline(rep.nodeid, *rep.location) + if not running_xdist: + self.write_ensure_prefix(line, word, **markup) + if rep.skipped or hasattr(report, "wasxfail"): + reason = _get_raw_skip_reason(rep) + if self.config.option.verbose < 2: + available_width = ( + (self._tw.fullwidth - self._tw.width_of_current_line) + - len(" [100%]") + - 1 + ) + formatted_reason = _format_trimmed( + " ({})", reason, available_width + ) + else: + formatted_reason = f" ({reason})" + + if reason and formatted_reason is not None: + self._tw.write(formatted_reason) + if self._show_progress_info: + self._write_progress_information_filling_space() + else: + self.ensure_newline() + self._tw.write("[%s]" % rep.node.gateway.id) + if self._show_progress_info: + self._tw.write( + self._get_progress_information_message() + " ", cyan=True + ) + else: + self._tw.write(" ") + self._tw.write(word, **markup) + self._tw.write(" " + line) + self.currentfspath = -2 + self.flush() + + @property + def _is_last_item(self) -> bool: + assert self._session is not None + return len(self._progress_nodeids_reported) == self._session.testscollected + + def pytest_runtest_logfinish(self, nodeid: str) -> None: + assert self._session + if self.verbosity <= 0 and self._show_progress_info: + if self._show_progress_info == "count": + num_tests = self._session.testscollected + progress_length = len(f" [{num_tests}/{num_tests}]") + else: + progress_length = len(" [100%]") + + self._progress_nodeids_reported.add(nodeid) + + if self._is_last_item: + self._write_progress_information_filling_space() + else: + main_color, _ = self._get_main_color() + w = self._width_of_current_line + past_edge = w + progress_length + 1 >= self._screen_width + if past_edge: + msg = self._get_progress_information_message() + self._tw.write(msg + "\n", **{main_color: True}) + + def _get_progress_information_message(self) -> str: + assert self._session + collected = self._session.testscollected + if self._show_progress_info == "count": + if collected: + progress = self._progress_nodeids_reported + counter_format = 
f"{{:{len(str(collected))}d}}" + format_string = f" [{counter_format}/{{}}]" + return format_string.format(len(progress), collected) + return f" [ {collected} / {collected} ]" + else: + if collected: + return " [{:3d}%]".format( + len(self._progress_nodeids_reported) * 100 // collected + ) + return " [100%]" + + def _write_progress_information_filling_space(self) -> None: + color, _ = self._get_main_color() + msg = self._get_progress_information_message() + w = self._width_of_current_line + fill = self._tw.fullwidth - w - 1 + self.write(msg.rjust(fill), flush=True, **{color: True}) + + @property + def _width_of_current_line(self) -> int: + """Return the width of the current line.""" + return self._tw.width_of_current_line + + def pytest_collection(self) -> None: + if self.isatty: + if self.config.option.verbose >= 0: + self.write("collecting ... ", flush=True, bold=True) + self._collect_report_last_write = timing.time() + elif self.config.option.verbose >= 1: + self.write("collecting ... ", flush=True, bold=True) + + def pytest_collectreport(self, report: CollectReport) -> None: + if report.failed: + self._add_stats("error", [report]) + elif report.skipped: + self._add_stats("skipped", [report]) + items = [x for x in report.result if isinstance(x, Item)] + self._numcollected += len(items) + if self.isatty: + self.report_collect() + + def report_collect(self, final: bool = False) -> None: + if self.config.option.verbose < 0: + return + + if not final: + # Only write "collecting" report every 0.5s. + t = timing.time() + if ( + self._collect_report_last_write is not None + and self._collect_report_last_write > t - REPORT_COLLECTING_RESOLUTION + ): + return + self._collect_report_last_write = t + + errors = len(self.stats.get("error", [])) + skipped = len(self.stats.get("skipped", [])) + deselected = len(self.stats.get("deselected", [])) + selected = self._numcollected - deselected + line = "collected " if final else "collecting " + line += ( + str(self._numcollected) + " item" + ("" if self._numcollected == 1 else "s") + ) + if errors: + line += " / %d error%s" % (errors, "s" if errors != 1 else "") + if deselected: + line += " / %d deselected" % deselected + if skipped: + line += " / %d skipped" % skipped + if self._numcollected > selected: + line += " / %d selected" % selected + if self.isatty: + self.rewrite(line, bold=True, erase=True) + if final: + self.write("\n") + else: + self.write_line(line) + + @hookimpl(trylast=True) + def pytest_sessionstart(self, session: "Session") -> None: + self._session = session + self._sessionstarttime = timing.time() + if not self.showheader: + return + self.write_sep("=", "test session starts", bold=True) + verinfo = platform.python_version() + if not self.no_header: + msg = f"platform {sys.platform} -- Python {verinfo}" + pypy_version_info = getattr(sys, "pypy_version_info", None) + if pypy_version_info: + verinfo = ".".join(map(str, pypy_version_info[:3])) + msg += f"[pypy-{verinfo}-{pypy_version_info[3]}]" + msg += ", pytest-{}, pluggy-{}".format( + _pytest._version.version, pluggy.__version__ + ) + if ( + self.verbosity > 0 + or self.config.option.debug + or getattr(self.config.option, "pastebin", None) + ): + msg += " -- " + str(sys.executable) + self.write_line(msg) + lines = self.config.hook.pytest_report_header( + config=self.config, start_path=self.startpath + ) + self._write_report_lines_from_hooks(lines) + + def _write_report_lines_from_hooks( + self, lines: Sequence[Union[str, Sequence[str]]] + ) -> None: + for line_or_lines in 
reversed(lines): + if isinstance(line_or_lines, str): + self.write_line(line_or_lines) + else: + for line in line_or_lines: + self.write_line(line) + + def pytest_report_header(self, config: Config) -> List[str]: + line = "rootdir: %s" % config.rootpath + + if config.inipath: + line += ", configfile: " + bestrelpath(config.rootpath, config.inipath) + + if config.args_source == Config.ArgsSource.TESTPATHS: + testpaths: List[str] = config.getini("testpaths") + line += ", testpaths: {}".format(", ".join(testpaths)) + + result = [line] + + plugininfo = config.pluginmanager.list_plugin_distinfo() + if plugininfo: + result.append("plugins: %s" % ", ".join(_plugin_nameversions(plugininfo))) + return result + + def pytest_collection_finish(self, session: "Session") -> None: + self.report_collect(True) + + lines = self.config.hook.pytest_report_collectionfinish( + config=self.config, + start_path=self.startpath, + items=session.items, + ) + self._write_report_lines_from_hooks(lines) + + if self.config.getoption("collectonly"): + if session.items: + if self.config.option.verbose > -1: + self._tw.line("") + self._printcollecteditems(session.items) + + failed = self.stats.get("failed") + if failed: + self._tw.sep("!", "collection failures") + for rep in failed: + rep.toterminal(self._tw) + + def _printcollecteditems(self, items: Sequence[Item]) -> None: + if self.config.option.verbose < 0: + if self.config.option.verbose < -1: + counts = Counter(item.nodeid.split("::", 1)[0] for item in items) + for name, count in sorted(counts.items()): + self._tw.line("%s: %d" % (name, count)) + else: + for item in items: + self._tw.line(item.nodeid) + return + stack: List[Node] = [] + indent = "" + for item in items: + needed_collectors = item.listchain()[1:] # strip root node + while stack: + if stack == needed_collectors[: len(stack)]: + break + stack.pop() + for col in needed_collectors[len(stack) :]: + stack.append(col) + indent = (len(stack) - 1) * " " + self._tw.line(f"{indent}{col}") + if self.config.option.verbose >= 1: + obj = getattr(col, "obj", None) + doc = inspect.getdoc(obj) if obj else None + if doc: + for line in doc.splitlines(): + self._tw.line("{}{}".format(indent + " ", line)) + + @hookimpl(hookwrapper=True) + def pytest_sessionfinish( + self, session: "Session", exitstatus: Union[int, ExitCode] + ): + outcome = yield + outcome.get_result() + self._tw.line("") + summary_exit_codes = ( + ExitCode.OK, + ExitCode.TESTS_FAILED, + ExitCode.INTERRUPTED, + ExitCode.USAGE_ERROR, + ExitCode.NO_TESTS_COLLECTED, + ) + if exitstatus in summary_exit_codes and not self.no_summary: + self.config.hook.pytest_terminal_summary( + terminalreporter=self, exitstatus=exitstatus, config=self.config + ) + if session.shouldfail: + self.write_sep("!", str(session.shouldfail), red=True) + if exitstatus == ExitCode.INTERRUPTED: + self._report_keyboardinterrupt() + self._keyboardinterrupt_memo = None + elif session.shouldstop: + self.write_sep("!", str(session.shouldstop), red=True) + self.summary_stats() + + @hookimpl(hookwrapper=True) + def pytest_terminal_summary(self) -> Generator[None, None, None]: + self.summary_errors() + self.summary_failures() + self.summary_warnings() + self.summary_passes() + yield + self.short_test_summary() + # Display any extra warnings from teardown here (if any). 
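+        # Calling summary_warnings() both before and after the yield is
+        # intentional: the second call prints only warnings recorded since
+        # the first, under the "warnings summary (final)" title (tracked via
+        # _already_displayed_warnings).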
+ self.summary_warnings() + + def pytest_keyboard_interrupt(self, excinfo: ExceptionInfo[BaseException]) -> None: + self._keyboardinterrupt_memo = excinfo.getrepr(funcargs=True) + + def pytest_unconfigure(self) -> None: + if self._keyboardinterrupt_memo is not None: + self._report_keyboardinterrupt() + + def _report_keyboardinterrupt(self) -> None: + excrepr = self._keyboardinterrupt_memo + assert excrepr is not None + assert excrepr.reprcrash is not None + msg = excrepr.reprcrash.message + self.write_sep("!", msg) + if "KeyboardInterrupt" in msg: + if self.config.option.fulltrace: + excrepr.toterminal(self._tw) + else: + excrepr.reprcrash.toterminal(self._tw) + self._tw.line( + "(to show a full traceback on KeyboardInterrupt use --full-trace)", + yellow=True, + ) + + def _locationline( + self, nodeid: str, fspath: str, lineno: Optional[int], domain: str + ) -> str: + def mkrel(nodeid: str) -> str: + line = self.config.cwd_relative_nodeid(nodeid) + if domain and line.endswith(domain): + line = line[: -len(domain)] + values = domain.split("[") + values[0] = values[0].replace(".", "::") # don't replace '.' in params + line += "[".join(values) + return line + + # collect_fspath comes from testid which has a "/"-normalized path. + if fspath: + res = mkrel(nodeid) + if self.verbosity >= 2 and nodeid.split("::")[0] != fspath.replace( + "\\", nodes.SEP + ): + res += " <- " + bestrelpath(self.startpath, Path(fspath)) + else: + res = "[location]" + return res + " " + + def _getfailureheadline(self, rep): + head_line = rep.head_line + if head_line: + return head_line + return "test session" # XXX? + + def _getcrashline(self, rep): + try: + return str(rep.longrepr.reprcrash) + except AttributeError: + try: + return str(rep.longrepr)[:50] + except AttributeError: + return "" + + # + # Summaries for sessionfinish. 
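+    # getreports() filters out reports already shown interactively by the
+    # debugger (tagged with `_pdbshown`); the summary_* methods below all
+    # build on it.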
+ # + def getreports(self, name: str): + return [x for x in self.stats.get(name, ()) if not hasattr(x, "_pdbshown")] + + def summary_warnings(self) -> None: + if self.hasopt("w"): + all_warnings: Optional[List[WarningReport]] = self.stats.get("warnings") + if not all_warnings: + return + + final = self._already_displayed_warnings is not None + if final: + warning_reports = all_warnings[self._already_displayed_warnings :] + else: + warning_reports = all_warnings + self._already_displayed_warnings = len(warning_reports) + if not warning_reports: + return + + reports_grouped_by_message: Dict[str, List[WarningReport]] = {} + for wr in warning_reports: + reports_grouped_by_message.setdefault(wr.message, []).append(wr) + + def collapsed_location_report(reports: List[WarningReport]) -> str: + locations = [] + for w in reports: + location = w.get_location(self.config) + if location: + locations.append(location) + + if len(locations) < 10: + return "\n".join(map(str, locations)) + + counts_by_filename = Counter( + str(loc).split("::", 1)[0] for loc in locations + ) + return "\n".join( + "{}: {} warning{}".format(k, v, "s" if v > 1 else "") + for k, v in counts_by_filename.items() + ) + + title = "warnings summary (final)" if final else "warnings summary" + self.write_sep("=", title, yellow=True, bold=False) + for message, message_reports in reports_grouped_by_message.items(): + maybe_location = collapsed_location_report(message_reports) + if maybe_location: + self._tw.line(maybe_location) + lines = message.splitlines() + indented = "\n".join(" " + x for x in lines) + message = indented.rstrip() + else: + message = message.rstrip() + self._tw.line(message) + self._tw.line() + self._tw.line( + "-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html" + ) + + def summary_passes(self) -> None: + if self.config.option.tbstyle != "no": + if self.hasopt("P"): + reports: List[TestReport] = self.getreports("passed") + if not reports: + return + self.write_sep("=", "PASSES") + for rep in reports: + if rep.sections: + msg = self._getfailureheadline(rep) + self.write_sep("_", msg, green=True, bold=True) + self._outrep_summary(rep) + self._handle_teardown_sections(rep.nodeid) + + def _get_teardown_reports(self, nodeid: str) -> List[TestReport]: + reports = self.getreports("") + return [ + report + for report in reports + if report.when == "teardown" and report.nodeid == nodeid + ] + + def _handle_teardown_sections(self, nodeid: str) -> None: + for report in self._get_teardown_reports(nodeid): + self.print_teardown_sections(report) + + def print_teardown_sections(self, rep: TestReport) -> None: + showcapture = self.config.option.showcapture + if showcapture == "no": + return + for secname, content in rep.sections: + if showcapture != "all" and showcapture not in secname: + continue + if "teardown" in secname: + self._tw.sep("-", secname) + if content[-1:] == "\n": + content = content[:-1] + self._tw.line(content) + + def summary_failures(self) -> None: + if self.config.option.tbstyle != "no": + reports: List[BaseReport] = self.getreports("failed") + if not reports: + return + self.write_sep("=", "FAILURES") + if self.config.option.tbstyle == "line": + for rep in reports: + line = self._getcrashline(rep) + self.write_line(line) + else: + for rep in reports: + msg = self._getfailureheadline(rep) + self.write_sep("_", msg, red=True, bold=True) + self._outrep_summary(rep) + self._handle_teardown_sections(rep.nodeid) + + def summary_errors(self) -> None: + if self.config.option.tbstyle != "no": + 
reports: List[BaseReport] = self.getreports("error") + if not reports: + return + self.write_sep("=", "ERRORS") + for rep in self.stats["error"]: + msg = self._getfailureheadline(rep) + if rep.when == "collect": + msg = "ERROR collecting " + msg + else: + msg = f"ERROR at {rep.when} of {msg}" + self.write_sep("_", msg, red=True, bold=True) + self._outrep_summary(rep) + + def _outrep_summary(self, rep: BaseReport) -> None: + rep.toterminal(self._tw) + showcapture = self.config.option.showcapture + if showcapture == "no": + return + for secname, content in rep.sections: + if showcapture != "all" and showcapture not in secname: + continue + self._tw.sep("-", secname) + if content[-1:] == "\n": + content = content[:-1] + self._tw.line(content) + + def summary_stats(self) -> None: + if self.verbosity < -1: + return + + session_duration = timing.time() - self._sessionstarttime + (parts, main_color) = self.build_summary_stats_line() + line_parts = [] + + display_sep = self.verbosity >= 0 + if display_sep: + fullwidth = self._tw.fullwidth + for text, markup in parts: + with_markup = self._tw.markup(text, **markup) + if display_sep: + fullwidth += len(with_markup) - len(text) + line_parts.append(with_markup) + msg = ", ".join(line_parts) + + main_markup = {main_color: True} + duration = f" in {format_session_duration(session_duration)}" + duration_with_markup = self._tw.markup(duration, **main_markup) + if display_sep: + fullwidth += len(duration_with_markup) - len(duration) + msg += duration_with_markup + + if display_sep: + markup_for_end_sep = self._tw.markup("", **main_markup) + if markup_for_end_sep.endswith("\x1b[0m"): + markup_for_end_sep = markup_for_end_sep[:-4] + fullwidth += len(markup_for_end_sep) + msg += markup_for_end_sep + + if display_sep: + self.write_sep("=", msg, fullwidth=fullwidth, **main_markup) + else: + self.write_line(msg, **main_markup) + + def short_test_summary(self) -> None: + if not self.reportchars: + return + + def show_simple(lines: List[str], *, stat: str) -> None: + failed = self.stats.get(stat, []) + if not failed: + return + config = self.config + for rep in failed: + color = _color_for_type.get(stat, _color_for_type_default) + line = _get_line_with_reprcrash_message( + config, rep, self._tw, {color: True} + ) + lines.append(line) + + def show_xfailed(lines: List[str]) -> None: + xfailed = self.stats.get("xfailed", []) + for rep in xfailed: + verbose_word = rep._get_verbose_word(self.config) + markup_word = self._tw.markup( + verbose_word, **{_color_for_type["warnings"]: True} + ) + nodeid = _get_node_id_with_markup(self._tw, self.config, rep) + line = f"{markup_word} {nodeid}" + reason = rep.wasxfail + if reason: + line += " - " + str(reason) + + lines.append(line) + + def show_xpassed(lines: List[str]) -> None: + xpassed = self.stats.get("xpassed", []) + for rep in xpassed: + verbose_word = rep._get_verbose_word(self.config) + markup_word = self._tw.markup( + verbose_word, **{_color_for_type["warnings"]: True} + ) + nodeid = _get_node_id_with_markup(self._tw, self.config, rep) + reason = rep.wasxfail + lines.append(f"{markup_word} {nodeid} {reason}") + + def show_skipped(lines: List[str]) -> None: + skipped: List[CollectReport] = self.stats.get("skipped", []) + fskips = _folded_skips(self.startpath, skipped) if skipped else [] + if not fskips: + return + verbose_word = skipped[0]._get_verbose_word(self.config) + markup_word = self._tw.markup( + verbose_word, **{_color_for_type["warnings"]: True} + ) + prefix = "Skipped: " + for num, fspath, lineno, reason in 
fskips: + if reason.startswith(prefix): + reason = reason[len(prefix) :] + if lineno is not None: + lines.append( + "%s [%d] %s:%d: %s" % (markup_word, num, fspath, lineno, reason) + ) + else: + lines.append("%s [%d] %s: %s" % (markup_word, num, fspath, reason)) + + REPORTCHAR_ACTIONS: Mapping[str, Callable[[List[str]], None]] = { + "x": show_xfailed, + "X": show_xpassed, + "f": partial(show_simple, stat="failed"), + "s": show_skipped, + "p": partial(show_simple, stat="passed"), + "E": partial(show_simple, stat="error"), + } + + lines: List[str] = [] + for char in self.reportchars: + action = REPORTCHAR_ACTIONS.get(char) + if action: # skipping e.g. "P" (passed with output) here. + action(lines) + + if lines: + self.write_sep("=", "short test summary info", cyan=True, bold=True) + for line in lines: + self.write_line(line) + + def _get_main_color(self) -> Tuple[str, List[str]]: + if self._main_color is None or self._known_types is None or self._is_last_item: + self._set_main_color() + assert self._main_color + assert self._known_types + return self._main_color, self._known_types + + def _determine_main_color(self, unknown_type_seen: bool) -> str: + stats = self.stats + if "failed" in stats or "error" in stats: + main_color = "red" + elif "warnings" in stats or "xpassed" in stats or unknown_type_seen: + main_color = "yellow" + elif "passed" in stats or not self._is_last_item: + main_color = "green" + else: + main_color = "yellow" + return main_color + + def _set_main_color(self) -> None: + unknown_types: List[str] = [] + for found_type in self.stats.keys(): + if found_type: # setup/teardown reports have an empty key, ignore them + if found_type not in KNOWN_TYPES and found_type not in unknown_types: + unknown_types.append(found_type) + self._known_types = list(KNOWN_TYPES) + unknown_types + self._main_color = self._determine_main_color(bool(unknown_types)) + + def build_summary_stats_line(self) -> Tuple[List[Tuple[str, Dict[str, bool]]], str]: + """ + Build the parts used in the last summary stats line. + + The summary stats line is the line shown at the end, "=== 12 passed, 2 errors in Xs===". + + This function builds a list of the "parts" that make up for the text in that line, in + the example above it would be: + + [ + ("12 passed", {"green": True}), + ("2 errors", {"red": True} + ] + + That last dict for each line is a "markup dictionary", used by TerminalWriter to + color output. + + The final color of the line is also determined by this function, and is the second + element of the returned tuple. 
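+
+        The markup keys (``green``, ``red``, ``bold``, ...) are the keyword
+        flags understood by ``TerminalWriter.markup()``.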
+ """ + if self.config.getoption("collectonly"): + return self._build_collect_only_summary_stats_line() + else: + return self._build_normal_summary_stats_line() + + def _get_reports_to_display(self, key: str) -> List[Any]: + """Get test/collection reports for the given status key, such as `passed` or `error`.""" + reports = self.stats.get(key, []) + return [x for x in reports if getattr(x, "count_towards_summary", True)] + + def _build_normal_summary_stats_line( + self, + ) -> Tuple[List[Tuple[str, Dict[str, bool]]], str]: + main_color, known_types = self._get_main_color() + parts = [] + + for key in known_types: + reports = self._get_reports_to_display(key) + if reports: + count = len(reports) + color = _color_for_type.get(key, _color_for_type_default) + markup = {color: True, "bold": color == main_color} + parts.append(("%d %s" % pluralize(count, key), markup)) + + if not parts: + parts = [("no tests ran", {_color_for_type_default: True})] + + return parts, main_color + + def _build_collect_only_summary_stats_line( + self, + ) -> Tuple[List[Tuple[str, Dict[str, bool]]], str]: + deselected = len(self._get_reports_to_display("deselected")) + errors = len(self._get_reports_to_display("error")) + + if self._numcollected == 0: + parts = [("no tests collected", {"yellow": True})] + main_color = "yellow" + + elif deselected == 0: + main_color = "green" + collected_output = "%d %s collected" % pluralize(self._numcollected, "test") + parts = [(collected_output, {main_color: True})] + else: + all_tests_were_deselected = self._numcollected == deselected + if all_tests_were_deselected: + main_color = "yellow" + collected_output = f"no tests collected ({deselected} deselected)" + else: + main_color = "green" + selected = self._numcollected - deselected + collected_output = f"{selected}/{self._numcollected} tests collected ({deselected} deselected)" + + parts = [(collected_output, {main_color: True})] + + if errors: + main_color = _color_for_type["error"] + parts += [("%d %s" % pluralize(errors, "error"), {main_color: True})] + + return parts, main_color + + +def _get_node_id_with_markup(tw: TerminalWriter, config: Config, rep: BaseReport): + nodeid = config.cwd_relative_nodeid(rep.nodeid) + path, *parts = nodeid.split("::") + if parts: + parts_markup = tw.markup("::".join(parts), bold=True) + return path + "::" + parts_markup + else: + return path + + +def _format_trimmed(format: str, msg: str, available_width: int) -> Optional[str]: + """Format msg into format, ellipsizing it if doesn't fit in available_width. + + Returns None if even the ellipsis can't fit. + """ + # Only use the first line. + i = msg.find("\n") + if i != -1: + msg = msg[:i] + + ellipsis = "..." + format_width = wcswidth(format.format("")) + if format_width + len(ellipsis) > available_width: + return None + + if format_width + wcswidth(msg) > available_width: + available_width -= len(ellipsis) + msg = msg[:available_width] + while format_width + wcswidth(msg) > available_width: + msg = msg[:-1] + msg += ellipsis + + return format.format(msg) + + +def _get_line_with_reprcrash_message( + config: Config, rep: BaseReport, tw: TerminalWriter, word_markup: Dict[str, bool] +) -> str: + """Get summary line for a report, trying to add reprcrash message.""" + verbose_word = rep._get_verbose_word(config) + word = tw.markup(verbose_word, **word_markup) + node = _get_node_id_with_markup(tw, config, rep) + + line = f"{word} {node}" + line_width = wcswidth(line) + + try: + # Type ignored intentionally -- possible AttributeError expected. 
+ msg = rep.longrepr.reprcrash.message # type: ignore[union-attr] + except AttributeError: + pass + else: + if not running_on_ci(): + available_width = tw.fullwidth - line_width + msg = _format_trimmed(" - {}", msg, available_width) + else: + msg = f" - {msg}" + if msg is not None: + line += msg + + return line + + +def _folded_skips( + startpath: Path, + skipped: Sequence[CollectReport], +) -> List[Tuple[int, str, Optional[int], str]]: + d: Dict[Tuple[str, Optional[int], str], List[CollectReport]] = {} + for event in skipped: + assert event.longrepr is not None + assert isinstance(event.longrepr, tuple), (event, event.longrepr) + assert len(event.longrepr) == 3, (event, event.longrepr) + fspath, lineno, reason = event.longrepr + # For consistency, report all fspaths in relative form. + fspath = bestrelpath(startpath, Path(fspath)) + keywords = getattr(event, "keywords", {}) + # Folding reports with global pytestmark variable. + # This is a workaround, because for now we cannot identify the scope of a skip marker + # TODO: Revisit after marks scope would be fixed. + if ( + event.when == "setup" + and "skip" in keywords + and "pytestmark" not in keywords + ): + key: Tuple[str, Optional[int], str] = (fspath, None, reason) + else: + key = (fspath, lineno, reason) + d.setdefault(key, []).append(event) + values: List[Tuple[int, str, Optional[int], str]] = [] + for key, events in d.items(): + values.append((len(events), *key)) + return values + + +_color_for_type = { + "failed": "red", + "error": "red", + "warnings": "yellow", + "passed": "green", +} +_color_for_type_default = "yellow" + + +def pluralize(count: int, noun: str) -> Tuple[int, str]: + # No need to pluralize words such as `failed` or `passed`. + if noun not in ["error", "warnings", "test"]: + return count, noun + + # The `warnings` key is plural. To avoid API breakage, we keep it that way but + # set it to singular here so we can determine plurality in the same way as we do + # for `error`. + noun = noun.replace("warnings", "warning") + + return count, noun + "s" if count != 1 else noun + + +def _plugin_nameversions(plugininfo) -> List[str]: + values: List[str] = [] + for plugin, dist in plugininfo: + # Gets us name and version! + name = "{dist.project_name}-{dist.version}".format(dist=dist) + # Questionable convenience, but it keeps things short. + if name.startswith("pytest-"): + name = name[7:] + # We decided to print python package names they can have more than one plugin. + if name not in values: + values.append(name) + return values + + +def format_session_duration(seconds: float) -> str: + """Format the given seconds in a human readable manner to show in the final summary.""" + if seconds < 60: + return f"{seconds:.2f}s" + else: + dt = datetime.timedelta(seconds=int(seconds)) + return f"{seconds:.2f}s ({dt})" + + +def _get_raw_skip_reason(report: TestReport) -> str: + """Get the reason string of a skip/xfail/xpass test report. + + The string is just the part given by the user. 
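+
+    (Editor's illustration, not upstream text: an xfail report whose
+    wasxfail is "reason: flaky on CI" yields "flaky on CI"; a skip whose
+    longrepr reason is "Skipped: requires linux" yields "requires linux",
+    and a bare "Skipped" yields "".)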
+ """ + if hasattr(report, "wasxfail"): + reason = cast(str, report.wasxfail) + if reason.startswith("reason: "): + reason = reason[len("reason: ") :] + return reason + else: + assert report.skipped + assert isinstance(report.longrepr, tuple) + _, _, reason = report.longrepr + if reason.startswith("Skipped: "): + reason = reason[len("Skipped: ") :] + elif reason == "Skipped": + reason = "" + return reason diff --git a/env/lib/python3.8/site-packages/_pytest/threadexception.py b/env/lib/python3.8/site-packages/_pytest/threadexception.py new file mode 100644 index 00000000..43341e73 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/threadexception.py @@ -0,0 +1,88 @@ +import threading +import traceback +import warnings +from types import TracebackType +from typing import Any +from typing import Callable +from typing import Generator +from typing import Optional +from typing import Type + +import pytest + + +# Copied from cpython/Lib/test/support/threading_helper.py, with modifications. +class catch_threading_exception: + """Context manager catching threading.Thread exception using + threading.excepthook. + + Storing exc_value using a custom hook can create a reference cycle. The + reference cycle is broken explicitly when the context manager exits. + + Storing thread using a custom hook can resurrect it if it is set to an + object which is being finalized. Exiting the context manager clears the + stored object. + + Usage: + with threading_helper.catch_threading_exception() as cm: + # code spawning a thread which raises an exception + ... + # check the thread exception: use cm.args + ... + # cm.args attribute no longer exists at this point + # (to break a reference cycle) + """ + + def __init__(self) -> None: + self.args: Optional["threading.ExceptHookArgs"] = None + self._old_hook: Optional[Callable[["threading.ExceptHookArgs"], Any]] = None + + def _hook(self, args: "threading.ExceptHookArgs") -> None: + self.args = args + + def __enter__(self) -> "catch_threading_exception": + self._old_hook = threading.excepthook + threading.excepthook = self._hook + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + assert self._old_hook is not None + threading.excepthook = self._old_hook + self._old_hook = None + del self.args + + +def thread_exception_runtest_hook() -> Generator[None, None, None]: + with catch_threading_exception() as cm: + yield + if cm.args: + thread_name = "" if cm.args.thread is None else cm.args.thread.name + msg = f"Exception in thread {thread_name}\n\n" + msg += "".join( + traceback.format_exception( + cm.args.exc_type, + cm.args.exc_value, + cm.args.exc_traceback, + ) + ) + warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) + + +@pytest.hookimpl(hookwrapper=True, trylast=True) +def pytest_runtest_setup() -> Generator[None, None, None]: + yield from thread_exception_runtest_hook() + + +@pytest.hookimpl(hookwrapper=True, tryfirst=True) +def pytest_runtest_call() -> Generator[None, None, None]: + yield from thread_exception_runtest_hook() + + +@pytest.hookimpl(hookwrapper=True, tryfirst=True) +def pytest_runtest_teardown() -> Generator[None, None, None]: + yield from thread_exception_runtest_hook() diff --git a/env/lib/python3.8/site-packages/_pytest/timing.py b/env/lib/python3.8/site-packages/_pytest/timing.py new file mode 100644 index 00000000..925163a5 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/timing.py @@ -0,0 +1,12 @@ 
+"""Indirection for time functions. + +We intentionally grab some "time" functions internally to avoid tests mocking "time" to affect +pytest runtime information (issue #185). + +Fixture "mock_timing" also interacts with this module for pytest's own tests. +""" +from time import perf_counter +from time import sleep +from time import time + +__all__ = ["perf_counter", "sleep", "time"] diff --git a/env/lib/python3.8/site-packages/_pytest/tmpdir.py b/env/lib/python3.8/site-packages/_pytest/tmpdir.py new file mode 100644 index 00000000..9497a0d4 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/tmpdir.py @@ -0,0 +1,216 @@ +"""Support for providing temporary directories to test functions.""" +import os +import re +import sys +import tempfile +from pathlib import Path +from typing import Optional + +import attr + +from .pathlib import LOCK_TIMEOUT +from .pathlib import make_numbered_dir +from .pathlib import make_numbered_dir_with_cleanup +from .pathlib import rm_rf +from _pytest.compat import final +from _pytest.config import Config +from _pytest.deprecated import check_ispytest +from _pytest.fixtures import fixture +from _pytest.fixtures import FixtureRequest +from _pytest.monkeypatch import MonkeyPatch + + +@final +@attr.s(init=False) +class TempPathFactory: + """Factory for temporary directories under the common base temp directory. + + The base directory can be configured using the ``--basetemp`` option. + """ + + _given_basetemp = attr.ib(type=Optional[Path]) + _trace = attr.ib() + _basetemp = attr.ib(type=Optional[Path]) + + def __init__( + self, + given_basetemp: Optional[Path], + trace, + basetemp: Optional[Path] = None, + *, + _ispytest: bool = False, + ) -> None: + check_ispytest(_ispytest) + if given_basetemp is None: + self._given_basetemp = None + else: + # Use os.path.abspath() to get absolute path instead of resolve() as it + # does not work the same in all platforms (see #4427). + # Path.absolute() exists, but it is not public (see https://bugs.python.org/issue25012). + self._given_basetemp = Path(os.path.abspath(str(given_basetemp))) + self._trace = trace + self._basetemp = basetemp + + @classmethod + def from_config( + cls, + config: Config, + *, + _ispytest: bool = False, + ) -> "TempPathFactory": + """Create a factory according to pytest configuration. + + :meta private: + """ + check_ispytest(_ispytest) + return cls( + given_basetemp=config.option.basetemp, + trace=config.trace.get("tmpdir"), + _ispytest=True, + ) + + def _ensure_relative_to_basetemp(self, basename: str) -> str: + basename = os.path.normpath(basename) + if (self.getbasetemp() / basename).resolve().parent != self.getbasetemp(): + raise ValueError(f"{basename} is not a normalized and relative path") + return basename + + def mktemp(self, basename: str, numbered: bool = True) -> Path: + """Create a new temporary directory managed by the factory. + + :param basename: + Directory base name, must be a relative path. + + :param numbered: + If ``True``, ensure the directory is unique by adding a numbered + suffix greater than any existing one: ``basename="foo-"`` and ``numbered=True`` + means that this function will create directories named ``"foo-0"``, + ``"foo-1"``, ``"foo-2"`` and so on. + + :returns: + The path to the new directory. 
+ """ + basename = self._ensure_relative_to_basetemp(basename) + if not numbered: + p = self.getbasetemp().joinpath(basename) + p.mkdir(mode=0o700) + else: + p = make_numbered_dir(root=self.getbasetemp(), prefix=basename, mode=0o700) + self._trace("mktemp", p) + return p + + def getbasetemp(self) -> Path: + """Return the base temporary directory, creating it if needed. + + :returns: + The base temporary directory. + """ + if self._basetemp is not None: + return self._basetemp + + if self._given_basetemp is not None: + basetemp = self._given_basetemp + if basetemp.exists(): + rm_rf(basetemp) + basetemp.mkdir(mode=0o700) + basetemp = basetemp.resolve() + else: + from_env = os.environ.get("PYTEST_DEBUG_TEMPROOT") + temproot = Path(from_env or tempfile.gettempdir()).resolve() + user = get_user() or "unknown" + # use a sub-directory in the temproot to speed-up + # make_numbered_dir() call + rootdir = temproot.joinpath(f"pytest-of-{user}") + try: + rootdir.mkdir(mode=0o700, exist_ok=True) + except OSError: + # getuser() likely returned illegal characters for the platform, use unknown back off mechanism + rootdir = temproot.joinpath("pytest-of-unknown") + rootdir.mkdir(mode=0o700, exist_ok=True) + # Because we use exist_ok=True with a predictable name, make sure + # we are the owners, to prevent any funny business (on unix, where + # temproot is usually shared). + # Also, to keep things private, fixup any world-readable temp + # rootdir's permissions. Historically 0o755 was used, so we can't + # just error out on this, at least for a while. + if sys.platform != "win32": + uid = os.getuid() + rootdir_stat = rootdir.stat() + # getuid shouldn't fail, but cpython defines such a case. + # Let's hope for the best. + if uid != -1: + if rootdir_stat.st_uid != uid: + raise OSError( + f"The temporary directory {rootdir} is not owned by the current user. " + "Fix this and try again." + ) + if (rootdir_stat.st_mode & 0o077) != 0: + os.chmod(rootdir, rootdir_stat.st_mode & ~0o077) + basetemp = make_numbered_dir_with_cleanup( + prefix="pytest-", + root=rootdir, + keep=3, + lock_timeout=LOCK_TIMEOUT, + mode=0o700, + ) + assert basetemp is not None, basetemp + self._basetemp = basetemp + self._trace("new basetemp", basetemp) + return basetemp + + +def get_user() -> Optional[str]: + """Return the current user name, or None if getuser() does not work + in the current environment (see #1010).""" + try: + # In some exotic environments, getpass may not be importable. + import getpass + + return getpass.getuser() + except (ImportError, KeyError): + return None + + +def pytest_configure(config: Config) -> None: + """Create a TempPathFactory and attach it to the config object. + + This is to comply with existing plugins which expect the handler to be + available at pytest_configure time, but ideally should be moved entirely + to the tmp_path_factory session fixture. + """ + mp = MonkeyPatch() + config.add_cleanup(mp.undo) + _tmp_path_factory = TempPathFactory.from_config(config, _ispytest=True) + mp.setattr(config, "_tmp_path_factory", _tmp_path_factory, raising=False) + + +@fixture(scope="session") +def tmp_path_factory(request: FixtureRequest) -> TempPathFactory: + """Return a :class:`pytest.TempPathFactory` instance for the test session.""" + # Set dynamically by pytest_configure() above. 
+ return request.config._tmp_path_factory # type: ignore + + +def _mk_tmp(request: FixtureRequest, factory: TempPathFactory) -> Path: + name = request.node.name + name = re.sub(r"[\W]", "_", name) + MAXVAL = 30 + name = name[:MAXVAL] + return factory.mktemp(name, numbered=True) + + +@fixture +def tmp_path(request: FixtureRequest, tmp_path_factory: TempPathFactory) -> Path: + """Return a temporary directory path object which is unique to each test + function invocation, created as a sub directory of the base temporary + directory. + + By default, a new base temporary directory is created each test session, + and old bases are removed after 3 sessions, to aid in debugging. If + ``--basetemp`` is used then it is cleared each session. See :ref:`base + temporary directory`. + + The returned object is a :class:`pathlib.Path` object. + """ + + return _mk_tmp(request, tmp_path_factory) diff --git a/env/lib/python3.8/site-packages/_pytest/unittest.py b/env/lib/python3.8/site-packages/_pytest/unittest.py new file mode 100644 index 00000000..c2df9865 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/unittest.py @@ -0,0 +1,417 @@ +"""Discover and run std-library "unittest" style tests.""" +import sys +import traceback +import types +from typing import Any +from typing import Callable +from typing import Generator +from typing import Iterable +from typing import List +from typing import Optional +from typing import Tuple +from typing import Type +from typing import TYPE_CHECKING +from typing import Union + +import _pytest._code +import pytest +from _pytest.compat import getimfunc +from _pytest.compat import is_async_function +from _pytest.config import hookimpl +from _pytest.fixtures import FixtureRequest +from _pytest.nodes import Collector +from _pytest.nodes import Item +from _pytest.outcomes import exit +from _pytest.outcomes import fail +from _pytest.outcomes import skip +from _pytest.outcomes import xfail +from _pytest.python import Class +from _pytest.python import Function +from _pytest.python import Module +from _pytest.runner import CallInfo +from _pytest.scope import Scope + +if TYPE_CHECKING: + import unittest + import twisted.trial.unittest + + _SysExcInfoType = Union[ + Tuple[Type[BaseException], BaseException, types.TracebackType], + Tuple[None, None, None], + ] + + +def pytest_pycollect_makeitem( + collector: Union[Module, Class], name: str, obj: object +) -> Optional["UnitTestCase"]: + # Has unittest been imported and is obj a subclass of its TestCase? + try: + ut = sys.modules["unittest"] + # Type ignored because `ut` is an opaque module. + if not issubclass(obj, ut.TestCase): # type: ignore + return None + except Exception: + return None + # Yes, so let's collect it. + item: UnitTestCase = UnitTestCase.from_parent(collector, name=name, obj=obj) + return item + + +class UnitTestCase(Class): + # Marker for fixturemanger.getfixtureinfo() + # to declare that our children do not support funcargs. 
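+    # (Editor's sketch, not upstream code.) The kind of class collected here
+    # is plain std-lib unittest, e.g. this hypothetical case:
+    #
+    #   import unittest
+    #
+    #   class TestMath(unittest.TestCase):
+    #       def test_add(self):
+    #           self.assertEqual(1 + 1, 2)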
+ nofuncargs = True + + def collect(self) -> Iterable[Union[Item, Collector]]: + from unittest import TestLoader + + cls = self.obj + if not getattr(cls, "__test__", True): + return + + skipped = _is_skipped(cls) + if not skipped: + self._inject_setup_teardown_fixtures(cls) + self._inject_setup_class_fixture() + + self.session._fixturemanager.parsefactories(self, unittest=True) + loader = TestLoader() + foundsomething = False + for name in loader.getTestCaseNames(self.obj): + x = getattr(self.obj, name) + if not getattr(x, "__test__", True): + continue + funcobj = getimfunc(x) + yield TestCaseFunction.from_parent(self, name=name, callobj=funcobj) + foundsomething = True + + if not foundsomething: + runtest = getattr(self.obj, "runTest", None) + if runtest is not None: + ut = sys.modules.get("twisted.trial.unittest", None) + # Type ignored because `ut` is an opaque module. + if ut is None or runtest != ut.TestCase.runTest: # type: ignore + yield TestCaseFunction.from_parent(self, name="runTest") + + def _inject_setup_teardown_fixtures(self, cls: type) -> None: + """Injects a hidden auto-use fixture to invoke setUpClass/setup_method and corresponding + teardown functions (#517).""" + class_fixture = _make_xunit_fixture( + cls, + "setUpClass", + "tearDownClass", + "doClassCleanups", + scope=Scope.Class, + pass_self=False, + ) + if class_fixture: + cls.__pytest_class_setup = class_fixture # type: ignore[attr-defined] + + method_fixture = _make_xunit_fixture( + cls, + "setup_method", + "teardown_method", + None, + scope=Scope.Function, + pass_self=True, + ) + if method_fixture: + cls.__pytest_method_setup = method_fixture # type: ignore[attr-defined] + + +def _make_xunit_fixture( + obj: type, + setup_name: str, + teardown_name: str, + cleanup_name: Optional[str], + scope: Scope, + pass_self: bool, +): + setup = getattr(obj, setup_name, None) + teardown = getattr(obj, teardown_name, None) + if setup is None and teardown is None: + return None + + if cleanup_name: + cleanup = getattr(obj, cleanup_name, lambda *args: None) + else: + + def cleanup(*args): + pass + + @pytest.fixture( + scope=scope.value, + autouse=True, + # Use a unique name to speed up lookup. + name=f"_unittest_{setup_name}_fixture_{obj.__qualname__}", + ) + def fixture(self, request: FixtureRequest) -> Generator[None, None, None]: + if _is_skipped(self): + reason = self.__unittest_skip_why__ + raise pytest.skip.Exception(reason, _use_item_location=True) + if setup is not None: + try: + if pass_self: + setup(self, request.function) + else: + setup() + # unittest does not call the cleanup function for every BaseException, so we + # follow this here. + except Exception: + if pass_self: + cleanup(self) + else: + cleanup() + + raise + yield + try: + if teardown is not None: + if pass_self: + teardown(self, request.function) + else: + teardown() + finally: + if pass_self: + cleanup(self) + else: + cleanup() + + return fixture + + +class TestCaseFunction(Function): + nofuncargs = True + _excinfo: Optional[List[_pytest._code.ExceptionInfo[BaseException]]] = None + _testcase: Optional["unittest.TestCase"] = None + + def _getobj(self): + assert self.parent is not None + # Unlike a regular Function in a Class, where `item.obj` returns + # a *bound* method (attached to an instance), TestCaseFunction's + # `obj` returns an *unbound* method (not attached to an instance). + # This inconsistency is probably not desirable, but needs some + # consideration before changing. 
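+        # (Editor's illustration, not upstream code.) With the hypothetical
+        # TestMath above: a regular Function-in-Class would expose the bound
+        # method TestMath("test_add").test_add, whereas this returns the
+        # plain function TestMath.test_add.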
+ return getattr(self.parent.obj, self.originalname) # type: ignore[attr-defined] + + def setup(self) -> None: + # A bound method to be called during teardown() if set (see 'runtest()'). + self._explicit_tearDown: Optional[Callable[[], None]] = None + assert self.parent is not None + self._testcase = self.parent.obj(self.name) # type: ignore[attr-defined] + self._obj = getattr(self._testcase, self.name) + if hasattr(self, "_request"): + self._request._fillfixtures() + + def teardown(self) -> None: + if self._explicit_tearDown is not None: + self._explicit_tearDown() + self._explicit_tearDown = None + self._testcase = None + self._obj = None + + def startTest(self, testcase: "unittest.TestCase") -> None: + pass + + def _addexcinfo(self, rawexcinfo: "_SysExcInfoType") -> None: + # Unwrap potential exception info (see twisted trial support below). + rawexcinfo = getattr(rawexcinfo, "_rawexcinfo", rawexcinfo) + try: + excinfo = _pytest._code.ExceptionInfo[BaseException].from_exc_info(rawexcinfo) # type: ignore[arg-type] + # Invoke the attributes to trigger storing the traceback + # trial causes some issue there. + excinfo.value + excinfo.traceback + except TypeError: + try: + try: + values = traceback.format_exception(*rawexcinfo) + values.insert( + 0, + "NOTE: Incompatible Exception Representation, " + "displaying natively:\n\n", + ) + fail("".join(values), pytrace=False) + except (fail.Exception, KeyboardInterrupt): + raise + except BaseException: + fail( + "ERROR: Unknown Incompatible Exception " + "representation:\n%r" % (rawexcinfo,), + pytrace=False, + ) + except KeyboardInterrupt: + raise + except fail.Exception: + excinfo = _pytest._code.ExceptionInfo.from_current() + self.__dict__.setdefault("_excinfo", []).append(excinfo) + + def addError( + self, testcase: "unittest.TestCase", rawexcinfo: "_SysExcInfoType" + ) -> None: + try: + if isinstance(rawexcinfo[1], exit.Exception): + exit(rawexcinfo[1].msg) + except TypeError: + pass + self._addexcinfo(rawexcinfo) + + def addFailure( + self, testcase: "unittest.TestCase", rawexcinfo: "_SysExcInfoType" + ) -> None: + self._addexcinfo(rawexcinfo) + + def addSkip(self, testcase: "unittest.TestCase", reason: str) -> None: + try: + raise pytest.skip.Exception(reason, _use_item_location=True) + except skip.Exception: + self._addexcinfo(sys.exc_info()) + + def addExpectedFailure( + self, + testcase: "unittest.TestCase", + rawexcinfo: "_SysExcInfoType", + reason: str = "", + ) -> None: + try: + xfail(str(reason)) + except xfail.Exception: + self._addexcinfo(sys.exc_info()) + + def addUnexpectedSuccess( + self, + testcase: "unittest.TestCase", + reason: Optional["twisted.trial.unittest.Todo"] = None, + ) -> None: + msg = "Unexpected success" + if reason: + msg += f": {reason.reason}" + # Preserve unittest behaviour - fail the test. Explicitly not an XPASS. + try: + fail(msg, pytrace=False) + except fail.Exception: + self._addexcinfo(sys.exc_info()) + + def addSuccess(self, testcase: "unittest.TestCase") -> None: + pass + + def stopTest(self, testcase: "unittest.TestCase") -> None: + pass + + def runtest(self) -> None: + from _pytest.debugging import maybe_wrap_pytest_function_for_tracing + + assert self._testcase is not None + + maybe_wrap_pytest_function_for_tracing(self) + + # Let the unittest framework handle async functions. + if is_async_function(self.obj): + # Type ignored because self acts as the TestResult, but is not actually one. 
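+            # (Editor's note, not upstream code.) self._testcase(result=...)
+            # drives the standard unittest run loop; this class duck-types
+            # the result interface through the addError/addFailure/addSkip/
+            # addSuccess/... methods defined above, so outcomes are captured
+            # as pytest excinfos instead of into a unittest.TestResult.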
+ self._testcase(result=self) # type: ignore[arg-type] + else: + # When --pdb is given, we want to postpone calling tearDown() otherwise + # when entering the pdb prompt, tearDown() would have probably cleaned up + # instance variables, which makes it difficult to debug. + # Arguably we could always postpone tearDown(), but this changes the moment where the + # TestCase instance interacts with the results object, so better to only do it + # when absolutely needed. + # We need to consider if the test itself is skipped, or the whole class. + assert isinstance(self.parent, UnitTestCase) + skipped = _is_skipped(self.obj) or _is_skipped(self.parent.obj) + if self.config.getoption("usepdb") and not skipped: + self._explicit_tearDown = self._testcase.tearDown + setattr(self._testcase, "tearDown", lambda *args: None) + + # We need to update the actual bound method with self.obj, because + # wrap_pytest_function_for_tracing replaces self.obj by a wrapper. + setattr(self._testcase, self.name, self.obj) + try: + self._testcase(result=self) # type: ignore[arg-type] + finally: + delattr(self._testcase, self.name) + + def _prunetraceback( + self, excinfo: _pytest._code.ExceptionInfo[BaseException] + ) -> None: + super()._prunetraceback(excinfo) + traceback = excinfo.traceback.filter( + lambda x: not x.frame.f_globals.get("__unittest") + ) + if traceback: + excinfo.traceback = traceback + + +@hookimpl(tryfirst=True) +def pytest_runtest_makereport(item: Item, call: CallInfo[None]) -> None: + if isinstance(item, TestCaseFunction): + if item._excinfo: + call.excinfo = item._excinfo.pop(0) + try: + del call.result + except AttributeError: + pass + + # Convert unittest.SkipTest to pytest.skip. + # This is actually only needed for nose, which reuses unittest.SkipTest for + # its own nose.SkipTest. For unittest TestCases, SkipTest is already + # handled internally, and doesn't reach here. + unittest = sys.modules.get("unittest") + if ( + unittest + and call.excinfo + and isinstance(call.excinfo.value, unittest.SkipTest) # type: ignore[attr-defined] + ): + excinfo = call.excinfo + call2 = CallInfo[None].from_call( + lambda: pytest.skip(str(excinfo.value)), call.when + ) + call.excinfo = call2.excinfo + + +# Twisted trial support. 
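+# (Editor's note, not part of the vendored file.) The hookwrapper below
+# temporarily replaces twisted.python.failure.Failure.__init__ with a
+# version that stashes the raw exc_info on the Failure as `_rawexcinfo`;
+# TestCaseFunction._addexcinfo() above unwraps exactly that attribute.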
+ + +@hookimpl(hookwrapper=True) +def pytest_runtest_protocol(item: Item) -> Generator[None, None, None]: + if isinstance(item, TestCaseFunction) and "twisted.trial.unittest" in sys.modules: + ut: Any = sys.modules["twisted.python.failure"] + Failure__init__ = ut.Failure.__init__ + check_testcase_implements_trial_reporter() + + def excstore( + self, exc_value=None, exc_type=None, exc_tb=None, captureVars=None + ): + if exc_value is None: + self._rawexcinfo = sys.exc_info() + else: + if exc_type is None: + exc_type = type(exc_value) + self._rawexcinfo = (exc_type, exc_value, exc_tb) + try: + Failure__init__( + self, exc_value, exc_type, exc_tb, captureVars=captureVars + ) + except TypeError: + Failure__init__(self, exc_value, exc_type, exc_tb) + + ut.Failure.__init__ = excstore + yield + ut.Failure.__init__ = Failure__init__ + else: + yield + + +def check_testcase_implements_trial_reporter(done: List[int] = []) -> None: + if done: + return + from zope.interface import classImplements + from twisted.trial.itrial import IReporter + + classImplements(TestCaseFunction, IReporter) + done.append(1) + + +def _is_skipped(obj) -> bool: + """Return True if the given object has been marked with @unittest.skip.""" + return bool(getattr(obj, "__unittest_skip__", False)) diff --git a/env/lib/python3.8/site-packages/_pytest/unraisableexception.py b/env/lib/python3.8/site-packages/_pytest/unraisableexception.py new file mode 100644 index 00000000..fcb5d823 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/unraisableexception.py @@ -0,0 +1,93 @@ +import sys +import traceback +import warnings +from types import TracebackType +from typing import Any +from typing import Callable +from typing import Generator +from typing import Optional +from typing import Type + +import pytest + + +# Copied from cpython/Lib/test/support/__init__.py, with modifications. +class catch_unraisable_exception: + """Context manager catching unraisable exception using sys.unraisablehook. + + Storing the exception value (cm.unraisable.exc_value) creates a reference + cycle. The reference cycle is broken explicitly when the context manager + exits. + + Storing the object (cm.unraisable.object) can resurrect it if it is set to + an object which is being finalized. Exiting the context manager clears the + stored object. + + Usage: + with catch_unraisable_exception() as cm: + # code creating an "unraisable exception" + ... + # check the unraisable exception: use cm.unraisable + ... + # cm.unraisable attribute no longer exists at this point + # (to break a reference cycle) + """ + + def __init__(self) -> None: + self.unraisable: Optional["sys.UnraisableHookArgs"] = None + self._old_hook: Optional[Callable[["sys.UnraisableHookArgs"], Any]] = None + + def _hook(self, unraisable: "sys.UnraisableHookArgs") -> None: + # Storing unraisable.object can resurrect an object which is being + # finalized. Storing unraisable.exc_value creates a reference cycle. 
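+        # (Editor's sketch, not upstream code.) A typical source of an
+        # unraisable exception is a failing __del__, e.g. this hypothetical
+        # class:
+        #
+        #   class Leaky:
+        #       def __del__(self):
+        #           raise ValueError("boom")  # cannot propagate normally;
+        #                                     # routed to sys.unraisablehook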
+ self.unraisable = unraisable + + def __enter__(self) -> "catch_unraisable_exception": + self._old_hook = sys.unraisablehook + sys.unraisablehook = self._hook + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + assert self._old_hook is not None + sys.unraisablehook = self._old_hook + self._old_hook = None + del self.unraisable + + +def unraisable_exception_runtest_hook() -> Generator[None, None, None]: + with catch_unraisable_exception() as cm: + yield + if cm.unraisable: + if cm.unraisable.err_msg is not None: + err_msg = cm.unraisable.err_msg + else: + err_msg = "Exception ignored in" + msg = f"{err_msg}: {cm.unraisable.object!r}\n\n" + msg += "".join( + traceback.format_exception( + cm.unraisable.exc_type, + cm.unraisable.exc_value, + cm.unraisable.exc_traceback, + ) + ) + warnings.warn(pytest.PytestUnraisableExceptionWarning(msg)) + + +@pytest.hookimpl(hookwrapper=True, tryfirst=True) +def pytest_runtest_setup() -> Generator[None, None, None]: + yield from unraisable_exception_runtest_hook() + + +@pytest.hookimpl(hookwrapper=True, tryfirst=True) +def pytest_runtest_call() -> Generator[None, None, None]: + yield from unraisable_exception_runtest_hook() + + +@pytest.hookimpl(hookwrapper=True, tryfirst=True) +def pytest_runtest_teardown() -> Generator[None, None, None]: + yield from unraisable_exception_runtest_hook() diff --git a/env/lib/python3.8/site-packages/_pytest/warning_types.py b/env/lib/python3.8/site-packages/_pytest/warning_types.py new file mode 100644 index 00000000..620860c1 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/warning_types.py @@ -0,0 +1,171 @@ +import inspect +import warnings +from types import FunctionType +from typing import Any +from typing import Generic +from typing import Type +from typing import TypeVar + +import attr + +from _pytest.compat import final + + +class PytestWarning(UserWarning): + """Base class for all warnings emitted by pytest.""" + + __module__ = "pytest" + + +@final +class PytestAssertRewriteWarning(PytestWarning): + """Warning emitted by the pytest assert rewrite module.""" + + __module__ = "pytest" + + +@final +class PytestCacheWarning(PytestWarning): + """Warning emitted by the cache plugin in various situations.""" + + __module__ = "pytest" + + +@final +class PytestConfigWarning(PytestWarning): + """Warning emitted for configuration issues.""" + + __module__ = "pytest" + + +@final +class PytestCollectionWarning(PytestWarning): + """Warning emitted when pytest is not able to collect a file or symbol in a module.""" + + __module__ = "pytest" + + +class PytestDeprecationWarning(PytestWarning, DeprecationWarning): + """Warning class for features that will be removed in a future version.""" + + __module__ = "pytest" + + +class PytestRemovedIn8Warning(PytestDeprecationWarning): + """Warning class for features that will be removed in pytest 8.""" + + __module__ = "pytest" + + +class PytestReturnNotNoneWarning(PytestRemovedIn8Warning): + """Warning emitted when a test function is returning value other than None.""" + + __module__ = "pytest" + + +@final +class PytestExperimentalApiWarning(PytestWarning, FutureWarning): + """Warning category used to denote experiments in pytest. + + Use sparingly as the API might change or even be removed completely in a + future version. 
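+
+    (Editor's illustration, not upstream text: instances are typically
+    created via the simple() classmethod defined below, e.g.
+    PytestExperimentalApiWarning.simple("my_api") produces the message
+    "my_api is an experimental api that may change over time".)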
+ """ + + __module__ = "pytest" + + @classmethod + def simple(cls, apiname: str) -> "PytestExperimentalApiWarning": + return cls( + "{apiname} is an experimental api that may change over time".format( + apiname=apiname + ) + ) + + +@final +class PytestUnhandledCoroutineWarning(PytestReturnNotNoneWarning): + """Warning emitted for an unhandled coroutine. + + A coroutine was encountered when collecting test functions, but was not + handled by any async-aware plugin. + Coroutine test functions are not natively supported. + """ + + __module__ = "pytest" + + +@final +class PytestUnknownMarkWarning(PytestWarning): + """Warning emitted on use of unknown markers. + + See :ref:`mark` for details. + """ + + __module__ = "pytest" + + +@final +class PytestUnraisableExceptionWarning(PytestWarning): + """An unraisable exception was reported. + + Unraisable exceptions are exceptions raised in :meth:`__del__ ` + implementations and similar situations when the exception cannot be raised + as normal. + """ + + __module__ = "pytest" + + +@final +class PytestUnhandledThreadExceptionWarning(PytestWarning): + """An unhandled exception occurred in a :class:`~threading.Thread`. + + Such exceptions don't propagate normally. + """ + + __module__ = "pytest" + + +_W = TypeVar("_W", bound=PytestWarning) + + +@final +@attr.s(auto_attribs=True) +class UnformattedWarning(Generic[_W]): + """A warning meant to be formatted during runtime. + + This is used to hold warnings that need to format their message at runtime, + as opposed to a direct message. + """ + + category: Type["_W"] + template: str + + def format(self, **kwargs: Any) -> _W: + """Return an instance of the warning category, formatted with given kwargs.""" + return self.category(self.template.format(**kwargs)) + + +def warn_explicit_for(method: FunctionType, message: PytestWarning) -> None: + """ + Issue the warning :param:`message` for the definition of the given :param:`method` + + this helps to log warnigns for functions defined prior to finding an issue with them + (like hook wrappers being marked in a legacy mechanism) + """ + lineno = method.__code__.co_firstlineno + filename = inspect.getfile(method) + module = method.__module__ + mod_globals = method.__globals__ + try: + warnings.warn_explicit( + message, + type(message), + filename=filename, + module=module, + registry=mod_globals.setdefault("__warningregistry__", {}), + lineno=lineno, + ) + except Warning as w: + # If warnings are errors (e.g. -Werror), location information gets lost, so we add it to the message. + raise type(w)(f"{w}\n at {filename}:{lineno}") from None diff --git a/env/lib/python3.8/site-packages/_pytest/warnings.py b/env/lib/python3.8/site-packages/_pytest/warnings.py new file mode 100644 index 00000000..4aaa9445 --- /dev/null +++ b/env/lib/python3.8/site-packages/_pytest/warnings.py @@ -0,0 +1,148 @@ +import sys +import warnings +from contextlib import contextmanager +from typing import Generator +from typing import Optional +from typing import TYPE_CHECKING + +import pytest +from _pytest.config import apply_warning_filters +from _pytest.config import Config +from _pytest.config import parse_warning_filter +from _pytest.main import Session +from _pytest.nodes import Item +from _pytest.terminal import TerminalReporter + +if TYPE_CHECKING: + from typing_extensions import Literal + + +def pytest_configure(config: Config) -> None: + config.addinivalue_line( + "markers", + "filterwarnings(warning): add a warning filter to the given test. 
" + "see https://docs.pytest.org/en/stable/how-to/capture-warnings.html#pytest-mark-filterwarnings ", + ) + + +@contextmanager +def catch_warnings_for_item( + config: Config, + ihook, + when: "Literal['config', 'collect', 'runtest']", + item: Optional[Item], +) -> Generator[None, None, None]: + """Context manager that catches warnings generated in the contained execution block. + + ``item`` can be None if we are not in the context of an item execution. + + Each warning captured triggers the ``pytest_warning_recorded`` hook. + """ + config_filters = config.getini("filterwarnings") + cmdline_filters = config.known_args_namespace.pythonwarnings or [] + with warnings.catch_warnings(record=True) as log: + # mypy can't infer that record=True means log is not None; help it. + assert log is not None + + if not sys.warnoptions: + # If user is not explicitly configuring warning filters, show deprecation warnings by default (#2908). + warnings.filterwarnings("always", category=DeprecationWarning) + warnings.filterwarnings("always", category=PendingDeprecationWarning) + + apply_warning_filters(config_filters, cmdline_filters) + + # apply filters from "filterwarnings" marks + nodeid = "" if item is None else item.nodeid + if item is not None: + for mark in item.iter_markers(name="filterwarnings"): + for arg in mark.args: + warnings.filterwarnings(*parse_warning_filter(arg, escape=False)) + + yield + + for warning_message in log: + ihook.pytest_warning_recorded.call_historic( + kwargs=dict( + warning_message=warning_message, + nodeid=nodeid, + when=when, + location=None, + ) + ) + + +def warning_record_to_str(warning_message: warnings.WarningMessage) -> str: + """Convert a warnings.WarningMessage to a string.""" + warn_msg = warning_message.message + msg = warnings.formatwarning( + str(warn_msg), + warning_message.category, + warning_message.filename, + warning_message.lineno, + warning_message.line, + ) + if warning_message.source is not None: + try: + import tracemalloc + except ImportError: + pass + else: + tb = tracemalloc.get_object_traceback(warning_message.source) + if tb is not None: + formatted_tb = "\n".join(tb.format()) + # Use a leading new line to better separate the (large) output + # from the traceback to the previous warning text. + msg += f"\nObject allocated at:\n{formatted_tb}" + else: + # No need for a leading new line. + url = "https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings" + msg += "Enable tracemalloc to get traceback where the object was allocated.\n" + msg += f"See {url} for more info." 
+ return msg + + +@pytest.hookimpl(hookwrapper=True, tryfirst=True) +def pytest_runtest_protocol(item: Item) -> Generator[None, None, None]: + with catch_warnings_for_item( + config=item.config, ihook=item.ihook, when="runtest", item=item + ): + yield + + +@pytest.hookimpl(hookwrapper=True, tryfirst=True) +def pytest_collection(session: Session) -> Generator[None, None, None]: + config = session.config + with catch_warnings_for_item( + config=config, ihook=config.hook, when="collect", item=None + ): + yield + + +@pytest.hookimpl(hookwrapper=True) +def pytest_terminal_summary( + terminalreporter: TerminalReporter, +) -> Generator[None, None, None]: + config = terminalreporter.config + with catch_warnings_for_item( + config=config, ihook=config.hook, when="config", item=None + ): + yield + + +@pytest.hookimpl(hookwrapper=True) +def pytest_sessionfinish(session: Session) -> Generator[None, None, None]: + config = session.config + with catch_warnings_for_item( + config=config, ihook=config.hook, when="config", item=None + ): + yield + + +@pytest.hookimpl(hookwrapper=True) +def pytest_load_initial_conftests( + early_config: "Config", +) -> Generator[None, None, None]: + with catch_warnings_for_item( + config=early_config, ihook=early_config.hook, when="config", item=None + ): + yield diff --git a/env/lib/python3.8/site-packages/_yaml/__init__.py b/env/lib/python3.8/site-packages/_yaml/__init__.py new file mode 100644 index 00000000..7baa8c4b --- /dev/null +++ b/env/lib/python3.8/site-packages/_yaml/__init__.py @@ -0,0 +1,33 @@ +# This is a stub package designed to roughly emulate the _yaml +# extension module, which previously existed as a standalone module +# and has been moved into the `yaml` package namespace. +# It does not perfectly mimic its old counterpart, but should get +# close enough for anyone who's relying on it even when they shouldn't. +import yaml + +# in some circumstances, the yaml module we imoprted may be from a different version, so we need +# to tread carefully when poking at it here (it may not have the attributes we expect) +if not getattr(yaml, '__with_libyaml__', False): + from sys import version_info + + exc = ModuleNotFoundError if version_info >= (3, 6) else ImportError + raise exc("No module named '_yaml'") +else: + from yaml._yaml import * + import warnings + warnings.warn( + 'The _yaml extension module is now located at yaml._yaml' + ' and its location is subject to change. To use the' + ' LibYAML-based parser and emitter, import from `yaml`:' + ' `from yaml import CLoader as Loader, CDumper as Dumper`.', + DeprecationWarning + ) + del warnings + # Don't `del yaml` here because yaml is actually an existing + # namespace member of _yaml. + +__name__ = '_yaml' +# If the module is top-level (i.e. not a part of any specific package) +# then the attribute should be set to ''. +# https://docs.python.org/3.8/library/types.html +__package__ = '' diff --git a/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/INSTALLER b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/LICENSE.txt b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/LICENSE.txt new file mode 100644 index 00000000..e497a322 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/LICENSE.txt @@ -0,0 +1,13 @@ + Copyright aio-libs contributors. 
+ + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/METADATA b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/METADATA new file mode 100644 index 00000000..36469595 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/METADATA @@ -0,0 +1,253 @@ +Metadata-Version: 2.1 +Name: aiohttp +Version: 3.8.3 +Summary: Async http client/server framework (asyncio) +Home-page: https://github.com/aio-libs/aiohttp +Maintainer: aiohttp team +Maintainer-email: team@aiohttp.org +License: Apache 2 +Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby +Project-URL: CI: GitHub Actions, https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI +Project-URL: Coverage: codecov, https://codecov.io/github/aio-libs/aiohttp +Project-URL: Docs: Changelog, https://docs.aiohttp.org/en/stable/changes.html +Project-URL: Docs: RTD, https://docs.aiohttp.org +Project-URL: GitHub: issues, https://github.com/aio-libs/aiohttp/issues +Project-URL: GitHub: repo, https://github.com/aio-libs/aiohttp +Classifier: Development Status :: 5 - Production/Stable +Classifier: Framework :: AsyncIO +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Operating System :: POSIX +Classifier: Operating System :: MacOS :: MacOS X +Classifier: Operating System :: Microsoft :: Windows +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Topic :: Internet :: WWW/HTTP +Requires-Python: >=3.6 +Description-Content-Type: text/x-rst +License-File: LICENSE.txt +Requires-Dist: attrs (>=17.3.0) +Requires-Dist: charset-normalizer (<3.0,>=2.0) +Requires-Dist: multidict (<7.0,>=4.5) +Requires-Dist: async-timeout (<5.0,>=4.0.0a3) +Requires-Dist: yarl (<2.0,>=1.0) +Requires-Dist: frozenlist (>=1.1.1) +Requires-Dist: aiosignal (>=1.1.2) +Requires-Dist: idna-ssl (>=1.0) ; python_version < "3.7" +Requires-Dist: asynctest (==0.13.0) ; python_version < "3.8" +Requires-Dist: typing-extensions (>=3.7.4) ; python_version < "3.8" +Provides-Extra: speedups +Requires-Dist: aiodns ; extra == 'speedups' +Requires-Dist: Brotli ; extra == 'speedups' +Requires-Dist: cchardet ; (python_version < "3.10") and extra == 'speedups' + +================================== +Async http client/server framework +================================== + +.. image:: https://raw.githubusercontent.com/aio-libs/aiohttp/master/docs/aiohttp-plain.svg + :height: 64px + :width: 64px + :alt: aiohttp logo + +| + +.. image:: https://github.com/aio-libs/aiohttp/workflows/CI/badge.svg + :target: https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI + :alt: GitHub Actions status for master branch + +.. 
image:: https://codecov.io/gh/aio-libs/aiohttp/branch/master/graph/badge.svg + :target: https://codecov.io/gh/aio-libs/aiohttp + :alt: codecov.io status for master branch + +.. image:: https://badge.fury.io/py/aiohttp.svg + :target: https://pypi.org/project/aiohttp + :alt: Latest PyPI package version + +.. image:: https://readthedocs.org/projects/aiohttp/badge/?version=latest + :target: https://docs.aiohttp.org/ + :alt: Latest Read The Docs + +.. image:: https://img.shields.io/discourse/status?server=https%3A%2F%2Faio-libs.discourse.group + :target: https://aio-libs.discourse.group + :alt: Discourse status + +.. image:: https://badges.gitter.im/Join%20Chat.svg + :target: https://gitter.im/aio-libs/Lobby + :alt: Chat on Gitter + + +Key Features +============ + +- Supports both client and server side of HTTP protocol. +- Supports both client and server Web-Sockets out-of-the-box and avoids + Callback Hell. +- Provides Web-server with middlewares and plugable routing. + + +Getting started +=============== + +Client +------ + +To get something from the web: + +.. code-block:: python + + import aiohttp + import asyncio + + async def main(): + + async with aiohttp.ClientSession() as session: + async with session.get('http://python.org') as response: + + print("Status:", response.status) + print("Content-type:", response.headers['content-type']) + + html = await response.text() + print("Body:", html[:15], "...") + + asyncio.run(main()) + +This prints: + +.. code-block:: + + Status: 200 + Content-type: text/html; charset=utf-8 + Body: ... + +Coming from `requests `_ ? Read `why we need so many lines `_. + +Server +------ + +An example using a simple server: + +.. code-block:: python + + # examples/server_simple.py + from aiohttp import web + + async def handle(request): + name = request.match_info.get('name', "Anonymous") + text = "Hello, " + name + return web.Response(text=text) + + async def wshandle(request): + ws = web.WebSocketResponse() + await ws.prepare(request) + + async for msg in ws: + if msg.type == web.WSMsgType.text: + await ws.send_str("Hello, {}".format(msg.data)) + elif msg.type == web.WSMsgType.binary: + await ws.send_bytes(msg.data) + elif msg.type == web.WSMsgType.close: + break + + return ws + + + app = web.Application() + app.add_routes([web.get('/', handle), + web.get('/echo', wshandle), + web.get('/{name}', handle)]) + + if __name__ == '__main__': + web.run_app(app) + + +Documentation +============= + +https://aiohttp.readthedocs.io/ + + +Demos +===== + +https://github.com/aio-libs/aiohttp-demos + + +External links +============== + +* `Third party libraries + `_ +* `Built with aiohttp + `_ +* `Powered by aiohttp + `_ + +Feel free to make a Pull Request for adding your link to these pages! + + +Communication channels +====================== + +*aio-libs discourse group*: https://aio-libs.discourse.group + +*gitter chat* https://gitter.im/aio-libs/Lobby + +We support `Stack Overflow +`_. +Please add *aiohttp* tag to your question there. + +Requirements +============ + +- Python >= 3.6 +- async-timeout_ +- attrs_ +- charset-normalizer_ +- multidict_ +- yarl_ +- frozenlist_ + +Optionally you may install the cChardet_ and aiodns_ libraries (highly +recommended for sake of speed). + +.. _charset-normalizer: https://pypi.org/project/charset-normalizer +.. _aiodns: https://pypi.python.org/pypi/aiodns +.. _attrs: https://github.com/python-attrs/attrs +.. _multidict: https://pypi.python.org/pypi/multidict +.. _frozenlist: https://pypi.org/project/frozenlist/ +.. 
_yarl: https://pypi.python.org/pypi/yarl +.. _async-timeout: https://pypi.python.org/pypi/async_timeout +.. _cChardet: https://pypi.python.org/pypi/cchardet + +License +======= + +``aiohttp`` is offered under the Apache 2 license. + + +Keepsafe +======== + +The aiohttp community would like to thank Keepsafe +(https://www.getkeepsafe.com) for its support in the early days of +the project. + + +Source code +=========== + +The latest developer version is available in a GitHub repository: +https://github.com/aio-libs/aiohttp + +Benchmarks +========== + +If you are interested in efficiency, the AsyncIO community maintains a +list of benchmarks on the official wiki: +https://github.com/python/asyncio/wiki/Benchmarks diff --git a/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/RECORD b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/RECORD new file mode 100644 index 00000000..fcf0e17c --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/RECORD @@ -0,0 +1,117 @@ +aiohttp-3.8.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +aiohttp-3.8.3.dist-info/LICENSE.txt,sha256=n4DQ2311WpQdtFchcsJw7L2PCCuiFd3QlZhZQu2Uqes,588 +aiohttp-3.8.3.dist-info/METADATA,sha256=tGcxB-KseArVRgBQmGjX9g9YQehDdkqeIAeueYZI4-I,7355 +aiohttp-3.8.3.dist-info/RECORD,, +aiohttp-3.8.3.dist-info/WHEEL,sha256=-ijGDuALlPxm3HbhKntps0QzHsi-DPlXqgerYTTJkFE,148 +aiohttp-3.8.3.dist-info/top_level.txt,sha256=iv-JIaacmTl-hSho3QmphcKnbRRYx1st47yjz_178Ro,8 +aiohttp/.hash/_cparser.pxd.hash,sha256=GoFy-KArtO3unhO5IuosMnc-wwcx7yofVZp2gJi_n_Y,121 +aiohttp/.hash/_find_header.pxd.hash,sha256=_mbpD6vM-CVCKq3ulUvsOAz5Wdo88wrDzfpOsMQaMNA,125 +aiohttp/.hash/_helpers.pyi.hash,sha256=Ew4BZDc2LqFwszgZZUHHrJvw5P8HBhJ700n1Ntg52hE,121 +aiohttp/.hash/_helpers.pyx.hash,sha256=5JQ6BlMBE4HnRaCGdkK9_wpL3ZSWpU1gyLYva0Wwx2c,121 +aiohttp/.hash/_http_parser.pyx.hash,sha256=pEF-JTzd1dVYEwfuzUiTtp8aekvrhhOwiFi4vELWcsM,125 +aiohttp/.hash/_http_writer.pyx.hash,sha256=3Qg3T3D-Ud73elzPHBufK0yEu9tP5jsu6g-aPKQY9gE,125 +aiohttp/.hash/_websocket.pyx.hash,sha256=M97f-Yti-4vnE4GNTD1s_DzKs-fG_ww3jle6EUvixnE,123 +aiohttp/.hash/hdrs.py.hash,sha256=KpTaDTcWfiQrW2QPA5glgIfw6o5JC1hsAYZHeFMuBnI,116 +aiohttp/__init__.py,sha256=8k5pIKoY4JIJImY1z9sSjhGNbZzRmPazjF3TcbvDIIw,6870 +aiohttp/__pycache__/__init__.cpython-38.pyc,, +aiohttp/__pycache__/abc.cpython-38.pyc,, +aiohttp/__pycache__/base_protocol.cpython-38.pyc,, +aiohttp/__pycache__/client.cpython-38.pyc,, +aiohttp/__pycache__/client_exceptions.cpython-38.pyc,, +aiohttp/__pycache__/client_proto.cpython-38.pyc,, +aiohttp/__pycache__/client_reqrep.cpython-38.pyc,, +aiohttp/__pycache__/client_ws.cpython-38.pyc,, +aiohttp/__pycache__/connector.cpython-38.pyc,, +aiohttp/__pycache__/cookiejar.cpython-38.pyc,, +aiohttp/__pycache__/formdata.cpython-38.pyc,, +aiohttp/__pycache__/hdrs.cpython-38.pyc,, +aiohttp/__pycache__/helpers.cpython-38.pyc,, +aiohttp/__pycache__/http.cpython-38.pyc,, +aiohttp/__pycache__/http_exceptions.cpython-38.pyc,, +aiohttp/__pycache__/http_parser.cpython-38.pyc,, +aiohttp/__pycache__/http_websocket.cpython-38.pyc,, +aiohttp/__pycache__/http_writer.cpython-38.pyc,, +aiohttp/__pycache__/locks.cpython-38.pyc,, +aiohttp/__pycache__/log.cpython-38.pyc,, +aiohttp/__pycache__/multipart.cpython-38.pyc,, +aiohttp/__pycache__/payload.cpython-38.pyc,, +aiohttp/__pycache__/payload_streamer.cpython-38.pyc,, +aiohttp/__pycache__/pytest_plugin.cpython-38.pyc,, +aiohttp/__pycache__/resolver.cpython-38.pyc,, +aiohttp/__pycache__/streams.cpython-38.pyc,, 
+aiohttp/__pycache__/tcp_helpers.cpython-38.pyc,, +aiohttp/__pycache__/test_utils.cpython-38.pyc,, +aiohttp/__pycache__/tracing.cpython-38.pyc,, +aiohttp/__pycache__/typedefs.cpython-38.pyc,, +aiohttp/__pycache__/web.cpython-38.pyc,, +aiohttp/__pycache__/web_app.cpython-38.pyc,, +aiohttp/__pycache__/web_exceptions.cpython-38.pyc,, +aiohttp/__pycache__/web_fileresponse.cpython-38.pyc,, +aiohttp/__pycache__/web_log.cpython-38.pyc,, +aiohttp/__pycache__/web_middlewares.cpython-38.pyc,, +aiohttp/__pycache__/web_protocol.cpython-38.pyc,, +aiohttp/__pycache__/web_request.cpython-38.pyc,, +aiohttp/__pycache__/web_response.cpython-38.pyc,, +aiohttp/__pycache__/web_routedef.cpython-38.pyc,, +aiohttp/__pycache__/web_runner.cpython-38.pyc,, +aiohttp/__pycache__/web_server.cpython-38.pyc,, +aiohttp/__pycache__/web_urldispatcher.cpython-38.pyc,, +aiohttp/__pycache__/web_ws.cpython-38.pyc,, +aiohttp/__pycache__/worker.cpython-38.pyc,, +aiohttp/_cparser.pxd,sha256=5tE01W1fUWqytcOyldDUQKO--RH0OE1QYgQBiJWh-DM,4998 +aiohttp/_find_header.pxd,sha256=0GfwFCPN2zxEKTO1_MA5sYq2UfzsG8kcV3aTqvwlz3g,68 +aiohttp/_headers.pxi,sha256=n701k28dVPjwRnx5j6LpJhLTfj7dqu2vJt7f0O60Oyg,2007 +aiohttp/_helpers.cpython-38-x86_64-linux-gnu.so,sha256=P2bi11aP0JO44PSfu8MfQbRcpujnFMLlg4qJxOKdQ5I,353856 +aiohttp/_helpers.pyi,sha256=ZoKiJSS51PxELhI2cmIr5737YjjZcJt7FbIRO3ym1Ss,202 +aiohttp/_helpers.pyx,sha256=XeLbNft5X_4ifi8QB8i6TyrRuayijMSO3IDHeSA89uM,1049 +aiohttp/_http_parser.cpython-38-x86_64-linux-gnu.so,sha256=H7HVFIglsG77TAoY8riONXIZBz9gJtTxHMhxlXMepO8,2382200 +aiohttp/_http_parser.pyx,sha256=1u38_ESw5VgSqajx1mnGdO6Hqk0ov9PxeFreHq4ktoM,27336 +aiohttp/_http_writer.cpython-38-x86_64-linux-gnu.so,sha256=1CrwAf6ukB-bnc2grlew6P59rznaps1UvXN2_QsLg5Y,331288 +aiohttp/_http_writer.pyx,sha256=aIHAp8g4ZV5kbGRdmZce-vXjELw2M6fGKyJuOdgYQqw,4575 +aiohttp/_websocket.cpython-38-x86_64-linux-gnu.so,sha256=WhkckbcHcqC9PdSMQ_HLmEUCH3aPnGv0drG17Pa1BNk,124128 +aiohttp/_websocket.pyx,sha256=1XuOSNDCbyDrzF5uMA2isqausSs8l2jWTLDlNDLM9Io,1561 +aiohttp/abc.py,sha256=0FhHtbb3W7wRNtJISACN1teP8LZ49553v5Xoh5zNAFQ,5505 +aiohttp/base_protocol.py,sha256=XBq6YcLl7nwKJPIBKf2b1EobE0v7EBeZAoDJCkOO2eo,2676 +aiohttp/client.py,sha256=5dTKnaqzZvbEjd4M6yBOhDDR1uErDKu_xI3xGlzzYjs,45037 +aiohttp/client_exceptions.py,sha256=tiaUIb2xy3s-O-KTPVL6L_P0rpGQT0rV1dujwwgJKoI,9270 +aiohttp/client_proto.py,sha256=c4TAK8CVdycukenCJj7LtlQ3SEj04ilJ3DfmN3kPgeE,8170 +aiohttp/client_reqrep.py,sha256=ULqrco544ZQgYruj1mFD6Fd3af2fOZWJqn07R8rB5J8,36973 +aiohttp/client_ws.py,sha256=Sc0u3S-vZMadtEO6JpbLhVVw7KgtlsgZWHwaSYjkN0I,10516 +aiohttp/connector.py,sha256=0EPxWYzIF3UudRFL77QBVico5EfSMg9mPoy75m1aBhw,51177 +aiohttp/cookiejar.py,sha256=6Y8F3H19q81WypX7EmWR4eQvTc0Pj8QXdv20xM8BUlE,13514 +aiohttp/formdata.py,sha256=q2gpeiM9NFsl_eSFVxHZ7Qez6RbM8_BujERMkooQkx0,6106 +aiohttp/hdrs.py,sha256=owNRw0dgodeDWyobBVLkY88dLbkNoM20czE9xm40oDE,4724 +aiohttp/helpers.py,sha256=PwEnFh_jHJXr6fcM_QwEY3sSibVx8FgjdO37So19lqA,26361 +aiohttp/http.py,sha256=_B20NZc113uNtg0jabY-x4_3RrIpTsqmbRIwMcm-Bco,1800 +aiohttp/http_exceptions.py,sha256=eACQt7azwNbUi8aqXB5J2tIcc_CB4_4y4mLM9SZd4tM,2586 +aiohttp/http_parser.py,sha256=_8ESr_Qo22D7tsLtmzJHK2vysqgbI06WWiGmPm5fjWc,33092 +aiohttp/http_websocket.py,sha256=X6kzIgu0-wRLU9yQxP4iH-Hv_36uVjTCOmF2HgpZLlk,25299 +aiohttp/http_writer.py,sha256=Lia-FlcFvzvr0riJHZrvKFuzLzf2drWchEQ8vew5CJ0,5952 +aiohttp/locks.py,sha256=wRYFo1U82LwBBdqwU24JEPaoTAlKaaJd2FtfDKhkTb4,1136 +aiohttp/log.py,sha256=BbNKx9e3VMIm0xYjZI0IcBBoS7wjdeIeSaiJE7-qK2g,325 
+aiohttp/multipart.py,sha256=gmqFziP4ou8ZuoAOibOjoW7OJOsURzI5gkxoLPARQy8,32313 +aiohttp/payload.py,sha256=rachrZ66vC90f7swPXINaatGpPcCSVkiCCVKhrCFfuw,13634 +aiohttp/payload_streamer.py,sha256=3WmZ77SQ4dc8aKU6i2ZAb8L7FPdesCSoqRonSVOd_kE,2112 +aiohttp/py.typed,sha256=sow9soTwP9T_gEAQSVh7Gb8855h04Nwmhs2We-JRgZM,7 +aiohttp/pytest_plugin.py,sha256=4-5LfdrnZIBP2wLp8CjE54eshOolFbBpOTufvp4tUcI,11772 +aiohttp/resolver.py,sha256=CASOnXp5oZh_1sCFWzFlD-5x3V49HAXbAJw-5RYjgms,5092 +aiohttp/streams.py,sha256=_OTvFQVA8-8GJ120Y95lH-hmLq1QaFNBFdaA49TECIc,20758 +aiohttp/tcp_helpers.py,sha256=BSadqVWaBpMFDRWnhaaR941N9MiDZ7bdTrxgCb0CW-M,961 +aiohttp/test_utils.py,sha256=MNQb4Zq6rKLLC3PABy5hIjONlsoxd-lc3OirNGHIJS4,21434 +aiohttp/tracing.py,sha256=gn_O9btTDB66nQ1wJWT3N2gEsO5kYFSIynnd8cp4YgA,15177 +aiohttp/typedefs.py,sha256=oLnG3fxcBEdu4kIzUi0Npcg5kjq01cRkK27VSgjNnmw,1766 +aiohttp/web.py,sha256=bAhkE0oeP6eI06ohkBoPu4ZLchctpEz23cVym_JECSg,18081 +aiohttp/web_app.py,sha256=QkVmy8pR_oaJ3od2fYfSfjzfZ8oGEPg9FPW_d0r2THo,17170 +aiohttp/web_exceptions.py,sha256=T_ghTJB_hnPkoWa3qyeY_GtVbehYQwPCpcCRb-072N0,10098 +aiohttp/web_fileresponse.py,sha256=F_xRvYFL2ox4oVNW3WSjwQ6LmVpkkYM8MPxsqiNsfUg,10784 +aiohttp/web_log.py,sha256=OH-C5shv59-nXchWX8owfLfToMxVdtj0PuK3uGIGEJQ,7557 +aiohttp/web_middlewares.py,sha256=nnllFgxX9GjvkG3YW43jPDH7C03iCfyMBxhYw-OkTI0,4137 +aiohttp/web_protocol.py,sha256=EFr0sy29rW7ffRz-tlRlBnHogmlyt6YvaJP6X1Sg3i8,22399 +aiohttp/web_request.py,sha256=t0RvF_yOJG6uq9-nZjmwXjfqc9Mvgpc3sXUxC67uGvw,28187 +aiohttp/web_response.py,sha256=7WTmyeXY2DrWAhr9HWuxY1SecgxiO_QwRql86PSUS-U,27471 +aiohttp/web_routedef.py,sha256=EFk3v1dcFnLimTT5z0JSBO3PShF0w9sIzfK9iJd-LNs,6152 +aiohttp/web_runner.py,sha256=EzxH4v1lntydU3W-c6iLgDu5LI7kAyD7sAmkz6W5-9g,11157 +aiohttp/web_server.py,sha256=EA1YUxv_4szhpaED1O_VVCUFHNhPUJhl2Cq7W1BK72s,2050 +aiohttp/web_urldispatcher.py,sha256=IAvlOBqCPLjasMnYtCwSqbhj-j1JTQRRHFnNvMFd4d4,39483 +aiohttp/web_ws.py,sha256=8Qmp_zH-F7t9V4NLRFwfoTBr4oWuo3cZrnYT-i-zBI0,17144 +aiohttp/worker.py,sha256=Cbx1KyVligvWa6kje0hSXhdjgJ1AORLN1qu8qJ0LzSQ,8763 diff --git a/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/WHEEL b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/WHEEL new file mode 100644 index 00000000..3a48d348 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: false +Tag: cp38-cp38-manylinux_2_17_x86_64 +Tag: cp38-cp38-manylinux2014_x86_64 + diff --git a/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/top_level.txt b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/top_level.txt new file mode 100644 index 00000000..ee4ba4f3 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp-3.8.3.dist-info/top_level.txt @@ -0,0 +1 @@ +aiohttp diff --git a/env/lib/python3.8/site-packages/aiohttp/.hash/_cparser.pxd.hash b/env/lib/python3.8/site-packages/aiohttp/.hash/_cparser.pxd.hash new file mode 100644 index 00000000..a4c37d20 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/.hash/_cparser.pxd.hash @@ -0,0 +1 @@ +e6d134d56d5f516ab2b5c3b295d0d440a3bef911f4384d506204018895a1f833 /home/runner/work/aiohttp/aiohttp/aiohttp/_cparser.pxd diff --git a/env/lib/python3.8/site-packages/aiohttp/.hash/_find_header.pxd.hash b/env/lib/python3.8/site-packages/aiohttp/.hash/_find_header.pxd.hash new file mode 100644 index 00000000..f006c2de --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/.hash/_find_header.pxd.hash @@ -0,0 +1 @@ 
+d067f01423cddb3c442933b5fcc039b18ab651fcec1bc91c577693aafc25cf78 /home/runner/work/aiohttp/aiohttp/aiohttp/_find_header.pxd diff --git a/env/lib/python3.8/site-packages/aiohttp/.hash/_helpers.pyi.hash b/env/lib/python3.8/site-packages/aiohttp/.hash/_helpers.pyi.hash new file mode 100644 index 00000000..6a30d632 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/.hash/_helpers.pyi.hash @@ -0,0 +1 @@ +6682a22524b9d4fc442e123672622be7bdfb6238d9709b7b15b2113b7ca6d52b /home/runner/work/aiohttp/aiohttp/aiohttp/_helpers.pyi diff --git a/env/lib/python3.8/site-packages/aiohttp/.hash/_helpers.pyx.hash b/env/lib/python3.8/site-packages/aiohttp/.hash/_helpers.pyx.hash new file mode 100644 index 00000000..8f38727d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/.hash/_helpers.pyx.hash @@ -0,0 +1 @@ +5de2db35fb795ffe227e2f1007c8ba4f2ad1b9aca28cc48edc80c779203cf6e3 /home/runner/work/aiohttp/aiohttp/aiohttp/_helpers.pyx diff --git a/env/lib/python3.8/site-packages/aiohttp/.hash/_http_parser.pyx.hash b/env/lib/python3.8/site-packages/aiohttp/.hash/_http_parser.pyx.hash new file mode 100644 index 00000000..7a414c4d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/.hash/_http_parser.pyx.hash @@ -0,0 +1 @@ +d6edfcfc44b0e55812a9a8f1d669c674ee87aa4d28bfd3f1785ade1eae24b683 /home/runner/work/aiohttp/aiohttp/aiohttp/_http_parser.pyx diff --git a/env/lib/python3.8/site-packages/aiohttp/.hash/_http_writer.pyx.hash b/env/lib/python3.8/site-packages/aiohttp/.hash/_http_writer.pyx.hash new file mode 100644 index 00000000..8e1aaab0 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/.hash/_http_writer.pyx.hash @@ -0,0 +1 @@ +6881c0a7c838655e646c645d99971efaf5e310bc3633a7c62b226e39d81842ac /home/runner/work/aiohttp/aiohttp/aiohttp/_http_writer.pyx diff --git a/env/lib/python3.8/site-packages/aiohttp/.hash/_websocket.pyx.hash b/env/lib/python3.8/site-packages/aiohttp/.hash/_websocket.pyx.hash new file mode 100644 index 00000000..ddbb4c7a --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/.hash/_websocket.pyx.hash @@ -0,0 +1 @@ +d57b8e48d0c26f20ebcc5e6e300da2b2a6aeb12b3c9768d64cb0e53432ccf48a /home/runner/work/aiohttp/aiohttp/aiohttp/_websocket.pyx diff --git a/env/lib/python3.8/site-packages/aiohttp/.hash/hdrs.py.hash b/env/lib/python3.8/site-packages/aiohttp/.hash/hdrs.py.hash new file mode 100644 index 00000000..1c8653f6 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/.hash/hdrs.py.hash @@ -0,0 +1 @@ +a30351c34760a1d7835b2a1b0552e463cf1d2db90da0cdb473313dc66e34a031 /home/runner/work/aiohttp/aiohttp/aiohttp/hdrs.py diff --git a/env/lib/python3.8/site-packages/aiohttp/__init__.py b/env/lib/python3.8/site-packages/aiohttp/__init__.py new file mode 100644 index 00000000..b407feca --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/__init__.py @@ -0,0 +1,216 @@ +__version__ = "3.8.3" + +from typing import Tuple + +from . 
import hdrs as hdrs +from .client import ( + BaseConnector as BaseConnector, + ClientConnectionError as ClientConnectionError, + ClientConnectorCertificateError as ClientConnectorCertificateError, + ClientConnectorError as ClientConnectorError, + ClientConnectorSSLError as ClientConnectorSSLError, + ClientError as ClientError, + ClientHttpProxyError as ClientHttpProxyError, + ClientOSError as ClientOSError, + ClientPayloadError as ClientPayloadError, + ClientProxyConnectionError as ClientProxyConnectionError, + ClientRequest as ClientRequest, + ClientResponse as ClientResponse, + ClientResponseError as ClientResponseError, + ClientSession as ClientSession, + ClientSSLError as ClientSSLError, + ClientTimeout as ClientTimeout, + ClientWebSocketResponse as ClientWebSocketResponse, + ContentTypeError as ContentTypeError, + Fingerprint as Fingerprint, + InvalidURL as InvalidURL, + NamedPipeConnector as NamedPipeConnector, + RequestInfo as RequestInfo, + ServerConnectionError as ServerConnectionError, + ServerDisconnectedError as ServerDisconnectedError, + ServerFingerprintMismatch as ServerFingerprintMismatch, + ServerTimeoutError as ServerTimeoutError, + TCPConnector as TCPConnector, + TooManyRedirects as TooManyRedirects, + UnixConnector as UnixConnector, + WSServerHandshakeError as WSServerHandshakeError, + request as request, +) +from .cookiejar import CookieJar as CookieJar, DummyCookieJar as DummyCookieJar +from .formdata import FormData as FormData +from .helpers import BasicAuth, ChainMapProxy, ETag +from .http import ( + HttpVersion as HttpVersion, + HttpVersion10 as HttpVersion10, + HttpVersion11 as HttpVersion11, + WebSocketError as WebSocketError, + WSCloseCode as WSCloseCode, + WSMessage as WSMessage, + WSMsgType as WSMsgType, +) +from .multipart import ( + BadContentDispositionHeader as BadContentDispositionHeader, + BadContentDispositionParam as BadContentDispositionParam, + BodyPartReader as BodyPartReader, + MultipartReader as MultipartReader, + MultipartWriter as MultipartWriter, + content_disposition_filename as content_disposition_filename, + parse_content_disposition as parse_content_disposition, +) +from .payload import ( + PAYLOAD_REGISTRY as PAYLOAD_REGISTRY, + AsyncIterablePayload as AsyncIterablePayload, + BufferedReaderPayload as BufferedReaderPayload, + BytesIOPayload as BytesIOPayload, + BytesPayload as BytesPayload, + IOBasePayload as IOBasePayload, + JsonPayload as JsonPayload, + Payload as Payload, + StringIOPayload as StringIOPayload, + StringPayload as StringPayload, + TextIOPayload as TextIOPayload, + get_payload as get_payload, + payload_type as payload_type, +) +from .payload_streamer import streamer as streamer +from .resolver import ( + AsyncResolver as AsyncResolver, + DefaultResolver as DefaultResolver, + ThreadedResolver as ThreadedResolver, +) +from .streams import ( + EMPTY_PAYLOAD as EMPTY_PAYLOAD, + DataQueue as DataQueue, + EofStream as EofStream, + FlowControlDataQueue as FlowControlDataQueue, + StreamReader as StreamReader, +) +from .tracing import ( + TraceConfig as TraceConfig, + TraceConnectionCreateEndParams as TraceConnectionCreateEndParams, + TraceConnectionCreateStartParams as TraceConnectionCreateStartParams, + TraceConnectionQueuedEndParams as TraceConnectionQueuedEndParams, + TraceConnectionQueuedStartParams as TraceConnectionQueuedStartParams, + TraceConnectionReuseconnParams as TraceConnectionReuseconnParams, + TraceDnsCacheHitParams as TraceDnsCacheHitParams, + TraceDnsCacheMissParams as TraceDnsCacheMissParams, + 
TraceDnsResolveHostEndParams as TraceDnsResolveHostEndParams, + TraceDnsResolveHostStartParams as TraceDnsResolveHostStartParams, + TraceRequestChunkSentParams as TraceRequestChunkSentParams, + TraceRequestEndParams as TraceRequestEndParams, + TraceRequestExceptionParams as TraceRequestExceptionParams, + TraceRequestRedirectParams as TraceRequestRedirectParams, + TraceRequestStartParams as TraceRequestStartParams, + TraceResponseChunkReceivedParams as TraceResponseChunkReceivedParams, +) + +__all__: Tuple[str, ...] = ( + "hdrs", + # client + "BaseConnector", + "ClientConnectionError", + "ClientConnectorCertificateError", + "ClientConnectorError", + "ClientConnectorSSLError", + "ClientError", + "ClientHttpProxyError", + "ClientOSError", + "ClientPayloadError", + "ClientProxyConnectionError", + "ClientResponse", + "ClientRequest", + "ClientResponseError", + "ClientSSLError", + "ClientSession", + "ClientTimeout", + "ClientWebSocketResponse", + "ContentTypeError", + "Fingerprint", + "InvalidURL", + "RequestInfo", + "ServerConnectionError", + "ServerDisconnectedError", + "ServerFingerprintMismatch", + "ServerTimeoutError", + "TCPConnector", + "TooManyRedirects", + "UnixConnector", + "NamedPipeConnector", + "WSServerHandshakeError", + "request", + # cookiejar + "CookieJar", + "DummyCookieJar", + # formdata + "FormData", + # helpers + "BasicAuth", + "ChainMapProxy", + "ETag", + # http + "HttpVersion", + "HttpVersion10", + "HttpVersion11", + "WSMsgType", + "WSCloseCode", + "WSMessage", + "WebSocketError", + # multipart + "BadContentDispositionHeader", + "BadContentDispositionParam", + "BodyPartReader", + "MultipartReader", + "MultipartWriter", + "content_disposition_filename", + "parse_content_disposition", + # payload + "AsyncIterablePayload", + "BufferedReaderPayload", + "BytesIOPayload", + "BytesPayload", + "IOBasePayload", + "JsonPayload", + "PAYLOAD_REGISTRY", + "Payload", + "StringIOPayload", + "StringPayload", + "TextIOPayload", + "get_payload", + "payload_type", + # payload_streamer + "streamer", + # resolver + "AsyncResolver", + "DefaultResolver", + "ThreadedResolver", + # streams + "DataQueue", + "EMPTY_PAYLOAD", + "EofStream", + "FlowControlDataQueue", + "StreamReader", + # tracing + "TraceConfig", + "TraceConnectionCreateEndParams", + "TraceConnectionCreateStartParams", + "TraceConnectionQueuedEndParams", + "TraceConnectionQueuedStartParams", + "TraceConnectionReuseconnParams", + "TraceDnsCacheHitParams", + "TraceDnsCacheMissParams", + "TraceDnsResolveHostEndParams", + "TraceDnsResolveHostStartParams", + "TraceRequestChunkSentParams", + "TraceRequestEndParams", + "TraceRequestExceptionParams", + "TraceRequestRedirectParams", + "TraceRequestStartParams", + "TraceResponseChunkReceivedParams", +) + +try: + from .worker import GunicornUVLoopWebWorker, GunicornWebWorker + + __all__ += ("GunicornWebWorker", "GunicornUVLoopWebWorker") +except ImportError: # pragma: no cover + pass diff --git a/env/lib/python3.8/site-packages/aiohttp/_cparser.pxd b/env/lib/python3.8/site-packages/aiohttp/_cparser.pxd new file mode 100644 index 00000000..165dd61d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/_cparser.pxd @@ -0,0 +1,190 @@ +from libc.stdint cimport ( + int8_t, + int16_t, + int32_t, + int64_t, + uint8_t, + uint16_t, + uint32_t, + uint64_t, +) + + +cdef extern from "../vendor/llhttp/build/llhttp.h": + + struct llhttp__internal_s: + int32_t _index + void* _span_pos0 + void* _span_cb0 + int32_t error + const char* reason + const char* error_pos + void* data + void* _current + uint64_t 
content_length + uint8_t type + uint8_t method + uint8_t http_major + uint8_t http_minor + uint8_t header_state + uint8_t lenient_flags + uint8_t upgrade + uint8_t finish + uint16_t flags + uint16_t status_code + void* settings + + ctypedef llhttp__internal_s llhttp__internal_t + ctypedef llhttp__internal_t llhttp_t + + ctypedef int (*llhttp_data_cb)(llhttp_t*, const char *at, size_t length) except -1 + ctypedef int (*llhttp_cb)(llhttp_t*) except -1 + + struct llhttp_settings_s: + llhttp_cb on_message_begin + llhttp_data_cb on_url + llhttp_data_cb on_status + llhttp_data_cb on_header_field + llhttp_data_cb on_header_value + llhttp_cb on_headers_complete + llhttp_data_cb on_body + llhttp_cb on_message_complete + llhttp_cb on_chunk_header + llhttp_cb on_chunk_complete + + llhttp_cb on_url_complete + llhttp_cb on_status_complete + llhttp_cb on_header_field_complete + llhttp_cb on_header_value_complete + + ctypedef llhttp_settings_s llhttp_settings_t + + enum llhttp_errno: + HPE_OK, + HPE_INTERNAL, + HPE_STRICT, + HPE_LF_EXPECTED, + HPE_UNEXPECTED_CONTENT_LENGTH, + HPE_CLOSED_CONNECTION, + HPE_INVALID_METHOD, + HPE_INVALID_URL, + HPE_INVALID_CONSTANT, + HPE_INVALID_VERSION, + HPE_INVALID_HEADER_TOKEN, + HPE_INVALID_CONTENT_LENGTH, + HPE_INVALID_CHUNK_SIZE, + HPE_INVALID_STATUS, + HPE_INVALID_EOF_STATE, + HPE_INVALID_TRANSFER_ENCODING, + HPE_CB_MESSAGE_BEGIN, + HPE_CB_HEADERS_COMPLETE, + HPE_CB_MESSAGE_COMPLETE, + HPE_CB_CHUNK_HEADER, + HPE_CB_CHUNK_COMPLETE, + HPE_PAUSED, + HPE_PAUSED_UPGRADE, + HPE_USER + + ctypedef llhttp_errno llhttp_errno_t + + enum llhttp_flags: + F_CONNECTION_KEEP_ALIVE, + F_CONNECTION_CLOSE, + F_CONNECTION_UPGRADE, + F_CHUNKED, + F_UPGRADE, + F_CONTENT_LENGTH, + F_SKIPBODY, + F_TRAILING, + F_TRANSFER_ENCODING + + enum llhttp_lenient_flags: + LENIENT_HEADERS, + LENIENT_CHUNKED_LENGTH + + enum llhttp_type: + HTTP_REQUEST, + HTTP_RESPONSE, + HTTP_BOTH + + enum llhttp_finish_t: + HTTP_FINISH_SAFE, + HTTP_FINISH_SAFE_WITH_CB, + HTTP_FINISH_UNSAFE + + enum llhttp_method: + HTTP_DELETE, + HTTP_GET, + HTTP_HEAD, + HTTP_POST, + HTTP_PUT, + HTTP_CONNECT, + HTTP_OPTIONS, + HTTP_TRACE, + HTTP_COPY, + HTTP_LOCK, + HTTP_MKCOL, + HTTP_MOVE, + HTTP_PROPFIND, + HTTP_PROPPATCH, + HTTP_SEARCH, + HTTP_UNLOCK, + HTTP_BIND, + HTTP_REBIND, + HTTP_UNBIND, + HTTP_ACL, + HTTP_REPORT, + HTTP_MKACTIVITY, + HTTP_CHECKOUT, + HTTP_MERGE, + HTTP_MSEARCH, + HTTP_NOTIFY, + HTTP_SUBSCRIBE, + HTTP_UNSUBSCRIBE, + HTTP_PATCH, + HTTP_PURGE, + HTTP_MKCALENDAR, + HTTP_LINK, + HTTP_UNLINK, + HTTP_SOURCE, + HTTP_PRI, + HTTP_DESCRIBE, + HTTP_ANNOUNCE, + HTTP_SETUP, + HTTP_PLAY, + HTTP_PAUSE, + HTTP_TEARDOWN, + HTTP_GET_PARAMETER, + HTTP_SET_PARAMETER, + HTTP_REDIRECT, + HTTP_RECORD, + HTTP_FLUSH + + ctypedef llhttp_method llhttp_method_t; + + void llhttp_settings_init(llhttp_settings_t* settings) + void llhttp_init(llhttp_t* parser, llhttp_type type, + const llhttp_settings_t* settings) + + llhttp_errno_t llhttp_execute(llhttp_t* parser, const char* data, size_t len) + llhttp_errno_t llhttp_finish(llhttp_t* parser) + + int llhttp_message_needs_eof(const llhttp_t* parser) + + int llhttp_should_keep_alive(const llhttp_t* parser) + + void llhttp_pause(llhttp_t* parser) + void llhttp_resume(llhttp_t* parser) + + void llhttp_resume_after_upgrade(llhttp_t* parser) + + llhttp_errno_t llhttp_get_errno(const llhttp_t* parser) + const char* llhttp_get_error_reason(const llhttp_t* parser) + void llhttp_set_error_reason(llhttp_t* parser, const char* reason) + const char* llhttp_get_error_pos(const llhttp_t* parser) + const 
char* llhttp_errno_name(llhttp_errno_t err) + + const char* llhttp_method_name(llhttp_method_t method) + + void llhttp_set_lenient_headers(llhttp_t* parser, int enabled) + void llhttp_set_lenient_chunked_length(llhttp_t* parser, int enabled) diff --git a/env/lib/python3.8/site-packages/aiohttp/_find_header.pxd b/env/lib/python3.8/site-packages/aiohttp/_find_header.pxd new file mode 100644 index 00000000..37a6c372 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/_find_header.pxd @@ -0,0 +1,2 @@ +cdef extern from "_find_header.h": + int find_header(char *, int) diff --git a/env/lib/python3.8/site-packages/aiohttp/_headers.pxi b/env/lib/python3.8/site-packages/aiohttp/_headers.pxi new file mode 100644 index 00000000..3744721d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/_headers.pxi @@ -0,0 +1,83 @@ +# The file is autogenerated from aiohttp/hdrs.py +# Run ./tools/gen.py to update it after the origin changing. + +from . import hdrs +cdef tuple headers = ( + hdrs.ACCEPT, + hdrs.ACCEPT_CHARSET, + hdrs.ACCEPT_ENCODING, + hdrs.ACCEPT_LANGUAGE, + hdrs.ACCEPT_RANGES, + hdrs.ACCESS_CONTROL_ALLOW_CREDENTIALS, + hdrs.ACCESS_CONTROL_ALLOW_HEADERS, + hdrs.ACCESS_CONTROL_ALLOW_METHODS, + hdrs.ACCESS_CONTROL_ALLOW_ORIGIN, + hdrs.ACCESS_CONTROL_EXPOSE_HEADERS, + hdrs.ACCESS_CONTROL_MAX_AGE, + hdrs.ACCESS_CONTROL_REQUEST_HEADERS, + hdrs.ACCESS_CONTROL_REQUEST_METHOD, + hdrs.AGE, + hdrs.ALLOW, + hdrs.AUTHORIZATION, + hdrs.CACHE_CONTROL, + hdrs.CONNECTION, + hdrs.CONTENT_DISPOSITION, + hdrs.CONTENT_ENCODING, + hdrs.CONTENT_LANGUAGE, + hdrs.CONTENT_LENGTH, + hdrs.CONTENT_LOCATION, + hdrs.CONTENT_MD5, + hdrs.CONTENT_RANGE, + hdrs.CONTENT_TRANSFER_ENCODING, + hdrs.CONTENT_TYPE, + hdrs.COOKIE, + hdrs.DATE, + hdrs.DESTINATION, + hdrs.DIGEST, + hdrs.ETAG, + hdrs.EXPECT, + hdrs.EXPIRES, + hdrs.FORWARDED, + hdrs.FROM, + hdrs.HOST, + hdrs.IF_MATCH, + hdrs.IF_MODIFIED_SINCE, + hdrs.IF_NONE_MATCH, + hdrs.IF_RANGE, + hdrs.IF_UNMODIFIED_SINCE, + hdrs.KEEP_ALIVE, + hdrs.LAST_EVENT_ID, + hdrs.LAST_MODIFIED, + hdrs.LINK, + hdrs.LOCATION, + hdrs.MAX_FORWARDS, + hdrs.ORIGIN, + hdrs.PRAGMA, + hdrs.PROXY_AUTHENTICATE, + hdrs.PROXY_AUTHORIZATION, + hdrs.RANGE, + hdrs.REFERER, + hdrs.RETRY_AFTER, + hdrs.SEC_WEBSOCKET_ACCEPT, + hdrs.SEC_WEBSOCKET_EXTENSIONS, + hdrs.SEC_WEBSOCKET_KEY, + hdrs.SEC_WEBSOCKET_KEY1, + hdrs.SEC_WEBSOCKET_PROTOCOL, + hdrs.SEC_WEBSOCKET_VERSION, + hdrs.SERVER, + hdrs.SET_COOKIE, + hdrs.TE, + hdrs.TRAILER, + hdrs.TRANSFER_ENCODING, + hdrs.URI, + hdrs.UPGRADE, + hdrs.USER_AGENT, + hdrs.VARY, + hdrs.VIA, + hdrs.WWW_AUTHENTICATE, + hdrs.WANT_DIGEST, + hdrs.WARNING, + hdrs.X_FORWARDED_FOR, + hdrs.X_FORWARDED_HOST, + hdrs.X_FORWARDED_PROTO, +) diff --git a/env/lib/python3.8/site-packages/aiohttp/_helpers.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/aiohttp/_helpers.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..ebe47526 Binary files /dev/null and b/env/lib/python3.8/site-packages/aiohttp/_helpers.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/aiohttp/_helpers.pyi b/env/lib/python3.8/site-packages/aiohttp/_helpers.pyi new file mode 100644 index 00000000..1e358937 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/_helpers.pyi @@ -0,0 +1,6 @@ +from typing import Any + +class reify: + def __init__(self, wrapped: Any) -> None: ... + def __get__(self, inst: Any, owner: Any) -> Any: ... + def __set__(self, inst: Any, value: Any) -> None: ... 
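The `reify` stub above describes a caching descriptor: the first attribute access runs the wrapped method and stores the result in the instance's `_cache` dict, so later accesses return the cached value without recomputing. A minimal usage sketch (the `Example` class and its `_cache` layout are illustrative only, not part of aiohttp; the import resolves to the Cython implementation below when the extension is available, else the pure-Python fallback):

    from aiohttp.helpers import reify

    class Example:
        def __init__(self) -> None:
            self._cache = {}  # reify stores computed values here, keyed by method name

        @reify
        def answer(self):
            print("computed once")
            return 42

    e = Example()
    e.answer  # prints "computed once", stores 42 in e._cache["answer"]
    e.answer  # returns 42 straight from the cache, no recomputation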
diff --git a/env/lib/python3.8/site-packages/aiohttp/_helpers.pyx b/env/lib/python3.8/site-packages/aiohttp/_helpers.pyx new file mode 100644 index 00000000..665f367c --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/_helpers.pyx @@ -0,0 +1,35 @@ +cdef class reify: + """Use as a class method decorator. It operates almost exactly like + the Python `@property` decorator, but it puts the result of the + method it decorates into the instance dict after the first call, + effectively replacing the function it decorates with an instance + variable. It is, in Python parlance, a data descriptor. + + """ + + cdef object wrapped + cdef object name + + def __init__(self, wrapped): + self.wrapped = wrapped + self.name = wrapped.__name__ + + @property + def __doc__(self): + return self.wrapped.__doc__ + + def __get__(self, inst, owner): + try: + try: + return inst._cache[self.name] + except KeyError: + val = self.wrapped(inst) + inst._cache[self.name] = val + return val + except AttributeError: + if inst is None: + return self + raise + + def __set__(self, inst, value): + raise AttributeError("reified property is read-only") diff --git a/env/lib/python3.8/site-packages/aiohttp/_http_parser.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/aiohttp/_http_parser.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..c9f39308 Binary files /dev/null and b/env/lib/python3.8/site-packages/aiohttp/_http_parser.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/aiohttp/_http_parser.pyx b/env/lib/python3.8/site-packages/aiohttp/_http_parser.pyx new file mode 100644 index 00000000..bebd9894 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/_http_parser.pyx @@ -0,0 +1,832 @@ +#cython: language_level=3 +# +# Based on https://github.com/MagicStack/httptools +# +from __future__ import absolute_import, print_function + +from cpython cimport ( + Py_buffer, + PyBUF_SIMPLE, + PyBuffer_Release, + PyBytes_AsString, + PyBytes_AsStringAndSize, + PyObject_GetBuffer, +) +from cpython.mem cimport PyMem_Free, PyMem_Malloc +from libc.limits cimport ULLONG_MAX +from libc.string cimport memcpy + +from multidict import CIMultiDict as _CIMultiDict, CIMultiDictProxy as _CIMultiDictProxy +from yarl import URL as _URL + +from aiohttp import hdrs + +from .http_exceptions import ( + BadHttpMessage, + BadStatusLine, + ContentLengthError, + InvalidHeader, + InvalidURLError, + LineTooLong, + PayloadEncodingError, + TransferEncodingError, +) +from .http_parser import DeflateBuffer as _DeflateBuffer +from .http_writer import ( + HttpVersion as _HttpVersion, + HttpVersion10 as _HttpVersion10, + HttpVersion11 as _HttpVersion11, +) +from .streams import EMPTY_PAYLOAD as _EMPTY_PAYLOAD, StreamReader as _StreamReader + +cimport cython + +from aiohttp cimport _cparser as cparser + +include "_headers.pxi" + +from aiohttp cimport _find_header + +DEF DEFAULT_FREELIST_SIZE = 250 + +cdef extern from "Python.h": + int PyByteArray_Resize(object, Py_ssize_t) except -1 + Py_ssize_t PyByteArray_Size(object) except -1 + char* PyByteArray_AsString(object) + +__all__ = ('HttpRequestParser', 'HttpResponseParser', + 'RawRequestMessage', 'RawResponseMessage') + +cdef object URL = _URL +cdef object URL_build = URL.build +cdef object CIMultiDict = _CIMultiDict +cdef object CIMultiDictProxy = _CIMultiDictProxy +cdef object HttpVersion = _HttpVersion +cdef object HttpVersion10 = _HttpVersion10 +cdef object HttpVersion11 = _HttpVersion11 +cdef object SEC_WEBSOCKET_KEY1 = 
hdrs.SEC_WEBSOCKET_KEY1
+cdef object CONTENT_ENCODING = hdrs.CONTENT_ENCODING
+cdef object EMPTY_PAYLOAD = _EMPTY_PAYLOAD
+cdef object StreamReader = _StreamReader
+cdef object DeflateBuffer = _DeflateBuffer
+
+
+cdef inline object extend(object buf, const char* at, size_t length):
+    cdef Py_ssize_t s
+    cdef char* ptr
+    s = PyByteArray_Size(buf)
+    PyByteArray_Resize(buf, s + length)
+    ptr = PyByteArray_AsString(buf)
+    memcpy(ptr + s, at, length)
+
+
+DEF METHODS_COUNT = 46;
+
+cdef list _http_method = []
+
+for i in range(METHODS_COUNT):
+    _http_method.append(
+        cparser.llhttp_method_name(<cparser.llhttp_method_t> i).decode('ascii'))
+
+
+cdef inline str http_method_str(int i):
+    if i < METHODS_COUNT:
+        return _http_method[i]
+    else:
+        return "<unknown>"
+
+cdef inline object find_header(bytes raw_header):
+    cdef Py_ssize_t size
+    cdef char *buf
+    cdef int idx
+    PyBytes_AsStringAndSize(raw_header, &buf, &size)
+    idx = _find_header.find_header(buf, size)
+    if idx == -1:
+        return raw_header.decode('utf-8', 'surrogateescape')
+    return headers[idx]
+
+
+@cython.freelist(DEFAULT_FREELIST_SIZE)
+cdef class RawRequestMessage:
+    cdef readonly str method
+    cdef readonly str path
+    cdef readonly object version  # HttpVersion
+    cdef readonly object headers  # CIMultiDict
+    cdef readonly object raw_headers  # tuple
+    cdef readonly object should_close
+    cdef readonly object compression
+    cdef readonly object upgrade
+    cdef readonly object chunked
+    cdef readonly object url  # yarl.URL
+
+    def __init__(self, method, path, version, headers, raw_headers,
+                 should_close, compression, upgrade, chunked, url):
+        self.method = method
+        self.path = path
+        self.version = version
+        self.headers = headers
+        self.raw_headers = raw_headers
+        self.should_close = should_close
+        self.compression = compression
+        self.upgrade = upgrade
+        self.chunked = chunked
+        self.url = url
+
+    def __repr__(self):
+        info = []
+        info.append(("method", self.method))
+        info.append(("path", self.path))
+        info.append(("version", self.version))
+        info.append(("headers", self.headers))
+        info.append(("raw_headers", self.raw_headers))
+        info.append(("should_close", self.should_close))
+        info.append(("compression", self.compression))
+        info.append(("upgrade", self.upgrade))
+        info.append(("chunked", self.chunked))
+        info.append(("url", self.url))
+        sinfo = ', '.join(name + '=' + repr(val) for name, val in info)
+        return '<RawRequestMessage(' + sinfo + ')>'
+
+    def _replace(self, **dct):
+        cdef RawRequestMessage ret
+        ret = _new_request_message(self.method,
+                                   self.path,
+                                   self.version,
+                                   self.headers,
+                                   self.raw_headers,
+                                   self.should_close,
+                                   self.compression,
+                                   self.upgrade,
+                                   self.chunked,
+                                   self.url)
+        if "method" in dct:
+            ret.method = dct["method"]
+        if "path" in dct:
+            ret.path = dct["path"]
+        if "version" in dct:
+            ret.version = dct["version"]
+        if "headers" in dct:
+            ret.headers = dct["headers"]
+        if "raw_headers" in dct:
+            ret.raw_headers = dct["raw_headers"]
+        if "should_close" in dct:
+            ret.should_close = dct["should_close"]
+        if "compression" in dct:
+            ret.compression = dct["compression"]
+        if "upgrade" in dct:
+            ret.upgrade = dct["upgrade"]
+        if "chunked" in dct:
+            ret.chunked = dct["chunked"]
+        if "url" in dct:
+            ret.url = dct["url"]
+        return ret
+
+cdef _new_request_message(str method,
+                          str path,
+                          object version,
+                          object headers,
+                          object raw_headers,
+                          bint should_close,
+                          object compression,
+                          bint upgrade,
+                          bint chunked,
+                          object url):
+    cdef RawRequestMessage ret
+    ret = RawRequestMessage.__new__(RawRequestMessage)
+    ret.method = method
+    ret.path = path
+    ret.version = version
+    ret.headers = headers
+    ret.raw_headers = raw_headers
+    ret.should_close = should_close
+    ret.compression = compression
+    ret.upgrade = upgrade
+    ret.chunked = chunked
+    ret.url = url
+    return ret
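+
+# RawResponseMessage below mirrors RawRequestMessage; both rely on the
+# _new_*_message factories, which build instances via __new__ (skipping
+# Python-level __init__ argument handling) on the hot parsing path, while
+# @cython.freelist recycles freed instances.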
+
+
+@cython.freelist(DEFAULT_FREELIST_SIZE)
+cdef class RawResponseMessage:
+    cdef readonly object version  # HttpVersion
+    cdef readonly int code
+    cdef readonly str reason
+    cdef readonly object headers  # CIMultiDict
+    cdef readonly object raw_headers  # tuple
+    cdef readonly object should_close
+    cdef readonly object compression
+    cdef readonly object upgrade
+    cdef readonly object chunked
+
+    def __init__(self, version, code, reason, headers, raw_headers,
+                 should_close, compression, upgrade, chunked):
+        self.version = version
+        self.code = code
+        self.reason = reason
+        self.headers = headers
+        self.raw_headers = raw_headers
+        self.should_close = should_close
+        self.compression = compression
+        self.upgrade = upgrade
+        self.chunked = chunked
+
+    def __repr__(self):
+        info = []
+        info.append(("version", self.version))
+        info.append(("code", self.code))
+        info.append(("reason", self.reason))
+        info.append(("headers", self.headers))
+        info.append(("raw_headers", self.raw_headers))
+        info.append(("should_close", self.should_close))
+        info.append(("compression", self.compression))
+        info.append(("upgrade", self.upgrade))
+        info.append(("chunked", self.chunked))
+        sinfo = ', '.join(name + '=' + repr(val) for name, val in info)
+        return '<RawResponseMessage(' + sinfo + ')>'
+
+
+cdef _new_response_message(object version,
+                           int code,
+                           str reason,
+                           object headers,
+                           object raw_headers,
+                           bint should_close,
+                           object compression,
+                           bint upgrade,
+                           bint chunked):
+    cdef RawResponseMessage ret
+    ret = RawResponseMessage.__new__(RawResponseMessage)
+    ret.version = version
+    ret.code = code
+    ret.reason = reason
+    ret.headers = headers
+    ret.raw_headers = raw_headers
+    ret.should_close = should_close
+    ret.compression = compression
+    ret.upgrade = upgrade
+    ret.chunked = chunked
+    return ret
+
+
+@cython.internal
+cdef class HttpParser:
+
+    cdef:
+        cparser.llhttp_t* _cparser
+        cparser.llhttp_settings_t* _csettings
+
+        bytearray _raw_name
+        bytearray _raw_value
+        bint _has_value
+
+        object _protocol
+        object _loop
+        object _timer
+
+        size_t _max_line_size
+        size_t _max_field_size
+        size_t _max_headers
+        bint _response_with_body
+        bint _read_until_eof
+
+        bint _started
+        object _url
+        bytearray _buf
+        str _path
+        str _reason
+        object _headers
+        list _raw_headers
+        bint _upgraded
+        list _messages
+        object _payload
+        bint _payload_error
+        object _payload_exception
+        object _last_error
+        bint _auto_decompress
+        int _limit
+
+        str _content_encoding
+
+        Py_buffer py_buf
+
+    def __cinit__(self):
+        self._cparser = <cparser.llhttp_t*> \
+            PyMem_Malloc(sizeof(cparser.llhttp_t))
+        if self._cparser is NULL:
+            raise MemoryError()
+
+        self._csettings = <cparser.llhttp_settings_t*> \
+            PyMem_Malloc(sizeof(cparser.llhttp_settings_t))
+        if self._csettings is NULL:
+            raise MemoryError()
+
+    def __dealloc__(self):
+        PyMem_Free(self._cparser)
+        PyMem_Free(self._csettings)
+
+    cdef _init(
+        self, cparser.llhttp_type mode,
+        object protocol, object loop, int limit,
+        object timer=None,
+        size_t max_line_size=8190, size_t max_headers=32768,
+        size_t max_field_size=8190, payload_exception=None,
+        bint response_with_body=True, bint read_until_eof=False,
+        bint auto_decompress=True,
+    ):
+        cparser.llhttp_settings_init(self._csettings)
+        cparser.llhttp_init(self._cparser, mode, self._csettings)
+        self._cparser.data = <void*>self
+        self._cparser.content_length = 0
+
+        self._protocol = protocol
+        self._loop = loop
+        self._timer = timer
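+        # The buffers initialised below carry the message currently being
+        # parsed; cb_on_message_begin resets them for each new message.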
+ self._buf = bytearray() + self._payload = None + self._payload_error = 0 + self._payload_exception = payload_exception + self._messages = [] + + self._raw_name = bytearray() + self._raw_value = bytearray() + self._has_value = False + + self._max_line_size = max_line_size + self._max_headers = max_headers + self._max_field_size = max_field_size + self._response_with_body = response_with_body + self._read_until_eof = read_until_eof + self._upgraded = False + self._auto_decompress = auto_decompress + self._content_encoding = None + + self._csettings.on_url = cb_on_url + self._csettings.on_status = cb_on_status + self._csettings.on_header_field = cb_on_header_field + self._csettings.on_header_value = cb_on_header_value + self._csettings.on_headers_complete = cb_on_headers_complete + self._csettings.on_body = cb_on_body + self._csettings.on_message_begin = cb_on_message_begin + self._csettings.on_message_complete = cb_on_message_complete + self._csettings.on_chunk_header = cb_on_chunk_header + self._csettings.on_chunk_complete = cb_on_chunk_complete + + self._last_error = None + self._limit = limit + + cdef _process_header(self): + if self._raw_name: + raw_name = bytes(self._raw_name) + raw_value = bytes(self._raw_value) + + name = find_header(raw_name) + value = raw_value.decode('utf-8', 'surrogateescape') + + self._headers.add(name, value) + + if name is CONTENT_ENCODING: + self._content_encoding = value + + PyByteArray_Resize(self._raw_name, 0) + PyByteArray_Resize(self._raw_value, 0) + self._has_value = False + self._raw_headers.append((raw_name, raw_value)) + + cdef _on_header_field(self, char* at, size_t length): + cdef Py_ssize_t size + cdef char *buf + if self._has_value: + self._process_header() + + size = PyByteArray_Size(self._raw_name) + PyByteArray_Resize(self._raw_name, size + length) + buf = PyByteArray_AsString(self._raw_name) + memcpy(buf + size, at, length) + + cdef _on_header_value(self, char* at, size_t length): + cdef Py_ssize_t size + cdef char *buf + + size = PyByteArray_Size(self._raw_value) + PyByteArray_Resize(self._raw_value, size + length) + buf = PyByteArray_AsString(self._raw_value) + memcpy(buf + size, at, length) + self._has_value = True + + cdef _on_headers_complete(self): + self._process_header() + + method = http_method_str(self._cparser.method) + should_close = not cparser.llhttp_should_keep_alive(self._cparser) + upgrade = self._cparser.upgrade + chunked = self._cparser.flags & cparser.F_CHUNKED + + raw_headers = tuple(self._raw_headers) + headers = CIMultiDictProxy(self._headers) + + if upgrade or self._cparser.method == cparser.HTTP_CONNECT: + self._upgraded = True + + # do not support old websocket spec + if SEC_WEBSOCKET_KEY1 in headers: + raise InvalidHeader(SEC_WEBSOCKET_KEY1) + + encoding = None + enc = self._content_encoding + if enc is not None: + self._content_encoding = None + enc = enc.lower() + if enc in ('gzip', 'deflate', 'br'): + encoding = enc + + if self._cparser.type == cparser.HTTP_REQUEST: + msg = _new_request_message( + method, self._path, + self.http_version(), headers, raw_headers, + should_close, encoding, upgrade, chunked, self._url) + else: + msg = _new_response_message( + self.http_version(), self._cparser.status_code, self._reason, + headers, raw_headers, should_close, encoding, + upgrade, chunked) + + if ( + ULLONG_MAX > self._cparser.content_length > 0 or chunked or + self._cparser.method == cparser.HTTP_CONNECT or + (self._cparser.status_code >= 199 and + self._cparser.content_length == 0 and + self._read_until_eof) + ): + 
payload = StreamReader( + self._protocol, timer=self._timer, loop=self._loop, + limit=self._limit) + else: + payload = EMPTY_PAYLOAD + + self._payload = payload + if encoding is not None and self._auto_decompress: + self._payload = DeflateBuffer(payload, encoding) + + if not self._response_with_body: + payload = EMPTY_PAYLOAD + + self._messages.append((msg, payload)) + + cdef _on_message_complete(self): + self._payload.feed_eof() + self._payload = None + + cdef _on_chunk_header(self): + self._payload.begin_http_chunk_receiving() + + cdef _on_chunk_complete(self): + self._payload.end_http_chunk_receiving() + + cdef object _on_status_complete(self): + pass + + cdef inline http_version(self): + cdef cparser.llhttp_t* parser = self._cparser + + if parser.http_major == 1: + if parser.http_minor == 0: + return HttpVersion10 + elif parser.http_minor == 1: + return HttpVersion11 + + return HttpVersion(parser.http_major, parser.http_minor) + + ### Public API ### + + def feed_eof(self): + cdef bytes desc + + if self._payload is not None: + if self._cparser.flags & cparser.F_CHUNKED: + raise TransferEncodingError( + "Not enough data for satisfy transfer length header.") + elif self._cparser.flags & cparser.F_CONTENT_LENGTH: + raise ContentLengthError( + "Not enough data for satisfy content length header.") + elif cparser.llhttp_get_errno(self._cparser) != cparser.HPE_OK: + desc = cparser.llhttp_get_error_reason(self._cparser) + raise PayloadEncodingError(desc.decode('latin-1')) + else: + self._payload.feed_eof() + elif self._started: + self._on_headers_complete() + if self._messages: + return self._messages[-1][0] + + def feed_data(self, data): + cdef: + size_t data_len + size_t nb + cdef cparser.llhttp_errno_t errno + + PyObject_GetBuffer(data, &self.py_buf, PyBUF_SIMPLE) + data_len = self.py_buf.len + + errno = cparser.llhttp_execute( + self._cparser, + self.py_buf.buf, + data_len) + + if errno is cparser.HPE_PAUSED_UPGRADE: + cparser.llhttp_resume_after_upgrade(self._cparser) + + nb = cparser.llhttp_get_error_pos(self._cparser) - self.py_buf.buf + + PyBuffer_Release(&self.py_buf) + + if errno not in (cparser.HPE_OK, cparser.HPE_PAUSED_UPGRADE): + if self._payload_error == 0: + if self._last_error is not None: + ex = self._last_error + self._last_error = None + else: + ex = parser_error_from_errno(self._cparser) + self._payload = None + raise ex + + if self._messages: + messages = self._messages + self._messages = [] + else: + messages = () + + if self._upgraded: + return messages, True, data[nb:] + else: + return messages, False, b'' + + def set_upgraded(self, val): + self._upgraded = val + + +cdef class HttpRequestParser(HttpParser): + + def __init__( + self, protocol, loop, int limit, timer=None, + size_t max_line_size=8190, size_t max_headers=32768, + size_t max_field_size=8190, payload_exception=None, + bint response_with_body=True, bint read_until_eof=False, + bint auto_decompress=True, + ): + self._init(cparser.HTTP_REQUEST, protocol, loop, limit, timer, + max_line_size, max_headers, max_field_size, + payload_exception, response_with_body, read_until_eof, + auto_decompress) + + cdef object _on_status_complete(self): + cdef int idx1, idx2 + if not self._buf: + return + self._path = self._buf.decode('utf-8', 'surrogateescape') + try: + idx3 = len(self._path) + if self._cparser.method == cparser.HTTP_CONNECT: + # authority-form, + # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.3 + self._url = URL.build(authority=self._path, encoded=True) + elif idx3 > 1 and self._path[0] == '/': + 
# origin-form, + # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.1 + idx1 = self._path.find("?") + if idx1 == -1: + query = "" + idx2 = self._path.find("#") + if idx2 == -1: + path = self._path + fragment = "" + else: + path = self._path[0: idx2] + fragment = self._path[idx2+1:] + + else: + path = self._path[0:idx1] + idx1 += 1 + idx2 = self._path.find("#", idx1+1) + if idx2 == -1: + query = self._path[idx1:] + fragment = "" + else: + query = self._path[idx1: idx2] + fragment = self._path[idx2+1:] + + self._url = URL.build( + path=path, + query_string=query, + fragment=fragment, + encoded=True, + ) + else: + # absolute-form for proxy maybe, + # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.2 + self._url = URL(self._path, encoded=True) + finally: + PyByteArray_Resize(self._buf, 0) + + +cdef class HttpResponseParser(HttpParser): + + def __init__( + self, protocol, loop, int limit, timer=None, + size_t max_line_size=8190, size_t max_headers=32768, + size_t max_field_size=8190, payload_exception=None, + bint response_with_body=True, bint read_until_eof=False, + bint auto_decompress=True + ): + self._init(cparser.HTTP_RESPONSE, protocol, loop, limit, timer, + max_line_size, max_headers, max_field_size, + payload_exception, response_with_body, read_until_eof, + auto_decompress) + + cdef object _on_status_complete(self): + if self._buf: + self._reason = self._buf.decode('utf-8', 'surrogateescape') + PyByteArray_Resize(self._buf, 0) + else: + self._reason = self._reason or '' + +cdef int cb_on_message_begin(cparser.llhttp_t* parser) except -1: + cdef HttpParser pyparser = parser.data + + pyparser._started = True + pyparser._headers = CIMultiDict() + pyparser._raw_headers = [] + PyByteArray_Resize(pyparser._buf, 0) + pyparser._path = None + pyparser._reason = None + return 0 + + +cdef int cb_on_url(cparser.llhttp_t* parser, + const char *at, size_t length) except -1: + cdef HttpParser pyparser = parser.data + try: + if length > pyparser._max_line_size: + raise LineTooLong( + 'Status line is too long', pyparser._max_line_size, length) + extend(pyparser._buf, at, length) + except BaseException as ex: + pyparser._last_error = ex + return -1 + else: + return 0 + + +cdef int cb_on_status(cparser.llhttp_t* parser, + const char *at, size_t length) except -1: + cdef HttpParser pyparser = parser.data + cdef str reason + try: + if length > pyparser._max_line_size: + raise LineTooLong( + 'Status line is too long', pyparser._max_line_size, length) + extend(pyparser._buf, at, length) + except BaseException as ex: + pyparser._last_error = ex + return -1 + else: + return 0 + + +cdef int cb_on_header_field(cparser.llhttp_t* parser, + const char *at, size_t length) except -1: + cdef HttpParser pyparser = parser.data + cdef Py_ssize_t size + try: + pyparser._on_status_complete() + size = len(pyparser._raw_name) + length + if size > pyparser._max_field_size: + raise LineTooLong( + 'Header name is too long', pyparser._max_field_size, size) + pyparser._on_header_field(at, length) + except BaseException as ex: + pyparser._last_error = ex + return -1 + else: + return 0 + + +cdef int cb_on_header_value(cparser.llhttp_t* parser, + const char *at, size_t length) except -1: + cdef HttpParser pyparser = parser.data + cdef Py_ssize_t size + try: + size = len(pyparser._raw_value) + length + if size > pyparser._max_field_size: + raise LineTooLong( + 'Header value is too long', pyparser._max_field_size, size) + pyparser._on_header_value(at, length) + except BaseException as ex: + pyparser._last_error = ex 
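+        # Returning -1 aborts llhttp; feed_data() then re-raises the exception
+        # saved in _last_error instead of a generic parser error.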
+ return -1 + else: + return 0 + + +cdef int cb_on_headers_complete(cparser.llhttp_t* parser) except -1: + cdef HttpParser pyparser = parser.data + try: + pyparser._on_status_complete() + pyparser._on_headers_complete() + except BaseException as exc: + pyparser._last_error = exc + return -1 + else: + if ( + pyparser._cparser.upgrade or + pyparser._cparser.method == cparser.HTTP_CONNECT + ): + return 2 + else: + return 0 + + +cdef int cb_on_body(cparser.llhttp_t* parser, + const char *at, size_t length) except -1: + cdef HttpParser pyparser = parser.data + cdef bytes body = at[:length] + try: + pyparser._payload.feed_data(body, length) + except BaseException as exc: + if pyparser._payload_exception is not None: + pyparser._payload.set_exception(pyparser._payload_exception(str(exc))) + else: + pyparser._payload.set_exception(exc) + pyparser._payload_error = 1 + return -1 + else: + return 0 + + +cdef int cb_on_message_complete(cparser.llhttp_t* parser) except -1: + cdef HttpParser pyparser = parser.data + try: + pyparser._started = False + pyparser._on_message_complete() + except BaseException as exc: + pyparser._last_error = exc + return -1 + else: + return 0 + + +cdef int cb_on_chunk_header(cparser.llhttp_t* parser) except -1: + cdef HttpParser pyparser = parser.data + try: + pyparser._on_chunk_header() + except BaseException as exc: + pyparser._last_error = exc + return -1 + else: + return 0 + + +cdef int cb_on_chunk_complete(cparser.llhttp_t* parser) except -1: + cdef HttpParser pyparser = parser.data + try: + pyparser._on_chunk_complete() + except BaseException as exc: + pyparser._last_error = exc + return -1 + else: + return 0 + + +cdef parser_error_from_errno(cparser.llhttp_t* parser): + cdef cparser.llhttp_errno_t errno = cparser.llhttp_get_errno(parser) + cdef bytes desc = cparser.llhttp_get_error_reason(parser) + + if errno in (cparser.HPE_CB_MESSAGE_BEGIN, + cparser.HPE_CB_HEADERS_COMPLETE, + cparser.HPE_CB_MESSAGE_COMPLETE, + cparser.HPE_CB_CHUNK_HEADER, + cparser.HPE_CB_CHUNK_COMPLETE, + cparser.HPE_INVALID_CONSTANT, + cparser.HPE_INVALID_HEADER_TOKEN, + cparser.HPE_INVALID_CONTENT_LENGTH, + cparser.HPE_INVALID_CHUNK_SIZE, + cparser.HPE_INVALID_EOF_STATE, + cparser.HPE_INVALID_TRANSFER_ENCODING): + cls = BadHttpMessage + + elif errno == cparser.HPE_INVALID_STATUS: + cls = BadStatusLine + + elif errno == cparser.HPE_INVALID_METHOD: + cls = BadStatusLine + + elif errno == cparser.HPE_INVALID_VERSION: + cls = BadStatusLine + + elif errno == cparser.HPE_INVALID_URL: + cls = InvalidURLError + + else: + cls = BadHttpMessage + + return cls(desc.decode('latin-1')) diff --git a/env/lib/python3.8/site-packages/aiohttp/_http_writer.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/aiohttp/_http_writer.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..1721723b Binary files /dev/null and b/env/lib/python3.8/site-packages/aiohttp/_http_writer.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/aiohttp/_http_writer.pyx b/env/lib/python3.8/site-packages/aiohttp/_http_writer.pyx new file mode 100644 index 00000000..eff85219 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/_http_writer.pyx @@ -0,0 +1,163 @@ +from cpython.bytes cimport PyBytes_FromStringAndSize +from cpython.exc cimport PyErr_NoMemory +from cpython.mem cimport PyMem_Free, PyMem_Malloc, PyMem_Realloc +from cpython.object cimport PyObject_Str +from libc.stdint cimport uint8_t, uint64_t +from libc.string cimport memcpy + +from multidict import istr + +DEF 
BUF_SIZE = 16 * 1024  # 16KiB
+cdef char BUFFER[BUF_SIZE]
+
+cdef object _istr = istr
+
+
+# ----------------- writer ---------------------------
+
+cdef struct Writer:
+    char *buf
+    Py_ssize_t size
+    Py_ssize_t pos
+
+
+cdef inline void _init_writer(Writer* writer):
+    writer.buf = &BUFFER[0]
+    writer.size = BUF_SIZE
+    writer.pos = 0
+
+
+cdef inline void _release_writer(Writer* writer):
+    if writer.buf != BUFFER:
+        PyMem_Free(writer.buf)
+
+
+cdef inline int _write_byte(Writer* writer, uint8_t ch):
+    cdef char * buf
+    cdef Py_ssize_t size
+
+    if writer.pos == writer.size:
+        # reallocate
+        size = writer.size + BUF_SIZE
+        if writer.buf == BUFFER:
+            buf = <char*>PyMem_Malloc(size)
+            if buf == NULL:
+                PyErr_NoMemory()
+                return -1
+            memcpy(buf, writer.buf, writer.size)
+        else:
+            buf = <char*>PyMem_Realloc(writer.buf, size)
+            if buf == NULL:
+                PyErr_NoMemory()
+                return -1
+        writer.buf = buf
+        writer.size = size
+    writer.buf[writer.pos] = <char>ch
+    writer.pos += 1
+    return 0
+
+
+cdef inline int _write_utf8(Writer* writer, Py_UCS4 symbol):
+    cdef uint64_t utf = <uint64_t>symbol
+
+    if utf < 0x80:
+        return _write_byte(writer, <uint8_t>utf)
+    elif utf < 0x800:
+        if _write_byte(writer, <uint8_t>(0xc0 | (utf >> 6))) < 0:
+            return -1
+        return _write_byte(writer, <uint8_t>(0x80 | (utf & 0x3f)))
+    elif 0xD800 <= utf <= 0xDFFF:
+        # surrogate pair, ignored
+        return 0
+    elif utf < 0x10000:
+        if _write_byte(writer, <uint8_t>(0xe0 | (utf >> 12))) < 0:
+            return -1
+        if _write_byte(writer, <uint8_t>(0x80 | ((utf >> 6) & 0x3f))) < 0:
+            return -1
+        return _write_byte(writer, <uint8_t>(0x80 | (utf & 0x3f)))
+    elif utf > 0x10FFFF:
+        # symbol is too large
+        return 0
+    else:
+        if _write_byte(writer, <uint8_t>(0xf0 | (utf >> 18))) < 0:
+            return -1
+        if _write_byte(writer,
+                       <uint8_t>(0x80 | ((utf >> 12) & 0x3f))) < 0:
+            return -1
+        if _write_byte(writer,
+                       <uint8_t>(0x80 | ((utf >> 6) & 0x3f))) < 0:
+            return -1
+        return _write_byte(writer, <uint8_t>(0x80 | (utf & 0x3f)))
+
+
+cdef inline int _write_str(Writer* writer, str s):
+    cdef Py_UCS4 ch
+    for ch in s:
+        if _write_utf8(writer, ch) < 0:
+            return -1
+
+
+# --------------- _serialize_headers ----------------------
+
+cdef str to_str(object s):
+    typ = type(s)
+    if typ is str:
+        return <str>s
+    elif typ is _istr:
+        return PyObject_Str(s)
+    elif not isinstance(s, str):
+        raise TypeError("Cannot serialize non-str key {!r}".format(s))
+    else:
+        return str(s)
+
+
+cdef void _safe_header(str string) except *:
+    if "\r" in string or "\n" in string:
+        raise ValueError(
+            "Newline or carriage return character detected in HTTP status message or "
+            "header. This is a potential security issue."
+        )
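+
+# _serialize_headers below runs every key and value through _safe_header
+# before writing anything, so CRLF header injection fails fast.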
+
+
+def _serialize_headers(str status_line, headers):
+    cdef Writer writer
+    cdef object key
+    cdef object val
+    cdef bytes ret
+
+    _init_writer(&writer)
+
+    for key, val in headers.items():
+        _safe_header(to_str(key))
+        _safe_header(to_str(val))
+
+    try:
+        if _write_str(&writer, status_line) < 0:
+            raise
+        if _write_byte(&writer, b'\r') < 0:
+            raise
+        if _write_byte(&writer, b'\n') < 0:
+            raise
+
+        for key, val in headers.items():
+            if _write_str(&writer, to_str(key)) < 0:
+                raise
+            if _write_byte(&writer, b':') < 0:
+                raise
+            if _write_byte(&writer, b' ') < 0:
+                raise
+            if _write_str(&writer, to_str(val)) < 0:
+                raise
+            if _write_byte(&writer, b'\r') < 0:
+                raise
+            if _write_byte(&writer, b'\n') < 0:
+                raise
+
+        if _write_byte(&writer, b'\r') < 0:
+            raise
+        if _write_byte(&writer, b'\n') < 0:
+            raise
+
+        return PyBytes_FromStringAndSize(writer.buf, writer.pos)
+    finally:
+        _release_writer(&writer)
diff --git a/env/lib/python3.8/site-packages/aiohttp/_websocket.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/aiohttp/_websocket.cpython-38-x86_64-linux-gnu.so
new file mode 100644
index 00000000..75ce7dbd
Binary files /dev/null and b/env/lib/python3.8/site-packages/aiohttp/_websocket.cpython-38-x86_64-linux-gnu.so differ
diff --git a/env/lib/python3.8/site-packages/aiohttp/_websocket.pyx b/env/lib/python3.8/site-packages/aiohttp/_websocket.pyx
new file mode 100644
index 00000000..94318d2b
--- /dev/null
+++ b/env/lib/python3.8/site-packages/aiohttp/_websocket.pyx
@@ -0,0 +1,56 @@
+from cpython cimport PyBytes_AsString
+
+
+#from cpython cimport PyByteArray_AsString # cython still not exports that
+cdef extern from "Python.h":
+    char* PyByteArray_AsString(bytearray ba) except NULL
+
+from libc.stdint cimport uint32_t, uint64_t, uintmax_t
+
+
+def _websocket_mask_cython(object mask, object data):
+    """Note, this function mutates its `data` argument
+    """
+    cdef:
+        Py_ssize_t data_len, i
+        # bit operations on signed integers are implementation-specific
+        unsigned char * in_buf
+        const unsigned char * mask_buf
+        uint32_t uint32_msk
+        uint64_t uint64_msk
+
+    assert len(mask) == 4
+
+    if not isinstance(mask, bytes):
+        mask = bytes(mask)
+
+    if isinstance(data, bytearray):
+        data = <bytearray>data
+    else:
+        data = bytearray(data)
+
+    data_len = len(data)
+    in_buf = <unsigned char*>PyByteArray_AsString(data)
+    mask_buf = <const unsigned char*>PyBytes_AsString(mask)
+    uint32_msk = (<uint32_t*>mask_buf)[0]
+
+    # TODO: align in_data ptr to achieve even faster speeds
+    # does it need in python ?!
malloc() always aligns to sizeof(long) bytes + + if sizeof(size_t) >= 8: + uint64_msk = uint32_msk + uint64_msk = (uint64_msk << 32) | uint32_msk + + while data_len >= 8: + (in_buf)[0] ^= uint64_msk + in_buf += 8 + data_len -= 8 + + + while data_len >= 4: + (in_buf)[0] ^= uint32_msk + in_buf += 4 + data_len -= 4 + + for i in range(0, data_len): + in_buf[i] ^= mask_buf[i] diff --git a/env/lib/python3.8/site-packages/aiohttp/abc.py b/env/lib/python3.8/site-packages/aiohttp/abc.py new file mode 100644 index 00000000..44a3bda3 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/abc.py @@ -0,0 +1,207 @@ +import asyncio +import logging +from abc import ABC, abstractmethod +from collections.abc import Sized +from http.cookies import BaseCookie, Morsel +from typing import ( + TYPE_CHECKING, + Any, + Awaitable, + Callable, + Dict, + Generator, + Iterable, + List, + Optional, + Tuple, +) + +from multidict import CIMultiDict +from yarl import URL + +from .helpers import get_running_loop +from .typedefs import LooseCookies + +if TYPE_CHECKING: # pragma: no cover + from .web_app import Application + from .web_exceptions import HTTPException + from .web_request import BaseRequest, Request + from .web_response import StreamResponse +else: + BaseRequest = Request = Application = StreamResponse = None + HTTPException = None + + +class AbstractRouter(ABC): + def __init__(self) -> None: + self._frozen = False + + def post_init(self, app: Application) -> None: + """Post init stage. + + Not an abstract method for sake of backward compatibility, + but if the router wants to be aware of the application + it can override this. + """ + + @property + def frozen(self) -> bool: + return self._frozen + + def freeze(self) -> None: + """Freeze router.""" + self._frozen = True + + @abstractmethod + async def resolve(self, request: Request) -> "AbstractMatchInfo": + """Return MATCH_INFO for given request""" + + +class AbstractMatchInfo(ABC): + @property # pragma: no branch + @abstractmethod + def handler(self) -> Callable[[Request], Awaitable[StreamResponse]]: + """Execute matched request handler""" + + @property + @abstractmethod + def expect_handler(self) -> Callable[[Request], Awaitable[None]]: + """Expect handler for 100-continue processing""" + + @property # pragma: no branch + @abstractmethod + def http_exception(self) -> Optional[HTTPException]: + """HTTPException instance raised on router's resolving, or None""" + + @abstractmethod # pragma: no branch + def get_info(self) -> Dict[str, Any]: + """Return a dict with additional info useful for introspection""" + + @property # pragma: no branch + @abstractmethod + def apps(self) -> Tuple[Application, ...]: + """Stack of nested applications. + + Top level application is left-most element. + + """ + + @abstractmethod + def add_app(self, app: Application) -> None: + """Add application to the nested apps stack.""" + + @abstractmethod + def freeze(self) -> None: + """Freeze the match info. + + The method is called after route resolution. + + After the call .add_app() is forbidden. 
+ + """ + + +class AbstractView(ABC): + """Abstract class based view.""" + + def __init__(self, request: Request) -> None: + self._request = request + + @property + def request(self) -> Request: + """Request instance.""" + return self._request + + @abstractmethod + def __await__(self) -> Generator[Any, None, StreamResponse]: + """Execute the view handler.""" + + +class AbstractResolver(ABC): + """Abstract DNS resolver.""" + + @abstractmethod + async def resolve(self, host: str, port: int, family: int) -> List[Dict[str, Any]]: + """Return IP address for given hostname""" + + @abstractmethod + async def close(self) -> None: + """Release resolver""" + + +if TYPE_CHECKING: # pragma: no cover + IterableBase = Iterable[Morsel[str]] +else: + IterableBase = Iterable + + +ClearCookiePredicate = Callable[["Morsel[str]"], bool] + + +class AbstractCookieJar(Sized, IterableBase): + """Abstract Cookie Jar.""" + + def __init__(self, *, loop: Optional[asyncio.AbstractEventLoop] = None) -> None: + self._loop = get_running_loop(loop) + + @abstractmethod + def clear(self, predicate: Optional[ClearCookiePredicate] = None) -> None: + """Clear all cookies if no predicate is passed.""" + + @abstractmethod + def clear_domain(self, domain: str) -> None: + """Clear all cookies for domain and all subdomains.""" + + @abstractmethod + def update_cookies(self, cookies: LooseCookies, response_url: URL = URL()) -> None: + """Update cookies.""" + + @abstractmethod + def filter_cookies(self, request_url: URL) -> "BaseCookie[str]": + """Return the jar's cookies filtered by their attributes.""" + + +class AbstractStreamWriter(ABC): + """Abstract stream writer.""" + + buffer_size = 0 + output_size = 0 + length: Optional[int] = 0 + + @abstractmethod + async def write(self, chunk: bytes) -> None: + """Write chunk into stream.""" + + @abstractmethod + async def write_eof(self, chunk: bytes = b"") -> None: + """Write last chunk.""" + + @abstractmethod + async def drain(self) -> None: + """Flush the write buffer.""" + + @abstractmethod + def enable_compression(self, encoding: str = "deflate") -> None: + """Enable HTTP body compression""" + + @abstractmethod + def enable_chunking(self) -> None: + """Enable HTTP chunked mode""" + + @abstractmethod + async def write_headers( + self, status_line: str, headers: "CIMultiDict[str]" + ) -> None: + """Write HTTP headers""" + + +class AbstractAccessLogger(ABC): + """Abstract writer to access log.""" + + def __init__(self, logger: logging.Logger, log_format: str) -> None: + self.logger = logger + self.log_format = log_format + + @abstractmethod + def log(self, request: BaseRequest, response: StreamResponse, time: float) -> None: + """Emit log to logger.""" diff --git a/env/lib/python3.8/site-packages/aiohttp/base_protocol.py b/env/lib/python3.8/site-packages/aiohttp/base_protocol.py new file mode 100644 index 00000000..8189835e --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/base_protocol.py @@ -0,0 +1,87 @@ +import asyncio +from typing import Optional, cast + +from .tcp_helpers import tcp_nodelay + + +class BaseProtocol(asyncio.Protocol): + __slots__ = ( + "_loop", + "_paused", + "_drain_waiter", + "_connection_lost", + "_reading_paused", + "transport", + ) + + def __init__(self, loop: asyncio.AbstractEventLoop) -> None: + self._loop: asyncio.AbstractEventLoop = loop + self._paused = False + self._drain_waiter: Optional[asyncio.Future[None]] = None + self._connection_lost = False + self._reading_paused = False + + self.transport: Optional[asyncio.Transport] = None + + def 
pause_writing(self) -> None: + assert not self._paused + self._paused = True + + def resume_writing(self) -> None: + assert self._paused + self._paused = False + + waiter = self._drain_waiter + if waiter is not None: + self._drain_waiter = None + if not waiter.done(): + waiter.set_result(None) + + def pause_reading(self) -> None: + if not self._reading_paused and self.transport is not None: + try: + self.transport.pause_reading() + except (AttributeError, NotImplementedError, RuntimeError): + pass + self._reading_paused = True + + def resume_reading(self) -> None: + if self._reading_paused and self.transport is not None: + try: + self.transport.resume_reading() + except (AttributeError, NotImplementedError, RuntimeError): + pass + self._reading_paused = False + + def connection_made(self, transport: asyncio.BaseTransport) -> None: + tr = cast(asyncio.Transport, transport) + tcp_nodelay(tr, True) + self.transport = tr + + def connection_lost(self, exc: Optional[BaseException]) -> None: + self._connection_lost = True + # Wake up the writer if currently paused. + self.transport = None + if not self._paused: + return + waiter = self._drain_waiter + if waiter is None: + return + self._drain_waiter = None + if waiter.done(): + return + if exc is None: + waiter.set_result(None) + else: + waiter.set_exception(exc) + + async def _drain_helper(self) -> None: + if self._connection_lost: + raise ConnectionResetError("Connection lost") + if not self._paused: + return + waiter = self._drain_waiter + if waiter is None: + waiter = self._loop.create_future() + self._drain_waiter = waiter + await asyncio.shield(waiter) diff --git a/env/lib/python3.8/site-packages/aiohttp/client.py b/env/lib/python3.8/site-packages/aiohttp/client.py new file mode 100644 index 00000000..0d0f4c16 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/client.py @@ -0,0 +1,1305 @@ +"""HTTP Client for asyncio.""" + +import asyncio +import base64 +import hashlib +import json +import os +import sys +import traceback +import warnings +from contextlib import suppress +from types import SimpleNamespace, TracebackType +from typing import ( + Any, + Awaitable, + Callable, + Coroutine, + FrozenSet, + Generator, + Generic, + Iterable, + List, + Mapping, + Optional, + Set, + Tuple, + Type, + TypeVar, + Union, +) + +import attr +from multidict import CIMultiDict, MultiDict, MultiDictProxy, istr +from yarl import URL + +from . 
import hdrs, http, payload +from .abc import AbstractCookieJar +from .client_exceptions import ( + ClientConnectionError as ClientConnectionError, + ClientConnectorCertificateError as ClientConnectorCertificateError, + ClientConnectorError as ClientConnectorError, + ClientConnectorSSLError as ClientConnectorSSLError, + ClientError as ClientError, + ClientHttpProxyError as ClientHttpProxyError, + ClientOSError as ClientOSError, + ClientPayloadError as ClientPayloadError, + ClientProxyConnectionError as ClientProxyConnectionError, + ClientResponseError as ClientResponseError, + ClientSSLError as ClientSSLError, + ContentTypeError as ContentTypeError, + InvalidURL as InvalidURL, + ServerConnectionError as ServerConnectionError, + ServerDisconnectedError as ServerDisconnectedError, + ServerFingerprintMismatch as ServerFingerprintMismatch, + ServerTimeoutError as ServerTimeoutError, + TooManyRedirects as TooManyRedirects, + WSServerHandshakeError as WSServerHandshakeError, +) +from .client_reqrep import ( + ClientRequest as ClientRequest, + ClientResponse as ClientResponse, + Fingerprint as Fingerprint, + RequestInfo as RequestInfo, + _merge_ssl_params, +) +from .client_ws import ClientWebSocketResponse as ClientWebSocketResponse +from .connector import ( + BaseConnector as BaseConnector, + NamedPipeConnector as NamedPipeConnector, + TCPConnector as TCPConnector, + UnixConnector as UnixConnector, +) +from .cookiejar import CookieJar +from .helpers import ( + DEBUG, + PY_36, + BasicAuth, + TimeoutHandle, + ceil_timeout, + get_env_proxy_for_url, + get_running_loop, + sentinel, + strip_auth_from_url, +) +from .http import WS_KEY, HttpVersion, WebSocketReader, WebSocketWriter +from .http_websocket import WSHandshakeError, WSMessage, ws_ext_gen, ws_ext_parse +from .streams import FlowControlDataQueue +from .tracing import Trace, TraceConfig +from .typedefs import Final, JSONEncoder, LooseCookies, LooseHeaders, StrOrURL + +__all__ = ( + # client_exceptions + "ClientConnectionError", + "ClientConnectorCertificateError", + "ClientConnectorError", + "ClientConnectorSSLError", + "ClientError", + "ClientHttpProxyError", + "ClientOSError", + "ClientPayloadError", + "ClientProxyConnectionError", + "ClientResponseError", + "ClientSSLError", + "ContentTypeError", + "InvalidURL", + "ServerConnectionError", + "ServerDisconnectedError", + "ServerFingerprintMismatch", + "ServerTimeoutError", + "TooManyRedirects", + "WSServerHandshakeError", + # client_reqrep + "ClientRequest", + "ClientResponse", + "Fingerprint", + "RequestInfo", + # connector + "BaseConnector", + "TCPConnector", + "UnixConnector", + "NamedPipeConnector", + # client_ws + "ClientWebSocketResponse", + # client + "ClientSession", + "ClientTimeout", + "request", +) + + +try: + from ssl import SSLContext +except ImportError: # pragma: no cover + SSLContext = object # type: ignore[misc,assignment] + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class ClientTimeout: + total: Optional[float] = None + connect: Optional[float] = None + sock_read: Optional[float] = None + sock_connect: Optional[float] = None + + # pool_queue_timeout: Optional[float] = None + # dns_resolution_timeout: Optional[float] = None + # socket_connect_timeout: Optional[float] = None + # connection_acquiring_timeout: Optional[float] = None + # new_connection_timeout: Optional[float] = None + # http_header_timeout: Optional[float] = None + # response_body_timeout: Optional[float] = None + + # to create a timeout specific for a single request, either + # - create a completely 
new one to overwrite the default + # - or use http://www.attrs.org/en/stable/api.html#attr.evolve + # to overwrite the defaults + + +# 5 Minute default read timeout +DEFAULT_TIMEOUT: Final[ClientTimeout] = ClientTimeout(total=5 * 60) + +_RetType = TypeVar("_RetType") + + +class ClientSession: + """First-class interface for making HTTP requests.""" + + ATTRS = frozenset( + [ + "_base_url", + "_source_traceback", + "_connector", + "requote_redirect_url", + "_loop", + "_cookie_jar", + "_connector_owner", + "_default_auth", + "_version", + "_json_serialize", + "_requote_redirect_url", + "_timeout", + "_raise_for_status", + "_auto_decompress", + "_trust_env", + "_default_headers", + "_skip_auto_headers", + "_request_class", + "_response_class", + "_ws_response_class", + "_trace_configs", + "_read_bufsize", + ] + ) + + _source_traceback = None # type: Optional[traceback.StackSummary] + _connector = None # type: Optional[BaseConnector] + + def __init__( + self, + base_url: Optional[StrOrURL] = None, + *, + connector: Optional[BaseConnector] = None, + loop: Optional[asyncio.AbstractEventLoop] = None, + cookies: Optional[LooseCookies] = None, + headers: Optional[LooseHeaders] = None, + skip_auto_headers: Optional[Iterable[str]] = None, + auth: Optional[BasicAuth] = None, + json_serialize: JSONEncoder = json.dumps, + request_class: Type[ClientRequest] = ClientRequest, + response_class: Type[ClientResponse] = ClientResponse, + ws_response_class: Type[ClientWebSocketResponse] = ClientWebSocketResponse, + version: HttpVersion = http.HttpVersion11, + cookie_jar: Optional[AbstractCookieJar] = None, + connector_owner: bool = True, + raise_for_status: bool = False, + read_timeout: Union[float, object] = sentinel, + conn_timeout: Optional[float] = None, + timeout: Union[object, ClientTimeout] = sentinel, + auto_decompress: bool = True, + trust_env: bool = False, + requote_redirect_url: bool = True, + trace_configs: Optional[List[TraceConfig]] = None, + read_bufsize: int = 2**16, + ) -> None: + if loop is None: + if connector is not None: + loop = connector._loop + + loop = get_running_loop(loop) + + if base_url is None or isinstance(base_url, URL): + self._base_url: Optional[URL] = base_url + else: + self._base_url = URL(base_url) + assert ( + self._base_url.origin() == self._base_url + ), "Only absolute URLs without path part are supported" + + if connector is None: + connector = TCPConnector(loop=loop) + + if connector._loop is not loop: + raise RuntimeError("Session and connector has to use same event loop") + + self._loop = loop + + if loop.get_debug(): + self._source_traceback = traceback.extract_stack(sys._getframe(1)) + + if cookie_jar is None: + cookie_jar = CookieJar(loop=loop) + self._cookie_jar = cookie_jar + + if cookies is not None: + self._cookie_jar.update_cookies(cookies) + + self._connector = connector + self._connector_owner = connector_owner + self._default_auth = auth + self._version = version + self._json_serialize = json_serialize + if timeout is sentinel: + self._timeout = DEFAULT_TIMEOUT + if read_timeout is not sentinel: + warnings.warn( + "read_timeout is deprecated, " "use timeout argument instead", + DeprecationWarning, + stacklevel=2, + ) + self._timeout = attr.evolve(self._timeout, total=read_timeout) + if conn_timeout is not None: + self._timeout = attr.evolve(self._timeout, connect=conn_timeout) + warnings.warn( + "conn_timeout is deprecated, " "use timeout argument instead", + DeprecationWarning, + stacklevel=2, + ) + else: + self._timeout = timeout # type: 
ignore[assignment] + if read_timeout is not sentinel: + raise ValueError( + "read_timeout and timeout parameters " + "conflict, please setup " + "timeout.read" + ) + if conn_timeout is not None: + raise ValueError( + "conn_timeout and timeout parameters " + "conflict, please setup " + "timeout.connect" + ) + self._raise_for_status = raise_for_status + self._auto_decompress = auto_decompress + self._trust_env = trust_env + self._requote_redirect_url = requote_redirect_url + self._read_bufsize = read_bufsize + + # Convert to list of tuples + if headers: + real_headers: CIMultiDict[str] = CIMultiDict(headers) + else: + real_headers = CIMultiDict() + self._default_headers: CIMultiDict[str] = real_headers + if skip_auto_headers is not None: + self._skip_auto_headers = frozenset(istr(i) for i in skip_auto_headers) + else: + self._skip_auto_headers = frozenset() + + self._request_class = request_class + self._response_class = response_class + self._ws_response_class = ws_response_class + + self._trace_configs = trace_configs or [] + for trace_config in self._trace_configs: + trace_config.freeze() + + def __init_subclass__(cls: Type["ClientSession"]) -> None: + warnings.warn( + "Inheritance class {} from ClientSession " + "is discouraged".format(cls.__name__), + DeprecationWarning, + stacklevel=2, + ) + + if DEBUG: + + def __setattr__(self, name: str, val: Any) -> None: + if name not in self.ATTRS: + warnings.warn( + "Setting custom ClientSession.{} attribute " + "is discouraged".format(name), + DeprecationWarning, + stacklevel=2, + ) + super().__setattr__(name, val) + + def __del__(self, _warnings: Any = warnings) -> None: + if not self.closed: + if PY_36: + kwargs = {"source": self} + else: + kwargs = {} + _warnings.warn( + f"Unclosed client session {self!r}", ResourceWarning, **kwargs + ) + context = {"client_session": self, "message": "Unclosed client session"} + if self._source_traceback is not None: + context["source_traceback"] = self._source_traceback + self._loop.call_exception_handler(context) + + def request( + self, method: str, url: StrOrURL, **kwargs: Any + ) -> "_RequestContextManager": + """Perform HTTP request.""" + return _RequestContextManager(self._request(method, url, **kwargs)) + + def _build_url(self, str_or_url: StrOrURL) -> URL: + url = URL(str_or_url) + if self._base_url is None: + return url + else: + assert not url.is_absolute() and url.path.startswith("/") + return self._base_url.join(url) + + async def _request( + self, + method: str, + str_or_url: StrOrURL, + *, + params: Optional[Mapping[str, str]] = None, + data: Any = None, + json: Any = None, + cookies: Optional[LooseCookies] = None, + headers: Optional[LooseHeaders] = None, + skip_auto_headers: Optional[Iterable[str]] = None, + auth: Optional[BasicAuth] = None, + allow_redirects: bool = True, + max_redirects: int = 10, + compress: Optional[str] = None, + chunked: Optional[bool] = None, + expect100: bool = False, + raise_for_status: Optional[bool] = None, + read_until_eof: bool = True, + proxy: Optional[StrOrURL] = None, + proxy_auth: Optional[BasicAuth] = None, + timeout: Union[ClientTimeout, object] = sentinel, + verify_ssl: Optional[bool] = None, + fingerprint: Optional[bytes] = None, + ssl_context: Optional[SSLContext] = None, + ssl: Optional[Union[SSLContext, bool, Fingerprint]] = None, + proxy_headers: Optional[LooseHeaders] = None, + trace_request_ctx: Optional[SimpleNamespace] = None, + read_bufsize: Optional[int] = None, + ) -> ClientResponse: + + # NOTE: timeout clamps existing connect and read 
timeouts. We cannot + # set the default to None because we need to detect if the user wants + # to use the existing timeouts by setting timeout to None. + + if self.closed: + raise RuntimeError("Session is closed") + + ssl = _merge_ssl_params(ssl, verify_ssl, ssl_context, fingerprint) + + if data is not None and json is not None: + raise ValueError( + "data and json parameters can not be used at the same time" + ) + elif json is not None: + data = payload.JsonPayload(json, dumps=self._json_serialize) + + if not isinstance(chunked, bool) and chunked is not None: + warnings.warn("Chunk size is deprecated #1615", DeprecationWarning) + + redirects = 0 + history = [] + version = self._version + + # Merge with default headers and transform to CIMultiDict + headers = self._prepare_headers(headers) + proxy_headers = self._prepare_headers(proxy_headers) + + try: + url = self._build_url(str_or_url) + except ValueError as e: + raise InvalidURL(str_or_url) from e + + skip_headers = set(self._skip_auto_headers) + if skip_auto_headers is not None: + for i in skip_auto_headers: + skip_headers.add(istr(i)) + + if proxy is not None: + try: + proxy = URL(proxy) + except ValueError as e: + raise InvalidURL(proxy) from e + + if timeout is sentinel: + real_timeout: ClientTimeout = self._timeout + else: + if not isinstance(timeout, ClientTimeout): + real_timeout = ClientTimeout(total=timeout) # type: ignore[arg-type] + else: + real_timeout = timeout + # timeout is cumulative for all request operations + # (request, redirects, responses, data consuming) + tm = TimeoutHandle(self._loop, real_timeout.total) + handle = tm.start() + + if read_bufsize is None: + read_bufsize = self._read_bufsize + + traces = [ + Trace( + self, + trace_config, + trace_config.trace_config_ctx(trace_request_ctx=trace_request_ctx), + ) + for trace_config in self._trace_configs + ] + + for trace in traces: + await trace.send_request_start(method, url.update_query(params), headers) + + timer = tm.timer() + try: + with timer: + while True: + url, auth_from_url = strip_auth_from_url(url) + if auth and auth_from_url: + raise ValueError( + "Cannot combine AUTH argument with " + "credentials encoded in URL" + ) + + if auth is None: + auth = auth_from_url + if auth is None: + auth = self._default_auth + # It would be confusing if we support explicit + # Authorization header with auth argument + if ( + headers is not None + and auth is not None + and hdrs.AUTHORIZATION in headers + ): + raise ValueError( + "Cannot combine AUTHORIZATION header " + "with AUTH argument or credentials " + "encoded in URL" + ) + + all_cookies = self._cookie_jar.filter_cookies(url) + + if cookies is not None: + tmp_cookie_jar = CookieJar() + tmp_cookie_jar.update_cookies(cookies) + req_cookies = tmp_cookie_jar.filter_cookies(url) + if req_cookies: + all_cookies.load(req_cookies) + + if proxy is not None: + proxy = URL(proxy) + elif self._trust_env: + with suppress(LookupError): + proxy, proxy_auth = get_env_proxy_for_url(url) + + req = self._request_class( + method, + url, + params=params, + headers=headers, + skip_auto_headers=skip_headers, + data=data, + cookies=all_cookies, + auth=auth, + version=version, + compress=compress, + chunked=chunked, + expect100=expect100, + loop=self._loop, + response_class=self._response_class, + proxy=proxy, + proxy_auth=proxy_auth, + timer=timer, + session=self, + ssl=ssl, + proxy_headers=proxy_headers, + traces=traces, + ) + + # connection timeout + try: + async with ceil_timeout(real_timeout.connect): + assert self._connector is not 
None + conn = await self._connector.connect( + req, traces=traces, timeout=real_timeout + ) + except asyncio.TimeoutError as exc: + raise ServerTimeoutError( + "Connection timeout " "to host {}".format(url) + ) from exc + + assert conn.transport is not None + + assert conn.protocol is not None + conn.protocol.set_response_params( + timer=timer, + skip_payload=method.upper() == "HEAD", + read_until_eof=read_until_eof, + auto_decompress=self._auto_decompress, + read_timeout=real_timeout.sock_read, + read_bufsize=read_bufsize, + ) + + try: + try: + resp = await req.send(conn) + try: + await resp.start(conn) + except BaseException: + resp.close() + raise + except BaseException: + conn.close() + raise + except ClientError: + raise + except OSError as exc: + if exc.errno is None and isinstance(exc, asyncio.TimeoutError): + raise + raise ClientOSError(*exc.args) from exc + + self._cookie_jar.update_cookies(resp.cookies, resp.url) + + # redirects + if resp.status in (301, 302, 303, 307, 308) and allow_redirects: + + for trace in traces: + await trace.send_request_redirect( + method, url.update_query(params), headers, resp + ) + + redirects += 1 + history.append(resp) + if max_redirects and redirects >= max_redirects: + resp.close() + raise TooManyRedirects( + history[0].request_info, tuple(history) + ) + + # For 301 and 302, mimic IE, now changed in RFC + # https://github.com/kennethreitz/requests/pull/269 + if (resp.status == 303 and resp.method != hdrs.METH_HEAD) or ( + resp.status in (301, 302) and resp.method == hdrs.METH_POST + ): + method = hdrs.METH_GET + data = None + if headers.get(hdrs.CONTENT_LENGTH): + headers.pop(hdrs.CONTENT_LENGTH) + + r_url = resp.headers.get(hdrs.LOCATION) or resp.headers.get( + hdrs.URI + ) + if r_url is None: + # see github.com/aio-libs/aiohttp/issues/2022 + break + else: + # reading from correct redirection + # response is forbidden + resp.release() + + try: + parsed_url = URL( + r_url, encoded=not self._requote_redirect_url + ) + + except ValueError as e: + raise InvalidURL(r_url) from e + + scheme = parsed_url.scheme + if scheme not in ("http", "https", ""): + resp.close() + raise ValueError("Can redirect only to http or https") + elif not scheme: + parsed_url = url.join(parsed_url) + + if url.origin() != parsed_url.origin(): + auth = None + headers.pop(hdrs.AUTHORIZATION, None) + + url = parsed_url + params = None + resp.release() + continue + + break + + # check response status + if raise_for_status is None: + raise_for_status = self._raise_for_status + if raise_for_status: + resp.raise_for_status() + + # register connection + if handle is not None: + if resp.connection is not None: + resp.connection.add_callback(handle.cancel) + else: + handle.cancel() + + resp._history = tuple(history) + + for trace in traces: + await trace.send_request_end( + method, url.update_query(params), headers, resp + ) + return resp + + except BaseException as e: + # cleanup timer + tm.close() + if handle: + handle.cancel() + handle = None + + for trace in traces: + await trace.send_request_exception( + method, url.update_query(params), headers, e + ) + raise + + def ws_connect( + self, + url: StrOrURL, + *, + method: str = hdrs.METH_GET, + protocols: Iterable[str] = (), + timeout: float = 10.0, + receive_timeout: Optional[float] = None, + autoclose: bool = True, + autoping: bool = True, + heartbeat: Optional[float] = None, + auth: Optional[BasicAuth] = None, + origin: Optional[str] = None, + params: Optional[Mapping[str, str]] = None, + headers: Optional[LooseHeaders] = None, + 
proxy: Optional[StrOrURL] = None, + proxy_auth: Optional[BasicAuth] = None, + ssl: Union[SSLContext, bool, None, Fingerprint] = None, + verify_ssl: Optional[bool] = None, + fingerprint: Optional[bytes] = None, + ssl_context: Optional[SSLContext] = None, + proxy_headers: Optional[LooseHeaders] = None, + compress: int = 0, + max_msg_size: int = 4 * 1024 * 1024, + ) -> "_WSRequestContextManager": + """Initiate websocket connection.""" + return _WSRequestContextManager( + self._ws_connect( + url, + method=method, + protocols=protocols, + timeout=timeout, + receive_timeout=receive_timeout, + autoclose=autoclose, + autoping=autoping, + heartbeat=heartbeat, + auth=auth, + origin=origin, + params=params, + headers=headers, + proxy=proxy, + proxy_auth=proxy_auth, + ssl=ssl, + verify_ssl=verify_ssl, + fingerprint=fingerprint, + ssl_context=ssl_context, + proxy_headers=proxy_headers, + compress=compress, + max_msg_size=max_msg_size, + ) + ) + + async def _ws_connect( + self, + url: StrOrURL, + *, + method: str = hdrs.METH_GET, + protocols: Iterable[str] = (), + timeout: float = 10.0, + receive_timeout: Optional[float] = None, + autoclose: bool = True, + autoping: bool = True, + heartbeat: Optional[float] = None, + auth: Optional[BasicAuth] = None, + origin: Optional[str] = None, + params: Optional[Mapping[str, str]] = None, + headers: Optional[LooseHeaders] = None, + proxy: Optional[StrOrURL] = None, + proxy_auth: Optional[BasicAuth] = None, + ssl: Union[SSLContext, bool, None, Fingerprint] = None, + verify_ssl: Optional[bool] = None, + fingerprint: Optional[bytes] = None, + ssl_context: Optional[SSLContext] = None, + proxy_headers: Optional[LooseHeaders] = None, + compress: int = 0, + max_msg_size: int = 4 * 1024 * 1024, + ) -> ClientWebSocketResponse: + + if headers is None: + real_headers: CIMultiDict[str] = CIMultiDict() + else: + real_headers = CIMultiDict(headers) + + default_headers = { + hdrs.UPGRADE: "websocket", + hdrs.CONNECTION: "upgrade", + hdrs.SEC_WEBSOCKET_VERSION: "13", + } + + for key, value in default_headers.items(): + real_headers.setdefault(key, value) + + sec_key = base64.b64encode(os.urandom(16)) + real_headers[hdrs.SEC_WEBSOCKET_KEY] = sec_key.decode() + + if protocols: + real_headers[hdrs.SEC_WEBSOCKET_PROTOCOL] = ",".join(protocols) + if origin is not None: + real_headers[hdrs.ORIGIN] = origin + if compress: + extstr = ws_ext_gen(compress=compress) + real_headers[hdrs.SEC_WEBSOCKET_EXTENSIONS] = extstr + + ssl = _merge_ssl_params(ssl, verify_ssl, ssl_context, fingerprint) + + # send request + resp = await self.request( + method, + url, + params=params, + headers=real_headers, + read_until_eof=False, + auth=auth, + proxy=proxy, + proxy_auth=proxy_auth, + ssl=ssl, + proxy_headers=proxy_headers, + ) + + try: + # check handshake + if resp.status != 101: + raise WSServerHandshakeError( + resp.request_info, + resp.history, + message="Invalid response status", + status=resp.status, + headers=resp.headers, + ) + + if resp.headers.get(hdrs.UPGRADE, "").lower() != "websocket": + raise WSServerHandshakeError( + resp.request_info, + resp.history, + message="Invalid upgrade header", + status=resp.status, + headers=resp.headers, + ) + + if resp.headers.get(hdrs.CONNECTION, "").lower() != "upgrade": + raise WSServerHandshakeError( + resp.request_info, + resp.history, + message="Invalid connection header", + status=resp.status, + headers=resp.headers, + ) + + # key calculation + r_key = resp.headers.get(hdrs.SEC_WEBSOCKET_ACCEPT, "") + match = base64.b64encode(hashlib.sha1(sec_key + 
WS_KEY).digest()).decode() + if r_key != match: + raise WSServerHandshakeError( + resp.request_info, + resp.history, + message="Invalid challenge response", + status=resp.status, + headers=resp.headers, + ) + + # websocket protocol + protocol = None + if protocols and hdrs.SEC_WEBSOCKET_PROTOCOL in resp.headers: + resp_protocols = [ + proto.strip() + for proto in resp.headers[hdrs.SEC_WEBSOCKET_PROTOCOL].split(",") + ] + + for proto in resp_protocols: + if proto in protocols: + protocol = proto + break + + # websocket compress + notakeover = False + if compress: + compress_hdrs = resp.headers.get(hdrs.SEC_WEBSOCKET_EXTENSIONS) + if compress_hdrs: + try: + compress, notakeover = ws_ext_parse(compress_hdrs) + except WSHandshakeError as exc: + raise WSServerHandshakeError( + resp.request_info, + resp.history, + message=exc.args[0], + status=resp.status, + headers=resp.headers, + ) from exc + else: + compress = 0 + notakeover = False + + conn = resp.connection + assert conn is not None + conn_proto = conn.protocol + assert conn_proto is not None + transport = conn.transport + assert transport is not None + reader: FlowControlDataQueue[WSMessage] = FlowControlDataQueue( + conn_proto, 2**16, loop=self._loop + ) + conn_proto.set_parser(WebSocketReader(reader, max_msg_size), reader) + writer = WebSocketWriter( + conn_proto, + transport, + use_mask=True, + compress=compress, + notakeover=notakeover, + ) + except BaseException: + resp.close() + raise + else: + return self._ws_response_class( + reader, + writer, + protocol, + resp, + timeout, + autoclose, + autoping, + self._loop, + receive_timeout=receive_timeout, + heartbeat=heartbeat, + compress=compress, + client_notakeover=notakeover, + ) + + def _prepare_headers(self, headers: Optional[LooseHeaders]) -> "CIMultiDict[str]": + """Add default headers and transform it to CIMultiDict""" + # Convert headers to MultiDict + result = CIMultiDict(self._default_headers) + if headers: + if not isinstance(headers, (MultiDictProxy, MultiDict)): + headers = CIMultiDict(headers) + added_names: Set[str] = set() + for key, value in headers.items(): + if key in added_names: + result.add(key, value) + else: + result[key] = value + added_names.add(key) + return result + + def get( + self, url: StrOrURL, *, allow_redirects: bool = True, **kwargs: Any + ) -> "_RequestContextManager": + """Perform HTTP GET request.""" + return _RequestContextManager( + self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs) + ) + + def options( + self, url: StrOrURL, *, allow_redirects: bool = True, **kwargs: Any + ) -> "_RequestContextManager": + """Perform HTTP OPTIONS request.""" + return _RequestContextManager( + self._request( + hdrs.METH_OPTIONS, url, allow_redirects=allow_redirects, **kwargs + ) + ) + + def head( + self, url: StrOrURL, *, allow_redirects: bool = False, **kwargs: Any + ) -> "_RequestContextManager": + """Perform HTTP HEAD request.""" + return _RequestContextManager( + self._request( + hdrs.METH_HEAD, url, allow_redirects=allow_redirects, **kwargs + ) + ) + + def post( + self, url: StrOrURL, *, data: Any = None, **kwargs: Any + ) -> "_RequestContextManager": + """Perform HTTP POST request.""" + return _RequestContextManager( + self._request(hdrs.METH_POST, url, data=data, **kwargs) + ) + + def put( + self, url: StrOrURL, *, data: Any = None, **kwargs: Any + ) -> "_RequestContextManager": + """Perform HTTP PUT request.""" + return _RequestContextManager( + self._request(hdrs.METH_PUT, url, data=data, **kwargs) + ) + + def patch( + self, url: 
StrOrURL, *, data: Any = None, **kwargs: Any + ) -> "_RequestContextManager": + """Perform HTTP PATCH request.""" + return _RequestContextManager( + self._request(hdrs.METH_PATCH, url, data=data, **kwargs) + ) + + def delete(self, url: StrOrURL, **kwargs: Any) -> "_RequestContextManager": + """Perform HTTP DELETE request.""" + return _RequestContextManager(self._request(hdrs.METH_DELETE, url, **kwargs)) + + async def close(self) -> None: + """Close underlying connector. + + Release all acquired resources. + """ + if not self.closed: + if self._connector is not None and self._connector_owner: + await self._connector.close() + self._connector = None + + @property + def closed(self) -> bool: + """Is client session closed. + + A readonly property. + """ + return self._connector is None or self._connector.closed + + @property + def connector(self) -> Optional[BaseConnector]: + """Connector instance used for the session.""" + return self._connector + + @property + def cookie_jar(self) -> AbstractCookieJar: + """The session cookies.""" + return self._cookie_jar + + @property + def version(self) -> Tuple[int, int]: + """The session HTTP protocol version.""" + return self._version + + @property + def requote_redirect_url(self) -> bool: + """Do URL requoting on redirection handling.""" + return self._requote_redirect_url + + @requote_redirect_url.setter + def requote_redirect_url(self, val: bool) -> None: + """Do URL requoting on redirection handling.""" + warnings.warn( + "session.requote_redirect_url modification " "is deprecated #2778", + DeprecationWarning, + stacklevel=2, + ) + self._requote_redirect_url = val + + @property + def loop(self) -> asyncio.AbstractEventLoop: + """Session's loop.""" + warnings.warn( + "client.loop property is deprecated", DeprecationWarning, stacklevel=2 + ) + return self._loop + + @property + def timeout(self) -> ClientTimeout: + """Timeout for the session.""" + return self._timeout + + @property + def headers(self) -> "CIMultiDict[str]": + """The default headers of the client session.""" + return self._default_headers + + @property + def skip_auto_headers(self) -> FrozenSet[istr]: + """Headers for which autogeneration should be skipped""" + return self._skip_auto_headers + + @property + def auth(self) -> Optional[BasicAuth]: + """An object that represents HTTP Basic Authorization""" + return self._default_auth + + @property + def json_serialize(self) -> JSONEncoder: + """Json serializer callable""" + return self._json_serialize + + @property + def connector_owner(self) -> bool: + """Should connector be closed on session closing""" + return self._connector_owner + + @property + def raise_for_status( + self, + ) -> Union[bool, Callable[[ClientResponse], Awaitable[None]]]: + """Should `ClientResponse.raise_for_status()` be called for each response.""" + return self._raise_for_status + + @property + def auto_decompress(self) -> bool: + """Should the body response be automatically decompressed.""" + return self._auto_decompress + + @property + def trust_env(self) -> bool: + """ + Should proxies information from environment or netrc be trusted. + + Information is from HTTP_PROXY / HTTPS_PROXY environment variables + or ~/.netrc file if present. + """ + return self._trust_env + + @property + def trace_configs(self) -> List[TraceConfig]: + """A list of TraceConfig instances used for client tracing""" + return self._trace_configs + + def detach(self) -> None: + """Detach connector from session without closing the former. + + Session is switched to closed state anyway. 
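+
+        A usage sketch (illustrative, with hypothetical names): keep the
+        connector alive for another session while this one goes away::
+
+            other = ClientSession(
+                connector=session.connector, connector_owner=False
+            )
+            session.detach()  # this session reports closed; connector survives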
+ """ + self._connector = None + + def __enter__(self) -> None: + raise TypeError("Use async with instead") + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + # __exit__ should exist in pair with __enter__ but never executed + pass # pragma: no cover + + async def __aenter__(self) -> "ClientSession": + return self + + async def __aexit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + await self.close() + + +class _BaseRequestContextManager(Coroutine[Any, Any, _RetType], Generic[_RetType]): + + __slots__ = ("_coro", "_resp") + + def __init__(self, coro: Coroutine["asyncio.Future[Any]", None, _RetType]) -> None: + self._coro = coro + + def send(self, arg: None) -> "asyncio.Future[Any]": + return self._coro.send(arg) + + def throw(self, arg: BaseException) -> None: # type: ignore[arg-type,override] + self._coro.throw(arg) + + def close(self) -> None: + return self._coro.close() + + def __await__(self) -> Generator[Any, None, _RetType]: + ret = self._coro.__await__() + return ret + + def __iter__(self) -> Generator[Any, None, _RetType]: + return self.__await__() + + async def __aenter__(self) -> _RetType: + self._resp = await self._coro + return self._resp + + +class _RequestContextManager(_BaseRequestContextManager[ClientResponse]): + __slots__ = () + + async def __aexit__( + self, + exc_type: Optional[Type[BaseException]], + exc: Optional[BaseException], + tb: Optional[TracebackType], + ) -> None: + # We're basing behavior on the exception as it can be caused by + # user code unrelated to the status of the connection. If you + # would like to close a connection you must do that + # explicitly. Otherwise connection error handling should kick in + # and close/recycle the connection as required. 
+        self._resp.release()
+
+
+class _WSRequestContextManager(_BaseRequestContextManager[ClientWebSocketResponse]):
+    __slots__ = ()
+
+    async def __aexit__(
+        self,
+        exc_type: Optional[Type[BaseException]],
+        exc: Optional[BaseException],
+        tb: Optional[TracebackType],
+    ) -> None:
+        await self._resp.close()
+
+
+class _SessionRequestContextManager:
+
+    __slots__ = ("_coro", "_resp", "_session")
+
+    def __init__(
+        self,
+        coro: Coroutine["asyncio.Future[Any]", None, ClientResponse],
+        session: ClientSession,
+    ) -> None:
+        self._coro = coro
+        self._resp: Optional[ClientResponse] = None
+        self._session = session
+
+    async def __aenter__(self) -> ClientResponse:
+        try:
+            self._resp = await self._coro
+        except BaseException:
+            await self._session.close()
+            raise
+        else:
+            return self._resp
+
+    async def __aexit__(
+        self,
+        exc_type: Optional[Type[BaseException]],
+        exc: Optional[BaseException],
+        tb: Optional[TracebackType],
+    ) -> None:
+        assert self._resp is not None
+        self._resp.close()
+        await self._session.close()
+
+
+def request(
+    method: str,
+    url: StrOrURL,
+    *,
+    params: Optional[Mapping[str, str]] = None,
+    data: Any = None,
+    json: Any = None,
+    headers: Optional[LooseHeaders] = None,
+    skip_auto_headers: Optional[Iterable[str]] = None,
+    auth: Optional[BasicAuth] = None,
+    allow_redirects: bool = True,
+    max_redirects: int = 10,
+    compress: Optional[str] = None,
+    chunked: Optional[bool] = None,
+    expect100: bool = False,
+    raise_for_status: Optional[bool] = None,
+    read_until_eof: bool = True,
+    proxy: Optional[StrOrURL] = None,
+    proxy_auth: Optional[BasicAuth] = None,
+    timeout: Union[ClientTimeout, object] = sentinel,
+    cookies: Optional[LooseCookies] = None,
+    version: HttpVersion = http.HttpVersion11,
+    connector: Optional[BaseConnector] = None,
+    read_bufsize: Optional[int] = None,
+    loop: Optional[asyncio.AbstractEventLoop] = None,
+) -> _SessionRequestContextManager:
+    """Constructs and sends a request.
+
+    Returns response object.
+    method - HTTP method
+    url - request url
+    params - (optional) Dictionary or bytes to be sent in the query
+      string of the new request
+    data - (optional) Dictionary, bytes, or file-like object to
+      send in the body of the request
+    json - (optional) Any json compatible python object
+    headers - (optional) Dictionary of HTTP Headers to send with
+      the request
+    cookies - (optional) Dict object to send with the request
+    auth - (optional) BasicAuth named tuple (aiohttp.helpers.BasicAuth)
+      representing HTTP Basic Auth
+    allow_redirects - (optional) If set to False, do not follow
+      redirects
+    version - Request HTTP version.
+    compress - Set to True if request has to be compressed
+      with deflate encoding.
+    chunked - Set to chunk size for chunked transfer encoding.
+    expect100 - Expect 100-continue response from server.
+    connector - BaseConnector sub-class instance to support
+      connection pooling.
+    read_until_eof - Read response until eof if response
+      does not have Content-Length header.
+    loop - Optional event loop.
+    timeout - Optional ClientTimeout settings structure, 5min
+      total timeout by default.
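+    For example, a thirty-second per-call limit can be passed as
+    ``timeout=aiohttp.ClientTimeout(total=30)`` (an illustrative value,
+    not a library default).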
+    Usage::
+
+      >>> import aiohttp
+      >>> resp = await aiohttp.request('GET', 'http://python.org/')
+      >>> resp
+      <ClientResponse(python.org/) [200]>
+      >>> data = await resp.read()
+    """
+    connector_owner = False
+    if connector is None:
+        connector_owner = True
+        connector = TCPConnector(loop=loop, force_close=True)
+
+    session = ClientSession(
+        loop=loop,
+        cookies=cookies,
+        version=version,
+        timeout=timeout,
+        connector=connector,
+        connector_owner=connector_owner,
+    )
+
+    return _SessionRequestContextManager(
+        session._request(
+            method,
+            url,
+            params=params,
+            data=data,
+            json=json,
+            headers=headers,
+            skip_auto_headers=skip_auto_headers,
+            auth=auth,
+            allow_redirects=allow_redirects,
+            max_redirects=max_redirects,
+            compress=compress,
+            chunked=chunked,
+            expect100=expect100,
+            raise_for_status=raise_for_status,
+            read_until_eof=read_until_eof,
+            proxy=proxy,
+            proxy_auth=proxy_auth,
+            read_bufsize=read_bufsize,
+        ),
+        session,
+    )
diff --git a/env/lib/python3.8/site-packages/aiohttp/client_exceptions.py b/env/lib/python3.8/site-packages/aiohttp/client_exceptions.py
new file mode 100644
index 00000000..c640e1e7
--- /dev/null
+++ b/env/lib/python3.8/site-packages/aiohttp/client_exceptions.py
@@ -0,0 +1,342 @@
+"""HTTP related errors."""
+
+import asyncio
+import warnings
+from typing import TYPE_CHECKING, Any, Optional, Tuple, Union
+
+from .http_parser import RawResponseMessage
+from .typedefs import LooseHeaders
+
+try:
+    import ssl
+
+    SSLContext = ssl.SSLContext
+except ImportError:  # pragma: no cover
+    ssl = SSLContext = None  # type: ignore[assignment]
+
+
+if TYPE_CHECKING:  # pragma: no cover
+    from .client_reqrep import ClientResponse, ConnectionKey, Fingerprint, RequestInfo
+else:
+    RequestInfo = ClientResponse = ConnectionKey = None
+
+__all__ = (
+    "ClientError",
+    "ClientConnectionError",
+    "ClientOSError",
+    "ClientConnectorError",
+    "ClientProxyConnectionError",
+    "ClientSSLError",
+    "ClientConnectorSSLError",
+    "ClientConnectorCertificateError",
+    "ServerConnectionError",
+    "ServerTimeoutError",
+    "ServerDisconnectedError",
+    "ServerFingerprintMismatch",
+    "ClientResponseError",
+    "ClientHttpProxyError",
+    "WSServerHandshakeError",
+    "ContentTypeError",
+    "ClientPayloadError",
+    "InvalidURL",
+)
+
+
+class ClientError(Exception):
+    """Base class for client connection errors."""
+
+
+class ClientResponseError(ClientError):
+    """Connection error during reading response.
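+
+    For example (a sketch), ``resp.raise_for_status()`` raises this
+    exception for 4xx and 5xx responses.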
+
+    request_info: instance of RequestInfo
+    """
+
+    def __init__(
+        self,
+        request_info: RequestInfo,
+        history: Tuple[ClientResponse, ...],
+        *,
+        code: Optional[int] = None,
+        status: Optional[int] = None,
+        message: str = "",
+        headers: Optional[LooseHeaders] = None,
+    ) -> None:
+        self.request_info = request_info
+        if code is not None:
+            if status is not None:
+                raise ValueError(
+                    "Both code and status arguments are provided; "
+                    "code is deprecated, use status instead"
+                )
+            warnings.warn(
+                "code argument is deprecated, use status instead",
+                DeprecationWarning,
+                stacklevel=2,
+            )
+        if status is not None:
+            self.status = status
+        elif code is not None:
+            self.status = code
+        else:
+            self.status = 0
+        self.message = message
+        self.headers = headers
+        self.history = history
+        self.args = (request_info, history)
+
+    def __str__(self) -> str:
+        return "{}, message={!r}, url={!r}".format(
+            self.status,
+            self.message,
+            self.request_info.real_url,
+        )
+
+    def __repr__(self) -> str:
+        args = f"{self.request_info!r}, {self.history!r}"
+        if self.status != 0:
+            args += f", status={self.status!r}"
+        if self.message != "":
+            args += f", message={self.message!r}"
+        if self.headers is not None:
+            args += f", headers={self.headers!r}"
+        return f"{type(self).__name__}({args})"
+
+    @property
+    def code(self) -> int:
+        warnings.warn(
+            "code property is deprecated, use status instead",
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        return self.status
+
+    @code.setter
+    def code(self, value: int) -> None:
+        warnings.warn(
+            "code property is deprecated, use status instead",
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        self.status = value
+
+
+class ContentTypeError(ClientResponseError):
+    """ContentType found is not valid."""
+
+
+class WSServerHandshakeError(ClientResponseError):
+    """Websocket server handshake error."""
+
+
+class ClientHttpProxyError(ClientResponseError):
+    """HTTP proxy error.
+
+    Raised in :class:`aiohttp.connector.TCPConnector` if
+    proxy responds with status other than ``200 OK``
+    on ``CONNECT`` request.
+    """
+
+
+class TooManyRedirects(ClientResponseError):
+    """Client was redirected too many times."""
+
+
+class ClientConnectionError(ClientError):
+    """Base class for client socket errors."""
+
+
+class ClientOSError(ClientConnectionError, OSError):
+    """OSError error."""
+
+
+class ClientConnectorError(ClientOSError):
+    """Client connector error.
+
+    Raised in :class:`aiohttp.connector.TCPConnector` if
+    a connection can not be established.
+    """
+
+    def __init__(self, connection_key: ConnectionKey, os_error: OSError) -> None:
+        self._conn_key = connection_key
+        self._os_error = os_error
+        super().__init__(os_error.errno, os_error.strerror)
+        self.args = (connection_key, os_error)
+
+    @property
+    def os_error(self) -> OSError:
+        return self._os_error
+
+    @property
+    def host(self) -> str:
+        return self._conn_key.host
+
+    @property
+    def port(self) -> Optional[int]:
+        return self._conn_key.port
+
+    @property
+    def ssl(self) -> Union[SSLContext, None, bool, "Fingerprint"]:
+        return self._conn_key.ssl
+
+    def __str__(self) -> str:
+        return "Cannot connect to host {0.host}:{0.port} ssl:{1} [{2}]".format(
+            self, self.ssl if self.ssl is not None else "default", self.strerror
+        )
+
+    # OSError.__reduce__ does too much black magic
+    __reduce__ = BaseException.__reduce__
+
+
+class ClientProxyConnectionError(ClientConnectorError):
+    """Proxy connection error.
+
+    Raised in :class:`aiohttp.connector.TCPConnector` if
+    connection to proxy can not be established.
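+
+    A catching sketch (the proxy URL is illustrative)::
+
+        try:
+            async with session.get(url, proxy="http://proxy:8080") as resp:
+                ...
+        except aiohttp.ClientProxyConnectionError:
+            ...  # the proxy itself was unreachable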
+ """ + + +class UnixClientConnectorError(ClientConnectorError): + """Unix connector error. + + Raised in :py:class:`aiohttp.connector.UnixConnector` + if connection to unix socket can not be established. + """ + + def __init__( + self, path: str, connection_key: ConnectionKey, os_error: OSError + ) -> None: + self._path = path + super().__init__(connection_key, os_error) + + @property + def path(self) -> str: + return self._path + + def __str__(self) -> str: + return "Cannot connect to unix socket {0.path} ssl:{1} [{2}]".format( + self, self.ssl if self.ssl is not None else "default", self.strerror + ) + + +class ServerConnectionError(ClientConnectionError): + """Server connection errors.""" + + +class ServerDisconnectedError(ServerConnectionError): + """Server disconnected.""" + + def __init__(self, message: Union[RawResponseMessage, str, None] = None) -> None: + if message is None: + message = "Server disconnected" + + self.args = (message,) + self.message = message + + +class ServerTimeoutError(ServerConnectionError, asyncio.TimeoutError): + """Server timeout error.""" + + +class ServerFingerprintMismatch(ServerConnectionError): + """SSL certificate does not match expected fingerprint.""" + + def __init__(self, expected: bytes, got: bytes, host: str, port: int) -> None: + self.expected = expected + self.got = got + self.host = host + self.port = port + self.args = (expected, got, host, port) + + def __repr__(self) -> str: + return "<{} expected={!r} got={!r} host={!r} port={!r}>".format( + self.__class__.__name__, self.expected, self.got, self.host, self.port + ) + + +class ClientPayloadError(ClientError): + """Response payload error.""" + + +class InvalidURL(ClientError, ValueError): + """Invalid URL. + + URL used for fetching is malformed, e.g. it doesn't contains host + part. 
+ """ + + # Derive from ValueError for backward compatibility + + def __init__(self, url: Any) -> None: + # The type of url is not yarl.URL because the exception can be raised + # on URL(url) call + super().__init__(url) + + @property + def url(self) -> Any: + return self.args[0] + + def __repr__(self) -> str: + return f"<{self.__class__.__name__} {self.url}>" + + +class ClientSSLError(ClientConnectorError): + """Base error for ssl.*Errors.""" + + +if ssl is not None: + cert_errors = (ssl.CertificateError,) + cert_errors_bases = ( + ClientSSLError, + ssl.CertificateError, + ) + + ssl_errors = (ssl.SSLError,) + ssl_error_bases = (ClientSSLError, ssl.SSLError) +else: # pragma: no cover + cert_errors = tuple() + cert_errors_bases = ( + ClientSSLError, + ValueError, + ) + + ssl_errors = tuple() + ssl_error_bases = (ClientSSLError,) + + +class ClientConnectorSSLError(*ssl_error_bases): # type: ignore[misc] + """Response ssl error.""" + + +class ClientConnectorCertificateError(*cert_errors_bases): # type: ignore[misc] + """Response certificate error.""" + + def __init__( + self, connection_key: ConnectionKey, certificate_error: Exception + ) -> None: + self._conn_key = connection_key + self._certificate_error = certificate_error + self.args = (connection_key, certificate_error) + + @property + def certificate_error(self) -> Exception: + return self._certificate_error + + @property + def host(self) -> str: + return self._conn_key.host + + @property + def port(self) -> Optional[int]: + return self._conn_key.port + + @property + def ssl(self) -> bool: + return self._conn_key.is_ssl + + def __str__(self) -> str: + return ( + "Cannot connect to host {0.host}:{0.port} ssl:{0.ssl} " + "[{0.certificate_error.__class__.__name__}: " + "{0.certificate_error.args}]".format(self) + ) diff --git a/env/lib/python3.8/site-packages/aiohttp/client_proto.py b/env/lib/python3.8/site-packages/aiohttp/client_proto.py new file mode 100644 index 00000000..3041157d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/client_proto.py @@ -0,0 +1,251 @@ +import asyncio +from contextlib import suppress +from typing import Any, Optional, Tuple + +from .base_protocol import BaseProtocol +from .client_exceptions import ( + ClientOSError, + ClientPayloadError, + ServerDisconnectedError, + ServerTimeoutError, +) +from .helpers import BaseTimerContext +from .http import HttpResponseParser, RawResponseMessage +from .streams import EMPTY_PAYLOAD, DataQueue, StreamReader + + +class ResponseHandler(BaseProtocol, DataQueue[Tuple[RawResponseMessage, StreamReader]]): + """Helper class to adapt between Protocol and StreamReader.""" + + def __init__(self, loop: asyncio.AbstractEventLoop) -> None: + BaseProtocol.__init__(self, loop=loop) + DataQueue.__init__(self, loop) + + self._should_close = False + + self._payload: Optional[StreamReader] = None + self._skip_payload = False + self._payload_parser = None + + self._timer = None + + self._tail = b"" + self._upgraded = False + self._parser: Optional[HttpResponseParser] = None + + self._read_timeout: Optional[float] = None + self._read_timeout_handle: Optional[asyncio.TimerHandle] = None + + @property + def upgraded(self) -> bool: + return self._upgraded + + @property + def should_close(self) -> bool: + if self._payload is not None and not self._payload.is_eof() or self._upgraded: + return True + + return ( + self._should_close + or self._upgraded + or self.exception() is not None + or self._payload_parser is not None + or len(self) > 0 + or bool(self._tail) + ) + + def 
force_close(self) -> None: + self._should_close = True + + def close(self) -> None: + transport = self.transport + if transport is not None: + transport.close() + self.transport = None + self._payload = None + self._drop_timeout() + + def is_connected(self) -> bool: + return self.transport is not None and not self.transport.is_closing() + + def connection_lost(self, exc: Optional[BaseException]) -> None: + self._drop_timeout() + + if self._payload_parser is not None: + with suppress(Exception): + self._payload_parser.feed_eof() + + uncompleted = None + if self._parser is not None: + try: + uncompleted = self._parser.feed_eof() + except Exception: + if self._payload is not None: + self._payload.set_exception( + ClientPayloadError("Response payload is not completed") + ) + + if not self.is_eof(): + if isinstance(exc, OSError): + exc = ClientOSError(*exc.args) + if exc is None: + exc = ServerDisconnectedError(uncompleted) + # assigns self._should_close to True as side effect, + # we do it anyway below + self.set_exception(exc) + + self._should_close = True + self._parser = None + self._payload = None + self._payload_parser = None + self._reading_paused = False + + super().connection_lost(exc) + + def eof_received(self) -> None: + # should call parser.feed_eof() most likely + self._drop_timeout() + + def pause_reading(self) -> None: + super().pause_reading() + self._drop_timeout() + + def resume_reading(self) -> None: + super().resume_reading() + self._reschedule_timeout() + + def set_exception(self, exc: BaseException) -> None: + self._should_close = True + self._drop_timeout() + super().set_exception(exc) + + def set_parser(self, parser: Any, payload: Any) -> None: + # TODO: actual types are: + # parser: WebSocketReader + # payload: FlowControlDataQueue + # but they are not generi enough + # Need an ABC for both types + self._payload = payload + self._payload_parser = parser + + self._drop_timeout() + + if self._tail: + data, self._tail = self._tail, b"" + self.data_received(data) + + def set_response_params( + self, + *, + timer: Optional[BaseTimerContext] = None, + skip_payload: bool = False, + read_until_eof: bool = False, + auto_decompress: bool = True, + read_timeout: Optional[float] = None, + read_bufsize: int = 2**16, + ) -> None: + self._skip_payload = skip_payload + + self._read_timeout = read_timeout + self._reschedule_timeout() + + self._parser = HttpResponseParser( + self, + self._loop, + read_bufsize, + timer=timer, + payload_exception=ClientPayloadError, + response_with_body=not skip_payload, + read_until_eof=read_until_eof, + auto_decompress=auto_decompress, + ) + + if self._tail: + data, self._tail = self._tail, b"" + self.data_received(data) + + def _drop_timeout(self) -> None: + if self._read_timeout_handle is not None: + self._read_timeout_handle.cancel() + self._read_timeout_handle = None + + def _reschedule_timeout(self) -> None: + timeout = self._read_timeout + if self._read_timeout_handle is not None: + self._read_timeout_handle.cancel() + + if timeout: + self._read_timeout_handle = self._loop.call_later( + timeout, self._on_read_timeout + ) + else: + self._read_timeout_handle = None + + def _on_read_timeout(self) -> None: + exc = ServerTimeoutError("Timeout on reading data from socket") + self.set_exception(exc) + if self._payload is not None: + self._payload.set_exception(exc) + + def data_received(self, data: bytes) -> None: + self._reschedule_timeout() + + if not data: + return + + # custom payload parser + if self._payload_parser is not None: + eof, tail = 
self._payload_parser.feed_data(data) + if eof: + self._payload = None + self._payload_parser = None + + if tail: + self.data_received(tail) + return + else: + if self._upgraded or self._parser is None: + # i.e. websocket connection, websocket parser is not set yet + self._tail += data + else: + # parse http messages + try: + messages, upgraded, tail = self._parser.feed_data(data) + except BaseException as exc: + if self.transport is not None: + # connection.release() could be called BEFORE + # data_received(), the transport is already + # closed in this case + self.transport.close() + # should_close is True after the call + self.set_exception(exc) + return + + self._upgraded = upgraded + + payload: Optional[StreamReader] = None + for message, payload in messages: + if message.should_close: + self._should_close = True + + self._payload = payload + + if self._skip_payload or message.code in (204, 304): + self.feed_data((message, EMPTY_PAYLOAD), 0) + else: + self.feed_data((message, payload), 0) + if payload is not None: + # new message(s) was processed + # register timeout handler unsubscribing + # either on end-of-stream or immediately for + # EMPTY_PAYLOAD + if payload is not EMPTY_PAYLOAD: + payload.on_eof(self._drop_timeout) + else: + self._drop_timeout() + + if tail: + if upgraded: + self.data_received(tail) + else: + self._tail = tail diff --git a/env/lib/python3.8/site-packages/aiohttp/client_reqrep.py b/env/lib/python3.8/site-packages/aiohttp/client_reqrep.py new file mode 100644 index 00000000..28b8a28d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/client_reqrep.py @@ -0,0 +1,1134 @@ +import asyncio +import codecs +import functools +import io +import re +import sys +import traceback +import warnings +from hashlib import md5, sha1, sha256 +from http.cookies import CookieError, Morsel, SimpleCookie +from types import MappingProxyType, TracebackType +from typing import ( + TYPE_CHECKING, + Any, + Dict, + Iterable, + List, + Mapping, + Optional, + Tuple, + Type, + Union, + cast, +) + +import attr +from multidict import CIMultiDict, CIMultiDictProxy, MultiDict, MultiDictProxy +from yarl import URL + +from . 
import hdrs, helpers, http, multipart, payload +from .abc import AbstractStreamWriter +from .client_exceptions import ( + ClientConnectionError, + ClientOSError, + ClientResponseError, + ContentTypeError, + InvalidURL, + ServerFingerprintMismatch, +) +from .formdata import FormData +from .helpers import ( + PY_36, + BaseTimerContext, + BasicAuth, + HeadersMixin, + TimerNoop, + noop, + reify, + set_result, +) +from .http import SERVER_SOFTWARE, HttpVersion10, HttpVersion11, StreamWriter +from .log import client_logger +from .streams import StreamReader +from .typedefs import ( + DEFAULT_JSON_DECODER, + JSONDecoder, + LooseCookies, + LooseHeaders, + RawHeaders, +) + +try: + import ssl + from ssl import SSLContext +except ImportError: # pragma: no cover + ssl = None # type: ignore[assignment] + SSLContext = object # type: ignore[misc,assignment] + +try: + import cchardet as chardet +except ImportError: # pragma: no cover + import charset_normalizer as chardet # type: ignore[no-redef] + + +__all__ = ("ClientRequest", "ClientResponse", "RequestInfo", "Fingerprint") + + +if TYPE_CHECKING: # pragma: no cover + from .client import ClientSession + from .connector import Connection + from .tracing import Trace + + +json_re = re.compile(r"^application/(?:[\w.+-]+?\+)?json") + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class ContentDisposition: + type: Optional[str] + parameters: "MappingProxyType[str, str]" + filename: Optional[str] + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class RequestInfo: + url: URL + method: str + headers: "CIMultiDictProxy[str]" + real_url: URL = attr.ib() + + @real_url.default + def real_url_default(self) -> URL: + return self.url + + +class Fingerprint: + HASHFUNC_BY_DIGESTLEN = { + 16: md5, + 20: sha1, + 32: sha256, + } + + def __init__(self, fingerprint: bytes) -> None: + digestlen = len(fingerprint) + hashfunc = self.HASHFUNC_BY_DIGESTLEN.get(digestlen) + if not hashfunc: + raise ValueError("fingerprint has invalid length") + elif hashfunc is md5 or hashfunc is sha1: + raise ValueError( + "md5 and sha1 are insecure and " "not supported. Use sha256." 
+ ) + self._hashfunc = hashfunc + self._fingerprint = fingerprint + + @property + def fingerprint(self) -> bytes: + return self._fingerprint + + def check(self, transport: asyncio.Transport) -> None: + if not transport.get_extra_info("sslcontext"): + return + sslobj = transport.get_extra_info("ssl_object") + cert = sslobj.getpeercert(binary_form=True) + got = self._hashfunc(cert).digest() + if got != self._fingerprint: + host, port, *_ = transport.get_extra_info("peername") + raise ServerFingerprintMismatch(self._fingerprint, got, host, port) + + +if ssl is not None: + SSL_ALLOWED_TYPES = (ssl.SSLContext, bool, Fingerprint, type(None)) +else: # pragma: no cover + SSL_ALLOWED_TYPES = type(None) + + +def _merge_ssl_params( + ssl: Union["SSLContext", bool, Fingerprint, None], + verify_ssl: Optional[bool], + ssl_context: Optional["SSLContext"], + fingerprint: Optional[bytes], +) -> Union["SSLContext", bool, Fingerprint, None]: + if verify_ssl is not None and not verify_ssl: + warnings.warn( + "verify_ssl is deprecated, use ssl=False instead", + DeprecationWarning, + stacklevel=3, + ) + if ssl is not None: + raise ValueError( + "verify_ssl, ssl_context, fingerprint and ssl " + "parameters are mutually exclusive" + ) + else: + ssl = False + if ssl_context is not None: + warnings.warn( + "ssl_context is deprecated, use ssl=context instead", + DeprecationWarning, + stacklevel=3, + ) + if ssl is not None: + raise ValueError( + "verify_ssl, ssl_context, fingerprint and ssl " + "parameters are mutually exclusive" + ) + else: + ssl = ssl_context + if fingerprint is not None: + warnings.warn( + "fingerprint is deprecated, " "use ssl=Fingerprint(fingerprint) instead", + DeprecationWarning, + stacklevel=3, + ) + if ssl is not None: + raise ValueError( + "verify_ssl, ssl_context, fingerprint and ssl " + "parameters are mutually exclusive" + ) + else: + ssl = Fingerprint(fingerprint) + if not isinstance(ssl, SSL_ALLOWED_TYPES): + raise TypeError( + "ssl should be SSLContext, bool, Fingerprint or None, " + "got {!r} instead.".format(ssl) + ) + return ssl + + +@attr.s(auto_attribs=True, slots=True, frozen=True) +class ConnectionKey: + # the key should contain an information about used proxy / TLS + # to prevent reusing wrong connections from a pool + host: str + port: Optional[int] + is_ssl: bool + ssl: Union[SSLContext, None, bool, Fingerprint] + proxy: Optional[URL] + proxy_auth: Optional[BasicAuth] + proxy_headers_hash: Optional[int] # hash(CIMultiDict) + + +def _is_expected_content_type( + response_content_type: str, expected_content_type: str +) -> bool: + if expected_content_type == "application/json": + return json_re.match(response_content_type) is not None + return expected_content_type in response_content_type + + +class ClientRequest: + GET_METHODS = { + hdrs.METH_GET, + hdrs.METH_HEAD, + hdrs.METH_OPTIONS, + hdrs.METH_TRACE, + } + POST_METHODS = {hdrs.METH_PATCH, hdrs.METH_POST, hdrs.METH_PUT} + ALL_METHODS = GET_METHODS.union(POST_METHODS).union({hdrs.METH_DELETE}) + + DEFAULT_HEADERS = { + hdrs.ACCEPT: "*/*", + hdrs.ACCEPT_ENCODING: "gzip, deflate", + } + + body = b"" + auth = None + response = None + + _writer = None # async task for streaming data + _continue = None # waiter future for '100 Continue' response + + # N.B. + # Adding __del__ method with self._writer closing doesn't make sense + # because _writer is instance method, thus it keeps a reference to self. + # Until writer has finished finalizer will not be called. 
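+    #
+    # A sketch of how this class is driven (illustrative, parameters
+    # elided): user code does not build ClientRequest directly;
+    # ClientSession._request() does, roughly:
+    #
+    #     req = self._request_class(method, url, loop=self._loop,
+    #                               session=self, ...)
+    #     resp = await req.send(conn)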
+ + def __init__( + self, + method: str, + url: URL, + *, + params: Optional[Mapping[str, str]] = None, + headers: Optional[LooseHeaders] = None, + skip_auto_headers: Iterable[str] = frozenset(), + data: Any = None, + cookies: Optional[LooseCookies] = None, + auth: Optional[BasicAuth] = None, + version: http.HttpVersion = http.HttpVersion11, + compress: Optional[str] = None, + chunked: Optional[bool] = None, + expect100: bool = False, + loop: Optional[asyncio.AbstractEventLoop] = None, + response_class: Optional[Type["ClientResponse"]] = None, + proxy: Optional[URL] = None, + proxy_auth: Optional[BasicAuth] = None, + timer: Optional[BaseTimerContext] = None, + session: Optional["ClientSession"] = None, + ssl: Union[SSLContext, bool, Fingerprint, None] = None, + proxy_headers: Optional[LooseHeaders] = None, + traces: Optional[List["Trace"]] = None, + ): + + if loop is None: + loop = asyncio.get_event_loop() + + assert isinstance(url, URL), url + assert isinstance(proxy, (URL, type(None))), proxy + # FIXME: session is None in tests only, need to fix tests + # assert session is not None + self._session = cast("ClientSession", session) + if params: + q = MultiDict(url.query) + url2 = url.with_query(params) + q.extend(url2.query) + url = url.with_query(q) + self.original_url = url + self.url = url.with_fragment(None) + self.method = method.upper() + self.chunked = chunked + self.compress = compress + self.loop = loop + self.length = None + if response_class is None: + real_response_class = ClientResponse + else: + real_response_class = response_class + self.response_class: Type[ClientResponse] = real_response_class + self._timer = timer if timer is not None else TimerNoop() + self._ssl = ssl + + if loop.get_debug(): + self._source_traceback = traceback.extract_stack(sys._getframe(1)) + + self.update_version(version) + self.update_host(url) + self.update_headers(headers) + self.update_auto_headers(skip_auto_headers) + self.update_cookies(cookies) + self.update_content_encoding(data) + self.update_auth(auth) + self.update_proxy(proxy, proxy_auth, proxy_headers) + + self.update_body_from_data(data) + if data is not None or self.method not in self.GET_METHODS: + self.update_transfer_encoding() + self.update_expect_continue(expect100) + if traces is None: + traces = [] + self._traces = traces + + def is_ssl(self) -> bool: + return self.url.scheme in ("https", "wss") + + @property + def ssl(self) -> Union["SSLContext", None, bool, Fingerprint]: + return self._ssl + + @property + def connection_key(self) -> ConnectionKey: + proxy_headers = self.proxy_headers + if proxy_headers: + h: Optional[int] = hash(tuple((k, v) for k, v in proxy_headers.items())) + else: + h = None + return ConnectionKey( + self.host, + self.port, + self.is_ssl(), + self.ssl, + self.proxy, + self.proxy_auth, + h, + ) + + @property + def host(self) -> str: + ret = self.url.raw_host + assert ret is not None + return ret + + @property + def port(self) -> Optional[int]: + return self.url.port + + @property + def request_info(self) -> RequestInfo: + headers: CIMultiDictProxy[str] = CIMultiDictProxy(self.headers) + return RequestInfo(self.url, self.method, headers, self.original_url) + + def update_host(self, url: URL) -> None: + """Update destination host, port and connection type (ssl).""" + # get host/port + if not url.raw_host: + raise InvalidURL(url) + + # basic auth info + username, password = url.user, url.password + if username: + self.auth = helpers.BasicAuth(username, password or "") + + def update_version(self, version: 
Union[http.HttpVersion, str]) -> None: + """Convert request version to two elements tuple. + + parser HTTP version '1.1' => (1, 1) + """ + if isinstance(version, str): + v = [part.strip() for part in version.split(".", 1)] + try: + version = http.HttpVersion(int(v[0]), int(v[1])) + except ValueError: + raise ValueError( + f"Can not parse http version number: {version}" + ) from None + self.version = version + + def update_headers(self, headers: Optional[LooseHeaders]) -> None: + """Update request headers.""" + self.headers: CIMultiDict[str] = CIMultiDict() + + # add host + netloc = cast(str, self.url.raw_host) + if helpers.is_ipv6_address(netloc): + netloc = f"[{netloc}]" + if self.url.port is not None and not self.url.is_default_port(): + netloc += ":" + str(self.url.port) + self.headers[hdrs.HOST] = netloc + + if headers: + if isinstance(headers, (dict, MultiDictProxy, MultiDict)): + headers = headers.items() # type: ignore[assignment] + + for key, value in headers: # type: ignore[misc] + # A special case for Host header + if key.lower() == "host": + self.headers[key] = value + else: + self.headers.add(key, value) + + def update_auto_headers(self, skip_auto_headers: Iterable[str]) -> None: + self.skip_auto_headers = CIMultiDict( + (hdr, None) for hdr in sorted(skip_auto_headers) + ) + used_headers = self.headers.copy() + used_headers.extend(self.skip_auto_headers) # type: ignore[arg-type] + + for hdr, val in self.DEFAULT_HEADERS.items(): + if hdr not in used_headers: + self.headers.add(hdr, val) + + if hdrs.USER_AGENT not in used_headers: + self.headers[hdrs.USER_AGENT] = SERVER_SOFTWARE + + def update_cookies(self, cookies: Optional[LooseCookies]) -> None: + """Update request cookies header.""" + if not cookies: + return + + c: SimpleCookie[str] = SimpleCookie() + if hdrs.COOKIE in self.headers: + c.load(self.headers.get(hdrs.COOKIE, "")) + del self.headers[hdrs.COOKIE] + + if isinstance(cookies, Mapping): + iter_cookies = cookies.items() + else: + iter_cookies = cookies # type: ignore[assignment] + for name, value in iter_cookies: + if isinstance(value, Morsel): + # Preserve coded_value + mrsl_val = value.get(value.key, Morsel()) + mrsl_val.set(value.key, value.value, value.coded_value) + c[name] = mrsl_val + else: + c[name] = value # type: ignore[assignment] + + self.headers[hdrs.COOKIE] = c.output(header="", sep=";").strip() + + def update_content_encoding(self, data: Any) -> None: + """Set request content encoding.""" + if data is None: + return + + enc = self.headers.get(hdrs.CONTENT_ENCODING, "").lower() + if enc: + if self.compress: + raise ValueError( + "compress can not be set " "if Content-Encoding header is set" + ) + elif self.compress: + if not isinstance(self.compress, str): + self.compress = "deflate" + self.headers[hdrs.CONTENT_ENCODING] = self.compress + self.chunked = True # enable chunked, no need to deal with length + + def update_transfer_encoding(self) -> None: + """Analyze transfer-encoding header.""" + te = self.headers.get(hdrs.TRANSFER_ENCODING, "").lower() + + if "chunked" in te: + if self.chunked: + raise ValueError( + "chunked can not be set " + 'if "Transfer-Encoding: chunked" header is set' + ) + + elif self.chunked: + if hdrs.CONTENT_LENGTH in self.headers: + raise ValueError( + "chunked can not be set " "if Content-Length header is set" + ) + + self.headers[hdrs.TRANSFER_ENCODING] = "chunked" + else: + if hdrs.CONTENT_LENGTH not in self.headers: + self.headers[hdrs.CONTENT_LENGTH] = str(len(self.body)) + + def update_auth(self, auth: Optional[BasicAuth]) 
+        """Set basic auth."""
+        if auth is None:
+            auth = self.auth
+        if auth is None:
+            return
+
+        if not isinstance(auth, helpers.BasicAuth):
+            raise TypeError("BasicAuth() tuple is required instead")
+
+        self.headers[hdrs.AUTHORIZATION] = auth.encode()
+
+    def update_body_from_data(self, body: Any) -> None:
+        if body is None:
+            return
+
+        # FormData
+        if isinstance(body, FormData):
+            body = body()
+
+        try:
+            body = payload.PAYLOAD_REGISTRY.get(body, disposition=None)
+        except payload.LookupError:
+            body = FormData(body)()
+
+        self.body = body
+
+        # enable chunked encoding if needed
+        if not self.chunked:
+            if hdrs.CONTENT_LENGTH not in self.headers:
+                size = body.size
+                if size is None:
+                    self.chunked = True
+                else:
+                    if hdrs.CONTENT_LENGTH not in self.headers:
+                        self.headers[hdrs.CONTENT_LENGTH] = str(size)
+
+        # copy payload headers
+        assert body.headers
+        for (key, value) in body.headers.items():
+            if key in self.headers:
+                continue
+            if key in self.skip_auto_headers:
+                continue
+            self.headers[key] = value
+
+    def update_expect_continue(self, expect: bool = False) -> None:
+        if expect:
+            self.headers[hdrs.EXPECT] = "100-continue"
+        elif self.headers.get(hdrs.EXPECT, "").lower() == "100-continue":
+            expect = True
+
+        if expect:
+            self._continue = self.loop.create_future()
+
+    def update_proxy(
+        self,
+        proxy: Optional[URL],
+        proxy_auth: Optional[BasicAuth],
+        proxy_headers: Optional[LooseHeaders],
+    ) -> None:
+        if proxy_auth and not isinstance(proxy_auth, helpers.BasicAuth):
+            raise ValueError("proxy_auth must be None or BasicAuth() tuple")
+        self.proxy = proxy
+        self.proxy_auth = proxy_auth
+        self.proxy_headers = proxy_headers
+
+    def keep_alive(self) -> bool:
+        if self.version < HttpVersion10:
+            # keep alive not supported at all
+            return False
+        if self.version == HttpVersion10:
+            if self.headers.get(hdrs.CONNECTION) == "keep-alive":
+                return True
+            else:  # no headers means we close for Http 1.0
+                return False
+        elif self.headers.get(hdrs.CONNECTION) == "close":
+            return False
+
+        return True
+
+    async def write_bytes(
+        self, writer: AbstractStreamWriter, conn: "Connection"
+    ) -> None:
+        """Support coroutines that yield bytes objects."""
+        # 100 response
+        if self._continue is not None:
+            await writer.drain()
+            await self._continue
+
+        protocol = conn.protocol
+        assert protocol is not None
+        try:
+            if isinstance(self.body, payload.Payload):
+                await self.body.write(writer)
+            else:
+                if isinstance(self.body, (bytes, bytearray)):
+                    self.body = (self.body,)  # type: ignore[assignment]
+
+                for chunk in self.body:
+                    await writer.write(chunk)  # type: ignore[arg-type]
+
+            await writer.write_eof()
+        except OSError as exc:
+            if exc.errno is None and isinstance(exc, asyncio.TimeoutError):
+                protocol.set_exception(exc)
+            else:
+                new_exc = ClientOSError(
+                    exc.errno, "Can not write request body for %s" % self.url
+                )
+                new_exc.__context__ = exc
+                new_exc.__cause__ = exc
+                protocol.set_exception(new_exc)
+        except asyncio.CancelledError as exc:
+            if not conn.closed:
+                protocol.set_exception(exc)
+        except Exception as exc:
+            protocol.set_exception(exc)
+        finally:
+            self._writer = None
+
+    async def send(self, conn: "Connection") -> "ClientResponse":
+        # Specify request target:
+        # - CONNECT request must send authority form URI
+        # - not CONNECT proxy must send absolute form URI
+        # - most common is origin form URI
+        if self.method == hdrs.METH_CONNECT:
+            connect_host = self.url.raw_host
+            assert connect_host is not None
+            if helpers.is_ipv6_address(connect_host):
+                connect_host = f"[{connect_host}]"
f"[{connect_host}]" + path = f"{connect_host}:{self.url.port}" + elif self.proxy and not self.is_ssl(): + path = str(self.url) + else: + path = self.url.raw_path + if self.url.raw_query_string: + path += "?" + self.url.raw_query_string + + protocol = conn.protocol + assert protocol is not None + writer = StreamWriter( + protocol, + self.loop, + on_chunk_sent=functools.partial( + self._on_chunk_request_sent, self.method, self.url + ), + on_headers_sent=functools.partial( + self._on_headers_request_sent, self.method, self.url + ), + ) + + if self.compress: + writer.enable_compression(self.compress) + + if self.chunked is not None: + writer.enable_chunking() + + # set default content-type + if ( + self.method in self.POST_METHODS + and hdrs.CONTENT_TYPE not in self.skip_auto_headers + and hdrs.CONTENT_TYPE not in self.headers + ): + self.headers[hdrs.CONTENT_TYPE] = "application/octet-stream" + + # set the connection header + connection = self.headers.get(hdrs.CONNECTION) + if not connection: + if self.keep_alive(): + if self.version == HttpVersion10: + connection = "keep-alive" + else: + if self.version == HttpVersion11: + connection = "close" + + if connection is not None: + self.headers[hdrs.CONNECTION] = connection + + # status + headers + status_line = "{0} {1} HTTP/{2[0]}.{2[1]}".format( + self.method, path, self.version + ) + await writer.write_headers(status_line, self.headers) + + self._writer = self.loop.create_task(self.write_bytes(writer, conn)) + + response_class = self.response_class + assert response_class is not None + self.response = response_class( + self.method, + self.original_url, + writer=self._writer, + continue100=self._continue, + timer=self._timer, + request_info=self.request_info, + traces=self._traces, + loop=self.loop, + session=self._session, + ) + return self.response + + async def close(self) -> None: + if self._writer is not None: + try: + await self._writer + finally: + self._writer = None + + def terminate(self) -> None: + if self._writer is not None: + if not self.loop.is_closed(): + self._writer.cancel() + self._writer = None + + async def _on_chunk_request_sent(self, method: str, url: URL, chunk: bytes) -> None: + for trace in self._traces: + await trace.send_request_chunk_sent(method, url, chunk) + + async def _on_headers_request_sent( + self, method: str, url: URL, headers: "CIMultiDict[str]" + ) -> None: + for trace in self._traces: + await trace.send_request_headers(method, url, headers) + + +class ClientResponse(HeadersMixin): + + # from the Status-Line of the response + version = None # HTTP-Version + status: int = None # type: ignore[assignment] # Status-Code + reason = None # Reason-Phrase + + content: StreamReader = None # type: ignore[assignment] # Payload stream + _headers: "CIMultiDictProxy[str]" = None # type: ignore[assignment] + _raw_headers: RawHeaders = None # type: ignore[assignment] # Response raw headers + + _connection = None # current connection + _source_traceback = None + # setted up by ClientRequest after ClientResponse object creation + # post-init stage allows to not change ctor signature + _closed = True # to allow __del__ for non-initialized properly response + _released = False + + def __init__( + self, + method: str, + url: URL, + *, + writer: "asyncio.Task[None]", + continue100: Optional["asyncio.Future[bool]"], + timer: BaseTimerContext, + request_info: RequestInfo, + traces: List["Trace"], + loop: asyncio.AbstractEventLoop, + session: "ClientSession", + ) -> None: + assert isinstance(url, URL) + + self.method = method + 
+        self.cookies: SimpleCookie[str] = SimpleCookie()
+
+        self._real_url = url
+        self._url = url.with_fragment(None)
+        self._body: Any = None
+        self._writer: Optional[asyncio.Task[None]] = writer
+        self._continue = continue100  # None by default
+        self._closed = True
+        self._history: Tuple[ClientResponse, ...] = ()
+        self._request_info = request_info
+        self._timer = timer if timer is not None else TimerNoop()
+        self._cache: Dict[str, Any] = {}
+        self._traces = traces
+        self._loop = loop
+        # store a reference to session #1985
+        self._session: Optional[ClientSession] = session
+        if loop.get_debug():
+            self._source_traceback = traceback.extract_stack(sys._getframe(1))
+
+    @reify
+    def url(self) -> URL:
+        return self._url
+
+    @reify
+    def url_obj(self) -> URL:
+        warnings.warn("Deprecated, use .url #1654", DeprecationWarning, stacklevel=2)
+        return self._url
+
+    @reify
+    def real_url(self) -> URL:
+        return self._real_url
+
+    @reify
+    def host(self) -> str:
+        assert self._url.host is not None
+        return self._url.host
+
+    @reify
+    def headers(self) -> "CIMultiDictProxy[str]":
+        return self._headers
+
+    @reify
+    def raw_headers(self) -> RawHeaders:
+        return self._raw_headers
+
+    @reify
+    def request_info(self) -> RequestInfo:
+        return self._request_info
+
+    @reify
+    def content_disposition(self) -> Optional[ContentDisposition]:
+        raw = self._headers.get(hdrs.CONTENT_DISPOSITION)
+        if raw is None:
+            return None
+        disposition_type, params_dct = multipart.parse_content_disposition(raw)
+        params = MappingProxyType(params_dct)
+        filename = multipart.content_disposition_filename(params)
+        return ContentDisposition(disposition_type, params, filename)
+
+    def __del__(self, _warnings: Any = warnings) -> None:
+        if self._closed:
+            return
+
+        if self._connection is not None:
+            self._connection.release()
+            self._cleanup_writer()
+
+            if self._loop.get_debug():
+                if PY_36:
+                    kwargs = {"source": self}
+                else:
+                    kwargs = {}
+                _warnings.warn(f"Unclosed response {self!r}", ResourceWarning, **kwargs)
+                context = {"client_response": self, "message": "Unclosed response"}
+                if self._source_traceback:
+                    context["source_traceback"] = self._source_traceback
+                self._loop.call_exception_handler(context)
+
+    def __repr__(self) -> str:
+        out = io.StringIO()
+        ascii_encodable_url = str(self.url)
+        if self.reason:
+            ascii_encodable_reason = self.reason.encode(
+                "ascii", "backslashreplace"
+            ).decode("ascii")
+        else:
+            ascii_encodable_reason = self.reason
+        print(
+            "<ClientResponse({}) [{} {}]>".format(
+                ascii_encodable_url, self.status, ascii_encodable_reason
+            ),
+            file=out,
+        )
+        print(self.headers, file=out)
+        return out.getvalue()
+
+    @property
+    def connection(self) -> Optional["Connection"]:
+        return self._connection
+
+    @reify
+    def history(self) -> Tuple["ClientResponse", ...]:
+        """A sequence of responses, if redirects occurred."""
+        return self._history
+
+    @reify
+    def links(self) -> "MultiDictProxy[MultiDictProxy[Union[str, URL]]]":
+        links_str = ", ".join(self.headers.getall("link", []))
+
+        if not links_str:
+            return MultiDictProxy(MultiDict())
+
+        links: MultiDict[MultiDictProxy[Union[str, URL]]] = MultiDict()
+
+        for val in re.split(r",(?=\s*<)", links_str):
+            match = re.match(r"\s*<(.*)>(.*)", val)
+            if match is None:  # pragma: no cover
+                # the check exists to suppress mypy error
+                continue
+            url, params_str = match.groups()
+            params = params_str.split(";")[1:]
+
+            link: MultiDict[Union[str, URL]] = MultiDict()
+
+            for param in params:
+                match = re.match(r"^\s*(\S*)\s*=\s*(['\"]?)(.*?)(\2)\s*$", param, re.M)
+                if match is None:  # pragma: 
no cover
+                    # the check exists to suppress mypy error
+                    continue
+                key, _, value, _ = match.groups()
+
+                link.add(key, value)
+
+            key = link.get("rel", url)  # type: ignore[assignment]
+
+            link.add("url", self.url.join(URL(url)))
+
+            links.add(key, MultiDictProxy(link))
+
+        return MultiDictProxy(links)
+
+    async def start(self, connection: "Connection") -> "ClientResponse":
+        """Start response processing."""
+        self._closed = False
+        self._protocol = connection.protocol
+        self._connection = connection
+
+        with self._timer:
+            while True:
+                # read response
+                try:
+                    protocol = self._protocol
+                    message, payload = await protocol.read()  # type: ignore[union-attr]
+                except http.HttpProcessingError as exc:
+                    raise ClientResponseError(
+                        self.request_info,
+                        self.history,
+                        status=exc.code,
+                        message=exc.message,
+                        headers=exc.headers,
+                    ) from exc
+
+                if message.code < 100 or message.code > 199 or message.code == 101:
+                    break
+
+                if self._continue is not None:
+                    set_result(self._continue, True)
+                    self._continue = None
+
+        # payload eof handler
+        payload.on_eof(self._response_eof)
+
+        # response status
+        self.version = message.version
+        self.status = message.code
+        self.reason = message.reason
+
+        # headers
+        self._headers = message.headers  # type is CIMultiDictProxy
+        self._raw_headers = message.raw_headers  # type is Tuple[bytes, bytes]
+
+        # payload
+        self.content = payload
+
+        # cookies
+        for hdr in self.headers.getall(hdrs.SET_COOKIE, ()):
+            try:
+                self.cookies.load(hdr)
+            except CookieError as exc:
+                client_logger.warning("Can not load response cookies: %s", exc)
+        return self
+
+    def _response_eof(self) -> None:
+        if self._closed:
+            return
+
+        if self._connection is not None:
+            # websocket, protocol could be None because
+            # connection could be detached
+            if (
+                self._connection.protocol is not None
+                and self._connection.protocol.upgraded
+            ):
+                return
+
+            self._connection.release()
+            self._connection = None
+
+        self._closed = True
+        self._cleanup_writer()
+
+    @property
+    def closed(self) -> bool:
+        return self._closed
+
+    def close(self) -> None:
+        if not self._released:
+            self._notify_content()
+        if self._closed:
+            return
+
+        self._closed = True
+        if self._loop is None or self._loop.is_closed():
+            return
+
+        if self._connection is not None:
+            self._connection.close()
+            self._connection = None
+        self._cleanup_writer()
+
+    def release(self) -> Any:
+        if not self._released:
+            self._notify_content()
+        if self._closed:
+            return noop()
+
+        self._closed = True
+        if self._connection is not None:
+            self._connection.release()
+            self._connection = None
+
+        self._cleanup_writer()
+        return noop()
+
+    @property
+    def ok(self) -> bool:
+        """Returns ``True`` if ``status`` is less than ``400``, ``False`` if not.
+
+        This is **not** a check for ``200 OK`` but a check that the response
+        status is under 400.
+ """ + return 400 > self.status + + def raise_for_status(self) -> None: + if not self.ok: + # reason should always be not None for a started response + assert self.reason is not None + self.release() + raise ClientResponseError( + self.request_info, + self.history, + status=self.status, + message=self.reason, + headers=self.headers, + ) + + def _cleanup_writer(self) -> None: + if self._writer is not None: + self._writer.cancel() + self._writer = None + self._session = None + + def _notify_content(self) -> None: + content = self.content + if content and content.exception() is None: + content.set_exception(ClientConnectionError("Connection closed")) + self._released = True + + async def wait_for_close(self) -> None: + if self._writer is not None: + try: + await self._writer + finally: + self._writer = None + self.release() + + async def read(self) -> bytes: + """Read response payload.""" + if self._body is None: + try: + self._body = await self.content.read() + for trace in self._traces: + await trace.send_response_chunk_received( + self.method, self.url, self._body + ) + except BaseException: + self.close() + raise + elif self._released: + raise ClientConnectionError("Connection closed") + + return self._body # type: ignore[no-any-return] + + def get_encoding(self) -> str: + ctype = self.headers.get(hdrs.CONTENT_TYPE, "").lower() + mimetype = helpers.parse_mimetype(ctype) + + encoding = mimetype.parameters.get("charset") + if encoding: + try: + codecs.lookup(encoding) + except LookupError: + encoding = None + if not encoding: + if mimetype.type == "application" and ( + mimetype.subtype == "json" or mimetype.subtype == "rdap" + ): + # RFC 7159 states that the default encoding is UTF-8. + # RFC 7483 defines application/rdap+json + encoding = "utf-8" + elif self._body is None: + raise RuntimeError( + "Cannot guess the encoding of " "a not yet read body" + ) + else: + encoding = chardet.detect(self._body)["encoding"] + if not encoding: + encoding = "utf-8" + + return encoding + + async def text(self, encoding: Optional[str] = None, errors: str = "strict") -> str: + """Read response payload and decode.""" + if self._body is None: + await self.read() + + if encoding is None: + encoding = self.get_encoding() + + return self._body.decode( # type: ignore[no-any-return,union-attr] + encoding, errors=errors + ) + + async def json( + self, + *, + encoding: Optional[str] = None, + loads: JSONDecoder = DEFAULT_JSON_DECODER, + content_type: Optional[str] = "application/json", + ) -> Any: + """Read and decodes JSON response.""" + if self._body is None: + await self.read() + + if content_type: + ctype = self.headers.get(hdrs.CONTENT_TYPE, "").lower() + if not _is_expected_content_type(ctype, content_type): + raise ContentTypeError( + self.request_info, + self.history, + message=( + "Attempt to decode JSON with " "unexpected mimetype: %s" % ctype + ), + headers=self.headers, + ) + + stripped = self._body.strip() # type: ignore[union-attr] + if not stripped: + return None + + if encoding is None: + encoding = self.get_encoding() + + return loads(stripped.decode(encoding)) + + async def __aenter__(self) -> "ClientResponse": + return self + + async def __aexit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + # similar to _RequestContextManager, we do not need to check + # for exceptions, response object can close connection + # if state is broken + self.release() diff --git 
a/env/lib/python3.8/site-packages/aiohttp/client_ws.py b/env/lib/python3.8/site-packages/aiohttp/client_ws.py new file mode 100644 index 00000000..9a8ba84c --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/client_ws.py @@ -0,0 +1,300 @@ +"""WebSocket client for asyncio.""" + +import asyncio +from typing import Any, Optional, cast + +import async_timeout + +from .client_exceptions import ClientError +from .client_reqrep import ClientResponse +from .helpers import call_later, set_result +from .http import ( + WS_CLOSED_MESSAGE, + WS_CLOSING_MESSAGE, + WebSocketError, + WSCloseCode, + WSMessage, + WSMsgType, +) +from .http_websocket import WebSocketWriter # WSMessage +from .streams import EofStream, FlowControlDataQueue +from .typedefs import ( + DEFAULT_JSON_DECODER, + DEFAULT_JSON_ENCODER, + JSONDecoder, + JSONEncoder, +) + + +class ClientWebSocketResponse: + def __init__( + self, + reader: "FlowControlDataQueue[WSMessage]", + writer: WebSocketWriter, + protocol: Optional[str], + response: ClientResponse, + timeout: float, + autoclose: bool, + autoping: bool, + loop: asyncio.AbstractEventLoop, + *, + receive_timeout: Optional[float] = None, + heartbeat: Optional[float] = None, + compress: int = 0, + client_notakeover: bool = False, + ) -> None: + self._response = response + self._conn = response.connection + + self._writer = writer + self._reader = reader + self._protocol = protocol + self._closed = False + self._closing = False + self._close_code: Optional[int] = None + self._timeout = timeout + self._receive_timeout = receive_timeout + self._autoclose = autoclose + self._autoping = autoping + self._heartbeat = heartbeat + self._heartbeat_cb: Optional[asyncio.TimerHandle] = None + if heartbeat is not None: + self._pong_heartbeat = heartbeat / 2.0 + self._pong_response_cb: Optional[asyncio.TimerHandle] = None + self._loop = loop + self._waiting: Optional[asyncio.Future[bool]] = None + self._exception: Optional[BaseException] = None + self._compress = compress + self._client_notakeover = client_notakeover + + self._reset_heartbeat() + + def _cancel_heartbeat(self) -> None: + if self._pong_response_cb is not None: + self._pong_response_cb.cancel() + self._pong_response_cb = None + + if self._heartbeat_cb is not None: + self._heartbeat_cb.cancel() + self._heartbeat_cb = None + + def _reset_heartbeat(self) -> None: + self._cancel_heartbeat() + + if self._heartbeat is not None: + self._heartbeat_cb = call_later( + self._send_heartbeat, self._heartbeat, self._loop + ) + + def _send_heartbeat(self) -> None: + if self._heartbeat is not None and not self._closed: + # fire-and-forget a task is not perfect but maybe ok for + # sending ping. Otherwise we need a long-living heartbeat + # task in the class. 
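+            # Illustration (hypothetical values, not upstream code): a socket
+            # opened with ws_connect(url, heartbeat=30.0) writes a PING every
+            # 30s and expects the matching PONG within heartbeat / 2 == 15s;
+            # otherwise _pong_not_received() records ABNORMAL_CLOSURE and
+            # closes the response.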
+ self._loop.create_task(self._writer.ping()) + + if self._pong_response_cb is not None: + self._pong_response_cb.cancel() + self._pong_response_cb = call_later( + self._pong_not_received, self._pong_heartbeat, self._loop + ) + + def _pong_not_received(self) -> None: + if not self._closed: + self._closed = True + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + self._exception = asyncio.TimeoutError() + self._response.close() + + @property + def closed(self) -> bool: + return self._closed + + @property + def close_code(self) -> Optional[int]: + return self._close_code + + @property + def protocol(self) -> Optional[str]: + return self._protocol + + @property + def compress(self) -> int: + return self._compress + + @property + def client_notakeover(self) -> bool: + return self._client_notakeover + + def get_extra_info(self, name: str, default: Any = None) -> Any: + """extra info from connection transport""" + conn = self._response.connection + if conn is None: + return default + transport = conn.transport + if transport is None: + return default + return transport.get_extra_info(name, default) + + def exception(self) -> Optional[BaseException]: + return self._exception + + async def ping(self, message: bytes = b"") -> None: + await self._writer.ping(message) + + async def pong(self, message: bytes = b"") -> None: + await self._writer.pong(message) + + async def send_str(self, data: str, compress: Optional[int] = None) -> None: + if not isinstance(data, str): + raise TypeError("data argument must be str (%r)" % type(data)) + await self._writer.send(data, binary=False, compress=compress) + + async def send_bytes(self, data: bytes, compress: Optional[int] = None) -> None: + if not isinstance(data, (bytes, bytearray, memoryview)): + raise TypeError("data argument must be byte-ish (%r)" % type(data)) + await self._writer.send(data, binary=True, compress=compress) + + async def send_json( + self, + data: Any, + compress: Optional[int] = None, + *, + dumps: JSONEncoder = DEFAULT_JSON_ENCODER, + ) -> None: + await self.send_str(dumps(data), compress=compress) + + async def close(self, *, code: int = WSCloseCode.OK, message: bytes = b"") -> bool: + # we need to break `receive()` cycle first, + # `close()` may be called from different task + if self._waiting is not None and not self._closed: + self._reader.feed_data(WS_CLOSING_MESSAGE, 0) + await self._waiting + + if not self._closed: + self._cancel_heartbeat() + self._closed = True + try: + await self._writer.close(code, message) + except asyncio.CancelledError: + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + self._response.close() + raise + except Exception as exc: + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + self._exception = exc + self._response.close() + return True + + if self._closing: + self._response.close() + return True + + while True: + try: + async with async_timeout.timeout(self._timeout): + msg = await self._reader.read() + except asyncio.CancelledError: + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + self._response.close() + raise + except Exception as exc: + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + self._exception = exc + self._response.close() + return True + + if msg.type == WSMsgType.CLOSE: + self._close_code = msg.data + self._response.close() + return True + else: + return False + + async def receive(self, timeout: Optional[float] = None) -> WSMessage: + while True: + if self._waiting is not None: + raise RuntimeError("Concurrent call to receive() is not allowed") + + if self._closed: + return 
WS_CLOSED_MESSAGE + elif self._closing: + await self.close() + return WS_CLOSED_MESSAGE + + try: + self._waiting = self._loop.create_future() + try: + async with async_timeout.timeout(timeout or self._receive_timeout): + msg = await self._reader.read() + self._reset_heartbeat() + finally: + waiter = self._waiting + self._waiting = None + set_result(waiter, True) + except (asyncio.CancelledError, asyncio.TimeoutError): + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + raise + except EofStream: + self._close_code = WSCloseCode.OK + await self.close() + return WSMessage(WSMsgType.CLOSED, None, None) + except ClientError: + self._closed = True + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + return WS_CLOSED_MESSAGE + except WebSocketError as exc: + self._close_code = exc.code + await self.close(code=exc.code) + return WSMessage(WSMsgType.ERROR, exc, None) + except Exception as exc: + self._exception = exc + self._closing = True + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + await self.close() + return WSMessage(WSMsgType.ERROR, exc, None) + + if msg.type == WSMsgType.CLOSE: + self._closing = True + self._close_code = msg.data + if not self._closed and self._autoclose: + await self.close() + elif msg.type == WSMsgType.CLOSING: + self._closing = True + elif msg.type == WSMsgType.PING and self._autoping: + await self.pong(msg.data) + continue + elif msg.type == WSMsgType.PONG and self._autoping: + continue + + return msg + + async def receive_str(self, *, timeout: Optional[float] = None) -> str: + msg = await self.receive(timeout) + if msg.type != WSMsgType.TEXT: + raise TypeError(f"Received message {msg.type}:{msg.data!r} is not str") + return cast(str, msg.data) + + async def receive_bytes(self, *, timeout: Optional[float] = None) -> bytes: + msg = await self.receive(timeout) + if msg.type != WSMsgType.BINARY: + raise TypeError(f"Received message {msg.type}:{msg.data!r} is not bytes") + return cast(bytes, msg.data) + + async def receive_json( + self, + *, + loads: JSONDecoder = DEFAULT_JSON_DECODER, + timeout: Optional[float] = None, + ) -> Any: + data = await self.receive_str(timeout=timeout) + return loads(data) + + def __aiter__(self) -> "ClientWebSocketResponse": + return self + + async def __anext__(self) -> WSMessage: + msg = await self.receive() + if msg.type in (WSMsgType.CLOSE, WSMsgType.CLOSING, WSMsgType.CLOSED): + raise StopAsyncIteration + return msg diff --git a/env/lib/python3.8/site-packages/aiohttp/connector.py b/env/lib/python3.8/site-packages/aiohttp/connector.py new file mode 100644 index 00000000..bf40689d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/connector.py @@ -0,0 +1,1453 @@ +import asyncio +import functools +import random +import sys +import traceback +import warnings +from collections import defaultdict, deque +from contextlib import suppress +from http.cookies import SimpleCookie +from itertools import cycle, islice +from time import monotonic +from types import TracebackType +from typing import ( + TYPE_CHECKING, + Any, + Awaitable, + Callable, + DefaultDict, + Dict, + Iterator, + List, + Optional, + Set, + Tuple, + Type, + Union, + cast, +) + +import attr + +from . 
import hdrs, helpers +from .abc import AbstractResolver +from .client_exceptions import ( + ClientConnectionError, + ClientConnectorCertificateError, + ClientConnectorError, + ClientConnectorSSLError, + ClientHttpProxyError, + ClientProxyConnectionError, + ServerFingerprintMismatch, + UnixClientConnectorError, + cert_errors, + ssl_errors, +) +from .client_proto import ResponseHandler +from .client_reqrep import ClientRequest, Fingerprint, _merge_ssl_params +from .helpers import ( + PY_36, + ceil_timeout, + get_running_loop, + is_ip_address, + noop, + sentinel, +) +from .http import RESPONSES +from .locks import EventResultOrError +from .resolver import DefaultResolver + +try: + import ssl + + SSLContext = ssl.SSLContext +except ImportError: # pragma: no cover + ssl = None # type: ignore[assignment] + SSLContext = object # type: ignore[misc,assignment] + + +__all__ = ("BaseConnector", "TCPConnector", "UnixConnector", "NamedPipeConnector") + + +if TYPE_CHECKING: # pragma: no cover + from .client import ClientTimeout + from .client_reqrep import ConnectionKey + from .tracing import Trace + + +class _DeprecationWaiter: + __slots__ = ("_awaitable", "_awaited") + + def __init__(self, awaitable: Awaitable[Any]) -> None: + self._awaitable = awaitable + self._awaited = False + + def __await__(self) -> Any: + self._awaited = True + return self._awaitable.__await__() + + def __del__(self) -> None: + if not self._awaited: + warnings.warn( + "Connector.close() is a coroutine, " + "please use await connector.close()", + DeprecationWarning, + ) + + +class Connection: + + _source_traceback = None + _transport = None + + def __init__( + self, + connector: "BaseConnector", + key: "ConnectionKey", + protocol: ResponseHandler, + loop: asyncio.AbstractEventLoop, + ) -> None: + self._key = key + self._connector = connector + self._loop = loop + self._protocol: Optional[ResponseHandler] = protocol + self._callbacks: List[Callable[[], None]] = [] + + if loop.get_debug(): + self._source_traceback = traceback.extract_stack(sys._getframe(1)) + + def __repr__(self) -> str: + return f"Connection<{self._key}>" + + def __del__(self, _warnings: Any = warnings) -> None: + if self._protocol is not None: + if PY_36: + kwargs = {"source": self} + else: + kwargs = {} + _warnings.warn(f"Unclosed connection {self!r}", ResourceWarning, **kwargs) + if self._loop.is_closed(): + return + + self._connector._release(self._key, self._protocol, should_close=True) + + context = {"client_connection": self, "message": "Unclosed connection"} + if self._source_traceback is not None: + context["source_traceback"] = self._source_traceback + self._loop.call_exception_handler(context) + + @property + def loop(self) -> asyncio.AbstractEventLoop: + warnings.warn( + "connector.loop property is deprecated", DeprecationWarning, stacklevel=2 + ) + return self._loop + + @property + def transport(self) -> Optional[asyncio.Transport]: + if self._protocol is None: + return None + return self._protocol.transport + + @property + def protocol(self) -> Optional[ResponseHandler]: + return self._protocol + + def add_callback(self, callback: Callable[[], None]) -> None: + if callback is not None: + self._callbacks.append(callback) + + def _notify_release(self) -> None: + callbacks, self._callbacks = self._callbacks[:], [] + + for cb in callbacks: + with suppress(Exception): + cb() + + def close(self) -> None: + self._notify_release() + + if self._protocol is not None: + self._connector._release(self._key, self._protocol, should_close=True) + self._protocol = 
None + + def release(self) -> None: + self._notify_release() + + if self._protocol is not None: + self._connector._release( + self._key, self._protocol, should_close=self._protocol.should_close + ) + self._protocol = None + + @property + def closed(self) -> bool: + return self._protocol is None or not self._protocol.is_connected() + + +class _TransportPlaceholder: + """placeholder for BaseConnector.connect function""" + + def close(self) -> None: + pass + + +class BaseConnector: + """Base connector class. + + keepalive_timeout - (optional) Keep-alive timeout. + force_close - Set to True to force close and do reconnect + after each request (and between redirects). + limit - The total number of simultaneous connections. + limit_per_host - Number of simultaneous connections to one host. + enable_cleanup_closed - Enables clean-up closed ssl transports. + Disabled by default. + loop - Optional event loop. + """ + + _closed = True # prevent AttributeError in __del__ if ctor was failed + _source_traceback = None + + # abort transport after 2 seconds (cleanup broken connections) + _cleanup_closed_period = 2.0 + + def __init__( + self, + *, + keepalive_timeout: Union[object, None, float] = sentinel, + force_close: bool = False, + limit: int = 100, + limit_per_host: int = 0, + enable_cleanup_closed: bool = False, + loop: Optional[asyncio.AbstractEventLoop] = None, + ) -> None: + + if force_close: + if keepalive_timeout is not None and keepalive_timeout is not sentinel: + raise ValueError( + "keepalive_timeout cannot " "be set if force_close is True" + ) + else: + if keepalive_timeout is sentinel: + keepalive_timeout = 15.0 + + loop = get_running_loop(loop) + + self._closed = False + if loop.get_debug(): + self._source_traceback = traceback.extract_stack(sys._getframe(1)) + + self._conns: Dict[ConnectionKey, List[Tuple[ResponseHandler, float]]] = {} + self._limit = limit + self._limit_per_host = limit_per_host + self._acquired: Set[ResponseHandler] = set() + self._acquired_per_host: DefaultDict[ + ConnectionKey, Set[ResponseHandler] + ] = defaultdict(set) + self._keepalive_timeout = cast(float, keepalive_timeout) + self._force_close = force_close + + # {host_key: FIFO list of waiters} + self._waiters = defaultdict(deque) # type: ignore[var-annotated] + + self._loop = loop + self._factory = functools.partial(ResponseHandler, loop=loop) + + self.cookies: SimpleCookie[str] = SimpleCookie() + + # start keep-alive connection cleanup task + self._cleanup_handle: Optional[asyncio.TimerHandle] = None + + # start cleanup closed transports task + self._cleanup_closed_handle: Optional[asyncio.TimerHandle] = None + self._cleanup_closed_disabled = not enable_cleanup_closed + self._cleanup_closed_transports: List[Optional[asyncio.Transport]] = [] + self._cleanup_closed() + + def __del__(self, _warnings: Any = warnings) -> None: + if self._closed: + return + if not self._conns: + return + + conns = [repr(c) for c in self._conns.values()] + + self._close() + + if PY_36: + kwargs = {"source": self} + else: + kwargs = {} + _warnings.warn(f"Unclosed connector {self!r}", ResourceWarning, **kwargs) + context = { + "connector": self, + "connections": conns, + "message": "Unclosed connector", + } + if self._source_traceback is not None: + context["source_traceback"] = self._source_traceback + self._loop.call_exception_handler(context) + + def __enter__(self) -> "BaseConnector": + warnings.warn( + '"with Connector():" is deprecated, ' + 'use "async with Connector():" instead', + DeprecationWarning, + ) + return self + + def 
__exit__(self, *exc: Any) -> None:
+        self._close()
+
+    async def __aenter__(self) -> "BaseConnector":
+        return self
+
+    async def __aexit__(
+        self,
+        exc_type: Optional[Type[BaseException]] = None,
+        exc_value: Optional[BaseException] = None,
+        exc_traceback: Optional[TracebackType] = None,
+    ) -> None:
+        await self.close()
+
+    @property
+    def force_close(self) -> bool:
+        """Ultimately close connection on releasing if True."""
+        return self._force_close
+
+    @property
+    def limit(self) -> int:
+        """The total number of simultaneous connections.
+
+        If limit is 0 the connector has no limit.
+        The default limit size is 100.
+        """
+        return self._limit
+
+    @property
+    def limit_per_host(self) -> int:
+        """The limit for simultaneous connections to the same endpoint.
+
+        Endpoints are the same if they have an equal
+        (host, port, is_ssl) triple.
+        """
+        return self._limit_per_host
+
+    def _cleanup(self) -> None:
+        """Cleanup unused transports."""
+        if self._cleanup_handle:
+            self._cleanup_handle.cancel()
+            # _cleanup_handle should be unset, otherwise _release() will not
+            # recreate it ever!
+            self._cleanup_handle = None
+
+        now = self._loop.time()
+        timeout = self._keepalive_timeout
+
+        if self._conns:
+            connections = {}
+            deadline = now - timeout
+            for key, conns in self._conns.items():
+                alive = []
+                for proto, use_time in conns:
+                    if proto.is_connected():
+                        if use_time - deadline < 0:
+                            transport = proto.transport
+                            proto.close()
+                            if key.is_ssl and not self._cleanup_closed_disabled:
+                                self._cleanup_closed_transports.append(transport)
+                        else:
+                            alive.append((proto, use_time))
+                    else:
+                        transport = proto.transport
+                        proto.close()
+                        if key.is_ssl and not self._cleanup_closed_disabled:
+                            self._cleanup_closed_transports.append(transport)
+
+                if alive:
+                    connections[key] = alive
+
+            self._conns = connections
+
+        if self._conns:
+            self._cleanup_handle = helpers.weakref_handle(
+                self, "_cleanup", timeout, self._loop
+            )
+
+    def _drop_acquired_per_host(
+        self, key: "ConnectionKey", val: ResponseHandler
+    ) -> None:
+        acquired_per_host = self._acquired_per_host
+        if key not in acquired_per_host:
+            return
+        conns = acquired_per_host[key]
+        conns.remove(val)
+        if not conns:
+            del self._acquired_per_host[key]
+
+    def _cleanup_closed(self) -> None:
+        """Double confirmation for transport close.
+
+        Some broken ssl servers may leave socket open without proper close.
+ """ + if self._cleanup_closed_handle: + self._cleanup_closed_handle.cancel() + + for transport in self._cleanup_closed_transports: + if transport is not None: + transport.abort() + + self._cleanup_closed_transports = [] + + if not self._cleanup_closed_disabled: + self._cleanup_closed_handle = helpers.weakref_handle( + self, "_cleanup_closed", self._cleanup_closed_period, self._loop + ) + + def close(self) -> Awaitable[None]: + """Close all opened transports.""" + self._close() + return _DeprecationWaiter(noop()) + + def _close(self) -> None: + if self._closed: + return + + self._closed = True + + try: + if self._loop.is_closed(): + return + + # cancel cleanup task + if self._cleanup_handle: + self._cleanup_handle.cancel() + + # cancel cleanup close task + if self._cleanup_closed_handle: + self._cleanup_closed_handle.cancel() + + for data in self._conns.values(): + for proto, t0 in data: + proto.close() + + for proto in self._acquired: + proto.close() + + for transport in self._cleanup_closed_transports: + if transport is not None: + transport.abort() + + finally: + self._conns.clear() + self._acquired.clear() + self._waiters.clear() + self._cleanup_handle = None + self._cleanup_closed_transports.clear() + self._cleanup_closed_handle = None + + @property + def closed(self) -> bool: + """Is connector closed. + + A readonly property. + """ + return self._closed + + def _available_connections(self, key: "ConnectionKey") -> int: + """ + Return number of available connections. + + The limit, limit_per_host and the connection key are taken into account. + + If it returns less than 1 means that there are no connections + available. + """ + if self._limit: + # total calc available connections + available = self._limit - len(self._acquired) + + # check limit per host + if ( + self._limit_per_host + and available > 0 + and key in self._acquired_per_host + ): + acquired = self._acquired_per_host.get(key) + assert acquired is not None + available = self._limit_per_host - len(acquired) + + elif self._limit_per_host and key in self._acquired_per_host: + # check limit per host + acquired = self._acquired_per_host.get(key) + assert acquired is not None + available = self._limit_per_host - len(acquired) + else: + available = 1 + + return available + + async def connect( + self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" + ) -> Connection: + """Get from pool or create new connection.""" + key = req.connection_key + available = self._available_connections(key) + + # Wait if there are no available connections or if there are/were + # waiters (i.e. don't steal connection from a waiter about to wake up) + if available <= 0 or key in self._waiters: + fut = self._loop.create_future() + + # This connection will now count towards the limit. 
+            self._waiters[key].append(fut)
+
+            if traces:
+                for trace in traces:
+                    await trace.send_connection_queued_start()
+
+            try:
+                await fut
+            except BaseException as e:
+                if key in self._waiters:
+                    # remove a waiter even if it was cancelled, normally it's
+                    # removed when it's notified
+                    try:
+                        self._waiters[key].remove(fut)
+                    except ValueError:  # fut may no longer be in list
+                        pass
+
+                raise e
+            finally:
+                if key in self._waiters and not self._waiters[key]:
+                    del self._waiters[key]
+
+            if traces:
+                for trace in traces:
+                    await trace.send_connection_queued_end()
+
+        proto = self._get(key)
+        if proto is None:
+            placeholder = cast(ResponseHandler, _TransportPlaceholder())
+            self._acquired.add(placeholder)
+            self._acquired_per_host[key].add(placeholder)
+
+            if traces:
+                for trace in traces:
+                    await trace.send_connection_create_start()
+
+            try:
+                proto = await self._create_connection(req, traces, timeout)
+                if self._closed:
+                    proto.close()
+                    raise ClientConnectionError("Connector is closed.")
+            except BaseException:
+                if not self._closed:
+                    self._acquired.remove(placeholder)
+                    self._drop_acquired_per_host(key, placeholder)
+                    self._release_waiter()
+                raise
+            else:
+                if not self._closed:
+                    self._acquired.remove(placeholder)
+                    self._drop_acquired_per_host(key, placeholder)
+
+            if traces:
+                for trace in traces:
+                    await trace.send_connection_create_end()
+        else:
+            if traces:
+                # Acquire the connection to prevent race conditions with limits
+                placeholder = cast(ResponseHandler, _TransportPlaceholder())
+                self._acquired.add(placeholder)
+                self._acquired_per_host[key].add(placeholder)
+                for trace in traces:
+                    await trace.send_connection_reuseconn()
+                self._acquired.remove(placeholder)
+                self._drop_acquired_per_host(key, placeholder)
+
+        self._acquired.add(proto)
+        self._acquired_per_host[key].add(proto)
+        return Connection(self, key, proto, self._loop)
+
+    def _get(self, key: "ConnectionKey") -> Optional[ResponseHandler]:
+        try:
+            conns = self._conns[key]
+        except KeyError:
+            return None
+
+        t1 = self._loop.time()
+        while conns:
+            proto, t0 = conns.pop()
+            if proto.is_connected():
+                if t1 - t0 > self._keepalive_timeout:
+                    transport = proto.transport
+                    proto.close()
+                    # only for SSL transports
+                    if key.is_ssl and not self._cleanup_closed_disabled:
+                        self._cleanup_closed_transports.append(transport)
+                else:
+                    if not conns:
+                        # The very last connection was reclaimed: drop the key
+                        del self._conns[key]
+                    return proto
+            else:
+                transport = proto.transport
+                proto.close()
+                if key.is_ssl and not self._cleanup_closed_disabled:
+                    self._cleanup_closed_transports.append(transport)
+
+        # No more connections: drop the key
+        del self._conns[key]
+        return None
+
+    def _release_waiter(self) -> None:
+        """
+        Iterates over all waiters until one to be released is found.
+
+        The one to be released is not finished and
+        belongs to a host that has available connections.
+        """
+        if not self._waiters:
+            return
+
+        # Shuffle the keys so we do not iterate over the hosts
+        # in the same order on every call.
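+        # Sketch (illustrative): with pending waiters for hosts A and B,
+        # the shuffle below gives each host key an equal chance of being
+        # checked first, so one busy host cannot starve the others.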
+        queues = list(self._waiters.keys())
+        random.shuffle(queues)
+
+        for key in queues:
+            if self._available_connections(key) < 1:
+                continue
+
+            waiters = self._waiters[key]
+            while waiters:
+                waiter = waiters.popleft()
+                if not waiter.done():
+                    waiter.set_result(None)
+                    return
+
+    def _release_acquired(self, key: "ConnectionKey", proto: ResponseHandler) -> None:
+        if self._closed:
+            # acquired connection is already released on connector closing
+            return
+
+        try:
+            self._acquired.remove(proto)
+            self._drop_acquired_per_host(key, proto)
+        except KeyError:  # pragma: no cover
+            # this may be a result of the non-deterministic order of object
+            # finalization due to garbage collection.
+            pass
+        else:
+            self._release_waiter()
+
+    def _release(
+        self,
+        key: "ConnectionKey",
+        protocol: ResponseHandler,
+        *,
+        should_close: bool = False,
+    ) -> None:
+        if self._closed:
+            # acquired connection is already released on connector closing
+            return
+
+        self._release_acquired(key, protocol)
+
+        if self._force_close:
+            should_close = True
+
+        if should_close or protocol.should_close:
+            transport = protocol.transport
+            protocol.close()
+
+            if key.is_ssl and not self._cleanup_closed_disabled:
+                self._cleanup_closed_transports.append(transport)
+        else:
+            conns = self._conns.get(key)
+            if conns is None:
+                conns = self._conns[key] = []
+            conns.append((protocol, self._loop.time()))
+
+            if self._cleanup_handle is None:
+                self._cleanup_handle = helpers.weakref_handle(
+                    self, "_cleanup", self._keepalive_timeout, self._loop
+                )
+
+    async def _create_connection(
+        self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout"
+    ) -> ResponseHandler:
+        raise NotImplementedError()
+
+
+class _DNSCacheTable:
+    def __init__(self, ttl: Optional[float] = None) -> None:
+        self._addrs_rr: Dict[Tuple[str, int], Tuple[Iterator[Dict[str, Any]], int]] = {}
+        self._timestamps: Dict[Tuple[str, int], float] = {}
+        self._ttl = ttl
+
+    def __contains__(self, host: object) -> bool:
+        return host in self._addrs_rr
+
+    def add(self, key: Tuple[str, int], addrs: List[Dict[str, Any]]) -> None:
+        self._addrs_rr[key] = (cycle(addrs), len(addrs))
+
+        if self._ttl:
+            self._timestamps[key] = monotonic()
+
+    def remove(self, key: Tuple[str, int]) -> None:
+        self._addrs_rr.pop(key, None)
+
+        if self._ttl:
+            self._timestamps.pop(key, None)
+
+    def clear(self) -> None:
+        self._addrs_rr.clear()
+        self._timestamps.clear()
+
+    def next_addrs(self, key: Tuple[str, int]) -> List[Dict[str, Any]]:
+        loop, length = self._addrs_rr[key]
+        addrs = list(islice(loop, length))
+        # Consume one more element to shift internal state of `cycle`
+        next(loop)
+        return addrs
+
+    def expired(self, key: Tuple[str, int]) -> bool:
+        if self._ttl is None:
+            return False
+
+        return self._timestamps[key] + self._ttl < monotonic()
+
+
+class TCPConnector(BaseConnector):
+    """TCP connector.
+
+    verify_ssl - Set to True to check ssl certificates.
+    fingerprint - Pass the binary sha256
+        digest of the expected certificate in DER format to verify
+        that the certificate the server presents matches. See also
+        https://en.wikipedia.org/wiki/Transport_Layer_Security#Certificate_pinning
+    resolver - Enable DNS lookups and use this
+        resolver
+    use_dns_cache - Use memory cache for DNS lookups.
+    ttl_dns_cache - Max seconds to keep a cached DNS entry, None for forever.
+    family - socket address family
+    local_addr - local tuple of (host, port) to bind socket to
+
+    keepalive_timeout - (optional) Keep-alive timeout.
+    force_close - Set to True to force close and do reconnect
+        after each request (and between redirects).
+    limit - The total number of simultaneous connections.
+    limit_per_host - Number of simultaneous connections to one host.
+    enable_cleanup_closed - Enables clean-up closed ssl transports.
+        Disabled by default.
+    loop - Optional event loop.
+    """
+
+    def __init__(
+        self,
+        *,
+        verify_ssl: bool = True,
+        fingerprint: Optional[bytes] = None,
+        use_dns_cache: bool = True,
+        ttl_dns_cache: Optional[int] = 10,
+        family: int = 0,
+        ssl_context: Optional[SSLContext] = None,
+        ssl: Union[None, bool, Fingerprint, SSLContext] = None,
+        local_addr: Optional[Tuple[str, int]] = None,
+        resolver: Optional[AbstractResolver] = None,
+        keepalive_timeout: Union[None, float, object] = sentinel,
+        force_close: bool = False,
+        limit: int = 100,
+        limit_per_host: int = 0,
+        enable_cleanup_closed: bool = False,
+        loop: Optional[asyncio.AbstractEventLoop] = None,
+    ):
+        super().__init__(
+            keepalive_timeout=keepalive_timeout,
+            force_close=force_close,
+            limit=limit,
+            limit_per_host=limit_per_host,
+            enable_cleanup_closed=enable_cleanup_closed,
+            loop=loop,
+        )
+
+        self._ssl = _merge_ssl_params(ssl, verify_ssl, ssl_context, fingerprint)
+        if resolver is None:
+            resolver = DefaultResolver(loop=self._loop)
+        self._resolver = resolver
+
+        self._use_dns_cache = use_dns_cache
+        self._cached_hosts = _DNSCacheTable(ttl=ttl_dns_cache)
+        self._throttle_dns_events: Dict[Tuple[str, int], EventResultOrError] = {}
+        self._family = family
+        self._local_addr = local_addr
+
+    def close(self) -> Awaitable[None]:
+        """Close all ongoing DNS calls."""
+        for ev in self._throttle_dns_events.values():
+            ev.cancel()
+
+        return super().close()
+
+    @property
+    def family(self) -> int:
+        """Socket family like AF_INET."""
+        return self._family
+
+    @property
+    def use_dns_cache(self) -> bool:
+        """True if local DNS caching is enabled."""
+        return self._use_dns_cache
+
+    def clear_dns_cache(
+        self, host: Optional[str] = None, port: Optional[int] = None
+    ) -> None:
+        """Remove specified host/port or clear all dns local cache."""
+        if host is not None and port is not None:
+            self._cached_hosts.remove((host, port))
+        elif host is not None or port is not None:
+            raise ValueError("either both host and port or none of them are allowed")
+        else:
+            self._cached_hosts.clear()
+
+    async def _resolve_host(
+        self, host: str, port: int, traces: Optional[List["Trace"]] = None
+    ) -> List[Dict[str, Any]]:
+        if is_ip_address(host):
+            return [
+                {
+                    "hostname": host,
+                    "host": host,
+                    "port": port,
+                    "family": self._family,
+                    "proto": 0,
+                    "flags": 0,
+                }
+            ]
+
+        if not self._use_dns_cache:
+
+            if traces:
+                for trace in traces:
+                    await trace.send_dns_resolvehost_start(host)
+
+            res = await self._resolver.resolve(host, port, family=self._family)
+
+            if traces:
+                for trace in traces:
+                    await trace.send_dns_resolvehost_end(host)
+
+            return res
+
+        key = (host, port)
+
+        if (key in self._cached_hosts) and (not self._cached_hosts.expired(key)):
+            # get result early, before any await (#4014)
+            result = self._cached_hosts.next_addrs(key)
+
+            if traces:
+                for trace in traces:
+                    await trace.send_dns_cache_hit(host)
+            return result
+
+        if key in self._throttle_dns_events:
+            # get event early, before any await (#4014)
+            event = self._throttle_dns_events[key]
+            if traces:
+                for trace in traces:
+                    await trace.send_dns_cache_hit(host)
+            await event.wait()
+        else:
+            # update dict early, before any await (#4014)
+            self._throttle_dns_events[key] = 
EventResultOrError(self._loop)
+            if traces:
+                for trace in traces:
+                    await trace.send_dns_cache_miss(host)
+            try:
+
+                if traces:
+                    for trace in traces:
+                        await trace.send_dns_resolvehost_start(host)
+
+                addrs = await self._resolver.resolve(host, port, family=self._family)
+                if traces:
+                    for trace in traces:
+                        await trace.send_dns_resolvehost_end(host)
+
+                self._cached_hosts.add(key, addrs)
+                self._throttle_dns_events[key].set()
+            except BaseException as e:
+                # Any DNS exception, regardless of the resolver implementation,
+                # is set on the waiters so they raise the same exception.
+                self._throttle_dns_events[key].set(exc=e)
+                raise
+            finally:
+                self._throttle_dns_events.pop(key)
+
+        return self._cached_hosts.next_addrs(key)
+
+    async def _create_connection(
+        self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout"
+    ) -> ResponseHandler:
+        """Create connection.
+
+        Has same keyword arguments as BaseEventLoop.create_connection.
+        """
+        if req.proxy:
+            _, proto = await self._create_proxy_connection(req, traces, timeout)
+        else:
+            _, proto = await self._create_direct_connection(req, traces, timeout)
+
+        return proto
+
+    @staticmethod
+    @functools.lru_cache(None)
+    def _make_ssl_context(verified: bool) -> SSLContext:
+        if verified:
+            return ssl.create_default_context()
+        else:
+            sslcontext = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+            sslcontext.options |= ssl.OP_NO_SSLv2
+            sslcontext.options |= ssl.OP_NO_SSLv3
+            sslcontext.check_hostname = False
+            sslcontext.verify_mode = ssl.CERT_NONE
+            try:
+                sslcontext.options |= ssl.OP_NO_COMPRESSION
+            except AttributeError as attr_err:
+                warnings.warn(
+                    "{!s}: The Python interpreter is compiled "
+                    "against OpenSSL < 1.0.0. Ref: "
+                    "https://docs.python.org/3/library/ssl.html"
+                    "#ssl.OP_NO_COMPRESSION".format(attr_err),
+                )
+            sslcontext.set_default_verify_paths()
+            return sslcontext
+
+    def _get_ssl_context(self, req: "ClientRequest") -> Optional[SSLContext]:
+        """Logic to get the correct SSL context
+
+        0. if req.ssl is false, return None
+
+        1. if ssl_context is specified in req, use it
+        2. if _ssl_context is specified in self, use it
+        3. otherwise:
+            1. if verify_ssl is not specified in req, use self.ssl_context
+               (will generate a default context according to self.verify_ssl)
+            2. if verify_ssl is True in req, generate a default SSL context
+            3. 
if verify_ssl is False in req, generate a SSL context that + won't verify + """ + if req.is_ssl(): + if ssl is None: # pragma: no cover + raise RuntimeError("SSL is not supported.") + sslcontext = req.ssl + if isinstance(sslcontext, ssl.SSLContext): + return sslcontext + if sslcontext is not None: + # not verified or fingerprinted + return self._make_ssl_context(False) + sslcontext = self._ssl + if isinstance(sslcontext, ssl.SSLContext): + return sslcontext + if sslcontext is not None: + # not verified or fingerprinted + return self._make_ssl_context(False) + return self._make_ssl_context(True) + else: + return None + + def _get_fingerprint(self, req: "ClientRequest") -> Optional["Fingerprint"]: + ret = req.ssl + if isinstance(ret, Fingerprint): + return ret + ret = self._ssl + if isinstance(ret, Fingerprint): + return ret + return None + + async def _wrap_create_connection( + self, + *args: Any, + req: "ClientRequest", + timeout: "ClientTimeout", + client_error: Type[Exception] = ClientConnectorError, + **kwargs: Any, + ) -> Tuple[asyncio.Transport, ResponseHandler]: + try: + async with ceil_timeout(timeout.sock_connect): + return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa + except cert_errors as exc: + raise ClientConnectorCertificateError(req.connection_key, exc) from exc + except ssl_errors as exc: + raise ClientConnectorSSLError(req.connection_key, exc) from exc + except OSError as exc: + if exc.errno is None and isinstance(exc, asyncio.TimeoutError): + raise + raise client_error(req.connection_key, exc) from exc + + def _fail_on_no_start_tls(self, req: "ClientRequest") -> None: + """Raise a :py:exc:`RuntimeError` on missing ``start_tls()``. + + One case is that :py:meth:`asyncio.loop.start_tls` is not yet + implemented under Python 3.6. It is necessary for TLS-in-TLS so + that it is possible to send HTTPS queries through HTTPS proxies. + + This doesn't affect regular HTTP requests, though. + """ + if not req.is_ssl(): + return + + proxy_url = req.proxy + assert proxy_url is not None + if proxy_url.scheme != "https": + return + + self._check_loop_for_start_tls() + + def _check_loop_for_start_tls(self) -> None: + try: + self._loop.start_tls + except AttributeError as attr_exc: + raise RuntimeError( + "An HTTPS request is being sent through an HTTPS proxy. " + "This needs support for TLS in TLS but it is not implemented " + "in your runtime for the stdlib asyncio.\n\n" + "Please upgrade to Python 3.7 or higher. For more details, " + "please see:\n" + "* https://bugs.python.org/issue37179\n" + "* https://github.com/python/cpython/pull/28073\n" + "* https://docs.aiohttp.org/en/stable/" + "client_advanced.html#proxy-support\n" + "* https://github.com/aio-libs/aiohttp/discussions/6044\n", + ) from attr_exc + + def _loop_supports_start_tls(self) -> bool: + try: + self._check_loop_for_start_tls() + except RuntimeError: + return False + else: + return True + + def _warn_about_tls_in_tls( + self, + underlying_transport: asyncio.Transport, + req: "ClientRequest", + ) -> None: + """Issue a warning if the requested URL has HTTPS scheme.""" + if req.request_info.url.scheme != "https": + return + + asyncio_supports_tls_in_tls = getattr( + underlying_transport, + "_start_tls_compatible", + False, + ) + + if asyncio_supports_tls_in_tls: + return + + warnings.warn( + "An HTTPS request is being sent through an HTTPS proxy. " + "This support for TLS in TLS is known to be disabled " + "in the stdlib asyncio. 
This is why you'll probably see " + "an error in the log below.\n\n" + "It is possible to enable it via monkeypatching under " + "Python 3.7 or higher. For more details, see:\n" + "* https://bugs.python.org/issue37179\n" + "* https://github.com/python/cpython/pull/28073\n\n" + "You can temporarily patch this as follows:\n" + "* https://docs.aiohttp.org/en/stable/client_advanced.html#proxy-support\n" + "* https://github.com/aio-libs/aiohttp/discussions/6044\n", + RuntimeWarning, + source=self, + # Why `4`? At least 3 of the calls in the stack originate + # from the methods in this class. + stacklevel=3, + ) + + async def _start_tls_connection( + self, + underlying_transport: asyncio.Transport, + req: "ClientRequest", + timeout: "ClientTimeout", + client_error: Type[Exception] = ClientConnectorError, + ) -> Tuple[asyncio.BaseTransport, ResponseHandler]: + """Wrap the raw TCP transport with TLS.""" + tls_proto = self._factory() # Create a brand new proto for TLS + + # Safety of the `cast()` call here is based on the fact that + # internally `_get_ssl_context()` only returns `None` when + # `req.is_ssl()` evaluates to `False` which is never gonna happen + # in this code path. Of course, it's rather fragile + # maintainability-wise but this is to be solved separately. + sslcontext = cast(ssl.SSLContext, self._get_ssl_context(req)) + + try: + async with ceil_timeout(timeout.sock_connect): + try: + tls_transport = await self._loop.start_tls( + underlying_transport, + tls_proto, + sslcontext, + server_hostname=req.host, + ssl_handshake_timeout=timeout.total, + ) + except BaseException: + # We need to close the underlying transport since + # `start_tls()` probably failed before it had a + # chance to do this: + underlying_transport.close() + raise + except cert_errors as exc: + raise ClientConnectorCertificateError(req.connection_key, exc) from exc + except ssl_errors as exc: + raise ClientConnectorSSLError(req.connection_key, exc) from exc + except OSError as exc: + if exc.errno is None and isinstance(exc, asyncio.TimeoutError): + raise + raise client_error(req.connection_key, exc) from exc + except TypeError as type_err: + # Example cause looks like this: + # TypeError: transport is not supported by start_tls() + + raise ClientConnectionError( + "Cannot initialize a TLS-in-TLS connection to host " + f"{req.host!s}:{req.port:d} through an underlying connection " + f"to an HTTPS proxy {req.proxy!s} ssl:{req.ssl or 'default'} " + f"[{type_err!s}]" + ) from type_err + else: + tls_proto.connection_made( + tls_transport + ) # Kick the state machine of the new TLS protocol + + return tls_transport, tls_proto + + async def _create_direct_connection( + self, + req: "ClientRequest", + traces: List["Trace"], + timeout: "ClientTimeout", + *, + client_error: Type[Exception] = ClientConnectorError, + ) -> Tuple[asyncio.Transport, ResponseHandler]: + sslcontext = self._get_ssl_context(req) + fingerprint = self._get_fingerprint(req) + + host = req.url.raw_host + assert host is not None + port = req.port + assert port is not None + host_resolved = asyncio.ensure_future( + self._resolve_host(host, port, traces=traces), loop=self._loop + ) + try: + # Cancelling this lookup should not cancel the underlying lookup + # or else the cancel event will get broadcast to all the waiters + # across all connections. 
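+            # (asyncio.shield() detaches cancellation: cancelling this await
+            # raises CancelledError here while host_resolved keeps running;
+            # drop_exception below consumes its eventual result so nothing
+            # is logged as an un-retrieved exception.)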
+ hosts = await asyncio.shield(host_resolved) + except asyncio.CancelledError: + + def drop_exception(fut: "asyncio.Future[List[Dict[str, Any]]]") -> None: + with suppress(Exception, asyncio.CancelledError): + fut.result() + + host_resolved.add_done_callback(drop_exception) + raise + except OSError as exc: + if exc.errno is None and isinstance(exc, asyncio.TimeoutError): + raise + # in case of proxy it is not ClientProxyConnectionError + # it is problem of resolving proxy ip itself + raise ClientConnectorError(req.connection_key, exc) from exc + + last_exc: Optional[Exception] = None + + for hinfo in hosts: + host = hinfo["host"] + port = hinfo["port"] + + try: + transp, proto = await self._wrap_create_connection( + self._factory, + host, + port, + timeout=timeout, + ssl=sslcontext, + family=hinfo["family"], + proto=hinfo["proto"], + flags=hinfo["flags"], + server_hostname=hinfo["hostname"] if sslcontext else None, + local_addr=self._local_addr, + req=req, + client_error=client_error, + ) + except ClientConnectorError as exc: + last_exc = exc + continue + + if req.is_ssl() and fingerprint: + try: + fingerprint.check(transp) + except ServerFingerprintMismatch as exc: + transp.close() + if not self._cleanup_closed_disabled: + self._cleanup_closed_transports.append(transp) + last_exc = exc + continue + + return transp, proto + else: + assert last_exc is not None + raise last_exc + + async def _create_proxy_connection( + self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" + ) -> Tuple[asyncio.BaseTransport, ResponseHandler]: + self._fail_on_no_start_tls(req) + runtime_has_start_tls = self._loop_supports_start_tls() + + headers: Dict[str, str] = {} + if req.proxy_headers is not None: + headers = req.proxy_headers # type: ignore[assignment] + headers[hdrs.HOST] = req.headers[hdrs.HOST] + + url = req.proxy + assert url is not None + proxy_req = ClientRequest( + hdrs.METH_GET, + url, + headers=headers, + auth=req.proxy_auth, + loop=self._loop, + ssl=req.ssl, + ) + + # create connection to proxy server + transport, proto = await self._create_direct_connection( + proxy_req, [], timeout, client_error=ClientProxyConnectionError + ) + + # Many HTTP proxies has buggy keepalive support. Let's not + # reuse connection but close it after processing every + # response. + proto.force_close() + + auth = proxy_req.headers.pop(hdrs.AUTHORIZATION, None) + if auth is not None: + if not req.is_ssl(): + req.headers[hdrs.PROXY_AUTHORIZATION] = auth + else: + proxy_req.headers[hdrs.PROXY_AUTHORIZATION] = auth + + if req.is_ssl(): + if runtime_has_start_tls: + self._warn_about_tls_in_tls(transport, req) + + # For HTTPS requests over HTTP proxy + # we must notify proxy to tunnel connection + # so we send CONNECT command: + # CONNECT www.python.org:443 HTTP/1.1 + # Host: www.python.org + # + # next we must do TLS handshake and so on + # to do this we must wrap raw socket into secure one + # asyncio handles this perfectly + proxy_req.method = hdrs.METH_CONNECT + proxy_req.url = req.url + key = attr.evolve( + req.connection_key, proxy=None, proxy_auth=None, proxy_headers_hash=None + ) + conn = Connection(self, key, proto, self._loop) + proxy_resp = await proxy_req.send(conn) + try: + protocol = conn._protocol + assert protocol is not None + + # read_until_eof=True will ensure the connection isn't closed + # once the response is received and processed allowing + # START_TLS to work on the connection below. 
+ protocol.set_response_params(read_until_eof=runtime_has_start_tls) + resp = await proxy_resp.start(conn) + except BaseException: + proxy_resp.close() + conn.close() + raise + else: + conn._protocol = None + conn._transport = None + try: + if resp.status != 200: + message = resp.reason + if message is None: + message = RESPONSES[resp.status][0] + raise ClientHttpProxyError( + proxy_resp.request_info, + resp.history, + status=resp.status, + message=message, + headers=resp.headers, + ) + if not runtime_has_start_tls: + rawsock = transport.get_extra_info("socket", default=None) + if rawsock is None: + raise RuntimeError( + "Transport does not expose socket instance" + ) + # Duplicate the socket, so now we can close proxy transport + rawsock = rawsock.dup() + except BaseException: + # It shouldn't be closed in `finally` because it's fed to + # `loop.start_tls()` and the docs say not to touch it after + # passing there. + transport.close() + raise + finally: + if not runtime_has_start_tls: + transport.close() + + if not runtime_has_start_tls: + # HTTP proxy with support for upgrade to HTTPS + sslcontext = self._get_ssl_context(req) + return await self._wrap_create_connection( + self._factory, + timeout=timeout, + ssl=sslcontext, + sock=rawsock, + server_hostname=req.host, + req=req, + ) + + return await self._start_tls_connection( + # Access the old transport for the last time before it's + # closed and forgotten forever: + transport, + req=req, + timeout=timeout, + ) + finally: + proxy_resp.close() + + return transport, proto + + +class UnixConnector(BaseConnector): + """Unix socket connector. + + path - Unix socket path. + keepalive_timeout - (optional) Keep-alive timeout. + force_close - Set to True to force close and do reconnect + after each request (and between redirects). + limit - The total number of simultaneous connections. + limit_per_host - Number of simultaneous connections to one host. + loop - Optional event loop. + """ + + def __init__( + self, + path: str, + force_close: bool = False, + keepalive_timeout: Union[object, float, None] = sentinel, + limit: int = 100, + limit_per_host: int = 0, + loop: Optional[asyncio.AbstractEventLoop] = None, + ) -> None: + super().__init__( + force_close=force_close, + keepalive_timeout=keepalive_timeout, + limit=limit, + limit_per_host=limit_per_host, + loop=loop, + ) + self._path = path + + @property + def path(self) -> str: + """Path to unix socket.""" + return self._path + + async def _create_connection( + self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" + ) -> ResponseHandler: + try: + async with ceil_timeout(timeout.sock_connect): + _, proto = await self._loop.create_unix_connection( + self._factory, self._path + ) + except OSError as exc: + if exc.errno is None and isinstance(exc, asyncio.TimeoutError): + raise + raise UnixClientConnectorError(self.path, req.connection_key, exc) from exc + + return cast(ResponseHandler, proto) + + +class NamedPipeConnector(BaseConnector): + """Named pipe connector. + + Only supported by the proactor event loop. + See also: https://docs.python.org/3.7/library/asyncio-eventloop.html + + path - Windows named pipe path. + keepalive_timeout - (optional) Keep-alive timeout. + force_close - Set to True to force close and do reconnect + after each request (and between redirects). + limit - The total number of simultaneous connections. + limit_per_host - Number of simultaneous connections to one host. + loop - Optional event loop. 
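+
+    Usage sketch (the pipe path and URL are hypothetical; requires the
+    Windows proactor event loop):
+
+        connector = NamedPipeConnector(path=r"\\.\pipe\example")
+        async with aiohttp.ClientSession(connector=connector) as session:
+            resp = await session.get("http://pipe-server/")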
+ """ + + def __init__( + self, + path: str, + force_close: bool = False, + keepalive_timeout: Union[object, float, None] = sentinel, + limit: int = 100, + limit_per_host: int = 0, + loop: Optional[asyncio.AbstractEventLoop] = None, + ) -> None: + super().__init__( + force_close=force_close, + keepalive_timeout=keepalive_timeout, + limit=limit, + limit_per_host=limit_per_host, + loop=loop, + ) + if not isinstance( + self._loop, asyncio.ProactorEventLoop # type: ignore[attr-defined] + ): + raise RuntimeError( + "Named Pipes only available in proactor " "loop under windows" + ) + self._path = path + + @property + def path(self) -> str: + """Path to the named pipe.""" + return self._path + + async def _create_connection( + self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" + ) -> ResponseHandler: + try: + async with ceil_timeout(timeout.sock_connect): + _, proto = await self._loop.create_pipe_connection( # type: ignore[attr-defined] # noqa: E501 + self._factory, self._path + ) + # the drain is required so that the connection_made is called + # and transport is set otherwise it is not set before the + # `assert conn.transport is not None` + # in client.py's _request method + await asyncio.sleep(0) + # other option is to manually set transport like + # `proto.transport = trans` + except OSError as exc: + if exc.errno is None and isinstance(exc, asyncio.TimeoutError): + raise + raise ClientConnectorError(req.connection_key, exc) from exc + + return cast(ResponseHandler, proto) diff --git a/env/lib/python3.8/site-packages/aiohttp/cookiejar.py b/env/lib/python3.8/site-packages/aiohttp/cookiejar.py new file mode 100644 index 00000000..1ac8854a --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/cookiejar.py @@ -0,0 +1,411 @@ +import asyncio +import contextlib +import datetime +import os # noqa +import pathlib +import pickle +import re +from collections import defaultdict +from http.cookies import BaseCookie, Morsel, SimpleCookie +from typing import ( # noqa + DefaultDict, + Dict, + Iterable, + Iterator, + List, + Mapping, + Optional, + Set, + Tuple, + Union, + cast, +) + +from yarl import URL + +from .abc import AbstractCookieJar, ClearCookiePredicate +from .helpers import is_ip_address, next_whole_second +from .typedefs import LooseCookies, PathLike, StrOrURL + +__all__ = ("CookieJar", "DummyCookieJar") + + +CookieItem = Union[str, "Morsel[str]"] + + +class CookieJar(AbstractCookieJar): + """Implements cookie storage adhering to RFC 6265.""" + + DATE_TOKENS_RE = re.compile( + r"[\x09\x20-\x2F\x3B-\x40\x5B-\x60\x7B-\x7E]*" + r"(?P[\x00-\x08\x0A-\x1F\d:a-zA-Z\x7F-\xFF]+)" + ) + + DATE_HMS_TIME_RE = re.compile(r"(\d{1,2}):(\d{1,2}):(\d{1,2})") + + DATE_DAY_OF_MONTH_RE = re.compile(r"(\d{1,2})") + + DATE_MONTH_RE = re.compile( + "(jan)|(feb)|(mar)|(apr)|(may)|(jun)|(jul)|" "(aug)|(sep)|(oct)|(nov)|(dec)", + re.I, + ) + + DATE_YEAR_RE = re.compile(r"(\d{2,4})") + + MAX_TIME = datetime.datetime.max.replace(tzinfo=datetime.timezone.utc) + + MAX_32BIT_TIME = datetime.datetime.utcfromtimestamp(2**31 - 1) + + def __init__( + self, + *, + unsafe: bool = False, + quote_cookie: bool = True, + treat_as_secure_origin: Union[StrOrURL, List[StrOrURL], None] = None, + loop: Optional[asyncio.AbstractEventLoop] = None, + ) -> None: + super().__init__(loop=loop) + self._cookies: DefaultDict[str, SimpleCookie[str]] = defaultdict(SimpleCookie) + self._host_only_cookies: Set[Tuple[str, str]] = set() + self._unsafe = unsafe + self._quote_cookie = quote_cookie + if treat_as_secure_origin is 
None: + treat_as_secure_origin = [] + elif isinstance(treat_as_secure_origin, URL): + treat_as_secure_origin = [treat_as_secure_origin.origin()] + elif isinstance(treat_as_secure_origin, str): + treat_as_secure_origin = [URL(treat_as_secure_origin).origin()] + else: + treat_as_secure_origin = [ + URL(url).origin() if isinstance(url, str) else url.origin() + for url in treat_as_secure_origin + ] + self._treat_as_secure_origin = treat_as_secure_origin + self._next_expiration = next_whole_second() + self._expirations: Dict[Tuple[str, str], datetime.datetime] = {} + # #4515: datetime.max may not be representable on 32-bit platforms + self._max_time = self.MAX_TIME + try: + self._max_time.timestamp() + except OverflowError: + self._max_time = self.MAX_32BIT_TIME + + def save(self, file_path: PathLike) -> None: + file_path = pathlib.Path(file_path) + with file_path.open(mode="wb") as f: + pickle.dump(self._cookies, f, pickle.HIGHEST_PROTOCOL) + + def load(self, file_path: PathLike) -> None: + file_path = pathlib.Path(file_path) + with file_path.open(mode="rb") as f: + self._cookies = pickle.load(f) + + def clear(self, predicate: Optional[ClearCookiePredicate] = None) -> None: + if predicate is None: + self._next_expiration = next_whole_second() + self._cookies.clear() + self._host_only_cookies.clear() + self._expirations.clear() + return + + to_del = [] + now = datetime.datetime.now(datetime.timezone.utc) + for domain, cookie in self._cookies.items(): + for name, morsel in cookie.items(): + key = (domain, name) + if ( + key in self._expirations and self._expirations[key] <= now + ) or predicate(morsel): + to_del.append(key) + + for domain, name in to_del: + key = (domain, name) + self._host_only_cookies.discard(key) + if key in self._expirations: + del self._expirations[(domain, name)] + self._cookies[domain].pop(name, None) + + next_expiration = min(self._expirations.values(), default=self._max_time) + try: + self._next_expiration = next_expiration.replace( + microsecond=0 + ) + datetime.timedelta(seconds=1) + except OverflowError: + self._next_expiration = self._max_time + + def clear_domain(self, domain: str) -> None: + self.clear(lambda x: self._is_domain_match(domain, x["domain"])) + + def __iter__(self) -> "Iterator[Morsel[str]]": + self._do_expiration() + for val in self._cookies.values(): + yield from val.values() + + def __len__(self) -> int: + return sum(1 for i in self) + + def _do_expiration(self) -> None: + self.clear(lambda x: False) + + def _expire_cookie(self, when: datetime.datetime, domain: str, name: str) -> None: + self._next_expiration = min(self._next_expiration, when) + self._expirations[(domain, name)] = when + + def update_cookies(self, cookies: LooseCookies, response_url: URL = URL()) -> None: + """Update cookies.""" + hostname = response_url.raw_host + + if not self._unsafe and is_ip_address(hostname): + # Don't accept cookies from IPs + return + + if isinstance(cookies, Mapping): + cookies = cookies.items() + + for name, cookie in cookies: + if not isinstance(cookie, Morsel): + tmp: SimpleCookie[str] = SimpleCookie() + tmp[name] = cookie # type: ignore[assignment] + cookie = tmp[name] + + domain = cookie["domain"] + + # ignore domains with trailing dots + if domain.endswith("."): + domain = "" + del cookie["domain"] + + if not domain and hostname is not None: + # Set the cookie's domain to the response hostname + # and set its host-only-flag + self._host_only_cookies.add((hostname, name)) + domain = cookie["domain"] = hostname + + if domain.startswith("."): + # Remove 
leading dot + domain = domain[1:] + cookie["domain"] = domain + + if hostname and not self._is_domain_match(domain, hostname): + # Setting cookies for different domains is not allowed + continue + + path = cookie["path"] + if not path or not path.startswith("/"): + # Set the cookie's path to the response path + path = response_url.path + if not path.startswith("/"): + path = "/" + else: + # Cut everything from the last slash to the end + path = "/" + path[1 : path.rfind("/")] + cookie["path"] = path + + max_age = cookie["max-age"] + if max_age: + try: + delta_seconds = int(max_age) + try: + max_age_expiration = datetime.datetime.now( + datetime.timezone.utc + ) + datetime.timedelta(seconds=delta_seconds) + except OverflowError: + max_age_expiration = self._max_time + self._expire_cookie(max_age_expiration, domain, name) + except ValueError: + cookie["max-age"] = "" + + else: + expires = cookie["expires"] + if expires: + expire_time = self._parse_date(expires) + if expire_time: + self._expire_cookie(expire_time, domain, name) + else: + cookie["expires"] = "" + + self._cookies[domain][name] = cookie + + self._do_expiration() + + def filter_cookies( + self, request_url: URL = URL() + ) -> Union["BaseCookie[str]", "SimpleCookie[str]"]: + """Returns this jar's cookies filtered by their attributes.""" + self._do_expiration() + request_url = URL(request_url) + filtered: Union["SimpleCookie[str]", "BaseCookie[str]"] = ( + SimpleCookie() if self._quote_cookie else BaseCookie() + ) + hostname = request_url.raw_host or "" + request_origin = URL() + with contextlib.suppress(ValueError): + request_origin = request_url.origin() + + is_not_secure = ( + request_url.scheme not in ("https", "wss") + and request_origin not in self._treat_as_secure_origin + ) + + for cookie in self: + name = cookie.key + domain = cookie["domain"] + + # Send shared cookies + if not domain: + filtered[name] = cookie.value + continue + + if not self._unsafe and is_ip_address(hostname): + continue + + if (domain, name) in self._host_only_cookies: + if domain != hostname: + continue + elif not self._is_domain_match(domain, hostname): + continue + + if not self._is_path_match(request_url.path, cookie["path"]): + continue + + if is_not_secure and cookie["secure"]: + continue + + # It's critical we use the Morsel so the coded_value + # (based on cookie version) is preserved + mrsl_val = cast("Morsel[str]", cookie.get(cookie.key, Morsel())) + mrsl_val.set(cookie.key, cookie.value, cookie.coded_value) + filtered[name] = mrsl_val + + return filtered + + @staticmethod + def _is_domain_match(domain: str, hostname: str) -> bool: + """Implements domain matching adhering to RFC 6265.""" + if hostname == domain: + return True + + if not hostname.endswith(domain): + return False + + non_matching = hostname[: -len(domain)] + + if not non_matching.endswith("."): + return False + + return not is_ip_address(hostname) + + @staticmethod + def _is_path_match(req_path: str, cookie_path: str) -> bool: + """Implements path matching adhering to RFC 6265.""" + if not req_path.startswith("/"): + req_path = "/" + + if req_path == cookie_path: + return True + + if not req_path.startswith(cookie_path): + return False + + if cookie_path.endswith("/"): + return True + + non_matching = req_path[len(cookie_path) :] + + return non_matching.startswith("/") + + @classmethod + def _parse_date(cls, date_str: str) -> Optional[datetime.datetime]: + """Implements date string parsing adhering to RFC 6265.""" + if not date_str: + return None + + found_time = False + 
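+        # Illustrative walk-through: for "Tue, 15 Nov 1994 08:12:31 GMT" the
+        # token regex yields "Tue", "15", "Nov", "1994", "08:12:31" and "GMT";
+        # the loop below recovers day=15, month=11, year=1994 and the
+        # HH:MM:SS time from those tokens.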
found_day = False + found_month = False + found_year = False + + hour = minute = second = 0 + day = 0 + month = 0 + year = 0 + + for token_match in cls.DATE_TOKENS_RE.finditer(date_str): + + token = token_match.group("token") + + if not found_time: + time_match = cls.DATE_HMS_TIME_RE.match(token) + if time_match: + found_time = True + hour, minute, second = (int(s) for s in time_match.groups()) + continue + + if not found_day: + day_match = cls.DATE_DAY_OF_MONTH_RE.match(token) + if day_match: + found_day = True + day = int(day_match.group()) + continue + + if not found_month: + month_match = cls.DATE_MONTH_RE.match(token) + if month_match: + found_month = True + assert month_match.lastindex is not None + month = month_match.lastindex + continue + + if not found_year: + year_match = cls.DATE_YEAR_RE.match(token) + if year_match: + found_year = True + year = int(year_match.group()) + + if 70 <= year <= 99: + year += 1900 + elif 0 <= year <= 69: + year += 2000 + + if False in (found_day, found_month, found_year, found_time): + return None + + if not 1 <= day <= 31: + return None + + if year < 1601 or hour > 23 or minute > 59 or second > 59: + return None + + return datetime.datetime( + year, month, day, hour, minute, second, tzinfo=datetime.timezone.utc + ) + + +class DummyCookieJar(AbstractCookieJar): + """Implements a dummy cookie storage. + + It can be used with the ClientSession when no cookie processing is needed. + + """ + + def __init__(self, *, loop: Optional[asyncio.AbstractEventLoop] = None) -> None: + super().__init__(loop=loop) + + def __iter__(self) -> "Iterator[Morsel[str]]": + while False: + yield None + + def __len__(self) -> int: + return 0 + + def clear(self, predicate: Optional[ClearCookiePredicate] = None) -> None: + pass + + def clear_domain(self, domain: str) -> None: + pass + + def update_cookies(self, cookies: LooseCookies, response_url: URL = URL()) -> None: + pass + + def filter_cookies(self, request_url: URL) -> "BaseCookie[str]": + return SimpleCookie() diff --git a/env/lib/python3.8/site-packages/aiohttp/formdata.py b/env/lib/python3.8/site-packages/aiohttp/formdata.py new file mode 100644 index 00000000..e7cd24ca --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/formdata.py @@ -0,0 +1,172 @@ +import io +from typing import Any, Iterable, List, Optional +from urllib.parse import urlencode + +from multidict import MultiDict, MultiDictProxy + +from . import hdrs, multipart, payload +from .helpers import guess_filename +from .payload import Payload + +__all__ = ("FormData",) + + +class FormData: + """Helper class for form body generation. + + Supports multipart/form-data and application/x-www-form-urlencoded. 
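+
+    Example (field names and file are illustrative; adding an I/O object
+    switches the form to multipart/form-data):
+
+        form = FormData()
+        form.add_field("username", "guido")
+        form.add_field("avatar", open("avatar.png", "rb"),
+                       filename="avatar.png", content_type="image/png")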
+ """ + + def __init__( + self, + fields: Iterable[Any] = (), + quote_fields: bool = True, + charset: Optional[str] = None, + ) -> None: + self._writer = multipart.MultipartWriter("form-data") + self._fields: List[Any] = [] + self._is_multipart = False + self._is_processed = False + self._quote_fields = quote_fields + self._charset = charset + + if isinstance(fields, dict): + fields = list(fields.items()) + elif not isinstance(fields, (list, tuple)): + fields = (fields,) + self.add_fields(*fields) + + @property + def is_multipart(self) -> bool: + return self._is_multipart + + def add_field( + self, + name: str, + value: Any, + *, + content_type: Optional[str] = None, + filename: Optional[str] = None, + content_transfer_encoding: Optional[str] = None, + ) -> None: + + if isinstance(value, io.IOBase): + self._is_multipart = True + elif isinstance(value, (bytes, bytearray, memoryview)): + if filename is None and content_transfer_encoding is None: + filename = name + + type_options: MultiDict[str] = MultiDict({"name": name}) + if filename is not None and not isinstance(filename, str): + raise TypeError( + "filename must be an instance of str. " "Got: %s" % filename + ) + if filename is None and isinstance(value, io.IOBase): + filename = guess_filename(value, name) + if filename is not None: + type_options["filename"] = filename + self._is_multipart = True + + headers = {} + if content_type is not None: + if not isinstance(content_type, str): + raise TypeError( + "content_type must be an instance of str. " "Got: %s" % content_type + ) + headers[hdrs.CONTENT_TYPE] = content_type + self._is_multipart = True + if content_transfer_encoding is not None: + if not isinstance(content_transfer_encoding, str): + raise TypeError( + "content_transfer_encoding must be an instance" + " of str. 
Got: %s" % content_transfer_encoding + ) + headers[hdrs.CONTENT_TRANSFER_ENCODING] = content_transfer_encoding + self._is_multipart = True + + self._fields.append((type_options, headers, value)) + + def add_fields(self, *fields: Any) -> None: + to_add = list(fields) + + while to_add: + rec = to_add.pop(0) + + if isinstance(rec, io.IOBase): + k = guess_filename(rec, "unknown") + self.add_field(k, rec) # type: ignore[arg-type] + + elif isinstance(rec, (MultiDictProxy, MultiDict)): + to_add.extend(rec.items()) + + elif isinstance(rec, (list, tuple)) and len(rec) == 2: + k, fp = rec + self.add_field(k, fp) # type: ignore[arg-type] + + else: + raise TypeError( + "Only io.IOBase, multidict and (name, file) " + "pairs allowed, use .add_field() for passing " + "more complex parameters, got {!r}".format(rec) + ) + + def _gen_form_urlencoded(self) -> payload.BytesPayload: + # form data (x-www-form-urlencoded) + data = [] + for type_options, _, value in self._fields: + data.append((type_options["name"], value)) + + charset = self._charset if self._charset is not None else "utf-8" + + if charset == "utf-8": + content_type = "application/x-www-form-urlencoded" + else: + content_type = "application/x-www-form-urlencoded; " "charset=%s" % charset + + return payload.BytesPayload( + urlencode(data, doseq=True, encoding=charset).encode(), + content_type=content_type, + ) + + def _gen_form_data(self) -> multipart.MultipartWriter: + """Encode a list of fields using the multipart/form-data MIME format""" + if self._is_processed: + raise RuntimeError("Form data has been processed already") + for dispparams, headers, value in self._fields: + try: + if hdrs.CONTENT_TYPE in headers: + part = payload.get_payload( + value, + content_type=headers[hdrs.CONTENT_TYPE], + headers=headers, + encoding=self._charset, + ) + else: + part = payload.get_payload( + value, headers=headers, encoding=self._charset + ) + except Exception as exc: + raise TypeError( + "Can not serialize value type: %r\n " + "headers: %r\n value: %r" % (type(value), headers, value) + ) from exc + + if dispparams: + part.set_content_disposition( + "form-data", quote_fields=self._quote_fields, **dispparams + ) + # FIXME cgi.FieldStorage doesn't likes body parts with + # Content-Length which were sent via chunked transfer encoding + assert part.headers is not None + part.headers.popall(hdrs.CONTENT_LENGTH, None) + + self._writer.append_payload(part) + + self._is_processed = True + return self._writer + + def __call__(self) -> Payload: + if self._is_multipart: + return self._gen_form_data() + else: + return self._gen_form_urlencoded() diff --git a/env/lib/python3.8/site-packages/aiohttp/hdrs.py b/env/lib/python3.8/site-packages/aiohttp/hdrs.py new file mode 100644 index 00000000..a619f254 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/hdrs.py @@ -0,0 +1,114 @@ +"""HTTP Headers constants.""" + +# After changing the file content call ./tools/gen.py +# to regenerate the headers parser +import sys +from typing import Set + +from multidict import istr + +if sys.version_info >= (3, 8): + from typing import Final +else: + from typing_extensions import Final + +METH_ANY: Final[str] = "*" +METH_CONNECT: Final[str] = "CONNECT" +METH_HEAD: Final[str] = "HEAD" +METH_GET: Final[str] = "GET" +METH_DELETE: Final[str] = "DELETE" +METH_OPTIONS: Final[str] = "OPTIONS" +METH_PATCH: Final[str] = "PATCH" +METH_POST: Final[str] = "POST" +METH_PUT: Final[str] = "PUT" +METH_TRACE: Final[str] = "TRACE" + +METH_ALL: Final[Set[str]] = { + METH_CONNECT, + METH_HEAD, + 
METH_GET, + METH_DELETE, + METH_OPTIONS, + METH_PATCH, + METH_POST, + METH_PUT, + METH_TRACE, +} + +ACCEPT: Final[istr] = istr("Accept") +ACCEPT_CHARSET: Final[istr] = istr("Accept-Charset") +ACCEPT_ENCODING: Final[istr] = istr("Accept-Encoding") +ACCEPT_LANGUAGE: Final[istr] = istr("Accept-Language") +ACCEPT_RANGES: Final[istr] = istr("Accept-Ranges") +ACCESS_CONTROL_MAX_AGE: Final[istr] = istr("Access-Control-Max-Age") +ACCESS_CONTROL_ALLOW_CREDENTIALS: Final[istr] = istr("Access-Control-Allow-Credentials") +ACCESS_CONTROL_ALLOW_HEADERS: Final[istr] = istr("Access-Control-Allow-Headers") +ACCESS_CONTROL_ALLOW_METHODS: Final[istr] = istr("Access-Control-Allow-Methods") +ACCESS_CONTROL_ALLOW_ORIGIN: Final[istr] = istr("Access-Control-Allow-Origin") +ACCESS_CONTROL_EXPOSE_HEADERS: Final[istr] = istr("Access-Control-Expose-Headers") +ACCESS_CONTROL_REQUEST_HEADERS: Final[istr] = istr("Access-Control-Request-Headers") +ACCESS_CONTROL_REQUEST_METHOD: Final[istr] = istr("Access-Control-Request-Method") +AGE: Final[istr] = istr("Age") +ALLOW: Final[istr] = istr("Allow") +AUTHORIZATION: Final[istr] = istr("Authorization") +CACHE_CONTROL: Final[istr] = istr("Cache-Control") +CONNECTION: Final[istr] = istr("Connection") +CONTENT_DISPOSITION: Final[istr] = istr("Content-Disposition") +CONTENT_ENCODING: Final[istr] = istr("Content-Encoding") +CONTENT_LANGUAGE: Final[istr] = istr("Content-Language") +CONTENT_LENGTH: Final[istr] = istr("Content-Length") +CONTENT_LOCATION: Final[istr] = istr("Content-Location") +CONTENT_MD5: Final[istr] = istr("Content-MD5") +CONTENT_RANGE: Final[istr] = istr("Content-Range") +CONTENT_TRANSFER_ENCODING: Final[istr] = istr("Content-Transfer-Encoding") +CONTENT_TYPE: Final[istr] = istr("Content-Type") +COOKIE: Final[istr] = istr("Cookie") +DATE: Final[istr] = istr("Date") +DESTINATION: Final[istr] = istr("Destination") +DIGEST: Final[istr] = istr("Digest") +ETAG: Final[istr] = istr("Etag") +EXPECT: Final[istr] = istr("Expect") +EXPIRES: Final[istr] = istr("Expires") +FORWARDED: Final[istr] = istr("Forwarded") +FROM: Final[istr] = istr("From") +HOST: Final[istr] = istr("Host") +IF_MATCH: Final[istr] = istr("If-Match") +IF_MODIFIED_SINCE: Final[istr] = istr("If-Modified-Since") +IF_NONE_MATCH: Final[istr] = istr("If-None-Match") +IF_RANGE: Final[istr] = istr("If-Range") +IF_UNMODIFIED_SINCE: Final[istr] = istr("If-Unmodified-Since") +KEEP_ALIVE: Final[istr] = istr("Keep-Alive") +LAST_EVENT_ID: Final[istr] = istr("Last-Event-ID") +LAST_MODIFIED: Final[istr] = istr("Last-Modified") +LINK: Final[istr] = istr("Link") +LOCATION: Final[istr] = istr("Location") +MAX_FORWARDS: Final[istr] = istr("Max-Forwards") +ORIGIN: Final[istr] = istr("Origin") +PRAGMA: Final[istr] = istr("Pragma") +PROXY_AUTHENTICATE: Final[istr] = istr("Proxy-Authenticate") +PROXY_AUTHORIZATION: Final[istr] = istr("Proxy-Authorization") +RANGE: Final[istr] = istr("Range") +REFERER: Final[istr] = istr("Referer") +RETRY_AFTER: Final[istr] = istr("Retry-After") +SEC_WEBSOCKET_ACCEPT: Final[istr] = istr("Sec-WebSocket-Accept") +SEC_WEBSOCKET_VERSION: Final[istr] = istr("Sec-WebSocket-Version") +SEC_WEBSOCKET_PROTOCOL: Final[istr] = istr("Sec-WebSocket-Protocol") +SEC_WEBSOCKET_EXTENSIONS: Final[istr] = istr("Sec-WebSocket-Extensions") +SEC_WEBSOCKET_KEY: Final[istr] = istr("Sec-WebSocket-Key") +SEC_WEBSOCKET_KEY1: Final[istr] = istr("Sec-WebSocket-Key1") +SERVER: Final[istr] = istr("Server") +SET_COOKIE: Final[istr] = istr("Set-Cookie") +TE: Final[istr] = istr("TE") +TRAILER: Final[istr] = istr("Trailer") 
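+
+# Sketch of how these istr constants behave (illustrative): header names
+# compare case-insensitively, so a CIMultiDict populated with them matches
+# any wire casing:
+#
+#     from multidict import CIMultiDict
+#     headers = CIMultiDict({CONTENT_TYPE: "text/plain"})
+#     assert headers["content-type"] == "text/plain"
+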
+TRANSFER_ENCODING: Final[istr] = istr("Transfer-Encoding") +UPGRADE: Final[istr] = istr("Upgrade") +URI: Final[istr] = istr("URI") +USER_AGENT: Final[istr] = istr("User-Agent") +VARY: Final[istr] = istr("Vary") +VIA: Final[istr] = istr("Via") +WANT_DIGEST: Final[istr] = istr("Want-Digest") +WARNING: Final[istr] = istr("Warning") +WWW_AUTHENTICATE: Final[istr] = istr("WWW-Authenticate") +X_FORWARDED_FOR: Final[istr] = istr("X-Forwarded-For") +X_FORWARDED_HOST: Final[istr] = istr("X-Forwarded-Host") +X_FORWARDED_PROTO: Final[istr] = istr("X-Forwarded-Proto") diff --git a/env/lib/python3.8/site-packages/aiohttp/helpers.py b/env/lib/python3.8/site-packages/aiohttp/helpers.py new file mode 100644 index 00000000..0469ee41 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/helpers.py @@ -0,0 +1,877 @@ +"""Various helper functions""" + +import asyncio +import base64 +import binascii +import datetime +import functools +import inspect +import netrc +import os +import platform +import re +import sys +import time +import warnings +import weakref +from collections import namedtuple +from contextlib import suppress +from email.parser import HeaderParser +from email.utils import parsedate +from math import ceil +from pathlib import Path +from types import TracebackType +from typing import ( + Any, + Callable, + ContextManager, + Dict, + Generator, + Generic, + Iterable, + Iterator, + List, + Mapping, + Optional, + Pattern, + Set, + Tuple, + Type, + TypeVar, + Union, + cast, +) +from urllib.parse import quote +from urllib.request import getproxies, proxy_bypass + +import async_timeout +import attr +from multidict import MultiDict, MultiDictProxy +from yarl import URL + +from . import hdrs +from .log import client_logger, internal_logger +from .typedefs import PathLike, Protocol # noqa + +__all__ = ("BasicAuth", "ChainMapProxy", "ETag") + +IS_MACOS = platform.system() == "Darwin" +IS_WINDOWS = platform.system() == "Windows" + +PY_36 = sys.version_info >= (3, 6) +PY_37 = sys.version_info >= (3, 7) +PY_38 = sys.version_info >= (3, 8) +PY_310 = sys.version_info >= (3, 10) + +if sys.version_info < (3, 7): + import idna_ssl + + idna_ssl.patch_match_hostname() + + def all_tasks( + loop: Optional[asyncio.AbstractEventLoop] = None, + ) -> Set["asyncio.Task[Any]"]: + tasks = list(asyncio.Task.all_tasks(loop)) + return {t for t in tasks if not t.done()} + +else: + all_tasks = asyncio.all_tasks + + +_T = TypeVar("_T") +_S = TypeVar("_S") + + +sentinel: Any = object() +NO_EXTENSIONS: bool = bool(os.environ.get("AIOHTTP_NO_EXTENSIONS")) + +# N.B. 
sys.flags.dev_mode is available on Python 3.7+, use getattr +# for compatibility with older versions +DEBUG: bool = getattr(sys.flags, "dev_mode", False) or ( + not sys.flags.ignore_environment and bool(os.environ.get("PYTHONASYNCIODEBUG")) +) + + +CHAR = {chr(i) for i in range(0, 128)} +CTL = {chr(i) for i in range(0, 32)} | { + chr(127), +} +SEPARATORS = { + "(", + ")", + "<", + ">", + "@", + ",", + ";", + ":", + "\\", + '"', + "/", + "[", + "]", + "?", + "=", + "{", + "}", + " ", + chr(9), +} +TOKEN = CHAR ^ CTL ^ SEPARATORS + + +class noop: + def __await__(self) -> Generator[None, None, None]: + yield + + +class BasicAuth(namedtuple("BasicAuth", ["login", "password", "encoding"])): + """Http basic authentication helper.""" + + def __new__( + cls, login: str, password: str = "", encoding: str = "latin1" + ) -> "BasicAuth": + if login is None: + raise ValueError("None is not allowed as login value") + + if password is None: + raise ValueError("None is not allowed as password value") + + if ":" in login: + raise ValueError('A ":" is not allowed in login (RFC 1945#section-11.1)') + + return super().__new__(cls, login, password, encoding) + + @classmethod + def decode(cls, auth_header: str, encoding: str = "latin1") -> "BasicAuth": + """Create a BasicAuth object from an Authorization HTTP header.""" + try: + auth_type, encoded_credentials = auth_header.split(" ", 1) + except ValueError: + raise ValueError("Could not parse authorization header.") + + if auth_type.lower() != "basic": + raise ValueError("Unknown authorization method %s" % auth_type) + + try: + decoded = base64.b64decode( + encoded_credentials.encode("ascii"), validate=True + ).decode(encoding) + except binascii.Error: + raise ValueError("Invalid base64 encoding.") + + try: + # RFC 2617 HTTP Authentication + # https://www.ietf.org/rfc/rfc2617.txt + # the colon must be present, but the username and password may be + # otherwise blank. + username, password = decoded.split(":", 1) + except ValueError: + raise ValueError("Invalid credentials.") + + return cls(username, password, encoding=encoding) + + @classmethod + def from_url(cls, url: URL, *, encoding: str = "latin1") -> Optional["BasicAuth"]: + """Create BasicAuth from url.""" + if not isinstance(url, URL): + raise TypeError("url should be yarl.URL instance") + if url.user is None: + return None + return cls(url.user, url.password or "", encoding=encoding) + + def encode(self) -> str: + """Encode credentials.""" + creds = (f"{self.login}:{self.password}").encode(self.encoding) + return "Basic %s" % base64.b64encode(creds).decode(self.encoding) + + +def strip_auth_from_url(url: URL) -> Tuple[URL, Optional[BasicAuth]]: + auth = BasicAuth.from_url(url) + if auth is None: + return url, None + else: + return url.with_user(None), auth + + +def netrc_from_env() -> Optional[netrc.netrc]: + """Load netrc from file. + + Attempt to load it from the path specified by the env-var + NETRC or in the default location in the user's home directory. + + Returns None if it couldn't be found or fails to parse. 
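+
+    An entry in the file it reads looks like this (illustrative values):
+
+        machine proxy.example.com login user password secret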
+ """ + netrc_env = os.environ.get("NETRC") + + if netrc_env is not None: + netrc_path = Path(netrc_env) + else: + try: + home_dir = Path.home() + except RuntimeError as e: # pragma: no cover + # if pathlib can't resolve home, it may raise a RuntimeError + client_logger.debug( + "Could not resolve home directory when " + "trying to look for .netrc file: %s", + e, + ) + return None + + netrc_path = home_dir / ("_netrc" if IS_WINDOWS else ".netrc") + + try: + return netrc.netrc(str(netrc_path)) + except netrc.NetrcParseError as e: + client_logger.warning("Could not parse .netrc file: %s", e) + except OSError as e: + # we couldn't read the file (doesn't exist, permissions, etc.) + if netrc_env or netrc_path.is_file(): + # only warn if the environment wanted us to load it, + # or it appears like the default file does actually exist + client_logger.warning("Could not read .netrc file: %s", e) + + return None + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class ProxyInfo: + proxy: URL + proxy_auth: Optional[BasicAuth] + + +def proxies_from_env() -> Dict[str, ProxyInfo]: + proxy_urls = { + k: URL(v) + for k, v in getproxies().items() + if k in ("http", "https", "ws", "wss") + } + netrc_obj = netrc_from_env() + stripped = {k: strip_auth_from_url(v) for k, v in proxy_urls.items()} + ret = {} + for proto, val in stripped.items(): + proxy, auth = val + if proxy.scheme in ("https", "wss"): + client_logger.warning( + "%s proxies %s are not supported, ignoring", proxy.scheme.upper(), proxy + ) + continue + if netrc_obj and auth is None: + auth_from_netrc = None + if proxy.host is not None: + auth_from_netrc = netrc_obj.authenticators(proxy.host) + if auth_from_netrc is not None: + # auth_from_netrc is a (`user`, `account`, `password`) tuple, + # `user` and `account` both can be username, + # if `user` is None, use `account` + *logins, password = auth_from_netrc + login = logins[0] if logins[0] else logins[-1] + auth = BasicAuth(cast(str, login), cast(str, password)) + ret[proto] = ProxyInfo(proxy, auth) + return ret + + +def current_task( + loop: Optional[asyncio.AbstractEventLoop] = None, +) -> "Optional[asyncio.Task[Any]]": + if sys.version_info >= (3, 7): + return asyncio.current_task(loop=loop) + else: + return asyncio.Task.current_task(loop=loop) + + +def get_running_loop( + loop: Optional[asyncio.AbstractEventLoop] = None, +) -> asyncio.AbstractEventLoop: + if loop is None: + loop = asyncio.get_event_loop() + if not loop.is_running(): + warnings.warn( + "The object should be created within an async function", + DeprecationWarning, + stacklevel=3, + ) + if loop.get_debug(): + internal_logger.warning( + "The object should be created within an async function", stack_info=True + ) + return loop + + +def isasyncgenfunction(obj: Any) -> bool: + func = getattr(inspect, "isasyncgenfunction", None) + if func is not None: + return func(obj) # type: ignore[no-any-return] + else: + return False + + +def get_env_proxy_for_url(url: URL) -> Tuple[URL, Optional[BasicAuth]]: + """Get a permitted proxy for the given URL from the env.""" + if url.host is not None and proxy_bypass(url.host): + raise LookupError(f"Proxying is disallowed for `{url.host!r}`") + + proxies_in_env = proxies_from_env() + try: + proxy_info = proxies_in_env[url.scheme] + except KeyError: + raise LookupError(f"No proxies found for `{url!s}` in the env") + else: + return proxy_info.proxy, proxy_info.proxy_auth + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class MimeType: + type: str + subtype: str + suffix: str + 
parameters: "MultiDictProxy[str]" + + +@functools.lru_cache(maxsize=56) +def parse_mimetype(mimetype: str) -> MimeType: + """Parses a MIME type into its components. + + mimetype is a MIME type string. + + Returns a MimeType object. + + Example: + + >>> parse_mimetype('text/html; charset=utf-8') + MimeType(type='text', subtype='html', suffix='', + parameters={'charset': 'utf-8'}) + + """ + if not mimetype: + return MimeType( + type="", subtype="", suffix="", parameters=MultiDictProxy(MultiDict()) + ) + + parts = mimetype.split(";") + params: MultiDict[str] = MultiDict() + for item in parts[1:]: + if not item: + continue + key, value = cast( + Tuple[str, str], item.split("=", 1) if "=" in item else (item, "") + ) + params.add(key.lower().strip(), value.strip(' "')) + + fulltype = parts[0].strip().lower() + if fulltype == "*": + fulltype = "*/*" + + mtype, stype = ( + cast(Tuple[str, str], fulltype.split("/", 1)) + if "/" in fulltype + else (fulltype, "") + ) + stype, suffix = ( + cast(Tuple[str, str], stype.split("+", 1)) if "+" in stype else (stype, "") + ) + + return MimeType( + type=mtype, subtype=stype, suffix=suffix, parameters=MultiDictProxy(params) + ) + + +def guess_filename(obj: Any, default: Optional[str] = None) -> Optional[str]: + name = getattr(obj, "name", None) + if name and isinstance(name, str) and name[0] != "<" and name[-1] != ">": + return Path(name).name + return default + + +not_qtext_re = re.compile(r"[^\041\043-\133\135-\176]") +QCONTENT = {chr(i) for i in range(0x20, 0x7F)} | {"\t"} + + +def quoted_string(content: str) -> str: + """Return 7-bit content as quoted-string. + + Format content into a quoted-string as defined in RFC5322 for + Internet Message Format. Notice that this is not the 8-bit HTTP + format, but the 7-bit email format. Content must be in usascii or + a ValueError is raised. + """ + if not (QCONTENT > set(content)): + raise ValueError(f"bad content for quoted-string {content!r}") + return not_qtext_re.sub(lambda x: "\\" + x.group(0), content) + + +def content_disposition_header( + disptype: str, quote_fields: bool = True, _charset: str = "utf-8", **params: str +) -> str: + """Sets ``Content-Disposition`` header for MIME. + + This is the MIME payload Content-Disposition header from RFC 2183 + and RFC 7579 section 4.2, not the HTTP Content-Disposition from + RFC 6266. + + disptype is a disposition type: inline, attachment, form-data. + Should be valid extension token (see RFC 2183) + + quote_fields performs value quoting to 7-bit MIME headers + according to RFC 7578. Set to quote_fields to False if recipient + can take 8-bit file names and field values. + + _charset specifies the charset to use when quote_fields is True. + + params is a dict with disposition params. 
+ """ + if not disptype or not (TOKEN > set(disptype)): + raise ValueError("bad content disposition type {!r}" "".format(disptype)) + + value = disptype + if params: + lparams = [] + for key, val in params.items(): + if not key or not (TOKEN > set(key)): + raise ValueError( + "bad content disposition parameter" " {!r}={!r}".format(key, val) + ) + if quote_fields: + if key.lower() == "filename": + qval = quote(val, "", encoding=_charset) + lparams.append((key, '"%s"' % qval)) + else: + try: + qval = quoted_string(val) + except ValueError: + qval = "".join( + (_charset, "''", quote(val, "", encoding=_charset)) + ) + lparams.append((key + "*", qval)) + else: + lparams.append((key, '"%s"' % qval)) + else: + qval = val.replace("\\", "\\\\").replace('"', '\\"') + lparams.append((key, '"%s"' % qval)) + sparams = "; ".join("=".join(pair) for pair in lparams) + value = "; ".join((value, sparams)) + return value + + +class _TSelf(Protocol, Generic[_T]): + _cache: Dict[str, _T] + + +class reify(Generic[_T]): + """Use as a class method decorator. + + It operates almost exactly like + the Python `@property` decorator, but it puts the result of the + method it decorates into the instance dict after the first call, + effectively replacing the function it decorates with an instance + variable. It is, in Python parlance, a data descriptor. + """ + + def __init__(self, wrapped: Callable[..., _T]) -> None: + self.wrapped = wrapped + self.__doc__ = wrapped.__doc__ + self.name = wrapped.__name__ + + def __get__(self, inst: _TSelf[_T], owner: Optional[Type[Any]] = None) -> _T: + try: + try: + return inst._cache[self.name] + except KeyError: + val = self.wrapped(inst) + inst._cache[self.name] = val + return val + except AttributeError: + if inst is None: + return self + raise + + def __set__(self, inst: _TSelf[_T], value: _T) -> None: + raise AttributeError("reified property is read-only") + + +reify_py = reify + +try: + from ._helpers import reify as reify_c + + if not NO_EXTENSIONS: + reify = reify_c # type: ignore[misc,assignment] +except ImportError: + pass + +_ipv4_pattern = ( + r"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}" + r"(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$" +) +_ipv6_pattern = ( + r"^(?:(?:(?:[A-F0-9]{1,4}:){6}|(?=(?:[A-F0-9]{0,4}:){0,6}" + r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}$)(([0-9A-F]{1,4}:){0,5}|:)" + r"((:[0-9A-F]{1,4}){1,5}:|:)|::(?:[A-F0-9]{1,4}:){5})" + r"(?:(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.){3}" + r"(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])|(?:[A-F0-9]{1,4}:){7}" + r"[A-F0-9]{1,4}|(?=(?:[A-F0-9]{0,4}:){0,7}[A-F0-9]{0,4}$)" + r"(([0-9A-F]{1,4}:){1,7}|:)((:[0-9A-F]{1,4}){1,7}|:)|(?:[A-F0-9]{1,4}:){7}" + r":|:(:[A-F0-9]{1,4}){7})$" +) +_ipv4_regex = re.compile(_ipv4_pattern) +_ipv6_regex = re.compile(_ipv6_pattern, flags=re.IGNORECASE) +_ipv4_regexb = re.compile(_ipv4_pattern.encode("ascii")) +_ipv6_regexb = re.compile(_ipv6_pattern.encode("ascii"), flags=re.IGNORECASE) + + +def _is_ip_address( + regex: Pattern[str], regexb: Pattern[bytes], host: Optional[Union[str, bytes]] +) -> bool: + if host is None: + return False + if isinstance(host, str): + return bool(regex.match(host)) + elif isinstance(host, (bytes, bytearray, memoryview)): + return bool(regexb.match(host)) + else: + raise TypeError(f"{host} [{type(host)}] is not a str or bytes") + + +is_ipv4_address = functools.partial(_is_ip_address, _ipv4_regex, _ipv4_regexb) +is_ipv6_address = functools.partial(_is_ip_address, _ipv6_regex, _ipv6_regexb) + + +def is_ip_address(host: Optional[Union[str, bytes, 
bytearray, memoryview]]) -> bool: + return is_ipv4_address(host) or is_ipv6_address(host) + + +def next_whole_second() -> datetime.datetime: + """Return current time rounded up to the next whole second.""" + return datetime.datetime.now(datetime.timezone.utc).replace( + microsecond=0 + ) + datetime.timedelta(seconds=0) + + +_cached_current_datetime: Optional[int] = None +_cached_formatted_datetime = "" + + +def rfc822_formatted_time() -> str: + global _cached_current_datetime + global _cached_formatted_datetime + + now = int(time.time()) + if now != _cached_current_datetime: + # Weekday and month names for HTTP date/time formatting; + # always English! + # Tuples are constants stored in codeobject! + _weekdayname = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun") + _monthname = ( + "", # Dummy so we can use 1-based month numbers + "Jan", + "Feb", + "Mar", + "Apr", + "May", + "Jun", + "Jul", + "Aug", + "Sep", + "Oct", + "Nov", + "Dec", + ) + + year, month, day, hh, mm, ss, wd, *tail = time.gmtime(now) + _cached_formatted_datetime = "%s, %02d %3s %4d %02d:%02d:%02d GMT" % ( + _weekdayname[wd], + day, + _monthname[month], + year, + hh, + mm, + ss, + ) + _cached_current_datetime = now + return _cached_formatted_datetime + + +def _weakref_handle(info: "Tuple[weakref.ref[object], str]") -> None: + ref, name = info + ob = ref() + if ob is not None: + with suppress(Exception): + getattr(ob, name)() + + +def weakref_handle( + ob: object, name: str, timeout: float, loop: asyncio.AbstractEventLoop +) -> Optional[asyncio.TimerHandle]: + if timeout is not None and timeout > 0: + when = loop.time() + timeout + if timeout >= 5: + when = ceil(when) + + return loop.call_at(when, _weakref_handle, (weakref.ref(ob), name)) + return None + + +def call_later( + cb: Callable[[], Any], timeout: float, loop: asyncio.AbstractEventLoop +) -> Optional[asyncio.TimerHandle]: + if timeout is not None and timeout > 0: + when = loop.time() + timeout + if timeout > 5: + when = ceil(when) + return loop.call_at(when, cb) + return None + + +class TimeoutHandle: + """Timeout handle""" + + def __init__( + self, loop: asyncio.AbstractEventLoop, timeout: Optional[float] + ) -> None: + self._timeout = timeout + self._loop = loop + self._callbacks: List[ + Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]] + ] = [] + + def register( + self, callback: Callable[..., None], *args: Any, **kwargs: Any + ) -> None: + self._callbacks.append((callback, args, kwargs)) + + def close(self) -> None: + self._callbacks.clear() + + def start(self) -> Optional[asyncio.Handle]: + timeout = self._timeout + if timeout is not None and timeout > 0: + when = self._loop.time() + timeout + if timeout >= 5: + when = ceil(when) + return self._loop.call_at(when, self.__call__) + else: + return None + + def timer(self) -> "BaseTimerContext": + if self._timeout is not None and self._timeout > 0: + timer = TimerContext(self._loop) + self.register(timer.timeout) + return timer + else: + return TimerNoop() + + def __call__(self) -> None: + for cb, args, kwargs in self._callbacks: + with suppress(Exception): + cb(*args, **kwargs) + + self._callbacks.clear() + + +class BaseTimerContext(ContextManager["BaseTimerContext"]): + pass + + +class TimerNoop(BaseTimerContext): + def __enter__(self) -> BaseTimerContext: + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + return + + +class TimerContext(BaseTimerContext): + """Low resolution timeout 
context manager""" + + def __init__(self, loop: asyncio.AbstractEventLoop) -> None: + self._loop = loop + self._tasks: List[asyncio.Task[Any]] = [] + self._cancelled = False + + def __enter__(self) -> BaseTimerContext: + task = current_task(loop=self._loop) + + if task is None: + raise RuntimeError( + "Timeout context manager should be used " "inside a task" + ) + + if self._cancelled: + raise asyncio.TimeoutError from None + + self._tasks.append(task) + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> Optional[bool]: + if self._tasks: + self._tasks.pop() + + if exc_type is asyncio.CancelledError and self._cancelled: + raise asyncio.TimeoutError from None + return None + + def timeout(self) -> None: + if not self._cancelled: + for task in set(self._tasks): + task.cancel() + + self._cancelled = True + + +def ceil_timeout(delay: Optional[float]) -> async_timeout.Timeout: + if delay is None or delay <= 0: + return async_timeout.timeout(None) + + loop = get_running_loop() + now = loop.time() + when = now + delay + if delay > 5: + when = ceil(when) + return async_timeout.timeout_at(when) + + +class HeadersMixin: + + ATTRS = frozenset(["_content_type", "_content_dict", "_stored_content_type"]) + + _content_type: Optional[str] = None + _content_dict: Optional[Dict[str, str]] = None + _stored_content_type = sentinel + + def _parse_content_type(self, raw: str) -> None: + self._stored_content_type = raw + if raw is None: + # default value according to RFC 2616 + self._content_type = "application/octet-stream" + self._content_dict = {} + else: + msg = HeaderParser().parsestr("Content-Type: " + raw) + self._content_type = msg.get_content_type() + params = msg.get_params() + self._content_dict = dict(params[1:]) # First element is content type again + + @property + def content_type(self) -> str: + """The value of content part for Content-Type HTTP header.""" + raw = self._headers.get(hdrs.CONTENT_TYPE) # type: ignore[attr-defined] + if self._stored_content_type != raw: + self._parse_content_type(raw) + return self._content_type # type: ignore[return-value] + + @property + def charset(self) -> Optional[str]: + """The value of charset part for Content-Type HTTP header.""" + raw = self._headers.get(hdrs.CONTENT_TYPE) # type: ignore[attr-defined] + if self._stored_content_type != raw: + self._parse_content_type(raw) + return self._content_dict.get("charset") # type: ignore[union-attr] + + @property + def content_length(self) -> Optional[int]: + """The value of Content-Length HTTP header.""" + content_length = self._headers.get( # type: ignore[attr-defined] + hdrs.CONTENT_LENGTH + ) + + if content_length is not None: + return int(content_length) + else: + return None + + +def set_result(fut: "asyncio.Future[_T]", result: _T) -> None: + if not fut.done(): + fut.set_result(result) + + +def set_exception(fut: "asyncio.Future[_T]", exc: BaseException) -> None: + if not fut.done(): + fut.set_exception(exc) + + +class ChainMapProxy(Mapping[str, Any]): + __slots__ = ("_maps",) + + def __init__(self, maps: Iterable[Mapping[str, Any]]) -> None: + self._maps = tuple(maps) + + def __init_subclass__(cls) -> None: + raise TypeError( + "Inheritance class {} from ChainMapProxy " + "is forbidden".format(cls.__name__) + ) + + def __getitem__(self, key: str) -> Any: + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def get(self, key: str, default: Any = 
None) -> Any: + return self[key] if key in self else default + + def __len__(self) -> int: + # reuses stored hash values if possible + return len(set().union(*self._maps)) # type: ignore[arg-type] + + def __iter__(self) -> Iterator[str]: + d: Dict[str, Any] = {} + for mapping in reversed(self._maps): + # reuses stored hash values if possible + d.update(mapping) + return iter(d) + + def __contains__(self, key: object) -> bool: + return any(key in m for m in self._maps) + + def __bool__(self) -> bool: + return any(self._maps) + + def __repr__(self) -> str: + content = ", ".join(map(repr, self._maps)) + return f"ChainMapProxy({content})" + + +# https://tools.ietf.org/html/rfc7232#section-2.3 +_ETAGC = r"[!#-}\x80-\xff]+" +_ETAGC_RE = re.compile(_ETAGC) +_QUOTED_ETAG = rf'(W/)?"({_ETAGC})"' +QUOTED_ETAG_RE = re.compile(_QUOTED_ETAG) +LIST_QUOTED_ETAG_RE = re.compile(rf"({_QUOTED_ETAG})(?:\s*,\s*|$)|(.)") + +ETAG_ANY = "*" + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class ETag: + value: str + is_weak: bool = False + + +def validate_etag_value(value: str) -> None: + if value != ETAG_ANY and not _ETAGC_RE.fullmatch(value): + raise ValueError( + f"Value {value!r} is not a valid etag. Maybe it contains '\"'?" + ) + + +def parse_http_date(date_str: Optional[str]) -> Optional[datetime.datetime]: + """Process a date string, return a datetime object""" + if date_str is not None: + timetuple = parsedate(date_str) + if timetuple is not None: + with suppress(ValueError): + return datetime.datetime(*timetuple[:6], tzinfo=datetime.timezone.utc) + return None diff --git a/env/lib/python3.8/site-packages/aiohttp/http.py b/env/lib/python3.8/site-packages/aiohttp/http.py new file mode 100644 index 00000000..ca9dc54b --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/http.py @@ -0,0 +1,70 @@ +import http.server +import sys +from typing import Mapping, Tuple + +from . 
import __version__ +from .http_exceptions import HttpProcessingError as HttpProcessingError +from .http_parser import ( + HeadersParser as HeadersParser, + HttpParser as HttpParser, + HttpRequestParser as HttpRequestParser, + HttpResponseParser as HttpResponseParser, + RawRequestMessage as RawRequestMessage, + RawResponseMessage as RawResponseMessage, +) +from .http_websocket import ( + WS_CLOSED_MESSAGE as WS_CLOSED_MESSAGE, + WS_CLOSING_MESSAGE as WS_CLOSING_MESSAGE, + WS_KEY as WS_KEY, + WebSocketError as WebSocketError, + WebSocketReader as WebSocketReader, + WebSocketWriter as WebSocketWriter, + WSCloseCode as WSCloseCode, + WSMessage as WSMessage, + WSMsgType as WSMsgType, + ws_ext_gen as ws_ext_gen, + ws_ext_parse as ws_ext_parse, +) +from .http_writer import ( + HttpVersion as HttpVersion, + HttpVersion10 as HttpVersion10, + HttpVersion11 as HttpVersion11, + StreamWriter as StreamWriter, +) + +__all__ = ( + "HttpProcessingError", + "RESPONSES", + "SERVER_SOFTWARE", + # .http_writer + "StreamWriter", + "HttpVersion", + "HttpVersion10", + "HttpVersion11", + # .http_parser + "HeadersParser", + "HttpParser", + "HttpRequestParser", + "HttpResponseParser", + "RawRequestMessage", + "RawResponseMessage", + # .http_websocket + "WS_CLOSED_MESSAGE", + "WS_CLOSING_MESSAGE", + "WS_KEY", + "WebSocketReader", + "WebSocketWriter", + "ws_ext_gen", + "ws_ext_parse", + "WSMessage", + "WebSocketError", + "WSMsgType", + "WSCloseCode", +) + + +SERVER_SOFTWARE: str = "Python/{0[0]}.{0[1]} aiohttp/{1}".format( + sys.version_info, __version__ +) + +RESPONSES: Mapping[int, Tuple[str, str]] = http.server.BaseHTTPRequestHandler.responses diff --git a/env/lib/python3.8/site-packages/aiohttp/http_exceptions.py b/env/lib/python3.8/site-packages/aiohttp/http_exceptions.py new file mode 100644 index 00000000..c885f80f --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/http_exceptions.py @@ -0,0 +1,105 @@ +"""Low-level http related exceptions.""" + + +from typing import Optional, Union + +from .typedefs import _CIMultiDict + +__all__ = ("HttpProcessingError",) + + +class HttpProcessingError(Exception): + """HTTP error. + + Shortcut for raising HTTP errors with custom code, message and headers. + + code: HTTP Error code. + message: (optional) Error message. 
+ headers: (optional) Headers to be sent in response, a list of pairs. + """ + + code = 0 + message = "" + headers = None + + def __init__( + self, + *, + code: Optional[int] = None, + message: str = "", + headers: Optional[_CIMultiDict] = None, + ) -> None: + if code is not None: + self.code = code + self.headers = headers + self.message = message + + def __str__(self) -> str: + return f"{self.code}, message={self.message!r}" + + def __repr__(self) -> str: + return f"<{self.__class__.__name__}: {self}>" + + +class BadHttpMessage(HttpProcessingError): + + code = 400 + message = "Bad Request" + + def __init__(self, message: str, *, headers: Optional[_CIMultiDict] = None) -> None: + super().__init__(message=message, headers=headers) + self.args = (message,) + + +class HttpBadRequest(BadHttpMessage): + + code = 400 + message = "Bad Request" + + +class PayloadEncodingError(BadHttpMessage): + """Base class for payload errors.""" + + +class ContentEncodingError(PayloadEncodingError): + """Content encoding error.""" + + +class TransferEncodingError(PayloadEncodingError): + """Transfer encoding error.""" + + +class ContentLengthError(PayloadEncodingError): + """Not enough data to satisfy the Content-Length header.""" + + +class LineTooLong(BadHttpMessage): + def __init__( + self, line: str, limit: str = "Unknown", actual_size: str = "Unknown" + ) -> None: + super().__init__( + f"Got more than {limit} bytes ({actual_size}) when reading {line}." + ) + self.args = (line, limit, actual_size) + + +class InvalidHeader(BadHttpMessage): + def __init__(self, hdr: Union[bytes, str]) -> None: + if isinstance(hdr, bytes): + hdr = hdr.decode("utf-8", "surrogateescape") + super().__init__(f"Invalid HTTP Header: {hdr}") + self.hdr = hdr + self.args = (hdr,) + + +class BadStatusLine(BadHttpMessage): + def __init__(self, line: str = "") -> None: + if not isinstance(line, str): + line = repr(line) + super().__init__(f"Bad status line {line!r}") + self.args = (line,) + self.line = line + + +class InvalidURLError(BadHttpMessage): + pass diff --git a/env/lib/python3.8/site-packages/aiohttp/http_parser.py b/env/lib/python3.8/site-packages/aiohttp/http_parser.py new file mode 100644 index 00000000..5a66ce4b --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/http_parser.py @@ -0,0 +1,969 @@ +import abc +import asyncio +import collections +import re +import string +import zlib +from contextlib import suppress +from enum import IntEnum +from typing import ( + Any, + Generic, + List, + NamedTuple, + Optional, + Pattern, + Set, + Tuple, + Type, + TypeVar, + Union, + cast, +) + +from multidict import CIMultiDict, CIMultiDictProxy, istr +from yarl import URL + +from .
import hdrs +from .base_protocol import BaseProtocol +from .helpers import NO_EXTENSIONS, BaseTimerContext +from .http_exceptions import ( + BadHttpMessage, + BadStatusLine, + ContentEncodingError, + ContentLengthError, + InvalidHeader, + LineTooLong, + TransferEncodingError, +) +from .http_writer import HttpVersion, HttpVersion10 +from .log import internal_logger +from .streams import EMPTY_PAYLOAD, StreamReader +from .typedefs import Final, RawHeaders + +try: + import brotli + + HAS_BROTLI = True +except ImportError: # pragma: no cover + HAS_BROTLI = False + + +__all__ = ( + "HeadersParser", + "HttpParser", + "HttpRequestParser", + "HttpResponseParser", + "RawRequestMessage", + "RawResponseMessage", +) + +ASCIISET: Final[Set[str]] = set(string.printable) + +# See https://tools.ietf.org/html/rfc7230#section-3.1.1 +# and https://tools.ietf.org/html/rfc7230#appendix-B +# +# method = token +# tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." / +# "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA +# token = 1*tchar +METHRE: Final[Pattern[str]] = re.compile(r"[!#$%&'*+\-.^_`|~0-9A-Za-z]+") +VERSRE: Final[Pattern[str]] = re.compile(r"HTTP/(\d+).(\d+)") +HDRRE: Final[Pattern[bytes]] = re.compile(rb"[\x00-\x1F\x7F()<>@,;:\[\]={} \t\\\\\"]") + + +class RawRequestMessage(NamedTuple): + method: str + path: str + version: HttpVersion + headers: "CIMultiDictProxy[str]" + raw_headers: RawHeaders + should_close: bool + compression: Optional[str] + upgrade: bool + chunked: bool + url: URL + + +RawResponseMessage = collections.namedtuple( + "RawResponseMessage", + [ + "version", + "code", + "reason", + "headers", + "raw_headers", + "should_close", + "compression", + "upgrade", + "chunked", + ], +) + + +_MsgT = TypeVar("_MsgT", RawRequestMessage, RawResponseMessage) + + +class ParseState(IntEnum): + + PARSE_NONE = 0 + PARSE_LENGTH = 1 + PARSE_CHUNKED = 2 + PARSE_UNTIL_EOF = 3 + + +class ChunkState(IntEnum): + PARSE_CHUNKED_SIZE = 0 + PARSE_CHUNKED_CHUNK = 1 + PARSE_CHUNKED_CHUNK_EOF = 2 + PARSE_MAYBE_TRAILERS = 3 + PARSE_TRAILERS = 4 + + +class HeadersParser: + def __init__( + self, + max_line_size: int = 8190, + max_headers: int = 32768, + max_field_size: int = 8190, + ) -> None: + self.max_line_size = max_line_size + self.max_headers = max_headers + self.max_field_size = max_field_size + + def parse_headers( + self, lines: List[bytes] + ) -> Tuple["CIMultiDictProxy[str]", RawHeaders]: + headers: CIMultiDict[str] = CIMultiDict() + raw_headers = [] + + lines_idx = 1 + line = lines[1] + line_count = len(lines) + + while line: + # Parse initial header name : value pair. 
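As a rough illustration of the split performed here, the standalone sketch below mirrors the name/value division, with a simplified character class standing in for the HDRRE pattern above (`split_header_line` and `_BAD_NAME` are illustrative names, not aiohttp API):

```python
import re
from typing import Tuple

# Simplified stand-in for HDRRE: reject control characters and
# separators in header names.
_BAD_NAME = re.compile(rb"[\x00-\x1F\x7F()<>@,;:\[\]={} \t\\\"]")

def split_header_line(line: bytes) -> Tuple[bytes, bytes]:
    """Split b'Name: value' into (name, value), mirroring the logic above."""
    try:
        bname, bvalue = line.split(b":", 1)
    except ValueError:
        raise ValueError(f"invalid header line: {line!r}") from None
    bname = bname.strip(b" \t")
    if _BAD_NAME.search(bname):
        raise ValueError(f"invalid header name: {bname!r}")
    return bname, bvalue.strip()

assert split_header_line(b"Host: example.com") == (b"Host", b"example.com")
assert split_header_line(b"X-Empty:") == (b"X-Empty", b"")
```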
+ try: + bname, bvalue = line.split(b":", 1) + except ValueError: + raise InvalidHeader(line) from None + + bname = bname.strip(b" \t") + bvalue = bvalue.lstrip() + if HDRRE.search(bname): + raise InvalidHeader(bname) + if len(bname) > self.max_field_size: + raise LineTooLong( + "request header name {}".format( + bname.decode("utf8", "xmlcharrefreplace") + ), + str(self.max_field_size), + str(len(bname)), + ) + + header_length = len(bvalue) + + # next line + lines_idx += 1 + line = lines[lines_idx] + + # consume continuation lines + continuation = line and line[0] in (32, 9) # (' ', '\t') + + if continuation: + bvalue_lst = [bvalue] + while continuation: + header_length += len(line) + if header_length > self.max_field_size: + raise LineTooLong( + "request header field {}".format( + bname.decode("utf8", "xmlcharrefreplace") + ), + str(self.max_field_size), + str(header_length), + ) + bvalue_lst.append(line) + + # next line + lines_idx += 1 + if lines_idx < line_count: + line = lines[lines_idx] + if line: + continuation = line[0] in (32, 9) # (' ', '\t') + else: + line = b"" + break + bvalue = b"".join(bvalue_lst) + else: + if header_length > self.max_field_size: + raise LineTooLong( + "request header field {}".format( + bname.decode("utf8", "xmlcharrefreplace") + ), + str(self.max_field_size), + str(header_length), + ) + + bvalue = bvalue.strip() + name = bname.decode("utf-8", "surrogateescape") + value = bvalue.decode("utf-8", "surrogateescape") + + headers.add(name, value) + raw_headers.append((bname, bvalue)) + + return (CIMultiDictProxy(headers), tuple(raw_headers)) + + +class HttpParser(abc.ABC, Generic[_MsgT]): + def __init__( + self, + protocol: Optional[BaseProtocol] = None, + loop: Optional[asyncio.AbstractEventLoop] = None, + limit: int = 2**16, + max_line_size: int = 8190, + max_headers: int = 32768, + max_field_size: int = 8190, + timer: Optional[BaseTimerContext] = None, + code: Optional[int] = None, + method: Optional[str] = None, + readall: bool = False, + payload_exception: Optional[Type[BaseException]] = None, + response_with_body: bool = True, + read_until_eof: bool = False, + auto_decompress: bool = True, + ) -> None: + self.protocol = protocol + self.loop = loop + self.max_line_size = max_line_size + self.max_headers = max_headers + self.max_field_size = max_field_size + self.timer = timer + self.code = code + self.method = method + self.readall = readall + self.payload_exception = payload_exception + self.response_with_body = response_with_body + self.read_until_eof = read_until_eof + + self._lines: List[bytes] = [] + self._tail = b"" + self._upgraded = False + self._payload = None + self._payload_parser: Optional[HttpPayloadParser] = None + self._auto_decompress = auto_decompress + self._limit = limit + self._headers_parser = HeadersParser(max_line_size, max_headers, max_field_size) + + @abc.abstractmethod + def parse_message(self, lines: List[bytes]) -> _MsgT: + pass + + def feed_eof(self) -> Optional[_MsgT]: + if self._payload_parser is not None: + self._payload_parser.feed_eof() + self._payload_parser = None + else: + # try to extract partial message + if self._tail: + self._lines.append(self._tail) + + if self._lines: + if self._lines[-1] != "\r\n": + self._lines.append(b"") + with suppress(Exception): + return self.parse_message(self._lines) + return None + + def feed_data( + self, + data: bytes, + SEP: bytes = b"\r\n", + EMPTY: bytes = b"", + CONTENT_LENGTH: istr = hdrs.CONTENT_LENGTH, + METH_CONNECT: str = hdrs.METH_CONNECT, + SEC_WEBSOCKET_KEY1: istr = 
hdrs.SEC_WEBSOCKET_KEY1, + ) -> Tuple[List[Tuple[_MsgT, StreamReader]], bool, bytes]: + + messages = [] + + if self._tail: + data, self._tail = self._tail + data, b"" + + data_len = len(data) + start_pos = 0 + loop = self.loop + + while start_pos < data_len: + + # read HTTP message (request/response line + headers), \r\n\r\n + # and split by lines + if self._payload_parser is None and not self._upgraded: + pos = data.find(SEP, start_pos) + # consume \r\n + if pos == start_pos and not self._lines: + start_pos = pos + 2 + continue + + if pos >= start_pos: + # line found + self._lines.append(data[start_pos:pos]) + start_pos = pos + 2 + + # \r\n\r\n found + if self._lines[-1] == EMPTY: + try: + msg: _MsgT = self.parse_message(self._lines) + finally: + self._lines.clear() + + def get_content_length() -> Optional[int]: + # payload length + length_hdr = msg.headers.get(CONTENT_LENGTH) + if length_hdr is None: + return None + + try: + length = int(length_hdr) + except ValueError: + raise InvalidHeader(CONTENT_LENGTH) + + if length < 0: + raise InvalidHeader(CONTENT_LENGTH) + + return length + + length = get_content_length() + # do not support old websocket spec + if SEC_WEBSOCKET_KEY1 in msg.headers: + raise InvalidHeader(SEC_WEBSOCKET_KEY1) + + self._upgraded = msg.upgrade + + method = getattr(msg, "method", self.method) + + assert self.protocol is not None + # calculate payload + if ( + (length is not None and length > 0) + or msg.chunked + and not msg.upgrade + ): + payload = StreamReader( + self.protocol, + timer=self.timer, + loop=loop, + limit=self._limit, + ) + payload_parser = HttpPayloadParser( + payload, + length=length, + chunked=msg.chunked, + method=method, + compression=msg.compression, + code=self.code, + readall=self.readall, + response_with_body=self.response_with_body, + auto_decompress=self._auto_decompress, + ) + if not payload_parser.done: + self._payload_parser = payload_parser + elif method == METH_CONNECT: + assert isinstance(msg, RawRequestMessage) + payload = StreamReader( + self.protocol, + timer=self.timer, + loop=loop, + limit=self._limit, + ) + self._upgraded = True + self._payload_parser = HttpPayloadParser( + payload, + method=msg.method, + compression=msg.compression, + readall=True, + auto_decompress=self._auto_decompress, + ) + else: + if ( + getattr(msg, "code", 100) >= 199 + and length is None + and self.read_until_eof + ): + payload = StreamReader( + self.protocol, + timer=self.timer, + loop=loop, + limit=self._limit, + ) + payload_parser = HttpPayloadParser( + payload, + length=length, + chunked=msg.chunked, + method=method, + compression=msg.compression, + code=self.code, + readall=True, + response_with_body=self.response_with_body, + auto_decompress=self._auto_decompress, + ) + if not payload_parser.done: + self._payload_parser = payload_parser + else: + payload = EMPTY_PAYLOAD + + messages.append((msg, payload)) + else: + self._tail = data[start_pos:] + data = EMPTY + break + + # no parser, just store + elif self._payload_parser is None and self._upgraded: + assert not self._lines + break + + # feed payload + elif data and start_pos < data_len: + assert not self._lines + assert self._payload_parser is not None + try: + eof, data = self._payload_parser.feed_data(data[start_pos:]) + except BaseException as exc: + if self.payload_exception is not None: + self._payload_parser.payload.set_exception( + self.payload_exception(str(exc)) + ) + else: + self._payload_parser.payload.set_exception(exc) + + eof = True + data = b"" + + if eof: + start_pos = 0 + data_len 
= len(data) + self._payload_parser = None + continue + else: + break + + if data and start_pos < data_len: + data = data[start_pos:] + else: + data = EMPTY + + return messages, self._upgraded, data + + def parse_headers( + self, lines: List[bytes] + ) -> Tuple[ + "CIMultiDictProxy[str]", RawHeaders, Optional[bool], Optional[str], bool, bool + ]: + """Parses RFC 5322 headers from a stream. + + Line continuations are supported. Returns list of header name + and value pairs. Header name is in upper case. + """ + headers, raw_headers = self._headers_parser.parse_headers(lines) + close_conn = None + encoding = None + upgrade = False + chunked = False + + # keep-alive + conn = headers.get(hdrs.CONNECTION) + if conn: + v = conn.lower() + if v == "close": + close_conn = True + elif v == "keep-alive": + close_conn = False + elif v == "upgrade": + upgrade = True + + # encoding + enc = headers.get(hdrs.CONTENT_ENCODING) + if enc: + enc = enc.lower() + if enc in ("gzip", "deflate", "br"): + encoding = enc + + # chunking + te = headers.get(hdrs.TRANSFER_ENCODING) + if te is not None: + if "chunked" == te.lower(): + chunked = True + else: + raise BadHttpMessage("Request has invalid `Transfer-Encoding`") + + if hdrs.CONTENT_LENGTH in headers: + raise BadHttpMessage( + "Content-Length can't be present with Transfer-Encoding", + ) + + return (headers, raw_headers, close_conn, encoding, upgrade, chunked) + + def set_upgraded(self, val: bool) -> None: + """Set connection upgraded (to websocket) mode. + + :param bool val: new state. + """ + self._upgraded = val + + +class HttpRequestParser(HttpParser[RawRequestMessage]): + """Read request status line. + + Exception .http_exceptions.BadStatusLine + could be raised in case of any errors in status line. + Returns RawRequestMessage. 
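A minimal, self-contained sketch of the request-line handling that `parse_message` below implements; `parse_request_line` is an illustrative helper, not part of aiohttp:

```python
import re

TOKEN_RE = re.compile(r"[!#$%&'*+\-.^_`|~0-9A-Za-z]+")  # token, per RFC 7230

def parse_request_line(line: str):
    method, path, version = line.split(None, 2)
    if not TOKEN_RE.fullmatch(method):
        raise ValueError(f"bad method: {method!r}")
    if not version.startswith("HTTP/"):
        raise ValueError(f"bad version: {version!r}")
    major, minor = version[5:].split(".", 1)
    return method, path, (int(major), int(minor))

assert parse_request_line("GET /index.html HTTP/1.1") == (
    "GET", "/index.html", (1, 1)
)
```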
+ """ + + def parse_message(self, lines: List[bytes]) -> RawRequestMessage: + # request line + line = lines[0].decode("utf-8", "surrogateescape") + try: + method, path, version = line.split(None, 2) + except ValueError: + raise BadStatusLine(line) from None + + if len(path) > self.max_line_size: + raise LineTooLong( + "Status line is too long", str(self.max_line_size), str(len(path)) + ) + + # method + if not METHRE.match(method): + raise BadStatusLine(method) + + # version + try: + if version.startswith("HTTP/"): + n1, n2 = version[5:].split(".", 1) + version_o = HttpVersion(int(n1), int(n2)) + else: + raise BadStatusLine(version) + except Exception: + raise BadStatusLine(version) + + if method == "CONNECT": + # authority-form, + # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.3 + url = URL.build(authority=path, encoded=True) + elif path.startswith("/"): + # origin-form, + # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.1 + path_part, _hash_separator, url_fragment = path.partition("#") + path_part, _question_mark_separator, qs_part = path_part.partition("?") + + # NOTE: `yarl.URL.build()` is used to mimic what the Cython-based + # NOTE: parser does, otherwise it results into the same + # NOTE: HTTP Request-Line input producing different + # NOTE: `yarl.URL()` objects + url = URL.build( + path=path_part, + query_string=qs_part, + fragment=url_fragment, + encoded=True, + ) + else: + # absolute-form for proxy maybe, + # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.2 + url = URL(path, encoded=True) + + # read headers + ( + headers, + raw_headers, + close, + compression, + upgrade, + chunked, + ) = self.parse_headers(lines) + + if close is None: # then the headers weren't set in the request + if version_o <= HttpVersion10: # HTTP 1.0 must asks to not close + close = True + else: # HTTP 1.1 must ask to close. + close = False + + return RawRequestMessage( + method, + path, + version_o, + headers, + raw_headers, + close, + compression, + upgrade, + chunked, + url, + ) + + +class HttpResponseParser(HttpParser[RawResponseMessage]): + """Read response status line and headers. + + BadStatusLine could be raised in case of any errors in status line. + Returns RawResponseMessage. 
+ """ + + def parse_message(self, lines: List[bytes]) -> RawResponseMessage: + line = lines[0].decode("utf-8", "surrogateescape") + try: + version, status = line.split(None, 1) + except ValueError: + raise BadStatusLine(line) from None + + try: + status, reason = status.split(None, 1) + except ValueError: + reason = "" + + if len(reason) > self.max_line_size: + raise LineTooLong( + "Status line is too long", str(self.max_line_size), str(len(reason)) + ) + + # version + match = VERSRE.match(version) + if match is None: + raise BadStatusLine(line) + version_o = HttpVersion(int(match.group(1)), int(match.group(2))) + + # The status code is a three-digit number + try: + status_i = int(status) + except ValueError: + raise BadStatusLine(line) from None + + if status_i > 999: + raise BadStatusLine(line) + + # read headers + ( + headers, + raw_headers, + close, + compression, + upgrade, + chunked, + ) = self.parse_headers(lines) + + if close is None: + close = version_o <= HttpVersion10 + + return RawResponseMessage( + version_o, + status_i, + reason.strip(), + headers, + raw_headers, + close, + compression, + upgrade, + chunked, + ) + + +class HttpPayloadParser: + def __init__( + self, + payload: StreamReader, + length: Optional[int] = None, + chunked: bool = False, + compression: Optional[str] = None, + code: Optional[int] = None, + method: Optional[str] = None, + readall: bool = False, + response_with_body: bool = True, + auto_decompress: bool = True, + ) -> None: + self._length = 0 + self._type = ParseState.PARSE_NONE + self._chunk = ChunkState.PARSE_CHUNKED_SIZE + self._chunk_size = 0 + self._chunk_tail = b"" + self._auto_decompress = auto_decompress + self.done = False + + # payload decompression wrapper + if response_with_body and compression and self._auto_decompress: + real_payload: Union[StreamReader, DeflateBuffer] = DeflateBuffer( + payload, compression + ) + else: + real_payload = payload + + # payload parser + if not response_with_body: + # don't parse payload if it's not expected to be received + self._type = ParseState.PARSE_NONE + real_payload.feed_eof() + self.done = True + + elif chunked: + self._type = ParseState.PARSE_CHUNKED + elif length is not None: + self._type = ParseState.PARSE_LENGTH + self._length = length + if self._length == 0: + real_payload.feed_eof() + self.done = True + else: + if readall and code != 204: + self._type = ParseState.PARSE_UNTIL_EOF + elif method in ("PUT", "POST"): + internal_logger.warning( # pragma: no cover + "Content-Length or Transfer-Encoding header is required" + ) + self._type = ParseState.PARSE_NONE + real_payload.feed_eof() + self.done = True + + self.payload = real_payload + + def feed_eof(self) -> None: + if self._type == ParseState.PARSE_UNTIL_EOF: + self.payload.feed_eof() + elif self._type == ParseState.PARSE_LENGTH: + raise ContentLengthError( + "Not enough data for satisfy content length header." + ) + elif self._type == ParseState.PARSE_CHUNKED: + raise TransferEncodingError( + "Not enough data for satisfy transfer length header." 
+ ) + + def feed_data( + self, chunk: bytes, SEP: bytes = b"\r\n", CHUNK_EXT: bytes = b";" + ) -> Tuple[bool, bytes]: + # Read specified amount of bytes + if self._type == ParseState.PARSE_LENGTH: + required = self._length + chunk_len = len(chunk) + + if required >= chunk_len: + self._length = required - chunk_len + self.payload.feed_data(chunk, chunk_len) + if self._length == 0: + self.payload.feed_eof() + return True, b"" + else: + self._length = 0 + self.payload.feed_data(chunk[:required], required) + self.payload.feed_eof() + return True, chunk[required:] + + # Chunked transfer encoding parser + elif self._type == ParseState.PARSE_CHUNKED: + if self._chunk_tail: + chunk = self._chunk_tail + chunk + self._chunk_tail = b"" + + while chunk: + + # read next chunk size + if self._chunk == ChunkState.PARSE_CHUNKED_SIZE: + pos = chunk.find(SEP) + if pos >= 0: + i = chunk.find(CHUNK_EXT, 0, pos) + if i >= 0: + size_b = chunk[:i] # strip chunk-extensions + else: + size_b = chunk[:pos] + + try: + size = int(bytes(size_b), 16) + except ValueError: + exc = TransferEncodingError( + chunk[:pos].decode("ascii", "surrogateescape") + ) + self.payload.set_exception(exc) + raise exc from None + + chunk = chunk[pos + 2 :] + if size == 0: # eof marker + self._chunk = ChunkState.PARSE_MAYBE_TRAILERS + else: + self._chunk = ChunkState.PARSE_CHUNKED_CHUNK + self._chunk_size = size + self.payload.begin_http_chunk_receiving() + else: + self._chunk_tail = chunk + return False, b"" + + # read chunk and feed buffer + if self._chunk == ChunkState.PARSE_CHUNKED_CHUNK: + required = self._chunk_size + chunk_len = len(chunk) + + if required > chunk_len: + self._chunk_size = required - chunk_len + self.payload.feed_data(chunk, chunk_len) + return False, b"" + else: + self._chunk_size = 0 + self.payload.feed_data(chunk[:required], required) + chunk = chunk[required:] + self._chunk = ChunkState.PARSE_CHUNKED_CHUNK_EOF + self.payload.end_http_chunk_receiving() + + # toss the CRLF at the end of the chunk + if self._chunk == ChunkState.PARSE_CHUNKED_CHUNK_EOF: + if chunk[:2] == SEP: + chunk = chunk[2:] + self._chunk = ChunkState.PARSE_CHUNKED_SIZE + else: + self._chunk_tail = chunk + return False, b"" + + # if stream does not contain trailer, after 0\r\n + # we should get another \r\n otherwise + # trailers needs to be skiped until \r\n\r\n + if self._chunk == ChunkState.PARSE_MAYBE_TRAILERS: + head = chunk[:2] + if head == SEP: + # end of stream + self.payload.feed_eof() + return True, chunk[2:] + # Both CR and LF, or only LF may not be received yet. It is + # expected that CRLF or LF will be shown at the very first + # byte next time, otherwise trailers should come. The last + # CRLF which marks the end of response might not be + # contained in the same TCP segment which delivered the + # size indicator. 
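To make the chunk framing concrete, here is a toy decoder for a complete chunked body. It assumes the whole body is already in memory; the state machine above exists precisely because real input arrives split at arbitrary points, including inside the final CRLF:

```python
def decode_chunked(data: bytes) -> bytes:
    out, pos = bytearray(), 0
    while True:
        eol = data.index(b"\r\n", pos)
        size_field = data[pos:eol].split(b";", 1)[0]  # drop chunk extensions
        size = int(size_field, 16)
        pos = eol + 2
        if size == 0:  # "0\r\n" marks the last chunk; trailers may follow
            return bytes(out)
        out += data[pos:pos + size]
        pos += size + 2  # skip the CRLF terminating each chunk

assert decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n") == b"Wikipedia"
```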
+ if not head: + return False, b"" + if head == SEP[:1]: + self._chunk_tail = head + return False, b"" + self._chunk = ChunkState.PARSE_TRAILERS + + # read and discard trailer up to the CRLF terminator + if self._chunk == ChunkState.PARSE_TRAILERS: + pos = chunk.find(SEP) + if pos >= 0: + chunk = chunk[pos + 2 :] + self._chunk = ChunkState.PARSE_MAYBE_TRAILERS + else: + self._chunk_tail = chunk + return False, b"" + + # Read all bytes until eof + elif self._type == ParseState.PARSE_UNTIL_EOF: + self.payload.feed_data(chunk, len(chunk)) + + return False, b"" + + +class DeflateBuffer: + """DeflateStream decompress stream and feed data into specified stream.""" + + decompressor: Any + + def __init__(self, out: StreamReader, encoding: Optional[str]) -> None: + self.out = out + self.size = 0 + self.encoding = encoding + self._started_decoding = False + + if encoding == "br": + if not HAS_BROTLI: # pragma: no cover + raise ContentEncodingError( + "Can not decode content-encoding: brotli (br). " + "Please install `Brotli`" + ) + + class BrotliDecoder: + # Supports both 'brotlipy' and 'Brotli' packages + # since they share an import name. The top branches + # are for 'brotlipy' and bottom branches for 'Brotli' + def __init__(self) -> None: + self._obj = brotli.Decompressor() + + def decompress(self, data: bytes) -> bytes: + if hasattr(self._obj, "decompress"): + return cast(bytes, self._obj.decompress(data)) + return cast(bytes, self._obj.process(data)) + + def flush(self) -> bytes: + if hasattr(self._obj, "flush"): + return cast(bytes, self._obj.flush()) + return b"" + + self.decompressor = BrotliDecoder() + else: + zlib_mode = 16 + zlib.MAX_WBITS if encoding == "gzip" else zlib.MAX_WBITS + self.decompressor = zlib.decompressobj(wbits=zlib_mode) + + def set_exception(self, exc: BaseException) -> None: + self.out.set_exception(exc) + + def feed_data(self, chunk: bytes, size: int) -> None: + if not size: + return + + self.size += size + + # RFC1950 + # bits 0..3 = CM = 0b1000 = 8 = "deflate" + # bits 4..7 = CINFO = 1..7 = windows size. + if ( + not self._started_decoding + and self.encoding == "deflate" + and chunk[0] & 0xF != 8 + ): + # Change the decoder to decompress incorrectly compressed data + # Actually we should issue a warning about non-RFC-compliant data. 
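The zlib-header check above can be reproduced directly with the standard library; a short demonstration of both the wrapped and the raw-deflate cases:

```python
import zlib

# A well-formed RFC 1950 stream: the low nibble of the first byte is CM == 8.
wrapped = zlib.compress(b"hello world")
assert wrapped[0] & 0xF == 8

# Raw deflate without the zlib wrapper, as some non-compliant servers send it;
# it needs a negative-wbits decompressor, which is the fallback installed here.
c = zlib.compressobj(wbits=-zlib.MAX_WBITS)
raw = c.compress(b"hello world") + c.flush()
assert zlib.decompressobj(wbits=-zlib.MAX_WBITS).decompress(raw) == b"hello world"
```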
+ self.decompressor = zlib.decompressobj(wbits=-zlib.MAX_WBITS) + + try: + chunk = self.decompressor.decompress(chunk) + except Exception: + raise ContentEncodingError( + "Can not decode content-encoding: %s" % self.encoding + ) + + self._started_decoding = True + + if chunk: + self.out.feed_data(chunk, len(chunk)) + + def feed_eof(self) -> None: + chunk = self.decompressor.flush() + + if chunk or self.size > 0: + self.out.feed_data(chunk, len(chunk)) + if self.encoding == "deflate" and not self.decompressor.eof: + raise ContentEncodingError("deflate") + + self.out.feed_eof() + + def begin_http_chunk_receiving(self) -> None: + self.out.begin_http_chunk_receiving() + + def end_http_chunk_receiving(self) -> None: + self.out.end_http_chunk_receiving() + + +HttpRequestParserPy = HttpRequestParser +HttpResponseParserPy = HttpResponseParser +RawRequestMessagePy = RawRequestMessage +RawResponseMessagePy = RawResponseMessage + +try: + if not NO_EXTENSIONS: + from ._http_parser import ( # type: ignore[import,no-redef] + HttpRequestParser, + HttpResponseParser, + RawRequestMessage, + RawResponseMessage, + ) + + HttpRequestParserC = HttpRequestParser + HttpResponseParserC = HttpResponseParser + RawRequestMessageC = RawRequestMessage + RawResponseMessageC = RawResponseMessage +except ImportError: # pragma: no cover + pass diff --git a/env/lib/python3.8/site-packages/aiohttp/http_websocket.py b/env/lib/python3.8/site-packages/aiohttp/http_websocket.py new file mode 100644 index 00000000..2cfc5193 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/http_websocket.py @@ -0,0 +1,701 @@ +"""WebSocket protocol versions 13 and 8.""" + +import asyncio +import collections +import json +import random +import re +import sys +import zlib +from enum import IntEnum +from struct import Struct +from typing import Any, Callable, List, Optional, Pattern, Set, Tuple, Union, cast + +from .base_protocol import BaseProtocol +from .helpers import NO_EXTENSIONS +from .streams import DataQueue +from .typedefs import Final + +__all__ = ( + "WS_CLOSED_MESSAGE", + "WS_CLOSING_MESSAGE", + "WS_KEY", + "WebSocketReader", + "WebSocketWriter", + "WSMessage", + "WebSocketError", + "WSMsgType", + "WSCloseCode", +) + + +class WSCloseCode(IntEnum): + OK = 1000 + GOING_AWAY = 1001 + PROTOCOL_ERROR = 1002 + UNSUPPORTED_DATA = 1003 + ABNORMAL_CLOSURE = 1006 + INVALID_TEXT = 1007 + POLICY_VIOLATION = 1008 + MESSAGE_TOO_BIG = 1009 + MANDATORY_EXTENSION = 1010 + INTERNAL_ERROR = 1011 + SERVICE_RESTART = 1012 + TRY_AGAIN_LATER = 1013 + BAD_GATEWAY = 1014 + + +ALLOWED_CLOSE_CODES: Final[Set[int]] = {int(i) for i in WSCloseCode} + + +class WSMsgType(IntEnum): + # websocket spec types + CONTINUATION = 0x0 + TEXT = 0x1 + BINARY = 0x2 + PING = 0x9 + PONG = 0xA + CLOSE = 0x8 + + # aiohttp specific types + CLOSING = 0x100 + CLOSED = 0x101 + ERROR = 0x102 + + text = TEXT + binary = BINARY + ping = PING + pong = PONG + close = CLOSE + closing = CLOSING + closed = CLOSED + error = ERROR + + +WS_KEY: Final[bytes] = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11" + + +UNPACK_LEN2 = Struct("!H").unpack_from +UNPACK_LEN3 = Struct("!Q").unpack_from +UNPACK_CLOSE_CODE = Struct("!H").unpack +PACK_LEN1 = Struct("!BB").pack +PACK_LEN2 = Struct("!BBH").pack +PACK_LEN3 = Struct("!BBQ").pack +PACK_CLOSE_CODE = Struct("!H").pack +MSG_SIZE: Final[int] = 2**14 +DEFAULT_LIMIT: Final[int] = 2**16 + + +_WSMessageBase = collections.namedtuple("_WSMessageBase", ["type", "data", "extra"]) + + +class WSMessage(_WSMessageBase): + def json(self, *, loads: Callable[[Any], Any] = 
json.loads) -> Any: + """Return parsed JSON data. + + .. versionadded:: 0.22 + """ + return loads(self.data) + + +WS_CLOSED_MESSAGE = WSMessage(WSMsgType.CLOSED, None, None) +WS_CLOSING_MESSAGE = WSMessage(WSMsgType.CLOSING, None, None) + + +class WebSocketError(Exception): + """WebSocket protocol parser error.""" + + def __init__(self, code: int, message: str) -> None: + self.code = code + super().__init__(code, message) + + def __str__(self) -> str: + return cast(str, self.args[1]) + + +class WSHandshakeError(Exception): + """WebSocket protocol handshake error.""" + + +native_byteorder: Final[str] = sys.byteorder + + +# Used by _websocket_mask_python +_XOR_TABLE: Final[List[bytes]] = [bytes(a ^ b for a in range(256)) for b in range(256)] + + +def _websocket_mask_python(mask: bytes, data: bytearray) -> None: + """Websocket masking function. + + `mask` is a `bytes` object of length 4; `data` is a `bytearray` + object of any length. The contents of `data` are masked with `mask`, + as specified in section 5.3 of RFC 6455. + + Note that this function mutates the `data` argument. + + This pure-python implementation may be replaced by an optimized + version when available. + + """ + assert isinstance(data, bytearray), data + assert len(mask) == 4, mask + + if data: + a, b, c, d = (_XOR_TABLE[n] for n in mask) + data[::4] = data[::4].translate(a) + data[1::4] = data[1::4].translate(b) + data[2::4] = data[2::4].translate(c) + data[3::4] = data[3::4].translate(d) + + +if NO_EXTENSIONS: # pragma: no cover + _websocket_mask = _websocket_mask_python +else: + try: + from ._websocket import _websocket_mask_cython # type: ignore[import] + + _websocket_mask = _websocket_mask_cython + except ImportError: # pragma: no cover + _websocket_mask = _websocket_mask_python + +_WS_DEFLATE_TRAILING: Final[bytes] = bytes([0x00, 0x00, 0xFF, 0xFF]) + + +_WS_EXT_RE: Final[Pattern[str]] = re.compile( + r"^(?:;\s*(?:" + r"(server_no_context_takeover)|" + r"(client_no_context_takeover)|" + r"(server_max_window_bits(?:=(\d+))?)|" + r"(client_max_window_bits(?:=(\d+))?)))*$" +) + +_WS_EXT_RE_SPLIT: Final[Pattern[str]] = re.compile(r"permessage-deflate([^,]+)?") + + +def ws_ext_parse(extstr: Optional[str], isserver: bool = False) -> Tuple[int, bool]: + if not extstr: + return 0, False + + compress = 0 + notakeover = False + for ext in _WS_EXT_RE_SPLIT.finditer(extstr): + defext = ext.group(1) + # Return compress = 15 when get `permessage-deflate` + if not defext: + compress = 15 + break + match = _WS_EXT_RE.match(defext) + if match: + compress = 15 + if isserver: + # Server never fail to detect compress handshake. 
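A usage sketch of the two extension helpers defined in this file, reflecting the branches described in these comments (expected results follow from the regexes above):

```python
from aiohttp.http_websocket import ws_ext_gen, ws_ext_parse

# Offer generated for a client handshake (client_max_window_bits included):
assert ws_ext_gen(compress=15) == "permessage-deflate; client_max_window_bits"

# A bare permessage-deflate answer implies the default 15-bit window:
assert ws_ext_parse("permessage-deflate") == (15, False)

# Server-side parse honouring a reduced server window size:
assert ws_ext_parse(
    "permessage-deflate; server_max_window_bits=10", isserver=True
) == (10, False)
```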
+ # Server does not need to send max wbit to client + if match.group(4): + compress = int(match.group(4)) + # Group3 must match if group4 matches + # Compress wbit 8 does not support in zlib + # If compress level not support, + # CONTINUE to next extension + if compress > 15 or compress < 9: + compress = 0 + continue + if match.group(1): + notakeover = True + # Ignore regex group 5 & 6 for client_max_window_bits + break + else: + if match.group(6): + compress = int(match.group(6)) + # Group5 must match if group6 matches + # Compress wbit 8 does not support in zlib + # If compress level not support, + # FAIL the parse progress + if compress > 15 or compress < 9: + raise WSHandshakeError("Invalid window size") + if match.group(2): + notakeover = True + # Ignore regex group 5 & 6 for client_max_window_bits + break + # Return Fail if client side and not match + elif not isserver: + raise WSHandshakeError("Extension for deflate not supported" + ext.group(1)) + + return compress, notakeover + + +def ws_ext_gen( + compress: int = 15, isserver: bool = False, server_notakeover: bool = False +) -> str: + # client_notakeover=False not used for server + # compress wbit 8 does not support in zlib + if compress < 9 or compress > 15: + raise ValueError( + "Compress wbits must between 9 and 15, " "zlib does not support wbits=8" + ) + enabledext = ["permessage-deflate"] + if not isserver: + enabledext.append("client_max_window_bits") + + if compress < 15: + enabledext.append("server_max_window_bits=" + str(compress)) + if server_notakeover: + enabledext.append("server_no_context_takeover") + # if client_notakeover: + # enabledext.append('client_no_context_takeover') + return "; ".join(enabledext) + + +class WSParserState(IntEnum): + READ_HEADER = 1 + READ_PAYLOAD_LENGTH = 2 + READ_PAYLOAD_MASK = 3 + READ_PAYLOAD = 4 + + +class WebSocketReader: + def __init__( + self, queue: DataQueue[WSMessage], max_msg_size: int, compress: bool = True + ) -> None: + self.queue = queue + self._max_msg_size = max_msg_size + + self._exc: Optional[BaseException] = None + self._partial = bytearray() + self._state = WSParserState.READ_HEADER + + self._opcode: Optional[int] = None + self._frame_fin = False + self._frame_opcode: Optional[int] = None + self._frame_payload = bytearray() + + self._tail = b"" + self._has_mask = False + self._frame_mask: Optional[bytes] = None + self._payload_length = 0 + self._payload_length_flag = 0 + self._compressed: Optional[bool] = None + self._decompressobj: Any = None # zlib.decompressobj actually + self._compress = compress + + def feed_eof(self) -> None: + self.queue.feed_eof() + + def feed_data(self, data: bytes) -> Tuple[bool, bytes]: + if self._exc: + return True, data + + try: + return self._feed_data(data) + except Exception as exc: + self._exc = exc + self.queue.set_exception(exc) + return True, b"" + + def _feed_data(self, data: bytes) -> Tuple[bool, bytes]: + for fin, opcode, payload, compressed in self.parse_frame(data): + if compressed and not self._decompressobj: + self._decompressobj = zlib.decompressobj(wbits=-zlib.MAX_WBITS) + if opcode == WSMsgType.CLOSE: + if len(payload) >= 2: + close_code = UNPACK_CLOSE_CODE(payload[:2])[0] + if close_code < 3000 and close_code not in ALLOWED_CLOSE_CODES: + raise WebSocketError( + WSCloseCode.PROTOCOL_ERROR, + f"Invalid close code: {close_code}", + ) + try: + close_message = payload[2:].decode("utf-8") + except UnicodeDecodeError as exc: + raise WebSocketError( + WSCloseCode.INVALID_TEXT, "Invalid UTF-8 text message" + ) from exc + msg = 
WSMessage(WSMsgType.CLOSE, close_code, close_message) + elif payload: + raise WebSocketError( + WSCloseCode.PROTOCOL_ERROR, + f"Invalid close frame: {fin} {opcode} {payload!r}", + ) + else: + msg = WSMessage(WSMsgType.CLOSE, 0, "") + + self.queue.feed_data(msg, 0) + + elif opcode == WSMsgType.PING: + self.queue.feed_data( + WSMessage(WSMsgType.PING, payload, ""), len(payload) + ) + + elif opcode == WSMsgType.PONG: + self.queue.feed_data( + WSMessage(WSMsgType.PONG, payload, ""), len(payload) + ) + + elif ( + opcode not in (WSMsgType.TEXT, WSMsgType.BINARY) + and self._opcode is None + ): + raise WebSocketError( + WSCloseCode.PROTOCOL_ERROR, f"Unexpected opcode={opcode!r}" + ) + else: + # load text/binary + if not fin: + # got partial frame payload + if opcode != WSMsgType.CONTINUATION: + self._opcode = opcode + self._partial.extend(payload) + if self._max_msg_size and len(self._partial) >= self._max_msg_size: + raise WebSocketError( + WSCloseCode.MESSAGE_TOO_BIG, + "Message size {} exceeds limit {}".format( + len(self._partial), self._max_msg_size + ), + ) + else: + # previous frame was non finished + # we should get continuation opcode + if self._partial: + if opcode != WSMsgType.CONTINUATION: + raise WebSocketError( + WSCloseCode.PROTOCOL_ERROR, + "The opcode in non-fin frame is expected " + "to be zero, got {!r}".format(opcode), + ) + + if opcode == WSMsgType.CONTINUATION: + assert self._opcode is not None + opcode = self._opcode + self._opcode = None + + self._partial.extend(payload) + if self._max_msg_size and len(self._partial) >= self._max_msg_size: + raise WebSocketError( + WSCloseCode.MESSAGE_TOO_BIG, + "Message size {} exceeds limit {}".format( + len(self._partial), self._max_msg_size + ), + ) + + # Decompress process must to be done after all packets + # received. 
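The per-message-deflate mechanics referenced here can be shown with zlib alone: the sender strips the final 0x00 0x00 0xFF 0xFF sync-flush marker, and the reader appends it back before inflating, which is exactly how `_WS_DEFLATE_TRAILING` is used below:

```python
import zlib

TRAILING = bytes([0x00, 0x00, 0xFF, 0xFF])  # same value as _WS_DEFLATE_TRAILING

c = zlib.compressobj(wbits=-zlib.MAX_WBITS)
body = c.compress(b"hello") + c.flush(zlib.Z_SYNC_FLUSH)
assert body.endswith(TRAILING)
wire = body[:-4]  # what actually goes on the wire

d = zlib.decompressobj(wbits=-zlib.MAX_WBITS)
assert d.decompress(wire + TRAILING) == b"hello"
```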
+ if compressed: + self._partial.extend(_WS_DEFLATE_TRAILING) + payload_merged = self._decompressobj.decompress( + self._partial, self._max_msg_size + ) + if self._decompressobj.unconsumed_tail: + left = len(self._decompressobj.unconsumed_tail) + raise WebSocketError( + WSCloseCode.MESSAGE_TOO_BIG, + "Decompressed message size {} exceeds limit {}".format( + self._max_msg_size + left, self._max_msg_size + ), + ) + else: + payload_merged = bytes(self._partial) + + self._partial.clear() + + if opcode == WSMsgType.TEXT: + try: + text = payload_merged.decode("utf-8") + self.queue.feed_data( + WSMessage(WSMsgType.TEXT, text, ""), len(text) + ) + except UnicodeDecodeError as exc: + raise WebSocketError( + WSCloseCode.INVALID_TEXT, "Invalid UTF-8 text message" + ) from exc + else: + self.queue.feed_data( + WSMessage(WSMsgType.BINARY, payload_merged, ""), + len(payload_merged), + ) + + return False, b"" + + def parse_frame( + self, buf: bytes + ) -> List[Tuple[bool, Optional[int], bytearray, Optional[bool]]]: + """Return the next frame from the socket.""" + frames = [] + if self._tail: + buf, self._tail = self._tail + buf, b"" + + start_pos = 0 + buf_length = len(buf) + + while True: + # read header + if self._state == WSParserState.READ_HEADER: + if buf_length - start_pos >= 2: + data = buf[start_pos : start_pos + 2] + start_pos += 2 + first_byte, second_byte = data + + fin = (first_byte >> 7) & 1 + rsv1 = (first_byte >> 6) & 1 + rsv2 = (first_byte >> 5) & 1 + rsv3 = (first_byte >> 4) & 1 + opcode = first_byte & 0xF + + # frame-fin = %x0 ; more frames of this message follow + # / %x1 ; final frame of this message + # frame-rsv1 = %x0 ; + # 1 bit, MUST be 0 unless negotiated otherwise + # frame-rsv2 = %x0 ; + # 1 bit, MUST be 0 unless negotiated otherwise + # frame-rsv3 = %x0 ; + # 1 bit, MUST be 0 unless negotiated otherwise + # + # Remove rsv1 from this test for deflate development + if rsv2 or rsv3 or (rsv1 and not self._compress): + raise WebSocketError( + WSCloseCode.PROTOCOL_ERROR, + "Received frame with non-zero reserved bits", + ) + + if opcode > 0x7 and fin == 0: + raise WebSocketError( + WSCloseCode.PROTOCOL_ERROR, + "Received fragmented control frame", + ) + + has_mask = (second_byte >> 7) & 1 + length = second_byte & 0x7F + + # Control frames MUST have a payload + # length of 125 bytes or less + if opcode > 0x7 and length > 125: + raise WebSocketError( + WSCloseCode.PROTOCOL_ERROR, + "Control frame payload cannot be " "larger than 125 bytes", + ) + + # Set compress status if last package is FIN + # OR set compress status if this is first fragment + # Raise error if not first fragment with rsv1 = 0x1 + if self._frame_fin or self._compressed is None: + self._compressed = True if rsv1 else False + elif rsv1: + raise WebSocketError( + WSCloseCode.PROTOCOL_ERROR, + "Received frame with non-zero reserved bits", + ) + + self._frame_fin = bool(fin) + self._frame_opcode = opcode + self._has_mask = bool(has_mask) + self._payload_length_flag = length + self._state = WSParserState.READ_PAYLOAD_LENGTH + else: + break + + # read payload length + if self._state == WSParserState.READ_PAYLOAD_LENGTH: + length = self._payload_length_flag + if length == 126: + if buf_length - start_pos >= 2: + data = buf[start_pos : start_pos + 2] + start_pos += 2 + length = UNPACK_LEN2(data)[0] + self._payload_length = length + self._state = ( + WSParserState.READ_PAYLOAD_MASK + if self._has_mask + else WSParserState.READ_PAYLOAD + ) + else: + break + elif length > 126: + if buf_length - start_pos >= 8: + data = 
buf[start_pos : start_pos + 8] + start_pos += 8 + length = UNPACK_LEN3(data)[0] + self._payload_length = length + self._state = ( + WSParserState.READ_PAYLOAD_MASK + if self._has_mask + else WSParserState.READ_PAYLOAD + ) + else: + break + else: + self._payload_length = length + self._state = ( + WSParserState.READ_PAYLOAD_MASK + if self._has_mask + else WSParserState.READ_PAYLOAD + ) + + # read payload mask + if self._state == WSParserState.READ_PAYLOAD_MASK: + if buf_length - start_pos >= 4: + self._frame_mask = buf[start_pos : start_pos + 4] + start_pos += 4 + self._state = WSParserState.READ_PAYLOAD + else: + break + + if self._state == WSParserState.READ_PAYLOAD: + length = self._payload_length + payload = self._frame_payload + + chunk_len = buf_length - start_pos + if length >= chunk_len: + self._payload_length = length - chunk_len + payload.extend(buf[start_pos:]) + start_pos = buf_length + else: + self._payload_length = 0 + payload.extend(buf[start_pos : start_pos + length]) + start_pos = start_pos + length + + if self._payload_length == 0: + if self._has_mask: + assert self._frame_mask is not None + _websocket_mask(self._frame_mask, payload) + + frames.append( + (self._frame_fin, self._frame_opcode, payload, self._compressed) + ) + + self._frame_payload = bytearray() + self._state = WSParserState.READ_HEADER + else: + break + + self._tail = buf[start_pos:] + + return frames + + +class WebSocketWriter: + def __init__( + self, + protocol: BaseProtocol, + transport: asyncio.Transport, + *, + use_mask: bool = False, + limit: int = DEFAULT_LIMIT, + random: Any = random.Random(), + compress: int = 0, + notakeover: bool = False, + ) -> None: + self.protocol = protocol + self.transport = transport + self.use_mask = use_mask + self.randrange = random.randrange + self.compress = compress + self.notakeover = notakeover + self._closing = False + self._limit = limit + self._output_size = 0 + self._compressobj: Any = None # actually compressobj + + async def _send_frame( + self, message: bytes, opcode: int, compress: Optional[int] = None + ) -> None: + """Send a frame over the websocket with message as its payload.""" + if self._closing and not (opcode & WSMsgType.CLOSE): + raise ConnectionResetError("Cannot write to closing transport") + + rsv = 0 + + # Only compress larger packets (disabled) + # Does small packet needs to be compressed? 
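For reference, the frame-length encoding that `PACK_LEN1`/`PACK_LEN2`/`PACK_LEN3` implement further down can be sketched standalone (`ws_header` is an illustrative helper, not aiohttp API):

```python
from struct import pack

def ws_header(opcode: int, length: int, fin: bool = True, masked: bool = False) -> bytes:
    b0 = (0x80 if fin else 0) | opcode
    mask_bit = 0x80 if masked else 0
    if length < 126:
        return pack("!BB", b0, length | mask_bit)
    if length < (1 << 16):
        return pack("!BBH", b0, 126 | mask_bit, length)
    return pack("!BBQ", b0, 127 | mask_bit, length)

assert ws_header(0x1, 5) == b"\x81\x05"        # short text frame
assert ws_header(0x2, 300)[:2] == b"\x82\x7e"  # 126 => 16-bit extended length
assert len(ws_header(0x2, 1 << 20)) == 10      # 127 => 64-bit extended length
```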
+ # if self.compress and opcode < 8 and len(message) > 124: + if (compress or self.compress) and opcode < 8: + if compress: + # Do not set self._compress if compressing is for this frame + compressobj = zlib.compressobj(level=zlib.Z_BEST_SPEED, wbits=-compress) + else: # self.compress + if not self._compressobj: + self._compressobj = zlib.compressobj( + level=zlib.Z_BEST_SPEED, wbits=-self.compress + ) + compressobj = self._compressobj + + message = compressobj.compress(message) + message = message + compressobj.flush( + zlib.Z_FULL_FLUSH if self.notakeover else zlib.Z_SYNC_FLUSH + ) + if message.endswith(_WS_DEFLATE_TRAILING): + message = message[:-4] + rsv = rsv | 0x40 + + msg_length = len(message) + + use_mask = self.use_mask + if use_mask: + mask_bit = 0x80 + else: + mask_bit = 0 + + if msg_length < 126: + header = PACK_LEN1(0x80 | rsv | opcode, msg_length | mask_bit) + elif msg_length < (1 << 16): + header = PACK_LEN2(0x80 | rsv | opcode, 126 | mask_bit, msg_length) + else: + header = PACK_LEN3(0x80 | rsv | opcode, 127 | mask_bit, msg_length) + if use_mask: + mask = self.randrange(0, 0xFFFFFFFF) + mask = mask.to_bytes(4, "big") + message = bytearray(message) + _websocket_mask(mask, message) + self._write(header + mask + message) + self._output_size += len(header) + len(mask) + len(message) + else: + if len(message) > MSG_SIZE: + self._write(header) + self._write(message) + else: + self._write(header + message) + + self._output_size += len(header) + len(message) + + if self._output_size > self._limit: + self._output_size = 0 + await self.protocol._drain_helper() + + def _write(self, data: bytes) -> None: + if self.transport is None or self.transport.is_closing(): + raise ConnectionResetError("Cannot write to closing transport") + self.transport.write(data) + + async def pong(self, message: bytes = b"") -> None: + """Send pong message.""" + if isinstance(message, str): + message = message.encode("utf-8") + await self._send_frame(message, WSMsgType.PONG) + + async def ping(self, message: bytes = b"") -> None: + """Send ping message.""" + if isinstance(message, str): + message = message.encode("utf-8") + await self._send_frame(message, WSMsgType.PING) + + async def send( + self, + message: Union[str, bytes], + binary: bool = False, + compress: Optional[int] = None, + ) -> None: + """Send a frame over the websocket with message as its payload.""" + if isinstance(message, str): + message = message.encode("utf-8") + if binary: + await self._send_frame(message, WSMsgType.BINARY, compress) + else: + await self._send_frame(message, WSMsgType.TEXT, compress) + + async def close(self, code: int = 1000, message: bytes = b"") -> None: + """Close the websocket, sending the specified code and message.""" + if isinstance(message, str): + message = message.encode("utf-8") + try: + await self._send_frame( + PACK_CLOSE_CODE(code) + message, opcode=WSMsgType.CLOSE + ) + finally: + self._closing = True diff --git a/env/lib/python3.8/site-packages/aiohttp/http_writer.py b/env/lib/python3.8/site-packages/aiohttp/http_writer.py new file mode 100644 index 00000000..db3d6a04 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/http_writer.py @@ -0,0 +1,200 @@ +"""Http related parsers and protocol.""" + +import asyncio +import zlib +from typing import Any, Awaitable, Callable, NamedTuple, Optional, Union # noqa + +from multidict import CIMultiDict + +from .abc import AbstractStreamWriter +from .base_protocol import BaseProtocol +from .helpers import NO_EXTENSIONS + +__all__ = ("StreamWriter", 
"HttpVersion", "HttpVersion10", "HttpVersion11") + + +class HttpVersion(NamedTuple): + major: int + minor: int + + +HttpVersion10 = HttpVersion(1, 0) +HttpVersion11 = HttpVersion(1, 1) + + +_T_OnChunkSent = Optional[Callable[[bytes], Awaitable[None]]] +_T_OnHeadersSent = Optional[Callable[["CIMultiDict[str]"], Awaitable[None]]] + + +class StreamWriter(AbstractStreamWriter): + def __init__( + self, + protocol: BaseProtocol, + loop: asyncio.AbstractEventLoop, + on_chunk_sent: _T_OnChunkSent = None, + on_headers_sent: _T_OnHeadersSent = None, + ) -> None: + self._protocol = protocol + self._transport = protocol.transport + + self.loop = loop + self.length = None + self.chunked = False + self.buffer_size = 0 + self.output_size = 0 + + self._eof = False + self._compress: Any = None + self._drain_waiter = None + + self._on_chunk_sent: _T_OnChunkSent = on_chunk_sent + self._on_headers_sent: _T_OnHeadersSent = on_headers_sent + + @property + def transport(self) -> Optional[asyncio.Transport]: + return self._transport + + @property + def protocol(self) -> BaseProtocol: + return self._protocol + + def enable_chunking(self) -> None: + self.chunked = True + + def enable_compression( + self, encoding: str = "deflate", strategy: int = zlib.Z_DEFAULT_STRATEGY + ) -> None: + zlib_mode = 16 + zlib.MAX_WBITS if encoding == "gzip" else zlib.MAX_WBITS + self._compress = zlib.compressobj(wbits=zlib_mode, strategy=strategy) + + def _write(self, chunk: bytes) -> None: + size = len(chunk) + self.buffer_size += size + self.output_size += size + + if self._transport is None or self._transport.is_closing(): + raise ConnectionResetError("Cannot write to closing transport") + self._transport.write(chunk) + + async def write( + self, chunk: bytes, *, drain: bool = True, LIMIT: int = 0x10000 + ) -> None: + """Writes chunk of data to a stream. + + write_eof() indicates end of stream. + writer can't be used after write_eof() method being called. + write() return drain future. 
+ """ + if self._on_chunk_sent is not None: + await self._on_chunk_sent(chunk) + + if isinstance(chunk, memoryview): + if chunk.nbytes != len(chunk): + # just reshape it + chunk = chunk.cast("c") + + if self._compress is not None: + chunk = self._compress.compress(chunk) + if not chunk: + return + + if self.length is not None: + chunk_len = len(chunk) + if self.length >= chunk_len: + self.length = self.length - chunk_len + else: + chunk = chunk[: self.length] + self.length = 0 + if not chunk: + return + + if chunk: + if self.chunked: + chunk_len_pre = ("%x\r\n" % len(chunk)).encode("ascii") + chunk = chunk_len_pre + chunk + b"\r\n" + + self._write(chunk) + + if self.buffer_size > LIMIT and drain: + self.buffer_size = 0 + await self.drain() + + async def write_headers( + self, status_line: str, headers: "CIMultiDict[str]" + ) -> None: + """Write request/response status and headers.""" + if self._on_headers_sent is not None: + await self._on_headers_sent(headers) + + # status + headers + buf = _serialize_headers(status_line, headers) + self._write(buf) + + async def write_eof(self, chunk: bytes = b"") -> None: + if self._eof: + return + + if chunk and self._on_chunk_sent is not None: + await self._on_chunk_sent(chunk) + + if self._compress: + if chunk: + chunk = self._compress.compress(chunk) + + chunk = chunk + self._compress.flush() + if chunk and self.chunked: + chunk_len = ("%x\r\n" % len(chunk)).encode("ascii") + chunk = chunk_len + chunk + b"\r\n0\r\n\r\n" + else: + if self.chunked: + if chunk: + chunk_len = ("%x\r\n" % len(chunk)).encode("ascii") + chunk = chunk_len + chunk + b"\r\n0\r\n\r\n" + else: + chunk = b"0\r\n\r\n" + + if chunk: + self._write(chunk) + + await self.drain() + + self._eof = True + self._transport = None + + async def drain(self) -> None: + """Flush the write buffer. + + The intended use is to write + + await w.write(data) + await w.drain() + """ + if self._protocol.transport is not None: + await self._protocol._drain_helper() + + +def _safe_header(string: str) -> str: + if "\r" in string or "\n" in string: + raise ValueError( + "Newline or carriage return detected in headers. " + "Potential header injection attack." + ) + return string + + +def _py_serialize_headers(status_line: str, headers: "CIMultiDict[str]") -> bytes: + headers_gen = (_safe_header(k) + ": " + _safe_header(v) for k, v in headers.items()) + line = status_line + "\r\n" + "\r\n".join(headers_gen) + "\r\n\r\n" + return line.encode("utf-8") + + +_serialize_headers = _py_serialize_headers + +try: + import aiohttp._http_writer as _http_writer # type: ignore[import] + + _c_serialize_headers = _http_writer._serialize_headers + if not NO_EXTENSIONS: + _serialize_headers = _c_serialize_headers +except ImportError: + pass diff --git a/env/lib/python3.8/site-packages/aiohttp/locks.py b/env/lib/python3.8/site-packages/aiohttp/locks.py new file mode 100644 index 00000000..de2dc83d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/locks.py @@ -0,0 +1,41 @@ +import asyncio +import collections +from typing import Any, Deque, Optional + + +class EventResultOrError: + """Event asyncio lock helper class. + + Wraps the Event asyncio lock allowing either to awake the + locked Tasks without any error or raising an exception. + + thanks to @vorpalsmith for the simple design. 
+ """ + + def __init__(self, loop: asyncio.AbstractEventLoop) -> None: + self._loop = loop + self._exc: Optional[BaseException] = None + self._event = asyncio.Event() + self._waiters: Deque[asyncio.Future[Any]] = collections.deque() + + def set(self, exc: Optional[BaseException] = None) -> None: + self._exc = exc + self._event.set() + + async def wait(self) -> Any: + waiter = self._loop.create_task(self._event.wait()) + self._waiters.append(waiter) + try: + val = await waiter + finally: + self._waiters.remove(waiter) + + if self._exc is not None: + raise self._exc + + return val + + def cancel(self) -> None: + """Cancel all waiters""" + for waiter in self._waiters: + waiter.cancel() diff --git a/env/lib/python3.8/site-packages/aiohttp/log.py b/env/lib/python3.8/site-packages/aiohttp/log.py new file mode 100644 index 00000000..3cecea2b --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/log.py @@ -0,0 +1,8 @@ +import logging + +access_logger = logging.getLogger("aiohttp.access") +client_logger = logging.getLogger("aiohttp.client") +internal_logger = logging.getLogger("aiohttp.internal") +server_logger = logging.getLogger("aiohttp.server") +web_logger = logging.getLogger("aiohttp.web") +ws_logger = logging.getLogger("aiohttp.websocket") diff --git a/env/lib/python3.8/site-packages/aiohttp/multipart.py b/env/lib/python3.8/site-packages/aiohttp/multipart.py new file mode 100644 index 00000000..73801f45 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/multipart.py @@ -0,0 +1,961 @@ +import base64 +import binascii +import json +import re +import uuid +import warnings +import zlib +from collections import deque +from types import TracebackType +from typing import ( + TYPE_CHECKING, + Any, + AsyncIterator, + Deque, + Dict, + Iterator, + List, + Mapping, + Optional, + Sequence, + Tuple, + Type, + Union, + cast, +) +from urllib.parse import parse_qsl, unquote, urlencode + +from multidict import CIMultiDict, CIMultiDictProxy, MultiMapping + +from .hdrs import ( + CONTENT_DISPOSITION, + CONTENT_ENCODING, + CONTENT_LENGTH, + CONTENT_TRANSFER_ENCODING, + CONTENT_TYPE, +) +from .helpers import CHAR, TOKEN, parse_mimetype, reify +from .http import HeadersParser +from .payload import ( + JsonPayload, + LookupError, + Order, + Payload, + StringPayload, + get_payload, + payload_type, +) +from .streams import StreamReader + +__all__ = ( + "MultipartReader", + "MultipartWriter", + "BodyPartReader", + "BadContentDispositionHeader", + "BadContentDispositionParam", + "parse_content_disposition", + "content_disposition_filename", +) + + +if TYPE_CHECKING: # pragma: no cover + from .client_reqrep import ClientResponse + + +class BadContentDispositionHeader(RuntimeWarning): + pass + + +class BadContentDispositionParam(RuntimeWarning): + pass + + +def parse_content_disposition( + header: Optional[str], +) -> Tuple[Optional[str], Dict[str, str]]: + def is_token(string: str) -> bool: + return bool(string) and TOKEN >= set(string) + + def is_quoted(string: str) -> bool: + return string[0] == string[-1] == '"' + + def is_rfc5987(string: str) -> bool: + return is_token(string) and string.count("'") == 2 + + def is_extended_param(string: str) -> bool: + return string.endswith("*") + + def is_continuous_param(string: str) -> bool: + pos = string.find("*") + 1 + if not pos: + return False + substring = string[pos:-1] if string.endswith("*") else string[pos:] + return substring.isdigit() + + def unescape(text: str, *, chars: str = "".join(map(re.escape, CHAR))) -> str: + return re.sub(f"\\\\([{chars}])", 
"\\1", text) + + if not header: + return None, {} + + disptype, *parts = header.split(";") + if not is_token(disptype): + warnings.warn(BadContentDispositionHeader(header)) + return None, {} + + params: Dict[str, str] = {} + while parts: + item = parts.pop(0) + + if "=" not in item: + warnings.warn(BadContentDispositionHeader(header)) + return None, {} + + key, value = item.split("=", 1) + key = key.lower().strip() + value = value.lstrip() + + if key in params: + warnings.warn(BadContentDispositionHeader(header)) + return None, {} + + if not is_token(key): + warnings.warn(BadContentDispositionParam(item)) + continue + + elif is_continuous_param(key): + if is_quoted(value): + value = unescape(value[1:-1]) + elif not is_token(value): + warnings.warn(BadContentDispositionParam(item)) + continue + + elif is_extended_param(key): + if is_rfc5987(value): + encoding, _, value = value.split("'", 2) + encoding = encoding or "utf-8" + else: + warnings.warn(BadContentDispositionParam(item)) + continue + + try: + value = unquote(value, encoding, "strict") + except UnicodeDecodeError: # pragma: nocover + warnings.warn(BadContentDispositionParam(item)) + continue + + else: + failed = True + if is_quoted(value): + failed = False + value = unescape(value[1:-1].lstrip("\\/")) + elif is_token(value): + failed = False + elif parts: + # maybe just ; in filename, in any case this is just + # one case fix, for proper fix we need to redesign parser + _value = f"{value};{parts[0]}" + if is_quoted(_value): + parts.pop(0) + value = unescape(_value[1:-1].lstrip("\\/")) + failed = False + + if failed: + warnings.warn(BadContentDispositionHeader(header)) + return None, {} + + params[key] = value + + return disptype.lower(), params + + +def content_disposition_filename( + params: Mapping[str, str], name: str = "filename" +) -> Optional[str]: + name_suf = "%s*" % name + if not params: + return None + elif name_suf in params: + return params[name_suf] + elif name in params: + return params[name] + else: + parts = [] + fnparams = sorted( + (key, value) for key, value in params.items() if key.startswith(name_suf) + ) + for num, (key, value) in enumerate(fnparams): + _, tail = key.split("*", 1) + if tail.endswith("*"): + tail = tail[:-1] + if tail == str(num): + parts.append(value) + else: + break + if not parts: + return None + value = "".join(parts) + if "'" in value: + encoding, _, value = value.split("'", 2) + encoding = encoding or "utf-8" + return unquote(value, encoding, "strict") + return value + + +class MultipartResponseWrapper: + """Wrapper around the MultipartReader. + + It takes care about + underlying connection and close it when it needs in. + """ + + def __init__( + self, + resp: "ClientResponse", + stream: "MultipartReader", + ) -> None: + self.resp = resp + self.stream = stream + + def __aiter__(self) -> "MultipartResponseWrapper": + return self + + async def __anext__( + self, + ) -> Union["MultipartReader", "BodyPartReader"]: + part = await self.next() + if part is None: + raise StopAsyncIteration + return part + + def at_eof(self) -> bool: + """Returns True when all response data had been read.""" + return self.resp.content.at_eof() + + async def next( + self, + ) -> Optional[Union["MultipartReader", "BodyPartReader"]]: + """Emits next multipart reader object.""" + item = await self.stream.next() + if self.stream.at_eof(): + await self.release() + return item + + async def release(self) -> None: + """Release the connection gracefully. + + All remaining content is read to the void. 
+ """ + await self.resp.release() + + +class BodyPartReader: + """Multipart reader for single body part.""" + + chunk_size = 8192 + + def __init__( + self, boundary: bytes, headers: "CIMultiDictProxy[str]", content: StreamReader + ) -> None: + self.headers = headers + self._boundary = boundary + self._content = content + self._at_eof = False + length = self.headers.get(CONTENT_LENGTH, None) + self._length = int(length) if length is not None else None + self._read_bytes = 0 + # TODO: typeing.Deque is not supported by Python 3.5 + self._unread: Deque[bytes] = deque() + self._prev_chunk: Optional[bytes] = None + self._content_eof = 0 + self._cache: Dict[str, Any] = {} + + def __aiter__(self) -> AsyncIterator["BodyPartReader"]: + return self # type: ignore[return-value] + + async def __anext__(self) -> bytes: + part = await self.next() + if part is None: + raise StopAsyncIteration + return part + + async def next(self) -> Optional[bytes]: + item = await self.read() + if not item: + return None + return item + + async def read(self, *, decode: bool = False) -> bytes: + """Reads body part data. + + decode: Decodes data following by encoding + method from Content-Encoding header. If it missed + data remains untouched + """ + if self._at_eof: + return b"" + data = bytearray() + while not self._at_eof: + data.extend(await self.read_chunk(self.chunk_size)) + if decode: + return self.decode(data) + return data + + async def read_chunk(self, size: int = chunk_size) -> bytes: + """Reads body part content chunk of the specified size. + + size: chunk size + """ + if self._at_eof: + return b"" + if self._length: + chunk = await self._read_chunk_from_length(size) + else: + chunk = await self._read_chunk_from_stream(size) + + self._read_bytes += len(chunk) + if self._read_bytes == self._length: + self._at_eof = True + if self._at_eof: + clrf = await self._content.readline() + assert ( + b"\r\n" == clrf + ), "reader did not read all the data or it is malformed" + return chunk + + async def _read_chunk_from_length(self, size: int) -> bytes: + # Reads body part content chunk of the specified size. + # The body part must has Content-Length header with proper value. + assert self._length is not None, "Content-Length required for chunked read" + chunk_size = min(size, self._length - self._read_bytes) + chunk = await self._content.read(chunk_size) + return chunk + + async def _read_chunk_from_stream(self, size: int) -> bytes: + # Reads content chunk of body part with unknown length. + # The Content-Length header for body part is not necessary. 
+        assert (
+            size >= len(self._boundary) + 2
+        ), "Chunk size must be greater than or equal to boundary length + 2"
+        first_chunk = self._prev_chunk is None
+        if first_chunk:
+            self._prev_chunk = await self._content.read(size)
+
+        chunk = await self._content.read(size)
+        self._content_eof += int(self._content.at_eof())
+        assert self._content_eof < 3, "Reading after EOF"
+        assert self._prev_chunk is not None
+        window = self._prev_chunk + chunk
+        sub = b"\r\n" + self._boundary
+        if first_chunk:
+            idx = window.find(sub)
+        else:
+            idx = window.find(sub, max(0, len(self._prev_chunk) - len(sub)))
+        if idx >= 0:
+            # pushing the boundary back to content
+            with warnings.catch_warnings():
+                warnings.filterwarnings("ignore", category=DeprecationWarning)
+                self._content.unread_data(window[idx:])
+            if size > idx:
+                self._prev_chunk = self._prev_chunk[:idx]
+            chunk = window[len(self._prev_chunk) : idx]
+            if not chunk:
+                self._at_eof = True
+        result = self._prev_chunk
+        self._prev_chunk = chunk
+        return result
+
+    async def readline(self) -> bytes:
+        """Reads the body part line by line."""
+        if self._at_eof:
+            return b""
+
+        if self._unread:
+            line = self._unread.popleft()
+        else:
+            line = await self._content.readline()
+
+        if line.startswith(self._boundary):
+            # the very last boundary may not come with \r\n,
+            # so set a single rule for everyone
+            sline = line.rstrip(b"\r\n")
+            boundary = self._boundary
+            last_boundary = self._boundary + b"--"
+            # ensure that we read exactly the boundary, not something alike
+            if sline == boundary or sline == last_boundary:
+                self._at_eof = True
+                self._unread.append(line)
+                return b""
+        else:
+            next_line = await self._content.readline()
+            if next_line.startswith(self._boundary):
+                line = line[:-2]  # strip CRLF but only once
+            self._unread.append(next_line)
+
+        return line
+
+    async def release(self) -> None:
+        """Like read(), but reads all the data to the void."""
+        if self._at_eof:
+            return
+        while not self._at_eof:
+            await self.read_chunk(self.chunk_size)
+
+    async def text(self, *, encoding: Optional[str] = None) -> str:
+        """Like read(), but assumes that the body part contains text data."""
+        data = await self.read(decode=True)
+        # see https://www.w3.org/TR/html5/forms.html#multipart/form-data-encoding-algorithm  # NOQA
+        # and https://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#dom-xmlhttprequest-send  # NOQA
+        encoding = encoding or self.get_charset(default="utf-8")
+        return data.decode(encoding)
+
+    async def json(self, *, encoding: Optional[str] = None) -> Optional[Dict[str, Any]]:
+        """Like read(), but assumes that the body part contains JSON data."""
+        data = await self.read(decode=True)
+        if not data:
+            return None
+        encoding = encoding or self.get_charset(default="utf-8")
+        return cast(Dict[str, Any], json.loads(data.decode(encoding)))
+
+    async def form(self, *, encoding: Optional[str] = None) -> List[Tuple[str, str]]:
+        """Like read(), but assumes that the body part contains form urlencoded data."""
+        data = await self.read(decode=True)
+        if not data:
+            return []
+        if encoding is not None:
+            real_encoding = encoding
+        else:
+            real_encoding = self.get_charset(default="utf-8")
+        return parse_qsl(
+            data.rstrip().decode(real_encoding),
+            keep_blank_values=True,
+            encoding=real_encoding,
+        )
+
+    def at_eof(self) -> bool:
+        """Returns True if the boundary was reached, False otherwise."""
+        return self._at_eof
+
+    def decode(self, data: bytes) -> bytes:
+        """Decodes data.
+
+        Decoding is done according to the specified Content-Encoding
+        or Content-Transfer-Encoding header value.
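+
+        For example, a part sent with ``Content-Transfer-Encoding: base64``
+        is base64-decoded first; any ``Content-Encoding`` (``gzip`` or
+        ``deflate``) is decompressed afterwards.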
+ """ + if CONTENT_TRANSFER_ENCODING in self.headers: + data = self._decode_content_transfer(data) + if CONTENT_ENCODING in self.headers: + return self._decode_content(data) + return data + + def _decode_content(self, data: bytes) -> bytes: + encoding = self.headers.get(CONTENT_ENCODING, "").lower() + + if encoding == "deflate": + return zlib.decompress(data, -zlib.MAX_WBITS) + elif encoding == "gzip": + return zlib.decompress(data, 16 + zlib.MAX_WBITS) + elif encoding == "identity": + return data + else: + raise RuntimeError(f"unknown content encoding: {encoding}") + + def _decode_content_transfer(self, data: bytes) -> bytes: + encoding = self.headers.get(CONTENT_TRANSFER_ENCODING, "").lower() + + if encoding == "base64": + return base64.b64decode(data) + elif encoding == "quoted-printable": + return binascii.a2b_qp(data) + elif encoding in ("binary", "8bit", "7bit"): + return data + else: + raise RuntimeError( + "unknown content transfer encoding: {}" "".format(encoding) + ) + + def get_charset(self, default: str) -> str: + """Returns charset parameter from Content-Type header or default.""" + ctype = self.headers.get(CONTENT_TYPE, "") + mimetype = parse_mimetype(ctype) + return mimetype.parameters.get("charset", default) + + @reify + def name(self) -> Optional[str]: + """Returns name specified in Content-Disposition header. + + If the header is missing or malformed, returns None. + """ + _, params = parse_content_disposition(self.headers.get(CONTENT_DISPOSITION)) + return content_disposition_filename(params, "name") + + @reify + def filename(self) -> Optional[str]: + """Returns filename specified in Content-Disposition header. + + Returns None if the header is missing or malformed. + """ + _, params = parse_content_disposition(self.headers.get(CONTENT_DISPOSITION)) + return content_disposition_filename(params, "filename") + + +@payload_type(BodyPartReader, order=Order.try_first) +class BodyPartReaderPayload(Payload): + def __init__(self, value: BodyPartReader, *args: Any, **kwargs: Any) -> None: + super().__init__(value, *args, **kwargs) + + params: Dict[str, str] = {} + if value.name is not None: + params["name"] = value.name + if value.filename is not None: + params["filename"] = value.filename + + if params: + self.set_content_disposition("attachment", True, **params) + + async def write(self, writer: Any) -> None: + field = self._value + chunk = await field.read_chunk(size=2**16) + while chunk: + await writer.write(field.decode(chunk)) + chunk = await field.read_chunk(size=2**16) + + +class MultipartReader: + """Multipart body reader.""" + + #: Response wrapper, used when multipart readers constructs from response. + response_wrapper_cls = MultipartResponseWrapper + #: Multipart reader class, used to handle multipart/* body parts. + #: None points to type(self) + multipart_reader_cls = None + #: Body part reader class for non multipart/* content types. 
+ part_reader_cls = BodyPartReader + + def __init__(self, headers: Mapping[str, str], content: StreamReader) -> None: + self.headers = headers + self._boundary = ("--" + self._get_boundary()).encode() + self._content = content + self._last_part: Optional[Union["MultipartReader", BodyPartReader]] = None + self._at_eof = False + self._at_bof = True + self._unread: List[bytes] = [] + + def __aiter__( + self, + ) -> AsyncIterator["BodyPartReader"]: + return self # type: ignore[return-value] + + async def __anext__( + self, + ) -> Optional[Union["MultipartReader", BodyPartReader]]: + part = await self.next() + if part is None: + raise StopAsyncIteration + return part + + @classmethod + def from_response( + cls, + response: "ClientResponse", + ) -> MultipartResponseWrapper: + """Constructs reader instance from HTTP response. + + :param response: :class:`~aiohttp.client.ClientResponse` instance + """ + obj = cls.response_wrapper_cls( + response, cls(response.headers, response.content) + ) + return obj + + def at_eof(self) -> bool: + """Returns True if the final boundary was reached, false otherwise.""" + return self._at_eof + + async def next( + self, + ) -> Optional[Union["MultipartReader", BodyPartReader]]: + """Emits the next multipart body part.""" + # So, if we're at BOF, we need to skip till the boundary. + if self._at_eof: + return None + await self._maybe_release_last_part() + if self._at_bof: + await self._read_until_first_boundary() + self._at_bof = False + else: + await self._read_boundary() + if self._at_eof: # we just read the last boundary, nothing to do there + return None + self._last_part = await self.fetch_next_part() + return self._last_part + + async def release(self) -> None: + """Reads all the body parts to the void till the final boundary.""" + while not self._at_eof: + item = await self.next() + if item is None: + break + await item.release() + + async def fetch_next_part( + self, + ) -> Union["MultipartReader", BodyPartReader]: + """Returns the next body part reader.""" + headers = await self._read_headers() + return self._get_part_reader(headers) + + def _get_part_reader( + self, + headers: "CIMultiDictProxy[str]", + ) -> Union["MultipartReader", BodyPartReader]: + """Dispatches the response by the `Content-Type` header. + + Returns a suitable reader instance. 
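+
+        ``multipart/*`` parts are handled recursively by a nested multipart
+        reader (``multipart_reader_cls``, or this class when that is None);
+        any other content type is delegated to ``part_reader_cls``.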
+ + :param dict headers: Response headers + """ + ctype = headers.get(CONTENT_TYPE, "") + mimetype = parse_mimetype(ctype) + + if mimetype.type == "multipart": + if self.multipart_reader_cls is None: + return type(self)(headers, self._content) + return self.multipart_reader_cls(headers, self._content) + else: + return self.part_reader_cls(self._boundary, headers, self._content) + + def _get_boundary(self) -> str: + mimetype = parse_mimetype(self.headers[CONTENT_TYPE]) + + assert mimetype.type == "multipart", "multipart/* content type expected" + + if "boundary" not in mimetype.parameters: + raise ValueError( + "boundary missed for Content-Type: %s" % self.headers[CONTENT_TYPE] + ) + + boundary = mimetype.parameters["boundary"] + if len(boundary) > 70: + raise ValueError("boundary %r is too long (70 chars max)" % boundary) + + return boundary + + async def _readline(self) -> bytes: + if self._unread: + return self._unread.pop() + return await self._content.readline() + + async def _read_until_first_boundary(self) -> None: + while True: + chunk = await self._readline() + if chunk == b"": + raise ValueError( + "Could not find starting boundary %r" % (self._boundary) + ) + chunk = chunk.rstrip() + if chunk == self._boundary: + return + elif chunk == self._boundary + b"--": + self._at_eof = True + return + + async def _read_boundary(self) -> None: + chunk = (await self._readline()).rstrip() + if chunk == self._boundary: + pass + elif chunk == self._boundary + b"--": + self._at_eof = True + epilogue = await self._readline() + next_line = await self._readline() + + # the epilogue is expected and then either the end of input or the + # parent multipart boundary, if the parent boundary is found then + # it should be marked as unread and handed to the parent for + # processing + if next_line[:2] == b"--": + self._unread.append(next_line) + # otherwise the request is likely missing an epilogue and both + # lines should be passed to the parent for processing + # (this handles the old behavior gracefully) + else: + self._unread.extend([next_line, epilogue]) + else: + raise ValueError(f"Invalid boundary {chunk!r}, expected {self._boundary!r}") + + async def _read_headers(self) -> "CIMultiDictProxy[str]": + lines = [b""] + while True: + chunk = await self._content.readline() + chunk = chunk.strip() + lines.append(chunk) + if not chunk: + break + parser = HeadersParser() + headers, raw_headers = parser.parse_headers(lines) + return headers + + async def _maybe_release_last_part(self) -> None: + """Ensures that the last read body part is read completely.""" + if self._last_part is not None: + if not self._last_part.at_eof(): + await self._last_part.release() + self._unread.extend(self._last_part._unread) + self._last_part = None + + +_Part = Tuple[Payload, str, str] + + +class MultipartWriter(Payload): + """Multipart body writer.""" + + def __init__(self, subtype: str = "mixed", boundary: Optional[str] = None) -> None: + boundary = boundary if boundary is not None else uuid.uuid4().hex + # The underlying Payload API demands a str (utf-8), not bytes, + # so we need to ensure we don't lose anything during conversion. + # As a result, require the boundary to be ASCII only. + # In both situations. 
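+        # (uuid4().hex, the default above, is always ASCII; only a
+        # caller-supplied boundary can trip the ValueError below.)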
+ + try: + self._boundary = boundary.encode("ascii") + except UnicodeEncodeError: + raise ValueError("boundary should contain ASCII only chars") from None + ctype = f"multipart/{subtype}; boundary={self._boundary_value}" + + super().__init__(None, content_type=ctype) + + self._parts: List[_Part] = [] + + def __enter__(self) -> "MultipartWriter": + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + pass + + def __iter__(self) -> Iterator[_Part]: + return iter(self._parts) + + def __len__(self) -> int: + return len(self._parts) + + def __bool__(self) -> bool: + return True + + _valid_tchar_regex = re.compile(rb"\A[!#$%&'*+\-.^_`|~\w]+\Z") + _invalid_qdtext_char_regex = re.compile(rb"[\x00-\x08\x0A-\x1F\x7F]") + + @property + def _boundary_value(self) -> str: + """Wrap boundary parameter value in quotes, if necessary. + + Reads self.boundary and returns a unicode sting. + """ + # Refer to RFCs 7231, 7230, 5234. + # + # parameter = token "=" ( token / quoted-string ) + # token = 1*tchar + # quoted-string = DQUOTE *( qdtext / quoted-pair ) DQUOTE + # qdtext = HTAB / SP / %x21 / %x23-5B / %x5D-7E / obs-text + # obs-text = %x80-FF + # quoted-pair = "\" ( HTAB / SP / VCHAR / obs-text ) + # tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" + # / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" + # / DIGIT / ALPHA + # ; any VCHAR, except delimiters + # VCHAR = %x21-7E + value = self._boundary + if re.match(self._valid_tchar_regex, value): + return value.decode("ascii") # cannot fail + + if re.search(self._invalid_qdtext_char_regex, value): + raise ValueError("boundary value contains invalid characters") + + # escape %x5C and %x22 + quoted_value_content = value.replace(b"\\", b"\\\\") + quoted_value_content = quoted_value_content.replace(b'"', b'\\"') + + return '"' + quoted_value_content.decode("ascii") + '"' + + @property + def boundary(self) -> str: + return self._boundary.decode("ascii") + + def append(self, obj: Any, headers: Optional[MultiMapping[str]] = None) -> Payload: + if headers is None: + headers = CIMultiDict() + + if isinstance(obj, Payload): + obj.headers.update(headers) + return self.append_payload(obj) + else: + try: + payload = get_payload(obj, headers=headers) + except LookupError: + raise TypeError("Cannot create payload from %r" % obj) + else: + return self.append_payload(payload) + + def append_payload(self, payload: Payload) -> Payload: + """Adds a new body part to multipart writer.""" + # compression + encoding: Optional[str] = payload.headers.get( + CONTENT_ENCODING, + "", + ).lower() + if encoding and encoding not in ("deflate", "gzip", "identity"): + raise RuntimeError(f"unknown content encoding: {encoding}") + if encoding == "identity": + encoding = None + + # te encoding + te_encoding: Optional[str] = payload.headers.get( + CONTENT_TRANSFER_ENCODING, + "", + ).lower() + if te_encoding not in ("", "base64", "quoted-printable", "binary"): + raise RuntimeError( + "unknown content transfer encoding: {}" "".format(te_encoding) + ) + if te_encoding == "binary": + te_encoding = None + + # size + size = payload.size + if size is not None and not (encoding or te_encoding): + payload.headers[CONTENT_LENGTH] = str(size) + + self._parts.append((payload, encoding, te_encoding)) # type: ignore[arg-type] + return payload + + def append_json( + self, obj: Any, headers: Optional[MultiMapping[str]] = None + ) -> Payload: + """Helper to append JSON part.""" + if headers is None: + 
headers = CIMultiDict() + + return self.append_payload(JsonPayload(obj, headers=headers)) + + def append_form( + self, + obj: Union[Sequence[Tuple[str, str]], Mapping[str, str]], + headers: Optional[MultiMapping[str]] = None, + ) -> Payload: + """Helper to append form urlencoded part.""" + assert isinstance(obj, (Sequence, Mapping)) + + if headers is None: + headers = CIMultiDict() + + if isinstance(obj, Mapping): + obj = list(obj.items()) + data = urlencode(obj, doseq=True) + + return self.append_payload( + StringPayload( + data, headers=headers, content_type="application/x-www-form-urlencoded" + ) + ) + + @property + def size(self) -> Optional[int]: + """Size of the payload.""" + total = 0 + for part, encoding, te_encoding in self._parts: + if encoding or te_encoding or part.size is None: + return None + + total += int( + 2 + + len(self._boundary) + + 2 + + part.size # b'--'+self._boundary+b'\r\n' + + len(part._binary_headers) + + 2 # b'\r\n' + ) + + total += 2 + len(self._boundary) + 4 # b'--'+self._boundary+b'--\r\n' + return total + + async def write(self, writer: Any, close_boundary: bool = True) -> None: + """Write body.""" + for part, encoding, te_encoding in self._parts: + await writer.write(b"--" + self._boundary + b"\r\n") + await writer.write(part._binary_headers) + + if encoding or te_encoding: + w = MultipartPayloadWriter(writer) + if encoding: + w.enable_compression(encoding) + if te_encoding: + w.enable_encoding(te_encoding) + await part.write(w) # type: ignore[arg-type] + await w.write_eof() + else: + await part.write(writer) + + await writer.write(b"\r\n") + + if close_boundary: + await writer.write(b"--" + self._boundary + b"--\r\n") + + +class MultipartPayloadWriter: + def __init__(self, writer: Any) -> None: + self._writer = writer + self._encoding: Optional[str] = None + self._compress: Any = None + self._encoding_buffer: Optional[bytearray] = None + + def enable_encoding(self, encoding: str) -> None: + if encoding == "base64": + self._encoding = encoding + self._encoding_buffer = bytearray() + elif encoding == "quoted-printable": + self._encoding = "quoted-printable" + + def enable_compression( + self, encoding: str = "deflate", strategy: int = zlib.Z_DEFAULT_STRATEGY + ) -> None: + zlib_mode = 16 + zlib.MAX_WBITS if encoding == "gzip" else -zlib.MAX_WBITS + self._compress = zlib.compressobj(wbits=zlib_mode, strategy=strategy) + + async def write_eof(self) -> None: + if self._compress is not None: + chunk = self._compress.flush() + if chunk: + self._compress = None + await self.write(chunk) + + if self._encoding == "base64": + if self._encoding_buffer: + await self._writer.write(base64.b64encode(self._encoding_buffer)) + + async def write(self, chunk: bytes) -> None: + if self._compress is not None: + if chunk: + chunk = self._compress.compress(chunk) + if not chunk: + return + + if self._encoding == "base64": + buf = self._encoding_buffer + assert buf is not None + buf.extend(chunk) + + if buf: + div, mod = divmod(len(buf), 3) + enc_chunk, self._encoding_buffer = (buf[: div * 3], buf[div * 3 :]) + if enc_chunk: + b64chunk = base64.b64encode(enc_chunk) + await self._writer.write(b64chunk) + elif self._encoding == "quoted-printable": + await self._writer.write(binascii.b2a_qp(chunk)) + else: + await self._writer.write(chunk) diff --git a/env/lib/python3.8/site-packages/aiohttp/payload.py b/env/lib/python3.8/site-packages/aiohttp/payload.py new file mode 100644 index 00000000..625b2eac --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/payload.py @@ -0,0 
+1,465 @@ +import asyncio +import enum +import io +import json +import mimetypes +import os +import warnings +from abc import ABC, abstractmethod +from itertools import chain +from typing import ( + IO, + TYPE_CHECKING, + Any, + ByteString, + Dict, + Iterable, + Optional, + TextIO, + Tuple, + Type, + Union, +) + +from multidict import CIMultiDict + +from . import hdrs +from .abc import AbstractStreamWriter +from .helpers import ( + PY_36, + content_disposition_header, + guess_filename, + parse_mimetype, + sentinel, +) +from .streams import StreamReader +from .typedefs import Final, JSONEncoder, _CIMultiDict + +__all__ = ( + "PAYLOAD_REGISTRY", + "get_payload", + "payload_type", + "Payload", + "BytesPayload", + "StringPayload", + "IOBasePayload", + "BytesIOPayload", + "BufferedReaderPayload", + "TextIOPayload", + "StringIOPayload", + "JsonPayload", + "AsyncIterablePayload", +) + +TOO_LARGE_BYTES_BODY: Final[int] = 2**20 # 1 MB + +if TYPE_CHECKING: # pragma: no cover + from typing import List + + +class LookupError(Exception): + pass + + +class Order(str, enum.Enum): + normal = "normal" + try_first = "try_first" + try_last = "try_last" + + +def get_payload(data: Any, *args: Any, **kwargs: Any) -> "Payload": + return PAYLOAD_REGISTRY.get(data, *args, **kwargs) + + +def register_payload( + factory: Type["Payload"], type: Any, *, order: Order = Order.normal +) -> None: + PAYLOAD_REGISTRY.register(factory, type, order=order) + + +class payload_type: + def __init__(self, type: Any, *, order: Order = Order.normal) -> None: + self.type = type + self.order = order + + def __call__(self, factory: Type["Payload"]) -> Type["Payload"]: + register_payload(factory, self.type, order=self.order) + return factory + + +PayloadType = Type["Payload"] +_PayloadRegistryItem = Tuple[PayloadType, Any] + + +class PayloadRegistry: + """Payload registry. 
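+
+    A rough usage sketch (the ``BytesPayload`` pairing mirrors the
+    module-level registrations at the bottom of this file)::
+
+        registry = PayloadRegistry()
+        registry.register(BytesPayload, (bytes, bytearray, memoryview))
+        payload = registry.get(b"data")  # dispatches to BytesPayload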
+ + note: we need zope.interface for more efficient adapter search + """ + + def __init__(self) -> None: + self._first: List[_PayloadRegistryItem] = [] + self._normal: List[_PayloadRegistryItem] = [] + self._last: List[_PayloadRegistryItem] = [] + + def get( + self, + data: Any, + *args: Any, + _CHAIN: "Type[chain[_PayloadRegistryItem]]" = chain, + **kwargs: Any, + ) -> "Payload": + if isinstance(data, Payload): + return data + for factory, type in _CHAIN(self._first, self._normal, self._last): + if isinstance(data, type): + return factory(data, *args, **kwargs) + + raise LookupError() + + def register( + self, factory: PayloadType, type: Any, *, order: Order = Order.normal + ) -> None: + if order is Order.try_first: + self._first.append((factory, type)) + elif order is Order.normal: + self._normal.append((factory, type)) + elif order is Order.try_last: + self._last.append((factory, type)) + else: + raise ValueError(f"Unsupported order {order!r}") + + +class Payload(ABC): + + _default_content_type: str = "application/octet-stream" + _size: Optional[int] = None + + def __init__( + self, + value: Any, + headers: Optional[ + Union[_CIMultiDict, Dict[str, str], Iterable[Tuple[str, str]]] + ] = None, + content_type: Optional[str] = sentinel, + filename: Optional[str] = None, + encoding: Optional[str] = None, + **kwargs: Any, + ) -> None: + self._encoding = encoding + self._filename = filename + self._headers: _CIMultiDict = CIMultiDict() + self._value = value + if content_type is not sentinel and content_type is not None: + self._headers[hdrs.CONTENT_TYPE] = content_type + elif self._filename is not None: + content_type = mimetypes.guess_type(self._filename)[0] + if content_type is None: + content_type = self._default_content_type + self._headers[hdrs.CONTENT_TYPE] = content_type + else: + self._headers[hdrs.CONTENT_TYPE] = self._default_content_type + self._headers.update(headers or {}) + + @property + def size(self) -> Optional[int]: + """Size of the payload.""" + return self._size + + @property + def filename(self) -> Optional[str]: + """Filename of the payload.""" + return self._filename + + @property + def headers(self) -> _CIMultiDict: + """Custom item headers""" + return self._headers + + @property + def _binary_headers(self) -> bytes: + return ( + "".join([k + ": " + v + "\r\n" for k, v in self.headers.items()]).encode( + "utf-8" + ) + + b"\r\n" + ) + + @property + def encoding(self) -> Optional[str]: + """Payload encoding""" + return self._encoding + + @property + def content_type(self) -> str: + """Content type""" + return self._headers[hdrs.CONTENT_TYPE] + + def set_content_disposition( + self, + disptype: str, + quote_fields: bool = True, + _charset: str = "utf-8", + **params: Any, + ) -> None: + """Sets ``Content-Disposition`` header.""" + self._headers[hdrs.CONTENT_DISPOSITION] = content_disposition_header( + disptype, quote_fields=quote_fields, _charset=_charset, **params + ) + + @abstractmethod + async def write(self, writer: AbstractStreamWriter) -> None: + """Write payload. 
+ + writer is an AbstractStreamWriter instance: + """ + + +class BytesPayload(Payload): + def __init__(self, value: ByteString, *args: Any, **kwargs: Any) -> None: + if not isinstance(value, (bytes, bytearray, memoryview)): + raise TypeError(f"value argument must be byte-ish, not {type(value)!r}") + + if "content_type" not in kwargs: + kwargs["content_type"] = "application/octet-stream" + + super().__init__(value, *args, **kwargs) + + if isinstance(value, memoryview): + self._size = value.nbytes + else: + self._size = len(value) + + if self._size > TOO_LARGE_BYTES_BODY: + if PY_36: + kwargs = {"source": self} + else: + kwargs = {} + warnings.warn( + "Sending a large body directly with raw bytes might" + " lock the event loop. You should probably pass an " + "io.BytesIO object instead", + ResourceWarning, + **kwargs, + ) + + async def write(self, writer: AbstractStreamWriter) -> None: + await writer.write(self._value) + + +class StringPayload(BytesPayload): + def __init__( + self, + value: str, + *args: Any, + encoding: Optional[str] = None, + content_type: Optional[str] = None, + **kwargs: Any, + ) -> None: + + if encoding is None: + if content_type is None: + real_encoding = "utf-8" + content_type = "text/plain; charset=utf-8" + else: + mimetype = parse_mimetype(content_type) + real_encoding = mimetype.parameters.get("charset", "utf-8") + else: + if content_type is None: + content_type = "text/plain; charset=%s" % encoding + real_encoding = encoding + + super().__init__( + value.encode(real_encoding), + encoding=real_encoding, + content_type=content_type, + *args, + **kwargs, + ) + + +class StringIOPayload(StringPayload): + def __init__(self, value: IO[str], *args: Any, **kwargs: Any) -> None: + super().__init__(value.read(), *args, **kwargs) + + +class IOBasePayload(Payload): + _value: IO[Any] + + def __init__( + self, value: IO[Any], disposition: str = "attachment", *args: Any, **kwargs: Any + ) -> None: + if "filename" not in kwargs: + kwargs["filename"] = guess_filename(value) + + super().__init__(value, *args, **kwargs) + + if self._filename is not None and disposition is not None: + if hdrs.CONTENT_DISPOSITION not in self.headers: + self.set_content_disposition(disposition, filename=self._filename) + + async def write(self, writer: AbstractStreamWriter) -> None: + loop = asyncio.get_event_loop() + try: + chunk = await loop.run_in_executor(None, self._value.read, 2**16) + while chunk: + await writer.write(chunk) + chunk = await loop.run_in_executor(None, self._value.read, 2**16) + finally: + await loop.run_in_executor(None, self._value.close) + + +class TextIOPayload(IOBasePayload): + _value: TextIO + + def __init__( + self, + value: TextIO, + *args: Any, + encoding: Optional[str] = None, + content_type: Optional[str] = None, + **kwargs: Any, + ) -> None: + + if encoding is None: + if content_type is None: + encoding = "utf-8" + content_type = "text/plain; charset=utf-8" + else: + mimetype = parse_mimetype(content_type) + encoding = mimetype.parameters.get("charset", "utf-8") + else: + if content_type is None: + content_type = "text/plain; charset=%s" % encoding + + super().__init__( + value, + content_type=content_type, + encoding=encoding, + *args, + **kwargs, + ) + + @property + def size(self) -> Optional[int]: + try: + return os.fstat(self._value.fileno()).st_size - self._value.tell() + except OSError: + return None + + async def write(self, writer: AbstractStreamWriter) -> None: + loop = asyncio.get_event_loop() + try: + chunk = await loop.run_in_executor(None, self._value.read, 
2**16)
+            while chunk:
+                data = (
+                    chunk.encode(encoding=self._encoding)
+                    if self._encoding
+                    else chunk.encode()
+                )
+                await writer.write(data)
+                chunk = await loop.run_in_executor(None, self._value.read, 2**16)
+        finally:
+            await loop.run_in_executor(None, self._value.close)
+
+
+class BytesIOPayload(IOBasePayload):
+    @property
+    def size(self) -> int:
+        position = self._value.tell()
+        end = self._value.seek(0, os.SEEK_END)
+        self._value.seek(position)
+        return end - position
+
+
+class BufferedReaderPayload(IOBasePayload):
+    @property
+    def size(self) -> Optional[int]:
+        try:
+            return os.fstat(self._value.fileno()).st_size - self._value.tell()
+        except OSError:
+            # data.fileno() is not supported, e.g.
+            # io.BufferedReader(io.BytesIO(b'data'))
+            return None
+
+
+class JsonPayload(BytesPayload):
+    def __init__(
+        self,
+        value: Any,
+        encoding: str = "utf-8",
+        content_type: str = "application/json",
+        dumps: JSONEncoder = json.dumps,
+        *args: Any,
+        **kwargs: Any,
+    ) -> None:
+
+        super().__init__(
+            dumps(value).encode(encoding),
+            content_type=content_type,
+            encoding=encoding,
+            *args,
+            **kwargs,
+        )
+
+
+if TYPE_CHECKING:  # pragma: no cover
+    from typing import AsyncIterable, AsyncIterator
+
+    _AsyncIterator = AsyncIterator[bytes]
+    _AsyncIterable = AsyncIterable[bytes]
+else:
+    from collections.abc import AsyncIterable, AsyncIterator
+
+    _AsyncIterator = AsyncIterator
+    _AsyncIterable = AsyncIterable
+
+
+class AsyncIterablePayload(Payload):
+
+    _iter: Optional[_AsyncIterator] = None
+
+    def __init__(self, value: _AsyncIterable, *args: Any, **kwargs: Any) -> None:
+        if not isinstance(value, AsyncIterable):
+            raise TypeError(
+                "value argument must support "
+                "collections.abc.AsyncIterable interface, "
+                "got {!r}".format(type(value))
+            )
+
+        if "content_type" not in kwargs:
+            kwargs["content_type"] = "application/octet-stream"
+
+        super().__init__(value, *args, **kwargs)
+
+        self._iter = value.__aiter__()
+
+    async def write(self, writer: AbstractStreamWriter) -> None:
+        if self._iter:
+            try:
+                # the "iter is not None" check prevents rare cases
+                # when the same iterable is used twice
+                while True:
+                    chunk = await self._iter.__anext__()
+                    await writer.write(chunk)
+            except StopAsyncIteration:
+                self._iter = None
+
+
+class StreamReaderPayload(AsyncIterablePayload):
+    def __init__(self, value: StreamReader, *args: Any, **kwargs: Any) -> None:
+        super().__init__(value.iter_any(), *args, **kwargs)
+
+
+PAYLOAD_REGISTRY = PayloadRegistry()
+PAYLOAD_REGISTRY.register(BytesPayload, (bytes, bytearray, memoryview))
+PAYLOAD_REGISTRY.register(StringPayload, str)
+PAYLOAD_REGISTRY.register(StringIOPayload, io.StringIO)
+PAYLOAD_REGISTRY.register(TextIOPayload, io.TextIOBase)
+PAYLOAD_REGISTRY.register(BytesIOPayload, io.BytesIO)
+PAYLOAD_REGISTRY.register(BufferedReaderPayload, (io.BufferedReader, io.BufferedRandom))
+PAYLOAD_REGISTRY.register(IOBasePayload, io.IOBase)
+PAYLOAD_REGISTRY.register(StreamReaderPayload, StreamReader)
+# try_last gives more specialized async iterables like
+# BodyPartReaderPayload a chance to override the default
+PAYLOAD_REGISTRY.register(AsyncIterablePayload, AsyncIterable, order=Order.try_last)
diff --git a/env/lib/python3.8/site-packages/aiohttp/payload_streamer.py b/env/lib/python3.8/site-packages/aiohttp/payload_streamer.py
new file mode 100644
index 00000000..9f8b8bc5
--- /dev/null
+++ b/env/lib/python3.8/site-packages/aiohttp/payload_streamer.py
@@ -0,0 +1,75 @@
+"""
+Payload implementation for coroutines as data provider.
+ +As a simple case, you can upload data from file:: + + @aiohttp.streamer + async def file_sender(writer, file_name=None): + with open(file_name, 'rb') as f: + chunk = f.read(2**16) + while chunk: + await writer.write(chunk) + + chunk = f.read(2**16) + +Then you can use `file_sender` like this: + + async with session.post('http://httpbin.org/post', + data=file_sender(file_name='huge_file')) as resp: + print(await resp.text()) + +..note:: Coroutine must accept `writer` as first argument + +""" + +import types +import warnings +from typing import Any, Awaitable, Callable, Dict, Tuple + +from .abc import AbstractStreamWriter +from .payload import Payload, payload_type + +__all__ = ("streamer",) + + +class _stream_wrapper: + def __init__( + self, + coro: Callable[..., Awaitable[None]], + args: Tuple[Any, ...], + kwargs: Dict[str, Any], + ) -> None: + self.coro = types.coroutine(coro) + self.args = args + self.kwargs = kwargs + + async def __call__(self, writer: AbstractStreamWriter) -> None: + await self.coro(writer, *self.args, **self.kwargs) # type: ignore[operator] + + +class streamer: + def __init__(self, coro: Callable[..., Awaitable[None]]) -> None: + warnings.warn( + "@streamer is deprecated, use async generators instead", + DeprecationWarning, + stacklevel=2, + ) + self.coro = coro + + def __call__(self, *args: Any, **kwargs: Any) -> _stream_wrapper: + return _stream_wrapper(self.coro, args, kwargs) + + +@payload_type(_stream_wrapper) +class StreamWrapperPayload(Payload): + async def write(self, writer: AbstractStreamWriter) -> None: + await self._value(writer) + + +@payload_type(streamer) +class StreamPayload(StreamWrapperPayload): + def __init__(self, value: Any, *args: Any, **kwargs: Any) -> None: + super().__init__(value(), *args, **kwargs) + + async def write(self, writer: AbstractStreamWriter) -> None: + await self._value(writer) diff --git a/env/lib/python3.8/site-packages/aiohttp/py.typed b/env/lib/python3.8/site-packages/aiohttp/py.typed new file mode 100644 index 00000000..f5642f79 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/py.typed @@ -0,0 +1 @@ +Marker diff --git a/env/lib/python3.8/site-packages/aiohttp/pytest_plugin.py b/env/lib/python3.8/site-packages/aiohttp/pytest_plugin.py new file mode 100644 index 00000000..dd9a9f61 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/pytest_plugin.py @@ -0,0 +1,391 @@ +import asyncio +import contextlib +import warnings +from collections.abc import Callable +from typing import Any, Awaitable, Callable, Dict, Generator, Optional, Union + +import pytest + +from aiohttp.helpers import PY_37, isasyncgenfunction +from aiohttp.web import Application + +from .test_utils import ( + BaseTestServer, + RawTestServer, + TestClient, + TestServer, + loop_context, + setup_test_loop, + teardown_test_loop, + unused_port as _unused_port, +) + +try: + import uvloop +except ImportError: # pragma: no cover + uvloop = None + +try: + import tokio +except ImportError: # pragma: no cover + tokio = None + +AiohttpClient = Callable[[Union[Application, BaseTestServer]], Awaitable[TestClient]] + + +def pytest_addoption(parser): # type: ignore[no-untyped-def] + parser.addoption( + "--aiohttp-fast", + action="store_true", + default=False, + help="run tests faster by disabling extra checks", + ) + parser.addoption( + "--aiohttp-loop", + action="store", + default="pyloop", + help="run tests with specific loop: pyloop, uvloop, tokio or all", + ) + parser.addoption( + "--aiohttp-enable-loop-debug", + action="store_true", + default=False, + 
help="enable event loop debug mode", + ) + + +def pytest_fixture_setup(fixturedef): # type: ignore[no-untyped-def] + """Set up pytest fixture. + + Allow fixtures to be coroutines. Run coroutine fixtures in an event loop. + """ + func = fixturedef.func + + if isasyncgenfunction(func): + # async generator fixture + is_async_gen = True + elif asyncio.iscoroutinefunction(func): + # regular async fixture + is_async_gen = False + else: + # not an async fixture, nothing to do + return + + strip_request = False + if "request" not in fixturedef.argnames: + fixturedef.argnames += ("request",) + strip_request = True + + def wrapper(*args, **kwargs): # type: ignore[no-untyped-def] + request = kwargs["request"] + if strip_request: + del kwargs["request"] + + # if neither the fixture nor the test use the 'loop' fixture, + # 'getfixturevalue' will fail because the test is not parameterized + # (this can be removed someday if 'loop' is no longer parameterized) + if "loop" not in request.fixturenames: + raise Exception( + "Asynchronous fixtures must depend on the 'loop' fixture or " + "be used in tests depending from it." + ) + + _loop = request.getfixturevalue("loop") + + if is_async_gen: + # for async generators, we need to advance the generator once, + # then advance it again in a finalizer + gen = func(*args, **kwargs) + + def finalizer(): # type: ignore[no-untyped-def] + try: + return _loop.run_until_complete(gen.__anext__()) + except StopAsyncIteration: + pass + + request.addfinalizer(finalizer) + return _loop.run_until_complete(gen.__anext__()) + else: + return _loop.run_until_complete(func(*args, **kwargs)) + + fixturedef.func = wrapper + + +@pytest.fixture +def fast(request): # type: ignore[no-untyped-def] + """--fast config option""" + return request.config.getoption("--aiohttp-fast") + + +@pytest.fixture +def loop_debug(request): # type: ignore[no-untyped-def] + """--enable-loop-debug config option""" + return request.config.getoption("--aiohttp-enable-loop-debug") + + +@contextlib.contextmanager +def _runtime_warning_context(): # type: ignore[no-untyped-def] + """Context manager which checks for RuntimeWarnings. + + This exists specifically to + avoid "coroutine 'X' was never awaited" warnings being missed. + + If RuntimeWarnings occur in the context a RuntimeError is raised. + """ + with warnings.catch_warnings(record=True) as _warnings: + yield + rw = [ + "{w.filename}:{w.lineno}:{w.message}".format(w=w) + for w in _warnings + if w.category == RuntimeWarning + ] + if rw: + raise RuntimeError( + "{} Runtime Warning{},\n{}".format( + len(rw), "" if len(rw) == 1 else "s", "\n".join(rw) + ) + ) + + +@contextlib.contextmanager +def _passthrough_loop_context(loop, fast=False): # type: ignore[no-untyped-def] + """Passthrough loop context. + + Sets up and tears down a loop unless one is passed in via the loop + argument when it's passed straight through. 
+ """ + if loop: + # loop already exists, pass it straight through + yield loop + else: + # this shadows loop_context's standard behavior + loop = setup_test_loop() + yield loop + teardown_test_loop(loop, fast=fast) + + +def pytest_pycollect_makeitem(collector, name, obj): # type: ignore[no-untyped-def] + """Fix pytest collecting for coroutines.""" + if collector.funcnamefilter(name) and asyncio.iscoroutinefunction(obj): + return list(collector._genfunctions(name, obj)) + + +def pytest_pyfunc_call(pyfuncitem): # type: ignore[no-untyped-def] + """Run coroutines in an event loop instead of a normal function call.""" + fast = pyfuncitem.config.getoption("--aiohttp-fast") + if asyncio.iscoroutinefunction(pyfuncitem.function): + existing_loop = pyfuncitem.funcargs.get( + "proactor_loop" + ) or pyfuncitem.funcargs.get("loop", None) + with _runtime_warning_context(): + with _passthrough_loop_context(existing_loop, fast=fast) as _loop: + testargs = { + arg: pyfuncitem.funcargs[arg] + for arg in pyfuncitem._fixtureinfo.argnames + } + _loop.run_until_complete(pyfuncitem.obj(**testargs)) + + return True + + +def pytest_generate_tests(metafunc): # type: ignore[no-untyped-def] + if "loop_factory" not in metafunc.fixturenames: + return + + loops = metafunc.config.option.aiohttp_loop + avail_factories = {"pyloop": asyncio.DefaultEventLoopPolicy} + + if uvloop is not None: # pragma: no cover + avail_factories["uvloop"] = uvloop.EventLoopPolicy + + if tokio is not None: # pragma: no cover + avail_factories["tokio"] = tokio.EventLoopPolicy + + if loops == "all": + loops = "pyloop,uvloop?,tokio?" + + factories = {} # type: ignore[var-annotated] + for name in loops.split(","): + required = not name.endswith("?") + name = name.strip(" ?") + if name not in avail_factories: # pragma: no cover + if required: + raise ValueError( + "Unknown loop '%s', available loops: %s" + % (name, list(factories.keys())) + ) + else: + continue + factories[name] = avail_factories[name] + metafunc.parametrize( + "loop_factory", list(factories.values()), ids=list(factories.keys()) + ) + + +@pytest.fixture +def loop(loop_factory, fast, loop_debug): # type: ignore[no-untyped-def] + """Return an instance of the event loop.""" + policy = loop_factory() + asyncio.set_event_loop_policy(policy) + with loop_context(fast=fast) as _loop: + if loop_debug: + _loop.set_debug(True) # pragma: no cover + asyncio.set_event_loop(_loop) + yield _loop + + +@pytest.fixture +def proactor_loop(): # type: ignore[no-untyped-def] + if not PY_37: + policy = asyncio.get_event_loop_policy() + policy._loop_factory = asyncio.ProactorEventLoop # type: ignore[attr-defined] + else: + policy = asyncio.WindowsProactorEventLoopPolicy() # type: ignore[attr-defined] + asyncio.set_event_loop_policy(policy) + + with loop_context(policy.new_event_loop) as _loop: + asyncio.set_event_loop(_loop) + yield _loop + + +@pytest.fixture +def unused_port(aiohttp_unused_port): # type: ignore[no-untyped-def] # pragma: no cover + warnings.warn( + "Deprecated, use aiohttp_unused_port fixture instead", + DeprecationWarning, + stacklevel=2, + ) + return aiohttp_unused_port + + +@pytest.fixture +def aiohttp_unused_port(): # type: ignore[no-untyped-def] + """Return a port that is unused on the current host.""" + return _unused_port + + +@pytest.fixture +def aiohttp_server(loop): # type: ignore[no-untyped-def] + """Factory to create a TestServer instance, given an app. 
+ + aiohttp_server(app, **kwargs) + """ + servers = [] + + async def go(app, *, port=None, **kwargs): # type: ignore[no-untyped-def] + server = TestServer(app, port=port) + await server.start_server(loop=loop, **kwargs) + servers.append(server) + return server + + yield go + + async def finalize() -> None: + while servers: + await servers.pop().close() + + loop.run_until_complete(finalize()) + + +@pytest.fixture +def test_server(aiohttp_server): # type: ignore[no-untyped-def] # pragma: no cover + warnings.warn( + "Deprecated, use aiohttp_server fixture instead", + DeprecationWarning, + stacklevel=2, + ) + return aiohttp_server + + +@pytest.fixture +def aiohttp_raw_server(loop): # type: ignore[no-untyped-def] + """Factory to create a RawTestServer instance, given a web handler. + + aiohttp_raw_server(handler, **kwargs) + """ + servers = [] + + async def go(handler, *, port=None, **kwargs): # type: ignore[no-untyped-def] + server = RawTestServer(handler, port=port) + await server.start_server(loop=loop, **kwargs) + servers.append(server) + return server + + yield go + + async def finalize() -> None: + while servers: + await servers.pop().close() + + loop.run_until_complete(finalize()) + + +@pytest.fixture +def raw_test_server( # type: ignore[no-untyped-def] # pragma: no cover + aiohttp_raw_server, +): + warnings.warn( + "Deprecated, use aiohttp_raw_server fixture instead", + DeprecationWarning, + stacklevel=2, + ) + return aiohttp_raw_server + + +@pytest.fixture +def aiohttp_client( + loop: asyncio.AbstractEventLoop, +) -> Generator[AiohttpClient, None, None]: + """Factory to create a TestClient instance. + + aiohttp_client(app, **kwargs) + aiohttp_client(server, **kwargs) + aiohttp_client(raw_server, **kwargs) + """ + clients = [] + + async def go( + __param: Union[Application, BaseTestServer], + *args: Any, + server_kwargs: Optional[Dict[str, Any]] = None, + **kwargs: Any + ) -> TestClient: + + if isinstance(__param, Callable) and not isinstance( # type: ignore[arg-type] + __param, (Application, BaseTestServer) + ): + __param = __param(loop, *args, **kwargs) + kwargs = {} + else: + assert not args, "args should be empty" + + if isinstance(__param, Application): + server_kwargs = server_kwargs or {} + server = TestServer(__param, loop=loop, **server_kwargs) + client = TestClient(server, loop=loop, **kwargs) + elif isinstance(__param, BaseTestServer): + client = TestClient(__param, loop=loop, **kwargs) + else: + raise ValueError("Unknown argument type: %r" % type(__param)) + + await client.start_server() + clients.append(client) + return client + + yield go + + async def finalize() -> None: + while clients: + await clients.pop().close() + + loop.run_until_complete(finalize()) + + +@pytest.fixture +def test_client(aiohttp_client): # type: ignore[no-untyped-def] # pragma: no cover + warnings.warn( + "Deprecated, use aiohttp_client fixture instead", + DeprecationWarning, + stacklevel=2, + ) + return aiohttp_client diff --git a/env/lib/python3.8/site-packages/aiohttp/resolver.py b/env/lib/python3.8/site-packages/aiohttp/resolver.py new file mode 100644 index 00000000..531ce93f --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/resolver.py @@ -0,0 +1,160 @@ +import asyncio +import socket +from typing import Any, Dict, List, Optional, Type, Union + +from .abc import AbstractResolver +from .helpers import get_running_loop + +__all__ = ("ThreadedResolver", "AsyncResolver", "DefaultResolver") + +try: + import aiodns + + # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname') +except 
ImportError: # pragma: no cover + aiodns = None + +aiodns_default = False + + +class ThreadedResolver(AbstractResolver): + """Threaded resolver. + + Uses an Executor for synchronous getaddrinfo() calls. + concurrent.futures.ThreadPoolExecutor is used by default. + """ + + def __init__(self, loop: Optional[asyncio.AbstractEventLoop] = None) -> None: + self._loop = get_running_loop(loop) + + async def resolve( + self, hostname: str, port: int = 0, family: int = socket.AF_INET + ) -> List[Dict[str, Any]]: + infos = await self._loop.getaddrinfo( + hostname, + port, + type=socket.SOCK_STREAM, + family=family, + flags=socket.AI_ADDRCONFIG, + ) + + hosts = [] + for family, _, proto, _, address in infos: + if family == socket.AF_INET6: + if len(address) < 3: + # IPv6 is not supported by Python build, + # or IPv6 is not enabled in the host + continue + if address[3]: # type: ignore[misc] + # This is essential for link-local IPv6 addresses. + # LL IPv6 is a VERY rare case. Strictly speaking, we should use + # getnameinfo() unconditionally, but performance makes sense. + host, _port = socket.getnameinfo( + address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV + ) + port = int(_port) + else: + host, port = address[:2] + else: # IPv4 + assert family == socket.AF_INET + host, port = address # type: ignore[misc] + hosts.append( + { + "hostname": hostname, + "host": host, + "port": port, + "family": family, + "proto": proto, + "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV, + } + ) + + return hosts + + async def close(self) -> None: + pass + + +class AsyncResolver(AbstractResolver): + """Use the `aiodns` package to make asynchronous DNS lookups""" + + def __init__( + self, + loop: Optional[asyncio.AbstractEventLoop] = None, + *args: Any, + **kwargs: Any + ) -> None: + if aiodns is None: + raise RuntimeError("Resolver requires aiodns library") + + self._loop = get_running_loop(loop) + self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs) + + if not hasattr(self._resolver, "gethostbyname"): + # aiodns 1.1 is not available, fallback to DNSResolver.query + self.resolve = self._resolve_with_query # type: ignore + + async def resolve( + self, host: str, port: int = 0, family: int = socket.AF_INET + ) -> List[Dict[str, Any]]: + try: + resp = await self._resolver.gethostbyname(host, family) + except aiodns.error.DNSError as exc: + msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed" + raise OSError(msg) from exc + hosts = [] + for address in resp.addresses: + hosts.append( + { + "hostname": host, + "host": address, + "port": port, + "family": family, + "proto": 0, + "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV, + } + ) + + if not hosts: + raise OSError("DNS lookup failed") + + return hosts + + async def _resolve_with_query( + self, host: str, port: int = 0, family: int = socket.AF_INET + ) -> List[Dict[str, Any]]: + if family == socket.AF_INET6: + qtype = "AAAA" + else: + qtype = "A" + + try: + resp = await self._resolver.query(host, qtype) + except aiodns.error.DNSError as exc: + msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed" + raise OSError(msg) from exc + + hosts = [] + for rr in resp: + hosts.append( + { + "hostname": host, + "host": rr.host, + "port": port, + "family": family, + "proto": 0, + "flags": socket.AI_NUMERICHOST, + } + ) + + if not hosts: + raise OSError("DNS lookup failed") + + return hosts + + async def close(self) -> None: + self._resolver.cancel() + + +_DefaultType = Type[Union[AsyncResolver, ThreadedResolver]] +DefaultResolver: 
_DefaultType = AsyncResolver if aiodns_default else ThreadedResolver diff --git a/env/lib/python3.8/site-packages/aiohttp/streams.py b/env/lib/python3.8/site-packages/aiohttp/streams.py new file mode 100644 index 00000000..726b0232 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/streams.py @@ -0,0 +1,660 @@ +import asyncio +import collections +import warnings +from typing import Awaitable, Callable, Deque, Generic, List, Optional, Tuple, TypeVar + +from .base_protocol import BaseProtocol +from .helpers import BaseTimerContext, set_exception, set_result +from .log import internal_logger +from .typedefs import Final + +__all__ = ( + "EMPTY_PAYLOAD", + "EofStream", + "StreamReader", + "DataQueue", + "FlowControlDataQueue", +) + +_T = TypeVar("_T") + + +class EofStream(Exception): + """eof stream indication.""" + + +class AsyncStreamIterator(Generic[_T]): + def __init__(self, read_func: Callable[[], Awaitable[_T]]) -> None: + self.read_func = read_func + + def __aiter__(self) -> "AsyncStreamIterator[_T]": + return self + + async def __anext__(self) -> _T: + try: + rv = await self.read_func() + except EofStream: + raise StopAsyncIteration + if rv == b"": + raise StopAsyncIteration + return rv + + +class ChunkTupleAsyncStreamIterator: + def __init__(self, stream: "StreamReader") -> None: + self._stream = stream + + def __aiter__(self) -> "ChunkTupleAsyncStreamIterator": + return self + + async def __anext__(self) -> Tuple[bytes, bool]: + rv = await self._stream.readchunk() + if rv == (b"", False): + raise StopAsyncIteration + return rv + + +class AsyncStreamReaderMixin: + def __aiter__(self) -> AsyncStreamIterator[bytes]: + return AsyncStreamIterator(self.readline) # type: ignore[attr-defined] + + def iter_chunked(self, n: int) -> AsyncStreamIterator[bytes]: + """Returns an asynchronous iterator that yields chunks of size n. + + Python-3.5 available for Python 3.5+ only + """ + return AsyncStreamIterator( + lambda: self.read(n) # type: ignore[attr-defined,no-any-return] + ) + + def iter_any(self) -> AsyncStreamIterator[bytes]: + """Yield all available data as soon as it is received. + + Python-3.5 available for Python 3.5+ only + """ + return AsyncStreamIterator(self.readany) # type: ignore[attr-defined] + + def iter_chunks(self) -> ChunkTupleAsyncStreamIterator: + """Yield chunks of data as they are received by the server. + + The yielded objects are tuples + of (bytes, bool) as returned by the StreamReader.readchunk method. + + Python-3.5 available for Python 3.5+ only + """ + return ChunkTupleAsyncStreamIterator(self) # type: ignore[arg-type] + + +class StreamReader(AsyncStreamReaderMixin): + """An enhancement of asyncio.StreamReader. + + Supports asynchronous iteration by line, chunk or as available:: + + async for line in reader: + ... + async for chunk in reader.iter_chunked(1024): + ... + async for slice in reader.iter_any(): + ... 
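+        async for data, end_of_http_chunk in reader.iter_chunks():
+            ...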
+ + """ + + total_bytes = 0 + + def __init__( + self, + protocol: BaseProtocol, + limit: int, + *, + timer: Optional[BaseTimerContext] = None, + loop: Optional[asyncio.AbstractEventLoop] = None, + ) -> None: + self._protocol = protocol + self._low_water = limit + self._high_water = limit * 2 + if loop is None: + loop = asyncio.get_event_loop() + self._loop = loop + self._size = 0 + self._cursor = 0 + self._http_chunk_splits: Optional[List[int]] = None + self._buffer: Deque[bytes] = collections.deque() + self._buffer_offset = 0 + self._eof = False + self._waiter: Optional[asyncio.Future[None]] = None + self._eof_waiter: Optional[asyncio.Future[None]] = None + self._exception: Optional[BaseException] = None + self._timer = timer + self._eof_callbacks: List[Callable[[], None]] = [] + + def __repr__(self) -> str: + info = [self.__class__.__name__] + if self._size: + info.append("%d bytes" % self._size) + if self._eof: + info.append("eof") + if self._low_water != 2**16: # default limit + info.append("low=%d high=%d" % (self._low_water, self._high_water)) + if self._waiter: + info.append("w=%r" % self._waiter) + if self._exception: + info.append("e=%r" % self._exception) + return "<%s>" % " ".join(info) + + def get_read_buffer_limits(self) -> Tuple[int, int]: + return (self._low_water, self._high_water) + + def exception(self) -> Optional[BaseException]: + return self._exception + + def set_exception(self, exc: BaseException) -> None: + self._exception = exc + self._eof_callbacks.clear() + + waiter = self._waiter + if waiter is not None: + self._waiter = None + set_exception(waiter, exc) + + waiter = self._eof_waiter + if waiter is not None: + self._eof_waiter = None + set_exception(waiter, exc) + + def on_eof(self, callback: Callable[[], None]) -> None: + if self._eof: + try: + callback() + except Exception: + internal_logger.exception("Exception in eof callback") + else: + self._eof_callbacks.append(callback) + + def feed_eof(self) -> None: + self._eof = True + + waiter = self._waiter + if waiter is not None: + self._waiter = None + set_result(waiter, None) + + waiter = self._eof_waiter + if waiter is not None: + self._eof_waiter = None + set_result(waiter, None) + + for cb in self._eof_callbacks: + try: + cb() + except Exception: + internal_logger.exception("Exception in eof callback") + + self._eof_callbacks.clear() + + def is_eof(self) -> bool: + """Return True if 'feed_eof' was called.""" + return self._eof + + def at_eof(self) -> bool: + """Return True if the buffer is empty and 'feed_eof' was called.""" + return self._eof and not self._buffer + + async def wait_eof(self) -> None: + if self._eof: + return + + assert self._eof_waiter is None + self._eof_waiter = self._loop.create_future() + try: + await self._eof_waiter + finally: + self._eof_waiter = None + + def unread_data(self, data: bytes) -> None: + """rollback reading some data from stream, inserting it to buffer head.""" + warnings.warn( + "unread_data() is deprecated " + "and will be removed in future releases (#3260)", + DeprecationWarning, + stacklevel=2, + ) + if not data: + return + + if self._buffer_offset: + self._buffer[0] = self._buffer[0][self._buffer_offset :] + self._buffer_offset = 0 + self._size += len(data) + self._cursor -= len(data) + self._buffer.appendleft(data) + self._eof_counter = 0 + + # TODO: size is ignored, remove the param later + def feed_data(self, data: bytes, size: int = 0) -> None: + assert not self._eof, "feed_data after feed_eof" + + if not data: + return + + self._size += len(data) + 
self._buffer.append(data)
+        self.total_bytes += len(data)
+
+        waiter = self._waiter
+        if waiter is not None:
+            self._waiter = None
+            set_result(waiter, None)
+
+        if self._size > self._high_water and not self._protocol._reading_paused:
+            self._protocol.pause_reading()
+
+    def begin_http_chunk_receiving(self) -> None:
+        if self._http_chunk_splits is None:
+            if self.total_bytes:
+                raise RuntimeError(
+                    "Called begin_http_chunk_receiving when "
+                    "some data was already fed"
+                )
+            self._http_chunk_splits = []
+
+    def end_http_chunk_receiving(self) -> None:
+        if self._http_chunk_splits is None:
+            raise RuntimeError(
+                "Called end_http_chunk_receiving without calling "
+                "begin_http_chunk_receiving first"
+            )
+
+        # self._http_chunk_splits contains logical byte offsets from start of
+        # the body transfer. Each offset is the offset of the end of a chunk.
+        # "Logical" means bytes, accessible for a user.
+        # If no chunks containing logical data were received, the current
+        # position is definitely zero.
+        pos = self._http_chunk_splits[-1] if self._http_chunk_splits else 0
+
+        if self.total_bytes == pos:
+            # We should not add empty chunks here, so we check for that.
+            # Note, when chunked + gzip is used, we can receive a chunk
+            # of compressed data, but that data may not be enough for the gzip
+            # FSM to yield any uncompressed data. That's why the current
+            # position may not change after receiving a chunk.
+            return
+
+        self._http_chunk_splits.append(self.total_bytes)
+
+        # wake up readchunk when the end of an http chunk is received
+        waiter = self._waiter
+        if waiter is not None:
+            self._waiter = None
+            set_result(waiter, None)
+
+    async def _wait(self, func_name: str) -> None:
+        # StreamReader uses a future to link the protocol feed_data() method
+        # to a read coroutine. Running two read coroutines at the same time
+        # would have an unexpected behaviour: it would not be possible to know
+        # which coroutine would get the next data.
+        if self._waiter is not None:
+            raise RuntimeError(
+                "%s() called while another coroutine is "
+                "already waiting for incoming data" % func_name
+            )
+
+        waiter = self._waiter = self._loop.create_future()
+        try:
+            if self._timer:
+                with self._timer:
+                    await waiter
+            else:
+                await waiter
+        finally:
+            self._waiter = None
+
+    async def readline(self) -> bytes:
+        return await self.readuntil()
+
+    async def readuntil(self, separator: bytes = b"\n") -> bytes:
+        seplen = len(separator)
+        if seplen == 0:
+            raise ValueError("Separator should be at least one-byte string")
+
+        if self._exception is not None:
+            raise self._exception
+
+        chunk = b""
+        chunk_size = 0
+        not_enough = True
+
+        while not_enough:
+            while self._buffer and not_enough:
+                offset = self._buffer_offset
+                ichar = self._buffer[0].find(separator, offset) + 1
+                # Read from current offset to found separator or to the end.
+                data = self._read_nowait_chunk(ichar - offset if ichar else -1)
+                chunk += data
+                chunk_size += len(data)
+                if ichar:
+                    not_enough = False
+
+                if chunk_size > self._high_water:
+                    raise ValueError("Chunk too big")
+
+            if self._eof:
+                break
+
+            if not_enough:
+                await self._wait("readuntil")
+
+        return chunk
+
+    async def read(self, n: int = -1) -> bytes:
+        if self._exception is not None:
+            raise self._exception
+
+        # migration problem; with DataQueue you have to catch the
+        # EofStream exception, so the common way is to run payload.read()
+        # inside an infinite loop, which can cause a real infinite loop with
+        # StreamReader; let's keep this workaround for one major release.
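+        # (The __debug__ guard below only logs a warning after repeated
+        # reads at EOF; it never raises, and is skipped entirely under -O.)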
+
+        if __debug__:
+            if self._eof and not self._buffer:
+                self._eof_counter = getattr(self, "_eof_counter", 0) + 1
+                if self._eof_counter > 5:
+                    internal_logger.warning(
+                        "Multiple access to StreamReader in eof state, "
+                        "might be infinite loop.",
+                        stack_info=True,
+                    )
+
+        if not n:
+            return b""
+
+        if n < 0:
+            # This used to just loop creating a new waiter hoping to
+            # collect everything in self._buffer, but that would
+            # deadlock if the subprocess sends more than self.limit
+            # bytes. So just call self.readany() until EOF.
+            blocks = []
+            while True:
+                block = await self.readany()
+                if not block:
+                    break
+                blocks.append(block)
+            return b"".join(blocks)
+
+        # TODO: should be `if` instead of `while`,
+        # because the waiter may be triggered on a chunk end
+        # without any data being fed
+        while not self._buffer and not self._eof:
+            await self._wait("read")
+
+        return self._read_nowait(n)
+
+    async def readany(self) -> bytes:
+        if self._exception is not None:
+            raise self._exception
+
+        # TODO: should be `if` instead of `while`,
+        # because the waiter may be triggered on a chunk end
+        # without any data being fed
+        while not self._buffer and not self._eof:
+            await self._wait("readany")
+
+        return self._read_nowait(-1)
+
+    async def readchunk(self) -> Tuple[bytes, bool]:
+        """Returns a tuple of (data, end_of_http_chunk).
+
+        When chunked transfer encoding is used, end_of_http_chunk is a
+        boolean indicating whether the end of the data corresponds to the
+        end of an HTTP chunk; otherwise it is always False.
+        """
+        while True:
+            if self._exception is not None:
+                raise self._exception
+
+            while self._http_chunk_splits:
+                pos = self._http_chunk_splits.pop(0)
+                if pos == self._cursor:
+                    return (b"", True)
+                if pos > self._cursor:
+                    return (self._read_nowait(pos - self._cursor), True)
+                internal_logger.warning(
+                    "Skipping HTTP chunk end due to data "
+                    "consumption beyond chunk boundary"
+                )
+
+            if self._buffer:
+                return (self._read_nowait_chunk(-1), False)
+                # return (self._read_nowait(-1), False)
+
+            if self._eof:
+                # Special case for signifying EOF.
+                # (b'', True) is not actually a final return value.
+                return (b"", False)
+
+            await self._wait("readchunk")
+
+    async def readexactly(self, n: int) -> bytes:
+        if self._exception is not None:
+            raise self._exception
+
+        blocks: List[bytes] = []
+        while n > 0:
+            block = await self.read(n)
+            if not block:
+                partial = b"".join(blocks)
+                raise asyncio.IncompleteReadError(partial, len(partial) + n)
+            blocks.append(block)
+            n -= len(block)
+
+        return b"".join(blocks)
+
+    def read_nowait(self, n: int = -1) -> bytes:
+        # The default was changed to be consistent with .read(-1).
+        #
+        # Most users likely don't know about this method,
+        # so they are not affected by the change.
+        if self._exception is not None:
+            raise self._exception
+
+        if self._waiter and not self._waiter.done():
+            raise RuntimeError(
+                "Called while some coroutine is waiting for incoming data."
+ ) + + return self._read_nowait(n) + + def _read_nowait_chunk(self, n: int) -> bytes: + first_buffer = self._buffer[0] + offset = self._buffer_offset + if n != -1 and len(first_buffer) - offset > n: + data = first_buffer[offset : offset + n] + self._buffer_offset += n + + elif offset: + self._buffer.popleft() + data = first_buffer[offset:] + self._buffer_offset = 0 + + else: + data = self._buffer.popleft() + + self._size -= len(data) + self._cursor += len(data) + + chunk_splits = self._http_chunk_splits + # Prevent memory leak: drop useless chunk splits + while chunk_splits and chunk_splits[0] < self._cursor: + chunk_splits.pop(0) + + if self._size < self._low_water and self._protocol._reading_paused: + self._protocol.resume_reading() + return data + + def _read_nowait(self, n: int) -> bytes: + """Read not more than n bytes, or whole buffer if n == -1""" + chunks = [] + + while self._buffer: + chunk = self._read_nowait_chunk(n) + chunks.append(chunk) + if n != -1: + n -= len(chunk) + if n == 0: + break + + return b"".join(chunks) if chunks else b"" + + +class EmptyStreamReader(StreamReader): # lgtm [py/missing-call-to-init] + def __init__(self) -> None: + pass + + def exception(self) -> Optional[BaseException]: + return None + + def set_exception(self, exc: BaseException) -> None: + pass + + def on_eof(self, callback: Callable[[], None]) -> None: + try: + callback() + except Exception: + internal_logger.exception("Exception in eof callback") + + def feed_eof(self) -> None: + pass + + def is_eof(self) -> bool: + return True + + def at_eof(self) -> bool: + return True + + async def wait_eof(self) -> None: + return + + def feed_data(self, data: bytes, n: int = 0) -> None: + pass + + async def readline(self) -> bytes: + return b"" + + async def read(self, n: int = -1) -> bytes: + return b"" + + # TODO add async def readuntil + + async def readany(self) -> bytes: + return b"" + + async def readchunk(self) -> Tuple[bytes, bool]: + return (b"", True) + + async def readexactly(self, n: int) -> bytes: + raise asyncio.IncompleteReadError(b"", n) + + def read_nowait(self, n: int = -1) -> bytes: + return b"" + + +EMPTY_PAYLOAD: Final[StreamReader] = EmptyStreamReader() + + +class DataQueue(Generic[_T]): + """DataQueue is a general-purpose blocking queue with one reader.""" + + def __init__(self, loop: asyncio.AbstractEventLoop) -> None: + self._loop = loop + self._eof = False + self._waiter: Optional[asyncio.Future[None]] = None + self._exception: Optional[BaseException] = None + self._size = 0 + self._buffer: Deque[Tuple[_T, int]] = collections.deque() + + def __len__(self) -> int: + return len(self._buffer) + + def is_eof(self) -> bool: + return self._eof + + def at_eof(self) -> bool: + return self._eof and not self._buffer + + def exception(self) -> Optional[BaseException]: + return self._exception + + def set_exception(self, exc: BaseException) -> None: + self._eof = True + self._exception = exc + + waiter = self._waiter + if waiter is not None: + self._waiter = None + set_exception(waiter, exc) + + def feed_data(self, data: _T, size: int = 0) -> None: + self._size += size + self._buffer.append((data, size)) + + waiter = self._waiter + if waiter is not None: + self._waiter = None + set_result(waiter, None) + + def feed_eof(self) -> None: + self._eof = True + + waiter = self._waiter + if waiter is not None: + self._waiter = None + set_result(waiter, None) + + async def read(self) -> _T: + if not self._buffer and not self._eof: + assert not self._waiter + self._waiter = self._loop.create_future() + 
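+            # Consumption sketch (illustrative): DataQueue supports async
+            # iteration via __aiter__ below, so a reader can simply do
+            #
+            #     async for item in queue:   # "queue" is a DataQueue
+            #         handle(item)           # "handle" is a placeholder
+            #
+            # Iteration ends when feed_eof() causes read() to raise EofStream,
+            # which AsyncStreamIterator translates into StopAsyncIteration.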
try: + await self._waiter + except (asyncio.CancelledError, asyncio.TimeoutError): + self._waiter = None + raise + + if self._buffer: + data, size = self._buffer.popleft() + self._size -= size + return data + else: + if self._exception is not None: + raise self._exception + else: + raise EofStream + + def __aiter__(self) -> AsyncStreamIterator[_T]: + return AsyncStreamIterator(self.read) + + +class FlowControlDataQueue(DataQueue[_T]): + """FlowControlDataQueue resumes and pauses an underlying stream. + + It is a destination for parsed data. + """ + + def __init__( + self, protocol: BaseProtocol, limit: int, *, loop: asyncio.AbstractEventLoop + ) -> None: + super().__init__(loop=loop) + + self._protocol = protocol + self._limit = limit * 2 + + def feed_data(self, data: _T, size: int = 0) -> None: + super().feed_data(data, size) + + if self._size > self._limit and not self._protocol._reading_paused: + self._protocol.pause_reading() + + async def read(self) -> _T: + try: + return await super().read() + finally: + if self._size < self._limit and self._protocol._reading_paused: + self._protocol.resume_reading() diff --git a/env/lib/python3.8/site-packages/aiohttp/tcp_helpers.py b/env/lib/python3.8/site-packages/aiohttp/tcp_helpers.py new file mode 100644 index 00000000..88b24422 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/tcp_helpers.py @@ -0,0 +1,37 @@ +"""Helper methods to tune a TCP connection""" + +import asyncio +import socket +from contextlib import suppress +from typing import Optional # noqa + +__all__ = ("tcp_keepalive", "tcp_nodelay") + + +if hasattr(socket, "SO_KEEPALIVE"): + + def tcp_keepalive(transport: asyncio.Transport) -> None: + sock = transport.get_extra_info("socket") + if sock is not None: + sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) + +else: + + def tcp_keepalive(transport: asyncio.Transport) -> None: # pragma: no cover + pass + + +def tcp_nodelay(transport: asyncio.Transport, value: bool) -> None: + sock = transport.get_extra_info("socket") + + if sock is None: + return + + if sock.family not in (socket.AF_INET, socket.AF_INET6): + return + + value = bool(value) + + # socket may be closed already, on windows OSError get raised + with suppress(OSError): + sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, value) diff --git a/env/lib/python3.8/site-packages/aiohttp/test_utils.py b/env/lib/python3.8/site-packages/aiohttp/test_utils.py new file mode 100644 index 00000000..fcda2f3d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/test_utils.py @@ -0,0 +1,706 @@ +"""Utilities shared by tests.""" + +import asyncio +import contextlib +import gc +import inspect +import ipaddress +import os +import socket +import sys +import warnings +from abc import ABC, abstractmethod +from types import TracebackType +from typing import ( + TYPE_CHECKING, + Any, + Callable, + Iterator, + List, + Optional, + Type, + Union, + cast, +) +from unittest import mock + +from aiosignal import Signal +from multidict import CIMultiDict, CIMultiDictProxy +from yarl import URL + +import aiohttp +from aiohttp.client import _RequestContextManager, _WSRequestContextManager + +from . 
import ClientSession, hdrs +from .abc import AbstractCookieJar +from .client_reqrep import ClientResponse +from .client_ws import ClientWebSocketResponse +from .helpers import PY_38, sentinel +from .http import HttpVersion, RawRequestMessage +from .web import ( + Application, + AppRunner, + BaseRunner, + Request, + Server, + ServerRunner, + SockSite, + UrlMappingMatchInfo, +) +from .web_protocol import _RequestHandler + +if TYPE_CHECKING: # pragma: no cover + from ssl import SSLContext +else: + SSLContext = None + +if PY_38: + from unittest import IsolatedAsyncioTestCase as TestCase +else: + from asynctest import TestCase # type: ignore[no-redef] + +REUSE_ADDRESS = os.name == "posix" and sys.platform != "cygwin" + + +def get_unused_port_socket( + host: str, family: socket.AddressFamily = socket.AF_INET +) -> socket.socket: + return get_port_socket(host, 0, family) + + +def get_port_socket( + host: str, port: int, family: socket.AddressFamily +) -> socket.socket: + s = socket.socket(family, socket.SOCK_STREAM) + if REUSE_ADDRESS: + # Windows has different semantics for SO_REUSEADDR, + # so don't set it. Ref: + # https://docs.microsoft.com/en-us/windows/win32/winsock/using-so-reuseaddr-and-so-exclusiveaddruse + s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + s.bind((host, port)) + return s + + +def unused_port() -> int: + """Return a port that is unused on the current host.""" + with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: + s.bind(("127.0.0.1", 0)) + return cast(int, s.getsockname()[1]) + + +class BaseTestServer(ABC): + __test__ = False + + def __init__( + self, + *, + scheme: Union[str, object] = sentinel, + loop: Optional[asyncio.AbstractEventLoop] = None, + host: str = "127.0.0.1", + port: Optional[int] = None, + skip_url_asserts: bool = False, + socket_factory: Callable[ + [str, int, socket.AddressFamily], socket.socket + ] = get_port_socket, + **kwargs: Any, + ) -> None: + self._loop = loop + self.runner: Optional[BaseRunner] = None + self._root: Optional[URL] = None + self.host = host + self.port = port + self._closed = False + self.scheme = scheme + self.skip_url_asserts = skip_url_asserts + self.socket_factory = socket_factory + + async def start_server( + self, loop: Optional[asyncio.AbstractEventLoop] = None, **kwargs: Any + ) -> None: + if self.runner: + return + self._loop = loop + self._ssl = kwargs.pop("ssl", None) + self.runner = await self._make_runner(**kwargs) + await self.runner.setup() + if not self.port: + self.port = 0 + try: + version = ipaddress.ip_address(self.host).version + except ValueError: + version = 4 + family = socket.AF_INET6 if version == 6 else socket.AF_INET + _sock = self.socket_factory(self.host, self.port, family) + self.host, self.port = _sock.getsockname()[:2] + site = SockSite(self.runner, sock=_sock, ssl_context=self._ssl) + await site.start() + server = site._server + assert server is not None + sockets = server.sockets + assert sockets is not None + self.port = sockets[0].getsockname()[1] + if self.scheme is sentinel: + if self._ssl: + scheme = "https" + else: + scheme = "http" + self.scheme = scheme + self._root = URL(f"{self.scheme}://{self.host}:{self.port}") + + @abstractmethod # pragma: no cover + async def _make_runner(self, **kwargs: Any) -> BaseRunner: + pass + + def make_url(self, path: str) -> URL: + assert self._root is not None + url = URL(path) + if not self.skip_url_asserts: + assert not url.is_absolute() + return self._root.join(url) + else: + return URL(str(self._root) + path) + + @property + def 
started(self) -> bool: + return self.runner is not None + + @property + def closed(self) -> bool: + return self._closed + + @property + def handler(self) -> Server: + # for backward compatibility + # web.Server instance + runner = self.runner + assert runner is not None + assert runner.server is not None + return runner.server + + async def close(self) -> None: + """Close all fixtures created by the test client. + + After that point, the TestClient is no longer usable. + + This is an idempotent function: running close multiple times + will not have any additional effects. + + close is also run when the object is garbage collected, and on + exit when used as a context manager. + + """ + if self.started and not self.closed: + assert self.runner is not None + await self.runner.cleanup() + self._root = None + self.port = None + self._closed = True + + def __enter__(self) -> None: + raise TypeError("Use async with instead") + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_value: Optional[BaseException], + traceback: Optional[TracebackType], + ) -> None: + # __exit__ should exist in pair with __enter__ but never executed + pass # pragma: no cover + + async def __aenter__(self) -> "BaseTestServer": + await self.start_server(loop=self._loop) + return self + + async def __aexit__( + self, + exc_type: Optional[Type[BaseException]], + exc_value: Optional[BaseException], + traceback: Optional[TracebackType], + ) -> None: + await self.close() + + +class TestServer(BaseTestServer): + def __init__( + self, + app: Application, + *, + scheme: Union[str, object] = sentinel, + host: str = "127.0.0.1", + port: Optional[int] = None, + **kwargs: Any, + ): + self.app = app + super().__init__(scheme=scheme, host=host, port=port, **kwargs) + + async def _make_runner(self, **kwargs: Any) -> BaseRunner: + return AppRunner(self.app, **kwargs) + + +class RawTestServer(BaseTestServer): + def __init__( + self, + handler: _RequestHandler, + *, + scheme: Union[str, object] = sentinel, + host: str = "127.0.0.1", + port: Optional[int] = None, + **kwargs: Any, + ) -> None: + self._handler = handler + super().__init__(scheme=scheme, host=host, port=port, **kwargs) + + async def _make_runner(self, debug: bool = True, **kwargs: Any) -> ServerRunner: + srv = Server(self._handler, loop=self._loop, debug=debug, **kwargs) + return ServerRunner(srv, debug=debug, **kwargs) + + +class TestClient: + """ + A test client implementation. + + To write functional tests for aiohttp based servers. 
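+
+    A minimal usage sketch (illustrative; assumes an ``Application`` named
+    ``app`` with a registered ``/`` route exists elsewhere)::
+
+        async with TestClient(TestServer(app)) as client:
+            resp = await client.get("/")
+            assert resp.status == 200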
+
+    """
+
+    __test__ = False
+
+    def __init__(
+        self,
+        server: BaseTestServer,
+        *,
+        cookie_jar: Optional[AbstractCookieJar] = None,
+        loop: Optional[asyncio.AbstractEventLoop] = None,
+        **kwargs: Any,
+    ) -> None:
+        if not isinstance(server, BaseTestServer):
+            raise TypeError(
+                "server must be TestServer "
+                "instance, found type: %r" % type(server)
+            )
+        self._server = server
+        self._loop = loop
+        if cookie_jar is None:
+            cookie_jar = aiohttp.CookieJar(unsafe=True, loop=loop)
+        self._session = ClientSession(loop=loop, cookie_jar=cookie_jar, **kwargs)
+        self._closed = False
+        self._responses: List[ClientResponse] = []
+        self._websockets: List[ClientWebSocketResponse] = []
+
+    async def start_server(self) -> None:
+        await self._server.start_server(loop=self._loop)
+
+    @property
+    def host(self) -> str:
+        return self._server.host
+
+    @property
+    def port(self) -> Optional[int]:
+        return self._server.port
+
+    @property
+    def server(self) -> BaseTestServer:
+        return self._server
+
+    @property
+    def app(self) -> Optional[Application]:
+        return cast(Optional[Application], getattr(self._server, "app", None))
+
+    @property
+    def session(self) -> ClientSession:
+        """An internal aiohttp.ClientSession.
+
+        Unlike the methods on the TestClient, client session requests
+        do not automatically include the host in the url queried, and
+        will require an absolute path to the resource.
+
+        """
+        return self._session
+
+    def make_url(self, path: str) -> URL:
+        return self._server.make_url(path)
+
+    async def _request(self, method: str, path: str, **kwargs: Any) -> ClientResponse:
+        resp = await self._session.request(method, self.make_url(path), **kwargs)
+        # save it to close later
+        self._responses.append(resp)
+        return resp
+
+    def request(self, method: str, path: str, **kwargs: Any) -> _RequestContextManager:
+        """Route a request to the tested HTTP server.
+
+        The interface is identical to aiohttp.ClientSession.request,
+        except the loop kwarg is overridden by the instance used by the
+        test server.
+
+        """
+        return _RequestContextManager(self._request(method, path, **kwargs))
+
+    def get(self, path: str, **kwargs: Any) -> _RequestContextManager:
+        """Perform an HTTP GET request."""
+        return _RequestContextManager(self._request(hdrs.METH_GET, path, **kwargs))
+
+    def post(self, path: str, **kwargs: Any) -> _RequestContextManager:
+        """Perform an HTTP POST request."""
+        return _RequestContextManager(self._request(hdrs.METH_POST, path, **kwargs))
+
+    def options(self, path: str, **kwargs: Any) -> _RequestContextManager:
+        """Perform an HTTP OPTIONS request."""
+        return _RequestContextManager(self._request(hdrs.METH_OPTIONS, path, **kwargs))
+
+    def head(self, path: str, **kwargs: Any) -> _RequestContextManager:
+        """Perform an HTTP HEAD request."""
+        return _RequestContextManager(self._request(hdrs.METH_HEAD, path, **kwargs))
+
+    def put(self, path: str, **kwargs: Any) -> _RequestContextManager:
+        """Perform an HTTP PUT request."""
+        return _RequestContextManager(self._request(hdrs.METH_PUT, path, **kwargs))
+
+    def patch(self, path: str, **kwargs: Any) -> _RequestContextManager:
+        """Perform an HTTP PATCH request."""
+        return _RequestContextManager(self._request(hdrs.METH_PATCH, path, **kwargs))
+
+    def delete(self, path: str, **kwargs: Any) -> _RequestContextManager:
+        """Perform an HTTP DELETE request."""
+        return _RequestContextManager(self._request(hdrs.METH_DELETE, path, **kwargs))
+
+    def ws_connect(self, path: str, **kwargs: Any) -> _WSRequestContextManager:
+        """Initiate a websocket connection.
+
+        The API corresponds to aiohttp.ClientSession.ws_connect.
+
+        """
+        return _WSRequestContextManager(self._ws_connect(path, **kwargs))
+
+    async def _ws_connect(self, path: str, **kwargs: Any) -> ClientWebSocketResponse:
+        ws = await self._session.ws_connect(self.make_url(path), **kwargs)
+        self._websockets.append(ws)
+        return ws
+
+    async def close(self) -> None:
+        """Close all fixtures created by the test client.
+
+        After that point, the TestClient is no longer usable.
+
+        This is an idempotent function: running close multiple times
+        will not have any additional effects.
+
+        close is also run on exit when used as an (asynchronous)
+        context manager.
+
+        """
+        if not self._closed:
+            for resp in self._responses:
+                resp.close()
+            for ws in self._websockets:
+                await ws.close()
+            await self._session.close()
+            await self._server.close()
+            self._closed = True
+
+    def __enter__(self) -> None:
+        raise TypeError("Use async with instead")
+
+    def __exit__(
+        self,
+        exc_type: Optional[Type[BaseException]],
+        exc: Optional[BaseException],
+        tb: Optional[TracebackType],
+    ) -> None:
+        # __exit__ should exist in pair with __enter__ but is never executed
+        pass  # pragma: no cover
+
+    async def __aenter__(self) -> "TestClient":
+        await self.start_server()
+        return self
+
+    async def __aexit__(
+        self,
+        exc_type: Optional[Type[BaseException]],
+        exc: Optional[BaseException],
+        tb: Optional[TracebackType],
+    ) -> None:
+        await self.close()
+
+
+class AioHTTPTestCase(TestCase):
+    """A base class to allow for unittest web applications using aiohttp.
+
+    Provides the following:
+
+    * self.client (aiohttp.test_utils.TestClient): an aiohttp test client.
+    * self.loop (asyncio.BaseEventLoop): the event loop in which the
+      application and server are running.
+    * self.app (aiohttp.web.Application): the application returned by
+      self.get_application()
+
+    Note that the TestClient's methods are asynchronous: you have to
+    execute functions on the test client using asynchronous methods.
+    """
+
+    async def get_application(self) -> Application:
+        """Get the application.
+
+        This method should be overridden
+        to return the aiohttp.web.Application
+        object to test.
+        """
+        return self.get_app()
+
+    def get_app(self) -> Application:
+        """Obsolete method used to construct a web application.
+
+        Use the .get_application() coroutine instead.
+ """ + raise RuntimeError("Did you forget to define get_application()?") + + def setUp(self) -> None: + if not PY_38: + asyncio.get_event_loop().run_until_complete(self.asyncSetUp()) + + async def asyncSetUp(self) -> None: + try: + self.loop = asyncio.get_running_loop() + except (AttributeError, RuntimeError): # AttributeError->py36 + self.loop = asyncio.get_event_loop_policy().get_event_loop() + + return await self.setUpAsync() + + async def setUpAsync(self) -> None: + self.app = await self.get_application() + self.server = await self.get_server(self.app) + self.client = await self.get_client(self.server) + + await self.client.start_server() + + def tearDown(self) -> None: + if not PY_38: + self.loop.run_until_complete(self.asyncTearDown()) + + async def asyncTearDown(self) -> None: + return await self.tearDownAsync() + + async def tearDownAsync(self) -> None: + await self.client.close() + + async def get_server(self, app: Application) -> TestServer: + """Return a TestServer instance.""" + return TestServer(app, loop=self.loop) + + async def get_client(self, server: TestServer) -> TestClient: + """Return a TestClient instance.""" + return TestClient(server, loop=self.loop) + + +def unittest_run_loop(func: Any, *args: Any, **kwargs: Any) -> Any: + """ + A decorator dedicated to use with asynchronous AioHTTPTestCase test methods. + + In 3.8+, this does nothing. + """ + warnings.warn( + "Decorator `@unittest_run_loop` is no longer needed in aiohttp 3.8+", + DeprecationWarning, + stacklevel=2, + ) + return func + + +_LOOP_FACTORY = Callable[[], asyncio.AbstractEventLoop] + + +@contextlib.contextmanager +def loop_context( + loop_factory: _LOOP_FACTORY = asyncio.new_event_loop, fast: bool = False +) -> Iterator[asyncio.AbstractEventLoop]: + """A contextmanager that creates an event_loop, for test purposes. + + Handles the creation and cleanup of a test loop. + """ + loop = setup_test_loop(loop_factory) + yield loop + teardown_test_loop(loop, fast=fast) + + +def setup_test_loop( + loop_factory: _LOOP_FACTORY = asyncio.new_event_loop, +) -> asyncio.AbstractEventLoop: + """Create and return an asyncio.BaseEventLoop instance. + + The caller should also call teardown_test_loop, + once they are done with the loop. 
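+
+    A typical pairing looks like (sketch; "coro" is a placeholder)::
+
+        loop = setup_test_loop()
+        try:
+            loop.run_until_complete(coro())
+        finally:
+            teardown_test_loop(loop)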
+ """ + loop = loop_factory() + try: + module = loop.__class__.__module__ + skip_watcher = "uvloop" in module + except AttributeError: # pragma: no cover + # Just in case + skip_watcher = True + asyncio.set_event_loop(loop) + if sys.platform != "win32" and not skip_watcher: + policy = asyncio.get_event_loop_policy() + watcher: asyncio.AbstractChildWatcher + try: # Python >= 3.8 + # Refs: + # * https://github.com/pytest-dev/pytest-xdist/issues/620 + # * https://stackoverflow.com/a/58614689/595220 + # * https://bugs.python.org/issue35621 + # * https://github.com/python/cpython/pull/14344 + watcher = asyncio.ThreadedChildWatcher() + except AttributeError: # Python < 3.8 + watcher = asyncio.SafeChildWatcher() + watcher.attach_loop(loop) + with contextlib.suppress(NotImplementedError): + policy.set_child_watcher(watcher) + return loop + + +def teardown_test_loop(loop: asyncio.AbstractEventLoop, fast: bool = False) -> None: + """Teardown and cleanup an event_loop created by setup_test_loop.""" + closed = loop.is_closed() + if not closed: + loop.call_soon(loop.stop) + loop.run_forever() + loop.close() + + if not fast: + gc.collect() + + asyncio.set_event_loop(None) + + +def _create_app_mock() -> mock.MagicMock: + def get_dict(app: Any, key: str) -> Any: + return app.__app_dict[key] + + def set_dict(app: Any, key: str, value: Any) -> None: + app.__app_dict[key] = value + + app = mock.MagicMock(spec=Application) + app.__app_dict = {} + app.__getitem__ = get_dict + app.__setitem__ = set_dict + + app._debug = False + app.on_response_prepare = Signal(app) + app.on_response_prepare.freeze() + return app + + +def _create_transport(sslcontext: Optional[SSLContext] = None) -> mock.Mock: + transport = mock.Mock() + + def get_extra_info(key: str) -> Optional[SSLContext]: + if key == "sslcontext": + return sslcontext + else: + return None + + transport.get_extra_info.side_effect = get_extra_info + return transport + + +def make_mocked_request( + method: str, + path: str, + headers: Any = None, + *, + match_info: Any = sentinel, + version: HttpVersion = HttpVersion(1, 1), + closing: bool = False, + app: Any = None, + writer: Any = sentinel, + protocol: Any = sentinel, + transport: Any = sentinel, + payload: Any = sentinel, + sslcontext: Optional[SSLContext] = None, + client_max_size: int = 1024**2, + loop: Any = ..., +) -> Request: + """Creates mocked web.Request testing purposes. + + Useful in unit tests, when spinning full web server is overkill or + specific conditions and errors are hard to trigger. 
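+
+    For example (illustrative)::
+
+        req = make_mocked_request("GET", "/", headers={"token": "x"})
+        assert req.headers["token"] == "x"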
+ """ + task = mock.Mock() + if loop is ...: + loop = mock.Mock() + loop.create_future.return_value = () + + if version < HttpVersion(1, 1): + closing = True + + if headers: + headers = CIMultiDictProxy(CIMultiDict(headers)) + raw_hdrs = tuple( + (k.encode("utf-8"), v.encode("utf-8")) for k, v in headers.items() + ) + else: + headers = CIMultiDictProxy(CIMultiDict()) + raw_hdrs = () + + chunked = "chunked" in headers.get(hdrs.TRANSFER_ENCODING, "").lower() + + message = RawRequestMessage( + method, + path, + version, + headers, + raw_hdrs, + closing, + None, + False, + chunked, + URL(path), + ) + if app is None: + app = _create_app_mock() + + if transport is sentinel: + transport = _create_transport(sslcontext) + + if protocol is sentinel: + protocol = mock.Mock() + protocol.transport = transport + + if writer is sentinel: + writer = mock.Mock() + writer.write_headers = make_mocked_coro(None) + writer.write = make_mocked_coro(None) + writer.write_eof = make_mocked_coro(None) + writer.drain = make_mocked_coro(None) + writer.transport = transport + + protocol.transport = transport + protocol.writer = writer + + if payload is sentinel: + payload = mock.Mock() + + req = Request( + message, payload, protocol, writer, task, loop, client_max_size=client_max_size + ) + + match_info = UrlMappingMatchInfo( + {} if match_info is sentinel else match_info, mock.Mock() + ) + match_info.add_app(app) + req._match_info = match_info + + return req + + +def make_mocked_coro( + return_value: Any = sentinel, raise_exception: Any = sentinel +) -> Any: + """Creates a coroutine mock.""" + + async def mock_coro(*args: Any, **kwargs: Any) -> Any: + if raise_exception is not sentinel: + raise raise_exception + if not inspect.isawaitable(return_value): + return return_value + await return_value + + return mock.Mock(wraps=mock_coro) diff --git a/env/lib/python3.8/site-packages/aiohttp/tracing.py b/env/lib/python3.8/site-packages/aiohttp/tracing.py new file mode 100644 index 00000000..d5596a4c --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/tracing.py @@ -0,0 +1,472 @@ +from types import SimpleNamespace +from typing import TYPE_CHECKING, Awaitable, Optional, Type, TypeVar + +import attr +from aiosignal import Signal +from multidict import CIMultiDict +from yarl import URL + +from .client_reqrep import ClientResponse + +if TYPE_CHECKING: # pragma: no cover + from .client import ClientSession + from .typedefs import Protocol + + _ParamT_contra = TypeVar("_ParamT_contra", contravariant=True) + + class _SignalCallback(Protocol[_ParamT_contra]): + def __call__( + self, + __client_session: ClientSession, + __trace_config_ctx: SimpleNamespace, + __params: _ParamT_contra, + ) -> Awaitable[None]: + ... 
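+
+# Wiring sketch (illustrative, not part of this module): a TraceConfig is
+# attached to a ClientSession, and its signals receive the frozen params
+# classes defined below; "on_start" is a hypothetical callback name.
+#
+#     async def on_start(session, ctx, params):
+#         print("Request started:", params.method, params.url)
+#
+#     trace_config = TraceConfig()
+#     trace_config.on_request_start.append(on_start)
+#     async with ClientSession(trace_configs=[trace_config]) as session:
+#         await session.get("https://example.com")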
+ + +__all__ = ( + "TraceConfig", + "TraceRequestStartParams", + "TraceRequestEndParams", + "TraceRequestExceptionParams", + "TraceConnectionQueuedStartParams", + "TraceConnectionQueuedEndParams", + "TraceConnectionCreateStartParams", + "TraceConnectionCreateEndParams", + "TraceConnectionReuseconnParams", + "TraceDnsResolveHostStartParams", + "TraceDnsResolveHostEndParams", + "TraceDnsCacheHitParams", + "TraceDnsCacheMissParams", + "TraceRequestRedirectParams", + "TraceRequestChunkSentParams", + "TraceResponseChunkReceivedParams", + "TraceRequestHeadersSentParams", +) + + +class TraceConfig: + """First-class used to trace requests launched via ClientSession objects.""" + + def __init__( + self, trace_config_ctx_factory: Type[SimpleNamespace] = SimpleNamespace + ) -> None: + self._on_request_start: Signal[ + _SignalCallback[TraceRequestStartParams] + ] = Signal(self) + self._on_request_chunk_sent: Signal[ + _SignalCallback[TraceRequestChunkSentParams] + ] = Signal(self) + self._on_response_chunk_received: Signal[ + _SignalCallback[TraceResponseChunkReceivedParams] + ] = Signal(self) + self._on_request_end: Signal[_SignalCallback[TraceRequestEndParams]] = Signal( + self + ) + self._on_request_exception: Signal[ + _SignalCallback[TraceRequestExceptionParams] + ] = Signal(self) + self._on_request_redirect: Signal[ + _SignalCallback[TraceRequestRedirectParams] + ] = Signal(self) + self._on_connection_queued_start: Signal[ + _SignalCallback[TraceConnectionQueuedStartParams] + ] = Signal(self) + self._on_connection_queued_end: Signal[ + _SignalCallback[TraceConnectionQueuedEndParams] + ] = Signal(self) + self._on_connection_create_start: Signal[ + _SignalCallback[TraceConnectionCreateStartParams] + ] = Signal(self) + self._on_connection_create_end: Signal[ + _SignalCallback[TraceConnectionCreateEndParams] + ] = Signal(self) + self._on_connection_reuseconn: Signal[ + _SignalCallback[TraceConnectionReuseconnParams] + ] = Signal(self) + self._on_dns_resolvehost_start: Signal[ + _SignalCallback[TraceDnsResolveHostStartParams] + ] = Signal(self) + self._on_dns_resolvehost_end: Signal[ + _SignalCallback[TraceDnsResolveHostEndParams] + ] = Signal(self) + self._on_dns_cache_hit: Signal[ + _SignalCallback[TraceDnsCacheHitParams] + ] = Signal(self) + self._on_dns_cache_miss: Signal[ + _SignalCallback[TraceDnsCacheMissParams] + ] = Signal(self) + self._on_request_headers_sent: Signal[ + _SignalCallback[TraceRequestHeadersSentParams] + ] = Signal(self) + + self._trace_config_ctx_factory = trace_config_ctx_factory + + def trace_config_ctx( + self, trace_request_ctx: Optional[SimpleNamespace] = None + ) -> SimpleNamespace: + """Return a new trace_config_ctx instance""" + return self._trace_config_ctx_factory(trace_request_ctx=trace_request_ctx) + + def freeze(self) -> None: + self._on_request_start.freeze() + self._on_request_chunk_sent.freeze() + self._on_response_chunk_received.freeze() + self._on_request_end.freeze() + self._on_request_exception.freeze() + self._on_request_redirect.freeze() + self._on_connection_queued_start.freeze() + self._on_connection_queued_end.freeze() + self._on_connection_create_start.freeze() + self._on_connection_create_end.freeze() + self._on_connection_reuseconn.freeze() + self._on_dns_resolvehost_start.freeze() + self._on_dns_resolvehost_end.freeze() + self._on_dns_cache_hit.freeze() + self._on_dns_cache_miss.freeze() + self._on_request_headers_sent.freeze() + + @property + def on_request_start(self) -> "Signal[_SignalCallback[TraceRequestStartParams]]": + return 
self._on_request_start + + @property + def on_request_chunk_sent( + self, + ) -> "Signal[_SignalCallback[TraceRequestChunkSentParams]]": + return self._on_request_chunk_sent + + @property + def on_response_chunk_received( + self, + ) -> "Signal[_SignalCallback[TraceResponseChunkReceivedParams]]": + return self._on_response_chunk_received + + @property + def on_request_end(self) -> "Signal[_SignalCallback[TraceRequestEndParams]]": + return self._on_request_end + + @property + def on_request_exception( + self, + ) -> "Signal[_SignalCallback[TraceRequestExceptionParams]]": + return self._on_request_exception + + @property + def on_request_redirect( + self, + ) -> "Signal[_SignalCallback[TraceRequestRedirectParams]]": + return self._on_request_redirect + + @property + def on_connection_queued_start( + self, + ) -> "Signal[_SignalCallback[TraceConnectionQueuedStartParams]]": + return self._on_connection_queued_start + + @property + def on_connection_queued_end( + self, + ) -> "Signal[_SignalCallback[TraceConnectionQueuedEndParams]]": + return self._on_connection_queued_end + + @property + def on_connection_create_start( + self, + ) -> "Signal[_SignalCallback[TraceConnectionCreateStartParams]]": + return self._on_connection_create_start + + @property + def on_connection_create_end( + self, + ) -> "Signal[_SignalCallback[TraceConnectionCreateEndParams]]": + return self._on_connection_create_end + + @property + def on_connection_reuseconn( + self, + ) -> "Signal[_SignalCallback[TraceConnectionReuseconnParams]]": + return self._on_connection_reuseconn + + @property + def on_dns_resolvehost_start( + self, + ) -> "Signal[_SignalCallback[TraceDnsResolveHostStartParams]]": + return self._on_dns_resolvehost_start + + @property + def on_dns_resolvehost_end( + self, + ) -> "Signal[_SignalCallback[TraceDnsResolveHostEndParams]]": + return self._on_dns_resolvehost_end + + @property + def on_dns_cache_hit(self) -> "Signal[_SignalCallback[TraceDnsCacheHitParams]]": + return self._on_dns_cache_hit + + @property + def on_dns_cache_miss(self) -> "Signal[_SignalCallback[TraceDnsCacheMissParams]]": + return self._on_dns_cache_miss + + @property + def on_request_headers_sent( + self, + ) -> "Signal[_SignalCallback[TraceRequestHeadersSentParams]]": + return self._on_request_headers_sent + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceRequestStartParams: + """Parameters sent by the `on_request_start` signal""" + + method: str + url: URL + headers: "CIMultiDict[str]" + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceRequestChunkSentParams: + """Parameters sent by the `on_request_chunk_sent` signal""" + + method: str + url: URL + chunk: bytes + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceResponseChunkReceivedParams: + """Parameters sent by the `on_response_chunk_received` signal""" + + method: str + url: URL + chunk: bytes + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceRequestEndParams: + """Parameters sent by the `on_request_end` signal""" + + method: str + url: URL + headers: "CIMultiDict[str]" + response: ClientResponse + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceRequestExceptionParams: + """Parameters sent by the `on_request_exception` signal""" + + method: str + url: URL + headers: "CIMultiDict[str]" + exception: BaseException + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceRequestRedirectParams: + """Parameters sent by the `on_request_redirect` signal""" + + method: str + url: URL + 
headers: "CIMultiDict[str]" + response: ClientResponse + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceConnectionQueuedStartParams: + """Parameters sent by the `on_connection_queued_start` signal""" + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceConnectionQueuedEndParams: + """Parameters sent by the `on_connection_queued_end` signal""" + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceConnectionCreateStartParams: + """Parameters sent by the `on_connection_create_start` signal""" + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceConnectionCreateEndParams: + """Parameters sent by the `on_connection_create_end` signal""" + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceConnectionReuseconnParams: + """Parameters sent by the `on_connection_reuseconn` signal""" + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceDnsResolveHostStartParams: + """Parameters sent by the `on_dns_resolvehost_start` signal""" + + host: str + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceDnsResolveHostEndParams: + """Parameters sent by the `on_dns_resolvehost_end` signal""" + + host: str + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceDnsCacheHitParams: + """Parameters sent by the `on_dns_cache_hit` signal""" + + host: str + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceDnsCacheMissParams: + """Parameters sent by the `on_dns_cache_miss` signal""" + + host: str + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class TraceRequestHeadersSentParams: + """Parameters sent by the `on_request_headers_sent` signal""" + + method: str + url: URL + headers: "CIMultiDict[str]" + + +class Trace: + """Internal dependency holder class. + + Used to keep together the main dependencies used + at the moment of send a signal. 
+ """ + + def __init__( + self, + session: "ClientSession", + trace_config: TraceConfig, + trace_config_ctx: SimpleNamespace, + ) -> None: + self._trace_config = trace_config + self._trace_config_ctx = trace_config_ctx + self._session = session + + async def send_request_start( + self, method: str, url: URL, headers: "CIMultiDict[str]" + ) -> None: + return await self._trace_config.on_request_start.send( + self._session, + self._trace_config_ctx, + TraceRequestStartParams(method, url, headers), + ) + + async def send_request_chunk_sent( + self, method: str, url: URL, chunk: bytes + ) -> None: + return await self._trace_config.on_request_chunk_sent.send( + self._session, + self._trace_config_ctx, + TraceRequestChunkSentParams(method, url, chunk), + ) + + async def send_response_chunk_received( + self, method: str, url: URL, chunk: bytes + ) -> None: + return await self._trace_config.on_response_chunk_received.send( + self._session, + self._trace_config_ctx, + TraceResponseChunkReceivedParams(method, url, chunk), + ) + + async def send_request_end( + self, + method: str, + url: URL, + headers: "CIMultiDict[str]", + response: ClientResponse, + ) -> None: + return await self._trace_config.on_request_end.send( + self._session, + self._trace_config_ctx, + TraceRequestEndParams(method, url, headers, response), + ) + + async def send_request_exception( + self, + method: str, + url: URL, + headers: "CIMultiDict[str]", + exception: BaseException, + ) -> None: + return await self._trace_config.on_request_exception.send( + self._session, + self._trace_config_ctx, + TraceRequestExceptionParams(method, url, headers, exception), + ) + + async def send_request_redirect( + self, + method: str, + url: URL, + headers: "CIMultiDict[str]", + response: ClientResponse, + ) -> None: + return await self._trace_config._on_request_redirect.send( + self._session, + self._trace_config_ctx, + TraceRequestRedirectParams(method, url, headers, response), + ) + + async def send_connection_queued_start(self) -> None: + return await self._trace_config.on_connection_queued_start.send( + self._session, self._trace_config_ctx, TraceConnectionQueuedStartParams() + ) + + async def send_connection_queued_end(self) -> None: + return await self._trace_config.on_connection_queued_end.send( + self._session, self._trace_config_ctx, TraceConnectionQueuedEndParams() + ) + + async def send_connection_create_start(self) -> None: + return await self._trace_config.on_connection_create_start.send( + self._session, self._trace_config_ctx, TraceConnectionCreateStartParams() + ) + + async def send_connection_create_end(self) -> None: + return await self._trace_config.on_connection_create_end.send( + self._session, self._trace_config_ctx, TraceConnectionCreateEndParams() + ) + + async def send_connection_reuseconn(self) -> None: + return await self._trace_config.on_connection_reuseconn.send( + self._session, self._trace_config_ctx, TraceConnectionReuseconnParams() + ) + + async def send_dns_resolvehost_start(self, host: str) -> None: + return await self._trace_config.on_dns_resolvehost_start.send( + self._session, self._trace_config_ctx, TraceDnsResolveHostStartParams(host) + ) + + async def send_dns_resolvehost_end(self, host: str) -> None: + return await self._trace_config.on_dns_resolvehost_end.send( + self._session, self._trace_config_ctx, TraceDnsResolveHostEndParams(host) + ) + + async def send_dns_cache_hit(self, host: str) -> None: + return await self._trace_config.on_dns_cache_hit.send( + self._session, self._trace_config_ctx, 
TraceDnsCacheHitParams(host) + ) + + async def send_dns_cache_miss(self, host: str) -> None: + return await self._trace_config.on_dns_cache_miss.send( + self._session, self._trace_config_ctx, TraceDnsCacheMissParams(host) + ) + + async def send_request_headers( + self, method: str, url: URL, headers: "CIMultiDict[str]" + ) -> None: + return await self._trace_config._on_request_headers_sent.send( + self._session, + self._trace_config_ctx, + TraceRequestHeadersSentParams(method, url, headers), + ) diff --git a/env/lib/python3.8/site-packages/aiohttp/typedefs.py b/env/lib/python3.8/site-packages/aiohttp/typedefs.py new file mode 100644 index 00000000..84283d9a --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/typedefs.py @@ -0,0 +1,64 @@ +import json +import os +import sys +from typing import ( + TYPE_CHECKING, + Any, + Awaitable, + Callable, + Iterable, + Mapping, + Tuple, + Union, +) + +from multidict import CIMultiDict, CIMultiDictProxy, MultiDict, MultiDictProxy, istr +from yarl import URL + +# These are for other modules to use (to avoid repeating the conditional import). +if sys.version_info >= (3, 8): + from typing import Final as Final, Protocol as Protocol, TypedDict as TypedDict +else: + from typing_extensions import ( # noqa: F401 + Final, + Protocol as Protocol, + TypedDict as TypedDict, + ) + +DEFAULT_JSON_ENCODER = json.dumps +DEFAULT_JSON_DECODER = json.loads + +if TYPE_CHECKING: # pragma: no cover + _CIMultiDict = CIMultiDict[str] + _CIMultiDictProxy = CIMultiDictProxy[str] + _MultiDict = MultiDict[str] + _MultiDictProxy = MultiDictProxy[str] + from http.cookies import BaseCookie, Morsel + + from .web import Request, StreamResponse +else: + _CIMultiDict = CIMultiDict + _CIMultiDictProxy = CIMultiDictProxy + _MultiDict = MultiDict + _MultiDictProxy = MultiDictProxy + +Byteish = Union[bytes, bytearray, memoryview] +JSONEncoder = Callable[[Any], str] +JSONDecoder = Callable[[str], Any] +LooseHeaders = Union[Mapping[Union[str, istr], str], _CIMultiDict, _CIMultiDictProxy] +RawHeaders = Tuple[Tuple[bytes, bytes], ...] 
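+
+# Sketch of what these aliases accept (illustrative):
+#   LooseHeaders: {"Accept": "application/json"} or a CIMultiDict[str]
+#   RawHeaders:   ((b"host", b"example.com"), (b"accept", b"*/*"))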
+StrOrURL = Union[str, URL] + +LooseCookiesMappings = Mapping[str, Union[str, "BaseCookie[str]", "Morsel[Any]"]] +LooseCookiesIterables = Iterable[ + Tuple[str, Union[str, "BaseCookie[str]", "Morsel[Any]"]] +] +LooseCookies = Union[ + LooseCookiesMappings, + LooseCookiesIterables, + "BaseCookie[str]", +] + +Handler = Callable[["Request"], Awaitable["StreamResponse"]] + +PathLike = Union[str, "os.PathLike[str]"] diff --git a/env/lib/python3.8/site-packages/aiohttp/web.py b/env/lib/python3.8/site-packages/aiohttp/web.py new file mode 100644 index 00000000..cefae2b9 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web.py @@ -0,0 +1,588 @@ +import asyncio +import logging +import socket +import sys +from argparse import ArgumentParser +from collections.abc import Iterable +from importlib import import_module +from typing import ( + Any, + Awaitable, + Callable, + Iterable as TypingIterable, + List, + Optional, + Set, + Type, + Union, + cast, +) + +from .abc import AbstractAccessLogger +from .helpers import all_tasks +from .log import access_logger +from .web_app import Application as Application, CleanupError as CleanupError +from .web_exceptions import ( + HTTPAccepted as HTTPAccepted, + HTTPBadGateway as HTTPBadGateway, + HTTPBadRequest as HTTPBadRequest, + HTTPClientError as HTTPClientError, + HTTPConflict as HTTPConflict, + HTTPCreated as HTTPCreated, + HTTPError as HTTPError, + HTTPException as HTTPException, + HTTPExpectationFailed as HTTPExpectationFailed, + HTTPFailedDependency as HTTPFailedDependency, + HTTPForbidden as HTTPForbidden, + HTTPFound as HTTPFound, + HTTPGatewayTimeout as HTTPGatewayTimeout, + HTTPGone as HTTPGone, + HTTPInsufficientStorage as HTTPInsufficientStorage, + HTTPInternalServerError as HTTPInternalServerError, + HTTPLengthRequired as HTTPLengthRequired, + HTTPMethodNotAllowed as HTTPMethodNotAllowed, + HTTPMisdirectedRequest as HTTPMisdirectedRequest, + HTTPMovedPermanently as HTTPMovedPermanently, + HTTPMultipleChoices as HTTPMultipleChoices, + HTTPNetworkAuthenticationRequired as HTTPNetworkAuthenticationRequired, + HTTPNoContent as HTTPNoContent, + HTTPNonAuthoritativeInformation as HTTPNonAuthoritativeInformation, + HTTPNotAcceptable as HTTPNotAcceptable, + HTTPNotExtended as HTTPNotExtended, + HTTPNotFound as HTTPNotFound, + HTTPNotImplemented as HTTPNotImplemented, + HTTPNotModified as HTTPNotModified, + HTTPOk as HTTPOk, + HTTPPartialContent as HTTPPartialContent, + HTTPPaymentRequired as HTTPPaymentRequired, + HTTPPermanentRedirect as HTTPPermanentRedirect, + HTTPPreconditionFailed as HTTPPreconditionFailed, + HTTPPreconditionRequired as HTTPPreconditionRequired, + HTTPProxyAuthenticationRequired as HTTPProxyAuthenticationRequired, + HTTPRedirection as HTTPRedirection, + HTTPRequestEntityTooLarge as HTTPRequestEntityTooLarge, + HTTPRequestHeaderFieldsTooLarge as HTTPRequestHeaderFieldsTooLarge, + HTTPRequestRangeNotSatisfiable as HTTPRequestRangeNotSatisfiable, + HTTPRequestTimeout as HTTPRequestTimeout, + HTTPRequestURITooLong as HTTPRequestURITooLong, + HTTPResetContent as HTTPResetContent, + HTTPSeeOther as HTTPSeeOther, + HTTPServerError as HTTPServerError, + HTTPServiceUnavailable as HTTPServiceUnavailable, + HTTPSuccessful as HTTPSuccessful, + HTTPTemporaryRedirect as HTTPTemporaryRedirect, + HTTPTooManyRequests as HTTPTooManyRequests, + HTTPUnauthorized as HTTPUnauthorized, + HTTPUnavailableForLegalReasons as HTTPUnavailableForLegalReasons, + HTTPUnprocessableEntity as HTTPUnprocessableEntity, + HTTPUnsupportedMediaType as 
HTTPUnsupportedMediaType, + HTTPUpgradeRequired as HTTPUpgradeRequired, + HTTPUseProxy as HTTPUseProxy, + HTTPVariantAlsoNegotiates as HTTPVariantAlsoNegotiates, + HTTPVersionNotSupported as HTTPVersionNotSupported, +) +from .web_fileresponse import FileResponse as FileResponse +from .web_log import AccessLogger +from .web_middlewares import ( + middleware as middleware, + normalize_path_middleware as normalize_path_middleware, +) +from .web_protocol import ( + PayloadAccessError as PayloadAccessError, + RequestHandler as RequestHandler, + RequestPayloadError as RequestPayloadError, +) +from .web_request import ( + BaseRequest as BaseRequest, + FileField as FileField, + Request as Request, +) +from .web_response import ( + ContentCoding as ContentCoding, + Response as Response, + StreamResponse as StreamResponse, + json_response as json_response, +) +from .web_routedef import ( + AbstractRouteDef as AbstractRouteDef, + RouteDef as RouteDef, + RouteTableDef as RouteTableDef, + StaticDef as StaticDef, + delete as delete, + get as get, + head as head, + options as options, + patch as patch, + post as post, + put as put, + route as route, + static as static, + view as view, +) +from .web_runner import ( + AppRunner as AppRunner, + BaseRunner as BaseRunner, + BaseSite as BaseSite, + GracefulExit as GracefulExit, + NamedPipeSite as NamedPipeSite, + ServerRunner as ServerRunner, + SockSite as SockSite, + TCPSite as TCPSite, + UnixSite as UnixSite, +) +from .web_server import Server as Server +from .web_urldispatcher import ( + AbstractResource as AbstractResource, + AbstractRoute as AbstractRoute, + DynamicResource as DynamicResource, + PlainResource as PlainResource, + PrefixedSubAppResource as PrefixedSubAppResource, + Resource as Resource, + ResourceRoute as ResourceRoute, + StaticResource as StaticResource, + UrlDispatcher as UrlDispatcher, + UrlMappingMatchInfo as UrlMappingMatchInfo, + View as View, +) +from .web_ws import ( + WebSocketReady as WebSocketReady, + WebSocketResponse as WebSocketResponse, + WSMsgType as WSMsgType, +) + +__all__ = ( + # web_app + "Application", + "CleanupError", + # web_exceptions + "HTTPAccepted", + "HTTPBadGateway", + "HTTPBadRequest", + "HTTPClientError", + "HTTPConflict", + "HTTPCreated", + "HTTPError", + "HTTPException", + "HTTPExpectationFailed", + "HTTPFailedDependency", + "HTTPForbidden", + "HTTPFound", + "HTTPGatewayTimeout", + "HTTPGone", + "HTTPInsufficientStorage", + "HTTPInternalServerError", + "HTTPLengthRequired", + "HTTPMethodNotAllowed", + "HTTPMisdirectedRequest", + "HTTPMovedPermanently", + "HTTPMultipleChoices", + "HTTPNetworkAuthenticationRequired", + "HTTPNoContent", + "HTTPNonAuthoritativeInformation", + "HTTPNotAcceptable", + "HTTPNotExtended", + "HTTPNotFound", + "HTTPNotImplemented", + "HTTPNotModified", + "HTTPOk", + "HTTPPartialContent", + "HTTPPaymentRequired", + "HTTPPermanentRedirect", + "HTTPPreconditionFailed", + "HTTPPreconditionRequired", + "HTTPProxyAuthenticationRequired", + "HTTPRedirection", + "HTTPRequestEntityTooLarge", + "HTTPRequestHeaderFieldsTooLarge", + "HTTPRequestRangeNotSatisfiable", + "HTTPRequestTimeout", + "HTTPRequestURITooLong", + "HTTPResetContent", + "HTTPSeeOther", + "HTTPServerError", + "HTTPServiceUnavailable", + "HTTPSuccessful", + "HTTPTemporaryRedirect", + "HTTPTooManyRequests", + "HTTPUnauthorized", + "HTTPUnavailableForLegalReasons", + "HTTPUnprocessableEntity", + "HTTPUnsupportedMediaType", + "HTTPUpgradeRequired", + "HTTPUseProxy", + "HTTPVariantAlsoNegotiates", + "HTTPVersionNotSupported", + # 
web_fileresponse + "FileResponse", + # web_middlewares + "middleware", + "normalize_path_middleware", + # web_protocol + "PayloadAccessError", + "RequestHandler", + "RequestPayloadError", + # web_request + "BaseRequest", + "FileField", + "Request", + # web_response + "ContentCoding", + "Response", + "StreamResponse", + "json_response", + # web_routedef + "AbstractRouteDef", + "RouteDef", + "RouteTableDef", + "StaticDef", + "delete", + "get", + "head", + "options", + "patch", + "post", + "put", + "route", + "static", + "view", + # web_runner + "AppRunner", + "BaseRunner", + "BaseSite", + "GracefulExit", + "ServerRunner", + "SockSite", + "TCPSite", + "UnixSite", + "NamedPipeSite", + # web_server + "Server", + # web_urldispatcher + "AbstractResource", + "AbstractRoute", + "DynamicResource", + "PlainResource", + "PrefixedSubAppResource", + "Resource", + "ResourceRoute", + "StaticResource", + "UrlDispatcher", + "UrlMappingMatchInfo", + "View", + # web_ws + "WebSocketReady", + "WebSocketResponse", + "WSMsgType", + # web + "run_app", +) + + +try: + from ssl import SSLContext +except ImportError: # pragma: no cover + SSLContext = Any # type: ignore[misc,assignment] + +HostSequence = TypingIterable[str] + + +async def _run_app( + app: Union[Application, Awaitable[Application]], + *, + host: Optional[Union[str, HostSequence]] = None, + port: Optional[int] = None, + path: Optional[str] = None, + sock: Optional[Union[socket.socket, TypingIterable[socket.socket]]] = None, + shutdown_timeout: float = 60.0, + keepalive_timeout: float = 75.0, + ssl_context: Optional[SSLContext] = None, + print: Callable[..., None] = print, + backlog: int = 128, + access_log_class: Type[AbstractAccessLogger] = AccessLogger, + access_log_format: str = AccessLogger.LOG_FORMAT, + access_log: Optional[logging.Logger] = access_logger, + handle_signals: bool = True, + reuse_address: Optional[bool] = None, + reuse_port: Optional[bool] = None, +) -> None: + # A internal functio to actually do all dirty job for application running + if asyncio.iscoroutine(app): + app = await app # type: ignore[misc] + + app = cast(Application, app) + + runner = AppRunner( + app, + handle_signals=handle_signals, + access_log_class=access_log_class, + access_log_format=access_log_format, + access_log=access_log, + keepalive_timeout=keepalive_timeout, + ) + + await runner.setup() + + sites: List[BaseSite] = [] + + try: + if host is not None: + if isinstance(host, (str, bytes, bytearray, memoryview)): + sites.append( + TCPSite( + runner, + host, + port, + shutdown_timeout=shutdown_timeout, + ssl_context=ssl_context, + backlog=backlog, + reuse_address=reuse_address, + reuse_port=reuse_port, + ) + ) + else: + for h in host: + sites.append( + TCPSite( + runner, + h, + port, + shutdown_timeout=shutdown_timeout, + ssl_context=ssl_context, + backlog=backlog, + reuse_address=reuse_address, + reuse_port=reuse_port, + ) + ) + elif path is None and sock is None or port is not None: + sites.append( + TCPSite( + runner, + port=port, + shutdown_timeout=shutdown_timeout, + ssl_context=ssl_context, + backlog=backlog, + reuse_address=reuse_address, + reuse_port=reuse_port, + ) + ) + + if path is not None: + if isinstance(path, (str, bytes, bytearray, memoryview)): + sites.append( + UnixSite( + runner, + path, + shutdown_timeout=shutdown_timeout, + ssl_context=ssl_context, + backlog=backlog, + ) + ) + else: + for p in path: + sites.append( + UnixSite( + runner, + p, + shutdown_timeout=shutdown_timeout, + ssl_context=ssl_context, + backlog=backlog, + ) + ) + + if sock 
is not None: + if not isinstance(sock, Iterable): + sites.append( + SockSite( + runner, + sock, + shutdown_timeout=shutdown_timeout, + ssl_context=ssl_context, + backlog=backlog, + ) + ) + else: + for s in sock: + sites.append( + SockSite( + runner, + s, + shutdown_timeout=shutdown_timeout, + ssl_context=ssl_context, + backlog=backlog, + ) + ) + for site in sites: + await site.start() + + if print: # pragma: no branch + names = sorted(str(s.name) for s in runner.sites) + print( + "======== Running on {} ========\n" + "(Press CTRL+C to quit)".format(", ".join(names)) + ) + + # sleep forever by 1 hour intervals, + # on Windows before Python 3.8 wake up every 1 second to handle + # Ctrl+C smoothly + if sys.platform == "win32" and sys.version_info < (3, 8): + delay = 1 + else: + delay = 3600 + + while True: + await asyncio.sleep(delay) + finally: + await runner.cleanup() + + +def _cancel_tasks( + to_cancel: Set["asyncio.Task[Any]"], loop: asyncio.AbstractEventLoop +) -> None: + if not to_cancel: + return + + for task in to_cancel: + task.cancel() + + loop.run_until_complete(asyncio.gather(*to_cancel, return_exceptions=True)) + + for task in to_cancel: + if task.cancelled(): + continue + if task.exception() is not None: + loop.call_exception_handler( + { + "message": "unhandled exception during asyncio.run() shutdown", + "exception": task.exception(), + "task": task, + } + ) + + +def run_app( + app: Union[Application, Awaitable[Application]], + *, + host: Optional[Union[str, HostSequence]] = None, + port: Optional[int] = None, + path: Optional[str] = None, + sock: Optional[Union[socket.socket, TypingIterable[socket.socket]]] = None, + shutdown_timeout: float = 60.0, + keepalive_timeout: float = 75.0, + ssl_context: Optional[SSLContext] = None, + print: Callable[..., None] = print, + backlog: int = 128, + access_log_class: Type[AbstractAccessLogger] = AccessLogger, + access_log_format: str = AccessLogger.LOG_FORMAT, + access_log: Optional[logging.Logger] = access_logger, + handle_signals: bool = True, + reuse_address: Optional[bool] = None, + reuse_port: Optional[bool] = None, + loop: Optional[asyncio.AbstractEventLoop] = None, +) -> None: + """Run an app locally""" + if loop is None: + loop = asyncio.new_event_loop() + + # Configure if and only if in debugging mode and using the default logger + if loop.get_debug() and access_log and access_log.name == "aiohttp.access": + if access_log.level == logging.NOTSET: + access_log.setLevel(logging.DEBUG) + if not access_log.hasHandlers(): + access_log.addHandler(logging.StreamHandler()) + + main_task = loop.create_task( + _run_app( + app, + host=host, + port=port, + path=path, + sock=sock, + shutdown_timeout=shutdown_timeout, + keepalive_timeout=keepalive_timeout, + ssl_context=ssl_context, + print=print, + backlog=backlog, + access_log_class=access_log_class, + access_log_format=access_log_format, + access_log=access_log, + handle_signals=handle_signals, + reuse_address=reuse_address, + reuse_port=reuse_port, + ) + ) + + try: + asyncio.set_event_loop(loop) + loop.run_until_complete(main_task) + except (GracefulExit, KeyboardInterrupt): # pragma: no cover + pass + finally: + _cancel_tasks({main_task}, loop) + _cancel_tasks(all_tasks(loop), loop) + loop.run_until_complete(loop.shutdown_asyncgens()) + loop.close() + + +def main(argv: List[str]) -> None: + arg_parser = ArgumentParser( + description="aiohttp.web Application server", prog="aiohttp.web" + ) + arg_parser.add_argument( + "entry_func", + help=( + "Callable returning the 
`aiohttp.web.Application` instance to " + "run. Should be specified in the 'module:function' syntax." + ), + metavar="entry-func", + ) + arg_parser.add_argument( + "-H", + "--hostname", + help="TCP/IP hostname to serve on (default: %(default)r)", + default="localhost", + ) + arg_parser.add_argument( + "-P", + "--port", + help="TCP/IP port to serve on (default: %(default)r)", + type=int, + default="8080", + ) + arg_parser.add_argument( + "-U", + "--path", + help="Unix file system path to serve on. Specifying a path will cause " + "hostname and port arguments to be ignored.", + ) + args, extra_argv = arg_parser.parse_known_args(argv) + + # Import logic + mod_str, _, func_str = args.entry_func.partition(":") + if not func_str or not mod_str: + arg_parser.error("'entry-func' not in 'module:function' syntax") + if mod_str.startswith("."): + arg_parser.error("relative module names not supported") + try: + module = import_module(mod_str) + except ImportError as ex: + arg_parser.error(f"unable to import {mod_str}: {ex}") + try: + func = getattr(module, func_str) + except AttributeError: + arg_parser.error(f"module {mod_str!r} has no attribute {func_str!r}") + + # Compatibility logic + if args.path is not None and not hasattr(socket, "AF_UNIX"): + arg_parser.error( + "file system paths not supported by your operating" " environment" + ) + + logging.basicConfig(level=logging.DEBUG) + + app = func(extra_argv) + run_app(app, host=args.hostname, port=args.port, path=args.path) + arg_parser.exit(message="Stopped\n") + + +if __name__ == "__main__": # pragma: no branch + main(sys.argv[1:]) # pragma: no cover diff --git a/env/lib/python3.8/site-packages/aiohttp/web_app.py b/env/lib/python3.8/site-packages/aiohttp/web_app.py new file mode 100644 index 00000000..8fd4471d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_app.py @@ -0,0 +1,557 @@ +import asyncio +import logging +import warnings +from functools import partial, update_wrapper +from typing import ( + TYPE_CHECKING, + Any, + AsyncIterator, + Awaitable, + Callable, + Dict, + Iterable, + Iterator, + List, + Mapping, + MutableMapping, + Optional, + Sequence, + Tuple, + Type, + Union, + cast, +) + +from aiosignal import Signal +from frozenlist import FrozenList + +from . 
import hdrs +from .abc import ( + AbstractAccessLogger, + AbstractMatchInfo, + AbstractRouter, + AbstractStreamWriter, +) +from .helpers import DEBUG +from .http_parser import RawRequestMessage +from .log import web_logger +from .streams import StreamReader +from .web_log import AccessLogger +from .web_middlewares import _fix_request_current_app +from .web_protocol import RequestHandler +from .web_request import Request +from .web_response import StreamResponse +from .web_routedef import AbstractRouteDef +from .web_server import Server +from .web_urldispatcher import ( + AbstractResource, + AbstractRoute, + Domain, + MaskDomain, + MatchedSubAppResource, + PrefixedSubAppResource, + UrlDispatcher, +) + +__all__ = ("Application", "CleanupError") + + +if TYPE_CHECKING: # pragma: no cover + from .typedefs import Handler + + _AppSignal = Signal[Callable[["Application"], Awaitable[None]]] + _RespPrepareSignal = Signal[Callable[[Request, StreamResponse], Awaitable[None]]] + _Middleware = Union[ + Callable[[Request, Handler], Awaitable[StreamResponse]], + Callable[["Application", Handler], Awaitable[Handler]], # old-style + ] + _Middlewares = FrozenList[_Middleware] + _MiddlewaresHandlers = Optional[Sequence[Tuple[_Middleware, bool]]] + _Subapps = List["Application"] +else: + # No type checker mode, skip types + _AppSignal = Signal + _RespPrepareSignal = Signal + _Middleware = Callable + _Middlewares = FrozenList + _MiddlewaresHandlers = Optional[Sequence] + _Subapps = List + + +class Application(MutableMapping[str, Any]): + ATTRS = frozenset( + [ + "logger", + "_debug", + "_router", + "_loop", + "_handler_args", + "_middlewares", + "_middlewares_handlers", + "_run_middlewares", + "_state", + "_frozen", + "_pre_frozen", + "_subapps", + "_on_response_prepare", + "_on_startup", + "_on_shutdown", + "_on_cleanup", + "_client_max_size", + "_cleanup_ctx", + ] + ) + + def __init__( + self, + *, + logger: logging.Logger = web_logger, + router: Optional[UrlDispatcher] = None, + middlewares: Iterable[_Middleware] = (), + handler_args: Optional[Mapping[str, Any]] = None, + client_max_size: int = 1024**2, + loop: Optional[asyncio.AbstractEventLoop] = None, + debug: Any = ..., # mypy doesn't support ellipsis + ) -> None: + if router is None: + router = UrlDispatcher() + else: + warnings.warn( + "router argument is deprecated", DeprecationWarning, stacklevel=2 + ) + assert isinstance(router, AbstractRouter), router + + if loop is not None: + warnings.warn( + "loop argument is deprecated", DeprecationWarning, stacklevel=2 + ) + + if debug is not ...: + warnings.warn( + "debug argument is deprecated", DeprecationWarning, stacklevel=2 + ) + self._debug = debug + self._router: UrlDispatcher = router + self._loop = loop + self._handler_args = handler_args + self.logger = logger + + self._middlewares: _Middlewares = FrozenList(middlewares) + + # initialized on freezing + self._middlewares_handlers: _MiddlewaresHandlers = None + # initialized on freezing + self._run_middlewares: Optional[bool] = None + + self._state: Dict[str, Any] = {} + self._frozen = False + self._pre_frozen = False + self._subapps: _Subapps = [] + + self._on_response_prepare: _RespPrepareSignal = Signal(self) + self._on_startup: _AppSignal = Signal(self) + self._on_shutdown: _AppSignal = Signal(self) + self._on_cleanup: _AppSignal = Signal(self) + self._cleanup_ctx = CleanupContext() + self._on_startup.append(self._cleanup_ctx._on_startup) + self._on_cleanup.append(self._cleanup_ctx._on_cleanup) + self._client_max_size = client_max_size + + def 
__init_subclass__(cls: Type["Application"]) -> None: + warnings.warn( + "Inheriting class {} from web.Application " + "is discouraged".format(cls.__name__), + DeprecationWarning, + stacklevel=2, + ) + + if DEBUG: # pragma: no cover + + def __setattr__(self, name: str, val: Any) -> None: + if name not in self.ATTRS: + warnings.warn( + "Setting custom web.Application.{} attribute " + "is discouraged".format(name), + DeprecationWarning, + stacklevel=2, + ) + super().__setattr__(name, val) + + # MutableMapping API + + def __eq__(self, other: object) -> bool: + return self is other + + def __getitem__(self, key: str) -> Any: + return self._state[key] + + def _check_frozen(self) -> None: + if self._frozen: + warnings.warn( + "Changing state of started or joined " "application is deprecated", + DeprecationWarning, + stacklevel=3, + ) + + def __setitem__(self, key: str, value: Any) -> None: + self._check_frozen() + self._state[key] = value + + def __delitem__(self, key: str) -> None: + self._check_frozen() + del self._state[key] + + def __len__(self) -> int: + return len(self._state) + + def __iter__(self) -> Iterator[str]: + return iter(self._state) + + ######## + @property + def loop(self) -> asyncio.AbstractEventLoop: + # Technically the loop can be None + # but we mask it by explicit type cast + # to provide more convenient type annotation + warnings.warn("loop property is deprecated", DeprecationWarning, stacklevel=2) + return cast(asyncio.AbstractEventLoop, self._loop) + + def _set_loop(self, loop: Optional[asyncio.AbstractEventLoop]) -> None: + if loop is None: + loop = asyncio.get_event_loop() + if self._loop is not None and self._loop is not loop: + raise RuntimeError( + "web.Application instance initialized with different loop" + ) + + self._loop = loop + + # set loop debug + if self._debug is ...: + self._debug = loop.get_debug() + + # set loop to sub applications + for subapp in self._subapps: + subapp._set_loop(loop) + + @property + def pre_frozen(self) -> bool: + return self._pre_frozen + + def pre_freeze(self) -> None: + if self._pre_frozen: + return + + self._pre_frozen = True + self._middlewares.freeze() + self._router.freeze() + self._on_response_prepare.freeze() + self._cleanup_ctx.freeze() + self._on_startup.freeze() + self._on_shutdown.freeze() + self._on_cleanup.freeze() + self._middlewares_handlers = tuple(self._prepare_middleware()) + + # If neither the current app nor any subapp has middlewares, avoid + # running all of the code that middleware support implies: a middleware + # hardcoded per app that sets up the current_app attribute. If no + # middlewares are configured, the handler will receive the proper + # current_app without needing any of this code.
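+ # Illustrative sketch (added in this document, not aiohttp source; + # ``web`` stands for the ``aiohttp.web`` namespace and ``log_middleware`` + # is a hypothetical name): a new-style middleware picked up by this + # machinery looks like + # + # @web.middleware + # async def log_middleware(request, handler): + # resp = await handler(request) + # print(request.path, resp.status) + # return resp + # + # app = web.Application(middlewares=[log_middleware])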
+ self._run_middlewares = True if self.middlewares else False + + for subapp in self._subapps: + subapp.pre_freeze() + self._run_middlewares = self._run_middlewares or subapp._run_middlewares + + @property + def frozen(self) -> bool: + return self._frozen + + def freeze(self) -> None: + if self._frozen: + return + + self.pre_freeze() + self._frozen = True + for subapp in self._subapps: + subapp.freeze() + + @property + def debug(self) -> bool: + warnings.warn("debug property is deprecated", DeprecationWarning, stacklevel=2) + return self._debug # type: ignore[no-any-return] + + def _reg_subapp_signals(self, subapp: "Application") -> None: + def reg_handler(signame: str) -> None: + subsig = getattr(subapp, signame) + + async def handler(app: "Application") -> None: + await subsig.send(subapp) + + appsig = getattr(self, signame) + appsig.append(handler) + + reg_handler("on_startup") + reg_handler("on_shutdown") + reg_handler("on_cleanup") + + def add_subapp(self, prefix: str, subapp: "Application") -> AbstractResource: + if not isinstance(prefix, str): + raise TypeError("Prefix must be str") + prefix = prefix.rstrip("/") + if not prefix: + raise ValueError("Prefix cannot be empty") + factory = partial(PrefixedSubAppResource, prefix, subapp) + return self._add_subapp(factory, subapp) + + def _add_subapp( + self, resource_factory: Callable[[], AbstractResource], subapp: "Application" + ) -> AbstractResource: + if self.frozen: + raise RuntimeError("Cannot add sub application to frozen application") + if subapp.frozen: + raise RuntimeError("Cannot add frozen application") + resource = resource_factory() + self.router.register_resource(resource) + self._reg_subapp_signals(subapp) + self._subapps.append(subapp) + subapp.pre_freeze() + if self._loop is not None: + subapp._set_loop(self._loop) + return resource + + def add_domain(self, domain: str, subapp: "Application") -> AbstractResource: + if not isinstance(domain, str): + raise TypeError("Domain must be str") + elif "*" in domain: + rule: Domain = MaskDomain(domain) + else: + rule = Domain(domain) + factory = partial(MatchedSubAppResource, rule, subapp) + return self._add_subapp(factory, subapp) + + def add_routes(self, routes: Iterable[AbstractRouteDef]) -> List[AbstractRoute]: + return self.router.add_routes(routes) + + @property + def on_response_prepare(self) -> _RespPrepareSignal: + return self._on_response_prepare + + @property + def on_startup(self) -> _AppSignal: + return self._on_startup + + @property + def on_shutdown(self) -> _AppSignal: + return self._on_shutdown + + @property + def on_cleanup(self) -> _AppSignal: + return self._on_cleanup + + @property + def cleanup_ctx(self) -> "CleanupContext": + return self._cleanup_ctx + + @property + def router(self) -> UrlDispatcher: + return self._router + + @property + def middlewares(self) -> _Middlewares: + return self._middlewares + + def _make_handler( + self, + *, + loop: Optional[asyncio.AbstractEventLoop] = None, + access_log_class: Type[AbstractAccessLogger] = AccessLogger, + **kwargs: Any, + ) -> Server: + + if not issubclass(access_log_class, AbstractAccessLogger): + raise TypeError( + "access_log_class must be subclass of " + "aiohttp.abc.AbstractAccessLogger, got {}".format(access_log_class) + ) + + self._set_loop(loop) + self.freeze() + + kwargs["debug"] = self._debug + kwargs["access_log_class"] = access_log_class + if self._handler_args: + for k, v in self._handler_args.items(): + kwargs[k] = v + + return Server( + self._handle, # type: ignore[arg-type] + 
request_factory=self._make_request, + loop=self._loop, + **kwargs, + ) + + def make_handler( + self, + *, + loop: Optional[asyncio.AbstractEventLoop] = None, + access_log_class: Type[AbstractAccessLogger] = AccessLogger, + **kwargs: Any, + ) -> Server: + + warnings.warn( + "Application.make_handler(...) is deprecated, " "use AppRunner API instead", + DeprecationWarning, + stacklevel=2, + ) + + return self._make_handler( + loop=loop, access_log_class=access_log_class, **kwargs + ) + + async def startup(self) -> None: + """Causes on_startup signal + + Should be called in the event loop along with the request handler. + """ + await self.on_startup.send(self) + + async def shutdown(self) -> None: + """Causes on_shutdown signal + + Should be called before cleanup() + """ + await self.on_shutdown.send(self) + + async def cleanup(self) -> None: + """Causes on_cleanup signal + + Should be called after shutdown() + """ + if self.on_cleanup.frozen: + await self.on_cleanup.send(self) + else: + # If an exception occurs in startup, ensure cleanup contexts are completed. + await self._cleanup_ctx._on_cleanup(self) + + def _make_request( + self, + message: RawRequestMessage, + payload: StreamReader, + protocol: RequestHandler, + writer: AbstractStreamWriter, + task: "asyncio.Task[None]", + _cls: Type[Request] = Request, + ) -> Request: + return _cls( + message, + payload, + protocol, + writer, + task, + self._loop, + client_max_size=self._client_max_size, + ) + + def _prepare_middleware(self) -> Iterator[Tuple[_Middleware, bool]]: + for m in reversed(self._middlewares): + if getattr(m, "__middleware_version__", None) == 1: + yield m, True + else: + warnings.warn( + 'old-style middleware "{!r}" deprecated, ' "see #2252".format(m), + DeprecationWarning, + stacklevel=2, + ) + yield m, False + + yield _fix_request_current_app(self), True + + async def _handle(self, request: Request) -> StreamResponse: + loop = asyncio.get_event_loop() + debug = loop.get_debug() + match_info = await self._router.resolve(request) + if debug: # pragma: no cover + if not isinstance(match_info, AbstractMatchInfo): + raise TypeError( + "match_info should be AbstractMatchInfo " + "instance, not {!r}".format(match_info) + ) + match_info.add_app(self) + + match_info.freeze() + + resp = None + request._match_info = match_info + expect = request.headers.get(hdrs.EXPECT) + if expect: + resp = await match_info.expect_handler(request) + await request.writer.drain() + + if resp is None: + handler = match_info.handler + + if self._run_middlewares: + for app in match_info.apps[::-1]: + for m, new_style in app._middlewares_handlers: # type: ignore[union-attr] # noqa + if new_style: + handler = update_wrapper( + partial(m, handler=handler), handler + ) + else: + handler = await m(app, handler) # type: ignore[arg-type] + + resp = await handler(request) + + return resp + + def __call__(self) -> "Application": + """gunicorn compatibility""" + return self + + def __repr__(self) -> str: + return f"<Application 0x{id(self):x}>" + + def __bool__(self) -> bool: + return True + + +class CleanupError(RuntimeError): + @property + def exceptions(self) -> List[BaseException]: + return cast(List[BaseException], self.args[1]) + + +if TYPE_CHECKING: # pragma: no cover + _CleanupContextBase = FrozenList[Callable[[Application], AsyncIterator[None]]] +else: + _CleanupContextBase = FrozenList + + +class CleanupContext(_CleanupContextBase): + def __init__(self) -> None: + super().__init__() + self._exits: List[AsyncIterator[None]] = [] + + async def _on_startup(self, app: Application) ->
None: + for cb in self: + it = cb(app).__aiter__() + await it.__anext__() + self._exits.append(it) + + async def _on_cleanup(self, app: Application) -> None: + errors = [] + for it in reversed(self._exits): + try: + await it.__anext__() + except StopAsyncIteration: + pass + except Exception as exc: + errors.append(exc) + else: + errors.append(RuntimeError(f"{it!r} has more than one 'yield'")) + if errors: + if len(errors) == 1: + raise errors[0] + else: + raise CleanupError("Multiple errors on cleanup stage", errors) diff --git a/env/lib/python3.8/site-packages/aiohttp/web_exceptions.py b/env/lib/python3.8/site-packages/aiohttp/web_exceptions.py new file mode 100644 index 00000000..ae706a18 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_exceptions.py @@ -0,0 +1,441 @@ +import warnings +from typing import Any, Dict, Iterable, List, Optional, Set # noqa + +from yarl import URL + +from .typedefs import LooseHeaders, StrOrURL +from .web_response import Response + +__all__ = ( + "HTTPException", + "HTTPError", + "HTTPRedirection", + "HTTPSuccessful", + "HTTPOk", + "HTTPCreated", + "HTTPAccepted", + "HTTPNonAuthoritativeInformation", + "HTTPNoContent", + "HTTPResetContent", + "HTTPPartialContent", + "HTTPMultipleChoices", + "HTTPMovedPermanently", + "HTTPFound", + "HTTPSeeOther", + "HTTPNotModified", + "HTTPUseProxy", + "HTTPTemporaryRedirect", + "HTTPPermanentRedirect", + "HTTPClientError", + "HTTPBadRequest", + "HTTPUnauthorized", + "HTTPPaymentRequired", + "HTTPForbidden", + "HTTPNotFound", + "HTTPMethodNotAllowed", + "HTTPNotAcceptable", + "HTTPProxyAuthenticationRequired", + "HTTPRequestTimeout", + "HTTPConflict", + "HTTPGone", + "HTTPLengthRequired", + "HTTPPreconditionFailed", + "HTTPRequestEntityTooLarge", + "HTTPRequestURITooLong", + "HTTPUnsupportedMediaType", + "HTTPRequestRangeNotSatisfiable", + "HTTPExpectationFailed", + "HTTPMisdirectedRequest", + "HTTPUnprocessableEntity", + "HTTPFailedDependency", + "HTTPUpgradeRequired", + "HTTPPreconditionRequired", + "HTTPTooManyRequests", + "HTTPRequestHeaderFieldsTooLarge", + "HTTPUnavailableForLegalReasons", + "HTTPServerError", + "HTTPInternalServerError", + "HTTPNotImplemented", + "HTTPBadGateway", + "HTTPServiceUnavailable", + "HTTPGatewayTimeout", + "HTTPVersionNotSupported", + "HTTPVariantAlsoNegotiates", + "HTTPInsufficientStorage", + "HTTPNotExtended", + "HTTPNetworkAuthenticationRequired", +) + + +############################################################ +# HTTP Exceptions +############################################################ + + +class HTTPException(Response, Exception): + + # You should set in subclasses: + # status = 200 + + status_code = -1 + empty_body = False + + __http_exception__ = True + + def __init__( + self, + *, + headers: Optional[LooseHeaders] = None, + reason: Optional[str] = None, + body: Any = None, + text: Optional[str] = None, + content_type: Optional[str] = None, + ) -> None: + if body is not None: + warnings.warn( + "body argument is deprecated for http web exceptions", + DeprecationWarning, + ) + Response.__init__( + self, + status=self.status_code, + headers=headers, + reason=reason, + body=body, + text=text, + content_type=content_type, + ) + Exception.__init__(self, self.reason) + if self.body is None and not self.empty_body: + self.text = f"{self.status}: {self.reason}" + + def __bool__(self) -> bool: + return True + + +class HTTPError(HTTPException): + """Base class for exceptions with status codes in the 400s and 500s.""" + + +class HTTPRedirection(HTTPException): + """Base 
class for exceptions with status codes in the 300s.""" + + +class HTTPSuccessful(HTTPException): + """Base class for exceptions with status codes in the 200s.""" + + +class HTTPOk(HTTPSuccessful): + status_code = 200 + + +class HTTPCreated(HTTPSuccessful): + status_code = 201 + + +class HTTPAccepted(HTTPSuccessful): + status_code = 202 + + +class HTTPNonAuthoritativeInformation(HTTPSuccessful): + status_code = 203 + + +class HTTPNoContent(HTTPSuccessful): + status_code = 204 + empty_body = True + + +class HTTPResetContent(HTTPSuccessful): + status_code = 205 + empty_body = True + + +class HTTPPartialContent(HTTPSuccessful): + status_code = 206 + + +############################################################ +# 3xx redirection +############################################################ + + +class _HTTPMove(HTTPRedirection): + def __init__( + self, + location: StrOrURL, + *, + headers: Optional[LooseHeaders] = None, + reason: Optional[str] = None, + body: Any = None, + text: Optional[str] = None, + content_type: Optional[str] = None, + ) -> None: + if not location: + raise ValueError("HTTP redirects need a location to redirect to.") + super().__init__( + headers=headers, + reason=reason, + body=body, + text=text, + content_type=content_type, + ) + self.headers["Location"] = str(URL(location)) + self.location = location + + +class HTTPMultipleChoices(_HTTPMove): + status_code = 300 + + +class HTTPMovedPermanently(_HTTPMove): + status_code = 301 + + +class HTTPFound(_HTTPMove): + status_code = 302 + + +# This one is safe after a POST (the redirected location will be +# retrieved with GET): +class HTTPSeeOther(_HTTPMove): + status_code = 303 + + +class HTTPNotModified(HTTPRedirection): + # FIXME: this should include a date or etag header + status_code = 304 + empty_body = True + + +class HTTPUseProxy(_HTTPMove): + # Not a move, but looks a little like one + status_code = 305 + + +class HTTPTemporaryRedirect(_HTTPMove): + status_code = 307 + + +class HTTPPermanentRedirect(_HTTPMove): + status_code = 308 + + +############################################################ +# 4xx client error +############################################################ + + +class HTTPClientError(HTTPError): + pass + + +class HTTPBadRequest(HTTPClientError): + status_code = 400 + + +class HTTPUnauthorized(HTTPClientError): + status_code = 401 + + +class HTTPPaymentRequired(HTTPClientError): + status_code = 402 + + +class HTTPForbidden(HTTPClientError): + status_code = 403 + + +class HTTPNotFound(HTTPClientError): + status_code = 404 + + +class HTTPMethodNotAllowed(HTTPClientError): + status_code = 405 + + def __init__( + self, + method: str, + allowed_methods: Iterable[str], + *, + headers: Optional[LooseHeaders] = None, + reason: Optional[str] = None, + body: Any = None, + text: Optional[str] = None, + content_type: Optional[str] = None, + ) -> None: + allow = ",".join(sorted(allowed_methods)) + super().__init__( + headers=headers, + reason=reason, + body=body, + text=text, + content_type=content_type, + ) + self.headers["Allow"] = allow + self.allowed_methods: Set[str] = set(allowed_methods) + self.method = method.upper() + + +class HTTPNotAcceptable(HTTPClientError): + status_code = 406 + + +class HTTPProxyAuthenticationRequired(HTTPClientError): + status_code = 407 + + +class HTTPRequestTimeout(HTTPClientError): + status_code = 408 + + +class HTTPConflict(HTTPClientError): + status_code = 409 + + +class HTTPGone(HTTPClientError): + status_code = 410 + + +class HTTPLengthRequired(HTTPClientError): + status_code 
= 411 + + +class HTTPPreconditionFailed(HTTPClientError): + status_code = 412 + + +class HTTPRequestEntityTooLarge(HTTPClientError): + status_code = 413 + + def __init__(self, max_size: float, actual_size: float, **kwargs: Any) -> None: + kwargs.setdefault( + "text", + "Maximum request body size {} exceeded, " + "actual body size {}".format(max_size, actual_size), + ) + super().__init__(**kwargs) + + +class HTTPRequestURITooLong(HTTPClientError): + status_code = 414 + + +class HTTPUnsupportedMediaType(HTTPClientError): + status_code = 415 + + +class HTTPRequestRangeNotSatisfiable(HTTPClientError): + status_code = 416 + + +class HTTPExpectationFailed(HTTPClientError): + status_code = 417 + + +class HTTPMisdirectedRequest(HTTPClientError): + status_code = 421 + + +class HTTPUnprocessableEntity(HTTPClientError): + status_code = 422 + + +class HTTPFailedDependency(HTTPClientError): + status_code = 424 + + +class HTTPUpgradeRequired(HTTPClientError): + status_code = 426 + + +class HTTPPreconditionRequired(HTTPClientError): + status_code = 428 + + +class HTTPTooManyRequests(HTTPClientError): + status_code = 429 + + +class HTTPRequestHeaderFieldsTooLarge(HTTPClientError): + status_code = 431 + + +class HTTPUnavailableForLegalReasons(HTTPClientError): + status_code = 451 + + def __init__( + self, + link: str, + *, + headers: Optional[LooseHeaders] = None, + reason: Optional[str] = None, + body: Any = None, + text: Optional[str] = None, + content_type: Optional[str] = None, + ) -> None: + super().__init__( + headers=headers, + reason=reason, + body=body, + text=text, + content_type=content_type, + ) + self.headers["Link"] = '<%s>; rel="blocked-by"' % link + self.link = link + + +############################################################ +# 5xx Server Error +############################################################ +# Response status codes beginning with the digit "5" indicate cases in +# which the server is aware that it has erred or is incapable of +# performing the request. Except when responding to a HEAD request, the +# server SHOULD include an entity containing an explanation of the error +# situation, and whether it is a temporary or permanent condition. User +# agents SHOULD display any included entity to the user. These response +# codes are applicable to any request method. 
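+ +# Illustrative usage sketch (added in this document, not aiohttp source): +# request handlers normally *raise* these exception classes rather than +# return them, e.g. +# +# from aiohttp import web +# +# async def handler(request: web.Request) -> web.Response: +# raise web.HTTPServiceUnavailable(text="try again later") +# +# Returning an HTTPException instance from a handler still works but emits +# a DeprecationWarning (see web_protocol.RequestHandler._handle_request).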
+ + +class HTTPServerError(HTTPError): + pass + + +class HTTPInternalServerError(HTTPServerError): + status_code = 500 + + +class HTTPNotImplemented(HTTPServerError): + status_code = 501 + + +class HTTPBadGateway(HTTPServerError): + status_code = 502 + + +class HTTPServiceUnavailable(HTTPServerError): + status_code = 503 + + +class HTTPGatewayTimeout(HTTPServerError): + status_code = 504 + + +class HTTPVersionNotSupported(HTTPServerError): + status_code = 505 + + +class HTTPVariantAlsoNegotiates(HTTPServerError): + status_code = 506 + + +class HTTPInsufficientStorage(HTTPServerError): + status_code = 507 + + +class HTTPNotExtended(HTTPServerError): + status_code = 510 + + +class HTTPNetworkAuthenticationRequired(HTTPServerError): + status_code = 511 diff --git a/env/lib/python3.8/site-packages/aiohttp/web_fileresponse.py b/env/lib/python3.8/site-packages/aiohttp/web_fileresponse.py new file mode 100644 index 00000000..f41ed3fd --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_fileresponse.py @@ -0,0 +1,288 @@ +import asyncio +import mimetypes +import os +import pathlib +import sys +from typing import ( # noqa + IO, + TYPE_CHECKING, + Any, + Awaitable, + Callable, + Iterator, + List, + Optional, + Tuple, + Union, + cast, +) + +from . import hdrs +from .abc import AbstractStreamWriter +from .helpers import ETAG_ANY, ETag +from .typedefs import Final, LooseHeaders +from .web_exceptions import ( + HTTPNotModified, + HTTPPartialContent, + HTTPPreconditionFailed, + HTTPRequestRangeNotSatisfiable, +) +from .web_response import StreamResponse + +__all__ = ("FileResponse",) + +if TYPE_CHECKING: # pragma: no cover + from .web_request import BaseRequest + + +_T_OnChunkSent = Optional[Callable[[bytes], Awaitable[None]]] + + +NOSENDFILE: Final[bool] = bool(os.environ.get("AIOHTTP_NOSENDFILE")) + + +class FileResponse(StreamResponse): + """A response object that can be used to send files.""" + + def __init__( + self, + path: Union[str, pathlib.Path], + chunk_size: int = 256 * 1024, + status: int = 200, + reason: Optional[str] = None, + headers: Optional[LooseHeaders] = None, + ) -> None: + super().__init__(status=status, reason=reason, headers=headers) + + if isinstance(path, str): + path = pathlib.Path(path) + + self._path = path + self._chunk_size = chunk_size + + async def _sendfile_fallback( + self, writer: AbstractStreamWriter, fobj: IO[Any], offset: int, count: int + ) -> AbstractStreamWriter: + # To keep memory usage low, fobj is transferred in chunks + # controlled by the constructor's chunk_size argument.
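+ # (Comment added in this document.) Each blocking file operation below + # runs in the default executor via run_in_executor, so file I/O never + # stalls the event loop; copying stops once roughly ``count`` bytes have + # been written and the writer has been drained.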
+ + chunk_size = self._chunk_size + loop = asyncio.get_event_loop() + + await loop.run_in_executor(None, fobj.seek, offset) + + chunk = await loop.run_in_executor(None, fobj.read, chunk_size) + while chunk: + await writer.write(chunk) + count = count - chunk_size + if count <= 0: + break + chunk = await loop.run_in_executor(None, fobj.read, min(chunk_size, count)) + + await writer.drain() + return writer + + async def _sendfile( + self, request: "BaseRequest", fobj: IO[Any], offset: int, count: int + ) -> AbstractStreamWriter: + writer = await super().prepare(request) + assert writer is not None + + if NOSENDFILE or sys.version_info < (3, 7) or self.compression: + return await self._sendfile_fallback(writer, fobj, offset, count) + + loop = request._loop + transport = request.transport + assert transport is not None + + try: + await loop.sendfile(transport, fobj, offset, count) + except NotImplementedError: + return await self._sendfile_fallback(writer, fobj, offset, count) + + await super().write_eof() + return writer + + @staticmethod + def _strong_etag_match(etag_value: str, etags: Tuple[ETag, ...]) -> bool: + if len(etags) == 1 and etags[0].value == ETAG_ANY: + return True + return any(etag.value == etag_value for etag in etags if not etag.is_weak) + + async def _not_modified( + self, request: "BaseRequest", etag_value: str, last_modified: float + ) -> Optional[AbstractStreamWriter]: + self.set_status(HTTPNotModified.status_code) + self._length_check = False + self.etag = etag_value # type: ignore[assignment] + self.last_modified = last_modified # type: ignore[assignment] + # Delete any Content-Length headers provided by user. HTTP 304 + # should always have empty response body + return await super().prepare(request) + + async def _precondition_failed( + self, request: "BaseRequest" + ) -> Optional[AbstractStreamWriter]: + self.set_status(HTTPPreconditionFailed.status_code) + self.content_length = 0 + return await super().prepare(request) + + async def prepare(self, request: "BaseRequest") -> Optional[AbstractStreamWriter]: + filepath = self._path + + gzip = False + if "gzip" in request.headers.get(hdrs.ACCEPT_ENCODING, ""): + gzip_path = filepath.with_name(filepath.name + ".gz") + + if gzip_path.is_file(): + filepath = gzip_path + gzip = True + + loop = asyncio.get_event_loop() + st: os.stat_result = await loop.run_in_executor(None, filepath.stat) + + etag_value = f"{st.st_mtime_ns:x}-{st.st_size:x}" + last_modified = st.st_mtime + + # https://tools.ietf.org/html/rfc7232#section-6 + ifmatch = request.if_match + if ifmatch is not None and not self._strong_etag_match(etag_value, ifmatch): + return await self._precondition_failed(request) + + unmodsince = request.if_unmodified_since + if ( + unmodsince is not None + and ifmatch is None + and st.st_mtime > unmodsince.timestamp() + ): + return await self._precondition_failed(request) + + ifnonematch = request.if_none_match + if ifnonematch is not None and self._strong_etag_match(etag_value, ifnonematch): + return await self._not_modified(request, etag_value, last_modified) + + modsince = request.if_modified_since + if ( + modsince is not None + and ifnonematch is None + and st.st_mtime <= modsince.timestamp() + ): + return await self._not_modified(request, etag_value, last_modified) + + if hdrs.CONTENT_TYPE not in self.headers: + ct, encoding = mimetypes.guess_type(str(filepath)) + if not ct: + ct = "application/octet-stream" + should_set_ct = True + else: + encoding = "gzip" if gzip else None + should_set_ct = False + + status = 
self._status + file_size = st.st_size + count = file_size + + start = None + + ifrange = request.if_range + if ifrange is None or st.st_mtime <= ifrange.timestamp(): + # If-Range header check: + # condition = cached date >= last modification date + # return 206 if True else 200. + # if False: + # Range header would not be processed, return 200 + # if True but Range header missing + # return 200 + try: + rng = request.http_range + start = rng.start + end = rng.stop + except ValueError: + # https://tools.ietf.org/html/rfc7233: + # A server generating a 416 (Range Not Satisfiable) response to + # a byte-range request SHOULD send a Content-Range header field + # with an unsatisfied-range value. + # The complete-length in a 416 response indicates the current + # length of the selected representation. + # + # Will do the same below. Many servers ignore this and do not + # send a Content-Range header with HTTP 416 + self.headers[hdrs.CONTENT_RANGE] = f"bytes */{file_size}" + self.set_status(HTTPRequestRangeNotSatisfiable.status_code) + return await super().prepare(request) + + # If a range request has been made, convert start, end slice + # notation into file pointer offset and count + if start is not None or end is not None: + if start < 0 and end is None: # return tail of file + start += file_size + if start < 0: + # if Range:bytes=-1000 in request header but file size + # is only 200, there would be trouble without this + start = 0 + count = file_size - start + else: + # rfc7233:If the last-byte-pos value is + # absent, or if the value is greater than or equal to + # the current length of the representation data, + # the byte range is interpreted as the remainder + # of the representation (i.e., the server replaces the + # value of last-byte-pos with a value that is one less than + # the current length of the selected representation). + count = ( + min(end if end is not None else file_size, file_size) - start + ) + + if start >= file_size: + # HTTP 416 should be returned in this case. + # + # According to https://tools.ietf.org/html/rfc7233: + # If a valid byte-range-set includes at least one + # byte-range-spec with a first-byte-pos that is less than + # the current length of the representation, or at least one + # suffix-byte-range-spec with a non-zero suffix-length, + # then the byte-range-set is satisfiable. Otherwise, the + # byte-range-set is unsatisfiable. + self.headers[hdrs.CONTENT_RANGE] = f"bytes */{file_size}" + self.set_status(HTTPRequestRangeNotSatisfiable.status_code) + return await super().prepare(request) + + status = HTTPPartialContent.status_code + # Even though you are sending the whole file, you should still + # return a HTTP 206 for a Range request. 
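+ # (Comment added in this document.) Illustrative mapping: a request + # carrying "Range: bytes=0-499" against a 10000-byte file yields + # start=0 and count=500, so the response prepared below is a 206 with + # "Content-Range: bytes 0-499/10000".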
+ self.set_status(status) + + if should_set_ct: + self.content_type = ct # type: ignore[assignment] + if encoding: + self.headers[hdrs.CONTENT_ENCODING] = encoding + if gzip: + self.headers[hdrs.VARY] = hdrs.ACCEPT_ENCODING + + self.etag = etag_value # type: ignore[assignment] + self.last_modified = st.st_mtime # type: ignore[assignment] + self.content_length = count + + self.headers[hdrs.ACCEPT_RANGES] = "bytes" + + real_start = cast(int, start) + + if status == HTTPPartialContent.status_code: + self.headers[hdrs.CONTENT_RANGE] = "bytes {}-{}/{}".format( + real_start, real_start + count - 1, file_size + ) + + # If we are sending 0 bytes calling sendfile() will throw a ValueError + if count == 0 or request.method == hdrs.METH_HEAD or self.status in [204, 304]: + return await super().prepare(request) + + fobj = await loop.run_in_executor(None, filepath.open, "rb") + if start: # be aware that start could be None or int=0 here. + offset = start + else: + offset = 0 + + try: + return await self._sendfile(request, fobj, offset, count) + finally: + await loop.run_in_executor(None, fobj.close) diff --git a/env/lib/python3.8/site-packages/aiohttp/web_log.py b/env/lib/python3.8/site-packages/aiohttp/web_log.py new file mode 100644 index 00000000..bc6e3b5a --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_log.py @@ -0,0 +1,208 @@ +import datetime +import functools +import logging +import os +import re +from collections import namedtuple +from typing import Any, Callable, Dict, Iterable, List, Tuple # noqa + +from .abc import AbstractAccessLogger +from .web_request import BaseRequest +from .web_response import StreamResponse + +KeyMethod = namedtuple("KeyMethod", "key method") + + +class AccessLogger(AbstractAccessLogger): + """Helper object to log access. + + Usage: + log = logging.getLogger("spam") + log_format = "%a %{User-Agent}i" + access_logger = AccessLogger(log, log_format) + access_logger.log(request, response, time) + + Format: + %% The percent sign + %a Remote IP-address (IP-address of proxy if using reverse proxy) + %t Time when the request was started to process + %P The process ID of the child that serviced the request + %r First line of request + %s Response status code + %b Size of response in bytes, including HTTP headers + %T Time taken to serve the request, in seconds + %Tf Time taken to serve the request, in seconds with floating fraction + in .06f format + %D Time taken to serve the request, in microseconds + %{FOO}i request.headers['FOO'] + %{FOO}o response.headers['FOO'] + %{FOO}e os.environ['FOO'] + + """ + + LOG_FORMAT_MAP = { + "a": "remote_address", + "t": "request_start_time", + "P": "process_id", + "r": "first_request_line", + "s": "response_status", + "b": "response_size", + "T": "request_time", + "Tf": "request_time_frac", + "D": "request_time_micro", + "i": "request_header", + "o": "response_header", + } + + LOG_FORMAT = '%a %t "%r" %s %b "%{Referer}i" "%{User-Agent}i"' + FORMAT_RE = re.compile(r"%(\{([A-Za-z0-9\-_]+)\}([ioe])|[atPrsbOD]|Tf?)") + CLEANUP_RE = re.compile(r"(%[^s])") + _FORMAT_CACHE: Dict[str, Tuple[str, List[KeyMethod]]] = {} + + def __init__(self, logger: logging.Logger, log_format: str = LOG_FORMAT) -> None: + """Initialise the logger. + + logger is a logger object to be used for logging. + log_format is a string with apache compatible log format description. 
+ + """ + super().__init__(logger, log_format=log_format) + + _compiled_format = AccessLogger._FORMAT_CACHE.get(log_format) + if not _compiled_format: + _compiled_format = self.compile_format(log_format) + AccessLogger._FORMAT_CACHE[log_format] = _compiled_format + + self._log_format, self._methods = _compiled_format + + def compile_format(self, log_format: str) -> Tuple[str, List[KeyMethod]]: + """Translate log_format into a form usable by modulo formatting + + All known atoms will be replaced with %s + Methods for formatting those atoms will also be added to + _methods in the appropriate order + + For example, if we have log_format = "%a %t", + this format will be translated to "%s %s" + Also, the contents of _methods will be + [self._format_a, self._format_t] + These methods will be called, and the results will be passed + to the translated string format. + + Each _format_* method receives 'args', which is the list of arguments + given to self.log + + The exceptions are the _format_e, _format_i and _format_o methods, which + also receive the key name (via functools.partial) + + """ + # list of (key, method) tuples, we don't use an OrderedDict as users + # can repeat the same key more than once + methods = list() + + for atom in self.FORMAT_RE.findall(log_format): + if atom[1] == "": + format_key1 = self.LOG_FORMAT_MAP[atom[0]] + m = getattr(AccessLogger, "_format_%s" % atom[0]) + key_method = KeyMethod(format_key1, m) + else: + format_key2 = (self.LOG_FORMAT_MAP[atom[2]], atom[1]) + m = getattr(AccessLogger, "_format_%s" % atom[2]) + key_method = KeyMethod(format_key2, functools.partial(m, atom[1])) + + methods.append(key_method) + + log_format = self.FORMAT_RE.sub(r"%s", log_format) + log_format = self.CLEANUP_RE.sub(r"%\1", log_format) + return log_format, methods + + @staticmethod + def _format_i( + key: str, request: BaseRequest, response: StreamResponse, time: float + ) -> str: + if request is None: + return "(no headers)" + + # suboptimal, make istr(key) once + return request.headers.get(key, "-") + + @staticmethod + def _format_o( + key: str, request: BaseRequest, response: StreamResponse, time: float + ) -> str: + # suboptimal, make istr(key) once + return response.headers.get(key, "-") + + @staticmethod + def _format_a(request: BaseRequest, response: StreamResponse, time: float) -> str: + if request is None: + return "-" + ip = request.remote + return ip if ip is not None else "-" + + @staticmethod + def _format_t(request: BaseRequest, response: StreamResponse, time: float) -> str: + now = datetime.datetime.utcnow() + start_time = now - datetime.timedelta(seconds=time) + return start_time.strftime("[%d/%b/%Y:%H:%M:%S +0000]") + + @staticmethod + def _format_P(request: BaseRequest, response: StreamResponse, time: float) -> str: + return "<%s>" % os.getpid() + + @staticmethod + def _format_r(request: BaseRequest, response: StreamResponse, time: float) -> str: + if request is None: + return "-" + return "{} {} HTTP/{}.{}".format( + request.method, + request.path_qs, + request.version.major, + request.version.minor, + ) + + @staticmethod + def _format_s(request: BaseRequest, response: StreamResponse, time: float) -> int: + return response.status + + @staticmethod + def _format_b(request: BaseRequest, response: StreamResponse, time: float) -> int: + return response.body_length + + @staticmethod + def _format_T(request: BaseRequest, response: StreamResponse, time: float) -> str: + return str(round(time)) + + @staticmethod + def _format_Tf(request: BaseRequest, response: StreamResponse, time: float) -> str: + return "%06f" %
time + + @staticmethod + def _format_D(request: BaseRequest, response: StreamResponse, time: float) -> str: + return str(round(time * 1000000)) + + def _format_line( + self, request: BaseRequest, response: StreamResponse, time: float + ) -> Iterable[Tuple[str, Callable[[BaseRequest, StreamResponse, float], str]]]: + return [(key, method(request, response, time)) for key, method in self._methods] + + def log(self, request: BaseRequest, response: StreamResponse, time: float) -> None: + try: + fmt_info = self._format_line(request, response, time) + + values = list() + extra = dict() + for key, value in fmt_info: + values.append(value) + + if key.__class__ is str: + extra[key] = value + else: + k1, k2 = key # type: ignore[misc] + dct = extra.get(k1, {}) # type: ignore[var-annotated,has-type] + dct[k2] = value # type: ignore[index,has-type] + extra[k1] = dct # type: ignore[has-type,assignment] + + self.logger.info(self._log_format % tuple(values), extra=extra) + except Exception: + self.logger.exception("Error in logging") diff --git a/env/lib/python3.8/site-packages/aiohttp/web_middlewares.py b/env/lib/python3.8/site-packages/aiohttp/web_middlewares.py new file mode 100644 index 00000000..fabcc449 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_middlewares.py @@ -0,0 +1,119 @@ +import re +from typing import TYPE_CHECKING, Awaitable, Callable, Tuple, Type, TypeVar + +from .typedefs import Handler +from .web_exceptions import HTTPPermanentRedirect, _HTTPMove +from .web_request import Request +from .web_response import StreamResponse +from .web_urldispatcher import SystemRoute + +__all__ = ( + "middleware", + "normalize_path_middleware", +) + +if TYPE_CHECKING: # pragma: no cover + from .web_app import Application + +_Func = TypeVar("_Func") + + +async def _check_request_resolves(request: Request, path: str) -> Tuple[bool, Request]: + alt_request = request.clone(rel_url=path) + + match_info = await request.app.router.resolve(alt_request) + alt_request._match_info = match_info + + if match_info.http_exception is None: + return True, alt_request + + return False, request + + +def middleware(f: _Func) -> _Func: + f.__middleware_version__ = 1 # type: ignore[attr-defined] + return f + + +_Middleware = Callable[[Request, Handler], Awaitable[StreamResponse]] + + +def normalize_path_middleware( + *, + append_slash: bool = True, + remove_slash: bool = False, + merge_slashes: bool = True, + redirect_class: Type[_HTTPMove] = HTTPPermanentRedirect, +) -> _Middleware: + """Factory for producing a middleware that normalizes the path of a request. + + Normalizing means: + - Add or remove a trailing slash to the path. + - Double slashes are replaced by one. + + The middleware returns as soon as it finds a path that resolves + correctly. The order if both merge and append/remove are enabled is + 1) merge slashes + 2) append/remove slash + 3) both merge slashes and append/remove slash. + If the path resolves with at least one of those conditions, it will + redirect to the new path. + + Only one of `append_slash` and `remove_slash` can be enabled. If both + are `True` the factory will raise an assertion error + + If `append_slash` is `True` the middleware will append a slash when + needed. If a resource is defined with trailing slash and the request + comes without it, it will append it automatically. + + If `remove_slash` is `True`, `append_slash` must be `False`. 
When enabled + the middleware will remove trailing slashes and redirect if the resource + is defined + + If merge_slashes is True, merge multiple consecutive slashes in the + path into one. + """ + correct_configuration = not (append_slash and remove_slash) + assert correct_configuration, "Cannot both remove and append slash" + + @middleware + async def impl(request: Request, handler: Handler) -> StreamResponse: + if isinstance(request.match_info.route, SystemRoute): + paths_to_check = [] + if "?" in request.raw_path: + path, query = request.raw_path.split("?", 1) + query = "?" + query + else: + query = "" + path = request.raw_path + + if merge_slashes: + paths_to_check.append(re.sub("//+", "/", path)) + if append_slash and not request.path.endswith("/"): + paths_to_check.append(path + "/") + if remove_slash and request.path.endswith("/"): + paths_to_check.append(path[:-1]) + if merge_slashes and append_slash: + paths_to_check.append(re.sub("//+", "/", path + "/")) + if merge_slashes and remove_slash: + merged_slashes = re.sub("//+", "/", path) + paths_to_check.append(merged_slashes[:-1]) + + for path in paths_to_check: + path = re.sub("^//+", "/", path) # SECURITY: GHSA-v6wp-4m6f-gcjg + resolves, request = await _check_request_resolves(request, path) + if resolves: + raise redirect_class(request.raw_path + query) + + return await handler(request) + + return impl + + +def _fix_request_current_app(app: "Application") -> _Middleware: + @middleware + async def impl(request: Request, handler: Handler) -> StreamResponse: + with request.match_info.set_current_app(app): + return await handler(request) + + return impl diff --git a/env/lib/python3.8/site-packages/aiohttp/web_protocol.py b/env/lib/python3.8/site-packages/aiohttp/web_protocol.py new file mode 100644 index 00000000..10a96080 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_protocol.py @@ -0,0 +1,679 @@ +import asyncio +import asyncio.streams +import traceback +import warnings +from collections import deque +from contextlib import suppress +from html import escape as html_escape +from http import HTTPStatus +from logging import Logger +from typing import ( + TYPE_CHECKING, + Any, + Awaitable, + Callable, + Deque, + Optional, + Sequence, + Tuple, + Type, + Union, + cast, +) + +import attr +import yarl + +from .abc import AbstractAccessLogger, AbstractStreamWriter +from .base_protocol import BaseProtocol +from .helpers import ceil_timeout +from .http import ( + HttpProcessingError, + HttpRequestParser, + HttpVersion10, + RawRequestMessage, + StreamWriter, +) +from .log import access_logger, server_logger +from .streams import EMPTY_PAYLOAD, StreamReader +from .tcp_helpers import tcp_keepalive +from .web_exceptions import HTTPException +from .web_log import AccessLogger +from .web_request import BaseRequest +from .web_response import Response, StreamResponse + +__all__ = ("RequestHandler", "RequestPayloadError", "PayloadAccessError") + +if TYPE_CHECKING: # pragma: no cover + from .web_server import Server + + +_RequestFactory = Callable[ + [ + RawRequestMessage, + StreamReader, + "RequestHandler", + AbstractStreamWriter, + "asyncio.Task[None]", + ], + BaseRequest, +] + +_RequestHandler = Callable[[BaseRequest], Awaitable[StreamResponse]] + +ERROR = RawRequestMessage( + "UNKNOWN", + "/", + HttpVersion10, + {}, # type: ignore[arg-type] + {}, # type: ignore[arg-type] + True, + None, + False, + False, + yarl.URL("/"), +) + + +class RequestPayloadError(Exception): + """Payload parsing error.""" + + +class 
PayloadAccessError(Exception): + """Payload was accessed after response was sent.""" + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class _ErrInfo: + status: int + exc: BaseException + message: str + + +_MsgType = Tuple[Union[RawRequestMessage, _ErrInfo], StreamReader] + + +class RequestHandler(BaseProtocol): + """HTTP protocol implementation. + + RequestHandler handles incoming HTTP request. It reads request line, + request headers and request payload and calls handle_request() method. + By default it always returns with 404 response. + + RequestHandler handles errors in incoming request, like bad + status line, bad headers or incomplete payload. If any error occurs, + connection gets closed. + + keepalive_timeout -- number of seconds before closing + keep-alive connection + + tcp_keepalive -- TCP keep-alive is on, default is on + + debug -- enable debug mode + + logger -- custom logger object + + access_log_class -- custom class for access_logger + + access_log -- custom logging object + + access_log_format -- access log format string + + loop -- Optional event loop + + max_line_size -- Optional maximum header line size + + max_field_size -- Optional maximum header field size + + max_headers -- Optional maximum header size + + """ + + KEEPALIVE_RESCHEDULE_DELAY = 1 + + __slots__ = ( + "_request_count", + "_keepalive", + "_manager", + "_request_handler", + "_request_factory", + "_tcp_keepalive", + "_keepalive_time", + "_keepalive_handle", + "_keepalive_timeout", + "_lingering_time", + "_messages", + "_message_tail", + "_waiter", + "_task_handler", + "_upgrade", + "_payload_parser", + "_request_parser", + "_reading_paused", + "logger", + "debug", + "access_log", + "access_logger", + "_close", + "_force_close", + "_current_request", + ) + + def __init__( + self, + manager: "Server", + *, + loop: asyncio.AbstractEventLoop, + keepalive_timeout: float = 75.0, # NGINX default is 75 secs + tcp_keepalive: bool = True, + logger: Logger = server_logger, + access_log_class: Type[AbstractAccessLogger] = AccessLogger, + access_log: Logger = access_logger, + access_log_format: str = AccessLogger.LOG_FORMAT, + debug: bool = False, + max_line_size: int = 8190, + max_headers: int = 32768, + max_field_size: int = 8190, + lingering_time: float = 10.0, + read_bufsize: int = 2**16, + auto_decompress: bool = True, + ): + super().__init__(loop) + + self._request_count = 0 + self._keepalive = False + self._current_request: Optional[BaseRequest] = None + self._manager: Optional[Server] = manager + self._request_handler: Optional[_RequestHandler] = manager.request_handler + self._request_factory: Optional[_RequestFactory] = manager.request_factory + + self._tcp_keepalive = tcp_keepalive + # placeholder to be replaced on keepalive timeout setup + self._keepalive_time = 0.0 + self._keepalive_handle: Optional[asyncio.Handle] = None + self._keepalive_timeout = keepalive_timeout + self._lingering_time = float(lingering_time) + + self._messages: Deque[_MsgType] = deque() + self._message_tail = b"" + + self._waiter: Optional[asyncio.Future[None]] = None + self._task_handler: Optional[asyncio.Task[None]] = None + + self._upgrade = False + self._payload_parser: Any = None + self._request_parser: Optional[HttpRequestParser] = HttpRequestParser( + self, + loop, + read_bufsize, + max_line_size=max_line_size, + max_field_size=max_field_size, + max_headers=max_headers, + payload_exception=RequestPayloadError, + auto_decompress=auto_decompress, + ) + + self.logger = logger + self.debug = debug + self.access_log = 
access_log + if access_log: + self.access_logger: Optional[AbstractAccessLogger] = access_log_class( + access_log, access_log_format + ) + else: + self.access_logger = None + + self._close = False + self._force_close = False + + def __repr__(self) -> str: + return "<{} {}>".format( + self.__class__.__name__, + "connected" if self.transport is not None else "disconnected", + ) + + @property + def keepalive_timeout(self) -> float: + return self._keepalive_timeout + + async def shutdown(self, timeout: Optional[float] = 15.0) -> None: + """Do worker process exit preparations. + + We need to clean up everything and stop accepting requests. + It is especially important for keep-alive connections. + """ + self._force_close = True + + if self._keepalive_handle is not None: + self._keepalive_handle.cancel() + + if self._waiter: + self._waiter.cancel() + + # wait for handlers + with suppress(asyncio.CancelledError, asyncio.TimeoutError): + async with ceil_timeout(timeout): + if self._current_request is not None: + self._current_request._cancel(asyncio.CancelledError()) + + if self._task_handler is not None and not self._task_handler.done(): + await self._task_handler + + # force-close non-idle handler + if self._task_handler is not None: + self._task_handler.cancel() + + if self.transport is not None: + self.transport.close() + self.transport = None + + def connection_made(self, transport: asyncio.BaseTransport) -> None: + super().connection_made(transport) + + real_transport = cast(asyncio.Transport, transport) + if self._tcp_keepalive: + tcp_keepalive(real_transport) + + self._task_handler = self._loop.create_task(self.start()) + assert self._manager is not None + self._manager.connection_made(self, real_transport) + + def connection_lost(self, exc: Optional[BaseException]) -> None: + if self._manager is None: + return + self._manager.connection_lost(self, exc) + + super().connection_lost(exc) + + self._manager = None + self._force_close = True + self._request_factory = None + self._request_handler = None + self._request_parser = None + + if self._keepalive_handle is not None: + self._keepalive_handle.cancel() + + if self._current_request is not None: + if exc is None: + exc = ConnectionResetError("Connection lost") + self._current_request._cancel(exc) + + if self._waiter is not None: + self._waiter.cancel() + + self._task_handler = None + + if self._payload_parser is not None: + self._payload_parser.feed_eof() + self._payload_parser = None + + def set_parser(self, parser: Any) -> None: + # Actual type is WebReader + assert self._payload_parser is None + + self._payload_parser = parser + + if self._message_tail: + self._payload_parser.feed_data(self._message_tail) + self._message_tail = b"" + + def eof_received(self) -> None: + pass + + def data_received(self, data: bytes) -> None: + if self._force_close or self._close: + return + # parse http messages + messages: Sequence[_MsgType] + if self._payload_parser is None and not self._upgrade: + assert self._request_parser is not None + try: + messages, upgraded, tail = self._request_parser.feed_data(data) + except HttpProcessingError as exc: + messages = [ + (_ErrInfo(status=400, exc=exc, message=exc.message), EMPTY_PAYLOAD) + ] + upgraded = False + tail = b"" + + for msg, payload in messages or (): + self._request_count += 1 + self._messages.append((msg, payload)) + + waiter = self._waiter + if messages and waiter is not None and not waiter.done(): + # don't set result twice + waiter.set_result(None) + + self._upgrade = upgraded + if upgraded and 
tail: + self._message_tail = tail + + # no parser, just store + elif self._payload_parser is None and self._upgrade and data: + self._message_tail += data + + # feed payload + elif data: + eof, tail = self._payload_parser.feed_data(data) + if eof: + self.close() + + def keep_alive(self, val: bool) -> None: + """Set keep-alive connection mode. + + :param bool val: new state. + """ + self._keepalive = val + if self._keepalive_handle: + self._keepalive_handle.cancel() + self._keepalive_handle = None + + def close(self) -> None: + """Close connection. + + Stop accepting new pipelining messages and close + connection when handlers done processing messages. + """ + self._close = True + if self._waiter: + self._waiter.cancel() + + def force_close(self) -> None: + """Forcefully close connection.""" + self._force_close = True + if self._waiter: + self._waiter.cancel() + if self.transport is not None: + self.transport.close() + self.transport = None + + def log_access( + self, request: BaseRequest, response: StreamResponse, time: float + ) -> None: + if self.access_logger is not None: + self.access_logger.log(request, response, self._loop.time() - time) + + def log_debug(self, *args: Any, **kw: Any) -> None: + if self.debug: + self.logger.debug(*args, **kw) + + def log_exception(self, *args: Any, **kw: Any) -> None: + self.logger.exception(*args, **kw) + + def _process_keepalive(self) -> None: + if self._force_close or not self._keepalive: + return + + next = self._keepalive_time + self._keepalive_timeout + + # handler in idle state + if self._waiter: + if self._loop.time() > next: + self.force_close() + return + + # not all request handlers are done, + # reschedule itself to next second + self._keepalive_handle = self._loop.call_later( + self.KEEPALIVE_RESCHEDULE_DELAY, self._process_keepalive + ) + + async def _handle_request( + self, + request: BaseRequest, + start_time: float, + request_handler: Callable[[BaseRequest], Awaitable[StreamResponse]], + ) -> Tuple[StreamResponse, bool]: + assert self._request_handler is not None + try: + try: + self._current_request = request + resp = await request_handler(request) + finally: + self._current_request = None + except HTTPException as exc: + resp = exc + reset = await self.finish_response(request, resp, start_time) + except asyncio.CancelledError: + raise + except asyncio.TimeoutError as exc: + self.log_debug("Request handler timed out.", exc_info=exc) + resp = self.handle_error(request, 504) + reset = await self.finish_response(request, resp, start_time) + except Exception as exc: + resp = self.handle_error(request, 500, exc) + reset = await self.finish_response(request, resp, start_time) + else: + # Deprecation warning (See #2415) + if getattr(resp, "__http_exception__", False): + warnings.warn( + "returning HTTPException object is deprecated " + "(#2415) and will be removed, " + "please raise the exception instead", + DeprecationWarning, + ) + + reset = await self.finish_response(request, resp, start_time) + + return resp, reset + + async def start(self) -> None: + """Process incoming request. + + It reads request line, request headers and request payload, then + calls handle_request() method. Subclass has to override + handle_request(). start() handles various exceptions in request + or response handling. Connection is being closed always unless + keep_alive(True) specified. 
+ """ + loop = self._loop + handler = self._task_handler + assert handler is not None + manager = self._manager + assert manager is not None + keepalive_timeout = self._keepalive_timeout + resp = None + assert self._request_factory is not None + assert self._request_handler is not None + + while not self._force_close: + if not self._messages: + try: + # wait for next request + self._waiter = loop.create_future() + await self._waiter + except asyncio.CancelledError: + break + finally: + self._waiter = None + + message, payload = self._messages.popleft() + + start = loop.time() + + manager.requests_count += 1 + writer = StreamWriter(self, loop) + if isinstance(message, _ErrInfo): + # make request_factory work + request_handler = self._make_error_handler(message) + message = ERROR + else: + request_handler = self._request_handler + + request = self._request_factory(message, payload, self, writer, handler) + try: + # a new task is used for copy context vars (#3406) + task = self._loop.create_task( + self._handle_request(request, start, request_handler) + ) + try: + resp, reset = await task + except (asyncio.CancelledError, ConnectionError): + self.log_debug("Ignored premature client disconnection") + break + + # Drop the processed task from asyncio.Task.all_tasks() early + del task + if reset: + self.log_debug("Ignored premature client disconnection 2") + break + + # notify server about keep-alive + self._keepalive = bool(resp.keep_alive) + + # check payload + if not payload.is_eof(): + lingering_time = self._lingering_time + if not self._force_close and lingering_time: + self.log_debug( + "Start lingering close timer for %s sec.", lingering_time + ) + + now = loop.time() + end_t = now + lingering_time + + with suppress(asyncio.TimeoutError, asyncio.CancelledError): + while not payload.is_eof() and now < end_t: + async with ceil_timeout(end_t - now): + # read and ignore + await payload.readany() + now = loop.time() + + # if payload still uncompleted + if not payload.is_eof() and not self._force_close: + self.log_debug("Uncompleted request.") + self.close() + + payload.set_exception(PayloadAccessError()) + + except asyncio.CancelledError: + self.log_debug("Ignored premature client disconnection ") + break + except RuntimeError as exc: + if self.debug: + self.log_exception("Unhandled runtime exception", exc_info=exc) + self.force_close() + except Exception as exc: + self.log_exception("Unhandled exception", exc_info=exc) + self.force_close() + finally: + if self.transport is None and resp is not None: + self.log_debug("Ignored premature client disconnection.") + elif not self._force_close: + if self._keepalive and not self._close: + # start keep-alive timer + if keepalive_timeout is not None: + now = self._loop.time() + self._keepalive_time = now + if self._keepalive_handle is None: + self._keepalive_handle = loop.call_at( + now + keepalive_timeout, self._process_keepalive + ) + else: + break + + # remove handler, close transport if no handlers left + if not self._force_close: + self._task_handler = None + if self.transport is not None: + self.transport.close() + + async def finish_response( + self, request: BaseRequest, resp: StreamResponse, start_time: float + ) -> bool: + """Prepare the response and write_eof, then log access. + + This has to + be called within the context of any exception so the access logger + can get exception information. Returns True if the client disconnects + prematurely. 
+ """ + if self._request_parser is not None: + self._request_parser.set_upgraded(False) + self._upgrade = False + if self._message_tail: + self._request_parser.feed_data(self._message_tail) + self._message_tail = b"" + try: + prepare_meth = resp.prepare + except AttributeError: + if resp is None: + raise RuntimeError("Missing return " "statement on request handler") + else: + raise RuntimeError( + "Web-handler should return " + "a response instance, " + "got {!r}".format(resp) + ) + try: + await prepare_meth(request) + await resp.write_eof() + except ConnectionError: + self.log_access(request, resp, start_time) + return True + else: + self.log_access(request, resp, start_time) + return False + + def handle_error( + self, + request: BaseRequest, + status: int = 500, + exc: Optional[BaseException] = None, + message: Optional[str] = None, + ) -> StreamResponse: + """Handle errors. + + Returns HTTP response with specific status code. Logs additional + information. It always closes current connection. + """ + self.log_exception("Error handling request", exc_info=exc) + + # some data already got sent, connection is broken + if request.writer.output_size > 0: + raise ConnectionError( + "Response is sent already, cannot send another response " + "with the error message" + ) + + ct = "text/plain" + if status == HTTPStatus.INTERNAL_SERVER_ERROR: + title = "{0.value} {0.phrase}".format(HTTPStatus.INTERNAL_SERVER_ERROR) + msg = HTTPStatus.INTERNAL_SERVER_ERROR.description + tb = None + if self.debug: + with suppress(Exception): + tb = traceback.format_exc() + + if "text/html" in request.headers.get("Accept", ""): + if tb: + tb = html_escape(tb) + msg = f"
<h2>Traceback:</h2>\n<pre>{tb}</pre>"
+                message = (
+                    "<html><head>"
+                    "<title>{title}</title>"
+                    "</head><body>\n<h1>{title}</h1>
" + "\n{msg}\n\n" + ).format(title=title, msg=msg) + ct = "text/html" + else: + if tb: + msg = tb + message = title + "\n\n" + msg + + resp = Response(status=status, text=message, content_type=ct) + resp.force_close() + + return resp + + def _make_error_handler( + self, err_info: _ErrInfo + ) -> Callable[[BaseRequest], Awaitable[StreamResponse]]: + async def handler(request: BaseRequest) -> StreamResponse: + return self.handle_error( + request, err_info.status, err_info.exc, err_info.message + ) + + return handler diff --git a/env/lib/python3.8/site-packages/aiohttp/web_request.py b/env/lib/python3.8/site-packages/aiohttp/web_request.py new file mode 100644 index 00000000..c02ebfcd --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_request.py @@ -0,0 +1,882 @@ +import asyncio +import datetime +import io +import re +import socket +import string +import tempfile +import types +import warnings +from http.cookies import SimpleCookie +from types import MappingProxyType +from typing import ( + TYPE_CHECKING, + Any, + Dict, + Iterator, + Mapping, + MutableMapping, + Optional, + Pattern, + Tuple, + Union, + cast, +) +from urllib.parse import parse_qsl + +import attr +from multidict import CIMultiDict, CIMultiDictProxy, MultiDict, MultiDictProxy +from yarl import URL + +from . import hdrs +from .abc import AbstractStreamWriter +from .helpers import ( + DEBUG, + ETAG_ANY, + LIST_QUOTED_ETAG_RE, + ChainMapProxy, + ETag, + HeadersMixin, + parse_http_date, + reify, + sentinel, +) +from .http_parser import RawRequestMessage +from .http_writer import HttpVersion +from .multipart import BodyPartReader, MultipartReader +from .streams import EmptyStreamReader, StreamReader +from .typedefs import ( + DEFAULT_JSON_DECODER, + Final, + JSONDecoder, + LooseHeaders, + RawHeaders, + StrOrURL, +) +from .web_exceptions import HTTPRequestEntityTooLarge +from .web_response import StreamResponse + +__all__ = ("BaseRequest", "FileField", "Request") + + +if TYPE_CHECKING: # pragma: no cover + from .web_app import Application + from .web_protocol import RequestHandler + from .web_urldispatcher import UrlMappingMatchInfo + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class FileField: + name: str + filename: str + file: io.BufferedReader + content_type: str + headers: "CIMultiDictProxy[str]" + + +_TCHAR: Final[str] = string.digits + string.ascii_letters + r"!#$%&'*+.^_`|~-" +# '-' at the end to prevent interpretation as range in a char class + +_TOKEN: Final[str] = rf"[{_TCHAR}]+" + +_QDTEXT: Final[str] = r"[{}]".format( + r"".join(chr(c) for c in (0x09, 0x20, 0x21) + tuple(range(0x23, 0x7F))) +) +# qdtext includes 0x5C to escape 0x5D ('\]') +# qdtext excludes obs-text (because obsoleted, and encoding not specified) + +_QUOTED_PAIR: Final[str] = r"\\[\t !-~]" + +_QUOTED_STRING: Final[str] = r'"(?:{quoted_pair}|{qdtext})*"'.format( + qdtext=_QDTEXT, quoted_pair=_QUOTED_PAIR +) + +_FORWARDED_PAIR: Final[ + str +] = r"({token})=({token}|{quoted_string})(:\d{{1,4}})?".format( + token=_TOKEN, quoted_string=_QUOTED_STRING +) + +_QUOTED_PAIR_REPLACE_RE: Final[Pattern[str]] = re.compile(r"\\([\t !-~])") +# same pattern as _QUOTED_PAIR but contains a capture group + +_FORWARDED_PAIR_RE: Final[Pattern[str]] = re.compile(_FORWARDED_PAIR) + +############################################################ +# HTTP Request +############################################################ + + +class BaseRequest(MutableMapping[str, Any], HeadersMixin): + + POST_METHODS = { + hdrs.METH_PATCH, + hdrs.METH_POST, + 
hdrs.METH_PUT, + hdrs.METH_TRACE, + hdrs.METH_DELETE, + } + + ATTRS = HeadersMixin.ATTRS | frozenset( + [ + "_message", + "_protocol", + "_payload_writer", + "_payload", + "_headers", + "_method", + "_version", + "_rel_url", + "_post", + "_read_bytes", + "_state", + "_cache", + "_task", + "_client_max_size", + "_loop", + "_transport_sslcontext", + "_transport_peername", + ] + ) + + def __init__( + self, + message: RawRequestMessage, + payload: StreamReader, + protocol: "RequestHandler", + payload_writer: AbstractStreamWriter, + task: "asyncio.Task[None]", + loop: asyncio.AbstractEventLoop, + *, + client_max_size: int = 1024**2, + state: Optional[Dict[str, Any]] = None, + scheme: Optional[str] = None, + host: Optional[str] = None, + remote: Optional[str] = None, + ) -> None: + if state is None: + state = {} + self._message = message + self._protocol = protocol + self._payload_writer = payload_writer + + self._payload = payload + self._headers = message.headers + self._method = message.method + self._version = message.version + self._cache: Dict[str, Any] = {} + url = message.url + if url.is_absolute(): + # absolute URL is given, + # override auto-calculating url, host, and scheme + # all other properties should be good + self._cache["url"] = url + self._cache["host"] = url.host + self._cache["scheme"] = url.scheme + self._rel_url = url.relative() + else: + self._rel_url = message.url + self._post: Optional[MultiDictProxy[Union[str, bytes, FileField]]] = None + self._read_bytes: Optional[bytes] = None + + self._state = state + self._task = task + self._client_max_size = client_max_size + self._loop = loop + + transport = self._protocol.transport + assert transport is not None + self._transport_sslcontext = transport.get_extra_info("sslcontext") + self._transport_peername = transport.get_extra_info("peername") + + if scheme is not None: + self._cache["scheme"] = scheme + if host is not None: + self._cache["host"] = host + if remote is not None: + self._cache["remote"] = remote + + def clone( + self, + *, + method: str = sentinel, + rel_url: StrOrURL = sentinel, + headers: LooseHeaders = sentinel, + scheme: str = sentinel, + host: str = sentinel, + remote: str = sentinel, + ) -> "BaseRequest": + """Clone itself with replacement some attributes. + + Creates and returns a new instance of Request object. If no parameters + are given, an exact copy is returned. If a parameter is not passed, it + will reuse the one from the current request object. 
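+
+        Illustrative sketch (the address below is a hypothetical value)::
+
+            proxied = request.clone(remote="203.0.113.7")
+            assert proxied.remote == "203.0.113.7"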
+ """ + if self._read_bytes: + raise RuntimeError("Cannot clone request " "after reading its content") + + dct: Dict[str, Any] = {} + if method is not sentinel: + dct["method"] = method + if rel_url is not sentinel: + new_url = URL(rel_url) + dct["url"] = new_url + dct["path"] = str(new_url) + if headers is not sentinel: + # a copy semantic + dct["headers"] = CIMultiDictProxy(CIMultiDict(headers)) + dct["raw_headers"] = tuple( + (k.encode("utf-8"), v.encode("utf-8")) for k, v in headers.items() + ) + + message = self._message._replace(**dct) + + kwargs = {} + if scheme is not sentinel: + kwargs["scheme"] = scheme + if host is not sentinel: + kwargs["host"] = host + if remote is not sentinel: + kwargs["remote"] = remote + + return self.__class__( + message, + self._payload, + self._protocol, + self._payload_writer, + self._task, + self._loop, + client_max_size=self._client_max_size, + state=self._state.copy(), + **kwargs, + ) + + @property + def task(self) -> "asyncio.Task[None]": + return self._task + + @property + def protocol(self) -> "RequestHandler": + return self._protocol + + @property + def transport(self) -> Optional[asyncio.Transport]: + if self._protocol is None: + return None + return self._protocol.transport + + @property + def writer(self) -> AbstractStreamWriter: + return self._payload_writer + + @reify + def message(self) -> RawRequestMessage: + warnings.warn("Request.message is deprecated", DeprecationWarning, stacklevel=3) + return self._message + + @reify + def rel_url(self) -> URL: + return self._rel_url + + @reify + def loop(self) -> asyncio.AbstractEventLoop: + warnings.warn( + "request.loop property is deprecated", DeprecationWarning, stacklevel=2 + ) + return self._loop + + # MutableMapping API + + def __getitem__(self, key: str) -> Any: + return self._state[key] + + def __setitem__(self, key: str, value: Any) -> None: + self._state[key] = value + + def __delitem__(self, key: str) -> None: + del self._state[key] + + def __len__(self) -> int: + return len(self._state) + + def __iter__(self) -> Iterator[str]: + return iter(self._state) + + ######## + + @reify + def secure(self) -> bool: + """A bool indicating if the request is handled with SSL.""" + return self.scheme == "https" + + @reify + def forwarded(self) -> Tuple[Mapping[str, str], ...]: + """A tuple containing all parsed Forwarded header(s). + + Makes an effort to parse Forwarded headers as specified by RFC 7239: + + - It adds one (immutable) dictionary per Forwarded 'field-value', ie + per proxy. The element corresponds to the data in the Forwarded + field-value added by the first proxy encountered by the client. Each + subsequent item corresponds to those added by later proxies. + - It checks that every value has valid syntax in general as specified + in section 4: either a 'token' or a 'quoted-string'. + - It un-escapes found escape sequences. + - It does NOT validate 'by' and 'for' contents as specified in section + 6. + - It does NOT validate 'host' contents (Host ABNF). + - It does NOT validate 'proto' contents for valid URI scheme names. 
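+
+        For illustration, the RFC 7239 example header
+        ``Forwarded: for=192.0.2.60;proto=http;by=203.0.113.43`` yields a
+        one-element tuple whose mapping contains ``for='192.0.2.60'``,
+        ``proto='http'`` and ``by='203.0.113.43'``.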
+ + Returns a tuple containing one or more immutable dicts + """ + elems = [] + for field_value in self._message.headers.getall(hdrs.FORWARDED, ()): + length = len(field_value) + pos = 0 + need_separator = False + elem: Dict[str, str] = {} + elems.append(types.MappingProxyType(elem)) + while 0 <= pos < length: + match = _FORWARDED_PAIR_RE.match(field_value, pos) + if match is not None: # got a valid forwarded-pair + if need_separator: + # bad syntax here, skip to next comma + pos = field_value.find(",", pos) + else: + name, value, port = match.groups() + if value[0] == '"': + # quoted string: remove quotes and unescape + value = _QUOTED_PAIR_REPLACE_RE.sub(r"\1", value[1:-1]) + if port: + value += port + elem[name.lower()] = value + pos += len(match.group(0)) + need_separator = True + elif field_value[pos] == ",": # next forwarded-element + need_separator = False + elem = {} + elems.append(types.MappingProxyType(elem)) + pos += 1 + elif field_value[pos] == ";": # next forwarded-pair + need_separator = False + pos += 1 + elif field_value[pos] in " \t": + # Allow whitespace even between forwarded-pairs, though + # RFC 7239 doesn't. This simplifies code and is in line + # with Postel's law. + pos += 1 + else: + # bad syntax here, skip to next comma + pos = field_value.find(",", pos) + return tuple(elems) + + @reify + def scheme(self) -> str: + """A string representing the scheme of the request. + + Hostname is resolved in this order: + + - overridden value by .clone(scheme=new_scheme) call. + - type of connection to peer: HTTPS if socket is SSL, HTTP otherwise. + + 'http' or 'https'. + """ + if self._transport_sslcontext: + return "https" + else: + return "http" + + @reify + def method(self) -> str: + """Read only property for getting HTTP method. + + The value is upper-cased str like 'GET', 'POST', 'PUT' etc. + """ + return self._method + + @reify + def version(self) -> HttpVersion: + """Read only property for getting HTTP version of request. + + Returns aiohttp.protocol.HttpVersion instance. + """ + return self._version + + @reify + def host(self) -> str: + """Hostname of the request. + + Hostname is resolved in this order: + + - overridden value by .clone(host=new_host) call. + - HOST HTTP header + - socket.getfqdn() value + """ + host = self._message.headers.get(hdrs.HOST) + if host is not None: + return host + return socket.getfqdn() + + @reify + def remote(self) -> Optional[str]: + """Remote IP of client initiated HTTP request. + + The IP is resolved in this order: + + - overridden value by .clone(remote=new_remote) call. + - peername of opened socket + """ + if self._transport_peername is None: + return None + if isinstance(self._transport_peername, (list, tuple)): + return str(self._transport_peername[0]) + return str(self._transport_peername) + + @reify + def url(self) -> URL: + url = URL.build(scheme=self.scheme, host=self.host) + return url.join(self._rel_url) + + @reify + def path(self) -> str: + """The URL including *PATH INFO* without the host or scheme. + + E.g., ``/app/blog`` + """ + return self._rel_url.path + + @reify + def path_qs(self) -> str: + """The URL including PATH_INFO and the query string. + + E.g, /app/blog?id=10 + """ + return str(self._rel_url) + + @reify + def raw_path(self) -> str: + """The URL including raw *PATH INFO* without the host or scheme. 
+ + Warning, the path is unquoted and may contains non valid URL characters + + E.g., ``/my%2Fpath%7Cwith%21some%25strange%24characters`` + """ + return self._message.path + + @reify + def query(self) -> "MultiDictProxy[str]": + """A multidict with all the variables in the query string.""" + return MultiDictProxy(self._rel_url.query) + + @reify + def query_string(self) -> str: + """The query string in the URL. + + E.g., id=10 + """ + return self._rel_url.query_string + + @reify + def headers(self) -> "CIMultiDictProxy[str]": + """A case-insensitive multidict proxy with all headers.""" + return self._headers + + @reify + def raw_headers(self) -> RawHeaders: + """A sequence of pairs for all headers.""" + return self._message.raw_headers + + @reify + def if_modified_since(self) -> Optional[datetime.datetime]: + """The value of If-Modified-Since HTTP header, or None. + + This header is represented as a `datetime` object. + """ + return parse_http_date(self.headers.get(hdrs.IF_MODIFIED_SINCE)) + + @reify + def if_unmodified_since(self) -> Optional[datetime.datetime]: + """The value of If-Unmodified-Since HTTP header, or None. + + This header is represented as a `datetime` object. + """ + return parse_http_date(self.headers.get(hdrs.IF_UNMODIFIED_SINCE)) + + @staticmethod + def _etag_values(etag_header: str) -> Iterator[ETag]: + """Extract `ETag` objects from raw header.""" + if etag_header == ETAG_ANY: + yield ETag( + is_weak=False, + value=ETAG_ANY, + ) + else: + for match in LIST_QUOTED_ETAG_RE.finditer(etag_header): + is_weak, value, garbage = match.group(2, 3, 4) + # Any symbol captured by 4th group means + # that the following sequence is invalid. + if garbage: + break + + yield ETag( + is_weak=bool(is_weak), + value=value, + ) + + @classmethod + def _if_match_or_none_impl( + cls, header_value: Optional[str] + ) -> Optional[Tuple[ETag, ...]]: + if not header_value: + return None + + return tuple(cls._etag_values(header_value)) + + @reify + def if_match(self) -> Optional[Tuple[ETag, ...]]: + """The value of If-Match HTTP header, or None. + + This header is represented as a `tuple` of `ETag` objects. + """ + return self._if_match_or_none_impl(self.headers.get(hdrs.IF_MATCH)) + + @reify + def if_none_match(self) -> Optional[Tuple[ETag, ...]]: + """The value of If-None-Match HTTP header, or None. + + This header is represented as a `tuple` of `ETag` objects. + """ + return self._if_match_or_none_impl(self.headers.get(hdrs.IF_NONE_MATCH)) + + @reify + def if_range(self) -> Optional[datetime.datetime]: + """The value of If-Range HTTP header, or None. + + This header is represented as a `datetime` object. + """ + return parse_http_date(self.headers.get(hdrs.IF_RANGE)) + + @reify + def keep_alive(self) -> bool: + """Is keepalive enabled by client?""" + return not self._message.should_close + + @reify + def cookies(self) -> Mapping[str, str]: + """Return request cookies. + + A read-only dictionary-like object. + """ + raw = self.headers.get(hdrs.COOKIE, "") + parsed: SimpleCookie[str] = SimpleCookie(raw) + return MappingProxyType({key: val.value for key, val in parsed.items()}) + + @reify + def http_range(self) -> slice: + """The content of Range HTTP header. + + Return a slice instance. 
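+
+        For illustration: ``Range: bytes=0-499`` maps to ``slice(0, 500, 1)``
+        (the header's end is inclusive, a slice end is exclusive), while
+        ``Range: bytes=-500`` maps to ``slice(-500, None, 1)``.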
+ + """ + rng = self._headers.get(hdrs.RANGE) + start, end = None, None + if rng is not None: + try: + pattern = r"^bytes=(\d*)-(\d*)$" + start, end = re.findall(pattern, rng)[0] + except IndexError: # pattern was not found in header + raise ValueError("range not in acceptable format") + + end = int(end) if end else None + start = int(start) if start else None + + if start is None and end is not None: + # end with no start is to return tail of content + start = -end + end = None + + if start is not None and end is not None: + # end is inclusive in range header, exclusive for slice + end += 1 + + if start >= end: + raise ValueError("start cannot be after end") + + if start is end is None: # No valid range supplied + raise ValueError("No start or end of range specified") + + return slice(start, end, 1) + + @reify + def content(self) -> StreamReader: + """Return raw payload stream.""" + return self._payload + + @property + def has_body(self) -> bool: + """Return True if request's HTTP BODY can be read, False otherwise.""" + warnings.warn( + "Deprecated, use .can_read_body #2005", DeprecationWarning, stacklevel=2 + ) + return not self._payload.at_eof() + + @property + def can_read_body(self) -> bool: + """Return True if request's HTTP BODY can be read, False otherwise.""" + return not self._payload.at_eof() + + @reify + def body_exists(self) -> bool: + """Return True if request has HTTP BODY, False otherwise.""" + return type(self._payload) is not EmptyStreamReader + + async def release(self) -> None: + """Release request. + + Eat unread part of HTTP BODY if present. + """ + while not self._payload.at_eof(): + await self._payload.readany() + + async def read(self) -> bytes: + """Read request body if present. + + Returns bytes object with full request content. 
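+
+        Raises HTTPRequestEntityTooLarge once the accumulated body
+        reaches ``client_max_size``. Typical usage (sketch)::
+
+            body = await request.read()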
+ """ + if self._read_bytes is None: + body = bytearray() + while True: + chunk = await self._payload.readany() + body.extend(chunk) + if self._client_max_size: + body_size = len(body) + if body_size >= self._client_max_size: + raise HTTPRequestEntityTooLarge( + max_size=self._client_max_size, actual_size=body_size + ) + if not chunk: + break + self._read_bytes = bytes(body) + return self._read_bytes + + async def text(self) -> str: + """Return BODY as text using encoding from .charset.""" + bytes_body = await self.read() + encoding = self.charset or "utf-8" + return bytes_body.decode(encoding) + + async def json(self, *, loads: JSONDecoder = DEFAULT_JSON_DECODER) -> Any: + """Return BODY as JSON.""" + body = await self.text() + return loads(body) + + async def multipart(self) -> MultipartReader: + """Return async iterator to process BODY as multipart.""" + return MultipartReader(self._headers, self._payload) + + async def post(self) -> "MultiDictProxy[Union[str, bytes, FileField]]": + """Return POST parameters.""" + if self._post is not None: + return self._post + if self._method not in self.POST_METHODS: + self._post = MultiDictProxy(MultiDict()) + return self._post + + content_type = self.content_type + if content_type not in ( + "", + "application/x-www-form-urlencoded", + "multipart/form-data", + ): + self._post = MultiDictProxy(MultiDict()) + return self._post + + out: MultiDict[Union[str, bytes, FileField]] = MultiDict() + + if content_type == "multipart/form-data": + multipart = await self.multipart() + max_size = self._client_max_size + + field = await multipart.next() + while field is not None: + size = 0 + field_ct = field.headers.get(hdrs.CONTENT_TYPE) + + if isinstance(field, BodyPartReader): + assert field.name is not None + + # Note that according to RFC 7578, the Content-Type header + # is optional, even for files, so we can't assume it's + # present. 
+ # https://tools.ietf.org/html/rfc7578#section-4.4 + if field.filename: + # store file in temp file + tmp = tempfile.TemporaryFile() + chunk = await field.read_chunk(size=2**16) + while chunk: + chunk = field.decode(chunk) + tmp.write(chunk) + size += len(chunk) + if 0 < max_size < size: + tmp.close() + raise HTTPRequestEntityTooLarge( + max_size=max_size, actual_size=size + ) + chunk = await field.read_chunk(size=2**16) + tmp.seek(0) + + if field_ct is None: + field_ct = "application/octet-stream" + + ff = FileField( + field.name, + field.filename, + cast(io.BufferedReader, tmp), + field_ct, + field.headers, + ) + out.add(field.name, ff) + else: + # deal with ordinary data + value = await field.read(decode=True) + if field_ct is None or field_ct.startswith("text/"): + charset = field.get_charset(default="utf-8") + out.add(field.name, value.decode(charset)) + else: + out.add(field.name, value) + size += len(value) + if 0 < max_size < size: + raise HTTPRequestEntityTooLarge( + max_size=max_size, actual_size=size + ) + else: + raise ValueError( + "To decode nested multipart you need " "to use custom reader", + ) + + field = await multipart.next() + else: + data = await self.read() + if data: + charset = self.charset or "utf-8" + out.extend( + parse_qsl( + data.rstrip().decode(charset), + keep_blank_values=True, + encoding=charset, + ) + ) + + self._post = MultiDictProxy(out) + return self._post + + def get_extra_info(self, name: str, default: Any = None) -> Any: + """Extra info from protocol transport""" + protocol = self._protocol + if protocol is None: + return default + + transport = protocol.transport + if transport is None: + return default + + return transport.get_extra_info(name, default) + + def __repr__(self) -> str: + ascii_encodable_path = self.path.encode("ascii", "backslashreplace").decode( + "ascii" + ) + return "<{} {} {} >".format( + self.__class__.__name__, self._method, ascii_encodable_path + ) + + def __eq__(self, other: object) -> bool: + return id(self) == id(other) + + def __bool__(self) -> bool: + return True + + async def _prepare_hook(self, response: StreamResponse) -> None: + return + + def _cancel(self, exc: BaseException) -> None: + self._payload.set_exception(exc) + + +class Request(BaseRequest): + + ATTRS = BaseRequest.ATTRS | frozenset(["_match_info"]) + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + # matchdict, route_name, handler + # or information about traversal lookup + + # initialized after route resolving + self._match_info: Optional[UrlMappingMatchInfo] = None + + if DEBUG: + + def __setattr__(self, name: str, val: Any) -> None: + if name not in self.ATTRS: + warnings.warn( + "Setting custom {}.{} attribute " + "is discouraged".format(self.__class__.__name__, name), + DeprecationWarning, + stacklevel=2, + ) + super().__setattr__(name, val) + + def clone( + self, + *, + method: str = sentinel, + rel_url: StrOrURL = sentinel, + headers: LooseHeaders = sentinel, + scheme: str = sentinel, + host: str = sentinel, + remote: str = sentinel, + ) -> "Request": + ret = super().clone( + method=method, + rel_url=rel_url, + headers=headers, + scheme=scheme, + host=host, + remote=remote, + ) + new_ret = cast(Request, ret) + new_ret._match_info = self._match_info + return new_ret + + @reify + def match_info(self) -> "UrlMappingMatchInfo": + """Result of route resolving.""" + match_info = self._match_info + assert match_info is not None + return match_info + + @property + def app(self) -> "Application": + 
"""Application instance.""" + match_info = self._match_info + assert match_info is not None + return match_info.current_app + + @property + def config_dict(self) -> ChainMapProxy: + match_info = self._match_info + assert match_info is not None + lst = match_info.apps + app = self.app + idx = lst.index(app) + sublist = list(reversed(lst[: idx + 1])) + return ChainMapProxy(sublist) + + async def _prepare_hook(self, response: StreamResponse) -> None: + match_info = self._match_info + if match_info is None: + return + for app in match_info._apps: + await app.on_response_prepare.send(self, response) diff --git a/env/lib/python3.8/site-packages/aiohttp/web_response.py b/env/lib/python3.8/site-packages/aiohttp/web_response.py new file mode 100644 index 00000000..ce07f815 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_response.py @@ -0,0 +1,825 @@ +import asyncio +import collections.abc +import datetime +import enum +import json +import math +import time +import warnings +import zlib +from concurrent.futures import Executor +from http.cookies import Morsel, SimpleCookie +from typing import ( + TYPE_CHECKING, + Any, + Dict, + Iterator, + Mapping, + MutableMapping, + Optional, + Tuple, + Union, + cast, +) + +from multidict import CIMultiDict, istr + +from . import hdrs, payload +from .abc import AbstractStreamWriter +from .helpers import ( + ETAG_ANY, + PY_38, + QUOTED_ETAG_RE, + ETag, + HeadersMixin, + parse_http_date, + rfc822_formatted_time, + sentinel, + validate_etag_value, +) +from .http import RESPONSES, SERVER_SOFTWARE, HttpVersion10, HttpVersion11 +from .payload import Payload +from .typedefs import JSONEncoder, LooseHeaders + +__all__ = ("ContentCoding", "StreamResponse", "Response", "json_response") + + +if TYPE_CHECKING: # pragma: no cover + from .web_request import BaseRequest + + BaseClass = MutableMapping[str, Any] +else: + BaseClass = collections.abc.MutableMapping + + +if not PY_38: + # allow samesite to be used in python < 3.8 + # already permitted in python 3.8, see https://bugs.python.org/issue29613 + Morsel._reserved["samesite"] = "SameSite" # type: ignore[attr-defined] + + +class ContentCoding(enum.Enum): + # The content codings that we have support for. 
+ # + # Additional registered codings are listed at: + # https://www.iana.org/assignments/http-parameters/http-parameters.xhtml#content-coding + deflate = "deflate" + gzip = "gzip" + identity = "identity" + + +############################################################ +# HTTP Response classes +############################################################ + + +class StreamResponse(BaseClass, HeadersMixin): + + _length_check = True + + def __init__( + self, + *, + status: int = 200, + reason: Optional[str] = None, + headers: Optional[LooseHeaders] = None, + ) -> None: + self._body = None + self._keep_alive: Optional[bool] = None + self._chunked = False + self._compression = False + self._compression_force: Optional[ContentCoding] = None + self._cookies: SimpleCookie[str] = SimpleCookie() + + self._req: Optional[BaseRequest] = None + self._payload_writer: Optional[AbstractStreamWriter] = None + self._eof_sent = False + self._body_length = 0 + self._state: Dict[str, Any] = {} + + if headers is not None: + self._headers: CIMultiDict[str] = CIMultiDict(headers) + else: + self._headers = CIMultiDict() + + self.set_status(status, reason) + + @property + def prepared(self) -> bool: + return self._payload_writer is not None + + @property + def task(self) -> "Optional[asyncio.Task[None]]": + if self._req: + return self._req.task + else: + return None + + @property + def status(self) -> int: + return self._status + + @property + def chunked(self) -> bool: + return self._chunked + + @property + def compression(self) -> bool: + return self._compression + + @property + def reason(self) -> str: + return self._reason + + def set_status( + self, + status: int, + reason: Optional[str] = None, + _RESPONSES: Mapping[int, Tuple[str, str]] = RESPONSES, + ) -> None: + assert not self.prepared, ( + "Cannot change the response status code after " "the headers have been sent" + ) + self._status = int(status) + if reason is None: + try: + reason = _RESPONSES[self._status][0] + except Exception: + reason = "" + self._reason = reason + + @property + def keep_alive(self) -> Optional[bool]: + return self._keep_alive + + def force_close(self) -> None: + self._keep_alive = False + + @property + def body_length(self) -> int: + return self._body_length + + @property + def output_length(self) -> int: + warnings.warn("output_length is deprecated", DeprecationWarning) + assert self._payload_writer + return self._payload_writer.buffer_size + + def enable_chunked_encoding(self, chunk_size: Optional[int] = None) -> None: + """Enables automatic chunked transfer encoding.""" + self._chunked = True + + if hdrs.CONTENT_LENGTH in self._headers: + raise RuntimeError( + "You can't enable chunked encoding when " "a content length is set" + ) + if chunk_size is not None: + warnings.warn("Chunk size is deprecated #1615", DeprecationWarning) + + def enable_compression( + self, force: Optional[Union[bool, ContentCoding]] = None + ) -> None: + """Enables response compression encoding.""" + # Backwards compatibility for when force was a bool <0.17. 
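+        # Illustrative call sites (a sketch; ``resp`` is any StreamResponse):
+        #   resp.enable_compression()                    # negotiate via Accept-Encoding
+        #   resp.enable_compression(ContentCoding.gzip)  # force a specific coding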
+ if type(force) == bool: + force = ContentCoding.deflate if force else ContentCoding.identity + warnings.warn( + "Using boolean for force is deprecated #3318", DeprecationWarning + ) + elif force is not None: + assert isinstance(force, ContentCoding), ( + "force should one of " "None, bool or " "ContentEncoding" + ) + + self._compression = True + self._compression_force = force + + @property + def headers(self) -> "CIMultiDict[str]": + return self._headers + + @property + def cookies(self) -> "SimpleCookie[str]": + return self._cookies + + def set_cookie( + self, + name: str, + value: str, + *, + expires: Optional[str] = None, + domain: Optional[str] = None, + max_age: Optional[Union[int, str]] = None, + path: str = "/", + secure: Optional[bool] = None, + httponly: Optional[bool] = None, + version: Optional[str] = None, + samesite: Optional[str] = None, + ) -> None: + """Set or update response cookie. + + Sets new cookie or updates existent with new value. + Also updates only those params which are not None. + """ + old = self._cookies.get(name) + if old is not None and old.coded_value == "": + # deleted cookie + self._cookies.pop(name, None) + + self._cookies[name] = value + c = self._cookies[name] + + if expires is not None: + c["expires"] = expires + elif c.get("expires") == "Thu, 01 Jan 1970 00:00:00 GMT": + del c["expires"] + + if domain is not None: + c["domain"] = domain + + if max_age is not None: + c["max-age"] = str(max_age) + elif "max-age" in c: + del c["max-age"] + + c["path"] = path + + if secure is not None: + c["secure"] = secure + if httponly is not None: + c["httponly"] = httponly + if version is not None: + c["version"] = version + if samesite is not None: + c["samesite"] = samesite + + def del_cookie( + self, name: str, *, domain: Optional[str] = None, path: str = "/" + ) -> None: + """Delete cookie. + + Creates new empty expired cookie. + """ + # TODO: do we need domain/path here? 
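+        # Sketch of the effect: ``resp.del_cookie("session")`` re-issues the
+        # cookie with an empty value, ``max-age=0`` and an epoch expiry, so
+        # a conforming client discards it.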
+ self._cookies.pop(name, None) + self.set_cookie( + name, + "", + max_age=0, + expires="Thu, 01 Jan 1970 00:00:00 GMT", + domain=domain, + path=path, + ) + + @property + def content_length(self) -> Optional[int]: + # Just a placeholder for adding setter + return super().content_length + + @content_length.setter + def content_length(self, value: Optional[int]) -> None: + if value is not None: + value = int(value) + if self._chunked: + raise RuntimeError( + "You can't set content length when " "chunked encoding is enable" + ) + self._headers[hdrs.CONTENT_LENGTH] = str(value) + else: + self._headers.pop(hdrs.CONTENT_LENGTH, None) + + @property + def content_type(self) -> str: + # Just a placeholder for adding setter + return super().content_type + + @content_type.setter + def content_type(self, value: str) -> None: + self.content_type # read header values if needed + self._content_type = str(value) + self._generate_content_type_header() + + @property + def charset(self) -> Optional[str]: + # Just a placeholder for adding setter + return super().charset + + @charset.setter + def charset(self, value: Optional[str]) -> None: + ctype = self.content_type # read header values if needed + if ctype == "application/octet-stream": + raise RuntimeError( + "Setting charset for application/octet-stream " + "doesn't make sense, setup content_type first" + ) + assert self._content_dict is not None + if value is None: + self._content_dict.pop("charset", None) + else: + self._content_dict["charset"] = str(value).lower() + self._generate_content_type_header() + + @property + def last_modified(self) -> Optional[datetime.datetime]: + """The value of Last-Modified HTTP header, or None. + + This header is represented as a `datetime` object. + """ + return parse_http_date(self._headers.get(hdrs.LAST_MODIFIED)) + + @last_modified.setter + def last_modified( + self, value: Optional[Union[int, float, datetime.datetime, str]] + ) -> None: + if value is None: + self._headers.pop(hdrs.LAST_MODIFIED, None) + elif isinstance(value, (int, float)): + self._headers[hdrs.LAST_MODIFIED] = time.strftime( + "%a, %d %b %Y %H:%M:%S GMT", time.gmtime(math.ceil(value)) + ) + elif isinstance(value, datetime.datetime): + self._headers[hdrs.LAST_MODIFIED] = time.strftime( + "%a, %d %b %Y %H:%M:%S GMT", value.utctimetuple() + ) + elif isinstance(value, str): + self._headers[hdrs.LAST_MODIFIED] = value + + @property + def etag(self) -> Optional[ETag]: + quoted_value = self._headers.get(hdrs.ETAG) + if not quoted_value: + return None + elif quoted_value == ETAG_ANY: + return ETag(value=ETAG_ANY) + match = QUOTED_ETAG_RE.fullmatch(quoted_value) + if not match: + return None + is_weak, value = match.group(1, 2) + return ETag( + is_weak=bool(is_weak), + value=value, + ) + + @etag.setter + def etag(self, value: Optional[Union[ETag, str]]) -> None: + if value is None: + self._headers.pop(hdrs.ETAG, None) + elif (isinstance(value, str) and value == ETAG_ANY) or ( + isinstance(value, ETag) and value.value == ETAG_ANY + ): + self._headers[hdrs.ETAG] = ETAG_ANY + elif isinstance(value, str): + validate_etag_value(value) + self._headers[hdrs.ETAG] = f'"{value}"' + elif isinstance(value, ETag) and isinstance(value.value, str): + validate_etag_value(value.value) + hdr_value = f'W/"{value.value}"' if value.is_weak else f'"{value.value}"' + self._headers[hdrs.ETAG] = hdr_value + else: + raise ValueError( + f"Unsupported etag type: {type(value)}. 
" + f"etag must be str, ETag or None" + ) + + def _generate_content_type_header( + self, CONTENT_TYPE: istr = hdrs.CONTENT_TYPE + ) -> None: + assert self._content_dict is not None + assert self._content_type is not None + params = "; ".join(f"{k}={v}" for k, v in self._content_dict.items()) + if params: + ctype = self._content_type + "; " + params + else: + ctype = self._content_type + self._headers[CONTENT_TYPE] = ctype + + async def _do_start_compression(self, coding: ContentCoding) -> None: + if coding != ContentCoding.identity: + assert self._payload_writer is not None + self._headers[hdrs.CONTENT_ENCODING] = coding.value + self._payload_writer.enable_compression(coding.value) + # Compressed payload may have different content length, + # remove the header + self._headers.popall(hdrs.CONTENT_LENGTH, None) + + async def _start_compression(self, request: "BaseRequest") -> None: + if self._compression_force: + await self._do_start_compression(self._compression_force) + else: + accept_encoding = request.headers.get(hdrs.ACCEPT_ENCODING, "").lower() + for coding in ContentCoding: + if coding.value in accept_encoding: + await self._do_start_compression(coding) + return + + async def prepare(self, request: "BaseRequest") -> Optional[AbstractStreamWriter]: + if self._eof_sent: + return None + if self._payload_writer is not None: + return self._payload_writer + + return await self._start(request) + + async def _start(self, request: "BaseRequest") -> AbstractStreamWriter: + self._req = request + writer = self._payload_writer = request._payload_writer + + await self._prepare_headers() + await request._prepare_hook(self) + await self._write_headers() + + return writer + + async def _prepare_headers(self) -> None: + request = self._req + assert request is not None + writer = self._payload_writer + assert writer is not None + keep_alive = self._keep_alive + if keep_alive is None: + keep_alive = request.keep_alive + self._keep_alive = keep_alive + + version = request.version + + headers = self._headers + for cookie in self._cookies.values(): + value = cookie.output(header="")[1:] + headers.add(hdrs.SET_COOKIE, value) + + if self._compression: + await self._start_compression(request) + + if self._chunked: + if version != HttpVersion11: + raise RuntimeError( + "Using chunked encoding is forbidden " + "for HTTP/{0.major}.{0.minor}".format(request.version) + ) + writer.enable_chunking() + headers[hdrs.TRANSFER_ENCODING] = "chunked" + if hdrs.CONTENT_LENGTH in headers: + del headers[hdrs.CONTENT_LENGTH] + elif self._length_check: + writer.length = self.content_length + if writer.length is None: + if version >= HttpVersion11 and self.status != 204: + writer.enable_chunking() + headers[hdrs.TRANSFER_ENCODING] = "chunked" + if hdrs.CONTENT_LENGTH in headers: + del headers[hdrs.CONTENT_LENGTH] + else: + keep_alive = False + # HTTP 1.1: https://tools.ietf.org/html/rfc7230#section-3.3.2 + # HTTP 1.0: https://tools.ietf.org/html/rfc1945#section-10.4 + elif version >= HttpVersion11 and self.status in (100, 101, 102, 103, 204): + del headers[hdrs.CONTENT_LENGTH] + + if self.status not in (204, 304): + headers.setdefault(hdrs.CONTENT_TYPE, "application/octet-stream") + headers.setdefault(hdrs.DATE, rfc822_formatted_time()) + headers.setdefault(hdrs.SERVER, SERVER_SOFTWARE) + + # connection header + if hdrs.CONNECTION not in headers: + if keep_alive: + if version == HttpVersion10: + headers[hdrs.CONNECTION] = "keep-alive" + else: + if version == HttpVersion11: + headers[hdrs.CONNECTION] = "close" + + async def 
_write_headers(self) -> None: + request = self._req + assert request is not None + writer = self._payload_writer + assert writer is not None + # status line + version = request.version + status_line = "HTTP/{}.{} {} {}".format( + version[0], version[1], self._status, self._reason + ) + await writer.write_headers(status_line, self._headers) + + async def write(self, data: bytes) -> None: + assert isinstance( + data, (bytes, bytearray, memoryview) + ), "data argument must be byte-ish (%r)" % type(data) + + if self._eof_sent: + raise RuntimeError("Cannot call write() after write_eof()") + if self._payload_writer is None: + raise RuntimeError("Cannot call write() before prepare()") + + await self._payload_writer.write(data) + + async def drain(self) -> None: + assert not self._eof_sent, "EOF has already been sent" + assert self._payload_writer is not None, "Response has not been started" + warnings.warn( + "drain method is deprecated, use await resp.write()", + DeprecationWarning, + stacklevel=2, + ) + await self._payload_writer.drain() + + async def write_eof(self, data: bytes = b"") -> None: + assert isinstance( + data, (bytes, bytearray, memoryview) + ), "data argument must be byte-ish (%r)" % type(data) + + if self._eof_sent: + return + + assert self._payload_writer is not None, "Response has not been started" + + await self._payload_writer.write_eof(data) + self._eof_sent = True + self._req = None + self._body_length = self._payload_writer.output_size + self._payload_writer = None + + def __repr__(self) -> str: + if self._eof_sent: + info = "eof" + elif self.prepared: + assert self._req is not None + info = f"{self._req.method} {self._req.path} " + else: + info = "not prepared" + return f"<{self.__class__.__name__} {self.reason} {info}>" + + def __getitem__(self, key: str) -> Any: + return self._state[key] + + def __setitem__(self, key: str, value: Any) -> None: + self._state[key] = value + + def __delitem__(self, key: str) -> None: + del self._state[key] + + def __len__(self) -> int: + return len(self._state) + + def __iter__(self) -> Iterator[str]: + return iter(self._state) + + def __hash__(self) -> int: + return hash(id(self)) + + def __eq__(self, other: object) -> bool: + return self is other + + +class Response(StreamResponse): + def __init__( + self, + *, + body: Any = None, + status: int = 200, + reason: Optional[str] = None, + text: Optional[str] = None, + headers: Optional[LooseHeaders] = None, + content_type: Optional[str] = None, + charset: Optional[str] = None, + zlib_executor_size: Optional[int] = None, + zlib_executor: Optional[Executor] = None, + ) -> None: + if body is not None and text is not None: + raise ValueError("body and text are not allowed together") + + if headers is None: + real_headers: CIMultiDict[str] = CIMultiDict() + elif not isinstance(headers, CIMultiDict): + real_headers = CIMultiDict(headers) + else: + real_headers = headers # = cast('CIMultiDict[str]', headers) + + if content_type is not None and "charset" in content_type: + raise ValueError("charset must not be in content_type " "argument") + + if text is not None: + if hdrs.CONTENT_TYPE in real_headers: + if content_type or charset: + raise ValueError( + "passing both Content-Type header and " + "content_type or charset params " + "is forbidden" + ) + else: + # fast path for filling headers + if not isinstance(text, str): + raise TypeError("text argument must be str (%r)" % type(text)) + if content_type is None: + content_type = "text/plain" + if charset is None: + charset = "utf-8" + 
real_headers[hdrs.CONTENT_TYPE] = content_type + "; charset=" + charset + body = text.encode(charset) + text = None + else: + if hdrs.CONTENT_TYPE in real_headers: + if content_type is not None or charset is not None: + raise ValueError( + "passing both Content-Type header and " + "content_type or charset params " + "is forbidden" + ) + else: + if content_type is not None: + if charset is not None: + content_type += "; charset=" + charset + real_headers[hdrs.CONTENT_TYPE] = content_type + + super().__init__(status=status, reason=reason, headers=real_headers) + + if text is not None: + self.text = text + else: + self.body = body + + self._compressed_body: Optional[bytes] = None + self._zlib_executor_size = zlib_executor_size + self._zlib_executor = zlib_executor + + @property + def body(self) -> Optional[Union[bytes, Payload]]: + return self._body + + @body.setter + def body( + self, + body: bytes, + CONTENT_TYPE: istr = hdrs.CONTENT_TYPE, + CONTENT_LENGTH: istr = hdrs.CONTENT_LENGTH, + ) -> None: + if body is None: + self._body: Optional[bytes] = None + self._body_payload: bool = False + elif isinstance(body, (bytes, bytearray)): + self._body = body + self._body_payload = False + else: + try: + self._body = body = payload.PAYLOAD_REGISTRY.get(body) + except payload.LookupError: + raise ValueError("Unsupported body type %r" % type(body)) + + self._body_payload = True + + headers = self._headers + + # set content-length header if needed + if not self._chunked and CONTENT_LENGTH not in headers: + size = body.size + if size is not None: + headers[CONTENT_LENGTH] = str(size) + + # set content-type + if CONTENT_TYPE not in headers: + headers[CONTENT_TYPE] = body.content_type + + # copy payload headers + if body.headers: + for (key, value) in body.headers.items(): + if key not in headers: + headers[key] = value + + self._compressed_body = None + + @property + def text(self) -> Optional[str]: + if self._body is None: + return None + return self._body.decode(self.charset or "utf-8") + + @text.setter + def text(self, text: str) -> None: + assert text is None or isinstance( + text, str + ), "text argument must be str (%r)" % type(text) + + if self.content_type == "application/octet-stream": + self.content_type = "text/plain" + if self.charset is None: + self.charset = "utf-8" + + self._body = text.encode(self.charset) + self._body_payload = False + self._compressed_body = None + + @property + def content_length(self) -> Optional[int]: + if self._chunked: + return None + + if hdrs.CONTENT_LENGTH in self._headers: + return super().content_length + + if self._compressed_body is not None: + # Return length of the compressed body + return len(self._compressed_body) + elif self._body_payload: + # A payload without content length, or a compressed payload + return None + elif self._body is not None: + return len(self._body) + else: + return 0 + + @content_length.setter + def content_length(self, value: Optional[int]) -> None: + raise RuntimeError("Content length is set automatically") + + async def write_eof(self, data: bytes = b"") -> None: + if self._eof_sent: + return + if self._compressed_body is None: + body: Optional[Union[bytes, Payload]] = self._body + else: + body = self._compressed_body + assert not data, f"data arg is not supported, got {data!r}" + assert self._req is not None + assert self._payload_writer is not None + if body is not None: + if self._req._method == hdrs.METH_HEAD or self._status in [204, 304]: + await super().write_eof() + elif self._body_payload: + payload = cast(Payload, body) 
+ await payload.write(self._payload_writer) + await super().write_eof() + else: + await super().write_eof(cast(bytes, body)) + else: + await super().write_eof() + + async def _start(self, request: "BaseRequest") -> AbstractStreamWriter: + if not self._chunked and hdrs.CONTENT_LENGTH not in self._headers: + if not self._body_payload: + if self._body is not None: + self._headers[hdrs.CONTENT_LENGTH] = str(len(self._body)) + else: + self._headers[hdrs.CONTENT_LENGTH] = "0" + + return await super()._start(request) + + def _compress_body(self, zlib_mode: int) -> None: + assert zlib_mode > 0 + compressobj = zlib.compressobj(wbits=zlib_mode) + body_in = self._body + assert body_in is not None + self._compressed_body = compressobj.compress(body_in) + compressobj.flush() + + async def _do_start_compression(self, coding: ContentCoding) -> None: + if self._body_payload or self._chunked: + return await super()._do_start_compression(coding) + + if coding != ContentCoding.identity: + # Instead of using _payload_writer.enable_compression, + # compress the whole body + zlib_mode = ( + 16 + zlib.MAX_WBITS if coding == ContentCoding.gzip else zlib.MAX_WBITS + ) + body_in = self._body + assert body_in is not None + if ( + self._zlib_executor_size is not None + and len(body_in) > self._zlib_executor_size + ): + await asyncio.get_event_loop().run_in_executor( + self._zlib_executor, self._compress_body, zlib_mode + ) + else: + self._compress_body(zlib_mode) + + body_out = self._compressed_body + assert body_out is not None + + self._headers[hdrs.CONTENT_ENCODING] = coding.value + self._headers[hdrs.CONTENT_LENGTH] = str(len(body_out)) + + +def json_response( + data: Any = sentinel, + *, + text: Optional[str] = None, + body: Optional[bytes] = None, + status: int = 200, + reason: Optional[str] = None, + headers: Optional[LooseHeaders] = None, + content_type: str = "application/json", + dumps: JSONEncoder = json.dumps, +) -> Response: + if data is not sentinel: + if text or body: + raise ValueError("only one of data, text, or body should be specified") + else: + text = dumps(data) + return Response( + text=text, + body=body, + status=status, + reason=reason, + headers=headers, + content_type=content_type, + ) diff --git a/env/lib/python3.8/site-packages/aiohttp/web_routedef.py b/env/lib/python3.8/site-packages/aiohttp/web_routedef.py new file mode 100644 index 00000000..a1eb0a76 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_routedef.py @@ -0,0 +1,216 @@ +import abc +import os # noqa +from typing import ( + TYPE_CHECKING, + Any, + Callable, + Dict, + Iterator, + List, + Optional, + Sequence, + Type, + Union, + overload, +) + +import attr + +from . 
import hdrs +from .abc import AbstractView +from .typedefs import Handler, PathLike + +if TYPE_CHECKING: # pragma: no cover + from .web_request import Request + from .web_response import StreamResponse + from .web_urldispatcher import AbstractRoute, UrlDispatcher +else: + Request = StreamResponse = UrlDispatcher = AbstractRoute = None + + +__all__ = ( + "AbstractRouteDef", + "RouteDef", + "StaticDef", + "RouteTableDef", + "head", + "options", + "get", + "post", + "patch", + "put", + "delete", + "route", + "view", + "static", +) + + +class AbstractRouteDef(abc.ABC): + @abc.abstractmethod + def register(self, router: UrlDispatcher) -> List[AbstractRoute]: + pass # pragma: no cover + + +_HandlerType = Union[Type[AbstractView], Handler] + + +@attr.s(auto_attribs=True, frozen=True, repr=False, slots=True) +class RouteDef(AbstractRouteDef): + method: str + path: str + handler: _HandlerType + kwargs: Dict[str, Any] + + def __repr__(self) -> str: + info = [] + for name, value in sorted(self.kwargs.items()): + info.append(f", {name}={value!r}") + return " {handler.__name__!r}" "{info}>".format( + method=self.method, path=self.path, handler=self.handler, info="".join(info) + ) + + def register(self, router: UrlDispatcher) -> List[AbstractRoute]: + if self.method in hdrs.METH_ALL: + reg = getattr(router, "add_" + self.method.lower()) + return [reg(self.path, self.handler, **self.kwargs)] + else: + return [ + router.add_route(self.method, self.path, self.handler, **self.kwargs) + ] + + +@attr.s(auto_attribs=True, frozen=True, repr=False, slots=True) +class StaticDef(AbstractRouteDef): + prefix: str + path: PathLike + kwargs: Dict[str, Any] + + def __repr__(self) -> str: + info = [] + for name, value in sorted(self.kwargs.items()): + info.append(f", {name}={value!r}") + return " {path}" "{info}>".format( + prefix=self.prefix, path=self.path, info="".join(info) + ) + + def register(self, router: UrlDispatcher) -> List[AbstractRoute]: + resource = router.add_static(self.prefix, self.path, **self.kwargs) + routes = resource.get_info().get("routes", {}) + return list(routes.values()) + + +def route(method: str, path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef: + return RouteDef(method, path, handler, kwargs) + + +def head(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef: + return route(hdrs.METH_HEAD, path, handler, **kwargs) + + +def options(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef: + return route(hdrs.METH_OPTIONS, path, handler, **kwargs) + + +def get( + path: str, + handler: _HandlerType, + *, + name: Optional[str] = None, + allow_head: bool = True, + **kwargs: Any, +) -> RouteDef: + return route( + hdrs.METH_GET, path, handler, name=name, allow_head=allow_head, **kwargs + ) + + +def post(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef: + return route(hdrs.METH_POST, path, handler, **kwargs) + + +def put(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef: + return route(hdrs.METH_PUT, path, handler, **kwargs) + + +def patch(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef: + return route(hdrs.METH_PATCH, path, handler, **kwargs) + + +def delete(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef: + return route(hdrs.METH_DELETE, path, handler, **kwargs) + + +def view(path: str, handler: Type[AbstractView], **kwargs: Any) -> RouteDef: + return route(hdrs.METH_ANY, path, handler, **kwargs) + + +def static(prefix: str, path: PathLike, **kwargs: Any) -> StaticDef: + return StaticDef(prefix, path, kwargs) + + +_Deco = 
Callable[[_HandlerType], _HandlerType] + + +class RouteTableDef(Sequence[AbstractRouteDef]): + """Route definition table""" + + def __init__(self) -> None: + self._items: List[AbstractRouteDef] = [] + + def __repr__(self) -> str: + return f"" + + @overload + def __getitem__(self, index: int) -> AbstractRouteDef: + ... + + @overload + def __getitem__(self, index: slice) -> List[AbstractRouteDef]: + ... + + def __getitem__(self, index): # type: ignore[no-untyped-def] + return self._items[index] + + def __iter__(self) -> Iterator[AbstractRouteDef]: + return iter(self._items) + + def __len__(self) -> int: + return len(self._items) + + def __contains__(self, item: object) -> bool: + return item in self._items + + def route(self, method: str, path: str, **kwargs: Any) -> _Deco: + def inner(handler: _HandlerType) -> _HandlerType: + self._items.append(RouteDef(method, path, handler, kwargs)) + return handler + + return inner + + def head(self, path: str, **kwargs: Any) -> _Deco: + return self.route(hdrs.METH_HEAD, path, **kwargs) + + def get(self, path: str, **kwargs: Any) -> _Deco: + return self.route(hdrs.METH_GET, path, **kwargs) + + def post(self, path: str, **kwargs: Any) -> _Deco: + return self.route(hdrs.METH_POST, path, **kwargs) + + def put(self, path: str, **kwargs: Any) -> _Deco: + return self.route(hdrs.METH_PUT, path, **kwargs) + + def patch(self, path: str, **kwargs: Any) -> _Deco: + return self.route(hdrs.METH_PATCH, path, **kwargs) + + def delete(self, path: str, **kwargs: Any) -> _Deco: + return self.route(hdrs.METH_DELETE, path, **kwargs) + + def options(self, path: str, **kwargs: Any) -> _Deco: + return self.route(hdrs.METH_OPTIONS, path, **kwargs) + + def view(self, path: str, **kwargs: Any) -> _Deco: + return self.route(hdrs.METH_ANY, path, **kwargs) + + def static(self, prefix: str, path: PathLike, **kwargs: Any) -> None: + self._items.append(StaticDef(prefix, path, kwargs)) diff --git a/env/lib/python3.8/site-packages/aiohttp/web_runner.py b/env/lib/python3.8/site-packages/aiohttp/web_runner.py new file mode 100644 index 00000000..9282bb93 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_runner.py @@ -0,0 +1,381 @@ +import asyncio +import signal +import socket +from abc import ABC, abstractmethod +from typing import Any, List, Optional, Set + +from yarl import URL + +from .web_app import Application +from .web_server import Server + +try: + from ssl import SSLContext +except ImportError: + SSLContext = object # type: ignore[misc,assignment] + + +__all__ = ( + "BaseSite", + "TCPSite", + "UnixSite", + "NamedPipeSite", + "SockSite", + "BaseRunner", + "AppRunner", + "ServerRunner", + "GracefulExit", +) + + +class GracefulExit(SystemExit): + code = 1 + + +def _raise_graceful_exit() -> None: + raise GracefulExit() + + +class BaseSite(ABC): + __slots__ = ("_runner", "_shutdown_timeout", "_ssl_context", "_backlog", "_server") + + def __init__( + self, + runner: "BaseRunner", + *, + shutdown_timeout: float = 60.0, + ssl_context: Optional[SSLContext] = None, + backlog: int = 128, + ) -> None: + if runner.server is None: + raise RuntimeError("Call runner.setup() before making a site") + self._runner = runner + self._shutdown_timeout = shutdown_timeout + self._ssl_context = ssl_context + self._backlog = backlog + self._server: Optional[asyncio.AbstractServer] = None + + @property + @abstractmethod + def name(self) -> str: + pass # pragma: no cover + + @abstractmethod + async def start(self) -> None: + self._runner._reg_site(self) + + async def stop(self) -> None: + 
+
+    async def stop(self) -> None:
+        self._runner._check_site(self)
+        if self._server is None:
+            self._runner._unreg_site(self)
+            return  # not started yet
+        self._server.close()
+        # named pipes do not have wait_closed property
+        if hasattr(self._server, "wait_closed"):
+            await self._server.wait_closed()
+        await self._runner.shutdown()
+        assert self._runner.server
+        await self._runner.server.shutdown(self._shutdown_timeout)
+        self._runner._unreg_site(self)
+
+
+class TCPSite(BaseSite):
+    __slots__ = ("_host", "_port", "_reuse_address", "_reuse_port")
+
+    def __init__(
+        self,
+        runner: "BaseRunner",
+        host: Optional[str] = None,
+        port: Optional[int] = None,
+        *,
+        shutdown_timeout: float = 60.0,
+        ssl_context: Optional[SSLContext] = None,
+        backlog: int = 128,
+        reuse_address: Optional[bool] = None,
+        reuse_port: Optional[bool] = None,
+    ) -> None:
+        super().__init__(
+            runner,
+            shutdown_timeout=shutdown_timeout,
+            ssl_context=ssl_context,
+            backlog=backlog,
+        )
+        self._host = host
+        if port is None:
+            port = 8443 if self._ssl_context else 8080
+        self._port = port
+        self._reuse_address = reuse_address
+        self._reuse_port = reuse_port
+
+    @property
+    def name(self) -> str:
+        scheme = "https" if self._ssl_context else "http"
+        host = "0.0.0.0" if self._host is None else self._host
+        return str(URL.build(scheme=scheme, host=host, port=self._port))
+
+    async def start(self) -> None:
+        await super().start()
+        loop = asyncio.get_event_loop()
+        server = self._runner.server
+        assert server is not None
+        self._server = await loop.create_server(
+            server,
+            self._host,
+            self._port,
+            ssl=self._ssl_context,
+            backlog=self._backlog,
+            reuse_address=self._reuse_address,
+            reuse_port=self._reuse_port,
+        )
+
+
+class UnixSite(BaseSite):
+    __slots__ = ("_path",)
+
+    def __init__(
+        self,
+        runner: "BaseRunner",
+        path: str,
+        *,
+        shutdown_timeout: float = 60.0,
+        ssl_context: Optional[SSLContext] = None,
+        backlog: int = 128,
+    ) -> None:
+        super().__init__(
+            runner,
+            shutdown_timeout=shutdown_timeout,
+            ssl_context=ssl_context,
+            backlog=backlog,
+        )
+        self._path = path
+
+    @property
+    def name(self) -> str:
+        scheme = "https" if self._ssl_context else "http"
+        return f"{scheme}://unix:{self._path}:"
+
+    async def start(self) -> None:
+        await super().start()
+        loop = asyncio.get_event_loop()
+        server = self._runner.server
+        assert server is not None
+        self._server = await loop.create_unix_server(
+            server, self._path, ssl=self._ssl_context, backlog=self._backlog
+        )
+
+
+class NamedPipeSite(BaseSite):
+    __slots__ = ("_path",)
+
+    def __init__(
+        self, runner: "BaseRunner", path: str, *, shutdown_timeout: float = 60.0
+    ) -> None:
+        loop = asyncio.get_event_loop()
+        if not isinstance(
+            loop, asyncio.ProactorEventLoop  # type: ignore[attr-defined]
+        ):
+            raise RuntimeError(
+                "Named Pipes only available in proactor "
+                "loop under windows"
+            )
+        super().__init__(runner, shutdown_timeout=shutdown_timeout)
+        self._path = path
+
+    @property
+    def name(self) -> str:
+        return self._path
+
+    async def start(self) -> None:
+        await super().start()
+        loop = asyncio.get_event_loop()
+        server = self._runner.server
+        assert server is not None
+        _server = await loop.start_serving_pipe(  # type: ignore[attr-defined]
+            server, self._path
+        )
+        self._server = _server[0]
+
+
+class SockSite(BaseSite):
+    __slots__ = ("_sock", "_name")
+
+    def __init__(
+        self,
+        runner: "BaseRunner",
+        sock: socket.socket,
+        *,
+        shutdown_timeout: float = 60.0,
+        ssl_context: Optional[SSLContext] = None,
+        backlog: int = 128,
+    ) -> None:
+        super().__init__(
+            runner,
shutdown_timeout=shutdown_timeout, + ssl_context=ssl_context, + backlog=backlog, + ) + self._sock = sock + scheme = "https" if self._ssl_context else "http" + if hasattr(socket, "AF_UNIX") and sock.family == socket.AF_UNIX: + name = f"{scheme}://unix:{sock.getsockname()}:" + else: + host, port = sock.getsockname()[:2] + name = str(URL.build(scheme=scheme, host=host, port=port)) + self._name = name + + @property + def name(self) -> str: + return self._name + + async def start(self) -> None: + await super().start() + loop = asyncio.get_event_loop() + server = self._runner.server + assert server is not None + self._server = await loop.create_server( + server, sock=self._sock, ssl=self._ssl_context, backlog=self._backlog + ) + + +class BaseRunner(ABC): + __slots__ = ("_handle_signals", "_kwargs", "_server", "_sites") + + def __init__(self, *, handle_signals: bool = False, **kwargs: Any) -> None: + self._handle_signals = handle_signals + self._kwargs = kwargs + self._server: Optional[Server] = None + self._sites: List[BaseSite] = [] + + @property + def server(self) -> Optional[Server]: + return self._server + + @property + def addresses(self) -> List[Any]: + ret: List[Any] = [] + for site in self._sites: + server = site._server + if server is not None: + sockets = server.sockets + if sockets is not None: + for sock in sockets: + ret.append(sock.getsockname()) + return ret + + @property + def sites(self) -> Set[BaseSite]: + return set(self._sites) + + async def setup(self) -> None: + loop = asyncio.get_event_loop() + + if self._handle_signals: + try: + loop.add_signal_handler(signal.SIGINT, _raise_graceful_exit) + loop.add_signal_handler(signal.SIGTERM, _raise_graceful_exit) + except NotImplementedError: # pragma: no cover + # add_signal_handler is not implemented on Windows + pass + + self._server = await self._make_server() + + @abstractmethod + async def shutdown(self) -> None: + pass # pragma: no cover + + async def cleanup(self) -> None: + loop = asyncio.get_event_loop() + + # The loop over sites is intentional, an exception on gather() + # leaves self._sites in unpredictable state. 
+        # The loop guarantees that a site is either deleted on success or
+        # still present on failure
+        for site in list(self._sites):
+            await site.stop()
+        await self._cleanup_server()
+        self._server = None
+        if self._handle_signals:
+            try:
+                loop.remove_signal_handler(signal.SIGINT)
+                loop.remove_signal_handler(signal.SIGTERM)
+            except NotImplementedError:  # pragma: no cover
+                # remove_signal_handler is not implemented on Windows
+                pass
+
+    @abstractmethod
+    async def _make_server(self) -> Server:
+        pass  # pragma: no cover
+
+    @abstractmethod
+    async def _cleanup_server(self) -> None:
+        pass  # pragma: no cover
+
+    def _reg_site(self, site: BaseSite) -> None:
+        if site in self._sites:
+            raise RuntimeError(f"Site {site} is already registered in runner {self}")
+        self._sites.append(site)
+
+    def _check_site(self, site: BaseSite) -> None:
+        if site not in self._sites:
+            raise RuntimeError(f"Site {site} is not registered in runner {self}")
+
+    def _unreg_site(self, site: BaseSite) -> None:
+        if site not in self._sites:
+            raise RuntimeError(f"Site {site} is not registered in runner {self}")
+        self._sites.remove(site)
+
+
+class ServerRunner(BaseRunner):
+    """Low-level web server runner"""
+
+    __slots__ = ("_web_server",)
+
+    def __init__(
+        self, web_server: Server, *, handle_signals: bool = False, **kwargs: Any
+    ) -> None:
+        super().__init__(handle_signals=handle_signals, **kwargs)
+        self._web_server = web_server
+
+    async def shutdown(self) -> None:
+        pass
+
+    async def _make_server(self) -> Server:
+        return self._web_server
+
+    async def _cleanup_server(self) -> None:
+        pass
+
+
+class AppRunner(BaseRunner):
+    """Web Application runner"""
+
+    __slots__ = ("_app",)
+
+    def __init__(
+        self, app: Application, *, handle_signals: bool = False, **kwargs: Any
+    ) -> None:
+        super().__init__(handle_signals=handle_signals, **kwargs)
+        if not isinstance(app, Application):
+            raise TypeError(
+                "The first argument should be web.Application "
+                "instance, got {!r}".format(app)
+            )
+        self._app = app
+
+    @property
+    def app(self) -> Application:
+        return self._app
+
+    async def shutdown(self) -> None:
+        await self._app.shutdown()
+
+    async def _make_server(self) -> Server:
+        loop = asyncio.get_event_loop()
+        self._app._set_loop(loop)
+        self._app.on_startup.freeze()
+        await self._app.startup()
+        self._app.freeze()
+
+        return self._app._make_handler(loop=loop, **self._kwargs)
+
+    async def _cleanup_server(self) -> None:
+        await self._app.cleanup()
diff --git a/env/lib/python3.8/site-packages/aiohttp/web_server.py b/env/lib/python3.8/site-packages/aiohttp/web_server.py
new file mode 100644
index 00000000..fa46e905
--- /dev/null
+++ b/env/lib/python3.8/site-packages/aiohttp/web_server.py
@@ -0,0 +1,62 @@
+"""Low level HTTP server."""
+import asyncio
+from typing import Any, Awaitable, Callable, Dict, List, Optional  # noqa
+
+from .abc import AbstractStreamWriter
+from .helpers import get_running_loop
+from .http_parser import RawRequestMessage
+from .streams import StreamReader
+from .web_protocol import RequestHandler, _RequestFactory, _RequestHandler
+from .web_request import BaseRequest
+
+__all__ = ("Server",)
+
+
+class Server:
+    def __init__(
+        self,
+        handler: _RequestHandler,
+        *,
+        request_factory: Optional[_RequestFactory] = None,
+        loop: Optional[asyncio.AbstractEventLoop] = None,
+        **kwargs: Any
+    ) -> None:
+        self._loop = get_running_loop(loop)
+        self._connections: Dict[RequestHandler, asyncio.Transport] = {}
+        self._kwargs = kwargs
+        self.requests_count = 0
+        self.request_handler = handler
self.request_factory = request_factory or self._make_request + + @property + def connections(self) -> List[RequestHandler]: + return list(self._connections.keys()) + + def connection_made( + self, handler: RequestHandler, transport: asyncio.Transport + ) -> None: + self._connections[handler] = transport + + def connection_lost( + self, handler: RequestHandler, exc: Optional[BaseException] = None + ) -> None: + if handler in self._connections: + del self._connections[handler] + + def _make_request( + self, + message: RawRequestMessage, + payload: StreamReader, + protocol: RequestHandler, + writer: AbstractStreamWriter, + task: "asyncio.Task[None]", + ) -> BaseRequest: + return BaseRequest(message, payload, protocol, writer, task, self._loop) + + async def shutdown(self, timeout: Optional[float] = None) -> None: + coros = [conn.shutdown(timeout) for conn in self._connections] + await asyncio.gather(*coros) + self._connections.clear() + + def __call__(self) -> RequestHandler: + return RequestHandler(self, loop=self._loop, **self._kwargs) diff --git a/env/lib/python3.8/site-packages/aiohttp/web_urldispatcher.py b/env/lib/python3.8/site-packages/aiohttp/web_urldispatcher.py new file mode 100644 index 00000000..5942e355 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_urldispatcher.py @@ -0,0 +1,1220 @@ +import abc +import asyncio +import base64 +import hashlib +import inspect +import keyword +import os +import re +import warnings +from contextlib import contextmanager +from functools import wraps +from pathlib import Path +from types import MappingProxyType +from typing import ( + TYPE_CHECKING, + Any, + Awaitable, + Callable, + Container, + Dict, + Generator, + Iterable, + Iterator, + List, + Mapping, + Optional, + Pattern, + Set, + Sized, + Tuple, + Type, + Union, + cast, +) + +from yarl import URL, __version__ as yarl_version # type: ignore[attr-defined] + +from . 
import hdrs +from .abc import AbstractMatchInfo, AbstractRouter, AbstractView +from .helpers import DEBUG +from .http import HttpVersion11 +from .typedefs import Final, Handler, PathLike, TypedDict +from .web_exceptions import ( + HTTPException, + HTTPExpectationFailed, + HTTPForbidden, + HTTPMethodNotAllowed, + HTTPNotFound, +) +from .web_fileresponse import FileResponse +from .web_request import Request +from .web_response import Response, StreamResponse +from .web_routedef import AbstractRouteDef + +__all__ = ( + "UrlDispatcher", + "UrlMappingMatchInfo", + "AbstractResource", + "Resource", + "PlainResource", + "DynamicResource", + "AbstractRoute", + "ResourceRoute", + "StaticResource", + "View", +) + + +if TYPE_CHECKING: # pragma: no cover + from .web_app import Application + + BaseDict = Dict[str, str] +else: + BaseDict = dict + +YARL_VERSION: Final[Tuple[int, ...]] = tuple(map(int, yarl_version.split(".")[:2])) + +HTTP_METHOD_RE: Final[Pattern[str]] = re.compile( + r"^[0-9A-Za-z!#\$%&'\*\+\-\.\^_`\|~]+$" +) +ROUTE_RE: Final[Pattern[str]] = re.compile( + r"(\{[_a-zA-Z][^{}]*(?:\{[^{}]*\}[^{}]*)*\})" +) +PATH_SEP: Final[str] = re.escape("/") + + +_ExpectHandler = Callable[[Request], Awaitable[None]] +_Resolve = Tuple[Optional["UrlMappingMatchInfo"], Set[str]] + + +class _InfoDict(TypedDict, total=False): + path: str + + formatter: str + pattern: Pattern[str] + + directory: Path + prefix: str + routes: Mapping[str, "AbstractRoute"] + + app: "Application" + + domain: str + + rule: "AbstractRuleMatching" + + http_exception: HTTPException + + +class AbstractResource(Sized, Iterable["AbstractRoute"]): + def __init__(self, *, name: Optional[str] = None) -> None: + self._name = name + + @property + def name(self) -> Optional[str]: + return self._name + + @property + @abc.abstractmethod + def canonical(self) -> str: + """Exposes the resource's canonical path. + + For example '/foo/bar/{name}' + + """ + + @abc.abstractmethod # pragma: no branch + def url_for(self, **kwargs: str) -> URL: + """Construct url for resource with additional params.""" + + @abc.abstractmethod # pragma: no branch + async def resolve(self, request: Request) -> _Resolve: + """Resolve resource. + + Return (UrlMappingMatchInfo, allowed_methods) pair. + """ + + @abc.abstractmethod + def add_prefix(self, prefix: str) -> None: + """Add a prefix to processed URLs. + + Required for subapplications support. 
+ """ + + @abc.abstractmethod + def get_info(self) -> _InfoDict: + """Return a dict with additional info useful for introspection""" + + def freeze(self) -> None: + pass + + @abc.abstractmethod + def raw_match(self, path: str) -> bool: + """Perform a raw match against path""" + + +class AbstractRoute(abc.ABC): + def __init__( + self, + method: str, + handler: Union[Handler, Type[AbstractView]], + *, + expect_handler: Optional[_ExpectHandler] = None, + resource: Optional[AbstractResource] = None, + ) -> None: + + if expect_handler is None: + expect_handler = _default_expect_handler + + assert asyncio.iscoroutinefunction( + expect_handler + ), f"Coroutine is expected, got {expect_handler!r}" + + method = method.upper() + if not HTTP_METHOD_RE.match(method): + raise ValueError(f"{method} is not allowed HTTP method") + + assert callable(handler), handler + if asyncio.iscoroutinefunction(handler): + pass + elif inspect.isgeneratorfunction(handler): + warnings.warn( + "Bare generators are deprecated, " "use @coroutine wrapper", + DeprecationWarning, + ) + elif isinstance(handler, type) and issubclass(handler, AbstractView): + pass + else: + warnings.warn( + "Bare functions are deprecated, " "use async ones", DeprecationWarning + ) + + @wraps(handler) + async def handler_wrapper(request: Request) -> StreamResponse: + result = old_handler(request) + if asyncio.iscoroutine(result): + return await result + return result # type: ignore[return-value] + + old_handler = handler + handler = handler_wrapper + + self._method = method + self._handler = handler + self._expect_handler = expect_handler + self._resource = resource + + @property + def method(self) -> str: + return self._method + + @property + def handler(self) -> Handler: + return self._handler + + @property + @abc.abstractmethod + def name(self) -> Optional[str]: + """Optional route's name, always equals to resource's name.""" + + @property + def resource(self) -> Optional[AbstractResource]: + return self._resource + + @abc.abstractmethod + def get_info(self) -> _InfoDict: + """Return a dict with additional info useful for introspection""" + + @abc.abstractmethod # pragma: no branch + def url_for(self, *args: str, **kwargs: str) -> URL: + """Construct url for route with additional params.""" + + async def handle_expect_header(self, request: Request) -> None: + await self._expect_handler(request) + + +class UrlMappingMatchInfo(BaseDict, AbstractMatchInfo): + def __init__(self, match_dict: Dict[str, str], route: AbstractRoute): + super().__init__(match_dict) + self._route = route + self._apps: List[Application] = [] + self._current_app: Optional[Application] = None + self._frozen = False + + @property + def handler(self) -> Handler: + return self._route.handler + + @property + def route(self) -> AbstractRoute: + return self._route + + @property + def expect_handler(self) -> _ExpectHandler: + return self._route.handle_expect_header + + @property + def http_exception(self) -> Optional[HTTPException]: + return None + + def get_info(self) -> _InfoDict: # type: ignore[override] + return self._route.get_info() + + @property + def apps(self) -> Tuple["Application", ...]: + return tuple(self._apps) + + def add_app(self, app: "Application") -> None: + if self._frozen: + raise RuntimeError("Cannot change apps stack after .freeze() call") + if self._current_app is None: + self._current_app = app + self._apps.insert(0, app) + + @property + def current_app(self) -> "Application": + app = self._current_app + assert app is not None + return app + + 
@contextmanager
+    def set_current_app(self, app: "Application") -> Generator[None, None, None]:
+        if DEBUG:  # pragma: no cover
+            if app not in self._apps:
+                raise RuntimeError(
+                    "Expected one of the following apps {!r}, got {!r}".format(
+                        self._apps, app
+                    )
+                )
+        prev = self._current_app
+        self._current_app = app
+        try:
+            yield
+        finally:
+            self._current_app = prev
+
+    def freeze(self) -> None:
+        self._frozen = True
+
+    def __repr__(self) -> str:
+        return f"<MatchInfo {super().__repr__()}: {self._route}>"
+
+
+class MatchInfoError(UrlMappingMatchInfo):
+    def __init__(self, http_exception: HTTPException) -> None:
+        self._exception = http_exception
+        super().__init__({}, SystemRoute(self._exception))
+
+    @property
+    def http_exception(self) -> HTTPException:
+        return self._exception
+
+    def __repr__(self) -> str:
+        return "<MatchInfoError {}: {}>".format(
+            self._exception.status, self._exception.reason
+        )
+
+
+async def _default_expect_handler(request: Request) -> None:
+    """Default handler for Expect header.
+
+    Just send "100 Continue" to client.
+    raise HTTPExpectationFailed if value of header is not "100-continue"
+    """
+    expect = request.headers.get(hdrs.EXPECT, "")
+    if request.version == HttpVersion11:
+        if expect.lower() == "100-continue":
+            await request.writer.write(b"HTTP/1.1 100 Continue\r\n\r\n")
+        else:
+            raise HTTPExpectationFailed(text="Unknown Expect: %s" % expect)
+
+
+class Resource(AbstractResource):
+    def __init__(self, *, name: Optional[str] = None) -> None:
+        super().__init__(name=name)
+        self._routes: List[ResourceRoute] = []
+
+    def add_route(
+        self,
+        method: str,
+        handler: Union[Type[AbstractView], Handler],
+        *,
+        expect_handler: Optional[_ExpectHandler] = None,
+    ) -> "ResourceRoute":
+
+        for route_obj in self._routes:
+            if route_obj.method == method or route_obj.method == hdrs.METH_ANY:
+                raise RuntimeError(
+                    "Added route will never be executed, "
+                    "method {route.method} is already "
+                    "registered".format(route=route_obj)
+                )
+
+        route_obj = ResourceRoute(method, handler, self, expect_handler=expect_handler)
+        self.register_route(route_obj)
+        return route_obj
+
+    def register_route(self, route: "ResourceRoute") -> None:
+        assert isinstance(
+            route, ResourceRoute
+        ), f"Instance of Route class is required, got {route!r}"
+        self._routes.append(route)
+
+    async def resolve(self, request: Request) -> _Resolve:
+        allowed_methods: Set[str] = set()
+
+        match_dict = self._match(request.rel_url.raw_path)
+        if match_dict is None:
+            return None, allowed_methods
+
+        for route_obj in self._routes:
+            route_method = route_obj.method
+            allowed_methods.add(route_method)
+
+            if route_method == request.method or route_method == hdrs.METH_ANY:
+                return (UrlMappingMatchInfo(match_dict, route_obj), allowed_methods)
+        else:
+            return None, allowed_methods
+
+    @abc.abstractmethod
+    def _match(self, path: str) -> Optional[Dict[str, str]]:
+        pass  # pragma: no cover
+
+    def __len__(self) -> int:
+        return len(self._routes)
+
+    def __iter__(self) -> Iterator[AbstractRoute]:
+        return iter(self._routes)
+
+    # TODO: implement all abstract methods
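+
+# Editor's sketch (not part of upstream aiohttp): resolve() returns a
+# (match_info, allowed_methods) pair; a None match with a non-empty method set
+# is what UrlDispatcher.resolve() below turns into "405 Method Not Allowed":
+#
+#     match_info, allowed = await resource.resolve(request)
+#     if match_info is None and allowed:
+#         ...  # respond 405 and advertise `allowed`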
+
+
+class PlainResource(Resource):
+    def __init__(self, path: str, *, name: Optional[str] = None) -> None:
+        super().__init__(name=name)
+        assert not path or path.startswith("/")
+        self._path = path
+
+    @property
+    def canonical(self) -> str:
+        return self._path
+
+    def freeze(self) -> None:
+        if not self._path:
+            self._path = "/"
+
+    def add_prefix(self, prefix: str) -> None:
+        assert prefix.startswith("/")
+        assert not prefix.endswith("/")
+        assert len(prefix) > 1
+        self._path = prefix + self._path
+
+    def _match(self, path: str) -> Optional[Dict[str, str]]:
+        # string comparison is about 10 times faster than regexp matching
+        if self._path == path:
+            return {}
+        else:
+            return None
+
+    def raw_match(self, path: str) -> bool:
+        return self._path == path
+
+    def get_info(self) -> _InfoDict:
+        return {"path": self._path}
+
+    def url_for(self) -> URL:  # type: ignore[override]
+        return URL.build(path=self._path, encoded=True)
+
+    def __repr__(self) -> str:
+        name = "'" + self.name + "' " if self.name is not None else ""
+        return f"<PlainResource {name} {self._path}>"
+
+
+class DynamicResource(Resource):
+
+    DYN = re.compile(r"\{(?P<var>[_a-zA-Z][_a-zA-Z0-9]*)\}")
+    DYN_WITH_RE = re.compile(r"\{(?P<var>[_a-zA-Z][_a-zA-Z0-9]*):(?P<re>.+)\}")
+    GOOD = r"[^{}/]+"
+
+    def __init__(self, path: str, *, name: Optional[str] = None) -> None:
+        super().__init__(name=name)
+        pattern = ""
+        formatter = ""
+        for part in ROUTE_RE.split(path):
+            match = self.DYN.fullmatch(part)
+            if match:
+                pattern += "(?P<{}>{})".format(match.group("var"), self.GOOD)
+                formatter += "{" + match.group("var") + "}"
+                continue
+
+            match = self.DYN_WITH_RE.fullmatch(part)
+            if match:
+                pattern += "(?P<{var}>{re})".format(**match.groupdict())
+                formatter += "{" + match.group("var") + "}"
+                continue
+
+            if "{" in part or "}" in part:
+                raise ValueError(f"Invalid path '{path}'['{part}']")
+
+            part = _requote_path(part)
+            formatter += part
+            pattern += re.escape(part)
+
+        try:
+            compiled = re.compile(pattern)
+        except re.error as exc:
+            raise ValueError(f"Bad pattern '{pattern}': {exc}") from None
+        assert compiled.pattern.startswith(PATH_SEP)
+        assert formatter.startswith("/")
+        self._pattern = compiled
+        self._formatter = formatter
+
+    @property
+    def canonical(self) -> str:
+        return self._formatter
+
+    def add_prefix(self, prefix: str) -> None:
+        assert prefix.startswith("/")
+        assert not prefix.endswith("/")
+        assert len(prefix) > 1
+        self._pattern = re.compile(re.escape(prefix) + self._pattern.pattern)
+        self._formatter = prefix + self._formatter
+
+    def _match(self, path: str) -> Optional[Dict[str, str]]:
+        match = self._pattern.fullmatch(path)
+        if match is None:
+            return None
+        else:
+            return {
+                key: _unquote_path(value) for key, value in match.groupdict().items()
+            }
+
+    def raw_match(self, path: str) -> bool:
+        return self._formatter == path
+
+    def get_info(self) -> _InfoDict:
+        return {"formatter": self._formatter, "pattern": self._pattern}
+
+    def url_for(self, **parts: str) -> URL:
+        url = self._formatter.format_map({k: _quote_path(v) for k, v in parts.items()})
+        return URL.build(path=url, encoded=True)
+
+    def __repr__(self) -> str:
+        name = "'" + self.name + "' " if self.name is not None else ""
+        return "<DynamicResource {name} {formatter}>".format(
+            name=name, formatter=self._formatter
+        )
+
+
+class PrefixResource(AbstractResource):
+    def __init__(self, prefix: str, *, name: Optional[str] = None) -> None:
+        assert not prefix or prefix.startswith("/"), prefix
+        assert prefix in ("", "/") or not prefix.endswith("/"), prefix
+        super().__init__(name=name)
+        self._prefix = _requote_path(prefix)
+        self._prefix2 = self._prefix + "/"
+
+    @property
+    def canonical(self) -> str:
+        return self._prefix
+
+    def add_prefix(self, prefix: str) -> None:
+        assert prefix.startswith("/")
+        assert not prefix.endswith("/")
+        assert len(prefix) > 1
+        self._prefix = prefix + self._prefix
+        self._prefix2 = self._prefix + "/"
+
+    def raw_match(self, prefix: str) -> bool:
+        return False
+
+    # TODO: impl missing abstract methods
+
+
+class StaticResource(PrefixResource):
+    VERSION_KEY = "v"
+
+    def __init__(
+        self,
prefix: str, + directory: PathLike, + *, + name: Optional[str] = None, + expect_handler: Optional[_ExpectHandler] = None, + chunk_size: int = 256 * 1024, + show_index: bool = False, + follow_symlinks: bool = False, + append_version: bool = False, + ) -> None: + super().__init__(prefix, name=name) + try: + directory = Path(directory) + if str(directory).startswith("~"): + directory = Path(os.path.expanduser(str(directory))) + directory = directory.resolve() + if not directory.is_dir(): + raise ValueError("Not a directory") + except (FileNotFoundError, ValueError) as error: + raise ValueError(f"No directory exists at '{directory}'") from error + self._directory = directory + self._show_index = show_index + self._chunk_size = chunk_size + self._follow_symlinks = follow_symlinks + self._expect_handler = expect_handler + self._append_version = append_version + + self._routes = { + "GET": ResourceRoute( + "GET", self._handle, self, expect_handler=expect_handler + ), + "HEAD": ResourceRoute( + "HEAD", self._handle, self, expect_handler=expect_handler + ), + } + + def url_for( # type: ignore[override] + self, + *, + filename: Union[str, Path], + append_version: Optional[bool] = None, + ) -> URL: + if append_version is None: + append_version = self._append_version + if isinstance(filename, Path): + filename = str(filename) + filename = filename.lstrip("/") + + url = URL.build(path=self._prefix, encoded=True) + # filename is not encoded + if YARL_VERSION < (1, 6): + url = url / filename.replace("%", "%25") + else: + url = url / filename + + if append_version: + try: + filepath = self._directory.joinpath(filename).resolve() + if not self._follow_symlinks: + filepath.relative_to(self._directory) + except (ValueError, FileNotFoundError): + # ValueError for case when path point to symlink + # with follow_symlinks is False + return url # relatively safe + if filepath.is_file(): + # TODO cache file content + # with file watcher for cache invalidation + with filepath.open("rb") as f: + file_bytes = f.read() + h = self._get_file_hash(file_bytes) + url = url.with_query({self.VERSION_KEY: h}) + return url + return url + + @staticmethod + def _get_file_hash(byte_array: bytes) -> str: + m = hashlib.sha256() # todo sha256 can be configurable param + m.update(byte_array) + b64 = base64.urlsafe_b64encode(m.digest()) + return b64.decode("ascii") + + def get_info(self) -> _InfoDict: + return { + "directory": self._directory, + "prefix": self._prefix, + "routes": self._routes, + } + + def set_options_route(self, handler: Handler) -> None: + if "OPTIONS" in self._routes: + raise RuntimeError("OPTIONS route was set already") + self._routes["OPTIONS"] = ResourceRoute( + "OPTIONS", handler, self, expect_handler=self._expect_handler + ) + + async def resolve(self, request: Request) -> _Resolve: + path = request.rel_url.raw_path + method = request.method + allowed_methods = set(self._routes) + if not path.startswith(self._prefix2) and path != self._prefix: + return None, set() + + if method not in allowed_methods: + return None, allowed_methods + + match_dict = {"filename": _unquote_path(path[len(self._prefix) + 1 :])} + return (UrlMappingMatchInfo(match_dict, self._routes[method]), allowed_methods) + + def __len__(self) -> int: + return len(self._routes) + + def __iter__(self) -> Iterator[AbstractRoute]: + return iter(self._routes.values()) + + async def _handle(self, request: Request) -> StreamResponse: + rel_url = request.match_info["filename"] + try: + filename = Path(rel_url) + if filename.anchor: + # rel_url is an 
absolute name like
+                # /static/\\machine_name\c$ or /static/D:\path
+                # where the static dir is totally different
+                raise HTTPForbidden()
+            filepath = self._directory.joinpath(filename).resolve()
+            if not self._follow_symlinks:
+                filepath.relative_to(self._directory)
+        except (ValueError, FileNotFoundError) as error:
+            # relatively safe
+            raise HTTPNotFound() from error
+        except HTTPForbidden:
+            raise
+        except Exception as error:
+            # perm error or other kind!
+            request.app.logger.exception(error)
+            raise HTTPNotFound() from error
+
+        # on opening a dir, load its contents if allowed
+        if filepath.is_dir():
+            if self._show_index:
+                try:
+                    return Response(
+                        text=self._directory_as_html(filepath), content_type="text/html"
+                    )
+                except PermissionError:
+                    raise HTTPForbidden()
+            else:
+                raise HTTPForbidden()
+        elif filepath.is_file():
+            return FileResponse(filepath, chunk_size=self._chunk_size)
+        else:
+            raise HTTPNotFound
+
+    def _directory_as_html(self, filepath: Path) -> str:
+        # returns directory's index as html
+
+        # sanity check
+        assert filepath.is_dir()
+
+        relative_path_to_dir = filepath.relative_to(self._directory).as_posix()
+        index_of = f"Index of /{relative_path_to_dir}"
+        h1 = f"<h1>{index_of}</h1>"
+
+        index_list = []
+        dir_index = filepath.iterdir()
+        for _file in sorted(dir_index):
+            # show file url as relative to static path
+            rel_path = _file.relative_to(self._directory).as_posix()
+            file_url = self._prefix + "/" + rel_path
+
+            # if file is a directory, add '/' to the end of the name
+            if _file.is_dir():
+                file_name = f"{_file.name}/"
+            else:
+                file_name = _file.name
+
+            index_list.append(
+                '<li><a href="{url}">{name}</a></li>'.format(
+                    url=file_url, name=file_name
+                )
+            )
+        ul = "<ul>\n{}\n</ul>".format("\n".join(index_list))
+        body = f"<body>\n{h1}\n{ul}\n</body>"
+
+        head_str = f"<head>\n<title>{index_of}</title>\n</head>"
+        html = f"<html>\n{head_str}\n{body}\n</html>"
+
+        return html
+
+    def __repr__(self) -> str:
+        name = "'" + self.name + "'" if self.name is not None else ""
+        return "<StaticResource {name} {path} -> {directory!r}>".format(
+            name=name, path=self._prefix, directory=self._directory
+        )
+
+
+class PrefixedSubAppResource(PrefixResource):
+    def __init__(self, prefix: str, app: "Application") -> None:
+        super().__init__(prefix)
+        self._app = app
+        for resource in app.router.resources():
+            resource.add_prefix(prefix)
+
+    def add_prefix(self, prefix: str) -> None:
+        super().add_prefix(prefix)
+        for resource in self._app.router.resources():
+            resource.add_prefix(prefix)
+
+    def url_for(self, *args: str, **kwargs: str) -> URL:
+        raise RuntimeError(".url_for() is not supported " "by sub-application root")
+
+    def get_info(self) -> _InfoDict:
+        return {"app": self._app, "prefix": self._prefix}
+
+    async def resolve(self, request: Request) -> _Resolve:
+        if (
+            not request.url.raw_path.startswith(self._prefix2)
+            and request.url.raw_path != self._prefix
+        ):
+            return None, set()
+        match_info = await self._app.router.resolve(request)
+        match_info.add_app(self._app)
+        if isinstance(match_info.http_exception, HTTPMethodNotAllowed):
+            methods = match_info.http_exception.allowed_methods
+        else:
+            methods = set()
+        return match_info, methods
+
+    def __len__(self) -> int:
+        return len(self._app.router.routes())
+
+    def __iter__(self) -> Iterator[AbstractRoute]:
+        return iter(self._app.router.routes())
+
+    def __repr__(self) -> str:
+        return "<PrefixedSubAppResource {prefix} -> {app!r}>".format(
+            prefix=self._prefix, app=self._app
+        )
+
+
+class AbstractRuleMatching(abc.ABC):
+    @abc.abstractmethod  # pragma: no branch
+    async def match(self, request: Request) -> bool:
+        """Return bool if the request satisfies the criteria"""
+
+    @abc.abstractmethod  # pragma: no branch
+    def get_info(self) -> _InfoDict:
+        """Return a dict with additional info useful for introspection"""
+
+    @property
+    @abc.abstractmethod  # pragma: no branch
+    def canonical(self) -> str:
+        """Return a str"""
+
+
+class Domain(AbstractRuleMatching):
+    re_part = re.compile(r"(?!-)[a-z\d-]{1,63}(?<!-)")
+
+    def __init__(self, domain: str) -> None:
+        super().__init__()
+        self._domain = self.validation(domain)
+
+    @property
+    def canonical(self) -> str:
+        return self._domain
+
+    def validation(self, domain: str) -> str:
+        if not isinstance(domain, str):
+            raise TypeError("Domain must be str")
+        domain = domain.rstrip(".").lower()
+        if not domain:
+            raise ValueError("Domain cannot be empty")
+        elif "://" in domain:
+            raise ValueError("Scheme not supported")
+        url = URL("http://" + domain)
+        assert url.raw_host is not None
+        if not all(self.re_part.fullmatch(x) for x in url.raw_host.split(".")):
+            raise ValueError("Domain not valid")
+        if url.port == 80:
+            return url.raw_host
+        return f"{url.raw_host}:{url.port}"
+
+    async def match(self, request: Request) -> bool:
+        host = request.headers.get(hdrs.HOST)
+        if not host:
+            return False
+        return self.match_domain(host)
+
+    def match_domain(self, host: str) -> bool:
+        return host.lower() == self._domain
+
+    def get_info(self) -> _InfoDict:
+        return {"domain": self._domain}
+
+
+class MaskDomain(Domain):
+    re_part = re.compile(r"(?!-)[a-z\d\*-]{1,63}(?<!-)")

+    def __init__(self, domain: str) -> None:
+        super().__init__(domain)
+        mask = self._domain.replace(".", r"\.").replace("*", ".*")
+        self._mask = re.compile(mask)
+
+    @property
+    def canonical(self) -> str:
+        return self._mask.pattern
+
+    def match_domain(self, host: str) -> bool:
+        return self._mask.fullmatch(host) is not None
+
+
+class MatchedSubAppResource(PrefixedSubAppResource):
+    def __init__(self, rule: AbstractRuleMatching, app: "Application") -> None:
+        AbstractResource.__init__(self)
+        self._prefix = ""
+        self._app = app
+        self._rule = rule
+
+    @property
+    def canonical(self) -> str:
+        return self._rule.canonical
+
+    def get_info(self) -> _InfoDict:
+        return {"app": self._app, "rule": self._rule}
+
+    async def resolve(self, request: Request) -> _Resolve:
+        if not await self._rule.match(request):
+            return None, set()
+        match_info = await self._app.router.resolve(request)
+        match_info.add_app(self._app)
+        if isinstance(match_info.http_exception, HTTPMethodNotAllowed):
+            methods = match_info.http_exception.allowed_methods
+        else:
+            methods = set()
+        return match_info, methods
+
+    def __repr__(self) -> str:
+        return "<MatchedSubAppResource -> {app!r}>" "".format(app=self._app)
+
+
+class ResourceRoute(AbstractRoute):
+    """A route with resource"""
+
+    def __init__(
+        self,
+        method: str,
+        handler: Union[Handler, Type[AbstractView]],
+        resource: AbstractResource,
+        *,
+        expect_handler: Optional[_ExpectHandler] = None,
+    ) -> None:
+        super().__init__(
+            method, handler, expect_handler=expect_handler, resource=resource
+        )
+
+    def __repr__(self) -> str:
+        return "<ResourceRoute [{method}] {resource} -> {handler!r}".format(
+            method=self.method, resource=self._resource, handler=self.handler
+        )
+
+    @property
+    def name(self) -> Optional[str]:
+        if self._resource is None:
+            return None
+        return self._resource.name
+
+    def url_for(self, *args: str, **kwargs: str) -> URL:
+        """Construct url for route with additional params."""
+        assert self._resource is not None
+        return self._resource.url_for(*args, **kwargs)
+
+    def get_info(self) -> _InfoDict:
+        assert self._resource is not None
+        return self._resource.get_info()
+
+
+class SystemRoute(AbstractRoute):
+    def __init__(self, http_exception: HTTPException) -> None:
+        super().__init__(hdrs.METH_ANY, self._handle)
+        self._http_exception = http_exception
+
+    def url_for(self, *args: str, **kwargs: str) -> URL:
+        raise RuntimeError(".url_for() is not allowed for SystemRoute")
+
+    @property
+    def name(self) -> Optional[str]:
+        return None
+
+    def get_info(self) -> _InfoDict:
+        return {"http_exception": self._http_exception}
+
+    async def _handle(self, request: Request) -> StreamResponse:
+        raise self._http_exception
+
+    @property
+    def status(self) -> int:
+        return self._http_exception.status
+
+    @property
+    def reason(self) -> str:
+        return self._http_exception.reason
+
+    def __repr__(self) -> str:
+        return "<SystemRoute {self.status}: {self.reason}>".format(self=self)
+
+
+class View(AbstractView):
+    async def _iter(self) -> StreamResponse:
+        if self.request.method not in hdrs.METH_ALL:
+            self._raise_allowed_methods()
+        method: Callable[[], Awaitable[StreamResponse]] = getattr(
+            self, self.request.method.lower(), None
+        )
+        if method is None:
+            self._raise_allowed_methods()
+        resp = await method()
+        return resp
+
+    def __await__(self) -> Generator[Any, None, StreamResponse]:
+        return self._iter().__await__()
+
+    def _raise_allowed_methods(self) -> None:
+        allowed_methods = {m for m in hdrs.METH_ALL if hasattr(self, m.lower())}
+        raise HTTPMethodNotAllowed(self.request.method, allowed_methods)
+
+
+class ResourcesView(Sized, Iterable[AbstractResource], Container[AbstractResource]):
def __init__(self, resources: List[AbstractResource]) -> None: + self._resources = resources + + def __len__(self) -> int: + return len(self._resources) + + def __iter__(self) -> Iterator[AbstractResource]: + yield from self._resources + + def __contains__(self, resource: object) -> bool: + return resource in self._resources + + +class RoutesView(Sized, Iterable[AbstractRoute], Container[AbstractRoute]): + def __init__(self, resources: List[AbstractResource]): + self._routes: List[AbstractRoute] = [] + for resource in resources: + for route in resource: + self._routes.append(route) + + def __len__(self) -> int: + return len(self._routes) + + def __iter__(self) -> Iterator[AbstractRoute]: + yield from self._routes + + def __contains__(self, route: object) -> bool: + return route in self._routes + + +class UrlDispatcher(AbstractRouter, Mapping[str, AbstractResource]): + + NAME_SPLIT_RE = re.compile(r"[.:-]") + + def __init__(self) -> None: + super().__init__() + self._resources: List[AbstractResource] = [] + self._named_resources: Dict[str, AbstractResource] = {} + + async def resolve(self, request: Request) -> UrlMappingMatchInfo: + method = request.method + allowed_methods: Set[str] = set() + + for resource in self._resources: + match_dict, allowed = await resource.resolve(request) + if match_dict is not None: + return match_dict + else: + allowed_methods |= allowed + + if allowed_methods: + return MatchInfoError(HTTPMethodNotAllowed(method, allowed_methods)) + else: + return MatchInfoError(HTTPNotFound()) + + def __iter__(self) -> Iterator[str]: + return iter(self._named_resources) + + def __len__(self) -> int: + return len(self._named_resources) + + def __contains__(self, resource: object) -> bool: + return resource in self._named_resources + + def __getitem__(self, name: str) -> AbstractResource: + return self._named_resources[name] + + def resources(self) -> ResourcesView: + return ResourcesView(self._resources) + + def routes(self) -> RoutesView: + return RoutesView(self._resources) + + def named_resources(self) -> Mapping[str, AbstractResource]: + return MappingProxyType(self._named_resources) + + def register_resource(self, resource: AbstractResource) -> None: + assert isinstance( + resource, AbstractResource + ), f"Instance of AbstractResource class is required, got {resource!r}" + if self.frozen: + raise RuntimeError("Cannot register a resource into frozen router.") + + name = resource.name + + if name is not None: + parts = self.NAME_SPLIT_RE.split(name) + for part in parts: + if keyword.iskeyword(part): + raise ValueError( + f"Incorrect route name {name!r}, " + "python keywords cannot be used " + "for route name" + ) + if not part.isidentifier(): + raise ValueError( + "Incorrect route name {!r}, " + "the name should be a sequence of " + "python identifiers separated " + "by dash, dot or column".format(name) + ) + if name in self._named_resources: + raise ValueError( + "Duplicate {!r}, " + "already handled by {!r}".format(name, self._named_resources[name]) + ) + self._named_resources[name] = resource + self._resources.append(resource) + + def add_resource(self, path: str, *, name: Optional[str] = None) -> Resource: + if path and not path.startswith("/"): + raise ValueError("path should be started with / or be empty") + # Reuse last added resource if path and name are the same + if self._resources: + resource = self._resources[-1] + if resource.name == name and resource.raw_match(path): + return cast(Resource, resource) + if not ("{" in path or "}" in path or 
ROUTE_RE.search(path)): + resource = PlainResource(_requote_path(path), name=name) + self.register_resource(resource) + return resource + resource = DynamicResource(path, name=name) + self.register_resource(resource) + return resource + + def add_route( + self, + method: str, + path: str, + handler: Union[Handler, Type[AbstractView]], + *, + name: Optional[str] = None, + expect_handler: Optional[_ExpectHandler] = None, + ) -> AbstractRoute: + resource = self.add_resource(path, name=name) + return resource.add_route(method, handler, expect_handler=expect_handler) + + def add_static( + self, + prefix: str, + path: PathLike, + *, + name: Optional[str] = None, + expect_handler: Optional[_ExpectHandler] = None, + chunk_size: int = 256 * 1024, + show_index: bool = False, + follow_symlinks: bool = False, + append_version: bool = False, + ) -> AbstractResource: + """Add static files view. + + prefix - url prefix + path - folder with files + + """ + assert prefix.startswith("/") + if prefix.endswith("/"): + prefix = prefix[:-1] + resource = StaticResource( + prefix, + path, + name=name, + expect_handler=expect_handler, + chunk_size=chunk_size, + show_index=show_index, + follow_symlinks=follow_symlinks, + append_version=append_version, + ) + self.register_resource(resource) + return resource + + def add_head(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute: + """Shortcut for add_route with method HEAD.""" + return self.add_route(hdrs.METH_HEAD, path, handler, **kwargs) + + def add_options(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute: + """Shortcut for add_route with method OPTIONS.""" + return self.add_route(hdrs.METH_OPTIONS, path, handler, **kwargs) + + def add_get( + self, + path: str, + handler: Handler, + *, + name: Optional[str] = None, + allow_head: bool = True, + **kwargs: Any, + ) -> AbstractRoute: + """Shortcut for add_route with method GET. + + If allow_head is true, another + route is added allowing head requests to the same endpoint. + """ + resource = self.add_resource(path, name=name) + if allow_head: + resource.add_route(hdrs.METH_HEAD, handler, **kwargs) + return resource.add_route(hdrs.METH_GET, handler, **kwargs) + + def add_post(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute: + """Shortcut for add_route with method POST.""" + return self.add_route(hdrs.METH_POST, path, handler, **kwargs) + + def add_put(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute: + """Shortcut for add_route with method PUT.""" + return self.add_route(hdrs.METH_PUT, path, handler, **kwargs) + + def add_patch(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute: + """Shortcut for add_route with method PATCH.""" + return self.add_route(hdrs.METH_PATCH, path, handler, **kwargs) + + def add_delete(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute: + """Shortcut for add_route with method DELETE.""" + return self.add_route(hdrs.METH_DELETE, path, handler, **kwargs) + + def add_view( + self, path: str, handler: Type[AbstractView], **kwargs: Any + ) -> AbstractRoute: + """Shortcut for add_route with ANY methods for a class-based view.""" + return self.add_route(hdrs.METH_ANY, path, handler, **kwargs) + + def freeze(self) -> None: + super().freeze() + for resource in self._resources: + resource.freeze() + + def add_routes(self, routes: Iterable[AbstractRouteDef]) -> List[AbstractRoute]: + """Append routes to route table. + + Parameter should be a sequence of RouteDef objects. 
+ + Returns a list of registered AbstractRoute instances. + """ + registered_routes = [] + for route_def in routes: + registered_routes.extend(route_def.register(self)) + return registered_routes + + +def _quote_path(value: str) -> str: + if YARL_VERSION < (1, 6): + value = value.replace("%", "%25") + return URL.build(path=value, encoded=False).raw_path + + +def _unquote_path(value: str) -> str: + return URL.build(path=value, encoded=True).path + + +def _requote_path(value: str) -> str: + # Quote non-ascii characters and other characters which must be quoted, + # but preserve existing %-sequences. + result = _quote_path(value) + if "%" in value: + result = result.replace("%25", "%") + return result diff --git a/env/lib/python3.8/site-packages/aiohttp/web_ws.py b/env/lib/python3.8/site-packages/aiohttp/web_ws.py new file mode 100644 index 00000000..0d32a218 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/web_ws.py @@ -0,0 +1,487 @@ +import asyncio +import base64 +import binascii +import hashlib +import json +from typing import Any, Iterable, Optional, Tuple, cast + +import async_timeout +import attr +from multidict import CIMultiDict + +from . import hdrs +from .abc import AbstractStreamWriter +from .helpers import call_later, set_result +from .http import ( + WS_CLOSED_MESSAGE, + WS_CLOSING_MESSAGE, + WS_KEY, + WebSocketError, + WebSocketReader, + WebSocketWriter, + WSCloseCode, + WSMessage, + WSMsgType as WSMsgType, + ws_ext_gen, + ws_ext_parse, +) +from .log import ws_logger +from .streams import EofStream, FlowControlDataQueue +from .typedefs import Final, JSONDecoder, JSONEncoder +from .web_exceptions import HTTPBadRequest, HTTPException +from .web_request import BaseRequest +from .web_response import StreamResponse + +__all__ = ( + "WebSocketResponse", + "WebSocketReady", + "WSMsgType", +) + +THRESHOLD_CONNLOST_ACCESS: Final[int] = 5 + + +@attr.s(auto_attribs=True, frozen=True, slots=True) +class WebSocketReady: + ok: bool + protocol: Optional[str] + + def __bool__(self) -> bool: + return self.ok + + +class WebSocketResponse(StreamResponse): + + _length_check = False + + def __init__( + self, + *, + timeout: float = 10.0, + receive_timeout: Optional[float] = None, + autoclose: bool = True, + autoping: bool = True, + heartbeat: Optional[float] = None, + protocols: Iterable[str] = (), + compress: bool = True, + max_msg_size: int = 4 * 1024 * 1024, + ) -> None: + super().__init__(status=101) + self._protocols = protocols + self._ws_protocol: Optional[str] = None + self._writer: Optional[WebSocketWriter] = None + self._reader: Optional[FlowControlDataQueue[WSMessage]] = None + self._closed = False + self._closing = False + self._conn_lost = 0 + self._close_code: Optional[int] = None + self._loop: Optional[asyncio.AbstractEventLoop] = None + self._waiting: Optional[asyncio.Future[bool]] = None + self._exception: Optional[BaseException] = None + self._timeout = timeout + self._receive_timeout = receive_timeout + self._autoclose = autoclose + self._autoping = autoping + self._heartbeat = heartbeat + self._heartbeat_cb: Optional[asyncio.TimerHandle] = None + if heartbeat is not None: + self._pong_heartbeat = heartbeat / 2.0 + self._pong_response_cb: Optional[asyncio.TimerHandle] = None + self._compress = compress + self._max_msg_size = max_msg_size + + def _cancel_heartbeat(self) -> None: + if self._pong_response_cb is not None: + self._pong_response_cb.cancel() + self._pong_response_cb = None + + if self._heartbeat_cb is not None: + self._heartbeat_cb.cancel() + 
self._heartbeat_cb = None + + def _reset_heartbeat(self) -> None: + self._cancel_heartbeat() + + if self._heartbeat is not None: + assert self._loop is not None + self._heartbeat_cb = call_later( + self._send_heartbeat, self._heartbeat, self._loop + ) + + def _send_heartbeat(self) -> None: + if self._heartbeat is not None and not self._closed: + assert self._loop is not None + # fire-and-forget a task is not perfect but maybe ok for + # sending ping. Otherwise we need a long-living heartbeat + # task in the class. + self._loop.create_task(self._writer.ping()) # type: ignore[union-attr] + + if self._pong_response_cb is not None: + self._pong_response_cb.cancel() + self._pong_response_cb = call_later( + self._pong_not_received, self._pong_heartbeat, self._loop + ) + + def _pong_not_received(self) -> None: + if self._req is not None and self._req.transport is not None: + self._closed = True + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + self._exception = asyncio.TimeoutError() + self._req.transport.close() + + async def prepare(self, request: BaseRequest) -> AbstractStreamWriter: + # make pre-check to don't hide it by do_handshake() exceptions + if self._payload_writer is not None: + return self._payload_writer + + protocol, writer = self._pre_start(request) + payload_writer = await super().prepare(request) + assert payload_writer is not None + self._post_start(request, protocol, writer) + await payload_writer.drain() + return payload_writer + + def _handshake( + self, request: BaseRequest + ) -> Tuple["CIMultiDict[str]", str, bool, bool]: + headers = request.headers + if "websocket" != headers.get(hdrs.UPGRADE, "").lower().strip(): + raise HTTPBadRequest( + text=( + "No WebSocket UPGRADE hdr: {}\n Can " + '"Upgrade" only to "WebSocket".' + ).format(headers.get(hdrs.UPGRADE)) + ) + + if "upgrade" not in headers.get(hdrs.CONNECTION, "").lower(): + raise HTTPBadRequest( + text="No CONNECTION upgrade hdr: {}".format( + headers.get(hdrs.CONNECTION) + ) + ) + + # find common sub-protocol between client and server + protocol = None + if hdrs.SEC_WEBSOCKET_PROTOCOL in headers: + req_protocols = [ + str(proto.strip()) + for proto in headers[hdrs.SEC_WEBSOCKET_PROTOCOL].split(",") + ] + + for proto in req_protocols: + if proto in self._protocols: + protocol = proto + break + else: + # No overlap found: Return no protocol as per spec + ws_logger.warning( + "Client protocols %r don’t overlap server-known ones %r", + req_protocols, + self._protocols, + ) + + # check supported version + version = headers.get(hdrs.SEC_WEBSOCKET_VERSION, "") + if version not in ("13", "8", "7"): + raise HTTPBadRequest(text=f"Unsupported version: {version}") + + # check client handshake for validity + key = headers.get(hdrs.SEC_WEBSOCKET_KEY) + try: + if not key or len(base64.b64decode(key)) != 16: + raise HTTPBadRequest(text=f"Handshake error: {key!r}") + except binascii.Error: + raise HTTPBadRequest(text=f"Handshake error: {key!r}") from None + + accept_val = base64.b64encode( + hashlib.sha1(key.encode() + WS_KEY).digest() + ).decode() + response_headers = CIMultiDict( + { + hdrs.UPGRADE: "websocket", + hdrs.CONNECTION: "upgrade", + hdrs.SEC_WEBSOCKET_ACCEPT: accept_val, + } + ) + + notakeover = False + compress = 0 + if self._compress: + extensions = headers.get(hdrs.SEC_WEBSOCKET_EXTENSIONS) + # Server side always get return with no exception. 
+ # If something happened, just drop compress extension + compress, notakeover = ws_ext_parse(extensions, isserver=True) + if compress: + enabledext = ws_ext_gen( + compress=compress, isserver=True, server_notakeover=notakeover + ) + response_headers[hdrs.SEC_WEBSOCKET_EXTENSIONS] = enabledext + + if protocol: + response_headers[hdrs.SEC_WEBSOCKET_PROTOCOL] = protocol + return ( + response_headers, + protocol, + compress, + notakeover, + ) # type: ignore[return-value] + + def _pre_start(self, request: BaseRequest) -> Tuple[str, WebSocketWriter]: + self._loop = request._loop + + headers, protocol, compress, notakeover = self._handshake(request) + + self.set_status(101) + self.headers.update(headers) + self.force_close() + self._compress = compress + transport = request._protocol.transport + assert transport is not None + writer = WebSocketWriter( + request._protocol, transport, compress=compress, notakeover=notakeover + ) + + return protocol, writer + + def _post_start( + self, request: BaseRequest, protocol: str, writer: WebSocketWriter + ) -> None: + self._ws_protocol = protocol + self._writer = writer + + self._reset_heartbeat() + + loop = self._loop + assert loop is not None + self._reader = FlowControlDataQueue(request._protocol, 2**16, loop=loop) + request.protocol.set_parser( + WebSocketReader(self._reader, self._max_msg_size, compress=self._compress) + ) + # disable HTTP keepalive for WebSocket + request.protocol.keep_alive(False) + + def can_prepare(self, request: BaseRequest) -> WebSocketReady: + if self._writer is not None: + raise RuntimeError("Already started") + try: + _, protocol, _, _ = self._handshake(request) + except HTTPException: + return WebSocketReady(False, None) + else: + return WebSocketReady(True, protocol) + + @property + def closed(self) -> bool: + return self._closed + + @property + def close_code(self) -> Optional[int]: + return self._close_code + + @property + def ws_protocol(self) -> Optional[str]: + return self._ws_protocol + + @property + def compress(self) -> bool: + return self._compress + + def exception(self) -> Optional[BaseException]: + return self._exception + + async def ping(self, message: bytes = b"") -> None: + if self._writer is None: + raise RuntimeError("Call .prepare() first") + await self._writer.ping(message) + + async def pong(self, message: bytes = b"") -> None: + # unsolicited pong + if self._writer is None: + raise RuntimeError("Call .prepare() first") + await self._writer.pong(message) + + async def send_str(self, data: str, compress: Optional[bool] = None) -> None: + if self._writer is None: + raise RuntimeError("Call .prepare() first") + if not isinstance(data, str): + raise TypeError("data argument must be str (%r)" % type(data)) + await self._writer.send(data, binary=False, compress=compress) + + async def send_bytes(self, data: bytes, compress: Optional[bool] = None) -> None: + if self._writer is None: + raise RuntimeError("Call .prepare() first") + if not isinstance(data, (bytes, bytearray, memoryview)): + raise TypeError("data argument must be byte-ish (%r)" % type(data)) + await self._writer.send(data, binary=True, compress=compress) + + async def send_json( + self, + data: Any, + compress: Optional[bool] = None, + *, + dumps: JSONEncoder = json.dumps, + ) -> None: + await self.send_str(dumps(data), compress=compress) + + async def write_eof(self) -> None: # type: ignore[override] + if self._eof_sent: + return + if self._payload_writer is None: + raise RuntimeError("Response has not been started") + + await self.close() + 
self._eof_sent = True + + async def close(self, *, code: int = WSCloseCode.OK, message: bytes = b"") -> bool: + if self._writer is None: + raise RuntimeError("Call .prepare() first") + + self._cancel_heartbeat() + reader = self._reader + assert reader is not None + + # we need to break `receive()` cycle first, + # `close()` may be called from different task + if self._waiting is not None and not self._closed: + reader.feed_data(WS_CLOSING_MESSAGE, 0) + await self._waiting + + if not self._closed: + self._closed = True + try: + await self._writer.close(code, message) + writer = self._payload_writer + assert writer is not None + await writer.drain() + except (asyncio.CancelledError, asyncio.TimeoutError): + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + raise + except Exception as exc: + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + self._exception = exc + return True + + if self._closing: + return True + + reader = self._reader + assert reader is not None + try: + async with async_timeout.timeout(self._timeout): + msg = await reader.read() + except asyncio.CancelledError: + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + raise + except Exception as exc: + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + self._exception = exc + return True + + if msg.type == WSMsgType.CLOSE: + self._close_code = msg.data + return True + + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + self._exception = asyncio.TimeoutError() + return True + else: + return False + + async def receive(self, timeout: Optional[float] = None) -> WSMessage: + if self._reader is None: + raise RuntimeError("Call .prepare() first") + + loop = self._loop + assert loop is not None + while True: + if self._waiting is not None: + raise RuntimeError("Concurrent call to receive() is not allowed") + + if self._closed: + self._conn_lost += 1 + if self._conn_lost >= THRESHOLD_CONNLOST_ACCESS: + raise RuntimeError("WebSocket connection is closed.") + return WS_CLOSED_MESSAGE + elif self._closing: + return WS_CLOSING_MESSAGE + + try: + self._waiting = loop.create_future() + try: + async with async_timeout.timeout(timeout or self._receive_timeout): + msg = await self._reader.read() + self._reset_heartbeat() + finally: + waiter = self._waiting + set_result(waiter, True) + self._waiting = None + except (asyncio.CancelledError, asyncio.TimeoutError): + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + raise + except EofStream: + self._close_code = WSCloseCode.OK + await self.close() + return WSMessage(WSMsgType.CLOSED, None, None) + except WebSocketError as exc: + self._close_code = exc.code + await self.close(code=exc.code) + return WSMessage(WSMsgType.ERROR, exc, None) + except Exception as exc: + self._exception = exc + self._closing = True + self._close_code = WSCloseCode.ABNORMAL_CLOSURE + await self.close() + return WSMessage(WSMsgType.ERROR, exc, None) + + if msg.type == WSMsgType.CLOSE: + self._closing = True + self._close_code = msg.data + if not self._closed and self._autoclose: + await self.close() + elif msg.type == WSMsgType.CLOSING: + self._closing = True + elif msg.type == WSMsgType.PING and self._autoping: + await self.pong(msg.data) + continue + elif msg.type == WSMsgType.PONG and self._autoping: + continue + + return msg + + async def receive_str(self, *, timeout: Optional[float] = None) -> str: + msg = await self.receive(timeout) + if msg.type != WSMsgType.TEXT: + raise TypeError( + "Received message {}:{!r} is not WSMsgType.TEXT".format( + msg.type, msg.data + ) + ) + return cast(str, msg.data) + + async def 
receive_bytes(self, *, timeout: Optional[float] = None) -> bytes: + msg = await self.receive(timeout) + if msg.type != WSMsgType.BINARY: + raise TypeError(f"Received message {msg.type}:{msg.data!r} is not bytes") + return cast(bytes, msg.data) + + async def receive_json( + self, *, loads: JSONDecoder = json.loads, timeout: Optional[float] = None + ) -> Any: + data = await self.receive_str(timeout=timeout) + return loads(data) + + async def write(self, data: bytes) -> None: + raise RuntimeError("Cannot call .write() for websocket") + + def __aiter__(self) -> "WebSocketResponse": + return self + + async def __anext__(self) -> WSMessage: + msg = await self.receive() + if msg.type in (WSMsgType.CLOSE, WSMsgType.CLOSING, WSMsgType.CLOSED): + raise StopAsyncIteration + return msg + + def _cancel(self, exc: BaseException) -> None: + if self._reader is not None: + self._reader.set_exception(exc) diff --git a/env/lib/python3.8/site-packages/aiohttp/worker.py b/env/lib/python3.8/site-packages/aiohttp/worker.py new file mode 100644 index 00000000..f1302899 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiohttp/worker.py @@ -0,0 +1,269 @@ +"""Async gunicorn worker for aiohttp.web""" + +import asyncio +import os +import re +import signal +import sys +from types import FrameType +from typing import Any, Awaitable, Callable, Optional, Union # noqa + +from gunicorn.config import AccessLogFormat as GunicornAccessLogFormat +from gunicorn.workers import base + +from aiohttp import web + +from .helpers import set_result +from .web_app import Application +from .web_log import AccessLogger + +try: + import ssl + + SSLContext = ssl.SSLContext +except ImportError: # pragma: no cover + ssl = None # type: ignore[assignment] + SSLContext = object # type: ignore[misc,assignment] + + +__all__ = ("GunicornWebWorker", "GunicornUVLoopWebWorker", "GunicornTokioWebWorker") + + +class GunicornWebWorker(base.Worker): # type: ignore[misc,no-any-unimported] + + DEFAULT_AIOHTTP_LOG_FORMAT = AccessLogger.LOG_FORMAT + DEFAULT_GUNICORN_LOG_FORMAT = GunicornAccessLogFormat.default + + def __init__(self, *args: Any, **kw: Any) -> None: # pragma: no cover + super().__init__(*args, **kw) + + self._task: Optional[asyncio.Task[None]] = None + self.exit_code = 0 + self._notify_waiter: Optional[asyncio.Future[bool]] = None + + def init_process(self) -> None: + # create new event_loop after fork + asyncio.get_event_loop().close() + + self.loop = asyncio.new_event_loop() + asyncio.set_event_loop(self.loop) + + super().init_process() + + def run(self) -> None: + self._task = self.loop.create_task(self._run()) + + try: # ignore all finalization problems + self.loop.run_until_complete(self._task) + except Exception: + self.log.exception("Exception in gunicorn worker") + self.loop.run_until_complete(self.loop.shutdown_asyncgens()) + self.loop.close() + + sys.exit(self.exit_code) + + async def _run(self) -> None: + runner = None + if isinstance(self.wsgi, Application): + app = self.wsgi + elif asyncio.iscoroutinefunction(self.wsgi): + wsgi = await self.wsgi() + if isinstance(wsgi, web.AppRunner): + runner = wsgi + app = runner.app + else: + app = wsgi + else: + raise RuntimeError( + "wsgi app should be either Application or " + "async function returning Application, got {}".format(self.wsgi) + ) + + if runner is None: + access_log = self.log.access_log if self.cfg.accesslog else None + runner = web.AppRunner( + app, + logger=self.log, + keepalive_timeout=self.cfg.keepalive, + access_log=access_log, + 
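+ # _get_valid_log_format() maps gunicorn's default access-log format onto aiohttp's and rejects %(name)s-style options (see below).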
access_log_format=self._get_valid_log_format( + self.cfg.access_log_format + ), + ) + await runner.setup() + + ctx = self._create_ssl_context(self.cfg) if self.cfg.is_ssl else None + + runner = runner + assert runner is not None + server = runner.server + assert server is not None + for sock in self.sockets: + site = web.SockSite( + runner, + sock, + ssl_context=ctx, + shutdown_timeout=self.cfg.graceful_timeout / 100 * 95, + ) + await site.start() + + # If our parent changed then we shut down. + pid = os.getpid() + try: + while self.alive: # type: ignore[has-type] + self.notify() + + cnt = server.requests_count + if self.cfg.max_requests and cnt > self.cfg.max_requests: + self.alive = False + self.log.info("Max requests, shutting down: %s", self) + + elif pid == os.getpid() and self.ppid != os.getppid(): + self.alive = False + self.log.info("Parent changed, shutting down: %s", self) + else: + await self._wait_next_notify() + except BaseException: + pass + + await runner.cleanup() + + def _wait_next_notify(self) -> "asyncio.Future[bool]": + self._notify_waiter_done() + + loop = self.loop + assert loop is not None + self._notify_waiter = waiter = loop.create_future() + self.loop.call_later(1.0, self._notify_waiter_done, waiter) + + return waiter + + def _notify_waiter_done( + self, waiter: Optional["asyncio.Future[bool]"] = None + ) -> None: + if waiter is None: + waiter = self._notify_waiter + if waiter is not None: + set_result(waiter, True) + + if waiter is self._notify_waiter: + self._notify_waiter = None + + def init_signals(self) -> None: + # Set up signals through the event loop API. + + self.loop.add_signal_handler( + signal.SIGQUIT, self.handle_quit, signal.SIGQUIT, None + ) + + self.loop.add_signal_handler( + signal.SIGTERM, self.handle_exit, signal.SIGTERM, None + ) + + self.loop.add_signal_handler( + signal.SIGINT, self.handle_quit, signal.SIGINT, None + ) + + self.loop.add_signal_handler( + signal.SIGWINCH, self.handle_winch, signal.SIGWINCH, None + ) + + self.loop.add_signal_handler( + signal.SIGUSR1, self.handle_usr1, signal.SIGUSR1, None + ) + + self.loop.add_signal_handler( + signal.SIGABRT, self.handle_abort, signal.SIGABRT, None + ) + + # Don't let SIGTERM and SIGUSR1 disturb active requests + # by interrupting system calls + signal.siginterrupt(signal.SIGTERM, False) + signal.siginterrupt(signal.SIGUSR1, False) + # Reset signals so Gunicorn doesn't swallow subprocess return codes + # See: https://github.com/aio-libs/aiohttp/issues/6130 + if sys.version_info < (3, 8): + # Starting from Python 3.8, + # the default child watcher is ThreadedChildWatcher. + # The watcher doesn't depend on SIGCHLD signal, + # there is no need to reset it. + signal.signal(signal.SIGCHLD, signal.SIG_DFL) + + def handle_quit(self, sig: int, frame: FrameType) -> None: + self.alive = False + + # worker_int callback + self.cfg.worker_int(self) + + # wakeup closing process + self._notify_waiter_done() + + def handle_abort(self, sig: int, frame: FrameType) -> None: + self.alive = False + self.exit_code = 1 + self.cfg.worker_abort(self) + sys.exit(1) + + @staticmethod + def _create_ssl_context(cfg: Any) -> "SSLContext": + """Creates SSLContext instance for usage in asyncio.create_server. + + See ssl.SSLSocket.__init__ for more details. 
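+
+ The context is configured from gunicorn's ssl_version, certfile,
+ keyfile, cert_reqs, ca_certs and ciphers settings.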
+ """ + if ssl is None: # pragma: no cover + raise RuntimeError("SSL is not supported.") + + ctx = ssl.SSLContext(cfg.ssl_version) + ctx.load_cert_chain(cfg.certfile, cfg.keyfile) + ctx.verify_mode = cfg.cert_reqs + if cfg.ca_certs: + ctx.load_verify_locations(cfg.ca_certs) + if cfg.ciphers: + ctx.set_ciphers(cfg.ciphers) + return ctx + + def _get_valid_log_format(self, source_format: str) -> str: + if source_format == self.DEFAULT_GUNICORN_LOG_FORMAT: + return self.DEFAULT_AIOHTTP_LOG_FORMAT + elif re.search(r"%\([^\)]+\)", source_format): + raise ValueError( + "Gunicorn's style options in form of `%(name)s` are not " + "supported for the log formatting. Please use aiohttp's " + "format specification to configure access log formatting: " + "http://docs.aiohttp.org/en/stable/logging.html" + "#format-specification" + ) + else: + return source_format + + +class GunicornUVLoopWebWorker(GunicornWebWorker): + def init_process(self) -> None: + import uvloop + + # Close any existing event loop before setting a + # new policy. + asyncio.get_event_loop().close() + + # Setup uvloop policy, so that every + # asyncio.get_event_loop() will create an instance + # of uvloop event loop. + asyncio.set_event_loop_policy(uvloop.EventLoopPolicy()) + + super().init_process() + + +class GunicornTokioWebWorker(GunicornWebWorker): + def init_process(self) -> None: # pragma: no cover + import tokio + + # Close any existing event loop before setting a + # new policy. + asyncio.get_event_loop().close() + + # Setup tokio policy, so that every + # asyncio.get_event_loop() will create an instance + # of tokio event loop. + asyncio.set_event_loop_policy(tokio.EventLoopPolicy()) + + super().init_process() diff --git a/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/INSTALLER b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/LICENSE b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/LICENSE new file mode 100644 index 00000000..7082a2d5 --- /dev/null +++ b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/LICENSE @@ -0,0 +1,201 @@ +Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2013-2019 Nikolay Kim and Andrew Svetlov + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
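The gunicorn worker classes added in worker.py above are wired up from the command line. A minimal sketch of typical usage follows; the module name hello.py and the app variable are illustrative assumptions, not part of this diff:

    # hello.py -- illustrative module name (assumption)
    from aiohttp import web

    async def index(request: web.Request) -> web.Response:
        return web.Response(text="Hello, world")

    app = web.Application()
    app.router.add_get("/", index)

    # Launch under the async worker defined in worker.py above:
    #   $ gunicorn hello:app --bind localhost:8080 --worker-class aiohttp.GunicornWebWorker

As the `_run()` method above shows, the worker also accepts an async factory returning an Application (or an AppRunner), which is useful when the app needs awaitable setup.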
diff --git a/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/METADATA b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/METADATA
new file mode 100644
index 00000000..fc964525
--- /dev/null
+++ b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/METADATA
@@ -0,0 +1,128 @@
+Metadata-Version: 2.1
+Name: aiosignal
+Version: 1.3.1
+Summary: aiosignal: a list of registered asynchronous callbacks
+Home-page: https://github.com/aio-libs/aiosignal
+Maintainer: aiohttp team
+Maintainer-email: team@aiohttp.org
+License: Apache 2.0
+Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby
+Project-URL: CI: GitHub Actions, https://github.com/aio-libs/aiosignal/actions
+Project-URL: Coverage: codecov, https://codecov.io/github/aio-libs/aiosignal
+Project-URL: Docs: RTD, https://docs.aiosignal.org
+Project-URL: GitHub: issues, https://github.com/aio-libs/aiosignal/issues
+Project-URL: GitHub: repo, https://github.com/aio-libs/aiosignal
+Classifier: License :: OSI Approved :: Apache Software License
+Classifier: Intended Audience :: Developers
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3 :: Only
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Operating System :: POSIX
+Classifier: Operating System :: MacOS :: MacOS X
+Classifier: Operating System :: Microsoft :: Windows
+Classifier: Framework :: AsyncIO
+Requires-Python: >=3.7
+Description-Content-Type: text/x-rst
+License-File: LICENSE
+Requires-Dist: frozenlist (>=1.1.0)
+
+=========
+aiosignal
+=========
+
+.. image:: https://github.com/aio-libs/aiosignal/workflows/CI/badge.svg
+ :target: https://github.com/aio-libs/aiosignal/actions?query=workflow%3ACI
+ :alt: GitHub status for master branch
+
+.. image:: https://codecov.io/gh/aio-libs/aiosignal/branch/master/graph/badge.svg
+ :target: https://codecov.io/gh/aio-libs/aiosignal
+ :alt: codecov.io status for master branch
+
+.. image:: https://badge.fury.io/py/aiosignal.svg
+ :target: https://pypi.org/project/aiosignal
+ :alt: Latest PyPI package version
+
+.. image:: https://readthedocs.org/projects/aiosignal/badge/?version=latest
+ :target: https://aiosignal.readthedocs.io/
+ :alt: Latest Read The Docs
+
+.. image:: https://img.shields.io/discourse/topics?server=https%3A%2F%2Faio-libs.discourse.group%2F
+ :target: https://aio-libs.discourse.group/
+ :alt: Discourse group for io-libs
+
+.. image:: https://badges.gitter.im/Join%20Chat.svg
+ :target: https://gitter.im/aio-libs/Lobby
+ :alt: Chat on Gitter
+
+Introduction
+============
+
+A project to manage callbacks in `asyncio` projects.
+
+``Signal`` is a list of registered asynchronous callbacks.
+
+The signal's life-cycle has two stages: after creation its content
+could be filled by using standard list operations: ``sig.append()``
+etc.
+
+After you call ``sig.freeze()`` the signal is *frozen*: adding, removing
+and dropping callbacks is forbidden.
+
+The only available operation is calling the previously registered
+callbacks by using ``await sig.send(data)``.
+
+For concrete usage examples see the Signals section of the Web Server
+Advanced chapter of the `aiohttp documentation`_.
+
+
+Installation
+------------
+
+::
+
+ $ pip install aiosignal
+
+The library requires Python 3.6 or newer.
+
+
+Documentation
+=============
+
+https://aiosignal.readthedocs.io/
+
+Communication channels
+======================
+
+*gitter chat* https://gitter.im/aio-libs/Lobby
+
+Requirements
+============
+
+- Python >= 3.6
+- frozenlist >= 1.0.0
+
+License
+=======
+
+``aiosignal`` is offered under the Apache 2 license.
+
+Source code
+===========
+
+The project is hosted on GitHub_.
+
+Please file an issue in the `bug tracker
+<https://github.com/aio-libs/aiosignal/issues>`_ if you have found a bug
+or have some suggestions to improve the library.
+
+.. _GitHub: https://github.com/aio-libs/aiosignal
+.. _aiohttp documentation: https://docs.aiohttp.org/
diff --git a/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/RECORD b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/RECORD
new file mode 100644
index 00000000..d9aa8de6
--- /dev/null
+++ b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/RECORD
@@ -0,0 +1,10 @@
+aiosignal-1.3.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+aiosignal-1.3.1.dist-info/LICENSE,sha256=b9UkPpLdf5jsacesN3co50kFcJ_1J6W_mNbQJjwE9bY,11332
+aiosignal-1.3.1.dist-info/METADATA,sha256=c0HRnlYzfXKztZPTFDlPfygizTherhG5WdwXlvco0Ug,4008
+aiosignal-1.3.1.dist-info/RECORD,,
+aiosignal-1.3.1.dist-info/WHEEL,sha256=ZL1lC_LiPDNRgDnOl2taCMc83aPEUZgHHv2h-LDgdiM,92
+aiosignal-1.3.1.dist-info/top_level.txt,sha256=z45aNOKGDdrI1roqZY3BGXQ22kJFPHBmVdwtLYLtXC0,10
+aiosignal/__init__.py,sha256=zQNfFYRSd84bswvpFv8ZWjEr5DeYwV3LXbMSyo2222s,867
+aiosignal/__init__.pyi,sha256=xeCddYSS8fZAkz8S4HuKSR2IDe3N7RW_LKcXDPPA1Xk,311
+aiosignal/__pycache__/__init__.cpython-38.pyc,,
+aiosignal/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
diff --git a/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/WHEEL b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/WHEEL
new file mode 100644
index 00000000..5e1f087c
--- /dev/null
+++ b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/WHEEL
@@ -0,0 +1,5 @@
+Wheel-Version: 1.0
+Generator: bdist_wheel (0.38.2)
+Root-Is-Purelib: true
+Tag: py3-none-any
+
diff --git a/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/top_level.txt b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/top_level.txt
new file mode 100644
index 00000000..ac6df3af
--- /dev/null
+++ b/env/lib/python3.8/site-packages/aiosignal-1.3.1.dist-info/top_level.txt
@@ -0,0 +1 @@
+aiosignal
diff --git a/env/lib/python3.8/site-packages/aiosignal/__init__.py b/env/lib/python3.8/site-packages/aiosignal/__init__.py
new file mode 100644
index 00000000..3d288e6e
--- /dev/null
+++ b/env/lib/python3.8/site-packages/aiosignal/__init__.py
@@ -0,0 +1,36 @@
+from frozenlist import FrozenList
+
+__version__ = "1.3.1"
+
+__all__ = ("Signal",)
+
+
+class Signal(FrozenList):
+ """Coroutine-based signal implementation.
+
+ To connect a callback to a signal, use any list method.
+
+ Signals are fired using the send() coroutine, which takes named
+ arguments.
+ """
+
+ __slots__ = ("_owner",)
+
+ def __init__(self, owner):
+ super().__init__()
+ self._owner = owner
+
+ def __repr__(self):
+ return "<Signal owner={}, frozen={}, {!r}>".format(
+ self._owner, self.frozen, list(self)
+ )
+
+ async def send(self, *args, **kwargs):
+ """
+ Sends data to all registered receivers.
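+
+ Raises RuntimeError if the signal has not been frozen yet.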
+ """ + if not self.frozen: + raise RuntimeError("Cannot send non-frozen signal.") + + for receiver in self: + await receiver(*args, **kwargs) # type: ignore diff --git a/env/lib/python3.8/site-packages/aiosignal/__init__.pyi b/env/lib/python3.8/site-packages/aiosignal/__init__.pyi new file mode 100644 index 00000000..d4e3416d --- /dev/null +++ b/env/lib/python3.8/site-packages/aiosignal/__init__.pyi @@ -0,0 +1,12 @@ +from typing import Any, Generic, TypeVar + +from frozenlist import FrozenList + +__all__ = ("Signal",) + +_T = TypeVar("_T") + +class Signal(FrozenList[_T], Generic[_T]): + def __init__(self, owner: Any) -> None: ... + def __repr__(self) -> str: ... + async def send(self, *args: Any, **kwargs: Any) -> None: ... diff --git a/env/lib/python3.8/site-packages/aiosignal/py.typed b/env/lib/python3.8/site-packages/aiosignal/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/INSTALLER b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/LICENSE b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/LICENSE new file mode 100644 index 00000000..033c86b7 --- /dev/null +++ b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/LICENSE @@ -0,0 +1,13 @@ +Copyright 2016-2020 aio-libs collaboration. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
diff --git a/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/METADATA b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/METADATA
new file mode 100644
index 00000000..e68d2347
--- /dev/null
+++ b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/METADATA
@@ -0,0 +1,133 @@
+Metadata-Version: 2.1
+Name: async-timeout
+Version: 4.0.2
+Summary: Timeout context manager for asyncio programs
+Home-page: https://github.com/aio-libs/async-timeout
+Author: Andrew Svetlov
+Author-email: andrew.svetlov@gmail.com
+License: Apache 2
+Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby
+Project-URL: CI: GitHub Actions, https://github.com/aio-libs/async-timeout/actions
+Project-URL: Coverage: codecov, https://codecov.io/github/aio-libs/async-timeout
+Project-URL: GitHub: issues, https://github.com/aio-libs/async-timeout/issues
+Project-URL: GitHub: repo, https://github.com/aio-libs/async-timeout
+Platform: UNKNOWN
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Topic :: Software Development :: Libraries
+Classifier: Framework :: AsyncIO
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: Apache Software License
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3 :: Only
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Requires-Python: >=3.6
+Description-Content-Type: text/x-rst
+License-File: LICENSE
+Requires-Dist: typing-extensions (>=3.6.5) ; python_version < "3.8"
+
+async-timeout
+=============
+.. image:: https://travis-ci.com/aio-libs/async-timeout.svg?branch=master
+ :target: https://travis-ci.com/aio-libs/async-timeout
+.. image:: https://codecov.io/gh/aio-libs/async-timeout/branch/master/graph/badge.svg
+ :target: https://codecov.io/gh/aio-libs/async-timeout
+.. image:: https://img.shields.io/pypi/v/async-timeout.svg
+ :target: https://pypi.python.org/pypi/async-timeout
+.. image:: https://badges.gitter.im/Join%20Chat.svg
+ :target: https://gitter.im/aio-libs/Lobby
+ :alt: Chat on Gitter
+
+asyncio-compatible timeout context manager.
+
+
+Usage example
+-------------
+
+
+The context manager is useful when you want to apply timeout
+logic around a block of code, or when ``asyncio.wait_for()`` is
+not suitable. It is also much faster than ``asyncio.wait_for()``
+because ``timeout`` doesn't create a new task.
+
+The ``timeout(delay)`` call returns a context manager
+that cancels a block on *timeout* expiring::
+
+ async with timeout(1.5):
+ await inner()
+
+1. If ``inner()`` completes in less than ``1.5`` seconds, nothing
+ happens.
+2. Otherwise ``inner()`` is cancelled internally by sending
+ ``asyncio.CancelledError`` into it, but ``asyncio.TimeoutError`` is
+ raised outside of the context manager scope.
+
+The *delay* parameter may be ``None``, which disables the timeout logic.
+
+
+Alternatively, ``timeout_at(when)`` can be used to schedule the
+timeout at an absolute time::
+
+ loop = asyncio.get_event_loop()
+ now = loop.time()
+
+ async with timeout_at(now + 1.5):
+ await inner()
+
+
+Please note: this is not POSIX time but a time with an
+undefined starting base, e.g. the time of the system power on.
+
+
+The context manager has an ``.expired`` property for checking whether
+the timeout happened inside the context manager::
+
+ async with timeout(1.5) as cm:
+ await inner()
+ print(cm.expired)
+
+The property is ``True`` if ``inner()`` execution was cancelled by
+the timeout context manager.
+
+If the ``inner()`` call explicitly raises ``TimeoutError``, ``cm.expired``
+is ``False``.
+
+The scheduled deadline time is available as the ``.deadline`` property::
+
+ async with timeout(1.5) as cm:
+ cm.deadline
+
+A timeout that has not finished yet can be rescheduled with the
+``shift()`` or ``update()`` methods::
+
+ async with timeout(1.5) as cm:
+ cm.shift(1) # add one more second of waiting
+ cm.update(loop.time() + 5) # reschedule to now+5 seconds
+
+Rescheduling is forbidden once the timeout has expired or after exiting
+the ``async with`` code block.
+
+
+Installation
+------------
+
+::
+
+ $ pip install async-timeout
+
+The library is Python 3 only!
+
+
+
+Authors and License
+-------------------
+
+The module is written by Andrew Svetlov.
+
+It's *Apache 2* licensed and freely available.
+
+
diff --git a/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/RECORD b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/RECORD
new file mode 100644
index 00000000..5b44901e
--- /dev/null
+++ b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/RECORD
@@ -0,0 +1,10 @@
+async_timeout-4.0.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+async_timeout-4.0.2.dist-info/LICENSE,sha256=4Y17uPUT4sRrtYXJS1hb0wcg3TzLId2weG9y0WZY-Sw,568
+async_timeout-4.0.2.dist-info/METADATA,sha256=2pfMxxBst5vQ7SfMy5TDaDU0cRgCSQa7wcD5eI-Ew-8,4193
+async_timeout-4.0.2.dist-info/RECORD,,
+async_timeout-4.0.2.dist-info/WHEEL,sha256=ewwEueio1C2XeHTvT17n8dZUJgOvyCWCt0WVNLClP9o,92
+async_timeout-4.0.2.dist-info/top_level.txt,sha256=9oM4e7Twq8iD_7_Q3Mz0E6GPIB6vJvRFo-UBwUQtBDU,14
+async_timeout-4.0.2.dist-info/zip-safe,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1
+async_timeout/__init__.py,sha256=N-JUI_VExhHnO0emkF_-h08dl4HBgOje16N4Ci-W-go,7487
+async_timeout/__pycache__/__init__.cpython-38.pyc,,
+async_timeout/py.typed,sha256=tyozzRT1fziXETDxokmuyt6jhOmtjUbnVNJdZcG7ik0,12
diff --git a/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/WHEEL b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/WHEEL
new file mode 100644
index 00000000..5bad85fd
--- /dev/null
+++ b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/WHEEL
@@ -0,0 +1,5 @@
+Wheel-Version: 1.0
+Generator: bdist_wheel (0.37.0)
+Root-Is-Purelib: true
+Tag: py3-none-any
+
diff --git a/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/top_level.txt b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/top_level.txt
new file mode 100644
index 00000000..ad29955e
--- /dev/null
+++ b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/top_level.txt
@@ -0,0 +1 @@
+async_timeout
diff --git a/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/zip-safe b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/zip-safe
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ b/env/lib/python3.8/site-packages/async_timeout-4.0.2.dist-info/zip-safe
@@ -0,0 +1 @@
+
diff --git a/env/lib/python3.8/site-packages/async_timeout/__init__.py b/env/lib/python3.8/site-packages/async_timeout/__init__.py
new file mode 100644
index 00000000..179d1b0a
--- /dev/null
+++ b/env/lib/python3.8/site-packages/async_timeout/__init__.py
@@ -0,0 +1,247 @@
+import asyncio
+import enum
+import sys
+import warnings
+from types import TracebackType
+from typing import Any, Optional, Type
+
+
+if sys.version_info >= (3, 8):
+ from typing import final
+else:
+ from typing_extensions import final
+
+
+__version__ = "4.0.2"
+
+
+__all__ = ("timeout", "timeout_at", "Timeout")
+
+
+def timeout(delay: Optional[float]) -> "Timeout":
+ """timeout context manager.
+
+ Useful in cases when you want to apply timeout logic around a block
+ of code or in cases when asyncio.wait_for is not suitable. For example:
+
+ >>> async with timeout(0.001):
+ ... async with aiohttp.get('https://github.com') as r:
+ ... await r.text()
+
+
+ delay - value in seconds or None to disable timeout logic
+ """
+ loop = _get_running_loop()
+ if delay is not None:
+ deadline = loop.time() + delay # type: Optional[float]
+ else:
+ deadline = None
+ return Timeout(deadline, loop)
+
+
+def timeout_at(deadline: Optional[float]) -> "Timeout":
+ """Schedule the timeout at an absolute time.
+
+ The deadline argument is a time in the same clock system
+ as loop.time().
+
+ Please note: it is not POSIX time but a time with an
+ undefined starting base, e.g. the time of the system power on.
+
+ >>> async with timeout_at(loop.time() + 10):
+ ... async with aiohttp.get('https://github.com') as r:
+ ... await r.text()
+
+
+ """
+ loop = _get_running_loop()
+ return Timeout(deadline, loop)
+
+
+class _State(enum.Enum):
+ INIT = "INIT"
+ ENTER = "ENTER"
+ TIMEOUT = "TIMEOUT"
+ EXIT = "EXIT"
+
+
+@final
+class Timeout:
+ # Internal class, please don't instantiate it directly
+ # Use the timeout() and timeout_at() public factories instead.
+ #
+ # Implementation note: `async with timeout()` is preferred
+ # over `with timeout()`.
+ # While technically the Timeout class implementation
+ # doesn't need to be async at all,
+ # the `async with` statement makes it explicit that the
+ # context manager should be used from an async function context.
+ #
+ # This design helps to avoid many silly misuses.
+ #
+ # TimeoutError is raised immediately on scheduling
+ # if the deadline has already passed.
+ # The purpose is to time out as soon as possible
+ # without waiting for the next await expression.
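+ # Timeout instances are created for every guarded operation, so __slots__ (below) keeps them lightweight.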
+
+ __slots__ = ("_deadline", "_loop", "_state", "_timeout_handler")
+
+ def __init__(
+ self, deadline: Optional[float], loop: asyncio.AbstractEventLoop
+ ) -> None:
+ self._loop = loop
+ self._state = _State.INIT
+
+ self._timeout_handler = None # type: Optional[asyncio.Handle]
+ if deadline is None:
+ self._deadline = None # type: Optional[float]
+ else:
+ self.update(deadline)
+
+ def __enter__(self) -> "Timeout":
+ warnings.warn(
+ "with timeout() is deprecated, use async with timeout() instead",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ self._do_enter()
+ return self
+
+ def __exit__(
+ self,
+ exc_type: Optional[Type[BaseException]],
+ exc_val: Optional[BaseException],
+ exc_tb: Optional[TracebackType],
+ ) -> Optional[bool]:
+ self._do_exit(exc_type)
+ return None
+
+ async def __aenter__(self) -> "Timeout":
+ self._do_enter()
+ return self
+
+ async def __aexit__(
+ self,
+ exc_type: Optional[Type[BaseException]],
+ exc_val: Optional[BaseException],
+ exc_tb: Optional[TracebackType],
+ ) -> Optional[bool]:
+ self._do_exit(exc_type)
+ return None
+
+ @property
+ def expired(self) -> bool:
+ """Whether the timeout expired during execution."""
+ return self._state == _State.TIMEOUT
+
+ @property
+ def deadline(self) -> Optional[float]:
+ return self._deadline
+
+ def reject(self) -> None:
+ """Reject the scheduled timeout, if any."""
+ # "cancel" would perhaps be a better name, but
+ # task.cancel() raises CancelledError in the asyncio world.
+ if self._state not in (_State.INIT, _State.ENTER):
+ raise RuntimeError(f"invalid state {self._state.value}")
+ self._reject()
+
+ def _reject(self) -> None:
+ if self._timeout_handler is not None:
+ self._timeout_handler.cancel()
+ self._timeout_handler = None
+
+ def shift(self, delay: float) -> None:
+ """Advance the timeout by *delay* seconds.
+
+ The delay can be negative.
+
+ Raise RuntimeError if shift is called when no deadline is scheduled.
+ """
+ deadline = self._deadline
+ if deadline is None:
+ raise RuntimeError("cannot shift timeout if deadline is not scheduled")
+ self.update(deadline + delay)
+
+ def update(self, deadline: float) -> None:
+ """Set the deadline to an absolute value.
+
+ The deadline argument is a time in the same clock system
+ as loop.time().
+
+ If the new deadline is in the past, the timeout is raised immediately.
+
+ Please note: it is not POSIX time but a time with an
+ undefined starting base, e.g. the time of the system power on.
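+
+ Raises RuntimeError when called after exiting the context manager
+ or after the timeout has already expired.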
+ """ + if self._state == _State.EXIT: + raise RuntimeError("cannot reschedule after exit from context manager") + if self._state == _State.TIMEOUT: + raise RuntimeError("cannot reschedule expired timeout") + if self._timeout_handler is not None: + self._timeout_handler.cancel() + self._deadline = deadline + if self._state != _State.INIT: + self._reschedule() + + def _reschedule(self) -> None: + assert self._state == _State.ENTER + deadline = self._deadline + if deadline is None: + return + + now = self._loop.time() + if self._timeout_handler is not None: + self._timeout_handler.cancel() + + task = _current_task(self._loop) + if deadline <= now: + self._timeout_handler = self._loop.call_soon(self._on_timeout, task) + else: + self._timeout_handler = self._loop.call_at(deadline, self._on_timeout, task) + + def _do_enter(self) -> None: + if self._state != _State.INIT: + raise RuntimeError(f"invalid state {self._state.value}") + self._state = _State.ENTER + self._reschedule() + + def _do_exit(self, exc_type: Optional[Type[BaseException]]) -> None: + if exc_type is asyncio.CancelledError and self._state == _State.TIMEOUT: + self._timeout_handler = None + raise asyncio.TimeoutError + # timeout has not expired + self._state = _State.EXIT + self._reject() + return None + + def _on_timeout(self, task: "asyncio.Task[None]") -> None: + task.cancel() + self._state = _State.TIMEOUT + # drop the reference early + self._timeout_handler = None + + +if sys.version_info >= (3, 7): + + def _current_task(loop: asyncio.AbstractEventLoop) -> "Optional[asyncio.Task[Any]]": + return asyncio.current_task(loop=loop) + +else: + + def _current_task(loop: asyncio.AbstractEventLoop) -> "Optional[asyncio.Task[Any]]": + return asyncio.Task.current_task(loop=loop) + + +if sys.version_info >= (3, 7): + + def _get_running_loop() -> asyncio.AbstractEventLoop: + return asyncio.get_running_loop() + +else: + + def _get_running_loop() -> asyncio.AbstractEventLoop: + loop = asyncio.get_event_loop() + if not loop.is_running(): + raise RuntimeError("no running event loop") + return loop diff --git a/env/lib/python3.8/site-packages/async_timeout/py.typed b/env/lib/python3.8/site-packages/async_timeout/py.typed new file mode 100644 index 00000000..3b94f915 --- /dev/null +++ b/env/lib/python3.8/site-packages/async_timeout/py.typed @@ -0,0 +1 @@ +Placeholder diff --git a/env/lib/python3.8/site-packages/attr/__init__.py b/env/lib/python3.8/site-packages/attr/__init__.py new file mode 100644 index 00000000..386305d6 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/__init__.py @@ -0,0 +1,79 @@ +# SPDX-License-Identifier: MIT + + +import sys + +from functools import partial + +from . 
import converters, exceptions, filters, setters, validators +from ._cmp import cmp_using +from ._config import get_run_validators, set_run_validators +from ._funcs import asdict, assoc, astuple, evolve, has, resolve_types +from ._make import ( + NOTHING, + Attribute, + Factory, + attrib, + attrs, + fields, + fields_dict, + make_class, + validate, +) +from ._version_info import VersionInfo + + +__version__ = "22.1.0" +__version_info__ = VersionInfo._from_version_string(__version__) + +__title__ = "attrs" +__description__ = "Classes Without Boilerplate" +__url__ = "https://www.attrs.org/" +__uri__ = __url__ +__doc__ = __description__ + " <" + __uri__ + ">" + +__author__ = "Hynek Schlawack" +__email__ = "hs@ox.cx" + +__license__ = "MIT" +__copyright__ = "Copyright (c) 2015 Hynek Schlawack" + + +s = attributes = attrs +ib = attr = attrib +dataclass = partial(attrs, auto_attribs=True) # happy Easter ;) + +__all__ = [ + "Attribute", + "Factory", + "NOTHING", + "asdict", + "assoc", + "astuple", + "attr", + "attrib", + "attributes", + "attrs", + "cmp_using", + "converters", + "evolve", + "exceptions", + "fields", + "fields_dict", + "filters", + "get_run_validators", + "has", + "ib", + "make_class", + "resolve_types", + "s", + "set_run_validators", + "setters", + "validate", + "validators", +] + +if sys.version_info[:2] >= (3, 6): + from ._next_gen import define, field, frozen, mutable # noqa: F401 + + __all__.extend(("define", "field", "frozen", "mutable")) diff --git a/env/lib/python3.8/site-packages/attr/__init__.pyi b/env/lib/python3.8/site-packages/attr/__init__.pyi new file mode 100644 index 00000000..03cc4c82 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/__init__.pyi @@ -0,0 +1,486 @@ +import sys + +from typing import ( + Any, + Callable, + ClassVar, + Dict, + Generic, + List, + Mapping, + Optional, + Protocol, + Sequence, + Tuple, + Type, + TypeVar, + Union, + overload, +) + +# `import X as X` is required to make these public +from . import converters as converters +from . import exceptions as exceptions +from . import filters as filters +from . import setters as setters +from . import validators as validators +from ._cmp import cmp_using as cmp_using +from ._version_info import VersionInfo + +__version__: str +__version_info__: VersionInfo +__title__: str +__description__: str +__url__: str +__uri__: str +__author__: str +__email__: str +__license__: str +__copyright__: str + +_T = TypeVar("_T") +_C = TypeVar("_C", bound=type) + +_EqOrderType = Union[bool, Callable[[Any], Any]] +_ValidatorType = Callable[[Any, Attribute[_T], _T], Any] +_ConverterType = Callable[[Any], Any] +_FilterType = Callable[[Attribute[_T], _T], bool] +_ReprType = Callable[[Any], str] +_ReprArgType = Union[bool, _ReprType] +_OnSetAttrType = Callable[[Any, Attribute[Any], Any], Any] +_OnSetAttrArgType = Union[ + _OnSetAttrType, List[_OnSetAttrType], setters._NoOpType +] +_FieldTransformer = Callable[ + [type, List[Attribute[Any]]], List[Attribute[Any]] +] +# FIXME: in reality, if multiple validators are passed they must be in a list +# or tuple, but those are invariant and so would prevent subtypes of +# _ValidatorType from working when passed in a list or tuple. +_ValidatorArgType = Union[_ValidatorType[_T], Sequence[_ValidatorType[_T]]] + +# A protocol to be able to statically accept an attrs class. 
+class AttrsInstance(Protocol): + __attrs_attrs__: ClassVar[Any] + +# _make -- + +NOTHING: object + +# NOTE: Factory lies about its return type to make this possible: +# `x: List[int] # = Factory(list)` +# Work around mypy issue #4554 in the common case by using an overload. +if sys.version_info >= (3, 8): + from typing import Literal + @overload + def Factory(factory: Callable[[], _T]) -> _T: ... + @overload + def Factory( + factory: Callable[[Any], _T], + takes_self: Literal[True], + ) -> _T: ... + @overload + def Factory( + factory: Callable[[], _T], + takes_self: Literal[False], + ) -> _T: ... + +else: + @overload + def Factory(factory: Callable[[], _T]) -> _T: ... + @overload + def Factory( + factory: Union[Callable[[Any], _T], Callable[[], _T]], + takes_self: bool = ..., + ) -> _T: ... + +# Static type inference support via __dataclass_transform__ implemented as per: +# https://github.com/microsoft/pyright/blob/1.1.135/specs/dataclass_transforms.md +# This annotation must be applied to all overloads of "define" and "attrs" +# +# NOTE: This is a typing construct and does not exist at runtime. Extensions +# wrapping attrs decorators should declare a separate __dataclass_transform__ +# signature in the extension module using the specification linked above to +# provide pyright support. +def __dataclass_transform__( + *, + eq_default: bool = True, + order_default: bool = False, + kw_only_default: bool = False, + field_descriptors: Tuple[Union[type, Callable[..., Any]], ...] = (()), +) -> Callable[[_T], _T]: ... + +class Attribute(Generic[_T]): + name: str + default: Optional[_T] + validator: Optional[_ValidatorType[_T]] + repr: _ReprArgType + cmp: _EqOrderType + eq: _EqOrderType + order: _EqOrderType + hash: Optional[bool] + init: bool + converter: Optional[_ConverterType] + metadata: Dict[Any, Any] + type: Optional[Type[_T]] + kw_only: bool + on_setattr: _OnSetAttrType + def evolve(self, **changes: Any) -> "Attribute[Any]": ... + +# NOTE: We had several choices for the annotation to use for type arg: +# 1) Type[_T] +# - Pros: Handles simple cases correctly +# - Cons: Might produce less informative errors in the case of conflicting +# TypeVars e.g. `attr.ib(default='bad', type=int)` +# 2) Callable[..., _T] +# - Pros: Better error messages than #1 for conflicting TypeVars +# - Cons: Terrible error messages for validator checks. +# e.g. attr.ib(type=int, validator=validate_str) +# -> error: Cannot infer function type argument +# 3) type (and do all of the work in the mypy plugin) +# - Pros: Simple here, and we could customize the plugin with our own errors. +# - Cons: Would need to write mypy plugin code to handle all the cases. +# We chose option #1. + +# `attr` lies about its return type to make the following possible: +# attr() -> Any +# attr(8) -> int +# attr(validator=) -> Whatever the callable expects. +# This makes this type of assignments possible: +# x: int = attr(8) +# +# This form catches explicit None or no default but with no other arguments +# returns Any. +@overload +def attrib( + default: None = ..., + validator: None = ..., + repr: _ReprArgType = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: None = ..., + converter: None = ..., + factory: None = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... 
+ +# This form catches an explicit None or no default and infers the type from the +# other arguments. +@overload +def attrib( + default: None = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: Optional[Type[_T]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form catches an explicit default argument. +@overload +def attrib( + default: _T, + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: Optional[Type[_T]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form covers type=non-Type: e.g. forward references (str), Any +@overload +def attrib( + default: Optional[_T] = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: object = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... +@overload +def field( + *, + default: None = ..., + validator: None = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: None = ..., + factory: None = ..., + kw_only: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... + +# This form catches an explicit None or no default and infers the type from the +# other arguments. +@overload +def field( + *, + default: None = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form catches an explicit default argument. +@overload +def field( + *, + default: _T, + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form covers type=non-Type: e.g. 
forward references (str), Any +@overload +def field( + *, + default: Optional[_T] = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... +@overload +@__dataclass_transform__(order_default=True, field_descriptors=(attrib, field)) +def attrs( + maybe_cls: _C, + these: Optional[Dict[str, Any]] = ..., + repr_ns: Optional[str] = ..., + repr: bool = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + auto_detect: bool = ..., + collect_by_mro: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., + match_args: bool = ..., +) -> _C: ... +@overload +@__dataclass_transform__(order_default=True, field_descriptors=(attrib, field)) +def attrs( + maybe_cls: None = ..., + these: Optional[Dict[str, Any]] = ..., + repr_ns: Optional[str] = ..., + repr: bool = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + auto_detect: bool = ..., + collect_by_mro: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., + match_args: bool = ..., +) -> Callable[[_C], _C]: ... +@overload +@__dataclass_transform__(field_descriptors=(attrib, field)) +def define( + maybe_cls: _C, + *, + these: Optional[Dict[str, Any]] = ..., + repr: bool = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + auto_detect: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., + match_args: bool = ..., +) -> _C: ... 
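+# The second overload below covers usage as ``@define(...)`` with keyword
+# arguments only; it returns a decorator.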
+@overload +@__dataclass_transform__(field_descriptors=(attrib, field)) +def define( + maybe_cls: None = ..., + *, + these: Optional[Dict[str, Any]] = ..., + repr: bool = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + auto_detect: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., + match_args: bool = ..., +) -> Callable[[_C], _C]: ... + +mutable = define +frozen = define # they differ only in their defaults + +def fields(cls: Type[AttrsInstance]) -> Any: ... +def fields_dict(cls: Type[AttrsInstance]) -> Dict[str, Attribute[Any]]: ... +def validate(inst: AttrsInstance) -> None: ... +def resolve_types( + cls: _C, + globalns: Optional[Dict[str, Any]] = ..., + localns: Optional[Dict[str, Any]] = ..., + attribs: Optional[List[Attribute[Any]]] = ..., +) -> _C: ... + +# TODO: add support for returning a proper attrs class from the mypy plugin +# we use Any instead of _CountingAttr so that e.g. `make_class('Foo', +# [attr.ib()])` is valid +def make_class( + name: str, + attrs: Union[List[str], Tuple[str, ...], Dict[str, Any]], + bases: Tuple[type, ...] = ..., + repr_ns: Optional[str] = ..., + repr: bool = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + collect_by_mro: bool = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> type: ... + +# _funcs -- + +# TODO: add support for returning TypedDict from the mypy plugin +# FIXME: asdict/astuple do not honor their factory args. Waiting on one of +# these: +# https://github.com/python/mypy/issues/4236 +# https://github.com/python/typing/issues/253 +# XXX: remember to fix attrs.asdict/astuple too! +def asdict( + inst: AttrsInstance, + recurse: bool = ..., + filter: Optional[_FilterType[Any]] = ..., + dict_factory: Type[Mapping[Any, Any]] = ..., + retain_collection_types: bool = ..., + value_serializer: Optional[ + Callable[[type, Attribute[Any], Any], Any] + ] = ..., + tuple_keys: Optional[bool] = ..., +) -> Dict[str, Any]: ... + +# TODO: add support for returning NamedTuple from the mypy plugin +def astuple( + inst: AttrsInstance, + recurse: bool = ..., + filter: Optional[_FilterType[Any]] = ..., + tuple_factory: Type[Sequence[Any]] = ..., + retain_collection_types: bool = ..., +) -> Tuple[Any, ...]: ... +def has(cls: type) -> bool: ... +def assoc(inst: _T, **changes: Any) -> _T: ... +def evolve(inst: _T, **changes: Any) -> _T: ... + +# _config -- + +def set_run_validators(run: bool) -> None: ... +def get_run_validators() -> bool: ... 
+ +# aliases -- + +s = attributes = attrs +ib = attr = attrib +dataclass = attrs # Technically, partial(attrs, auto_attribs=True) ;) diff --git a/env/lib/python3.8/site-packages/attr/_cmp.py b/env/lib/python3.8/site-packages/attr/_cmp.py new file mode 100644 index 00000000..81b99e4c --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/_cmp.py @@ -0,0 +1,155 @@ +# SPDX-License-Identifier: MIT + + +import functools +import types + +from ._make import _make_ne + + +_operation_names = {"eq": "==", "lt": "<", "le": "<=", "gt": ">", "ge": ">="} + + +def cmp_using( + eq=None, + lt=None, + le=None, + gt=None, + ge=None, + require_same_type=True, + class_name="Comparable", +): + """ + Create a class that can be passed into `attr.ib`'s ``eq``, ``order``, and + ``cmp`` arguments to customize field comparison. + + The resulting class will have a full set of ordering methods if + at least one of ``{lt, le, gt, ge}`` and ``eq`` are provided. + + :param Optional[callable] eq: `callable` used to evaluate equality + of two objects. + :param Optional[callable] lt: `callable` used to evaluate whether + one object is less than another object. + :param Optional[callable] le: `callable` used to evaluate whether + one object is less than or equal to another object. + :param Optional[callable] gt: `callable` used to evaluate whether + one object is greater than another object. + :param Optional[callable] ge: `callable` used to evaluate whether + one object is greater than or equal to another object. + + :param bool require_same_type: When `True`, equality and ordering methods + will return `NotImplemented` if objects are not of the same type. + + :param Optional[str] class_name: Name of class. Defaults to 'Comparable'. + + See `comparison` for more details. + + .. versionadded:: 21.1.0 + """ + + body = { + "__slots__": ["value"], + "__init__": _make_init(), + "_requirements": [], + "_is_comparable_to": _is_comparable_to, + } + + # Add operations. + num_order_functions = 0 + has_eq_function = False + + if eq is not None: + has_eq_function = True + body["__eq__"] = _make_operator("eq", eq) + body["__ne__"] = _make_ne() + + if lt is not None: + num_order_functions += 1 + body["__lt__"] = _make_operator("lt", lt) + + if le is not None: + num_order_functions += 1 + body["__le__"] = _make_operator("le", le) + + if gt is not None: + num_order_functions += 1 + body["__gt__"] = _make_operator("gt", gt) + + if ge is not None: + num_order_functions += 1 + body["__ge__"] = _make_operator("ge", ge) + + type_ = types.new_class( + class_name, (object,), {}, lambda ns: ns.update(body) + ) + + # Add same type requirement. + if require_same_type: + type_._requirements.append(_check_same_type) + + # Add total ordering if at least one operation was defined. + if 0 < num_order_functions < 4: + if not has_eq_function: + # functools.total_ordering requires __eq__ to be defined, + # so raise an early error here to keep a nice stack. + raise ValueError( + "eq must be defined in order to complete ordering from " + "lt, le, gt, ge." + ) + type_ = functools.total_ordering(type_) + + return type_ + + +def _make_init(): + """ + Create __init__ method. + """ + + def __init__(self, value): + """ + Initialize object with *value*. + """ + self.value = value + + return __init__ + + +def _make_operator(name, func): + """ + Create operator method.
+ """ + + def method(self, other): + if not self._is_comparable_to(other): + return NotImplemented + + result = func(self.value, other.value) + if result is NotImplemented: + return NotImplemented + + return result + + method.__name__ = "__%s__" % (name,) + method.__doc__ = "Return a %s b. Computed by attrs." % ( + _operation_names[name], + ) + + return method + + +def _is_comparable_to(self, other): + """ + Check whether `other` is comparable to `self`. + """ + for func in self._requirements: + if not func(self, other): + return False + return True + + +def _check_same_type(self, other): + """ + Return True if *self* and *other* are of the same type, False otherwise. + """ + return other.value.__class__ is self.value.__class__ diff --git a/env/lib/python3.8/site-packages/attr/_cmp.pyi b/env/lib/python3.8/site-packages/attr/_cmp.pyi new file mode 100644 index 00000000..35437eff --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/_cmp.pyi @@ -0,0 +1,13 @@ +from typing import Any, Callable, Optional, Type + +_CompareWithType = Callable[[Any, Any], bool] + +def cmp_using( + eq: Optional[_CompareWithType], + lt: Optional[_CompareWithType], + le: Optional[_CompareWithType], + gt: Optional[_CompareWithType], + ge: Optional[_CompareWithType], + require_same_type: bool, + class_name: str, +) -> Type: ... diff --git a/env/lib/python3.8/site-packages/attr/_compat.py b/env/lib/python3.8/site-packages/attr/_compat.py new file mode 100644 index 00000000..58264932 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/_compat.py @@ -0,0 +1,185 @@ +# SPDX-License-Identifier: MIT + + +import inspect +import platform +import sys +import threading +import types +import warnings + +from collections.abc import Mapping, Sequence # noqa + + +PYPY = platform.python_implementation() == "PyPy" +PY36 = sys.version_info[:2] >= (3, 6) +HAS_F_STRINGS = PY36 +PY310 = sys.version_info[:2] >= (3, 10) + + +if PYPY or PY36: + ordered_dict = dict +else: + from collections import OrderedDict + + ordered_dict = OrderedDict + + +def just_warn(*args, **kw): + warnings.warn( + "Running interpreter doesn't sufficiently support code object " + "introspection. Some features like bare super() or accessing " + "__class__ will not work with slotted classes.", + RuntimeWarning, + stacklevel=2, + ) + + +class _AnnotationExtractor: + """ + Extract type annotations from a callable, returning None whenever there + is none. + """ + + __slots__ = ["sig"] + + def __init__(self, callable): + try: + self.sig = inspect.signature(callable) + except (ValueError, TypeError): # inspect failed + self.sig = None + + def get_first_param_type(self): + """ + Return the type annotation of the first argument if it's not empty. + """ + if not self.sig: + return None + + params = list(self.sig.parameters.values()) + if params and params[0].annotation is not inspect.Parameter.empty: + return params[0].annotation + + return None + + def get_return_type(self): + """ + Return the return type if it's not empty. + """ + if ( + self.sig + and self.sig.return_annotation is not inspect.Signature.empty + ): + return self.sig.return_annotation + + return None + + +def make_set_closure_cell(): + """Return a function of two arguments (cell, value) which sets + the value stored in the closure cell `cell` to `value`. + """ + # pypy makes this easy. (It also supports the logic below, but + # why not do the easy/fast thing?) 
+ if PYPY: + + def set_closure_cell(cell, value): + cell.__setstate__((value,)) + + return set_closure_cell + + # Otherwise gotta do it the hard way. + + # Create a function that will set its first cellvar to `value`. + def set_first_cellvar_to(value): + x = value + return + + # This function will be eliminated as dead code, but + # not before its reference to `x` forces `x` to be + # represented as a closure cell rather than a local. + def force_x_to_be_a_cell(): # pragma: no cover + return x + + try: + # Extract the code object and make sure our assumptions about + # the closure behavior are correct. + co = set_first_cellvar_to.__code__ + if co.co_cellvars != ("x",) or co.co_freevars != (): + raise AssertionError # pragma: no cover + + # Convert this code object to a code object that sets the + # function's first _freevar_ (not cellvar) to the argument. + if sys.version_info >= (3, 8): + + def set_closure_cell(cell, value): + cell.cell_contents = value + + else: + args = [co.co_argcount] + args.append(co.co_kwonlyargcount) + args.extend( + [ + co.co_nlocals, + co.co_stacksize, + co.co_flags, + co.co_code, + co.co_consts, + co.co_names, + co.co_varnames, + co.co_filename, + co.co_name, + co.co_firstlineno, + co.co_lnotab, + # These two arguments are reversed: + co.co_cellvars, + co.co_freevars, + ] + ) + set_first_freevar_code = types.CodeType(*args) + + def set_closure_cell(cell, value): + # Create a function using the set_first_freevar_code, + # whose first closure cell is `cell`. Calling it will + # change the value of that cell. + setter = types.FunctionType( + set_first_freevar_code, {}, "setter", (), (cell,) + ) + # And call it to set the cell. + setter(value) + + # Make sure it works on this interpreter: + def make_func_with_cell(): + x = None + + def func(): + return x # pragma: no cover + + return func + + cell = make_func_with_cell().__closure__[0] + set_closure_cell(cell, 100) + if cell.cell_contents != 100: + raise AssertionError # pragma: no cover + + except Exception: + return just_warn + else: + return set_closure_cell + + +set_closure_cell = make_set_closure_cell() + +# Thread-local global to track attrs instances which are already being repr'd. +# This is needed because there is no other (thread-safe) way to pass info +# about the instances that are already being repr'd through the call stack +# in order to ensure we don't perform infinite recursion. +# +# For instance, if an instance contains a dict which contains that instance, +# we need to know that we're already repr'ing the outside instance from within +# the dict's repr() call. +# +# This lives here rather than in _make.py so that the functions in _make.py +# don't have a direct reference to the thread-local in their globals dict. +# If they have such a reference, it breaks cloudpickle. +repr_context = threading.local() diff --git a/env/lib/python3.8/site-packages/attr/_config.py b/env/lib/python3.8/site-packages/attr/_config.py new file mode 100644 index 00000000..96d42007 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/_config.py @@ -0,0 +1,31 @@ +# SPDX-License-Identifier: MIT + + +__all__ = ["set_run_validators", "get_run_validators"] + +_run_validators = True + + +def set_run_validators(run): + """ + Set whether or not validators are run. By default, they are run. + + .. deprecated:: 21.3.0 It will not be removed, but it also will not be + moved to new ``attrs`` namespace. Use `attrs.validators.set_disabled()` + instead. 
+ """ + if not isinstance(run, bool): + raise TypeError("'run' must be bool.") + global _run_validators + _run_validators = run + + +def get_run_validators(): + """ + Return whether or not validators are run. + + .. deprecated:: 21.3.0 It will not be removed, but it also will not be + moved to new ``attrs`` namespace. Use `attrs.validators.get_disabled()` + instead. + """ + return _run_validators diff --git a/env/lib/python3.8/site-packages/attr/_funcs.py b/env/lib/python3.8/site-packages/attr/_funcs.py new file mode 100644 index 00000000..a982d7cb --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/_funcs.py @@ -0,0 +1,420 @@ +# SPDX-License-Identifier: MIT + + +import copy + +from ._make import NOTHING, _obj_setattr, fields +from .exceptions import AttrsAttributeNotFoundError + + +def asdict( + inst, + recurse=True, + filter=None, + dict_factory=dict, + retain_collection_types=False, + value_serializer=None, +): + """ + Return the ``attrs`` attribute values of *inst* as a dict. + + Optionally recurse into other ``attrs``-decorated classes. + + :param inst: Instance of an ``attrs``-decorated class. + :param bool recurse: Recurse into classes that are also + ``attrs``-decorated. + :param callable filter: A callable whose return code determines whether an + attribute or element is included (``True``) or dropped (``False``). Is + called with the `attrs.Attribute` as the first argument and the + value as the second argument. + :param callable dict_factory: A callable to produce dictionaries from. For + example, to produce ordered dictionaries instead of normal Python + dictionaries, pass in ``collections.OrderedDict``. + :param bool retain_collection_types: Do not convert to ``list`` when + encountering an attribute whose type is ``tuple`` or ``set``. Only + meaningful if ``recurse`` is ``True``. + :param Optional[callable] value_serializer: A hook that is called for every + attribute or dict key/value. It receives the current instance, field + and value and must return the (updated) value. The hook is run *after* + the optional *filter* has been applied. + + :rtype: return type of *dict_factory* + + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. versionadded:: 16.0.0 *dict_factory* + .. versionadded:: 16.1.0 *retain_collection_types* + .. versionadded:: 20.3.0 *value_serializer* + .. versionadded:: 21.3.0 If a dict has a collection for a key, it is + serialized as a tuple. 
+ """ + attrs = fields(inst.__class__) + rv = dict_factory() + for a in attrs: + v = getattr(inst, a.name) + if filter is not None and not filter(a, v): + continue + + if value_serializer is not None: + v = value_serializer(inst, a, v) + + if recurse is True: + if has(v.__class__): + rv[a.name] = asdict( + v, + recurse=True, + filter=filter, + dict_factory=dict_factory, + retain_collection_types=retain_collection_types, + value_serializer=value_serializer, + ) + elif isinstance(v, (tuple, list, set, frozenset)): + cf = v.__class__ if retain_collection_types is True else list + rv[a.name] = cf( + [ + _asdict_anything( + i, + is_key=False, + filter=filter, + dict_factory=dict_factory, + retain_collection_types=retain_collection_types, + value_serializer=value_serializer, + ) + for i in v + ] + ) + elif isinstance(v, dict): + df = dict_factory + rv[a.name] = df( + ( + _asdict_anything( + kk, + is_key=True, + filter=filter, + dict_factory=df, + retain_collection_types=retain_collection_types, + value_serializer=value_serializer, + ), + _asdict_anything( + vv, + is_key=False, + filter=filter, + dict_factory=df, + retain_collection_types=retain_collection_types, + value_serializer=value_serializer, + ), + ) + for kk, vv in v.items() + ) + else: + rv[a.name] = v + else: + rv[a.name] = v + return rv + + +def _asdict_anything( + val, + is_key, + filter, + dict_factory, + retain_collection_types, + value_serializer, +): + """ + ``asdict`` only works on attrs instances, this works on anything. + """ + if getattr(val.__class__, "__attrs_attrs__", None) is not None: + # Attrs class. + rv = asdict( + val, + recurse=True, + filter=filter, + dict_factory=dict_factory, + retain_collection_types=retain_collection_types, + value_serializer=value_serializer, + ) + elif isinstance(val, (tuple, list, set, frozenset)): + if retain_collection_types is True: + cf = val.__class__ + elif is_key: + cf = tuple + else: + cf = list + + rv = cf( + [ + _asdict_anything( + i, + is_key=False, + filter=filter, + dict_factory=dict_factory, + retain_collection_types=retain_collection_types, + value_serializer=value_serializer, + ) + for i in val + ] + ) + elif isinstance(val, dict): + df = dict_factory + rv = df( + ( + _asdict_anything( + kk, + is_key=True, + filter=filter, + dict_factory=df, + retain_collection_types=retain_collection_types, + value_serializer=value_serializer, + ), + _asdict_anything( + vv, + is_key=False, + filter=filter, + dict_factory=df, + retain_collection_types=retain_collection_types, + value_serializer=value_serializer, + ), + ) + for kk, vv in val.items() + ) + else: + rv = val + if value_serializer is not None: + rv = value_serializer(None, None, rv) + + return rv + + +def astuple( + inst, + recurse=True, + filter=None, + tuple_factory=tuple, + retain_collection_types=False, +): + """ + Return the ``attrs`` attribute values of *inst* as a tuple. + + Optionally recurse into other ``attrs``-decorated classes. + + :param inst: Instance of an ``attrs``-decorated class. + :param bool recurse: Recurse into classes that are also + ``attrs``-decorated. + :param callable filter: A callable whose return code determines whether an + attribute or element is included (``True``) or dropped (``False``). Is + called with the `attrs.Attribute` as the first argument and the + value as the second argument. + :param callable tuple_factory: A callable to produce tuples from. For + example, to produce lists instead of tuples. 
+ :param bool retain_collection_types: Do not convert to ``list`` + or ``dict`` when encountering an attribute whose type is + ``tuple``, ``dict`` or ``set``. Only meaningful if ``recurse`` is + ``True``. + + :rtype: return type of *tuple_factory* + + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. versionadded:: 16.2.0 + """ + attrs = fields(inst.__class__) + rv = [] + retain = retain_collection_types # Very long. :/ + for a in attrs: + v = getattr(inst, a.name) + if filter is not None and not filter(a, v): + continue + if recurse is True: + if has(v.__class__): + rv.append( + astuple( + v, + recurse=True, + filter=filter, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + ) + elif isinstance(v, (tuple, list, set, frozenset)): + cf = v.__class__ if retain is True else list + rv.append( + cf( + [ + astuple( + j, + recurse=True, + filter=filter, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + if has(j.__class__) + else j + for j in v + ] + ) + ) + elif isinstance(v, dict): + df = v.__class__ if retain is True else dict + rv.append( + df( + ( + astuple( + kk, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + if has(kk.__class__) + else kk, + astuple( + vv, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + if has(vv.__class__) + else vv, + ) + for kk, vv in v.items() + ) + ) + else: + rv.append(v) + else: + rv.append(v) + + return rv if tuple_factory is list else tuple_factory(rv) + + +def has(cls): + """ + Check whether *cls* is a class with ``attrs`` attributes. + + :param type cls: Class to introspect. + :raise TypeError: If *cls* is not a class. + + :rtype: bool + """ + return getattr(cls, "__attrs_attrs__", None) is not None + + +def assoc(inst, **changes): + """ + Copy *inst* and apply *changes*. + + :param inst: Instance of a class with ``attrs`` attributes. + :param changes: Keyword changes in the new copy. + + :return: A copy of inst with *changes* incorporated. + + :raise attr.exceptions.AttrsAttributeNotFoundError: If *attr_name* couldn't + be found on *cls*. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. deprecated:: 17.1.0 + Use `attrs.evolve` instead if you can. + This function will not be removed due to the slightly different approach + compared to `attrs.evolve`. + """ + import warnings + + warnings.warn( + "assoc is deprecated and will be removed after 2018/01.", + DeprecationWarning, + stacklevel=2, + ) + new = copy.copy(inst) + attrs = fields(inst.__class__) + for k, v in changes.items(): + a = getattr(attrs, k, NOTHING) + if a is NOTHING: + raise AttrsAttributeNotFoundError( + "{k} is not an attrs attribute on {cl}.".format( + k=k, cl=new.__class__ + ) + ) + _obj_setattr(new, k, v) + return new + + +def evolve(inst, **changes): + """ + Create a new instance, based on *inst* with *changes* applied. + + :param inst: Instance of a class with ``attrs`` attributes. + :param changes: Keyword changes in the new copy. + + :return: A copy of inst with *changes* incorporated. + + :raise TypeError: If *attr_name* couldn't be found in the class + ``__init__``. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. versionadded:: 17.1.0 + """ + cls = inst.__class__ + attrs = fields(cls) + for a in attrs: + if not a.init: + continue + attr_name = a.name # To deal with private attributes.
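+        # (editor-added comment) e.g. a private attribute "_x" is exposed as
+        # the __init__ parameter "x", so look it up in *changes* by that name.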
+ init_name = attr_name if attr_name[0] != "_" else attr_name[1:] + if init_name not in changes: + changes[init_name] = getattr(inst, attr_name) + + return cls(**changes) + + +def resolve_types(cls, globalns=None, localns=None, attribs=None): + """ + Resolve any strings and forward annotations in type annotations. + + This is only required if you need concrete types in `Attribute`'s *type* + field. In other words, you don't need to resolve your types if you only + use them for static type checking. + + With no arguments, names will be looked up in the module in which the class + was created. If this is not what you want, e.g. if the name only exists + inside a method, you may pass *globalns* or *localns* to specify other + dictionaries in which to look up these names. See the docs of + `typing.get_type_hints` for more details. + + :param type cls: Class to resolve. + :param Optional[dict] globalns: Dictionary containing global variables. + :param Optional[dict] localns: Dictionary containing local variables. + :param Optional[list] attribs: List of attribs for the given class. + This is necessary when calling from inside a ``field_transformer`` + since *cls* is not an ``attrs`` class yet. + + :raise TypeError: If *cls* is not a class. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class and you didn't pass any attribs. + :raise NameError: If types cannot be resolved because of missing variables. + + :returns: *cls* so you can use this function also as a class decorator. + Please note that you have to apply it **after** `attrs.define`. That + means the decorator has to come in the line **before** `attrs.define`. + + .. versionadded:: 20.1.0 + .. versionadded:: 21.1.0 *attribs* + + """ + # Since calling get_type_hints is expensive we cache whether we've + # done it already. + if getattr(cls, "__attrs_types_resolved__", None) != cls: + import typing + + hints = typing.get_type_hints(cls, globalns=globalns, localns=localns) + for field in fields(cls) if attribs is None else attribs: + if field.name in hints: + # Since fields have been frozen we must work around it. + _obj_setattr(field, "type", hints[field.name]) + # We store the class we resolved so that subclasses know they haven't + # been resolved. + cls.__attrs_types_resolved__ = cls + + # Return the class so you can use it as a decorator too. + return cls diff --git a/env/lib/python3.8/site-packages/attr/_make.py b/env/lib/python3.8/site-packages/attr/_make.py new file mode 100644 index 00000000..4d1afe3f --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/_make.py @@ -0,0 +1,3006 @@ +# SPDX-License-Identifier: MIT + +import copy +import linecache +import sys +import types +import typing + +from operator import itemgetter + +# We need to import _compat itself in addition to the _compat members to avoid +# having the thread-local in the globals here. +from . import _compat, _config, setters +from ._compat import ( + HAS_F_STRINGS, + PY310, + PYPY, + _AnnotationExtractor, + ordered_dict, + set_closure_cell, +) +from .exceptions import ( + DefaultAlreadySetError, + FrozenInstanceError, + NotAnAttrsClassError, + UnannotatedAttributeError, +) + + +# This is used at least twice, so cache it here. 
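+# (editor-added comment) object.__setattr__ also bypasses any attrs-generated
+# __setattr__, which is how frozen instances get their fields set internally,
+# e.g. from __init__ and resolve_types().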
+_obj_setattr = object.__setattr__ +_init_converter_pat = "__attr_converter_%s" +_init_factory_pat = "__attr_factory_{}" +_tuple_property_pat = ( + " {attr_name} = _attrs_property(_attrs_itemgetter({index}))" +) +_classvar_prefixes = ( + "typing.ClassVar", + "t.ClassVar", + "ClassVar", + "typing_extensions.ClassVar", +) +# we don't use a double-underscore prefix because that triggers +# name mangling when trying to create a slot for the field +# (when slots=True) +_hash_cache_field = "_attrs_cached_hash" + +_empty_metadata_singleton = types.MappingProxyType({}) + +# Unique object for unequivocal getattr() defaults. +_sentinel = object() + +_ng_default_on_setattr = setters.pipe(setters.convert, setters.validate) + + +class _Nothing: + """ + Sentinel class to indicate the lack of a value when ``None`` is ambiguous. + + ``_Nothing`` is a singleton. There is only ever one of it. + + .. versionchanged:: 21.1.0 ``bool(NOTHING)`` is now False. + """ + + _singleton = None + + def __new__(cls): + if _Nothing._singleton is None: + _Nothing._singleton = super().__new__(cls) + return _Nothing._singleton + + def __repr__(self): + return "NOTHING" + + def __bool__(self): + return False + + +NOTHING = _Nothing() +""" +Sentinel to indicate the lack of a value when ``None`` is ambiguous. +""" + + +class _CacheHashWrapper(int): + """ + An integer subclass that pickles / copies as None + + This is used for non-slots classes with ``cache_hash=True``, to avoid + serializing a potentially (even likely) invalid hash value. Since ``None`` + is the default value for uncalculated hashes, whenever this is copied, + the copy's value for the hash should automatically reset. + + See GH #613 for more details. + """ + + def __reduce__(self, _none_constructor=type(None), _args=()): + return _none_constructor, _args + + +def attrib( + default=NOTHING, + validator=None, + repr=True, + cmp=None, + hash=None, + init=True, + metadata=None, + type=None, + converter=None, + factory=None, + kw_only=False, + eq=None, + order=None, + on_setattr=None, +): + """ + Create a new attribute on a class. + + .. warning:: + + Does *not* do anything unless the class is also decorated with + `attr.s`! + + :param default: A value that is used if an ``attrs``-generated ``__init__`` + is used and no value is passed while instantiating or the attribute is + excluded using ``init=False``. + + If the value is an instance of `attrs.Factory`, its callable will be + used to construct a new value (useful for mutable data types like lists + or dicts). + + If a default is not set (or set manually to `attrs.NOTHING`), a value + *must* be supplied when instantiating; otherwise a `TypeError` + will be raised. + + The default can also be set using decorator notation as shown below. + + :type default: Any value + + :param callable factory: Syntactic sugar for + ``default=attr.Factory(factory)``. + + :param validator: `callable` that is called by ``attrs``-generated + ``__init__`` methods after the instance has been initialized. They + receive the initialized instance, the :func:`~attrs.Attribute`, and the + passed value. + + The return value is *not* inspected so the validator has to throw an + exception itself. + + If a `list` is passed, its items are treated as validators and must + all pass. + + Validators can be globally disabled and re-enabled using + `get_run_validators`. + + The validator can also be set using decorator notation as shown below. + + :type validator: `callable` or a `list` of `callable`\\ s. 
+ + :param repr: Include this attribute in the generated ``__repr__`` + method. If ``True``, include the attribute; if ``False``, omit it. By + default, the built-in ``repr()`` function is used. To override how the + attribute value is formatted, pass a ``callable`` that takes a single + value and returns a string. Note that the resulting string is used + as-is, i.e. it will be used directly *instead* of calling ``repr()`` + (the default). + :type repr: a `bool` or a `callable` to use a custom function. + + :param eq: If ``True`` (default), include this attribute in the + generated ``__eq__`` and ``__ne__`` methods that check two instances + for equality. To override how the attribute value is compared, + pass a ``callable`` that takes a single value and returns the value + to be compared. + :type eq: a `bool` or a `callable`. + + :param order: If ``True`` (default), include this attribute in the + generated ``__lt__``, ``__le__``, ``__gt__`` and ``__ge__`` methods. + To override how the attribute value is ordered, + pass a ``callable`` that takes a single value and returns the value + to be ordered. + :type order: a `bool` or a `callable`. + + :param cmp: Setting *cmp* is equivalent to setting *eq* and *order* to the + same value. Must not be mixed with *eq* or *order*. + :type cmp: a `bool` or a `callable`. + + :param Optional[bool] hash: Include this attribute in the generated + ``__hash__`` method. If ``None`` (default), mirror *eq*'s value. This + is the correct behavior according to the Python spec. Setting this value + to anything other than ``None`` is *discouraged*. + :param bool init: Include this attribute in the generated ``__init__`` + method. It is possible to set this to ``False`` and set a default + value. In that case this attribute is unconditionally initialized + with the specified default value or factory. + :param callable converter: `callable` that is called by + ``attrs``-generated ``__init__`` methods to convert attribute's value + to the desired format. It is given the passed-in value, and the + returned value will be used as the new value of the attribute. The + value is converted before being passed to the validator, if any. + :param metadata: An arbitrary mapping, to be used by third-party + components. See `extending_metadata`. + :param type: The type of the attribute. In Python 3.6 or greater, the + preferred method to specify the type is using a variable annotation + (see :pep:`526`). + This argument is provided for backward compatibility. + Regardless of the approach used, the type will be stored on + ``Attribute.type``. + + Please note that ``attrs`` doesn't do anything with this metadata by + itself. You can use it as part of your own code or for + `static type checking `. + :param kw_only: Make this attribute keyword-only (Python 3+) + in the generated ``__init__`` (if ``init`` is ``False``, this + parameter is ignored). + :param on_setattr: Allows overwriting the *on_setattr* setting from + `attr.s`. If left `None`, the *on_setattr* value from `attr.s` is used. + Set to `attrs.setters.NO_OP` to run **no** `setattr` hooks for this + attribute -- regardless of the setting in `attr.s`. + :type on_setattr: `callable`, or a list of callables, or `None`, or + `attrs.setters.NO_OP` + + .. versionadded:: 15.2.0 *convert* + .. versionadded:: 16.3.0 *metadata* + .. versionchanged:: 17.1.0 *validator* can be a ``list`` now. + .. versionchanged:: 17.1.0 + *hash* is ``None`` and therefore mirrors *eq* by default. + .. versionadded:: 17.3.0 *type* + .. deprecated:: 17.4.0 *convert* + .. versionadded:: 17.4.0 *converter* as a replacement for the deprecated + *convert* to achieve consistency with other noun-based arguments. + .. versionadded:: 18.1.0 + ``factory=f`` is syntactic sugar for ``default=attr.Factory(f)``. + .. versionadded:: 18.2.0 *kw_only* + .. versionchanged:: 19.2.0 *convert* keyword argument removed. + .. versionchanged:: 19.2.0 *repr* also accepts a custom callable. + .. deprecated:: 19.2.0 *cmp* Removal on or after 2021-06-01. + .. versionadded:: 19.2.0 *eq* and *order* + .. versionadded:: 20.1.0 *on_setattr* + .. versionchanged:: 20.3.0 *kw_only* backported to Python 2 + .. versionchanged:: 21.1.0 + *eq*, *order*, and *cmp* also accept a custom callable + .. versionchanged:: 21.1.0 *cmp* undeprecated + """ + eq, eq_key, order, order_key = _determine_attrib_eq_order( + cmp, eq, order, True + ) + + if hash is not None and hash is not True and hash is not False: + raise TypeError( + "Invalid value for hash. Must be True, False, or None." + ) + + if factory is not None: + if default is not NOTHING: + raise ValueError( + "The `default` and `factory` arguments are mutually " + "exclusive." + ) + if not callable(factory): + raise ValueError("The `factory` argument must be a callable.") + default = Factory(factory) + + if metadata is None: + metadata = {} + + # Apply syntactic sugar by auto-wrapping. + if isinstance(on_setattr, (list, tuple)): + on_setattr = setters.pipe(*on_setattr) + + if validator and isinstance(validator, (list, tuple)): + validator = and_(*validator) + + if converter and isinstance(converter, (list, tuple)): + converter = pipe(*converter) + + return _CountingAttr( + default=default, + validator=validator, + repr=repr, + cmp=None, + hash=hash, + init=init, + converter=converter, + metadata=metadata, + type=type, + kw_only=kw_only, + eq=eq, + eq_key=eq_key, + order=order, + order_key=order_key, + on_setattr=on_setattr, + ) + + +def _compile_and_eval(script, globs, locs=None, filename=""): + """ + "Exec" the script with the given global (globs) and local (locs) variables. + """ + bytecode = compile(script, filename, "exec") + eval(bytecode, globs, locs) + + +def _make_method(name, script, filename, globs): + """ + Create the method with the script given and return the method object. + """ + locs = {} + + # In order for debuggers like PDB to be able to step through the code, + # we add a fake linecache entry. + count = 1 + base_filename = filename + while True: + linecache_tuple = ( + len(script), + None, + script.splitlines(True), + filename, + ) + old_val = linecache.cache.setdefault(filename, linecache_tuple) + if old_val == linecache_tuple: + break + else: + filename = "{}-{}>".format(base_filename[:-1], count) + count += 1 + + _compile_and_eval(script, globs, locs, filename) + + return locs[name] + + +def _make_attr_tuple_class(cls_name, attr_names): + """ + Create a tuple subclass to hold `Attribute`s for an `attrs` class. + + The subclass is a bare tuple with properties for names.
+ + class MyClassAttributes(tuple): + __slots__ = () + x = property(itemgetter(0)) + """ + attr_class_name = "{}Attributes".format(cls_name) + attr_class_template = [ + "class {}(tuple):".format(attr_class_name), + " __slots__ = ()", + ] + if attr_names: + for i, attr_name in enumerate(attr_names): + attr_class_template.append( + _tuple_property_pat.format(index=i, attr_name=attr_name) + ) + else: + attr_class_template.append(" pass") + globs = {"_attrs_itemgetter": itemgetter, "_attrs_property": property} + _compile_and_eval("\n".join(attr_class_template), globs) + return globs[attr_class_name] + + +# Tuple class for extracted attributes from a class definition. +# `base_attrs` is a subset of `attrs`. +_Attributes = _make_attr_tuple_class( + "_Attributes", + [ + # all attributes to build dunder methods for + "attrs", + # attributes that have been inherited + "base_attrs", + # map inherited attributes to their originating classes + "base_attrs_map", + ], +) + + +def _is_class_var(annot): + """ + Check whether *annot* is a typing.ClassVar. + + The string comparison hack is used to avoid evaluating all string + annotations which would put attrs-based classes at a performance + disadvantage compared to plain old classes. + """ + annot = str(annot) + + # Annotation can be quoted. + if annot.startswith(("'", '"')) and annot.endswith(("'", '"')): + annot = annot[1:-1] + + return annot.startswith(_classvar_prefixes) + + +def _has_own_attribute(cls, attrib_name): + """ + Check whether *cls* defines *attrib_name* (and doesn't just inherit it). + + Requires Python 3. + """ + attr = getattr(cls, attrib_name, _sentinel) + if attr is _sentinel: + return False + + for base_cls in cls.__mro__[1:]: + a = getattr(base_cls, attrib_name, None) + if attr is a: + return False + + return True + + +def _get_annotations(cls): + """ + Get annotations for *cls*. + """ + if _has_own_attribute(cls, "__annotations__"): + return cls.__annotations__ + + return {} + + +def _counter_getter(e): + """ + Key function for sorting to avoid re-creating a lambda for every class. + """ + return e[1].counter + + +def _collect_base_attrs(cls, taken_attr_names): + """ + Collect attr.ibs from base classes of *cls*, except *taken_attr_names*. + """ + base_attrs = [] + base_attr_map = {} # A dictionary of base attrs to their classes. + + # Traverse the MRO and collect attributes. + for base_cls in reversed(cls.__mro__[1:-1]): + for a in getattr(base_cls, "__attrs_attrs__", []): + if a.inherited or a.name in taken_attr_names: + continue + + a = a.evolve(inherited=True) + base_attrs.append(a) + base_attr_map[a.name] = base_cls + + # For each name, only keep the freshest definition i.e. the furthest at the + # back. base_attr_map is fine because it gets overwritten with every new + # instance. + filtered = [] + seen = set() + for a in reversed(base_attrs): + if a.name in seen: + continue + filtered.insert(0, a) + seen.add(a.name) + + return filtered, base_attr_map + + +def _collect_base_attrs_broken(cls, taken_attr_names): + """ + Collect attr.ibs from base classes of *cls*, except *taken_attr_names*. + + N.B. *taken_attr_names* will be mutated. + + Adhere to the old incorrect behavior. + + Notably it collects from the front and considers inherited attributes which + leads to the buggy behavior reported in #428. + """ + base_attrs = [] + base_attr_map = {} # A dictionary of base attrs to their classes. + + # Traverse the MRO and collect attributes. 
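+    # (editor-added comment) Unlike _collect_base_attrs above, this walks the
+    # MRO front-to-back and keeps attributes already marked as inherited,
+    # reproducing the pre-#428 behavior.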
+ for base_cls in cls.__mro__[1:-1]: + for a in getattr(base_cls, "__attrs_attrs__", []): + if a.name in taken_attr_names: + continue + + a = a.evolve(inherited=True) + taken_attr_names.add(a.name) + base_attrs.append(a) + base_attr_map[a.name] = base_cls + + return base_attrs, base_attr_map + + +def _transform_attrs( + cls, these, auto_attribs, kw_only, collect_by_mro, field_transformer +): + """ + Transform all `_CountingAttr`s on a class into `Attribute`s. + + If *these* is passed, use that and don't look for them on the class. + + If *collect_by_mro* is True, collect them in the correct MRO order, + otherwise use the old -- incorrect -- order. See #428. + + Return an `_Attributes`. + """ + cd = cls.__dict__ + anns = _get_annotations(cls) + + if these is not None: + ca_list = [(name, ca) for name, ca in these.items()] + + if not isinstance(these, ordered_dict): + ca_list.sort(key=_counter_getter) + elif auto_attribs is True: + ca_names = { + name + for name, attr in cd.items() + if isinstance(attr, _CountingAttr) + } + ca_list = [] + annot_names = set() + for attr_name, type in anns.items(): + if _is_class_var(type): + continue + annot_names.add(attr_name) + a = cd.get(attr_name, NOTHING) + + if not isinstance(a, _CountingAttr): + if a is NOTHING: + a = attrib() + else: + a = attrib(default=a) + ca_list.append((attr_name, a)) + + unannotated = ca_names - annot_names + if len(unannotated) > 0: + raise UnannotatedAttributeError( + "The following `attr.ib`s lack a type annotation: " + + ", ".join( + sorted(unannotated, key=lambda n: cd.get(n).counter) + ) + + "." + ) + else: + ca_list = sorted( + ( + (name, attr) + for name, attr in cd.items() + if isinstance(attr, _CountingAttr) + ), + key=lambda e: e[1].counter, + ) + + own_attrs = [ + Attribute.from_counting_attr( + name=attr_name, ca=ca, type=anns.get(attr_name) + ) + for attr_name, ca in ca_list + ] + + if collect_by_mro: + base_attrs, base_attr_map = _collect_base_attrs( + cls, {a.name for a in own_attrs} + ) + else: + base_attrs, base_attr_map = _collect_base_attrs_broken( + cls, {a.name for a in own_attrs} + ) + + if kw_only: + own_attrs = [a.evolve(kw_only=True) for a in own_attrs] + base_attrs = [a.evolve(kw_only=True) for a in base_attrs] + + attrs = base_attrs + own_attrs + + # Mandatory vs non-mandatory attr order only matters when they are part of + # the __init__ signature and when they aren't kw_only (which are moved to + # the end and can be mandatory or non-mandatory in any order, as they will + # be specified as keyword args anyway). Check the order of those attrs: + had_default = False + for a in (a for a in attrs if a.init is not False and a.kw_only is False): + if had_default is True and a.default is NOTHING: + raise ValueError( + "No mandatory attributes allowed after an attribute with a " + "default value or factory. Attribute in question: %r" % (a,) + ) + + if had_default is False and a.default is not NOTHING: + had_default = True + + if field_transformer is not None: + attrs = field_transformer(cls, attrs) + + # Create AttrsClass *after* applying the field_transformer since it may + # add or remove attributes! + attr_names = [a.name for a in attrs] + AttrsClass = _make_attr_tuple_class(cls.__name__, attr_names) + + return _Attributes((AttrsClass(attrs), base_attrs, base_attr_map)) + + +if PYPY: + + def _frozen_setattrs(self, name, value): + """ + Attached to frozen classes as __setattr__.
+ """ + if isinstance(self, BaseException) and name in ( + "__cause__", + "__context__", + ): + BaseException.__setattr__(self, name, value) + return + + raise FrozenInstanceError() + +else: + + def _frozen_setattrs(self, name, value): + """ + Attached to frozen classes as __setattr__. + """ + raise FrozenInstanceError() + + +def _frozen_delattrs(self, name): + """ + Attached to frozen classes as __delattr__. + """ + raise FrozenInstanceError() + + +class _ClassBuilder: + """ + Iteratively build *one* class. + """ + + __slots__ = ( + "_attr_names", + "_attrs", + "_base_attr_map", + "_base_names", + "_cache_hash", + "_cls", + "_cls_dict", + "_delete_attribs", + "_frozen", + "_has_pre_init", + "_has_post_init", + "_is_exc", + "_on_setattr", + "_slots", + "_weakref_slot", + "_wrote_own_setattr", + "_has_custom_setattr", + ) + + def __init__( + self, + cls, + these, + slots, + frozen, + weakref_slot, + getstate_setstate, + auto_attribs, + kw_only, + cache_hash, + is_exc, + collect_by_mro, + on_setattr, + has_custom_setattr, + field_transformer, + ): + attrs, base_attrs, base_map = _transform_attrs( + cls, + these, + auto_attribs, + kw_only, + collect_by_mro, + field_transformer, + ) + + self._cls = cls + self._cls_dict = dict(cls.__dict__) if slots else {} + self._attrs = attrs + self._base_names = {a.name for a in base_attrs} + self._base_attr_map = base_map + self._attr_names = tuple(a.name for a in attrs) + self._slots = slots + self._frozen = frozen + self._weakref_slot = weakref_slot + self._cache_hash = cache_hash + self._has_pre_init = bool(getattr(cls, "__attrs_pre_init__", False)) + self._has_post_init = bool(getattr(cls, "__attrs_post_init__", False)) + self._delete_attribs = not bool(these) + self._is_exc = is_exc + self._on_setattr = on_setattr + + self._has_custom_setattr = has_custom_setattr + self._wrote_own_setattr = False + + self._cls_dict["__attrs_attrs__"] = self._attrs + + if frozen: + self._cls_dict["__setattr__"] = _frozen_setattrs + self._cls_dict["__delattr__"] = _frozen_delattrs + + self._wrote_own_setattr = True + elif on_setattr in ( + _ng_default_on_setattr, + setters.validate, + setters.convert, + ): + has_validator = has_converter = False + for a in attrs: + if a.validator is not None: + has_validator = True + if a.converter is not None: + has_converter = True + + if has_validator and has_converter: + break + if ( + ( + on_setattr == _ng_default_on_setattr + and not (has_validator or has_converter) + ) + or (on_setattr == setters.validate and not has_validator) + or (on_setattr == setters.convert and not has_converter) + ): + # If class-level on_setattr is set to convert + validate, but + # there's no field to convert or validate, pretend like there's + # no on_setattr. + self._on_setattr = None + + if getstate_setstate: + ( + self._cls_dict["__getstate__"], + self._cls_dict["__setstate__"], + ) = self._make_getstate_setstate() + + def __repr__(self): + return "<_ClassBuilder(cls={cls})>".format(cls=self._cls.__name__) + + def build_class(self): + """ + Finalize class based on the accumulated configuration. + + Builder cannot be used after calling this method. + """ + if self._slots is True: + return self._create_slots_class() + else: + return self._patch_original_class() + + def _patch_original_class(self): + """ + Apply accumulated methods and return the class. + """ + cls = self._cls + base_names = self._base_names + + # Clean class of attribute definitions (`attr.ib()`s). 
+ if self._delete_attribs: + for name in self._attr_names: + if ( + name not in base_names + and getattr(cls, name, _sentinel) is not _sentinel + ): + try: + delattr(cls, name) + except AttributeError: + # This can happen if a base class defines a class + # variable and we want to set an attribute with the + # same name by using only a type annotation. + pass + + # Attach our dunder methods. + for name, value in self._cls_dict.items(): + setattr(cls, name, value) + + # If we've inherited an attrs __setattr__ and don't write our own, + # reset it to object's. + if not self._wrote_own_setattr and getattr( + cls, "__attrs_own_setattr__", False + ): + cls.__attrs_own_setattr__ = False + + if not self._has_custom_setattr: + cls.__setattr__ = _obj_setattr + + return cls + + def _create_slots_class(self): + """ + Build and return a new class with a `__slots__` attribute. + """ + cd = { + k: v + for k, v in self._cls_dict.items() + if k not in tuple(self._attr_names) + ("__dict__", "__weakref__") + } + + # If our class doesn't have its own implementation of __setattr__ + # (either from the user or by us), check the bases, if one of them has + # an attrs-made __setattr__, that needs to be reset. We don't walk the + # MRO because we only care about our immediate base classes. + # XXX: This can be confused by subclassing a slotted attrs class with + # XXX: a non-attrs class and subclass the resulting class with an attrs + # XXX: class. See `test_slotted_confused` for details. For now that's + # XXX: OK with us. + if not self._wrote_own_setattr: + cd["__attrs_own_setattr__"] = False + + if not self._has_custom_setattr: + for base_cls in self._cls.__bases__: + if base_cls.__dict__.get("__attrs_own_setattr__", False): + cd["__setattr__"] = _obj_setattr + break + + # Traverse the MRO to collect existing slots + # and check for an existing __weakref__. + existing_slots = dict() + weakref_inherited = False + for base_cls in self._cls.__mro__[1:-1]: + if base_cls.__dict__.get("__weakref__", None) is not None: + weakref_inherited = True + existing_slots.update( + { + name: getattr(base_cls, name) + for name in getattr(base_cls, "__slots__", []) + } + ) + + base_names = set(self._base_names) + + names = self._attr_names + if ( + self._weakref_slot + and "__weakref__" not in getattr(self._cls, "__slots__", ()) + and "__weakref__" not in names + and not weakref_inherited + ): + names += ("__weakref__",) + + # We only add the names of attributes that aren't inherited. + # Setting __slots__ to inherited attributes wastes memory. + slot_names = [name for name in names if name not in base_names] + # There are slots for attributes from current class + # that are defined in parent classes. + # As their descriptors may be overridden by a child class, + # we collect them here and update the class dict + reused_slots = { + slot: slot_descriptor + for slot, slot_descriptor in existing_slots.items() + if slot in slot_names + } + slot_names = [name for name in slot_names if name not in reused_slots] + cd.update(reused_slots) + if self._cache_hash: + slot_names.append(_hash_cache_field) + cd["__slots__"] = tuple(slot_names) + + cd["__qualname__"] = self._cls.__qualname__ + + # Create new class based on old class and our methods. + cls = type(self._cls)(self._cls.__name__, self._cls.__bases__, cd) + + # The following is a fix for + # . On Python 3, + # if a method mentions `__class__` or uses the no-arg super(), the + # compiler will bake a reference to the class in the method itself + # as `method.__closure__`. 
Since we replace the class with a + # clone, we rewrite these references so it keeps working. + for item in cls.__dict__.values(): + if isinstance(item, (classmethod, staticmethod)): + # Class- and staticmethods hide their functions inside. + # These might need to be rewritten as well. + closure_cells = getattr(item.__func__, "__closure__", None) + elif isinstance(item, property): + # Workaround for property `super()` shortcut (PY3-only). + # There is no universal way for other descriptors. + closure_cells = getattr(item.fget, "__closure__", None) + else: + closure_cells = getattr(item, "__closure__", None) + + if not closure_cells: # Catch None or the empty list. + continue + for cell in closure_cells: + try: + match = cell.cell_contents is self._cls + except ValueError: # ValueError: Cell is empty + pass + else: + if match: + set_closure_cell(cell, cls) + + return cls + + def add_repr(self, ns): + self._cls_dict["__repr__"] = self._add_method_dunders( + _make_repr(self._attrs, ns, self._cls) + ) + return self + + def add_str(self): + repr = self._cls_dict.get("__repr__") + if repr is None: + raise ValueError( + "__str__ can only be generated if a __repr__ exists." + ) + + def __str__(self): + return self.__repr__() + + self._cls_dict["__str__"] = self._add_method_dunders(__str__) + return self + + def _make_getstate_setstate(self): + """ + Create custom __setstate__ and __getstate__ methods. + """ + # __weakref__ is not writable. + state_attr_names = tuple( + an for an in self._attr_names if an != "__weakref__" + ) + + def slots_getstate(self): + """ + Automatically created by attrs. + """ + return tuple(getattr(self, name) for name in state_attr_names) + + hash_caching_enabled = self._cache_hash + + def slots_setstate(self, state): + """ + Automatically created by attrs. + """ + __bound_setattr = _obj_setattr.__get__(self, Attribute) + for name, value in zip(state_attr_names, state): + __bound_setattr(name, value) + + # The hash code cache is not included when the object is + # serialized, but it still needs to be initialized to None to + # indicate that the first call to __hash__ should be a cache + # miss. 
+ if hash_caching_enabled: + __bound_setattr(_hash_cache_field, None) + + return slots_getstate, slots_setstate + + def make_unhashable(self): + self._cls_dict["__hash__"] = None + return self + + def add_hash(self): + self._cls_dict["__hash__"] = self._add_method_dunders( + _make_hash( + self._cls, + self._attrs, + frozen=self._frozen, + cache_hash=self._cache_hash, + ) + ) + + return self + + def add_init(self): + self._cls_dict["__init__"] = self._add_method_dunders( + _make_init( + self._cls, + self._attrs, + self._has_pre_init, + self._has_post_init, + self._frozen, + self._slots, + self._cache_hash, + self._base_attr_map, + self._is_exc, + self._on_setattr, + attrs_init=False, + ) + ) + + return self + + def add_match_args(self): + self._cls_dict["__match_args__"] = tuple( + field.name + for field in self._attrs + if field.init and not field.kw_only + ) + + def add_attrs_init(self): + self._cls_dict["__attrs_init__"] = self._add_method_dunders( + _make_init( + self._cls, + self._attrs, + self._has_pre_init, + self._has_post_init, + self._frozen, + self._slots, + self._cache_hash, + self._base_attr_map, + self._is_exc, + self._on_setattr, + attrs_init=True, + ) + ) + + return self + + def add_eq(self): + cd = self._cls_dict + + cd["__eq__"] = self._add_method_dunders( + _make_eq(self._cls, self._attrs) + ) + cd["__ne__"] = self._add_method_dunders(_make_ne()) + + return self + + def add_order(self): + cd = self._cls_dict + + cd["__lt__"], cd["__le__"], cd["__gt__"], cd["__ge__"] = ( + self._add_method_dunders(meth) + for meth in _make_order(self._cls, self._attrs) + ) + + return self + + def add_setattr(self): + if self._frozen: + return self + + sa_attrs = {} + for a in self._attrs: + on_setattr = a.on_setattr or self._on_setattr + if on_setattr and on_setattr is not setters.NO_OP: + sa_attrs[a.name] = a, on_setattr + + if not sa_attrs: + return self + + if self._has_custom_setattr: + # We need to write a __setattr__ but there already is one! + raise ValueError( + "Can't combine custom __setattr__ with on_setattr hooks." + ) + + # docstring comes from _add_method_dunders + def __setattr__(self, name, val): + try: + a, hook = sa_attrs[name] + except KeyError: + nval = val + else: + nval = hook(self, a, val) + + _obj_setattr(self, name, nval) + + self._cls_dict["__attrs_own_setattr__"] = True + self._cls_dict["__setattr__"] = self._add_method_dunders(__setattr__) + self._wrote_own_setattr = True + + return self + + def _add_method_dunders(self, method): + """ + Add __module__ and __qualname__ to a *method* if possible. + """ + try: + method.__module__ = self._cls.__module__ + except AttributeError: + pass + + try: + method.__qualname__ = ".".join( + (self._cls.__qualname__, method.__name__) + ) + except AttributeError: + pass + + try: + method.__doc__ = "Method generated by attrs for class %s." % ( + self._cls.__qualname__, + ) + except AttributeError: + pass + + return method + + +def _determine_attrs_eq_order(cmp, eq, order, default_eq): + """ + Validate the combination of *cmp*, *eq*, and *order*. Derive the effective + values of eq and order. If *eq* is None, set it to *default_eq*. + """ + if cmp is not None and any((eq is not None, order is not None)): + raise ValueError("Don't mix `cmp` with `eq' and `order`.") + + # cmp takes precedence due to bw-compatibility. + if cmp is not None: + return cmp, cmp + + # If left None, equality is set to the specified default and ordering + # mirrors equality. 
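+    # (editor-added comment) e.g. (eq=None, order=None) -> (default_eq,
+    # default_eq); (eq=True, order=None) -> (True, True);
+    # (eq=True, order=False) -> (True, False).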
+ if eq is None: + eq = default_eq + + if order is None: + order = eq + + if eq is False and order is True: + raise ValueError("`order` can only be True if `eq` is True too.") + + return eq, order + + +def _determine_attrib_eq_order(cmp, eq, order, default_eq): + """ + Validate the combination of *cmp*, *eq*, and *order*. Derive the effective + values of eq and order. If *eq* is None, set it to *default_eq*. + """ + if cmp is not None and any((eq is not None, order is not None)): + raise ValueError("Don't mix `cmp` with `eq' and `order`.") + + def decide_callable_or_boolean(value): + """ + Decide whether a key function is used. + """ + if callable(value): + value, key = True, value + else: + key = None + return value, key + + # cmp takes precedence due to bw-compatibility. + if cmp is not None: + cmp, cmp_key = decide_callable_or_boolean(cmp) + return cmp, cmp_key, cmp, cmp_key + + # If left None, equality is set to the specified default and ordering + # mirrors equality. + if eq is None: + eq, eq_key = default_eq, None + else: + eq, eq_key = decide_callable_or_boolean(eq) + + if order is None: + order, order_key = eq, eq_key + else: + order, order_key = decide_callable_or_boolean(order) + + if eq is False and order is True: + raise ValueError("`order` can only be True if `eq` is True too.") + + return eq, eq_key, order, order_key + + +def _determine_whether_to_implement( + cls, flag, auto_detect, dunders, default=True +): + """ + Check whether we should implement a set of methods for *cls*. + + *flag* is the argument passed into @attr.s like 'init', *auto_detect* the + same as passed into @attr.s and *dunders* is a tuple of attribute names + whose presence signal that the user has implemented it themselves. + + Return *default* if no reason for either for or against is found. + """ + if flag is True or flag is False: + return flag + + if flag is None and auto_detect is False: + return default + + # Logically, flag is None and auto_detect is True here. + for dunder in dunders: + if _has_own_attribute(cls, dunder): + return False + + return default + + +def attrs( + maybe_cls=None, + these=None, + repr_ns=None, + repr=None, + cmp=None, + hash=None, + init=None, + slots=False, + frozen=False, + weakref_slot=True, + str=False, + auto_attribs=False, + kw_only=False, + cache_hash=False, + auto_exc=False, + eq=None, + order=None, + auto_detect=False, + collect_by_mro=False, + getstate_setstate=None, + on_setattr=None, + field_transformer=None, + match_args=True, +): + r""" + A class decorator that adds `dunder + `_\ -methods according to the + specified attributes using `attr.ib` or the *these* argument. + + :param these: A dictionary of name to `attr.ib` mappings. This is + useful to avoid the definition of your attributes within the class body + because you can't (e.g. if you want to add ``__repr__`` methods to + Django models) or don't want to. + + If *these* is not ``None``, ``attrs`` will *not* search the class body + for attributes and will *not* remove any attributes from it. + + If *these* is an ordered dict (`dict` on Python 3.6+, + `collections.OrderedDict` otherwise), the order is deduced from + the order of the attributes inside *these*. Otherwise the order + of the definition of the attributes is used. + + :type these: `dict` of `str` to `attr.ib` + + :param str repr_ns: When using nested classes, there's no way in Python 2 + to automatically detect that. Therefore it's possible to set the + namespace explicitly for a more meaningful ``repr`` output. 
+ :param bool auto_detect: Instead of setting the *init*, *repr*, *eq*, + *order*, and *hash* arguments explicitly, assume they are set to + ``True`` **unless any** of the involved methods for one of the + arguments is implemented in the *current* class (i.e. it is *not* + inherited from some base class). + + So for example by implementing ``__eq__`` on a class yourself, + ``attrs`` will deduce ``eq=False`` and will create *neither* + ``__eq__`` *nor* ``__ne__`` (but Python classes come with a sensible + ``__ne__`` by default, so it *should* be enough to only implement + ``__eq__`` in most cases). + + .. warning:: + + If you prevent ``attrs`` from creating the ordering methods for you + (``order=False``, e.g. by implementing ``__le__``), it becomes + *your* responsibility to make sure its ordering is sound. The best + way is to use the `functools.total_ordering` decorator. + + + Passing ``True`` or ``False`` to *init*, *repr*, *eq*, *order*, + *cmp*, or *hash* overrides whatever *auto_detect* would determine. + + *auto_detect* requires Python 3. Setting it ``True`` on Python 2 raises + an `attrs.exceptions.PythonTooOldError`. + + :param bool repr: Create a ``__repr__`` method with a human readable + representation of ``attrs`` attributes.. + :param bool str: Create a ``__str__`` method that is identical to + ``__repr__``. This is usually not necessary except for + `Exception`\ s. + :param Optional[bool] eq: If ``True`` or ``None`` (default), add ``__eq__`` + and ``__ne__`` methods that check two instances for equality. + + They compare the instances as if they were tuples of their ``attrs`` + attributes if and only if the types of both classes are *identical*! + :param Optional[bool] order: If ``True``, add ``__lt__``, ``__le__``, + ``__gt__``, and ``__ge__`` methods that behave like *eq* above and + allow instances to be ordered. If ``None`` (default) mirror value of + *eq*. + :param Optional[bool] cmp: Setting *cmp* is equivalent to setting *eq* + and *order* to the same value. Must not be mixed with *eq* or *order*. + :param Optional[bool] hash: If ``None`` (default), the ``__hash__`` method + is generated according how *eq* and *frozen* are set. + + 1. If *both* are True, ``attrs`` will generate a ``__hash__`` for you. + 2. If *eq* is True and *frozen* is False, ``__hash__`` will be set to + None, marking it unhashable (which it is). + 3. If *eq* is False, ``__hash__`` will be left untouched meaning the + ``__hash__`` method of the base class will be used (if base class is + ``object``, this means it will fall back to id-based hashing.). + + Although not recommended, you can decide for yourself and force + ``attrs`` to create one (e.g. if the class is immutable even though you + didn't freeze it programmatically) by passing ``True`` or not. Both of + these cases are rather special and should be used carefully. + + See our documentation on `hashing`, Python's documentation on + `object.__hash__`, and the `GitHub issue that led to the default \ + behavior `_ for more + details. + :param bool init: Create a ``__init__`` method that initializes the + ``attrs`` attributes. Leading underscores are stripped for the argument + name. If a ``__attrs_pre_init__`` method exists on the class, it will + be called before the class is initialized. If a ``__attrs_post_init__`` + method exists on the class, it will be called after the class is fully + initialized. + + If ``init`` is ``False``, an ``__attrs_init__`` method will be + injected instead. 
This allows you to define a custom ``__init__``
+        method that can do pre-init work such as ``super().__init__()``,
+        and then call ``__attrs_init__()`` and ``__attrs_post_init__()``.
+    :param bool slots: Create a `slotted class` that's more
+        memory-efficient. Slotted classes are generally superior to the default
+        dict classes, but have some gotchas you should know about, so we
+        encourage you to read the `glossary entry`.
+    :param bool frozen: Make instances immutable after initialization.  If
+        someone attempts to modify a frozen instance,
+        `attr.exceptions.FrozenInstanceError` is raised.
+
+        .. note::
+
+            1. This is achieved by installing a custom ``__setattr__`` method
+               on your class, so you can't implement your own.
+
+            2. True immutability is impossible in Python.
+
+            3. This *does* have a minor runtime performance impact
+               when initializing new instances.  In other words:
+               ``__init__`` is slightly slower with ``frozen=True``.
+
+            4. If a class is frozen, you cannot modify ``self`` in
+               ``__attrs_post_init__`` or a self-written ``__init__``.  You can
+               circumvent that limitation by using
+               ``object.__setattr__(self, "attribute_name", value)``.
+
+            5. Subclasses of a frozen class are frozen too.
+
+    :param bool weakref_slot: Make instances weak-referenceable.  This has no
+        effect unless ``slots`` is also enabled.
+    :param bool auto_attribs: If ``True``, collect :pep:`526`-annotated
+        attributes (Python 3.6 and later only) from the class body.
+
+        In this case, you **must** annotate every field.  If ``attrs``
+        encounters a field that is set to an `attr.ib` but lacks a type
+        annotation, an `attr.exceptions.UnannotatedAttributeError` is
+        raised.  Use ``field_name: typing.Any = attr.ib(...)`` if you don't
+        want to set a type.
+
+        If you assign a value to those attributes (e.g. ``x: int = 42``), that
+        value becomes the default value as if it had been passed using
+        ``attr.ib(default=42)``.  Passing an instance of `attrs.Factory` also
+        works as expected in most cases (see warning below).
+
+        Attributes annotated as `typing.ClassVar`, and attributes that are
+        neither annotated nor set to an `attr.ib` are **ignored**.
+
+        .. warning::
+           For features that use the attribute name to create decorators (e.g.
+           `validators`), you still *must* assign `attr.ib` to them.  Otherwise
+           Python will either not find the name or try to use the default
+           value to call e.g. ``validator`` on it.
+
+           These errors can be quite confusing and are probably the most
+           common bug reports on our bug tracker.
+
+    :param bool kw_only: Make all attributes keyword-only (Python 3+)
+        in the generated ``__init__`` (if ``init`` is ``False``, this
+        parameter is ignored).
+    :param bool cache_hash: Ensure that the object's hash code is computed
+        only once and stored on the object.  If this is set to ``True``,
+        hashing must be either explicitly or implicitly enabled for this
+        class.  If the hash code is cached, avoid any reassignments of
+        fields involved in hash code computation or mutations of the objects
+        those fields point to after object creation.  If such changes occur,
+        the behavior of the object's hash code is undefined.
+    :param bool auto_exc: If the class subclasses `BaseException`
+        (which implicitly includes any subclass of any exception), the
+        following happens so the class behaves like a well-behaved Python
+        exception class:
+
+        - the values for *eq*, *order*, and *hash* are ignored and the
+          instances compare and hash by the instance's ids (N.B. 
``attrs`` will
+          *not* remove existing implementations of ``__hash__`` or the equality
+          methods.  It just won't add its own.),
+        - all attributes that are either passed into ``__init__`` or have a
+          default value are additionally available as a tuple in the ``args``
+          attribute,
+        - the value of *str* is ignored, leaving ``__str__`` to base classes.
+    :param bool collect_by_mro: Setting this to `True` fixes the way ``attrs``
+       collects attributes from base classes.  The default behavior is
+       incorrect in certain cases of multiple inheritance.  It should be on by
+       default but is kept off for backward-compatibility.
+
+       See issue `#428 <https://github.com/python-attrs/attrs/issues/428>`_ for
+       more details.
+
+    :param Optional[bool] getstate_setstate:
+       .. note::
+          This is usually only interesting for slotted classes and you should
+          probably just set *auto_detect* to `True`.
+
+       If `True`, ``__getstate__`` and
+       ``__setstate__`` are generated and attached to the class.  This is
+       necessary for slotted classes to be pickleable.  If left `None`, it's
+       `True` by default for slotted classes and ``False`` for dict classes.
+
+       If *auto_detect* is `True`, and *getstate_setstate* is left `None`,
+       and **either** ``__getstate__`` or ``__setstate__`` is detected directly
+       on the class (i.e. not inherited), it is set to `False` (this is usually
+       what you want).
+
+    :param on_setattr: A callable that is run whenever the user attempts to set
+        an attribute (either by assignment like ``i.x = 42`` or by using
+        `setattr` like ``setattr(i, "x", 42)``).  It receives the same arguments
+        as validators: the instance, the attribute that is being modified, and
+        the new value.
+
+        If no exception is raised, the attribute is set to the return value of
+        the callable.
+
+        If a list of callables is passed, they're automatically wrapped in an
+        `attrs.setters.pipe`.
+    :type on_setattr: `callable`, or a list of callables, or `None`, or
+        `attrs.setters.NO_OP`
+
+    :param Optional[callable] field_transformer:
+        A function that is called with the original class object and all
+        fields right before ``attrs`` finalizes the class.  You can use
+        this, e.g., to automatically add converters or validators to
+        fields based on their types.  See `transform-fields` for more details.
+
+    :param bool match_args:
+        If `True` (default), set ``__match_args__`` on the class to support
+        :pep:`634` (Structural Pattern Matching).  It is a tuple of all
+        non-keyword-only ``__init__`` parameter names on Python 3.10 and later.
+        Ignored on older Python versions.
+
+    .. versionadded:: 16.0.0 *slots*
+    .. versionadded:: 16.1.0 *frozen*
+    .. versionadded:: 16.3.0 *str*
+    .. versionadded:: 16.3.0 Support for ``__attrs_post_init__``.
+    .. versionchanged:: 17.1.0
+       *hash* supports ``None`` as value which is also the default now.
+    .. versionadded:: 17.3.0 *auto_attribs*
+    .. versionchanged:: 18.1.0
+       If *these* is passed, no attributes are deleted from the class body.
+    .. versionchanged:: 18.1.0 If *these* is ordered, the order is retained.
+    .. versionadded:: 18.2.0 *weakref_slot*
+    .. deprecated:: 18.2.0
+       ``__lt__``, ``__le__``, ``__gt__``, and ``__ge__`` now raise a
+       `DeprecationWarning` if the classes compared are subclasses of
+       each other.  ``__eq__`` and ``__ne__`` never tried to compare subclasses
+       to each other.
+    .. versionchanged:: 19.2.0
+       ``__lt__``, ``__le__``, ``__gt__``, and ``__ge__`` now do not consider
+       subclasses comparable anymore.
+    .. versionadded:: 18.2.0 *kw_only*
+    .. versionadded:: 18.2.0 *cache_hash*
+    .. versionadded:: 19.1.0 *auto_exc*
+    .. 
deprecated:: 19.2.0 *cmp* Removal on or after 2021-06-01. + .. versionadded:: 19.2.0 *eq* and *order* + .. versionadded:: 20.1.0 *auto_detect* + .. versionadded:: 20.1.0 *collect_by_mro* + .. versionadded:: 20.1.0 *getstate_setstate* + .. versionadded:: 20.1.0 *on_setattr* + .. versionadded:: 20.3.0 *field_transformer* + .. versionchanged:: 21.1.0 + ``init=False`` injects ``__attrs_init__`` + .. versionchanged:: 21.1.0 Support for ``__attrs_pre_init__`` + .. versionchanged:: 21.1.0 *cmp* undeprecated + .. versionadded:: 21.3.0 *match_args* + """ + eq_, order_ = _determine_attrs_eq_order(cmp, eq, order, None) + hash_ = hash # work around the lack of nonlocal + + if isinstance(on_setattr, (list, tuple)): + on_setattr = setters.pipe(*on_setattr) + + def wrap(cls): + is_frozen = frozen or _has_frozen_base_class(cls) + is_exc = auto_exc is True and issubclass(cls, BaseException) + has_own_setattr = auto_detect and _has_own_attribute( + cls, "__setattr__" + ) + + if has_own_setattr and is_frozen: + raise ValueError("Can't freeze a class with a custom __setattr__.") + + builder = _ClassBuilder( + cls, + these, + slots, + is_frozen, + weakref_slot, + _determine_whether_to_implement( + cls, + getstate_setstate, + auto_detect, + ("__getstate__", "__setstate__"), + default=slots, + ), + auto_attribs, + kw_only, + cache_hash, + is_exc, + collect_by_mro, + on_setattr, + has_own_setattr, + field_transformer, + ) + if _determine_whether_to_implement( + cls, repr, auto_detect, ("__repr__",) + ): + builder.add_repr(repr_ns) + if str is True: + builder.add_str() + + eq = _determine_whether_to_implement( + cls, eq_, auto_detect, ("__eq__", "__ne__") + ) + if not is_exc and eq is True: + builder.add_eq() + if not is_exc and _determine_whether_to_implement( + cls, order_, auto_detect, ("__lt__", "__le__", "__gt__", "__ge__") + ): + builder.add_order() + + builder.add_setattr() + + if ( + hash_ is None + and auto_detect is True + and _has_own_attribute(cls, "__hash__") + ): + hash = False + else: + hash = hash_ + if hash is not True and hash is not False and hash is not None: + # Can't use `hash in` because 1 == True for example. + raise TypeError( + "Invalid value for hash. Must be True, False, or None." + ) + elif hash is False or (hash is None and eq is False) or is_exc: + # Don't do anything. Should fall back to __object__'s __hash__ + # which is by id. + if cache_hash: + raise TypeError( + "Invalid value for cache_hash. To use hash caching," + " hashing must be either explicitly or implicitly " + "enabled." + ) + elif hash is True or ( + hash is None and eq is True and is_frozen is True + ): + # Build a __hash__ if told so, or if it's safe. + builder.add_hash() + else: + # Raise TypeError on attempts to hash. + if cache_hash: + raise TypeError( + "Invalid value for cache_hash. To use hash caching," + " hashing must be either explicitly or implicitly " + "enabled." + ) + builder.make_unhashable() + + if _determine_whether_to_implement( + cls, init, auto_detect, ("__init__",) + ): + builder.add_init() + else: + builder.add_attrs_init() + if cache_hash: + raise TypeError( + "Invalid value for cache_hash. To use hash caching," + " init must be True." + ) + + if ( + PY310 + and match_args + and not _has_own_attribute(cls, "__match_args__") + ): + builder.add_match_args() + + return builder.build_class() + + # maybe_cls's type depends on the usage of the decorator. It's a class + # if it's used as `@attrs` but ``None`` if used as `@attrs()`. 
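+    # For example (illustrative):
+    #
+    #     @attrs
+    #     class C:            # bare: maybe_cls is C, so wrap(C) is returned
+    #         pass
+    #
+    #     @attrs(frozen=True)
+    #     class D:            # with parens: maybe_cls is None, wrap is the decorator
+    #         pass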
+    if maybe_cls is None:
+        return wrap
+    else:
+        return wrap(maybe_cls)
+
+
+_attrs = attrs
+"""
+Internal alias so we can use it in functions that take an argument called
+*attrs*.
+"""
+
+
+def _has_frozen_base_class(cls):
+    """
+    Check whether *cls* has a frozen ancestor by looking at its
+    __setattr__.
+    """
+    return cls.__setattr__ is _frozen_setattrs
+
+
+def _generate_unique_filename(cls, func_name):
+    """
+    Create a "filename" suitable for a function being generated.
+    """
+    unique_filename = "<attrs generated {} {}.{}>".format(
+        func_name,
+        cls.__module__,
+        getattr(cls, "__qualname__", cls.__name__),
+    )
+    return unique_filename
+
+
+def _make_hash(cls, attrs, frozen, cache_hash):
+    attrs = tuple(
+        a for a in attrs if a.hash is True or (a.hash is None and a.eq is True)
+    )
+
+    tab = "        "
+
+    unique_filename = _generate_unique_filename(cls, "hash")
+    type_hash = hash(unique_filename)
+    # If eq is custom generated, we need to include the functions in globs
+    globs = {}
+
+    hash_def = "def __hash__(self"
+    hash_func = "hash(("
+    closing_braces = "))"
+    if not cache_hash:
+        hash_def += "):"
+    else:
+        hash_def += ", *"
+
+        hash_def += (
+            ", _cache_wrapper="
+            + "__import__('attr._make')._make._CacheHashWrapper):"
+        )
+        hash_func = "_cache_wrapper(" + hash_func
+        closing_braces += ")"
+
+    method_lines = [hash_def]
+
+    def append_hash_computation_lines(prefix, indent):
+        """
+        Generate the code for actually computing the hash code.
+        Below this will either be returned directly or used to compute
+        a value which is then cached, depending on the value of cache_hash
+        """
+
+        method_lines.extend(
+            [
+                indent + prefix + hash_func,
+                indent + "    %d," % (type_hash,),
+            ]
+        )
+
+        for a in attrs:
+            if a.eq_key:
+                cmp_name = "_%s_key" % (a.name,)
+                globs[cmp_name] = a.eq_key
+                method_lines.append(
+                    indent + "    %s(self.%s)," % (cmp_name, a.name)
+                )
+            else:
+                method_lines.append(indent + "    self.%s," % a.name)
+
+        method_lines.append(indent + closing_braces)
+
+    if cache_hash:
+        method_lines.append(tab + "if self.%s is None:" % _hash_cache_field)
+        if frozen:
+            append_hash_computation_lines(
+                "object.__setattr__(self, '%s', " % _hash_cache_field, tab * 2
+            )
+            method_lines.append(tab * 2 + ")")  # close __setattr__
+        else:
+            append_hash_computation_lines(
+                "self.%s = " % _hash_cache_field, tab * 2
+            )
+        method_lines.append(tab + "return self.%s" % _hash_cache_field)
+    else:
+        append_hash_computation_lines("return ", tab)
+
+    script = "\n".join(method_lines)
+    return _make_method("__hash__", script, unique_filename, globs)
+
+
+def _add_hash(cls, attrs):
+    """
+    Add a hash method to *cls*.
+    """
+    cls.__hash__ = _make_hash(cls, attrs, frozen=False, cache_hash=False)
+    return cls
+
+
+def _make_ne():
+    """
+    Create __ne__ method.
+    """
+
+    def __ne__(self, other):
+        """
+        Check equality and either forward a NotImplemented or
+        return the result negated.
+        """
+        result = self.__eq__(other)
+        if result is NotImplemented:
+            return NotImplemented
+
+        return not result
+
+    return __ne__
+
+
+def _make_eq(cls, attrs):
+    """
+    Create __eq__ method for *cls* with *attrs*.
+    """
+    attrs = [a for a in attrs if a.eq]
+
+    unique_filename = _generate_unique_filename(cls, "eq")
+    lines = [
+        "def __eq__(self, other):",
+        "    if other.__class__ is not self.__class__:",
+        "        return NotImplemented",
+    ]
+
+    # We can't just do a big self.x = other.x and... clause due to
+    # irregularities like nan == nan is false but (nan,) == (nan,) is true.
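+    # For example:
+    #
+    #     nan = float("nan")
+    #     nan == nan          # False
+    #     (nan,) == (nan,)    # True -- tuple comparison short-circuits on identity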
+ globs = {} + if attrs: + lines.append(" return (") + others = [" ) == ("] + for a in attrs: + if a.eq_key: + cmp_name = "_%s_key" % (a.name,) + # Add the key function to the global namespace + # of the evaluated function. + globs[cmp_name] = a.eq_key + lines.append( + " %s(self.%s)," + % ( + cmp_name, + a.name, + ) + ) + others.append( + " %s(other.%s)," + % ( + cmp_name, + a.name, + ) + ) + else: + lines.append(" self.%s," % (a.name,)) + others.append(" other.%s," % (a.name,)) + + lines += others + [" )"] + else: + lines.append(" return True") + + script = "\n".join(lines) + + return _make_method("__eq__", script, unique_filename, globs) + + +def _make_order(cls, attrs): + """ + Create ordering methods for *cls* with *attrs*. + """ + attrs = [a for a in attrs if a.order] + + def attrs_to_tuple(obj): + """ + Save us some typing. + """ + return tuple( + key(value) if key else value + for value, key in ( + (getattr(obj, a.name), a.order_key) for a in attrs + ) + ) + + def __lt__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) < attrs_to_tuple(other) + + return NotImplemented + + def __le__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) <= attrs_to_tuple(other) + + return NotImplemented + + def __gt__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) > attrs_to_tuple(other) + + return NotImplemented + + def __ge__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) >= attrs_to_tuple(other) + + return NotImplemented + + return __lt__, __le__, __gt__, __ge__ + + +def _add_eq(cls, attrs=None): + """ + Add equality methods to *cls* with *attrs*. + """ + if attrs is None: + attrs = cls.__attrs_attrs__ + + cls.__eq__ = _make_eq(cls, attrs) + cls.__ne__ = _make_ne() + + return cls + + +if HAS_F_STRINGS: + + def _make_repr(attrs, ns, cls): + unique_filename = _generate_unique_filename(cls, "repr") + # Figure out which attributes to include, and which function to use to + # format them. The a.repr value can be either bool or a custom + # callable. + attr_names_with_reprs = tuple( + (a.name, (repr if a.repr is True else a.repr), a.init) + for a in attrs + if a.repr is not False + ) + globs = { + name + "_repr": r + for name, r, _ in attr_names_with_reprs + if r != repr + } + globs["_compat"] = _compat + globs["AttributeError"] = AttributeError + globs["NOTHING"] = NOTHING + attribute_fragments = [] + for name, r, i in attr_names_with_reprs: + accessor = ( + "self." 
+ name + if i + else 'getattr(self, "' + name + '", NOTHING)' + ) + fragment = ( + "%s={%s!r}" % (name, accessor) + if r == repr + else "%s={%s_repr(%s)}" % (name, name, accessor) + ) + attribute_fragments.append(fragment) + repr_fragment = ", ".join(attribute_fragments) + + if ns is None: + cls_name_fragment = ( + '{self.__class__.__qualname__.rsplit(">.", 1)[-1]}' + ) + else: + cls_name_fragment = ns + ".{self.__class__.__name__}" + + lines = [ + "def __repr__(self):", + " try:", + " already_repring = _compat.repr_context.already_repring", + " except AttributeError:", + " already_repring = {id(self),}", + " _compat.repr_context.already_repring = already_repring", + " else:", + " if id(self) in already_repring:", + " return '...'", + " else:", + " already_repring.add(id(self))", + " try:", + " return f'%s(%s)'" % (cls_name_fragment, repr_fragment), + " finally:", + " already_repring.remove(id(self))", + ] + + return _make_method( + "__repr__", "\n".join(lines), unique_filename, globs=globs + ) + +else: + + def _make_repr(attrs, ns, _): + """ + Make a repr method that includes relevant *attrs*, adding *ns* to the + full name. + """ + + # Figure out which attributes to include, and which function to use to + # format them. The a.repr value can be either bool or a custom + # callable. + attr_names_with_reprs = tuple( + (a.name, repr if a.repr is True else a.repr) + for a in attrs + if a.repr is not False + ) + + def __repr__(self): + """ + Automatically created by attrs. + """ + try: + already_repring = _compat.repr_context.already_repring + except AttributeError: + already_repring = set() + _compat.repr_context.already_repring = already_repring + + if id(self) in already_repring: + return "..." + real_cls = self.__class__ + if ns is None: + class_name = real_cls.__qualname__.rsplit(">.", 1)[-1] + else: + class_name = ns + "." + real_cls.__name__ + + # Since 'self' remains on the stack (i.e.: strongly referenced) + # for the duration of this call, it's safe to depend on id(...) + # stability, and not need to track the instance and therefore + # worry about properties like weakref- or hash-ability. + already_repring.add(id(self)) + try: + result = [class_name, "("] + first = True + for name, attr_repr in attr_names_with_reprs: + if first: + first = False + else: + result.append(", ") + result.extend( + (name, "=", attr_repr(getattr(self, name, NOTHING))) + ) + return "".join(result) + ")" + finally: + already_repring.remove(id(self)) + + return __repr__ + + +def _add_repr(cls, ns=None, attrs=None): + """ + Add a repr method to *cls*. + """ + if attrs is None: + attrs = cls.__attrs_attrs__ + + cls.__repr__ = _make_repr(attrs, ns, cls) + return cls + + +def fields(cls): + """ + Return the tuple of ``attrs`` attributes for a class. + + The tuple also allows accessing the fields by their names (see below for + examples). + + :param type cls: Class to introspect. + + :raise TypeError: If *cls* is not a class. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + :rtype: tuple (with name accessors) of `attrs.Attribute` + + .. versionchanged:: 16.2.0 Returned tuple allows accessing the fields + by name. 
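+
+    For example (illustrative)::
+
+        >>> @attr.s
+        ... class C:
+        ...     x = attr.ib()
+        >>> fields(C).x is fields(C)[0]
+        True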
+ """ + if not isinstance(cls, type): + raise TypeError("Passed object must be a class.") + attrs = getattr(cls, "__attrs_attrs__", None) + if attrs is None: + raise NotAnAttrsClassError( + "{cls!r} is not an attrs-decorated class.".format(cls=cls) + ) + return attrs + + +def fields_dict(cls): + """ + Return an ordered dictionary of ``attrs`` attributes for a class, whose + keys are the attribute names. + + :param type cls: Class to introspect. + + :raise TypeError: If *cls* is not a class. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + :rtype: an ordered dict where keys are attribute names and values are + `attrs.Attribute`\\ s. This will be a `dict` if it's + naturally ordered like on Python 3.6+ or an + :class:`~collections.OrderedDict` otherwise. + + .. versionadded:: 18.1.0 + """ + if not isinstance(cls, type): + raise TypeError("Passed object must be a class.") + attrs = getattr(cls, "__attrs_attrs__", None) + if attrs is None: + raise NotAnAttrsClassError( + "{cls!r} is not an attrs-decorated class.".format(cls=cls) + ) + return ordered_dict((a.name, a) for a in attrs) + + +def validate(inst): + """ + Validate all attributes on *inst* that have a validator. + + Leaves all exceptions through. + + :param inst: Instance of a class with ``attrs`` attributes. + """ + if _config._run_validators is False: + return + + for a in fields(inst.__class__): + v = a.validator + if v is not None: + v(inst, a, getattr(inst, a.name)) + + +def _is_slot_cls(cls): + return "__slots__" in cls.__dict__ + + +def _is_slot_attr(a_name, base_attr_map): + """ + Check if the attribute name comes from a slot class. + """ + return a_name in base_attr_map and _is_slot_cls(base_attr_map[a_name]) + + +def _make_init( + cls, + attrs, + pre_init, + post_init, + frozen, + slots, + cache_hash, + base_attr_map, + is_exc, + cls_on_setattr, + attrs_init, +): + has_cls_on_setattr = ( + cls_on_setattr is not None and cls_on_setattr is not setters.NO_OP + ) + + if frozen and has_cls_on_setattr: + raise ValueError("Frozen classes can't use on_setattr.") + + needs_cached_setattr = cache_hash or frozen + filtered_attrs = [] + attr_dict = {} + for a in attrs: + if not a.init and a.default is NOTHING: + continue + + filtered_attrs.append(a) + attr_dict[a.name] = a + + if a.on_setattr is not None: + if frozen is True: + raise ValueError("Frozen classes can't use on_setattr.") + + needs_cached_setattr = True + elif has_cls_on_setattr and a.on_setattr is not setters.NO_OP: + needs_cached_setattr = True + + unique_filename = _generate_unique_filename(cls, "init") + + script, globs, annotations = _attrs_to_init_script( + filtered_attrs, + frozen, + slots, + pre_init, + post_init, + cache_hash, + base_attr_map, + is_exc, + has_cls_on_setattr, + attrs_init, + ) + if cls.__module__ in sys.modules: + # This makes typing.get_type_hints(CLS.__init__) resolve string types. + globs.update(sys.modules[cls.__module__].__dict__) + + globs.update({"NOTHING": NOTHING, "attr_dict": attr_dict}) + + if needs_cached_setattr: + # Save the lookup overhead in __init__ if we need to circumvent + # setattr hooks. + globs["_setattr"] = _obj_setattr + + init = _make_method( + "__attrs_init__" if attrs_init else "__init__", + script, + unique_filename, + globs, + ) + init.__annotations__ = annotations + + return init + + +def _setattr(attr_name, value_var, has_on_setattr): + """ + Use the cached object.setattr to set *attr_name* to *value_var*. 
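+
+    For example, ``_setattr("x", "x", False)`` returns the code fragment
+    ``"_setattr(self, 'x', x)"``.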
+ """ + return "_setattr(self, '%s', %s)" % (attr_name, value_var) + + +def _setattr_with_converter(attr_name, value_var, has_on_setattr): + """ + Use the cached object.setattr to set *attr_name* to *value_var*, but run + its converter first. + """ + return "_setattr(self, '%s', %s(%s))" % ( + attr_name, + _init_converter_pat % (attr_name,), + value_var, + ) + + +def _assign(attr_name, value, has_on_setattr): + """ + Unless *attr_name* has an on_setattr hook, use normal assignment. Otherwise + relegate to _setattr. + """ + if has_on_setattr: + return _setattr(attr_name, value, True) + + return "self.%s = %s" % (attr_name, value) + + +def _assign_with_converter(attr_name, value_var, has_on_setattr): + """ + Unless *attr_name* has an on_setattr hook, use normal assignment after + conversion. Otherwise relegate to _setattr_with_converter. + """ + if has_on_setattr: + return _setattr_with_converter(attr_name, value_var, True) + + return "self.%s = %s(%s)" % ( + attr_name, + _init_converter_pat % (attr_name,), + value_var, + ) + + +def _attrs_to_init_script( + attrs, + frozen, + slots, + pre_init, + post_init, + cache_hash, + base_attr_map, + is_exc, + has_cls_on_setattr, + attrs_init, +): + """ + Return a script of an initializer for *attrs* and a dict of globals. + + The globals are expected by the generated script. + + If *frozen* is True, we cannot set the attributes directly so we use + a cached ``object.__setattr__``. + """ + lines = [] + if pre_init: + lines.append("self.__attrs_pre_init__()") + + if frozen is True: + if slots is True: + fmt_setter = _setattr + fmt_setter_with_converter = _setattr_with_converter + else: + # Dict frozen classes assign directly to __dict__. + # But only if the attribute doesn't come from an ancestor slot + # class. + # Note _inst_dict will be used again below if cache_hash is True + lines.append("_inst_dict = self.__dict__") + + def fmt_setter(attr_name, value_var, has_on_setattr): + if _is_slot_attr(attr_name, base_attr_map): + return _setattr(attr_name, value_var, has_on_setattr) + + return "_inst_dict['%s'] = %s" % (attr_name, value_var) + + def fmt_setter_with_converter( + attr_name, value_var, has_on_setattr + ): + if has_on_setattr or _is_slot_attr(attr_name, base_attr_map): + return _setattr_with_converter( + attr_name, value_var, has_on_setattr + ) + + return "_inst_dict['%s'] = %s(%s)" % ( + attr_name, + _init_converter_pat % (attr_name,), + value_var, + ) + + else: + # Not frozen. + fmt_setter = _assign + fmt_setter_with_converter = _assign_with_converter + + args = [] + kw_only_args = [] + attrs_to_validate = [] + + # This is a dictionary of names to validator and converter callables. + # Injecting this into __init__ globals lets us avoid lookups. 
+ names_for_globals = {} + annotations = {"return": None} + + for a in attrs: + if a.validator: + attrs_to_validate.append(a) + + attr_name = a.name + has_on_setattr = a.on_setattr is not None or ( + a.on_setattr is not setters.NO_OP and has_cls_on_setattr + ) + arg_name = a.name.lstrip("_") + + has_factory = isinstance(a.default, Factory) + if has_factory and a.default.takes_self: + maybe_self = "self" + else: + maybe_self = "" + + if a.init is False: + if has_factory: + init_factory_name = _init_factory_pat.format(a.name) + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, + init_factory_name + "(%s)" % (maybe_self,), + has_on_setattr, + ) + ) + conv_name = _init_converter_pat % (a.name,) + names_for_globals[conv_name] = a.converter + else: + lines.append( + fmt_setter( + attr_name, + init_factory_name + "(%s)" % (maybe_self,), + has_on_setattr, + ) + ) + names_for_globals[init_factory_name] = a.default.factory + else: + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, + "attr_dict['%s'].default" % (attr_name,), + has_on_setattr, + ) + ) + conv_name = _init_converter_pat % (a.name,) + names_for_globals[conv_name] = a.converter + else: + lines.append( + fmt_setter( + attr_name, + "attr_dict['%s'].default" % (attr_name,), + has_on_setattr, + ) + ) + elif a.default is not NOTHING and not has_factory: + arg = "%s=attr_dict['%s'].default" % (arg_name, attr_name) + if a.kw_only: + kw_only_args.append(arg) + else: + args.append(arg) + + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, arg_name, has_on_setattr + ) + ) + names_for_globals[ + _init_converter_pat % (a.name,) + ] = a.converter + else: + lines.append(fmt_setter(attr_name, arg_name, has_on_setattr)) + + elif has_factory: + arg = "%s=NOTHING" % (arg_name,) + if a.kw_only: + kw_only_args.append(arg) + else: + args.append(arg) + lines.append("if %s is not NOTHING:" % (arg_name,)) + + init_factory_name = _init_factory_pat.format(a.name) + if a.converter is not None: + lines.append( + " " + + fmt_setter_with_converter( + attr_name, arg_name, has_on_setattr + ) + ) + lines.append("else:") + lines.append( + " " + + fmt_setter_with_converter( + attr_name, + init_factory_name + "(" + maybe_self + ")", + has_on_setattr, + ) + ) + names_for_globals[ + _init_converter_pat % (a.name,) + ] = a.converter + else: + lines.append( + " " + fmt_setter(attr_name, arg_name, has_on_setattr) + ) + lines.append("else:") + lines.append( + " " + + fmt_setter( + attr_name, + init_factory_name + "(" + maybe_self + ")", + has_on_setattr, + ) + ) + names_for_globals[init_factory_name] = a.default.factory + else: + if a.kw_only: + kw_only_args.append(arg_name) + else: + args.append(arg_name) + + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, arg_name, has_on_setattr + ) + ) + names_for_globals[ + _init_converter_pat % (a.name,) + ] = a.converter + else: + lines.append(fmt_setter(attr_name, arg_name, has_on_setattr)) + + if a.init is True: + if a.type is not None and a.converter is None: + annotations[arg_name] = a.type + elif a.converter is not None: + # Try to get the type from the converter. + t = _AnnotationExtractor(a.converter).get_first_param_type() + if t: + annotations[arg_name] = t + + if attrs_to_validate: # we can skip this if there are no validators. 
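+        # The generated __init__ gates validation on the global switch, e.g.:
+        #
+        #     if _config._run_validators is True:
+        #         __attr_validator_x(self, __attr_x, self.x)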
+ names_for_globals["_config"] = _config + lines.append("if _config._run_validators is True:") + for a in attrs_to_validate: + val_name = "__attr_validator_" + a.name + attr_name = "__attr_" + a.name + lines.append( + " %s(self, %s, self.%s)" % (val_name, attr_name, a.name) + ) + names_for_globals[val_name] = a.validator + names_for_globals[attr_name] = a + + if post_init: + lines.append("self.__attrs_post_init__()") + + # because this is set only after __attrs_post_init__ is called, a crash + # will result if post-init tries to access the hash code. This seemed + # preferable to setting this beforehand, in which case alteration to + # field values during post-init combined with post-init accessing the + # hash code would result in silent bugs. + if cache_hash: + if frozen: + if slots: + # if frozen and slots, then _setattr defined above + init_hash_cache = "_setattr(self, '%s', %s)" + else: + # if frozen and not slots, then _inst_dict defined above + init_hash_cache = "_inst_dict['%s'] = %s" + else: + init_hash_cache = "self.%s = %s" + lines.append(init_hash_cache % (_hash_cache_field, "None")) + + # For exceptions we rely on BaseException.__init__ for proper + # initialization. + if is_exc: + vals = ",".join("self." + a.name for a in attrs if a.init) + + lines.append("BaseException.__init__(self, %s)" % (vals,)) + + args = ", ".join(args) + if kw_only_args: + args += "%s*, %s" % ( + ", " if args else "", # leading comma + ", ".join(kw_only_args), # kw_only args + ) + return ( + """\ +def {init_name}(self, {args}): + {lines} +""".format( + init_name=("__attrs_init__" if attrs_init else "__init__"), + args=args, + lines="\n ".join(lines) if lines else "pass", + ), + names_for_globals, + annotations, + ) + + +class Attribute: + """ + *Read-only* representation of an attribute. + + The class has *all* arguments of `attr.ib` (except for ``factory`` + which is only syntactic sugar for ``default=Factory(...)`` plus the + following: + + - ``name`` (`str`): The name of the attribute. + - ``inherited`` (`bool`): Whether or not that attribute has been inherited + from a base class. + - ``eq_key`` and ``order_key`` (`typing.Callable` or `None`): The callables + that are used for comparing and ordering objects by this attribute, + respectively. These are set by passing a callable to `attr.ib`'s ``eq``, + ``order``, or ``cmp`` arguments. See also :ref:`comparison customization + `. + + Instances of this class are frequently used for introspection purposes + like: + + - `fields` returns a tuple of them. + - Validators get them passed as the first argument. + - The :ref:`field transformer ` hook receives a list of + them. + + .. versionadded:: 20.1.0 *inherited* + .. versionadded:: 20.1.0 *on_setattr* + .. versionchanged:: 20.2.0 *inherited* is not taken into account for + equality checks and hashing anymore. + .. versionadded:: 21.1.0 *eq_key* and *order_key* + + For the full version history of the fields, see `attr.ib`. + """ + + __slots__ = ( + "name", + "default", + "validator", + "repr", + "eq", + "eq_key", + "order", + "order_key", + "hash", + "init", + "metadata", + "type", + "converter", + "kw_only", + "inherited", + "on_setattr", + ) + + def __init__( + self, + name, + default, + validator, + repr, + cmp, # XXX: unused, remove along with other cmp code. 
+ hash, + init, + inherited, + metadata=None, + type=None, + converter=None, + kw_only=False, + eq=None, + eq_key=None, + order=None, + order_key=None, + on_setattr=None, + ): + eq, eq_key, order, order_key = _determine_attrib_eq_order( + cmp, eq_key or eq, order_key or order, True + ) + + # Cache this descriptor here to speed things up later. + bound_setattr = _obj_setattr.__get__(self, Attribute) + + # Despite the big red warning, people *do* instantiate `Attribute` + # themselves. + bound_setattr("name", name) + bound_setattr("default", default) + bound_setattr("validator", validator) + bound_setattr("repr", repr) + bound_setattr("eq", eq) + bound_setattr("eq_key", eq_key) + bound_setattr("order", order) + bound_setattr("order_key", order_key) + bound_setattr("hash", hash) + bound_setattr("init", init) + bound_setattr("converter", converter) + bound_setattr( + "metadata", + ( + types.MappingProxyType(dict(metadata)) # Shallow copy + if metadata + else _empty_metadata_singleton + ), + ) + bound_setattr("type", type) + bound_setattr("kw_only", kw_only) + bound_setattr("inherited", inherited) + bound_setattr("on_setattr", on_setattr) + + def __setattr__(self, name, value): + raise FrozenInstanceError() + + @classmethod + def from_counting_attr(cls, name, ca, type=None): + # type holds the annotated value. deal with conflicts: + if type is None: + type = ca.type + elif ca.type is not None: + raise ValueError( + "Type annotation and type argument cannot both be present" + ) + inst_dict = { + k: getattr(ca, k) + for k in Attribute.__slots__ + if k + not in ( + "name", + "validator", + "default", + "type", + "inherited", + ) # exclude methods and deprecated alias + } + return cls( + name=name, + validator=ca._validator, + default=ca._default, + type=type, + cmp=None, + inherited=False, + **inst_dict + ) + + # Don't use attr.evolve since fields(Attribute) doesn't work + def evolve(self, **changes): + """ + Copy *self* and apply *changes*. + + This works similarly to `attr.evolve` but that function does not work + with ``Attribute``. + + It is mainly meant to be used for `transform-fields`. + + .. versionadded:: 20.3.0 + """ + new = copy.copy(self) + + new._setattrs(changes.items()) + + return new + + # Don't use _add_pickle since fields(Attribute) doesn't work + def __getstate__(self): + """ + Play nice with pickle. + """ + return tuple( + getattr(self, name) if name != "metadata" else dict(self.metadata) + for name in self.__slots__ + ) + + def __setstate__(self, state): + """ + Play nice with pickle. + """ + self._setattrs(zip(self.__slots__, state)) + + def _setattrs(self, name_values_pairs): + bound_setattr = _obj_setattr.__get__(self, Attribute) + for name, value in name_values_pairs: + if name != "metadata": + bound_setattr(name, value) + else: + bound_setattr( + name, + types.MappingProxyType(dict(value)) + if value + else _empty_metadata_singleton, + ) + + +_a = [ + Attribute( + name=name, + default=NOTHING, + validator=None, + repr=True, + cmp=None, + eq=True, + order=False, + hash=(name != "metadata"), + init=True, + inherited=False, + ) + for name in Attribute.__slots__ +] + +Attribute = _add_hash( + _add_eq( + _add_repr(Attribute, attrs=_a), + attrs=[a for a in _a if a.name != "inherited"], + ), + attrs=[a for a in _a if a.hash and a.name != "inherited"], +) + + +class _CountingAttr: + """ + Intermediate representation of attributes that uses a counter to preserve + the order in which the attributes have been defined. + + *Internal* data structure of the attrs library. 
Running into is most + likely the result of a bug like a forgotten `@attr.s` decorator. + """ + + __slots__ = ( + "counter", + "_default", + "repr", + "eq", + "eq_key", + "order", + "order_key", + "hash", + "init", + "metadata", + "_validator", + "converter", + "type", + "kw_only", + "on_setattr", + ) + __attrs_attrs__ = tuple( + Attribute( + name=name, + default=NOTHING, + validator=None, + repr=True, + cmp=None, + hash=True, + init=True, + kw_only=False, + eq=True, + eq_key=None, + order=False, + order_key=None, + inherited=False, + on_setattr=None, + ) + for name in ( + "counter", + "_default", + "repr", + "eq", + "order", + "hash", + "init", + "on_setattr", + ) + ) + ( + Attribute( + name="metadata", + default=None, + validator=None, + repr=True, + cmp=None, + hash=False, + init=True, + kw_only=False, + eq=True, + eq_key=None, + order=False, + order_key=None, + inherited=False, + on_setattr=None, + ), + ) + cls_counter = 0 + + def __init__( + self, + default, + validator, + repr, + cmp, + hash, + init, + converter, + metadata, + type, + kw_only, + eq, + eq_key, + order, + order_key, + on_setattr, + ): + _CountingAttr.cls_counter += 1 + self.counter = _CountingAttr.cls_counter + self._default = default + self._validator = validator + self.converter = converter + self.repr = repr + self.eq = eq + self.eq_key = eq_key + self.order = order + self.order_key = order_key + self.hash = hash + self.init = init + self.metadata = metadata + self.type = type + self.kw_only = kw_only + self.on_setattr = on_setattr + + def validator(self, meth): + """ + Decorator that adds *meth* to the list of validators. + + Returns *meth* unchanged. + + .. versionadded:: 17.1.0 + """ + if self._validator is None: + self._validator = meth + else: + self._validator = and_(self._validator, meth) + return meth + + def default(self, meth): + """ + Decorator that allows to set the default for an attribute. + + Returns *meth* unchanged. + + :raises DefaultAlreadySetError: If default has been set before. + + .. versionadded:: 17.1.0 + """ + if self._default is not NOTHING: + raise DefaultAlreadySetError() + + self._default = Factory(meth, takes_self=True) + + return meth + + +_CountingAttr = _add_eq(_add_repr(_CountingAttr)) + + +class Factory: + """ + Stores a factory callable. + + If passed as the default value to `attrs.field`, the factory is used to + generate a new value. + + :param callable factory: A callable that takes either none or exactly one + mandatory positional argument depending on *takes_self*. + :param bool takes_self: Pass the partially initialized instance that is + being initialized as a positional argument. + + .. versionadded:: 17.1.0 *takes_self* + """ + + __slots__ = ("factory", "takes_self") + + def __init__(self, factory, takes_self=False): + """ + `Factory` is part of the default machinery so if we want a default + value here, we have to implement it ourselves. + """ + self.factory = factory + self.takes_self = takes_self + + def __getstate__(self): + """ + Play nice with pickle. + """ + return tuple(getattr(self, name) for name in self.__slots__) + + def __setstate__(self, state): + """ + Play nice with pickle. 
+ """ + for name, value in zip(self.__slots__, state): + setattr(self, name, value) + + +_f = [ + Attribute( + name=name, + default=NOTHING, + validator=None, + repr=True, + cmp=None, + eq=True, + order=False, + hash=True, + init=True, + inherited=False, + ) + for name in Factory.__slots__ +] + +Factory = _add_hash(_add_eq(_add_repr(Factory, attrs=_f), attrs=_f), attrs=_f) + + +def make_class(name, attrs, bases=(object,), **attributes_arguments): + """ + A quick way to create a new class called *name* with *attrs*. + + :param str name: The name for the new class. + + :param attrs: A list of names or a dictionary of mappings of names to + attributes. + + If *attrs* is a list or an ordered dict (`dict` on Python 3.6+, + `collections.OrderedDict` otherwise), the order is deduced from + the order of the names or attributes inside *attrs*. Otherwise the + order of the definition of the attributes is used. + :type attrs: `list` or `dict` + + :param tuple bases: Classes that the new class will subclass. + + :param attributes_arguments: Passed unmodified to `attr.s`. + + :return: A new class with *attrs*. + :rtype: type + + .. versionadded:: 17.1.0 *bases* + .. versionchanged:: 18.1.0 If *attrs* is ordered, the order is retained. + """ + if isinstance(attrs, dict): + cls_dict = attrs + elif isinstance(attrs, (list, tuple)): + cls_dict = {a: attrib() for a in attrs} + else: + raise TypeError("attrs argument must be a dict or a list.") + + pre_init = cls_dict.pop("__attrs_pre_init__", None) + post_init = cls_dict.pop("__attrs_post_init__", None) + user_init = cls_dict.pop("__init__", None) + + body = {} + if pre_init is not None: + body["__attrs_pre_init__"] = pre_init + if post_init is not None: + body["__attrs_post_init__"] = post_init + if user_init is not None: + body["__init__"] = user_init + + type_ = types.new_class(name, bases, {}, lambda ns: ns.update(body)) + + # For pickling to work, the __module__ variable needs to be set to the + # frame where the class is created. Bypass this step in environments where + # sys._getframe is not defined (Jython for example) or sys._getframe is not + # defined for arguments greater than 0 (IronPython). + try: + type_.__module__ = sys._getframe(1).f_globals.get( + "__name__", "__main__" + ) + except (AttributeError, ValueError): + pass + + # We do it here for proper warnings with meaningful stacklevel. + cmp = attributes_arguments.pop("cmp", None) + ( + attributes_arguments["eq"], + attributes_arguments["order"], + ) = _determine_attrs_eq_order( + cmp, + attributes_arguments.get("eq"), + attributes_arguments.get("order"), + True, + ) + + return _attrs(these=cls_dict, **attributes_arguments)(type_) + + +# These are required by within this module so we define them here and merely +# import into .validators / .converters. + + +@attrs(slots=True, hash=True) +class _AndValidator: + """ + Compose many validators to a single one. + """ + + _validators = attrib() + + def __call__(self, inst, attr, value): + for v in self._validators: + v(inst, attr, value) + + +def and_(*validators): + """ + A validator that composes multiple validators into one. + + When called on a value, it runs all wrapped validators. + + :param callables validators: Arbitrary number of validators. + + .. 
versionadded:: 17.1.0 + """ + vals = [] + for validator in validators: + vals.extend( + validator._validators + if isinstance(validator, _AndValidator) + else [validator] + ) + + return _AndValidator(tuple(vals)) + + +def pipe(*converters): + """ + A converter that composes multiple converters into one. + + When called on a value, it runs all wrapped converters, returning the + *last* value. + + Type annotations will be inferred from the wrapped converters', if + they have any. + + :param callables converters: Arbitrary number of converters. + + .. versionadded:: 20.1.0 + """ + + def pipe_converter(val): + for converter in converters: + val = converter(val) + + return val + + if not converters: + # If the converter list is empty, pipe_converter is the identity. + A = typing.TypeVar("A") + pipe_converter.__annotations__ = {"val": A, "return": A} + else: + # Get parameter type from first converter. + t = _AnnotationExtractor(converters[0]).get_first_param_type() + if t: + pipe_converter.__annotations__["val"] = t + + # Get return type from last converter. + rt = _AnnotationExtractor(converters[-1]).get_return_type() + if rt: + pipe_converter.__annotations__["return"] = rt + + return pipe_converter diff --git a/env/lib/python3.8/site-packages/attr/_next_gen.py b/env/lib/python3.8/site-packages/attr/_next_gen.py new file mode 100644 index 00000000..5a06a743 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/_next_gen.py @@ -0,0 +1,220 @@ +# SPDX-License-Identifier: MIT + +""" +These are Python 3.6+-only and keyword-only APIs that call `attr.s` and +`attr.ib` with different default values. +""" + + +from functools import partial + +from . import setters +from ._funcs import asdict as _asdict +from ._funcs import astuple as _astuple +from ._make import ( + NOTHING, + _frozen_setattrs, + _ng_default_on_setattr, + attrib, + attrs, +) +from .exceptions import UnannotatedAttributeError + + +def define( + maybe_cls=None, + *, + these=None, + repr=None, + hash=None, + init=None, + slots=True, + frozen=False, + weakref_slot=True, + str=False, + auto_attribs=None, + kw_only=False, + cache_hash=False, + auto_exc=True, + eq=None, + order=False, + auto_detect=True, + getstate_setstate=None, + on_setattr=None, + field_transformer=None, + match_args=True, +): + r""" + Define an ``attrs`` class. + + Differences to the classic `attr.s` that it uses underneath: + + - Automatically detect whether or not *auto_attribs* should be `True` (c.f. + *auto_attribs* parameter). + - If *frozen* is `False`, run converters and validators when setting an + attribute by default. + - *slots=True* + + .. caution:: + + Usually this has only upsides and few visible effects in everyday + programming. But it *can* lead to some suprising behaviors, so please + make sure to read :term:`slotted classes`. + - *auto_exc=True* + - *auto_detect=True* + - *order=False* + - Some options that were only relevant on Python 2 or were kept around for + backwards-compatibility have been removed. + + Please note that these are all defaults and you can change them as you + wish. + + :param Optional[bool] auto_attribs: If set to `True` or `False`, it behaves + exactly like `attr.s`. If left `None`, `attr.s` will try to guess: + + 1. If any attributes are annotated and no unannotated `attrs.fields`\ s + are found, it assumes *auto_attribs=True*. + 2. Otherwise it assumes *auto_attribs=False* and tries to collect + `attrs.fields`\ s. + + For now, please refer to `attr.s` for the rest of the parameters. + + .. versionadded:: 20.1.0 + .. 
versionchanged:: 21.3.0 Converters are also run ``on_setattr``. + """ + + def do_it(cls, auto_attribs): + return attrs( + maybe_cls=cls, + these=these, + repr=repr, + hash=hash, + init=init, + slots=slots, + frozen=frozen, + weakref_slot=weakref_slot, + str=str, + auto_attribs=auto_attribs, + kw_only=kw_only, + cache_hash=cache_hash, + auto_exc=auto_exc, + eq=eq, + order=order, + auto_detect=auto_detect, + collect_by_mro=True, + getstate_setstate=getstate_setstate, + on_setattr=on_setattr, + field_transformer=field_transformer, + match_args=match_args, + ) + + def wrap(cls): + """ + Making this a wrapper ensures this code runs during class creation. + + We also ensure that frozen-ness of classes is inherited. + """ + nonlocal frozen, on_setattr + + had_on_setattr = on_setattr not in (None, setters.NO_OP) + + # By default, mutable classes convert & validate on setattr. + if frozen is False and on_setattr is None: + on_setattr = _ng_default_on_setattr + + # However, if we subclass a frozen class, we inherit the immutability + # and disable on_setattr. + for base_cls in cls.__bases__: + if base_cls.__setattr__ is _frozen_setattrs: + if had_on_setattr: + raise ValueError( + "Frozen classes can't use on_setattr " + "(frozen-ness was inherited)." + ) + + on_setattr = setters.NO_OP + break + + if auto_attribs is not None: + return do_it(cls, auto_attribs) + + try: + return do_it(cls, True) + except UnannotatedAttributeError: + return do_it(cls, False) + + # maybe_cls's type depends on the usage of the decorator. It's a class + # if it's used as `@attrs` but ``None`` if used as `@attrs()`. + if maybe_cls is None: + return wrap + else: + return wrap(maybe_cls) + + +mutable = define +frozen = partial(define, frozen=True, on_setattr=None) + + +def field( + *, + default=NOTHING, + validator=None, + repr=True, + hash=None, + init=True, + metadata=None, + converter=None, + factory=None, + kw_only=False, + eq=None, + order=None, + on_setattr=None, +): + """ + Identical to `attr.ib`, except keyword-only and with some arguments + removed. + + .. versionadded:: 20.1.0 + """ + return attrib( + default=default, + validator=validator, + repr=repr, + hash=hash, + init=init, + metadata=metadata, + converter=converter, + factory=factory, + kw_only=kw_only, + eq=eq, + order=order, + on_setattr=on_setattr, + ) + + +def asdict(inst, *, recurse=True, filter=None, value_serializer=None): + """ + Same as `attr.asdict`, except that collections types are always retained + and dict is always used as *dict_factory*. + + .. versionadded:: 21.3.0 + """ + return _asdict( + inst=inst, + recurse=recurse, + filter=filter, + value_serializer=value_serializer, + retain_collection_types=True, + ) + + +def astuple(inst, *, recurse=True, filter=None): + """ + Same as `attr.astuple`, except that collections types are always retained + and `tuple` is always used as the *tuple_factory*. + + .. 
versionadded:: 21.3.0 + """ + return _astuple( + inst=inst, recurse=recurse, filter=filter, retain_collection_types=True + ) diff --git a/env/lib/python3.8/site-packages/attr/_version_info.py b/env/lib/python3.8/site-packages/attr/_version_info.py new file mode 100644 index 00000000..51a1312f --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/_version_info.py @@ -0,0 +1,86 @@ +# SPDX-License-Identifier: MIT + + +from functools import total_ordering + +from ._funcs import astuple +from ._make import attrib, attrs + + +@total_ordering +@attrs(eq=False, order=False, slots=True, frozen=True) +class VersionInfo: + """ + A version object that can be compared to tuple of length 1--4: + + >>> attr.VersionInfo(19, 1, 0, "final") <= (19, 2) + True + >>> attr.VersionInfo(19, 1, 0, "final") < (19, 1, 1) + True + >>> vi = attr.VersionInfo(19, 2, 0, "final") + >>> vi < (19, 1, 1) + False + >>> vi < (19,) + False + >>> vi == (19, 2,) + True + >>> vi == (19, 2, 1) + False + + .. versionadded:: 19.2 + """ + + year = attrib(type=int) + minor = attrib(type=int) + micro = attrib(type=int) + releaselevel = attrib(type=str) + + @classmethod + def _from_version_string(cls, s): + """ + Parse *s* and return a _VersionInfo. + """ + v = s.split(".") + if len(v) == 3: + v.append("final") + + return cls( + year=int(v[0]), minor=int(v[1]), micro=int(v[2]), releaselevel=v[3] + ) + + def _ensure_tuple(self, other): + """ + Ensure *other* is a tuple of a valid length. + + Returns a possibly transformed *other* and ourselves as a tuple of + the same length as *other*. + """ + + if self.__class__ is other.__class__: + other = astuple(other) + + if not isinstance(other, tuple): + raise NotImplementedError + + if not (1 <= len(other) <= 4): + raise NotImplementedError + + return astuple(self)[: len(other)], other + + def __eq__(self, other): + try: + us, them = self._ensure_tuple(other) + except NotImplementedError: + return NotImplemented + + return us == them + + def __lt__(self, other): + try: + us, them = self._ensure_tuple(other) + except NotImplementedError: + return NotImplemented + + # Since alphabetically "dev0" < "final" < "post1" < "post2", we don't + # have to do anything special with releaselevel for now. + return us < them diff --git a/env/lib/python3.8/site-packages/attr/_version_info.pyi b/env/lib/python3.8/site-packages/attr/_version_info.pyi new file mode 100644 index 00000000..45ced086 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/_version_info.pyi @@ -0,0 +1,9 @@ +class VersionInfo: + @property + def year(self) -> int: ... + @property + def minor(self) -> int: ... + @property + def micro(self) -> int: ... + @property + def releaselevel(self) -> str: ... diff --git a/env/lib/python3.8/site-packages/attr/converters.py b/env/lib/python3.8/site-packages/attr/converters.py new file mode 100644 index 00000000..a73626c2 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/converters.py @@ -0,0 +1,144 @@ +# SPDX-License-Identifier: MIT + +""" +Commonly useful converters. +""" + + +import typing + +from ._compat import _AnnotationExtractor +from ._make import NOTHING, Factory, pipe + + +__all__ = [ + "default_if_none", + "optional", + "pipe", + "to_bool", +] + + +def optional(converter): + """ + A converter that allows an attribute to be optional. An optional attribute + is one which can be set to ``None``. + + Type annotations will be inferred from the wrapped converter's, if it + has any. + + :param callable converter: the converter that is used for non-``None`` + values. + + .. 
versionadded:: 17.1.0 + """ + + def optional_converter(val): + if val is None: + return None + return converter(val) + + xtr = _AnnotationExtractor(converter) + + t = xtr.get_first_param_type() + if t: + optional_converter.__annotations__["val"] = typing.Optional[t] + + rt = xtr.get_return_type() + if rt: + optional_converter.__annotations__["return"] = typing.Optional[rt] + + return optional_converter + + +def default_if_none(default=NOTHING, factory=None): + """ + A converter that allows to replace ``None`` values by *default* or the + result of *factory*. + + :param default: Value to be used if ``None`` is passed. Passing an instance + of `attrs.Factory` is supported, however the ``takes_self`` option + is *not*. + :param callable factory: A callable that takes no parameters whose result + is used if ``None`` is passed. + + :raises TypeError: If **neither** *default* or *factory* is passed. + :raises TypeError: If **both** *default* and *factory* are passed. + :raises ValueError: If an instance of `attrs.Factory` is passed with + ``takes_self=True``. + + .. versionadded:: 18.2.0 + """ + if default is NOTHING and factory is None: + raise TypeError("Must pass either `default` or `factory`.") + + if default is not NOTHING and factory is not None: + raise TypeError( + "Must pass either `default` or `factory` but not both." + ) + + if factory is not None: + default = Factory(factory) + + if isinstance(default, Factory): + if default.takes_self: + raise ValueError( + "`takes_self` is not supported by default_if_none." + ) + + def default_if_none_converter(val): + if val is not None: + return val + + return default.factory() + + else: + + def default_if_none_converter(val): + if val is not None: + return val + + return default + + return default_if_none_converter + + +def to_bool(val): + """ + Convert "boolean" strings (e.g., from env. vars.) to real booleans. + + Values mapping to :code:`True`: + + - :code:`True` + - :code:`"true"` / :code:`"t"` + - :code:`"yes"` / :code:`"y"` + - :code:`"on"` + - :code:`"1"` + - :code:`1` + + Values mapping to :code:`False`: + + - :code:`False` + - :code:`"false"` / :code:`"f"` + - :code:`"no"` / :code:`"n"` + - :code:`"off"` + - :code:`"0"` + - :code:`0` + + :raises ValueError: for any other value. + + .. versionadded:: 21.3.0 + """ + if isinstance(val, str): + val = val.lower() + truthy = {True, "true", "t", "yes", "y", "on", "1", 1} + falsy = {False, "false", "f", "no", "n", "off", "0", 0} + try: + if val in truthy: + return True + if val in falsy: + return False + except TypeError: + # Raised when "val" is not hashable (e.g., lists) + pass + raise ValueError("Cannot convert value to bool: {}".format(val)) diff --git a/env/lib/python3.8/site-packages/attr/converters.pyi b/env/lib/python3.8/site-packages/attr/converters.pyi new file mode 100644 index 00000000..0f58088a --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/converters.pyi @@ -0,0 +1,13 @@ +from typing import Callable, Optional, TypeVar, overload + +from . import _ConverterType + +_T = TypeVar("_T") + +def pipe(*validators: _ConverterType) -> _ConverterType: ... +def optional(converter: _ConverterType) -> _ConverterType: ... +@overload +def default_if_none(default: _T) -> _ConverterType: ... +@overload +def default_if_none(*, factory: Callable[[], _T]) -> _ConverterType: ... +def to_bool(val: str) -> bool: ... 
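A quick usage sketch of the converters above (illustrative only; the `Config` class and its field names are invented, not part of the vendored files):

```python
import attr
from attr import converters

@attr.s
class Config:
    # optional(int): None passes through untouched, anything else through int().
    port = attr.ib(converter=converters.optional(int), default=None)
    # default_if_none: replace a None argument with a fresh list.
    tags = attr.ib(converter=converters.default_if_none(factory=list), default=None)
    # to_bool: map "yes"/"no", "on"/"off", "1"/"0", etc. to real booleans.
    debug = attr.ib(converter=converters.to_bool, default=False)

print(Config(port="8080", debug="yes"))
# Config(port=8080, tags=[], debug=True)
```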
diff --git a/env/lib/python3.8/site-packages/attr/exceptions.py b/env/lib/python3.8/site-packages/attr/exceptions.py new file mode 100644 index 00000000..5dc51e0a --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/exceptions.py @@ -0,0 +1,92 @@ +# SPDX-License-Identifier: MIT + + +class FrozenError(AttributeError): + """ + A frozen/immutable instance or attribute have been attempted to be + modified. + + It mirrors the behavior of ``namedtuples`` by using the same error message + and subclassing `AttributeError`. + + .. versionadded:: 20.1.0 + """ + + msg = "can't set attribute" + args = [msg] + + +class FrozenInstanceError(FrozenError): + """ + A frozen instance has been attempted to be modified. + + .. versionadded:: 16.1.0 + """ + + +class FrozenAttributeError(FrozenError): + """ + A frozen attribute has been attempted to be modified. + + .. versionadded:: 20.1.0 + """ + + +class AttrsAttributeNotFoundError(ValueError): + """ + An ``attrs`` function couldn't find an attribute that the user asked for. + + .. versionadded:: 16.2.0 + """ + + +class NotAnAttrsClassError(ValueError): + """ + A non-``attrs`` class has been passed into an ``attrs`` function. + + .. versionadded:: 16.2.0 + """ + + +class DefaultAlreadySetError(RuntimeError): + """ + A default has been set using ``attr.ib()`` and is attempted to be reset + using the decorator. + + .. versionadded:: 17.1.0 + """ + + +class UnannotatedAttributeError(RuntimeError): + """ + A class with ``auto_attribs=True`` has an ``attr.ib()`` without a type + annotation. + + .. versionadded:: 17.3.0 + """ + + +class PythonTooOldError(RuntimeError): + """ + It was attempted to use an ``attrs`` feature that requires a newer Python + version. + + .. versionadded:: 18.2.0 + """ + + +class NotCallableError(TypeError): + """ + A ``attr.ib()`` requiring a callable has been set with a value + that is not callable. + + .. versionadded:: 19.2.0 + """ + + def __init__(self, msg, value): + super(TypeError, self).__init__(msg, value) + self.msg = msg + self.value = value + + def __str__(self): + return str(self.msg) diff --git a/env/lib/python3.8/site-packages/attr/exceptions.pyi b/env/lib/python3.8/site-packages/attr/exceptions.pyi new file mode 100644 index 00000000..f2680118 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/exceptions.pyi @@ -0,0 +1,17 @@ +from typing import Any + +class FrozenError(AttributeError): + msg: str = ... + +class FrozenInstanceError(FrozenError): ... +class FrozenAttributeError(FrozenError): ... +class AttrsAttributeNotFoundError(ValueError): ... +class NotAnAttrsClassError(ValueError): ... +class DefaultAlreadySetError(RuntimeError): ... +class UnannotatedAttributeError(RuntimeError): ... +class PythonTooOldError(RuntimeError): ... + +class NotCallableError(TypeError): + msg: str = ... + value: Any = ... + def __init__(self, msg: str, value: Any) -> None: ... diff --git a/env/lib/python3.8/site-packages/attr/filters.py b/env/lib/python3.8/site-packages/attr/filters.py new file mode 100644 index 00000000..baa25e94 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/filters.py @@ -0,0 +1,51 @@ +# SPDX-License-Identifier: MIT + +""" +Commonly useful filters for `attr.asdict`. +""" + +from ._make import Attribute + + +def _split_what(what): + """ + Returns a tuple of `frozenset`s of classes and attributes. + """ + return ( + frozenset(cls for cls in what if isinstance(cls, type)), + frozenset(cls for cls in what if isinstance(cls, Attribute)), + ) + + +def include(*what): + """ + Include *what*. 
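+
+    For example (illustrative), as a whitelist filter for ``attr.asdict``::
+
+        >>> @attr.s
+        ... class C:
+        ...     x = attr.ib()
+        ...     y = attr.ib()
+        >>> attr.asdict(C(1, "2"), filter=include(int))
+        {'x': 1}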
+ + :param what: What to include. + :type what: `list` of `type` or `attrs.Attribute`\\ s + + :rtype: `callable` + """ + cls, attrs = _split_what(what) + + def include_(attribute, value): + return value.__class__ in cls or attribute in attrs + + return include_ + + +def exclude(*what): + """ + Exclude *what*. + + :param what: What to exclude. + :type what: `list` of classes or `attrs.Attribute`\\ s. + + :rtype: `callable` + """ + cls, attrs = _split_what(what) + + def exclude_(attribute, value): + return value.__class__ not in cls and attribute not in attrs + + return exclude_ diff --git a/env/lib/python3.8/site-packages/attr/filters.pyi b/env/lib/python3.8/site-packages/attr/filters.pyi new file mode 100644 index 00000000..99386686 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/filters.pyi @@ -0,0 +1,6 @@ +from typing import Any, Union + +from . import Attribute, _FilterType + +def include(*what: Union[type, Attribute[Any]]) -> _FilterType[Any]: ... +def exclude(*what: Union[type, Attribute[Any]]) -> _FilterType[Any]: ... diff --git a/env/lib/python3.8/site-packages/attr/py.typed b/env/lib/python3.8/site-packages/attr/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/attr/setters.py b/env/lib/python3.8/site-packages/attr/setters.py new file mode 100644 index 00000000..12ed6750 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/setters.py @@ -0,0 +1,73 @@ +# SPDX-License-Identifier: MIT + +""" +Commonly used hooks for on_setattr. +""" + + +from . import _config +from .exceptions import FrozenAttributeError + + +def pipe(*setters): + """ + Run all *setters* and return the return value of the last one. + + .. versionadded:: 20.1.0 + """ + + def wrapped_pipe(instance, attrib, new_value): + rv = new_value + + for setter in setters: + rv = setter(instance, attrib, rv) + + return rv + + return wrapped_pipe + + +def frozen(_, __, ___): + """ + Prevent an attribute from being modified. + + .. versionadded:: 20.1.0 + """ + raise FrozenAttributeError() + + +def validate(instance, attrib, new_value): + """ + Run *attrib*'s validator on *new_value* if it has one. + + .. versionadded:: 20.1.0 + """ + if _config._run_validators is False: + return new_value + + v = attrib.validator + if not v: + return new_value + + v(instance, attrib, new_value) + + return new_value + + +def convert(instance, attrib, new_value): + """ + Run *attrib*'s converter -- if it has one -- on *new_value* and return the + result. + + .. versionadded:: 20.1.0 + """ + c = attrib.converter + if c: + return c(new_value) + + return new_value + + +# Sentinel for disabling class-wide *on_setattr* hooks for certain attributes. +# autodata stopped working, so the docstring is inlined in the API docs. +NO_OP = object() diff --git a/env/lib/python3.8/site-packages/attr/setters.pyi b/env/lib/python3.8/site-packages/attr/setters.pyi new file mode 100644 index 00000000..3f5603c2 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/setters.pyi @@ -0,0 +1,19 @@ +from typing import Any, NewType, NoReturn, TypeVar, cast + +from . import Attribute, _OnSetAttrType + +_T = TypeVar("_T") + +def frozen( + instance: Any, attribute: Attribute[Any], new_value: Any +) -> NoReturn: ... +def pipe(*setters: _OnSetAttrType) -> _OnSetAttrType: ... +def validate(instance: Any, attribute: Attribute[_T], new_value: _T) -> _T: ... + +# convert is allowed to return Any, because converters can be chained using pipe. +def convert( + instance: Any, attribute: Attribute[Any], new_value: Any +) -> Any: ...
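# Usage sketch for the filters and setters modules above (class and field
# names are illustrative; behavior assumes the attrs 22.1.0 vendored here):
#
#     import attr
#     from attr import asdict, filters, setters, validators
#
#     @attr.s(on_setattr=setters.pipe(setters.convert, setters.validate))
#     class User:
#         name = attr.ib(converter=str, validator=validators.instance_of(str))
#         token = attr.ib(default="")
#
#     u = User(name="ada")
#     u.name = 42            # setters.convert runs str(42) before validation
#     assert u.name == "42"
#     # filters.exclude() drops the token field when serializing:
#     assert asdict(u, filter=filters.exclude(attr.fields(User).token)) == {"name": "ada"}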
+ +_NoOpType = NewType("_NoOpType", object) +NO_OP: _NoOpType diff --git a/env/lib/python3.8/site-packages/attr/validators.py b/env/lib/python3.8/site-packages/attr/validators.py new file mode 100644 index 00000000..eece517d --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/validators.py @@ -0,0 +1,594 @@ +# SPDX-License-Identifier: MIT + +""" +Commonly useful validators. +""" + + +import operator +import re + +from contextlib import contextmanager + +from ._config import get_run_validators, set_run_validators +from ._make import _AndValidator, and_, attrib, attrs +from .exceptions import NotCallableError + + +try: + Pattern = re.Pattern +except AttributeError: # Python <3.7 lacks a Pattern type. + Pattern = type(re.compile("")) + + +__all__ = [ + "and_", + "deep_iterable", + "deep_mapping", + "disabled", + "ge", + "get_disabled", + "gt", + "in_", + "instance_of", + "is_callable", + "le", + "lt", + "matches_re", + "max_len", + "min_len", + "optional", + "provides", + "set_disabled", +] + + +def set_disabled(disabled): + """ + Globally disable or enable running validators. + + By default, they are run. + + :param disabled: If ``True``, disable running all validators. + :type disabled: bool + + .. warning:: + + This function is not thread-safe! + + .. versionadded:: 21.3.0 + """ + set_run_validators(not disabled) + + +def get_disabled(): + """ + Return a bool indicating whether validators are currently disabled or not. + + :return: ``True`` if validators are currently disabled. + :rtype: bool + + .. versionadded:: 21.3.0 + """ + return not get_run_validators() + + +@contextmanager +def disabled(): + """ + Context manager that disables running validators within its context. + + .. warning:: + + This context manager is not thread-safe! + + .. versionadded:: 21.3.0 + """ + set_run_validators(False) + try: + yield + finally: + set_run_validators(True) + + +@attrs(repr=False, slots=True, hash=True) +class _InstanceOfValidator: + type = attrib() + + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``. + """ + if not isinstance(value, self.type): + raise TypeError( + "'{name}' must be {type!r} (got {value!r} that is a " + "{actual!r}).".format( + name=attr.name, + type=self.type, + actual=value.__class__, + value=value, + ), + attr, + self.type, + value, + ) + + def __repr__(self): + return "<instance_of validator for type {type!r}>".format( + type=self.type + ) + + +def instance_of(type): + """ + A validator that raises a `TypeError` if the initializer is called + with a wrong type for this particular attribute (checks are performed using + `isinstance` therefore it's also valid to pass a tuple of types). + + :param type: The type to check for. + :type type: type or tuple of types + + :raises TypeError: With a human readable error message, the attribute + (of type `attrs.Attribute`), the expected type, and the value it + got. + """ + return _InstanceOfValidator(type) + + +@attrs(repr=False, frozen=True, slots=True) +class _MatchesReValidator: + pattern = attrib() + match_func = attrib() + + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``.
+ """ + if not self.match_func(value): + raise ValueError( + "'{name}' must match regex {pattern!r}" + " ({value!r} doesn't)".format( + name=attr.name, pattern=self.pattern.pattern, value=value + ), + attr, + self.pattern, + value, + ) + + def __repr__(self): + return "".format( + pattern=self.pattern + ) + + +def matches_re(regex, flags=0, func=None): + r""" + A validator that raises `ValueError` if the initializer is called + with a string that doesn't match *regex*. + + :param regex: a regex string or precompiled pattern to match against + :param int flags: flags that will be passed to the underlying re function + (default 0) + :param callable func: which underlying `re` function to call. Valid options + are `re.fullmatch`, `re.search`, and `re.match`; the default ``None`` + means `re.fullmatch`. For performance reasons, the pattern is always + precompiled using `re.compile`. + + .. versionadded:: 19.2.0 + .. versionchanged:: 21.3.0 *regex* can be a pre-compiled pattern. + """ + valid_funcs = (re.fullmatch, None, re.search, re.match) + if func not in valid_funcs: + raise ValueError( + "'func' must be one of {}.".format( + ", ".join( + sorted( + e and e.__name__ or "None" for e in set(valid_funcs) + ) + ) + ) + ) + + if isinstance(regex, Pattern): + if flags: + raise TypeError( + "'flags' can only be used with a string pattern; " + "pass flags to re.compile() instead" + ) + pattern = regex + else: + pattern = re.compile(regex, flags) + + if func is re.match: + match_func = pattern.match + elif func is re.search: + match_func = pattern.search + else: + match_func = pattern.fullmatch + + return _MatchesReValidator(pattern, match_func) + + +@attrs(repr=False, slots=True, hash=True) +class _ProvidesValidator: + interface = attrib() + + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``. + """ + if not self.interface.providedBy(value): + raise TypeError( + "'{name}' must provide {interface!r} which {value!r} " + "doesn't.".format( + name=attr.name, interface=self.interface, value=value + ), + attr, + self.interface, + value, + ) + + def __repr__(self): + return "".format( + interface=self.interface + ) + + +def provides(interface): + """ + A validator that raises a `TypeError` if the initializer is called + with an object that does not provide the requested *interface* (checks are + performed using ``interface.providedBy(value)`` (see `zope.interface + `_). + + :param interface: The interface to check for. + :type interface: ``zope.interface.Interface`` + + :raises TypeError: With a human readable error message, the attribute + (of type `attrs.Attribute`), the expected interface, and the + value it got. + """ + return _ProvidesValidator(interface) + + +@attrs(repr=False, slots=True, hash=True) +class _OptionalValidator: + validator = attrib() + + def __call__(self, inst, attr, value): + if value is None: + return + + self.validator(inst, attr, value) + + def __repr__(self): + return "".format( + what=repr(self.validator) + ) + + +def optional(validator): + """ + A validator that makes an attribute optional. An optional attribute is one + which can be set to ``None`` in addition to satisfying the requirements of + the sub-validator. + + :param validator: A validator (or a list of validators) that is used for + non-``None`` values. + :type validator: callable or `list` of callables. + + .. versionadded:: 15.1.0 + .. versionchanged:: 17.1.0 *validator* can be a list of validators. 
+ """ + if isinstance(validator, list): + return _OptionalValidator(_AndValidator(validator)) + return _OptionalValidator(validator) + + +@attrs(repr=False, slots=True, hash=True) +class _InValidator: + options = attrib() + + def __call__(self, inst, attr, value): + try: + in_options = value in self.options + except TypeError: # e.g. `1 in "abc"` + in_options = False + + if not in_options: + raise ValueError( + "'{name}' must be in {options!r} (got {value!r})".format( + name=attr.name, options=self.options, value=value + ), + attr, + self.options, + value, + ) + + def __repr__(self): + return "".format( + options=self.options + ) + + +def in_(options): + """ + A validator that raises a `ValueError` if the initializer is called + with a value that does not belong in the options provided. The check is + performed using ``value in options``. + + :param options: Allowed options. + :type options: list, tuple, `enum.Enum`, ... + + :raises ValueError: With a human readable error message, the attribute (of + type `attrs.Attribute`), the expected options, and the value it + got. + + .. versionadded:: 17.1.0 + .. versionchanged:: 22.1.0 + The ValueError was incomplete until now and only contained the human + readable error message. Now it contains all the information that has + been promised since 17.1.0. + """ + return _InValidator(options) + + +@attrs(repr=False, slots=False, hash=True) +class _IsCallableValidator: + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``. + """ + if not callable(value): + message = ( + "'{name}' must be callable " + "(got {value!r} that is a {actual!r})." + ) + raise NotCallableError( + msg=message.format( + name=attr.name, value=value, actual=value.__class__ + ), + value=value, + ) + + def __repr__(self): + return "" + + +def is_callable(): + """ + A validator that raises a `attr.exceptions.NotCallableError` if the + initializer is called with a value for this particular attribute + that is not callable. + + .. versionadded:: 19.1.0 + + :raises `attr.exceptions.NotCallableError`: With a human readable error + message containing the attribute (`attrs.Attribute`) name, + and the value it got. + """ + return _IsCallableValidator() + + +@attrs(repr=False, slots=True, hash=True) +class _DeepIterable: + member_validator = attrib(validator=is_callable()) + iterable_validator = attrib( + default=None, validator=optional(is_callable()) + ) + + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``. + """ + if self.iterable_validator is not None: + self.iterable_validator(inst, attr, value) + + for member in value: + self.member_validator(inst, attr, member) + + def __repr__(self): + iterable_identifier = ( + "" + if self.iterable_validator is None + else " {iterable!r}".format(iterable=self.iterable_validator) + ) + return ( + "" + ).format( + iterable_identifier=iterable_identifier, + member=self.member_validator, + ) + + +def deep_iterable(member_validator, iterable_validator=None): + """ + A validator that performs deep validation of an iterable. + + :param member_validator: Validator(s) to apply to iterable members + :param iterable_validator: Validator to apply to iterable itself + (optional) + + .. 
versionadded:: 19.1.0 + + :raises TypeError: if any sub-validators fail + """ + if isinstance(member_validator, (list, tuple)): + member_validator = and_(*member_validator) + return _DeepIterable(member_validator, iterable_validator) + + +@attrs(repr=False, slots=True, hash=True) +class _DeepMapping: + key_validator = attrib(validator=is_callable()) + value_validator = attrib(validator=is_callable()) + mapping_validator = attrib(default=None, validator=optional(is_callable())) + + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``. + """ + if self.mapping_validator is not None: + self.mapping_validator(inst, attr, value) + + for key in value: + self.key_validator(inst, attr, key) + self.value_validator(inst, attr, value[key]) + + def __repr__(self): + return ( + "<deep_mapping validator for objects mapping {key!r} to {value!r}>" + ).format(key=self.key_validator, value=self.value_validator) + + +def deep_mapping(key_validator, value_validator, mapping_validator=None): + """ + A validator that performs deep validation of a dictionary. + + :param key_validator: Validator to apply to dictionary keys + :param value_validator: Validator to apply to dictionary values + :param mapping_validator: Validator to apply to top-level mapping + attribute (optional) + + .. versionadded:: 19.1.0 + + :raises TypeError: if any sub-validators fail + """ + return _DeepMapping(key_validator, value_validator, mapping_validator) + + +@attrs(repr=False, frozen=True, slots=True) +class _NumberValidator: + bound = attrib() + compare_op = attrib() + compare_func = attrib() + + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``. + """ + if not self.compare_func(value, self.bound): + raise ValueError( + "'{name}' must be {op} {bound}: {value}".format( + name=attr.name, + op=self.compare_op, + bound=self.bound, + value=value, + ) + ) + + def __repr__(self): + return "<Validator for x {op} {bound}>".format( + op=self.compare_op, bound=self.bound + ) + + +def lt(val): + """ + A validator that raises `ValueError` if the initializer is called + with a number larger or equal to *val*. + + :param val: Exclusive upper bound for values + + .. versionadded:: 21.3.0 + """ + return _NumberValidator(val, "<", operator.lt) + + +def le(val): + """ + A validator that raises `ValueError` if the initializer is called + with a number greater than *val*. + + :param val: Inclusive upper bound for values + + .. versionadded:: 21.3.0 + """ + return _NumberValidator(val, "<=", operator.le) + + +def ge(val): + """ + A validator that raises `ValueError` if the initializer is called + with a number smaller than *val*. + + :param val: Inclusive lower bound for values + + .. versionadded:: 21.3.0 + """ + return _NumberValidator(val, ">=", operator.ge) + + +def gt(val): + """ + A validator that raises `ValueError` if the initializer is called + with a number smaller or equal to *val*. + + :param val: Exclusive lower bound for values + + .. versionadded:: 21.3.0 + """ + return _NumberValidator(val, ">", operator.gt) + + +@attrs(repr=False, frozen=True, slots=True) +class _MaxLengthValidator: + max_length = attrib() + + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``.
+ """ + if len(value) > self.max_length: + raise ValueError( + "Length of '{name}' must be <= {max}: {len}".format( + name=attr.name, max=self.max_length, len=len(value) + ) + ) + + def __repr__(self): + return "".format(max=self.max_length) + + +def max_len(length): + """ + A validator that raises `ValueError` if the initializer is called + with a string or iterable that is longer than *length*. + + :param int length: Maximum length of the string or iterable + + .. versionadded:: 21.3.0 + """ + return _MaxLengthValidator(length) + + +@attrs(repr=False, frozen=True, slots=True) +class _MinLengthValidator: + min_length = attrib() + + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``. + """ + if len(value) < self.min_length: + raise ValueError( + "Length of '{name}' must be => {min}: {len}".format( + name=attr.name, min=self.min_length, len=len(value) + ) + ) + + def __repr__(self): + return "".format(min=self.min_length) + + +def min_len(length): + """ + A validator that raises `ValueError` if the initializer is called + with a string or iterable that is shorter than *length*. + + :param int length: Minimum length of the string or iterable + + .. versionadded:: 22.1.0 + """ + return _MinLengthValidator(length) diff --git a/env/lib/python3.8/site-packages/attr/validators.pyi b/env/lib/python3.8/site-packages/attr/validators.pyi new file mode 100644 index 00000000..54b9dba2 --- /dev/null +++ b/env/lib/python3.8/site-packages/attr/validators.pyi @@ -0,0 +1,80 @@ +from typing import ( + Any, + AnyStr, + Callable, + Container, + ContextManager, + Iterable, + List, + Mapping, + Match, + Optional, + Pattern, + Tuple, + Type, + TypeVar, + Union, + overload, +) + +from . import _ValidatorType +from . import _ValidatorArgType + +_T = TypeVar("_T") +_T1 = TypeVar("_T1") +_T2 = TypeVar("_T2") +_T3 = TypeVar("_T3") +_I = TypeVar("_I", bound=Iterable) +_K = TypeVar("_K") +_V = TypeVar("_V") +_M = TypeVar("_M", bound=Mapping) + +def set_disabled(run: bool) -> None: ... +def get_disabled() -> bool: ... +def disabled() -> ContextManager[None]: ... + +# To be more precise on instance_of use some overloads. +# If there are more than 3 items in the tuple then we fall back to Any +@overload +def instance_of(type: Type[_T]) -> _ValidatorType[_T]: ... +@overload +def instance_of(type: Tuple[Type[_T]]) -> _ValidatorType[_T]: ... +@overload +def instance_of( + type: Tuple[Type[_T1], Type[_T2]] +) -> _ValidatorType[Union[_T1, _T2]]: ... +@overload +def instance_of( + type: Tuple[Type[_T1], Type[_T2], Type[_T3]] +) -> _ValidatorType[Union[_T1, _T2, _T3]]: ... +@overload +def instance_of(type: Tuple[type, ...]) -> _ValidatorType[Any]: ... +def provides(interface: Any) -> _ValidatorType[Any]: ... +def optional( + validator: Union[_ValidatorType[_T], List[_ValidatorType[_T]]] +) -> _ValidatorType[Optional[_T]]: ... +def in_(options: Container[_T]) -> _ValidatorType[_T]: ... +def and_(*validators: _ValidatorType[_T]) -> _ValidatorType[_T]: ... +def matches_re( + regex: Union[Pattern[AnyStr], AnyStr], + flags: int = ..., + func: Optional[ + Callable[[AnyStr, AnyStr, int], Optional[Match[AnyStr]]] + ] = ..., +) -> _ValidatorType[AnyStr]: ... +def deep_iterable( + member_validator: _ValidatorArgType[_T], + iterable_validator: Optional[_ValidatorType[_I]] = ..., +) -> _ValidatorType[_I]: ... 
+def deep_mapping( + key_validator: _ValidatorType[_K], + value_validator: _ValidatorType[_V], + mapping_validator: Optional[_ValidatorType[_M]] = ..., +) -> _ValidatorType[_M]: ... +def is_callable() -> _ValidatorType[_T]: ... +def lt(val: _T) -> _ValidatorType[_T]: ... +def le(val: _T) -> _ValidatorType[_T]: ... +def ge(val: _T) -> _ValidatorType[_T]: ... +def gt(val: _T) -> _ValidatorType[_T]: ... +def max_len(length: int) -> _ValidatorType[_T]: ... +def min_len(length: int) -> _ValidatorType[_T]: ... diff --git a/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/AUTHORS.rst b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/AUTHORS.rst new file mode 100644 index 00000000..aa677e81 --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/AUTHORS.rst @@ -0,0 +1,11 @@ +Credits +======= + +``attrs`` is written and maintained by `Hynek Schlawack `_. + +The development is kindly supported by `Variomedia AG `_. + +A full list of contributors can be found in `GitHub's overview `_. + +It’s the spiritual successor of `characteristic `_ and aspires to fix some of it clunkiness and unfortunate decisions. +Both were inspired by Twisted’s `FancyEqMixin `_ but both are implemented using class decorators because `subclassing is bad for you `_, m’kay? diff --git a/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/LICENSE new file mode 100644 index 00000000..2bd6453d --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2015 Hynek Schlawack and the attrs contributors + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
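To round out the validators module vendored above, here is a small sketch of the container and bound validators (``deep_iterable``, ``gt``/``le``, ``min_len``/``max_len``); the ``Matrix`` class and its fields are illustrative only, assuming the attrs 22.1.0 pinned in this environment:

    import attr
    from attr import validators as v

    @attr.s
    class Matrix:
        # An outer list of inner lists of ints; both levels are checked.
        rows = attr.ib(
            validator=v.deep_iterable(
                member_validator=v.deep_iterable(
                    member_validator=v.instance_of(int),
                    iterable_validator=v.instance_of(list),
                ),
                iterable_validator=v.instance_of(list),
            )
        )
        # A list of validators is wrapped in and_() automatically.
        scale = attr.ib(validator=[v.gt(0), v.le(100)])
        name = attr.ib(validator=[v.min_len(1), v.max_len(16)])

    Matrix(rows=[[1, 2], [3, 4]], scale=50, name="m")  # passes all checks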
diff --git a/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/METADATA b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/METADATA new file mode 100644 index 00000000..60b6653e --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/METADATA @@ -0,0 +1,240 @@ +Metadata-Version: 2.1 +Name: attrs +Version: 22.1.0 +Summary: Classes Without Boilerplate +Home-page: https://www.attrs.org/ +Author: Hynek Schlawack +Author-email: hs@ox.cx +Maintainer: Hynek Schlawack +Maintainer-email: hs@ox.cx +License: MIT +Project-URL: Documentation, https://www.attrs.org/ +Project-URL: Changelog, https://www.attrs.org/en/stable/changelog.html +Project-URL: Bug Tracker, https://github.com/python-attrs/attrs/issues +Project-URL: Source Code, https://github.com/python-attrs/attrs +Project-URL: Funding, https://github.com/sponsors/hynek +Project-URL: Tidelift, https://tidelift.com/subscription/pkg/pypi-attrs?utm_source=pypi-attrs&utm_medium=pypi +Project-URL: Ko-fi, https://ko-fi.com/the_hynek +Keywords: class,attribute,boilerplate +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: Natural Language :: English +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Requires-Python: >=3.5 +Description-Content-Type: text/x-rst +License-File: LICENSE +License-File: AUTHORS.rst +Provides-Extra: dev +Requires-Dist: coverage[toml] (>=5.0.2) ; extra == 'dev' +Requires-Dist: hypothesis ; extra == 'dev' +Requires-Dist: pympler ; extra == 'dev' +Requires-Dist: pytest (>=4.3.0) ; extra == 'dev' +Requires-Dist: mypy (!=0.940,>=0.900) ; extra == 'dev' +Requires-Dist: pytest-mypy-plugins ; extra == 'dev' +Requires-Dist: zope.interface ; extra == 'dev' +Requires-Dist: furo ; extra == 'dev' +Requires-Dist: sphinx ; extra == 'dev' +Requires-Dist: sphinx-notfound-page ; extra == 'dev' +Requires-Dist: pre-commit ; extra == 'dev' +Requires-Dist: cloudpickle ; (platform_python_implementation == "CPython") and extra == 'dev' +Provides-Extra: docs +Requires-Dist: furo ; extra == 'docs' +Requires-Dist: sphinx ; extra == 'docs' +Requires-Dist: zope.interface ; extra == 'docs' +Requires-Dist: sphinx-notfound-page ; extra == 'docs' +Provides-Extra: tests +Requires-Dist: coverage[toml] (>=5.0.2) ; extra == 'tests' +Requires-Dist: hypothesis ; extra == 'tests' +Requires-Dist: pympler ; extra == 'tests' +Requires-Dist: pytest (>=4.3.0) ; extra == 'tests' +Requires-Dist: mypy (!=0.940,>=0.900) ; extra == 'tests' +Requires-Dist: pytest-mypy-plugins ; extra == 'tests' +Requires-Dist: zope.interface ; extra == 'tests' +Requires-Dist: cloudpickle ; (platform_python_implementation == "CPython") and extra == 'tests' +Provides-Extra: tests_no_zope +Requires-Dist: coverage[toml] (>=5.0.2) ; extra == 'tests_no_zope' +Requires-Dist: hypothesis ; extra == 
'tests_no_zope' +Requires-Dist: pympler ; extra == 'tests_no_zope' +Requires-Dist: pytest (>=4.3.0) ; extra == 'tests_no_zope' +Requires-Dist: mypy (!=0.940,>=0.900) ; extra == 'tests_no_zope' +Requires-Dist: pytest-mypy-plugins ; extra == 'tests_no_zope' +Requires-Dist: cloudpickle ; (platform_python_implementation == "CPython") and extra == 'tests_no_zope' + + +.. image:: https://www.attrs.org/en/stable/_static/attrs_logo.png + :alt: attrs logo + :align: center + + +``attrs`` is the Python package that will bring back the **joy** of **writing classes** by relieving you from the drudgery of implementing object protocols (aka `dunder methods `_). +`Trusted by NASA `_ for Mars missions since 2020! + +Its main goal is to help you to write **concise** and **correct** software without slowing down your code. + +.. teaser-end + +For that, it gives you a class decorator and a way to declaratively define the attributes on that class: + +.. -code-begin- + +.. code-block:: pycon + + >>> from attrs import asdict, define, make_class, Factory + + >>> @define + ... class SomeClass: + ... a_number: int = 42 + ... list_of_numbers: list[int] = Factory(list) + ... + ... def hard_math(self, another_number): + ... return self.a_number + sum(self.list_of_numbers) * another_number + + + >>> sc = SomeClass(1, [1, 2, 3]) + >>> sc + SomeClass(a_number=1, list_of_numbers=[1, 2, 3]) + + >>> sc.hard_math(3) + 19 + >>> sc == SomeClass(1, [1, 2, 3]) + True + >>> sc != SomeClass(2, [3, 2, 1]) + True + + >>> asdict(sc) + {'a_number': 1, 'list_of_numbers': [1, 2, 3]} + + >>> SomeClass() + SomeClass(a_number=42, list_of_numbers=[]) + + >>> C = make_class("C", ["a", "b"]) + >>> C("foo", "bar") + C(a='foo', b='bar') + + +After *declaring* your attributes, ``attrs`` gives you: + +- a concise and explicit overview of the class's attributes, +- a nice human-readable ``__repr__``, +- equality-checking methods, +- an initializer, +- and much more, + +*without* writing dull boilerplate code again and again and *without* runtime performance penalties. + +**Hate type annotations**!? +No problem! +Types are entirely **optional** with ``attrs``. +Simply assign ``attrs.field()`` to the attributes instead of annotating them with types. + +---- + +This example uses ``attrs``'s modern APIs that have been introduced in version 20.1.0, and the ``attrs`` package import name that has been added in version 21.3.0. +The classic APIs (``@attr.s``, ``attr.ib``, plus their serious-business aliases) and the ``attr`` package import name will remain **indefinitely**. + +Please check out `On The Core API Names `_ for a more in-depth explanation. + + +Data Classes +============ + +On the tin, ``attrs`` might remind you of ``dataclasses`` (and indeed, ``dataclasses`` `are a descendant `_ of ``attrs``). +In practice it does a lot more and is more flexible. +For instance it allows you to define `special handling of NumPy arrays for equality checks `_, or allows more ways to `plug into the initialization process `_. + +For more details, please refer to our `comparison page `_. + +.. 
-project-information- + +Project Information +=================== + +- **License**: `MIT `_ +- **PyPI**: https://pypi.org/project/attrs/ +- **Source Code**: https://github.com/python-attrs/attrs +- **Documentation**: https://www.attrs.org/ +- **Changelog**: https://www.attrs.org/en/stable/changelog.html +- **Get Help**: please use the ``python-attrs`` tag on `StackOverflow `_ +- **Third-party Extensions**: https://github.com/python-attrs/attrs/wiki/Extensions-to-attrs +- **Supported Python Versions**: 3.5 and later (last 2.7-compatible release is `21.4.0 `_) + +If you'd like to contribute to ``attrs`` you're most welcome and we've written `a little guide `_ to get you started! + + +``attrs`` for Enterprise +------------------------ + +Available as part of the Tidelift Subscription. + +The maintainers of ``attrs`` and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source packages you use to build your applications. +Save time, reduce risk, and improve code health, while paying the maintainers of the exact packages you use. +`Learn more. `_ + + +Release Information +=================== + +22.1.0 (2022-07-28) +------------------- + +Backwards-incompatible Changes +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +- Python 2.7 is not supported anymore. + + Dealing with Python 2.7 tooling has become too difficult for a volunteer-run project. + + We have supported Python 2 more than 2 years after it was officially discontinued and feel that we have paid our dues. + All version up to 21.4.0 from December 2021 remain fully functional, of course. + `#936 `_ +- The deprecated ``cmp`` attribute of ``attrs.Attribute`` has been removed. + This does not affect the *cmp* argument to ``attr.s`` that can be used as a shortcut to set *eq* and *order* at the same time. + `#939 `_ + + +Changes +^^^^^^^ + +- Instantiation of frozen slotted classes is now faster. + `#898 `_ +- If an ``eq`` key is defined, it is also used before hashing the attribute. + `#909 `_ +- Added ``attrs.validators.min_len()``. + `#916 `_ +- ``attrs.validators.deep_iterable()``'s *member_validator* argument now also accepts a list of validators and wraps them in an ``attrs.validators.and_()``. + `#925 `_ +- Added missing type stub re-imports for ``attrs.converters`` and ``attrs.filters``. + `#931 `_ +- Added missing stub for ``attr(s).cmp_using()``. + `#949 `_ +- ``attrs.validators._in()``'s ``ValueError`` is not missing the attribute, expected options, and the value it got anymore. + `#951 `_ +- Python 3.11 is now officially supported. + `#969 `_ + +`Full changelog `_. + +Credits +======= + +``attrs`` is written and maintained by `Hynek Schlawack `_. + +The development is kindly supported by `Variomedia AG `_. + +A full list of contributors can be found in `GitHub's overview `_. + +It’s the spiritual successor of `characteristic `_ and aspires to fix some of it clunkiness and unfortunate decisions. +Both were inspired by Twisted’s `FancyEqMixin `_ but both are implemented using class decorators because `subclassing is bad for you `_, m’kay? 
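The changelog above mentions faster instantiation of frozen slotted classes; as a minimal sketch of what "frozen" means in terms of the exceptions module vendored earlier (the ``Point`` class is illustrative only):

    from attrs import frozen
    from attrs.exceptions import FrozenInstanceError

    @frozen            # shorthand for define(frozen=True)
    class Point:
        x: int
        y: int

    p = Point(1, 2)
    try:
        p.x = 3        # mutation is rejected on frozen instances
    except FrozenInstanceError:
        pass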
diff --git a/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/RECORD b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/RECORD new file mode 100644 index 00000000..1d4b5e3b --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/RECORD @@ -0,0 +1,56 @@ +attr/__init__.py,sha256=KZjj88xCd_tH-W59HR1EvHiYAUi4Zd1dzOxx8P77jeI,1602 +attr/__init__.pyi,sha256=t-1r-I1VnyxFrqic__QxIk1HUc3ag53L2b8lv0xDZTw,15137 +attr/__pycache__/__init__.cpython-38.pyc,, +attr/__pycache__/_cmp.cpython-38.pyc,, +attr/__pycache__/_compat.cpython-38.pyc,, +attr/__pycache__/_config.cpython-38.pyc,, +attr/__pycache__/_funcs.cpython-38.pyc,, +attr/__pycache__/_make.cpython-38.pyc,, +attr/__pycache__/_next_gen.cpython-38.pyc,, +attr/__pycache__/_version_info.cpython-38.pyc,, +attr/__pycache__/converters.cpython-38.pyc,, +attr/__pycache__/exceptions.cpython-38.pyc,, +attr/__pycache__/filters.cpython-38.pyc,, +attr/__pycache__/setters.cpython-38.pyc,, +attr/__pycache__/validators.cpython-38.pyc,, +attr/_cmp.py,sha256=Mmqj-6w71g_vx0TTLLkU4pbmv28qz_FyBGcUb1HX9ZE,4102 +attr/_cmp.pyi,sha256=cSlVvIH4As2NlDIoLglPEbSrBMWVVTpwExVDDBH6pn8,357 +attr/_compat.py,sha256=Qr9kZOu95Og7joPaQiXoPb54epKqxNU39Xu0u4QVGZI,5568 +attr/_config.py,sha256=5W8lgRePuIOWu1ZuqF1899e2CmXGc95-ipwTpF1cEU4,826 +attr/_funcs.py,sha256=XTFKZtd5zxsUvWocBw7VAswTxH-nFHk0H8gJ2JQkxD4,14645 +attr/_make.py,sha256=Srxbhb8kB17T6nP9e_dgcXY72zda9xfL5uJzva6LExY,97569 +attr/_next_gen.py,sha256=N0Gb5WdBHfcfQhcUsLAc_2ZYzl0JtAX1NOHuDgvkecE,5882 +attr/_version_info.py,sha256=exSqb3b5E-fMSsgZAlEw9XcLpEgobPORCZpcaEglAM4,2121 +attr/_version_info.pyi,sha256=x_M3L3WuB7r_ULXAWjx959udKQ4HLB8l-hsc1FDGNvk,209 +attr/converters.py,sha256=TWCfmCAxk8s2tgTSYnyQv9MRDPf1pk8Rj16KO_Xqe1c,3610 +attr/converters.pyi,sha256=MQo7iEzPNVoFpKqD30sVwgVpdNoIeSCF2nsXvoxLZ-Y,416 +attr/exceptions.py,sha256=ZGEMLv0CDY1TOtj49OF84myejOn-LCCXAKGIKalKkVU,1915 +attr/exceptions.pyi,sha256=zZq8bCUnKAy9mDtBEw42ZhPhAUIHoTKedDQInJD883M,539 +attr/filters.py,sha256=aZep54h8-4ZYV5lmZ3Dx2mqeQH4cMx6tuCmCylLNbEU,1038 +attr/filters.pyi,sha256=_Sm80jGySETX_Clzdkon5NHVjQWRl3Y3liQKZX1czXc,215 +attr/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +attr/setters.py,sha256=pbCZQ-pE6ZxjDqZfWWUhUFefXtpekIU4qS_YDMLPQ50,1400 +attr/setters.pyi,sha256=7dM10rqpQVDW0y-iJUnq8rabdO5Wx2Sbo5LwNa0IXl0,573 +attr/validators.py,sha256=cpOHMNSt02ApbTQtQAwBTMeWZqp0u_sx-e3xH-jTyR4,16793 +attr/validators.pyi,sha256=6ngbvnWnFSkbf5J2dK86pq2xpEtrwzWTKfJ4aUvSIlk,2355 +attrs-22.1.0.dist-info/AUTHORS.rst,sha256=jau5p7JMPbBJeCCpGBWsRj8zpxUVAhpyoHFJRfjM888,743 +attrs-22.1.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +attrs-22.1.0.dist-info/LICENSE,sha256=iCEVyV38KvHutnFPjsbVy8q_Znyv-HKfQkINpj9xTp8,1109 +attrs-22.1.0.dist-info/METADATA,sha256=vwSMK_BbEgVHrgFWpj3iW0PISTMPHzi6qham9jg7LtA,11015 +attrs-22.1.0.dist-info/RECORD,, +attrs-22.1.0.dist-info/WHEEL,sha256=z9j0xAa_JmUKMpmz72K0ZGALSM_n-wQVmGbleXx2VHg,110 +attrs-22.1.0.dist-info/top_level.txt,sha256=AGbmKnOtYpdkLRsDRQVSBIwfL32pAQ6BSo1mt-BxI7M,11 +attrs/__init__.py,sha256=CeyxLGVViAEKKsLOLaif8vF3vs1a28vsrRVLv7eMEgM,1109 +attrs/__init__.pyi,sha256=vuFxNbulP9Q7hfpO6Lb5A-_0mbEJOiwYyefjzXMqVfs,2100 +attrs/__pycache__/__init__.cpython-38.pyc,, +attrs/__pycache__/converters.cpython-38.pyc,, +attrs/__pycache__/exceptions.cpython-38.pyc,, +attrs/__pycache__/filters.cpython-38.pyc,, +attrs/__pycache__/setters.cpython-38.pyc,, +attrs/__pycache__/validators.cpython-38.pyc,, +attrs/converters.py,sha256=fCBEdlYWcmI3sCnpUk2pz22GYtXzqTkp6NeOpdI64PY,70 
+attrs/exceptions.py,sha256=SlDli6AY77f6ny-H7oy98OkQjsrw-D_supEuErIVYkE,70 +attrs/filters.py,sha256=dc_dNey29kH6KLU1mT2Dakq7tZ3kBfzEGwzOmDzw1F8,67 +attrs/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +attrs/setters.py,sha256=oKw51C72Hh45wTwYvDHJP9kbicxiMhMR4Y5GvdpKdHQ,67 +attrs/validators.py,sha256=4ag1SyVD2Hm3PYKiNG_NOtR_e7f81Hr6GiNl4YvXo4Q,70 diff --git a/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/WHEEL new file mode 100644 index 00000000..0b18a281 --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/top_level.txt b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/top_level.txt new file mode 100644 index 00000000..eca8ba9f --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs-22.1.0.dist-info/top_level.txt @@ -0,0 +1,2 @@ +attr +attrs diff --git a/env/lib/python3.8/site-packages/attrs/__init__.py b/env/lib/python3.8/site-packages/attrs/__init__.py new file mode 100644 index 00000000..a704b8b5 --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs/__init__.py @@ -0,0 +1,70 @@ +# SPDX-License-Identifier: MIT + +from attr import ( + NOTHING, + Attribute, + Factory, + __author__, + __copyright__, + __description__, + __doc__, + __email__, + __license__, + __title__, + __url__, + __version__, + __version_info__, + assoc, + cmp_using, + define, + evolve, + field, + fields, + fields_dict, + frozen, + has, + make_class, + mutable, + resolve_types, + validate, +) +from attr._next_gen import asdict, astuple + +from . import converters, exceptions, filters, setters, validators + + +__all__ = [ + "__author__", + "__copyright__", + "__description__", + "__doc__", + "__email__", + "__license__", + "__title__", + "__url__", + "__version__", + "__version_info__", + "asdict", + "assoc", + "astuple", + "Attribute", + "cmp_using", + "converters", + "define", + "evolve", + "exceptions", + "Factory", + "field", + "fields_dict", + "fields", + "filters", + "frozen", + "has", + "make_class", + "mutable", + "NOTHING", + "resolve_types", + "setters", + "validate", + "validators", +] diff --git a/env/lib/python3.8/site-packages/attrs/__init__.pyi b/env/lib/python3.8/site-packages/attrs/__init__.pyi new file mode 100644 index 00000000..fc44de46 --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs/__init__.pyi @@ -0,0 +1,66 @@ +from typing import ( + Any, + Callable, + Dict, + Mapping, + Optional, + Sequence, + Tuple, + Type, +) + +# Because we need to type our own stuff, we have to make everything from +# attr explicitly public too. 
+from attr import __author__ as __author__ +from attr import __copyright__ as __copyright__ +from attr import __description__ as __description__ +from attr import __email__ as __email__ +from attr import __license__ as __license__ +from attr import __title__ as __title__ +from attr import __url__ as __url__ +from attr import __version__ as __version__ +from attr import __version_info__ as __version_info__ +from attr import _FilterType +from attr import assoc as assoc +from attr import Attribute as Attribute +from attr import cmp_using as cmp_using +from attr import converters as converters +from attr import define as define +from attr import evolve as evolve +from attr import exceptions as exceptions +from attr import Factory as Factory +from attr import field as field +from attr import fields as fields +from attr import fields_dict as fields_dict +from attr import filters as filters +from attr import frozen as frozen +from attr import has as has +from attr import make_class as make_class +from attr import mutable as mutable +from attr import NOTHING as NOTHING +from attr import resolve_types as resolve_types +from attr import setters as setters +from attr import validate as validate +from attr import validators as validators + +# TODO: see definition of attr.asdict/astuple +def asdict( + inst: Any, + recurse: bool = ..., + filter: Optional[_FilterType[Any]] = ..., + dict_factory: Type[Mapping[Any, Any]] = ..., + retain_collection_types: bool = ..., + value_serializer: Optional[ + Callable[[type, Attribute[Any], Any], Any] + ] = ..., + tuple_keys: bool = ..., +) -> Dict[str, Any]: ... + +# TODO: add support for returning NamedTuple from the mypy plugin +def astuple( + inst: Any, + recurse: bool = ..., + filter: Optional[_FilterType[Any]] = ..., + tuple_factory: Type[Sequence[Any]] = ..., + retain_collection_types: bool = ..., +) -> Tuple[Any, ...]: ... 
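The handful of three-line modules that follow simply re-export the ``attr`` namespace under the ``attrs`` name, so the two import spellings resolve to the same objects. A quick check (assuming both packages are installed as in this environment):

    import attr
    import attrs

    # The re-exports are the same underlying objects:
    assert attrs.define is attr.define
    assert attrs.Factory is attr.Factory
    assert attrs.validators.instance_of is attr.validators.instance_of

    @attrs.define
    class C:
        x: int = 0

    assert attrs.asdict(C()) == {"x": 0}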
diff --git a/env/lib/python3.8/site-packages/attrs/converters.py b/env/lib/python3.8/site-packages/attrs/converters.py new file mode 100644 index 00000000..edfa8d3c --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs/converters.py @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: MIT + +from attr.converters import * # noqa diff --git a/env/lib/python3.8/site-packages/attrs/exceptions.py b/env/lib/python3.8/site-packages/attrs/exceptions.py new file mode 100644 index 00000000..bd9efed2 --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs/exceptions.py @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: MIT + +from attr.exceptions import * # noqa diff --git a/env/lib/python3.8/site-packages/attrs/filters.py b/env/lib/python3.8/site-packages/attrs/filters.py new file mode 100644 index 00000000..52959005 --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs/filters.py @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: MIT + +from attr.filters import * # noqa diff --git a/env/lib/python3.8/site-packages/attrs/py.typed b/env/lib/python3.8/site-packages/attrs/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/attrs/setters.py b/env/lib/python3.8/site-packages/attrs/setters.py new file mode 100644 index 00000000..9b507708 --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs/setters.py @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: MIT + +from attr.setters import * # noqa diff --git a/env/lib/python3.8/site-packages/attrs/validators.py b/env/lib/python3.8/site-packages/attrs/validators.py new file mode 100644 index 00000000..ab2c9b30 --- /dev/null +++ b/env/lib/python3.8/site-packages/attrs/validators.py @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: MIT + +from attr.validators import * # noqa diff --git a/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/AUTHORS.rst b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/AUTHORS.rst new file mode 100644 index 00000000..e2781e4c --- /dev/null +++ b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/AUTHORS.rst @@ -0,0 +1,48 @@ +Main contributors +----------------- +- Hideo Hattori (https://github.com/hhatto) +- Steven Myint (https://github.com/myint) +- Bill Wendling (https://github.com/gwelymernans) + +Patches +------- +- Fraser Tweedale (https://github.com/frasertweedale) +- clach04 (https://github.com/clach04) +- Marc Abramowitz (https://github.com/msabramo) +- dellis23 (https://github.com/dellis23) +- Sam Vilain (https://github.com/samv) +- Florent Xicluna (https://github.com/florentx) +- Andras Tim (https://github.com/andras-tim) +- tomscytale (https://github.com/tomscytale) +- Filip Noetzel (https://github.com/peritus) +- Erik Bray (https://github.com/iguananaut) +- Christopher Medrela (https://github.com/chrismedrela) +- 小明 (https://github.com/dongweiming) +- Andy Hayden (https://github.com/hayd) +- Fabio Zadrozny (https://github.com/fabioz) +- Alex Chernetz (https://github.com/achernet) +- Marc Schlaich (https://github.com/schlamar) +- E. M. 
Bray (https://github.com/embray) +- Thomas Hisch (https://github.com/thisch) +- Florian Best (https://github.com/spaceone) +- Ian Clark (https://github.com/evenicoulddoit) +- Khairi Hafsham (https://github.com/khairihafsham) +- Neil Halelamien (https://github.com/neilsh) +- Hashem Nasarat (https://github.com/Hnasar) +- Hugo van Kemenade (https://github.com/hugovk) +- gmbnomis (https://github.com/gmbnomis) +- Samuel Lelièvre (https://github.com/slel) +- bigredengineer (https://github.com/bigredengineer) +- Kai Chen (https://github.com/kx-chen) +- Anthony Sottile (https://github.com/asottile) +- 秋葉 (https://github.com/Hanaasagi) +- Christian Clauss (https://github.com/cclauss) +- tobixx (https://github.com/tobixx) +- bigredengineer (https://github.com/bigredengineer) +- Bastien Gérard (https://github.com/bagerard) +- nicolasbonifas (https://github.com/nicolasbonifas) +- Andrii Yurchuk (https://github.com/Ch00k) +- José M. Guisado (https://github.com/pvxe) +- Dai Truong (https://github.com/NovaDev94) +- jnozsc (https://github.com/jnozsc) +- Edwin Shepherd (https://github.com/shardros) diff --git a/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/LICENSE new file mode 100644 index 00000000..df9738f4 --- /dev/null +++ b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/LICENSE @@ -0,0 +1,23 @@ +Copyright (C) 2010-2011 Hideo Hattori +Copyright (C) 2011-2013 Hideo Hattori, Steven Myint +Copyright (C) 2013-2016 Hideo Hattori, Steven Myint, Bill Wendling + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
diff --git a/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/METADATA b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/METADATA new file mode 100644 index 00000000..bc6fb6b9 --- /dev/null +++ b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/METADATA @@ -0,0 +1,475 @@ +Metadata-Version: 2.1 +Name: autopep8 +Version: 1.7.0 +Summary: A tool that automatically formats Python code to conform to the PEP 8 style guide +Home-page: https://github.com/hhatto/autopep8 +Author: Hideo Hattori +Author-email: hhatto.jp@gmail.com +License: Expat License +Keywords: automation,pep8,format,pycodestyle +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Console +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Software Development :: Quality Assurance +License-File: LICENSE +License-File: AUTHORS.rst +Requires-Dist: pycodestyle (>=2.9.1) +Requires-Dist: toml + +======== +autopep8 +======== + +.. image:: https://img.shields.io/pypi/v/autopep8.svg + :target: https://pypi.org/project/autopep8/ + :alt: PyPI Version + +.. image:: https://github.com/hhatto/autopep8/workflows/Python%20package/badge.svg + :target: https://github.com/hhatto/autopep8/actions + :alt: Build status + +.. image:: https://codecov.io/gh/hhatto/autopep8/branch/main/graph/badge.svg + :target: https://codecov.io/gh/hhatto/autopep8 + :alt: Code Coverage + +autopep8 automatically formats Python code to conform to the `PEP 8`_ style +guide. It uses the pycodestyle_ utility to determine what parts of the code +need to be formatted. autopep8 is capable of fixing most of the formatting +issues_ that can be reported by pycodestyle. + +.. _PEP 8: https://www.python.org/dev/peps/pep-0008/ +.. _issues: https://pycodestyle.readthedocs.org/en/latest/intro.html#error-codes + +.. contents:: + + +Installation +============ + +From pip:: + + $ pip install --upgrade autopep8 + +Consider using the ``--user`` option_. + +.. _option: https://pip.pypa.io/en/latest/user_guide/#user-installs + + +Requirements +============ + +autopep8 requires pycodestyle_. + +.. _pycodestyle: https://github.com/PyCQA/pycodestyle + + +Usage +===== + +To modify a file in place (with aggressive level 2):: + + $ autopep8 --in-place --aggressive --aggressive <filename> + +Before running autopep8. + +.. code-block:: python + + import math, sys; + + def example1(): + ####This is a long comment. This should be wrapped to fit within 72 characters.
+ some_tuple=( 1,2, 3,'a' ); + some_variable={'long':'Long code lines should be wrapped within 79 characters.', + 'other':[math.pi, 100,200,300,9876543210,'This is a long string that goes on'], + 'more':{'inner':'This whole logical line should be wrapped.',some_tuple:[1, + 20,300,40000,500000000,60000000000000000]}} + return (some_tuple, some_variable) + def example2(): return {'has_key() is deprecated':True}.has_key({'f':2}.has_key('')); + class Example3( object ): + def __init__ ( self, bar ): + #Comments should have a space after the hash. + if bar : bar+=1; bar=bar* bar ; return bar + else: + some_string = """ + Indentation in multiline strings should not be touched. + Only actual code should be reindented. + """ + return (sys.path, some_string) + +After running autopep8. + +.. code-block:: python + + import math + import sys + + + def example1(): + # This is a long comment. This should be wrapped to fit within 72 + # characters. + some_tuple = (1, 2, 3, 'a') + some_variable = { + 'long': 'Long code lines should be wrapped within 79 characters.', + 'other': [ + math.pi, + 100, + 200, + 300, + 9876543210, + 'This is a long string that goes on'], + 'more': { + 'inner': 'This whole logical line should be wrapped.', + some_tuple: [ + 1, + 20, + 300, + 40000, + 500000000, + 60000000000000000]}} + return (some_tuple, some_variable) + + + def example2(): return ('' in {'f': 2}) in {'has_key() is deprecated': True} + + + class Example3(object): + def __init__(self, bar): + # Comments should have a space after the hash. + if bar: + bar += 1 + bar = bar * bar + return bar + else: + some_string = """ + Indentation in multiline strings should not be touched. + Only actual code should be reindented. + """ + return (sys.path, some_string) + +Options:: + + usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename] + [--ignore-local-config] [-r] [-j n] [-p n] [-a] + [--experimental] [--exclude globs] [--list-fixes] + [--ignore errors] [--select errors] [--max-line-length n] + [--line-range line line] [--hang-closing] [--exit-code] + [files [files ...]] + + Automatically formats Python code to conform to the PEP 8 style guide. + + positional arguments: + files files to format or '-' for standard in + + optional arguments: + -h, --help show this help message and exit + --version show program's version number and exit + -v, --verbose print verbose messages; multiple -v result in more + verbose messages + -d, --diff print the diff for the fixed source + -i, --in-place make changes to files in place + --global-config filename + path to a global pep8 config file; if this file does + not exist then this is ignored (default: + ~/.config/pep8) + --ignore-local-config + don't look for and apply local config files; if not + passed, defaults are updated with any config files in + the project's root directory + -r, --recursive run recursively over directories; must be used with + --in-place or --diff + -j n, --jobs n number of parallel jobs; match CPU count if value is + less than 1 + -p n, --pep8-passes n + maximum number of additional pep8 passes (default: + infinite) + -a, --aggressive enable non-whitespace changes; multiple -a result in + more aggressive changes + --experimental enable experimental fixes + --exclude globs exclude file/directory names that match these comma- + separated globs + --list-fixes list codes for fixes; used by --ignore and --select + --ignore errors do not fix these errors/warnings (default: + E226,E24,W50,W690) + --select errors fix only these errors/warnings (e.g. 
E4,W) + --max-line-length n set maximum allowed line length (default: 79) + --line-range line line, --range line line + only fix errors found within this inclusive range of + line numbers (e.g. 1 99); line numbers are indexed at + 1 + --hang-closing hang-closing option passed to pycodestyle + --exit-code change to behavior of exit code. default behavior of + return value, 0 is no differences, 1 is error exit. + return 2 when add this option. 2 is exists + differences. + + +Features +======== + +autopep8 fixes the following issues_ reported by pycodestyle_:: + + E101 - Reindent all lines. + E11 - Fix indentation. + E121 - Fix indentation to be a multiple of four. + E122 - Add absent indentation for hanging indentation. + E123 - Align closing bracket to match opening bracket. + E124 - Align closing bracket to match visual indentation. + E125 - Indent to distinguish line from next logical line. + E126 - Fix over-indented hanging indentation. + E127 - Fix visual indentation. + E128 - Fix visual indentation. + E129 - Fix visual indentation. + E131 - Fix hanging indent for unaligned continuation line. + E133 - Fix missing indentation for closing bracket. + E20 - Remove extraneous whitespace. + E211 - Remove extraneous whitespace. + E22 - Fix extraneous whitespace around keywords. + E224 - Remove extraneous whitespace around operator. + E225 - Fix missing whitespace around operator. + E226 - Fix missing whitespace around arithmetic operator. + E227 - Fix missing whitespace around bitwise/shift operator. + E228 - Fix missing whitespace around modulo operator. + E231 - Add missing whitespace. + E241 - Fix extraneous whitespace around keywords. + E242 - Remove extraneous whitespace around operator. + E251 - Remove whitespace around parameter '=' sign. + E252 - Missing whitespace around parameter equals. + E26 - Fix spacing after comment hash for inline comments. + E265 - Fix spacing after comment hash for block comments. + E266 - Fix too many leading '#' for block comments. + E27 - Fix extraneous whitespace around keywords. + E301 - Add missing blank line. + E302 - Add missing 2 blank lines. + E303 - Remove extra blank lines. + E304 - Remove blank line following function decorator. + E305 - Expected 2 blank lines after end of function or class. + E306 - Expected 1 blank line before a nested definition. + E401 - Put imports on separate lines. + E402 - Fix module level import not at top of file + E501 - Try to make lines fit within --max-line-length characters. + E502 - Remove extraneous escape of newline. + E701 - Put colon-separated compound statement on separate lines. + E70 - Put semicolon-separated compound statement on separate lines. + E711 - Fix comparison with None. + E712 - Fix comparison with boolean. + E713 - Use 'not in' for test for membership. + E714 - Use 'is not' test for object identity. + E721 - Use "isinstance()" instead of comparing types directly. + E722 - Fix bare except. + E731 - Use a def when use do not assign a lambda expression. + W291 - Remove trailing whitespace. + W292 - Add a single newline at the end of the file. + W293 - Remove trailing whitespace on blank line. + W391 - Remove trailing blank lines. + W503 - Fix line break before binary operator. + W504 - Fix line break after binary operator. + W601 - Use "in" rather than "has_key()". + W602 - Fix deprecated form of raising exception. + W603 - Use "!=" instead of "<>" + W604 - Use "repr()" instead of backticks. + W605 - Fix invalid escape sequence 'x'. + W690 - Fix various deprecated code (via lib2to3). 
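A convenient way to observe one of the fixes in this table in isolation is to pair a ``--select``-style option with the ``fix_code()`` API documented below. A small sketch (the input string is made up, and the exact output should be treated as illustrative rather than guaranteed across versions):

    import autopep8

    messy = "import math,sys\nx=math.pi;y=2\n"
    # E401 puts the imports on separate lines, E702 splits the
    # semicolon-separated statements, and E225 adds the missing
    # whitespace around "=".
    clean = autopep8.fix_code(messy, options={"select": ["E401", "E702", "E225"]})
    # Expected result:
    #   import math
    #   import sys
    #   x = math.pi
    #   y = 2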
+
+autopep8 also fixes some issues not found by pycodestyle_.
+
+- Correct deprecated or non-idiomatic Python code (via ``lib2to3``). Use this
+  for making Python 2.7 code more compatible with Python 3. (This is triggered
+  if ``W690`` is enabled.)
+- Normalize files with mixed line endings.
+- Put a blank line between a class docstring and its first method
+  declaration. (Enabled with ``E301``.)
+- Remove blank lines between a function declaration and its docstring. (Enabled
+  with ``E303``.)
+
+autopep8 avoids fixing some issues found by pycodestyle_.
+
+- ``E112``/``E113`` for non-comments are reports of bad indentation that break
+  syntax rules. These should not be modified at all.
+- ``E265``, which refers to spacing after the comment hash, is ignored if the
+  comment looks like code. autopep8 avoids modifying these since they are not
+  real comments. If you really want to get rid of the pycodestyle_ warning,
+  consider just removing the commented-out code. (This can be automated via
+  eradicate_.)
+
+.. _eradicate: https://github.com/myint/eradicate
+
+
+More advanced usage
+===================
+
+By default, autopep8 only makes whitespace changes. Thus, by default, it does
+not fix ``E711`` and ``E712``. (Changing ``x == None`` to ``x is None`` may
+change the meaning of the program if ``x`` has its ``__eq__`` method
+overridden.) Nor does it correct deprecated code (``W6``). To enable these
+more aggressive fixes, use the ``--aggressive`` option::
+
+    $ autopep8 --aggressive
+
+Use multiple ``--aggressive`` to increase the aggressiveness level. For
+example, ``E712`` requires aggressiveness level 2 (since ``x == True`` could be
+changed to either ``x`` or ``x is True``, but autopep8 chooses the former).
+
+``--aggressive`` will also shorten lines and remove trailing whitespace more
+aggressively. (Usually, trailing whitespace in docstrings and other multiline
+strings is left untouched. For even more aggressive changes to docstrings, use
+docformatter_.)
+
+.. _docformatter: https://github.com/myint/docformatter
+
+To enable only a subset of the fixes, use the ``--select`` option. For example,
+to fix various types of indentation issues::
+
+    $ autopep8 --select=E1,W1
+
+Similarly, to just fix deprecated code::
+
+    $ autopep8 --aggressive --select=W6
+
+The above is useful when trying to port a single code base to work with both
+Python 2 and Python 3 at the same time.
+
+If the file being fixed is large, you may want to enable verbose progress
+messages::
+
+    $ autopep8 -v
+
+Passing in ``--experimental`` enables the following functionality:
+
+- Shortens code lines by taking their length into account
+
+::
+
+    $ autopep8 --experimental
+
+Disabling line-by-line
+----------------------
+
+autopep8 can be disabled for a region of a file by using ``# autopep8: off``
+and re-enabled with ``# autopep8: on``.
+
+.. code-block:: python
+
+    # autopep8: off
+    [
+        [23, 23, 13, 43],
+        [32, 34, 34, 34],
+        [56, 34, 34, 11],
+        [10, 10, 10, 10],
+    ]
+    # autopep8: on
+
+``fmt: off`` and ``fmt: on`` are also valid.
+
+Use as a module
+===============
+
+The simplest way of using autopep8 as a module is via the ``fix_code()``
+function:
+
+    >>> import autopep8
+    >>> autopep8.fix_code('x= 123\n')
+    'x = 123\n'
+
+Or with options:
+
+    >>> import autopep8
+    >>> autopep8.fix_code('x.has_key(y)\n',
+    ...                   options={'aggressive': 1})
+    'y in x\n'
+    >>> autopep8.fix_code('print( 123 )\n',
+    ...                   options={'ignore': ['E']})
+    'print( 123 )\n'
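+
+Building on ``fix_code()``, a small script can apply fixes to an entire file.
+The following is a minimal sketch (the file path and the option values here
+are illustrative, not prescribed by autopep8):
+
+.. code-block:: python
+
+    import autopep8
+
+    path = 'example.py'  # hypothetical input file
+
+    # Read the original source, fix it, and write it back only if changed.
+    with open(path) as f:
+        original = f.read()
+
+    fixed = autopep8.fix_code(
+        original, options={'aggressive': 1, 'max_line_length': 79})
+
+    if fixed != original:
+        with open(path, 'w') as f:
+            f.write(fixed)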
+
+
+Configuration
+=============
+
+By default, if ``$HOME/.config/pycodestyle`` (``~\.pycodestyle`` in a Windows
+environment) exists, it will be used as the global configuration file.
+Alternatively, you can specify the global configuration file with the
+``--global-config`` option.
+
+Also, if any of ``setup.cfg``, ``tox.ini``, ``.pep8``, or ``.flake8`` exists
+in the directory containing the target file, it will be used as the
+configuration file.
+
+``pep8``, ``pycodestyle``, and ``flake8`` can be used as the section name.
+
+Configuration file example::
+
+    [pycodestyle]
+    max_line_length = 120
+    ignore = E501
+
+pyproject.toml
+--------------
+
+autopep8 can also use ``pyproject.toml``.
+The section must be ``[tool.autopep8]``, and ``pyproject.toml`` takes precedence
+over any other configuration files.
+
+Configuration file example::
+
+    [tool.autopep8]
+    max_line_length = 120
+    ignore = "E501,W6"  # or ["E501", "W6"]
+    in-place = true
+    recursive = true
+    aggressive = 3
+
+
+Testing
+=======
+
+Test cases are in ``test/test_autopep8.py``. They can be run directly via
+``python test/test_autopep8.py`` or via tox_. The latter is useful for
+testing against multiple Python interpreters. (We currently test against
+CPython versions 2.7, 3.6, 3.7, and 3.8. We also test against PyPy.)
+
+.. _`tox`: https://pypi.org/project/tox/
+
+Broad-spectrum testing is available via ``test/acid.py``. This script runs
+autopep8 against Python code and checks for correctness and completeness of the
+code fixes. It can check that the bytecode remains identical.
+``test/acid_pypi.py`` makes use of ``acid.py`` to test against the latest
+released packages on PyPI.
+
+
+Troubleshooting
+===============
+
+``pkg_resources.DistributionNotFound``
+--------------------------------------
+
+If you are using an old version of ``setuptools``, you might encounter
+``pkg_resources.DistributionNotFound`` when trying to run ``autopep8``. Try
+upgrading ``setuptools`` to work around this problem::
+
+    $ pip install --upgrade setuptools
+
+Use ``sudo`` if you are installing to the system.
+
+
+Links
+=====
+
+* PyPI_
+* GitHub_
+* `Travis CI`_
+* Coveralls_
+
+.. _PyPI: https://pypi.org/project/autopep8/
+.. _GitHub: https://github.com/hhatto/autopep8
+.. _`Travis CI`: https://travis-ci.org/hhatto/autopep8
+.. 
_`Coveralls`: https://coveralls.io/r/hhatto/autopep8 diff --git a/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/RECORD b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/RECORD new file mode 100644 index 00000000..abf94a8c --- /dev/null +++ b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/RECORD @@ -0,0 +1,11 @@ +../../../bin/autopep8,sha256=CZxTFMrJebpG-KMiBmKSEyth1ml7VwiYN_DUJ8g62s8,257 +__pycache__/autopep8.cpython-38.pyc,, +autopep8-1.7.0.dist-info/AUTHORS.rst,sha256=tiTPsbzGl9dtXCMEWXbWSV1zan1M-BoWtiixs46GIWk,2003 +autopep8-1.7.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +autopep8-1.7.0.dist-info/LICENSE,sha256=jR0COOSFQ0QZFMqwdB1N4-Bwobg2f3h69fIJr7YLCWo,1181 +autopep8-1.7.0.dist-info/METADATA,sha256=uf9qENqUy_VnrVYXoyCkoLVjkcbTVut_FPcntXpbFQk,17302 +autopep8-1.7.0.dist-info/RECORD,, +autopep8-1.7.0.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110 +autopep8-1.7.0.dist-info/entry_points.txt,sha256=zEduLXzN3YzTTZBwxjhEKW7PVLqSqVG8-ocCaCR3P4A,43 +autopep8-1.7.0.dist-info/top_level.txt,sha256=s2x-di3QBwGxr7kd5xErt2pom8dsFRdINbmwsOEgLfU,9 +autopep8.py,sha256=DS3qpM_YacgSCQWofj_6yRbkFr12T_IX1fS9HShhgYs,156300 diff --git a/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/WHEEL new file mode 100644 index 00000000..ef99c6cf --- /dev/null +++ b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/entry_points.txt b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/entry_points.txt new file mode 100644 index 00000000..2d14c507 --- /dev/null +++ b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/entry_points.txt @@ -0,0 +1,2 @@ +[console_scripts] +autopep8 = autopep8:main diff --git a/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/top_level.txt b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/top_level.txt new file mode 100644 index 00000000..d81c0c25 --- /dev/null +++ b/env/lib/python3.8/site-packages/autopep8-1.7.0.dist-info/top_level.txt @@ -0,0 +1 @@ +autopep8 diff --git a/env/lib/python3.8/site-packages/autopep8.py b/env/lib/python3.8/site-packages/autopep8.py new file mode 100644 index 00000000..87acfa19 --- /dev/null +++ b/env/lib/python3.8/site-packages/autopep8.py @@ -0,0 +1,4590 @@ +#!/usr/bin/env python + +# Copyright (C) 2010-2011 Hideo Hattori +# Copyright (C) 2011-2013 Hideo Hattori, Steven Myint +# Copyright (C) 2013-2016 Hideo Hattori, Steven Myint, Bill Wendling +# +# Permission is hereby granted, free of charge, to any person obtaining +# a copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, +# distribute, sublicense, and/or sell copies of the Software, and to +# permit persons to whom the Software is furnished to do so, subject to +# the following conditions: +# +# The above copyright notice and this permission notice shall be +# included in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. + +# Copyright (C) 2006-2009 Johann C. Rocholl +# Copyright (C) 2009-2013 Florent Xicluna +# +# Permission is hereby granted, free of charge, to any person +# obtaining a copy of this software and associated documentation files +# (the "Software"), to deal in the Software without restriction, +# including without limitation the rights to use, copy, modify, merge, +# publish, distribute, sublicense, and/or sell copies of the Software, +# and to permit persons to whom the Software is furnished to do so, +# subject to the following conditions: +# +# The above copyright notice and this permission notice shall be +# included in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. + +"""Automatically formats Python code to conform to the PEP 8 style guide. + +Fixes that only need be done once can be added by adding a function of the form +"fix_(source)" to this module. They should return the fixed source code. +These fixes are picked up by apply_global_fixes(). + +Fixes that depend on pycodestyle should be added as methods to FixPEP8. See the +class documentation for more information. + +""" + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function +from __future__ import unicode_literals + +import argparse +import codecs +import collections +import copy +import difflib +import fnmatch +import inspect +import io +import itertools +import keyword +import locale +import os +import re +import signal +import sys +import textwrap +import token +import tokenize +import warnings +import ast +from configparser import ConfigParser as SafeConfigParser, Error + +import pycodestyle +from pycodestyle import STARTSWITH_INDENT_STATEMENT_REGEX + + +__version__ = '1.7.0' + + +CR = '\r' +LF = '\n' +CRLF = '\r\n' + + +PYTHON_SHEBANG_REGEX = re.compile(r'^#!.*\bpython[23]?\b\s*$') +LAMBDA_REGEX = re.compile(r'([\w.]+)\s=\slambda\s*([)(=\w,\s.]*):') +COMPARE_NEGATIVE_REGEX = re.compile(r'\b(not)\s+([^][)(}{]+?)\s+(in|is)\s') +COMPARE_NEGATIVE_REGEX_THROUGH = re.compile(r'\b(not\s+in|is\s+not)\s') +BARE_EXCEPT_REGEX = re.compile(r'except\s*:') +STARTSWITH_DEF_REGEX = re.compile(r'^(async\s+def|def)\s.*\):') +DOCSTRING_START_REGEX = re.compile(r'^u?r?(?P["\']{3})') +ENABLE_REGEX = re.compile(r'# *(fmt|autopep8): *on') +DISABLE_REGEX = re.compile(r'# *(fmt|autopep8): *off') + +EXIT_CODE_OK = 0 +EXIT_CODE_ERROR = 1 +EXIT_CODE_EXISTS_DIFF = 2 +EXIT_CODE_ARGPARSE_ERROR = 99 + +# For generating line shortening candidates. 
+SHORTEN_OPERATOR_GROUPS = frozenset([ + frozenset([',']), + frozenset(['%']), + frozenset([',', '(', '[', '{']), + frozenset(['%', '(', '[', '{']), + frozenset([',', '(', '[', '{', '%', '+', '-', '*', '/', '//']), + frozenset(['%', '+', '-', '*', '/', '//']), +]) + + +DEFAULT_IGNORE = 'E226,E24,W50,W690' # TODO: use pycodestyle.DEFAULT_IGNORE +DEFAULT_INDENT_SIZE = 4 +# these fixes conflict with each other, if the `--ignore` setting causes both +# to be enabled, disable both of them +CONFLICTING_CODES = ('W503', 'W504') + +SELECTED_GLOBAL_FIXED_METHOD_CODES = ['W602', ] + +# W602 is handled separately due to the need to avoid "with_traceback". +CODE_TO_2TO3 = { + 'E231': ['ws_comma'], + 'E721': ['idioms'], + 'W601': ['has_key'], + 'W603': ['ne'], + 'W604': ['repr'], + 'W690': ['apply', + 'except', + 'exitfunc', + 'numliterals', + 'operator', + 'paren', + 'reduce', + 'renames', + 'standarderror', + 'sys_exc', + 'throw', + 'tuple_params', + 'xreadlines']} + + +if sys.platform == 'win32': # pragma: no cover + DEFAULT_CONFIG = os.path.expanduser(r'~\.pycodestyle') +else: + DEFAULT_CONFIG = os.path.join(os.getenv('XDG_CONFIG_HOME') or + os.path.expanduser('~/.config'), + 'pycodestyle') +# fallback, use .pep8 +if not os.path.exists(DEFAULT_CONFIG): # pragma: no cover + if sys.platform == 'win32': + DEFAULT_CONFIG = os.path.expanduser(r'~\.pep8') + else: + DEFAULT_CONFIG = os.path.join(os.path.expanduser('~/.config'), 'pep8') +PROJECT_CONFIG = ('setup.cfg', 'tox.ini', '.pep8', '.flake8') + + +MAX_PYTHON_FILE_DETECTION_BYTES = 1024 + + +def open_with_encoding(filename, mode='r', encoding=None, limit_byte_check=-1): + """Return opened file with a specific encoding.""" + if not encoding: + encoding = detect_encoding(filename, limit_byte_check=limit_byte_check) + + return io.open(filename, mode=mode, encoding=encoding, + newline='') # Preserve line endings + + +def detect_encoding(filename, limit_byte_check=-1): + """Return file encoding.""" + try: + with open(filename, 'rb') as input_file: + from lib2to3.pgen2 import tokenize as lib2to3_tokenize + encoding = lib2to3_tokenize.detect_encoding(input_file.readline)[0] + + with open_with_encoding(filename, encoding=encoding) as test_file: + test_file.read(limit_byte_check) + + return encoding + except (LookupError, SyntaxError, UnicodeDecodeError): + return 'latin-1' + + +def readlines_from_file(filename): + """Return contents of file.""" + with open_with_encoding(filename) as input_file: + return input_file.readlines() + + +def extended_blank_lines(logical_line, + blank_lines, + blank_before, + indent_level, + previous_logical): + """Check for missing blank lines after class declaration.""" + if previous_logical.startswith('def '): + if blank_lines and pycodestyle.DOCSTRING_REGEX.match(logical_line): + yield (0, 'E303 too many blank lines ({})'.format(blank_lines)) + elif pycodestyle.DOCSTRING_REGEX.match(previous_logical): + # Missing blank line between class docstring and method declaration. 
+ if ( + indent_level and + not blank_lines and + not blank_before and + logical_line.startswith(('def ')) and + '(self' in logical_line + ): + yield (0, 'E301 expected 1 blank line, found 0') + + +pycodestyle.register_check(extended_blank_lines) + + +def continued_indentation(logical_line, tokens, indent_level, hang_closing, + indent_char, noqa): + """Override pycodestyle's function to provide indentation information.""" + first_row = tokens[0][2][0] + nrows = 1 + tokens[-1][2][0] - first_row + if noqa or nrows == 1: + return + + # indent_next tells us whether the next block is indented. Assuming + # that it is indented by 4 spaces, then we should not allow 4-space + # indents on the final continuation line. In turn, some other + # indents are allowed to have an extra 4 spaces. + indent_next = logical_line.endswith(':') + + row = depth = 0 + valid_hangs = ( + (DEFAULT_INDENT_SIZE,) + if indent_char != '\t' else (DEFAULT_INDENT_SIZE, + 2 * DEFAULT_INDENT_SIZE) + ) + + # Remember how many brackets were opened on each line. + parens = [0] * nrows + + # Relative indents of physical lines. + rel_indent = [0] * nrows + + # For each depth, collect a list of opening rows. + open_rows = [[0]] + # For each depth, memorize the hanging indentation. + hangs = [None] + + # Visual indents. + indent_chances = {} + last_indent = tokens[0][2] + indent = [last_indent[1]] + + last_token_multiline = None + line = None + last_line = '' + last_line_begins_with_multiline = False + for token_type, text, start, end, line in tokens: + + newline = row < start[0] - first_row + if newline: + row = start[0] - first_row + newline = (not last_token_multiline and + token_type not in (tokenize.NL, tokenize.NEWLINE)) + last_line_begins_with_multiline = last_token_multiline + + if newline: + # This is the beginning of a continuation line. + last_indent = start + + # Record the initial indent. + rel_indent[row] = pycodestyle.expand_indent(line) - indent_level + + # Identify closing bracket. + close_bracket = (token_type == tokenize.OP and text in ']})') + + # Is the indent relative to an opening bracket line? + for open_row in reversed(open_rows[depth]): + hang = rel_indent[row] - rel_indent[open_row] + hanging_indent = hang in valid_hangs + if hanging_indent: + break + if hangs[depth]: + hanging_indent = (hang == hangs[depth]) + + visual_indent = (not close_bracket and hang > 0 and + indent_chances.get(start[1])) + + if close_bracket and indent[depth]: + # Closing bracket for visual indent. + if start[1] != indent[depth]: + yield (start, 'E124 {}'.format(indent[depth])) + elif close_bracket and not hang: + # closing bracket matches indentation of opening bracket's line + if hang_closing: + yield (start, 'E133 {}'.format(indent[depth])) + elif indent[depth] and start[1] < indent[depth]: + if visual_indent is not True: + # Visual indent is broken. + yield (start, 'E128 {}'.format(indent[depth])) + elif (hanging_indent or + (indent_next and + rel_indent[row] == 2 * DEFAULT_INDENT_SIZE)): + # Hanging indent is verified. + if close_bracket and not hang_closing: + yield (start, 'E123 {}'.format(indent_level + + rel_indent[open_row])) + hangs[depth] = hang + elif visual_indent is True: + # Visual indent is verified. + indent[depth] = start[1] + elif visual_indent in (text, str): + # Ignore token lined up with matching one from a previous line. + pass + else: + one_indented = (indent_level + rel_indent[open_row] + + DEFAULT_INDENT_SIZE) + # Indent is broken. 
+ if hang <= 0: + error = ('E122', one_indented) + elif indent[depth]: + error = ('E127', indent[depth]) + elif not close_bracket and hangs[depth]: + error = ('E131', one_indented) + elif hang > DEFAULT_INDENT_SIZE: + error = ('E126', one_indented) + else: + hangs[depth] = hang + error = ('E121', one_indented) + + yield (start, '{} {}'.format(*error)) + + # Look for visual indenting. + if ( + parens[row] and + token_type not in (tokenize.NL, tokenize.COMMENT) and + not indent[depth] + ): + indent[depth] = start[1] + indent_chances[start[1]] = True + # Deal with implicit string concatenation. + elif (token_type in (tokenize.STRING, tokenize.COMMENT) or + text in ('u', 'ur', 'b', 'br')): + indent_chances[start[1]] = str + # Special case for the "if" statement because len("if (") is equal to + # 4. + elif not indent_chances and not row and not depth and text == 'if': + indent_chances[end[1] + 1] = True + elif text == ':' and line[end[1]:].isspace(): + open_rows[depth].append(row) + + # Keep track of bracket depth. + if token_type == tokenize.OP: + if text in '([{': + depth += 1 + indent.append(0) + hangs.append(None) + if len(open_rows) == depth: + open_rows.append([]) + open_rows[depth].append(row) + parens[row] += 1 + elif text in ')]}' and depth > 0: + # Parent indents should not be more than this one. + prev_indent = indent.pop() or last_indent[1] + hangs.pop() + for d in range(depth): + if indent[d] > prev_indent: + indent[d] = 0 + for ind in list(indent_chances): + if ind >= prev_indent: + del indent_chances[ind] + del open_rows[depth + 1:] + depth -= 1 + if depth: + indent_chances[indent[depth]] = True + for idx in range(row, -1, -1): + if parens[idx]: + parens[idx] -= 1 + break + assert len(indent) == depth + 1 + if ( + start[1] not in indent_chances and + # This is for purposes of speeding up E121 (GitHub #90). + not last_line.rstrip().endswith(',') + ): + # Allow to line up tokens. + indent_chances[start[1]] = text + + last_token_multiline = (start[0] != end[0]) + if last_token_multiline: + rel_indent[end[0] - first_row] = rel_indent[row] + + last_line = line + + if ( + indent_next and + not last_line_begins_with_multiline and + pycodestyle.expand_indent(line) == indent_level + DEFAULT_INDENT_SIZE + ): + pos = (start[0], indent[0] + 4) + desired_indent = indent_level + 2 * DEFAULT_INDENT_SIZE + if visual_indent: + yield (pos, 'E129 {}'.format(desired_indent)) + else: + yield (pos, 'E125 {}'.format(desired_indent)) + + +del pycodestyle._checks['logical_line'][pycodestyle.continued_indentation] +pycodestyle.register_check(continued_indentation) + + +class FixPEP8(object): + + """Fix invalid code. + + Fixer methods are prefixed "fix_". The _fix_source() method looks for these + automatically. + + The fixer method can take either one or two arguments (in addition to + self). The first argument is "result", which is the error information from + pycodestyle. The second argument, "logical", is required only for + logical-line fixes. + + The fixer method can return the list of modified lines or None. An empty + list would mean that no changes were made. None would mean that only the + line reported in the pycodestyle error was modified. Note that the modified + line numbers that are returned are indexed at 1. This typically would + correspond with the line number reported in the pycodestyle error + information. 
+ + [fixed method list] + - e111,e114,e115,e116 + - e121,e122,e123,e124,e125,e126,e127,e128,e129 + - e201,e202,e203 + - e211 + - e221,e222,e223,e224,e225 + - e231 + - e251,e252 + - e261,e262 + - e271,e272,e273,e274,e275 + - e301,e302,e303,e304,e305,e306 + - e401,e402 + - e502 + - e701,e702,e703,e704 + - e711,e712,e713,e714 + - e722 + - e731 + - w291 + - w503,504 + + """ + + def __init__(self, filename, + options, + contents=None, + long_line_ignore_cache=None): + self.filename = filename + if contents is None: + self.source = readlines_from_file(filename) + else: + sio = io.StringIO(contents) + self.source = sio.readlines() + self.options = options + self.indent_word = _get_indentword(''.join(self.source)) + + # collect imports line + self.imports = {} + for i, line in enumerate(self.source): + if (line.find("import ") == 0 or line.find("from ") == 0) and \ + line not in self.imports: + # collect only import statements that first appeared + self.imports[line] = i + + self.long_line_ignore_cache = ( + set() if long_line_ignore_cache is None + else long_line_ignore_cache) + + # Many fixers are the same even though pycodestyle categorizes them + # differently. + self.fix_e115 = self.fix_e112 + self.fix_e121 = self._fix_reindent + self.fix_e122 = self._fix_reindent + self.fix_e123 = self._fix_reindent + self.fix_e124 = self._fix_reindent + self.fix_e126 = self._fix_reindent + self.fix_e127 = self._fix_reindent + self.fix_e128 = self._fix_reindent + self.fix_e129 = self._fix_reindent + self.fix_e133 = self.fix_e131 + self.fix_e202 = self.fix_e201 + self.fix_e203 = self.fix_e201 + self.fix_e211 = self.fix_e201 + self.fix_e221 = self.fix_e271 + self.fix_e222 = self.fix_e271 + self.fix_e223 = self.fix_e271 + self.fix_e226 = self.fix_e225 + self.fix_e227 = self.fix_e225 + self.fix_e228 = self.fix_e225 + self.fix_e241 = self.fix_e271 + self.fix_e242 = self.fix_e224 + self.fix_e252 = self.fix_e225 + self.fix_e261 = self.fix_e262 + self.fix_e272 = self.fix_e271 + self.fix_e273 = self.fix_e271 + self.fix_e274 = self.fix_e271 + self.fix_e275 = self.fix_e271 + self.fix_e306 = self.fix_e301 + self.fix_e501 = ( + self.fix_long_line_logically if + options and (options.aggressive >= 2 or options.experimental) else + self.fix_long_line_physically) + self.fix_e703 = self.fix_e702 + self.fix_w292 = self.fix_w291 + self.fix_w293 = self.fix_w291 + + def _fix_source(self, results): + try: + (logical_start, logical_end) = _find_logical(self.source) + logical_support = True + except (SyntaxError, tokenize.TokenError): # pragma: no cover + logical_support = False + + completed_lines = set() + for result in sorted(results, key=_priority_key): + if result['line'] in completed_lines: + continue + + fixed_methodname = 'fix_' + result['id'].lower() + if hasattr(self, fixed_methodname): + fix = getattr(self, fixed_methodname) + + line_index = result['line'] - 1 + original_line = self.source[line_index] + + is_logical_fix = len(_get_parameters(fix)) > 2 + if is_logical_fix: + logical = None + if logical_support: + logical = _get_logical(self.source, + result, + logical_start, + logical_end) + if logical and set(range( + logical[0][0] + 1, + logical[1][0] + 1)).intersection( + completed_lines): + continue + + modified_lines = fix(result, logical) + else: + modified_lines = fix(result) + + if modified_lines is None: + # Force logical fixes to report what they modified. 
+ assert not is_logical_fix + + if self.source[line_index] == original_line: + modified_lines = [] + + if modified_lines: + completed_lines.update(modified_lines) + elif modified_lines == []: # Empty list means no fix + if self.options.verbose >= 2: + print( + '---> Not fixing {error} on line {line}'.format( + error=result['id'], line=result['line']), + file=sys.stderr) + else: # We assume one-line fix when None. + completed_lines.add(result['line']) + else: + if self.options.verbose >= 3: + print( + "---> '{}' is not defined.".format(fixed_methodname), + file=sys.stderr) + + info = result['info'].strip() + print('---> {}:{}:{}:{}'.format(self.filename, + result['line'], + result['column'], + info), + file=sys.stderr) + + def fix(self): + """Return a version of the source code with PEP 8 violations fixed.""" + pep8_options = { + 'ignore': self.options.ignore, + 'select': self.options.select, + 'max_line_length': self.options.max_line_length, + 'hang_closing': self.options.hang_closing, + } + results = _execute_pep8(pep8_options, self.source) + + if self.options.verbose: + progress = {} + for r in results: + if r['id'] not in progress: + progress[r['id']] = set() + progress[r['id']].add(r['line']) + print('---> {n} issue(s) to fix {progress}'.format( + n=len(results), progress=progress), file=sys.stderr) + + if self.options.line_range: + start, end = self.options.line_range + results = [r for r in results + if start <= r['line'] <= end] + + self._fix_source(filter_results(source=''.join(self.source), + results=results, + aggressive=self.options.aggressive)) + + if self.options.line_range: + # If number of lines has changed then change line_range. + count = sum(sline.count('\n') + for sline in self.source[start - 1:end]) + self.options.line_range[1] = start + count - 1 + + return ''.join(self.source) + + def _fix_reindent(self, result): + """Fix a badly indented line. + + This is done by adding or removing from its initial indent only. + + """ + num_indent_spaces = int(result['info'].split()[1]) + line_index = result['line'] - 1 + target = self.source[line_index] + + self.source[line_index] = ' ' * num_indent_spaces + target.lstrip() + + def fix_e112(self, result): + """Fix under-indented comments.""" + line_index = result['line'] - 1 + target = self.source[line_index] + + if not target.lstrip().startswith('#'): + # Don't screw with invalid syntax. + return [] + + self.source[line_index] = self.indent_word + target + + def fix_e113(self, result): + """Fix unexpected indentation.""" + line_index = result['line'] - 1 + target = self.source[line_index] + indent = _get_indentation(target) + stripped = target.lstrip() + self.source[line_index] = indent[1:] + stripped + + def fix_e116(self, result): + """Fix over-indented comments.""" + line_index = result['line'] - 1 + target = self.source[line_index] + + indent = _get_indentation(target) + stripped = target.lstrip() + + if not stripped.startswith('#'): + # Don't screw with invalid syntax. 
+ return [] + + self.source[line_index] = indent[1:] + stripped + + def fix_e117(self, result): + """Fix over-indented.""" + line_index = result['line'] - 1 + target = self.source[line_index] + + indent = _get_indentation(target) + if indent == '\t': + return [] + + stripped = target.lstrip() + + self.source[line_index] = indent[1:] + stripped + + def fix_e125(self, result): + """Fix indentation undistinguish from the next logical line.""" + num_indent_spaces = int(result['info'].split()[1]) + line_index = result['line'] - 1 + target = self.source[line_index] + + spaces_to_add = num_indent_spaces - len(_get_indentation(target)) + indent = len(_get_indentation(target)) + modified_lines = [] + + while len(_get_indentation(self.source[line_index])) >= indent: + self.source[line_index] = (' ' * spaces_to_add + + self.source[line_index]) + modified_lines.append(1 + line_index) # Line indexed at 1. + line_index -= 1 + + return modified_lines + + def fix_e131(self, result): + """Fix indentation undistinguish from the next logical line.""" + num_indent_spaces = int(result['info'].split()[1]) + line_index = result['line'] - 1 + target = self.source[line_index] + + spaces_to_add = num_indent_spaces - len(_get_indentation(target)) + + indent_length = len(_get_indentation(target)) + spaces_to_add = num_indent_spaces - indent_length + if num_indent_spaces == 0 and indent_length == 0: + spaces_to_add = 4 + + if spaces_to_add >= 0: + self.source[line_index] = (' ' * spaces_to_add + + self.source[line_index]) + else: + offset = abs(spaces_to_add) + self.source[line_index] = self.source[line_index][offset:] + + def fix_e201(self, result): + """Remove extraneous whitespace.""" + line_index = result['line'] - 1 + target = self.source[line_index] + offset = result['column'] - 1 + + fixed = fix_whitespace(target, + offset=offset, + replacement='') + + self.source[line_index] = fixed + + def fix_e224(self, result): + """Remove extraneous whitespace around operator.""" + target = self.source[result['line'] - 1] + offset = result['column'] - 1 + fixed = target[:offset] + target[offset:].replace('\t', ' ') + self.source[result['line'] - 1] = fixed + + def fix_e225(self, result): + """Fix missing whitespace around operator.""" + target = self.source[result['line'] - 1] + offset = result['column'] - 1 + fixed = target[:offset] + ' ' + target[offset:] + + # Only proceed if non-whitespace characters match. + # And make sure we don't break the indentation. 
+ if ( + fixed.replace(' ', '') == target.replace(' ', '') and + _get_indentation(fixed) == _get_indentation(target) + ): + self.source[result['line'] - 1] = fixed + error_code = result.get('id', 0) + try: + ts = generate_tokens(fixed) + except (SyntaxError, tokenize.TokenError): + return + if not check_syntax(fixed.lstrip()): + return + errors = list( + pycodestyle.missing_whitespace_around_operator(fixed, ts)) + for e in reversed(errors): + if error_code != e[1].split()[0]: + continue + offset = e[0][1] + fixed = fixed[:offset] + ' ' + fixed[offset:] + self.source[result['line'] - 1] = fixed + else: + return [] + + def fix_e231(self, result): + """Add missing whitespace.""" + line_index = result['line'] - 1 + target = self.source[line_index] + offset = result['column'] + fixed = target[:offset].rstrip() + ' ' + target[offset:].lstrip() + self.source[line_index] = fixed + + def fix_e251(self, result): + """Remove whitespace around parameter '=' sign.""" + line_index = result['line'] - 1 + target = self.source[line_index] + + # This is necessary since pycodestyle sometimes reports columns that + # goes past the end of the physical line. This happens in cases like, + # foo(bar\n=None) + c = min(result['column'] - 1, + len(target) - 1) + + if target[c].strip(): + fixed = target + else: + fixed = target[:c].rstrip() + target[c:].lstrip() + + # There could be an escaped newline + # + # def foo(a=\ + # 1) + if fixed.endswith(('=\\\n', '=\\\r\n', '=\\\r')): + self.source[line_index] = fixed.rstrip('\n\r \t\\') + self.source[line_index + 1] = self.source[line_index + 1].lstrip() + return [line_index + 1, line_index + 2] # Line indexed at 1 + + self.source[result['line'] - 1] = fixed + + def fix_e262(self, result): + """Fix spacing after comment hash.""" + target = self.source[result['line'] - 1] + offset = result['column'] + + code = target[:offset].rstrip(' \t#') + comment = target[offset:].lstrip(' \t#') + + fixed = code + (' # ' + comment if comment.strip() else '\n') + + self.source[result['line'] - 1] = fixed + + def fix_e271(self, result): + """Fix extraneous whitespace around keywords.""" + line_index = result['line'] - 1 + target = self.source[line_index] + offset = result['column'] - 1 + + fixed = fix_whitespace(target, + offset=offset, + replacement=' ') + + if fixed == target: + return [] + else: + self.source[line_index] = fixed + + def fix_e301(self, result): + """Add missing blank line.""" + cr = '\n' + self.source[result['line'] - 1] = cr + self.source[result['line'] - 1] + + def fix_e302(self, result): + """Add missing 2 blank lines.""" + add_linenum = 2 - int(result['info'].split()[-1]) + offset = 1 + if self.source[result['line'] - 2].strip() == "\\": + offset = 2 + cr = '\n' * add_linenum + self.source[result['line'] - offset] = ( + cr + self.source[result['line'] - offset] + ) + + def fix_e303(self, result): + """Remove extra blank lines.""" + delete_linenum = int(result['info'].split('(')[1].split(')')[0]) - 2 + delete_linenum = max(1, delete_linenum) + + # We need to count because pycodestyle reports an offset line number if + # there are comments. 
+ cnt = 0 + line = result['line'] - 2 + modified_lines = [] + while cnt < delete_linenum and line >= 0: + if not self.source[line].strip(): + self.source[line] = '' + modified_lines.append(1 + line) # Line indexed at 1 + cnt += 1 + line -= 1 + + return modified_lines + + def fix_e304(self, result): + """Remove blank line following function decorator.""" + line = result['line'] - 2 + if not self.source[line].strip(): + self.source[line] = '' + + def fix_e305(self, result): + """Add missing 2 blank lines after end of function or class.""" + add_delete_linenum = 2 - int(result['info'].split()[-1]) + cnt = 0 + offset = result['line'] - 2 + modified_lines = [] + if add_delete_linenum < 0: + # delete cr + add_delete_linenum = abs(add_delete_linenum) + while cnt < add_delete_linenum and offset >= 0: + if not self.source[offset].strip(): + self.source[offset] = '' + modified_lines.append(1 + offset) # Line indexed at 1 + cnt += 1 + offset -= 1 + else: + # add cr + cr = '\n' + # check comment line + while True: + if offset < 0: + break + line = self.source[offset].lstrip() + if not line: + break + if line[0] != '#': + break + offset -= 1 + offset += 1 + self.source[offset] = cr + self.source[offset] + modified_lines.append(1 + offset) # Line indexed at 1. + return modified_lines + + def fix_e401(self, result): + """Put imports on separate lines.""" + line_index = result['line'] - 1 + target = self.source[line_index] + offset = result['column'] - 1 + + if not target.lstrip().startswith('import'): + return [] + + indentation = re.split(pattern=r'\bimport\b', + string=target, maxsplit=1)[0] + fixed = (target[:offset].rstrip('\t ,') + '\n' + + indentation + 'import ' + target[offset:].lstrip('\t ,')) + self.source[line_index] = fixed + + def fix_e402(self, result): + (line_index, offset, target) = get_index_offset_contents(result, + self.source) + for i in range(1, 100): + line = "".join(self.source[line_index:line_index+i]) + try: + generate_tokens("".join(line)) + except (SyntaxError, tokenize.TokenError): + continue + break + if not (target in self.imports and self.imports[target] != line_index): + mod_offset = get_module_imports_on_top_of_file(self.source, + line_index) + self.source[mod_offset] = line + self.source[mod_offset] + for offset in range(i): + self.source[line_index+offset] = '' + + def fix_long_line_logically(self, result, logical): + """Try to make lines fit within --max-line-length characters.""" + if ( + not logical or + len(logical[2]) == 1 or + self.source[result['line'] - 1].lstrip().startswith('#') + ): + return self.fix_long_line_physically(result) + + start_line_index = logical[0][0] + end_line_index = logical[1][0] + logical_lines = logical[2] + + previous_line = get_item(self.source, start_line_index - 1, default='') + next_line = get_item(self.source, end_line_index + 1, default='') + + single_line = join_logical_line(''.join(logical_lines)) + + try: + fixed = self.fix_long_line( + target=single_line, + previous_line=previous_line, + next_line=next_line, + original=''.join(logical_lines)) + except (SyntaxError, tokenize.TokenError): + return self.fix_long_line_physically(result) + + if fixed: + for line_index in range(start_line_index, end_line_index + 1): + self.source[line_index] = '' + self.source[start_line_index] = fixed + return range(start_line_index + 1, end_line_index + 1) + + return [] + + def fix_long_line_physically(self, result): + """Try to make lines fit within --max-line-length characters.""" + line_index = result['line'] - 1 + target = 
self.source[line_index] + + previous_line = get_item(self.source, line_index - 1, default='') + next_line = get_item(self.source, line_index + 1, default='') + + try: + fixed = self.fix_long_line( + target=target, + previous_line=previous_line, + next_line=next_line, + original=target) + except (SyntaxError, tokenize.TokenError): + return [] + + if fixed: + self.source[line_index] = fixed + return [line_index + 1] + + return [] + + def fix_long_line(self, target, previous_line, + next_line, original): + cache_entry = (target, previous_line, next_line) + if cache_entry in self.long_line_ignore_cache: + return [] + + if target.lstrip().startswith('#'): + if self.options.aggressive: + # Wrap commented lines. + return shorten_comment( + line=target, + max_line_length=self.options.max_line_length, + last_comment=not next_line.lstrip().startswith('#')) + return [] + + fixed = get_fixed_long_line( + target=target, + previous_line=previous_line, + original=original, + indent_word=self.indent_word, + max_line_length=self.options.max_line_length, + aggressive=self.options.aggressive, + experimental=self.options.experimental, + verbose=self.options.verbose) + + if fixed and not code_almost_equal(original, fixed): + return fixed + + self.long_line_ignore_cache.add(cache_entry) + return None + + def fix_e502(self, result): + """Remove extraneous escape of newline.""" + (line_index, _, target) = get_index_offset_contents(result, + self.source) + self.source[line_index] = target.rstrip('\n\r \t\\') + '\n' + + def fix_e701(self, result): + """Put colon-separated compound statement on separate lines.""" + line_index = result['line'] - 1 + target = self.source[line_index] + c = result['column'] + + fixed_source = (target[:c] + '\n' + + _get_indentation(target) + self.indent_word + + target[c:].lstrip('\n\r \t\\')) + self.source[result['line'] - 1] = fixed_source + return [result['line'], result['line'] + 1] + + def fix_e702(self, result, logical): + """Put semicolon-separated compound statement on separate lines.""" + if not logical: + return [] # pragma: no cover + logical_lines = logical[2] + + # Avoid applying this when indented. + # https://docs.python.org/reference/compound_stmts.html + for line in logical_lines: + if (result['id'] == 'E702' and ':' in line + and STARTSWITH_INDENT_STATEMENT_REGEX.match(line)): + if self.options.verbose: + print( + '---> avoid fixing {error} with ' + 'other compound statements'.format(error=result['id']), + file=sys.stderr + ) + return [] + + line_index = result['line'] - 1 + target = self.source[line_index] + + if target.rstrip().endswith('\\'): + # Normalize '1; \\\n2' into '1; 2'. + self.source[line_index] = target.rstrip('\n \r\t\\') + self.source[line_index + 1] = self.source[line_index + 1].lstrip() + return [line_index + 1, line_index + 2] + + if target.rstrip().endswith(';'): + self.source[line_index] = target.rstrip('\n \r\t;') + '\n' + return [line_index + 1] + + offset = result['column'] - 1 + first = target[:offset].rstrip(';').rstrip() + second = (_get_indentation(logical_lines[0]) + + target[offset:].lstrip(';').lstrip()) + + # Find inline comment. 
+ inline_comment = None + if target[offset:].lstrip(';').lstrip()[:2] == '# ': + inline_comment = target[offset:].lstrip(';') + + if inline_comment: + self.source[line_index] = first + inline_comment + else: + self.source[line_index] = first + '\n' + second + return [line_index + 1] + + def fix_e704(self, result): + """Fix multiple statements on one line def""" + (line_index, _, target) = get_index_offset_contents(result, + self.source) + match = STARTSWITH_DEF_REGEX.match(target) + if match: + self.source[line_index] = '{}\n{}{}'.format( + match.group(0), + _get_indentation(target) + self.indent_word, + target[match.end(0):].lstrip()) + + def fix_e711(self, result): + """Fix comparison with None.""" + (line_index, offset, target) = get_index_offset_contents(result, + self.source) + + right_offset = offset + 2 + if right_offset >= len(target): + return [] + + left = target[:offset].rstrip() + center = target[offset:right_offset] + right = target[right_offset:].lstrip() + + if center.strip() == '==': + new_center = 'is' + elif center.strip() == '!=': + new_center = 'is not' + else: + return [] + + self.source[line_index] = ' '.join([left, new_center, right]) + + def fix_e712(self, result): + """Fix (trivial case of) comparison with boolean.""" + (line_index, offset, target) = get_index_offset_contents(result, + self.source) + + # Handle very easy "not" special cases. + if re.match(r'^\s*if [\w."\'\[\]]+ == False:$', target): + self.source[line_index] = re.sub(r'if ([\w."\'\[\]]+) == False:', + r'if not \1:', target, count=1) + elif re.match(r'^\s*if [\w."\'\[\]]+ != True:$', target): + self.source[line_index] = re.sub(r'if ([\w."\'\[\]]+) != True:', + r'if not \1:', target, count=1) + else: + right_offset = offset + 2 + if right_offset >= len(target): + return [] + + left = target[:offset].rstrip() + center = target[offset:right_offset] + right = target[right_offset:].lstrip() + + # Handle simple cases only. 
+ new_right = None + if center.strip() == '==': + if re.match(r'\bTrue\b', right): + new_right = re.sub(r'\bTrue\b *', '', right, count=1) + elif center.strip() == '!=': + if re.match(r'\bFalse\b', right): + new_right = re.sub(r'\bFalse\b *', '', right, count=1) + + if new_right is None: + return [] + + if new_right[0].isalnum(): + new_right = ' ' + new_right + + self.source[line_index] = left + new_right + + def fix_e713(self, result): + """Fix (trivial case of) non-membership check.""" + (line_index, offset, target) = get_index_offset_contents(result, + self.source) + + # to convert once 'not in' -> 'in' + before_target = target[:offset] + target = target[offset:] + match_notin = COMPARE_NEGATIVE_REGEX_THROUGH.search(target) + notin_pos_start, notin_pos_end = 0, 0 + if match_notin: + notin_pos_start = match_notin.start(1) + notin_pos_end = match_notin.end() + target = '{}{} {}'.format( + target[:notin_pos_start], 'in', target[notin_pos_end:]) + + # fix 'not in' + match = COMPARE_NEGATIVE_REGEX.search(target) + if match: + if match.group(3) == 'in': + pos_start = match.start(1) + new_target = '{5}{0}{1} {2} {3} {4}'.format( + target[:pos_start], match.group(2), match.group(1), + match.group(3), target[match.end():], before_target) + if match_notin: + # revert 'in' -> 'not in' + pos_start = notin_pos_start + offset + pos_end = notin_pos_end + offset - 4 # len('not ') + new_target = '{}{} {}'.format( + new_target[:pos_start], 'not in', new_target[pos_end:]) + self.source[line_index] = new_target + + def fix_e714(self, result): + """Fix object identity should be 'is not' case.""" + (line_index, offset, target) = get_index_offset_contents(result, + self.source) + + # to convert once 'is not' -> 'is' + before_target = target[:offset] + target = target[offset:] + match_isnot = COMPARE_NEGATIVE_REGEX_THROUGH.search(target) + isnot_pos_start, isnot_pos_end = 0, 0 + if match_isnot: + isnot_pos_start = match_isnot.start(1) + isnot_pos_end = match_isnot.end() + target = '{}{} {}'.format( + target[:isnot_pos_start], 'in', target[isnot_pos_end:]) + + match = COMPARE_NEGATIVE_REGEX.search(target) + if match: + if match.group(3).startswith('is'): + pos_start = match.start(1) + new_target = '{5}{0}{1} {2} {3} {4}'.format( + target[:pos_start], match.group(2), match.group(3), + match.group(1), target[match.end():], before_target) + if match_isnot: + # revert 'is' -> 'is not' + pos_start = isnot_pos_start + offset + pos_end = isnot_pos_end + offset - 4 # len('not ') + new_target = '{}{} {}'.format( + new_target[:pos_start], 'is not', new_target[pos_end:]) + self.source[line_index] = new_target + + def fix_e722(self, result): + """fix bare except""" + (line_index, _, target) = get_index_offset_contents(result, + self.source) + match = BARE_EXCEPT_REGEX.search(target) + if match: + self.source[line_index] = '{}{}{}'.format( + target[:result['column'] - 1], "except BaseException:", + target[match.end():]) + + def fix_e731(self, result): + """Fix do not assign a lambda expression check.""" + (line_index, _, target) = get_index_offset_contents(result, + self.source) + match = LAMBDA_REGEX.search(target) + if match: + end = match.end() + self.source[line_index] = '{}def {}({}): return {}'.format( + target[:match.start(0)], match.group(1), match.group(2), + target[end:].lstrip()) + + def fix_w291(self, result): + """Remove trailing whitespace.""" + fixed_line = self.source[result['line'] - 1].rstrip() + self.source[result['line'] - 1] = fixed_line + '\n' + + def fix_w391(self, _): + """Remove trailing blank 
lines.""" + blank_count = 0 + for line in reversed(self.source): + line = line.rstrip() + if line: + break + else: + blank_count += 1 + + original_length = len(self.source) + self.source = self.source[:original_length - blank_count] + return range(1, 1 + original_length) + + def fix_w503(self, result): + (line_index, _, target) = get_index_offset_contents(result, + self.source) + one_string_token = target.split()[0] + try: + ts = generate_tokens(one_string_token) + except (SyntaxError, tokenize.TokenError): + return + if not _is_binary_operator(ts[0][0], one_string_token): + return + # find comment + comment_index = 0 + found_not_comment_only_line = False + comment_only_linenum = 0 + for i in range(5): + # NOTE: try to parse code in 5 times + if (line_index - i) < 0: + break + from_index = line_index - i - 1 + if from_index < 0 or len(self.source) <= from_index: + break + to_index = line_index + 1 + strip_line = self.source[from_index].lstrip() + if ( + not found_not_comment_only_line and + strip_line and strip_line[0] == '#' + ): + comment_only_linenum += 1 + continue + found_not_comment_only_line = True + try: + ts = generate_tokens("".join(self.source[from_index:to_index])) + except (SyntaxError, tokenize.TokenError): + continue + newline_count = 0 + newline_index = [] + for index, t in enumerate(ts): + if t[0] in (tokenize.NEWLINE, tokenize.NL): + newline_index.append(index) + newline_count += 1 + if newline_count > 2: + tts = ts[newline_index[-3]:] + else: + tts = ts + old = [] + for t in tts: + if t[0] in (tokenize.NEWLINE, tokenize.NL): + newline_count -= 1 + if newline_count <= 1: + break + if tokenize.COMMENT == t[0] and old and old[0] != tokenize.NL: + comment_index = old[3][1] + break + old = t + break + i = target.index(one_string_token) + fix_target_line = line_index - 1 - comment_only_linenum + self.source[line_index] = '{}{}'.format( + target[:i], target[i + len(one_string_token):].lstrip()) + nl = find_newline(self.source[fix_target_line:line_index]) + before_line = self.source[fix_target_line] + bl = before_line.index(nl) + if comment_index: + self.source[fix_target_line] = '{} {} {}'.format( + before_line[:comment_index], one_string_token, + before_line[comment_index + 1:]) + else: + if before_line[:bl].endswith("#"): + # special case + # see: https://github.com/hhatto/autopep8/issues/503 + self.source[fix_target_line] = '{}{} {}'.format( + before_line[:bl-2], one_string_token, before_line[bl-2:]) + else: + self.source[fix_target_line] = '{} {}{}'.format( + before_line[:bl], one_string_token, before_line[bl:]) + + def fix_w504(self, result): + (line_index, _, target) = get_index_offset_contents(result, + self.source) + # NOTE: is not collect pointed out in pycodestyle==2.4.0 + comment_index = 0 + operator_position = None # (start_position, end_position) + for i in range(1, 6): + to_index = line_index + i + try: + ts = generate_tokens("".join(self.source[line_index:to_index])) + except (SyntaxError, tokenize.TokenError): + continue + newline_count = 0 + newline_index = [] + for index, t in enumerate(ts): + if _is_binary_operator(t[0], t[1]): + if t[2][0] == 1 and t[3][0] == 1: + operator_position = (t[2][1], t[3][1]) + elif t[0] == tokenize.NAME and t[1] in ("and", "or"): + if t[2][0] == 1 and t[3][0] == 1: + operator_position = (t[2][1], t[3][1]) + elif t[0] in (tokenize.NEWLINE, tokenize.NL): + newline_index.append(index) + newline_count += 1 + if newline_count > 2: + tts = ts[:newline_index[-3]] + else: + tts = ts + old = [] + for t in tts: + if tokenize.COMMENT == t[0] 
and old: + comment_row, comment_index = old[3] + break + old = t + break + if not operator_position: + return + target_operator = target[operator_position[0]:operator_position[1]] + + if comment_index and comment_row == 1: + self.source[line_index] = '{}{}'.format( + target[:operator_position[0]].rstrip(), + target[comment_index:]) + else: + self.source[line_index] = '{}{}{}'.format( + target[:operator_position[0]].rstrip(), + target[operator_position[1]:].lstrip(), + target[operator_position[1]:]) + + next_line = self.source[line_index + 1] + next_line_indent = 0 + m = re.match(r'\s*', next_line) + if m: + next_line_indent = m.span()[1] + self.source[line_index + 1] = '{}{} {}'.format( + next_line[:next_line_indent], target_operator, + next_line[next_line_indent:]) + + def fix_w605(self, result): + (line_index, offset, target) = get_index_offset_contents(result, + self.source) + self.source[line_index] = '{}\\{}'.format( + target[:offset + 1], target[offset + 1:]) + + +def get_module_imports_on_top_of_file(source, import_line_index): + """return import or from keyword position + + example: + > 0: import sys + 1: import os + 2: + 3: def function(): + """ + def is_string_literal(line): + if line[0] in 'uUbB': + line = line[1:] + if line and line[0] in 'rR': + line = line[1:] + return line and (line[0] == '"' or line[0] == "'") + + def is_future_import(line): + nodes = ast.parse(line) + for n in nodes.body: + if isinstance(n, ast.ImportFrom) and n.module == '__future__': + return True + return False + + def has_future_import(source): + offset = 0 + line = '' + for _, next_line in source: + for line_part in next_line.strip().splitlines(True): + line = line + line_part + try: + return is_future_import(line), offset + except SyntaxError: + continue + offset += 1 + return False, offset + + allowed_try_keywords = ('try', 'except', 'else', 'finally') + in_docstring = False + docstring_kind = '"""' + source_stream = iter(enumerate(source)) + for cnt, line in source_stream: + if not in_docstring: + m = DOCSTRING_START_REGEX.match(line.lstrip()) + if m is not None: + in_docstring = True + docstring_kind = m.group('kind') + remain = line[m.end(): m.endpos].rstrip() + if remain[-3:] == docstring_kind: # one line doc + in_docstring = False + continue + if in_docstring: + if line.rstrip()[-3:] == docstring_kind: + in_docstring = False + continue + + if not line.rstrip(): + continue + elif line.startswith('#'): + continue + + if line.startswith('import '): + if cnt == import_line_index: + continue + return cnt + elif line.startswith('from '): + if cnt == import_line_index: + continue + hit, offset = has_future_import( + itertools.chain([(cnt, line)], source_stream) + ) + if hit: + # move to the back + return cnt + offset + 1 + return cnt + elif pycodestyle.DUNDER_REGEX.match(line): + return cnt + elif any(line.startswith(kw) for kw in allowed_try_keywords): + continue + elif is_string_literal(line): + return cnt + else: + return cnt + return 0 + + +def get_index_offset_contents(result, source): + """Return (line_index, column_offset, line_contents).""" + line_index = result['line'] - 1 + return (line_index, + result['column'] - 1, + source[line_index]) + + +def get_fixed_long_line(target, previous_line, original, + indent_word=' ', max_line_length=79, + aggressive=False, experimental=False, verbose=False): + """Break up long line and return result. + + Do this by generating multiple reformatted candidates and then + ranking the candidates to heuristically select the best option. 
+ + """ + indent = _get_indentation(target) + source = target[len(indent):] + assert source.lstrip() == source + assert not target.lstrip().startswith('#') + + # Check for partial multiline. + tokens = list(generate_tokens(source)) + + candidates = shorten_line( + tokens, source, indent, + indent_word, + max_line_length, + aggressive=aggressive, + experimental=experimental, + previous_line=previous_line) + + # Also sort alphabetically as a tie breaker (for determinism). + candidates = sorted( + sorted(set(candidates).union([target, original])), + key=lambda x: line_shortening_rank( + x, + indent_word, + max_line_length, + experimental=experimental)) + + if verbose >= 4: + print(('-' * 79 + '\n').join([''] + candidates + ['']), + file=wrap_output(sys.stderr, 'utf-8')) + + if candidates: + best_candidate = candidates[0] + + # Don't allow things to get longer. + if longest_line_length(best_candidate) > longest_line_length(original): + return None + + return best_candidate + + +def longest_line_length(code): + """Return length of longest line.""" + if len(code) == 0: + return 0 + return max(len(line) for line in code.splitlines()) + + +def join_logical_line(logical_line): + """Return single line based on logical line input.""" + indentation = _get_indentation(logical_line) + + return indentation + untokenize_without_newlines( + generate_tokens(logical_line.lstrip())) + '\n' + + +def untokenize_without_newlines(tokens): + """Return source code based on tokens.""" + text = '' + last_row = 0 + last_column = -1 + + for t in tokens: + token_string = t[1] + (start_row, start_column) = t[2] + (end_row, end_column) = t[3] + + if start_row > last_row: + last_column = 0 + if ( + (start_column > last_column or token_string == '\n') and + not text.endswith(' ') + ): + text += ' ' + + if token_string != '\n': + text += token_string + + last_row = end_row + last_column = end_column + + return text.rstrip() + + +def _find_logical(source_lines): + # Make a variable which is the index of all the starts of lines. + logical_start = [] + logical_end = [] + last_newline = True + parens = 0 + for t in generate_tokens(''.join(source_lines)): + if t[0] in [tokenize.COMMENT, tokenize.DEDENT, + tokenize.INDENT, tokenize.NL, + tokenize.ENDMARKER]: + continue + if not parens and t[0] in [tokenize.NEWLINE, tokenize.SEMI]: + last_newline = True + logical_end.append((t[3][0] - 1, t[2][1])) + continue + if last_newline and not parens: + logical_start.append((t[2][0] - 1, t[2][1])) + last_newline = False + if t[0] == tokenize.OP: + if t[1] in '([{': + parens += 1 + elif t[1] in '}])': + parens -= 1 + return (logical_start, logical_end) + + +def _get_logical(source_lines, result, logical_start, logical_end): + """Return the logical line corresponding to the result. + + Assumes input is already E702-clean. + + """ + row = result['line'] - 1 + col = result['column'] - 1 + ls = None + le = None + for i in range(0, len(logical_start), 1): + assert logical_end + x = logical_end[i] + if x[0] > row or (x[0] == row and x[1] > col): + le = x + ls = logical_start[i] + break + if ls is None: + return None + original = source_lines[ls[0]:le[0] + 1] + return ls, le, original + + +def get_item(items, index, default=None): + if 0 <= index < len(items): + return items[index] + + return default + + +def reindent(source, indent_size, leave_tabs=False): + """Reindent all lines.""" + reindenter = Reindenter(source, leave_tabs) + return reindenter.run(indent_size) + + +def code_almost_equal(a, b): + """Return True if code is similar. 
+ + Ignore whitespace when comparing specific line. + + """ + split_a = split_and_strip_non_empty_lines(a) + split_b = split_and_strip_non_empty_lines(b) + + if len(split_a) != len(split_b): + return False + + for (index, _) in enumerate(split_a): + if ''.join(split_a[index].split()) != ''.join(split_b[index].split()): + return False + + return True + + +def split_and_strip_non_empty_lines(text): + """Return lines split by newline. + + Ignore empty lines. + + """ + return [line.strip() for line in text.splitlines() if line.strip()] + + +def fix_e265(source, aggressive=False): # pylint: disable=unused-argument + """Format block comments.""" + if '#' not in source: + # Optimization. + return source + + ignored_line_numbers = multiline_string_lines( + source, + include_docstrings=True) | set(commented_out_code_lines(source)) + + fixed_lines = [] + sio = io.StringIO(source) + for (line_number, line) in enumerate(sio.readlines(), start=1): + if ( + line.lstrip().startswith('#') and + line_number not in ignored_line_numbers and + not pycodestyle.noqa(line) + ): + indentation = _get_indentation(line) + line = line.lstrip() + + # Normalize beginning if not a shebang. + if len(line) > 1: + pos = next((index for index, c in enumerate(line) + if c != '#')) + if ( + # Leave multiple spaces like '# ' alone. + (line[:pos].count('#') > 1 or line[1].isalnum() or + not line[1].isspace()) and + line[1] not in ':!' and + # Leave stylistic outlined blocks alone. + not line.rstrip().endswith('#') + ): + line = '# ' + line.lstrip('# \t') + + fixed_lines.append(indentation + line) + else: + fixed_lines.append(line) + + return ''.join(fixed_lines) + + +def refactor(source, fixer_names, ignore=None, filename=''): + """Return refactored code using lib2to3. + + Skip if ignore string is produced in the refactored code. + + """ + not_found_end_of_file_newline = source and source.rstrip("\r\n") == source + if not_found_end_of_file_newline: + input_source = source + "\n" + else: + input_source = source + + from lib2to3 import pgen2 + try: + new_text = refactor_with_2to3(input_source, + fixer_names=fixer_names, + filename=filename) + except (pgen2.parse.ParseError, + SyntaxError, + UnicodeDecodeError, + UnicodeEncodeError): + return source + + if ignore: + if ignore in new_text and ignore not in source: + return source + + if not_found_end_of_file_newline: + return new_text.rstrip("\r\n") + + return new_text + + +def code_to_2to3(select, ignore, where='', verbose=False): + fixes = set() + for code, fix in CODE_TO_2TO3.items(): + if code_match(code, select=select, ignore=ignore): + if verbose: + print('---> Applying {} fix for {}'.format(where, + code.upper()), + file=sys.stderr) + fixes |= set(fix) + return fixes + + +def fix_2to3(source, + aggressive=True, select=None, ignore=None, filename='', + where='global', verbose=False): + """Fix various deprecated code (via lib2to3).""" + if not aggressive: + return source + + select = select or [] + ignore = ignore or [] + + return refactor(source, + code_to_2to3(select=select, + ignore=ignore, + where=where, + verbose=verbose), + filename=filename) + + +def fix_w602(source, aggressive=True): + """Fix deprecated form of raising exception.""" + if not aggressive: + return source + + return refactor(source, ['raise'], ignore='with_traceback') + + +def find_newline(source): + """Return type of newline used in source. + + Input is a list of lines. 
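+
+    Returns CRLF, CR, or LF, whichever ending is most frequent (LF when
+    no line ending is found).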
+ + """ + assert not isinstance(source, str) + + counter = collections.defaultdict(int) + for line in source: + if line.endswith(CRLF): + counter[CRLF] += 1 + elif line.endswith(CR): + counter[CR] += 1 + elif line.endswith(LF): + counter[LF] += 1 + + return (sorted(counter, key=counter.get, reverse=True) or [LF])[0] + + +def _get_indentword(source): + """Return indentation type.""" + indent_word = ' ' # Default in case source has no indentation + try: + for t in generate_tokens(source): + if t[0] == token.INDENT: + indent_word = t[1] + break + except (SyntaxError, tokenize.TokenError): + pass + return indent_word + + +def _get_indentation(line): + """Return leading whitespace.""" + if line.strip(): + non_whitespace_index = len(line) - len(line.lstrip()) + return line[:non_whitespace_index] + + return '' + + +def get_diff_text(old, new, filename): + """Return text of unified diff between old and new.""" + newline = '\n' + diff = difflib.unified_diff( + old, new, + 'original/' + filename, + 'fixed/' + filename, + lineterm=newline) + + text = '' + for line in diff: + text += line + + # Work around missing newline (http://bugs.python.org/issue2142). + if text and not line.endswith(newline): + text += newline + r'\ No newline at end of file' + newline + + return text + + +def _priority_key(pep8_result): + """Key for sorting PEP8 results. + + Global fixes should be done first. This is important for things like + indentation. + + """ + priority = [ + # Fix multiline colon-based before semicolon based. + 'e701', + # Break multiline statements early. + 'e702', + # Things that make lines longer. + 'e225', 'e231', + # Remove extraneous whitespace before breaking lines. + 'e201', + # Shorten whitespace in comment before resorting to wrapping. + 'e262' + ] + middle_index = 10000 + lowest_priority = [ + # We need to shorten lines last since the logical fixer can get in a + # loop, which causes us to exit early. + 'e501', + ] + key = pep8_result['id'].lower() + try: + return priority.index(key) + except ValueError: + try: + return middle_index + lowest_priority.index(key) + 1 + except ValueError: + return middle_index + + +def shorten_line(tokens, source, indentation, indent_word, max_line_length, + aggressive=False, experimental=False, previous_line=''): + """Separate line at OPERATOR. + + Multiple candidates will be yielded. + + """ + for candidate in _shorten_line(tokens=tokens, + source=source, + indentation=indentation, + indent_word=indent_word, + aggressive=aggressive, + previous_line=previous_line): + yield candidate + + if aggressive: + for key_token_strings in SHORTEN_OPERATOR_GROUPS: + shortened = _shorten_line_at_tokens( + tokens=tokens, + source=source, + indentation=indentation, + indent_word=indent_word, + key_token_strings=key_token_strings, + aggressive=aggressive) + + if shortened is not None and shortened != source: + yield shortened + + if experimental: + for shortened in _shorten_line_at_tokens_new( + tokens=tokens, + source=source, + indentation=indentation, + max_line_length=max_line_length): + + yield shortened + + +def _shorten_line(tokens, source, indentation, indent_word, + aggressive=False, previous_line=''): + """Separate line at OPERATOR. + + The input is expected to be free of newlines except for inside multiline + strings and at the end. + + Multiple candidates will be yielded. 
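+
+    Inline comments may be moved to the previous line, and the line may
+    be split after operator tokens (other than '='), provided the
+    result still parses.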
+
+    """
+    for (token_type,
+         token_string,
+         start_offset,
+         end_offset) in token_offsets(tokens):
+
+        if (
+            token_type == tokenize.COMMENT and
+            not is_probably_part_of_multiline(previous_line) and
+            not is_probably_part_of_multiline(source) and
+            not source[start_offset + 1:].strip().lower().startswith(
+                ('noqa', 'pragma:', 'pylint:'))
+        ):
+            # Move inline comments to previous line.
+            first = source[:start_offset]
+            second = source[start_offset:]
+            yield (indentation + second.strip() + '\n' +
+                   indentation + first.strip() + '\n')
+        elif token_type == token.OP and token_string != '=':
+            # Don't break on '=' after keyword as this violates PEP 8.
+
+            assert token_type != token.INDENT
+
+            first = source[:end_offset]
+
+            second_indent = indentation
+            if (first.rstrip().endswith('(') and
+                    source[end_offset:].lstrip().startswith(')')):
+                pass
+            elif first.rstrip().endswith('('):
+                second_indent += indent_word
+            elif '(' in first:
+                second_indent += ' ' * (1 + first.find('('))
+            else:
+                second_indent += indent_word
+
+            second = (second_indent + source[end_offset:].lstrip())
+            if (
+                not second.strip() or
+                second.lstrip().startswith('#')
+            ):
+                continue
+
+            # Do not begin a line with a comma
+            if second.lstrip().startswith(','):
+                continue
+            # Do not end a line with a dot
+            if first.rstrip().endswith('.'):
+                continue
+            if token_string in '+-*/':
+                fixed = first + ' \\' + '\n' + second
+            else:
+                fixed = first + '\n' + second
+
+            # Only fix if syntax is okay.
+            if check_syntax(normalize_multiline(fixed)
+                            if aggressive else fixed):
+                yield indentation + fixed
+
+
+def _is_binary_operator(token_type, text):
+    return ((token_type == tokenize.OP or text in ['and', 'or']) and
+            text not in '()[]{},:.;@=%~')
+
+
+# A convenient way to handle tokens.
+Token = collections.namedtuple('Token', ['token_type', 'token_string',
+                                         'spos', 'epos', 'line'])
+
+
+class ReformattedLines(object):
+
+    """The reflowed lines of atoms.
+
+    Each part of the line is represented as an "atom." They can be moved
+    around when need be to get the optimal formatting.
+ + """ + + ########################################################################### + # Private Classes + + class _Indent(object): + + """Represent an indentation in the atom stream.""" + + def __init__(self, indent_amt): + self._indent_amt = indent_amt + + def emit(self): + return ' ' * self._indent_amt + + @property + def size(self): + return self._indent_amt + + class _Space(object): + + """Represent a space in the atom stream.""" + + def emit(self): + return ' ' + + @property + def size(self): + return 1 + + class _LineBreak(object): + + """Represent a line break in the atom stream.""" + + def emit(self): + return '\n' + + @property + def size(self): + return 0 + + def __init__(self, max_line_length): + self._max_line_length = max_line_length + self._lines = [] + self._bracket_depth = 0 + self._prev_item = None + self._prev_prev_item = None + + def __repr__(self): + return self.emit() + + ########################################################################### + # Public Methods + + def add(self, obj, indent_amt, break_after_open_bracket): + if isinstance(obj, Atom): + self._add_item(obj, indent_amt) + return + + self._add_container(obj, indent_amt, break_after_open_bracket) + + def add_comment(self, item): + num_spaces = 2 + if len(self._lines) > 1: + if isinstance(self._lines[-1], self._Space): + num_spaces -= 1 + if len(self._lines) > 2: + if isinstance(self._lines[-2], self._Space): + num_spaces -= 1 + + while num_spaces > 0: + self._lines.append(self._Space()) + num_spaces -= 1 + self._lines.append(item) + + def add_indent(self, indent_amt): + self._lines.append(self._Indent(indent_amt)) + + def add_line_break(self, indent): + self._lines.append(self._LineBreak()) + self.add_indent(len(indent)) + + def add_line_break_at(self, index, indent_amt): + self._lines.insert(index, self._LineBreak()) + self._lines.insert(index + 1, self._Indent(indent_amt)) + + def add_space_if_needed(self, curr_text, equal=False): + if ( + not self._lines or isinstance( + self._lines[-1], (self._LineBreak, self._Indent, self._Space)) + ): + return + + prev_text = str(self._prev_item) + prev_prev_text = ( + str(self._prev_prev_item) if self._prev_prev_item else '') + + if ( + # The previous item was a keyword or identifier and the current + # item isn't an operator that doesn't require a space. + ((self._prev_item.is_keyword or self._prev_item.is_string or + self._prev_item.is_name or self._prev_item.is_number) and + (curr_text[0] not in '([{.,:}])' or + (curr_text[0] == '=' and equal))) or + + # Don't place spaces around a '.', unless it's in an 'import' + # statement. + ((prev_prev_text != 'from' and prev_text[-1] != '.' and + curr_text != 'import') and + + # Don't place a space before a colon. + curr_text[0] != ':' and + + # Don't split up ending brackets by spaces. + ((prev_text[-1] in '}])' and curr_text[0] not in '.,}])') or + + # Put a space after a colon or comma. + prev_text[-1] in ':,' or + + # Put space around '=' if asked to. + (equal and prev_text == '=') or + + # Put spaces around non-unary arithmetic operators. 
((self._prev_prev_item and
+                (prev_text not in '+-' or
+                 (self._prev_prev_item.is_name or
+                  self._prev_prev_item.is_number or
+                  self._prev_prev_item.is_string)) and
+                prev_text in ('+', '-', '%', '*', '/', '//', '**', 'in'))))))
+        ):
+            self._lines.append(self._Space())
+
+    def previous_item(self):
+        """Return the previous non-whitespace item."""
+        return self._prev_item
+
+    def fits_on_current_line(self, item_extent):
+        return self.current_size() + item_extent <= self._max_line_length
+
+    def current_size(self):
+        """The size of the current line minus the indentation."""
+        size = 0
+        for item in reversed(self._lines):
+            size += item.size
+            if isinstance(item, self._LineBreak):
+                break
+
+        return size
+
+    def line_empty(self):
+        return (self._lines and
+                isinstance(self._lines[-1],
+                           (self._LineBreak, self._Indent)))
+
+    def emit(self):
+        string = ''
+        for item in self._lines:
+            if isinstance(item, self._LineBreak):
+                string = string.rstrip()
+            string += item.emit()
+
+        return string.rstrip() + '\n'
+
+    ###########################################################################
+    # Private Methods
+
+    def _add_item(self, item, indent_amt):
+        """Add an item to the line.
+
+        Reflow the line to get the best formatting after the item is
+        inserted. The bracket depth indicates if the item is being
+        inserted inside of a container or not.
+
+        """
+        if self._prev_item and self._prev_item.is_string and item.is_string:
+            # Place consecutive string literals on separate lines.
+            self._lines.append(self._LineBreak())
+            self._lines.append(self._Indent(indent_amt))
+
+        item_text = str(item)
+        if self._lines and self._bracket_depth:
+            # Adding the item into a container.
+            self._prevent_default_initializer_splitting(item, indent_amt)
+
+            if item_text in '.,)]}':
+                self._split_after_delimiter(item, indent_amt)
+
+        elif self._lines and not self.line_empty():
+            # Adding the item outside of a container.
+            if self.fits_on_current_line(len(item_text)):
+                self._enforce_space(item)
+
+            else:
+                # Line break for the new item.
+                self._lines.append(self._LineBreak())
+                self._lines.append(self._Indent(indent_amt))
+
+        self._lines.append(item)
+        self._prev_item, self._prev_prev_item = item, self._prev_item
+
+        if item_text in '([{':
+            self._bracket_depth += 1
+
+        elif item_text in '}])':
+            self._bracket_depth -= 1
+            assert self._bracket_depth >= 0
+
+    def _add_container(self, container, indent_amt, break_after_open_bracket):
+        actual_indent = indent_amt + 1
+
+        if (
+            str(self._prev_item) != '=' and
+            not self.line_empty() and
+            not self.fits_on_current_line(
+                container.size + self._bracket_depth + 2)
+        ):
+
+            if str(container)[0] == '(' and self._prev_item.is_name:
+                # Don't split before the opening bracket of a call.
+                break_after_open_bracket = True
+                actual_indent = indent_amt + 4
+            elif (
+                break_after_open_bracket or
+                str(self._prev_item) not in '([{'
+            ):
+                # If the container doesn't fit on the current line and the
+                # current line isn't empty, place the container on the next
+                # line.
+                self._lines.append(self._LineBreak())
+                self._lines.append(self._Indent(indent_amt))
+                break_after_open_bracket = False
+        else:
+            actual_indent = self.current_size() + 1
+            break_after_open_bracket = False
+
+        if isinstance(container, (ListComprehension, IfExpression)):
+            actual_indent = indent_amt
+
+        # Increase the continued indentation only if recursing on a
+        # container.
+        
+ container.reflow(self, ' ' * actual_indent, + break_after_open_bracket=break_after_open_bracket) + + def _prevent_default_initializer_splitting(self, item, indent_amt): + """Prevent splitting between a default initializer. + + When there is a default initializer, it's best to keep it all on + the same line. It's nicer and more readable, even if it goes + over the maximum allowable line length. This goes back along the + current line to determine if we have a default initializer, and, + if so, to remove extraneous whitespaces and add a line + break/indent before it if needed. + + """ + if str(item) == '=': + # This is the assignment in the initializer. Just remove spaces for + # now. + self._delete_whitespace() + return + + if (not self._prev_item or not self._prev_prev_item or + str(self._prev_item) != '='): + return + + self._delete_whitespace() + prev_prev_index = self._lines.index(self._prev_prev_item) + + if ( + isinstance(self._lines[prev_prev_index - 1], self._Indent) or + self.fits_on_current_line(item.size + 1) + ): + # The default initializer is already the only item on this line. + # Don't insert a newline here. + return + + # Replace the space with a newline/indent combo. + if isinstance(self._lines[prev_prev_index - 1], self._Space): + del self._lines[prev_prev_index - 1] + + self.add_line_break_at(self._lines.index(self._prev_prev_item), + indent_amt) + + def _split_after_delimiter(self, item, indent_amt): + """Split the line only after a delimiter.""" + self._delete_whitespace() + + if self.fits_on_current_line(item.size): + return + + last_space = None + for current_item in reversed(self._lines): + if ( + last_space and + (not isinstance(current_item, Atom) or + not current_item.is_colon) + ): + break + else: + last_space = None + if isinstance(current_item, self._Space): + last_space = current_item + if isinstance(current_item, (self._LineBreak, self._Indent)): + return + + if not last_space: + return + + self.add_line_break_at(self._lines.index(last_space), indent_amt) + + def _enforce_space(self, item): + """Enforce a space in certain situations. + + There are cases where we will want a space where normally we + wouldn't put one. This just enforces the addition of a space. + + """ + if isinstance(self._lines[-1], + (self._Space, self._LineBreak, self._Indent)): + return + + if not self._prev_item: + return + + item_text = str(item) + prev_text = str(self._prev_item) + + # Prefer a space around a '.' in an import statement, and between the + # 'import' and '('. + if ( + (item_text == '.' 
and prev_text == 'from') or + (item_text == 'import' and prev_text == '.') or + (item_text == '(' and prev_text == 'import') + ): + self._lines.append(self._Space()) + + def _delete_whitespace(self): + """Delete all whitespace from the end of the line.""" + while isinstance(self._lines[-1], (self._Space, self._LineBreak, + self._Indent)): + del self._lines[-1] + + +class Atom(object): + + """The smallest unbreakable unit that can be reflowed.""" + + def __init__(self, atom): + self._atom = atom + + def __repr__(self): + return self._atom.token_string + + def __len__(self): + return self.size + + def reflow( + self, reflowed_lines, continued_indent, extent, + break_after_open_bracket=False, + is_list_comp_or_if_expr=False, + next_is_dot=False + ): + if self._atom.token_type == tokenize.COMMENT: + reflowed_lines.add_comment(self) + return + + total_size = extent if extent else self.size + + if self._atom.token_string not in ',:([{}])': + # Some atoms will need an extra 1-sized space token after them. + total_size += 1 + + prev_item = reflowed_lines.previous_item() + if ( + not is_list_comp_or_if_expr and + not reflowed_lines.fits_on_current_line(total_size) and + not (next_is_dot and + reflowed_lines.fits_on_current_line(self.size + 1)) and + not reflowed_lines.line_empty() and + not self.is_colon and + not (prev_item and prev_item.is_name and + str(self) == '(') + ): + # Start a new line if there is already something on the line and + # adding this atom would make it go over the max line length. + reflowed_lines.add_line_break(continued_indent) + else: + reflowed_lines.add_space_if_needed(str(self)) + + reflowed_lines.add(self, len(continued_indent), + break_after_open_bracket) + + def emit(self): + return self.__repr__() + + @property + def is_keyword(self): + return keyword.iskeyword(self._atom.token_string) + + @property + def is_string(self): + return self._atom.token_type == tokenize.STRING + + @property + def is_name(self): + return self._atom.token_type == tokenize.NAME + + @property + def is_number(self): + return self._atom.token_type == tokenize.NUMBER + + @property + def is_comma(self): + return self._atom.token_string == ',' + + @property + def is_colon(self): + return self._atom.token_string == ':' + + @property + def size(self): + return len(self._atom.token_string) + + +class Container(object): + + """Base class for all container types.""" + + def __init__(self, items): + self._items = items + + def __repr__(self): + string = '' + last_was_keyword = False + + for item in self._items: + if item.is_comma: + string += ', ' + elif item.is_colon: + string += ': ' + else: + item_string = str(item) + if ( + string and + (last_was_keyword or + (not string.endswith(tuple('([{,.:}]) ')) and + not item_string.startswith(tuple('([{,.:}])')))) + ): + string += ' ' + string += item_string + + last_was_keyword = item.is_keyword + return string + + def __iter__(self): + for element in self._items: + yield element + + def __getitem__(self, idx): + return self._items[idx] + + def reflow(self, reflowed_lines, continued_indent, + break_after_open_bracket=False): + last_was_container = False + for (index, item) in enumerate(self._items): + next_item = get_item(self._items, index + 1) + + if isinstance(item, Atom): + is_list_comp_or_if_expr = ( + isinstance(self, (ListComprehension, IfExpression))) + item.reflow(reflowed_lines, continued_indent, + self._get_extent(index), + is_list_comp_or_if_expr=is_list_comp_or_if_expr, + next_is_dot=(next_item and + str(next_item) == '.')) + if 
last_was_container and item.is_comma: + reflowed_lines.add_line_break(continued_indent) + last_was_container = False + else: # isinstance(item, Container) + reflowed_lines.add(item, len(continued_indent), + break_after_open_bracket) + last_was_container = not isinstance(item, (ListComprehension, + IfExpression)) + + if ( + break_after_open_bracket and index == 0 and + # Prefer to keep empty containers together instead of + # separating them. + str(item) == self.open_bracket and + (not next_item or str(next_item) != self.close_bracket) and + (len(self._items) != 3 or not isinstance(next_item, Atom)) + ): + reflowed_lines.add_line_break(continued_indent) + break_after_open_bracket = False + else: + next_next_item = get_item(self._items, index + 2) + if ( + str(item) not in ['.', '%', 'in'] and + next_item and not isinstance(next_item, Container) and + str(next_item) != ':' and + next_next_item and (not isinstance(next_next_item, Atom) or + str(next_item) == 'not') and + not reflowed_lines.line_empty() and + not reflowed_lines.fits_on_current_line( + self._get_extent(index + 1) + 2) + ): + reflowed_lines.add_line_break(continued_indent) + + def _get_extent(self, index): + """The extent of the full element. + + E.g., the length of a function call or keyword. + + """ + extent = 0 + prev_item = get_item(self._items, index - 1) + seen_dot = prev_item and str(prev_item) == '.' + while index < len(self._items): + item = get_item(self._items, index) + index += 1 + + if isinstance(item, (ListComprehension, IfExpression)): + break + + if isinstance(item, Container): + if prev_item and prev_item.is_name: + if seen_dot: + extent += 1 + else: + extent += item.size + + prev_item = item + continue + elif (str(item) not in ['.', '=', ':', 'not'] and + not item.is_name and not item.is_string): + break + + if str(item) == '.': + seen_dot = True + + extent += item.size + prev_item = item + + return extent + + @property + def is_string(self): + return False + + @property + def size(self): + return len(self.__repr__()) + + @property + def is_keyword(self): + return False + + @property + def is_name(self): + return False + + @property + def is_comma(self): + return False + + @property + def is_colon(self): + return False + + @property + def open_bracket(self): + return None + + @property + def close_bracket(self): + return None + + +class Tuple(Container): + + """A high-level representation of a tuple.""" + + @property + def open_bracket(self): + return '(' + + @property + def close_bracket(self): + return ')' + + +class List(Container): + + """A high-level representation of a list.""" + + @property + def open_bracket(self): + return '[' + + @property + def close_bracket(self): + return ']' + + +class DictOrSet(Container): + + """A high-level representation of a dictionary or set.""" + + @property + def open_bracket(self): + return '{' + + @property + def close_bracket(self): + return '}' + + +class ListComprehension(Container): + + """A high-level representation of a list comprehension.""" + + @property + def size(self): + length = 0 + for item in self._items: + if isinstance(item, IfExpression): + break + length += item.size + return length + + +class IfExpression(Container): + + """A high-level representation of an if-expression.""" + + +def _parse_container(tokens, index, for_or_if=None): + """Parse a high-level container, such as a list, tuple, etc.""" + + # Store the opening bracket. 
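+    # For example, when parsing "(1, 2)" this call starts at the '('
+    # token; the matching ')' below closes the resulting Tuple.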
+ items = [Atom(Token(*tokens[index]))] + index += 1 + + num_tokens = len(tokens) + while index < num_tokens: + tok = Token(*tokens[index]) + + if tok.token_string in ',)]}': + # First check if we're at the end of a list comprehension or + # if-expression. Don't add the ending token as part of the list + # comprehension or if-expression, because they aren't part of those + # constructs. + if for_or_if == 'for': + return (ListComprehension(items), index - 1) + + elif for_or_if == 'if': + return (IfExpression(items), index - 1) + + # We've reached the end of a container. + items.append(Atom(tok)) + + # If not, then we are at the end of a container. + if tok.token_string == ')': + # The end of a tuple. + return (Tuple(items), index) + + elif tok.token_string == ']': + # The end of a list. + return (List(items), index) + + elif tok.token_string == '}': + # The end of a dictionary or set. + return (DictOrSet(items), index) + + elif tok.token_string in '([{': + # A sub-container is being defined. + (container, index) = _parse_container(tokens, index) + items.append(container) + + elif tok.token_string == 'for': + (container, index) = _parse_container(tokens, index, 'for') + items.append(container) + + elif tok.token_string == 'if': + (container, index) = _parse_container(tokens, index, 'if') + items.append(container) + + else: + items.append(Atom(tok)) + + index += 1 + + return (None, None) + + +def _parse_tokens(tokens): + """Parse the tokens. + + This converts the tokens into a form where we can manipulate them + more easily. + + """ + + index = 0 + parsed_tokens = [] + + num_tokens = len(tokens) + while index < num_tokens: + tok = Token(*tokens[index]) + + assert tok.token_type != token.INDENT + if tok.token_type == tokenize.NEWLINE: + # There's only one newline and it's at the end. + break + + if tok.token_string in '([{': + (container, index) = _parse_container(tokens, index) + if not container: + return None + parsed_tokens.append(container) + else: + parsed_tokens.append(Atom(tok)) + + index += 1 + + return parsed_tokens + + +def _reflow_lines(parsed_tokens, indentation, max_line_length, + start_on_prefix_line): + """Reflow the lines so that it looks nice.""" + + if str(parsed_tokens[0]) == 'def': + # A function definition gets indented a bit more. + continued_indent = indentation + ' ' * 2 * DEFAULT_INDENT_SIZE + else: + continued_indent = indentation + ' ' * DEFAULT_INDENT_SIZE + + break_after_open_bracket = not start_on_prefix_line + + lines = ReformattedLines(max_line_length) + lines.add_indent(len(indentation.lstrip('\r\n'))) + + if not start_on_prefix_line: + # If splitting after the opening bracket will cause the first element + # to be aligned weirdly, don't try it. + first_token = get_item(parsed_tokens, 0) + second_token = get_item(parsed_tokens, 1) + + if ( + first_token and second_token and + str(second_token)[0] == '(' and + len(indentation) + len(first_token) + 1 == len(continued_indent) + ): + return None + + for item in parsed_tokens: + lines.add_space_if_needed(str(item), equal=True) + + save_continued_indent = continued_indent + if start_on_prefix_line and isinstance(item, Container): + start_on_prefix_line = False + continued_indent = ' ' * (lines.current_size() + 1) + + item.reflow(lines, continued_indent, break_after_open_bracket) + continued_indent = save_continued_indent + + return lines.emit() + + +def _shorten_line_at_tokens_new(tokens, source, indentation, + max_line_length): + """Shorten the line taking its length into account. 
+ + The input is expected to be free of newlines except for inside + multiline strings and at the end. + + """ + # Yield the original source so to see if it's a better choice than the + # shortened candidate lines we generate here. + yield indentation + source + + parsed_tokens = _parse_tokens(tokens) + + if parsed_tokens: + # Perform two reflows. The first one starts on the same line as the + # prefix. The second starts on the line after the prefix. + fixed = _reflow_lines(parsed_tokens, indentation, max_line_length, + start_on_prefix_line=True) + if fixed and check_syntax(normalize_multiline(fixed.lstrip())): + yield fixed + + fixed = _reflow_lines(parsed_tokens, indentation, max_line_length, + start_on_prefix_line=False) + if fixed and check_syntax(normalize_multiline(fixed.lstrip())): + yield fixed + + +def _shorten_line_at_tokens(tokens, source, indentation, indent_word, + key_token_strings, aggressive): + """Separate line by breaking at tokens in key_token_strings. + + The input is expected to be free of newlines except for inside + multiline strings and at the end. + + """ + offsets = [] + for (index, _t) in enumerate(token_offsets(tokens)): + (token_type, + token_string, + start_offset, + end_offset) = _t + + assert token_type != token.INDENT + + if token_string in key_token_strings: + # Do not break in containers with zero or one items. + unwanted_next_token = { + '(': ')', + '[': ']', + '{': '}'}.get(token_string) + if unwanted_next_token: + if ( + get_item(tokens, + index + 1, + default=[None, None])[1] == unwanted_next_token or + get_item(tokens, + index + 2, + default=[None, None])[1] == unwanted_next_token + ): + continue + + if ( + index > 2 and token_string == '(' and + tokens[index - 1][1] in ',(%[' + ): + # Don't split after a tuple start, or before a tuple start if + # the tuple is in a list. + continue + + if end_offset < len(source) - 1: + # Don't split right before newline. + offsets.append(end_offset) + else: + # Break at adjacent strings. These were probably meant to be on + # separate lines in the first place. + previous_token = get_item(tokens, index - 1) + if ( + token_type == tokenize.STRING and + previous_token and previous_token[0] == tokenize.STRING + ): + offsets.append(start_offset) + + current_indent = None + fixed = None + for line in split_at_offsets(source, offsets): + if fixed: + fixed += '\n' + current_indent + line + + for symbol in '([{': + if line.endswith(symbol): + current_indent += indent_word + else: + # First line. + fixed = line + assert not current_indent + current_indent = indent_word + + assert fixed is not None + + if check_syntax(normalize_multiline(fixed) + if aggressive > 1 else fixed): + return indentation + fixed + + return None + + +def token_offsets(tokens): + """Yield tokens and offsets.""" + end_offset = 0 + previous_end_row = 0 + previous_end_column = 0 + for t in tokens: + token_type = t[0] + token_string = t[1] + (start_row, start_column) = t[2] + (end_row, end_column) = t[3] + + # Account for the whitespace between tokens. + end_offset += start_column + if previous_end_row == start_row: + end_offset -= previous_end_column + + # Record the start offset of the token. + start_offset = end_offset + + # Account for the length of the token itself. + end_offset += len(token_string) + + yield (token_type, + token_string, + start_offset, + end_offset) + + previous_end_row = end_row + previous_end_column = end_column + + +def normalize_multiline(line): + """Normalize multiline-related code that will cause syntax error. 
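+
+    For example, "def foo():" becomes "def foo(): pass" so the fragment
+    compiles on its own.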
+ + This is for purposes of checking syntax. + + """ + if line.startswith('def ') and line.rstrip().endswith(':'): + return line + ' pass' + elif line.startswith('return '): + return 'def _(): ' + line + elif line.startswith('@'): + return line + 'def _(): pass' + elif line.startswith('class '): + return line + ' pass' + elif line.startswith(('if ', 'elif ', 'for ', 'while ')): + return line + ' pass' + + return line + + +def fix_whitespace(line, offset, replacement): + """Replace whitespace at offset and return fixed line.""" + # Replace escaped newlines too + left = line[:offset].rstrip('\n\r \t\\') + right = line[offset:].lstrip('\n\r \t\\') + if right.startswith('#'): + return line + + return left + replacement + right + + +def _execute_pep8(pep8_options, source): + """Execute pycodestyle via python method calls.""" + class QuietReport(pycodestyle.BaseReport): + + """Version of checker that does not print.""" + + def __init__(self, options): + super(QuietReport, self).__init__(options) + self.__full_error_results = [] + + def error(self, line_number, offset, text, check): + """Collect errors.""" + code = super(QuietReport, self).error(line_number, + offset, + text, + check) + if code: + self.__full_error_results.append( + {'id': code, + 'line': line_number, + 'column': offset + 1, + 'info': text}) + + def full_error_results(self): + """Return error results in detail. + + Results are in the form of a list of dictionaries. Each + dictionary contains 'id', 'line', 'column', and 'info'. + + """ + return self.__full_error_results + + checker = pycodestyle.Checker('', lines=source, reporter=QuietReport, + **pep8_options) + checker.check_all() + return checker.report.full_error_results() + + +def _remove_leading_and_normalize(line, with_rstrip=True): + # ignore FF in first lstrip() + if with_rstrip: + return line.lstrip(' \t\v').rstrip(CR + LF) + '\n' + return line.lstrip(' \t\v') + + +class Reindenter(object): + + """Reindents badly-indented code to uniformly use four-space indentation. + + Released to the public domain, by Tim Peters, 03 October 2000. + + """ + + def __init__(self, input_text, leave_tabs=False): + sio = io.StringIO(input_text) + source_lines = sio.readlines() + + self.string_content_line_numbers = multiline_string_lines(input_text) + + # File lines, rstripped & tab-expanded. Dummy at start is so + # that we can use tokenize's 1-based line numbering easily. + # Note that a line is all-blank iff it is a newline. + self.lines = [] + for line_number, line in enumerate(source_lines, start=1): + # Do not modify if inside a multiline string. + if line_number in self.string_content_line_numbers: + self.lines.append(line) + else: + # Only expand leading tabs. + with_rstrip = line_number != len(source_lines) + if leave_tabs: + self.lines.append( + _get_indentation(line) + + _remove_leading_and_normalize(line, with_rstrip) + ) + else: + self.lines.append( + _get_indentation(line).expandtabs() + + _remove_leading_and_normalize(line, with_rstrip) + ) + + self.lines.insert(0, None) + self.index = 1 # index into self.lines of next line + self.input_text = input_text + + def run(self, indent_size=DEFAULT_INDENT_SIZE): + """Fix indentation and return modified line numbers. + + Line numbers are indexed at 1. + + """ + if indent_size < 1: + return self.input_text + + try: + stats = _reindent_stats(tokenize.generate_tokens(self.getline)) + except (SyntaxError, tokenize.TokenError): + return self.input_text + # Remove trailing empty lines. + lines = self.lines + # Sentinel. 
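+        # The (len(lines), 0) entry guarantees that stats[i + 1] exists
+        # for the last real statement in the loop below.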
+ stats.append((len(lines), 0)) + # Map count of leading spaces to # we want. + have2want = {} + # Program after transformation. + after = [] + # Copy over initial empty lines -- there's nothing to do until + # we see a line with *something* on it. + i = stats[0][0] + after.extend(lines[1:i]) + for i in range(len(stats) - 1): + thisstmt, thislevel = stats[i] + nextstmt = stats[i + 1][0] + have = _leading_space_count(lines[thisstmt]) + want = thislevel * indent_size + if want < 0: + # A comment line. + if have: + # An indented comment line. If we saw the same + # indentation before, reuse what it most recently + # mapped to. + want = have2want.get(have, -1) + if want < 0: + # Then it probably belongs to the next real stmt. + for j in range(i + 1, len(stats) - 1): + jline, jlevel = stats[j] + if jlevel >= 0: + if have == _leading_space_count(lines[jline]): + want = jlevel * indent_size + break + # Maybe it's a hanging comment like this one, + if want < 0: + # in which case we should shift it like its base + # line got shifted. + for j in range(i - 1, -1, -1): + jline, jlevel = stats[j] + if jlevel >= 0: + want = (have + _leading_space_count( + after[jline - 1]) - + _leading_space_count(lines[jline])) + break + if want < 0: + # Still no luck -- leave it alone. + want = have + else: + want = 0 + assert want >= 0 + have2want[have] = want + diff = want - have + if diff == 0 or have == 0: + after.extend(lines[thisstmt:nextstmt]) + else: + for line_number, line in enumerate(lines[thisstmt:nextstmt], + start=thisstmt): + if line_number in self.string_content_line_numbers: + after.append(line) + elif diff > 0: + if line == '\n': + after.append(line) + else: + after.append(' ' * diff + line) + else: + remove = min(_leading_space_count(line), -diff) + after.append(line[remove:]) + + return ''.join(after) + + def getline(self): + """Line-getter for tokenize.""" + if self.index >= len(self.lines): + line = '' + else: + line = self.lines[self.index] + self.index += 1 + return line + + +def _reindent_stats(tokens): + """Return list of (lineno, indentlevel) pairs. + + One for each stmt and comment line. indentlevel is -1 for comment + lines, as a signal that tokenize doesn't know what to do about them; + indeed, they're our headache! + + """ + find_stmt = 1 # Next token begins a fresh stmt? + level = 0 # Current indent level. + stats = [] + + for t in tokens: + token_type = t[0] + sline = t[2][0] + line = t[4] + + if token_type == tokenize.NEWLINE: + # A program statement, or ENDMARKER, will eventually follow, + # after some (possibly empty) run of tokens of the form + # (NL | COMMENT)* (INDENT | DEDENT+)? + find_stmt = 1 + + elif token_type == tokenize.INDENT: + find_stmt = 1 + level += 1 + + elif token_type == tokenize.DEDENT: + find_stmt = 1 + level -= 1 + + elif token_type == tokenize.COMMENT: + if find_stmt: + stats.append((sline, -1)) + # But we're still looking for a new stmt, so leave + # find_stmt alone. + + elif token_type == tokenize.NL: + pass + + elif find_stmt: + # This is the first "real token" following a NEWLINE, so it + # must be the first token of the next program statement, or an + # ENDMARKER. + find_stmt = 0 + if line: # Not endmarker. + stats.append((sline, level)) + + return stats + + +def _leading_space_count(line): + """Return number of leading spaces in line.""" + i = 0 + while i < len(line) and line[i] == ' ': + i += 1 + return i + + +def refactor_with_2to3(source_text, fixer_names, filename=''): + """Use lib2to3 to refactor the source. + + Return the refactored source code. 
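+
+    The input is returned unchanged if lib2to3 cannot tokenize it.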
+
+    """
+    from lib2to3.refactor import RefactoringTool
+    fixers = ['lib2to3.fixes.fix_' + name for name in fixer_names]
+    tool = RefactoringTool(fixer_names=fixers, explicit=fixers)
+
+    from lib2to3.pgen2 import tokenize as lib2to3_tokenize
+    try:
+        # The name parameter is necessary particularly for the "import" fixer.
+        return str(tool.refactor_string(source_text, name=filename))
+    except lib2to3_tokenize.TokenError:
+        return source_text
+
+
+def check_syntax(code):
+    """Return True if syntax is okay."""
+    try:
+        return compile(code, '<string>', 'exec', dont_inherit=True)
+    except (SyntaxError, TypeError, ValueError):
+        return False
+
+
+def find_with_line_numbers(pattern, contents):
+    """A wrapper around 're.finditer' to find line numbers.
+
+    Returns a list of line numbers where pattern was found in contents.
+    """
+    matches = list(re.finditer(pattern, contents))
+    if not matches:
+        return []
+
+    end = matches[-1].start()
+
+    # -1 so a failed `rfind` maps to the first line.
+    newline_offsets = {
+        -1: 0
+    }
+    for line_num, m in enumerate(re.finditer(r'\n', contents), 1):
+        offset = m.start()
+        if offset > end:
+            break
+        newline_offsets[offset] = line_num
+
+    def get_line_num(match, contents):
+        """Get the line number of string in a files contents.
+
+        Failing to find the newline is OK, -1 maps to 0
+
+        """
+        newline_offset = contents.rfind('\n', 0, match.start())
+        return newline_offsets[newline_offset]
+
+    return [get_line_num(match, contents) + 1 for match in matches]
+
+
+def get_disabled_ranges(source):
+    """Returns a list of tuples representing the disabled ranges.
+
+    If disabled and no re-enable will disable for rest of file.
+
+    """
+    enable_line_nums = find_with_line_numbers(ENABLE_REGEX, source)
+    disable_line_nums = find_with_line_numbers(DISABLE_REGEX, source)
+    total_lines = len(re.findall("\n", source)) + 1
+
+    enable_commands = {}
+    for num in enable_line_nums:
+        enable_commands[num] = True
+    for num in disable_line_nums:
+        enable_commands[num] = False
+
+    disabled_ranges = []
+    currently_enabled = True
+    disabled_start = None
+
+    for line, commanded_enabled in sorted(enable_commands.items()):
+        if commanded_enabled is False and currently_enabled is True:
+            disabled_start = line
+            currently_enabled = False
+        elif commanded_enabled is True and currently_enabled is False:
+            disabled_ranges.append((disabled_start, line))
+            currently_enabled = True
+
+    if currently_enabled is False:
+        disabled_ranges.append((disabled_start, total_lines))
+
+    return disabled_ranges
+
+
+def filter_disabled_results(result, disabled_ranges):
+    """Filter out reports based on tuple of disabled ranges.
+
+    """
+    line = result['line']
+    for disabled_range in disabled_ranges:
+        if disabled_range[0] <= line <= disabled_range[1]:
+            return False
+    return True
+
+
+def filter_results(source, results, aggressive):
+    """Filter out spurious reports from pycodestyle.
+
+    If aggressive is True, we allow possibly unsafe fixes (E711, E712).
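+
+    Reports inside disabled ranges are dropped, and several checks are
+    skipped inside multiline strings and commented-out code.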
+ + """ + non_docstring_string_line_numbers = multiline_string_lines( + source, include_docstrings=False) + all_string_line_numbers = multiline_string_lines( + source, include_docstrings=True) + + commented_out_code_line_numbers = commented_out_code_lines(source) + + # Filter out the disabled ranges + disabled_ranges = get_disabled_ranges(source) + if disabled_ranges: + results = [ + result for result in results if filter_disabled_results( + result, + disabled_ranges, + ) + ] + + has_e901 = any(result['id'].lower() == 'e901' for result in results) + + for r in results: + issue_id = r['id'].lower() + + if r['line'] in non_docstring_string_line_numbers: + if issue_id.startswith(('e1', 'e501', 'w191')): + continue + + if r['line'] in all_string_line_numbers: + if issue_id in ['e501']: + continue + + # We must offset by 1 for lines that contain the trailing contents of + # multiline strings. + if not aggressive and (r['line'] + 1) in all_string_line_numbers: + # Do not modify multiline strings in non-aggressive mode. Remove + # trailing whitespace could break doctests. + if issue_id.startswith(('w29', 'w39')): + continue + + if aggressive <= 0: + if issue_id.startswith(('e711', 'e72', 'w6')): + continue + + if aggressive <= 1: + if issue_id.startswith(('e712', 'e713', 'e714')): + continue + + if aggressive <= 2: + if issue_id.startswith(('e704')): + continue + + if r['line'] in commented_out_code_line_numbers: + if issue_id.startswith(('e26', 'e501')): + continue + + # Do not touch indentation if there is a token error caused by + # incomplete multi-line statement. Otherwise, we risk screwing up the + # indentation. + if has_e901: + if issue_id.startswith(('e1', 'e7')): + continue + + yield r + + +def multiline_string_lines(source, include_docstrings=False): + """Return line numbers that are within multiline strings. + + The line numbers are indexed at 1. + + Docstrings are ignored. + + """ + line_numbers = set() + previous_token_type = '' + try: + for t in generate_tokens(source): + token_type = t[0] + start_row = t[2][0] + end_row = t[3][0] + + if token_type == tokenize.STRING and start_row != end_row: + if ( + include_docstrings or + previous_token_type != tokenize.INDENT + ): + # We increment by one since we want the contents of the + # string. + line_numbers |= set(range(1 + start_row, 1 + end_row)) + + previous_token_type = token_type + except (SyntaxError, tokenize.TokenError): + pass + + return line_numbers + + +def commented_out_code_lines(source): + """Return line numbers of comments that are likely code. + + Commented-out code is bad practice, but modifying it just adds even + more clutter. + + """ + line_numbers = [] + try: + for t in generate_tokens(source): + token_type = t[0] + token_string = t[1] + start_row = t[2][0] + line = t[4] + + # Ignore inline comments. + if not line.lstrip().startswith('#'): + continue + + if token_type == tokenize.COMMENT: + stripped_line = token_string.lstrip('#').strip() + with warnings.catch_warnings(): + # ignore SyntaxWarning in Python3.8+ + # refs: + # https://bugs.python.org/issue15248 + # https://docs.python.org/3.8/whatsnew/3.8.html#other-language-changes + warnings.filterwarnings("ignore", category=SyntaxWarning) + if ( + ' ' in stripped_line and + '#' not in stripped_line and + check_syntax(stripped_line) + ): + line_numbers.append(start_row) + except (SyntaxError, tokenize.TokenError): + pass + + return line_numbers + + +def shorten_comment(line, max_line_length, last_comment=False): + """Return trimmed or split long comment line. 
+ + If there are no comments immediately following it, do a text wrap. + Doing this wrapping on all comments in general would lead to jagged + comment text. + + """ + assert len(line) > max_line_length + line = line.rstrip() + + # PEP 8 recommends 72 characters for comment text. + indentation = _get_indentation(line) + '# ' + max_line_length = min(max_line_length, + len(indentation) + 72) + + MIN_CHARACTER_REPEAT = 5 + if ( + len(line) - len(line.rstrip(line[-1])) >= MIN_CHARACTER_REPEAT and + not line[-1].isalnum() + ): + # Trim comments that end with things like --------- + return line[:max_line_length] + '\n' + elif last_comment and re.match(r'\s*#+\s*\w+', line): + split_lines = textwrap.wrap(line.lstrip(' \t#'), + initial_indent=indentation, + subsequent_indent=indentation, + width=max_line_length, + break_long_words=False, + break_on_hyphens=False) + return '\n'.join(split_lines) + '\n' + + return line + '\n' + + +def normalize_line_endings(lines, newline): + """Return fixed line endings. + + All lines will be modified to use the most common line ending. + """ + line = [line.rstrip('\n\r') + newline for line in lines] + if line and lines[-1] == lines[-1].rstrip('\n\r'): + line[-1] = line[-1].rstrip('\n\r') + return line + + +def mutual_startswith(a, b): + return b.startswith(a) or a.startswith(b) + + +def code_match(code, select, ignore): + if ignore: + assert not isinstance(ignore, str) + for ignored_code in [c.strip() for c in ignore]: + if mutual_startswith(code.lower(), ignored_code.lower()): + return False + + if select: + assert not isinstance(select, str) + for selected_code in [c.strip() for c in select]: + if mutual_startswith(code.lower(), selected_code.lower()): + return True + return False + + return True + + +def fix_code(source, options=None, encoding=None, apply_config=False): + """Return fixed source code. + + "encoding" will be used to decode "source" if it is a byte string. + + """ + options = _get_options(options, apply_config) + + if not isinstance(source, str): + source = source.decode(encoding or get_encoding()) + + sio = io.StringIO(source) + return fix_lines(sio.readlines(), options=options) + + +def _get_options(raw_options, apply_config): + """Return parsed options.""" + if not raw_options: + return parse_args([''], apply_config=apply_config) + + if isinstance(raw_options, dict): + options = parse_args([''], apply_config=apply_config) + for name, value in raw_options.items(): + if not hasattr(options, name): + raise ValueError("No such option '{}'".format(name)) + + # Check for very basic type errors. + expected_type = type(getattr(options, name)) + if not isinstance(expected_type, (str, )): + if isinstance(value, (str, )): + raise ValueError( + "Option '{}' should not be a string".format(name)) + setattr(options, name, value) + else: + options = raw_options + + return options + + +def fix_lines(source_lines, options, filename=''): + """Return fixed source code.""" + # Transform everything to line feed. Then change them back to original + # before returning fixed source code. + original_newline = find_newline(source_lines) + tmp_source = ''.join(normalize_line_endings(source_lines, '\n')) + + # Keep a history to break out of cycles. + previous_hashes = set() + + if options.line_range: + # Disable "apply_local_fixes()" for now due to issue #175. 
+ fixed_source = tmp_source + else: + pep8_options = { + 'ignore': options.ignore, + 'select': options.select, + 'max_line_length': options.max_line_length, + 'hang_closing': options.hang_closing, + } + sio = io.StringIO(tmp_source) + contents = sio.readlines() + results = _execute_pep8(pep8_options, contents) + codes = {result['id'] for result in results + if result['id'] in SELECTED_GLOBAL_FIXED_METHOD_CODES} + # Apply global fixes only once (for efficiency). + fixed_source = apply_global_fixes(tmp_source, + options, + filename=filename, + codes=codes) + + passes = 0 + long_line_ignore_cache = set() + while hash(fixed_source) not in previous_hashes: + if options.pep8_passes >= 0 and passes > options.pep8_passes: + break + passes += 1 + + previous_hashes.add(hash(fixed_source)) + + tmp_source = copy.copy(fixed_source) + + fix = FixPEP8( + filename, + options, + contents=tmp_source, + long_line_ignore_cache=long_line_ignore_cache) + + fixed_source = fix.fix() + + sio = io.StringIO(fixed_source) + return ''.join(normalize_line_endings(sio.readlines(), original_newline)) + + +def fix_file(filename, options=None, output=None, apply_config=False): + if not options: + options = parse_args([filename], apply_config=apply_config) + + original_source = readlines_from_file(filename) + + fixed_source = original_source + + if options.in_place or options.diff or output: + encoding = detect_encoding(filename) + + if output: + output = LineEndingWrapper(wrap_output(output, encoding=encoding)) + + fixed_source = fix_lines(fixed_source, options, filename=filename) + + if options.diff: + new = io.StringIO(fixed_source) + new = new.readlines() + diff = get_diff_text(original_source, new, filename) + if output: + output.write(diff) + output.flush() + elif options.jobs > 1: + diff = diff.encode(encoding) + return diff + elif options.in_place: + original = "".join(original_source).splitlines() + fixed = fixed_source.splitlines() + original_source_last_line = ( + original_source[-1].split("\n")[-1] if original_source else "" + ) + fixed_source_last_line = fixed_source.split("\n")[-1] + if original != fixed or ( + original_source_last_line != fixed_source_last_line + ): + with open_with_encoding(filename, 'w', encoding=encoding) as fp: + fp.write(fixed_source) + return fixed_source + return None + else: + if output: + output.write(fixed_source) + output.flush() + return fixed_source + + +def global_fixes(): + """Yield multiple (code, function) tuples.""" + for function in list(globals().values()): + if inspect.isfunction(function): + arguments = _get_parameters(function) + if arguments[:1] != ['source']: + continue + + code = extract_code_from_function(function) + if code: + yield (code, function) + + +def _get_parameters(function): + # pylint: disable=deprecated-method + if sys.version_info.major >= 3: + # We need to match "getargspec()", which includes "self" as the first + # value for methods. + # https://bugs.python.org/issue17481#msg209469 + if inspect.ismethod(function): + function = function.__func__ + + return list(inspect.signature(function).parameters) + else: + return inspect.getargspec(function)[0] + + +def apply_global_fixes(source, options, where='global', filename='', + codes=None): + """Run global fixes on source code. + + These are fixes that only need be done once (unlike those in + FixPEP8, which are dependent on pycodestyle). 
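+
+    Reindentation runs first, then each matching fix_* global fixer,
+    and finally the lib2to3-based fixes.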
+ + """ + if codes is None: + codes = [] + if any(code_match(code, select=options.select, ignore=options.ignore) + for code in ['E101', 'E111']): + source = reindent(source, + indent_size=options.indent_size, + leave_tabs=not ( + code_match( + 'W191', + select=options.select, + ignore=options.ignore) + ) + ) + + for (code, function) in global_fixes(): + if code.upper() in SELECTED_GLOBAL_FIXED_METHOD_CODES \ + and code.upper() not in codes: + continue + if code_match(code, select=options.select, ignore=options.ignore): + if options.verbose: + print('---> Applying {} fix for {}'.format(where, + code.upper()), + file=sys.stderr) + source = function(source, + aggressive=options.aggressive) + + source = fix_2to3(source, + aggressive=options.aggressive, + select=options.select, + ignore=options.ignore, + filename=filename, + where=where, + verbose=options.verbose) + + return source + + +def extract_code_from_function(function): + """Return code handled by function.""" + if not function.__name__.startswith('fix_'): + return None + + code = re.sub('^fix_', '', function.__name__) + if not code: + return None + + try: + int(code[1:]) + except ValueError: + return None + + return code + + +def _get_package_version(): + packages = ["pycodestyle: {}".format(pycodestyle.__version__)] + return ", ".join(packages) + + +def create_parser(): + """Return command-line parser.""" + parser = argparse.ArgumentParser(description=docstring_summary(__doc__), + prog='autopep8') + parser.add_argument('--version', action='version', + version='%(prog)s {} ({})'.format( + __version__, _get_package_version())) + parser.add_argument('-v', '--verbose', action='count', + default=0, + help='print verbose messages; ' + 'multiple -v result in more verbose messages') + parser.add_argument('-d', '--diff', action='store_true', + help='print the diff for the fixed source') + parser.add_argument('-i', '--in-place', action='store_true', + help='make changes to files in place') + parser.add_argument('--global-config', metavar='filename', + default=DEFAULT_CONFIG, + help='path to a global pep8 config file; if this file ' + 'does not exist then this is ignored ' + '(default: {})'.format(DEFAULT_CONFIG)) + parser.add_argument('--ignore-local-config', action='store_true', + help="don't look for and apply local config files; " + 'if not passed, defaults are updated with any ' + "config files in the project's root directory") + parser.add_argument('-r', '--recursive', action='store_true', + help='run recursively over directories; ' + 'must be used with --in-place or --diff') + parser.add_argument('-j', '--jobs', type=int, metavar='n', default=1, + help='number of parallel jobs; ' + 'match CPU count if value is less than 1') + parser.add_argument('-p', '--pep8-passes', metavar='n', + default=-1, type=int, + help='maximum number of additional pep8 passes ' + '(default: infinite)') + parser.add_argument('-a', '--aggressive', action='count', default=0, + help='enable non-whitespace changes; ' + 'multiple -a result in more aggressive changes') + parser.add_argument('--experimental', action='store_true', + help='enable experimental fixes') + parser.add_argument('--exclude', metavar='globs', + help='exclude file/directory names that match these ' + 'comma-separated globs') + parser.add_argument('--list-fixes', action='store_true', + help='list codes for fixes; ' + 'used by --ignore and --select') + parser.add_argument('--ignore', metavar='errors', default='', + help='do not fix these errors/warnings ' + '(default: {})'.format(DEFAULT_IGNORE)) + 
parser.add_argument('--select', metavar='errors', default='', + help='fix only these errors/warnings (e.g. E4,W)') + parser.add_argument('--max-line-length', metavar='n', default=79, type=int, + help='set maximum allowed line length ' + '(default: %(default)s)') + parser.add_argument('--line-range', '--range', metavar='line', + default=None, type=int, nargs=2, + help='only fix errors found within this inclusive ' + 'range of line numbers (e.g. 1 99); ' + 'line numbers are indexed at 1') + parser.add_argument('--indent-size', default=DEFAULT_INDENT_SIZE, + type=int, help=argparse.SUPPRESS) + parser.add_argument('--hang-closing', action='store_true', + help='hang-closing option passed to pycodestyle') + parser.add_argument('--exit-code', action='store_true', + help='change to behavior of exit code.' + ' default behavior of return value, 0 is no ' + 'differences, 1 is error exit. return 2 when' + ' add this option. 2 is exists differences.') + parser.add_argument('files', nargs='*', + help="files to format or '-' for standard in") + + return parser + + +def _expand_codes(codes, ignore_codes): + """expand to individual E/W codes""" + ret = set() + + is_conflict = False + if all( + any( + conflicting_code.startswith(code) + for code in codes + ) + for conflicting_code in CONFLICTING_CODES + ): + is_conflict = True + + is_ignore_w503 = "W503" in ignore_codes + is_ignore_w504 = "W504" in ignore_codes + + for code in codes: + if code == "W": + if is_ignore_w503 and is_ignore_w504: + ret.update({"W1", "W2", "W3", "W505", "W6"}) + elif is_ignore_w503: + ret.update({"W1", "W2", "W3", "W504", "W505", "W6"}) + else: + ret.update({"W1", "W2", "W3", "W503", "W505", "W6"}) + elif code in ("W5", "W50"): + if is_ignore_w503 and is_ignore_w504: + ret.update({"W505"}) + elif is_ignore_w503: + ret.update({"W504", "W505"}) + else: + ret.update({"W503", "W505"}) + elif not (code in ("W503", "W504") and is_conflict): + ret.add(code) + + return ret + + +def parse_args(arguments, apply_config=False): + """Parse command-line options.""" + parser = create_parser() + args = parser.parse_args(arguments) + + if not args.files and not args.list_fixes: + parser.exit(EXIT_CODE_ARGPARSE_ERROR, 'incorrect number of arguments') + + args.files = [decode_filename(name) for name in args.files] + + if apply_config: + parser = read_config(args, parser) + # prioritize settings when exist pyproject.toml's tool.autopep8 section + try: + parser_with_pyproject_toml = read_pyproject_toml(args, parser) + except Exception: + parser_with_pyproject_toml = None + if parser_with_pyproject_toml: + parser = parser_with_pyproject_toml + args = parser.parse_args(arguments) + args.files = [decode_filename(name) for name in args.files] + + if '-' in args.files: + if len(args.files) > 1: + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + 'cannot mix stdin and regular files', + ) + + if args.diff: + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + '--diff cannot be used with standard input', + ) + + if args.in_place: + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + '--in-place cannot be used with standard input', + ) + + if args.recursive: + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + '--recursive cannot be used with standard input', + ) + + if len(args.files) > 1 and not (args.in_place or args.diff): + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + 'autopep8 only takes one filename as argument ' + 'unless the "--in-place" or "--diff" args are used', + ) + + if args.recursive and not (args.in_place or args.diff): + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + '--recursive 
must be used with --in-place or --diff', + ) + + if args.in_place and args.diff: + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + '--in-place and --diff are mutually exclusive', + ) + + if args.max_line_length <= 0: + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + '--max-line-length must be greater than 0', + ) + + if args.indent_size <= 0: + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + '--indent-size must be greater than 0', + ) + + if args.select: + args.select = _expand_codes( + _split_comma_separated(args.select), + (_split_comma_separated(args.ignore) if args.ignore else []) + ) + + if args.ignore: + args.ignore = _split_comma_separated(args.ignore) + if all( + not any( + conflicting_code.startswith(ignore_code) + for ignore_code in args.ignore + ) + for conflicting_code in CONFLICTING_CODES + ): + args.ignore.update(CONFLICTING_CODES) + elif not args.select: + if args.aggressive: + # Enable everything by default if aggressive. + args.select = {'E', 'W1', 'W2', 'W3', 'W6'} + else: + args.ignore = _split_comma_separated(DEFAULT_IGNORE) + + if args.exclude: + args.exclude = _split_comma_separated(args.exclude) + else: + args.exclude = {} + + if args.jobs < 1: + # Do not import multiprocessing globally in case it is not supported + # on the platform. + import multiprocessing + args.jobs = multiprocessing.cpu_count() + + if args.jobs > 1 and not (args.in_place or args.diff): + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + 'parallel jobs requires --in-place', + ) + + if args.line_range: + if args.line_range[0] <= 0: + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + '--range must be positive numbers', + ) + if args.line_range[0] > args.line_range[1]: + parser.exit( + EXIT_CODE_ARGPARSE_ERROR, + 'First value of --range should be less than or equal ' + 'to the second', + ) + + return args + + +def _get_normalize_options(config, section, option_list): + for (k, _) in config.items(section): + norm_opt = k.lstrip('-').replace('-', '_') + if not option_list.get(norm_opt): + continue + opt_type = option_list[norm_opt] + if opt_type is int: + value = config.getint(section, k) + elif opt_type is bool: + value = config.getboolean(section, k) + else: + value = config.get(section, k) + yield norm_opt, k, value + + +def read_config(args, parser): + """Read both user configuration and local configuration.""" + config = SafeConfigParser() + + try: + if args.verbose and os.path.exists(args.global_config): + print("read config path: {}".format(args.global_config)) + config.read(args.global_config) + + if not args.ignore_local_config: + parent = tail = args.files and os.path.abspath( + os.path.commonprefix(args.files)) + while tail: + if config.read([os.path.join(parent, fn) + for fn in PROJECT_CONFIG]): + if args.verbose: + for fn in PROJECT_CONFIG: + config_file = os.path.join(parent, fn) + if not os.path.exists(config_file): + continue + print( + "read config path: {}".format( + os.path.join(parent, fn) + ) + ) + break + (parent, tail) = os.path.split(parent) + + defaults = {} + option_list = {o.dest: o.type or type(o.default) + for o in parser._actions} + + for section in ['pep8', 'pycodestyle', 'flake8']: + if not config.has_section(section): + continue + for norm_opt, k, value in _get_normalize_options(config, section, + option_list): + if args.verbose: + print("enable config: section={}, key={}, value={}".format( + section, k, value)) + defaults[norm_opt] = value + + parser.set_defaults(**defaults) + except Error: + # Ignore for now. 
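+        # (a malformed or unreadable config file should not abort
+        # formatting; the defaults already set on the parser are kept).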
+ pass + + return parser + + +def read_pyproject_toml(args, parser): + """Read pyproject.toml and load configuration.""" + import toml + + config = None + + if os.path.exists(args.global_config): + with open(args.global_config) as fp: + config = toml.load(fp) + + if not args.ignore_local_config: + parent = tail = args.files and os.path.abspath( + os.path.commonprefix(args.files)) + while tail: + pyproject_toml = os.path.join(parent, "pyproject.toml") + if os.path.exists(pyproject_toml): + with open(pyproject_toml) as fp: + config = toml.load(fp) + break + (parent, tail) = os.path.split(parent) + + if not config: + return None + + if config.get("tool", {}).get("autopep8") is None: + return None + + config = config.get("tool").get("autopep8") + + defaults = {} + option_list = {o.dest: o.type or type(o.default) + for o in parser._actions} + + TUPLED_OPTIONS = ("ignore", "select") + for (k, v) in config.items(): + norm_opt = k.lstrip('-').replace('-', '_') + if not option_list.get(norm_opt): + continue + if type(v) in (list, tuple) and norm_opt in TUPLED_OPTIONS: + value = ",".join(v) + else: + value = v + if args.verbose: + print("enable pyproject.toml config: " + "key={}, value={}".format(k, value)) + defaults[norm_opt] = value + + if defaults: + # set value when exists key-value in defaults dict + parser.set_defaults(**defaults) + + return parser + + +def _split_comma_separated(string): + """Return a set of strings.""" + return {text.strip() for text in string.split(',') if text.strip()} + + +def decode_filename(filename): + """Return Unicode filename.""" + if isinstance(filename, str): + return filename + + return filename.decode(sys.getfilesystemencoding()) + + +def supported_fixes(): + """Yield pep8 error codes that autopep8 fixes. + + Each item we yield is a tuple of the code followed by its + description. + + """ + yield ('E101', docstring_summary(reindent.__doc__)) + + instance = FixPEP8(filename=None, options=None, contents='') + for attribute in dir(instance): + code = re.match('fix_([ew][0-9][0-9][0-9])', attribute) + if code: + yield ( + code.group(1).upper(), + re.sub(r'\s+', ' ', + docstring_summary(getattr(instance, attribute).__doc__)) + ) + + for (code, function) in sorted(global_fixes()): + yield (code.upper() + (4 - len(code)) * ' ', + re.sub(r'\s+', ' ', docstring_summary(function.__doc__))) + + for code in sorted(CODE_TO_2TO3): + yield (code.upper() + (4 - len(code)) * ' ', + re.sub(r'\s+', ' ', docstring_summary(fix_2to3.__doc__))) + + +def docstring_summary(docstring): + """Return summary of docstring.""" + return docstring.split('\n')[0] if docstring else '' + + +def line_shortening_rank(candidate, indent_word, max_line_length, + experimental=False): + """Return rank of candidate. + + This is for sorting candidates. + + """ + if not candidate.strip(): + return 0 + + rank = 0 + lines = candidate.rstrip().split('\n') + + offset = 0 + if ( + not lines[0].lstrip().startswith('#') and + lines[0].rstrip()[-1] not in '([{' + ): + for (opening, closing) in ('()', '[]', '{}'): + # Don't penalize empty containers that aren't split up. Things like + # this "foo(\n )" aren't particularly good. + opening_loc = lines[0].find(opening) + closing_loc = lines[0].find(closing) + if opening_loc >= 0: + if closing_loc < 0 or closing_loc != opening_loc + 1: + offset = max(offset, 1 + opening_loc) + + current_longest = max(offset + len(x.strip()) for x in lines) + + rank += 4 * max(0, current_longest - max_line_length) + + rank += len(lines) + + # Too much variation in line length is ugly. 
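+    # (measured below as the standard deviation of the line lengths).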
+ rank += 2 * standard_deviation(len(line) for line in lines) + + bad_staring_symbol = { + '(': ')', + '[': ']', + '{': '}'}.get(lines[0][-1]) + + if len(lines) > 1: + if ( + bad_staring_symbol and + lines[1].lstrip().startswith(bad_staring_symbol) + ): + rank += 20 + + for lineno, current_line in enumerate(lines): + current_line = current_line.strip() + + if current_line.startswith('#'): + continue + + for bad_start in ['.', '%', '+', '-', '/']: + if current_line.startswith(bad_start): + rank += 100 + + # Do not tolerate operators on their own line. + if current_line == bad_start: + rank += 1000 + + if ( + current_line.endswith(('.', '%', '+', '-', '/')) and + "': " in current_line + ): + rank += 1000 + + if current_line.endswith(('(', '[', '{', '.')): + # Avoid lonely opening. They result in longer lines. + if len(current_line) <= len(indent_word): + rank += 100 + + # Avoid the ugliness of ", (\n". + if ( + current_line.endswith('(') and + current_line[:-1].rstrip().endswith(',') + ): + rank += 100 + + # Avoid the ugliness of "something[\n" and something[index][\n. + if ( + current_line.endswith('[') and + len(current_line) > 1 and + (current_line[-2].isalnum() or current_line[-2] in ']') + ): + rank += 300 + + # Also avoid the ugliness of "foo.\nbar" + if current_line.endswith('.'): + rank += 100 + + if has_arithmetic_operator(current_line): + rank += 100 + + # Avoid breaking at unary operators. + if re.match(r'.*[(\[{]\s*[\-\+~]$', current_line.rstrip('\\ ')): + rank += 1000 + + if re.match(r'.*lambda\s*\*$', current_line.rstrip('\\ ')): + rank += 1000 + + if current_line.endswith(('%', '(', '[', '{')): + rank -= 20 + + # Try to break list comprehensions at the "for". + if current_line.startswith('for '): + rank -= 50 + + if current_line.endswith('\\'): + # If a line ends in \-newline, it may be part of a + # multiline string. In that case, we would like to know + # how long that line is without the \-newline. If it's + # longer than the maximum, or has comments, then we assume + # that the \-newline is an okay candidate and only + # penalize it a bit. + total_len = len(current_line) + lineno += 1 + while lineno < len(lines): + total_len += len(lines[lineno]) + + if lines[lineno].lstrip().startswith('#'): + total_len = max_line_length + break + + if not lines[lineno].endswith('\\'): + break + + lineno += 1 + + if total_len < max_line_length: + rank += 10 + else: + rank += 100 if experimental else 1 + + # Prefer breaking at commas rather than colon. + if ',' in current_line and current_line.endswith(':'): + rank += 10 + + # Avoid splitting dictionaries between key and value. + if current_line.endswith(':'): + rank += 100 + + rank += 10 * count_unbalanced_brackets(current_line) + + return max(0, rank) + + +def standard_deviation(numbers): + """Return standard deviation.""" + numbers = list(numbers) + if not numbers: + return 0 + mean = sum(numbers) / len(numbers) + return (sum((n - mean) ** 2 for n in numbers) / + len(numbers)) ** .5 + + +def has_arithmetic_operator(line): + """Return True if line contains any arithmetic operators.""" + for operator in pycodestyle.ARITHMETIC_OP: + if operator in line: + return True + + return False + + +def count_unbalanced_brackets(line): + """Return number of unmatched open/close brackets.""" + count = 0 + for opening, closing in ['()', '[]', '{}']: + count += abs(line.count(opening) - line.count(closing)) + + return count + + +def split_at_offsets(line, offsets): + """Split line at offsets. + + Return list of strings. 
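+
+    For example, offsets (2, 4) split 'abcdef' into ['ab', 'cd', 'ef'].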
+ + """ + result = [] + + previous_offset = 0 + current_offset = 0 + for current_offset in sorted(offsets): + if current_offset < len(line) and previous_offset != current_offset: + result.append(line[previous_offset:current_offset].strip()) + previous_offset = current_offset + + result.append(line[current_offset:]) + + return result + + +class LineEndingWrapper(object): + + r"""Replace line endings to work with sys.stdout. + + It seems that sys.stdout expects only '\n' as the line ending, no matter + the platform. Otherwise, we get repeated line endings. + + """ + + def __init__(self, output): + self.__output = output + + def write(self, s): + self.__output.write(s.replace('\r\n', '\n').replace('\r', '\n')) + + def flush(self): + self.__output.flush() + + +def match_file(filename, exclude): + """Return True if file is okay for modifying/recursing.""" + base_name = os.path.basename(filename) + + if base_name.startswith('.'): + return False + + for pattern in exclude: + if fnmatch.fnmatch(base_name, pattern): + return False + if fnmatch.fnmatch(filename, pattern): + return False + + if not os.path.isdir(filename) and not is_python_file(filename): + return False + + return True + + +def find_files(filenames, recursive, exclude): + """Yield filenames.""" + while filenames: + name = filenames.pop(0) + if recursive and os.path.isdir(name): + for root, directories, children in os.walk(name): + filenames += [os.path.join(root, f) for f in children + if match_file(os.path.join(root, f), + exclude)] + directories[:] = [d for d in directories + if match_file(os.path.join(root, d), + exclude)] + else: + is_exclude_match = False + for pattern in exclude: + if fnmatch.fnmatch(name, pattern): + is_exclude_match = True + break + if not is_exclude_match: + yield name + + +def _fix_file(parameters): + """Helper function for optionally running fix_file() in parallel.""" + if parameters[1].verbose: + print('[file:{}]'.format(parameters[0]), file=sys.stderr) + try: + return fix_file(*parameters) + except IOError as error: + print(str(error), file=sys.stderr) + raise error + + +def fix_multiple_files(filenames, options, output=None): + """Fix list of files. + + Optionally fix files recursively. 
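+
+    Return a list with one entry for each file that produced a result.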
+ + """ + results = [] + filenames = find_files(filenames, options.recursive, options.exclude) + if options.jobs > 1: + import multiprocessing + pool = multiprocessing.Pool(options.jobs) + rets = [] + for name in filenames: + ret = pool.apply_async(_fix_file, ((name, options),)) + rets.append(ret) + pool.close() + pool.join() + if options.diff: + for r in rets: + sys.stdout.write(r.get().decode()) + sys.stdout.flush() + results.extend([x.get() for x in rets if x is not None]) + else: + for name in filenames: + ret = _fix_file((name, options, output)) + if ret is None: + continue + if options.diff: + if ret != '': + results.append(ret) + elif options.in_place: + results.append(ret) + else: + original_source = readlines_from_file(name) + if "".join(original_source).splitlines() != ret.splitlines(): + results.append(ret) + return results + + +def is_python_file(filename): + """Return True if filename is Python file.""" + if filename.endswith('.py'): + return True + + try: + with open_with_encoding( + filename, + limit_byte_check=MAX_PYTHON_FILE_DETECTION_BYTES) as f: + text = f.read(MAX_PYTHON_FILE_DETECTION_BYTES) + if not text: + return False + first_line = text.splitlines()[0] + except (IOError, IndexError): + return False + + if not PYTHON_SHEBANG_REGEX.match(first_line): + return False + + return True + + +def is_probably_part_of_multiline(line): + """Return True if line is likely part of a multiline string. + + When multiline strings are involved, pep8 reports the error as being + at the start of the multiline string, which doesn't work for us. + + """ + return ( + '"""' in line or + "'''" in line or + line.rstrip().endswith('\\') + ) + + +def wrap_output(output, encoding): + """Return output with specified encoding.""" + return codecs.getwriter(encoding)(output.buffer + if hasattr(output, 'buffer') + else output) + + +def get_encoding(): + """Return preferred encoding.""" + return locale.getpreferredencoding() or sys.getdefaultencoding() + + +def main(argv=None, apply_config=True): + """Command-line entry.""" + if argv is None: + argv = sys.argv + + try: + # Exit on broken pipe. + signal.signal(signal.SIGPIPE, signal.SIG_DFL) + except AttributeError: # pragma: no cover + # SIGPIPE is not available on Windows. + pass + + try: + args = parse_args(argv[1:], apply_config=apply_config) + + if args.list_fixes: + for code, description in sorted(supported_fixes()): + print('{code} - {description}'.format( + code=code, description=description)) + return EXIT_CODE_OK + + if args.files == ['-']: + assert not args.in_place + + encoding = sys.stdin.encoding or get_encoding() + read_stdin = sys.stdin.read() + fixed_stdin = fix_code(read_stdin, args, encoding=encoding) + + # LineEndingWrapper is unnecessary here due to the symmetry between + # standard in and standard out. 
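+            # (both use '\n', so no line-ending translation is needed here).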
+ wrap_output(sys.stdout, encoding=encoding).write(fixed_stdin) + + if hash(read_stdin) != hash(fixed_stdin): + if args.exit_code: + return EXIT_CODE_EXISTS_DIFF + else: + if args.in_place or args.diff: + args.files = list(set(args.files)) + else: + assert len(args.files) == 1 + assert not args.recursive + + results = fix_multiple_files(args.files, args, sys.stdout) + if args.diff: + ret = any([len(ret) != 0 for ret in results]) + else: + # with in-place option + ret = any([ret is not None for ret in results]) + if args.exit_code and ret: + return EXIT_CODE_EXISTS_DIFF + except IOError: + return EXIT_CODE_ERROR + except KeyboardInterrupt: + return EXIT_CODE_ERROR # pragma: no cover + + +class CachedTokenizer(object): + + """A one-element cache around tokenize.generate_tokens(). + + Original code written by Ned Batchelder, in coverage.py. + + """ + + def __init__(self): + self.last_text = None + self.last_tokens = None + + def generate_tokens(self, text): + """A stand-in for tokenize.generate_tokens().""" + if text != self.last_text: + string_io = io.StringIO(text) + self.last_tokens = list( + tokenize.generate_tokens(string_io.readline) + ) + self.last_text = text + return self.last_tokens + + +_cached_tokenizer = CachedTokenizer() +generate_tokens = _cached_tokenizer.generate_tokens + + +if __name__ == '__main__': + sys.exit(main()) diff --git a/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/COPYING b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/COPYING new file mode 100644 index 00000000..342bd620 --- /dev/null +++ b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/COPYING @@ -0,0 +1,19 @@ +Copyright (c) 2015 David Keijser + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. diff --git a/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/INSTALLER b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/METADATA b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/METADATA new file mode 100644 index 00000000..8c2e6446 --- /dev/null +++ b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/METADATA @@ -0,0 +1,86 @@ +Metadata-Version: 2.1 +Name: base58 +Version: 2.1.1 +Summary: Base58 and Base58Check implementation. 
+Home-page: https://github.com/keis/base58
+Author: David Keijser
+Author-email: keijser@gmail.com
+License: MIT
+Platform: UNKNOWN
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Requires-Python: >=3.5
+Description-Content-Type: text/markdown
+License-File: COPYING
+Provides-Extra: tests
+Requires-Dist: mypy ; extra == 'tests'
+Requires-Dist: PyHamcrest (>=2.0.2) ; extra == 'tests'
+Requires-Dist: pytest (>=4.6) ; extra == 'tests'
+Requires-Dist: pytest-benchmark ; extra == 'tests'
+Requires-Dist: pytest-cov ; extra == 'tests'
+Requires-Dist: pytest-flake8 ; extra == 'tests'
+
+# base58
+
+[![PyPI Version][pypi-image]](https://pypi.python.org/pypi?name=base58&:action=display)
+[![PyPI Downloads][pypi-downloads-image]](https://pypi.python.org/pypi?name=base58&:action=display)
+[![Build Status][travis-image]](https://travis-ci.org/keis/base58)
+[![Coverage Status][coveralls-image]](https://coveralls.io/r/keis/base58?branch=master)
+
+Base58 and Base58Check implementation compatible with what is used by the
+bitcoin network. Any other alternative alphabet (like the XRP one) can be used.
+
+Starting from version 2.0.0, **python2 is no longer supported**; the 1.x
+series will remain supported, but no new features will be added.
+
+
+## Command line usage
+
+    $ printf "hello world" | base58
+    StV1DL6CwTryKyV
+
+    $ printf "hello world" | base58 -c
+    3vQB7B6MrGQZaxCuFg4oh
+
+    $ printf "3vQB7B6MrGQZaxCuFg4oh" | base58 -dc
+    hello world
+
+    $ printf "4vQB7B6MrGQZaxCuFg4oh" | base58 -dc
+    Invalid checksum
+
+
+## Module usage
+
+    >>> import base58
+    >>> base58.b58encode(b'hello world')
+    b'StV1DL6CwTryKyV'
+    >>> base58.b58decode(b'StV1DL6CwTryKyV')
+    b'hello world'
+    >>> base58.b58encode_check(b'hello world')
+    b'3vQB7B6MrGQZaxCuFg4oh'
+    >>> base58.b58decode_check(b'3vQB7B6MrGQZaxCuFg4oh')
+    b'hello world'
+    >>> base58.b58decode_check(b'4vQB7B6MrGQZaxCuFg4oh')
+    Traceback (most recent call last):
+      File "<stdin>", line 1, in <module>
+      File "base58.py", line 89, in b58decode_check
+        raise ValueError("Invalid checksum")
+    ValueError: Invalid checksum
+    # Use another alphabet. Here, using the built-in XRP/Ripple alphabet. 
+ # RIPPLE_ALPHABET is provided as an option for compatibility with existing code + # It is recommended to use XRP_ALPHABET instead + >>> base58.b58encode(b'hello world', alphabet=base58.XRP_ALPHABET) + b'StVrDLaUATiyKyV' + >>> base58.b58decode(b'StVrDLaUATiyKyV', alphabet=base58.XRP_ALPHABET) + b'hello world' + + +[pypi-image]: https://img.shields.io/pypi/v/base58.svg?style=flat +[pypi-downloads-image]: https://img.shields.io/pypi/dm/base58.svg?style=flat +[travis-image]: https://img.shields.io/travis/keis/base58.svg?style=flat +[coveralls-image]: https://img.shields.io/coveralls/keis/base58.svg?style=flat + + diff --git a/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/RECORD b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/RECORD new file mode 100644 index 00000000..f0d1df15 --- /dev/null +++ b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/RECORD @@ -0,0 +1,13 @@ +../../../bin/base58,sha256=w5MpZbqpZyOJlzyi_SrH1upGWIAvhrpE7eXjmhOYp30,264 +base58-2.1.1.dist-info/COPYING,sha256=z0aU8EC3oxzY7D280LWDpgHA1MN94Ba-eqCgbjpqOlQ,1057 +base58-2.1.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +base58-2.1.1.dist-info/METADATA,sha256=OMWrHqwTJgXypOcS0Tdx3XBESZ5Y4terXPUBfg07Vd0,3078 +base58-2.1.1.dist-info/RECORD,, +base58-2.1.1.dist-info/WHEEL,sha256=ewwEueio1C2XeHTvT17n8dZUJgOvyCWCt0WVNLClP9o,92 +base58-2.1.1.dist-info/entry_points.txt,sha256=7WwcggBSeBwcC22-LkpqMOCaPdey0nOG3QEaKok403Y,49 +base58-2.1.1.dist-info/top_level.txt,sha256=BVSonMPECDcX_2XqQ7iILRqitlshZNNEmLCEWlpvUvI,7 +base58/__init__.py,sha256=7HvHX-In8vY_AJIGqPYhMRuibYkWNEU0yiquapCDKpw,4059 +base58/__main__.py,sha256=3rysVZfdK6HC2DXk5XdRQntooSxBevh_yO1oi_Pqb4M,1183 +base58/__pycache__/__init__.cpython-38.pyc,, +base58/__pycache__/__main__.cpython-38.pyc,, +base58/py.typed,sha256=dcrsqJrcYfTX-ckLFJMTaj6mD8aDe2u0tkQG-ZYxnEg,26 diff --git a/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/WHEEL b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/WHEEL new file mode 100644 index 00000000..5bad85fd --- /dev/null +++ b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.0) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/entry_points.txt b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/entry_points.txt new file mode 100644 index 00000000..dc6d6a2d --- /dev/null +++ b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[console_scripts] +base58 = base58.__main__:main + diff --git a/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/top_level.txt b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/top_level.txt new file mode 100644 index 00000000..b4c9d717 --- /dev/null +++ b/env/lib/python3.8/site-packages/base58-2.1.1.dist-info/top_level.txt @@ -0,0 +1 @@ +base58 diff --git a/env/lib/python3.8/site-packages/base58/__init__.py b/env/lib/python3.8/site-packages/base58/__init__.py new file mode 100644 index 00000000..929014f2 --- /dev/null +++ b/env/lib/python3.8/site-packages/base58/__init__.py @@ -0,0 +1,159 @@ +'''Base58 encoding + +Implementations of Base58 and Base58Check encodings that are compatible +with the bitcoin network. +''' + +# This module is based upon base58 snippets found scattered over many bitcoin +# tools written in python. From what I gather the original source is from a +# forum post by Gavin Andresen, so direct your praise to him. 
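+# The core of the encoding treats the input bytes as a single big-endian
+# integer and repeatedly divides it by 58, mapping each remainder onto the
+# alphabet; stripped leading zero bytes are restored afterwards as copies
+# of the alphabet's first character.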
+# This module adds shiny packaging and support for python3. + +from functools import lru_cache +from hashlib import sha256 +from typing import Mapping, Union + +__version__ = '2.1.1' + +# 58 character alphabet used +BITCOIN_ALPHABET = \ + b'123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz' +RIPPLE_ALPHABET = b'rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz' +XRP_ALPHABET = RIPPLE_ALPHABET + +# Retro compatibility +alphabet = BITCOIN_ALPHABET + + +def scrub_input(v: Union[str, bytes]) -> bytes: + if isinstance(v, str): + v = v.encode('ascii') + + return v + + +def b58encode_int( + i: int, default_one: bool = True, alphabet: bytes = BITCOIN_ALPHABET +) -> bytes: + """ + Encode an integer using Base58 + """ + if not i and default_one: + return alphabet[0:1] + string = b"" + base = len(alphabet) + while i: + i, idx = divmod(i, base) + string = alphabet[idx:idx+1] + string + return string + + +def b58encode( + v: Union[str, bytes], alphabet: bytes = BITCOIN_ALPHABET +) -> bytes: + """ + Encode a string using Base58 + """ + v = scrub_input(v) + + origlen = len(v) + v = v.lstrip(b'\0') + newlen = len(v) + + acc = int.from_bytes(v, byteorder='big') # first byte is most significant + + result = b58encode_int(acc, default_one=False, alphabet=alphabet) + return alphabet[0:1] * (origlen - newlen) + result + + +@lru_cache() +def _get_base58_decode_map(alphabet: bytes, + autofix: bool) -> Mapping[int, int]: + invmap = {char: index for index, char in enumerate(alphabet)} + + if autofix: + groups = [b'0Oo', b'Il1'] + for group in groups: + pivots = [c for c in group if c in invmap] + if len(pivots) == 1: + for alternative in group: + invmap[alternative] = invmap[pivots[0]] + + return invmap + + +def b58decode_int( + v: Union[str, bytes], alphabet: bytes = BITCOIN_ALPHABET, *, + autofix: bool = False +) -> int: + """ + Decode a Base58 encoded string as an integer + """ + if b' ' not in alphabet: + v = v.rstrip() + v = scrub_input(v) + + map = _get_base58_decode_map(alphabet, autofix=autofix) + + decimal = 0 + base = len(alphabet) + try: + for char in v: + decimal = decimal * base + map[char] + except KeyError as e: + raise ValueError( + "Invalid character {!r}".format(chr(e.args[0])) + ) from None + return decimal + + +def b58decode( + v: Union[str, bytes], alphabet: bytes = BITCOIN_ALPHABET, *, + autofix: bool = False +) -> bytes: + """ + Decode a Base58 encoded string + """ + v = v.rstrip() + v = scrub_input(v) + + origlen = len(v) + v = v.lstrip(alphabet[0:1]) + newlen = len(v) + + acc = b58decode_int(v, alphabet=alphabet, autofix=autofix) + + result = [] + while acc > 0: + acc, mod = divmod(acc, 256) + result.append(mod) + + return b'\0' * (origlen - newlen) + bytes(reversed(result)) + + +def b58encode_check( + v: Union[str, bytes], alphabet: bytes = BITCOIN_ALPHABET +) -> bytes: + """ + Encode a string using Base58 with a 4 character checksum + """ + v = scrub_input(v) + + digest = sha256(sha256(v).digest()).digest() + return b58encode(v + digest[:4], alphabet=alphabet) + + +def b58decode_check( + v: Union[str, bytes], alphabet: bytes = BITCOIN_ALPHABET, *, + autofix: bool = False +) -> bytes: + '''Decode and verify the checksum of a Base58 encoded string''' + + result = b58decode(v, alphabet=alphabet, autofix=autofix) + result, check = result[:-4], result[-4:] + digest = sha256(sha256(result).digest()).digest() + + if check != digest[:4]: + raise ValueError("Invalid checksum") + + return result diff --git a/env/lib/python3.8/site-packages/base58/__main__.py 
b/env/lib/python3.8/site-packages/base58/__main__.py new file mode 100644 index 00000000..3a6f5d26 --- /dev/null +++ b/env/lib/python3.8/site-packages/base58/__main__.py @@ -0,0 +1,50 @@ +import argparse +import sys +from typing import Callable, Dict, Tuple + +from base58 import b58decode, b58decode_check, b58encode, b58encode_check + +_fmap = { + (False, False): b58encode, + (False, True): b58encode_check, + (True, False): b58decode, + (True, True): b58decode_check +} # type: Dict[Tuple[bool, bool], Callable[[bytes], bytes]] + + +def main() -> None: + '''Base58 encode or decode FILE, or standard input, to standard output.''' + + stdout = sys.stdout.buffer + + parser = argparse.ArgumentParser(description=main.__doc__) + parser.add_argument( + 'file', + metavar='FILE', + nargs='?', + type=argparse.FileType('r'), + default='-') + parser.add_argument( + '-d', '--decode', + action='store_true', + help='decode data') + parser.add_argument( + '-c', '--check', + action='store_true', + help='append a checksum before encoding') + + args = parser.parse_args() + fun = _fmap[(args.decode, args.check)] + + data = args.file.buffer.read() + + try: + result = fun(data) + except Exception as e: + sys.exit(e) + + stdout.write(result) + + +if __name__ == '__main__': + main() diff --git a/env/lib/python3.8/site-packages/base58/py.typed b/env/lib/python3.8/site-packages/base58/py.typed new file mode 100644 index 00000000..e5aff4f8 --- /dev/null +++ b/env/lib/python3.8/site-packages/base58/py.typed @@ -0,0 +1 @@ +# Marker file for PEP 561. \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/LICENSE new file mode 100644 index 00000000..4923749d --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/LICENSE @@ -0,0 +1,46 @@ +PYTHON SOFTWARE FOUNDATION LICENSE +---------------------------------- + +1. This LICENSE AGREEMENT is between Ilan Schnell, and the Individual or +Organization ("Licensee") accessing and otherwise using this software +("bitarray") in source or binary form and its associated documentation. + +2. Subject to the terms and conditions of this License Agreement, Ilan Schnell +hereby grants Licensee a nonexclusive, royalty-free, world-wide +license to reproduce, analyze, test, perform and/or display publicly, +prepare derivative works, distribute, and otherwise use bitarray +alone or in any derivative version, provided, however, that Ilan Schnell's +License Agreement and Ilan Schnell's notice of copyright, i.e., "Copyright (c) +2008 - 2022 Ilan Schnell; All Rights Reserved" are retained in bitarray +alone or in any derivative version prepared by Licensee. + +3. In the event Licensee prepares a derivative work that is based on +or incorporates bitarray or any part thereof, and wants to make +the derivative work available to others as provided herein, then +Licensee hereby agrees to include in any such work a brief summary of +the changes made to bitarray. + +4. Ilan Schnell is making bitarray available to Licensee on an "AS IS" +basis. ILAN SCHNELL MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR +IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, ILAN SCHNELL MAKES NO AND +DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS +FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF BITARRAY WILL NOT +INFRINGE ANY THIRD PARTY RIGHTS. + +5. ILAN SCHNELL SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF BITARRAY +FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS +A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING BITARRAY, +OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +6. This License Agreement will automatically terminate upon a material +breach of its terms and conditions. + +7. Nothing in this License Agreement shall be deemed to create any +relationship of agency, partnership, or joint venture between Ilan Schnell +and Licensee. This License Agreement does not grant permission to use Ilan +Schnell trademarks or trade name in a trademark sense to endorse or promote +products or services of Licensee, or any third party. + +8. By copying, installing or otherwise using bitarray, Licensee +agrees to be bound by the terms and conditions of this License +Agreement. diff --git a/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/METADATA b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/METADATA new file mode 100644 index 00000000..10e83633 --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/METADATA @@ -0,0 +1,931 @@ +Metadata-Version: 2.1 +Name: bitarray +Version: 2.6.0 +Summary: efficient arrays of booleans -- C extension +Home-page: https://github.com/ilanschnell/bitarray +Author: Ilan Schnell +Author-email: ilanschnell@gmail.com +License: PSF +Platform: UNKNOWN +Classifier: License :: OSI Approved :: Python Software Foundation License +Classifier: Development Status :: 6 - Mature +Classifier: Intended Audience :: Developers +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: C +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Topic :: Utilities +License-File: LICENSE + +bitarray: efficient arrays of booleans +====================================== + +This library provides an object type which efficiently represents an array +of booleans. Bitarrays are sequence types and behave very much like usual +lists. Eight bits are represented by one byte in a contiguous block of +memory. The user can select between two representations: little-endian +and big-endian. All functionality is implemented in C. +Methods for accessing the machine representation are provided, including the +ability to import and export buffers. This allows creating bitarrays that +are mapped to other objects, including memory-mapped files. + + +Key features +------------ + +* The bit endianness can be specified for each bitarray object, see below. +* Sequence methods: slicing (including slice assignment and deletion), + operations ``+``, ``*``, ``+=``, ``*=``, the ``in`` operator, ``len()`` +* Bitwise operations: ``~``, ``&``, ``|``, ``^``, ``<<``, ``>>`` (as well as + their in-place versions ``&=``, ``|=``, ``^=``, ``<<=``, ``>>=``). 
+* Fast methods for encoding and decoding variable bit length prefix codes.
+* Bitarray objects support the buffer protocol (both importing and
+  exporting buffers).
+* Packing and unpacking to other binary data formats, e.g. ``numpy.ndarray``.
+* Pickling and unpickling of bitarray objects.
+* Immutable ``frozenbitarray`` objects which are hashable
+* Sequential search
+* Type hinting
+* Extensive test suite with over 400 unittests.
+* Utility module ``bitarray.util``:
+
+  * conversion to and from hexadecimal strings
+  * (de-) serialization
+  * pretty printing
+  * conversion to and from integers
+  * creating Huffman codes
+  * various count functions
+  * other helpful functions
+
+
+Installation
+------------
+
+Python wheels are available on PyPI for all major platforms and Python
+versions. This means you can simply:
+
+.. code-block:: shell-session
+
+    $ pip install bitarray
+
+In addition, conda packages are available (both the default Anaconda
+repository and conda-forge support bitarray):
+
+.. code-block:: shell-session
+
+    $ conda install bitarray
+
+Once you have installed the package, you may want to test it:
+
+.. code-block:: shell-session
+
+    $ python -c 'import bitarray; bitarray.test()'
+    bitarray is installed in: /Users/ilan/bitarray/bitarray
+    bitarray version: 2.6.0
+    sys.version: 3.9.4 (default, May 10 2021, 22:13:15) [Clang 11.1.0]
+    sys.prefix: /Users/ilan/Mini3/envs/py39
+    pointer size: 64 bit
+    sizeof(size_t): 8
+    sizeof(bitarrayobject): 80
+    PY_UINT64_T defined: 1
+    USE_WORD_SHIFT: 1
+    DEBUG: 0
+    .........................................................................
+    .........................................................................
+    ................................................................
+    ----------------------------------------------------------------------
+    Ran 435 tests in 0.531s
+
+    OK
+
+The ``test()`` function is part of the API. It will return
+a ``unittest.runner.TextTestResult`` object, such that one can verify that
+all tests ran successfully by:
+
+.. code-block:: python
+
+    import bitarray
+    assert bitarray.test().wasSuccessful()
+
+
+Using the module
+----------------
+
+As mentioned above, bitarray objects behave very much like lists, so
+there is not too much to learn. The biggest difference from list
+objects (except that bitarrays are obviously homogeneous) is the ability
+to access the machine representation of the object.
+When doing so, the bit endianness is of importance; this issue is
+explained in detail in the section below. Here, we demonstrate the
+basic usage of bitarray objects:
+
+.. code-block:: python
+
+    >>> from bitarray import bitarray
+    >>> a = bitarray()         # create empty bitarray
+    >>> a.append(1)
+    >>> a.extend([1, 0])
+    >>> a
+    bitarray('110')
+    >>> x = bitarray(2 ** 20)  # bitarray of length 1048576 (uninitialized)
+    >>> len(x)
+    1048576
+    >>> bitarray('1001 011')   # initialize from string (whitespace is ignored)
+    bitarray('1001011')
+    >>> lst = [1, 0, False, True, True]
+    >>> a = bitarray(lst)      # initialize from iterable
+    >>> a
+    bitarray('10011')
+    >>> a.count(1)
+    3
+    >>> a.remove(0)            # removes first occurrence of 0
+    >>> a
+    bitarray('1011')
+
+Like lists, bitarray objects support slice assignment and deletion:
+
+.. 
code-block:: python + + >>> a = bitarray(50) + >>> a.setall(0) # set all elements in a to 0 + >>> a[11:37:3] = 9 * bitarray('1') + >>> a + bitarray('00000000000100100100100100100100100100000000000000') + >>> del a[12::3] + >>> a + bitarray('0000000000010101010101010101000000000') + >>> a[-6:] = bitarray('10011') + >>> a + bitarray('000000000001010101010101010100010011') + >>> a += bitarray('000111') + >>> a[9:] + bitarray('001010101010101010100010011000111') + +In addition, slices can be assigned to booleans, which is easier (and +faster) than assigning to a bitarray in which all values are the same: + +.. code-block:: python + + >>> a = 20 * bitarray('0') + >>> a[1:15:3] = True + >>> a + bitarray('01001001001001000000') + +This is easier and faster than: + +.. code-block:: python + + >>> a = 20 * bitarray('0') + >>> a[1:15:3] = 5 * bitarray('1') + >>> a + bitarray('01001001001001000000') + +Note that in the latter we have to create a temporary bitarray whose length +must be known or calculated. Another example of assigning slices to Booleans, +is setting ranges: + +.. code-block:: python + + >>> a = bitarray(30) + >>> a[:] = 0 # set all elements to 0 - equivalent to a.setall(0) + >>> a[10:25] = 1 # set elements in range(10, 25) to 1 + >>> a + bitarray('000000000011111111111111100000') + + +Bitwise operators +----------------- + +Bitarray objects support the bitwise operators ``~``, ``&``, ``|``, ``^``, +``<<``, ``>>`` (as well as their in-place versions ``&=``, ``|=``, ``^=``, +``<<=``, ``>>=``). The behavior is very much what one would expect: + +.. code-block:: python + + >>> a = bitarray('101110001') + >>> ~a # invert + bitarray('010001110') + >>> b = bitarray('111001011') + >>> a ^ b + bitarray('010111010') + >>> a &= b + >>> a + bitarray('101000001') + >>> a <<= 2 # in-place left shift by 2 + >>> a + bitarray('100000100') + >>> b >> 1 + bitarray('011100101') + +The C language does not specify the behavior of negative shifts and +of left shifts larger or equal than the width of the promoted left operand. +The exact behavior is compiler/machine specific. +This Python bitarray library specifies the behavior as follows: + +* the length of the bitarray is never changed by any shift operation +* blanks are filled by 0 +* negative shifts raise ``ValueError`` +* shifts larger or equal to the length of the bitarray result in + bitarrays with all values 0 + + +Bit endianness +-------------- + +Unless explicitly converting to machine representation, using +the ``.tobytes()``, ``.frombytes()``, ``.tofile()`` and ``.fromfile()`` +methods, as well as using ``memoryview``, the bit endianness will have no +effect on any computation, and one can skip this section. + +Since bitarrays allows addressing individual bits, where the machine +represents 8 bits in one byte, there are two obvious choices for this +mapping: little-endian and big-endian. + +When dealing with the machine representation of bitarray objects, it is +recommended to always explicitly specify the endianness. + +By default, bitarrays use big-endian representation: + +.. code-block:: python + + >>> a = bitarray() + >>> a.endian() + 'big' + >>> a.frombytes(b'A') + >>> a + bitarray('01000001') + >>> a[6] = 1 + >>> a.tobytes() + b'C' + +Big-endian means that the most-significant bit comes first. +Here, ``a[0]`` is the lowest address (index) and most significant bit, +and ``a[7]`` is the highest address and least significant bit. + +When creating a new bitarray object, the endianness can always be +specified explicitly: + +.. 
code-block:: python + + >>> a = bitarray(endian='little') + >>> a.frombytes(b'A') + >>> a + bitarray('10000010') + >>> a.endian() + 'little' + +Here, the low-bit comes first because little-endian means that increasing +numeric significance corresponds to an increasing address. +So ``a[0]`` is the lowest address and least significant bit, +and ``a[7]`` is the highest address and most significant bit. + +The bit endianness is a property of the bitarray object. +The endianness cannot be changed once a bitarray object is created. +When comparing bitarray objects, the endianness (and hence the machine +representation) is irrelevant; what matters is the mapping from indices +to bits: + +.. code-block:: python + + >>> bitarray('11001', endian='big') == bitarray('11001', endian='little') + True + +Bitwise operations (``|``, ``^``, ``&=``, ``|=``, ``^=``, ``~``) are +implemented efficiently using the corresponding byte operations in C, i.e. the +operators act on the machine representation of the bitarray objects. +Therefore, it is not possible to perform bitwise operators on bitarrays +with different endianness. + +When converting to and from machine representation, using +the ``.tobytes()``, ``.frombytes()``, ``.tofile()`` and ``.fromfile()`` +methods, the endianness matters: + +.. code-block:: python + + >>> a = bitarray(endian='little') + >>> a.frombytes(b'\x01') + >>> a + bitarray('10000000') + >>> b = bitarray(endian='big') + >>> b.frombytes(b'\x80') + >>> b + bitarray('10000000') + >>> a == b + True + >>> a.tobytes() == b.tobytes() + False + +As mentioned above, the endianness can not be changed once an object is +created. However, you can create a new bitarray with different endianness: + +.. code-block:: python + + >>> a = bitarray('111000', endian='little') + >>> b = bitarray(a, endian='big') + >>> b + bitarray('111000') + >>> a == b + True + + +Buffer protocol +--------------- + +Bitarray objects support the buffer protocol. They can both export their +own buffer, as well as import another object's buffer. To learn more about +this topic, please read `buffer protocol `__. There is also an example that shows how +to memory-map a file to a bitarray: `mmapped-file.py `__ + + +Variable bit length prefix codes +-------------------------------- + +The ``.encode()`` method takes a dictionary mapping symbols to bitarrays +and an iterable, and extends the bitarray object with the encoded symbols +found while iterating. For example: + +.. code-block:: python + + >>> d = {'H':bitarray('111'), 'e':bitarray('0'), + ... 'l':bitarray('110'), 'o':bitarray('10')} + ... + >>> a = bitarray() + >>> a.encode(d, 'Hello') + >>> a + bitarray('111011011010') + +Note that the string ``'Hello'`` is an iterable, but the symbols are not +limited to characters, in fact any immutable Python object can be a symbol. +Taking the same dictionary, we can apply the ``.decode()`` method which will +return a list of the symbols: + +.. code-block:: python + + >>> a.decode(d) + ['H', 'e', 'l', 'l', 'o'] + >>> ''.join(a.decode(d)) + 'Hello' + +Since symbols are not limited to being characters, it is necessary to return +them as elements of a list, rather than simply returning the joined string. +The above dictionary ``d`` can be efficiently constructed using the function +``bitarray.util.huffman_code()``. I also wrote `Huffman coding in Python +using bitarray `__ for more +background information. + +When the codes are large, and you have many decode calls, most time will +be spent creating the (same) internal decode tree objects. 
In this case, +it will be much faster to create a ``decodetree`` object, which can be +passed to bitarray's ``.decode()`` and ``.iterdecode()`` methods, instead +of passing the prefix code dictionary to those methods itself: + +.. code-block:: python + + >>> from bitarray import bitarray, decodetree + >>> t = decodetree({'a': bitarray('0'), 'b': bitarray('1')}) + >>> a = bitarray('0110') + >>> a.decode(t) + ['a', 'b', 'b', 'a'] + >>> ''.join(a.iterdecode(t)) + 'abba' + +The sole purpose of the immutable ``decodetree`` object is to be passed +to bitarray's ``.decode()`` and ``.iterdecode()`` methods. + + +Frozenbitarrays +--------------- + +A ``frozenbitarray`` object is very similar to the bitarray object. +The difference is that this a ``frozenbitarray`` is immutable, and hashable, +and can therefore be used as a dictionary key: + +.. code-block:: python + + >>> from bitarray import frozenbitarray + >>> key = frozenbitarray('1100011') + >>> {key: 'some value'} + {frozenbitarray('1100011'): 'some value'} + >>> key[3] = 1 + Traceback (most recent call last): + ... + TypeError: frozenbitarray is immutable + + +Reference +========= + +bitarray version: 2.6.0 -- `change log `__ + +In the following, ``item`` and ``value`` are usually a single bit - +an integer 0 or 1. + + +The bitarray object: +-------------------- + +``bitarray(initializer=0, /, endian='big', buffer=None)`` -> bitarray + Return a new bitarray object whose items are bits initialized from + the optional initial object, and endianness. + The initializer may be of the following types: + + ``int``: Create a bitarray of given integer length. The initial values are + uninitialized. + + ``str``: Create bitarray from a string of ``0`` and ``1``. + + ``iterable``: Create bitarray from iterable or sequence or integers 0 or 1. + + Optional keyword arguments: + + ``endian``: Specifies the bit endianness of the created bitarray object. + Allowed values are ``big`` and ``little`` (the default is ``big``). + The bit endianness effects the buffer representation of the bitarray. + + ``buffer``: Any object which exposes a buffer. When provided, ``initializer`` + cannot be present (or has to be ``None``). The imported buffer may be + readonly or writable, depending on the object type. + + New in version 2.3: optional ``buffer`` argument. + + +bitarray methods: +----------------- + +``all()`` -> bool + Return True when all bits in the array are True. + Note that ``a.all()`` is faster than ``all(a)``. + + +``any()`` -> bool + Return True when any bit in the array is True. + Note that ``a.any()`` is faster than ``any(a)``. + + +``append(item, /)`` + Append ``item`` to the end of the bitarray. + + +``buffer_info()`` -> tuple + Return a tuple containing: + + 0. memory address of buffer + 1. buffer size (in bytes) + 2. bit endianness as a string + 3. number of pad bits + 4. allocated memory for the buffer (in bytes) + 5. memory is read-only + 6. buffer is imported + 7. number of buffer exports + + +``bytereverse(start=0, stop=, /)`` + For each byte in byte-range(start, stop) reverse the bit order in-place. + The start and stop indices are given in terms of bytes (not bits). + Also note that this method only changes the buffer; it does not change the + endianness of the bitarray object. + + New in version 2.2.5: optional start and stop arguments. + + +``clear()`` + Remove all items from the bitarray. + + New in version 1.4. + + +``copy()`` -> bitarray + Return a copy of the bitarray. 
+ + +``count(value=1, start=0, stop=, step=1, /)`` -> int + Count the number of occurrences of ``value`` in the bitarray. + + New in version 1.1.0: optional start and stop arguments. + + New in version 2.3.7: optional step argument. + + +``decode(code, /)`` -> list + Given a prefix code (a dict mapping symbols to bitarrays, or ``decodetree`` + object), decode the content of the bitarray and return it as a list of + symbols. + + +``encode(code, iterable, /)`` + Given a prefix code (a dict mapping symbols to bitarrays), + iterate over the iterable object with symbols, and extend the bitarray + with the corresponding bitarray for each symbol. + + +``endian()`` -> str + Return the bit endianness of the bitarray as a string (``little`` or ``big``). + + +``extend(iterable, /)`` + Append all items from ``iterable`` to the end of the bitarray. + If the iterable is a string, each ``0`` and ``1`` are appended as + bits (ignoring whitespace and underscore). + + +``fill()`` -> int + Add zeros to the end of the bitarray, such that the length of the bitarray + will be a multiple of 8, and return the number of bits added (0..7). + + +``find(sub_bitarray, start=0, stop=, /)`` -> int + Return the lowest index where sub_bitarray is found, such that sub_bitarray + is contained within ``[start:stop]``. + Return -1 when sub_bitarray is not found. + + New in version 2.1. + + +``frombytes(bytes, /)`` + Extend the bitarray with raw bytes from a bytes-like object. + Each added byte will add eight bits to the bitarray. + + New in version 2.5.0: allow bytes-like argument. + + +``fromfile(f, n=-1, /)`` + Extend bitarray with up to n bytes read from the file object f. + When n is omitted or negative, reads all data until EOF. + When n is provided and positive but exceeds the data available, + ``EOFError`` is raised (but the available data is still read and appended. + + +``index(sub_bitarray, start=0, stop=, /)`` -> int + Return the lowest index where sub_bitarray is found, such that sub_bitarray + is contained within ``[start:stop]``. + Raises ``ValueError`` when the sub_bitarray is not present. + + +``insert(index, value, /)`` + Insert ``value`` into the bitarray before ``index``. + + +``invert(index=, /)`` + Invert all bits in the array (in-place). + When the optional ``index`` is given, only invert the single bit at index. + + New in version 1.5.3: optional index argument. + + +``iterdecode(code, /)`` -> iterator + Given a prefix code (a dict mapping symbols to bitarrays, or ``decodetree`` + object), decode the content of the bitarray and return an iterator over + the symbols. + + +``itersearch(sub_bitarray, /)`` -> iterator + Searches for the given sub_bitarray in self, and return an iterator over + the start positions where sub_bitarray matches self. + + +``pack(bytes, /)`` + Extend the bitarray from a bytes-like object, where each byte corresponds + to a single bit. The byte ``b'\x00'`` maps to bit 0 and all other bytes + map to bit 1. + + This method, as well as the ``.unpack()`` method, are meant for efficient + transfer of data between bitarray objects to other Python objects (for + example NumPy's ndarray object) which have a different memory view. + + New in version 2.5.0: allow bytes-like argument. + + +``pop(index=-1, /)`` -> item + Return the i-th (default last) element and delete it from the bitarray. + Raises ``IndexError`` if index is out of range. + + +``remove(value, /)`` + Remove the first occurrence of ``value`` in the bitarray. + Raises ``ValueError`` if item is not present. 
+ + +``reverse()`` + Reverse all bits in the array (in-place). + + +``search(sub_bitarray, limit=, /)`` -> list + Searches for the given sub_bitarray in self, and return the list of start + positions. + The optional argument limits the number of search results to the integer + specified. By default, all search results are returned. + + +``setall(value, /)`` + Set all elements in the bitarray to ``value``. + Note that ``a.setall(value)`` is equivalent to ``a[:] = value``. + + +``sort(reverse=False)`` + Sort the bits in the array (in-place). + + +``to01()`` -> str + Return a string containing '0's and '1's, representing the bits in the + bitarray. + + +``tobytes()`` -> bytes + Return the bitarray buffer in bytes (pad bits are set to zero). + + +``tofile(f, /)`` + Write the byte representation of the bitarray to the file object f. + + +``tolist()`` -> list + Return the bitarray as a list of integer items. + + Note that the list object being created will require 32 or 64 times more + memory (depending on the machine architecture) than the bitarray object, + which may cause a memory error if the bitarray is very large. + + +``unpack(zero=b'\x00', one=b'\x01')`` -> bytes + Return bytes containing one character for each bit in the bitarray, + using the specified mapping. + + +bitarray data descriptors: +-------------------------- + +Data descriptors were added in version 2.6. + +``nbytes`` -> int + buffer size in bytes + + +``padbits`` -> int + number of pad bits + + +``readonly`` -> bool + bool indicating whether buffer is read only + + +Other objects: +-------------- + +``frozenbitarray(initializer=0, /, endian='big', buffer=None)`` -> frozenbitarray + Return a frozenbitarray object, which is initialized the same way a bitarray + object is initialized. A frozenbitarray is immutable and hashable. + Its contents cannot be altered after it is created; however, it can be used + as a dictionary key. + + New in version 1.1. + + +``decodetree(code, /)`` -> decodetree + Given a prefix code (a dict mapping symbols to bitarrays), + create a binary tree object to be passed to ``.decode()`` or ``.iterdecode()``. + + New in version 1.6. + + +Functions defined in the `bitarray` module: +------------------------------------------- + +``bits2bytes(n, /)`` -> int + Return the number of bytes necessary to store n bits. + + +``get_default_endian()`` -> str + Return the default endianness for new bitarray objects being created. + Unless ``_set_default_endian('little')`` was called, the default endianness + is ``big``. + + New in version 1.3. + + +``test(verbosity=1, repeat=1)`` -> TextTestResult + Run self-test, and return unittest.runner.TextTestResult object. + + +Functions defined in `bitarray.util` module: +-------------------------------------------- + +This sub-module was added in version 1.2. + +``zeros(length, /, endian=None)`` -> bitarray + Create a bitarray of length, with all values 0, and optional + endianness, which may be 'big', 'little'. + + +``urandom(length, /, endian=None)`` -> bitarray + Return a bitarray of ``length`` random bits (uses ``os.urandom``). + + New in version 1.7. + + +``pprint(bitarray, /, stream=None, group=8, indent=4, width=80)`` + Prints the formatted representation of object on ``stream`` (which defaults + to ``sys.stdout``). By default, elements are grouped in bytes (8 elements), + and 8 bytes (64 elements) per line. + Non-bitarray objects are printed by the standard library + function ``pprint.pprint()``. + + New in version 1.8. 
+ + +``make_endian(bitarray, /, endian)`` -> bitarray + When the endianness of the given bitarray is different from ``endian``, + return a new bitarray, with endianness ``endian`` and the same elements + as the original bitarray. + Otherwise (endianness is already ``endian``) the original bitarray is returned + unchanged. + + New in version 1.3. + + +``rindex(bitarray, value=1, start=0, stop=, /)`` -> int + Return the rightmost (highest) index of ``value`` in bitarray. + Raises ``ValueError`` if the value is not present. + + New in version 2.3.0: optional start and stop arguments. + + +``strip(bitarray, /, mode='right')`` -> bitarray + Return a new bitarray with zeros stripped from left, right or both ends. + Allowed values for mode are the strings: ``left``, ``right``, ``both`` + + +``count_n(a, n, value=1, /)`` -> int + Return lowest index ``i`` for which ``a[:i].count(value) == n``. + Raises ``ValueError``, when n exceeds total count (``a.count(value)``). + + New in version 2.3.6: optional value argument. + + +``parity(a, /)`` -> int + Return the parity of bitarray ``a``. + This is equivalent to ``a.count() % 2`` (but more efficient). + + New in version 1.9. + + +``count_and(a, b, /)`` -> int + Return ``(a & b).count()`` in a memory efficient manner, + as no intermediate bitarray object gets created. + + +``count_or(a, b, /)`` -> int + Return ``(a | b).count()`` in a memory efficient manner, + as no intermediate bitarray object gets created. + + +``count_xor(a, b, /)`` -> int + Return ``(a ^ b).count()`` in a memory efficient manner, + as no intermediate bitarray object gets created. + + This is also known as the Hamming distance. + + +``subset(a, b, /)`` -> bool + Return ``True`` if bitarray ``a`` is a subset of bitarray ``b``. + ``subset(a, b)`` is equivalent to ``(a & b).count() == a.count()`` but is more + efficient since we can stop as soon as one mismatch is found, and no + intermediate bitarray object gets created. + + +``ba2hex(bitarray, /)`` -> hexstr + Return a string containing the hexadecimal representation of + the bitarray (which has to be multiple of 4 in length). + + +``hex2ba(hexstr, /, endian=None)`` -> bitarray + Bitarray of hexadecimal representation. hexstr may contain any number + (including odd numbers) of hex digits (upper or lower case). + + +``ba2base(n, bitarray, /)`` -> str + Return a string containing the base ``n`` ASCII representation of + the bitarray. Allowed values for ``n`` are 2, 4, 8, 16, 32 and 64. + The bitarray has to be multiple of length 1, 2, 3, 4, 5 or 6 respectively. + For ``n=16`` (hexadecimal), ``ba2hex()`` will be much faster, as ``ba2base()`` + does not take advantage of byte level operations. + For ``n=32`` the RFC 4648 Base32 alphabet is used, and for ``n=64`` the + standard base 64 alphabet is used. + + See also: `Bitarray representations `__ + + New in version 1.9. + + +``base2ba(n, asciistr, /, endian=None)`` -> bitarray + Bitarray of the base ``n`` ASCII representation. + Allowed values for ``n`` are 2, 4, 8, 16, 32 and 64. + For ``n=16`` (hexadecimal), ``hex2ba()`` will be much faster, as ``base2ba()`` + does not take advantage of byte level operations. + For ``n=32`` the RFC 4648 Base32 alphabet is used, and for ``n=64`` the + standard base 64 alphabet is used. + + See also: `Bitarray representations `__ + + New in version 1.9. + + +``ba2int(bitarray, /, signed=False)`` -> int + Convert the given bitarray to an integer. + The bit-endianness of the bitarray is respected. 
+   ``signed`` indicates whether two's complement is used to represent the
+   integer.
+
+
+``int2ba(int, /, length=None, endian=None, signed=False)`` -> bitarray
+   Convert the given integer to a bitarray (with given endianness,
+   and no leading (big-endian) / trailing (little-endian) zeros), unless
+   the ``length`` of the bitarray is provided.  An ``OverflowError`` is raised
+   if the integer is not representable with the given number of bits.
+   ``signed`` determines whether two's complement is used to represent the
+   integer, and requires ``length`` to be provided.
+
+
+``serialize(bitarray, /)`` -> bytes
+   Return a serialized representation of the bitarray, which may be passed to
+   ``deserialize()``.  It efficiently represents the bitarray object (including
+   its endianness) and is guaranteed not to change in future releases.
+
+   See also: `Bitarray representations `__
+
+   New in version 1.8.
+
+
+``deserialize(bytes, /)`` -> bitarray
+   Return a bitarray given a bytes-like representation such as returned
+   by ``serialize()``.
+
+   See also: `Bitarray representations `__
+
+   New in version 1.8.
+
+   New in version 2.5.0: allow bytes-like argument.
+
+
+``vl_encode(bitarray, /)`` -> bytes
+   Return a variable length binary representation of the bitarray.
+   This representation is useful for efficiently storing small bitarrays
+   in a binary stream.  Use ``vl_decode()`` for decoding.
+
+   See also: `Variable length bitarray format `__
+
+   New in version 2.2.
+
+
+``vl_decode(stream, /, endian=None)`` -> bitarray
+   Decode the binary stream (an integer iterator, or bytes-like object), and
+   return the decoded bitarray.  This function consumes only one bitarray and
+   leaves the remaining stream untouched.  ``StopIteration`` is raised when no
+   terminating byte is found.
+   Use ``vl_encode()`` for encoding.
+
+   See also: `Variable length bitarray format `__
+
+   New in version 2.2.
+
+
+``huffman_code(dict, /, endian=None)`` -> dict
+   Given a frequency map, a dictionary mapping symbols to their frequency,
+   calculate the Huffman code, i.e. a dict mapping those symbols to
+   bitarrays (with given endianness).  Note that the symbols are not limited
+   to being strings.  Symbols may be any hashable object (such as ``None``).
+
+
+``canonical_huffman(dict, /)`` -> tuple
+   Given a frequency map, a dictionary mapping symbols to their frequency,
+   calculate the canonical Huffman code.  Returns a tuple containing:
+
+   0. the canonical Huffman code as a dict mapping symbols to bitarrays
+   1. a list containing the number of symbols of each code length
+   2. a list of symbols in canonical order
+
+   Note: the two lists may be used as input for ``canonical_decode()``.
+
+   See also: `Canonical Huffman Coding `__
+
+   New in version 2.5.
+
+
+``canonical_decode(bitarray, count, symbol, /)`` -> iterator
+   Decode the bitarray using canonical Huffman decoding tables,
+   where ``count`` is a sequence containing the number of symbols of each
+   length and ``symbol`` is a sequence of symbols in canonical order.
+
+   See also: `Canonical Huffman Coding `__
+
+   New in version 2.5.
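
A minimal sketch of a few of the ``bitarray.util`` functions above (outputs
assume the default big-endian bitarrays)::

   >>> from bitarray.util import int2ba, ba2int, ba2hex, count_xor, zeros
   >>> a = int2ba(200, length=8)
   >>> a.to01()
   '11001000'
   >>> ba2int(a)
   200
   >>> ba2hex(a)
   'c8'
   >>> count_xor(a, zeros(8))   # Hamming distance to the zero bitarray
   3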
+ + + + diff --git a/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/RECORD b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/RECORD new file mode 100644 index 00000000..5aba4973 --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/RECORD @@ -0,0 +1,22 @@ +bitarray-2.6.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +bitarray-2.6.0.dist-info/LICENSE,sha256=MN_Xny88XSgvo3rDjNK1eFU-yV3lDF5b1I6jjlrkkyc,2412 +bitarray-2.6.0.dist-info/METADATA,sha256=LZD2HKDyaucfkVUYs2O-KE9OzKn4ow6vxbkZXj5M59k,30575 +bitarray-2.6.0.dist-info/RECORD,, +bitarray-2.6.0.dist-info/WHEEL,sha256=paN2rHE-sLfyg0Z4YvQnentMRWXxZnkclRDH8E5J6qk,148 +bitarray-2.6.0.dist-info/top_level.txt,sha256=cLdoeIHotTwh4eHSe8qy8umF7zzyxUM90I8zFb2_xhg,9 +bitarray/__init__.py,sha256=Lj3UY4MwrGQT2GkQgVk-g1fBShfpimUgli6778miZsY,2726 +bitarray/__init__.pyi,sha256=81XYiH_Sd_wk4b1_jNd6sj7WMRATFWy35t5meOeOZ3E,4724 +bitarray/__pycache__/__init__.cpython-38.pyc,, +bitarray/__pycache__/test_bitarray.cpython-38.pyc,, +bitarray/__pycache__/test_util.cpython-38.pyc,, +bitarray/__pycache__/util.cpython-38.pyc,, +bitarray/_bitarray.cpython-38-x86_64-linux-gnu.so,sha256=yCwcmDuYl9ti1660aCJsvUnY0OgAH6kmBds1NCm8b9w,420880 +bitarray/_util.cpython-38-x86_64-linux-gnu.so,sha256=TKDaXEgWUAmnte4uiYd_B8RPNXWA_S8BfUjVr8mu1QI,118560 +bitarray/bitarray.h,sha256=torTfxtDnZyCVeBxlD87ZpocaAcBA3CGQMma1molKyI,7488 +bitarray/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +bitarray/pythoncapi_compat.h,sha256=tardZ3T2CnEfUO6kb2s8d1TkmHUAqg4XzRTQwUiOdUU,8849 +bitarray/test_bitarray.py,sha256=4RfkJo9-1z2Mbnh0XGhOSfdmrrCpNe6FckA8jkN64X4,162559 +bitarray/test_data.pickle,sha256=v5oAS4jJwWDVDh8s0mPn4iCvwN3-Y5sZWpVGkpwyCqw,356 +bitarray/test_util.py,sha256=ZlnFGR04oGUdfO27ea-msNeYWZ5_hEhc0WDNNvfNEHI,69593 +bitarray/util.py,sha256=13Lv16p0JUlhwLXJUWAPhXZScvnNa6JOP57k_EcWTak,15115 +bitarray/util.pyi,sha256=ws6RZVijd6p7ffAVTZjPyXuEbhM9xBYyL5fGr71yWiI,2192 diff --git a/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/WHEEL new file mode 100644 index 00000000..32bdea04 --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.0) +Root-Is-Purelib: false +Tag: cp38-cp38-manylinux_2_17_x86_64 +Tag: cp38-cp38-manylinux2014_x86_64 + diff --git a/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/top_level.txt b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/top_level.txt new file mode 100644 index 00000000..aab91b87 --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray-2.6.0.dist-info/top_level.txt @@ -0,0 +1 @@ +bitarray diff --git a/env/lib/python3.8/site-packages/bitarray/__init__.py b/env/lib/python3.8/site-packages/bitarray/__init__.py new file mode 100644 index 00000000..bbc3e28b --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray/__init__.py @@ -0,0 +1,79 @@ +# Copyright (c) 2008 - 2022, Ilan Schnell; All Rights Reserved +""" +This package defines an object type which can efficiently represent +a bitarray. Bitarrays are sequence types and behave very much like lists. 
+ +Please find a description of this package at: + + https://github.com/ilanschnell/bitarray + +Author: Ilan Schnell +""" +from __future__ import absolute_import + +from bitarray._bitarray import (bitarray, decodetree, _sysinfo, + get_default_endian, _set_default_endian, + __version__) + + +__all__ = ['bitarray', 'frozenbitarray', 'decodetree', 'bits2bytes'] + + +class frozenbitarray(bitarray): + """frozenbitarray(initializer=0, /, endian='big', buffer=None) -> \ +frozenbitarray + +Return a frozenbitarray object, which is initialized the same way a bitarray +object is initialized. A frozenbitarray is immutable and hashable. +Its contents cannot be altered after it is created; however, it can be used +as a dictionary key. +""" + def __init__(self, *args, **kwargs): + self._freeze() + + def __repr__(self): + return 'frozen' + bitarray.__repr__(self) + + def __hash__(self): + "Return hash(self)." + if getattr(self, '_hash', None) is None: + # ensure hash is independent of endianness, also the copy will be + # mutable such that .tobytes() can zero out the pad bits + a = bitarray(self, 'big') + self._hash = hash((len(a), a.tobytes())) + return self._hash + + # Technically the code below is not necessary, as all these methods will + # raise a TypeError on read-only memory. However, with a different error + # message. + def __delitem__(self, *args, **kwargs): + "" # no docstring + raise TypeError("frozenbitarray is immutable") + + append = bytereverse = clear = extend = encode = fill = __delitem__ + frombytes = fromfile = insert = invert = pack = pop = __delitem__ + remove = reverse = setall = sort = __setitem__ = __delitem__ + __iadd__ = __iand__ = __imul__ = __ior__ = __ixor__ = __delitem__ + __ilshift__ = __irshift__ = __delitem__ + + +def bits2bytes(__n): + """bits2bytes(n, /) -> int + +Return the number of bytes necessary to store n bits. +""" + import sys + if not isinstance(__n, (int, long) if sys.version_info[0] == 2 else int): + raise TypeError("integer expected") + if __n < 0: + raise ValueError("non-negative integer expected") + return (__n + 7) // 8 + + +def test(verbosity=1, repeat=1): + """test(verbosity=1, repeat=1) -> TextTestResult + +Run self-test, and return unittest.runner.TextTestResult object. +""" + from bitarray import test_bitarray + return test_bitarray.run(verbosity=verbosity, repeat=repeat) diff --git a/env/lib/python3.8/site-packages/bitarray/__init__.pyi b/env/lib/python3.8/site-packages/bitarray/__init__.pyi new file mode 100644 index 00000000..5a0867c6 --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray/__init__.pyi @@ -0,0 +1,137 @@ +# Copyright (c) 2021 - 2022, Ilan Schnell; All Rights Reserved +# +# This stub, as well as util.pyi, are tested with Python 3.9 and mypy 0.950 + +from collections.abc import Iterable, Iterator +from unittest.runner import TextTestResult + +from typing import Any, BinaryIO, Dict, Union, overload + + +CodeDict = Dict[Any, bitarray] +BytesLike = Union[bytes, Iterable[int]] + + +class decodetree: + def __init__(self, code: CodeDict) -> None: ... + def complete(self) -> bool: ... + def nodes(self) -> int: ... + def todict(self) -> CodeDict: ... + + +class bitarray: + def __init__(self, + initializer: Union[int, str, Iterable[int], None] = ..., + endian: Union[str, None] = ..., + buffer: Any = ...) -> None: ... + + def all(self) -> bool: ... + def any(self) -> bool: ... + def append(self, value: int) -> None: ... + def buffer_info(self) -> tuple: ... + def bytereverse(self, + start: int = ..., + stop: int = ...) -> None: ... 
+ + def clear(self) -> None: ... + def copy(self) -> bitarray: ... + def count(self, + value: int = ..., + start: int = ..., + stop: int = ..., + step: int = ...) -> int: ... + + def decode(self, code: Union[CodeDict, decodetree]) -> list: ... + def encode(self, code: CodeDict, x: Iterable) -> None: ... + def endian(self) -> str: ... + def extend(self, x: Union[str, Iterable[int]]) -> None: ... + def fill(self) -> int: ... + def find(self, + a: Union[bitarray, int], + start: int = ..., + stop: int = ...) -> int: ... + + def frombytes(self, a: BytesLike) -> None: ... + def fromfile(self, f: BinaryIO, n: int = ...) -> None: ... + def index(self, + a: Union[bitarray, int], + start: int = ..., + stop: int = ...) -> int: ... + + def insert(self, i: int, value: int) -> None: ... + def invert(self, i: int = ...) -> None: ... + def iterdecode(self, + code: Union[CodeDict, decodetree]) -> Iterator: ... + + def itersearch(self, a: Union[bitarray, int]) -> Iterator[int]: ... + def pack(self, b: BytesLike) -> None: ... + def pop(self, i: int = ...) -> int: ... + def remove(self, value: int) -> None: ... + def reverse(self) -> None: ... + def search(self, a: Union[bitarray, int], + limit: int = ...) -> list[int]: ... + + def setall(self, value: int) -> None: ... + def sort(self, reverse: int) -> None: ... + def to01(self) -> str: ... + def tobytes(self) -> bytes: ... + def tofile(self, f: BinaryIO) -> None: ... + def tolist(self) -> list[int]: ... + def unpack(self, + zero: bytes = ..., + one: bytes = ...) -> bytes: ... + + def __len__(self) -> int: ... + def __iter__(self) -> Iterator[int]: ... + @overload + def __getitem__(self, i: int) -> int: ... + @overload + def __getitem__(self, s: slice) -> bitarray: ... + @overload + def __setitem__(self, i: Union[int, slice], o: int) -> None: ... + @overload + def __setitem__(self, s: slice, o: bitarray) -> None: ... + def __delitem__(self, i: Union[int, slice]) -> None: ... + + def __add__(self, other: bitarray) -> bitarray: ... + def __iadd__(self, other: bitarray) -> bitarray: ... + def __mul__(self, n: int) -> bitarray: ... + def __imul__(self, n: int) -> bitarray: ... + def __rmul__(self, n: int) -> bitarray: ... + + def __ge__(self, other: bitarray) -> bool: ... + def __gt__(self, other: bitarray) -> bool: ... + def __le__(self, other: bitarray) -> bool: ... + def __lt__(self, other: bitarray) -> bool: ... + + def __and__(self, other: bitarray) -> bitarray: ... + def __or__(self, other: bitarray) -> bitarray: ... + def __xor__(self, other: bitarray) -> bitarray: ... + def __iand__(self, other: bitarray) -> bitarray: ... + def __ior__(self, other: bitarray) -> bitarray: ... + def __ixor__(self, other: bitarray) -> bitarray: ... + def __invert__(self) -> bitarray: ... + def __lshift__(self, n: int) -> bitarray: ... + def __rshift__(self, n: int) -> bitarray: ... + def __ilshift__(self, n: int) -> bitarray: ... + def __irshift__(self, n: int) -> bitarray: ... + + # data descriptors + @property + def nbytes(self) -> int: ... + @property + def padbits(self) -> int: ... + @property + def readonly(self) -> bool: ... + + +class frozenbitarray(bitarray): + def __hash__(self) -> int: ... + + +__version__: str +def bits2bytes(n: int) -> int: ... +def get_default_endian() -> str: ... +def test(verbosity: int = ..., repeat: int = ...) -> TextTestResult: ... +def _set_default_endian(endian: str) -> None: ... +def _sysinfo() -> tuple: ... 
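
A minimal sketch of the ``frozenbitarray`` behavior implemented and declared
above: instances are hashable (usable as dictionary keys) and reject any
mutation with a ``TypeError``::

   >>> from bitarray import frozenbitarray
   >>> key = frozenbitarray('1100')
   >>> {key: 'value'}[frozenbitarray('1100')]
   'value'
   >>> key[0] = 0
   Traceback (most recent call last):
       ...
   TypeError: frozenbitarray is immutable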
diff --git a/env/lib/python3.8/site-packages/bitarray/_bitarray.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/bitarray/_bitarray.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..9159fa26 Binary files /dev/null and b/env/lib/python3.8/site-packages/bitarray/_bitarray.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/bitarray/_util.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/bitarray/_util.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..69247346 Binary files /dev/null and b/env/lib/python3.8/site-packages/bitarray/_util.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/bitarray/bitarray.h b/env/lib/python3.8/site-packages/bitarray/bitarray.h new file mode 100644 index 00000000..9da2c4ff --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray/bitarray.h @@ -0,0 +1,240 @@ +/* + Copyright (c) 2008 - 2022, Ilan Schnell; All Rights Reserved + bitarray is published under the PSF license. + + Author: Ilan Schnell +*/ +#define BITARRAY_VERSION "2.6.0" + +#ifdef STDC_HEADERS +#include +#else /* !STDC_HEADERS */ +#ifdef HAVE_SYS_TYPES_H +#include /* For size_t */ +#endif /* HAVE_SYS_TYPES_H */ +#endif /* !STDC_HEADERS */ + +/* Compatibility with Visual Studio 2013 and older which don't support + the inline keyword in C (only in C++): use __inline instead. + (copied from pythoncapi_compat.h) */ +#if (defined(_MSC_VER) && _MSC_VER < 1900 \ + && !defined(__cplusplus) && !defined(inline)) +#define inline __inline +#endif + +/* --- definitions specific to Python --- */ + +/* Py_UNREACHABLE was introduced in Python 3.7 */ +#ifndef Py_UNREACHABLE +#define Py_UNREACHABLE() abort() +#endif + +#if PY_MAJOR_VERSION >= 3 +#define IS_PY3K +#define BYTES_SIZE_FMT "y#" +#else +/* the Py_MIN and Py_MAX macros were introduced in Python 3.3 */ +#define Py_MIN(x, y) (((x) > (y)) ? (y) : (x)) +#define Py_MAX(x, y) (((x) > (y)) ? (x) : (y)) +#define PySlice_GetIndicesEx(slice, len, start, stop, step, slicelength) \ + PySlice_GetIndicesEx(((PySliceObject *) slice), \ + (len), (start), (stop), (step), (slicelength)) +#define PyLong_FromLong PyInt_FromLong +#define BYTES_SIZE_FMT "s#" +#endif + +/* --- bitarrayobject --- */ + +/* .ob_size is buffer size (in bytes), not the number of elements. + The number of elements (bits) is .nbits. */ +typedef struct { + PyObject_VAR_HEAD + char *ob_item; /* buffer */ + Py_ssize_t allocated; /* allocated buffer size (in bytes) */ + Py_ssize_t nbits; /* length of bitarray, i.e. elements */ + int endian; /* bit endianness of bitarray */ + int ob_exports; /* how many buffer exports */ + PyObject *weakreflist; /* list of weak references */ + Py_buffer *buffer; /* used when importing a buffer */ + int readonly; /* buffer is readonly */ +} bitarrayobject; + +/* --- bit endianness --- */ +#define ENDIAN_LITTLE 0 +#define ENDIAN_BIG 1 + +#define IS_LE(self) ((self)->endian == ENDIAN_LITTLE) +#define IS_BE(self) ((self)->endian == ENDIAN_BIG) + +/* the endianness string */ +#define ENDIAN_STR(endian) ((endian) == ENDIAN_LITTLE ? "little" : "big") + +/* number of pad bits */ +#define PADBITS(self) (8 * Py_SIZE(self) - (self)->nbits) + +/* number of bytes necessary to store given bits */ +#define BYTES(bits) (((bits) + 7) >> 3) + +/* we're not using bitmask_table here, as it is actually slower */ +#define BITMASK(self, i) (((char) 1) << ((self)->endian == ENDIAN_LITTLE ? 
\ + ((i) % 8) : (7 - (i) % 8))) + +/* assert that .nbits is in agreement with .ob_size */ +#define assert_nbits(self) assert(BYTES((self)->nbits) == Py_SIZE(self)) + +/* assert byte index is in range */ +#define assert_byte_in_range(self, j) \ + assert(self->ob_item && 0 <= (j) && (j) < Py_SIZE(self)) + +/* ------------ low level access to bits in bitarrayobject ------------- */ + +static inline int +getbit(bitarrayobject *self, Py_ssize_t i) +{ + assert_nbits(self); + assert(0 <= i && i < self->nbits); + return self->ob_item[i >> 3] & BITMASK(self, i) ? 1 : 0; +} + +static inline void +setbit(bitarrayobject *self, Py_ssize_t i, int vi) +{ + char *cp, mask; + + assert_nbits(self); + assert(0 <= i && i < self->nbits); + assert(self->readonly == 0); + + mask = BITMASK(self, i); + cp = self->ob_item + (i >> 3); + if (vi) + *cp |= mask; + else + *cp &= ~mask; +} + +static const char bitmask_table[2][8] = { + {0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80}, /* little endian */ + {0x80, 0x40, 0x20, 0x10, 0x08, 0x04, 0x02, 0x01}, /* big endian */ +}; + +/* character with n leading ones is: ones_table[endian][n] */ +static const char ones_table[2][8] = { + {0x00, 0x01, 0x03, 0x07, 0x0f, 0x1f, 0x3f, 0x7f}, /* little endian */ + {0x00, 0x80, 0xc0, 0xe0, 0xf0, 0xf8, 0xfc, 0xfe}, /* big endian */ +}; + +/* Return last byte in buffer with pad bits zeroed out. The the number of + bits in the bitarray must not be a multiple of 8. */ +static inline char +zeroed_last_byte(bitarrayobject *self) +{ + const int r = self->nbits % 8; /* index into mask table */ + + assert(r > 0); + assert_nbits(self); + return ones_table[IS_BE(self)][r] & self->ob_item[Py_SIZE(self) - 1]; +} + +/* Unless buffer is readonly, zero out pad bits. + Always return the number of pad bits - leave self->nbits unchanged */ +static inline int +set_padbits(bitarrayobject *self) +{ + const int r = self->nbits % 8; + + if (r == 0) + return 0; + if (self->readonly == 0) + self->ob_item[Py_SIZE(self) - 1] = zeroed_last_byte(self); + return 8 - r; +} + +static const unsigned char bitcount_lookup[256] = { +#define B2(n) n, n + 1, n + 1, n + 2 +#define B4(n) B2(n), B2(n + 1), B2(n + 1), B2(n + 2) +#define B6(n) B4(n), B4(n + 1), B4(n + 1), B4(n + 2) + B6(0), B6(1), B6(1), B6(2) +#undef B2 +#undef B4 +#undef B6 +}; + +/* adjust index a manner consistent with the handling of normal slices */ +static inline void +adjust_index(Py_ssize_t length, Py_ssize_t *i, Py_ssize_t step) +{ + if (*i < 0) { + *i += length; + if (*i < 0) + *i = (step < 0) ? -1 : 0; + } + else if (*i >= length) { + *i = (step < 0) ? length - 1 : length; + } +} + +/* same as PySlice_AdjustIndices() which was introduced in Python 3.6.1 */ +static inline Py_ssize_t +adjust_indices(Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop, + Py_ssize_t step) +{ +#if PY_VERSION_HEX > 0x03060100 + return PySlice_AdjustIndices(length, start, stop, step); +#else + assert(step != 0); + adjust_index(length, start, step); + adjust_index(length, stop, step); + /* + a / b does integer division. If either a or b is negative, the result + depends on the compiler (rounding can go toward 0 or negative infinity). + Therefore, we are careful that both a and b are always positive. 
+ */ + if (step < 0) { + if (*stop < *start) + return (*start - *stop - 1) / (-step) + 1; + } + else { + if (*start < *stop) + return (*stop - *start - 1) / step + 1; + } + return 0; +#endif +} + +/* adjust slice parameters such that step is always positive; produces + simpler loops over elements when their order is irrelevant */ +static inline void +adjust_step_positive(Py_ssize_t slicelength, + Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step) +{ + if (*step < 0) { + *stop = *start + 1; + *start = *stop + *step * (slicelength - 1) - 1; + *step = -(*step); + } + assert(*start >= 0 && *stop >= 0 && *step > 0 && slicelength >= 0); + /* slicelength == 0 implies stop <= start */ + assert(slicelength != 0 || *stop <= *start); + /* step == 1 and slicelength != 0 implies stop - start == slicelength */ + assert(*step != 1 || slicelength == 0 || *stop - *start == slicelength); +} + +/* convert Python object to C int and set value at address - + return 1 on success, 0 on failure (and raise exception) */ +static inline int +conv_pybit(PyObject *value, int *vi) +{ + Py_ssize_t n; + + n = PyNumber_AsSsize_t(value, NULL); + if (n == -1 && PyErr_Occurred()) + return 0; + + if (n < 0 || n > 1) { + PyErr_Format(PyExc_ValueError, "bit must be 0 or 1, got %zd", n); + return 0; + } + *vi = (int) n; + return 1; +} diff --git a/env/lib/python3.8/site-packages/bitarray/py.typed b/env/lib/python3.8/site-packages/bitarray/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/bitarray/pythoncapi_compat.h b/env/lib/python3.8/site-packages/bitarray/pythoncapi_compat.h new file mode 100644 index 00000000..d9de1818 --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray/pythoncapi_compat.h @@ -0,0 +1,347 @@ +// Header file providing new functions of the Python C API to old Python +// versions. +// +// File distributed under the MIT license. +// +// Homepage: +// https://github.com/pythoncapi/pythoncapi_compat +// +// Latest version: +// https://raw.githubusercontent.com/pythoncapi/pythoncapi_compat/master/pythoncapi_compat.h +// +// SPDX-License-Identifier: MIT + +#ifndef PYTHONCAPI_COMPAT +#define PYTHONCAPI_COMPAT + +#ifdef __cplusplus +extern "C" { +#endif + +#include +#include "frameobject.h" // PyFrameObject, PyFrame_GetBack() + + +// Compatibility with Visual Studio 2013 and older which don't support +// the inline keyword in C (only in C++): use __inline instead. +#if (defined(_MSC_VER) && _MSC_VER < 1900 \ + && !defined(__cplusplus) && !defined(inline)) +# define inline __inline +# define PYTHONCAPI_COMPAT_MSC_INLINE + // These two macros are undefined at the end of this file +#endif + + +// Cast argument to PyObject* type. 
+#ifndef _PyObject_CAST +# define _PyObject_CAST(op) ((PyObject*)(op)) +#endif +#ifndef _PyObject_CAST_CONST +# define _PyObject_CAST_CONST(op) ((const PyObject*)(op)) +#endif + + +// bpo-42262 added Py_NewRef() to Python 3.10.0a3 +#if PY_VERSION_HEX < 0x030A00A3 && !defined(Py_NewRef) +static inline PyObject* _Py_NewRef(PyObject *obj) +{ + Py_INCREF(obj); + return obj; +} +#define Py_NewRef(obj) _Py_NewRef(_PyObject_CAST(obj)) +#endif + + +// bpo-42262 added Py_XNewRef() to Python 3.10.0a3 +#if PY_VERSION_HEX < 0x030A00A3 && !defined(Py_XNewRef) +static inline PyObject* _Py_XNewRef(PyObject *obj) +{ + Py_XINCREF(obj); + return obj; +} +#define Py_XNewRef(obj) _Py_XNewRef(_PyObject_CAST(obj)) +#endif + + +// See https://bugs.python.org/issue42522 +#if !defined(_Py_StealRef) +static inline PyObject* __Py_StealRef(PyObject *obj) +{ + Py_DECREF(obj); + return obj; +} +#define _Py_StealRef(obj) __Py_StealRef(_PyObject_CAST(obj)) +#endif + + +// See https://bugs.python.org/issue42522 +#if !defined(_Py_XStealRef) +static inline PyObject* __Py_XStealRef(PyObject *obj) +{ + Py_XDECREF(obj); + return obj; +} +#define _Py_XStealRef(obj) __Py_XStealRef(_PyObject_CAST(obj)) +#endif + + +// bpo-39573 added Py_SET_REFCNT() to Python 3.9.0a4 +#if PY_VERSION_HEX < 0x030900A4 && !defined(Py_SET_REFCNT) +static inline void _Py_SET_REFCNT(PyObject *ob, Py_ssize_t refcnt) +{ + ob->ob_refcnt = refcnt; +} +#define Py_SET_REFCNT(ob, refcnt) _Py_SET_REFCNT(_PyObject_CAST(ob), refcnt) +#endif + + +// Py_SETREF() and Py_XSETREF() were added to Python 3.5.2. +// It is excluded from the limited C API. +#if (PY_VERSION_HEX < 0x03050200 && !defined(Py_SETREF)) && !defined(Py_LIMITED_API) +#define Py_SETREF(op, op2) \ + do { \ + PyObject *_py_tmp = _PyObject_CAST(op); \ + (op) = (op2); \ + Py_DECREF(_py_tmp); \ + } while (0) + +#define Py_XSETREF(op, op2) \ + do { \ + PyObject *_py_tmp = _PyObject_CAST(op); \ + (op) = (op2); \ + Py_XDECREF(_py_tmp); \ + } while (0) +#endif + +// bpo-39573 added Py_SET_TYPE() to Python 3.9.0a4 +#if PY_VERSION_HEX < 0x030900A4 && !defined(Py_SET_TYPE) +static inline void +_Py_SET_TYPE(PyObject *ob, PyTypeObject *type) +{ + ob->ob_type = type; +} +#define Py_SET_TYPE(ob, type) _Py_SET_TYPE(_PyObject_CAST(ob), type) +#endif + + +// bpo-39573 added Py_SET_SIZE() to Python 3.9.0a4 +#if PY_VERSION_HEX < 0x030900A4 && !defined(Py_SET_SIZE) +static inline void +_Py_SET_SIZE(PyVarObject *ob, Py_ssize_t size) +{ + ob->ob_size = size; +} +#define Py_SET_SIZE(ob, size) _Py_SET_SIZE((PyVarObject*)(ob), size) +#endif + + +// bpo-40421 added PyFrame_GetCode() to Python 3.9.0b1 +#if PY_VERSION_HEX < 0x030900B1 +static inline PyCodeObject* +PyFrame_GetCode(PyFrameObject *frame) +{ + assert(frame != NULL); + assert(frame->f_code != NULL); + return (PyCodeObject*)Py_NewRef(frame->f_code); +} +#endif + +static inline PyCodeObject* +_PyFrame_GetCodeBorrow(PyFrameObject *frame) +{ + return (PyCodeObject *)_Py_StealRef(PyFrame_GetCode(frame)); +} + + +// bpo-40421 added PyFrame_GetCode() to Python 3.9.0b1 +#if PY_VERSION_HEX < 0x030900B1 && !defined(PYPY_VERSION) +static inline PyFrameObject* +PyFrame_GetBack(PyFrameObject *frame) +{ + assert(frame != NULL); + return (PyFrameObject*)Py_XNewRef(frame->f_back); +} +#endif + +#if !defined(PYPY_VERSION) +static inline PyFrameObject* +_PyFrame_GetBackBorrow(PyFrameObject *frame) +{ + return (PyFrameObject *)_Py_XStealRef(PyFrame_GetBack(frame)); +} +#endif + + +// bpo-39947 added PyThreadState_GetInterpreter() to Python 3.9.0a5 +#if PY_VERSION_HEX < 0x030900A5 
+static inline PyInterpreterState * +PyThreadState_GetInterpreter(PyThreadState *tstate) +{ + assert(tstate != NULL); + return tstate->interp; +} +#endif + + +// bpo-40429 added PyThreadState_GetFrame() to Python 3.9.0b1 +#if PY_VERSION_HEX < 0x030900B1 && !defined(PYPY_VERSION) +static inline PyFrameObject* +PyThreadState_GetFrame(PyThreadState *tstate) +{ + assert(tstate != NULL); + return (PyFrameObject *)Py_XNewRef(tstate->frame); +} +#endif + +#if !defined(PYPY_VERSION) +static inline PyFrameObject* +_PyThreadState_GetFrameBorrow(PyThreadState *tstate) +{ + return (PyFrameObject *)_Py_XStealRef(PyThreadState_GetFrame(tstate)); +} +#endif + + +// bpo-39947 added PyInterpreterState_Get() to Python 3.9.0a5 +#if PY_VERSION_HEX < 0x030900A5 +static inline PyInterpreterState * +PyInterpreterState_Get(void) +{ + PyThreadState *tstate; + PyInterpreterState *interp; + + tstate = PyThreadState_GET(); + if (tstate == NULL) { + Py_FatalError("GIL released (tstate is NULL)"); + } + interp = tstate->interp; + if (interp == NULL) { + Py_FatalError("no current interpreter"); + } + return interp; +} +#endif + + +// bpo-39947 added PyInterpreterState_Get() to Python 3.9.0a6 +#if 0x030700A1 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x030900A6 && !defined(PYPY_VERSION) +static inline uint64_t +PyThreadState_GetID(PyThreadState *tstate) +{ + assert(tstate != NULL); + return tstate->id; +} +#endif + + +// bpo-37194 added PyObject_CallNoArgs() to Python 3.9.0a1 +#if PY_VERSION_HEX < 0x030900A1 +static inline PyObject* +PyObject_CallNoArgs(PyObject *func) +{ + return PyObject_CallFunctionObjArgs(func, NULL); +} +#endif + + +// bpo-39245 made PyObject_CallOneArg() public (previously called +// _PyObject_CallOneArg) in Python 3.9.0a4 +#if PY_VERSION_HEX < 0x030900A4 +static inline PyObject* +PyObject_CallOneArg(PyObject *func, PyObject *arg) +{ + return PyObject_CallFunctionObjArgs(func, arg, NULL); +} +#endif + + +// bpo-1635741 added PyModule_AddObjectRef() to Python 3.10.0a3 +#if PY_VERSION_HEX < 0x030A00A3 +static inline int +PyModule_AddObjectRef(PyObject *module, const char *name, PyObject *value) +{ + int res; + Py_XINCREF(value); + res = PyModule_AddObject(module, name, value); + if (res < 0) { + Py_XDECREF(value); + } + return res; +} +#endif + + +// bpo-40024 added PyModule_AddType() to Python 3.9.0a5 +#if PY_VERSION_HEX < 0x030900A5 +static inline int +PyModule_AddType(PyObject *module, PyTypeObject *type) +{ + const char *name, *dot; + + if (PyType_Ready(type) < 0) { + return -1; + } + + // inline _PyType_Name() + name = type->tp_name; + assert(name != NULL); + dot = strrchr(name, '.'); + if (dot != NULL) { + name = dot + 1; + } + + return PyModule_AddObjectRef(module, name, (PyObject *)type); +} +#endif + + +// bpo-40241 added PyObject_GC_IsTracked() to Python 3.9.0a6. +// bpo-4688 added _PyObject_GC_IS_TRACKED() to Python 2.7.0a2. +#if PY_VERSION_HEX < 0x030900A6 && !defined(PYPY_VERSION) +static inline int +PyObject_GC_IsTracked(PyObject* obj) +{ + return (PyObject_IS_GC(obj) && _PyObject_GC_IS_TRACKED(obj)); +} +#endif + +// bpo-40241 added PyObject_GC_IsFinalized() to Python 3.9.0a6. +// bpo-18112 added _PyGCHead_FINALIZED() to Python 3.4.0 final. 
+#if PY_VERSION_HEX < 0x030900A6 && PY_VERSION_HEX >= 0x030400F0 && !defined(PYPY_VERSION) +static inline int +PyObject_GC_IsFinalized(PyObject *obj) +{ + return (PyObject_IS_GC(obj) && _PyGCHead_FINALIZED((PyGC_Head *)(obj)-1)); +} +#endif + + +// bpo-39573 added Py_IS_TYPE() to Python 3.9.0a4 +#if PY_VERSION_HEX < 0x030900A4 && !defined(Py_IS_TYPE) +static inline int +_Py_IS_TYPE(const PyObject *ob, const PyTypeObject *type) { + return ob->ob_type == type; +} +#define Py_IS_TYPE(ob, type) _Py_IS_TYPE(_PyObject_CAST_CONST(ob), type) +#endif + + +// Py_UNUSED() was added to Python 3.4.0b2. +#if PY_VERSION_HEX < 0x030400B2 && !defined(Py_UNUSED) +# if defined(__GNUC__) || defined(__clang__) +# define Py_UNUSED(name) _unused_ ## name __attribute__((unused)) +# else +# define Py_UNUSED(name) _unused_ ## name +# endif +#endif + + +#ifdef PYTHONCAPI_COMPAT_MSC_INLINE +# undef inline +# undef PYTHONCAPI_COMPAT_MSC_INLINE +#endif + +#ifdef __cplusplus +} +#endif +#endif // PYTHONCAPI_COMPAT diff --git a/env/lib/python3.8/site-packages/bitarray/test_bitarray.py b/env/lib/python3.8/site-packages/bitarray/test_bitarray.py new file mode 100644 index 00000000..21d1c44b --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray/test_bitarray.py @@ -0,0 +1,4640 @@ +""" +Tests for bitarray + +Author: Ilan Schnell +""" +from __future__ import absolute_import + +import re +import os +import sys +import platform +import unittest +import shutil +import tempfile +from random import randint + +# imports needed inside tests +import array +import copy +import itertools +import mmap +import pickle +import shelve +import weakref + + +is_py3k = bool(sys.version_info[0] == 3) +pyodide = bool(platform.machine() == 'wasm32') + +if is_py3k: + from io import BytesIO +else: + from cStringIO import StringIO as BytesIO # type: ignore + range = xrange # type: ignore + + +from bitarray import (bitarray, frozenbitarray, bits2bytes, decodetree, + get_default_endian, _set_default_endian, + _sysinfo, __version__) + +def skipIf(condition): + "Skip a test if the condition is true." + if condition: + return lambda f: None + return lambda f: f + +SYSINFO = _sysinfo() +DEBUG = SYSINFO[6] + +def buffer_info(a, key=None): + fields = ( + "address", # 0. address of byte buffer + "size", # 1. buffer size in bytes + "endian", # 2. bit endianness + "padding", # 3. number of pad bits + "allocated", # 4. allocated memory + "readonly", # 5. memory is read-only + "imported", # 6. buffer is imported + "exports", # 7. 
number of buffer exports + ) + info = a.buffer_info() + res = dict(zip(fields, info)) + return res if key is None else res[key] + + +# avoid importing from bitarray.util +def zeros(n, endian=None): + a = bitarray(n, endian or get_default_endian()) + a.setall(0) + return a + +def urandom(n, endian=None): + a = bitarray(0, endian or get_default_endian()) + a.frombytes(os.urandom(bits2bytes(n))) + del a[n:] + return a + +tests = [] # type: list + +class Util(object): + + @staticmethod + def random_endian(): + return ['little', 'big'][randint(0, 1)] + + @staticmethod + def randombitarrays(start=0): + for n in list(range(start, 25)) + [randint(1000, 2000)]: + a = bitarray(endian=['little', 'big'][randint(0, 1)]) + a.frombytes(os.urandom(bits2bytes(n))) + del a[n:] + yield a + + def randomlists(self): + for a in self.randombitarrays(): + yield a.tolist() + + @staticmethod + def rndsliceidx(length): + if randint(0, 1): + return None + else: + return randint(-length - 5, length + 5) + + @staticmethod + def other_endian(endian): + t = {'little': 'big', + 'big': 'little'} + return t[endian] + + @staticmethod + def calc_slicelength(s, length): + assert isinstance(s, slice) + start, stop, step = s.indices(length) + assert step < 0 or (start >= 0 and stop >= 0) + assert step > 0 or (start >= -1 and stop >= -1) + + # This implementation works because Python's floor division (a // b) + # always rounds to the lowest integer, even when a or b are negative. + res1 = (stop - start + (1 if step < 0 else -1)) // step + 1 + if res1 < 0: + res1 = 0 + + # The above implementation is not used in C. + # In C's a / b, if either a or b is negative, the result depends on + # the compiler. Therefore, we use the implementation below (where + # both a and b are always positive). + res2 = 0 + if step < 0: + if stop < start: + res2 = (start - stop - 1) // (-step) + 1 + else: + if start < stop: + res2 = (stop - start - 1) // step + 1 + + assert res1 == res2 + return res1 + + def check_obj(self, a): + self.assertIsInstance(a, bitarray) + + ptr, size, endian, padbits, alloc, readonly, buf, exports = \ + a.buffer_info() + + self.assertEqual(size, bits2bytes(len(a))) + self.assertEqual(padbits, 8 * size - len(a)) + self.assertTrue(0 <= padbits < 8) + self.assertEqual(endian, a.endian()) + self.assertTrue(endian in ('little', 'big')) + + self.assertEqual(a.nbytes, size) + self.assertEqual(a.padbits, padbits) + self.assertEqual(a.readonly, readonly) + self.assertEqual(len(a) + a.padbits, 8 * a.nbytes) + + if buf: + # imported buffer implies that no extra memory is allocated + self.assertEqual(alloc, 0) + # an imported buffer will always have a multiple of 8 bits + self.assertEqual(len(a) % 8, 0) + self.assertEqual(len(a), 8 * size) + self.assertEqual(padbits, 0) + else: + # the allocated memory is always larger than the buffer size + self.assertTrue(alloc >= size) + + if ptr == 0: + # the buffer being a NULL pointer implies that the buffer size + # and the allocated memory size are 0 + self.assertEqual(size, 0) + self.assertEqual(alloc, 0) + + if type(a).__name__ == 'frozenbitarray': + # frozenbitarray have read-only memory + self.assertEqual(readonly, 1) + if padbits: # ensure padbits are zero + b = bitarray(endian=endian) + b.frombytes(a.tobytes()[-1:]) + self.assertEqual(b[-padbits:], zeros(padbits)) + elif not buf: + # otherwise, unless the buffer is imported, it is writable + self.assertEqual(readonly, 0) + + def assertEQUAL(self, a, b): + self.assertEqual(a, b) + self.assertEqual(a.endian(), b.endian()) + + def 
assertIsType(self, a, b): + self.assertEqual(type(a).__name__, b) + self.assertEqual( + repr(type(a)), "<%s 'bitarray.%s'>" % + ('class' if is_py3k or b == 'frozenbitarray' else 'type', b)) + + def assertBitEqual(self, x, y): + for z in x, y: + self.assertEqual('01'[z], repr(z)) + self.assertEqual(x, y) + + def assertStopIteration(self, it): + self.assertRaises(StopIteration, next, it) + + def assertRaisesMessage(self, excClass, msg, callable, *args, **kwargs): + try: + callable(*args, **kwargs) + raise AssertionError("%s not raised" % excClass.__name__) + except excClass as e: + if msg != str(e): + raise AssertionError("message: %s\n got: %s" % (msg, e)) + +# --------------------------------------------------------------------------- + +class TestsModuleFunctions(unittest.TestCase, Util): + + def test_version_string(self): + # the version string is not a function, but test it here anyway + self.assertIsInstance(__version__, str) + + def test_sysinfo(self): + info = _sysinfo() + self.assertIsInstance(info, tuple) + for x in info: + self.assertIsInstance(x, int) + + def test_set_default_endian(self): + self.assertRaises(TypeError, _set_default_endian, 0) + self.assertRaises(TypeError, _set_default_endian, 'little', 0) + self.assertRaises(ValueError, _set_default_endian, 'foo') + for default_endian in 'big', 'little', u'big', u'little': + _set_default_endian(default_endian) + a = bitarray() + self.assertEqual(a.endian(), default_endian) + for x in None, 0, 64, '10111', [1, 0]: + a = bitarray(x) + self.assertEqual(a.endian(), default_endian) + + for endian in 'big', 'little', None: + a = bitarray(endian=endian) + self.assertEqual(a.endian(), + default_endian if endian is None else endian) + + # make sure that calling _set_default_endian wrong does not + # change the default endianness + self.assertRaises(ValueError, _set_default_endian, 'foobar') + self.assertEqual(bitarray().endian(), default_endian) + + def test_get_default_endian(self): + # takes no arguments + self.assertRaises(TypeError, get_default_endian, 'big') + for default_endian in 'big', 'little': + _set_default_endian(default_endian) + endian = get_default_endian() + self.assertEqual(endian, default_endian) + self.assertIsInstance(endian, str) + + def test_bits2bytes(self): + for arg in 'foo', [], None, {}, 187.0, -4.0: + self.assertRaises(TypeError, bits2bytes, arg) + + self.assertRaises(TypeError, bits2bytes) + self.assertRaises(TypeError, bits2bytes, 1, 2) + + self.assertRaises(ValueError, bits2bytes, -1) + self.assertRaises(ValueError, bits2bytes, -924) + + self.assertEqual(bits2bytes(0), 0) + for n in range(1, 100): + m = bits2bytes(n) + self.assertEqual(m, (n - 1) // 8 + 1) + self.assertIsInstance(m, int) + + for n, m in [(0, 0), (1, 1), (2, 1), (7, 1), (8, 1), (9, 2), + (10, 2), (15, 2), (16, 2), (64, 8), (65, 9), + (2**31, 2**28), (2**32, 2**29), (2**34, 2**31), + (2**34+793, 2**31+100), (2**35-8, 2**32-1), + (2**62, 2**59), (2**63-8, 2**60-1)]: + self.assertEqual(bits2bytes(n), m) + +tests.append(TestsModuleFunctions) + +# --------------------------------------------------------------------------- + +class CreateObjectTests(unittest.TestCase, Util): + + def test_noInitializer(self): + a = bitarray() + self.assertEqual(len(a), 0) + self.assertEqual(a.tolist(), []) + self.assertIsType(a, 'bitarray') + self.check_obj(a) + + def test_endian(self): + a = bitarray(endian='little') + a.frombytes(b'ABC') + self.assertEqual(a.endian(), 'little') + self.assertIsInstance(a.endian(), str) + self.check_obj(a) + + b = 
bitarray(endian='big') + b.frombytes(b'ABC') + self.assertEqual(b.endian(), 'big') + self.assertIsInstance(a.endian(), str) + self.check_obj(b) + + self.assertNotEqual(a, b) + self.assertEqual(a.tobytes(), b.tobytes()) + + def test_endian_default(self): + _set_default_endian('big') + a_big = bitarray() + _set_default_endian('little') + a_little = bitarray() + _set_default_endian('big') + + self.assertEqual(a_big.endian(), 'big') + self.assertEqual(a_little.endian(), 'little') + + def test_endian_wrong(self): + self.assertRaises(TypeError, bitarray, endian=0) + self.assertRaises(ValueError, bitarray, endian='') + self.assertRaisesMessage( + ValueError, + "bit endianness must be either 'little' or 'big', not 'foo'", + bitarray, endian='foo') + self.assertRaisesMessage(TypeError, + "'ellipsis' object is not iterable", + bitarray, Ellipsis) + + def test_buffer(self): + # buffer requires no initial argument + self.assertRaises(TypeError, bitarray, 5, buffer=b'DATA\0') + + for endian in 'big', 'little': + a = bitarray(buffer=b'', endian=endian) + self.assertEQUAL(a, bitarray(0, endian)) + + _set_default_endian(endian) + a = bitarray(buffer=b'A') + self.assertEqual(a.endian(), endian) + self.assertEqual(len(a), 8) + + a = bitarray(buffer=b'\xf0', endian='little') + self.assertRaises(TypeError, a.clear) + self.assertRaises(TypeError, a.__setitem__, 3, 1) + self.assertEQUAL(a, bitarray('00001111', 'little')) + self.check_obj(a) + + # positinal arguments + a = bitarray(None, 'big', bytearray([15])) + self.assertEQUAL(a, bitarray('00001111', 'big')) + a = bitarray(None, 'little', None) + self.assertEQUAL(a, bitarray(0, 'little')) + + def test_integers(self): + for n in range(50): + a = bitarray(n) + self.assertEqual(len(a), n) + self.check_obj(a) + + a = bitarray(int(n)) + self.assertEqual(len(a), n) + self.check_obj(a) + + if not is_py3k: + a = bitarray(long(29)) + self.assertEqual(len(a), 29) + + self.assertRaises(ValueError, bitarray, -1) + self.assertRaises(ValueError, bitarray, -924) + + def test_list(self): + lst = [0, 1, False, True] + a = bitarray(lst) + self.assertEqual(a, bitarray('0101')) + self.check_obj(a) + + if not is_py3k: + a = bitarray([long(1), long(0)]) + self.assertEqual(a, bitarray('10')) + + self.assertRaises(ValueError, bitarray, [0, 1, 2]) + self.assertRaises(TypeError, bitarray, [0, 1, None]) + + for n in range(50): + lst = [bool(randint(0, 1)) for d in range(n)] + a = bitarray(lst) + self.assertEqual(a.tolist(), lst) + self.check_obj(a) + + def test_tuple(self): + tup = (0, True, False, 1) + a = bitarray(tup) + self.assertEqual(a, bitarray('0101')) + self.check_obj(a) + + self.assertRaises(ValueError, bitarray, (0, 1, 2)) + self.assertRaises(TypeError, bitarray, (0, 1, None)) + + for n in range(50): + lst = [bool(randint(0, 1)) for d in range(n)] + a = bitarray(tuple(lst)) + self.assertEqual(a.tolist(), lst) + self.check_obj(a) + + def test_iter1(self): + for n in range(50): + lst = [bool(randint(0, 1)) for d in range(n)] + a = bitarray(iter(lst)) + self.assertEqual(a.tolist(), lst) + self.check_obj(a) + + def test_iter2(self): + for lst in self.randomlists(): + def foo(): + for x in lst: + yield x + a = bitarray(foo()) + self.assertEqual(a, bitarray(lst)) + self.check_obj(a) + + def test_iter3(self): + a = bitarray(itertools.repeat(False, 10)) + self.assertEqual(a, zeros(10)) + a = bitarray(itertools.repeat(1, 10)) + self.assertEqual(a, bitarray(10 * '1')) + + def test_range(self): + self.assertEqual(bitarray(range(2)), bitarray('01')) + self.assertRaises(ValueError, 
bitarray, range(0, 3)) + + def test_string01(self): + for s in ('0010111', u'0010111', '0010 111', u'0010 111', + '0010_111', u'0010_111'): + a = bitarray(s) + self.assertEqual(a.tolist(), [0, 0, 1, 0, 1, 1, 1]) + self.check_obj(a) + + for n in range(50): + lst = [bool(randint(0, 1)) for d in range(n)] + s = ''.join([['0', '1'][x] for x in lst]) + a = bitarray(s) + self.assertEqual(a.tolist(), lst) + self.check_obj(a) + + self.assertRaises(ValueError, bitarray, '01021') + self.assertRaises(UnicodeEncodeError, bitarray, u'1\u26050') + + def test_string01_whitespace(self): + whitespace = ' \n\r\t\v' + a = bitarray(whitespace) + self.assertEqual(a, bitarray()) + + # For Python 2 (where strings are bytes), we are in the lucky + # position that none of the valid characters ('\t'=9, '\n'=10, + # '\v'=11, '\r'=13, ' '=32, '0'=48 and '1'=49) are valid header + # bytes for deserialization 0..7, 16..23. Therefore a string of + # '0's and '1'a can start with any whitespace character, as well + # as '0' or '1' obviously. + for c in whitespace: + a = bitarray(c + '1101110001') + self.assertEqual(a, bitarray('1101110001')) + + a = bitarray(' 0\n1\r0\t1\v0 ') + self.assertEqual(a, bitarray('01010')) + + def test_rawbytes(self): + self.assertEqual(bitarray(b'\x00').endian(), 'little') + self.assertEqual(bitarray(b'\x10').endian(), 'big') + + # this representation is used for pickling + for s, r in [(b'\x00', ''), (b'\x07\xff', '1'), (b'\x03\xff', '11111'), + (b'\x01\x87\xda', '10000111 1101101')]: + self.assertEqual(bitarray(s, endian='big'), bitarray(r)) + + self.assertEQUAL(bitarray(b'\x12\x0f', 'little'), + bitarray('111100', 'little')) + self.assertEQUAL(bitarray(b'\x02\x0f', 'big'), + bitarray('000011', 'big')) + + for a, s in [ + (bitarray(0, 'little'), b'\x00'), + (bitarray(0, 'big'), b'\x10'), + (bitarray('1', 'little'), b'\x07\x01'), + (bitarray('1', 'big'), b'\x17\x80'), + (bitarray('11110000', 'little'), b'\x00\x0f'), + (bitarray('11110000', 'big'), b'\x10\xf0'), + ]: + self.assertEQUAL(bitarray(s), a) + + def test_rawbytes_invalid(self): + for s in b'\x01', b'\x04', b'\x07', b'\x11', b'\x15', b'\x17': + # this error is raised in newbitarray_from_pickle() (C function) + self.assertRaises(ValueError, bitarray, s) + # Python 2: PyErr_Format() seems to handle "0x%02x" + # incorrectly. E.g. instead of "0x01", I get "0x1" + if is_py3k: + msg = "invalid header byte: 0x%02x" % s[0] + self.assertRaisesMessage(ValueError, msg, bitarray, s) + + a = bitarray(s + b'\x00') + head = s[0] if is_py3k else ord(s[0]) + endian, padbits = divmod(head, 16) + self.assertEqual(a.endian(), ['little', 'big'][endian]) + self.assertEqual(len(a) + padbits, 8) + self.assertEqual(a.padbits, padbits) + self.assertFalse(a.any()) + self.check_obj(a) + + s = b'\x21' + if is_py3k: + # on Python 3, we don't allow bitarrays being created from bytes + error = TypeError + msg = ("cannot extend bitarray with 'bytes', use .pack() or " + ".frombytes() instead") + else: + # on Python 2, we have an invalid character in the string + error = ValueError + msg = ("expected '0' or '1' (or whitespace, or underscore), " + "got '!' (0x21)") + self.assertRaisesMessage(error, msg, bitarray, s) + + def test_bitarray_simple(self): + for n in range(10): + a = bitarray(n) + b = bitarray(a, endian=None) + self.assertFalse(a is b) + self.assertEQUAL(a, b) + + def test_bitarray_endian(self): + # Test creating a new bitarray with different endianness from an + # existing bitarray. 
+ for endian in 'little', 'big', u'little', u'big': + a = bitarray(endian=endian) + b = bitarray(a) + self.assertFalse(a is b) + self.assertEQUAL(a, b) + + endian2 = self.other_endian(endian) + b = bitarray(a, endian2) + self.assertEqual(b.endian(), endian2) + self.assertEqual(a, b) + + for a in self.randombitarrays(): + endian2 = self.other_endian(a.endian()) + b = bitarray(a, endian2) + self.assertEqual(a, b) + self.assertEqual(b.endian(), endian2) + self.assertNotEqual(a.endian(), b.endian()) + + def test_bitarray_endianness(self): + a = bitarray('11100001', endian='little') + b = bitarray(a, endian='big') + self.assertEqual(a, b) + self.assertNotEqual(a.tobytes(), b.tobytes()) + + b.bytereverse() + self.assertNotEqual(a, b) + self.assertEqual(a.tobytes(), b.tobytes()) + + c = bitarray('11100001', endian='big') + self.assertEqual(a, c) + + def test_frozenbitarray(self): + a = bitarray(frozenbitarray()) + self.assertEQUAL(a, bitarray()) + self.assertIsType(a, 'bitarray') + + for endian in 'little', 'big': + a = bitarray(frozenbitarray('011', endian=endian)) + self.assertEQUAL(a, bitarray('011', endian)) + self.assertIsType(a, 'bitarray') + + def test_create_empty(self): + for x in (None, 0, '', list(), tuple(), set(), dict(), u'', + bitarray(), frozenbitarray()): + a = bitarray(x) + self.assertEqual(len(a), 0) + self.assertEQUAL(a, bitarray()) + + if is_py3k: + self.assertRaises(TypeError, bitarray, b'') + else: + self.assertEqual(bitarray(b''), bitarray()) + + def test_wrong_args(self): + # wrong types + for x in False, True, Ellipsis, slice(0), 0.0, 0 + 0j: + self.assertRaises(TypeError, bitarray, x) + if is_py3k: + self.assertRaises(TypeError, bitarray, b'10') + else: + self.assertEQUAL(bitarray(b'10'), bitarray('10')) + # wrong values + for x in -1, 'A': + self.assertRaises(ValueError, bitarray, x) + # test second (endian) argument + self.assertRaises(TypeError, bitarray, 0, 0) + self.assertRaises(ValueError, bitarray, 0, 'foo') + # too many args + self.assertRaises(TypeError, bitarray, 0, 'big', 0) + + def test_weakref(self): + a = bitarray('0100') + b = weakref.proxy(a) + self.assertEqual(b.to01(), a.to01()) + a = None + self.assertRaises(ReferenceError, len, b) + +tests.append(CreateObjectTests) + +# --------------------------------------------------------------------------- + +class ToObjectsTests(unittest.TestCase, Util): + + def test_numeric(self): + a = bitarray() + self.assertRaises(Exception, int, a) + self.assertRaises(Exception, float, a) + self.assertRaises(Exception, complex, a) + + def test_list(self): + for a in self.randombitarrays(): + self.assertEqual(list(a), a.tolist()) + + def test_tuple(self): + for a in self.randombitarrays(): + self.assertEqual(tuple(a), tuple(a.tolist())) + +tests.append(ToObjectsTests) + +# --------------------------------------------------------------------------- + +class MetaDataTests(unittest.TestCase): + + def test_buffer_info(self): + a = bitarray(13, endian='little') + self.assertEqual(a.buffer_info()[1:4], (2, 'little', 3)) + + info = a.buffer_info() + self.assertIsInstance(info, tuple) + self.assertEqual(len(info), 8) + for i, item in enumerate(info): + if i == 2: + self.assertIsInstance(item, str) + continue + self.assertIsInstance(item, int) + + def test_endian(self): + for endian in 'big', 'little': + a = bitarray(endian=endian) + self.assertEqual(a.endian(), endian) + + def test_len(self): + for n in range(100): + a = bitarray(n) + self.assertEqual(len(a), n) + +tests.append(MetaDataTests) + +# 
--------------------------------------------------------------------------- + +class InternalTests(unittest.TestCase, Util): + + # Internal functionality exposed for the purpose of testing. + # This class will only be part of the test suite in debug mode. + + def test_shift_r8_empty(self): + a = bitarray() + a._shift_r8(0, 0, 3) + self.assertEqual(a, bitarray()) + + a = urandom(80) + for i in range(11): + b = a.copy() + a._shift_r8(i, i, 5) + self.assertEqual(a, b) + + def test_shift_r8_explicit(self): + x = bitarray('11000100 11111111 11100111 11111111 00001000') + y = bitarray('11000100 00000111 11111111 00111111 00001000') + x._shift_r8(1, 4, 5) + self.assertEqual(x, y) + + x = bitarray('11000100 11110') + y = bitarray('00011000 10011') + x._shift_r8(0, 2, 3) + self.assertEqual(x, y) + + def test_shift_r8_random_bytes(self): + for N in range(100): + a = randint(0, N) + b = randint(a, N) + n = randint(0, 7) + x = urandom(8 * N, self.random_endian()) + y = x.copy() + x._shift_r8(a, b, n) + y[8 * a : 8 * b] >>= n + self.assertEQUAL(x, y) + self.assertEqual(len(x), 8 * N) + + def test_copy_n_explicit(self): + x = bitarray('11000100 11110') + # ^^^^ ^ + y = bitarray('0101110001') + # ^^^^^ + x._copy_n(4, y, 1, 5) + self.assertEqual(x, bitarray('11001011 11110')) + # ^^^^ ^ + x = bitarray('10110111 101', 'little') + y = x.copy() + x._copy_n(3, x, 3, 7) # copy region of x onto x + self.assertEqual(x, y) + x._copy_n(3, bitarray(x, 'big'), 3, 7) # as before but other endian + self.assertEqual(x, y) + x._copy_n(5, bitarray(), 0, 0) # copy empty bitarray onto x + self.assertEqual(x, y) + + def test_copy_n_example(self): + # example givin in bitarray/copy_n.txt + y = bitarray( + '00101110 11111001 01011101 11001011 10110000 01011110 011') + x = bitarray( + '01011101 11100101 01110101 01011001 01110100 10001010 01111011') + x._copy_n(21, y, 6, 31) + self.assertEqual(x, bitarray( + '01011101 11100101 01110101 11110010 10111011 10010111 01101011')) + + def check_copy_n(self, N, M, a, b, n): + x = urandom(N, self.random_endian()) + x_lst = x.tolist() + y = x if M < 0 else urandom(M, self.random_endian()) + x_lst[a:a + n] = y.tolist()[b:b + n] + x._copy_n(a, y, b, n) + self.assertEqual(x, bitarray(x_lst)) + self.assertEqual(len(x), N) + self.check_obj(x) + + def test_copy_n_range(self): + for a in range(8): + for b in range(8): + for n in range(90): + self.check_copy_n(100, -1, a, b, n) + self.check_copy_n(100, 100, a, b, n) + + def test_copy_n_random_self(self): + for N in range(500): + n = randint(0, N) + a = randint(0, N - n) + b = randint(0, N - n) + self.check_copy_n(N, -1, a, b, n) + + def test_copy_n_random_other(self): + for N in range(500): + M = randint(0, 5 + 2 * N) + n = randint(0, min(N, M)) + a = randint(0, N - n) + b = randint(0, M - n) + self.check_copy_n(N, M, a, b, n) + + @staticmethod + def getslice(a, start, slicelength): + # this is the Python eqivalent of __getitem__ for slices with step=1 + b = bitarray(slicelength, a.endian()) + b._copy_n(0, a, start, slicelength) + return b + + def test_getslice(self): + for a in self.randombitarrays(): + a_lst = a.tolist() + n = len(a) + i = randint(0, n) + j = randint(i, n) + b = self.getslice(a, i, j - i) + self.assertEqual(b.tolist(), a_lst[i:j]) + self.assertEQUAL(b, a[i:j]) + + def check_overlap(self, a, b, res): + r1 = a._overlap(b) + r2 = b._overlap(a) + self.assertIsInstance(r1, bool) + self.assertTrue(r1 == r2 == res) + self.check_obj(a) + self.check_obj(b) + + def test_overlap_empty(self): + a = bitarray() + self.check_overlap(a, a, 
False) + b = bitarray() + self.check_overlap(a, b, False) + + def test_overlap_distinct(self): + for a in self.randombitarrays(): + # buffers overlaps with itself, unless buffer is NULL + self.check_overlap(a, a, bool(a)) + b = a.copy() + self.check_overlap(a, b, False) + + def test_overlap_shared(self): + a = bitarray(64) + b = bitarray(buffer=a) + self.check_overlap(b, a, True) + + c = bitarray(buffer=memoryview(a)[2:4]) + self.check_overlap(c, a, True) + + d = bitarray(buffer=memoryview(a)[5:]) + self.check_overlap(d, c, False) + self.check_overlap(d, b, True) + + e = bitarray(buffer=memoryview(a)[3:3]) + self.check_overlap(e, c, False) + self.check_overlap(e, d, False) + + def test_overlap_shared_random(self): + n = 100 # buffer size in bytes + a = bitarray(8 * n) + for _ in range(1000): + i1 = randint(0, n) + j1 = randint(0, n) + if j1 < i1: + continue + i2 = randint(0, n) + j2 = randint(0, n) + if j2 < i2: + continue + + b1 = bitarray(buffer=memoryview(a)[i1:j1]) + b2 = bitarray(buffer=memoryview(a)[i2:j2]) + + x1 = zeros(n) + x2 = zeros(n) + x1[i1:j1] = 1 + x2[i2:j2] = 1 + self.check_overlap(b1, b2, (x1 & x2).any()) + +if DEBUG: + tests.append(InternalTests) + +# --------------------------------------------------------------------------- + +class SliceTests(unittest.TestCase, Util): + + def test_getitem_1(self): + a = bitarray() + self.assertRaises(IndexError, a.__getitem__, 0) + a.append(True) + self.assertBitEqual(a[0], 1) + self.assertBitEqual(a[-1], 1) + self.assertRaises(IndexError, a.__getitem__, 1) + self.assertRaises(IndexError, a.__getitem__, -2) + a.append(False) + self.assertBitEqual(a[1], 0) + self.assertBitEqual(a[-1], 0) + self.assertRaises(IndexError, a.__getitem__, 2) + self.assertRaises(IndexError, a.__getitem__, -3) + self.assertRaises(TypeError, a.__getitem__, 1.5) + self.assertRaises(TypeError, a.__getitem__, None) + self.assertRaises(TypeError, a.__getitem__, 'A') + + def test_getitem_2(self): + a = bitarray('1100010') + for i, b in enumerate(a): + self.assertBitEqual(a[i], b) + self.assertBitEqual(a[i - 7], b) + self.assertRaises(IndexError, a.__getitem__, 7) + self.assertRaises(IndexError, a.__getitem__, -8) + + def test_getslice(self): + a = bitarray('01001111 00001') + self.assertEQUAL(a[:], a) + self.assertFalse(a[:] is a) + self.assertEQUAL(a[13:2:-3], bitarray('1010')) + self.assertEQUAL(a[2:-1:4], bitarray('010')) + self.assertEQUAL(a[::2], bitarray('0011001')) + self.assertEQUAL(a[8:], bitarray('00001')) + self.assertEQUAL(a[7:], bitarray('100001')) + self.assertEQUAL(a[:8], bitarray('01001111')) + self.assertEQUAL(a[::-1], bitarray('10000111 10010')) + self.assertEQUAL(a[:8:-1], bitarray('1000')) + + self.assertRaises(ValueError, a.__getitem__, slice(None, None, 0)) + self.assertRaises(TypeError, a.__getitem__, (1, 2)) + + def test_getslice_random(self): + for a in self.randombitarrays(start=1): + aa = a.tolist() + la = len(a) + for _ in range(10): + step = self.rndsliceidx(la) or None + s = slice(self.rndsliceidx(la), self.rndsliceidx(la), step) + self.assertEQUAL(a[s], bitarray(aa[s], endian=a.endian())) + + def test_getslice_random2(self): + n = randint(1000, 2000) + a = urandom(n, self.random_endian()) + sa = a.to01() + for _ in range(50): + i = randint(0, n) + j = randint(i, n) + b = a[i:j] + self.assertEqual(b.to01(), sa[i:j]) + self.assertEqual(len(b), j - i) + self.assertEqual(b.endian(), a.endian()) + + def test_setitem_simple(self): + a = bitarray('0') + a[0] = 1 + self.assertEqual(a, bitarray('1')) + + a = bitarray(2) + a[0] = 0 + a[1] = 1 + 
self.assertEqual(a, bitarray('01')) + a[-1] = 0 + a[-2] = 1 + self.assertEqual(a, bitarray('10')) + + self.assertRaises(ValueError, a.__setitem__, 0, -1) + self.assertRaises(TypeError, a.__setitem__, 1, None) + + self.assertRaises(IndexError, a.__setitem__, 2, True) + self.assertRaises(IndexError, a.__setitem__, -3, False) + self.assertRaises(TypeError, a.__setitem__, 1.5, 1) # see issue 114 + self.assertRaises(TypeError, a.__setitem__, None, 0) + self.assertRaises(TypeError, a.__setitem__, 'a', True) + self.assertEqual(a, bitarray('10')) + + def test_setitem_random(self): + for a in self.randombitarrays(start=1): + i = randint(0, len(a) - 1) + aa = a.tolist() + val = bool(randint(0, 1)) + a[i] = val + aa[i] = val + self.assertEqual(a.tolist(), aa) + self.check_obj(a) + + def test_setslice_simple(self): + for a in self.randombitarrays(start=1): + la = len(a) + b = bitarray(la) + b[0:la] = bitarray(a) + self.assertEqual(a, b) + self.assertFalse(a is b) + + b = bitarray(la) + b[:] = bitarray(a) + self.assertEqual(a, b) + self.assertFalse(a is b) + + b = bitarray(la) + b[::-1] = bitarray(a) + self.assertEqual(a.tolist()[::-1], b.tolist()) + + def test_setslice_random(self): + for a in self.randombitarrays(start=1): + la = len(a) + for _ in range(10): + step = self.rndsliceidx(la) or None + s = slice(self.rndsliceidx(la), self.rndsliceidx(la), step) + lb = (randint(0, 10) if step is None else + self.calc_slicelength(s, la)) + b = bitarray(lb) + c = bitarray(a) + c[s] = b + self.check_obj(c) + cc = a.tolist() + cc[s] = b.tolist() + self.assertEqual(c, bitarray(cc)) + + def test_setslice_self_random(self): + for a in self.randombitarrays(): + for step in -1, 1: + s = slice(None, None, step) + aa = a.tolist() + a[s] = a + aa[s] = aa + self.assertEqual(a, bitarray(aa)) + + def test_setslice_special(self): + for n in 0, 1, 10, 87: + a = urandom(n) + for m in 0, 1, 10, 99: + x = urandom(m) + b = a.copy() + b[n:n] = x # insert at end - extend + self.assertEqual(b, a + x) + self.assertEqual(len(b), len(a) + len(x)) + b[0:0] = x # insert at 0 - prepend + self.assertEqual(b, x + a + x) + self.check_obj(b) + self.assertEqual(len(b), len(a) + 2 * len(x)) + + def test_setslice_range(self): + # tests C function insert_n() + for endian in 'big', 'little': + for n in range(500): + a = urandom(n, endian) + p = randint(0, n) + m = randint(0, 500) + + x = urandom(m, self.random_endian()) + b = a.copy() + b[p:p] = x + self.assertEQUAL(b, a[:p] + x + a[p:]) + self.assertEqual(len(b), len(a) + m) + self.check_obj(b) + + def test_setslice_resize(self): + N, M = 200, 300 + for endian in 'big', 'little': + for n in 0, randint(0, N), N: + a = urandom(n, endian) + for p1 in 0, randint(0, n), n: + for p2 in 0, randint(0, p1), p1, randint(0, n), n: + for m in 0, randint(0, M), M: + x = urandom(m, self.random_endian()) + b = a.copy() + b[p1:p2] = x + b_lst = a.tolist() + b_lst[p1:p2] = x.tolist() + self.assertEqual(b.tolist(), b_lst) + if p1 <= p2: + self.assertEQUAL(b, a[:p1] + x + a[p2:]) + self.assertEqual(len(b), n + p1 - p2 + len(x)) + else: + self.assertEqual(b, a[:p1] + x + a[p1:]) + self.assertEqual(len(b), n + len(x)) + self.check_obj(b) + + def test_setslice_self(self): + a = bitarray('1100111') + a[::-1] = a + self.assertEqual(a, bitarray('1110011')) + a[4:] = a + self.assertEqual(a, bitarray('11101110011')) + a[:-5] = a + self.assertEqual(a, bitarray('1110111001110011')) + + a = bitarray('01001') + a[:-1] = a + self.assertEqual(a, bitarray('010011')) + a[2::] = a + self.assertEqual(a, bitarray('01010011')) + 
a[2:-2:1] = a + self.assertEqual(a, bitarray('010101001111')) + + a = bitarray('011')
+ a[2:2] = a + self.assertEqual(a, bitarray('010111')) + a[:] = a
+ self.assertEqual(a, bitarray('010111')) + + def test_setslice_self_shared_buffer(self):
+ # This is a special case. We have two bitarrays which share the
+ # same buffer, and then do a slice assignment. The bitarray is
+ # copied onto itself in reverse order. So we need to make a copy
+ # in setslice_bitarray(). However, since a and b are two distinct
+ # objects, it is not enough to check for self == other, but rather
+ # self->ob_item == other->ob_item. + a = bitarray('11100000') + b = bitarray(buffer=a)
+ b[::-1] = a + self.assertEqual(a, b) + self.assertEqual(a, bitarray('00000111')) +
+ def test_setslice_self_shared_buffer_2(self):
+ # This is an even more special case. We have a bitarray which
+ # shares part of another bitarray's buffer. So in setslice_bitarray(),
+ # we need to make a copy of other if: + #
+ # self->ob_item <= other->ob_item <= self->ob_item + Py_SIZE(self) + #
+ # In words: is the other buffer inside the self buffer (which includes
+ # the previous case)? + a = bitarray('11111111 11000000 00000000')
+ b = bitarray(buffer=memoryview(a)[1:2]) + self.assertEqual(b, bitarray('11000000'))
+ a[15:7:-1] = b + self.assertEqual(a, bitarray('11111111 00000011 00000000')) +
+ def test_setslice_self_shared_buffer_3(self):
+ # Requires checking (in setslice_bitarray()) that: + #
+ # other->ob_item <= self->ob_item <= other->ob_item + Py_SIZE(other) + #
+ a = bitarray('11111111 11000000 00000000') + b = bitarray(buffer=memoryview(a)[:2])
+ c = bitarray(buffer=memoryview(a)[1:]) + self.assertEqual(b, bitarray('11111111 11000000'))
+ self.assertEqual(c, bitarray('11000000 00000000')) + c[::-1] = b
+ self.assertEqual(c, bitarray('00000011 11111111'))
+ self.assertEqual(a, bitarray('11111111 00000011 11111111')) +
+ def test_setslice_bitarray(self): + a = bitarray('11111111 1111') + a[2:6] = bitarray('0010')
+ self.assertEqual(a, bitarray('11001011 1111')) + a.setall(0) + a[::2] = bitarray('111001')
+ self.assertEqual(a, bitarray('10101000 0010')) + a.setall(0) + a[3:] = bitarray('111')
+ self.assertEqual(a, bitarray('000111')) + + a = bitarray(12) + a.setall(0)
+ a[1:11:2] = bitarray('11101') + self.assertEqual(a, bitarray('01010100 0100')) + a.setall(0)
+ a[5:2] = bitarray('111') # make sure it inserts before 5 (not 2)
+ self.assertEqual(a, bitarray('00000111 0000000')) + + a = bitarray(12) + a.setall(0)
+ a[:-6:-1] = bitarray('10111') + self.assertEqual(a, bitarray('00000001 1101')) +
+ def test_setslice_bitarray_2(self): + a = bitarray('1111') + a[3:3] = bitarray('000') # insert
+ self.assertEqual(a, bitarray('1110001')) + a[2:5] = bitarray() # remove
+ self.assertEqual(a, bitarray('1101')) + + a = bitarray('1111') + a[1:3] = bitarray('0000')
+ self.assertEqual(a, bitarray('100001')) + a[:] = bitarray('010') # replace all values
+ self.assertEqual(a, bitarray('010')) + + # assign slice to bitarray with different length
+ a = bitarray('111111') + a[3:4] = bitarray('00') + self.assertEqual(a, bitarray('1110011'))
+ a[2:5] = bitarray('0') # remove + self.assertEqual(a, bitarray('11011')) +
+ def test_setslice_bitarray_random_same_length(self): + for endian in 'little', 'big':
+ for _ in range(100): + n = randint(0, 200) + a = urandom(n, endian) + lst_a = a.tolist()
+ b = urandom(randint(0, n), self.random_endian()) + lst_b = b.tolist()
+ i = randint(0, n - len(b)) + j = i + len(b) + self.assertEqual(j - i, len(b)) + a[i:j] = b + 
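# mirror the same assignment on a plain list and compare the results below + 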
lst_a[i:j] = lst_b + self.assertEqual(a.tolist(), lst_a) + # a didn't change length + self.assertEqual(len(a), n) + self.assertEqual(a.endian(), endian) + self.check_obj(a) + + def test_setslice_bitarray_random_step_1(self): + for _ in range(50): + n = randint(0, 300) + a = urandom(n, self.random_endian()) + lst_a = a.tolist() + b = urandom(randint(0, 100), self.random_endian()) + lst_b = b.tolist() + s = slice(self.rndsliceidx(n), self.rndsliceidx(n), None) + a[s] = b + lst_a[s] = lst_b + self.assertEqual(a.tolist(), lst_a) + self.check_obj(a) + + def test_setslice_bitarray_random(self): + for _ in range(100): + n = randint(0, 50) + a = urandom(n, self.random_endian()) + lst_a = a.tolist() + b = urandom(randint(0, 50), self.random_endian()) + lst_b = b.tolist() + s = slice(self.rndsliceidx(n), self.rndsliceidx(n), + randint(-3, 3) or None) + try: + a[s] = b + except ValueError: + a = None + + try: + lst_a[s] = lst_b + except ValueError: + lst_a = None + + if a is None: + self.assertTrue(lst_a is None) + else: + self.assertEqual(a.tolist(), lst_a) + self.check_obj(a) + + def test_setslice_bool_explicit(self): + a = bitarray('11111111') + a[::2] = False + self.assertEqual(a, bitarray('01010101')) + a[4::] = True # ^^^^ + self.assertEqual(a, bitarray('01011111')) + a[-2:] = False # ^^ + self.assertEqual(a, bitarray('01011100')) + a[:2:] = True # ^^ + self.assertEqual(a, bitarray('11011100')) + a[:] = True # ^^^^^^^^ + self.assertEqual(a, bitarray('11111111')) + a[2:5] = False # ^^^ + self.assertEqual(a, bitarray('11000111')) + a[1::3] = False # ^ ^ ^ + self.assertEqual(a, bitarray('10000110')) + a[1:6:2] = True # ^ ^ ^ + self.assertEqual(a, bitarray('11010110')) + a[3:3] = False # zero slicelength + self.assertEqual(a, bitarray('11010110')) + a[:] = False # ^^^^^^^^ + self.assertEqual(a, bitarray('00000000')) + a[-2:2:-1] = 1 # ^^^^ + self.assertEqual(a, bitarray('00011110')) + + def test_setslice_bool_simple(self): + for _ in range(100): + N = randint(100, 2000) + s = slice(randint(0, 20), randint(N - 20, N), randint(1, 20)) + a = zeros(N) + a[s] = 1 + b = zeros(N) + for i in range(s.start, s.stop, s.step): + b[i] = 1 + self.assertEqual(a, b) + + def test_setslice_bool_range(self): + N = 200 + a = bitarray(N, self.random_endian()) + b = bitarray(N) + for step in range(-N - 1, N): + if step == 0: + continue + v = randint(0, 1) + a.setall(not v) + a[::step] = v + + b.setall(not v) + for i in range(0, N, abs(step)): + b[i] = v + if step < 0: + b.reverse() + self.assertEqual(a, b) + + def test_setslice_bool_random(self): + N = 100 + a = bitarray(N) + for _ in range(100): + a.setall(0) + aa = a.tolist() + step = self.rndsliceidx(N) or None + s = slice(self.rndsliceidx(N), self.rndsliceidx(N), step) + a[s] = 1 + aa[s] = self.calc_slicelength(s, N) * [1] + self.assertEqual(a.tolist(), aa) + + def test_setslice_bool_random2(self): + for a in self.randombitarrays(): + n = len(a) + aa = a.tolist() + step = self.rndsliceidx(n) or None + s = slice(self.rndsliceidx(n), self.rndsliceidx(n), step) + v = randint(0, 1) + a[s] = v + aa[s] = self.calc_slicelength(s, n) * [v] + self.assertEqual(a.tolist(), aa) + + def test_setslice_to_int(self): + a = bitarray('11111111') + a[::2] = 0 # ^ ^ ^ ^ + self.assertEqual(a, bitarray('01010101')) + a[4::] = 1 # ^^^^ + self.assertEqual(a, bitarray('01011111')) + a.__setitem__(slice(-2, None, None), 0) + self.assertEqual(a, bitarray('01011100')) + self.assertRaises(ValueError, a.__setitem__, slice(None, None, 2), 3) + self.assertRaises(ValueError, a.__setitem__, 
slice(None, 2, None), -1) + # a[:2:] = '0' + self.assertRaises(TypeError, a.__setitem__, slice(None, 2, None), '0') + + def test_setslice_to_invalid(self): + a = bitarray('11111111') + s = slice(2, 6, None) + self.assertRaises(TypeError, a.__setitem__, s, 1.2) + self.assertRaises(TypeError, a.__setitem__, s, None) + self.assertRaises(TypeError, a.__setitem__, s, "0110") + a[s] = False + self.assertEqual(a, bitarray('11000011')) + # step != 1 and slicelen != length of assigned bitarray + self.assertRaisesMessage( + ValueError, + "attempt to assign sequence of size 3 to extended slice of size 4", + a.__setitem__, slice(None, None, 2), bitarray('000')) + self.assertRaisesMessage( + ValueError, + "attempt to assign sequence of size 3 to extended slice of size 2", + a.__setitem__, slice(None, None, 4), bitarray('000')) + self.assertRaisesMessage( + ValueError, + "attempt to assign sequence of size 7 to extended slice of size 8", + a.__setitem__, slice(None, None, -1), bitarray('0001000')) + self.assertEqual(a, bitarray('11000011')) + + def test_delitem_simple(self): + a = bitarray('100110') + del a[1] + self.assertEqual(len(a), 5) + del a[3], a[-2] + self.assertEqual(a, bitarray('100')) + self.assertRaises(IndexError, a.__delitem__, 3) + self.assertRaises(IndexError, a.__delitem__, -4) + + def test_delitem_random(self): + for a in self.randombitarrays(start=1): + n = len(a) + b = a.copy() + i = randint(0, n - 1) + del b[i] + self.assertEQUAL(b, a[:i] + a[i + 1:]) + self.assertEqual(len(b), n - 1) + self.check_obj(b) + + def test_delslice_explicit(self): + a = bitarray('10101100 10110') + del a[3:9] # ^^^^^ ^ + self.assertEqual(a, bitarray('1010110')) + del a[::3] # ^ ^ ^ + self.assertEqual(a, bitarray('0111')) + a = bitarray('10101100 101101111') + del a[5:-3:3] # ^ ^ ^ + self.assertEqual(a, bitarray('1010100 0101111')) + a = bitarray('10101100 1011011') + del a[:-9:-2] # ^ ^ ^ ^ + self.assertEqual(a, bitarray('10101100 011')) + del a[3:3] # zero slicelength + self.assertEqual(a, bitarray('10101100 011')) + self.assertRaises(ValueError, a.__delitem__, slice(None, None, 0)) + self.assertEqual(len(a), 11) + del a[:] + self.assertEqual(a, bitarray()) + + def test_delslice_special(self): + for n in 0, 1, 10, 73: + a = urandom(n) + b = a.copy() + del b[:0] + del b[n:] + self.assertEqual(b, a) + del b[10:] # delete at end + self.assertEqual(b, a[:10]) + del b[:] # clear + self.assertEqual(len(b), 0) + self.check_obj(b) + + def test_delslice_random(self): + for a in self.randombitarrays(): + la = len(a) + for _ in range(10): + step = self.rndsliceidx(la) or None + s = slice(self.rndsliceidx(la), self.rndsliceidx(la), step) + c = a.copy() + del c[s] + self.check_obj(c) + c_lst = a.tolist() + del c_lst[s] + self.assertEQUAL(c, bitarray(c_lst, endian=c.endian())) + + def test_delslice_range(self): + # tests C function delete_n() + for n in range(500): + a = urandom(n, self.random_endian()) + p = randint(0, n) + m = randint(0, 500) + + b = a.copy() + del b[p:p + m] + self.assertEQUAL(b, a[:p] + a[p + m:]) + self.check_obj(b) + + def test_delslice_range_step(self): + N = 200 + for step in range(-N - 1, N): + if step == 0: + continue + a = urandom(N, self.random_endian()) + lst = a.tolist() + del a[::step] + del lst[::step] + self.assertEqual(a.tolist(), lst) + +tests.append(SliceTests) + +# --------------------------------------------------------------------------- + +class MiscTests(unittest.TestCase, Util): + + def test_instancecheck(self): + a = bitarray('011') + self.assertIsInstance(a, bitarray) + 
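# a bitarray can be built from a '0'/'1' string, but is not a str itself + 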
self.assertFalse(isinstance(a, str)) + + def test_booleanness(self): + self.assertEqual(bool(bitarray('')), False) + self.assertEqual(bool(bitarray('0')), True) + self.assertEqual(bool(bitarray('1')), True) + + def test_to01(self): + a = bitarray() + self.assertEqual(a.to01(), '') + self.assertIsInstance(a.to01(), str) + + a = bitarray('101') + self.assertEqual(a.to01(), '101') + self.assertIsInstance(a.to01(), str) + + def test_iterate(self): + for lst in self.randomlists(): + acc = [] + for b in bitarray(lst): + acc.append(b) + self.assertEqual(acc, lst) + + def test_iter1(self): + it = iter(bitarray('011')) + self.assertIsType(it, 'bitarrayiterator') + self.assertBitEqual(next(it), 0) + self.assertBitEqual(next(it), 1) + self.assertBitEqual(next(it), 1) + self.assertStopIteration(it) + + def test_iter2(self): + for a in self.randombitarrays(): + aa = a.tolist() + self.assertEqual(list(a), aa) + self.assertEqual(list(iter(a)), aa) + + def test_assignment(self): + a = bitarray('00110111001') + a[1:3] = a[7:9] + a[-1:] = a[:1] + b = bitarray('01010111000') + self.assertEqual(a, b) + + def test_subclassing(self): + class ExaggeratingBitarray(bitarray): + + def __new__(cls, data, offset): + return bitarray.__new__(cls, data) + + def __init__(self, data, offset): + self.offset = offset + + def __getitem__(self, i): + return bitarray.__getitem__(self, i - self.offset) + + for a in self.randombitarrays(): + b = ExaggeratingBitarray(a, 1234) + for i in range(len(a)): + self.assertEqual(a[i], b[i + 1234]) + + def test_endianness1(self): + a = bitarray(endian='little') + a.frombytes(b'\x01') + self.assertEqual(a.to01(), '10000000') + + b = bitarray(endian='little') + b.frombytes(b'\x80') + self.assertEqual(b.to01(), '00000001') + + c = bitarray(endian='big') + c.frombytes(b'\x80') + self.assertEqual(c.to01(), '10000000') + + d = bitarray(endian='big') + d.frombytes(b'\x01') + self.assertEqual(d.to01(), '00000001') + + self.assertEqual(a, c) + self.assertEqual(b, d) + + def test_endianness2(self): + a = bitarray(8, endian='little') + a.setall(False) + a[0] = True + self.assertEqual(a.tobytes(), b'\x01') + a[1] = True + self.assertEqual(a.tobytes(), b'\x03') + a.frombytes(b' ') + self.assertEqual(a.tobytes(), b'\x03 ') + self.assertEqual(a.to01(), '1100000000000100') + + def test_endianness3(self): + a = bitarray(8, endian='big') + a.setall(False) + a[7] = True + self.assertEqual(a.tobytes(), b'\x01') + a[6] = True + self.assertEqual(a.tobytes(), b'\x03') + a.frombytes(b' ') + self.assertEqual(a.tobytes(), b'\x03 ') + self.assertEqual(a.to01(), '0000001100100000') + + def test_endianness4(self): + a = bitarray('00100000', endian='big') + self.assertEqual(a.tobytes(), b' ') + b = bitarray('00000100', endian='little') + self.assertEqual(b.tobytes(), b' ') + self.assertNotEqual(a, b) + + def test_pickle(self): + for a in self.randombitarrays(): + b = pickle.loads(pickle.dumps(a)) + self.assertFalse(b is a) + self.assertEQUAL(a, b) + self.check_obj(b) + + def test_overflow(self): + a = bitarray(1) + for i in -7, -1, 0, 1: + n = 2 ** 63 + i + self.assertRaises(OverflowError, a.__imul__, n) + self.assertRaises(OverflowError, bitarray, n) + + a = bitarray(2 ** 10) + self.assertRaises(OverflowError, a.__imul__, 2 ** 53) + + if SYSINFO[0] == 8: + return + + a = bitarray(10 ** 6) + self.assertRaises(OverflowError, a.__imul__, 17180) + for i in -7, -1, 0, 1: + self.assertRaises(OverflowError, bitarray, 2 ** 31 + i) + try: + a = bitarray(2 ** 31 - 8); + except MemoryError: + return + 
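# appending even a single bit to this almost maximal bitarray must + # raise OverflowError on 32-bit systems + 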
self.assertRaises(OverflowError, bitarray.append, a, True) + + def test_unicode_create(self): + a = bitarray(u'') + self.assertEqual(a, bitarray()) + + a = bitarray(u'111001') + self.assertEqual(a, bitarray('111001')) + + def test_unhashable(self): + a = bitarray() + self.assertRaises(TypeError, hash, a) + self.assertRaises(TypeError, dict, [(a, 'foo')]) + +tests.append(MiscTests) + +# --------------------------------------------------------------------------- + +class RichCompareTests(unittest.TestCase, Util): + + def test_wrong_types(self): + a = bitarray() + for x in None, 7, 'A': + self.assertEqual(a.__eq__(x), NotImplemented) + self.assertEqual(a.__ne__(x), NotImplemented) + self.assertEqual(a.__ge__(x), NotImplemented) + self.assertEqual(a.__gt__(x), NotImplemented) + self.assertEqual(a.__le__(x), NotImplemented) + self.assertEqual(a.__lt__(x), NotImplemented) + + def test_explicit(self): + for sa, sb, res in [ + ('', '', '101010'), + ('0', '0', '101010'), + ('1', '1', '101010'), + ('0', '', '011100'), + ('1', '', '011100'), + ('1', '0', '011100'), + ('11', '10', '011100'), + ('01', '00', '011100'), + ('0', '1', '010011'), + ('', '0', '010011'), + ('', '1', '010011'), + ]: + a = bitarray(sa, self.random_endian()) + b = bitarray(sb, self.random_endian()) + self.assertEqual(a == b, int(res[0])) + self.assertEqual(a != b, int(res[1])) + self.assertEqual(a >= b, int(res[2])) + self.assertEqual(a > b, int(res[3])) + self.assertEqual(a <= b, int(res[4])) + self.assertEqual(a < b, int(res[5])) + + def test_eq_ne(self): + for _ in range(10): + self.assertTrue(bitarray(0, self.random_endian()) == + bitarray(0, self.random_endian())) + self.assertFalse(bitarray(0, self.random_endian()) != + bitarray(0, self.random_endian())) + + for n in range(1, 20): + a = bitarray(n, self.random_endian()) + a.setall(1) + b = bitarray(a, self.random_endian()) + self.assertTrue(a == b) + self.assertFalse(a != b) + b[n - 1] = 0 + self.assertTrue(a != b) + self.assertFalse(a == b) + + def test_eq_ne_random(self): + for a in self.randombitarrays(start=1): + b = bitarray(a, self.random_endian()) + self.assertTrue(a == b) + self.assertFalse(a != b) + b.invert(randint(0, len(a) - 1)) + self.assertTrue(a != b) + self.assertFalse(a == b) + + def check(self, a, b, c, d): + self.assertEqual(a == b, c == d) + self.assertEqual(a != b, c != d) + self.assertEqual(a <= b, c <= d) + self.assertEqual(a < b, c < d) + self.assertEqual(a >= b, c >= d) + self.assertEqual(a > b, c > d) + + def test_invert_random_element(self): + for a in self.randombitarrays(start=1): + n = len(a) + b = bitarray(a, self.random_endian()) + i = randint(0, n - 1) + b.invert(i) + self.check(a, b, a[i], b[i]) + + def test_size(self): + for _ in range(100): + a = zeros(randint(1, 20), self.random_endian()) + b = zeros(randint(1, 20), self.random_endian()) + self.check(a, b, len(a), len(b)) + + def test_random(self): + for a in self.randombitarrays(): + aa = a.tolist() + if randint(0, 1): + a = frozenbitarray(a) + for b in self.randombitarrays(): + bb = b.tolist() + if randint(0, 1): + b = frozenbitarray(b) + self.check(a, b, aa, bb) + self.check(a, b, aa, bb) + +tests.append(RichCompareTests) + +# --------------------------------------------------------------------------- + +class SpecialMethodTests(unittest.TestCase, Util): + + def test_all(self): + a = bitarray() + self.assertTrue(a.all()) + for s, r in ('0', False), ('1', True), ('01', False): + self.assertTrue(bitarray(s).all() is r) + + for a in self.randombitarrays(): + self.assertTrue(a.all() is 
all(a)) + + N = randint(1000, 2000) + a = bitarray(N) + a.setall(1) + self.assertTrue(a.all()) + a[N - 1] = 0 + self.assertFalse(a.all()) + + def test_any(self): + a = bitarray() + self.assertFalse(a.any()) + for s, r in ('0', False), ('1', True), ('01', True): + self.assertTrue(bitarray(s).any() is r) + + for a in self.randombitarrays(): + self.assertTrue(a.any() is any(a)) + + N = randint(1000, 2000) + a = bitarray(N) + a.setall(0) + self.assertFalse(a.any()) + a[N - 1] = 1 + self.assertTrue(a.any()) + + def test_repr(self): + r = repr(bitarray()) + self.assertEqual(r, "bitarray()") + self.assertIsInstance(r, str) + + r = repr(bitarray('10111')) + self.assertEqual(r, "bitarray('10111')") + self.assertIsInstance(r, str) + + for a in self.randombitarrays(): + b = eval(repr(a)) + self.assertFalse(b is a) + self.assertEqual(a, b) + self.check_obj(b) + + def test_copy(self): + for a in self.randombitarrays(): + b = a.copy() + self.assertFalse(b is a) + self.assertEQUAL(a, b) + + b = copy.copy(a) + self.assertFalse(b is a) + self.assertEQUAL(a, b) + + b = copy.deepcopy(a) + self.assertFalse(b is a) + self.assertEQUAL(a, b) + + def assertReallyEqual(self, a, b): + # assertEqual first, because it will have a good message if the + # assertion fails. + self.assertEqual(a, b) + self.assertEqual(b, a) + self.assertTrue(a == b) + self.assertTrue(b == a) + self.assertFalse(a != b) + self.assertFalse(b != a) + if not is_py3k: + self.assertEqual(0, cmp(a, b)) + self.assertEqual(0, cmp(b, a)) + + def assertReallyNotEqual(self, a, b): + # assertNotEqual first, because it will have a good message if the + # assertion fails. + self.assertNotEqual(a, b) + self.assertNotEqual(b, a) + self.assertFalse(a == b) + self.assertFalse(b == a) + self.assertTrue(a != b) + self.assertTrue(b != a) + if not is_py3k: + self.assertNotEqual(0, cmp(a, b)) + self.assertNotEqual(0, cmp(b, a)) + + def test_equality(self): + self.assertReallyEqual(bitarray(''), bitarray('')) + self.assertReallyEqual(bitarray('0'), bitarray('0')) + self.assertReallyEqual(bitarray('1'), bitarray('1')) + + def test_not_equality(self): + self.assertReallyNotEqual(bitarray(''), bitarray('1')) + self.assertReallyNotEqual(bitarray(''), bitarray('0')) + self.assertReallyNotEqual(bitarray('0'), bitarray('1')) + + def test_equality_random(self): + for a in self.randombitarrays(start=1): + b = a.copy() + self.assertReallyEqual(a, b) + n = len(a) + b.invert(n - 1) # flip last bit + self.assertReallyNotEqual(a, b) + + def test_sizeof(self): + a = bitarray() + size = sys.getsizeof(a) + self.assertEqual(size, a.__sizeof__()) + self.assertIsInstance(size, int if is_py3k else (int, long)) + self.assertTrue(size < 200) + a = bitarray(8000) + self.assertTrue(sys.getsizeof(a) > 1000) + +tests.append(SpecialMethodTests) + +# --------------------------------------------------------------------------- + +class SequenceMethodsTests(unittest.TestCase, Util): + + def test_concat(self): + a = bitarray('001') + b = a + bitarray('110') + self.assertEQUAL(b, bitarray('001110')) + b = a + [0, 1, True] + self.assertEQUAL(b, bitarray('001011')) + b = a + '100' + self.assertEQUAL(b, bitarray('001100')) + b = a + (1, 0, True) + self.assertEQUAL(b, bitarray('001101')) + self.assertRaises(ValueError, a.__add__, (0, 1, 2)) + self.assertEQUAL(a, bitarray('001')) + + self.assertRaises(TypeError, a.__add__, 42) + if is_py3k: + self.assertRaises(TypeError, a.__add__, b'1101') + else: + self.assertEqual(a + b'10', bitarray('00110')) + + for a in self.randombitarrays(): + aa = a.copy() + 
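# keep copies of both operands so we can verify below that + # concatenation does not mutate them + 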
for b in self.randombitarrays(): + bb = b.copy() + c = a + b + self.assertEqual(c, bitarray(a.tolist() + b.tolist())) + self.assertEqual(c.endian(), a.endian()) + self.check_obj(c) + + self.assertEQUAL(a, aa) + self.assertEQUAL(b, bb) + + def test_inplace_concat(self): + a = bitarray('001') + a += bitarray('110') + self.assertEqual(a, bitarray('001110')) + a += [0, 1, True] + self.assertEqual(a, bitarray('001110011')) + a += '100' + self.assertEqual(a, bitarray('001110011100')) + a += (1, 0, True) + self.assertEqual(a, bitarray('001110011100101')) + + a = bitarray('110') + self.assertRaises(ValueError, a.__iadd__, [0, 1, 2]) + self.assertEqual(a, bitarray('110')) + + self.assertRaises(TypeError, a.__iadd__, 42) + b = b'101' + if is_py3k: + self.assertRaises(TypeError, a.__iadd__, b) + else: + a += b + self.assertEqual(a, bitarray('110101')) + + for a in self.randombitarrays(): + for b in self.randombitarrays(): + c = bitarray(a) + d = c + d += b + self.assertEqual(d, a + b) + self.assertTrue(c is d) + self.assertEQUAL(c, d) + self.assertEqual(d.endian(), a.endian()) + self.check_obj(d) + + def test_repeat_explicit(self): + for m, s, r in [ + ( 0, '', ''), + ( 0, '1001111', ''), + (-1, '100110', ''), + (11, '', ''), + ( 1, '110', '110'), + ( 2, '01', '0101'), + ( 5, '1', '11111'), + ]: + a = bitarray(s) + self.assertEqual(a * m, bitarray(r)) + self.assertEqual(m * a, bitarray(r)) + c = a.copy() + c *= m + self.assertEqual(c, bitarray(r)) + + def test_repeat_wrong_args(self): + a = bitarray() + self.assertRaises(TypeError, a.__mul__, None) + self.assertRaises(TypeError, a.__mul__, 2.0) + self.assertRaises(TypeError, a.__imul__, None) + self.assertRaises(TypeError, a.__imul__, 3.0) + + def test_repeat_random(self): + for a in self.randombitarrays(): + b = a.copy() + for m in list(range(-3, 5)) + [randint(100, 200)]: + res = bitarray(m * a.to01(), endian=a.endian()) + self.assertEqual(len(res), len(a) * max(0, m)) + + c = a * m + self.assertEQUAL(c, res) + c = m * a + self.assertEQUAL(c, res) + + c = a.copy() + c *= m + self.assertEQUAL(c, res) + self.check_obj(c) + + self.assertEQUAL(a, b) + + def test_contains_simple(self): + a = bitarray() + self.assertFalse(False in a) + self.assertFalse(True in a) + self.assertTrue(bitarray() in a) + a.append(True) + self.assertTrue(True in a) + self.assertFalse(False in a) + a = bitarray([False]) + self.assertTrue(False in a) + self.assertFalse(True in a) + a.append(True) + self.assertTrue(0 in a) + self.assertTrue(1 in a) + if not is_py3k: + self.assertTrue(long(0) in a) + self.assertTrue(long(1) in a) + + def test_contains_errors(self): + a = bitarray() + self.assertEqual(a.__contains__(1), False) + a.append(1) + self.assertEqual(a.__contains__(1), True) + a = bitarray('0011') + self.assertEqual(a.__contains__(bitarray('01')), True) + self.assertEqual(a.__contains__(bitarray('10')), False) + self.assertRaises(TypeError, a.__contains__, 'asdf') + self.assertRaises(ValueError, a.__contains__, 2) + self.assertRaises(ValueError, a.__contains__, -1) + if not is_py3k: + self.assertRaises(ValueError, a.__contains__, long(2)) + + def test_contains_range(self): + for n in range(2, 50): + a = bitarray(n) + a.setall(0) + self.assertTrue(False in a) + self.assertFalse(True in a) + a[randint(0, n - 1)] = 1 + self.assertTrue(True in a) + self.assertTrue(False in a) + a.setall(1) + self.assertTrue(True in a) + self.assertFalse(False in a) + a[randint(0, n - 1)] = 0 + self.assertTrue(True in a) + self.assertTrue(False in a) + + def test_contains_explicit(self): + a = 
bitarray('011010000001') + for s, r in [('', True), # every bitarray contains an empty one + ('1', True), ('11', True), ('111', False), + ('011', True), ('0001', True), ('00011', False)]: + self.assertEqual(bitarray(s) in a, r) + +tests.append(SequenceMethodsTests) + +# --------------------------------------------------------------------------- + +class NumberTests(unittest.TestCase, Util): + + def test_misc(self): + for a in self.randombitarrays(): + b = ~a + c = a & b + self.assertEqual(c.any(), False) + self.assertEqual(a, a ^ c) + d = a ^ b + self.assertEqual(d.all(), True) + b &= d + self.assertEqual(~b, a) + + def test_bool(self): + a = bitarray() + self.assertTrue(bool(a) is False) + a.append(0) + self.assertTrue(bool(a) is True) + a.append(1) + self.assertTrue(bool(a) is True) + + def test_size_error(self): + a = bitarray('11001') + b = bitarray('100111') + self.assertRaises(ValueError, lambda: a & b) + self.assertRaises(ValueError, lambda: a | b) + self.assertRaises(ValueError, lambda: a ^ b) + for x in (a.__and__, a.__or__, a.__xor__, + a.__iand__, a.__ior__, a.__ixor__): + self.assertRaises(ValueError, x, b) + + def test_endianness_error(self): + a = bitarray('11001', 'big') + b = bitarray('10011', 'little') + self.assertRaises(ValueError, lambda: a & b) + self.assertRaises(ValueError, lambda: a | b) + self.assertRaises(ValueError, lambda: a ^ b) + for x in (a.__and__, a.__or__, a.__xor__, + a.__iand__, a.__ior__, a.__ixor__): + self.assertRaises(ValueError, x, b) + + def test_and(self): + a = bitarray('11001') + b = bitarray('10011') + c = a & b + self.assertEqual(c, bitarray('10001')) + self.check_obj(c) + + self.assertRaises(TypeError, lambda: a & 1) + self.assertRaises(TypeError, lambda: 1 & a) + self.assertEqual(a, bitarray('11001')) + self.assertEqual(b, bitarray('10011')) + + def test_or(self): + a = bitarray('11001') + b = bitarray('10011') + c = a | b + self.assertEqual(c, bitarray('11011')) + self.check_obj(c) + + self.assertRaises(TypeError, lambda: a | 1) + self.assertRaises(TypeError, lambda: 1 | a) + self.assertEqual(a, bitarray('11001')) + self.assertEqual(b, bitarray('10011')) + + def test_xor(self): + a = bitarray('11001') + b = bitarray('10011') + c = a ^ b + self.assertEQUAL(c, bitarray('01010')) + self.check_obj(c) + + self.assertRaises(TypeError, lambda: a ^ 1) + self.assertRaises(TypeError, lambda: 1 ^ a) + self.assertEqual(a, bitarray('11001')) + self.assertEqual(b, bitarray('10011')) + + def test_iand(self): + a = bitarray('110010110') + b = bitarray('100110011') + a &= b + self.assertEqual(a, bitarray('100010010')) + self.assertEqual(b, bitarray('100110011')) + self.check_obj(a) + self.check_obj(b) + try: + a &= 1 + except TypeError: + error = 1 + self.assertEqual(error, 1) + + def test_ior(self): + a = bitarray('110010110') + b = bitarray('100110011') + a |= b + self.assertEQUAL(a, bitarray('110110111')) + self.assertEQUAL(b, bitarray('100110011')) + try: + a |= 1 + except TypeError: + error = 1 + self.assertEqual(error, 1) + + def test_ixor(self): + a = bitarray('110010110') + b = bitarray('100110011') + a ^= b + self.assertEQUAL(a, bitarray('010100101')) + self.assertEQUAL(b, bitarray('100110011')) + try: + a ^= 1 + except TypeError: + error = 1 + self.assertEqual(error, 1) + + def test_bitwise_self(self): + for a in self.randombitarrays(): + aa = a.copy() + self.assertEQUAL(a & a, aa) + self.assertEQUAL(a | a, aa) + self.assertEQUAL(a ^ a, zeros(len(aa), aa.endian())) + self.assertEQUAL(a, aa) + + def test_bitwise_inplace_self(self): + for a in 
self.randombitarrays(): + aa = a.copy() + a &= a + self.assertEQUAL(a, aa) + a |= a + self.assertEQUAL(a, aa) + a ^= a + self.assertEqual(a, zeros(len(aa), aa.endian())) + + def test_invert(self): + a = bitarray('11011') + b = ~a + self.assertEQUAL(b, bitarray('00100')) + self.assertEQUAL(a, bitarray('11011')) + self.assertFalse(a is b) + self.check_obj(b) + + for a in self.randombitarrays(): + b = bitarray(a) + b.invert() + for i in range(len(a)): + self.assertEqual(b[i], not a[i]) + self.check_obj(b) + self.assertEQUAL(~a, b) + + @staticmethod + def shift(a, n, direction): + if n >= len(a): + return zeros(len(a), a.endian()) + + if direction == 'right': + return zeros(n, a.endian()) + a[:len(a)-n] + elif direction == 'left': + return a[n:] + zeros(n, a.endian()) + else: + raise ValueError("invalid direction: %s" % direction) + + def test_lshift(self): + a = bitarray('11011') + b = a << 2 + self.assertEQUAL(b, bitarray('01100')) + self.assertRaises(TypeError, lambda: a << 1.2) + self.assertRaises(TypeError, a.__lshift__, 1.2) + self.assertRaises(ValueError, lambda: a << -1) + self.assertRaises(OverflowError, a.__lshift__, 2 ** 63) + + for a in self.randombitarrays(): + c = a.copy() + n = randint(0, len(a) + 3) + b = a << n + self.assertEqual(len(b), len(a)) + self.assertEQUAL(b, self.shift(a, n, 'left')) + self.assertEQUAL(a, c) + + def test_rshift(self): + a = bitarray('1101101') + b = a >> 1 + self.assertEQUAL(b, bitarray('0110110')) + self.assertRaises(TypeError, lambda: a >> 1.2) + self.assertRaises(TypeError, a.__rshift__, 1.2) + self.assertRaises(ValueError, lambda: a >> -1) + + for a in self.randombitarrays(): + c = a.copy() + n = randint(0, len(a) + 3) + b = a >> n + self.assertEqual(len(b), len(a)) + self.assertEQUAL(b, self.shift(a, n, 'right')) + self.assertEQUAL(a, c) + + def test_ilshift(self): + a = bitarray('110110101') + a <<= 7 + self.assertEQUAL(a, bitarray('010000000')) + self.assertRaises(TypeError, a.__ilshift__, 1.2) + self.assertRaises(ValueError, a.__ilshift__, -3) + + for a in self.randombitarrays(): + b = a.copy() + n = randint(0, len(a) + 3) + b <<= n + self.assertEqual(len(b), len(a)) + self.assertEQUAL(b, self.shift(a, n, 'left')) + + def test_irshift(self): + a = bitarray('110110111') + a >>= 3 + self.assertEQUAL(a, bitarray('000110110')) + self.assertRaises(TypeError, a.__irshift__, 1.2) + self.assertRaises(ValueError, a.__irshift__, -4) + + for a in self.randombitarrays(): + b = a.copy() + n = randint(0, len(a) + 3) + b >>= n + self.assertEqual(len(b), len(a)) + self.assertEQUAL(b, self.shift(a, n, 'right')) + + def check_random(self, n, endian, n_shift, direction): + a = urandom(n, endian) + self.assertEqual(len(a), n) + + b = a.copy() + if direction == 'left': + b <<= n_shift + else: + b >>= n_shift + self.assertEQUAL(b, self.shift(a, n_shift, direction)) + + def test_shift_range(self): + for endian in 'little', 'big': + for direction in 'left', 'right': + for n in range(0, 200): + self.check_random(n, endian, 1, direction) + self.check_random(n, endian, randint(0, n), direction) + for n_shift in range(0, 100): + self.check_random(100, endian, n_shift, direction) + + def test_zero_shift(self): + for a in self.randombitarrays(): + aa = a.copy() + self.assertEQUAL(a << 0, aa) + self.assertEQUAL(a >> 0, aa) + a <<= 0 + self.assertEQUAL(a, aa) + a >>= 0 + self.assertEQUAL(a, aa) + + def test_len_or_larger_shift(self): + # ensure shifts with len(a) (or larger) result in all zero bitarrays + for a in self.randombitarrays(): + c = a.copy() + z = zeros(len(a), 
a.endian()) + n = randint(len(a), len(a) + 10) + self.assertEQUAL(a << n, z) + self.assertEQUAL(a >> n, z) + self.assertEQUAL(a, c) + a <<= n + self.assertEQUAL(a, z) + a = bitarray(c) + a >>= n + self.assertEQUAL(a, z) + + def test_shift_example(self): + a = bitarray('0010011') + self.assertEqual(a << 3, bitarray('0011000')) + a >>= 4 + self.assertEqual(a, bitarray('0000001')) + +tests.append(NumberTests) + +# --------------------------------------------------------------------------- + +class ExtendTests(unittest.TestCase, Util): + + def test_wrongArgs(self): + a = bitarray() + self.assertRaises(TypeError, a.extend) + self.assertRaises(TypeError, a.extend, None) + self.assertRaises(TypeError, a.extend, True) + self.assertRaises(TypeError, a.extend, 24) + self.assertRaises(TypeError, a.extend, 1.0) + + def test_bitarray(self): + a = bitarray() + a.extend(bitarray()) + self.assertEqual(a, bitarray()) + a.extend(bitarray('110')) + self.assertEqual(a, bitarray('110')) + a.extend(bitarray('1110')) + self.assertEqual(a, bitarray('1101110')) + + a = bitarray('00001111', endian='little') + a.extend(bitarray('00100111', endian='big')) + self.assertEqual(a, bitarray('00001111 00100111')) + + def test_bitarray_random(self): + for a in self.randombitarrays(): + sa = a.to01() + for b in self.randombitarrays(): + bb = b.copy() + c = bitarray(a) + c.extend(b) + self.assertEqual(c.to01(), sa + bb.to01()) + self.assertEqual(c.endian(), a.endian()) + self.assertEqual(len(c), len(a) + len(b)) + self.check_obj(c) + # ensure b hasn't changed + self.assertEQUAL(b, bb) + self.check_obj(b) + + def test_list(self): + a = bitarray() + a.extend([]) + self.assertEqual(a, bitarray()) + a.extend([0, 1, True, False]) + self.assertEqual(a, bitarray('0110')) + self.assertRaises(ValueError, a.extend, [0, 1, 2]) + self.assertRaises(TypeError, a.extend, [0, 1, 'a']) + self.assertEqual(a, bitarray('0110')) + + for a in self.randomlists(): + for b in self.randomlists(): + c = bitarray(a) + c.extend(b) + self.assertEqual(c.tolist(), a + b) + self.check_obj(c) + + def test_tuple(self): + a = bitarray() + a.extend(tuple()) + self.assertEqual(a, bitarray()) + a.extend((0, 1, True, 0, False)) + self.assertEqual(a, bitarray('01100')) + self.assertRaises(ValueError, a.extend, (0, 1, 2)) + self.assertRaises(TypeError, a.extend, (0, 1, 'a')) + self.assertEqual(a, bitarray('01100')) + + for a in self.randomlists(): + for b in self.randomlists(): + c = bitarray(a) + c.extend(tuple(b)) + self.assertEqual(c.tolist(), a + b) + self.check_obj(c) + + def test_generator_1(self): + def gen(lst): + for x in lst: + yield x + a = bitarray('0011') + a.extend(gen([0, 1, False, True, 0])) + self.assertEqual(a, bitarray('0011 01010')) + self.assertRaises(ValueError, a.extend, gen([0, 1, 2])) + self.assertRaises(TypeError, a.extend, gen([1, 0, None])) + self.assertEqual(a, bitarray('0011 01010')) + + a = bytearray() + a.extend(gen([0, 1, 255])) + self.assertEqual(a, b'\x00\x01\xff') + self.assertRaises(ValueError, a.extend, gen([0, 1, 256])) + self.assertRaises(TypeError, a.extend, gen([1, 0, None])) + self.assertEqual(a, b'\x00\x01\xff') + + for a in self.randomlists(): + def foo(): + for e in a: + yield e + b = bitarray() + b.extend(foo()) + self.assertEqual(b.tolist(), a) + self.check_obj(b) + + def test_generator_2(self): + def gen(): + for i in range(10): + if i == 4: + raise KeyError + yield i % 2 + + a = bitarray() + self.assertRaises(KeyError, a.extend, gen()) + self.assertEqual(a, bitarray('0101')) + a = [] + self.assertRaises(KeyError, 
a.extend, gen()) + self.assertEqual(a, [0, 1, 0, 1]) + + def test_iterator_1(self): + a = bitarray() + a.extend(iter([])) + self.assertEqual(a, bitarray()) + a.extend(iter([1, 1, 0, True, False])) + self.assertEqual(a, bitarray('11010')) + self.assertRaises(ValueError, a.extend, iter([1, 1, 0, 0, 2])) + self.assertEqual(a, bitarray('11010')) + + for a in self.randomlists(): + for b in self.randomlists(): + c = bitarray(a) + c.extend(iter(b)) + self.assertEqual(c.tolist(), a + b) + self.check_obj(c) + + def test_iterator_2(self): + a = bitarray() + a.extend(itertools.repeat(True, 23)) + self.assertEqual(a, bitarray(23 * '1')) + self.check_obj(a) + + def test_string01(self): + a = bitarray() + a.extend(str()) + a.extend('') + self.assertEqual(a, bitarray()) + a.extend('0110111') + self.assertEqual(a, bitarray('0110111')) + self.assertRaises(ValueError, a.extend, '0011201') + # ensure no bits got added after error was raised + self.assertEqual(a, bitarray('0110111')) + + a = bitarray() + self.assertRaises(ValueError, a.extend, 1000 * '01' + '.') + self.assertEqual(a, bitarray()) + + for a in self.randomlists(): + for b in self.randomlists(): + c = bitarray(a) + c.extend(''.join(['0', '1'][x] for x in b)) + self.assertEqual(c, bitarray(a + b)) + self.check_obj(c) + + def test_string01_whitespace(self): + a = bitarray() + a.extend('0 1\n0\r1\t0\v1_') + self.assertEqual(a, bitarray('010101')) + a += '_ 1\n0\r1\t0\v' + self.assertEqual(a, bitarray('010101 1010')) + self.check_obj(a) + + def test_unicode(self): + a = bitarray() + a.extend(u'') + self.assertEqual(a, bitarray()) + self.assertRaises(ValueError, a.extend, u'0011201') + # ensure no bits got added after error was raised + self.assertEqual(a, bitarray()) + self.check_obj(a) + + a = bitarray() + a.extend(u'001 011_') + self.assertEqual(a, bitarray('001011')) + self.assertRaises(UnicodeEncodeError, a.extend, u'1\u2605 0') + self.assertEqual(a, bitarray('001011')) + self.check_obj(a) + + def test_bytes(self): + a = bitarray() + b = b'10110' + if is_py3k: + self.assertRaises(TypeError, a.extend, b) + else: + a.extend(b) + self.assertEqual(a, bitarray('10110')) + self.check_obj(a) + + def test_self(self): + for s in '', '1', '110', '00110111': + a = bitarray(s) + a.extend(a) + self.assertEqual(a, bitarray(2 * s)) + + for a in self.randombitarrays(): + endian = a.endian() + s = a.to01() + a.extend(a) + self.assertEqual(a.to01(), 2 * s) + self.assertEqual(a.endian(), endian) + self.assertEqual(len(a), 2 * len(s)) + self.check_obj(a) + +tests.append(ExtendTests) + +# --------------------------------------------------------------------------- + +class MethodTests(unittest.TestCase, Util): + + def test_append_simple(self): + a = bitarray() + a.append(True) + a.append(False) + a.append(False) + self.assertEQUAL(a, bitarray('100')) + a.append(0) + a.append(1) + self.assertEQUAL(a, bitarray('10001')) + self.assertRaises(ValueError, a.append, 2) + self.assertRaises(TypeError, a.append, None) + self.assertRaises(TypeError, a.append, '') + self.assertEQUAL(a, bitarray('10001')) + self.check_obj(a) + + def test_append_random(self): + for a in self.randombitarrays(): + aa = a.tolist() + a.append(1) + self.assertEQUAL(a, bitarray(aa + [1], endian=a.endian())) + a.append(0) + self.assertEQUAL(a, bitarray(aa + [1, 0], endian=a.endian())) + self.check_obj(a) + + def test_insert(self): + a = bitarray('111100') + a.insert(3, False) + self.assertEqual(a, bitarray('1110100')) + self.assertRaises(ValueError, a.insert, 0, 2) + self.assertRaises(TypeError, a.insert, 
0, None) + self.assertRaises(TypeError, a.insert) + self.assertRaises(TypeError, a.insert, None) + self.assertEqual(a, bitarray('1110100')) + self.check_obj(a) + + def test_insert_random(self): + for a in self.randombitarrays(): + aa = a.tolist() + for _ in range(20): + item = randint(0, 1) + pos = randint(-len(a) - 2, len(a) + 2) + a.insert(pos, item) + aa.insert(pos, item) + self.assertEqual(a.tolist(), aa) + self.check_obj(a) + + def test_fill_simple(self): + for endian in 'little', 'big': + a = bitarray(endian=endian) + self.assertEqual(a.fill(), 0) + self.assertEqual(len(a), 0) + + a = bitarray('101', endian) + self.assertEqual(a.fill(), 5) + self.assertEqual(a, bitarray('10100000')) + self.assertEqual(a.fill(), 0) + self.assertEqual(a, bitarray('10100000')) + self.check_obj(a) + + def test_fill_exported(self): + a = bitarray('11101') + b = bitarray(buffer=a) + v = memoryview(a) + self.assertEqual(a.fill(), 3) + self.assertEqual(a, b) + if is_py3k: + self.assertEqual(v.nbytes, 1) + + def test_fill_random(self): + for a in self.randombitarrays(): + b = a.copy() + res = b.fill() + self.assertTrue(0 <= res < 8) + self.assertEqual(b.endian(), a.endian()) + self.check_obj(b) + if len(a) % 8 == 0: + self.assertEqual(b, a) + else: + self.assertEqual(len(b) % 8, 0) + self.assertNotEqual(b, a) + self.assertEqual(b[:len(a)], a) + self.assertEqual(b[len(a):], zeros(len(b) - len(a))) + + def test_invert_simple(self): + a = bitarray() + a.invert() + self.assertEQUAL(a, bitarray()) + self.check_obj(a) + + a = bitarray('11011') + a.invert() + self.assertEQUAL(a, bitarray('00100')) + a.invert(2) + self.assertEQUAL(a, bitarray('00000')) + a.invert(-1) + self.assertEQUAL(a, bitarray('00001')) + + def test_invert_errors(self): + a = bitarray(5) + self.assertRaises(IndexError, a.invert, 5) + self.assertRaises(IndexError, a.invert, -6) + self.assertRaises(TypeError, a.invert, "A") + self.assertRaises(TypeError, a.invert, 0, 1) + + def test_invert_random(self): + for a in self.randombitarrays(start=1): + b = a.copy() + i = randint(0, len(a) - 1) + b.invert(i) + a[i] = not a[i] + self.assertEQUAL(a, b) + + def test_sort_simple(self): + a = bitarray('1101000') + a.sort() + self.assertEqual(a, bitarray('0000111')) + self.check_obj(a) + + a = bitarray('1101000') + a.sort(reverse=True) + self.assertEqual(a, bitarray('1110000')) + a.sort(reverse=False) + self.assertEqual(a, bitarray('0000111')) + a.sort(True) + self.assertEqual(a, bitarray('1110000')) + a.sort(False) + self.assertEqual(a, bitarray('0000111')) + + self.assertRaises(TypeError, a.sort, 'A') + + def test_sort_random(self): + for rev in False, True, 0, 1, 7, -1, -7, None: + for a in self.randombitarrays(): + lst = a.tolist() + if rev is None: + lst.sort() + a.sort() + else: + lst.sort(reverse=rev) + a.sort(reverse=rev) + self.assertEqual(a, bitarray(lst)) + self.check_obj(a) + + def test_reverse_explicit(self): + for x, y in [('', ''), ('1', '1'), ('10', '01'), ('001', '100'), + ('1110', '0111'), ('11100', '00111'), + ('011000', '000110'), ('1101100', '0011011'), + ('11110000', '00001111'), + ('11111000011', '11000011111'), + ('11011111 00100000 000111', + '111000 00000100 11111011')]: + a = bitarray(x) + a.reverse() + self.assertEQUAL(a, bitarray(y)) + self.check_obj(a) + + self.assertRaises(TypeError, bitarray().reverse, 42) + + def test_reverse_random(self): + for a in self.randombitarrays(): + b = a.copy() + a.reverse() + self.assertEQUAL(a, bitarray(reversed(b), endian=a.endian())) + self.assertEQUAL(a, b[::-1]) + self.check_obj(a) + + def 
test_tolist(self): + a = bitarray() + self.assertEqual(a.tolist(), []) + + a = bitarray('110') + lst = a.tolist() + self.assertIsInstance(lst, list) + self.assertEqual(repr(lst), '[1, 1, 0]') + + for lst in self.randomlists(): + a = bitarray(lst) + self.assertEqual(a.tolist(), lst) + + def test_remove(self): + a = bitarray('1010110') + for val, res in [(False, '110110'), (True, '10110'), + (1, '0110'), (1, '010'), (0, '10'), + (0, '1'), (1, '')]: + a.remove(val) + self.assertEQUAL(a, bitarray(res)) + self.check_obj(a) + + a = bitarray('0010011') + a.remove(1) + self.assertEQUAL(a, bitarray('000011')) + self.assertRaises(TypeError, a.remove, 'A') + self.assertRaises(ValueError, a.remove, 21) + + def test_remove_errors(self): + a = bitarray() + for i in (True, False, 1, 0): + self.assertRaises(ValueError, a.remove, i) + + a = bitarray(21) + a.setall(0) + self.assertRaises(ValueError, a.remove, 1) + a.setall(1) + self.assertRaises(ValueError, a.remove, 0) + + def test_pop_simple(self): + for x, n, r, y in [('1', 0, 1, ''), + ('0', -1, 0, ''), + ('0011100', 3, 1, '001100')]: + a = bitarray(x) + self.assertTrue(a.pop(n) is r) + self.assertEqual(a, bitarray(y)) + self.check_obj(a) + + a = bitarray('01') + self.assertEqual(a.pop(), True) + self.assertEqual(a.pop(), False) + # pop from empty bitarray + self.assertRaises(IndexError, a.pop) + + def test_pop_random_1(self): + for a in self.randombitarrays(): + self.assertRaises(IndexError, a.pop, len(a)) + self.assertRaises(IndexError, a.pop, -len(a) - 1) + if len(a) == 0: + continue + aa = a.tolist() + enda = a.endian() + self.assertEqual(a.pop(), aa[-1]) + self.check_obj(a) + self.assertEqual(a.endian(), enda) + + def test_pop_random_2(self): + for a in self.randombitarrays(start=1): + n = randint(-len(a), len(a)-1) + aa = a.tolist() + x = a.pop(n) + self.assertBitEqual(x, aa[n]) + y = aa.pop(n) + self.assertEqual(a, bitarray(aa)) + self.assertBitEqual(x, y) + self.check_obj(a) + + def test_clear(self): + for a in self.randombitarrays(): + endian = a.endian() + a.clear() + self.assertEqual(len(a), 0) + self.assertEqual(a.endian(), endian) + self.check_obj(a) + + def test_setall(self): + a = bitarray(5) + a.setall(True) + self.assertRaises(ValueError, a.setall, -1) + self.assertRaises(TypeError, a.setall, None) + self.assertEQUAL(a, bitarray('11111')) + a.setall(0) + self.assertEQUAL(a, bitarray('00000')) + self.check_obj(a) + + def test_setall_empty(self): + a = bitarray() + for v in 0, 1: + a.setall(v) + self.assertEqual(a, bitarray()) + self.check_obj(a) + + def test_setall_random(self): + for a in self.randombitarrays(): + val = randint(0, 1) + a.setall(val) + self.assertEqual(a, bitarray(len(a) * [val])) + self.check_obj(a) + + def test_bytereverse_explicit_all(self): + for x, y in [('', ''), + ('11101101', '10110111'), + ('00000001', '10000000'), + ('11011111 00100000 00011111', + '11111011 00000100 11111000')]: + a = bitarray(x) + a.bytereverse() + self.assertEqual(a, bitarray(y)) + + def test_bytereverse_explicit_range(self): + a = bitarray('11100000 00000011 00111111 11111000') + a.bytereverse(0, 1) # reverse byte 0 + self.assertEqual(a, bitarray('00000111 00000011 00111111 11111000')) + a.bytereverse(1, -1) # reverse bytes 1 and 2 + self.assertEqual(a, bitarray('00000111 11000000 11111100 11111000')) + a.bytereverse(2) # reverse bytes 2 till end of buffer + self.assertEqual(a, bitarray('00000111 11000000 00111111 00011111')) + a.bytereverse(-1) # reverse last byte + self.assertEqual(a, bitarray('00000111 11000000 00111111 11111000')) + 
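# bytereverse() reverses the bit order within each byte of the given + # range, e.g. 11100000 -> 00000111; the bytes themselves stay in place + 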
a.bytereverse(3, 1) # start > stop (nothing to reverse) + self.assertEqual(a, bitarray('00000111 11000000 00111111 11111000')) + a.bytereverse(0, 4) # reverse all bytes + self.assertEqual(a, bitarray('11100000 00000011 11111100 00011111')) + a.bytereverse(-2) # last two bytes + self.assertEqual(a, bitarray('11100000 00000011 00111111 11111000')) + + self.assertRaises(IndexError, a.bytereverse, -5) + self.assertRaises(IndexError, a.bytereverse, 0, -5) + self.assertRaises(IndexError, a.bytereverse, 5) + self.assertRaises(IndexError, a.bytereverse, 0, 5) + + @skipIf(sys.version_info[0] == 2) + def test_bytereverse_part(self): + a = bitarray(5, 'big') + memoryview(a)[0] = 0x13 # 0001 0011 + self.assertEqual(a, bitarray('0001 0')) + # the padbits (011) are not treated as zeros + a.bytereverse() + self.assertEqual(a, bitarray('1100 1')) + + a = bitarray(12, 'little') + memoryview(a)[1] = 0xd4 # .... .... 0010 1011 + self.assertEqual(a[8:], bitarray('0010')) + # the padbits (1011) are not treated as zeros + a.bytereverse(1) + self.assertEqual(a[8:], bitarray('1101')) + + def test_bytereverse_byte(self): + for i in range(256): + a = bitarray() + a.frombytes(bytearray([i])) + self.assertEqual(len(a), 8) + b = a.copy() + b.bytereverse() + self.assertEqual(b, a[::-1]) + a.reverse() + self.assertEqual(b, a) + self.check_obj(b) + + def test_bytereverse_random(self): + t = bitarray(endian=self.random_endian()) + t.frombytes(bytearray(range(256))) + t.bytereverse() + table = t.tobytes() # translation table + self.assertEqual(table[:9], b'\x00\x80\x40\xc0\x20\xa0\x60\xe0\x10') + + for n in range(100): + a = urandom(8 * n, self.random_endian()) + i = randint(0, n) # start + j = randint(0, n) # stop + b = a.copy() + memoryview(b)[i:j] = b.tobytes()[i:j].translate(table) + a.bytereverse(i, j) + self.assertEQUAL(a, b) + self.check_obj(a) + + def test_bytereverse_endian(self): + for n in range(20): + a = urandom(8 * n, self.random_endian()) + b = a.copy() + a.bytereverse() + a = bitarray(a, self.other_endian(a.endian())) + self.assertEqual(a.tobytes(), b.tobytes()) + +tests.append(MethodTests) + +# --------------------------------------------------------------------------- + +class CountTests(unittest.TestCase, Util): + + def test_basic(self): + a = bitarray('10011') + self.assertEqual(a.count(), 3) + self.assertEqual(a.count(True), 3) + self.assertEqual(a.count(False), 2) + self.assertEqual(a.count(1), 3) + self.assertEqual(a.count(0), 2) + self.assertRaises(ValueError, a.count, 2) + self.assertRaises(ValueError, a.count, 1, 0, 5, 0) + self.assertRaises(TypeError, a.count, None) + self.assertRaises(TypeError, a.count, '') + self.assertRaises(TypeError, a.count, 'A') + self.assertRaises(TypeError, a.count, 1, 2.0) + self.assertRaises(TypeError, a.count, 1, 2, 4.0) + self.assertRaises(TypeError, a.count, 0, 'A') + self.assertRaises(TypeError, a.count, 0, 0, 'A') + + def test_byte(self): + for i in range(256): + a = bitarray() + a.frombytes(bytearray([i])) + self.assertEqual(len(a), 8) + self.assertEqual(a.count(), bin(i)[2:].count('1')) + + def test_whole_range(self): + for a in self.randombitarrays(): + n = len(a) + s = a.to01() + for v in 0, 1: + ref = s.count(str(v)) + self.assertEqual(a.count(v), ref) + self.assertEqual(a.count(v, n, -n - 1, -1), ref) + + def test_zeros(self): + N = 37 + a = zeros(N) + for i in range(N): + for j in range(i, N): + self.assertEqual(a.count(0, i, j), j - i) + + for step in range(-N - 3, N + 3): + if step == 0: + continue + self.assertEqual(a.count(0, i, i, step), 0) + + def 
test_slicelength(self): + for N in range(100): + step = randint(-N - 1, N) + if step == 0: + continue + + a = bitarray(N, self.random_endian()) + i = randint(-N - 1, N) + j = randint(-N - 1, N) + slicelength = self.calc_slicelength(slice(i, j, step), N) + self.assertEqual(len(a[i:j:step]), slicelength) + + a.setall(0) + self.assertEqual(a.count(0, i, j, step), slicelength) + self.assertEqual(a.count(1, i, j, step), 0) + a[i:j:step] = 1 + self.assertEqual(a.count(0), N - slicelength) + self.assertEqual(a.count(1), slicelength) + del a[i:j:step] + self.assertEqual(len(a), N - slicelength) + self.assertFalse(a.any()) + + def test_explicit(self): + a = bitarray('01001100 01110011 01') + self.assertEqual(a.count(), 9) + self.assertEqual(a.count(0, 12), 3) + self.assertEqual(a.count(1, 1, 18, 2), 6) + self.assertEqual(a.count(1, 0, 18, 3), 2) + self.assertEqual(a.count(1, 15, 4, -3), 2) + self.assertEqual(a.count(1, -5), 3) + self.assertEqual(a.count(1, 2, 17), 7) + self.assertEqual(a.count(1, 6, 11), 2) + self.assertEqual(a.count(0, 7, -3), 4) + self.assertEqual(a.count(1, 1, -1), 8) + self.assertEqual(a.count(1, 17, 14), 0) + + def test_random(self): + for a in self.randombitarrays(): + n = len(a) + i = randint(-n - 3, n + 3) + j = randint(-n - 3, n + 3) + for step in randint(-n - 3, -2), -1, 1, randint(2, n + 3): + for v in 0, 1: + self.assertEqual(a.count(v, i, j, step), + a[i:j:step].count(v)) + +tests.append(CountTests) + +# --------------------------------------------------------------------------- + +class IndexTests(unittest.TestCase, Util): + + def test_simple(self): + a = bitarray() + for i in True, False, 1, 0: + self.assertEqual(a.find(i), -1) + self.assertRaises(ValueError, a.index, i) + + a = zeros(100) + self.assertRaises(TypeError, a.find) + self.assertRaises(TypeError, a.find, 1, 'a') + self.assertRaises(TypeError, a.find, 1, 0, 'a') + self.assertRaises(TypeError, a.find, 1, 0, 100, 1) + + self.assertRaises(ValueError, a.index, True) + self.assertRaises(TypeError, a.index) + self.assertRaises(TypeError, a.index, 1, 'a') + self.assertRaises(TypeError, a.index, 1, 0, 'a') + self.assertRaises(TypeError, a.index, 1, 0, 100, 1) + + a[20] = a[27] = 1 + for i in 1, True, bitarray('1'), bitarray('10'): + self.assertEqual(a.index(i), 20) + self.assertEqual(a.index(i, 21), 27) + self.assertEqual(a.index(i, 27), 27) + self.assertEqual(a.index(i, -73), 27) + self.assertRaises(ValueError, a.index, -1) + self.assertRaises(TypeError, a.index, None) + self.assertRaises(ValueError, a.index, 1, 5, 17) + self.assertRaises(ValueError, a.index, 1, 5, -83) + self.assertRaises(ValueError, a.index, 1, 21, 27) + self.assertRaises(ValueError, a.index, 1, 28) + self.assertEqual(a.index(0), 0) + self.assertEqual(a.find(0), 0) + + def test_empty(self): + for a in self.randombitarrays(): + # empty bitarray is always found at start index + sub = bitarray() + start = randint(0, len(a)) + self.assertEqual(a.index(sub, start), start) + self.assertEqual(a.find(sub, start), start) + + def test_200(self): + a = 200 * bitarray('1') + self.assertRaises(ValueError, a.index, False) + self.assertEqual(a.find(False), -1) + a[173] = a[187] = a[189] = 0 + for i in 0, False, bitarray('0'), bitarray('01'): + self.assertEqual(a.index(i), 173) + self.assertEqual(a.find(i), 173) + self.assertEqual(a.index(True), 0) + self.assertEqual(a.find(True), 0) + s = bitarray('010') + self.assertEqual(a.index(s), 187) + self.assertEqual(a.find(s), 187) + + def test_range(self): + n = 250 + a = bitarray(n) + for m in range(n): + 
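# a single 1 (or single 0) written at position m must be reported + # at exactly m by both index() and find() + 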
a.setall(0) + self.assertRaises(ValueError, a.index, 1) + self.assertEqual(a.find(1), -1) + a[m] = 1 + self.assertEqual(a.index(1), m) + self.assertEqual(a.find(1), m) + + a.setall(1) + self.assertRaises(ValueError, a.index, 0) + self.assertEqual(a.find(0), -1) + a[m] = 0 + self.assertEqual(a.index(0), m) + self.assertEqual(a.find(0), m) + + def test_explicit(self): + for endian in 'big', 'little': + a = bitarray('00001000 00000000 0010000', endian) + self.assertEqual(a.index(1), 4) + self.assertEqual(a.index(1, 1), 4) + self.assertEqual(a.index(0, 4), 5) + self.assertEqual(a.index(1, 5), 18) + self.assertEqual(a.index(1, -11), 18) + self.assertEqual(a.index(1, -50), 4) + self.assertRaises(ValueError, a.index, 1, 5, 18) + self.assertRaises(ValueError, a.index, 1, 19) + + self.assertEqual(a.find(1), 4) + self.assertEqual(a.find(1, 1), 4) + self.assertEqual(a.find(0, 4), 5) + self.assertEqual(a.find(1, 5), 18) + self.assertEqual(a.find(1, -11), 18) + self.assertEqual(a.find(1, -50), 4) + self.assertEqual(a.find(1, 5, 18), -1) + self.assertEqual(a.find(1, 19), -1) + + def test_explicit_2(self): + a = bitarray('10010101 11001111 1001011', self.random_endian()) + s = bitarray('011', self.random_endian()) + self.assertEqual(a.index(s, 15), 20) + self.assertEqual(a.index(s, -3), 20) + self.assertRaises(ValueError, a.index, s, 15, 22) + self.assertRaises(ValueError, a.index, s, 15, -1) + + self.assertEqual(a.find(s, 15), 20) + self.assertEqual(a.find(s, -3), 20) + self.assertEqual(a.find(s, 15, 22), -1) + self.assertEqual(a.find(s, 15, -1), -1) + + def test_random_start_stop(self): + n = 2000 + a = zeros(n) + for _ in range(100): + a[randint(0, n - 1)] = 1 + aa = a.tolist() + for _ in range(100): + start = randint(0, n) + stop = randint(0, n) + try: # reference from list + ref = aa.index(1, start, stop) + except ValueError: + ref = -1 + + res1 = a.find(1, start, stop) + self.assertEqual(res1, ref) + + try: + res2 = a.index(1, start, stop) + except ValueError: + res2 = -1 + self.assertEqual(res2, ref) + + def test_random_2(self): + for n in range(1, 70): + a = bitarray(n) + i = randint(0, 1) + a.setall(i) + for _ in range(randint(1, 4)): + a.invert(randint(0, n - 1)) + aa = a.tolist() + for _ in range(10): + start = randint(-10, n + 10) + stop = randint(-10, n + 10) + try: + res0 = aa.index(not i, start, stop) + except ValueError: + res0 = -1 + + res1 = a.find(not i, start, stop) + self.assertEqual(res1, res0) + + try: + res2 = a.index(not i, start, stop) + except ValueError: + res2 = -1 + self.assertEqual(res2, res0) + + def test_random_3(self): + for a in self.randombitarrays(): + aa = a.to01() + if a: + self.assertEqual(a.find(a), 0) + self.assertEqual(a.index(a), 0) + + for sub in '0', '1', '01', '01', '11', '101', '1111', '00100': + b = bitarray(sub, self.random_endian()) + self.assertEqual(a.find(b), aa.find(sub)) + + i = randint(-len(a) - 3, len(a) + 2) + j = randint(-len(a) - 3, len(a) + 2) + ref = aa.find(sub, i, j) + self.assertEqual(a.find(b, i, j), ref) + if ref == -1: + self.assertRaises(ValueError, a.index, b, i, j) + else: + self.assertEqual(a.index(b, i, j), ref) + +tests.append(IndexTests) + +# --------------------------------------------------------------------------- + +class SearchTests(unittest.TestCase, Util): + + def test_simple(self): + a = bitarray() + for s in 0, 1, False, True, bitarray('0'), bitarray('1'): + self.assertEqual(a.search(s), []) + + a = bitarray('00100') + for s in 1, True, bitarray('1'), bitarray('10'): + self.assertEqual(a.search(s), [2]) + + a = 100 * 
bitarray('1') + self.assertEqual(a.search(0), []) + self.assertEqual(a.search(1), list(range(100))) +
+ a = bitarray('10010101110011111001011') + for limit in range(10):
+ self.assertEqual(a.search(bitarray('011'), limit), + [6, 11, 20][:limit]) +
+ self.assertRaises(ValueError, a.search, bitarray()) + self.assertRaises(TypeError, a.search, '010') +
+ def test_itersearch(self): + a = bitarray('10011')
+ self.assertRaises(ValueError, a.itersearch, bitarray())
+ self.assertRaises(TypeError, a.itersearch, 1, 0) + self.assertRaises(TypeError, a.itersearch, '')
+ it = a.itersearch(1) + self.assertIsType(it, 'searchiterator') + self.assertEqual(next(it), 0)
+ self.assertEqual(next(it), 3) + self.assertEqual(next(it), 4) + self.assertStopIteration(it)
+ x = bitarray('11') + it = a.itersearch(x) + del a, x + self.assertEqual(next(it), 3) +
+ def test_explicit_1(self): + a = bitarray('10011', self.random_endian())
+ for s, res in [('0', [1, 2]), ('1', [0, 3, 4]), + ('01', [2]), ('11', [3]),
+ ('000', []), ('1001', [0]), + ('011', [2]), ('0011', [1]),
+ ('10011', [0]), ('100111', [])]: + b = bitarray(s, self.random_endian())
+ self.assertEqual(a.search(b), res) + self.assertEqual(list(a.itersearch(b)), res) +
+ def test_explicit_2(self): + a = bitarray('10010101 11001111 1001011')
+ for s, res in [('011', [6, 11, 20]), + ('111', [7, 12, 13, 14]), # note the overlap
+ ('1011', [5, 19]), + ('100', [0, 9, 16])]: + b = bitarray(s)
+ self.assertEqual(a.search(b), res) + self.assertEqual(list(a.itersearch(b)), res) +
+ def test_bool_random(self): + for a in self.randombitarrays(): + b = a.copy() + b.setall(0)
+ for i in a.itersearch(1): + b[i] = 1 + self.assertEQUAL(b, a) + + b.setall(1)
+ for i in a.itersearch(0): + b[i] = 0 + self.assertEQUAL(b, a) +
+ s = set(a.search(0) + a.search(1)) + self.assertEqual(len(s), len(a)) +
+ def test_random(self): + for a in self.randombitarrays(): + if a:
+ self.assertEqual(a.search(a), [0]) + self.assertEqual(list(a.itersearch(a)), [0]) +
+ for sub in '0', '1', '01', '01', '11', '101', '1101', '01100':
+ b = bitarray(sub, self.random_endian())
+ plst = [i for i in range(len(a)) if a[i:i + len(b)] == b] + self.assertEqual(a.search(b), plst)
+ for p in a.itersearch(b): + self.assertEqual(a[p:p + len(b)], b) +
+tests.append(SearchTests) +
+# --------------------------------------------------------------------------- +
+class BytesTests(unittest.TestCase, Util): + + @staticmethod + def randombytes():
+ for n in range(1, 20): + yield os.urandom(n) + + def test_frombytes_simple(self):
+ a = bitarray(endian='big') + a.frombytes(b'A') + self.assertEqual(a, bitarray('01000001')) +
+ b = a + b.frombytes(b'BC') + self.assertEQUAL(b, bitarray('01000001 01000010 01000011',
+ endian='big')) + self.assertTrue(b is a) + + def test_frombytes_types(self):
+ a = bitarray(endian='big') + a.frombytes(b'A') # bytes
+ self.assertEqual(a, bitarray('01000001')) + a.frombytes(bytearray([254])) # bytearray
+ self.assertEqual(a, bitarray('01000001 11111110')) + a.frombytes(memoryview(b'C')) # memoryview
+ self.assertEqual(a, bitarray('01000001 11111110 01000011')) + + a.clear()
+ if is_py3k: # Python 2's array cannot be used as buffer
+ a.frombytes(array.array('B', [5, 255, 192]))
+ self.assertEqual(a, bitarray('00000101 11111111 11000000')) + + self.check_obj(a) +
+ for x in u'', 0, 1, False, True, None, []: + self.assertRaises(TypeError, a.frombytes, x) +
+ def test_frombytes_bitarray(self): + for endian in 'little', 'big':
+ # endianness doesn't matter here as we're writing the buffer + # 
from bytes, and then getting the memoryview + b = bitarray(0, endian) + b.frombytes(b'ABC') + + a = bitarray(0, 'big') + a.frombytes(bitarray(b)) + self.assertEqual(a.endian(), 'big') + self.assertEqual(a, bitarray('01000001 01000010 01000011')) + self.check_obj(a) + + def test_frombytes_self(self): + a = bitarray() + self.assertRaisesMessage( + BufferError, + "cannot resize bitarray that is exporting buffers", + a.frombytes, a) + + def test_frombytes_empty(self): + for a in self.randombitarrays(): + b = a.copy() + a.frombytes(b'') + a.frombytes(bytearray()) + self.assertEQUAL(a, b) + self.assertFalse(a is b) + self.check_obj(a) + + def test_frombytes_errors(self): + a = bitarray() + self.assertRaises(TypeError, a.frombytes) + self.assertRaises(TypeError, a.frombytes, b'', b'') + self.assertRaises(TypeError, a.frombytes, 1) + self.check_obj(a) + + def test_frombytes_random(self): + for b in self.randombitarrays(): + for s in self.randombytes(): + a = bitarray(endian=b.endian()) + a.frombytes(s) + c = b.copy() + b.frombytes(s) + self.assertEQUAL(b[-len(a):], a) + self.assertEQUAL(b[:-len(a)], c) + self.assertEQUAL(b, c + a) + self.check_obj(a) + + def test_tobytes_empty(self): + a = bitarray() + self.assertEqual(a.tobytes(), b'') + + def test_tobytes_endian(self): + for end in ('big', 'little'): + a = bitarray(endian=end) + a.frombytes(b'foo') + self.assertEqual(a.tobytes(), b'foo') + + for s in self.randombytes(): + a = bitarray(endian=end) + a.frombytes(s) + self.assertEqual(a.tobytes(), s) + self.check_obj(a) + + def test_tobytes_explicit_ones(self): + for n, s in [(1, b'\x01'), (2, b'\x03'), (3, b'\x07'), (4, b'\x0f'), + (5, b'\x1f'), (6, b'\x3f'), (7, b'\x7f'), (8, b'\xff'), + (12, b'\xff\x0f'), (15, b'\xff\x7f'), (16, b'\xff\xff'), + (17, b'\xff\xff\x01'), (24, b'\xff\xff\xff')]: + a = bitarray(n, endian='little') + a.setall(1) + self.assertEqual(a.tobytes(), s) + + def test_unpack_simple(self): + a = bitarray('01') + self.assertIsInstance(a.unpack(), bytes) + self.assertEqual(a.unpack(), b'\x00\x01') + self.assertEqual(a.unpack(b'A'), b'A\x01') + self.assertEqual(a.unpack(b'0', b'1'), b'01') + self.assertEqual(a.unpack(one=b'\xff'), b'\x00\xff') + self.assertEqual(a.unpack(zero=b'A'), b'A\x01') + self.assertEqual(a.unpack(one=b't', zero=b'f'), b'ft') + + def test_unpack_random(self): + for a in self.randombitarrays(): + self.assertEqual(a.unpack(b'0', b'1'), + a.to01().encode()) + # round trip + b = bitarray() + b.pack(a.unpack()) + self.assertEqual(b, a) + # round trip with invert + b = bitarray() + b.pack(a.unpack(b'\x01', b'\x00')) + b.invert() + self.assertEqual(b, a) + + def test_unpack_errors(self): + a = bitarray('01') + self.assertRaises(TypeError, a.unpack, b'') + self.assertRaises(TypeError, a.unpack, b'0', b'') + self.assertRaises(TypeError, a.unpack, b'a', zero=b'b') + self.assertRaises(TypeError, a.unpack, foo=b'b') + self.assertRaises(TypeError, a.unpack, one=b'aa', zero=b'b') + if is_py3k: + self.assertRaises(TypeError, a.unpack, '0') + self.assertRaises(TypeError, a.unpack, one='a') + self.assertRaises(TypeError, a.unpack, b'0', '1') + + def test_pack_simple(self): + for endian in 'little', 'big': + _set_default_endian(endian) + a = bitarray() + a.pack(bytes()) + self.assertEQUAL(a, bitarray()) + a.pack(b'\x00') + self.assertEQUAL(a, bitarray('0')) + a.pack(b'\xff') + self.assertEQUAL(a, bitarray('01')) + a.pack(b'\x01\x00\x7a') + self.assertEQUAL(a, bitarray('01101')) + a.pack(bytearray([0x01, 0x00, 0xff, 0xa7])) + self.assertEQUAL(a, bitarray('01101 1011')) + 
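# pack() reads its argument as a raw byte buffer and appends one bit per
+ # byte: a zero byte appends 0 and any non-zero byte appends 1, so the
+ # bytearray([0x01, 0x00, 0xff, 0xa7]) above contributed the bits '1011'.
+ 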
self.check_obj(a) + + def test_pack_types(self): + a = bitarray() + a.pack(b'\0\x01') # bytes + self.assertEqual(a, bitarray('01')) + a.pack(bytearray([0, 2])) # bytearray + self.assertEqual(a, bitarray('01 01')) + a.pack(memoryview(b'\x02\0')) # memoryview + self.assertEqual(a, bitarray('01 01 10')) + + if is_py3k: # Python 2's array cannot be used as buffer + a.pack(array.array('B', [0, 255, 192])) + self.assertEqual(a, bitarray('01 01 10 011')) + + self.check_obj(a) + + def test_pack_bitarray(self): + b = bitarray("00000000 00000001 10000000 11111111 00000000") + a = bitarray() + a.pack(bitarray(b)) + self.assertEqual(a, bitarray('01110')) + self.check_obj(a) + + def test_pack_self(self): + a = bitarray() + self.assertRaisesMessage( + BufferError, + "cannot resize bitarray that is exporting buffers", + a.pack, a) + + def test_pack_allbytes(self): + a = bitarray() + a.pack(bytearray(range(256))) + self.assertEqual(a, bitarray('0' + 255 * '1')) + self.check_obj(a) + + def test_pack_errors(self): + a = bitarray() + self.assertRaises(TypeError, a.pack, 0) + if is_py3k: + self.assertRaises(TypeError, a.pack, '1') + self.assertRaises(TypeError, a.pack, [1, 3]) + +tests.append(BytesTests) + +# --------------------------------------------------------------------------- + +class DescriptorTests(unittest.TestCase, Util): + + def test_nbytes_padbits(self): + for a in self.randombitarrays(): + self.assertEqual(a.nbytes, bits2bytes(len(a))) + self.assertEqual(a.padbits, 8 * a.nbytes - len(a)) + if is_py3k: + self.assertIsInstance(a.nbytes, int) + self.assertIsInstance(a.padbits, int) + + def test_readonly(self): + a = bitarray('110') + self.assertFalse(a.readonly) + self.assertIsInstance(a.readonly, bool) + + b = frozenbitarray(a) + self.assertTrue(b.readonly) + self.assertIsInstance(b.readonly, bool) + +tests.append(DescriptorTests) + +# --------------------------------------------------------------------------- + +class FileTests(unittest.TestCase, Util): + + def setUp(self): + self.tmpdir = tempfile.mkdtemp() + self.tmpfname = os.path.join(self.tmpdir, 'testfile') + + def tearDown(self): + shutil.rmtree(self.tmpdir) + + def read_file(self): + with open(self.tmpfname, 'rb') as fi: + return fi.read() + + def assertFileSize(self, size): + self.assertEqual(os.path.getsize(self.tmpfname), size) + + + def test_pickle(self): + d1 = {i: a for i, a in enumerate(self.randombitarrays())} + with open(self.tmpfname, 'wb') as fo: + pickle.dump(d1, fo) + with open(self.tmpfname, 'rb') as fi: + d2 = pickle.load(fi) + for key in d1.keys(): + self.assertEQUAL(d1[key], d2[key]) + + @skipIf(sys.version_info[0] == 2) + def test_pickle_load(self): + # the test data file was created using bitarray 1.5.0 / Python 3.5.5 + path = os.path.join(os.path.dirname(__file__), 'test_data.pickle') + with open(path, 'rb') as fi: + d = pickle.load(fi) + + for i, (s, end) in enumerate([ + ('110', 'little'), + ('011', 'big'), + ('1110000001001000000000000000001', 'little'), + ('0010011110000000000000000000001', 'big'), + ]): + b = d['b%d' % i] + self.assertEqual(b.to01(), s) + self.assertEqual(b.endian(), end) + self.assertIsType(b, 'bitarray') + self.check_obj(b) + + f = d['f%d' % i] + self.assertEqual(f.to01(), s) + self.assertEqual(f.endian(), end) + self.assertIsType(f, 'frozenbitarray') + self.check_obj(f) + + @skipIf(pyodide) # pyodide has no dbm module + def test_shelve(self): + if hasattr(sys, 'gettotalrefcount'): + return + + d1 = shelve.open(self.tmpfname) + stored = [] + for a in self.randombitarrays(): + key = str(len(a)) 
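+ # shelve keys must be strings; the length serves as the key here, which
+ # assumes randombitarrays() yields bitarrays of pairwise distinct lengths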
+ d1[key] = a
+ stored.append((key, a))
+ d1.close()
+
+ d2 = shelve.open(self.tmpfname)
+ for k, v in stored:
+ self.assertEQUAL(d2[k], v)
+ d2.close()
+
+ def test_fromfile_empty(self):
+ with open(self.tmpfname, 'wb') as fo:
+ pass
+ self.assertFileSize(0)
+
+ a = bitarray()
+ with open(self.tmpfname, 'rb') as fi:
+ a.fromfile(fi)
+ self.assertEqual(a, bitarray())
+ self.check_obj(a)
+
+ def test_fromfile_Foo(self):
+ with open(self.tmpfname, 'wb') as fo:
+ fo.write(b'Foo')
+ self.assertFileSize(3)
+
+ a = bitarray(endian='big')
+ with open(self.tmpfname, 'rb') as fi:
+ a.fromfile(fi)
+ self.assertEqual(a, bitarray('01000110 01101111 01101111'))
+
+ a = bitarray(endian='little')
+ with open(self.tmpfname, 'rb') as fi:
+ a.fromfile(fi)
+ self.assertEqual(a, bitarray('01100010 11110110 11110110'))
+
+ def test_fromfile_wrong_args(self):
+ a = bitarray()
+ self.assertRaises(TypeError, a.fromfile)
+ self.assertRaises(Exception, a.fromfile, 42)
+ self.assertRaises(Exception, a.fromfile, 'bar')
+
+ with open(self.tmpfname, 'wb') as fo:
+ pass
+ with open(self.tmpfname, 'rb') as fi:
+ self.assertRaises(TypeError, a.fromfile, fi, None)
+
+ def test_fromfile_errors(self):
+ with open(self.tmpfname, 'wb') as fo:
+ fo.write(b'0123456789')
+ self.assertFileSize(10)
+
+ a = bitarray()
+ with open(self.tmpfname, 'wb') as fi:
+ self.assertRaises(Exception, a.fromfile, fi)
+
+ if is_py3k:
+ with open(self.tmpfname, 'r') as fi:
+ self.assertRaises(TypeError, a.fromfile, fi)
+
+ def test_from_large_files(self):
+ for N in range(65534, 65538):
+ data = os.urandom(N)
+ with open(self.tmpfname, 'wb') as fo:
+ fo.write(data)
+
+ a = bitarray()
+ with open(self.tmpfname, 'rb') as fi:
+ a.fromfile(fi)
+ self.assertEqual(len(a), 8 * N)
+ self.assertEqual(buffer_info(a, 'size'), N)
+ self.assertEqual(a.tobytes(), data)
+
+ def test_fromfile_extend_existing(self):
+ with open(self.tmpfname, 'wb') as fo:
+ fo.write(b'Foo')
+
+ foo_le = '011000101111011011110110'
+ a = bitarray('1', endian='little')
+ with open(self.tmpfname, 'rb') as fi:
+ a.fromfile(fi)
+
+ self.assertEqual(a, bitarray('1' + foo_le))
+
+ for n in range(20):
+ a = bitarray(n, endian='little')
+ a.setall(1)
+ with open(self.tmpfname, 'rb') as fi:
+ a.fromfile(fi)
+ self.assertEqual(a, bitarray(n * '1' + foo_le))
+
+ def test_fromfile_n(self):
+ a = bitarray()
+ a.frombytes(b'ABCDEFGHIJ')
+ with open(self.tmpfname, 'wb') as fo:
+ a.tofile(fo)
+ self.assertFileSize(10)
+
+ with open(self.tmpfname, 'rb') as f:
+ a = bitarray()
+ a.fromfile(f, 0); self.assertEqual(a.tobytes(), b'')
+ a.fromfile(f, 1); self.assertEqual(a.tobytes(), b'A')
+ f.read(1) # skip B
+ a.fromfile(f, 1); self.assertEqual(a.tobytes(), b'AC')
+ a = bitarray()
+ a.fromfile(f, 2); self.assertEqual(a.tobytes(), b'DE')
+ a.fromfile(f, 1); self.assertEqual(a.tobytes(), b'DEF')
+ a.fromfile(f, 0); self.assertEqual(a.tobytes(), b'DEF')
+ a.fromfile(f); self.assertEqual(a.tobytes(), b'DEFGHIJ')
+ a.fromfile(f); self.assertEqual(a.tobytes(), b'DEFGHIJ')
+ self.check_obj(a)
+
+ a = bitarray()
+ with open(self.tmpfname, 'rb') as f:
+ f.read(1)
+ self.assertRaises(EOFError, a.fromfile, f, 10)
+ # check that although we received an EOFError, the bytes were read
+ self.assertEqual(a.tobytes(), b'BCDEFGHIJ')
+
+ a = bitarray()
+ with open(self.tmpfname, 'rb') as f:
+ # negative values - like omitting the argument
+ a.fromfile(f, -1)
+ self.assertEqual(a.tobytes(), b'ABCDEFGHIJ')
+ self.assertRaises(EOFError, a.fromfile, f, 1)
+
+ def test_fromfile_BytesIO(self):
+ f = BytesIO(b'somedata')
+ a = bitarray()
+ a.fromfile(f, 4)
+ self.assertEqual(len(a), 32)
+ self.assertEqual(a.tobytes(), b'some')
+ a.fromfile(f)
+ self.assertEqual(len(a), 64)
+ self.assertEqual(a.tobytes(), b'somedata')
+ self.check_obj(a)
+
+ def test_tofile_empty(self):
+ a = bitarray()
+ with open(self.tmpfname, 'wb') as f:
+ a.tofile(f)
+
+ self.assertFileSize(0)
+
+ def test_tofile_Foo(self):
+ a = bitarray('01000110 01101111 01101111', endian='big')
+ b = a.copy()
+ with open(self.tmpfname, 'wb') as f:
+ a.tofile(f)
+ self.assertEQUAL(a, b)
+
+ self.assertFileSize(3)
+ self.assertEqual(self.read_file(), b'Foo')
+
+ def test_tofile_random(self):
+ for a in self.randombitarrays():
+ with open(self.tmpfname, 'wb') as fo:
+ a.tofile(fo)
+ n = a.nbytes
+ self.assertFileSize(n)
+ raw = self.read_file()
+ self.assertEqual(len(raw), n)
+ self.assertEqual(raw, a.tobytes())
+
+ def test_tofile_errors(self):
+ n = 100
+ a = bitarray(8 * n)
+ self.assertRaises(TypeError, a.tofile)
+
+ with open(self.tmpfname, 'wb') as f:
+ a.tofile(f)
+ self.assertFileSize(n)
+ # write to closed file
+ self.assertRaises(ValueError, a.tofile, f)
+
+ if is_py3k:
+ with open(self.tmpfname, 'w') as f:
+ self.assertRaises(TypeError, a.tofile, f)
+
+ with open(self.tmpfname, 'rb') as f:
+ self.assertRaises(Exception, a.tofile, f)
+
+ def test_tofile_large(self):
+ n = 100 * 1000
+ a = bitarray(8 * n)
+ a.setall(0)
+ a[2::37] = 1
+ with open(self.tmpfname, 'wb') as f:
+ a.tofile(f)
+ self.assertFileSize(n)
+
+ raw = self.read_file()
+ self.assertEqual(len(raw), n)
+ self.assertEqual(raw, a.tobytes())
+
+ def test_tofile_ones(self):
+ for n in range(20):
+ a = n * bitarray('1', endian='little')
+ with open(self.tmpfname, 'wb') as fo:
+ a.tofile(fo)
+
+ raw = self.read_file()
+ self.assertEqual(len(raw), a.nbytes)
+ # when we fill the padbits in a, we can compare
+ a.fill()
+ b = bitarray(endian='little')
+ b.frombytes(raw)
+ self.assertEqual(a, b)
+
+ def test_tofile_BytesIO(self):
+ for n in list(range(10)) + list(range(65534, 65538)):
+ data = os.urandom(n)
+ a = bitarray(0, 'big')
+ a.frombytes(data)
+ self.assertEqual(a.nbytes, n)
+ f = BytesIO()
+ a.tofile(f)
+ self.assertEqual(f.getvalue(), data)
+
+ @skipIf(sys.version_info[0] == 2)
+ def test_mmap(self):
+ with open(self.tmpfname, 'wb') as fo:
+ fo.write(1000 * b'\0')
+
+ with open(self.tmpfname, 'r+b') as f: # see issue #141
+ with mmap.mmap(f.fileno(), 0) as mapping:
+ a = bitarray(buffer=mapping, endian='little')
+ info = buffer_info(a)
+ self.assertFalse(info['readonly'])
+ self.assertTrue(info['imported'])
+ self.assertEqual(a, zeros(8000))
+ a[::2] = True
+ # not sure this is necessary, without 'del a', I get:
+ # BufferError: cannot close exported pointers exist
+ del a
+
+ self.assertEqual(self.read_file(), 1000 * b'\x55')
+
+ # pyodide hits emscripten mmap bug
+ @skipIf(sys.version_info[0] == 2 or pyodide)
+ def test_mmap_2(self):
+ with open(self.tmpfname, 'wb') as fo:
+ fo.write(1000 * b'\x22')
+
+ with open(self.tmpfname, 'r+b') as f:
+ a = bitarray(buffer=mmap.mmap(f.fileno(), 0), endian='little')
+ info = buffer_info(a)
+ self.assertFalse(info['readonly'])
+ self.assertTrue(info['imported'])
+ self.assertEqual(a, 1000 * bitarray('0100 0100'))
+ a[::4] = 1
+
+ self.assertEqual(self.read_file(), 1000 * b'\x33')
+
+ @skipIf(sys.version_info[0] == 2)
+ def test_mmap_readonly(self):
+ with open(self.tmpfname, 'wb') as fo:
+ fo.write(994 * b'\x89' + b'Veedon')
+
+ with open(self.tmpfname, 'rb') as fi: # readonly
+ m = mmap.mmap(fi.fileno(), 0, access=mmap.ACCESS_READ)
+ a = 
bitarray(buffer=m, endian='big') + info = buffer_info(a) + self.assertTrue(info['readonly']) + self.assertTrue(info['imported']) + self.assertRaisesMessage(TypeError, + "cannot modify read-only memory", + a.__setitem__, 0, 1) + self.assertEqual(a[:8 * 994], 994 * bitarray('1000 1001')) + self.assertEqual(a[8 * 994:].tobytes(), b'Veedon') + +tests.append(FileTests) + +# ----------------------------- Decode Tree --------------------------------- + +alphabet_code = { + ' ': bitarray('001'), '.': bitarray('0101010'), + 'a': bitarray('0110'), 'b': bitarray('0001100'), + 'c': bitarray('000011'), 'd': bitarray('01011'), + 'e': bitarray('111'), 'f': bitarray('010100'), + 'g': bitarray('101000'), 'h': bitarray('00000'), + 'i': bitarray('1011'), 'j': bitarray('0111101111'), + 'k': bitarray('00011010'), 'l': bitarray('01110'), + 'm': bitarray('000111'), 'n': bitarray('1001'), + 'o': bitarray('1000'), 'p': bitarray('101001'), + 'q': bitarray('00001001101'), 'r': bitarray('1101'), + 's': bitarray('1100'), 't': bitarray('0100'), + 'u': bitarray('000100'), 'v': bitarray('0111100'), + 'w': bitarray('011111'), 'x': bitarray('0000100011'), + 'y': bitarray('101010'), 'z': bitarray('00011011110') +} + +class DecodeTreeTests(unittest.TestCase, Util): + + def test_create(self): + dt = decodetree(alphabet_code) + self.assertIsType(dt, 'decodetree') + self.assertIsInstance(dt, decodetree) + self.assertRaises(TypeError, decodetree, None) + self.assertRaises(TypeError, decodetree, 'foo') + d = dict(alphabet_code) + d['-'] = bitarray() + self.assertRaises(ValueError, decodetree, d) + + def test_ambiguous_code(self): + for d in [ + {'a': bitarray('0'), 'b': bitarray('0'), 'c': bitarray('1')}, + {'a': bitarray('01'), 'b': bitarray('01'), 'c': bitarray('1')}, + {'a': bitarray('0'), 'b': bitarray('01')}, + {'a': bitarray('0'), 'b': bitarray('11'), 'c': bitarray('111')}, + ]: + self.assertRaises(ValueError, decodetree, d) + + def test_sizeof(self): + dt = decodetree({'.': bitarray('1')}) + self.assertTrue(0 < sys.getsizeof(dt) < 100) + + dt = decodetree({'a': zeros(20)}) + self.assertTrue(sys.getsizeof(dt) > 200) + + def test_nodes(self): + for n in range(1, 20): + dt = decodetree({'a': zeros(n)}) + self.assertEqual(dt.nodes(), n + 1) + self.assertFalse(dt.complete()) + + dt = decodetree({'I': bitarray('1'), 'l': bitarray('01'), + 'a': bitarray('001'), 'n': bitarray('000')}) + self.assertEqual(dt.nodes(), 7) + dt = decodetree(alphabet_code) + self.assertEqual(dt.nodes(), 70) + + def test_complete(self): + dt = decodetree({'.': bitarray('1')}) + self.assertIsInstance(dt.complete(), bool) + self.assertFalse(dt.complete()) + + dt = decodetree({'a': bitarray('0'), + 'b': bitarray('1')}) + self.assertTrue(dt.complete()) + + dt = decodetree({'a': bitarray('0'), + 'b': bitarray('11')}) + self.assertFalse(dt.complete()) + + dt = decodetree({'a': bitarray('0'), + 'b': bitarray('11'), + 'c': bitarray('10')}) + self.assertTrue(dt.complete()) + + def test_todict(self): + t = decodetree(alphabet_code) + d = t.todict() + self.assertIsInstance(d, dict) + self.assertEqual(d, alphabet_code) + + def test_decode(self): + t = decodetree(alphabet_code) + a = bitarray('1011 01110 0110 1001') + self.assertEqual(a.decode(t), ['i', 'l', 'a', 'n']) + self.assertEqual(''.join(a.iterdecode(t)), 'ilan') + a = bitarray() + self.assertEqual(a.decode(t), []) + self.assertEqual(''.join(a.iterdecode(t)), '') + self.check_obj(a) + + def test_large(self): + d = {i: bitarray(bool((1 << j) & i) for j in range(10)) + for i in range(1024)} + t = decodetree(d) 
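+ # all 1024 distinct 10-bit patterns occur as codes, so the decode tree
+ # is a complete binary tree of depth 10 with 2**11 - 1 = 2047 nodes:
+ self.assertTrue(t.complete())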
+ self.assertEqual(t.todict(), d) + self.assertEqual(t.nodes(), 2047) + self.assertTrue(sys.getsizeof(t) > 10000) + +tests.append(DecodeTreeTests) + +# ------------------ variable length encoding and decoding ------------------ + +class PrefixCodeTests(unittest.TestCase, Util): + + def test_encode_string(self): + a = bitarray() + a.encode(alphabet_code, '') + self.assertEqual(a, bitarray()) + a.encode(alphabet_code, 'a') + self.assertEqual(a, bitarray('0110')) + + def test_encode_list(self): + a = bitarray() + a.encode(alphabet_code, []) + self.assertEqual(a, bitarray()) + a.encode(alphabet_code, ['e']) + self.assertEqual(a, bitarray('111')) + + def test_encode_iter(self): + a = bitarray() + d = {0: bitarray('0'), 1: bitarray('1')} + a.encode(d, iter([0, 1, 1, 0])) + self.assertEqual(a, bitarray('0110')) + + def foo(): + for c in 1, 1, 0, 0, 1, 1: + yield c + + a.clear() + a.encode(d, foo()) + a.encode(d, range(2)) + self.assertEqual(a, bitarray('11001101')) + self.assertEqual(d, {0: bitarray('0'), 1: bitarray('1')}) + + def test_encode_symbol_not_in_code(self): + d = dict(alphabet_code) + a = bitarray() + a.encode(d, 'is') + self.assertEqual(a, bitarray('1011 1100')) + self.assertRaises(ValueError, a.encode, d, 'ilAn') + msg = "symbol not defined in prefix code" + if is_py3k: + msg += ": None" + self.assertRaisesMessage(ValueError, msg, a.encode, d, [None, 2]) + + def test_encode_not_iterable(self): + d = {'a': bitarray('0'), 'b': bitarray('1')} + a = bitarray() + a.encode(d, 'abba') + self.assertRaises(TypeError, a.encode, d, 42) + self.assertRaises(TypeError, a.encode, d, 1.3) + self.assertRaises(TypeError, a.encode, d, None) + self.assertEqual(a, bitarray('0110')) + + def test_check_codedict_encode(self): + a = bitarray() + self.assertRaises(TypeError, a.encode, None, '') + self.assertRaises(ValueError, a.encode, {}, '') + self.assertRaises(TypeError, a.encode, {'a': 'b'}, 'a') + self.assertRaises(ValueError, a.encode, {'a': bitarray()}, 'a') + self.assertEqual(len(a), 0) + + def test_check_codedict_decode(self): + a = bitarray('101') + self.assertRaises(TypeError, a.decode, 0) + self.assertRaises(ValueError, a.decode, {}) + self.assertRaises(TypeError, a.decode, {'a': 42}) + self.assertRaises(ValueError, a.decode, {'a': bitarray()}) + self.assertEqual(a, bitarray('101')) + + def test_check_codedict_iterdecode(self): + a = bitarray('1100101') + self.assertRaises(TypeError, a.iterdecode, 0) + self.assertRaises(ValueError, a.iterdecode, {}) + self.assertRaises(TypeError, a.iterdecode, {'a': []}) + self.assertRaises(ValueError, a.iterdecode, {'a': bitarray()}) + self.assertEqual(a, bitarray('1100101')) + + def test_decode_simple(self): + d = {'I': bitarray('1'), 'l': bitarray('01'), + 'a': bitarray('001'), 'n': bitarray('000')} + dcopy = dict(d) + a = bitarray('101001000') + res = list("Ilan") + self.assertEqual(a.decode(d), res) + self.assertEqual(list(a.iterdecode(d)), res) + self.assertEqual(d, dcopy) + self.assertEqual(a, bitarray('101001000')) + + def test_iterdecode_type(self): + a = bitarray('0110') + it = a.iterdecode(alphabet_code) + self.assertIsType(it, 'decodeiterator') + self.assertEqual(list(it), ['a']) + + def test_iterdecode_remove(self): + d = {'I': bitarray('1'), 'l': bitarray('01'), + 'a': bitarray('001'), 'n': bitarray('000')} + t = decodetree(d) + a = bitarray('101001000') + it = a.iterdecode(t) + del t # remove tree + self.assertEqual(''.join(it), "Ilan") + + it = a.iterdecode(d) + del a + self.assertEqual(''.join(it), "Ilan") + + def test_decode_empty(self): + d = 
{'a': bitarray('1')} + a = bitarray() + self.assertEqual(a.decode(d), []) + self.assertEqual(d, {'a': bitarray('1')}) + # test decode iterator + self.assertEqual(list(a.iterdecode(d)), []) + self.assertEqual(d, {'a': bitarray('1')}) + self.assertEqual(len(a), 0) + + def test_decode_incomplete(self): + d = {'a': bitarray('0'), 'b': bitarray('111')} + a = bitarray('00011') + msg = "incomplete prefix code at position 3" + self.assertRaisesMessage(ValueError, msg, a.decode, d) + it = a.iterdecode(d) + self.assertIsType(it, 'decodeiterator') + self.assertRaisesMessage(ValueError, msg, list, it) + t = decodetree(d) + self.assertRaisesMessage(ValueError, msg, a.decode, t) + self.assertRaisesMessage(ValueError, msg, list, a.iterdecode(t)) + + self.assertEqual(a, bitarray('00011')) + self.assertEqual(d, {'a': bitarray('0'), 'b': bitarray('111')}) + self.assertEqual(t.todict(), d) + + def test_decode_incomplete_2(self): + a = bitarray() + a.encode(alphabet_code, "now we rise") + x = len(a) + a.extend('00') + msg = "incomplete prefix code at position %d" % x + self.assertRaisesMessage(ValueError, msg, a.decode, alphabet_code) + + def test_decode_buggybitarray(self): + d = dict(alphabet_code) + # i s t + a = bitarray('1011 1100 0100 011110111001101001') + msg = "prefix code unrecognized in bitarray at position 12 .. 21" + self.assertRaisesMessage(ValueError, msg, a.decode, d) + self.assertRaisesMessage(ValueError, msg, list, a.iterdecode(d)) + t = decodetree(d) + self.assertRaisesMessage(ValueError, msg, a.decode, t) + self.assertRaisesMessage(ValueError, msg, list, a.iterdecode(d)) + + self.check_obj(a) + self.assertEqual(t.todict(), d) + + def test_iterdecode_no_term(self): + d = {'a': bitarray('0'), 'b': bitarray('111')} + a = bitarray('011') + it = a.iterdecode(d) + self.assertEqual(next(it), 'a') + self.assertRaisesMessage(ValueError, + "incomplete prefix code at position 1", + next, it) + self.assertEqual(a, bitarray('011')) + + def test_iterdecode_buggybitarray(self): + d = {'a': bitarray('0')} + a = bitarray('1') + it = a.iterdecode(d) + self.assertRaises(ValueError, next, it) + self.assertEqual(a, bitarray('1')) + self.assertEqual(d, {'a': bitarray('0')}) + + def test_decode_buggybitarray2(self): + d = {'a': bitarray('00'), 'b': bitarray('01')} + a = bitarray('1') + self.assertRaises(ValueError, a.decode, d) + self.assertRaises(ValueError, next, a.iterdecode(d)) + + t = decodetree(d) + self.assertRaises(ValueError, a.decode, t) + self.assertRaises(ValueError, next, a.iterdecode(t)) + + self.assertEqual(a, bitarray('1')) + self.assertEqual(d, {'a': bitarray('00'), 'b': bitarray('01')}) + self.assertEqual(t.todict(), d) + + def test_decode_random(self): + pat1 = re.compile(r'incomplete prefix code.+\s(\d+)') + pat2 = re.compile(r'prefix code unrecognized.+\s(\d+)\s*\.\.\s*(\d+)') + t = decodetree(alphabet_code) + for a in self.randombitarrays(): + try: + a.decode(t) + except ValueError as e: + msg = str(e) + m1 = pat1.match(msg) + m2 = pat2.match(msg) + self.assertFalse(m1 and m2) + if m1: + i = int(m1.group(1)) + if m2: + i, j = int(m2.group(1)), int(m2.group(2)) + self.assertFalse(a[i:j] in alphabet_code.values()) + a[:i].decode(t) + + def test_decode_ambiguous_code(self): + for d in [ + {'a': bitarray('0'), 'b': bitarray('0'), 'c': bitarray('1')}, + {'a': bitarray('01'), 'b': bitarray('01'), 'c': bitarray('1')}, + {'a': bitarray('0'), 'b': bitarray('01')}, + {'a': bitarray('0'), 'b': bitarray('11'), 'c': bitarray('111')}, + ]: + a = bitarray() + self.assertRaises(ValueError, a.decode, d) + 
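# note that a is empty here, so the ValueError must come from validating
+ # the ambiguous code dict itself, not from any actual decoding
+ 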
self.assertRaises(ValueError, a.iterdecode, d) + self.check_obj(a) + + def test_miscitems(self): + d = {None : bitarray('00'), + 0 : bitarray('110'), + 1 : bitarray('111'), + '' : bitarray('010'), + 2 : bitarray('011')} + a = bitarray() + a.encode(d, [None, 0, 1, '', 2]) + self.assertEqual(a, bitarray('00110111010011')) + self.assertEqual(a.decode(d), [None, 0, 1, '', 2]) + # iterator + it = a.iterdecode(d) + self.assertEqual(next(it), None) + self.assertEqual(next(it), 0) + self.assertEqual(next(it), 1) + self.assertEqual(next(it), '') + self.assertEqual(next(it), 2) + self.assertStopIteration(it) + + def test_quick_example(self): + a = bitarray() + message = 'the quick brown fox jumps over the lazy dog.' + a.encode(alphabet_code, message) + self.assertEqual(a, bitarray( + # t h e q u i c k + '0100 00000 111 001 00001001101 000100 1011 000011 00011010 001' + # b r o w n f o x + '0001100 1101 1000 011111 1001 001 010100 1000 0000100011 001' + # j u m p s o v e r + '0111101111 000100 000111 101001 1100 001 1000 0111100 111 1101' + # t h e l a z y + '001 0100 00000 111 001 01110 0110 00011011110 101010 001' + # d o g . + '01011 1000 101000 0101010')) + self.assertEqual(''.join(a.decode(alphabet_code)), message) + self.assertEqual(''.join(a.iterdecode(alphabet_code)), message) + t = decodetree(alphabet_code) + self.assertEqual(''.join(a.decode(t)), message) + self.assertEqual(''.join(a.iterdecode(t)), message) + self.check_obj(a) + +tests.append(PrefixCodeTests) + +# --------------------------- Buffer Import --------------------------------- + +class BufferImportTests(unittest.TestCase, Util): + + def test_bytes(self): + b = 100 * b'\0' + a = bitarray(buffer=b) + + info = buffer_info(a) + self.assertFalse(info['allocated']) + self.assertTrue(info['readonly']) + self.assertTrue(info['imported']) + + self.assertRaises(TypeError, a.setall, 1) + self.assertRaises(TypeError, a.clear) + self.assertEqual(a, zeros(800)) + self.check_obj(a) + + def test_bytearray(self): + b = bytearray(100 * [0]) + a = bitarray(buffer=b, endian='little') + + info = buffer_info(a) + self.assertFalse(info['allocated']) + self.assertFalse(info['readonly']) + self.assertTrue(info['imported']) + + a[0] = 1 + self.assertEqual(b[0], 1) + a[7] = 1 + self.assertEqual(b[0], 129) + a[:] = 1 + self.assertEqual(b, bytearray(100 * [255])) + self.assertRaises(BufferError, a.pop) + a[8:16] = bitarray('10000010', endian='big') + self.assertEqual(b, bytearray([255, 65] + 98 * [255])) + self.assertEqual(a.tobytes(), bytes(b)) + for n in 7, 9: + self.assertRaises(BufferError, a.__setitem__, slice(8, 16), + bitarray(n)) + b[1] = b[2] = 255 + self.assertEqual(b, bytearray(100 * [255])) + self.assertEqual(a, 800 * bitarray('1')) + self.check_obj(a) + + # Python 2's array cannot be used as buffer + @skipIf(sys.version_info[0] == 2) + def test_array(self): + a = array.array('B', [0, 255, 64]) + b = bitarray(None, 'little', a) + self.assertEqual(b, bitarray('00000000 11111111 00000010')) + a[1] = 32 + self.assertEqual(b, bitarray('00000000 00000100 00000010')) + b[3] = 1 + self.assertEqual(a.tolist(), [8, 32, 64]) + self.check_obj(b) + + def test_bitarray(self): + a = urandom(10000) + b = bitarray(buffer=a) + # a and b are two distinct bitarrays that share the same buffer now + self.assertFalse(a is b) + + a_info = buffer_info(a) + self.assertFalse(a_info['imported']) + self.assertEqual(a_info['exports'], 1) + b_info = buffer_info(b) + self.assertTrue(b_info['imported']) + self.assertEqual(b_info['exports'], 0) + # buffer address is the same! 
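+ # (importing a buffer shares the underlying memory rather than copying
+ # it, which is why every mutation below is visible through both objects)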
+ self.assertEqual(a_info['address'], + b_info['address']) + + self.assertFalse(a is b) + self.assertEqual(a, b) + b[437:461] = 0 + self.assertEqual(a, b) + a[327:350] = 1 + self.assertEqual(a, b) + b[101:1187] <<= 79 + self.assertEqual(a, b) + a[100:9800:5] = 1 + self.assertEqual(a, b) + + self.assertRaisesMessage( + BufferError, + "cannot resize bitarray that is exporting buffers", + a.pop) + self.assertRaisesMessage( + BufferError, + "cannot resize imported buffer", + b.pop) + self.check_obj(a) + self.check_obj(b) + + def test_bitarray_shared_sections(self): + a = urandom(0x2000) + b = bitarray(buffer=memoryview(a)[0x100:0x300]) + self.assertEqual(buffer_info(b, 'address'), + buffer_info(a, 'address') + 0x100) + c = bitarray(buffer=memoryview(a)[0x200:0x800]) + self.assertEqual(buffer_info(c, 'address'), + buffer_info(a, 'address') + 0x200) + self.assertEqual(a[8 * 0x100 : 8 * 0x300], b) + self.assertEqual(a[8 * 0x200 : 8 * 0x800], c) + a.setall(0) + b.setall(1) + c.setall(0) + + d = bitarray(0x2000) + d.setall(0) + d[8 * 0x100 : 8 * 0x200] = 1 + self.assertEqual(a, d) + + def test_bitarray_range(self): + for n in range(100): + a = urandom(n, self.random_endian()) + b = bitarray(buffer=a, endian=a.endian()) + # an imported buffer will never have padbits + self.assertEqual(b.padbits, 0) + self.assertEqual(len(b) % 8, 0) + self.assertEQUAL(b[:n], a) + self.check_obj(a) + self.check_obj(b) + + def test_bitarray_chain(self): + a = urandom(64) + d = {0: a} + for n in range(1, 100): + d[n] = bitarray(buffer=d[n - 1]) + + self.assertEqual(d[99], a) + a.setall(0) + self.assertEqual(d[99], zeros(64)) + a[:] = 1 + self.assertTrue(d[99].all()) + for c in d.values(): + self.check_obj(c) + + def test_frozenbitarray(self): + a = frozenbitarray('10011011 011') + self.assertTrue(a.readonly) + self.check_obj(a) + + b = bitarray(buffer=a) + self.assertTrue(b.readonly) # also readonly + self.assertRaises(TypeError, b.__setitem__, 1, 0) + self.check_obj(b) + + def test_invalid_buffer(self): + # these objects do not expose a buffer + for arg in (123, 1.23, Ellipsis, [1, 2, 3], (1, 2, 3), {1: 2}, + set([1, 2, 3]),): + self.assertRaises(TypeError, bitarray, buffer=arg) + + def test_del_import_object(self): + b = bytearray(100 * [0]) + a = bitarray(buffer=b) + del b + self.assertEqual(a, zeros(800)) + a.setall(1) + self.assertTrue(a.all()) + self.check_obj(a) + + def test_readonly_errors(self): + a = bitarray(buffer=b'A') + info = buffer_info(a) + self.assertTrue(info['readonly']) + self.assertTrue(info['imported']) + + self.assertRaises(TypeError, a.append, True) + self.assertRaises(TypeError, a.bytereverse) + self.assertRaises(TypeError, a.clear) + self.assertRaises(TypeError, a.encode, {'a': bitarray('0')}, 'aa') + self.assertRaises(TypeError, a.extend, [0, 1, 0]) + self.assertRaises(TypeError, a.fill) + self.assertRaises(TypeError, a.frombytes, b'') + self.assertRaises(TypeError, a.insert, 0, 1) + self.assertRaises(TypeError, a.invert) + self.assertRaises(TypeError, a.pack, b'\0\0\xff') + self.assertRaises(TypeError, a.pop) + self.assertRaises(TypeError, a.remove, 1) + self.assertRaises(TypeError, a.reverse) + self.assertRaises(TypeError, a.setall, 0) + self.assertRaises(TypeError, a.sort) + self.assertRaises(TypeError, a.__delitem__, 0) + self.assertRaises(TypeError, a.__delitem__, slice(None, None, 2)) + self.assertRaises(TypeError, a.__setitem__, 0, 0) + self.assertRaises(TypeError, a.__iadd__, bitarray('010')) + self.assertRaises(TypeError, a.__ior__, bitarray('100')) + self.assertRaises(TypeError, 
a.__ixor__, bitarray('110'))
+ self.assertRaises(TypeError, a.__irshift__, 1)
+ self.assertRaises(TypeError, a.__ilshift__, 1)
+ self.check_obj(a)
+
+ def test_resize_errors(self):
+ a = bitarray(buffer=bytearray([123]))
+ info = buffer_info(a)
+ self.assertFalse(info['readonly'])
+ self.assertTrue(info['imported'])
+
+ self.assertRaises(BufferError, a.append, True)
+ self.assertRaises(BufferError, a.clear)
+ self.assertRaises(BufferError, a.encode, {'a': bitarray('0')}, 'aa')
+ self.assertRaises(BufferError, a.extend, [0, 1, 0])
+ self.assertRaises(BufferError, a.frombytes, b'a')
+ self.assertRaises(BufferError, a.insert, 0, 1)
+ self.assertRaises(BufferError, a.pack, b'\0\0\xff')
+ self.assertRaises(BufferError, a.pop)
+ self.assertRaises(BufferError, a.remove, 1)
+ self.assertRaises(BufferError, a.__delitem__, 0)
+ self.check_obj(a)
+
+tests.append(BufferImportTests)
+
+# --------------------------- Buffer Export ---------------------------------
+
+class BufferExportTests(unittest.TestCase, Util):
+
+ def test_read_simple(self):
+ a = bitarray('01000001 01000010 01000011', endian='big')
+ v = memoryview(a)
+ self.assertFalse(v.readonly)
+ self.assertEqual(buffer_info(a, 'exports'), 1)
+ self.assertEqual(len(v), 3)
+ self.assertEqual(v[0], 65 if is_py3k else 'A')
+ self.assertEqual(v.tobytes(), b'ABC')
+ a[13] = 1
+ self.assertEqual(v.tobytes(), b'AFC')
+
+ w = memoryview(a) # a second buffer export
+ self.assertFalse(w.readonly)
+ self.assertEqual(buffer_info(a, 'exports'), 2)
+ self.check_obj(a)
+
+ def test_many_exports(self):
+ a = bitarray('01000111 01011111')
+ d = {} # put bitarrays in dict to keep objects around
+ for n in range(1, 20):
+ d[n] = bitarray(buffer=a)
+ self.assertEqual(buffer_info(a, 'exports'), n)
+ self.assertEqual(len(d[n]), 16)
+ self.check_obj(a)
+
+ def test_range(self):
+ for n in range(100):
+ a = bitarray(n)
+ v = memoryview(a)
+ self.assertEqual(len(v), a.nbytes)
+ info = buffer_info(a)
+ self.assertFalse(info['readonly'])
+ self.assertFalse(info['imported'])
+ self.assertEqual(info['exports'], 1)
+ self.check_obj(a)
+
+ def test_read_random(self):
+ a = bitarray()
+ a.frombytes(os.urandom(100))
+ v = memoryview(a)
+ self.assertEqual(len(v), 100)
+ b = a[34 * 8 : 67 * 8]
+ self.assertEqual(v[34:67].tobytes(), b.tobytes())
+ self.assertEqual(v.tobytes(), a.tobytes())
+ self.check_obj(a)
+
+ def test_resize(self):
+ a = bitarray('011', endian='big')
+ v = memoryview(a)
+ self.assertFalse(v.readonly)
+ self.assertRaises(BufferError, a.append, 1)
+ self.assertRaises(BufferError, a.clear)
+ self.assertRaises(BufferError, a.encode, {'a': bitarray('0')}, 'aa')
+ self.assertRaises(BufferError, a.extend, '0')
+ self.assertRaises(BufferError, a.frombytes, b'\0')
+ self.assertRaises(BufferError, a.insert, 0, 1)
+ self.assertRaises(BufferError, a.pack, b'\0')
+ self.assertRaises(BufferError, a.pop)
+ self.assertRaises(BufferError, a.remove, 1)
+ self.assertRaises(BufferError, a.__delitem__, slice(0, 8))
+ a.fill()
+ self.assertEqual(v.tobytes(), a.tobytes())
+ self.check_obj(a)
+
+ def test_frozenbitarray(self):
+ a = frozenbitarray(40)
+ v = memoryview(a)
+ self.assertTrue(v.readonly)
+ self.assertEqual(len(v), 5)
+ self.assertEqual(v.tobytes(), a.tobytes())
+ self.check_obj(a)
+
+ def test_write(self):
+ a = bitarray(8000)
+ a.setall(0)
+ v = memoryview(a)
+ self.assertFalse(v.readonly)
+ v[500] = 255 if is_py3k else '\xff'
+ self.assertEqual(a[3999:4009], bitarray('0111111110'))
+ a[4003] = 0
+ self.assertEqual(a[3999:4009], bitarray('0111011110'))
+ v[301:304] = 
b'ABC' + self.assertEqual(a[300 * 8 : 305 * 8].tobytes(), b'\x00ABC\x00') + self.check_obj(a) + + @skipIf(sys.version_info[0] == 2) + def test_write_py3(self): + a = bitarray(40) + a.setall(0) + m = memoryview(a) + v = m[1:4] + v[0] = 65 + v[1] = 66 + v[2] = 67 + self.assertEqual(a.tobytes(), b'\x00ABC\x00') + self.check_obj(a) + +tests.append(BufferExportTests) + +# --------------------------------------------------------------------------- + +class TestsFrozenbitarray(unittest.TestCase, Util): + + def test_init(self): + a = frozenbitarray('110') + self.assertEqual(a, bitarray('110')) + self.assertEqual(a.to01(), '110') + self.assertIsInstance(a, bitarray) + self.assertIsType(a, 'frozenbitarray') + self.assertTrue(a.readonly) + self.check_obj(a) + + a = frozenbitarray(bitarray()) + self.assertEQUAL(a, frozenbitarray()) + self.assertIsType(a, 'frozenbitarray') + + for endian in 'big', 'little': + a = frozenbitarray(0, endian) + self.assertEqual(a.endian(), endian) + self.assertIsType(a, 'frozenbitarray') + + a = frozenbitarray(bitarray(0, endian)) + self.assertEqual(a.endian(), endian) + self.assertIsType(a, 'frozenbitarray') + + def test_methods(self): + # test a few methods which do not raise the TypeError + a = frozenbitarray('1101100') + self.assertEqual(a[2], 0) + self.assertEqual(a[:4].to01(), '1101') + self.assertEqual(a.count(), 4) + self.assertEqual(a.index(0), 2) + b = a.copy() + self.assertEqual(b, a) + self.assertIsType(b, 'frozenbitarray') + self.assertEqual(len(b), 7) + self.assertFalse(b.all()) + self.assertTrue(b.any()) + self.check_obj(a) + + def test_init_from_bitarray(self): + for a in self.randombitarrays(): + b = frozenbitarray(a) + self.assertFalse(b is a) + self.assertEQUAL(b, a) + c = frozenbitarray(b) + self.assertFalse(c is b) + self.assertEQUAL(c, b) + self.assertEqual(hash(c), hash(b)) + self.check_obj(b) + + def test_init_from_misc(self): + tup = 0, 1, 0, 1, 1, False, True + for obj in list(tup), tup, iter(tup), bitarray(tup): + a = frozenbitarray(obj) + self.assertEqual(a, bitarray(tup)) + + def test_repr(self): + a = frozenbitarray() + self.assertEqual(repr(a), "frozenbitarray()") + self.assertEqual(str(a), "frozenbitarray()") + a = frozenbitarray('10111') + self.assertEqual(repr(a), "frozenbitarray('10111')") + self.assertEqual(str(a), "frozenbitarray('10111')") + + def test_immutable(self): + a = frozenbitarray('111') + self.assertRaises(TypeError, a.append, True) + self.assertRaises(TypeError, a.bytereverse) + self.assertRaises(TypeError, a.clear) + self.assertRaises(TypeError, a.encode, {'a': bitarray('0')}, 'aa') + self.assertRaises(TypeError, a.extend, [0, 1, 0]) + self.assertRaises(TypeError, a.fill) + self.assertRaises(TypeError, a.frombytes, b'') + self.assertRaises(TypeError, a.insert, 0, 1) + self.assertRaises(TypeError, a.invert) + self.assertRaises(TypeError, a.pack, b'\0\0\xff') + self.assertRaises(TypeError, a.pop) + self.assertRaises(TypeError, a.remove, 1) + self.assertRaises(TypeError, a.reverse) + self.assertRaises(TypeError, a.setall, 0) + self.assertRaises(TypeError, a.sort) + self.assertRaises(TypeError, a.__delitem__, 0) + self.assertRaises(TypeError, a.__delitem__, slice(None, None, 2)) + self.assertRaises(TypeError, a.__setitem__, 0, 0) + self.assertRaises(TypeError, a.__iadd__, bitarray('010')) + self.assertRaises(TypeError, a.__ior__, bitarray('100')) + self.assertRaises(TypeError, a.__ixor__, bitarray('110')) + self.assertRaises(TypeError, a.__irshift__, 1) + self.assertRaises(TypeError, a.__ilshift__, 1) + self.check_obj(a) + + 
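# A rough sketch of what frozenbitarray construction amounts to: the
+ # argument is copied into a new bitarray, and the private _freeze()
+ # method (exercised directly below) then makes the buffer readonly.
+ 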
def test_freeze(self): + # not so much a test for frozenbitarray, but how it is initialized + a = bitarray(78) + self.assertFalse(a.readonly) # not readonly + a._freeze() + self.assertTrue(a.readonly) # readonly + + def test_memoryview(self): + a = frozenbitarray('01000001 01000010', 'big') + v = memoryview(a) + self.assertEqual(v.tobytes(), b'AB') + self.assertRaises(TypeError, v.__setitem__, 0, 255) + + def test_buffer_import_readonly(self): + b = bytes(bytearray([15, 95, 128])) + a = frozenbitarray(buffer=b, endian='big') + self.assertEQUAL(a, bitarray('00001111 01011111 10000000', 'big')) + info = buffer_info(a) + self.assertTrue(info['readonly']) + self.assertTrue(info['imported']) + + def test_buffer_import_writable(self): + c = bytearray([15, 95]) + self.assertRaisesMessage( + TypeError, + "cannot import writable buffer into frozenbitarray", + frozenbitarray, buffer=c) + + def test_set(self): + a = frozenbitarray('1') + b = frozenbitarray('11') + c = frozenbitarray('01') + d = frozenbitarray('011') + s = set([a, b, c, d]) + self.assertEqual(len(s), 4) + self.assertTrue(d in s) + self.assertFalse(frozenbitarray('0') in s) + + def test_dictkey(self): + a = frozenbitarray('01') + b = frozenbitarray('1001') + d = {a: 123, b: 345} + self.assertEqual(d[frozenbitarray('01')], 123) + self.assertEqual(d[frozenbitarray(b)], 345) + + def test_dictkey2(self): # taken slightly modified from issue #74 + a1 = frozenbitarray([True, False]) + a2 = frozenbitarray([False, False]) + dct = {a1: "one", a2: "two"} + a3 = frozenbitarray([True, False]) + self.assertEqual(a3, a1) + self.assertEqual(dct[a3], 'one') + + def test_mix(self): + a = bitarray('110') + b = frozenbitarray('0011') + self.assertEqual(a + b, bitarray('1100011')) + a.extend(b) + self.assertEqual(a, bitarray('1100011')) + + def test_hash_endianness_simple(self): + a = frozenbitarray('1', 'big') + b = frozenbitarray('1', 'little') + self.assertEqual(a, b) + self.assertEqual(hash(a), hash(b)) + d = {a: 'value'} + self.assertEqual(d[b], 'value') + self.assertEqual(len(set([a, b])), 1) + + def test_hash_endianness_random(self): + s = set() + n = 0 + for a in self.randombitarrays(): + a = frozenbitarray(a) + b = frozenbitarray(a, self.other_endian(a.endian())) + self.assertEqual(a, b) + self.assertNotEqual(a.endian(), b.endian()) + self.assertEqual(hash(a), hash(b)) + d = {a: 1, b: 2} + self.assertEqual(len(d), 1) + s.add(a) + s.add(b) + n += 1 + + self.assertEqual(len(s), n) + + def test_pickle(self): + for a in self.randombitarrays(): + f = frozenbitarray(a) + g = pickle.loads(pickle.dumps(f)) + self.assertEqual(f, g) + self.assertEqual(f.endian(), g.endian()) + self.assertTrue(str(g).startswith('frozenbitarray')) + self.check_obj(a) + self.check_obj(f) + self.check_obj(g) + +tests.append(TestsFrozenbitarray) + +# --------------------------------------------------------------------------- + +def run(verbosity=1, repeat=1): + import bitarray.test_util as btu + tests.extend(btu.tests) + + default_endian = get_default_endian() + print('bitarray is installed in: %s' % os.path.dirname(__file__)) + print('bitarray version: %s' % __version__) + print('sys.version: %s' % sys.version) + print('sys.prefix: %s' % sys.prefix) + print('pointer size: %d bit' % (8 * SYSINFO[0])) + print('sizeof(size_t): %d' % SYSINFO[1]) + print('sizeof(bitarrayobject): %d' % SYSINFO[2]) + print('PY_UINT64_T defined: %s' % SYSINFO[5]) + print('USE_WORD_SHIFT: %s' % SYSINFO[7]) + print('DEBUG: %s' % DEBUG) + suite = unittest.TestSuite() + for cls in tests: + for _ in 
range(repeat): + suite.addTest(unittest.makeSuite(cls)) + + runner = unittest.TextTestRunner(verbosity=verbosity) + result = runner.run(suite) + _set_default_endian(default_endian) + return result + + +if __name__ == '__main__': + run() diff --git a/env/lib/python3.8/site-packages/bitarray/test_data.pickle b/env/lib/python3.8/site-packages/bitarray/test_data.pickle new file mode 100644 index 00000000..fb8d8aba Binary files /dev/null and b/env/lib/python3.8/site-packages/bitarray/test_data.pickle differ diff --git a/env/lib/python3.8/site-packages/bitarray/test_util.py b/env/lib/python3.8/site-packages/bitarray/test_util.py new file mode 100644 index 00000000..0f29e22a --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray/test_util.py @@ -0,0 +1,1873 @@ +""" +Tests for bitarray.util module +""" +from __future__ import absolute_import + +import os +import re +import sys +import base64 +import binascii +import shutil +import tempfile +import unittest +from string import hexdigits +from random import choice, randint, random +from collections import Counter + +from bitarray import (bitarray, frozenbitarray, decodetree, + get_default_endian, _set_default_endian) +from bitarray.test_bitarray import Util, skipIf + +from bitarray.util import ( + zeros, urandom, pprint, make_endian, rindex, strip, count_n, + parity, count_and, count_or, count_xor, subset, + serialize, deserialize, ba2hex, hex2ba, ba2base, base2ba, + ba2int, int2ba, vl_encode, vl_decode, + huffman_code, canonical_huffman, canonical_decode, +) + +if sys.version_info[0] == 3: + from io import StringIO +else: + from io import BytesIO as StringIO + +tests = [] # type: list + +# --------------------------------------------------------------------------- + +class TestsZeros(unittest.TestCase): + + def test_basic(self): + for default_endian in 'big', 'little': + _set_default_endian(default_endian) + a = zeros(0) + self.assertEqual(a, bitarray()) + self.assertEqual(a.endian(), default_endian) + + a = zeros(0, endian=None) + self.assertEqual(len(a), 0) + self.assertEqual(a.endian(), default_endian) + + for n in range(100): + a = zeros(n) + self.assertEqual(len(a), n) + self.assertFalse(a.any()) + self.assertEqual(a.count(1), 0) + self.assertEqual(a, bitarray(n * '0')) + + for endian in 'big', 'little': + a = zeros(3, endian) + self.assertEqual(a, bitarray('000')) + self.assertEqual(a.endian(), endian) + + def test_wrong_args(self): + self.assertRaises(TypeError, zeros) # no argument + self.assertRaises(TypeError, zeros, '') + self.assertRaises(TypeError, zeros, bitarray()) + self.assertRaises(TypeError, zeros, []) + self.assertRaises(TypeError, zeros, 1.0) + self.assertRaises(ValueError, zeros, -1) + + # endian not string + for x in 0, 1, {}, [], False, True: + self.assertRaises(TypeError, zeros, 0, x) + # endian wrong string + self.assertRaises(ValueError, zeros, 0, 'foo') + +tests.append(TestsZeros) + +# --------------------------------------------------------------------------- + +class TestsURandom(unittest.TestCase): + + def test_basic(self): + for default_endian in 'big', 'little': + _set_default_endian(default_endian) + a = urandom(0) + self.assertEqual(a, bitarray()) + self.assertEqual(a.endian(), default_endian) + + a = urandom(7, endian=None) + self.assertEqual(len(a), 7) + self.assertEqual(a.endian(), default_endian) + + for n in range(50): + a = urandom(n) + self.assertEqual(len(a), n) + self.assertEqual(a.endian(), default_endian) + + for endian in 'big', 'little': + a = urandom(11, endian) + self.assertEqual(len(a), 11) + 
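# urandom(n) returns n random bits drawn from the OS entropy source;
+ # only the length and endianness are deterministic, which is all the
+ # assertions in this test can check.
+ 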
self.assertEqual(a.endian(), endian) + + def test_count(self): + a = urandom(1000) + b = urandom(1000) + self.assertNotEqual(a, b) + self.assertTrue(400 < a.count() < 600) + self.assertTrue(400 < b.count() < 600) + + def test_wrong_args(self): + self.assertRaises(TypeError, urandom) + self.assertRaises(TypeError, urandom, '') + self.assertRaises(TypeError, urandom, bitarray()) + self.assertRaises(TypeError, urandom, []) + self.assertRaises(TypeError, urandom, 1.0) + self.assertRaises(ValueError, urandom, -1) + + self.assertRaises(TypeError, urandom, 0, 1) + self.assertRaises(ValueError, urandom, 0, 'foo') + +tests.append(TestsURandom) + +# --------------------------------------------------------------------------- + +class TestsPPrint(unittest.TestCase): + + @staticmethod + def get_code_string(a): + f = StringIO() + pprint(a, stream=f) + return f.getvalue() + + def round_trip(self, a): + b = eval(self.get_code_string(a)) + self.assertEqual(b, a) + self.assertEqual(type(b), type(a)) + + def test_bitarray(self): + a = bitarray('110') + self.assertEqual(self.get_code_string(a), "bitarray('110')\n") + self.round_trip(a) + + def test_frozenbitarray(self): + a = frozenbitarray('01') + self.assertEqual(self.get_code_string(a), "frozenbitarray('01')\n") + self.round_trip(a) + + def test_formatting(self): + a = bitarray(200) + for width in range(40, 130, 10): + for n in range(1, 10): + f = StringIO() + pprint(a, stream=f, group=n, width=width) + r = f.getvalue() + self.assertEqual(eval(r), a) + s = r.strip("bitary(')\n") + for group in s.split()[:-1]: + self.assertEqual(len(group), n) + for line in s.split('\n'): + self.assertTrue(len(line) < width) + + def test_fallback(self): + for a in None, 'asd', [1, 2], bitarray(), frozenbitarray('1'): + self.round_trip(a) + + def test_subclass(self): + class Foo(bitarray): + pass + + a = Foo() + code = self.get_code_string(a) + self.assertEqual(code, "Foo()\n") + b = eval(code) + self.assertEqual(b, a) + self.assertEqual(type(b), type(a)) + + def test_random(self): + for n in range(150): + self.round_trip(urandom(n)) + + def test_file(self): + tmpdir = tempfile.mkdtemp() + tmpfile = os.path.join(tmpdir, 'testfile') + a = bitarray(1000) + try: + with open(tmpfile, 'w') as fo: + pprint(a, fo) + with open(tmpfile, 'r') as fi: + b = eval(fi.read()) + self.assertEqual(a, b) + finally: + shutil.rmtree(tmpdir) + +tests.append(TestsPPrint) + +# --------------------------------------------------------------------------- + +class TestsMakeEndian(unittest.TestCase, Util): + + def test_simple(self): + a = bitarray('1110001', endian='big') + b = make_endian(a, 'big') + self.assertTrue(b is a) + c = make_endian(a, endian='little') + self.assertTrue(c == a) + self.assertEqual(c.endian(), 'little') + self.assertIsType(c, 'bitarray') + + # wrong arguments + self.assertRaises(TypeError, make_endian, '', 'big') + self.assertRaises(TypeError, make_endian, bitarray(), 1) + self.assertRaises(ValueError, make_endian, bitarray(), 'foo') + + def test_empty(self): + a = bitarray(endian='little') + b = make_endian(a, 'big') + self.assertTrue(b == a) + self.assertEqual(len(b), 0) + self.assertEqual(b.endian(), 'big') + + def test_from_frozen(self): + a = frozenbitarray('1101111', 'big') + b = make_endian(a, 'big') + self.assertTrue(b is a) + c = make_endian(a, 'little') + self.assertTrue(c == a) + self.assertEqual(c.endian(), 'little') + #self.assertIsType(c, 'frozenbitarray') + + def test_random(self): + for a in self.randombitarrays(): + aa = a.copy() + for endian in 'big', 'little': 
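+ # make_endian() should return the input object itself when the
+ # endianness already matches, and an equal copy otherwise: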
+ b = make_endian(a, endian) + self.assertEqual(a, b) + self.assertEqual(b.endian(), endian) + if a.endian() == endian: + self.assertTrue(b is a) + self.assertEQUAL(a, aa) + +tests.append(TestsMakeEndian) + +# --------------------------------------------------------------------------- + +class TestsRIndex(unittest.TestCase, Util): + + def test_simple(self): + self.assertRaises(TypeError, rindex) + self.assertRaises(TypeError, rindex, None) + self.assertRaises(ValueError, rindex, bitarray(), 1) + for endian in 'big', 'little': + a = bitarray('00010110 000', endian) + self.assertEqual(rindex(a), 6) + self.assertEqual(rindex(a, 1), 6) + self.assertEqual(rindex(a, 1, 3), 6) + self.assertEqual(rindex(a, 1, 3, 8), 6) + self.assertEqual(rindex(a, 1, -20, 20), 6) + self.assertEqual(rindex(a, 1, 0, 5), 3) + self.assertEqual(rindex(a, 1, 0, -6), 3) + self.assertEqual(rindex(a, 1, 0, -5), 5) + self.assertRaises(TypeError, rindex, a, 'A') + self.assertRaises(ValueError, rindex, a, 2) + self.assertRaises(ValueError, rindex, a, 1, 7) + self.assertRaises(ValueError, rindex, a, 1, 10, 3) + self.assertRaises(ValueError, rindex, a, 1, -1, 0) + self.assertRaises(TypeError, rindex, a, 1, 10, 3, 4) + + a = bitarray('00010110 111', endian) + self.assertEqual(rindex(a, 0), 7) + self.assertEqual(rindex(a, 0, 0, 4), 2) + self.assertEqual(rindex(a, False), 7) + + a = frozenbitarray('00010110 111', endian) + self.assertEqual(rindex(a, 0), 7) + self.assertRaises(TypeError, rindex, a, None) + self.assertRaises(ValueError, rindex, a, 7) + + for v in 0, 1: + self.assertRaises(ValueError, rindex, + bitarray(0, endian), v) + self.assertRaises(ValueError, rindex, + bitarray('000', endian), 1) + self.assertRaises(ValueError, rindex, + bitarray('11111', endian), 0) + + def test_range(self): + n = 250 + a = bitarray(n) + for m in range(n): + a.setall(0) + self.assertRaises(ValueError, rindex, a, 1) + a[m] = 1 + self.assertEqual(rindex(a, 1), m) + + a.setall(1) + self.assertRaises(ValueError, rindex, a, 0) + a[m] = 0 + self.assertEqual(rindex(a, 0), m) + + def test_random(self): + for a in self.randombitarrays(): + v = randint(0, 1) + try: + i = rindex(a, v) + except ValueError: + i = None + s = a.to01() + try: + j = s.rindex(str(v)) + except ValueError: + j = None + self.assertEqual(i, j) + + def test_random_start_stop(self): + n = 2000 + a = zeros(n) + indices = [randint(0, n - 1) for _ in range(100)] + for i in indices: + a[i] = 1 + for _ in range(100): + start = randint(0, n) + stop = randint(0, n) + filtered = [i for i in indices if i >= start and i < stop] + ref = max(filtered) if filtered else -1 + try: + res = rindex(a, 1, start, stop) + except ValueError: + res = -1 + self.assertEqual(res, ref) + + def test_many_set(self): + for _ in range(10): + n = randint(1, 10000) + v = randint(0, 1) + a = bitarray(n) + a.setall(not v) + lst = [randint(0, n - 1) for _ in range(100)] + for i in lst: + a[i] = v + self.assertEqual(rindex(a, v), max(lst)) + + def test_one_set(self): + for _ in range(10): + N = randint(1, 10000) + a = bitarray(N) + a.setall(0) + a[randint(0, N - 1)] = 1 + self.assertEqual(rindex(a), a.index(1)) + +tests.append(TestsRIndex) + +# --------------------------------------------------------------------------- + +class TestsStrip(unittest.TestCase, Util): + + def test_simple(self): + self.assertRaises(TypeError, strip, '0110') + self.assertRaises(TypeError, strip, bitarray(), 123) + self.assertRaises(ValueError, strip, bitarray(), 'up') + for default_endian in 'big', 'little': + 
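+ # strip() removes zeros only; the mode defaults to 'right', and may
+ # also be given as 'left' or 'both':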
_set_default_endian(default_endian) + a = bitarray('00010110000') + self.assertEQUAL(strip(a), bitarray('0001011')) + self.assertEQUAL(strip(a, 'left'), bitarray('10110000')) + self.assertEQUAL(strip(a, 'both'), bitarray('1011')) + b = frozenbitarray('00010110000') + c = strip(b, 'both') + self.assertEqual(c, bitarray('1011')) + self.assertIsType(c, 'frozenbitarray') + + def test_zeros(self): + for n in range(10): + for mode in 'left', 'right', 'both': + a = zeros(n) + c = strip(a, mode) + self.assertIsType(c, 'bitarray') + self.assertEqual(c, bitarray()) + self.assertEqual(a, zeros(n)) + + b = frozenbitarray(a) + c = strip(b, mode) + self.assertIsType(c, 'frozenbitarray') + self.assertEqual(c, bitarray()) + + def test_random(self): + for a in self.randombitarrays(): + b = a.copy() + f = frozenbitarray(a) + s = a.to01() + for mode, res in [ + ('left', bitarray(s.lstrip('0'), a.endian())), + ('right', bitarray(s.rstrip('0'), a.endian())), + ('both', bitarray(s.strip('0'), a.endian())), + ]: + c = strip(a, mode) + self.assertEQUAL(c, res) + self.assertIsType(c, 'bitarray') + self.assertEQUAL(a, b) + + c = strip(f, mode) + self.assertEQUAL(c, res) + self.assertIsType(c, 'frozenbitarray') + self.assertEQUAL(f, b) + + def test_one_set(self): + for _ in range(10): + n = randint(1, 10000) + a = bitarray(n) + a.setall(0) + a[randint(0, n - 1)] = 1 + self.assertEqual(strip(a, 'both'), bitarray('1')) + self.assertEqual(len(a), n) + +tests.append(TestsStrip) + +# --------------------------------------------------------------------------- + +class TestsCount_N(unittest.TestCase, Util): + + @staticmethod + def count_n(a, n): + "return lowest index i for which a[:i].count() == n" + i, j = n, a.count(1, 0, n) + while j < n: + j += a[i] + i += 1 + return i + + def check_result(self, a, n, i, v=1): + self.assertEqual(a.count(v, 0, i), n) + if i == 0: + self.assertEqual(n, 0) + else: + self.assertEqual(a[i - 1], v) + + def test_empty(self): + a = bitarray() + self.assertEqual(count_n(a, 0), 0) + self.assertEqual(count_n(a, 0, 0), 0) + self.assertEqual(count_n(a, 0, 1), 0) + self.assertRaises(ValueError, count_n, a, 1) + self.assertRaises(TypeError, count_n, '', 0) + self.assertRaises(TypeError, count_n, a, 7.0) + self.assertRaises(ValueError, count_n, a, 0, 2) + + def test_simple(self): + a = bitarray('111110111110111110111110011110111110111110111000') + b = a.copy() + self.assertEqual(len(a), 48) + self.assertEqual(a.count(), 37) + self.assertEqual(a.count(0), 11) + + self.assertEqual(count_n(a, 0, 0), 0) + self.assertEqual(count_n(a, 2, 0), 12) + self.assertEqual(count_n(a, 10, 0), 47) + self.assertRaisesMessage(ValueError, "non-negative integer expected", + count_n, a, -1, 0) # n < 0 + self.assertRaisesMessage(ValueError, "n larger than bitarray size", + count_n, a, 49, 0) # n > len(a) + self.assertRaisesMessage(ValueError, "n exceeds total count", + count_n, a, 12, 0) # n > a.count(0) + + self.assertEqual(count_n(a, 0), 0) + self.assertEqual(count_n(a, 20), 23) + self.assertEqual(count_n(a, 20, 1), 23) + self.assertEqual(count_n(a, 37), 45) + self.assertRaisesMessage(ValueError, "non-negative integer expected", + count_n, a, -1) # n < 0 + self.assertRaisesMessage(ValueError, "n larger than bitarray size", + count_n, a, 49) # n > len(a) + self.assertRaisesMessage(ValueError, "n exceeds total count", + count_n, a, 38) # n > a.count() + + for v in 0, 1: + for n in range(a.count(v) + 1): + i = count_n(a, n, v) + self.check_result(a, n, i, v) + self.assertEqual(a[:i].count(v), n) + self.assertEqual(i, 
self.count_n(a if v else ~a, n)) + self.assertEQUAL(a, b) + + def test_frozen(self): + a = frozenbitarray('001111101111101111101111100111100') + self.assertEqual(len(a), 33) + self.assertEqual(a.count(), 24) + self.assertEqual(count_n(a, 0), 0) + self.assertEqual(count_n(a, 10), 13) + self.assertEqual(count_n(a, 24), 31) + self.assertRaises(ValueError, count_n, a, -1) # n < 0 + self.assertRaises(ValueError, count_n, a, 25) # n > a.count() + self.assertRaises(ValueError, count_n, a, 34) # n > len(a) + for n in range(25): + self.check_result(a, n, count_n(a, n)) + + def test_ones(self): + n = randint(1, 100000) + a = bitarray(n) + a.setall(1) + self.assertEqual(count_n(a, n), n) + self.assertRaises(ValueError, count_n, a, 1, 0) + self.assertRaises(ValueError, count_n, a, n + 1) + for _ in range(20): + i = randint(0, n) + self.assertEqual(count_n(a, i), i) + + def test_one_set(self): + n = randint(1, 100000) + a = bitarray(n) + a.setall(0) + self.assertEqual(count_n(a, 0), 0) + self.assertRaises(ValueError, count_n, a, 1) + for _ in range(20): + a.setall(0) + i = randint(0, n - 1) + a[i] = 1 + self.assertEqual(count_n(a, 1), i + 1) + self.assertRaises(ValueError, count_n, a, 2) + + def test_large(self): + for _ in range(100): + N = randint(100000, 250000) + a = bitarray(N) + v = randint(0, 1) + a.setall(not v) + for _ in range(randint(0, 100)): + a[randint(0, N - 1)] = v + tc = a.count(v) # total count + i = count_n(a, tc, v) + self.check_result(a, tc, i, v) + self.assertRaises(ValueError, count_n, a, tc + 1, v) + for _ in range(20): + n = randint(0, tc) + i = count_n(a, n, v) + self.check_result(a, n, i, v) + + def test_random(self): + for a in self.randombitarrays(): + for v in 0, 1: + n = a.count(v) // 2 + i = count_n(a, n, v) + self.check_result(a, n, i, v) + +tests.append(TestsCount_N) + +# --------------------------------------------------------------------------- + +class TestsBitwiseCount(unittest.TestCase, Util): + + def test_count_byte(self): + ones = bitarray(8) + ones.setall(1) + zeros = bitarray(8) + zeros.setall(0) + for i in range(256): + a = bitarray() + a.frombytes(bytes(bytearray([i]))) + cnt = a.count() + self.assertEqual(count_and(a, zeros), 0) + self.assertEqual(count_and(a, ones), cnt) + self.assertEqual(count_and(a, a), cnt) + self.assertEqual(count_or(a, zeros), cnt) + self.assertEqual(count_or(a, ones), 8) + self.assertEqual(count_or(a, a), cnt) + self.assertEqual(count_xor(a, zeros), cnt) + self.assertEqual(count_xor(a, ones), 8 - cnt) + self.assertEqual(count_xor(a, a), 0) + + def test_bit_count1(self): + a = bitarray('001111') + aa = a.copy() + b = bitarray('010011') + bb = b.copy() + self.assertEqual(count_and(a, b), 2) + self.assertEqual(count_or(a, b), 5) + self.assertEqual(count_xor(a, b), 3) + for f in count_and, count_or, count_xor: + # not two arguments + self.assertRaises(TypeError, f) + self.assertRaises(TypeError, f, a) + self.assertRaises(TypeError, f, a, b, 3) + # wrong argument types + self.assertRaises(TypeError, f, a, '') + self.assertRaises(TypeError, f, '1', b) + self.assertRaises(TypeError, f, a, 4) + self.assertEQUAL(a, aa) + self.assertEQUAL(b, bb) + + b.append(1) + for f in count_and, count_or, count_xor: + self.assertRaises(ValueError, f, a, b) + self.assertRaises(ValueError, f, + bitarray('110', 'big'), + bitarray('101', 'little')) + + def test_bit_count_frozen(self): + a = frozenbitarray('001111') + b = frozenbitarray('010011') + self.assertEqual(count_and(a, b), 2) + self.assertEqual(count_or(a, b), 5) + self.assertEqual(count_xor(a, b), 3) 
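+ # count_and/count_or/count_xor only read their operands, which is why
+ # frozenbitarrays are accepted above; the next test checks that each
+ # agrees with e.g. (a & b).count() without building that temporary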
+ + def test_bit_count_random(self): + for n in list(range(50)) + [randint(1000, 2000)]: + a = urandom(n) + b = urandom(n) + self.assertEqual(count_and(a, b), (a & b).count()) + self.assertEqual(count_or(a, b), (a | b).count()) + self.assertEqual(count_xor(a, b), (a ^ b).count()) + +tests.append(TestsBitwiseCount) + +# --------------------------------------------------------------------------- + +class TestsSubset(unittest.TestCase, Util): + + def test_basic(self): + a = frozenbitarray('0101') + b = bitarray('0111') + self.assertTrue(subset(a, b)) + self.assertFalse(subset(b, a)) + self.assertRaises(TypeError, subset) + self.assertRaises(TypeError, subset, a, '') + self.assertRaises(TypeError, subset, '1', b) + self.assertRaises(TypeError, subset, a, 4) + b.append(1) + self.assertRaises(ValueError, subset, a, b) + + def subset_simple(self, a, b): + return (a & b).count() == a.count() + + def test_True(self): + for a, b in [('', ''), ('0', '1'), ('0', '0'), ('1', '1'), + ('000', '111'), ('0101', '0111'), + ('000010111', '010011111')]: + a, b = bitarray(a), bitarray(b) + self.assertTrue(subset(a, b) is True) + self.assertTrue(self.subset_simple(a, b) is True) + + def test_False(self): + for a, b in [('1', '0'), ('1101', '0111'), + ('0000101111', '0100111011')]: + a, b = bitarray(a), bitarray(b) + self.assertTrue(subset(a, b) is False) + self.assertTrue(self.subset_simple(a, b) is False) + + def test_random(self): + for a in self.randombitarrays(start=1): + b = a.copy() + # we set one random bit in b to 1, so a is always a subset of b + b[randint(0, len(a) - 1)] = 1 + self.assertTrue(subset(a, b)) + # but b in not always a subset of a + self.assertEqual(subset(b, a), self.subset_simple(b, a)) + # we set all bits in a, which ensures that b is a subset of a + a.setall(1) + self.assertTrue(subset(b, a)) + +tests.append(TestsSubset) + +# --------------------------------------------------------------------------- + +class TestsParity(unittest.TestCase, Util): + + def test_bitarray(self): + a = bitarray() + self.assertBitEqual(parity(a), 0) + par = False + for _ in range(100): + self.assertEqual(parity(a), par) + a.append(1) + par = not par + + def test_unused_bits(self): + a = bitarray(1) + a.setall(1) + self.assertTrue(parity(a)) + + def test_frozenbitarray(self): + self.assertBitEqual(parity(frozenbitarray()), 0) + self.assertBitEqual(parity(frozenbitarray('0010011')), 1) + self.assertBitEqual(parity(frozenbitarray('10100110')), 0) + + def test_wrong_args(self): + self.assertRaises(TypeError, parity, '') + self.assertRaises(TypeError, bitarray(), 1) + + def test_byte(self): + for i in range(256): + a = bitarray() + a.frombytes(bytes(bytearray([i]))) + self.assertEqual(parity(a), a.count() % 2) + + def test_random(self): + for a in self.randombitarrays(): + self.assertEqual(parity(a), a.count() % 2) + +tests.append(TestsParity) + +# --------------------------------------------------------------------------- + +class TestsHexlify(unittest.TestCase, Util): + + def test_ba2hex(self): + self.assertEqual(ba2hex(bitarray(0, 'big')), '') + self.assertEqual(ba2hex(bitarray('1110', 'big')), 'e') + self.assertEqual(ba2hex(bitarray('1110', 'little')), '7') + self.assertEqual(ba2hex(bitarray('0000 0001', 'big')), '01') + self.assertEqual(ba2hex(bitarray('1000 0000', 'big')), '80') + self.assertEqual(ba2hex(bitarray('0000 0001', 'little')), '08') + self.assertEqual(ba2hex(bitarray('1000 0000', 'little')), '10') + self.assertEqual(ba2hex(frozenbitarray('1100 0111', 'big')), 'c7') + # length not multiple of 4 + 
self.assertRaises(ValueError, ba2hex, bitarray('10')) + self.assertRaises(TypeError, ba2hex, '101') + + c = ba2hex(bitarray('1101', 'big')) + self.assertIsInstance(c, str) + + for n in range(7): + a = bitarray(n * '1111', 'big') + b = a.copy() + self.assertEqual(ba2hex(a), n * 'f') + # ensure original object wasn't altered + self.assertEQUAL(a, b) + + def test_hex2ba(self): + _set_default_endian('big') + self.assertEqual(hex2ba(''), bitarray()) + for c in 'e', 'E', b'e', b'E', u'e', u'E': + a = hex2ba(c) + self.assertEqual(a.to01(), '1110') + self.assertEqual(a.endian(), 'big') + self.assertEQUAL(hex2ba('01'), bitarray('0000 0001', 'big')) + self.assertEQUAL(hex2ba('08', 'little'), + bitarray('0000 0001', 'little')) + self.assertEQUAL(hex2ba('aD'), bitarray('1010 1101', 'big')) + self.assertEQUAL(hex2ba(b'10aF'), + bitarray('0001 0000 1010 1111', 'big')) + self.assertEQUAL(hex2ba(b'10aF', 'little'), + bitarray('1000 0000 0101 1111', 'little')) + + def test_hex2ba_errors(self): + self.assertRaises(TypeError, hex2ba, 0) + + for endian in 'little', 'big': + _set_default_endian(endian) + self.assertRaises(ValueError, hex2ba, '01a7g89') + self.assertRaises(UnicodeEncodeError, hex2ba, u'10\u20ac') + # check for NUL bytes + for b in b'\0', b'\0f', b'f\0', b'\0ff', b'f\0f', b'ff\0': + self.assertRaises(ValueError, hex2ba, b) + + def test_explicit(self): + data = [ # little big + ('', '', ''), + ('1000', '1', '8'), + ('1000 1100', '13', '8c'), + ('1000 1100 1110', '137', '8ce'), + ('1000 1100 1110 1111' , '137f', '8cef'), + ('1000 1100 1110 1111 0100', '137f2', '8cef4'), + ] + for bs, hex_le, hex_be in data: + a_be = bitarray(bs, 'big') + a_le = bitarray(bs, 'little') + self.assertEQUAL(hex2ba(hex_be, 'big'), a_be) + self.assertEQUAL(hex2ba(hex_le, 'little'), a_le) + self.assertEqual(ba2hex(a_be), hex_be) + self.assertEqual(ba2hex(a_le), hex_le) + + def test_round_trip(self): + s = ''.join(choice(hexdigits) for _ in range(randint(20, 100))) + for default_endian in 'big', 'little': + _set_default_endian(default_endian) + a = hex2ba(s) + self.check_obj(a) + self.assertEqual(len(a) % 4, 0) + self.assertEqual(a.endian(), default_endian) + t = ba2hex(a) + self.assertEqual(t, s.lower()) + b = hex2ba(t, default_endian) + self.assertEQUAL(a, b) + + def test_binascii(self): + a = urandom(800, 'big') + s = binascii.hexlify(a.tobytes()).decode() + self.assertEqual(ba2hex(a), s) + b = bitarray(endian='big') + b.frombytes(binascii.unhexlify(s)) + self.assertEQUAL(hex2ba(s, 'big'), b) + +tests.append(TestsHexlify) + +# --------------------------------------------------------------------------- + +class TestsBase(unittest.TestCase, Util): + + def test_ba2base(self): + c = ba2base(16, bitarray('1101', 'big')) + self.assertIsInstance(c, str) + + def test_base2ba(self): + _set_default_endian('big') + for c in 'e', 'E', b'e', b'E', u'e', u'E': + a = base2ba(16, c) + self.assertEqual(a.to01(), '1110') + self.assertEqual(a.endian(), 'big') + + def test_explicit(self): + data = [ # n little big + ('', 2, '', ''), + ('1 0 1', 2, '101', '101'), + ('11 01 00', 4, '320', '310'), + ('111 001', 8, '74', '71'), + ('1111 0001', 16, 'f8', 'f1'), + ('11111 00001', 32, '7Q', '7B'), + ('111111 000001', 64, '/g', '/B'), + ] + for bs, n, s_le, s_be in data: + a_le = bitarray(bs, 'little') + a_be = bitarray(bs, 'big') + self.assertEQUAL(base2ba(n, s_le, 'little'), a_le) + self.assertEQUAL(base2ba(n, s_be, 'big'), a_be) + self.assertEqual(ba2base(n, a_le), s_le) + self.assertEqual(ba2base(n, a_be), s_be) + + def test_empty(self): + for 
n in 2, 4, 8, 16, 32, 64: + a = base2ba(n, '') + self.assertEqual(a, bitarray()) + self.assertEqual(ba2base(n, a), '') + + def test_upper(self): + self.assertEqual(base2ba(16, 'F'), bitarray('1111')) + + def test_invalid_characters(self): + for n, s in ((2, '2'), (4, '4'), (8, '8'), (16, 'g'), (32, '8'), + (32, '1'), (32, 'a'), (64, '-'), (64, '_')): + self.assertRaises(ValueError, base2ba, n, s) + + def test_invalid_args(self): + a = bitarray() + self.assertRaises(TypeError, ba2base, None, a) + self.assertRaises(TypeError, base2ba, None, '') + self.assertRaises(TypeError, ba2base, 16.0, a) + self.assertRaises(TypeError, base2ba, 16.0, '') + for i in range(-10, 260): + if i in (2, 4, 8, 16, 32, 64): + continue + self.assertRaises(ValueError, ba2base, i, a) + self.assertRaises(ValueError, base2ba, i, '') + + self.assertRaises(TypeError, ba2base, 32, None) + self.assertRaises(TypeError, base2ba, 32, None) + + def test_binary(self): + a = base2ba(2, '1011') + self.assertEqual(a, bitarray('1011')) + self.assertEqual(ba2base(2, a), '1011') + + for a in self.randombitarrays(): + s = ba2base(2, a) + self.assertEqual(s, a.to01()) + self.assertEQUAL(base2ba(2, s, a.endian()), a) + + def test_quaternary(self): + a = base2ba(4, '0123', 'big') + self.assertEqual(a, bitarray('00 01 10 11')) + self.assertEqual(ba2base(4, a), '0123') + + def test_octal(self): + a = base2ba(8, '0147', 'big') + self.assertEqual(a, bitarray('000 001 100 111')) + self.assertEqual(ba2base(8, a), '0147') + + def test_hexadecimal(self): + a = base2ba(16, 'F61', 'big') + self.assertEqual(a, bitarray('1111 0110 0001')) + self.assertEqual(ba2base(16, a), 'f61') + + for n in range(50): + s = ''.join(choice(hexdigits) for _ in range(n)) + for endian in 'big', 'little': + a = base2ba(16, s, endian) + self.assertEQUAL(a, hex2ba(s, endian)) + self.assertEqual(ba2base(16, a), ba2hex(a)) + + def test_base32(self): + a = base2ba(32, '7SH', 'big') + self.assertEqual(a, bitarray('11111 10010 00111')) + self.assertEqual(ba2base(32, a), '7SH') + + msg = os.urandom(randint(10, 100) * 5) + s = base64.b32encode(msg).decode() + a = base2ba(32, s, 'big') + self.assertEqual(a.tobytes(), msg) + self.assertEqual(ba2base(32, a), s) + + def test_base64(self): + a = base2ba(64, '/jH', 'big') + self.assertEqual(a, bitarray('111111 100011 000111')) + self.assertEqual(ba2base(64, a), '/jH') + + msg = os.urandom(randint(10, 100) * 3) + s = base64.standard_b64encode(msg).decode() + a = base2ba(64, s, 'big') + self.assertEqual(a.tobytes(), msg) + self.assertEqual(ba2base(64, a), s) + + def test_alphabets(self): + for m, n, alpabet in [ + (1, 2, '01'), + (2, 4, '0123'), + (3, 8, '01234567'), + (4, 16, '0123456789abcdef'), + (5, 32, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'), + (6, 64, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' + 'abcdefghijklmnopqrstuvwxyz0123456789+/'), + ]: + self.assertEqual(1 << m, n) + self.assertEqual(len(alpabet), n) + for i, c in enumerate(alpabet): + for endian in 'big', 'little': + self.assertEqual(ba2int(base2ba(n, c, endian)), i) + self.assertEqual(ba2base(n, int2ba(i, m, endian)), c) + + def test_random(self): + for a in self.randombitarrays(): + for m in range(1, 7): + n = 1 << m + if len(a) % m == 0: + s = ba2base(n, a) + b = base2ba(n, s, a.endian()) + self.assertEQUAL(a, b) + self.check_obj(b) + else: + self.assertRaises(ValueError, ba2base, n, a) + + def test_random2(self): + for m in range(1, 7): + n = 1 << m + for length in range(0, 100, m): + a = urandom(length, 'little') + self.assertEQUAL(base2ba(n, ba2base(n, a), 'little'), a) + b = 
bitarray(a, 'big') + self.assertEQUAL(base2ba(n, ba2base(n, b), 'big'), b) + +tests.append(TestsBase) + +# --------------------------------------------------------------------------- + +class VLFTests(unittest.TestCase, Util): + + def test_explicit(self): + for s, bits in [ + (b'\x40', ''), + (b'\x30', '0'), + (b'\x38', '1'), + (b'\x00', '0000'), + (b'\x01', '0001'), + (b'\xe0\x40', '0000 1'), + (b'\x90\x02', '0000 000001'), + (b'\xb5\xa7\x18', '0101 0100111 0011'), + ]: + a = bitarray(bits) + self.assertEqual(vl_encode(a), s) + self.assertEqual(vl_decode(s), a) + + def test_encode(self): + for endian in 'big', 'little': + s = vl_encode(bitarray('001101', endian)) + self.assertIsInstance(s, bytes) + self.assertEqual(s, b'\xd3\x20') + + def test_decode_args(self): + if sys.version_info[0] == 3: + self.assertRaises(TypeError, vl_decode, 'foo') + # item not integer + self.assertRaises(TypeError, vl_decode, iter([b'\x40'])) + + self.assertRaises(TypeError, vl_decode, b'\x40', 'big', 3) + self.assertRaises(ValueError, vl_decode, b'\x40', 'foo') + # these objects are not iterable + for arg in None, 0, 1, 0.0: + self.assertRaises(TypeError, vl_decode, arg) + # these items cannot be interpreted as ints + for item in None, 2.34, Ellipsis: + self.assertRaises(TypeError, vl_decode, iter([0x95, item])) + + b = b'\xd3\x20' + lst = [b, iter(b), memoryview(b)] + if sys.version_info[0] == 3: + lst.append(iter([0xd3, 0x20])) + lst.append(bytearray(b)) + for s in lst: + a = vl_decode(s, endian=self.random_endian()) + self.assertIsInstance(a, bitarray) + self.assertEqual(a, bitarray('0011 01')) + + def test_decode_endian(self): + for endian in 'little', 'big', None: + a = vl_decode(b'\xd3\x20', endian) + self.assertEqual(a, bitarray('0011 01')) + self.assertEqual(a.endian(), + endian if endian else get_default_endian()) + + def test_decode_trailing(self): + for s, bits in [(b'\x40ABC', ''), + (b'\xe0\x40A', '00001')]: + stream = iter(s) + self.assertEqual(vl_decode(stream), bitarray(bits)) + self.assertEqual(next(stream), + b'A' if sys.version_info[0] == 2 else 65) + + def test_decode_ambiguity(self): + for s in b'\x40', b'\x4f', b'\x45': + self.assertEqual(vl_decode(iter(s)), bitarray()) + for s in b'\x1e', b'\x1f': + self.assertEqual(vl_decode(iter(s)), bitarray('111')) + + def test_decode_stream(self): + stream = iter(b'\x40\x30\x38\x40\x2c\xe0\x40\xd3\x20') + for bits in '', '0', '1', '', '11', '0000 1', '0011 01': + self.assertEqual(vl_decode(stream), bitarray(bits)) + + arrays = [urandom(randint(0, 30)) for _ in range(1000)] + stream = iter(b''.join(vl_encode(a) for a in arrays)) + for a in arrays: + self.assertEqual(vl_decode(stream), a) + + def test_decode_errors(self): + # decode empty bits + self.assertRaises(StopIteration, vl_decode, b'') + # invalid number of padding bits + for s in b'\x50', b'\x60', b'\x70': + self.assertRaises(ValueError, vl_decode, s) + self.assertRaises(ValueError, vl_decode, b'\xf0') + # high bit set, but no terminating byte + for s in b'\x80', b'\x80\x80': + self.assertRaises(StopIteration, vl_decode, s) + + def test_decode_error_message(self): + pat = re.compile(r'[\w\s,]+:\s+(\d+)') + for n in range(120): + a = None + s = bytes(bytearray([randint(0x80, 0xef) for _ in range(n)])) + try: + a = vl_decode(s) + except StopIteration as e: + m = pat.match(str(e)) + self.assertEqual(m.group(1), str(n)) + self.assertTrue(a is None) + + @skipIf(sys.version_info[0] == 2) + def test_decode_invalid_stream(self): + N = 100 + s = iter(N * (3 * [0x80] + ['XX']) + ['end.']) + for _ in 
range(N): + a = None + try: + a = vl_decode(s) + except TypeError: + pass + self.assertTrue(a is None) + self.assertEqual(next(s), 'end.') + + def test_explicit_zeros(self): + for n in range(100): + a = zeros(4 + n * 7) + s = n * b'\x80' + b'\x00' + self.assertEqual(vl_encode(a), s) + self.assertEqual(vl_decode(s), a) + + def round_trip(self, a): + s = vl_encode(a) + b = vl_decode(s) + self.check_obj(b) + self.assertEqual(a, b) + LEN_PAD_BITS = 3 + self.assertEqual(len(s), (len(a) + LEN_PAD_BITS + 6) // 7) + + head = ord(s[0]) if sys.version_info[0] == 2 else s[0] + padding = (head & 0x70) >> 4 + self.assertEqual(len(a) + padding, 7 * len(s) - LEN_PAD_BITS) + + def test_range(self): + for n in range(500): + self.round_trip(urandom(n)) + + def test_large(self): + a = urandom(randint(50000, 100000)) + self.round_trip(a) + + def test_random(self): + for a in self.randombitarrays(): + self.round_trip(a) + +tests.append(VLFTests) + +# --------------------------------------------------------------------------- + +class TestsIntegerization(unittest.TestCase, Util): + + def test_ba2int(self): + self.assertEqual(ba2int(bitarray('0')), 0) + self.assertEqual(ba2int(bitarray('1')), 1) + self.assertEqual(ba2int(bitarray('00101', 'big')), 5) + self.assertEqual(ba2int(bitarray('00101', 'little')), 20) + self.assertEqual(ba2int(frozenbitarray('11')), 3) + self.assertRaises(ValueError, ba2int, bitarray()) + self.assertRaises(ValueError, ba2int, frozenbitarray()) + self.assertRaises(TypeError, ba2int, '101') + a = bitarray('111') + b = a.copy() + self.assertEqual(ba2int(a), 7) + # ensure original object wasn't altered + self.assertEQUAL(a, b) + + def test_ba2int_frozen(self): + for a in self.randombitarrays(start=1): + b = frozenbitarray(a) + self.assertEqual(ba2int(b), ba2int(a)) + self.assertEQUAL(a, b) + + def test_ba2int_random(self): + for a in self.randombitarrays(start=1): + b = bitarray(a, 'big') + self.assertEqual(a, b) + self.assertEqual(ba2int(b), int(b.to01(), 2)) + + def test_ba2int_bytes(self): + for n in range(1, 50): + a = urandom(8 * n, self.random_endian()) + c = bytearray(a.tobytes()) + i = 0 + for x in (c if a.endian() == 'big' else reversed(c)): + i <<= 8 + i |= x + self.assertEqual(ba2int(a), i) + + def test_int2ba(self): + self.assertEqual(int2ba(0), bitarray('0')) + self.assertEqual(int2ba(1), bitarray('1')) + self.assertEqual(int2ba(5), bitarray('101')) + self.assertEQUAL(int2ba(6, endian='big'), bitarray('110', 'big')) + self.assertEQUAL(int2ba(6, endian='little'), + bitarray('011', 'little')) + self.assertRaises(TypeError, int2ba, 1.0) + self.assertRaises(TypeError, int2ba, 1, 3.0) + self.assertRaises(ValueError, int2ba, 1, 0) + self.assertRaises(TypeError, int2ba, 1, 10, 123) + self.assertRaises(ValueError, int2ba, 1, 10, 'asd') + # signed integer requires length + self.assertRaises(TypeError, int2ba, 100, signed=True) + + def test_signed(self): + for s, i in [ + ('0', 0), + ('1', -1), + ('00', 0), + ('10', 1), + ('01', -2), + ('11', -1), + ('000', 0), + ('100', 1), + ('010', 2), + ('110', 3), + ('001', -4), + ('101', -3), + ('011', -2), + ('111', -1), + ('00000', 0), + ('11110', 15), + ('00001', -16), + ('11111', -1), + ('00000000 0', 0), + ('11111111 0', 255), + ('00000000 1', -256), + ('11111111 1', -1), + ('00000000 00000000 000000', 0), + ('10010000 11000000 100010', 9 + 3 * 256 + 17 * 2 ** 16), + ('11111111 11111111 111110', 2 ** 21 - 1), + ('00000000 00000000 000001', -2 ** 21), + ('10010000 11000000 100011', -2 ** 21 + + (9 + 3 * 256 + 17 * 2 ** 16)), + ('11111111 
11111111 111111', -1), + ]: + self.assertEqual(ba2int(bitarray(s, 'little'), signed=1), i) + self.assertEqual(ba2int(bitarray(s[::-1], 'big'), signed=1), i) + + len_s = len(bitarray(s)) + self.assertEQUAL(int2ba(i, len_s, 'little', signed=1), + bitarray(s, 'little')) + self.assertEQUAL(int2ba(i, len_s, 'big', signed=1), + bitarray(s[::-1], 'big')) + + def test_int2ba_overflow(self): + self.assertRaises(OverflowError, int2ba, -1) + self.assertRaises(OverflowError, int2ba, -1, 4) + + self.assertRaises(OverflowError, int2ba, 128, 7) + self.assertRaises(OverflowError, int2ba, 64, 7, signed=1) + self.assertRaises(OverflowError, int2ba, -65, 7, signed=1) + + for n in range(1, 20): + self.assertRaises(OverflowError, int2ba, 2 ** n, n) + self.assertRaises(OverflowError, int2ba, 2 ** (n - 1), n, + signed=1) + self.assertRaises(OverflowError, int2ba, -2 ** (n - 1) - 1, n, + signed=1) + + def test_int2ba_length(self): + self.assertRaises(TypeError, int2ba, 0, 1.0) + self.assertRaises(ValueError, int2ba, 0, 0) + self.assertEqual(int2ba(5, length=6, endian='big'), + bitarray('000101')) + for n in range(1, 100): + ab = int2ba(1, n, 'big') + al = int2ba(1, n, 'little') + self.assertEqual(ab.endian(), 'big') + self.assertEqual(al.endian(), 'little') + self.assertEqual(len(ab), n), + self.assertEqual(len(al), n) + self.assertEqual(ab, bitarray((n - 1) * '0') + bitarray('1')) + self.assertEqual(al, bitarray('1') + bitarray((n - 1) * '0')) + + ab = int2ba(0, n, 'big') + al = int2ba(0, n, 'little') + self.assertEqual(len(ab), n) + self.assertEqual(len(al), n) + self.assertEqual(ab, bitarray(n * '0', 'big')) + self.assertEqual(al, bitarray(n * '0', 'little')) + + self.assertEqual(int2ba(2 ** n - 1), bitarray(n * '1')) + self.assertEqual(int2ba(2 ** n - 1, endian='little'), + bitarray(n * '1')) + for endian in 'big', 'little': + self.assertEqual(int2ba(-1, n, endian, signed=True), + bitarray(n * '1')) + + def test_explicit(self): + _set_default_endian('big') + for i, sa in [( 0, '0'), (1, '1'), + ( 2, '10'), (3, '11'), + (25, '11001'), (265, '100001001'), + (3691038, '1110000101001000011110')]: + ab = bitarray(sa, 'big') + al = bitarray(sa[::-1], 'little') + self.assertEQUAL(int2ba(i), ab) + self.assertEQUAL(int2ba(i, endian='big'), ab) + self.assertEQUAL(int2ba(i, endian='little'), al) + self.assertEqual(ba2int(ab), ba2int(al), i) + + def check_round_trip(self, i): + for endian in 'big', 'little': + a = int2ba(i, endian=endian) + self.check_obj(a) + self.assertEqual(a.endian(), endian) + self.assertTrue(len(a) > 0) + # ensure we have no leading zeros + if a.endian == 'big': + self.assertTrue(len(a) == 1 or a.index(1) == 0) + self.assertEqual(ba2int(a), i) + if i > 0: + self.assertEqual(i.bit_length(), len(a)) + # add a few trailing / leading zeros to bitarray + if endian == 'big': + a = zeros(randint(0, 3), endian) + a + else: + a = a + zeros(randint(0, 3), endian) + self.assertEqual(a.endian(), endian) + self.assertEqual(ba2int(a), i) + + def test_many(self): + for i in range(20): + self.check_round_trip(i) + self.check_round_trip(randint(0, 10 ** randint(3, 300))) + + @staticmethod + def twos_complement(i, num_bits): + # https://en.wikipedia.org/wiki/Two%27s_complement + mask = 2 ** (num_bits - 1) + return -(i & mask) + (i & ~mask) + + def test_random_signed(self): + for a in self.randombitarrays(start=1): + i = ba2int(a, signed=True) + b = int2ba(i, len(a), a.endian(), signed=True) + self.assertEQUAL(a, b) + + j = ba2int(a, signed=False) # unsigned + if i >= 0: + self.assertEqual(i, j) + + 
self.assertEqual(i, self.twos_complement(j, len(a))) + +tests.append(TestsIntegerization) + +# --------------------------------------------------------------------------- + +class MixedTests(unittest.TestCase, Util): + + def test_bin(self): + for i in range(100): + s = bin(i) + self.assertEqual(s[:2], '0b') + a = bitarray(s[2:], 'big') + self.assertEqual(ba2int(a), i) + t = '0b%s' % a.to01() + self.assertEqual(t, s) + self.assertEqual(eval(t), i) + + @skipIf(sys.version_info[0] == 2) + def test_oct(self): + for i in range(1000): + s = oct(i) + self.assertEqual(s[:2], '0o') + a = base2ba(8, s[2:], 'big') + self.assertEqual(ba2int(a), i) + t = '0o%s' % ba2base(8, a) + self.assertEqual(t, s) + self.assertEqual(eval(t), i) + + def test_hex(self): + for i in range(1000): + s = hex(i) + self.assertEqual(s[:2], '0x') + a = hex2ba(s[2:], 'big') + self.assertEqual(ba2int(a), i) + t = '0x%s' % ba2hex(a) + self.assertEqual(t, s) + self.assertEqual(eval(t), i) + + def test_bitwise(self): + for a in self.randombitarrays(start=1): + b = urandom(len(a), a.endian()) + aa = a.copy() + bb = b.copy() + i = ba2int(a) + j = ba2int(b) + self.assertEqual(ba2int(a & b), i & j) + self.assertEqual(ba2int(a | b), i | j) + self.assertEqual(ba2int(a ^ b), i ^ j) + + n = randint(0, len(a)) + if a.endian() == 'big': + self.assertEqual(ba2int(a >> n), i >> n) + c = zeros(len(a), 'big') + a + self.assertEqual(ba2int(c << n), i << n) + + self.assertEQUAL(a, aa) + self.assertEQUAL(b, bb) + + def test_bitwise_inplace(self): + for a in self.randombitarrays(start=1): + b = urandom(len(a), a.endian()) + bb = b.copy() + i = ba2int(a) + j = ba2int(b) + c = a.copy() + c &= b + self.assertEqual(ba2int(c), i & j) + c = a.copy() + c |= b + self.assertEqual(ba2int(c), i | j) + c = a.copy() + c ^= b + self.assertEqual(ba2int(c), i ^ j) + self.assertEQUAL(b, bb) + + n = randint(0, len(a)) + if a.endian() == 'big': + c = a.copy() + c >>= n + self.assertEqual(ba2int(c), i >> n) + c = zeros(len(a), 'big') + a + c <<= n + self.assertEqual(ba2int(c), i << n) + + def test_primes(self): # Sieve of Eratosthenes + sieve = bitarray(10000) + sieve.setall(1) + sieve[:2] = 0 # zero and one are not prime + for i in range(2, 100): + if sieve[i]: + sieve[i * i::i] = 0 + # the first 15 primes + self.assertEqual(sieve.search(1, 15), [2, 3, 5, 7, 11, 13, 17, 19, + 23, 29, 31, 37, 41, 43, 47]) + # there are 1229 primes between 1 and 10000 + self.assertEqual(sieve.count(1), 1229) + # there are 119 primes between 4000 and 5000 + self.assertEqual(sieve.count(1, 4000, 5000), 119) + # the 1000th prime is 7919 + self.assertEqual(count_n(sieve, 1000) - 1, 7919) + +tests.append(MixedTests) + +# --------------------------------------------------------------------------- + +class TestsSerialization(unittest.TestCase, Util): + + def test_explicit(self): + for a, b in [ + (bitarray(0, 'little'), b'\x00'), + (bitarray(0, 'big'), b'\x10'), + (bitarray('1', 'little'), b'\x07\x01'), + (bitarray('1', 'big'), b'\x17\x80'), + (bitarray('11110000', 'little'), b'\x00\x0f'), + (bitarray('11110000', 'big'), b'\x10\xf0'), + ]: + self.assertEqual(serialize(a), b) + self.assertEQUAL(deserialize(b), a) + + def test_zeros_and_ones(self): + for endian in 'little', 'big': + for n in range(50): + a = zeros(n, endian) + s = serialize(a) + self.assertIsInstance(s, bytes) + self.assertEqual(s[1:], b'\0' * a.nbytes) + self.assertEQUAL(a, deserialize(s)) + a.setall(1) + self.assertEQUAL(a, deserialize(serialize(a))) + + def test_serialize_args(self): + for x in '0', 0, 1, b'\x00', 0.0, [0, 
1], bytearray([0]): + self.assertRaises(TypeError, serialize, x) + # no arguments + self.assertRaises(TypeError, serialize) + # too many arguments + self.assertRaises(TypeError, serialize, bitarray(), 1) + + # On Python 2, bytes(x) is the string representation of x, + # so x can almost be of any type. And the string representation + # of these object will cause ValueErrors in bitarray(x). + # Instead of adding special cases in deserialize() or here, + # we decided to skip these tests for Python 2 altogether. + @skipIf(sys.version_info[0] == 2) + def test_deserialize_args(self): + for x in 0, 1, False, True, None, u'', u'01', 0.0, [0, 0.6]: + self.assertRaises(TypeError, deserialize, x) + self.assertRaises(TypeError, deserialize, b'\x00', 1) + self.assertRaises(ValueError, deserialize, [0, 256]) + + b = b'\x03\x06' + for s in b, bytearray(b), memoryview(b), iter(b): + a = deserialize(s) + self.assertEqual(a.endian(), 'little') + self.assertEqual(a, bitarray('01100')) + + def test_invalid_bytes(self): + self.assertRaises(ValueError, deserialize, b'') + + def check_msg(b): + # Python 2: PyErr_Format() seems to handle "0x%02x" + # incorrectly. E.g. instead of "0x01", I get "0x1" + if sys.version_info[0] == 3: + msg = "invalid header byte: 0x%02x" % b[0] + self.assertRaisesMessage(ValueError, msg, deserialize, b) + + for i in range(256): + b = bytes(bytearray([i])) + if i == 0 or i == 16: + self.assertEqual(deserialize(b), bitarray()) + else: + self.assertRaises(ValueError, deserialize, b) + check_msg(b) + + b += b'\0' + if i < 32 and i % 16 < 8: + self.assertEqual(deserialize(b), zeros(8 - i % 8)) + else: + self.assertRaises(ValueError, deserialize, b) + check_msg(b) + + def test_bits_ignored(self): + # the unused padding bits (with the last bytes) are ignored + for b, a in [ + (b'\x07\x01', bitarray('1', 'little')), + (b'\x07\x03', bitarray('1', 'little')), + (b'\x07\xff', bitarray('1', 'little')), + (b'\x17\x80', bitarray('1', 'big')), + (b'\x17\xc0', bitarray('1', 'big')), + (b'\x17\xff', bitarray('1', 'big')), + ]: + self.assertEQUAL(deserialize(b), a) + + def test_random(self): + for a in self.randombitarrays(): + b = deserialize(serialize(a)) + self.assertEQUAL(a, b) + self.check_obj(b) + +tests.append(TestsSerialization) + +# --------------------------------------------------------------------------- + +class TestsHuffman(unittest.TestCase): + + def test_simple(self): + freq = {0: 10, 'as': 2, None: 1.6} + code = huffman_code(freq) + self.assertEqual(len(code), 3) + self.assertEqual(len(code[0]), 1) + self.assertEqual(len(code['as']), 2) + self.assertEqual(len(code[None]), 2) + + def test_endianness(self): + freq = {'A': 10, 'B': 2, 'C': 5} + for endian in 'big', 'little': + code = huffman_code(freq, endian) + self.assertEqual(len(code), 3) + for v in code.values(): + self.assertEqual(v.endian(), endian) + + def test_wrong_arg(self): + self.assertRaises(TypeError, huffman_code, [('a', 1)]) + self.assertRaises(TypeError, huffman_code, 123) + self.assertRaises(TypeError, huffman_code, None) + # cannot compare 'a' with 1 + self.assertRaises(TypeError, huffman_code, {'A': 'a', 'B': 1}) + # frequency map cannot be empty + self.assertRaises(ValueError, huffman_code, {}) + + def test_one_symbol(self): + cnt = {'a': 1} + code = huffman_code(cnt) + self.assertEqual(code, {'a': bitarray('0')}) + for n in range(4): + msg = n * ['a'] + a = bitarray() + a.encode(code, msg) + self.assertEqual(a.to01(), n * '0') + self.assertEqual(a.decode(code), msg) + a.append(1) + self.assertRaises(ValueError, 
a.decode, code) + self.assertRaises(ValueError, list, a.iterdecode(code)) + + def check_tree(self, code): + n = len(code) + tree = decodetree(code) + self.assertEqual(tree.todict(), code) + # ensure tree has 2n-1 nodes (n symbol nodes and n-1 internal nodes) + self.assertEqual(tree.nodes(), 2 * n - 1) + # a proper Huffman tree is complete + self.assertTrue(tree.complete()) + + def test_balanced(self): + n = 6 + freq = {} + for i in range(2 ** n): + freq[i] = 1 + code = huffman_code(freq) + self.assertEqual(len(code), 2 ** n) + self.assertTrue(all(len(v) == n for v in code.values())) + self.check_tree(code) + + def test_unbalanced(self): + N = 27 + freq = {} + for i in range(N): + freq[i] = 2 ** i + code = huffman_code(freq) + self.assertEqual(len(code), N) + for i in range(N): + self.assertEqual(len(code[i]), N - (1 if i <= 1 else i)) + self.check_tree(code) + + def test_counter(self): + message = 'the quick brown fox jumps over the lazy dog.' + code = huffman_code(Counter(message)) + a = bitarray() + a.encode(code, message) + self.assertEqual(''.join(a.decode(code)), message) + self.check_tree(code) + + def test_random_list(self): + plain = [randint(0, 100) for _ in range(500)] + code = huffman_code(Counter(plain)) + a = bitarray() + a.encode(code, plain) + self.assertEqual(a.decode(code), plain) + self.check_tree(code) + + def test_random_freq(self): + for n in 2, 3, 5, randint(50, 200): + # create Huffman code for n symbols + code = huffman_code({i: random() for i in range(n)}) + self.check_tree(code) + +tests.append(TestsHuffman) + +# --------------------------------------------------------------------------- + +class TestsCanonicalHuffman(unittest.TestCase, Util): + + def test_basic(self): + plain = bytearray(b'the quick brown fox jumps over the lazy dog.') + chc, count, symbol = canonical_huffman(Counter(plain)) + self.assertIsInstance(chc, dict) + self.assertIsInstance(count, list) + self.assertIsInstance(symbol, list) + a = bitarray() + a.encode(chc, plain) + self.assertEqual(bytearray(a.iterdecode(chc)), plain) + self.assertEqual(bytearray(canonical_decode(a, count, symbol)), plain) + + def test_canonical_huffman_errors(self): + self.assertRaises(TypeError, canonical_huffman, []) + # frequency map cannot be empty + self.assertRaises(ValueError, canonical_huffman, {}) + self.assertRaises(TypeError, canonical_huffman) + cnt = huffman_code(Counter('aabc')) + self.assertRaises(TypeError, canonical_huffman, cnt, 'a') + + def test_one_symbol(self): + cnt = {'a': 1} + chc, count, symbol = canonical_huffman(cnt) + self.assertEqual(chc, {'a': bitarray('0')}) + self.assertEqual(count, [0, 1]) + self.assertEqual(symbol, ['a']) + for n in range(4): + msg = n * ['a'] + a = bitarray() + a.encode(chc, msg) + self.assertEqual(a.to01(), n * '0') + self.assertEqual(list(canonical_decode(a, count, symbol)), msg) + a.append(1) + self.assertRaises(ValueError, list, + canonical_decode(a, count, symbol)) + + def test_canonical_decode_errors(self): + a = bitarray('1101') + s = ['a'] + # bitarray not of bitarray type + self.assertRaises(TypeError, canonical_decode, '11', [0, 1], s) + # count not sequence + self.assertRaises(TypeError, canonical_decode, a, {0, 1}, s) + # count element not an int + self.assertRaises(TypeError, canonical_decode, a, [0, 1.0], s) + # count element overflow + self.assertRaises(OverflowError, canonical_decode, a, [0, 1 << 65], s) + # negative count + self.assertRaises(ValueError, canonical_decode, a, [0, -1], s) + # count list too long + self.assertRaises(ValueError, 
canonical_decode, a, 32 * [0], s) + # symbol not sequence + self.assertRaises(TypeError, canonical_decode, a, [0, 1], 43) + + symbol = ['a', 'b', 'c', 'd'] + # sum(count) != len(symbol) + self.assertRaisesMessage(ValueError, + "sum(count) = 3, but len(symbol) = 4", + canonical_decode, a, [0, 1, 2], symbol) + # count[i] > 1 << i + self.assertRaisesMessage(ValueError, + "count[2] cannot be negative or larger than 4, got 5", + canonical_decode, a, [0, 2, 5], symbol) + + def test_canonical_decode_simple(self): + # symbols can be anything, they do not even have to be hashable here + cnt = [0, 0, 4] + s = ['A', 42, [1.2-3.7j, 4j], {'B': 6}] + a = bitarray('00 01 10 11') + # count can be a list + self.assertEqual(list(canonical_decode(a, cnt, s)), s) + # count can also be a tuple (any sequence object in fact) + self.assertEqual(list(canonical_decode(a, (0, 0, 4), s)), s) + self.assertEqual(list(canonical_decode(7 * a, cnt, s)), 7 * s) + # the count list may have extra 0's at the end (but not too many) + count = [0, 0, 4, 0, 0, 0, 0, 0] + self.assertEqual(list(canonical_decode(a, count, s)), s) + # the element count[0] is unused + self.assertEqual(list(canonical_decode(a, [-47, 0, 4], s)), s) + # in fact it can be anything, as it is entirely ignored + self.assertEqual(list(canonical_decode(a, [s, 0, 4], s)), s) + + # the symbol argument can be any sequence object + s = [65, 66, 67, 98] + self.assertEqual(list(canonical_decode(a, cnt, s)), s) + self.assertEqual(list(canonical_decode(a, cnt, bytearray(s))), s) + self.assertEqual(list(canonical_decode(a, cnt, tuple(s))), s) + if sys.version_info[0] == 3: + self.assertEqual(list(canonical_decode(a, cnt, bytes(s))), s) + # Implementation Note: + # The symbol can even be an iterable. This was done because we + # want to use PySequence_Fast in order to convert sequence + # objects (like bytes and bytearray) to a list. This is faster + # as all objects are now elements in an array of pointers (as + # opposed to having the object's __getitem__ method called on + # every iteration). 
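+        # hence the symbol argument below may also be a plain iterator: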
+ self.assertEqual(list(canonical_decode(a, cnt, iter(s))), s) + + def test_canonical_decode_empty(self): + a = bitarray() + # count and symbol are empty, ok because sum([]) == len([]) + self.assertEqual(list(canonical_decode(a, [], [])), []) + a.append(0) + self.assertRaisesMessage(ValueError, "reached end of bitarray", + list, canonical_decode(a, [], [])) + a = bitarray(31 * '0') + self.assertRaisesMessage(ValueError, "ran out of codes", + list, canonical_decode(a, [], [])) + + def test_canonical_decode_one_symbol(self): + symbols = ['A'] + count = [0, 1] + a = bitarray('000') + self.assertEqual(list(canonical_decode(a, count, symbols)), + 3 * symbols) + a.append(1) + a.extend(bitarray(10 * '0')) + iterator = canonical_decode(a, count, symbols) + self.assertRaisesMessage(ValueError, "reached end of bitarray", + list, iterator) + + a.extend(bitarray(20 * '0')) + iterator = canonical_decode(a, count, symbols) + self.assertRaisesMessage(ValueError, "ran out of codes", + list, iterator) + + def test_canonical_decode_large(self): + with open(__file__, 'rb') as f: + msg = bytearray(f.read()) + self.assertTrue(len(msg) > 50000) + codedict, count, symbol = canonical_huffman(Counter(msg)) + a = bitarray() + a.encode(codedict, msg) + self.assertEqual(bytearray(canonical_decode(a, count, symbol)), msg) + self.check_code(codedict, count, symbol) + + def test_canonical_decode_symbol_change(self): + msg = bytearray(b"Hello World!") + codedict, count, symbol = canonical_huffman(Counter(msg)) + self.check_code(codedict, count, symbol) + a = bitarray() + a.encode(codedict, 10 * msg) + + it = canonical_decode(a, count, symbol) + def decode_one_msg(): + return bytearray(next(it) for _ in range(len(msg))) + + self.assertEqual(decode_one_msg(), msg) + symbol[symbol.index(ord("l"))] = ord("k") + self.assertEqual(decode_one_msg(), bytearray(b"Hekko Workd!")) + del symbol[:] + self.assertRaises(IndexError, decode_one_msg) + + def ensure_sorted(self, chc, symbol): + # ensure codes are sorted + for i in range(len(symbol) - 1): + a = chc[symbol[i]] + b = chc[symbol[i + 1]] + self.assertTrue(ba2int(a) < ba2int(b)) + + def ensure_consecutive(self, chc, count, symbol): + first = 0 + for nbits, cnt in enumerate(count): + for i in range(first, first + cnt - 1): + # ensure two consecutive codes (with same bit length) have + # consecutive integer values + a = chc[symbol[i]] + b = chc[symbol[i + 1]] + self.assertTrue(len(a) == len(b) == nbits) + self.assertEqual(ba2int(a) + 1, ba2int(b)) + first += cnt + + def ensure_count(self, chc, count): + # ensure count list corresponds to length counts from codedict + maxbits = max(len(a) for a in chc.values()) + my_count = (maxbits + 1) * [0] + for a in chc.values(): + self.assertEqual(a.endian(), 'big') + my_count[len(a)] += 1 + self.assertEqual(my_count, list(count)) + + def ensure_complete(self, count): + # ensure code is complete and not oversubscribed + maxbits = len(count) + x = sum(count[i] << (maxbits - i) for i in range(1, maxbits)) + self.assertEqual(x, 1 << maxbits) + + def ensure_complete_2(self, chc): + # ensure code is complete + dt = decodetree(chc) + self.assertTrue(dt.complete()) + + def ensure_round_trip(self, chc, count, symbol): + # create a short test message, encode and decode + msg = [choice(symbol) for _ in range(10)] + a = bitarray() + a.encode(chc, msg) + it = canonical_decode(a, count, symbol) + # the iterator holds a reference to the bitarray and symbol list + del a, count, symbol + self.assertEqual(type(it).__name__, 'canonical_decodeiter') + 
self.assertEqual(list(it), msg) + + def check_code(self, chc, count, symbol): + self.assertTrue(len(chc) == len(symbol) == sum(count)) + self.assertEqual(count[0], 0) # no codes have length 0 + self.assertTrue(set(chc) == set(symbol)) + # the code of the last symbol has all 1 bits + self.assertTrue(chc[symbol[-1]].all()) + # the code of the first symbol starts with bit 0 + self.assertFalse(chc[symbol[0]][0]) + + self.ensure_sorted(chc, symbol) + self.ensure_consecutive(chc, count, symbol) + self.ensure_count(chc, count) + self.ensure_complete(count) + self.ensure_complete_2(chc) + self.ensure_round_trip(chc, count, symbol) + + def test_simple_counter(self): + plain = bytearray(b'the quick brown fox jumps over the lazy dog.') + cnt = Counter(plain) + code, count, symbol = canonical_huffman(cnt) + self.check_code(code, count, symbol) + self.check_code(code, tuple(count), tuple(symbol)) + self.check_code(code, bytearray(count), symbol) + self.check_code(code, count, bytearray(symbol)) + + def test_balanced(self): + n = 7 + freq = {} + for i in range(2 ** n): + freq[i] = 1 + code, count, sym = canonical_huffman(freq) + self.assertEqual(len(code), 2 ** n) + self.assertTrue(all(len(v) == n for v in code.values())) + self.check_code(code, count, sym) + + def test_unbalanced(self): + n = 29 + freq = {} + for i in range(n): + freq[i] = 2 ** i + code = canonical_huffman(freq)[0] + self.assertEqual(len(code), n) + for i in range(n): + self.assertEqual(len(code[i]), n - (1 if i <= 1 else i)) + self.check_code(*canonical_huffman(freq)) + + def test_random_freq(self): + for n in 2, 3, 5, randint(50, 200): + freq = {i: random() for i in range(n)} + self.check_code(*canonical_huffman(freq)) + + +tests.append(TestsCanonicalHuffman) + +# --------------------------------------------------------------------------- + +def run(verbosity=1): + import bitarray + + print('bitarray.util is installed in: %s' % os.path.dirname(__file__)) + print('bitarray version: %s' % bitarray.__version__) + print('Python version: %s' % sys.version) + + suite = unittest.TestSuite() + for cls in tests: + suite.addTest(unittest.makeSuite(cls)) + + runner = unittest.TextTestRunner(verbosity=verbosity) + return runner.run(suite) + + +if __name__ == '__main__': + run() diff --git a/env/lib/python3.8/site-packages/bitarray/util.py b/env/lib/python3.8/site-packages/bitarray/util.py new file mode 100644 index 00000000..e348fa55 --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray/util.py @@ -0,0 +1,458 @@ +# Copyright (c) 2019 - 2022, Ilan Schnell; All Rights Reserved +# bitarray is published under the PSF license. +# +# Author: Ilan Schnell +""" +Useful utilities for working with bitarrays. 
+""" +from __future__ import absolute_import + +import os +import sys + +from bitarray import bitarray, bits2bytes + +from bitarray._util import ( + count_n, rindex, parity, count_and, count_or, count_xor, subset, + serialize, ba2hex, _hex2ba, ba2base, _base2ba, vl_encode, _vl_decode, + canonical_decode, +) + +__all__ = [ + 'zeros', 'urandom', 'pprint', 'make_endian', 'rindex', 'strip', 'count_n', + 'parity', 'count_and', 'count_or', 'count_xor', 'subset', + 'ba2hex', 'hex2ba', 'ba2base', 'base2ba', 'ba2int', 'int2ba', + 'serialize', 'deserialize', 'vl_encode', 'vl_decode', + 'huffman_code', 'canonical_huffman', 'canonical_decode', +] + + +_is_py2 = bool(sys.version_info[0] == 2) + + +def zeros(__length, endian=None): + """zeros(length, /, endian=None) -> bitarray + +Create a bitarray of length, with all values 0, and optional +endianness, which may be 'big', 'little'. +""" + if not isinstance(__length, (int, long) if _is_py2 else int): + raise TypeError("int expected, got '%s'" % type(__length).__name__) + + a = bitarray(__length, endian) + a.setall(0) + return a + + +def urandom(__length, endian=None): + """urandom(length, /, endian=None) -> bitarray + +Return a bitarray of `length` random bits (uses `os.urandom`). +""" + a = bitarray(0, endian) + a.frombytes(os.urandom(bits2bytes(__length))) + del a[__length:] + return a + + +def pprint(__a, stream=None, group=8, indent=4, width=80): + """pprint(bitarray, /, stream=None, group=8, indent=4, width=80) + +Prints the formatted representation of object on `stream` (which defaults +to `sys.stdout`). By default, elements are grouped in bytes (8 elements), +and 8 bytes (64 elements) per line. +Non-bitarray objects are printed by the standard library +function `pprint.pprint()`. +""" + if stream is None: + stream = sys.stdout + + if not isinstance(__a, bitarray): + import pprint as _pprint + _pprint.pprint(__a, stream=stream, indent=indent, width=width) + return + + group = int(group) + if group < 1: + raise ValueError('group must be >= 1') + indent = int(indent) + if indent < 0: + raise ValueError('indent must be >= 0') + width = int(width) + if width <= indent: + raise ValueError('width must be > %d (indent)' % indent) + + gpl = (width - indent) // (group + 1) # groups per line + epl = group * gpl # elements per line + if epl == 0: + epl = width - indent - 2 + type_name = type(__a).__name__ + # here 4 is len("'()'") + multiline = len(type_name) + 4 + len(__a) + len(__a) // group >= width + if multiline: + quotes = "'''" + elif __a: + quotes = "'" + else: + quotes = "" + + stream.write("%s(%s" % (type_name, quotes)) + for i, b in enumerate(__a): + if multiline and i % epl == 0: + stream.write('\n%s' % (indent * ' ')) + if i % group == 0 and i % epl != 0: + stream.write(' ') + stream.write(str(b)) + + if multiline: + stream.write('\n') + + stream.write("%s)\n" % quotes) + stream.flush() + + +def make_endian(__a, endian): + """make_endian(bitarray, /, endian) -> bitarray + +When the endianness of the given bitarray is different from `endian`, +return a new bitarray, with endianness `endian` and the same elements +as the original bitarray. +Otherwise (endianness is already `endian`) the original bitarray is returned +unchanged. 
+""" + if not isinstance(__a, bitarray): + raise TypeError("bitarray expected, got '%s'" % type(__a).__name__) + + if __a.endian() == endian: + return __a + + return bitarray(__a, endian) + + +def strip(__a, mode='right'): + """strip(bitarray, /, mode='right') -> bitarray + +Return a new bitarray with zeros stripped from left, right or both ends. +Allowed values for mode are the strings: `left`, `right`, `both` +""" + if not isinstance(__a, bitarray): + raise TypeError("bitarray expected, got '%s'" % type(__a).__name__) + if not isinstance(mode, str): + raise TypeError("str expected for mode, got '%s'" % type(__a).__name__) + if mode not in ('left', 'right', 'both'): + raise ValueError("mode must be 'left', 'right' or 'both', got %r" % + mode) + first = 0 + if mode in ('left', 'both'): + try: + first = __a.index(1) + except ValueError: + return __a[:0] + + last = len(__a) - 1 + if mode in ('right', 'both'): + try: + last = rindex(__a) + except ValueError: + return __a[:0] + + return __a[first:last + 1] + + +def hex2ba(__s, endian=None): + """hex2ba(hexstr, /, endian=None) -> bitarray + +Bitarray of hexadecimal representation. hexstr may contain any number +(including odd numbers) of hex digits (upper or lower case). +""" + if isinstance(__s, unicode if _is_py2 else str): + __s = __s.encode('ascii') + if not isinstance(__s, bytes): + raise TypeError("str expected, got '%s'" % type(__s).__name__) + + a = bitarray(4 * len(__s), endian) + _hex2ba(a, __s) + return a + + +def base2ba(__n, __s, endian=None): + """base2ba(n, asciistr, /, endian=None) -> bitarray + +Bitarray of the base `n` ASCII representation. +Allowed values for `n` are 2, 4, 8, 16, 32 and 64. +For `n=16` (hexadecimal), `hex2ba()` will be much faster, as `base2ba()` +does not take advantage of byte level operations. +For `n=32` the RFC 4648 Base32 alphabet is used, and for `n=64` the +standard base 64 alphabet is used. +""" + if isinstance(__s, unicode if _is_py2 else str): + __s = __s.encode('ascii') + if not isinstance(__s, bytes): + raise TypeError("str expected, got '%s'" % type(__s).__name__) + + a = bitarray(_base2ba(__n) * len(__s), endian) + _base2ba(__n, a, __s) + return a + + +def ba2int(__a, signed=False): + """ba2int(bitarray, /, signed=False) -> int + +Convert the given bitarray to an integer. +The bit-endianness of the bitarray is respected. +`signed` indicates whether two's complement is used to represent the integer. +""" + if not isinstance(__a, bitarray): + raise TypeError("bitarray expected, got '%s'" % type(__a).__name__) + length = len(__a) + if length == 0: + raise ValueError("non-empty bitarray expected") + + le = bool(__a.endian() == 'little') + if __a.padbits: + pad = zeros(__a.padbits, __a.endian()) + __a = __a + pad if le else pad + __a + + if _is_py2: + a = bitarray(__a, 'big') + if le: + a.reverse() + res = int(ba2hex(a), 16) + else: # py3 + res = int.from_bytes(__a.tobytes(), byteorder=__a.endian()) + + if signed and res >= 1 << (length - 1): + res -= 1 << length + return res + + +def int2ba(__i, length=None, endian=None, signed=False): + """int2ba(int, /, length=None, endian=None, signed=False) -> bitarray + +Convert the given integer to a bitarray (with given endianness, +and no leading (big-endian) / trailing (little-endian) zeros), unless +the `length` of the bitarray is provided. An `OverflowError` is raised +if the integer is not representable with the given number of bits. +`signed` determines whether two's complement is used to represent the integer, +and requires `length` to be provided. 
+""" + if not isinstance(__i, (int, long) if _is_py2 else int): + raise TypeError("int expected, got '%s'" % type(__i).__name__) + if length is not None: + if not isinstance(length, int): + raise TypeError("int expected for length") + if length <= 0: + raise ValueError("length must be > 0") + if signed and length is None: + raise TypeError("signed requires length") + + if __i == 0: + # there are special cases for 0 which we'd rather not deal with below + return zeros(length or 1, endian) + + if signed: + m = 1 << (length - 1) + if not (-m <= __i < m): + raise OverflowError("signed integer not in range(%d, %d), " + "got %d" % (-m, m, __i)) + if __i < 0: + __i += 1 << length + else: # unsigned + if __i < 0: + raise OverflowError("unsigned integer not positive, got %d" % __i) + if length and __i >= (1 << length): + raise OverflowError("unsigned integer not in range(0, %d), " + "got %d" % (1 << length, __i)) + + a = bitarray(0, endian) + le = bool(a.endian() == 'little') + if _is_py2: + s = hex(__i)[2:].rstrip('L') + a.extend(hex2ba(s, 'big')) + if le: + a.reverse() + else: # py3 + b = __i.to_bytes(bits2bytes(__i.bit_length()), byteorder=a.endian()) + a.frombytes(b) + + if length is None: + return strip(a, 'right' if le else 'left') + + la = len(a) + if la > length: + a = a[:length] if le else a[-length:] + if la < length: + pad = zeros(length - la, endian) + a = a + pad if le else pad + a + assert len(a) == length + return a + + +def deserialize(__b): + """deserialize(bytes, /) -> bitarray + +Return a bitarray given a bytes-like representation such as returned +by `serialize()`. +""" + if isinstance(__b, int): # as bytes(n) will return n NUL bytes + raise TypeError("cannot convert 'int' object to bytes") + if not isinstance(__b, bytes): + __b = bytes(__b) + if len(__b) == 0: + raise ValueError("non-empty bytes expected") + + if _is_py2: + head = ord(__b[0]) + if head >= 32 or head % 16 >= 8: + raise ValueError('invalid header byte: 0x%02x' % head) + try: + return bitarray(__b) + except TypeError: + raise ValueError('invalid header byte: 0x%02x' % __b[0]) + + +def vl_decode(__stream, endian=None): + """vl_decode(stream, /, endian=None) -> bitarray + +Decode binary stream (an integer iterator, or bytes-like object), and return +the decoded bitarray. This function consumes only one bitarray and leaves +the remaining stream untouched. `StopIteration` is raised when no +terminating byte is found. +Use `vl_encode()` for encoding. +""" + a = bitarray(32, endian) + _vl_decode(iter(__stream), a) + return a + +# ------------------------------ Huffman coding ----------------------------- + +def _huffman_tree(__freq_map): + """_huffman_tree(dict, /) -> Node + +Given a dict mapping symbols to their frequency, construct a Huffman tree +and return its root node. +""" + from heapq import heappush, heappop + + class Node(object): + """ + A Node instance will either have a 'symbol' (leaf node) or + a 'child' (a tuple with both children) attribute. + The 'freq' attribute will always be present. 
+        """
+        def __lt__(self, other):
+            # heapq needs to be able to compare the nodes
+            return self.freq < other.freq
+
+    minheap = []
+    # create all leaf nodes and push them onto the queue
+    for sym, f in __freq_map.items():
+        leaf = Node()
+        leaf.symbol = sym
+        leaf.freq = f
+        heappush(minheap, leaf)
+
+    # repeat the process until only one node remains
+    while len(minheap) > 1:
+        # take the two nodes with lowest frequencies from the queue
+        # to construct a new node and push it onto the queue
+        parent = Node()
+        parent.child = heappop(minheap), heappop(minheap)
+        parent.freq = parent.child[0].freq + parent.child[1].freq
+        heappush(minheap, parent)
+
+    # the single remaining node is the root of the Huffman tree
+    return minheap[0]
+
+
+def huffman_code(__freq_map, endian=None):
+    """huffman_code(dict, /, endian=None) -> dict
+
+Given a frequency map, a dictionary mapping symbols to their frequency,
+calculate the Huffman code, i.e. a dict mapping those symbols to
+bitarrays (with given endianness). Note that the symbols are not limited
+to being strings. Symbols may be any hashable object (such as `None`).
+"""
+    if not isinstance(__freq_map, dict):
+        raise TypeError("dict expected, got '%s'" % type(__freq_map).__name__)
+
+    b0 = bitarray('0', endian)
+    b1 = bitarray('1', endian)
+
+    if len(__freq_map) < 2:
+        if len(__freq_map) == 0:
+            raise ValueError("cannot create Huffman code with no symbols")
+        # Only one symbol: Normally if only one symbol is given, the code
+        # could be represented with zero bits. However here, the code should
+        # be at least one bit for the .encode() and .decode() methods to work.
+        # So we represent the symbol by a single code of length one, in
+        # particular one 0 bit. This is an incomplete code, since if a 1 bit
+        # is received, it has no meaning and will result in an error.
+        return {list(__freq_map)[0]: b0}
+
+    result = {}
+
+    def traverse(nd, prefix=bitarray(0, endian)):
+        try:  # leaf
+            result[nd.symbol] = prefix
+        except AttributeError:  # parent, so traverse each of the children
+            traverse(nd.child[0], prefix + b0)
+            traverse(nd.child[1], prefix + b1)
+
+    traverse(_huffman_tree(__freq_map))
+    return result
+
+
+def canonical_huffman(__freq_map):
+    """canonical_huffman(dict, /) -> tuple
+
+Given a frequency map, a dictionary mapping symbols to their frequency,
+calculate the canonical Huffman code. Returns a tuple containing:
+
+0. the canonical Huffman code as a dict mapping symbols to bitarrays
+1. a list containing the number of symbols of each code length
+2. a list of symbols in canonical order
+
+Note: the two lists may be used as input for `canonical_decode()`.
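+
+For example (single-symbol case, matching the test suite)::
+
+    canonical_huffman({'a': 1})
+    # -> ({'a': bitarray('0')}, [0, 1], ['a'])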
+""" + if not isinstance(__freq_map, dict): + raise TypeError("dict expected, got '%s'" % type(__freq_map).__name__) + + if len(__freq_map) < 2: + if len(__freq_map) == 0: + raise ValueError("cannot create Huffman code with no symbols") + # Only one symbol: see note above in huffman_code() + sym = list(__freq_map)[0] + return {sym: bitarray('0', 'big')}, [0, 1], [sym] + + code_length = {} # map symbols to their code length + + def traverse(nd, length=0): + # traverse the Huffman tree, but (unlike in huffman_code() above) we + # now just simply record the length for reaching each symbol + try: # leaf + code_length[nd.symbol] = length + except AttributeError: # parent, so traverse each of the children + traverse(nd.child[0], length + 1) + traverse(nd.child[1], length + 1) + + traverse(_huffman_tree(__freq_map)) + + # we now have a mapping of symbols to their code length, + # which is all we need + + table = sorted(code_length.items(), key=lambda item: (item[1], item[0])) + + maxbits = max(item[1] for item in table) + codedict = {} + count = (maxbits + 1) * [0] + + code = 0 + for i, (sym, length) in enumerate(table): + codedict[sym] = int2ba(code, length, 'big') + count[length] += 1 + if i + 1 < len(table): + code += 1 + code <<= table[i + 1][1] - length + + return codedict, count, [item[0] for item in table] diff --git a/env/lib/python3.8/site-packages/bitarray/util.pyi b/env/lib/python3.8/site-packages/bitarray/util.pyi new file mode 100644 index 00000000..bac0c64a --- /dev/null +++ b/env/lib/python3.8/site-packages/bitarray/util.pyi @@ -0,0 +1,63 @@ +# Copyright (c) 2021 - 2022, Ilan Schnell; All Rights Reserved + +from collections import Counter +from collections.abc import Iterable, Iterator, Sequence +from typing import Any, AnyStr, BinaryIO, Optional, Union + +from bitarray import bitarray, BytesLike, CodeDict + + +FreqMap = Union[Counter[int], dict[Any, Union[int, float]]] + + +def zeros(length: int, endian: Optional[str] = ...) -> bitarray: ... +def urandom(length: int, endian: Optional[str] = ...) -> bitarray: ... +def pprint(a: Any, stream: BinaryIO = ..., + group: int = ..., + indent: int = ..., + width: int = ...) -> None: ... + +def make_endian(a: bitarray, endian: str) -> bitarray: ... +def rindex(a: bitarray, + value: int = ..., + start: int = ..., + stop: int = ...) -> int: ... + +def strip(a: bitarray, mode: str = ...) -> bitarray: ... + +def count_n(a: bitarray, + n: int, + value: int = ...) -> int: ... + +def parity(a: bitarray) -> int: ... +def count_and(a: bitarray, b: bitarray) -> int: ... +def count_or(a: bitarray, b: bitarray) -> int: ... +def count_xor(a: bitarray, b: bitarray) -> int: ... +def subset(a: bitarray, b: bitarray) -> bool: ... + +def ba2hex(a: bitarray) -> str: ... +def hex2ba(s: AnyStr, endian: Optional[str] = ...) -> bitarray: ... +def ba2base(n: int, a: bitarray) -> str: ... +def base2ba(n: int, + s: AnyStr, + endian: Optional[str] = ...) -> bitarray: ... + +def ba2int(a: bitarray, signed: int = ...) -> int: ... +def int2ba(i: int, + length: int = ..., + endian: str = ..., + signed: int = ...) -> bitarray: ... + +def serialize(a: bitarray) -> bytes: ... +def deserialize(b: BytesLike) -> bitarray: ... +def vl_encode(a: bitarray) -> bytes: ... +def vl_decode(stream: BytesLike, + endian: Optional[str] = ...) -> bitarray: ... + +def _huffman_tree(freq_map: FreqMap) -> Any: ... +def huffman_code(freq_map: FreqMap, + endian: Optional[str] = ...) -> CodeDict: ... +def canonical_huffman(Freq_Map) -> tuple[CodeDict, list, list]: ... 
+def canonical_decode(a: bitarray, + count: Sequence[int], + symbol: Iterable[Any]) -> Iterator: ... diff --git a/env/lib/python3.8/site-packages/black-22.10.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/black-22.10.0.dist-info/METADATA b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/METADATA new file mode 100644 index 00000000..d8d5b33b --- /dev/null +++ b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/METADATA @@ -0,0 +1,1437 @@ +Metadata-Version: 2.1 +Name: black +Version: 22.10.0 +Summary: The uncompromising code formatter. +Project-URL: Changelog, https://github.com/psf/black/blob/main/CHANGES.md +Project-URL: Homepage, https://github.com/psf/black +Author-email: Łukasz Langa +License-File: AUTHORS.md +License-File: LICENSE +Keywords: automation,autopep8,formatter,gofmt,pyfmt,rustfmt,yapf +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Console +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Software Development :: Quality Assurance +Requires-Python: >=3.7 +Requires-Dist: click>=8.0.0 +Requires-Dist: mypy-extensions>=0.4.3 +Requires-Dist: pathspec>=0.9.0 +Requires-Dist: platformdirs>=2 +Requires-Dist: tomli>=1.1.0; python_full_version < '3.11.0a7' +Requires-Dist: typed-ast>=1.4.2; python_version < '3.8' and implementation_name == 'cpython' +Requires-Dist: typing-extensions>=3.10.0.0; python_version < '3.10' +Provides-Extra: colorama +Requires-Dist: colorama>=0.4.3; extra == 'colorama' +Provides-Extra: d +Requires-Dist: aiohttp>=3.7.4; extra == 'd' +Provides-Extra: jupyter +Requires-Dist: ipython>=7.8.0; extra == 'jupyter' +Requires-Dist: tokenize-rt>=3.2.0; extra == 'jupyter' +Provides-Extra: uvloop +Requires-Dist: uvloop>=0.15.2; extra == 'uvloop' +Description-Content-Type: text/markdown + +[![Black Logo](https://raw.githubusercontent.com/psf/black/main/docs/_static/logo2-readme.png)](https://black.readthedocs.io/en/stable/) + +

+    The Uncompromising Code Formatter
+
+[badges: Actions Status, Documentation Status, Coverage Status, License: MIT, PyPI, Downloads, conda-forge, Code style: black]
+
+> “Any color you like.”
+
+_Black_ is the uncompromising Python code formatter. By using it, you agree to cede
+control over minutiae of hand-formatting. In return, _Black_ gives you speed,
+determinism, and freedom from `pycodestyle` nagging about formatting. You will save time
+and mental energy for more important matters.
+
+Blackened code looks the same regardless of the project you're reading. Formatting
+becomes transparent after a while and you can focus on the content instead.
+
+_Black_ makes code review faster by producing the smallest diffs possible.
+
+Try it out now using the [Black Playground](https://black.vercel.app). Watch the
+[PyCon 2019 talk](https://youtu.be/esZLCuWs_2Y) to learn more.
+
+---
+
+**[Read the documentation on ReadTheDocs!](https://black.readthedocs.io/en/stable)**
+
+---
+
+## Installation and usage
+
+### Installation
+
+_Black_ can be installed by running `pip install black`. It requires Python 3.7+ to run.
+If you want to format Jupyter Notebooks, install with `pip install 'black[jupyter]'`.
+
+If you can't wait for the latest _hotness_ and want to install from GitHub, use:
+
+`pip install git+https://github.com/psf/black`
+
+### Usage
+
+To get started right away with sensible defaults:
+
+```sh
+black {source_file_or_directory}
+```
+
+You can run _Black_ as a package if running it as a script doesn't work:
+
+```sh
+python -m black {source_file_or_directory}
+```
+
+Further information can be found in our docs:
+
+- [Usage and Configuration](https://black.readthedocs.io/en/stable/usage_and_configuration/index.html)
+
+_Black_ is already [successfully used](https://github.com/psf/black#used-by) by many
+projects, small and big. _Black_ has a comprehensive test suite, with efficient parallel
+tests, and our own auto formatting and parallel Continuous Integration runner. Now that
+we have become stable, you should not expect large formatting changes in the future.
+Stylistic changes will mostly be responses to bug reports and support for new Python
+syntax. For more information please refer to
+[The Black Code Style](https://black.readthedocs.io/en/stable/the_black_code_style/index.html).
+
+Also, as a safety measure which slows down processing, _Black_ will check that the
+reformatted code still produces a valid AST that is effectively equivalent to the
+original (see the
+[Pragmatism](https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#ast-before-and-after-formatting)
+section for details). If you're feeling confident, use `--fast`.
+
+## The _Black_ code style
+
+_Black_ is a PEP 8 compliant opinionated formatter. _Black_ reformats entire files in
+place. Style configuration options are deliberately limited and rarely added. It doesn't
+take previous formatting into account (see
+[Pragmatism](https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#pragmatism)
+for exceptions).
+
+Our documentation covers the current _Black_ code style, but planned changes to it are
+also documented.
+They're both worth taking a look at:
+
+- [The _Black_ Code Style: Current style](https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html)
+- [The _Black_ Code Style: Future style](https://black.readthedocs.io/en/stable/the_black_code_style/future_style.html)
+
+Changes to the _Black_ code style are bound by the Stability Policy:
+
+- [The _Black_ Code Style: Stability Policy](https://black.readthedocs.io/en/stable/the_black_code_style/index.html#stability-policy)
+
+Please refer to this document before submitting an issue. What seems like a bug might be
+intended behaviour.
+
+### Pragmatism
+
+Early versions of _Black_ used to be absolutist in some respects. They took after its
+initial author. This was fine at the time as it made the implementation simpler and
+there were not many users anyway. Not many edge cases were reported. As a mature tool,
+_Black_ does make some exceptions to rules it otherwise holds.
+
+- [The _Black_ code style: Pragmatism](https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#pragmatism)
+
+Please refer to this document before submitting an issue just like with the document
+above. What seems like a bug might be intended behaviour.
+
+## Configuration
+
+_Black_ is able to read project-specific default values for its command line options
+from a `pyproject.toml` file. This is especially useful for specifying custom
+`--include` and `--exclude`/`--force-exclude`/`--extend-exclude` patterns for your
+project. A minimal illustrative sketch appears a little further below.
+
+You can find more details in our documentation:
+
+- [The basics: Configuration via a file](https://black.readthedocs.io/en/stable/usage_and_configuration/the_basics.html#configuration-via-a-file)
+
+And if you're looking for more general configuration documentation:
+
+- [Usage and Configuration](https://black.readthedocs.io/en/stable/usage_and_configuration/index.html)
+
+**Pro-tip**: If you're asking yourself "Do I need to configure anything?" the answer is
+"No". _Black_ is all about sensible defaults. Applying those defaults will have your
+code in compliance with many other _Black_ formatted projects.
+
+## Used by
+
+The following notable open-source projects trust _Black_ with enforcing a consistent
+code style: pytest, tox, Pyramid, Django, Django Channels, Hypothesis, attrs,
+SQLAlchemy, Poetry, PyPA applications (Warehouse, Bandersnatch, Pipenv, virtualenv),
+pandas, Pillow, Twisted, LocalStack, every Datadog Agent Integration, Home Assistant,
+Zulip, Kedro, OpenOA, FLORIS, ORBIT, WOMBAT, and many more.
+
+The following organizations use _Black_: Facebook, Dropbox, KeepTruckin, Mozilla, Quora,
+Duolingo, QuantumBlack, Tesla.
+
+Are we missing anyone? Let us know.
+
+## Testimonials
+
+**Mike Bayer**, [author of `SQLAlchemy`](https://www.sqlalchemy.org/):
+
+> I can't think of any single tool in my entire programming career that has given me a
+> bigger productivity increase by its introduction. I can now do refactorings in about
+> 1% of the keystrokes that it would have taken me previously when we had no way for
+> code to format itself.
+
+**Dusty Phillips**,
+[writer](https://smile.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=dusty+phillips):
+
+> _Black_ is opinionated so you don't have to be.
+
+**Hynek Schlawack**, [creator of `attrs`](https://www.attrs.org/), core developer of
+Twisted and CPython:
+
+> An auto-formatter that doesn't suck is all I want for Xmas!
+
+**Carl Meyer**, [Django](https://www.djangoproject.com/) core developer:
+
+> At least the name is good.
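Editorial aside, illustrating the Configuration section above: a minimal `pyproject.toml` sketch. The keys mirror _Black_'s command-line options; the particular values, and the `env/` exclude pattern, are illustrative assumptions for a repository like this one, not upstream recommendations:

```toml
# pyproject.toml -- a minimal, illustrative Black configuration
[tool.black]
line-length = 88           # Black's default, spelled out for clarity
target-version = ["py38"]  # assume we format for Python 3.8
# extend-exclude takes a (verbose) regex; here we skip a vendored
# virtualenv directory in addition to Black's default excludes.
extend-exclude = '''
/(
  env
)/
'''
```

With this in place, a plain `black .` picks the options up automatically; flags given on the command line still override the file.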
+ +**Kenneth Reitz**, creator of [`requests`](https://requests.readthedocs.io/en/latest/) +and [`pipenv`](https://readthedocs.org/projects/pipenv/): + +> This vastly improves the formatting of our code. Thanks a ton! + +## Show your style + +Use the badge in your project's README.md: + +```md +[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) +``` + +Using the badge in README.rst: + +``` +.. image:: https://img.shields.io/badge/code%20style-black-000000.svg + :target: https://github.com/psf/black +``` + +Looks like this: +[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) + +## License + +MIT + +## Contributing + +Welcome! Happy to see you willing to make the project better. You can get started by +reading this: + +- [Contributing: The basics](https://black.readthedocs.io/en/latest/contributing/the_basics.html) + +You can also take a look at the rest of the contributing docs or talk with the +developers: + +- [Contributing documentation](https://black.readthedocs.io/en/latest/contributing/index.html) +- [Chat on Discord](https://discord.gg/RtVdv86PrH) + +## Change log + +The log has become rather long. It moved to its own file. + +See [CHANGES](https://black.readthedocs.io/en/latest/change_log.html). + +## Authors + +The author list is quite long nowadays, so it lives in its own file. + +See [AUTHORS.md](./AUTHORS.md) + +## Code of Conduct + +Everyone participating in the _Black_ project, and in particular in the issue tracker, +pull requests, and social media activity, is expected to treat other people with respect +and more generally to follow the guidelines articulated in the +[Python Community Code of Conduct](https://www.python.org/psf/codeofconduct/). + +At the same time, humor is encouraged. In fact, basic familiarity with Monty Python's +Flying Circus is expected. We are not savages. + +And if you _really_ need to slap somebody, do it with a fish while dancing. +# Change Log + +## Unreleased + +### Highlights + + + +### Stable style + + + +### Preview style + + + +### Configuration + + + +### Packaging + + + +### Parser + + + +### Performance + + + +### Output + + + +### _Blackd_ + + + +### Integrations + + + +### Documentation + + + +## 22.10.0 + +### Highlights + +- Runtime support for Python 3.6 has been removed. Formatting 3.6 code will still be + supported until further notice. + +### Stable style + +- Fix a crash when `# fmt: on` is used on a different block level than `# fmt: off` + (#3281) + +### Preview style + +- Fix a crash when formatting some dicts with parenthesis-wrapped long string keys + (#3262) + +### Configuration + +- `.ipynb_checkpoints` directories are now excluded by default (#3293) +- Add `--skip-source-first-line` / `-x` option to ignore the first line of source code + while formatting (#3299) + +### Packaging + +- Executables made with PyInstaller will no longer crash when formatting several files + at once on macOS. Native x86-64 executables for macOS are available once again. + (#3275) +- Hatchling is now used as the build backend. This will not have any effect for users + who install Black with its wheels from PyPI. (#3233) +- Faster compiled wheels are now available for CPython 3.11 (#3276) + +### _Blackd_ + +- Windows style (CRLF) newlines will be preserved (#3257). 
+ +### Integrations + +- Vim plugin: add flag (`g:black_preview`) to enable/disable the preview style (#3246) +- Update GitHub Action to support formatting of Jupyter Notebook files via a `jupyter` + option (#3282) +- Update GitHub Action to support use of version specifiers (e.g. `<23`) for Black + version (#3265) + +## 22.8.0 + +### Highlights + +- Python 3.11 is now supported, except for _blackd_ as aiohttp does not support 3.11 as + of publishing (#3234) +- This is the last release that supports running _Black_ on Python 3.6 (formatting 3.6 + code will continue to be supported until further notice) +- Reword the stability policy to say that we may, in rare cases, make changes that + affect code that was not previously formatted by _Black_ (#3155) + +### Stable style + +- Fix an infinite loop when using `# fmt: on/off` in the middle of an expression or code + block (#3158) +- Fix incorrect handling of `# fmt: skip` on colon (`:`) lines (#3148) +- Comments are no longer deleted when a line had spaces removed around power operators + (#2874) + +### Preview style + +- Single-character closing docstring quotes are no longer moved to their own line as + this is invalid. This was a bug introduced in version 22.6.0. (#3166) +- `--skip-string-normalization` / `-S` now prevents docstring prefixes from being + normalized as expected (#3168) +- When using `--skip-magic-trailing-comma` or `-C`, trailing commas are stripped from + subscript expressions with more than 1 element (#3209) +- Implicitly concatenated strings inside a list, set, or tuple are now wrapped inside + parentheses (#3162) +- Fix a string merging/split issue when a comment is present in the middle of implicitly + concatenated strings on its own line (#3227) + +### _Blackd_ + +- `blackd` now supports enabling the preview style via the `X-Preview` header (#3217) + +### Configuration + +- Black now uses the presence of debug f-strings to detect target version (#3215) +- Fix misdetection of project root and verbose logging of sources in cases involving + `--stdin-filename` (#3216) +- Immediate `.gitignore` files in source directories given on the command line are now + also respected, previously only `.gitignore` files in the project root and + automatically discovered directories were respected (#3237) + +### Documentation + +- Recommend using BlackConnect in IntelliJ IDEs (#3150) + +### Integrations + +- Vim plugin: prefix messages with `Black: ` so it's clear they come from Black (#3194) +- Docker: changed to a /opt/venv installation + added to PATH to be available to + non-root users (#3202) + +### Output + +- Change from deprecated `asyncio.get_event_loop()` to create our event loop which + removes DeprecationWarning (#3164) +- Remove logging from internal `blib2to3` library since it regularly emits error logs + about failed caching that can and should be ignored (#3193) + +### Parser + +- Type comments are now included in the AST equivalence check consistently so accidental + deletion raises an error. Though type comments can't be tracked when running on PyPy + 3.7 due to standard library limitations. 
(#2874) + +### Performance + +- Reduce Black's startup time when formatting a single file by 15-30% (#3211) + +## 22.6.0 + +### Style + +- Fix unstable formatting involving `#fmt: skip` and `# fmt:skip` comments (notice the + lack of spaces) (#2970) + +### Preview style + +- Docstring quotes are no longer moved if it would violate the line length limit (#3044) +- Parentheses around return annotations are now managed (#2990) +- Remove unnecessary parentheses around awaited objects (#2991) +- Remove unnecessary parentheses in `with` statements (#2926) +- Remove trailing newlines after code block open (#3035) + +### Integrations + +- Add `scripts/migrate-black.py` script to ease introduction of Black to a Git project + (#3038) + +### Output + +- Output Python version and implementation as part of `--version` flag (#2997) + +### Packaging + +- Use `tomli` instead of `tomllib` on Python 3.11 builds where `tomllib` is not + available (#2987) + +### Parser + +- [PEP 654](https://peps.python.org/pep-0654/#except) syntax (for example, + `except *ExceptionGroup:`) is now supported (#3016) +- [PEP 646](https://peps.python.org/pep-0646) syntax (for example, + `Array[Batch, *Shape]` or `def fn(*args: *T) -> None`) is now supported (#3071) + +### Vim Plugin + +- Fix `strtobool` function. It didn't parse true/on/false/off. (#3025) + +## 22.3.0 + +### Preview style + +- Code cell separators `#%%` are now standardised to `# %%` (#2919) +- Remove unnecessary parentheses from `except` statements (#2939) +- Remove unnecessary parentheses from tuple unpacking in `for` loops (#2945) +- Avoid magic-trailing-comma in single-element subscripts (#2942) + +### Configuration + +- Do not format `__pypackages__` directories by default (#2836) +- Add support for specifying stable version with `--required-version` (#2832). +- Avoid crashing when the user has no homedir (#2814) +- Avoid crashing when md5 is not available (#2905) +- Fix handling of directory junctions on Windows (#2904) + +### Documentation + +- Update pylint config documentation (#2931) + +### Integrations + +- Move test to disable plugin in Vim/Neovim, which speeds up loading (#2896) + +### Output + +- In verbose mode, log when _Black_ is using user-level config (#2861) + +### Packaging + +- Fix Black to work with Click 8.1.0 (#2966) +- On Python 3.11 and newer, use the standard library's `tomllib` instead of `tomli` + (#2903) +- `black-primer`, the deprecated internal devtool, has been removed and copied to a + [separate repository](https://github.com/cooperlees/black-primer) (#2924) + +### Parser + +- Black can now parse starred expressions in the target of `for` and `async for` + statements, e.g `for item in *items_1, *items_2: pass` (#2879). + +## 22.1.0 + +At long last, _Black_ is no longer a beta product! This is the first non-beta release +and the first release covered by our new +[stability policy](https://black.readthedocs.io/en/stable/the_black_code_style/index.html#stability-policy). 
+ +### Highlights + +- **Remove Python 2 support** (#2740) +- Introduce the `--preview` flag (#2752) + +### Style + +- Deprecate `--experimental-string-processing` and move the functionality under + `--preview` (#2789) +- For stubs, one blank line between class attributes and methods is now kept if there's + at least one pre-existing blank line (#2736) +- Black now normalizes string prefix order (#2297) +- Remove spaces around power operators if both operands are simple (#2726) +- Work around bug that causes unstable formatting in some cases in the presence of the + magic trailing comma (#2807) +- Use parentheses for attribute access on decimal float and int literals (#2799) +- Don't add whitespace for attribute access on hexadecimal, binary, octal, and complex + literals (#2799) +- Treat blank lines in stubs the same inside top-level `if` statements (#2820) +- Fix unstable formatting with semicolons and arithmetic expressions (#2817) +- Fix unstable formatting around magic trailing comma (#2572) + +### Parser + +- Fix mapping cases that contain as-expressions, like `case {"key": 1 | 2 as password}` + (#2686) +- Fix cases that contain multiple top-level as-expressions, like `case 1 as a, 2 as b` + (#2716) +- Fix call patterns that contain as-expressions with keyword arguments, like + `case Foo(bar=baz as quux)` (#2749) +- Tuple unpacking on `return` and `yield` constructs now implies 3.8+ (#2700) +- Unparenthesized tuples on annotated assignments (e.g + `values: Tuple[int, ...] = 1, 2, 3`) now implies 3.8+ (#2708) +- Fix handling of standalone `match()` or `case()` when there is a trailing newline or a + comment inside of the parentheses. (#2760) +- `from __future__ import annotations` statement now implies Python 3.7+ (#2690) + +### Performance + +- Speed-up the new backtracking parser about 4X in general (enabled when + `--target-version` is set to 3.10 and higher). (#2728) +- _Black_ is now compiled with [mypyc](https://github.com/mypyc/mypyc) for an overall 2x + speed-up. 64-bit Windows, MacOS, and Linux (not including musl) are supported. (#1009, + #2431) + +### Configuration + +- Do not accept bare carriage return line endings in pyproject.toml (#2408) +- Add configuration option (`python-cell-magics`) to format cells with custom magics in + Jupyter Notebooks (#2744) +- Allow setting custom cache directory on all platforms with environment variable + `BLACK_CACHE_DIR` (#2739). +- Enable Python 3.10+ by default, without any extra need to specify + `--target-version=py310`. (#2758) +- Make passing `SRC` or `--code` mandatory and mutually exclusive (#2804) + +### Output + +- Improve error message for invalid regular expression (#2678) +- Improve error message when parsing fails during AST safety check by embedding the + underlying SyntaxError (#2693) +- No longer color diff headers white as it's unreadable in light themed terminals + (#2691) +- Text coloring added in the final statistics (#2712) +- Verbose mode also now describes how a project root was discovered and which paths will + be formatted. 
(#2526) + +### Packaging + +- All upper version bounds on dependencies have been removed (#2718) +- `typing-extensions` is no longer a required dependency in Python 3.10+ (#2772) +- Set `click` lower bound to `8.0.0` (#2791) + +### Integrations + +- Update GitHub action to support containerized runs (#2748) + +### Documentation + +- Change protocol in pip installation instructions to `https://` (#2761) +- Change HTML theme to Furo primarily for its responsive design and mobile support + (#2793) +- Deprecate the `black-primer` tool (#2809) +- Document Python support policy (#2819) + +## 21.12b0 + +### _Black_ + +- Fix determination of f-string expression spans (#2654) +- Fix bad formatting of error messages about EOF in multi-line statements (#2343) +- Functions and classes in blocks now have more consistent surrounding spacing (#2472) + +#### Jupyter Notebook support + +- Cell magics are now only processed if they are known Python cell magics. Earlier, all + cell magics were tokenized, leading to possible indentation errors e.g. with + `%%writefile`. (#2630) +- Fix assignment to environment variables in Jupyter Notebooks (#2642) + +#### Python 3.10 support + +- Point users to using `--target-version py310` if we detect 3.10-only syntax (#2668) +- Fix `match` statements with open sequence subjects, like `match a, b:` or + `match a, *b:` (#2639) (#2659) +- Fix `match`/`case` statements that contain `match`/`case` soft keywords multiple + times, like `match re.match()` (#2661) +- Fix `case` statements with an inline body (#2665) +- Fix styling of starred expressions inside `match` subject (#2667) +- Fix parser error location on invalid syntax in a `match` statement (#2649) +- Fix Python 3.10 support on platforms without ProcessPoolExecutor (#2631) +- Improve parsing performance on code that uses `match` under `--target-version py310` + up to ~50% (#2670) + +### Packaging + +- Remove dependency on `regex` (#2644) (#2663) + +## 21.11b1 + +### _Black_ + +- Bumped regex version minimum to 2021.4.4 to fix Pattern class usage (#2621) + +## 21.11b0 + +### _Black_ + +- Warn about Python 2 deprecation in more cases by improving Python 2 only syntax + detection (#2592) +- Add experimental PyPy support (#2559) +- Add partial support for the match statement. 
As it's experimental, it's only enabled + when `--target-version py310` is explicitly specified (#2586) +- Add support for parenthesized with (#2586) +- Declare support for Python 3.10 for running Black (#2562) + +### Integrations + +- Fixed vim plugin with Python 3.10 by removing deprecated distutils import (#2610) +- The vim plugin now parses `skip_magic_trailing_comma` from pyproject.toml (#2613) + +## 21.10b0 + +### _Black_ + +- Document stability policy, that will apply for non-beta releases (#2529) +- Add new `--workers` parameter (#2514) +- Fixed feature detection for positional-only arguments in lambdas (#2532) +- Bumped typed-ast version minimum to 1.4.3 for 3.10 compatibility (#2519) +- Fixed a Python 3.10 compatibility issue where the loop argument was still being passed + even though it has been removed (#2580) +- Deprecate Python 2 formatting support (#2523) + +### _Blackd_ + +- Remove dependency on aiohttp-cors (#2500) +- Bump required aiohttp version to 3.7.4 (#2509) + +### _Black-Primer_ + +- Add primer support for --projects (#2555) +- Print primer summary after individual failures (#2570) + +### Integrations + +- Allow to pass `target_version` in the vim plugin (#1319) +- Install build tools in docker file and use multi-stage build to keep the image size + down (#2582) + +## 21.9b0 + +### Packaging + +- Fix missing modules in self-contained binaries (#2466) +- Fix missing toml extra used during installation (#2475) + +## 21.8b0 + +### _Black_ + +- Add support for formatting Jupyter Notebook files (#2357) +- Move from `appdirs` dependency to `platformdirs` (#2375) +- Present a more user-friendly error if .gitignore is invalid (#2414) +- The failsafe for accidentally added backslashes in f-string expressions has been + hardened to handle more edge cases during quote normalization (#2437) +- Avoid changing a function return type annotation's type to a tuple by adding a + trailing comma (#2384) +- Parsing support has been added for unparenthesized walruses in set literals, set + comprehensions, and indices (#2447). +- Pin `setuptools-scm` build-time dependency version (#2457) +- Exclude typing-extensions version 3.10.0.1 due to it being broken on Python 3.10 + (#2460) + +### _Blackd_ + +- Replace sys.exit(-1) with raise ImportError as it plays more nicely with tools that + scan installed packages (#2440) + +### Integrations + +- The provided pre-commit hooks no longer specify `language_version` to avoid overriding + `default_language_version` (#2430) + +## 21.7b0 + +### _Black_ + +- Configuration files using TOML features higher than spec v0.5.0 are now supported + (#2301) +- Add primer support and test for code piped into black via STDIN (#2315) +- Fix internal error when `FORCE_OPTIONAL_PARENTHESES` feature is enabled (#2332) +- Accept empty stdin (#2346) +- Provide a more useful error when parsing fails during AST safety checks (#2304) + +### Docker + +- Add new `latest_release` tag automation to follow latest black release on docker + images (#2374) + +### Integrations + +- The vim plugin now searches upwards from the directory containing the current buffer + instead of the current working directory for pyproject.toml. 
+  (#1871)
+- The vim plugin now reads the correct string normalization option in pyproject.toml
+  (#1869)
+- The vim plugin no longer crashes Black when there are boolean values in pyproject.toml
+  (#1869)
+
+## 21.6b0
+
+### _Black_
+
+- Fix failure caused by `fmt: skip` and indentation (#2281)
+- Account for += assignment when deciding whether to split string (#2312)
+- Correct max string length calculation when there are string operators (#2292)
+- Fixed option usage when using the `--code` flag (#2259)
+- Do not call `uvloop.install()` when _Black_ is used as a library (#2303)
+- Added `--required-version` option to require a specific version to be running (#2300)
+- Fix incorrect custom breakpoint indices when string group contains fake f-strings
+  (#2311)
+- Fix regression where `R` prefixes would be lowercased for docstrings (#2285)
+- Fix handling of named escapes (`\N{...}`) when `--experimental-string-processing` is
+  used (#2319)
+
+### Integrations
+
+- The official Black action now supports choosing what version to use, and supports the
+  major 3 OSes. (#1940)
+
+## 21.5b2
+
+### _Black_
+
+- A space is no longer inserted into empty docstrings (#2249)
+- Fix handling of .gitignore files containing non-ASCII characters on Windows (#2229)
+- Respect `.gitignore` files in all levels, not only `root/.gitignore` file (apply
+  `.gitignore` rules like `git` does) (#2225)
+- Restored compatibility with Click 8.0 on Python 3.6 when LANG=C used (#2227)
+- Add extra uvloop install + import support if in python env (#2258)
+- Fix --experimental-string-processing crash when matching parens are not found (#2283)
+- Make sure to split lines that start with a string operator (#2286)
+- Fix regular expression that black uses to identify f-expressions (#2287)
+
+### _Blackd_
+
+- Add a lower bound for the `aiohttp-cors` dependency. Only 0.4.0 or higher is
+  supported. (#2231)
+
+### Packaging
+
+- Release self-contained x86_64 MacOS binaries as part of the GitHub release pipeline
+  (#2198)
+- Always build binaries with the latest available Python (#2260)
+
+### Documentation
+
+- Add discussion of magic comments to FAQ page (#2272)
+- `--experimental-string-processing` will be enabled by default in the future (#2273)
+- Fix typos discovered by codespell (#2228)
+- Fix Vim plugin installation instructions. (#2235)
+- Add new Frequently Asked Questions page (#2247)
+- Fix encoding + symlink issues preventing proper build on Windows (#2262)
+
+## 21.5b1
+
+### _Black_
+
+- Refactor `src/black/__init__.py` into many files (#2206)
+
+### Documentation
+
+- Replaced all remaining references to the
+  [`master`](https://github.com/psf/black/tree/main) branch with the
+  [`main`](https://github.com/psf/black/tree/main) branch. Some additional changes in
+  the source code were also made. (#2210)
+- Significantly reorganized the documentation to make much more sense. Check them out by
+  heading over to [the stable docs on RTD](https://black.readthedocs.io/en/stable/).
+  (#2174)
+
+## 21.5b0
+
+### _Black_
+
+- Set `--pyi` mode if `--stdin-filename` ends in `.pyi` (#2169)
+- Stop detecting target version as Python 3.9+ with pre-PEP-614 decorators that are
+  being called but with no arguments (#2182)
+
+### _Black-Primer_
+
+- Add `--no-diff` to black-primer to suppress formatting changes (#2187)
+
+## 21.4b2
+
+### _Black_
+
+- Fix crash if the user configuration directory is inaccessible.
+  (#2158)
+
+- Clarify
+  [circumstances](https://github.com/psf/black/blob/master/docs/the_black_code_style.md#pragmatism)
+  in which _Black_ may change the AST (#2159)
+
+- Allow `.gitignore` rules to be overridden by specifying `exclude` in `pyproject.toml`
+  or on the command line. (#2170)
+
+### _Packaging_
+
+- Install `primer.json` (used by `black-primer` by default) with black. (#2154)
+
+## 21.4b1
+
+### _Black_
+
+- Fix crash on docstrings ending with "\ ". (#2142)
+
+- Fix crash when atypical whitespace is cleaned out of docstrings (#2120)
+
+- Reflect the `--skip-magic-trailing-comma` and `--experimental-string-processing` flags
+  in the name of the cache file. Without this fix, changes in these flags would not take
+  effect if the cache had already been populated. (#2131)
+
+- Don't remove necessary parentheses from assignment expression containing assert /
+  return statements. (#2143)
+
+### _Packaging_
+
+- Bump pathspec to >= 0.8.1 to solve invalid .gitignore exclusion handling
+
+## 21.4b0
+
+### _Black_
+
+- Fixed a rare but annoying formatting instability created by the combination of
+  optional trailing commas inserted by `Black` and optional parentheses looking at
+  pre-existing "magic" trailing commas. This fixes issue #1629 and all of its many many
+  duplicates. (#2126)
+
+- `Black` now processes one-line docstrings by stripping leading and trailing spaces,
+  and adding a padding space when needed to break up """". (#1740)
+
+- `Black` now cleans up leading non-breaking spaces in comments (#2092)
+
+- `Black` now respects `--skip-string-normalization` when normalizing multiline
+  docstring quotes (#1637)
+
+- `Black` no longer removes all empty lines between non-function code and decorators
+  when formatting typing stubs. Now `Black` enforces a single empty line. (#1646)
+
+- `Black` no longer adds an incorrect space after a parenthesized assignment expression
+  in if/while statements (#1655)
+
+- Added `--skip-magic-trailing-comma` / `-C` to avoid using trailing commas as a reason
+  to split lines (#1824)
+
+- fixed a crash when PWD=/ on POSIX (#1631)
+
+- fixed "I/O operation on closed file" when using --diff (#1664)
+
+- Prevent coloured diff output being interleaved with multiple files (#1673)
+
+- Added support for PEP 614 relaxed decorator syntax on python 3.9 (#1711)
+
+- Added parsing support for unparenthesized tuples and yield expressions in annotated
+  assignments (#1835)
+
+- added `--extend-exclude` argument (PR #2005)
+
+- speed up caching by avoiding pathlib (#1950)
+
+- `--diff` correctly indicates when a file doesn't end in a newline (#1662)
+
+- Added `--stdin-filename` argument to allow stdin to respect `--force-exclude` rules
+  (#1780)
+
+- Lines ending with `fmt: skip` will now not be formatted (#1800)
+
+- PR #2053: Black no longer relies on typed-ast for Python 3.8 and higher
+
+- PR #2053: Python 2 support is now optional, install with
+  `python3 -m pip install black[python2]` to maintain support.
+
+- Exclude `venv` directory by default (#1683)
+
+- Fixed "Black produced code that is not equivalent to the source" when formatting
+  Python 2 docstrings (#2037)
+
+### _Packaging_
+
+- Self-contained native _Black_ binaries are now provided for releases via GitHub
+  Releases (#1743)
+
+## 20.8b1
+
+### _Packaging_
+
+- explicitly depend on Click 7.1.2 or newer as `Black` no longer works with versions
+  older than 7.0
+
+## 20.8b0
+
+### _Black_
+
+- re-implemented support for explicit trailing commas: now it works consistently within
+  any bracket pair, including nested structures (#1288 and duplicates)
+
+- `Black` now reindents docstrings when reindenting code around it (#1053)
+
+- `Black` now shows colored diffs (#1266)
+
+- `Black` is now packaged using 'py3' tagged wheels (#1388)
+
+- `Black` now supports Python 3.8 code, e.g. star expressions in return statements
+  (#1121)
+
+- `Black` no longer normalizes capital R-string prefixes as those have a
+  community-accepted meaning (#1244)
+
+- `Black` now uses exit code 2 when specified configuration file doesn't exist (#1361)
+
+- `Black` now works on AWS Lambda (#1141)
+
+- added `--force-exclude` argument (#1032)
+
+- removed deprecated `--py36` option (#1236)
+
+- fixed `--diff` output when EOF is encountered (#526)
+
+- fixed `# fmt: off` handling around decorators (#560)
+
+- fixed unstable formatting with some `# type: ignore` comments (#1113)
+
+- fixed invalid removal on organizing brackets followed by indexing (#1575)
+
+- introduced `black-primer`, a CI tool that allows us to run regression tests against
+  existing open source users of Black (#1402)
+
+- introduced property-based fuzzing to our test suite based on Hypothesis and
+  Hypothesmith (#1566)
+
+- implemented experimental and disabled by default long string rewrapping (#1132),
+  hidden under a `--experimental-string-processing` flag while it's being worked on;
+  this is an undocumented and unsupported feature, you lose Internet points for
+  depending on it (#1609)
+
+### Vim plugin
+
+- prefer virtualenv packages over global packages (#1383)
+
+## 19.10b0
+
+- added support for PEP 572 assignment expressions (#711)
+
+- added support for PEP 570 positional-only arguments (#943)
+
+- added support for async generators (#593)
+
+- added support for pre-splitting collections by putting an explicit trailing comma
+  inside (#826)
+
+- added `black -c` as a way to format code passed from the command line (#761)
+
+- --safe now works with Python 2 code (#840)
+
+- fixed grammar selection for Python 2-specific code (#765)
+
+- fixed feature detection for trailing commas in function definitions and call sites
+  (#763)
+
+- `# fmt: off`/`# fmt: on` comment pairs placed multiple times within the same block of
+  code now behave correctly (#1005)
+
+- _Black_ no longer crashes on Windows machines with more than 61 cores (#838)
+
+- _Black_ no longer crashes on standalone comments prepended with a backslash (#767)
+
+- _Black_ no longer crashes on `from` ...
+  `import` blocks with comments (#829)
+
+- _Black_ no longer crashes on Python 3.7 on some platform configurations (#494)
+
+- _Black_ no longer fails on comments in from-imports (#671)
+
+- _Black_ no longer fails when the file starts with a backslash (#922)
+
+- _Black_ no longer merges regular comments with type comments (#1027)
+
+- _Black_ no longer splits long lines that contain type comments (#997)
+
+- removed unnecessary parentheses around `yield` expressions (#834)
+
+- added parentheses around long tuples in unpacking assignments (#832)
+
+- added parentheses around complex powers when they are prefixed by a unary operator
+  (#646)
+
+- fixed bug that led _Black_ to format some code with a line length target of 1 (#762)
+
+- _Black_ no longer introduces quotes in f-string subexpressions on string boundaries
+  (#863)
+
+- if _Black_ puts parentheses around a single expression, it moves comments to the
+  wrapped expression instead of after the brackets (#872)
+
+- `blackd` now returns the version of _Black_ in the response headers (#1013)
+
+- `blackd` can now output the diff of formats on source code when the `X-Diff` header is
+  provided (#969)
+
+## 19.3b0
+
+- new option `--target-version` to control which Python versions _Black_-formatted code
+  should target (#618)
+
+- deprecated `--py36` (use `--target-version=py36` instead) (#724)
+
+- _Black_ no longer normalizes numeric literals to include `_` separators (#696)
+
+- long `del` statements are now split into multiple lines (#698)
+
+- type comments are no longer mangled in function signatures
+
+- improved performance of formatting deeply nested data structures (#509)
+
+- _Black_ now properly formats multiple files in parallel on Windows (#632)
+
+- _Black_ now creates cache files atomically which allows it to be used in parallel
+  pipelines (like `xargs -P8`) (#673)
+
+- _Black_ now correctly indents comments in files that were previously formatted with
+  tabs (#262)
+
+- `blackd` now supports CORS (#622)
+
+## 18.9b0
+
+- numeric literals are now formatted by _Black_ (#452, #461, #464, #469):
+
+  - numeric literals are normalized to include `_` separators on Python 3.6+ code
+
+  - added `--skip-numeric-underscore-normalization` to disable the above behavior and
+    leave numeric underscores as they were in the input
+
+  - code with `_` in numeric literals is recognized as Python 3.6+
+
+  - most letters in numeric literals are lowercased (e.g., in `1e10`, `0x01`)
+
+  - hexadecimal digits are always uppercased (e.g.
+    `0xBADC0DE`)
+
+- added `blackd`, see
+  [its documentation](https://github.com/psf/black/blob/18.9b0/README.md#blackd) for
+  more info (#349)
+
+- adjacent string literals are now correctly split into multiple lines (#463)
+
+- trailing comma is now added to single imports that don't fit on a line (#250)
+
+- cache is now populated when `--check` is successful for a file which speeds up
+  consecutive checks of properly formatted unmodified files (#448)
+
+- whitespace at the beginning of the file is now removed (#399)
+
+- fixed mangling [pweave](http://mpastell.com/pweave/) and
+  [Spyder IDE](https://www.spyder-ide.org/) special comments (#532)
+
+- fixed unstable formatting when unpacking big tuples (#267)
+
+- fixed parsing of `__future__` imports with renames (#389)
+
+- fixed scope of `# fmt: off` when directly preceding `yield` and other nodes (#385)
+
+- fixed formatting of lambda expressions with default arguments (#468)
+
+- fixed `async for` statements: _Black_ no longer breaks them into separate lines (#372)
+
+- note: the Vim plugin stopped registering `,=` as a default chord as it turned out to
+  be a bad idea (#415)
+
+## 18.6b4
+
+- hotfix: don't freeze when multiple comments directly precede `# fmt: off` (#371)
+
+## 18.6b3
+
+- typing stub files (`.pyi`) now have blank lines added after constants (#340)
+
+- `# fmt: off` and `# fmt: on` are now much more dependable:
+
+  - they now work also within bracket pairs (#329)
+
+  - they now correctly work across function/class boundaries (#335)
+
+  - they now work when an indentation block starts with empty lines or misaligned
+    comments (#334)
+
+- made Click not fail on invalid environments; note that Click is right but the
+  likelihood we'll need to access non-ASCII file paths when dealing with Python source
+  code is low (#277)
+
+- fixed improper formatting of f-strings with quotes inside interpolated expressions
+  (#322)
+
+- fixed unnecessary slowdown when long list literals were found in a file
+
+- fixed unnecessary slowdown on AST nodes with very many siblings
+
+- fixed cannibalizing backslashes during string normalization
+
+- fixed a crash due to symbolic links pointing outside of the project directory (#338)
+
+## 18.6b2
+
+- added `--config` (#65)
+
+- added `-h` equivalent to `--help` (#316)
+
+- fixed improper unmodified file caching when `-S` was used
+
+- fixed extra space in string unpacking (#305)
+
+- fixed formatting of empty triple quoted strings (#313)
+
+- fixed unnecessary slowdown in comment placement calculation on lines without comments
+
+## 18.6b1
+
+- hotfix: don't output human-facing information on stdout (#299)
+
+- hotfix: don't output cake emoji on non-zero return code (#300)
+
+## 18.6b0
+
+- added `--include` and `--exclude` (#270)
+
+- added `--skip-string-normalization` (#118)
+
+- added `--verbose` (#283)
+
+- the header output in `--diff` now actually conforms to the unified diff spec
+
+- fixed long trivial assignments being wrapped in unnecessary parentheses (#273)
+
+- fixed unnecessary parentheses when a line contained multiline strings (#232)
+
+- fixed stdin handling not working correctly if an old version of Click was used (#276)
+
+- _Black_ now preserves line endings when formatting a file in place (#258)
+
+## 18.5b1
+
+- added `--pyi` (#249)
+
+- added `--py36` (#249)
+
+- Python grammar pickle caches are stored with the formatting caches, making _Black_
+  work in environments where site-packages is not user-writable (#192)
+
+- _Black_ now enforces a PEP 257 empty line after
+  a class-level docstring (and/or
+  fields) and the first method
+
+- fixed invalid code produced when standalone comments were present in a trailer that
+  was omitted from line splitting on a large expression (#237)
+
+- fixed optional parentheses being removed within `# fmt: off` sections (#224)
+
+- fixed invalid code produced when stars in very long imports were incorrectly wrapped
+  in optional parentheses (#234)
+
+- fixed unstable formatting when inline comments were moved around in a trailer that was
+  omitted from line splitting on a large expression (#238)
+
+- fixed extra empty line between a class declaration and the first method if no class
+  docstring or fields are present (#219)
+
+- fixed extra empty line between a function signature and an inner function or inner
+  class (#196)
+
+## 18.5b0
+
+- call chains are now formatted according to the
+  [fluent interfaces](https://en.wikipedia.org/wiki/Fluent_interface) style (#67)
+
+- data structure literals (tuples, lists, dictionaries, and sets) are now also always
+  exploded like imports when they don't fit in a single line (#152)
+
+- slices are now formatted according to PEP 8 (#178)
+
+- parentheses are now also managed automatically on the right-hand side of assignments
+  and return statements (#140)
+
+- math operators now use their respective priorities for delimiting multiline
+  expressions (#148)
+
+- optional parentheses are now omitted on expressions that start or end with a bracket
+  and only contain a single operator (#177)
+
+- empty parentheses in a class definition are now removed (#145, #180)
+
+- string prefixes are now standardized to lowercase and `u` is removed on Python 3.6+
+  only code and Python 2.7+ code with the `unicode_literals` future import (#188, #198,
+  #199)
+
+- typing stub files (`.pyi`) are now formatted in a style that is consistent with PEP
+  484 (#207, #210)
+
+- progress when reformatting many files is now reported incrementally
+
+- fixed trailers (content with brackets) being unnecessarily exploded into their own
+  lines after a dedented closing bracket (#119)
+
+- fixed an invalid trailing comma sometimes left in imports (#185)
+
+- fixed non-deterministic formatting when multiple pairs of removable parentheses were
+  used (#183)
+
+- fixed multiline strings being unnecessarily wrapped in optional parentheses in long
+  assignments (#215)
+
+- fixed not splitting long from-imports with only a single name
+
+- fixed Python 3.6+ file discovery by also looking at function calls with unpacking.
+  This fixed non-deterministic formatting if trailing commas were used both in function
+  signatures with stars and function calls with stars but the former would be
+  reformatted to a single line.
+
+- fixed crash on dealing with optional parentheses (#193)
+
+- fixed "is", "is not", "in", and "not in" not considered operators for splitting
+  purposes
+
+- fixed crash when dead symlinks were encountered
+
+## 18.4a4
+
+- don't populate the cache on `--check` (#175)
+
+## 18.4a3
+
+- added a "cache"; files already reformatted that haven't changed on disk won't be
+  reformatted again (#109)
+
+- `--check` and `--diff` are no longer mutually exclusive (#149)
+
+- generalized star expression handling, including double stars; this fixes
+  multiplication making expressions "unsafe" for trailing commas (#132)
+
+- _Black_ no longer enforces putting empty lines behind control flow statements (#90)
+
+- _Black_ now splits imports like "Mode 3 + trailing comma" of isort (#127)
+
+- fixed comment indentation when a standalone comment closes a block (#16, #32)
+
+- fixed standalone comments receiving extra empty lines if immediately preceding a
+  class, def, or decorator (#56, #154)
+
+- fixed `--diff` not showing entire path (#130)
+
+- fixed parsing of complex expressions after star and double stars in function calls
+  (#2)
+
+- fixed invalid splitting on comma in lambda arguments (#133)
+
+- fixed missing splits of ternary expressions (#141)
+
+## 18.4a2
+
+- fixed parsing of unaligned standalone comments (#99, #112)
+
+- fixed placement of dictionary unpacking inside dictionary literals (#111)
+
+- Vim plugin now works on Windows, too
+
+- fixed unstable formatting when encountering unnecessarily escaped quotes in a string
+  (#120)
+
+## 18.4a1
+
+- added `--quiet` (#78)
+
+- added automatic parentheses management (#4)
+
+- added [pre-commit](https://pre-commit.com) integration (#103, #104)
+
+- fixed reporting on `--check` with multiple files (#101, #102)
+
+- fixed removing backslash escapes from raw strings (#100, #105)
+
+## 18.4a0
+
+- added `--diff` (#87)
+
+- add line breaks before all delimiters, except in cases like commas, to better comply
+  with PEP 8 (#73)
+
+- standardize string literals to use double quotes (almost) everywhere (#75)
+
+- fixed handling of standalone comments within nested bracketed expressions; _Black_
+  will no longer produce super long lines or put all standalone comments at the end of
+  the expression (#22)
+
+- fixed 18.3a4 regression: don't crash and burn on empty lines with trailing whitespace
+  (#80)
+
+- fixed 18.3a4 regression: `# yapf: disable` usage as trailing comment would cause
+  _Black_ to not emit the rest of the file (#95)
+
+- when CTRL+C is pressed while formatting many files, _Black_ no longer freaks out with
+  a flurry of asyncio-related exceptions
+
+- only allow up to two empty lines on module level and only single empty lines within
+  functions (#74)
+
+## 18.3a4
+
+- `# fmt: off` and `# fmt: on` are implemented (#5)
+
+- automatic detection of deprecated Python 2 forms of print statements and exec
+  statements in the formatted file (#49)
+
+- use proper spaces for complex expressions in default values of typed function
+  arguments (#60)
+
+- only return exit code 1 when --check is used (#50)
+
+- don't remove single trailing commas from square bracket indexing (#59)
+
+- don't omit whitespace if the previous factor leaf wasn't a math operator (#55)
+
+- omit extra space in kwarg unpacking if it's the first argument (#46)
+
+- omit extra space in
+  [Sphinx auto-attribute comments](http://www.sphinx-doc.org/en/stable/ext/autodoc.html#directive-autoattribute)
+  (#68)
+
+## 18.3a3
+
+- don't remove single empty lines outside of bracketed
+  expressions (#19)
+
+- added ability to pipe formatting from stdin to stdout (#25)
+
+- restored ability to format code with legacy usage of `async` as a name (#20, #42)
+
+- even better handling of numpy-style array indexing (#33, again)
+
+## 18.3a2
+
+- changed positioning of binary operators to occur at beginning of lines instead of at
+  the end, following
+  [a recent change to PEP 8](https://github.com/python/peps/commit/c59c4376ad233a62ca4b3a6060c81368bd21e85b)
+  (#21)
+
+- ignore empty bracket pairs while splitting. This avoids very weird-looking
+  formatting (#34, #35)
+
+- remove a trailing comma if there is a single argument to a call
+
+- if top level functions were separated by a comment, don't put four empty lines after
+  the upper function
+
+- fixed unstable formatting of newlines with imports
+
+- fixed unintentional folding of post scriptum standalone comments into last statement
+  if it was a simple statement (#18, #28)
+
+- fixed missing space in numpy-style array indexing (#33)
+
+- fixed spurious space after star-based unary expressions (#31)
+
+## 18.3a1
+
+- added `--check`
+
+- only put trailing commas in function signatures and calls if it's safe to do so. If
+  the file is Python 3.6+ it's always safe, otherwise only safe if there are no `*args`
+  or `**kwargs` used in the signature or call. (#8)
+
+- fixed invalid spacing of dots in relative imports (#6, #13)
+
+- fixed invalid splitting after comma on unpacked variables in for-loops (#23)
+
+- fixed spurious space in parenthesized set expressions (#7)
+
+- fixed spurious space after opening parentheses and in default arguments (#14, #17)
+
+- fixed spurious space after unary operators when the operand was a complex expression
+  (#15)
+
+## 18.3a0
+
+- first published version, Happy 🍰 Day 2018!
+ +- alpha quality + +- date-versioned (see: ) diff --git a/env/lib/python3.8/site-packages/black-22.10.0.dist-info/RECORD b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/RECORD new file mode 100644 index 00000000..378cf45d --- /dev/null +++ b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/RECORD @@ -0,0 +1,114 @@ +../../../bin/black,sha256=ObOHEjplrW3RbMlAmjRDjQsi-V3dN1zefEBeSRUHHI4,270 +../../../bin/blackd,sha256=fEBd1UudWCxOVIOa_hVHY-yvnhwAp3qa4Vrx8pPk-f4,271 +6b397dd64e00b5aff23d__mypyc.cpython-38-x86_64-linux-gnu.so,sha256=Dm9jGM8QabRn5p8L8xLYfCf9nIv6omB6HutQ-cnW0Kw,3864552 +__pycache__/_black_version.cpython-38.pyc,, +_black_version.py,sha256=7tclAnbRh-vZyRzvmtzh97WIQSEon9SNEVYpr4vB5Fw,20 +black-22.10.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +black-22.10.0.dist-info/METADATA,sha256=voJD70reC7LdUv50v3JOJV9GMsa1CXyMqARy7ItFDzU,50980 +black-22.10.0.dist-info/RECORD,, +black-22.10.0.dist-info/WHEEL,sha256=2FjIQj2QPtEtdfb4HU-aKejOOLVCzJGjAAcUJyZ3jVI,144 +black-22.10.0.dist-info/entry_points.txt,sha256=qBIyywHwGRkJj7kieq86kqf77rz3qGC4Joj36lHnxwc,78 +black-22.10.0.dist-info/licenses/AUTHORS.md,sha256=3z40-svLSJHXpYTX3yEyHNQqiuE-ScuWqo6TgZ2cAAQ,7976 +black-22.10.0.dist-info/licenses/LICENSE,sha256=nAQo8MO0d5hQz1vZbhGqqK_HLUqG1KNiI9erouWNbgA,1080 +black/__init__.cpython-38-x86_64-linux-gnu.so,sha256=CgJ6c-AKMv9QtNvRCLmR_V8rzDsVrRn-XpI5XUK2kJg,8248 +black/__init__.py,sha256=1-C_cpSXfeVM66yjknWJ6BFwWxhaTIJ9yZ0DY-3-_FA,44943 +black/__main__.py,sha256=mogeA4o9zt4w-ufKvaQjSEhtSgQkcMVLK9ChvdB5wH8,47 +black/__pycache__/__init__.cpython-38.pyc,, +black/__pycache__/__main__.cpython-38.pyc,, +black/__pycache__/brackets.cpython-38.pyc,, +black/__pycache__/cache.cpython-38.pyc,, +black/__pycache__/comments.cpython-38.pyc,, +black/__pycache__/concurrency.cpython-38.pyc,, +black/__pycache__/const.cpython-38.pyc,, +black/__pycache__/debug.cpython-38.pyc,, +black/__pycache__/files.cpython-38.pyc,, +black/__pycache__/handle_ipynb_magics.cpython-38.pyc,, +black/__pycache__/linegen.cpython-38.pyc,, +black/__pycache__/lines.cpython-38.pyc,, +black/__pycache__/mode.cpython-38.pyc,, +black/__pycache__/nodes.cpython-38.pyc,, +black/__pycache__/numerics.cpython-38.pyc,, +black/__pycache__/output.cpython-38.pyc,, +black/__pycache__/parsing.cpython-38.pyc,, +black/__pycache__/report.cpython-38.pyc,, +black/__pycache__/rusty.cpython-38.pyc,, +black/__pycache__/strings.cpython-38.pyc,, +black/__pycache__/trans.cpython-38.pyc,, +black/brackets.cpython-38-x86_64-linux-gnu.so,sha256=MaBc3dHQRXie8jpteDTNzkt18yTHUa3cB1Fwj9jWG20,8256 +black/brackets.py,sha256=qDBZarnbUQTCKCLxNOiTQDCPvaBIaTmPrHgBOzKEZq4,10759 +black/cache.cpython-38-x86_64-linux-gnu.so,sha256=9t7Pb6fIK9WGTFaFVQ6gHljDxkugHX-yX44ZpP2Ylck,8256 +black/cache.py,sha256=yC8aCiOGpNq2kqsrtSUCjjH6NmTXMkUKg9gO9w4pUEg,2946 +black/comments.cpython-38-x86_64-linux-gnu.so,sha256=OIn893wiLK9UaJgXj1ijeXY-7T30lxPq4SIPGT_7f4w,8256 +black/comments.py,sha256=UyqZNngQnbUX8H-C19loa_L6Te4LKEetEHvDxmrryZ4,12722 +black/concurrency.py,sha256=Cy1b7t6QhNc_2zsOt3yEGRS_fB68wxh4owpj3zdxv90,6424 +black/const.cpython-38-x86_64-linux-gnu.so,sha256=DVIbqk7inqgOwR3fWfQ81lBt8hJ0CpfV5IzuPH9Ho6U,8256 +black/const.py,sha256=4_6Bjr4eLrEEGY8uX_ZgR1Y90f3ok21RGlgDFgwGfjQ,284 +black/debug.py,sha256=PDBJeevHiMPzdQTZmC4kCZbQxyBXOSzQvIVnrFCXALU,1594 +black/files.py,sha256=vjhQFaYfJf0UhaBqhGoneGffJWMCWfn1Kli9d84lWFk,9367 +black/handle_ipynb_magics.cpython-38-x86_64-linux-gnu.so,sha256=zg4gSIXXxZQ-7Kq7fGz2ETjFGHgZxuNGKh2Zu6pW2_I,8280 
+black/handle_ipynb_magics.py,sha256=6pR23tJlZ0pg2Md8PQfM1CRvqXGhdMkUmz39mkpWS1E,13499 +black/linegen.cpython-38-x86_64-linux-gnu.so,sha256=Fkcw8RSlj2cZVw-manxcJ26pyuZJmZ0z6dw5JNBVNd4,8256 +black/linegen.py,sha256=biRuM1KaBN_rCgizfToBKCEUM67KfqG3SQeVLXj1cFA,51771 +black/lines.cpython-38-x86_64-linux-gnu.so,sha256=vQ2iDSE81PXnE1R4al35wGHgIRpyx4zM-tRsXskaciY,8256 +black/lines.py,sha256=dXO49NbKL_GS6UtQFOgSarb_ZdcwdIGgxixV2rAuezE,28723 +black/mode.cpython-38-x86_64-linux-gnu.so,sha256=0xnWy0BnahCBJ2pIYirOHxVTYucS59WbfDCd27yqmwc,8248 +black/mode.py,sha256=ZChlN_mefP4RGfP3uPPRVZLi4xGXQz31U3L-A2MzXr4,6686 +black/nodes.cpython-38-x86_64-linux-gnu.so,sha256=U3jkifUwUmJC7fklreBFcQFxZRzCnmRrajP-zuH00wA,8256 +black/nodes.py,sha256=TaSt4uHHHrU0m4WzDBlf8NJA7ILyixwiS-4fEkS8wnM,24059 +black/numerics.cpython-38-x86_64-linux-gnu.so,sha256=CZOKtSqmDlYxUjisBYINy83spP-_Z0VDU3XV0Pa9uc0,8256 +black/numerics.py,sha256=TBID6blEBXZgGIigMXS5JpX2-Trv9lK3WoWPkkHrpac,1653 +black/output.py,sha256=qfdOuT8z5WSm_GxwP24X5XrjrxPacjj1k1gC40crJGw,3486 +black/parsing.cpython-38-x86_64-linux-gnu.so,sha256=xXD5r2J1DSGZHh-7Qa99OzobW9iec4L5Q9zdtm6-lGo,8256 +black/parsing.py,sha256=PClNCvq0Ml_G6TBy48NkqCtSnCnP7Y_CV5EVUZv3Jrg,9886 +black/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +black/report.py,sha256=vcZXcQwoT2LPYD0AoL7w3tshKw_i5DjxGRsqqLZQksc,3451 +black/rusty.cpython-38-x86_64-linux-gnu.so,sha256=gj8vFgpPM-ELkwHb9cSNPZDcnZODYt-8Zc_zTBKqro0,8256 +black/rusty.py,sha256=kmKIJpD9J0bPfkQ67Sy8HkVcx-5CF02acggiaaxM7sc,556 +black/strings.cpython-38-x86_64-linux-gnu.so,sha256=9tHjPUNcQKSQaej0Ei2LHCKhw9ySxsOV78tAu1Oi-9E,8256 +black/strings.py,sha256=cTo6WV5fL0mvLRlynv4Q8ssjKOlkhUyN_BJAXJQHEgI,7972 +black/trans.cpython-38-x86_64-linux-gnu.so,sha256=e8rLoUIBkoTdSqe7_A6AhLlHMBu4iznsTlJTmJyhGnw,8256 +black/trans.py,sha256=fS58HOnjWjUB9SMSneNim7ejCIVk6KlFpdWzR0Ufhbw,82766 +blackd/__init__.py,sha256=cTJotjnER9YB0rDWo3hOWSg4v0mNAx5DFmCJdyB1s5k,8068 +blackd/__main__.py,sha256=L4xAcDh1K5zb6SsJB102AewW2G13P9-w2RiEwuFj8WA,37 +blackd/__pycache__/__init__.cpython-38.pyc,, +blackd/__pycache__/__main__.cpython-38.pyc,, +blackd/__pycache__/middlewares.cpython-38.pyc,, +blackd/middlewares.py,sha256=QS7cs86Ojuaqh64dGneimhJ-f30rDI646c27ts4Dwh0,1585 +blib2to3/Grammar.txt,sha256=FHZR_yGNvUagcPcO9eia_kYxyzXRKU8JWaTJDOO34To,11267 +blib2to3/LICENSE,sha256=V4mIG4rrnJH1g19bt8q-hKD-zUuyvi9UyeaVenjseZ0,12762 +blib2to3/PatternGrammar.txt,sha256=7lul2ztnIqDi--JWDrwciD5yMo75w7TaHHxdHMZJvOM,793 +blib2to3/README,sha256=C_K4Os-RntLrCpCHOKr43cnhQAaG94psjrgOX6tywg8,1071 +blib2to3/__init__.py,sha256=9_8wL9Scv8_Cs8HJyJHGvx1vwXErsuvlsAqNZLcJQR0,8 +blib2to3/__pycache__/__init__.cpython-38.pyc,, +blib2to3/__pycache__/pygram.cpython-38.pyc,, +blib2to3/__pycache__/pytree.cpython-38.pyc,, +blib2to3/pgen2/__init__.py,sha256=hY6w9QUzvTvRb-MoFfd_q_7ZLt6IUHC2yxWCfsZupQA,143 +blib2to3/pgen2/__pycache__/__init__.cpython-38.pyc,, +blib2to3/pgen2/__pycache__/conv.cpython-38.pyc,, +blib2to3/pgen2/__pycache__/driver.cpython-38.pyc,, +blib2to3/pgen2/__pycache__/grammar.cpython-38.pyc,, +blib2to3/pgen2/__pycache__/literals.cpython-38.pyc,, +blib2to3/pgen2/__pycache__/parse.cpython-38.pyc,, +blib2to3/pgen2/__pycache__/pgen.cpython-38.pyc,, +blib2to3/pgen2/__pycache__/token.cpython-38.pyc,, +blib2to3/pgen2/__pycache__/tokenize.cpython-38.pyc,, +blib2to3/pgen2/conv.cpython-38-x86_64-linux-gnu.so,sha256=bdsKfq3xEPmqDJuHYKHOHZU8ewrV16kTvByatNGLhzc,8256 +blib2to3/pgen2/conv.py,sha256=wGS0vapMV81PvxisAentOrXCZX1IlCFulQDW5Ha9Ueo,9607 
+blib2to3/pgen2/driver.cpython-38-x86_64-linux-gnu.so,sha256=-VETopCd7YDZ6ZRsVwESTe2v-I7pae3dRGEGr5DZRFI,8264 +blib2to3/pgen2/driver.py,sha256=A5HNBnHrl1amhullooyqWEtU2jOnByIketrdbb5zMmY,10670 +blib2to3/pgen2/grammar.cpython-38-x86_64-linux-gnu.so,sha256=VofOv9mFQF-ERjSRFNGH6xxkRg2SMTpgTPW3qbQxjT0,8264 +blib2to3/pgen2/grammar.py,sha256=EJW7w2JIZtzi_g1JWAmONf4kCH8KJD6GVn_1R1uFnDA,6873 +blib2to3/pgen2/literals.cpython-38-x86_64-linux-gnu.so,sha256=COjetWplcjE1FjAvcOYUm0dxPSivxWaZXe5BwAIJ37Q,8264 +blib2to3/pgen2/literals.py,sha256=vbzw1RX0NvATBR971WwRNtp68fsXky4-pV4cBZ02_9E,1628 +blib2to3/pgen2/parse.cpython-38-x86_64-linux-gnu.so,sha256=pt5Xq1hXDGN42bLPIiZ6gF6JvtSN-trKVJqVEVI08cQ,8264 +blib2to3/pgen2/parse.py,sha256=QJhDkRCgEtc6jQterhuNwvtURIzuz_8Forq5X5k4mQo,14853 +blib2to3/pgen2/pgen.cpython-38-x86_64-linux-gnu.so,sha256=jWb8I3MBi8s8CQ21C1rEp2Y2zrP2hHPctfT-_uzvYr8,8256 +blib2to3/pgen2/pgen.py,sha256=8gkqFARDLVp8tdpjxjJEP1YytCZvNrLl5tuK6SdM_nc,15491 +blib2to3/pgen2/token.cpython-38-x86_64-linux-gnu.so,sha256=d0XKLPE6koNzE9PuWaNKQPHrqgB2cwb_0Lrihc3u4qg,8264 +blib2to3/pgen2/token.py,sha256=eDZWhONvSUNGsWYzHhgBd1pIQ2HlVPP7aon0S1CLuKE,1919 +blib2to3/pgen2/tokenize.cpython-38-x86_64-linux-gnu.so,sha256=TG2zcweKQXFnRiV-ZFAZRIBYAd6l4iKxZqZmsgeJULw,8264 +blib2to3/pgen2/tokenize.py,sha256=CT6TXaY9RJkboV9NJcxQf7KZo07KpxsjOSY4gymGDD4,22842 +blib2to3/pygram.cpython-38-x86_64-linux-gnu.so,sha256=tPbQlQaRVCe5qKTas2DEtjAxrdGDSu7J-0ibH7FBhKk,8256 +blib2to3/pygram.py,sha256=uCU5Dyc1pObqP1TOJ9IjOkpVqaVpsg-nRovAJt7xOrU,5732 +blib2to3/pytree.cpython-38-x86_64-linux-gnu.so,sha256=DCfpUopCKbhU6sHZZxLhnL1U8bC9Z9wniGItaH1IZEQ,8256 +blib2to3/pytree.py,sha256=We4PqLlpIOT2-625It5HN9ToD8a1OHrl5_FYmDE_0v4,32234 diff --git a/env/lib/python3.8/site-packages/black-22.10.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/WHEEL new file mode 100644 index 00000000..cce0f8d6 --- /dev/null +++ b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: hatchling 1.10.0 +Root-Is-Purelib: false +Tag: cp38-cp38-manylinux_2_17_x86_64 +Tag: cp38-cp38-manylinux2014_x86_64 + diff --git a/env/lib/python3.8/site-packages/black-22.10.0.dist-info/entry_points.txt b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/entry_points.txt new file mode 100644 index 00000000..d0bf9079 --- /dev/null +++ b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[console_scripts] +black = black:patched_main +blackd = blackd:patched_main [d] diff --git a/env/lib/python3.8/site-packages/black-22.10.0.dist-info/licenses/AUTHORS.md b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/licenses/AUTHORS.md new file mode 100644 index 00000000..f2d599dd --- /dev/null +++ b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/licenses/AUTHORS.md @@ -0,0 +1,194 @@ +# Authors + +Glued together by [Łukasz Langa](mailto:lukasz@langa.pl). 
+ +Maintained with: + +- [Carol Willing](mailto:carolcode@willingconsulting.com) +- [Carl Meyer](mailto:carl@oddbird.net) +- [Jelle Zijlstra](mailto:jelle.zijlstra@gmail.com) +- [Mika Naylor](mailto:mail@autophagy.io) +- [Zsolt Dollenstein](mailto:zsol.zsol@gmail.com) +- [Cooper Lees](mailto:me@cooperlees.com) +- Richard Si +- [Felix Hildén](mailto:felix.hilden@gmail.com) +- [Batuhan Taskaya](mailto:batuhan@python.org) + +Multiple contributions by: + +- [Abdur-Rahmaan Janhangeer](mailto:arj.python@gmail.com) +- [Adam Johnson](mailto:me@adamj.eu) +- [Adam Williamson](mailto:adamw@happyassassin.net) +- [Alexander Huynh](mailto:ahrex-gh-psf-black@e.sc) +- [Alexandr Artemyev](mailto:mogost@gmail.com) +- [Alex Vandiver](mailto:github@chmrr.net) +- [Allan Simon](mailto:allan.simon@supinfo.com) +- Anders-Petter Ljungquist +- [Andrew Thorp](mailto:andrew.thorp.dev@gmail.com) +- [Andrew Zhou](mailto:andrewfzhou@gmail.com) +- [Andrey](mailto:dyuuus@yandex.ru) +- [Andy Freeland](mailto:andy@andyfreeland.net) +- [Anthony Sottile](mailto:asottile@umich.edu) +- [Antonio Ossa Guerra](mailto:aaossa+black@uc.cl) +- [Arjaan Buijk](mailto:arjaan.buijk@gmail.com) +- [Arnav Borbornah](mailto:arnavborborah11@gmail.com) +- [Artem Malyshev](mailto:proofit404@gmail.com) +- [Asger Hautop Drewsen](mailto:asgerdrewsen@gmail.com) +- [Augie Fackler](mailto:raf@durin42.com) +- [Aviskar KC](mailto:aviskarkc10@gmail.com) +- Batuhan Taşkaya +- [Benjamin Wohlwend](mailto:bw@piquadrat.ch) +- [Benjamin Woodruff](mailto:github@benjam.info) +- [Bharat Raghunathan](mailto:bharatraghunthan9767@gmail.com) +- [Brandt Bucher](mailto:brandtbucher@gmail.com) +- [Brett Cannon](mailto:brett@python.org) +- [Bryan Bugyi](mailto:bryan.bugyi@rutgers.edu) +- [Bryan Forbes](mailto:bryan@reigndropsfall.net) +- [Calum Lind](mailto:calumlind@gmail.com) +- [Charles](mailto:peacech@gmail.com) +- Charles Reid +- [Christian Clauss](mailto:cclauss@bluewin.ch) +- [Christian Heimes](mailto:christian@python.org) +- [Chuck Wooters](mailto:chuck.wooters@microsoft.com) +- [Chris Rose](mailto:offline@offby1.net) +- Codey Oxley +- [Cong](mailto:congusbongus@gmail.com) +- [Cooper Ry Lees](mailto:me@cooperlees.com) +- [Dan Davison](mailto:dandavison7@gmail.com) +- [Daniel Hahler](mailto:github@thequod.de) +- [Daniel M. Capella](mailto:polycitizen@gmail.com) +- Daniele Esposti +- [David Hotham](mailto:david.hotham@metaswitch.com) +- [David Lukes](mailto:dafydd.lukes@gmail.com) +- [David Szotten](mailto:davidszotten@gmail.com) +- [Denis Laxalde](mailto:denis@laxalde.org) +- [Douglas Thor](mailto:dthor@transphormusa.com) +- dylanjblack +- [Eli Treuherz](mailto:eli@treuherz.com) +- [Emil Hessman](mailto:emil@hessman.se) +- [Felix Kohlgrüber](mailto:felix.kohlgrueber@gmail.com) +- [Florent Thiery](mailto:fthiery@gmail.com) +- Francisco +- [Giacomo Tagliabue](mailto:giacomo.tag@gmail.com) +- [Greg Gandenberger](mailto:ggandenberger@shoprunner.com) +- [Gregory P. 
Smith](mailto:greg@krypto.org) +- Gustavo Camargo +- hauntsaninja +- [Hadi Alqattan](mailto:alqattanhadizaki@gmail.com) +- [Hassan Abouelela](mailto:hassan@hassanamr.com) +- [Heaford](mailto:dan@heaford.com) +- [Hugo Barrera](mailto::hugo@barrera.io) +- Hugo van Kemenade +- [Hynek Schlawack](mailto:hs@ox.cx) +- [Ionite](mailto:dev@ionite.io) +- [Ivan Katanić](mailto:ivan.katanic@gmail.com) +- [Jakub Kadlubiec](mailto:jakub.kadlubiec@skyscanner.net) +- [Jakub Warczarek](mailto:jakub.warczarek@gmail.com) +- [Jan Hnátek](mailto:jan.hnatek@gmail.com) +- [Jason Fried](mailto:me@jasonfried.info) +- [Jason Friedland](mailto:jason@friedland.id.au) +- [jgirardet](mailto:ijkl@netc.fr) +- Jim Brännlund +- [Jimmy Jia](mailto:tesrin@gmail.com) +- [Joe Antonakakis](mailto:jma353@cornell.edu) +- [Jon Dufresne](mailto:jon.dufresne@gmail.com) +- [Jonas Obrist](mailto:ojiidotch@gmail.com) +- [Jonty Wareing](mailto:jonty@jonty.co.uk) +- [Jose Nazario](mailto:jose.monkey.org@gmail.com) +- [Joseph Larson](mailto:larson.joseph@gmail.com) +- [Josh Bode](mailto:joshbode@fastmail.com) +- [Josh Holland](mailto:anowlcalledjosh@gmail.com) +- [Joshua Cannon](mailto:joshdcannon@gmail.com) +- [José Padilla](mailto:jpadilla@webapplicate.com) +- [Juan Luis Cano Rodríguez](mailto:hello@juanlu.space) +- [kaiix](mailto:kvn.hou@gmail.com) +- [Katie McLaughlin](mailto:katie@glasnt.com) +- Katrin Leinweber +- [Keith Smiley](mailto:keithbsmiley@gmail.com) +- [Kenyon Ralph](mailto:kenyon@kenyonralph.com) +- [Kevin Kirsche](mailto:Kev.Kirsche+GitHub@gmail.com) +- [Kyle Hausmann](mailto:kyle.hausmann@gmail.com) +- [Kyle Sunden](mailto:sunden@wisc.edu) +- Lawrence Chan +- [Linus Groh](mailto:mail@linusgroh.de) +- [Loren Carvalho](mailto:comradeloren@gmail.com) +- [Luka Sterbic](mailto:luka.sterbic@gmail.com) +- [LukasDrude](mailto:mail@lukas-drude.de) +- Mahmoud Hossam +- Mariatta +- [Matt VanEseltine](mailto:vaneseltine@gmail.com) +- [Matthew Clapp](mailto:itsayellow+dev@gmail.com) +- [Matthew Walster](mailto:matthew@walster.org) +- Max Smolens +- [Michael Aquilina](mailto:michaelaquilina@gmail.com) +- [Michael Flaxman](mailto:michael.flaxman@gmail.com) +- [Michael J. 
Sullivan](mailto:sully@msully.net) +- [Michael McClimon](mailto:michael@mcclimon.org) +- [Miguel Gaiowski](mailto:miggaiowski@gmail.com) +- [Mike](mailto:roshi@fedoraproject.org) +- [mikehoyio](mailto:mikehoy@gmail.com) +- [Min ho Kim](mailto:minho42@gmail.com) +- [Miroslav Shubernetskiy](mailto:miroslav@miki725.com) +- MomIsBestFriend +- [Nathan Goldbaum](mailto:ngoldbau@illinois.edu) +- [Nathan Hunt](mailto:neighthan.hunt@gmail.com) +- [Neraste](mailto:neraste.herr10@gmail.com) +- [Nikolaus Waxweiler](mailto:madigens@gmail.com) +- [Ofek Lev](mailto:ofekmeister@gmail.com) +- [Osaetin Daniel](mailto:osaetindaniel@gmail.com) +- [otstrel](mailto:otstrel@gmail.com) +- [Pablo Galindo](mailto:Pablogsal@gmail.com) +- [Paul Ganssle](mailto:p.ganssle@gmail.com) +- [Paul Meinhardt](mailto:mnhrdt@gmail.com) +- [Peter Bengtsson](mailto:mail@peterbe.com) +- [Peter Grayson](mailto:pete@jpgrayson.net) +- [Peter Stensmyr](mailto:peter.stensmyr@gmail.com) +- pmacosta +- [Quentin Pradet](mailto:quentin@pradet.me) +- [Ralf Schmitt](mailto:ralf@systemexit.de) +- [Ramón Valles](mailto:mroutis@protonmail.com) +- [Richard Fearn](mailto:richardfearn@gmail.com) +- [Rishikesh Jha](mailto:rishijha424@gmail.com) +- [Rupert Bedford](mailto:rupert@rupertb.com) +- Russell Davis +- [Sagi Shadur](mailto:saroad2@gmail.com) +- [Rémi Verschelde](mailto:rverschelde@gmail.com) +- [Sami Salonen](mailto:sakki@iki.fi) +- [Samuel Cormier-Iijima](mailto:samuel@cormier-iijima.com) +- [Sanket Dasgupta](mailto:sanketdasgupta@gmail.com) +- Sergi +- [Scott Stevenson](mailto:scott@stevenson.io) +- Shantanu +- [shaoran](mailto:shaoran@sakuranohana.org) +- [Shinya Fujino](mailto:shf0811@gmail.com) +- springstan +- [Stavros Korokithakis](mailto:hi@stavros.io) +- [Stephen Rosen](mailto:sirosen@globus.org) +- [Steven M. 
Vascellaro](mailto:S.Vascellaro@gmail.com) +- [Sunil Kapil](mailto:snlkapil@gmail.com) +- [Sébastien Eustace](mailto:sebastien.eustace@gmail.com) +- [Tal Amuyal](mailto:TalAmuyal@gmail.com) +- [Terrance](mailto:git@terrance.allofti.me) +- [Thom Lu](mailto:thomas.c.lu@gmail.com) +- [Thomas Grainger](mailto:tagrain@gmail.com) +- [Tim Gates](mailto:tim.gates@iress.com) +- [Tim Swast](mailto:swast@google.com) +- [Timo](mailto:timo_tk@hotmail.com) +- Toby Fleming +- [Tom Christie](mailto:tom@tomchristie.com) +- [Tony Narlock](mailto:tony@git-pull.com) +- [Tsuyoshi Hombashi](mailto:tsuyoshi.hombashi@gmail.com) +- [Tushar Chandra](mailto:tusharchandra2018@u.northwestern.edu) +- [Tzu-ping Chung](mailto:uranusjr@gmail.com) +- [Utsav Shah](mailto:ukshah2@illinois.edu) +- utsav-dbx +- vezeli +- [Ville Skyttä](mailto:ville.skytta@iki.fi) +- [Vishwas B Sharma](mailto:sharma.vishwas88@gmail.com) +- [Vlad Emelianov](mailto:volshebnyi@gmail.com) +- [williamfzc](mailto:178894043@qq.com) +- [wouter bolsterlee](mailto:wouter@bolsterl.ee) +- Yazdan +- [Yngve Høiseth](mailto:yngve@hoiseth.net) +- [Yurii Karabas](mailto:1998uriyyo@gmail.com) +- [Zac Hatfield-Dodds](mailto:zac@zhd.dev) diff --git a/env/lib/python3.8/site-packages/black-22.10.0.dist-info/licenses/LICENSE b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/licenses/LICENSE new file mode 100644 index 00000000..7a9b891f --- /dev/null +++ b/env/lib/python3.8/site-packages/black-22.10.0.dist-info/licenses/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2018 Łukasz Langa + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
diff --git a/env/lib/python3.8/site-packages/black/__init__.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/__init__.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..dc9fdca7 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/__init__.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/__init__.py b/env/lib/python3.8/site-packages/black/__init__.py new file mode 100644 index 00000000..afd71e51 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/__init__.py @@ -0,0 +1,1403 @@ +import io +import json +import platform +import re +import sys +import tokenize +import traceback +from contextlib import contextmanager +from dataclasses import replace +from datetime import datetime +from enum import Enum +from json.decoder import JSONDecodeError +from pathlib import Path +from typing import ( + Any, + Dict, + Generator, + Iterator, + List, + MutableMapping, + Optional, + Pattern, + Sequence, + Set, + Sized, + Tuple, + Union, +) + +import click +from click.core import ParameterSource +from mypy_extensions import mypyc_attr +from pathspec.patterns.gitwildmatch import GitWildMatchPatternError + +from _black_version import version as __version__ +from black.cache import Cache, get_cache_info, read_cache, write_cache +from black.comments import normalize_fmt_off +from black.const import ( + DEFAULT_EXCLUDES, + DEFAULT_INCLUDES, + DEFAULT_LINE_LENGTH, + STDIN_PLACEHOLDER, +) +from black.files import ( + find_project_root, + find_pyproject_toml, + find_user_pyproject_toml, + gen_python_files, + get_gitignore, + normalize_path_maybe_ignore, + parse_pyproject_toml, + wrap_stream_for_windows, +) +from black.handle_ipynb_magics import ( + PYTHON_CELL_MAGICS, + TRANSFORMED_MAGICS, + jupyter_dependencies_are_installed, + mask_cell, + put_trailing_semicolon_back, + remove_trailing_semicolon, + unmask_cell, +) +from black.linegen import LN, LineGenerator, transform_line +from black.lines import EmptyLineTracker, Line +from black.mode import ( + FUTURE_FLAG_TO_FEATURE, + VERSION_TO_FEATURES, + Feature, + Mode, + TargetVersion, + supports_feature, +) +from black.nodes import ( + STARS, + is_number_token, + is_simple_decorator_expression, + is_string_token, + syms, +) +from black.output import color_diff, diff, dump_to_file, err, ipynb_diff, out +from black.parsing import InvalidInput # noqa F401 +from black.parsing import lib2to3_parse, parse_ast, stringify_ast +from black.report import Changed, NothingChanged, Report +from black.trans import iter_fexpr_spans +from blib2to3.pgen2 import token +from blib2to3.pytree import Leaf, Node + +COMPILED = Path(__file__).suffix in (".pyd", ".so") + +# types +FileContent = str +Encoding = str +NewLine = str + + +class WriteBack(Enum): + NO = 0 + YES = 1 + DIFF = 2 + CHECK = 3 + COLOR_DIFF = 4 + + @classmethod + def from_configuration( + cls, *, check: bool, diff: bool, color: bool = False + ) -> "WriteBack": + if check and not diff: + return cls.CHECK + + if diff and color: + return cls.COLOR_DIFF + + return cls.DIFF if diff else cls.YES + + +# Legacy name, left for integrations. +FileMode = Mode + + +def read_pyproject_toml( + ctx: click.Context, param: click.Parameter, value: Optional[str] +) -> Optional[str]: + """Inject Black configuration from "pyproject.toml" into defaults in `ctx`. + + Returns the path to a successfully found and read configuration file, None + otherwise. 
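+
+    Roughly, a pyproject.toml with a `[tool.black]` table containing
+    `line-length = 100` surfaces here as `{"line_length": "100"}` in
+    `ctx.default_map` (an illustration; scalar values are stringified below
+    to keep Click happy).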
+ """ + if not value: + value = find_pyproject_toml(ctx.params.get("src", ())) + if value is None: + return None + + try: + config = parse_pyproject_toml(value) + except (OSError, ValueError) as e: + raise click.FileError( + filename=value, hint=f"Error reading configuration file: {e}" + ) from None + + if not config: + return None + else: + # Sanitize the values to be Click friendly. For more information please see: + # https://github.com/psf/black/issues/1458 + # https://github.com/pallets/click/issues/1567 + config = { + k: str(v) if not isinstance(v, (list, dict)) else v + for k, v in config.items() + } + + target_version = config.get("target_version") + if target_version is not None and not isinstance(target_version, list): + raise click.BadOptionUsage( + "target-version", "Config key target-version must be a list" + ) + + default_map: Dict[str, Any] = {} + if ctx.default_map: + default_map.update(ctx.default_map) + default_map.update(config) + + ctx.default_map = default_map + return value + + +def target_version_option_callback( + c: click.Context, p: Union[click.Option, click.Parameter], v: Tuple[str, ...] +) -> List[TargetVersion]: + """Compute the target versions from a --target-version flag. + + This is its own function because mypy couldn't infer the type correctly + when it was a lambda, causing mypyc trouble. + """ + return [TargetVersion[val.upper()] for val in v] + + +def re_compile_maybe_verbose(regex: str) -> Pattern[str]: + """Compile a regular expression string in `regex`. + + If it contains newlines, use verbose mode. + """ + if "\n" in regex: + regex = "(?x)" + regex + compiled: Pattern[str] = re.compile(regex) + return compiled + + +def validate_regex( + ctx: click.Context, + param: click.Parameter, + value: Optional[str], +) -> Optional[Pattern[str]]: + try: + return re_compile_maybe_verbose(value) if value is not None else None + except re.error as e: + raise click.BadParameter(f"Not a valid regular expression: {e}") from None + + +@click.command( + context_settings={"help_option_names": ["-h", "--help"]}, + # While Click does set this field automatically using the docstring, mypyc + # (annoyingly) strips 'em so we need to set it here too. + help="The uncompromising code formatter.", +) +@click.option("-c", "--code", type=str, help="Format the code passed in as a string.") +@click.option( + "-l", + "--line-length", + type=int, + default=DEFAULT_LINE_LENGTH, + help="How many characters per line to allow.", + show_default=True, +) +@click.option( + "-t", + "--target-version", + type=click.Choice([v.name.lower() for v in TargetVersion]), + callback=target_version_option_callback, + multiple=True, + help=( + "Python versions that should be supported by Black's output. [default: per-file" + " auto-detection]" + ), +) +@click.option( + "--pyi", + is_flag=True, + help=( + "Format all input files like typing stubs regardless of file extension (useful" + " when piping source on standard input)." + ), +) +@click.option( + "--ipynb", + is_flag=True, + help=( + "Format all input files like Jupyter Notebooks regardless of file extension " + "(useful when piping source on standard input)." + ), +) +@click.option( + "--python-cell-magics", + multiple=True, + help=( + "When processing Jupyter Notebooks, add the given magic to the list" + f" of known python-magics ({', '.join(PYTHON_CELL_MAGICS)})." + " Useful for formatting cells with custom python magics." 
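+        # e.g. --python-cell-magics=custom_magic (a hypothetical magic) lets
+        # "%%custom_magic" cells be formatted instead of being skipped.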
+ ), + default=[], +) +@click.option( + "-x", + "--skip-source-first-line", + is_flag=True, + help="Skip the first line of the source code.", +) +@click.option( + "-S", + "--skip-string-normalization", + is_flag=True, + help="Don't normalize string quotes or prefixes.", +) +@click.option( + "-C", + "--skip-magic-trailing-comma", + is_flag=True, + help="Don't use trailing commas as a reason to split lines.", +) +@click.option( + "--experimental-string-processing", + is_flag=True, + hidden=True, + help="(DEPRECATED and now included in --preview) Normalize string literals.", +) +@click.option( + "--preview", + is_flag=True, + help=( + "Enable potentially disruptive style changes that may be added to Black's main" + " functionality in the next major release." + ), +) +@click.option( + "--check", + is_flag=True, + help=( + "Don't write the files back, just return the status. Return code 0 means" + " nothing would change. Return code 1 means some files would be reformatted." + " Return code 123 means there was an internal error." + ), +) +@click.option( + "--diff", + is_flag=True, + help="Don't write the files back, just output a diff for each file on stdout.", +) +@click.option( + "--color/--no-color", + is_flag=True, + help="Show colored diff. Only applies when `--diff` is given.", +) +@click.option( + "--fast/--safe", + is_flag=True, + help="If --fast given, skip temporary sanity checks. [default: --safe]", +) +@click.option( + "--required-version", + type=str, + help=( + "Require a specific version of Black to be running (useful for unifying results" + " across many environments e.g. with a pyproject.toml file). It can be" + " either a major version number or an exact version." + ), +) +@click.option( + "--include", + type=str, + default=DEFAULT_INCLUDES, + callback=validate_regex, + help=( + "A regular expression that matches files and directories that should be" + " included on recursive searches. An empty value means all files are included" + " regardless of the name. Use forward slashes for directories on all platforms" + " (Windows, too). Exclusions are calculated first, inclusions later." + ), + show_default=True, +) +@click.option( + "--exclude", + type=str, + callback=validate_regex, + help=( + "A regular expression that matches files and directories that should be" + " excluded on recursive searches. An empty value means no paths are excluded." + " Use forward slashes for directories on all platforms (Windows, too)." + " Exclusions are calculated first, inclusions later. [default:" + f" {DEFAULT_EXCLUDES}]" + ), + show_default=False, +) +@click.option( + "--extend-exclude", + type=str, + callback=validate_regex, + help=( + "Like --exclude, but adds additional files and directories on top of the" + " excluded ones. (Useful if you simply want to add to the default)" + ), +) +@click.option( + "--force-exclude", + type=str, + callback=validate_regex, + help=( + "Like --exclude, but files and directories matching this regex will be " + "excluded even when they are passed explicitly as arguments." + ), +) +@click.option( + "--stdin-filename", + type=str, + help=( + "The name of the file when passing it through stdin. Useful to make " + "sure Black will respect --force-exclude option on some " + "editors that rely on using stdin." 
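+        # Illustrative invocation (hypothetical path):
+        #   cat src/app.py | black - --stdin-filename src/app.py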
+ ), +) +@click.option( + "-W", + "--workers", + type=click.IntRange(min=1), + default=None, + help="Number of parallel workers [default: number of CPUs in the system]", +) +@click.option( + "-q", + "--quiet", + is_flag=True, + help=( + "Don't emit non-error messages to stderr. Errors are still emitted; silence" + " those with 2>/dev/null." + ), +) +@click.option( + "-v", + "--verbose", + is_flag=True, + help=( + "Also emit messages to stderr about files that were not changed or were ignored" + " due to exclusion patterns." + ), +) +@click.version_option( + version=__version__, + message=( + f"%(prog)s, %(version)s (compiled: {'yes' if COMPILED else 'no'})\n" + f"Python ({platform.python_implementation()}) {platform.python_version()}" + ), +) +@click.argument( + "src", + nargs=-1, + type=click.Path( + exists=True, file_okay=True, dir_okay=True, readable=True, allow_dash=True + ), + is_eager=True, + metavar="SRC ...", +) +@click.option( + "--config", + type=click.Path( + exists=True, + file_okay=True, + dir_okay=False, + readable=True, + allow_dash=False, + path_type=str, + ), + is_eager=True, + callback=read_pyproject_toml, + help="Read configuration from FILE path.", +) +@click.pass_context +def main( # noqa: C901 + ctx: click.Context, + code: Optional[str], + line_length: int, + target_version: List[TargetVersion], + check: bool, + diff: bool, + color: bool, + fast: bool, + pyi: bool, + ipynb: bool, + python_cell_magics: Sequence[str], + skip_source_first_line: bool, + skip_string_normalization: bool, + skip_magic_trailing_comma: bool, + experimental_string_processing: bool, + preview: bool, + quiet: bool, + verbose: bool, + required_version: Optional[str], + include: Pattern[str], + exclude: Optional[Pattern[str]], + extend_exclude: Optional[Pattern[str]], + force_exclude: Optional[Pattern[str]], + stdin_filename: Optional[str], + workers: Optional[int], + src: Tuple[str, ...], + config: Optional[str], +) -> None: + """The uncompromising code formatter.""" + ctx.ensure_object(dict) + + if src and code is not None: + out( + main.get_usage(ctx) + + "\n\n'SRC' and 'code' cannot be passed simultaneously." + ) + ctx.exit(1) + if not src and code is None: + out(main.get_usage(ctx) + "\n\nOne of 'SRC' or 'code' is required.") + ctx.exit(1) + + root, method = ( + find_project_root(src, stdin_filename) if code is None else (None, None) + ) + ctx.obj["root"] = root + + if verbose: + if root: + out( + f"Identified `{root}` as project root containing a {method}.", + fg="blue", + ) + + normalized = [ + (source, source) + if source == "-" + else (normalize_path_maybe_ignore(Path(source), root), source) + for source in src + ] + srcs_string = ", ".join( + [ + f'"{_norm}"' + if _norm + else f'\033[31m"{source} (skipping - invalid)"\033[34m' + for _norm, source in normalized + ] + ) + out(f"Sources to be formatted: {srcs_string}", fg="blue") + + if config: + config_source = ctx.get_parameter_source("config") + user_level_config = str(find_user_pyproject_toml()) + if config == user_level_config: + out( + "Using configuration from user-level config at " + f"'{user_level_config}'.", + fg="blue", + ) + elif config_source in ( + ParameterSource.DEFAULT, + ParameterSource.DEFAULT_MAP, + ): + out("Using configuration from project root.", fg="blue") + else: + out(f"Using configuration in '{config}'.", fg="blue") + + error_msg = "Oh no! 
💥 💔 💥" + if ( + required_version + and required_version != __version__ + and required_version != __version__.split(".")[0] + ): + err( + f"{error_msg} The required version `{required_version}` does not match" + f" the running version `{__version__}`!" + ) + ctx.exit(1) + if ipynb and pyi: + err("Cannot pass both `pyi` and `ipynb` flags!") + ctx.exit(1) + + write_back = WriteBack.from_configuration(check=check, diff=diff, color=color) + if target_version: + versions = set(target_version) + else: + # We'll autodetect later. + versions = set() + mode = Mode( + target_versions=versions, + line_length=line_length, + is_pyi=pyi, + is_ipynb=ipynb, + skip_source_first_line=skip_source_first_line, + string_normalization=not skip_string_normalization, + magic_trailing_comma=not skip_magic_trailing_comma, + experimental_string_processing=experimental_string_processing, + preview=preview, + python_cell_magics=set(python_cell_magics), + ) + + if code is not None: + # Run in quiet mode by default with -c; the extra output isn't useful. + # You can still pass -v to get verbose output. + quiet = True + + report = Report(check=check, diff=diff, quiet=quiet, verbose=verbose) + + if code is not None: + reformat_code( + content=code, fast=fast, write_back=write_back, mode=mode, report=report + ) + else: + try: + sources = get_sources( + ctx=ctx, + src=src, + quiet=quiet, + verbose=verbose, + include=include, + exclude=exclude, + extend_exclude=extend_exclude, + force_exclude=force_exclude, + report=report, + stdin_filename=stdin_filename, + ) + except GitWildMatchPatternError: + ctx.exit(1) + + path_empty( + sources, + "No Python files are present to be formatted. Nothing to do 😴", + quiet, + verbose, + ctx, + ) + + if len(sources) == 1: + reformat_one( + src=sources.pop(), + fast=fast, + write_back=write_back, + mode=mode, + report=report, + ) + else: + from black.concurrency import reformat_many + + reformat_many( + sources=sources, + fast=fast, + write_back=write_back, + mode=mode, + report=report, + workers=workers, + ) + + if verbose or not quiet: + if code is None and (verbose or report.change_count or report.failure_count): + out() + out(error_msg if report.return_code else "All done! ✨ 🍰 ✨") + if code is None: + click.echo(str(report), err=True) + ctx.exit(report.return_code) + + +def get_sources( + *, + ctx: click.Context, + src: Tuple[str, ...], + quiet: bool, + verbose: bool, + include: Pattern[str], + exclude: Optional[Pattern[str]], + extend_exclude: Optional[Pattern[str]], + force_exclude: Optional[Pattern[str]], + report: "Report", + stdin_filename: Optional[str], +) -> Set[Path]: + """Compute the set of files to be formatted.""" + sources: Set[Path] = set() + root = ctx.obj["root"] + + for s in src: + if s == "-" and stdin_filename: + p = Path(stdin_filename) + is_stdin = True + else: + p = Path(s) + is_stdin = False + + if is_stdin or p.is_file(): + normalized_path = normalize_path_maybe_ignore(p, ctx.obj["root"], report) + if normalized_path is None: + continue + + normalized_path = "/" + normalized_path + # Hard-exclude any files that matches the `--force-exclude` regex. 
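+            # For illustration: the leading "/" added above lets a regex like
+            # r"/tests/" (hypothetical) match "/pkg/tests/x.py" as a path
+            # segment without also matching a top-level "tests.py" file.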
+ if force_exclude: + force_exclude_match = force_exclude.search(normalized_path) + else: + force_exclude_match = None + if force_exclude_match and force_exclude_match.group(0): + report.path_ignored(p, "matches the --force-exclude regular expression") + continue + + if is_stdin: + p = Path(f"{STDIN_PLACEHOLDER}{str(p)}") + + if p.suffix == ".ipynb" and not jupyter_dependencies_are_installed( + verbose=verbose, quiet=quiet + ): + continue + + sources.add(p) + elif p.is_dir(): + if exclude is None: + exclude = re_compile_maybe_verbose(DEFAULT_EXCLUDES) + gitignore = get_gitignore(root) + p_gitignore = get_gitignore(p) + # No need to use p's gitignore if it is identical to root's gitignore + # (i.e. root and p point to the same directory). + if gitignore != p_gitignore: + gitignore += p_gitignore + else: + gitignore = None + sources.update( + gen_python_files( + p.iterdir(), + ctx.obj["root"], + include, + exclude, + extend_exclude, + force_exclude, + report, + gitignore, + verbose=verbose, + quiet=quiet, + ) + ) + elif s == "-": + sources.add(p) + else: + err(f"invalid path: {s}") + return sources + + +def path_empty( + src: Sized, msg: str, quiet: bool, verbose: bool, ctx: click.Context +) -> None: + """ + Exit if there is no `src` provided for formatting + """ + if not src: + if verbose or not quiet: + out(msg) + ctx.exit(0) + + +def reformat_code( + content: str, fast: bool, write_back: WriteBack, mode: Mode, report: Report +) -> None: + """ + Reformat and print out `content` without spawning child processes. + Similar to `reformat_one`, but for string content. + + `fast`, `write_back`, and `mode` options are passed to + :func:`format_file_in_place` or :func:`format_stdin_to_stdout`. + """ + path = Path("") + try: + changed = Changed.NO + if format_stdin_to_stdout( + content=content, fast=fast, write_back=write_back, mode=mode + ): + changed = Changed.YES + report.done(path, changed) + except Exception as exc: + if report.verbose: + traceback.print_exc() + report.failed(path, str(exc)) + + +# diff-shades depends on being to monkeypatch this function to operate. I know it's +# not ideal, but this shouldn't cause any issues ... hopefully. ~ichard26 +@mypyc_attr(patchable=True) +def reformat_one( + src: Path, fast: bool, write_back: WriteBack, mode: Mode, report: "Report" +) -> None: + """Reformat a single file under `src` without spawning child processes. + + `fast`, `write_back`, and `mode` options are passed to + :func:`format_file_in_place` or :func:`format_stdin_to_stdout`. 
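+
+    For stdin, an illustrative `src` of Path(f"{STDIN_PLACEHOLDER}app.py") is
+    read from standard input but reported as `app.py`, so diffs and messages
+    carry the real filename.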
+ """ + try: + changed = Changed.NO + + if str(src) == "-": + is_stdin = True + elif str(src).startswith(STDIN_PLACEHOLDER): + is_stdin = True + # Use the original name again in case we want to print something + # to the user + src = Path(str(src)[len(STDIN_PLACEHOLDER) :]) + else: + is_stdin = False + + if is_stdin: + if src.suffix == ".pyi": + mode = replace(mode, is_pyi=True) + elif src.suffix == ".ipynb": + mode = replace(mode, is_ipynb=True) + if format_stdin_to_stdout(fast=fast, write_back=write_back, mode=mode): + changed = Changed.YES + else: + cache: Cache = {} + if write_back not in (WriteBack.DIFF, WriteBack.COLOR_DIFF): + cache = read_cache(mode) + res_src = src.resolve() + res_src_s = str(res_src) + if res_src_s in cache and cache[res_src_s] == get_cache_info(res_src): + changed = Changed.CACHED + if changed is not Changed.CACHED and format_file_in_place( + src, fast=fast, write_back=write_back, mode=mode + ): + changed = Changed.YES + if (write_back is WriteBack.YES and changed is not Changed.CACHED) or ( + write_back is WriteBack.CHECK and changed is Changed.NO + ): + write_cache(cache, [src], mode) + report.done(src, changed) + except Exception as exc: + if report.verbose: + traceback.print_exc() + report.failed(src, str(exc)) + + +def format_file_in_place( + src: Path, + fast: bool, + mode: Mode, + write_back: WriteBack = WriteBack.NO, + lock: Any = None, # multiprocessing.Manager().Lock() is some crazy proxy +) -> bool: + """Format file under `src` path. Return True if changed. + + If `write_back` is DIFF, write a diff to stdout. If it is YES, write reformatted + code to the file. + `mode` and `fast` options are passed to :func:`format_file_contents`. + """ + if src.suffix == ".pyi": + mode = replace(mode, is_pyi=True) + elif src.suffix == ".ipynb": + mode = replace(mode, is_ipynb=True) + + then = datetime.utcfromtimestamp(src.stat().st_mtime) + header = b"" + with open(src, "rb") as buf: + if mode.skip_source_first_line: + header = buf.readline() + src_contents, encoding, newline = decode_bytes(buf.read()) + try: + dst_contents = format_file_contents(src_contents, fast=fast, mode=mode) + except NothingChanged: + return False + except JSONDecodeError: + raise ValueError( + f"File '{src}' cannot be parsed as valid Jupyter notebook." + ) from None + src_contents = header.decode(encoding) + src_contents + dst_contents = header.decode(encoding) + dst_contents + + if write_back == WriteBack.YES: + with open(src, "w", encoding=encoding, newline=newline) as f: + f.write(dst_contents) + elif write_back in (WriteBack.DIFF, WriteBack.COLOR_DIFF): + now = datetime.utcnow() + src_name = f"{src}\t{then} +0000" + dst_name = f"{src}\t{now} +0000" + if mode.is_ipynb: + diff_contents = ipynb_diff(src_contents, dst_contents, src_name, dst_name) + else: + diff_contents = diff(src_contents, dst_contents, src_name, dst_name) + + if write_back == WriteBack.COLOR_DIFF: + diff_contents = color_diff(diff_contents) + + with lock or nullcontext(): + f = io.TextIOWrapper( + sys.stdout.buffer, + encoding=encoding, + newline=newline, + write_through=True, + ) + f = wrap_stream_for_windows(f) + f.write(diff_contents) + f.detach() + + return True + + +def format_stdin_to_stdout( + fast: bool, + *, + content: Optional[str] = None, + write_back: WriteBack = WriteBack.NO, + mode: Mode, +) -> bool: + """Format file on stdin. Return True if changed. + + If content is None, it's read from sys.stdin. + + If `write_back` is YES, write reformatted code back to stdout. If it is DIFF, + write a diff to stdout. 
The `mode` argument is passed to + :func:`format_file_contents`. + """ + then = datetime.utcnow() + + if content is None: + src, encoding, newline = decode_bytes(sys.stdin.buffer.read()) + else: + src, encoding, newline = content, "utf-8", "" + + dst = src + try: + dst = format_file_contents(src, fast=fast, mode=mode) + return True + + except NothingChanged: + return False + + finally: + f = io.TextIOWrapper( + sys.stdout.buffer, encoding=encoding, newline=newline, write_through=True + ) + if write_back == WriteBack.YES: + # Make sure there's a newline after the content + if dst and dst[-1] != "\n": + dst += "\n" + f.write(dst) + elif write_back in (WriteBack.DIFF, WriteBack.COLOR_DIFF): + now = datetime.utcnow() + src_name = f"STDIN\t{then} +0000" + dst_name = f"STDOUT\t{now} +0000" + d = diff(src, dst, src_name, dst_name) + if write_back == WriteBack.COLOR_DIFF: + d = color_diff(d) + f = wrap_stream_for_windows(f) + f.write(d) + f.detach() + + +def check_stability_and_equivalence( + src_contents: str, dst_contents: str, *, mode: Mode +) -> None: + """Perform stability and equivalence checks. + + Raise AssertionError if source and destination contents are not + equivalent, or if a second pass of the formatter would format the + content differently. + """ + assert_equivalent(src_contents, dst_contents) + assert_stable(src_contents, dst_contents, mode=mode) + + +def format_file_contents(src_contents: str, *, fast: bool, mode: Mode) -> FileContent: + """Reformat contents of a file and return new contents. + + If `fast` is False, additionally confirm that the reformatted code is + valid by calling :func:`assert_equivalent` and :func:`assert_stable` on it. + `mode` is passed to :func:`format_str`. + """ + if not src_contents.strip(): + raise NothingChanged + + if mode.is_ipynb: + dst_contents = format_ipynb_string(src_contents, fast=fast, mode=mode) + else: + dst_contents = format_str(src_contents, mode=mode) + if src_contents == dst_contents: + raise NothingChanged + + if not fast and not mode.is_ipynb: + # Jupyter notebooks will already have been checked above. + check_stability_and_equivalence(src_contents, dst_contents, mode=mode) + return dst_contents + + +def validate_cell(src: str, mode: Mode) -> None: + """Check that cell does not already contain TransformerManager transformations, + or non-Python cell magics, which might cause tokenizer_rt to break because of + indentations. + + If a cell contains ``!ls``, then it'll be transformed to + ``get_ipython().system('ls')``. However, if the cell originally contained + ``get_ipython().system('ls')``, then it would get transformed in the same way: + + >>> TransformerManager().transform_cell("get_ipython().system('ls')") + "get_ipython().system('ls')\n" + >>> TransformerManager().transform_cell("!ls") + "get_ipython().system('ls')\n" + + Due to the impossibility of safely roundtripping in such situations, cells + containing transformed magics will be ignored. + """ + if any(transformed_magic in src for transformed_magic in TRANSFORMED_MAGICS): + raise NothingChanged + if ( + src[:2] == "%%" + and src.split()[0][2:] not in PYTHON_CELL_MAGICS | mode.python_cell_magics + ): + raise NothingChanged + + +def format_cell(src: str, *, fast: bool, mode: Mode) -> str: + """Format code in given cell of Jupyter notebook. 
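+
+    A minimal round trip (illustrative, with the default ``Mode()``):
+
+    >>> format_cell("x=1", fast=True, mode=Mode())
+    'x = 1'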
+ + General idea is: + + - if cell has trailing semicolon, remove it; + - if cell has IPython magics, mask them; + - format cell; + - reinstate IPython magics; + - reinstate trailing semicolon (if originally present); + - strip trailing newlines. + + Cells with syntax errors will not be processed, as they + could potentially be automagics or multi-line magics, which + are currently not supported. + """ + validate_cell(src, mode) + src_without_trailing_semicolon, has_trailing_semicolon = remove_trailing_semicolon( + src + ) + try: + masked_src, replacements = mask_cell(src_without_trailing_semicolon) + except SyntaxError: + raise NothingChanged from None + masked_dst = format_str(masked_src, mode=mode) + if not fast: + check_stability_and_equivalence(masked_src, masked_dst, mode=mode) + dst_without_trailing_semicolon = unmask_cell(masked_dst, replacements) + dst = put_trailing_semicolon_back( + dst_without_trailing_semicolon, has_trailing_semicolon + ) + dst = dst.rstrip("\n") + if dst == src: + raise NothingChanged from None + return dst + + +def validate_metadata(nb: MutableMapping[str, Any]) -> None: + """If notebook is marked as non-Python, don't format it. + + All notebook metadata fields are optional, see + https://nbformat.readthedocs.io/en/latest/format_description.html. So + if a notebook has empty metadata, we will try to parse it anyway. + """ + language = nb.get("metadata", {}).get("language_info", {}).get("name", None) + if language is not None and language != "python": + raise NothingChanged from None + + +def format_ipynb_string(src_contents: str, *, fast: bool, mode: Mode) -> FileContent: + """Format Jupyter notebook. + + Operate cell-by-cell, only on code cells, only for Python notebooks. + If the ``.ipynb`` originally had a trailing newline, it'll be preserved. + """ + trailing_newline = src_contents[-1] == "\n" + modified = False + nb = json.loads(src_contents) + validate_metadata(nb) + for cell in nb["cells"]: + if cell.get("cell_type", None) == "code": + try: + src = "".join(cell["source"]) + dst = format_cell(src, fast=fast, mode=mode) + except NothingChanged: + pass + else: + cell["source"] = dst.splitlines(keepends=True) + modified = True + if modified: + dst_contents = json.dumps(nb, indent=1, ensure_ascii=False) + if trailing_newline: + dst_contents = dst_contents + "\n" + return dst_contents + else: + raise NothingChanged + + +def format_str(src_contents: str, *, mode: Mode) -> str: + """Reformat a string and return new contents. + + `mode` determines formatting options, such as how many characters per line are + allowed. Example: + + >>> import black + >>> print(black.format_str("def f(arg:str='')->None:...", mode=black.Mode())) + def f(arg: str = "") -> None: + ... + + A more complex example: + + >>> print( + ... black.format_str( + ... "def f(arg:str='')->None: hey", + ... mode=black.Mode( + ... target_versions={black.TargetVersion.PY36}, + ... line_length=10, + ... string_normalization=False, + ... is_pyi=False, + ... ), + ... ), + ... ) + def f( + arg: str = '', + ) -> None: + hey + + """ + dst_contents = _format_str_once(src_contents, mode=mode) + # Forced second pass to work around optional trailing commas (becoming + # forced trailing commas on pass 2) interacting differently with optional + # parentheses. Admittedly ugly. 
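+    # Illustration of the instability: a trailing comma added on pass 1 reads
+    # as a pre-existing magic trailing comma on pass 2, which can flip the
+    # treatment of optional parentheses; deeper churn is still caught by
+    # assert_stable() in safe mode.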
+ if src_contents != dst_contents: + return _format_str_once(dst_contents, mode=mode) + return dst_contents + + +def _format_str_once(src_contents: str, *, mode: Mode) -> str: + src_node = lib2to3_parse(src_contents.lstrip(), mode.target_versions) + dst_contents = [] + if mode.target_versions: + versions = mode.target_versions + else: + future_imports = get_future_imports(src_node) + versions = detect_target_versions(src_node, future_imports=future_imports) + + normalize_fmt_off(src_node, preview=mode.preview) + lines = LineGenerator(mode=mode) + elt = EmptyLineTracker(is_pyi=mode.is_pyi) + empty_line = Line(mode=mode) + after = 0 + split_line_features = { + feature + for feature in {Feature.TRAILING_COMMA_IN_CALL, Feature.TRAILING_COMMA_IN_DEF} + if supports_feature(versions, feature) + } + for current_line in lines.visit(src_node): + dst_contents.append(str(empty_line) * after) + before, after = elt.maybe_empty_lines(current_line) + dst_contents.append(str(empty_line) * before) + for line in transform_line( + current_line, mode=mode, features=split_line_features + ): + dst_contents.append(str(line)) + return "".join(dst_contents) + + +def decode_bytes(src: bytes) -> Tuple[FileContent, Encoding, NewLine]: + """Return a tuple of (decoded_contents, encoding, newline). + + `newline` is either CRLF or LF but `decoded_contents` is decoded with + universal newlines (i.e. only contains LF). + """ + srcbuf = io.BytesIO(src) + encoding, lines = tokenize.detect_encoding(srcbuf.readline) + if not lines: + return "", encoding, "\n" + + newline = "\r\n" if b"\r\n" == lines[0][-2:] else "\n" + srcbuf.seek(0) + with io.TextIOWrapper(srcbuf, encoding) as tiow: + return tiow.read(), encoding, newline + + +def get_features_used( # noqa: C901 + node: Node, *, future_imports: Optional[Set[str]] = None +) -> Set[Feature]: + """Return a set of (relatively) new Python features used in this file. 
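+
+    For instance, ``(x := 10)`` registers Feature.ASSIGNMENT_EXPRESSIONS, and
+    ``f"{x=}"`` registers both Feature.F_STRINGS and Feature.DEBUG_F_STRINGS
+    (an illustration, not an exhaustive mapping).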
+ + Currently looking for: + - f-strings; + - self-documenting expressions in f-strings (f"{x=}"); + - underscores in numeric literals; + - trailing commas after * or ** in function signatures and calls; + - positional only arguments in function signatures and lambdas; + - assignment expression; + - relaxed decorator syntax; + - usage of __future__ flags (annotations); + - print / exec statements; + """ + features: Set[Feature] = set() + if future_imports: + features |= { + FUTURE_FLAG_TO_FEATURE[future_import] + for future_import in future_imports + if future_import in FUTURE_FLAG_TO_FEATURE + } + + for n in node.pre_order(): + if is_string_token(n): + value_head = n.value[:2] + if value_head in {'f"', 'F"', "f'", "F'", "rf", "fr", "RF", "FR"}: + features.add(Feature.F_STRINGS) + if Feature.DEBUG_F_STRINGS not in features: + for span_beg, span_end in iter_fexpr_spans(n.value): + if n.value[span_beg : span_end - 1].rstrip().endswith("="): + features.add(Feature.DEBUG_F_STRINGS) + break + + elif is_number_token(n): + if "_" in n.value: + features.add(Feature.NUMERIC_UNDERSCORES) + + elif n.type == token.SLASH: + if n.parent and n.parent.type in { + syms.typedargslist, + syms.arglist, + syms.varargslist, + }: + features.add(Feature.POS_ONLY_ARGUMENTS) + + elif n.type == token.COLONEQUAL: + features.add(Feature.ASSIGNMENT_EXPRESSIONS) + + elif n.type == syms.decorator: + if len(n.children) > 1 and not is_simple_decorator_expression( + n.children[1] + ): + features.add(Feature.RELAXED_DECORATORS) + + elif ( + n.type in {syms.typedargslist, syms.arglist} + and n.children + and n.children[-1].type == token.COMMA + ): + if n.type == syms.typedargslist: + feature = Feature.TRAILING_COMMA_IN_DEF + else: + feature = Feature.TRAILING_COMMA_IN_CALL + + for ch in n.children: + if ch.type in STARS: + features.add(feature) + + if ch.type == syms.argument: + for argch in ch.children: + if argch.type in STARS: + features.add(feature) + + elif ( + n.type in {syms.return_stmt, syms.yield_expr} + and len(n.children) >= 2 + and n.children[1].type == syms.testlist_star_expr + and any(child.type == syms.star_expr for child in n.children[1].children) + ): + features.add(Feature.UNPACKING_ON_FLOW) + + elif ( + n.type == syms.annassign + and len(n.children) >= 4 + and n.children[3].type == syms.testlist_star_expr + ): + features.add(Feature.ANN_ASSIGN_EXTENDED_RHS) + + elif ( + n.type == syms.except_clause + and len(n.children) >= 2 + and n.children[1].type == token.STAR + ): + features.add(Feature.EXCEPT_STAR) + + elif n.type in {syms.subscriptlist, syms.trailer} and any( + child.type == syms.star_expr for child in n.children + ): + features.add(Feature.VARIADIC_GENERICS) + + elif ( + n.type == syms.tname_star + and len(n.children) == 3 + and n.children[2].type == syms.star_expr + ): + features.add(Feature.VARIADIC_GENERICS) + + return features + + +def detect_target_versions( + node: Node, *, future_imports: Optional[Set[str]] = None +) -> Set[TargetVersion]: + """Detect the version to target based on the nodes used.""" + features = get_features_used(node, future_imports=future_imports) + return { + version for version in TargetVersion if features <= VERSION_TO_FEATURES[version] + } + + +def get_future_imports(node: Node) -> Set[str]: + """Return a set of __future__ imports in the file.""" + imports: Set[str] = set() + + def get_imports_from_children(children: List[LN]) -> Generator[str, None, None]: + for child in children: + if isinstance(child, Leaf): + if child.type == token.NAME: + yield child.value + + elif 
child.type == syms.import_as_name: + orig_name = child.children[0] + assert isinstance(orig_name, Leaf), "Invalid syntax parsing imports" + assert orig_name.type == token.NAME, "Invalid syntax parsing imports" + yield orig_name.value + + elif child.type == syms.import_as_names: + yield from get_imports_from_children(child.children) + + else: + raise AssertionError("Invalid syntax parsing imports") + + for child in node.children: + if child.type != syms.simple_stmt: + break + + first_child = child.children[0] + if isinstance(first_child, Leaf): + # Continue looking if we see a docstring; otherwise stop. + if ( + len(child.children) == 2 + and first_child.type == token.STRING + and child.children[1].type == token.NEWLINE + ): + continue + + break + + elif first_child.type == syms.import_from: + module_name = first_child.children[1] + if not isinstance(module_name, Leaf) or module_name.value != "__future__": + break + + imports |= set(get_imports_from_children(first_child.children[3:])) + else: + break + + return imports + + +def assert_equivalent(src: str, dst: str) -> None: + """Raise AssertionError if `src` and `dst` aren't equivalent.""" + try: + src_ast = parse_ast(src) + except Exception as exc: + raise AssertionError( + "cannot use --safe with this file; failed to parse source file AST: " + f"{exc}\n" + "This could be caused by running Black with an older Python version " + "that does not support new syntax used in your source file." + ) from exc + + try: + dst_ast = parse_ast(dst) + except Exception as exc: + log = dump_to_file("".join(traceback.format_tb(exc.__traceback__)), dst) + raise AssertionError( + f"INTERNAL ERROR: Black produced invalid code: {exc}. " + "Please report a bug on https://github.com/psf/black/issues. " + f"This invalid output might be helpful: {log}" + ) from None + + src_ast_str = "\n".join(stringify_ast(src_ast)) + dst_ast_str = "\n".join(stringify_ast(dst_ast)) + if src_ast_str != dst_ast_str: + log = dump_to_file(diff(src_ast_str, dst_ast_str, "src", "dst")) + raise AssertionError( + "INTERNAL ERROR: Black produced code that is not equivalent to the" + " source. Please report a bug on " + f"https://github.com/psf/black/issues. This diff might be helpful: {log}" + ) from None + + +def assert_stable(src: str, dst: str, mode: Mode) -> None: + """Raise AssertionError if `dst` reformats differently the second time.""" + # We shouldn't call format_str() here, because that formats the string + # twice and may hide a bug where we bounce back and forth between two + # versions. + newdst = _format_str_once(dst, mode=mode) + if dst != newdst: + log = dump_to_file( + str(mode), + diff(src, dst, "source", "first pass"), + diff(dst, newdst, "first pass", "second pass"), + ) + raise AssertionError( + "INTERNAL ERROR: Black produced different code on the second pass of the" + " formatter. Please report a bug on https://github.com/psf/black/issues." + f" This diff might be helpful: {log}" + ) from None + + +@contextmanager +def nullcontext() -> Iterator[None]: + """Return an empty context manager. + + To be used like `nullcontext` in Python 3.7. + """ + yield + + +def patch_click() -> None: + """Make Click not crash on Python 3.6 with LANG=C. + + On certain misconfigured environments, Python 3 selects the ASCII encoding as the + default which restricts paths that it can access during the lifetime of the + application. Click refuses to work in this scenario by raising a RuntimeError. 
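+
+    (The fix below is deliberately blunt: where Click exposes the private
+    ``_verify_python3_env`` / ``_verify_python_env`` hooks, they are replaced
+    with no-ops.)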
+ + In case of Black the likelihood that non-ASCII characters are going to be used in + file paths is minimal since it's Python source code. Moreover, this crash was + spurious on Python 3.7 thanks to PEP 538 and PEP 540. + """ + modules: List[Any] = [] + try: + from click import core + except ImportError: + pass + else: + modules.append(core) + try: + # Removed in Click 8.1.0 and newer; we keep this around for users who have + # older versions installed. + from click import _unicodefun # type: ignore + except ImportError: + pass + else: + modules.append(_unicodefun) + + for module in modules: + if hasattr(module, "_verify_python3_env"): + module._verify_python3_env = lambda: None # type: ignore + if hasattr(module, "_verify_python_env"): + module._verify_python_env = lambda: None # type: ignore + + +def patched_main() -> None: + # PyInstaller patches multiprocessing to need freeze_support() even in non-Windows + # environments so just assume we always need to call it if frozen. + if getattr(sys, "frozen", False): + from multiprocessing import freeze_support + + freeze_support() + + patch_click() + main() + + +if __name__ == "__main__": + patched_main() diff --git a/env/lib/python3.8/site-packages/black/__main__.py b/env/lib/python3.8/site-packages/black/__main__.py new file mode 100644 index 00000000..19b810b5 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/__main__.py @@ -0,0 +1,3 @@ +from black import patched_main + +patched_main() diff --git a/env/lib/python3.8/site-packages/black/brackets.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/brackets.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..b78dfd80 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/brackets.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/brackets.py b/env/lib/python3.8/site-packages/black/brackets.py new file mode 100644 index 00000000..3566f5b6 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/brackets.py @@ -0,0 +1,342 @@ +"""Builds on top of nodes.py to track brackets.""" + +import sys +from dataclasses import dataclass, field +from typing import Dict, Iterable, List, Optional, Tuple, Union + +if sys.version_info < (3, 8): + from typing_extensions import Final +else: + from typing import Final + +from black.nodes import ( + BRACKET, + CLOSING_BRACKETS, + COMPARATORS, + LOGIC_OPERATORS, + MATH_OPERATORS, + OPENING_BRACKETS, + UNPACKING_PARENTS, + VARARGS_PARENTS, + is_vararg, + syms, +) +from blib2to3.pgen2 import token +from blib2to3.pytree import Leaf, Node + +# types +LN = Union[Leaf, Node] +Depth = int +LeafID = int +NodeType = int +Priority = int + + +COMPREHENSION_PRIORITY: Final = 20 +COMMA_PRIORITY: Final = 18 +TERNARY_PRIORITY: Final = 16 +LOGIC_PRIORITY: Final = 14 +STRING_PRIORITY: Final = 12 +COMPARATOR_PRIORITY: Final = 10 +MATH_PRIORITIES: Final = { + token.VBAR: 9, + token.CIRCUMFLEX: 8, + token.AMPER: 7, + token.LEFTSHIFT: 6, + token.RIGHTSHIFT: 6, + token.PLUS: 5, + token.MINUS: 5, + token.STAR: 4, + token.SLASH: 4, + token.DOUBLESLASH: 4, + token.PERCENT: 4, + token.AT: 4, + token.TILDE: 3, + token.DOUBLESTAR: 2, +} +DOT_PRIORITY: Final = 1 + + +class BracketMatchError(Exception): + """Raised when an opening bracket is unable to be matched to a closing bracket.""" + + +@dataclass +class BracketTracker: + """Keeps track of brackets on a line.""" + + depth: int = 0 + bracket_match: Dict[Tuple[Depth, NodeType], Leaf] = field(default_factory=dict) + delimiters: Dict[LeafID, Priority] 
= field(default_factory=dict) + previous: Optional[Leaf] = None + _for_loop_depths: List[int] = field(default_factory=list) + _lambda_argument_depths: List[int] = field(default_factory=list) + invisible: List[Leaf] = field(default_factory=list) + + def mark(self, leaf: Leaf) -> None: + """Mark `leaf` with bracket-related metadata. Keep track of delimiters. + + All leaves receive an int `bracket_depth` field that stores how deep + within brackets a given leaf is. 0 means there are no enclosing brackets + that started on this line. + + If a leaf is itself a closing bracket, it receives an `opening_bracket` + field that it forms a pair with. This is a one-directional link to + avoid reference cycles. + + If a leaf is a delimiter (a token on which Black can split the line if + needed) and it's on depth 0, its `id()` is stored in the tracker's + `delimiters` field. + """ + if leaf.type == token.COMMENT: + return + + self.maybe_decrement_after_for_loop_variable(leaf) + self.maybe_decrement_after_lambda_arguments(leaf) + if leaf.type in CLOSING_BRACKETS: + self.depth -= 1 + try: + opening_bracket = self.bracket_match.pop((self.depth, leaf.type)) + except KeyError as e: + raise BracketMatchError( + "Unable to match a closing bracket to the following opening" + f" bracket: {leaf}" + ) from e + leaf.opening_bracket = opening_bracket + if not leaf.value: + self.invisible.append(leaf) + leaf.bracket_depth = self.depth + if self.depth == 0: + delim = is_split_before_delimiter(leaf, self.previous) + if delim and self.previous is not None: + self.delimiters[id(self.previous)] = delim + else: + delim = is_split_after_delimiter(leaf, self.previous) + if delim: + self.delimiters[id(leaf)] = delim + if leaf.type in OPENING_BRACKETS: + self.bracket_match[self.depth, BRACKET[leaf.type]] = leaf + self.depth += 1 + if not leaf.value: + self.invisible.append(leaf) + self.previous = leaf + self.maybe_increment_lambda_arguments(leaf) + self.maybe_increment_for_loop_variable(leaf) + + def any_open_brackets(self) -> bool: + """Return True if there is an yet unmatched open bracket on the line.""" + return bool(self.bracket_match) + + def max_delimiter_priority(self, exclude: Iterable[LeafID] = ()) -> Priority: + """Return the highest priority of a delimiter found on the line. + + Values are consistent with what `is_split_*_delimiter()` return. + Raises ValueError on no delimiters. + """ + return max(v for k, v in self.delimiters.items() if k not in exclude) + + def delimiter_count_with_priority(self, priority: Priority = 0) -> int: + """Return the number of delimiters with the given `priority`. + + If no `priority` is passed, defaults to max priority on the line. + """ + if not self.delimiters: + return 0 + + priority = priority or self.max_delimiter_priority() + return sum(1 for p in self.delimiters.values() if p == priority) + + def maybe_increment_for_loop_variable(self, leaf: Leaf) -> bool: + """In a for loop, or comprehension, the variables are often unpacks. + + To avoid splitting on the comma in this situation, increase the depth of + tokens between `for` and `in`. 
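+
+        Example: in `for x, y in pairs:` the comma between `x` and `y` now
+        sits above depth 0, so it is never recorded as a top-level split
+        point.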
+ """ + if leaf.type == token.NAME and leaf.value == "for": + self.depth += 1 + self._for_loop_depths.append(self.depth) + return True + + return False + + def maybe_decrement_after_for_loop_variable(self, leaf: Leaf) -> bool: + """See `maybe_increment_for_loop_variable` above for explanation.""" + if ( + self._for_loop_depths + and self._for_loop_depths[-1] == self.depth + and leaf.type == token.NAME + and leaf.value == "in" + ): + self.depth -= 1 + self._for_loop_depths.pop() + return True + + return False + + def maybe_increment_lambda_arguments(self, leaf: Leaf) -> bool: + """In a lambda expression, there might be more than one argument. + + To avoid splitting on the comma in this situation, increase the depth of + tokens between `lambda` and `:`. + """ + if leaf.type == token.NAME and leaf.value == "lambda": + self.depth += 1 + self._lambda_argument_depths.append(self.depth) + return True + + return False + + def maybe_decrement_after_lambda_arguments(self, leaf: Leaf) -> bool: + """See `maybe_increment_lambda_arguments` above for explanation.""" + if ( + self._lambda_argument_depths + and self._lambda_argument_depths[-1] == self.depth + and leaf.type == token.COLON + ): + self.depth -= 1 + self._lambda_argument_depths.pop() + return True + + return False + + def get_open_lsqb(self) -> Optional[Leaf]: + """Return the most recent opening square bracket (if any).""" + return self.bracket_match.get((self.depth - 1, token.RSQB)) + + +def is_split_after_delimiter(leaf: Leaf, previous: Optional[Leaf] = None) -> Priority: + """Return the priority of the `leaf` delimiter, given a line break after it. + + The delimiter priorities returned here are from those delimiters that would + cause a line break after themselves. + + Higher numbers are higher priority. + """ + if leaf.type == token.COMMA: + return COMMA_PRIORITY + + return 0 + + +def is_split_before_delimiter(leaf: Leaf, previous: Optional[Leaf] = None) -> Priority: + """Return the priority of the `leaf` delimiter, given a line break before it. + + The delimiter priorities returned here are from those delimiters that would + cause a line break before themselves. + + Higher numbers are higher priority. + """ + if is_vararg(leaf, within=VARARGS_PARENTS | UNPACKING_PARENTS): + # * and ** might also be MATH_OPERATORS but in this case they are not. + # Don't treat them as a delimiter. 
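+        # e.g. the "*" in f(*args) or the "**" in {**base, **extra}.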
+ return 0 + + if ( + leaf.type == token.DOT + and leaf.parent + and leaf.parent.type not in {syms.import_from, syms.dotted_name} + and (previous is None or previous.type in CLOSING_BRACKETS) + ): + return DOT_PRIORITY + + if ( + leaf.type in MATH_OPERATORS + and leaf.parent + and leaf.parent.type not in {syms.factor, syms.star_expr} + ): + return MATH_PRIORITIES[leaf.type] + + if leaf.type in COMPARATORS: + return COMPARATOR_PRIORITY + + if ( + leaf.type == token.STRING + and previous is not None + and previous.type == token.STRING + ): + return STRING_PRIORITY + + if leaf.type not in {token.NAME, token.ASYNC}: + return 0 + + if ( + leaf.value == "for" + and leaf.parent + and leaf.parent.type in {syms.comp_for, syms.old_comp_for} + or leaf.type == token.ASYNC + ): + if ( + not isinstance(leaf.prev_sibling, Leaf) + or leaf.prev_sibling.value != "async" + ): + return COMPREHENSION_PRIORITY + + if ( + leaf.value == "if" + and leaf.parent + and leaf.parent.type in {syms.comp_if, syms.old_comp_if} + ): + return COMPREHENSION_PRIORITY + + if leaf.value in {"if", "else"} and leaf.parent and leaf.parent.type == syms.test: + return TERNARY_PRIORITY + + if leaf.value == "is": + return COMPARATOR_PRIORITY + + if ( + leaf.value == "in" + and leaf.parent + and leaf.parent.type in {syms.comp_op, syms.comparison} + and not ( + previous is not None + and previous.type == token.NAME + and previous.value == "not" + ) + ): + return COMPARATOR_PRIORITY + + if ( + leaf.value == "not" + and leaf.parent + and leaf.parent.type == syms.comp_op + and not ( + previous is not None + and previous.type == token.NAME + and previous.value == "is" + ) + ): + return COMPARATOR_PRIORITY + + if leaf.value in LOGIC_OPERATORS and leaf.parent: + return LOGIC_PRIORITY + + return 0 + + +def max_delimiter_priority_in_atom(node: LN) -> Priority: + """Return maximum delimiter priority inside `node`. + + This is specific to atoms with contents contained in a pair of parentheses. + If `node` isn't an atom or there are no enclosing parentheses, returns 0. + """ + if node.type != syms.atom: + return 0 + + first = node.children[0] + last = node.children[-1] + if not (first.type == token.LPAR and last.type == token.RPAR): + return 0 + + bt = BracketTracker() + for c in node.children[1:-1]: + if isinstance(c, Leaf): + bt.mark(c) + else: + for leaf in c.leaves(): + bt.mark(leaf) + try: + return bt.max_delimiter_priority() + + except ValueError: + return 0 diff --git a/env/lib/python3.8/site-packages/black/cache.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/cache.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..4b0353d4 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/cache.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/cache.py b/env/lib/python3.8/site-packages/black/cache.py new file mode 100644 index 00000000..9455ff44 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/cache.py @@ -0,0 +1,97 @@ +"""Caching of formatted files with feature-based invalidation.""" + +import os +import pickle +import tempfile +from pathlib import Path +from typing import Dict, Iterable, Set, Tuple + +from platformdirs import user_cache_dir + +from _black_version import version as __version__ +from black.mode import Mode + +# types +Timestamp = float +FileSize = int +CacheInfo = Tuple[Timestamp, FileSize] +Cache = Dict[str, CacheInfo] + + +def get_cache_dir() -> Path: + """Get the cache directory used by black. 
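+
+    On a typical Linux machine this resolves to something like
+    `~/.cache/black/<version>`; with `BLACK_CACHE_DIR=/tmp/black-cache` (a
+    hypothetical value) it becomes `Path("/tmp/black-cache")`.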
+
+    Users can customize this directory on all systems using the `BLACK_CACHE_DIR`
+    environment variable. By default, the cache directory is the user cache directory
+    under the black application.
+
+    This result is immediately set to a constant `black.cache.CACHE_DIR` so as to
+    avoid repeated calls.
+    """
+    # NOTE: Function mostly exists as a clean way to test getting the cache directory.
+    default_cache_dir = user_cache_dir("black", version=__version__)
+    cache_dir = Path(os.environ.get("BLACK_CACHE_DIR", default_cache_dir))
+    return cache_dir
+
+
+CACHE_DIR = get_cache_dir()
+
+
+def read_cache(mode: Mode) -> Cache:
+    """Read the cache if it exists and is well formed.
+
+    If it is not well formed, the call to write_cache later should resolve the issue.
+    """
+    cache_file = get_cache_file(mode)
+    if not cache_file.exists():
+        return {}
+
+    with cache_file.open("rb") as fobj:
+        try:
+            cache: Cache = pickle.load(fobj)
+        except (pickle.UnpicklingError, ValueError, IndexError):
+            return {}
+
+    return cache
+
+
+def get_cache_file(mode: Mode) -> Path:
+    return CACHE_DIR / f"cache.{mode.get_cache_key()}.pickle"
+
+
+def get_cache_info(path: Path) -> CacheInfo:
+    """Return the information used to check if a file is already formatted or not."""
+    stat = path.stat()
+    return stat.st_mtime, stat.st_size
+
+
+def filter_cached(cache: Cache, sources: Iterable[Path]) -> Tuple[Set[Path], Set[Path]]:
+    """Split an iterable of paths in `sources` into two sets.
+
+    The first contains paths of files that were modified on disk or are not in the
+    cache. The other contains paths to non-modified files.
+    """
+    todo, done = set(), set()
+    for src in sources:
+        res_src = src.resolve()
+        if cache.get(str(res_src)) != get_cache_info(res_src):
+            todo.add(src)
+        else:
+            done.add(src)
+    return todo, done
+
+
+def write_cache(cache: Cache, sources: Iterable[Path], mode: Mode) -> None:
+    """Update the cache file."""
+    cache_file = get_cache_file(mode)
+    try:
+        CACHE_DIR.mkdir(parents=True, exist_ok=True)
+        new_cache = {
+            **cache,
+            **{str(src.resolve()): get_cache_info(src) for src in sources},
+        }
+        with tempfile.NamedTemporaryFile(dir=str(cache_file.parent), delete=False) as f:
+            pickle.dump(new_cache, f, protocol=4)
+        os.replace(f.name, cache_file)
+    except OSError:
+        pass
diff --git a/env/lib/python3.8/site-packages/black/comments.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/comments.cpython-38-x86_64-linux-gnu.so
new file mode 100644
index 00000000..019d79ca
Binary files /dev/null and b/env/lib/python3.8/site-packages/black/comments.cpython-38-x86_64-linux-gnu.so differ
diff --git a/env/lib/python3.8/site-packages/black/comments.py b/env/lib/python3.8/site-packages/black/comments.py
new file mode 100644
index 00000000..dce83abf
--- /dev/null
+++ b/env/lib/python3.8/site-packages/black/comments.py
@@ -0,0 +1,332 @@
+import re
+import sys
+from dataclasses import dataclass
+from functools import lru_cache
+from typing import Iterator, List, Optional, Union
+
+if sys.version_info >= (3, 8):
+    from typing import Final
+else:
+    from typing_extensions import Final
+
+from black.nodes import (
+    CLOSING_BRACKETS,
+    STANDALONE_COMMENT,
+    WHITESPACE,
+    container_of,
+    first_leaf_of,
+    preceding_leaf,
+    syms,
+)
+from blib2to3.pgen2 import token
+from blib2to3.pytree import Leaf, Node
+
+# types
+LN = Union[Leaf, Node]
+
+FMT_OFF: Final = {"# fmt: off", "# fmt:off", "# yapf: disable"}
+FMT_SKIP: Final = {"# fmt: skip", "# fmt:skip"}
+FMT_PASS: Final = {*FMT_OFF, *FMT_SKIP}
+FMT_ON: Final = {"# fmt: on", "# 
fmt:on", "# yapf: enable"} + +COMMENT_EXCEPTIONS = {True: " !:#'", False: " !:#'%"} + + +@dataclass +class ProtoComment: + """Describes a piece of syntax that is a comment. + + It's not a :class:`blib2to3.pytree.Leaf` so that: + + * it can be cached (`Leaf` objects should not be reused more than once as + they store their lineno, column, prefix, and parent information); + * `newlines` and `consumed` fields are kept separate from the `value`. This + simplifies handling of special marker comments like ``# fmt: off/on``. + """ + + type: int # token.COMMENT or STANDALONE_COMMENT + value: str # content of the comment + newlines: int # how many newlines before the comment + consumed: int # how many characters of the original leaf's prefix did we consume + + +def generate_comments(leaf: LN, *, preview: bool) -> Iterator[Leaf]: + """Clean the prefix of the `leaf` and generate comments from it, if any. + + Comments in lib2to3 are shoved into the whitespace prefix. This happens + in `pgen2/driver.py:Driver.parse_tokens()`. This was a brilliant implementation + move because it does away with modifying the grammar to include all the + possible places in which comments can be placed. + + The sad consequence for us though is that comments don't "belong" anywhere. + This is why this function generates simple parentless Leaf objects for + comments. We simply don't know what the correct parent should be. + + No matter though, we can live without this. We really only need to + differentiate between inline and standalone comments. The latter don't + share the line with any code. + + Inline comments are emitted as regular token.COMMENT leaves. Standalone + are emitted with a fake STANDALONE_COMMENT token identifier. + """ + for pc in list_comments( + leaf.prefix, is_endmarker=leaf.type == token.ENDMARKER, preview=preview + ): + yield Leaf(pc.type, pc.value, prefix="\n" * pc.newlines) + + +@lru_cache(maxsize=4096) +def list_comments( + prefix: str, *, is_endmarker: bool, preview: bool +) -> List[ProtoComment]: + """Return a list of :class:`ProtoComment` objects parsed from the given `prefix`.""" + result: List[ProtoComment] = [] + if not prefix or "#" not in prefix: + return result + + consumed = 0 + nlines = 0 + ignored_lines = 0 + for index, line in enumerate(re.split("\r?\n", prefix)): + consumed += len(line) + 1 # adding the length of the split '\n' + line = line.lstrip() + if not line: + nlines += 1 + if not line.startswith("#"): + # Escaped newlines outside of a comment are not really newlines at + # all. We treat a single-line comment following an escaped newline + # as a simple trailing comment. + if line.endswith("\\"): + ignored_lines += 1 + continue + + if index == ignored_lines and not is_endmarker: + comment_type = token.COMMENT # simple trailing comment + else: + comment_type = STANDALONE_COMMENT + comment = make_comment(line, preview=preview) + result.append( + ProtoComment( + type=comment_type, value=comment, newlines=nlines, consumed=consumed + ) + ) + nlines = 0 + return result + + +def make_comment(content: str, *, preview: bool) -> str: + """Return a consistently formatted comment from the given `content` string. + + All comments (except for "##", "#!", "#:", '#'") should have a single + space between the hash sign and the content. + + If `content` didn't start with a hash sign, one is provided. 
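+
+    For example, "#comment" becomes "# comment", "#!shebang-style" content is
+    left untouched, and an empty comment collapses to just "#".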
+ """ + content = content.rstrip() + if not content: + return "#" + + if content[0] == "#": + content = content[1:] + NON_BREAKING_SPACE = " " + if ( + content + and content[0] == NON_BREAKING_SPACE + and not content.lstrip().startswith("type:") + ): + content = " " + content[1:] # Replace NBSP by a simple space + if content and content[0] not in COMMENT_EXCEPTIONS[preview]: + content = " " + content + return "#" + content + + +def normalize_fmt_off(node: Node, *, preview: bool) -> None: + """Convert content between `# fmt: off`/`# fmt: on` into standalone comments.""" + try_again = True + while try_again: + try_again = convert_one_fmt_off_pair(node, preview=preview) + + +def convert_one_fmt_off_pair(node: Node, *, preview: bool) -> bool: + """Convert content of a single `# fmt: off`/`# fmt: on` into a standalone comment. + + Returns True if a pair was converted. + """ + for leaf in node.leaves(): + previous_consumed = 0 + for comment in list_comments(leaf.prefix, is_endmarker=False, preview=preview): + if comment.value not in FMT_PASS: + previous_consumed = comment.consumed + continue + # We only want standalone comments. If there's no previous leaf or + # the previous leaf is indentation, it's a standalone comment in + # disguise. + if comment.value in FMT_PASS and comment.type != STANDALONE_COMMENT: + prev = preceding_leaf(leaf) + if prev: + if comment.value in FMT_OFF and prev.type not in WHITESPACE: + continue + if comment.value in FMT_SKIP and prev.type in WHITESPACE: + continue + + ignored_nodes = list(generate_ignored_nodes(leaf, comment, preview=preview)) + if not ignored_nodes: + continue + + first = ignored_nodes[0] # Can be a container node with the `leaf`. + parent = first.parent + prefix = first.prefix + if comment.value in FMT_OFF: + first.prefix = prefix[comment.consumed :] + if comment.value in FMT_SKIP: + first.prefix = "" + standalone_comment_prefix = prefix + else: + standalone_comment_prefix = ( + prefix[:previous_consumed] + "\n" * comment.newlines + ) + hidden_value = "".join(str(n) for n in ignored_nodes) + if comment.value in FMT_OFF: + hidden_value = comment.value + "\n" + hidden_value + if comment.value in FMT_SKIP: + hidden_value += " " + comment.value + if hidden_value.endswith("\n"): + # That happens when one of the `ignored_nodes` ended with a NEWLINE + # leaf (possibly followed by a DEDENT). + hidden_value = hidden_value[:-1] + first_idx: Optional[int] = None + for ignored in ignored_nodes: + index = ignored.remove() + if first_idx is None: + first_idx = index + assert parent is not None, "INTERNAL ERROR: fmt: on/off handling (1)" + assert first_idx is not None, "INTERNAL ERROR: fmt: on/off handling (2)" + parent.insert_child( + first_idx, + Leaf( + STANDALONE_COMMENT, + hidden_value, + prefix=standalone_comment_prefix, + ), + ) + return True + + return False + + +def generate_ignored_nodes( + leaf: Leaf, comment: ProtoComment, *, preview: bool +) -> Iterator[LN]: + """Starting from the container of `leaf`, generate all leaves until `# fmt: on`. + + If comment is skip, returns leaf only. + Stops at the end of the block. 
+ """ + if comment.value in FMT_SKIP: + yield from _generate_ignored_nodes_from_fmt_skip(leaf, comment, preview=preview) + return + container: Optional[LN] = container_of(leaf) + while container is not None and container.type != token.ENDMARKER: + if is_fmt_on(container, preview=preview): + return + + # fix for fmt: on in children + if children_contains_fmt_on(container, preview=preview): + for child in container.children: + if isinstance(child, Leaf) and is_fmt_on(child, preview=preview): + if child.type in CLOSING_BRACKETS: + # This means `# fmt: on` is placed at a different bracket level + # than `# fmt: off`. This is an invalid use, but as a courtesy, + # we include this closing bracket in the ignored nodes. + # The alternative is to fail the formatting. + yield child + return + if children_contains_fmt_on(child, preview=preview): + return + yield child + else: + if container.type == token.DEDENT and container.next_sibling is None: + # This can happen when there is no matching `# fmt: on` comment at the + # same level as `# fmt: on`. We need to keep this DEDENT. + return + yield container + container = container.next_sibling + + +def _generate_ignored_nodes_from_fmt_skip( + leaf: Leaf, comment: ProtoComment, *, preview: bool +) -> Iterator[LN]: + """Generate all leaves that should be ignored by the `# fmt: skip` from `leaf`.""" + prev_sibling = leaf.prev_sibling + parent = leaf.parent + # Need to properly format the leaf prefix to compare it to comment.value, + # which is also formatted + comments = list_comments(leaf.prefix, is_endmarker=False, preview=preview) + if not comments or comment.value != comments[0].value: + return + if prev_sibling is not None: + leaf.prefix = "" + siblings = [prev_sibling] + while "\n" not in prev_sibling.prefix and prev_sibling.prev_sibling is not None: + prev_sibling = prev_sibling.prev_sibling + siblings.insert(0, prev_sibling) + yield from siblings + elif ( + parent is not None and parent.type == syms.suite and leaf.type == token.NEWLINE + ): + # The `# fmt: skip` is on the colon line of the if/while/def/class/... + # statements. The ignored nodes should be previous siblings of the + # parent suite node. + leaf.prefix = "" + ignored_nodes: List[LN] = [] + parent_sibling = parent.prev_sibling + while parent_sibling is not None and parent_sibling.type != syms.suite: + ignored_nodes.insert(0, parent_sibling) + parent_sibling = parent_sibling.prev_sibling + # Special case for `async_stmt` where the ASYNC token is on the + # grandparent node. + grandparent = parent.parent + if ( + grandparent is not None + and grandparent.prev_sibling is not None + and grandparent.prev_sibling.type == token.ASYNC + ): + ignored_nodes.insert(0, grandparent.prev_sibling) + yield from iter(ignored_nodes) + + +def is_fmt_on(container: LN, preview: bool) -> bool: + """Determine whether formatting is switched on within a container. + Determined by whether the last `# fmt:` comment is `on` or `off`. 
+ """ + fmt_on = False + for comment in list_comments(container.prefix, is_endmarker=False, preview=preview): + if comment.value in FMT_ON: + fmt_on = True + elif comment.value in FMT_OFF: + fmt_on = False + return fmt_on + + +def children_contains_fmt_on(container: LN, *, preview: bool) -> bool: + """Determine if children have formatting switched on.""" + for child in container.children: + leaf = first_leaf_of(child) + if leaf is not None and is_fmt_on(leaf, preview=preview): + return True + + return False + + +def contains_pragma_comment(comment_list: List[Leaf]) -> bool: + """ + Returns: + True iff one of the comments in @comment_list is a pragma used by one + of the more common static analysis tools for python (e.g. mypy, flake8, + pylint). + """ + for comment in comment_list: + if comment.value.startswith(("# type:", "# noqa", "# pylint:")): + return True + + return False diff --git a/env/lib/python3.8/site-packages/black/concurrency.py b/env/lib/python3.8/site-packages/black/concurrency.py new file mode 100644 index 00000000..10e288f4 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/concurrency.py @@ -0,0 +1,191 @@ +""" +Formatting many files at once via multiprocessing. Contains entrypoint and utilities. + +NOTE: this module is only imported if we need to format several files at once. +""" + +import asyncio +import logging +import os +import signal +import sys +from concurrent.futures import Executor, ProcessPoolExecutor, ThreadPoolExecutor +from multiprocessing import Manager +from pathlib import Path +from typing import Any, Iterable, Optional, Set + +from mypy_extensions import mypyc_attr + +from black import WriteBack, format_file_in_place +from black.cache import Cache, filter_cached, read_cache, write_cache +from black.mode import Mode +from black.output import err +from black.report import Changed, Report + + +def maybe_install_uvloop() -> None: + """If our environment has uvloop installed we use it. + + This is called only from command-line entry points to avoid + interfering with the parent process if Black is used as a library. + """ + try: + import uvloop + + uvloop.install() + except ImportError: + pass + + +def cancel(tasks: Iterable["asyncio.Task[Any]"]) -> None: + """asyncio signal handler that cancels all `tasks` and reports to stderr.""" + err("Aborted!") + for task in tasks: + task.cancel() + + +def shutdown(loop: asyncio.AbstractEventLoop) -> None: + """Cancel all pending tasks on `loop`, wait for them, and close the loop.""" + try: + if sys.version_info[:2] >= (3, 7): + all_tasks = asyncio.all_tasks + else: + all_tasks = asyncio.Task.all_tasks + # This part is borrowed from asyncio/runners.py in Python 3.7b2. + to_cancel = [task for task in all_tasks(loop) if not task.done()] + if not to_cancel: + return + + for task in to_cancel: + task.cancel() + loop.run_until_complete(asyncio.gather(*to_cancel, return_exceptions=True)) + finally: + # `concurrent.futures.Future` objects cannot be cancelled once they + # are already running. There might be some when the `shutdown()` happened. + # Silence their logger's spew about the event loop being closed. + cf_logger = logging.getLogger("concurrent.futures") + cf_logger.setLevel(logging.CRITICAL) + loop.close() + + +# diff-shades depends on being to monkeypatch this function to operate. I know it's +# not ideal, but this shouldn't cause any issues ... hopefully. 
~ichard26 +@mypyc_attr(patchable=True) +def reformat_many( + sources: Set[Path], + fast: bool, + write_back: WriteBack, + mode: Mode, + report: Report, + workers: Optional[int], +) -> None: + """Reformat multiple files using a ProcessPoolExecutor.""" + maybe_install_uvloop() + + executor: Executor + if workers is None: + workers = os.cpu_count() or 1 + if sys.platform == "win32": + # Work around https://bugs.python.org/issue26903 + workers = min(workers, 60) + try: + executor = ProcessPoolExecutor(max_workers=workers) + except (ImportError, NotImplementedError, OSError): + # we arrive here if the underlying system does not support multi-processing + # like in AWS Lambda or Termux, in which case we gracefully fallback to + # a ThreadPoolExecutor with just a single worker (more workers would not do us + # any good due to the Global Interpreter Lock) + executor = ThreadPoolExecutor(max_workers=1) + + loop = asyncio.new_event_loop() + asyncio.set_event_loop(loop) + try: + loop.run_until_complete( + schedule_formatting( + sources=sources, + fast=fast, + write_back=write_back, + mode=mode, + report=report, + loop=loop, + executor=executor, + ) + ) + finally: + try: + shutdown(loop) + finally: + asyncio.set_event_loop(None) + if executor is not None: + executor.shutdown() + + +async def schedule_formatting( + sources: Set[Path], + fast: bool, + write_back: WriteBack, + mode: Mode, + report: "Report", + loop: asyncio.AbstractEventLoop, + executor: "Executor", +) -> None: + """Run formatting of `sources` in parallel using the provided `executor`. + + (Use ProcessPoolExecutors for actual parallelism.) + + `write_back`, `fast`, and `mode` options are passed to + :func:`format_file_in_place`. + """ + cache: Cache = {} + if write_back not in (WriteBack.DIFF, WriteBack.COLOR_DIFF): + cache = read_cache(mode) + sources, cached = filter_cached(cache, sources) + for src in sorted(cached): + report.done(src, Changed.CACHED) + if not sources: + return + + cancelled = [] + sources_to_cache = [] + lock = None + if write_back in (WriteBack.DIFF, WriteBack.COLOR_DIFF): + # For diff output, we need locks to ensure we don't interleave output + # from different processes. + manager = Manager() + lock = manager.Lock() + tasks = { + asyncio.ensure_future( + loop.run_in_executor( + executor, format_file_in_place, src, fast, mode, write_back, lock + ) + ): src + for src in sorted(sources) + } + pending = tasks.keys() + try: + loop.add_signal_handler(signal.SIGINT, cancel, pending) + loop.add_signal_handler(signal.SIGTERM, cancel, pending) + except NotImplementedError: + # There are no good alternatives for these on Windows. + pass + while pending: + done, _ = await asyncio.wait(pending, return_when=asyncio.FIRST_COMPLETED) + for task in done: + src = tasks.pop(task) + if task.cancelled(): + cancelled.append(task) + elif task.exception(): + report.failed(src, str(task.exception())) + else: + changed = Changed.YES if task.result() else Changed.NO + # If the file was written back or was successfully checked as + # well-formatted, store this information in the cache. 
+ if write_back is WriteBack.YES or ( + write_back is WriteBack.CHECK and changed is Changed.NO + ): + sources_to_cache.append(src) + report.done(src, changed) + if cancelled: + await asyncio.gather(*cancelled, return_exceptions=True) + if sources_to_cache: + write_cache(cache, sources_to_cache, mode) diff --git a/env/lib/python3.8/site-packages/black/const.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/const.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..5acaa069 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/const.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/const.py b/env/lib/python3.8/site-packages/black/const.py new file mode 100644 index 00000000..0e13f315 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/const.py @@ -0,0 +1,4 @@ +DEFAULT_LINE_LENGTH = 88 +DEFAULT_EXCLUDES = r"/(\.direnv|\.eggs|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|venv|\.svn|\.ipynb_checkpoints|_build|buck-out|build|dist|__pypackages__)/" # noqa: B950 +DEFAULT_INCLUDES = r"(\.pyi?|\.ipynb)$" +STDIN_PLACEHOLDER = "__BLACK_STDIN_FILENAME__" diff --git a/env/lib/python3.8/site-packages/black/debug.py b/env/lib/python3.8/site-packages/black/debug.py new file mode 100644 index 00000000..150b4484 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/debug.py @@ -0,0 +1,47 @@ +from dataclasses import dataclass +from typing import Iterator, TypeVar, Union + +from black.nodes import Visitor +from black.output import out +from black.parsing import lib2to3_parse +from blib2to3.pgen2 import token +from blib2to3.pytree import Leaf, Node, type_repr + +LN = Union[Leaf, Node] +T = TypeVar("T") + + +@dataclass +class DebugVisitor(Visitor[T]): + tree_depth: int = 0 + + def visit_default(self, node: LN) -> Iterator[T]: + indent = " " * (2 * self.tree_depth) + if isinstance(node, Node): + _type = type_repr(node.type) + out(f"{indent}{_type}", fg="yellow") + self.tree_depth += 1 + for child in node.children: + yield from self.visit(child) + + self.tree_depth -= 1 + out(f"{indent}/{_type}", fg="yellow", bold=False) + else: + _type = token.tok_name.get(node.type, str(node.type)) + out(f"{indent}{_type}", fg="blue", nl=False) + if node.prefix: + # We don't have to handle prefixes for `Node` objects since + # that delegates to the first child anyway. + out(f" {node.prefix!r}", fg="green", bold=False, nl=False) + out(f" {node.value!r}", fg="blue", bold=False) + + @classmethod + def show(cls, code: Union[str, Leaf, Node]) -> None: + """Pretty-print the lib2to3 AST of a given string of `code`. + + Convenience method for debugging. 
+ """ + v: DebugVisitor[None] = DebugVisitor() + if isinstance(code, str): + code = lib2to3_parse(code) + list(v.visit(code)) diff --git a/env/lib/python3.8/site-packages/black/files.py b/env/lib/python3.8/site-packages/black/files.py new file mode 100644 index 00000000..ed503f5f --- /dev/null +++ b/env/lib/python3.8/site-packages/black/files.py @@ -0,0 +1,287 @@ +import io +import os +import sys +from functools import lru_cache +from pathlib import Path +from typing import ( + TYPE_CHECKING, + Any, + Dict, + Iterable, + Iterator, + List, + Optional, + Pattern, + Sequence, + Tuple, + Union, +) + +from mypy_extensions import mypyc_attr +from pathspec import PathSpec +from pathspec.patterns.gitwildmatch import GitWildMatchPatternError + +if sys.version_info >= (3, 11): + try: + import tomllib + except ImportError: + # Help users on older alphas + if not TYPE_CHECKING: + import tomli as tomllib +else: + import tomli as tomllib + +from black.handle_ipynb_magics import jupyter_dependencies_are_installed +from black.output import err +from black.report import Report + +if TYPE_CHECKING: + import colorama # noqa: F401 + + +@lru_cache() +def find_project_root( + srcs: Sequence[str], stdin_filename: Optional[str] = None +) -> Tuple[Path, str]: + """Return a directory containing .git, .hg, or pyproject.toml. + + That directory will be a common parent of all files and directories + passed in `srcs`. + + If no directory in the tree contains a marker that would specify it's the + project root, the root of the file system is returned. + + Returns a two-tuple with the first element as the project root path and + the second element as a string describing the method by which the + project root was discovered. + """ + if stdin_filename is not None: + srcs = tuple(stdin_filename if s == "-" else s for s in srcs) + if not srcs: + srcs = [str(Path.cwd().resolve())] + + path_srcs = [Path(Path.cwd(), src).resolve() for src in srcs] + + # A list of lists of parents for each 'src'. 'src' is included as a + # "parent" of itself if it is a directory + src_parents = [ + list(path.parents) + ([path] if path.is_dir() else []) for path in path_srcs + ] + + common_base = max( + set.intersection(*(set(parents) for parents in src_parents)), + key=lambda path: path.parts, + ) + + for directory in (common_base, *common_base.parents): + if (directory / ".git").exists(): + return directory, ".git directory" + + if (directory / ".hg").is_dir(): + return directory, ".hg directory" + + if (directory / "pyproject.toml").is_file(): + return directory, "pyproject.toml" + + return directory, "file system root" + + +def find_pyproject_toml(path_search_start: Tuple[str, ...]) -> Optional[str]: + """Find the absolute filepath to a pyproject.toml if it exists""" + path_project_root, _ = find_project_root(path_search_start) + path_pyproject_toml = path_project_root / "pyproject.toml" + if path_pyproject_toml.is_file(): + return str(path_pyproject_toml) + + try: + path_user_pyproject_toml = find_user_pyproject_toml() + return ( + str(path_user_pyproject_toml) + if path_user_pyproject_toml.is_file() + else None + ) + except (PermissionError, RuntimeError) as e: + # We do not have access to the user-level config directory, so ignore it. 
+ err(f"Ignoring user configuration directory due to {e!r}") + return None + + +@mypyc_attr(patchable=True) +def parse_pyproject_toml(path_config: str) -> Dict[str, Any]: + """Parse a pyproject toml file, pulling out relevant parts for Black + + If parsing fails, will raise a tomllib.TOMLDecodeError + """ + with open(path_config, "rb") as f: + pyproject_toml = tomllib.load(f) + config = pyproject_toml.get("tool", {}).get("black", {}) + return {k.replace("--", "").replace("-", "_"): v for k, v in config.items()} + + +@lru_cache() +def find_user_pyproject_toml() -> Path: + r"""Return the path to the top-level user configuration for black. + + This looks for ~\.black on Windows and ~/.config/black on Linux and other + Unix systems. + + May raise: + - RuntimeError: if the current user has no homedir + - PermissionError: if the current process cannot access the user's homedir + """ + if sys.platform == "win32": + # Windows + user_config_path = Path.home() / ".black" + else: + config_root = os.environ.get("XDG_CONFIG_HOME", "~/.config") + user_config_path = Path(config_root).expanduser() / "black" + return user_config_path.resolve() + + +@lru_cache() +def get_gitignore(root: Path) -> PathSpec: + """Return a PathSpec matching gitignore content if present.""" + gitignore = root / ".gitignore" + lines: List[str] = [] + if gitignore.is_file(): + with gitignore.open(encoding="utf-8") as gf: + lines = gf.readlines() + try: + return PathSpec.from_lines("gitwildmatch", lines) + except GitWildMatchPatternError as e: + err(f"Could not parse {gitignore}: {e}") + raise + + +def normalize_path_maybe_ignore( + path: Path, + root: Path, + report: Optional[Report] = None, +) -> Optional[str]: + """Normalize `path`. May return `None` if `path` was ignored. + + `report` is where "path ignored" output goes. + """ + try: + abspath = path if path.is_absolute() else Path.cwd() / path + normalized_path = abspath.resolve() + try: + root_relative_path = normalized_path.relative_to(root).as_posix() + except ValueError: + if report: + report.path_ignored( + path, f"is a symbolic link that points outside {root}" + ) + return None + + except OSError as e: + if report: + report.path_ignored(path, f"cannot be read because {e}") + return None + + return root_relative_path + + +def path_is_excluded( + normalized_path: str, + pattern: Optional[Pattern[str]], +) -> bool: + match = pattern.search(normalized_path) if pattern else None + return bool(match and match.group(0)) + + +def gen_python_files( + paths: Iterable[Path], + root: Path, + include: Pattern[str], + exclude: Pattern[str], + extend_exclude: Optional[Pattern[str]], + force_exclude: Optional[Pattern[str]], + report: Report, + gitignore: Optional[PathSpec], + *, + verbose: bool, + quiet: bool, +) -> Iterator[Path]: + """Generate all files under `path` whose paths are not excluded by the + `exclude_regex`, `extend_exclude`, or `force_exclude` regexes, + but are included by the `include` regex. + + Symbolic links pointing outside of the `root` directory are ignored. + + `report` is where output about exclusions goes. 
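+
+    For example, with the default patterns a file like src/foo.py is yielded,
+    while anything matching `--exclude` (e.g. under .git/) is reported to
+    `report` as ignored.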
+ """ + assert root.is_absolute(), f"INTERNAL ERROR: `root` must be absolute but is {root}" + for child in paths: + normalized_path = normalize_path_maybe_ignore(child, root, report) + if normalized_path is None: + continue + + # First ignore files matching .gitignore, if passed + if gitignore is not None and gitignore.match_file(normalized_path): + report.path_ignored(child, "matches the .gitignore file content") + continue + + # Then ignore with `--exclude` `--extend-exclude` and `--force-exclude` options. + normalized_path = "/" + normalized_path + if child.is_dir(): + normalized_path += "/" + + if path_is_excluded(normalized_path, exclude): + report.path_ignored(child, "matches the --exclude regular expression") + continue + + if path_is_excluded(normalized_path, extend_exclude): + report.path_ignored( + child, "matches the --extend-exclude regular expression" + ) + continue + + if path_is_excluded(normalized_path, force_exclude): + report.path_ignored(child, "matches the --force-exclude regular expression") + continue + + if child.is_dir(): + # If gitignore is None, gitignore usage is disabled, while a Falsey + # gitignore is when the directory doesn't have a .gitignore file. + yield from gen_python_files( + child.iterdir(), + root, + include, + exclude, + extend_exclude, + force_exclude, + report, + gitignore + get_gitignore(child) if gitignore is not None else None, + verbose=verbose, + quiet=quiet, + ) + + elif child.is_file(): + if child.suffix == ".ipynb" and not jupyter_dependencies_are_installed( + verbose=verbose, quiet=quiet + ): + continue + include_match = include.search(normalized_path) if include else True + if include_match: + yield child + + +def wrap_stream_for_windows( + f: io.TextIOWrapper, +) -> Union[io.TextIOWrapper, "colorama.AnsiToWin32"]: + """ + Wrap stream with colorama's wrap_stream so colors are shown on Windows. + + If `colorama` is unavailable, the original stream is returned unmodified. + Otherwise, the `wrap_stream()` function determines whether the stream needs + to be wrapped for a Windows environment and will accordingly either return + an `AnsiToWin32` wrapper or the original stream. + """ + try: + from colorama.initialise import wrap_stream + except ImportError: + return f + else: + # Set `strip=False` to avoid needing to modify test_express_diff_with_color. 
+ return wrap_stream(f, convert=None, strip=False, autoreset=False, wrap=True) diff --git a/env/lib/python3.8/site-packages/black/handle_ipynb_magics.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/handle_ipynb_magics.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..c9db6294 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/handle_ipynb_magics.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/handle_ipynb_magics.py b/env/lib/python3.8/site-packages/black/handle_ipynb_magics.py new file mode 100644 index 00000000..693f1a68 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/handle_ipynb_magics.py @@ -0,0 +1,459 @@ +"""Functions to process IPython magics with.""" + +import ast +import collections +import dataclasses +import secrets +import sys +from functools import lru_cache +from typing import Dict, List, Optional, Tuple + +if sys.version_info >= (3, 10): + from typing import TypeGuard +else: + from typing_extensions import TypeGuard + +from black.output import out +from black.report import NothingChanged + +TRANSFORMED_MAGICS = frozenset( + ( + "get_ipython().run_cell_magic", + "get_ipython().system", + "get_ipython().getoutput", + "get_ipython().run_line_magic", + ) +) +TOKENS_TO_IGNORE = frozenset( + ( + "ENDMARKER", + "NL", + "NEWLINE", + "COMMENT", + "DEDENT", + "UNIMPORTANT_WS", + "ESCAPED_NL", + ) +) +PYTHON_CELL_MAGICS = frozenset( + ( + "capture", + "prun", + "pypy", + "python", + "python3", + "time", + "timeit", + ) +) +TOKEN_HEX = secrets.token_hex + + +@dataclasses.dataclass(frozen=True) +class Replacement: + mask: str + src: str + + +@lru_cache() +def jupyter_dependencies_are_installed(*, verbose: bool, quiet: bool) -> bool: + try: + import IPython # noqa:F401 + import tokenize_rt # noqa:F401 + except ModuleNotFoundError: + if verbose or not quiet: + msg = ( + "Skipping .ipynb files as Jupyter dependencies are not installed.\n" + "You can fix this by running ``pip install black[jupyter]``" + ) + out(msg) + return False + else: + return True + + +def remove_trailing_semicolon(src: str) -> Tuple[str, bool]: + """Remove trailing semicolon from Jupyter notebook cell. + + For example, + + fig, ax = plt.subplots() + ax.plot(x_data, y_data); # plot data + + would become + + fig, ax = plt.subplots() + ax.plot(x_data, y_data) # plot data + + Mirrors the logic in `quiet` from `IPython.core.displayhook`, but uses + ``tokenize_rt`` so that round-tripping works fine. + """ + from tokenize_rt import reversed_enumerate, src_to_tokens, tokens_to_src + + tokens = src_to_tokens(src) + trailing_semicolon = False + for idx, token in reversed_enumerate(tokens): + if token.name in TOKENS_TO_IGNORE: + continue + if token.name == "OP" and token.src == ";": + del tokens[idx] + trailing_semicolon = True + break + if not trailing_semicolon: + return src, False + return tokens_to_src(tokens), True + + +def put_trailing_semicolon_back(src: str, has_trailing_semicolon: bool) -> str: + """Put trailing semicolon back if cell originally had it. + + Mirrors the logic in `quiet` from `IPython.core.displayhook`, but uses + ``tokenize_rt`` so that round-tripping works fine. 
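+
+    For example, if the cell originally ended with `plt.show();`, the
+    semicolon removed during masking is re-appended to the last real token.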
+ """ + if not has_trailing_semicolon: + return src + from tokenize_rt import reversed_enumerate, src_to_tokens, tokens_to_src + + tokens = src_to_tokens(src) + for idx, token in reversed_enumerate(tokens): + if token.name in TOKENS_TO_IGNORE: + continue + tokens[idx] = token._replace(src=token.src + ";") + break + else: # pragma: nocover + raise AssertionError( + "INTERNAL ERROR: Was not able to reinstate trailing semicolon. " + "Please report a bug on https://github.com/psf/black/issues. " + ) from None + return str(tokens_to_src(tokens)) + + +def mask_cell(src: str) -> Tuple[str, List[Replacement]]: + """Mask IPython magics so content becomes parseable Python code. + + For example, + + %matplotlib inline + 'foo' + + becomes + + "25716f358c32750e" + 'foo' + + The replacements are returned, along with the transformed code. + """ + replacements: List[Replacement] = [] + try: + ast.parse(src) + except SyntaxError: + # Might have IPython magics, will process below. + pass + else: + # Syntax is fine, nothing to mask, early return. + return src, replacements + + from IPython.core.inputtransformer2 import TransformerManager + + transformer_manager = TransformerManager() + transformed = transformer_manager.transform_cell(src) + transformed, cell_magic_replacements = replace_cell_magics(transformed) + replacements += cell_magic_replacements + transformed = transformer_manager.transform_cell(transformed) + transformed, magic_replacements = replace_magics(transformed) + if len(transformed.splitlines()) != len(src.splitlines()): + # Multi-line magic, not supported. + raise NothingChanged + replacements += magic_replacements + return transformed, replacements + + +def get_token(src: str, magic: str) -> str: + """Return randomly generated token to mask IPython magic with. + + For example, if 'magic' was `%matplotlib inline`, then a possible + token to mask it with would be `"43fdd17f7e5ddc83"`. The token + will be the same length as the magic, and we make sure that it was + not already present anywhere else in the cell. + """ + assert magic + nbytes = max(len(magic) // 2 - 1, 1) + token = TOKEN_HEX(nbytes) + counter = 0 + while token in src: + token = TOKEN_HEX(nbytes) + counter += 1 + if counter > 100: + raise AssertionError( + "INTERNAL ERROR: Black was not able to replace IPython magic. " + "Please report a bug on https://github.com/psf/black/issues. " + f"The magic might be helpful: {magic}" + ) from None + if len(token) + 2 < len(magic): + token = f"{token}." + return f'"{token}"' + + +def replace_cell_magics(src: str) -> Tuple[str, List[Replacement]]: + """Replace cell magic with token. + + Note that 'src' will already have been processed by IPython's + TransformerManager().transform_cell. + + Example, + + get_ipython().run_cell_magic('t', '-n1', 'ls =!ls\\n') + + becomes + + "a794." + ls =!ls + + The replacement, along with the transformed code, is returned. + """ + replacements: List[Replacement] = [] + + tree = ast.parse(src) + + cell_magic_finder = CellMagicFinder() + cell_magic_finder.visit(tree) + if cell_magic_finder.cell_magic is None: + return src, replacements + header = cell_magic_finder.cell_magic.header + mask = get_token(src, header) + replacements.append(Replacement(mask=mask, src=header)) + return f"{mask}\n{cell_magic_finder.cell_magic.body}", replacements + + +def replace_magics(src: str) -> Tuple[str, List[Replacement]]: + """Replace magics within body of cell. + + Note that 'src' will already have been processed by IPython's + TransformerManager().transform_cell. 
+ + Example, this + + get_ipython().run_line_magic('matplotlib', 'inline') + 'foo' + + becomes + + "5e67db56d490fd39" + 'foo' + + The replacement, along with the transformed code, are returned. + """ + replacements = [] + magic_finder = MagicFinder() + magic_finder.visit(ast.parse(src)) + new_srcs = [] + for i, line in enumerate(src.splitlines(), start=1): + if i in magic_finder.magics: + offsets_and_magics = magic_finder.magics[i] + if len(offsets_and_magics) != 1: # pragma: nocover + raise AssertionError( + f"Expecting one magic per line, got: {offsets_and_magics}\n" + "Please report a bug on https://github.com/psf/black/issues." + ) + col_offset, magic = ( + offsets_and_magics[0].col_offset, + offsets_and_magics[0].magic, + ) + mask = get_token(src, magic) + replacements.append(Replacement(mask=mask, src=magic)) + line = line[:col_offset] + mask + new_srcs.append(line) + return "\n".join(new_srcs), replacements + + +def unmask_cell(src: str, replacements: List[Replacement]) -> str: + """Remove replacements from cell. + + For example + + "9b20" + foo = bar + + becomes + + %%time + foo = bar + """ + for replacement in replacements: + src = src.replace(replacement.mask, replacement.src) + return src + + +def _is_ipython_magic(node: ast.expr) -> TypeGuard[ast.Attribute]: + """Check if attribute is IPython magic. + + Note that the source of the abstract syntax tree + will already have been processed by IPython's + TransformerManager().transform_cell. + """ + return ( + isinstance(node, ast.Attribute) + and isinstance(node.value, ast.Call) + and isinstance(node.value.func, ast.Name) + and node.value.func.id == "get_ipython" + ) + + +def _get_str_args(args: List[ast.expr]) -> List[str]: + str_args = [] + for arg in args: + assert isinstance(arg, ast.Str) + str_args.append(arg.s) + return str_args + + +@dataclasses.dataclass(frozen=True) +class CellMagic: + name: str + params: Optional[str] + body: str + + @property + def header(self) -> str: + if self.params: + return f"%%{self.name} {self.params}" + return f"%%{self.name}" + + +# ast.NodeVisitor + dataclass = breakage under mypyc. +class CellMagicFinder(ast.NodeVisitor): + """Find cell magics. + + Note that the source of the abstract syntax tree + will already have been processed by IPython's + TransformerManager().transform_cell. + + For example, + + %%time\nfoo() + + would have been transformed to + + get_ipython().run_cell_magic('time', '', 'foo()\\n') + + and we look for instances of the latter. + """ + + def __init__(self, cell_magic: Optional[CellMagic] = None) -> None: + self.cell_magic = cell_magic + + def visit_Expr(self, node: ast.Expr) -> None: + """Find cell magic, extract header and body.""" + if ( + isinstance(node.value, ast.Call) + and _is_ipython_magic(node.value.func) + and node.value.func.attr == "run_cell_magic" + ): + args = _get_str_args(node.value.args) + self.cell_magic = CellMagic(name=args[0], params=args[1], body=args[2]) + self.generic_visit(node) + + +@dataclasses.dataclass(frozen=True) +class OffsetAndMagic: + col_offset: int + magic: str + + +# Unsurprisingly, subclassing ast.NodeVisitor means we can't use dataclasses here +# as mypyc will generate broken code. +class MagicFinder(ast.NodeVisitor): + """Visit cell to look for get_ipython calls. + + Note that the source of the abstract syntax tree + will already have been processed by IPython's + TransformerManager().transform_cell. 
+ + For example, + + %matplotlib inline + + would have been transformed to + + get_ipython().run_line_magic('matplotlib', 'inline') + + and we look for instances of the latter (and likewise for other + types of magics). + """ + + def __init__(self) -> None: + self.magics: Dict[int, List[OffsetAndMagic]] = collections.defaultdict(list) + + def visit_Assign(self, node: ast.Assign) -> None: + """Look for system assign magics. + + For example, + + black_version = !black --version + env = %env var + + would have been (respectively) transformed to + + black_version = get_ipython().getoutput('black --version') + env = get_ipython().run_line_magic('env', 'var') + + and we look for instances of any of the latter. + """ + if isinstance(node.value, ast.Call) and _is_ipython_magic(node.value.func): + args = _get_str_args(node.value.args) + if node.value.func.attr == "getoutput": + src = f"!{args[0]}" + elif node.value.func.attr == "run_line_magic": + src = f"%{args[0]}" + if args[1]: + src += f" {args[1]}" + else: + raise AssertionError( + f"Unexpected IPython magic {node.value.func.attr!r} found. " + "Please report a bug on https://github.com/psf/black/issues." + ) from None + self.magics[node.value.lineno].append( + OffsetAndMagic(node.value.col_offset, src) + ) + self.generic_visit(node) + + def visit_Expr(self, node: ast.Expr) -> None: + """Look for magics in body of cell. + + For examples, + + !ls + !!ls + ?ls + ??ls + + would (respectively) get transformed to + + get_ipython().system('ls') + get_ipython().getoutput('ls') + get_ipython().run_line_magic('pinfo', 'ls') + get_ipython().run_line_magic('pinfo2', 'ls') + + and we look for instances of any of the latter. + """ + if isinstance(node.value, ast.Call) and _is_ipython_magic(node.value.func): + args = _get_str_args(node.value.args) + if node.value.func.attr == "run_line_magic": + if args[0] == "pinfo": + src = f"?{args[1]}" + elif args[0] == "pinfo2": + src = f"??{args[1]}" + else: + src = f"%{args[0]}" + if args[1]: + src += f" {args[1]}" + elif node.value.func.attr == "system": + src = f"!{args[0]}" + elif node.value.func.attr == "getoutput": + src = f"!!{args[0]}" + else: + raise NothingChanged # unsupported magic. + self.magics[node.value.lineno].append( + OffsetAndMagic(node.value.col_offset, src) + ) + self.generic_visit(node) diff --git a/env/lib/python3.8/site-packages/black/linegen.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/linegen.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..5a34e4fe Binary files /dev/null and b/env/lib/python3.8/site-packages/black/linegen.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/linegen.py b/env/lib/python3.8/site-packages/black/linegen.py new file mode 100644 index 00000000..a2e41bf5 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/linegen.py @@ -0,0 +1,1294 @@ +""" +Generating lines of code. 
+""" +import sys +from functools import partial, wraps +from typing import Collection, Iterator, List, Optional, Set, Union, cast + +from black.brackets import COMMA_PRIORITY, DOT_PRIORITY, max_delimiter_priority_in_atom +from black.comments import FMT_OFF, generate_comments, list_comments +from black.lines import ( + Line, + append_leaves, + can_be_split, + can_omit_invisible_parens, + is_line_short_enough, + line_to_string, +) +from black.mode import Feature, Mode, Preview +from black.nodes import ( + ASSIGNMENTS, + CLOSING_BRACKETS, + OPENING_BRACKETS, + RARROW, + STANDALONE_COMMENT, + STATEMENT, + WHITESPACE, + Visitor, + ensure_visible, + is_arith_like, + is_atom_with_invisible_parens, + is_docstring, + is_empty_tuple, + is_lpar_token, + is_multiline_string, + is_name_token, + is_one_sequence_between, + is_one_tuple, + is_rpar_token, + is_stub_body, + is_stub_suite, + is_vararg, + is_walrus_assignment, + is_yield, + syms, + wrap_in_parentheses, +) +from black.numerics import normalize_numeric_literal +from black.strings import ( + fix_docstring, + get_string_prefix, + normalize_string_prefix, + normalize_string_quotes, +) +from black.trans import ( + CannotTransform, + StringMerger, + StringParenStripper, + StringParenWrapper, + StringSplitter, + Transformer, + hug_power_op, +) +from blib2to3.pgen2 import token +from blib2to3.pytree import Leaf, Node + +# types +LeafID = int +LN = Union[Leaf, Node] + + +class CannotSplit(CannotTransform): + """A readable split that fits the allotted line length is impossible.""" + + +# This isn't a dataclass because @dataclass + Generic breaks mypyc. +# See also https://github.com/mypyc/mypyc/issues/827. +class LineGenerator(Visitor[Line]): + """Generates reformatted Line objects. Empty lines are not emitted. + + Note: destroys the tree it's visiting by mutating prefixes of its leaves + in ways that will no longer stringify to valid Python code on the tree. + """ + + def __init__(self, mode: Mode) -> None: + self.mode = mode + self.current_line: Line + self.__post_init__() + + def line(self, indent: int = 0) -> Iterator[Line]: + """Generate a line. + + If the line is empty, only emit if it makes sense. + If the line is too long, split it first and then generate. + + If any lines were generated, set up a new current_line. + """ + if not self.current_line: + self.current_line.depth += indent + return # Line is empty, don't emit. Creating a new one unnecessary. + + complete_line = self.current_line + self.current_line = Line(mode=self.mode, depth=complete_line.depth + indent) + yield complete_line + + def visit_default(self, node: LN) -> Iterator[Line]: + """Default `visit_*()` implementation. 
Recurses to children of `node`."""
+        if isinstance(node, Leaf):
+            any_open_brackets = self.current_line.bracket_tracker.any_open_brackets()
+            for comment in generate_comments(node, preview=self.mode.preview):
+                if any_open_brackets:
+                    # any comment within brackets is subject to splitting
+                    self.current_line.append(comment)
+                elif comment.type == token.COMMENT:
+                    # regular trailing comment
+                    self.current_line.append(comment)
+                    yield from self.line()
+
+                else:
+                    # regular standalone comment
+                    yield from self.line()
+
+                    self.current_line.append(comment)
+                    yield from self.line()
+
+            normalize_prefix(node, inside_brackets=any_open_brackets)
+            if self.mode.string_normalization and node.type == token.STRING:
+                node.value = normalize_string_prefix(node.value)
+                node.value = normalize_string_quotes(node.value)
+            if node.type == token.NUMBER:
+                normalize_numeric_literal(node)
+            if node.type not in WHITESPACE:
+                self.current_line.append(node)
+        yield from super().visit_default(node)
+
+    def visit_INDENT(self, node: Leaf) -> Iterator[Line]:
+        """Increase indentation level, maybe yield a line."""
+        # In blib2to3 INDENT never holds comments.
+        yield from self.line(+1)
+        yield from self.visit_default(node)
+
+    def visit_DEDENT(self, node: Leaf) -> Iterator[Line]:
+        """Decrease indentation level, maybe yield a line."""
+        # The current line might still wait for trailing comments. At DEDENT time
+        # there won't be any (they would be prefixes on the preceding NEWLINE).
+        # Emit the line then.
+        yield from self.line()
+
+        # While DEDENT has no value, its prefix may contain standalone comments
+        # that belong to the current indentation level. Get 'em.
+        yield from self.visit_default(node)
+
+        # Finally, emit the dedent.
+        yield from self.line(-1)
+
+    def visit_stmt(
+        self, node: Node, keywords: Set[str], parens: Set[str]
+    ) -> Iterator[Line]:
+        """Visit a statement.
+
+        This implementation is shared for `if`, `while`, `for`, `try`, `except`,
+        `def`, `with`, `class`, `assert`, and assignments.
+
+        The relevant Python language `keywords` for a given statement will be
+        NAME leaves within it. This method puts those on a separate line.
+
+        `parens` holds a set of string leaf values immediately after which
+        invisible parens should be put.
+        """
+        normalize_invisible_parens(node, parens_after=parens, preview=self.mode.preview)
+        for child in node.children:
+            if is_name_token(child) and child.value in keywords:
+                yield from self.line()
+
+            yield from self.visit(child)
+
+    def visit_funcdef(self, node: Node) -> Iterator[Line]:
+        """Visit function definition."""
+        if Preview.annotation_parens not in self.mode:
+            yield from self.visit_stmt(node, keywords={"def"}, parens=set())
+        else:
+            yield from self.line()
+
+            # Remove redundant brackets around return type annotation.
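+            # For example, `def f() -> (int):` is reformatted as
+            # `def f() -> int:` by making the parentheses invisible.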
+ is_return_annotation = False + for child in node.children: + if child.type == token.RARROW: + is_return_annotation = True + elif is_return_annotation: + if child.type == syms.atom and child.children[0].type == token.LPAR: + if maybe_make_parens_invisible_in_atom( + child, + parent=node, + remove_brackets_around_comma=False, + ): + wrap_in_parentheses(node, child, visible=False) + else: + wrap_in_parentheses(node, child, visible=False) + is_return_annotation = False + + for child in node.children: + yield from self.visit(child) + + def visit_match_case(self, node: Node) -> Iterator[Line]: + """Visit either a match or case statement.""" + normalize_invisible_parens(node, parens_after=set(), preview=self.mode.preview) + + yield from self.line() + for child in node.children: + yield from self.visit(child) + + def visit_suite(self, node: Node) -> Iterator[Line]: + """Visit a suite.""" + if self.mode.is_pyi and is_stub_suite(node): + yield from self.visit(node.children[2]) + else: + yield from self.visit_default(node) + + def visit_simple_stmt(self, node: Node) -> Iterator[Line]: + """Visit a statement without nested statements.""" + prev_type: Optional[int] = None + for child in node.children: + if (prev_type is None or prev_type == token.SEMI) and is_arith_like(child): + wrap_in_parentheses(node, child, visible=False) + prev_type = child.type + + is_suite_like = node.parent and node.parent.type in STATEMENT + if is_suite_like: + if self.mode.is_pyi and is_stub_body(node): + yield from self.visit_default(node) + else: + yield from self.line(+1) + yield from self.visit_default(node) + yield from self.line(-1) + + else: + if ( + not self.mode.is_pyi + or not node.parent + or not is_stub_suite(node.parent) + ): + yield from self.line() + yield from self.visit_default(node) + + def visit_async_stmt(self, node: Node) -> Iterator[Line]: + """Visit `async def`, `async for`, `async with`.""" + yield from self.line() + + children = iter(node.children) + for child in children: + yield from self.visit(child) + + if child.type == token.ASYNC or child.type == STANDALONE_COMMENT: + # STANDALONE_COMMENT happens when `# fmt: skip` is applied on the async + # line. + break + + internal_stmt = next(children) + for child in internal_stmt.children: + yield from self.visit(child) + + def visit_decorators(self, node: Node) -> Iterator[Line]: + """Visit decorators.""" + for child in node.children: + yield from self.line() + yield from self.visit(child) + + def visit_power(self, node: Node) -> Iterator[Line]: + for idx, leaf in enumerate(node.children[:-1]): + next_leaf = node.children[idx + 1] + + if not isinstance(leaf, Leaf): + continue + + value = leaf.value.lower() + if ( + leaf.type == token.NUMBER + and next_leaf.type == syms.trailer + # Ensure that we are in an attribute trailer + and next_leaf.children[0].type == token.DOT + # It shouldn't wrap hexadecimal, binary and octal literals + and not value.startswith(("0x", "0b", "0o")) + # It shouldn't wrap complex literals + and "j" not in value + ): + wrap_in_parentheses(node, leaf) + + if Preview.remove_redundant_parens in self.mode: + remove_await_parens(node) + + yield from self.visit_default(node) + + def visit_SEMI(self, leaf: Leaf) -> Iterator[Line]: + """Remove a semicolon and put the other statement on a separate line.""" + yield from self.line() + + def visit_ENDMARKER(self, leaf: Leaf) -> Iterator[Line]: + """End of file. 
Process outstanding comments and end with a newline."""
+        yield from self.visit_default(leaf)
+        yield from self.line()
+
+    def visit_STANDALONE_COMMENT(self, leaf: Leaf) -> Iterator[Line]:
+        if not self.current_line.bracket_tracker.any_open_brackets():
+            yield from self.line()
+        yield from self.visit_default(leaf)
+
+    def visit_factor(self, node: Node) -> Iterator[Line]:
+        """Force parentheses between a unary op and a binary power:
+
+        -2 ** 8 -> -(2 ** 8)
+        """
+        _operator, operand = node.children
+        if (
+            operand.type == syms.power
+            and len(operand.children) == 3
+            and operand.children[1].type == token.DOUBLESTAR
+        ):
+            lpar = Leaf(token.LPAR, "(")
+            rpar = Leaf(token.RPAR, ")")
+            index = operand.remove() or 0
+            node.insert_child(index, Node(syms.atom, [lpar, operand, rpar]))
+        yield from self.visit_default(node)
+
+    def visit_STRING(self, leaf: Leaf) -> Iterator[Line]:
+        if is_docstring(leaf) and "\\\n" not in leaf.value:
+            # We're ignoring docstrings with backslash newline escapes because changing
+            # indentation of those changes the AST representation of the code.
+            if Preview.normalize_docstring_quotes_and_prefixes_properly in self.mode:
+                # There was a bug where --skip-string-normalization wouldn't stop us
+                # from normalizing docstring prefixes. To maintain stability, we can
+                # only address this buggy behaviour while the preview style is enabled.
+                if self.mode.string_normalization:
+                    docstring = normalize_string_prefix(leaf.value)
+                    # visit_default() does handle string normalization for us, but
+                    # since this method acts differently depending on quote style (ex.
+                    # see padding logic below), there's a possibility for unstable
+                    # formatting as visit_default() is called *after*. To avoid a
+                    # situation where this function formats a docstring differently on
+                    # the second pass, normalize it early.
+                    docstring = normalize_string_quotes(docstring)
+                else:
+                    docstring = leaf.value
+            else:
+                # ... otherwise, we'll keep the buggy behaviour >.<
+                docstring = normalize_string_prefix(leaf.value)
+            prefix = get_string_prefix(docstring)
+            docstring = docstring[len(prefix) :]  # Remove the prefix
+            quote_char = docstring[0]
+            # A natural way to remove the outer quotes is to do:
+            # docstring = docstring.strip(quote_char)
+            # but that breaks on """""x""" (which is '""x').
+            # So we actually need to remove the first character and the next two
+            # characters but only if they are the same as the first.
+            quote_len = 1 if docstring[1] != quote_char else 3
+            docstring = docstring[quote_len:-quote_len]
+            docstring_started_empty = not docstring
+            indent = " " * 4 * self.current_line.depth
+
+            if is_multiline_string(leaf):
+                docstring = fix_docstring(docstring, indent)
+            else:
+                docstring = docstring.strip()
+
+            if docstring:
+                # Add some padding if the docstring starts / ends with a quote mark.
+                if docstring[0] == quote_char:
+                    docstring = " " + docstring
+                if docstring[-1] == quote_char:
+                    docstring += " "
+                if docstring[-1] == "\\":
+                    backslash_count = len(docstring) - len(docstring.rstrip("\\"))
+                    if backslash_count % 2:
+                        # Odd number of trailing backslashes, add some padding to
+                        # avoid escaping the closing string quote.
+                        docstring += " "
+            elif not docstring_started_empty:
+                docstring = " "
+
+            # We could enforce triple quotes at this point.
+            quote = quote_char * quote_len
+
+            # It's invalid to put closing single-character quotes on a new line.
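+            # So the "closing quotes on their own line" handling below only
+            # applies to triple-quoted docstrings (quote_len == 3).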
+ if Preview.long_docstring_quotes_on_newline in self.mode and quote_len == 3: + # We need to find the length of the last line of the docstring + # to find if we can add the closing quotes to the line without + # exceeding the maximum line length. + # If docstring is one line, then we need to add the length + # of the indent, prefix, and starting quotes. Ending quotes are + # handled later. + lines = docstring.splitlines() + last_line_length = len(lines[-1]) if docstring else 0 + + if len(lines) == 1: + last_line_length += len(indent) + len(prefix) + quote_len + + # If adding closing quotes would cause the last line to exceed + # the maximum line length then put a line break before the + # closing quotes + if last_line_length + quote_len > self.mode.line_length: + leaf.value = prefix + quote + docstring + "\n" + indent + quote + else: + leaf.value = prefix + quote + docstring + quote + else: + leaf.value = prefix + quote + docstring + quote + + yield from self.visit_default(leaf) + + def __post_init__(self) -> None: + """You are in a twisty little maze of passages.""" + self.current_line = Line(mode=self.mode) + + v = self.visit_stmt + Ø: Set[str] = set() + self.visit_assert_stmt = partial(v, keywords={"assert"}, parens={"assert", ","}) + self.visit_if_stmt = partial( + v, keywords={"if", "else", "elif"}, parens={"if", "elif"} + ) + self.visit_while_stmt = partial(v, keywords={"while", "else"}, parens={"while"}) + self.visit_for_stmt = partial(v, keywords={"for", "else"}, parens={"for", "in"}) + self.visit_try_stmt = partial( + v, keywords={"try", "except", "else", "finally"}, parens=Ø + ) + if self.mode.preview: + self.visit_except_clause = partial( + v, keywords={"except"}, parens={"except"} + ) + self.visit_with_stmt = partial(v, keywords={"with"}, parens={"with"}) + else: + self.visit_except_clause = partial(v, keywords={"except"}, parens=Ø) + self.visit_with_stmt = partial(v, keywords={"with"}, parens=Ø) + self.visit_classdef = partial(v, keywords={"class"}, parens=Ø) + self.visit_expr_stmt = partial(v, keywords=Ø, parens=ASSIGNMENTS) + self.visit_return_stmt = partial(v, keywords={"return"}, parens={"return"}) + self.visit_import_from = partial(v, keywords=Ø, parens={"import"}) + self.visit_del_stmt = partial(v, keywords=Ø, parens={"del"}) + self.visit_async_funcdef = self.visit_async_stmt + self.visit_decorated = self.visit_decorators + + # PEP 634 + self.visit_match_stmt = self.visit_match_case + self.visit_case_block = self.visit_match_case + + +def transform_line( + line: Line, mode: Mode, features: Collection[Feature] = () +) -> Iterator[Line]: + """Transform a `line`, potentially splitting it into many lines. + + They should fit in the allotted `line_length` but might not be able to. + + `features` are syntactical features that may be used in the output. 
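+
+    For example, a line that already fits within `mode.line_length` is at most
+    run through the string preprocessors, while an overlong `def` line is
+    handed to `left_hand_split`.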
+ """ + if line.is_comment: + yield line + return + + line_str = line_to_string(line) + + ll = mode.line_length + sn = mode.string_normalization + string_merge = StringMerger(ll, sn) + string_paren_strip = StringParenStripper(ll, sn) + string_split = StringSplitter(ll, sn) + string_paren_wrap = StringParenWrapper(ll, sn) + + transformers: List[Transformer] + if ( + not line.contains_uncollapsable_type_comments() + and not line.should_split_rhs + and not line.magic_trailing_comma + and ( + is_line_short_enough(line, line_length=mode.line_length, line_str=line_str) + or line.contains_unsplittable_type_ignore() + ) + and not (line.inside_brackets and line.contains_standalone_comments()) + ): + # Only apply basic string preprocessing, since lines shouldn't be split here. + if Preview.string_processing in mode: + transformers = [string_merge, string_paren_strip] + else: + transformers = [] + elif line.is_def: + transformers = [left_hand_split] + else: + + def _rhs( + self: object, line: Line, features: Collection[Feature] + ) -> Iterator[Line]: + """Wraps calls to `right_hand_split`. + + The calls increasingly `omit` right-hand trailers (bracket pairs with + content), meaning the trailers get glued together to split on another + bracket pair instead. + """ + for omit in generate_trailers_to_omit(line, mode.line_length): + lines = list( + right_hand_split(line, mode.line_length, features, omit=omit) + ) + # Note: this check is only able to figure out if the first line of the + # *current* transformation fits in the line length. This is true only + # for simple cases. All others require running more transforms via + # `transform_line()`. This check doesn't know if those would succeed. + if is_line_short_enough(lines[0], line_length=mode.line_length): + yield from lines + return + + # All splits failed, best effort split with no omits. + # This mostly happens to multiline strings that are by definition + # reported as not fitting a single line, as well as lines that contain + # trailing commas (those have to be exploded). + yield from right_hand_split( + line, line_length=mode.line_length, features=features + ) + + # HACK: nested functions (like _rhs) compiled by mypyc don't retain their + # __name__ attribute which is needed in `run_transformer` further down. + # Unfortunately a nested class breaks mypyc too. So a class must be created + # via type ... https://github.com/mypyc/mypyc/issues/884 + rhs = type("rhs", (), {"__call__": _rhs})() + + if Preview.string_processing in mode: + if line.inside_brackets: + transformers = [ + string_merge, + string_paren_strip, + string_split, + delimiter_split, + standalone_comment_split, + string_paren_wrap, + rhs, + ] + else: + transformers = [ + string_merge, + string_paren_strip, + string_split, + string_paren_wrap, + rhs, + ] + else: + if line.inside_brackets: + transformers = [delimiter_split, standalone_comment_split, rhs] + else: + transformers = [rhs] + # It's always safe to attempt hugging of power operations and pretty much every line + # could match. + transformers.append(hug_power_op) + + for transform in transformers: + # We are accumulating lines in `result` because we might want to abort + # mission and return the original line in the end, or attempt a different + # split altogether. 
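Condensed to its control flow, the driver loop that follows is a first-success pipeline built on `for ... else`: the first transformer that succeeds wins, and if every one refuses, the line falls through unchanged. A minimal model of that shape, with stand-in types rather than Black's own:

```python
from typing import Callable, Iterable, Iterator, List


class CannotTransform(Exception):
    """Raised by a transformer that does not apply to the given line."""


def apply_first(
    line: str, transformers: Iterable[Callable[[str], List[str]]]
) -> Iterator[str]:
    for transform in transformers:
        try:
            result = transform(line)
        except CannotTransform:
            continue  # this transformer refused; try the next one
        else:
            yield from result
            break
    else:
        # No transformer applied: the original line is the result.
        yield line
```

The real driver additionally recurses through `run_transformer` back into `transform_line`; the sketch keeps only the fallback structure.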
+ try: + result = run_transformer(line, transform, mode, features, line_str=line_str) + except CannotTransform: + continue + else: + yield from result + break + + else: + yield line + + +def left_hand_split(line: Line, _features: Collection[Feature] = ()) -> Iterator[Line]: + """Split line into many lines, starting with the first matching bracket pair. + + Note: this usually looks weird, only use this for function definitions. + Prefer RHS otherwise. This is why this function is not symmetrical with + :func:`right_hand_split` which also handles optional parentheses. + """ + tail_leaves: List[Leaf] = [] + body_leaves: List[Leaf] = [] + head_leaves: List[Leaf] = [] + current_leaves = head_leaves + matching_bracket: Optional[Leaf] = None + for leaf in line.leaves: + if ( + current_leaves is body_leaves + and leaf.type in CLOSING_BRACKETS + and leaf.opening_bracket is matching_bracket + and isinstance(matching_bracket, Leaf) + ): + ensure_visible(leaf) + ensure_visible(matching_bracket) + current_leaves = tail_leaves if body_leaves else head_leaves + current_leaves.append(leaf) + if current_leaves is head_leaves: + if leaf.type in OPENING_BRACKETS: + matching_bracket = leaf + current_leaves = body_leaves + if not matching_bracket: + raise CannotSplit("No brackets found") + + head = bracket_split_build_line(head_leaves, line, matching_bracket) + body = bracket_split_build_line(body_leaves, line, matching_bracket, is_body=True) + tail = bracket_split_build_line(tail_leaves, line, matching_bracket) + bracket_split_succeeded_or_raise(head, body, tail) + for result in (head, body, tail): + if result: + yield result + + +def right_hand_split( + line: Line, + line_length: int, + features: Collection[Feature] = (), + omit: Collection[LeafID] = (), +) -> Iterator[Line]: + """Split line into many lines, starting with the last matching bracket pair. + + If the split was by optional parentheses, attempt splitting without them, too. + `omit` is a collection of closing bracket IDs that shouldn't be considered for + this split. + + Note: running this function modifies `bracket_depth` on the leaves of `line`. + """ + tail_leaves: List[Leaf] = [] + body_leaves: List[Leaf] = [] + head_leaves: List[Leaf] = [] + current_leaves = tail_leaves + opening_bracket: Optional[Leaf] = None + closing_bracket: Optional[Leaf] = None + for leaf in reversed(line.leaves): + if current_leaves is body_leaves: + if leaf is opening_bracket: + current_leaves = head_leaves if body_leaves else tail_leaves + current_leaves.append(leaf) + if current_leaves is tail_leaves: + if leaf.type in CLOSING_BRACKETS and id(leaf) not in omit: + opening_bracket = leaf.opening_bracket + closing_bracket = leaf + current_leaves = body_leaves + if not (opening_bracket and closing_bracket and head_leaves): + # If there is no opening or closing_bracket that means the split failed and + # all content is in the tail. Otherwise, if `head_leaves` are empty, it means + # the matching `opening_bracket` wasn't available on `line` anymore. 
+ raise CannotSplit("No brackets found") + + tail_leaves.reverse() + body_leaves.reverse() + head_leaves.reverse() + head = bracket_split_build_line(head_leaves, line, opening_bracket) + body = bracket_split_build_line(body_leaves, line, opening_bracket, is_body=True) + tail = bracket_split_build_line(tail_leaves, line, opening_bracket) + bracket_split_succeeded_or_raise(head, body, tail) + if ( + Feature.FORCE_OPTIONAL_PARENTHESES not in features + # the opening bracket is an optional paren + and opening_bracket.type == token.LPAR + and not opening_bracket.value + # the closing bracket is an optional paren + and closing_bracket.type == token.RPAR + and not closing_bracket.value + # it's not an import (optional parens are the only thing we can split on + # in this case; attempting a split without them is a waste of time) + and not line.is_import + # there are no standalone comments in the body + and not body.contains_standalone_comments(0) + # and we can actually remove the parens + and can_omit_invisible_parens(body, line_length) + ): + omit = {id(closing_bracket), *omit} + try: + yield from right_hand_split(line, line_length, features=features, omit=omit) + return + + except CannotSplit as e: + if not ( + can_be_split(body) + or is_line_short_enough(body, line_length=line_length) + ): + raise CannotSplit( + "Splitting failed, body is still too long and can't be split." + ) from e + + elif head.contains_multiline_strings() or tail.contains_multiline_strings(): + raise CannotSplit( + "The current optional pair of parentheses is bound to fail to" + " satisfy the splitting algorithm because the head or the tail" + " contains multiline strings which by definition never fit one" + " line." + ) from e + + ensure_visible(opening_bracket) + ensure_visible(closing_bracket) + for result in (head, body, tail): + if result: + yield result + + +def bracket_split_succeeded_or_raise(head: Line, body: Line, tail: Line) -> None: + """Raise :exc:`CannotSplit` if the last left- or right-hand split failed. + + Do nothing otherwise. + + A left- or right-hand split is based on a pair of brackets. Content before + (and including) the opening bracket is left on one line, content inside the + brackets is put on a separate line, and finally content starting with and + following the closing bracket is put on a separate line. + + Those are called `head`, `body`, and `tail`, respectively. If the split + produced the same line (all content in `head`) or ended up with an empty `body` + and the `tail` is just the closing bracket, then it's considered failed. + """ + tail_len = len(str(tail).strip()) + if not body: + if tail_len == 0: + raise CannotSplit("Splitting brackets produced the same line") + + elif tail_len < 3: + raise CannotSplit( + f"Splitting brackets on an empty body to save {tail_len} characters is" + " not worth it" + ) + + +def bracket_split_build_line( + leaves: List[Leaf], original: Line, opening_bracket: Leaf, *, is_body: bool = False +) -> Line: + """Return a new line with given `leaves` and respective comments from `original`. + + If `is_body` is True, the result line is one-indented inside brackets and as such + has its first leaf's prefix normalized and a trailing comma added when expected. + """ + result = Line(mode=original.mode, depth=original.depth) + if is_body: + result.inside_brackets = True + result.depth += 1 + if leaves: + # Since body is a new indent level, remove spurious leading whitespace. 
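For reference while reading `bracket_split_build_line`, this is the head/body/tail decomposition described above, hand-worked on a concrete line (the names are invented for illustration, not output of any particular Black version):

```python
# Original, too long for the target width:
#     result = frobnicate(alpha, beta, gamma)
#
# After a right-hand split on its last bracket pair:
head = "result = frobnicate("    # up to and including the opening bracket
body = "    alpha, beta, gamma"  # bracket contents, one indent level deeper
tail = ")"                       # the closing bracket and everything after
```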
+            normalize_prefix(leaves[0], inside_brackets=True)
+            # Ensure a trailing comma for imports and standalone function arguments, but
+            # be careful not to add one after any comments or within type annotations.
+            no_commas = (
+                original.is_def
+                and opening_bracket.value == "("
+                and not any(leaf.type == token.COMMA for leaf in leaves)
+                # In particular, don't add one within a parenthesized return annotation.
+                # Unfortunately the indicator we're in a return annotation (RARROW) may
+                # be defined directly in the parent node, the parent of the parent ...
+                # and so on depending on how complex the return annotation is.
+                # This isn't perfect and there are some false negatives, but they are in
+                # contexts where a comma is actually fine.
+                and not any(
+                    node.prev_sibling.type == RARROW
+                    for node in (
+                        leaves[0].parent,
+                        getattr(leaves[0].parent, "parent", None),
+                    )
+                    if isinstance(node, Node) and isinstance(node.prev_sibling, Leaf)
+                )
+            )
+
+            if original.is_import or no_commas:
+                for i in range(len(leaves) - 1, -1, -1):
+                    if leaves[i].type == STANDALONE_COMMENT:
+                        continue
+
+                    if leaves[i].type != token.COMMA:
+                        new_comma = Leaf(token.COMMA, ",")
+                        leaves.insert(i + 1, new_comma)
+                    break
+
+    # Populate the line
+    for leaf in leaves:
+        result.append(leaf, preformatted=True)
+        for comment_after in original.comments_after(leaf):
+            result.append(comment_after, preformatted=True)
+    if is_body and should_split_line(result, opening_bracket):
+        result.should_split_rhs = True
+    return result
+
+
+def dont_increase_indentation(split_func: Transformer) -> Transformer:
+    """Normalize prefix of the first leaf in every line returned by `split_func`.
+
+    This is a decorator over relevant split functions.
+    """
+
+    @wraps(split_func)
+    def split_wrapper(line: Line, features: Collection[Feature] = ()) -> Iterator[Line]:
+        for split_line in split_func(line, features):
+            normalize_prefix(split_line.leaves[0], inside_brackets=True)
+            yield split_line
+
+    return split_wrapper
+
+
+@dont_increase_indentation
+def delimiter_split(line: Line, features: Collection[Feature] = ()) -> Iterator[Line]:
+    """Split according to delimiters of the highest priority.
+
+    If the appropriate Features are given, the split will add trailing commas
+    also in function signatures and calls that contain `*` and `**`.
+ """ + try: + last_leaf = line.leaves[-1] + except IndexError: + raise CannotSplit("Line empty") from None + + bt = line.bracket_tracker + try: + delimiter_priority = bt.max_delimiter_priority(exclude={id(last_leaf)}) + except ValueError: + raise CannotSplit("No delimiters found") from None + + if delimiter_priority == DOT_PRIORITY: + if bt.delimiter_count_with_priority(delimiter_priority) == 1: + raise CannotSplit("Splitting a single attribute from its owner looks wrong") + + current_line = Line( + mode=line.mode, depth=line.depth, inside_brackets=line.inside_brackets + ) + lowest_depth = sys.maxsize + trailing_comma_safe = True + + def append_to_line(leaf: Leaf) -> Iterator[Line]: + """Append `leaf` to current line or to new line if appending impossible.""" + nonlocal current_line + try: + current_line.append_safe(leaf, preformatted=True) + except ValueError: + yield current_line + + current_line = Line( + mode=line.mode, depth=line.depth, inside_brackets=line.inside_brackets + ) + current_line.append(leaf) + + for leaf in line.leaves: + yield from append_to_line(leaf) + + for comment_after in line.comments_after(leaf): + yield from append_to_line(comment_after) + + lowest_depth = min(lowest_depth, leaf.bracket_depth) + if leaf.bracket_depth == lowest_depth: + if is_vararg(leaf, within={syms.typedargslist}): + trailing_comma_safe = ( + trailing_comma_safe and Feature.TRAILING_COMMA_IN_DEF in features + ) + elif is_vararg(leaf, within={syms.arglist, syms.argument}): + trailing_comma_safe = ( + trailing_comma_safe and Feature.TRAILING_COMMA_IN_CALL in features + ) + + leaf_priority = bt.delimiters.get(id(leaf)) + if leaf_priority == delimiter_priority: + yield current_line + + current_line = Line( + mode=line.mode, depth=line.depth, inside_brackets=line.inside_brackets + ) + if current_line: + if ( + trailing_comma_safe + and delimiter_priority == COMMA_PRIORITY + and current_line.leaves[-1].type != token.COMMA + and current_line.leaves[-1].type != STANDALONE_COMMENT + ): + new_comma = Leaf(token.COMMA, ",") + current_line.append(new_comma) + yield current_line + + +@dont_increase_indentation +def standalone_comment_split( + line: Line, features: Collection[Feature] = () +) -> Iterator[Line]: + """Split standalone comments from the rest of the line.""" + if not line.contains_standalone_comments(0): + raise CannotSplit("Line does not have any standalone comments") + + current_line = Line( + mode=line.mode, depth=line.depth, inside_brackets=line.inside_brackets + ) + + def append_to_line(leaf: Leaf) -> Iterator[Line]: + """Append `leaf` to current line or to new line if appending impossible.""" + nonlocal current_line + try: + current_line.append_safe(leaf, preformatted=True) + except ValueError: + yield current_line + + current_line = Line( + line.mode, depth=line.depth, inside_brackets=line.inside_brackets + ) + current_line.append(leaf) + + for leaf in line.leaves: + yield from append_to_line(leaf) + + for comment_after in line.comments_after(leaf): + yield from append_to_line(comment_after) + + if current_line: + yield current_line + + +def normalize_prefix(leaf: Leaf, *, inside_brackets: bool) -> None: + """Leave existing extra newlines if not `inside_brackets`. Remove everything + else. + + Note: don't use backslashes for formatting or you'll lose your voting rights. 
+    """
+    if not inside_brackets:
+        spl = leaf.prefix.split("#")
+        if "\\" not in spl[0]:
+            nl_count = spl[-1].count("\n")
+            if len(spl) > 1:
+                nl_count -= 1
+            leaf.prefix = "\n" * nl_count
+            return
+
+    leaf.prefix = ""
+
+
+def normalize_invisible_parens(
+    node: Node, parens_after: Set[str], *, preview: bool
+) -> None:
+    """Make existing optional parentheses invisible or create new ones.
+
+    `parens_after` is a set of string leaf values immediately after which parens
+    should be put.
+
+    Standardizes on visible parentheses for single-element tuples, and keeps
+    existing visible parentheses for other tuples and generator expressions.
+    """
+    for pc in list_comments(node.prefix, is_endmarker=False, preview=preview):
+        if pc.value in FMT_OFF:
+            # This `node` has a prefix with `# fmt: off`, don't mess with parens.
+            return
+    check_lpar = False
+    for index, child in enumerate(list(node.children)):
+        # Fixes a bug where invisible parens are not properly stripped from
+        # assignment statements that contain type annotations.
+        if isinstance(child, Node) and child.type == syms.annassign:
+            normalize_invisible_parens(
+                child, parens_after=parens_after, preview=preview
+            )
+
+        # Add parentheses around long tuple unpacking in assignments.
+        if (
+            index == 0
+            and isinstance(child, Node)
+            and child.type == syms.testlist_star_expr
+        ):
+            check_lpar = True
+
+        if check_lpar:
+            if (
+                preview
+                and child.type == syms.atom
+                and node.type == syms.for_stmt
+                and isinstance(child.prev_sibling, Leaf)
+                and child.prev_sibling.type == token.NAME
+                and child.prev_sibling.value == "for"
+            ):
+                if maybe_make_parens_invisible_in_atom(
+                    child,
+                    parent=node,
+                    remove_brackets_around_comma=True,
+                ):
+                    wrap_in_parentheses(node, child, visible=False)
+            elif preview and isinstance(child, Node) and node.type == syms.with_stmt:
+                remove_with_parens(child, node)
+            elif child.type == syms.atom:
+                if maybe_make_parens_invisible_in_atom(
+                    child,
+                    parent=node,
+                ):
+                    wrap_in_parentheses(node, child, visible=False)
+            elif is_one_tuple(child):
+                wrap_in_parentheses(node, child, visible=True)
+            elif node.type == syms.import_from:
+                # "import from" nodes store parentheses directly as part of
+                # the statement
+                if is_lpar_token(child):
+                    assert is_rpar_token(node.children[-1])
+                    # make parentheses invisible
+                    child.value = ""
+                    node.children[-1].value = ""
+                elif child.type != token.STAR:
+                    # insert invisible parentheses
+                    node.insert_child(index, Leaf(token.LPAR, ""))
+                    node.append_child(Leaf(token.RPAR, ""))
+                break
+            elif (
+                index == 1
+                and child.type == token.STAR
+                and node.type == syms.except_clause
+            ):
+                # In except* (PEP 654), the star is actually part of
+                # the keyword. So we need to skip the insertion of
+                # invisible parentheses to work more precisely.
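Throughout this pass, an "invisible" parenthesis is an ordinary `LPAR`/`RPAR` leaf whose `value` is the empty string: it gives the splitter a bracket pair to break on while rendering as nothing. A toy sketch of that rendering behaviour, using a stand-in class rather than blib2to3's `Leaf`:

```python
class Paren:
    """Toy stand-in for a blib2to3 Leaf that holds a parenthesis."""

    def __init__(self, value: str) -> None:
        self.value = value  # "" renders as nothing: an "invisible" paren

    def __str__(self) -> str:
        return self.value


lpar, rpar = Paren(""), Paren("")
print(f"{lpar}1, 2, 3{rpar}")  # -> 1, 2, 3   (the pair is invisible)

lpar.value, rpar.value = "(", ")"  # ensure_visible(), in spirit
print(f"{lpar}1, 2, 3{rpar}")  # -> (1, 2, 3)
```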
+ continue + + elif not (isinstance(child, Leaf) and is_multiline_string(child)): + wrap_in_parentheses(node, child, visible=False) + + comma_check = child.type == token.COMMA if preview else False + + check_lpar = isinstance(child, Leaf) and ( + child.value in parens_after or comma_check + ) + + +def remove_await_parens(node: Node) -> None: + if node.children[0].type == token.AWAIT and len(node.children) > 1: + if ( + node.children[1].type == syms.atom + and node.children[1].children[0].type == token.LPAR + ): + if maybe_make_parens_invisible_in_atom( + node.children[1], + parent=node, + remove_brackets_around_comma=True, + ): + wrap_in_parentheses(node, node.children[1], visible=False) + + # Since await is an expression we shouldn't remove + # brackets in cases where this would change + # the AST due to operator precedence. + # Therefore we only aim to remove brackets around + # power nodes that aren't also await expressions themselves. + # https://peps.python.org/pep-0492/#updated-operator-precedence-table + # N.B. We've still removed any redundant nested brackets though :) + opening_bracket = cast(Leaf, node.children[1].children[0]) + closing_bracket = cast(Leaf, node.children[1].children[-1]) + bracket_contents = cast(Node, node.children[1].children[1]) + if bracket_contents.type != syms.power: + ensure_visible(opening_bracket) + ensure_visible(closing_bracket) + elif ( + bracket_contents.type == syms.power + and bracket_contents.children[0].type == token.AWAIT + ): + ensure_visible(opening_bracket) + ensure_visible(closing_bracket) + # If we are in a nested await then recurse down. + remove_await_parens(bracket_contents) + + +def remove_with_parens(node: Node, parent: Node) -> None: + """Recursively hide optional parens in `with` statements.""" + # Removing all unnecessary parentheses in with statements in one pass is a tad + # complex as different variations of bracketed statements result in pretty + # different parse trees: + # + # with (open("file")) as f: # this is an asexpr_test + # ... + # + # with (open("file") as f): # this is an atom containing an + # ... # asexpr_test + # + # with (open("file")) as f, (open("file")) as f: # this is asexpr_test, COMMA, + # ... # asexpr_test + # + # with (open("file") as f, open("file") as f): # an atom containing a + # ... # testlist_gexp which then + # # contains multiple asexpr_test(s) + if node.type == syms.atom: + if maybe_make_parens_invisible_in_atom( + node, + parent=parent, + remove_brackets_around_comma=True, + ): + wrap_in_parentheses(parent, node, visible=False) + if isinstance(node.children[1], Node): + remove_with_parens(node.children[1], node) + elif node.type == syms.testlist_gexp: + for child in node.children: + if isinstance(child, Node): + remove_with_parens(child, node) + elif node.type == syms.asexpr_test and not any( + leaf.type == token.COLONEQUAL for leaf in node.leaves() + ): + if maybe_make_parens_invisible_in_atom( + node.children[0], + parent=node, + remove_brackets_around_comma=True, + ): + wrap_in_parentheses(node, node.children[0], visible=False) + + +def maybe_make_parens_invisible_in_atom( + node: LN, + parent: LN, + remove_brackets_around_comma: bool = False, +) -> bool: + """If it's safe, make the parens in the atom `node` invisible, recursively. + Additionally, remove repeated, adjacent invisible parens from the atom `node` + as they are redundant. + + Returns whether the node should itself be wrapped in invisible parentheses. 
+    """
+    if (
+        node.type != syms.atom
+        or is_empty_tuple(node)
+        or is_one_tuple(node)
+        or (is_yield(node) and parent.type != syms.expr_stmt)
+        or (
+            # This condition tries to prevent removing non-optional brackets
+            # around a tuple; however, it can be a bit overzealous, so we provide
+            # an option to skip this check for `for` and `with` statements.
+            not remove_brackets_around_comma
+            and max_delimiter_priority_in_atom(node) >= COMMA_PRIORITY
+        )
+    ):
+        return False
+
+    if is_walrus_assignment(node):
+        if parent.type in [
+            syms.annassign,
+            syms.expr_stmt,
+            syms.assert_stmt,
+            syms.return_stmt,
+            # these ones aren't useful to end users, but they do please fuzzers
+            syms.for_stmt,
+            syms.del_stmt,
+        ]:
+            return False
+
+    first = node.children[0]
+    last = node.children[-1]
+    if is_lpar_token(first) and is_rpar_token(last):
+        middle = node.children[1]
+        # make parentheses invisible
+        first.value = ""
+        last.value = ""
+        maybe_make_parens_invisible_in_atom(
+            middle,
+            parent=parent,
+            remove_brackets_around_comma=remove_brackets_around_comma,
+        )
+
+        if is_atom_with_invisible_parens(middle):
+            # Strip the invisible parens from `middle` by replacing
+            # it with the child in-between the invisible parens
+            middle.replace(middle.children[1])
+
+        return False
+
+    return True
+
+
+def should_split_line(line: Line, opening_bracket: Leaf) -> bool:
+    """Should `line` be immediately split with `delimiter_split()` after RHS?"""
+
+    if not (opening_bracket.parent and opening_bracket.value in "[{("):
+        return False
+
+    # We're essentially checking if the body is delimited by commas and there's more
+    # than one of them (we're excluding the trailing comma and if the delimiter priority
+    # is still commas, that means there's more).
+    exclude = set()
+    trailing_comma = False
+    try:
+        last_leaf = line.leaves[-1]
+        if last_leaf.type == token.COMMA:
+            trailing_comma = True
+            exclude.add(id(last_leaf))
+        max_priority = line.bracket_tracker.max_delimiter_priority(exclude=exclude)
+    except (IndexError, ValueError):
+        return False
+
+    return max_priority == COMMA_PRIORITY and (
+        (line.mode.magic_trailing_comma and trailing_comma)
+        # always explode imports
+        or opening_bracket.parent.type in {syms.atom, syms.import_from}
+    )
+
+
+def generate_trailers_to_omit(line: Line, line_length: int) -> Iterator[Set[LeafID]]:
+    """Generate sets of closing bracket IDs that should be omitted in a RHS.
+
+    Brackets can be omitted if the entire trailer up to and including
+    a preceding closing bracket fits in one line.
+
+    Yielded sets are cumulative (contain results of previous yields, too). First
+    set is empty, unless the line should explode, in which case bracket pairs until
+    the one that needs to explode are omitted.
+    """
+
+    omit: Set[LeafID] = set()
+    if not line.magic_trailing_comma:
+        yield omit
+
+    length = 4 * line.depth
+    opening_bracket: Optional[Leaf] = None
+    closing_bracket: Optional[Leaf] = None
+    inner_brackets: Set[LeafID] = set()
+    for index, leaf, leaf_length in line.enumerate_with_length(reversed=True):
+        length += leaf_length
+        if length > line_length:
+            break
+
+        has_inline_comment = leaf_length > len(leaf.value) + len(leaf.prefix)
+        if leaf.type == STANDALONE_COMMENT or has_inline_comment:
+            break
+
+        if opening_bracket:
+            if leaf is opening_bracket:
+                opening_bracket = None
+            elif leaf.type in CLOSING_BRACKETS:
+                prev = line.leaves[index - 1] if index > 0 else None
+                if (
+                    prev
+                    and prev.type == token.COMMA
+                    and leaf.opening_bracket is not None
+                    and not is_one_sequence_between(
+                        leaf.opening_bracket, leaf, line.leaves
+                    )
+                ):
+                    # Never omit bracket pairs with trailing commas.
+                    # We need to explode on those.
+                    break
+
+                inner_brackets.add(id(leaf))
+        elif leaf.type in CLOSING_BRACKETS:
+            prev = line.leaves[index - 1] if index > 0 else None
+            if prev and prev.type in OPENING_BRACKETS:
+                # Empty brackets would fail a split so treat them as "inner"
+                # brackets (e.g. only add them to the `omit` set if another
+                # pair of brackets was good enough).
+                inner_brackets.add(id(leaf))
+                continue
+
+            if closing_bracket:
+                omit.add(id(closing_bracket))
+                omit.update(inner_brackets)
+                inner_brackets.clear()
+                yield omit
+
+            if (
+                prev
+                and prev.type == token.COMMA
+                and leaf.opening_bracket is not None
+                and not is_one_sequence_between(leaf.opening_bracket, leaf, line.leaves)
+            ):
+                # Never omit bracket pairs with trailing commas.
+                # We need to explode on those.
+                break
+
+            if leaf.value:
+                opening_bracket = leaf.opening_bracket
+                closing_bracket = leaf
+
+
+def run_transformer(
+    line: Line,
+    transform: Transformer,
+    mode: Mode,
+    features: Collection[Feature],
+    *,
+    line_str: str = "",
+) -> List[Line]:
+    if not line_str:
+        line_str = line_to_string(line)
+    result: List[Line] = []
+    for transformed_line in transform(line, features):
+        if str(transformed_line).strip("\n") == line_str:
+            raise CannotTransform("Line transformer returned an unchanged result")
+
+        result.extend(transform_line(transformed_line, mode=mode, features=features))
+
+    if (
+        transform.__class__.__name__ != "rhs"
+        or not line.bracket_tracker.invisible
+        or any(bracket.value for bracket in line.bracket_tracker.invisible)
+        or line.contains_multiline_strings()
+        or result[0].contains_uncollapsable_type_comments()
+        or result[0].contains_unsplittable_type_ignore()
+        or is_line_short_enough(result[0], line_length=mode.line_length)
+        # If any leaves have no parents (which _can_ occur since
+        # `transform(line)` potentially destroys the line's underlying node
+        # structure), then we can't proceed. Doing so would cause the below
+        # call to `append_leaves()` to fail.
+ or any(leaf.parent is None for leaf in line.leaves) + ): + return result + + line_copy = line.clone() + append_leaves(line_copy, line, line.leaves) + features_fop = set(features) | {Feature.FORCE_OPTIONAL_PARENTHESES} + second_opinion = run_transformer( + line_copy, transform, mode, features_fop, line_str=line_str + ) + if all( + is_line_short_enough(ln, line_length=mode.line_length) for ln in second_opinion + ): + result = second_opinion + return result diff --git a/env/lib/python3.8/site-packages/black/lines.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/lines.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..02503212 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/lines.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/lines.py b/env/lib/python3.8/site-packages/black/lines.py new file mode 100644 index 00000000..30622650 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/lines.py @@ -0,0 +1,811 @@ +import itertools +import sys +from dataclasses import dataclass, field +from typing import ( + Callable, + Dict, + Iterator, + List, + Optional, + Sequence, + Tuple, + TypeVar, + cast, +) + +from black.brackets import DOT_PRIORITY, BracketTracker +from black.mode import Mode, Preview +from black.nodes import ( + BRACKETS, + CLOSING_BRACKETS, + OPENING_BRACKETS, + STANDALONE_COMMENT, + TEST_DESCENDANTS, + child_towards, + is_import, + is_multiline_string, + is_one_sequence_between, + is_type_comment, + replace_child, + syms, + whitespace, +) +from blib2to3.pgen2 import token +from blib2to3.pytree import Leaf, Node + +# types +T = TypeVar("T") +Index = int +LeafID = int + + +@dataclass +class Line: + """Holds leaves and comments. Can be printed with `str(line)`.""" + + mode: Mode + depth: int = 0 + leaves: List[Leaf] = field(default_factory=list) + # keys ordered like `leaves` + comments: Dict[LeafID, List[Leaf]] = field(default_factory=dict) + bracket_tracker: BracketTracker = field(default_factory=BracketTracker) + inside_brackets: bool = False + should_split_rhs: bool = False + magic_trailing_comma: Optional[Leaf] = None + + def append(self, leaf: Leaf, preformatted: bool = False) -> None: + """Add a new `leaf` to the end of the line. + + Unless `preformatted` is True, the `leaf` will receive a new consistent + whitespace prefix and metadata applied by :class:`BracketTracker`. + Trailing commas are maybe removed, unpacked for loop variables are + demoted from being delimiters. + + Inline comments are put aside. + """ + has_value = leaf.type in BRACKETS or bool(leaf.value.strip()) + if not has_value: + return + + if token.COLON == leaf.type and self.is_class_paren_empty: + del self.leaves[-2:] + if self.leaves and not preformatted: + # Note: at this point leaf.prefix should be empty except for + # imports, for which we only preserve newlines. + leaf.prefix += whitespace( + leaf, complex_subscript=self.is_complex_subscript(leaf) + ) + if self.inside_brackets or not preformatted: + self.bracket_tracker.mark(leaf) + if self.mode.magic_trailing_comma: + if self.has_magic_trailing_comma(leaf): + self.magic_trailing_comma = leaf + elif self.has_magic_trailing_comma(leaf, ensure_removable=True): + self.remove_trailing_comma() + if not self.append_comment(leaf): + self.leaves.append(leaf) + + def append_safe(self, leaf: Leaf, preformatted: bool = False) -> None: + """Like :func:`append()` but disallow invalid standalone comment structure. 
+
+        Raises ValueError when any `leaf` is appended after a standalone comment
+        or when a standalone comment is not the first leaf on the line.
+        """
+        if self.bracket_tracker.depth == 0:
+            if self.is_comment:
+                raise ValueError("cannot append to standalone comments")
+
+            if self.leaves and leaf.type == STANDALONE_COMMENT:
+                raise ValueError(
+                    "cannot append standalone comments to a populated line"
+                )
+
+        self.append(leaf, preformatted=preformatted)
+
+    @property
+    def is_comment(self) -> bool:
+        """Is this line a standalone comment?"""
+        return len(self.leaves) == 1 and self.leaves[0].type == STANDALONE_COMMENT
+
+    @property
+    def is_decorator(self) -> bool:
+        """Is this line a decorator?"""
+        return bool(self) and self.leaves[0].type == token.AT
+
+    @property
+    def is_import(self) -> bool:
+        """Is this an import line?"""
+        return bool(self) and is_import(self.leaves[0])
+
+    @property
+    def is_class(self) -> bool:
+        """Is this line a class definition?"""
+        return (
+            bool(self)
+            and self.leaves[0].type == token.NAME
+            and self.leaves[0].value == "class"
+        )
+
+    @property
+    def is_stub_class(self) -> bool:
+        """Is this line a class definition with a body consisting only of "..."?"""
+        return self.is_class and self.leaves[-3:] == [
+            Leaf(token.DOT, ".") for _ in range(3)
+        ]
+
+    @property
+    def is_def(self) -> bool:
+        """Is this a function definition? (Also returns True for async defs.)"""
+        try:
+            first_leaf = self.leaves[0]
+        except IndexError:
+            return False
+
+        try:
+            second_leaf: Optional[Leaf] = self.leaves[1]
+        except IndexError:
+            second_leaf = None
+        return (first_leaf.type == token.NAME and first_leaf.value == "def") or (
+            first_leaf.type == token.ASYNC
+            and second_leaf is not None
+            and second_leaf.type == token.NAME
+            and second_leaf.value == "def"
+        )
+
+    @property
+    def is_class_paren_empty(self) -> bool:
+        """Is this a class with no base classes but using parentheses?
+
+        Those are unnecessary and should be removed.
+        """
+        return (
+            bool(self)
+            and len(self.leaves) == 4
+            and self.is_class
+            and self.leaves[2].type == token.LPAR
+            and self.leaves[2].value == "("
+            and self.leaves[3].type == token.RPAR
+            and self.leaves[3].value == ")"
+        )
+
+    @property
+    def is_triple_quoted_string(self) -> bool:
+        """Is the line a triple quoted string?"""
+        return (
+            bool(self)
+            and self.leaves[0].type == token.STRING
+            and self.leaves[0].value.startswith(('"""', "'''"))
+        )
+
+    @property
+    def opens_block(self) -> bool:
+        """Does this line open a new level of indentation?"""
+        if len(self.leaves) == 0:
+            return False
+        return self.leaves[-1].type == token.COLON
+
+    def contains_standalone_comments(self, depth_limit: int = sys.maxsize) -> bool:
+        """If so, needs to be split before emitting."""
+        for leaf in self.leaves:
+            if leaf.type == STANDALONE_COMMENT and leaf.bracket_depth <= depth_limit:
+                return True
+
+        return False
+
+    def contains_uncollapsable_type_comments(self) -> bool:
+        ignored_ids = set()
+        try:
+            last_leaf = self.leaves[-1]
+            ignored_ids.add(id(last_leaf))
+            if last_leaf.type == token.COMMA or (
+                last_leaf.type == token.RPAR and not last_leaf.value
+            ):
+                # When trailing commas or optional parens are inserted by Black for
+                # consistency, comments after the previous last element are not moved
+                # (they don't have to, rendering will still be correct). So we ignore
+                # trailing commas and invisible parens.
+                last_leaf = self.leaves[-2]
+                ignored_ids.add(id(last_leaf))
+        except IndexError:
+            return False
+
+        # A type comment is uncollapsable if it is attached to a leaf
+        # that isn't at the end of the line (since that could cause it
+        # to get associated with a different argument) or if there are
+        # comments before it (since that could cause it to get hidden
+        # behind a comment).
+        comment_seen = False
+        for leaf_id, comments in self.comments.items():
+            for comment in comments:
+                if is_type_comment(comment):
+                    if comment_seen or (
+                        not is_type_comment(comment, " ignore")
+                        and leaf_id not in ignored_ids
+                    ):
+                        return True
+
+                comment_seen = True
+
+        return False
+
+    def contains_unsplittable_type_ignore(self) -> bool:
+        if not self.leaves:
+            return False
+
+        # If a 'type: ignore' is attached to the end of a line, we
+        # can't split the line, because we can't know which of the
+        # subexpressions the ignore was meant to apply to.
+        #
+        # We only want this to apply to actual physical lines from the
+        # original source, though: we don't want the presence of a
+        # 'type: ignore' at the end of a multiline expression to
+        # justify pushing it all onto one line. Thus we
+        # (unfortunately) need to check the actual source lines and
+        # only report an unsplittable 'type: ignore' if this line was
+        # one line in the original code.
+
+        # Grab the first and last line numbers, skipping generated leaves
+        first_line = next((leaf.lineno for leaf in self.leaves if leaf.lineno != 0), 0)
+        last_line = next(
+            (leaf.lineno for leaf in reversed(self.leaves) if leaf.lineno != 0), 0
+        )
+
+        if first_line == last_line:
+            # We look at the last two leaves since a comma or an
+            # invisible paren could have been added at the end of the
+            # line.
+            for node in self.leaves[-2:]:
+                for comment in self.comments.get(id(node), []):
+                    if is_type_comment(comment, " ignore"):
+                        return True
+
+        return False
+
+    def contains_multiline_strings(self) -> bool:
+        return any(is_multiline_string(leaf) for leaf in self.leaves)
+
+    def has_magic_trailing_comma(
+        self, closing: Leaf, ensure_removable: bool = False
+    ) -> bool:
+        """Return True if we have a magic trailing comma, that is when:
+        - there's a trailing comma here
+        - it's not a one-tuple
+        - it's not a single-element subscript
+        Additionally, if ensure_removable:
+        - it's not from square bracket indexing
+        (specifically, single-element square bracket indexing with
+        Preview.skip_magic_trailing_comma_in_subscript)
+        """
+        if not (
+            closing.type in CLOSING_BRACKETS
+            and self.leaves
+            and self.leaves[-1].type == token.COMMA
+        ):
+            return False
+
+        if closing.type == token.RBRACE:
+            return True
+
+        if closing.type == token.RSQB:
+            if (
+                Preview.one_element_subscript in self.mode
+                and closing.parent
+                and closing.parent.type == syms.trailer
+                and closing.opening_bracket
+                and is_one_sequence_between(
+                    closing.opening_bracket,
+                    closing,
+                    self.leaves,
+                    brackets=(token.LSQB, token.RSQB),
+                )
+            ):
+                return False
+
+            if not ensure_removable:
+                return True
+
+            comma = self.leaves[-1]
+            if comma.parent is None:
+                return False
+            if Preview.skip_magic_trailing_comma_in_subscript in self.mode:
+                return (
+                    comma.parent.type != syms.subscriptlist
+                    or closing.opening_bracket is None
+                    or not is_one_sequence_between(
+                        closing.opening_bracket,
+                        closing,
+                        self.leaves,
+                        brackets=(token.LSQB, token.RSQB),
+                    )
+                )
+            return comma.parent.type == syms.listmaker
+
+        if self.is_import:
+            return True
+
+        if closing.opening_bracket is not None and not is_one_sequence_between(
+            closing.opening_bracket, closing, self.leaves
+        ):
+            return True
+
+        return False
+
+    def append_comment(self, comment: Leaf) -> bool:
+        """Add an inline or standalone comment to the line."""
+        if (
+            comment.type == STANDALONE_COMMENT
+            and self.bracket_tracker.any_open_brackets()
+        ):
+            comment.prefix = ""
+            return False
+
+        if comment.type != token.COMMENT:
+            return False
+
+        if not self.leaves:
+            comment.type = STANDALONE_COMMENT
+            comment.prefix = ""
+            return False
+
+        last_leaf = self.leaves[-1]
+        if (
+            last_leaf.type == token.RPAR
+            and not last_leaf.value
+            and last_leaf.parent
+            and len(list(last_leaf.parent.leaves())) <= 3
+            and not is_type_comment(comment)
+        ):
+            # Comments on optional parens wrapping a single leaf should belong to
+            # the wrapped node except if it's a type comment. Pinning the comment like
+            # this avoids unstable formatting caused by comment migration.
+            if len(self.leaves) < 2:
+                comment.type = STANDALONE_COMMENT
+                comment.prefix = ""
+                return False
+
+            last_leaf = self.leaves[-2]
+        self.comments.setdefault(id(last_leaf), []).append(comment)
+        return True
+
+    def comments_after(self, leaf: Leaf) -> List[Leaf]:
+        """Generate comments that should appear directly after `leaf`."""
+        return self.comments.get(id(leaf), [])
+
+    def remove_trailing_comma(self) -> None:
+        """Remove the trailing comma and move the comments attached to it."""
+        trailing_comma = self.leaves.pop()
+        trailing_comma_comments = self.comments.pop(id(trailing_comma), [])
+        self.comments.setdefault(id(self.leaves[-1]), []).extend(
+            trailing_comma_comments
+        )
+
+    def is_complex_subscript(self, leaf: Leaf) -> bool:
+        """Return True iff `leaf` is part of a slice with non-trivial exprs."""
+        open_lsqb = self.bracket_tracker.get_open_lsqb()
+        if open_lsqb is None:
+            return False
+
+        subscript_start = open_lsqb.next_sibling
+
+        if isinstance(subscript_start, Node):
+            if subscript_start.type == syms.listmaker:
+                return False
+
+            if subscript_start.type == syms.subscriptlist:
+                subscript_start = child_towards(subscript_start, leaf)
+        return subscript_start is not None and any(
+            n.type in TEST_DESCENDANTS for n in subscript_start.pre_order()
+        )
+
+    def enumerate_with_length(
+        self, reversed: bool = False
+    ) -> Iterator[Tuple[Index, Leaf, int]]:
+        """Return an enumeration of leaves with their length.
+
+        Stops prematurely on multiline strings and standalone comments.
+        """
+        op = cast(
+            Callable[[Sequence[Leaf]], Iterator[Tuple[Index, Leaf]]],
+            enumerate_reversed if reversed else enumerate,
+        )
+        for index, leaf in op(self.leaves):
+            length = len(leaf.prefix) + len(leaf.value)
+            if "\n" in leaf.value:
+                return  # Multiline strings, we can't continue.
+ + for comment in self.comments_after(leaf): + length += len(comment.value) + + yield index, leaf, length + + def clone(self) -> "Line": + return Line( + mode=self.mode, + depth=self.depth, + inside_brackets=self.inside_brackets, + should_split_rhs=self.should_split_rhs, + magic_trailing_comma=self.magic_trailing_comma, + ) + + def __str__(self) -> str: + """Render the line.""" + if not self: + return "\n" + + indent = " " * self.depth + leaves = iter(self.leaves) + first = next(leaves) + res = f"{first.prefix}{indent}{first.value}" + for leaf in leaves: + res += str(leaf) + for comment in itertools.chain.from_iterable(self.comments.values()): + res += str(comment) + + return res + "\n" + + def __bool__(self) -> bool: + """Return True if the line has leaves or comments.""" + return bool(self.leaves or self.comments) + + +@dataclass +class EmptyLineTracker: + """Provides a stateful method that returns the number of potential extra + empty lines needed before and after the currently processed line. + + Note: this tracker works on lines that haven't been split yet. It assumes + the prefix of the first leaf consists of optional newlines. Those newlines + are consumed by `maybe_empty_lines()` and included in the computation. + """ + + is_pyi: bool = False + previous_line: Optional[Line] = None + previous_after: int = 0 + previous_defs: List[int] = field(default_factory=list) + + def maybe_empty_lines(self, current_line: Line) -> Tuple[int, int]: + """Return the number of extra empty lines before and after the `current_line`. + + This is for separating `def`, `async def` and `class` with extra empty + lines (two on module-level). + """ + before, after = self._maybe_empty_lines(current_line) + before = ( + # Black should not insert empty lines at the beginning + # of the file + 0 + if self.previous_line is None + else before - self.previous_after + ) + self.previous_after = after + self.previous_line = current_line + return before, after + + def _maybe_empty_lines(self, current_line: Line) -> Tuple[int, int]: + max_allowed = 1 + if current_line.depth == 0: + max_allowed = 1 if self.is_pyi else 2 + if current_line.leaves: + # Consume the first leaf's extra newlines. + first_leaf = current_line.leaves[0] + before = first_leaf.prefix.count("\n") + before = min(before, max_allowed) + first_leaf.prefix = "" + else: + before = 0 + depth = current_line.depth + while self.previous_defs and self.previous_defs[-1] >= depth: + if self.is_pyi: + assert self.previous_line is not None + if depth and not current_line.is_def and self.previous_line.is_def: + # Empty lines between attributes and methods should be preserved. + before = min(1, before) + elif depth: + before = 0 + else: + before = 1 + else: + if depth: + before = 1 + elif ( + not depth + and self.previous_defs[-1] + and current_line.leaves[-1].type == token.COLON + and ( + current_line.leaves[0].value + not in ("with", "try", "for", "while", "if", "match") + ) + ): + # We shouldn't add two newlines between an indented function and + # a dependent non-indented clause. This is to avoid issues with + # conditional function definitions that are technically top-level + # and therefore get two trailing newlines, but look weird and + # inconsistent when they're followed by elif, else, etc. This is + # worse because these functions only get *one* preceding newline + # already. 
+ before = 1 + else: + before = 2 + self.previous_defs.pop() + if current_line.is_decorator or current_line.is_def or current_line.is_class: + return self._maybe_empty_lines_for_class_or_def(current_line, before) + + if ( + self.previous_line + and self.previous_line.is_import + and not current_line.is_import + and depth == self.previous_line.depth + ): + return (before or 1), 0 + + if ( + self.previous_line + and self.previous_line.is_class + and current_line.is_triple_quoted_string + ): + return before, 1 + + if ( + Preview.remove_block_trailing_newline in current_line.mode + and self.previous_line + and self.previous_line.opens_block + ): + return 0, 0 + return before, 0 + + def _maybe_empty_lines_for_class_or_def( + self, current_line: Line, before: int + ) -> Tuple[int, int]: + if not current_line.is_decorator: + self.previous_defs.append(current_line.depth) + if self.previous_line is None: + # Don't insert empty lines before the first line in the file. + return 0, 0 + + if self.previous_line.is_decorator: + if self.is_pyi and current_line.is_stub_class: + # Insert an empty line after a decorated stub class + return 0, 1 + + return 0, 0 + + if self.previous_line.depth < current_line.depth and ( + self.previous_line.is_class or self.previous_line.is_def + ): + return 0, 0 + + if ( + self.previous_line.is_comment + and self.previous_line.depth == current_line.depth + and before == 0 + ): + return 0, 0 + + if self.is_pyi: + if current_line.is_class or self.previous_line.is_class: + if self.previous_line.depth < current_line.depth: + newlines = 0 + elif self.previous_line.depth > current_line.depth: + newlines = 1 + elif current_line.is_stub_class and self.previous_line.is_stub_class: + # No blank line between classes with an empty body + newlines = 0 + else: + newlines = 1 + elif ( + current_line.is_def or current_line.is_decorator + ) and not self.previous_line.is_def: + if current_line.depth: + # In classes empty lines between attributes and methods should + # be preserved. + newlines = min(1, before) + else: + # Blank line between a block of functions (maybe with preceding + # decorators) and a block of non-functions + newlines = 1 + elif self.previous_line.depth > current_line.depth: + newlines = 1 + else: + newlines = 0 + else: + newlines = 1 if current_line.depth else 2 + return newlines, 0 + + +def enumerate_reversed(sequence: Sequence[T]) -> Iterator[Tuple[Index, T]]: + """Like `reversed(enumerate(sequence))` if that were possible.""" + index = len(sequence) - 1 + for element in reversed(sequence): + yield (index, element) + index -= 1 + + +def append_leaves( + new_line: Line, old_line: Line, leaves: List[Leaf], preformatted: bool = False +) -> None: + """ + Append leaves (taken from @old_line) to @new_line, making sure to fix the + underlying Node structure where appropriate. + + All of the leaves in @leaves are duplicated. The duplicates are then + appended to @new_line and used to replace their originals in the underlying + Node structure. Any comments attached to the old leaves are reattached to + the new leaves. + + Pre-conditions: + set(@leaves) is a subset of set(@old_line.leaves). 
+ """ + for old_leaf in leaves: + new_leaf = Leaf(old_leaf.type, old_leaf.value) + replace_child(old_leaf, new_leaf) + new_line.append(new_leaf, preformatted=preformatted) + + for comment_leaf in old_line.comments_after(old_leaf): + new_line.append(comment_leaf, preformatted=True) + + +def is_line_short_enough(line: Line, *, line_length: int, line_str: str = "") -> bool: + """Return True if `line` is no longer than `line_length`. + + Uses the provided `line_str` rendering, if any, otherwise computes a new one. + """ + if not line_str: + line_str = line_to_string(line) + return ( + len(line_str) <= line_length + and "\n" not in line_str # multiline strings + and not line.contains_standalone_comments() + ) + + +def can_be_split(line: Line) -> bool: + """Return False if the line cannot be split *for sure*. + + This is not an exhaustive search but a cheap heuristic that we can use to + avoid some unfortunate formattings (mostly around wrapping unsplittable code + in unnecessary parentheses). + """ + leaves = line.leaves + if len(leaves) < 2: + return False + + if leaves[0].type == token.STRING and leaves[1].type == token.DOT: + call_count = 0 + dot_count = 0 + next = leaves[-1] + for leaf in leaves[-2::-1]: + if leaf.type in OPENING_BRACKETS: + if next.type not in CLOSING_BRACKETS: + return False + + call_count += 1 + elif leaf.type == token.DOT: + dot_count += 1 + elif leaf.type == token.NAME: + if not (next.type == token.DOT or next.type in OPENING_BRACKETS): + return False + + elif leaf.type not in CLOSING_BRACKETS: + return False + + if dot_count > 1 and call_count > 1: + return False + + return True + + +def can_omit_invisible_parens( + line: Line, + line_length: int, +) -> bool: + """Does `line` have a shape safe to reformat without optional parens around it? + + Returns True for only a subset of potentially nice looking formattings but + the point is to not return false positives that end up producing lines that + are too long. + """ + bt = line.bracket_tracker + if not bt.delimiters: + # Without delimiters the optional parentheses are useless. + return True + + max_priority = bt.max_delimiter_priority() + if bt.delimiter_count_with_priority(max_priority) > 1: + # With more than one delimiter of a kind the optional parentheses read better. + return False + + if max_priority == DOT_PRIORITY: + # A single stranded method call doesn't require optional parentheses. + return True + + assert len(line.leaves) >= 2, "Stranded delimiter" + + # With a single delimiter, omit if the expression starts or ends with + # a bracket. + first = line.leaves[0] + second = line.leaves[1] + if first.type in OPENING_BRACKETS and second.type not in CLOSING_BRACKETS: + if _can_omit_opening_paren(line, first=first, line_length=line_length): + return True + + # Note: we are not returning False here because a line might have *both* + # a leading opening bracket and a trailing closing bracket. If the + # opening bracket doesn't match our rule, maybe the closing will. + + penultimate = line.leaves[-2] + last = line.leaves[-1] + + if ( + last.type == token.RPAR + or last.type == token.RBRACE + or ( + # don't use indexing for omitting optional parentheses; + # it looks weird + last.type == token.RSQB + and last.parent + and last.parent.type != syms.trailer + ) + ): + if penultimate.type in OPENING_BRACKETS: + # Empty brackets don't help. + return False + + if is_multiline_string(first): + # Additional wrapping of a multiline string in this situation is + # unnecessary. 
+ return True + + if _can_omit_closing_paren(line, last=last, line_length=line_length): + return True + + return False + + +def _can_omit_opening_paren(line: Line, *, first: Leaf, line_length: int) -> bool: + """See `can_omit_invisible_parens`.""" + remainder = False + length = 4 * line.depth + _index = -1 + for _index, leaf, leaf_length in line.enumerate_with_length(): + if leaf.type in CLOSING_BRACKETS and leaf.opening_bracket is first: + remainder = True + if remainder: + length += leaf_length + if length > line_length: + break + + if leaf.type in OPENING_BRACKETS: + # There are brackets we can further split on. + remainder = False + + else: + # checked the entire string and line length wasn't exceeded + if len(line.leaves) == _index + 1: + return True + + return False + + +def _can_omit_closing_paren(line: Line, *, last: Leaf, line_length: int) -> bool: + """See `can_omit_invisible_parens`.""" + length = 4 * line.depth + seen_other_brackets = False + for _index, leaf, leaf_length in line.enumerate_with_length(): + length += leaf_length + if leaf is last.opening_bracket: + if seen_other_brackets or length <= line_length: + return True + + elif leaf.type in OPENING_BRACKETS: + # There are brackets we can further split on. + seen_other_brackets = True + + return False + + +def line_to_string(line: Line) -> str: + """Returns the string representation of @line. + + WARNING: This is known to be computationally expensive. + """ + return str(line).strip("\n") diff --git a/env/lib/python3.8/site-packages/black/mode.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/mode.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..aec26d9a Binary files /dev/null and b/env/lib/python3.8/site-packages/black/mode.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/mode.py b/env/lib/python3.8/site-packages/black/mode.py new file mode 100644 index 00000000..e3c36450 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/mode.py @@ -0,0 +1,218 @@ +"""Data structures configuring Black behavior. + +Mostly around Python language feature support per version and Black configuration +chosen by the user. +""" + +import sys +from dataclasses import dataclass, field +from enum import Enum, auto +from hashlib import sha256 +from operator import attrgetter +from typing import Dict, Set +from warnings import warn + +if sys.version_info < (3, 8): + from typing_extensions import Final +else: + from typing import Final + +from black.const import DEFAULT_LINE_LENGTH + + +class TargetVersion(Enum): + PY33 = 3 + PY34 = 4 + PY35 = 5 + PY36 = 6 + PY37 = 7 + PY38 = 8 + PY39 = 9 + PY310 = 10 + PY311 = 11 + + +class Feature(Enum): + F_STRINGS = 2 + NUMERIC_UNDERSCORES = 3 + TRAILING_COMMA_IN_CALL = 4 + TRAILING_COMMA_IN_DEF = 5 + # The following two feature-flags are mutually exclusive, and exactly one should be + # set for every version of python. 
+ ASYNC_IDENTIFIERS = 6 + ASYNC_KEYWORDS = 7 + ASSIGNMENT_EXPRESSIONS = 8 + POS_ONLY_ARGUMENTS = 9 + RELAXED_DECORATORS = 10 + PATTERN_MATCHING = 11 + UNPACKING_ON_FLOW = 12 + ANN_ASSIGN_EXTENDED_RHS = 13 + EXCEPT_STAR = 14 + VARIADIC_GENERICS = 15 + DEBUG_F_STRINGS = 16 + FORCE_OPTIONAL_PARENTHESES = 50 + + # __future__ flags + FUTURE_ANNOTATIONS = 51 + + +FUTURE_FLAG_TO_FEATURE: Final = { + "annotations": Feature.FUTURE_ANNOTATIONS, +} + + +VERSION_TO_FEATURES: Dict[TargetVersion, Set[Feature]] = { + TargetVersion.PY33: {Feature.ASYNC_IDENTIFIERS}, + TargetVersion.PY34: {Feature.ASYNC_IDENTIFIERS}, + TargetVersion.PY35: {Feature.TRAILING_COMMA_IN_CALL, Feature.ASYNC_IDENTIFIERS}, + TargetVersion.PY36: { + Feature.F_STRINGS, + Feature.NUMERIC_UNDERSCORES, + Feature.TRAILING_COMMA_IN_CALL, + Feature.TRAILING_COMMA_IN_DEF, + Feature.ASYNC_IDENTIFIERS, + }, + TargetVersion.PY37: { + Feature.F_STRINGS, + Feature.NUMERIC_UNDERSCORES, + Feature.TRAILING_COMMA_IN_CALL, + Feature.TRAILING_COMMA_IN_DEF, + Feature.ASYNC_KEYWORDS, + Feature.FUTURE_ANNOTATIONS, + }, + TargetVersion.PY38: { + Feature.F_STRINGS, + Feature.DEBUG_F_STRINGS, + Feature.NUMERIC_UNDERSCORES, + Feature.TRAILING_COMMA_IN_CALL, + Feature.TRAILING_COMMA_IN_DEF, + Feature.ASYNC_KEYWORDS, + Feature.FUTURE_ANNOTATIONS, + Feature.ASSIGNMENT_EXPRESSIONS, + Feature.POS_ONLY_ARGUMENTS, + Feature.UNPACKING_ON_FLOW, + Feature.ANN_ASSIGN_EXTENDED_RHS, + }, + TargetVersion.PY39: { + Feature.F_STRINGS, + Feature.DEBUG_F_STRINGS, + Feature.NUMERIC_UNDERSCORES, + Feature.TRAILING_COMMA_IN_CALL, + Feature.TRAILING_COMMA_IN_DEF, + Feature.ASYNC_KEYWORDS, + Feature.FUTURE_ANNOTATIONS, + Feature.ASSIGNMENT_EXPRESSIONS, + Feature.RELAXED_DECORATORS, + Feature.POS_ONLY_ARGUMENTS, + Feature.UNPACKING_ON_FLOW, + Feature.ANN_ASSIGN_EXTENDED_RHS, + }, + TargetVersion.PY310: { + Feature.F_STRINGS, + Feature.DEBUG_F_STRINGS, + Feature.NUMERIC_UNDERSCORES, + Feature.TRAILING_COMMA_IN_CALL, + Feature.TRAILING_COMMA_IN_DEF, + Feature.ASYNC_KEYWORDS, + Feature.FUTURE_ANNOTATIONS, + Feature.ASSIGNMENT_EXPRESSIONS, + Feature.RELAXED_DECORATORS, + Feature.POS_ONLY_ARGUMENTS, + Feature.UNPACKING_ON_FLOW, + Feature.ANN_ASSIGN_EXTENDED_RHS, + Feature.PATTERN_MATCHING, + }, + TargetVersion.PY311: { + Feature.F_STRINGS, + Feature.DEBUG_F_STRINGS, + Feature.NUMERIC_UNDERSCORES, + Feature.TRAILING_COMMA_IN_CALL, + Feature.TRAILING_COMMA_IN_DEF, + Feature.ASYNC_KEYWORDS, + Feature.FUTURE_ANNOTATIONS, + Feature.ASSIGNMENT_EXPRESSIONS, + Feature.RELAXED_DECORATORS, + Feature.POS_ONLY_ARGUMENTS, + Feature.UNPACKING_ON_FLOW, + Feature.ANN_ASSIGN_EXTENDED_RHS, + Feature.PATTERN_MATCHING, + Feature.EXCEPT_STAR, + Feature.VARIADIC_GENERICS, + }, +} + + +def supports_feature(target_versions: Set[TargetVersion], feature: Feature) -> bool: + return all(feature in VERSION_TO_FEATURES[version] for version in target_versions) + + +class Preview(Enum): + """Individual preview style features.""" + + annotation_parens = auto() + long_docstring_quotes_on_newline = auto() + normalize_docstring_quotes_and_prefixes_properly = auto() + one_element_subscript = auto() + remove_block_trailing_newline = auto() + remove_redundant_parens = auto() + string_processing = auto() + skip_magic_trailing_comma_in_subscript = auto() + + +class Deprecated(UserWarning): + """Visible deprecation warning.""" + + +@dataclass +class Mode: + target_versions: Set[TargetVersion] = field(default_factory=set) + line_length: int = DEFAULT_LINE_LENGTH + string_normalization: bool = True + is_pyi: bool = False + 
is_ipynb: bool = False + skip_source_first_line: bool = False + magic_trailing_comma: bool = True + experimental_string_processing: bool = False + python_cell_magics: Set[str] = field(default_factory=set) + preview: bool = False + + def __post_init__(self) -> None: + if self.experimental_string_processing: + warn( + "`experimental string processing` has been included in `preview`" + " and deprecated. Use `preview` instead.", + Deprecated, + ) + + def __contains__(self, feature: Preview) -> bool: + """ + Provide `Preview.FEATURE in Mode` syntax that mirrors the ``preview`` flag. + + The argument is not checked and features are not differentiated. + They only exist to make development easier by clarifying intent. + """ + if feature is Preview.string_processing: + return self.preview or self.experimental_string_processing + return self.preview + + def get_cache_key(self) -> str: + if self.target_versions: + version_str = ",".join( + str(version.value) + for version in sorted(self.target_versions, key=attrgetter("value")) + ) + else: + version_str = "-" + parts = [ + version_str, + str(self.line_length), + str(int(self.string_normalization)), + str(int(self.is_pyi)), + str(int(self.is_ipynb)), + str(int(self.skip_source_first_line)), + str(int(self.magic_trailing_comma)), + str(int(self.experimental_string_processing)), + str(int(self.preview)), + sha256((",".join(sorted(self.python_cell_magics))).encode()).hexdigest(), + ] + return ".".join(parts) diff --git a/env/lib/python3.8/site-packages/black/nodes.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/nodes.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..29fc76b2 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/nodes.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/nodes.py b/env/lib/python3.8/site-packages/black/nodes.py new file mode 100644 index 00000000..aeb2be38 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/nodes.py @@ -0,0 +1,850 @@ +""" +blib2to3 Node/Leaf transformation-related utility functions. 
+""" + +import sys +from typing import Generic, Iterator, List, Optional, Set, Tuple, TypeVar, Union + +if sys.version_info >= (3, 8): + from typing import Final +else: + from typing_extensions import Final +if sys.version_info >= (3, 10): + from typing import TypeGuard +else: + from typing_extensions import TypeGuard + +from mypy_extensions import mypyc_attr + +from black.cache import CACHE_DIR +from black.strings import has_triple_quotes +from blib2to3 import pygram +from blib2to3.pgen2 import token +from blib2to3.pytree import NL, Leaf, Node, type_repr + +pygram.initialize(CACHE_DIR) +syms: Final = pygram.python_symbols + + +# types +T = TypeVar("T") +LN = Union[Leaf, Node] +LeafID = int +NodeType = int + + +WHITESPACE: Final = {token.DEDENT, token.INDENT, token.NEWLINE} +STATEMENT: Final = { + syms.if_stmt, + syms.while_stmt, + syms.for_stmt, + syms.try_stmt, + syms.except_clause, + syms.with_stmt, + syms.funcdef, + syms.classdef, + syms.match_stmt, + syms.case_block, +} +STANDALONE_COMMENT: Final = 153 +token.tok_name[STANDALONE_COMMENT] = "STANDALONE_COMMENT" +LOGIC_OPERATORS: Final = {"and", "or"} +COMPARATORS: Final = { + token.LESS, + token.GREATER, + token.EQEQUAL, + token.NOTEQUAL, + token.LESSEQUAL, + token.GREATEREQUAL, +} +MATH_OPERATORS: Final = { + token.VBAR, + token.CIRCUMFLEX, + token.AMPER, + token.LEFTSHIFT, + token.RIGHTSHIFT, + token.PLUS, + token.MINUS, + token.STAR, + token.SLASH, + token.DOUBLESLASH, + token.PERCENT, + token.AT, + token.TILDE, + token.DOUBLESTAR, +} +STARS: Final = {token.STAR, token.DOUBLESTAR} +VARARGS_SPECIALS: Final = STARS | {token.SLASH} +VARARGS_PARENTS: Final = { + syms.arglist, + syms.argument, # double star in arglist + syms.trailer, # single argument to call + syms.typedargslist, + syms.varargslist, # lambdas +} +UNPACKING_PARENTS: Final = { + syms.atom, # single element of a list or set literal + syms.dictsetmaker, + syms.listmaker, + syms.testlist_gexp, + syms.testlist_star_expr, + syms.subject_expr, + syms.pattern, +} +TEST_DESCENDANTS: Final = { + syms.test, + syms.lambdef, + syms.or_test, + syms.and_test, + syms.not_test, + syms.comparison, + syms.star_expr, + syms.expr, + syms.xor_expr, + syms.and_expr, + syms.shift_expr, + syms.arith_expr, + syms.trailer, + syms.term, + syms.power, +} +TYPED_NAMES: Final = {syms.tname, syms.tname_star} +ASSIGNMENTS: Final = { + "=", + "+=", + "-=", + "*=", + "@=", + "/=", + "%=", + "&=", + "|=", + "^=", + "<<=", + ">>=", + "**=", + "//=", +} + +IMPLICIT_TUPLE: Final = {syms.testlist, syms.testlist_star_expr, syms.exprlist} +BRACKET: Final = { + token.LPAR: token.RPAR, + token.LSQB: token.RSQB, + token.LBRACE: token.RBRACE, +} +OPENING_BRACKETS: Final = set(BRACKET.keys()) +CLOSING_BRACKETS: Final = set(BRACKET.values()) +BRACKETS: Final = OPENING_BRACKETS | CLOSING_BRACKETS +ALWAYS_NO_SPACE: Final = CLOSING_BRACKETS | {token.COMMA, STANDALONE_COMMENT} + +RARROW = 55 + + +@mypyc_attr(allow_interpreted_subclasses=True) +class Visitor(Generic[T]): + """Basic lib2to3 visitor that yields things of type `T` on `visit()`.""" + + def visit(self, node: LN) -> Iterator[T]: + """Main method to visit `node` and its children. + + It tries to find a `visit_*()` method for the given `node.type`, like + `visit_simple_stmt` for Node objects or `visit_INDENT` for Leaf objects. + If no dedicated `visit_*()` method is found, chooses `visit_default()` + instead. + + Then yields objects of type `T` from the selected visitor. 
+ """ + if node.type < 256: + name = token.tok_name[node.type] + else: + name = str(type_repr(node.type)) + # We explicitly branch on whether a visitor exists (instead of + # using self.visit_default as the default arg to getattr) in order + # to save needing to create a bound method object and so mypyc can + # generate a native call to visit_default. + visitf = getattr(self, f"visit_{name}", None) + if visitf: + yield from visitf(node) + else: + yield from self.visit_default(node) + + def visit_default(self, node: LN) -> Iterator[T]: + """Default `visit_*()` implementation. Recurses to children of `node`.""" + if isinstance(node, Node): + for child in node.children: + yield from self.visit(child) + + +def whitespace(leaf: Leaf, *, complex_subscript: bool) -> str: # noqa: C901 + """Return whitespace prefix if needed for the given `leaf`. + + `complex_subscript` signals whether the given leaf is part of a subscription + which has non-trivial arguments, like arithmetic expressions or function calls. + """ + NO: Final = "" + SPACE: Final = " " + DOUBLESPACE: Final = " " + t = leaf.type + p = leaf.parent + v = leaf.value + if t in ALWAYS_NO_SPACE: + return NO + + if t == token.COMMENT: + return DOUBLESPACE + + assert p is not None, f"INTERNAL ERROR: hand-made leaf without parent: {leaf!r}" + if t == token.COLON and p.type not in { + syms.subscript, + syms.subscriptlist, + syms.sliceop, + }: + return NO + + prev = leaf.prev_sibling + if not prev: + prevp = preceding_leaf(p) + if not prevp or prevp.type in OPENING_BRACKETS: + return NO + + if t == token.COLON: + if prevp.type == token.COLON: + return NO + + elif prevp.type != token.COMMA and not complex_subscript: + return NO + + return SPACE + + if prevp.type == token.EQUAL: + if prevp.parent: + if prevp.parent.type in { + syms.arglist, + syms.argument, + syms.parameters, + syms.varargslist, + }: + return NO + + elif prevp.parent.type == syms.typedargslist: + # A bit hacky: if the equal sign has whitespace, it means we + # previously found it's a typed argument. So, we're using + # that, too. + return prevp.prefix + + elif ( + prevp.type == token.STAR + and parent_type(prevp) == syms.star_expr + and parent_type(prevp.parent) == syms.subscriptlist + ): + # No space between typevar tuples. + return NO + + elif prevp.type in VARARGS_SPECIALS: + if is_vararg(prevp, within=VARARGS_PARENTS | UNPACKING_PARENTS): + return NO + + elif prevp.type == token.COLON: + if prevp.parent and prevp.parent.type in {syms.subscript, syms.sliceop}: + return SPACE if complex_subscript else NO + + elif ( + prevp.parent + and prevp.parent.type == syms.factor + and prevp.type in MATH_OPERATORS + ): + return NO + + elif prevp.type == token.AT and p.parent and p.parent.type == syms.decorator: + # no space in decorators + return NO + + elif prev.type in OPENING_BRACKETS: + return NO + + if p.type in {syms.parameters, syms.arglist}: + # untyped function signatures or calls + if not prev or prev.type != token.COMMA: + return NO + + elif p.type == syms.varargslist: + # lambdas + if prev and prev.type != token.COMMA: + return NO + + elif p.type == syms.typedargslist: + # typed function signatures + if not prev: + return NO + + if t == token.EQUAL: + if prev.type not in TYPED_NAMES: + return NO + + elif prev.type == token.EQUAL: + # A bit hacky: if the equal sign has whitespace, it means we + # previously found it's a typed argument. So, we're using that, too. 
+ return prev.prefix + + elif prev.type != token.COMMA: + return NO + + elif p.type in TYPED_NAMES: + # type names + if not prev: + prevp = preceding_leaf(p) + if not prevp or prevp.type != token.COMMA: + return NO + + elif p.type == syms.trailer: + # attributes and calls + if t == token.LPAR or t == token.RPAR: + return NO + + if not prev: + if t == token.DOT or t == token.LSQB: + return NO + + elif prev.type != token.COMMA: + return NO + + elif p.type == syms.argument: + # single argument + if t == token.EQUAL: + return NO + + if not prev: + prevp = preceding_leaf(p) + if not prevp or prevp.type == token.LPAR: + return NO + + elif prev.type in {token.EQUAL} | VARARGS_SPECIALS: + return NO + + elif p.type == syms.decorator: + # decorators + return NO + + elif p.type == syms.dotted_name: + if prev: + return NO + + prevp = preceding_leaf(p) + if not prevp or prevp.type == token.AT or prevp.type == token.DOT: + return NO + + elif p.type == syms.classdef: + if t == token.LPAR: + return NO + + if prev and prev.type == token.LPAR: + return NO + + elif p.type in {syms.subscript, syms.sliceop}: + # indexing + if not prev: + assert p.parent is not None, "subscripts are always parented" + if p.parent.type == syms.subscriptlist: + return SPACE + + return NO + + elif not complex_subscript: + return NO + + elif p.type == syms.atom: + if prev and t == token.DOT: + # dots, but not the first one. + return NO + + elif p.type == syms.dictsetmaker: + # dict unpacking + if prev and prev.type == token.DOUBLESTAR: + return NO + + elif p.type in {syms.factor, syms.star_expr}: + # unary ops + if not prev: + prevp = preceding_leaf(p) + if not prevp or prevp.type in OPENING_BRACKETS: + return NO + + prevp_parent = prevp.parent + assert prevp_parent is not None + if prevp.type == token.COLON and prevp_parent.type in { + syms.subscript, + syms.sliceop, + }: + return NO + + elif prevp.type == token.EQUAL and prevp_parent.type == syms.argument: + return NO + + elif t in {token.NAME, token.NUMBER, token.STRING}: + return NO + + elif p.type == syms.import_from: + if t == token.DOT: + if prev and prev.type == token.DOT: + return NO + + elif t == token.NAME: + if v == "import": + return SPACE + + if prev and prev.type == token.DOT: + return NO + + elif p.type == syms.sliceop: + return NO + + elif p.type == syms.except_clause: + if t == token.STAR: + return NO + + return SPACE + + +def preceding_leaf(node: Optional[LN]) -> Optional[Leaf]: + """Return the first leaf that precedes `node`, if any.""" + while node: + res = node.prev_sibling + if res: + if isinstance(res, Leaf): + return res + + try: + return list(res.leaves())[-1] + + except IndexError: + return None + + node = node.parent + return None + + +def prev_siblings_are(node: Optional[LN], tokens: List[Optional[NodeType]]) -> bool: + """Return if the `node` and its previous siblings match types against the provided + list of tokens; the provided `node`has its type matched against the last element in + the list. `None` can be used as the first element to declare that the start of the + list is anchored at the start of its parent's children.""" + if not tokens: + return True + if tokens[-1] is None: + return node is None + if not node: + return False + if node.type != tokens[-1]: + return False + return prev_siblings_are(node.prev_sibling, tokens[:-1]) + + +def parent_type(node: Optional[LN]) -> Optional[NodeType]: + """ + Returns: + @node.parent.type, if @node is not None and has a parent. + OR + None, otherwise. 
+ """ + if node is None or node.parent is None: + return None + + return node.parent.type + + +def child_towards(ancestor: Node, descendant: LN) -> Optional[LN]: + """Return the child of `ancestor` that contains `descendant`.""" + node: Optional[LN] = descendant + while node and node.parent != ancestor: + node = node.parent + return node + + +def replace_child(old_child: LN, new_child: LN) -> None: + """ + Side Effects: + * If @old_child.parent is set, replace @old_child with @new_child in + @old_child's underlying Node structure. + OR + * Otherwise, this function does nothing. + """ + parent = old_child.parent + if not parent: + return + + child_idx = old_child.remove() + if child_idx is not None: + parent.insert_child(child_idx, new_child) + + +def container_of(leaf: Leaf) -> LN: + """Return `leaf` or one of its ancestors that is the topmost container of it. + + By "container" we mean a node where `leaf` is the very first child. + """ + same_prefix = leaf.prefix + container: LN = leaf + while container: + parent = container.parent + if parent is None: + break + + if parent.children[0].prefix != same_prefix: + break + + if parent.type == syms.file_input: + break + + if parent.prev_sibling is not None and parent.prev_sibling.type in BRACKETS: + break + + container = parent + return container + + +def first_leaf_of(node: LN) -> Optional[Leaf]: + """Returns the first leaf of the node tree.""" + if isinstance(node, Leaf): + return node + if node.children: + return first_leaf_of(node.children[0]) + else: + return None + + +def is_arith_like(node: LN) -> bool: + """Whether node is an arithmetic or a binary arithmetic expression""" + return node.type in { + syms.arith_expr, + syms.shift_expr, + syms.xor_expr, + syms.and_expr, + } + + +def is_docstring(leaf: Leaf) -> bool: + if prev_siblings_are( + leaf.parent, [None, token.NEWLINE, token.INDENT, syms.simple_stmt] + ): + return True + + # Multiline docstring on the same line as the `def`. + if prev_siblings_are(leaf.parent, [syms.parameters, token.COLON, syms.simple_stmt]): + # `syms.parameters` is only used in funcdefs and async_funcdefs in the Python + # grammar. We're safe to return True without further checks. 
+ return True + + return False + + +def is_empty_tuple(node: LN) -> bool: + """Return True if `node` holds an empty tuple.""" + return ( + node.type == syms.atom + and len(node.children) == 2 + and node.children[0].type == token.LPAR + and node.children[1].type == token.RPAR + ) + + +def is_one_tuple(node: LN) -> bool: + """Return True if `node` holds a tuple with one element, with or without parens.""" + if node.type == syms.atom: + gexp = unwrap_singleton_parenthesis(node) + if gexp is None or gexp.type != syms.testlist_gexp: + return False + + return len(gexp.children) == 2 and gexp.children[1].type == token.COMMA + + return ( + node.type in IMPLICIT_TUPLE + and len(node.children) == 2 + and node.children[1].type == token.COMMA + ) + + +def is_one_sequence_between( + opening: Leaf, + closing: Leaf, + leaves: List[Leaf], + brackets: Tuple[int, int] = (token.LPAR, token.RPAR), +) -> bool: + """Return True if content between `opening` and `closing` is a one-sequence.""" + if (opening.type, closing.type) != brackets: + return False + + depth = closing.bracket_depth + 1 + for _opening_index, leaf in enumerate(leaves): + if leaf is opening: + break + + else: + raise LookupError("Opening paren not found in `leaves`") + + commas = 0 + _opening_index += 1 + for leaf in leaves[_opening_index:]: + if leaf is closing: + break + + bracket_depth = leaf.bracket_depth + if bracket_depth == depth and leaf.type == token.COMMA: + commas += 1 + if leaf.parent and leaf.parent.type in { + syms.arglist, + syms.typedargslist, + }: + commas += 1 + break + + return commas < 2 + + +def is_walrus_assignment(node: LN) -> bool: + """Return True iff `node` is of the shape ( test := test )""" + inner = unwrap_singleton_parenthesis(node) + return inner is not None and inner.type == syms.namedexpr_test + + +def is_simple_decorator_trailer(node: LN, last: bool = False) -> bool: + """Return True iff `node` is a trailer valid in a simple decorator""" + return node.type == syms.trailer and ( + ( + len(node.children) == 2 + and node.children[0].type == token.DOT + and node.children[1].type == token.NAME + ) + # last trailer can be an argument-less parentheses pair + or ( + last + and len(node.children) == 2 + and node.children[0].type == token.LPAR + and node.children[1].type == token.RPAR + ) + # last trailer can be arguments + or ( + last + and len(node.children) == 3 + and node.children[0].type == token.LPAR + # and node.children[1].type == syms.argument + and node.children[2].type == token.RPAR + ) + ) + + +def is_simple_decorator_expression(node: LN) -> bool: + """Return True iff `node` could be a 'dotted name' decorator + + This function takes the node of the 'namedexpr_test' of the new decorator + grammar and test if it would be valid under the old decorator grammar. 
+ + The old grammar was: decorator: @ dotted_name [arguments] NEWLINE + The new grammar is : decorator: @ namedexpr_test NEWLINE + """ + if node.type == token.NAME: + return True + if node.type == syms.power: + if node.children: + return ( + node.children[0].type == token.NAME + and all(map(is_simple_decorator_trailer, node.children[1:-1])) + and ( + len(node.children) < 2 + or is_simple_decorator_trailer(node.children[-1], last=True) + ) + ) + return False + + +def is_yield(node: LN) -> bool: + """Return True if `node` holds a `yield` or `yield from` expression.""" + if node.type == syms.yield_expr: + return True + + if is_name_token(node) and node.value == "yield": + return True + + if node.type != syms.atom: + return False + + if len(node.children) != 3: + return False + + lpar, expr, rpar = node.children + if lpar.type == token.LPAR and rpar.type == token.RPAR: + return is_yield(expr) + + return False + + +def is_vararg(leaf: Leaf, within: Set[NodeType]) -> bool: + """Return True if `leaf` is a star or double star in a vararg or kwarg. + + If `within` includes VARARGS_PARENTS, this applies to function signatures. + If `within` includes UNPACKING_PARENTS, it applies to right hand-side + extended iterable unpacking (PEP 3132) and additional unpacking + generalizations (PEP 448). + """ + if leaf.type not in VARARGS_SPECIALS or not leaf.parent: + return False + + p = leaf.parent + if p.type == syms.star_expr: + # Star expressions are also used as assignment targets in extended + # iterable unpacking (PEP 3132). See what its parent is instead. + if not p.parent: + return False + + p = p.parent + + return p.type in within + + +def is_multiline_string(leaf: Leaf) -> bool: + """Return True if `leaf` is a multiline string that actually spans many lines.""" + return has_triple_quotes(leaf.value) and "\n" in leaf.value + + +def is_stub_suite(node: Node) -> bool: + """Return True if `node` is a suite with a stub body.""" + if ( + len(node.children) != 4 + or node.children[0].type != token.NEWLINE + or node.children[1].type != token.INDENT + or node.children[3].type != token.DEDENT + ): + return False + + return is_stub_body(node.children[2]) + + +def is_stub_body(node: LN) -> bool: + """Return True if `node` is a simple statement containing an ellipsis.""" + if not isinstance(node, Node) or node.type != syms.simple_stmt: + return False + + if len(node.children) != 2: + return False + + child = node.children[0] + return ( + child.type == syms.atom + and len(child.children) == 3 + and all(leaf == Leaf(token.DOT, ".") for leaf in child.children) + ) + + +def is_atom_with_invisible_parens(node: LN) -> bool: + """Given a `LN`, determines whether it's an atom `node` with invisible + parens. Useful in dedupe-ing and normalizing parens. 
+ """ + if isinstance(node, Leaf) or node.type != syms.atom: + return False + + first, last = node.children[0], node.children[-1] + return ( + isinstance(first, Leaf) + and first.type == token.LPAR + and first.value == "" + and isinstance(last, Leaf) + and last.type == token.RPAR + and last.value == "" + ) + + +def is_empty_par(leaf: Leaf) -> bool: + return is_empty_lpar(leaf) or is_empty_rpar(leaf) + + +def is_empty_lpar(leaf: Leaf) -> bool: + return leaf.type == token.LPAR and leaf.value == "" + + +def is_empty_rpar(leaf: Leaf) -> bool: + return leaf.type == token.RPAR and leaf.value == "" + + +def is_import(leaf: Leaf) -> bool: + """Return True if the given leaf starts an import statement.""" + p = leaf.parent + t = leaf.type + v = leaf.value + return bool( + t == token.NAME + and ( + (v == "import" and p and p.type == syms.import_name) + or (v == "from" and p and p.type == syms.import_from) + ) + ) + + +def is_type_comment(leaf: Leaf, suffix: str = "") -> bool: + """Return True if the given leaf is a special comment. + Only returns true for type comments for now.""" + t = leaf.type + v = leaf.value + return t in {token.COMMENT, STANDALONE_COMMENT} and v.startswith("# type:" + suffix) + + +def wrap_in_parentheses(parent: Node, child: LN, *, visible: bool = True) -> None: + """Wrap `child` in parentheses. + + This replaces `child` with an atom holding the parentheses and the old + child. That requires moving the prefix. + + If `visible` is False, the leaves will be valueless (and thus invisible). + """ + lpar = Leaf(token.LPAR, "(" if visible else "") + rpar = Leaf(token.RPAR, ")" if visible else "") + prefix = child.prefix + child.prefix = "" + index = child.remove() or 0 + new_child = Node(syms.atom, [lpar, child, rpar]) + new_child.prefix = prefix + parent.insert_child(index, new_child) + + +def unwrap_singleton_parenthesis(node: LN) -> Optional[LN]: + """Returns `wrapped` if `node` is of the shape ( wrapped ). + + Parenthesis can be optional. Returns None otherwise""" + if len(node.children) != 3: + return None + + lpar, wrapped, rpar = node.children + if not (lpar.type == token.LPAR and rpar.type == token.RPAR): + return None + + return wrapped + + +def ensure_visible(leaf: Leaf) -> None: + """Make sure parentheses are visible. + + They could be invisible as part of some statements (see + :func:`normalize_invisible_parens` and :func:`visit_import_from`). + """ + if leaf.type == token.LPAR: + leaf.value = "(" + elif leaf.type == token.RPAR: + leaf.value = ")" + + +def is_name_token(nl: NL) -> TypeGuard[Leaf]: + return nl.type == token.NAME + + +def is_lpar_token(nl: NL) -> TypeGuard[Leaf]: + return nl.type == token.LPAR + + +def is_rpar_token(nl: NL) -> TypeGuard[Leaf]: + return nl.type == token.RPAR + + +def is_string_token(nl: NL) -> TypeGuard[Leaf]: + return nl.type == token.STRING + + +def is_number_token(nl: NL) -> TypeGuard[Leaf]: + return nl.type == token.NUMBER diff --git a/env/lib/python3.8/site-packages/black/numerics.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/numerics.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..f0722d79 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/numerics.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/numerics.py b/env/lib/python3.8/site-packages/black/numerics.py new file mode 100644 index 00000000..879e5b2c --- /dev/null +++ b/env/lib/python3.8/site-packages/black/numerics.py @@ -0,0 +1,60 @@ +""" +Formatting numeric literals. 
+""" +from blib2to3.pytree import Leaf + + +def format_hex(text: str) -> str: + """ + Formats a hexadecimal string like "0x12B3" + """ + before, after = text[:2], text[2:] + return f"{before}{after.upper()}" + + +def format_scientific_notation(text: str) -> str: + """Formats a numeric string utilizing scentific notation""" + before, after = text.split("e") + sign = "" + if after.startswith("-"): + after = after[1:] + sign = "-" + elif after.startswith("+"): + after = after[1:] + before = format_float_or_int_string(before) + return f"{before}e{sign}{after}" + + +def format_complex_number(text: str) -> str: + """Formats a complex string like `10j`""" + number = text[:-1] + suffix = text[-1] + return f"{format_float_or_int_string(number)}{suffix}" + + +def format_float_or_int_string(text: str) -> str: + """Formats a float string like "1.0".""" + if "." not in text: + return text + + before, after = text.split(".") + return f"{before or 0}.{after or 0}" + + +def normalize_numeric_literal(leaf: Leaf) -> None: + """Normalizes numeric (float, int, and complex) literals. + + All letters used in the representation are normalized to lowercase.""" + text = leaf.value.lower() + if text.startswith(("0o", "0b")): + # Leave octal and binary literals alone. + pass + elif text.startswith("0x"): + text = format_hex(text) + elif "e" in text: + text = format_scientific_notation(text) + elif text.endswith("j"): + text = format_complex_number(text) + else: + text = format_float_or_int_string(text) + leaf.value = text diff --git a/env/lib/python3.8/site-packages/black/output.py b/env/lib/python3.8/site-packages/black/output.py new file mode 100644 index 00000000..f4c17f28 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/output.py @@ -0,0 +1,105 @@ +"""Nice output for Black. + +The double calls are for patching purposes in tests. 
+""" + +import json +import tempfile +from typing import Any, Optional + +from click import echo, style +from mypy_extensions import mypyc_attr + + +@mypyc_attr(patchable=True) +def _out(message: Optional[str] = None, nl: bool = True, **styles: Any) -> None: + if message is not None: + if "bold" not in styles: + styles["bold"] = True + message = style(message, **styles) + echo(message, nl=nl, err=True) + + +@mypyc_attr(patchable=True) +def _err(message: Optional[str] = None, nl: bool = True, **styles: Any) -> None: + if message is not None: + if "fg" not in styles: + styles["fg"] = "red" + message = style(message, **styles) + echo(message, nl=nl, err=True) + + +@mypyc_attr(patchable=True) +def out(message: Optional[str] = None, nl: bool = True, **styles: Any) -> None: + _out(message, nl=nl, **styles) + + +def err(message: Optional[str] = None, nl: bool = True, **styles: Any) -> None: + _err(message, nl=nl, **styles) + + +def ipynb_diff(a: str, b: str, a_name: str, b_name: str) -> str: + """Return a unified diff string between each cell in notebooks `a` and `b`.""" + a_nb = json.loads(a) + b_nb = json.loads(b) + diff_lines = [ + diff( + "".join(a_nb["cells"][cell_number]["source"]) + "\n", + "".join(b_nb["cells"][cell_number]["source"]) + "\n", + f"{a_name}:cell_{cell_number}", + f"{b_name}:cell_{cell_number}", + ) + for cell_number, cell in enumerate(a_nb["cells"]) + if cell["cell_type"] == "code" + ] + return "".join(diff_lines) + + +def diff(a: str, b: str, a_name: str, b_name: str) -> str: + """Return a unified diff string between strings `a` and `b`.""" + import difflib + + a_lines = a.splitlines(keepends=True) + b_lines = b.splitlines(keepends=True) + diff_lines = [] + for line in difflib.unified_diff( + a_lines, b_lines, fromfile=a_name, tofile=b_name, n=5 + ): + # Work around https://bugs.python.org/issue2142 + # See: + # https://www.gnu.org/software/diffutils/manual/html_node/Incomplete-Lines.html + if line[-1] == "\n": + diff_lines.append(line) + else: + diff_lines.append(line + "\n") + diff_lines.append("\\ No newline at end of file\n") + return "".join(diff_lines) + + +def color_diff(contents: str) -> str: + """Inject the ANSI color codes to the diff.""" + lines = contents.split("\n") + for i, line in enumerate(lines): + if line.startswith("+++") or line.startswith("---"): + line = "\033[1m" + line + "\033[0m" # bold, reset + elif line.startswith("@@"): + line = "\033[36m" + line + "\033[0m" # cyan, reset + elif line.startswith("+"): + line = "\033[32m" + line + "\033[0m" # green, reset + elif line.startswith("-"): + line = "\033[31m" + line + "\033[0m" # red, reset + lines[i] = line + return "\n".join(lines) + + +@mypyc_attr(patchable=True) +def dump_to_file(*output: str, ensure_final_newline: bool = True) -> str: + """Dump `output` to a temporary file. 
Return path to the file.""" + with tempfile.NamedTemporaryFile( + mode="w", prefix="blk_", suffix=".log", delete=False, encoding="utf8" + ) as f: + for lines in output: + f.write(lines) + if ensure_final_newline and lines and lines[-1] != "\n": + f.write("\n") + return f.name diff --git a/env/lib/python3.8/site-packages/black/parsing.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/parsing.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..2e3651a0 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/parsing.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/parsing.py b/env/lib/python3.8/site-packages/black/parsing.py new file mode 100644 index 00000000..64c0b1e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/parsing.py @@ -0,0 +1,278 @@ +""" +Parse Python code and perform AST validation. +""" +import ast +import platform +import sys +from typing import Any, Iterable, Iterator, List, Set, Tuple, Type, Union + +if sys.version_info < (3, 8): + from typing_extensions import Final +else: + from typing import Final + +from black.mode import Feature, TargetVersion, supports_feature +from black.nodes import syms +from blib2to3 import pygram +from blib2to3.pgen2 import driver +from blib2to3.pgen2.grammar import Grammar +from blib2to3.pgen2.parse import ParseError +from blib2to3.pgen2.tokenize import TokenError +from blib2to3.pytree import Leaf, Node + +ast3: Any + +_IS_PYPY = platform.python_implementation() == "PyPy" + +try: + from typed_ast import ast3 +except ImportError: + # Either our python version is too low, or we're on pypy + if sys.version_info < (3, 7) or (sys.version_info < (3, 8) and not _IS_PYPY): + print( + "The typed_ast package is required but not installed.\n" + "You can upgrade to Python 3.8+ or install typed_ast with\n" + "`python3 -m pip install typed-ast`.", + file=sys.stderr, + ) + sys.exit(1) + else: + ast3 = ast + + +PY2_HINT: Final = "Python 2 support was removed in version 22.0." + + +class InvalidInput(ValueError): + """Raised when input source code fails all parse attempts.""" + + +def get_grammars(target_versions: Set[TargetVersion]) -> List[Grammar]: + if not target_versions: + # No target_version specified, so try all grammars. 
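+        # (lib2to3_parse below tries these grammars in order and keeps the
+        # first successful parse, so the 3.10+ soft-keyword grammar is
+        # attempted last.)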
+        return [
+            # Python 3.7+
+            pygram.python_grammar_no_print_statement_no_exec_statement_async_keywords,
+            # Python 3.0-3.6
+            pygram.python_grammar_no_print_statement_no_exec_statement,
+            # Python 3.10+
+            pygram.python_grammar_soft_keywords,
+        ]
+
+    grammars = []
+    # If we have to parse both, try to parse async as a keyword first
+    if not supports_feature(
+        target_versions, Feature.ASYNC_IDENTIFIERS
+    ) and not supports_feature(target_versions, Feature.PATTERN_MATCHING):
+        # Python 3.7-3.9
+        grammars.append(
+            pygram.python_grammar_no_print_statement_no_exec_statement_async_keywords
+        )
+    if not supports_feature(target_versions, Feature.ASYNC_KEYWORDS):
+        # Python 3.0-3.6
+        grammars.append(pygram.python_grammar_no_print_statement_no_exec_statement)
+    if supports_feature(target_versions, Feature.PATTERN_MATCHING):
+        # Python 3.10+
+        grammars.append(pygram.python_grammar_soft_keywords)
+
+    # At least one of the above branches must have been taken, because every Python
+    # version has exactly one of the two 'ASYNC_*' flags
+    return grammars
+
+
+def lib2to3_parse(src_txt: str, target_versions: Iterable[TargetVersion] = ()) -> Node:
+    """Given a string with source, return the lib2to3 Node."""
+    if not src_txt.endswith("\n"):
+        src_txt += "\n"
+
+    grammars = get_grammars(set(target_versions))
+    errors = {}
+    for grammar in grammars:
+        drv = driver.Driver(grammar)
+        try:
+            result = drv.parse_string(src_txt, True)
+            break
+
+        except ParseError as pe:
+            lineno, column = pe.context[1]
+            lines = src_txt.splitlines()
+            try:
+                faulty_line = lines[lineno - 1]
+            except IndexError:
+                faulty_line = "<line number missing in source>"
+            errors[grammar.version] = InvalidInput(
+                f"Cannot parse: {lineno}:{column}: {faulty_line}"
+            )
+
+        except TokenError as te:
+            # In edge cases these are raised; and typically don't have a "faulty_line".
+            lineno, column = te.args[1]
+            errors[grammar.version] = InvalidInput(
+                f"Cannot parse: {lineno}:{column}: {te.args[0]}"
+            )
+
+    else:
+        # Choose the latest version when raising the actual parsing error.
+        assert len(errors) >= 1
+        exc = errors[max(errors)]
+
+        if matches_grammar(src_txt, pygram.python_grammar) or matches_grammar(
+            src_txt, pygram.python_grammar_no_print_statement
+        ):
+            original_msg = exc.args[0]
+            msg = f"{original_msg}\n{PY2_HINT}"
+            raise InvalidInput(msg) from None
+
+        raise exc from None
+
+    if isinstance(result, Leaf):
+        result = Node(syms.file_input, [result])
+    return result
+
+
+def matches_grammar(src_txt: str, grammar: Grammar) -> bool:
+    drv = driver.Driver(grammar)
+    try:
+        drv.parse_string(src_txt, True)
+    except (ParseError, TokenError, IndentationError):
+        return False
+    else:
+        return True
+
+
+def lib2to3_unparse(node: Node) -> str:
+    """Given a lib2to3 node, return its string representation."""
+    code = str(node)
+    return code
+
+
+def parse_single_version(
+    src: str, version: Tuple[int, int]
+) -> Union[ast.AST, ast3.AST]:
+    filename = "<unknown>"
+    # typed-ast is needed because of feature version limitations in the builtin ast 3.8>
+    if sys.version_info >= (3, 8) and version >= (3,):
+        return ast.parse(src, filename, feature_version=version, type_comments=True)
+
+    if _IS_PYPY:
+        # PyPy 3.7 doesn't support type comment tracking which is not ideal, but there's
+        # not much we can do as typed-ast won't work either.
+        if sys.version_info >= (3, 8):
+            return ast3.parse(src, filename, type_comments=True)
+        else:
+            return ast3.parse(src, filename)
+    else:
+        # Typed-ast is guaranteed to be used here and automatically tracks type
+        # comments separately.
+ return ast3.parse(src, filename, feature_version=version[1]) + + raise AssertionError("INTERNAL ERROR: Tried parsing unsupported Python version!") + + +def parse_ast(src: str) -> Union[ast.AST, ast3.AST]: + # TODO: support Python 4+ ;) + versions = [(3, minor) for minor in range(3, sys.version_info[1] + 1)] + + first_error = "" + for version in sorted(versions, reverse=True): + try: + return parse_single_version(src, version) + except SyntaxError as e: + if not first_error: + first_error = str(e) + + raise SyntaxError(first_error) + + +ast3_AST: Final[Type[ast3.AST]] = ast3.AST + + +def _normalize(lineend: str, value: str) -> str: + # To normalize, we strip any leading and trailing space from + # each line... + stripped: List[str] = [i.strip() for i in value.splitlines()] + normalized = lineend.join(stripped) + # ...and remove any blank lines at the beginning and end of + # the whole string + return normalized.strip() + + +def stringify_ast(node: Union[ast.AST, ast3.AST], depth: int = 0) -> Iterator[str]: + """Simple visitor generating strings to compare ASTs by content.""" + + node = fixup_ast_constants(node) + + yield f"{' ' * depth}{node.__class__.__name__}(" + + type_ignore_classes: Tuple[Type[Any], ...] + for field in sorted(node._fields): # noqa: F402 + # TypeIgnore will not be present using pypy < 3.8, so need for this + if not (_IS_PYPY and sys.version_info < (3, 8)): + # TypeIgnore has only one field 'lineno' which breaks this comparison + type_ignore_classes = (ast3.TypeIgnore,) + if sys.version_info >= (3, 8): + type_ignore_classes += (ast.TypeIgnore,) + if isinstance(node, type_ignore_classes): + break + + try: + value: object = getattr(node, field) + except AttributeError: + continue + + yield f"{' ' * (depth+1)}{field}=" + + if isinstance(value, list): + for item in value: + # Ignore nested tuples within del statements, because we may insert + # parentheses and they change the AST. + if ( + field == "targets" + and isinstance(node, (ast.Delete, ast3.Delete)) + and isinstance(item, (ast.Tuple, ast3.Tuple)) + ): + for elt in item.elts: + yield from stringify_ast(elt, depth + 2) + + elif isinstance(item, (ast.AST, ast3.AST)): + yield from stringify_ast(item, depth + 2) + + # Note that we are referencing the typed-ast ASTs via global variables and not + # direct module attribute accesses because that breaks mypyc. It's probably + # something to do with the ast3 variables being marked as Any leading + # mypy to think this branch is always taken, leaving the rest of the code + # unanalyzed. Tighting up the types for the typed-ast AST types avoids the + # mypyc crash. + elif isinstance(value, (ast.AST, ast3_AST)): + yield from stringify_ast(value, depth + 2) + + else: + normalized: object + # Constant strings may be indented across newlines, if they are + # docstrings; fold spaces after newlines when comparing. Similarly, + # trailing and leading space may be removed. 
+ if ( + isinstance(node, ast.Constant) + and field == "value" + and isinstance(value, str) + ): + normalized = _normalize("\n", value) + else: + normalized = value + yield f"{' ' * (depth+2)}{normalized!r}, # {value.__class__.__name__}" + + yield f"{' ' * depth}) # /{node.__class__.__name__}" + + +def fixup_ast_constants(node: Union[ast.AST, ast3.AST]) -> Union[ast.AST, ast3.AST]: + """Map ast nodes deprecated in 3.8 to Constant.""" + if isinstance(node, (ast.Str, ast3.Str, ast.Bytes, ast3.Bytes)): + return ast.Constant(value=node.s) + + if isinstance(node, (ast.Num, ast3.Num)): + return ast.Constant(value=node.n) + + if isinstance(node, (ast.NameConstant, ast3.NameConstant)): + return ast.Constant(value=node.value) + + return node diff --git a/env/lib/python3.8/site-packages/black/py.typed b/env/lib/python3.8/site-packages/black/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/black/report.py b/env/lib/python3.8/site-packages/black/report.py new file mode 100644 index 00000000..a507671e --- /dev/null +++ b/env/lib/python3.8/site-packages/black/report.py @@ -0,0 +1,106 @@ +""" +Summarize Black runs to users. +""" +from dataclasses import dataclass +from enum import Enum +from pathlib import Path + +from click import style + +from black.output import err, out + + +class Changed(Enum): + NO = 0 + CACHED = 1 + YES = 2 + + +class NothingChanged(UserWarning): + """Raised when reformatted code is the same as source.""" + + +@dataclass +class Report: + """Provides a reformatting counter. Can be rendered with `str(report)`.""" + + check: bool = False + diff: bool = False + quiet: bool = False + verbose: bool = False + change_count: int = 0 + same_count: int = 0 + failure_count: int = 0 + + def done(self, src: Path, changed: Changed) -> None: + """Increment the counter for successful reformatting. Write out a message.""" + if changed is Changed.YES: + reformatted = "would reformat" if self.check or self.diff else "reformatted" + if self.verbose or not self.quiet: + out(f"{reformatted} {src}") + self.change_count += 1 + else: + if self.verbose: + if changed is Changed.NO: + msg = f"{src} already well formatted, good job." + else: + msg = f"{src} wasn't modified on disk since last run." + out(msg, bold=False) + self.same_count += 1 + + def failed(self, src: Path, message: str) -> None: + """Increment the counter for failed reformatting. Write out a message.""" + err(f"error: cannot format {src}: {message}") + self.failure_count += 1 + + def path_ignored(self, path: Path, message: str) -> None: + if self.verbose: + out(f"{path} ignored: {message}", bold=False) + + @property + def return_code(self) -> int: + """Return the exit code that the app should use. + + This considers the current state of changed files and failures: + - if there were any failures, return 123; + - if any files were changed and --check is being used, return 1; + - otherwise return 0. + """ + # According to http://tldp.org/LDP/abs/html/exitcodes.html starting with + # 126 we have special return codes reserved by the shell. + if self.failure_count: + return 123 + + elif self.change_count and self.check: + return 1 + + return 0 + + def __str__(self) -> str: + """Render a color report of the current state. + + Use `click.unstyle` to remove colors. 
+ """ + if self.check or self.diff: + reformatted = "would be reformatted" + unchanged = "would be left unchanged" + failed = "would fail to reformat" + else: + reformatted = "reformatted" + unchanged = "left unchanged" + failed = "failed to reformat" + report = [] + if self.change_count: + s = "s" if self.change_count > 1 else "" + report.append( + style(f"{self.change_count} file{s} ", bold=True, fg="blue") + + style(f"{reformatted}", bold=True) + ) + + if self.same_count: + s = "s" if self.same_count > 1 else "" + report.append(style(f"{self.same_count} file{s} ", fg="blue") + unchanged) + if self.failure_count: + s = "s" if self.failure_count > 1 else "" + report.append(style(f"{self.failure_count} file{s} {failed}", fg="red")) + return ", ".join(report) + "." diff --git a/env/lib/python3.8/site-packages/black/rusty.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/rusty.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..7cb02c59 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/rusty.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/rusty.py b/env/lib/python3.8/site-packages/black/rusty.py new file mode 100644 index 00000000..84a80b5a --- /dev/null +++ b/env/lib/python3.8/site-packages/black/rusty.py @@ -0,0 +1,27 @@ +"""An error-handling model influenced by that used by the Rust programming language + +See https://doc.rust-lang.org/book/ch09-00-error-handling.html. +""" +from typing import Generic, TypeVar, Union + +T = TypeVar("T") +E = TypeVar("E", bound=Exception) + + +class Ok(Generic[T]): + def __init__(self, value: T) -> None: + self._value = value + + def ok(self) -> T: + return self._value + + +class Err(Generic[E]): + def __init__(self, e: E) -> None: + self._e = e + + def err(self) -> E: + return self._e + + +Result = Union[Ok[T], Err[E]] diff --git a/env/lib/python3.8/site-packages/black/strings.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/strings.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..7ec6c549 Binary files /dev/null and b/env/lib/python3.8/site-packages/black/strings.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/black/strings.py b/env/lib/python3.8/site-packages/black/strings.py new file mode 100644 index 00000000..9d0e2eb8 --- /dev/null +++ b/env/lib/python3.8/site-packages/black/strings.py @@ -0,0 +1,238 @@ +""" +Simple formatting on strings. Further string formatting code is in trans.py. +""" + +import re +import sys +from functools import lru_cache +from typing import List, Pattern + +if sys.version_info < (3, 8): + from typing_extensions import Final +else: + from typing import Final + + +STRING_PREFIX_CHARS: Final = "furbFURB" # All possible string prefix characters. +STRING_PREFIX_RE: Final = re.compile( + r"^([" + STRING_PREFIX_CHARS + r"]*)(.*)$", re.DOTALL +) +FIRST_NON_WHITESPACE_RE: Final = re.compile(r"\s*\t+\s*(\S)") + + +def sub_twice(regex: Pattern[str], replacement: str, original: str) -> str: + """Replace `regex` with `replacement` twice on `original`. + + This is used by string normalization to perform replaces on + overlapping matches. + """ + return regex.sub(replacement, regex.sub(replacement, original)) + + +def has_triple_quotes(string: str) -> bool: + """ + Returns: + True iff @string starts with three quotation characters. 
+    """
+    raw_string = string.lstrip(STRING_PREFIX_CHARS)
+    return raw_string[:3] in {'"""', "'''"}
+
+
+def lines_with_leading_tabs_expanded(s: str) -> List[str]:
+    """
+    Splits string into lines and expands only leading tabs (following the normal
+    Python rules)
+    """
+    lines = []
+    for line in s.splitlines():
+        # Find the index of the first non-whitespace character after a string of
+        # whitespace that includes at least one tab
+        match = FIRST_NON_WHITESPACE_RE.match(line)
+        if match:
+            first_non_whitespace_idx = match.start(1)
+
+            lines.append(
+                line[:first_non_whitespace_idx].expandtabs()
+                + line[first_non_whitespace_idx:]
+            )
+        else:
+            lines.append(line)
+    return lines
+
+
+def fix_docstring(docstring: str, prefix: str) -> str:
+    # https://www.python.org/dev/peps/pep-0257/#handling-docstring-indentation
+    if not docstring:
+        return ""
+    lines = lines_with_leading_tabs_expanded(docstring)
+    # Determine minimum indentation (first line doesn't count):
+    indent = sys.maxsize
+    for line in lines[1:]:
+        stripped = line.lstrip()
+        if stripped:
+            indent = min(indent, len(line) - len(stripped))
+    # Remove indentation (first line is special):
+    trimmed = [lines[0].strip()]
+    if indent < sys.maxsize:
+        last_line_idx = len(lines) - 2
+        for i, line in enumerate(lines[1:]):
+            stripped_line = line[indent:].rstrip()
+            if stripped_line or i == last_line_idx:
+                trimmed.append(prefix + stripped_line)
+            else:
+                trimmed.append("")
+    return "\n".join(trimmed)
+
+
+def get_string_prefix(string: str) -> str:
+    """
+    Pre-conditions:
+        * assert_is_leaf_string(@string)
+
+    Returns:
+        @string's prefix (e.g. '', 'r', 'f', or 'rf').
+    """
+    assert_is_leaf_string(string)
+
+    prefix = ""
+    prefix_idx = 0
+    while string[prefix_idx] in STRING_PREFIX_CHARS:
+        prefix += string[prefix_idx]
+        prefix_idx += 1
+
+    return prefix
+
+
+def assert_is_leaf_string(string: str) -> None:
+    """
+    Checks the pre-condition that @string has the format that you would expect
+    of `leaf.value` where `leaf` is some Leaf such that `leaf.type ==
+    token.STRING`. A more precise description of the pre-conditions that are
+    checked are listed below.
+
+    Pre-conditions:
+        * @string starts with either ', ", <prefix>', or <prefix>" where
+          `set(<prefix>)` is some subset of `set(STRING_PREFIX_CHARS)`.
+        * @string ends with a quote character (' or ").
+
+    Raises:
+        AssertionError(...) if the pre-conditions listed above are not
+        satisfied.
+    """
+    dquote_idx = string.find('"')
+    squote_idx = string.find("'")
+    if -1 in [dquote_idx, squote_idx]:
+        quote_idx = max(dquote_idx, squote_idx)
+    else:
+        quote_idx = min(squote_idx, dquote_idx)
+
+    assert (
+        0 <= quote_idx < len(string) - 1
+    ), f"{string!r} is missing a starting quote character (' or \")."
+    assert string[-1] in (
+        "'",
+        '"',
+    ), f"{string!r} is missing an ending quote character (' or \")."
+    assert set(string[:quote_idx]).issubset(
+        set(STRING_PREFIX_CHARS)
+    ), f"{set(string[:quote_idx])} is NOT a subset of {set(STRING_PREFIX_CHARS)}."
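+
+# A minimal usage sketch of the helpers above (illustrative only; within Black
+# these are driven from the formatting passes rather than called standalone):
+#
+#     get_string_prefix("rb'hello'")   # -> "rb"
+#     has_triple_quotes('"""docs"""')  # -> True
+#     fix_docstring("  one\n      two", prefix="    ")  # -> "one\n    two"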
+
+
+def normalize_string_prefix(s: str) -> str:
+    """Make all string prefixes lowercase."""
+    match = STRING_PREFIX_RE.match(s)
+    assert match is not None, f"failed to match string {s!r}"
+    orig_prefix = match.group(1)
+    new_prefix = (
+        orig_prefix.replace("F", "f")
+        .replace("B", "b")
+        .replace("U", "")
+        .replace("u", "")
+    )
+
+    # Python syntax guarantees max 2 prefixes and that one of them is "r"
+    if len(new_prefix) == 2 and "r" != new_prefix[0].lower():
+        new_prefix = new_prefix[::-1]
+    return f"{new_prefix}{match.group(2)}"
+
+
+# Re(gex) does actually cache patterns internally but this still improves
+# performance on a long list literal of strings by 5-9% since lru_cache's
+# caching overhead is much lower.
+@lru_cache(maxsize=64)
+def _cached_compile(pattern: str) -> Pattern[str]:
+    return re.compile(pattern)
+
+
+def normalize_string_quotes(s: str) -> str:
+    """Prefer double quotes but only if it doesn't cause more escaping.
+
+    Adds or removes backslashes as appropriate. Doesn't parse and fix
+    strings nested in f-strings.
+    """
+    value = s.lstrip(STRING_PREFIX_CHARS)
+    if value[:3] == '"""':
+        return s
+
+    elif value[:3] == "'''":
+        orig_quote = "'''"
+        new_quote = '"""'
+    elif value[0] == '"':
+        orig_quote = '"'
+        new_quote = "'"
+    else:
+        orig_quote = "'"
+        new_quote = '"'
+    first_quote_pos = s.find(orig_quote)
+    if first_quote_pos == -1:
+        return s  # There's an internal error
+
+    prefix = s[:first_quote_pos]
+    unescaped_new_quote = _cached_compile(rf"(([^\\]|^)(\\\\)*){new_quote}")
+    escaped_new_quote = _cached_compile(rf"([^\\]|^)\\((?:\\\\)*){new_quote}")
+    escaped_orig_quote = _cached_compile(rf"([^\\]|^)\\((?:\\\\)*){orig_quote}")
+    body = s[first_quote_pos + len(orig_quote) : -len(orig_quote)]
+    if "r" in prefix.casefold():
+        if unescaped_new_quote.search(body):
+            # There's at least one unescaped new_quote in this raw string
+            # so converting is impossible
+            return s
+
+        # Do not introduce or remove backslashes in raw strings
+        new_body = body
+    else:
+        # remove unnecessary escapes
+        new_body = sub_twice(escaped_new_quote, rf"\1\2{new_quote}", body)
+        if body != new_body:
+            # Consider the string without unnecessary escapes as the original
+            body = new_body
+            s = f"{prefix}{orig_quote}{body}{orig_quote}"
+        new_body = sub_twice(escaped_orig_quote, rf"\1\2{orig_quote}", new_body)
+        new_body = sub_twice(unescaped_new_quote, rf"\1\\{new_quote}", new_body)
+    if "f" in prefix.casefold():
+        matches = re.findall(
+            r"""
+            (?:(?<!\{)|^)  # start of string or non-{ followed by
+            \{  # and next char is not {
+            [^{].*?  # match everything up to
+            (?<!\})\}(?!\})  # closing curly brace which is not }}
+            """,
+            new_body,
+            re.VERBOSE,
+        )
+        for m in matches:
+            if "\\" in str(m):
+                # Do not introduce backslashes in interpolated expressions. That will
+                # change the perceived behavior of the string value returned by the
+                # underlying function.
+                return s
+
+    if new_quote == '"""' and new_body[-1:] == '"':
+        # edge case:
+        new_body = new_body[:-1] + '\\"'
+    new_escape_count = new_body.count("\\")
+    orig_escape_count = body.count("\\")
+    if new_escape_count > orig_escape_count:
+        return s  # Do not introduce more escaping
+
+    if new_escape_count == orig_escape_count and orig_quote == '"':
+        return s  # Prefer double quotes
+
+    return f"{prefix}{new_quote}{new_body}{new_quote}"
diff --git a/env/lib/python3.8/site-packages/black/trans.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/black/trans.cpython-38-x86_64-linux-gnu.so
new file mode 100644
index 00000000..d9cd47f1
Binary files /dev/null and b/env/lib/python3.8/site-packages/black/trans.cpython-38-x86_64-linux-gnu.so differ
diff --git a/env/lib/python3.8/site-packages/black/trans.py b/env/lib/python3.8/site-packages/black/trans.py
new file mode 100644
index 00000000..7e2d8e67
--- /dev/null
+++ b/env/lib/python3.8/site-packages/black/trans.py
@@ -0,0 +1,2222 @@
+"""
+String transformers that can split and merge strings.
+""" +import re +import sys +from abc import ABC, abstractmethod +from collections import defaultdict +from dataclasses import dataclass +from typing import ( + Any, + Callable, + ClassVar, + Collection, + Dict, + Iterable, + Iterator, + List, + Optional, + Sequence, + Set, + Tuple, + TypeVar, + Union, +) + +if sys.version_info < (3, 8): + from typing_extensions import Final, Literal +else: + from typing import Literal, Final + +from mypy_extensions import trait + +from black.brackets import BracketMatchError +from black.comments import contains_pragma_comment +from black.lines import Line, append_leaves +from black.mode import Feature +from black.nodes import ( + CLOSING_BRACKETS, + OPENING_BRACKETS, + STANDALONE_COMMENT, + is_empty_lpar, + is_empty_par, + is_empty_rpar, + parent_type, + replace_child, + syms, +) +from black.rusty import Err, Ok, Result +from black.strings import ( + assert_is_leaf_string, + get_string_prefix, + has_triple_quotes, + normalize_string_quotes, +) +from blib2to3.pgen2 import token +from blib2to3.pytree import Leaf, Node + + +class CannotTransform(Exception): + """Base class for errors raised by Transformers.""" + + +# types +T = TypeVar("T") +LN = Union[Leaf, Node] +Transformer = Callable[[Line, Collection[Feature]], Iterator[Line]] +Index = int +NodeType = int +ParserState = int +StringID = int +TResult = Result[T, CannotTransform] # (T)ransform Result +TMatchResult = TResult[Index] + + +def TErr(err_msg: str) -> Err[CannotTransform]: + """(T)ransform Err + + Convenience function used when working with the TResult type. + """ + cant_transform = CannotTransform(err_msg) + return Err(cant_transform) + + +def hug_power_op(line: Line, features: Collection[Feature]) -> Iterator[Line]: + """A transformer which normalizes spacing around power operators.""" + + # Performance optimization to avoid unnecessary Leaf clones and other ops. + for leaf in line.leaves: + if leaf.type == token.DOUBLESTAR: + break + else: + raise CannotTransform("No doublestar token was found in the line.") + + def is_simple_lookup(index: int, step: Literal[1, -1]) -> bool: + # Brackets and parentheses indicate calls, subscripts, etc. ... + # basically stuff that doesn't count as "simple". Only a NAME lookup + # or dotted lookup (eg. NAME.NAME) is OK. + if step == -1: + disallowed = {token.RPAR, token.RSQB} + else: + disallowed = {token.LPAR, token.LSQB} + + while 0 <= index < len(line.leaves): + current = line.leaves[index] + if current.type in disallowed: + return False + if current.type not in {token.NAME, token.DOT} or current.value == "for": + # If the current token isn't disallowed, we'll assume this is simple as + # only the disallowed tokens are semantically attached to this lookup + # expression we're checking. Also, stop early if we hit the 'for' bit + # of a comprehension. + return True + + index += step + + return True + + def is_simple_operand(index: int, kind: Literal["base", "exponent"]) -> bool: + # An operand is considered "simple" if's a NAME, a numeric CONSTANT, a simple + # lookup (see above), with or without a preceding unary operator. + start = line.leaves[index] + if start.type in {token.NAME, token.NUMBER}: + return is_simple_lookup(index, step=(1 if kind == "exponent" else -1)) + + if start.type in {token.PLUS, token.MINUS, token.TILDE}: + if line.leaves[index + 1].type in {token.NAME, token.NUMBER}: + # step is always one as bases with a preceding unary op will be checked + # for simplicity starting from the next token (so it'll hit the check + # above). 
+                return is_simple_lookup(index + 1, step=1)
+
+        return False
+
+    new_line = line.clone()
+    should_hug = False
+    for idx, leaf in enumerate(line.leaves):
+        new_leaf = leaf.clone()
+        if should_hug:
+            new_leaf.prefix = ""
+            should_hug = False
+
+        should_hug = (
+            (0 < idx < len(line.leaves) - 1)
+            and leaf.type == token.DOUBLESTAR
+            and is_simple_operand(idx - 1, kind="base")
+            and line.leaves[idx - 1].value != "lambda"
+            and is_simple_operand(idx + 1, kind="exponent")
+        )
+        if should_hug:
+            new_leaf.prefix = ""
+
+        # We have to be careful to make a new line properly:
+        # - bracket related metadata must be maintained (handled by Line.append)
+        # - comments need to be copied over, updating the leaf IDs they're attached to
+        new_line.append(new_leaf, preformatted=True)
+        for comment_leaf in line.comments_after(leaf):
+            new_line.append(comment_leaf, preformatted=True)
+
+    yield new_line
+
+
+class StringTransformer(ABC):
+    """
+    An implementation of the Transformer protocol that relies on its
+    subclasses overriding the template methods `do_match(...)` and
+    `do_transform(...)`.
+
+    This Transformer works exclusively on strings (for example, by merging
+    or splitting them).
+
+    The following sections can be found among the docstrings of each concrete
+    StringTransformer subclass.
+
+    Requirements:
+        Which requirements must be met by the given Line for this
+        StringTransformer to be applied?
+
+    Transformations:
+        If the given Line meets all of the above requirements, which string
+        transformations can you expect to be applied to it by this
+        StringTransformer?
+
+    Collaborations:
+        What contractual agreements does this StringTransformer have with other
+        StringTransformers? Such collaborations should be eliminated/minimized
+        as much as possible.
+    """
+
+    __name__: Final = "StringTransformer"
+
+    # Ideally this would be a dataclass, but unfortunately mypyc breaks when used with
+    # `abc.ABC`.
+    def __init__(self, line_length: int, normalize_strings: bool) -> None:
+        self.line_length = line_length
+        self.normalize_strings = normalize_strings
+
+    @abstractmethod
+    def do_match(self, line: Line) -> TMatchResult:
+        """
+        Returns:
+            * Ok(string_idx) such that `line.leaves[string_idx]` is our target
+              string, if a match was able to be made.
+            OR
+            * Err(CannotTransform), if a match was not able to be made.
+        """
+
+    @abstractmethod
+    def do_transform(self, line: Line, string_idx: int) -> Iterator[TResult[Line]]:
+        """
+        Yields:
+            * Ok(new_line) where new_line is the new transformed line.
+            OR
+            * Err(CannotTransform) if the transformation failed for some reason. The
+              `do_match(...)` template method should usually be used to reject
+              the form of the given Line, but in some cases it is difficult to
+              know whether or not a Line meets the StringTransformer's
+              requirements until the transformation is already midway.
+
+        Side Effects:
+            This method should NOT mutate @line directly, but it MAY mutate the
+            Line's underlying Node structure. (WARNING: If the underlying Node
+            structure IS altered, then this method should NOT be allowed to
+            yield a CannotTransform after that point.)
+        """
+
+    def __call__(self, line: Line, _features: Collection[Feature]) -> Iterator[Line]:
+        """
+        StringTransformer instances have a call signature that mirrors that of
+        the Transformer type.
+
+        Raises:
+            CannotTransform(...) if the concrete StringTransformer class is unable
+            to transform @line.
+        """
+        # Optimization to avoid calling `self.do_match(...)` when the line does
+        # not contain any string.
+ if not any(leaf.type == token.STRING for leaf in line.leaves):
+ raise CannotTransform("There are no strings in this line.")
+
+ match_result = self.do_match(line)
+
+ if isinstance(match_result, Err):
+ cant_transform = match_result.err()
+ raise CannotTransform(
+ f"The string transformer {self.__class__.__name__} does not recognize"
+ " this line as one that it can transform."
+ ) from cant_transform
+
+ string_idx = match_result.ok()
+
+ for line_result in self.do_transform(line, string_idx):
+ if isinstance(line_result, Err):
+ cant_transform = line_result.err()
+ raise CannotTransform(
+ "StringTransformer failed while attempting to transform string."
+ ) from cant_transform
+ line = line_result.ok()
+ yield line
+
+
+@dataclass
+class CustomSplit:
+ """A custom (i.e. manual) string split.
+
+ A single CustomSplit instance represents a single substring.
+
+ Examples:
+ Consider the following string:
+ ```
+ "Hi there friend."
+ " This is a custom"
+ f" string {split}."
+ ```
+
+ This string will correspond to the following three CustomSplit instances:
+ ```
+ CustomSplit(False, 16)
+ CustomSplit(False, 17)
+ CustomSplit(True, 16)
+ ```
+ """
+
+ has_prefix: bool
+ break_idx: int
+
+
+@trait
+class CustomSplitMapMixin:
+ """
+ This mixin class is used to map merged strings to a sequence of
+ CustomSplits, which will then be used to re-split the strings iff none of
+ the resultant substrings go over the configured max line length.
+ """
+
+ _Key: ClassVar = Tuple[StringID, str]
+ _CUSTOM_SPLIT_MAP: ClassVar[Dict[_Key, Tuple[CustomSplit, ...]]] = defaultdict(
+ tuple
+ )
+
+ @staticmethod
+ def _get_key(string: str) -> "CustomSplitMapMixin._Key":
+ """
+ Returns:
+ A unique identifier that is used internally to map @string to a
+ group of custom splits.
+ """
+ return (id(string), string)
+
+ def add_custom_splits(
+ self, string: str, custom_splits: Iterable[CustomSplit]
+ ) -> None:
+ """Custom Split Map Setter Method
+
+ Side Effects:
+ Adds a mapping from @string to the custom splits @custom_splits.
+ """
+ key = self._get_key(string)
+ self._CUSTOM_SPLIT_MAP[key] = tuple(custom_splits)
+
+ def pop_custom_splits(self, string: str) -> List[CustomSplit]:
+ """Custom Split Map Getter Method
+
+ Returns:
+ * A list of the custom splits that are mapped to @string, if any
+ exist.
+ OR
+ * [], otherwise.
+
+ Side Effects:
+ Deletes the mapping between @string and its associated custom
+ splits (which are returned to the caller).
+ """
+ key = self._get_key(string)
+
+ custom_splits = self._CUSTOM_SPLIT_MAP[key]
+ del self._CUSTOM_SPLIT_MAP[key]
+
+ return list(custom_splits)
+
+ def has_custom_splits(self, string: str) -> bool:
+ """
+ Returns:
+ True iff @string is associated with a set of custom splits.
+ """
+ key = self._get_key(string)
+ return key in self._CUSTOM_SPLIT_MAP
+
+
+class StringMerger(StringTransformer, CustomSplitMapMixin):
+ """StringTransformer that merges strings together.
+
+ Requirements:
+ (A) The line contains adjacent strings such that ALL of the validation checks
+ listed in StringMerger._validate_msg(...)'s docstring pass.
+ OR
+ (B) The line contains a string which uses line continuation backslashes.
+
+ Transformations:
+ Depending on which of the two requirements above were met, either:
+
+ (A) The string group associated with the target string is merged.
+ OR
+ (B) All line-continuation backslashes are removed from the target string.
+
+ Collaborations:
+ StringMerger provides custom split information to StringSplitter.
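+
+ Example (an illustrative addition, not from the original docstring):
+ the adjacent strings in `x = "Hi " "there."` are merged into the
+ single leaf `"Hi there."`, and a CustomSplit is recorded so that
+ StringSplitter can later re-split at the original boundary.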
+ """ + + def do_match(self, line: Line) -> TMatchResult: + LL = line.leaves + + is_valid_index = is_valid_index_factory(LL) + + for i, leaf in enumerate(LL): + if ( + leaf.type == token.STRING + and is_valid_index(i + 1) + and LL[i + 1].type == token.STRING + ): + return Ok(i) + + if leaf.type == token.STRING and "\\\n" in leaf.value: + return Ok(i) + + return TErr("This line has no strings that need merging.") + + def do_transform(self, line: Line, string_idx: int) -> Iterator[TResult[Line]]: + new_line = line + rblc_result = self._remove_backslash_line_continuation_chars( + new_line, string_idx + ) + if isinstance(rblc_result, Ok): + new_line = rblc_result.ok() + + msg_result = self._merge_string_group(new_line, string_idx) + if isinstance(msg_result, Ok): + new_line = msg_result.ok() + + if isinstance(rblc_result, Err) and isinstance(msg_result, Err): + msg_cant_transform = msg_result.err() + rblc_cant_transform = rblc_result.err() + cant_transform = CannotTransform( + "StringMerger failed to merge any strings in this line." + ) + + # Chain the errors together using `__cause__`. + msg_cant_transform.__cause__ = rblc_cant_transform + cant_transform.__cause__ = msg_cant_transform + + yield Err(cant_transform) + else: + yield Ok(new_line) + + @staticmethod + def _remove_backslash_line_continuation_chars( + line: Line, string_idx: int + ) -> TResult[Line]: + """ + Merge strings that were split across multiple lines using + line-continuation backslashes. + + Returns: + Ok(new_line), if @line contains backslash line-continuation + characters. + OR + Err(CannotTransform), otherwise. + """ + LL = line.leaves + + string_leaf = LL[string_idx] + if not ( + string_leaf.type == token.STRING + and "\\\n" in string_leaf.value + and not has_triple_quotes(string_leaf.value) + ): + return TErr( + f"String leaf {string_leaf} does not contain any backslash line" + " continuation characters." + ) + + new_line = line.clone() + new_line.comments = line.comments.copy() + append_leaves(new_line, line, LL) + + new_string_leaf = new_line.leaves[string_idx] + new_string_leaf.value = new_string_leaf.value.replace("\\\n", "") + + return Ok(new_line) + + def _merge_string_group(self, line: Line, string_idx: int) -> TResult[Line]: + """ + Merges string group (i.e. set of adjacent strings) where the first + string in the group is `line.leaves[string_idx]`. + + Returns: + Ok(new_line), if ALL of the validation checks found in + __validate_msg(...) pass. + OR + Err(CannotTransform), otherwise. + """ + LL = line.leaves + + is_valid_index = is_valid_index_factory(LL) + + vresult = self._validate_msg(line, string_idx) + if isinstance(vresult, Err): + return vresult + + # If the string group is wrapped inside an Atom node, we must make sure + # to later replace that Atom with our new (merged) string leaf. + atom_node = LL[string_idx].parent + + # We will place BREAK_MARK in between every two substrings that we + # merge. We will then later go through our final result and use the + # various instances of BREAK_MARK we find to add the right values to + # the custom split map. + BREAK_MARK = "@@@@@ BLACK BREAKPOINT MARKER @@@@@" + + QUOTE = LL[string_idx].value[-1] + + def make_naked(string: str, string_prefix: str) -> str: + """Strip @string (i.e. 
make it a "naked" string) + + Pre-conditions: + * assert_is_leaf_string(@string) + + Returns: + A string that is identical to @string except that + @string_prefix has been stripped, the surrounding QUOTE + characters have been removed, and any remaining QUOTE + characters have been escaped. + """ + assert_is_leaf_string(string) + + RE_EVEN_BACKSLASHES = r"(?:(?= 0 + ), "Logic error while filling the custom string breakpoint cache." + + temp_string = temp_string[mark_idx + len(BREAK_MARK) :] + breakpoint_idx = mark_idx + (len(prefix) if has_prefix else 0) + 1 + custom_splits.append(CustomSplit(has_prefix, breakpoint_idx)) + + string_leaf = Leaf(token.STRING, S_leaf.value.replace(BREAK_MARK, "")) + + if atom_node is not None: + # If not all children of the atom node are merged (this can happen + # when there is a standalone comment in the middle) ... + if non_string_idx - string_idx < len(atom_node.children): + # We need to replace the old STRING leaves with the new string leaf. + first_child_idx = LL[string_idx].remove() + for idx in range(string_idx + 1, non_string_idx): + LL[idx].remove() + if first_child_idx is not None: + atom_node.insert_child(first_child_idx, string_leaf) + else: + # Else replace the atom node with the new string leaf. + replace_child(atom_node, string_leaf) + + # Build the final line ('new_line') that this method will later return. + new_line = line.clone() + for i, leaf in enumerate(LL): + if i == string_idx: + new_line.append(string_leaf) + + if string_idx <= i < string_idx + num_of_strings: + for comment_leaf in line.comments_after(LL[i]): + new_line.append(comment_leaf, preformatted=True) + continue + + append_leaves(new_line, line, [leaf]) + + self.add_custom_splits(string_leaf.value, custom_splits) + return Ok(new_line) + + @staticmethod + def _validate_msg(line: Line, string_idx: int) -> TResult[None]: + """Validate (M)erge (S)tring (G)roup + + Transform-time string validation logic for __merge_string_group(...). + + Returns: + * Ok(None), if ALL validation checks (listed below) pass. + OR + * Err(CannotTransform), if any of the following are true: + - The target string group does not contain ANY stand-alone comments. + - The target string is not in a string group (i.e. it has no + adjacent strings). + - The string group has more than one inline comment. + - The string group has an inline comment that appears to be a pragma. + - The set of all string prefixes in the string group is of + length greater than one and is not equal to {"", "f"}. + - The string group consists of raw strings. + """ + # We first check for "inner" stand-alone comments (i.e. stand-alone + # comments that have a string leaf before them AND after them). + for inc in [1, -1]: + i = string_idx + found_sa_comment = False + is_valid_index = is_valid_index_factory(line.leaves) + while is_valid_index(i) and line.leaves[i].type in [ + token.STRING, + STANDALONE_COMMENT, + ]: + if line.leaves[i].type == STANDALONE_COMMENT: + found_sa_comment = True + elif found_sa_comment: + return TErr( + "StringMerger does NOT merge string groups which contain " + "stand-alone comments." + ) + + i += inc + + num_of_inline_string_comments = 0 + set_of_prefixes = set() + num_of_strings = 0 + for leaf in line.leaves[string_idx:]: + if leaf.type != token.STRING: + # If the string group is trailed by a comma, we count the + # comments trailing the comma to be one of the string group's + # comments. 
+ if leaf.type == token.COMMA and id(leaf) in line.comments: + num_of_inline_string_comments += 1 + break + + if has_triple_quotes(leaf.value): + return TErr("StringMerger does NOT merge multiline strings.") + + num_of_strings += 1 + prefix = get_string_prefix(leaf.value).lower() + if "r" in prefix: + return TErr("StringMerger does NOT merge raw strings.") + + set_of_prefixes.add(prefix) + + if id(leaf) in line.comments: + num_of_inline_string_comments += 1 + if contains_pragma_comment(line.comments[id(leaf)]): + return TErr("Cannot merge strings which have pragma comments.") + + if num_of_strings < 2: + return TErr( + f"Not enough strings to merge (num_of_strings={num_of_strings})." + ) + + if num_of_inline_string_comments > 1: + return TErr( + f"Too many inline string comments ({num_of_inline_string_comments})." + ) + + if len(set_of_prefixes) > 1 and set_of_prefixes != {"", "f"}: + return TErr(f"Too many different prefixes ({set_of_prefixes}).") + + return Ok(None) + + +class StringParenStripper(StringTransformer): + """StringTransformer that strips surrounding parentheses from strings. + + Requirements: + The line contains a string which is surrounded by parentheses and: + - The target string is NOT the only argument to a function call. + - The target string is NOT a "pointless" string. + - If the target string contains a PERCENT, the brackets are not + preceded or followed by an operator with higher precedence than + PERCENT. + + Transformations: + The parentheses mentioned in the 'Requirements' section are stripped. + + Collaborations: + StringParenStripper has its own inherent usefulness, but it is also + relied on to clean up the parentheses created by StringParenWrapper (in + the event that they are no longer needed). + """ + + def do_match(self, line: Line) -> TMatchResult: + LL = line.leaves + + is_valid_index = is_valid_index_factory(LL) + + for idx, leaf in enumerate(LL): + # Should be a string... + if leaf.type != token.STRING: + continue + + # If this is a "pointless" string... + if ( + leaf.parent + and leaf.parent.parent + and leaf.parent.parent.type == syms.simple_stmt + ): + continue + + # Should be preceded by a non-empty LPAR... + if ( + not is_valid_index(idx - 1) + or LL[idx - 1].type != token.LPAR + or is_empty_lpar(LL[idx - 1]) + ): + continue + + # That LPAR should NOT be preceded by a function name or a closing + # bracket (which could be a function which returns a function or a + # list/dictionary that contains a function)... + if is_valid_index(idx - 2) and ( + LL[idx - 2].type == token.NAME or LL[idx - 2].type in CLOSING_BRACKETS + ): + continue + + string_idx = idx + + # Skip the string trailer, if one exists. + string_parser = StringParser() + next_idx = string_parser.parse(LL, string_idx) + + # if the leaves in the parsed string include a PERCENT, we need to + # make sure the initial LPAR is NOT preceded by an operator with + # higher or equal precedence to PERCENT + if is_valid_index(idx - 2): + # mypy can't quite follow unless we name this + before_lpar = LL[idx - 2] + if token.PERCENT in {leaf.type for leaf in LL[idx - 1 : next_idx]} and ( + ( + before_lpar.type + in { + token.STAR, + token.AT, + token.SLASH, + token.DOUBLESLASH, + token.PERCENT, + token.TILDE, + token.DOUBLESTAR, + token.AWAIT, + token.LSQB, + token.LPAR, + } + ) + or ( + # only unary PLUS/MINUS + before_lpar.parent + and before_lpar.parent.type == syms.factor + and (before_lpar.type in {token.PLUS, token.MINUS}) + ) + ): + continue + + # Should be followed by a non-empty RPAR... 
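+ # (Illustrative example: in `x = ("some string")`, the LPAR is
+ # preceded by `=` and the RPAR ends the line, so every check in
+ # this matcher passes and the parentheses get stripped.)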
+ if ( + is_valid_index(next_idx) + and LL[next_idx].type == token.RPAR + and not is_empty_rpar(LL[next_idx]) + ): + # That RPAR should NOT be followed by anything with higher + # precedence than PERCENT + if is_valid_index(next_idx + 1) and LL[next_idx + 1].type in { + token.DOUBLESTAR, + token.LSQB, + token.LPAR, + token.DOT, + }: + continue + + return Ok(string_idx) + + return TErr("This line has no strings wrapped in parens.") + + def do_transform(self, line: Line, string_idx: int) -> Iterator[TResult[Line]]: + LL = line.leaves + + string_parser = StringParser() + rpar_idx = string_parser.parse(LL, string_idx) + + for leaf in (LL[string_idx - 1], LL[rpar_idx]): + if line.comments_after(leaf): + yield TErr( + "Will not strip parentheses which have comments attached to them." + ) + return + + new_line = line.clone() + new_line.comments = line.comments.copy() + try: + append_leaves(new_line, line, LL[: string_idx - 1]) + except BracketMatchError: + # HACK: I believe there is currently a bug somewhere in + # right_hand_split() that is causing brackets to not be tracked + # properly by a shared BracketTracker. + append_leaves(new_line, line, LL[: string_idx - 1], preformatted=True) + + string_leaf = Leaf(token.STRING, LL[string_idx].value) + LL[string_idx - 1].remove() + replace_child(LL[string_idx], string_leaf) + new_line.append(string_leaf) + + append_leaves( + new_line, line, LL[string_idx + 1 : rpar_idx] + LL[rpar_idx + 1 :] + ) + + LL[rpar_idx].remove() + + yield Ok(new_line) + + +class BaseStringSplitter(StringTransformer): + """ + Abstract class for StringTransformers which transform a Line's strings by splitting + them or placing them on their own lines where necessary to avoid going over + the configured line length. + + Requirements: + * The target string value is responsible for the line going over the + line length limit. It follows that after all of black's other line + split methods have been exhausted, this line (or one of the resulting + lines after all line splits are performed) would still be over the + line_length limit unless we split this string. + AND + * The target string is NOT a "pointless" string (i.e. a string that has + no parent or siblings). + AND + * The target string is not followed by an inline comment that appears + to be a pragma. + AND + * The target string is not a multiline (i.e. triple-quote) string. + """ + + STRING_OPERATORS: Final = [ + token.EQEQUAL, + token.GREATER, + token.GREATEREQUAL, + token.LESS, + token.LESSEQUAL, + token.NOTEQUAL, + token.PERCENT, + token.PLUS, + token.STAR, + ] + + @abstractmethod + def do_splitter_match(self, line: Line) -> TMatchResult: + """ + BaseStringSplitter asks its clients to override this method instead of + `StringTransformer.do_match(...)`. + + Follows the same protocol as `StringTransformer.do_match(...)`. + + Refer to `help(StringTransformer.do_match)` for more information. + """ + + def do_match(self, line: Line) -> TMatchResult: + match_result = self.do_splitter_match(line) + if isinstance(match_result, Err): + return match_result + + string_idx = match_result.ok() + vresult = self._validate(line, string_idx) + if isinstance(vresult, Err): + return vresult + + return match_result + + def _validate(self, line: Line, string_idx: int) -> TResult[None]: + """ + Checks that @line meets all of the requirements listed in this classes' + docstring. Refer to `help(BaseStringSplitter)` for a detailed + description of those requirements. + + Returns: + * Ok(None), if ALL of the requirements are met. 
+ OR + * Err(CannotTransform), if ANY of the requirements are NOT met. + """ + LL = line.leaves + + string_leaf = LL[string_idx] + + max_string_length = self._get_max_string_length(line, string_idx) + if len(string_leaf.value) <= max_string_length: + return TErr( + "The string itself is not what is causing this line to be too long." + ) + + if not string_leaf.parent or [L.type for L in string_leaf.parent.children] == [ + token.STRING, + token.NEWLINE, + ]: + return TErr( + f"This string ({string_leaf.value}) appears to be pointless (i.e. has" + " no parent)." + ) + + if id(line.leaves[string_idx]) in line.comments and contains_pragma_comment( + line.comments[id(line.leaves[string_idx])] + ): + return TErr( + "Line appears to end with an inline pragma comment. Splitting the line" + " could modify the pragma's behavior." + ) + + if has_triple_quotes(string_leaf.value): + return TErr("We cannot split multiline strings.") + + return Ok(None) + + def _get_max_string_length(self, line: Line, string_idx: int) -> int: + """ + Calculates the max string length used when attempting to determine + whether or not the target string is responsible for causing the line to + go over the line length limit. + + WARNING: This method is tightly coupled to both StringSplitter and + (especially) StringParenWrapper. There is probably a better way to + accomplish what is being done here. + + Returns: + max_string_length: such that `line.leaves[string_idx].value > + max_string_length` implies that the target string IS responsible + for causing this line to exceed the line length limit. + """ + LL = line.leaves + + is_valid_index = is_valid_index_factory(LL) + + # We use the shorthand "WMA4" in comments to abbreviate "We must + # account for". When giving examples, we use STRING to mean some/any + # valid string. + # + # Finally, we use the following convenience variables: + # + # P: The leaf that is before the target string leaf. + # N: The leaf that is after the target string leaf. + # NN: The leaf that is after N. + + # WMA4 the whitespace at the beginning of the line. + offset = line.depth * 4 + + if is_valid_index(string_idx - 1): + p_idx = string_idx - 1 + if ( + LL[string_idx - 1].type == token.LPAR + and LL[string_idx - 1].value == "" + and string_idx >= 2 + ): + # If the previous leaf is an empty LPAR placeholder, we should skip it. + p_idx -= 1 + + P = LL[p_idx] + if P.type in self.STRING_OPERATORS: + # WMA4 a space and a string operator (e.g. `+ STRING` or `== STRING`). + offset += len(str(P)) + 1 + + if P.type == token.COMMA: + # WMA4 a space, a comma, and a closing bracket [e.g. `), STRING`]. + offset += 3 + + if P.type in [token.COLON, token.EQUAL, token.PLUSEQUAL, token.NAME]: + # This conditional branch is meant to handle dictionary keys, + # variable assignments, 'return STRING' statement lines, and + # 'else STRING' ternary expression lines. + + # WMA4 a single space. + offset += 1 + + # WMA4 the lengths of any leaves that came before that space, + # but after any closing bracket before that space. + for leaf in reversed(LL[: p_idx + 1]): + offset += len(str(leaf)) + if leaf.type in CLOSING_BRACKETS: + break + + if is_valid_index(string_idx + 1): + N = LL[string_idx + 1] + if N.type == token.RPAR and N.value == "" and len(LL) > string_idx + 2: + # If the next leaf is an empty RPAR placeholder, we should skip it. + N = LL[string_idx + 2] + + if N.type == token.COMMA: + # WMA4 a single comma at the end of the string (e.g `STRING,`). 
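+ # (Illustrative arithmetic, not from the original: for a line like
+ # `x = STRING,` at depth 1, the indent, the `x = ` prefix, and this
+ # trailing comma all add to `offset`, and line_length - offset is
+ # what the string itself may use.)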
+ offset += 1 + + if is_valid_index(string_idx + 2): + NN = LL[string_idx + 2] + + if N.type == token.DOT and NN.type == token.NAME: + # This conditional branch is meant to handle method calls invoked + # off of a string literal up to and including the LPAR character. + + # WMA4 the '.' character. + offset += 1 + + if ( + is_valid_index(string_idx + 3) + and LL[string_idx + 3].type == token.LPAR + ): + # WMA4 the left parenthesis character. + offset += 1 + + # WMA4 the length of the method's name. + offset += len(NN.value) + + has_comments = False + for comment_leaf in line.comments_after(LL[string_idx]): + if not has_comments: + has_comments = True + # WMA4 two spaces before the '#' character. + offset += 2 + + # WMA4 the length of the inline comment. + offset += len(comment_leaf.value) + + max_string_length = self.line_length - offset + return max_string_length + + @staticmethod + def _prefer_paren_wrap_match(LL: List[Leaf]) -> Optional[int]: + """ + Returns: + string_idx such that @LL[string_idx] is equal to our target (i.e. + matched) string, if this line matches the "prefer paren wrap" statement + requirements listed in the 'Requirements' section of the StringParenWrapper + class's docstring. + OR + None, otherwise. + """ + # The line must start with a string. + if LL[0].type != token.STRING: + return None + + matching_nodes = [ + syms.listmaker, + syms.dictsetmaker, + syms.testlist_gexp, + ] + # If the string is an immediate child of a list/set/tuple literal... + if ( + parent_type(LL[0]) in matching_nodes + or parent_type(LL[0].parent) in matching_nodes + ): + # And the string is surrounded by commas (or is the first/last child)... + prev_sibling = LL[0].prev_sibling + next_sibling = LL[0].next_sibling + if ( + not prev_sibling + and not next_sibling + and parent_type(LL[0]) == syms.atom + ): + # If it's an atom string, we need to check the parent atom's siblings. + parent = LL[0].parent + assert parent is not None # For type checkers. + prev_sibling = parent.prev_sibling + next_sibling = parent.next_sibling + if (not prev_sibling or prev_sibling.type == token.COMMA) and ( + not next_sibling or next_sibling.type == token.COMMA + ): + return 0 + + return None + + +def iter_fexpr_spans(s: str) -> Iterator[Tuple[int, int]]: + """ + Yields spans corresponding to expressions in a given f-string. + Spans are half-open ranges (left inclusive, right exclusive). + Assumes the input string is a valid f-string, but will not crash if the input + string is invalid. + """ + stack: List[int] = [] # our curly paren stack + i = 0 + while i < len(s): + if s[i] == "{": + # if we're in a string part of the f-string, ignore escaped curly braces + if not stack and i + 1 < len(s) and s[i + 1] == "{": + i += 2 + continue + stack.append(i) + i += 1 + continue + + if s[i] == "}": + if not stack: + i += 1 + continue + j = stack.pop() + # we've made it back out of the expression! 
yield the span + if not stack: + yield (j, i + 1) + i += 1 + continue + + # if we're in an expression part of the f-string, fast forward through strings + # note that backslashes are not legal in the expression portion of f-strings + if stack: + delim = None + if s[i : i + 3] in ("'''", '"""'): + delim = s[i : i + 3] + elif s[i] in ("'", '"'): + delim = s[i] + if delim: + i += len(delim) + while i < len(s) and s[i : i + len(delim)] != delim: + i += 1 + i += len(delim) + continue + i += 1 + + +def fstring_contains_expr(s: str) -> bool: + return any(iter_fexpr_spans(s)) + + +class StringSplitter(BaseStringSplitter, CustomSplitMapMixin): + """ + StringTransformer that splits "atom" strings (i.e. strings which exist on + lines by themselves). + + Requirements: + * The line consists ONLY of a single string (possibly prefixed by a + string operator [e.g. '+' or '==']), MAYBE a string trailer, and MAYBE + a trailing comma. + AND + * All of the requirements listed in BaseStringSplitter's docstring. + + Transformations: + The string mentioned in the 'Requirements' section is split into as + many substrings as necessary to adhere to the configured line length. + + In the final set of substrings, no substring should be smaller than + MIN_SUBSTR_SIZE characters. + + The string will ONLY be split on spaces (i.e. each new substring should + start with a space). Note that the string will NOT be split on a space + which is escaped with a backslash. + + If the string is an f-string, it will NOT be split in the middle of an + f-expression (e.g. in f"FooBar: {foo() if x else bar()}", {foo() if x + else bar()} is an f-expression). + + If the string that is being split has an associated set of custom split + records and those custom splits will NOT result in any line going over + the configured line length, those custom splits are used. Otherwise the + string is split as late as possible (from left-to-right) while still + adhering to the transformation rules listed above. + + Collaborations: + StringSplitter relies on StringMerger to construct the appropriate + CustomSplit objects and add them to the custom split map. + """ + + MIN_SUBSTR_SIZE: Final = 6 + + def do_splitter_match(self, line: Line) -> TMatchResult: + LL = line.leaves + + if self._prefer_paren_wrap_match(LL) is not None: + return TErr("Line needs to be wrapped in parens first.") + + is_valid_index = is_valid_index_factory(LL) + + idx = 0 + + # The first two leaves MAY be the 'not in' keywords... + if ( + is_valid_index(idx) + and is_valid_index(idx + 1) + and [LL[idx].type, LL[idx + 1].type] == [token.NAME, token.NAME] + and str(LL[idx]) + str(LL[idx + 1]) == "not in" + ): + idx += 2 + # Else the first leaf MAY be a string operator symbol or the 'in' keyword... + elif is_valid_index(idx) and ( + LL[idx].type in self.STRING_OPERATORS + or LL[idx].type == token.NAME + and str(LL[idx]) == "in" + ): + idx += 1 + + # The next/first leaf MAY be an empty LPAR... + if is_valid_index(idx) and is_empty_lpar(LL[idx]): + idx += 1 + + # The next/first leaf MUST be a string... + if not is_valid_index(idx) or LL[idx].type != token.STRING: + return TErr("Line does not start with a string.") + + string_idx = idx + + # Skip the string trailer, if one exists. + string_parser = StringParser() + idx = string_parser.parse(LL, string_idx) + + # That string MAY be followed by an empty RPAR... + if is_valid_index(idx) and is_empty_rpar(LL[idx]): + idx += 1 + + # That string / empty RPAR leaf MAY be followed by a comma... 
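+ # (Illustrative examples of lines this matcher accepts, assuming they
+ # are too long as-is: `"some string"`, `== "some string"`, and
+ # `"%s foo" % (bar,),`.)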
+ if is_valid_index(idx) and LL[idx].type == token.COMMA: + idx += 1 + + # But no more leaves are allowed... + if is_valid_index(idx): + return TErr("This line does not end with a string.") + + return Ok(string_idx) + + def do_transform(self, line: Line, string_idx: int) -> Iterator[TResult[Line]]: + LL = line.leaves + + QUOTE = LL[string_idx].value[-1] + + is_valid_index = is_valid_index_factory(LL) + insert_str_child = insert_str_child_factory(LL[string_idx]) + + prefix = get_string_prefix(LL[string_idx].value).lower() + + # We MAY choose to drop the 'f' prefix from substrings that don't + # contain any f-expressions, but ONLY if the original f-string + # contains at least one f-expression. Otherwise, we will alter the AST + # of the program. + drop_pointless_f_prefix = ("f" in prefix) and fstring_contains_expr( + LL[string_idx].value + ) + + first_string_line = True + + string_op_leaves = self._get_string_operator_leaves(LL) + string_op_leaves_length = ( + sum(len(str(prefix_leaf)) for prefix_leaf in string_op_leaves) + 1 + if string_op_leaves + else 0 + ) + + def maybe_append_string_operators(new_line: Line) -> None: + """ + Side Effects: + If @line starts with a string operator and this is the first + line we are constructing, this function appends the string + operator to @new_line and replaces the old string operator leaf + in the node structure. Otherwise this function does nothing. + """ + maybe_prefix_leaves = string_op_leaves if first_string_line else [] + for i, prefix_leaf in enumerate(maybe_prefix_leaves): + replace_child(LL[i], prefix_leaf) + new_line.append(prefix_leaf) + + ends_with_comma = ( + is_valid_index(string_idx + 1) and LL[string_idx + 1].type == token.COMMA + ) + + def max_last_string() -> int: + """ + Returns: + The max allowed length of the string value used for the last + line we will construct. + """ + result = self.line_length + result -= line.depth * 4 + result -= 1 if ends_with_comma else 0 + result -= string_op_leaves_length + return result + + # --- Calculate Max Break Index (for string value) + # We start with the line length limit + max_break_idx = self.line_length + # The last index of a string of length N is N-1. + max_break_idx -= 1 + # Leading whitespace is not present in the string value (e.g. Leaf.value). + max_break_idx -= line.depth * 4 + if max_break_idx < 0: + yield TErr( + f"Unable to split {LL[string_idx].value} at such high of a line depth:" + f" {line.depth}" + ) + return + + # Check if StringMerger registered any custom splits. + custom_splits = self.pop_custom_splits(LL[string_idx].value) + # We use them ONLY if none of them would produce lines that exceed the + # line limit. + use_custom_breakpoints = bool( + custom_splits + and all(csplit.break_idx <= max_break_idx for csplit in custom_splits) + ) + + # Temporary storage for the remaining chunk of the string line that + # can't fit onto the line currently being constructed. + rest_value = LL[string_idx].value + + def more_splits_should_be_made() -> bool: + """ + Returns: + True iff `rest_value` (the remaining string value from the last + split), should be split again. 
+ """ + if use_custom_breakpoints: + return len(custom_splits) > 1 + else: + return len(rest_value) > max_last_string() + + string_line_results: List[Ok[Line]] = [] + while more_splits_should_be_made(): + if use_custom_breakpoints: + # Custom User Split (manual) + csplit = custom_splits.pop(0) + break_idx = csplit.break_idx + else: + # Algorithmic Split (automatic) + max_bidx = max_break_idx - string_op_leaves_length + maybe_break_idx = self._get_break_idx(rest_value, max_bidx) + if maybe_break_idx is None: + # If we are unable to algorithmically determine a good split + # and this string has custom splits registered to it, we + # fall back to using them--which means we have to start + # over from the beginning. + if custom_splits: + rest_value = LL[string_idx].value + string_line_results = [] + first_string_line = True + use_custom_breakpoints = True + continue + + # Otherwise, we stop splitting here. + break + + break_idx = maybe_break_idx + + # --- Construct `next_value` + next_value = rest_value[:break_idx] + QUOTE + + # HACK: The following 'if' statement is a hack to fix the custom + # breakpoint index in the case of either: (a) substrings that were + # f-strings but will have the 'f' prefix removed OR (b) substrings + # that were not f-strings but will now become f-strings because of + # redundant use of the 'f' prefix (i.e. none of the substrings + # contain f-expressions but one or more of them had the 'f' prefix + # anyway; in which case, we will prepend 'f' to _all_ substrings). + # + # There is probably a better way to accomplish what is being done + # here... + # + # If this substring is an f-string, we _could_ remove the 'f' + # prefix, and the current custom split did NOT originally use a + # prefix... + if ( + next_value != self._normalize_f_string(next_value, prefix) + and use_custom_breakpoints + and not csplit.has_prefix + ): + # Then `csplit.break_idx` will be off by one after removing + # the 'f' prefix. + break_idx += 1 + next_value = rest_value[:break_idx] + QUOTE + + if drop_pointless_f_prefix: + next_value = self._normalize_f_string(next_value, prefix) + + # --- Construct `next_leaf` + next_leaf = Leaf(token.STRING, next_value) + insert_str_child(next_leaf) + self._maybe_normalize_string_quotes(next_leaf) + + # --- Construct `next_line` + next_line = line.clone() + maybe_append_string_operators(next_line) + next_line.append(next_leaf) + string_line_results.append(Ok(next_line)) + + rest_value = prefix + QUOTE + rest_value[break_idx:] + first_string_line = False + + yield from string_line_results + + if drop_pointless_f_prefix: + rest_value = self._normalize_f_string(rest_value, prefix) + + rest_leaf = Leaf(token.STRING, rest_value) + insert_str_child(rest_leaf) + + # NOTE: I could not find a test case that verifies that the following + # line is actually necessary, but it seems to be. Otherwise we risk + # not normalizing the last substring, right? + self._maybe_normalize_string_quotes(rest_leaf) + + last_line = line.clone() + maybe_append_string_operators(last_line) + + # If there are any leaves to the right of the target string... + if is_valid_index(string_idx + 1): + # We use `temp_value` here to determine how long the last line + # would be if we were to append all the leaves to the right of the + # target string to the last string line. + temp_value = rest_value + for leaf in LL[string_idx + 1 :]: + temp_value += str(leaf) + if leaf.type == token.LPAR: + break + + # Try to fit them all on the same line with the last substring... 
+ if ( + len(temp_value) <= max_last_string() + or LL[string_idx + 1].type == token.COMMA + ): + last_line.append(rest_leaf) + append_leaves(last_line, line, LL[string_idx + 1 :]) + yield Ok(last_line) + # Otherwise, place the last substring on one line and everything + # else on a line below that... + else: + last_line.append(rest_leaf) + yield Ok(last_line) + + non_string_line = line.clone() + append_leaves(non_string_line, line, LL[string_idx + 1 :]) + yield Ok(non_string_line) + # Else the target string was the last leaf... + else: + last_line.append(rest_leaf) + last_line.comments = line.comments.copy() + yield Ok(last_line) + + def _iter_nameescape_slices(self, string: str) -> Iterator[Tuple[Index, Index]]: + """ + Yields: + All ranges of @string which, if @string were to be split there, + would result in the splitting of an \\N{...} expression (which is NOT + allowed). + """ + # True - the previous backslash was unescaped + # False - the previous backslash was escaped *or* there was no backslash + previous_was_unescaped_backslash = False + it = iter(enumerate(string)) + for idx, c in it: + if c == "\\": + previous_was_unescaped_backslash = not previous_was_unescaped_backslash + continue + if not previous_was_unescaped_backslash or c != "N": + previous_was_unescaped_backslash = False + continue + previous_was_unescaped_backslash = False + + begin = idx - 1 # the position of backslash before \N{...} + for idx, c in it: + if c == "}": + end = idx + break + else: + # malformed nameescape expression? + # should have been detected by AST parsing earlier... + raise RuntimeError(f"{self.__class__.__name__} LOGIC ERROR!") + yield begin, end + + def _iter_fexpr_slices(self, string: str) -> Iterator[Tuple[Index, Index]]: + """ + Yields: + All ranges of @string which, if @string were to be split there, + would result in the splitting of an f-expression (which is NOT + allowed). + """ + if "f" not in get_string_prefix(string).lower(): + return + yield from iter_fexpr_spans(string) + + def _get_illegal_split_indices(self, string: str) -> Set[Index]: + illegal_indices: Set[Index] = set() + iterators = [ + self._iter_fexpr_slices(string), + self._iter_nameescape_slices(string), + ] + for it in iterators: + for begin, end in it: + illegal_indices.update(range(begin, end + 1)) + return illegal_indices + + def _get_break_idx(self, string: str, max_break_idx: int) -> Optional[int]: + """ + This method contains the algorithm that StringSplitter uses to + determine which character to split each string at. + + Args: + @string: The substring that we are attempting to split. + @max_break_idx: The ideal break index. We will return this value if it + meets all the necessary conditions. In the likely event that it + doesn't we will try to find the closest index BELOW @max_break_idx + that does. If that fails, we will expand our search by also + considering all valid indices ABOVE @max_break_idx. + + Pre-Conditions: + * assert_is_leaf_string(@string) + * 0 <= @max_break_idx < len(@string) + + Returns: + break_idx, if an index is able to be found that meets all of the + conditions listed in the 'Transformations' section of this classes' + docstring. + OR + None, otherwise. 
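+
+ Example (an illustrative addition): for the leaf value
+ `"hello world foo"` with @max_break_idx == 9, the space at index 6
+ is the closest index below the maximum that passes every check.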
+ """ + is_valid_index = is_valid_index_factory(string) + + assert is_valid_index(max_break_idx) + assert_is_leaf_string(string) + + _illegal_split_indices = self._get_illegal_split_indices(string) + + def breaks_unsplittable_expression(i: Index) -> bool: + """ + Returns: + True iff returning @i would result in the splitting of an + unsplittable expression (which is NOT allowed). + """ + return i in _illegal_split_indices + + def passes_all_checks(i: Index) -> bool: + """ + Returns: + True iff ALL of the conditions listed in the 'Transformations' + section of this classes' docstring would be be met by returning @i. + """ + is_space = string[i] == " " + + is_not_escaped = True + j = i - 1 + while is_valid_index(j) and string[j] == "\\": + is_not_escaped = not is_not_escaped + j -= 1 + + is_big_enough = ( + len(string[i:]) >= self.MIN_SUBSTR_SIZE + and len(string[:i]) >= self.MIN_SUBSTR_SIZE + ) + return ( + is_space + and is_not_escaped + and is_big_enough + and not breaks_unsplittable_expression(i) + ) + + # First, we check all indices BELOW @max_break_idx. + break_idx = max_break_idx + while is_valid_index(break_idx - 1) and not passes_all_checks(break_idx): + break_idx -= 1 + + if not passes_all_checks(break_idx): + # If that fails, we check all indices ABOVE @max_break_idx. + # + # If we are able to find a valid index here, the next line is going + # to be longer than the specified line length, but it's probably + # better than doing nothing at all. + break_idx = max_break_idx + 1 + while is_valid_index(break_idx + 1) and not passes_all_checks(break_idx): + break_idx += 1 + + if not is_valid_index(break_idx) or not passes_all_checks(break_idx): + return None + + return break_idx + + def _maybe_normalize_string_quotes(self, leaf: Leaf) -> None: + if self.normalize_strings: + leaf.value = normalize_string_quotes(leaf.value) + + def _normalize_f_string(self, string: str, prefix: str) -> str: + """ + Pre-Conditions: + * assert_is_leaf_string(@string) + + Returns: + * If @string is an f-string that contains no f-expressions, we + return a string identical to @string except that the 'f' prefix + has been stripped and all double braces (i.e. '{{' or '}}') have + been normalized (i.e. turned into '{' or '}'). + OR + * Otherwise, we return @string. + """ + assert_is_leaf_string(string) + + if "f" in prefix and not fstring_contains_expr(string): + new_prefix = prefix.replace("f", "") + + temp = string[len(prefix) :] + temp = re.sub(r"\{\{", "{", temp) + temp = re.sub(r"\}\}", "}", temp) + new_string = temp + + return f"{new_prefix}{new_string}" + else: + return string + + def _get_string_operator_leaves(self, leaves: Iterable[Leaf]) -> List[Leaf]: + LL = list(leaves) + + string_op_leaves = [] + i = 0 + while LL[i].type in self.STRING_OPERATORS + [token.NAME]: + prefix_leaf = Leaf(LL[i].type, str(LL[i]).strip()) + string_op_leaves.append(prefix_leaf) + i += 1 + return string_op_leaves + + +class StringParenWrapper(BaseStringSplitter, CustomSplitMapMixin): + """ + StringTransformer that wraps strings in parens and then splits at the LPAR. + + Requirements: + All of the requirements listed in BaseStringSplitter's docstring in + addition to the requirements listed below: + + * The line is a return/yield statement, which returns/yields a string. + OR + * The line is part of a ternary expression (e.g. `x = y if cond else + z`) such that the line starts with `else `, where is + some string. + OR + * The line is an assert statement, which ends with a string. 
+ OR
+ * The line is an assignment statement (e.g. `x = <string>` or `x +=
+ <string>`) such that the variable is being assigned the value of some
+ string.
+ OR
+ * The line is a dictionary key assignment where some valid key is being
+ assigned the value of some string.
+ OR
+ * The line starts with an "atom" string that prefers to be wrapped in
+ parens. It is preferred to be wrapped when it is an immediate child of
+ a list/set/tuple literal, AND the string is surrounded by commas (or is
+ the first/last child).
+
+ Transformations:
+ The chosen string is wrapped in parentheses and then split at the LPAR.
+
+ We then have one line which ends with an LPAR and another line that
+ starts with the chosen string. The latter line is then split again at
+ the RPAR. This results in the RPAR (and possibly a trailing comma)
+ being placed on its own line.
+
+ NOTE: If any leaves exist to the right of the chosen string (except
+ for a trailing comma, which would be placed after the RPAR), those
+ leaves are placed inside the parentheses. In effect, the chosen
+ string is not necessarily being "wrapped" by parentheses. We can,
+ however, count on the LPAR being placed directly before the chosen
+ string.
+
+ In other words, StringParenWrapper creates "atom" strings. These
+ can then be split again by StringSplitter, if necessary.
+
+ Collaborations:
+ In the event that a string line split by StringParenWrapper is
+ changed such that it no longer needs to be given its own line,
+ StringParenWrapper relies on StringParenStripper to clean up the
+ parentheses it created.
+
+ For "atom" strings that prefer to be wrapped in parens, it requires
+ StringSplitter to hold the split until the string is wrapped in parens.
+ """
+
+ def do_splitter_match(self, line: Line) -> TMatchResult:
+ LL = line.leaves
+
+ if line.leaves[-1].type in OPENING_BRACKETS:
+ return TErr(
+ "Cannot wrap parens around a line that ends in an opening bracket."
+ )
+
+ string_idx = (
+ self._return_match(LL)
+ or self._else_match(LL)
+ or self._assert_match(LL)
+ or self._assign_match(LL)
+ or self._dict_match(LL)
+ or self._prefer_paren_wrap_match(LL)
+ )
+
+ if string_idx is not None:
+ string_value = line.leaves[string_idx].value
+ # If the string has no spaces...
+ if " " not in string_value:
+ # And will still violate the line length limit when split...
+ max_string_length = self.line_length - ((line.depth + 1) * 4)
+ if len(string_value) > max_string_length:
+ # And has no associated custom splits...
+ if not self.has_custom_splits(string_value):
+ # Then we should NOT put this string on its own line.
+ return TErr(
+ "We do not wrap long strings in parentheses when the"
+ " resultant line would still be over the specified line"
+ " length and can't be split further by StringSplitter."
+ )
+ return Ok(string_idx)
+
+ return TErr("This line does not contain any non-atomic strings.")
+
+ @staticmethod
+ def _return_match(LL: List[Leaf]) -> Optional[int]:
+ """
+ Returns:
+ string_idx such that @LL[string_idx] is equal to our target (i.e.
+ matched) string, if this line matches the return/yield statement
+ requirements listed in the 'Requirements' section of this class's
+ docstring.
+ OR
+ None, otherwise.
+ """
+ # If this line is part of a return/yield statement and the first leaf
+ # contains either the "return" or "yield" keywords...
+ if parent_type(LL[0]) in [syms.return_stmt, syms.yield_expr] and LL[
+ 0
+ ].value in ["return", "yield"]:
+ is_valid_index = is_valid_index_factory(LL)
+
+ idx = 2 if is_valid_index(1) and is_empty_par(LL[1]) else 1
+ # The next visible leaf MUST contain a string...
+ if is_valid_index(idx) and LL[idx].type == token.STRING:
+ return idx
+
+ return None
+
+ @staticmethod
+ def _else_match(LL: List[Leaf]) -> Optional[int]:
+ """
+ Returns:
+ string_idx such that @LL[string_idx] is equal to our target (i.e.
+ matched) string, if this line matches the ternary expression
+ requirements listed in the 'Requirements' section of this class's
+ docstring.
+ OR
+ None, otherwise.
+ """
+ # If this line is part of a ternary expression and the first leaf
+ # contains the "else" keyword...
+ if (
+ parent_type(LL[0]) == syms.test
+ and LL[0].type == token.NAME
+ and LL[0].value == "else"
+ ):
+ is_valid_index = is_valid_index_factory(LL)
+
+ idx = 2 if is_valid_index(1) and is_empty_par(LL[1]) else 1
+ # The next visible leaf MUST contain a string...
+ if is_valid_index(idx) and LL[idx].type == token.STRING:
+ return idx
+
+ return None
+
+ @staticmethod
+ def _assert_match(LL: List[Leaf]) -> Optional[int]:
+ """
+ Returns:
+ string_idx such that @LL[string_idx] is equal to our target (i.e.
+ matched) string, if this line matches the assert statement
+ requirements listed in the 'Requirements' section of this class's
+ docstring.
+ OR
+ None, otherwise.
+ """
+ # If this line is part of an assert statement and the first leaf
+ # contains the "assert" keyword...
+ if parent_type(LL[0]) == syms.assert_stmt and LL[0].value == "assert":
+ is_valid_index = is_valid_index_factory(LL)
+
+ for i, leaf in enumerate(LL):
+ # We MUST find a comma...
+ if leaf.type == token.COMMA:
+ idx = i + 2 if is_empty_par(LL[i + 1]) else i + 1
+
+ # That comma MUST be followed by a string...
+ if is_valid_index(idx) and LL[idx].type == token.STRING:
+ string_idx = idx
+
+ # Skip the string trailer, if one exists.
+ string_parser = StringParser()
+ idx = string_parser.parse(LL, string_idx)
+
+ # But no more leaves are allowed...
+ if not is_valid_index(idx):
+ return string_idx
+
+ return None
+
+ @staticmethod
+ def _assign_match(LL: List[Leaf]) -> Optional[int]:
+ """
+ Returns:
+ string_idx such that @LL[string_idx] is equal to our target (i.e.
+ matched) string, if this line matches the assignment statement
+ requirements listed in the 'Requirements' section of this class's
+ docstring.
+ OR
+ None, otherwise.
+ """
+ # If this line is part of an expression statement or is a function
+ # argument AND the first leaf contains a variable name...
+ if (
+ parent_type(LL[0]) in [syms.expr_stmt, syms.argument, syms.power]
+ and LL[0].type == token.NAME
+ ):
+ is_valid_index = is_valid_index_factory(LL)
+
+ for i, leaf in enumerate(LL):
+ # We MUST find either an '=' or '+=' symbol...
+ if leaf.type in [token.EQUAL, token.PLUSEQUAL]:
+ idx = i + 2 if is_empty_par(LL[i + 1]) else i + 1
+
+ # That symbol MUST be followed by a string...
+ if is_valid_index(idx) and LL[idx].type == token.STRING:
+ string_idx = idx
+
+ # Skip the string trailer, if one exists.
+ string_parser = StringParser()
+ idx = string_parser.parse(LL, string_idx)
+
+ # The next leaf MAY be a comma iff this line is part
+ # of a function argument...
+ if (
+ parent_type(LL[0]) == syms.argument
+ and is_valid_index(idx)
+ and LL[idx].type == token.COMMA
+ ):
+ idx += 1
+
+ # But no more leaves are allowed...
+ if not is_valid_index(idx): + return string_idx + + return None + + @staticmethod + def _dict_match(LL: List[Leaf]) -> Optional[int]: + """ + Returns: + string_idx such that @LL[string_idx] is equal to our target (i.e. + matched) string, if this line matches the dictionary key assignment + statement requirements listed in the 'Requirements' section of this + classes' docstring. + OR + None, otherwise. + """ + # If this line is apart of a dictionary key assignment... + if syms.dictsetmaker in [parent_type(LL[0]), parent_type(LL[0].parent)]: + is_valid_index = is_valid_index_factory(LL) + + for i, leaf in enumerate(LL): + # We MUST find a colon... + if leaf.type == token.COLON: + idx = i + 2 if is_empty_par(LL[i + 1]) else i + 1 + + # That colon MUST be followed by a string... + if is_valid_index(idx) and LL[idx].type == token.STRING: + string_idx = idx + + # Skip the string trailer, if one exists. + string_parser = StringParser() + idx = string_parser.parse(LL, string_idx) + + # That string MAY be followed by a comma... + if is_valid_index(idx) and LL[idx].type == token.COMMA: + idx += 1 + + # But no more leaves are allowed... + if not is_valid_index(idx): + return string_idx + + return None + + def do_transform(self, line: Line, string_idx: int) -> Iterator[TResult[Line]]: + LL = line.leaves + + is_valid_index = is_valid_index_factory(LL) + insert_str_child = insert_str_child_factory(LL[string_idx]) + + comma_idx = -1 + ends_with_comma = False + if LL[comma_idx].type == token.COMMA: + ends_with_comma = True + + leaves_to_steal_comments_from = [LL[string_idx]] + if ends_with_comma: + leaves_to_steal_comments_from.append(LL[comma_idx]) + + # --- First Line + first_line = line.clone() + left_leaves = LL[:string_idx] + + # We have to remember to account for (possibly invisible) LPAR and RPAR + # leaves that already wrapped the target string. If these leaves do + # exist, we will replace them with our own LPAR and RPAR leaves. + old_parens_exist = False + if left_leaves and left_leaves[-1].type == token.LPAR: + old_parens_exist = True + leaves_to_steal_comments_from.append(left_leaves[-1]) + left_leaves.pop() + + append_leaves(first_line, line, left_leaves) + + lpar_leaf = Leaf(token.LPAR, "(") + if old_parens_exist: + replace_child(LL[string_idx - 1], lpar_leaf) + else: + insert_str_child(lpar_leaf) + first_line.append(lpar_leaf) + + # We throw inline comments that were originally to the right of the + # target string to the top line. They will now be shown to the right of + # the LPAR. + for leaf in leaves_to_steal_comments_from: + for comment_leaf in line.comments_after(leaf): + first_line.append(comment_leaf, preformatted=True) + + yield Ok(first_line) + + # --- Middle (String) Line + # We only need to yield one (possibly too long) string line, since the + # `StringSplitter` will break it down further if necessary. + string_value = LL[string_idx].value + string_line = Line( + mode=line.mode, + depth=line.depth + 1, + inside_brackets=True, + should_split_rhs=line.should_split_rhs, + magic_trailing_comma=line.magic_trailing_comma, + ) + string_leaf = Leaf(token.STRING, string_value) + insert_str_child(string_leaf) + string_line.append(string_leaf) + + old_rpar_leaf = None + if is_valid_index(string_idx + 1): + right_leaves = LL[string_idx + 1 :] + if ends_with_comma: + right_leaves.pop() + + if old_parens_exist: + assert right_leaves and right_leaves[-1].type == token.RPAR, ( + "Apparently, old parentheses do NOT exist?!" 
+ f" (left_leaves={left_leaves}, right_leaves={right_leaves})" + ) + old_rpar_leaf = right_leaves.pop() + + append_leaves(string_line, line, right_leaves) + + yield Ok(string_line) + + # --- Last Line + last_line = line.clone() + last_line.bracket_tracker = first_line.bracket_tracker + + new_rpar_leaf = Leaf(token.RPAR, ")") + if old_rpar_leaf is not None: + replace_child(old_rpar_leaf, new_rpar_leaf) + else: + insert_str_child(new_rpar_leaf) + last_line.append(new_rpar_leaf) + + # If the target string ended with a comma, we place this comma to the + # right of the RPAR on the last line. + if ends_with_comma: + comma_leaf = Leaf(token.COMMA, ",") + replace_child(LL[comma_idx], comma_leaf) + last_line.append(comma_leaf) + + yield Ok(last_line) + + +class StringParser: + """ + A state machine that aids in parsing a string's "trailer", which can be + either non-existent, an old-style formatting sequence (e.g. `% varX` or `% + (varX, varY)`), or a method-call / attribute access (e.g. `.format(varX, + varY)`). + + NOTE: A new StringParser object MUST be instantiated for each string + trailer we need to parse. + + Examples: + We shall assume that `line` equals the `Line` object that corresponds + to the following line of python code: + ``` + x = "Some {}.".format("String") + some_other_string + ``` + + Furthermore, we will assume that `string_idx` is some index such that: + ``` + assert line.leaves[string_idx].value == "Some {}." + ``` + + The following code snippet then holds: + ``` + string_parser = StringParser() + idx = string_parser.parse(line.leaves, string_idx) + assert line.leaves[idx].type == token.PLUS + ``` + """ + + DEFAULT_TOKEN: Final = 20210605 + + # String Parser States + START: Final = 1 + DOT: Final = 2 + NAME: Final = 3 + PERCENT: Final = 4 + SINGLE_FMT_ARG: Final = 5 + LPAR: Final = 6 + RPAR: Final = 7 + DONE: Final = 8 + + # Lookup Table for Next State + _goto: Final[Dict[Tuple[ParserState, NodeType], ParserState]] = { + # A string trailer may start with '.' OR '%'. + (START, token.DOT): DOT, + (START, token.PERCENT): PERCENT, + (START, DEFAULT_TOKEN): DONE, + # A '.' MUST be followed by an attribute or method name. + (DOT, token.NAME): NAME, + # A method name MUST be followed by an '(', whereas an attribute name + # is the last symbol in the string trailer. + (NAME, token.LPAR): LPAR, + (NAME, DEFAULT_TOKEN): DONE, + # A '%' symbol can be followed by an '(' or a single argument (e.g. a + # string or variable name). + (PERCENT, token.LPAR): LPAR, + (PERCENT, DEFAULT_TOKEN): SINGLE_FMT_ARG, + # If a '%' symbol is followed by a single argument, that argument is + # the last leaf in the string trailer. + (SINGLE_FMT_ARG, DEFAULT_TOKEN): DONE, + # If present, a ')' symbol is the last symbol in a string trailer. + # (NOTE: LPARS and nested RPARS are not included in this lookup table, + # since they are treated as a special case by the parsing logic in this + # classes' implementation.) + (RPAR, DEFAULT_TOKEN): DONE, + } + + def __init__(self) -> None: + self._state = self.START + self._unmatched_lpars = 0 + + def parse(self, leaves: List[Leaf], string_idx: int) -> int: + """ + Pre-conditions: + * @leaves[@string_idx].type == token.STRING + + Returns: + The index directly after the last leaf which is apart of the string + trailer, if a "trailer" exists. + OR + @string_idx + 1, if no string "trailer" exists. 
+ """ + assert leaves[string_idx].type == token.STRING + + idx = string_idx + 1 + while idx < len(leaves) and self._next_state(leaves[idx]): + idx += 1 + return idx + + def _next_state(self, leaf: Leaf) -> bool: + """ + Pre-conditions: + * On the first call to this function, @leaf MUST be the leaf that + was directly after the string leaf in question (e.g. if our target + string is `line.leaves[i]` then the first call to this method must + be `line.leaves[i + 1]`). + * On the next call to this function, the leaf parameter passed in + MUST be the leaf directly following @leaf. + + Returns: + True iff @leaf is apart of the string's trailer. + """ + # We ignore empty LPAR or RPAR leaves. + if is_empty_par(leaf): + return True + + next_token = leaf.type + if next_token == token.LPAR: + self._unmatched_lpars += 1 + + current_state = self._state + + # The LPAR parser state is a special case. We will return True until we + # find the matching RPAR token. + if current_state == self.LPAR: + if next_token == token.RPAR: + self._unmatched_lpars -= 1 + if self._unmatched_lpars == 0: + self._state = self.RPAR + # Otherwise, we use a lookup table to determine the next state. + else: + # If the lookup table matches the current state to the next + # token, we use the lookup table. + if (current_state, next_token) in self._goto: + self._state = self._goto[current_state, next_token] + else: + # Otherwise, we check if a the current state was assigned a + # default. + if (current_state, self.DEFAULT_TOKEN) in self._goto: + self._state = self._goto[current_state, self.DEFAULT_TOKEN] + # If no default has been assigned, then this parser has a logic + # error. + else: + raise RuntimeError(f"{self.__class__.__name__} LOGIC ERROR!") + + if self._state == self.DONE: + return False + + return True + + +def insert_str_child_factory(string_leaf: Leaf) -> Callable[[LN], None]: + """ + Factory for a convenience function that is used to orphan @string_leaf + and then insert multiple new leaves into the same part of the node + structure that @string_leaf had originally occupied. + + Examples: + Let `string_leaf = Leaf(token.STRING, '"foo"')` and `N = + string_leaf.parent`. Assume the node `N` has the following + original structure: + + Node( + expr_stmt, [ + Leaf(NAME, 'x'), + Leaf(EQUAL, '='), + Leaf(STRING, '"foo"'), + ] + ) + + We then run the code snippet shown below. 
+ ``` + insert_str_child = insert_str_child_factory(string_leaf) + + lpar = Leaf(token.LPAR, '(') + insert_str_child(lpar) + + bar = Leaf(token.STRING, '"bar"') + insert_str_child(bar) + + rpar = Leaf(token.RPAR, ')') + insert_str_child(rpar) + ``` + + After which point, it follows that `string_leaf.parent is None` and + the node `N` now has the following structure: + + Node( + expr_stmt, [ + Leaf(NAME, 'x'), + Leaf(EQUAL, '='), + Leaf(LPAR, '('), + Leaf(STRING, '"bar"'), + Leaf(RPAR, ')'), + ] + ) + """ + string_parent = string_leaf.parent + string_child_idx = string_leaf.remove() + + def insert_str_child(child: LN) -> None: + nonlocal string_child_idx + + assert string_parent is not None + assert string_child_idx is not None + + string_parent.insert_child(string_child_idx, child) + string_child_idx += 1 + + return insert_str_child + + +def is_valid_index_factory(seq: Sequence[Any]) -> Callable[[int], bool]: + """ + Examples: + ``` + my_list = [1, 2, 3] + + is_valid_index = is_valid_index_factory(my_list) + + assert is_valid_index(0) + assert is_valid_index(2) + + assert not is_valid_index(3) + assert not is_valid_index(-1) + ``` + """ + + def is_valid_index(idx: int) -> bool: + """ + Returns: + True iff @idx is positive AND seq[@idx] does NOT raise an + IndexError. + """ + return 0 <= idx < len(seq) + + return is_valid_index diff --git a/env/lib/python3.8/site-packages/blackd/__init__.py b/env/lib/python3.8/site-packages/blackd/__init__.py new file mode 100644 index 00000000..ba4750b8 --- /dev/null +++ b/env/lib/python3.8/site-packages/blackd/__init__.py @@ -0,0 +1,227 @@ +import asyncio +import logging +from concurrent.futures import Executor, ProcessPoolExecutor +from datetime import datetime +from functools import partial +from multiprocessing import freeze_support +from typing import Set, Tuple + +try: + from aiohttp import web + + from .middlewares import cors +except ImportError as ie: + raise ImportError( + f"aiohttp dependency is not installed: {ie}. 
" + + "Please re-install black with the '[d]' extra install " + + "to obtain aiohttp_cors: `pip install black[d]`" + ) from None + +import click + +import black +from _black_version import version as __version__ +from black.concurrency import maybe_install_uvloop + +# This is used internally by tests to shut down the server prematurely +_stop_signal = asyncio.Event() + +# Request headers +PROTOCOL_VERSION_HEADER = "X-Protocol-Version" +LINE_LENGTH_HEADER = "X-Line-Length" +PYTHON_VARIANT_HEADER = "X-Python-Variant" +SKIP_SOURCE_FIRST_LINE = "X-Skip-Source-First-Line" +SKIP_STRING_NORMALIZATION_HEADER = "X-Skip-String-Normalization" +SKIP_MAGIC_TRAILING_COMMA = "X-Skip-Magic-Trailing-Comma" +PREVIEW = "X-Preview" +FAST_OR_SAFE_HEADER = "X-Fast-Or-Safe" +DIFF_HEADER = "X-Diff" + +BLACK_HEADERS = [ + PROTOCOL_VERSION_HEADER, + LINE_LENGTH_HEADER, + PYTHON_VARIANT_HEADER, + SKIP_SOURCE_FIRST_LINE, + SKIP_STRING_NORMALIZATION_HEADER, + SKIP_MAGIC_TRAILING_COMMA, + PREVIEW, + FAST_OR_SAFE_HEADER, + DIFF_HEADER, +] + +# Response headers +BLACK_VERSION_HEADER = "X-Black-Version" + + +class InvalidVariantHeader(Exception): + pass + + +@click.command(context_settings={"help_option_names": ["-h", "--help"]}) +@click.option( + "--bind-host", type=str, help="Address to bind the server to.", default="localhost" +) +@click.option("--bind-port", type=int, help="Port to listen on", default=45484) +@click.version_option(version=black.__version__) +def main(bind_host: str, bind_port: int) -> None: + logging.basicConfig(level=logging.INFO) + app = make_app() + ver = black.__version__ + black.out(f"blackd version {ver} listening on {bind_host} port {bind_port}") + web.run_app(app, host=bind_host, port=bind_port, handle_signals=True, print=None) + + +def make_app() -> web.Application: + app = web.Application( + middlewares=[cors(allow_headers=(*BLACK_HEADERS, "Content-Type"))] + ) + executor = ProcessPoolExecutor() + app.add_routes([web.post("/", partial(handle, executor=executor))]) + return app + + +async def handle(request: web.Request, executor: Executor) -> web.Response: + headers = {BLACK_VERSION_HEADER: __version__} + try: + if request.headers.get(PROTOCOL_VERSION_HEADER, "1") != "1": + return web.Response( + status=501, text="This server only supports protocol version 1" + ) + try: + line_length = int( + request.headers.get(LINE_LENGTH_HEADER, black.DEFAULT_LINE_LENGTH) + ) + except ValueError: + return web.Response(status=400, text="Invalid line length header value") + + if PYTHON_VARIANT_HEADER in request.headers: + value = request.headers[PYTHON_VARIANT_HEADER] + try: + pyi, versions = parse_python_variant_header(value) + except InvalidVariantHeader as e: + return web.Response( + status=400, + text=f"Invalid value for {PYTHON_VARIANT_HEADER}: {e.args[0]}", + ) + else: + pyi = False + versions = set() + + skip_string_normalization = bool( + request.headers.get(SKIP_STRING_NORMALIZATION_HEADER, False) + ) + skip_magic_trailing_comma = bool( + request.headers.get(SKIP_MAGIC_TRAILING_COMMA, False) + ) + skip_source_first_line = bool( + request.headers.get(SKIP_SOURCE_FIRST_LINE, False) + ) + preview = bool(request.headers.get(PREVIEW, False)) + fast = False + if request.headers.get(FAST_OR_SAFE_HEADER, "safe") == "fast": + fast = True + mode = black.FileMode( + target_versions=versions, + is_pyi=pyi, + line_length=line_length, + skip_source_first_line=skip_source_first_line, + string_normalization=not skip_string_normalization, + magic_trailing_comma=not skip_magic_trailing_comma, + preview=preview, + ) 
+ req_bytes = await request.content.read() + charset = request.charset if request.charset is not None else "utf8" + req_str = req_bytes.decode(charset) + then = datetime.utcnow() + + header = "" + if skip_source_first_line: + first_newline_position: int = req_str.find("\n") + 1 + header = req_str[:first_newline_position] + req_str = req_str[first_newline_position:] + + loop = asyncio.get_event_loop() + formatted_str = await loop.run_in_executor( + executor, partial(black.format_file_contents, req_str, fast=fast, mode=mode) + ) + + # Preserve CRLF line endings + if req_str[req_str.find("\n") - 1] == "\r": + formatted_str = formatted_str.replace("\n", "\r\n") + # If, after swapping line endings, nothing changed, then say so + if formatted_str == req_str: + raise black.NothingChanged + + # Put the source first line back + req_str = header + req_str + formatted_str = header + formatted_str + + # Only output the diff in the HTTP response + only_diff = bool(request.headers.get(DIFF_HEADER, False)) + if only_diff: + now = datetime.utcnow() + src_name = f"In\t{then} +0000" + dst_name = f"Out\t{now} +0000" + loop = asyncio.get_event_loop() + formatted_str = await loop.run_in_executor( + executor, + partial(black.diff, req_str, formatted_str, src_name, dst_name), + ) + + return web.Response( + content_type=request.content_type, + charset=charset, + headers=headers, + text=formatted_str, + ) + except black.NothingChanged: + return web.Response(status=204, headers=headers) + except black.InvalidInput as e: + return web.Response(status=400, headers=headers, text=str(e)) + except Exception as e: + logging.exception("Exception during handling a request") + return web.Response(status=500, headers=headers, text=str(e)) + + +def parse_python_variant_header(value: str) -> Tuple[bool, Set[black.TargetVersion]]: + if value == "pyi": + return True, set() + else: + versions = set() + for version in value.split(","): + if version.startswith("py"): + version = version[len("py") :] + if "." in version: + major_str, *rest = version.split(".") + else: + major_str = version[0] + rest = [version[1:]] if len(version) > 1 else [] + try: + major = int(major_str) + if major not in (2, 3): + raise InvalidVariantHeader("major version must be 2 or 3") + if len(rest) > 0: + minor = int(rest[0]) + if major == 2: + raise InvalidVariantHeader("Python 2 is not supported") + else: + # Default to lowest supported minor version. + minor = 7 if major == 2 else 3 + version_str = f"PY{major}{minor}" + if major == 3 and not hasattr(black.TargetVersion, version_str): + raise InvalidVariantHeader(f"3.{minor} is not supported") + versions.add(black.TargetVersion[version_str]) + except (KeyError, ValueError): + raise InvalidVariantHeader("expected e.g. 
'3.7', 'py3.5'") from None + return False, versions + + +def patched_main() -> None: + maybe_install_uvloop() + freeze_support() + black.patch_click() + main() + + +if __name__ == "__main__": + patched_main() diff --git a/env/lib/python3.8/site-packages/blackd/__main__.py b/env/lib/python3.8/site-packages/blackd/__main__.py new file mode 100644 index 00000000..b5a4b137 --- /dev/null +++ b/env/lib/python3.8/site-packages/blackd/__main__.py @@ -0,0 +1,3 @@ +import blackd + +blackd.patched_main() diff --git a/env/lib/python3.8/site-packages/blackd/middlewares.py b/env/lib/python3.8/site-packages/blackd/middlewares.py new file mode 100644 index 00000000..370e0ae2 --- /dev/null +++ b/env/lib/python3.8/site-packages/blackd/middlewares.py @@ -0,0 +1,45 @@ +from typing import TYPE_CHECKING, Any, Awaitable, Callable, Iterable, TypeVar + +from aiohttp.web_request import Request +from aiohttp.web_response import StreamResponse + +if TYPE_CHECKING: + F = TypeVar("F", bound=Callable[..., Any]) + middleware: Callable[[F], F] +else: + try: + from aiohttp.web_middlewares import middleware + except ImportError: + # @middleware is deprecated and its behaviour is the default since aiohttp 4.0 + # so if it doesn't exist anymore, define a no-op for forward compatibility. + middleware = lambda x: x # noqa: E731 + +Handler = Callable[[Request], Awaitable[StreamResponse]] +Middleware = Callable[[Request, Handler], Awaitable[StreamResponse]] + + +def cors(allow_headers: Iterable[str]) -> Middleware: + @middleware + async def impl(request: Request, handler: Handler) -> StreamResponse: + is_options = request.method == "OPTIONS" + is_preflight = is_options and "Access-Control-Request-Method" in request.headers + if is_preflight: + resp = StreamResponse() + else: + resp = await handler(request) + + origin = request.headers.get("Origin") + if not origin: + return resp + + resp.headers["Access-Control-Allow-Origin"] = "*" + resp.headers["Access-Control-Expose-Headers"] = "*" + if is_options: + resp.headers["Access-Control-Allow-Headers"] = ", ".join(allow_headers) + resp.headers["Access-Control-Allow-Methods"] = ", ".join( + ("OPTIONS", "POST") + ) + + return resp + + return impl diff --git a/env/lib/python3.8/site-packages/blib2to3/Grammar.txt b/env/lib/python3.8/site-packages/blib2to3/Grammar.txt new file mode 100644 index 00000000..ac7ad764 --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/Grammar.txt @@ -0,0 +1,252 @@ +# Grammar for 2to3. This grammar supports Python 2.x and 3.x. + +# NOTE WELL: You should also follow all the steps listed at +# https://devguide.python.org/grammar/ + +# Start symbols for the grammar: +# file_input is a module or sequence of commands read from an input file; +# single_input is a single interactive statement; +# eval_input is the input for the eval() and input() functions. +# NB: compound_stmt in single_input is followed by extra NEWLINE! 
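+# Notation reminder (standard pgen EBNF): 'string' is a literal token,
+# NAMES IN CAPS are token types from the tokenizer, lowercase names are
+# nonterminals, [] marks an optional part, () groups, * and + repeat,
+# and | separates alternatives.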
+file_input: (NEWLINE | stmt)* ENDMARKER +single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE +eval_input: testlist NEWLINE* ENDMARKER + +decorator: '@' namedexpr_test NEWLINE +decorators: decorator+ +decorated: decorators (classdef | funcdef | async_funcdef) +async_funcdef: ASYNC funcdef +funcdef: 'def' NAME parameters ['->' test] ':' suite +parameters: '(' [typedargslist] ')' + +# The following definition for typedarglist is equivalent to this set of rules: +# +# arguments = argument (',' argument)* +# argument = tfpdef ['=' test] +# kwargs = '**' tname [','] +# args = '*' [tname_star] +# kwonly_kwargs = (',' argument)* [',' [kwargs]] +# args_kwonly_kwargs = args kwonly_kwargs | kwargs +# poskeyword_args_kwonly_kwargs = arguments [',' [args_kwonly_kwargs]] +# typedargslist_no_posonly = poskeyword_args_kwonly_kwargs | args_kwonly_kwargs +# typedarglist = arguments ',' '/' [',' [typedargslist_no_posonly]])|(typedargslist_no_posonly)" +# +# It needs to be fully expanded to allow our LL(1) parser to work on it. + +typedargslist: tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [ + ',' [((tfpdef ['=' test] ',')* ('*' [tname_star] (',' tname ['=' test])* + [',' ['**' tname [',']]] | '**' tname [',']) + | tfpdef ['=' test] (',' tfpdef ['=' test])* [','])] + ] | ((tfpdef ['=' test] ',')* ('*' [tname_star] (',' tname ['=' test])* + [',' ['**' tname [',']]] | '**' tname [',']) + | tfpdef ['=' test] (',' tfpdef ['=' test])* [',']) + +tname: NAME [':' test] +tname_star: NAME [':' (test|star_expr)] +tfpdef: tname | '(' tfplist ')' +tfplist: tfpdef (',' tfpdef)* [','] + +# The following definition for varargslist is equivalent to this set of rules: +# +# arguments = argument (',' argument )* +# argument = vfpdef ['=' test] +# kwargs = '**' vname [','] +# args = '*' [vname] +# kwonly_kwargs = (',' argument )* [',' [kwargs]] +# args_kwonly_kwargs = args kwonly_kwargs | kwargs +# poskeyword_args_kwonly_kwargs = arguments [',' [args_kwonly_kwargs]] +# vararglist_no_posonly = poskeyword_args_kwonly_kwargs | args_kwonly_kwargs +# varargslist = arguments ',' '/' [','[(vararglist_no_posonly)]] | (vararglist_no_posonly) +# +# It needs to be fully expanded to allow our LL(1) parser to work on it. 
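+#
+# For illustration, the expanded varargslist below accepts lambda parameter
+# lists such as:
+#   lambda a, b=1, /, c=2, *args, d, e=3, **kw: ...
+#   lambda *, x: ...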
+ +varargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ + ((vfpdef ['=' test] ',')* ('*' [vname] (',' vname ['=' test])* + [',' ['**' vname [',']]] | '**' vname [',']) + | vfpdef ['=' test] (',' vfpdef ['=' test])* [',']) + ]] | ((vfpdef ['=' test] ',')* + ('*' [vname] (',' vname ['=' test])* [',' ['**' vname [',']]]| '**' vname [',']) + | vfpdef ['=' test] (',' vfpdef ['=' test])* [',']) + +vname: NAME +vfpdef: vname | '(' vfplist ')' +vfplist: vfpdef (',' vfpdef)* [','] + +stmt: simple_stmt | compound_stmt +simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE +small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt | + import_stmt | global_stmt | exec_stmt | assert_stmt) +expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) | + ('=' (yield_expr|testlist_star_expr))*) +annassign: ':' test ['=' (yield_expr|testlist_star_expr)] +testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [','] +augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' | + '<<=' | '>>=' | '**=' | '//=') +# For normal and annotated assignments, additional restrictions enforced by the interpreter +print_stmt: 'print' ( [ test (',' test)* [','] ] | + '>>' test [ (',' test)+ [','] ] ) +del_stmt: 'del' exprlist +pass_stmt: 'pass' +flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt +break_stmt: 'break' +continue_stmt: 'continue' +return_stmt: 'return' [testlist_star_expr] +yield_stmt: yield_expr +raise_stmt: 'raise' [test ['from' test | ',' test [',' test]]] +import_stmt: import_name | import_from +import_name: 'import' dotted_as_names +import_from: ('from' ('.'* dotted_name | '.'+) + 'import' ('*' | '(' import_as_names ')' | import_as_names)) +import_as_name: NAME ['as' NAME] +dotted_as_name: dotted_name ['as' NAME] +import_as_names: import_as_name (',' import_as_name)* [','] +dotted_as_names: dotted_as_name (',' dotted_as_name)* +dotted_name: NAME ('.' NAME)* +global_stmt: ('global' | 'nonlocal') NAME (',' NAME)* +exec_stmt: 'exec' expr ['in' test [',' test]] +assert_stmt: 'assert' test [',' test] + +compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt | match_stmt +async_stmt: ASYNC (funcdef | with_stmt | for_stmt) +if_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite] +while_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite] +for_stmt: 'for' exprlist 'in' testlist_star_expr ':' suite ['else' ':' suite] +try_stmt: ('try' ':' suite + ((except_clause ':' suite)+ + ['else' ':' suite] + ['finally' ':' suite] | + 'finally' ':' suite)) +with_stmt: 'with' asexpr_test (',' asexpr_test)* ':' suite + +# NB compile.c makes sure that the default except clause is last +except_clause: 'except' ['*'] [test [(',' | 'as') test]] +suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT + +# Backward compatibility cruft to support: +# [ x for x in lambda: True, lambda: False if x() ] +# even while also allowing: +# lambda x: 5 if x else 2 +# (But not a mix of the two) +testlist_safe: old_test [(',' old_test)+ [',']] +old_test: or_test | old_lambdef +old_lambdef: 'lambda' [varargslist] ':' old_test + +namedexpr_test: asexpr_test [':=' asexpr_test] + +# This is actually not a real rule, though since the parser is very +# limited in terms of the strategy about match/case rules, we are inserting +# a virtual case ( as ) as a valid expression. 
Unless a better +# approach is thought, the only side effect of this seem to be just allowing +# more stuff to be parser (which would fail on the ast). +asexpr_test: test ['as' test] + +test: or_test ['if' or_test 'else' test] | lambdef +or_test: and_test ('or' and_test)* +and_test: not_test ('and' not_test)* +not_test: 'not' not_test | comparison +comparison: expr (comp_op expr)* +comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' +star_expr: '*' expr +expr: xor_expr ('|' xor_expr)* +xor_expr: and_expr ('^' and_expr)* +and_expr: shift_expr ('&' shift_expr)* +shift_expr: arith_expr (('<<'|'>>') arith_expr)* +arith_expr: term (('+'|'-') term)* +term: factor (('*'|'@'|'/'|'%'|'//') factor)* +factor: ('+'|'-'|'~') factor | power +power: [AWAIT] atom trailer* ['**' factor] +atom: ('(' [yield_expr|testlist_gexp] ')' | + '[' [listmaker] ']' | + '{' [dictsetmaker] '}' | + '`' testlist1 '`' | + NAME | NUMBER | STRING+ | '.' '.' '.') +listmaker: (namedexpr_test|star_expr) ( old_comp_for | (',' (namedexpr_test|star_expr))* [','] ) +testlist_gexp: (namedexpr_test|star_expr) ( old_comp_for | (',' (namedexpr_test|star_expr))* [','] ) +lambdef: 'lambda' [varargslist] ':' test +trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME +subscriptlist: (subscript|star_expr) (',' (subscript|star_expr))* [','] +subscript: test [':=' test] | [test] ':' [test] [sliceop] +sliceop: ':' [test] +exprlist: (expr|star_expr) (',' (expr|star_expr))* [','] +testlist: test (',' test)* [','] +dictsetmaker: ( ((test ':' asexpr_test | '**' expr) + (comp_for | (',' (test ':' asexpr_test | '**' expr))* [','])) | + ((test [':=' test] | star_expr) + (comp_for | (',' (test [':=' test] | star_expr))* [','])) ) + +classdef: 'class' NAME ['(' [arglist] ')'] ':' suite + +arglist: argument (',' argument)* [','] + +# "test '=' test" is really "keyword '=' test", but we have no such token. +# These need to be in a single rule to avoid grammar that is ambiguous +# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr, +# we explicitly match '*' here, too, to give it proper precedence. +# Illegal combinations and orderings are blocked in ast.c: +# multiple (test comp_for) arguments are blocked; keyword unpackings +# that precede iterable unpackings are blocked; etc. +argument: ( test [comp_for] | + test ':=' test | + test 'as' test | + test '=' asexpr_test | + '**' test | + '*' test ) + +comp_iter: comp_for | comp_if +comp_for: [ASYNC] 'for' exprlist 'in' or_test [comp_iter] +comp_if: 'if' old_test [comp_iter] + +# As noted above, testlist_safe extends the syntax allowed in list +# comprehensions and generators. We can't use it indiscriminately in all +# derivations using a comp_for-like pattern because the testlist_safe derivation +# contains comma which clashes with trailing comma in arglist. +# +# This was an issue because the parser would not follow the correct derivation +# when parsing syntactically valid Python code. Since testlist_safe was created +# specifically to handle list comprehensions and generator expressions enclosed +# with parentheses, it's safe to only use it in those. That avoids the issue; we +# can parse code like set(x for x in [],). +# +# The syntax supported by this set of rules is not a valid Python 3 syntax, +# hence the prefix "old". 
+# +# See https://bugs.python.org/issue27494 +old_comp_iter: old_comp_for | old_comp_if +old_comp_for: [ASYNC] 'for' exprlist 'in' testlist_safe [old_comp_iter] +old_comp_if: 'if' old_test [old_comp_iter] + +testlist1: test (',' test)* + +# not used in grammar, but may appear in "node" passed from Parser to Compiler +encoding_decl: NAME + +yield_expr: 'yield' [yield_arg] +yield_arg: 'from' test | testlist_star_expr + + +# 3.10 match statement definition + +# PS: normally the grammar is much much more restricted, but +# at this moment for not trying to bother much with encoding the +# exact same DSL in a LL(1) parser, we will just accept an expression +# and let the ast.parse() step of the safe mode to reject invalid +# grammar. + +# The reason why it is more restricted is that, patterns are some +# sort of a DSL (more advanced than our LHS on assignments, but +# still in a very limited python subset). They are not really +# expressions, but who cares. If we can parse them, that is enough +# to reformat them. + +match_stmt: "match" subject_expr ':' NEWLINE INDENT case_block+ DEDENT + +# This is more permissive than the actual version. For example it +# accepts `match *something:`, even though single-item starred expressions +# are forbidden. +subject_expr: (namedexpr_test|star_expr) (',' (namedexpr_test|star_expr))* [','] + +# cases +case_block: "case" patterns [guard] ':' suite +guard: 'if' namedexpr_test +patterns: pattern (',' pattern)* [','] +pattern: (expr|star_expr) ['as' expr] diff --git a/env/lib/python3.8/site-packages/blib2to3/LICENSE b/env/lib/python3.8/site-packages/blib2to3/LICENSE new file mode 100644 index 00000000..ef8df069 --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/LICENSE @@ -0,0 +1,254 @@ +A. HISTORY OF THE SOFTWARE +========================== + +Python was created in the early 1990s by Guido van Rossum at Stichting +Mathematisch Centrum (CWI, see https://www.cwi.nl) in the Netherlands +as a successor of a language called ABC. Guido remains Python's +principal author, although it includes many contributions from others. + +In 1995, Guido continued his work on Python at the Corporation for +National Research Initiatives (CNRI, see https://www.cnri.reston.va.us) +in Reston, Virginia where he released several versions of the +software. + +In May 2000, Guido and the Python core development team moved to +BeOpen.com to form the BeOpen PythonLabs team. In October of the same +year, the PythonLabs team moved to Digital Creations, which became +Zope Corporation. In 2001, the Python Software Foundation (PSF, see +https://www.python.org/psf/) was formed, a non-profit organization +created specifically to own Python-related Intellectual Property. +Zope Corporation was a sponsoring member of the PSF. + +All Python releases are Open Source (see https://opensource.org for +the Open Source Definition). Historically, most, but not all, Python +releases have also been GPL-compatible; the table below summarizes +the various releases. + + Release Derived Year Owner GPL- + from compatible? (1) + + 0.9.0 thru 1.2 1991-1995 CWI yes + 1.3 thru 1.5.2 1.2 1995-1999 CNRI yes + 1.6 1.5.2 2000 CNRI no + 2.0 1.6 2000 BeOpen.com no + 1.6.1 1.6 2001 CNRI yes (2) + 2.1 2.0+1.6.1 2001 PSF no + 2.0.1 2.0+1.6.1 2001 PSF yes + 2.1.1 2.1+2.0.1 2001 PSF yes + 2.1.2 2.1.1 2002 PSF yes + 2.1.3 2.1.2 2002 PSF yes + 2.2 and above 2.1.1 2001-now PSF yes + +Footnotes: + +(1) GPL-compatible doesn't mean that we're distributing Python under + the GPL. 
All Python licenses, unlike the GPL, let you distribute + a modified version without making your changes open source. The + GPL-compatible licenses make it possible to combine Python with + other software that is released under the GPL; the others don't. + +(2) According to Richard Stallman, 1.6.1 is not GPL-compatible, + because its license has a choice of law clause. According to + CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1 + is "not incompatible" with the GPL. + +Thanks to the many outside volunteers who have worked under Guido's +direction to make these releases possible. + + +B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON +=============================================================== + +PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 +-------------------------------------------- + +1. This LICENSE AGREEMENT is between the Python Software Foundation +("PSF"), and the Individual or Organization ("Licensee") accessing and +otherwise using this software ("Python") in source or binary form and +its associated documentation. + +2. Subject to the terms and conditions of this License Agreement, PSF hereby +grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, +analyze, test, perform and/or display publicly, prepare derivative works, +distribute, and otherwise use Python alone or in any derivative version, +provided, however, that PSF's License Agreement and PSF's notice of copyright, +i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, +2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018 Python Software Foundation; All +Rights Reserved" are retained in Python alone or in any derivative version +prepared by Licensee. + +3. In the event Licensee prepares a derivative work that is based on +or incorporates Python or any part thereof, and wants to make +the derivative work available to others as provided herein, then +Licensee hereby agrees to include in any such work a brief summary of +the changes made to Python. + +4. PSF is making Python available to Licensee on an "AS IS" +basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR +IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND +DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS +FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT +INFRINGE ANY THIRD PARTY RIGHTS. + +5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON +FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS +A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, +OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +6. This License Agreement will automatically terminate upon a material +breach of its terms and conditions. + +7. Nothing in this License Agreement shall be deemed to create any +relationship of agency, partnership, or joint venture between PSF and +Licensee. This License Agreement does not grant permission to use PSF +trademarks or trade name in a trademark sense to endorse or promote +products or services of Licensee, or any third party. + +8. By copying, installing or otherwise using Python, Licensee +agrees to be bound by the terms and conditions of this License +Agreement. + + +BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0 +------------------------------------------- + +BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1 + +1. 
This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an +office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the +Individual or Organization ("Licensee") accessing and otherwise using +this software in source or binary form and its associated +documentation ("the Software"). + +2. Subject to the terms and conditions of this BeOpen Python License +Agreement, BeOpen hereby grants Licensee a non-exclusive, +royalty-free, world-wide license to reproduce, analyze, test, perform +and/or display publicly, prepare derivative works, distribute, and +otherwise use the Software alone or in any derivative version, +provided, however, that the BeOpen Python License is retained in the +Software, alone or in any derivative version prepared by Licensee. + +3. BeOpen is making the Software available to Licensee on an "AS IS" +basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR +IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND +DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS +FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT +INFRINGE ANY THIRD PARTY RIGHTS. + +4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE +SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS +AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY +DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +5. This License Agreement will automatically terminate upon a material +breach of its terms and conditions. + +6. This License Agreement shall be governed by and interpreted in all +respects by the law of the State of California, excluding conflict of +law provisions. Nothing in this License Agreement shall be deemed to +create any relationship of agency, partnership, or joint venture +between BeOpen and Licensee. This License Agreement does not grant +permission to use BeOpen trademarks or trade names in a trademark +sense to endorse or promote products or services of Licensee, or any +third party. As an exception, the "BeOpen Python" logos available at +http://www.pythonlabs.com/logos.html may be used according to the +permissions granted on that web page. + +7. By copying, installing or otherwise using the software, Licensee +agrees to be bound by the terms and conditions of this License +Agreement. + + +CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1 +--------------------------------------- + +1. This LICENSE AGREEMENT is between the Corporation for National +Research Initiatives, having an office at 1895 Preston White Drive, +Reston, VA 20191 ("CNRI"), and the Individual or Organization +("Licensee") accessing and otherwise using Python 1.6.1 software in +source or binary form and its associated documentation. + +2. Subject to the terms and conditions of this License Agreement, CNRI +hereby grants Licensee a nonexclusive, royalty-free, world-wide +license to reproduce, analyze, test, perform and/or display publicly, +prepare derivative works, distribute, and otherwise use Python 1.6.1 +alone or in any derivative version, provided, however, that CNRI's +License Agreement and CNRI's notice of copyright, i.e., "Copyright (c) +1995-2001 Corporation for National Research Initiatives; All Rights +Reserved" are retained in Python 1.6.1 alone or in any derivative +version prepared by Licensee. 
Alternately, in lieu of CNRI's License +Agreement, Licensee may substitute the following text (omitting the +quotes): "Python 1.6.1 is made available subject to the terms and +conditions in CNRI's License Agreement. This Agreement together with +Python 1.6.1 may be located on the Internet using the following +unique, persistent identifier (known as a handle): 1895.22/1013. This +Agreement may also be obtained from a proxy server on the Internet +using the following URL: http://hdl.handle.net/1895.22/1013". + +3. In the event Licensee prepares a derivative work that is based on +or incorporates Python 1.6.1 or any part thereof, and wants to make +the derivative work available to others as provided herein, then +Licensee hereby agrees to include in any such work a brief summary of +the changes made to Python 1.6.1. + +4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS" +basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR +IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND +DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS +FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT +INFRINGE ANY THIRD PARTY RIGHTS. + +5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON +1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS +A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1, +OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +6. This License Agreement will automatically terminate upon a material +breach of its terms and conditions. + +7. This License Agreement shall be governed by the federal +intellectual property law of the United States, including without +limitation the federal copyright law, and, to the extent such +U.S. federal law does not apply, by the law of the Commonwealth of +Virginia, excluding Virginia's conflict of law provisions. +Notwithstanding the foregoing, with regard to derivative works based +on Python 1.6.1 that incorporate non-separable material that was +previously distributed under the GNU General Public License (GPL), the +law of the Commonwealth of Virginia shall govern this License +Agreement only as to issues arising under or with respect to +Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this +License Agreement shall be deemed to create any relationship of +agency, partnership, or joint venture between CNRI and Licensee. This +License Agreement does not grant permission to use CNRI trademarks or +trade name in a trademark sense to endorse or promote products or +services of Licensee, or any third party. + +8. By clicking on the "ACCEPT" button where indicated, or by copying, +installing or otherwise using Python 1.6.1, Licensee agrees to be +bound by the terms and conditions of this License Agreement. + + ACCEPT + + +CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2 +-------------------------------------------------- + +Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam, +The Netherlands. All rights reserved. 
+ +Permission to use, copy, modify, and distribute this software and its +documentation for any purpose and without fee is hereby granted, +provided that the above copyright notice appear in all copies and that +both that copyright notice and this permission notice appear in +supporting documentation, and that the name of Stichting Mathematisch +Centrum or CWI not be used in advertising or publicity pertaining to +distribution of the software without specific, written prior +permission. + +STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO +THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND +FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE +FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES +WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN +ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT +OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. diff --git a/env/lib/python3.8/site-packages/blib2to3/PatternGrammar.txt b/env/lib/python3.8/site-packages/blib2to3/PatternGrammar.txt new file mode 100644 index 00000000..36bf8148 --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/PatternGrammar.txt @@ -0,0 +1,28 @@ +# Copyright 2006 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +# A grammar to describe tree matching patterns. +# Not shown here: +# - 'TOKEN' stands for any token (leaf node) +# - 'any' stands for any node (leaf or interior) +# With 'any' we can still specify the sub-structure. + +# The start symbol is 'Matcher'. + +Matcher: Alternatives ENDMARKER + +Alternatives: Alternative ('|' Alternative)* + +Alternative: (Unit | NegatedUnit)+ + +Unit: [NAME '='] ( STRING [Repeater] + | NAME [Details] [Repeater] + | '(' Alternatives ')' [Repeater] + | '[' Alternatives ']' + ) + +NegatedUnit: 'not' (STRING | NAME [Details] | '(' Alternatives ')') + +Repeater: '*' | '+' | '{' NUMBER [',' NUMBER] '}' + +Details: '<' Alternatives '>' diff --git a/env/lib/python3.8/site-packages/blib2to3/README b/env/lib/python3.8/site-packages/blib2to3/README new file mode 100644 index 00000000..0d3c607c --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/README @@ -0,0 +1,23 @@ +A subset of lib2to3 taken from Python 3.7.0b2. +Commit hash: 9c17e3a1987004b8bcfbe423953aad84493a7984 + +Reasons for forking: +- consistent handling of f-strings for users of Python < 3.6.2 +- backport of BPO-33064 that fixes parsing files with trailing commas after + *args and **kwargs +- backport of GH-6143 that restores the ability to reformat legacy usage of + `async` +- support all types of string literals +- better ability to debug (better reprs) +- INDENT and DEDENT don't hold whitespace and comment prefixes +- ability to Cythonize + +Change Log: +- Changes default logger used by Driver +- Backported the following upstream parser changes: + - "bpo-42381: Allow walrus in set literals and set comprehensions (GH-23332)" + https://github.com/python/cpython/commit/cae60187cf7a7b26281d012e1952fafe4e2e97e9 + - "bpo-42316: Allow unparenthesized walrus operator in indexes (GH-23317)" + https://github.com/python/cpython/commit/b0aba1fcdc3da952698d99aec2334faa79a8b68c +- Tweaks to help mypyc compile faster code (including inlining type information, + "Final-ing", etc.) 
diff --git a/env/lib/python3.8/site-packages/blib2to3/__init__.py b/env/lib/python3.8/site-packages/blib2to3/__init__.py new file mode 100644 index 00000000..1bb8bf6d --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/__init__.py @@ -0,0 +1 @@ +# empty diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/__init__.py b/env/lib/python3.8/site-packages/blib2to3/pgen2/__init__.py new file mode 100644 index 00000000..af390484 --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pgen2/__init__.py @@ -0,0 +1,4 @@ +# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""The pgen2 package.""" diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/conv.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/blib2to3/pgen2/conv.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..9ef023d5 Binary files /dev/null and b/env/lib/python3.8/site-packages/blib2to3/pgen2/conv.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/conv.py b/env/lib/python3.8/site-packages/blib2to3/pgen2/conv.py new file mode 100644 index 00000000..fa9825e5 --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pgen2/conv.py @@ -0,0 +1,256 @@ +# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +# mypy: ignore-errors + +"""Convert graminit.[ch] spit out by pgen to Python code. + +Pgen is the Python parser generator. It is useful to quickly create a +parser from a grammar file in Python's grammar notation. But I don't +want my parsers to be written in C (yet), so I'm translating the +parsing tables to Python data structures and writing a Python parse +engine. + +Note that the token numbers are constants determined by the standard +Python tokenizer. The standard token module defines these numbers and +their names (the names are not used much). The token numbers are +hardcoded into the Python tokenizer and into pgen. A Python +implementation of the Python tokenizer is also available, in the +standard tokenize module. + +On the other hand, symbol numbers (representing the grammar's +non-terminals) are assigned by pgen based on the actual grammar +input. + +Note: this module is pretty much obsolete; the pgen module generates +equivalent grammar tables directly from the Grammar.txt input file +without having to invoke the Python pgen C program. + +""" + +# Python imports +import re + +# Local imports +from pgen2 import grammar, token + + +class Converter(grammar.Grammar): + """Grammar subclass that reads classic pgen output files. + + The run() method reads the tables as produced by the pgen parser + generator, typically contained in two C files, graminit.h and + graminit.c. The other methods are for internal use only. + + See the base class for more documentation. + + """ + + def run(self, graminit_h, graminit_c): + """Load the grammar tables from the text files written by pgen.""" + self.parse_graminit_h(graminit_h) + self.parse_graminit_c(graminit_c) + self.finish_off() + + def parse_graminit_h(self, filename): + """Parse the .h file written by pgen. (Internal) + + This file is a sequence of #define statements defining the + nonterminals of the grammar as numbers. We build two tables + mapping the numbers to names and back. 
+
+        """
+        try:
+            f = open(filename)
+        except OSError as err:
+            print("Can't open %s: %s" % (filename, err))
+            return False
+        self.symbol2number = {}
+        self.number2symbol = {}
+        lineno = 0
+        for line in f:
+            lineno += 1
+            mo = re.match(r"^#define\s+(\w+)\s+(\d+)$", line)
+            if not mo and line.strip():
+                print("%s(%s): can't parse %s" % (filename, lineno, line.strip()))
+            else:
+                symbol, number = mo.groups()
+                number = int(number)
+                assert symbol not in self.symbol2number
+                assert number not in self.number2symbol
+                self.symbol2number[symbol] = number
+                self.number2symbol[number] = symbol
+        return True
+
+    def parse_graminit_c(self, filename):
+        """Parse the .c file written by pgen. (Internal)
+
+        The file looks as follows. The first two lines are always this:
+
+        #include "pgenheaders.h"
+        #include "grammar.h"
+
+        After that come four blocks:
+
+        1) one or more state definitions
+        2) a table defining dfas
+        3) a table defining labels
+        4) a struct defining the grammar
+
+        A state definition has the following form:
+        - one or more arc arrays, each of the form:
+          static arc arcs_<n>_<m>[<k>] = {
+              {<i>, <j>},
+              ...
+          };
+        - followed by a state array, of the form:
+          static state states_<s>[<t>] = {
+              {<k>, arcs_<n>_<m>},
+              ...
+          };
+
+        """
+        try:
+            f = open(filename)
+        except OSError as err:
+            print("Can't open %s: %s" % (filename, err))
+            return False
+        # The code below essentially uses f's iterator-ness!
+        lineno = 0
+
+        # Expect the two #include lines
+        lineno, line = lineno + 1, next(f)
+        assert line == '#include "pgenheaders.h"\n', (lineno, line)
+        lineno, line = lineno + 1, next(f)
+        assert line == '#include "grammar.h"\n', (lineno, line)
+
+        # Parse the state definitions
+        lineno, line = lineno + 1, next(f)
+        allarcs = {}
+        states = []
+        while line.startswith("static arc "):
+            while line.startswith("static arc "):
+                mo = re.match(r"static arc arcs_(\d+)_(\d+)\[(\d+)\] = {$", line)
+                assert mo, (lineno, line)
+                n, m, k = list(map(int, mo.groups()))
+                arcs = []
+                for _ in range(k):
+                    lineno, line = lineno + 1, next(f)
+                    mo = re.match(r"\s+{(\d+), (\d+)},$", line)
+                    assert mo, (lineno, line)
+                    i, j = list(map(int, mo.groups()))
+                    arcs.append((i, j))
+                lineno, line = lineno + 1, next(f)
+                assert line == "};\n", (lineno, line)
+                allarcs[(n, m)] = arcs
+                lineno, line = lineno + 1, next(f)
+            mo = re.match(r"static state states_(\d+)\[(\d+)\] = {$", line)
+            assert mo, (lineno, line)
+            s, t = list(map(int, mo.groups()))
+            assert s == len(states), (lineno, line)
+            state = []
+            for _ in range(t):
+                lineno, line = lineno + 1, next(f)
+                mo = re.match(r"\s+{(\d+), arcs_(\d+)_(\d+)},$", line)
+                assert mo, (lineno, line)
+                k, n, m = list(map(int, mo.groups()))
+                arcs = allarcs[n, m]
+                assert k == len(arcs), (lineno, line)
+                state.append(arcs)
+            states.append(state)
+            lineno, line = lineno + 1, next(f)
+            assert line == "};\n", (lineno, line)
+            lineno, line = lineno + 1, next(f)
+        self.states = states
+
+        # Parse the dfas
+        dfas = {}
+        mo = re.match(r"static dfa dfas\[(\d+)\] = {$", line)
+        assert mo, (lineno, line)
+        ndfas = int(mo.group(1))
+        for i in range(ndfas):
+            lineno, line = lineno + 1, next(f)
+            mo = re.match(r'\s+{(\d+), "(\w+)", (\d+), (\d+), states_(\d+),$', line)
+            assert mo, (lineno, line)
+            symbol = mo.group(2)
+            number, x, y, z = list(map(int, mo.group(1, 3, 4, 5)))
+            assert self.symbol2number[symbol] == number, (lineno, line)
+            assert self.number2symbol[number] == symbol, (lineno, line)
+            assert x == 0, (lineno, line)
+            state = states[z]
+            assert y == len(state), (lineno, line)
+            lineno, line = lineno + 1, next(f)
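+            # The line just read holds this DFA's first-set, encoded as a C
+            # string of octal byte escapes; each byte contributes 8 one-bit
+            # label flags, which the next few lines unpack into `first`.
+            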
mo = re.match(r'\s+("(?:\\\d\d\d)*")},$', line) + assert mo, (lineno, line) + first = {} + rawbitset = eval(mo.group(1)) + for i, c in enumerate(rawbitset): + byte = ord(c) + for j in range(8): + if byte & (1 << j): + first[i * 8 + j] = 1 + dfas[number] = (state, first) + lineno, line = lineno + 1, next(f) + assert line == "};\n", (lineno, line) + self.dfas = dfas + + # Parse the labels + labels = [] + lineno, line = lineno + 1, next(f) + mo = re.match(r"static label labels\[(\d+)\] = {$", line) + assert mo, (lineno, line) + nlabels = int(mo.group(1)) + for i in range(nlabels): + lineno, line = lineno + 1, next(f) + mo = re.match(r'\s+{(\d+), (0|"\w+")},$', line) + assert mo, (lineno, line) + x, y = mo.groups() + x = int(x) + if y == "0": + y = None + else: + y = eval(y) + labels.append((x, y)) + lineno, line = lineno + 1, next(f) + assert line == "};\n", (lineno, line) + self.labels = labels + + # Parse the grammar struct + lineno, line = lineno + 1, next(f) + assert line == "grammar _PyParser_Grammar = {\n", (lineno, line) + lineno, line = lineno + 1, next(f) + mo = re.match(r"\s+(\d+),$", line) + assert mo, (lineno, line) + ndfas = int(mo.group(1)) + assert ndfas == len(self.dfas) + lineno, line = lineno + 1, next(f) + assert line == "\tdfas,\n", (lineno, line) + lineno, line = lineno + 1, next(f) + mo = re.match(r"\s+{(\d+), labels},$", line) + assert mo, (lineno, line) + nlabels = int(mo.group(1)) + assert nlabels == len(self.labels), (lineno, line) + lineno, line = lineno + 1, next(f) + mo = re.match(r"\s+(\d+)$", line) + assert mo, (lineno, line) + start = int(mo.group(1)) + assert start in self.number2symbol, (lineno, line) + self.start = start + lineno, line = lineno + 1, next(f) + assert line == "};\n", (lineno, line) + try: + lineno, line = lineno + 1, next(f) + except StopIteration: + pass + else: + assert 0, (lineno, line) + + def finish_off(self): + """Create additional useful structures. (Internal).""" + self.keywords = {} # map from keyword strings to arc labels + self.tokens = {} # map from numeric token values to arc labels + for ilabel, (type, value) in enumerate(self.labels): + if type == token.NAME and value is not None: + self.keywords[value] = ilabel + elif value is None: + self.tokens[type] = ilabel diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/driver.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/blib2to3/pgen2/driver.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..812f0e0c Binary files /dev/null and b/env/lib/python3.8/site-packages/blib2to3/pgen2/driver.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/driver.py b/env/lib/python3.8/site-packages/blib2to3/pgen2/driver.py new file mode 100644 index 00000000..daf271df --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pgen2/driver.py @@ -0,0 +1,326 @@ +# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +# Modifications: +# Copyright 2006 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Parser driver. + +This provides a high-level interface to parse a file into a syntax tree. 
+ +""" + +__author__ = "Guido van Rossum " + +__all__ = ["Driver", "load_grammar"] + +# Python imports +import io +import os +import logging +import pkgutil +import sys +from typing import ( + Any, + cast, + IO, + Iterable, + List, + Optional, + Text, + Iterator, + Tuple, + TypeVar, + Generic, + Union, +) +from contextlib import contextmanager +from dataclasses import dataclass, field + +# Pgen imports +from . import grammar, parse, token, tokenize, pgen +from logging import Logger +from blib2to3.pytree import NL +from blib2to3.pgen2.grammar import Grammar +from blib2to3.pgen2.tokenize import GoodTokenInfo + +Path = Union[str, "os.PathLike[str]"] + + +@dataclass +class ReleaseRange: + start: int + end: Optional[int] = None + tokens: List[Any] = field(default_factory=list) + + def lock(self) -> None: + total_eaten = len(self.tokens) + self.end = self.start + total_eaten + + +class TokenProxy: + def __init__(self, generator: Any) -> None: + self._tokens = generator + self._counter = 0 + self._release_ranges: List[ReleaseRange] = [] + + @contextmanager + def release(self) -> Iterator["TokenProxy"]: + release_range = ReleaseRange(self._counter) + self._release_ranges.append(release_range) + try: + yield self + finally: + # Lock the last release range to the final position that + # has been eaten. + release_range.lock() + + def eat(self, point: int) -> Any: + eaten_tokens = self._release_ranges[-1].tokens + if point < len(eaten_tokens): + return eaten_tokens[point] + else: + while point >= len(eaten_tokens): + token = next(self._tokens) + eaten_tokens.append(token) + return token + + def __iter__(self) -> "TokenProxy": + return self + + def __next__(self) -> Any: + # If the current position is already compromised (looked up) + # return the eaten token, if not just go further on the given + # token producer. + for release_range in self._release_ranges: + assert release_range.end is not None + + start, end = release_range.start, release_range.end + if start <= self._counter < end: + token = release_range.tokens[self._counter - start] + break + else: + token = next(self._tokens) + self._counter += 1 + return token + + def can_advance(self, to: int) -> bool: + # Try to eat, fail if it can't. The eat operation is cached + # so there wont be any additional cost of eating here + try: + self.eat(to) + except StopIteration: + return False + else: + return True + + +class Driver(object): + def __init__(self, grammar: Grammar, logger: Optional[Logger] = None) -> None: + self.grammar = grammar + if logger is None: + logger = logging.getLogger(__name__) + self.logger = logger + + def parse_tokens(self, tokens: Iterable[GoodTokenInfo], debug: bool = False) -> NL: + """Parse a series of tokens and return the syntax tree.""" + # XXX Move the prefix computation into a wrapper around tokenize. 
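+        # TokenProxy wraps the token stream so the parser can look ahead and
+        # backtrack (see can_advance/eat above) without losing tokens; the
+        # loop below then reconstructs each token's whitespace/comment prefix
+        # from (line, column) bookkeeping before handing it to the parser.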
+ proxy = TokenProxy(tokens) + + p = parse.Parser(self.grammar) + p.setup(proxy=proxy) + + lineno = 1 + column = 0 + indent_columns: List[int] = [] + type = value = start = end = line_text = None + prefix = "" + + for quintuple in proxy: + type, value, start, end, line_text = quintuple + if start != (lineno, column): + assert (lineno, column) <= start, ((lineno, column), start) + s_lineno, s_column = start + if lineno < s_lineno: + prefix += "\n" * (s_lineno - lineno) + lineno = s_lineno + column = 0 + if column < s_column: + prefix += line_text[column:s_column] + column = s_column + if type in (tokenize.COMMENT, tokenize.NL): + prefix += value + lineno, column = end + if value.endswith("\n"): + lineno += 1 + column = 0 + continue + if type == token.OP: + type = grammar.opmap[value] + if debug: + assert type is not None + self.logger.debug( + "%s %r (prefix=%r)", token.tok_name[type], value, prefix + ) + if type == token.INDENT: + indent_columns.append(len(value)) + _prefix = prefix + value + prefix = "" + value = "" + elif type == token.DEDENT: + _indent_col = indent_columns.pop() + prefix, _prefix = self._partially_consume_prefix(prefix, _indent_col) + if p.addtoken(cast(int, type), value, (prefix, start)): + if debug: + self.logger.debug("Stop.") + break + prefix = "" + if type in {token.INDENT, token.DEDENT}: + prefix = _prefix + lineno, column = end + if value.endswith("\n"): + lineno += 1 + column = 0 + else: + # We never broke out -- EOF is too soon (how can this happen???) + assert start is not None + raise parse.ParseError("incomplete input", type, value, (prefix, start)) + assert p.rootnode is not None + return p.rootnode + + def parse_stream_raw(self, stream: IO[Text], debug: bool = False) -> NL: + """Parse a stream and return the syntax tree.""" + tokens = tokenize.generate_tokens(stream.readline, grammar=self.grammar) + return self.parse_tokens(tokens, debug) + + def parse_stream(self, stream: IO[Text], debug: bool = False) -> NL: + """Parse a stream and return the syntax tree.""" + return self.parse_stream_raw(stream, debug) + + def parse_file( + self, filename: Path, encoding: Optional[Text] = None, debug: bool = False + ) -> NL: + """Parse a file and return the syntax tree.""" + with io.open(filename, "r", encoding=encoding) as stream: + return self.parse_stream(stream, debug) + + def parse_string(self, text: Text, debug: bool = False) -> NL: + """Parse a string and return the syntax tree.""" + tokens = tokenize.generate_tokens( + io.StringIO(text).readline, grammar=self.grammar + ) + return self.parse_tokens(tokens, debug) + + def _partially_consume_prefix(self, prefix: Text, column: int) -> Tuple[Text, Text]: + lines: List[str] = [] + current_line = "" + current_column = 0 + wait_for_nl = False + for char in prefix: + current_line += char + if wait_for_nl: + if char == "\n": + if current_line.strip() and current_column < column: + res = "".join(lines) + return res, prefix[len(res) :] + + lines.append(current_line) + current_line = "" + current_column = 0 + wait_for_nl = False + elif char in " \t": + current_column += 1 + elif char == "\n": + # unexpected empty line + current_column = 0 + else: + # indent is finished + wait_for_nl = True + return "".join(lines), current_line + + +def _generate_pickle_name(gt: Path, cache_dir: Optional[Path] = None) -> Text: + head, tail = os.path.splitext(gt) + if tail == ".txt": + tail = "" + name = head + tail + ".".join(map(str, sys.version_info)) + ".pickle" + if cache_dir: + return os.path.join(cache_dir, os.path.basename(name)) + 
else: + return name + + +def load_grammar( + gt: Text = "Grammar.txt", + gp: Optional[Text] = None, + save: bool = True, + force: bool = False, + logger: Optional[Logger] = None, +) -> Grammar: + """Load the grammar (maybe from a pickle).""" + if logger is None: + logger = logging.getLogger(__name__) + gp = _generate_pickle_name(gt) if gp is None else gp + if force or not _newer(gp, gt): + g: grammar.Grammar = pgen.generate_grammar(gt) + if save: + try: + g.dump(gp) + except OSError: + # Ignore error, caching is not vital. + pass + else: + g = grammar.Grammar() + g.load(gp) + return g + + +def _newer(a: Text, b: Text) -> bool: + """Inquire whether file a was written since file b.""" + if not os.path.exists(a): + return False + if not os.path.exists(b): + return True + return os.path.getmtime(a) >= os.path.getmtime(b) + + +def load_packaged_grammar( + package: str, grammar_source: Text, cache_dir: Optional[Path] = None +) -> grammar.Grammar: + """Normally, loads a pickled grammar by doing + pkgutil.get_data(package, pickled_grammar) + where *pickled_grammar* is computed from *grammar_source* by adding the + Python version and using a ``.pickle`` extension. + + However, if *grammar_source* is an extant file, load_grammar(grammar_source) + is called instead. This facilitates using a packaged grammar file when needed + but preserves load_grammar's automatic regeneration behavior when possible. + + """ + if os.path.isfile(grammar_source): + gp = _generate_pickle_name(grammar_source, cache_dir) if cache_dir else None + return load_grammar(grammar_source, gp=gp) + pickled_name = _generate_pickle_name(os.path.basename(grammar_source), cache_dir) + data = pkgutil.get_data(package, pickled_name) + assert data is not None + g = grammar.Grammar() + g.loads(data) + return g + + +def main(*args: Text) -> bool: + """Main program, when run as a script: produce grammar pickle files. + + Calls load_grammar for each argument, a path to a grammar text file. + """ + if not args: + args = tuple(sys.argv[1:]) + logging.basicConfig(level=logging.INFO, stream=sys.stdout, format="%(message)s") + for gt in args: + load_grammar(gt, save=True, force=True) + return True + + +if __name__ == "__main__": + sys.exit(int(not main())) diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/grammar.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/blib2to3/pgen2/grammar.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..e958bae5 Binary files /dev/null and b/env/lib/python3.8/site-packages/blib2to3/pgen2/grammar.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/grammar.py b/env/lib/python3.8/site-packages/blib2to3/pgen2/grammar.py new file mode 100644 index 00000000..337a64f1 --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pgen2/grammar.py @@ -0,0 +1,227 @@ +# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""This module defines the data structures used to represent a grammar. + +These are a bit arcane because they are derived from the data +structures used by Python's 'pgen' parser generator. + +There's also a table here mapping operators to their names in the +token module; the Python tokenize module reports all operators as the +fallback token code OP, but the parser needs the actual token code. 
+ +""" + +# Python imports +import os +import pickle +import tempfile +from typing import Any, Dict, List, Optional, Text, Tuple, TypeVar, Union + +# Local imports +from . import token + +_P = TypeVar("_P", bound="Grammar") +Label = Tuple[int, Optional[Text]] +DFA = List[List[Tuple[int, int]]] +DFAS = Tuple[DFA, Dict[int, int]] +Path = Union[str, "os.PathLike[str]"] + + +class Grammar(object): + """Pgen parsing tables conversion class. + + Once initialized, this class supplies the grammar tables for the + parsing engine implemented by parse.py. The parsing engine + accesses the instance variables directly. The class here does not + provide initialization of the tables; several subclasses exist to + do this (see the conv and pgen modules). + + The load() method reads the tables from a pickle file, which is + much faster than the other ways offered by subclasses. The pickle + file is written by calling dump() (after loading the grammar + tables using a subclass). The report() method prints a readable + representation of the tables to stdout, for debugging. + + The instance variables are as follows: + + symbol2number -- a dict mapping symbol names to numbers. Symbol + numbers are always 256 or higher, to distinguish + them from token numbers, which are between 0 and + 255 (inclusive). + + number2symbol -- a dict mapping numbers to symbol names; + these two are each other's inverse. + + states -- a list of DFAs, where each DFA is a list of + states, each state is a list of arcs, and each + arc is a (i, j) pair where i is a label and j is + a state number. The DFA number is the index into + this list. (This name is slightly confusing.) + Final states are represented by a special arc of + the form (0, j) where j is its own state number. + + dfas -- a dict mapping symbol numbers to (DFA, first) + pairs, where DFA is an item from the states list + above, and first is a set of tokens that can + begin this grammar rule (represented by a dict + whose values are always 1). + + labels -- a list of (x, y) pairs where x is either a token + number or a symbol number, and y is either None + or a string; the strings are keywords. The label + number is the index in this list; label numbers + are used to mark state transitions (arcs) in the + DFAs. + + start -- the number of the grammar's start symbol. + + keywords -- a dict mapping keyword strings to arc labels. + + tokens -- a dict mapping token numbers to arc labels. 
+ + """ + + def __init__(self) -> None: + self.symbol2number: Dict[str, int] = {} + self.number2symbol: Dict[int, str] = {} + self.states: List[DFA] = [] + self.dfas: Dict[int, DFAS] = {} + self.labels: List[Label] = [(0, "EMPTY")] + self.keywords: Dict[str, int] = {} + self.soft_keywords: Dict[str, int] = {} + self.tokens: Dict[int, int] = {} + self.symbol2label: Dict[str, int] = {} + self.version: Tuple[int, int] = (0, 0) + self.start = 256 + # Python 3.7+ parses async as a keyword, not an identifier + self.async_keywords = False + + def dump(self, filename: Path) -> None: + """Dump the grammar tables to a pickle file.""" + + # mypyc generates objects that don't have a __dict__, but they + # do have __getstate__ methods that will return an equivalent + # dictionary + if hasattr(self, "__dict__"): + d = self.__dict__ + else: + d = self.__getstate__() # type: ignore + + with tempfile.NamedTemporaryFile( + dir=os.path.dirname(filename), delete=False + ) as f: + pickle.dump(d, f, pickle.HIGHEST_PROTOCOL) + os.replace(f.name, filename) + + def _update(self, attrs: Dict[str, Any]) -> None: + for k, v in attrs.items(): + setattr(self, k, v) + + def load(self, filename: Path) -> None: + """Load the grammar tables from a pickle file.""" + with open(filename, "rb") as f: + d = pickle.load(f) + self._update(d) + + def loads(self, pkl: bytes) -> None: + """Load the grammar tables from a pickle bytes object.""" + self._update(pickle.loads(pkl)) + + def copy(self: _P) -> _P: + """ + Copy the grammar. + """ + new = self.__class__() + for dict_attr in ( + "symbol2number", + "number2symbol", + "dfas", + "keywords", + "soft_keywords", + "tokens", + "symbol2label", + ): + setattr(new, dict_attr, getattr(self, dict_attr).copy()) + new.labels = self.labels[:] + new.states = self.states[:] + new.start = self.start + new.version = self.version + new.async_keywords = self.async_keywords + return new + + def report(self) -> None: + """Dump the grammar tables to standard output, for debugging.""" + from pprint import pprint + + print("s2n") + pprint(self.symbol2number) + print("n2s") + pprint(self.number2symbol) + print("states") + pprint(self.states) + print("dfas") + pprint(self.dfas) + print("labels") + pprint(self.labels) + print("start", self.start) + + +# Map from operator to number (since tokenize doesn't do this) + +opmap_raw = """ +( LPAR +) RPAR +[ LSQB +] RSQB +: COLON +, COMMA +; SEMI ++ PLUS +- MINUS +* STAR +/ SLASH +| VBAR +& AMPER +< LESS +> GREATER += EQUAL +. 
DOT +% PERCENT +` BACKQUOTE +{ LBRACE +} RBRACE +@ AT +@= ATEQUAL +== EQEQUAL +!= NOTEQUAL +<> NOTEQUAL +<= LESSEQUAL +>= GREATEREQUAL +~ TILDE +^ CIRCUMFLEX +<< LEFTSHIFT +>> RIGHTSHIFT +** DOUBLESTAR ++= PLUSEQUAL +-= MINEQUAL +*= STAREQUAL +/= SLASHEQUAL +%= PERCENTEQUAL +&= AMPEREQUAL +|= VBAREQUAL +^= CIRCUMFLEXEQUAL +<<= LEFTSHIFTEQUAL +>>= RIGHTSHIFTEQUAL +**= DOUBLESTAREQUAL +// DOUBLESLASH +//= DOUBLESLASHEQUAL +-> RARROW +:= COLONEQUAL +""" + +opmap = {} +for line in opmap_raw.splitlines(): + if line: + op, name = line.split() + opmap[op] = getattr(token, name) diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/literals.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/blib2to3/pgen2/literals.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..17eaf259 Binary files /dev/null and b/env/lib/python3.8/site-packages/blib2to3/pgen2/literals.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/literals.py b/env/lib/python3.8/site-packages/blib2to3/pgen2/literals.py new file mode 100644 index 00000000..b5fe4285 --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pgen2/literals.py @@ -0,0 +1,68 @@ +# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Safely evaluate Python string literals without using eval().""" + +import re + +from typing import Dict, Match, Text + + +simple_escapes: Dict[Text, Text] = { + "a": "\a", + "b": "\b", + "f": "\f", + "n": "\n", + "r": "\r", + "t": "\t", + "v": "\v", + "'": "'", + '"': '"', + "\\": "\\", +} + + +def escape(m: Match[Text]) -> Text: + all, tail = m.group(0, 1) + assert all.startswith("\\") + esc = simple_escapes.get(tail) + if esc is not None: + return esc + if tail.startswith("x"): + hexes = tail[1:] + if len(hexes) < 2: + raise ValueError("invalid hex string escape ('\\%s')" % tail) + try: + i = int(hexes, 16) + except ValueError: + raise ValueError("invalid hex string escape ('\\%s')" % tail) from None + else: + try: + i = int(tail, 8) + except ValueError: + raise ValueError("invalid octal string escape ('\\%s')" % tail) from None + return chr(i) + + +def evalString(s: Text) -> Text: + assert s.startswith("'") or s.startswith('"'), repr(s[:1]) + q = s[0] + if s[:3] == q * 3: + q = q * 3 + assert s.endswith(q), repr(s[-len(q) :]) + assert len(s) >= 2 * len(q) + s = s[len(q) : -len(q)] + return re.sub(r"\\(\'|\"|\\|[abfnrtv]|x.{0,2}|[0-7]{1,3})", escape, s) + + +def test() -> None: + for i in range(256): + c = chr(i) + s = repr(c) + e = evalString(s) + if e != c: + print(i, c, s, e) + + +if __name__ == "__main__": + test() diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/parse.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/blib2to3/pgen2/parse.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..09a85ea2 Binary files /dev/null and b/env/lib/python3.8/site-packages/blib2to3/pgen2/parse.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/parse.py b/env/lib/python3.8/site-packages/blib2to3/pgen2/parse.py new file mode 100644 index 00000000..d6deaac6 --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pgen2/parse.py @@ -0,0 +1,393 @@ +# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Parser engine for the grammar tables generated by pgen. + +The grammar table must be loaded first. 
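A typical way to obtain one (a sketch; pygram.py in this package wraps
the loading):

    from blib2to3 import pygram
    pygram.initialize()                         # load the pickled tables
    g = pygram.python_grammar_soft_keywords     # a pgen2.grammar.Grammar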
+ +See Parser/parser.c in the Python distribution for additional info on +how this parsing engine works. + +""" +import copy +from contextlib import contextmanager + +# Local imports +from . import grammar, token, tokenize +from typing import ( + cast, + Any, + Optional, + Text, + Union, + Tuple, + Dict, + List, + Iterator, + Callable, + Set, + TYPE_CHECKING, +) +from blib2to3.pgen2.grammar import Grammar +from blib2to3.pytree import convert, NL, Context, RawNode, Leaf, Node + +if TYPE_CHECKING: + from blib2to3.driver import TokenProxy + + +Results = Dict[Text, NL] +Convert = Callable[[Grammar, RawNode], Union[Node, Leaf]] +DFA = List[List[Tuple[int, int]]] +DFAS = Tuple[DFA, Dict[int, int]] + + +def lam_sub(grammar: Grammar, node: RawNode) -> NL: + assert node[3] is not None + return Node(type=node[0], children=node[3], context=node[2]) + + +# A placeholder node, used when parser is backtracking. +DUMMY_NODE = (-1, None, None, None) + + +def stack_copy( + stack: List[Tuple[DFAS, int, RawNode]] +) -> List[Tuple[DFAS, int, RawNode]]: + """Nodeless stack copy.""" + return [(dfa, label, DUMMY_NODE) for dfa, label, _ in stack] + + +class Recorder: + def __init__(self, parser: "Parser", ilabels: List[int], context: Context) -> None: + self.parser = parser + self._ilabels = ilabels + self.context = context # not really matter + + self._dead_ilabels: Set[int] = set() + self._start_point = self.parser.stack + self._points = {ilabel: stack_copy(self._start_point) for ilabel in ilabels} + + @property + def ilabels(self) -> Set[int]: + return self._dead_ilabels.symmetric_difference(self._ilabels) + + @contextmanager + def switch_to(self, ilabel: int) -> Iterator[None]: + with self.backtrack(): + self.parser.stack = self._points[ilabel] + try: + yield + except ParseError: + self._dead_ilabels.add(ilabel) + finally: + self.parser.stack = self._start_point + + @contextmanager + def backtrack(self) -> Iterator[None]: + """ + Use the node-level invariant ones for basic parsing operations (push/pop/shift). + These still will operate on the stack; but they won't create any new nodes, or + modify the contents of any other existing nodes. + + This saves us a ton of time when we are backtracking, since we + want to restore to the initial state as quick as possible, which + can only be done by having as little mutatations as possible. 
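        A sketch of the intended use, as in switch_to() above:

            with self.backtrack():
                self.parser.stack = self._points[ilabel]  # DUMMY_NODE copies
                ...  # push/pop/shift now only track states, not real nodes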
+ """ + is_backtracking = self.parser.is_backtracking + try: + self.parser.is_backtracking = True + yield + finally: + self.parser.is_backtracking = is_backtracking + + def add_token(self, tok_type: int, tok_val: Text, raw: bool = False) -> None: + func: Callable[..., Any] + if raw: + func = self.parser._addtoken + else: + func = self.parser.addtoken + + for ilabel in self.ilabels: + with self.switch_to(ilabel): + args = [tok_type, tok_val, self.context] + if raw: + args.insert(0, ilabel) + func(*args) + + def determine_route(self, value: Optional[Text] = None, force: bool = False) -> Optional[int]: + alive_ilabels = self.ilabels + if len(alive_ilabels) == 0: + *_, most_successful_ilabel = self._dead_ilabels + raise ParseError("bad input", most_successful_ilabel, value, self.context) + + ilabel, *rest = alive_ilabels + if force or not rest: + return ilabel + else: + return None + + +class ParseError(Exception): + """Exception to signal the parser is stuck.""" + + def __init__( + self, msg: Text, type: Optional[int], value: Optional[Text], context: Context + ) -> None: + Exception.__init__( + self, "%s: type=%r, value=%r, context=%r" % (msg, type, value, context) + ) + self.msg = msg + self.type = type + self.value = value + self.context = context + + +class Parser(object): + """Parser engine. + + The proper usage sequence is: + + p = Parser(grammar, [converter]) # create instance + p.setup([start]) # prepare for parsing + : + if p.addtoken(...): # parse a token; may raise ParseError + break + root = p.rootnode # root of abstract syntax tree + + A Parser instance may be reused by calling setup() repeatedly. + + A Parser instance contains state pertaining to the current token + sequence, and should not be used concurrently by different threads + to parse separate token sequences. + + See driver.py for how to get input tokens by tokenizing a file or + string. + + Parsing is complete when addtoken() returns True; the root of the + abstract syntax tree can then be retrieved from the rootnode + instance variable. When a syntax error occurs, addtoken() raises + the ParseError exception. There is no error recovery; the parser + cannot be used after a syntax error was reported (but it can be + reinitialized by calling setup()). + + """ + + def __init__(self, grammar: Grammar, convert: Optional[Convert] = None) -> None: + """Constructor. + + The grammar argument is a grammar.Grammar instance; see the + grammar module for more information. + + The parser is not ready yet for parsing; you must call the + setup() method to get it started. + + The optional convert argument is a function mapping concrete + syntax tree nodes to abstract syntax tree nodes. If not + given, no conversion is done and the syntax tree produced is + the concrete syntax tree. If given, it must be a function of + two arguments, the first being the grammar (a grammar.Grammar + instance), and the second being the concrete syntax tree node + to be converted. The syntax tree is converted from the bottom + up. + + **post-note: the convert argument is ignored since for Black's + usage, convert will always be blib2to3.pytree.convert. Allowing + this to be dynamic hurts mypyc's ability to use early binding. + These docs are left for historical and informational value. 
+ + A concrete syntax tree node is a (type, value, context, nodes) + tuple, where type is the node type (a token or symbol number), + value is None for symbols and a string for tokens, context is + None or an opaque value used for error reporting (typically a + (lineno, offset) pair), and nodes is a list of children for + symbols, and None for tokens. + + An abstract syntax tree node may be anything; this is entirely + up to the converter function. + + """ + self.grammar = grammar + # See note in docstring above. TL;DR this is ignored. + self.convert = convert or lam_sub + self.is_backtracking = False + + def setup(self, proxy: "TokenProxy", start: Optional[int] = None) -> None: + """Prepare for parsing. + + This *must* be called before starting to parse. + + The optional argument is an alternative start symbol; it + defaults to the grammar's start symbol. + + You can use a Parser instance to parse any number of programs; + each time you call setup() the parser is reset to an initial + state determined by the (implicit or explicit) start symbol. + + """ + if start is None: + start = self.grammar.start + # Each stack entry is a tuple: (dfa, state, node). + # A node is a tuple: (type, value, context, children), + # where children is a list of nodes or None, and context may be None. + newnode: RawNode = (start, None, None, []) + stackentry = (self.grammar.dfas[start], 0, newnode) + self.stack: List[Tuple[DFAS, int, RawNode]] = [stackentry] + self.rootnode: Optional[NL] = None + self.used_names: Set[str] = set() + self.proxy = proxy + + def addtoken(self, type: int, value: Text, context: Context) -> bool: + """Add a token; return True iff this is the end of the program.""" + # Map from token to label + ilabels = self.classify(type, value, context) + assert len(ilabels) >= 1 + + # If we have only one state to advance, we'll directly + # take it as is. + if len(ilabels) == 1: + [ilabel] = ilabels + return self._addtoken(ilabel, type, value, context) + + # If there are multiple states which we can advance (only + # happen under soft-keywords), then we will try all of them + # in parallel and as soon as one state can reach further than + # the rest, we'll choose that one. This is a pretty hacky + # and hopefully temporary algorithm. 
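        # Example: for the NAME token "match" under a soft-keyword grammar,
        # classify() returns two ilabels -- soft keyword and plain NAME --
        # and the Recorder advances both on copied stacks until lookahead
        # leaves only one interpretation alive.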
+ # + # For a more detailed explanation, check out this post: + # https://tree.science/what-the-backtracking.html + + with self.proxy.release() as proxy: + counter, force = 0, False + recorder = Recorder(self, ilabels, context) + recorder.add_token(type, value, raw=True) + + next_token_value = value + while recorder.determine_route(next_token_value) is None: + if not proxy.can_advance(counter): + force = True + break + + next_token_type, next_token_value, *_ = proxy.eat(counter) + if next_token_type in (tokenize.COMMENT, tokenize.NL): + counter += 1 + continue + + if next_token_type == tokenize.OP: + next_token_type = grammar.opmap[next_token_value] + + recorder.add_token(next_token_type, next_token_value) + counter += 1 + + ilabel = cast(int, recorder.determine_route(next_token_value, force=force)) + assert ilabel is not None + + return self._addtoken(ilabel, type, value, context) + + def _addtoken(self, ilabel: int, type: int, value: Text, context: Context) -> bool: + # Loop until the token is shifted; may raise exceptions + while True: + dfa, state, node = self.stack[-1] + states, first = dfa + arcs = states[state] + # Look for a state with this label + for i, newstate in arcs: + t = self.grammar.labels[i][0] + if t >= 256: + # See if it's a symbol and if we're in its first set + itsdfa = self.grammar.dfas[t] + itsstates, itsfirst = itsdfa + if ilabel in itsfirst: + # Push a symbol + self.push(t, itsdfa, newstate, context) + break # To continue the outer while loop + + elif ilabel == i: + # Look it up in the list of labels + # Shift a token; we're done with it + self.shift(type, value, newstate, context) + # Pop while we are in an accept-only state + state = newstate + while states[state] == [(0, state)]: + self.pop() + if not self.stack: + # Done parsing! + return True + dfa, state, node = self.stack[-1] + states, first = dfa + # Done with this token + return False + + else: + if (0, state) in arcs: + # An accepting state, pop it and try something else + self.pop() + if not self.stack: + # Done parsing, but another token is input + raise ParseError("too much input", type, value, context) + else: + # No success finding a transition + raise ParseError("bad input", type, value, context) + + def classify(self, type: int, value: Text, context: Context) -> List[int]: + """Turn a token into a label. (Internal) + + Depending on whether the value is a soft-keyword or not, + this function may return multiple labels to choose from.""" + if type == token.NAME: + # Keep a listing of all used names + self.used_names.add(value) + # Check for reserved words + if value in self.grammar.keywords: + return [self.grammar.keywords[value]] + elif value in self.grammar.soft_keywords: + assert type in self.grammar.tokens + return [ + self.grammar.soft_keywords[value], + self.grammar.tokens[type], + ] + + ilabel = self.grammar.tokens.get(type) + if ilabel is None: + raise ParseError("bad token", type, value, context) + return [ilabel] + + def shift(self, type: int, value: Text, newstate: int, context: Context) -> None: + """Shift a token. (Internal)""" + if self.is_backtracking: + dfa, state, _ = self.stack[-1] + self.stack[-1] = (dfa, newstate, DUMMY_NODE) + else: + dfa, state, node = self.stack[-1] + rawnode: RawNode = (type, value, context, None) + newnode = convert(self.grammar, rawnode) + assert node[-1] is not None + node[-1].append(newnode) + self.stack[-1] = (dfa, newstate, node) + + def push(self, type: int, newdfa: DFAS, newstate: int, context: Context) -> None: + """Push a nonterminal. 
(Internal)""" + if self.is_backtracking: + dfa, state, _ = self.stack[-1] + self.stack[-1] = (dfa, newstate, DUMMY_NODE) + self.stack.append((newdfa, 0, DUMMY_NODE)) + else: + dfa, state, node = self.stack[-1] + newnode: RawNode = (type, None, context, []) + self.stack[-1] = (dfa, newstate, node) + self.stack.append((newdfa, 0, newnode)) + + def pop(self) -> None: + """Pop a nonterminal. (Internal)""" + if self.is_backtracking: + self.stack.pop() + else: + popdfa, popstate, popnode = self.stack.pop() + newnode = convert(self.grammar, popnode) + if self.stack: + dfa, state, node = self.stack[-1] + assert node[-1] is not None + node[-1].append(newnode) + else: + self.rootnode = newnode + self.rootnode.used_names = self.used_names diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/pgen.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/blib2to3/pgen2/pgen.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..48b68a73 Binary files /dev/null and b/env/lib/python3.8/site-packages/blib2to3/pgen2/pgen.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/pgen.py b/env/lib/python3.8/site-packages/blib2to3/pgen2/pgen.py new file mode 100644 index 00000000..631682a7 --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pgen2/pgen.py @@ -0,0 +1,433 @@ +# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +# Pgen imports +from . import grammar, token, tokenize + +from typing import ( + Any, + Dict, + IO, + Iterator, + List, + Optional, + Text, + Tuple, + Union, + Sequence, + NoReturn, +) +from blib2to3.pgen2 import grammar +from blib2to3.pgen2.tokenize import GoodTokenInfo +import os + + +Path = Union[str, "os.PathLike[str]"] + + +class PgenGrammar(grammar.Grammar): + pass + + +class ParserGenerator(object): + + filename: Path + stream: IO[Text] + generator: Iterator[GoodTokenInfo] + first: Dict[Text, Optional[Dict[Text, int]]] + + def __init__(self, filename: Path, stream: Optional[IO[Text]] = None) -> None: + close_stream = None + if stream is None: + stream = open(filename) + close_stream = stream.close + self.filename = filename + self.stream = stream + self.generator = tokenize.generate_tokens(stream.readline) + self.gettoken() # Initialize lookahead + self.dfas, self.startsymbol = self.parse() + if close_stream is not None: + close_stream() + self.first = {} # map from symbol name to set of tokens + self.addfirstsets() + + def make_grammar(self) -> PgenGrammar: + c = PgenGrammar() + names = list(self.dfas.keys()) + names.sort() + names.remove(self.startsymbol) + names.insert(0, self.startsymbol) + for name in names: + i = 256 + len(c.symbol2number) + c.symbol2number[name] = i + c.number2symbol[i] = name + for name in names: + dfa = self.dfas[name] + states = [] + for state in dfa: + arcs = [] + for label, next in sorted(state.arcs.items()): + arcs.append((self.make_label(c, label), dfa.index(next))) + if state.isfinal: + arcs.append((0, dfa.index(state))) + states.append(arcs) + c.states.append(states) + c.dfas[c.symbol2number[name]] = (states, self.make_first(c, name)) + c.start = c.symbol2number[self.startsymbol] + return c + + def make_first(self, c: PgenGrammar, name: Text) -> Dict[int, int]: + rawfirst = self.first[name] + assert rawfirst is not None + first = {} + for label in sorted(rawfirst): + ilabel = self.make_label(c, label) + ##assert ilabel not in first # XXX failed on <> ... 
!= + first[ilabel] = 1 + return first + + def make_label(self, c: PgenGrammar, label: Text) -> int: + # XXX Maybe this should be a method on a subclass of converter? + ilabel = len(c.labels) + if label[0].isalpha(): + # Either a symbol name or a named token + if label in c.symbol2number: + # A symbol name (a non-terminal) + if label in c.symbol2label: + return c.symbol2label[label] + else: + c.labels.append((c.symbol2number[label], None)) + c.symbol2label[label] = ilabel + return ilabel + else: + # A named token (NAME, NUMBER, STRING) + itoken = getattr(token, label, None) + assert isinstance(itoken, int), label + assert itoken in token.tok_name, label + if itoken in c.tokens: + return c.tokens[itoken] + else: + c.labels.append((itoken, None)) + c.tokens[itoken] = ilabel + return ilabel + else: + # Either a keyword or an operator + assert label[0] in ('"', "'"), label + value = eval(label) + if value[0].isalpha(): + if label[0] == '"': + keywords = c.soft_keywords + else: + keywords = c.keywords + + # A keyword + if value in keywords: + return keywords[value] + else: + c.labels.append((token.NAME, value)) + keywords[value] = ilabel + return ilabel + else: + # An operator (any non-numeric token) + itoken = grammar.opmap[value] # Fails if unknown token + if itoken in c.tokens: + return c.tokens[itoken] + else: + c.labels.append((itoken, None)) + c.tokens[itoken] = ilabel + return ilabel + + def addfirstsets(self) -> None: + names = list(self.dfas.keys()) + names.sort() + for name in names: + if name not in self.first: + self.calcfirst(name) + # print name, self.first[name].keys() + + def calcfirst(self, name: Text) -> None: + dfa = self.dfas[name] + self.first[name] = None # dummy to detect left recursion + state = dfa[0] + totalset: Dict[str, int] = {} + overlapcheck = {} + for label, next in state.arcs.items(): + if label in self.dfas: + if label in self.first: + fset = self.first[label] + if fset is None: + raise ValueError("recursion for rule %r" % name) + else: + self.calcfirst(label) + fset = self.first[label] + assert fset is not None + totalset.update(fset) + overlapcheck[label] = fset + else: + totalset[label] = 1 + overlapcheck[label] = {label: 1} + inverse: Dict[str, str] = {} + for label, itsfirst in overlapcheck.items(): + for symbol in itsfirst: + if symbol in inverse: + raise ValueError( + "rule %s is ambiguous; %s is in the first sets of %s as well" + " as %s" % (name, symbol, label, inverse[symbol]) + ) + inverse[symbol] = label + self.first[name] = totalset + + def parse(self) -> Tuple[Dict[Text, List["DFAState"]], Text]: + dfas = {} + startsymbol: Optional[str] = None + # MSTART: (NEWLINE | RULE)* ENDMARKER + while self.type != token.ENDMARKER: + while self.type == token.NEWLINE: + self.gettoken() + # RULE: NAME ':' RHS NEWLINE + name = self.expect(token.NAME) + self.expect(token.OP, ":") + a, z = self.parse_rhs() + self.expect(token.NEWLINE) + # self.dump_nfa(name, a, z) + dfa = self.make_dfa(a, z) + # self.dump_dfa(name, dfa) + oldlen = len(dfa) + self.simplify_dfa(dfa) + newlen = len(dfa) + dfas[name] = dfa + # print name, oldlen, newlen + if startsymbol is None: + startsymbol = name + assert startsymbol is not None + return dfas, startsymbol + + def make_dfa(self, start: "NFAState", finish: "NFAState") -> List["DFAState"]: + # To turn an NFA into a DFA, we define the states of the DFA + # to correspond to *sets* of states of the NFA. Then do some + # state reduction. Let's represent sets as dicts with 1 for + # values. 
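        # Sketch of the subset construction below:
        #   closure(s)  -- s plus every NFA state reachable from s through
        #                  epsilon arcs (arcs whose label is None)
        #   a DFAState  -- one such set; for each labeled arc out of the
        #                  set we take the closure of the targets, reusing
        #                  an existing DFAState when the set already exists.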
+ assert isinstance(start, NFAState) + assert isinstance(finish, NFAState) + + def closure(state: NFAState) -> Dict[NFAState, int]: + base: Dict[NFAState, int] = {} + addclosure(state, base) + return base + + def addclosure(state: NFAState, base: Dict[NFAState, int]) -> None: + assert isinstance(state, NFAState) + if state in base: + return + base[state] = 1 + for label, next in state.arcs: + if label is None: + addclosure(next, base) + + states = [DFAState(closure(start), finish)] + for state in states: # NB states grows while we're iterating + arcs: Dict[str, Dict[NFAState, int]] = {} + for nfastate in state.nfaset: + for label, next in nfastate.arcs: + if label is not None: + addclosure(next, arcs.setdefault(label, {})) + for label, nfaset in sorted(arcs.items()): + for st in states: + if st.nfaset == nfaset: + break + else: + st = DFAState(nfaset, finish) + states.append(st) + state.addarc(st, label) + return states # List of DFAState instances; first one is start + + def dump_nfa(self, name: Text, start: "NFAState", finish: "NFAState") -> None: + print("Dump of NFA for", name) + todo = [start] + for i, state in enumerate(todo): + print(" State", i, state is finish and "(final)" or "") + for label, next in state.arcs: + if next in todo: + j = todo.index(next) + else: + j = len(todo) + todo.append(next) + if label is None: + print(" -> %d" % j) + else: + print(" %s -> %d" % (label, j)) + + def dump_dfa(self, name: Text, dfa: Sequence["DFAState"]) -> None: + print("Dump of DFA for", name) + for i, state in enumerate(dfa): + print(" State", i, state.isfinal and "(final)" or "") + for label, next in sorted(state.arcs.items()): + print(" %s -> %d" % (label, dfa.index(next))) + + def simplify_dfa(self, dfa: List["DFAState"]) -> None: + # This is not theoretically optimal, but works well enough. + # Algorithm: repeatedly look for two states that have the same + # set of arcs (same labels pointing to the same nodes) and + # unify them, until things stop changing. 
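        # Example: if state 5 has the same isfinal flag and exactly the same
        # arcs as state 2 (say {"NAME": 3, "(": 4}), state 5 is deleted and
        # every arc pointing at it is redirected to 2 via unifystate().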
+ + # dfa is a list of DFAState instances + changes = True + while changes: + changes = False + for i, state_i in enumerate(dfa): + for j in range(i + 1, len(dfa)): + state_j = dfa[j] + if state_i == state_j: + # print " unify", i, j + del dfa[j] + for state in dfa: + state.unifystate(state_j, state_i) + changes = True + break + + def parse_rhs(self) -> Tuple["NFAState", "NFAState"]: + # RHS: ALT ('|' ALT)* + a, z = self.parse_alt() + if self.value != "|": + return a, z + else: + aa = NFAState() + zz = NFAState() + aa.addarc(a) + z.addarc(zz) + while self.value == "|": + self.gettoken() + a, z = self.parse_alt() + aa.addarc(a) + z.addarc(zz) + return aa, zz + + def parse_alt(self) -> Tuple["NFAState", "NFAState"]: + # ALT: ITEM+ + a, b = self.parse_item() + while self.value in ("(", "[") or self.type in (token.NAME, token.STRING): + c, d = self.parse_item() + b.addarc(c) + b = d + return a, b + + def parse_item(self) -> Tuple["NFAState", "NFAState"]: + # ITEM: '[' RHS ']' | ATOM ['+' | '*'] + if self.value == "[": + self.gettoken() + a, z = self.parse_rhs() + self.expect(token.OP, "]") + a.addarc(z) + return a, z + else: + a, z = self.parse_atom() + value = self.value + if value not in ("+", "*"): + return a, z + self.gettoken() + z.addarc(a) + if value == "+": + return a, z + else: + return a, a + + def parse_atom(self) -> Tuple["NFAState", "NFAState"]: + # ATOM: '(' RHS ')' | NAME | STRING + if self.value == "(": + self.gettoken() + a, z = self.parse_rhs() + self.expect(token.OP, ")") + return a, z + elif self.type in (token.NAME, token.STRING): + a = NFAState() + z = NFAState() + a.addarc(z, self.value) + self.gettoken() + return a, z + else: + self.raise_error( + "expected (...) or NAME or STRING, got %s/%s", self.type, self.value + ) + assert False + + def expect(self, type: int, value: Optional[Any] = None) -> Text: + if self.type != type or (value is not None and self.value != value): + self.raise_error( + "expected %s/%s, got %s/%s", type, value, self.type, self.value + ) + value = self.value + self.gettoken() + return value + + def gettoken(self) -> None: + tup = next(self.generator) + while tup[0] in (tokenize.COMMENT, tokenize.NL): + tup = next(self.generator) + self.type, self.value, self.begin, self.end, self.line = tup + # print token.tok_name[self.type], repr(self.value) + + def raise_error(self, msg: str, *args: Any) -> NoReturn: + if args: + try: + msg = msg % args + except: + msg = " ".join([msg] + list(map(str, args))) + raise SyntaxError(msg, (self.filename, self.end[0], self.end[1], self.line)) + + +class NFAState(object): + arcs: List[Tuple[Optional[Text], "NFAState"]] + + def __init__(self) -> None: + self.arcs = [] # list of (label, NFAState) pairs + + def addarc(self, next: "NFAState", label: Optional[Text] = None) -> None: + assert label is None or isinstance(label, str) + assert isinstance(next, NFAState) + self.arcs.append((label, next)) + + +class DFAState(object): + nfaset: Dict[NFAState, Any] + isfinal: bool + arcs: Dict[Text, "DFAState"] + + def __init__(self, nfaset: Dict[NFAState, Any], final: NFAState) -> None: + assert isinstance(nfaset, dict) + assert isinstance(next(iter(nfaset)), NFAState) + assert isinstance(final, NFAState) + self.nfaset = nfaset + self.isfinal = final in nfaset + self.arcs = {} # map from label to DFAState + + def addarc(self, next: "DFAState", label: Text) -> None: + assert isinstance(label, str) + assert label not in self.arcs + assert isinstance(next, DFAState) + self.arcs[label] = next + + def unifystate(self, old: "DFAState", 
new: "DFAState") -> None: + for label, next in self.arcs.items(): + if next is old: + self.arcs[label] = new + + def __eq__(self, other: Any) -> bool: + # Equality test -- ignore the nfaset instance variable + assert isinstance(other, DFAState) + if self.isfinal != other.isfinal: + return False + # Can't just return self.arcs == other.arcs, because that + # would invoke this method recursively, with cycles... + if len(self.arcs) != len(other.arcs): + return False + for label, next in self.arcs.items(): + if next is not other.arcs.get(label): + return False + return True + + __hash__: Any = None # For Py3 compatibility. + + +def generate_grammar(filename: Path = "Grammar.txt") -> PgenGrammar: + p = ParserGenerator(filename) + return p.make_grammar() diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/token.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/blib2to3/pgen2/token.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..cec39ce2 Binary files /dev/null and b/env/lib/python3.8/site-packages/blib2to3/pgen2/token.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/token.py b/env/lib/python3.8/site-packages/blib2to3/pgen2/token.py new file mode 100644 index 00000000..1e0dec9c --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pgen2/token.py @@ -0,0 +1,94 @@ +"""Token constants (from "token.h").""" + +import sys +from typing import Dict + +if sys.version_info < (3, 8): + from typing_extensions import Final +else: + from typing import Final + +# Taken from Python (r53757) and modified to include some tokens +# originally monkeypatched in by pgen2.tokenize + +# --start constants-- +ENDMARKER: Final = 0 +NAME: Final = 1 +NUMBER: Final = 2 +STRING: Final = 3 +NEWLINE: Final = 4 +INDENT: Final = 5 +DEDENT: Final = 6 +LPAR: Final = 7 +RPAR: Final = 8 +LSQB: Final = 9 +RSQB: Final = 10 +COLON: Final = 11 +COMMA: Final = 12 +SEMI: Final = 13 +PLUS: Final = 14 +MINUS: Final = 15 +STAR: Final = 16 +SLASH: Final = 17 +VBAR: Final = 18 +AMPER: Final = 19 +LESS: Final = 20 +GREATER: Final = 21 +EQUAL: Final = 22 +DOT: Final = 23 +PERCENT: Final = 24 +BACKQUOTE: Final = 25 +LBRACE: Final = 26 +RBRACE: Final = 27 +EQEQUAL: Final = 28 +NOTEQUAL: Final = 29 +LESSEQUAL: Final = 30 +GREATEREQUAL: Final = 31 +TILDE: Final = 32 +CIRCUMFLEX: Final = 33 +LEFTSHIFT: Final = 34 +RIGHTSHIFT: Final = 35 +DOUBLESTAR: Final = 36 +PLUSEQUAL: Final = 37 +MINEQUAL: Final = 38 +STAREQUAL: Final = 39 +SLASHEQUAL: Final = 40 +PERCENTEQUAL: Final = 41 +AMPEREQUAL: Final = 42 +VBAREQUAL: Final = 43 +CIRCUMFLEXEQUAL: Final = 44 +LEFTSHIFTEQUAL: Final = 45 +RIGHTSHIFTEQUAL: Final = 46 +DOUBLESTAREQUAL: Final = 47 +DOUBLESLASH: Final = 48 +DOUBLESLASHEQUAL: Final = 49 +AT: Final = 50 +ATEQUAL: Final = 51 +OP: Final = 52 +COMMENT: Final = 53 +NL: Final = 54 +RARROW: Final = 55 +AWAIT: Final = 56 +ASYNC: Final = 57 +ERRORTOKEN: Final = 58 +COLONEQUAL: Final = 59 +N_TOKENS: Final = 60 +NT_OFFSET: Final = 256 +# --end constants-- + +tok_name: Final[Dict[int, str]] = {} +for _name, _value in list(globals().items()): + if type(_value) is type(0): + tok_name[_value] = _name + + +def ISTERMINAL(x: int) -> bool: + return x < NT_OFFSET + + +def ISNONTERMINAL(x: int) -> bool: + return x >= NT_OFFSET + + +def ISEOF(x: int) -> bool: + return x == ENDMARKER diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/tokenize.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/blib2to3/pgen2/tokenize.cpython-38-x86_64-linux-gnu.so new 
file mode 100644 index 00000000..ceb5c54a Binary files /dev/null and b/env/lib/python3.8/site-packages/blib2to3/pgen2/tokenize.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/blib2to3/pgen2/tokenize.py b/env/lib/python3.8/site-packages/blib2to3/pgen2/tokenize.py new file mode 100644 index 00000000..257dbef4 --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pgen2/tokenize.py @@ -0,0 +1,688 @@ +# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation. +# All rights reserved. + +# mypy: allow-untyped-defs, allow-untyped-calls + +"""Tokenization help for Python programs. + +generate_tokens(readline) is a generator that breaks a stream of +text into Python tokens. It accepts a readline-like method which is called +repeatedly to get the next line of input (or "" for EOF). It generates +5-tuples with these members: + + the token type (see token.py) + the token (a string) + the starting (row, column) indices of the token (a 2-tuple of ints) + the ending (row, column) indices of the token (a 2-tuple of ints) + the original line (string) + +It is designed to match the working of the Python tokenizer exactly, except +that it produces COMMENT tokens for comments and gives type OP for all +operators + +Older entry points + tokenize_loop(readline, tokeneater) + tokenize(readline, tokeneater=printtoken) +are the same, except instead of generating tokens, tokeneater is a callback +function to which the 5 fields described above are passed as 5 arguments, +each time a new token is found.""" + +import sys +from typing import ( + Callable, + Iterable, + Iterator, + List, + Optional, + Text, + Tuple, + Pattern, + Union, + cast, +) + +if sys.version_info >= (3, 8): + from typing import Final +else: + from typing_extensions import Final + +from blib2to3.pgen2.token import * +from blib2to3.pgen2.grammar import Grammar + +__author__ = "Ka-Ping Yee " +__credits__ = "GvR, ESR, Tim Peters, Thomas Wouters, Fred Drake, Skip Montanaro" + +import re +from codecs import BOM_UTF8, lookup +from blib2to3.pgen2.token import * + +from . import token + +__all__ = [x for x in dir(token) if x[0] != "_"] + [ + "tokenize", + "generate_tokens", + "untokenize", +] +del token + + +def group(*choices): + return "(" + "|".join(choices) + ")" + + +def any(*choices): + return group(*choices) + "*" + + +def maybe(*choices): + return group(*choices) + "?" + + +def _combinations(*l): + return set(x + y for x in l for y in l + ("",) if x.casefold() != y.casefold()) + + +Whitespace = r"[ \f\t]*" +Comment = r"#[^\r\n]*" +Ignore = Whitespace + any(r"\\\r?\n" + Whitespace) + maybe(Comment) +Name = ( # this is invalid but it's fine because Name comes after Number in all groups + r"[^\s#\(\)\[\]\{\}+\-*/!@$%^&=|;:'\",\.<>/?`~\\]+" +) + +Binnumber = r"0[bB]_?[01]+(?:_[01]+)*" +Hexnumber = r"0[xX]_?[\da-fA-F]+(?:_[\da-fA-F]+)*[lL]?" +Octnumber = r"0[oO]?_?[0-7]+(?:_[0-7]+)*[lL]?" +Decnumber = group(r"[1-9]\d*(?:_\d+)*[lL]?", "0[lL]?") +Intnumber = group(Binnumber, Hexnumber, Octnumber, Decnumber) +Exponent = r"[eE][-+]?\d+(?:_\d+)*" +Pointfloat = group(r"\d+(?:_\d+)*\.(?:\d+(?:_\d+)*)?", r"\.\d+(?:_\d+)*") + maybe( + Exponent +) +Expfloat = r"\d+(?:_\d+)*" + Exponent +Floatnumber = group(Pointfloat, Expfloat) +Imagnumber = group(r"\d+(?:_\d+)*[jJ]", Floatnumber + r"[jJ]") +Number = group(Imagnumber, Floatnumber, Intnumber) + +# Tail end of ' string. +Single = r"[^'\\]*(?:\\.[^'\\]*)*'" +# Tail end of " string. +Double = r'[^"\\]*(?:\\.[^"\\]*)*"' +# Tail end of ''' string. 
+Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''" +# Tail end of """ string. +Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""' +_litprefix = r"(?:[uUrRbBfF]|[rR][fFbB]|[fFbBuU][rR])?" +Triple = group(_litprefix + "'''", _litprefix + '"""') +# Single-line ' or " string. +String = group( + _litprefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*'", + _litprefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*"', +) + +# Because of leftmost-then-longest match semantics, be sure to put the +# longest operators first (e.g., if = came before ==, == would get +# recognized as two instances of =). +Operator = group( + r"\*\*=?", + r">>=?", + r"<<=?", + r"<>", + r"!=", + r"//=?", + r"->", + r"[+\-*/%&@|^=<>:]=?", + r"~", +) + +Bracket = "[][(){}]" +Special = group(r"\r?\n", r"[:;.,`@]") +Funny = group(Operator, Bracket, Special) + +# First (or only) line of ' or " string. +ContStr = group( + _litprefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*" + group("'", r"\\\r?\n"), + _litprefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*' + group('"', r"\\\r?\n"), +) +PseudoExtras = group(r"\\\r?\n", Comment, Triple) +PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name) + +pseudoprog: Final = re.compile(PseudoToken, re.UNICODE) +single3prog = re.compile(Single3) +double3prog = re.compile(Double3) + +_strprefixes = ( + _combinations("r", "R", "f", "F") + | _combinations("r", "R", "b", "B") + | {"u", "U", "ur", "uR", "Ur", "UR"} +) + +endprogs: Final = { + "'": re.compile(Single), + '"': re.compile(Double), + "'''": single3prog, + '"""': double3prog, + **{f"{prefix}'''": single3prog for prefix in _strprefixes}, + **{f'{prefix}"""': double3prog for prefix in _strprefixes}, + **{prefix: None for prefix in _strprefixes}, +} + +triple_quoted: Final = ( + {"'''", '"""'} + | {f"{prefix}'''" for prefix in _strprefixes} + | {f'{prefix}"""' for prefix in _strprefixes} +) +single_quoted: Final = ( + {"'", '"'} + | {f"{prefix}'" for prefix in _strprefixes} + | {f'{prefix}"' for prefix in _strprefixes} +) + +tabsize = 8 + + +class TokenError(Exception): + pass + + +class StopTokenizing(Exception): + pass + + +def printtoken(type, token, xxx_todo_changeme, xxx_todo_changeme1, line): # for testing + (srow, scol) = xxx_todo_changeme + (erow, ecol) = xxx_todo_changeme1 + print( + "%d,%d-%d,%d:\t%s\t%s" % (srow, scol, erow, ecol, tok_name[type], repr(token)) + ) + + +Coord = Tuple[int, int] +TokenEater = Callable[[int, Text, Coord, Coord, Text], None] + + +def tokenize(readline: Callable[[], Text], tokeneater: TokenEater = printtoken) -> None: + """ + The tokenize() function accepts two parameters: one representing the + input stream, and one providing an output mechanism for tokenize(). + + The first parameter, readline, must be a callable object which provides + the same interface as the readline() method of built-in file objects. + Each call to the function should return one line of input as a string. + + The second parameter, tokeneater, must also be a callable object. It is + called once for each token, with five arguments, corresponding to the + tuples generated by generate_tokens(). 
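    For example (a sketch, with the default printtoken eater):

        import io
        tokenize(io.StringIO("x = 1\n").readline)
        # 1,0-1,1:  NAME      'x'
        # 1,2-1,3:  OP        '='
        # 1,4-1,5:  NUMBER    '1'
        # 1,5-1,6:  NEWLINE   '\n'
        # 2,0-2,0:  ENDMARKER ''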
+ """ + try: + tokenize_loop(readline, tokeneater) + except StopTokenizing: + pass + + +# backwards compatible interface +def tokenize_loop(readline, tokeneater): + for token_info in generate_tokens(readline): + tokeneater(*token_info) + + +GoodTokenInfo = Tuple[int, Text, Coord, Coord, Text] +TokenInfo = Union[Tuple[int, str], GoodTokenInfo] + + +class Untokenizer: + + tokens: List[Text] + prev_row: int + prev_col: int + + def __init__(self) -> None: + self.tokens = [] + self.prev_row = 1 + self.prev_col = 0 + + def add_whitespace(self, start: Coord) -> None: + row, col = start + assert row <= self.prev_row + col_offset = col - self.prev_col + if col_offset: + self.tokens.append(" " * col_offset) + + def untokenize(self, iterable: Iterable[TokenInfo]) -> Text: + for t in iterable: + if len(t) == 2: + self.compat(cast(Tuple[int, str], t), iterable) + break + tok_type, token, start, end, line = cast( + Tuple[int, Text, Coord, Coord, Text], t + ) + self.add_whitespace(start) + self.tokens.append(token) + self.prev_row, self.prev_col = end + if tok_type in (NEWLINE, NL): + self.prev_row += 1 + self.prev_col = 0 + return "".join(self.tokens) + + def compat(self, token: Tuple[int, Text], iterable: Iterable[TokenInfo]) -> None: + startline = False + indents = [] + toks_append = self.tokens.append + toknum, tokval = token + if toknum in (NAME, NUMBER): + tokval += " " + if toknum in (NEWLINE, NL): + startline = True + for tok in iterable: + toknum, tokval = tok[:2] + + if toknum in (NAME, NUMBER, ASYNC, AWAIT): + tokval += " " + + if toknum == INDENT: + indents.append(tokval) + continue + elif toknum == DEDENT: + indents.pop() + continue + elif toknum in (NEWLINE, NL): + startline = True + elif startline and indents: + toks_append(indents[-1]) + startline = False + toks_append(tokval) + + +cookie_re = re.compile(r"^[ \t\f]*#.*?coding[:=][ \t]*([-\w.]+)", re.ASCII) +blank_re = re.compile(rb"^[ \t\f]*(?:[#\r\n]|$)", re.ASCII) + + +def _get_normal_name(orig_enc: str) -> str: + """Imitates get_normal_name in tokenizer.c.""" + # Only care about the first 12 characters. + enc = orig_enc[:12].lower().replace("_", "-") + if enc == "utf-8" or enc.startswith("utf-8-"): + return "utf-8" + if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or enc.startswith( + ("latin-1-", "iso-8859-1-", "iso-latin-1-") + ): + return "iso-8859-1" + return orig_enc + + +def detect_encoding(readline: Callable[[], bytes]) -> Tuple[str, List[bytes]]: + """ + The detect_encoding() function is used to detect the encoding that should + be used to decode a Python source file. It requires one argument, readline, + in the same way as the tokenize() generator. + + It will call readline a maximum of twice, and return the encoding used + (as a string) and a list of any lines (left as bytes) it has read + in. + + It detects the encoding from the presence of a utf-8 bom or an encoding + cookie as specified in pep-0263. If both a bom and a cookie are present, but + disagree, a SyntaxError will be raised. If the encoding cookie is an invalid + charset, raise a SyntaxError. Note that if a utf-8 bom is found, + 'utf-8-sig' is returned. + + If no encoding is specified, then the default of 'utf-8' will be returned. 
+ """ + bom_found = False + encoding = None + default = "utf-8" + + def read_or_stop() -> bytes: + try: + return readline() + except StopIteration: + return bytes() + + def find_cookie(line: bytes) -> Optional[str]: + try: + line_string = line.decode("ascii") + except UnicodeDecodeError: + return None + match = cookie_re.match(line_string) + if not match: + return None + encoding = _get_normal_name(match.group(1)) + try: + codec = lookup(encoding) + except LookupError: + # This behaviour mimics the Python interpreter + raise SyntaxError("unknown encoding: " + encoding) + + if bom_found: + if codec.name != "utf-8": + # This behaviour mimics the Python interpreter + raise SyntaxError("encoding problem: utf-8") + encoding += "-sig" + return encoding + + first = read_or_stop() + if first.startswith(BOM_UTF8): + bom_found = True + first = first[3:] + default = "utf-8-sig" + if not first: + return default, [] + + encoding = find_cookie(first) + if encoding: + return encoding, [first] + if not blank_re.match(first): + return default, [first] + + second = read_or_stop() + if not second: + return default, [first] + + encoding = find_cookie(second) + if encoding: + return encoding, [first, second] + + return default, [first, second] + + +def untokenize(iterable: Iterable[TokenInfo]) -> Text: + """Transform tokens back into Python source code. + + Each element returned by the iterable must be a token sequence + with at least two elements, a token number and token value. If + only two tokens are passed, the resulting output is poor. + + Round-trip invariant for full input: + Untokenized source will match input source exactly + + Round-trip invariant for limited input: + # Output text will tokenize the back to the input + t1 = [tok[:2] for tok in generate_tokens(f.readline)] + newcode = untokenize(t1) + readline = iter(newcode.splitlines(1)).next + t2 = [tok[:2] for tokin generate_tokens(readline)] + assert t1 == t2 + """ + ut = Untokenizer() + return ut.untokenize(iterable) + + +def generate_tokens( + readline: Callable[[], Text], grammar: Optional[Grammar] = None +) -> Iterator[GoodTokenInfo]: + """ + The generate_tokens() generator requires one argument, readline, which + must be a callable object which provides the same interface as the + readline() method of built-in file objects. Each call to the function + should return one line of input as a string. Alternately, readline + can be a callable function terminating with StopIteration: + readline = open(myfile).next # Example of alternate readline + + The generator produces 5-tuples with these members: the token type; the + token string; a 2-tuple (srow, scol) of ints specifying the row and + column where the token begins in the source; a 2-tuple (erow, ecol) of + ints specifying the row and column where the token ends in the source; + and the line on which the token was found. The line passed is the + logical line; continuation lines are included. + """ + lnum = parenlev = continued = 0 + numchars: Final = "0123456789" + contstr, needcont = "", 0 + contline: Optional[str] = None + indents = [0] + + # If we know we're parsing 3.7+, we can unconditionally parse `async` and + # `await` as keywords. 
+ async_keywords = False if grammar is None else grammar.async_keywords + # 'stashed' and 'async_*' are used for async/await parsing + stashed: Optional[GoodTokenInfo] = None + async_def = False + async_def_indent = 0 + async_def_nl = False + + strstart: Tuple[int, int] + endprog: Pattern[str] + + while 1: # loop over lines in stream + try: + line = readline() + except StopIteration: + line = "" + lnum += 1 + pos, max = 0, len(line) + + if contstr: # continued string + assert contline is not None + if not line: + raise TokenError("EOF in multi-line string", strstart) + endmatch = endprog.match(line) + if endmatch: + pos = end = endmatch.end(0) + yield ( + STRING, + contstr + line[:end], + strstart, + (lnum, end), + contline + line, + ) + contstr, needcont = "", 0 + contline = None + elif needcont and line[-2:] != "\\\n" and line[-3:] != "\\\r\n": + yield ( + ERRORTOKEN, + contstr + line, + strstart, + (lnum, len(line)), + contline, + ) + contstr = "" + contline = None + continue + else: + contstr = contstr + line + contline = contline + line + continue + + elif parenlev == 0 and not continued: # new statement + if not line: + break + column = 0 + while pos < max: # measure leading whitespace + if line[pos] == " ": + column += 1 + elif line[pos] == "\t": + column = (column // tabsize + 1) * tabsize + elif line[pos] == "\f": + column = 0 + else: + break + pos += 1 + if pos == max: + break + + if stashed: + yield stashed + stashed = None + + if line[pos] in "\r\n": # skip blank lines + yield (NL, line[pos:], (lnum, pos), (lnum, len(line)), line) + continue + + if line[pos] == "#": # skip comments + comment_token = line[pos:].rstrip("\r\n") + nl_pos = pos + len(comment_token) + yield ( + COMMENT, + comment_token, + (lnum, pos), + (lnum, nl_pos), + line, + ) + yield (NL, line[nl_pos:], (lnum, nl_pos), (lnum, len(line)), line) + continue + + if column > indents[-1]: # count indents + indents.append(column) + yield (INDENT, line[:pos], (lnum, 0), (lnum, pos), line) + + while column < indents[-1]: # count dedents + if column not in indents: + raise IndentationError( + "unindent does not match any outer indentation level", + ("", lnum, pos, line), + ) + indents = indents[:-1] + + if async_def and async_def_indent >= indents[-1]: + async_def = False + async_def_nl = False + async_def_indent = 0 + + yield (DEDENT, "", (lnum, pos), (lnum, pos), line) + + if async_def and async_def_nl and async_def_indent >= indents[-1]: + async_def = False + async_def_nl = False + async_def_indent = 0 + + else: # continued statement + if not line: + raise TokenError("EOF in multi-line statement", (lnum, 0)) + continued = 0 + + while pos < max: + pseudomatch = pseudoprog.match(line, pos) + if pseudomatch: # scan for tokens + start, end = pseudomatch.span(1) + spos, epos, pos = (lnum, start), (lnum, end), end + token, initial = line[start:end], line[start] + + if initial in numchars or ( + initial == "." and token != "." 
+ ): # ordinary number + yield (NUMBER, token, spos, epos, line) + elif initial in "\r\n": + newline = NEWLINE + if parenlev > 0: + newline = NL + elif async_def: + async_def_nl = True + if stashed: + yield stashed + stashed = None + yield (newline, token, spos, epos, line) + + elif initial == "#": + assert not token.endswith("\n") + if stashed: + yield stashed + stashed = None + yield (COMMENT, token, spos, epos, line) + elif token in triple_quoted: + endprog = endprogs[token] + endmatch = endprog.match(line, pos) + if endmatch: # all on one line + pos = endmatch.end(0) + token = line[start:pos] + if stashed: + yield stashed + stashed = None + yield (STRING, token, spos, (lnum, pos), line) + else: + strstart = (lnum, start) # multiple lines + contstr = line[start:] + contline = line + break + elif ( + initial in single_quoted + or token[:2] in single_quoted + or token[:3] in single_quoted + ): + if token[-1] == "\n": # continued string + strstart = (lnum, start) + endprog = ( + endprogs[initial] + or endprogs[token[1]] + or endprogs[token[2]] + ) + contstr, needcont = line[start:], 1 + contline = line + break + else: # ordinary string + if stashed: + yield stashed + stashed = None + yield (STRING, token, spos, epos, line) + elif initial.isidentifier(): # ordinary name + if token in ("async", "await"): + if async_keywords or async_def: + yield ( + ASYNC if token == "async" else AWAIT, + token, + spos, + epos, + line, + ) + continue + + tok = (NAME, token, spos, epos, line) + if token == "async" and not stashed: + stashed = tok + continue + + if token in ("def", "for"): + if stashed and stashed[0] == NAME and stashed[1] == "async": + + if token == "def": + async_def = True + async_def_indent = indents[-1] + + yield ( + ASYNC, + stashed[1], + stashed[2], + stashed[3], + stashed[4], + ) + stashed = None + + if stashed: + yield stashed + stashed = None + + yield tok + elif initial == "\\": # continued stmt + # This yield is new; needed for better idempotency: + if stashed: + yield stashed + stashed = None + yield (NL, token, spos, (lnum, pos), line) + continued = 1 + else: + if initial in "([{": + parenlev += 1 + elif initial in ")]}": + parenlev -= 1 + if stashed: + yield stashed + stashed = None + yield (OP, token, spos, epos, line) + else: + yield (ERRORTOKEN, line[pos], (lnum, pos), (lnum, pos + 1), line) + pos += 1 + + if stashed: + yield stashed + stashed = None + + for indent in indents[1:]: # pop remaining indent levels + yield (DEDENT, "", (lnum, 0), (lnum, 0), "") + yield (ENDMARKER, "", (lnum, 0), (lnum, 0), "") + + +if __name__ == "__main__": # testing + import sys + + if len(sys.argv) > 1: + tokenize(open(sys.argv[1]).readline) + else: + tokenize(sys.stdin.readline) diff --git a/env/lib/python3.8/site-packages/blib2to3/pygram.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/blib2to3/pygram.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..363304ad Binary files /dev/null and b/env/lib/python3.8/site-packages/blib2to3/pygram.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/blib2to3/pygram.py b/env/lib/python3.8/site-packages/blib2to3/pygram.py new file mode 100644 index 00000000..99012cdd --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pygram.py @@ -0,0 +1,218 @@ +# Copyright 2006 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. 
+ +"""Export the Python grammar and symbols.""" + +# Python imports +import os + +from typing import Union + +# Local imports +from .pgen2 import token +from .pgen2 import driver + +from .pgen2.grammar import Grammar + +# Moved into initialize because mypyc can't handle __file__ (XXX bug) +# # The grammar file +# _GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt") +# _PATTERN_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), +# "PatternGrammar.txt") + + +class Symbols(object): + def __init__(self, grammar: Grammar) -> None: + """Initializer. + + Creates an attribute for each grammar symbol (nonterminal), + whose value is the symbol's type (an int >= 256). + """ + for name, symbol in grammar.symbol2number.items(): + setattr(self, name, symbol) + + +class _python_symbols(Symbols): + and_expr: int + and_test: int + annassign: int + arglist: int + argument: int + arith_expr: int + asexpr_test: int + assert_stmt: int + async_funcdef: int + async_stmt: int + atom: int + augassign: int + break_stmt: int + case_block: int + classdef: int + comp_for: int + comp_if: int + comp_iter: int + comp_op: int + comparison: int + compound_stmt: int + continue_stmt: int + decorated: int + decorator: int + decorators: int + del_stmt: int + dictsetmaker: int + dotted_as_name: int + dotted_as_names: int + dotted_name: int + encoding_decl: int + eval_input: int + except_clause: int + exec_stmt: int + expr: int + expr_stmt: int + exprlist: int + factor: int + file_input: int + flow_stmt: int + for_stmt: int + funcdef: int + global_stmt: int + guard: int + if_stmt: int + import_as_name: int + import_as_names: int + import_from: int + import_name: int + import_stmt: int + lambdef: int + listmaker: int + match_stmt: int + namedexpr_test: int + not_test: int + old_comp_for: int + old_comp_if: int + old_comp_iter: int + old_lambdef: int + old_test: int + or_test: int + parameters: int + pass_stmt: int + pattern: int + patterns: int + power: int + print_stmt: int + raise_stmt: int + return_stmt: int + shift_expr: int + simple_stmt: int + single_input: int + sliceop: int + small_stmt: int + subject_expr: int + star_expr: int + stmt: int + subscript: int + subscriptlist: int + suite: int + term: int + test: int + testlist: int + testlist1: int + testlist_gexp: int + testlist_safe: int + testlist_star_expr: int + tfpdef: int + tfplist: int + tname: int + tname_star: int + trailer: int + try_stmt: int + typedargslist: int + varargslist: int + vfpdef: int + vfplist: int + vname: int + while_stmt: int + with_stmt: int + xor_expr: int + yield_arg: int + yield_expr: int + yield_stmt: int + + +class _pattern_symbols(Symbols): + Alternative: int + Alternatives: int + Details: int + Matcher: int + NegatedUnit: int + Repeater: int + Unit: int + + +python_grammar: Grammar +python_grammar_no_print_statement: Grammar +python_grammar_no_print_statement_no_exec_statement: Grammar +python_grammar_no_print_statement_no_exec_statement_async_keywords: Grammar +python_grammar_no_exec_statement: Grammar +pattern_grammar: Grammar +python_grammar_soft_keywords: Grammar + +python_symbols: _python_symbols +pattern_symbols: _pattern_symbols + + +def initialize(cache_dir: Union[str, "os.PathLike[str]", None] = None) -> None: + global python_grammar + global python_grammar_no_print_statement + global python_grammar_no_print_statement_no_exec_statement + global python_grammar_no_print_statement_no_exec_statement_async_keywords + global python_grammar_soft_keywords + global python_symbols + global pattern_grammar + global 
pattern_symbols + + # The grammar file + _GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt") + _PATTERN_GRAMMAR_FILE = os.path.join( + os.path.dirname(__file__), "PatternGrammar.txt" + ) + + # Python 2 + python_grammar = driver.load_packaged_grammar("blib2to3", _GRAMMAR_FILE, cache_dir) + python_grammar.version = (2, 0) + + soft_keywords = python_grammar.soft_keywords.copy() + python_grammar.soft_keywords.clear() + + python_symbols = _python_symbols(python_grammar) + + # Python 2 + from __future__ import print_function + python_grammar_no_print_statement = python_grammar.copy() + del python_grammar_no_print_statement.keywords["print"] + + # Python 3.0-3.6 + python_grammar_no_print_statement_no_exec_statement = python_grammar.copy() + del python_grammar_no_print_statement_no_exec_statement.keywords["print"] + del python_grammar_no_print_statement_no_exec_statement.keywords["exec"] + python_grammar_no_print_statement_no_exec_statement.version = (3, 0) + + # Python 3.7+ + python_grammar_no_print_statement_no_exec_statement_async_keywords = ( + python_grammar_no_print_statement_no_exec_statement.copy() + ) + python_grammar_no_print_statement_no_exec_statement_async_keywords.async_keywords = ( + True + ) + python_grammar_no_print_statement_no_exec_statement_async_keywords.version = (3, 7) + + # Python 3.10+ + python_grammar_soft_keywords = ( + python_grammar_no_print_statement_no_exec_statement_async_keywords.copy() + ) + python_grammar_soft_keywords.soft_keywords = soft_keywords + python_grammar_soft_keywords.version = (3, 10) + + pattern_grammar = driver.load_packaged_grammar( + "blib2to3", _PATTERN_GRAMMAR_FILE, cache_dir + ) + pattern_symbols = _pattern_symbols(pattern_grammar) diff --git a/env/lib/python3.8/site-packages/blib2to3/pytree.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/blib2to3/pytree.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..0a515b20 Binary files /dev/null and b/env/lib/python3.8/site-packages/blib2to3/pytree.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/blib2to3/pytree.py b/env/lib/python3.8/site-packages/blib2to3/pytree.py new file mode 100644 index 00000000..15a1420e --- /dev/null +++ b/env/lib/python3.8/site-packages/blib2to3/pytree.py @@ -0,0 +1,983 @@ +# Copyright 2006 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +""" +Python parse tree definitions. + +This is a very concrete parse tree; we need to keep every token and +even the comments and whitespace between tokens. + +There's also a pattern matching implementation here. 
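Because every token, comment, and whitespace prefix is kept, printing a
tree reproduces the source exactly. A sketch (assuming d is a pgen2
driver.Driver instance with a parse_string method):

    tree = d.parse_string("x = 1\n")
    str(tree) == "x = 1\n"        # lossless round-trip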
+""" + +# mypy: allow-untyped-defs, allow-incomplete-defs + +from typing import ( + Any, + Dict, + Iterator, + List, + Optional, + Text, + Tuple, + TypeVar, + Union, + Set, + Iterable, +) +from blib2to3.pgen2.grammar import Grammar + +__author__ = "Guido van Rossum " + +import sys +from io import StringIO + +HUGE: int = 0x7FFFFFFF # maximum repeat count, default max + +_type_reprs: Dict[int, Union[Text, int]] = {} + + +def type_repr(type_num: int) -> Union[Text, int]: + global _type_reprs + if not _type_reprs: + from .pygram import python_symbols + + # printing tokens is possible but not as useful + # from .pgen2 import token // token.__dict__.items(): + for name in dir(python_symbols): + val = getattr(python_symbols, name) + if type(val) == int: + _type_reprs[val] = name + return _type_reprs.setdefault(type_num, type_num) + + +_P = TypeVar("_P", bound="Base") + +NL = Union["Node", "Leaf"] +Context = Tuple[Text, Tuple[int, int]] +RawNode = Tuple[int, Optional[Text], Optional[Context], Optional[List[NL]]] + + +class Base(object): + + """ + Abstract base class for Node and Leaf. + + This provides some default functionality and boilerplate using the + template pattern. + + A node may be a subnode of at most one parent. + """ + + # Default values for instance variables + type: int # int: token number (< 256) or symbol number (>= 256) + parent: Optional["Node"] = None # Parent node pointer, or None + children: List[NL] # List of subnodes + was_changed: bool = False + was_checked: bool = False + + def __new__(cls, *args, **kwds): + """Constructor that prevents Base from being instantiated.""" + assert cls is not Base, "Cannot instantiate Base" + return object.__new__(cls) + + def __eq__(self, other: Any) -> bool: + """ + Compare two nodes for equality. + + This calls the method _eq(). + """ + if self.__class__ is not other.__class__: + return NotImplemented + return self._eq(other) + + @property + def prefix(self) -> Text: + raise NotImplementedError + + def _eq(self: _P, other: _P) -> bool: + """ + Compare two nodes for equality. + + This is called by __eq__ and __ne__. It is only called if the two nodes + have the same type. This must be implemented by the concrete subclass. + Nodes should be considered equal if they have the same structure, + ignoring the prefix string and other context information. + """ + raise NotImplementedError + + def __deepcopy__(self: _P, memo: Any) -> _P: + return self.clone() + + def clone(self: _P) -> _P: + """ + Return a cloned (deep) copy of self. + + This must be implemented by the concrete subclass. + """ + raise NotImplementedError + + def post_order(self) -> Iterator[NL]: + """ + Return a post-order iterator for the tree. + + This must be implemented by the concrete subclass. + """ + raise NotImplementedError + + def pre_order(self) -> Iterator[NL]: + """ + Return a pre-order iterator for the tree. + + This must be implemented by the concrete subclass. 
+ """ + raise NotImplementedError + + def replace(self, new: Union[NL, List[NL]]) -> None: + """Replace this node with a new one in the parent.""" + assert self.parent is not None, str(self) + assert new is not None + if not isinstance(new, list): + new = [new] + l_children = [] + found = False + for ch in self.parent.children: + if ch is self: + assert not found, (self.parent.children, self, new) + if new is not None: + l_children.extend(new) + found = True + else: + l_children.append(ch) + assert found, (self.children, self, new) + self.parent.children = l_children + self.parent.changed() + self.parent.invalidate_sibling_maps() + for x in new: + x.parent = self.parent + self.parent = None + + def get_lineno(self) -> Optional[int]: + """Return the line number which generated the invocant node.""" + node = self + while not isinstance(node, Leaf): + if not node.children: + return None + node = node.children[0] + return node.lineno + + def changed(self) -> None: + if self.was_changed: + return + if self.parent: + self.parent.changed() + self.was_changed = True + + def remove(self) -> Optional[int]: + """ + Remove the node from the tree. Returns the position of the node in its + parent's children before it was removed. + """ + if self.parent: + for i, node in enumerate(self.parent.children): + if node is self: + del self.parent.children[i] + self.parent.changed() + self.parent.invalidate_sibling_maps() + self.parent = None + return i + return None + + @property + def next_sibling(self) -> Optional[NL]: + """ + The node immediately following the invocant in their parent's children + list. If the invocant does not have a next sibling, it is None + """ + if self.parent is None: + return None + + if self.parent.next_sibling_map is None: + self.parent.update_sibling_maps() + assert self.parent.next_sibling_map is not None + return self.parent.next_sibling_map[id(self)] + + @property + def prev_sibling(self) -> Optional[NL]: + """ + The node immediately preceding the invocant in their parent's children + list. If the invocant does not have a previous sibling, it is None. + """ + if self.parent is None: + return None + + if self.parent.prev_sibling_map is None: + self.parent.update_sibling_maps() + assert self.parent.prev_sibling_map is not None + return self.parent.prev_sibling_map[id(self)] + + def leaves(self) -> Iterator["Leaf"]: + for child in self.children: + yield from child.leaves() + + def depth(self) -> int: + if self.parent is None: + return 0 + return 1 + self.parent.depth() + + def get_suffix(self) -> Text: + """ + Return the string immediately following the invocant node. This is + effectively equivalent to node.next_sibling.prefix + """ + next_sib = self.next_sibling + if next_sib is None: + return "" + prefix = next_sib.prefix + return prefix + + +class Node(Base): + + """Concrete implementation for interior nodes.""" + + fixers_applied: Optional[List[Any]] + used_names: Optional[Set[Text]] + + def __init__( + self, + type: int, + children: List[NL], + context: Optional[Any] = None, + prefix: Optional[Text] = None, + fixers_applied: Optional[List[Any]] = None, + ) -> None: + """ + Initializer. + + Takes a type constant (a symbol number >= 256), a sequence of + child nodes, and an optional context keyword argument. + + As a side effect, the parent pointers of the children are updated. 
+ """ + assert type >= 256, type + self.type = type + self.children = list(children) + for ch in self.children: + assert ch.parent is None, repr(ch) + ch.parent = self + self.invalidate_sibling_maps() + if prefix is not None: + self.prefix = prefix + if fixers_applied: + self.fixers_applied = fixers_applied[:] + else: + self.fixers_applied = None + + def __repr__(self) -> Text: + """Return a canonical string representation.""" + assert self.type is not None + return "%s(%s, %r)" % ( + self.__class__.__name__, + type_repr(self.type), + self.children, + ) + + def __str__(self) -> Text: + """ + Return a pretty string representation. + + This reproduces the input source exactly. + """ + return "".join(map(str, self.children)) + + def _eq(self, other: Base) -> bool: + """Compare two nodes for equality.""" + return (self.type, self.children) == (other.type, other.children) + + def clone(self) -> "Node": + assert self.type is not None + """Return a cloned (deep) copy of self.""" + return Node( + self.type, + [ch.clone() for ch in self.children], + fixers_applied=self.fixers_applied, + ) + + def post_order(self) -> Iterator[NL]: + """Return a post-order iterator for the tree.""" + for child in self.children: + yield from child.post_order() + yield self + + def pre_order(self) -> Iterator[NL]: + """Return a pre-order iterator for the tree.""" + yield self + for child in self.children: + yield from child.pre_order() + + @property + def prefix(self) -> Text: + """ + The whitespace and comments preceding this node in the input. + """ + if not self.children: + return "" + return self.children[0].prefix + + @prefix.setter + def prefix(self, prefix: Text) -> None: + if self.children: + self.children[0].prefix = prefix + + def set_child(self, i: int, child: NL) -> None: + """ + Equivalent to 'node.children[i] = child'. This method also sets the + child's parent attribute appropriately. + """ + child.parent = self + self.children[i].parent = None + self.children[i] = child + self.changed() + self.invalidate_sibling_maps() + + def insert_child(self, i: int, child: NL) -> None: + """ + Equivalent to 'node.children.insert(i, child)'. This method also sets + the child's parent attribute appropriately. + """ + child.parent = self + self.children.insert(i, child) + self.changed() + self.invalidate_sibling_maps() + + def append_child(self, child: NL) -> None: + """ + Equivalent to 'node.children.append(child)'. This method also sets the + child's parent attribute appropriately. 
+ """ + child.parent = self + self.children.append(child) + self.changed() + self.invalidate_sibling_maps() + + def invalidate_sibling_maps(self) -> None: + self.prev_sibling_map: Optional[Dict[int, Optional[NL]]] = None + self.next_sibling_map: Optional[Dict[int, Optional[NL]]] = None + + def update_sibling_maps(self) -> None: + _prev: Dict[int, Optional[NL]] = {} + _next: Dict[int, Optional[NL]] = {} + self.prev_sibling_map = _prev + self.next_sibling_map = _next + previous: Optional[NL] = None + for current in self.children: + _prev[id(current)] = previous + _next[id(previous)] = current + previous = current + _next[id(current)] = None + + +class Leaf(Base): + + """Concrete implementation for leaf nodes.""" + + # Default values for instance variables + value: Text + fixers_applied: List[Any] + bracket_depth: int + # Changed later in brackets.py + opening_bracket: Optional["Leaf"] = None + used_names: Optional[Set[Text]] + _prefix = "" # Whitespace and comments preceding this token in the input + lineno: int = 0 # Line where this token starts in the input + column: int = 0 # Column where this token starts in the input + + def __init__( + self, + type: int, + value: Text, + context: Optional[Context] = None, + prefix: Optional[Text] = None, + fixers_applied: List[Any] = [], + opening_bracket: Optional["Leaf"] = None, + ) -> None: + """ + Initializer. + + Takes a type constant (a token number < 256), a string value, and an + optional context keyword argument. + """ + + assert 0 <= type < 256, type + if context is not None: + self._prefix, (self.lineno, self.column) = context + self.type = type + self.value = value + if prefix is not None: + self._prefix = prefix + self.fixers_applied: Optional[List[Any]] = fixers_applied[:] + self.children = [] + self.opening_bracket = opening_bracket + + def __repr__(self) -> str: + """Return a canonical string representation.""" + from .pgen2.token import tok_name + + assert self.type is not None + return "%s(%s, %r)" % ( + self.__class__.__name__, + tok_name.get(self.type, self.type), + self.value, + ) + + def __str__(self) -> Text: + """ + Return a pretty string representation. + + This reproduces the input source exactly. + """ + return self._prefix + str(self.value) + + def _eq(self, other: "Leaf") -> bool: + """Compare two nodes for equality.""" + return (self.type, self.value) == (other.type, other.value) + + def clone(self) -> "Leaf": + assert self.type is not None + """Return a cloned (deep) copy of self.""" + return Leaf( + self.type, + self.value, + (self.prefix, (self.lineno, self.column)), + fixers_applied=self.fixers_applied, + ) + + def leaves(self) -> Iterator["Leaf"]: + yield self + + def post_order(self) -> Iterator["Leaf"]: + """Return a post-order iterator for the tree.""" + yield self + + def pre_order(self) -> Iterator["Leaf"]: + """Return a pre-order iterator for the tree.""" + yield self + + @property + def prefix(self) -> Text: + """ + The whitespace and comments preceding this token in the input. + """ + return self._prefix + + @prefix.setter + def prefix(self, prefix: Text) -> None: + self.changed() + self._prefix = prefix + + +def convert(gr: Grammar, raw_node: RawNode) -> NL: + """ + Convert raw node information to a Node or Leaf instance. + + This is passed to the parser driver which calls it whenever a reduction of a + grammar rule produces a new complete node, so that the tree is build + strictly bottom-up. 
+ """ + type, value, context, children = raw_node + if children or type in gr.number2symbol: + # If there's exactly one child, return that child instead of + # creating a new node. + assert children is not None + if len(children) == 1: + return children[0] + return Node(type, children, context=context) + else: + return Leaf(type, value or "", context=context) + + +_Results = Dict[Text, NL] + + +class BasePattern(object): + + """ + A pattern is a tree matching pattern. + + It looks for a specific node type (token or symbol), and + optionally for a specific content. + + This is an abstract base class. There are three concrete + subclasses: + + - LeafPattern matches a single leaf node; + - NodePattern matches a single node (usually non-leaf); + - WildcardPattern matches a sequence of nodes of variable length. + """ + + # Defaults for instance variables + type: Optional[int] + type = None # Node type (token if < 256, symbol if >= 256) + content: Any = None # Optional content matching pattern + name: Optional[Text] = None # Optional name used to store match in results dict + + def __new__(cls, *args, **kwds): + """Constructor that prevents BasePattern from being instantiated.""" + assert cls is not BasePattern, "Cannot instantiate BasePattern" + return object.__new__(cls) + + def __repr__(self) -> Text: + assert self.type is not None + args = [type_repr(self.type), self.content, self.name] + while args and args[-1] is None: + del args[-1] + return "%s(%s)" % (self.__class__.__name__, ", ".join(map(repr, args))) + + def _submatch(self, node, results=None) -> bool: + raise NotImplementedError + + def optimize(self) -> "BasePattern": + """ + A subclass can define this as a hook for optimizations. + + Returns either self or another node with the same effect. + """ + return self + + def match(self, node: NL, results: Optional[_Results] = None) -> bool: + """ + Does this pattern exactly match a node? + + Returns True if it matches, False if not. + + If results is not None, it must be a dict which will be + updated with the nodes matching named subpatterns. + + Default implementation for non-wildcard patterns. + """ + if self.type is not None and node.type != self.type: + return False + if self.content is not None: + r: Optional[_Results] = None + if results is not None: + r = {} + if not self._submatch(node, r): + return False + if r: + assert results is not None + results.update(r) + if results is not None and self.name: + results[self.name] = node + return True + + def match_seq(self, nodes: List[NL], results: Optional[_Results] = None) -> bool: + """ + Does this pattern exactly match a sequence of nodes? + + Default implementation for non-wildcard patterns. + """ + if len(nodes) != 1: + return False + return self.match(nodes[0], results) + + def generate_matches(self, nodes: List[NL]) -> Iterator[Tuple[int, _Results]]: + """ + Generator yielding all matches for this pattern. + + Default implementation for non-wildcard patterns. + """ + r: _Results = {} + if nodes and self.match(nodes[0], r): + yield 1, r + + +class LeafPattern(BasePattern): + def __init__( + self, + type: Optional[int] = None, + content: Optional[Text] = None, + name: Optional[Text] = None, + ) -> None: + """ + Initializer. Takes optional type, content, and name. + + The type, if given must be a token type (< 256). If not given, + this matches any *leaf* node; the content may still be required. + + The content, if given, must be a string. + + If a name is given, the matching node is stored in the results + dict under that key. 
+ """ + if type is not None: + assert 0 <= type < 256, type + if content is not None: + assert isinstance(content, str), repr(content) + self.type = type + self.content = content + self.name = name + + def match(self, node: NL, results=None) -> bool: + """Override match() to insist on a leaf node.""" + if not isinstance(node, Leaf): + return False + return BasePattern.match(self, node, results) + + def _submatch(self, node, results=None): + """ + Match the pattern's content to the node's children. + + This assumes the node type matches and self.content is not None. + + Returns True if it matches, False if not. + + If results is not None, it must be a dict which will be + updated with the nodes matching named subpatterns. + + When returning False, the results dict may still be updated. + """ + return self.content == node.value + + +class NodePattern(BasePattern): + + wildcards: bool = False + + def __init__( + self, + type: Optional[int] = None, + content: Optional[Iterable[Text]] = None, + name: Optional[Text] = None, + ) -> None: + """ + Initializer. Takes optional type, content, and name. + + The type, if given, must be a symbol type (>= 256). If the + type is None this matches *any* single node (leaf or not), + except if content is not None, in which it only matches + non-leaf nodes that also match the content pattern. + + The content, if not None, must be a sequence of Patterns that + must match the node's children exactly. If the content is + given, the type must not be None. + + If a name is given, the matching node is stored in the results + dict under that key. + """ + if type is not None: + assert type >= 256, type + if content is not None: + assert not isinstance(content, str), repr(content) + newcontent = list(content) + for i, item in enumerate(newcontent): + assert isinstance(item, BasePattern), (i, item) + # I don't even think this code is used anywhere, but it does cause + # unreachable errors from mypy. This function's signature does look + # odd though *shrug*. + if isinstance(item, WildcardPattern): # type: ignore[unreachable] + self.wildcards = True # type: ignore[unreachable] + self.type = type + self.content = newcontent # TODO: this is unbound when content is None + self.name = name + + def _submatch(self, node, results=None) -> bool: + """ + Match the pattern's content to the node's children. + + This assumes the node type matches and self.content is not None. + + Returns True if it matches, False if not. + + If results is not None, it must be a dict which will be + updated with the nodes matching named subpatterns. + + When returning False, the results dict may still be updated. + """ + if self.wildcards: + for c, r in generate_matches(self.content, node.children): + if c == len(node.children): + if results is not None: + results.update(r) + return True + return False + if len(self.content) != len(node.children): + return False + for subpattern, child in zip(self.content, node.children): + if not subpattern.match(child, results): + return False + return True + + +class WildcardPattern(BasePattern): + + """ + A wildcard pattern can match zero or more nodes. + + This has all the flexibility needed to implement patterns like: + + .* .+ .? .{m,n} + (a b c | d e | f) + (...)* (...)+ (...)? (...){m,n} + + except it always uses non-greedy matching. + """ + + min: int + max: int + + def __init__( + self, + content: Optional[Text] = None, + min: int = 0, + max: int = HUGE, + name: Optional[Text] = None, + ) -> None: + """ + Initializer. 
+
+        Args:
+            content: optional sequence of subsequences of patterns;
+                     if absent, matches one node;
+                     if present, each subsequence is an alternative [*]
+            min: optional minimum number of times to match, default 0
+            max: optional maximum number of times to match, default HUGE
+            name: optional name assigned to this match
+
+        [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is
+            equivalent to (a b c | d e | f g h); if content is None,
+            this is equivalent to '.' in regular expression terms.
+            The min and max parameters work as follows:
+                min=0, max=maxint: .*
+                min=1, max=maxint: .+
+                min=0, max=1: .?
+                min=1, max=1: .
+            If content is not None, replace the dot with the parenthesized
+            list of alternatives, e.g. (a b c | d e | f g h)*
+        """
+        assert 0 <= min <= max <= HUGE, (min, max)
+        if content is not None:
+            f = lambda s: tuple(s)
+            wrapped_content = tuple(map(f, content))  # Protect against alterations
+            # Check sanity of alternatives
+            assert len(wrapped_content), repr(
+                wrapped_content
+            )  # Can't have zero alternatives
+            for alt in wrapped_content:
+                assert len(alt), repr(alt)  # Can't have empty alternatives
+            self.content = wrapped_content
+        self.min = min
+        self.max = max
+        self.name = name
+
+    def optimize(self) -> Any:
+        """Optimize certain stacked wildcard patterns."""
+        subpattern = None
+        if (
+            self.content is not None
+            and len(self.content) == 1
+            and len(self.content[0]) == 1
+        ):
+            subpattern = self.content[0][0]
+        if self.min == 1 and self.max == 1:
+            if self.content is None:
+                return NodePattern(name=self.name)
+            if subpattern is not None and self.name == subpattern.name:
+                return subpattern.optimize()
+        if (
+            self.min <= 1
+            and isinstance(subpattern, WildcardPattern)
+            and subpattern.min <= 1
+            and self.name == subpattern.name
+        ):
+            return WildcardPattern(
+                subpattern.content,
+                self.min * subpattern.min,
+                self.max * subpattern.max,
+                subpattern.name,
+            )
+        return self
+
+    def match(self, node, results=None) -> bool:
+        """Does this pattern exactly match a node?"""
+        return self.match_seq([node], results)
+
+    def match_seq(self, nodes, results=None) -> bool:
+        """Does this pattern exactly match a sequence of nodes?"""
+        for c, r in self.generate_matches(nodes):
+            if c == len(nodes):
+                if results is not None:
+                    results.update(r)
+                    if self.name:
+                        results[self.name] = list(nodes)
+                return True
+        return False
+
+    def generate_matches(self, nodes) -> Iterator[Tuple[int, _Results]]:
+        """
+        Generator yielding matches for a sequence of nodes.
+
+        Args:
+            nodes: sequence of nodes
+
+        Yields:
+            (count, results) tuples where:
+            count: the match comprises nodes[:count];
+            results: dict containing named submatches.
+        """
+        if self.content is None:
+            # Shortcut for special case (see __init__.__doc__)
+            for count in range(self.min, 1 + min(len(nodes), self.max)):
+                r = {}
+                if self.name:
+                    r[self.name] = nodes[:count]
+                yield count, r
+        elif self.name == "bare_name":
+            yield self._bare_name_matches(nodes)
+        else:
+            # The reason for this is that hitting the recursion limit usually
+            # results in some ugly messages about how RuntimeErrors are being
+            # ignored. We only have to do this on CPython, though, because other
+            # implementations don't have this nasty bug in the first place.
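+            # `sys.getrefcount` exists only on CPython, so checking for it below
+            # is a cheap interpreter test: stderr is swapped for a StringIO only
+            # where the noisy RuntimeError output can actually occur, and is
+            # restored in the `finally` block.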
+ if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() + try: + for count, r in self._recursive_matches(nodes, 0): + if self.name: + r[self.name] = nodes[:count] + yield count, r + except RuntimeError: + # We fall back to the iterative pattern matching scheme if the recursive + # scheme hits the recursion limit. + for count, r in self._iterative_matches(nodes): + if self.name: + r[self.name] = nodes[:count] + yield count, r + finally: + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr + + def _iterative_matches(self, nodes) -> Iterator[Tuple[int, _Results]]: + """Helper to iteratively yield the matches.""" + nodelen = len(nodes) + if 0 >= self.min: + yield 0, {} + + results = [] + # generate matches that use just one alt from self.content + for alt in self.content: + for c, r in generate_matches(alt, nodes): + yield c, r + results.append((c, r)) + + # for each match, iterate down the nodes + while results: + new_results = [] + for c0, r0 in results: + # stop if the entire set of nodes has been matched + if c0 < nodelen and c0 <= self.max: + for alt in self.content: + for c1, r1 in generate_matches(alt, nodes[c0:]): + if c1 > 0: + r = {} + r.update(r0) + r.update(r1) + yield c0 + c1, r + new_results.append((c0 + c1, r)) + results = new_results + + def _bare_name_matches(self, nodes) -> Tuple[int, _Results]: + """Special optimized matcher for bare_name.""" + count = 0 + r = {} # type: _Results + done = False + max = len(nodes) + while not done and count < max: + done = True + for leaf in self.content: + if leaf[0].match(nodes[count], r): + count += 1 + done = False + break + assert self.name is not None + r[self.name] = nodes[:count] + return count, r + + def _recursive_matches(self, nodes, count) -> Iterator[Tuple[int, _Results]]: + """Helper to recursively yield the matches.""" + assert self.content is not None + if count >= self.min: + yield 0, {} + if count < self.max: + for alt in self.content: + for c0, r0 in generate_matches(alt, nodes): + for c1, r1 in self._recursive_matches(nodes[c0:], count + 1): + r = {} + r.update(r0) + r.update(r1) + yield c0 + c1, r + + +class NegatedPattern(BasePattern): + def __init__(self, content: Optional[BasePattern] = None) -> None: + """ + Initializer. + + The argument is either a pattern or None. If it is None, this + only matches an empty sequence (effectively '$' in regex + lingo). If it is not None, this matches whenever the argument + pattern doesn't have any matches. + """ + if content is not None: + assert isinstance(content, BasePattern), repr(content) + self.content = content + + def match(self, node, results=None) -> bool: + # We never match a node in its entirety + return False + + def match_seq(self, nodes, results=None) -> bool: + # We only match an empty sequence of nodes in its entirety + return len(nodes) == 0 + + def generate_matches(self, nodes: List[NL]) -> Iterator[Tuple[int, _Results]]: + if self.content is None: + # Return a match if there is an empty sequence + if len(nodes) == 0: + yield 0, {} + else: + # Return a match if the argument pattern has no matches + for c, r in self.content.generate_matches(nodes): + return + yield 0, {} + + +def generate_matches( + patterns: List[BasePattern], nodes: List[NL] +) -> Iterator[Tuple[int, _Results]]: + """ + Generator yielding matches for a sequence of patterns and nodes. 
+ + Args: + patterns: a sequence of patterns + nodes: a sequence of nodes + + Yields: + (count, results) tuples where: + count: the entire sequence of patterns matches nodes[:count]; + results: dict containing named submatches. + """ + if not patterns: + yield 0, {} + else: + p, rest = patterns[0], patterns[1:] + for c0, r0 in p.generate_matches(nodes): + if not rest: + yield c0, r0 + else: + for c1, r1 in generate_matches(rest, nodes[c0:]): + r = {} + r.update(r0) + r.update(r1) + yield c0 + c1, r diff --git a/env/lib/python3.8/site-packages/cachecontrol/__init__.py b/env/lib/python3.8/site-packages/cachecontrol/__init__.py new file mode 100644 index 00000000..f631ae6d --- /dev/null +++ b/env/lib/python3.8/site-packages/cachecontrol/__init__.py @@ -0,0 +1,18 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +"""CacheControl import Interface. + +Make it easy to import from cachecontrol without long namespaces. +""" +__author__ = "Eric Larson" +__email__ = "eric@ionrock.org" +__version__ = "0.12.11" + +from .wrapper import CacheControl +from .adapter import CacheControlAdapter +from .controller import CacheController + +import logging +logging.getLogger(__name__).addHandler(logging.NullHandler()) diff --git a/env/lib/python3.8/site-packages/cachecontrol/_cmd.py b/env/lib/python3.8/site-packages/cachecontrol/_cmd.py new file mode 100644 index 00000000..ccee0079 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachecontrol/_cmd.py @@ -0,0 +1,61 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +import logging + +import requests + +from cachecontrol.adapter import CacheControlAdapter +from cachecontrol.cache import DictCache +from cachecontrol.controller import logger + +from argparse import ArgumentParser + + +def setup_logging(): + logger.setLevel(logging.DEBUG) + handler = logging.StreamHandler() + logger.addHandler(handler) + + +def get_session(): + adapter = CacheControlAdapter( + DictCache(), cache_etags=True, serializer=None, heuristic=None + ) + sess = requests.Session() + sess.mount("http://", adapter) + sess.mount("https://", adapter) + + sess.cache_controller = adapter.controller + return sess + + +def get_args(): + parser = ArgumentParser() + parser.add_argument("url", help="The URL to try and cache") + return parser.parse_args() + + +def main(args=None): + args = get_args() + sess = get_session() + + # Make a request to get a response + resp = sess.get(args.url) + + # Turn on logging + setup_logging() + + # try setting the cache + sess.cache_controller.cache_response(resp.request, resp.raw) + + # Now try to get it + if sess.cache_controller.cached_request(resp.request): + print("Cached!") + else: + print("Not cached :(") + + +if __name__ == "__main__": + main() diff --git a/env/lib/python3.8/site-packages/cachecontrol/adapter.py b/env/lib/python3.8/site-packages/cachecontrol/adapter.py new file mode 100644 index 00000000..22b49638 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachecontrol/adapter.py @@ -0,0 +1,137 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +import types +import functools +import zlib + +from requests.adapters import HTTPAdapter + +from .controller import CacheController, PERMANENT_REDIRECT_STATUSES +from .cache import DictCache +from .filewrapper import CallbackFileWrapper + + +class CacheControlAdapter(HTTPAdapter): + invalidating_methods = {"PUT", "PATCH", "DELETE"} + + def __init__( + self, + cache=None, + cache_etags=True, 
+ controller_class=None, + serializer=None, + heuristic=None, + cacheable_methods=None, + *args, + **kw + ): + super(CacheControlAdapter, self).__init__(*args, **kw) + self.cache = DictCache() if cache is None else cache + self.heuristic = heuristic + self.cacheable_methods = cacheable_methods or ("GET",) + + controller_factory = controller_class or CacheController + self.controller = controller_factory( + self.cache, cache_etags=cache_etags, serializer=serializer + ) + + def send(self, request, cacheable_methods=None, **kw): + """ + Send a request. Use the request information to see if it + exists in the cache and cache the response if we need to and can. + """ + cacheable = cacheable_methods or self.cacheable_methods + if request.method in cacheable: + try: + cached_response = self.controller.cached_request(request) + except zlib.error: + cached_response = None + if cached_response: + return self.build_response(request, cached_response, from_cache=True) + + # check for etags and add headers if appropriate + request.headers.update(self.controller.conditional_headers(request)) + + resp = super(CacheControlAdapter, self).send(request, **kw) + + return resp + + def build_response( + self, request, response, from_cache=False, cacheable_methods=None + ): + """ + Build a response by making a request or using the cache. + + This will end up calling send and returning a potentially + cached response + """ + cacheable = cacheable_methods or self.cacheable_methods + if not from_cache and request.method in cacheable: + # Check for any heuristics that might update headers + # before trying to cache. + if self.heuristic: + response = self.heuristic.apply(response) + + # apply any expiration heuristics + if response.status == 304: + # We must have sent an ETag request. This could mean + # that we've been expired already or that we simply + # have an etag. In either case, we want to try and + # update the cache if that is the case. + cached_response = self.controller.update_cached_response( + request, response + ) + + if cached_response is not response: + from_cache = True + + # We are done with the server response, read a + # possible response body (compliant servers will + # not return one, but we cannot be 100% sure) and + # release the connection back to the pool. + response.read(decode_content=False) + response.release_conn() + + response = cached_response + + # We always cache the 301 responses + elif int(response.status) in PERMANENT_REDIRECT_STATUSES: + self.controller.cache_response(request, response) + else: + # Wrap the response file with a wrapper that will cache the + # response when the stream has been consumed. + response._fp = CallbackFileWrapper( + response._fp, + functools.partial( + self.controller.cache_response, request, response + ), + ) + if response.chunked: + super_update_chunk_length = response._update_chunk_length + + def _update_chunk_length(self): + super_update_chunk_length() + if self.chunk_left == 0: + self._fp._close() + + response._update_chunk_length = types.MethodType( + _update_chunk_length, response + ) + + resp = super(CacheControlAdapter, self).build_response(request, response) + + # See if we should invalidate the cache. 
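+        # PUT/PATCH/DELETE are assumed to modify the resource on the server,
+        # so a successful response to one of them evicts any cached entry for
+        # that URL rather than risking a stale read later.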
+        if request.method in self.invalidating_methods and resp.ok:
+            cache_url = self.controller.cache_url(request.url)
+            self.cache.delete(cache_url)
+
+        # Give the request a from_cache attr to let people use it
+        resp.from_cache = from_cache
+
+        return resp
+
+    def close(self):
+        self.cache.close()
+        super(CacheControlAdapter, self).close()
diff --git a/env/lib/python3.8/site-packages/cachecontrol/cache.py b/env/lib/python3.8/site-packages/cachecontrol/cache.py
new file mode 100644
index 00000000..2a965f59
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachecontrol/cache.py
@@ -0,0 +1,65 @@
+# SPDX-FileCopyrightText: 2015 Eric Larson
+#
+# SPDX-License-Identifier: Apache-2.0
+
+"""
+The cache object API for implementing caches. The default is a thread
+safe in-memory dictionary.
+"""
+from threading import Lock
+
+
+class BaseCache(object):
+
+    def get(self, key):
+        raise NotImplementedError()
+
+    def set(self, key, value, expires=None):
+        raise NotImplementedError()
+
+    def delete(self, key):
+        raise NotImplementedError()
+
+    def close(self):
+        pass
+
+
+class DictCache(BaseCache):
+
+    def __init__(self, init_dict=None):
+        self.lock = Lock()
+        self.data = init_dict or {}
+
+    def get(self, key):
+        return self.data.get(key, None)
+
+    def set(self, key, value, expires=None):
+        with self.lock:
+            self.data.update({key: value})
+
+    def delete(self, key):
+        with self.lock:
+            if key in self.data:
+                self.data.pop(key)
+
+
+class SeparateBodyBaseCache(BaseCache):
+    """
+    In this variant, the body is not stored mixed in with the metadata, but is
+    passed in (as a bytes-like object) in a separate call to ``set_body()``.
+
+    That is, the expected interaction pattern is::
+
+        cache.set(key, serialized_metadata)
+        cache.set_body(key, body)
+
+    Similarly, the body should be loaded separately via ``get_body()``.
+    """
+    def set_body(self, key, body):
+        raise NotImplementedError()
+
+    def get_body(self, key):
+        """
+        Return the body as file-like object.
+        """
+        raise NotImplementedError()
diff --git a/env/lib/python3.8/site-packages/cachecontrol/caches/__init__.py b/env/lib/python3.8/site-packages/cachecontrol/caches/__init__.py
new file mode 100644
index 00000000..37827291
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachecontrol/caches/__init__.py
@@ -0,0 +1,9 @@
+# SPDX-FileCopyrightText: 2015 Eric Larson
+#
+# SPDX-License-Identifier: Apache-2.0
+
+from .file_cache import FileCache, SeparateBodyFileCache
+from .redis_cache import RedisCache
+
+
+__all__ = ["FileCache", "SeparateBodyFileCache", "RedisCache"]
diff --git a/env/lib/python3.8/site-packages/cachecontrol/caches/file_cache.py b/env/lib/python3.8/site-packages/cachecontrol/caches/file_cache.py
new file mode 100644
index 00000000..f1ddb2eb
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachecontrol/caches/file_cache.py
@@ -0,0 +1,188 @@
+# SPDX-FileCopyrightText: 2015 Eric Larson
+#
+# SPDX-License-Identifier: Apache-2.0
+
+import hashlib
+import os
+from textwrap import dedent
+
+from ..cache import BaseCache, SeparateBodyBaseCache
+from ..controller import CacheController
+
+try:
+    FileNotFoundError
+except NameError:
+    # py2.X
+    FileNotFoundError = (IOError, OSError)
+
+
+def _secure_open_write(filename, fmode):
+    # We only want to write to this file, so open it in write only mode
+    flags = os.O_WRONLY
+
+    # os.O_CREAT | os.O_EXCL will fail if the file already exists, so we only
+    # will open *new* files.
+    # We specify this because we want to ensure that the mode we pass is the
+    # mode of the file.
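+    # With O_CREAT | O_EXCL set, os.open() raises FileExistsError when the
+    # path already exists, so the remove-then-create sequence below cannot
+    # silently reuse (or follow) a file dropped in place by someone else.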
+ flags |= os.O_CREAT | os.O_EXCL + + # Do not follow symlinks to prevent someone from making a symlink that + # we follow and insecurely open a cache file. + if hasattr(os, "O_NOFOLLOW"): + flags |= os.O_NOFOLLOW + + # On Windows we'll mark this file as binary + if hasattr(os, "O_BINARY"): + flags |= os.O_BINARY + + # Before we open our file, we want to delete any existing file that is + # there + try: + os.remove(filename) + except (IOError, OSError): + # The file must not exist already, so we can just skip ahead to opening + pass + + # Open our file, the use of os.O_CREAT | os.O_EXCL will ensure that if a + # race condition happens between the os.remove and this line, that an + # error will be raised. Because we utilize a lockfile this should only + # happen if someone is attempting to attack us. + fd = os.open(filename, flags, fmode) + try: + return os.fdopen(fd, "wb") + + except: + # An error occurred wrapping our FD in a file object + os.close(fd) + raise + + +class _FileCacheMixin: + """Shared implementation for both FileCache variants.""" + + def __init__( + self, + directory, + forever=False, + filemode=0o0600, + dirmode=0o0700, + use_dir_lock=None, + lock_class=None, + ): + + if use_dir_lock is not None and lock_class is not None: + raise ValueError("Cannot use use_dir_lock and lock_class together") + + try: + from lockfile import LockFile + from lockfile.mkdirlockfile import MkdirLockFile + except ImportError: + notice = dedent( + """ + NOTE: In order to use the FileCache you must have + lockfile installed. You can install it via pip: + pip install lockfile + """ + ) + raise ImportError(notice) + + else: + if use_dir_lock: + lock_class = MkdirLockFile + + elif lock_class is None: + lock_class = LockFile + + self.directory = directory + self.forever = forever + self.filemode = filemode + self.dirmode = dirmode + self.lock_class = lock_class + + @staticmethod + def encode(x): + return hashlib.sha224(x.encode()).hexdigest() + + def _fn(self, name): + # NOTE: This method should not change as some may depend on it. + # See: https://github.com/ionrock/cachecontrol/issues/63 + hashed = self.encode(name) + parts = list(hashed[:5]) + [hashed] + return os.path.join(self.directory, *parts) + + def get(self, key): + name = self._fn(key) + try: + with open(name, "rb") as fh: + return fh.read() + + except FileNotFoundError: + return None + + def set(self, key, value, expires=None): + name = self._fn(key) + self._write(name, value) + + def _write(self, path, data: bytes): + """ + Safely write the data to the given path. + """ + # Make sure the directory exists + try: + os.makedirs(os.path.dirname(path), self.dirmode) + except (IOError, OSError): + pass + + with self.lock_class(path) as lock: + # Write our actual file + with _secure_open_write(lock.path, self.filemode) as fh: + fh.write(data) + + def _delete(self, key, suffix): + name = self._fn(key) + suffix + if not self.forever: + try: + os.remove(name) + except FileNotFoundError: + pass + + +class FileCache(_FileCacheMixin, BaseCache): + """ + Traditional FileCache: body is stored in memory, so not suitable for large + downloads. + """ + + def delete(self, key): + self._delete(key, "") + + +class SeparateBodyFileCache(_FileCacheMixin, SeparateBodyBaseCache): + """ + Memory-efficient FileCache: body is stored in a separate file, reducing + peak memory usage. 
+ """ + + def get_body(self, key): + name = self._fn(key) + ".body" + try: + return open(name, "rb") + except FileNotFoundError: + return None + + def set_body(self, key, body): + name = self._fn(key) + ".body" + self._write(name, body) + + def delete(self, key): + self._delete(key, "") + self._delete(key, ".body") + + +def url_to_file_path(url, filecache): + """Return the file cache path based on the URL. + + This does not ensure the file exists! + """ + key = CacheController.cache_url(url) + return filecache._fn(key) diff --git a/env/lib/python3.8/site-packages/cachecontrol/caches/redis_cache.py b/env/lib/python3.8/site-packages/cachecontrol/caches/redis_cache.py new file mode 100644 index 00000000..7bcb38a2 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachecontrol/caches/redis_cache.py @@ -0,0 +1,39 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +from __future__ import division + +from datetime import datetime +from cachecontrol.cache import BaseCache + + +class RedisCache(BaseCache): + + def __init__(self, conn): + self.conn = conn + + def get(self, key): + return self.conn.get(key) + + def set(self, key, value, expires=None): + if not expires: + self.conn.set(key, value) + elif isinstance(expires, datetime): + expires = expires - datetime.utcnow() + self.conn.setex(key, int(expires.total_seconds()), value) + else: + self.conn.setex(key, expires, value) + + def delete(self, key): + self.conn.delete(key) + + def clear(self): + """Helper for clearing all the keys in a database. Use with + caution!""" + for key in self.conn.keys(): + self.conn.delete(key) + + def close(self): + """Redis uses connection pooling, no need to close the connection.""" + pass diff --git a/env/lib/python3.8/site-packages/cachecontrol/compat.py b/env/lib/python3.8/site-packages/cachecontrol/compat.py new file mode 100644 index 00000000..72c456cf --- /dev/null +++ b/env/lib/python3.8/site-packages/cachecontrol/compat.py @@ -0,0 +1,32 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +try: + from urllib.parse import urljoin +except ImportError: + from urlparse import urljoin + + +try: + import cPickle as pickle +except ImportError: + import pickle + +# Handle the case where the requests module has been patched to not have +# urllib3 bundled as part of its source. +try: + from requests.packages.urllib3.response import HTTPResponse +except ImportError: + from urllib3.response import HTTPResponse + +try: + from requests.packages.urllib3.util import is_fp_closed +except ImportError: + from urllib3.util import is_fp_closed + +# Replicate some six behaviour +try: + text_type = unicode +except NameError: + text_type = str diff --git a/env/lib/python3.8/site-packages/cachecontrol/controller.py b/env/lib/python3.8/site-packages/cachecontrol/controller.py new file mode 100644 index 00000000..a95f6437 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachecontrol/controller.py @@ -0,0 +1,439 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +""" +The httplib2 algorithms ported for use with requests. 
+""" +import logging +import re +import calendar +import time +from email.utils import parsedate_tz + +from requests.structures import CaseInsensitiveDict + +from .cache import DictCache, SeparateBodyBaseCache +from .serialize import Serializer + + +logger = logging.getLogger(__name__) + +URI = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?") + +PERMANENT_REDIRECT_STATUSES = (301, 308) + + +def parse_uri(uri): + """Parses a URI using the regex given in Appendix B of RFC 3986. + + (scheme, authority, path, query, fragment) = parse_uri(uri) + """ + groups = URI.match(uri).groups() + return (groups[1], groups[3], groups[4], groups[6], groups[8]) + + +class CacheController(object): + """An interface to see if request should cached or not.""" + + def __init__( + self, cache=None, cache_etags=True, serializer=None, status_codes=None + ): + self.cache = DictCache() if cache is None else cache + self.cache_etags = cache_etags + self.serializer = serializer or Serializer() + self.cacheable_status_codes = status_codes or (200, 203, 300, 301, 308) + + @classmethod + def _urlnorm(cls, uri): + """Normalize the URL to create a safe key for the cache""" + (scheme, authority, path, query, fragment) = parse_uri(uri) + if not scheme or not authority: + raise Exception("Only absolute URIs are allowed. uri = %s" % uri) + + scheme = scheme.lower() + authority = authority.lower() + + if not path: + path = "/" + + # Could do syntax based normalization of the URI before + # computing the digest. See Section 6.2.2 of Std 66. + request_uri = query and "?".join([path, query]) or path + defrag_uri = scheme + "://" + authority + request_uri + + return defrag_uri + + @classmethod + def cache_url(cls, uri): + return cls._urlnorm(uri) + + def parse_cache_control(self, headers): + known_directives = { + # https://tools.ietf.org/html/rfc7234#section-5.2 + "max-age": (int, True), + "max-stale": (int, False), + "min-fresh": (int, True), + "no-cache": (None, False), + "no-store": (None, False), + "no-transform": (None, False), + "only-if-cached": (None, False), + "must-revalidate": (None, False), + "public": (None, False), + "private": (None, False), + "proxy-revalidate": (None, False), + "s-maxage": (int, True), + } + + cc_headers = headers.get("cache-control", headers.get("Cache-Control", "")) + + retval = {} + + for cc_directive in cc_headers.split(","): + if not cc_directive.strip(): + continue + + parts = cc_directive.split("=", 1) + directive = parts[0].strip() + + try: + typ, required = known_directives[directive] + except KeyError: + logger.debug("Ignoring unknown cache-control directive: %s", directive) + continue + + if not typ or not required: + retval[directive] = None + if typ: + try: + retval[directive] = typ(parts[1].strip()) + except IndexError: + if required: + logger.debug( + "Missing value for cache-control " "directive: %s", + directive, + ) + except ValueError: + logger.debug( + "Invalid value for cache-control directive " "%s, must be %s", + directive, + typ.__name__, + ) + + return retval + + def cached_request(self, request): + """ + Return a cached response if it exists in the cache, otherwise + return False. 
+ """ + cache_url = self.cache_url(request.url) + logger.debug('Looking up "%s" in the cache', cache_url) + cc = self.parse_cache_control(request.headers) + + # Bail out if the request insists on fresh data + if "no-cache" in cc: + logger.debug('Request header has "no-cache", cache bypassed') + return False + + if "max-age" in cc and cc["max-age"] == 0: + logger.debug('Request header has "max_age" as 0, cache bypassed') + return False + + # Request allows serving from the cache, let's see if we find something + cache_data = self.cache.get(cache_url) + if cache_data is None: + logger.debug("No cache entry available") + return False + + if isinstance(self.cache, SeparateBodyBaseCache): + body_file = self.cache.get_body(cache_url) + else: + body_file = None + + # Check whether it can be deserialized + resp = self.serializer.loads(request, cache_data, body_file) + if not resp: + logger.warning("Cache entry deserialization failed, entry ignored") + return False + + # If we have a cached permanent redirect, return it immediately. We + # don't need to test our response for other headers b/c it is + # intrinsically "cacheable" as it is Permanent. + # + # See: + # https://tools.ietf.org/html/rfc7231#section-6.4.2 + # + # Client can try to refresh the value by repeating the request + # with cache busting headers as usual (ie no-cache). + if int(resp.status) in PERMANENT_REDIRECT_STATUSES: + msg = ( + "Returning cached permanent redirect response " + "(ignoring date and etag information)" + ) + logger.debug(msg) + return resp + + headers = CaseInsensitiveDict(resp.headers) + if not headers or "date" not in headers: + if "etag" not in headers: + # Without date or etag, the cached response can never be used + # and should be deleted. + logger.debug("Purging cached response: no date or etag") + self.cache.delete(cache_url) + logger.debug("Ignoring cached response: no date") + return False + + now = time.time() + date = calendar.timegm(parsedate_tz(headers["date"])) + current_age = max(0, now - date) + logger.debug("Current age based on date: %i", current_age) + + # TODO: There is an assumption that the result will be a + # urllib3 response object. This may not be best since we + # could probably avoid instantiating or constructing the + # response until we know we need it. + resp_cc = self.parse_cache_control(headers) + + # determine freshness + freshness_lifetime = 0 + + # Check the max-age pragma in the cache control header + if "max-age" in resp_cc: + freshness_lifetime = resp_cc["max-age"] + logger.debug("Freshness lifetime from max-age: %i", freshness_lifetime) + + # If there isn't a max-age, check for an expires header + elif "expires" in headers: + expires = parsedate_tz(headers["expires"]) + if expires is not None: + expire_time = calendar.timegm(expires) - date + freshness_lifetime = max(0, expire_time) + logger.debug("Freshness lifetime from expires: %i", freshness_lifetime) + + # Determine if we are setting freshness limit in the + # request. Note, this overrides what was in the response. 
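+        # e.g. a client sending "Cache-Control: max-age=5" treats any cached
+        # copy older than five seconds as stale, even if the server originally
+        # granted a longer freshness lifetime.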
+ if "max-age" in cc: + freshness_lifetime = cc["max-age"] + logger.debug( + "Freshness lifetime from request max-age: %i", freshness_lifetime + ) + + if "min-fresh" in cc: + min_fresh = cc["min-fresh"] + # adjust our current age by our min fresh + current_age += min_fresh + logger.debug("Adjusted current age from min-fresh: %i", current_age) + + # Return entry if it is fresh enough + if freshness_lifetime > current_age: + logger.debug('The response is "fresh", returning cached response') + logger.debug("%i > %i", freshness_lifetime, current_age) + return resp + + # we're not fresh. If we don't have an Etag, clear it out + if "etag" not in headers: + logger.debug('The cached response is "stale" with no etag, purging') + self.cache.delete(cache_url) + + # return the original handler + return False + + def conditional_headers(self, request): + cache_url = self.cache_url(request.url) + resp = self.serializer.loads(request, self.cache.get(cache_url)) + new_headers = {} + + if resp: + headers = CaseInsensitiveDict(resp.headers) + + if "etag" in headers: + new_headers["If-None-Match"] = headers["ETag"] + + if "last-modified" in headers: + new_headers["If-Modified-Since"] = headers["Last-Modified"] + + return new_headers + + def _cache_set(self, cache_url, request, response, body=None, expires_time=None): + """ + Store the data in the cache. + """ + if isinstance(self.cache, SeparateBodyBaseCache): + # We pass in the body separately; just put a placeholder empty + # string in the metadata. + self.cache.set( + cache_url, + self.serializer.dumps(request, response, b""), + expires=expires_time, + ) + self.cache.set_body(cache_url, body) + else: + self.cache.set( + cache_url, + self.serializer.dumps(request, response, body), + expires=expires_time, + ) + + def cache_response(self, request, response, body=None, status_codes=None): + """ + Algorithm for caching requests. + + This assumes a requests Response object. + """ + # From httplib2: Don't cache 206's since we aren't going to + # handle byte range requests + cacheable_status_codes = status_codes or self.cacheable_status_codes + if response.status not in cacheable_status_codes: + logger.debug( + "Status code %s not in %s", response.status, cacheable_status_codes + ) + return + + response_headers = CaseInsensitiveDict(response.headers) + + if "date" in response_headers: + date = calendar.timegm(parsedate_tz(response_headers["date"])) + else: + date = 0 + + # If we've been given a body, our response has a Content-Length, that + # Content-Length is valid then we can check to see if the body we've + # been given matches the expected size, and if it doesn't we'll just + # skip trying to cache it. 
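+        # A length mismatch usually means a truncated download (e.g. a dropped
+        # connection); caching such a body would keep serving corrupt content.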
+ if ( + body is not None + and "content-length" in response_headers + and response_headers["content-length"].isdigit() + and int(response_headers["content-length"]) != len(body) + ): + return + + cc_req = self.parse_cache_control(request.headers) + cc = self.parse_cache_control(response_headers) + + cache_url = self.cache_url(request.url) + logger.debug('Updating cache with response from "%s"', cache_url) + + # Delete it from the cache if we happen to have it stored there + no_store = False + if "no-store" in cc: + no_store = True + logger.debug('Response header has "no-store"') + if "no-store" in cc_req: + no_store = True + logger.debug('Request header has "no-store"') + if no_store and self.cache.get(cache_url): + logger.debug('Purging existing cache entry to honor "no-store"') + self.cache.delete(cache_url) + if no_store: + return + + # https://tools.ietf.org/html/rfc7234#section-4.1: + # A Vary header field-value of "*" always fails to match. + # Storing such a response leads to a deserialization warning + # during cache lookup and is not allowed to ever be served, + # so storing it can be avoided. + if "*" in response_headers.get("vary", ""): + logger.debug('Response header has "Vary: *"') + return + + # If we've been given an etag, then keep the response + if self.cache_etags and "etag" in response_headers: + expires_time = 0 + if response_headers.get("expires"): + expires = parsedate_tz(response_headers["expires"]) + if expires is not None: + expires_time = calendar.timegm(expires) - date + + expires_time = max(expires_time, 14 * 86400) + + logger.debug("etag object cached for {0} seconds".format(expires_time)) + logger.debug("Caching due to etag") + self._cache_set(cache_url, request, response, body, expires_time) + + # Add to the cache any permanent redirects. We do this before looking + # that the Date headers. + elif int(response.status) in PERMANENT_REDIRECT_STATUSES: + logger.debug("Caching permanent redirect") + self._cache_set(cache_url, request, response, b"") + + # Add to the cache if the response headers demand it. If there + # is no date header then we can't do anything about expiring + # the cache. + elif "date" in response_headers: + date = calendar.timegm(parsedate_tz(response_headers["date"])) + # cache when there is a max-age > 0 + if "max-age" in cc and cc["max-age"] > 0: + logger.debug("Caching b/c date exists and max-age > 0") + expires_time = cc["max-age"] + self._cache_set( + cache_url, + request, + response, + body, + expires_time, + ) + + # If the request can expire, it means we should cache it + # in the meantime. + elif "expires" in response_headers: + if response_headers["expires"]: + expires = parsedate_tz(response_headers["expires"]) + if expires is not None: + expires_time = calendar.timegm(expires) - date + else: + expires_time = None + + logger.debug( + "Caching b/c of expires header. expires in {0} seconds".format( + expires_time + ) + ) + self._cache_set( + cache_url, + request, + response, + body, + expires_time, + ) + + def update_cached_response(self, request, response): + """On a 304 we will get a new set of headers that we want to + update our cached value with, assuming we have one. + + This should only ever be called when we've sent an ETag and + gotten a 304 as the response. 
+ """ + cache_url = self.cache_url(request.url) + + cached_response = self.serializer.loads(request, self.cache.get(cache_url)) + + if not cached_response: + # we didn't have a cached response + return response + + # Lets update our headers with the headers from the new request: + # http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26#section-4.1 + # + # The server isn't supposed to send headers that would make + # the cached body invalid. But... just in case, we'll be sure + # to strip out ones we know that might be problmatic due to + # typical assumptions. + excluded_headers = ["content-length"] + + cached_response.headers.update( + dict( + (k, v) + for k, v in response.headers.items() + if k.lower() not in excluded_headers + ) + ) + + # we want a 200 b/c we have content via the cache + cached_response.status = 200 + + # update our cache + self._cache_set(cache_url, request, cached_response) + + return cached_response diff --git a/env/lib/python3.8/site-packages/cachecontrol/filewrapper.py b/env/lib/python3.8/site-packages/cachecontrol/filewrapper.py new file mode 100644 index 00000000..f5ed5f6f --- /dev/null +++ b/env/lib/python3.8/site-packages/cachecontrol/filewrapper.py @@ -0,0 +1,111 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +from tempfile import NamedTemporaryFile +import mmap + + +class CallbackFileWrapper(object): + """ + Small wrapper around a fp object which will tee everything read into a + buffer, and when that file is closed it will execute a callback with the + contents of that buffer. + + All attributes are proxied to the underlying file object. + + This class uses members with a double underscore (__) leading prefix so as + not to accidentally shadow an attribute. + + The data is stored in a temporary file until it is all available. As long + as the temporary files directory is disk-based (sometimes it's a + memory-backed-``tmpfs`` on Linux), data will be unloaded to disk if memory + pressure is high. For small files the disk usually won't be used at all, + it'll all be in the filesystem memory cache, so there should be no + performance impact. + """ + + def __init__(self, fp, callback): + self.__buf = NamedTemporaryFile("rb+", delete=True) + self.__fp = fp + self.__callback = callback + + def __getattr__(self, name): + # The vaguaries of garbage collection means that self.__fp is + # not always set. By using __getattribute__ and the private + # name[0] allows looking up the attribute value and raising an + # AttributeError when it doesn't exist. This stop thigns from + # infinitely recursing calls to getattr in the case where + # self.__fp hasn't been set. + # + # [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers + fp = self.__getattribute__("_CallbackFileWrapper__fp") + return getattr(fp, name) + + def __is_fp_closed(self): + try: + return self.__fp.fp is None + + except AttributeError: + pass + + try: + return self.__fp.closed + + except AttributeError: + pass + + # We just don't cache it then. + # TODO: Add some logging here... + return False + + def _close(self): + if self.__callback: + if self.__buf.tell() == 0: + # Empty file: + result = b"" + else: + # Return the data without actually loading it into memory, + # relying on Python's buffer API and mmap(). mmap() just gives + # a view directly into the filesystem's memory cache, so it + # doesn't result in duplicate memory use. 
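+                # ACCESS_READ keeps the mapping read-only, and wrapping it in
+                # memoryview() hands the callback a zero-copy, bytes-like view
+                # of the whole temporary file.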
+                self.__buf.seek(0, 0)
+                result = memoryview(
+                    mmap.mmap(self.__buf.fileno(), 0, access=mmap.ACCESS_READ)
+                )
+            self.__callback(result)
+
+        # We assign this to None here, because otherwise we can get into
+        # really tricky problems where the CPython interpreter deadlocks
+        # because the callback is holding a reference to something which
+        # has a __del__ method. Setting this to None breaks the cycle
+        # and allows the garbage collector to do its thing normally.
+        self.__callback = None
+
+        # Closing the temporary file releases memory and frees disk space.
+        # Important when caching big files.
+        self.__buf.close()
+
+    def read(self, amt=None):
+        data = self.__fp.read(amt)
+        if data:
+            # We may be dealing with b'', a sign that things are over:
+            # it's passed e.g. after we've already closed self.__buf.
+            self.__buf.write(data)
+        if self.__is_fp_closed():
+            self._close()
+
+        return data
+
+    def _safe_read(self, amt):
+        data = self.__fp._safe_read(amt)
+        if amt == 2 and data == b"\r\n":
+            # urllib executes this read to toss the CRLF at the end
+            # of the chunk.
+            return data
+
+        self.__buf.write(data)
+        if self.__is_fp_closed():
+            self._close()
+
+        return data
diff --git a/env/lib/python3.8/site-packages/cachecontrol/heuristics.py b/env/lib/python3.8/site-packages/cachecontrol/heuristics.py
new file mode 100644
index 00000000..ebe4a96f
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachecontrol/heuristics.py
@@ -0,0 +1,139 @@
+# SPDX-FileCopyrightText: 2015 Eric Larson
+#
+# SPDX-License-Identifier: Apache-2.0
+
+import calendar
+import time
+
+from email.utils import formatdate, parsedate, parsedate_tz
+
+from datetime import datetime, timedelta
+
+TIME_FMT = "%a, %d %b %Y %H:%M:%S GMT"
+
+
+def expire_after(delta, date=None):
+    date = date or datetime.utcnow()
+    return date + delta
+
+
+def datetime_to_header(dt):
+    return formatdate(calendar.timegm(dt.timetuple()))
+
+
+class BaseHeuristic(object):
+
+    def warning(self, response):
+        """
+        Return a valid 1xx warning header value describing the cache
+        adjustments.
+
+        The response is provided to allow warnings like 113
+        http://tools.ietf.org/html/rfc7234#section-5.5.4 where we need
+        to explicitly say the response is over 24 hours old.
+        """
+        return '110 - "Response is Stale"'
+
+    def update_headers(self, response):
+        """Update the response headers with any new headers.
+
+        NOTE: This SHOULD always include some Warning header to
+              signify that the response was cached by the client, not
+              by way of the provided headers.
+        """
+        return {}
+
+    def apply(self, response):
+        updated_headers = self.update_headers(response)
+
+        if updated_headers:
+            response.headers.update(updated_headers)
+            warning_header_value = self.warning(response)
+            if warning_header_value is not None:
+                response.headers.update({"Warning": warning_header_value})
+
+        return response
+
+
+class OneDayCache(BaseHeuristic):
+    """
+    Cache the response by providing an expires 1 day in the
+    future.
+    """
+
+    def update_headers(self, response):
+        headers = {}
+
+        if "expires" not in response.headers:
+            date = parsedate(response.headers["date"])
+            expires = expire_after(timedelta(days=1), date=datetime(*date[:6]))
+            headers["expires"] = datetime_to_header(expires)
+            headers["cache-control"] = "public"
+        return headers
+
+
+class ExpiresAfter(BaseHeuristic):
+    """
+    Cache **all** requests for a defined time period.
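+
+    Illustrative usage (added note, assuming the ``CacheControl`` session
+    wrapper defined in ``wrapper.py``)::
+
+        sess = CacheControl(requests.Session(), heuristic=ExpiresAfter(days=1))
+
+    which caches every response for one day, regardless of its own headers.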
+ """ + + def __init__(self, **kw): + self.delta = timedelta(**kw) + + def update_headers(self, response): + expires = expire_after(self.delta) + return {"expires": datetime_to_header(expires), "cache-control": "public"} + + def warning(self, response): + tmpl = "110 - Automatically cached for %s. Response might be stale" + return tmpl % self.delta + + +class LastModified(BaseHeuristic): + """ + If there is no Expires header already, fall back on Last-Modified + using the heuristic from + http://tools.ietf.org/html/rfc7234#section-4.2.2 + to calculate a reasonable value. + + Firefox also does something like this per + https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching_FAQ + http://lxr.mozilla.org/mozilla-release/source/netwerk/protocol/http/nsHttpResponseHead.cpp#397 + Unlike mozilla we limit this to 24-hr. + """ + cacheable_by_default_statuses = { + 200, 203, 204, 206, 300, 301, 404, 405, 410, 414, 501 + } + + def update_headers(self, resp): + headers = resp.headers + + if "expires" in headers: + return {} + + if "cache-control" in headers and headers["cache-control"] != "public": + return {} + + if resp.status not in self.cacheable_by_default_statuses: + return {} + + if "date" not in headers or "last-modified" not in headers: + return {} + + date = calendar.timegm(parsedate_tz(headers["date"])) + last_modified = parsedate(headers["last-modified"]) + if date is None or last_modified is None: + return {} + + now = time.time() + current_age = max(0, now - date) + delta = date - calendar.timegm(last_modified) + freshness_lifetime = max(0, min(delta / 10, 24 * 3600)) + if freshness_lifetime <= current_age: + return {} + + expires = date + freshness_lifetime + return {"expires": time.strftime(TIME_FMT, time.gmtime(expires))} + + def warning(self, resp): + return None diff --git a/env/lib/python3.8/site-packages/cachecontrol/serialize.py b/env/lib/python3.8/site-packages/cachecontrol/serialize.py new file mode 100644 index 00000000..70135d39 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachecontrol/serialize.py @@ -0,0 +1,190 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +import base64 +import io +import json +import zlib + +import msgpack +from requests.structures import CaseInsensitiveDict + +from .compat import HTTPResponse, pickle, text_type + + +def _b64_decode_bytes(b): + return base64.b64decode(b.encode("ascii")) + + +def _b64_decode_str(s): + return _b64_decode_bytes(s).decode("utf8") + + +_default_body_read = object() + + +class Serializer(object): + def dumps(self, request, response, body=None): + response_headers = CaseInsensitiveDict(response.headers) + + if body is None: + # When a body isn't passed in, we'll read the response. We + # also update the response with a new file handler to be + # sure it acts as though it was never read. + body = response.read(decode_content=False) + response._fp = io.BytesIO(body) + + # NOTE: This is all a bit weird, but it's really important that on + # Python 2.x these objects are unicode and not str, even when + # they contain only ascii. The problem here is that msgpack + # understands the difference between unicode and bytes and we + # have it set to differentiate between them, however Python 2 + # doesn't know the difference. Forcing these to unicode will be + # enough to have msgpack know the difference. 
+ data = { + u"response": { + u"body": body, # Empty bytestring if body is stored separately + u"headers": dict( + (text_type(k), text_type(v)) for k, v in response.headers.items() + ), + u"status": response.status, + u"version": response.version, + u"reason": text_type(response.reason), + u"strict": response.strict, + u"decode_content": response.decode_content, + } + } + + # Construct our vary headers + data[u"vary"] = {} + if u"vary" in response_headers: + varied_headers = response_headers[u"vary"].split(",") + for header in varied_headers: + header = text_type(header).strip() + header_value = request.headers.get(header, None) + if header_value is not None: + header_value = text_type(header_value) + data[u"vary"][header] = header_value + + return b",".join([b"cc=4", msgpack.dumps(data, use_bin_type=True)]) + + def loads(self, request, data, body_file=None): + # Short circuit if we've been given an empty set of data + if not data: + return + + # Determine what version of the serializer the data was serialized + # with + try: + ver, data = data.split(b",", 1) + except ValueError: + ver = b"cc=0" + + # Make sure that our "ver" is actually a version and isn't a false + # positive from a , being in the data stream. + if ver[:3] != b"cc=": + data = ver + data + ver = b"cc=0" + + # Get the version number out of the cc=N + ver = ver.split(b"=", 1)[-1].decode("ascii") + + # Dispatch to the actual load method for the given version + try: + return getattr(self, "_loads_v{}".format(ver))(request, data, body_file) + + except AttributeError: + # This is a version we don't have a loads function for, so we'll + # just treat it as a miss and return None + return + + def prepare_response(self, request, cached, body_file=None): + """Verify our vary headers match and construct a real urllib3 + HTTPResponse object. + """ + # Special case the '*' Vary value as it means we cannot actually + # determine if the cached response is suitable for this request. + # This case is also handled in the controller code when creating + # a cache entry, but is left here for backwards compatibility. + if "*" in cached.get("vary", {}): + return + + # Ensure that the Vary headers for the cached response match our + # request + for header, value in cached.get("vary", {}).items(): + if request.headers.get(header, None) != value: + return + + body_raw = cached["response"].pop("body") + + headers = CaseInsensitiveDict(data=cached["response"]["headers"]) + if headers.get("transfer-encoding", "") == "chunked": + headers.pop("transfer-encoding") + + cached["response"]["headers"] = headers + + try: + if body_file is None: + body = io.BytesIO(body_raw) + else: + body = body_file + except TypeError: + # This can happen if cachecontrol serialized to v1 format (pickle) + # using Python 2. A Python 2 str(byte string) will be unpickled as + # a Python 3 str (unicode string), which will cause the above to + # fail with: + # + # TypeError: 'str' does not support the buffer interface + body = io.BytesIO(body_raw.encode("utf8")) + + return HTTPResponse(body=body, preload_content=False, **cached["response"]) + + def _loads_v0(self, request, data, body_file=None): + # The original legacy cache data. This doesn't contain enough + # information to construct everything we need, so we'll treat this as + # a miss. 
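+        # (Added note: returning None is how the loaders in this class signal
+        # a cache miss to loads(); _loads_v3 below relies on the same
+        # convention.)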
+ return + + def _loads_v1(self, request, data, body_file=None): + try: + cached = pickle.loads(data) + except ValueError: + return + + return self.prepare_response(request, cached, body_file) + + def _loads_v2(self, request, data, body_file=None): + assert body_file is None + try: + cached = json.loads(zlib.decompress(data).decode("utf8")) + except (ValueError, zlib.error): + return + + # We need to decode the items that we've base64 encoded + cached["response"]["body"] = _b64_decode_bytes(cached["response"]["body"]) + cached["response"]["headers"] = dict( + (_b64_decode_str(k), _b64_decode_str(v)) + for k, v in cached["response"]["headers"].items() + ) + cached["response"]["reason"] = _b64_decode_str(cached["response"]["reason"]) + cached["vary"] = dict( + (_b64_decode_str(k), _b64_decode_str(v) if v is not None else v) + for k, v in cached["vary"].items() + ) + + return self.prepare_response(request, cached, body_file) + + def _loads_v3(self, request, data, body_file): + # Due to Python 2 encoding issues, it's impossible to know for sure + # exactly how to load v3 entries, thus we'll treat these as a miss so + # that they get rewritten out as v4 entries. + return + + def _loads_v4(self, request, data, body_file=None): + try: + cached = msgpack.loads(data, raw=False) + except ValueError: + return + + return self.prepare_response(request, cached, body_file) diff --git a/env/lib/python3.8/site-packages/cachecontrol/wrapper.py b/env/lib/python3.8/site-packages/cachecontrol/wrapper.py new file mode 100644 index 00000000..b6ee7f20 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachecontrol/wrapper.py @@ -0,0 +1,33 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +from .adapter import CacheControlAdapter +from .cache import DictCache + + +def CacheControl( + sess, + cache=None, + cache_etags=True, + serializer=None, + heuristic=None, + controller_class=None, + adapter_class=None, + cacheable_methods=None, +): + + cache = DictCache() if cache is None else cache + adapter_class = adapter_class or CacheControlAdapter + adapter = adapter_class( + cache, + cache_etags=cache_etags, + serializer=serializer, + heuristic=heuristic, + controller_class=controller_class, + cacheable_methods=cacheable_methods, + ) + sess.mount("http://", adapter) + sess.mount("https://", adapter) + + return sess diff --git a/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/LICENSE new file mode 100644 index 00000000..bd185ce7 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/LICENSE @@ -0,0 +1,20 @@ +The MIT License (MIT) + +Copyright (c) 2014-2022 Thomas Kemmer + +Permission is hereby granted, free of charge, to any person obtaining a copy of +this software and associated documentation files (the "Software"), to deal in +the Software without restriction, including without limitation the rights to +use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software is furnished to do so, +subject to the following conditions: + +The above copyright notice and this permission notice shall be included 
in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS +FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR +COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER +IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/METADATA b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/METADATA new file mode 100644 index 00000000..294ead8e --- /dev/null +++ b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/METADATA @@ -0,0 +1,146 @@ +Metadata-Version: 2.1 +Name: cachetools +Version: 5.2.0 +Summary: Extensible memoizing collections and decorators +Home-page: https://github.com/tkem/cachetools/ +Author: Thomas Kemmer +Author-email: tkemmer@computer.org +License: MIT +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Other Environment +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Requires-Python: ~=3.7 +License-File: LICENSE + +cachetools +======================================================================== + +.. image:: https://img.shields.io/pypi/v/cachetools + :target: https://pypi.org/project/cachetools/ + :alt: Latest PyPI version + +.. image:: https://img.shields.io/github/workflow/status/tkem/cachetools/CI + :target: https://github.com/tkem/cachetools/actions/workflows/ci.yml + :alt: CI build status + +.. image:: https://img.shields.io/readthedocs/cachetools + :target: https://cachetools.readthedocs.io/ + :alt: Documentation build status + +.. image:: https://img.shields.io/codecov/c/github/tkem/cachetools/master.svg + :target: https://codecov.io/gh/tkem/cachetools + :alt: Test coverage + +.. image:: https://img.shields.io/librariesio/sourcerank/pypi/cachetools + :target: https://libraries.io/pypi/cachetools + :alt: Libraries.io SourceRank + +.. image:: https://img.shields.io/github/license/tkem/cachetools + :target: https://raw.github.com/tkem/cachetools/master/LICENSE + :alt: License + +.. image:: https://img.shields.io/badge/code%20style-black-000000.svg + :target: https://github.com/psf/black + :alt: Code style: black + + +This module provides various memoizing collections and decorators, +including variants of the Python Standard Library's `@lru_cache`_ +function decorator. + +.. 
code-block:: python + + from cachetools import cached, LRUCache, TTLCache + + # speed up calculating Fibonacci numbers with dynamic programming + @cached(cache={}) + def fib(n): + return n if n < 2 else fib(n - 1) + fib(n - 2) + + # cache least recently used Python Enhancement Proposals + @cached(cache=LRUCache(maxsize=32)) + def get_pep(num): + url = 'http://www.python.org/dev/peps/pep-%04d/' % num + with urllib.request.urlopen(url) as s: + return s.read() + + # cache weather data for no longer than ten minutes + @cached(cache=TTLCache(maxsize=1024, ttl=600)) + def get_weather(place): + return owm.weather_at_place(place).get_weather() + +For the purpose of this module, a *cache* is a mutable_ mapping_ of a +fixed maximum size. When the cache is full, i.e. by adding another +item the cache would exceed its maximum size, the cache must choose +which item(s) to discard based on a suitable `cache algorithm`_. + +This module provides multiple cache classes based on different cache +algorithms, as well as decorators for easily memoizing function and +method calls. + + +Installation +------------------------------------------------------------------------ + +cachetools is available from PyPI_ and can be installed by running:: + + pip install cachetools + +Typing stubs for this package are provided by typeshed_ and can be +installed by running:: + + pip install types-cachetools + + +Project Resources +------------------------------------------------------------------------ + +- `Documentation`_ +- `Issue tracker`_ +- `Source code`_ +- `Change log`_ + + +Related Projects +------------------------------------------------------------------------ + +- asyncache_: Helpers to use cachetools with async functions +- CacheToolsUtils_: Cachetools Utilities +- `kids.cache`_: Kids caching library +- shelved-cache_: Persistent cache for Python cachetools + + +License +------------------------------------------------------------------------ + +Copyright (c) 2014-2022 Thomas Kemmer. + +Licensed under the `MIT License`_. + + +.. _@lru_cache: https://docs.python.org/3/library/functools.html#functools.lru_cache +.. _mutable: https://docs.python.org/dev/glossary.html#term-mutable +.. _mapping: https://docs.python.org/dev/glossary.html#term-mapping +.. _cache algorithm: https://en.wikipedia.org/wiki/Cache_algorithms + +.. _PyPI: https://pypi.org/project/cachetools/ +.. _typeshed: https://github.com/python/typeshed/ +.. _Documentation: https://cachetools.readthedocs.io/ +.. _Issue tracker: https://github.com/tkem/cachetools/issues/ +.. _Source code: https://github.com/tkem/cachetools/ +.. _Change log: https://github.com/tkem/cachetools/blob/master/CHANGELOG.rst +.. _MIT License: https://raw.github.com/tkem/cachetools/master/LICENSE + +.. _asyncache: https://pypi.org/project/asyncache/ +.. _CacheToolsUtils: https://pypi.org/project/CacheToolsUtils/ +.. _kids.cache: https://pypi.org/project/kids.cache/ +.. 
_shelved-cache: https://pypi.org/project/shelved-cache/ diff --git a/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/RECORD b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/RECORD new file mode 100644 index 00000000..871d2686 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/RECORD @@ -0,0 +1,12 @@ +cachetools-5.2.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +cachetools-5.2.0.dist-info/LICENSE,sha256=diYME3Cn1B1frHGifXgfOt1dckmt-7-pMIRtLZ5H29U,1085 +cachetools-5.2.0.dist-info/METADATA,sha256=cofhuzJUGMo5kVMpE1yvKFxOMAGYF9Fe14UJEtUTr4s,5124 +cachetools-5.2.0.dist-info/RECORD,, +cachetools-5.2.0.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +cachetools-5.2.0.dist-info/top_level.txt,sha256=ai2FH78TGwoBcCgVfoqbzk5IQCtnDukdSs4zKuVPvDs,11 +cachetools/__init__.py,sha256=rEErTnGMZszbSHz8POD5h_DJgVOuocRelxu6G8zMOlY,21785 +cachetools/__pycache__/__init__.cpython-38.pyc,, +cachetools/__pycache__/func.cpython-38.pyc,, +cachetools/__pycache__/keys.cpython-38.pyc,, +cachetools/func.py,sha256=c5CAlRae2hUymZaeAJi6GEc_VpZIn6Mosi95bAlQEcs,4934 +cachetools/keys.py,sha256=d-cpW252E_uV50ySlw13IevdNQnSc0MfiMViImQktRI,1613 diff --git a/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/top_level.txt b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/top_level.txt new file mode 100644 index 00000000..50d14084 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachetools-5.2.0.dist-info/top_level.txt @@ -0,0 +1 @@ +cachetools diff --git a/env/lib/python3.8/site-packages/cachetools/__init__.py b/env/lib/python3.8/site-packages/cachetools/__init__.py new file mode 100644 index 00000000..14c47b18 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachetools/__init__.py @@ -0,0 +1,745 @@ +"""Extensible memoizing collections and decorators.""" + +__all__ = ( + "Cache", + "FIFOCache", + "LFUCache", + "LRUCache", + "MRUCache", + "RRCache", + "TLRUCache", + "TTLCache", + "cached", + "cachedmethod", +) + +__version__ = "5.2.0" + +import collections +import collections.abc +import functools +import heapq +import random +import time + +from . 
import keys + + +class _DefaultSize: + + __slots__ = () + + def __getitem__(self, _): + return 1 + + def __setitem__(self, _, value): + assert value == 1 + + def pop(self, _): + return 1 + + +class Cache(collections.abc.MutableMapping): + """Mutable mapping to serve as a simple cache or cache base class.""" + + __marker = object() + + __size = _DefaultSize() + + def __init__(self, maxsize, getsizeof=None): + if getsizeof: + self.getsizeof = getsizeof + if self.getsizeof is not Cache.getsizeof: + self.__size = dict() + self.__data = dict() + self.__currsize = 0 + self.__maxsize = maxsize + + def __repr__(self): + return "%s(%s, maxsize=%r, currsize=%r)" % ( + self.__class__.__name__, + repr(self.__data), + self.__maxsize, + self.__currsize, + ) + + def __getitem__(self, key): + try: + return self.__data[key] + except KeyError: + return self.__missing__(key) + + def __setitem__(self, key, value): + maxsize = self.__maxsize + size = self.getsizeof(value) + if size > maxsize: + raise ValueError("value too large") + if key not in self.__data or self.__size[key] < size: + while self.__currsize + size > maxsize: + self.popitem() + if key in self.__data: + diffsize = size - self.__size[key] + else: + diffsize = size + self.__data[key] = value + self.__size[key] = size + self.__currsize += diffsize + + def __delitem__(self, key): + size = self.__size.pop(key) + del self.__data[key] + self.__currsize -= size + + def __contains__(self, key): + return key in self.__data + + def __missing__(self, key): + raise KeyError(key) + + def __iter__(self): + return iter(self.__data) + + def __len__(self): + return len(self.__data) + + def get(self, key, default=None): + if key in self: + return self[key] + else: + return default + + def pop(self, key, default=__marker): + if key in self: + value = self[key] + del self[key] + elif default is self.__marker: + raise KeyError(key) + else: + value = default + return value + + def setdefault(self, key, default=None): + if key in self: + value = self[key] + else: + self[key] = value = default + return value + + @property + def maxsize(self): + """The maximum size of the cache.""" + return self.__maxsize + + @property + def currsize(self): + """The current size of the cache.""" + return self.__currsize + + @staticmethod + def getsizeof(value): + """Return the size of a cache element's value.""" + return 1 + + +class FIFOCache(Cache): + """First In First Out (FIFO) cache implementation.""" + + def __init__(self, maxsize, getsizeof=None): + Cache.__init__(self, maxsize, getsizeof) + self.__order = collections.OrderedDict() + + def __setitem__(self, key, value, cache_setitem=Cache.__setitem__): + cache_setitem(self, key, value) + try: + self.__order.move_to_end(key) + except KeyError: + self.__order[key] = None + + def __delitem__(self, key, cache_delitem=Cache.__delitem__): + cache_delitem(self, key) + del self.__order[key] + + def popitem(self): + """Remove and return the `(key, value)` pair first inserted.""" + try: + key = next(iter(self.__order)) + except StopIteration: + raise KeyError("%s is empty" % type(self).__name__) from None + else: + return (key, self.pop(key)) + + +class LFUCache(Cache): + """Least Frequently Used (LFU) cache implementation.""" + + def __init__(self, maxsize, getsizeof=None): + Cache.__init__(self, maxsize, getsizeof) + self.__counter = collections.Counter() + + def __getitem__(self, key, cache_getitem=Cache.__getitem__): + value = cache_getitem(self, key) + if key in self: # __missing__ may not store item + self.__counter[key] -= 1 + 
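+            # (Added note: counts are stored negated, so heavily used keys
+            # have the smallest values and Counter.most_common(1) in
+            # popitem() surfaces the least frequently used key.)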
return value + + def __setitem__(self, key, value, cache_setitem=Cache.__setitem__): + cache_setitem(self, key, value) + self.__counter[key] -= 1 + + def __delitem__(self, key, cache_delitem=Cache.__delitem__): + cache_delitem(self, key) + del self.__counter[key] + + def popitem(self): + """Remove and return the `(key, value)` pair least frequently used.""" + try: + ((key, _),) = self.__counter.most_common(1) + except ValueError: + raise KeyError("%s is empty" % type(self).__name__) from None + else: + return (key, self.pop(key)) + + +class LRUCache(Cache): + """Least Recently Used (LRU) cache implementation.""" + + def __init__(self, maxsize, getsizeof=None): + Cache.__init__(self, maxsize, getsizeof) + self.__order = collections.OrderedDict() + + def __getitem__(self, key, cache_getitem=Cache.__getitem__): + value = cache_getitem(self, key) + if key in self: # __missing__ may not store item + self.__update(key) + return value + + def __setitem__(self, key, value, cache_setitem=Cache.__setitem__): + cache_setitem(self, key, value) + self.__update(key) + + def __delitem__(self, key, cache_delitem=Cache.__delitem__): + cache_delitem(self, key) + del self.__order[key] + + def popitem(self): + """Remove and return the `(key, value)` pair least recently used.""" + try: + key = next(iter(self.__order)) + except StopIteration: + raise KeyError("%s is empty" % type(self).__name__) from None + else: + return (key, self.pop(key)) + + def __update(self, key): + try: + self.__order.move_to_end(key) + except KeyError: + self.__order[key] = None + + +class MRUCache(Cache): + """Most Recently Used (MRU) cache implementation.""" + + def __init__(self, maxsize, getsizeof=None): + Cache.__init__(self, maxsize, getsizeof) + self.__order = collections.OrderedDict() + + def __getitem__(self, key, cache_getitem=Cache.__getitem__): + value = cache_getitem(self, key) + if key in self: # __missing__ may not store item + self.__update(key) + return value + + def __setitem__(self, key, value, cache_setitem=Cache.__setitem__): + cache_setitem(self, key, value) + self.__update(key) + + def __delitem__(self, key, cache_delitem=Cache.__delitem__): + cache_delitem(self, key) + del self.__order[key] + + def popitem(self): + """Remove and return the `(key, value)` pair most recently used.""" + try: + key = next(iter(self.__order)) + except StopIteration: + raise KeyError("%s is empty" % type(self).__name__) from None + else: + return (key, self.pop(key)) + + def __update(self, key): + try: + self.__order.move_to_end(key, last=False) + except KeyError: + self.__order[key] = None + + +class RRCache(Cache): + """Random Replacement (RR) cache implementation.""" + + def __init__(self, maxsize, choice=random.choice, getsizeof=None): + Cache.__init__(self, maxsize, getsizeof) + self.__choice = choice + + @property + def choice(self): + """The `choice` function used by the cache.""" + return self.__choice + + def popitem(self): + """Remove and return a random `(key, value)` pair.""" + try: + key = self.__choice(list(self)) + except IndexError: + raise KeyError("%s is empty" % type(self).__name__) from None + else: + return (key, self.pop(key)) + + +class _TimedCache(Cache): + """Base class for time aware cache implementations.""" + + class _Timer: + def __init__(self, timer): + self.__timer = timer + self.__nesting = 0 + + def __call__(self): + if self.__nesting == 0: + return self.__timer() + else: + return self.__time + + def __enter__(self): + if self.__nesting == 0: + self.__time = time = self.__timer() + else: + time = 
self.__time + self.__nesting += 1 + return time + + def __exit__(self, *exc): + self.__nesting -= 1 + + def __reduce__(self): + return _TimedCache._Timer, (self.__timer,) + + def __getattr__(self, name): + return getattr(self.__timer, name) + + def __init__(self, maxsize, timer=time.monotonic, getsizeof=None): + Cache.__init__(self, maxsize, getsizeof) + self.__timer = _TimedCache._Timer(timer) + + def __repr__(self, cache_repr=Cache.__repr__): + with self.__timer as time: + self.expire(time) + return cache_repr(self) + + def __len__(self, cache_len=Cache.__len__): + with self.__timer as time: + self.expire(time) + return cache_len(self) + + @property + def currsize(self): + with self.__timer as time: + self.expire(time) + return super().currsize + + @property + def timer(self): + """The timer function used by the cache.""" + return self.__timer + + def clear(self): + with self.__timer as time: + self.expire(time) + Cache.clear(self) + + def get(self, *args, **kwargs): + with self.__timer: + return Cache.get(self, *args, **kwargs) + + def pop(self, *args, **kwargs): + with self.__timer: + return Cache.pop(self, *args, **kwargs) + + def setdefault(self, *args, **kwargs): + with self.__timer: + return Cache.setdefault(self, *args, **kwargs) + + +class TTLCache(_TimedCache): + """LRU Cache implementation with per-item time-to-live (TTL) value.""" + + class _Link: + + __slots__ = ("key", "expires", "next", "prev") + + def __init__(self, key=None, expires=None): + self.key = key + self.expires = expires + + def __reduce__(self): + return TTLCache._Link, (self.key, self.expires) + + def unlink(self): + next = self.next + prev = self.prev + prev.next = next + next.prev = prev + + def __init__(self, maxsize, ttl, timer=time.monotonic, getsizeof=None): + _TimedCache.__init__(self, maxsize, timer, getsizeof) + self.__root = root = TTLCache._Link() + root.prev = root.next = root + self.__links = collections.OrderedDict() + self.__ttl = ttl + + def __contains__(self, key): + try: + link = self.__links[key] # no reordering + except KeyError: + return False + else: + return self.timer() < link.expires + + def __getitem__(self, key, cache_getitem=Cache.__getitem__): + try: + link = self.__getlink(key) + except KeyError: + expired = False + else: + expired = not (self.timer() < link.expires) + if expired: + return self.__missing__(key) + else: + return cache_getitem(self, key) + + def __setitem__(self, key, value, cache_setitem=Cache.__setitem__): + with self.timer as time: + self.expire(time) + cache_setitem(self, key, value) + try: + link = self.__getlink(key) + except KeyError: + self.__links[key] = link = TTLCache._Link(key) + else: + link.unlink() + link.expires = time + self.__ttl + link.next = root = self.__root + link.prev = prev = root.prev + prev.next = root.prev = link + + def __delitem__(self, key, cache_delitem=Cache.__delitem__): + cache_delitem(self, key) + link = self.__links.pop(key) + link.unlink() + if not (self.timer() < link.expires): + raise KeyError(key) + + def __iter__(self): + root = self.__root + curr = root.next + while curr is not root: + # "freeze" time for iterator access + with self.timer as time: + if time < curr.expires: + yield curr.key + curr = curr.next + + def __setstate__(self, state): + self.__dict__.update(state) + root = self.__root + root.prev = root.next = root + for link in sorted(self.__links.values(), key=lambda obj: obj.expires): + link.next = root + link.prev = prev = root.prev + prev.next = root.prev = link + self.expire(self.timer()) + + @property + def 
ttl(self): + """The time-to-live value of the cache's items.""" + return self.__ttl + + def expire(self, time=None): + """Remove expired items from the cache.""" + if time is None: + time = self.timer() + root = self.__root + curr = root.next + links = self.__links + cache_delitem = Cache.__delitem__ + while curr is not root and not (time < curr.expires): + cache_delitem(self, curr.key) + del links[curr.key] + next = curr.next + curr.unlink() + curr = next + + def popitem(self): + """Remove and return the `(key, value)` pair least recently used that + has not already expired. + + """ + with self.timer as time: + self.expire(time) + try: + key = next(iter(self.__links)) + except StopIteration: + raise KeyError("%s is empty" % type(self).__name__) from None + else: + return (key, self.pop(key)) + + def __getlink(self, key): + value = self.__links[key] + self.__links.move_to_end(key) + return value + + +class TLRUCache(_TimedCache): + """Time aware Least Recently Used (TLRU) cache implementation.""" + + @functools.total_ordering + class _Item: + + __slots__ = ("key", "expires", "removed") + + def __init__(self, key=None, expires=None): + self.key = key + self.expires = expires + self.removed = False + + def __lt__(self, other): + return self.expires < other.expires + + def __init__(self, maxsize, ttu, timer=time.monotonic, getsizeof=None): + _TimedCache.__init__(self, maxsize, timer, getsizeof) + self.__items = collections.OrderedDict() + self.__order = [] + self.__ttu = ttu + + def __contains__(self, key): + try: + item = self.__items[key] # no reordering + except KeyError: + return False + else: + return self.timer() < item.expires + + def __getitem__(self, key, cache_getitem=Cache.__getitem__): + try: + item = self.__getitem(key) + except KeyError: + expired = False + else: + expired = not (self.timer() < item.expires) + if expired: + return self.__missing__(key) + else: + return cache_getitem(self, key) + + def __setitem__(self, key, value, cache_setitem=Cache.__setitem__): + with self.timer as time: + expires = self.__ttu(key, value, time) + if not (time < expires): + return # skip expired items + self.expire(time) + cache_setitem(self, key, value) + # removing an existing item would break the heap structure, so + # only mark it as removed for now + try: + self.__getitem(key).removed = True + except KeyError: + pass + self.__items[key] = item = TLRUCache._Item(key, expires) + heapq.heappush(self.__order, item) + + def __delitem__(self, key, cache_delitem=Cache.__delitem__): + with self.timer as time: + # no self.expire() for performance reasons, e.g. 
self.clear() [#67] + cache_delitem(self, key) + item = self.__items.pop(key) + item.removed = True + if not (time < item.expires): + raise KeyError(key) + + def __iter__(self): + for curr in self.__order: + # "freeze" time for iterator access + with self.timer as time: + if time < curr.expires and not curr.removed: + yield curr.key + + @property + def ttu(self): + """The local time-to-use function used by the cache.""" + return self.__ttu + + def expire(self, time=None): + """Remove expired items from the cache.""" + if time is None: + time = self.timer() + items = self.__items + order = self.__order + # clean up the heap if too many items are marked as removed + if len(order) > len(items) * 2: + self.__order = order = [item for item in order if not item.removed] + heapq.heapify(order) + cache_delitem = Cache.__delitem__ + while order and (order[0].removed or not (time < order[0].expires)): + item = heapq.heappop(order) + if not item.removed: + cache_delitem(self, item.key) + del items[item.key] + + def popitem(self): + """Remove and return the `(key, value)` pair least recently used that + has not already expired. + + """ + with self.timer as time: + self.expire(time) + try: + key = next(iter(self.__items)) + except StopIteration: + raise KeyError("%s is empty" % self.__class__.__name__) from None + else: + return (key, self.pop(key)) + + def __getitem(self, key): + value = self.__items[key] + self.__items.move_to_end(key) + return value + + +def cached(cache, key=keys.hashkey, lock=None): + """Decorator to wrap a function with a memoizing callable that saves + results in a cache. + + """ + + def decorator(func): + if cache is None: + + def wrapper(*args, **kwargs): + return func(*args, **kwargs) + + def clear(): + pass + + elif lock is None: + + def wrapper(*args, **kwargs): + k = key(*args, **kwargs) + try: + return cache[k] + except KeyError: + pass # key not found + v = func(*args, **kwargs) + try: + cache[k] = v + except ValueError: + pass # value too large + return v + + def clear(): + cache.clear() + + else: + + def wrapper(*args, **kwargs): + k = key(*args, **kwargs) + try: + with lock: + return cache[k] + except KeyError: + pass # key not found + v = func(*args, **kwargs) + # in case of a race, prefer the item already in the cache + try: + with lock: + return cache.setdefault(k, v) + except ValueError: + return v # value too large + + def clear(): + with lock: + cache.clear() + + wrapper.cache = cache + wrapper.cache_key = key + wrapper.cache_lock = lock + wrapper.cache_clear = clear + + return functools.update_wrapper(wrapper, func) + + return decorator + + +def cachedmethod(cache, key=keys.methodkey, lock=None): + """Decorator to wrap a class or instance method with a memoizing + callable that saves results in a cache. 
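+    Typically ``cache`` is a callable that returns the per-instance
+    cache, e.g. ``operator.attrgetter('cache')`` for objects that set
+    ``self.cache`` in ``__init__`` (illustrative note, not upstream text).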
+ + """ + + def decorator(method): + if lock is None: + + def wrapper(self, *args, **kwargs): + c = cache(self) + if c is None: + return method(self, *args, **kwargs) + k = key(self, *args, **kwargs) + try: + return c[k] + except KeyError: + pass # key not found + v = method(self, *args, **kwargs) + try: + c[k] = v + except ValueError: + pass # value too large + return v + + def clear(self): + c = cache(self) + if c is not None: + c.clear() + + else: + + def wrapper(self, *args, **kwargs): + c = cache(self) + if c is None: + return method(self, *args, **kwargs) + k = key(self, *args, **kwargs) + try: + with lock(self): + return c[k] + except KeyError: + pass # key not found + v = method(self, *args, **kwargs) + # in case of a race, prefer the item already in the cache + try: + with lock(self): + return c.setdefault(k, v) + except ValueError: + return v # value too large + + def clear(self): + c = cache(self) + if c is not None: + with lock(self): + c.clear() + + wrapper.cache = cache + wrapper.cache_key = key + wrapper.cache_lock = lock + wrapper.cache_clear = clear + + return functools.update_wrapper(wrapper, method) + + return decorator diff --git a/env/lib/python3.8/site-packages/cachetools/func.py b/env/lib/python3.8/site-packages/cachetools/func.py new file mode 100644 index 00000000..4366eaa2 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachetools/func.py @@ -0,0 +1,172 @@ +"""`functools.lru_cache` compatible memoizing function decorators.""" + +__all__ = ("fifo_cache", "lfu_cache", "lru_cache", "mru_cache", "rr_cache", "ttl_cache") + +import collections +import functools +import math +import random +import time + +try: + from threading import RLock +except ImportError: # pragma: no cover + from dummy_threading import RLock + +from . import FIFOCache, LFUCache, LRUCache, MRUCache, RRCache, TTLCache +from . import keys + + +_CacheInfo = collections.namedtuple( + "CacheInfo", ["hits", "misses", "maxsize", "currsize"] +) + + +class _UnboundCache(dict): + @property + def maxsize(self): + return None + + @property + def currsize(self): + return len(self) + + +class _UnboundTTLCache(TTLCache): + def __init__(self, ttl, timer): + TTLCache.__init__(self, math.inf, ttl, timer) + + @property + def maxsize(self): + return None + + +def _cache(cache, typed): + maxsize = cache.maxsize + + def decorator(func): + key = keys.typedkey if typed else keys.hashkey + hits = misses = 0 + lock = RLock() + + def wrapper(*args, **kwargs): + nonlocal hits, misses + k = key(*args, **kwargs) + with lock: + try: + v = cache[k] + hits += 1 + return v + except KeyError: + misses += 1 + v = func(*args, **kwargs) + # in case of a race, prefer the item already in the cache + try: + with lock: + return cache.setdefault(k, v) + except ValueError: + return v # value too large + + def cache_info(): + with lock: + maxsize = cache.maxsize + currsize = cache.currsize + return _CacheInfo(hits, misses, maxsize, currsize) + + def cache_clear(): + nonlocal hits, misses + with lock: + try: + cache.clear() + finally: + hits = misses = 0 + + wrapper.cache_info = cache_info + wrapper.cache_clear = cache_clear + wrapper.cache_parameters = lambda: {"maxsize": maxsize, "typed": typed} + functools.update_wrapper(wrapper, func) + return wrapper + + return decorator + + +def fifo_cache(maxsize=128, typed=False): + """Decorator to wrap a function with a memoizing callable that saves + up to `maxsize` results based on a First In First Out (FIFO) + algorithm. 
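+    As the dispatch below shows, ``maxsize=None`` selects an unbounded
+    cache, and passing a callable (i.e. using the decorator without
+    parentheses) applies the default ``maxsize`` of 128.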
+ + """ + if maxsize is None: + return _cache(_UnboundCache(), typed) + elif callable(maxsize): + return _cache(FIFOCache(128), typed)(maxsize) + else: + return _cache(FIFOCache(maxsize), typed) + + +def lfu_cache(maxsize=128, typed=False): + """Decorator to wrap a function with a memoizing callable that saves + up to `maxsize` results based on a Least Frequently Used (LFU) + algorithm. + + """ + if maxsize is None: + return _cache(_UnboundCache(), typed) + elif callable(maxsize): + return _cache(LFUCache(128), typed)(maxsize) + else: + return _cache(LFUCache(maxsize), typed) + + +def lru_cache(maxsize=128, typed=False): + """Decorator to wrap a function with a memoizing callable that saves + up to `maxsize` results based on a Least Recently Used (LRU) + algorithm. + + """ + if maxsize is None: + return _cache(_UnboundCache(), typed) + elif callable(maxsize): + return _cache(LRUCache(128), typed)(maxsize) + else: + return _cache(LRUCache(maxsize), typed) + + +def mru_cache(maxsize=128, typed=False): + """Decorator to wrap a function with a memoizing callable that saves + up to `maxsize` results based on a Most Recently Used (MRU) + algorithm. + """ + if maxsize is None: + return _cache(_UnboundCache(), typed) + elif callable(maxsize): + return _cache(MRUCache(128), typed)(maxsize) + else: + return _cache(MRUCache(maxsize), typed) + + +def rr_cache(maxsize=128, choice=random.choice, typed=False): + """Decorator to wrap a function with a memoizing callable that saves + up to `maxsize` results based on a Random Replacement (RR) + algorithm. + + """ + if maxsize is None: + return _cache(_UnboundCache(), typed) + elif callable(maxsize): + return _cache(RRCache(128, choice), typed)(maxsize) + else: + return _cache(RRCache(maxsize, choice), typed) + + +def ttl_cache(maxsize=128, ttl=600, timer=time.monotonic, typed=False): + """Decorator to wrap a function with a memoizing callable that saves + up to `maxsize` results based on a Least Recently Used (LRU) + algorithm with a per-item time-to-live (TTL) value. + """ + if maxsize is None: + return _cache(_UnboundTTLCache(ttl, timer), typed) + elif callable(maxsize): + return _cache(TTLCache(128, ttl, timer), typed)(maxsize) + else: + return _cache(TTLCache(maxsize, ttl, timer), typed) diff --git a/env/lib/python3.8/site-packages/cachetools/keys.py b/env/lib/python3.8/site-packages/cachetools/keys.py new file mode 100644 index 00000000..f2feb418 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachetools/keys.py @@ -0,0 +1,57 @@ +"""Key functions for memoizing decorators.""" + +__all__ = ("hashkey", "methodkey", "typedkey") + + +class _HashedTuple(tuple): + """A tuple that ensures that hash() will be called no more than once + per element, since cache decorators will hash the key multiple + times on a cache miss. See also _HashedSeq in the standard + library functools implementation. 
+ + """ + + __hashvalue = None + + def __hash__(self, hash=tuple.__hash__): + hashvalue = self.__hashvalue + if hashvalue is None: + self.__hashvalue = hashvalue = hash(self) + return hashvalue + + def __add__(self, other, add=tuple.__add__): + return _HashedTuple(add(self, other)) + + def __radd__(self, other, add=tuple.__add__): + return _HashedTuple(add(other, self)) + + def __getstate__(self): + return {} + + +# used for separating keyword arguments; we do not use an object +# instance here so identity is preserved when pickling/unpickling +_kwmark = (_HashedTuple,) + + +def hashkey(*args, **kwargs): + """Return a cache key for the specified hashable arguments.""" + + if kwargs: + return _HashedTuple(args + sum(sorted(kwargs.items()), _kwmark)) + else: + return _HashedTuple(args) + + +def methodkey(self, *args, **kwargs): + """Return a cache key for use with cached methods.""" + return hashkey(*args, **kwargs) + + +def typedkey(*args, **kwargs): + """Return a typed cache key for the specified hashable arguments.""" + + key = hashkey(*args, **kwargs) + key += tuple(type(v) for v in args) + key += tuple(type(v) for _, v in sorted(kwargs.items())) + return key diff --git a/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/LICENSE new file mode 100644 index 00000000..701df228 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/LICENSE @@ -0,0 +1,20 @@ +Copyright (c) 2015 Sébastien Eustace + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/METADATA b/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/METADATA new file mode 100644 index 00000000..2344fe4f --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/METADATA @@ -0,0 +1,45 @@ +Metadata-Version: 2.1 +Name: cachy +Version: 0.3.0 +Summary: Cachy provides a simple yet effective caching library. 
+Home-page: https://github.com/sdispater/cachy +License: MIT +Keywords: cache +Author: Sébastien Eustace +Author-email: sebastien@eustace.io +Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.* +Classifier: License :: OSI Approved :: MIT License +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Provides-Extra: memcached +Provides-Extra: msgpack +Provides-Extra: redis +Requires-Dist: msgpack-python (>=0.5,<0.6); extra == "msgpack" +Requires-Dist: python-memcached (>=1.59,<2.0); extra == "memcached" +Requires-Dist: redis (>=3.3.6,<4.0.0); extra == "redis" +Project-URL: Repository, https://github.com/sdispater/cachy +Description-Content-Type: text/x-rst + +Cachy +##### + +.. image:: https://travis-ci.org/sdispater/cachy.png + :alt: Cachy Build status + :target: https://travis-ci.org/sdispater/cachy + +Cachy provides a simple yet effective caching library. + +The full documentation is available here: http://cachy.readthedocs.org + + +Resources +========= + +* `Documentation `_ +* `Issue Tracker `_ + diff --git a/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/RECORD b/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/RECORD new file mode 100644 index 00000000..b3fa65d4 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/RECORD @@ -0,0 +1,53 @@ +cachy-0.3.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +cachy-0.3.0.dist-info/LICENSE,sha256=eddrvsOJnRZIIWOjZcf2XoenKZk6HWX5YPQYHi7LgaA,1062 +cachy-0.3.0.dist-info/METADATA,sha256=KK9dg-znFQygldlGs2epsfIRu37fMgVaQCO6Vi5TjGs,1472 +cachy-0.3.0.dist-info/RECORD,, +cachy-0.3.0.dist-info/WHEEL,sha256=k_UtF7DeTKrUqc0FULNabrogAWgCy8zJZKQiwD2AkC4,89 +cachy/__init__.py,sha256=y41ImV85TJlxPUkY7wNYgAhG2VQEo5_g_03WbczZ5_E,124 +cachy/__pycache__/__init__.cpython-38.pyc,, +cachy/__pycache__/cache_manager.cpython-38.pyc,, +cachy/__pycache__/helpers.cpython-38.pyc,, +cachy/__pycache__/redis_tagged_cache.cpython-38.pyc,, +cachy/__pycache__/repository.cpython-38.pyc,, +cachy/__pycache__/tag_set.cpython-38.pyc,, +cachy/__pycache__/tagged_cache.cpython-38.pyc,, +cachy/__pycache__/utils.cpython-38.pyc,, +cachy/cache_manager.py,sha256=-eL3UqPRy6uTGqyYzZmqGkYt8PrxhIx9ziu8uZIWIWQ,7339 +cachy/contracts/__init__.py,sha256=O9CT1B2F-cVB8elT0EoCJbgkcffjvlmqteqavs4giDg,25 +cachy/contracts/__pycache__/__init__.cpython-38.pyc,, +cachy/contracts/__pycache__/factory.cpython-38.pyc,, +cachy/contracts/__pycache__/repository.cpython-38.pyc,, +cachy/contracts/__pycache__/store.cpython-38.pyc,, +cachy/contracts/__pycache__/taggable_store.cpython-38.pyc,, +cachy/contracts/factory.py,sha256=F6USJ2ooGjTOFNS8gqmKxg8LHK3SZ2RgcbBxQI1QYsE,323 +cachy/contracts/repository.py,sha256=-3eqnXR8j5MiRIPlOYpec1X9DuIYNFbf_pn86XO8oDM,2942 +cachy/contracts/store.py,sha256=oxQWQEwiHIDOQj1b7CGnztrQdeFFHG5iAEVluZ-bAh8,2594 +cachy/contracts/taggable_store.py,sha256=ObQILbeu_CLBKV17r1dn3IJWjoPi4WiNboFKW7JW6RA,497 +cachy/helpers.py,sha256=pDSGH7MD2ARvumlPl6KhycFHIP_QQk5KsWRA6Vfs08U,101 +cachy/redis_tagged_cache.py,sha256=FZ8mmAd0y-QmXc7DCA0W-oJkUT7HgX-xX82QOosJV-E,2171 +cachy/repository.py,sha256=ZZHdMXeUrxvT2PJSmL9hTqeUr7fUkq527UvrZ3aQMyg,8250 +cachy/serializers/__init__.py,sha256=RuQDghzeLKWxWs8YKju6dOp75AsTcookQqOjYj9nfjs,202 
+cachy/serializers/__pycache__/__init__.cpython-38.pyc,, +cachy/serializers/__pycache__/json_serializer.cpython-38.pyc,, +cachy/serializers/__pycache__/msgpack_serializer.cpython-38.pyc,, +cachy/serializers/__pycache__/pickle_serializer.cpython-38.pyc,, +cachy/serializers/__pycache__/serializer.cpython-38.pyc,, +cachy/serializers/json_serializer.py,sha256=UNjSMa0auB9cVI-Ru6NB5dtvXKqSQ1Ne66fllOLwxT0,683 +cachy/serializers/msgpack_serializer.py,sha256=UX31MOUri5KwgECKOczoTaWAeJfvVeVQujKXz560xyI,814 +cachy/serializers/pickle_serializer.py,sha256=Hc-OOLQnLfOCvYGdRO3umd2z38MNjz7IUBPLI0Ksecc,848 +cachy/serializers/serializer.py,sha256=d_1G10DNG0Myq4tO43T1RmESyZjRRiHubhKsI21D4no,513 +cachy/stores/__init__.py,sha256=9kcnXzYQl9SU9EOYfMeTZH5bT7H1y2mPZkHYuIahZDk,207 +cachy/stores/__pycache__/__init__.cpython-38.pyc,, +cachy/stores/__pycache__/dict_store.cpython-38.pyc,, +cachy/stores/__pycache__/file_store.cpython-38.pyc,, +cachy/stores/__pycache__/memcached_store.cpython-38.pyc,, +cachy/stores/__pycache__/null_store.cpython-38.pyc,, +cachy/stores/__pycache__/redis_store.cpython-38.pyc,, +cachy/stores/dict_store.py,sha256=Y8QN_yg3yzq6QxaX24Sg62lRLt0qODTV-XdVuosaNpw,3700 +cachy/stores/file_store.py,sha256=5RakbtGquztkcPBtYGg7GQ64R0lG9638zFgwbZ2TcvQ,5806 +cachy/stores/memcached_store.py,sha256=hZ1bMNDqj49eRKN3rCpO80j4XVfm1jbddyFRWIXGXRA,2984 +cachy/stores/null_store.py,sha256=2kYMfN4G9TlrAH7YMx60TT3BmNwfEA2z1AezMJcL_gk,2026 +cachy/stores/redis_store.py,sha256=0qHMDEDXxo0BIwPa1kjrSCJaeFTTipabo4yQTTSzivs,3318 +cachy/tag_set.py,sha256=R7kJUapYsd5zsnQ4MoWL9E9XgCuhUGdHXwcccpJEtwM,1665 +cachy/tagged_cache.py,sha256=A7yl7jVdMyITIM8P3aL-X_OAuEmHQP8XPz2jNMdtbzM,5835 +cachy/utils.py,sha256=w2JpCwjLDE-GpdUyuRnkQA4hXyj-cml9etZ4ReuSQGQ,1309 diff --git a/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/WHEEL new file mode 100644 index 00000000..c90e5bfe --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy-0.3.0.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: poetry 1.0.0a5 +Root-Is-Purelib: true +Tag: py2.py3-none-any diff --git a/env/lib/python3.8/site-packages/cachy/__init__.py b/env/lib/python3.8/site-packages/cachy/__init__.py new file mode 100644 index 00000000..1e9c6d5e --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/__init__.py @@ -0,0 +1,7 @@ +# -*- coding: utf-8 -*- + +from .cache_manager import CacheManager +from .repository import Repository + + +__version__ = "0.3.0" diff --git a/env/lib/python3.8/site-packages/cachy/cache_manager.py b/env/lib/python3.8/site-packages/cachy/cache_manager.py new file mode 100644 index 00000000..aaf0d0fd --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/cache_manager.py @@ -0,0 +1,301 @@ +# -*- coding: utf-8 -*- + +import threading +import types +from .contracts.factory import Factory +from .contracts.store import Store + +from .stores import ( + DictStore, + FileStore, + RedisStore, + MemcachedStore +) + +from .repository import Repository +from .serializers import ( + Serializer, + JsonSerializer, + MsgPackSerializer, + PickleSerializer +) + + +class CacheManager(Factory, threading.local): + """ + A CacheManager is a pool of cache stores. 
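+
+    Illustrative configuration (keys inferred from the methods below;
+    not part of the original docstring)::
+
+        manager = CacheManager({
+            'default': 'memory',
+            'stores': {
+                'memory': {'driver': 'dict'}
+            }
+        })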
+ """ + + _serializers = { + 'json': JsonSerializer(), + 'msgpack': MsgPackSerializer(), + 'pickle': PickleSerializer() + } + + def __init__(self, config): + super(CacheManager, self).__init__() + + self._config = config + self._stores = {} + self._custom_creators = {} + self._serializer = self._resolve_serializer(config.get('serializer', 'pickle')) + + def store(self, name=None): + """ + Get a cache store instance by name. + + :param name: The cache store name + :type name: str + + :rtype: Repository + """ + if name is None: + name = self.get_default_driver() + + self._stores[name] = self._get(name) + + return self._stores[name] + + def driver(self, name=None): + """ + Get a cache store instance by name. + + :param name: The cache store name + :type name: str + + :rtype: Repository + """ + return self.store(name) + + def _get(self, name): + """ + Attempt to get the store from the local cache. + + :param name: The store name + :type name: str + + :rtype: Repository + """ + return self._stores.get(name, self._resolve(name)) + + def _resolve(self, name): + """ + Resolve the given store + + :param name: The store to resolve + :type name: str + + :rtype: Repository + """ + config = self._get_config(name) + + if not config: + raise RuntimeError('Cache store [%s] is not defined.' % name) + + if config['driver'] in self._custom_creators: + repository = self._call_custom_creator(config) + else: + repository = getattr(self, '_create_%s_driver' % config['driver'])(config) + + if 'serializer' in config: + serializer = self._resolve_serializer(config['serializer']) + else: + serializer = self._serializer + + repository.get_store().set_serializer(serializer) + + return repository + + def _call_custom_creator(self, config): + """ + Call a custom driver creator. + + :param config: The driver configuration + :type config: dict + + :rtype: Repository + """ + creator = self._custom_creators[config['driver']](config) + + if isinstance(creator, Store): + creator = self.repository(creator) + + if not isinstance(creator, Repository): + raise RuntimeError('Custom creator should return a Repository instance.') + + return creator + + def _create_dict_driver(self, config): + """ + Create an instance of the dict cache driver. + + :param config: The driver configuration + :type config: dict + + :rtype: Repository + """ + return self.repository(DictStore()) + + def _create_file_driver(self, config): + """ + Create an instance of the file cache driver. + + :param config: The driver configuration + :type config: dict + + :rtype: Repository + """ + kwargs = { + 'directory': config['path'] + } + + if 'hash_type' in config: + kwargs['hash_type'] = config['hash_type'] + + return self.repository(FileStore(**kwargs)) + + def _create_redis_driver(self, config): + """ + Create an instance of the redis cache driver. + + :param config: The driver configuration + :type config: dict + + :return: Repository + """ + return self.repository(RedisStore(**config)) + + def _create_memcached_driver(self, config): + """ + Create an instance of the redis cache driver. + + :param config: The driver configuration + :type config: dict + + :return: Repository + """ + return self.repository(MemcachedStore(**config)) + + def repository(self, store): + """ + Create a new cache repository with the given implementation. + + :param store: The cache store implementation instance + :type store: Store + + :rtype: Repository + """ + repository = Repository(store) + + return repository + + def _get_prefix(self, config): + """ + Get the cache prefix. 
+
+    def _get_prefix(self, config):
+        """
+        Get the cache prefix.
+
+        :param config: The configuration
+        :type config: dict
+
+        :rtype: str
+        """
+        return config.get('prefix', '')
+
+    def _get_config(self, name):
+        """
+        Get the cache connection configuration.
+
+        :param name: The cache name
+        :type name: str
+
+        :rtype: dict
+        """
+        return self._config['stores'].get(name)
+
+    def get_default_driver(self):
+        """
+        Get the default cache driver name.
+
+        :rtype: str
+
+        :raises: RuntimeError
+        """
+        if 'default' in self._config:
+            return self._config['default']
+
+        if len(self._config['stores']) == 1:
+            return list(self._config['stores'].keys())[0]
+
+        raise RuntimeError('Missing "default" cache in configuration.')
+
+    def set_default_driver(self, name):
+        """
+        Set the default cache driver name.
+
+        :param name: The default cache driver name
+        :type name: str
+        """
+        self._config['default'] = name
+
+    def extend(self, driver, store):
+        """
+        Register a custom driver creator.
+
+        :param driver: The driver name
+        :type driver: str
+
+        :param store: The store class
+        :type store: Store or callable
+
+        :rtype: self
+        """
+        self._custom_creators[driver] = store
+
+        return self
+
+    def _resolve_serializer(self, serializer):
+        """
+        Resolve the given serializer.
+
+        :param serializer: The serializer to resolve
+        :type serializer: str or Serializer
+
+        :rtype: Serializer
+        """
+        if isinstance(serializer, Serializer):
+            return serializer
+
+        if serializer in self._serializers:
+            return self._serializers[serializer]
+
+        raise RuntimeError('Unsupported serializer')
+
+    def register_serializer(self, name, serializer):
+        """
+        Register a new serializer.
+
+        :param name: The name of the serializer
+        :type name: str
+
+        :param serializer: The serializer
+        :type serializer: Serializer
+        """
+        self._serializers[name] = serializer
+
+    def __getattr__(self, item):
+        return getattr(self.store(), item)
+
+    def __call__(self, store=None, *args, **kwargs):
+        if isinstance(store, (types.FunctionType, types.MethodType)):
+            fn = store
+
+            if len(args) > 0:
+                store = args[0]
+                args = args[1:] if len(args) > 1 else []
+            else:
+                store = None
+
+            args = (fn,) + args
+
+            return self.store(store)(*args, **kwargs)
+        else:
+            return self.store(store)(*args, **kwargs)
diff --git a/env/lib/python3.8/site-packages/cachy/contracts/__init__.py b/env/lib/python3.8/site-packages/cachy/contracts/__init__.py
new file mode 100644
index 00000000..633f8661
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/contracts/__init__.py
@@ -0,0 +1,2 @@
+# -*- coding: utf-8 -*-
+
diff --git a/env/lib/python3.8/site-packages/cachy/contracts/factory.py b/env/lib/python3.8/site-packages/cachy/contracts/factory.py
new file mode 100644
index 00000000..1fe6ac4b
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/contracts/factory.py
@@ -0,0 +1,18 @@
+# -*- coding: utf-8 -*-
+
+
+class Factory(object):
+    """
+    Represent a cache factory.
+    """
+
+    def store(self, name=None):
+        """
+        Get a cache store instance by name.
+
+        :param name: The cache store name
+        :type name: str
+
+        :rtype: mixed
+        """
+        raise NotImplementedError()
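The `extend()` hook above pairs with `_call_custom_creator()` in the manager: a registered creator may return a bare `Store`, which the manager then wraps in a `Repository`. A minimal sketch, with the driver name `memory2` chosen arbitrarily:

```python
from cachy import CacheManager
from cachy.stores import DictStore

cache = CacheManager({
    'default': 'scratch',
    'stores': {'scratch': {'driver': 'memory2'}},
})

# The creator receives the store's config dict; returning a Store is enough,
# since _call_custom_creator() wraps it in a Repository.
cache.extend('memory2', lambda config: DictStore())

repo = cache.store('scratch')
repo.put('k', 'v', 1)
```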
diff --git a/env/lib/python3.8/site-packages/cachy/contracts/repository.py b/env/lib/python3.8/site-packages/cachy/contracts/repository.py
new file mode 100644
index 00000000..b4843aa7
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/contracts/repository.py
@@ -0,0 +1,129 @@
+# -*- coding: utf-8 -*-
+
+
+class Repository(object):
+
+    def has(self, key):
+        """
+        Determine if an item exists in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: bool
+        """
+        raise NotImplementedError()
+
+    def get(self, key, default=None):
+        """
+        Retrieve an item from the cache by key.
+
+        :param key: The cache key
+        :type key: str
+
+        :param default: The default value to return
+        :type default: mixed
+
+        :rtype: mixed
+        """
+        raise NotImplementedError()
+
+    def pull(self, key, default=None):
+        """
+        Retrieve an item from the cache by key and delete it.
+
+        :param key: The cache key
+        :type key: str
+
+        :param default: The default value to return
+        :type default: mixed
+
+        :rtype: mixed
+        """
+        raise NotImplementedError()
+
+    def put(self, key, value, minutes):
+        """
+        Store an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The cache value
+        :type value: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int or datetime
+        """
+        raise NotImplementedError()
+
+    def add(self, key, value, minutes):
+        """
+        Store an item in the cache if it does not exist.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The cache value
+        :type value: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int or datetime
+
+        :rtype: bool
+        """
+        raise NotImplementedError()
+
+    def forever(self, key, value):
+        """
+        Store an item in the cache indefinitely.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The cache value
+        :type value: mixed
+        """
+        raise NotImplementedError()
+
+    def remember(self, key, minutes, callback):
+        """
+        Get an item from the cache, or store the default value.
+
+        :param key: The cache key
+        :type key: str
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int or datetime
+
+        :param callback: The default function
+        :type callback: callable
+
+        :rtype: mixed
+        """
+        raise NotImplementedError()
+
+    def remember_forever(self, key, callback):
+        """
+        Get an item from the cache, or store the default value forever.
+
+        :param key: The cache key
+        :type key: str
+
+        :param callback: The default function
+        :type callback: callable
+
+        :rtype: mixed
+        """
+        raise NotImplementedError()
+
+    def forget(self, key):
+        """
+        Remove an item from the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: bool
+        """
+        raise NotImplementedError()
diff --git a/env/lib/python3.8/site-packages/cachy/contracts/store.py b/env/lib/python3.8/site-packages/cachy/contracts/store.py
new file mode 100644
index 00000000..a1658dc3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/contracts/store.py
@@ -0,0 +1,121 @@
+# -*- coding: utf-8 -*-
+
+from ..serializers import PickleSerializer
+
+
+class Store(object):
+    """
+    Abstract class representing a cache store.
+    """
+
+    _serializer = PickleSerializer()
+
+    def get(self, key):
+        """
+        Retrieve an item from the cache by key.
+
+        :param key: The cache key
+        :type key: str
+
+        :return: The cache value
+        """
+        raise NotImplementedError()
+
+    def put(self, key, value, minutes):
+        """
+        Store an item in the cache for a given number of minutes.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The cache value
+        :type value: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int
+        """
+        raise NotImplementedError()
+
+    def increment(self, key, value=1):
+        """
+        Increment the value of an item in the cache.
+ + :param key: The cache key + :type key: str + + :param value: The increment value + :type value: int + + :rtype: int or bool + """ + raise NotImplementedError() + + def decrement(self, key, value=1): + """ + Decrement the value of an item in the cache. + + :param key: The cache key + :type key: str + + :param value: The decrement value + :type value: int + + :rtype: int or bool + """ + raise NotImplementedError() + + def forever(self, key, value): + """ + Store an item in the cache indefinitely. + + :param key: The cache key + :type key: str + + :param value: The value + :type value: mixed + """ + raise NotImplementedError() + + def forget(self, key): + """ + Remove an item from the cache. + + :param key: The cache key + :type key: str + + :rtype: bool + """ + raise NotImplementedError() + + def flush(self): + """ + Remove all items from the cache. + """ + raise NotImplementedError() + + def get_prefix(self): + """ + Get the cache key prefix. + + :rtype: str + """ + raise NotImplementedError() + + def set_serializer(self, serializer): + """ + Set the serializer. + + :param serializer: The serializer + :type serializer: cachy.serializers.Serializer + + :rtype: Store + """ + self._serializer = serializer + + return self + + def unserialize(self, data): + return self._serializer.unserialize(data) + + def serialize(self, data): + return self._serializer.serialize(data) diff --git a/env/lib/python3.8/site-packages/cachy/contracts/taggable_store.py b/env/lib/python3.8/site-packages/cachy/contracts/taggable_store.py new file mode 100644 index 00000000..5a005a37 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/contracts/taggable_store.py @@ -0,0 +1,22 @@ +# -*- coding: utf-8 -*- + +from .store import Store +from ..tagged_cache import TaggedCache +from ..tag_set import TagSet + + +class TaggableStore(Store): + + def tags(self, *names): + """ + Begin executing a new tags operation. + + :param names: The tags + :type names: tuple + + :rtype: cachy.tagged_cache.TaggedCache + """ + if len(names) == 1 and isinstance(names[0], list): + names = names[0] + + return TaggedCache(self, TagSet(self, names)) diff --git a/env/lib/python3.8/site-packages/cachy/helpers.py b/env/lib/python3.8/site-packages/cachy/helpers.py new file mode 100644 index 00000000..594f0ea9 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/helpers.py @@ -0,0 +1,8 @@ +# -*- coding: utf-8 -*- + + +def value(val): + if callable(val): + return val() + + return val diff --git a/env/lib/python3.8/site-packages/cachy/redis_tagged_cache.py b/env/lib/python3.8/site-packages/cachy/redis_tagged_cache.py new file mode 100644 index 00000000..5cbe1b00 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/redis_tagged_cache.py @@ -0,0 +1,80 @@ +# -*- coding: utf-8 -*- + +import hashlib +from .tagged_cache import TaggedCache +from .utils import encode + + +class RedisTaggedCache(TaggedCache): + + def forever(self, key, value): + """ + Store an item in the cache indefinitely. + + :param key: The cache key + :type key: str + + :param value: The value + :type value: mixed + """ + namespace = self._tags.get_namespace() + + self._push_forever_keys(namespace, key) + + self._store.forever( + '%s:%s' % (hashlib.sha1(encode(self._tags.get_namespace())).hexdigest(), key), + value + ) + + def flush(self): + """ + Remove all items from the cache. + """ + self._delete_forever_keys() + + super(RedisTaggedCache, self).flush() + + def _push_forever_keys(self, namespace, key): + """ + Store a copy of the full key for each namespace segment. 
+
+        :type namespace: str
+        :type key: str
+        """
+        full_key = '%s%s:%s' % (self.get_prefix(),
+                                hashlib.sha1(encode(self._tags.get_namespace())).hexdigest(),
+                                key)
+
+        for segment in namespace.split('|'):
+            self._store.connection().lpush(self._forever_key(segment), full_key)
+
+    def _delete_forever_keys(self):
+        """
+        Delete all of the items that were stored forever.
+        """
+        for segment in self._tags.get_namespace().split('|'):
+            segment = self._forever_key(segment)
+            self._delete_forever_values(segment)
+
+            self._store.connection().delete(segment)
+
+    def _delete_forever_values(self, forever_key):
+        """
+        Delete all of the keys that have been stored forever.
+
+        :type forever_key: str
+        """
+        forever = self._store.connection().lrange(forever_key, 0, -1)
+
+        if len(forever) > 0:
+            self._store.connection().delete(*forever)
+
+    def _forever_key(self, segment):
+        """
+        Get the forever reference key for the segment.
+
+        :type segment: str
+
+        :rtype: str
+        """
+        return '%s%s:forever' % (self.get_prefix(), segment)
diff --git a/env/lib/python3.8/site-packages/cachy/repository.py b/env/lib/python3.8/site-packages/cachy/repository.py
new file mode 100644
index 00000000..0f145032
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/repository.py
@@ -0,0 +1,335 @@
+# -*- coding: utf-8 -*-
+
+import math
+import datetime
+import types
+import hashlib
+from functools import wraps
+from .contracts.repository import Repository as CacheContract
+from .helpers import value
+from .utils import encode, decode
+
+
+class Repository(CacheContract):
+
+    _default = 60
+
+    def __init__(self, store):
+        """
+        :param store: The underlying cache store
+        :type store: Store
+        """
+        self._store = store
+
+    def has(self, key):
+        """
+        Determine if an item exists in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: bool
+        """
+        return self.get(key) is not None
+
+    def get(self, key, default=None):
+        """
+        Retrieve an item from the cache by key.
+
+        :param key: The cache key
+        :type key: str
+
+        :param default: The default value to return
+        :type default: mixed
+
+        :rtype: mixed
+        """
+        val = self._store.get(key)
+
+        if val is None:
+            return value(default)
+
+        return val
+
+    def pull(self, key, default=None):
+        """
+        Retrieve an item from the cache by key and delete it.
+
+        :param key: The cache key
+        :type key: str
+
+        :param default: The default value to return
+        :type default: mixed
+
+        :rtype: mixed
+        """
+        val = self.get(key, default)
+
+        self.forget(key)
+
+        return val
+
+    def put(self, key, val, minutes):
+        """
+        Store an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param val: The cache value
+        :type val: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int|datetime
+        """
+        minutes = self._get_minutes(minutes)
+
+        if minutes is not None:
+            self._store.put(key, val, minutes)
+
+    def add(self, key, val, minutes):
+        """
+        Store an item in the cache if it does not exist.
+
+        :param key: The cache key
+        :type key: str
+
+        :param val: The cache value
+        :type val: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int|datetime
+
+        :rtype: bool
+        """
+        if hasattr(self._store, 'add'):
+            return self._store.add(key, val, self._get_minutes(minutes))
+
+        if not self.has(key):
+            self.put(key, val, minutes)
+
+            return True
+
+        return False
+
+    def forever(self, key, val):
+        """
+        Store an item in the cache indefinitely.
+ + :param key: The cache key + :type key: str + + :param val: The cache value + :type val: mixed + """ + self._store.forever(key, val) + + def remember(self, key, minutes, callback): + """ + Get an item from the cache, or store the default value. + + :param key: The cache key + :type key: str + + :param minutes: The lifetime in minutes of the cached value + :type minutes: int or datetime + + :param callback: The default function + :type callback: mixed + + :rtype: mixed + """ + # If the item exists in the cache we will just return this immediately + # otherwise we will execute the given callback and cache the result + # of that execution for the given number of minutes in storage. + val = self.get(key) + if val is not None: + return val + + val = value(callback) + + self.put(key, val, minutes) + + return val + + def remember_forever(self, key, callback): + """ + Get an item from the cache, or store the default value forever. + + :param key: The cache key + :type key: str + + :param callback: The default function + :type callback: mixed + + :rtype: mixed + """ + # If the item exists in the cache we will just return this immediately + # otherwise we will execute the given callback and cache the result + # of that execution forever. + val = self.get(key) + if val is not None: + return val + + val = value(callback) + + self.forever(key, val) + + return val + + def forget(self, key): + """ + Remove an item from the cache. + + :param key: The cache key + :type key: str + + :rtype: bool + """ + success = self._store.forget(key) + + return success + + def get_default_cache_time(self): + """ + Get the default cache time. + + :rtype: int + """ + return self._default + + def set_default_cache_time(self, minutes): + """ + Set the default cache time. + + :param minutes: The default cache time + :type minutes: int + + :rtype: self + """ + self._default = minutes + + return self + + def get_store(self): + """ + Get the cache store implementation. + + :rtype: Store + """ + return self._store + + def __getitem__(self, item): + return self.get(item) + + def __setitem__(self, key, val): + self.put(key, val, self._default) + + def __delitem__(self, key): + self.forget(key) + + def _get_minutes(self, duration): + """ + Calculate the number of minutes with the given duration. + + :param duration: The duration + :type duration: int or datetime + + :rtype: int or None + """ + if isinstance(duration, datetime.datetime): + from_now = (duration - datetime.datetime.now()).total_seconds() + from_now = math.ceil(from_now / 60) + + if from_now > 0: + return from_now + + return + + return duration + + def _hash(self, value): + """ + Calculate the hash given a value. + + :param value: The value to hash + :type value: str or bytes + + :rtype: str + """ + return hashlib.sha1(encode(value)).hexdigest() + + def _get_key(self, fn, args, kwargs): + """ + Calculate a cache key given a function, args and kwargs. 
+ + :param fn: The function + :type fn: callable or str + + :param args: The function args + :type args: tuple + + :param kwargs: The function kwargs + :type kwargs: dict + + :rtype: str + """ + if args: + serialized_arguments = ( + self._store.serialize(args[1:]) + + self._store.serialize([(k, kwargs[k]) for k in sorted(kwargs.keys())]) + ) + else: + serialized_arguments = self._store.serialize([(k, kwargs[k]) for k in sorted(kwargs.keys())]) + + if isinstance(fn, types.MethodType): + key = self._hash('%s.%s.%s' + % (fn.__self__.__class__.__name__, + args[0].__name__, + serialized_arguments)) + elif isinstance(fn, types.FunctionType): + key = self._hash('%s.%s' + % (fn.__name__, + serialized_arguments)) + else: + key = '%s:' % fn + self._hash(serialized_arguments) + + return key + + def __getattr__(self, item): + try: + return object.__getattribute__(self, item) + except AttributeError: + return getattr(self._store, item) + + def __call__(self, *args, **kwargs): + if args and isinstance(args[0], (types.FunctionType, types.MethodType)): + fn = args[0] + + @wraps(fn) + def wrapper(*a, **kw): + return self.remember( + self._get_key(fn, a, kw), + self._default, + lambda: fn(*a, **kw) + ) + + return wrapper + else: + k = kwargs.get('key') + minutes = kwargs.get('minutes', self._default) + + def decorated(fn): + key = k + + @wraps(fn) + def wrapper(*a, **kw): + return self.remember( + self._get_key(key or fn, a, kw), + minutes, + lambda: fn(*a, **kw) + ) + + return wrapper + + return decorated diff --git a/env/lib/python3.8/site-packages/cachy/serializers/__init__.py b/env/lib/python3.8/site-packages/cachy/serializers/__init__.py new file mode 100644 index 00000000..c62e5181 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/serializers/__init__.py @@ -0,0 +1,6 @@ +# -*- coding: utf-8 -*- + +from .serializer import Serializer +from .json_serializer import JsonSerializer +from .msgpack_serializer import MsgPackSerializer +from .pickle_serializer import PickleSerializer diff --git a/env/lib/python3.8/site-packages/cachy/serializers/json_serializer.py b/env/lib/python3.8/site-packages/cachy/serializers/json_serializer.py new file mode 100644 index 00000000..b1c63d0d --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/serializers/json_serializer.py @@ -0,0 +1,38 @@ +# -*- coding: utf-8 -*- + +try: + import simplejson as json +except ImportError: + import json + +from cachy.utils import decode + +from .serializer import Serializer + + +class JsonSerializer(Serializer): + """ + Serializer that uses JSON representations. + """ + + def serialize(self, data): + """ + Serialize data. + + :param data: The data to serialize + :type data: mixed + + :rtype: str + """ + return json.dumps(data) + + def unserialize(self, data): + """ + Unserialize data. + + :param data: The data to unserialize + :type data: mixed + + :rtype: str + """ + return json.loads(decode(data)) diff --git a/env/lib/python3.8/site-packages/cachy/serializers/msgpack_serializer.py b/env/lib/python3.8/site-packages/cachy/serializers/msgpack_serializer.py new file mode 100644 index 00000000..5b96dd72 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/serializers/msgpack_serializer.py @@ -0,0 +1,38 @@ +# -*- coding: utf-8 -*- + +try: + import msgpack +except ImportError: + msgpack = None + +from .serializer import Serializer + + +class MsgPackSerializer(Serializer): + """ + Serializer that uses `msgpack `_ representations. + + By default, this serializer does not support serializing custom objects. 
+ """ + + def serialize(self, data): + """ + Serialize data. + + :param data: The data to serialize + :type data: mixed + + :rtype: str + """ + return msgpack.packb(data, use_bin_type=True) + + def unserialize(self, data): + """ + Unserialize data. + + :param data: The data to unserialize + :type data: mixed + + :rtype: str + """ + return msgpack.unpackb(data, encoding='utf-8') diff --git a/env/lib/python3.8/site-packages/cachy/serializers/pickle_serializer.py b/env/lib/python3.8/site-packages/cachy/serializers/pickle_serializer.py new file mode 100644 index 00000000..862d6d5e --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/serializers/pickle_serializer.py @@ -0,0 +1,43 @@ +# -*- coding: utf-8 -*- + +from functools import partial + +try: + import cPickle as pickle +except ImportError: # noqa + import pickle + +# Serialize pickle dumps using the highest pickle protocol (binary, default +# uses ascii) +dumps = partial(pickle.dumps, protocol=pickle.HIGHEST_PROTOCOL) +loads = pickle.loads + +from .serializer import Serializer + + +class PickleSerializer(Serializer): + """ + Serializer that uses the pickle module. + """ + + def serialize(self, data): + """ + Serialize data. + + :param data: The data to serialize + :type data: mixed + + :rtype: str + """ + return dumps(data) + + def unserialize(self, data): + """ + Unserialize data. + + :param data: The data to unserialize + :type data: mixed + + :rtype: str + """ + return loads(data) diff --git a/env/lib/python3.8/site-packages/cachy/serializers/serializer.py b/env/lib/python3.8/site-packages/cachy/serializers/serializer.py new file mode 100644 index 00000000..9427a9d6 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/serializers/serializer.py @@ -0,0 +1,29 @@ +# -*- coding: utf-8 -*- + + +class Serializer(object): + """ + Abstract serializer. + """ + + def serialize(self, data): + """ + Serialize data. + + :param data: The data to serialize + :type data: mixed + + :rtype: str + """ + raise NotImplementedError() + + def unserialize(self, data): + """ + Unserialize data. + + :param data: The data to unserialize + :type data: mixed + + :rtype: str + """ + raise NotImplementedError() diff --git a/env/lib/python3.8/site-packages/cachy/stores/__init__.py b/env/lib/python3.8/site-packages/cachy/stores/__init__.py new file mode 100644 index 00000000..e73c7b8e --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/stores/__init__.py @@ -0,0 +1,7 @@ +# -*- coding: utf-8 -*- + +from .dict_store import DictStore +from .file_store import FileStore +from .memcached_store import MemcachedStore +from .redis_store import RedisStore +from .null_store import NullStore diff --git a/env/lib/python3.8/site-packages/cachy/stores/dict_store.py b/env/lib/python3.8/site-packages/cachy/stores/dict_store.py new file mode 100644 index 00000000..916eaaba --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/stores/dict_store.py @@ -0,0 +1,163 @@ +# -*- coding: utf-8 -*- + +import time +import math +from ..contracts.taggable_store import TaggableStore + + +class DictStore(TaggableStore): + """ + A cache store using a dictionary as its backend. + """ + + def __init__(self): + self._storage = {} + + def get(self, key): + """ + Retrieve an item from the cache by key. + + :param key: The cache key + :type key: str + + :return: The cache value + """ + return self._get_payload(key)[0] + + def _get_payload(self, key): + """ + Retrieve an item and expiry time from the cache by key. 
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: tuple
+        """
+        payload = self._storage.get(key)
+
+        # If the key does not exist, we return nothing
+        if not payload:
+            return (None, None)
+
+        expire = payload[0]
+
+        # If the current time is greater than the expiration timestamp we will delete
+        # the entry
+        if round(time.time()) >= expire:
+            self.forget(key)
+
+            return (None, None)
+
+        data = payload[1]
+
+        # Next, we'll extract the number of minutes that are remaining for a cache
+        # so that we can properly retain the time for things like the increment
+        # operation that may be performed on the cache. We'll round this out.
+        time_ = math.ceil((expire - round(time.time())) / 60.)
+
+        return (data, time_)
+
+    def put(self, key, value, minutes):
+        """
+        Store an item in the cache for a given number of minutes.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The cache value
+        :type value: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int
+        """
+        self._storage[key] = (self._expiration(minutes), value)
+
+    def increment(self, key, value=1):
+        """
+        Increment the value of an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The increment value
+        :type value: int
+
+        :rtype: int or bool
+        """
+        data, time_ = self._get_payload(key)
+
+        integer = int(data) + value
+
+        self.put(key, integer, int(time_))
+
+        return integer
+
+    def decrement(self, key, value=1):
+        """
+        Decrement the value of an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The decrement value
+        :type value: int
+
+        :rtype: int or bool
+        """
+        return self.increment(key, value * -1)
+
+    def forever(self, key, value):
+        """
+        Store an item in the cache indefinitely.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The value to store
+        :type value: mixed
+        """
+        self.put(key, value, 0)
+
+    def forget(self, key):
+        """
+        Remove an item from the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: bool
+        """
+        if key in self._storage:
+            del self._storage[key]
+
+            return True
+
+        return False
+
+    def flush(self):
+        """
+        Remove all items from the cache.
+        """
+        self._storage = {}
+
+    def _expiration(self, minutes):
+        """
+        Get the expiration time based on the given minutes.
+
+        :param minutes: The minutes
+        :type minutes: int
+
+        :rtype: int
+        """
+        if minutes == 0:
+            return 9999999999
+
+        return round(time.time()) + (minutes * 60)
+
+    def get_prefix(self):
+        """
+        Get the cache key prefix.
+
+        :rtype: str
+        """
+        return ''
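A short usage sketch for the store above; the key names and values are arbitrary examples:

```python
from cachy.stores import DictStore

store = DictStore()

store.put('token', 'abc', 1)      # _expiration(1): now + 60 seconds
assert store.get('token') == 'abc'

store.put('hits', 1, 10)
store.increment('hits', 2)        # re-stores with the remaining minutes
assert store.get('hits') == 3

store.forever('pi', 3.14159)      # minutes=0 maps to the 9999999999 sentinel
store.forget('token')
assert store.get('token') is None
```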
diff --git a/env/lib/python3.8/site-packages/cachy/stores/file_store.py b/env/lib/python3.8/site-packages/cachy/stores/file_store.py
new file mode 100644
index 00000000..f68b0b91
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/stores/file_store.py
@@ -0,0 +1,227 @@
+# -*- coding: utf-8 -*-
+
+import os
+import time
+import math
+import hashlib
+from ..contracts.store import Store
+from ..utils import mkdir_p, encode
+
+
+class FileStore(Store):
+    """
+    A cache store using the filesystem as its backend.
+    """
+
+    _HASHES = {
+        'md5': (hashlib.md5, 2),
+        'sha1': (hashlib.sha1, 4),
+        'sha256': (hashlib.sha256, 8)
+    }
+
+    def __init__(self, directory, hash_type='sha256'):
+        """
+        :param directory: The cache directory
+        :type directory: str
+        """
+        self._directory = directory
+
+        if hash_type not in self._HASHES:
+            raise ValueError('hash_type "{}" is not valid.'.format(hash_type))
+
+        self._hash_type = hash_type
+
+    def get(self, key):
+        """
+        Retrieve an item from the cache by key.
+
+        :param key: The cache key
+        :type key: str
+
+        :return: The cache value
+        """
+        return self._get_payload(key).get('data')
+
+    def _get_payload(self, key):
+        """
+        Retrieve an item and expiry time from the cache by key.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: dict
+        """
+        path = self._path(key)
+
+        # If the file doesn't exist, we obviously can't return the cache so we will
+        # just return None. Otherwise, we'll get the contents of the file and get
+        # the expiration UNIX timestamp from the start of the file's contents.
+        if not os.path.exists(path):
+            return {'data': None, 'time': None}
+
+        with open(path, 'rb') as fh:
+            contents = fh.read()
+
+        expire = int(contents[:10])
+
+        # If the current time is greater than the expiration timestamp we will delete
+        # the file and return None. This helps clean up the old files and keeps
+        # this directory much cleaner for us as old files aren't hanging out.
+        if round(time.time()) >= expire:
+            self.forget(key)
+
+            return {'data': None, 'time': None}
+
+        data = self.unserialize(contents[10:])
+
+        # Next, we'll extract the number of minutes that are remaining for a cache
+        # so that we can properly retain the time for things like the increment
+        # operation that may be performed on the cache. We'll round this out.
+        time_ = math.ceil((expire - round(time.time())) / 60.)
+
+        return {'data': data, 'time': time_}
+
+    def put(self, key, value, minutes):
+        """
+        Store an item in the cache for a given number of minutes.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The cache value
+        :type value: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int
+        """
+        value = encode(str(self._expiration(minutes))) + encode(self.serialize(value))
+
+        path = self._path(key)
+        self._create_cache_directory(path)
+
+        with open(path, 'wb') as fh:
+            fh.write(value)
+
+    def _create_cache_directory(self, path):
+        """
+        Create the file cache directory if necessary.
+
+        :param path: The cache path
+        :type path: str
+        """
+        mkdir_p(os.path.dirname(path))
+
+    def increment(self, key, value=1):
+        """
+        Increment the value of an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The increment value
+        :type value: int
+
+        :rtype: int or bool
+        """
+        raw = self._get_payload(key)
+
+        integer = int(raw['data']) + value
+
+        self.put(key, integer, int(raw['time']))
+
+        return integer
+
+    def decrement(self, key, value=1):
+        """
+        Decrement the value of an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The decrement value
+        :type value: int
+
+        :rtype: int or bool
+        """
+        return self.increment(key, value * -1)
+
+    def forever(self, key, value):
+        """
+        Store an item in the cache indefinitely.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The value to store
+        :type value: mixed
+        """
+        self.put(key, value, 0)
+
+    def forget(self, key):
+        """
+        Remove an item from the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: bool
+        """
+        path = self._path(key)
+
+        if os.path.exists(path):
+            os.remove(path)
+
+            return True
+
+        return False
+
+    def flush(self):
+        """
+        Remove all items from the cache.
+        """
+        if os.path.isdir(self._directory):
+            for root, dirs, files in os.walk(self._directory, topdown=False):
+                for name in files:
+                    os.remove(os.path.join(root, name))
+
+                for name in dirs:
+                    os.rmdir(os.path.join(root, name))
+
+    def _path(self, key):
+        """
+        Get the full path for the given cache key.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: str
+        """
+        hash_type, parts_count = self._HASHES[self._hash_type]
+        h = hash_type(encode(key)).hexdigest()
+
+        parts = [h[i:i+2] for i in range(0, len(h), 2)][:parts_count]
+
+        return os.path.join(self._directory, os.path.sep.join(parts), h)
+
+    def _expiration(self, minutes):
+        """
+        Get the expiration time based on the given minutes.
+
+        :param minutes: The minutes
+        :type minutes: int
+
+        :rtype: int
+        """
+        if minutes == 0:
+            return 9999999999
+
+        return int(round(time.time()) + (minutes * 60))
+
+    def get_prefix(self):
+        """
+        Get the cache key prefix.
+
+        :rtype: str
+        """
+        return ''
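A sketch of the on-disk layout described above, assuming a throwaway temporary directory; `put()` writes a 10-digit expiry prefix followed by the serialized payload into a path that `_path()` shards into 2-character directories:

```python
import tempfile

from cachy.stores import FileStore

store = FileStore(tempfile.mkdtemp(), hash_type='sha1')

# The file lands under <dir>/<aa>/<bb>/<cc>/<dd>/<full sha1 hex digest>.
store.put('greeting', {'hello': 'world'}, 5)
assert store.get('greeting') == {'hello': 'world'}
```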
diff --git a/env/lib/python3.8/site-packages/cachy/stores/memcached_store.py b/env/lib/python3.8/site-packages/cachy/stores/memcached_store.py
new file mode 100644
index 00000000..c41680b6
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/stores/memcached_store.py
@@ -0,0 +1,129 @@
+# -*- coding: utf-8 -*-
+
+try:
+    from pylibmc import memcache
+except ImportError:
+    try:
+        import memcache
+    except ImportError:
+        memcache = None
+
+from ..contracts.taggable_store import TaggableStore
+
+
+class MemcachedStore(TaggableStore):
+
+    def __init__(self, servers, prefix='', **kwargs):
+        # Removing potential "driver" key
+        kwargs.pop('driver', None)
+
+        self._prefix = prefix
+        self._memcache = memcache.Client(servers, **kwargs)
+
+    def get(self, key):
+        """
+        Retrieve an item from the cache by key.
+
+        :param key: The cache key
+        :type key: str
+
+        :return: The cache value
+        """
+        return self._memcache.get(self._prefix + key)
+
+    def put(self, key, value, minutes):
+        """
+        Store an item in the cache for a given number of minutes.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The cache value
+        :type value: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int
+        """
+        self._memcache.set(self._prefix + key, value, minutes * 60)
+
+    def add(self, key, val, minutes):
+        """
+        Store an item in the cache if it does not exist.
+
+        :param key: The cache key
+        :type key: str
+
+        :param val: The cache value
+        :type val: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int
+
+        :rtype: bool
+        """
+        return self._memcache.add(self._prefix + key, val, minutes * 60)
+
+    def increment(self, key, value=1):
+        """
+        Increment the value of an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The increment value
+        :type value: int
+
+        :rtype: int or bool
+        """
+        return self._memcache.incr(self._prefix + key, value)
+
+    def decrement(self, key, value=1):
+        """
+        Decrement the value of an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The decrement value
+        :type value: int
+
+        :rtype: int or bool
+        """
+        return self._memcache.decr(self._prefix + key, value)
+
+    def forever(self, key, value):
+        """
+        Store an item in the cache indefinitely.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The value
+        :type value: mixed
+        """
+        self.put(key, value, 0)
+
+    def forget(self, key):
+        """
+        Remove an item from the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: bool
+        """
+        return self._memcache.delete(self._prefix + key)
+
+    def flush(self):
+        """
+        Remove all items from the cache.
+        """
+        self._memcache.flush_all()
+
+    def get_prefix(self):
+        """
+        Get the cache key prefix.
+
+        :rtype: str
+        """
+        return self._prefix
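A sketch wiring this store through `CacheManager`; it assumes a memcached server listening on 127.0.0.1:11211 and one of the client libraries imported above being installed:

```python
from cachy import CacheManager

# _create_memcached_driver() forwards the whole store config to
# MemcachedStore(**config), so 'servers' and 'prefix' land on the client
# (the 'driver' key is popped off in __init__).
cache = CacheManager({
    'stores': {
        'memcached': {
            'driver': 'memcached',
            'servers': ['127.0.0.1:11211'],   # assumes a local memcached
            'prefix': 'myapp:',
        },
    },
})

cache.store('memcached').put('greeting', 'hello', 10)
```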
diff --git a/env/lib/python3.8/site-packages/cachy/stores/null_store.py b/env/lib/python3.8/site-packages/cachy/stores/null_store.py
new file mode 100644
index 00000000..7cf64288
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/stores/null_store.py
@@ -0,0 +1,101 @@
+# -*- coding: utf-8 -*-
+
+from ..contracts.store import Store
+
+
+class NullStore(Store):
+    """
+    This cache store implementation is meant to be used
+    only in development or test environments and it never stores anything.
+    """
+
+    def get(self, key):
+        """
+        Retrieve an item from the cache by key.
+
+        :param key: The cache key
+        :type key: str
+
+        :return: The cache value
+        """
+        pass
+
+    def put(self, key, value, minutes):
+        """
+        Store an item in the cache for a given number of minutes.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The cache value
+        :type value: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int
+        """
+        pass
+
+    def increment(self, key, value=1):
+        """
+        Increment the value of an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The increment value
+        :type value: int
+
+        :rtype: int or bool
+        """
+        pass
+
+    def decrement(self, key, value=1):
+        """
+        Decrement the value of an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The decrement value
+        :type value: int
+
+        :rtype: int or bool
+        """
+        pass
+
+    def forever(self, key, value):
+        """
+        Store an item in the cache indefinitely.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The value to store
+        :type value: mixed
+        """
+        pass
+
+    def forget(self, key):
+        """
+        Remove an item from the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: bool
+        """
+        pass
+
+    def flush(self):
+        """
+        Remove all items from the cache.
+        """
+        pass
+
+    def get_prefix(self):
+        """
+        Get the cache key prefix.
+
+        :rtype: str
+        """
+        return ''
diff --git a/env/lib/python3.8/site-packages/cachy/stores/redis_store.py b/env/lib/python3.8/site-packages/cachy/stores/redis_store.py
new file mode 100644
index 00000000..235308d9
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/stores/redis_store.py
@@ -0,0 +1,139 @@
+# -*- coding: utf-8 -*-
+
+try:
+    from redis import StrictRedis
+except ImportError:
+    StrictRedis = None
+
+from ..contracts.taggable_store import TaggableStore
+from ..redis_tagged_cache import RedisTaggedCache
+from ..tag_set import TagSet
+
+
+class RedisStore(TaggableStore):
+    """
+    A cache store using Redis as its backend.
+    """
+
+    def __init__(self, host='localhost', port=6379, db=0, password=None,
+                 prefix='', redis_class=StrictRedis, **kwargs):
+        # Removing potential "driver" key
+        kwargs.pop('driver', None)
+
+        self._prefix = prefix
+        self._redis = redis_class(host=host, port=port, db=db,
+                                  password=password, **kwargs)
+
+    def get(self, key):
+        """
+        Retrieve an item from the cache by key.
+ + :param key: The cache key + :type key: str + + :return: The cache value + """ + value = self._redis.get(self._prefix + key) + + if value is not None: + return self.unserialize(value) + + def put(self, key, value, minutes): + """ + Store an item in the cache for a given number of minutes. + + :param key: The cache key + :type key: str + + :param value: The cache value + :type value: mixed + + :param minutes: The lifetime in minutes of the cached value + :type minutes: int + """ + value = self.serialize(value) + + minutes = max(1, minutes) + + self._redis.setex(self._prefix + key, minutes * 60, value) + + def increment(self, key, value=1): + """ + Increment the value of an item in the cache. + + :param key: The cache key + :type key: str + + :param value: The increment value + :type value: int + + :rtype: int or bool + """ + return self._redis.incrby(self._prefix + key, value) + + def decrement(self, key, value=1): + """ + Decrement the value of an item in the cache. + + :param key: The cache key + :type key: str + + :param value: The decrement value + :type value: int + + :rtype: int or bool + """ + return self._redis.decr(self._prefix + key, value) + + def forever(self, key, value): + """ + Store an item in the cache indefinitely. + + :param key: The cache key + :type key: str + + :param value: The value to store + :type value: mixed + """ + value = self.serialize(value) + + self._redis.set(self._prefix + key, value) + + def forget(self, key): + """ + Remove an item from the cache. + + :param key: The cache key + :type key: str + + :rtype: bool + """ + return bool(self._redis.delete(self._prefix + key)) + + def flush(self): + """ + Remove all items from the cache. + """ + return self._redis.flushdb() + + def get_prefix(self): + """ + Get the cache key prefix. + + :rtype: str + """ + return self._prefix + + def connection(self): + return self._redis + + def tags(self, *names): + """ + Begin executing a new tags operation. + + :param names: The tags + :type names: tuple + + :rtype: cachy.tagged_cache.TaggedCache + """ + return RedisTaggedCache(self, TagSet(self, names)) diff --git a/env/lib/python3.8/site-packages/cachy/tag_set.py b/env/lib/python3.8/site-packages/cachy/tag_set.py new file mode 100644 index 00000000..ebfb3eb2 --- /dev/null +++ b/env/lib/python3.8/site-packages/cachy/tag_set.py @@ -0,0 +1,76 @@ +# -*- coding: utf-8 -*- + +import uuid + + +class TagSet(object): + + def __init__(self, store, names=None): + """ + :param store: The cache store implementation + :type store: cachy.contracts.store.Store + + :param names: The tags names + :type names: list or tuple + """ + self._store = store + self._names = names or [] + + def reset(self): + """ + Reset all tags in the set. + """ + list(map(self.reset_tag, self._names)) + + def tag_id(self, name): + """ + Get the unique tag identifier for a given tag. + + :param name: The tag + :type name: str + + :rtype: str + """ + return self._store.get(self.tag_key(name)) or self.reset_tag(name) + + def _tag_ids(self): + """ + Get a list of tag identifiers for all of the tags in the set. + + :rtype: list + """ + return list(map(self.tag_id, self._names)) + + def get_namespace(self): + """ + Get a unique namespace that changes when any of the tags are flushed. + + :rtype: str + """ + return '|'.join(self._tag_ids()) + + def reset_tag(self, name): + """ + Reset the tag and return the new tag identifier. 
+
+        :param name: The tag
+        :type name: str
+
+        :rtype: str
+        """
+        id_ = str(uuid.uuid4()).replace('-', '')
+
+        self._store.forever(self.tag_key(name), id_)
+
+        return id_
+
+    def tag_key(self, name):
+        """
+        Get the tag identifier key for a given tag.
+
+        :param name: The tag
+        :type name: str
+
+        :rtype: str
+        """
+        return 'tag:%s:key' % name
diff --git a/env/lib/python3.8/site-packages/cachy/tagged_cache.py b/env/lib/python3.8/site-packages/cachy/tagged_cache.py
new file mode 100644
index 00000000..bc39d3a0
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/tagged_cache.py
@@ -0,0 +1,243 @@
+# -*- coding: utf-8 -*-
+
+import hashlib
+import datetime
+import math
+from .contracts.store import Store
+from .helpers import value
+from .utils import encode
+
+
+class TaggedCache(Store):
+    """
+    A cache store whose keys are scoped to a set of tags.
+    """
+    def __init__(self, store, tags):
+        """
+        :param store: The cache store implementation
+        :type store: cachy.contracts.store.Store
+
+        :param tags: The tag set
+        :type tags: cachy.tag_set.TagSet
+        """
+        self._store = store
+        self._tags = tags
+
+    def has(self, key):
+        """
+        Determine if an item exists in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: bool
+        """
+        return self.get(key) is not None
+
+    def get(self, key, default=None):
+        """
+        Retrieve an item from the cache by key.
+
+        :param key: The cache key
+        :type key: str
+
+        :param default: The default value
+        :type default: mixed
+
+        :return: The cache value
+        """
+        val = self._store.get(self.tagged_item_key(key))
+
+        if val is not None:
+            return val
+
+        return value(default)
+
+    def put(self, key, value, minutes):
+        """
+        Store an item in the cache for a given number of minutes.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The cache value
+        :type value: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int or datetime
+        """
+        minutes = self._get_minutes(minutes)
+
+        if minutes is not None:
+            return self._store.put(self.tagged_item_key(key), value, minutes)
+
+    def add(self, key, val, minutes):
+        """
+        Store an item in the cache if it does not exist.
+
+        :param key: The cache key
+        :type key: str
+
+        :param val: The cache value
+        :type val: mixed
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int|datetime
+
+        :rtype: bool
+        """
+        if not self.has(key):
+            self.put(key, val, minutes)
+
+            return True
+
+        return False
+
+    def increment(self, key, value=1):
+        """
+        Increment the value of an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The increment value
+        :type value: int
+
+        :rtype: int or bool
+        """
+        return self._store.increment(self.tagged_item_key(key), value)
+
+    def decrement(self, key, value=1):
+        """
+        Decrement the value of an item in the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The decrement value
+        :type value: int
+
+        :rtype: int or bool
+        """
+        return self._store.decrement(self.tagged_item_key(key), value)
+
+    def forever(self, key, value):
+        """
+        Store an item in the cache indefinitely.
+
+        :param key: The cache key
+        :type key: str
+
+        :param value: The value
+        :type value: mixed
+        """
+        self._store.forever(self.tagged_item_key(key), value)
+
+    def forget(self, key):
+        """
+        Remove an item from the cache.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: bool
+        """
+        return self._store.forget(self.tagged_item_key(key))
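A sketch of the tag-scoping behavior, using the `DictStore` defined earlier (which inherits `tags()` from `TaggableStore`); the tag and key names are arbitrary:

```python
from cachy.stores import DictStore

store = DictStore()

# tags() returns a TaggedCache whose keys are namespaced by the tag set,
# so flushing a tag only invalidates keys stored under it.
store.tags('users').put('name', 'alice', 10)
assert store.tags('users').get('name') == 'alice'

store.tags('users').flush()            # resets the 'users' tag namespace
assert store.tags('users').get('name') is None
```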
+    def flush(self):
+        """
+        Remove all items from the cache.
+        """
+        self._tags.reset()
+
+    def remember(self, key, minutes, callback):
+        """
+        Get an item from the cache, or store the default value.
+
+        :param key: The cache key
+        :type key: str
+
+        :param minutes: The lifetime in minutes of the cached value
+        :type minutes: int or datetime
+
+        :param callback: The default function
+        :type callback: mixed
+
+        :rtype: mixed
+        """
+        # If the item exists in the cache we will just return this immediately
+        # otherwise we will execute the given callback and cache the result
+        # of that execution for the given number of minutes in storage.
+        val = self.get(key)
+        if val is not None:
+            return val
+
+        val = value(callback)
+
+        self.put(key, val, minutes)
+
+        return val
+
+    def remember_forever(self, key, callback):
+        """
+        Get an item from the cache, or store the default value forever.
+
+        :param key: The cache key
+        :type key: str
+
+        :param callback: The default function
+        :type callback: mixed
+
+        :rtype: mixed
+        """
+        # If the item exists in the cache we will just return this immediately
+        # otherwise we will execute the given callback and cache the result
+        # of that execution forever.
+        val = self.get(key)
+        if val is not None:
+            return val
+
+        val = value(callback)
+
+        self.forever(key, val)
+
+        return val
+
+    def tagged_item_key(self, key):
+        """
+        Get a fully qualified key for a tagged item.
+
+        :param key: The cache key
+        :type key: str
+
+        :rtype: str
+        """
+        return '%s:%s' % (hashlib.sha1(encode(self._tags.get_namespace())).hexdigest(), key)
+
+    def get_prefix(self):
+        """
+        Get the cache key prefix.
+
+        :rtype: str
+        """
+        return self._store.get_prefix()
+
+    def _get_minutes(self, duration):
+        """
+        Calculate the number of minutes with the given duration.
+
+        :param duration: The duration
+        :type duration: int or datetime
+
+        :rtype: int or None
+        """
+        if isinstance(duration, datetime.datetime):
+            from_now = (duration - datetime.datetime.now()).total_seconds()
+            from_now = math.ceil(from_now / 60)
+
+            if from_now > 0:
+                return from_now
+
+            return
+
+        return duration
diff --git a/env/lib/python3.8/site-packages/cachy/utils.py b/env/lib/python3.8/site-packages/cachy/utils.py
new file mode 100644
index 00000000..c5e754af
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cachy/utils.py
@@ -0,0 +1,62 @@
+# -*- coding: utf-8 -*-
+
+import sys
+import os
+import errno
+
+PY2 = sys.version_info[0] == 2
+PY3K = sys.version_info[0] >= 3
+PY33 = sys.version_info >= (3, 3)
+
+if PY2:
+    import imp
+
+    long = long
+    unicode = unicode
+    basestring = basestring
+else:
+    long = int
+    unicode = str
+    basestring = str
+
+
+def decode(string, encodings=None):
+    if not PY2 and not isinstance(string, bytes):
+        return string
+
+    if encodings is None:
+        encodings = ['utf-8', 'latin1', 'ascii']
+
+    for encoding in encodings:
+        try:
+            return string.decode(encoding)
+        except UnicodeDecodeError:
+            pass
+
+    return string.decode(encodings[0], errors='ignore')
+
+
+def encode(string, encodings=None):
+    if isinstance(string, bytes) or PY2 and isinstance(string, unicode):
+        return string
+
+    if encodings is None:
+        encodings = ['utf-8', 'latin1', 'ascii']
+
+    for encoding in encodings:
+        try:
+            return string.encode(encoding)
+        except UnicodeEncodeError:
+            pass
+
+    return string.encode(encodings[0], errors='ignore')
+
+
+def mkdir_p(path, mode=0o777):
+    try:
+        os.makedirs(path, mode)
+    except OSError as exc:
+        if exc.errno == errno.EEXIST and os.path.isdir(path):
+            pass
+        else:
+            raise
diff --git a/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/INSTALLER
b/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/METADATA b/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/METADATA new file mode 100644 index 00000000..3fd255c8 --- /dev/null +++ b/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/METADATA @@ -0,0 +1,95 @@ +Metadata-Version: 2.1 +Name: cairo-lang +Version: 0.10.1 +Summary: Compiler and runner for the Cairo language +Home-page: https://cairo-lang.org/ +Author: Starkware +Author-email: info@starkware.co +License: UNKNOWN +Platform: UNKNOWN +Requires-Python: >=3.6 +Description-Content-Type: text/markdown +Requires-Dist: PyYAML +Requires-Dist: Web3 +Requires-Dist: aiohttp +Requires-Dist: cachetools +Requires-Dist: ecdsa +Requires-Dist: eth-hash[pycryptodome] +Requires-Dist: fastecdsa +Requires-Dist: frozendict (==1.2) +Requires-Dist: lark +Requires-Dist: marshmallow-dataclass (>=7.1.0) +Requires-Dist: marshmallow-enum +Requires-Dist: marshmallow-oneofschema +Requires-Dist: marshmallow (>=3.2.1) +Requires-Dist: mpmath +Requires-Dist: numpy +Requires-Dist: pipdeptree +Requires-Dist: prometheus-client +Requires-Dist: pytest +Requires-Dist: pytest-asyncio +Requires-Dist: sympy +Requires-Dist: typeguard + +# Introduction + +[Cairo](https://cairo-lang.org/) is a programming language for writing provable programs. + +# Documentation + +The Cairo documentation consists of two parts: "Hello Cairo" and "How Cairo Works?". +Both parts can be found in https://cairo-lang.org/docs/. + +We recommend starting from [Setting up the environment](https://cairo-lang.org/docs/quickstart.html). + +# Installation instructions + +You should be able to download the python package zip file directly from +[github](https://github.com/starkware-libs/cairo-lang/releases/tag/v0.10.1) +and install it using ``pip``. +See [Setting up the environment](https://cairo-lang.org/docs/quickstart.html). + +However, if you want to build it yourself, you can build it from the git repository. +It is recommended to run the build inside a docker (as explained below), +since it guarantees that all the dependencies +are installed. Alternatively, you can try following the commands in the +[docker file](https://github.com/starkware-libs/cairo-lang/blob/master/Dockerfile). + +## Building using the dockerfile + +*Note*: This section is relevant only if you wish to build the Cairo python-package yourself, +rather than downloading it. + +The root directory holds a dedicated Dockerfile, which automatically builds the package and runs +the unit tests on a simulated Ubuntu 18.04 environment. +You should have docker installed (see https://docs.docker.com/get-docker/). + +Clone the repository and initialize the git submodules using: + +```bash +> git clone git@github.com:starkware-libs/cairo-lang.git +> cd cairo-lang +``` + +Build the docker image: + +```bash +> docker build --tag cairo . +``` + +If everything works, you should see + +```bash +Successfully tagged cairo:latest +``` + +Once the docker image is built, you can fetch the python package zip file using: + +```bash +> container_id=$(docker create cairo) +> docker cp ${container_id}:/app/cairo-lang-0.10.1.zip . 
+> docker rm -v ${container_id} +``` + + + diff --git a/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/RECORD b/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/RECORD new file mode 100644 index 00000000..71e83ae1 --- /dev/null +++ b/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/RECORD @@ -0,0 +1,773 @@ +../../../bin/cairo-compile,sha256=bFAUg3kwUOQ7cZShBJbqWVcTWvC4q_PzEo1TZwlOkrs,225 +../../../bin/cairo-format,sha256=FBOD0UC0j5YOweXNcciaJ5H9LCvowidQRHH2Jjmc710,224 +../../../bin/cairo-hash-program,sha256=MSMIsx61VT5QwVALgKvFgRZer5MEVftPpz2uMwHeaHA,222 +../../../bin/cairo-migrate,sha256=rEqfMKnfF9CQhYy5HNhabywVbeAbElMUCcUvnK8zlq8,221 +../../../bin/cairo-reconstruct-traceback,sha256=OVRWgnLoEnTRCQzceYk1NhiwY99gvLLlpJ2SVbJep4Q,227 +../../../bin/cairo-run,sha256=oj_-IwuRQbkxxsGLChs3Ev5aWki4leWroJuS3fICU4Y,215 +../../../bin/cairo-sharp,sha256=Fj4AlTprZ8M3QsSOrZllw03Yt4FSklFYnDO8PYOk2v8,216 +../../../bin/starknet,sha256=lc6_5lRCB5jxF6Hg1spQt5hmmLRePPRKrp-c44V9xe4,243 +../../../bin/starknet-compile,sha256=tfu02OXmmxrIiXSc6um0YO3qpIKu7XAB_titXlzzW8U,214 +cairo_lang-0.10.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +cairo_lang-0.10.1.dist-info/METADATA,sha256=UacjA50aHGGJsEHzPfKn1SMWWQ1vU7NKrKbw5IRHtUE,2803 +cairo_lang-0.10.1.dist-info/RECORD,, +cairo_lang-0.10.1.dist-info/WHEEL,sha256=g4nMs7d-Xl9-xC9XovUrsDHGXt-FT0E17Yqo92DEfvY,92 +cairo_lang-0.10.1.dist-info/top_level.txt,sha256=BSbUqU0dDnt87vV9X3HayGU8ObBjorJLUs5p3o_PzIA,19 +services/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +services/__pycache__/__init__.cpython-38.pyc,, +services/everest/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +services/everest/__pycache__/__init__.cpython-38.pyc,, +services/everest/api/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +services/everest/api/__pycache__/__init__.cpython-38.pyc,, +services/everest/api/feeder_gateway/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +services/everest/api/feeder_gateway/__pycache__/__init__.cpython-38.pyc,, +services/everest/api/feeder_gateway/__pycache__/feeder_gateway_client.cpython-38.pyc,, +services/everest/api/feeder_gateway/__pycache__/response_objects.cpython-38.pyc,, +services/everest/api/feeder_gateway/feeder_gateway_client.py,sha256=nIAB1hHR3YFyQ2MRlB-3QmUUD9d7oql4tf-kQznfKyw,617 +services/everest/api/feeder_gateway/response_objects.py,sha256=mnwMdgaEaG_hlmKjr19RG8LSDQjedEqvqWZO74Jre1E,703 +services/everest/api/gateway/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +services/everest/api/gateway/__pycache__/__init__.cpython-38.pyc,, +services/everest/api/gateway/__pycache__/gateway_client.cpython-38.pyc,, +services/everest/api/gateway/__pycache__/transaction.cpython-38.pyc,, +services/everest/api/gateway/__pycache__/transaction_type.cpython-38.pyc,, +services/everest/api/gateway/gateway_client.py,sha256=ssqhKvvyPRWk1UOeNeTTsN5TVqQ9BW63qgpJ-PIkGvk,809 +services/everest/api/gateway/transaction.py,sha256=j-BdZfC7C6ttV-t0oHVQgXF-JcO5KjRHzRCvKuiCi2U,725 +services/everest/api/gateway/transaction_type.py,sha256=fTkUND40NgK51rccPdws_YgmgdA5AiGt060DnaWcp74,264 +services/everest/business_logic/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +services/everest/business_logic/__pycache__/__init__.cpython-38.pyc,, +services/everest/business_logic/__pycache__/internal_transaction.cpython-38.pyc,, +services/everest/business_logic/__pycache__/state.cpython-38.pyc,, 
+services/everest/business_logic/__pycache__/state_api.cpython-38.pyc,, +services/everest/business_logic/__pycache__/transaction_execution_objects.cpython-38.pyc,, +services/everest/business_logic/internal_transaction.py,sha256=RtoozTlVRiHHiB-S0U_FHnMQoE-bMlE2WnkwKq3_o4Q,3973 +services/everest/business_logic/state.py,sha256=kq40CUEN2jXW2K-ABwKBobKVJ4ScGXl1uNl2tHcznQo,8184 +services/everest/business_logic/state_api.py,sha256=uct6fJJWgLP_pFxRSZt8yPwDU-qN1IG8ZFYCsHn7IYo,148 +services/everest/business_logic/transaction_execution_objects.py,sha256=BAN0UeMCIs9B6EYx3lD1O4b8kS5yJU9Q5I5eUG1sVBw,1795 +services/everest/definitions/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +services/everest/definitions/__pycache__/__init__.cpython-38.pyc,, +services/everest/definitions/__pycache__/constants.cpython-38.pyc,, +services/everest/definitions/__pycache__/fields.cpython-38.pyc,, +services/everest/definitions/__pycache__/general_config.cpython-38.pyc,, +services/everest/definitions/constants.py,sha256=UoxTgDMBb-d-WdTbKWqeII6VdI3gTvJoiPJNciwjmEs,99 +services/everest/definitions/fields.py,sha256=fOBLHHcy_UvcnBgasX0jei3s7UF5jkIgG46ZnA6uuXA,3707 +services/everest/definitions/general_config.py,sha256=IOANi3rEMT2QBMRGvcwRCfm0Uhe0zJJg7Yka1A2t-4Q,180 +services/external_api/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +services/external_api/__pycache__/__init__.cpython-38.pyc,, +services/external_api/__pycache__/client.cpython-38.pyc,, +services/external_api/__pycache__/eth_gas_constants.cpython-38.pyc,, +services/external_api/__pycache__/has_uri_prefix.cpython-38.pyc,, +services/external_api/__pycache__/utils.cpython-38.pyc,, +services/external_api/client.py,sha256=dhZZfath2pPEI1lIF7hp3QpTM0PIa6f0ZM6nDQsh5Pc,5367 +services/external_api/eth_gas_constants.py,sha256=UDoK7vRQyx-DCoUI7Eq06iOGL9UoPx2ndX2K75_Mux0,714 +services/external_api/has_uri_prefix.py,sha256=ebrkDr0IFJQkZI3I75SdkvsdB-Jh7ccD4rVB2cOdxw4,811 +services/external_api/utils.py,sha256=hghASvAG5Yx-5xD9S42I75AJ44Jw6gvNkXsRd8-Ndgs,504 +starkware/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/bootloaders/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/bootloaders/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/bootloaders/__pycache__/compute_fact.cpython-38.pyc,, +starkware/cairo/bootloaders/__pycache__/fact_topology.cpython-38.pyc,, +starkware/cairo/bootloaders/__pycache__/generate_fact.cpython-38.pyc,, +starkware/cairo/bootloaders/__pycache__/hash_program.cpython-38.pyc,, +starkware/cairo/bootloaders/__pycache__/program_hash_test_utils.cpython-38.pyc,, +starkware/cairo/bootloaders/compute_fact.py,sha256=G1YSDdIHqrbSQ_iYD95G-ly90qTATGPTEABHNueGf74,3335 +starkware/cairo/bootloaders/fact_topology.py,sha256=AqyScgiSjQYXmvvKHjqTJkBxvbYwETcA63VhKlYyhso,3704 +starkware/cairo/bootloaders/generate_fact.py,sha256=XOBUi5PTajUcHm2AzUzL3I3jIQgkWp-i_JA7sg_tGFA,1885 +starkware/cairo/bootloaders/hash_program.py,sha256=dXjjRPZhMmw5n86A7i8gwSI3LYwtKf_qEIlrePJo6h8,1705 +starkware/cairo/bootloaders/program_hash_test_utils.py,sha256=KnBO6ilu585KS-z7na5fT2Mz4gXgA_DnY1hoFx0jzB0,1271 +starkware/cairo/common/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/common/__pycache__/__init__.cpython-38.pyc,, 
+starkware/cairo/common/__pycache__/cairo_function_runner.cpython-38.pyc,, +starkware/cairo/common/__pycache__/dict.cpython-38.pyc,, +starkware/cairo/common/__pycache__/hash_chain.cpython-38.pyc,, +starkware/cairo/common/__pycache__/hash_state.cpython-38.pyc,, +starkware/cairo/common/__pycache__/math_utils.cpython-38.pyc,, +starkware/cairo/common/__pycache__/patricia_utils.cpython-38.pyc,, +starkware/cairo/common/__pycache__/small_merkle_tree.cpython-38.pyc,, +starkware/cairo/common/__pycache__/structs.cpython-38.pyc,, +starkware/cairo/common/alloc.cairo,sha256=sZAKRuXJr9ObeUQAkitovq7syL5HCnvoL4iLPFxCZVE,159 +starkware/cairo/common/bitwise.cairo,sha256=lRp4lg9s2KznYMFrd-wcgsGoAt3WIIzer5LtqcwsCNw,3121 +starkware/cairo/common/bool.cairo,sha256=nFIlA9KiD36G27ARoCLygrObAFug0_xyYfsF789gZbM,72 +starkware/cairo/common/cairo_blake2s/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/common/cairo_blake2s/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/common/cairo_blake2s/__pycache__/blake2s_utils.cpython-38.pyc,, +starkware/cairo/common/cairo_blake2s/blake2s.cairo,sha256=xVIJnB58wmOSngWepA87KBd18E2TvfoE3PzRNmpCFIo,17417 +starkware/cairo/common/cairo_blake2s/blake2s_utils.py,sha256=2NQNKFO5sCw7AnWpWjRXuU8YsLV5RwS3gcq8O2d5BIg,3919 +starkware/cairo/common/cairo_blake2s/packed_blake2s.cairo,sha256=eMT219xbDFd_TJZr9TYci_-nOa8atjFg3Fd5mRNQZqc,8114 +starkware/cairo/common/cairo_builtins.cairo,sha256=fgSA75Mqiv2d1jtjXKIxN6vQFs5iHtPLGv5Ae0hryFU,795 +starkware/cairo/common/cairo_function_runner.py,sha256=k9dAFnhJ5_VI4ERfvjFx5Z8RNl8FmMeUffjiKsi8k90,11706 +starkware/cairo/common/cairo_keccak/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/common/cairo_keccak/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/common/cairo_keccak/__pycache__/keccak_utils.cpython-38.pyc,, +starkware/cairo/common/cairo_keccak/keccak.cairo,sha256=2dpcmW8LMYTQvupbemVpRbp1t3VqFRknRPmD83W3sD4,18629 +starkware/cairo/common/cairo_keccak/keccak_utils.py,sha256=n7zUYpiENW_gfxNMOzpGp_F1VIteyv2RfIY9-u5hd8k,3876 +starkware/cairo/common/cairo_keccak/packed_keccak.cairo,sha256=6OZ-WHgYfkCfghkGzjiksaeU-S-O5wGQrUKo_NeJmGM,24145 +starkware/cairo/common/cairo_secp/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/common/cairo_secp/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/common/cairo_secp/__pycache__/secp_utils.cpython-38.pyc,, +starkware/cairo/common/cairo_secp/bigint.cairo,sha256=KkB7s3b3mA55RtfcpeSQ_pYdbUp8AZHmfYu8HUgGh8U,4590 +starkware/cairo/common/cairo_secp/constants.cairo,sha256=NCPpZ0WvfCDAfiT_pT7-qyvV1ABHIh4Iup1BhlQmeLY,1039 +starkware/cairo/common/cairo_secp/ec.cairo,sha256=f8HPloVGEC5HgGJ4-QvqdZS7ixGPs-X8MeoebWskoU8,10778 +starkware/cairo/common/cairo_secp/field.cairo,sha256=xlAUKQNlpffD55bGsackFPa1s_NQz3W0iEIfPvapytI,6347 +starkware/cairo/common/cairo_secp/secp_utils.py,sha256=dF4ytHbhSLopxsEKKcWfmYneugpC7RlQ9jj4pYM-pXA,1045 +starkware/cairo/common/cairo_secp/signature.cairo,sha256=l4XrTaSbKbbmiS4KmNwXma83goh6JqdEF2K_LvWQB2c,9365 +starkware/cairo/common/cairo_sha256/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/common/cairo_sha256/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/common/cairo_sha256/__pycache__/sha256_utils.cpython-38.pyc,, +starkware/cairo/common/cairo_sha256/sha256_utils.py,sha256=DALgKcqlRVyPfXQ5_kp81-qzj-cnbVbq-yuQ7n2Gpxk,2606 +starkware/cairo/common/default_dict.cairo,sha256=lC0xGDMrD589VQ9n8QtdpYVoBm25UDQEKAnBnxSEoJk,2003 
+starkware/cairo/common/dict.cairo,sha256=N0v2HH8D9otZOk2jsLEo4jMsAs-Q6brcFinsIOAXQVs,4185 +starkware/cairo/common/dict.py,sha256=ZKcsnVcQtzjai57kBtoxtaxnNIp-HB7-Rx23OkbXIZg,2448 +starkware/cairo/common/dict_access.cairo,sha256=Zq9wBSMxWNcVJCDYw06_V83OMieS58lRb6XfP694u7Q,430 +starkware/cairo/common/ec.cairo,sha256=R76FkFZC8ikdo84vQ2CHGpllDLLJkhEiV4rRfJDlQ3A,9702 +starkware/cairo/common/ec_point.cairo,sha256=F8_P06eRJ8U76Fz0FWAr5C9fDDEWbzF3ppoYqOpY9fY,89 +starkware/cairo/common/find_element.cairo,sha256=X9hXtR9fvkBznTqbuNoB13M-bTEk_ZqN6gbY68ipAmI,4976 +starkware/cairo/common/hash.cairo,sha256=uJE-MYRLedLlm0p4bnBHfh9hlAULtQVjQ5l8u-diKAc,646 +starkware/cairo/common/hash_chain.cairo,sha256=6Js2Pf7PQ06ueRJHKOw_eWQ_PYxZ5wy2X7UKu0VoK1Q,1890 +starkware/cairo/common/hash_chain.py,sha256=aOl56_VMUJvY4rnTwAnP-hUbj4s0cTj25AScJzh1XPM,447 +starkware/cairo/common/hash_state.cairo,sha256=sqjn-5DaJMOYTnknpnX-LVOUn_1Ck1rprLq-IBO3fwE,5175 +starkware/cairo/common/hash_state.py,sha256=Iwg5Wwt1PSnz4I7Ql9Ht8trd8LHTRmi9XVXw1Ag_GwM,575 +starkware/cairo/common/invoke.cairo,sha256=IzBBLMwXL1w01ArNDzm8dGJzh_VWB29QgY8NL29TlKk,790 +starkware/cairo/common/keccak.cairo,sha256=Oykw4wb_vOiNL68_vnMk1wlmCTsb2B6ga2nST9Qww2s,4381 +starkware/cairo/common/keccak_state.cairo,sha256=6OQZriDW0ddI5Tq18EQdDsM1jmLYT-y4jID4oJAsO2A,220 +starkware/cairo/common/math.cairo,sha256=Z9ijVDRX-9mFIItMRcwKG897nEplcKXr1Zwjalj-9KA,15356 +starkware/cairo/common/math_cmp.cairo,sha256=1tVjXhCMCajXfc1b_HlRrBCbobvP2Qk6PJY66_yhdCk,2308 +starkware/cairo/common/math_utils.py,sha256=bTyavo0jaKbjRMndEr2azCF0JQTHTzGCbJ8IcsC8hUY,775 +starkware/cairo/common/memcpy.cairo,sha256=pAWcJFe8HMIIJNcdviUM4yWDsWHPHRrLuut1iwazjw0,937 +starkware/cairo/common/memset.cairo,sha256=mkKRC874vk-Enzj6syZhsrbJyX_tJRuzDedpqnKenQE,845 +starkware/cairo/common/merkle_multi_update.cairo,sha256=K2IxDHOZW-zpzokZM_fVQZOekKiGdVzf4TL-wuPL9BY,7486 +starkware/cairo/common/merkle_update.cairo,sha256=hnkylEQu4lhH3ejfzZJ46HyE6Y_GSFCsiieSO79NlJ4,2862 +starkware/cairo/common/patricia.cairo,sha256=bR4FX4ZoTF2TzANRPvqEeS0bbK3AS1MNI7_8SMB4Oa8,20611 +starkware/cairo/common/patricia_utils.py,sha256=uJ2FS1nFP2eOGArkmBgB1yM96QHFk5OnmSFgaLjL5GU,8360 +starkware/cairo/common/pow.cairo,sha256=OYs2RwjkrDYmoevJ736UhZmmc-iTnqp6oFbuAAfg7_s,1499 +starkware/cairo/common/registers.cairo,sha256=BTnmdIioxc4qEwXR-9gQB4G4wukOSPIi03mb3if0R5o,693 +starkware/cairo/common/segments.cairo,sha256=7HXesUcM_wo_BX7KDcnWLky8bqGsbydQ2TnmzVSOwfQ,456 +starkware/cairo/common/serialize.cairo,sha256=8AyjOmjIgTPFFkS-UwFFaP_pvdGMRb_OIfMtrwyfu2Q,2098 +starkware/cairo/common/set.cairo,sha256=2NzRF4eKR_uLtzBsMcGwg8kkiHBy9TFmi5CntYwFYeA,2398 +starkware/cairo/common/signature.cairo,sha256=4Ubejz_XhfpdFMxyRw0DEwwt8sR1YEYIimdPMuDTDV8,3034 +starkware/cairo/common/small_merkle_tree.cairo,sha256=B42XlguFhCKvoYBR3zS9lphoDM-TrMqlckCFlGnc8To,4692 +starkware/cairo/common/small_merkle_tree.py,sha256=k9GVkNvasPjnJnFPbPC2p5E0VeMOzHq4X9pt46IaSjY,2819 +starkware/cairo/common/squash_dict.cairo,sha256=UEsPhvKuGPGbUOJ-dMhAhWRGgERSEAFL1jI_t-PX0Yo,10260 +starkware/cairo/common/structs.py,sha256=yd4lXuGPn-3fu_68O09E1iAclEFQju4AZmerjTiBqLE,6795 +starkware/cairo/common/uint256.cairo,sha256=xAdCaDamoaer96flwSwYS3zvsBVPLUyjPZrhRJPyHE0,17527 +starkware/cairo/common/usort.cairo,sha256=zRB8OjIlIGD-3E2KniYTZgAdbWYfycgvAKLZI0MmAsc,3382 +starkware/cairo/lang/VERSION,sha256=9NQ54LUjIIoJ0ThiwWggzDAo_ZRBcxDOHVOjHRTWosQ,7 +starkware/cairo/lang/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/__pycache__/__init__.cpython-38.pyc,, 
+starkware/cairo/lang/__pycache__/cairo_constants.cpython-38.pyc,, +starkware/cairo/lang/__pycache__/instances.cpython-38.pyc,, +starkware/cairo/lang/__pycache__/version.cpython-38.pyc,, +starkware/cairo/lang/builtins/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/builtins/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/builtins/__pycache__/all_builtins.cpython-38.pyc,, +starkware/cairo/lang/builtins/all_builtins.py,sha256=bUkfBhTlFzMa63gxz-w0RHUHvOEq-W7IX4JXOB4NrF4,1537 +starkware/cairo/lang/builtins/bitwise/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/builtins/bitwise/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/builtins/bitwise/__pycache__/bitwise_builtin_runner.cpython-38.pyc,, +starkware/cairo/lang/builtins/bitwise/__pycache__/instance_def.cpython-38.pyc,, +starkware/cairo/lang/builtins/bitwise/bitwise_builtin_runner.py,sha256=PXx6MDsexXVrNKCdCnaborEvAnZsgkonmQWeApn4tco,3987 +starkware/cairo/lang/builtins/bitwise/instance_def.py,sha256=C_Ls6Kh6lDQIhOfV3mD4K2KSXkCkKAL8eVmhsBXvVKo,642 +starkware/cairo/lang/builtins/ec/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/builtins/ec/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/builtins/ec/__pycache__/ec_op_builtin_runner.cpython-38.pyc,, +starkware/cairo/lang/builtins/ec/__pycache__/instance_def.cpython-38.pyc,, +starkware/cairo/lang/builtins/ec/ec_op_builtin_runner.py,sha256=i1-ifDGfGBCyEzRXseBXI_s5tjWWO_F155ddFAoACHY,5648 +starkware/cairo/lang/builtins/ec/instance_def.py,sha256=CgfYPwJBXN0tMqEshnRPq_yxvX0MQrKO-bgeMbg6wzk,744 +starkware/cairo/lang/builtins/hash/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/builtins/hash/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/builtins/hash/__pycache__/hash_builtin_runner.cpython-38.pyc,, +starkware/cairo/lang/builtins/hash/__pycache__/instance_def.cpython-38.pyc,, +starkware/cairo/lang/builtins/hash/hash_builtin_runner.py,sha256=a_riyS3QmLuQZqZ-g1CBZeWxTPMTeB5bU0uELW60zoQ,3724 +starkware/cairo/lang/builtins/hash/instance_def.py,sha256=zq0VSGPFXNiPDheLO4BgKTDPBsECbVqBI4dzbvv0TdI,844 +starkware/cairo/lang/builtins/keccak/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/builtins/keccak/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/builtins/keccak/__pycache__/instance_def.cpython-38.pyc,, +starkware/cairo/lang/builtins/keccak/__pycache__/keccak_builtin_runner.cpython-38.pyc,, +starkware/cairo/lang/builtins/keccak/instance_def.py,sha256=T-lsybSYDgEaLKFjbDF_L0YJel3At7sgt9ZVAYjz-pE,825 +starkware/cairo/lang/builtins/keccak/keccak_builtin_runner.py,sha256=SxgNY4mUGEp8lZz6bHoaiWITiN-uPpx1OgYDFD6fRDg,4796 +starkware/cairo/lang/builtins/range_check/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/builtins/range_check/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/builtins/range_check/__pycache__/instance_def.cpython-38.pyc,, +starkware/cairo/lang/builtins/range_check/__pycache__/range_check_builtin_runner.cpython-38.pyc,, +starkware/cairo/lang/builtins/range_check/instance_def.py,sha256=4ErAoFZCNTlBiiGSiLBAHpmMOD5uoKvNM4Q4RIUlSPM,595 +starkware/cairo/lang/builtins/range_check/range_check_builtin_runner.py,sha256=sCaOFHcU0AGnTPe29HTacqlQF_HiclteXHsszf0A9UE,3853 +starkware/cairo/lang/builtins/signature/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 
+starkware/cairo/lang/builtins/signature/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/builtins/signature/__pycache__/instance_def.cpython-38.pyc,, +starkware/cairo/lang/builtins/signature/__pycache__/signature_builtin_runner.cpython-38.pyc,, +starkware/cairo/lang/builtins/signature/instance_def.py,sha256=arrLTlmcfhAEStx28m8OQh22C-sFsCA1_VIwV8D4qmo,646 +starkware/cairo/lang/builtins/signature/signature_builtin_runner.py,sha256=DDaiVqBYWA31Hg68GQNwFTBQ5fXitGMCLsSrFLdmHbA,5450 +starkware/cairo/lang/cairo_constants.py,sha256=_LSsgveMrGvfNZyNM5R6LtIQ_YXoli4kPGc6haLKrFA,41 +starkware/cairo/lang/compiler/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +starkware/cairo/lang/compiler/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/assembler.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/cairo_compile.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/cairo_format.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/const_expr_checker.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/constants.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/debug_info.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/encode.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/error_handling.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/expression_evaluator.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/expression_simplifier.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/expression_transformer.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/fields.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/filter_unused_identifiers.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/identifier_definition.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/identifier_manager.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/identifier_manager_field.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/identifier_utils.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/import_loader.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/injector.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/instruction.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/instruction_builder.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/location_utils.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/module_reader.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/offset_reference.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/parser.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/parser_transformer.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/program.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/proxy_identifier_manager.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/references.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/resolve_search_result.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/scoped_name.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/substitute_identifiers.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/test_utils.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/type_casts.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/type_system.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/type_system_visitor.cpython-38.pyc,, +starkware/cairo/lang/compiler/__pycache__/type_utils.cpython-38.pyc,, 
+starkware/cairo/lang/compiler/__pycache__/unique_name_provider.cpython-38.pyc,, +starkware/cairo/lang/compiler/assembler.py,sha256=cvnw_535u7a8igmztJPWR_9o-6DX4Jm_MfrAgPUktmg,3374 +starkware/cairo/lang/compiler/ast/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +starkware/cairo/lang/compiler/ast/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/aliased_identifier.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/arguments.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/ast_objects_test_utils.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/bool_expr.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/cairo_types.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/code_elements.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/expr.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/expr_func_call.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/formatting_utils.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/instructions.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/module.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/node.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/notes.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/rvalue.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/types.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/__pycache__/visitor.cpython-38.pyc,, +starkware/cairo/lang/compiler/ast/aliased_identifier.py,sha256=PsabmNL1s3HluhdkxjQ6uhrwve7VnCbkk8U0CWp4pFo,915 +starkware/cairo/lang/compiler/ast/arguments.py,sha256=aVOb8h5zalV3IUUL1t_9kicvy1_sJLEXft4Gt9gENYk,1113 +starkware/cairo/lang/compiler/ast/ast_objects_test_utils.py,sha256=Gx_igr9Q6VOb1qBmp8CiCUhvQZId-pHDqCManUYqIv0,1266 +starkware/cairo/lang/compiler/ast/bool_expr.py,sha256=2Kt7FQlSNqs30UIjBb3PKKUKUj0fUswWCITiJ-xXDUs,1987 +starkware/cairo/lang/compiler/ast/cairo_types.py,sha256=M-WZW8fRgChpZtf8x_E_gp8gLuN7E2IUk-VF0dUo14Y,6922 +starkware/cairo/lang/compiler/ast/code_elements.py,sha256=9-FTNDBn2v-a5lTdcZPdwHJiyfUlMHuCtJ8kwwKgCUc,28125 +starkware/cairo/lang/compiler/ast/expr.py,sha256=lxBOISM9b2waOBF1okFnUxny4akD5ebzZp_ILLQyrVs,15861 +starkware/cairo/lang/compiler/ast/expr_func_call.py,sha256=6qkQ7eH1ges2hYY90fXE8PjiMnZEvzMtQgRbR3UPROE,999 +starkware/cairo/lang/compiler/ast/formatting_utils.py,sha256=1fzA3aZEX0LMSidkQdbL-P10cf5NzW93xlPbFfLSxNM,14814 +starkware/cairo/lang/compiler/ast/instructions.py,sha256=IcjgLP8zxqUHIawmo002WP_nexLaDci-QYag8e30MNk,4946 +starkware/cairo/lang/compiler/ast/module.py,sha256=iT1cOADiosFCKyLDAgMuhBAJsWpTnjVueV9XKOAe40o,1153 +starkware/cairo/lang/compiler/ast/node.py,sha256=GK1tnBKoTqyUs_iEI8ZJqmqS6L0tBQyaGhtvLiavyuQ,618 +starkware/cairo/lang/compiler/ast/notes.py,sha256=E9bovRbEY_pv8TXU8cpHm4GNu7dFy9EdpfYcItnG-xM,2720 +starkware/cairo/lang/compiler/ast/rvalue.py,sha256=dXSZNsTuFRk4_VFxZED5llSMny72f_EIu_IATlEfDiY,4742 +starkware/cairo/lang/compiler/ast/types.py,sha256=DIbfXCWj0WEEe2f-tO0ujCc_GosKrl_Q2HDtt6o11DA,2113 +starkware/cairo/lang/compiler/ast/visitor.py,sha256=8v3YrFxsMBuuqgz40r5ep-flZJ3mDr47SxdmCMVJlww,5959 +starkware/cairo/lang/compiler/cairo.ebnf,sha256=cq7Nt3GMq1oDfploQ5rAPDmB19_H63OaTunq9A7zIg0,8410 +starkware/cairo/lang/compiler/cairo_compile.py,sha256=VO294Xe8JPQaOoDejpD38hNcG-QglMTYt4KamZjsxbE,14224 +starkware/cairo/lang/compiler/cairo_format.py,sha256=IC-G5IxwhmHNzvdBs2h3qZHk2Ruw8BnQfYM1pcQD1mg,3530 
+starkware/cairo/lang/compiler/const_expr_checker.py,sha256=82M_xxNkF4l2icWnhuJ9qsZ3bBFhJU14Rx0VEaWpXPI,1110 +starkware/cairo/lang/compiler/constants.py,sha256=WoOEXy4APtyOzMNFZyuNJQ5YrdCINL5xz_9lz9Ywn8o,220 +starkware/cairo/lang/compiler/debug_info.py,sha256=u57GHj0rIPxMTXmjii8h3N94SzTc59iL_jW6oUQTVY4,3660 +starkware/cairo/lang/compiler/encode.py,sha256=gKjejW_oLCO3E2OofB68aj6J-QHAz096uOUW5z9F00w,8169 +starkware/cairo/lang/compiler/error_handling.py,sha256=6plRqs7T4usuBqQ4gL26-bxLa9yx-TlP8YwFgpo0YjM,5859 +starkware/cairo/lang/compiler/expression_evaluator.py,sha256=tlqQvQw6OxZAaWnsMgwUdDaJYYimZTq5sW73uS5AOJQ,2813 +starkware/cairo/lang/compiler/expression_simplifier.py,sha256=YGjjTQy3dm8gbaibdCy2rXdyDQ7KlCBCNfrhxy6gd0E,5568 +starkware/cairo/lang/compiler/expression_transformer.py,sha256=y0aW1XY4cJ_0iVWYBC9r2oTN9QGf9wSnY528UYdliDU,6173 +starkware/cairo/lang/compiler/fields.py,sha256=ZQDuwtpQBPuaytzCIhkaiSgS99Cj-MW_NKrvkapxE9k,1383 +starkware/cairo/lang/compiler/filter_unused_identifiers.py,sha256=ga7uWzMXS1pH6vgQ5JBg-gZ1GYkKElwff6zjwkuoxX0,9125 +starkware/cairo/lang/compiler/identifier_definition.py,sha256=dHbQLO5O4HPmc4iGDXW9P9VHKs-km3TU3kaHz-ewCMI,5901 +starkware/cairo/lang/compiler/identifier_manager.py,sha256=v-ByROldMsuUelYRpghcvL0PXXj-eejgIm3s2v2IN1k,13800 +starkware/cairo/lang/compiler/identifier_manager_field.py,sha256=pVUQHKWiiffPd7qAI7JCu-uedml3PSsUH5WSsezdfaM,1236 +starkware/cairo/lang/compiler/identifier_utils.py,sha256=g8wyZKUuZvJ3GPZuXHLZtoF8G7S5fjtE1unUDEc2j58,2421 +starkware/cairo/lang/compiler/import_loader.py,sha256=zrOezcbPiO3LOHHzqMsLU8aRhkGNNMN35AUHpb0O3EE,4889 +starkware/cairo/lang/compiler/injector.py,sha256=yQr__SnsdzGoVAkmZrWdWqaCsq_Ub1wRpJJMXQXvA0E,1894 +starkware/cairo/lang/compiler/instruction.py,sha256=D-LHlY_fuZ7y1MzbbpZ_keScBFCQzoThXX5AiQ9Hciw,2826 +starkware/cairo/lang/compiler/instruction_builder.py,sha256=VGnWTegJT0ZZw0BgyAd_Zjgl1xbQcAUl8tZoyzqr_2Y,18674 +starkware/cairo/lang/compiler/lib/registers.cairo,sha256=07TQOzp3de3-CYu8dttSn7txOao7GQd7bp8jzqI_ypE,853 +starkware/cairo/lang/compiler/location_utils.py,sha256=LGqMgQyciCDDhBGtdp-OJ4bsDdxjtAHqEd7Rb_TkVGY,857 +starkware/cairo/lang/compiler/module_reader.py,sha256=mRXTWy4jSbW3vMnnzljqKS6TW1-YTnusEmOhCQmqX70,2626 +starkware/cairo/lang/compiler/offset_reference.py,sha256=rboWU05KTT1vbbfck0b1LtZgCKnxNH7bAmfuyL6TFtc,1560 +starkware/cairo/lang/compiler/parser.py,sha256=C3V7u-SokhOYPnSF0w5Hq8M6NRDs6bJ2DDtxehbTtzA,9840 +starkware/cairo/lang/compiler/parser_transformer.py,sha256=uZLPc6chS7haPjYz4POXXosvBMk9jhmh4a3Hixi4NYw,29116 +starkware/cairo/lang/compiler/preprocessor/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +starkware/cairo/lang/compiler/preprocessor/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/auxiliary_info_collector.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/compound_expressions.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/default_pass_manager.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/dependency_graph.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/directives.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/flow.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/identifier_aware_visitor.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/identifier_collector.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/if_labels.cpython-38.pyc,, 
+starkware/cairo/lang/compiler/preprocessor/__pycache__/local_variables.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/memento.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/pass_manager.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/preprocess_codes.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/preprocessor.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/preprocessor_error.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/preprocessor_test_utils.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/preprocessor_utils.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/reg_tracking.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/__pycache__/struct_collector.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/auxiliary_info_collector.py,sha256=PumiHijIn9AWfsYEDpYAc_y1vsDqF0V39AVBsjhyTEk,3354 +starkware/cairo/lang/compiler/preprocessor/bool_expr/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +starkware/cairo/lang/compiler/preprocessor/bool_expr/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/bool_expr/__pycache__/errors.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/bool_expr/__pycache__/lowering.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/bool_expr/__pycache__/lowering_test_utils.cpython-38.pyc,, +starkware/cairo/lang/compiler/preprocessor/bool_expr/errors.py,sha256=MEXUqJLILkPXAYpOfq3QehPTGGko8casK43BjW5bq6k,126 +starkware/cairo/lang/compiler/preprocessor/bool_expr/lowering.py,sha256=xvRAylmTvmgiATgPjRXxTk33eo-N0wTCwRDvOr5J9cQ,2960 +starkware/cairo/lang/compiler/preprocessor/bool_expr/lowering_test_utils.py,sha256=g4dSBq-LkSRc3vVAczJr2DLIuVC-2crYmmsAGfvNBbY,1472 +starkware/cairo/lang/compiler/preprocessor/compound_expressions.py,sha256=ycZKI7FMO_R_wMkjd3NEHSZORdaxJ7k45YVyrfZSq7k,13078 +starkware/cairo/lang/compiler/preprocessor/default_pass_manager.py,sha256=WSLLnGNZZzvdCw8yB-Un_Z9OYcPRVpa7u5PaE8gptyw,6687 +starkware/cairo/lang/compiler/preprocessor/dependency_graph.py,sha256=GYnCNcaaXJCCvAI9unYxDwWceTHRSD1ZvORE1R1f5lw,7966 +starkware/cairo/lang/compiler/preprocessor/directives.py,sha256=qDKzHWZV4_F4Yr2ozW8ZsEBN-iWeLkSQRu6c-ScsFy0,1689 +starkware/cairo/lang/compiler/preprocessor/flow.py,sha256=ChMGcocwdrFvSqd91NPKPUuBWt8apUujsS9H9D1fR84,13634 +starkware/cairo/lang/compiler/preprocessor/identifier_aware_visitor.py,sha256=XXLOGVnSYaj0Bcw-mfXKWqG7nESDMajB2HeUDe4bm4A,13387 +starkware/cairo/lang/compiler/preprocessor/identifier_collector.py,sha256=JfMESaJiSja_byiGtp9sDg3p7TI--ogaxh4glfHad1I,11207 +starkware/cairo/lang/compiler/preprocessor/if_labels.py,sha256=NPqP7s0XgS484ofa4P7HK997Do5o1lWbgXb960o4eIQ,1216 +starkware/cairo/lang/compiler/preprocessor/local_variables.py,sha256=0vi3zGaBAeSpxuZPpSqjfIu2Q-YGQ0q_arUelLQ22z8,9242 +starkware/cairo/lang/compiler/preprocessor/memento.py,sha256=Gg1haOFaCU9NsePpnkeJyTCslvs8Pb5RZQlOoEQDZAg,5116 +starkware/cairo/lang/compiler/preprocessor/pass_manager.py,sha256=YfSl2zWDbwjjsXGE38YMHuMP4S1SXjfnbuPZfXfVmOU,4707 +starkware/cairo/lang/compiler/preprocessor/preprocess_codes.py,sha256=vnPr7AQTklOP-CSuaDF0rO9bohRgvZACuHTbM2kluLk,1084 +starkware/cairo/lang/compiler/preprocessor/preprocessor.py,sha256=VnSn3is2pe8rwFt3Kpy_j8pQkU-KUG-P0OfzmPrT3FQ,105239 +starkware/cairo/lang/compiler/preprocessor/preprocessor_error.py,sha256=xOP7yJ5qlVrk99CE6uxotZ4183FN9fZ3qJHnmI5jGrM,122 
+starkware/cairo/lang/compiler/preprocessor/preprocessor_test_utils.py,sha256=_AmsFJBvkJMnW3z8SjkkxcYl7Y00ZORwtL6hQxhdQhY,3017 +starkware/cairo/lang/compiler/preprocessor/preprocessor_utils.py,sha256=8cR8XbXGQpTQezWHZOcWWcRLsffdxwtEOb7iP8-DY5c,2181 +starkware/cairo/lang/compiler/preprocessor/reg_tracking.py,sha256=t-M6qn9nz3vNjkTXp4VxcQE8GqON7JGDvAMI2_SU1bI,4218 +starkware/cairo/lang/compiler/preprocessor/struct_collector.py,sha256=BfeSino4BoKlpmUBHdSqd3TzPI3MScW7yoNdEOnrh-I,7096 +starkware/cairo/lang/compiler/program.py,sha256=kBAfSzmFb1qrq2sp7SDgmbTJXg2BlkTNj9q_0XG8cZs,6470 +starkware/cairo/lang/compiler/proxy_identifier_manager.py,sha256=7p5Uog5gEpVHicJlcxsU9ogL3aL4_wzbyilH-2rnU5k,3051 +starkware/cairo/lang/compiler/references.py,sha256=f3nvklUyLjSyjkIPbYCnC8LiMD4Li7h0DNwdj7aALJw,5437 +starkware/cairo/lang/compiler/resolve_search_result.py,sha256=hXvevmmA7evYLO-wEzIzyqtgAGa-5IxTaj5pSOxREMA,2041 +starkware/cairo/lang/compiler/scoped_name.py,sha256=oITQRo5XqWT2iVOCmiU-DBu2e_lkFcSuBfGoNUrgivo,1861 +starkware/cairo/lang/compiler/substitute_identifiers.py,sha256=yXEHRtMHKergij_gS-o2b3QfB9mEHhdXBSI0kgqThQc,6120 +starkware/cairo/lang/compiler/test_utils.py,sha256=lFsIubiJGnQvsQuKUDkDtw7md7FRz-KUDpQAHmwNlFY,914 +starkware/cairo/lang/compiler/type_casts.py,sha256=M83rkmQo3iKclSNyJXNps2J0X8dGGTUu8Dntdxe3f6M,5777 +starkware/cairo/lang/compiler/type_system.py,sha256=XZaRJGc-109GxwPi1e3ljOhVWSjRQbBKiVzFJJgvfCY,2346 +starkware/cairo/lang/compiler/type_system_visitor.py,sha256=6jlfy5jULu2ZArLARu3D22qeuhMjSKjSoO7RJ2Znd3U,17971 +starkware/cairo/lang/compiler/type_utils.py,sha256=s1pRkrF3RxF5fLKCMmLr_a4QrIwSAurzfx_NmMwrGDo,1468 +starkware/cairo/lang/compiler/unique_name_provider.py,sha256=pqi-v-AxvpvxnY9EOwiy40a6GwpSdhXcGJuzhQz0W34,1297 +starkware/cairo/lang/instances.py,sha256=kZ0wA_LLBYXqcoumwRvlp_-KHT3261aw7N4tS6xwCuE,7263 +starkware/cairo/lang/migrators/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/migrators/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/migrators/__pycache__/migrator.cpython-38.pyc,, +starkware/cairo/lang/migrators/migrator.py,sha256=f6bIPxKqIUCYjgklxvhUMDOfznUK2SuPVrVshPu3VxQ,10550 +starkware/cairo/lang/migrators/migrator_grammar.ebnf,sha256=tA2_euT9gdfyNnhOALYgfTwjUn_d2XFM7QBU9KaUeu4,8041 +starkware/cairo/lang/tracer/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/tracer/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/tracer/__pycache__/profile.cpython-38.pyc,, +starkware/cairo/lang/tracer/__pycache__/profiler.cpython-38.pyc,, +starkware/cairo/lang/tracer/__pycache__/tracer.cpython-38.pyc,, +starkware/cairo/lang/tracer/__pycache__/tracer_data.cpython-38.pyc,, +starkware/cairo/lang/tracer/favicon.png,sha256=wn0YjqWS44kW2n_xWq5goWa5G7yG03IG8bBYPKgm7Y0,9015 +starkware/cairo/lang/tracer/index.html,sha256=6FdoRuQg9OqnjHxixnwWOPtk557LvxFUz6bs_lb-hPE,1721 +starkware/cairo/lang/tracer/profile.py,sha256=depap0v72ax23U0Fak117XMzxkLXVWGrAQ92ZGKjzH0,7832 +starkware/cairo/lang/tracer/profiler.py,sha256=lbz0elk-Z-fReuF__E5UB1srzUop6ORga-idvoPnTFk,1404 +starkware/cairo/lang/tracer/third_party/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/tracer/third_party/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/tracer/third_party/__pycache__/profile_pb2.cpython-38.pyc,, +starkware/cairo/lang/tracer/third_party/profile_pb2.py,sha256=uf4P73ztT62oCKGBdAcWw8FPbc7JFzBDsacyKvVeZw8,28249 
+starkware/cairo/lang/tracer/tracer.css,sha256=5W7kcfKujgI1ZiZ_kRcrFapbavkpwXW7y2B1qT5L-Cs,1182 +starkware/cairo/lang/tracer/tracer.js,sha256=zSaxwAqZfk2KGHt3-5V-yuR2cKtpmW0t8XKsqkY2PC8,10222 +starkware/cairo/lang/tracer/tracer.py,sha256=-46WYCqcSIIuQHnYzfOhoZurexnnrpOCuCA30d1eNFM,4819 +starkware/cairo/lang/tracer/tracer_data.py,sha256=vDZawo1S8-sIadlqA3XM-UBgEVGfBwlYcbMAcCC7iJk,14170 +starkware/cairo/lang/version.py,sha256=AapO183hhtwG0F9daLENVuqzQ_4M8feBm87io-iBGyY,97 +starkware/cairo/lang/vm/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/lang/vm/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/air_public_input.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/builtin_runner.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/cairo_pie.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/cairo_run.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/cairo_runner.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/crypto.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/memory_dict.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/memory_dict_backend.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/memory_segments.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/output_builtin_runner.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/reconstruct_traceback.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/relocatable.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/relocatable_fields.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/security.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/trace_entry.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/utils.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/validated_memory_dict.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/virtual_machine_base.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/vm.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/vm_consts.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/vm_core.cpython-38.pyc,, +starkware/cairo/lang/vm/__pycache__/vm_exceptions.cpython-38.pyc,, +starkware/cairo/lang/vm/air_public_input.py,sha256=rx3hQXWpVhSBNOgMLsfKy0qtrMYEQcaxsq3EfG_k1bg,3922 +starkware/cairo/lang/vm/builtin_runner.py,sha256=vfs3NNnZU8khLfevxd4q9qXPpy1V1KngKIxzukA0AxE,10757 +starkware/cairo/lang/vm/cairo_pie.py,sha256=xhM4Hq3BpqVpGtAqLZ5EcP6Ah9KBm-LQo7cu0qteNGg,15365 +starkware/cairo/lang/vm/cairo_run.py,sha256=dkCNFp6Nk4PWiB4qZvpQ6-uJRHyP7laMUrkyaW5vAlA,19695 +starkware/cairo/lang/vm/cairo_runner.py,sha256=XbE2zP4gzkeKxGOp8hbGHLzcfs-OkV7BlavP1vcubvA,34752 +starkware/cairo/lang/vm/crypto.py,sha256=y0ABpb9ZmmCFHr_ShafdGJddJBfdqhl2MWstXEpYYtM,343 +starkware/cairo/lang/vm/memory_dict.py,sha256=eOQRI-9kkhR5elIMXhgx9iyMIuzE6hOA5aluihFRZJc,8837 +starkware/cairo/lang/vm/memory_dict_backend.py,sha256=0T8URyxwjwu_X9eloCCh3eQb6hjan8a4G1wHOhWEDak,25 +starkware/cairo/lang/vm/memory_segments.py,sha256=MYYETSlWmB2uxvz_BdVUYCbJ4fiBz3mdkor2WceH4MM,10858 +starkware/cairo/lang/vm/output_builtin_runner.py,sha256=Zjffqbgx0DzUmPu82TVcmDpX2oJmGVzcMCz9PP38sHs,7242 +starkware/cairo/lang/vm/reconstruct_traceback.py,sha256=AU6xP1uNRgDR91FHx_lgFL0nTs1j6NsfSLuBJe-TUjw,2524 +starkware/cairo/lang/vm/relocatable.py,sha256=OsJlWbkVkxpLKgludW7BKdu_DSvWLeB4ogAID98kkdg,5664 +starkware/cairo/lang/vm/relocatable_fields.py,sha256=Apzx9TFZ_GaqOLqNOFlAA2UOdo9sJXDcdrVBj8nF7NY,1203 +starkware/cairo/lang/vm/security.py,sha256=4y1RK1bnTXzM1TGCFPsbdDVK_S-x4ijXrJNi037LXs8,2529 
+starkware/cairo/lang/vm/trace_entry.py,sha256=c0MA-TuTbd5lxT9ZPD4QS9HO4bjbOLTkggYrsTyFc0k,1776 +starkware/cairo/lang/vm/utils.py,sha256=jdG2RKe9ihKpmTleszvqPEviLu5cbJzi6wfiy9ye3IM,3136 +starkware/cairo/lang/vm/validated_memory_dict.py,sha256=HuZVicEO2YiV5muKrpVMcmSXW7OfHcs8dMqI_ydH1cA,2770 +starkware/cairo/lang/vm/virtual_machine_base.py,sha256=tnFjWqLRzUTROYTjWqQXDn9B3ExlkG4uBrgs8xiZM8w,21045 +starkware/cairo/lang/vm/vm.py,sha256=jVBmP7NdMh98oZXldgwucqNb2Y3QVynpqW-KGhFZozk,172 +starkware/cairo/lang/vm/vm_consts.py,sha256=MN6uz_A73lYOAqoMzzGHobleWY-0R72iREZhHeD_jco,14523 +starkware/cairo/lang/vm/vm_core.py,sha256=B5Rapy1yaPSNAyCnOKaBizwhM8Zj3fz7IijIoVcw658,19857 +starkware/cairo/lang/vm/vm_exceptions.py,sha256=QYq3oZ-pWRmIV-A2Op7Gx_WKu2rfmkVGoZixq956MoA,4503 +starkware/cairo/sharp/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/cairo/sharp/__pycache__/__init__.cpython-38.pyc,, +starkware/cairo/sharp/__pycache__/client_lib.cpython-38.pyc,, +starkware/cairo/sharp/__pycache__/fact_checker.cpython-38.pyc,, +starkware/cairo/sharp/__pycache__/sharp_client.cpython-38.pyc,, +starkware/cairo/sharp/client_lib.py,sha256=0mYzmHbRovzQ_jj9tUw9wHydHlycpxyw4YgGUNge_O0,1869 +starkware/cairo/sharp/config.json,sha256=uKAKqOaZjlF6tk32nMzZChHgvowbp8sGn9u7UnUk7kA,156 +starkware/cairo/sharp/fact_checker.py,sha256=ysbg84FgUG0kxBtCUte5S8qnBzXc768THdVBnbWSDyg,1286 +starkware/cairo/sharp/sharp_client.py,sha256=eAWj8CZaNKoNYjOfwi2Ro0ASsTI6Sq62KhJmJbyJtXM,10312 +starkware/crypto/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/crypto/__pycache__/__init__.cpython-38.pyc,, +starkware/crypto/signature/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/crypto/signature/__pycache__/__init__.cpython-38.pyc,, +starkware/crypto/signature/__pycache__/fast_pedersen_hash.cpython-38.pyc,, +starkware/crypto/signature/__pycache__/math_utils.cpython-38.pyc,, +starkware/crypto/signature/__pycache__/nothing_up_my_sleeve_gen.cpython-38.pyc,, +starkware/crypto/signature/__pycache__/signature.cpython-38.pyc,, +starkware/crypto/signature/fast_pedersen_hash.py,sha256=YhnKQDStAg0SfHob4CHsnowo_zoW76-1jBjXxe8a4Oc,1840 +starkware/crypto/signature/math_utils.py,sha256=uXLowThalU02O8e6r-G6uqLl7jSpKCFsWLozQcKYIag,3627 +starkware/crypto/signature/nothing_up_my_sleeve_gen.py,sha256=DILHHUq6oTl9fIRS85yb5EJtrq_7UTixqS5DTJtARFw,6074 +starkware/crypto/signature/pedersen_params.json,sha256=yTl1wWhhjVGWrcXyrIS2gHmnPcSx0RvJFwPu1i3FP50,102708 +starkware/crypto/signature/signature.py,sha256=EWzYav991S7-mFtaZKXHDK8h5iQs6FCVBaSBaRRktDQ,9665 +starkware/eth/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/eth/__pycache__/__init__.cpython-38.pyc,, +starkware/eth/__pycache__/eth_test_utils.cpython-38.pyc,, +starkware/eth/eth_test_utils.py,sha256=DaTaD5_gJ1kNKDeq7fNPPyOP_Xle-vfhriOG4QCo1So,9790 +starkware/python/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/python/__pycache__/__init__.cpython-38.pyc,, +starkware/python/__pycache__/async_subprocess.cpython-38.pyc,, +starkware/python/__pycache__/expression_string.cpython-38.pyc,, +starkware/python/__pycache__/math_utils.cpython-38.pyc,, +starkware/python/__pycache__/merkle_tree.cpython-38.pyc,, +starkware/python/__pycache__/object_utils.cpython-38.pyc,, +starkware/python/__pycache__/python_dependencies.cpython-38.pyc,, +starkware/python/__pycache__/utils.cpython-38.pyc,, +starkware/python/__pycache__/utils_stub_module.cpython-38.pyc,, 
+starkware/python/async_subprocess.py,sha256=Xv7KFNJsHV35NqcuCUpBAqqaxZj_M3nyDl57LPO441s,1522 +starkware/python/expression_string.py,sha256=gJ4i7SGm2EdBctGlMcr2we8gk2xLu0VHcgeKwSCJPkY,5958 +starkware/python/math_utils.py,sha256=X9qcFRhZBp5eSwJXFj7hZ5hIBREEC_qcrrGTAZz4gys,7990 +starkware/python/merkle_tree.py,sha256=Yba_q-mlumPgSOyYGrl36hI_0m7kyvtic1T3EAVEu8E,1614 +starkware/python/object_utils.py,sha256=BxA3wJn0U-Of5x5WfQFFBtQ5OUv6RrYsLMD4xlZJOqU,1796 +starkware/python/python_dependencies.py,sha256=YTUZbAKWVZHpCOX0FGHTWcPdW3kO1zwSuQc-Lr3MRhM,1545 +starkware/python/utils.py,sha256=1re2jtC_-gs8LVT9-YGtvZUO-VN9-mOSaxk_8D4hGzc,17414 +starkware/python/utils_stub_module.py,sha256=WO-vapz1v5ipohxt7165gV3FEGjIqDsq5PfT17Ezuv8,596 +starkware/solidity/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/solidity/__pycache__/__init__.cpython-38.pyc,, +starkware/solidity/__pycache__/utils.cpython-38.pyc,, +starkware/solidity/utils.py,sha256=2q7-D5wOksjCdW5HtWfqZ74qjFV-tMEVrHlrwlOj850,455 +starkware/starknet/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/business_logic/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/business_logic/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/business_logic/__pycache__/utils.cpython-38.pyc,, +starkware/starknet/business_logic/execution/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/business_logic/execution/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/business_logic/execution/__pycache__/execute_entry_point.cpython-38.pyc,, +starkware/starknet/business_logic/execution/__pycache__/execute_entry_point_base.cpython-38.pyc,, +starkware/starknet/business_logic/execution/__pycache__/gas_usage.cpython-38.pyc,, +starkware/starknet/business_logic/execution/__pycache__/objects.cpython-38.pyc,, +starkware/starknet/business_logic/execution/__pycache__/os_usage.cpython-38.pyc,, +starkware/starknet/business_logic/execution/execute_entry_point.py,sha256=iH5gzD_SM-6CUh7g5bU7GQJMlF-ImIhC27SLLoUFu_E,14689 +starkware/starknet/business_logic/execution/execute_entry_point_base.py,sha256=wgtPJPJPjlCv5IdAF0Ze5omEK_IxOCsncyStJtgsgRo,2223 +starkware/starknet/business_logic/execution/gas_usage.py,sha256=gKFi57zz29Falcg3-lYZO-DAcdg0HwQBr-WExS0EQ-w,6447 +starkware/starknet/business_logic/execution/objects.py,sha256=GJYRU3WwZ7V7wUDb3qYcajyu6Th3fjjzBcYtnUibZ58,26644 +starkware/starknet/business_logic/execution/os_resources.json,sha256=jAtpyR3GBEukYeTSXPD_l3ELnhgpF19JjniJcbAgI8U,3957 +starkware/starknet/business_logic/execution/os_usage.py,sha256=MIexmDixN2KugpHvuxz7QkBKxeI39JgFFSALvbzNjLM,1808 +starkware/starknet/business_logic/fact_state/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/business_logic/fact_state/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/business_logic/fact_state/__pycache__/contract_state_objects.cpython-38.pyc,, +starkware/starknet/business_logic/fact_state/__pycache__/patricia_state.cpython-38.pyc,, +starkware/starknet/business_logic/fact_state/__pycache__/state.cpython-38.pyc,, +starkware/starknet/business_logic/fact_state/contract_state_objects.py,sha256=RWBgr8oTk0H3xirSc_EtjbQQNWoHczpdH3QuLt8tXIE,9394 +starkware/starknet/business_logic/fact_state/patricia_state.py,sha256=lSLl2HIRsytOgZBAeOsm2XGWb4RzGUTiJE7v3xDjl8c,3755 
+starkware/starknet/business_logic/fact_state/state.py,sha256=nW5zjqbSFtl3G9MNCU6R7pa_cPBIr7F6TYOI1WN7swY,15837 +starkware/starknet/business_logic/state/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/business_logic/state/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/business_logic/state/__pycache__/state.cpython-38.pyc,, +starkware/starknet/business_logic/state/__pycache__/state_api.cpython-38.pyc,, +starkware/starknet/business_logic/state/__pycache__/state_api_objects.cpython-38.pyc,, +starkware/starknet/business_logic/state/state.py,sha256=xciZILdJE8WW3F-h3L6xqDELVMCzM9PigfI3O1ga9mU,19752 +starkware/starknet/business_logic/state/state_api.py,sha256=THECj6sm3Zc8KLEHZ_Wd2tmLVKPO8pIUUIFZ1NsSNAM,4602 +starkware/starknet/business_logic/state/state_api_objects.py,sha256=EH52jN0_gMT2RkoRGg9fhWvUfB4aBMuRV7u5CCR9ewE,3078 +starkware/starknet/business_logic/transaction/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/business_logic/transaction/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/business_logic/transaction/__pycache__/fee.cpython-38.pyc,, +starkware/starknet/business_logic/transaction/__pycache__/objects.cpython-38.pyc,, +starkware/starknet/business_logic/transaction/__pycache__/state_objects.cpython-38.pyc,, +starkware/starknet/business_logic/transaction/fee.py,sha256=DFnvjh8ouTssUSj5piSYRSoTa8H0pXwsaGTJxNvVitI,3974 +starkware/starknet/business_logic/transaction/objects.py,sha256=P02bL9qAbFuh5_jY6vU5shtYle3m4MjBFc9E-UwAXJo,57386 +starkware/starknet/business_logic/transaction/state_objects.py,sha256=xGLhEqNQian_r54NEuVPo7zLewq8nrjLJjO-gZs-xhY,5682 +starkware/starknet/business_logic/utils.py,sha256=RtC26J2mvNCoYYUjwiUqnVd07pWHEl-0K8gNd5CPg74,10790 +starkware/starknet/cli/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/cli/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/cli/__pycache__/reconstruct_starknet_traceback.cpython-38.pyc,, +starkware/starknet/cli/__pycache__/starknet_cli.cpython-38.pyc,, +starkware/starknet/cli/reconstruct_starknet_traceback.py,sha256=bnPEDbBffGFe9uvIr0J--j3GH4lmpbW2_E0vVyA2Xp8,1364 +starkware/starknet/cli/starknet_cli.py,sha256=DCZByNNX8ZblJ5Hi8XvH5xBbXqQv88eEsTdjqfHdUPE,56912 +starkware/starknet/common/constants.cairo,sha256=EDymADfmfxdHM26JmHMmBiEksFSD8BnjNiXUQTeE50E,337 +starkware/starknet/common/eth_utils.cairo,sha256=7ADA9o9IsI7jjSr9EkTRpTY2PlpjtgFeqE9Qi0o3W2A,452 +starkware/starknet/common/messages.cairo,sha256=DB66tXgFxIESEfjbO2uIYly0yY_vTXoMsaYTmnKi0sk,672 +starkware/starknet/common/storage.cairo,sha256=yyo-2P-YVjDEx__hSghzEbVmtea-dTscOgsF5WP-BLo,2490 +starkware/starknet/common/syscalls.cairo,sha256=_YvYvvDBs4P_nx96VtgXTwiR7phFU_Ok6UQdsQUPJ4I,14857 +starkware/starknet/compiler/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/compiler/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/compiler/__pycache__/compile.cpython-38.pyc,, +starkware/starknet/compiler/__pycache__/contract_interface.cpython-38.pyc,, +starkware/starknet/compiler/__pycache__/data_encoder.cpython-38.pyc,, +starkware/starknet/compiler/__pycache__/event.cpython-38.pyc,, +starkware/starknet/compiler/__pycache__/external_wrapper.cpython-38.pyc,, +starkware/starknet/compiler/__pycache__/starknet_pass_manager.cpython-38.pyc,, +starkware/starknet/compiler/__pycache__/starknet_preprocessor.cpython-38.pyc,, +starkware/starknet/compiler/__pycache__/storage_var.cpython-38.pyc,, 
+starkware/starknet/compiler/__pycache__/validation_utils.cpython-38.pyc,, +starkware/starknet/compiler/compile.py,sha256=gBZ_eAt4B1fzWuqnZy7Px-LNylts5FkPvFyYuOI7i8w,8733 +starkware/starknet/compiler/contract_interface.py,sha256=MUL1pXGp-eB9eQo1dLzA0TC_N3kLTAu1tfCVMwukroU,14493 +starkware/starknet/compiler/data_encoder.py,sha256=_TOMgfGDySl1VJAu8u5sOCTOYLfmhDYp9vAxhVEscQM,14554 +starkware/starknet/compiler/event.py,sha256=s7N-bp1b0sVeFeEIoVCO_SssyPq6DxfdlvgUhEMps-I,7634 +starkware/starknet/compiler/external_wrapper.py,sha256=zFRDPeM7YM9SQ3DxXJwGytMjXTuHKmpGVHuPVPE02a4,31392 +starkware/starknet/compiler/starknet_pass_manager.py,sha256=tLzUCX4KQoTfayWXnRK6bGkT1XHfXYH6OJz0UUClrSc,4597 +starkware/starknet/compiler/starknet_preprocessor.py,sha256=3qRDT1h_-kA94geKVuP46jXTlyT0gYTAJpvRBhWspA0,9310 +starkware/starknet/compiler/storage_var.py,sha256=_NC-jqfvqIAKDX1esz_mEZU-Rmob5v9eOHStZB1P594,12408 +starkware/starknet/compiler/validation_utils.py,sha256=scyU26CnmxDhG8kTQ1LfKKmFXW4z5YP48JuPutkebdM,9800 +starkware/starknet/core/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/core/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/core/os/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/core/os/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/core/os/__pycache__/class_hash.cpython-38.pyc,, +starkware/starknet/core/os/__pycache__/os_program.cpython-38.pyc,, +starkware/starknet/core/os/__pycache__/os_utils.cpython-38.pyc,, +starkware/starknet/core/os/__pycache__/segment_utils.cpython-38.pyc,, +starkware/starknet/core/os/__pycache__/syscall_utils.cpython-38.pyc,, +starkware/starknet/core/os/block_hash/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/core/os/block_hash/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/core/os/block_hash/__pycache__/block_hash.cpython-38.pyc,, +starkware/starknet/core/os/block_hash/block_hash.py,sha256=Nw0jkXpvTsoL7Qms-Y1BedyA8JZsxep2C9QJAwDcfJ4,5677 +starkware/starknet/core/os/class_hash.py,sha256=P28m4Z-GnMFYWrVHkT99bcSbPzMWqAUFDrLwvh4xpOI,7500 +starkware/starknet/core/os/contract_address/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/core/os/contract_address/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/core/os/contract_address/__pycache__/contract_address.cpython-38.pyc,, +starkware/starknet/core/os/contract_address/contract_address.cairo,sha256=lTExBwXZUPHzuG5Iuk8KFES0ZZIGYxJ8PJa8Qvl6VCs,1441 +starkware/starknet/core/os/contract_address/contract_address.py,sha256=27MxLefWeQ7LLuzxaVOQoTEpkp1Tsxj38NmIy-5GDcI,2201 +starkware/starknet/core/os/contracts.cairo,sha256=dO0pv0I5qXbvcj39UiZlrLHqtkkI7_GSmsc6mHs9rPU,7666 +starkware/starknet/core/os/os_program.py,sha256=VLvUP_KsiNpQv0ja39pmnu1ewiyELgfQI-hBt4X9aqo,583 +starkware/starknet/core/os/os_utils.py,sha256=fXU7CKf7LJ15TDEhvaZu944SVTZwIJ4CuaQTdY2g7p0,3599 +starkware/starknet/core/os/segment_utils.py,sha256=70nvOGaNnsY-OK-qY14rPHz_VfaSFIQhBe4_Zx53OMc,2925 +starkware/starknet/core/os/starknet_os_compiled.json,sha256=OBvQl4tUqphdeyioBBBmgb-CD8soJaC8SUSx8eh6_LU,8689376 +starkware/starknet/core/os/syscall_utils.py,sha256=3Qk371n3E-eD0-wBDdcePPTxVGxAwrPb6bA8RXyrqh0,45507 +starkware/starknet/core/os/transaction_hash/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/core/os/transaction_hash/__pycache__/__init__.cpython-38.pyc,, 
+starkware/starknet/core/os/transaction_hash/__pycache__/transaction_hash.cpython-38.pyc,, +starkware/starknet/core/os/transaction_hash/transaction_hash.py,sha256=YHPb5xg8BhPoQdvya5qh50A7RCXcc85ojTnU0h_1yMo,4461 +starkware/starknet/definitions/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/definitions/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/definitions/__pycache__/constants.cpython-38.pyc,, +starkware/starknet/definitions/__pycache__/error_codes.cpython-38.pyc,, +starkware/starknet/definitions/__pycache__/fields.cpython-38.pyc,, +starkware/starknet/definitions/__pycache__/general_config.cpython-38.pyc,, +starkware/starknet/definitions/__pycache__/transaction_type.cpython-38.pyc,, +starkware/starknet/definitions/constants.py,sha256=xQJykRKtS-AOtUK-eauH3KhEuUM_vUkAXnFNpRZzTG8,2771 +starkware/starknet/definitions/error_codes.py,sha256=oebLERE8XTEJfMEFF2gYQXjQrE5HWmlwLiM84RykNeI,5599 +starkware/starknet/definitions/fields.py,sha256=zwnehXRtkU3B_ZfPbykB0B-WVkBrqQlbMZQAUieU_h8,11480 +starkware/starknet/definitions/general_config.py,sha256=KBAtBZmayUJA6ha_LZx6-yoJqaLuBWmpEuvwyPyb1Yc,7132 +starkware/starknet/definitions/general_config.yml,sha256=q5kDLtS4I2viR-_XiD4IbIiSqAgYRfYRHVjx-gUEYqo,511 +starkware/starknet/definitions/transaction_type.py,sha256=ja8bzinGLOY9K5VnlXLhlOdNR0FVv9E2IFItxzH-kKc,299 +starkware/starknet/public/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/public/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/public/__pycache__/abi.cpython-38.pyc,, +starkware/starknet/public/__pycache__/abi_structs.cpython-38.pyc,, +starkware/starknet/public/abi.py,sha256=VG4WlaXhs7Aa0BU_KevX9YJB9mCHZdrFUzJOROkM4dE,2583 +starkware/starknet/public/abi_structs.py,sha256=AUQxyjK0OeE5YGnpkwqulRKPKeBOBwsS2P0ryDbMWgk,4403 +starkware/starknet/security/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/security/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/security/__pycache__/hints_whitelist.cpython-38.pyc,, +starkware/starknet/security/__pycache__/secure_hints.cpython-38.pyc,, +starkware/starknet/security/__pycache__/simple_references.cpython-38.pyc,, +starkware/starknet/security/hints_whitelist.py,sha256=7s-IUjUPzzm4cVSpEzW1C7mbSdOyxyu25PuPjbLIdvs,257 +starkware/starknet/security/secure_hints.py,sha256=11mUKZamfmY5cShen4uM-G5EEAC1354UdqguMDoIaks,7539 +starkware/starknet/security/simple_references.py,sha256=c8c_Pl8TdeVh-809IL3Kvy3hQCQm1iifIRcYGPgZ_7s,2232 +starkware/starknet/security/whitelists/0.6.0.json,sha256=p6-Iy4oNfGP6k2hC8deE7BXwENsNU6U7immGIY4PxW0,21693 +starkware/starknet/security/whitelists/0.8.2.json,sha256=cBI7IJlCjxenXyGScphZ9lrcbBjEBRR7ssmpjqLzMqM,33238 +starkware/starknet/security/whitelists/384_bit_prime_field.json,sha256=5ipSGY4tCFDFYdD4SfkE5cKBxgQHrCNDxaWWTWFji5Q,10049 +starkware/starknet/security/whitelists/cairo_blake2s.json,sha256=zmbMgTwMhhcj_2kjxtM-mm6KwawfOezaSLTMBaKOoBc,5272 +starkware/starknet/security/whitelists/cairo_keccak.json,sha256=gHo1vddp20yOP_ji9o521YjF6Lb6xS_p9U8Z99tDBzY,6754 +starkware/starknet/security/whitelists/cairo_secp.json,sha256=qyvilg5wmuLKGQm4LTff2rYCiHL3r51DHjoEAexFsIQ,5864 +starkware/starknet/security/whitelists/cairo_sha256.json,sha256=c18ZWGsRsPprn5ULmQc31VnfzoBkZ9_aTovn2PyrfXg,4885 +starkware/starknet/security/whitelists/cairo_sha256_arbitrary_input_length.json,sha256=PaxrdxvQ2vfNDvF06GA3_o8tNyk6ArUSrLzlXqn0tMo,946 
+starkware/starknet/security/whitelists/ec_bigint.json,sha256=cc1oYRYrKzF-k_nmJ458443Ks86qtclyVJEEHUBWfQw,1187 +starkware/starknet/security/whitelists/ec_recover.json,sha256=4OqMCEv--Zk4gsf63wj_UXCnDe6OsjPe9ZDCVj23CQs,1592 +starkware/starknet/security/whitelists/latest.json,sha256=_NQe2whoM6rjoXUyLYRsdrggoPriPHw4_uT1B-TkSzs,38667 +starkware/starknet/services/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/services/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/services/api/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/services/api/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/services/api/__pycache__/contract_class.cpython-38.pyc,, +starkware/starknet/services/api/__pycache__/messages.cpython-38.pyc,, +starkware/starknet/services/api/contract_class.py,sha256=11BWGd9XsvZbZRxHfc_b2zt8iEJsCZ3Jy99j4UvYQbM,3761 +starkware/starknet/services/api/feeder_gateway/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/services/api/feeder_gateway/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/services/api/feeder_gateway/__pycache__/feeder_gateway_client.cpython-38.pyc,, +starkware/starknet/services/api/feeder_gateway/__pycache__/request_objects.cpython-38.pyc,, +starkware/starknet/services/api/feeder_gateway/__pycache__/response_objects.cpython-38.pyc,, +starkware/starknet/services/api/feeder_gateway/feeder_gateway_client.py,sha256=tUrqItBfHkqziodcgvUcCd6COjjernfL8U5oNvg5mjY,12764 +starkware/starknet/services/api/feeder_gateway/request_objects.py,sha256=NUbzQeJRL_aMrONfOUvQP29vdyy5DRC9F-tiV5FoqKY,2124 +starkware/starknet/services/api/feeder_gateway/response_objects.py,sha256=jg5rFqyAgxSx9knB6cneWX1ktBmBw4WrZ0p-TRhy9x0,35338 +starkware/starknet/services/api/gateway/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/services/api/gateway/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/services/api/gateway/__pycache__/gateway_client.cpython-38.pyc,, +starkware/starknet/services/api/gateway/__pycache__/transaction.cpython-38.pyc,, +starkware/starknet/services/api/gateway/__pycache__/transaction_utils.cpython-38.pyc,, +starkware/starknet/services/api/gateway/gateway_client.py,sha256=sY-GMa57CqLOQqI7_3vRIznuJdDGqgaC8SkIGg0aMyE,706 +starkware/starknet/services/api/gateway/transaction.py,sha256=TbqM38g6_mRfDON9m5bn2j8vSx4s2PEcmjmlNuB_iSc,12331 +starkware/starknet/services/api/gateway/transaction_utils.py,sha256=DnpN3nBEG-saa4qlhgTX0AOYOAuKgAb7O-Y9gt-phlE,1585 +starkware/starknet/services/api/messages.py,sha256=uuxyFg_X-Sg2wXqYJcaYzTcIjgQxmjEdoBkhMPaPDEI,2596 +starkware/starknet/services/utils/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/services/utils/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/services/utils/__pycache__/sequencer_api_utils.cpython-38.pyc,, +starkware/starknet/services/utils/sequencer_api_utils.py,sha256=PJm4qOqAOO73BQ81D01UzoxKzBnEVQvVh7nQR9dXzLw,3825 +starkware/starknet/storage/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/storage/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/storage/__pycache__/starknet_storage.cpython-38.pyc,, +starkware/starknet/storage/starknet_storage.py,sha256=e7dR7lXW2LPvF8-iRfcDvTOnxgC3P61SOpyuayGLRqI,14061 +starkware/starknet/testing/MockStarknetMessaging.json,sha256=8lOfMfHyqn8J583Eaf7i2-wHNyDEJX5xfndSfdnr2k8,23726 
+starkware/starknet/testing/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/testing/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/testing/__pycache__/contract.cpython-38.pyc,, +starkware/starknet/testing/__pycache__/contract_utils.cpython-38.pyc,, +starkware/starknet/testing/__pycache__/contracts.cpython-38.pyc,, +starkware/starknet/testing/__pycache__/objects.cpython-38.pyc,, +starkware/starknet/testing/__pycache__/postman.cpython-38.pyc,, +starkware/starknet/testing/__pycache__/starknet.cpython-38.pyc,, +starkware/starknet/testing/__pycache__/state.cpython-38.pyc,, +starkware/starknet/testing/contract.py,sha256=EzqgSy7XJtjhaevx9EEUWTS6fi52U7Dx6uxrgJ9w68s,16201 +starkware/starknet/testing/contract_utils.py,sha256=0E4HB-8pfwzInscvhQLW5zyWa26UTJ6i4wWDiUu7w2A,7943 +starkware/starknet/testing/contracts.py,sha256=TygbeWEcsGnjXWkl1gICSt4vp_39jwAMiAbfnM7nyLA,129 +starkware/starknet/testing/objects.py,sha256=-1zT8YLe8HX0eX4kXUrqeqtIuzGlJZDnjcrX9CzPo3c,1393 +starkware/starknet/testing/postman.py,sha256=2jyGaZVjoJsHaUhdNUr2FTPDiZy-Wr_gOv5pt0LI9JA,3071 +starkware/starknet/testing/starknet.py,sha256=-rrl9_pyVaPH6kaLMheUBp9rX8unc6YBxEaHcN5EsBI,5402 +starkware/starknet/testing/state.py,sha256=1dviZKB1W7o2W_iT6TiWC5ED4tUQ88t7tzyAnzpvH64,11055 +starkware/starknet/third_party/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/third_party/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/third_party/open_zeppelin/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/third_party/open_zeppelin/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/third_party/open_zeppelin/__pycache__/starknet_contracts.cpython-38.pyc,, +starkware/starknet/third_party/open_zeppelin/account.json,sha256=wk43iDMm21MiZi8j_AgduO9NvaCU0Zmezkp_BL9vNCg,1138839 +starkware/starknet/third_party/open_zeppelin/starknet_contracts.py,sha256=Jel3W7NoqC4cw_pDWaPIh034X2r6lZwza2wMYi_iHII,215 +starkware/starknet/utils/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/utils/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/utils/__pycache__/api_utils.cpython-38.pyc,, +starkware/starknet/utils/api_utils.py,sha256=H_T4aOxtSCh-78MUQFG0GqtBIlPGrj-s7PkySGG0ESY,954 +starkware/starknet/wallets/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starknet/wallets/__pycache__/__init__.cpython-38.pyc,, +starkware/starknet/wallets/__pycache__/account.cpython-38.pyc,, +starkware/starknet/wallets/__pycache__/open_zeppelin.cpython-38.pyc,, +starkware/starknet/wallets/__pycache__/starknet_context.cpython-38.pyc,, +starkware/starknet/wallets/account.py,sha256=411tMK4A6LQ7xMEJmxl9zS-g04ITPr9EbJ4LtmfsAqI,2833 +starkware/starknet/wallets/open_zeppelin.py,sha256=SSun4ooZq-D6rlJcSalAxPEx5Cc4LV7aT8n2tzgElk4,12990 +starkware/starknet/wallets/starknet_context.py,sha256=ftYgtsA2pp820AGpYnJqa2XdEX_wnVSdiK6RAcwVeLQ,522 +starkware/starkware_utils/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starkware_utils/__pycache__/__init__.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/config_base.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/custom_raising_dict.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/error_handling.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/executor.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/field_validators.cpython-38.pyc,, 
+starkware/starkware_utils/__pycache__/marshmallow_dataclass_fields.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/marshmallow_fields_metadata.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/one_of_schema_tracker.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/serializable.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/serializable_dataclass.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/subsequence.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/validated_dataclass.cpython-38.pyc,, +starkware/starkware_utils/__pycache__/validated_fields.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starkware_utils/commitment_tree/__pycache__/__init__.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/__pycache__/binary_fact_tree.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/__pycache__/binary_fact_tree_da_utils.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/__pycache__/binary_fact_tree_node.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/__pycache__/calculation.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/__pycache__/inner_node_fact.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/__pycache__/leaf_fact.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/__pycache__/update_tree.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/binary_fact_tree.py,sha256=YLppQKXUL0iHDzTYL3vrYuxJNuI0ONIXHANo6Pe7-gw,3981 +starkware/starkware_utils/commitment_tree/binary_fact_tree_da_utils.py,sha256=fjbi0AsVBJj7uhFWRZTkRMHzCqzPLIuOM7icx7TOEmI,2329 +starkware/starkware_utils/commitment_tree/binary_fact_tree_node.py,sha256=WQu0W9xuHvMZFKzkl4M553cJ7-yjrhv0pmFxrVIA1aw,8527 +starkware/starkware_utils/commitment_tree/calculation.py,sha256=4p8gHVma62ucaqNkozeT3cBqEVm_i3pcBv5D3CjlLO0,8854 +starkware/starkware_utils/commitment_tree/inner_node_fact.py,sha256=3PrJrzhDDOv8haQSosqHsXnB9pyyVbFkoiN9E9xplTo,365 +starkware/starkware_utils/commitment_tree/leaf_fact.py,sha256=banyaBkm13f3pYhDPgy7olhz0hHIdPeztHO72Q2PSW4,426 +starkware/starkware_utils/commitment_tree/merkle_tree/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starkware_utils/commitment_tree/merkle_tree/__pycache__/__init__.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/merkle_tree/__pycache__/traverse_tree.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/merkle_tree/traverse_tree.py,sha256=MjDhleKLj6viVQvMPUJXHmd6jZ08YUR1XJDURis2700,2087 +starkware/starkware_utils/commitment_tree/patricia_tree/__init__.py,sha256=h-QyvMVzDNpT3jyVskcSbUVFXxGCRxieFPrvTveZG9k,64 +starkware/starkware_utils/commitment_tree/patricia_tree/__pycache__/__init__.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/patricia_tree/__pycache__/nodes.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/patricia_tree/__pycache__/patricia_tree.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/patricia_tree/__pycache__/virtual_calculation_node.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/patricia_tree/__pycache__/virtual_patricia_node.cpython-38.pyc,, +starkware/starkware_utils/commitment_tree/patricia_tree/nodes.py,sha256=4vBxvFTKuqgivl5z5TkGbYOyFwbFedlEJGUmqB5qvog,5968 +starkware/starkware_utils/commitment_tree/patricia_tree/patricia_tree.py,sha256=IwzPm4QIAMutESmcwuanK73RZ-TPoA9njo4jr4kbqAA,3928 
+starkware/starkware_utils/commitment_tree/patricia_tree/virtual_calculation_node.py,sha256=XqZF5njTwGUKR2qQwbgKau20ZQw3YMruXG2FI9VuQ9g,10676 +starkware/starkware_utils/commitment_tree/patricia_tree/virtual_patricia_node.py,sha256=mVEC6dxsZAQ2W8FFpXwIvwKMRn3lnulORoUE9p3z_w8,9210 +starkware/starkware_utils/commitment_tree/update_tree.py,sha256=rB6IE9MlCn96xo68xTjzvBNs40dsi54emT00v4sBKJg,7774 +starkware/starkware_utils/config_base.py,sha256=GTKqQjO725R0FvWVxllL41rcAgeecvajvAHNsoc1hrE,2167 +starkware/starkware_utils/custom_raising_dict.py,sha256=KPwMoa6f7TkqvZ3WOI6Ds7b80IxzGBfle1r7SfGNurQ,2260 +starkware/starkware_utils/error_handling.py,sha256=LfaU1kRD9WYPuZcvkNamRg4QpPs6m4vceArfYvFBPa8,11731 +starkware/starkware_utils/executor.py,sha256=uH7WbsDCL7xDSlI0-qDZ40l1Vyy6ZVnx7XgrGFNduqo,519 +starkware/starkware_utils/field_validators.py,sha256=WrplLcRnsWhZw1hcL-YSRoxcoEjaA4IvAqt1RBYMUiY,10936 +starkware/starkware_utils/marshmallow_dataclass_fields.py,sha256=wC5zwKdLVIcIhIxpo0-OWlIVWwBi_7lwaOspG9kKl7U,7629 +starkware/starkware_utils/marshmallow_fields_metadata.py,sha256=v_an71gGsmEf936GmZDDwhi3tauxai9hzUAzmVoT1pM,2667 +starkware/starkware_utils/one_of_schema_tracker.py,sha256=XvIQPMjBAKkfgCs3ilAFPS9s61AogLpMtTkWI9cxlrY,3352 +starkware/starkware_utils/serializable.py,sha256=PnzSoD7r8OYul9HP3iCb8mrhO4l7ZO749xIV1TsrXBc,5044 +starkware/starkware_utils/serializable_dataclass.py,sha256=dgRwqcIvDoRl4b4dVdxLC3dPT_uuRnSFt4Cqq5ZqioQ,1205 +starkware/starkware_utils/subsequence.py,sha256=kZTmQoVTqUbMVsrZh-jiE25X3zSLQqVUVahTodnpctI,387 +starkware/starkware_utils/time/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +starkware/starkware_utils/time/__pycache__/__init__.cpython-38.pyc,, +starkware/starkware_utils/time/__pycache__/fastforward.cpython-38.pyc,, +starkware/starkware_utils/time/__pycache__/synchronous_executor.cpython-38.pyc,, +starkware/starkware_utils/time/__pycache__/time.cpython-38.pyc,, +starkware/starkware_utils/time/fastforward.py,sha256=FktDvbDTFsVYCEMsBh2XYibVtII5GRXLi344sLagS3w,1217 +starkware/starkware_utils/time/synchronous_executor.py,sha256=2KcJQSMiLNXKi0YYoaOxOOnlpoSITHsN-3T9z7exxx0,824 +starkware/starkware_utils/time/time.py,sha256=z2Hb3q8bDuBz7qroOdbj_mKUshALBvX5u3ziUsNzwbo,805 +starkware/starkware_utils/validated_dataclass.py,sha256=wwKWkZIzdBBYY4rzORzc9ksW1JLKZXFzmzxWdLgNirs,12766 +starkware/starkware_utils/validated_fields.py,sha256=1AWZfkGdYum6U8B2LdO6g2uQAxuvJ3xcTcDSqCrqOTg,11707 +starkware/storage/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +starkware/storage/__pycache__/__init__.cpython-38.pyc,, +starkware/storage/__pycache__/dict_storage.cpython-38.pyc,, +starkware/storage/__pycache__/gated_storage.cpython-38.pyc,, +starkware/storage/__pycache__/imm_storage.cpython-38.pyc,, +starkware/storage/__pycache__/metrics.cpython-38.pyc,, +starkware/storage/__pycache__/names.cpython-38.pyc,, +starkware/storage/__pycache__/storage.cpython-38.pyc,, +starkware/storage/__pycache__/storage_utils.cpython-38.pyc,, +starkware/storage/dict_storage.py,sha256=CBJXRoO7XdiwhvHi1mPBc5JdP9KABQGwX2V9oi89nAY,2038 +starkware/storage/gated_storage.py,sha256=2nn10mdS8YqmqxG1dbs4jbGkpvEC7P6bi0HX0qd3sWI,3044 +starkware/storage/imm_storage.py,sha256=qs1BV4zCiZUOn1JT3MBSeHkokhvL_eVLyFjl5hkJw6c,2623 +starkware/storage/metrics.py,sha256=MEo8ENT03t1i7vZxo3yEFJCaPP9yuyxR8F4ZH8i32xc,651 +starkware/storage/names.py,sha256=IebXAYcMD5dmViFP9nL3Ivf3Fw7h3dIiKkEy4acRVAw,660 +starkware/storage/storage.py,sha256=HbTGAjlJGN343xjSfekV5wj1sdrgkNoHYEfXlZP5iQI,15566 
+starkware/storage/storage_utils.py,sha256=ZITuGSqwMjHmLqPfvMvouEyzAkBl7Ktj_3jyCI4bScg,801 diff --git a/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/WHEEL b/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/WHEEL new file mode 100644 index 00000000..b552003f --- /dev/null +++ b/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/top_level.txt b/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/top_level.txt new file mode 100644 index 00000000..15abcf47 --- /dev/null +++ b/env/lib/python3.8/site-packages/cairo_lang-0.10.1.dist-info/top_level.txt @@ -0,0 +1,2 @@ +services +starkware diff --git a/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/INSTALLER b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/LICENSE b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/LICENSE new file mode 100644 index 00000000..0a64774e --- /dev/null +++ b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/LICENSE @@ -0,0 +1,21 @@ +This package contains a modified version of ca-bundle.crt: + +ca-bundle.crt -- Bundle of CA Root Certificates + +Certificate data from Mozilla as of: Thu Nov 3 19:04:19 2011# +This is a bundle of X.509 certificates of public Certificate Authorities +(CA). These were automatically extracted from Mozilla's root certificates +file (certdata.txt). This file can be found in the mozilla source tree: +https://hg.mozilla.org/mozilla-central/file/tip/security/nss/lib/ckfw/builtins/certdata.txt +It contains the certificates in PEM format and therefore +can be directly used with curl / libcurl / php_curl, or with +an Apache+mod_ssl webserver for SSL client authentication. +Just configure this file as the SSLCACertificateFile.# + +***** BEGIN LICENSE BLOCK ***** +This Source Code Form is subject to the terms of the Mozilla Public License, +v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain +one at http://mozilla.org/MPL/2.0/. + +***** END LICENSE BLOCK ***** +@(#) $RCSfile: certdata.txt,v $ $Revision: 1.80 $ $Date: 2011/11/03 15:11:58 $ diff --git a/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/METADATA b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/METADATA new file mode 100644 index 00000000..92ea6c83 --- /dev/null +++ b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/METADATA @@ -0,0 +1,83 @@ +Metadata-Version: 2.1 +Name: certifi +Version: 2022.9.24 +Summary: Python package for providing Mozilla's CA Bundle. 
+Home-page: https://github.com/certifi/python-certifi +Author: Kenneth Reitz +Author-email: me@kennethreitz.com +License: MPL-2.0 +Project-URL: Source, https://github.com/certifi/python-certifi +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0) +Classifier: Natural Language :: English +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Requires-Python: >=3.6 +License-File: LICENSE + +Certifi: Python SSL Certificates +================================ + +Certifi provides Mozilla's carefully curated collection of Root Certificates for +validating the trustworthiness of SSL certificates while verifying the identity +of TLS hosts. It has been extracted from the `Requests`_ project. + +Installation +------------ + +``certifi`` is available on PyPI. Simply install it with ``pip``:: + + $ pip install certifi + +Usage +----- + +To reference the installed certificate authority (CA) bundle, you can use the +built-in function:: + + >>> import certifi + + >>> certifi.where() + '/usr/local/lib/python3.7/site-packages/certifi/cacert.pem' + +Or from the command line:: + + $ python -m certifi + /usr/local/lib/python3.7/site-packages/certifi/cacert.pem + +Enjoy! + +1024-bit Root Certificates +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Browsers and certificate authorities have concluded that 1024-bit keys are +unacceptably weak for certificates, particularly root certificates. For this +reason, Mozilla has removed any weak (i.e. 1024-bit key) certificate from its +bundle, replacing it with an equivalent strong (i.e. 2048-bit or greater key) +certificate from the same CA. Because Mozilla removed these certificates from +its bundle, ``certifi`` removed them as well. + +In previous versions, ``certifi`` provided the ``certifi.old_where()`` function +to intentionally re-add the 1024-bit roots back into your bundle. This was not +recommended in production and therefore was removed at the end of 2018. + +.. _`Requests`: https://requests.readthedocs.io/en/master/ + +Addition/Removal of Certificates +-------------------------------- + +Certifi does not support any addition/removal or other modification of the +CA trust store content. This project is intended to provide a reliable and +highly portable root of trust to python deployments. Look to upstream projects +for methods to use alternate trust. 
+ + diff --git a/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/RECORD b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/RECORD new file mode 100644 index 00000000..d11709ca --- /dev/null +++ b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/RECORD @@ -0,0 +1,14 @@ +certifi-2022.9.24.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +certifi-2022.9.24.dist-info/LICENSE,sha256=oC9sY4-fuE0G93ZMOrCF2K9-2luTwWbaVDEkeQd8b7A,1052 +certifi-2022.9.24.dist-info/METADATA,sha256=33NAOmkqKTCb2u1Ys8Zth7ABWXfEuLgp-5gLp1yK_7A,2911 +certifi-2022.9.24.dist-info/RECORD,, +certifi-2022.9.24.dist-info/WHEEL,sha256=ewwEueio1C2XeHTvT17n8dZUJgOvyCWCt0WVNLClP9o,92 +certifi-2022.9.24.dist-info/top_level.txt,sha256=KMu4vUCfsjLrkPbSNdgdekS-pVJzBAJFO__nI8NF6-U,8 +certifi/__init__.py,sha256=luDjIGxDSrQ9O0zthdz5Lnt069Z_7eR1GIEefEaf-Ys,94 +certifi/__main__.py,sha256=xBBoj905TUWBLRGANOcf7oi6e-3dMP4cEoG9OyMs11g,243 +certifi/__pycache__/__init__.cpython-38.pyc,, +certifi/__pycache__/__main__.cpython-38.pyc,, +certifi/__pycache__/core.cpython-38.pyc,, +certifi/cacert.pem,sha256=3l8CcWt_qL42030rGieD3SLufICFX0bYtGhDl_EXVPI,286370 +certifi/core.py,sha256=lhewz0zFb2b4ULsQurElmloYwQoecjWzPqY67P8T7iM,4219 +certifi/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 diff --git a/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/WHEEL b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/WHEEL new file mode 100644 index 00000000..5bad85fd --- /dev/null +++ b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.0) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/top_level.txt b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/top_level.txt new file mode 100644 index 00000000..963eac53 --- /dev/null +++ b/env/lib/python3.8/site-packages/certifi-2022.9.24.dist-info/top_level.txt @@ -0,0 +1 @@ +certifi diff --git a/env/lib/python3.8/site-packages/certifi/__init__.py b/env/lib/python3.8/site-packages/certifi/__init__.py new file mode 100644 index 00000000..af4bcc15 --- /dev/null +++ b/env/lib/python3.8/site-packages/certifi/__init__.py @@ -0,0 +1,4 @@ +from .core import contents, where + +__all__ = ["contents", "where"] +__version__ = "2022.09.24" diff --git a/env/lib/python3.8/site-packages/certifi/__main__.py b/env/lib/python3.8/site-packages/certifi/__main__.py new file mode 100644 index 00000000..8945b5da --- /dev/null +++ b/env/lib/python3.8/site-packages/certifi/__main__.py @@ -0,0 +1,12 @@ +import argparse + +from certifi import contents, where + +parser = argparse.ArgumentParser() +parser.add_argument("-c", "--contents", action="store_true") +args = parser.parse_args() + +if args.contents: + print(contents()) +else: + print(where()) diff --git a/env/lib/python3.8/site-packages/certifi/cacert.pem b/env/lib/python3.8/site-packages/certifi/cacert.pem new file mode 100644 index 00000000..40051551 --- /dev/null +++ b/env/lib/python3.8/site-packages/certifi/cacert.pem @@ -0,0 +1,4708 @@ + +# Issuer: CN=GlobalSign Root CA O=GlobalSign nv-sa OU=Root CA +# Subject: CN=GlobalSign Root CA O=GlobalSign nv-sa OU=Root CA +# Label: "GlobalSign Root CA" +# Serial: 4835703278459707669005204 +# MD5 Fingerprint: 3e:45:52:15:09:51:92:e1:b7:5d:37:9f:b1:87:29:8a +# SHA1 Fingerprint: b1:bc:96:8b:d4:f4:9d:62:2a:a8:9a:81:f2:15:01:52:a4:1d:82:9c +# SHA256 Fingerprint: 
eb:d4:10:40:e4:bb:3e:c7:42:c9:e3:81:d3:1e:f2:a4:1a:48:b6:68:5c:96:e7:ce:f3:c1:df:6c:d4:33:1c:99 +-----BEGIN CERTIFICATE----- +MIIDdTCCAl2gAwIBAgILBAAAAAABFUtaw5QwDQYJKoZIhvcNAQEFBQAwVzELMAkG +A1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYtc2ExEDAOBgNVBAsTB1Jv +b3QgQ0ExGzAZBgNVBAMTEkdsb2JhbFNpZ24gUm9vdCBDQTAeFw05ODA5MDExMjAw +MDBaFw0yODAxMjgxMjAwMDBaMFcxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9i +YWxTaWduIG52LXNhMRAwDgYDVQQLEwdSb290IENBMRswGQYDVQQDExJHbG9iYWxT +aWduIFJvb3QgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDaDuaZ +jc6j40+Kfvvxi4Mla+pIH/EqsLmVEQS98GPR4mdmzxzdzxtIK+6NiY6arymAZavp +xy0Sy6scTHAHoT0KMM0VjU/43dSMUBUc71DuxC73/OlS8pF94G3VNTCOXkNz8kHp +1Wrjsok6Vjk4bwY8iGlbKk3Fp1S4bInMm/k8yuX9ifUSPJJ4ltbcdG6TRGHRjcdG +snUOhugZitVtbNV4FpWi6cgKOOvyJBNPc1STE4U6G7weNLWLBYy5d4ux2x8gkasJ +U26Qzns3dLlwR5EiUWMWea6xrkEmCMgZK9FGqkjWZCrXgzT/LCrBbBlDSgeF59N8 +9iFo7+ryUp9/k5DPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8E +BTADAQH/MB0GA1UdDgQWBBRge2YaRQ2XyolQL30EzTSo//z9SzANBgkqhkiG9w0B +AQUFAAOCAQEA1nPnfE920I2/7LqivjTFKDK1fPxsnCwrvQmeU79rXqoRSLblCKOz +yj1hTdNGCbM+w6DjY1Ub8rrvrTnhQ7k4o+YviiY776BQVvnGCv04zcQLcFGUl5gE +38NflNUVyRRBnMRddWQVDf9VMOyGj/8N7yy5Y0b2qvzfvGn9LhJIZJrglfCm7ymP +AbEVtQwdpf5pLGkkeB6zpxxxYu7KyJesF12KwvhHhm4qxFYxldBniYUr+WymXUad +DKqC5JlR3XC321Y9YeRq4VzW9v493kHMB65jUr9TU/Qr6cf9tveCX4XSQRjbgbME +HMUfpIBvFSDJ3gyICh3WZlXi/EjJKSZp4A== +-----END CERTIFICATE----- + +# Issuer: CN=Entrust.net Certification Authority (2048) O=Entrust.net OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.)/(c) 1999 Entrust.net Limited +# Subject: CN=Entrust.net Certification Authority (2048) O=Entrust.net OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.)/(c) 1999 Entrust.net Limited +# Label: "Entrust.net Premium 2048 Secure Server CA" +# Serial: 946069240 +# MD5 Fingerprint: ee:29:31:bc:32:7e:9a:e6:e8:b5:f7:51:b4:34:71:90 +# SHA1 Fingerprint: 50:30:06:09:1d:97:d4:f5:ae:39:f7:cb:e7:92:7d:7d:65:2d:34:31 +# SHA256 Fingerprint: 6d:c4:71:72:e0:1c:bc:b0:bf:62:58:0d:89:5f:e2:b8:ac:9a:d4:f8:73:80:1e:0c:10:b9:c8:37:d2:1e:b1:77 +-----BEGIN CERTIFICATE----- +MIIEKjCCAxKgAwIBAgIEOGPe+DANBgkqhkiG9w0BAQUFADCBtDEUMBIGA1UEChML +RW50cnVzdC5uZXQxQDA+BgNVBAsUN3d3dy5lbnRydXN0Lm5ldC9DUFNfMjA0OCBp +bmNvcnAuIGJ5IHJlZi4gKGxpbWl0cyBsaWFiLikxJTAjBgNVBAsTHChjKSAxOTk5 +IEVudHJ1c3QubmV0IExpbWl0ZWQxMzAxBgNVBAMTKkVudHJ1c3QubmV0IENlcnRp +ZmljYXRpb24gQXV0aG9yaXR5ICgyMDQ4KTAeFw05OTEyMjQxNzUwNTFaFw0yOTA3 +MjQxNDE1MTJaMIG0MRQwEgYDVQQKEwtFbnRydXN0Lm5ldDFAMD4GA1UECxQ3d3d3 +LmVudHJ1c3QubmV0L0NQU18yMDQ4IGluY29ycC4gYnkgcmVmLiAobGltaXRzIGxp +YWIuKTElMCMGA1UECxMcKGMpIDE5OTkgRW50cnVzdC5uZXQgTGltaXRlZDEzMDEG +A1UEAxMqRW50cnVzdC5uZXQgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgKDIwNDgp +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArU1LqRKGsuqjIAcVFmQq +K0vRvwtKTY7tgHalZ7d4QMBzQshowNtTK91euHaYNZOLGp18EzoOH1u3Hs/lJBQe +sYGpjX24zGtLA/ECDNyrpUAkAH90lKGdCCmziAv1h3edVc3kw37XamSrhRSGlVuX +MlBvPci6Zgzj/L24ScF2iUkZ/cCovYmjZy/Gn7xxGWC4LeksyZB2ZnuU4q941mVT +XTzWnLLPKQP5L6RQstRIzgUyVYr9smRMDuSYB3Xbf9+5CFVghTAp+XtIpGmG4zU/ +HoZdenoVve8AjhUiVBcAkCaTvA5JaJG/+EfTnZVCwQ5N328mz8MYIWJmQ3DW1cAH +4QIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV +HQ4EFgQUVeSB0RGAvtiJuQijMfmhJAkWuXAwDQYJKoZIhvcNAQEFBQADggEBADub +j1abMOdTmXx6eadNl9cZlZD7Bh/KM3xGY4+WZiT6QBshJ8rmcnPyT/4xmf3IDExo +U8aAghOY+rat2l098c5u9hURlIIM7j+VrxGrD9cv3h8Dj1csHsm7mhpElesYT6Yf +zX1XEC+bBAlahLVu2B064dae0Wx5XnkcFMXj0EyTO2U87d89vqbllRrDtRnDvV5b +u/8j72gZyxKTJ1wDLW8w0B62GqzeWvfRqqgnpv55gcR5mTNXuhKwqeBCbJPKVt7+ +bYQLCIt+jerXmCHG8+c8eS9enNFMFY3h7CI3zJpDC5fcgJCNs2ebb0gIFVbPv/Er +fF6adulZkMV8gzURZVE= +-----END 
CERTIFICATE----- + +# Issuer: CN=Baltimore CyberTrust Root O=Baltimore OU=CyberTrust +# Subject: CN=Baltimore CyberTrust Root O=Baltimore OU=CyberTrust +# Label: "Baltimore CyberTrust Root" +# Serial: 33554617 +# MD5 Fingerprint: ac:b6:94:a5:9c:17:e0:d7:91:52:9b:b1:97:06:a6:e4 +# SHA1 Fingerprint: d4:de:20:d0:5e:66:fc:53:fe:1a:50:88:2c:78:db:28:52:ca:e4:74 +# SHA256 Fingerprint: 16:af:57:a9:f6:76:b0:ab:12:60:95:aa:5e:ba:de:f2:2a:b3:11:19:d6:44:ac:95:cd:4b:93:db:f3:f2:6a:eb +-----BEGIN CERTIFICATE----- +MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJ +RTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYD +VQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoX +DTI1MDUxMjIzNTkwMFowWjELMAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9y +ZTETMBEGA1UECxMKQ3liZXJUcnVzdDEiMCAGA1UEAxMZQmFsdGltb3JlIEN5YmVy +VHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKMEuyKr +mD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2yguzmKiYv60iNoS6zjr +IZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo6vWrJYeK +mpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSu +XmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZy +dc93Uk3zyZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/ye +jl0qhqdNkNwnGjkCAwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1 +BE3wMBIGA1UdEwEB/wQIMAYBAf8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3 +DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27TyclhAO992T9Ldcw46QQF+vaKSm2eT92 +9hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukMJY2GQE/szKN+OMY3EU/t3Wgx +jkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhSNzkE1akxehi/oCr0 +Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67G7fyUIhz +ksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLS +R9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp +-----END CERTIFICATE----- + +# Issuer: CN=Entrust Root Certification Authority O=Entrust, Inc. OU=www.entrust.net/CPS is incorporated by reference/(c) 2006 Entrust, Inc. +# Subject: CN=Entrust Root Certification Authority O=Entrust, Inc. OU=www.entrust.net/CPS is incorporated by reference/(c) 2006 Entrust, Inc. 
+# Label: "Entrust Root Certification Authority" +# Serial: 1164660820 +# MD5 Fingerprint: d6:a5:c3:ed:5d:dd:3e:00:c1:3d:87:92:1f:1d:3f:e4 +# SHA1 Fingerprint: b3:1e:b1:b7:40:e3:6c:84:02:da:dc:37:d4:4d:f5:d4:67:49:52:f9 +# SHA256 Fingerprint: 73:c1:76:43:4f:1b:c6:d5:ad:f4:5b:0e:76:e7:27:28:7c:8d:e5:76:16:c1:e6:e6:14:1a:2b:2c:bc:7d:8e:4c +-----BEGIN CERTIFICATE----- +MIIEkTCCA3mgAwIBAgIERWtQVDANBgkqhkiG9w0BAQUFADCBsDELMAkGA1UEBhMC +VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xOTA3BgNVBAsTMHd3dy5lbnRydXN0 +Lm5ldC9DUFMgaXMgaW5jb3Jwb3JhdGVkIGJ5IHJlZmVyZW5jZTEfMB0GA1UECxMW +KGMpIDIwMDYgRW50cnVzdCwgSW5jLjEtMCsGA1UEAxMkRW50cnVzdCBSb290IENl +cnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA2MTEyNzIwMjM0MloXDTI2MTEyNzIw +NTM0MlowgbAxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMTkw +NwYDVQQLEzB3d3cuZW50cnVzdC5uZXQvQ1BTIGlzIGluY29ycG9yYXRlZCBieSBy +ZWZlcmVuY2UxHzAdBgNVBAsTFihjKSAyMDA2IEVudHJ1c3QsIEluYy4xLTArBgNV +BAMTJEVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASIwDQYJ +KoZIhvcNAQEBBQADggEPADCCAQoCggEBALaVtkNC+sZtKm9I35RMOVcF7sN5EUFo +Nu3s/poBj6E4KPz3EEZmLk0eGrEaTsbRwJWIsMn/MYszA9u3g3s+IIRe7bJWKKf4 +4LlAcTfFy0cOlypowCKVYhXbR9n10Cv/gkvJrT7eTNuQgFA/CYqEAOwwCj0Yzfv9 +KlmaI5UXLEWeH25DeW0MXJj+SKfFI0dcXv1u5x609mhF0YaDW6KKjbHjKYD+JXGI +rb68j6xSlkuqUY3kEzEZ6E5Nn9uss2rVvDlUccp6en+Q3X0dgNmBu1kmwhH+5pPi +94DkZfs0Nw4pgHBNrziGLp5/V6+eF67rHMsoIV+2HNjnogQi+dPa2MsCAwEAAaOB +sDCBrTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zArBgNVHRAEJDAi +gA8yMDA2MTEyNzIwMjM0MlqBDzIwMjYxMTI3MjA1MzQyWjAfBgNVHSMEGDAWgBRo +kORnpKZTgMeGZqTx90tD+4S9bTAdBgNVHQ4EFgQUaJDkZ6SmU4DHhmak8fdLQ/uE +vW0wHQYJKoZIhvZ9B0EABBAwDhsIVjcuMTo0LjADAgSQMA0GCSqGSIb3DQEBBQUA +A4IBAQCT1DCw1wMgKtD5Y+iRDAUgqV8ZyntyTtSx29CW+1RaGSwMCPeyvIWonX9t +O1KzKtvn1ISMY/YPyyYBkVBs9F8U4pN0wBOeMDpQ47RgxRzwIkSNcUesyBrJ6Zua +AGAT/3B+XxFNSRuzFVJ7yVTav52Vr2ua2J7p8eRDjeIRRDq/r72DQnNSi6q7pynP +9WQcCk3RvKqsnyrQ/39/2n3qse0wJcGE2jTSW3iDVuycNsMm4hH2Z0kdkquM++v/ +eu6FSqdQgPCnXEqULl8FmTxSQeDNtGPPAUO6nIPcj2A781q0tHuu2guQOHXvgR1m +0vdXcDazv/wor3ElhVsT/h5/WrQ8 +-----END CERTIFICATE----- + +# Issuer: CN=AAA Certificate Services O=Comodo CA Limited +# Subject: CN=AAA Certificate Services O=Comodo CA Limited +# Label: "Comodo AAA Services root" +# Serial: 1 +# MD5 Fingerprint: 49:79:04:b0:eb:87:19:ac:47:b0:bc:11:51:9b:74:d0 +# SHA1 Fingerprint: d1:eb:23:a4:6d:17:d6:8f:d9:25:64:c2:f1:f1:60:17:64:d8:e3:49 +# SHA256 Fingerprint: d7:a7:a0:fb:5d:7e:27:31:d7:71:e9:48:4e:bc:de:f7:1d:5f:0c:3e:0a:29:48:78:2b:c8:3e:e0:ea:69:9e:f4 +-----BEGIN CERTIFICATE----- +MIIEMjCCAxqgAwIBAgIBATANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJHQjEb +MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow +GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEhMB8GA1UEAwwYQUFBIENlcnRpZmlj +YXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVowezEL +MAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE +BwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxITAfBgNVBAMM +GEFBQSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEBBQADggEP +ADCCAQoCggEBAL5AnfRu4ep2hxxNRUSOvkbIgwadwSr+GB+O5AL686tdUIoWMQua +BtDFcCLNSS1UY8y2bmhGC1Pqy0wkwLxyTurxFa70VJoSCsN6sjNg4tqJVfMiWPPe +3M/vg4aijJRPn2jymJBGhCfHdr/jzDUsi14HZGWCwEiwqJH5YZ92IFCokcdmtet4 +YgNW8IoaE+oxox6gmf049vYnMlhvB/VruPsUK6+3qszWY19zjNoFmag4qMsXeDZR +rOme9Hg6jc8P2ULimAyrL58OAd7vn5lJ8S3frHRNG5i1R8XlKdH5kBjHYpy+g8cm +ez6KJcfA3Z3mNWgQIJ2P2N7Sw4ScDV7oL8kCAwEAAaOBwDCBvTAdBgNVHQ4EFgQU +oBEKIz6W8Qfs4q8p74Klf9AwpLQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQF +MAMBAf8wewYDVR0fBHQwcjA4oDagNIYyaHR0cDovL2NybC5jb21vZG9jYS5jb20v +QUFBQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmwwNqA0oDKGMGh0dHA6Ly9jcmwuY29t 
+b2RvLm5ldC9BQUFDZXJ0aWZpY2F0ZVNlcnZpY2VzLmNybDANBgkqhkiG9w0BAQUF +AAOCAQEACFb8AvCb6P+k+tZ7xkSAzk/ExfYAWMymtrwUSWgEdujm7l3sAg9g1o1Q +GE8mTgHj5rCl7r+8dFRBv/38ErjHT1r0iWAFf2C3BUrz9vHCv8S5dIa2LX1rzNLz +Rt0vxuBqw8M0Ayx9lt1awg6nCpnBBYurDC/zXDrPbDdVCYfeU0BsWO/8tqtlbgT2 +G9w84FoVxp7Z8VlIMCFlA2zs6SFz7JsDoeA3raAVGI/6ugLOpyypEBMs1OUIJqsi +l2D4kF501KKaU73yqWjgom7C12yxow+ev+to51byrvLjKzg6CYG1a4XXvi3tPxq3 +smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg== +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 2 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 2 O=QuoVadis Limited +# Label: "QuoVadis Root CA 2" +# Serial: 1289 +# MD5 Fingerprint: 5e:39:7b:dd:f8:ba:ec:82:e9:ac:62:ba:0c:54:00:2b +# SHA1 Fingerprint: ca:3a:fb:cf:12:40:36:4b:44:b2:16:20:88:80:48:39:19:93:7c:f7 +# SHA256 Fingerprint: 85:a0:dd:7d:d7:20:ad:b7:ff:05:f8:3d:54:2b:20:9d:c7:ff:45:28:f7:d6:77:b1:83:89:fe:a5:e5:c4:9e:86 +-----BEGIN CERTIFICATE----- +MIIFtzCCA5+gAwIBAgICBQkwDQYJKoZIhvcNAQEFBQAwRTELMAkGA1UEBhMCQk0x +GTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMTElF1b1ZhZGlzIFJv +b3QgQ0EgMjAeFw0wNjExMjQxODI3MDBaFw0zMTExMjQxODIzMzNaMEUxCzAJBgNV +BAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMRswGQYDVQQDExJRdW9W +YWRpcyBSb290IENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCa +GMpLlA0ALa8DKYrwD4HIrkwZhR0In6spRIXzL4GtMh6QRr+jhiYaHv5+HBg6XJxg +Fyo6dIMzMH1hVBHL7avg5tKifvVrbxi3Cgst/ek+7wrGsxDp3MJGF/hd/aTa/55J +WpzmM+Yklvc/ulsrHHo1wtZn/qtmUIttKGAr79dgw8eTvI02kfN/+NsRE8Scd3bB +rrcCaoF6qUWD4gXmuVbBlDePSHFjIuwXZQeVikvfj8ZaCuWw419eaxGrDPmF60Tp ++ARz8un+XJiM9XOva7R+zdRcAitMOeGylZUtQofX1bOQQ7dsE/He3fbE+Ik/0XX1 +ksOR1YqI0JDs3G3eicJlcZaLDQP9nL9bFqyS2+r+eXyt66/3FsvbzSUr5R/7mp/i +Ucw6UwxI5g69ybR2BlLmEROFcmMDBOAENisgGQLodKcftslWZvB1JdxnwQ5hYIiz +PtGo/KPaHbDRsSNU30R2be1B2MGyIrZTHN81Hdyhdyox5C315eXbyOD/5YDXC2Og +/zOhD7osFRXql7PSorW+8oyWHhqPHWykYTe5hnMz15eWniN9gqRMgeKh0bpnX5UH +oycR7hYQe7xFSkyyBNKr79X9DFHOUGoIMfmR2gyPZFwDwzqLID9ujWc9Otb+fVuI +yV77zGHcizN300QyNQliBJIWENieJ0f7OyHj+OsdWwIDAQABo4GwMIGtMA8GA1Ud +EwEB/wQFMAMBAf8wCwYDVR0PBAQDAgEGMB0GA1UdDgQWBBQahGK8SEwzJQTU7tD2 +A8QZRtGUazBuBgNVHSMEZzBlgBQahGK8SEwzJQTU7tD2A8QZRtGUa6FJpEcwRTEL +MAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMT +ElF1b1ZhZGlzIFJvb3QgQ0EgMoICBQkwDQYJKoZIhvcNAQEFBQADggIBAD4KFk2f +BluornFdLwUvZ+YTRYPENvbzwCYMDbVHZF34tHLJRqUDGCdViXh9duqWNIAXINzn +g/iN/Ae42l9NLmeyhP3ZRPx3UIHmfLTJDQtyU/h2BwdBR5YM++CCJpNVjP4iH2Bl +fF/nJrP3MpCYUNQ3cVX2kiF495V5+vgtJodmVjB3pjd4M1IQWK4/YY7yarHvGH5K +WWPKjaJW1acvvFYfzznB4vsKqBUsfU16Y8Zsl0Q80m/DShcK+JDSV6IZUaUtl0Ha +B0+pUNqQjZRG4T7wlP0QADj1O+hA4bRuVhogzG9Yje0uRY/W6ZM/57Es3zrWIozc +hLsib9D45MY56QSIPMO661V6bYCZJPVsAfv4l7CUW+v90m/xd2gNNWQjrLhVoQPR +TUIZ3Ph1WVaj+ahJefivDrkRoHy3au000LYmYjgahwz46P0u05B/B5EqHdZ+XIWD +mbA4CD/pXvk1B+TJYm5Xf6dQlfe6yJvmjqIBxdZmv3lh8zwc4bmCXF2gw+nYSL0Z +ohEUGW6yhhtoPkg3Goi3XZZenMfvJ2II4pEZXNLxId26F0KCl3GBUzGpn/Z9Yr9y +4aOTHcyKJloJONDO1w2AFrR4pTqHTI2KpdVGl/IsELm8VCLAAVBpQ570su9t+Oza +8eOx79+Rj1QqCyXBJhnEUhAFZdWCEOrCMc0u +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 3" +# Serial: 1478 +# MD5 Fingerprint: 31:85:3c:62:94:97:63:b9:aa:fd:89:4e:af:6f:e0:cf +# SHA1 Fingerprint: 1f:49:14:f7:d8:74:95:1d:dd:ae:02:c0:be:fd:3a:2d:82:75:51:85 +# SHA256 Fingerprint: 18:f1:fc:7f:20:5d:f8:ad:dd:eb:7f:e0:07:dd:57:e3:af:37:5a:9c:4d:8d:73:54:6b:f4:f1:fe:d1:e1:8d:35 +-----BEGIN CERTIFICATE----- +MIIGnTCCBIWgAwIBAgICBcYwDQYJKoZIhvcNAQEFBQAwRTELMAkGA1UEBhMCQk0x +GTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMTElF1b1ZhZGlzIFJv 
+b3QgQ0EgMzAeFw0wNjExMjQxOTExMjNaFw0zMTExMjQxOTA2NDRaMEUxCzAJBgNV +BAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMRswGQYDVQQDExJRdW9W +YWRpcyBSb290IENBIDMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDM +V0IWVJzmmNPTTe7+7cefQzlKZbPoFog02w1ZkXTPkrgEQK0CSzGrvI2RaNggDhoB +4hp7Thdd4oq3P5kazethq8Jlph+3t723j/z9cI8LoGe+AaJZz3HmDyl2/7FWeUUr +H556VOijKTVopAFPD6QuN+8bv+OPEKhyq1hX51SGyMnzW9os2l2ObjyjPtr7guXd +8lyyBTNvijbO0BNO/79KDDRMpsMhvVAEVeuxu537RR5kFd5VAYwCdrXLoT9Cabwv +vWhDFlaJKjdhkf2mrk7AyxRllDdLkgbvBNDInIjbC3uBr7E9KsRlOni27tyAsdLT +mZw67mtaa7ONt9XOnMK+pUsvFrGeaDsGb659n/je7Mwpp5ijJUMv7/FfJuGITfhe +btfZFG4ZM2mnO4SJk8RTVROhUXhA+LjJou57ulJCg54U7QVSWllWp5f8nT8KKdjc +T5EOE7zelaTfi5m+rJsziO+1ga8bxiJTyPbH7pcUsMV8eFLI8M5ud2CEpukqdiDt +WAEXMJPpGovgc2PZapKUSU60rUqFxKMiMPwJ7Wgic6aIDFUhWMXhOp8q3crhkODZ +c6tsgLjoC2SToJyMGf+z0gzskSaHirOi4XCPLArlzW1oUevaPwV/izLmE1xr/l9A +4iLItLRkT9a6fUg+qGkM17uGcclzuD87nSVL2v9A6wIDAQABo4IBlTCCAZEwDwYD +VR0TAQH/BAUwAwEB/zCB4QYDVR0gBIHZMIHWMIHTBgkrBgEEAb5YAAMwgcUwgZMG +CCsGAQUFBwICMIGGGoGDQW55IHVzZSBvZiB0aGlzIENlcnRpZmljYXRlIGNvbnN0 +aXR1dGVzIGFjY2VwdGFuY2Ugb2YgdGhlIFF1b1ZhZGlzIFJvb3QgQ0EgMyBDZXJ0 +aWZpY2F0ZSBQb2xpY3kgLyBDZXJ0aWZpY2F0aW9uIFByYWN0aWNlIFN0YXRlbWVu +dC4wLQYIKwYBBQUHAgEWIWh0dHA6Ly93d3cucXVvdmFkaXNnbG9iYWwuY29tL2Nw +czALBgNVHQ8EBAMCAQYwHQYDVR0OBBYEFPLAE+CCQz777i9nMpY1XNu4ywLQMG4G +A1UdIwRnMGWAFPLAE+CCQz777i9nMpY1XNu4ywLQoUmkRzBFMQswCQYDVQQGEwJC +TTEZMBcGA1UEChMQUXVvVmFkaXMgTGltaXRlZDEbMBkGA1UEAxMSUXVvVmFkaXMg +Um9vdCBDQSAzggIFxjANBgkqhkiG9w0BAQUFAAOCAgEAT62gLEz6wPJv92ZVqyM0 +7ucp2sNbtrCD2dDQ4iH782CnO11gUyeim/YIIirnv6By5ZwkajGxkHon24QRiSem +d1o417+shvzuXYO8BsbRd2sPbSQvS3pspweWyuOEn62Iix2rFo1bZhfZFvSLgNLd ++LJ2w/w4E6oM3kJpK27zPOuAJ9v1pkQNn1pVWQvVDVJIxa6f8i+AxeoyUDUSly7B +4f/xI4hROJ/yZlZ25w9Rl6VSDE1JUZU2Pb+iSwwQHYaZTKrzchGT5Or2m9qoXadN +t54CrnMAyNojA+j56hl0YgCUyyIgvpSnWbWCar6ZeXqp8kokUvd0/bpO5qgdAm6x +DYBEwa7TIzdfu4V8K5Iu6H6li92Z4b8nby1dqnuH/grdS/yO9SbkbnBCbjPsMZ57 +k8HkyWkaPcBrTiJt7qtYTcbQQcEr6k8Sh17rRdhs9ZgC06DYVYoGmRmioHfRMJ6s +zHXug/WwYjnPbFfiTNKRCw51KBuav/0aQ/HKd/s7j2G4aSgWQgRecCocIdiP4b0j +Wy10QJLZYxkNc91pvGJHvOB0K7Lrfb5BG7XARsWhIstfTsEokt4YutUqKLsRixeT +mJlglFwjz1onl14LBQaTNx47aTbrqZ5hHY8y2o4M1nQ+ewkk2gF3R8Q7zTSMmfXK +4SVhM7JZG+Ju1zdXtg2pEto= +-----END CERTIFICATE----- + +# Issuer: O=SECOM Trust.net OU=Security Communication RootCA1 +# Subject: O=SECOM Trust.net OU=Security Communication RootCA1 +# Label: "Security Communication Root CA" +# Serial: 0 +# MD5 Fingerprint: f1:bc:63:6a:54:e0:b5:27:f5:cd:e7:1a:e3:4d:6e:4a +# SHA1 Fingerprint: 36:b1:2b:49:f9:81:9e:d7:4c:9e:bc:38:0f:c6:56:8f:5d:ac:b2:f7 +# SHA256 Fingerprint: e7:5e:72:ed:9f:56:0e:ec:6e:b4:80:00:73:a4:3f:c3:ad:19:19:5a:39:22:82:01:78:95:97:4a:99:02:6b:6c +-----BEGIN CERTIFICATE----- +MIIDWjCCAkKgAwIBAgIBADANBgkqhkiG9w0BAQUFADBQMQswCQYDVQQGEwJKUDEY +MBYGA1UEChMPU0VDT00gVHJ1c3QubmV0MScwJQYDVQQLEx5TZWN1cml0eSBDb21t +dW5pY2F0aW9uIFJvb3RDQTEwHhcNMDMwOTMwMDQyMDQ5WhcNMjMwOTMwMDQyMDQ5 +WjBQMQswCQYDVQQGEwJKUDEYMBYGA1UEChMPU0VDT00gVHJ1c3QubmV0MScwJQYD +VQQLEx5TZWN1cml0eSBDb21tdW5pY2F0aW9uIFJvb3RDQTEwggEiMA0GCSqGSIb3 +DQEBAQUAA4IBDwAwggEKAoIBAQCzs/5/022x7xZ8V6UMbXaKL0u/ZPtM7orw8yl8 +9f/uKuDp6bpbZCKamm8sOiZpUQWZJtzVHGpxxpp9Hp3dfGzGjGdnSj74cbAZJ6kJ +DKaVv0uMDPpVmDvY6CKhS3E4eayXkmmziX7qIWgGmBSWh9JhNrxtJ1aeV+7AwFb9 +Ms+k2Y7CI9eNqPPYJayX5HA49LY6tJ07lyZDo6G8SVlyTCMwhwFY9k6+HGhWZq/N +QV3Is00qVUarH9oe4kA92819uZKAnDfdDJZkndwi92SL32HeFZRSFaB9UslLqCHJ +xrHty8OVYNEP8Ktw+N/LTX7s1vqr2b1/VPKl6Xn62dZ2JChzAgMBAAGjPzA9MB0G +A1UdDgQWBBSgc0mZaNyFW2XjmygvV5+9M7wHSDALBgNVHQ8EBAMCAQYwDwYDVR0T 
+AQH/BAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAaECpqLvkT115swW1F7NgE+vG +kl3g0dNq/vu+m22/xwVtWSDEHPC32oRYAmP6SBbvT6UL90qY8j+eG61Ha2POCEfr +Uj94nK9NrvjVT8+amCoQQTlSxN3Zmw7vkwGusi7KaEIkQmywszo+zenaSMQVy+n5 +Bw+SUEmK3TGXX8npN6o7WWWXlDLJs58+OmJYxUmtYg5xpTKqL8aJdkNAExNnPaJU +JRDL8Try2frbSVa7pv6nQTXD4IhhyYjH3zYQIphZ6rBK+1YWc26sTfcioU+tHXot +RSflMMFe8toTyyVCUZVHA4xsIcx0Qu1T/zOLjw9XARYvz6buyXAiFL39vmwLAw== +-----END CERTIFICATE----- + +# Issuer: CN=XRamp Global Certification Authority O=XRamp Security Services Inc OU=www.xrampsecurity.com +# Subject: CN=XRamp Global Certification Authority O=XRamp Security Services Inc OU=www.xrampsecurity.com +# Label: "XRamp Global CA Root" +# Serial: 107108908803651509692980124233745014957 +# MD5 Fingerprint: a1:0b:44:b3:ca:10:d8:00:6e:9d:0f:d8:0f:92:0a:d1 +# SHA1 Fingerprint: b8:01:86:d1:eb:9c:86:a5:41:04:cf:30:54:f3:4c:52:b7:e5:58:c6 +# SHA256 Fingerprint: ce:cd:dc:90:50:99:d8:da:df:c5:b1:d2:09:b7:37:cb:e2:c1:8c:fb:2c:10:c0:ff:0b:cf:0d:32:86:fc:1a:a2 +-----BEGIN CERTIFICATE----- +MIIEMDCCAxigAwIBAgIQUJRs7Bjq1ZxN1ZfvdY+grTANBgkqhkiG9w0BAQUFADCB +gjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3dy54cmFtcHNlY3VyaXR5LmNvbTEk +MCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2VydmljZXMgSW5jMS0wKwYDVQQDEyRY +UmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQxMTAxMTcx +NDA0WhcNMzUwMTAxMDUzNzE5WjCBgjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3 +dy54cmFtcHNlY3VyaXR5LmNvbTEkMCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2Vy +dmljZXMgSW5jMS0wKwYDVQQDEyRYUmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBB +dXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCYJB69FbS6 +38eMpSe2OAtp87ZOqCwuIR1cRN8hXX4jdP5efrRKt6atH67gBhbim1vZZ3RrXYCP +KZ2GG9mcDZhtdhAoWORlsH9KmHmf4MMxfoArtYzAQDsRhtDLooY2YKTVMIJt2W7Q +DxIEM5dfT2Fa8OT5kavnHTu86M/0ay00fOJIYRyO82FEzG+gSqmUsE3a56k0enI4 +qEHMPJQRfevIpoy3hsvKMzvZPTeL+3o+hiznc9cKV6xkmxnr9A8ECIqsAxcZZPRa +JSKNNCyy9mgdEm3Tih4U2sSPpuIjhdV6Db1q4Ons7Be7QhtnqiXtRYMh/MHJfNVi +PvryxS3T/dRlAgMBAAGjgZ8wgZwwEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0P +BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMZPoj0GY4QJnM5i5ASs +jVy16bYbMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwueHJhbXBzZWN1cml0 +eS5jb20vWEdDQS5jcmwwEAYJKwYBBAGCNxUBBAMCAQEwDQYJKoZIhvcNAQEFBQAD +ggEBAJEVOQMBG2f7Shz5CmBbodpNl2L5JFMn14JkTpAuw0kbK5rc/Kh4ZzXxHfAR +vbdI4xD2Dd8/0sm2qlWkSLoC295ZLhVbO50WfUfXN+pfTXYSNrsf16GBBEYgoyxt +qZ4Bfj8pzgCT3/3JknOJiWSe5yvkHJEs0rnOfc5vMZnT5r7SHpDwCRR5XCOrTdLa +IR9NmXmd4c8nnxCbHIgNsIpkQTG4DmyQJKSbXHGPurt+HBvbaoAPIbzp26a3QPSy +i6mx5O+aGtA9aZnuqCij4Tyz8LIRnM98QObd50N9otg6tamN8jSZxNQQ4Qb9CYQQ +O+7ETPTsJ3xCwnR8gooJybQDJbw= +-----END CERTIFICATE----- + +# Issuer: O=The Go Daddy Group, Inc. OU=Go Daddy Class 2 Certification Authority +# Subject: O=The Go Daddy Group, Inc. 
OU=Go Daddy Class 2 Certification Authority +# Label: "Go Daddy Class 2 CA" +# Serial: 0 +# MD5 Fingerprint: 91:de:06:25:ab:da:fd:32:17:0c:bb:25:17:2a:84:67 +# SHA1 Fingerprint: 27:96:ba:e6:3f:18:01:e2:77:26:1b:a0:d7:77:70:02:8f:20:ee:e4 +# SHA256 Fingerprint: c3:84:6b:f2:4b:9e:93:ca:64:27:4c:0e:c6:7c:1e:cc:5e:02:4f:fc:ac:d2:d7:40:19:35:0e:81:fe:54:6a:e4 +-----BEGIN CERTIFICATE----- +MIIEADCCAuigAwIBAgIBADANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEh +MB8GA1UEChMYVGhlIEdvIERhZGR5IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBE +YWRkeSBDbGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA0MDYyOTE3 +MDYyMFoXDTM0MDYyOTE3MDYyMFowYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRo +ZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3Mg +MiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASAwDQYJKoZIhvcNAQEBBQADggEN +ADCCAQgCggEBAN6d1+pXGEmhW+vXX0iG6r7d/+TvZxz0ZWizV3GgXne77ZtJ6XCA +PVYYYwhv2vLM0D9/AlQiVBDYsoHUwHU9S3/Hd8M+eKsaA7Ugay9qK7HFiH7Eux6w +wdhFJ2+qN1j3hybX2C32qRe3H3I2TqYXP2WYktsqbl2i/ojgC95/5Y0V4evLOtXi +EqITLdiOr18SPaAIBQi2XKVlOARFmR6jYGB0xUGlcmIbYsUfb18aQr4CUWWoriMY +avx4A6lNf4DD+qta/KFApMoZFv6yyO9ecw3ud72a9nmYvLEHZ6IVDd2gWMZEewo+ +YihfukEHU1jPEX44dMX4/7VpkI+EdOqXG68CAQOjgcAwgb0wHQYDVR0OBBYEFNLE +sNKR1EwRcbNhyz2h/t2oatTjMIGNBgNVHSMEgYUwgYKAFNLEsNKR1EwRcbNhyz2h +/t2oatTjoWekZTBjMQswCQYDVQQGEwJVUzEhMB8GA1UEChMYVGhlIEdvIERhZGR5 +IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBEYWRkeSBDbGFzcyAyIENlcnRpZmlj +YXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQAD +ggEBADJL87LKPpH8EsahB4yOd6AzBhRckB4Y9wimPQoZ+YeAEW5p5JYXMP80kWNy +OO7MHAGjHZQopDH2esRU1/blMVgDoszOYtuURXO1v0XJJLXVggKtI3lpjbi2Tc7P +TMozI+gciKqdi0FuFskg5YmezTvacPd+mSYgFFQlq25zheabIZ0KbIIOqPjCDPoQ +HmyW74cNxA9hi63ugyuV+I6ShHI56yDqg+2DzZduCLzrTia2cyvk0/ZM/iZx4mER +dEr/VxqHD3VILs9RaRegAhJhldXRQLIQTO7ErBBDpqWeCtWVYpoNz4iCxTIM5Cuf +ReYNnyicsbkqWletNw+vHX/bvZ8= +-----END CERTIFICATE----- + +# Issuer: O=Starfield Technologies, Inc. OU=Starfield Class 2 Certification Authority +# Subject: O=Starfield Technologies, Inc. 
OU=Starfield Class 2 Certification Authority +# Label: "Starfield Class 2 CA" +# Serial: 0 +# MD5 Fingerprint: 32:4a:4b:bb:c8:63:69:9b:be:74:9a:c6:dd:1d:46:24 +# SHA1 Fingerprint: ad:7e:1c:28:b0:64:ef:8f:60:03:40:20:14:c3:d0:e3:37:0e:b5:8a +# SHA256 Fingerprint: 14:65:fa:20:53:97:b8:76:fa:a6:f0:a9:95:8e:55:90:e4:0f:cc:7f:aa:4f:b7:c2:c8:67:75:21:fb:5f:b6:58 +-----BEGIN CERTIFICATE----- +MIIEDzCCAvegAwIBAgIBADANBgkqhkiG9w0BAQUFADBoMQswCQYDVQQGEwJVUzEl +MCMGA1UEChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMp +U3RhcmZpZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQw +NjI5MTczOTE2WhcNMzQwNjI5MTczOTE2WjBoMQswCQYDVQQGEwJVUzElMCMGA1UE +ChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMpU3RhcmZp +ZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggEgMA0GCSqGSIb3 +DQEBAQUAA4IBDQAwggEIAoIBAQC3Msj+6XGmBIWtDBFk385N78gDGIc/oav7PKaf +8MOh2tTYbitTkPskpD6E8J7oX+zlJ0T1KKY/e97gKvDIr1MvnsoFAZMej2YcOadN ++lq2cwQlZut3f+dZxkqZJRRU6ybH838Z1TBwj6+wRir/resp7defqgSHo9T5iaU0 +X9tDkYI22WY8sbi5gv2cOj4QyDvvBmVmepsZGD3/cVE8MC5fvj13c7JdBmzDI1aa +K4UmkhynArPkPw2vCHmCuDY96pzTNbO8acr1zJ3o/WSNF4Azbl5KXZnJHoe0nRrA +1W4TNSNe35tfPe/W93bC6j67eA0cQmdrBNj41tpvi/JEoAGrAgEDo4HFMIHCMB0G +A1UdDgQWBBS/X7fRzt0fhvRbVazc1xDCDqmI5zCBkgYDVR0jBIGKMIGHgBS/X7fR +zt0fhvRbVazc1xDCDqmI56FspGowaDELMAkGA1UEBhMCVVMxJTAjBgNVBAoTHFN0 +YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAsTKVN0YXJmaWVsZCBD +bGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8w +DQYJKoZIhvcNAQEFBQADggEBAAWdP4id0ckaVaGsafPzWdqbAYcaT1epoXkJKtv3 +L7IezMdeatiDh6GX70k1PncGQVhiv45YuApnP+yz3SFmH8lU+nLMPUxA2IGvd56D +eruix/U0F47ZEUD0/CwqTRV/p2JdLiXTAAsgGh1o+Re49L2L7ShZ3U0WixeDyLJl +xy16paq8U4Zt3VekyvggQQto8PT7dL5WXXp59fkdheMtlb71cZBDzI0fmgAKhynp +VSJYACPq4xJDKVtHCN2MQWplBqjlIapBtJUhlbl90TSrE9atvNziPTnNvT51cKEY +WQPJIrSPnNVeKtelttQKbfi3QBFGmh95DmK/D5fs4C8fF5Q= +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Assured ID Root CA O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Assured ID Root CA O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Assured ID Root CA" +# Serial: 17154717934120587862167794914071425081 +# MD5 Fingerprint: 87:ce:0b:7b:2a:0e:49:00:e1:58:71:9b:37:a8:93:72 +# SHA1 Fingerprint: 05:63:b8:63:0d:62:d7:5a:bb:c8:ab:1e:4b:df:b5:a8:99:b2:4d:43 +# SHA256 Fingerprint: 3e:90:99:b5:01:5e:8f:48:6c:00:bc:ea:9d:11:1e:e7:21:fa:ba:35:5a:89:bc:f1:df:69:56:1e:3d:c6:32:5c +-----BEGIN CERTIFICATE----- +MIIDtzCCAp+gAwIBAgIQDOfg5RfYRv6P5WD8G/AwOTANBgkqhkiG9w0BAQUFADBl +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv +b3QgQ0EwHhcNMDYxMTEwMDAwMDAwWhcNMzExMTEwMDAwMDAwWjBlMQswCQYDVQQG +EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl +cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgQ0EwggEi +MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCtDhXO5EOAXLGH87dg+XESpa7c +JpSIqvTO9SA5KFhgDPiA2qkVlTJhPLWxKISKityfCgyDF3qPkKyK53lTXDGEKvYP +mDI2dsze3Tyoou9q+yHyUmHfnyDXH+Kx2f4YZNISW1/5WBg1vEfNoTb5a3/UsDg+ +wRvDjDPZ2C8Y/igPs6eD1sNuRMBhNZYW/lmci3Zt1/GiSw0r/wty2p5g0I6QNcZ4 +VYcgoc/lbQrISXwxmDNsIumH0DJaoroTghHtORedmTpyoeb6pNnVFzF1roV9Iq4/ +AUaG9ih5yLHa5FcXxH4cDrC0kqZWs72yl+2qp/C3xag/lRbQ/6GW6whfGHdPAgMB +AAGjYzBhMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW +BBRF66Kv9JLLgjEtUYunpyGd823IDzAfBgNVHSMEGDAWgBRF66Kv9JLLgjEtUYun +pyGd823IDzANBgkqhkiG9w0BAQUFAAOCAQEAog683+Lt8ONyc3pklL/3cmbYMuRC +dWKuh+vy1dneVrOfzM4UKLkNl2BcEkxY5NM9g0lFWJc1aRqoR+pWxnmrEthngYTf +fwk8lOa4JiwgvT2zKIn3X/8i4peEH+ll74fg38FnSbNd67IJKusm7Xi+fT8r87cm 
+NW1fiQG2SVufAQWbqz0lwcy2f8Lxb4bG+mRo64EtlOtCt/qMHt1i8b5QZ7dsvfPx +H2sMNgcWfzd8qVttevESRmCD1ycEvkvOl77DZypoEd+A5wwzZr8TDRRu838fYxAe ++o0bJW1sj6W3YQGx0qMmoRBxna3iw/nDmVG3KwcIzi7mULKn+gpFL6Lw8g== +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Global Root CA O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Global Root CA O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Global Root CA" +# Serial: 10944719598952040374951832963794454346 +# MD5 Fingerprint: 79:e4:a9:84:0d:7d:3a:96:d7:c0:4f:e2:43:4c:89:2e +# SHA1 Fingerprint: a8:98:5d:3a:65:e5:e5:c4:b2:d7:d6:6d:40:c6:dd:2f:b1:9c:54:36 +# SHA256 Fingerprint: 43:48:a0:e9:44:4c:78:cb:26:5e:05:8d:5e:89:44:b4:d8:4f:96:62:bd:26:db:25:7f:89:34:a4:43:c7:01:61 +-----BEGIN CERTIFICATE----- +MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD +QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT +MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j +b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG +9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB +CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97 +nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt +43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P +T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4 +gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO +BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR +TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw +DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr +hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg +06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF +PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls +YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk +CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4= +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert High Assurance EV Root CA O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert High Assurance EV Root CA O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert High Assurance EV Root CA" +# Serial: 3553400076410547919724730734378100087 +# MD5 Fingerprint: d4:74:de:57:5c:39:b2:d3:9c:85:83:c5:c0:65:49:8a +# SHA1 Fingerprint: 5f:b7:ee:06:33:e2:59:db:ad:0c:4c:9a:e6:d3:8f:1a:61:c7:dc:25 +# SHA256 Fingerprint: 74:31:e5:f4:c3:c1:ce:46:90:77:4f:0b:61:e0:54:40:88:3b:a9:a0:1e:d0:0b:a6:ab:d7:80:6e:d3:b1:18:cf +-----BEGIN CERTIFICATE----- +MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j +ZSBFViBSb290IENBMB4XDTA2MTExMDAwMDAwMFoXDTMxMTExMDAwMDAwMFowbDEL +MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3 +LmRpZ2ljZXJ0LmNvbTErMCkGA1UEAxMiRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug +RVYgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMbM5XPm ++9S75S0tMqbf5YE/yc0lSbZxKsPVlDRnogocsF9ppkCxxLeyj9CYpKlBWTrT3JTW +PNt0OKRKzE0lgvdKpVMSOO7zSW1xkX5jtqumX8OkhPhPYlG++MXs2ziS4wblCJEM +xChBVfvLWokVfnHoNb9Ncgk9vjo4UFt3MRuNs8ckRZqnrG0AFFoEt7oT61EKmEFB +Ik5lYYeBQVCmeVyJ3hlKV9Uu5l0cUyx+mM0aBhakaHPQNAQTXKFx01p8VdteZOE3 +hzBWBOURtCmAEvF5OYiiAhF8J2a3iLd48soKqDirCmTCv2ZdlYTBoSUeh10aUAsg +EsxBu24LUTi4S8sCAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQF 
+MAMBAf8wHQYDVR0OBBYEFLE+w2kD+L9HAdSYJhoIAu9jZCvDMB8GA1UdIwQYMBaA +FLE+w2kD+L9HAdSYJhoIAu9jZCvDMA0GCSqGSIb3DQEBBQUAA4IBAQAcGgaX3Nec +nzyIZgYIVyHbIUf4KmeqvxgydkAQV8GK83rZEWWONfqe/EW1ntlMMUu4kehDLI6z +eM7b41N5cdblIZQB2lWHmiRk9opmzN6cN82oNLFpmyPInngiK3BD41VHMWEZ71jF +hS9OMPagMRYjyOfiZRYzy78aG6A9+MpeizGLYAiJLQwGXFK3xPkKmNEVX58Svnw2 +Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe +vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep ++OkuE6N36B9K +-----END CERTIFICATE----- + +# Issuer: CN=SwissSign Gold CA - G2 O=SwissSign AG +# Subject: CN=SwissSign Gold CA - G2 O=SwissSign AG +# Label: "SwissSign Gold CA - G2" +# Serial: 13492815561806991280 +# MD5 Fingerprint: 24:77:d9:a8:91:d1:3b:fa:88:2d:c2:ff:f8:cd:33:93 +# SHA1 Fingerprint: d8:c5:38:8a:b7:30:1b:1b:6e:d4:7a:e6:45:25:3a:6f:9f:1a:27:61 +# SHA256 Fingerprint: 62:dd:0b:e9:b9:f5:0a:16:3e:a0:f8:e7:5c:05:3b:1e:ca:57:ea:55:c8:68:8f:64:7c:68:81:f2:c8:35:7b:95 +-----BEGIN CERTIFICATE----- +MIIFujCCA6KgAwIBAgIJALtAHEP1Xk+wMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV +BAYTAkNIMRUwEwYDVQQKEwxTd2lzc1NpZ24gQUcxHzAdBgNVBAMTFlN3aXNzU2ln +biBHb2xkIENBIC0gRzIwHhcNMDYxMDI1MDgzMDM1WhcNMzYxMDI1MDgzMDM1WjBF +MQswCQYDVQQGEwJDSDEVMBMGA1UEChMMU3dpc3NTaWduIEFHMR8wHQYDVQQDExZT +d2lzc1NpZ24gR29sZCBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC +CgKCAgEAr+TufoskDhJuqVAtFkQ7kpJcyrhdhJJCEyq8ZVeCQD5XJM1QiyUqt2/8 +76LQwB8CJEoTlo8jE+YoWACjR8cGp4QjK7u9lit/VcyLwVcfDmJlD909Vopz2q5+ +bbqBHH5CjCA12UNNhPqE21Is8w4ndwtrvxEvcnifLtg+5hg3Wipy+dpikJKVyh+c +6bM8K8vzARO/Ws/BtQpgvd21mWRTuKCWs2/iJneRjOBiEAKfNA+k1ZIzUd6+jbqE +emA8atufK+ze3gE/bk3lUIbLtK/tREDFylqM2tIrfKjuvqblCqoOpd8FUrdVxyJd +MmqXl2MT28nbeTZ7hTpKxVKJ+STnnXepgv9VHKVxaSvRAiTysybUa9oEVeXBCsdt +MDeQKuSeFDNeFhdVxVu1yzSJkvGdJo+hB9TGsnhQ2wwMC3wLjEHXuendjIj3o02y +MszYF9rNt85mndT9Xv+9lz4pded+p2JYryU0pUHHPbwNUMoDAw8IWh+Vc3hiv69y +FGkOpeUDDniOJihC8AcLYiAQZzlG+qkDzAQ4embvIIO1jEpWjpEA/I5cgt6IoMPi +aG59je883WX0XaxR7ySArqpWl2/5rX3aYT+YdzylkbYcjCbaZaIJbcHiVOO5ykxM +gI93e2CaHt+28kgeDrpOVG2Y4OGiGqJ3UM/EY5LsRxmd6+ZrzsECAwEAAaOBrDCB +qTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUWyV7 +lqRlUX64OfPAeGZe6Drn8O4wHwYDVR0jBBgwFoAUWyV7lqRlUX64OfPAeGZe6Drn +8O4wRgYDVR0gBD8wPTA7BglghXQBWQECAQEwLjAsBggrBgEFBQcCARYgaHR0cDov +L3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIBACe6 +45R88a7A3hfm5djV9VSwg/S7zV4Fe0+fdWavPOhWfvxyeDgD2StiGwC5+OlgzczO +UYrHUDFu4Up+GC9pWbY9ZIEr44OE5iKHjn3g7gKZYbge9LgriBIWhMIxkziWMaa5 +O1M/wySTVltpkuzFwbs4AOPsF6m43Md8AYOfMke6UiI0HTJ6CVanfCU2qT1L2sCC +bwq7EsiHSycR+R4tx5M/nttfJmtS2S6K8RTGRI0Vqbe/vd6mGu6uLftIdxf+u+yv +GPUqUfA5hJeVbG4bwyvEdGB5JbAKJ9/fXtI5z0V9QkvfsywexcZdylU6oJxpmo/a +77KwPJ+HbBIrZXAVUjEaJM9vMSNQH4xPjyPDdEFjHFWoFN0+4FFQz/EbMFYOkrCC +hdiDyyJkvC24JdVUorgG6q2SpCSgwYa1ShNqR88uC1aVVMvOmttqtKay20EIhid3 +92qgQmwLOM7XdVAyksLfKzAiSNDVQTglXaTpXZ/GlHXQRf0wl0OPkKsKx4ZzYEpp +Ld6leNcG2mqeSz53OiATIgHQv2ieY2BrNU0LbbqhPcCT4H8js1WtciVORvnSFu+w +ZMEBnunKoGqYDs/YYPIvSbjkQuE4NRb0yG5P94FW6LqjviOvrv1vA+ACOzB2+htt +Qc8Bsem4yWb02ybzOqR08kkkW8mw0FfB+j564ZfJ +-----END CERTIFICATE----- + +# Issuer: CN=SwissSign Silver CA - G2 O=SwissSign AG +# Subject: CN=SwissSign Silver CA - G2 O=SwissSign AG +# Label: "SwissSign Silver CA - G2" +# Serial: 5700383053117599563 +# MD5 Fingerprint: e0:06:a1:c9:7d:cf:c9:fc:0d:c0:56:75:96:d8:62:13 +# SHA1 Fingerprint: 9b:aa:e5:9f:56:ee:21:cb:43:5a:be:25:93:df:a7:f0:40:d1:1d:cb +# SHA256 Fingerprint: be:6c:4d:a2:bb:b9:ba:59:b6:f3:93:97:68:37:42:46:c3:c0:05:99:3f:a9:8f:02:0d:1d:ed:be:d4:8a:81:d5 +-----BEGIN CERTIFICATE----- 
+MIIFvTCCA6WgAwIBAgIITxvUL1S7L0swDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UE +BhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMYU3dpc3NTaWdu +IFNpbHZlciBDQSAtIEcyMB4XDTA2MTAyNTA4MzI0NloXDTM2MTAyNTA4MzI0Nlow +RzELMAkGA1UEBhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMY +U3dpc3NTaWduIFNpbHZlciBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8A +MIICCgKCAgEAxPGHf9N4Mfc4yfjDmUO8x/e8N+dOcbpLj6VzHVxumK4DV644N0Mv +Fz0fyM5oEMF4rhkDKxD6LHmD9ui5aLlV8gREpzn5/ASLHvGiTSf5YXu6t+WiE7br +YT7QbNHm+/pe7R20nqA1W6GSy/BJkv6FCgU+5tkL4k+73JU3/JHpMjUi0R86TieF +nbAVlDLaYQ1HTWBCrpJH6INaUFjpiou5XaHc3ZlKHzZnu0jkg7Y360g6rw9njxcH +6ATK72oxh9TAtvmUcXtnZLi2kUpCe2UuMGoM9ZDulebyzYLs2aFK7PayS+VFheZt +eJMELpyCbTapxDFkH4aDCyr0NQp4yVXPQbBH6TCfmb5hqAaEuSh6XzjZG6k4sIN/ +c8HDO0gqgg8hm7jMqDXDhBuDsz6+pJVpATqJAHgE2cn0mRmrVn5bi4Y5FZGkECwJ +MoBgs5PAKrYYC51+jUnyEEp/+dVGLxmSo5mnJqy7jDzmDrxHB9xzUfFwZC8I+bRH +HTBsROopN4WSaGa8gzj+ezku01DwH/teYLappvonQfGbGHLy9YR0SslnxFSuSGTf +jNFusB3hB48IHpmccelM2KX3RxIfdNFRnobzwqIjQAtz20um53MGjMGg6cFZrEb6 +5i/4z3GcRm25xBWNOHkDRUjvxF3XCO6HOSKGsg0PWEP3calILv3q1h8CAwEAAaOB +rDCBqTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU +F6DNweRBtjpbO8tFnb0cwpj6hlgwHwYDVR0jBBgwFoAUF6DNweRBtjpbO8tFnb0c +wpj6hlgwRgYDVR0gBD8wPTA7BglghXQBWQEDAQEwLjAsBggrBgEFBQcCARYgaHR0 +cDovL3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIB +AHPGgeAn0i0P4JUw4ppBf1AsX19iYamGamkYDHRJ1l2E6kFSGG9YrVBWIGrGvShp +WJHckRE1qTodvBqlYJ7YH39FkWnZfrt4csEGDyrOj4VwYaygzQu4OSlWhDJOhrs9 +xCrZ1x9y7v5RoSJBsXECYxqCsGKrXlcSH9/L3XWgwF15kIwb4FDm3jH+mHtwX6WQ +2K34ArZv02DdQEsixT2tOnqfGhpHkXkzuoLcMmkDlm4fS/Bx/uNncqCxv1yL5PqZ +IseEuRuNI5c/7SXgz2W79WEE790eslpBIlqhn10s6FvJbakMDHiqYMZWjwFaDGi8 +aRl5xB9+lwW/xekkUV7U1UtT7dkjWjYDZaPBA61BMPNGG4WQr2W11bHkFlt4dR2X +em1ZqSqPe97Dh4kQmUlzeMg9vVE1dCrV8X5pGyq7O70luJpaPXJhkGaH7gzWTdQR +dAtq/gsD/KNVV4n+SsuuWxcFyPKNIzFTONItaj+CuY0IavdeQXRuwxF+B6wpYJE/ +OMpXEA29MC/HpeZBoNquBYeaoKRlbEwJDIm6uNO5wJOKMPqN5ZprFQFOZ6raYlY+ +hAhm0sQ2fac+EPyI4NSA5QC9qvNOBqN6avlicuMJT+ubDgEj8Z+7fNzcbBGXJbLy +tGMU0gYqZ4yD9c7qB9iaah7s5Aq7KkzrCWA5zspi2C5u +-----END CERTIFICATE----- + +# Issuer: CN=SecureTrust CA O=SecureTrust Corporation +# Subject: CN=SecureTrust CA O=SecureTrust Corporation +# Label: "SecureTrust CA" +# Serial: 17199774589125277788362757014266862032 +# MD5 Fingerprint: dc:32:c3:a7:6d:25:57:c7:68:09:9d:ea:2d:a9:a2:d1 +# SHA1 Fingerprint: 87:82:c6:c3:04:35:3b:cf:d2:96:92:d2:59:3e:7d:44:d9:34:ff:11 +# SHA256 Fingerprint: f1:c1:b5:0a:e5:a2:0d:d8:03:0e:c9:f6:bc:24:82:3d:d3:67:b5:25:57:59:b4:e7:1b:61:fc:e9:f7:37:5d:73 +-----BEGIN CERTIFICATE----- +MIIDuDCCAqCgAwIBAgIQDPCOXAgWpa1Cf/DrJxhZ0DANBgkqhkiG9w0BAQUFADBI +MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3QgQ29ycG9yYXRpb24x +FzAVBgNVBAMTDlNlY3VyZVRydXN0IENBMB4XDTA2MTEwNzE5MzExOFoXDTI5MTIz +MTE5NDA1NVowSDELMAkGA1UEBhMCVVMxIDAeBgNVBAoTF1NlY3VyZVRydXN0IENv +cnBvcmF0aW9uMRcwFQYDVQQDEw5TZWN1cmVUcnVzdCBDQTCCASIwDQYJKoZIhvcN +AQEBBQADggEPADCCAQoCggEBAKukgeWVzfX2FI7CT8rU4niVWJxB4Q2ZQCQXOZEz +Zum+4YOvYlyJ0fwkW2Gz4BERQRwdbvC4u/jep4G6pkjGnx29vo6pQT64lO0pGtSO +0gMdA+9tDWccV9cGrcrI9f4Or2YlSASWC12juhbDCE/RRvgUXPLIXgGZbf2IzIao +wW8xQmxSPmjL8xk037uHGFaAJsTQ3MBv396gwpEWoGQRS0S8Hvbn+mPeZqx2pHGj +7DaUaHp3pLHnDi+BeuK1cobvomuL8A/b01k/unK8RCSc43Oz969XL0Imnal0ugBS +8kvNU3xHCzaFDmapCJcWNFfBZveA4+1wVMeT4C4oFVmHursCAwEAAaOBnTCBmjAT +BgkrBgEEAYI3FAIEBh4EAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB +/zAdBgNVHQ4EFgQUQjK2FvoE/f5dS3rD/fdMQB1aQ68wNAYDVR0fBC0wKzApoCeg +JYYjaHR0cDovL2NybC5zZWN1cmV0cnVzdC5jb20vU1RDQS5jcmwwEAYJKwYBBAGC +NxUBBAMCAQAwDQYJKoZIhvcNAQEFBQADggEBADDtT0rhWDpSclu1pqNlGKa7UTt3 
+6Z3q059c4EVlew3KW+JwULKUBRSuSceNQQcSc5R+DCMh/bwQf2AQWnL1mA6s7Ll/ +3XpvXdMc9P+IBWlCqQVxyLesJugutIxq/3HcuLHfmbx8IVQr5Fiiu1cprp6poxkm +D5kuCLDv/WnPmRoJjeOnnyvJNjR7JLN4TJUXpAYmHrZkUjZfYGfZnMUFdAvnZyPS +CPyI6a6Lf+Ew9Dd+/cYy2i2eRDAwbO4H3tI0/NL/QPZL9GZGBlSm8jIKYyYwa5vR +3ItHuuG51WLQoqD0ZwV4KWMabwTW+MZMo5qxN7SN5ShLHZ4swrhovO0C7jE= +-----END CERTIFICATE----- + +# Issuer: CN=Secure Global CA O=SecureTrust Corporation +# Subject: CN=Secure Global CA O=SecureTrust Corporation +# Label: "Secure Global CA" +# Serial: 9751836167731051554232119481456978597 +# MD5 Fingerprint: cf:f4:27:0d:d4:ed:dc:65:16:49:6d:3d:da:bf:6e:de +# SHA1 Fingerprint: 3a:44:73:5a:e5:81:90:1f:24:86:61:46:1e:3b:9c:c4:5f:f5:3a:1b +# SHA256 Fingerprint: 42:00:f5:04:3a:c8:59:0e:bb:52:7d:20:9e:d1:50:30:29:fb:cb:d4:1c:a1:b5:06:ec:27:f1:5a:de:7d:ac:69 +-----BEGIN CERTIFICATE----- +MIIDvDCCAqSgAwIBAgIQB1YipOjUiolN9BPI8PjqpTANBgkqhkiG9w0BAQUFADBK +MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3QgQ29ycG9yYXRpb24x +GTAXBgNVBAMTEFNlY3VyZSBHbG9iYWwgQ0EwHhcNMDYxMTA3MTk0MjI4WhcNMjkx +MjMxMTk1MjA2WjBKMQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3Qg +Q29ycG9yYXRpb24xGTAXBgNVBAMTEFNlY3VyZSBHbG9iYWwgQ0EwggEiMA0GCSqG +SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvNS7YrGxVaQZx5RNoJLNP2MwhR/jxYDiJ +iQPpvepeRlMJ3Fz1Wuj3RSoC6zFh1ykzTM7HfAo3fg+6MpjhHZevj8fcyTiW89sa +/FHtaMbQbqR8JNGuQsiWUGMu4P51/pinX0kuleM5M2SOHqRfkNJnPLLZ/kG5VacJ +jnIFHovdRIWCQtBJwB1g8NEXLJXr9qXBkqPFwqcIYA1gBBCWeZ4WNOaptvolRTnI +HmX5k/Wq8VLcmZg9pYYaDDUz+kulBAYVHDGA76oYa8J719rO+TMg1fW9ajMtgQT7 +sFzUnKPiXB3jqUJ1XnvUd+85VLrJChgbEplJL4hL/VBi0XPnj3pDAgMBAAGjgZ0w +gZowEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQF +MAMBAf8wHQYDVR0OBBYEFK9EBMJBfkiD2045AuzshHrmzsmkMDQGA1UdHwQtMCsw +KaAnoCWGI2h0dHA6Ly9jcmwuc2VjdXJldHJ1c3QuY29tL1NHQ0EuY3JsMBAGCSsG +AQQBgjcVAQQDAgEAMA0GCSqGSIb3DQEBBQUAA4IBAQBjGghAfaReUw132HquHw0L +URYD7xh8yOOvaliTFGCRsoTciE6+OYo68+aCiV0BN7OrJKQVDpI1WkpEXk5X+nXO +H0jOZvQ8QCaSmGwb7iRGDBezUqXbpZGRzzfTb+cnCDpOGR86p1hcF895P4vkp9Mm +I50mD1hp/Ed+stCNi5O/KU9DaXR2Z0vPB4zmAve14bRDtUstFJ/53CYNv6ZHdAbY +iNE6KTCEztI5gGIbqMdXSbxqVVFnFUq+NQfk1XWYN3kwFNspnWzFacxHVaIw98xc +f8LDmBxrThaA63p4ZUWiABqvDA1VZDRIuJK58bRQKfJPIx/abKwfROHdI3hRW8cW +-----END CERTIFICATE----- + +# Issuer: CN=COMODO Certification Authority O=COMODO CA Limited +# Subject: CN=COMODO Certification Authority O=COMODO CA Limited +# Label: "COMODO Certification Authority" +# Serial: 104350513648249232941998508985834464573 +# MD5 Fingerprint: 5c:48:dc:f7:42:72:ec:56:94:6d:1c:cc:71:35:80:75 +# SHA1 Fingerprint: 66:31:bf:9e:f7:4f:9e:b6:c9:d5:a6:0c:ba:6a:be:d1:f7:bd:ef:7b +# SHA256 Fingerprint: 0c:2c:d6:3d:f7:80:6f:a3:99:ed:e8:09:11:6b:57:5b:f8:79:89:f0:65:18:f9:80:8c:86:05:03:17:8b:af:66 +-----BEGIN CERTIFICATE----- +MIIEHTCCAwWgAwIBAgIQToEtioJl4AsC7j41AkblPTANBgkqhkiG9w0BAQUFADCB +gTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G +A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxJzAlBgNV +BAMTHkNPTU9ETyBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEyMDEwMDAw +MDBaFw0yOTEyMzEyMzU5NTlaMIGBMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3Jl +YXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRowGAYDVQQKExFDT01P +RE8gQ0EgTGltaXRlZDEnMCUGA1UEAxMeQ09NT0RPIENlcnRpZmljYXRpb24gQXV0 +aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0ECLi3LjkRv3 +UcEbVASY06m/weaKXTuH+7uIzg3jLz8GlvCiKVCZrts7oVewdFFxze1CkU1B/qnI +2GqGd0S7WWaXUF601CxwRM/aN5VCaTwwxHGzUvAhTaHYujl8HJ6jJJ3ygxaYqhZ8 +Q5sVW7euNJH+1GImGEaaP+vB+fGQV+useg2L23IwambV4EajcNxo2f8ESIl33rXp ++2dtQem8Ob0y2WIC8bGoPW43nOIv4tOiJovGuFVDiOEjPqXSJDlqR6sA1KGzqSX+ 
+DT+nHbrTUcELpNqsOO9VUCQFZUaTNE8tja3G1CEZ0o7KBWFxB3NH5YoZEr0ETc5O +nKVIrLsm9wIDAQABo4GOMIGLMB0GA1UdDgQWBBQLWOWLxkwVN6RAqTCpIb5HNlpW +/zAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zBJBgNVHR8EQjBAMD6g +PKA6hjhodHRwOi8vY3JsLmNvbW9kb2NhLmNvbS9DT01PRE9DZXJ0aWZpY2F0aW9u +QXV0aG9yaXR5LmNybDANBgkqhkiG9w0BAQUFAAOCAQEAPpiem/Yb6dc5t3iuHXIY +SdOH5EOC6z/JqvWote9VfCFSZfnVDeFs9D6Mk3ORLgLETgdxb8CPOGEIqB6BCsAv +IC9Bi5HcSEW88cbeunZrM8gALTFGTO3nnc+IlP8zwFboJIYmuNg4ON8qa90SzMc/ +RxdMosIGlgnW2/4/PEZB31jiVg88O8EckzXZOFKs7sjsLjBOlDW0JB9LeGna8gI4 +zJVSk/BwJVmcIGfE7vmLV2H0knZ9P4SNVbfo5azV8fUZVqZa+5Acr5Pr5RzUZ5dd +BA6+C4OmF4O5MBKgxTMVBbkN+8cFduPYSo38NBejxiEovjBFMR7HeL5YYTisO+IB +ZQ== +-----END CERTIFICATE----- + +# Issuer: CN=Network Solutions Certificate Authority O=Network Solutions L.L.C. +# Subject: CN=Network Solutions Certificate Authority O=Network Solutions L.L.C. +# Label: "Network Solutions Certificate Authority" +# Serial: 116697915152937497490437556386812487904 +# MD5 Fingerprint: d3:f3:a6:16:c0:fa:6b:1d:59:b1:2d:96:4d:0e:11:2e +# SHA1 Fingerprint: 74:f8:a3:c3:ef:e7:b3:90:06:4b:83:90:3c:21:64:60:20:e5:df:ce +# SHA256 Fingerprint: 15:f0:ba:00:a3:ac:7a:f3:ac:88:4c:07:2b:10:11:a0:77:bd:77:c0:97:f4:01:64:b2:f8:59:8a:bd:83:86:0c +-----BEGIN CERTIFICATE----- +MIID5jCCAs6gAwIBAgIQV8szb8JcFuZHFhfjkDFo4DANBgkqhkiG9w0BAQUFADBi +MQswCQYDVQQGEwJVUzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMu +MTAwLgYDVQQDEydOZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3Jp +dHkwHhcNMDYxMjAxMDAwMDAwWhcNMjkxMjMxMjM1OTU5WjBiMQswCQYDVQQGEwJV +UzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMuMTAwLgYDVQQDEydO +ZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwggEiMA0GCSqG +SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDkvH6SMG3G2I4rC7xGzuAnlt7e+foS0zwz +c7MEL7xxjOWftiJgPl9dzgn/ggwbmlFQGiaJ3dVhXRncEg8tCqJDXRfQNJIg6nPP +OCwGJgl6cvf6UDL4wpPTaaIjzkGxzOTVHzbRijr4jGPiFFlp7Q3Tf2vouAPlT2rl +mGNpSAW+Lv8ztumXWWn4Zxmuk2GWRBXTcrA/vGp97Eh/jcOrqnErU2lBUzS1sLnF +BgrEsEX1QV1uiUV7PTsmjHTC5dLRfbIR1PtYMiKagMnc/Qzpf14Dl847ABSHJ3A4 +qY5usyd2mFHgBeMhqxrVhSI8KbWaFsWAqPS7azCPL0YCorEMIuDTAgMBAAGjgZcw +gZQwHQYDVR0OBBYEFCEwyfsA106Y2oeqKtCnLrFAMadMMA4GA1UdDwEB/wQEAwIB +BjAPBgNVHRMBAf8EBTADAQH/MFIGA1UdHwRLMEkwR6BFoEOGQWh0dHA6Ly9jcmwu +bmV0c29sc3NsLmNvbS9OZXR3b3JrU29sdXRpb25zQ2VydGlmaWNhdGVBdXRob3Jp +dHkuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQC7rkvnt1frf6ott3NHhWrB5KUd5Oc8 +6fRZZXe1eltajSU24HqXLjjAV2CDmAaDn7l2em5Q4LqILPxFzBiwmZVRDuwduIj/ +h1AcgsLj4DKAv6ALR8jDMe+ZZzKATxcheQxpXN5eNK4CtSbqUN9/GGUsyfJj4akH +/nxxH2szJGoeBfcFaMBqEssuXmHLrijTfsK0ZpEmXzwuJF/LWA/rKOyvEZbz3Htv +wKeI8lN3s2Berq4o2jUsbzRF0ybh3uxbTydrFny9RAQYgrOJeRcQcT16ohZO9QHN +pGxlaKFJdlxDydi8NmdspZS11My5vWo1ViHe2MPr+8ukYEywVaCge1ey +-----END CERTIFICATE----- + +# Issuer: CN=COMODO ECC Certification Authority O=COMODO CA Limited +# Subject: CN=COMODO ECC Certification Authority O=COMODO CA Limited +# Label: "COMODO ECC Certification Authority" +# Serial: 41578283867086692638256921589707938090 +# MD5 Fingerprint: 7c:62:ff:74:9d:31:53:5e:68:4a:d5:78:aa:1e:bf:23 +# SHA1 Fingerprint: 9f:74:4e:9f:2b:4d:ba:ec:0f:31:2c:50:b6:56:3b:8e:2d:93:c3:11 +# SHA256 Fingerprint: 17:93:92:7a:06:14:54:97:89:ad:ce:2f:8f:34:f7:f0:b6:6d:0f:3a:e3:a3:b8:4d:21:ec:15:db:ba:4f:ad:c7 +-----BEGIN CERTIFICATE----- +MIICiTCCAg+gAwIBAgIQH0evqmIAcFBUTAGem2OZKjAKBggqhkjOPQQDAzCBhTEL +MAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE +BxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMT +IkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwMzA2MDAw +MDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdy 
+ZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09N
+T0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlv
+biBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQDR3svdcmCFYX7deSR
+FtSrYpn1PlILBs5BAH+X4QokPB0BBO490o0JlwzgdeT6+3eKKvUDYEs2ixYjFq0J
+cfRK9ChQtP6IHG4/bC8vCVlbpVsLM5niwz2J+Wos77LTBumjQjBAMB0GA1UdDgQW
+BBR1cacZSBm8nZ3qQUfflMRId5nTeTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/
+BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjEA7wNbeqy3eApyt4jf/7VGFAkK+qDm
+fQjGGoe9GKhzvSbKYAydzpmfz1wPMOG+FDHqAjAU9JM8SaczepBGR7NjfRObTrdv
+GDeAU/7dIOA1mjbRxwG55tzd8/8dLDoWV9mSOdY=
+-----END CERTIFICATE-----
+
+# Issuer: CN=Certigna O=Dhimyotis
+# Subject: CN=Certigna O=Dhimyotis
+# Label: "Certigna"
+# Serial: 18364802974209362175
+# MD5 Fingerprint: ab:57:a6:5b:7d:42:82:19:b5:d8:58:26:28:5e:fd:ff
+# SHA1 Fingerprint: b1:2e:13:63:45:86:a4:6f:1a:b2:60:68:37:58:2d:c4:ac:fd:94:97
+# SHA256 Fingerprint: e3:b6:a2:db:2e:d7:ce:48:84:2f:7a:c5:32:41:c7:b7:1d:54:14:4b:fb:40:c1:1f:3f:1d:0b:42:f5:ee:a1:2d
+-----BEGIN CERTIFICATE-----
+MIIDqDCCApCgAwIBAgIJAP7c4wEPyUj/MA0GCSqGSIb3DQEBBQUAMDQxCzAJBgNV
+BAYTAkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hMB4X
+DTA3MDYyOTE1MTMwNVoXDTI3MDYyOTE1MTMwNVowNDELMAkGA1UEBhMCRlIxEjAQ
+BgNVBAoMCURoaW15b3RpczERMA8GA1UEAwwIQ2VydGlnbmEwggEiMA0GCSqGSIb3
+DQEBAQUAA4IBDwAwggEKAoIBAQDIaPHJ1tazNHUmgh7stL7qXOEm7RFHYeGifBZ4
+QCHkYJ5ayGPhxLGWkv8YbWkj4Sti993iNi+RB7lIzw7sebYs5zRLcAglozyHGxny
+gQcPOJAZ0xH+hrTy0V4eHpbNgGzOOzGTtvKg0KmVEn2lmsxryIRWijOp5yIVUxbw
+zBfsV1/pogqYCd7jX5xv3EjjhQsVWqa6n6xI4wmy9/Qy3l40vhx4XUJbzg4ij02Q
+130yGLMLLGq/jj8UEYkgDncUtT2UCIf3JR7VsmAA7G8qKCVuKj4YYxclPz5EIBb2
+JsglrgVKtOdjLPOMFlN+XPsRGgjBRmKfIrjxwo1p3Po6WAbfAgMBAAGjgbwwgbkw
+DwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUGu3+QTmQtCRZvgHyUtVF9lo53BEw
+ZAYDVR0jBF0wW4AUGu3+QTmQtCRZvgHyUtVF9lo53BGhOKQ2MDQxCzAJBgNVBAYT
+AkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hggkA/tzj
+AQ/JSP8wDgYDVR0PAQH/BAQDAgEGMBEGCWCGSAGG+EIBAQQEAwIABzANBgkqhkiG
+9w0BAQUFAAOCAQEAhQMeknH2Qq/ho2Ge6/PAD/Kl1NqV5ta+aDY9fm4fTIrv0Q8h
+bV6lUmPOEvjvKtpv6zf+EwLHyzs+ImvaYS5/1HI93TDhHkxAGYwP15zRgzB7mFnc
+fca5DClMoTOi62c6ZYTTluLtdkVwj7Ur3vkj1kluPBS1xp81HlDQwY9qcEQCYsuu
+HWhBp6pX6FOqB9IG9tUUBguRA3UsbHK1YZWaDYu5Def131TN3ubY1gkIl2PlwS6w
+t0QmwCbAr1UwnjvVNioZBPRcHv/PLLf/0P2HQBHVESO7SMAhqaQoLf0V+LBOK/Qw
+WyH8EZE0vkHve52Xdf+XlcCWWC/qu0bXu+TZLg==
+-----END CERTIFICATE-----
+
+# Issuer: O=Chunghwa Telecom Co., Ltd. OU=ePKI Root Certification Authority
+# Subject: O=Chunghwa Telecom Co., Ltd. OU=ePKI Root Certification Authority
+# Label: "ePKI Root Certification Authority"
+# Serial: 28956088682735189655030529057352760477
+# MD5 Fingerprint: 1b:2e:00:ca:26:06:90:3d:ad:fe:6f:15:68:d3:6b:b3
+# SHA1 Fingerprint: 67:65:0d:f1:7e:8e:7e:5b:82:40:a4:f4:56:4b:cf:e2:3d:69:c6:f0
+# SHA256 Fingerprint: c0:a6:f4:dc:63:a2:4b:fd:cf:54:ef:2a:6a:08:2a:0a:72:de:35:80:3e:2f:f5:ff:52:7a:e5:d8:72:06:df:d5
+-----BEGIN CERTIFICATE-----
+MIIFsDCCA5igAwIBAgIQFci9ZUdcr7iXAF7kBtK8nTANBgkqhkiG9w0BAQUFADBe
+MQswCQYDVQQGEwJUVzEjMCEGA1UECgwaQ2h1bmdod2EgVGVsZWNvbSBDby4sIEx0
+ZC4xKjAoBgNVBAsMIWVQS0kgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAe
+Fw0wNDEyMjAwMjMxMjdaFw0zNDEyMjAwMjMxMjdaMF4xCzAJBgNVBAYTAlRXMSMw
+IQYDVQQKDBpDaHVuZ2h3YSBUZWxlY29tIENvLiwgTHRkLjEqMCgGA1UECwwhZVBL
+SSBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEA4SUP7o3biDN1Z82tH306Tm2d0y8U82N0ywEhajfqhFAH
+SyZbCUNsIZ5qyNUD9WBpj8zwIuQf5/dqIjG3LBXy4P4AakP/h2XGtRrBp0xtInAh
+ijHyl3SJCRImHJ7K2RKilTza6We/CKBk49ZCt0Xvl/T29de1ShUCWH2YWEtgvM3X
+DZoTM1PRYfl61dd4s5oz9wCGzh1NlDivqOx4UXCKXBCDUSH3ET00hl7lSM2XgYI1
+TBnsZfZrxQWh7kcT1rMhJ5QQCtkkO7q+RBNGMD+XPNjX12ruOzjjK9SXDrkb5wdJ
+fzcq+Xd4z1TtW0ado4AOkUPB1ltfFLqfpo0kR0BZv3I4sjZsN/+Z0V0OWQqraffA
+sgRFelQArr5T9rXn4fg8ozHSqf4hUmTFpmfwdQcGlBSBVcYn5AGPF8Fqcde+S/uU
+WH1+ETOxQvdibBjWzwloPn9s9h6PYq2lY9sJpx8iQkEeb5mKPtf5P0B6ebClAZLS
+nT0IFaUQAS2zMnaolQ2zepr7BxB4EW/hj8e6DyUadCrlHJhBmd8hh+iVBmoKs2pH
+dmX2Os+PYhcZewoozRrSgx4hxyy/vv9haLdnG7t4TY3OZ+XkwY63I2binZB1NJip
+NiuKmpS5nezMirH4JYlcWrYvjB9teSSnUmjDhDXiZo1jDiVN1Rmy5nk3pyKdVDEC
+AwEAAaNqMGgwHQYDVR0OBBYEFB4M97Zn8uGSJglFwFU5Lnc/QkqiMAwGA1UdEwQF
+MAMBAf8wOQYEZyoHAAQxMC8wLQIBADAJBgUrDgMCGgUAMAcGBWcqAwAABBRFsMLH
+ClZ87lt4DJX5GFPBphzYEDANBgkqhkiG9w0BAQUFAAOCAgEACbODU1kBPpVJufGB
+uvl2ICO1J2B01GqZNF5sAFPZn/KmsSQHRGoqxqWOeBLoR9lYGxMqXnmbnwoqZ6Yl
+PwZpVnPDimZI+ymBV3QGypzqKOg4ZyYr8dW1P2WT+DZdjo2NQCCHGervJ8A9tDkP
+JXtoUHRVnAxZfVo9QZQlUgjgRywVMRnVvwdVxrsStZf0X4OFunHB2WyBEXYKCrC/
+gpf36j36+uwtqSiUO1bd0lEursC9CBWMd1I0ltabrNMdjmEPNXubrjlpC2JgQCA2
+j6/7Nu4tCEoduL+bXPjqpRugc6bY+G7gMwRfaKonh+3ZwZCc7b3jajWvY9+rGNm6
+5ulK6lCKD2GTHuItGeIwlDWSXQ62B68ZgI9HkFFLLk3dheLSClIKF5r8GrBQAuUB
+o2M3IUxExJtRmREOc5wGj1QupyheRDmHVi03vYVElOEMSyycw5KFNGHLD7ibSkNS
+/jQ6fbjpKdx2qcgw+BRxgMYeNkh0IkFch4LoGHGLQYlE535YW6i4jRPpp2zDR+2z
+Gp1iro2C6pSe3VkQw63d4k3jMdXH7OjysP6SHhYKGvzZ8/gntsm+HbRsZJB/9OTE
+W9c3rkIO3aQab3yIVMUWbuF6aC74Or8NpDyJO3inTmODBCEIZ43ygknQW/2xzQ+D
+hNQ+IIX3Sj0rnP0qCglN6oH4EZw=
+-----END CERTIFICATE-----
+
+# Issuer: O=certSIGN OU=certSIGN ROOT CA
+# Subject: O=certSIGN OU=certSIGN ROOT CA
+# Label: "certSIGN ROOT CA"
+# Serial: 35210227249154
+# MD5 Fingerprint: 18:98:c0:d6:e9:3a:fc:f9:b0:f5:0c:f7:4b:01:44:17
+# SHA1 Fingerprint: fa:b7:ee:36:97:26:62:fb:2d:b0:2a:f6:bf:03:fd:e8:7c:4b:2f:9b
+# SHA256 Fingerprint: ea:a9:62:c4:fa:4a:6b:af:eb:e4:15:19:6d:35:1c:cd:88:8d:4f:53:f3:fa:8a:e6:d7:c4:66:a9:4e:60:42:bb
+-----BEGIN CERTIFICATE-----
+MIIDODCCAiCgAwIBAgIGIAYFFnACMA0GCSqGSIb3DQEBBQUAMDsxCzAJBgNVBAYT
+AlJPMREwDwYDVQQKEwhjZXJ0U0lHTjEZMBcGA1UECxMQY2VydFNJR04gUk9PVCBD
+QTAeFw0wNjA3MDQxNzIwMDRaFw0zMTA3MDQxNzIwMDRaMDsxCzAJBgNVBAYTAlJP
+MREwDwYDVQQKEwhjZXJ0U0lHTjEZMBcGA1UECxMQY2VydFNJR04gUk9PVCBDQTCC
+ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALczuX7IJUqOtdu0KBuqV5Do
+0SLTZLrTk+jUrIZhQGpgV2hUhE28alQCBf/fm5oqrl0Hj0rDKH/v+yv6efHHrfAQ
+UySQi2bJqIirr1qjAOm+ukbuW3N7LBeCgV5iLKECZbO9xSsAfsT8AzNXDe3i+s5d
+RdY4zTW2ssHQnIFKquSyAVwdj1+ZxLGt24gh65AIgoDzMKND5pCCrlUoSe1b16kQ
+OA7+j0xbm0bqQfWwCHTD0IgztnzXdN/chNFDDnU5oSVAKOp4yw4sLjmdjItuFhwv
+JoIQ4uNllAoEwF73XVv4EOLQunpL+943AAAaWyjj0pxzPjKHmKHJUS/X3qwzs08C +AwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAcYwHQYDVR0O +BBYEFOCMm9slSbPxfIbWskKHC9BroNnkMA0GCSqGSIb3DQEBBQUAA4IBAQA+0hyJ +LjX8+HXd5n9liPRyTMks1zJO890ZeUe9jjtbkw9QSSQTaxQGcu8J06Gh40CEyecY +MnQ8SG4Pn0vU9x7Tk4ZkVJdjclDVVc/6IJMCopvDI5NOFlV2oHB5bc0hH88vLbwZ +44gx+FkagQnIl6Z0x2DEW8xXjrJ1/RsCCdtZb3KTafcxQdaIOL+Hsr0Wefmq5L6I +Jd1hJyMctTEHBDa0GpC9oHRxUIltvBTjD4au8as+x6AJzKNI0eDbZOeStc+vckNw +i/nDhDwTqn6Sm1dTk/pwwpEOMfmbZ13pljheX7NzTogVZ96edhBiIL5VaZVDADlN +9u6wWk5JRFRYX0KD +-----END CERTIFICATE----- + +# Issuer: CN=NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny O=NetLock Kft. OU=Tan\xfas\xedtv\xe1nykiad\xf3k (Certification Services) +# Subject: CN=NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny O=NetLock Kft. OU=Tan\xfas\xedtv\xe1nykiad\xf3k (Certification Services) +# Label: "NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny" +# Serial: 80544274841616 +# MD5 Fingerprint: c5:a1:b7:ff:73:dd:d6:d7:34:32:18:df:fc:3c:ad:88 +# SHA1 Fingerprint: 06:08:3f:59:3f:15:a1:04:a0:69:a4:6b:a9:03:d0:06:b7:97:09:91 +# SHA256 Fingerprint: 6c:61:da:c3:a2:de:f0:31:50:6b:e0:36:d2:a6:fe:40:19:94:fb:d1:3d:f9:c8:d4:66:59:92:74:c4:46:ec:98 +-----BEGIN CERTIFICATE----- +MIIEFTCCAv2gAwIBAgIGSUEs5AAQMA0GCSqGSIb3DQEBCwUAMIGnMQswCQYDVQQG +EwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFTATBgNVBAoMDE5ldExvY2sgS2Z0LjE3 +MDUGA1UECwwuVGFuw7pzw610dsOhbnlraWFkw7NrIChDZXJ0aWZpY2F0aW9uIFNl +cnZpY2VzKTE1MDMGA1UEAwwsTmV0TG9jayBBcmFueSAoQ2xhc3MgR29sZCkgRsWR +dGFuw7pzw610dsOhbnkwHhcNMDgxMjExMTUwODIxWhcNMjgxMjA2MTUwODIxWjCB +pzELMAkGA1UEBhMCSFUxETAPBgNVBAcMCEJ1ZGFwZXN0MRUwEwYDVQQKDAxOZXRM +b2NrIEtmdC4xNzA1BgNVBAsMLlRhbsO6c8OtdHbDoW55a2lhZMOzayAoQ2VydGlm +aWNhdGlvbiBTZXJ2aWNlcykxNTAzBgNVBAMMLE5ldExvY2sgQXJhbnkgKENsYXNz +IEdvbGQpIEbFkXRhbsO6c8OtdHbDoW55MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A +MIIBCgKCAQEAxCRec75LbRTDofTjl5Bu0jBFHjzuZ9lk4BqKf8owyoPjIMHj9DrT +lF8afFttvzBPhCf2nx9JvMaZCpDyD/V/Q4Q3Y1GLeqVw/HpYzY6b7cNGbIRwXdrz +AZAj/E4wqX7hJ2Pn7WQ8oLjJM2P+FpD/sLj916jAwJRDC7bVWaaeVtAkH3B5r9s5 +VA1lddkVQZQBr17s9o3x/61k/iCa11zr/qYfCGSji3ZVrR47KGAuhyXoqq8fxmRG +ILdwfzzeSNuWU7c5d+Qa4scWhHaXWy+7GRWF+GmF9ZmnqfI0p6m2pgP8b4Y9VHx2 +BJtr+UBdADTHLpl1neWIA6pN+APSQnbAGwIDAKiLo0UwQzASBgNVHRMBAf8ECDAG +AQH/AgEEMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUzPpnk/C2uNClwB7zU/2M +U9+D15YwDQYJKoZIhvcNAQELBQADggEBAKt/7hwWqZw8UQCgwBEIBaeZ5m8BiFRh +bvG5GK1Krf6BQCOUL/t1fC8oS2IkgYIL9WHxHG64YTjrgfpioTtaYtOUZcTh5m2C ++C8lcLIhJsFyUR+MLMOEkMNaj7rP9KdlpeuY0fsFskZ1FSNqb4VjMIDw1Z4fKRzC +bLBQWV2QWzuoDTDPv31/zvGdg73JRm4gpvlhUbohL3u+pRVjodSVh/GeufOJ8z2F +uLjbvrW5KfnaNwUASZQDhETnv0Mxz3WLJdH0pmT1kvarBes96aULNmLazAZfNou2 +XjG4Kvte9nHfRCaexOYNkbQudZWAUWpLMKawYqGT8ZvYzsRjdT9ZR7E= +-----END CERTIFICATE----- + +# Issuer: CN=Hongkong Post Root CA 1 O=Hongkong Post +# Subject: CN=Hongkong Post Root CA 1 O=Hongkong Post +# Label: "Hongkong Post Root CA 1" +# Serial: 1000 +# MD5 Fingerprint: a8:0d:6f:39:78:b9:43:6d:77:42:6d:98:5a:cc:23:ca +# SHA1 Fingerprint: d6:da:a8:20:8d:09:d2:15:4d:24:b5:2f:cb:34:6e:b2:58:b2:8a:58 +# SHA256 Fingerprint: f9:e6:7d:33:6c:51:00:2a:c0:54:c6:32:02:2d:66:dd:a2:e7:e3:ff:f1:0a:d0:61:ed:31:d8:bb:b4:10:cf:b2 +-----BEGIN CERTIFICATE----- +MIIDMDCCAhigAwIBAgICA+gwDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UEBhMCSEsx +FjAUBgNVBAoTDUhvbmdrb25nIFBvc3QxIDAeBgNVBAMTF0hvbmdrb25nIFBvc3Qg +Um9vdCBDQSAxMB4XDTAzMDUxNTA1MTMxNFoXDTIzMDUxNTA0NTIyOVowRzELMAkG +A1UEBhMCSEsxFjAUBgNVBAoTDUhvbmdrb25nIFBvc3QxIDAeBgNVBAMTF0hvbmdr +b25nIFBvc3QgUm9vdCBDQSAxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC 
+AQEArP84tulmAknjorThkPlAj3n54r15/gK97iSSHSL22oVyaf7XPwnU3ZG1ApzQ +jVrhVcNQhrkpJsLj2aDxaQMoIIBFIi1WpztUlVYiWR8o3x8gPW2iNr4joLFutbEn +PzlTCeqrauh0ssJlXI6/fMN4hM2eFvz1Lk8gKgifd/PFHsSaUmYeSF7jEAaPIpjh +ZY4bXSNmO7ilMlHIhqqhqZ5/dpTCpmy3QfDVyAY45tQM4vM7TG1QjMSDJ8EThFk9 +nnV0ttgCXjqQesBCNnLsak3c78QA3xMYV18meMjWCnl3v/evt3a5pQuEF10Q6m/h +q5URX208o1xNg1vysxmKgIsLhwIDAQABoyYwJDASBgNVHRMBAf8ECDAGAQH/AgED +MA4GA1UdDwEB/wQEAwIBxjANBgkqhkiG9w0BAQUFAAOCAQEADkbVPK7ih9legYsC +mEEIjEy82tvuJxuC52pF7BaLT4Wg87JwvVqWuspube5Gi27nKi6Wsxkz67SfqLI3 +7piol7Yutmcn1KZJ/RyTZXaeQi/cImyaT/JaFTmxcdcrUehtHJjA2Sr0oYJ71clB +oiMBdDhViw+5LmeiIAQ32pwL0xch4I+XeTRvhEgCIDMb5jREn5Fw9IBehEPCKdJs +EhTkYY2sEJCehFC78JZvRZ+K88psT/oROhUVRsPNH4NbLUES7VBnQRM9IauUiqpO +fMGx+6fWtScvl6tu4B3i0RwsH0Ti/L6RoZz71ilTc4afU9hDDl3WY4JxHYB0yvbi +AmvZWg== +-----END CERTIFICATE----- + +# Issuer: CN=SecureSign RootCA11 O=Japan Certification Services, Inc. +# Subject: CN=SecureSign RootCA11 O=Japan Certification Services, Inc. +# Label: "SecureSign RootCA11" +# Serial: 1 +# MD5 Fingerprint: b7:52:74:e2:92:b4:80:93:f2:75:e4:cc:d7:f2:ea:26 +# SHA1 Fingerprint: 3b:c4:9f:48:f8:f3:73:a0:9c:1e:bd:f8:5b:b1:c3:65:c7:d8:11:b3 +# SHA256 Fingerprint: bf:0f:ee:fb:9e:3a:58:1a:d5:f9:e9:db:75:89:98:57:43:d2:61:08:5c:4d:31:4f:6f:5d:72:59:aa:42:16:12 +-----BEGIN CERTIFICATE----- +MIIDbTCCAlWgAwIBAgIBATANBgkqhkiG9w0BAQUFADBYMQswCQYDVQQGEwJKUDEr +MCkGA1UEChMiSmFwYW4gQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcywgSW5jLjEcMBoG +A1UEAxMTU2VjdXJlU2lnbiBSb290Q0ExMTAeFw0wOTA0MDgwNDU2NDdaFw0yOTA0 +MDgwNDU2NDdaMFgxCzAJBgNVBAYTAkpQMSswKQYDVQQKEyJKYXBhbiBDZXJ0aWZp +Y2F0aW9uIFNlcnZpY2VzLCBJbmMuMRwwGgYDVQQDExNTZWN1cmVTaWduIFJvb3RD +QTExMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/XeqpRyQBTvLTJsz +i1oURaTnkBbR31fSIRCkF/3frNYfp+TbfPfs37gD2pRY/V1yfIw/XwFndBWW4wI8 +h9uuywGOwvNmxoVF9ALGOrVisq/6nL+k5tSAMJjzDbaTj6nU2DbysPyKyiyhFTOV +MdrAG/LuYpmGYz+/3ZMqg6h2uRMft85OQoWPIucuGvKVCbIFtUROd6EgvanyTgp9 +UK31BQ1FT0Zx/Sg+U/sE2C3XZR1KG/rPO7AxmjVuyIsG0wCR8pQIZUyxNAYAeoni +8McDWc/V1uinMrPmmECGxc0nEovMe863ETxiYAcjPitAbpSACW22s293bzUIUPsC +h8U+iQIDAQABo0IwQDAdBgNVHQ4EFgQUW/hNT7KlhtQ60vFjmqC+CfZXt94wDgYD +VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEB +AKChOBZmLqdWHyGcBvod7bkixTgm2E5P7KN/ed5GIaGHd48HCJqypMWvDzKYC3xm +KbabfSVSSUOrTC4rbnpwrxYO4wJs+0LmGJ1F2FXI6Dvd5+H0LgscNFxsWEr7jIhQ +X5Ucv+2rIrVls4W6ng+4reV6G4pQOh29Dbx7VFALuUKvVaAYga1lme++5Jy/xIWr +QbJUb9wlze144o4MjQlJ3WN7WmmWAiGovVJZ6X01y8hSyn+B/tlr0/cR7SXf+Of5 +pPpyl4RTDaXQMhhRdlkUbA/r7F+AjHVDg8OFmP9Mni0N5HeDk061lgeLKBObjBmN +QSdJQO7e5iNEOdyhIta6A/I= +-----END CERTIFICATE----- + +# Issuer: CN=Microsec e-Szigno Root CA 2009 O=Microsec Ltd. +# Subject: CN=Microsec e-Szigno Root CA 2009 O=Microsec Ltd. 
+# Label: "Microsec e-Szigno Root CA 2009" +# Serial: 14014712776195784473 +# MD5 Fingerprint: f8:49:f4:03:bc:44:2d:83:be:48:69:7d:29:64:fc:b1 +# SHA1 Fingerprint: 89:df:74:fe:5c:f4:0f:4a:80:f9:e3:37:7d:54:da:91:e1:01:31:8e +# SHA256 Fingerprint: 3c:5f:81:fe:a5:fa:b8:2c:64:bf:a2:ea:ec:af:cd:e8:e0:77:fc:86:20:a7:ca:e5:37:16:3d:f3:6e:db:f3:78 +-----BEGIN CERTIFICATE----- +MIIECjCCAvKgAwIBAgIJAMJ+QwRORz8ZMA0GCSqGSIb3DQEBCwUAMIGCMQswCQYD +VQQGEwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFjAUBgNVBAoMDU1pY3Jvc2VjIEx0 +ZC4xJzAlBgNVBAMMHk1pY3Jvc2VjIGUtU3ppZ25vIFJvb3QgQ0EgMjAwOTEfMB0G +CSqGSIb3DQEJARYQaW5mb0BlLXN6aWduby5odTAeFw0wOTA2MTYxMTMwMThaFw0y +OTEyMzAxMTMwMThaMIGCMQswCQYDVQQGEwJIVTERMA8GA1UEBwwIQnVkYXBlc3Qx +FjAUBgNVBAoMDU1pY3Jvc2VjIEx0ZC4xJzAlBgNVBAMMHk1pY3Jvc2VjIGUtU3pp +Z25vIFJvb3QgQ0EgMjAwOTEfMB0GCSqGSIb3DQEJARYQaW5mb0BlLXN6aWduby5o +dTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOn4j/NjrdqG2KfgQvvP +kd6mJviZpWNwrZuuyjNAfW2WbqEORO7hE52UQlKavXWFdCyoDh2Tthi3jCyoz/tc +cbna7P7ofo/kLx2yqHWH2Leh5TvPmUpG0IMZfcChEhyVbUr02MelTTMuhTlAdX4U +fIASmFDHQWe4oIBhVKZsTh/gnQ4H6cm6M+f+wFUoLAKApxn1ntxVUwOXewdI/5n7 +N4okxFnMUBBjjqqpGrCEGob5X7uxUG6k0QrM1XF+H6cbfPVTbiJfyyvm1HxdrtbC +xkzlBQHZ7Vf8wSN5/PrIJIOV87VqUQHQd9bpEqH5GoP7ghu5sJf0dgYzQ0mg/wu1 ++rUCAwEAAaOBgDB+MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0G +A1UdDgQWBBTLD8bfQkPMPcu1SCOhGnqmKrs0aDAfBgNVHSMEGDAWgBTLD8bfQkPM +Pcu1SCOhGnqmKrs0aDAbBgNVHREEFDASgRBpbmZvQGUtc3ppZ25vLmh1MA0GCSqG +SIb3DQEBCwUAA4IBAQDJ0Q5eLtXMs3w+y/w9/w0olZMEyL/azXm4Q5DwpL7v8u8h +mLzU1F0G9u5C7DBsoKqpyvGvivo/C3NqPuouQH4frlRheesuCDfXI/OMn74dseGk +ddug4lQUsbocKaQY9hK6ohQU4zE1yED/t+AFdlfBHFny+L/k7SViXITwfn4fs775 +tyERzAMBVnCnEJIeGzSBHq2cGsMEPO0CYdYeBvNfOofyK/FFh+U9rNHHV4S9a67c +2Pm2G2JwCz02yULyMtd6YebS2z3PyKnJm9zbWETXbzivf3jTo60adbocwTZ8jx5t +HMN1Rq41Bab2XD0h7lbwyYIiLXpUq3DDfSJlgnCW +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R3 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R3 +# Label: "GlobalSign Root CA - R3" +# Serial: 4835703278459759426209954 +# MD5 Fingerprint: c5:df:b8:49:ca:05:13:55:ee:2d:ba:1a:c3:3e:b0:28 +# SHA1 Fingerprint: d6:9b:56:11:48:f0:1c:77:c5:45:78:c1:09:26:df:5b:85:69:76:ad +# SHA256 Fingerprint: cb:b5:22:d7:b7:f1:27:ad:6a:01:13:86:5b:df:1c:d4:10:2e:7d:07:59:af:63:5a:7c:f4:72:0d:c9:63:c5:3b +-----BEGIN CERTIFICATE----- +MIIDXzCCAkegAwIBAgILBAAAAAABIVhTCKIwDQYJKoZIhvcNAQELBQAwTDEgMB4G +A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjMxEzARBgNVBAoTCkdsb2JhbFNp +Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDkwMzE4MTAwMDAwWhcNMjkwMzE4 +MTAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMzETMBEG +A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBAMwldpB5BngiFvXAg7aEyiie/QV2EcWtiHL8 +RgJDx7KKnQRfJMsuS+FggkbhUqsMgUdwbN1k0ev1LKMPgj0MK66X17YUhhB5uzsT +gHeMCOFJ0mpiLx9e+pZo34knlTifBtc+ycsmWQ1z3rDI6SYOgxXG71uL0gRgykmm +KPZpO/bLyCiR5Z2KYVc3rHQU3HTgOu5yLy6c+9C7v/U9AOEGM+iCK65TpjoWc4zd +QQ4gOsC0p6Hpsk+QLjJg6VfLuQSSaGjlOCZgdbKfd/+RFO+uIEn8rUAVSNECMWEZ +XriX7613t2Saer9fwRPvm2L7DWzgVGkWqQPabumDk3F2xmmFghcCAwEAAaNCMEAw +DgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFI/wS3+o +LkUkrk1Q+mOai97i3Ru8MA0GCSqGSIb3DQEBCwUAA4IBAQBLQNvAUKr+yAzv95ZU +RUm7lgAJQayzE4aGKAczymvmdLm6AC2upArT9fHxD4q/c2dKg8dEe3jgr25sbwMp +jjM5RcOO5LlXbKr8EpbsU8Yt5CRsuZRj+9xTaGdWPoO4zzUhw8lo/s7awlOqzJCK +6fBdRoyV3XpYKBovHd7NADdBj+1EbddTKJd+82cEHhXXipa0095MJ6RMG3NzdvQX +mcIfeg7jLQitChws/zyrVQ4PkX4268NXSb7hLi18YIvDQVETI53O9zJrlAGomecs +Mx86OyXShkDOOyyGeMlhLxS67ttVb9+E7gUJTb0o2HLO02JQZR7rkpeDMdmztcpH +WD9f +-----END CERTIFICATE----- + 
+# Issuer: CN=Autoridad de Certificacion Firmaprofesional CIF A62634068 +# Subject: CN=Autoridad de Certificacion Firmaprofesional CIF A62634068 +# Label: "Autoridad de Certificacion Firmaprofesional CIF A62634068" +# Serial: 6047274297262753887 +# MD5 Fingerprint: 73:3a:74:7a:ec:bb:a3:96:a6:c2:e4:e2:c8:9b:c0:c3 +# SHA1 Fingerprint: ae:c5:fb:3f:c8:e1:bf:c4:e5:4f:03:07:5a:9a:e8:00:b7:f7:b6:fa +# SHA256 Fingerprint: 04:04:80:28:bf:1f:28:64:d4:8f:9a:d4:d8:32:94:36:6a:82:88:56:55:3f:3b:14:30:3f:90:14:7f:5d:40:ef +-----BEGIN CERTIFICATE----- +MIIGFDCCA/ygAwIBAgIIU+w77vuySF8wDQYJKoZIhvcNAQEFBQAwUTELMAkGA1UE +BhMCRVMxQjBABgNVBAMMOUF1dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uIEZpcm1h +cHJvZmVzaW9uYWwgQ0lGIEE2MjYzNDA2ODAeFw0wOTA1MjAwODM4MTVaFw0zMDEy +MzEwODM4MTVaMFExCzAJBgNVBAYTAkVTMUIwQAYDVQQDDDlBdXRvcmlkYWQgZGUg +Q2VydGlmaWNhY2lvbiBGaXJtYXByb2Zlc2lvbmFsIENJRiBBNjI2MzQwNjgwggIi +MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDKlmuO6vj78aI14H9M2uDDUtd9 +thDIAl6zQyrET2qyyhxdKJp4ERppWVevtSBC5IsP5t9bpgOSL/UR5GLXMnE42QQM +cas9UX4PB99jBVzpv5RvwSmCwLTaUbDBPLutN0pcyvFLNg4kq7/DhHf9qFD0sefG +L9ItWY16Ck6WaVICqjaY7Pz6FIMMNx/Jkjd/14Et5cS54D40/mf0PmbR0/RAz15i +NA9wBj4gGFrO93IbJWyTdBSTo3OxDqqHECNZXyAFGUftaI6SEspd/NYrspI8IM/h +X68gvqB2f3bl7BqGYTM+53u0P6APjqK5am+5hyZvQWyIplD9amML9ZMWGxmPsu2b +m8mQ9QEM3xk9Dz44I8kvjwzRAv4bVdZO0I08r0+k8/6vKtMFnXkIoctXMbScyJCy +Z/QYFpM6/EfY0XiWMR+6KwxfXZmtY4laJCB22N/9q06mIqqdXuYnin1oKaPnirja +EbsXLZmdEyRG98Xi2J+Of8ePdG1asuhy9azuJBCtLxTa/y2aRnFHvkLfuwHb9H/T +KI8xWVvTyQKmtFLKbpf7Q8UIJm+K9Lv9nyiqDdVF8xM6HdjAeI9BZzwelGSuewvF +6NkBiDkal4ZkQdU7hwxu+g/GvUgUvzlN1J5Bto+WHWOWk9mVBngxaJ43BjuAiUVh +OSPHG0SjFeUc+JIwuwIDAQABo4HvMIHsMBIGA1UdEwEB/wQIMAYBAf8CAQEwDgYD +VR0PAQH/BAQDAgEGMB0GA1UdDgQWBBRlzeurNR4APn7VdMActHNHDhpkLzCBpgYD +VR0gBIGeMIGbMIGYBgRVHSAAMIGPMC8GCCsGAQUFBwIBFiNodHRwOi8vd3d3LmZp +cm1hcHJvZmVzaW9uYWwuY29tL2NwczBcBggrBgEFBQcCAjBQHk4AUABhAHMAZQBv +ACAAZABlACAAbABhACAAQgBvAG4AYQBuAG8AdgBhACAANAA3ACAAQgBhAHIAYwBl +AGwAbwBuAGEAIAAwADgAMAAxADcwDQYJKoZIhvcNAQEFBQADggIBABd9oPm03cXF +661LJLWhAqvdpYhKsg9VSytXjDvlMd3+xDLx51tkljYyGOylMnfX40S2wBEqgLk9 +am58m9Ot/MPWo+ZkKXzR4Tgegiv/J2Wv+xYVxC5xhOW1//qkR71kMrv2JYSiJ0L1 +ILDCExARzRAVukKQKtJE4ZYm6zFIEv0q2skGz3QeqUvVhyj5eTSSPi5E6PaPT481 +PyWzOdxjKpBrIF/EUhJOlywqrJ2X3kjyo2bbwtKDlaZmp54lD+kLM5FlClrD2VQS +3a/DTg4fJl4N3LON7NWBcN7STyQF82xO9UxJZo3R/9ILJUFI/lGExkKvgATP0H5k +SeTy36LssUzAKh3ntLFlosS88Zj0qnAHY7S42jtM+kAiMFsRpvAFDsYCA0irhpuF +3dvd6qJ2gHN99ZwExEWN57kci57q13XRcrHedUTnQn3iV2t93Jm8PYMo6oCTjcVM +ZcFwgbg4/EMxsvYDNEeyrPsiBsse3RdHHF9mudMaotoRsaS8I8nkvof/uZS2+F0g +StRf571oe2XyFR7SOqkt6dhrJKyXWERHrVkY8SFlcN7ONGCoQPHzPKTDKCOM/icz +Q0CgFzzr6juwcqajuUpLXhZI9LK8yIySxZ2frHI2vDSANGupi5LAuBft7HZT9SQB +jLMi6Et8Vcad+qMUu2WFbm5PEn4KPJ2V +-----END CERTIFICATE----- + +# Issuer: CN=Izenpe.com O=IZENPE S.A. +# Subject: CN=Izenpe.com O=IZENPE S.A. 
+# Label: "Izenpe.com" +# Serial: 917563065490389241595536686991402621 +# MD5 Fingerprint: a6:b0:cd:85:80:da:5c:50:34:a3:39:90:2f:55:67:73 +# SHA1 Fingerprint: 2f:78:3d:25:52:18:a7:4a:65:39:71:b5:2c:a2:9c:45:15:6f:e9:19 +# SHA256 Fingerprint: 25:30:cc:8e:98:32:15:02:ba:d9:6f:9b:1f:ba:1b:09:9e:2d:29:9e:0f:45:48:bb:91:4f:36:3b:c0:d4:53:1f +-----BEGIN CERTIFICATE----- +MIIF8TCCA9mgAwIBAgIQALC3WhZIX7/hy/WL1xnmfTANBgkqhkiG9w0BAQsFADA4 +MQswCQYDVQQGEwJFUzEUMBIGA1UECgwLSVpFTlBFIFMuQS4xEzARBgNVBAMMCkl6 +ZW5wZS5jb20wHhcNMDcxMjEzMTMwODI4WhcNMzcxMjEzMDgyNzI1WjA4MQswCQYD +VQQGEwJFUzEUMBIGA1UECgwLSVpFTlBFIFMuQS4xEzARBgNVBAMMCkl6ZW5wZS5j +b20wggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDJ03rKDx6sp4boFmVq +scIbRTJxldn+EFvMr+eleQGPicPK8lVx93e+d5TzcqQsRNiekpsUOqHnJJAKClaO +xdgmlOHZSOEtPtoKct2jmRXagaKH9HtuJneJWK3W6wyyQXpzbm3benhB6QiIEn6H +LmYRY2xU+zydcsC8Lv/Ct90NduM61/e0aL6i9eOBbsFGb12N4E3GVFWJGjMxCrFX +uaOKmMPsOzTFlUFpfnXCPCDFYbpRR6AgkJOhkEvzTnyFRVSa0QUmQbC1TR0zvsQD +yCV8wXDbO/QJLVQnSKwv4cSsPsjLkkxTOTcj7NMB+eAJRE1NZMDhDVqHIrytG6P+ +JrUV86f8hBnp7KGItERphIPzidF0BqnMC9bC3ieFUCbKF7jJeodWLBoBHmy+E60Q +rLUk9TiRodZL2vG70t5HtfG8gfZZa88ZU+mNFctKy6lvROUbQc/hhqfK0GqfvEyN +BjNaooXlkDWgYlwWTvDjovoDGrQscbNYLN57C9saD+veIR8GdwYDsMnvmfzAuU8L +hij+0rnq49qlw0dpEuDb8PYZi+17cNcC1u2HGCgsBCRMd+RIihrGO5rUD8r6ddIB +QFqNeb+Lz0vPqhbBleStTIo+F5HUsWLlguWABKQDfo2/2n+iD5dPDNMN+9fR5XJ+ +HMh3/1uaD7euBUbl8agW7EekFwIDAQABo4H2MIHzMIGwBgNVHREEgagwgaWBD2lu +Zm9AaXplbnBlLmNvbaSBkTCBjjFHMEUGA1UECgw+SVpFTlBFIFMuQS4gLSBDSUYg +QTAxMzM3MjYwLVJNZXJjLlZpdG9yaWEtR2FzdGVpeiBUMTA1NSBGNjIgUzgxQzBB +BgNVBAkMOkF2ZGEgZGVsIE1lZGl0ZXJyYW5lbyBFdG9yYmlkZWEgMTQgLSAwMTAx +MCBWaXRvcmlhLUdhc3RlaXowDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC +AQYwHQYDVR0OBBYEFB0cZQ6o8iV7tJHP5LGx5r1VdGwFMA0GCSqGSIb3DQEBCwUA +A4ICAQB4pgwWSp9MiDrAyw6lFn2fuUhfGI8NYjb2zRlrrKvV9pF9rnHzP7MOeIWb +laQnIUdCSnxIOvVFfLMMjlF4rJUT3sb9fbgakEyrkgPH7UIBzg/YsfqikuFgba56 +awmqxinuaElnMIAkejEWOVt+8Rwu3WwJrfIxwYJOubv5vr8qhT/AQKM6WfxZSzwo +JNu0FXWuDYi6LnPAvViH5ULy617uHjAimcs30cQhbIHsvm0m5hzkQiCeR7Csg1lw +LDXWrzY0tM07+DKo7+N4ifuNRSzanLh+QBxh5z6ikixL8s36mLYp//Pye6kfLqCT +VyvehQP5aTfLnnhqBbTFMXiJ7HqnheG5ezzevh55hM6fcA5ZwjUukCox2eRFekGk +LhObNA5me0mrZJfQRsN5nXJQY6aYWwa9SG3YOYNw6DXwBdGqvOPbyALqfP2C2sJb +UjWumDqtujWTI6cfSN01RpiyEGjkpTHCClguGYEQyVB1/OpaFs4R1+7vUIgtYf8/ +QnMFlEPVjjxOAToZpR9GTnfQXeWBIiGH/pR9hNiTrdZoQ0iy2+tzJOeRf1SktoA+ +naM8THLCV8Sg1Mw4J87VBp6iSNnpn86CcDaTmjvfliHjWbcM2pE38P1ZWrOZyGls +QyYBNWNgVYkDOnXYukrZVP/u3oDYLdE41V4tC5h9Pmzb/CaIxw== +-----END CERTIFICATE----- + +# Issuer: CN=Go Daddy Root Certificate Authority - G2 O=GoDaddy.com, Inc. +# Subject: CN=Go Daddy Root Certificate Authority - G2 O=GoDaddy.com, Inc. 
+# Label: "Go Daddy Root Certificate Authority - G2" +# Serial: 0 +# MD5 Fingerprint: 80:3a:bc:22:c1:e6:fb:8d:9b:3b:27:4a:32:1b:9a:01 +# SHA1 Fingerprint: 47:be:ab:c9:22:ea:e8:0e:78:78:34:62:a7:9f:45:c2:54:fd:e6:8b +# SHA256 Fingerprint: 45:14:0b:32:47:eb:9c:c8:c5:b4:f0:d7:b5:30:91:f7:32:92:08:9e:6e:5a:63:e2:74:9d:d3:ac:a9:19:8e:da +-----BEGIN CERTIFICATE----- +MIIDxTCCAq2gAwIBAgIBADANBgkqhkiG9w0BAQsFADCBgzELMAkGA1UEBhMCVVMx +EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAYBgNVBAoT +EUdvRGFkZHkuY29tLCBJbmMuMTEwLwYDVQQDEyhHbyBEYWRkeSBSb290IENlcnRp +ZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAwMFoXDTM3MTIzMTIz +NTk1OVowgYMxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMRMwEQYDVQQH +EwpTY290dHNkYWxlMRowGAYDVQQKExFHb0RhZGR5LmNvbSwgSW5jLjExMC8GA1UE +AxMoR28gRGFkZHkgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIw +DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9xYgjx+lk09xvJGKP3gElY6SKD +E6bFIEMBO4Tx5oVJnyfq9oQbTqC023CYxzIBsQU+B07u9PpPL1kwIuerGVZr4oAH +/PMWdYA5UXvl+TW2dE6pjYIT5LY/qQOD+qK+ihVqf94Lw7YZFAXK6sOoBJQ7Rnwy +DfMAZiLIjWltNowRGLfTshxgtDj6AozO091GB94KPutdfMh8+7ArU6SSYmlRJQVh +GkSBjCypQ5Yj36w6gZoOKcUcqeldHraenjAKOc7xiID7S13MMuyFYkMlNAJWJwGR +tDtwKj9useiciAF9n9T521NtYJ2/LOdYq7hfRvzOxBsDPAnrSTFcaUaz4EcCAwEA +AaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE +FDqahQcQZyi27/a9BUFuIMGU2g/eMA0GCSqGSIb3DQEBCwUAA4IBAQCZ21151fmX +WWcDYfF+OwYxdS2hII5PZYe096acvNjpL9DbWu7PdIxztDhC2gV7+AJ1uP2lsdeu +9tfeE8tTEH6KRtGX+rcuKxGrkLAngPnon1rpN5+r5N9ss4UXnT3ZJE95kTXWXwTr +gIOrmgIttRD02JDHBHNA7XIloKmf7J6raBKZV8aPEjoJpL1E/QYVN8Gb5DKj7Tjo +2GTzLH4U/ALqn83/B2gX2yKQOC16jdFU8WnjXzPKej17CuPKf1855eJ1usV2GDPO +LPAvTK33sefOT6jEm0pUBsV/fdUID+Ic/n4XuKxe9tQWskMJDE32p2u0mYRlynqI +4uJEvlz36hz1 +-----END CERTIFICATE----- + +# Issuer: CN=Starfield Root Certificate Authority - G2 O=Starfield Technologies, Inc. +# Subject: CN=Starfield Root Certificate Authority - G2 O=Starfield Technologies, Inc. 
+# Label: "Starfield Root Certificate Authority - G2" +# Serial: 0 +# MD5 Fingerprint: d6:39:81:c6:52:7e:96:69:fc:fc:ca:66:ed:05:f2:96 +# SHA1 Fingerprint: b5:1c:06:7c:ee:2b:0c:3d:f8:55:ab:2d:92:f4:fe:39:d4:e7:0f:0e +# SHA256 Fingerprint: 2c:e1:cb:0b:f9:d2:f9:e1:02:99:3f:be:21:51:52:c3:b2:dd:0c:ab:de:1c:68:e5:31:9b:83:91:54:db:b7:f5 +-----BEGIN CERTIFICATE----- +MIID3TCCAsWgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBjzELMAkGA1UEBhMCVVMx +EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT +HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAMTKVN0YXJmaWVs +ZCBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAw +MFoXDTM3MTIzMTIzNTk1OVowgY8xCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6 +b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFyZmllbGQgVGVj +aG5vbG9naWVzLCBJbmMuMTIwMAYDVQQDEylTdGFyZmllbGQgUm9vdCBDZXJ0aWZp +Y2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC +ggEBAL3twQP89o/8ArFvW59I2Z154qK3A2FWGMNHttfKPTUuiUP3oWmb3ooa/RMg +nLRJdzIpVv257IzdIvpy3Cdhl+72WoTsbhm5iSzchFvVdPtrX8WJpRBSiUZV9Lh1 +HOZ/5FSuS/hVclcCGfgXcVnrHigHdMWdSL5stPSksPNkN3mSwOxGXn/hbVNMYq/N +Hwtjuzqd+/x5AJhhdM8mgkBj87JyahkNmcrUDnXMN/uLicFZ8WJ/X7NfZTD4p7dN +dloedl40wOiWVpmKs/B/pM293DIxfJHP4F8R+GuqSVzRmZTRouNjWwl2tVZi4Ut0 +HZbUJtQIBFnQmA4O5t78w+wfkPECAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO +BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFHwMMh+n2TB/xH1oo2Kooc6rB1snMA0G +CSqGSIb3DQEBCwUAA4IBAQARWfolTwNvlJk7mh+ChTnUdgWUXuEok21iXQnCoKjU +sHU48TRqneSfioYmUeYs0cYtbpUgSpIB7LiKZ3sx4mcujJUDJi5DnUox9g61DLu3 +4jd/IroAow57UvtruzvE03lRTs2Q9GcHGcg8RnoNAX3FWOdt5oUwF5okxBDgBPfg +8n/Uqgr/Qh037ZTlZFkSIHc40zI+OIF1lnP6aI+xy84fxez6nH7PfrHxBy22/L/K +pL/QlwVKvOoYKAKQvVR4CSFx09F9HdkWsKlhPdAKACL8x3vLCWRFCztAgfd9fDL1 +mMpYjn0q7pBZc2T5NnReJaH1ZgUufzkVqSr7UIuOhWn0 +-----END CERTIFICATE----- + +# Issuer: CN=Starfield Services Root Certificate Authority - G2 O=Starfield Technologies, Inc. +# Subject: CN=Starfield Services Root Certificate Authority - G2 O=Starfield Technologies, Inc. 
+# Label: "Starfield Services Root Certificate Authority - G2" +# Serial: 0 +# MD5 Fingerprint: 17:35:74:af:7b:61:1c:eb:f4:f9:3c:e2:ee:40:f9:a2 +# SHA1 Fingerprint: 92:5a:8f:8d:2c:6d:04:e0:66:5f:59:6a:ff:22:d8:63:e8:25:6f:3f +# SHA256 Fingerprint: 56:8d:69:05:a2:c8:87:08:a4:b3:02:51:90:ed:cf:ed:b1:97:4a:60:6a:13:c6:e5:29:0f:cb:2a:e6:3e:da:b5 +-----BEGIN CERTIFICATE----- +MIID7zCCAtegAwIBAgIBADANBgkqhkiG9w0BAQsFADCBmDELMAkGA1UEBhMCVVMx +EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT +HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xOzA5BgNVBAMTMlN0YXJmaWVs +ZCBTZXJ2aWNlcyBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5 +MDkwMTAwMDAwMFoXDTM3MTIzMTIzNTk1OVowgZgxCzAJBgNVBAYTAlVTMRAwDgYD +VQQIEwdBcml6b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFy +ZmllbGQgVGVjaG5vbG9naWVzLCBJbmMuMTswOQYDVQQDEzJTdGFyZmllbGQgU2Vy +dmljZXMgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBANUMOsQq+U7i9b4Zl1+OiFOxHz/Lz58gE20p +OsgPfTz3a3Y4Y9k2YKibXlwAgLIvWX/2h/klQ4bnaRtSmpDhcePYLQ1Ob/bISdm2 +8xpWriu2dBTrz/sm4xq6HZYuajtYlIlHVv8loJNwU4PahHQUw2eeBGg6345AWh1K +Ts9DkTvnVtYAcMtS7nt9rjrnvDH5RfbCYM8TWQIrgMw0R9+53pBlbQLPLJGmpufe +hRhJfGZOozptqbXuNC66DQO4M99H67FrjSXZm86B0UVGMpZwh94CDklDhbZsc7tk +6mFBrMnUVN+HL8cisibMn1lUaJ/8viovxFUcdUBgF4UCVTmLfwUCAwEAAaNCMEAw +DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFJxfAN+q +AdcwKziIorhtSpzyEZGDMA0GCSqGSIb3DQEBCwUAA4IBAQBLNqaEd2ndOxmfZyMI +bw5hyf2E3F/YNoHN2BtBLZ9g3ccaaNnRbobhiCPPE95Dz+I0swSdHynVv/heyNXB +ve6SbzJ08pGCL72CQnqtKrcgfU28elUSwhXqvfdqlS5sdJ/PHLTyxQGjhdByPq1z +qwubdQxtRbeOlKyWN7Wg0I8VRw7j6IPdj/3vQQF3zCepYoUz8jcI73HPdwbeyBkd +iEDPfUYd/x7H4c7/I9vG+o1VTqkC50cRRj70/b17KSa7qWFiNyi2LSr2EIZkyXCn +0q23KXB56jzaYyWf/Wi3MOxw+3WKt21gZ7IeyLnp2KhvAotnDU0mV3HaIPzBSlCN +sSi6 +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Commercial O=AffirmTrust +# Subject: CN=AffirmTrust Commercial O=AffirmTrust +# Label: "AffirmTrust Commercial" +# Serial: 8608355977964138876 +# MD5 Fingerprint: 82:92:ba:5b:ef:cd:8a:6f:a6:3d:55:f9:84:f6:d6:b7 +# SHA1 Fingerprint: f9:b5:b6:32:45:5f:9c:be:ec:57:5f:80:dc:e9:6e:2c:c7:b2:78:b7 +# SHA256 Fingerprint: 03:76:ab:1d:54:c5:f9:80:3c:e4:b2:e2:01:a0:ee:7e:ef:7b:57:b6:36:e8:a9:3c:9b:8d:48:60:c9:6f:5f:a7 +-----BEGIN CERTIFICATE----- +MIIDTDCCAjSgAwIBAgIId3cGJyapsXwwDQYJKoZIhvcNAQELBQAwRDELMAkGA1UE +BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz +dCBDb21tZXJjaWFsMB4XDTEwMDEyOTE0MDYwNloXDTMwMTIzMTE0MDYwNlowRDEL +MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp +cm1UcnVzdCBDb21tZXJjaWFsMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC +AQEA9htPZwcroRX1BiLLHwGy43NFBkRJLLtJJRTWzsO3qyxPxkEylFf6EqdbDuKP +Hx6GGaeqtS25Xw2Kwq+FNXkyLbscYjfysVtKPcrNcV/pQr6U6Mje+SJIZMblq8Yr +ba0F8PrVC8+a5fBQpIs7R6UjW3p6+DM/uO+Zl+MgwdYoic+U+7lF7eNAFxHUdPAL +MeIrJmqbTFeurCA+ukV6BfO9m2kVrn1OIGPENXY6BwLJN/3HR+7o8XYdcxXyl6S1 +yHp52UKqK39c/s4mT6NmgTWvRLpUHhwwMmWd5jyTXlBOeuM61G7MGvv50jeuJCqr +VwMiKA1JdX+3KNp1v47j3A55MQIDAQABo0IwQDAdBgNVHQ4EFgQUnZPGU4teyq8/ +nx4P5ZmVvCT2lI8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ +KoZIhvcNAQELBQADggEBAFis9AQOzcAN/wr91LoWXym9e2iZWEnStB03TX8nfUYG +XUPGhi4+c7ImfU+TqbbEKpqrIZcUsd6M06uJFdhrJNTxFq7YpFzUf1GO7RgBsZNj +vbz4YYCanrHOQnDiqX0GJX0nof5v7LMeJNrjS1UaADs1tDvZ110w/YETifLCBivt +Z8SOyUOyXGsViQK8YvxO8rUzqrJv0wqiUOP2O+guRMLbZjipM1ZI8W0bM40NjD9g +N53Tym1+NH4Nn3J2ixufcv1SNUFFApYvHLKac0khsUlHRUe072o0EclNmsxZt9YC +nlpOZbWUrhvfKbAW8b8Angc6F2S1BLUjIZkKlTuXfO8= +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Networking O=AffirmTrust +# Subject: CN=AffirmTrust Networking 
O=AffirmTrust +# Label: "AffirmTrust Networking" +# Serial: 8957382827206547757 +# MD5 Fingerprint: 42:65:ca:be:01:9a:9a:4c:a9:8c:41:49:cd:c0:d5:7f +# SHA1 Fingerprint: 29:36:21:02:8b:20:ed:02:f5:66:c5:32:d1:d6:ed:90:9f:45:00:2f +# SHA256 Fingerprint: 0a:81:ec:5a:92:97:77:f1:45:90:4a:f3:8d:5d:50:9f:66:b5:e2:c5:8f:cd:b5:31:05:8b:0e:17:f3:f0:b4:1b +-----BEGIN CERTIFICATE----- +MIIDTDCCAjSgAwIBAgIIfE8EORzUmS0wDQYJKoZIhvcNAQEFBQAwRDELMAkGA1UE +BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz +dCBOZXR3b3JraW5nMB4XDTEwMDEyOTE0MDgyNFoXDTMwMTIzMTE0MDgyNFowRDEL +MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp +cm1UcnVzdCBOZXR3b3JraW5nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC +AQEAtITMMxcua5Rsa2FSoOujz3mUTOWUgJnLVWREZY9nZOIG41w3SfYvm4SEHi3y +YJ0wTsyEheIszx6e/jarM3c1RNg1lho9Nuh6DtjVR6FqaYvZ/Ls6rnla1fTWcbua +kCNrmreIdIcMHl+5ni36q1Mr3Lt2PpNMCAiMHqIjHNRqrSK6mQEubWXLviRmVSRL +QESxG9fhwoXA3hA/Pe24/PHxI1Pcv2WXb9n5QHGNfb2V1M6+oF4nI979ptAmDgAp +6zxG8D1gvz9Q0twmQVGeFDdCBKNwV6gbh+0t+nvujArjqWaJGctB+d1ENmHP4ndG +yH329JKBNv3bNPFyfvMMFr20FQIDAQABo0IwQDAdBgNVHQ4EFgQUBx/S55zawm6i +QLSwelAQUHTEyL0wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ +KoZIhvcNAQEFBQADggEBAIlXshZ6qML91tmbmzTCnLQyFE2npN/svqe++EPbkTfO +tDIuUFUaNU52Q3Eg75N3ThVwLofDwR1t3Mu1J9QsVtFSUzpE0nPIxBsFZVpikpzu +QY0x2+c06lkh1QF612S4ZDnNye2v7UsDSKegmQGA3GWjNq5lWUhPgkvIZfFXHeVZ +Lgo/bNjR9eUJtGxUAArgFU2HdW23WJZa3W3SAKD0m0i+wzekujbgfIeFlxoVot4u +olu9rxj5kFDNcFn4J2dHy8egBzp90SxdbBk6ZrV9/ZFvgrG+CJPbFEfxojfHRZ48 +x3evZKiT3/Zpg4Jg8klCNO1aAFSFHBY2kgxc+qatv9s= +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Premium O=AffirmTrust +# Subject: CN=AffirmTrust Premium O=AffirmTrust +# Label: "AffirmTrust Premium" +# Serial: 7893706540734352110 +# MD5 Fingerprint: c4:5d:0e:48:b6:ac:28:30:4e:0a:bc:f9:38:16:87:57 +# SHA1 Fingerprint: d8:a6:33:2c:e0:03:6f:b1:85:f6:63:4f:7d:6a:06:65:26:32:28:27 +# SHA256 Fingerprint: 70:a7:3f:7f:37:6b:60:07:42:48:90:45:34:b1:14:82:d5:bf:0e:69:8e:cc:49:8d:f5:25:77:eb:f2:e9:3b:9a +-----BEGIN CERTIFICATE----- +MIIFRjCCAy6gAwIBAgIIbYwURrGmCu4wDQYJKoZIhvcNAQEMBQAwQTELMAkGA1UE +BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1UcnVz +dCBQcmVtaXVtMB4XDTEwMDEyOTE0MTAzNloXDTQwMTIzMTE0MTAzNlowQTELMAkG +A1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1U +cnVzdCBQcmVtaXVtMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxBLf +qV/+Qd3d9Z+K4/as4Tx4mrzY8H96oDMq3I0gW64tb+eT2TZwamjPjlGjhVtnBKAQ +JG9dKILBl1fYSCkTtuG+kU3fhQxTGJoeJKJPj/CihQvL9Cl/0qRY7iZNyaqoe5rZ ++jjeRFcV5fiMyNlI4g0WJx0eyIOFJbe6qlVBzAMiSy2RjYvmia9mx+n/K+k8rNrS +s8PhaJyJ+HoAVt70VZVs+7pk3WKL3wt3MutizCaam7uqYoNMtAZ6MMgpv+0GTZe5 +HMQxK9VfvFMSF5yZVylmd2EhMQcuJUmdGPLu8ytxjLW6OQdJd/zvLpKQBY0tL3d7 +70O/Nbua2Plzpyzy0FfuKE4mX4+QaAkvuPjcBukumj5Rp9EixAqnOEhss/n/fauG +V+O61oV4d7pD6kh/9ti+I20ev9E2bFhc8e6kGVQa9QPSdubhjL08s9NIS+LI+H+S +qHZGnEJlPqQewQcDWkYtuJfzt9WyVSHvutxMAJf7FJUnM7/oQ0dG0giZFmA7mn7S +5u046uwBHjxIVkkJx0w3AJ6IDsBz4W9m6XJHMD4Q5QsDyZpCAGzFlH5hxIrff4Ia +C1nEWTJ3s7xgaVY5/bQGeyzWZDbZvUjthB9+pSKPKrhC9IK31FOQeE4tGv2Bb0TX +OwF0lkLgAOIua+rF7nKsu7/+6qqo+Nz2snmKtmcCAwEAAaNCMEAwHQYDVR0OBBYE +FJ3AZ6YMItkm9UWrpmVSESfYRaxjMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/ +BAQDAgEGMA0GCSqGSIb3DQEBDAUAA4ICAQCzV00QYk465KzquByvMiPIs0laUZx2 +KI15qldGF9X1Uva3ROgIRL8YhNILgM3FEv0AVQVhh0HctSSePMTYyPtwni94loMg +Nt58D2kTiKV1NpgIpsbfrM7jWNa3Pt668+s0QNiigfV4Py/VpfzZotReBA4Xrf5B +8OWycvpEgjNC6C1Y91aMYj+6QrCcDFx+LmUmXFNPALJ4fqENmS2NuB2OosSw/WDQ +MKSOyARiqcTtNd56l+0OOF6SL5Nwpamcb6d9Ex1+xghIsV5n61EIJenmJWtSKZGc +0jlzCFfemQa0W50QBuHCAKi4HEoCChTQwUHK+4w1IX2COPKpVJEZNZOUbWo6xbLQ 
+u4mGk+ibyQ86p3q4ofB4Rvr8Ny/lioTz3/4E2aFooC8k4gmVBtWVyuEklut89pMF
+u+1z6S3RdTnX5yTb2E5fQ4+e0BQ5v1VwSJlXMbSc7kqYA5YwH2AG7hsj/oFgIxpH
+YoWlzBk0gG+zrBrjn/B7SK3VAdlntqlyk+otZrWyuOQ9PLLvTIzq6we/qzWaVYa8
+GKa1qF60g2xraUDTn9zxw2lrueFtCfTxqlB2Cnp9ehehVZZCmTEJ3WARjQUwfuaO
+RtGdFNrHF+QFlozEJLUbzxQHskD4o55BhrwE0GuWyCqANP2/7waj3VjFhT0+j/6e
+KeC2uAloGRwYQw==
+-----END CERTIFICATE-----
+
+# Issuer: CN=AffirmTrust Premium ECC O=AffirmTrust
+# Subject: CN=AffirmTrust Premium ECC O=AffirmTrust
+# Label: "AffirmTrust Premium ECC"
+# Serial: 8401224907861490260
+# MD5 Fingerprint: 64:b0:09:55:cf:b1:d5:99:e2:be:13:ab:a6:5d:ea:4d
+# SHA1 Fingerprint: b8:23:6b:00:2f:1d:16:86:53:01:55:6c:11:a4:37:ca:eb:ff:c3:bb
+# SHA256 Fingerprint: bd:71:fd:f6:da:97:e4:cf:62:d1:64:7a:dd:25:81:b0:7d:79:ad:f8:39:7e:b4:ec:ba:9c:5e:84:88:82:14:23
+-----BEGIN CERTIFICATE-----
+MIIB/jCCAYWgAwIBAgIIdJclisc/elQwCgYIKoZIzj0EAwMwRTELMAkGA1UEBhMC
+VVMxFDASBgNVBAoMC0FmZmlybVRydXN0MSAwHgYDVQQDDBdBZmZpcm1UcnVzdCBQ
+cmVtaXVtIEVDQzAeFw0xMDAxMjkxNDIwMjRaFw00MDEyMzExNDIwMjRaMEUxCzAJ
+BgNVBAYTAlVTMRQwEgYDVQQKDAtBZmZpcm1UcnVzdDEgMB4GA1UEAwwXQWZmaXJt
+VHJ1c3QgUHJlbWl1bSBFQ0MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQNMF4bFZ0D
+0KF5Nbc6PJJ6yhUczWLznCZcBz3lVPqj1swS6vQUX+iOGasvLkjmrBhDeKzQN8O9
+ss0s5kfiGuZjuD0uL3jET9v0D6RoTFVya5UdThhClXjMNzyR4ptlKymjQjBAMB0G
+A1UdDgQWBBSaryl6wBE1NSZRMADDav5A1a7WPDAPBgNVHRMBAf8EBTADAQH/MA4G
+A1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNnADBkAjAXCfOHiFBar8jAQr9HX/Vs
+aobgxCd05DhT1wV/GzTjxi+zygk8N53X57hG8f2h4nECMEJZh0PUUd+60wkyWs6I
+flc9nF9Ca/UHLbXwgpP5WW+uZPpY5Yse42O+tYHNbwKMeQ==
+-----END CERTIFICATE-----
+
+# Issuer: CN=Certum Trusted Network CA O=Unizeto Technologies S.A. OU=Certum Certification Authority
+# Subject: CN=Certum Trusted Network CA O=Unizeto Technologies S.A. OU=Certum Certification Authority
+# Label: "Certum Trusted Network CA"
+# Serial: 279744
+# MD5 Fingerprint: d5:e9:81:40:c5:18:69:fc:46:2c:89:75:62:0f:aa:78
+# SHA1 Fingerprint: 07:e0:32:e0:20:b7:2c:3f:19:2f:06:28:a2:59:3a:19:a7:0f:06:9e
+# SHA256 Fingerprint: 5c:58:46:8d:55:f5:8e:49:7e:74:39:82:d2:b5:00:10:b6:d1:65:37:4a:cf:83:a7:d4:a3:2d:b7:68:c4:40:8e
+-----BEGIN CERTIFICATE-----
+MIIDuzCCAqOgAwIBAgIDBETAMA0GCSqGSIb3DQEBBQUAMH4xCzAJBgNVBAYTAlBM
+MSIwIAYDVQQKExlVbml6ZXRvIFRlY2hub2xvZ2llcyBTLkEuMScwJQYDVQQLEx5D
+ZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxIjAgBgNVBAMTGUNlcnR1bSBU
+cnVzdGVkIE5ldHdvcmsgQ0EwHhcNMDgxMDIyMTIwNzM3WhcNMjkxMjMxMTIwNzM3
+WjB+MQswCQYDVQQGEwJQTDEiMCAGA1UEChMZVW5pemV0byBUZWNobm9sb2dpZXMg
+Uy5BLjEnMCUGA1UECxMeQ2VydHVtIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MSIw
+IAYDVQQDExlDZXJ0dW0gVHJ1c3RlZCBOZXR3b3JrIENBMIIBIjANBgkqhkiG9w0B
+AQEFAAOCAQ8AMIIBCgKCAQEA4/t9o3K6wvDJFIf1awFO4W5AB7ptJ11/91sts1rH
+UV+rpDKmYYe2bg+G0jACl/jXaVehGDldamR5xgFZrDwxSjh80gTSSyjoIF87B6LM
+TXPb865Px1bVWqeWifrzq2jUI4ZZJ88JJ7ysbnKDHDBy3+Ci6dLhdHUZvSqeexVU
+BBvXQzmtVSjF4hq79MDkrjhJM8x2hZ85RdKknvISjFH4fOQtf/WsX+sWn7Et0brM
+kUJ3TCXJkDhv2/DM+44el1k+1WBO5gUo7Ul5E0u6SNsv+XLTOcr+H9g0cvW0QM8x
+AcPs3hEtF10fuFDRXhmnad4HMyjKUJX5p1TLVIZQRan5SQIDAQABo0IwQDAPBgNV
+HRMBAf8EBTADAQH/MB0GA1UdDgQWBBQIds3LB/8k9sXN7buQvOKEN0Z19zAOBgNV
+HQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQEFBQADggEBAKaorSLOAT2mo/9i0Eidi15y
+sHhE49wcrwn9I0j6vSrEuVUEtRCjjSfeC4Jj0O7eDDd5QVsisrCaQVymcODU0HfL
+I9MA4GxWL+FpDQ3Zqr8hgVDZBqWo/5U30Kr+4rP1mS1FhIrlQgnXdAIv94nYmem8
+J9RHjboNRhx3zxSkHLmkMcScKHQDNP8zGSal6Q10tz6XxnboJ5ajZt3hrvJBW8qY
+VoNzcOSGGtIxQbovvi0TWnZvTuhOgQ4/WwMioBK+ZlgRSssDxLQqKi2WF+A5VLxI
+03YnnZotBqbJ7DnSq9ufmgsnAjUpsUCV5/nonFWIGUbWtzT1fs45mtk48VH3Tyw=
+-----END CERTIFICATE-----
+
+# Issuer: CN=TWCA Root Certification Authority O=TAIWAN-CA OU=Root CA
+# Subject: CN=TWCA Root Certification Authority O=TAIWAN-CA OU=Root CA
+# Label: "TWCA Root Certification Authority"
+# Serial: 1
+# MD5 Fingerprint: aa:08:8f:f6:f9:7b:b7:f2:b1:a7:1e:9b:ea:ea:bd:79
+# SHA1 Fingerprint: cf:9e:87:6d:d3:eb:fc:42:26:97:a3:b5:a3:7a:a0:76:a9:06:23:48
+# SHA256 Fingerprint: bf:d8:8f:e1:10:1c:41:ae:3e:80:1b:f8:be:56:35:0e:e9:ba:d1:a6:b9:bd:51:5e:dc:5c:6d:5b:87:11:ac:44
+-----BEGIN CERTIFICATE-----
+MIIDezCCAmOgAwIBAgIBATANBgkqhkiG9w0BAQUFADBfMQswCQYDVQQGEwJUVzES
+MBAGA1UECgwJVEFJV0FOLUNBMRAwDgYDVQQLDAdSb290IENBMSowKAYDVQQDDCFU
+V0NBIFJvb3QgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwODI4MDcyNDMz
+WhcNMzAxMjMxMTU1OTU5WjBfMQswCQYDVQQGEwJUVzESMBAGA1UECgwJVEFJV0FO
+LUNBMRAwDgYDVQQLDAdSb290IENBMSowKAYDVQQDDCFUV0NBIFJvb3QgQ2VydGlm
+aWNhdGlvbiBBdXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
+AQCwfnK4pAOU5qfeCTiRShFAh6d8WWQUe7UREN3+v9XAu1bihSX0NXIP+FPQQeFE
+AcK0HMMxQhZHhTMidrIKbw/lJVBPhYa+v5guEGcevhEFhgWQxFnQfHgQsIBct+HH
+K3XLfJ+utdGdIzdjp9xCoi2SBBtQwXu4PhvJVgSLL1KbralW6cH/ralYhzC2gfeX
+RfwZVzsrb+RH9JlF/h3x+JejiB03HFyP4HYlmlD4oFT/RJB2I9IyxsOrBr/8+7/z
+rX2SYgJbKdM1o5OaQ2RgXbL6Mv87BK9NQGr5x+PvI/1ry+UPizgN7gr8/g+YnzAx
+3WxSZfmLgb4i4RxYA7qRG4kHAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
+HRMBAf8EBTADAQH/MB0GA1UdDgQWBBRqOFsmjd6LWvJPelSDGRjjCDWmujANBgkq
+hkiG9w0BAQUFAAOCAQEAPNV3PdrfibqHDAhUaiBQkr6wQT25JmSDCi/oQMCXKCeC
+MErJk/9q56YAf4lCmtYR5VPOL8zy2gXE/uJQxDqGfczafhAJO5I1KlOy/usrBdls
+XebQ79NqZp4VKIV66IIArB6nCWlWQtNoURi+VJq/REG6Sb4gumlc7rh3zc5sH62D
+lhh9DrUUOYTxKOkto557HnpyWoOzeW/vtPzQCqVYT0bf+215WfKEIlKuD8z7fDvn
+aspHYcN6+NOSBB+4IIThNlQWx0DeO4pz3N/GCUzf7Nr/1FNCocnyYh0igzyXxfkZ
+YiesZSLX0zzG5Y6yU8xJzrww/nsOM5D77dIUkR8Hrw==
+-----END CERTIFICATE-----
+
+# Issuer: O=SECOM Trust Systems CO.,LTD. OU=Security Communication RootCA2
+# Subject: O=SECOM Trust Systems CO.,LTD. OU=Security Communication RootCA2
+# Label: "Security Communication RootCA2"
+# Serial: 0
+# MD5 Fingerprint: 6c:39:7d:a4:0e:55:59:b2:3f:d6:41:b1:12:50:de:43
+# SHA1 Fingerprint: 5f:3b:8c:f2:f8:10:b3:7d:78:b4:ce:ec:19:19:c3:73:34:b9:c7:74
+# SHA256 Fingerprint: 51:3b:2c:ec:b8:10:d4:cd:e5:dd:85:39:1a:df:c6:c2:dd:60:d8:7b:b7:36:d2:b5:21:48:4a:a4:7a:0e:be:f6
+-----BEGIN CERTIFICATE-----
+MIIDdzCCAl+gAwIBAgIBADANBgkqhkiG9w0BAQsFADBdMQswCQYDVQQGEwJKUDEl
+MCMGA1UEChMcU0VDT00gVHJ1c3QgU3lzdGVtcyBDTy4sTFRELjEnMCUGA1UECxMe
+U2VjdXJpdHkgQ29tbXVuaWNhdGlvbiBSb290Q0EyMB4XDTA5MDUyOTA1MDAzOVoX
+DTI5MDUyOTA1MDAzOVowXTELMAkGA1UEBhMCSlAxJTAjBgNVBAoTHFNFQ09NIFRy
+dXN0IFN5c3RlbXMgQ08uLExURC4xJzAlBgNVBAsTHlNlY3VyaXR5IENvbW11bmlj
+YXRpb24gUm9vdENBMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANAV
+OVKxUrO6xVmCxF1SrjpDZYBLx/KWvNs2l9amZIyoXvDjChz335c9S672XewhtUGr
+zbl+dp+++T42NKA7wfYxEUV0kz1XgMX5iZnK5atq1LXaQZAQwdbWQonCv/Q4EpVM
+VAX3NuRFg3sUZdbcDE3R3n4MqzvEFb46VqZab3ZpUql6ucjrappdUtAtCms1FgkQ
+hNBqyjoGADdH5H5XTz+L62e4iKrFvlNVspHEfbmwhRkGeC7bYRr6hfVKkaHnFtWO
+ojnflLhwHyg/i/xAXmODPIMqGplrz95Zajv8bxbXH/1KEOtOghY6rCcMU/Gt1SSw
+awNQwS08Ft1ENCcadfsCAwEAAaNCMEAwHQYDVR0OBBYEFAqFqXdlBZh8QIH4D5cs
+OPEK7DzPMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3
+DQEBCwUAA4IBAQBMOqNErLlFsceTfsgLCkLfZOoc7llsCLqJX2rKSpWeeo8HxdpF
+coJxDjrSzG+ntKEju/Ykn8sX/oymzsLS28yN/HH8AynBbF0zX2S2ZTuJbxh2ePXc
+okgfGT+Ok+vx+hfuzU7jBBJV1uXk3fs+BXziHV7Gp7yXT2g69ekuCkO2r1dcYmh8
+t/2jioSgrGK+KwmHNPBqAbubKVY8/gA3zyNs8U6qtnRGEmyR7jTV7JqR50S+kDFy
+1UkC9gLl9B/rfNmWVan/7Ir5mUf/NVoCqgTLiluHcSmRvaS0eg29mvVXIwAHIRc/
+SjnRBUkLp7Y3gaVdjKozXoEofKd9J+sAro03
+-----END CERTIFICATE-----
+
+# Issuer: CN=Actalis Authentication Root CA O=Actalis S.p.A./03358520967
+# Subject: CN=Actalis Authentication Root CA O=Actalis S.p.A./03358520967
+# Label: "Actalis Authentication Root CA"
+# Serial: 6271844772424770508
+# MD5 Fingerprint: 69:c1:0d:4f:07:a3:1b:c3:fe:56:3d:04:bc:11:f6:a6
+# SHA1 Fingerprint: f3:73:b3:87:06:5a:28:84:8a:f2:f3:4a:ce:19:2b:dd:c7:8e:9c:ac
+# SHA256 Fingerprint: 55:92:60:84:ec:96:3a:64:b9:6e:2a:be:01:ce:0b:a8:6a:64:fb:fe:bc:c7:aa:b5:af:c1:55:b3:7f:d7:60:66
+-----BEGIN CERTIFICATE-----
+MIIFuzCCA6OgAwIBAgIIVwoRl0LE48wwDQYJKoZIhvcNAQELBQAwazELMAkGA1UE
+BhMCSVQxDjAMBgNVBAcMBU1pbGFuMSMwIQYDVQQKDBpBY3RhbGlzIFMucC5BLi8w
+MzM1ODUyMDk2NzEnMCUGA1UEAwweQWN0YWxpcyBBdXRoZW50aWNhdGlvbiBSb290
+IENBMB4XDTExMDkyMjExMjIwMloXDTMwMDkyMjExMjIwMlowazELMAkGA1UEBhMC
+SVQxDjAMBgNVBAcMBU1pbGFuMSMwIQYDVQQKDBpBY3RhbGlzIFMucC5BLi8wMzM1
+ODUyMDk2NzEnMCUGA1UEAwweQWN0YWxpcyBBdXRoZW50aWNhdGlvbiBSb290IENB
+MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAp8bEpSmkLO/lGMWwUKNv
+UTufClrJwkg4CsIcoBh/kbWHuUA/3R1oHwiD1S0eiKD4j1aPbZkCkpAW1V8IbInX
+4ay8IMKx4INRimlNAJZaby/ARH6jDuSRzVju3PvHHkVH3Se5CAGfpiEd9UEtL0z9
+KK3giq0itFZljoZUj5NDKd45RnijMCO6zfB9E1fAXdKDa0hMxKufgFpbOr3JpyI/
+gCczWw63igxdBzcIy2zSekciRDXFzMwujt0q7bd9Zg1fYVEiVRvjRuPjPdA1Yprb
+rxTIW6HMiRvhMCb8oJsfgadHHwTrozmSBp+Z07/T6k9QnBn+locePGX2oxgkg4YQ
+51Q+qDp2JE+BIcXjDwL4k5RHILv+1A7TaLndxHqEguNTVHnd25zS8gebLra8Pu2F
+be8lEfKXGkJh90qX6IuxEAf6ZYGyojnP9zz/GPvG8VqLWeICrHuS0E4UT1lF9gxe
+KF+w6D9Fz8+vm2/7hNN3WpVvrJSEnu68wEqPSpP4RCHiMUVhUE4Q2OM1fEwZtN4F
+v6MGn8i1zeQf1xcGDXqVdFUNaBr8EBtiZJ1t4JWgw5QHVw0U5r0F+7if5t+L4sbn
+fpb2U8WANFAoWPASUHEXMLrmeGO89LKtmyuy/uE5jF66CyCU3nuDuP/jVo23Eek7
+jPKxwV2dpAtMK9myGPW1n0sCAwEAAaNjMGEwHQYDVR0OBBYEFFLYiDrIn3hm7Ynz
+ezhwlMkCAjbQMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUUtiIOsifeGbt
+ifN7OHCUyQICNtAwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3DQEBCwUAA4ICAQAL
+e3KHwGCmSUyIWOYdiPcUZEim2FgKDk8TNd81HdTtBjHIgT5q1d07GjLukD0R0i70
+jsNjLiNmsGe+b7bAEzlgqqI0JZN1Ut6nna0Oh4lScWoWPBkdg/iaKWW+9D+a2fDz +WochcYBNy+A4mz+7+uAwTc+G02UQGRjRlwKxK3JCaKygvU5a2hi/a5iB0P2avl4V +SM0RFbnAKVy06Ij3Pjaut2L9HmLecHgQHEhb2rykOLpn7VU+Xlff1ANATIGk0k9j +pwlCCRT8AKnCgHNPLsBA2RF7SOp6AsDT6ygBJlh0wcBzIm2Tlf05fbsq4/aC4yyX +X04fkZT6/iyj2HYauE2yOE+b+h1IYHkm4vP9qdCa6HCPSXrW5b0KDtst842/6+Ok +fcvHlXHo2qN8xcL4dJIEG4aspCJTQLas/kx2z/uUMsA1n3Y/buWQbqCmJqK4LL7R +K4X9p2jIugErsWx0Hbhzlefut8cl8ABMALJ+tguLHPPAUJ4lueAI3jZm/zel0btU +ZCzJJ7VLkn5l/9Mt4blOvH+kQSGQQXemOR/qnuOf0GZvBeyqdn6/axag67XH/JJU +LysRJyU3eExRarDzzFhdFPFqSBX/wge2sY0PjlxQRrM9vwGYT7JZVEc+NHt4bVaT +LnPqZih4zR0Uv6CPLy64Lo7yFIrM6bV8+2ydDKXhlg== +-----END CERTIFICATE----- + +# Issuer: CN=Buypass Class 2 Root CA O=Buypass AS-983163327 +# Subject: CN=Buypass Class 2 Root CA O=Buypass AS-983163327 +# Label: "Buypass Class 2 Root CA" +# Serial: 2 +# MD5 Fingerprint: 46:a7:d2:fe:45:fb:64:5a:a8:59:90:9b:78:44:9b:29 +# SHA1 Fingerprint: 49:0a:75:74:de:87:0a:47:fe:58:ee:f6:c7:6b:eb:c6:0b:12:40:99 +# SHA256 Fingerprint: 9a:11:40:25:19:7c:5b:b9:5d:94:e6:3d:55:cd:43:79:08:47:b6:46:b2:3c:df:11:ad:a4:a0:0e:ff:15:fb:48 +-----BEGIN CERTIFICATE----- +MIIFWTCCA0GgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJOTzEd +MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxIDAeBgNVBAMMF0J1eXBhc3Mg +Q2xhc3MgMiBSb290IENBMB4XDTEwMTAyNjA4MzgwM1oXDTQwMTAyNjA4MzgwM1ow +TjELMAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MSAw +HgYDVQQDDBdCdXlwYXNzIENsYXNzIDIgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEB +BQADggIPADCCAgoCggIBANfHXvfBB9R3+0Mh9PT1aeTuMgHbo4Yf5FkNuud1g1Lr +6hxhFUi7HQfKjK6w3Jad6sNgkoaCKHOcVgb/S2TwDCo3SbXlzwx87vFKu3MwZfPV +L4O2fuPn9Z6rYPnT8Z2SdIrkHJasW4DptfQxh6NR/Md+oW+OU3fUl8FVM5I+GC91 +1K2GScuVr1QGbNgGE41b/+EmGVnAJLqBcXmQRFBoJJRfuLMR8SlBYaNByyM21cHx +MlAQTn/0hpPshNOOvEu/XAFOBz3cFIqUCqTqc/sLUegTBxj6DvEr0VQVfTzh97QZ +QmdiXnfgolXsttlpF9U6r0TtSsWe5HonfOV116rLJeffawrbD02TTqigzXsu8lkB +arcNuAeBfos4GzjmCleZPe4h6KP1DBbdi+w0jpwqHAAVF41og9JwnxgIzRFo1clr +Us3ERo/ctfPYV3Me6ZQ5BL/T3jjetFPsaRyifsSP5BtwrfKi+fv3FmRmaZ9JUaLi +FRhnBkp/1Wy1TbMz4GHrXb7pmA8y1x1LPC5aAVKRCfLf6o3YBkBjqhHk/sM3nhRS +P/TizPJhk9H9Z2vXUq6/aKtAQ6BXNVN48FP4YUIHZMbXb5tMOA1jrGKvNouicwoN +9SG9dKpN6nIDSdvHXx1iY8f93ZHsM+71bbRuMGjeyNYmsHVee7QHIJihdjK4TWxP +AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMmAd+BikoL1Rpzz +uvdMw964o605MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAU18h +9bqwOlI5LJKwbADJ784g7wbylp7ppHR/ehb8t/W2+xUbP6umwHJdELFx7rxP462s +A20ucS6vxOOto70MEae0/0qyexAQH6dXQbLArvQsWdZHEIjzIVEpMMpghq9Gqx3t +OluwlN5E40EIosHsHdb9T7bWR9AUC8rmyrV7d35BH16Dx7aMOZawP5aBQW9gkOLo ++fsicdl9sz1Gv7SEr5AcD48Saq/v7h56rgJKihcrdv6sVIkkLE8/trKnToyokZf7 +KcZ7XC25y2a2t6hbElGFtQl+Ynhw/qlqYLYdDnkM/crqJIByw5c/8nerQyIKx+u2 +DISCLIBrQYoIwOula9+ZEsuK1V6ADJHgJgg2SMX6OBE1/yWDLfJ6v9r9jv6ly0Us +H8SIU653DtmadsWOLB2jutXsMq7Aqqz30XpN69QH4kj3Io6wpJ9qzo6ysmD0oyLQ +I+uUWnpp3Q+/QFesa1lQ2aOZ4W7+jQF5JyMV3pKdewlNWudLSDBaGOYKbeaP4NK7 +5t98biGCwWg5TbSYWGZizEqQXsP6JwSxeRV0mcy+rSDeJmAc61ZRpqPq5KM/p/9h +3PFaTWwyI0PurKju7koSCTxdccK+efrCh2gdC/1cacwG0Jp9VJkqyTkaGa9LKkPz +Y11aWOIv4x3kqdbQCtCev9eBCfHJxyYNrJgWVqA= +-----END CERTIFICATE----- + +# Issuer: CN=Buypass Class 3 Root CA O=Buypass AS-983163327 +# Subject: CN=Buypass Class 3 Root CA O=Buypass AS-983163327 +# Label: "Buypass Class 3 Root CA" +# Serial: 2 +# MD5 Fingerprint: 3d:3b:18:9e:2c:64:5a:e8:d5:88:ce:0e:f9:37:c2:ec +# SHA1 Fingerprint: da:fa:f7:fa:66:84:ec:06:8f:14:50:bd:c7:c2:81:a5:bc:a9:64:57 +# SHA256 Fingerprint: ed:f7:eb:bc:a2:7a:2a:38:4d:38:7b:7d:40:10:c6:66:e2:ed:b4:84:3e:4c:29:b4:ae:1d:5b:93:32:e6:b2:4d +-----BEGIN CERTIFICATE----- 
+MIIFWTCCA0GgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJOTzEd +MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxIDAeBgNVBAMMF0J1eXBhc3Mg +Q2xhc3MgMyBSb290IENBMB4XDTEwMTAyNjA4Mjg1OFoXDTQwMTAyNjA4Mjg1OFow +TjELMAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MSAw +HgYDVQQDDBdCdXlwYXNzIENsYXNzIDMgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEB +BQADggIPADCCAgoCggIBAKXaCpUWUOOV8l6ddjEGMnqb8RB2uACatVI2zSRHsJ8Y +ZLya9vrVediQYkwiL944PdbgqOkcLNt4EemOaFEVcsfzM4fkoF0LXOBXByow9c3E +N3coTRiR5r/VUv1xLXA+58bEiuPwKAv0dpihi4dVsjoT/Lc+JzeOIuOoTyrvYLs9 +tznDDgFHmV0ST9tD+leh7fmdvhFHJlsTmKtdFoqwNxxXnUX/iJY2v7vKB3tvh2PX +0DJq1l1sDPGzbjniazEuOQAnFN44wOwZZoYS6J1yFhNkUsepNxz9gjDthBgd9K5c +/3ATAOux9TN6S9ZV+AWNS2mw9bMoNlwUxFFzTWsL8TQH2xc519woe2v1n/MuwU8X +KhDzzMro6/1rqy6any2CbgTUUgGTLT2G/H783+9CHaZr77kgxve9oKeV/afmiSTY +zIw0bOIjL9kSGiG5VZFvC5F5GQytQIgLcOJ60g7YaEi7ghM5EFjp2CoHxhLbWNvS +O1UQRwUVZ2J+GGOmRj8JDlQyXr8NYnon74Do29lLBlo3WiXQCBJ31G8JUJc9yB3D +34xFMFbG02SrZvPAXpacw8Tvw3xrizp5f7NJzz3iiZ+gMEuFuZyUJHmPfWupRWgP +K9Dx2hzLabjKSWJtyNBjYt1gD1iqj6G8BaVmos8bdrKEZLFMOVLAMLrwjEsCsLa3 +AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEe4zf/lb+74suwv +Tg75JbCOPGvDMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAACAj +QTUEkMJAYmDv4jVM1z+s4jSQuKFvdvoWFqRINyzpkMLyPPgKn9iB5btb2iUspKdV +cSQy9sgL8rxq+JOssgfCX5/bzMiKqr5qb+FJEMwx14C7u8jYog5kV+qi9cKpMRXS +IGrs/CIBKM+GuIAeqcwRpTzyFrNHnfzSgCHEy9BHcEGhyoMZCCxt8l13nIoUE9Q2 +HJLw5QY33KbmkJs4j1xrG0aGQ0JfPgEHU1RdZX33inOhmlRaHylDFCfChQ+1iHsa +O5S3HWCntZznKWlXWpuTekMwGwPXYshApqr8ZORK15FTAaggiG6cX0S5y2CBNOxv +033aSF/rtJC8LakcC6wc1aJoIIAE1vyxjy+7SjENSoYc6+I2KSb12tjE8nVhz36u +dmNKekBlk4f4HoCMhuWG1o8O/FMsYOgWYRqiPkN7zTlgVGr18okmAWiDSKIz6MkE +kbIRNBE+6tBDGR8Dk5AM/1E9V/RBbuHLoL7ryWPNbczk+DaqaJ3tvV2XcEQNtg41 +3OEMXbugUZTLfhbrES+jkkXITHHZvMmZUldGL1DPvTVp9D0VzgalLA8+9oG6lLvD +u79leNKGef9JOxqDDPDeeOzI8k1MGt6CKfjBWtrt7uYnXuhF0J0cUahoq0Tj0Itq +4/g7u9xN12TyUb7mqqta6THuBrxzvxNiCp/HuZc= +-----END CERTIFICATE----- + +# Issuer: CN=T-TeleSec GlobalRoot Class 3 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Subject: CN=T-TeleSec GlobalRoot Class 3 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Label: "T-TeleSec GlobalRoot Class 3" +# Serial: 1 +# MD5 Fingerprint: ca:fb:40:a8:4e:39:92:8a:1d:fe:8e:2f:c4:27:ea:ef +# SHA1 Fingerprint: 55:a6:72:3e:cb:f2:ec:cd:c3:23:74:70:19:9d:2a:be:11:e3:81:d1 +# SHA256 Fingerprint: fd:73:da:d3:1c:64:4f:f1:b4:3b:ef:0c:cd:da:96:71:0b:9c:d9:87:5e:ca:7e:31:70:7a:f3:e9:6d:52:2b:bd +-----BEGIN CERTIFICATE----- +MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx +KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd +BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl +YyBHbG9iYWxSb290IENsYXNzIDMwHhcNMDgxMDAxMTAyOTU2WhcNMzMxMDAxMjM1 +OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy +aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50 +ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDMwggEiMA0G +CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9dZPwYiJvJK7genasfb3ZJNW4t/zN +8ELg63iIVl6bmlQdTQyK9tPPcPRStdiTBONGhnFBSivwKixVA9ZIw+A5OO3yXDw/ +RLyTPWGrTs0NvvAgJ1gORH8EGoel15YUNpDQSXuhdfsaa3Ox+M6pCSzyU9XDFES4 +hqX2iys52qMzVNn6chr3IhUciJFrf2blw2qAsCTz34ZFiP0Zf3WHHx+xGwpzJFu5 +ZeAsVMhg02YXP+HMVDNzkQI6pn97djmiH5a2OK61yJN0HZ65tOVgnS9W0eDrXltM +EnAMbEQgqxHY9Bn20pxSN+f6tsIxO0rUFJmtxxr1XV/6B7h8DR/Wgx6zAgMBAAGj +QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS1 +A/d2O2GCahKqGFPrAyGUv/7OyjANBgkqhkiG9w0BAQsFAAOCAQEAVj3vlNW92nOy +WL6ukK2YJ5f+AbGwUgC4TeQbIXQbfsDuXmkqJa9c1h3a0nnJ85cp4IaH3gRZD/FZ 
+1GSFS5mvJQQeyUapl96Cshtwn5z2r3Ex3XsFpSzTucpH9sry9uetuUg/vBa3wW30 +6gmv7PO15wWeph6KU1HWk4HMdJP2udqmJQV0eVp+QD6CSyYRMG7hP0HHRwA11fXT +91Q+gT3aSWqas+8QPebrb9HIIkfLzM8BMZLZGOMivgkeGj5asuRrDFR6fUNOuIml +e9eiPZaGzPImNC1qkp2aGtAw4l1OBLBfiyB+d8E9lYLRRpo7PHi4b6HQDWSieB4p +TpPDpFQUWw== +-----END CERTIFICATE----- + +# Issuer: CN=D-TRUST Root Class 3 CA 2 2009 O=D-Trust GmbH +# Subject: CN=D-TRUST Root Class 3 CA 2 2009 O=D-Trust GmbH +# Label: "D-TRUST Root Class 3 CA 2 2009" +# Serial: 623603 +# MD5 Fingerprint: cd:e0:25:69:8d:47:ac:9c:89:35:90:f7:fd:51:3d:2f +# SHA1 Fingerprint: 58:e8:ab:b0:36:15:33:fb:80:f7:9b:1b:6d:29:d3:ff:8d:5f:00:f0 +# SHA256 Fingerprint: 49:e7:a4:42:ac:f0:ea:62:87:05:00:54:b5:25:64:b6:50:e4:f4:9e:42:e3:48:d6:aa:38:e0:39:e9:57:b1:c1 +-----BEGIN CERTIFICATE----- +MIIEMzCCAxugAwIBAgIDCYPzMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNVBAYTAkRF +MRUwEwYDVQQKDAxELVRydXN0IEdtYkgxJzAlBgNVBAMMHkQtVFJVU1QgUm9vdCBD +bGFzcyAzIENBIDIgMjAwOTAeFw0wOTExMDUwODM1NThaFw0yOTExMDUwODM1NTha +ME0xCzAJBgNVBAYTAkRFMRUwEwYDVQQKDAxELVRydXN0IEdtYkgxJzAlBgNVBAMM +HkQtVFJVU1QgUm9vdCBDbGFzcyAzIENBIDIgMjAwOTCCASIwDQYJKoZIhvcNAQEB +BQADggEPADCCAQoCggEBANOySs96R+91myP6Oi/WUEWJNTrGa9v+2wBoqOADER03 +UAifTUpolDWzU9GUY6cgVq/eUXjsKj3zSEhQPgrfRlWLJ23DEE0NkVJD2IfgXU42 +tSHKXzlABF9bfsyjxiupQB7ZNoTWSPOSHjRGICTBpFGOShrvUD9pXRl/RcPHAY9R +ySPocq60vFYJfxLLHLGvKZAKyVXMD9O0Gu1HNVpK7ZxzBCHQqr0ME7UAyiZsxGsM +lFqVlNpQmvH/pStmMaTJOKDfHR+4CS7zp+hnUquVH+BGPtikw8paxTGA6Eian5Rp +/hnd2HN8gcqW3o7tszIFZYQ05ub9VxC1X3a/L7AQDcUCAwEAAaOCARowggEWMA8G +A1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFP3aFMSfMN4hvR5COfyrYyNJ4PGEMA4G +A1UdDwEB/wQEAwIBBjCB0wYDVR0fBIHLMIHIMIGAoH6gfIZ6bGRhcDovL2RpcmVj +dG9yeS5kLXRydXN0Lm5ldC9DTj1ELVRSVVNUJTIwUm9vdCUyMENsYXNzJTIwMyUy +MENBJTIwMiUyMDIwMDksTz1ELVRydXN0JTIwR21iSCxDPURFP2NlcnRpZmljYXRl +cmV2b2NhdGlvbmxpc3QwQ6BBoD+GPWh0dHA6Ly93d3cuZC10cnVzdC5uZXQvY3Js +L2QtdHJ1c3Rfcm9vdF9jbGFzc18zX2NhXzJfMjAwOS5jcmwwDQYJKoZIhvcNAQEL +BQADggEBAH+X2zDI36ScfSF6gHDOFBJpiBSVYEQBrLLpME+bUMJm2H6NMLVwMeni +acfzcNsgFYbQDfC+rAF1hM5+n02/t2A7nPPKHeJeaNijnZflQGDSNiH+0LS4F9p0 +o3/U37CYAqxva2ssJSRyoWXuJVrl5jLn8t+rSfrzkGkj2wTZ51xY/GXUl77M/C4K +zCUqNQT4YJEVdT1B/yMfGchs64JTBKbkTCJNjYy6zltz7GRUUG3RnFX7acM2w4y8 +PIWmawomDeCTmGCufsYkl4phX5GOZpIJhzbNi5stPvZR1FDUWSi9g/LMKHtThm3Y +Johw1+qRzT65ysCQblrGXnRl11z+o+I= +-----END CERTIFICATE----- + +# Issuer: CN=D-TRUST Root Class 3 CA 2 EV 2009 O=D-Trust GmbH +# Subject: CN=D-TRUST Root Class 3 CA 2 EV 2009 O=D-Trust GmbH +# Label: "D-TRUST Root Class 3 CA 2 EV 2009" +# Serial: 623604 +# MD5 Fingerprint: aa:c6:43:2c:5e:2d:cd:c4:34:c0:50:4f:11:02:4f:b6 +# SHA1 Fingerprint: 96:c9:1b:0b:95:b4:10:98:42:fa:d0:d8:22:79:fe:60:fa:b9:16:83 +# SHA256 Fingerprint: ee:c5:49:6b:98:8c:e9:86:25:b9:34:09:2e:ec:29:08:be:d0:b0:f3:16:c2:d4:73:0c:84:ea:f1:f3:d3:48:81 +-----BEGIN CERTIFICATE----- +MIIEQzCCAyugAwIBAgIDCYP0MA0GCSqGSIb3DQEBCwUAMFAxCzAJBgNVBAYTAkRF +MRUwEwYDVQQKDAxELVRydXN0IEdtYkgxKjAoBgNVBAMMIUQtVFJVU1QgUm9vdCBD +bGFzcyAzIENBIDIgRVYgMjAwOTAeFw0wOTExMDUwODUwNDZaFw0yOTExMDUwODUw +NDZaMFAxCzAJBgNVBAYTAkRFMRUwEwYDVQQKDAxELVRydXN0IEdtYkgxKjAoBgNV +BAMMIUQtVFJVU1QgUm9vdCBDbGFzcyAzIENBIDIgRVYgMjAwOTCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBAJnxhDRwui+3MKCOvXwEz75ivJn9gpfSegpn +ljgJ9hBOlSJzmY3aFS3nBfwZcyK3jpgAvDw9rKFs+9Z5JUut8Mxk2og+KbgPCdM0 +3TP1YtHhzRnp7hhPTFiu4h7WDFsVWtg6uMQYZB7jM7K1iXdODL/ZlGsTl28So/6Z +qQTMFexgaDbtCHu39b+T7WYxg4zGcTSHThfqr4uRjRxWQa4iN1438h3Z0S0NL2lR +p75mpoo6Kr3HGrHhFPC+Oh25z1uxav60sUYgovseO3Dvk5h9jHOW8sXvhXCtKSb8 +HgQ+HKDYD8tSg2J87otTlZCpV6LqYQXY+U3EJ/pure3511H3a6UCAwEAAaOCASQw 
+ggEgMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFNOUikxiEyoZLsyvcop9Ntea
+HNxnMA4GA1UdDwEB/wQEAwIBBjCB3QYDVR0fBIHVMIHSMIGHoIGEoIGBhn9sZGFw
+Oi8vZGlyZWN0b3J5LmQtdHJ1c3QubmV0L0NOPUQtVFJVU1QlMjBSb290JTIwQ2xh
+c3MlMjAzJTIwQ0ElMjAyJTIwRVYlMjAyMDA5LE89RC1UcnVzdCUyMEdtYkgsQz1E
+RT9jZXJ0aWZpY2F0ZXJldm9jYXRpb25saXN0MEagRKBChkBodHRwOi8vd3d3LmQt
+dHJ1c3QubmV0L2NybC9kLXRydXN0X3Jvb3RfY2xhc3NfM19jYV8yX2V2XzIwMDku
+Y3JsMA0GCSqGSIb3DQEBCwUAA4IBAQA07XtaPKSUiO8aEXUHL7P+PPoeUSbrh/Yp
+3uDx1MYkCenBz1UbtDDZzhr+BlGmFaQt77JLvyAoJUnRpjZ3NOhk31KxEcdzes05
+nsKtjHEh8lprr988TlWvsoRlFIm5d8sqMb7Po23Pb0iUMkZv53GMoKaEGTcH8gNF
+CSuGdXzfX2lXANtu2KZyIktQ1HWYVt+3GP9DQ1CuekR78HlR10M9p9OB0/DJT7na
+xpeG0ILD5EJt/rDiZE4OJudANCa1CInXCGNjOCd1HjPqbqjdn5lPdE2BiYBL3ZqX
+KVwvvoFBuYz/6n1gBp7N1z3TLqMVvKjmJuVvw9y4AyHqnxbxLFS1
+-----END CERTIFICATE-----
+
+# Issuer: CN=CA Disig Root R2 O=Disig a.s.
+# Subject: CN=CA Disig Root R2 O=Disig a.s.
+# Label: "CA Disig Root R2"
+# Serial: 10572350602393338211
+# MD5 Fingerprint: 26:01:fb:d8:27:a7:17:9a:45:54:38:1a:43:01:3b:03
+# SHA1 Fingerprint: b5:61:eb:ea:a4:de:e4:25:4b:69:1a:98:a5:57:47:c2:34:c7:d9:71
+# SHA256 Fingerprint: e2:3d:4a:03:6d:7b:70:e9:f5:95:b1:42:20:79:d2:b9:1e:df:bb:1f:b6:51:a0:63:3e:aa:8a:9d:c5:f8:07:03
+-----BEGIN CERTIFICATE-----
+MIIFaTCCA1GgAwIBAgIJAJK4iNuwisFjMA0GCSqGSIb3DQEBCwUAMFIxCzAJBgNV
+BAYTAlNLMRMwEQYDVQQHEwpCcmF0aXNsYXZhMRMwEQYDVQQKEwpEaXNpZyBhLnMu
+MRkwFwYDVQQDExBDQSBEaXNpZyBSb290IFIyMB4XDTEyMDcxOTA5MTUzMFoXDTQy
+MDcxOTA5MTUzMFowUjELMAkGA1UEBhMCU0sxEzARBgNVBAcTCkJyYXRpc2xhdmEx
+EzARBgNVBAoTCkRpc2lnIGEucy4xGTAXBgNVBAMTEENBIERpc2lnIFJvb3QgUjIw
+ggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCio8QACdaFXS1tFPbCw3Oe
+NcJxVX6B+6tGUODBfEl45qt5WDza/3wcn9iXAng+a0EE6UG9vgMsRfYvZNSrXaNH
+PWSb6WiaxswbP7q+sos0Ai6YVRn8jG+qX9pMzk0DIaPY0jSTVpbLTAwAFjxfGs3I
+x2ymrdMxp7zo5eFm1tL7A7RBZckQrg4FY8aAamkw/dLukO8NJ9+flXP04SXabBbe
+QTg06ov80egEFGEtQX6sx3dOy1FU+16SGBsEWmjGycT6txOgmLcRK7fWV8x8nhfR
+yyX+hk4kLlYMeE2eARKmK6cBZW58Yh2EhN/qwGu1pSqVg8NTEQxzHQuyRpDRQjrO
+QG6Vrf/GlK1ul4SOfW+eioANSW1z4nuSHsPzwfPrLgVv2RvPN3YEyLRa5Beny912
+H9AZdugsBbPWnDTYltxhh5EF5EQIM8HauQhl1K6yNg3ruji6DOWbnuuNZt2Zz9aJ
+QfYEkoopKW1rOhzndX0CcQ7zwOe9yxndnWCywmZgtrEE7snmhrmaZkCo5xHtgUUD
+i/ZnWejBBhG93c+AAk9lQHhcR1DIm+YfgXvkRKhbhZri3lrVx/k6RGZL5DJUfORs
+nLMOPReisjQS1n6yqEm70XooQL6iFh/f5DcfEXP7kAplQ6INfPgGAVUzfbANuPT1
+rqVCV3w2EYx7XsQDnYx5nQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1Ud
+DwEB/wQEAwIBBjAdBgNVHQ4EFgQUtZn4r7CU9eMg1gqtzk5WpC5uQu0wDQYJKoZI
+hvcNAQELBQADggIBACYGXnDnZTPIgm7ZnBc6G3pmsgH2eDtpXi/q/075KMOYKmFM
+tCQSin1tERT3nLXK5ryeJ45MGcipvXrA1zYObYVybqjGom32+nNjf7xueQgcnYqf
+GopTpti72TVVsRHFqQOzVju5hJMiXn7B9hJSi+osZ7z+Nkz1uM/Rs0mSO9MpDpkb
+lvdhuDvEK7Z4bLQjb/D907JedR+Zlais9trhxTF7+9FGs9K8Z7RiVLoJ92Owk6Ka
++elSLotgEqv89WBW7xBci8QaQtyDW2QOy7W81k/BfDxujRNt+3vrMNDcTa/F1bal
+TFtxyegxvug4BkihGuLq0t4SOVga/4AOgnXmt8kHbA7v/zjxmHHEt38OFdAlab0i
+nSvtBfZGR6ztwPDUO+Ls7pZbkBNOHlY667DvlruWIxG68kOGdGSVyCh13x01utI3
+gzhTODY7z2zp+WsO0PsE6E9312UBeIYMej4hYvF/Y3EMyZ9E26gnonW+boE+18Dr
+G5gPcFw0sorMwIUY6256s/daoQe/qUKS82Ail+QUoQebTnbAjn39pCXHR+3/H3Os
+zMOl6W8KjptlwlCFtaOgUxLMVYdh84GuEEZhvUQhuMI9dM9+JDX6HAcOmz0iyu8x
+L4ysEr3vQCj8KWefshNPZiTEUxnpHikV7+ZtsH8tZ/3zbBt1RqPlShfppNcL
+-----END CERTIFICATE-----
+
+# Issuer: CN=ACCVRAIZ1 O=ACCV OU=PKIACCV
+# Subject: CN=ACCVRAIZ1 O=ACCV OU=PKIACCV
+# Label: "ACCVRAIZ1"
+# Serial: 6828503384748696800
+# MD5 Fingerprint: d0:a0:5a:ee:05:b6:09:94:21:a1:7d:f1:b2:29:82:02
+# SHA1 Fingerprint: 93:05:7a:88:15:c6:4f:ce:88:2f:fa:91:16:52:28:78:bc:53:64:17
+# SHA256 Fingerprint: 9a:6e:c0:12:e1:a7:da:9d:be:34:19:4d:47:8a:d7:c0:db:18:22:fb:07:1d:f1:29:81:49:6e:d1:04:38:41:13
+-----BEGIN CERTIFICATE-----
+MIIH0zCCBbugAwIBAgIIXsO3pkN/pOAwDQYJKoZIhvcNAQEFBQAwQjESMBAGA1UE
+AwwJQUNDVlJBSVoxMRAwDgYDVQQLDAdQS0lBQ0NWMQ0wCwYDVQQKDARBQ0NWMQsw
+CQYDVQQGEwJFUzAeFw0xMTA1MDUwOTM3MzdaFw0zMDEyMzEwOTM3MzdaMEIxEjAQ
+BgNVBAMMCUFDQ1ZSQUlaMTEQMA4GA1UECwwHUEtJQUNDVjENMAsGA1UECgwEQUND
+VjELMAkGA1UEBhMCRVMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCb
+qau/YUqXry+XZpp0X9DZlv3P4uRm7x8fRzPCRKPfmt4ftVTdFXxpNRFvu8gMjmoY
+HtiP2Ra8EEg2XPBjs5BaXCQ316PWywlxufEBcoSwfdtNgM3802/J+Nq2DoLSRYWo
+G2ioPej0RGy9ocLLA76MPhMAhN9KSMDjIgro6TenGEyxCQ0jVn8ETdkXhBilyNpA
+lHPrzg5XPAOBOp0KoVdDaaxXbXmQeOW1tDvYvEyNKKGno6e6Ak4l0Squ7a4DIrhr
+IA8wKFSVf+DuzgpmndFALW4ir50awQUZ0m/A8p/4e7MCQvtQqR0tkw8jq8bBD5L/
+0KIV9VMJcRz/RROE5iZe+OCIHAr8Fraocwa48GOEAqDGWuzndN9wrqODJerWx5eH
+k6fGioozl2A3ED6XPm4pFdahD9GILBKfb6qkxkLrQaLjlUPTAYVtjrs78yM2x/47
+4KElB0iryYl0/wiPgL/AlmXz7uxLaL2diMMxs0Dx6M/2OLuc5NF/1OVYm3z61PMO
+m3WR5LpSLhl+0fXNWhn8ugb2+1KoS5kE3fj5tItQo05iifCHJPqDQsGH+tUtKSpa
+cXpkatcnYGMN285J9Y0fkIkyF/hzQ7jSWpOGYdbhdQrqeWZ2iE9x6wQl1gpaepPl
+uUsXQA+xtrn13k/c4LOsOxFwYIRKQ26ZIMApcQrAZQIDAQABo4ICyzCCAscwfQYI
+KwYBBQUHAQEEcTBvMEwGCCsGAQUFBzAChkBodHRwOi8vd3d3LmFjY3YuZXMvZmls
+ZWFkbWluL0FyY2hpdm9zL2NlcnRpZmljYWRvcy9yYWl6YWNjdjEuY3J0MB8GCCsG
+AQUFBzABhhNodHRwOi8vb2NzcC5hY2N2LmVzMB0GA1UdDgQWBBTSh7Tj3zcnk1X2
+VuqB5TbMjB4/vTAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFNKHtOPfNyeT
+VfZW6oHlNsyMHj+9MIIBcwYDVR0gBIIBajCCAWYwggFiBgRVHSAAMIIBWDCCASIG
+CCsGAQUFBwICMIIBFB6CARAAQQB1AHQAbwByAGkAZABhAGQAIABkAGUAIABDAGUA
+cgB0AGkAZgBpAGMAYQBjAGkA8wBuACAAUgBhAO0AegAgAGQAZQAgAGwAYQAgAEEA
+QwBDAFYAIAAoAEEAZwBlAG4AYwBpAGEAIABkAGUAIABUAGUAYwBuAG8AbABvAGcA
+7QBhACAAeQAgAEMAZQByAHQAaQBmAGkAYwBhAGMAaQDzAG4AIABFAGwAZQBjAHQA
+cgDzAG4AaQBjAGEALAAgAEMASQBGACAAUQA0ADYAMAAxADEANQA2AEUAKQAuACAA
+QwBQAFMAIABlAG4AIABoAHQAdABwADoALwAvAHcAdwB3AC4AYQBjAGMAdgAuAGUA
+czAwBggrBgEFBQcCARYkaHR0cDovL3d3dy5hY2N2LmVzL2xlZ2lzbGFjaW9uX2Mu
+aHRtMFUGA1UdHwROMEwwSqBIoEaGRGh0dHA6Ly93d3cuYWNjdi5lcy9maWxlYWRt
+aW4vQXJjaGl2b3MvY2VydGlmaWNhZG9zL3JhaXphY2N2MV9kZXIuY3JsMA4GA1Ud
+DwEB/wQEAwIBBjAXBgNVHREEEDAOgQxhY2N2QGFjY3YuZXMwDQYJKoZIhvcNAQEF
+BQADggIBAJcxAp/n/UNnSEQU5CmH7UwoZtCPNdpNYbdKl02125DgBS4OxnnQ8pdp
+D70ER9m+27Up2pvZrqmZ1dM8MJP1jaGo/AaNRPTKFpV8M9xii6g3+CfYCS0b78gU
+JyCpZET/LtZ1qmxNYEAZSUNUY9rizLpm5U9EelvZaoErQNV/+QEnWCzI7UiRfD+m
+AM/EKXMRNt6GGT6d7hmKG9Ww7Y49nCrADdg9ZuM8Db3VlFzi4qc1GwQA9j9ajepD
+vV+JHanBsMyZ4k0ACtrJJ1vnE5Bc5PUzolVt3OAJTS+xJlsndQAJxGJ3KQhfnlms
+tn6tn1QwIgPBHnFk/vk4CpYY3QIUrCPLBhwepH2NDd4nQeit2hW3sCPdK6jT2iWH
+7ehVRE2I9DZ+hJp4rPcOVkkO1jMl1oRQQmwgEh0q1b688nCBpHBgvgW1m54ERL5h
+I6zppSSMEYCUWqKiuUnSwdzRp+0xESyeGabu4VXhwOrPDYTkF7eifKXeVSUG7szA
+h1xA2syVP1XgNce4hL60Xc16gwFy7ofmXx2utYXGJt/mwZrpHgJHnyqobalbz+xF
+d3+YJ5oyXSrjhO7FmGYvliAd3djDJ9ew+f7Zfc3Qn48LFFhRny+Lwzgt3uiP1o2H
+pPVWQxaZLPSkVrQ0uGE3ycJYgBugl6H8WY3pEfbRD0tVNEYqi4Y7
+-----END CERTIFICATE-----
+
+# Issuer: CN=TWCA Global Root CA O=TAIWAN-CA OU=Root CA
+# Subject: CN=TWCA Global Root CA O=TAIWAN-CA OU=Root CA
+# Label: "TWCA Global Root CA"
+# Serial: 3262
+# MD5 Fingerprint: f9:03:7e:cf:e6:9e:3c:73:7a:2a:90:07:69:ff:2b:96
+# SHA1 Fingerprint: 9c:bb:48:53:f6:a4:f6:d3:52:a4:e8:32:52:55:60:13:f5:ad:af:65
+# SHA256 Fingerprint: 59:76:90:07:f7:68:5d:0f:cd:50:87:2f:9f:95:d5:75:5a:5b:2b:45:7d:81:f3:69:2b:61:0a:98:67:2f:0e:1b
+-----BEGIN CERTIFICATE-----
+MIIFQTCCAymgAwIBAgICDL4wDQYJKoZIhvcNAQELBQAwUTELMAkGA1UEBhMCVFcx
+EjAQBgNVBAoTCVRBSVdBTi1DQTEQMA4GA1UECxMHUm9vdCBDQTEcMBoGA1UEAxMT
+VFdDQSBHbG9iYWwgUm9vdCBDQTAeFw0xMjA2MjcwNjI4MzNaFw0zMDEyMzExNTU5 +NTlaMFExCzAJBgNVBAYTAlRXMRIwEAYDVQQKEwlUQUlXQU4tQ0ExEDAOBgNVBAsT +B1Jvb3QgQ0ExHDAaBgNVBAMTE1RXQ0EgR2xvYmFsIFJvb3QgQ0EwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCwBdvI64zEbooh745NnHEKH1Jw7W2CnJfF +10xORUnLQEK1EjRsGcJ0pDFfhQKX7EMzClPSnIyOt7h52yvVavKOZsTuKwEHktSz +0ALfUPZVr2YOy+BHYC8rMjk1Ujoog/h7FsYYuGLWRyWRzvAZEk2tY/XTP3VfKfCh +MBwqoJimFb3u/Rk28OKRQ4/6ytYQJ0lM793B8YVwm8rqqFpD/G2Gb3PpN0Wp8DbH +zIh1HrtsBv+baz4X7GGqcXzGHaL3SekVtTzWoWH1EfcFbx39Eb7QMAfCKbAJTibc +46KokWofwpFFiFzlmLhxpRUZyXx1EcxwdE8tmx2RRP1WKKD+u4ZqyPpcC1jcxkt2 +yKsi2XMPpfRaAok/T54igu6idFMqPVMnaR1sjjIsZAAmY2E2TqNGtz99sy2sbZCi +laLOz9qC5wc0GZbpuCGqKX6mOL6OKUohZnkfs8O1CWfe1tQHRvMq2uYiN2DLgbYP +oA/pyJV/v1WRBXrPPRXAb94JlAGD1zQbzECl8LibZ9WYkTunhHiVJqRaCPgrdLQA +BDzfuBSO6N+pjWxnkjMdwLfS7JLIvgm/LCkFbwJrnu+8vyq8W8BQj0FwcYeyTbcE +qYSjMq+u7msXi7Kx/mzhkIyIqJdIzshNy/MGz19qCkKxHh53L46g5pIOBvwFItIm +4TFRfTLcDwIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB +/zANBgkqhkiG9w0BAQsFAAOCAgEAXzSBdu+WHdXltdkCY4QWwa6gcFGn90xHNcgL +1yg9iXHZqjNB6hQbbCEAwGxCGX6faVsgQt+i0trEfJdLjbDorMjupWkEmQqSpqsn +LhpNgb+E1HAerUf+/UqdM+DyucRFCCEK2mlpc3INvjT+lIutwx4116KD7+U4x6WF +H6vPNOw/KP4M8VeGTslV9xzU2KV9Bnpv1d8Q34FOIWWxtuEXeZVFBs5fzNxGiWNo +RI2T9GRwoD2dKAXDOXC4Ynsg/eTb6QihuJ49CcdP+yz4k3ZB3lLg4VfSnQO8d57+ +nile98FRYB/e2guyLXW3Q0iT5/Z5xoRdgFlglPx4mI88k1HtQJAH32RjJMtOcQWh +15QaiDLxInQirqWm2BJpTGCjAu4r7NRjkgtevi92a6O2JryPA9gK8kxkRr05YuWW +6zRjESjMlfGt7+/cgFhI6Uu46mWs6fyAtbXIRfmswZ/ZuepiiI7E8UuDEq3mi4TW +nsLrgxifarsbJGAzcMzs9zLzXNl5fe+epP7JI8Mk7hWSsT2RTyaGvWZzJBPqpK5j +wa19hAM8EHiGG3njxPPyBJUgriOCxLM6AGK/5jYk4Ve6xx6QddVfP5VhK8E7zeWz +aGHQRiapIVJpLesux+t3zqY6tQMzT3bR51xUAV3LePTJDL/PEo4XLSNolOer/qmy +KwbQBM0= +-----END CERTIFICATE----- + +# Issuer: CN=TeliaSonera Root CA v1 O=TeliaSonera +# Subject: CN=TeliaSonera Root CA v1 O=TeliaSonera +# Label: "TeliaSonera Root CA v1" +# Serial: 199041966741090107964904287217786801558 +# MD5 Fingerprint: 37:41:49:1b:18:56:9a:26:f5:ad:c2:66:fb:40:a5:4c +# SHA1 Fingerprint: 43:13:bb:96:f1:d5:86:9b:c1:4e:6a:92:f6:cf:f6:34:69:87:82:37 +# SHA256 Fingerprint: dd:69:36:fe:21:f8:f0:77:c1:23:a1:a5:21:c1:22:24:f7:22:55:b7:3e:03:a7:26:06:93:e8:a2:4b:0f:a3:89 +-----BEGIN CERTIFICATE----- +MIIFODCCAyCgAwIBAgIRAJW+FqD3LkbxezmCcvqLzZYwDQYJKoZIhvcNAQEFBQAw +NzEUMBIGA1UECgwLVGVsaWFTb25lcmExHzAdBgNVBAMMFlRlbGlhU29uZXJhIFJv +b3QgQ0EgdjEwHhcNMDcxMDE4MTIwMDUwWhcNMzIxMDE4MTIwMDUwWjA3MRQwEgYD +VQQKDAtUZWxpYVNvbmVyYTEfMB0GA1UEAwwWVGVsaWFTb25lcmEgUm9vdCBDQSB2 +MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMK+6yfwIaPzaSZVfp3F +VRaRXP3vIb9TgHot0pGMYzHw7CTww6XScnwQbfQ3t+XmfHnqjLWCi65ItqwA3GV1 +7CpNX8GH9SBlK4GoRz6JI5UwFpB/6FcHSOcZrr9FZ7E3GwYq/t75rH2D+1665I+X +Z75Ljo1kB1c4VWk0Nj0TSO9P4tNmHqTPGrdeNjPUtAa9GAH9d4RQAEX1jF3oI7x+ +/jXh7VB7qTCNGdMJjmhnXb88lxhTuylixcpecsHHltTbLaC0H2kD7OriUPEMPPCs +81Mt8Bz17Ww5OXOAFshSsCPN4D7c3TxHoLs1iuKYaIu+5b9y7tL6pe0S7fyYGKkm +dtwoSxAgHNN/Fnct7W+A90m7UwW7XWjH1Mh1Fj+JWov3F0fUTPHSiXk+TT2YqGHe +Oh7S+F4D4MHJHIzTjU3TlTazN19jY5szFPAtJmtTfImMMsJu7D0hADnJoWjiUIMu +sDor8zagrC/kb2HCUQk5PotTubtn2txTuXZZNp1D5SDgPTJghSJRt8czu90VL6R4 +pgd7gUY2BIbdeTXHlSw7sKMXNeVzH7RcWe/a6hBle3rQf5+ztCo3O3CLm1u5K7fs +slESl1MpWtTwEhDcTwK7EpIvYtQ/aUN8Ddb8WHUBiJ1YFkveupD/RwGJBmr2X7KQ +arMCpgKIv7NHfirZ1fpoeDVNAgMBAAGjPzA9MA8GA1UdEwEB/wQFMAMBAf8wCwYD +VR0PBAQDAgEGMB0GA1UdDgQWBBTwj1k4ALP1j5qWDNXr+nuqF+gTEjANBgkqhkiG +9w0BAQUFAAOCAgEAvuRcYk4k9AwI//DTDGjkk0kiP0Qnb7tt3oNmzqjMDfz1mgbl +dxSR651Be5kqhOX//CHBXfDkH1e3damhXwIm/9fH907eT/j3HEbAek9ALCI18Bmx 
+0GtnLLCo4MBANzX2hFxc469CeP6nyQ1Q6g2EdvZR74NTxnr/DlZJLo961gzmJ1Tj +TQpgcmLNkQfWpb/ImWvtxBnmq0wROMVvMeJuScg/doAmAyYp4Db29iBT4xdwNBed +Y2gea+zDTYa4EzAvXUYNR0PVG6pZDrlcjQZIrXSHX8f8MVRBE+LHIQ6e4B4N4cB7 +Q4WQxYpYxmUKeFfyxiMPAdkgS94P+5KFdSpcc41teyWRyu5FrgZLAMzTsVlQ2jqI +OylDRl6XK1TOU2+NSueW+r9xDkKLfP0ooNBIytrEgUy7onOTJsjrDNYmiLbAJM+7 +vVvrdX3pCI6GMyx5dwlppYn8s3CQh3aP0yK7Qs69cwsgJirQmz1wHiRszYd2qReW +t88NkvuOGKmYSdGe/mBEciG5Ge3C9THxOUiIkCR1VBatzvT4aRRkOfujuLpwQMcn +HL/EVlP6Y2XQ8xwOFvVrhlhNGNTkDY6lnVuR3HYkUD/GKvvZt5y11ubQ2egZixVx +SK236thZiNSQvxaz2emsWWFUyBy6ysHK4bkgTI86k4mloMy/0/Z1pHWWbVY= +-----END CERTIFICATE----- + +# Issuer: CN=E-Tugra Certification Authority O=E-Tu\u011fra EBG Bili\u015fim Teknolojileri ve Hizmetleri A.\u015e. OU=E-Tugra Sertifikasyon Merkezi +# Subject: CN=E-Tugra Certification Authority O=E-Tu\u011fra EBG Bili\u015fim Teknolojileri ve Hizmetleri A.\u015e. OU=E-Tugra Sertifikasyon Merkezi +# Label: "E-Tugra Certification Authority" +# Serial: 7667447206703254355 +# MD5 Fingerprint: b8:a1:03:63:b0:bd:21:71:70:8a:6f:13:3a:bb:79:49 +# SHA1 Fingerprint: 51:c6:e7:08:49:06:6e:f3:92:d4:5c:a0:0d:6d:a3:62:8f:c3:52:39 +# SHA256 Fingerprint: b0:bf:d5:2b:b0:d7:d9:bd:92:bf:5d:4d:c1:3d:a2:55:c0:2c:54:2f:37:83:65:ea:89:39:11:f5:5e:55:f2:3c +-----BEGIN CERTIFICATE----- +MIIGSzCCBDOgAwIBAgIIamg+nFGby1MwDQYJKoZIhvcNAQELBQAwgbIxCzAJBgNV +BAYTAlRSMQ8wDQYDVQQHDAZBbmthcmExQDA+BgNVBAoMN0UtVHXEn3JhIEVCRyBC +aWxpxZ9pbSBUZWtub2xvamlsZXJpIHZlIEhpem1ldGxlcmkgQS7Fni4xJjAkBgNV +BAsMHUUtVHVncmEgU2VydGlmaWthc3lvbiBNZXJrZXppMSgwJgYDVQQDDB9FLVR1 +Z3JhIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTEzMDMwNTEyMDk0OFoXDTIz +MDMwMzEyMDk0OFowgbIxCzAJBgNVBAYTAlRSMQ8wDQYDVQQHDAZBbmthcmExQDA+ +BgNVBAoMN0UtVHXEn3JhIEVCRyBCaWxpxZ9pbSBUZWtub2xvamlsZXJpIHZlIEhp +em1ldGxlcmkgQS7Fni4xJjAkBgNVBAsMHUUtVHVncmEgU2VydGlmaWthc3lvbiBN +ZXJrZXppMSgwJgYDVQQDDB9FLVR1Z3JhIENlcnRpZmljYXRpb24gQXV0aG9yaXR5 +MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA4vU/kwVRHoViVF56C/UY +B4Oufq9899SKa6VjQzm5S/fDxmSJPZQuVIBSOTkHS0vdhQd2h8y/L5VMzH2nPbxH +D5hw+IyFHnSOkm0bQNGZDbt1bsipa5rAhDGvykPL6ys06I+XawGb1Q5KCKpbknSF +Q9OArqGIW66z6l7LFpp3RMih9lRozt6Plyu6W0ACDGQXwLWTzeHxE2bODHnv0ZEo +q1+gElIwcxmOj+GMB6LDu0rw6h8VqO4lzKRG+Bsi77MOQ7osJLjFLFzUHPhdZL3D +k14opz8n8Y4e0ypQBaNV2cvnOVPAmJ6MVGKLJrD3fY185MaeZkJVgkfnsliNZvcH +fC425lAcP9tDJMW/hkd5s3kc91r0E+xs+D/iWR+V7kI+ua2oMoVJl0b+SzGPWsut +dEcf6ZG33ygEIqDUD13ieU/qbIWGvaimzuT6w+Gzrt48Ue7LE3wBf4QOXVGUnhMM +ti6lTPk5cDZvlsouDERVxcr6XQKj39ZkjFqzAQqptQpHF//vkUAqjqFGOjGY5RH8 +zLtJVor8udBhmm9lbObDyz51Sf6Pp+KJxWfXnUYTTjF2OySznhFlhqt/7x3U+Lzn +rFpct1pHXFXOVbQicVtbC/DP3KBhZOqp12gKY6fgDT+gr9Oq0n7vUaDmUStVkhUX +U8u3Zg5mTPj5dUyQ5xJwx0UCAwEAAaNjMGEwHQYDVR0OBBYEFC7j27JJ0JxUeVz6 +Jyr+zE7S6E5UMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAULuPbsknQnFR5 +XPonKv7MTtLoTlQwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3DQEBCwUAA4ICAQAF +Nzr0TbdF4kV1JI+2d1LoHNgQk2Xz8lkGpD4eKexd0dCrfOAKkEh47U6YA5n+KGCR +HTAduGN8qOY1tfrTYXbm1gdLymmasoR6d5NFFxWfJNCYExL/u6Au/U5Mh/jOXKqY +GwXgAEZKgoClM4so3O0409/lPun++1ndYYRP0lSWE2ETPo+Aab6TR7U1Q9Jauz1c +77NCR807VRMGsAnb/WP2OogKmW9+4c4bU2pEZiNRCHu8W1Ki/QY3OEBhj0qWuJA3 ++GbHeJAAFS6LrVE1Uweoa2iu+U48BybNCAVwzDk/dr2l02cmAYamU9JgO3xDf1WK +vJUawSg5TB9D0pH0clmKuVb8P7Sd2nCcdlqMQ1DujjByTd//SffGqWfZbawCEeI6 +FiWnWAjLb1NBnEg4R2gz0dfHj9R0IdTDBZB6/86WiLEVKV0jq9BgoRJP3vQXzTLl +yb/IQ639Lo7xr+L0mPoSHyDYwKcMhcWQ9DstliaxLL5Mq+ux0orJ23gTDx4JnW2P +AJ8C2sH6H3p6CcRK5ogql5+Ji/03X186zjhZhkuvcQu02PJwT58yE+Owp1fl2tpD +y4Q08ijE6m30Ku/Ba3ba+367hTzSU8JNvnHhRdH9I2cNE3X7z2VnIp2usAnRCf8d +NL/+I5c30jn6PQ0GC7TbO6Orb1wdtn7os4I07QZcJA== +-----END CERTIFICATE----- + +# Issuer: 
CN=T-TeleSec GlobalRoot Class 2 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Subject: CN=T-TeleSec GlobalRoot Class 2 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Label: "T-TeleSec GlobalRoot Class 2" +# Serial: 1 +# MD5 Fingerprint: 2b:9b:9e:e4:7b:6c:1f:00:72:1a:cc:c1:77:79:df:6a +# SHA1 Fingerprint: 59:0d:2d:7d:88:4f:40:2e:61:7e:a5:62:32:17:65:cf:17:d8:94:e9 +# SHA256 Fingerprint: 91:e2:f5:78:8d:58:10:eb:a7:ba:58:73:7d:e1:54:8a:8e:ca:cd:01:45:98:bc:0b:14:3e:04:1b:17:05:25:52 +-----BEGIN CERTIFICATE----- +MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx +KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd +BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl +YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1 +OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy +aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50 +ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G +CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd +AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC +FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi +1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq +jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ +wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj +QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/ +WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy +NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC +uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw +IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6 +g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN +9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP +BSeOE6Fuwg== +-----END CERTIFICATE----- + +# Issuer: CN=Atos TrustedRoot 2011 O=Atos +# Subject: CN=Atos TrustedRoot 2011 O=Atos +# Label: "Atos TrustedRoot 2011" +# Serial: 6643877497813316402 +# MD5 Fingerprint: ae:b9:c4:32:4b:ac:7f:5d:66:cc:77:94:bb:2a:77:56 +# SHA1 Fingerprint: 2b:b1:f5:3e:55:0c:1d:c5:f1:d4:e6:b7:6a:46:4b:55:06:02:ac:21 +# SHA256 Fingerprint: f3:56:be:a2:44:b7:a9:1e:b3:5d:53:ca:9a:d7:86:4a:ce:01:8e:2d:35:d5:f8:f9:6d:df:68:a6:f4:1a:a4:74 +-----BEGIN CERTIFICATE----- +MIIDdzCCAl+gAwIBAgIIXDPLYixfszIwDQYJKoZIhvcNAQELBQAwPDEeMBwGA1UE +AwwVQXRvcyBUcnVzdGVkUm9vdCAyMDExMQ0wCwYDVQQKDARBdG9zMQswCQYDVQQG +EwJERTAeFw0xMTA3MDcxNDU4MzBaFw0zMDEyMzEyMzU5NTlaMDwxHjAcBgNVBAMM +FUF0b3MgVHJ1c3RlZFJvb3QgMjAxMTENMAsGA1UECgwEQXRvczELMAkGA1UEBhMC +REUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCVhTuXbyo7LjvPpvMp +Nb7PGKw+qtn4TaA+Gke5vJrf8v7MPkfoepbCJI419KkM/IL9bcFyYie96mvr54rM +VD6QUM+A1JX76LWC1BTFtqlVJVfbsVD2sGBkWXppzwO3bw2+yj5vdHLqqjAqc2K+ +SZFhyBH+DgMq92og3AIVDV4VavzjgsG1xZ1kCWyjWZgHJ8cblithdHFsQ/H3NYkQ +4J7sVaE3IqKHBAUsR320HLliKWYoyrfhk/WklAOZuXCFteZI6o1Q/NnezG8HDt0L +cp2AMBYHlT8oDv3FdU9T1nSatCQujgKRz3bFmx5VdJx4IbHwLfELn8LVlhgf8FQi +eowHAgMBAAGjfTB7MB0GA1UdDgQWBBSnpQaxLKYJYO7Rl+lwrrw7GWzbITAPBgNV +HRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFKelBrEspglg7tGX6XCuvDsZbNshMBgG +A1UdIAQRMA8wDQYLKwYBBAGwLQMEAQEwDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3 +DQEBCwUAA4IBAQAmdzTblEiGKkGdLD4GkGDEjKwLVLgfuXvTBznk+j57sj1O7Z8j +vZfza1zv7v1Apt+hk6EKhqzvINB5Ab149xnYJDE0BAGmuhWawyfc2E8PzBhj/5kP +DpFrdRbhIfzYJsdHt6bPWHJxfrrhTZVHO8mvbaG0weyJ9rQPOLXiZNwlz6bb65pc +maHFCN795trV1lpFDMS3wrUU77QR/w4VtfX128a961qn8FYiqTxlVMYVqL2Gns2D +lmh6cYGJ4Qvh6hEbaAjMaZ7snkGeRDImeuKHCnE96+RapNLbxc3G3mB/ufNPRJLv 
+KrcYPqcZ2Qt9sTdBQrC6YB3y/gkRsPCHe6ed +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 1 G3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 1 G3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 1 G3" +# Serial: 687049649626669250736271037606554624078720034195 +# MD5 Fingerprint: a4:bc:5b:3f:fe:37:9a:fa:64:f0:e2:fa:05:3d:0b:ab +# SHA1 Fingerprint: 1b:8e:ea:57:96:29:1a:c9:39:ea:b8:0a:81:1a:73:73:c0:93:79:67 +# SHA256 Fingerprint: 8a:86:6f:d1:b2:76:b5:7e:57:8e:92:1c:65:82:8a:2b:ed:58:e9:f2:f2:88:05:41:34:b7:f1:f4:bf:c9:cc:74 +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIUeFhfLq0sGUvjNwc1NBMotZbUZZMwDQYJKoZIhvcNAQEL +BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc +BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMSBHMzAeFw0xMjAxMTIxNzI3NDRaFw00 +MjAxMTIxNzI3NDRaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM +aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDEgRzMwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCgvlAQjunybEC0BJyFuTHK3C3kEakEPBtV +wedYMB0ktMPvhd6MLOHBPd+C5k+tR4ds7FtJwUrVu4/sh6x/gpqG7D0DmVIB0jWe +rNrwU8lmPNSsAgHaJNM7qAJGr6Qc4/hzWHa39g6QDbXwz8z6+cZM5cOGMAqNF341 +68Xfuw6cwI2H44g4hWf6Pser4BOcBRiYz5P1sZK0/CPTz9XEJ0ngnjybCKOLXSoh +4Pw5qlPafX7PGglTvF0FBM+hSo+LdoINofjSxxR3W5A2B4GbPgb6Ul5jxaYA/qXp +UhtStZI5cgMJYr2wYBZupt0lwgNm3fME0UDiTouG9G/lg6AnhF4EwfWQvTA9xO+o +abw4m6SkltFi2mnAAZauy8RRNOoMqv8hjlmPSlzkYZqn0ukqeI1RPToV7qJZjqlc +3sX5kCLliEVx3ZGZbHqfPT2YfF72vhZooF6uCyP8Wg+qInYtyaEQHeTTRCOQiJ/G +KubX9ZqzWB4vMIkIG1SitZgj7Ah3HJVdYdHLiZxfokqRmu8hqkkWCKi9YSgxyXSt +hfbZxbGL0eUQMk1fiyA6PEkfM4VZDdvLCXVDaXP7a3F98N/ETH3Goy7IlXnLc6KO +Tk0k+17kBL5yG6YnLUlamXrXXAkgt3+UuU/xDRxeiEIbEbfnkduebPRq34wGmAOt +zCjvpUfzUwIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +BjAdBgNVHQ4EFgQUo5fW816iEOGrRZ88F2Q87gFwnMwwDQYJKoZIhvcNAQELBQAD +ggIBABj6W3X8PnrHX3fHyt/PX8MSxEBd1DKquGrX1RUVRpgjpeaQWxiZTOOtQqOC +MTaIzen7xASWSIsBx40Bz1szBpZGZnQdT+3Btrm0DWHMY37XLneMlhwqI2hrhVd2 +cDMT/uFPpiN3GPoajOi9ZcnPP/TJF9zrx7zABC4tRi9pZsMbj/7sPtPKlL92CiUN +qXsCHKnQO18LwIE6PWThv6ctTr1NxNgpxiIY0MWscgKCP6o6ojoilzHdCGPDdRS5 +YCgtW2jgFqlmgiNR9etT2DGbe+m3nUvriBbP+V04ikkwj+3x6xn0dxoxGE1nVGwv +b2X52z3sIexe9PSLymBlVNFxZPT5pqOBMzYzcfCkeF9OrYMh3jRJjehZrJ3ydlo2 +8hP0r+AJx2EqbPfgna67hkooby7utHnNkDPDs3b69fBsnQGQ+p6Q9pxyz0fawx/k +NSBT8lTR32GDpgLiJTjehTItXnOQUl1CxM49S+H5GYQd1aJQzEH7QRTDvdbJWqNj +ZgKAvQU6O0ec7AAmTPWIUb+oI38YB7AL7YsmoWTTYUrrXJ/es69nA7Mf3W1daWhp +q1467HxpvMc7hU6eFbm0FU/DlXpY18ls6Wy58yljXrQs8C097Vpl4KlbQMJImYFt +nh8GKjwStIsPm6Ik8KaN1nrgS7ZklmOVhMJKzRwuJIczYOXD +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 2 G3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 2 G3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 2 G3" +# Serial: 390156079458959257446133169266079962026824725800 +# MD5 Fingerprint: af:0c:86:6e:bf:40:2d:7f:0b:3e:12:50:ba:12:3d:06 +# SHA1 Fingerprint: 09:3c:61:f3:8b:8b:dc:7d:55:df:75:38:02:05:00:e1:25:f5:c8:36 +# SHA256 Fingerprint: 8f:e4:fb:0a:f9:3a:4d:0d:67:db:0b:eb:b2:3e:37:c7:1b:f3:25:dc:bc:dd:24:0e:a0:4d:af:58:b4:7e:18:40 +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIURFc0JFuBiZs18s64KztbpybwdSgwDQYJKoZIhvcNAQEL +BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc +BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMiBHMzAeFw0xMjAxMTIxODU5MzJaFw00 +MjAxMTIxODU5MzJaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM +aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDIgRzMwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQChriWyARjcV4g/Ruv5r+LrI3HimtFhZiFf +qq8nUeVuGxbULX1QsFN3vXg6YOJkApt8hpvWGo6t/x8Vf9WVHhLL5hSEBMHfNrMW +n4rjyduYNM7YMxcoRvynyfDStNVNCXJJ+fKH46nafaF9a7I6JaltUkSs+L5u+9ym 
+c5GQYaYDFCDy54ejiK2toIz/pgslUiXnFgHVy7g1gQyjO/Dh4fxaXc6AcW34Sas+ +O7q414AB+6XrW7PFXmAqMaCvN+ggOp+oMiwMzAkd056OXbxMmO7FGmh77FOm6RQ1 +o9/NgJ8MSPsc9PG/Srj61YxxSscfrf5BmrODXfKEVu+lV0POKa2Mq1W/xPtbAd0j +IaFYAI7D0GoT7RPjEiuA3GfmlbLNHiJuKvhB1PLKFAeNilUSxmn1uIZoL1NesNKq +IcGY5jDjZ1XHm26sGahVpkUG0CM62+tlXSoREfA7T8pt9DTEceT/AFr2XK4jYIVz +8eQQsSWu1ZK7E8EM4DnatDlXtas1qnIhO4M15zHfeiFuuDIIfR0ykRVKYnLP43eh +vNURG3YBZwjgQQvD6xVu+KQZ2aKrr+InUlYrAoosFCT5v0ICvybIxo/gbjh9Uy3l +7ZizlWNof/k19N+IxWA1ksB8aRxhlRbQ694Lrz4EEEVlWFA4r0jyWbYW8jwNkALG +cC4BrTwV1wIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +BjAdBgNVHQ4EFgQU7edvdlq/YOxJW8ald7tyFnGbxD0wDQYJKoZIhvcNAQELBQAD +ggIBAJHfgD9DCX5xwvfrs4iP4VGyvD11+ShdyLyZm3tdquXK4Qr36LLTn91nMX66 +AarHakE7kNQIXLJgapDwyM4DYvmL7ftuKtwGTTwpD4kWilhMSA/ohGHqPHKmd+RC +roijQ1h5fq7KpVMNqT1wvSAZYaRsOPxDMuHBR//47PERIjKWnML2W2mWeyAMQ0Ga +W/ZZGYjeVYg3UQt4XAoeo0L9x52ID8DyeAIkVJOviYeIyUqAHerQbj5hLja7NQ4n +lv1mNDthcnPxFlxHBlRJAHpYErAK74X9sbgzdWqTHBLmYF5vHX/JHyPLhGGfHoJE ++V+tYlUkmlKY7VHnoX6XOuYvHxHaU4AshZ6rNRDbIl9qxV6XU/IyAgkwo1jwDQHV +csaxfGl7w/U2Rcxhbl5MlMVerugOXou/983g7aEOGzPuVBj+D77vfoRrQ+NwmNtd +dbINWQeFFSM51vHfqSYP1kjHs6Yi9TM3WpVHn3u6GBVv/9YUZINJ0gpnIdsPNWNg +KCLjsZWDzYWm3S8P52dSbrsvhXz1SnPnxT7AvSESBT/8twNJAlvIJebiVDj1eYeM +HVOyToV7BjjHLPj4sHKNJeV3UvQDHEimUF+IIDBu8oJDqz2XhOdT+yHBTw8imoa4 +WSr2Rz0ZiC3oheGe7IUIarFsNMkd7EgrO3jtZsSOeWmD3n+M +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 3 G3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 3 G3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 3 G3" +# Serial: 268090761170461462463995952157327242137089239581 +# MD5 Fingerprint: df:7d:b9:ad:54:6f:68:a1:df:89:57:03:97:43:b0:d7 +# SHA1 Fingerprint: 48:12:bd:92:3c:a8:c4:39:06:e7:30:6d:27:96:e6:a4:cf:22:2e:7d +# SHA256 Fingerprint: 88:ef:81:de:20:2e:b0:18:45:2e:43:f8:64:72:5c:ea:5f:bd:1f:c2:d9:d2:05:73:07:09:c5:d8:b8:69:0f:46 +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIULvWbAiin23r/1aOp7r0DoM8Sah0wDQYJKoZIhvcNAQEL +BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc +BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMyBHMzAeFw0xMjAxMTIyMDI2MzJaFw00 +MjAxMTIyMDI2MzJaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM +aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDMgRzMwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCzyw4QZ47qFJenMioKVjZ/aEzHs286IxSR +/xl/pcqs7rN2nXrpixurazHb+gtTTK/FpRp5PIpM/6zfJd5O2YIyC0TeytuMrKNu +FoM7pmRLMon7FhY4futD4tN0SsJiCnMK3UmzV9KwCoWdcTzeo8vAMvMBOSBDGzXR +U7Ox7sWTaYI+FrUoRqHe6okJ7UO4BUaKhvVZR74bbwEhELn9qdIoyhA5CcoTNs+c +ra1AdHkrAj80//ogaX3T7mH1urPnMNA3I4ZyYUUpSFlob3emLoG+B01vr87ERROR +FHAGjx+f+IdpsQ7vw4kZ6+ocYfx6bIrc1gMLnia6Et3UVDmrJqMz6nWB2i3ND0/k +A9HvFZcba5DFApCTZgIhsUfei5pKgLlVj7WiL8DWM2fafsSntARE60f75li59wzw +eyuxwHApw0BiLTtIadwjPEjrewl5qW3aqDCYz4ByA4imW0aucnl8CAMhZa634Ryl +sSqiMd5mBPfAdOhx3v89WcyWJhKLhZVXGqtrdQtEPREoPHtht+KPZ0/l7DxMYIBp +VzgeAVuNVejH38DMdyM0SXV89pgR6y3e7UEuFAUCf+D+IOs15xGsIs5XPd7JMG0Q +A4XN8f+MFrXBsj6IbGB/kE+V9/YtrQE5BwT6dYB9v0lQ7e/JxHwc64B+27bQ3RP+ +ydOc17KXqQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +BjAdBgNVHQ4EFgQUxhfQvKjqAkPyGwaZXSuQILnXnOQwDQYJKoZIhvcNAQELBQAD +ggIBADRh2Va1EodVTd2jNTFGu6QHcrxfYWLopfsLN7E8trP6KZ1/AvWkyaiTt3px +KGmPc+FSkNrVvjrlt3ZqVoAh313m6Tqe5T72omnHKgqwGEfcIHB9UqM+WXzBusnI +FUBhynLWcKzSt/Ac5IYp8M7vaGPQtSCKFWGafoaYtMnCdvvMujAWzKNhxnQT5Wvv +oxXqA/4Ti2Tk08HS6IT7SdEQTXlm66r99I0xHnAUrdzeZxNMgRVhvLfZkXdxGYFg +u/BYpbWcC/ePIlUnwEsBbTuZDdQdm2NnL9DuDcpmvJRPpq3t/O5jrFc/ZSXPsoaP +0Aj/uHYUbt7lJ+yreLVTubY/6CD50qi+YUbKh4yE8/nxoGibIh6BJpsQBJFxwAYf +3KDTuVan45gtf4Od34wrnDKOMpTwATwiKp9Dwi7DmDkHOHv8XgBCH/MyJnmDhPbl 
+8MFREsALHgQjDFSlTC9JxUrRtm5gDWv8a4uFJGS3iQ6rJUdbPM9+Sb3H6QrG2vd+ +DhcI00iX0HGS8A85PjRqHH3Y8iKuu2n0M7SmSFXRDw4m6Oy2Cy2nhTXN/VnIn9HN +PlopNLk9hM6xZdRZkZFWdSHBd575euFgndOtBBj0fOtek49TSiIp+EgrPk2GrFt/ +ywaZWWDYWGWVjUTR939+J399roD1B0y2PpxxVJkES/1Y+Zj0 +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Assured ID Root G2 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Assured ID Root G2 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Assured ID Root G2" +# Serial: 15385348160840213938643033620894905419 +# MD5 Fingerprint: 92:38:b9:f8:63:24:82:65:2c:57:33:e6:fe:81:8f:9d +# SHA1 Fingerprint: a1:4b:48:d9:43:ee:0a:0e:40:90:4f:3c:e0:a4:c0:91:93:51:5d:3f +# SHA256 Fingerprint: 7d:05:eb:b6:82:33:9f:8c:94:51:ee:09:4e:eb:fe:fa:79:53:a1:14:ed:b2:f4:49:49:45:2f:ab:7d:2f:c1:85 +-----BEGIN CERTIFICATE----- +MIIDljCCAn6gAwIBAgIQC5McOtY5Z+pnI7/Dr5r0SzANBgkqhkiG9w0BAQsFADBl +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv +b3QgRzIwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBlMQswCQYDVQQG +EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl +cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgRzIwggEi +MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDZ5ygvUj82ckmIkzTz+GoeMVSA +n61UQbVH35ao1K+ALbkKz3X9iaV9JPrjIgwrvJUXCzO/GU1BBpAAvQxNEP4Htecc +biJVMWWXvdMX0h5i89vqbFCMP4QMls+3ywPgym2hFEwbid3tALBSfK+RbLE4E9Hp +EgjAALAcKxHad3A2m67OeYfcgnDmCXRwVWmvo2ifv922ebPynXApVfSr/5Vh88lA +bx3RvpO704gqu52/clpWcTs/1PPRCv4o76Pu2ZmvA9OPYLfykqGxvYmJHzDNw6Yu +YjOuFgJ3RFrngQo8p0Quebg/BLxcoIfhG69Rjs3sLPr4/m3wOnyqi+RnlTGNAgMB +AAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQW +BBTOw0q5mVXyuNtgv6l+vVa1lzan1jANBgkqhkiG9w0BAQsFAAOCAQEAyqVVjOPI +QW5pJ6d1Ee88hjZv0p3GeDgdaZaikmkuOGybfQTUiaWxMTeKySHMq2zNixya1r9I +0jJmwYrA8y8678Dj1JGG0VDjA9tzd29KOVPt3ibHtX2vK0LRdWLjSisCx1BL4Gni +lmwORGYQRI+tBev4eaymG+g3NJ1TyWGqolKvSnAWhsI6yLETcDbYz+70CjTVW0z9 +B5yiutkBclzzTcHdDrEcDcRjvq30FPuJ7KJBDkzMyFdA0G4Dqs0MjomZmWzwPDCv +ON9vvKO+KSAnq3T/EyJ43pdSVR6DtVQgA+6uwE9W3jfMw3+qBCe703e4YtsXfJwo +IhNzbM8m9Yop5w== +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Assured ID Root G3 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Assured ID Root G3 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Assured ID Root G3" +# Serial: 15459312981008553731928384953135426796 +# MD5 Fingerprint: 7c:7f:65:31:0c:81:df:8d:ba:3e:99:e2:5c:ad:6e:fb +# SHA1 Fingerprint: f5:17:a2:4f:9a:48:c6:c9:f8:a2:00:26:9f:dc:0f:48:2c:ab:30:89 +# SHA256 Fingerprint: 7e:37:cb:8b:4c:47:09:0c:ab:36:55:1b:a6:f4:5d:b8:40:68:0f:ba:16:6a:95:2d:b1:00:71:7f:43:05:3f:c2 +-----BEGIN CERTIFICATE----- +MIICRjCCAc2gAwIBAgIQC6Fa+h3foLVJRK/NJKBs7DAKBggqhkjOPQQDAzBlMQsw +CQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cu +ZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3Qg +RzMwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBlMQswCQYDVQQGEwJV +UzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQu +Y29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgRzMwdjAQBgcq +hkjOPQIBBgUrgQQAIgNiAAQZ57ysRGXtzbg/WPuNsVepRC0FFfLvC/8QdJ+1YlJf +Zn4f5dwbRXkLzMZTCp2NXQLZqVneAlr2lSoOjThKiknGvMYDOAdfVdp+CW7if17Q +RSAPWXYQ1qAk8C3eNvJsKTmjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/ +BAQDAgGGMB0GA1UdDgQWBBTL0L2p4ZgFUaFNN6KDec6NHSrkhDAKBggqhkjOPQQD +AwNnADBkAjAlpIFFAmsSS3V0T8gj43DydXLefInwz5FyYZ5eEJJZVrmDxxDnOOlY +JjZ91eQ0hjkCMHw2U/Aw5WJjOpnitqM7mzT6HtoQknFekROn3aRukswy1vUhZscv +6pZjamVFkpUBtA== +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Global Root G2 O=DigiCert Inc 
OU=www.digicert.com +# Subject: CN=DigiCert Global Root G2 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Global Root G2" +# Serial: 4293743540046975378534879503202253541 +# MD5 Fingerprint: e4:a6:8a:c8:54:ac:52:42:46:0a:fd:72:48:1b:2a:44 +# SHA1 Fingerprint: df:3c:24:f9:bf:d6:66:76:1b:26:80:73:fe:06:d1:cc:8d:4f:82:a4 +# SHA256 Fingerprint: cb:3c:cb:b7:60:31:e5:e0:13:8f:8d:d3:9a:23:f9:de:47:ff:c3:5e:43:c1:14:4c:ea:27:d4:6a:5a:b1:cb:5f +-----BEGIN CERTIFICATE----- +MIIDjjCCAnagAwIBAgIQAzrx5qcRqaC7KGSxHQn65TANBgkqhkiG9w0BAQsFADBh +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH +MjAeFw0xMzA4MDExMjAwMDBaFw0zODAxMTUxMjAwMDBaMGExCzAJBgNVBAYTAlVT +MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j +b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IEcyMIIBIjANBgkqhkiG +9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuzfNNNx7a8myaJCtSnX/RrohCgiN9RlUyfuI +2/Ou8jqJkTx65qsGGmvPrC3oXgkkRLpimn7Wo6h+4FR1IAWsULecYxpsMNzaHxmx +1x7e/dfgy5SDN67sH0NO3Xss0r0upS/kqbitOtSZpLYl6ZtrAGCSYP9PIUkY92eQ +q2EGnI/yuum06ZIya7XzV+hdG82MHauVBJVJ8zUtluNJbd134/tJS7SsVQepj5Wz +tCO7TG1F8PapspUwtP1MVYwnSlcUfIKdzXOS0xZKBgyMUNGPHgm+F6HmIcr9g+UQ +vIOlCsRnKPZzFBQ9RnbDhxSJITRNrw9FDKZJobq7nMWxM4MphQIDAQABo0IwQDAP +BgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAdBgNVHQ4EFgQUTiJUIBiV +5uNu5g/6+rkS7QYXjzkwDQYJKoZIhvcNAQELBQADggEBAGBnKJRvDkhj6zHd6mcY +1Yl9PMWLSn/pvtsrF9+wX3N3KjITOYFnQoQj8kVnNeyIv/iPsGEMNKSuIEyExtv4 +NeF22d+mQrvHRAiGfzZ0JFrabA0UWTW98kndth/Jsw1HKj2ZL7tcu7XUIOGZX1NG +Fdtom/DzMNU+MeKNhJ7jitralj41E6Vf8PlwUHBHQRFXGU7Aj64GxJUTFy8bJZ91 +8rGOmaFvE7FBcf6IKshPECBV1/MUReXgRPTqh5Uykw7+U0b6LJ3/iyK5S9kJRaTe +pLiaWN0bfVKfjllDiIGknibVb63dDcY3fe0Dkhvld1927jyNxF1WW6LZZm6zNTfl +MrY= +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Global Root G3 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Global Root G3 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Global Root G3" +# Serial: 7089244469030293291760083333884364146 +# MD5 Fingerprint: f5:5d:a4:50:a5:fb:28:7e:1e:0f:0d:cc:96:57:56:ca +# SHA1 Fingerprint: 7e:04:de:89:6a:3e:66:6d:00:e6:87:d3:3f:fa:d9:3b:e8:3d:34:9e +# SHA256 Fingerprint: 31:ad:66:48:f8:10:41:38:c7:38:f3:9e:a4:32:01:33:39:3e:3a:18:cc:02:29:6e:f9:7c:2a:c9:ef:67:31:d0 +-----BEGIN CERTIFICATE----- +MIICPzCCAcWgAwIBAgIQBVVWvPJepDU1w6QP1atFcjAKBggqhkjOPQQDAzBhMQsw +CQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cu +ZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBHMzAe +Fw0xMzA4MDExMjAwMDBaFw0zODAxMTUxMjAwMDBaMGExCzAJBgNVBAYTAlVTMRUw +EwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5jb20x +IDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IEczMHYwEAYHKoZIzj0CAQYF +K4EEACIDYgAE3afZu4q4C/sLfyHS8L6+c/MzXRq8NOrexpu80JX28MzQC7phW1FG +fp4tn+6OYwwX7Adw9c+ELkCDnOg/QW07rdOkFFk2eJ0DQ+4QE2xy3q6Ip6FrtUPO +Z9wj/wMco+I+o0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAd +BgNVHQ4EFgQUs9tIpPmhxdiuNkHMEWNpYim8S8YwCgYIKoZIzj0EAwMDaAAwZQIx +AK288mw/EkrRLTnDCgmXc/SINoyIJ7vmiI1Qhadj+Z4y3maTD/HMsQmP3Wyr+mt/ +oAIwOWZbwmSNuJ5Q3KjVSaLtx9zRSX8XAbjIho9OjIgrqJqpisXRAL34VOKa5Vt8 +sycX +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Trusted Root G4 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Trusted Root G4 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Trusted Root G4" +# Serial: 7451500558977370777930084869016614236 +# MD5 Fingerprint: 78:f2:fc:aa:60:1f:2f:b4:eb:c9:37:ba:53:2e:75:49 +# SHA1 Fingerprint: dd:fb:16:cd:49:31:c9:73:a2:03:7d:3f:c8:3a:4d:7d:77:5d:05:e4 +# SHA256 Fingerprint: 
55:2f:7b:dc:f1:a7:af:9e:6c:e6:72:01:7f:4f:12:ab:f7:72:40:c7:8e:76:1a:c2:03:d1:d9:d2:0a:c8:99:88 +-----BEGIN CERTIFICATE----- +MIIFkDCCA3igAwIBAgIQBZsbV56OITLiOQe9p3d1XDANBgkqhkiG9w0BAQwFADBi +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSEwHwYDVQQDExhEaWdpQ2VydCBUcnVzdGVkIFJvb3Qg +RzQwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBiMQswCQYDVQQGEwJV +UzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQu +Y29tMSEwHwYDVQQDExhEaWdpQ2VydCBUcnVzdGVkIFJvb3QgRzQwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQC/5pBzaN675F1KPDAiMGkz7MKnJS7JIT3y +ithZwuEppz1Yq3aaza57G4QNxDAf8xukOBbrVsaXbR2rsnnyyhHS5F/WBTxSD1If +xp4VpX6+n6lXFllVcq9ok3DCsrp1mWpzMpTREEQQLt+C8weE5nQ7bXHiLQwb7iDV +ySAdYyktzuxeTsiT+CFhmzTrBcZe7FsavOvJz82sNEBfsXpm7nfISKhmV1efVFiO +DCu3T6cw2Vbuyntd463JT17lNecxy9qTXtyOj4DatpGYQJB5w3jHtrHEtWoYOAMQ +jdjUN6QuBX2I9YI+EJFwq1WCQTLX2wRzKm6RAXwhTNS8rhsDdV14Ztk6MUSaM0C/ +CNdaSaTC5qmgZ92kJ7yhTzm1EVgX9yRcRo9k98FpiHaYdj1ZXUJ2h4mXaXpI8OCi +EhtmmnTK3kse5w5jrubU75KSOp493ADkRSWJtppEGSt+wJS00mFt6zPZxd9LBADM +fRyVw4/3IbKyEbe7f/LVjHAsQWCqsWMYRJUadmJ+9oCw++hkpjPRiQfhvbfmQ6QY +uKZ3AeEPlAwhHbJUKSWJbOUOUlFHdL4mrLZBdd56rF+NP8m800ERElvlEFDrMcXK +chYiCd98THU/Y+whX8QgUWtvsauGi0/C1kVfnSD8oR7FwI+isX4KJpn15GkvmB0t +9dmpsh3lGwIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +hjAdBgNVHQ4EFgQU7NfjgtJxXWRM3y5nP+e6mK4cD08wDQYJKoZIhvcNAQEMBQAD +ggIBALth2X2pbL4XxJEbw6GiAI3jZGgPVs93rnD5/ZpKmbnJeFwMDF/k5hQpVgs2 +SV1EY+CtnJYYZhsjDT156W1r1lT40jzBQ0CuHVD1UvyQO7uYmWlrx8GnqGikJ9yd ++SeuMIW59mdNOj6PWTkiU0TryF0Dyu1Qen1iIQqAyHNm0aAFYF/opbSnr6j3bTWc +fFqK1qI4mfN4i/RN0iAL3gTujJtHgXINwBQy7zBZLq7gcfJW5GqXb5JQbZaNaHqa +sjYUegbyJLkJEVDXCLG4iXqEI2FCKeWjzaIgQdfRnGTZ6iahixTXTBmyUEFxPT9N +cCOGDErcgdLMMpSEDQgJlxxPwO5rIHQw0uA5NBCFIRUBCOhVMt5xSdkoF1BN5r5N +0XWs0Mr7QbhDparTwwVETyw2m+L64kW4I1NsBm9nVX9GtUw/bihaeSbSpKhil9Ie +4u1Ki7wb/UdKDd9nZn6yW0HQO+T0O/QEY+nvwlQAUaCKKsnOeMzV6ocEGLPOr0mI +r/OSmbaz5mEP0oUA51Aa5BuVnRmhuZyxm7EAHu/QD09CbMkKvO5D+jpxpchNJqU1 +/YldvIViHTLSoCtU7ZpXwdv6EM8Zt4tKG48BtieVU+i2iW1bvGjUI+iLUaJW+fCm +gKDWHrO8Dw9TdSmq6hN35N6MgSGtBxBHEa2HPQfRdbzP82Z+ +-----END CERTIFICATE----- + +# Issuer: CN=COMODO RSA Certification Authority O=COMODO CA Limited +# Subject: CN=COMODO RSA Certification Authority O=COMODO CA Limited +# Label: "COMODO RSA Certification Authority" +# Serial: 101909084537582093308941363524873193117 +# MD5 Fingerprint: 1b:31:b0:71:40:36:cc:14:36:91:ad:c4:3e:fd:ec:18 +# SHA1 Fingerprint: af:e5:d2:44:a8:d1:19:42:30:ff:47:9f:e2:f8:97:bb:cd:7a:8c:b4 +# SHA256 Fingerprint: 52:f0:e1:c4:e5:8e:c6:29:29:1b:60:31:7f:07:46:71:b8:5d:7e:a8:0d:5b:07:27:34:63:53:4b:32:b4:02:34 +-----BEGIN CERTIFICATE----- +MIIF2DCCA8CgAwIBAgIQTKr5yttjb+Af907YWwOGnTANBgkqhkiG9w0BAQwFADCB +hTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G +A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNV +BAMTIkNPTU9ETyBSU0EgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAwMTE5 +MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgT +EkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMR +Q09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBSU0EgQ2VydGlmaWNh +dGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCR +6FSS0gpWsawNJN3Fz0RndJkrN6N9I3AAcbxT38T6KhKPS38QVr2fcHK3YX/JSw8X +pz3jsARh7v8Rl8f0hj4K+j5c+ZPmNHrZFGvnnLOFoIJ6dq9xkNfs/Q36nGz637CC +9BR++b7Epi9Pf5l/tfxnQ3K9DADWietrLNPtj5gcFKt+5eNu/Nio5JIk2kNrYrhV +/erBvGy2i/MOjZrkm2xpmfh4SDBF1a3hDTxFYPwyllEnvGfDyi62a+pGx8cgoLEf +Zd5ICLqkTqnyg0Y3hOvozIFIQ2dOciqbXL1MGyiKXCJ7tKuY2e7gUYPDCUZObT6Z 
++pUX2nwzV0E8jVHtC7ZcryxjGt9XyD+86V3Em69FmeKjWiS0uqlWPc9vqv9JWL7w +qP/0uK3pN/u6uPQLOvnoQ0IeidiEyxPx2bvhiWC4jChWrBQdnArncevPDt09qZah +SL0896+1DSJMwBGB7FY79tOi4lu3sgQiUpWAk2nojkxl8ZEDLXB0AuqLZxUpaVIC +u9ffUGpVRr+goyhhf3DQw6KqLCGqR84onAZFdr+CGCe01a60y1Dma/RMhnEw6abf +Fobg2P9A3fvQQoh/ozM6LlweQRGBY84YcWsr7KaKtzFcOmpH4MN5WdYgGq/yapiq +crxXStJLnbsQ/LBMQeXtHT1eKJ2czL+zUdqnR+WEUwIDAQABo0IwQDAdBgNVHQ4E +FgQUu69+Aj36pvE8hI6t7jiY7NkyMtQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB +/wQFMAMBAf8wDQYJKoZIhvcNAQEMBQADggIBAArx1UaEt65Ru2yyTUEUAJNMnMvl +wFTPoCWOAvn9sKIN9SCYPBMtrFaisNZ+EZLpLrqeLppysb0ZRGxhNaKatBYSaVqM +4dc+pBroLwP0rmEdEBsqpIt6xf4FpuHA1sj+nq6PK7o9mfjYcwlYRm6mnPTXJ9OV +2jeDchzTc+CiR5kDOF3VSXkAKRzH7JsgHAckaVd4sjn8OoSgtZx8jb8uk2Intzna +FxiuvTwJaP+EmzzV1gsD41eeFPfR60/IvYcjt7ZJQ3mFXLrrkguhxuhoqEwWsRqZ +CuhTLJK7oQkYdQxlqHvLI7cawiiFwxv/0Cti76R7CZGYZ4wUAc1oBmpjIXUDgIiK +boHGhfKppC3n9KUkEEeDys30jXlYsQab5xoq2Z0B15R97QNKyvDb6KkBPvVWmcke +jkk9u+UJueBPSZI9FoJAzMxZxuY67RIuaTxslbH9qh17f4a+Hg4yRvv7E491f0yL +S0Zj/gA0QHDBw7mh3aZw4gSzQbzpgJHqZJx64SIDqZxubw5lT2yHh17zbqD5daWb +QOhTsiedSrnAdyGN/4fy3ryM7xfft0kL0fJuMAsaDk527RH89elWsn2/x20Kk4yl +0MC2Hb46TpSi125sC8KKfPog88Tk5c0NqMuRkrF8hey1FGlmDoLnzc7ILaZRfyHB +NVOFBkpdn627G190 +-----END CERTIFICATE----- + +# Issuer: CN=USERTrust RSA Certification Authority O=The USERTRUST Network +# Subject: CN=USERTrust RSA Certification Authority O=The USERTRUST Network +# Label: "USERTrust RSA Certification Authority" +# Serial: 2645093764781058787591871645665788717 +# MD5 Fingerprint: 1b:fe:69:d1:91:b7:19:33:a3:72:a8:0f:e1:55:e5:b5 +# SHA1 Fingerprint: 2b:8f:1b:57:33:0d:bb:a2:d0:7a:6c:51:f7:0e:e9:0d:da:b9:ad:8e +# SHA256 Fingerprint: e7:93:c9:b0:2f:d8:aa:13:e2:1c:31:22:8a:cc:b0:81:19:64:3b:74:9c:89:89:64:b1:74:6d:46:c3:d4:cb:d2 +-----BEGIN CERTIFICATE----- +MIIF3jCCA8agAwIBAgIQAf1tMPyjylGoG7xkDjUDLTANBgkqhkiG9w0BAQwFADCB +iDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0pl +cnNleSBDaXR5MR4wHAYDVQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNV +BAMTJVVTRVJUcnVzdCBSU0EgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAw +MjAxMDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBiDELMAkGA1UEBhMCVVMxEzARBgNV +BAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNleSBDaXR5MR4wHAYDVQQKExVU +aGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMTJVVTRVJUcnVzdCBSU0EgQ2Vy +dGlmaWNhdGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK +AoICAQCAEmUXNg7D2wiz0KxXDXbtzSfTTK1Qg2HiqiBNCS1kCdzOiZ/MPans9s/B +3PHTsdZ7NygRK0faOca8Ohm0X6a9fZ2jY0K2dvKpOyuR+OJv0OwWIJAJPuLodMkY +tJHUYmTbf6MG8YgYapAiPLz+E/CHFHv25B+O1ORRxhFnRghRy4YUVD+8M/5+bJz/ +Fp0YvVGONaanZshyZ9shZrHUm3gDwFA66Mzw3LyeTP6vBZY1H1dat//O+T23LLb2 +VN3I5xI6Ta5MirdcmrS3ID3KfyI0rn47aGYBROcBTkZTmzNg95S+UzeQc0PzMsNT +79uq/nROacdrjGCT3sTHDN/hMq7MkztReJVni+49Vv4M0GkPGw/zJSZrM233bkf6 +c0Plfg6lZrEpfDKEY1WJxA3Bk1QwGROs0303p+tdOmw1XNtB1xLaqUkL39iAigmT +Yo61Zs8liM2EuLE/pDkP2QKe6xJMlXzzawWpXhaDzLhn4ugTncxbgtNMs+1b/97l +c6wjOy0AvzVVdAlJ2ElYGn+SNuZRkg7zJn0cTRe8yexDJtC/QV9AqURE9JnnV4ee +UB9XVKg+/XRjL7FQZQnmWEIuQxpMtPAlR1n6BB6T1CZGSlCBst6+eLf8ZxXhyVeE +Hg9j1uliutZfVS7qXMYoCAQlObgOK6nyTJccBz8NUvXt7y+CDwIDAQABo0IwQDAd +BgNVHQ4EFgQUU3m/WqorSs9UgOHYm8Cd8rIDZsswDgYDVR0PAQH/BAQDAgEGMA8G +A1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEMBQADggIBAFzUfA3P9wF9QZllDHPF +Up/L+M+ZBn8b2kMVn54CVVeWFPFSPCeHlCjtHzoBN6J2/FNQwISbxmtOuowhT6KO +VWKR82kV2LyI48SqC/3vqOlLVSoGIG1VeCkZ7l8wXEskEVX/JJpuXior7gtNn3/3 +ATiUFJVDBwn7YKnuHKsSjKCaXqeYalltiz8I+8jRRa8YFWSQEg9zKC7F4iRO/Fjs +8PRF/iKz6y+O0tlFYQXBl2+odnKPi4w2r78NBc5xjeambx9spnFixdjQg3IM8WcR +iQycE0xyNN+81XHfqnHd4blsjDwSXWXavVcStkNr/+XeTWYRUc+ZruwXtuhxkYze 
+Sf7dNXGiFSeUHM9h4ya7b6NnJSFd5t0dCy5oGzuCr+yDZ4XUmFF0sbmZgIn/f3gZ +XHlKYC6SQK5MNyosycdiyA5d9zZbyuAlJQG03RoHnHcAP9Dc1ew91Pq7P8yF1m9/ +qS3fuQL39ZeatTXaw2ewh0qpKJ4jjv9cJ2vhsE/zB+4ALtRZh8tSQZXq9EfX7mRB +VXyNWQKV3WKdwrnuWih0hKWbt5DHDAff9Yk2dDLWKMGwsAvgnEzDHNb842m1R0aB +L6KCq9NjRHDEjf8tM7qtj3u1cIiuPhnPQCjY/MiQu12ZIvVS5ljFH4gxQ+6IHdfG +jjxDah2nGN59PRbxYvnKkKj9 +-----END CERTIFICATE----- + +# Issuer: CN=USERTrust ECC Certification Authority O=The USERTRUST Network +# Subject: CN=USERTrust ECC Certification Authority O=The USERTRUST Network +# Label: "USERTrust ECC Certification Authority" +# Serial: 123013823720199481456569720443997572134 +# MD5 Fingerprint: fa:68:bc:d9:b5:7f:ad:fd:c9:1d:06:83:28:cc:24:c1 +# SHA1 Fingerprint: d1:cb:ca:5d:b2:d5:2a:7f:69:3b:67:4d:e5:f0:5a:1d:0c:95:7d:f0 +# SHA256 Fingerprint: 4f:f4:60:d5:4b:9c:86:da:bf:bc:fc:57:12:e0:40:0d:2b:ed:3f:bc:4d:4f:bd:aa:86:e0:6a:dc:d2:a9:ad:7a +-----BEGIN CERTIFICATE----- +MIICjzCCAhWgAwIBAgIQXIuZxVqUxdJxVt7NiYDMJjAKBggqhkjOPQQDAzCBiDEL +MAkGA1UEBhMCVVMxEzARBgNVBAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNl +eSBDaXR5MR4wHAYDVQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMT +JVVTRVJUcnVzdCBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAwMjAx +MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgT +Ck5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNleSBDaXR5MR4wHAYDVQQKExVUaGUg +VVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMTJVVTRVJUcnVzdCBFQ0MgQ2VydGlm +aWNhdGlvbiBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQarFRaqflo +I+d61SRvU8Za2EurxtW20eZzca7dnNYMYf3boIkDuAUU7FfO7l0/4iGzzvfUinng +o4N+LZfQYcTxmdwlkWOrfzCjtHDix6EznPO/LlxTsV+zfTJ/ijTjeXmjQjBAMB0G +A1UdDgQWBBQ64QmG1M8ZwpZ2dEl23OA1xmNjmjAOBgNVHQ8BAf8EBAMCAQYwDwYD +VR0TAQH/BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjA2Z6EWCNzklwBBHU6+4WMB +zzuqQhFkoJ2UOQIReVx7Hfpkue4WQrO/isIJxOzksU0CMQDpKmFHjFJKS04YcPbW +RNZu9YO6bVi9JNlWSOrvxKJGgYhqOkbRqZtNyWHa0V1Xahg= +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R5 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R5 +# Label: "GlobalSign ECC Root CA - R5" +# Serial: 32785792099990507226680698011560947931244 +# MD5 Fingerprint: 9f:ad:3b:1c:02:1e:8a:ba:17:74:38:81:0c:a2:bc:08 +# SHA1 Fingerprint: 1f:24:c6:30:cd:a4:18:ef:20:69:ff:ad:4f:dd:5f:46:3a:1b:69:aa +# SHA256 Fingerprint: 17:9f:bc:14:8a:3d:d0:0f:d2:4e:a1:34:58:cc:43:bf:a7:f5:9c:81:82:d7:83:a5:13:f6:eb:ec:10:0c:89:24 +-----BEGIN CERTIFICATE----- +MIICHjCCAaSgAwIBAgIRYFlJ4CYuu1X5CneKcflK2GwwCgYIKoZIzj0EAwMwUDEk +MCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBDQSAtIFI1MRMwEQYDVQQKEwpH +bG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWduMB4XDTEyMTExMzAwMDAwMFoX +DTM4MDExOTAzMTQwN1owUDEkMCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBD +QSAtIFI1MRMwEQYDVQQKEwpHbG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWdu +MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAER0UOlvt9Xb/pOdEh+J8LttV7HpI6SFkc +8GIxLcB6KP4ap1yztsyX50XUWPrRd21DosCHZTQKH3rd6zwzocWdTaRvQZU4f8ke +hOvRnkmSh5SHDDqFSmafnVmTTZdhBoZKo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYD +VR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUPeYpSJvqB8ohREom3m7e0oPQn1kwCgYI +KoZIzj0EAwMDaAAwZQIxAOVpEslu28YxuglB4Zf4+/2a4n0Sye18ZNPLBSWLVtmg +515dTguDnFt2KaAJJiFqYgIwcdK1j1zqO+F4CYWodZI7yFz9SO8NdCKoCOJuxUnO +xwy8p2Fp8fc74SrL+SvzZpA3 +-----END CERTIFICATE----- + +# Issuer: CN=Staat der Nederlanden EV Root CA O=Staat der Nederlanden +# Subject: CN=Staat der Nederlanden EV Root CA O=Staat der Nederlanden +# Label: "Staat der Nederlanden EV Root CA" +# Serial: 10000013 +# MD5 Fingerprint: fc:06:af:7b:e8:1a:f1:9a:b4:e8:d2:70:1f:c0:f5:ba +# SHA1 Fingerprint: 76:e2:7e:c1:4f:db:82:c1:c0:a6:75:b5:05:be:3d:29:b4:ed:db:bb +# SHA256 
Fingerprint: 4d:24:91:41:4c:fe:95:67:46:ec:4c:ef:a6:cf:6f:72:e2:8a:13:29:43:2f:9d:8a:90:7a:c4:cb:5d:ad:c1:5a +-----BEGIN CERTIFICATE----- +MIIFcDCCA1igAwIBAgIEAJiWjTANBgkqhkiG9w0BAQsFADBYMQswCQYDVQQGEwJO +TDEeMBwGA1UECgwVU3RhYXQgZGVyIE5lZGVybGFuZGVuMSkwJwYDVQQDDCBTdGFh +dCBkZXIgTmVkZXJsYW5kZW4gRVYgUm9vdCBDQTAeFw0xMDEyMDgxMTE5MjlaFw0y +MjEyMDgxMTEwMjhaMFgxCzAJBgNVBAYTAk5MMR4wHAYDVQQKDBVTdGFhdCBkZXIg +TmVkZXJsYW5kZW4xKTAnBgNVBAMMIFN0YWF0IGRlciBOZWRlcmxhbmRlbiBFViBS +b290IENBMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA48d+ifkkSzrS +M4M1LGns3Amk41GoJSt5uAg94JG6hIXGhaTK5skuU6TJJB79VWZxXSzFYGgEt9nC +UiY4iKTWO0Cmws0/zZiTs1QUWJZV1VD+hq2kY39ch/aO5ieSZxeSAgMs3NZmdO3d +Z//BYY1jTw+bbRcwJu+r0h8QoPnFfxZpgQNH7R5ojXKhTbImxrpsX23Wr9GxE46p +rfNeaXUmGD5BKyF/7otdBwadQ8QpCiv8Kj6GyzyDOvnJDdrFmeK8eEEzduG/L13l +pJhQDBXd4Pqcfzho0LKmeqfRMb1+ilgnQ7O6M5HTp5gVXJrm0w912fxBmJc+qiXb +j5IusHsMX/FjqTf5m3VpTCgmJdrV8hJwRVXj33NeN/UhbJCONVrJ0yPr08C+eKxC +KFhmpUZtcALXEPlLVPxdhkqHz3/KRawRWrUgUY0viEeXOcDPusBCAUCZSCELa6fS +/ZbV0b5GnUngC6agIk440ME8MLxwjyx1zNDFjFE7PZQIZCZhfbnDZY8UnCHQqv0X +cgOPvZuM5l5Tnrmd74K74bzickFbIZTTRTeU0d8JOV3nI6qaHcptqAqGhYqCvkIH +1vI4gnPah1vlPNOePqc7nvQDs/nxfRN0Av+7oeX6AHkcpmZBiFxgV6YuCcS6/ZrP +px9Aw7vMWgpVSzs4dlG4Y4uElBbmVvMCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB +/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFP6rAJCYniT8qcwaivsnuL8wbqg7 +MA0GCSqGSIb3DQEBCwUAA4ICAQDPdyxuVr5Os7aEAJSrR8kN0nbHhp8dB9O2tLsI +eK9p0gtJ3jPFrK3CiAJ9Brc1AsFgyb/E6JTe1NOpEyVa/m6irn0F3H3zbPB+po3u +2dfOWBfoqSmuc0iH55vKbimhZF8ZE/euBhD/UcabTVUlT5OZEAFTdfETzsemQUHS +v4ilf0X8rLiltTMMgsT7B/Zq5SWEXwbKwYY5EdtYzXc7LMJMD16a4/CrPmEbUCTC +wPTxGfARKbalGAKb12NMcIxHowNDXLldRqANb/9Zjr7dn3LDWyvfjFvO5QxGbJKy +CqNMVEIYFRIYvdr8unRu/8G2oGTYqV9Vrp9canaW2HNnh/tNf1zuacpzEPuKqf2e +vTY4SUmH9A4U8OmHuD+nT3pajnnUk+S7aFKErGzp85hwVXIy+TSrK0m1zSBi5Dp6 +Z2Orltxtrpfs/J92VoguZs9btsmksNcFuuEnL5O7Jiqik7Ab846+HUCjuTaPPoIa +Gl6I6lD4WeKDRikL40Rc4ZW2aZCaFG+XroHPaO+Zmr615+F/+PoTRxZMzG0IQOeL +eG9QgkRQP2YGiqtDhFZKDyAthg710tvSeopLzaXoTvFeJiUBWSOgftL2fiFX1ye8 +FVdMpEbB4IMeDExNH08GGeL5qPQ6gqGyeUN51q1veieQA6TqJIc/2b3Z6fJfUEkc +7uzXLg== +-----END CERTIFICATE----- + +# Issuer: CN=IdenTrust Commercial Root CA 1 O=IdenTrust +# Subject: CN=IdenTrust Commercial Root CA 1 O=IdenTrust +# Label: "IdenTrust Commercial Root CA 1" +# Serial: 13298821034946342390520003877796839426 +# MD5 Fingerprint: b3:3e:77:73:75:ee:a0:d3:e3:7e:49:63:49:59:bb:c7 +# SHA1 Fingerprint: df:71:7e:aa:4a:d9:4e:c9:55:84:99:60:2d:48:de:5f:bc:f0:3a:25 +# SHA256 Fingerprint: 5d:56:49:9b:e4:d2:e0:8b:cf:ca:d0:8a:3e:38:72:3d:50:50:3b:de:70:69:48:e4:2f:55:60:30:19:e5:28:ae +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIQCgFCgAAAAUUjyES1AAAAAjANBgkqhkiG9w0BAQsFADBK +MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MScwJQYDVQQDEx5JZGVu +VHJ1c3QgQ29tbWVyY2lhbCBSb290IENBIDEwHhcNMTQwMTE2MTgxMjIzWhcNMzQw +MTE2MTgxMjIzWjBKMQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MScw +JQYDVQQDEx5JZGVuVHJ1c3QgQ29tbWVyY2lhbCBSb290IENBIDEwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCnUBneP5k91DNG8W9RYYKyqU+PZ4ldhNlT +3Qwo2dfw/66VQ3KZ+bVdfIrBQuExUHTRgQ18zZshq0PirK1ehm7zCYofWjK9ouuU ++ehcCuz/mNKvcbO0U59Oh++SvL3sTzIwiEsXXlfEU8L2ApeN2WIrvyQfYo3fw7gp +S0l4PJNgiCL8mdo2yMKi1CxUAGc1bnO/AljwpN3lsKImesrgNqUZFvX9t++uP0D1 +bVoE/c40yiTcdCMbXTMTEl3EASX2MN0CXZ/g1Ue9tOsbobtJSdifWwLziuQkkORi +T0/Br4sOdBeo0XKIanoBScy0RnnGF7HamB4HWfp1IYVl3ZBWzvurpWCdxJ35UrCL +vYf5jysjCiN2O/cz4ckA82n5S6LgTrx+kzmEB/dEcH7+B1rlsazRGMzyNeVJSQjK +Vsk9+w8YfYs7wRPCTY/JTw436R+hDmrfYi7LNQZReSzIJTj0+kuniVyc0uMNOYZK +dHzVWYfCP04MXFL0PfdSgvHqo6z9STQaKPNBiDoT7uje/5kdX7rL6B7yuVBgwDHT 
+c+XvvqDtMwt0viAgxGds8AgDelWAf0ZOlqf0Hj7h9tgJ4TNkK2PXMl6f+cB7D3hv +l7yTmvmcEpB4eoCHFddydJxVdHixuuFucAS6T6C6aMN7/zHwcz09lCqxC0EOoP5N +iGVreTO01wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB +/zAdBgNVHQ4EFgQU7UQZwNPwBovupHu+QucmVMiONnYwDQYJKoZIhvcNAQELBQAD +ggIBAA2ukDL2pkt8RHYZYR4nKM1eVO8lvOMIkPkp165oCOGUAFjvLi5+U1KMtlwH +6oi6mYtQlNeCgN9hCQCTrQ0U5s7B8jeUeLBfnLOic7iPBZM4zY0+sLj7wM+x8uwt +LRvM7Kqas6pgghstO8OEPVeKlh6cdbjTMM1gCIOQ045U8U1mwF10A0Cj7oV+wh93 +nAbowacYXVKV7cndJZ5t+qntozo00Fl72u1Q8zW/7esUTTHHYPTa8Yec4kjixsU3 ++wYQ+nVZZjFHKdp2mhzpgq7vmrlR94gjmmmVYjzlVYA211QC//G5Xc7UI2/YRYRK +W2XviQzdFKcgyxilJbQN+QHwotL0AMh0jqEqSI5l2xPE4iUXfeu+h1sXIFRRk0pT +AwvsXcoz7WL9RccvW9xYoIA55vrX/hMUpu09lEpCdNTDd1lzzY9GvlU47/rokTLq +l1gEIt44w8y8bckzOmoKaT+gyOpyj4xjhiO9bTyWnpXgSUyqorkqG5w2gXjtw+hG +4iZZRHUe2XWJUc0QhJ1hYMtd+ZciTY6Y5uN/9lu7rs3KSoFrXgvzUeF0K+l+J6fZ +mUlO+KWA2yUPHGNiiskzZ2s8EIPGrd6ozRaOjfAHN3Gf8qv8QfXBi+wAN10J5U6A +7/qxXDgGpRtK4dw4LTzcqx+QGtVKnO7RcGzM7vRX+Bi6hG6H +-----END CERTIFICATE----- + +# Issuer: CN=IdenTrust Public Sector Root CA 1 O=IdenTrust +# Subject: CN=IdenTrust Public Sector Root CA 1 O=IdenTrust +# Label: "IdenTrust Public Sector Root CA 1" +# Serial: 13298821034946342390521976156843933698 +# MD5 Fingerprint: 37:06:a5:b0:fc:89:9d:ba:f4:6b:8c:1a:64:cd:d5:ba +# SHA1 Fingerprint: ba:29:41:60:77:98:3f:f4:f3:ef:f2:31:05:3b:2e:ea:6d:4d:45:fd +# SHA256 Fingerprint: 30:d0:89:5a:9a:44:8a:26:20:91:63:55:22:d1:f5:20:10:b5:86:7a:ca:e1:2c:78:ef:95:8f:d4:f4:38:9f:2f +-----BEGIN CERTIFICATE----- +MIIFZjCCA06gAwIBAgIQCgFCgAAAAUUjz0Z8AAAAAjANBgkqhkiG9w0BAQsFADBN +MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MSowKAYDVQQDEyFJZGVu +VHJ1c3QgUHVibGljIFNlY3RvciBSb290IENBIDEwHhcNMTQwMTE2MTc1MzMyWhcN +MzQwMTE2MTc1MzMyWjBNMQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0 +MSowKAYDVQQDEyFJZGVuVHJ1c3QgUHVibGljIFNlY3RvciBSb290IENBIDEwggIi +MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC2IpT8pEiv6EdrCvsnduTyP4o7 +ekosMSqMjbCpwzFrqHd2hCa2rIFCDQjrVVi7evi8ZX3yoG2LqEfpYnYeEe4IFNGy +RBb06tD6Hi9e28tzQa68ALBKK0CyrOE7S8ItneShm+waOh7wCLPQ5CQ1B5+ctMlS +bdsHyo+1W/CD80/HLaXIrcuVIKQxKFdYWuSNG5qrng0M8gozOSI5Cpcu81N3uURF +/YTLNiCBWS2ab21ISGHKTN9T0a9SvESfqy9rg3LvdYDaBjMbXcjaY8ZNzaxmMc3R +3j6HEDbhuaR672BQssvKplbgN6+rNBM5Jeg5ZuSYeqoSmJxZZoY+rfGwyj4GD3vw +EUs3oERte8uojHH01bWRNszwFcYr3lEXsZdMUD2xlVl8BX0tIdUAvwFnol57plzy +9yLxkA2T26pEUWbMfXYD62qoKjgZl3YNa4ph+bz27nb9cCvdKTz4Ch5bQhyLVi9V +GxyhLrXHFub4qjySjmm2AcG1hp2JDws4lFTo6tyePSW8Uybt1as5qsVATFSrsrTZ +2fjXctscvG29ZV/viDUqZi/u9rNl8DONfJhBaUYPQxxp+pu10GFqzcpL2UyQRqsV +WaFHVCkugyhfHMKiq3IXAAaOReyL4jM9f9oZRORicsPfIsbyVtTdX5Vy7W1f90gD +W/3FKqD2cyOEEBsB5wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/ +BAUwAwEB/zAdBgNVHQ4EFgQU43HgntinQtnbcZFrlJPrw6PRFKMwDQYJKoZIhvcN +AQELBQADggIBAEf63QqwEZE4rU1d9+UOl1QZgkiHVIyqZJnYWv6IAcVYpZmxI1Qj +t2odIFflAWJBF9MJ23XLblSQdf4an4EKwt3X9wnQW3IV5B4Jaj0z8yGa5hV+rVHV +DRDtfULAj+7AmgjVQdZcDiFpboBhDhXAuM/FSRJSzL46zNQuOAXeNf0fb7iAaJg9 +TaDKQGXSc3z1i9kKlT/YPyNtGtEqJBnZhbMX73huqVjRI9PHE+1yJX9dsXNw0H8G +lwmEKYBhHfpe/3OsoOOJuBxxFcbeMX8S3OFtm6/n6J91eEyrRjuazr8FGF1NFTwW +mhlQBJqymm9li1JfPFgEKCXAZmExfrngdbkaqIHWchezxQMxNRF4eKLg6TCMf4Df +WN88uieW4oA0beOY02QnrEh+KHdcxiVhJfiFDGX6xDIvpZgF5PgLZxYWxoK4Mhn5 ++bl53B/N66+rDt0b20XkeucC4pVd/GnwU2lhlXV5C15V5jgclKlZM57IcXR5f1GJ +tshquDDIajjDbp7hNxbqBWJMWxJH7ae0s1hWx0nzfxJoCTFx8G34Tkf71oXuxVhA +GaQdp/lLQzfcaFpPz+vCZHTetBXZ9FRUGi8c15dxVJCO2SCdUyt/q4/i6jC8UDfv +8Ue1fXwsBOxonbRJRBD0ckscZOf85muQ3Wl9af0AVqW3rLatt8o+Ae+c +-----END CERTIFICATE----- + +# Issuer: CN=Entrust Root Certification Authority - G2 O=Entrust, Inc. 
OU=See www.entrust.net/legal-terms/(c) 2009 Entrust, Inc. - for authorized use only +# Subject: CN=Entrust Root Certification Authority - G2 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2009 Entrust, Inc. - for authorized use only +# Label: "Entrust Root Certification Authority - G2" +# Serial: 1246989352 +# MD5 Fingerprint: 4b:e2:c9:91:96:65:0c:f4:0e:5a:93:92:a0:0a:fe:b2 +# SHA1 Fingerprint: 8c:f4:27:fd:79:0c:3a:d1:66:06:8d:e8:1e:57:ef:bb:93:22:72:d4 +# SHA256 Fingerprint: 43:df:57:74:b0:3e:7f:ef:5f:e4:0d:93:1a:7b:ed:f1:bb:2e:6b:42:73:8c:4e:6d:38:41:10:3d:3a:a7:f3:39 +-----BEGIN CERTIFICATE----- +MIIEPjCCAyagAwIBAgIESlOMKDANBgkqhkiG9w0BAQsFADCBvjELMAkGA1UEBhMC +VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50 +cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDA5IEVudHJ1c3Qs +IEluYy4gLSBmb3IgYXV0aG9yaXplZCB1c2Ugb25seTEyMDAGA1UEAxMpRW50cnVz +dCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IC0gRzIwHhcNMDkwNzA3MTcy +NTU0WhcNMzAxMjA3MTc1NTU0WjCBvjELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUVu +dHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50cnVzdC5uZXQvbGVnYWwt +dGVybXMxOTA3BgNVBAsTMChjKSAyMDA5IEVudHJ1c3QsIEluYy4gLSBmb3IgYXV0 +aG9yaXplZCB1c2Ugb25seTEyMDAGA1UEAxMpRW50cnVzdCBSb290IENlcnRpZmlj +YXRpb24gQXV0aG9yaXR5IC0gRzIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK +AoIBAQC6hLZy254Ma+KZ6TABp3bqMriVQRrJ2mFOWHLP/vaCeb9zYQYKpSfYs1/T +RU4cctZOMvJyig/3gxnQaoCAAEUesMfnmr8SVycco2gvCoe9amsOXmXzHHfV1IWN +cCG0szLni6LVhjkCsbjSR87kyUnEO6fe+1R9V77w6G7CebI6C1XiUJgWMhNcL3hW +wcKUs/Ja5CeanyTXxuzQmyWC48zCxEXFjJd6BmsqEZ+pCm5IO2/b1BEZQvePB7/1 +U1+cPvQXLOZprE4yTGJ36rfo5bs0vBmLrpxR57d+tVOxMyLlbc9wPBr64ptntoP0 +jaWvYkxN4FisZDQSA/i2jZRjJKRxAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAP +BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRqciZ60B7vfec7aVHUbI2fkBJmqzAN +BgkqhkiG9w0BAQsFAAOCAQEAeZ8dlsa2eT8ijYfThwMEYGprmi5ZiXMRrEPR9RP/ +jTkrwPK9T3CMqS/qF8QLVJ7UG5aYMzyorWKiAHarWWluBh1+xLlEjZivEtRh2woZ +Rkfz6/djwUAFQKXSt/S1mja/qYh2iARVBCuch38aNzx+LaUa2NSJXsq9rD1s2G2v +1fN2D807iDginWyTmsQ9v4IbZT+mD12q/OWyFcq1rca8PdCE6OoGcrBNOTJ4vz4R +nAuknZoh8/CbCzB428Hch0P+vGOaysXCHMnHjf87ElgI5rY97HosTvuDls4MPGmH +VHOkc8KT/1EQrBVUAdj8BbGJoX90g5pJ19xOe4pIb4tF9g== +-----END CERTIFICATE----- + +# Issuer: CN=Entrust Root Certification Authority - EC1 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2012 Entrust, Inc. - for authorized use only +# Subject: CN=Entrust Root Certification Authority - EC1 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2012 Entrust, Inc. 
- for authorized use only +# Label: "Entrust Root Certification Authority - EC1" +# Serial: 51543124481930649114116133369 +# MD5 Fingerprint: b6:7e:1d:f0:58:c5:49:6c:24:3b:3d:ed:98:18:ed:bc +# SHA1 Fingerprint: 20:d8:06:40:df:9b:25:f5:12:25:3a:11:ea:f7:59:8a:eb:14:b5:47 +# SHA256 Fingerprint: 02:ed:0e:b2:8c:14:da:45:16:5c:56:67:91:70:0d:64:51:d7:fb:56:f0:b2:ab:1d:3b:8e:b0:70:e5:6e:df:f5 +-----BEGIN CERTIFICATE----- +MIIC+TCCAoCgAwIBAgINAKaLeSkAAAAAUNCR+TAKBggqhkjOPQQDAzCBvzELMAkG +A1UEBhMCVVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3 +d3cuZW50cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDEyIEVu +dHJ1c3QsIEluYy4gLSBmb3IgYXV0aG9yaXplZCB1c2Ugb25seTEzMDEGA1UEAxMq +RW50cnVzdCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IC0gRUMxMB4XDTEy +MTIxODE1MjUzNloXDTM3MTIxODE1NTUzNlowgb8xCzAJBgNVBAYTAlVTMRYwFAYD +VQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQLEx9TZWUgd3d3LmVudHJ1c3QubmV0 +L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykgMjAxMiBFbnRydXN0LCBJbmMuIC0g +Zm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMzAxBgNVBAMTKkVudHJ1c3QgUm9vdCBD +ZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEVDMTB2MBAGByqGSM49AgEGBSuBBAAi +A2IABIQTydC6bUF74mzQ61VfZgIaJPRbiWlH47jCffHyAsWfoPZb1YsGGYZPUxBt +ByQnoaD41UcZYUx9ypMn6nQM72+WCf5j7HBdNq1nd67JnXxVRDqiY1Ef9eNi1KlH +Bz7MIKNCMEAwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O +BBYEFLdj5xrdjekIplWDpOBqUEFlEUJJMAoGCCqGSM49BAMDA2cAMGQCMGF52OVC +R98crlOZF7ZvHH3hvxGU0QOIdeSNiaSKd0bebWHvAvX7td/M/k7//qnmpwIwW5nX +hTcGtXsI/esni0qU+eH6p44mCOh8kmhtc9hvJqwhAriZtyZBWyVgrtBIGu4G +-----END CERTIFICATE----- + +# Issuer: CN=CFCA EV ROOT O=China Financial Certification Authority +# Subject: CN=CFCA EV ROOT O=China Financial Certification Authority +# Label: "CFCA EV ROOT" +# Serial: 407555286 +# MD5 Fingerprint: 74:e1:b6:ed:26:7a:7a:44:30:33:94:ab:7b:27:81:30 +# SHA1 Fingerprint: e2:b8:29:4b:55:84:ab:6b:58:c2:90:46:6c:ac:3f:b8:39:8f:84:83 +# SHA256 Fingerprint: 5c:c3:d7:8e:4e:1d:5e:45:54:7a:04:e6:87:3e:64:f9:0c:f9:53:6d:1c:cc:2e:f8:00:f3:55:c4:c5:fd:70:fd +-----BEGIN CERTIFICATE----- +MIIFjTCCA3WgAwIBAgIEGErM1jANBgkqhkiG9w0BAQsFADBWMQswCQYDVQQGEwJD +TjEwMC4GA1UECgwnQ2hpbmEgRmluYW5jaWFsIENlcnRpZmljYXRpb24gQXV0aG9y +aXR5MRUwEwYDVQQDDAxDRkNBIEVWIFJPT1QwHhcNMTIwODA4MDMwNzAxWhcNMjkx +MjMxMDMwNzAxWjBWMQswCQYDVQQGEwJDTjEwMC4GA1UECgwnQ2hpbmEgRmluYW5j +aWFsIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MRUwEwYDVQQDDAxDRkNBIEVWIFJP +T1QwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDXXWvNED8fBVnVBU03 +sQ7smCuOFR36k0sXgiFxEFLXUWRwFsJVaU2OFW2fvwwbwuCjZ9YMrM8irq93VCpL +TIpTUnrD7i7es3ElweldPe6hL6P3KjzJIx1qqx2hp/Hz7KDVRM8Vz3IvHWOX6Jn5 +/ZOkVIBMUtRSqy5J35DNuF++P96hyk0g1CXohClTt7GIH//62pCfCqktQT+x8Rgp +7hZZLDRJGqgG16iI0gNyejLi6mhNbiyWZXvKWfry4t3uMCz7zEasxGPrb382KzRz +EpR/38wmnvFyXVBlWY9ps4deMm/DGIq1lY+wejfeWkU7xzbh72fROdOXW3NiGUgt +hxwG+3SYIElz8AXSG7Ggo7cbcNOIabla1jj0Ytwli3i/+Oh+uFzJlU9fpy25IGvP +a931DfSCt/SyZi4QKPaXWnuWFo8BGS1sbn85WAZkgwGDg8NNkt0yxoekN+kWzqot +aK8KgWU6cMGbrU1tVMoqLUuFG7OA5nBFDWteNfB/O7ic5ARwiRIlk9oKmSJgamNg +TnYGmE69g60dWIolhdLHZR4tjsbftsbhf4oEIRUpdPA+nJCdDC7xij5aqgwJHsfV +PKPtl8MeNPo4+QgO48BdK4PRVmrJtqhUUy54Mmc9gn900PvhtgVguXDbjgv5E1hv +cWAQUhC5wUEJ73IfZzF4/5YFjQIDAQABo2MwYTAfBgNVHSMEGDAWgBTj/i39KNAL +tbq2osS/BqoFjJP7LzAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAd +BgNVHQ4EFgQU4/4t/SjQC7W6tqLEvwaqBYyT+y8wDQYJKoZIhvcNAQELBQADggIB +ACXGumvrh8vegjmWPfBEp2uEcwPenStPuiB/vHiyz5ewG5zz13ku9Ui20vsXiObT +ej/tUxPQ4i9qecsAIyjmHjdXNYmEwnZPNDatZ8POQQaIxffu2Bq41gt/UP+TqhdL +jOztUmCypAbqTuv0axn96/Ua4CUqmtzHQTb3yHQFhDmVOdYLO6Qn+gjYXB74BGBS +ESgoA//vU2YApUo0FmZ8/Qmkrp5nGm9BC2sGE5uPhnEFtC+NiWYzKXZUmhH4J/qy 
+P5Hgzg0b8zAarb8iXRvTvyUFTeGSGn+ZnzxEk8rUQElsgIfXBDrDMlI1Dlb4pd19 +xIsNER9Tyx6yF7Zod1rg1MvIB671Oi6ON7fQAUtDKXeMOZePglr4UeWJoBjnaH9d +Ci77o0cOPaYjesYBx4/IXr9tgFa+iiS6M+qf4TIRnvHST4D2G0CvOJ4RUHlzEhLN +5mydLIhyPDCBBpEi6lmt2hkuIsKNuYyH4Ga8cyNfIWRjgEj1oDwYPZTISEEdQLpe +/v5WOaHIz16eGWRGENoXkbcFgKyLmZJ956LYBws2J+dIeWCKw9cTXPhyQN9Ky8+Z +AAoACxGV2lZFA4gKn2fQ1XmxqI1AbQ3CekD6819kR5LLU7m7Wc5P/dAVUwHY3+vZ +5nbv0CO7O6l5s9UCKc2Jo5YPSjXnTkLAdc0Hz+Ys63su +-----END CERTIFICATE----- + +# Issuer: CN=OISTE WISeKey Global Root GB CA O=WISeKey OU=OISTE Foundation Endorsed +# Subject: CN=OISTE WISeKey Global Root GB CA O=WISeKey OU=OISTE Foundation Endorsed +# Label: "OISTE WISeKey Global Root GB CA" +# Serial: 157768595616588414422159278966750757568 +# MD5 Fingerprint: a4:eb:b9:61:28:2e:b7:2f:98:b0:35:26:90:99:51:1d +# SHA1 Fingerprint: 0f:f9:40:76:18:d3:d7:6a:4b:98:f0:a8:35:9e:0c:fd:27:ac:cc:ed +# SHA256 Fingerprint: 6b:9c:08:e8:6e:b0:f7:67:cf:ad:65:cd:98:b6:21:49:e5:49:4a:67:f5:84:5e:7b:d1:ed:01:9f:27:b8:6b:d6 +-----BEGIN CERTIFICATE----- +MIIDtTCCAp2gAwIBAgIQdrEgUnTwhYdGs/gjGvbCwDANBgkqhkiG9w0BAQsFADBt +MQswCQYDVQQGEwJDSDEQMA4GA1UEChMHV0lTZUtleTEiMCAGA1UECxMZT0lTVEUg +Rm91bmRhdGlvbiBFbmRvcnNlZDEoMCYGA1UEAxMfT0lTVEUgV0lTZUtleSBHbG9i +YWwgUm9vdCBHQiBDQTAeFw0xNDEyMDExNTAwMzJaFw0zOTEyMDExNTEwMzFaMG0x +CzAJBgNVBAYTAkNIMRAwDgYDVQQKEwdXSVNlS2V5MSIwIAYDVQQLExlPSVNURSBG +b3VuZGF0aW9uIEVuZG9yc2VkMSgwJgYDVQQDEx9PSVNURSBXSVNlS2V5IEdsb2Jh +bCBSb290IEdCIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2Be3 +HEokKtaXscriHvt9OO+Y9bI5mE4nuBFde9IllIiCFSZqGzG7qFshISvYD06fWvGx +WuR51jIjK+FTzJlFXHtPrby/h0oLS5daqPZI7H17Dc0hBt+eFf1Biki3IPShehtX +1F1Q/7pn2COZH8g/497/b1t3sWtuuMlk9+HKQUYOKXHQuSP8yYFfTvdv37+ErXNk +u7dCjmn21HYdfp2nuFeKUWdy19SouJVUQHMD9ur06/4oQnc/nSMbsrY9gBQHTC5P +99UKFg29ZkM3fiNDecNAhvVMKdqOmq0NpQSHiB6F4+lT1ZvIiwNjeOvgGUpuuy9r +M2RYk61pv48b74JIxwIDAQABo1EwTzALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUw +AwEB/zAdBgNVHQ4EFgQUNQ/INmNe4qPs+TtmFc5RUuORmj0wEAYJKwYBBAGCNxUB +BAMCAQAwDQYJKoZIhvcNAQELBQADggEBAEBM+4eymYGQfp3FsLAmzYh7KzKNbrgh +cViXfa43FK8+5/ea4n32cZiZBKpDdHij40lhPnOMTZTg+XHEthYOU3gf1qKHLwI5 +gSk8rxWYITD+KJAAjNHhy/peyP34EEY7onhCkRd0VQreUGdNZtGn//3ZwLWoo4rO +ZvUPQ82nK1d7Y0Zqqi5S2PTt4W2tKZB4SLrhI6qjiey1q5bAtEuiHZeeevJuQHHf +aPFlTc58Bd9TZaml8LGXBHAVRgOY1NK/VLSgWH1Sb9pWJmLU2NuJMW8c8CLC02Ic +Nc1MaRVUGpCY3useX8p3x8uOPUNpnJpY0CQ73xtAln41rYHHTnG6iBM= +-----END CERTIFICATE----- + +# Issuer: CN=SZAFIR ROOT CA2 O=Krajowa Izba Rozliczeniowa S.A. +# Subject: CN=SZAFIR ROOT CA2 O=Krajowa Izba Rozliczeniowa S.A. 
+# Label: "SZAFIR ROOT CA2" +# Serial: 357043034767186914217277344587386743377558296292 +# MD5 Fingerprint: 11:64:c1:89:b0:24:b1:8c:b1:07:7e:89:9e:51:9e:99 +# SHA1 Fingerprint: e2:52:fa:95:3f:ed:db:24:60:bd:6e:28:f3:9c:cc:cf:5e:b3:3f:de +# SHA256 Fingerprint: a1:33:9d:33:28:1a:0b:56:e5:57:d3:d3:2b:1c:e7:f9:36:7e:b0:94:bd:5f:a7:2a:7e:50:04:c8:de:d7:ca:fe +-----BEGIN CERTIFICATE----- +MIIDcjCCAlqgAwIBAgIUPopdB+xV0jLVt+O2XwHrLdzk1uQwDQYJKoZIhvcNAQEL +BQAwUTELMAkGA1UEBhMCUEwxKDAmBgNVBAoMH0tyYWpvd2EgSXpiYSBSb3psaWN6 +ZW5pb3dhIFMuQS4xGDAWBgNVBAMMD1NaQUZJUiBST09UIENBMjAeFw0xNTEwMTkw +NzQzMzBaFw0zNTEwMTkwNzQzMzBaMFExCzAJBgNVBAYTAlBMMSgwJgYDVQQKDB9L +cmFqb3dhIEl6YmEgUm96bGljemVuaW93YSBTLkEuMRgwFgYDVQQDDA9TWkFGSVIg +Uk9PVCBDQTIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC3vD5QqEvN +QLXOYeeWyrSh2gwisPq1e3YAd4wLz32ohswmUeQgPYUM1ljj5/QqGJ3a0a4m7utT +3PSQ1hNKDJA8w/Ta0o4NkjrcsbH/ON7Dui1fgLkCvUqdGw+0w8LBZwPd3BucPbOw +3gAeqDRHu5rr/gsUvTaE2g0gv/pby6kWIK05YO4vdbbnl5z5Pv1+TW9NL++IDWr6 +3fE9biCloBK0TXC5ztdyO4mTp4CEHCdJckm1/zuVnsHMyAHs6A6KCpbns6aH5db5 +BSsNl0BwPLqsdVqc1U2dAgrSS5tmS0YHF2Wtn2yIANwiieDhZNRnvDF5YTy7ykHN +XGoAyDw4jlivAgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQD +AgEGMB0GA1UdDgQWBBQuFqlKGLXLzPVvUPMjX/hd56zwyDANBgkqhkiG9w0BAQsF +AAOCAQEAtXP4A9xZWx126aMqe5Aosk3AM0+qmrHUuOQn/6mWmc5G4G18TKI4pAZw +8PRBEew/R40/cof5O/2kbytTAOD/OblqBw7rHRz2onKQy4I9EYKL0rufKq8h5mOG +nXkZ7/e7DDWQw4rtTw/1zBLZpD67oPwglV9PJi8RI4NOdQcPv5vRtB3pEAT+ymCP +oky4rc/hkA/NrgrHXXu3UNLUYfrVFdvXn4dRVOul4+vJhaAlIDf7js4MNIThPIGy +d05DpYhfhmehPea0XGG2Ptv+tyjFogeutcrKjSoS75ftwjCkySp6+/NNIxuZMzSg +LvWpCz/UXeHPhJ/iGcJfitYgHuNztw== +-----END CERTIFICATE----- + +# Issuer: CN=Certum Trusted Network CA 2 O=Unizeto Technologies S.A. OU=Certum Certification Authority +# Subject: CN=Certum Trusted Network CA 2 O=Unizeto Technologies S.A. 
OU=Certum Certification Authority +# Label: "Certum Trusted Network CA 2" +# Serial: 44979900017204383099463764357512596969 +# MD5 Fingerprint: 6d:46:9e:d9:25:6d:08:23:5b:5e:74:7d:1e:27:db:f2 +# SHA1 Fingerprint: d3:dd:48:3e:2b:bf:4c:05:e8:af:10:f5:fa:76:26:cf:d3:dc:30:92 +# SHA256 Fingerprint: b6:76:f2:ed:da:e8:77:5c:d3:6c:b0:f6:3c:d1:d4:60:39:61:f4:9e:62:65:ba:01:3a:2f:03:07:b6:d0:b8:04 +-----BEGIN CERTIFICATE----- +MIIF0jCCA7qgAwIBAgIQIdbQSk8lD8kyN/yqXhKN6TANBgkqhkiG9w0BAQ0FADCB +gDELMAkGA1UEBhMCUEwxIjAgBgNVBAoTGVVuaXpldG8gVGVjaG5vbG9naWVzIFMu +QS4xJzAlBgNVBAsTHkNlcnR1bSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTEkMCIG +A1UEAxMbQ2VydHVtIFRydXN0ZWQgTmV0d29yayBDQSAyMCIYDzIwMTExMDA2MDgz +OTU2WhgPMjA0NjEwMDYwODM5NTZaMIGAMQswCQYDVQQGEwJQTDEiMCAGA1UEChMZ +VW5pemV0byBUZWNobm9sb2dpZXMgUy5BLjEnMCUGA1UECxMeQ2VydHVtIENlcnRp +ZmljYXRpb24gQXV0aG9yaXR5MSQwIgYDVQQDExtDZXJ0dW0gVHJ1c3RlZCBOZXR3 +b3JrIENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC9+Xj45tWA +DGSdhhuWZGc/IjoedQF97/tcZ4zJzFxrqZHmuULlIEub2pt7uZld2ZuAS9eEQCsn +0+i6MLs+CRqnSZXvK0AkwpfHp+6bJe+oCgCXhVqqndwpyeI1B+twTUrWwbNWuKFB +OJvR+zF/j+Bf4bE/D44WSWDXBo0Y+aomEKsq09DRZ40bRr5HMNUuctHFY9rnY3lE +fktjJImGLjQ/KUxSiyqnwOKRKIm5wFv5HdnnJ63/mgKXwcZQkpsCLL2puTRZCr+E +Sv/f/rOf69me4Jgj7KZrdxYq28ytOxykh9xGc14ZYmhFV+SQgkK7QtbwYeDBoz1m +o130GO6IyY0XRSmZMnUCMe4pJshrAua1YkV/NxVaI2iJ1D7eTiew8EAMvE0Xy02i +sx7QBlrd9pPPV3WZ9fqGGmd4s7+W/jTcvedSVuWz5XV710GRBdxdaeOVDUO5/IOW +OZV7bIBaTxNyxtd9KXpEulKkKtVBRgkg/iKgtlswjbyJDNXXcPiHUv3a76xRLgez +Tv7QCdpw75j6VuZt27VXS9zlLCUVyJ4ueE742pyehizKV/Ma5ciSixqClnrDvFAS +adgOWkaLOusm+iPJtrCBvkIApPjW/jAux9JG9uWOdf3yzLnQh1vMBhBgu4M1t15n +3kfsmUjxpKEV/q2MYo45VU85FrmxY53/twIDAQABo0IwQDAPBgNVHRMBAf8EBTAD +AQH/MB0GA1UdDgQWBBS2oVQ5AsOgP46KvPrU+Bym0ToO/TAOBgNVHQ8BAf8EBAMC +AQYwDQYJKoZIhvcNAQENBQADggIBAHGlDs7k6b8/ONWJWsQCYftMxRQXLYtPU2sQ +F/xlhMcQSZDe28cmk4gmb3DWAl45oPePq5a1pRNcgRRtDoGCERuKTsZPpd1iHkTf +CVn0W3cLN+mLIMb4Ck4uWBzrM9DPhmDJ2vuAL55MYIR4PSFk1vtBHxgP58l1cb29 +XN40hz5BsA72udY/CROWFC/emh1auVbONTqwX3BNXuMp8SMoclm2q8KMZiYcdywm +djWLKKdpoPk79SPdhRB0yZADVpHnr7pH1BKXESLjokmUbOe3lEu6LaTaM4tMpkT/ +WjzGHWTYtTHkpjx6qFcL2+1hGsvxznN3Y6SHb0xRONbkX8eftoEq5IVIeVheO/jb +AoJnwTnbw3RLPTYe+SmTiGhbqEQZIfCn6IENLOiTNrQ3ssqwGyZ6miUfmpqAnksq +P/ujmv5zMnHCnsZy4YpoJ/HkD7TETKVhk/iXEAcqMCWpuchxuO9ozC1+9eB+D4Ko +b7a6bINDd82Kkhehnlt4Fj1F4jNy3eFmypnTycUm/Q1oBEauttmbjL4ZvrHG8hnj +XALKLNhvSgfZyTXaQHXyxKcZb55CEJh15pWLYLztxRLXis7VmFxWlgPF7ncGNf/P +5O4/E2Hu29othfDNrp2yGAlFw5Khchf8R7agCyzxxN5DaAhqXzvwdmP7zAYspsbi +DrW5viSP +-----END CERTIFICATE----- + +# Issuer: CN=Hellenic Academic and Research Institutions RootCA 2015 O=Hellenic Academic and Research Institutions Cert. Authority +# Subject: CN=Hellenic Academic and Research Institutions RootCA 2015 O=Hellenic Academic and Research Institutions Cert. 
Authority +# Label: "Hellenic Academic and Research Institutions RootCA 2015" +# Serial: 0 +# MD5 Fingerprint: ca:ff:e2:db:03:d9:cb:4b:e9:0f:ad:84:fd:7b:18:ce +# SHA1 Fingerprint: 01:0c:06:95:a6:98:19:14:ff:bf:5f:c6:b0:b6:95:ea:29:e9:12:a6 +# SHA256 Fingerprint: a0:40:92:9a:02:ce:53:b4:ac:f4:f2:ff:c6:98:1c:e4:49:6f:75:5e:6d:45:fe:0b:2a:69:2b:cd:52:52:3f:36 +-----BEGIN CERTIFICATE----- +MIIGCzCCA/OgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBpjELMAkGA1UEBhMCR1Ix +DzANBgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNhZGVtaWMgYW5k +IFJlc2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkxQDA+BgNVBAMT +N0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgUm9v +dENBIDIwMTUwHhcNMTUwNzA3MTAxMTIxWhcNNDAwNjMwMTAxMTIxWjCBpjELMAkG +A1UEBhMCR1IxDzANBgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNh +ZGVtaWMgYW5kIFJlc2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkx +QDA+BgNVBAMTN0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1 +dGlvbnMgUm9vdENBIDIwMTUwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC +AQDC+Kk/G4n8PDwEXT2QNrCROnk8ZlrvbTkBSRq0t89/TSNTt5AA4xMqKKYx8ZEA +4yjsriFBzh/a/X0SWwGDD7mwX5nh8hKDgE0GPt+sr+ehiGsxr/CL0BgzuNtFajT0 +AoAkKAoCFZVedioNmToUW/bLy1O8E00BiDeUJRtCvCLYjqOWXjrZMts+6PAQZe10 +4S+nfK8nNLspfZu2zwnI5dMK/IhlZXQK3HMcXM1AsRzUtoSMTFDPaI6oWa7CJ06C +ojXdFPQf/7J31Ycvqm59JCfnxssm5uX+Zwdj2EUN3TpZZTlYepKZcj2chF6IIbjV +9Cz82XBST3i4vTwri5WY9bPRaM8gFH5MXF/ni+X1NYEZN9cRCLdmvtNKzoNXADrD +gfgXy5I2XdGj2HUb4Ysn6npIQf1FGQatJ5lOwXBH3bWfgVMS5bGMSF0xQxfjjMZ6 +Y5ZLKTBOhE5iGV48zpeQpX8B653g+IuJ3SWYPZK2fu/Z8VFRfS0myGlZYeCsargq +NhEEelC9MoS+L9xy1dcdFkfkR2YgP/SWxa+OAXqlD3pk9Q0Yh9muiNX6hME6wGko +LfINaFGq46V3xqSQDqE3izEjR8EJCOtu93ib14L8hCCZSRm2Ekax+0VVFqmjZayc +Bw/qa9wfLgZy7IaIEuQt218FL+TwA9MmM+eAws1CoRc0CwIDAQABo0IwQDAPBgNV +HRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUcRVnyMjJvXVd +ctA4GGqd83EkVAswDQYJKoZIhvcNAQELBQADggIBAHW7bVRLqhBYRjTyYtcWNl0I +XtVsyIe9tC5G8jH4fOpCtZMWVdyhDBKg2mF+D1hYc2Ryx+hFjtyp8iY/xnmMsVMI +M4GwVhO+5lFc2JsKT0ucVlMC6U/2DWDqTUJV6HwbISHTGzrMd/K4kPFox/la/vot +9L/J9UUbzjgQKjeKeaO04wlshYaT/4mWJ3iBj2fjRnRUjtkNaeJK9E10A/+yd+2V +Z5fkscWrv2oj6NSU4kQoYsRL4vDY4ilrGnB+JGGTe08DMiUNRSQrlrRGar9KC/ea +j8GsGsVn82800vpzY4zvFrCopEYq+OsS7HK07/grfoxSwIuEVPkvPuNVqNxmsdnh +X9izjFk0WaSrT2y7HxjbdavYy5LNlDhhDgcGH0tGEPEVvo2FXDtKK4F5D7Rpn0lQ +l033DlZdwJVqwjbDG2jJ9SrcR5q+ss7FJej6A7na+RZukYT1HCjI/CbM1xyQVqdf +bzoEvM14iQuODy+jqk+iGxI9FghAD/FGTNeqewjBCvVtJ94Cj8rDtSvK6evIIVM4 +pcw72Hc3MKJP2W/R8kCtQXoXxdZKNYm3QdV8hn9VTYNKpXMgwDqvkPGaJI7ZjnHK +e7iG2rKPmT4dEw0SEe7Uq/DpFXYC5ODfqiAeW2GFZECpkJcNrVPSWh2HagCXZWK0 +vm9qp/UsQu0yrbYhnr68 +-----END CERTIFICATE----- + +# Issuer: CN=Hellenic Academic and Research Institutions ECC RootCA 2015 O=Hellenic Academic and Research Institutions Cert. Authority +# Subject: CN=Hellenic Academic and Research Institutions ECC RootCA 2015 O=Hellenic Academic and Research Institutions Cert. 
Authority +# Label: "Hellenic Academic and Research Institutions ECC RootCA 2015" +# Serial: 0 +# MD5 Fingerprint: 81:e5:b4:17:eb:c2:f5:e1:4b:0d:41:7b:49:92:fe:ef +# SHA1 Fingerprint: 9f:f1:71:8d:92:d5:9a:f3:7d:74:97:b4:bc:6f:84:68:0b:ba:b6:66 +# SHA256 Fingerprint: 44:b5:45:aa:8a:25:e6:5a:73:ca:15:dc:27:fc:36:d2:4c:1c:b9:95:3a:06:65:39:b1:15:82:dc:48:7b:48:33 +-----BEGIN CERTIFICATE----- +MIICwzCCAkqgAwIBAgIBADAKBggqhkjOPQQDAjCBqjELMAkGA1UEBhMCR1IxDzAN +BgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJl +c2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkxRDBCBgNVBAMTO0hl +bGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgRUNDIFJv +b3RDQSAyMDE1MB4XDTE1MDcwNzEwMzcxMloXDTQwMDYzMDEwMzcxMlowgaoxCzAJ +BgNVBAYTAkdSMQ8wDQYDVQQHEwZBdGhlbnMxRDBCBgNVBAoTO0hlbGxlbmljIEFj +YWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgQ2VydC4gQXV0aG9yaXR5 +MUQwQgYDVQQDEztIZWxsZW5pYyBBY2FkZW1pYyBhbmQgUmVzZWFyY2ggSW5zdGl0 +dXRpb25zIEVDQyBSb290Q0EgMjAxNTB2MBAGByqGSM49AgEGBSuBBAAiA2IABJKg +QehLgoRc4vgxEZmGZE4JJS+dQS8KrjVPdJWyUWRrjWvmP3CV8AVER6ZyOFB2lQJa +jq4onvktTpnvLEhvTCUp6NFxW98dwXU3tNf6e3pCnGoKVlp8aQuqgAkkbH7BRqNC +MEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFLQi +C4KZJAEOnLvkDv2/+5cgk5kqMAoGCCqGSM49BAMCA2cAMGQCMGfOFmI4oqxiRaep +lSTAGiecMjvAwNW6qef4BENThe5SId6d9SWDPp5YSy/XZxMOIQIwBeF1Ad5o7Sof +TUwJCA3sS61kFyjndc5FZXIhF8siQQ6ME5g4mlRtm8rifOoCWCKR +-----END CERTIFICATE----- + +# Issuer: CN=ISRG Root X1 O=Internet Security Research Group +# Subject: CN=ISRG Root X1 O=Internet Security Research Group +# Label: "ISRG Root X1" +# Serial: 172886928669790476064670243504169061120 +# MD5 Fingerprint: 0c:d2:f9:e0:da:17:73:e9:ed:86:4d:a5:e3:70:e7:4e +# SHA1 Fingerprint: ca:bd:2a:79:a1:07:6a:31:f2:1d:25:36:35:cb:03:9d:43:29:a5:e8 +# SHA256 Fingerprint: 96:bc:ec:06:26:49:76:f3:74:60:77:9a:cf:28:c5:a7:cf:e8:a3:c0:aa:e1:1a:8f:fc:ee:05:c0:bd:df:08:c6 +-----BEGIN CERTIFICATE----- +MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw +TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh +cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4 +WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu +ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY +MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc +h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+ +0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U +A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW +T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH +B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC +B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv +KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn +OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn +jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw +qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI +rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV +HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq +hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL +ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ +3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK +NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5 +ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur +TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC +jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc 
+oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq +4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA +mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d +emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc= +-----END CERTIFICATE----- + +# Issuer: O=FNMT-RCM OU=AC RAIZ FNMT-RCM +# Subject: O=FNMT-RCM OU=AC RAIZ FNMT-RCM +# Label: "AC RAIZ FNMT-RCM" +# Serial: 485876308206448804701554682760554759 +# MD5 Fingerprint: e2:09:04:b4:d3:bd:d1:a0:14:fd:1a:d2:47:c4:57:1d +# SHA1 Fingerprint: ec:50:35:07:b2:15:c4:95:62:19:e2:a8:9a:5b:42:99:2c:4c:2c:20 +# SHA256 Fingerprint: eb:c5:57:0c:29:01:8c:4d:67:b1:aa:12:7b:af:12:f7:03:b4:61:1e:bc:17:b7:da:b5:57:38:94:17:9b:93:fa +-----BEGIN CERTIFICATE----- +MIIFgzCCA2ugAwIBAgIPXZONMGc2yAYdGsdUhGkHMA0GCSqGSIb3DQEBCwUAMDsx +CzAJBgNVBAYTAkVTMREwDwYDVQQKDAhGTk1ULVJDTTEZMBcGA1UECwwQQUMgUkFJ +WiBGTk1ULVJDTTAeFw0wODEwMjkxNTU5NTZaFw0zMDAxMDEwMDAwMDBaMDsxCzAJ +BgNVBAYTAkVTMREwDwYDVQQKDAhGTk1ULVJDTTEZMBcGA1UECwwQQUMgUkFJWiBG +Tk1ULVJDTTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALpxgHpMhm5/ +yBNtwMZ9HACXjywMI7sQmkCpGreHiPibVmr75nuOi5KOpyVdWRHbNi63URcfqQgf +BBckWKo3Shjf5TnUV/3XwSyRAZHiItQDwFj8d0fsjz50Q7qsNI1NOHZnjrDIbzAz +WHFctPVrbtQBULgTfmxKo0nRIBnuvMApGGWn3v7v3QqQIecaZ5JCEJhfTzC8PhxF +tBDXaEAUwED653cXeuYLj2VbPNmaUtu1vZ5Gzz3rkQUCwJaydkxNEJY7kvqcfw+Z +374jNUUeAlz+taibmSXaXvMiwzn15Cou08YfxGyqxRxqAQVKL9LFwag0Jl1mpdIC +IfkYtwb1TplvqKtMUejPUBjFd8g5CSxJkjKZqLsXF3mwWsXmo8RZZUc1g16p6DUL +mbvkzSDGm0oGObVo/CK67lWMK07q87Hj/LaZmtVC+nFNCM+HHmpxffnTtOmlcYF7 +wk5HlqX2doWjKI/pgG6BU6VtX7hI+cL5NqYuSf+4lsKMB7ObiFj86xsc3i1w4peS +MKGJ47xVqCfWS+2QrYv6YyVZLag13cqXM7zlzced0ezvXg5KkAYmY6252TUtB7p2 +ZSysV4999AeU14ECll2jB0nVetBX+RvnU0Z1qrB5QstocQjpYL05ac70r8NWQMet +UqIJ5G+GR4of6ygnXYMgrwTJbFaai0b1AgMBAAGjgYMwgYAwDwYDVR0TAQH/BAUw +AwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFPd9xf3E6Jobd2Sn9R2gzL+H +YJptMD4GA1UdIAQ3MDUwMwYEVR0gADArMCkGCCsGAQUFBwIBFh1odHRwOi8vd3d3 +LmNlcnQuZm5tdC5lcy9kcGNzLzANBgkqhkiG9w0BAQsFAAOCAgEAB5BK3/MjTvDD +nFFlm5wioooMhfNzKWtN/gHiqQxjAb8EZ6WdmF/9ARP67Jpi6Yb+tmLSbkyU+8B1 +RXxlDPiyN8+sD8+Nb/kZ94/sHvJwnvDKuO+3/3Y3dlv2bojzr2IyIpMNOmqOFGYM +LVN0V2Ue1bLdI4E7pWYjJ2cJj+F3qkPNZVEI7VFY/uY5+ctHhKQV8Xa7pO6kO8Rf +77IzlhEYt8llvhjho6Tc+hj507wTmzl6NLrTQfv6MooqtyuGC2mDOL7Nii4LcK2N +JpLuHvUBKwrZ1pebbuCoGRw6IYsMHkCtA+fdZn71uSANA+iW+YJF1DngoABd15jm +fZ5nc8OaKveri6E6FO80vFIOiZiaBECEHX5FaZNXzuvO+FB8TxxuBEOb+dY7Ixjp +6o7RTUaN8Tvkasq6+yO3m/qZASlaWFot4/nUbQ4mrcFuNLwy+AwF+mWj2zs3gyLp +1txyM/1d8iC9djwj2ij3+RvrWWTV3F9yfiD8zYm1kGdNYno/Tq0dwzn+evQoFt9B +9kiABdcPUXmsEKvU7ANm5mqwujGSQkBqvjrTcuFqN1W8rB2Vt2lh8kORdOag0wok +RqEIr9baRRmW1FMdW4R58MD3R++Lj8UGrp1MYp3/RgT408m2ECVAdf4WqslKYIYv +uu8wd+RU4riEmViAqhOLUTpPSPaLtrM= +-----END CERTIFICATE----- + +# Issuer: CN=Amazon Root CA 1 O=Amazon +# Subject: CN=Amazon Root CA 1 O=Amazon +# Label: "Amazon Root CA 1" +# Serial: 143266978916655856878034712317230054538369994 +# MD5 Fingerprint: 43:c6:bf:ae:ec:fe:ad:2f:18:c6:88:68:30:fc:c8:e6 +# SHA1 Fingerprint: 8d:a7:f9:65:ec:5e:fc:37:91:0f:1c:6e:59:fd:c1:cc:6a:6e:de:16 +# SHA256 Fingerprint: 8e:cd:e6:88:4f:3d:87:b1:12:5b:a3:1a:c3:fc:b1:3d:70:16:de:7f:57:cc:90:4f:e1:cb:97:c6:ae:98:19:6e +-----BEGIN CERTIFICATE----- +MIIDQTCCAimgAwIBAgITBmyfz5m/jAo54vB4ikPmljZbyjANBgkqhkiG9w0BAQsF +ADA5MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6 +b24gUm9vdCBDQSAxMB4XDTE1MDUyNjAwMDAwMFoXDTM4MDExNzAwMDAwMFowOTEL +MAkGA1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJv +b3QgQ0EgMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALJ4gHHKeNXj 
+ca9HgFB0fW7Y14h29Jlo91ghYPl0hAEvrAIthtOgQ3pOsqTQNroBvo3bSMgHFzZM
+9O6II8c+6zf1tRn4SWiw3te5djgdYZ6k/oI2peVKVuRF4fn9tBb6dNqcmzU5L/qw
+IFAGbHrQgLKm+a/sRxmPUDgH3KKHOVj4utWp+UhnMJbulHheb4mjUcAwhmahRWa6
+VOujw5H5SNz/0egwLX0tdHA114gk957EWW67c4cX8jJGKLhD+rcdqsq08p8kDi1L
+93FcXmn/6pUCyziKrlA4b9v7LWIbxcceVOF34GfID5yHI9Y/QCB/IIDEgEw+OyQm
+jgSubJrIqg0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC
+AYYwHQYDVR0OBBYEFIQYzIU07LwMlJQuCFmcx7IQTgoIMA0GCSqGSIb3DQEBCwUA
+A4IBAQCY8jdaQZChGsV2USggNiMOruYou6r4lK5IpDB/G/wkjUu0yKGX9rbxenDI
+U5PMCCjjmCXPI6T53iHTfIUJrU6adTrCC2qJeHZERxhlbI1Bjjt/msv0tadQ1wUs
+N+gDS63pYaACbvXy8MWy7Vu33PqUXHeeE6V/Uq2V8viTO96LXFvKWlJbYK8U90vv
+o/ufQJVtMVT8QtPHRh8jrdkPSHCa2XV4cdFyQzR1bldZwgJcJmApzyMZFo6IQ6XU
+5MsI+yMRQ+hDKXJioaldXgjUkK642M4UwtBV8ob2xJNDd2ZhwLnoQdeXeGADbkpy
+rqXRfboQnoZsG4q5WTP468SQvvG5
+-----END CERTIFICATE-----
+
+# Issuer: CN=Amazon Root CA 2 O=Amazon
+# Subject: CN=Amazon Root CA 2 O=Amazon
+# Label: "Amazon Root CA 2"
+# Serial: 143266982885963551818349160658925006970653239
+# MD5 Fingerprint: c8:e5:8d:ce:a8:42:e2:7a:c0:2a:5c:7c:9e:26:bf:66
+# SHA1 Fingerprint: 5a:8c:ef:45:d7:a6:98:59:76:7a:8c:8b:44:96:b5:78:cf:47:4b:1a
+# SHA256 Fingerprint: 1b:a5:b2:aa:8c:65:40:1a:82:96:01:18:f8:0b:ec:4f:62:30:4d:83:ce:c4:71:3a:19:c3:9c:01:1e:a4:6d:b4
+-----BEGIN CERTIFICATE-----
+MIIFQTCCAymgAwIBAgITBmyf0pY1hp8KD+WGePhbJruKNzANBgkqhkiG9w0BAQwF
+ADA5MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6
+b24gUm9vdCBDQSAyMB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTEL
+MAkGA1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJv
+b3QgQ0EgMjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK2Wny2cSkxK
+gXlRmeyKy2tgURO8TW0G/LAIjd0ZEGrHJgw12MBvIITplLGbhQPDW9tK6Mj4kHbZ
+W0/jTOgGNk3Mmqw9DJArktQGGWCsN0R5hYGCrVo34A3MnaZMUnbqQ523BNFQ9lXg
+1dKmSYXpN+nKfq5clU1Imj+uIFptiJXZNLhSGkOQsL9sBbm2eLfq0OQ6PBJTYv9K
+8nu+NQWpEjTj82R0Yiw9AElaKP4yRLuH3WUnAnE72kr3H9rN9yFVkE8P7K6C4Z9r
+2UXTu/Bfh+08LDmG2j/e7HJV63mjrdvdfLC6HM783k81ds8P+HgfajZRRidhW+me
+z/CiVX18JYpvL7TFz4QuK/0NURBs+18bvBt+xa47mAExkv8LV/SasrlX6avvDXbR
+8O70zoan4G7ptGmh32n2M8ZpLpcTnqWHsFcQgTfJU7O7f/aS0ZzQGPSSbtqDT6Zj
+mUyl+17vIWR6IF9sZIUVyzfpYgwLKhbcAS4y2j5L9Z469hdAlO+ekQiG+r5jqFoz
+7Mt0Q5X5bGlSNscpb/xVA1wf+5+9R+vnSUeVC06JIglJ4PVhHvG/LopyboBZ/1c6
++XUyo05f7O0oYtlNc/LMgRdg7c3r3NunysV+Ar3yVAhU/bQtCSwXVEqY0VThUWcI
+0u1ufm8/0i2BWSlmy5A5lREedCf+3euvAgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMB
+Af8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQWBBSwDPBMMPQFWAJI/TPlUq9LhONm
+UjANBgkqhkiG9w0BAQwFAAOCAgEAqqiAjw54o+Ci1M3m9Zh6O+oAA7CXDpO8Wqj2
+LIxyh6mx/H9z/WNxeKWHWc8w4Q0QshNabYL1auaAn6AFC2jkR2vHat+2/XcycuUY
++gn0oJMsXdKMdYV2ZZAMA3m3MSNjrXiDCYZohMr/+c8mmpJ5581LxedhpxfL86kS
+k5Nrp+gvU5LEYFiwzAJRGFuFjWJZY7attN6a+yb3ACfAXVU3dJnJUH/jWS5E4ywl
+7uxMMne0nxrpS10gxdr9HIcWxkPo1LsmmkVwXqkLN1PiRnsn/eBG8om3zEK2yygm
+btmlyTrIQRNg91CMFa6ybRoVGld45pIq2WWQgj9sAq+uEjonljYE1x2igGOpm/Hl
+urR8FLBOybEfdF849lHqm/osohHUqS0nGkWxr7JOcQ3AWEbWaQbLU8uz/mtBzUF+
+fUwPfHJ5elnNXkoOrJupmHN5fLT0zLm4BwyydFy4x2+IoZCn9Kr5v2c69BoVYh63
+n749sSmvZ6ES8lgQGVMDMBu4Gon2nL2XA46jCfMdiyHxtN/kHNGfZQIG6lzWE7OE
+76KlXIx3KadowGuuQNKotOrN8I1LOJwZmhsoVLiJkO/KdYE+HvJkJMcYr07/R54H
+9jVlpNMKVv/1F2Rs76giJUmTtt8AF9pYfl3uxRuw0dFfIRDH+fO6AgonB8Xx1sfT
+4PsJYGw=
+-----END CERTIFICATE-----
+
+# Issuer: CN=Amazon Root CA 3 O=Amazon
+# Subject: CN=Amazon Root CA 3 O=Amazon
+# Label: "Amazon Root CA 3"
+# Serial: 143266986699090766294700635381230934788665930
+# MD5 Fingerprint: a0:d4:ef:0b:f7:b5:d8:49:95:2a:ec:f5:c4:fc:81:87
+# SHA1 Fingerprint: 0d:44:dd:8c:3c:8c:1a:1a:58:75:64:81:e9:0f:2e:2a:ff:b3:d2:6e
+# SHA256 Fingerprint: 18:ce:6c:fe:7b:f1:4e:60:b2:e3:47:b8:df:e8:68:cb:31:d0:2e:bb:3a:da:27:15:69:f5:03:43:b4:6d:b3:a4
+-----BEGIN CERTIFICATE-----
+MIIBtjCCAVugAwIBAgITBmyf1XSXNmY/Owua2eiedgPySjAKBggqhkjOPQQDAjA5
+MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24g
+Um9vdCBDQSAzMB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkG
+A1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3Qg
+Q0EgMzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABCmXp8ZBf8ANm+gBG1bG8lKl
+ui2yEujSLtf6ycXYqm0fc4E7O5hrOXwzpcVOho6AF2hiRVd9RFgdszflZwjrZt6j
+QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQWBBSr
+ttvXBp43rDCGB5Fwx5zEGbF4wDAKBggqhkjOPQQDAgNJADBGAiEA4IWSoxe3jfkr
+BqWTrBqYaGFy+uGh0PsceGCmQ5nFuMQCIQCcAu/xlJyzlvnrxir4tiz+OpAUFteM
+YyRIHN8wfdVoOw==
+-----END CERTIFICATE-----
+
+# Issuer: CN=Amazon Root CA 4 O=Amazon
+# Subject: CN=Amazon Root CA 4 O=Amazon
+# Label: "Amazon Root CA 4"
+# Serial: 143266989758080763974105200630763877849284878
+# MD5 Fingerprint: 89:bc:27:d5:eb:17:8d:06:6a:69:d5:fd:89:47:b4:cd
+# SHA1 Fingerprint: f6:10:84:07:d6:f8:bb:67:98:0c:c2:e2:44:c2:eb:ae:1c:ef:63:be
+# SHA256 Fingerprint: e3:5d:28:41:9e:d0:20:25:cf:a6:90:38:cd:62:39:62:45:8d:a5:c6:95:fb:de:a3:c2:2b:0b:fb:25:89:70:92
+-----BEGIN CERTIFICATE-----
+MIIB8jCCAXigAwIBAgITBmyf18G7EEwpQ+Vxe3ssyBrBDjAKBggqhkjOPQQDAzA5
+MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24g
+Um9vdCBDQSA0MB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkG
+A1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3Qg
+Q0EgNDB2MBAGByqGSM49AgEGBSuBBAAiA2IABNKrijdPo1MN/sGKe0uoe0ZLY7Bi
+9i0b2whxIdIA6GO9mif78DluXeo9pcmBqqNbIJhFXRbb/egQbeOc4OO9X4Ri83Bk
+M6DLJC9wuoihKqB1+IGuYgbEgds5bimwHvouXKNCMEAwDwYDVR0TAQH/BAUwAwEB
+/zAOBgNVHQ8BAf8EBAMCAYYwHQYDVR0OBBYEFNPsxzplbszh2naaVvuc84ZtV+WB
+MAoGCCqGSM49BAMDA2gAMGUCMDqLIfG9fhGt0O9Yli/W651+kI0rz2ZVwyzjKKlw
+CkcO8DdZEv8tmZQoTipPNU0zWgIxAOp1AE47xDqUEpHJWEadIRNyp4iciuRMStuW
+1KyLa2tJElMzrdfkviT8tQp21KW8EA==
+-----END CERTIFICATE-----
+
+# Issuer: CN=TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1 O=Turkiye Bilimsel ve Teknolojik Arastirma Kurumu - TUBITAK OU=Kamu Sertifikasyon Merkezi - Kamu SM
+# Subject: CN=TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1 O=Turkiye Bilimsel ve Teknolojik Arastirma Kurumu - TUBITAK OU=Kamu Sertifikasyon Merkezi - Kamu SM
+# Label: "TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1"
+# Serial: 1
+# MD5 Fingerprint: dc:00:81:dc:69:2f:3e:2f:b0:3b:f6:3d:5a:91:8e:49
+# SHA1 Fingerprint: 31:43:64:9b:ec:ce:27:ec:ed:3a:3f:0b:8f:0d:e4:e8:91:dd:ee:ca
+# SHA256 Fingerprint: 46:ed:c3:68:90:46:d5:3a:45:3f:b3:10:4a:b8:0d:ca:ec:65:8b:26:60:ea:16:29:dd:7e:86:79:90:64:87:16
+-----BEGIN CERTIFICATE-----
+MIIEYzCCA0ugAwIBAgIBATANBgkqhkiG9w0BAQsFADCB0jELMAkGA1UEBhMCVFIx
+GDAWBgNVBAcTD0dlYnplIC0gS29jYWVsaTFCMEAGA1UEChM5VHVya2l5ZSBCaWxp
+bXNlbCB2ZSBUZWtub2xvamlrIEFyYXN0aXJtYSBLdXJ1bXUgLSBUVUJJVEFLMS0w
+KwYDVQQLEyRLYW11IFNlcnRpZmlrYXN5b24gTWVya2V6aSAtIEthbXUgU00xNjA0
+BgNVBAMTLVRVQklUQUsgS2FtdSBTTSBTU0wgS29rIFNlcnRpZmlrYXNpIC0gU3Vy
+dW0gMTAeFw0xMzExMjUwODI1NTVaFw00MzEwMjUwODI1NTVaMIHSMQswCQYDVQQG
+EwJUUjEYMBYGA1UEBxMPR2ViemUgLSBLb2NhZWxpMUIwQAYDVQQKEzlUdXJraXll
+IEJpbGltc2VsIHZlIFRla25vbG9qaWsgQXJhc3Rpcm1hIEt1cnVtdSAtIFRVQklU
+QUsxLTArBgNVBAsTJEthbXUgU2VydGlmaWthc3lvbiBNZXJrZXppIC0gS2FtdSBT
+TTE2MDQGA1UEAxMtVFVCSVRBSyBLYW11IFNNIFNTTCBLb2sgU2VydGlmaWthc2kg
+LSBTdXJ1bSAxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAr3UwM6q7
+a9OZLBI3hNmNe5eA027n/5tQlT6QlVZC1xl8JoSNkvoBHToP4mQ4t4y86Ij5iySr
+LqP1N+RAjhgleYN1Hzv/bKjFxlb4tO2KRKOrbEz8HdDc72i9z+SqzvBV96I01INr
+N3wcwv61A+xXzry0tcXtAA9TNypN9E8Mg/uGz8v+jE69h/mniyFXnHrfA2eJLJ2X
+YacQuFWQfw4tJzh03+f92k4S400VIgLI4OD8D62K18lUUMw7D8oWgITQUVbDjlZ/
+iSIzL+aFCr2lqBs23tPcLG07xxO9WSMs5uWk99gL7eqQQESolbuT1dCANLZGeA4f
+AJNG4e7p+exPFwIDAQABo0IwQDAdBgNVHQ4EFgQUZT/HiobGPN08VFw1+DrtUgxH
+V8gwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL
+BQADggEBACo/4fEyjq7hmFxLXs9rHmoJ0iKpEsdeV31zVmSAhHqT5Am5EM2fKifh
+AHe+SMg1qIGf5LgsyX8OsNJLN13qudULXjS99HMpw+0mFZx+CFOKWI3QSyjfwbPf
+IPP54+M638yclNhOT8NrF7f3cuitZjO1JVOr4PhMqZ398g26rrnZqsZr+ZO7rqu4
+lzwDGrpDxpa5RXI4s6ehlj2Re37AIVNMh+3yC1SVUZPVIqUNivGTDj5UDrDYyU7c
+8jEyVupk+eq1nRZmQnLzf9OxMUP8pI4X8W0jq5Rm+K37DwhuJi1/FwcJsoz7UMCf
+lo3Ptv0AnVoUmr8CRPXBwp8iXqIPoeM=
+-----END CERTIFICATE-----
+
+# Issuer: CN=GDCA TrustAUTH R5 ROOT O=GUANG DONG CERTIFICATE AUTHORITY CO.,LTD.
+# Subject: CN=GDCA TrustAUTH R5 ROOT O=GUANG DONG CERTIFICATE AUTHORITY CO.,LTD.
+# Label: "GDCA TrustAUTH R5 ROOT"
+# Serial: 9009899650740120186
+# MD5 Fingerprint: 63:cc:d9:3d:34:35:5c:6f:53:a3:e2:08:70:48:1f:b4
+# SHA1 Fingerprint: 0f:36:38:5b:81:1a:25:c3:9b:31:4e:83:ca:e9:34:66:70:cc:74:b4
+# SHA256 Fingerprint: bf:ff:8f:d0:44:33:48:7d:6a:8a:a6:0c:1a:29:76:7a:9f:c2:bb:b0:5e:42:0f:71:3a:13:b9:92:89:1d:38:93
+-----BEGIN CERTIFICATE-----
+MIIFiDCCA3CgAwIBAgIIfQmX/vBH6nowDQYJKoZIhvcNAQELBQAwYjELMAkGA1UE
+BhMCQ04xMjAwBgNVBAoMKUdVQU5HIERPTkcgQ0VSVElGSUNBVEUgQVVUSE9SSVRZ
+IENPLixMVEQuMR8wHQYDVQQDDBZHRENBIFRydXN0QVVUSCBSNSBST09UMB4XDTE0
+MTEyNjA1MTMxNVoXDTQwMTIzMTE1NTk1OVowYjELMAkGA1UEBhMCQ04xMjAwBgNV
+BAoMKUdVQU5HIERPTkcgQ0VSVElGSUNBVEUgQVVUSE9SSVRZIENPLixMVEQuMR8w
+HQYDVQQDDBZHRENBIFRydXN0QVVUSCBSNSBST09UMIICIjANBgkqhkiG9w0BAQEF
+AAOCAg8AMIICCgKCAgEA2aMW8Mh0dHeb7zMNOwZ+Vfy1YI92hhJCfVZmPoiC7XJj
+Dp6L3TQsAlFRwxn9WVSEyfFrs0yw6ehGXTjGoqcuEVe6ghWinI9tsJlKCvLriXBj
+TnnEt1u9ol2x8kECK62pOqPseQrsXzrj/e+APK00mxqriCZ7VqKChh/rNYmDf1+u
+KU49tm7srsHwJ5uu4/Ts765/94Y9cnrrpftZTqfrlYwiOXnhLQiPzLyRuEH3FMEj
+qcOtmkVEs7LXLM3GKeJQEK5cy4KOFxg2fZfmiJqwTTQJ9Cy5WmYqsBebnh52nUpm
+MUHfP/vFBu8btn4aRjb3ZGM74zkYI+dndRTVdVeSN72+ahsmUPI2JgaQxXABZG12
+ZuGR224HwGGALrIuL4xwp9E7PLOR5G62xDtw8mySlwnNR30YwPO7ng/Wi64HtloP
+zgsMR6flPri9fcebNaBhlzpBdRfMK5Z3KpIhHtmVdiBnaM8Nvd/WHwlqmuLMc3Gk
+L30SgLdTMEZeS1SZD2fJpcjyIMGC7J0R38IC+xo70e0gmu9lZJIQDSri3nDxGGeC
+jGHeuLzRL5z7D9Ar7Rt2ueQ5Vfj4oR24qoAATILnsn8JuLwwoC8N9VKejveSswoA
+HQBUlwbgsQfZxw9cZX08bVlX5O2ljelAU58VS6Bx9hoh49pwBiFYFIeFd3mqgnkC
+AwEAAaNCMEAwHQYDVR0OBBYEFOLJQJ9NzuiaoXzPDj9lxSmIahlRMA8GA1UdEwEB
+/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4ICAQDRSVfg
+p8xoWLoBDysZzY2wYUWsEe1jUGn4H3++Fo/9nesLqjJHdtJnJO29fDMylyrHBYZm
+DRd9FBUb1Ov9H5r2XpdptxolpAqzkT9fNqyL7FeoPueBihhXOYV0GkLH6VsTX4/5
+COmSdI31R9KrO9b7eGZONn356ZLpBN79SWP8bfsUcZNnL0dKt7n/HipzcEYwv1ry
+L3ml4Y0M2fmyYzeMN2WFcGpcWwlyua1jPLHd+PwyvzeG5LuOmCd+uh8W4XAR8gPf
+JWIyJyYYMoSf/wA6E7qaTfRPuBRwIrHKK5DOKcFw9C+df/KQHtZa37dG/OaG+svg
+IHZ6uqbL9XzeYqWxi+7egmaKTjowHz+Ay60nugxe19CxVsp3cbK1daFQqUBDF8Io
+2c9Si1vIY9RCPqAzekYu9wogRlR+ak8x8YF+QnQ4ZXMn7sZ8uI7XpTrXmKGcjBBV
+09tL7ECQ8s1uV9JiDnxXk7Gnbc2dg7sq5+W2O3FYrf3RRbxake5TFW/TRQl1brqQ
+XR4EzzffHqhmsYzmIGrv/EhOdJhCrylvLmrH+33RZjEizIYAfmaDDEL0vTSSwxrq
+T8p+ck0LcIymSLumoRT2+1hEmRSuqguTaaApJUqlyyvdimYHFngVV3Eb7PVHhPOe
+MTd61X8kreS8/f3MboPoDKi3QWwH3b08hpcv0g==
+-----END CERTIFICATE-----
+
+# Issuer: CN=TrustCor RootCert CA-1 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Subject: CN=TrustCor RootCert CA-1 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Label: "TrustCor RootCert CA-1"
+# Serial: 15752444095811006489
+# MD5 Fingerprint: 6e:85:f1:dc:1a:00:d3:22:d5:b2:b2:ac:6b:37:05:45
+# SHA1 Fingerprint: ff:bd:cd:e7:82:c8:43:5e:3c:6f:26:86:5c:ca:a8:3a:45:5b:c3:0a
+# SHA256 Fingerprint: d4:0e:9c:86:cd:8f:e4:68:c1:77:69:59:f4:9e:a7:74:fa:54:86:84:b6:c4:06:f3:90:92:61:f4:dc:e2:57:5c
+-----BEGIN CERTIFICATE-----
+MIIEMDCCAxigAwIBAgIJANqb7HHzA7AZMA0GCSqGSIb3DQEBCwUAMIGkMQswCQYD
+VQQGEwJQQTEPMA0GA1UECAwGUGFuYW1hMRQwEgYDVQQHDAtQYW5hbWEgQ2l0eTEk
+MCIGA1UECgwbVHJ1c3RDb3IgU3lzdGVtcyBTLiBkZSBSLkwuMScwJQYDVQQLDB5U
+cnVzdENvciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxHzAdBgNVBAMMFlRydXN0Q29y
+IFJvb3RDZXJ0IENBLTEwHhcNMTYwMjA0MTIzMjE2WhcNMjkxMjMxMTcyMzE2WjCB
+pDELMAkGA1UEBhMCUEExDzANBgNVBAgMBlBhbmFtYTEUMBIGA1UEBwwLUGFuYW1h
+IENpdHkxJDAiBgNVBAoMG1RydXN0Q29yIFN5c3RlbXMgUy4gZGUgUi5MLjEnMCUG
+A1UECwweVHJ1c3RDb3IgQ2VydGlmaWNhdGUgQXV0aG9yaXR5MR8wHQYDVQQDDBZU
+cnVzdENvciBSb290Q2VydCBDQS0xMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
+CgKCAQEAv463leLCJhJrMxnHQFgKq1mqjQCj/IDHUHuO1CAmujIS2CNUSSUQIpid
+RtLByZ5OGy4sDjjzGiVoHKZaBeYei0i/mJZ0PmnK6bV4pQa81QBeCQryJ3pS/C3V
+seq0iWEk8xoT26nPUu0MJLq5nux+AHT6k61sKZKuUbS701e/s/OojZz0JEsq1pme
+9J7+wH5COucLlVPat2gOkEz7cD+PSiyU8ybdY2mplNgQTsVHCJCZGxdNuWxu72CV
+EY4hgLW9oHPY0LJ3xEXqWib7ZnZ2+AYfYW0PVcWDtxBWcgYHpfOxGgMFZA6dWorW
+hnAbJN7+KIor0Gqw/Hqi3LJ5DotlDwIDAQABo2MwYTAdBgNVHQ4EFgQU7mtJPHo/
+DeOxCbeKyKsZn3MzUOcwHwYDVR0jBBgwFoAU7mtJPHo/DeOxCbeKyKsZn3MzUOcw
+DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQAD
+ggEBACUY1JGPE+6PHh0RU9otRCkZoB5rMZ5NDp6tPVxBb5UrJKF5mDo4Nvu7Zp5I
+/5CQ7z3UuJu0h3U/IJvOcs+hVcFNZKIZBqEHMwwLKeXx6quj7LUKdJDHfXLy11yf
+ke+Ri7fc7Waiz45mO7yfOgLgJ90WmMCV1Aqk5IGadZQ1nJBfiDcGrVmVCrDRZ9MZ
+yonnMlo2HD6CqFqTvsbQZJG2z9m2GM/bftJlo6bEjhcxwft+dtvTheNYsnd6djts
+L1Ac59v2Z3kf9YKVmgenFK+P3CghZwnS1k1aHBkcjndcw5QkPTJrS37UeJSDvjdN
+zl/HHk484IkzlQsPpTLWPFp5LBk=
+-----END CERTIFICATE-----
+
+# Issuer: CN=TrustCor RootCert CA-2 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Subject: CN=TrustCor RootCert CA-2 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Label: "TrustCor RootCert CA-2"
+# Serial: 2711694510199101698
+# MD5 Fingerprint: a2:e1:f8:18:0b:ba:45:d5:c7:41:2a:bb:37:52:45:64
+# SHA1 Fingerprint: b8:be:6d:cb:56:f1:55:b9:63:d4:12:ca:4e:06:34:c7:94:b2:1c:c0
+# SHA256 Fingerprint: 07:53:e9:40:37:8c:1b:d5:e3:83:6e:39:5d:ae:a5:cb:83:9e:50:46:f1:bd:0e:ae:19:51:cf:10:fe:c7:c9:65
+-----BEGIN CERTIFICATE-----
+MIIGLzCCBBegAwIBAgIIJaHfyjPLWQIwDQYJKoZIhvcNAQELBQAwgaQxCzAJBgNV
+BAYTAlBBMQ8wDQYDVQQIDAZQYW5hbWExFDASBgNVBAcMC1BhbmFtYSBDaXR5MSQw
+IgYDVQQKDBtUcnVzdENvciBTeXN0ZW1zIFMuIGRlIFIuTC4xJzAlBgNVBAsMHlRy
+dXN0Q29yIENlcnRpZmljYXRlIEF1dGhvcml0eTEfMB0GA1UEAwwWVHJ1c3RDb3Ig
+Um9vdENlcnQgQ0EtMjAeFw0xNjAyMDQxMjMyMjNaFw0zNDEyMzExNzI2MzlaMIGk
+MQswCQYDVQQGEwJQQTEPMA0GA1UECAwGUGFuYW1hMRQwEgYDVQQHDAtQYW5hbWEg
+Q2l0eTEkMCIGA1UECgwbVHJ1c3RDb3IgU3lzdGVtcyBTLiBkZSBSLkwuMScwJQYD
+VQQLDB5UcnVzdENvciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxHzAdBgNVBAMMFlRy
+dXN0Q29yIFJvb3RDZXJ0IENBLTIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK
+AoICAQCnIG7CKqJiJJWQdsg4foDSq8GbZQWU9MEKENUCrO2fk8eHyLAnK0IMPQo+
+QVqedd2NyuCb7GgypGmSaIwLgQ5WoD4a3SwlFIIvl9NkRvRUqdw6VC0xK5mC8tkq
+1+9xALgxpL56JAfDQiDyitSSBBtlVkxs1Pu2YVpHI7TYabS3OtB0PAx1oYxOdqHp
+2yqlO/rOsP9+aij9JxzIsekp8VduZLTQwRVtDr4uDkbIXvRR/u8OYzo7cbrPb1nK
+DOObXUm4TOJXsZiKQlecdu/vvdFoqNL0Cbt3Nb4lggjEFixEIFapRBF37120Hape
+az6LMvYHL1cEksr1/p3C6eizjkxLAjHZ5DxIgif3GIJ2SDpxsROhOdUuxTTCHWKF
+3wP+TfSvPd9cW436cOGlfifHhi5qjxLGhF5DUVCcGZt45vz27Ud+ez1m7xMTiF88
+oWP7+ayHNZ/zgp6kPwqcMWmLmaSISo5uZk3vFsQPeSghYA2FFn3XVDjxklb9tTNM
+g9zXEJ9L/cb4Qr26fHMC4P99zVvh1Kxhe1fVSntb1IVYJ12/+CtgrKAmrhQhJ8Z3
+mjOAPF5GP/fDsaOGM8boXg25NSyqRsGFAnWAoOsk+xWq5Gd/bnc/9ASKL3x74xdh
+8N0JqSDIvgmk0H5Ew7IwSjiqqewYmgeCK9u4nBit2uBGF6zPXQIDAQABo2MwYTAd
+BgNVHQ4EFgQU2f4hQG6UnrybPZx9mCAZ5YwwYrIwHwYDVR0jBBgwFoAU2f4hQG6U
+nrybPZx9mCAZ5YwwYrIwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYw
+DQYJKoZIhvcNAQELBQADggIBAJ5Fngw7tu/hOsh80QA9z+LqBrWyOrsGS2h60COX
+dKcs8AjYeVrXWoSK2BKaG9l9XE1wxaX5q+WjiYndAfrs3fnpkpfbsEZC89NiqpX+
+MWcUaViQCqoL7jcjx1BRtPV+nuN79+TMQjItSQzL/0kMmx40/W5ulop5A7Zv2wnL
+/V9lFDfhOPXzYRZY5LVtDQsEGz9QLX+zx3oaFoBg+Iof6Rsqxvm6ARppv9JYx1RX
+CI/hOWB3S6xZhBqI8d3LT3jX5+EzLfzuQfogsL7L9ziUwOHQhQ+77Sxzq+3+knYa
+ZH9bDTMJBzN7Bj8RpFxwPIXAz+OQqIN3+tvmxYxoZxBnpVIt8MSZj3+/0WvitUfW
+2dCFmU2Umw9Lje4AWkcdEQOsQRivh7dvDDqPys/cA8GiCcjl/YBeyGBCARsaU1q7
+N6a3vLqE6R5sGtRk2tRD/pOLS/IseRYQ1JMLiI+h2IYURpFHmygk71dSTlxCnKr3
+Sewn6EAes6aJInKc9Q0ztFijMDvd1GpUk74aTfOTlPf8hAs/hCBcNANExdqtvArB
+As8e5ZTZ845b2EzwnexhF7sUMlQMAimTHpKG9n/v55IFDlndmQguLvqcAFLTxWYp
+5KeXRKQOKIETNcX2b2TmQcTVL8w0RSXPQQCWPUouwpaYT05KnJe32x+SMsj/D1Fu
+1uwJ
+-----END CERTIFICATE-----
+
+# Issuer: CN=TrustCor ECA-1 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Subject: CN=TrustCor ECA-1 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority
+# Label: "TrustCor ECA-1"
+# Serial: 9548242946988625984
+# MD5 Fingerprint: 27:92:23:1d:0a:f5:40:7c:e9:e6:6b:9d:d8:f5:e7:6c
+# SHA1 Fingerprint: 58:d1:df:95:95:67:6b:63:c0:f0:5b:1c:17:4d:8b:84:0b:c8:78:bd
+# SHA256 Fingerprint: 5a:88:5d:b1:9c:01:d9:12:c5:75:93:88:93:8c:af:bb:df:03:1a:b2:d4:8e:91:ee:15:58:9b:42:97:1d:03:9c
+-----BEGIN CERTIFICATE-----
+MIIEIDCCAwigAwIBAgIJAISCLF8cYtBAMA0GCSqGSIb3DQEBCwUAMIGcMQswCQYD
+VQQGEwJQQTEPMA0GA1UECAwGUGFuYW1hMRQwEgYDVQQHDAtQYW5hbWEgQ2l0eTEk
+MCIGA1UECgwbVHJ1c3RDb3IgU3lzdGVtcyBTLiBkZSBSLkwuMScwJQYDVQQLDB5U
+cnVzdENvciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxFzAVBgNVBAMMDlRydXN0Q29y
+IEVDQS0xMB4XDTE2MDIwNDEyMzIzM1oXDTI5MTIzMTE3MjgwN1owgZwxCzAJBgNV
+BAYTAlBBMQ8wDQYDVQQIDAZQYW5hbWExFDASBgNVBAcMC1BhbmFtYSBDaXR5MSQw
+IgYDVQQKDBtUcnVzdENvciBTeXN0ZW1zIFMuIGRlIFIuTC4xJzAlBgNVBAsMHlRy
+dXN0Q29yIENlcnRpZmljYXRlIEF1dGhvcml0eTEXMBUGA1UEAwwOVHJ1c3RDb3Ig
+RUNBLTEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDPj+ARtZ+odnbb
+3w9U73NjKYKtR8aja+3+XzP4Q1HpGjORMRegdMTUpwHmspI+ap3tDvl0mEDTPwOA
+BoJA6LHip1GnHYMma6ve+heRK9jGrB6xnhkB1Zem6g23xFUfJ3zSCNV2HykVh0A5
+3ThFEXXQmqc04L/NyFIduUd+Dbi7xgz2c1cWWn5DkR9VOsZtRASqnKmcp0yJF4Ou
+owReUoCLHhIlERnXDH19MURB6tuvsBzvgdAsxZohmz3tQjtQJvLsznFhBmIhVE5/
+wZ0+fyCMgMsq2JdiyIMzkX2woloPV+g7zPIlstR8L+xNxqE6FXrntl019fZISjZF
+ZtS6mFjBAgMBAAGjYzBhMB0GA1UdDgQWBBREnkj1zG1I1KBLf/5ZJC+Dl5mahjAf
+BgNVHSMEGDAWgBREnkj1zG1I1KBLf/5ZJC+Dl5mahjAPBgNVHRMBAf8EBTADAQH/
+MA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQsFAAOCAQEABT41XBVwm8nHc2Fv
+civUwo/yQ10CzsSUuZQRg2dd4mdsdXa/uwyqNsatR5Nj3B5+1t4u/ukZMjgDfxT2
+AHMsWbEhBuH7rBiVDKP/mZb3Kyeb1STMHd3BOuCYRLDE5D53sXOpZCz2HAF8P11F
+hcCF5yWPldwX8zyfGm6wyuMdKulMY/okYWLW2n62HGz1Ah3UKt1VkOsqEUc8Ll50
+soIipX1TH0XsJ5F95yIW6MBoNtjG8U+ARDL54dHRHareqKucBK+tIA5kmE2la8BI
+WJZpTdwHjFGTot+fDz2LYLSCjaoITmJF4PkL0uDgPFveXHEnJcLmA4GLEFPjx1Wi
+tJ/X5g==
+-----END CERTIFICATE-----
+
+# Issuer: CN=SSL.com Root Certification Authority RSA O=SSL Corporation
+# Subject: CN=SSL.com Root Certification Authority RSA O=SSL Corporation
+# Label: "SSL.com Root Certification Authority RSA"
+# Serial: 8875640296558310041
+# MD5 Fingerprint: 86:69:12:c0:70:f1:ec:ac:ac:c2:d5:bc:a5:5b:a1:29
+# SHA1 Fingerprint: b7:ab:33:08:d1:ea:44:77:ba:14:80:12:5a:6f:bd:a9:36:49:0c:bb
+# SHA256 Fingerprint: 85:66:6a:56:2e:e0:be:5c:e9:25:c1:d8:89:0a:6f:76:a8:7e:c1:6d:4d:7d:5f:29:ea:74:19:cf:20:12:3b:69
+-----BEGIN CERTIFICATE-----
+MIIF3TCCA8WgAwIBAgIIeyyb0xaAMpkwDQYJKoZIhvcNAQELBQAwfDELMAkGA1UE
+BhMCVVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQK
+DA9TU0wgQ29ycG9yYXRpb24xMTAvBgNVBAMMKFNTTC5jb20gUm9vdCBDZXJ0aWZp
+Y2F0aW9uIEF1dGhvcml0eSBSU0EwHhcNMTYwMjEyMTczOTM5WhcNNDEwMjEyMTcz
+OTM5WjB8MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hv
+dXN0b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjExMC8GA1UEAwwoU1NMLmNv
+bSBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IFJTQTCCAiIwDQYJKoZIhvcN
+AQEBBQADggIPADCCAgoCggIBAPkP3aMrfcvQKv7sZ4Wm5y4bunfh4/WvpOz6Sl2R
+xFdHaxh3a3by/ZPkPQ/CFp4LZsNWlJ4Xg4XOVu/yFv0AYvUiCVToZRdOQbngT0aX
+qhvIuG5iXmmxX9sqAn78bMrzQdjt0Oj8P2FI7bADFB0QDksZ4LtO7IZl/zbzXmcC
+C52GVWH9ejjt/uIZALdvoVBidXQ8oPrIJZK0bnoix/geoeOy3ZExqysdBP+lSgQ3
+6YWkMyv94tZVNHwZpEpox7Ko07fKoZOI68GXvIz5HdkihCR0xwQ9aqkpk8zruFvh
+/l8lqjRYyMEjVJ0bmBHDOJx+PYZspQ9AhnwC9FwCTyjLrnGfDzrIM/4RJTXq/LrF
+YD3ZfBjVsqnTdXgDciLKOsMf7yzlLqn6niy2UUb9rwPW6mBo6oUWNmuF6R7As93E
+JNyAKoFBbZQ+yODJgUEAnl6/f8UImKIYLEJAs/lvOCdLToD0PYFH4Ih86hzOtXVc
+US4cK38acijnALXRdMbX5J+tB5O2UzU1/Dfkw/ZdFr4hc96SCvigY2q8lpJqPvi8
+ZVWb3vUNiSYE/CUapiVpy8JtynziWV+XrOvvLsi81xtZPCvM8hnIk2snYxnP/Okm
++Mpxm3+T/jRnhE6Z6/yzeAkzcLpmpnbtG3PrGqUNxCITIJRWCk4sbE6x/c+cCbqi +M+2HAgMBAAGjYzBhMB0GA1UdDgQWBBTdBAkHovV6fVJTEpKV7jiAJQ2mWTAPBgNV +HRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFN0ECQei9Xp9UlMSkpXuOIAlDaZZMA4G +A1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQsFAAOCAgEAIBgRlCn7Jp0cHh5wYfGV +cpNxJK1ok1iOMq8bs3AD/CUrdIWQPXhq9LmLpZc7tRiRux6n+UBbkflVma8eEdBc +Hadm47GUBwwyOabqG7B52B2ccETjit3E+ZUfijhDPwGFpUenPUayvOUiaPd7nNgs +PgohyC0zrL/FgZkxdMF1ccW+sfAjRfSda/wZY52jvATGGAslu1OJD7OAUN5F7kR/ +q5R4ZJjT9ijdh9hwZXT7DrkT66cPYakylszeu+1jTBi7qUD3oFRuIIhxdRjqerQ0 +cuAjJ3dctpDqhiVAq+8zD8ufgr6iIPv2tS0a5sKFsXQP+8hlAqRSAUfdSSLBv9jr +a6x+3uxjMxW3IwiPxg+NQVrdjsW5j+VFP3jbutIbQLH+cU0/4IGiul607BXgk90I +H37hVZkLId6Tngr75qNJvTYw/ud3sqB1l7UtgYgXZSD32pAAn8lSzDLKNXz1PQ/Y +K9f1JmzJBjSWFupwWRoyeXkLtoh/D1JIPb9s2KJELtFOt3JY04kTlf5Eq/jXixtu +nLwsoFvVagCvXzfh1foQC5ichucmj87w7G6KVwuA406ywKBjYZC6VWg3dGq2ktuf +oYYitmUnDuy2n0Jg5GfCtdpBC8TTi2EbvPofkSvXRAdeuims2cXp71NIWuuA8ShY +Ic2wBlX7Jz9TkHCpBB5XJ7k= +-----END CERTIFICATE----- + +# Issuer: CN=SSL.com Root Certification Authority ECC O=SSL Corporation +# Subject: CN=SSL.com Root Certification Authority ECC O=SSL Corporation +# Label: "SSL.com Root Certification Authority ECC" +# Serial: 8495723813297216424 +# MD5 Fingerprint: 2e:da:e4:39:7f:9c:8f:37:d1:70:9f:26:17:51:3a:8e +# SHA1 Fingerprint: c3:19:7c:39:24:e6:54:af:1b:c4:ab:20:95:7a:e2:c3:0e:13:02:6a +# SHA256 Fingerprint: 34:17:bb:06:cc:60:07:da:1b:96:1c:92:0b:8a:b4:ce:3f:ad:82:0e:4a:a3:0b:9a:cb:c4:a7:4e:bd:ce:bc:65 +-----BEGIN CERTIFICATE----- +MIICjTCCAhSgAwIBAgIIdebfy8FoW6gwCgYIKoZIzj0EAwIwfDELMAkGA1UEBhMC +VVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQKDA9T +U0wgQ29ycG9yYXRpb24xMTAvBgNVBAMMKFNTTC5jb20gUm9vdCBDZXJ0aWZpY2F0 +aW9uIEF1dGhvcml0eSBFQ0MwHhcNMTYwMjEyMTgxNDAzWhcNNDEwMjEyMTgxNDAz +WjB8MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hvdXN0 +b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjExMC8GA1UEAwwoU1NMLmNvbSBS +b290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IEVDQzB2MBAGByqGSM49AgEGBSuB +BAAiA2IABEVuqVDEpiM2nl8ojRfLliJkP9x6jh3MCLOicSS6jkm5BBtHllirLZXI +7Z4INcgn64mMU1jrYor+8FsPazFSY0E7ic3s7LaNGdM0B9y7xgZ/wkWV7Mt/qCPg +CemB+vNH06NjMGEwHQYDVR0OBBYEFILRhXMw5zUE044CkvvlpNHEIejNMA8GA1Ud +EwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUgtGFczDnNQTTjgKS++Wk0cQh6M0wDgYD +VR0PAQH/BAQDAgGGMAoGCCqGSM49BAMCA2cAMGQCMG/n61kRpGDPYbCWe+0F+S8T +kdzt5fxQaxFGRrMcIQBiu77D5+jNB5n5DQtdcj7EqgIwH7y6C+IwJPt8bYBVCpk+ +gA0z5Wajs6O7pdWLjwkspl1+4vAHCGht0nxpbl/f5Wpl +-----END CERTIFICATE----- + +# Issuer: CN=SSL.com EV Root Certification Authority RSA R2 O=SSL Corporation +# Subject: CN=SSL.com EV Root Certification Authority RSA R2 O=SSL Corporation +# Label: "SSL.com EV Root Certification Authority RSA R2" +# Serial: 6248227494352943350 +# MD5 Fingerprint: e1:1e:31:58:1a:ae:54:53:02:f6:17:6a:11:7b:4d:95 +# SHA1 Fingerprint: 74:3a:f0:52:9b:d0:32:a0:f4:4a:83:cd:d4:ba:a9:7b:7c:2e:c4:9a +# SHA256 Fingerprint: 2e:7b:f1:6c:c2:24:85:a7:bb:e2:aa:86:96:75:07:61:b0:ae:39:be:3b:2f:e9:d0:cc:6d:4e:f7:34:91:42:5c +-----BEGIN CERTIFICATE----- +MIIF6zCCA9OgAwIBAgIIVrYpzTS8ePYwDQYJKoZIhvcNAQELBQAwgYIxCzAJBgNV +BAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4GA1UEBwwHSG91c3RvbjEYMBYGA1UE +CgwPU1NMIENvcnBvcmF0aW9uMTcwNQYDVQQDDC5TU0wuY29tIEVWIFJvb3QgQ2Vy +dGlmaWNhdGlvbiBBdXRob3JpdHkgUlNBIFIyMB4XDTE3MDUzMTE4MTQzN1oXDTQy +MDUzMDE4MTQzN1owgYIxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4G +A1UEBwwHSG91c3RvbjEYMBYGA1UECgwPU1NMIENvcnBvcmF0aW9uMTcwNQYDVQQD +DC5TU0wuY29tIEVWIFJvb3QgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgUlNBIFIy +MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAjzZlQOHWTcDXtOlG2mvq 
+M0fNTPl9fb69LT3w23jhhqXZuglXaO1XPqDQCEGD5yhBJB/jchXQARr7XnAjssuf
+OePPxU7Gkm0mxnu7s9onnQqG6YE3Bf7wcXHswxzpY6IXFJ3vG2fThVUCAtZJycxa
+4bH3bzKfydQ7iEGonL3Lq9ttewkfokxykNorCPzPPFTOZw+oz12WGQvE43LrrdF9
+HSfvkusQv1vrO6/PgN3B0pYEW3p+pKk8OHakYo6gOV7qd89dAFmPZiw+B6KjBSYR
+aZfqhbcPlgtLyEDhULouisv3D5oi53+aNxPN8k0TayHRwMwi8qFG9kRpnMphNQcA
+b9ZhCBHqurj26bNg5U257J8UZslXWNvNh2n4ioYSA0e/ZhN2rHd9NCSFg83XqpyQ
+Gp8hLH94t2S42Oim9HizVcuE0jLEeK6jj2HdzghTreyI/BXkmg3mnxp3zkyPuBQV
+PWKchjgGAGYS5Fl2WlPAApiiECtoRHuOec4zSnaqW4EWG7WK2NAAe15itAnWhmMO
+pgWVSbooi4iTsjQc2KRVbrcc0N6ZVTsj9CLg+SlmJuwgUHfbSguPvuUCYHBBXtSu
+UDkiFCbLsjtzdFVHB3mBOagwE0TlBIqulhMlQg+5U8Sb/M3kHN48+qvWBkofZ6aY
+MBzdLNvcGJVXZsb/XItW9XcCAwEAAaNjMGEwDwYDVR0TAQH/BAUwAwEB/zAfBgNV
+HSMEGDAWgBT5YLvU49U09rj1BoAlp3PbRmmonjAdBgNVHQ4EFgQU+WC71OPVNPa4
+9QaAJadz20ZpqJ4wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4ICAQBW
+s47LCp1Jjr+kxJG7ZhcFUZh1++VQLHqe8RT6q9OKPv+RKY9ji9i0qVQBDb6Thi/5
+Sm3HXvVX+cpVHBK+Rw82xd9qt9t1wkclf7nxY/hoLVUE0fKNsKTPvDxeH3jnpaAg
+cLAExbf3cqfeIg29MyVGjGSSJuM+LmOW2puMPfgYCdcDzH2GguDKBAdRUNf/ktUM
+79qGn5nX67evaOI5JpS6aLe/g9Pqemc9YmeuJeVy6OLk7K4S9ksrPJ/psEDzOFSz
+/bdoyNrGj1E8svuR3Bznm53htw1yj+KkxKl4+esUrMZDBcJlOSgYAsOCsp0FvmXt
+ll9ldDz7CTUue5wT/RsPXcdtgTpWD8w74a8CLyKsRspGPKAcTNZEtF4uXBVmCeEm
+Kf7GUmG6sXP/wwyc5WxqlD8UykAWlYTzWamsX0xhk23RO8yilQwipmdnRC652dKK
+QbNmC1r7fSOl8hqw/96bg5Qu0T/fkreRrwU7ZcegbLHNYhLDkBvjJc40vG93drEQ
+w/cFGsDWr3RiSBd3kmmQYRzelYB0VI8YHMPzA9C/pEN1hlMYegouCRw2n5H9gooi
+S9EOUCXdywMMF8mDAAhONU2Ki+3wApRmLER/y5UnlhetCTCstnEXbosX9hwJ1C07
+mKVx01QT2WDz9UtmT/rx7iASjbSsV7FFY6GsdqnC+w==
+-----END CERTIFICATE-----
+
+# Issuer: CN=SSL.com EV Root Certification Authority ECC O=SSL Corporation
+# Subject: CN=SSL.com EV Root Certification Authority ECC O=SSL Corporation
+# Label: "SSL.com EV Root Certification Authority ECC"
+# Serial: 3182246526754555285
+# MD5 Fingerprint: 59:53:22:65:83:42:01:54:c0:ce:42:b9:5a:7c:f2:90
+# SHA1 Fingerprint: 4c:dd:51:a3:d1:f5:20:32:14:b0:c6:c5:32:23:03:91:c7:46:42:6d
+# SHA256 Fingerprint: 22:a2:c1:f7:bd:ed:70:4c:c1:e7:01:b5:f4:08:c3:10:88:0f:e9:56:b5:de:2a:4a:44:f9:9c:87:3a:25:a7:c8
+-----BEGIN CERTIFICATE-----
+MIIClDCCAhqgAwIBAgIILCmcWxbtBZUwCgYIKoZIzj0EAwIwfzELMAkGA1UEBhMC
+VVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQKDA9T
+U0wgQ29ycG9yYXRpb24xNDAyBgNVBAMMK1NTTC5jb20gRVYgUm9vdCBDZXJ0aWZp
+Y2F0aW9uIEF1dGhvcml0eSBFQ0MwHhcNMTYwMjEyMTgxNTIzWhcNNDEwMjEyMTgx
+NTIzWjB/MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hv
+dXN0b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjE0MDIGA1UEAwwrU1NMLmNv
+bSBFViBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IEVDQzB2MBAGByqGSM49
+AgEGBSuBBAAiA2IABKoSR5CYG/vvw0AHgyBO8TCCogbR8pKGYfL2IWjKAMTH6kMA
+VIbc/R/fALhBYlzccBYy3h+Z1MzFB8gIH2EWB1E9fVwHU+M1OIzfzZ/ZLg1Kthku
+WnBaBu2+8KGwytAJKaNjMGEwHQYDVR0OBBYEFFvKXuXe0oGqzagtZFG22XKbl+ZP
+MA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUW8pe5d7SgarNqC1kUbbZcpuX
+5k8wDgYDVR0PAQH/BAQDAgGGMAoGCCqGSM49BAMCA2gAMGUCMQCK5kCJN+vp1RPZ
+ytRrJPOwPYdGWBrssd9v+1a6cGvHOMzosYxPD/fxZ3YOg9AeUY8CMD32IygmTMZg
+h5Mmm7I1HrrW9zzRHM76JTymGoEVW/MSD2zuZYrJh6j5B+BimoxcSg==
+-----END CERTIFICATE-----
+
+# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R6
+# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R6
+# Label: "GlobalSign Root CA - R6"
+# Serial: 1417766617973444989252670301619537
+# MD5 Fingerprint: 4f:dd:07:e4:d4:22:64:39:1e:0c:37:42:ea:d1:c6:ae
+# SHA1 Fingerprint: 80:94:64:0e:b5:a7:a1:ca:11:9c:1f:dd:d5:9f:81:02:63:a7:fb:d1
+# SHA256 Fingerprint: 2c:ab:ea:fe:37:d0:6c:a2:2a:ba:73:91:c0:03:3d:25:98:29:52:c4:53:64:73:49:76:3a:3a:b5:ad:6c:cf:69
+-----BEGIN CERTIFICATE-----
+MIIFgzCCA2ugAwIBAgIORea7A4Mzw4VlSOb/RVEwDQYJKoZIhvcNAQEMBQAwTDEg
+MB4GA1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjYxEzARBgNVBAoTCkdsb2Jh
+bFNpZ24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMTQxMjEwMDAwMDAwWhcNMzQx
+MjEwMDAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSNjET
+MBEGA1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCAiIwDQYJ
+KoZIhvcNAQEBBQADggIPADCCAgoCggIBAJUH6HPKZvnsFMp7PPcNCPG0RQssgrRI
+xutbPK6DuEGSMxSkb3/pKszGsIhrxbaJ0cay/xTOURQh7ErdG1rG1ofuTToVBu1k
+ZguSgMpE3nOUTvOniX9PeGMIyBJQbUJmL025eShNUhqKGoC3GYEOfsSKvGRMIRxD
+aNc9PIrFsmbVkJq3MQbFvuJtMgamHvm566qjuL++gmNQ0PAYid/kD3n16qIfKtJw
+LnvnvJO7bVPiSHyMEAc4/2ayd2F+4OqMPKq0pPbzlUoSB239jLKJz9CgYXfIWHSw
+1CM69106yqLbnQneXUQtkPGBzVeS+n68UARjNN9rkxi+azayOeSsJDa38O+2HBNX
+k7besvjihbdzorg1qkXy4J02oW9UivFyVm4uiMVRQkQVlO6jxTiWm05OWgtH8wY2
+SXcwvHE35absIQh1/OZhFj931dmRl4QKbNQCTXTAFO39OfuD8l4UoQSwC+n+7o/h
+bguyCLNhZglqsQY6ZZZZwPA1/cnaKI0aEYdwgQqomnUdnjqGBQCe24DWJfncBZ4n
+WUx2OVvq+aWh2IMP0f/fMBH5hc8zSPXKbWQULHpYT9NLCEnFlWQaYw55PfWzjMpY
+rZxCRXluDocZXFSxZba/jJvcE+kNb7gu3GduyYsRtYQUigAZcIN5kZeR1Bonvzce
+MgfYFGM8KEyvAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTAD
+AQH/MB0GA1UdDgQWBBSubAWjkxPioufi1xzWx/B/yGdToDAfBgNVHSMEGDAWgBSu
+bAWjkxPioufi1xzWx/B/yGdToDANBgkqhkiG9w0BAQwFAAOCAgEAgyXt6NH9lVLN
+nsAEoJFp5lzQhN7craJP6Ed41mWYqVuoPId8AorRbrcWc+ZfwFSY1XS+wc3iEZGt
+Ixg93eFyRJa0lV7Ae46ZeBZDE1ZXs6KzO7V33EByrKPrmzU+sQghoefEQzd5Mr61
+55wsTLxDKZmOMNOsIeDjHfrYBzN2VAAiKrlNIC5waNrlU/yDXNOd8v9EDERm8tLj
+vUYAGm0CuiVdjaExUd1URhxN25mW7xocBFymFe944Hn+Xds+qkxV/ZoVqW/hpvvf
+cDDpw+5CRu3CkwWJ+n1jez/QcYF8AOiYrg54NMMl+68KnyBr3TsTjxKM4kEaSHpz
+oHdpx7Zcf4LIHv5YGygrqGytXm3ABdJ7t+uA/iU3/gKbaKxCXcPu9czc8FB10jZp
+nOZ7BN9uBmm23goJSFmH63sUYHpkqmlD75HHTOwY3WzvUy2MmeFe8nI+z1TIvWfs
+pA9MRf/TuTAjB0yPEL+GltmZWrSZVxykzLsViVO6LAUP5MSeGbEYNNVMnbrt9x+v
+JJUEeKgDu+6B5dpffItKoZB0JaezPkvILFa9x8jvOOJckvB595yEunQtYQEgfn7R
+8k8HWV+LLUNS60YMlOH1Zkd5d9VUWx+tJDfLRVpOoERIyNiwmcUVhAn21klJwGW4
+5hpxbqCo8YLoRT5s1gLXCmeDBVrJpBA=
+-----END CERTIFICATE-----
+
+# Issuer: CN=OISTE WISeKey Global Root GC CA O=WISeKey OU=OISTE Foundation Endorsed
+# Subject: CN=OISTE WISeKey Global Root GC CA O=WISeKey OU=OISTE Foundation Endorsed
+# Label: "OISTE WISeKey Global Root GC CA"
+# Serial: 44084345621038548146064804565436152554
+# MD5 Fingerprint: a9:d6:b9:2d:2f:93:64:f8:a5:69:ca:91:e9:68:07:23
+# SHA1 Fingerprint: e0:11:84:5e:34:de:be:88:81:b9:9c:f6:16:26:d1:96:1f:c3:b9:31
+# SHA256 Fingerprint: 85:60:f9:1c:36:24:da:ba:95:70:b5:fe:a0:db:e3:6f:f1:1a:83:23:be:94:86:85:4f:b3:f3:4a:55:71:19:8d
+-----BEGIN CERTIFICATE-----
+MIICaTCCAe+gAwIBAgIQISpWDK7aDKtARb8roi066jAKBggqhkjOPQQDAzBtMQsw
+CQYDVQQGEwJDSDEQMA4GA1UEChMHV0lTZUtleTEiMCAGA1UECxMZT0lTVEUgRm91
+bmRhdGlvbiBFbmRvcnNlZDEoMCYGA1UEAxMfT0lTVEUgV0lTZUtleSBHbG9iYWwg
+Um9vdCBHQyBDQTAeFw0xNzA1MDkwOTQ4MzRaFw00MjA1MDkwOTU4MzNaMG0xCzAJ
+BgNVBAYTAkNIMRAwDgYDVQQKEwdXSVNlS2V5MSIwIAYDVQQLExlPSVNURSBGb3Vu
+ZGF0aW9uIEVuZG9yc2VkMSgwJgYDVQQDEx9PSVNURSBXSVNlS2V5IEdsb2JhbCBS
+b290IEdDIENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAETOlQwMYPchi82PG6s4ni
+eUqjFqdrVCTbUf/q9Akkwwsin8tqJ4KBDdLArzHkdIJuyiXZjHWd8dvQmqJLIX4W
+p2OQ0jnUsYd4XxiWD1AbNTcPasbc2RNNpI6QN+a9WzGRo1QwUjAOBgNVHQ8BAf8E
+BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUSIcUrOPDnpBgOtfKie7T
+rYy0UGYwEAYJKwYBBAGCNxUBBAMCAQAwCgYIKoZIzj0EAwMDaAAwZQIwJsdpW9zV
+57LnyAyMjMPdeYwbY9XJUpROTYJKcx6ygISpJcBMWm1JKWB4E+J+SOtkAjEA2zQg
+Mgj/mkkCtojeFK9dbJlxjRo/i9fgojaGHAeCOnZT/cKi7e97sIBPWA9LUzm9
+-----END CERTIFICATE-----
+
+# Issuer: CN=UCA Global G2 Root O=UniTrust
+# Subject: CN=UCA Global G2 Root O=UniTrust
+# Label: "UCA Global G2 Root"
+# Serial: 124779693093741543919145257850076631279
+# MD5 Fingerprint: 80:fe:f0:c4:4a:f0:5c:62:32:9f:1c:ba:78:a9:50:f8
+# SHA1 Fingerprint: 28:f9:78:16:19:7a:ff:18:25:18:aa:44:fe:c1:a0:ce:5c:b6:4c:8a
+# SHA256 Fingerprint: 9b:ea:11:c9:76:fe:01:47:64:c1:be:56:a6:f9:14:b5:a5:60:31:7a:bd:99:88:39:33:82:e5:16:1a:a0:49:3c
+-----BEGIN CERTIFICATE-----
+MIIFRjCCAy6gAwIBAgIQXd+x2lqj7V2+WmUgZQOQ7zANBgkqhkiG9w0BAQsFADA9
+MQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxGzAZBgNVBAMMElVDQSBH
+bG9iYWwgRzIgUm9vdDAeFw0xNjAzMTEwMDAwMDBaFw00MDEyMzEwMDAwMDBaMD0x
+CzAJBgNVBAYTAkNOMREwDwYDVQQKDAhVbmlUcnVzdDEbMBkGA1UEAwwSVUNBIEds
+b2JhbCBHMiBSb290MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxeYr
+b3zvJgUno4Ek2m/LAfmZmqkywiKHYUGRO8vDaBsGxUypK8FnFyIdK+35KYmToni9
+kmugow2ifsqTs6bRjDXVdfkX9s9FxeV67HeToI8jrg4aA3++1NDtLnurRiNb/yzm
+VHqUwCoV8MmNsHo7JOHXaOIxPAYzRrZUEaalLyJUKlgNAQLx+hVRZ2zA+te2G3/R
+VogvGjqNO7uCEeBHANBSh6v7hn4PJGtAnTRnvI3HLYZveT6OqTwXS3+wmeOwcWDc
+C/Vkw85DvG1xudLeJ1uK6NjGruFZfc8oLTW4lVYa8bJYS7cSN8h8s+1LgOGN+jIj
+tm+3SJUIsUROhYw6AlQgL9+/V087OpAh18EmNVQg7Mc/R+zvWr9LesGtOxdQXGLY
+D0tK3Cv6brxzks3sx1DoQZbXqX5t2Okdj4q1uViSukqSKwxW/YDrCPBeKW4bHAyv
+j5OJrdu9o54hyokZ7N+1wxrrFv54NkzWbtA+FxyQF2smuvt6L78RHBgOLXMDj6Dl
+NaBa4kx1HXHhOThTeEDMg5PXCp6dW4+K5OXgSORIskfNTip1KnvyIvbJvgmRlld6
+iIis7nCs+dwp4wwcOxJORNanTrAmyPPZGpeRaOrvjUYG0lZFWJo8DA+DuAUlwznP
+O6Q0ibd5Ei9Hxeepl2n8pndntd978XplFeRhVmUCAwEAAaNCMEAwDgYDVR0PAQH/
+BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFIHEjMz15DD/pQwIX4wV
+ZyF0Ad/fMA0GCSqGSIb3DQEBCwUAA4ICAQATZSL1jiutROTL/7lo5sOASD0Ee/oj
+L3rtNtqyzm325p7lX1iPyzcyochltq44PTUbPrw7tgTQvPlJ9Zv3hcU2tsu8+Mg5
+1eRfB70VVJd0ysrtT7q6ZHafgbiERUlMjW+i67HM0cOU2kTC5uLqGOiiHycFutfl
+1qnN3e92mI0ADs0b+gO3joBYDic/UvuUospeZcnWhNq5NXHzJsBPd+aBJ9J3O5oU
+b3n09tDh05S60FdRvScFDcH9yBIw7m+NESsIndTUv4BFFJqIRNow6rSn4+7vW4LV
+PtateJLbXDzz2K36uGt/xDYotgIVilQsnLAXc47QN6MUPJiVAAwpBVueSUmxX8fj
+y88nZY41F7dXyDDZQVu5FLbowg+UMaeUmMxq67XhJ/UQqAHojhJi6IjMtX9Gl8Cb
+EGY4GjZGXyJoPd/JxhMnq1MGrKI8hgZlb7F+sSlEmqO6SWkoaY/X5V+tBIZkbxqg
+DMUIYs6Ao9Dz7GjevjPHF1t/gMRMTLGmhIrDO7gJzRSBuhjjVFc2/tsvfEehOjPI
++Vg7RE+xygKJBJYoaMVLuCaJu9YzL1DV/pqJuhgyklTGW+Cd+V7lDSKb9triyCGy
+YiGqhkCyLmTTX8jjfhFnRR8F/uOi77Oos/N9j/gMHyIfLXC0uAE0djAA5SN4p1bX
+UB+K+wb1whnw0A==
+-----END CERTIFICATE-----
+
+# Issuer: CN=UCA Extended Validation Root O=UniTrust
+# Subject: CN=UCA Extended Validation Root O=UniTrust
+# Label: "UCA Extended Validation Root"
+# Serial: 106100277556486529736699587978573607008
+# MD5 Fingerprint: a1:f3:5f:43:c6:34:9b:da:bf:8c:7e:05:53:ad:96:e2
+# SHA1 Fingerprint: a3:a1:b0:6f:24:61:23:4a:e3:36:a5:c2:37:fc:a6:ff:dd:f0:d7:3a
+# SHA256 Fingerprint: d4:3a:f9:b3:54:73:75:5c:96:84:fc:06:d7:d8:cb:70:ee:5c:28:e7:73:fb:29:4e:b4:1e:e7:17:22:92:4d:24
+-----BEGIN CERTIFICATE-----
+MIIFWjCCA0KgAwIBAgIQT9Irj/VkyDOeTzRYZiNwYDANBgkqhkiG9w0BAQsFADBH
+MQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxJTAjBgNVBAMMHFVDQSBF
+eHRlbmRlZCBWYWxpZGF0aW9uIFJvb3QwHhcNMTUwMzEzMDAwMDAwWhcNMzgxMjMx
+MDAwMDAwWjBHMQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxJTAjBgNV
+BAMMHFVDQSBFeHRlbmRlZCBWYWxpZGF0aW9uIFJvb3QwggIiMA0GCSqGSIb3DQEB
+AQUAA4ICDwAwggIKAoICAQCpCQcoEwKwmeBkqh5DFnpzsZGgdT6o+uM4AHrsiWog
+D4vFsJszA1qGxliG1cGFu0/GnEBNyr7uaZa4rYEwmnySBesFK5pI0Lh2PpbIILvS
+sPGP2KxFRv+qZ2C0d35qHzwaUnoEPQc8hQ2E0B92CvdqFN9y4zR8V05WAT558aop
+O2z6+I9tTcg1367r3CTueUWnhbYFiN6IXSV8l2RnCdm/WhUFhvMJHuxYMjMR83dk
+sHYf5BA1FxvyDrFspCqjc/wJHx4yGVMR59mzLC52LqGj3n5qiAno8geK+LLNEOfi
+c0CTuwjRP+H8C5SzJe98ptfRr5//lpr1kXuYC3fUfugH0mK1lTnj8/FtDw5lhIpj +VMWAtuCeS31HJqcBCF3RiJ7XwzJE+oJKCmhUfzhTA8ykADNkUVkLo4KRel7sFsLz +KuZi2irbWWIQJUoqgQtHB0MGcIfS+pMRKXpITeuUx3BNr2fVUbGAIAEBtHoIppB/ +TuDvB0GHr2qlXov7z1CymlSvw4m6WC31MJixNnI5fkkE/SmnTHnkBVfblLkWU41G +sx2VYVdWf6/wFlthWG82UBEL2KwrlRYaDh8IzTY0ZRBiZtWAXxQgXy0MoHgKaNYs +1+lvK9JKBZP8nm9rZ/+I8U6laUpSNwXqxhaN0sSZ0YIrO7o1dfdRUVjzyAfd5LQD +fwIDAQABo0IwQDAdBgNVHQ4EFgQU2XQ65DA9DfcS3H5aBZ8eNJr34RQwDwYDVR0T +AQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQADggIBADaN +l8xCFWQpN5smLNb7rhVpLGsaGvdftvkHTFnq88nIua7Mui563MD1sC3AO6+fcAUR +ap8lTwEpcOPlDOHqWnzcSbvBHiqB9RZLcpHIojG5qtr8nR/zXUACE/xOHAbKsxSQ +VBcZEhrxH9cMaVr2cXj0lH2RC47skFSOvG+hTKv8dGT9cZr4QQehzZHkPJrgmzI5 +c6sq1WnIeJEmMX3ixzDx/BR4dxIOE/TdFpS/S2d7cFOFyrC78zhNLJA5wA3CXWvp +4uXViI3WLL+rG761KIcSF3Ru/H38j9CHJrAb+7lsq+KePRXBOy5nAliRn+/4Qh8s +t2j1da3Ptfb/EX3C8CSlrdP6oDyp+l3cpaDvRKS+1ujl5BOWF3sGPjLtx7dCvHaj +2GU4Kzg1USEODm8uNBNA4StnDG1KQTAYI1oyVZnJF+A83vbsea0rWBmirSwiGpWO +vpaQXUJXxPkUAzUrHC1RVwinOt4/5Mi0A3PCwSaAuwtCH60NryZy2sy+s6ODWA2C +xR9GUeOcGMyNm43sSet1UNWMKFnKdDTajAshqx7qG+XH/RU+wBeq+yNuJkbL+vmx +cmtpzyKEC2IPrNkZAJSidjzULZrtBJ4tBmIQN1IchXIbJ+XMxjHsN+xjWZsLHXbM +fjKaiJUINlK73nZfdklJrX+9ZSCyycErdhh2n1ax +-----END CERTIFICATE----- + +# Issuer: CN=Certigna Root CA O=Dhimyotis OU=0002 48146308100036 +# Subject: CN=Certigna Root CA O=Dhimyotis OU=0002 48146308100036 +# Label: "Certigna Root CA" +# Serial: 269714418870597844693661054334862075617 +# MD5 Fingerprint: 0e:5c:30:62:27:eb:5b:bc:d7:ae:62:ba:e9:d5:df:77 +# SHA1 Fingerprint: 2d:0d:52:14:ff:9e:ad:99:24:01:74:20:47:6e:6c:85:27:27:f5:43 +# SHA256 Fingerprint: d4:8d:3d:23:ee:db:50:a4:59:e5:51:97:60:1c:27:77:4b:9d:7b:18:c9:4d:5a:05:95:11:a1:02:50:b9:31:68 +-----BEGIN CERTIFICATE----- +MIIGWzCCBEOgAwIBAgIRAMrpG4nxVQMNo+ZBbcTjpuEwDQYJKoZIhvcNAQELBQAw +WjELMAkGA1UEBhMCRlIxEjAQBgNVBAoMCURoaW15b3RpczEcMBoGA1UECwwTMDAw +MiA0ODE0NjMwODEwMDAzNjEZMBcGA1UEAwwQQ2VydGlnbmEgUm9vdCBDQTAeFw0x +MzEwMDEwODMyMjdaFw0zMzEwMDEwODMyMjdaMFoxCzAJBgNVBAYTAkZSMRIwEAYD +VQQKDAlEaGlteW90aXMxHDAaBgNVBAsMEzAwMDIgNDgxNDYzMDgxMDAwMzYxGTAX +BgNVBAMMEENlcnRpZ25hIFJvb3QgQ0EwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAw +ggIKAoICAQDNGDllGlmx6mQWDoyUJJV8g9PFOSbcDO8WV43X2KyjQn+Cyu3NW9sO +ty3tRQgXstmzy9YXUnIo245Onoq2C/mehJpNdt4iKVzSs9IGPjA5qXSjklYcoW9M +CiBtnyN6tMbaLOQdLNyzKNAT8kxOAkmhVECe5uUFoC2EyP+YbNDrihqECB63aCPu +I9Vwzm1RaRDuoXrC0SIxwoKF0vJVdlB8JXrJhFwLrN1CTivngqIkicuQstDuI7pm +TLtipPlTWmR7fJj6o0ieD5Wupxj0auwuA0Wv8HT4Ks16XdG+RCYyKfHx9WzMfgIh +C59vpD++nVPiz32pLHxYGpfhPTc3GGYo0kDFUYqMwy3OU4gkWGQwFsWq4NYKpkDf +ePb1BHxpE4S80dGnBs8B92jAqFe7OmGtBIyT46388NtEbVncSVmurJqZNjBBe3Yz +IoejwpKGbvlw7q6Hh5UbxHq9MfPU0uWZ/75I7HX1eBYdpnDBfzwboZL7z8g81sWT +Co/1VTp2lc5ZmIoJlXcymoO6LAQ6l73UL77XbJuiyn1tJslV1c/DeVIICZkHJC1k +JWumIWmbat10TWuXekG9qxf5kBdIjzb5LdXF2+6qhUVB+s06RbFo5jZMm5BX7CO5 +hwjCxAnxl4YqKE3idMDaxIzb3+KhF1nOJFl0Mdp//TBt2dzhauH8XwIDAQABo4IB +GjCCARYwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE +FBiHVuBud+4kNTxOc5of1uHieX4rMB8GA1UdIwQYMBaAFBiHVuBud+4kNTxOc5of +1uHieX4rMEQGA1UdIAQ9MDswOQYEVR0gADAxMC8GCCsGAQUFBwIBFiNodHRwczov +L3d3d3cuY2VydGlnbmEuZnIvYXV0b3JpdGVzLzBtBgNVHR8EZjBkMC+gLaArhilo +dHRwOi8vY3JsLmNlcnRpZ25hLmZyL2NlcnRpZ25hcm9vdGNhLmNybDAxoC+gLYYr +aHR0cDovL2NybC5kaGlteW90aXMuY29tL2NlcnRpZ25hcm9vdGNhLmNybDANBgkq +hkiG9w0BAQsFAAOCAgEAlLieT/DjlQgi581oQfccVdV8AOItOoldaDgvUSILSo3L +6btdPrtcPbEo/uRTVRPPoZAbAh1fZkYJMyjhDSSXcNMQH+pkV5a7XdrnxIxPTGRG +HVyH41neQtGbqH6mid2PHMkwgu07nM3A6RngatgCdTer9zQoKJHyBApPNeNgJgH6 +0BGM+RFq7q89w1DTj18zeTyGqHNFkIwgtnJzFyO+B2XleJINugHA64wcZr+shncB 
+lA2c5uk5jR+mUYyZDDl34bSb+hxnV29qao6pK0xXeXpXIs/NX2NGjVxZOob4Mkdi +o2cNGJHc+6Zr9UhhcyNZjgKnvETq9Emd8VRY+WCv2hikLyhF3HqgiIZd8zvn/yk1 +gPxkQ5Tm4xxvvq0OKmOZK8l+hfZx6AYDlf7ej0gcWtSS6Cvu5zHbugRqh5jnxV/v +faci9wHYTfmJ0A6aBVmknpjZbyvKcL5kwlWj9Omvw5Ip3IgWJJk8jSaYtlu3zM63 +Nwf9JtmYhST/WSMDmu2dnajkXjjO11INb9I/bbEFa0nOipFGc/T2L/Coc3cOZayh +jWZSaX5LaAzHHjcng6WMxwLkFM1JAbBzs/3GkDpv0mztO+7skb6iQ12LAEpmJURw +3kAP+HwV96LOPNdeE4yBFxgX0b3xdxA61GU5wSesVywlVP+i2k+KYTlerj1KjL0= +-----END CERTIFICATE----- + +# Issuer: CN=emSign Root CA - G1 O=eMudhra Technologies Limited OU=emSign PKI +# Subject: CN=emSign Root CA - G1 O=eMudhra Technologies Limited OU=emSign PKI +# Label: "emSign Root CA - G1" +# Serial: 235931866688319308814040 +# MD5 Fingerprint: 9c:42:84:57:dd:cb:0b:a7:2e:95:ad:b6:f3:da:bc:ac +# SHA1 Fingerprint: 8a:c7:ad:8f:73:ac:4e:c1:b5:75:4d:a5:40:f4:fc:cf:7c:b5:8e:8c +# SHA256 Fingerprint: 40:f6:af:03:46:a9:9a:a1:cd:1d:55:5a:4e:9c:ce:62:c7:f9:63:46:03:ee:40:66:15:83:3d:c8:c8:d0:03:67 +-----BEGIN CERTIFICATE----- +MIIDlDCCAnygAwIBAgIKMfXkYgxsWO3W2DANBgkqhkiG9w0BAQsFADBnMQswCQYD +VQQGEwJJTjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBU +ZWNobm9sb2dpZXMgTGltaXRlZDEcMBoGA1UEAxMTZW1TaWduIFJvb3QgQ0EgLSBH +MTAeFw0xODAyMTgxODMwMDBaFw00MzAyMTgxODMwMDBaMGcxCzAJBgNVBAYTAklO +MRMwEQYDVQQLEwplbVNpZ24gUEtJMSUwIwYDVQQKExxlTXVkaHJhIFRlY2hub2xv +Z2llcyBMaW1pdGVkMRwwGgYDVQQDExNlbVNpZ24gUm9vdCBDQSAtIEcxMIIBIjAN +BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAk0u76WaK7p1b1TST0Bsew+eeuGQz +f2N4aLTNLnF115sgxk0pvLZoYIr3IZpWNVrzdr3YzZr/k1ZLpVkGoZM0Kd0WNHVO +8oG0x5ZOrRkVUkr+PHB1cM2vK6sVmjM8qrOLqs1D/fXqcP/tzxE7lM5OMhbTI0Aq +d7OvPAEsbO2ZLIvZTmmYsvePQbAyeGHWDV/D+qJAkh1cF+ZwPjXnorfCYuKrpDhM +tTk1b+oDafo6VGiFbdbyL0NVHpENDtjVaqSW0RM8LHhQ6DqS0hdW5TUaQBw+jSzt +Od9C4INBdN+jzcKGYEho42kLVACL5HZpIQ15TjQIXhTCzLG3rdd8cIrHhQIDAQAB +o0IwQDAdBgNVHQ4EFgQU++8Nhp6w492pufEhF38+/PB3KxowDgYDVR0PAQH/BAQD +AgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAFn/8oz1h31x +PaOfG1vR2vjTnGs2vZupYeveFix0PZ7mddrXuqe8QhfnPZHr5X3dPpzxz5KsbEjM +wiI/aTvFthUvozXGaCocV685743QNcMYDHsAVhzNixl03r4PEuDQqqE/AjSxcM6d +GNYIAwlG7mDgfrbESQRRfXBgvKqy/3lyeqYdPV8q+Mri/Tm3R7nrft8EI6/6nAYH +6ftjk4BAtcZsCjEozgyfz7MjNYBBjWzEN3uBL4ChQEKF6dk4jeihU80Bv2noWgby +RQuQ+q7hv53yrlc8pa6yVvSLZUDp/TGBLPQ5Cdjua6e0ph0VpZj3AYHYhX3zUVxx +iN66zB+Afko= +-----END CERTIFICATE----- + +# Issuer: CN=emSign ECC Root CA - G3 O=eMudhra Technologies Limited OU=emSign PKI +# Subject: CN=emSign ECC Root CA - G3 O=eMudhra Technologies Limited OU=emSign PKI +# Label: "emSign ECC Root CA - G3" +# Serial: 287880440101571086945156 +# MD5 Fingerprint: ce:0b:72:d1:9f:88:8e:d0:50:03:e8:e3:b8:8b:67:40 +# SHA1 Fingerprint: 30:43:fa:4f:f2:57:dc:a0:c3:80:ee:2e:58:ea:78:b2:3f:e6:bb:c1 +# SHA256 Fingerprint: 86:a1:ec:ba:08:9c:4a:8d:3b:be:27:34:c6:12:ba:34:1d:81:3e:04:3c:f9:e8:a8:62:cd:5c:57:a3:6b:be:6b +-----BEGIN CERTIFICATE----- +MIICTjCCAdOgAwIBAgIKPPYHqWhwDtqLhDAKBggqhkjOPQQDAzBrMQswCQYDVQQG +EwJJTjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBUZWNo +bm9sb2dpZXMgTGltaXRlZDEgMB4GA1UEAxMXZW1TaWduIEVDQyBSb290IENBIC0g +RzMwHhcNMTgwMjE4MTgzMDAwWhcNNDMwMjE4MTgzMDAwWjBrMQswCQYDVQQGEwJJ +TjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBUZWNobm9s +b2dpZXMgTGltaXRlZDEgMB4GA1UEAxMXZW1TaWduIEVDQyBSb290IENBIC0gRzMw +djAQBgcqhkjOPQIBBgUrgQQAIgNiAAQjpQy4LRL1KPOxst3iAhKAnjlfSU2fySU0 +WXTsuwYc58Byr+iuL+FBVIcUqEqy6HyC5ltqtdyzdc6LBtCGI79G1Y4PPwT01xyS +fvalY8L1X44uT6EYGQIrMgqCZH0Wk9GjQjBAMB0GA1UdDgQWBBR8XQKEE9TMipuB +zhccLikenEhjQjAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAKBggq 
+hkjOPQQDAwNpADBmAjEAvvNhzwIQHWSVB7gYboiFBS+DCBeQyh+KTOgNG3qxrdWB
+CUfvO6wIBHxcmbHtRwfSAjEAnbpV/KlK6O3t5nYBQnvI+GDZjVGLVTv7jHvrZQnD
++JbNR6iC8hZVdyR+EhCVBCyj
+-----END CERTIFICATE-----
+
+# Issuer: CN=emSign Root CA - C1 O=eMudhra Inc OU=emSign PKI
+# Subject: CN=emSign Root CA - C1 O=eMudhra Inc OU=emSign PKI
+# Label: "emSign Root CA - C1"
+# Serial: 825510296613316004955058
+# MD5 Fingerprint: d8:e3:5d:01:21:fa:78:5a:b0:df:ba:d2:ee:2a:5f:68
+# SHA1 Fingerprint: e7:2e:f1:df:fc:b2:09:28:cf:5d:d4:d5:67:37:b1:51:cb:86:4f:01
+# SHA256 Fingerprint: 12:56:09:aa:30:1d:a0:a2:49:b9:7a:82:39:cb:6a:34:21:6f:44:dc:ac:9f:39:54:b1:42:92:f2:e8:c8:60:8f
+-----BEGIN CERTIFICATE-----
+MIIDczCCAlugAwIBAgILAK7PALrEzzL4Q7IwDQYJKoZIhvcNAQELBQAwVjELMAkG
+A1UEBhMCVVMxEzARBgNVBAsTCmVtU2lnbiBQS0kxFDASBgNVBAoTC2VNdWRocmEg
+SW5jMRwwGgYDVQQDExNlbVNpZ24gUm9vdCBDQSAtIEMxMB4XDTE4MDIxODE4MzAw
+MFoXDTQzMDIxODE4MzAwMFowVjELMAkGA1UEBhMCVVMxEzARBgNVBAsTCmVtU2ln
+biBQS0kxFDASBgNVBAoTC2VNdWRocmEgSW5jMRwwGgYDVQQDExNlbVNpZ24gUm9v
+dCBDQSAtIEMxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAz+upufGZ
+BczYKCFK83M0UYRWEPWgTywS4/oTmifQz/l5GnRfHXk5/Fv4cI7gklL35CX5VIPZ
+HdPIWoU/Xse2B+4+wM6ar6xWQio5JXDWv7V7Nq2s9nPczdcdioOl+yuQFTdrHCZH
+3DspVpNqs8FqOp099cGXOFgFixwR4+S0uF2FHYP+eF8LRWgYSKVGczQ7/g/IdrvH
+GPMF0Ybzhe3nudkyrVWIzqa2kbBPrH4VI5b2P/AgNBbeCsbEBEV5f6f9vtKppa+c
+xSMq9zwhbL2vj07FOrLzNBL834AaSaTUqZX3noleoomslMuoaJuvimUnzYnu3Yy1
+aylwQ6BpC+S5DwIDAQABo0IwQDAdBgNVHQ4EFgQU/qHgcB4qAzlSWkK+XJGFehiq
+TbUwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL
+BQADggEBAMJKVvoVIXsoounlHfv4LcQ5lkFMOycsxGwYFYDGrK9HWS8mC+M2sO87
+/kOXSTKZEhVb3xEp/6tT+LvBeA+snFOvV71ojD1pM/CjoCNjO2RnIkSt1XHLVip4
+kqNPEjE2NuLe/gDEo2APJ62gsIq1NnpSob0n9CAnYuhNlCQT5AoE6TyrLshDCUrG
+YQTlSTR+08TI9Q/Aqum6VF7zYytPT1DU/rl7mYw9wC68AivTxEDkigcxHpvOJpkT
++xHqmiIMERnHXhuBUDDIlhJu58tBf5E7oke3VIAb3ADMmpDqw8NQBmIMMMAVSKeo
+WXzhriKi4gp6D/piq1JM4fHfyr6DDUI=
+-----END CERTIFICATE-----
+
+# Issuer: CN=emSign ECC Root CA - C3 O=eMudhra Inc OU=emSign PKI
+# Subject: CN=emSign ECC Root CA - C3 O=eMudhra Inc OU=emSign PKI
+# Label: "emSign ECC Root CA - C3"
+# Serial: 582948710642506000014504
+# MD5 Fingerprint: 3e:53:b3:a3:81:ee:d7:10:f8:d3:b0:1d:17:92:f5:d5
+# SHA1 Fingerprint: b6:af:43:c2:9b:81:53:7d:f6:ef:6b:c3:1f:1f:60:15:0c:ee:48:66
+# SHA256 Fingerprint: bc:4d:80:9b:15:18:9d:78:db:3e:1d:8c:f4:f9:72:6a:79:5d:a1:64:3c:a5:f1:35:8e:1d:db:0e:dc:0d:7e:b3
+-----BEGIN CERTIFICATE-----
+MIICKzCCAbGgAwIBAgIKe3G2gla4EnycqDAKBggqhkjOPQQDAzBaMQswCQYDVQQG
+EwJVUzETMBEGA1UECxMKZW1TaWduIFBLSTEUMBIGA1UEChMLZU11ZGhyYSBJbmMx
+IDAeBgNVBAMTF2VtU2lnbiBFQ0MgUm9vdCBDQSAtIEMzMB4XDTE4MDIxODE4MzAw
+MFoXDTQzMDIxODE4MzAwMFowWjELMAkGA1UEBhMCVVMxEzARBgNVBAsTCmVtU2ln
+biBQS0kxFDASBgNVBAoTC2VNdWRocmEgSW5jMSAwHgYDVQQDExdlbVNpZ24gRUND
+IFJvb3QgQ0EgLSBDMzB2MBAGByqGSM49AgEGBSuBBAAiA2IABP2lYa57JhAd6bci
+MK4G9IGzsUJxlTm801Ljr6/58pc1kjZGDoeVjbk5Wum739D+yAdBPLtVb4Ojavti
+sIGJAnB9SMVK4+kiVCJNk7tCDK93nCOmfddhEc5lx/h//vXyqaNCMEAwHQYDVR0O
+BBYEFPtaSNCAIEDyqOkAB2kZd6fmw/TPMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB
+Af8EBTADAQH/MAoGCCqGSM49BAMDA2gAMGUCMQC02C8Cif22TGK6Q04ThHK1rt0c
+3ta13FaPWEBaLd4gTCKDypOofu4SQMfWh0/434UCMBwUZOR8loMRnLDRWmFLpg9J
+0wD8ofzkpf9/rdcw0Md3f76BB1UwUCAU9Vc4CqgxUQ==
+-----END CERTIFICATE-----
+
+# Issuer: CN=Hongkong Post Root CA 3 O=Hongkong Post
+# Subject: CN=Hongkong Post Root CA 3 O=Hongkong Post
+# Label: "Hongkong Post Root CA 3"
+# Serial: 46170865288971385588281144162979347873371282084
+# MD5 Fingerprint: 11:fc:9f:bd:73:30:02:8a:fd:3f:f3:58:b9:cb:20:f0
+# SHA1 Fingerprint: 58:a2:d0:ec:20:52:81:5b:c1:f3:f8:64:02:24:4e:c2:8e:02:4b:02
+# SHA256 Fingerprint: 5a:2f:c0:3f:0c:83:b0:90:bb:fa:40:60:4b:09:88:44:6c:76:36:18:3d:f9:84:6e:17:10:1a:44:7f:b8:ef:d6
+-----BEGIN CERTIFICATE-----
+MIIFzzCCA7egAwIBAgIUCBZfikyl7ADJk0DfxMauI7gcWqQwDQYJKoZIhvcNAQEL
+BQAwbzELMAkGA1UEBhMCSEsxEjAQBgNVBAgTCUhvbmcgS29uZzESMBAGA1UEBxMJ
+SG9uZyBLb25nMRYwFAYDVQQKEw1Ib25na29uZyBQb3N0MSAwHgYDVQQDExdIb25n
+a29uZyBQb3N0IFJvb3QgQ0EgMzAeFw0xNzA2MDMwMjI5NDZaFw00MjA2MDMwMjI5
+NDZaMG8xCzAJBgNVBAYTAkhLMRIwEAYDVQQIEwlIb25nIEtvbmcxEjAQBgNVBAcT
+CUhvbmcgS29uZzEWMBQGA1UEChMNSG9uZ2tvbmcgUG9zdDEgMB4GA1UEAxMXSG9u
+Z2tvbmcgUG9zdCBSb290IENBIDMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK
+AoICAQCziNfqzg8gTr7m1gNt7ln8wlffKWihgw4+aMdoWJwcYEuJQwy51BWy7sFO
+dem1p+/l6TWZ5Mwc50tfjTMwIDNT2aa71T4Tjukfh0mtUC1Qyhi+AViiE3CWu4mI
+VoBc+L0sPOFMV4i707mV78vH9toxdCim5lSJ9UExyuUmGs2C4HDaOym71QP1mbpV
+9WTRYA6ziUm4ii8F0oRFKHyPaFASePwLtVPLwpgchKOesL4jpNrcyCse2m5FHomY
+2vkALgbpDDtw1VAliJnLzXNg99X/NWfFobxeq81KuEXryGgeDQ0URhLj0mRiikKY
+vLTGCAj4/ahMZJx2Ab0vqWwzD9g/KLg8aQFChn5pwckGyuV6RmXpwtZQQS4/t+Tt
+bNe/JgERohYpSms0BpDsE9K2+2p20jzt8NYt3eEV7KObLyzJPivkaTv/ciWxNoZb
+x39ri1UbSsUgYT2uy1DhCDq+sI9jQVMwCFk8mB13umOResoQUGC/8Ne8lYePl8X+
+l2oBlKN8W4UdKjk60FSh0Tlxnf0h+bV78OLgAo9uliQlLKAeLKjEiafv7ZkGL7YK
+TE/bosw3Gq9HhS2KX8Q0NEwA/RiTZxPRN+ZItIsGxVd7GYYKecsAyVKvQv83j+Gj
+Hno9UKtjBucVtT+2RTeUN7F+8kjDf8V1/peNRY8apxpyKBpADwIDAQABo2MwYTAP
+BgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAfBgNVHSMEGDAWgBQXnc0e
+i9Y5K3DTXNSguB+wAPzFYTAdBgNVHQ4EFgQUF53NHovWOStw01zUoLgfsAD8xWEw
+DQYJKoZIhvcNAQELBQADggIBAFbVe27mIgHSQpsY1Q7XZiNc4/6gx5LS6ZStS6LG
+7BJ8dNVI0lkUmcDrudHr9EgwW62nV3OZqdPlt9EuWSRY3GguLmLYauRwCy0gUCCk
+MpXRAJi70/33MvJJrsZ64Ee+bs7Lo3I6LWldy8joRTnU+kLBEUx3XZL7av9YROXr
+gZ6voJmtvqkBZss4HTzfQx/0TW60uhdG/H39h4F5ag0zD/ov+BS5gLNdTaqX4fnk
+GMX41TiMJjz98iji7lpJiCzfeT2OnpA8vUFKOt1b9pq0zj8lMH8yfaIDlNDceqFS
+3m6TjRgm/VWsvY+b0s+v54Ysyx8Jb6NvqYTUc79NoXQbTiNg8swOqn+knEwlqLJm
+Ozj/2ZQw9nKEvmhVEA/GcywWaZMH/rFF7buiVWqw2rVKAiUnhde3t4ZEFolsgCs+
+l6mc1X5VTMbeRRAc6uk7nwNT7u56AQIWeNTowr5GdogTPyK7SBIdUgC0An4hGh6c
+JfTzPV4e0hz5sy229zdcxsshTrD3mUcYhcErulWuBurQB7Lcq9CClnXO0lD+mefP
+L5/ndtFhKvshuzHQqp9HpLIiyhY6UFfEW0NnxWViA0kB60PZ2Pierc+xYw5F9KBa
+LJstxabArahH9CdMOA0uG0k7UvToiIMrVCjU8jVStDKDYmlkDJGcn5fqdBb9HxEG
+mpv0
+-----END CERTIFICATE-----
+
+# Issuer: CN=Entrust Root Certification Authority - G4 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2015 Entrust, Inc. - for authorized use only
+# Subject: CN=Entrust Root Certification Authority - G4 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2015 Entrust, Inc. - for authorized use only
+# Label: "Entrust Root Certification Authority - G4"
+# Serial: 289383649854506086828220374796556676440
+# MD5 Fingerprint: 89:53:f1:83:23:b7:7c:8e:05:f1:8c:71:38:4e:1f:88
+# SHA1 Fingerprint: 14:88:4e:86:26:37:b0:26:af:59:62:5c:40:77:ec:35:29:ba:96:01
+# SHA256 Fingerprint: db:35:17:d1:f6:73:2a:2d:5a:b9:7c:53:3e:c7:07:79:ee:32:70:a6:2f:b4:ac:42:38:37:24:60:e6:f0:1e:88
+-----BEGIN CERTIFICATE-----
+MIIGSzCCBDOgAwIBAgIRANm1Q3+vqTkPAAAAAFVlrVgwDQYJKoZIhvcNAQELBQAw
+gb4xCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQL
+Ex9TZWUgd3d3LmVudHJ1c3QubmV0L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykg
+MjAxNSBFbnRydXN0LCBJbmMuIC0gZm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMjAw
+BgNVBAMTKUVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEc0
+MB4XDTE1MDUyNzExMTExNloXDTM3MTIyNzExNDExNlowgb4xCzAJBgNVBAYTAlVT
+MRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQLEx9TZWUgd3d3LmVudHJ1
+c3QubmV0L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykgMjAxNSBFbnRydXN0LCBJ
+bmMuIC0gZm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMjAwBgNVBAMTKUVudHJ1c3Qg
+Um9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEc0MIICIjANBgkqhkiG9w0B
+AQEFAAOCAg8AMIICCgKCAgEAsewsQu7i0TD/pZJH4i3DumSXbcr3DbVZwbPLqGgZ
+2K+EbTBwXX7zLtJTmeH+H17ZSK9dE43b/2MzTdMAArzE+NEGCJR5WIoV3imz/f3E
+T+iq4qA7ec2/a0My3dl0ELn39GjUu9CH1apLiipvKgS1sqbHoHrmSKvS0VnM1n4j
+5pds8ELl3FFLFUHtSUrJ3hCX1nbB76W1NhSXNdh4IjVS70O92yfbYVaCNNzLiGAM
+C1rlLAHGVK/XqsEQe9IFWrhAnoanw5CGAlZSCXqc0ieCU0plUmr1POeo8pyvi73T
+DtTUXm6Hnmo9RR3RXRv06QqsYJn7ibT/mCzPfB3pAqoEmh643IhuJbNsZvc8kPNX
+wbMv9W3y+8qh+CmdRouzavbmZwe+LGcKKh9asj5XxNMhIWNlUpEbsZmOeX7m640A
+2Vqq6nPopIICR5b+W45UYaPrL0swsIsjdXJ8ITzI9vF01Bx7owVV7rtNOzK+mndm
+nqxpkCIHH2E6lr7lmk/MBTwoWdPBDFSoWWG9yHJM6Nyfh3+9nEg2XpWjDrk4JFX8
+dWbrAuMINClKxuMrLzOg2qOGpRKX/YAr2hRC45K9PvJdXmd0LhyIRyk0X+IyqJwl
+N4y6mACXi0mWHv0liqzc2thddG5msP9E36EYxr5ILzeUePiVSj9/E15dWf10hkNj
+c0kCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYD
+VR0OBBYEFJ84xFYjwznooHFs6FRM5Og6sb9nMA0GCSqGSIb3DQEBCwUAA4ICAQAS
+5UKme4sPDORGpbZgQIeMJX6tuGguW8ZAdjwD+MlZ9POrYs4QjbRaZIxowLByQzTS
+Gwv2LFPSypBLhmb8qoMi9IsabyZIrHZ3CL/FmFz0Jomee8O5ZDIBf9PD3Vht7LGr
+hFV0d4QEJ1JrhkzO3bll/9bGXp+aEJlLdWr+aumXIOTkdnrG0CSqkM0gkLpHZPt/
+B7NTeLUKYvJzQ85BK4FqLoUWlFPUa19yIqtRLULVAJyZv967lDtX/Zr1hstWO1uI
+AeV8KEsD+UmDfLJ/fOPtjqF/YFOOVZ1QNBIPt5d7bIdKROf1beyAN/BYGW5KaHbw
+H5Lk6rWS02FREAutp9lfx1/cH6NcjKF+m7ee01ZvZl4HliDtC3T7Zk6LERXpgUl+
+b7DUUH8i119lAg2m9IUe2K4GS0qn0jFmwvjO5QimpAKWRGhXxNUzzxkvFMSUHHuk
+2fCfDrGA4tGeEWSpiBE6doLlYsKA2KSD7ZPvfC+QsDJMlhVoSFLUmQjAJOgc47Ol
+IQ6SwJAfzyBfyjs4x7dtOvPmRLgOMWuIjnDrnBdSqEGULoe256YSxXXfW8AKbnuk
+5F6G+TaU33fD6Q3AOfF5u0aOq0NZJ7cguyPpVkAh7DE9ZapD8j3fcEThuk0mEDuY
+n/PIjhs4ViFqUZPTkcpG2om3PVODLAgfi49T3f+sHw==
+-----END CERTIFICATE-----
+
+# Issuer: CN=Microsoft ECC Root Certificate Authority 2017 O=Microsoft Corporation
+# Subject: CN=Microsoft ECC Root Certificate Authority 2017 O=Microsoft Corporation
+# Label: "Microsoft ECC Root Certificate Authority 2017"
+# Serial: 136839042543790627607696632466672567020
+# MD5 Fingerprint: dd:a1:03:e6:4a:93:10:d1:bf:f0:19:42:cb:fe:ed:67
+# SHA1 Fingerprint: 99:9a:64:c3:7f:f4:7d:9f:ab:95:f1:47:69:89:14:60:ee:c4:c3:c5
+# SHA256 Fingerprint: 35:8d:f3:9d:76:4a:f9:e1:b7:66:e9:c9:72:df:35:2e:e1:5c:fa:c2:27:af:6a:d1:d7:0e:8e:4a:6e:dc:ba:02
+-----BEGIN CERTIFICATE-----
+MIICWTCCAd+gAwIBAgIQZvI9r4fei7FK6gxXMQHC7DAKBggqhkjOPQQDAzBlMQsw
+CQYDVQQGEwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYD
+VQQDEy1NaWNyb3NvZnQgRUNDIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIw
+MTcwHhcNMTkxMjE4MjMwNjQ1WhcNNDIwNzE4MjMxNjA0WjBlMQswCQYDVQQGEwJV
+UzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYDVQQDEy1NaWNy +b3NvZnQgRUNDIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTcwdjAQBgcq +hkjOPQIBBgUrgQQAIgNiAATUvD0CQnVBEyPNgASGAlEvaqiBYgtlzPbKnR5vSmZR +ogPZnZH6thaxjG7efM3beaYvzrvOcS/lpaso7GMEZpn4+vKTEAXhgShC48Zo9OYb +hGBKia/teQ87zvH2RPUBeMCjVDBSMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8E +BTADAQH/MB0GA1UdDgQWBBTIy5lycFIM+Oa+sgRXKSrPQhDtNTAQBgkrBgEEAYI3 +FQEEAwIBADAKBggqhkjOPQQDAwNoADBlAjBY8k3qDPlfXu5gKcs68tvWMoQZP3zV +L8KxzJOuULsJMsbG7X7JNpQS5GiFBqIb0C8CMQCZ6Ra0DvpWSNSkMBaReNtUjGUB +iudQZsIxtzm6uBoiB078a1QWIP8rtedMDE2mT3M= +-----END CERTIFICATE----- + +# Issuer: CN=Microsoft RSA Root Certificate Authority 2017 O=Microsoft Corporation +# Subject: CN=Microsoft RSA Root Certificate Authority 2017 O=Microsoft Corporation +# Label: "Microsoft RSA Root Certificate Authority 2017" +# Serial: 40975477897264996090493496164228220339 +# MD5 Fingerprint: 10:ff:00:ff:cf:c9:f8:c7:7a:c0:ee:35:8e:c9:0f:47 +# SHA1 Fingerprint: 73:a5:e6:4a:3b:ff:83:16:ff:0e:dc:cc:61:8a:90:6e:4e:ae:4d:74 +# SHA256 Fingerprint: c7:41:f7:0f:4b:2a:8d:88:bf:2e:71:c1:41:22:ef:53:ef:10:eb:a0:cf:a5:e6:4c:fa:20:f4:18:85:30:73:e0 +-----BEGIN CERTIFICATE----- +MIIFqDCCA5CgAwIBAgIQHtOXCV/YtLNHcB6qvn9FszANBgkqhkiG9w0BAQwFADBl +MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYw +NAYDVQQDEy1NaWNyb3NvZnQgUlNBIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5 +IDIwMTcwHhcNMTkxMjE4MjI1MTIyWhcNNDIwNzE4MjMwMDIzWjBlMQswCQYDVQQG +EwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYDVQQDEy1N +aWNyb3NvZnQgUlNBIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTcwggIi +MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDKW76UM4wplZEWCpW9R2LBifOZ +Nt9GkMml7Xhqb0eRaPgnZ1AzHaGm++DlQ6OEAlcBXZxIQIJTELy/xztokLaCLeX0 +ZdDMbRnMlfl7rEqUrQ7eS0MdhweSE5CAg2Q1OQT85elss7YfUJQ4ZVBcF0a5toW1 +HLUX6NZFndiyJrDKxHBKrmCk3bPZ7Pw71VdyvD/IybLeS2v4I2wDwAW9lcfNcztm +gGTjGqwu+UcF8ga2m3P1eDNbx6H7JyqhtJqRjJHTOoI+dkC0zVJhUXAoP8XFWvLJ +jEm7FFtNyP9nTUwSlq31/niol4fX/V4ggNyhSyL71Imtus5Hl0dVe49FyGcohJUc +aDDv70ngNXtk55iwlNpNhTs+VcQor1fznhPbRiefHqJeRIOkpcrVE7NLP8TjwuaG +YaRSMLl6IE9vDzhTyzMMEyuP1pq9KsgtsRx9S1HKR9FIJ3Jdh+vVReZIZZ2vUpC6 +W6IYZVcSn2i51BVrlMRpIpj0M+Dt+VGOQVDJNE92kKz8OMHY4Xu54+OU4UZpyw4K +UGsTuqwPN1q3ErWQgR5WrlcihtnJ0tHXUeOrO8ZV/R4O03QK0dqq6mm4lyiPSMQH ++FJDOvTKVTUssKZqwJz58oHhEmrARdlns87/I6KJClTUFLkqqNfs+avNJVgyeY+Q +W5g5xAgGwax/Dj0ApQIDAQABo1QwUjAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/ +BAUwAwEB/zAdBgNVHQ4EFgQUCctZf4aycI8awznjwNnpv7tNsiMwEAYJKwYBBAGC +NxUBBAMCAQAwDQYJKoZIhvcNAQEMBQADggIBAKyvPl3CEZaJjqPnktaXFbgToqZC +LgLNFgVZJ8og6Lq46BrsTaiXVq5lQ7GPAJtSzVXNUzltYkyLDVt8LkS/gxCP81OC +gMNPOsduET/m4xaRhPtthH80dK2Jp86519efhGSSvpWhrQlTM93uCupKUY5vVau6 +tZRGrox/2KJQJWVggEbbMwSubLWYdFQl3JPk+ONVFT24bcMKpBLBaYVu32TxU5nh +SnUgnZUP5NbcA/FZGOhHibJXWpS2qdgXKxdJ5XbLwVaZOjex/2kskZGT4d9Mozd2 +TaGf+G0eHdP67Pv0RR0Tbc/3WeUiJ3IrhvNXuzDtJE3cfVa7o7P4NHmJweDyAmH3 +pvwPuxwXC65B2Xy9J6P9LjrRk5Sxcx0ki69bIImtt2dmefU6xqaWM/5TkshGsRGR +xpl/j8nWZjEgQRCHLQzWwa80mMpkg/sTV9HB8Dx6jKXB/ZUhoHHBk2dxEuqPiApp +GWSZI1b7rCoucL5mxAyE7+WL85MB+GqQk2dLsmijtWKP6T+MejteD+eMuMZ87zf9 +dOLITzNy4ZQ5bb0Sr74MTnB8G2+NszKTc0QWbej09+CVgI+WXTik9KveCjCHk9hN +AHFiRSdLOkKEW39lt2c0Ui2cFmuqqNh7o0JMcccMyj6D5KbvtwEwXlGjefVwaaZB +RA+GsCyRxj3qrg+E +-----END CERTIFICATE----- + +# Issuer: CN=e-Szigno Root CA 2017 O=Microsec Ltd. +# Subject: CN=e-Szigno Root CA 2017 O=Microsec Ltd. 
+# Label: "e-Szigno Root CA 2017" +# Serial: 411379200276854331539784714 +# MD5 Fingerprint: de:1f:f6:9e:84:ae:a7:b4:21:ce:1e:58:7d:d1:84:98 +# SHA1 Fingerprint: 89:d4:83:03:4f:9e:9a:48:80:5f:72:37:d4:a9:a6:ef:cb:7c:1f:d1 +# SHA256 Fingerprint: be:b0:0b:30:83:9b:9b:c3:2c:32:e4:44:79:05:95:06:41:f2:64:21:b1:5e:d0:89:19:8b:51:8a:e2:ea:1b:99 +-----BEGIN CERTIFICATE----- +MIICQDCCAeWgAwIBAgIMAVRI7yH9l1kN9QQKMAoGCCqGSM49BAMCMHExCzAJBgNV +BAYTAkhVMREwDwYDVQQHDAhCdWRhcGVzdDEWMBQGA1UECgwNTWljcm9zZWMgTHRk +LjEXMBUGA1UEYQwOVkFUSFUtMjM1ODQ0OTcxHjAcBgNVBAMMFWUtU3ppZ25vIFJv +b3QgQ0EgMjAxNzAeFw0xNzA4MjIxMjA3MDZaFw00MjA4MjIxMjA3MDZaMHExCzAJ +BgNVBAYTAkhVMREwDwYDVQQHDAhCdWRhcGVzdDEWMBQGA1UECgwNTWljcm9zZWMg +THRkLjEXMBUGA1UEYQwOVkFUSFUtMjM1ODQ0OTcxHjAcBgNVBAMMFWUtU3ppZ25v +IFJvb3QgQ0EgMjAxNzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABJbcPYrYsHtv +xie+RJCxs1YVe45DJH0ahFnuY2iyxl6H0BVIHqiQrb1TotreOpCmYF9oMrWGQd+H +Wyx7xf58etqjYzBhMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0G +A1UdDgQWBBSHERUI0arBeAyxr87GyZDvvzAEwDAfBgNVHSMEGDAWgBSHERUI0arB +eAyxr87GyZDvvzAEwDAKBggqhkjOPQQDAgNJADBGAiEAtVfd14pVCzbhhkT61Nlo +jbjcI4qKDdQvfepz7L9NbKgCIQDLpbQS+ue16M9+k/zzNY9vTlp8tLxOsvxyqltZ ++efcMQ== +-----END CERTIFICATE----- + +# Issuer: O=CERTSIGN SA OU=certSIGN ROOT CA G2 +# Subject: O=CERTSIGN SA OU=certSIGN ROOT CA G2 +# Label: "certSIGN Root CA G2" +# Serial: 313609486401300475190 +# MD5 Fingerprint: 8c:f1:75:8a:c6:19:cf:94:b7:f7:65:20:87:c3:97:c7 +# SHA1 Fingerprint: 26:f9:93:b4:ed:3d:28:27:b0:b9:4b:a7:e9:15:1d:a3:8d:92:e5:32 +# SHA256 Fingerprint: 65:7c:fe:2f:a7:3f:aa:38:46:25:71:f3:32:a2:36:3a:46:fc:e7:02:09:51:71:07:02:cd:fb:b6:ee:da:33:05 +-----BEGIN CERTIFICATE----- +MIIFRzCCAy+gAwIBAgIJEQA0tk7GNi02MA0GCSqGSIb3DQEBCwUAMEExCzAJBgNV +BAYTAlJPMRQwEgYDVQQKEwtDRVJUU0lHTiBTQTEcMBoGA1UECxMTY2VydFNJR04g +Uk9PVCBDQSBHMjAeFw0xNzAyMDYwOTI3MzVaFw00MjAyMDYwOTI3MzVaMEExCzAJ +BgNVBAYTAlJPMRQwEgYDVQQKEwtDRVJUU0lHTiBTQTEcMBoGA1UECxMTY2VydFNJ +R04gUk9PVCBDQSBHMjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMDF +dRmRfUR0dIf+DjuW3NgBFszuY5HnC2/OOwppGnzC46+CjobXXo9X69MhWf05N0Iw +vlDqtg+piNguLWkh59E3GE59kdUWX2tbAMI5Qw02hVK5U2UPHULlj88F0+7cDBrZ +uIt4ImfkabBoxTzkbFpG583H+u/E7Eu9aqSs/cwoUe+StCmrqzWaTOTECMYmzPhp +n+Sc8CnTXPnGFiWeI8MgwT0PPzhAsP6CRDiqWhqKa2NYOLQV07YRaXseVO6MGiKs +cpc/I1mbySKEwQdPzH/iV8oScLumZfNpdWO9lfsbl83kqK/20U6o2YpxJM02PbyW +xPFsqa7lzw1uKA2wDrXKUXt4FMMgL3/7FFXhEZn91QqhngLjYl/rNUssuHLoPj1P +rCy7Lobio3aP5ZMqz6WryFyNSwb/EkaseMsUBzXgqd+L6a8VTxaJW732jcZZroiF +DsGJ6x9nxUWO/203Nit4ZoORUSs9/1F3dmKh7Gc+PoGD4FapUB8fepmrY7+EF3fx +DTvf95xhszWYijqy7DwaNz9+j5LP2RIUZNoQAhVB/0/E6xyjyfqZ90bp4RjZsbgy +LcsUDFDYg2WD7rlcz8sFWkz6GZdr1l0T08JcVLwyc6B49fFtHsufpaafItzRUZ6C +eWRgKRM+o/1Pcmqr4tTluCRVLERLiohEnMqE0yo7AgMBAAGjQjBAMA8GA1UdEwEB +/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBSCIS1mxteg4BXrzkwJ +d8RgnlRuAzANBgkqhkiG9w0BAQsFAAOCAgEAYN4auOfyYILVAzOBywaK8SJJ6ejq +kX/GM15oGQOGO0MBzwdw5AgeZYWR5hEit/UCI46uuR59H35s5r0l1ZUa8gWmr4UC +b6741jH/JclKyMeKqdmfS0mbEVeZkkMR3rYzpMzXjWR91M08KCy0mpbqTfXERMQl +qiCA2ClV9+BB/AYm/7k29UMUA2Z44RGx2iBfRgB4ACGlHgAoYXhvqAEBj500mv/0 +OJD7uNGzcgbJceaBxXntC6Z58hMLnPddDnskk7RI24Zf3lCGeOdA5jGokHZwYa+c +NywRtYK3qq4kNFtyDGkNzVmf9nGvnAvRCjj5BiKDUyUM/FHE5r7iOZULJK2v0ZXk +ltd0ZGtxTgI8qoXzIKNDOXZbbFD+mpwUHmUUihW9o4JFWklWatKcsWMy5WHgUyIO +pwpJ6st+H6jiYoD2EEVSmAYY3qXNL3+q1Ok+CHLsIwMCPKaq2LxndD0UF/tUSxfj +03k9bWtJySgOLnRQvwzZRjoQhsmnP+mg7H/rpXdYaXHmgwo38oZJar55CJD2AhZk +PuXaTH4MNMn5X7azKFGnpyuqSfqNZSlO42sTp5SjLVFteAxEy9/eCG/Oo2Sr05WE +1LlSVHJ7liXMvGnjSG4N0MedJ5qq+BOS3R7fY581qRY27Iy4g/Q9iY/NtBde17MX +QRBdJ3NghVdJIgc= +-----END CERTIFICATE----- + 
+# Issuer: CN=Trustwave Global Certification Authority O=Trustwave Holdings, Inc. +# Subject: CN=Trustwave Global Certification Authority O=Trustwave Holdings, Inc. +# Label: "Trustwave Global Certification Authority" +# Serial: 1846098327275375458322922162 +# MD5 Fingerprint: f8:1c:18:2d:2f:ba:5f:6d:a1:6c:bc:c7:ab:91:c7:0e +# SHA1 Fingerprint: 2f:8f:36:4f:e1:58:97:44:21:59:87:a5:2a:9a:d0:69:95:26:7f:b5 +# SHA256 Fingerprint: 97:55:20:15:f5:dd:fc:3c:87:88:c0:06:94:45:55:40:88:94:45:00:84:f1:00:86:70:86:bc:1a:2b:b5:8d:c8 +-----BEGIN CERTIFICATE----- +MIIF2jCCA8KgAwIBAgIMBfcOhtpJ80Y1LrqyMA0GCSqGSIb3DQEBCwUAMIGIMQsw +CQYDVQQGEwJVUzERMA8GA1UECAwISWxsaW5vaXMxEDAOBgNVBAcMB0NoaWNhZ28x +ITAfBgNVBAoMGFRydXN0d2F2ZSBIb2xkaW5ncywgSW5jLjExMC8GA1UEAwwoVHJ1 +c3R3YXZlIEdsb2JhbCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0xNzA4MjMx +OTM0MTJaFw00MjA4MjMxOTM0MTJaMIGIMQswCQYDVQQGEwJVUzERMA8GA1UECAwI +SWxsaW5vaXMxEDAOBgNVBAcMB0NoaWNhZ28xITAfBgNVBAoMGFRydXN0d2F2ZSBI +b2xkaW5ncywgSW5jLjExMC8GA1UEAwwoVHJ1c3R3YXZlIEdsb2JhbCBDZXJ0aWZp +Y2F0aW9uIEF1dGhvcml0eTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIB +ALldUShLPDeS0YLOvR29zd24q88KPuFd5dyqCblXAj7mY2Hf8g+CY66j96xz0Xzn +swuvCAAJWX/NKSqIk4cXGIDtiLK0thAfLdZfVaITXdHG6wZWiYj+rDKd/VzDBcdu +7oaJuogDnXIhhpCujwOl3J+IKMujkkkP7NAP4m1ET4BqstTnoApTAbqOl5F2brz8 +1Ws25kCI1nsvXwXoLG0R8+eyvpJETNKXpP7ScoFDB5zpET71ixpZfR9oWN0EACyW +80OzfpgZdNmcc9kYvkHHNHnZ9GLCQ7mzJ7Aiy/k9UscwR7PJPrhq4ufogXBeQotP +JqX+OsIgbrv4Fo7NDKm0G2x2EOFYeUY+VM6AqFcJNykbmROPDMjWLBz7BegIlT1l +RtzuzWniTY+HKE40Cz7PFNm73bZQmq131BnW2hqIyE4bJ3XYsgjxroMwuREOzYfw +hI0Vcnyh78zyiGG69Gm7DIwLdVcEuE4qFC49DxweMqZiNu5m4iK4BUBjECLzMx10 +coos9TkpoNPnG4CELcU9402x/RpvumUHO1jsQkUm+9jaJXLE9gCxInm943xZYkqc +BW89zubWR2OZxiRvchLIrH+QtAuRcOi35hYQcRfO3gZPSEF9NUqjifLJS3tBEW1n +twiYTOURGa5CgNz7kAXU+FDKvuStx8KU1xad5hePrzb7AgMBAAGjQjBAMA8GA1Ud +EwEB/wQFMAMBAf8wHQYDVR0OBBYEFJngGWcNYtt2s9o9uFvo/ULSMQ6HMA4GA1Ud +DwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAmHNw4rDT7TnsTGDZqRKGFx6W +0OhUKDtkLSGm+J1WE2pIPU/HPinbbViDVD2HfSMF1OQc3Og4ZYbFdada2zUFvXfe +uyk3QAUHw5RSn8pk3fEbK9xGChACMf1KaA0HZJDmHvUqoai7PF35owgLEQzxPy0Q +lG/+4jSHg9bP5Rs1bdID4bANqKCqRieCNqcVtgimQlRXtpla4gt5kNdXElE1GYhB +aCXUNxeEFfsBctyV3lImIJgm4nb1J2/6ADtKYdkNy1GTKv0WBpanI5ojSP5RvbbE +sLFUzt5sQa0WZ37b/TjNuThOssFgy50X31ieemKyJo90lZvkWx3SD92YHJtZuSPT +MaCm/zjdzyBP6VhWOmfD0faZmZ26NraAL4hHT4a/RDqA5Dccprrql5gR0IRiR2Qe +qu5AvzSxnI9O4fKSTx+O856X3vOmeWqJcU9LJxdI/uz0UA9PSX3MReO9ekDFQdxh +VicGaeVyQYHTtgGJoC86cnn+OjC/QezHYj6RS8fZMXZC+fc8Y+wmjHMMfRod6qh8 +h6jCJ3zhM0EPz8/8AKAigJ5Kp28AsEFFtyLKaEjFQqKu3R3y4G5OBVixwJAWKqQ9 +EEC+j2Jjg6mcgn0tAumDMHzLJ8n9HmYAsC7TIS+OMxZsmO0QqAfWzJPP29FpHOTK +yeC2nOnOcXHebD8WpHk= +-----END CERTIFICATE----- + +# Issuer: CN=Trustwave Global ECC P256 Certification Authority O=Trustwave Holdings, Inc. +# Subject: CN=Trustwave Global ECC P256 Certification Authority O=Trustwave Holdings, Inc. 
+# Label: "Trustwave Global ECC P256 Certification Authority" +# Serial: 4151900041497450638097112925 +# MD5 Fingerprint: 5b:44:e3:8d:5d:36:86:26:e8:0d:05:d2:59:a7:83:54 +# SHA1 Fingerprint: b4:90:82:dd:45:0c:be:8b:5b:b1:66:d3:e2:a4:08:26:cd:ed:42:cf +# SHA256 Fingerprint: 94:5b:bc:82:5e:a5:54:f4:89:d1:fd:51:a7:3d:df:2e:a6:24:ac:70:19:a0:52:05:22:5c:22:a7:8c:cf:a8:b4 +-----BEGIN CERTIFICATE----- +MIICYDCCAgegAwIBAgIMDWpfCD8oXD5Rld9dMAoGCCqGSM49BAMCMIGRMQswCQYD +VQQGEwJVUzERMA8GA1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAf +BgNVBAoTGFRydXN0d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3 +YXZlIEdsb2JhbCBFQ0MgUDI1NiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0x +NzA4MjMxOTM1MTBaFw00MjA4MjMxOTM1MTBaMIGRMQswCQYDVQQGEwJVUzERMA8G +A1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAfBgNVBAoTGFRydXN0 +d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3YXZlIEdsb2JhbCBF +Q0MgUDI1NiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTBZMBMGByqGSM49AgEGCCqG +SM49AwEHA0IABH77bOYj43MyCMpg5lOcunSNGLB4kFKA3TjASh3RqMyTpJcGOMoN +FWLGjgEqZZ2q3zSRLoHB5DOSMcT9CTqmP62jQzBBMA8GA1UdEwEB/wQFMAMBAf8w +DwYDVR0PAQH/BAUDAwcGADAdBgNVHQ4EFgQUo0EGrJBt0UrrdaVKEJmzsaGLSvcw +CgYIKoZIzj0EAwIDRwAwRAIgB+ZU2g6gWrKuEZ+Hxbb/ad4lvvigtwjzRM4q3wgh +DDcCIC0mA6AFvWvR9lz4ZcyGbbOcNEhjhAnFjXca4syc4XR7 +-----END CERTIFICATE----- + +# Issuer: CN=Trustwave Global ECC P384 Certification Authority O=Trustwave Holdings, Inc. +# Subject: CN=Trustwave Global ECC P384 Certification Authority O=Trustwave Holdings, Inc. +# Label: "Trustwave Global ECC P384 Certification Authority" +# Serial: 2704997926503831671788816187 +# MD5 Fingerprint: ea:cf:60:c4:3b:b9:15:29:40:a1:97:ed:78:27:93:d6 +# SHA1 Fingerprint: e7:f3:a3:c8:cf:6f:c3:04:2e:6d:0e:67:32:c5:9e:68:95:0d:5e:d2 +# SHA256 Fingerprint: 55:90:38:59:c8:c0:c3:eb:b8:75:9e:ce:4e:25:57:22:5f:f5:75:8b:bd:38:eb:d4:82:76:60:1e:1b:d5:80:97 +-----BEGIN CERTIFICATE----- +MIICnTCCAiSgAwIBAgIMCL2Fl2yZJ6SAaEc7MAoGCCqGSM49BAMDMIGRMQswCQYD +VQQGEwJVUzERMA8GA1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAf +BgNVBAoTGFRydXN0d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3 +YXZlIEdsb2JhbCBFQ0MgUDM4NCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0x +NzA4MjMxOTM2NDNaFw00MjA4MjMxOTM2NDNaMIGRMQswCQYDVQQGEwJVUzERMA8G +A1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAfBgNVBAoTGFRydXN0 +d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3YXZlIEdsb2JhbCBF +Q0MgUDM4NCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTB2MBAGByqGSM49AgEGBSuB +BAAiA2IABGvaDXU1CDFHBa5FmVXxERMuSvgQMSOjfoPTfygIOiYaOs+Xgh+AtycJ +j9GOMMQKmw6sWASr9zZ9lCOkmwqKi6vr/TklZvFe/oyujUF5nQlgziip04pt89ZF +1PKYhDhloKNDMEEwDwYDVR0TAQH/BAUwAwEB/zAPBgNVHQ8BAf8EBQMDBwYAMB0G +A1UdDgQWBBRVqYSJ0sEyvRjLbKYHTsjnnb6CkDAKBggqhkjOPQQDAwNnADBkAjA3 +AZKXRRJ+oPM+rRk6ct30UJMDEr5E0k9BpIycnR+j9sKS50gU/k6bpZFXrsY3crsC +MGclCrEMXu6pY5Jv5ZAL/mYiykf9ijH3g/56vxC+GCsej/YpHpRZ744hN8tRmKVu +Sw== +-----END CERTIFICATE----- + +# Issuer: CN=NAVER Global Root Certification Authority O=NAVER BUSINESS PLATFORM Corp. +# Subject: CN=NAVER Global Root Certification Authority O=NAVER BUSINESS PLATFORM Corp. 
+# Label: "NAVER Global Root Certification Authority" +# Serial: 9013692873798656336226253319739695165984492813 +# MD5 Fingerprint: c8:7e:41:f6:25:3b:f5:09:b3:17:e8:46:3d:bf:d0:9b +# SHA1 Fingerprint: 8f:6b:f2:a9:27:4a:da:14:a0:c4:f4:8e:61:27:f9:c0:1e:78:5d:d1 +# SHA256 Fingerprint: 88:f4:38:dc:f8:ff:d1:fa:8f:42:91:15:ff:e5:f8:2a:e1:e0:6e:0c:70:c3:75:fa:ad:71:7b:34:a4:9e:72:65 +-----BEGIN CERTIFICATE----- +MIIFojCCA4qgAwIBAgIUAZQwHqIL3fXFMyqxQ0Rx+NZQTQ0wDQYJKoZIhvcNAQEM +BQAwaTELMAkGA1UEBhMCS1IxJjAkBgNVBAoMHU5BVkVSIEJVU0lORVNTIFBMQVRG +T1JNIENvcnAuMTIwMAYDVQQDDClOQVZFUiBHbG9iYWwgUm9vdCBDZXJ0aWZpY2F0 +aW9uIEF1dGhvcml0eTAeFw0xNzA4MTgwODU4NDJaFw0zNzA4MTgyMzU5NTlaMGkx +CzAJBgNVBAYTAktSMSYwJAYDVQQKDB1OQVZFUiBCVVNJTkVTUyBQTEFURk9STSBD +b3JwLjEyMDAGA1UEAwwpTkFWRVIgR2xvYmFsIFJvb3QgQ2VydGlmaWNhdGlvbiBB +dXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC21PGTXLVA +iQqrDZBbUGOukJR0F0Vy1ntlWilLp1agS7gvQnXp2XskWjFlqxcX0TM62RHcQDaH +38dq6SZeWYp34+hInDEW+j6RscrJo+KfziFTowI2MMtSAuXaMl3Dxeb57hHHi8lE +HoSTGEq0n+USZGnQJoViAbbJAh2+g1G7XNr4rRVqmfeSVPc0W+m/6imBEtRTkZaz +kVrd/pBzKPswRrXKCAfHcXLJZtM0l/aM9BhK4dA9WkW2aacp+yPOiNgSnABIqKYP +szuSjXEOdMWLyEz59JuOuDxp7W87UC9Y7cSw0BwbagzivESq2M0UXZR4Yb8Obtoq +vC8MC3GmsxY/nOb5zJ9TNeIDoKAYv7vxvvTWjIcNQvcGufFt7QSUqP620wbGQGHf +nZ3zVHbOUzoBppJB7ASjjw2i1QnK1sua8e9DXcCrpUHPXFNwcMmIpi3Ua2FzUCaG +YQ5fG8Ir4ozVu53BA0K6lNpfqbDKzE0K70dpAy8i+/Eozr9dUGWokG2zdLAIx6yo +0es+nPxdGoMuK8u180SdOqcXYZaicdNwlhVNt0xz7hlcxVs+Qf6sdWA7G2POAN3a +CJBitOUt7kinaxeZVL6HSuOpXgRM6xBtVNbv8ejyYhbLgGvtPe31HzClrkvJE+2K +AQHJuFFYwGY6sWZLxNUxAmLpdIQM201GLQIDAQABo0IwQDAdBgNVHQ4EFgQU0p+I +36HNLL3s9TsBAZMzJ7LrYEswDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMB +Af8wDQYJKoZIhvcNAQEMBQADggIBADLKgLOdPVQG3dLSLvCkASELZ0jKbY7gyKoN +qo0hV4/GPnrK21HUUrPUloSlWGB/5QuOH/XcChWB5Tu2tyIvCZwTFrFsDDUIbatj +cu3cvuzHV+YwIHHW1xDBE1UBjCpD5EHxzzp6U5LOogMFDTjfArsQLtk70pt6wKGm ++LUx5vR1yblTmXVHIloUFcd4G7ad6Qz4G3bxhYTeodoS76TiEJd6eN4MUZeoIUCL +hr0N8F5OSza7OyAfikJW4Qsav3vQIkMsRIz75Sq0bBwcupTgE34h5prCy8VCZLQe +lHsIJchxzIdFV4XTnyliIoNRlwAYl3dqmJLJfGBs32x9SuRwTMKeuB330DTHD8z7 +p/8Dvq1wkNoL3chtl1+afwkyQf3NosxabUzyqkn+Zvjp2DXrDige7kgvOtB5CTh8 +piKCk5XQA76+AqAF3SAi428diDRgxuYKuQl1C/AH6GmWNcf7I4GOODm4RStDeKLR +LBT/DShycpWbXgnbiUSYqqFJu3FS8r/2/yehNq+4tneI3TqkbZs0kNwUXTC/t+sX +5Ie3cdCh13cV1ELX8vMxmV2b3RZtP+oGI/hGoiLtk/bdmuYqh7GYVPEi92tF4+KO +dh2ajcQGjTa3FPOdVGm3jjzVpG2Tgbet9r1ke8LJaDmgkpzNNIaRkPpkUZ3+/uul +9XXeifdy +-----END CERTIFICATE----- + +# Issuer: CN=AC RAIZ FNMT-RCM SERVIDORES SEGUROS O=FNMT-RCM OU=Ceres +# Subject: CN=AC RAIZ FNMT-RCM SERVIDORES SEGUROS O=FNMT-RCM OU=Ceres +# Label: "AC RAIZ FNMT-RCM SERVIDORES SEGUROS" +# Serial: 131542671362353147877283741781055151509 +# MD5 Fingerprint: 19:36:9c:52:03:2f:d2:d1:bb:23:cc:dd:1e:12:55:bb +# SHA1 Fingerprint: 62:ff:d9:9e:c0:65:0d:03:ce:75:93:d2:ed:3f:2d:32:c9:e3:e5:4a +# SHA256 Fingerprint: 55:41:53:b1:3d:2c:f9:dd:b7:53:bf:be:1a:4e:0a:e0:8d:0a:a4:18:70:58:fe:60:a2:b8:62:b2:e4:b8:7b:cb +-----BEGIN CERTIFICATE----- +MIICbjCCAfOgAwIBAgIQYvYybOXE42hcG2LdnC6dlTAKBggqhkjOPQQDAzB4MQsw +CQYDVQQGEwJFUzERMA8GA1UECgwIRk5NVC1SQ00xDjAMBgNVBAsMBUNlcmVzMRgw +FgYDVQRhDA9WQVRFUy1RMjgyNjAwNEoxLDAqBgNVBAMMI0FDIFJBSVogRk5NVC1S +Q00gU0VSVklET1JFUyBTRUdVUk9TMB4XDTE4MTIyMDA5MzczM1oXDTQzMTIyMDA5 +MzczM1oweDELMAkGA1UEBhMCRVMxETAPBgNVBAoMCEZOTVQtUkNNMQ4wDAYDVQQL +DAVDZXJlczEYMBYGA1UEYQwPVkFURVMtUTI4MjYwMDRKMSwwKgYDVQQDDCNBQyBS +QUlaIEZOTVQtUkNNIFNFUlZJRE9SRVMgU0VHVVJPUzB2MBAGByqGSM49AgEGBSuB +BAAiA2IABPa6V1PIyqvfNkpSIeSX0oNnnvBlUdBeh8dHsVnyV0ebAAKTRBdp20LH 
+sbI6GA60XYyzZl2hNPk2LEnb80b8s0RpRBNm/dfF/a82Tc4DTQdxz69qBdKiQ1oK +Um8BA06Oi6NCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYD +VR0OBBYEFAG5L++/EYZg8k/QQW6rcx/n0m5JMAoGCCqGSM49BAMDA2kAMGYCMQCu +SuMrQMN0EfKVrRYj3k4MGuZdpSRea0R7/DjiT8ucRRcRTBQnJlU5dUoDzBOQn5IC +MQD6SmxgiHPz7riYYqnOK8LZiqZwMR2vsJRM60/G49HzYqc8/5MuB1xJAWdpEgJy +v+c= +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign Root R46 O=GlobalSign nv-sa +# Subject: CN=GlobalSign Root R46 O=GlobalSign nv-sa +# Label: "GlobalSign Root R46" +# Serial: 1552617688466950547958867513931858518042577 +# MD5 Fingerprint: c4:14:30:e4:fa:66:43:94:2a:6a:1b:24:5f:19:d0:ef +# SHA1 Fingerprint: 53:a2:b0:4b:ca:6b:d6:45:e6:39:8a:8e:c4:0d:d2:bf:77:c3:a2:90 +# SHA256 Fingerprint: 4f:a3:12:6d:8d:3a:11:d1:c4:85:5a:4f:80:7c:ba:d6:cf:91:9d:3a:5a:88:b0:3b:ea:2c:63:72:d9:3c:40:c9 +-----BEGIN CERTIFICATE----- +MIIFWjCCA0KgAwIBAgISEdK7udcjGJ5AXwqdLdDfJWfRMA0GCSqGSIb3DQEBDAUA +MEYxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMRwwGgYD +VQQDExNHbG9iYWxTaWduIFJvb3QgUjQ2MB4XDTE5MDMyMDAwMDAwMFoXDTQ2MDMy +MDAwMDAwMFowRjELMAkGA1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYt +c2ExHDAaBgNVBAMTE0dsb2JhbFNpZ24gUm9vdCBSNDYwggIiMA0GCSqGSIb3DQEB +AQUAA4ICDwAwggIKAoICAQCsrHQy6LNl5brtQyYdpokNRbopiLKkHWPd08EsCVeJ +OaFV6Wc0dwxu5FUdUiXSE2te4R2pt32JMl8Nnp8semNgQB+msLZ4j5lUlghYruQG +vGIFAha/r6gjA7aUD7xubMLL1aa7DOn2wQL7Id5m3RerdELv8HQvJfTqa1VbkNud +316HCkD7rRlr+/fKYIje2sGP1q7Vf9Q8g+7XFkyDRTNrJ9CG0Bwta/OrffGFqfUo +0q3v84RLHIf8E6M6cqJaESvWJ3En7YEtbWaBkoe0G1h6zD8K+kZPTXhc+CtI4wSE +y132tGqzZfxCnlEmIyDLPRT5ge1lFgBPGmSXZgjPjHvjK8Cd+RTyG/FWaha/LIWF +zXg4mutCagI0GIMXTpRW+LaCtfOW3T3zvn8gdz57GSNrLNRyc0NXfeD412lPFzYE ++cCQYDdF3uYM2HSNrpyibXRdQr4G9dlkbgIQrImwTDsHTUB+JMWKmIJ5jqSngiCN +I/onccnfxkF0oE32kRbcRoxfKWMxWXEM2G/CtjJ9++ZdU6Z+Ffy7dXxd7Pj2Fxzs +x2sZy/N78CsHpdlseVR2bJ0cpm4O6XkMqCNqo98bMDGfsVR7/mrLZqrcZdCinkqa +ByFrgY/bxFn63iLABJzjqls2k+g9vXqhnQt2sQvHnf3PmKgGwvgqo6GDoLclcqUC +4wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV +HQ4EFgQUA1yrc4GHqMywptWU4jaWSf8FmSwwDQYJKoZIhvcNAQEMBQADggIBAHx4 +7PYCLLtbfpIrXTncvtgdokIzTfnvpCo7RGkerNlFo048p9gkUbJUHJNOxO97k4Vg +JuoJSOD1u8fpaNK7ajFxzHmuEajwmf3lH7wvqMxX63bEIaZHU1VNaL8FpO7XJqti +2kM3S+LGteWygxk6x9PbTZ4IevPuzz5i+6zoYMzRx6Fcg0XERczzF2sUyQQCPtIk +pnnpHs6i58FZFZ8d4kuaPp92CC1r2LpXFNqD6v6MVenQTqnMdzGxRBF6XLE+0xRF +FRhiJBPSy03OXIPBNvIQtQ6IbbjhVp+J3pZmOUdkLG5NrmJ7v2B0GbhWrJKsFjLt +rWhV/pi60zTe9Mlhww6G9kuEYO4Ne7UyWHmRVSyBQ7N0H3qqJZ4d16GLuc1CLgSk +ZoNNiTW2bKg2SnkheCLQQrzRQDGQob4Ez8pn7fXwgNNgyYMqIgXQBztSvwyeqiv5 +u+YfjyW6hY0XHgL+XVAEV8/+LbzvXMAaq7afJMbfc2hIkCwU9D9SGuTSyxTDYWnP +4vkYxboznxSjBF25cfe1lNj2M8FawTSLfJvdkzrnE6JwYZ+vj+vYxXX4M2bUdGc6 +N3ec592kD3ZDZopD8p/7DEJ4Y9HiD2971KE9dJeFt0g5QdYg/NA6s/rob8SKunE3 +vouXsXgxT7PntgMTzlSdriVZzH81Xwj3QEUxeCp6 +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign Root E46 O=GlobalSign nv-sa +# Subject: CN=GlobalSign Root E46 O=GlobalSign nv-sa +# Label: "GlobalSign Root E46" +# Serial: 1552617690338932563915843282459653771421763 +# MD5 Fingerprint: b5:b8:66:ed:de:08:83:e3:c9:e2:01:34:06:ac:51:6f +# SHA1 Fingerprint: 39:b4:6c:d5:fe:80:06:eb:e2:2f:4a:bb:08:33:a0:af:db:b9:dd:84 +# SHA256 Fingerprint: cb:b9:c4:4d:84:b8:04:3e:10:50:ea:31:a6:9f:51:49:55:d7:bf:d2:e2:c6:b4:93:01:01:9a:d6:1d:9f:50:58 +-----BEGIN CERTIFICATE----- +MIICCzCCAZGgAwIBAgISEdK7ujNu1LzmJGjFDYQdmOhDMAoGCCqGSM49BAMDMEYx +CzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMRwwGgYDVQQD +ExNHbG9iYWxTaWduIFJvb3QgRTQ2MB4XDTE5MDMyMDAwMDAwMFoXDTQ2MDMyMDAw +MDAwMFowRjELMAkGA1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYtc2Ex 
+HDAaBgNVBAMTE0dsb2JhbFNpZ24gUm9vdCBFNDYwdjAQBgcqhkjOPQIBBgUrgQQA +IgNiAAScDrHPt+ieUnd1NPqlRqetMhkytAepJ8qUuwzSChDH2omwlwxwEwkBjtjq +R+q+soArzfwoDdusvKSGN+1wCAB16pMLey5SnCNoIwZD7JIvU4Tb+0cUB+hflGdd +yXqBPCCjQjBAMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud +DgQWBBQxCpCPtsad0kRLgLWi5h+xEk8blTAKBggqhkjOPQQDAwNoADBlAjEA31SQ +7Zvvi5QCkxeCmb6zniz2C5GMn0oUsfZkvLtoURMMA/cVi4RguYv/Uo7njLwcAjA8 ++RHUjE7AwWHCFUyqqx0LMV87HOIAl0Qx5v5zli/altP+CAezNIm8BZ/3Hobui3A= +-----END CERTIFICATE----- + +# Issuer: CN=GLOBALTRUST 2020 O=e-commerce monitoring GmbH +# Subject: CN=GLOBALTRUST 2020 O=e-commerce monitoring GmbH +# Label: "GLOBALTRUST 2020" +# Serial: 109160994242082918454945253 +# MD5 Fingerprint: 8a:c7:6f:cb:6d:e3:cc:a2:f1:7c:83:fa:0e:78:d7:e8 +# SHA1 Fingerprint: d0:67:c1:13:51:01:0c:aa:d0:c7:6a:65:37:31:16:26:4f:53:71:a2 +# SHA256 Fingerprint: 9a:29:6a:51:82:d1:d4:51:a2:e3:7f:43:9b:74:da:af:a2:67:52:33:29:f9:0f:9a:0d:20:07:c3:34:e2:3c:9a +-----BEGIN CERTIFICATE----- +MIIFgjCCA2qgAwIBAgILWku9WvtPilv6ZeUwDQYJKoZIhvcNAQELBQAwTTELMAkG +A1UEBhMCQVQxIzAhBgNVBAoTGmUtY29tbWVyY2UgbW9uaXRvcmluZyBHbWJIMRkw +FwYDVQQDExBHTE9CQUxUUlVTVCAyMDIwMB4XDTIwMDIxMDAwMDAwMFoXDTQwMDYx +MDAwMDAwMFowTTELMAkGA1UEBhMCQVQxIzAhBgNVBAoTGmUtY29tbWVyY2UgbW9u +aXRvcmluZyBHbWJIMRkwFwYDVQQDExBHTE9CQUxUUlVTVCAyMDIwMIICIjANBgkq +hkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAri5WrRsc7/aVj6B3GyvTY4+ETUWiD59b +RatZe1E0+eyLinjF3WuvvcTfk0Uev5E4C64OFudBc/jbu9G4UeDLgztzOG53ig9Z +YybNpyrOVPu44sB8R85gfD+yc/LAGbaKkoc1DZAoouQVBGM+uq/ufF7MpotQsjj3 +QWPKzv9pj2gOlTblzLmMCcpL3TGQlsjMH/1WljTbjhzqLL6FLmPdqqmV0/0plRPw +yJiT2S0WR5ARg6I6IqIoV6Lr/sCMKKCmfecqQjuCgGOlYx8ZzHyyZqjC0203b+J+ +BlHZRYQfEs4kUmSFC0iAToexIiIwquuuvuAC4EDosEKAA1GqtH6qRNdDYfOiaxaJ +SaSjpCuKAsR49GiKweR6NrFvG5Ybd0mN1MkGco/PU+PcF4UgStyYJ9ORJitHHmkH +r96i5OTUawuzXnzUJIBHKWk7buis/UDr2O1xcSvy6Fgd60GXIsUf1DnQJ4+H4xj0 +4KlGDfV0OoIu0G4skaMxXDtG6nsEEFZegB31pWXogvziB4xiRfUg3kZwhqG8k9Me +dKZssCz3AwyIDMvUclOGvGBG85hqwvG/Q/lwIHfKN0F5VVJjjVsSn8VoxIidrPIw +q7ejMZdnrY8XD2zHc+0klGvIg5rQmjdJBKuxFshsSUktq6HQjJLyQUp5ISXbY9e2 +nKd+Qmn7OmMCAwEAAaNjMGEwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC +AQYwHQYDVR0OBBYEFNwuH9FhN3nkq9XVsxJxaD1qaJwiMB8GA1UdIwQYMBaAFNwu +H9FhN3nkq9XVsxJxaD1qaJwiMA0GCSqGSIb3DQEBCwUAA4ICAQCR8EICaEDuw2jA +VC/f7GLDw56KoDEoqoOOpFaWEhCGVrqXctJUMHytGdUdaG/7FELYjQ7ztdGl4wJC +XtzoRlgHNQIw4Lx0SsFDKv/bGtCwr2zD/cuz9X9tAy5ZVp0tLTWMstZDFyySCstd +6IwPS3BD0IL/qMy/pJTAvoe9iuOTe8aPmxadJ2W8esVCgmxcB9CpwYhgROmYhRZf ++I/KARDOJcP5YBugxZfD0yyIMaK9MOzQ0MAS8cE54+X1+NZK3TTN+2/BT+MAi1bi +kvcoskJ3ciNnxz8RFbLEAwW+uxF7Cr+obuf/WEPPm2eggAe2HcqtbepBEX4tdJP7 +wry+UUTF72glJ4DjyKDUEuzZpTcdN3y0kcra1LGWge9oXHYQSa9+pTeAsRxSvTOB +TI/53WXZFM2KJVj04sWDpQmQ1GwUY7VA3+vA/MRYfg0UFodUJ25W5HCEuGwyEn6C +MUO+1918oa2u1qsgEu8KwxCMSZY13At1XrFP1U80DhEgB3VDRemjEdqso5nCtnkn +4rnvyOL2NSl6dPrFf4IFYqYK6miyeUcGbvJXqBUzxvd4Sj1Ce2t+/vdG6tHrju+I +aFvowdlxfv1k7/9nR4hYJS8+hge9+6jlgqispdNpQ80xiEmEU5LAsTkbOYMBMMTy +qfrQA71yN2BWHzZ8vTmR9W0Nv3vXkg== +-----END CERTIFICATE----- + +# Issuer: CN=ANF Secure Server Root CA O=ANF Autoridad de Certificacion OU=ANF CA Raiz +# Subject: CN=ANF Secure Server Root CA O=ANF Autoridad de Certificacion OU=ANF CA Raiz +# Label: "ANF Secure Server Root CA" +# Serial: 996390341000653745 +# MD5 Fingerprint: 26:a6:44:5a:d9:af:4e:2f:b2:1d:b6:65:b0:4e:e8:96 +# SHA1 Fingerprint: 5b:6e:68:d0:cc:15:b6:a0:5f:1e:c1:5f:ae:02:fc:6b:2f:5d:6f:74 +# SHA256 Fingerprint: fb:8f:ec:75:91:69:b9:10:6b:1e:51:16:44:c6:18:c5:13:04:37:3f:6c:06:43:08:8d:8b:ef:fd:1b:99:75:99 +-----BEGIN CERTIFICATE----- 
+MIIF7zCCA9egAwIBAgIIDdPjvGz5a7EwDQYJKoZIhvcNAQELBQAwgYQxEjAQBgNV +BAUTCUc2MzI4NzUxMDELMAkGA1UEBhMCRVMxJzAlBgNVBAoTHkFORiBBdXRvcmlk +YWQgZGUgQ2VydGlmaWNhY2lvbjEUMBIGA1UECxMLQU5GIENBIFJhaXoxIjAgBgNV +BAMTGUFORiBTZWN1cmUgU2VydmVyIFJvb3QgQ0EwHhcNMTkwOTA0MTAwMDM4WhcN +MzkwODMwMTAwMDM4WjCBhDESMBAGA1UEBRMJRzYzMjg3NTEwMQswCQYDVQQGEwJF +UzEnMCUGA1UEChMeQU5GIEF1dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uMRQwEgYD +VQQLEwtBTkYgQ0EgUmFpejEiMCAGA1UEAxMZQU5GIFNlY3VyZSBTZXJ2ZXIgUm9v +dCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANvrayvmZFSVgpCj +cqQZAZ2cC4Ffc0m6p6zzBE57lgvsEeBbphzOG9INgxwruJ4dfkUyYA8H6XdYfp9q +yGFOtibBTI3/TO80sh9l2Ll49a2pcbnvT1gdpd50IJeh7WhM3pIXS7yr/2WanvtH +2Vdy8wmhrnZEE26cLUQ5vPnHO6RYPUG9tMJJo8gN0pcvB2VSAKduyK9o7PQUlrZX +H1bDOZ8rbeTzPvY1ZNoMHKGESy9LS+IsJJ1tk0DrtSOOMspvRdOoiXsezx76W0OL +zc2oD2rKDF65nkeP8Nm2CgtYZRczuSPkdxl9y0oukntPLxB3sY0vaJxizOBQ+OyR +p1RMVwnVdmPF6GUe7m1qzwmd+nxPrWAI/VaZDxUse6mAq4xhj0oHdkLePfTdsiQz +W7i1o0TJrH93PB0j7IKppuLIBkwC/qxcmZkLLxCKpvR/1Yd0DVlJRfbwcVw5Kda/ +SiOL9V8BY9KHcyi1Swr1+KuCLH5zJTIdC2MKF4EA/7Z2Xue0sUDKIbvVgFHlSFJn +LNJhiQcND85Cd8BEc5xEUKDbEAotlRyBr+Qc5RQe8TZBAQIvfXOn3kLMTOmJDVb3 +n5HUA8ZsyY/b2BzgQJhdZpmYgG4t/wHFzstGH6wCxkPmrqKEPMVOHj1tyRRM4y5B +u8o5vzY8KhmqQYdOpc5LMnndkEl/AgMBAAGjYzBhMB8GA1UdIwQYMBaAFJxf0Gxj +o1+TypOYCK2Mh6UsXME3MB0GA1UdDgQWBBScX9BsY6Nfk8qTmAitjIelLFzBNzAO +BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOC +AgEATh65isagmD9uw2nAalxJUqzLK114OMHVVISfk/CHGT0sZonrDUL8zPB1hT+L +9IBdeeUXZ701guLyPI59WzbLWoAAKfLOKyzxj6ptBZNscsdW699QIyjlRRA96Gej +rw5VD5AJYu9LWaL2U/HANeQvwSS9eS9OICI7/RogsKQOLHDtdD+4E5UGUcjohybK +pFtqFiGS3XNgnhAY3jyB6ugYw3yJ8otQPr0R4hUDqDZ9MwFsSBXXiJCZBMXM5gf0 +vPSQ7RPi6ovDj6MzD8EpTBNO2hVWcXNyglD2mjN8orGoGjR0ZVzO0eurU+AagNjq +OknkJjCb5RyKqKkVMoaZkgoQI1YS4PbOTOK7vtuNknMBZi9iPrJyJ0U27U1W45eZ +/zo1PqVUSlJZS2Db7v54EX9K3BR5YLZrZAPbFYPhor72I5dQ8AkzNqdxliXzuUJ9 +2zg/LFis6ELhDtjTO0wugumDLmsx2d1Hhk9tl5EuT+IocTUW0fJz/iUrB0ckYyfI ++PbZa/wSMVYIwFNCr5zQM378BvAxRAMU8Vjq8moNqRGyg77FGr8H6lnco4g175x2 +MjxNBiLOFeXdntiP2t7SxDnlF4HPOEfrf4htWRvfn0IUrn7PqLBmZdo3r5+qPeoo +tt7VMVgWglvquxl1AnMaykgaIZOQCo6ThKd9OyMYkomgjaw= +-----END CERTIFICATE----- + +# Issuer: CN=Certum EC-384 CA O=Asseco Data Systems S.A. OU=Certum Certification Authority +# Subject: CN=Certum EC-384 CA O=Asseco Data Systems S.A. 
OU=Certum Certification Authority +# Label: "Certum EC-384 CA" +# Serial: 160250656287871593594747141429395092468 +# MD5 Fingerprint: b6:65:b3:96:60:97:12:a1:ec:4e:e1:3d:a3:c6:c9:f1 +# SHA1 Fingerprint: f3:3e:78:3c:ac:df:f4:a2:cc:ac:67:55:69:56:d7:e5:16:3c:e1:ed +# SHA256 Fingerprint: 6b:32:80:85:62:53:18:aa:50:d1:73:c9:8d:8b:da:09:d5:7e:27:41:3d:11:4c:f7:87:a0:f5:d0:6c:03:0c:f6 +-----BEGIN CERTIFICATE----- +MIICZTCCAeugAwIBAgIQeI8nXIESUiClBNAt3bpz9DAKBggqhkjOPQQDAzB0MQsw +CQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEgU3lzdGVtcyBTLkEuMScw +JQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxGTAXBgNVBAMT +EENlcnR1bSBFQy0zODQgQ0EwHhcNMTgwMzI2MDcyNDU0WhcNNDMwMzI2MDcyNDU0 +WjB0MQswCQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEgU3lzdGVtcyBT +LkEuMScwJQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxGTAX +BgNVBAMTEENlcnR1bSBFQy0zODQgQ0EwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATE +KI6rGFtqvm5kN2PkzeyrOvfMobgOgknXhimfoZTy42B4mIF4Bk3y7JoOV2CDn7Tm +Fy8as10CW4kjPMIRBSqniBMY81CE1700LCeJVf/OTOffph8oxPBUw7l8t1Ot68Kj +QjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFI0GZnQkdjrzife81r1HfS+8 +EF9LMA4GA1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNoADBlAjADVS2m5hjEfO/J +UG7BJw+ch69u1RsIGL2SKcHvlJF40jocVYli5RsJHrpka/F2tNQCMQC0QoSZ/6vn +nvuRlydd3LBbMHHOXjgaatkl5+r3YZJW+OraNsKHZZYuciUvf9/DE8k= +-----END CERTIFICATE----- + +# Issuer: CN=Certum Trusted Root CA O=Asseco Data Systems S.A. OU=Certum Certification Authority +# Subject: CN=Certum Trusted Root CA O=Asseco Data Systems S.A. OU=Certum Certification Authority +# Label: "Certum Trusted Root CA" +# Serial: 40870380103424195783807378461123655149 +# MD5 Fingerprint: 51:e1:c2:e7:fe:4c:84:af:59:0e:2f:f4:54:6f:ea:29 +# SHA1 Fingerprint: c8:83:44:c0:18:ae:9f:cc:f1:87:b7:8f:22:d1:c5:d7:45:84:ba:e5 +# SHA256 Fingerprint: fe:76:96:57:38:55:77:3e:37:a9:5e:7a:d4:d9:cc:96:c3:01:57:c1:5d:31:76:5b:a9:b1:57:04:e1:ae:78:fd +-----BEGIN CERTIFICATE----- +MIIFwDCCA6igAwIBAgIQHr9ZULjJgDdMBvfrVU+17TANBgkqhkiG9w0BAQ0FADB6 +MQswCQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEgU3lzdGVtcyBTLkEu +MScwJQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxHzAdBgNV +BAMTFkNlcnR1bSBUcnVzdGVkIFJvb3QgQ0EwHhcNMTgwMzE2MTIxMDEzWhcNNDMw +MzE2MTIxMDEzWjB6MQswCQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEg +U3lzdGVtcyBTLkEuMScwJQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRo +b3JpdHkxHzAdBgNVBAMTFkNlcnR1bSBUcnVzdGVkIFJvb3QgQ0EwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQDRLY67tzbqbTeRn06TpwXkKQMlzhyC93yZ +n0EGze2jusDbCSzBfN8pfktlL5On1AFrAygYo9idBcEq2EXxkd7fO9CAAozPOA/q +p1x4EaTByIVcJdPTsuclzxFUl6s1wB52HO8AU5853BSlLCIls3Jy/I2z5T4IHhQq +NwuIPMqw9MjCoa68wb4pZ1Xi/K1ZXP69VyywkI3C7Te2fJmItdUDmj0VDT06qKhF +8JVOJVkdzZhpu9PMMsmN74H+rX2Ju7pgE8pllWeg8xn2A1bUatMn4qGtg/BKEiJ3 +HAVz4hlxQsDsdUaakFjgao4rpUYwBI4Zshfjvqm6f1bxJAPXsiEodg42MEx51UGa +mqi4NboMOvJEGyCI98Ul1z3G4z5D3Yf+xOr1Uz5MZf87Sst4WmsXXw3Hw09Omiqi +7VdNIuJGmj8PkTQkfVXjjJU30xrwCSss0smNtA0Aq2cpKNgB9RkEth2+dv5yXMSF +ytKAQd8FqKPVhJBPC/PgP5sZ0jeJP/J7UhyM9uH3PAeXjA6iWYEMspA90+NZRu0P +qafegGtaqge2Gcu8V/OXIXoMsSt0Puvap2ctTMSYnjYJdmZm/Bo/6khUHL4wvYBQ +v3y1zgD2DGHZ5yQD4OMBgQ692IU0iL2yNqh7XAjlRICMb/gv1SHKHRzQ+8S1h9E6 +Tsd2tTVItQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSM+xx1 +vALTn04uSNn5YFSqxLNP+jAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQENBQAD +ggIBAEii1QALLtA/vBzVtVRJHlpr9OTy4EA34MwUe7nJ+jW1dReTagVphZzNTxl4 +WxmB82M+w85bj/UvXgF2Ez8sALnNllI5SW0ETsXpD4YN4fqzX4IS8TrOZgYkNCvo +zMrnadyHncI013nR03e4qllY/p0m+jiGPp2Kh2RX5Rc64vmNueMzeMGQ2Ljdt4NR +5MTMI9UGfOZR0800McD2RrsLrfw9EAUqO0qRJe6M1ISHgCq8CYyqOhNf6DR5UMEQ +GfnTKB7U0VEwKbOukGfWHwpjscWpxkIxYxeU72nLL/qMFH3EQxiJ2fAyQOaA4kZf 
+5ePBAFmo+eggvIksDkc0C+pXwlM2/KfUrzHN/gLldfq5Jwn58/U7yn2fqSLLiMmq +0Uc9NneoWWRrJ8/vJ8HjJLWG965+Mk2weWjROeiQWMODvA8s1pfrzgzhIMfatz7D +P78v3DSk+yshzWePS/Tj6tQ/50+6uaWTRRxmHyH6ZF5v4HaUMst19W7l9o/HuKTM +qJZ9ZPskWkoDbGs4xugDQ5r3V7mzKWmTOPQD8rv7gmsHINFSH5pkAnuYZttcTVoP +0ISVoDwUQwbKytu4QTbaakRnh6+v40URFWkIsr4WOZckbxJF0WddCajJFdr60qZf +E2Efv4WstK2tBZQIgx51F9NxO5NQI1mg7TyRVJ12AMXDuDjb +-----END CERTIFICATE----- + +# Issuer: CN=TunTrust Root CA O=Agence Nationale de Certification Electronique +# Subject: CN=TunTrust Root CA O=Agence Nationale de Certification Electronique +# Label: "TunTrust Root CA" +# Serial: 108534058042236574382096126452369648152337120275 +# MD5 Fingerprint: 85:13:b9:90:5b:36:5c:b6:5e:b8:5a:f8:e0:31:57:b4 +# SHA1 Fingerprint: cf:e9:70:84:0f:e0:73:0f:9d:f6:0c:7f:2c:4b:ee:20:46:34:9c:bb +# SHA256 Fingerprint: 2e:44:10:2a:b5:8c:b8:54:19:45:1c:8e:19:d9:ac:f3:66:2c:af:bc:61:4b:6a:53:96:0a:30:f7:d0:e2:eb:41 +-----BEGIN CERTIFICATE----- +MIIFszCCA5ugAwIBAgIUEwLV4kBMkkaGFmddtLu7sms+/BMwDQYJKoZIhvcNAQEL +BQAwYTELMAkGA1UEBhMCVE4xNzA1BgNVBAoMLkFnZW5jZSBOYXRpb25hbGUgZGUg +Q2VydGlmaWNhdGlvbiBFbGVjdHJvbmlxdWUxGTAXBgNVBAMMEFR1blRydXN0IFJv +b3QgQ0EwHhcNMTkwNDI2MDg1NzU2WhcNNDQwNDI2MDg1NzU2WjBhMQswCQYDVQQG +EwJUTjE3MDUGA1UECgwuQWdlbmNlIE5hdGlvbmFsZSBkZSBDZXJ0aWZpY2F0aW9u +IEVsZWN0cm9uaXF1ZTEZMBcGA1UEAwwQVHVuVHJ1c3QgUm9vdCBDQTCCAiIwDQYJ +KoZIhvcNAQEBBQADggIPADCCAgoCggIBAMPN0/y9BFPdDCA61YguBUtB9YOCfvdZ +n56eY+hz2vYGqU8ftPkLHzmMmiDQfgbU7DTZhrx1W4eI8NLZ1KMKsmwb60ksPqxd +2JQDoOw05TDENX37Jk0bbjBU2PWARZw5rZzJJQRNmpA+TkBuimvNKWfGzC3gdOgF +VwpIUPp6Q9p+7FuaDmJ2/uqdHYVy7BG7NegfJ7/Boce7SBbdVtfMTqDhuazb1YMZ +GoXRlJfXyqNlC/M4+QKu3fZnz8k/9YosRxqZbwUN/dAdgjH8KcwAWJeRTIAAHDOF +li/LQcKLEITDCSSJH7UP2dl3RxiSlGBcx5kDPP73lad9UKGAwqmDrViWVSHbhlnU +r8a83YFuB9tgYv7sEG7aaAH0gxupPqJbI9dkxt/con3YS7qC0lH4Zr8GRuR5KiY2 +eY8fTpkdso8MDhz/yV3A/ZAQprE38806JG60hZC/gLkMjNWb1sjxVj8agIl6qeIb +MlEsPvLfe/ZdeikZjuXIvTZxi11Mwh0/rViizz1wTaZQmCXcI/m4WEEIcb9PuISg +jwBUFfyRbVinljvrS5YnzWuioYasDXxU5mZMZl+QviGaAkYt5IPCgLnPSz7ofzwB +7I9ezX/SKEIBlYrilz0QIX32nRzFNKHsLA4KUiwSVXAkPcvCFDVDXSdOvsC9qnyW +5/yeYa1E0wCXAgMBAAGjYzBhMB0GA1UdDgQWBBQGmpsfU33x9aTI04Y+oXNZtPdE +ITAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFAaamx9TffH1pMjThj6hc1m0 +90QhMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAqgVutt0Vyb+z +xiD2BkewhpMl0425yAA/l/VSJ4hxyXT968pk21vvHl26v9Hr7lxpuhbI87mP0zYu +QEkHDVneixCwSQXi/5E/S7fdAo74gShczNxtr18UnH1YeA32gAm56Q6XKRm4t+v4 +FstVEuTGfbvE7Pi1HE4+Z7/FXxttbUcoqgRYYdZ2vyJ/0Adqp2RT8JeNnYA/u8EH +22Wv5psymsNUk8QcCMNE+3tjEUPRahphanltkE8pjkcFwRJpadbGNjHh/PqAulxP +xOu3Mqz4dWEX1xAZufHSCe96Qp1bWgvUxpVOKs7/B9dPfhgGiPEZtdmYu65xxBzn +dFlY7wyJz4sfdZMaBBSSSFCp61cpABbjNhzI+L/wM9VBD8TMPN3pM0MBkRArHtG5 +Xc0yGYuPjCB31yLEQtyEFpslbei0VXF/sHyz03FJuc9SpAQ/3D2gu68zngowYI7b +nV2UqL1g52KAdoGDDIzMMEZJ4gzSqK/rYXHv5yJiqfdcZGyfFoxnNidF9Ql7v/YQ +CvGwjVRDjAS6oz/v4jXH+XTgbzRB0L9zZVcg+ZtnemZoJE6AZb0QmQZZ8mWvuMZH +u/2QeItBcy6vVR/cO5JyboTT0GFMDcx2V+IthSIVNg3rAZ3r2OvEhJn7wAzMMujj +d9qDRIueVSjAi1jTkD5OGwDxFa2DK5o= +-----END CERTIFICATE----- + +# Issuer: CN=HARICA TLS RSA Root CA 2021 O=Hellenic Academic and Research Institutions CA +# Subject: CN=HARICA TLS RSA Root CA 2021 O=Hellenic Academic and Research Institutions CA +# Label: "HARICA TLS RSA Root CA 2021" +# Serial: 76817823531813593706434026085292783742 +# MD5 Fingerprint: 65:47:9b:58:86:dd:2c:f0:fc:a2:84:1f:1e:96:c4:91 +# SHA1 Fingerprint: 02:2d:05:82:fa:88:ce:14:0c:06:79:de:7f:14:10:e9:45:d7:a5:6d +# SHA256 Fingerprint: d9:5d:0e:8e:da:79:52:5b:f9:be:b1:1b:14:d2:10:0d:32:94:98:5f:0c:62:d9:fa:bd:9c:d9:99:ec:cb:7b:1d +-----BEGIN 
CERTIFICATE----- +MIIFpDCCA4ygAwIBAgIQOcqTHO9D88aOk8f0ZIk4fjANBgkqhkiG9w0BAQsFADBs +MQswCQYDVQQGEwJHUjE3MDUGA1UECgwuSGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJl +c2VhcmNoIEluc3RpdHV0aW9ucyBDQTEkMCIGA1UEAwwbSEFSSUNBIFRMUyBSU0Eg +Um9vdCBDQSAyMDIxMB4XDTIxMDIxOTEwNTUzOFoXDTQ1MDIxMzEwNTUzN1owbDEL +MAkGA1UEBhMCR1IxNzA1BgNVBAoMLkhlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNl +YXJjaCBJbnN0aXR1dGlvbnMgQ0ExJDAiBgNVBAMMG0hBUklDQSBUTFMgUlNBIFJv +b3QgQ0EgMjAyMTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAIvC569l +mwVnlskNJLnQDmT8zuIkGCyEf3dRywQRNrhe7Wlxp57kJQmXZ8FHws+RFjZiPTgE +4VGC/6zStGndLuwRo0Xua2s7TL+MjaQenRG56Tj5eg4MmOIjHdFOY9TnuEFE+2uv +a9of08WRiFukiZLRgeaMOVig1mlDqa2YUlhu2wr7a89o+uOkXjpFc5gH6l8Cct4M +pbOfrqkdtx2z/IpZ525yZa31MJQjB/OCFks1mJxTuy/K5FrZx40d/JiZ+yykgmvw +Kh+OC19xXFyuQnspiYHLA6OZyoieC0AJQTPb5lh6/a6ZcMBaD9YThnEvdmn8kN3b +LW7R8pv1GmuebxWMevBLKKAiOIAkbDakO/IwkfN4E8/BPzWr8R0RI7VDIp4BkrcY +AuUR0YLbFQDMYTfBKnya4dC6s1BG7oKsnTH4+yPiAwBIcKMJJnkVU2DzOFytOOqB +AGMUuTNe3QvboEUHGjMJ+E20pwKmafTCWQWIZYVWrkvL4N48fS0ayOn7H6NhStYq +E613TBoYm5EPWNgGVMWX+Ko/IIqmhaZ39qb8HOLubpQzKoNQhArlT4b4UEV4AIHr +W2jjJo3Me1xR9BQsQL4aYB16cmEdH2MtiKrOokWQCPxrvrNQKlr9qEgYRtaQQJKQ +CoReaDH46+0N0x3GfZkYVVYnZS6NRcUk7M7jAgMBAAGjQjBAMA8GA1UdEwEB/wQF +MAMBAf8wHQYDVR0OBBYEFApII6ZgpJIKM+qTW8VX6iVNvRLuMA4GA1UdDwEB/wQE +AwIBhjANBgkqhkiG9w0BAQsFAAOCAgEAPpBIqm5iFSVmewzVjIuJndftTgfvnNAU +X15QvWiWkKQUEapobQk1OUAJ2vQJLDSle1mESSmXdMgHHkdt8s4cUCbjnj1AUz/3 +f5Z2EMVGpdAgS1D0NTsY9FVqQRtHBmg8uwkIYtlfVUKqrFOFrJVWNlar5AWMxaja +H6NpvVMPxP/cyuN+8kyIhkdGGvMA9YCRotxDQpSbIPDRzbLrLFPCU3hKTwSUQZqP +JzLB5UkZv/HywouoCjkxKLR9YjYsTewfM7Z+d21+UPCfDtcRj88YxeMn/ibvBZ3P +zzfF0HvaO7AWhAw6k9a+F9sPPg4ZeAnHqQJyIkv3N3a6dcSFA1pj1bF1BcK5vZSt +jBWZp5N99sXzqnTPBIWUmAD04vnKJGW/4GKvyMX6ssmeVkjaef2WdhW+o45WxLM0 +/L5H9MG0qPzVMIho7suuyWPEdr6sOBjhXlzPrjoiUevRi7PzKzMHVIf6tLITe7pT +BGIBnfHAT+7hOtSLIBD6Alfm78ELt5BGnBkpjNxvoEppaZS3JGWg/6w/zgH7IS79 +aPib8qXPMThcFarmlwDB31qlpzmq6YR/PFGoOtmUW4y/Twhx5duoXNTSpv4Ao8YW +xw/ogM4cKGR0GQjTQuPOAF1/sdwTsOEFy9EgqoZ0njnnkf3/W9b3raYvAwtt41dU +63ZTGI0RmLo= +-----END CERTIFICATE----- + +# Issuer: CN=HARICA TLS ECC Root CA 2021 O=Hellenic Academic and Research Institutions CA +# Subject: CN=HARICA TLS ECC Root CA 2021 O=Hellenic Academic and Research Institutions CA +# Label: "HARICA TLS ECC Root CA 2021" +# Serial: 137515985548005187474074462014555733966 +# MD5 Fingerprint: ae:f7:4c:e5:66:35:d1:b7:9b:8c:22:93:74:d3:4b:b0 +# SHA1 Fingerprint: bc:b0:c1:9d:e9:98:92:70:19:38:57:e9:8d:a7:b4:5d:6e:ee:01:48 +# SHA256 Fingerprint: 3f:99:cc:47:4a:cf:ce:4d:fe:d5:87:94:66:5e:47:8d:15:47:73:9f:2e:78:0f:1b:b4:ca:9b:13:30:97:d4:01 +-----BEGIN CERTIFICATE----- +MIICVDCCAdugAwIBAgIQZ3SdjXfYO2rbIvT/WeK/zjAKBggqhkjOPQQDAzBsMQsw +CQYDVQQGEwJHUjE3MDUGA1UECgwuSGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJlc2Vh +cmNoIEluc3RpdHV0aW9ucyBDQTEkMCIGA1UEAwwbSEFSSUNBIFRMUyBFQ0MgUm9v +dCBDQSAyMDIxMB4XDTIxMDIxOTExMDExMFoXDTQ1MDIxMzExMDEwOVowbDELMAkG +A1UEBhMCR1IxNzA1BgNVBAoMLkhlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJj +aCBJbnN0aXR1dGlvbnMgQ0ExJDAiBgNVBAMMG0hBUklDQSBUTFMgRUNDIFJvb3Qg +Q0EgMjAyMTB2MBAGByqGSM49AgEGBSuBBAAiA2IABDgI/rGgltJ6rK9JOtDA4MM7 +KKrxcm1lAEeIhPyaJmuqS7psBAqIXhfyVYf8MLA04jRYVxqEU+kw2anylnTDUR9Y +STHMmE5gEYd103KUkE+bECUqqHgtvpBBWJAVcqeht6NCMEAwDwYDVR0TAQH/BAUw +AwEB/zAdBgNVHQ4EFgQUyRtTgRL+BNUW0aq8mm+3oJUZbsowDgYDVR0PAQH/BAQD +AgGGMAoGCCqGSM49BAMDA2cAMGQCMBHervjcToiwqfAircJRQO9gcS3ujwLEXQNw +SaSS6sUUiHCm0w2wqsosQJz76YJumgIwK0eaB8bRwoF8yguWGEEbo/QwCZ61IygN +nxS2PFOiTAZpffpskcYqSUXm7LcT4Tps +-----END CERTIFICATE----- + +# Issuer: CN=Autoridad de Certificacion Firmaprofesional CIF A62634068 +# Subject: 
CN=Autoridad de Certificacion Firmaprofesional CIF A62634068 +# Label: "Autoridad de Certificacion Firmaprofesional CIF A62634068" +# Serial: 1977337328857672817 +# MD5 Fingerprint: 4e:6e:9b:54:4c:ca:b7:fa:48:e4:90:b1:15:4b:1c:a3 +# SHA1 Fingerprint: 0b:be:c2:27:22:49:cb:39:aa:db:35:5c:53:e3:8c:ae:78:ff:b6:fe +# SHA256 Fingerprint: 57:de:05:83:ef:d2:b2:6e:03:61:da:99:da:9d:f4:64:8d:ef:7e:e8:44:1c:3b:72:8a:fa:9b:cd:e0:f9:b2:6a +-----BEGIN CERTIFICATE----- +MIIGFDCCA/ygAwIBAgIIG3Dp0v+ubHEwDQYJKoZIhvcNAQELBQAwUTELMAkGA1UE +BhMCRVMxQjBABgNVBAMMOUF1dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uIEZpcm1h +cHJvZmVzaW9uYWwgQ0lGIEE2MjYzNDA2ODAeFw0xNDA5MjMxNTIyMDdaFw0zNjA1 +MDUxNTIyMDdaMFExCzAJBgNVBAYTAkVTMUIwQAYDVQQDDDlBdXRvcmlkYWQgZGUg +Q2VydGlmaWNhY2lvbiBGaXJtYXByb2Zlc2lvbmFsIENJRiBBNjI2MzQwNjgwggIi +MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDKlmuO6vj78aI14H9M2uDDUtd9 +thDIAl6zQyrET2qyyhxdKJp4ERppWVevtSBC5IsP5t9bpgOSL/UR5GLXMnE42QQM +cas9UX4PB99jBVzpv5RvwSmCwLTaUbDBPLutN0pcyvFLNg4kq7/DhHf9qFD0sefG +L9ItWY16Ck6WaVICqjaY7Pz6FIMMNx/Jkjd/14Et5cS54D40/mf0PmbR0/RAz15i +NA9wBj4gGFrO93IbJWyTdBSTo3OxDqqHECNZXyAFGUftaI6SEspd/NYrspI8IM/h +X68gvqB2f3bl7BqGYTM+53u0P6APjqK5am+5hyZvQWyIplD9amML9ZMWGxmPsu2b +m8mQ9QEM3xk9Dz44I8kvjwzRAv4bVdZO0I08r0+k8/6vKtMFnXkIoctXMbScyJCy +Z/QYFpM6/EfY0XiWMR+6KwxfXZmtY4laJCB22N/9q06mIqqdXuYnin1oKaPnirja +EbsXLZmdEyRG98Xi2J+Of8ePdG1asuhy9azuJBCtLxTa/y2aRnFHvkLfuwHb9H/T +KI8xWVvTyQKmtFLKbpf7Q8UIJm+K9Lv9nyiqDdVF8xM6HdjAeI9BZzwelGSuewvF +6NkBiDkal4ZkQdU7hwxu+g/GvUgUvzlN1J5Bto+WHWOWk9mVBngxaJ43BjuAiUVh +OSPHG0SjFeUc+JIwuwIDAQABo4HvMIHsMB0GA1UdDgQWBBRlzeurNR4APn7VdMAc +tHNHDhpkLzASBgNVHRMBAf8ECDAGAQH/AgEBMIGmBgNVHSAEgZ4wgZswgZgGBFUd +IAAwgY8wLwYIKwYBBQUHAgEWI2h0dHA6Ly93d3cuZmlybWFwcm9mZXNpb25hbC5j +b20vY3BzMFwGCCsGAQUFBwICMFAeTgBQAGEAcwBlAG8AIABkAGUAIABsAGEAIABC +AG8AbgBhAG4AbwB2AGEAIAA0ADcAIABCAGEAcgBjAGUAbABvAG4AYQAgADAAOAAw +ADEANzAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQELBQADggIBAHSHKAIrdx9m +iWTtj3QuRhy7qPj4Cx2Dtjqn6EWKB7fgPiDL4QjbEwj4KKE1soCzC1HA01aajTNF +Sa9J8OA9B3pFE1r/yJfY0xgsfZb43aJlQ3CTkBW6kN/oGbDbLIpgD7dvlAceHabJ +hfa9NPhAeGIQcDq+fUs5gakQ1JZBu/hfHAsdCPKxsIl68veg4MSPi3i1O1ilI45P +Vf42O+AMt8oqMEEgtIDNrvx2ZnOorm7hfNoD6JQg5iKj0B+QXSBTFCZX2lSX3xZE +EAEeiGaPcjiT3SC3NL7X8e5jjkd5KAb881lFJWAiMxujX6i6KtoaPc1A6ozuBRWV +1aUsIC+nmCjuRfzxuIgALI9C2lHVnOUTaHFFQ4ueCyE8S1wF3BqfmI7avSKecs2t +CsvMo2ebKHTEm9caPARYpoKdrcd7b/+Alun4jWq9GJAd/0kakFI3ky88Al2CdgtR +5xbHV/g4+afNmyJU72OwFW1TZQNKXkqgsqeOSQBZONXH9IBk9W6VULgRfhVwOEqw +f9DEMnDAGf/JOC0ULGb0QkTmVXYbgBVX/8Cnp6o5qtjTcNAuuuuUavpfNIbnYrX9 +ivAwhZTJryQCL2/W3Wf+47BVTwSYT6RBVuKT0Gro1vP7ZeDOdcQxWQzugsgMYDNK +GbqEZycPvEJdvSRUDewdcAZfpLz6IHxV +-----END CERTIFICATE----- + +# Issuer: CN=vTrus ECC Root CA O=iTrusChina Co.,Ltd. +# Subject: CN=vTrus ECC Root CA O=iTrusChina Co.,Ltd. 
+# Label: "vTrus ECC Root CA" +# Serial: 630369271402956006249506845124680065938238527194 +# MD5 Fingerprint: de:4b:c1:f5:52:8c:9b:43:e1:3e:8f:55:54:17:8d:85 +# SHA1 Fingerprint: f6:9c:db:b0:fc:f6:02:13:b6:52:32:a6:a3:91:3f:16:70:da:c3:e1 +# SHA256 Fingerprint: 30:fb:ba:2c:32:23:8e:2a:98:54:7a:f9:79:31:e5:50:42:8b:9b:3f:1c:8e:eb:66:33:dc:fa:86:c5:b2:7d:d3 +-----BEGIN CERTIFICATE----- +MIICDzCCAZWgAwIBAgIUbmq8WapTvpg5Z6LSa6Q75m0c1towCgYIKoZIzj0EAwMw +RzELMAkGA1UEBhMCQ04xHDAaBgNVBAoTE2lUcnVzQ2hpbmEgQ28uLEx0ZC4xGjAY +BgNVBAMTEXZUcnVzIEVDQyBSb290IENBMB4XDTE4MDczMTA3MjY0NFoXDTQzMDcz +MTA3MjY0NFowRzELMAkGA1UEBhMCQ04xHDAaBgNVBAoTE2lUcnVzQ2hpbmEgQ28u +LEx0ZC4xGjAYBgNVBAMTEXZUcnVzIEVDQyBSb290IENBMHYwEAYHKoZIzj0CAQYF +K4EEACIDYgAEZVBKrox5lkqqHAjDo6LN/llWQXf9JpRCux3NCNtzslt188+cToL0 +v/hhJoVs1oVbcnDS/dtitN9Ti72xRFhiQgnH+n9bEOf+QP3A2MMrMudwpremIFUd +e4BdS49nTPEQo0IwQDAdBgNVHQ4EFgQUmDnNvtiyjPeyq+GtJK97fKHbH88wDwYD +VR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwCgYIKoZIzj0EAwMDaAAwZQIw +V53dVvHH4+m4SVBrm2nDb+zDfSXkV5UTQJtS0zvzQBm8JsctBp61ezaf9SXUY2sA +AjEA6dPGnlaaKsyh2j/IZivTWJwghfqrkYpwcBE4YGQLYgmRWAD5Tfs0aNoJrSEG +GJTO +-----END CERTIFICATE----- + +# Issuer: CN=vTrus Root CA O=iTrusChina Co.,Ltd. +# Subject: CN=vTrus Root CA O=iTrusChina Co.,Ltd. +# Label: "vTrus Root CA" +# Serial: 387574501246983434957692974888460947164905180485 +# MD5 Fingerprint: b8:c9:37:df:fa:6b:31:84:64:c5:ea:11:6a:1b:75:fc +# SHA1 Fingerprint: 84:1a:69:fb:f5:cd:1a:25:34:13:3d:e3:f8:fc:b8:99:d0:c9:14:b7 +# SHA256 Fingerprint: 8a:71:de:65:59:33:6f:42:6c:26:e5:38:80:d0:0d:88:a1:8d:a4:c6:a9:1f:0d:cb:61:94:e2:06:c5:c9:63:87 +-----BEGIN CERTIFICATE----- +MIIFVjCCAz6gAwIBAgIUQ+NxE9izWRRdt86M/TX9b7wFjUUwDQYJKoZIhvcNAQEL +BQAwQzELMAkGA1UEBhMCQ04xHDAaBgNVBAoTE2lUcnVzQ2hpbmEgQ28uLEx0ZC4x +FjAUBgNVBAMTDXZUcnVzIFJvb3QgQ0EwHhcNMTgwNzMxMDcyNDA1WhcNNDMwNzMx +MDcyNDA1WjBDMQswCQYDVQQGEwJDTjEcMBoGA1UEChMTaVRydXNDaGluYSBDby4s +THRkLjEWMBQGA1UEAxMNdlRydXMgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEBBQAD +ggIPADCCAgoCggIBAL1VfGHTuB0EYgWgrmy3cLRB6ksDXhA/kFocizuwZotsSKYc +IrrVQJLuM7IjWcmOvFjai57QGfIvWcaMY1q6n6MLsLOaXLoRuBLpDLvPbmyAhykU +AyyNJJrIZIO1aqwTLDPxn9wsYTwaP3BVm60AUn/PBLn+NvqcwBauYv6WTEN+VRS+ +GrPSbcKvdmaVayqwlHeFXgQPYh1jdfdr58tbmnDsPmcF8P4HCIDPKNsFxhQnL4Z9 +8Cfe/+Z+M0jnCx5Y0ScrUw5XSmXX+6KAYPxMvDVTAWqXcoKv8R1w6Jz1717CbMdH +flqUhSZNO7rrTOiwCcJlwp2dCZtOtZcFrPUGoPc2BX70kLJrxLT5ZOrpGgrIDajt +J8nU57O5q4IikCc9Kuh8kO+8T/3iCiSn3mUkpF3qwHYw03dQ+A0Em5Q2AXPKBlim +0zvc+gRGE1WKyURHuFE5Gi7oNOJ5y1lKCn+8pu8fA2dqWSslYpPZUxlmPCdiKYZN +pGvu/9ROutW04o5IWgAZCfEF2c6Rsffr6TlP9m8EQ5pV9T4FFL2/s1m02I4zhKOQ +UqqzApVg+QxMaPnu1RcN+HFXtSXkKe5lXa/R7jwXC1pDxaWG6iSe4gUH3DRCEpHW +OXSuTEGC2/KmSNGzm/MzqvOmwMVO9fSddmPmAsYiS8GVP1BkLFTltvA8Kc9XAgMB +AAGjQjBAMB0GA1UdDgQWBBRUYnBj8XWEQ1iO0RYgscasGrz2iTAPBgNVHRMBAf8E +BTADAQH/MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAKbqSSaet +8PFww+SX8J+pJdVrnjT+5hpk9jprUrIQeBqfTNqK2uwcN1LgQkv7bHbKJAs5EhWd +nxEt/Hlk3ODg9d3gV8mlsnZwUKT+twpw1aA08XXXTUm6EdGz2OyC/+sOxL9kLX1j +bhd47F18iMjrjld22VkE+rxSH0Ws8HqA7Oxvdq6R2xCOBNyS36D25q5J08FsEhvM +Kar5CKXiNxTKsbhm7xqC5PD48acWabfbqWE8n/Uxy+QARsIvdLGx14HuqCaVvIiv +TDUHKgLKeBRtRytAVunLKmChZwOgzoy8sHJnxDHO2zTlJQNgJXtxmOTAGytfdELS +S8VZCAeHvsXDf+eW2eHcKJfWjwXj9ZtOyh1QRwVTsMo554WgicEFOwE30z9J4nfr +I8iIZjs9OXYhRvHsXyO466JmdXTBQPfYaJqT4i2pLr0cox7IdMakLXogqzu4sEb9 +b91fUlV1YvCXoHzXOP0l382gmxDPi7g4Xl7FtKYCNqEeXxzP4padKar9mK5S4fNB +UvupLnKWnyfjqnN9+BojZns7q2WwMgFLFT49ok8MKzWixtlnEjUwzXYuFrOZnk1P +Ti07NEPhmg4NpGaXutIcSkwsKouLgU9xGqndXHt7CMUADTdA43x7VF8vhV929ven +sBxXVsFy6K2ir40zSbofitzmdHxghm+Hl3s= +-----END CERTIFICATE----- + +# 
Issuer: CN=ISRG Root X2 O=Internet Security Research Group +# Subject: CN=ISRG Root X2 O=Internet Security Research Group +# Label: "ISRG Root X2" +# Serial: 87493402998870891108772069816698636114 +# MD5 Fingerprint: d3:9e:c4:1e:23:3c:a6:df:cf:a3:7e:6d:e0:14:e6:e5 +# SHA1 Fingerprint: bd:b1:b9:3c:d5:97:8d:45:c6:26:14:55:f8:db:95:c7:5a:d1:53:af +# SHA256 Fingerprint: 69:72:9b:8e:15:a8:6e:fc:17:7a:57:af:b7:17:1d:fc:64:ad:d2:8c:2f:ca:8c:f1:50:7e:34:45:3c:cb:14:70 +-----BEGIN CERTIFICATE----- +MIICGzCCAaGgAwIBAgIQQdKd0XLq7qeAwSxs6S+HUjAKBggqhkjOPQQDAzBPMQsw +CQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJuZXQgU2VjdXJpdHkgUmVzZWFyY2gg +R3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBYMjAeFw0yMDA5MDQwMDAwMDBaFw00 +MDA5MTcxNjAwMDBaME8xCzAJBgNVBAYTAlVTMSkwJwYDVQQKEyBJbnRlcm5ldCBT +ZWN1cml0eSBSZXNlYXJjaCBHcm91cDEVMBMGA1UEAxMMSVNSRyBSb290IFgyMHYw +EAYHKoZIzj0CAQYFK4EEACIDYgAEzZvVn4CDCuwJSvMWSj5cz3es3mcFDR0HttwW ++1qLFNvicWDEukWVEYmO6gbf9yoWHKS5xcUy4APgHoIYOIvXRdgKam7mAHf7AlF9 +ItgKbppbd9/w+kHsOdx1ymgHDB/qo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0T +AQH/BAUwAwEB/zAdBgNVHQ4EFgQUfEKWrt5LSDv6kviejM9ti6lyN5UwCgYIKoZI +zj0EAwMDaAAwZQIwe3lORlCEwkSHRhtFcP9Ymd70/aTSVaYgLXTWNLxBo1BfASdW +tL4ndQavEi51mI38AjEAi/V3bNTIZargCyzuFJ0nN6T5U6VR5CmD1/iQMVtCnwr1 +/q4AaOeMSQ+2b1tbFfLn +-----END CERTIFICATE----- + +# Issuer: CN=HiPKI Root CA - G1 O=Chunghwa Telecom Co., Ltd. +# Subject: CN=HiPKI Root CA - G1 O=Chunghwa Telecom Co., Ltd. +# Label: "HiPKI Root CA - G1" +# Serial: 60966262342023497858655262305426234976 +# MD5 Fingerprint: 69:45:df:16:65:4b:e8:68:9a:8f:76:5f:ff:80:9e:d3 +# SHA1 Fingerprint: 6a:92:e4:a8:ee:1b:ec:96:45:37:e3:29:57:49:cd:96:e3:e5:d2:60 +# SHA256 Fingerprint: f0:15:ce:3c:c2:39:bf:ef:06:4b:e9:f1:d2:c4:17:e1:a0:26:4a:0a:94:be:1f:0c:8d:12:18:64:eb:69:49:cc +-----BEGIN CERTIFICATE----- +MIIFajCCA1KgAwIBAgIQLd2szmKXlKFD6LDNdmpeYDANBgkqhkiG9w0BAQsFADBP +MQswCQYDVQQGEwJUVzEjMCEGA1UECgwaQ2h1bmdod2EgVGVsZWNvbSBDby4sIEx0 +ZC4xGzAZBgNVBAMMEkhpUEtJIFJvb3QgQ0EgLSBHMTAeFw0xOTAyMjIwOTQ2MDRa +Fw0zNzEyMzExNTU5NTlaME8xCzAJBgNVBAYTAlRXMSMwIQYDVQQKDBpDaHVuZ2h3 +YSBUZWxlY29tIENvLiwgTHRkLjEbMBkGA1UEAwwSSGlQS0kgUm9vdCBDQSAtIEcx +MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA9B5/UnMyDHPkvRN0o9Qw +qNCuS9i233VHZvR85zkEHmpwINJaR3JnVfSl6J3VHiGh8Ge6zCFovkRTv4354twv +Vcg3Px+kwJyz5HdcoEb+d/oaoDjq7Zpy3iu9lFc6uux55199QmQ5eiY29yTw1S+6 +lZgRZq2XNdZ1AYDgr/SEYYwNHl98h5ZeQa/rh+r4XfEuiAU+TCK72h8q3VJGZDnz +Qs7ZngyzsHeXZJzA9KMuH5UHsBffMNsAGJZMoYFL3QRtU6M9/Aes1MU3guvklQgZ +KILSQjqj2FPseYlgSGDIcpJQ3AOPgz+yQlda22rpEZfdhSi8MEyr48KxRURHH+CK +FgeW0iEPU8DtqX7UTuybCeyvQqww1r/REEXgphaypcXTT3OUM3ECoWqj1jOXTyFj +HluP2cFeRXF3D4FdXyGarYPM+l7WjSNfGz1BryB1ZlpK9p/7qxj3ccC2HTHsOyDr +y+K49a6SsvfhhEvyovKTmiKe0xRvNlS9H15ZFblzqMF8b3ti6RZsR1pl8w4Rm0bZ +/W3c1pzAtH2lsN0/Vm+h+fbkEkj9Bn8SV7apI09bA8PgcSojt/ewsTu8mL3WmKgM +a/aOEmem8rJY5AIJEzypuxC00jBF8ez3ABHfZfjcK0NVvxaXxA/VLGGEqnKG/uY6 +fsI/fe78LxQ+5oXdUG+3Se0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAdBgNV +HQ4EFgQU8ncX+l6o/vY9cdVouslGDDjYr7AwDgYDVR0PAQH/BAQDAgGGMA0GCSqG +SIb3DQEBCwUAA4ICAQBQUfB13HAE4/+qddRxosuej6ip0691x1TPOhwEmSKsxBHi +7zNKpiMdDg1H2DfHb680f0+BazVP6XKlMeJ45/dOlBhbQH3PayFUhuaVevvGyuqc +SE5XCV0vrPSltJczWNWseanMX/mF+lLFjfiRFOs6DRfQUsJ748JzjkZ4Bjgs6Fza +ZsT0pPBWGTMpWmWSBUdGSquEwx4noR8RkpkndZMPvDY7l1ePJlsMu5wP1G4wB9Tc +XzZoZjmDlicmisjEOf6aIW/Vcobpf2Lll07QJNBAsNB1CI69aO4I1258EHBGG3zg +iLKecoaZAeO/n0kZtCW+VmWuF2PlHt/o/0elv+EmBYTksMCv5wiZqAxeJoBF1Pho +L5aPruJKHJwWDBNvOIf2u8g0X5IDUXlwpt/L9ZlNec1OvFefQ05rLisY+GpzjLrF +Ne85akEez3GoorKGB1s6yeHvP2UEgEcyRHCVTjFnanRbEEV16rCf0OY1/k6fi8wr 
+kkVbbiVghUbN0aqwdmaTd5a+g744tiROJgvM7XpWGuDpWsZkrUx6AEhEL7lAuxM+ +vhV4nYWBSipX3tUZQ9rbyltHhoMLP7YNdnhzeSJesYAfz77RP1YQmCuVh6EfnWQU +YDksswBVLuT1sw5XxJFBAJw/6KXf6vb/yPCtbVKoF6ubYfwSUTXkJf2vqmqGOQ== +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R4 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R4 +# Label: "GlobalSign ECC Root CA - R4" +# Serial: 159662223612894884239637590694 +# MD5 Fingerprint: 26:29:f8:6d:e1:88:bf:a2:65:7f:aa:c4:cd:0f:7f:fc +# SHA1 Fingerprint: 6b:a0:b0:98:e1:71:ef:5a:ad:fe:48:15:80:77:10:f4:bd:6f:0b:28 +# SHA256 Fingerprint: b0:85:d7:0b:96:4f:19:1a:73:e4:af:0d:54:ae:7a:0e:07:aa:fd:af:9b:71:dd:08:62:13:8a:b7:32:5a:24:a2 +-----BEGIN CERTIFICATE----- +MIIB3DCCAYOgAwIBAgINAgPlfvU/k/2lCSGypjAKBggqhkjOPQQDAjBQMSQwIgYD +VQQLExtHbG9iYWxTaWduIEVDQyBSb290IENBIC0gUjQxEzARBgNVBAoTCkdsb2Jh +bFNpZ24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMTIxMTEzMDAwMDAwWhcNMzgw +MTE5MDMxNDA3WjBQMSQwIgYDVQQLExtHbG9iYWxTaWduIEVDQyBSb290IENBIC0g +UjQxEzARBgNVBAoTCkdsb2JhbFNpZ24xEzARBgNVBAMTCkdsb2JhbFNpZ24wWTAT +BgcqhkjOPQIBBggqhkjOPQMBBwNCAAS4xnnTj2wlDp8uORkcA6SumuU5BwkWymOx +uYb4ilfBV85C+nOh92VC/x7BALJucw7/xyHlGKSq2XE/qNS5zowdo0IwQDAOBgNV +HQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUVLB7rUW44kB/ ++wpu+74zyTyjhNUwCgYIKoZIzj0EAwIDRwAwRAIgIk90crlgr/HmnKAWBVBfw147 +bmF0774BxL4YSFlhgjICICadVGNA3jdgUM/I2O2dgq43mLyjj0xMqTQrbO/7lZsm +-----END CERTIFICATE----- + +# Issuer: CN=GTS Root R1 O=Google Trust Services LLC +# Subject: CN=GTS Root R1 O=Google Trust Services LLC +# Label: "GTS Root R1" +# Serial: 159662320309726417404178440727 +# MD5 Fingerprint: 05:fe:d0:bf:71:a8:a3:76:63:da:01:e0:d8:52:dc:40 +# SHA1 Fingerprint: e5:8c:1c:c4:91:3b:38:63:4b:e9:10:6e:e3:ad:8e:6b:9d:d9:81:4a +# SHA256 Fingerprint: d9:47:43:2a:bd:e7:b7:fa:90:fc:2e:6b:59:10:1b:12:80:e0:e1:c7:e4:e4:0f:a3:c6:88:7f:ff:57:a7:f4:cf +-----BEGIN CERTIFICATE----- +MIIFVzCCAz+gAwIBAgINAgPlk28xsBNJiGuiFzANBgkqhkiG9w0BAQwFADBHMQsw +CQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEU +MBIGA1UEAxMLR1RTIFJvb3QgUjEwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAw +MDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZp +Y2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjEwggIiMA0GCSqGSIb3DQEBAQUA +A4ICDwAwggIKAoICAQC2EQKLHuOhd5s73L+UPreVp0A8of2C+X0yBoJx9vaMf/vo +27xqLpeXo4xL+Sv2sfnOhB2x+cWX3u+58qPpvBKJXqeqUqv4IyfLpLGcY9vXmX7w +Cl7raKb0xlpHDU0QM+NOsROjyBhsS+z8CZDfnWQpJSMHobTSPS5g4M/SCYe7zUjw +TcLCeoiKu7rPWRnWr4+wB7CeMfGCwcDfLqZtbBkOtdh+JhpFAz2weaSUKK0Pfybl +qAj+lug8aJRT7oM6iCsVlgmy4HqMLnXWnOunVmSPlk9orj2XwoSPwLxAwAtcvfaH +szVsrBhQf4TgTM2S0yDpM7xSma8ytSmzJSq0SPly4cpk9+aCEI3oncKKiPo4Zor8 +Y/kB+Xj9e1x3+naH+uzfsQ55lVe0vSbv1gHR6xYKu44LtcXFilWr06zqkUspzBmk +MiVOKvFlRNACzqrOSbTqn3yDsEB750Orp2yjj32JgfpMpf/VjsPOS+C12LOORc92 +wO1AK/1TD7Cn1TsNsYqiA94xrcx36m97PtbfkSIS5r762DL8EGMUUXLeXdYWk70p +aDPvOmbsB4om3xPXV2V4J95eSRQAogB/mqghtqmxlbCluQ0WEdrHbEg8QOB+DVrN +VjzRlwW5y0vtOUucxD/SVRNuJLDWcfr0wbrM7Rv1/oFB2ACYPTrIrnqYNxgFlQID +AQABo0IwQDAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4E +FgQU5K8rJnEaK0gnhS9SZizv8IkTcT4wDQYJKoZIhvcNAQEMBQADggIBAJ+qQibb +C5u+/x6Wki4+omVKapi6Ist9wTrYggoGxval3sBOh2Z5ofmmWJyq+bXmYOfg6LEe +QkEzCzc9zolwFcq1JKjPa7XSQCGYzyI0zzvFIoTgxQ6KfF2I5DUkzps+GlQebtuy +h6f88/qBVRRiClmpIgUxPoLW7ttXNLwzldMXG+gnoot7TiYaelpkttGsN/H9oPM4 +7HLwEXWdyzRSjeZ2axfG34arJ45JK3VmgRAhpuo+9K4l/3wV3s6MJT/KYnAK9y8J +ZgfIPxz88NtFMN9iiMG1D53Dn0reWVlHxYciNuaCp+0KueIHoI17eko8cdLiA6Ef +MgfdG+RCzgwARWGAtQsgWSl4vflVy2PFPEz0tv/bal8xa5meLMFrUKTX5hgUvYU/ 
+Z6tGn6D/Qqc6f1zLXbBwHSs09dR2CQzreExZBfMzQsNhFRAbd03OIozUhfJFfbdT +6u9AWpQKXCBfTkBdYiJ23//OYb2MI3jSNwLgjt7RETeJ9r/tSQdirpLsQBqvFAnZ +0E6yove+7u7Y/9waLd64NnHi/Hm3lCXRSHNboTXns5lndcEZOitHTtNCjv0xyBZm +2tIMPNuzjsmhDYAPexZ3FL//2wmUspO8IFgV6dtxQ/PeEMMA3KgqlbbC1j+Qa3bb +bP6MvPJwNQzcmRk13NfIRmPVNnGuV/u3gm3c +-----END CERTIFICATE----- + +# Issuer: CN=GTS Root R2 O=Google Trust Services LLC +# Subject: CN=GTS Root R2 O=Google Trust Services LLC +# Label: "GTS Root R2" +# Serial: 159662449406622349769042896298 +# MD5 Fingerprint: 1e:39:c0:53:e6:1e:29:82:0b:ca:52:55:36:5d:57:dc +# SHA1 Fingerprint: 9a:44:49:76:32:db:de:fa:d0:bc:fb:5a:7b:17:bd:9e:56:09:24:94 +# SHA256 Fingerprint: 8d:25:cd:97:22:9d:bf:70:35:6b:da:4e:b3:cc:73:40:31:e2:4c:f0:0f:af:cf:d3:2d:c7:6e:b5:84:1c:7e:a8 +-----BEGIN CERTIFICATE----- +MIIFVzCCAz+gAwIBAgINAgPlrsWNBCUaqxElqjANBgkqhkiG9w0BAQwFADBHMQsw +CQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEU +MBIGA1UEAxMLR1RTIFJvb3QgUjIwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAw +MDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZp +Y2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjIwggIiMA0GCSqGSIb3DQEBAQUA +A4ICDwAwggIKAoICAQDO3v2m++zsFDQ8BwZabFn3GTXd98GdVarTzTukk3LvCvpt +nfbwhYBboUhSnznFt+4orO/LdmgUud+tAWyZH8QiHZ/+cnfgLFuv5AS/T3KgGjSY +6Dlo7JUle3ah5mm5hRm9iYz+re026nO8/4Piy33B0s5Ks40FnotJk9/BW9BuXvAu +MC6C/Pq8tBcKSOWIm8Wba96wyrQD8Nr0kLhlZPdcTK3ofmZemde4wj7I0BOdre7k +RXuJVfeKH2JShBKzwkCX44ofR5GmdFrS+LFjKBC4swm4VndAoiaYecb+3yXuPuWg +f9RhD1FLPD+M2uFwdNjCaKH5wQzpoeJ/u1U8dgbuak7MkogwTZq9TwtImoS1mKPV ++3PBV2HdKFZ1E66HjucMUQkQdYhMvI35ezzUIkgfKtzra7tEscszcTJGr61K8Yzo +dDqs5xoic4DSMPclQsciOzsSrZYuxsN2B6ogtzVJV+mSSeh2FnIxZyuWfoqjx5RW +Ir9qS34BIbIjMt/kmkRtWVtd9QCgHJvGeJeNkP+byKq0rxFROV7Z+2et1VsRnTKa +G73VululycslaVNVJ1zgyjbLiGH7HrfQy+4W+9OmTN6SpdTi3/UGVN4unUu0kzCq +gc7dGtxRcw1PcOnlthYhGXmy5okLdWTK1au8CcEYof/UVKGFPP0UJAOyh9OktwID +AQABo0IwQDAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4E +FgQUu//KjiOfT5nK2+JopqUVJxce2Q4wDQYJKoZIhvcNAQEMBQADggIBAB/Kzt3H +vqGf2SdMC9wXmBFqiN495nFWcrKeGk6c1SuYJF2ba3uwM4IJvd8lRuqYnrYb/oM8 +0mJhwQTtzuDFycgTE1XnqGOtjHsB/ncw4c5omwX4Eu55MaBBRTUoCnGkJE+M3DyC +B19m3H0Q/gxhswWV7uGugQ+o+MePTagjAiZrHYNSVc61LwDKgEDg4XSsYPWHgJ2u +NmSRXbBoGOqKYcl3qJfEycel/FVL8/B/uWU9J2jQzGv6U53hkRrJXRqWbTKH7QMg +yALOWr7Z6v2yTcQvG99fevX4i8buMTolUVVnjWQye+mew4K6Ki3pHrTgSAai/Gev +HyICc/sgCq+dVEuhzf9gR7A/Xe8bVr2XIZYtCtFenTgCR2y59PYjJbigapordwj6 +xLEokCZYCDzifqrXPW+6MYgKBesntaFJ7qBFVHvmJ2WZICGoo7z7GJa7Um8M7YNR +TOlZ4iBgxcJlkoKM8xAfDoqXvneCbT+PHV28SSe9zE8P4c52hgQjxcCMElv924Sg +JPFI/2R80L5cFtHvma3AH/vLrrw4IgYmZNralw4/KBVEqE8AyvCazM90arQ+POuV +7LXTWtiBmelDGDfrs7vRWGJB82bSj6p4lVQgw1oudCvV0b4YacCs1aTPObpRhANl +6WLAYv7YTVWW4tAR+kg0Eeye7QUd5MjWHYbL +-----END CERTIFICATE----- + +# Issuer: CN=GTS Root R3 O=Google Trust Services LLC +# Subject: CN=GTS Root R3 O=Google Trust Services LLC +# Label: "GTS Root R3" +# Serial: 159662495401136852707857743206 +# MD5 Fingerprint: 3e:e7:9d:58:02:94:46:51:94:e5:e0:22:4a:8b:e7:73 +# SHA1 Fingerprint: ed:e5:71:80:2b:c8:92:b9:5b:83:3c:d2:32:68:3f:09:cd:a0:1e:46 +# SHA256 Fingerprint: 34:d8:a7:3e:e2:08:d9:bc:db:0d:95:65:20:93:4b:4e:40:e6:94:82:59:6e:8b:6f:73:c8:42:6b:01:0a:6f:48 +-----BEGIN CERTIFICATE----- +MIICCTCCAY6gAwIBAgINAgPluILrIPglJ209ZjAKBggqhkjOPQQDAzBHMQswCQYD +VQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEUMBIG +A1UEAxMLR1RTIFJvb3QgUjMwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAwMDAw +WjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2Vz +IExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjMwdjAQBgcqhkjOPQIBBgUrgQQAIgNi 
+AAQfTzOHMymKoYTey8chWEGJ6ladK0uFxh1MJ7x/JlFyb+Kf1qPKzEUURout736G +jOyxfi//qXGdGIRFBEFVbivqJn+7kAHjSxm65FSWRQmx1WyRRK2EE46ajA2ADDL2 +4CejQjBAMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW +BBTB8Sa6oC2uhYHP0/EqEr24Cmf9vDAKBggqhkjOPQQDAwNpADBmAjEA9uEglRR7 +VKOQFhG/hMjqb2sXnh5GmCCbn9MN2azTL818+FsuVbu/3ZL3pAzcMeGiAjEA/Jdm +ZuVDFhOD3cffL74UOO0BzrEXGhF16b0DjyZ+hOXJYKaV11RZt+cRLInUue4X +-----END CERTIFICATE----- + +# Issuer: CN=GTS Root R4 O=Google Trust Services LLC +# Subject: CN=GTS Root R4 O=Google Trust Services LLC +# Label: "GTS Root R4" +# Serial: 159662532700760215368942768210 +# MD5 Fingerprint: 43:96:83:77:19:4d:76:b3:9d:65:52:e4:1d:22:a5:e8 +# SHA1 Fingerprint: 77:d3:03:67:b5:e0:0c:15:f6:0c:38:61:df:7c:e1:3b:92:46:4d:47 +# SHA256 Fingerprint: 34:9d:fa:40:58:c5:e2:63:12:3b:39:8a:e7:95:57:3c:4e:13:13:c8:3f:e6:8f:93:55:6c:d5:e8:03:1b:3c:7d +-----BEGIN CERTIFICATE----- +MIICCTCCAY6gAwIBAgINAgPlwGjvYxqccpBQUjAKBggqhkjOPQQDAzBHMQswCQYD +VQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEUMBIG +A1UEAxMLR1RTIFJvb3QgUjQwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAwMDAw +WjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2Vz +IExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjQwdjAQBgcqhkjOPQIBBgUrgQQAIgNi +AATzdHOnaItgrkO4NcWBMHtLSZ37wWHO5t5GvWvVYRg1rkDdc/eJkTBa6zzuhXyi +QHY7qca4R9gq55KRanPpsXI5nymfopjTX15YhmUPoYRlBtHci8nHc8iMai/lxKvR +HYqjQjBAMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW +BBSATNbrdP9JNqPV2Py1PsVq8JQdjDAKBggqhkjOPQQDAwNpADBmAjEA6ED/g94D +9J+uHXqnLrmvT/aDHQ4thQEd0dlq7A/Cr8deVl5c1RxYIigL9zC2L7F8AjEA8GE8 +p/SgguMh1YQdc4acLa/KNJvxn7kjNuK8YAOdgLOaVsjh4rsUecrNIdSUtUlD +-----END CERTIFICATE----- + +# Issuer: CN=Telia Root CA v2 O=Telia Finland Oyj +# Subject: CN=Telia Root CA v2 O=Telia Finland Oyj +# Label: "Telia Root CA v2" +# Serial: 7288924052977061235122729490515358 +# MD5 Fingerprint: 0e:8f:ac:aa:82:df:85:b1:f4:dc:10:1c:fc:99:d9:48 +# SHA1 Fingerprint: b9:99:cd:d1:73:50:8a:c4:47:05:08:9c:8c:88:fb:be:a0:2b:40:cd +# SHA256 Fingerprint: 24:2b:69:74:2f:cb:1e:5b:2a:bf:98:89:8b:94:57:21:87:54:4e:5b:4d:99:11:78:65:73:62:1f:6a:74:b8:2c +-----BEGIN CERTIFICATE----- +MIIFdDCCA1ygAwIBAgIPAWdfJ9b+euPkrL4JWwWeMA0GCSqGSIb3DQEBCwUAMEQx +CzAJBgNVBAYTAkZJMRowGAYDVQQKDBFUZWxpYSBGaW5sYW5kIE95ajEZMBcGA1UE +AwwQVGVsaWEgUm9vdCBDQSB2MjAeFw0xODExMjkxMTU1NTRaFw00MzExMjkxMTU1 +NTRaMEQxCzAJBgNVBAYTAkZJMRowGAYDVQQKDBFUZWxpYSBGaW5sYW5kIE95ajEZ +MBcGA1UEAwwQVGVsaWEgUm9vdCBDQSB2MjCCAiIwDQYJKoZIhvcNAQEBBQADggIP +ADCCAgoCggIBALLQPwe84nvQa5n44ndp586dpAO8gm2h/oFlH0wnrI4AuhZ76zBq +AMCzdGh+sq/H1WKzej9Qyow2RCRj0jbpDIX2Q3bVTKFgcmfiKDOlyzG4OiIjNLh9 +vVYiQJ3q9HsDrWj8soFPmNB06o3lfc1jw6P23pLCWBnglrvFxKk9pXSW/q/5iaq9 +lRdU2HhE8Qx3FZLgmEKnpNaqIJLNwaCzlrI6hEKNfdWV5Nbb6WLEWLN5xYzTNTOD +n3WhUidhOPFZPY5Q4L15POdslv5e2QJltI5c0BE0312/UqeBAMN/mUWZFdUXyApT +7GPzmX3MaRKGwhfwAZ6/hLzRUssbkmbOpFPlob/E2wnW5olWK8jjfN7j/4nlNW4o +6GwLI1GpJQXrSPjdscr6bAhR77cYbETKJuFzxokGgeWKrLDiKca5JLNrRBH0pUPC +TEPlcDaMtjNXepUugqD0XBCzYYP2AgWGLnwtbNwDRm41k9V6lS/eINhbfpSQBGq6 +WT0EBXWdN6IOLj3rwaRSg/7Qa9RmjtzG6RJOHSpXqhC8fF6CfaamyfItufUXJ63R +DolUK5X6wK0dmBR4M0KGCqlztft0DbcbMBnEWg4cJ7faGND/isgFuvGqHKI3t+ZI +pEYslOqodmJHixBTB0hXbOKSTbauBcvcwUpej6w9GU7C7WB1K9vBykLVAgMBAAGj +YzBhMB8GA1UdIwQYMBaAFHKs5DN5qkWH9v2sHZ7Wxy+G2CQ5MB0GA1UdDgQWBBRy +rOQzeapFh/b9rB2e1scvhtgkOTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUw +AwEB/zANBgkqhkiG9w0BAQsFAAOCAgEAoDtZpwmUPjaE0n4vOaWWl/oRrfxn83EJ +8rKJhGdEr7nv7ZbsnGTbMjBvZ5qsfl+yqwE2foH65IRe0qw24GtixX1LDoJt0nZi +0f6X+J8wfBj5tFJ3gh1229MdqfDBmgC9bXXYfef6xzijnHDoRnkDry5023X4blMM 
+A8iZGok1GTzTyVR8qPAs5m4HeW9q4ebqkYJpCh3DflminmtGFZhb069GHWLIzoBS +SRE/yQQSwxN8PzuKlts8oB4KtItUsiRnDe+Cy748fdHif64W1lZYudogsYMVoe+K +TTJvQS8TUoKU1xrBeKJR3Stwbbca+few4GeXVtt8YVMJAygCQMez2P2ccGrGKMOF +6eLtGpOg3kuYooQ+BXcBlj37tCAPnHICehIv1aO6UXivKitEZU61/Qrowc15h2Er +3oBXRb9n8ZuRXqWk7FlIEA04x7D6w0RtBPV4UBySllva9bguulvP5fBqnUsvWHMt +Ty3EHD70sz+rFQ47GUGKpMFXEmZxTPpT41frYpUJnlTd0cI8Vzy9OK2YZLe4A5pT +VmBds9hCG1xLEooc6+t9xnppxyd/pPiL8uSUZodL6ZQHCRJ5irLrdATczvREWeAW +ysUsWNc8e89ihmpQfTU2Zqf7N+cox9jQraVplI/owd8k+BsHMYeB2F326CjYSlKA +rBPuUBQemMc= +-----END CERTIFICATE----- + +# Issuer: CN=D-TRUST BR Root CA 1 2020 O=D-Trust GmbH +# Subject: CN=D-TRUST BR Root CA 1 2020 O=D-Trust GmbH +# Label: "D-TRUST BR Root CA 1 2020" +# Serial: 165870826978392376648679885835942448534 +# MD5 Fingerprint: b5:aa:4b:d5:ed:f7:e3:55:2e:8f:72:0a:f3:75:b8:ed +# SHA1 Fingerprint: 1f:5b:98:f0:e3:b5:f7:74:3c:ed:e6:b0:36:7d:32:cd:f4:09:41:67 +# SHA256 Fingerprint: e5:9a:aa:81:60:09:c2:2b:ff:5b:25:ba:d3:7d:f3:06:f0:49:79:7c:1f:81:d8:5a:b0:89:e6:57:bd:8f:00:44 +-----BEGIN CERTIFICATE----- +MIIC2zCCAmCgAwIBAgIQfMmPK4TX3+oPyWWa00tNljAKBggqhkjOPQQDAzBIMQsw +CQYDVQQGEwJERTEVMBMGA1UEChMMRC1UcnVzdCBHbWJIMSIwIAYDVQQDExlELVRS +VVNUIEJSIFJvb3QgQ0EgMSAyMDIwMB4XDTIwMDIxMTA5NDUwMFoXDTM1MDIxMTA5 +NDQ1OVowSDELMAkGA1UEBhMCREUxFTATBgNVBAoTDEQtVHJ1c3QgR21iSDEiMCAG +A1UEAxMZRC1UUlVTVCBCUiBSb290IENBIDEgMjAyMDB2MBAGByqGSM49AgEGBSuB +BAAiA2IABMbLxyjR+4T1mu9CFCDhQ2tuda38KwOE1HaTJddZO0Flax7mNCq7dPYS +zuht56vkPE4/RAiLzRZxy7+SmfSk1zxQVFKQhYN4lGdnoxwJGT11NIXe7WB9xwy0 +QVK5buXuQqOCAQ0wggEJMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFHOREKv/ +VbNafAkl1bK6CKBrqx9tMA4GA1UdDwEB/wQEAwIBBjCBxgYDVR0fBIG+MIG7MD6g +PKA6hjhodHRwOi8vY3JsLmQtdHJ1c3QubmV0L2NybC9kLXRydXN0X2JyX3Jvb3Rf +Y2FfMV8yMDIwLmNybDB5oHegdYZzbGRhcDovL2RpcmVjdG9yeS5kLXRydXN0Lm5l +dC9DTj1ELVRSVVNUJTIwQlIlMjBSb290JTIwQ0ElMjAxJTIwMjAyMCxPPUQtVHJ1 +c3QlMjBHbWJILEM9REU/Y2VydGlmaWNhdGVyZXZvY2F0aW9ubGlzdDAKBggqhkjO +PQQDAwNpADBmAjEAlJAtE/rhY/hhY+ithXhUkZy4kzg+GkHaQBZTQgjKL47xPoFW +wKrY7RjEsK70PvomAjEA8yjixtsrmfu3Ubgko6SUeho/5jbiA1czijDLgsfWFBHV +dWNbFJWcHwHP2NVypw87 +-----END CERTIFICATE----- + +# Issuer: CN=D-TRUST EV Root CA 1 2020 O=D-Trust GmbH +# Subject: CN=D-TRUST EV Root CA 1 2020 O=D-Trust GmbH +# Label: "D-TRUST EV Root CA 1 2020" +# Serial: 126288379621884218666039612629459926992 +# MD5 Fingerprint: 8c:2d:9d:70:9f:48:99:11:06:11:fb:e9:cb:30:c0:6e +# SHA1 Fingerprint: 61:db:8c:21:59:69:03:90:d8:7c:9c:12:86:54:cf:9d:3d:f4:dd:07 +# SHA256 Fingerprint: 08:17:0d:1a:a3:64:53:90:1a:2f:95:92:45:e3:47:db:0c:8d:37:ab:aa:bc:56:b8:1a:a1:00:dc:95:89:70:db +-----BEGIN CERTIFICATE----- +MIIC2zCCAmCgAwIBAgIQXwJB13qHfEwDo6yWjfv/0DAKBggqhkjOPQQDAzBIMQsw +CQYDVQQGEwJERTEVMBMGA1UEChMMRC1UcnVzdCBHbWJIMSIwIAYDVQQDExlELVRS +VVNUIEVWIFJvb3QgQ0EgMSAyMDIwMB4XDTIwMDIxMTEwMDAwMFoXDTM1MDIxMTA5 +NTk1OVowSDELMAkGA1UEBhMCREUxFTATBgNVBAoTDEQtVHJ1c3QgR21iSDEiMCAG +A1UEAxMZRC1UUlVTVCBFViBSb290IENBIDEgMjAyMDB2MBAGByqGSM49AgEGBSuB +BAAiA2IABPEL3YZDIBnfl4XoIkqbz52Yv7QFJsnL46bSj8WeeHsxiamJrSc8ZRCC +/N/DnU7wMyPE0jL1HLDfMxddxfCxivnvubcUyilKwg+pf3VlSSowZ/Rk99Yad9rD +wpdhQntJraOCAQ0wggEJMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFH8QARY3 +OqQo5FD4pPfsazK2/umLMA4GA1UdDwEB/wQEAwIBBjCBxgYDVR0fBIG+MIG7MD6g +PKA6hjhodHRwOi8vY3JsLmQtdHJ1c3QubmV0L2NybC9kLXRydXN0X2V2X3Jvb3Rf +Y2FfMV8yMDIwLmNybDB5oHegdYZzbGRhcDovL2RpcmVjdG9yeS5kLXRydXN0Lm5l +dC9DTj1ELVRSVVNUJTIwRVYlMjBSb290JTIwQ0ElMjAxJTIwMjAyMCxPPUQtVHJ1 +c3QlMjBHbWJILEM9REU/Y2VydGlmaWNhdGVyZXZvY2F0aW9ubGlzdDAKBggqhkjO +PQQDAwNpADBmAjEAyjzGKnXCXnViOTYAYFqLwZOZzNnbQTs7h5kXO9XMT8oi96CA 
+y/m0sRtW9XLS/BnRAjEAkfcwkz8QRitxpNA7RJvAKQIFskF3UfN5Wp6OFKBOQtJb +gfM0agPnIjhQW+0ZT0MW +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert TLS ECC P384 Root G5 O=DigiCert, Inc. +# Subject: CN=DigiCert TLS ECC P384 Root G5 O=DigiCert, Inc. +# Label: "DigiCert TLS ECC P384 Root G5" +# Serial: 13129116028163249804115411775095713523 +# MD5 Fingerprint: d3:71:04:6a:43:1c:db:a6:59:e1:a8:a3:aa:c5:71:ed +# SHA1 Fingerprint: 17:f3:de:5e:9f:0f:19:e9:8e:f6:1f:32:26:6e:20:c4:07:ae:30:ee +# SHA256 Fingerprint: 01:8e:13:f0:77:25:32:cf:80:9b:d1:b1:72:81:86:72:83:fc:48:c6:e1:3b:e9:c6:98:12:85:4a:49:0c:1b:05 +-----BEGIN CERTIFICATE----- +MIICGTCCAZ+gAwIBAgIQCeCTZaz32ci5PhwLBCou8zAKBggqhkjOPQQDAzBOMQsw +CQYDVQQGEwJVUzEXMBUGA1UEChMORGlnaUNlcnQsIEluYy4xJjAkBgNVBAMTHURp +Z2lDZXJ0IFRMUyBFQ0MgUDM4NCBSb290IEc1MB4XDTIxMDExNTAwMDAwMFoXDTQ2 +MDExNDIzNTk1OVowTjELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDkRpZ2lDZXJ0LCBJ +bmMuMSYwJAYDVQQDEx1EaWdpQ2VydCBUTFMgRUNDIFAzODQgUm9vdCBHNTB2MBAG +ByqGSM49AgEGBSuBBAAiA2IABMFEoc8Rl1Ca3iOCNQfN0MsYndLxf3c1TzvdlHJS +7cI7+Oz6e2tYIOyZrsn8aLN1udsJ7MgT9U7GCh1mMEy7H0cKPGEQQil8pQgO4CLp +0zVozptjn4S1mU1YoI71VOeVyaNCMEAwHQYDVR0OBBYEFMFRRVBZqz7nLFr6ICIS +B4CIfBFqMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MAoGCCqGSM49 +BAMDA2gAMGUCMQCJao1H5+z8blUD2WdsJk6Dxv3J+ysTvLd6jLRl0mlpYxNjOyZQ +LgGheQaRnUi/wr4CMEfDFXuxoJGZSZOoPHzoRgaLLPIxAJSdYsiJvRmEFOml+wG4 +DXZDjC5Ty3zfDBeWUA== +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert TLS RSA4096 Root G5 O=DigiCert, Inc. +# Subject: CN=DigiCert TLS RSA4096 Root G5 O=DigiCert, Inc. +# Label: "DigiCert TLS RSA4096 Root G5" +# Serial: 11930366277458970227240571539258396554 +# MD5 Fingerprint: ac:fe:f7:34:96:a9:f2:b3:b4:12:4b:e4:27:41:6f:e1 +# SHA1 Fingerprint: a7:88:49:dc:5d:7c:75:8c:8c:de:39:98:56:b3:aa:d0:b2:a5:71:35 +# SHA256 Fingerprint: 37:1a:00:dc:05:33:b3:72:1a:7e:eb:40:e8:41:9e:70:79:9d:2b:0a:0f:2c:1d:80:69:31:65:f7:ce:c4:ad:75 +-----BEGIN CERTIFICATE----- +MIIFZjCCA06gAwIBAgIQCPm0eKj6ftpqMzeJ3nzPijANBgkqhkiG9w0BAQwFADBN +MQswCQYDVQQGEwJVUzEXMBUGA1UEChMORGlnaUNlcnQsIEluYy4xJTAjBgNVBAMT +HERpZ2lDZXJ0IFRMUyBSU0E0MDk2IFJvb3QgRzUwHhcNMjEwMTE1MDAwMDAwWhcN +NDYwMTE0MjM1OTU5WjBNMQswCQYDVQQGEwJVUzEXMBUGA1UEChMORGlnaUNlcnQs +IEluYy4xJTAjBgNVBAMTHERpZ2lDZXJ0IFRMUyBSU0E0MDk2IFJvb3QgRzUwggIi +MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCz0PTJeRGd/fxmgefM1eS87IE+ +ajWOLrfn3q/5B03PMJ3qCQuZvWxX2hhKuHisOjmopkisLnLlvevxGs3npAOpPxG0 +2C+JFvuUAT27L/gTBaF4HI4o4EXgg/RZG5Wzrn4DReW+wkL+7vI8toUTmDKdFqgp +wgscONyfMXdcvyej/Cestyu9dJsXLfKB2l2w4SMXPohKEiPQ6s+d3gMXsUJKoBZM +pG2T6T867jp8nVid9E6P/DsjyG244gXazOvswzH016cpVIDPRFtMbzCe88zdH5RD +nU1/cHAN1DrRN/BsnZvAFJNY781BOHW8EwOVfH/jXOnVDdXifBBiqmvwPXbzP6Po +sMH976pXTayGpxi0KcEsDr9kvimM2AItzVwv8n/vFfQMFawKsPHTDU9qTXeXAaDx +Zre3zu/O7Oyldcqs4+Fj97ihBMi8ez9dLRYiVu1ISf6nL3kwJZu6ay0/nTvEF+cd +Lvvyz6b84xQslpghjLSR6Rlgg/IwKwZzUNWYOwbpx4oMYIwo+FKbbuH2TbsGJJvX +KyY//SovcfXWJL5/MZ4PbeiPT02jP/816t9JXkGPhvnxd3lLG7SjXi/7RgLQZhNe +XoVPzthwiHvOAbWWl9fNff2C+MIkwcoBOU+NosEUQB+cZtUMCUbW8tDRSHZWOkPL +tgoRObqME2wGtZ7P6wIDAQABo0IwQDAdBgNVHQ4EFgQUUTMc7TZArxfTJc1paPKv +TiM+s0EwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcN +AQEMBQADggIBAGCmr1tfV9qJ20tQqcQjNSH/0GEwhJG3PxDPJY7Jv0Y02cEhJhxw +GXIeo8mH/qlDZJY6yFMECrZBu8RHANmfGBg7sg7zNOok992vIGCukihfNudd5N7H +PNtQOa27PShNlnx2xlv0wdsUpasZYgcYQF+Xkdycx6u1UQ3maVNVzDl92sURVXLF +O4uJ+DQtpBflF+aZfTCIITfNMBc9uPK8qHWgQ9w+iUuQrm0D4ByjoJYJu32jtyoQ +REtGBzRj7TG5BO6jm5qu5jF49OokYTurWGT/u4cnYiWB39yhL/btp/96j1EuMPik +AdKFOV8BmZZvWltwGUb+hmA+rYAQCd05JS9Yf7vSdPD3Rh9GOUrYU9DzLjtxpdRv 
+/PNn5AeP3SYZ4Y1b+qOTEZvpyDrDVWiakuFSdjjo4bq9+0/V77PnSIMx8IIh47a+ +p6tv75/fTM8BuGJqIz3nCU2AG3swpMPdB380vqQmsvZB6Akd4yCYqjdP//fx4ilw +MUc/dNAUFvohigLVigmUdy7yWSiLfFCSCmZ4OIN1xLVaqBHG5cGdZlXPU8Sv13WF +qUITVuwhd4GTWgzqltlJyqEI8pc7bZsEGCREjnwB8twl2F6GmrE52/WRMmrRpnCK +ovfepEWFJqgejF0pW8hL2JpqA15w8oVPbEtoL8pU9ozaMv7Da4M/OMZ+ +-----END CERTIFICATE----- + +# Issuer: CN=Certainly Root R1 O=Certainly +# Subject: CN=Certainly Root R1 O=Certainly +# Label: "Certainly Root R1" +# Serial: 188833316161142517227353805653483829216 +# MD5 Fingerprint: 07:70:d4:3e:82:87:a0:fa:33:36:13:f4:fa:33:e7:12 +# SHA1 Fingerprint: a0:50:ee:0f:28:71:f4:27:b2:12:6d:6f:50:96:25:ba:cc:86:42:af +# SHA256 Fingerprint: 77:b8:2c:d8:64:4c:43:05:f7:ac:c5:cb:15:6b:45:67:50:04:03:3d:51:c6:0c:62:02:a8:e0:c3:34:67:d3:a0 +-----BEGIN CERTIFICATE----- +MIIFRzCCAy+gAwIBAgIRAI4P+UuQcWhlM1T01EQ5t+AwDQYJKoZIhvcNAQELBQAw +PTELMAkGA1UEBhMCVVMxEjAQBgNVBAoTCUNlcnRhaW5seTEaMBgGA1UEAxMRQ2Vy +dGFpbmx5IFJvb3QgUjEwHhcNMjEwNDAxMDAwMDAwWhcNNDYwNDAxMDAwMDAwWjA9 +MQswCQYDVQQGEwJVUzESMBAGA1UEChMJQ2VydGFpbmx5MRowGAYDVQQDExFDZXJ0 +YWlubHkgUm9vdCBSMTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANA2 +1B/q3avk0bbm+yLA3RMNansiExyXPGhjZjKcA7WNpIGD2ngwEc/csiu+kr+O5MQT +vqRoTNoCaBZ0vrLdBORrKt03H2As2/X3oXyVtwxwhi7xOu9S98zTm/mLvg7fMbed +aFySpvXl8wo0tf97ouSHocavFwDvA5HtqRxOcT3Si2yJ9HiG5mpJoM610rCrm/b0 +1C7jcvk2xusVtyWMOvwlDbMicyF0yEqWYZL1LwsYpfSt4u5BvQF5+paMjRcCMLT5 +r3gajLQ2EBAHBXDQ9DGQilHFhiZ5shGIXsXwClTNSaa/ApzSRKft43jvRl5tcdF5 +cBxGX1HpyTfcX35pe0HfNEXgO4T0oYoKNp43zGJS4YkNKPl6I7ENPT2a/Z2B7yyQ +wHtETrtJ4A5KVpK8y7XdeReJkd5hiXSSqOMyhb5OhaRLWcsrxXiOcVTQAjeZjOVJ +6uBUcqQRBi8LjMFbvrWhsFNunLhgkR9Za/kt9JQKl7XsxXYDVBtlUrpMklZRNaBA +2CnbrlJ2Oy0wQJuK0EJWtLeIAaSHO1OWzaMWj/Nmqhexx2DgwUMFDO6bW2BvBlyH +Wyf5QBGenDPBt+U1VwV/J84XIIwc/PH72jEpSe31C4SnT8H2TsIonPru4K8H+zMR +eiFPCyEQtkA6qyI6BJyLm4SGcprSp6XEtHWRqSsjAgMBAAGjQjBAMA4GA1UdDwEB +/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTgqj8ljZ9EXME66C6u +d0yEPmcM9DANBgkqhkiG9w0BAQsFAAOCAgEAuVevuBLaV4OPaAszHQNTVfSVcOQr +PbA56/qJYv331hgELyE03fFo8NWWWt7CgKPBjcZq91l3rhVkz1t5BXdm6ozTaw3d +8VkswTOlMIAVRQdFGjEitpIAq5lNOo93r6kiyi9jyhXWx8bwPWz8HA2YEGGeEaIi +1wrykXprOQ4vMMM2SZ/g6Q8CRFA3lFV96p/2O7qUpUzpvD5RtOjKkjZUbVwlKNrd +rRT90+7iIgXr0PK3aBLXWopBGsaSpVo7Y0VPv+E6dyIvXL9G+VoDhRNCX8reU9di +taY1BMJH/5n9hN9czulegChB8n3nHpDYT3Y+gjwN/KUD+nsa2UUeYNrEjvn8K8l7 +lcUq/6qJ34IxD3L/DCfXCh5WAFAeDJDBlrXYFIW7pw0WwfgHJBu6haEaBQmAupVj +yTrsJZ9/nbqkRxWbRHDxakvWOF5D8xh+UG7pWijmZeZ3Gzr9Hb4DJqPb1OG7fpYn +Kx3upPvaJVQTA945xsMfTZDsjxtK0hzthZU4UHlG1sGQUDGpXJpuHfUzVounmdLy +yCwzk5Iwx06MZTMQZBf9JBeW0Y3COmor6xOLRPIh80oat3df1+2IpHLlOR+Vnb5n +wXARPbv0+Em34yaXOp/SX3z7wJl8OSngex2/DaeP0ik0biQVy96QXr8axGbqwua6 +OV+KmalBWQewLK8= +-----END CERTIFICATE----- + +# Issuer: CN=Certainly Root E1 O=Certainly +# Subject: CN=Certainly Root E1 O=Certainly +# Label: "Certainly Root E1" +# Serial: 8168531406727139161245376702891150584 +# MD5 Fingerprint: 0a:9e:ca:cd:3e:52:50:c6:36:f3:4b:a3:ed:a7:53:e9 +# SHA1 Fingerprint: f9:e1:6d:dc:01:89:cf:d5:82:45:63:3e:c5:37:7d:c2:eb:93:6f:2b +# SHA256 Fingerprint: b4:58:5f:22:e4:ac:75:6a:4e:86:12:a1:36:1c:5d:9d:03:1a:93:fd:84:fe:bb:77:8f:a3:06:8b:0f:c4:2d:c2 +-----BEGIN CERTIFICATE----- +MIIB9zCCAX2gAwIBAgIQBiUzsUcDMydc+Y2aub/M+DAKBggqhkjOPQQDAzA9MQsw +CQYDVQQGEwJVUzESMBAGA1UEChMJQ2VydGFpbmx5MRowGAYDVQQDExFDZXJ0YWlu +bHkgUm9vdCBFMTAeFw0yMTA0MDEwMDAwMDBaFw00NjA0MDEwMDAwMDBaMD0xCzAJ +BgNVBAYTAlVTMRIwEAYDVQQKEwlDZXJ0YWlubHkxGjAYBgNVBAMTEUNlcnRhaW5s +eSBSb290IEUxMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAE3m/4fxzf7flHh4axpMCK 
++IKXgOqPyEpeKn2IaKcBYhSRJHpcnqMXfYqGITQYUBsQ3tA3SybHGWCA6TS9YBk2 +QNYphwk8kXr2vBMj3VlOBF7PyAIcGFPBMdjaIOlEjeR2o0IwQDAOBgNVHQ8BAf8E +BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU8ygYy2R17ikq6+2uI1g4 +hevIIgcwCgYIKoZIzj0EAwMDaAAwZQIxALGOWiDDshliTd6wT99u0nCK8Z9+aozm +ut6Dacpps6kFtZaSF4fC0urQe87YQVt8rgIwRt7qy12a7DLCZRawTDBcMPPaTnOG +BtjOiQRINzf43TNRnXCve1XYAS59BWQOhriR +-----END CERTIFICATE----- + +# Issuer: CN=E-Tugra Global Root CA RSA v3 O=E-Tugra EBG A.S. OU=E-Tugra Trust Center +# Subject: CN=E-Tugra Global Root CA RSA v3 O=E-Tugra EBG A.S. OU=E-Tugra Trust Center +# Label: "E-Tugra Global Root CA RSA v3" +# Serial: 75951268308633135324246244059508261641472512052 +# MD5 Fingerprint: 22:be:10:f6:c2:f8:03:88:73:5f:33:29:47:28:47:a4 +# SHA1 Fingerprint: e9:a8:5d:22:14:52:1c:5b:aa:0a:b4:be:24:6a:23:8a:c9:ba:e2:a9 +# SHA256 Fingerprint: ef:66:b0:b1:0a:3c:db:9f:2e:36:48:c7:6b:d2:af:18:ea:d2:bf:e6:f1:17:65:5e:28:c4:06:0d:a1:a3:f4:c2 +-----BEGIN CERTIFICATE----- +MIIF8zCCA9ugAwIBAgIUDU3FzRYilZYIfrgLfxUGNPt5EDQwDQYJKoZIhvcNAQEL +BQAwgYAxCzAJBgNVBAYTAlRSMQ8wDQYDVQQHEwZBbmthcmExGTAXBgNVBAoTEEUt +VHVncmEgRUJHIEEuUy4xHTAbBgNVBAsTFEUtVHVncmEgVHJ1c3QgQ2VudGVyMSYw +JAYDVQQDEx1FLVR1Z3JhIEdsb2JhbCBSb290IENBIFJTQSB2MzAeFw0yMDAzMTgw +OTA3MTdaFw00NTAzMTIwOTA3MTdaMIGAMQswCQYDVQQGEwJUUjEPMA0GA1UEBxMG +QW5rYXJhMRkwFwYDVQQKExBFLVR1Z3JhIEVCRyBBLlMuMR0wGwYDVQQLExRFLVR1 +Z3JhIFRydXN0IENlbnRlcjEmMCQGA1UEAxMdRS1UdWdyYSBHbG9iYWwgUm9vdCBD +QSBSU0EgdjMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCiZvCJt3J7 +7gnJY9LTQ91ew6aEOErxjYG7FL1H6EAX8z3DeEVypi6Q3po61CBxyryfHUuXCscx +uj7X/iWpKo429NEvx7epXTPcMHD4QGxLsqYxYdE0PD0xesevxKenhOGXpOhL9hd8 +7jwH7eKKV9y2+/hDJVDqJ4GohryPUkqWOmAalrv9c/SF/YP9f4RtNGx/ardLAQO/ +rWm31zLZ9Vdq6YaCPqVmMbMWPcLzJmAy01IesGykNz709a/r4d+ABs8qQedmCeFL +l+d3vSFtKbZnwy1+7dZ5ZdHPOrbRsV5WYVB6Ws5OUDGAA5hH5+QYfERaxqSzO8bG +wzrwbMOLyKSRBfP12baqBqG3q+Sx6iEUXIOk/P+2UNOMEiaZdnDpwA+mdPy70Bt4 +znKS4iicvObpCdg604nmvi533wEKb5b25Y08TVJ2Glbhc34XrD2tbKNSEhhw5oBO +M/J+JjKsBY04pOZ2PJ8QaQ5tndLBeSBrW88zjdGUdjXnXVXHt6woq0bM5zshtQoK +5EpZ3IE1S0SVEgpnpaH/WwAH0sDM+T/8nzPyAPiMbIedBi3x7+PmBvrFZhNb/FAH +nnGGstpvdDDPk1Po3CLW3iAfYY2jLqN4MpBs3KwytQXk9TwzDdbgh3cXTJ2w2Amo +DVf3RIXwyAS+XF1a4xeOVGNpf0l0ZAWMowIDAQABo2MwYTAPBgNVHRMBAf8EBTAD +AQH/MB8GA1UdIwQYMBaAFLK0ruYt9ybVqnUtdkvAG1Mh0EjvMB0GA1UdDgQWBBSy +tK7mLfcm1ap1LXZLwBtTIdBI7zAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQEL +BQADggIBAImocn+M684uGMQQgC0QDP/7FM0E4BQ8Tpr7nym/Ip5XuYJzEmMmtcyQ +6dIqKe6cLcwsmb5FJ+Sxce3kOJUxQfJ9emN438o2Fi+CiJ+8EUdPdk3ILY7r3y18 +Tjvarvbj2l0Upq7ohUSdBm6O++96SmotKygY/r+QLHUWnw/qln0F7psTpURs+APQ +3SPh/QMSEgj0GDSz4DcLdxEBSL9htLX4GdnLTeqjjO/98Aa1bZL0SmFQhO3sSdPk +vmjmLuMxC1QLGpLWgti2omU8ZgT5Vdps+9u1FGZNlIM7zR6mK7L+d0CGq+ffCsn9 +9t2HVhjYsCxVYJb6CH5SkPVLpi6HfMsg2wY+oF0Dd32iPBMbKaITVaA9FCKvb7jQ +mhty3QUBjYZgv6Rn7rWlDdF/5horYmbDB7rnoEgcOMPpRfunf/ztAmgayncSd6YA +VSgU7NbHEqIbZULpkejLPoeJVF3Zr52XnGnnCv8PWniLYypMfUeUP95L6VPQMPHF +9p5J3zugkaOj/s1YzOrfr28oO6Bpm4/srK4rVJ2bBLFHIK+WEj5jlB0E5y67hscM +moi/dkfv97ALl2bSRM9gUgfh1SxKOidhd8rXj+eHDjD/DLsE4mHDosiXYY60MGo8 +bcIHX0pzLz/5FooBZu+6kcpSV3uu1OYP3Qt6f4ueJiDPO++BcYNZ +-----END CERTIFICATE----- + +# Issuer: CN=E-Tugra Global Root CA ECC v3 O=E-Tugra EBG A.S. OU=E-Tugra Trust Center +# Subject: CN=E-Tugra Global Root CA ECC v3 O=E-Tugra EBG A.S. 
OU=E-Tugra Trust Center +# Label: "E-Tugra Global Root CA ECC v3" +# Serial: 218504919822255052842371958738296604628416471745 +# MD5 Fingerprint: 46:bc:81:bb:f1:b5:1e:f7:4b:96:bc:14:e2:e7:27:64 +# SHA1 Fingerprint: 8a:2f:af:57:53:b1:b0:e6:a1:04:ec:5b:6a:69:71:6d:f6:1c:e2:84 +# SHA256 Fingerprint: 87:3f:46:85:fa:7f:56:36:25:25:2e:6d:36:bc:d7:f1:6f:c2:49:51:f2:64:e4:7e:1b:95:4f:49:08:cd:ca:13 +-----BEGIN CERTIFICATE----- +MIICpTCCAiqgAwIBAgIUJkYZdzHhT28oNt45UYbm1JeIIsEwCgYIKoZIzj0EAwMw +gYAxCzAJBgNVBAYTAlRSMQ8wDQYDVQQHEwZBbmthcmExGTAXBgNVBAoTEEUtVHVn +cmEgRUJHIEEuUy4xHTAbBgNVBAsTFEUtVHVncmEgVHJ1c3QgQ2VudGVyMSYwJAYD +VQQDEx1FLVR1Z3JhIEdsb2JhbCBSb290IENBIEVDQyB2MzAeFw0yMDAzMTgwOTQ2 +NThaFw00NTAzMTIwOTQ2NThaMIGAMQswCQYDVQQGEwJUUjEPMA0GA1UEBxMGQW5r +YXJhMRkwFwYDVQQKExBFLVR1Z3JhIEVCRyBBLlMuMR0wGwYDVQQLExRFLVR1Z3Jh +IFRydXN0IENlbnRlcjEmMCQGA1UEAxMdRS1UdWdyYSBHbG9iYWwgUm9vdCBDQSBF +Q0MgdjMwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAASOmCm/xxAeJ9urA8woLNheSBkQ +KczLWYHMjLiSF4mDKpL2w6QdTGLVn9agRtwcvHbB40fQWxPa56WzZkjnIZpKT4YK +fWzqTTKACrJ6CZtpS5iB4i7sAnCWH/31Rs7K3IKjYzBhMA8GA1UdEwEB/wQFMAMB +Af8wHwYDVR0jBBgwFoAU/4Ixcj75xGZsrTie0bBRiKWQzPUwHQYDVR0OBBYEFP+C +MXI++cRmbK04ntGwUYilkMz1MA4GA1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNp +ADBmAjEA5gVYaWHlLcoNy/EZCL3W/VGSGn5jVASQkZo1kTmZ+gepZpO6yGjUij/6 +7W4WAie3AjEA3VoXK3YdZUKWpqxdinlW2Iob35reX8dQj7FbcQwm32pAAOwzkSFx +vmjkI6TZraE3 +-----END CERTIFICATE----- + +# Issuer: CN=Security Communication RootCA3 O=SECOM Trust Systems CO.,LTD. +# Subject: CN=Security Communication RootCA3 O=SECOM Trust Systems CO.,LTD. +# Label: "Security Communication RootCA3" +# Serial: 16247922307909811815 +# MD5 Fingerprint: 1c:9a:16:ff:9e:5c:e0:4d:8a:14:01:f4:35:5d:29:26 +# SHA1 Fingerprint: c3:03:c8:22:74:92:e5:61:a2:9c:5f:79:91:2b:1e:44:13:91:30:3a +# SHA256 Fingerprint: 24:a5:5c:2a:b0:51:44:2d:06:17:76:65:41:23:9a:4a:d0:32:d7:c5:51:75:aa:34:ff:de:2f:bc:4f:5c:52:94 +-----BEGIN CERTIFICATE----- +MIIFfzCCA2egAwIBAgIJAOF8N0D9G/5nMA0GCSqGSIb3DQEBDAUAMF0xCzAJBgNV +BAYTAkpQMSUwIwYDVQQKExxTRUNPTSBUcnVzdCBTeXN0ZW1zIENPLixMVEQuMScw +JQYDVQQDEx5TZWN1cml0eSBDb21tdW5pY2F0aW9uIFJvb3RDQTMwHhcNMTYwNjE2 +MDYxNzE2WhcNMzgwMTE4MDYxNzE2WjBdMQswCQYDVQQGEwJKUDElMCMGA1UEChMc +U0VDT00gVHJ1c3QgU3lzdGVtcyBDTy4sTFRELjEnMCUGA1UEAxMeU2VjdXJpdHkg +Q29tbXVuaWNhdGlvbiBSb290Q0EzMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC +CgKCAgEA48lySfcw3gl8qUCBWNO0Ot26YQ+TUG5pPDXC7ltzkBtnTCHsXzW7OT4r +CmDvu20rhvtxosis5FaU+cmvsXLUIKx00rgVrVH+hXShuRD+BYD5UpOzQD11EKzA +lrenfna84xtSGc4RHwsENPXY9Wk8d/Nk9A2qhd7gCVAEF5aEt8iKvE1y/By7z/MG +TfmfZPd+pmaGNXHIEYBMwXFAWB6+oHP2/D5Q4eAvJj1+XCO1eXDe+uDRpdYMQXF7 +9+qMHIjH7Iv10S9VlkZ8WjtYO/u62C21Jdp6Ts9EriGmnpjKIG58u4iFW/vAEGK7 +8vknR+/RiTlDxN/e4UG/VHMgly1s2vPUB6PmudhvrvyMGS7TZ2crldtYXLVqAvO4 +g160a75BflcJdURQVc1aEWEhCmHCqYj9E7wtiS/NYeCVvsq1e+F7NGcLH7YMx3we +GVPKp7FKFSBWFHA9K4IsD50VHUeAR/94mQ4xr28+j+2GaR57GIgUssL8gjMunEst ++3A7caoreyYn8xrC3PsXuKHqy6C0rtOUfnrQq8PsOC0RLoi/1D+tEjtCrI8Cbn3M +0V9hvqG8OmpI6iZVIhZdXw3/JzOfGAN0iltSIEdrRU0id4xVJ/CvHozJgyJUt5rQ +T9nO/NkuHJYosQLTA70lUhw0Zk8jq/R3gpYd0VcwCBEF/VfR2ccCAwEAAaNCMEAw +HQYDVR0OBBYEFGQUfPxYchamCik0FW8qy7z8r6irMA4GA1UdDwEB/wQEAwIBBjAP +BgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBDAUAA4ICAQDcAiMI4u8hOscNtybS +YpOnpSNyByCCYN8Y11StaSWSntkUz5m5UoHPrmyKO1o5yGwBQ8IibQLwYs1OY0PA +FNr0Y/Dq9HHuTofjcan0yVflLl8cebsjqodEV+m9NU1Bu0soo5iyG9kLFwfl9+qd +9XbXv8S2gVj/yP9kaWJ5rW4OH3/uHWnlt3Jxs/6lATWUVCvAUm2PVcTJ0rjLyjQI +UYWg9by0F1jqClx6vWPGOi//lkkZhOpn2ASxYfQAW0q3nHE3GYV5v4GwxxMOdnE+ +OoAGrgYWp421wsTL/0ClXI2lyTrtcoHKXJg80jQDdwj98ClZXSEIx2C/pHF7uNke 
+gr4Jr2VvKKu/S7XuPghHJ6APbw+LP6yVGPO5DtxnVW5inkYO0QR4ynKudtml+LLf +iAlhi+8kTtFZP1rUPcmTPCtk9YENFpb3ksP+MW/oKjJ0DvRMmEoYDjBU1cXrvMUV +nuiZIesnKwkK2/HmcBhWuwzkvvnoEKQTkrgc4NtnHVMDpCKn3F2SEDzq//wbEBrD +2NCcnWXL0CsnMQMeNuE9dnUM/0Umud1RvCPHX9jYhxBAEg09ODfnRDwYwFMJZI// +1ZqmfHAuc1Uh6N//g7kdPjIe1qZ9LPFm6Vwdp6POXiUyK+OVrCoHzrQoeIY8Laad +TdJ0MN1kURXbg4NR16/9M51NZg== +-----END CERTIFICATE----- + +# Issuer: CN=Security Communication ECC RootCA1 O=SECOM Trust Systems CO.,LTD. +# Subject: CN=Security Communication ECC RootCA1 O=SECOM Trust Systems CO.,LTD. +# Label: "Security Communication ECC RootCA1" +# Serial: 15446673492073852651 +# MD5 Fingerprint: 7e:43:b0:92:68:ec:05:43:4c:98:ab:5d:35:2e:7e:86 +# SHA1 Fingerprint: b8:0e:26:a9:bf:d2:b2:3b:c0:ef:46:c9:ba:c7:bb:f6:1d:0d:41:41 +# SHA256 Fingerprint: e7:4f:bd:a5:5b:d5:64:c4:73:a3:6b:44:1a:a7:99:c8:a6:8e:07:74:40:e8:28:8b:9f:a1:e5:0e:4b:ba:ca:11 +-----BEGIN CERTIFICATE----- +MIICODCCAb6gAwIBAgIJANZdm7N4gS7rMAoGCCqGSM49BAMDMGExCzAJBgNVBAYT +AkpQMSUwIwYDVQQKExxTRUNPTSBUcnVzdCBTeXN0ZW1zIENPLixMVEQuMSswKQYD +VQQDEyJTZWN1cml0eSBDb21tdW5pY2F0aW9uIEVDQyBSb290Q0ExMB4XDTE2MDYx +NjA1MTUyOFoXDTM4MDExODA1MTUyOFowYTELMAkGA1UEBhMCSlAxJTAjBgNVBAoT +HFNFQ09NIFRydXN0IFN5c3RlbXMgQ08uLExURC4xKzApBgNVBAMTIlNlY3VyaXR5 +IENvbW11bmljYXRpb24gRUNDIFJvb3RDQTEwdjAQBgcqhkjOPQIBBgUrgQQAIgNi +AASkpW9gAwPDvTH00xecK4R1rOX9PVdu12O/5gSJko6BnOPpR27KkBLIE+Cnnfdl +dB9sELLo5OnvbYUymUSxXv3MdhDYW72ixvnWQuRXdtyQwjWpS4g8EkdtXP9JTxpK +ULGjQjBAMB0GA1UdDgQWBBSGHOf+LaVKiwj+KBH6vqNm+GBZLzAOBgNVHQ8BAf8E +BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjAVXUI9/Lbu +9zuxNuie9sRGKEkz0FhDKmMpzE2xtHqiuQ04pV1IKv3LsnNdo4gIxwwCMQDAqy0O +be0YottT6SXbVQjgUMzfRGEWgqtJsLKB7HOHeLRMsmIbEvoWTSVLY70eN9k= +-----END CERTIFICATE----- diff --git a/env/lib/python3.8/site-packages/certifi/core.py b/env/lib/python3.8/site-packages/certifi/core.py new file mode 100644 index 00000000..de028981 --- /dev/null +++ b/env/lib/python3.8/site-packages/certifi/core.py @@ -0,0 +1,108 @@ +""" +certifi.py +~~~~~~~~~~ + +This module returns the installation location of cacert.pem or its contents. +""" +import sys + + +if sys.version_info >= (3, 11): + + from importlib.resources import as_file, files + + _CACERT_CTX = None + _CACERT_PATH = None + + def where() -> str: + # This is slightly terrible, but we want to delay extracting the file + # in cases where we're inside of a zipimport situation until someone + # actually calls where(), but we don't want to re-extract the file + # on every call of where(), so we'll do it once then store it in a + # global variable. + global _CACERT_CTX + global _CACERT_PATH + if _CACERT_PATH is None: + # This is slightly janky, the importlib.resources API wants you to + # manage the cleanup of this file, so it doesn't actually return a + # path, it returns a context manager that will give you the path + # when you enter it and will do any cleanup when you leave it. In + # the common case of not needing a temporary file, it will just + # return the file system location and the __exit__() is a no-op. + # + # We also have to hold onto the actual context manager, because + # it will do the cleanup whenever it gets garbage collected, so + # we will also store that at the global level as well. 
+ _CACERT_CTX = as_file(files("certifi").joinpath("cacert.pem")) + _CACERT_PATH = str(_CACERT_CTX.__enter__()) + + return _CACERT_PATH + + def contents() -> str: + return files("certifi").joinpath("cacert.pem").read_text(encoding="ascii") + +elif sys.version_info >= (3, 7): + + from importlib.resources import path as get_path, read_text + + _CACERT_CTX = None + _CACERT_PATH = None + + def where() -> str: + # This is slightly terrible, but we want to delay extracting the + # file in cases where we're inside of a zipimport situation until + # someone actually calls where(), but we don't want to re-extract + # the file on every call of where(), so we'll do it once then store + # it in a global variable. + global _CACERT_CTX + global _CACERT_PATH + if _CACERT_PATH is None: + # This is slightly janky, the importlib.resources API wants you + # to manage the cleanup of this file, so it doesn't actually + # return a path, it returns a context manager that will give + # you the path when you enter it and will do any cleanup when + # you leave it. In the common case of not needing a temporary + # file, it will just return the file system location and the + # __exit__() is a no-op. + # + # We also have to hold onto the actual context manager, because + # it will do the cleanup whenever it gets garbage collected, so + # we will also store that at the global level as well. + _CACERT_CTX = get_path("certifi", "cacert.pem") + _CACERT_PATH = str(_CACERT_CTX.__enter__()) + + return _CACERT_PATH + + def contents() -> str: + return read_text("certifi", "cacert.pem", encoding="ascii") + +else: + import os + import types + from typing import Union + + Package = Union[types.ModuleType, str] + Resource = Union[str, "os.PathLike"] + + # This fallback will work for Python versions prior to 3.7 that lack the + # importlib.resources module but relies on the existing `where` function + # so won't address issues with environments like PyOxidizer that don't set + # __file__ on modules. + def read_text( + package: Package, + resource: Resource, + encoding: str = 'utf-8', + errors: str = 'strict' + ) -> str: + with open(where(), encoding=encoding) as data: + return data.read() + + # If we don't have importlib.resources, then we will just do the old logic + # of assuming we're on the filesystem and munge the path directly. 
+ def where() -> str: + f = os.path.dirname(__file__) + + return os.path.join(f, "cacert.pem") + + def contents() -> str: + return read_text("certifi", "cacert.pem", encoding="ascii") diff --git a/env/lib/python3.8/site-packages/certifi/py.typed b/env/lib/python3.8/site-packages/certifi/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/INSTALLER b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/LICENSE b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/LICENSE new file mode 100644 index 00000000..29225eee --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/LICENSE @@ -0,0 +1,26 @@ + +Except when otherwise stated (look for LICENSE files in directories or +information at the beginning of each file) all software and +documentation is licensed as follows: + + The MIT License + + Permission is hereby granted, free of charge, to any person + obtaining a copy of this software and associated documentation + files (the "Software"), to deal in the Software without + restriction, including without limitation the rights to use, + copy, modify, merge, publish, distribute, sublicense, and/or + sell copies of the Software, and to permit persons to whom the + Software is furnished to do so, subject to the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + diff --git a/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/METADATA b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/METADATA new file mode 100644 index 00000000..538e6791 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/METADATA @@ -0,0 +1,34 @@ +Metadata-Version: 2.1 +Name: cffi +Version: 1.15.1 +Summary: Foreign Function Interface for Python calling C code. +Home-page: http://cffi.readthedocs.org +Author: Armin Rigo, Maciej Fijalkowski +Author-email: python-cffi@googlegroups.com +License: MIT +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: License :: OSI Approved :: MIT License +License-File: LICENSE +Requires-Dist: pycparser + + +CFFI +==== + +Foreign Function Interface for Python calling C code. +Please see the `Documentation `_. 
+ +Contact +------- + +`Mailing list `_ diff --git a/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/RECORD b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/RECORD new file mode 100644 index 00000000..9ccedf76 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/RECORD @@ -0,0 +1,44 @@ +_cffi_backend.cpython-38-x86_64-linux-gnu.so,sha256=Vexo81KP_uf8fqK4h5zftvakKuYQ_nFkS0sSlsOXee8,994936 +cffi-1.15.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +cffi-1.15.1.dist-info/LICENSE,sha256=BLgPWwd7vtaICM_rreteNSPyqMmpZJXFh72W3x6sKjM,1294 +cffi-1.15.1.dist-info/METADATA,sha256=KP4G3WmavRgDGwD2b8Y_eDsM1YeV6ckcG6Alz3-D8VY,1144 +cffi-1.15.1.dist-info/RECORD,, +cffi-1.15.1.dist-info/WHEEL,sha256=-ijGDuALlPxm3HbhKntps0QzHsi-DPlXqgerYTTJkFE,148 +cffi-1.15.1.dist-info/entry_points.txt,sha256=y6jTxnyeuLnL-XJcDv8uML3n6wyYiGRg8MTp_QGJ9Ho,75 +cffi-1.15.1.dist-info/top_level.txt,sha256=rE7WR3rZfNKxWI9-jn6hsHCAl7MDkB-FmuQbxWjFehQ,19 +cffi/__init__.py,sha256=6xB_tafGvhhM5Xvj0Ova3oPC2SEhVlLTEObVLnazeiM,513 +cffi/__pycache__/__init__.cpython-38.pyc,, +cffi/__pycache__/api.cpython-38.pyc,, +cffi/__pycache__/backend_ctypes.cpython-38.pyc,, +cffi/__pycache__/cffi_opcode.cpython-38.pyc,, +cffi/__pycache__/commontypes.cpython-38.pyc,, +cffi/__pycache__/cparser.cpython-38.pyc,, +cffi/__pycache__/error.cpython-38.pyc,, +cffi/__pycache__/ffiplatform.cpython-38.pyc,, +cffi/__pycache__/lock.cpython-38.pyc,, +cffi/__pycache__/model.cpython-38.pyc,, +cffi/__pycache__/pkgconfig.cpython-38.pyc,, +cffi/__pycache__/recompiler.cpython-38.pyc,, +cffi/__pycache__/setuptools_ext.cpython-38.pyc,, +cffi/__pycache__/vengine_cpy.cpython-38.pyc,, +cffi/__pycache__/vengine_gen.cpython-38.pyc,, +cffi/__pycache__/verifier.cpython-38.pyc,, +cffi/_cffi_errors.h,sha256=zQXt7uR_m8gUW-fI2hJg0KoSkJFwXv8RGUkEDZ177dQ,3908 +cffi/_cffi_include.h,sha256=tKnA1rdSoPHp23FnDL1mDGwFo-Uj6fXfA6vA6kcoEUc,14800 +cffi/_embedding.h,sha256=9tnjF44QRobR8z0FGqAmAZY-wMSBOae1SUPqHccowqc,17680 +cffi/api.py,sha256=yxJalIePbr1mz_WxAHokSwyP5CVYde44m-nolHnbJNo,42064 +cffi/backend_ctypes.py,sha256=h5ZIzLc6BFVXnGyc9xPqZWUS7qGy7yFSDqXe68Sa8z4,42454 +cffi/cffi_opcode.py,sha256=v9RdD_ovA8rCtqsC95Ivki5V667rAOhGgs3fb2q9xpM,5724 +cffi/commontypes.py,sha256=QS4uxCDI7JhtTyjh1hlnCA-gynmaszWxJaRRLGkJa1A,2689 +cffi/cparser.py,sha256=rO_1pELRw1gI1DE1m4gi2ik5JMfpxouAACLXpRPlVEA,44231 +cffi/error.py,sha256=v6xTiS4U0kvDcy4h_BDRo5v39ZQuj-IMRYLv5ETddZs,877 +cffi/ffiplatform.py,sha256=HMXqR8ks2wtdsNxGaWpQ_PyqIvtiuos_vf1qKCy-cwg,4046 +cffi/lock.py,sha256=l9TTdwMIMpi6jDkJGnQgE9cvTIR7CAntIJr8EGHt3pY,747 +cffi/model.py,sha256=_GH_UF1Rn9vC4AvmgJm6qj7RUXXG3eqKPc8bPxxyBKE,21768 +cffi/parse_c_type.h,sha256=OdwQfwM9ktq6vlCB43exFQmxDBtj2MBNdK8LYl15tjw,5976 +cffi/pkgconfig.py,sha256=LP1w7vmWvmKwyqLaU1Z243FOWGNQMrgMUZrvgFuOlco,4374 +cffi/recompiler.py,sha256=YgVYTh2CrXIobo-vMk7_K9mwAXdd_LqB4-IbYABQ488,64598 +cffi/setuptools_ext.py,sha256=RUR17N5f8gpiQBBlXL34P9FtOu1mhHIaAf3WJlg5S4I,8931 +cffi/vengine_cpy.py,sha256=YglN8YS-UaHEv2k2cxgotNWE87dHX20-68EyKoiKUYA,43320 +cffi/vengine_gen.py,sha256=5dX7s1DU6pTBOMI6oTVn_8Bnmru_lj932B6b4v29Hlg,26684 +cffi/verifier.py,sha256=ESwuXWXtXrKEagCKveLRDjFzLNCyaKdqAgAlKREcyhY,11253 diff --git a/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/WHEEL b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/WHEEL new file mode 100644 index 00000000..3a48d348 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) 
+Root-Is-Purelib: false +Tag: cp38-cp38-manylinux_2_17_x86_64 +Tag: cp38-cp38-manylinux2014_x86_64 + diff --git a/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/entry_points.txt b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/entry_points.txt new file mode 100644 index 00000000..4b0274f2 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/entry_points.txt @@ -0,0 +1,2 @@ +[distutils.setup_keywords] +cffi_modules = cffi.setuptools_ext:cffi_modules diff --git a/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/top_level.txt b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/top_level.txt new file mode 100644 index 00000000..f6457795 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi-1.15.1.dist-info/top_level.txt @@ -0,0 +1,2 @@ +_cffi_backend +cffi diff --git a/env/lib/python3.8/site-packages/cffi/__init__.py b/env/lib/python3.8/site-packages/cffi/__init__.py new file mode 100644 index 00000000..90e2e655 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/__init__.py @@ -0,0 +1,14 @@ +__all__ = ['FFI', 'VerificationError', 'VerificationMissing', 'CDefError', + 'FFIError'] + +from .api import FFI +from .error import CDefError, FFIError, VerificationError, VerificationMissing +from .error import PkgConfigError + +__version__ = "1.15.1" +__version_info__ = (1, 15, 1) + +# The verifier module file names are based on the CRC32 of a string that +# contains the following version number. It may be older than __version__ +# if nothing is clearly incompatible. +__version_verifier_modules__ = "0.8.6" diff --git a/env/lib/python3.8/site-packages/cffi/_cffi_errors.h b/env/lib/python3.8/site-packages/cffi/_cffi_errors.h new file mode 100644 index 00000000..158e0590 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/_cffi_errors.h @@ -0,0 +1,149 @@ +#ifndef CFFI_MESSAGEBOX +# ifdef _MSC_VER +# define CFFI_MESSAGEBOX 1 +# else +# define CFFI_MESSAGEBOX 0 +# endif +#endif + + +#if CFFI_MESSAGEBOX +/* Windows only: logic to take the Python-CFFI embedding logic + initialization errors and display them in a background thread + with MessageBox. The idea is that if the whole program closes + as a result of this problem, then likely it is already a console + program and you can read the stderr output in the console too. + If it is not a console program, then it will likely show its own + dialog to complain, or generally not abruptly close, and for this + case the background thread should stay alive. 
+*/ +static void *volatile _cffi_bootstrap_text; + +static PyObject *_cffi_start_error_capture(void) +{ + PyObject *result = NULL; + PyObject *x, *m, *bi; + + if (InterlockedCompareExchangePointer(&_cffi_bootstrap_text, + (void *)1, NULL) != NULL) + return (PyObject *)1; + + m = PyImport_AddModule("_cffi_error_capture"); + if (m == NULL) + goto error; + + result = PyModule_GetDict(m); + if (result == NULL) + goto error; + +#if PY_MAJOR_VERSION >= 3 + bi = PyImport_ImportModule("builtins"); +#else + bi = PyImport_ImportModule("__builtin__"); +#endif + if (bi == NULL) + goto error; + PyDict_SetItemString(result, "__builtins__", bi); + Py_DECREF(bi); + + x = PyRun_String( + "import sys\n" + "class FileLike:\n" + " def write(self, x):\n" + " try:\n" + " of.write(x)\n" + " except: pass\n" + " self.buf += x\n" + " def flush(self):\n" + " pass\n" + "fl = FileLike()\n" + "fl.buf = ''\n" + "of = sys.stderr\n" + "sys.stderr = fl\n" + "def done():\n" + " sys.stderr = of\n" + " return fl.buf\n", /* make sure the returned value stays alive */ + Py_file_input, + result, result); + Py_XDECREF(x); + + error: + if (PyErr_Occurred()) + { + PyErr_WriteUnraisable(Py_None); + PyErr_Clear(); + } + return result; +} + +#pragma comment(lib, "user32.lib") + +static DWORD WINAPI _cffi_bootstrap_dialog(LPVOID ignored) +{ + Sleep(666); /* may be interrupted if the whole process is closing */ +#if PY_MAJOR_VERSION >= 3 + MessageBoxW(NULL, (wchar_t *)_cffi_bootstrap_text, + L"Python-CFFI error", + MB_OK | MB_ICONERROR); +#else + MessageBoxA(NULL, (char *)_cffi_bootstrap_text, + "Python-CFFI error", + MB_OK | MB_ICONERROR); +#endif + _cffi_bootstrap_text = NULL; + return 0; +} + +static void _cffi_stop_error_capture(PyObject *ecap) +{ + PyObject *s; + void *text; + + if (ecap == (PyObject *)1) + return; + + if (ecap == NULL) + goto error; + + s = PyRun_String("done()", Py_eval_input, ecap, ecap); + if (s == NULL) + goto error; + + /* Show a dialog box, but in a background thread, and + never show multiple dialog boxes at once. */ +#if PY_MAJOR_VERSION >= 3 + text = PyUnicode_AsWideCharString(s, NULL); +#else + text = PyString_AsString(s); +#endif + + _cffi_bootstrap_text = text; + + if (text != NULL) + { + HANDLE h; + h = CreateThread(NULL, 0, _cffi_bootstrap_dialog, + NULL, 0, NULL); + if (h != NULL) + CloseHandle(h); + } + /* decref the string, but it should stay alive as 'fl.buf' + in the small module above. It will really be freed only if + we later get another similar error. So it's a leak of at + most one copy of the small module. That's fine for this + situation which is usually a "fatal error" anyway. */ + Py_DECREF(s); + PyErr_Clear(); + return; + + error: + _cffi_bootstrap_text = NULL; + PyErr_Clear(); +} + +#else + +static PyObject *_cffi_start_error_capture(void) { return NULL; } +static void _cffi_stop_error_capture(PyObject *ecap) { } + +#endif diff --git a/env/lib/python3.8/site-packages/cffi/_cffi_include.h b/env/lib/python3.8/site-packages/cffi/_cffi_include.h new file mode 100644 index 00000000..e4c0a672 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/_cffi_include.h @@ -0,0 +1,385 @@ +#define _CFFI_ + +/* We try to define Py_LIMITED_API before including Python.h. + + Mess: we can only define it if Py_DEBUG, Py_TRACE_REFS and + Py_REF_DEBUG are not defined. This is a best-effort approximation: + we can learn about Py_DEBUG from pyconfig.h, but it is unclear if + the same works for the other two macros. Py_DEBUG implies them, + but not the other way around. 
+ + The implementation is messy (issue #350): on Windows, with _MSC_VER, + we have to define Py_LIMITED_API even before including pyconfig.h. + In that case, we guess what pyconfig.h will do to the macros above, + and check our guess after the #include. + + Note that on Windows, with CPython 3.x, you need >= 3.5 and virtualenv + version >= 16.0.0. With older versions of either, you don't get a + copy of PYTHON3.DLL in the virtualenv. We can't check the version of + CPython *before* we even include pyconfig.h. ffi.set_source() puts + a ``#define _CFFI_NO_LIMITED_API'' at the start of this file if it is + running on Windows < 3.5, as an attempt at fixing it, but that's + arguably wrong because it may not be the target version of Python. + Still better than nothing I guess. As another workaround, you can + remove the definition of Py_LIMITED_API here. + + See also 'py_limited_api' in cffi/setuptools_ext.py. +*/ +#if !defined(_CFFI_USE_EMBEDDING) && !defined(Py_LIMITED_API) +# ifdef _MSC_VER +# if !defined(_DEBUG) && !defined(Py_DEBUG) && !defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG) && !defined(_CFFI_NO_LIMITED_API) +# define Py_LIMITED_API +# endif +# include + /* sanity-check: Py_LIMITED_API will cause crashes if any of these + are also defined. Normally, the Python file PC/pyconfig.h does not + cause any of these to be defined, with the exception that _DEBUG + causes Py_DEBUG. Double-check that. */ +# ifdef Py_LIMITED_API +# if defined(Py_DEBUG) +# error "pyconfig.h unexpectedly defines Py_DEBUG, but Py_LIMITED_API is set" +# endif +# if defined(Py_TRACE_REFS) +# error "pyconfig.h unexpectedly defines Py_TRACE_REFS, but Py_LIMITED_API is set" +# endif +# if defined(Py_REF_DEBUG) +# error "pyconfig.h unexpectedly defines Py_REF_DEBUG, but Py_LIMITED_API is set" +# endif +# endif +# else +# include +# if !defined(Py_DEBUG) && !defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG) && !defined(_CFFI_NO_LIMITED_API) +# define Py_LIMITED_API +# endif +# endif +#endif + +#include +#ifdef __cplusplus +extern "C" { +#endif +#include +#include "parse_c_type.h" + +/* this block of #ifs should be kept exactly identical between + c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py + and cffi/_cffi_include.h */ +#if defined(_MSC_VER) +# include /* for alloca() */ +# if _MSC_VER < 1600 /* MSVC < 2010 */ + typedef __int8 int8_t; + typedef __int16 int16_t; + typedef __int32 int32_t; + typedef __int64 int64_t; + typedef unsigned __int8 uint8_t; + typedef unsigned __int16 uint16_t; + typedef unsigned __int32 uint32_t; + typedef unsigned __int64 uint64_t; + typedef __int8 int_least8_t; + typedef __int16 int_least16_t; + typedef __int32 int_least32_t; + typedef __int64 int_least64_t; + typedef unsigned __int8 uint_least8_t; + typedef unsigned __int16 uint_least16_t; + typedef unsigned __int32 uint_least32_t; + typedef unsigned __int64 uint_least64_t; + typedef __int8 int_fast8_t; + typedef __int16 int_fast16_t; + typedef __int32 int_fast32_t; + typedef __int64 int_fast64_t; + typedef unsigned __int8 uint_fast8_t; + typedef unsigned __int16 uint_fast16_t; + typedef unsigned __int32 uint_fast32_t; + typedef unsigned __int64 uint_fast64_t; + typedef __int64 intmax_t; + typedef unsigned __int64 uintmax_t; +# else +# include +# endif +# if _MSC_VER < 1800 /* MSVC < 2013 */ +# ifndef __cplusplus + typedef unsigned char _Bool; +# endif +# endif +#else +# include +# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux) +# include +# endif +#endif + +#ifdef __GNUC__ +# define 
_CFFI_UNUSED_FN __attribute__((unused)) +#else +# define _CFFI_UNUSED_FN /* nothing */ +#endif + +#ifdef __cplusplus +# ifndef _Bool + typedef bool _Bool; /* semi-hackish: C++ has no _Bool; bool is builtin */ +# endif +#endif + +/********** CPython-specific section **********/ +#ifndef PYPY_VERSION + + +#if PY_MAJOR_VERSION >= 3 +# define PyInt_FromLong PyLong_FromLong +#endif + +#define _cffi_from_c_double PyFloat_FromDouble +#define _cffi_from_c_float PyFloat_FromDouble +#define _cffi_from_c_long PyInt_FromLong +#define _cffi_from_c_ulong PyLong_FromUnsignedLong +#define _cffi_from_c_longlong PyLong_FromLongLong +#define _cffi_from_c_ulonglong PyLong_FromUnsignedLongLong +#define _cffi_from_c__Bool PyBool_FromLong + +#define _cffi_to_c_double PyFloat_AsDouble +#define _cffi_to_c_float PyFloat_AsDouble + +#define _cffi_from_c_int(x, type) \ + (((type)-1) > 0 ? /* unsigned */ \ + (sizeof(type) < sizeof(long) ? \ + PyInt_FromLong((long)x) : \ + sizeof(type) == sizeof(long) ? \ + PyLong_FromUnsignedLong((unsigned long)x) : \ + PyLong_FromUnsignedLongLong((unsigned long long)x)) : \ + (sizeof(type) <= sizeof(long) ? \ + PyInt_FromLong((long)x) : \ + PyLong_FromLongLong((long long)x))) + +#define _cffi_to_c_int(o, type) \ + ((type)( \ + sizeof(type) == 1 ? (((type)-1) > 0 ? (type)_cffi_to_c_u8(o) \ + : (type)_cffi_to_c_i8(o)) : \ + sizeof(type) == 2 ? (((type)-1) > 0 ? (type)_cffi_to_c_u16(o) \ + : (type)_cffi_to_c_i16(o)) : \ + sizeof(type) == 4 ? (((type)-1) > 0 ? (type)_cffi_to_c_u32(o) \ + : (type)_cffi_to_c_i32(o)) : \ + sizeof(type) == 8 ? (((type)-1) > 0 ? (type)_cffi_to_c_u64(o) \ + : (type)_cffi_to_c_i64(o)) : \ + (Py_FatalError("unsupported size for type " #type), (type)0))) + +#define _cffi_to_c_i8 \ + ((int(*)(PyObject *))_cffi_exports[1]) +#define _cffi_to_c_u8 \ + ((int(*)(PyObject *))_cffi_exports[2]) +#define _cffi_to_c_i16 \ + ((int(*)(PyObject *))_cffi_exports[3]) +#define _cffi_to_c_u16 \ + ((int(*)(PyObject *))_cffi_exports[4]) +#define _cffi_to_c_i32 \ + ((int(*)(PyObject *))_cffi_exports[5]) +#define _cffi_to_c_u32 \ + ((unsigned int(*)(PyObject *))_cffi_exports[6]) +#define _cffi_to_c_i64 \ + ((long long(*)(PyObject *))_cffi_exports[7]) +#define _cffi_to_c_u64 \ + ((unsigned long long(*)(PyObject *))_cffi_exports[8]) +#define _cffi_to_c_char \ + ((int(*)(PyObject *))_cffi_exports[9]) +#define _cffi_from_c_pointer \ + ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[10]) +#define _cffi_to_c_pointer \ + ((char *(*)(PyObject *, struct _cffi_ctypedescr *))_cffi_exports[11]) +#define _cffi_get_struct_layout \ + not used any more +#define _cffi_restore_errno \ + ((void(*)(void))_cffi_exports[13]) +#define _cffi_save_errno \ + ((void(*)(void))_cffi_exports[14]) +#define _cffi_from_c_char \ + ((PyObject *(*)(char))_cffi_exports[15]) +#define _cffi_from_c_deref \ + ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[16]) +#define _cffi_to_c \ + ((int(*)(char *, struct _cffi_ctypedescr *, PyObject *))_cffi_exports[17]) +#define _cffi_from_c_struct \ + ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[18]) +#define _cffi_to_c_wchar_t \ + ((_cffi_wchar_t(*)(PyObject *))_cffi_exports[19]) +#define _cffi_from_c_wchar_t \ + ((PyObject *(*)(_cffi_wchar_t))_cffi_exports[20]) +#define _cffi_to_c_long_double \ + ((long double(*)(PyObject *))_cffi_exports[21]) +#define _cffi_to_c__Bool \ + ((_Bool(*)(PyObject *))_cffi_exports[22]) +#define _cffi_prepare_pointer_call_argument \ + ((Py_ssize_t(*)(struct _cffi_ctypedescr *, \ + PyObject *, char 
**))_cffi_exports[23]) +#define _cffi_convert_array_from_object \ + ((int(*)(char *, struct _cffi_ctypedescr *, PyObject *))_cffi_exports[24]) +#define _CFFI_CPIDX 25 +#define _cffi_call_python \ + ((void(*)(struct _cffi_externpy_s *, char *))_cffi_exports[_CFFI_CPIDX]) +#define _cffi_to_c_wchar3216_t \ + ((int(*)(PyObject *))_cffi_exports[26]) +#define _cffi_from_c_wchar3216_t \ + ((PyObject *(*)(int))_cffi_exports[27]) +#define _CFFI_NUM_EXPORTS 28 + +struct _cffi_ctypedescr; + +static void *_cffi_exports[_CFFI_NUM_EXPORTS]; + +#define _cffi_type(index) ( \ + assert((((uintptr_t)_cffi_types[index]) & 1) == 0), \ + (struct _cffi_ctypedescr *)_cffi_types[index]) + +static PyObject *_cffi_init(const char *module_name, Py_ssize_t version, + const struct _cffi_type_context_s *ctx) +{ + PyObject *module, *o_arg, *new_module; + void *raw[] = { + (void *)module_name, + (void *)version, + (void *)_cffi_exports, + (void *)ctx, + }; + + module = PyImport_ImportModule("_cffi_backend"); + if (module == NULL) + goto failure; + + o_arg = PyLong_FromVoidPtr((void *)raw); + if (o_arg == NULL) + goto failure; + + new_module = PyObject_CallMethod( + module, (char *)"_init_cffi_1_0_external_module", (char *)"O", o_arg); + + Py_DECREF(o_arg); + Py_DECREF(module); + return new_module; + + failure: + Py_XDECREF(module); + return NULL; +} + + +#ifdef HAVE_WCHAR_H +typedef wchar_t _cffi_wchar_t; +#else +typedef uint16_t _cffi_wchar_t; /* same random pick as _cffi_backend.c */ +#endif + +_CFFI_UNUSED_FN static uint16_t _cffi_to_c_char16_t(PyObject *o) +{ + if (sizeof(_cffi_wchar_t) == 2) + return (uint16_t)_cffi_to_c_wchar_t(o); + else + return (uint16_t)_cffi_to_c_wchar3216_t(o); +} + +_CFFI_UNUSED_FN static PyObject *_cffi_from_c_char16_t(uint16_t x) +{ + if (sizeof(_cffi_wchar_t) == 2) + return _cffi_from_c_wchar_t((_cffi_wchar_t)x); + else + return _cffi_from_c_wchar3216_t((int)x); +} + +_CFFI_UNUSED_FN static int _cffi_to_c_char32_t(PyObject *o) +{ + if (sizeof(_cffi_wchar_t) == 4) + return (int)_cffi_to_c_wchar_t(o); + else + return (int)_cffi_to_c_wchar3216_t(o); +} + +_CFFI_UNUSED_FN static PyObject *_cffi_from_c_char32_t(unsigned int x) +{ + if (sizeof(_cffi_wchar_t) == 4) + return _cffi_from_c_wchar_t((_cffi_wchar_t)x); + else + return _cffi_from_c_wchar3216_t((int)x); +} + +union _cffi_union_alignment_u { + unsigned char m_char; + unsigned short m_short; + unsigned int m_int; + unsigned long m_long; + unsigned long long m_longlong; + float m_float; + double m_double; + long double m_longdouble; +}; + +struct _cffi_freeme_s { + struct _cffi_freeme_s *next; + union _cffi_union_alignment_u alignment; +}; + +_CFFI_UNUSED_FN static int +_cffi_convert_array_argument(struct _cffi_ctypedescr *ctptr, PyObject *arg, + char **output_data, Py_ssize_t datasize, + struct _cffi_freeme_s **freeme) +{ + char *p; + if (datasize < 0) + return -1; + + p = *output_data; + if (p == NULL) { + struct _cffi_freeme_s *fp = (struct _cffi_freeme_s *)PyObject_Malloc( + offsetof(struct _cffi_freeme_s, alignment) + (size_t)datasize); + if (fp == NULL) + return -1; + fp->next = *freeme; + *freeme = fp; + p = *output_data = (char *)&fp->alignment; + } + memset((void *)p, 0, (size_t)datasize); + return _cffi_convert_array_from_object(p, ctptr, arg); +} + +_CFFI_UNUSED_FN static void +_cffi_free_array_arguments(struct _cffi_freeme_s *freeme) +{ + do { + void *p = (void *)freeme; + freeme = freeme->next; + PyObject_Free(p); + } while (freeme != NULL); +} + +/********** end CPython-specific section **********/ +#else +_CFFI_UNUSED_FN 
+static void (*_cffi_call_python_org)(struct _cffi_externpy_s *, char *); +# define _cffi_call_python _cffi_call_python_org +#endif + + +#define _cffi_array_len(array) (sizeof(array) / sizeof((array)[0])) + +#define _cffi_prim_int(size, sign) \ + ((size) == 1 ? ((sign) ? _CFFI_PRIM_INT8 : _CFFI_PRIM_UINT8) : \ + (size) == 2 ? ((sign) ? _CFFI_PRIM_INT16 : _CFFI_PRIM_UINT16) : \ + (size) == 4 ? ((sign) ? _CFFI_PRIM_INT32 : _CFFI_PRIM_UINT32) : \ + (size) == 8 ? ((sign) ? _CFFI_PRIM_INT64 : _CFFI_PRIM_UINT64) : \ + _CFFI__UNKNOWN_PRIM) + +#define _cffi_prim_float(size) \ + ((size) == sizeof(float) ? _CFFI_PRIM_FLOAT : \ + (size) == sizeof(double) ? _CFFI_PRIM_DOUBLE : \ + (size) == sizeof(long double) ? _CFFI__UNKNOWN_LONG_DOUBLE : \ + _CFFI__UNKNOWN_FLOAT_PRIM) + +#define _cffi_check_int(got, got_nonpos, expected) \ + ((got_nonpos) == (expected <= 0) && \ + (got) == (unsigned long long)expected) + +#ifdef MS_WIN32 +# define _cffi_stdcall __stdcall +#else +# define _cffi_stdcall /* nothing */ +#endif + +#ifdef __cplusplus +} +#endif diff --git a/env/lib/python3.8/site-packages/cffi/_embedding.h b/env/lib/python3.8/site-packages/cffi/_embedding.h new file mode 100644 index 00000000..8e8df882 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/_embedding.h @@ -0,0 +1,528 @@ + +/***** Support code for embedding *****/ + +#ifdef __cplusplus +extern "C" { +#endif + + +#if defined(_WIN32) +# define CFFI_DLLEXPORT __declspec(dllexport) +#elif defined(__GNUC__) +# define CFFI_DLLEXPORT __attribute__((visibility("default"))) +#else +# define CFFI_DLLEXPORT /* nothing */ +#endif + + +/* There are two global variables of type _cffi_call_python_fnptr: + + * _cffi_call_python, which we declare just below, is the one called + by ``extern "Python"`` implementations. + + * _cffi_call_python_org, which on CPython is actually part of the + _cffi_exports[] array, is the function pointer copied from + _cffi_backend. If _cffi_start_python() fails, then this is set + to NULL; otherwise, it should never be NULL. + + After initialization is complete, both are equal. However, the + first one remains equal to &_cffi_start_and_call_python until the + very end of initialization, when we are (or should be) sure that + concurrent threads also see a completely initialized world, and + only then is it changed. 
+*/ +#undef _cffi_call_python +typedef void (*_cffi_call_python_fnptr)(struct _cffi_externpy_s *, char *); +static void _cffi_start_and_call_python(struct _cffi_externpy_s *, char *); +static _cffi_call_python_fnptr _cffi_call_python = &_cffi_start_and_call_python; + + +#ifndef _MSC_VER + /* --- Assuming a GCC not infinitely old --- */ +# define cffi_compare_and_swap(l,o,n) __sync_bool_compare_and_swap(l,o,n) +# define cffi_write_barrier() __sync_synchronize() +# if !defined(__amd64__) && !defined(__x86_64__) && \ + !defined(__i386__) && !defined(__i386) +# define cffi_read_barrier() __sync_synchronize() +# else +# define cffi_read_barrier() (void)0 +# endif +#else + /* --- Windows threads version --- */ +# include +# define cffi_compare_and_swap(l,o,n) \ + (InterlockedCompareExchangePointer(l,n,o) == (o)) +# define cffi_write_barrier() InterlockedCompareExchange(&_cffi_dummy,0,0) +# define cffi_read_barrier() (void)0 +static volatile LONG _cffi_dummy; +#endif + +#ifdef WITH_THREAD +# ifndef _MSC_VER +# include + static pthread_mutex_t _cffi_embed_startup_lock; +# else + static CRITICAL_SECTION _cffi_embed_startup_lock; +# endif + static char _cffi_embed_startup_lock_ready = 0; +#endif + +static void _cffi_acquire_reentrant_mutex(void) +{ + static void *volatile lock = NULL; + + while (!cffi_compare_and_swap(&lock, NULL, (void *)1)) { + /* should ideally do a spin loop instruction here, but + hard to do it portably and doesn't really matter I + think: pthread_mutex_init() should be very fast, and + this is only run at start-up anyway. */ + } + +#ifdef WITH_THREAD + if (!_cffi_embed_startup_lock_ready) { +# ifndef _MSC_VER + pthread_mutexattr_t attr; + pthread_mutexattr_init(&attr); + pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE); + pthread_mutex_init(&_cffi_embed_startup_lock, &attr); +# else + InitializeCriticalSection(&_cffi_embed_startup_lock); +# endif + _cffi_embed_startup_lock_ready = 1; + } +#endif + + while (!cffi_compare_and_swap(&lock, (void *)1, NULL)) + ; + +#ifndef _MSC_VER + pthread_mutex_lock(&_cffi_embed_startup_lock); +#else + EnterCriticalSection(&_cffi_embed_startup_lock); +#endif +} + +static void _cffi_release_reentrant_mutex(void) +{ +#ifndef _MSC_VER + pthread_mutex_unlock(&_cffi_embed_startup_lock); +#else + LeaveCriticalSection(&_cffi_embed_startup_lock); +#endif +} + + +/********** CPython-specific section **********/ +#ifndef PYPY_VERSION + +#include "_cffi_errors.h" + + +#define _cffi_call_python_org _cffi_exports[_CFFI_CPIDX] + +PyMODINIT_FUNC _CFFI_PYTHON_STARTUP_FUNC(void); /* forward */ + +static void _cffi_py_initialize(void) +{ + /* XXX use initsigs=0, which "skips initialization registration of + signal handlers, which might be useful when Python is + embedded" according to the Python docs. But review and think + if it should be a user-controllable setting. + + XXX we should also give a way to write errors to a buffer + instead of to stderr. + + XXX if importing 'site' fails, CPython (any version) calls + exit(). Should we try to work around this behavior here? + */ + Py_InitializeEx(0); +} + +static int _cffi_initialize_python(void) +{ + /* This initializes Python, imports _cffi_backend, and then the + present .dll/.so is set up as a CPython C extension module. + */ + int result; + PyGILState_STATE state; + PyObject *pycode=NULL, *global_dict=NULL, *x; + PyObject *builtins; + + state = PyGILState_Ensure(); + + /* Call the initxxx() function from the present module. 
It will + create and initialize us as a CPython extension module, instead + of letting the startup Python code do it---it might reimport + the same .dll/.so and get maybe confused on some platforms. + It might also have troubles locating the .dll/.so again for all + I know. + */ + (void)_CFFI_PYTHON_STARTUP_FUNC(); + if (PyErr_Occurred()) + goto error; + + /* Now run the Python code provided to ffi.embedding_init_code(). + */ + pycode = Py_CompileString(_CFFI_PYTHON_STARTUP_CODE, + "", + Py_file_input); + if (pycode == NULL) + goto error; + global_dict = PyDict_New(); + if (global_dict == NULL) + goto error; + builtins = PyEval_GetBuiltins(); + if (builtins == NULL) + goto error; + if (PyDict_SetItemString(global_dict, "__builtins__", builtins) < 0) + goto error; + x = PyEval_EvalCode( +#if PY_MAJOR_VERSION < 3 + (PyCodeObject *) +#endif + pycode, global_dict, global_dict); + if (x == NULL) + goto error; + Py_DECREF(x); + + /* Done! Now if we've been called from + _cffi_start_and_call_python() in an ``extern "Python"``, we can + only hope that the Python code did correctly set up the + corresponding @ffi.def_extern() function. Otherwise, the + general logic of ``extern "Python"`` functions (inside the + _cffi_backend module) will find that the reference is still + missing and print an error. + */ + result = 0; + done: + Py_XDECREF(pycode); + Py_XDECREF(global_dict); + PyGILState_Release(state); + return result; + + error:; + { + /* Print as much information as potentially useful. + Debugging load-time failures with embedding is not fun + */ + PyObject *ecap; + PyObject *exception, *v, *tb, *f, *modules, *mod; + PyErr_Fetch(&exception, &v, &tb); + ecap = _cffi_start_error_capture(); + f = PySys_GetObject((char *)"stderr"); + if (f != NULL && f != Py_None) { + PyFile_WriteString( + "Failed to initialize the Python-CFFI embedding logic:\n\n", f); + } + + if (exception != NULL) { + PyErr_NormalizeException(&exception, &v, &tb); + PyErr_Display(exception, v, tb); + } + Py_XDECREF(exception); + Py_XDECREF(v); + Py_XDECREF(tb); + + if (f != NULL && f != Py_None) { + PyFile_WriteString("\nFrom: " _CFFI_MODULE_NAME + "\ncompiled with cffi version: 1.15.1" + "\n_cffi_backend module: ", f); + modules = PyImport_GetModuleDict(); + mod = PyDict_GetItemString(modules, "_cffi_backend"); + if (mod == NULL) { + PyFile_WriteString("not loaded", f); + } + else { + v = PyObject_GetAttrString(mod, "__file__"); + PyFile_WriteObject(v, f, 0); + Py_XDECREF(v); + } + PyFile_WriteString("\nsys.path: ", f); + PyFile_WriteObject(PySys_GetObject((char *)"path"), f, 0); + PyFile_WriteString("\n\n", f); + } + _cffi_stop_error_capture(ecap); + } + result = -1; + goto done; +} + +#if PY_VERSION_HEX < 0x03080000 +PyAPI_DATA(char *) _PyParser_TokenNames[]; /* from CPython */ +#endif + +static int _cffi_carefully_make_gil(void) +{ + /* This does the basic initialization of Python. It can be called + completely concurrently from unrelated threads. It assumes + that we don't hold the GIL before (if it exists), and we don't + hold it afterwards. + + (What it really does used to be completely different in Python 2 + and Python 3, with the Python 2 solution avoiding the spin-lock + around the Py_InitializeEx() call. However, after recent changes + to CPython 2.7 (issue #358) it no longer works. So we use the + Python 3 solution everywhere.) + + This initializes Python by calling Py_InitializeEx(). + Important: this must not be called concurrently at all. + So we use a global variable as a simple spin lock. 
This global + variable must be from 'libpythonX.Y.so', not from this + cffi-based extension module, because it must be shared from + different cffi-based extension modules. + + In Python < 3.8, we choose + _PyParser_TokenNames[0] as a completely arbitrary pointer value + that is never written to. The default is to point to the + string "ENDMARKER". We change it temporarily to point to the + next character in that string. (Yes, I know it's REALLY + obscure.) + + In Python >= 3.8, this string array is no longer writable, so + instead we pick PyCapsuleType.tp_version_tag. We can't change + Python < 3.8 because someone might use a mixture of cffi + embedded modules, some of which were compiled before this file + changed. + */ + +#ifdef WITH_THREAD +# if PY_VERSION_HEX < 0x03080000 + char *volatile *lock = (char *volatile *)_PyParser_TokenNames; + char *old_value, *locked_value; + + while (1) { /* spin loop */ + old_value = *lock; + locked_value = old_value + 1; + if (old_value[0] == 'E') { + assert(old_value[1] == 'N'); + if (cffi_compare_and_swap(lock, old_value, locked_value)) + break; + } + else { + assert(old_value[0] == 'N'); + /* should ideally do a spin loop instruction here, but + hard to do it portably and doesn't really matter I + think: PyEval_InitThreads() should be very fast, and + this is only run at start-up anyway. */ + } + } +# else + int volatile *lock = (int volatile *)&PyCapsule_Type.tp_version_tag; + int old_value, locked_value; + assert(!(PyCapsule_Type.tp_flags & Py_TPFLAGS_HAVE_VERSION_TAG)); + + while (1) { /* spin loop */ + old_value = *lock; + locked_value = -42; + if (old_value == 0) { + if (cffi_compare_and_swap(lock, old_value, locked_value)) + break; + } + else { + assert(old_value == locked_value); + /* should ideally do a spin loop instruction here, but + hard to do it portably and doesn't really matter I + think: PyEval_InitThreads() should be very fast, and + this is only run at start-up anyway. */ + } + } +# endif +#endif + + /* call Py_InitializeEx() */ + if (!Py_IsInitialized()) { + _cffi_py_initialize(); +#if PY_VERSION_HEX < 0x03070000 + PyEval_InitThreads(); +#endif + PyEval_SaveThread(); /* release the GIL */ + /* the returned tstate must be the one that has been stored into the + autoTLSkey by _PyGILState_Init() called from Py_Initialize(). 
*/ + } + else { +#if PY_VERSION_HEX < 0x03070000 + /* PyEval_InitThreads() is always a no-op from CPython 3.7 */ + PyGILState_STATE state = PyGILState_Ensure(); + PyEval_InitThreads(); + PyGILState_Release(state); +#endif + } + +#ifdef WITH_THREAD + /* release the lock */ + while (!cffi_compare_and_swap(lock, locked_value, old_value)) + ; +#endif + + return 0; +} + +/********** end CPython-specific section **********/ + + +#else + + +/********** PyPy-specific section **********/ + +PyMODINIT_FUNC _CFFI_PYTHON_STARTUP_FUNC(const void *[]); /* forward */ + +static struct _cffi_pypy_init_s { + const char *name; + void *func; /* function pointer */ + const char *code; +} _cffi_pypy_init = { + _CFFI_MODULE_NAME, + _CFFI_PYTHON_STARTUP_FUNC, + _CFFI_PYTHON_STARTUP_CODE, +}; + +extern int pypy_carefully_make_gil(const char *); +extern int pypy_init_embedded_cffi_module(int, struct _cffi_pypy_init_s *); + +static int _cffi_carefully_make_gil(void) +{ + return pypy_carefully_make_gil(_CFFI_MODULE_NAME); +} + +static int _cffi_initialize_python(void) +{ + return pypy_init_embedded_cffi_module(0xB011, &_cffi_pypy_init); +} + +/********** end PyPy-specific section **********/ + + +#endif + + +#ifdef __GNUC__ +__attribute__((noinline)) +#endif +static _cffi_call_python_fnptr _cffi_start_python(void) +{ + /* Delicate logic to initialize Python. This function can be + called multiple times concurrently, e.g. when the process calls + its first ``extern "Python"`` functions in multiple threads at + once. It can also be called recursively, in which case we must + ignore it. We also have to consider what occurs if several + different cffi-based extensions reach this code in parallel + threads---it is a different copy of the code, then, and we + can't have any shared global variable unless it comes from + 'libpythonX.Y.so'. + + Idea: + + * _cffi_carefully_make_gil(): "carefully" call + PyEval_InitThreads() (possibly with Py_InitializeEx() first). + + * then we use a (local) custom lock to make sure that a call to this + cffi-based extension will wait if another call to the *same* + extension is running the initialization in another thread. + It is reentrant, so that a recursive call will not block, but + only one from a different thread. + + * then we grab the GIL and (Python 2) we call Py_InitializeEx(). + At this point, concurrent calls to Py_InitializeEx() are not + possible: we have the GIL. + + * do the rest of the specific initialization, which may + temporarily release the GIL but not the custom lock. + Only release the custom lock when we are done. + */ + static char called = 0; + + if (_cffi_carefully_make_gil() != 0) + return NULL; + + _cffi_acquire_reentrant_mutex(); + + /* Here the GIL exists, but we don't have it. We're only protected + from concurrency by the reentrant mutex. */ + + /* This file only initializes the embedded module once, the first + time this is called, even if there are subinterpreters. */ + if (!called) { + called = 1; /* invoke _cffi_initialize_python() only once, + but don't set '_cffi_call_python' right now, + otherwise concurrent threads won't call + this function at all (we need them to wait) */ + if (_cffi_initialize_python() == 0) { + /* now initialization is finished. Switch to the fast-path. */ + + /* We would like nobody to see the new value of + '_cffi_call_python' without also seeing the rest of the + data initialized. However, this is not possible. But + the new value of '_cffi_call_python' is the function + 'cffi_call_python()' from _cffi_backend. 
So: */ + cffi_write_barrier(); + /* ^^^ we put a write barrier here, and a corresponding + read barrier at the start of cffi_call_python(). This + ensures that after that read barrier, we see everything + done here before the write barrier. + */ + + assert(_cffi_call_python_org != NULL); + _cffi_call_python = (_cffi_call_python_fnptr)_cffi_call_python_org; + } + else { + /* initialization failed. Reset this to NULL, even if it was + already set to some other value. Future calls to + _cffi_start_python() are still forced to occur, and will + always return NULL from now on. */ + _cffi_call_python_org = NULL; + } + } + + _cffi_release_reentrant_mutex(); + + return (_cffi_call_python_fnptr)_cffi_call_python_org; +} + +static +void _cffi_start_and_call_python(struct _cffi_externpy_s *externpy, char *args) +{ + _cffi_call_python_fnptr fnptr; + int current_err = errno; +#ifdef _MSC_VER + int current_lasterr = GetLastError(); +#endif + fnptr = _cffi_start_python(); + if (fnptr == NULL) { + fprintf(stderr, "function %s() called, but initialization code " + "failed. Returning 0.\n", externpy->name); + memset(args, 0, externpy->size_of_result); + } +#ifdef _MSC_VER + SetLastError(current_lasterr); +#endif + errno = current_err; + + if (fnptr != NULL) + fnptr(externpy, args); +} + + +/* The cffi_start_python() function makes sure Python is initialized + and our cffi module is set up. It can be called manually from the + user C code. The same effect is obtained automatically from any + dll-exported ``extern "Python"`` function. This function returns + -1 if initialization failed, 0 if all is OK. */ +_CFFI_UNUSED_FN +static int cffi_start_python(void) +{ + if (_cffi_call_python == &_cffi_start_and_call_python) { + if (_cffi_start_python() == NULL) + return -1; + } + cffi_read_barrier(); + return 0; +} + +#undef cffi_compare_and_swap +#undef cffi_write_barrier +#undef cffi_read_barrier + +#ifdef __cplusplus +} +#endif diff --git a/env/lib/python3.8/site-packages/cffi/api.py b/env/lib/python3.8/site-packages/cffi/api.py new file mode 100644 index 00000000..999a8aef --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/api.py @@ -0,0 +1,965 @@ +import sys, types +from .lock import allocate_lock +from .error import CDefError +from . import model + +try: + callable +except NameError: + # Python 3.1 + from collections import Callable + callable = lambda x: isinstance(x, Callable) + +try: + basestring +except NameError: + # Python 3.x + basestring = str + +_unspecified = object() + + + +class FFI(object): + r''' + The main top-level class that you instantiate once, or once per module. + + Example usage: + + ffi = FFI() + ffi.cdef(""" + int printf(const char *, ...); + """) + + C = ffi.dlopen(None) # standard library + -or- + C = ffi.verify() # use a C compiler: verify the decl above is right + + C.printf("hello, %s!\n", ffi.new("char[]", "world")) + ''' + + def __init__(self, backend=None): + """Create an FFI instance. The 'backend' argument is used to + select a non-default backend, mostly for tests. + """ + if backend is None: + # You need PyPy (>= 2.0 beta), or a CPython (>= 2.6) with + # _cffi_backend.so compiled. + import _cffi_backend as backend + from . import __version__ + if backend.__version__ != __version__: + # bad version! Try to be as explicit as possible. + if hasattr(backend, '__file__'): + # CPython + raise Exception("Version mismatch: this is the 'cffi' package version %s, located in %r. When we import the top-level '_cffi_backend' extension module, we get version %s, located in %r. 
The two versions should be equal; check your installation." % ( + __version__, __file__, + backend.__version__, backend.__file__)) + else: + # PyPy + raise Exception("Version mismatch: this is the 'cffi' package version %s, located in %r. This interpreter comes with a built-in '_cffi_backend' module, which is version %s. The two versions should be equal; check your installation." % ( + __version__, __file__, backend.__version__)) + # (If you insist you can also try to pass the option + # 'backend=backend_ctypes.CTypesBackend()', but don't + # rely on it! It's probably not going to work well.) + + from . import cparser + self._backend = backend + self._lock = allocate_lock() + self._parser = cparser.Parser() + self._cached_btypes = {} + self._parsed_types = types.ModuleType('parsed_types').__dict__ + self._new_types = types.ModuleType('new_types').__dict__ + self._function_caches = [] + self._libraries = [] + self._cdefsources = [] + self._included_ffis = [] + self._windows_unicode = None + self._init_once_cache = {} + self._cdef_version = None + self._embedding = None + self._typecache = model.get_typecache(backend) + if hasattr(backend, 'set_ffi'): + backend.set_ffi(self) + for name in list(backend.__dict__): + if name.startswith('RTLD_'): + setattr(self, name, getattr(backend, name)) + # + with self._lock: + self.BVoidP = self._get_cached_btype(model.voidp_type) + self.BCharA = self._get_cached_btype(model.char_array_type) + if isinstance(backend, types.ModuleType): + # _cffi_backend: attach these constants to the class + if not hasattr(FFI, 'NULL'): + FFI.NULL = self.cast(self.BVoidP, 0) + FFI.CData, FFI.CType = backend._get_types() + else: + # ctypes backend: attach these constants to the instance + self.NULL = self.cast(self.BVoidP, 0) + self.CData, self.CType = backend._get_types() + self.buffer = backend.buffer + + def cdef(self, csource, override=False, packed=False, pack=None): + """Parse the given C source. This registers all declared functions, + types, and global variables. The functions and global variables can + then be accessed via either 'ffi.dlopen()' or 'ffi.verify()'. + The types can be used in 'ffi.new()' and other functions. + If 'packed' is specified as True, all structs declared inside this + cdef are packed, i.e. laid out without any field alignment at all. + Alternatively, 'pack' can be a small integer, and requests for + alignment greater than that are ignored (pack=1 is equivalent to + packed=True). + """ + self._cdef(csource, override=override, packed=packed, pack=pack) + + def embedding_api(self, csource, packed=False, pack=None): + self._cdef(csource, packed=packed, pack=pack, dllexport=True) + if self._embedding is None: + self._embedding = '' + + def _cdef(self, csource, override=False, **options): + if not isinstance(csource, str): # unicode, on Python 2 + if not isinstance(csource, basestring): + raise TypeError("cdef() argument must be a string") + csource = csource.encode('ascii') + with self._lock: + self._cdef_version = object() + self._parser.parse(csource, override=override, **options) + self._cdefsources.append(csource) + if override: + for cache in self._function_caches: + cache.clear() + finishlist = self._parser._recomplete + if finishlist: + self._parser._recomplete = [] + for tp in finishlist: + tp.finish_backend_type(self, finishlist) + + def dlopen(self, name, flags=0): + """Load and return a dynamic library identified by 'name'. + The standard C library can be loaded by passing None. 
+ Note that functions and types declared by 'ffi.cdef()' are not + linked to a particular library, just like C headers; in the + library we only look for the actual (untyped) symbols. + """ + if not (isinstance(name, basestring) or + name is None or + isinstance(name, self.CData)): + raise TypeError("dlopen(name): name must be a file name, None, " + "or an already-opened 'void *' handle") + with self._lock: + lib, function_cache = _make_ffi_library(self, name, flags) + self._function_caches.append(function_cache) + self._libraries.append(lib) + return lib + + def dlclose(self, lib): + """Close a library obtained with ffi.dlopen(). After this call, + access to functions or variables from the library will fail + (possibly with a segmentation fault). + """ + type(lib).__cffi_close__(lib) + + def _typeof_locked(self, cdecl): + # call me with the lock! + key = cdecl + if key in self._parsed_types: + return self._parsed_types[key] + # + if not isinstance(cdecl, str): # unicode, on Python 2 + cdecl = cdecl.encode('ascii') + # + type = self._parser.parse_type(cdecl) + really_a_function_type = type.is_raw_function + if really_a_function_type: + type = type.as_function_pointer() + btype = self._get_cached_btype(type) + result = btype, really_a_function_type + self._parsed_types[key] = result + return result + + def _typeof(self, cdecl, consider_function_as_funcptr=False): + # string -> ctype object + try: + result = self._parsed_types[cdecl] + except KeyError: + with self._lock: + result = self._typeof_locked(cdecl) + # + btype, really_a_function_type = result + if really_a_function_type and not consider_function_as_funcptr: + raise CDefError("the type %r is a function type, not a " + "pointer-to-function type" % (cdecl,)) + return btype + + def typeof(self, cdecl): + """Parse the C type given as a string and return the + corresponding object. + It can also be used on 'cdata' instance to get its C type. + """ + if isinstance(cdecl, basestring): + return self._typeof(cdecl) + if isinstance(cdecl, self.CData): + return self._backend.typeof(cdecl) + if isinstance(cdecl, types.BuiltinFunctionType): + res = _builtin_function_type(cdecl) + if res is not None: + return res + if (isinstance(cdecl, types.FunctionType) + and hasattr(cdecl, '_cffi_base_type')): + with self._lock: + return self._get_cached_btype(cdecl._cffi_base_type) + raise TypeError(type(cdecl)) + + def sizeof(self, cdecl): + """Return the size in bytes of the argument. It can be a + string naming a C type, or a 'cdata' instance. + """ + if isinstance(cdecl, basestring): + BType = self._typeof(cdecl) + return self._backend.sizeof(BType) + else: + return self._backend.sizeof(cdecl) + + def alignof(self, cdecl): + """Return the natural alignment size in bytes of the C type + given as a string. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.alignof(cdecl) + + def offsetof(self, cdecl, *fields_or_indexes): + """Return the offset of the named field inside the given + structure or array, which must be given as a C type name. + You can give several field names in case of nested structures. + You can also give numeric values which correspond to array + items, in case of an array type. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._typeoffsetof(cdecl, *fields_or_indexes)[1] + + def new(self, cdecl, init=None): + """Allocate an instance according to the specified C type and + return a pointer to it. 
The specified C type must be either a + pointer or an array: ``new('X *')`` allocates an X and returns + a pointer to it, whereas ``new('X[n]')`` allocates an array of + n X'es and returns an array referencing it (which works + mostly like a pointer, like in C). You can also use + ``new('X[]', n)`` to allocate an array of a non-constant + length n. + + The memory is initialized following the rules of declaring a + global variable in C: by default it is zero-initialized, but + an explicit initializer can be given which can be used to + fill all or part of the memory. + + When the returned object goes out of scope, the memory + is freed. In other words the returned object has + ownership of the value of type 'cdecl' that it points to. This + means that the raw data can be used as long as this object is + kept alive, but must not be used for a longer time. Be careful + about that when copying the pointer to the memory somewhere + else, e.g. into another structure. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.newp(cdecl, init) + + def new_allocator(self, alloc=None, free=None, + should_clear_after_alloc=True): + """Return a new allocator, i.e. a function that behaves like ffi.new() + but uses the provided low-level 'alloc' and 'free' functions. + + 'alloc' is called with the size as argument. If it returns NULL, a + MemoryError is raised. 'free' is called with the result of 'alloc' + as argument. Both can be either Python function or directly C + functions. If 'free' is None, then no free function is called. + If both 'alloc' and 'free' are None, the default is used. + + If 'should_clear_after_alloc' is set to False, then the memory + returned by 'alloc' is assumed to be already cleared (or you are + fine with garbage); otherwise CFFI will clear it. + """ + compiled_ffi = self._backend.FFI() + allocator = compiled_ffi.new_allocator(alloc, free, + should_clear_after_alloc) + def allocate(cdecl, init=None): + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return allocator(cdecl, init) + return allocate + + def cast(self, cdecl, source): + """Similar to a C cast: returns an instance of the named C + type initialized with the given 'source'. The source is + casted between integers or pointers of any type. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.cast(cdecl, source) + + def string(self, cdata, maxlen=-1): + """Return a Python string (or unicode string) from the 'cdata'. + If 'cdata' is a pointer or array of characters or bytes, returns + the null-terminated string. The returned string extends until + the first null character, or at most 'maxlen' characters. If + 'cdata' is an array then 'maxlen' defaults to its length. + + If 'cdata' is a pointer or array of wchar_t, returns a unicode + string following the same rules. + + If 'cdata' is a single character or byte or a wchar_t, returns + it as a string or unicode string. + + If 'cdata' is an enum, returns the value of the enumerator as a + string, or 'NUMBER' if the value is out of range. + """ + return self._backend.string(cdata, maxlen) + + def unpack(self, cdata, length): + """Unpack an array of C data of the given length, + returning a Python string/unicode/list. + + If 'cdata' is a pointer to 'char', returns a byte string. + It does not stop at the first null. This is equivalent to: + ffi.buffer(cdata, length)[:] + + If 'cdata' is a pointer to 'wchar_t', returns a unicode string. 
+ 'length' is measured in wchar_t's; it is not the size in bytes. + + If 'cdata' is a pointer to anything else, returns a list of + 'length' items. This is a faster equivalent to: + [cdata[i] for i in range(length)] + """ + return self._backend.unpack(cdata, length) + + #def buffer(self, cdata, size=-1): + # """Return a read-write buffer object that references the raw C data + # pointed to by the given 'cdata'. The 'cdata' must be a pointer or + # an array. Can be passed to functions expecting a buffer, or directly + # manipulated with: + # + # buf[:] get a copy of it in a regular string, or + # buf[idx] as a single character + # buf[:] = ... + # buf[idx] = ... change the content + # """ + # note that 'buffer' is a type, set on this instance by __init__ + + def from_buffer(self, cdecl, python_buffer=_unspecified, + require_writable=False): + """Return a cdata of the given type pointing to the data of the + given Python object, which must support the buffer interface. + Note that this is not meant to be used on the built-in types + str or unicode (you can build 'char[]' arrays explicitly) + but only on objects containing large quantities of raw data + in some other format, like 'array.array' or numpy arrays. + + The first argument is optional and default to 'char[]'. + """ + if python_buffer is _unspecified: + cdecl, python_buffer = self.BCharA, cdecl + elif isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.from_buffer(cdecl, python_buffer, + require_writable) + + def memmove(self, dest, src, n): + """ffi.memmove(dest, src, n) copies n bytes of memory from src to dest. + + Like the C function memmove(), the memory areas may overlap; + apart from that it behaves like the C function memcpy(). + + 'src' can be any cdata ptr or array, or any Python buffer object. + 'dest' can be any cdata ptr or array, or a writable Python buffer + object. The size to copy, 'n', is always measured in bytes. + + Unlike other methods, this one supports all Python buffer including + byte strings and bytearrays---but it still does not support + non-contiguous buffers. + """ + return self._backend.memmove(dest, src, n) + + def callback(self, cdecl, python_callable=None, error=None, onerror=None): + """Return a callback object or a decorator making such a + callback object. 'cdecl' must name a C function pointer type. + The callback invokes the specified 'python_callable' (which may + be provided either directly or via a decorator). Important: the + callback object must be manually kept alive for as long as the + callback may be invoked from the C level. + """ + def callback_decorator_wrap(python_callable): + if not callable(python_callable): + raise TypeError("the 'python_callable' argument " + "is not callable") + return self._backend.callback(cdecl, python_callable, + error, onerror) + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl, consider_function_as_funcptr=True) + if python_callable is None: + return callback_decorator_wrap # decorator mode + else: + return callback_decorator_wrap(python_callable) # direct mode + + def getctype(self, cdecl, replace_with=''): + """Return a string giving the C type 'cdecl', which may be itself + a string or a object. If 'replace_with' is given, it gives + extra text to append (or insert for more complicated C types), like + a variable name, or '*' to get actually the C type 'pointer-to-cdecl'. 
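+
+        Illustrative sketch of the expected results (the exact spacing
+        is up to the backend, so treat these as approximate):
+
+            ffi.getctype("int")            # 'int'
+            ffi.getctype("int", "*")       # 'int *'
+            ffi.getctype("int[5]", "x")    # 'int x[5]' -- inserted, not appended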
+ """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + replace_with = replace_with.strip() + if (replace_with.startswith('*') + and '&[' in self._backend.getcname(cdecl, '&')): + replace_with = '(%s)' % replace_with + elif replace_with and not replace_with[0] in '[(': + replace_with = ' ' + replace_with + return self._backend.getcname(cdecl, replace_with) + + def gc(self, cdata, destructor, size=0): + """Return a new cdata object that points to the same + data. Later, when this new cdata object is garbage-collected, + 'destructor(old_cdata_object)' will be called. + + The optional 'size' gives an estimate of the size, used to + trigger the garbage collection more eagerly. So far only used + on PyPy. It tells the GC that the returned object keeps alive + roughly 'size' bytes of external memory. + """ + return self._backend.gcp(cdata, destructor, size) + + def _get_cached_btype(self, type): + assert self._lock.acquire(False) is False + # call me with the lock! + try: + BType = self._cached_btypes[type] + except KeyError: + finishlist = [] + BType = type.get_cached_btype(self, finishlist) + for type in finishlist: + type.finish_backend_type(self, finishlist) + return BType + + def verify(self, source='', tmpdir=None, **kwargs): + """Verify that the current ffi signatures compile on this + machine, and return a dynamic library object. The dynamic + library can be used to call functions and access global + variables declared in this 'ffi'. The library is compiled + by the C compiler: it gives you C-level API compatibility + (including calling macros). This is unlike 'ffi.dlopen()', + which requires binary compatibility in the signatures. + """ + from .verifier import Verifier, _caller_dir_pycache + # + # If set_unicode(True) was called, insert the UNICODE and + # _UNICODE macro declarations + if self._windows_unicode: + self._apply_windows_unicode(kwargs) + # + # Set the tmpdir here, and not in Verifier.__init__: it picks + # up the caller's directory, which we want to be the caller of + # ffi.verify(), as opposed to the caller of Veritier(). + tmpdir = tmpdir or _caller_dir_pycache() + # + # Make a Verifier() and use it to load the library. + self.verifier = Verifier(self, source, tmpdir, **kwargs) + lib = self.verifier.load_library() + # + # Save the loaded library for keep-alive purposes, even + # if the caller doesn't keep it alive itself (it should). + self._libraries.append(lib) + return lib + + def _get_errno(self): + return self._backend.get_errno() + def _set_errno(self, errno): + self._backend.set_errno(errno) + errno = property(_get_errno, _set_errno, None, + "the value of 'errno' from/to the C calls") + + def getwinerror(self, code=-1): + return self._backend.getwinerror(code) + + def _pointer_to(self, ctype): + with self._lock: + return model.pointer_cache(self, ctype) + + def addressof(self, cdata, *fields_or_indexes): + """Return the address of a . + If 'fields_or_indexes' are given, returns the address of that + field or array item in the structure or array, recursively in + case of nested structures. 
+ """ + try: + ctype = self._backend.typeof(cdata) + except TypeError: + if '__addressof__' in type(cdata).__dict__: + return type(cdata).__addressof__(cdata, *fields_or_indexes) + raise + if fields_or_indexes: + ctype, offset = self._typeoffsetof(ctype, *fields_or_indexes) + else: + if ctype.kind == "pointer": + raise TypeError("addressof(pointer)") + offset = 0 + ctypeptr = self._pointer_to(ctype) + return self._backend.rawaddressof(ctypeptr, cdata, offset) + + def _typeoffsetof(self, ctype, field_or_index, *fields_or_indexes): + ctype, offset = self._backend.typeoffsetof(ctype, field_or_index) + for field1 in fields_or_indexes: + ctype, offset1 = self._backend.typeoffsetof(ctype, field1, 1) + offset += offset1 + return ctype, offset + + def include(self, ffi_to_include): + """Includes the typedefs, structs, unions and enums defined + in another FFI instance. Usage is similar to a #include in C, + where a part of the program might include types defined in + another part for its own usage. Note that the include() + method has no effect on functions, constants and global + variables, which must anyway be accessed directly from the + lib object returned by the original FFI instance. + """ + if not isinstance(ffi_to_include, FFI): + raise TypeError("ffi.include() expects an argument that is also of" + " type cffi.FFI, not %r" % ( + type(ffi_to_include).__name__,)) + if ffi_to_include is self: + raise ValueError("self.include(self)") + with ffi_to_include._lock: + with self._lock: + self._parser.include(ffi_to_include._parser) + self._cdefsources.append('[') + self._cdefsources.extend(ffi_to_include._cdefsources) + self._cdefsources.append(']') + self._included_ffis.append(ffi_to_include) + + def new_handle(self, x): + return self._backend.newp_handle(self.BVoidP, x) + + def from_handle(self, x): + return self._backend.from_handle(x) + + def release(self, x): + self._backend.release(x) + + def set_unicode(self, enabled_flag): + """Windows: if 'enabled_flag' is True, enable the UNICODE and + _UNICODE defines in C, and declare the types like TCHAR and LPTCSTR + to be (pointers to) wchar_t. If 'enabled_flag' is False, + declare these types to be (pointers to) plain 8-bit characters. + This is mostly for backward compatibility; you usually want True. 
+ """ + if self._windows_unicode is not None: + raise ValueError("set_unicode() can only be called once") + enabled_flag = bool(enabled_flag) + if enabled_flag: + self.cdef("typedef wchar_t TBYTE;" + "typedef wchar_t TCHAR;" + "typedef const wchar_t *LPCTSTR;" + "typedef const wchar_t *PCTSTR;" + "typedef wchar_t *LPTSTR;" + "typedef wchar_t *PTSTR;" + "typedef TBYTE *PTBYTE;" + "typedef TCHAR *PTCHAR;") + else: + self.cdef("typedef char TBYTE;" + "typedef char TCHAR;" + "typedef const char *LPCTSTR;" + "typedef const char *PCTSTR;" + "typedef char *LPTSTR;" + "typedef char *PTSTR;" + "typedef TBYTE *PTBYTE;" + "typedef TCHAR *PTCHAR;") + self._windows_unicode = enabled_flag + + def _apply_windows_unicode(self, kwds): + defmacros = kwds.get('define_macros', ()) + if not isinstance(defmacros, (list, tuple)): + raise TypeError("'define_macros' must be a list or tuple") + defmacros = list(defmacros) + [('UNICODE', '1'), + ('_UNICODE', '1')] + kwds['define_macros'] = defmacros + + def _apply_embedding_fix(self, kwds): + # must include an argument like "-lpython2.7" for the compiler + def ensure(key, value): + lst = kwds.setdefault(key, []) + if value not in lst: + lst.append(value) + # + if '__pypy__' in sys.builtin_module_names: + import os + if sys.platform == "win32": + # we need 'libpypy-c.lib'. Current distributions of + # pypy (>= 4.1) contain it as 'libs/python27.lib'. + pythonlib = "python{0[0]}{0[1]}".format(sys.version_info) + if hasattr(sys, 'prefix'): + ensure('library_dirs', os.path.join(sys.prefix, 'libs')) + else: + # we need 'libpypy-c.{so,dylib}', which should be by + # default located in 'sys.prefix/bin' for installed + # systems. + if sys.version_info < (3,): + pythonlib = "pypy-c" + else: + pythonlib = "pypy3-c" + if hasattr(sys, 'prefix'): + ensure('library_dirs', os.path.join(sys.prefix, 'bin')) + # On uninstalled pypy's, the libpypy-c is typically found in + # .../pypy/goal/. + if hasattr(sys, 'prefix'): + ensure('library_dirs', os.path.join(sys.prefix, 'pypy', 'goal')) + else: + if sys.platform == "win32": + template = "python%d%d" + if hasattr(sys, 'gettotalrefcount'): + template += '_d' + else: + try: + import sysconfig + except ImportError: # 2.6 + from distutils import sysconfig + template = "python%d.%d" + if sysconfig.get_config_var('DEBUG_EXT'): + template += sysconfig.get_config_var('DEBUG_EXT') + pythonlib = (template % + (sys.hexversion >> 24, (sys.hexversion >> 16) & 0xff)) + if hasattr(sys, 'abiflags'): + pythonlib += sys.abiflags + ensure('libraries', pythonlib) + if sys.platform == "win32": + ensure('extra_link_args', '/MANIFEST') + + def set_source(self, module_name, source, source_extension='.c', **kwds): + import os + if hasattr(self, '_assigned_source'): + raise ValueError("set_source() cannot be called several times " + "per ffi object") + if not isinstance(module_name, basestring): + raise TypeError("'module_name' must be a string") + if os.sep in module_name or (os.altsep and os.altsep in module_name): + raise ValueError("'module_name' must not contain '/': use a dotted " + "name to make a 'package.module' location") + self._assigned_source = (str(module_name), source, + source_extension, kwds) + + def set_source_pkgconfig(self, module_name, pkgconfig_libs, source, + source_extension='.c', **kwds): + from . 
import pkgconfig + if not isinstance(pkgconfig_libs, list): + raise TypeError("the pkgconfig_libs argument must be a list " + "of package names") + kwds2 = pkgconfig.flags_from_pkgconfig(pkgconfig_libs) + pkgconfig.merge_flags(kwds, kwds2) + self.set_source(module_name, source, source_extension, **kwds) + + def distutils_extension(self, tmpdir='build', verbose=True): + from distutils.dir_util import mkpath + from .recompiler import recompile + # + if not hasattr(self, '_assigned_source'): + if hasattr(self, 'verifier'): # fallback, 'tmpdir' ignored + return self.verifier.get_extension() + raise ValueError("set_source() must be called before" + " distutils_extension()") + module_name, source, source_extension, kwds = self._assigned_source + if source is None: + raise TypeError("distutils_extension() is only for C extension " + "modules, not for dlopen()-style pure Python " + "modules") + mkpath(tmpdir) + ext, updated = recompile(self, module_name, + source, tmpdir=tmpdir, extradir=tmpdir, + source_extension=source_extension, + call_c_compiler=False, **kwds) + if verbose: + if updated: + sys.stderr.write("regenerated: %r\n" % (ext.sources[0],)) + else: + sys.stderr.write("not modified: %r\n" % (ext.sources[0],)) + return ext + + def emit_c_code(self, filename): + from .recompiler import recompile + # + if not hasattr(self, '_assigned_source'): + raise ValueError("set_source() must be called before emit_c_code()") + module_name, source, source_extension, kwds = self._assigned_source + if source is None: + raise TypeError("emit_c_code() is only for C extension modules, " + "not for dlopen()-style pure Python modules") + recompile(self, module_name, source, + c_file=filename, call_c_compiler=False, **kwds) + + def emit_python_code(self, filename): + from .recompiler import recompile + # + if not hasattr(self, '_assigned_source'): + raise ValueError("set_source() must be called before emit_c_code()") + module_name, source, source_extension, kwds = self._assigned_source + if source is not None: + raise TypeError("emit_python_code() is only for dlopen()-style " + "pure Python modules, not for C extension modules") + recompile(self, module_name, source, + c_file=filename, call_c_compiler=False, **kwds) + + def compile(self, tmpdir='.', verbose=0, target=None, debug=None): + """The 'target' argument gives the final file name of the + compiled DLL. Use '*' to force distutils' choice, suitable for + regular CPython C API modules. Use a file name ending in '.*' + to ask for the system's default extension for dynamic libraries + (.so/.dll/.dylib). + + The default is '*' when building a non-embedded C API extension, + and (module_name + '.*') when building an embedded library. + """ + from .recompiler import recompile + # + if not hasattr(self, '_assigned_source'): + raise ValueError("set_source() must be called before compile()") + module_name, source, source_extension, kwds = self._assigned_source + return recompile(self, module_name, source, tmpdir=tmpdir, + target=target, source_extension=source_extension, + compiler_verbose=verbose, debug=debug, **kwds) + + def init_once(self, func, tag): + # Read _init_once_cache[tag], which is either (False, lock) if + # we're calling the function now in some thread, or (True, result). + # Don't call setdefault() in most cases, to avoid allocating and + # immediately freeing a lock; but still use setdefaut() to avoid + # races. 
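+        #
+        # Illustrative usage (the names are hypothetical): every thread
+        # may run
+        #     db = ffi.init_once(lambda: connect_db(), "db")
+        # but connect_db() executes at most once; later calls return
+        # the cached result for the "db" tag.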
+ try: + x = self._init_once_cache[tag] + except KeyError: + x = self._init_once_cache.setdefault(tag, (False, allocate_lock())) + # Common case: we got (True, result), so we return the result. + if x[0]: + return x[1] + # Else, it's a lock. Acquire it to serialize the following tests. + with x[1]: + # Read again from _init_once_cache the current status. + x = self._init_once_cache[tag] + if x[0]: + return x[1] + # Call the function and store the result back. + result = func() + self._init_once_cache[tag] = (True, result) + return result + + def embedding_init_code(self, pysource): + if self._embedding: + raise ValueError("embedding_init_code() can only be called once") + # fix 'pysource' before it gets dumped into the C file: + # - remove empty lines at the beginning, so it starts at "line 1" + # - dedent, if all non-empty lines are indented + # - check for SyntaxErrors + import re + match = re.match(r'\s*\n', pysource) + if match: + pysource = pysource[match.end():] + lines = pysource.splitlines() or [''] + prefix = re.match(r'\s*', lines[0]).group() + for i in range(1, len(lines)): + line = lines[i] + if line.rstrip(): + while not line.startswith(prefix): + prefix = prefix[:-1] + i = len(prefix) + lines = [line[i:]+'\n' for line in lines] + pysource = ''.join(lines) + # + compile(pysource, "cffi_init", "exec") + # + self._embedding = pysource + + def def_extern(self, *args, **kwds): + raise ValueError("ffi.def_extern() is only available on API-mode FFI " + "objects") + + def list_types(self): + """Returns the user type names known to this FFI instance. + This returns a tuple containing three lists of names: + (typedef_names, names_of_structs, names_of_unions) + """ + typedefs = [] + structs = [] + unions = [] + for key in self._parser._declarations: + if key.startswith('typedef '): + typedefs.append(key[8:]) + elif key.startswith('struct '): + structs.append(key[7:]) + elif key.startswith('union '): + unions.append(key[6:]) + typedefs.sort() + structs.sort() + unions.sort() + return (typedefs, structs, unions) + + +def _load_backend_lib(backend, name, flags): + import os + if not isinstance(name, basestring): + if sys.platform != "win32" or name is not None: + return backend.load_library(name, flags) + name = "c" # Windows: load_library(None) fails, but this works + # on Python 2 (backward compatibility hack only) + first_error = None + if '.' in name or '/' in name or os.sep in name: + try: + return backend.load_library(name, flags) + except OSError as e: + first_error = e + import ctypes.util + path = ctypes.util.find_library(name) + if path is None: + if name == "c" and sys.platform == "win32" and sys.version_info >= (3,): + raise OSError("dlopen(None) cannot work on Windows for Python 3 " + "(see http://bugs.python.org/issue23606)") + msg = ("ctypes.util.find_library() did not manage " + "to locate a library called %r" % (name,)) + if first_error is not None: + msg = "%s. 
Additionally, %s" % (first_error, msg) + raise OSError(msg) + return backend.load_library(path, flags) + +def _make_ffi_library(ffi, libname, flags): + backend = ffi._backend + backendlib = _load_backend_lib(backend, libname, flags) + # + def accessor_function(name): + key = 'function ' + name + tp, _ = ffi._parser._declarations[key] + BType = ffi._get_cached_btype(tp) + value = backendlib.load_function(BType, name) + library.__dict__[name] = value + # + def accessor_variable(name): + key = 'variable ' + name + tp, _ = ffi._parser._declarations[key] + BType = ffi._get_cached_btype(tp) + read_variable = backendlib.read_variable + write_variable = backendlib.write_variable + setattr(FFILibrary, name, property( + lambda self: read_variable(BType, name), + lambda self, value: write_variable(BType, name, value))) + # + def addressof_var(name): + try: + return addr_variables[name] + except KeyError: + with ffi._lock: + if name not in addr_variables: + key = 'variable ' + name + tp, _ = ffi._parser._declarations[key] + BType = ffi._get_cached_btype(tp) + if BType.kind != 'array': + BType = model.pointer_cache(ffi, BType) + p = backendlib.load_function(BType, name) + addr_variables[name] = p + return addr_variables[name] + # + def accessor_constant(name): + raise NotImplementedError("non-integer constant '%s' cannot be " + "accessed from a dlopen() library" % (name,)) + # + def accessor_int_constant(name): + library.__dict__[name] = ffi._parser._int_constants[name] + # + accessors = {} + accessors_version = [False] + addr_variables = {} + # + def update_accessors(): + if accessors_version[0] is ffi._cdef_version: + return + # + for key, (tp, _) in ffi._parser._declarations.items(): + if not isinstance(tp, model.EnumType): + tag, name = key.split(' ', 1) + if tag == 'function': + accessors[name] = accessor_function + elif tag == 'variable': + accessors[name] = accessor_variable + elif tag == 'constant': + accessors[name] = accessor_constant + else: + for i, enumname in enumerate(tp.enumerators): + def accessor_enum(name, tp=tp, i=i): + tp.check_not_partial() + library.__dict__[name] = tp.enumvalues[i] + accessors[enumname] = accessor_enum + for name in ffi._parser._int_constants: + accessors.setdefault(name, accessor_int_constant) + accessors_version[0] = ffi._cdef_version + # + def make_accessor(name): + with ffi._lock: + if name in library.__dict__ or name in FFILibrary.__dict__: + return # added by another thread while waiting for the lock + if name not in accessors: + update_accessors() + if name not in accessors: + raise AttributeError(name) + accessors[name](name) + # + class FFILibrary(object): + def __getattr__(self, name): + make_accessor(name) + return getattr(self, name) + def __setattr__(self, name, value): + try: + property = getattr(self.__class__, name) + except AttributeError: + make_accessor(name) + setattr(self, name, value) + else: + property.__set__(self, value) + def __dir__(self): + with ffi._lock: + update_accessors() + return accessors.keys() + def __addressof__(self, name): + if name in library.__dict__: + return library.__dict__[name] + if name in FFILibrary.__dict__: + return addressof_var(name) + make_accessor(name) + if name in library.__dict__: + return library.__dict__[name] + if name in FFILibrary.__dict__: + return addressof_var(name) + raise AttributeError("cffi library has no function or " + "global variable named '%s'" % (name,)) + def __cffi_close__(self): + backendlib.close_lib() + self.__dict__.clear() + # + if isinstance(libname, basestring): + try: + if not 
isinstance(libname, str): # unicode, on Python 2 + libname = libname.encode('utf-8') + FFILibrary.__name__ = 'FFILibrary_%s' % libname + except UnicodeError: + pass + library = FFILibrary() + return library, library.__dict__ + +def _builtin_function_type(func): + # a hack to make at least ffi.typeof(builtin_function) work, + # if the builtin function was obtained by 'vengine_cpy'. + import sys + try: + module = sys.modules[func.__module__] + ffi = module._cffi_original_ffi + types_of_builtin_funcs = module._cffi_types_of_builtin_funcs + tp = types_of_builtin_funcs[func] + except (KeyError, AttributeError, TypeError): + return None + else: + with ffi._lock: + return ffi._get_cached_btype(tp) diff --git a/env/lib/python3.8/site-packages/cffi/backend_ctypes.py b/env/lib/python3.8/site-packages/cffi/backend_ctypes.py new file mode 100644 index 00000000..e7956a79 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/backend_ctypes.py @@ -0,0 +1,1121 @@ +import ctypes, ctypes.util, operator, sys +from . import model + +if sys.version_info < (3,): + bytechr = chr +else: + unicode = str + long = int + xrange = range + bytechr = lambda num: bytes([num]) + +class CTypesType(type): + pass + +class CTypesData(object): + __metaclass__ = CTypesType + __slots__ = ['__weakref__'] + __name__ = '' + + def __init__(self, *args): + raise TypeError("cannot instantiate %r" % (self.__class__,)) + + @classmethod + def _newp(cls, init): + raise TypeError("expected a pointer or array ctype, got '%s'" + % (cls._get_c_name(),)) + + @staticmethod + def _to_ctypes(value): + raise TypeError + + @classmethod + def _arg_to_ctypes(cls, *value): + try: + ctype = cls._ctype + except AttributeError: + raise TypeError("cannot create an instance of %r" % (cls,)) + if value: + res = cls._to_ctypes(*value) + if not isinstance(res, ctype): + res = cls._ctype(res) + else: + res = cls._ctype() + return res + + @classmethod + def _create_ctype_obj(cls, init): + if init is None: + return cls._arg_to_ctypes() + else: + return cls._arg_to_ctypes(init) + + @staticmethod + def _from_ctypes(ctypes_value): + raise TypeError + + @classmethod + def _get_c_name(cls, replace_with=''): + return cls._reftypename.replace(' &', replace_with) + + @classmethod + def _fix_class(cls): + cls.__name__ = 'CData<%s>' % (cls._get_c_name(),) + cls.__qualname__ = 'CData<%s>' % (cls._get_c_name(),) + cls.__module__ = 'ffi' + + def _get_own_repr(self): + raise NotImplementedError + + def _addr_repr(self, address): + if address == 0: + return 'NULL' + else: + if address < 0: + address += 1 << (8*ctypes.sizeof(ctypes.c_void_p)) + return '0x%x' % address + + def __repr__(self, c_name=None): + own = self._get_own_repr() + return '' % (c_name or self._get_c_name(), own) + + def _convert_to_address(self, BClass): + if BClass is None: + raise TypeError("cannot convert %r to an address" % ( + self._get_c_name(),)) + else: + raise TypeError("cannot convert %r to %r" % ( + self._get_c_name(), BClass._get_c_name())) + + @classmethod + def _get_size(cls): + return ctypes.sizeof(cls._ctype) + + def _get_size_of_instance(self): + return ctypes.sizeof(self._ctype) + + @classmethod + def _cast_from(cls, source): + raise TypeError("cannot cast to %r" % (cls._get_c_name(),)) + + def _cast_to_integer(self): + return self._convert_to_address(None) + + @classmethod + def _alignment(cls): + return ctypes.alignment(cls._ctype) + + def __iter__(self): + raise TypeError("cdata %r does not support iteration" % ( + self._get_c_name()),) + + def _make_cmp(name): + cmpfunc = 
getattr(operator, name) + def cmp(self, other): + v_is_ptr = not isinstance(self, CTypesGenericPrimitive) + w_is_ptr = (isinstance(other, CTypesData) and + not isinstance(other, CTypesGenericPrimitive)) + if v_is_ptr and w_is_ptr: + return cmpfunc(self._convert_to_address(None), + other._convert_to_address(None)) + elif v_is_ptr or w_is_ptr: + return NotImplemented + else: + if isinstance(self, CTypesGenericPrimitive): + self = self._value + if isinstance(other, CTypesGenericPrimitive): + other = other._value + return cmpfunc(self, other) + cmp.func_name = name + return cmp + + __eq__ = _make_cmp('__eq__') + __ne__ = _make_cmp('__ne__') + __lt__ = _make_cmp('__lt__') + __le__ = _make_cmp('__le__') + __gt__ = _make_cmp('__gt__') + __ge__ = _make_cmp('__ge__') + + def __hash__(self): + return hash(self._convert_to_address(None)) + + def _to_string(self, maxlen): + raise TypeError("string(): %r" % (self,)) + + +class CTypesGenericPrimitive(CTypesData): + __slots__ = [] + + def __hash__(self): + return hash(self._value) + + def _get_own_repr(self): + return repr(self._from_ctypes(self._value)) + + +class CTypesGenericArray(CTypesData): + __slots__ = [] + + @classmethod + def _newp(cls, init): + return cls(init) + + def __iter__(self): + for i in xrange(len(self)): + yield self[i] + + def _get_own_repr(self): + return self._addr_repr(ctypes.addressof(self._blob)) + + +class CTypesGenericPtr(CTypesData): + __slots__ = ['_address', '_as_ctype_ptr'] + _automatic_casts = False + kind = "pointer" + + @classmethod + def _newp(cls, init): + return cls(init) + + @classmethod + def _cast_from(cls, source): + if source is None: + address = 0 + elif isinstance(source, CTypesData): + address = source._cast_to_integer() + elif isinstance(source, (int, long)): + address = source + else: + raise TypeError("bad type for cast to %r: %r" % + (cls, type(source).__name__)) + return cls._new_pointer_at(address) + + @classmethod + def _new_pointer_at(cls, address): + self = cls.__new__(cls) + self._address = address + self._as_ctype_ptr = ctypes.cast(address, cls._ctype) + return self + + def _get_own_repr(self): + try: + return self._addr_repr(self._address) + except AttributeError: + return '???' 
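+
+    # Sketch of what the classmethods above enable (ctypes backend):
+    #     p = ffi.cast("int *", 0)    # via _cast_from() -> _new_pointer_at()
+    #     bool(p)                     # False: a NULL pointer
+    # The integer 0 here is just the address; no memory is owned.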
+ + def _cast_to_integer(self): + return self._address + + def __nonzero__(self): + return bool(self._address) + __bool__ = __nonzero__ + + @classmethod + def _to_ctypes(cls, value): + if not isinstance(value, CTypesData): + raise TypeError("unexpected %s object" % type(value).__name__) + address = value._convert_to_address(cls) + return ctypes.cast(address, cls._ctype) + + @classmethod + def _from_ctypes(cls, ctypes_ptr): + address = ctypes.cast(ctypes_ptr, ctypes.c_void_p).value or 0 + return cls._new_pointer_at(address) + + @classmethod + def _initialize(cls, ctypes_ptr, value): + if value: + ctypes_ptr.contents = cls._to_ctypes(value).contents + + def _convert_to_address(self, BClass): + if (BClass in (self.__class__, None) or BClass._automatic_casts + or self._automatic_casts): + return self._address + else: + return CTypesData._convert_to_address(self, BClass) + + +class CTypesBaseStructOrUnion(CTypesData): + __slots__ = ['_blob'] + + @classmethod + def _create_ctype_obj(cls, init): + # may be overridden + raise TypeError("cannot instantiate opaque type %s" % (cls,)) + + def _get_own_repr(self): + return self._addr_repr(ctypes.addressof(self._blob)) + + @classmethod + def _offsetof(cls, fieldname): + return getattr(cls._ctype, fieldname).offset + + def _convert_to_address(self, BClass): + if getattr(BClass, '_BItem', None) is self.__class__: + return ctypes.addressof(self._blob) + else: + return CTypesData._convert_to_address(self, BClass) + + @classmethod + def _from_ctypes(cls, ctypes_struct_or_union): + self = cls.__new__(cls) + self._blob = ctypes_struct_or_union + return self + + @classmethod + def _to_ctypes(cls, value): + return value._blob + + def __repr__(self, c_name=None): + return CTypesData.__repr__(self, c_name or self._get_c_name(' &')) + + +class CTypesBackend(object): + + PRIMITIVE_TYPES = { + 'char': ctypes.c_char, + 'short': ctypes.c_short, + 'int': ctypes.c_int, + 'long': ctypes.c_long, + 'long long': ctypes.c_longlong, + 'signed char': ctypes.c_byte, + 'unsigned char': ctypes.c_ubyte, + 'unsigned short': ctypes.c_ushort, + 'unsigned int': ctypes.c_uint, + 'unsigned long': ctypes.c_ulong, + 'unsigned long long': ctypes.c_ulonglong, + 'float': ctypes.c_float, + 'double': ctypes.c_double, + '_Bool': ctypes.c_bool, + } + + for _name in ['unsigned long long', 'unsigned long', + 'unsigned int', 'unsigned short', 'unsigned char']: + _size = ctypes.sizeof(PRIMITIVE_TYPES[_name]) + PRIMITIVE_TYPES['uint%d_t' % (8*_size)] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_void_p): + PRIMITIVE_TYPES['uintptr_t'] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_size_t): + PRIMITIVE_TYPES['size_t'] = PRIMITIVE_TYPES[_name] + + for _name in ['long long', 'long', 'int', 'short', 'signed char']: + _size = ctypes.sizeof(PRIMITIVE_TYPES[_name]) + PRIMITIVE_TYPES['int%d_t' % (8*_size)] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_void_p): + PRIMITIVE_TYPES['intptr_t'] = PRIMITIVE_TYPES[_name] + PRIMITIVE_TYPES['ptrdiff_t'] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_size_t): + PRIMITIVE_TYPES['ssize_t'] = PRIMITIVE_TYPES[_name] + + + def __init__(self): + self.RTLD_LAZY = 0 # not supported anyway by ctypes + self.RTLD_NOW = 0 + self.RTLD_GLOBAL = ctypes.RTLD_GLOBAL + self.RTLD_LOCAL = ctypes.RTLD_LOCAL + + def set_ffi(self, ffi): + self.ffi = ffi + + def _get_types(self): + return CTypesData, CTypesType + + def load_library(self, path, flags=0): + cdll = ctypes.CDLL(path, flags) + return CTypesLibrary(self, cdll) + + def 
new_void_type(self): + class CTypesVoid(CTypesData): + __slots__ = [] + _reftypename = 'void &' + @staticmethod + def _from_ctypes(novalue): + return None + @staticmethod + def _to_ctypes(novalue): + if novalue is not None: + raise TypeError("None expected, got %s object" % + (type(novalue).__name__,)) + return None + CTypesVoid._fix_class() + return CTypesVoid + + def new_primitive_type(self, name): + if name == 'wchar_t': + raise NotImplementedError(name) + ctype = self.PRIMITIVE_TYPES[name] + if name == 'char': + kind = 'char' + elif name in ('float', 'double'): + kind = 'float' + else: + if name in ('signed char', 'unsigned char'): + kind = 'byte' + elif name == '_Bool': + kind = 'bool' + else: + kind = 'int' + is_signed = (ctype(-1).value == -1) + # + def _cast_source_to_int(source): + if isinstance(source, (int, long, float)): + source = int(source) + elif isinstance(source, CTypesData): + source = source._cast_to_integer() + elif isinstance(source, bytes): + source = ord(source) + elif source is None: + source = 0 + else: + raise TypeError("bad type for cast to %r: %r" % + (CTypesPrimitive, type(source).__name__)) + return source + # + kind1 = kind + class CTypesPrimitive(CTypesGenericPrimitive): + __slots__ = ['_value'] + _ctype = ctype + _reftypename = '%s &' % name + kind = kind1 + + def __init__(self, value): + self._value = value + + @staticmethod + def _create_ctype_obj(init): + if init is None: + return ctype() + return ctype(CTypesPrimitive._to_ctypes(init)) + + if kind == 'int' or kind == 'byte': + @classmethod + def _cast_from(cls, source): + source = _cast_source_to_int(source) + source = ctype(source).value # cast within range + return cls(source) + def __int__(self): + return self._value + + if kind == 'bool': + @classmethod + def _cast_from(cls, source): + if not isinstance(source, (int, long, float)): + source = _cast_source_to_int(source) + return cls(bool(source)) + def __int__(self): + return int(self._value) + + if kind == 'char': + @classmethod + def _cast_from(cls, source): + source = _cast_source_to_int(source) + source = bytechr(source & 0xFF) + return cls(source) + def __int__(self): + return ord(self._value) + + if kind == 'float': + @classmethod + def _cast_from(cls, source): + if isinstance(source, float): + pass + elif isinstance(source, CTypesGenericPrimitive): + if hasattr(source, '__float__'): + source = float(source) + else: + source = int(source) + else: + source = _cast_source_to_int(source) + source = ctype(source).value # fix precision + return cls(source) + def __int__(self): + return int(self._value) + def __float__(self): + return self._value + + _cast_to_integer = __int__ + + if kind == 'int' or kind == 'byte' or kind == 'bool': + @staticmethod + def _to_ctypes(x): + if not isinstance(x, (int, long)): + if isinstance(x, CTypesData): + x = int(x) + else: + raise TypeError("integer expected, got %s" % + type(x).__name__) + if ctype(x).value != x: + if not is_signed and x < 0: + raise OverflowError("%s: negative integer" % name) + else: + raise OverflowError("%s: integer out of bounds" + % name) + return x + + if kind == 'char': + @staticmethod + def _to_ctypes(x): + if isinstance(x, bytes) and len(x) == 1: + return x + if isinstance(x, CTypesPrimitive): # > + return x._value + raise TypeError("character expected, got %s" % + type(x).__name__) + def __nonzero__(self): + return ord(self._value) != 0 + else: + def __nonzero__(self): + return self._value != 0 + __bool__ = __nonzero__ + + if kind == 'float': + @staticmethod + def _to_ctypes(x): + if 
not isinstance(x, (int, long, float, CTypesData)): + raise TypeError("float expected, got %s" % + type(x).__name__) + return ctype(x).value + + @staticmethod + def _from_ctypes(value): + return getattr(value, 'value', value) + + @staticmethod + def _initialize(blob, init): + blob.value = CTypesPrimitive._to_ctypes(init) + + if kind == 'char': + def _to_string(self, maxlen): + return self._value + if kind == 'byte': + def _to_string(self, maxlen): + return chr(self._value & 0xff) + # + CTypesPrimitive._fix_class() + return CTypesPrimitive + + def new_pointer_type(self, BItem): + getbtype = self.ffi._get_cached_btype + if BItem is getbtype(model.PrimitiveType('char')): + kind = 'charp' + elif BItem in (getbtype(model.PrimitiveType('signed char')), + getbtype(model.PrimitiveType('unsigned char'))): + kind = 'bytep' + elif BItem is getbtype(model.void_type): + kind = 'voidp' + else: + kind = 'generic' + # + class CTypesPtr(CTypesGenericPtr): + __slots__ = ['_own'] + if kind == 'charp': + __slots__ += ['__as_strbuf'] + _BItem = BItem + if hasattr(BItem, '_ctype'): + _ctype = ctypes.POINTER(BItem._ctype) + _bitem_size = ctypes.sizeof(BItem._ctype) + else: + _ctype = ctypes.c_void_p + if issubclass(BItem, CTypesGenericArray): + _reftypename = BItem._get_c_name('(* &)') + else: + _reftypename = BItem._get_c_name(' * &') + + def __init__(self, init): + ctypeobj = BItem._create_ctype_obj(init) + if kind == 'charp': + self.__as_strbuf = ctypes.create_string_buffer( + ctypeobj.value + b'\x00') + self._as_ctype_ptr = ctypes.cast( + self.__as_strbuf, self._ctype) + else: + self._as_ctype_ptr = ctypes.pointer(ctypeobj) + self._address = ctypes.cast(self._as_ctype_ptr, + ctypes.c_void_p).value + self._own = True + + def __add__(self, other): + if isinstance(other, (int, long)): + return self._new_pointer_at(self._address + + other * self._bitem_size) + else: + return NotImplemented + + def __sub__(self, other): + if isinstance(other, (int, long)): + return self._new_pointer_at(self._address - + other * self._bitem_size) + elif type(self) is type(other): + return (self._address - other._address) // self._bitem_size + else: + return NotImplemented + + def __getitem__(self, index): + if getattr(self, '_own', False) and index != 0: + raise IndexError + return BItem._from_ctypes(self._as_ctype_ptr[index]) + + def __setitem__(self, index, value): + self._as_ctype_ptr[index] = BItem._to_ctypes(value) + + if kind == 'charp' or kind == 'voidp': + @classmethod + def _arg_to_ctypes(cls, *value): + if value and isinstance(value[0], bytes): + return ctypes.c_char_p(value[0]) + else: + return super(CTypesPtr, cls)._arg_to_ctypes(*value) + + if kind == 'charp' or kind == 'bytep': + def _to_string(self, maxlen): + if maxlen < 0: + maxlen = sys.maxsize + p = ctypes.cast(self._as_ctype_ptr, + ctypes.POINTER(ctypes.c_char)) + n = 0 + while n < maxlen and p[n] != b'\x00': + n += 1 + return b''.join([p[i] for i in range(n)]) + + def _get_own_repr(self): + if getattr(self, '_own', False): + return 'owning %d bytes' % ( + ctypes.sizeof(self._as_ctype_ptr.contents),) + return super(CTypesPtr, self)._get_own_repr() + # + if (BItem is self.ffi._get_cached_btype(model.void_type) or + BItem is self.ffi._get_cached_btype(model.PrimitiveType('char'))): + CTypesPtr._automatic_casts = True + # + CTypesPtr._fix_class() + return CTypesPtr + + def new_array_type(self, CTypesPtr, length): + if length is None: + brackets = ' &[]' + else: + brackets = ' &[%d]' % length + BItem = CTypesPtr._BItem + getbtype = self.ffi._get_cached_btype + if 
BItem is getbtype(model.PrimitiveType('char')): + kind = 'char' + elif BItem in (getbtype(model.PrimitiveType('signed char')), + getbtype(model.PrimitiveType('unsigned char'))): + kind = 'byte' + else: + kind = 'generic' + # + class CTypesArray(CTypesGenericArray): + __slots__ = ['_blob', '_own'] + if length is not None: + _ctype = BItem._ctype * length + else: + __slots__.append('_ctype') + _reftypename = BItem._get_c_name(brackets) + _declared_length = length + _CTPtr = CTypesPtr + + def __init__(self, init): + if length is None: + if isinstance(init, (int, long)): + len1 = init + init = None + elif kind == 'char' and isinstance(init, bytes): + len1 = len(init) + 1 # extra null + else: + init = tuple(init) + len1 = len(init) + self._ctype = BItem._ctype * len1 + self._blob = self._ctype() + self._own = True + if init is not None: + self._initialize(self._blob, init) + + @staticmethod + def _initialize(blob, init): + if isinstance(init, bytes): + init = [init[i:i+1] for i in range(len(init))] + else: + if isinstance(init, CTypesGenericArray): + if (len(init) != len(blob) or + not isinstance(init, CTypesArray)): + raise TypeError("length/type mismatch: %s" % (init,)) + init = tuple(init) + if len(init) > len(blob): + raise IndexError("too many initializers") + addr = ctypes.cast(blob, ctypes.c_void_p).value + PTR = ctypes.POINTER(BItem._ctype) + itemsize = ctypes.sizeof(BItem._ctype) + for i, value in enumerate(init): + p = ctypes.cast(addr + i * itemsize, PTR) + BItem._initialize(p.contents, value) + + def __len__(self): + return len(self._blob) + + def __getitem__(self, index): + if not (0 <= index < len(self._blob)): + raise IndexError + return BItem._from_ctypes(self._blob[index]) + + def __setitem__(self, index, value): + if not (0 <= index < len(self._blob)): + raise IndexError + self._blob[index] = BItem._to_ctypes(value) + + if kind == 'char' or kind == 'byte': + def _to_string(self, maxlen): + if maxlen < 0: + maxlen = len(self._blob) + p = ctypes.cast(self._blob, + ctypes.POINTER(ctypes.c_char)) + n = 0 + while n < maxlen and p[n] != b'\x00': + n += 1 + return b''.join([p[i] for i in range(n)]) + + def _get_own_repr(self): + if getattr(self, '_own', False): + return 'owning %d bytes' % (ctypes.sizeof(self._blob),) + return super(CTypesArray, self)._get_own_repr() + + def _convert_to_address(self, BClass): + if BClass in (CTypesPtr, None) or BClass._automatic_casts: + return ctypes.addressof(self._blob) + else: + return CTypesData._convert_to_address(self, BClass) + + @staticmethod + def _from_ctypes(ctypes_array): + self = CTypesArray.__new__(CTypesArray) + self._blob = ctypes_array + return self + + @staticmethod + def _arg_to_ctypes(value): + return CTypesPtr._arg_to_ctypes(value) + + def __add__(self, other): + if isinstance(other, (int, long)): + return CTypesPtr._new_pointer_at( + ctypes.addressof(self._blob) + + other * ctypes.sizeof(BItem._ctype)) + else: + return NotImplemented + + @classmethod + def _cast_from(cls, source): + raise NotImplementedError("casting to %r" % ( + cls._get_c_name(),)) + # + CTypesArray._fix_class() + return CTypesArray + + def _new_struct_or_union(self, kind, name, base_ctypes_class): + # + class struct_or_union(base_ctypes_class): + pass + struct_or_union.__name__ = '%s_%s' % (kind, name) + kind1 = kind + # + class CTypesStructOrUnion(CTypesBaseStructOrUnion): + __slots__ = ['_blob'] + _ctype = struct_or_union + _reftypename = '%s &' % (name,) + _kind = kind = kind1 + # + CTypesStructOrUnion._fix_class() + return CTypesStructOrUnion + + def 
new_struct_type(self, name): + return self._new_struct_or_union('struct', name, ctypes.Structure) + + def new_union_type(self, name): + return self._new_struct_or_union('union', name, ctypes.Union) + + def complete_struct_or_union(self, CTypesStructOrUnion, fields, tp, + totalsize=-1, totalalignment=-1, sflags=0, + pack=0): + if totalsize >= 0 or totalalignment >= 0: + raise NotImplementedError("the ctypes backend of CFFI does not support " + "structures completed by verify(); please " + "compile and install the _cffi_backend module.") + struct_or_union = CTypesStructOrUnion._ctype + fnames = [fname for (fname, BField, bitsize) in fields] + btypes = [BField for (fname, BField, bitsize) in fields] + bitfields = [bitsize for (fname, BField, bitsize) in fields] + # + bfield_types = {} + cfields = [] + for (fname, BField, bitsize) in fields: + if bitsize < 0: + cfields.append((fname, BField._ctype)) + bfield_types[fname] = BField + else: + cfields.append((fname, BField._ctype, bitsize)) + bfield_types[fname] = Ellipsis + if sflags & 8: + struct_or_union._pack_ = 1 + elif pack: + struct_or_union._pack_ = pack + struct_or_union._fields_ = cfields + CTypesStructOrUnion._bfield_types = bfield_types + # + @staticmethod + def _create_ctype_obj(init): + result = struct_or_union() + if init is not None: + initialize(result, init) + return result + CTypesStructOrUnion._create_ctype_obj = _create_ctype_obj + # + def initialize(blob, init): + if is_union: + if len(init) > 1: + raise ValueError("union initializer: %d items given, but " + "only one supported (use a dict if needed)" + % (len(init),)) + if not isinstance(init, dict): + if isinstance(init, (bytes, unicode)): + raise TypeError("union initializer: got a str") + init = tuple(init) + if len(init) > len(fnames): + raise ValueError("too many values for %s initializer" % + CTypesStructOrUnion._get_c_name()) + init = dict(zip(fnames, init)) + addr = ctypes.addressof(blob) + for fname, value in init.items(): + BField, bitsize = name2fieldtype[fname] + assert bitsize < 0, \ + "not implemented: initializer with bit fields" + offset = CTypesStructOrUnion._offsetof(fname) + PTR = ctypes.POINTER(BField._ctype) + p = ctypes.cast(addr + offset, PTR) + BField._initialize(p.contents, value) + is_union = CTypesStructOrUnion._kind == 'union' + name2fieldtype = dict(zip(fnames, zip(btypes, bitfields))) + # + for fname, BField, bitsize in fields: + if fname == '': + raise NotImplementedError("nested anonymous structs/unions") + if hasattr(CTypesStructOrUnion, fname): + raise ValueError("the field name %r conflicts in " + "the ctypes backend" % fname) + if bitsize < 0: + def getter(self, fname=fname, BField=BField, + offset=CTypesStructOrUnion._offsetof(fname), + PTR=ctypes.POINTER(BField._ctype)): + addr = ctypes.addressof(self._blob) + p = ctypes.cast(addr + offset, PTR) + return BField._from_ctypes(p.contents) + def setter(self, value, fname=fname, BField=BField): + setattr(self._blob, fname, BField._to_ctypes(value)) + # + if issubclass(BField, CTypesGenericArray): + setter = None + if BField._declared_length == 0: + def getter(self, fname=fname, BFieldPtr=BField._CTPtr, + offset=CTypesStructOrUnion._offsetof(fname), + PTR=ctypes.POINTER(BField._ctype)): + addr = ctypes.addressof(self._blob) + p = ctypes.cast(addr + offset, PTR) + return BFieldPtr._from_ctypes(p) + # + else: + def getter(self, fname=fname, BField=BField): + return BField._from_ctypes(getattr(self._blob, fname)) + def setter(self, value, fname=fname, BField=BField): + # xxx obscure workaround + 
value = BField._to_ctypes(value) + oldvalue = getattr(self._blob, fname) + setattr(self._blob, fname, value) + if value != getattr(self._blob, fname): + setattr(self._blob, fname, oldvalue) + raise OverflowError("value too large for bitfield") + setattr(CTypesStructOrUnion, fname, property(getter, setter)) + # + CTypesPtr = self.ffi._get_cached_btype(model.PointerType(tp)) + for fname in fnames: + if hasattr(CTypesPtr, fname): + raise ValueError("the field name %r conflicts in " + "the ctypes backend" % fname) + def getter(self, fname=fname): + return getattr(self[0], fname) + def setter(self, value, fname=fname): + setattr(self[0], fname, value) + setattr(CTypesPtr, fname, property(getter, setter)) + + def new_function_type(self, BArgs, BResult, has_varargs): + nameargs = [BArg._get_c_name() for BArg in BArgs] + if has_varargs: + nameargs.append('...') + nameargs = ', '.join(nameargs) + # + class CTypesFunctionPtr(CTypesGenericPtr): + __slots__ = ['_own_callback', '_name'] + _ctype = ctypes.CFUNCTYPE(getattr(BResult, '_ctype', None), + *[BArg._ctype for BArg in BArgs], + use_errno=True) + _reftypename = BResult._get_c_name('(* &)(%s)' % (nameargs,)) + + def __init__(self, init, error=None): + # create a callback to the Python callable init() + import traceback + assert not has_varargs, "varargs not supported for callbacks" + if getattr(BResult, '_ctype', None) is not None: + error = BResult._from_ctypes( + BResult._create_ctype_obj(error)) + else: + error = None + def callback(*args): + args2 = [] + for arg, BArg in zip(args, BArgs): + args2.append(BArg._from_ctypes(arg)) + try: + res2 = init(*args2) + res2 = BResult._to_ctypes(res2) + except: + traceback.print_exc() + res2 = error + if issubclass(BResult, CTypesGenericPtr): + if res2: + res2 = ctypes.cast(res2, ctypes.c_void_p).value + # .value: http://bugs.python.org/issue1574593 + else: + res2 = None + #print repr(res2) + return res2 + if issubclass(BResult, CTypesGenericPtr): + # The only pointers callbacks can return are void*s: + # http://bugs.python.org/issue5710 + callback_ctype = ctypes.CFUNCTYPE( + ctypes.c_void_p, + *[BArg._ctype for BArg in BArgs], + use_errno=True) + else: + callback_ctype = CTypesFunctionPtr._ctype + self._as_ctype_ptr = callback_ctype(callback) + self._address = ctypes.cast(self._as_ctype_ptr, + ctypes.c_void_p).value + self._own_callback = init + + @staticmethod + def _initialize(ctypes_ptr, value): + if value: + raise NotImplementedError("ctypes backend: not supported: " + "initializers for function pointers") + + def __repr__(self): + c_name = getattr(self, '_name', None) + if c_name: + i = self._reftypename.index('(* &)') + if self._reftypename[i-1] not in ' )*': + c_name = ' ' + c_name + c_name = self._reftypename.replace('(* &)', c_name) + return CTypesData.__repr__(self, c_name) + + def _get_own_repr(self): + if getattr(self, '_own_callback', None) is not None: + return 'calling %r' % (self._own_callback,) + return super(CTypesFunctionPtr, self)._get_own_repr() + + def __call__(self, *args): + if has_varargs: + assert len(args) >= len(BArgs) + extraargs = args[len(BArgs):] + args = args[:len(BArgs)] + else: + assert len(args) == len(BArgs) + ctypes_args = [] + for arg, BArg in zip(args, BArgs): + ctypes_args.append(BArg._arg_to_ctypes(arg)) + if has_varargs: + for i, arg in enumerate(extraargs): + if arg is None: + ctypes_args.append(ctypes.c_void_p(0)) # NULL + continue + if not isinstance(arg, CTypesData): + raise TypeError( + "argument %d passed in the variadic part " + "needs to be a cdata object 
(got %s)" % + (1 + len(BArgs) + i, type(arg).__name__)) + ctypes_args.append(arg._arg_to_ctypes(arg)) + result = self._as_ctype_ptr(*ctypes_args) + return BResult._from_ctypes(result) + # + CTypesFunctionPtr._fix_class() + return CTypesFunctionPtr + + def new_enum_type(self, name, enumerators, enumvalues, CTypesInt): + assert isinstance(name, str) + reverse_mapping = dict(zip(reversed(enumvalues), + reversed(enumerators))) + # + class CTypesEnum(CTypesInt): + __slots__ = [] + _reftypename = '%s &' % name + + def _get_own_repr(self): + value = self._value + try: + return '%d: %s' % (value, reverse_mapping[value]) + except KeyError: + return str(value) + + def _to_string(self, maxlen): + value = self._value + try: + return reverse_mapping[value] + except KeyError: + return str(value) + # + CTypesEnum._fix_class() + return CTypesEnum + + def get_errno(self): + return ctypes.get_errno() + + def set_errno(self, value): + ctypes.set_errno(value) + + def string(self, b, maxlen=-1): + return b._to_string(maxlen) + + def buffer(self, bptr, size=-1): + raise NotImplementedError("buffer() with ctypes backend") + + def sizeof(self, cdata_or_BType): + if isinstance(cdata_or_BType, CTypesData): + return cdata_or_BType._get_size_of_instance() + else: + assert issubclass(cdata_or_BType, CTypesData) + return cdata_or_BType._get_size() + + def alignof(self, BType): + assert issubclass(BType, CTypesData) + return BType._alignment() + + def newp(self, BType, source): + if not issubclass(BType, CTypesData): + raise TypeError + return BType._newp(source) + + def cast(self, BType, source): + return BType._cast_from(source) + + def callback(self, BType, source, error, onerror): + assert onerror is None # XXX not implemented + return BType(source, error) + + _weakref_cache_ref = None + + def gcp(self, cdata, destructor, size=0): + if self._weakref_cache_ref is None: + import weakref + class MyRef(weakref.ref): + def __eq__(self, other): + myref = self() + return self is other or ( + myref is not None and myref is other()) + def __ne__(self, other): + return not (self == other) + def __hash__(self): + try: + return self._hash + except AttributeError: + self._hash = hash(self()) + return self._hash + self._weakref_cache_ref = {}, MyRef + weak_cache, MyRef = self._weakref_cache_ref + + if destructor is None: + try: + del weak_cache[MyRef(cdata)] + except KeyError: + raise TypeError("Can remove destructor only on a object " + "previously returned by ffi.gc()") + return None + + def remove(k): + cdata, destructor = weak_cache.pop(k, (None, None)) + if destructor is not None: + destructor(cdata) + + new_cdata = self.cast(self.typeof(cdata), cdata) + assert new_cdata is not cdata + weak_cache[MyRef(new_cdata, remove)] = (cdata, destructor) + return new_cdata + + typeof = type + + def getcname(self, BType, replace_with): + return BType._get_c_name(replace_with) + + def typeoffsetof(self, BType, fieldname, num=0): + if isinstance(fieldname, str): + if num == 0 and issubclass(BType, CTypesGenericPtr): + BType = BType._BItem + if not issubclass(BType, CTypesBaseStructOrUnion): + raise TypeError("expected a struct or union ctype") + BField = BType._bfield_types[fieldname] + if BField is Ellipsis: + raise TypeError("not supported for bitfields") + return (BField, BType._offsetof(fieldname)) + elif isinstance(fieldname, (int, long)): + if issubclass(BType, CTypesGenericArray): + BType = BType._CTPtr + if not issubclass(BType, CTypesGenericPtr): + raise TypeError("expected an array or ptr ctype") + BItem = BType._BItem + offset 
= BItem._get_size() * fieldname
+            if offset > sys.maxsize:
+                raise OverflowError
+            return (BItem, offset)
+        else:
+            raise TypeError(type(fieldname))
+
+    def rawaddressof(self, BTypePtr, cdata, offset=None):
+        if isinstance(cdata, CTypesBaseStructOrUnion):
+            ptr = ctypes.pointer(type(cdata)._to_ctypes(cdata))
+        elif isinstance(cdata, CTypesGenericPtr):
+            if offset is None or not issubclass(type(cdata)._BItem,
+                                                CTypesBaseStructOrUnion):
+                raise TypeError("unexpected cdata type")
+            ptr = type(cdata)._to_ctypes(cdata)
+        elif isinstance(cdata, CTypesGenericArray):
+            ptr = type(cdata)._to_ctypes(cdata)
+        else:
+            raise TypeError("expected a <cdata 'struct-or-union'>")
+        if offset:
+            ptr = ctypes.cast(
+                ctypes.c_void_p(
+                    ctypes.cast(ptr, ctypes.c_void_p).value + offset),
+                type(ptr))
+        return BTypePtr._from_ctypes(ptr)
+
+
+class CTypesLibrary(object):
+
+    def __init__(self, backend, cdll):
+        self.backend = backend
+        self.cdll = cdll
+
+    def load_function(self, BType, name):
+        c_func = getattr(self.cdll, name)
+        funcobj = BType._from_ctypes(c_func)
+        funcobj._name = name
+        return funcobj
+
+    def read_variable(self, BType, name):
+        try:
+            ctypes_obj = BType._ctype.in_dll(self.cdll, name)
+        except AttributeError as e:
+            raise NotImplementedError(e)
+        return BType._from_ctypes(ctypes_obj)
+
+    def write_variable(self, BType, name, value):
+        new_ctypes_obj = BType._to_ctypes(value)
+        ctypes_obj = BType._ctype.in_dll(self.cdll, name)
+        ctypes.memmove(ctypes.addressof(ctypes_obj),
+                       ctypes.addressof(new_ctypes_obj),
+                       ctypes.sizeof(BType._ctype))
diff --git a/env/lib/python3.8/site-packages/cffi/cffi_opcode.py b/env/lib/python3.8/site-packages/cffi/cffi_opcode.py
new file mode 100644
index 00000000..a0df98d1
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cffi/cffi_opcode.py
@@ -0,0 +1,187 @@
+from .error import VerificationError
+
+class CffiOp(object):
+    def __init__(self, op, arg):
+        self.op = op
+        self.arg = arg
+
+    def as_c_expr(self):
+        if self.op is None:
+            assert isinstance(self.arg, str)
+            return '(_cffi_opcode_t)(%s)' % (self.arg,)
+        classname = CLASS_NAME[self.op]
+        return '_CFFI_OP(_CFFI_OP_%s, %s)' % (classname, self.arg)
+
+    def as_python_bytes(self):
+        if self.op is None and self.arg.isdigit():
+            value = int(self.arg)     # non-negative: '-' not in self.arg
+            if value >= 2**31:
+                raise OverflowError("cannot emit %r: limited to 2**31-1"
+                                    % (self.arg,))
+            return format_four_bytes(value)
+        if isinstance(self.arg, str):
+            raise VerificationError("cannot emit to Python: %r" % (self.arg,))
+        return format_four_bytes((self.arg << 8) | self.op)
+
+    def __str__(self):
+        classname = CLASS_NAME.get(self.op, self.op)
+        return '(%s %s)' % (classname, self.arg)
+
+def format_four_bytes(num):
+    return '\\x%02X\\x%02X\\x%02X\\x%02X' % (
+        (num >> 24) & 0xFF,
+        (num >> 16) & 0xFF,
+        (num >>  8) & 0xFF,
+        (num      ) & 0xFF)
+
+OP_PRIMITIVE       = 1
+OP_POINTER         = 3
+OP_ARRAY           = 5
+OP_OPEN_ARRAY      = 7
+OP_STRUCT_UNION    = 9
+OP_ENUM            = 11
+OP_FUNCTION        = 13
+OP_FUNCTION_END    = 15
+OP_NOOP            = 17
+OP_BITFIELD        = 19
+OP_TYPENAME        = 21
+OP_CPYTHON_BLTN_V  = 23   # varargs
+OP_CPYTHON_BLTN_N  = 25   # noargs
+OP_CPYTHON_BLTN_O  = 27   # O  (i.e.
a single arg) +OP_CONSTANT = 29 +OP_CONSTANT_INT = 31 +OP_GLOBAL_VAR = 33 +OP_DLOPEN_FUNC = 35 +OP_DLOPEN_CONST = 37 +OP_GLOBAL_VAR_F = 39 +OP_EXTERN_PYTHON = 41 + +PRIM_VOID = 0 +PRIM_BOOL = 1 +PRIM_CHAR = 2 +PRIM_SCHAR = 3 +PRIM_UCHAR = 4 +PRIM_SHORT = 5 +PRIM_USHORT = 6 +PRIM_INT = 7 +PRIM_UINT = 8 +PRIM_LONG = 9 +PRIM_ULONG = 10 +PRIM_LONGLONG = 11 +PRIM_ULONGLONG = 12 +PRIM_FLOAT = 13 +PRIM_DOUBLE = 14 +PRIM_LONGDOUBLE = 15 + +PRIM_WCHAR = 16 +PRIM_INT8 = 17 +PRIM_UINT8 = 18 +PRIM_INT16 = 19 +PRIM_UINT16 = 20 +PRIM_INT32 = 21 +PRIM_UINT32 = 22 +PRIM_INT64 = 23 +PRIM_UINT64 = 24 +PRIM_INTPTR = 25 +PRIM_UINTPTR = 26 +PRIM_PTRDIFF = 27 +PRIM_SIZE = 28 +PRIM_SSIZE = 29 +PRIM_INT_LEAST8 = 30 +PRIM_UINT_LEAST8 = 31 +PRIM_INT_LEAST16 = 32 +PRIM_UINT_LEAST16 = 33 +PRIM_INT_LEAST32 = 34 +PRIM_UINT_LEAST32 = 35 +PRIM_INT_LEAST64 = 36 +PRIM_UINT_LEAST64 = 37 +PRIM_INT_FAST8 = 38 +PRIM_UINT_FAST8 = 39 +PRIM_INT_FAST16 = 40 +PRIM_UINT_FAST16 = 41 +PRIM_INT_FAST32 = 42 +PRIM_UINT_FAST32 = 43 +PRIM_INT_FAST64 = 44 +PRIM_UINT_FAST64 = 45 +PRIM_INTMAX = 46 +PRIM_UINTMAX = 47 +PRIM_FLOATCOMPLEX = 48 +PRIM_DOUBLECOMPLEX = 49 +PRIM_CHAR16 = 50 +PRIM_CHAR32 = 51 + +_NUM_PRIM = 52 +_UNKNOWN_PRIM = -1 +_UNKNOWN_FLOAT_PRIM = -2 +_UNKNOWN_LONG_DOUBLE = -3 + +_IO_FILE_STRUCT = -1 + +PRIMITIVE_TO_INDEX = { + 'char': PRIM_CHAR, + 'short': PRIM_SHORT, + 'int': PRIM_INT, + 'long': PRIM_LONG, + 'long long': PRIM_LONGLONG, + 'signed char': PRIM_SCHAR, + 'unsigned char': PRIM_UCHAR, + 'unsigned short': PRIM_USHORT, + 'unsigned int': PRIM_UINT, + 'unsigned long': PRIM_ULONG, + 'unsigned long long': PRIM_ULONGLONG, + 'float': PRIM_FLOAT, + 'double': PRIM_DOUBLE, + 'long double': PRIM_LONGDOUBLE, + 'float _Complex': PRIM_FLOATCOMPLEX, + 'double _Complex': PRIM_DOUBLECOMPLEX, + '_Bool': PRIM_BOOL, + 'wchar_t': PRIM_WCHAR, + 'char16_t': PRIM_CHAR16, + 'char32_t': PRIM_CHAR32, + 'int8_t': PRIM_INT8, + 'uint8_t': PRIM_UINT8, + 'int16_t': PRIM_INT16, + 'uint16_t': PRIM_UINT16, + 'int32_t': PRIM_INT32, + 'uint32_t': PRIM_UINT32, + 'int64_t': PRIM_INT64, + 'uint64_t': PRIM_UINT64, + 'intptr_t': PRIM_INTPTR, + 'uintptr_t': PRIM_UINTPTR, + 'ptrdiff_t': PRIM_PTRDIFF, + 'size_t': PRIM_SIZE, + 'ssize_t': PRIM_SSIZE, + 'int_least8_t': PRIM_INT_LEAST8, + 'uint_least8_t': PRIM_UINT_LEAST8, + 'int_least16_t': PRIM_INT_LEAST16, + 'uint_least16_t': PRIM_UINT_LEAST16, + 'int_least32_t': PRIM_INT_LEAST32, + 'uint_least32_t': PRIM_UINT_LEAST32, + 'int_least64_t': PRIM_INT_LEAST64, + 'uint_least64_t': PRIM_UINT_LEAST64, + 'int_fast8_t': PRIM_INT_FAST8, + 'uint_fast8_t': PRIM_UINT_FAST8, + 'int_fast16_t': PRIM_INT_FAST16, + 'uint_fast16_t': PRIM_UINT_FAST16, + 'int_fast32_t': PRIM_INT_FAST32, + 'uint_fast32_t': PRIM_UINT_FAST32, + 'int_fast64_t': PRIM_INT_FAST64, + 'uint_fast64_t': PRIM_UINT_FAST64, + 'intmax_t': PRIM_INTMAX, + 'uintmax_t': PRIM_UINTMAX, + } + +F_UNION = 0x01 +F_CHECK_FIELDS = 0x02 +F_PACKED = 0x04 +F_EXTERNAL = 0x08 +F_OPAQUE = 0x10 + +G_FLAGS = dict([('_CFFI_' + _key, globals()[_key]) + for _key in ['F_UNION', 'F_CHECK_FIELDS', 'F_PACKED', + 'F_EXTERNAL', 'F_OPAQUE']]) + +CLASS_NAME = {} +for _name, _value in list(globals().items()): + if _name.startswith('OP_') and isinstance(_value, int): + CLASS_NAME[_value] = _name[3:] diff --git a/env/lib/python3.8/site-packages/cffi/commontypes.py b/env/lib/python3.8/site-packages/cffi/commontypes.py new file mode 100644 index 00000000..8ec97c75 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/commontypes.py @@ -0,0 +1,80 @@ +import sys +from . 
import model +from .error import FFIError + + +COMMON_TYPES = {} + +try: + # fetch "bool" and all simple Windows types + from _cffi_backend import _get_common_types + _get_common_types(COMMON_TYPES) +except ImportError: + pass + +COMMON_TYPES['FILE'] = model.unknown_type('FILE', '_IO_FILE') +COMMON_TYPES['bool'] = '_Bool' # in case we got ImportError above + +for _type in model.PrimitiveType.ALL_PRIMITIVE_TYPES: + if _type.endswith('_t'): + COMMON_TYPES[_type] = _type +del _type + +_CACHE = {} + +def resolve_common_type(parser, commontype): + try: + return _CACHE[commontype] + except KeyError: + cdecl = COMMON_TYPES.get(commontype, commontype) + if not isinstance(cdecl, str): + result, quals = cdecl, 0 # cdecl is already a BaseType + elif cdecl in model.PrimitiveType.ALL_PRIMITIVE_TYPES: + result, quals = model.PrimitiveType(cdecl), 0 + elif cdecl == 'set-unicode-needed': + raise FFIError("The Windows type %r is only available after " + "you call ffi.set_unicode()" % (commontype,)) + else: + if commontype == cdecl: + raise FFIError( + "Unsupported type: %r. Please look at " + "http://cffi.readthedocs.io/en/latest/cdef.html#ffi-cdef-limitations " + "and file an issue if you think this type should really " + "be supported." % (commontype,)) + result, quals = parser.parse_type_and_quals(cdecl) # recursive + + assert isinstance(result, model.BaseTypeByIdentity) + _CACHE[commontype] = result, quals + return result, quals + + +# ____________________________________________________________ +# extra types for Windows (most of them are in commontypes.c) + + +def win_common_types(): + return { + "UNICODE_STRING": model.StructType( + "_UNICODE_STRING", + ["Length", + "MaximumLength", + "Buffer"], + [model.PrimitiveType("unsigned short"), + model.PrimitiveType("unsigned short"), + model.PointerType(model.PrimitiveType("wchar_t"))], + [-1, -1, -1]), + "PUNICODE_STRING": "UNICODE_STRING *", + "PCUNICODE_STRING": "const UNICODE_STRING *", + + "TBYTE": "set-unicode-needed", + "TCHAR": "set-unicode-needed", + "LPCTSTR": "set-unicode-needed", + "PCTSTR": "set-unicode-needed", + "LPTSTR": "set-unicode-needed", + "PTSTR": "set-unicode-needed", + "PTBYTE": "set-unicode-needed", + "PTCHAR": "set-unicode-needed", + } + +if sys.platform == 'win32': + COMMON_TYPES.update(win_common_types()) diff --git a/env/lib/python3.8/site-packages/cffi/cparser.py b/env/lib/python3.8/site-packages/cffi/cparser.py new file mode 100644 index 00000000..74830e91 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/cparser.py @@ -0,0 +1,1006 @@ +from . import model +from .commontypes import COMMON_TYPES, resolve_common_type +from .error import FFIError, CDefError +try: + from . import _pycparser as pycparser +except ImportError: + import pycparser +import weakref, re, sys + +try: + if sys.version_info < (3,): + import thread as _thread + else: + import _thread + lock = _thread.allocate_lock() +except ImportError: + lock = None + +def _workaround_for_static_import_finders(): + # Issue #392: packaging tools like cx_Freeze can not find these + # because pycparser uses exec dynamic import. This is an obscure + # workaround. This function is never called. 
+    import pycparser.yacctab
+    import pycparser.lextab
+
+CDEF_SOURCE_STRING = "<cdef source string>"
+_r_comment = re.compile(r"/\*.*?\*/|//([^\n\\]|\\.)*?$",
+                        re.DOTALL | re.MULTILINE)
+_r_define = re.compile(r"^\s*#\s*define\s+([A-Za-z_][A-Za-z_0-9]*)"
+                       r"\b((?:[^\n\\]|\\.)*?)$",
+                       re.DOTALL | re.MULTILINE)
+_r_line_directive = re.compile(r"^[ \t]*#[ \t]*(?:line|\d+)\b.*$", re.MULTILINE)
+_r_partial_enum = re.compile(r"=\s*\.\.\.\s*[,}]|\.\.\.\s*\}")
+_r_enum_dotdotdot = re.compile(r"__dotdotdot\d+__$")
+_r_partial_array = re.compile(r"\[\s*\.\.\.\s*\]")
+_r_words = re.compile(r"\w+|\S")
+_parser_cache = None
+_r_int_literal = re.compile(r"-?0?x?[0-9a-f]+[lu]*$", re.IGNORECASE)
+_r_stdcall1 = re.compile(r"\b(__stdcall|WINAPI)\b")
+_r_stdcall2 = re.compile(r"[(]\s*(__stdcall|WINAPI)\b")
+_r_cdecl = re.compile(r"\b__cdecl\b")
+_r_extern_python = re.compile(r'\bextern\s*"'
+                              r'(Python|Python\s*\+\s*C|C\s*\+\s*Python)"\s*.')
+_r_star_const_space = re.compile(       # matches "* const "
+    r"[*]\s*((const|volatile|restrict)\b\s*)+")
+_r_int_dotdotdot = re.compile(r"(\b(int|long|short|signed|unsigned|char)\s*)+"
+                              r"\.\.\.")
+_r_float_dotdotdot = re.compile(r"\b(double|float)\s*\.\.\.")
+
+def _get_parser():
+    global _parser_cache
+    if _parser_cache is None:
+        _parser_cache = pycparser.CParser()
+    return _parser_cache
+
+def _workaround_for_old_pycparser(csource):
+    # Workaround for a pycparser issue (fixed between pycparser 2.10 and
+    # 2.14): "char*const***" gives us a wrong syntax tree, the same as
+    # for "char***(*const)".  This means we can't tell the difference
+    # afterwards.  But "char(*const(***))" gives us the right syntax
+    # tree.  The issue only occurs if there are several stars in
+    # sequence with no parenthesis inbetween, just possibly qualifiers.
+    # Attempt to fix it by adding some parentheses in the source: each
+    # time we see "* const" or "* const *", we add an opening
+    # parenthesis before each star---the hard part is figuring out where
+    # to close them.
+    parts = []
+    while True:
+        match = _r_star_const_space.search(csource)
+        if not match:
+            break
+        #print repr(''.join(parts)+csource), '=>',
+        parts.append(csource[:match.start()])
+        parts.append('('); closing = ')'
+        parts.append(match.group())   # e.g.
"* const " + endpos = match.end() + if csource.startswith('*', endpos): + parts.append('('); closing += ')' + level = 0 + i = endpos + while i < len(csource): + c = csource[i] + if c == '(': + level += 1 + elif c == ')': + if level == 0: + break + level -= 1 + elif c in ',;=': + if level == 0: + break + i += 1 + csource = csource[endpos:i] + closing + csource[i:] + #print repr(''.join(parts)+csource) + parts.append(csource) + return ''.join(parts) + +def _preprocess_extern_python(csource): + # input: `extern "Python" int foo(int);` or + # `extern "Python" { int foo(int); }` + # output: + # void __cffi_extern_python_start; + # int foo(int); + # void __cffi_extern_python_stop; + # + # input: `extern "Python+C" int foo(int);` + # output: + # void __cffi_extern_python_plus_c_start; + # int foo(int); + # void __cffi_extern_python_stop; + parts = [] + while True: + match = _r_extern_python.search(csource) + if not match: + break + endpos = match.end() - 1 + #print + #print ''.join(parts)+csource + #print '=>' + parts.append(csource[:match.start()]) + if 'C' in match.group(1): + parts.append('void __cffi_extern_python_plus_c_start; ') + else: + parts.append('void __cffi_extern_python_start; ') + if csource[endpos] == '{': + # grouping variant + closing = csource.find('}', endpos) + if closing < 0: + raise CDefError("'extern \"Python\" {': no '}' found") + if csource.find('{', endpos + 1, closing) >= 0: + raise NotImplementedError("cannot use { } inside a block " + "'extern \"Python\" { ... }'") + parts.append(csource[endpos+1:closing]) + csource = csource[closing+1:] + else: + # non-grouping variant + semicolon = csource.find(';', endpos) + if semicolon < 0: + raise CDefError("'extern \"Python\": no ';' found") + parts.append(csource[endpos:semicolon+1]) + csource = csource[semicolon+1:] + parts.append(' void __cffi_extern_python_stop;') + #print ''.join(parts)+csource + #print + parts.append(csource) + return ''.join(parts) + +def _warn_for_string_literal(csource): + if '"' not in csource: + return + for line in csource.splitlines(): + if '"' in line and not line.lstrip().startswith('#'): + import warnings + warnings.warn("String literal found in cdef() or type source. " + "String literals are ignored here, but you should " + "remove them anyway because some character sequences " + "confuse pre-parsing.") + break + +def _warn_for_non_extern_non_static_global_variable(decl): + if not decl.storage: + import warnings + warnings.warn("Global variable '%s' in cdef(): for consistency " + "with C it should have a storage class specifier " + "(usually 'extern')" % (decl.name,)) + +def _remove_line_directives(csource): + # _r_line_directive matches whole lines, without the final \n, if they + # start with '#line' with some spacing allowed, or '#NUMBER'. This + # function stores them away and replaces them with exactly the string + # '#line@N', where N is the index in the list 'line_directives'. 
+ line_directives = [] + def replace(m): + i = len(line_directives) + line_directives.append(m.group()) + return '#line@%d' % i + csource = _r_line_directive.sub(replace, csource) + return csource, line_directives + +def _put_back_line_directives(csource, line_directives): + def replace(m): + s = m.group() + if not s.startswith('#line@'): + raise AssertionError("unexpected #line directive " + "(should have been processed and removed") + return line_directives[int(s[6:])] + return _r_line_directive.sub(replace, csource) + +def _preprocess(csource): + # First, remove the lines of the form '#line N "filename"' because + # the "filename" part could confuse the rest + csource, line_directives = _remove_line_directives(csource) + # Remove comments. NOTE: this only work because the cdef() section + # should not contain any string literals (except in line directives)! + def replace_keeping_newlines(m): + return ' ' + m.group().count('\n') * '\n' + csource = _r_comment.sub(replace_keeping_newlines, csource) + # Remove the "#define FOO x" lines + macros = {} + for match in _r_define.finditer(csource): + macroname, macrovalue = match.groups() + macrovalue = macrovalue.replace('\\\n', '').strip() + macros[macroname] = macrovalue + csource = _r_define.sub('', csource) + # + if pycparser.__version__ < '2.14': + csource = _workaround_for_old_pycparser(csource) + # + # BIG HACK: replace WINAPI or __stdcall with "volatile const". + # It doesn't make sense for the return type of a function to be + # "volatile volatile const", so we abuse it to detect __stdcall... + # Hack number 2 is that "int(volatile *fptr)();" is not valid C + # syntax, so we place the "volatile" before the opening parenthesis. + csource = _r_stdcall2.sub(' volatile volatile const(', csource) + csource = _r_stdcall1.sub(' volatile volatile const ', csource) + csource = _r_cdecl.sub(' ', csource) + # + # Replace `extern "Python"` with start/end markers + csource = _preprocess_extern_python(csource) + # + # Now there should not be any string literal left; warn if we get one + _warn_for_string_literal(csource) + # + # Replace "[...]" with "[__dotdotdotarray__]" + csource = _r_partial_array.sub('[__dotdotdotarray__]', csource) + # + # Replace "...}" with "__dotdotdotNUM__}". This construction should + # occur only at the end of enums; at the end of structs we have "...;}" + # and at the end of vararg functions "...);". Also replace "=...[,}]" + # with ",__dotdotdotNUM__[,}]": this occurs in the enums too, when + # giving an unknown value. + matches = list(_r_partial_enum.finditer(csource)) + for number, match in enumerate(reversed(matches)): + p = match.start() + if csource[p] == '=': + p2 = csource.find('...', p, match.end()) + assert p2 > p + csource = '%s,__dotdotdot%d__ %s' % (csource[:p], number, + csource[p2+3:]) + else: + assert csource[p:p+3] == '...' + csource = '%s __dotdotdot%d__ %s' % (csource[:p], number, + csource[p+3:]) + # Replace "int ..." or "unsigned long int..." with "__dotdotdotint__" + csource = _r_int_dotdotdot.sub(' __dotdotdotint__ ', csource) + # Replace "float ..." or "double..." with "__dotdotdotfloat__" + csource = _r_float_dotdotdot.sub(' __dotdotdotfloat__ ', csource) + # Replace all remaining "..." with the same name, "__dotdotdot__", + # which is declared with a typedef for the purpose of C parsing. 
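+    # e.g. the vararg declaration "int printf(const char *fmt, ...);"
+    # becomes "int printf(const char *fmt,  __dotdotdot__ );"; a fake
+    # "typedef int __dotdotdot__;" emitted by Parser._parse() lets
+    # pycparser accept it, and _parse_function_type() later turns the
+    # trailing __dotdotdot__ argument back into a real ellipsis.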
+    csource = csource.replace('...', ' __dotdotdot__ ')
+    # Finally, put back the line directives
+    csource = _put_back_line_directives(csource, line_directives)
+    return csource, macros
+
+def _common_type_names(csource):
+    # Look in the source for what looks like usages of types from the
+    # list of common types.  A "usage" is approximated here as the
+    # appearance of the word, minus a "definition" of the type, which
+    # is the last word in a "typedef" statement.  Approximative only
+    # but should be fine for all the common types.
+    look_for_words = set(COMMON_TYPES)
+    look_for_words.add(';')
+    look_for_words.add(',')
+    look_for_words.add('(')
+    look_for_words.add(')')
+    look_for_words.add('typedef')
+    words_used = set()
+    is_typedef = False
+    paren = 0
+    previous_word = ''
+    for word in _r_words.findall(csource):
+        if word in look_for_words:
+            if word == ';':
+                if is_typedef:
+                    words_used.discard(previous_word)
+                    look_for_words.discard(previous_word)
+                    is_typedef = False
+            elif word == 'typedef':
+                is_typedef = True
+                paren = 0
+            elif word == '(':
+                paren += 1
+            elif word == ')':
+                paren -= 1
+            elif word == ',':
+                if is_typedef and paren == 0:
+                    words_used.discard(previous_word)
+                    look_for_words.discard(previous_word)
+            else:   # word in COMMON_TYPES
+                words_used.add(word)
+        previous_word = word
+    return words_used
+
+
+class Parser(object):
+
+    def __init__(self):
+        self._declarations = {}
+        self._included_declarations = set()
+        self._anonymous_counter = 0
+        self._structnode2type = weakref.WeakKeyDictionary()
+        self._options = {}
+        self._int_constants = {}
+        self._recomplete = []
+        self._uses_new_feature = None
+
+    def _parse(self, csource):
+        csource, macros = _preprocess(csource)
+        # XXX: for more efficiency we would need to poke into the
+        # internals of CParser...  the following registers the
+        # typedefs, because their presence or absence influences the
+        # parsing itself (but what they are typedef'ed to plays no role)
+        ctn = _common_type_names(csource)
+        typenames = []
+        for name in sorted(self._declarations):
+            if name.startswith('typedef '):
+                name = name[8:]
+                typenames.append(name)
+                ctn.discard(name)
+        typenames += sorted(ctn)
+        #
+        csourcelines = []
+        csourcelines.append('# 1 "<cdef automatic initialization code>"')
+        for typename in typenames:
+            csourcelines.append('typedef int %s;' % typename)
+        csourcelines.append('typedef int __dotdotdotint__, __dotdotdotfloat__,'
+                            ' __dotdotdot__;')
+        # this forces pycparser to consider the following in the file
+        # called <cdef source string> from line 1
+        csourcelines.append('# 1 "%s"' % (CDEF_SOURCE_STRING,))
+        csourcelines.append(csource)
+        fullcsource = '\n'.join(csourcelines)
+        if lock is not None:
+            lock.acquire()     # pycparser is not thread-safe...
+        try:
+            ast = _get_parser().parse(fullcsource)
+        except pycparser.c_parser.ParseError as e:
+            self.convert_pycparser_error(e, csource)
+        finally:
+            if lock is not None:
+                lock.release()
+        # csource will be used to find buggy source text
+        return ast, macros, csource
+
+    def _convert_pycparser_error(self, e, csource):
+        # xxx look for ":NUM:" at the start of str(e)
+        # and interpret that as a line number.  This will not work if
+        # the user gives explicit ``# NUM "FILE"`` directives.
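+        # e.g. a pycparser error whose str() starts with
+        # '<cdef source string>:2:' refers to line 2 of the csource
+        # text given to cdef(); that line is returned so that
+        # convert_pycparser_error() can quote it in the message.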
+ line = None + msg = str(e) + match = re.match(r"%s:(\d+):" % (CDEF_SOURCE_STRING,), msg) + if match: + linenum = int(match.group(1), 10) + csourcelines = csource.splitlines() + if 1 <= linenum <= len(csourcelines): + line = csourcelines[linenum-1] + return line + + def convert_pycparser_error(self, e, csource): + line = self._convert_pycparser_error(e, csource) + + msg = str(e) + if line: + msg = 'cannot parse "%s"\n%s' % (line.strip(), msg) + else: + msg = 'parse error\n%s' % (msg,) + raise CDefError(msg) + + def parse(self, csource, override=False, packed=False, pack=None, + dllexport=False): + if packed: + if packed != True: + raise ValueError("'packed' should be False or True; use " + "'pack' to give another value") + if pack: + raise ValueError("cannot give both 'pack' and 'packed'") + pack = 1 + elif pack: + if pack & (pack - 1): + raise ValueError("'pack' must be a power of two, not %r" % + (pack,)) + else: + pack = 0 + prev_options = self._options + try: + self._options = {'override': override, + 'packed': pack, + 'dllexport': dllexport} + self._internal_parse(csource) + finally: + self._options = prev_options + + def _internal_parse(self, csource): + ast, macros, csource = self._parse(csource) + # add the macros + self._process_macros(macros) + # find the first "__dotdotdot__" and use that as a separator + # between the repeated typedefs and the real csource + iterator = iter(ast.ext) + for decl in iterator: + if decl.name == '__dotdotdot__': + break + else: + assert 0 + current_decl = None + # + try: + self._inside_extern_python = '__cffi_extern_python_stop' + for decl in iterator: + current_decl = decl + if isinstance(decl, pycparser.c_ast.Decl): + self._parse_decl(decl) + elif isinstance(decl, pycparser.c_ast.Typedef): + if not decl.name: + raise CDefError("typedef does not declare any name", + decl) + quals = 0 + if (isinstance(decl.type.type, pycparser.c_ast.IdentifierType) and + decl.type.type.names[-1].startswith('__dotdotdot')): + realtype = self._get_unknown_type(decl) + elif (isinstance(decl.type, pycparser.c_ast.PtrDecl) and + isinstance(decl.type.type, pycparser.c_ast.TypeDecl) and + isinstance(decl.type.type.type, + pycparser.c_ast.IdentifierType) and + decl.type.type.type.names[-1].startswith('__dotdotdot')): + realtype = self._get_unknown_ptr_type(decl) + else: + realtype, quals = self._get_type_and_quals( + decl.type, name=decl.name, partial_length_ok=True, + typedef_example="*(%s *)0" % (decl.name,)) + self._declare('typedef ' + decl.name, realtype, quals=quals) + elif decl.__class__.__name__ == 'Pragma': + pass # skip pragma, only in pycparser 2.15 + else: + raise CDefError("unexpected <%s>: this construct is valid " + "C but not valid in cdef()" % + decl.__class__.__name__, decl) + except CDefError as e: + if len(e.args) == 1: + e.args = e.args + (current_decl,) + raise + except FFIError as e: + msg = self._convert_pycparser_error(e, csource) + if msg: + e.args = (e.args[0] + "\n *** Err: %s" % msg,) + raise + + def _add_constants(self, key, val): + if key in self._int_constants: + if self._int_constants[key] == val: + return # ignore identical double declarations + raise FFIError( + "multiple declarations of constant: %s" % (key,)) + self._int_constants[key] = val + + def _add_integer_constant(self, name, int_str): + int_str = int_str.lower().rstrip("ul") + neg = int_str.startswith('-') + if neg: + int_str = int_str[1:] + # "010" is not valid oct in py3 + if (int_str.startswith("0") and int_str != '0' + and not int_str.startswith("0x")): + int_str = "0o" + 
int_str[1:] + pyvalue = int(int_str, 0) + if neg: + pyvalue = -pyvalue + self._add_constants(name, pyvalue) + self._declare('macro ' + name, pyvalue) + + def _process_macros(self, macros): + for key, value in macros.items(): + value = value.strip() + if _r_int_literal.match(value): + self._add_integer_constant(key, value) + elif value == '...': + self._declare('macro ' + key, value) + else: + raise CDefError( + 'only supports one of the following syntax:\n' + ' #define %s ... (literally dot-dot-dot)\n' + ' #define %s NUMBER (with NUMBER an integer' + ' constant, decimal/hex/octal)\n' + 'got:\n' + ' #define %s %s' + % (key, key, key, value)) + + def _declare_function(self, tp, quals, decl): + tp = self._get_type_pointer(tp, quals) + if self._options.get('dllexport'): + tag = 'dllexport_python ' + elif self._inside_extern_python == '__cffi_extern_python_start': + tag = 'extern_python ' + elif self._inside_extern_python == '__cffi_extern_python_plus_c_start': + tag = 'extern_python_plus_c ' + else: + tag = 'function ' + self._declare(tag + decl.name, tp) + + def _parse_decl(self, decl): + node = decl.type + if isinstance(node, pycparser.c_ast.FuncDecl): + tp, quals = self._get_type_and_quals(node, name=decl.name) + assert isinstance(tp, model.RawFunctionType) + self._declare_function(tp, quals, decl) + else: + if isinstance(node, pycparser.c_ast.Struct): + self._get_struct_union_enum_type('struct', node) + elif isinstance(node, pycparser.c_ast.Union): + self._get_struct_union_enum_type('union', node) + elif isinstance(node, pycparser.c_ast.Enum): + self._get_struct_union_enum_type('enum', node) + elif not decl.name: + raise CDefError("construct does not declare any variable", + decl) + # + if decl.name: + tp, quals = self._get_type_and_quals(node, + partial_length_ok=True) + if tp.is_raw_function: + self._declare_function(tp, quals, decl) + elif (tp.is_integer_type() and + hasattr(decl, 'init') and + hasattr(decl.init, 'value') and + _r_int_literal.match(decl.init.value)): + self._add_integer_constant(decl.name, decl.init.value) + elif (tp.is_integer_type() and + isinstance(decl.init, pycparser.c_ast.UnaryOp) and + decl.init.op == '-' and + hasattr(decl.init.expr, 'value') and + _r_int_literal.match(decl.init.expr.value)): + self._add_integer_constant(decl.name, + '-' + decl.init.expr.value) + elif (tp is model.void_type and + decl.name.startswith('__cffi_extern_python_')): + # hack: `extern "Python"` in the C source is replaced + # with "void __cffi_extern_python_start;" and + # "void __cffi_extern_python_stop;" + self._inside_extern_python = decl.name + else: + if self._inside_extern_python !='__cffi_extern_python_stop': + raise CDefError( + "cannot declare constants or " + "variables with 'extern \"Python\"'") + if (quals & model.Q_CONST) and not tp.is_array_type: + self._declare('constant ' + decl.name, tp, quals=quals) + else: + _warn_for_non_extern_non_static_global_variable(decl) + self._declare('variable ' + decl.name, tp, quals=quals) + + def parse_type(self, cdecl): + return self.parse_type_and_quals(cdecl)[0] + + def parse_type_and_quals(self, cdecl): + ast, macros = self._parse('void __dummy(\n%s\n);' % cdecl)[:2] + assert not macros + exprnode = ast.ext[-1].type.args.params[0] + if isinstance(exprnode, pycparser.c_ast.ID): + raise CDefError("unknown identifier '%s'" % (exprnode.name,)) + return self._get_type_and_quals(exprnode.type) + + def _declare(self, name, obj, included=False, quals=0): + if name in self._declarations: + prevobj, prevquals = self._declarations[name] + if 
prevobj is obj and prevquals == quals: + return + if not self._options.get('override'): + raise FFIError( + "multiple declarations of %s (for interactive usage, " + "try cdef(xx, override=True))" % (name,)) + assert '__dotdotdot__' not in name.split() + self._declarations[name] = (obj, quals) + if included: + self._included_declarations.add(obj) + + def _extract_quals(self, type): + quals = 0 + if isinstance(type, (pycparser.c_ast.TypeDecl, + pycparser.c_ast.PtrDecl)): + if 'const' in type.quals: + quals |= model.Q_CONST + if 'volatile' in type.quals: + quals |= model.Q_VOLATILE + if 'restrict' in type.quals: + quals |= model.Q_RESTRICT + return quals + + def _get_type_pointer(self, type, quals, declname=None): + if isinstance(type, model.RawFunctionType): + return type.as_function_pointer() + if (isinstance(type, model.StructOrUnionOrEnum) and + type.name.startswith('$') and type.name[1:].isdigit() and + type.forcename is None and declname is not None): + return model.NamedPointerType(type, declname, quals) + return model.PointerType(type, quals) + + def _get_type_and_quals(self, typenode, name=None, partial_length_ok=False, + typedef_example=None): + # first, dereference typedefs, if we have it already parsed, we're good + if (isinstance(typenode, pycparser.c_ast.TypeDecl) and + isinstance(typenode.type, pycparser.c_ast.IdentifierType) and + len(typenode.type.names) == 1 and + ('typedef ' + typenode.type.names[0]) in self._declarations): + tp, quals = self._declarations['typedef ' + typenode.type.names[0]] + quals |= self._extract_quals(typenode) + return tp, quals + # + if isinstance(typenode, pycparser.c_ast.ArrayDecl): + # array type + if typenode.dim is None: + length = None + else: + length = self._parse_constant( + typenode.dim, partial_length_ok=partial_length_ok) + # a hack: in 'typedef int foo_t[...][...];', don't use '...' as + # the length but use directly the C expression that would be + # generated by recompiler.py. This lets the typedef be used in + # many more places within recompiler.py + if typedef_example is not None: + if length == '...': + length = '_cffi_array_len(%s)' % (typedef_example,) + typedef_example = "*" + typedef_example + # + tp, quals = self._get_type_and_quals(typenode.type, + partial_length_ok=partial_length_ok, + typedef_example=typedef_example) + return model.ArrayType(tp, length), quals + # + if isinstance(typenode, pycparser.c_ast.PtrDecl): + # pointer type + itemtype, itemquals = self._get_type_and_quals(typenode.type) + tp = self._get_type_pointer(itemtype, itemquals, declname=name) + quals = self._extract_quals(typenode) + return tp, quals + # + if isinstance(typenode, pycparser.c_ast.TypeDecl): + quals = self._extract_quals(typenode) + type = typenode.type + if isinstance(type, pycparser.c_ast.IdentifierType): + # assume a primitive type. 
get it from .names, but reduce + # synonyms to a single chosen combination + names = list(type.names) + if names != ['signed', 'char']: # keep this unmodified + prefixes = {} + while names: + name = names[0] + if name in ('short', 'long', 'signed', 'unsigned'): + prefixes[name] = prefixes.get(name, 0) + 1 + del names[0] + else: + break + # ignore the 'signed' prefix below, and reorder the others + newnames = [] + for prefix in ('unsigned', 'short', 'long'): + for i in range(prefixes.get(prefix, 0)): + newnames.append(prefix) + if not names: + names = ['int'] # implicitly + if names == ['int']: # but kill it if 'short' or 'long' + if 'short' in prefixes or 'long' in prefixes: + names = [] + names = newnames + names + ident = ' '.join(names) + if ident == 'void': + return model.void_type, quals + if ident == '__dotdotdot__': + raise FFIError(':%d: bad usage of "..."' % + typenode.coord.line) + tp0, quals0 = resolve_common_type(self, ident) + return tp0, (quals | quals0) + # + if isinstance(type, pycparser.c_ast.Struct): + # 'struct foobar' + tp = self._get_struct_union_enum_type('struct', type, name) + return tp, quals + # + if isinstance(type, pycparser.c_ast.Union): + # 'union foobar' + tp = self._get_struct_union_enum_type('union', type, name) + return tp, quals + # + if isinstance(type, pycparser.c_ast.Enum): + # 'enum foobar' + tp = self._get_struct_union_enum_type('enum', type, name) + return tp, quals + # + if isinstance(typenode, pycparser.c_ast.FuncDecl): + # a function type + return self._parse_function_type(typenode, name), 0 + # + # nested anonymous structs or unions end up here + if isinstance(typenode, pycparser.c_ast.Struct): + return self._get_struct_union_enum_type('struct', typenode, name, + nested=True), 0 + if isinstance(typenode, pycparser.c_ast.Union): + return self._get_struct_union_enum_type('union', typenode, name, + nested=True), 0 + # + raise FFIError(":%d: bad or unsupported type declaration" % + typenode.coord.line) + + def _parse_function_type(self, typenode, funcname=None): + params = list(getattr(typenode.args, 'params', [])) + for i, arg in enumerate(params): + if not hasattr(arg, 'type'): + raise CDefError("%s arg %d: unknown type '%s'" + " (if you meant to use the old C syntax of giving" + " untyped arguments, it is not supported)" + % (funcname or 'in expression', i + 1, + getattr(arg, 'name', '?'))) + ellipsis = ( + len(params) > 0 and + isinstance(params[-1].type, pycparser.c_ast.TypeDecl) and + isinstance(params[-1].type.type, + pycparser.c_ast.IdentifierType) and + params[-1].type.type.names == ['__dotdotdot__']) + if ellipsis: + params.pop() + if not params: + raise CDefError( + "%s: a function with only '(...)' as argument" + " is not correct C" % (funcname or 'in expression')) + args = [self._as_func_arg(*self._get_type_and_quals(argdeclnode.type)) + for argdeclnode in params] + if not ellipsis and args == [model.void_type]: + args = [] + result, quals = self._get_type_and_quals(typenode.type) + # the 'quals' on the result type are ignored. HACK: we absure them + # to detect __stdcall functions: we textually replace "__stdcall" + # with "volatile volatile const" above. 
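+        # e.g. "int __stdcall f(int);" was rewritten by _preprocess()
+        # into "int volatile volatile const f(int);", so finding these
+        # three qualifiers on the result type identifies f as __stdcall.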
+ abi = None + if hasattr(typenode.type, 'quals'): # else, probable syntax error anyway + if typenode.type.quals[-3:] == ['volatile', 'volatile', 'const']: + abi = '__stdcall' + return model.RawFunctionType(tuple(args), result, ellipsis, abi) + + def _as_func_arg(self, type, quals): + if isinstance(type, model.ArrayType): + return model.PointerType(type.item, quals) + elif isinstance(type, model.RawFunctionType): + return type.as_function_pointer() + else: + return type + + def _get_struct_union_enum_type(self, kind, type, name=None, nested=False): + # First, a level of caching on the exact 'type' node of the AST. + # This is obscure, but needed because pycparser "unrolls" declarations + # such as "typedef struct { } foo_t, *foo_p" and we end up with + # an AST that is not a tree, but a DAG, with the "type" node of the + # two branches foo_t and foo_p of the trees being the same node. + # It's a bit silly but detecting "DAG-ness" in the AST tree seems + # to be the only way to distinguish this case from two independent + # structs. See test_struct_with_two_usages. + try: + return self._structnode2type[type] + except KeyError: + pass + # + # Note that this must handle parsing "struct foo" any number of + # times and always return the same StructType object. Additionally, + # one of these times (not necessarily the first), the fields of + # the struct can be specified with "struct foo { ...fields... }". + # If no name is given, then we have to create a new anonymous struct + # with no caching; in this case, the fields are either specified + # right now or never. + # + force_name = name + name = type.name + # + # get the type or create it if needed + if name is None: + # 'force_name' is used to guess a more readable name for + # anonymous structs, for the common case "typedef struct { } foo". + if force_name is not None: + explicit_name = '$%s' % force_name + else: + self._anonymous_counter += 1 + explicit_name = '$%d' % self._anonymous_counter + tp = None + else: + explicit_name = name + key = '%s %s' % (kind, name) + tp, _ = self._declarations.get(key, (None, None)) + # + if tp is None: + if kind == 'struct': + tp = model.StructType(explicit_name, None, None, None) + elif kind == 'union': + tp = model.UnionType(explicit_name, None, None, None) + elif kind == 'enum': + if explicit_name == '__dotdotdot__': + raise CDefError("Enums cannot be declared with ...") + tp = self._build_enum_type(explicit_name, type.values) + else: + raise AssertionError("kind = %r" % (kind,)) + if name is not None: + self._declare(key, tp) + else: + if kind == 'enum' and type.values is not None: + raise NotImplementedError( + "enum %s: the '{}' declaration should appear on the first " + "time the enum is mentioned, not later" % explicit_name) + if not tp.forcename: + tp.force_the_name(force_name) + if tp.forcename and '$' in tp.name: + self._declare('anonymous %s' % tp.forcename, tp) + # + self._structnode2type[type] = tp + # + # enums: done here + if kind == 'enum': + return tp + # + # is there a 'type.decls'? If yes, then this is the place in the + # C sources that declare the fields. If no, then just return the + # existing type, possibly still incomplete. 
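+        # e.g. a bare "struct foo;" reaches this point with
+        # type.decls == None and returns the cached (possibly still
+        # opaque) type; "struct foo { int x; };" carries declarations
+        # and completes the fields below.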
+ if type.decls is None: + return tp + # + if tp.fldnames is not None: + raise CDefError("duplicate declaration of struct %s" % name) + fldnames = [] + fldtypes = [] + fldbitsize = [] + fldquals = [] + for decl in type.decls: + if (isinstance(decl.type, pycparser.c_ast.IdentifierType) and + ''.join(decl.type.names) == '__dotdotdot__'): + # XXX pycparser is inconsistent: 'names' should be a list + # of strings, but is sometimes just one string. Use + # str.join() as a way to cope with both. + self._make_partial(tp, nested) + continue + if decl.bitsize is None: + bitsize = -1 + else: + bitsize = self._parse_constant(decl.bitsize) + self._partial_length = False + type, fqual = self._get_type_and_quals(decl.type, + partial_length_ok=True) + if self._partial_length: + self._make_partial(tp, nested) + if isinstance(type, model.StructType) and type.partial: + self._make_partial(tp, nested) + fldnames.append(decl.name or '') + fldtypes.append(type) + fldbitsize.append(bitsize) + fldquals.append(fqual) + tp.fldnames = tuple(fldnames) + tp.fldtypes = tuple(fldtypes) + tp.fldbitsize = tuple(fldbitsize) + tp.fldquals = tuple(fldquals) + if fldbitsize != [-1] * len(fldbitsize): + if isinstance(tp, model.StructType) and tp.partial: + raise NotImplementedError("%s: using both bitfields and '...;'" + % (tp,)) + tp.packed = self._options.get('packed') + if tp.completed: # must be re-completed: it is not opaque any more + tp.completed = 0 + self._recomplete.append(tp) + return tp + + def _make_partial(self, tp, nested): + if not isinstance(tp, model.StructOrUnion): + raise CDefError("%s cannot be partial" % (tp,)) + if not tp.has_c_name() and not nested: + raise NotImplementedError("%s is partial but has no C name" %(tp,)) + tp.partial = True + + def _parse_constant(self, exprnode, partial_length_ok=False): + # for now, limited to expressions that are an immediate number + # or positive/negative number + if isinstance(exprnode, pycparser.c_ast.Constant): + s = exprnode.value + if '0' <= s[0] <= '9': + s = s.rstrip('uUlL') + try: + if s.startswith('0'): + return int(s, 8) + else: + return int(s, 10) + except ValueError: + if len(s) > 1: + if s.lower()[0:2] == '0x': + return int(s, 16) + elif s.lower()[0:2] == '0b': + return int(s, 2) + raise CDefError("invalid constant %r" % (s,)) + elif s[0] == "'" and s[-1] == "'" and ( + len(s) == 3 or (len(s) == 4 and s[1] == "\\")): + return ord(s[-2]) + else: + raise CDefError("invalid constant %r" % (s,)) + # + if (isinstance(exprnode, pycparser.c_ast.UnaryOp) and + exprnode.op == '+'): + return self._parse_constant(exprnode.expr) + # + if (isinstance(exprnode, pycparser.c_ast.UnaryOp) and + exprnode.op == '-'): + return -self._parse_constant(exprnode.expr) + # load previously defined int constant + if (isinstance(exprnode, pycparser.c_ast.ID) and + exprnode.name in self._int_constants): + return self._int_constants[exprnode.name] + # + if (isinstance(exprnode, pycparser.c_ast.ID) and + exprnode.name == '__dotdotdotarray__'): + if partial_length_ok: + self._partial_length = True + return '...' 
+ raise FFIError(":%d: unsupported '[...]' here, cannot derive " + "the actual array length in this context" + % exprnode.coord.line) + # + if isinstance(exprnode, pycparser.c_ast.BinaryOp): + left = self._parse_constant(exprnode.left) + right = self._parse_constant(exprnode.right) + if exprnode.op == '+': + return left + right + elif exprnode.op == '-': + return left - right + elif exprnode.op == '*': + return left * right + elif exprnode.op == '/': + return self._c_div(left, right) + elif exprnode.op == '%': + return left - self._c_div(left, right) * right + elif exprnode.op == '<<': + return left << right + elif exprnode.op == '>>': + return left >> right + elif exprnode.op == '&': + return left & right + elif exprnode.op == '|': + return left | right + elif exprnode.op == '^': + return left ^ right + # + raise FFIError(":%d: unsupported expression: expected a " + "simple numeric constant" % exprnode.coord.line) + + def _c_div(self, a, b): + result = a // b + if ((a < 0) ^ (b < 0)) and (a % b) != 0: + result += 1 + return result + + def _build_enum_type(self, explicit_name, decls): + if decls is not None: + partial = False + enumerators = [] + enumvalues = [] + nextenumvalue = 0 + for enum in decls.enumerators: + if _r_enum_dotdotdot.match(enum.name): + partial = True + continue + if enum.value is not None: + nextenumvalue = self._parse_constant(enum.value) + enumerators.append(enum.name) + enumvalues.append(nextenumvalue) + self._add_constants(enum.name, nextenumvalue) + nextenumvalue += 1 + enumerators = tuple(enumerators) + enumvalues = tuple(enumvalues) + tp = model.EnumType(explicit_name, enumerators, enumvalues) + tp.partial = partial + else: # opaque enum + tp = model.EnumType(explicit_name, (), ()) + return tp + + def include(self, other): + for name, (tp, quals) in other._declarations.items(): + if name.startswith('anonymous $enum_$'): + continue # fix for test_anonymous_enum_include + kind = name.split(' ', 1)[0] + if kind in ('struct', 'union', 'enum', 'anonymous', 'typedef'): + self._declare(name, tp, included=True, quals=quals) + for k, v in other._int_constants.items(): + self._add_constants(k, v) + + def _get_unknown_type(self, decl): + typenames = decl.type.type.names + if typenames == ['__dotdotdot__']: + return model.unknown_type(decl.name) + + if typenames == ['__dotdotdotint__']: + if self._uses_new_feature is None: + self._uses_new_feature = "'typedef int... %s'" % decl.name + return model.UnknownIntegerType(decl.name) + + if typenames == ['__dotdotdotfloat__']: + # note: not for 'long double' so far + if self._uses_new_feature is None: + self._uses_new_feature = "'typedef float... %s'" % decl.name + return model.UnknownFloatType(decl.name) + + raise FFIError(':%d: unsupported usage of "..." in typedef' + % decl.coord.line) + + def _get_unknown_ptr_type(self, decl): + if decl.type.type.type.names == ['__dotdotdot__']: + return model.unknown_ptr_type(decl.name) + raise FFIError(':%d: unsupported usage of "..." 
in typedef' + % decl.coord.line) diff --git a/env/lib/python3.8/site-packages/cffi/error.py b/env/lib/python3.8/site-packages/cffi/error.py new file mode 100644 index 00000000..0a27247c --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/error.py @@ -0,0 +1,31 @@ + +class FFIError(Exception): + __module__ = 'cffi' + +class CDefError(Exception): + __module__ = 'cffi' + def __str__(self): + try: + current_decl = self.args[1] + filename = current_decl.coord.file + linenum = current_decl.coord.line + prefix = '%s:%d: ' % (filename, linenum) + except (AttributeError, TypeError, IndexError): + prefix = '' + return '%s%s' % (prefix, self.args[0]) + +class VerificationError(Exception): + """ An error raised when verification fails + """ + __module__ = 'cffi' + +class VerificationMissing(Exception): + """ An error raised when incomplete structures are passed into + cdef, but no verification has been done + """ + __module__ = 'cffi' + +class PkgConfigError(Exception): + """ An error raised for missing modules in pkg-config + """ + __module__ = 'cffi' diff --git a/env/lib/python3.8/site-packages/cffi/ffiplatform.py b/env/lib/python3.8/site-packages/cffi/ffiplatform.py new file mode 100644 index 00000000..85313460 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/ffiplatform.py @@ -0,0 +1,127 @@ +import sys, os +from .error import VerificationError + + +LIST_OF_FILE_NAMES = ['sources', 'include_dirs', 'library_dirs', + 'extra_objects', 'depends'] + +def get_extension(srcfilename, modname, sources=(), **kwds): + _hack_at_distutils() + from distutils.core import Extension + allsources = [srcfilename] + for src in sources: + allsources.append(os.path.normpath(src)) + return Extension(name=modname, sources=allsources, **kwds) + +def compile(tmpdir, ext, compiler_verbose=0, debug=None): + """Compile a C extension module using distutils.""" + + _hack_at_distutils() + saved_environ = os.environ.copy() + try: + outputfilename = _build(tmpdir, ext, compiler_verbose, debug) + outputfilename = os.path.abspath(outputfilename) + finally: + # workaround for a distutils bugs where some env vars can + # become longer and longer every time it is used + for key, value in saved_environ.items(): + if os.environ.get(key) != value: + os.environ[key] = value + return outputfilename + +def _build(tmpdir, ext, compiler_verbose=0, debug=None): + # XXX compact but horrible :-( + from distutils.core import Distribution + import distutils.errors, distutils.log + # + dist = Distribution({'ext_modules': [ext]}) + dist.parse_config_files() + options = dist.get_option_dict('build_ext') + if debug is None: + debug = sys.flags.debug + options['debug'] = ('ffiplatform', debug) + options['force'] = ('ffiplatform', True) + options['build_lib'] = ('ffiplatform', tmpdir) + options['build_temp'] = ('ffiplatform', tmpdir) + # + try: + old_level = distutils.log.set_threshold(0) or 0 + try: + distutils.log.set_verbosity(compiler_verbose) + dist.run_command('build_ext') + cmd_obj = dist.get_command_obj('build_ext') + [soname] = cmd_obj.get_outputs() + finally: + distutils.log.set_threshold(old_level) + except (distutils.errors.CompileError, + distutils.errors.LinkError) as e: + raise VerificationError('%s: %s' % (e.__class__.__name__, e)) + # + return soname + +try: + from os.path import samefile +except ImportError: + def samefile(f1, f2): + return os.path.abspath(f1) == os.path.abspath(f2) + +def maybe_relative_path(path): + if not os.path.isabs(path): + return path # already relative + dir = path + names = [] + while True: + 
prevdir = dir + dir, name = os.path.split(prevdir) + if dir == prevdir or not dir: + return path # failed to make it relative + names.append(name) + try: + if samefile(dir, os.curdir): + names.reverse() + return os.path.join(*names) + except OSError: + pass + +# ____________________________________________________________ + +try: + int_or_long = (int, long) + import cStringIO +except NameError: + int_or_long = int # Python 3 + import io as cStringIO + +def _flatten(x, f): + if isinstance(x, str): + f.write('%ds%s' % (len(x), x)) + elif isinstance(x, dict): + keys = sorted(x.keys()) + f.write('%dd' % len(keys)) + for key in keys: + _flatten(key, f) + _flatten(x[key], f) + elif isinstance(x, (list, tuple)): + f.write('%dl' % len(x)) + for value in x: + _flatten(value, f) + elif isinstance(x, int_or_long): + f.write('%di' % (x,)) + else: + raise TypeError( + "the keywords to verify() contains unsupported object %r" % (x,)) + +def flatten(x): + f = cStringIO.StringIO() + _flatten(x, f) + return f.getvalue() + +def _hack_at_distutils(): + # Windows-only workaround for some configurations: see + # https://bugs.python.org/issue23246 (Python 2.7 with + # a specific MS compiler suite download) + if sys.platform == "win32": + try: + import setuptools # for side-effects, patches distutils + except ImportError: + pass diff --git a/env/lib/python3.8/site-packages/cffi/lock.py b/env/lib/python3.8/site-packages/cffi/lock.py new file mode 100644 index 00000000..db91b715 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/lock.py @@ -0,0 +1,30 @@ +import sys + +if sys.version_info < (3,): + try: + from thread import allocate_lock + except ImportError: + from dummy_thread import allocate_lock +else: + try: + from _thread import allocate_lock + except ImportError: + from _dummy_thread import allocate_lock + + +##import sys +##l1 = allocate_lock + +##class allocate_lock(object): +## def __init__(self): +## self._real = l1() +## def __enter__(self): +## for i in range(4, 0, -1): +## print sys._getframe(i).f_code +## print +## return self._real.__enter__() +## def __exit__(self, *args): +## return self._real.__exit__(*args) +## def acquire(self, f): +## assert f is False +## return self._real.acquire(f) diff --git a/env/lib/python3.8/site-packages/cffi/model.py b/env/lib/python3.8/site-packages/cffi/model.py new file mode 100644 index 00000000..ad1c1764 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/model.py @@ -0,0 +1,617 @@ +import types +import weakref + +from .lock import allocate_lock +from .error import CDefError, VerificationError, VerificationMissing + +# type qualifiers +Q_CONST = 0x01 +Q_RESTRICT = 0x02 +Q_VOLATILE = 0x04 + +def qualify(quals, replace_with): + if quals & Q_CONST: + replace_with = ' const ' + replace_with.lstrip() + if quals & Q_VOLATILE: + replace_with = ' volatile ' + replace_with.lstrip() + if quals & Q_RESTRICT: + # It seems that __restrict is supported by gcc and msvc. + # If you hit some different compiler, add a #define in + # _cffi_include.h for it (and in its copies, documented there) + replace_with = ' __restrict ' + replace_with.lstrip() + return replace_with + + +class BaseTypeByIdentity(object): + is_array_type = False + is_raw_function = False + + def get_c_name(self, replace_with='', context='a C file', quals=0): + result = self.c_name_with_marker + assert result.count('&') == 1 + # some logic duplication with ffi.getctype()... 
:-( + replace_with = replace_with.strip() + if replace_with: + if replace_with.startswith('*') and '&[' in result: + replace_with = '(%s)' % replace_with + elif not replace_with[0] in '[(': + replace_with = ' ' + replace_with + replace_with = qualify(quals, replace_with) + result = result.replace('&', replace_with) + if '$' in result: + raise VerificationError( + "cannot generate '%s' in %s: unknown type name" + % (self._get_c_name(), context)) + return result + + def _get_c_name(self): + return self.c_name_with_marker.replace('&', '') + + def has_c_name(self): + return '$' not in self._get_c_name() + + def is_integer_type(self): + return False + + def get_cached_btype(self, ffi, finishlist, can_delay=False): + try: + BType = ffi._cached_btypes[self] + except KeyError: + BType = self.build_backend_type(ffi, finishlist) + BType2 = ffi._cached_btypes.setdefault(self, BType) + assert BType2 is BType + return BType + + def __repr__(self): + return '<%s>' % (self._get_c_name(),) + + def _get_items(self): + return [(name, getattr(self, name)) for name in self._attrs_] + + +class BaseType(BaseTypeByIdentity): + + def __eq__(self, other): + return (self.__class__ == other.__class__ and + self._get_items() == other._get_items()) + + def __ne__(self, other): + return not self == other + + def __hash__(self): + return hash((self.__class__, tuple(self._get_items()))) + + +class VoidType(BaseType): + _attrs_ = () + + def __init__(self): + self.c_name_with_marker = 'void&' + + def build_backend_type(self, ffi, finishlist): + return global_cache(self, ffi, 'new_void_type') + +void_type = VoidType() + + +class BasePrimitiveType(BaseType): + def is_complex_type(self): + return False + + +class PrimitiveType(BasePrimitiveType): + _attrs_ = ('name',) + + ALL_PRIMITIVE_TYPES = { + 'char': 'c', + 'short': 'i', + 'int': 'i', + 'long': 'i', + 'long long': 'i', + 'signed char': 'i', + 'unsigned char': 'i', + 'unsigned short': 'i', + 'unsigned int': 'i', + 'unsigned long': 'i', + 'unsigned long long': 'i', + 'float': 'f', + 'double': 'f', + 'long double': 'f', + 'float _Complex': 'j', + 'double _Complex': 'j', + '_Bool': 'i', + # the following types are not primitive in the C sense + 'wchar_t': 'c', + 'char16_t': 'c', + 'char32_t': 'c', + 'int8_t': 'i', + 'uint8_t': 'i', + 'int16_t': 'i', + 'uint16_t': 'i', + 'int32_t': 'i', + 'uint32_t': 'i', + 'int64_t': 'i', + 'uint64_t': 'i', + 'int_least8_t': 'i', + 'uint_least8_t': 'i', + 'int_least16_t': 'i', + 'uint_least16_t': 'i', + 'int_least32_t': 'i', + 'uint_least32_t': 'i', + 'int_least64_t': 'i', + 'uint_least64_t': 'i', + 'int_fast8_t': 'i', + 'uint_fast8_t': 'i', + 'int_fast16_t': 'i', + 'uint_fast16_t': 'i', + 'int_fast32_t': 'i', + 'uint_fast32_t': 'i', + 'int_fast64_t': 'i', + 'uint_fast64_t': 'i', + 'intptr_t': 'i', + 'uintptr_t': 'i', + 'intmax_t': 'i', + 'uintmax_t': 'i', + 'ptrdiff_t': 'i', + 'size_t': 'i', + 'ssize_t': 'i', + } + + def __init__(self, name): + assert name in self.ALL_PRIMITIVE_TYPES + self.name = name + self.c_name_with_marker = name + '&' + + def is_char_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'c' + def is_integer_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'i' + def is_float_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'f' + def is_complex_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'j' + + def build_backend_type(self, ffi, finishlist): + return global_cache(self, ffi, 'new_primitive_type', self.name) + + +class UnknownIntegerType(BasePrimitiveType): + _attrs_ = ('name',) 
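+    # produced for cdef lines like "typedef int... foo_t;", where only
+    # the fact that foo_t is some integer type is known before the
+    # module is actually compiled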
+ + def __init__(self, name): + self.name = name + self.c_name_with_marker = name + '&' + + def is_integer_type(self): + return True + + def build_backend_type(self, ffi, finishlist): + raise NotImplementedError("integer type '%s' can only be used after " + "compilation" % self.name) + +class UnknownFloatType(BasePrimitiveType): + _attrs_ = ('name', ) + + def __init__(self, name): + self.name = name + self.c_name_with_marker = name + '&' + + def build_backend_type(self, ffi, finishlist): + raise NotImplementedError("float type '%s' can only be used after " + "compilation" % self.name) + + +class BaseFunctionType(BaseType): + _attrs_ = ('args', 'result', 'ellipsis', 'abi') + + def __init__(self, args, result, ellipsis, abi=None): + self.args = args + self.result = result + self.ellipsis = ellipsis + self.abi = abi + # + reprargs = [arg._get_c_name() for arg in self.args] + if self.ellipsis: + reprargs.append('...') + reprargs = reprargs or ['void'] + replace_with = self._base_pattern % (', '.join(reprargs),) + if abi is not None: + replace_with = replace_with[:1] + abi + ' ' + replace_with[1:] + self.c_name_with_marker = ( + self.result.c_name_with_marker.replace('&', replace_with)) + + +class RawFunctionType(BaseFunctionType): + # Corresponds to a C type like 'int(int)', which is the C type of + # a function, but not a pointer-to-function. The backend has no + # notion of such a type; it's used temporarily by parsing. + _base_pattern = '(&)(%s)' + is_raw_function = True + + def build_backend_type(self, ffi, finishlist): + raise CDefError("cannot render the type %r: it is a function " + "type, not a pointer-to-function type" % (self,)) + + def as_function_pointer(self): + return FunctionPtrType(self.args, self.result, self.ellipsis, self.abi) + + +class FunctionPtrType(BaseFunctionType): + _base_pattern = '(*&)(%s)' + + def build_backend_type(self, ffi, finishlist): + result = self.result.get_cached_btype(ffi, finishlist) + args = [] + for tp in self.args: + args.append(tp.get_cached_btype(ffi, finishlist)) + abi_args = () + if self.abi == "__stdcall": + if not self.ellipsis: # __stdcall ignored for variadic funcs + try: + abi_args = (ffi._backend.FFI_STDCALL,) + except AttributeError: + pass + return global_cache(self, ffi, 'new_function_type', + tuple(args), result, self.ellipsis, *abi_args) + + def as_raw_function(self): + return RawFunctionType(self.args, self.result, self.ellipsis, self.abi) + + +class PointerType(BaseType): + _attrs_ = ('totype', 'quals') + + def __init__(self, totype, quals=0): + self.totype = totype + self.quals = quals + extra = qualify(quals, " *&") + if totype.is_array_type: + extra = "(%s)" % (extra.lstrip(),) + self.c_name_with_marker = totype.c_name_with_marker.replace('&', extra) + + def build_backend_type(self, ffi, finishlist): + BItem = self.totype.get_cached_btype(ffi, finishlist, can_delay=True) + return global_cache(self, ffi, 'new_pointer_type', BItem) + +voidp_type = PointerType(void_type) + +def ConstPointerType(totype): + return PointerType(totype, Q_CONST) + +const_voidp_type = ConstPointerType(void_type) + + +class NamedPointerType(PointerType): + _attrs_ = ('totype', 'name') + + def __init__(self, totype, name, quals=0): + PointerType.__init__(self, totype, quals) + self.name = name + self.c_name_with_marker = name + '&' + + +class ArrayType(BaseType): + _attrs_ = ('item', 'length') + is_array_type = True + + def __init__(self, item, length): + self.item = item + self.length = length + # + if length is None: + brackets = '&[]' + elif length == 
'...': + brackets = '&[/*...*/]' + else: + brackets = '&[%s]' % length + self.c_name_with_marker = ( + self.item.c_name_with_marker.replace('&', brackets)) + + def length_is_unknown(self): + return isinstance(self.length, str) + + def resolve_length(self, newlength): + return ArrayType(self.item, newlength) + + def build_backend_type(self, ffi, finishlist): + if self.length_is_unknown(): + raise CDefError("cannot render the type %r: unknown length" % + (self,)) + self.item.get_cached_btype(ffi, finishlist) # force the item BType + BPtrItem = PointerType(self.item).get_cached_btype(ffi, finishlist) + return global_cache(self, ffi, 'new_array_type', BPtrItem, self.length) + +char_array_type = ArrayType(PrimitiveType('char'), None) + + +class StructOrUnionOrEnum(BaseTypeByIdentity): + _attrs_ = ('name',) + forcename = None + + def build_c_name_with_marker(self): + name = self.forcename or '%s %s' % (self.kind, self.name) + self.c_name_with_marker = name + '&' + + def force_the_name(self, forcename): + self.forcename = forcename + self.build_c_name_with_marker() + + def get_official_name(self): + assert self.c_name_with_marker.endswith('&') + return self.c_name_with_marker[:-1] + + +class StructOrUnion(StructOrUnionOrEnum): + fixedlayout = None + completed = 0 + partial = False + packed = 0 + + def __init__(self, name, fldnames, fldtypes, fldbitsize, fldquals=None): + self.name = name + self.fldnames = fldnames + self.fldtypes = fldtypes + self.fldbitsize = fldbitsize + self.fldquals = fldquals + self.build_c_name_with_marker() + + def anonymous_struct_fields(self): + if self.fldtypes is not None: + for name, type in zip(self.fldnames, self.fldtypes): + if name == '' and isinstance(type, StructOrUnion): + yield type + + def enumfields(self, expand_anonymous_struct_union=True): + fldquals = self.fldquals + if fldquals is None: + fldquals = (0,) * len(self.fldnames) + for name, type, bitsize, quals in zip(self.fldnames, self.fldtypes, + self.fldbitsize, fldquals): + if (name == '' and isinstance(type, StructOrUnion) + and expand_anonymous_struct_union): + # nested anonymous struct/union + for result in type.enumfields(): + yield result + else: + yield (name, type, bitsize, quals) + + def force_flatten(self): + # force the struct or union to have a declaration that lists + # directly all fields returned by enumfields(), flattening + # nested anonymous structs/unions. 
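# enumfields() above and force_flatten() below splice the fields of
# anonymous inner structs/unions into the parent, as if declared inline.
# A toy version of that recursion, assuming (name, type) pairs where an
# anonymous member has an empty name and a list as its type:
def flatten(fields):
    for name, typ in fields:
        if name == '' and isinstance(typ, list):  # anonymous struct/union
            for inner in flatten(typ):
                yield inner
        else:
            yield (name, typ)

nested = [('a', 'int'), ('', [('b', 'int'), ('c', 'char')]), ('d', 'long')]
assert list(flatten(nested)) == [
    ('a', 'int'), ('b', 'int'), ('c', 'char'), ('d', 'long')]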
+ names = [] + types = [] + bitsizes = [] + fldquals = [] + for name, type, bitsize, quals in self.enumfields(): + names.append(name) + types.append(type) + bitsizes.append(bitsize) + fldquals.append(quals) + self.fldnames = tuple(names) + self.fldtypes = tuple(types) + self.fldbitsize = tuple(bitsizes) + self.fldquals = tuple(fldquals) + + def get_cached_btype(self, ffi, finishlist, can_delay=False): + BType = StructOrUnionOrEnum.get_cached_btype(self, ffi, finishlist, + can_delay) + if not can_delay: + self.finish_backend_type(ffi, finishlist) + return BType + + def finish_backend_type(self, ffi, finishlist): + if self.completed: + if self.completed != 2: + raise NotImplementedError("recursive structure declaration " + "for '%s'" % (self.name,)) + return + BType = ffi._cached_btypes[self] + # + self.completed = 1 + # + if self.fldtypes is None: + pass # not completing it: it's an opaque struct + # + elif self.fixedlayout is None: + fldtypes = [tp.get_cached_btype(ffi, finishlist) + for tp in self.fldtypes] + lst = list(zip(self.fldnames, fldtypes, self.fldbitsize)) + extra_flags = () + if self.packed: + if self.packed == 1: + extra_flags = (8,) # SF_PACKED + else: + extra_flags = (0, self.packed) + ffi._backend.complete_struct_or_union(BType, lst, self, + -1, -1, *extra_flags) + # + else: + fldtypes = [] + fieldofs, fieldsize, totalsize, totalalignment = self.fixedlayout + for i in range(len(self.fldnames)): + fsize = fieldsize[i] + ftype = self.fldtypes[i] + # + if isinstance(ftype, ArrayType) and ftype.length_is_unknown(): + # fix the length to match the total size + BItemType = ftype.item.get_cached_btype(ffi, finishlist) + nlen, nrest = divmod(fsize, ffi.sizeof(BItemType)) + if nrest != 0: + self._verification_error( + "field '%s.%s' has a bogus size?" 
% ( + self.name, self.fldnames[i] or '{}')) + ftype = ftype.resolve_length(nlen) + self.fldtypes = (self.fldtypes[:i] + (ftype,) + + self.fldtypes[i+1:]) + # + BFieldType = ftype.get_cached_btype(ffi, finishlist) + if isinstance(ftype, ArrayType) and ftype.length is None: + assert fsize == 0 + else: + bitemsize = ffi.sizeof(BFieldType) + if bitemsize != fsize: + self._verification_error( + "field '%s.%s' is declared as %d bytes, but is " + "really %d bytes" % (self.name, + self.fldnames[i] or '{}', + bitemsize, fsize)) + fldtypes.append(BFieldType) + # + lst = list(zip(self.fldnames, fldtypes, self.fldbitsize, fieldofs)) + ffi._backend.complete_struct_or_union(BType, lst, self, + totalsize, totalalignment) + self.completed = 2 + + def _verification_error(self, msg): + raise VerificationError(msg) + + def check_not_partial(self): + if self.partial and self.fixedlayout is None: + raise VerificationMissing(self._get_c_name()) + + def build_backend_type(self, ffi, finishlist): + self.check_not_partial() + finishlist.append(self) + # + return global_cache(self, ffi, 'new_%s_type' % self.kind, + self.get_official_name(), key=self) + + +class StructType(StructOrUnion): + kind = 'struct' + + +class UnionType(StructOrUnion): + kind = 'union' + + +class EnumType(StructOrUnionOrEnum): + kind = 'enum' + partial = False + partial_resolved = False + + def __init__(self, name, enumerators, enumvalues, baseinttype=None): + self.name = name + self.enumerators = enumerators + self.enumvalues = enumvalues + self.baseinttype = baseinttype + self.build_c_name_with_marker() + + def force_the_name(self, forcename): + StructOrUnionOrEnum.force_the_name(self, forcename) + if self.forcename is None: + name = self.get_official_name() + self.forcename = '$' + name.replace(' ', '_') + + def check_not_partial(self): + if self.partial and not self.partial_resolved: + raise VerificationMissing(self._get_c_name()) + + def build_backend_type(self, ffi, finishlist): + self.check_not_partial() + base_btype = self.build_baseinttype(ffi, finishlist) + return global_cache(self, ffi, 'new_enum_type', + self.get_official_name(), + self.enumerators, self.enumvalues, + base_btype, key=self) + + def build_baseinttype(self, ffi, finishlist): + if self.baseinttype is not None: + return self.baseinttype.get_cached_btype(ffi, finishlist) + # + if self.enumvalues: + smallest_value = min(self.enumvalues) + largest_value = max(self.enumvalues) + else: + import warnings + try: + # XXX! The goal is to ensure that the warnings.warn() + # will not suppress the warning. We want to get it + # several times if we reach this point several times. 
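# The candidate test a few lines below checks whether both enum extremes
# fit in a type of `size` bytes (sign is 1 for the signed candidates,
# 0 for the unsigned ones).  Extracted as a standalone predicate:
def fits(lo, hi, size, sign):
    return lo >= ((-1) << (8*size - 1)) and hi < (1 << (8*size - sign))

assert fits(-1, 127, 4, 1)        # any negative value forces a signed type
assert fits(0, 2**32 - 1, 4, 0)   # the full unsigned 32-bit range fits
assert not fits(0, 2**32, 4, 0)   # one past it needs the larger candidate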
+ __warningregistry__.clear() + except NameError: + pass + warnings.warn("%r has no values explicitly defined; " + "guessing that it is equivalent to 'unsigned int'" + % self._get_c_name()) + smallest_value = largest_value = 0 + if smallest_value < 0: # needs a signed type + sign = 1 + candidate1 = PrimitiveType("int") + candidate2 = PrimitiveType("long") + else: + sign = 0 + candidate1 = PrimitiveType("unsigned int") + candidate2 = PrimitiveType("unsigned long") + btype1 = candidate1.get_cached_btype(ffi, finishlist) + btype2 = candidate2.get_cached_btype(ffi, finishlist) + size1 = ffi.sizeof(btype1) + size2 = ffi.sizeof(btype2) + if (smallest_value >= ((-1) << (8*size1-1)) and + largest_value < (1 << (8*size1-sign))): + return btype1 + if (smallest_value >= ((-1) << (8*size2-1)) and + largest_value < (1 << (8*size2-sign))): + return btype2 + raise CDefError("%s values don't all fit into either 'long' " + "or 'unsigned long'" % self._get_c_name()) + +def unknown_type(name, structname=None): + if structname is None: + structname = '$%s' % name + tp = StructType(structname, None, None, None) + tp.force_the_name(name) + tp.origin = "unknown_type" + return tp + +def unknown_ptr_type(name, structname=None): + if structname is None: + structname = '$$%s' % name + tp = StructType(structname, None, None, None) + return NamedPointerType(tp, name) + + +global_lock = allocate_lock() +_typecache_cffi_backend = weakref.WeakValueDictionary() + +def get_typecache(backend): + # returns _typecache_cffi_backend if backend is the _cffi_backend + # module, or type(backend).__typecache if backend is an instance of + # CTypesBackend (or some FakeBackend class during tests) + if isinstance(backend, types.ModuleType): + return _typecache_cffi_backend + with global_lock: + if not hasattr(type(backend), '__typecache'): + type(backend).__typecache = weakref.WeakValueDictionary() + return type(backend).__typecache + +def global_cache(srctype, ffi, funcname, *args, **kwds): + key = kwds.pop('key', (funcname, args)) + assert not kwds + try: + return ffi._typecache[key] + except KeyError: + pass + try: + res = getattr(ffi._backend, funcname)(*args) + except NotImplementedError as e: + raise NotImplementedError("%s: %r: %s" % (funcname, srctype, e)) + # note that setdefault() on WeakValueDictionary is not atomic + # and contains a rare bug (http://bugs.python.org/issue19542); + # we have to use a lock and do it ourselves + cache = ffi._typecache + with global_lock: + res1 = cache.get(key) + if res1 is None: + cache[key] = res + return res + else: + return res1 + +def pointer_cache(ffi, BType): + return global_cache('?', ffi, 'new_pointer_type', BType) + +def attach_exception_info(e, name): + if e.args and type(e.args[0]) is str: + e.args = ('%s: %s' % (name, e.args[0]),) + e.args[1:] diff --git a/env/lib/python3.8/site-packages/cffi/parse_c_type.h b/env/lib/python3.8/site-packages/cffi/parse_c_type.h new file mode 100644 index 00000000..84e4ef85 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/parse_c_type.h @@ -0,0 +1,181 @@ + +/* This part is from file 'cffi/parse_c_type.h'. It is copied at the + beginning of C sources generated by CFFI's ffi.set_source(). 
*/ + +typedef void *_cffi_opcode_t; + +#define _CFFI_OP(opcode, arg) (_cffi_opcode_t)(opcode | (((uintptr_t)(arg)) << 8)) +#define _CFFI_GETOP(cffi_opcode) ((unsigned char)(uintptr_t)cffi_opcode) +#define _CFFI_GETARG(cffi_opcode) (((intptr_t)cffi_opcode) >> 8) + +#define _CFFI_OP_PRIMITIVE 1 +#define _CFFI_OP_POINTER 3 +#define _CFFI_OP_ARRAY 5 +#define _CFFI_OP_OPEN_ARRAY 7 +#define _CFFI_OP_STRUCT_UNION 9 +#define _CFFI_OP_ENUM 11 +#define _CFFI_OP_FUNCTION 13 +#define _CFFI_OP_FUNCTION_END 15 +#define _CFFI_OP_NOOP 17 +#define _CFFI_OP_BITFIELD 19 +#define _CFFI_OP_TYPENAME 21 +#define _CFFI_OP_CPYTHON_BLTN_V 23 // varargs +#define _CFFI_OP_CPYTHON_BLTN_N 25 // noargs +#define _CFFI_OP_CPYTHON_BLTN_O 27 // O (i.e. a single arg) +#define _CFFI_OP_CONSTANT 29 +#define _CFFI_OP_CONSTANT_INT 31 +#define _CFFI_OP_GLOBAL_VAR 33 +#define _CFFI_OP_DLOPEN_FUNC 35 +#define _CFFI_OP_DLOPEN_CONST 37 +#define _CFFI_OP_GLOBAL_VAR_F 39 +#define _CFFI_OP_EXTERN_PYTHON 41 + +#define _CFFI_PRIM_VOID 0 +#define _CFFI_PRIM_BOOL 1 +#define _CFFI_PRIM_CHAR 2 +#define _CFFI_PRIM_SCHAR 3 +#define _CFFI_PRIM_UCHAR 4 +#define _CFFI_PRIM_SHORT 5 +#define _CFFI_PRIM_USHORT 6 +#define _CFFI_PRIM_INT 7 +#define _CFFI_PRIM_UINT 8 +#define _CFFI_PRIM_LONG 9 +#define _CFFI_PRIM_ULONG 10 +#define _CFFI_PRIM_LONGLONG 11 +#define _CFFI_PRIM_ULONGLONG 12 +#define _CFFI_PRIM_FLOAT 13 +#define _CFFI_PRIM_DOUBLE 14 +#define _CFFI_PRIM_LONGDOUBLE 15 + +#define _CFFI_PRIM_WCHAR 16 +#define _CFFI_PRIM_INT8 17 +#define _CFFI_PRIM_UINT8 18 +#define _CFFI_PRIM_INT16 19 +#define _CFFI_PRIM_UINT16 20 +#define _CFFI_PRIM_INT32 21 +#define _CFFI_PRIM_UINT32 22 +#define _CFFI_PRIM_INT64 23 +#define _CFFI_PRIM_UINT64 24 +#define _CFFI_PRIM_INTPTR 25 +#define _CFFI_PRIM_UINTPTR 26 +#define _CFFI_PRIM_PTRDIFF 27 +#define _CFFI_PRIM_SIZE 28 +#define _CFFI_PRIM_SSIZE 29 +#define _CFFI_PRIM_INT_LEAST8 30 +#define _CFFI_PRIM_UINT_LEAST8 31 +#define _CFFI_PRIM_INT_LEAST16 32 +#define _CFFI_PRIM_UINT_LEAST16 33 +#define _CFFI_PRIM_INT_LEAST32 34 +#define _CFFI_PRIM_UINT_LEAST32 35 +#define _CFFI_PRIM_INT_LEAST64 36 +#define _CFFI_PRIM_UINT_LEAST64 37 +#define _CFFI_PRIM_INT_FAST8 38 +#define _CFFI_PRIM_UINT_FAST8 39 +#define _CFFI_PRIM_INT_FAST16 40 +#define _CFFI_PRIM_UINT_FAST16 41 +#define _CFFI_PRIM_INT_FAST32 42 +#define _CFFI_PRIM_UINT_FAST32 43 +#define _CFFI_PRIM_INT_FAST64 44 +#define _CFFI_PRIM_UINT_FAST64 45 +#define _CFFI_PRIM_INTMAX 46 +#define _CFFI_PRIM_UINTMAX 47 +#define _CFFI_PRIM_FLOATCOMPLEX 48 +#define _CFFI_PRIM_DOUBLECOMPLEX 49 +#define _CFFI_PRIM_CHAR16 50 +#define _CFFI_PRIM_CHAR32 51 + +#define _CFFI__NUM_PRIM 52 +#define _CFFI__UNKNOWN_PRIM (-1) +#define _CFFI__UNKNOWN_FLOAT_PRIM (-2) +#define _CFFI__UNKNOWN_LONG_DOUBLE (-3) + +#define _CFFI__IO_FILE_STRUCT (-1) + + +struct _cffi_global_s { + const char *name; + void *address; + _cffi_opcode_t type_op; + void *size_or_direct_fn; // OP_GLOBAL_VAR: size, or 0 if unknown + // OP_CPYTHON_BLTN_*: addr of direct function +}; + +struct _cffi_getconst_s { + unsigned long long value; + const struct _cffi_type_context_s *ctx; + int gindex; +}; + +struct _cffi_struct_union_s { + const char *name; + int type_index; // -> _cffi_types, on a OP_STRUCT_UNION + int flags; // _CFFI_F_* flags below + size_t size; + int alignment; + int first_field_index; // -> _cffi_fields array + int num_fields; +}; +#define _CFFI_F_UNION 0x01 // is a union, not a struct +#define _CFFI_F_CHECK_FIELDS 0x02 // complain if fields are not in the + // "standard layout" or if some are missing +#define 
_CFFI_F_PACKED 0x04 // for CHECK_FIELDS, assume a packed struct +#define _CFFI_F_EXTERNAL 0x08 // in some other ffi.include() +#define _CFFI_F_OPAQUE 0x10 // opaque + +struct _cffi_field_s { + const char *name; + size_t field_offset; + size_t field_size; + _cffi_opcode_t field_type_op; +}; + +struct _cffi_enum_s { + const char *name; + int type_index; // -> _cffi_types, on a OP_ENUM + int type_prim; // _CFFI_PRIM_xxx + const char *enumerators; // comma-delimited string +}; + +struct _cffi_typename_s { + const char *name; + int type_index; /* if opaque, points to a possibly artificial + OP_STRUCT which is itself opaque */ +}; + +struct _cffi_type_context_s { + _cffi_opcode_t *types; + const struct _cffi_global_s *globals; + const struct _cffi_field_s *fields; + const struct _cffi_struct_union_s *struct_unions; + const struct _cffi_enum_s *enums; + const struct _cffi_typename_s *typenames; + int num_globals; + int num_struct_unions; + int num_enums; + int num_typenames; + const char *const *includes; + int num_types; + int flags; /* future extension */ +}; + +struct _cffi_parse_info_s { + const struct _cffi_type_context_s *ctx; + _cffi_opcode_t *output; + unsigned int output_size; + size_t error_location; + const char *error_message; +}; + +struct _cffi_externpy_s { + const char *name; + size_t size_of_result; + void *reserved1, *reserved2; +}; + +#ifdef _CFFI_INTERNAL +static int parse_c_type(struct _cffi_parse_info_s *info, const char *input); +static int search_in_globals(const struct _cffi_type_context_s *ctx, + const char *search, size_t search_len); +static int search_in_struct_unions(const struct _cffi_type_context_s *ctx, + const char *search, size_t search_len); +#endif diff --git a/env/lib/python3.8/site-packages/cffi/pkgconfig.py b/env/lib/python3.8/site-packages/cffi/pkgconfig.py new file mode 100644 index 00000000..5c93f15a --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/pkgconfig.py @@ -0,0 +1,121 @@ +# pkg-config, https://www.freedesktop.org/wiki/Software/pkg-config/ integration for cffi +import sys, os, subprocess + +from .error import PkgConfigError + + +def merge_flags(cfg1, cfg2): + """Merge values from cffi config flags cfg2 to cf1 + + Example: + merge_flags({"libraries": ["one"]}, {"libraries": ["two"]}) + {"libraries": ["one", "two"]} + """ + for key, value in cfg2.items(): + if key not in cfg1: + cfg1[key] = value + else: + if not isinstance(cfg1[key], list): + raise TypeError("cfg1[%r] should be a list of strings" % (key,)) + if not isinstance(value, list): + raise TypeError("cfg2[%r] should be a list of strings" % (key,)) + cfg1[key].extend(value) + return cfg1 + + +def call(libname, flag, encoding=sys.getfilesystemencoding()): + """Calls pkg-config and returns the output if found + """ + a = ["pkg-config", "--print-errors"] + a.append(flag) + a.append(libname) + try: + pc = subprocess.Popen(a, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + except EnvironmentError as e: + raise PkgConfigError("cannot run pkg-config: %s" % (str(e).strip(),)) + + bout, berr = pc.communicate() + if pc.returncode != 0: + try: + berr = berr.decode(encoding) + except Exception: + pass + raise PkgConfigError(berr.strip()) + + if sys.version_info >= (3,) and not isinstance(bout, str): # Python 3.x + try: + bout = bout.decode(encoding) + except UnicodeDecodeError: + raise PkgConfigError("pkg-config %s %s returned bytes that cannot " + "be decoded with encoding %r:\n%r" % + (flag, libname, encoding, bout)) + + if os.altsep != '\\' and '\\' in bout: + raise 
PkgConfigError("pkg-config %s %s returned an unsupported " + "backslash-escaped output:\n%r" % + (flag, libname, bout)) + return bout + + +def flags_from_pkgconfig(libs): + r"""Return compiler line flags for FFI.set_source based on pkg-config output + + Usage + ... + ffibuilder.set_source("_foo", pkgconfig = ["libfoo", "libbar >= 1.8.3"]) + + If pkg-config is installed on build machine, then arguments include_dirs, + library_dirs, libraries, define_macros, extra_compile_args and + extra_link_args are extended with an output of pkg-config for libfoo and + libbar. + + Raises PkgConfigError in case the pkg-config call fails. + """ + + def get_include_dirs(string): + return [x[2:] for x in string.split() if x.startswith("-I")] + + def get_library_dirs(string): + return [x[2:] for x in string.split() if x.startswith("-L")] + + def get_libraries(string): + return [x[2:] for x in string.split() if x.startswith("-l")] + + # convert -Dfoo=bar to list of tuples [("foo", "bar")] expected by distutils + def get_macros(string): + def _macro(x): + x = x[2:] # drop "-D" + if '=' in x: + return tuple(x.split("=", 1)) # "-Dfoo=bar" => ("foo", "bar") + else: + return (x, None) # "-Dfoo" => ("foo", None) + return [_macro(x) for x in string.split() if x.startswith("-D")] + + def get_other_cflags(string): + return [x for x in string.split() if not x.startswith("-I") and + not x.startswith("-D")] + + def get_other_libs(string): + return [x for x in string.split() if not x.startswith("-L") and + not x.startswith("-l")] + + # return kwargs for given libname + def kwargs(libname): + fse = sys.getfilesystemencoding() + all_cflags = call(libname, "--cflags") + all_libs = call(libname, "--libs") + return { + "include_dirs": get_include_dirs(all_cflags), + "library_dirs": get_library_dirs(all_libs), + "libraries": get_libraries(all_libs), + "define_macros": get_macros(all_cflags), + "extra_compile_args": get_other_cflags(all_cflags), + "extra_link_args": get_other_libs(all_libs), + } + + # merge all arguments together + ret = {} + for libname in libs: + lib_flags = kwargs(libname) + merge_flags(ret, lib_flags) + return ret diff --git a/env/lib/python3.8/site-packages/cffi/recompiler.py b/env/lib/python3.8/site-packages/cffi/recompiler.py new file mode 100644 index 00000000..5d9d32d7 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/recompiler.py @@ -0,0 +1,1581 @@ +import os, sys, io +from . 
import ffiplatform, model +from .error import VerificationError +from .cffi_opcode import * + +VERSION_BASE = 0x2601 +VERSION_EMBEDDED = 0x2701 +VERSION_CHAR16CHAR32 = 0x2801 + +USE_LIMITED_API = (sys.platform != 'win32' or sys.version_info < (3, 0) or + sys.version_info >= (3, 5)) + + +class GlobalExpr: + def __init__(self, name, address, type_op, size=0, check_value=0): + self.name = name + self.address = address + self.type_op = type_op + self.size = size + self.check_value = check_value + + def as_c_expr(self): + return ' { "%s", (void *)%s, %s, (void *)%s },' % ( + self.name, self.address, self.type_op.as_c_expr(), self.size) + + def as_python_expr(self): + return "b'%s%s',%d" % (self.type_op.as_python_bytes(), self.name, + self.check_value) + +class FieldExpr: + def __init__(self, name, field_offset, field_size, fbitsize, field_type_op): + self.name = name + self.field_offset = field_offset + self.field_size = field_size + self.fbitsize = fbitsize + self.field_type_op = field_type_op + + def as_c_expr(self): + spaces = " " * len(self.name) + return (' { "%s", %s,\n' % (self.name, self.field_offset) + + ' %s %s,\n' % (spaces, self.field_size) + + ' %s %s },' % (spaces, self.field_type_op.as_c_expr())) + + def as_python_expr(self): + raise NotImplementedError + + def as_field_python_expr(self): + if self.field_type_op.op == OP_NOOP: + size_expr = '' + elif self.field_type_op.op == OP_BITFIELD: + size_expr = format_four_bytes(self.fbitsize) + else: + raise NotImplementedError + return "b'%s%s%s'" % (self.field_type_op.as_python_bytes(), + size_expr, + self.name) + +class StructUnionExpr: + def __init__(self, name, type_index, flags, size, alignment, comment, + first_field_index, c_fields): + self.name = name + self.type_index = type_index + self.flags = flags + self.size = size + self.alignment = alignment + self.comment = comment + self.first_field_index = first_field_index + self.c_fields = c_fields + + def as_c_expr(self): + return (' { "%s", %d, %s,' % (self.name, self.type_index, self.flags) + + '\n %s, %s, ' % (self.size, self.alignment) + + '%d, %d ' % (self.first_field_index, len(self.c_fields)) + + ('/* %s */ ' % self.comment if self.comment else '') + + '},') + + def as_python_expr(self): + flags = eval(self.flags, G_FLAGS) + fields_expr = [c_field.as_field_python_expr() + for c_field in self.c_fields] + return "(b'%s%s%s',%s)" % ( + format_four_bytes(self.type_index), + format_four_bytes(flags), + self.name, + ','.join(fields_expr)) + +class EnumExpr: + def __init__(self, name, type_index, size, signed, allenums): + self.name = name + self.type_index = type_index + self.size = size + self.signed = signed + self.allenums = allenums + + def as_c_expr(self): + return (' { "%s", %d, _cffi_prim_int(%s, %s),\n' + ' "%s" },' % (self.name, self.type_index, + self.size, self.signed, self.allenums)) + + def as_python_expr(self): + prim_index = { + (1, 0): PRIM_UINT8, (1, 1): PRIM_INT8, + (2, 0): PRIM_UINT16, (2, 1): PRIM_INT16, + (4, 0): PRIM_UINT32, (4, 1): PRIM_INT32, + (8, 0): PRIM_UINT64, (8, 1): PRIM_INT64, + }[self.size, self.signed] + return "b'%s%s%s\\x00%s'" % (format_four_bytes(self.type_index), + format_four_bytes(prim_index), + self.name, self.allenums) + +class TypenameExpr: + def __init__(self, name, type_index): + self.name = name + self.type_index = type_index + + def as_c_expr(self): + return ' { "%s", %d },' % (self.name, self.type_index) + + def as_python_expr(self): + return "b'%s%s'" % (format_four_bytes(self.type_index), self.name) + + +# 
____________________________________________________________ + + +class Recompiler: + _num_externpy = 0 + + def __init__(self, ffi, module_name, target_is_python=False): + self.ffi = ffi + self.module_name = module_name + self.target_is_python = target_is_python + self._version = VERSION_BASE + + def needs_version(self, ver): + self._version = max(self._version, ver) + + def collect_type_table(self): + self._typesdict = {} + self._generate("collecttype") + # + all_decls = sorted(self._typesdict, key=str) + # + # prepare all FUNCTION bytecode sequences first + self.cffi_types = [] + for tp in all_decls: + if tp.is_raw_function: + assert self._typesdict[tp] is None + self._typesdict[tp] = len(self.cffi_types) + self.cffi_types.append(tp) # placeholder + for tp1 in tp.args: + assert isinstance(tp1, (model.VoidType, + model.BasePrimitiveType, + model.PointerType, + model.StructOrUnionOrEnum, + model.FunctionPtrType)) + if self._typesdict[tp1] is None: + self._typesdict[tp1] = len(self.cffi_types) + self.cffi_types.append(tp1) # placeholder + self.cffi_types.append('END') # placeholder + # + # prepare all OTHER bytecode sequences + for tp in all_decls: + if not tp.is_raw_function and self._typesdict[tp] is None: + self._typesdict[tp] = len(self.cffi_types) + self.cffi_types.append(tp) # placeholder + if tp.is_array_type and tp.length is not None: + self.cffi_types.append('LEN') # placeholder + assert None not in self._typesdict.values() + # + # collect all structs and unions and enums + self._struct_unions = {} + self._enums = {} + for tp in all_decls: + if isinstance(tp, model.StructOrUnion): + self._struct_unions[tp] = None + elif isinstance(tp, model.EnumType): + self._enums[tp] = None + for i, tp in enumerate(sorted(self._struct_unions, + key=lambda tp: tp.name)): + self._struct_unions[tp] = i + for i, tp in enumerate(sorted(self._enums, + key=lambda tp: tp.name)): + self._enums[tp] = i + # + # emit all bytecode sequences now + for tp in all_decls: + method = getattr(self, '_emit_bytecode_' + tp.__class__.__name__) + method(tp, self._typesdict[tp]) + # + # consistency check + for op in self.cffi_types: + assert isinstance(op, CffiOp) + self.cffi_types = tuple(self.cffi_types) # don't change any more + + def _enum_fields(self, tp): + # When producing C, expand all anonymous struct/union fields. + # That's necessary to have C code checking the offsets of the + # individual fields contained in them. When producing Python, + # don't do it and instead write it like it is, with the + # corresponding fields having an empty name. Empty names are + # recognized at runtime when we import the generated Python + # file. 
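# Sketch of the linear layout collect_type_table() above gives each raw
# function type inside cffi_types: the function's own slot (whose opcode
# argument points at the result type's index) is followed by one slot per
# argument and an END terminator.  Illustrative names, not cffi's:
table = []
def add_function(result_slot, arg_ops):
    start = len(table)
    table.append(('FUNCTION', result_slot))
    table.extend(arg_ops)
    table.append('FUNCTION_END')
    return start

idx = add_function(7, [('PRIMITIVE', 'int'), ('POINTER', 2)])
assert table[idx] == ('FUNCTION', 7)
assert table[idx + 3] == 'FUNCTION_END'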
+ expand_anonymous_struct_union = not self.target_is_python + return tp.enumfields(expand_anonymous_struct_union) + + def _do_collect_type(self, tp): + if not isinstance(tp, model.BaseTypeByIdentity): + if isinstance(tp, tuple): + for x in tp: + self._do_collect_type(x) + return + if tp not in self._typesdict: + self._typesdict[tp] = None + if isinstance(tp, model.FunctionPtrType): + self._do_collect_type(tp.as_raw_function()) + elif isinstance(tp, model.StructOrUnion): + if tp.fldtypes is not None and ( + tp not in self.ffi._parser._included_declarations): + for name1, tp1, _, _ in self._enum_fields(tp): + self._do_collect_type(self._field_type(tp, name1, tp1)) + else: + for _, x in tp._get_items(): + self._do_collect_type(x) + + def _generate(self, step_name): + lst = self.ffi._parser._declarations.items() + for name, (tp, quals) in sorted(lst): + kind, realname = name.split(' ', 1) + try: + method = getattr(self, '_generate_cpy_%s_%s' % (kind, + step_name)) + except AttributeError: + raise VerificationError( + "not implemented in recompile(): %r" % name) + try: + self._current_quals = quals + method(tp, realname) + except Exception as e: + model.attach_exception_info(e, name) + raise + + # ---------- + + ALL_STEPS = ["global", "field", "struct_union", "enum", "typename"] + + def collect_step_tables(self): + # collect the declarations for '_cffi_globals', '_cffi_typenames', etc. + self._lsts = {} + for step_name in self.ALL_STEPS: + self._lsts[step_name] = [] + self._seen_struct_unions = set() + self._generate("ctx") + self._add_missing_struct_unions() + # + for step_name in self.ALL_STEPS: + lst = self._lsts[step_name] + if step_name != "field": + lst.sort(key=lambda entry: entry.name) + self._lsts[step_name] = tuple(lst) # don't change any more + # + # check for a possible internal inconsistency: _cffi_struct_unions + # should have been generated with exactly self._struct_unions + lst = self._lsts["struct_union"] + for tp, i in self._struct_unions.items(): + assert i < len(lst) + assert lst[i].name == tp.name + assert len(lst) == len(self._struct_unions) + # same with enums + lst = self._lsts["enum"] + for tp, i in self._enums.items(): + assert i < len(lst) + assert lst[i].name == tp.name + assert len(lst) == len(self._enums) + + # ---------- + + def _prnt(self, what=''): + self._f.write(what + '\n') + + def write_source_to_f(self, f, preamble): + if self.target_is_python: + assert preamble is None + self.write_py_source_to_f(f) + else: + assert preamble is not None + self.write_c_source_to_f(f, preamble) + + def _rel_readlines(self, filename): + g = open(os.path.join(os.path.dirname(__file__), filename), 'r') + lines = g.readlines() + g.close() + return lines + + def write_c_source_to_f(self, f, preamble): + self._f = f + prnt = self._prnt + if self.ffi._embedding is not None: + prnt('#define _CFFI_USE_EMBEDDING') + if not USE_LIMITED_API: + prnt('#define _CFFI_NO_LIMITED_API') + # + # first the '#include' (actually done by inlining the file's content) + lines = self._rel_readlines('_cffi_include.h') + i = lines.index('#include "parse_c_type.h"\n') + lines[i:i+1] = self._rel_readlines('parse_c_type.h') + prnt(''.join(lines)) + # + # if we have ffi._embedding != None, we give it here as a macro + # and include an extra file + base_module_name = self.module_name.split('.')[-1] + if self.ffi._embedding is not None: + prnt('#define _CFFI_MODULE_NAME "%s"' % (self.module_name,)) + prnt('static const char _CFFI_PYTHON_STARTUP_CODE[] = {') + 
self._print_string_literal_in_array(self.ffi._embedding) + prnt('0 };') + prnt('#ifdef PYPY_VERSION') + prnt('# define _CFFI_PYTHON_STARTUP_FUNC _cffi_pypyinit_%s' % ( + base_module_name,)) + prnt('#elif PY_MAJOR_VERSION >= 3') + prnt('# define _CFFI_PYTHON_STARTUP_FUNC PyInit_%s' % ( + base_module_name,)) + prnt('#else') + prnt('# define _CFFI_PYTHON_STARTUP_FUNC init%s' % ( + base_module_name,)) + prnt('#endif') + lines = self._rel_readlines('_embedding.h') + i = lines.index('#include "_cffi_errors.h"\n') + lines[i:i+1] = self._rel_readlines('_cffi_errors.h') + prnt(''.join(lines)) + self.needs_version(VERSION_EMBEDDED) + # + # then paste the C source given by the user, verbatim. + prnt('/************************************************************/') + prnt() + prnt(preamble) + prnt() + prnt('/************************************************************/') + prnt() + # + # the declaration of '_cffi_types' + prnt('static void *_cffi_types[] = {') + typeindex2type = dict([(i, tp) for (tp, i) in self._typesdict.items()]) + for i, op in enumerate(self.cffi_types): + comment = '' + if i in typeindex2type: + comment = ' // ' + typeindex2type[i]._get_c_name() + prnt('/* %2d */ %s,%s' % (i, op.as_c_expr(), comment)) + if not self.cffi_types: + prnt(' 0') + prnt('};') + prnt() + # + # call generate_cpy_xxx_decl(), for every xxx found from + # ffi._parser._declarations. This generates all the functions. + self._seen_constants = set() + self._generate("decl") + # + # the declaration of '_cffi_globals' and '_cffi_typenames' + nums = {} + for step_name in self.ALL_STEPS: + lst = self._lsts[step_name] + nums[step_name] = len(lst) + if nums[step_name] > 0: + prnt('static const struct _cffi_%s_s _cffi_%ss[] = {' % ( + step_name, step_name)) + for entry in lst: + prnt(entry.as_c_expr()) + prnt('};') + prnt() + # + # the declaration of '_cffi_includes' + if self.ffi._included_ffis: + prnt('static const char * const _cffi_includes[] = {') + for ffi_to_include in self.ffi._included_ffis: + try: + included_module_name, included_source = ( + ffi_to_include._assigned_source[:2]) + except AttributeError: + raise VerificationError( + "ffi object %r includes %r, but the latter has not " + "been prepared with set_source()" % ( + self.ffi, ffi_to_include,)) + if included_source is None: + raise VerificationError( + "not implemented yet: ffi.include() of a Python-based " + "ffi inside a C-based ffi") + prnt(' "%s",' % (included_module_name,)) + prnt(' NULL') + prnt('};') + prnt() + # + # the declaration of '_cffi_type_context' + prnt('static const struct _cffi_type_context_s _cffi_type_context = {') + prnt(' _cffi_types,') + for step_name in self.ALL_STEPS: + if nums[step_name] > 0: + prnt(' _cffi_%ss,' % step_name) + else: + prnt(' NULL, /* no %ss */' % step_name) + for step_name in self.ALL_STEPS: + if step_name != "field": + prnt(' %d, /* num_%ss */' % (nums[step_name], step_name)) + if self.ffi._included_ffis: + prnt(' _cffi_includes,') + else: + prnt(' NULL, /* no includes */') + prnt(' %d, /* num_types */' % (len(self.cffi_types),)) + flags = 0 + if self._num_externpy > 0 or self.ffi._embedding is not None: + flags |= 1 # set to mean that we use extern "Python" + prnt(' %d, /* flags */' % flags) + prnt('};') + prnt() + # + # the init function + prnt('#ifdef __GNUC__') + prnt('# pragma GCC visibility push(default) /* for -fvisibility= */') + prnt('#endif') + prnt() + prnt('#ifdef PYPY_VERSION') + prnt('PyMODINIT_FUNC') + prnt('_cffi_pypyinit_%s(const void *p[])' % (base_module_name,)) + prnt('{') + if flags & 
1: + prnt(' if (((intptr_t)p[0]) >= 0x0A03) {') + prnt(' _cffi_call_python_org = ' + '(void(*)(struct _cffi_externpy_s *, char *))p[1];') + prnt(' }') + prnt(' p[0] = (const void *)0x%x;' % self._version) + prnt(' p[1] = &_cffi_type_context;') + prnt('#if PY_MAJOR_VERSION >= 3') + prnt(' return NULL;') + prnt('#endif') + prnt('}') + # on Windows, distutils insists on putting init_cffi_xyz in + # 'export_symbols', so instead of fighting it, just give up and + # give it one + prnt('# ifdef _MSC_VER') + prnt(' PyMODINIT_FUNC') + prnt('# if PY_MAJOR_VERSION >= 3') + prnt(' PyInit_%s(void) { return NULL; }' % (base_module_name,)) + prnt('# else') + prnt(' init%s(void) { }' % (base_module_name,)) + prnt('# endif') + prnt('# endif') + prnt('#elif PY_MAJOR_VERSION >= 3') + prnt('PyMODINIT_FUNC') + prnt('PyInit_%s(void)' % (base_module_name,)) + prnt('{') + prnt(' return _cffi_init("%s", 0x%x, &_cffi_type_context);' % ( + self.module_name, self._version)) + prnt('}') + prnt('#else') + prnt('PyMODINIT_FUNC') + prnt('init%s(void)' % (base_module_name,)) + prnt('{') + prnt(' _cffi_init("%s", 0x%x, &_cffi_type_context);' % ( + self.module_name, self._version)) + prnt('}') + prnt('#endif') + prnt() + prnt('#ifdef __GNUC__') + prnt('# pragma GCC visibility pop') + prnt('#endif') + self._version = None + + def _to_py(self, x): + if isinstance(x, str): + return "b'%s'" % (x,) + if isinstance(x, (list, tuple)): + rep = [self._to_py(item) for item in x] + if len(rep) == 1: + rep.append('') + return "(%s)" % (','.join(rep),) + return x.as_python_expr() # Py2: unicode unexpected; Py3: bytes unexp. + + def write_py_source_to_f(self, f): + self._f = f + prnt = self._prnt + # + # header + prnt("# auto-generated file") + prnt("import _cffi_backend") + # + # the 'import' of the included ffis + num_includes = len(self.ffi._included_ffis or ()) + for i in range(num_includes): + ffi_to_include = self.ffi._included_ffis[i] + try: + included_module_name, included_source = ( + ffi_to_include._assigned_source[:2]) + except AttributeError: + raise VerificationError( + "ffi object %r includes %r, but the latter has not " + "been prepared with set_source()" % ( + self.ffi, ffi_to_include,)) + if included_source is not None: + raise VerificationError( + "not implemented yet: ffi.include() of a C-based " + "ffi inside a Python-based ffi") + prnt('from %s import ffi as _ffi%d' % (included_module_name, i)) + prnt() + prnt("ffi = _cffi_backend.FFI('%s'," % (self.module_name,)) + prnt(" _version = 0x%x," % (self._version,)) + self._version = None + # + # the '_types' keyword argument + self.cffi_types = tuple(self.cffi_types) # don't change any more + types_lst = [op.as_python_bytes() for op in self.cffi_types] + prnt(' _types = %s,' % (self._to_py(''.join(types_lst)),)) + typeindex2type = dict([(i, tp) for (tp, i) in self._typesdict.items()]) + # + # the keyword arguments from ALL_STEPS + for step_name in self.ALL_STEPS: + lst = self._lsts[step_name] + if len(lst) > 0 and step_name != "field": + prnt(' _%ss = %s,' % (step_name, self._to_py(lst))) + # + # the '_includes' keyword argument + if num_includes > 0: + prnt(' _includes = (%s,),' % ( + ', '.join(['_ffi%d' % i for i in range(num_includes)]),)) + # + # the footer + prnt(')') + + # ---------- + + def _gettypenum(self, type): + # a KeyError here is a bug. please report it! 
:-) + return self._typesdict[type] + + def _convert_funcarg_to_c(self, tp, fromvar, tovar, errcode): + extraarg = '' + if isinstance(tp, model.BasePrimitiveType) and not tp.is_complex_type(): + if tp.is_integer_type() and tp.name != '_Bool': + converter = '_cffi_to_c_int' + extraarg = ', %s' % tp.name + elif isinstance(tp, model.UnknownFloatType): + # don't check with is_float_type(): it may be a 'long + # double' here, and _cffi_to_c_double would loose precision + converter = '(%s)_cffi_to_c_double' % (tp.get_c_name(''),) + else: + cname = tp.get_c_name('') + converter = '(%s)_cffi_to_c_%s' % (cname, + tp.name.replace(' ', '_')) + if cname in ('char16_t', 'char32_t'): + self.needs_version(VERSION_CHAR16CHAR32) + errvalue = '-1' + # + elif isinstance(tp, model.PointerType): + self._convert_funcarg_to_c_ptr_or_array(tp, fromvar, + tovar, errcode) + return + # + elif (isinstance(tp, model.StructOrUnionOrEnum) or + isinstance(tp, model.BasePrimitiveType)): + # a struct (not a struct pointer) as a function argument; + # or, a complex (the same code works) + self._prnt(' if (_cffi_to_c((char *)&%s, _cffi_type(%d), %s) < 0)' + % (tovar, self._gettypenum(tp), fromvar)) + self._prnt(' %s;' % errcode) + return + # + elif isinstance(tp, model.FunctionPtrType): + converter = '(%s)_cffi_to_c_pointer' % tp.get_c_name('') + extraarg = ', _cffi_type(%d)' % self._gettypenum(tp) + errvalue = 'NULL' + # + else: + raise NotImplementedError(tp) + # + self._prnt(' %s = %s(%s%s);' % (tovar, converter, fromvar, extraarg)) + self._prnt(' if (%s == (%s)%s && PyErr_Occurred())' % ( + tovar, tp.get_c_name(''), errvalue)) + self._prnt(' %s;' % errcode) + + def _extra_local_variables(self, tp, localvars, freelines): + if isinstance(tp, model.PointerType): + localvars.add('Py_ssize_t datasize') + localvars.add('struct _cffi_freeme_s *large_args_free = NULL') + freelines.add('if (large_args_free != NULL)' + ' _cffi_free_array_arguments(large_args_free);') + + def _convert_funcarg_to_c_ptr_or_array(self, tp, fromvar, tovar, errcode): + self._prnt(' datasize = _cffi_prepare_pointer_call_argument(') + self._prnt(' _cffi_type(%d), %s, (char **)&%s);' % ( + self._gettypenum(tp), fromvar, tovar)) + self._prnt(' if (datasize != 0) {') + self._prnt(' %s = ((size_t)datasize) <= 640 ? 
' + '(%s)alloca((size_t)datasize) : NULL;' % ( + tovar, tp.get_c_name(''))) + self._prnt(' if (_cffi_convert_array_argument(_cffi_type(%d), %s, ' + '(char **)&%s,' % (self._gettypenum(tp), fromvar, tovar)) + self._prnt(' datasize, &large_args_free) < 0)') + self._prnt(' %s;' % errcode) + self._prnt(' }') + + def _convert_expr_from_c(self, tp, var, context): + if isinstance(tp, model.BasePrimitiveType): + if tp.is_integer_type() and tp.name != '_Bool': + return '_cffi_from_c_int(%s, %s)' % (var, tp.name) + elif isinstance(tp, model.UnknownFloatType): + return '_cffi_from_c_double(%s)' % (var,) + elif tp.name != 'long double' and not tp.is_complex_type(): + cname = tp.name.replace(' ', '_') + if cname in ('char16_t', 'char32_t'): + self.needs_version(VERSION_CHAR16CHAR32) + return '_cffi_from_c_%s(%s)' % (cname, var) + else: + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, (model.PointerType, model.FunctionPtrType)): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.ArrayType): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(model.PointerType(tp.item))) + elif isinstance(tp, model.StructOrUnion): + if tp.fldnames is None: + raise TypeError("'%s' is used as %s, but is opaque" % ( + tp._get_c_name(), context)) + return '_cffi_from_c_struct((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.EnumType): + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + else: + raise NotImplementedError(tp) + + # ---------- + # typedefs + + def _typedef_type(self, tp, name): + return self._global_type(tp, "(*(%s *)0)" % (name,)) + + def _generate_cpy_typedef_collecttype(self, tp, name): + self._do_collect_type(self._typedef_type(tp, name)) + + def _generate_cpy_typedef_decl(self, tp, name): + pass + + def _typedef_ctx(self, tp, name): + type_index = self._typesdict[tp] + self._lsts["typename"].append(TypenameExpr(name, type_index)) + + def _generate_cpy_typedef_ctx(self, tp, name): + tp = self._typedef_type(tp, name) + self._typedef_ctx(tp, name) + if getattr(tp, "origin", None) == "unknown_type": + self._struct_ctx(tp, tp.name, approxname=None) + elif isinstance(tp, model.NamedPointerType): + self._struct_ctx(tp.totype, tp.totype.name, approxname=tp.name, + named_ptr=tp) + + # ---------- + # function declarations + + def _generate_cpy_function_collecttype(self, tp, name): + self._do_collect_type(tp.as_raw_function()) + if tp.ellipsis and not self.target_is_python: + self._do_collect_type(tp) + + def _generate_cpy_function_decl(self, tp, name): + assert not self.target_is_python + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + # cannot support vararg functions better than this: check for its + # exact type (including the fixed arguments), and build it as a + # constant function pointer (no CPython wrapper) + self._generate_cpy_constant_decl(tp, name) + return + prnt = self._prnt + numargs = len(tp.args) + if numargs == 0: + argname = 'noarg' + elif numargs == 1: + argname = 'arg0' + else: + argname = 'args' + # + # ------------------------------ + # the 'd' version of the function, only for addressof(lib, 'func') + arguments = [] + call_arguments = [] + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + arguments.append(type.get_c_name(' x%d' % i, context)) + call_arguments.append('x%d' % i) + repr_arguments = ', 
'.join(arguments) + repr_arguments = repr_arguments or 'void' + if tp.abi: + abi = tp.abi + ' ' + else: + abi = '' + name_and_arguments = '%s_cffi_d_%s(%s)' % (abi, name, repr_arguments) + prnt('static %s' % (tp.result.get_c_name(name_and_arguments),)) + prnt('{') + call_arguments = ', '.join(call_arguments) + result_code = 'return ' + if isinstance(tp.result, model.VoidType): + result_code = '' + prnt(' %s%s(%s);' % (result_code, name, call_arguments)) + prnt('}') + # + prnt('#ifndef PYPY_VERSION') # ------------------------------ + # + prnt('static PyObject *') + prnt('_cffi_f_%s(PyObject *self, PyObject *%s)' % (name, argname)) + prnt('{') + # + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + arg = type.get_c_name(' x%d' % i, context) + prnt(' %s;' % arg) + # + localvars = set() + freelines = set() + for type in tp.args: + self._extra_local_variables(type, localvars, freelines) + for decl in sorted(localvars): + prnt(' %s;' % (decl,)) + # + if not isinstance(tp.result, model.VoidType): + result_code = 'result = ' + context = 'result of %s' % name + result_decl = ' %s;' % tp.result.get_c_name(' result', context) + prnt(result_decl) + prnt(' PyObject *pyresult;') + else: + result_decl = None + result_code = '' + # + if len(tp.args) > 1: + rng = range(len(tp.args)) + for i in rng: + prnt(' PyObject *arg%d;' % i) + prnt() + prnt(' if (!PyArg_UnpackTuple(args, "%s", %d, %d, %s))' % ( + name, len(rng), len(rng), + ', '.join(['&arg%d' % i for i in rng]))) + prnt(' return NULL;') + prnt() + # + for i, type in enumerate(tp.args): + self._convert_funcarg_to_c(type, 'arg%d' % i, 'x%d' % i, + 'return NULL') + prnt() + # + prnt(' Py_BEGIN_ALLOW_THREADS') + prnt(' _cffi_restore_errno();') + call_arguments = ['x%d' % i for i in range(len(tp.args))] + call_arguments = ', '.join(call_arguments) + prnt(' { %s%s(%s); }' % (result_code, name, call_arguments)) + prnt(' _cffi_save_errno();') + prnt(' Py_END_ALLOW_THREADS') + prnt() + # + prnt(' (void)self; /* unused */') + if numargs == 0: + prnt(' (void)noarg; /* unused */') + if result_code: + prnt(' pyresult = %s;' % + self._convert_expr_from_c(tp.result, 'result', 'result type')) + for freeline in freelines: + prnt(' ' + freeline) + prnt(' return pyresult;') + else: + for freeline in freelines: + prnt(' ' + freeline) + prnt(' Py_INCREF(Py_None);') + prnt(' return Py_None;') + prnt('}') + # + prnt('#else') # ------------------------------ + # + # the PyPy version: need to replace struct/union arguments with + # pointers, and if the result is a struct/union, insert a first + # arg that is a pointer to the result. We also do that for + # complex args and return type. 
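# In other words, on PyPy a signature whose arguments or result need
# indirection is rewritten so structs/unions (and complex numbers) travel
# by pointer, a struct result becoming a leading out-pointer.  A toy
# renderer of that rewrite (hypothetical names, not cffi's API):
def indirect_signature(result_type, arg_types, needs_indirection):
    args = ['%s *x%d' % (t, i) if needs_indirection(t) else '%s x%d' % (t, i)
            for i, t in enumerate(arg_types)]
    if needs_indirection(result_type):
        args.insert(0, '%s *result' % result_type)
        result_type = 'void'
    return '%s f(%s)' % (result_type, ', '.join(args) or 'void')

is_struct = lambda t: t.startswith('struct ')
assert indirect_signature('struct S', ['struct S', 'int'], is_struct) == \
    'void f(struct S *result, struct S *x0, int x1)'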
+ def need_indirection(type): + return (isinstance(type, model.StructOrUnion) or + (isinstance(type, model.PrimitiveType) and + type.is_complex_type())) + difference = False + arguments = [] + call_arguments = [] + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + indirection = '' + if need_indirection(type): + indirection = '*' + difference = True + arg = type.get_c_name(' %sx%d' % (indirection, i), context) + arguments.append(arg) + call_arguments.append('%sx%d' % (indirection, i)) + tp_result = tp.result + if need_indirection(tp_result): + context = 'result of %s' % name + arg = tp_result.get_c_name(' *result', context) + arguments.insert(0, arg) + tp_result = model.void_type + result_decl = None + result_code = '*result = ' + difference = True + if difference: + repr_arguments = ', '.join(arguments) + repr_arguments = repr_arguments or 'void' + name_and_arguments = '%s_cffi_f_%s(%s)' % (abi, name, + repr_arguments) + prnt('static %s' % (tp_result.get_c_name(name_and_arguments),)) + prnt('{') + if result_decl: + prnt(result_decl) + call_arguments = ', '.join(call_arguments) + prnt(' { %s%s(%s); }' % (result_code, name, call_arguments)) + if result_decl: + prnt(' return result;') + prnt('}') + else: + prnt('# define _cffi_f_%s _cffi_d_%s' % (name, name)) + # + prnt('#endif') # ------------------------------ + prnt() + + def _generate_cpy_function_ctx(self, tp, name): + if tp.ellipsis and not self.target_is_python: + self._generate_cpy_constant_ctx(tp, name) + return + type_index = self._typesdict[tp.as_raw_function()] + numargs = len(tp.args) + if self.target_is_python: + meth_kind = OP_DLOPEN_FUNC + elif numargs == 0: + meth_kind = OP_CPYTHON_BLTN_N # 'METH_NOARGS' + elif numargs == 1: + meth_kind = OP_CPYTHON_BLTN_O # 'METH_O' + else: + meth_kind = OP_CPYTHON_BLTN_V # 'METH_VARARGS' + self._lsts["global"].append( + GlobalExpr(name, '_cffi_f_%s' % name, + CffiOp(meth_kind, type_index), + size='_cffi_d_%s' % name)) + + # ---------- + # named structs or unions + + def _field_type(self, tp_struct, field_name, tp_field): + if isinstance(tp_field, model.ArrayType): + actual_length = tp_field.length + if actual_length == '...': + ptr_struct_name = tp_struct.get_c_name('*') + actual_length = '_cffi_array_len(((%s)0)->%s)' % ( + ptr_struct_name, field_name) + tp_item = self._field_type(tp_struct, '%s[0]' % field_name, + tp_field.item) + tp_field = model.ArrayType(tp_item, actual_length) + return tp_field + + def _struct_collecttype(self, tp): + self._do_collect_type(tp) + if self.target_is_python: + # also requires nested anon struct/unions in ABI mode, recursively + for fldtype in tp.anonymous_struct_fields(): + self._struct_collecttype(fldtype) + + def _struct_decl(self, tp, cname, approxname): + if tp.fldtypes is None: + return + prnt = self._prnt + checkfuncname = '_cffi_checkfld_%s' % (approxname,) + prnt('_CFFI_UNUSED_FN') + prnt('static void %s(%s *p)' % (checkfuncname, cname)) + prnt('{') + prnt(' /* only to generate compile-time warnings or errors */') + prnt(' (void)p;') + for fname, ftype, fbitsize, fqual in self._enum_fields(tp): + try: + if ftype.is_integer_type() or fbitsize >= 0: + # accept all integers, but complain on float or double + if fname != '': + prnt(" (void)((p->%s) | 0); /* check that '%s.%s' is " + "an integer */" % (fname, cname, fname)) + continue + # only accept exactly the type declared, except that '[]' + # is interpreted as a '*' and so will match any array length. + # (It would also match '*', but that's harder to detect...) 
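# The while loop that follows peels unknown-length array layers, so a
# field declared 'T x[]' or 'T x[...]' is checked like 'T *'.  A
# self-contained miniature using toy ('array', item, length) tuples:
def peel_unknown_arrays(ctype, fname):
    while isinstance(ctype, tuple) and ctype[2] in (None, '...'):
        ctype, fname = ctype[1], fname + '[0]'
    return ctype, fname

assert peel_unknown_arrays(('array', 'int', None), 'buf') == ('int', 'buf[0]')
assert peel_unknown_arrays(('array', ('array', 'int', '...'), None), 'm') == \
    ('int', 'm[0][0]')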
+ while (isinstance(ftype, model.ArrayType) + and (ftype.length is None or ftype.length == '...')): + ftype = ftype.item + fname = fname + '[0]' + prnt(' { %s = &p->%s; (void)tmp; }' % ( + ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), + fname)) + except VerificationError as e: + prnt(' /* %s */' % str(e)) # cannot verify it, ignore + prnt('}') + prnt('struct _cffi_align_%s { char x; %s y; };' % (approxname, cname)) + prnt() + + def _struct_ctx(self, tp, cname, approxname, named_ptr=None): + type_index = self._typesdict[tp] + reason_for_not_expanding = None + flags = [] + if isinstance(tp, model.UnionType): + flags.append("_CFFI_F_UNION") + if tp.fldtypes is None: + flags.append("_CFFI_F_OPAQUE") + reason_for_not_expanding = "opaque" + if (tp not in self.ffi._parser._included_declarations and + (named_ptr is None or + named_ptr not in self.ffi._parser._included_declarations)): + if tp.fldtypes is None: + pass # opaque + elif tp.partial or any(tp.anonymous_struct_fields()): + pass # field layout obtained silently from the C compiler + else: + flags.append("_CFFI_F_CHECK_FIELDS") + if tp.packed: + if tp.packed > 1: + raise NotImplementedError( + "%r is declared with 'pack=%r'; only 0 or 1 are " + "supported in API mode (try to use \"...;\", which " + "does not require a 'pack' declaration)" % + (tp, tp.packed)) + flags.append("_CFFI_F_PACKED") + else: + flags.append("_CFFI_F_EXTERNAL") + reason_for_not_expanding = "external" + flags = '|'.join(flags) or '0' + c_fields = [] + if reason_for_not_expanding is None: + enumfields = list(self._enum_fields(tp)) + for fldname, fldtype, fbitsize, fqual in enumfields: + fldtype = self._field_type(tp, fldname, fldtype) + self._check_not_opaque(fldtype, + "field '%s.%s'" % (tp.name, fldname)) + # cname is None for _add_missing_struct_unions() only + op = OP_NOOP + if fbitsize >= 0: + op = OP_BITFIELD + size = '%d /* bits */' % fbitsize + elif cname is None or ( + isinstance(fldtype, model.ArrayType) and + fldtype.length is None): + size = '(size_t)-1' + else: + size = 'sizeof(((%s)0)->%s)' % ( + tp.get_c_name('*') if named_ptr is None + else named_ptr.name, + fldname) + if cname is None or fbitsize >= 0: + offset = '(size_t)-1' + elif named_ptr is not None: + offset = '((char *)&((%s)0)->%s) - (char *)0' % ( + named_ptr.name, fldname) + else: + offset = 'offsetof(%s, %s)' % (tp.get_c_name(''), fldname) + c_fields.append( + FieldExpr(fldname, offset, size, fbitsize, + CffiOp(op, self._typesdict[fldtype]))) + first_field_index = len(self._lsts["field"]) + self._lsts["field"].extend(c_fields) + # + if cname is None: # unknown name, for _add_missing_struct_unions + size = '(size_t)-2' + align = -2 + comment = "unnamed" + else: + if named_ptr is not None: + size = 'sizeof(*(%s)0)' % (named_ptr.name,) + align = '-1 /* unknown alignment */' + else: + size = 'sizeof(%s)' % (cname,) + align = 'offsetof(struct _cffi_align_%s, y)' % (approxname,) + comment = None + else: + size = '(size_t)-1' + align = -1 + first_field_index = -1 + comment = reason_for_not_expanding + self._lsts["struct_union"].append( + StructUnionExpr(tp.name, type_index, flags, size, align, comment, + first_field_index, c_fields)) + self._seen_struct_unions.add(tp) + + def _check_not_opaque(self, tp, location): + while isinstance(tp, model.ArrayType): + tp = tp.item + if isinstance(tp, model.StructOrUnion) and tp.fldtypes is None: + raise TypeError( + "%s is of an opaque type (not declared in cdef())" % location) + + def _add_missing_struct_unions(self): + # not very nice, but some 
struct declarations might be missing + # because they don't have any known C name. Check that they are + # not partial (we can't complete or verify them!) and emit them + # anonymously. + lst = list(self._struct_unions.items()) + lst.sort(key=lambda tp_order: tp_order[1]) + for tp, order in lst: + if tp not in self._seen_struct_unions: + if tp.partial: + raise NotImplementedError("internal inconsistency: %r is " + "partial but was not seen at " + "this point" % (tp,)) + if tp.name.startswith('$') and tp.name[1:].isdigit(): + approxname = tp.name[1:] + elif tp.name == '_IO_FILE' and tp.forcename == 'FILE': + approxname = 'FILE' + self._typedef_ctx(tp, 'FILE') + else: + raise NotImplementedError("internal inconsistency: %r" % + (tp,)) + self._struct_ctx(tp, None, approxname) + + def _generate_cpy_struct_collecttype(self, tp, name): + self._struct_collecttype(tp) + _generate_cpy_union_collecttype = _generate_cpy_struct_collecttype + + def _struct_names(self, tp): + cname = tp.get_c_name('') + if ' ' in cname: + return cname, cname.replace(' ', '_') + else: + return cname, '_' + cname + + def _generate_cpy_struct_decl(self, tp, name): + self._struct_decl(tp, *self._struct_names(tp)) + _generate_cpy_union_decl = _generate_cpy_struct_decl + + def _generate_cpy_struct_ctx(self, tp, name): + self._struct_ctx(tp, *self._struct_names(tp)) + _generate_cpy_union_ctx = _generate_cpy_struct_ctx + + # ---------- + # 'anonymous' declarations. These are produced for anonymous structs + # or unions; the 'name' is obtained by a typedef. + + def _generate_cpy_anonymous_collecttype(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_cpy_enum_collecttype(tp, name) + else: + self._struct_collecttype(tp) + + def _generate_cpy_anonymous_decl(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_cpy_enum_decl(tp) + else: + self._struct_decl(tp, name, 'typedef_' + name) + + def _generate_cpy_anonymous_ctx(self, tp, name): + if isinstance(tp, model.EnumType): + self._enum_ctx(tp, name) + else: + self._struct_ctx(tp, name, 'typedef_' + name) + + # ---------- + # constants, declared with "static const ..." 
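# _generate_cpy_const() below emits a C helper that returns any integer
# constant through an 'unsigned long long *' plus a flag recording whether
# the value was <= 0; the runtime reconstructs the signed value from the
# pair.  The round trip, mirrored with hypothetical Python names:
def encode_const(value):
    negative = value <= 0                  # the 'n' flag in the C helper
    return value & (2**64 - 1), negative   # cast to unsigned long long

def decode_const(bits, negative):
    return bits - 2**64 if negative and bits >= 2**63 else bits

assert decode_const(*encode_const(-1)) == -1
assert decode_const(*encode_const(42)) == 42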
+ + def _generate_cpy_const(self, is_int, name, tp=None, category='const', + check_value=None): + if (category, name) in self._seen_constants: + raise VerificationError( + "duplicate declaration of %s '%s'" % (category, name)) + self._seen_constants.add((category, name)) + # + prnt = self._prnt + funcname = '_cffi_%s_%s' % (category, name) + if is_int: + prnt('static int %s(unsigned long long *o)' % funcname) + prnt('{') + prnt(' int n = (%s) <= 0;' % (name,)) + prnt(' *o = (unsigned long long)((%s) | 0);' + ' /* check that %s is an integer */' % (name, name)) + if check_value is not None: + if check_value > 0: + check_value = '%dU' % (check_value,) + prnt(' if (!_cffi_check_int(*o, n, %s))' % (check_value,)) + prnt(' n |= 2;') + prnt(' return n;') + prnt('}') + else: + assert check_value is None + prnt('static void %s(char *o)' % funcname) + prnt('{') + prnt(' *(%s)o = %s;' % (tp.get_c_name('*'), name)) + prnt('}') + prnt() + + def _generate_cpy_constant_collecttype(self, tp, name): + is_int = tp.is_integer_type() + if not is_int or self.target_is_python: + self._do_collect_type(tp) + + def _generate_cpy_constant_decl(self, tp, name): + is_int = tp.is_integer_type() + self._generate_cpy_const(is_int, name, tp) + + def _generate_cpy_constant_ctx(self, tp, name): + if not self.target_is_python and tp.is_integer_type(): + type_op = CffiOp(OP_CONSTANT_INT, -1) + else: + if self.target_is_python: + const_kind = OP_DLOPEN_CONST + else: + const_kind = OP_CONSTANT + type_index = self._typesdict[tp] + type_op = CffiOp(const_kind, type_index) + self._lsts["global"].append( + GlobalExpr(name, '_cffi_const_%s' % name, type_op)) + + # ---------- + # enums + + def _generate_cpy_enum_collecttype(self, tp, name): + self._do_collect_type(tp) + + def _generate_cpy_enum_decl(self, tp, name=None): + for enumerator in tp.enumerators: + self._generate_cpy_const(True, enumerator) + + def _enum_ctx(self, tp, cname): + type_index = self._typesdict[tp] + type_op = CffiOp(OP_ENUM, -1) + if self.target_is_python: + tp.check_not_partial() + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + self._lsts["global"].append( + GlobalExpr(enumerator, '_cffi_const_%s' % enumerator, type_op, + check_value=enumvalue)) + # + if cname is not None and '$' not in cname and not self.target_is_python: + size = "sizeof(%s)" % cname + signed = "((%s)-1) <= 0" % cname + else: + basetp = tp.build_baseinttype(self.ffi, []) + size = self.ffi.sizeof(basetp) + signed = int(int(self.ffi.cast(basetp, -1)) < 0) + allenums = ",".join(tp.enumerators) + self._lsts["enum"].append( + EnumExpr(tp.name, type_index, size, signed, allenums)) + + def _generate_cpy_enum_ctx(self, tp, name): + self._enum_ctx(tp, tp._get_c_name()) + + # ---------- + # macros: for now only for integers + + def _generate_cpy_macro_collecttype(self, tp, name): + pass + + def _generate_cpy_macro_decl(self, tp, name): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + self._generate_cpy_const(True, name, check_value=check_value) + + def _generate_cpy_macro_ctx(self, tp, name): + if tp == '...': + if self.target_is_python: + raise VerificationError( + "cannot use the syntax '...' in '#define %s ...' 
when " + "using the ABI mode" % (name,)) + check_value = None + else: + check_value = tp # an integer + type_op = CffiOp(OP_CONSTANT_INT, -1) + self._lsts["global"].append( + GlobalExpr(name, '_cffi_const_%s' % name, type_op, + check_value=check_value)) + + # ---------- + # global variables + + def _global_type(self, tp, global_name): + if isinstance(tp, model.ArrayType): + actual_length = tp.length + if actual_length == '...': + actual_length = '_cffi_array_len(%s)' % (global_name,) + tp_item = self._global_type(tp.item, '%s[0]' % global_name) + tp = model.ArrayType(tp_item, actual_length) + return tp + + def _generate_cpy_variable_collecttype(self, tp, name): + self._do_collect_type(self._global_type(tp, name)) + + def _generate_cpy_variable_decl(self, tp, name): + prnt = self._prnt + tp = self._global_type(tp, name) + if isinstance(tp, model.ArrayType) and tp.length is None: + tp = tp.item + ampersand = '' + else: + ampersand = '&' + # This code assumes that casts from "tp *" to "void *" is a + # no-op, i.e. a function that returns a "tp *" can be called + # as if it returned a "void *". This should be generally true + # on any modern machine. The only exception to that rule (on + # uncommon architectures, and as far as I can tell) might be + # if 'tp' were a function type, but that is not possible here. + # (If 'tp' is a function _pointer_ type, then casts from "fn_t + # **" to "void *" are again no-ops, as far as I can tell.) + decl = '*_cffi_var_%s(void)' % (name,) + prnt('static ' + tp.get_c_name(decl, quals=self._current_quals)) + prnt('{') + prnt(' return %s(%s);' % (ampersand, name)) + prnt('}') + prnt() + + def _generate_cpy_variable_ctx(self, tp, name): + tp = self._global_type(tp, name) + type_index = self._typesdict[tp] + if self.target_is_python: + op = OP_GLOBAL_VAR + else: + op = OP_GLOBAL_VAR_F + self._lsts["global"].append( + GlobalExpr(name, '_cffi_var_%s' % name, CffiOp(op, type_index))) + + # ---------- + # extern "Python" + + def _generate_cpy_extern_python_collecttype(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + self._do_collect_type(tp) + _generate_cpy_dllexport_python_collecttype = \ + _generate_cpy_extern_python_plus_c_collecttype = \ + _generate_cpy_extern_python_collecttype + + def _extern_python_decl(self, tp, name, tag_and_space): + prnt = self._prnt + if isinstance(tp.result, model.VoidType): + size_of_result = '0' + else: + context = 'result of %s' % name + size_of_result = '(int)sizeof(%s)' % ( + tp.result.get_c_name('', context),) + prnt('static struct _cffi_externpy_s _cffi_externpy__%s =' % name) + prnt(' { "%s.%s", %s, 0, 0 };' % ( + self.module_name, name, size_of_result)) + prnt() + # + arguments = [] + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + arg = type.get_c_name(' a%d' % i, context) + arguments.append(arg) + # + repr_arguments = ', '.join(arguments) + repr_arguments = repr_arguments or 'void' + name_and_arguments = '%s(%s)' % (name, repr_arguments) + if tp.abi == "__stdcall": + name_and_arguments = '_cffi_stdcall ' + name_and_arguments + # + def may_need_128_bits(tp): + return (isinstance(tp, model.PrimitiveType) and + tp.name == 'long double') + # + size_of_a = max(len(tp.args)*8, 8) + if may_need_128_bits(tp.result): + size_of_a = max(size_of_a, 16) + if isinstance(tp.result, model.StructOrUnion): + size_of_a = 'sizeof(%s) > %d ? 
sizeof(%s) : %d' % ( + tp.result.get_c_name(''), size_of_a, + tp.result.get_c_name(''), size_of_a) + prnt('%s%s' % (tag_and_space, tp.result.get_c_name(name_and_arguments))) + prnt('{') + prnt(' char a[%s];' % size_of_a) + prnt(' char *p = a;') + for i, type in enumerate(tp.args): + arg = 'a%d' % i + if (isinstance(type, model.StructOrUnion) or + may_need_128_bits(type)): + arg = '&' + arg + type = model.PointerType(type) + prnt(' *(%s)(p + %d) = %s;' % (type.get_c_name('*'), i*8, arg)) + prnt(' _cffi_call_python(&_cffi_externpy__%s, p);' % name) + if not isinstance(tp.result, model.VoidType): + prnt(' return *(%s)p;' % (tp.result.get_c_name('*'),)) + prnt('}') + prnt() + self._num_externpy += 1 + + def _generate_cpy_extern_python_decl(self, tp, name): + self._extern_python_decl(tp, name, 'static ') + + def _generate_cpy_dllexport_python_decl(self, tp, name): + self._extern_python_decl(tp, name, 'CFFI_DLLEXPORT ') + + def _generate_cpy_extern_python_plus_c_decl(self, tp, name): + self._extern_python_decl(tp, name, '') + + def _generate_cpy_extern_python_ctx(self, tp, name): + if self.target_is_python: + raise VerificationError( + "cannot use 'extern \"Python\"' in the ABI mode") + if tp.ellipsis: + raise NotImplementedError("a vararg function is extern \"Python\"") + type_index = self._typesdict[tp] + type_op = CffiOp(OP_EXTERN_PYTHON, type_index) + self._lsts["global"].append( + GlobalExpr(name, '&_cffi_externpy__%s' % name, type_op, name)) + + _generate_cpy_dllexport_python_ctx = \ + _generate_cpy_extern_python_plus_c_ctx = \ + _generate_cpy_extern_python_ctx + + def _print_string_literal_in_array(self, s): + prnt = self._prnt + prnt('// # NB. this is not a string because of a size limit in MSVC') + if not isinstance(s, bytes): # unicode + s = s.encode('utf-8') # -> bytes + else: + s.decode('utf-8') # got bytes, check for valid utf-8 + try: + s.decode('ascii') + except UnicodeDecodeError: + s = b'# -*- encoding: utf8 -*-\n' + s + for line in s.splitlines(True): + comment = line + if type('//') is bytes: # python2 + line = map(ord, line) # make a list of integers + else: # python3 + # type(line) is bytes, which enumerates like a list of integers + comment = ascii(comment)[1:-1] + prnt(('// ' + comment).rstrip()) + printed_line = '' + for c in line: + if len(printed_line) >= 76: + prnt(printed_line) + printed_line = '' + printed_line += '%d,' % (c,) + prnt(printed_line) + + # ---------- + # emitting the opcodes for individual types + + def _emit_bytecode_VoidType(self, tp, index): + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, PRIM_VOID) + + def _emit_bytecode_PrimitiveType(self, tp, index): + prim_index = PRIMITIVE_TO_INDEX[tp.name] + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, prim_index) + + def _emit_bytecode_UnknownIntegerType(self, tp, index): + s = ('_cffi_prim_int(sizeof(%s), (\n' + ' ((%s)-1) | 0 /* check that %s is an integer type */\n' + ' ) <= 0)' % (tp.name, tp.name, tp.name)) + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, s) + + def _emit_bytecode_UnknownFloatType(self, tp, index): + s = ('_cffi_prim_float(sizeof(%s) *\n' + ' (((%s)1) / 2) * 2 /* integer => 0, float => 1 */\n' + ' )' % (tp.name, tp.name)) + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, s) + + def _emit_bytecode_RawFunctionType(self, tp, index): + self.cffi_types[index] = CffiOp(OP_FUNCTION, self._typesdict[tp.result]) + index += 1 + for tp1 in tp.args: + realindex = self._typesdict[tp1] + if index != realindex: + if isinstance(tp1, model.PrimitiveType): + self._emit_bytecode_PrimitiveType(tp1, index) + 
else: + self.cffi_types[index] = CffiOp(OP_NOOP, realindex) + index += 1 + flags = int(tp.ellipsis) + if tp.abi is not None: + if tp.abi == '__stdcall': + flags |= 2 + else: + raise NotImplementedError("abi=%r" % (tp.abi,)) + self.cffi_types[index] = CffiOp(OP_FUNCTION_END, flags) + + def _emit_bytecode_PointerType(self, tp, index): + self.cffi_types[index] = CffiOp(OP_POINTER, self._typesdict[tp.totype]) + + _emit_bytecode_ConstPointerType = _emit_bytecode_PointerType + _emit_bytecode_NamedPointerType = _emit_bytecode_PointerType + + def _emit_bytecode_FunctionPtrType(self, tp, index): + raw = tp.as_raw_function() + self.cffi_types[index] = CffiOp(OP_POINTER, self._typesdict[raw]) + + def _emit_bytecode_ArrayType(self, tp, index): + item_index = self._typesdict[tp.item] + if tp.length is None: + self.cffi_types[index] = CffiOp(OP_OPEN_ARRAY, item_index) + elif tp.length == '...': + raise VerificationError( + "type %s badly placed: the '...' array length can only be " + "used on global arrays or on fields of structures" % ( + str(tp).replace('/*...*/', '...'),)) + else: + assert self.cffi_types[index + 1] == 'LEN' + self.cffi_types[index] = CffiOp(OP_ARRAY, item_index) + self.cffi_types[index + 1] = CffiOp(None, str(tp.length)) + + def _emit_bytecode_StructType(self, tp, index): + struct_index = self._struct_unions[tp] + self.cffi_types[index] = CffiOp(OP_STRUCT_UNION, struct_index) + _emit_bytecode_UnionType = _emit_bytecode_StructType + + def _emit_bytecode_EnumType(self, tp, index): + enum_index = self._enums[tp] + self.cffi_types[index] = CffiOp(OP_ENUM, enum_index) + + +if sys.version_info >= (3,): + NativeIO = io.StringIO +else: + class NativeIO(io.BytesIO): + def write(self, s): + if isinstance(s, unicode): + s = s.encode('ascii') + super(NativeIO, self).write(s) + +def _make_c_or_py_source(ffi, module_name, preamble, target_file, verbose): + if verbose: + print("generating %s" % (target_file,)) + recompiler = Recompiler(ffi, module_name, + target_is_python=(preamble is None)) + recompiler.collect_type_table() + recompiler.collect_step_tables() + f = NativeIO() + recompiler.write_source_to_f(f, preamble) + output = f.getvalue() + try: + with open(target_file, 'r') as f1: + if f1.read(len(output) + 1) != output: + raise IOError + if verbose: + print("(already up-to-date)") + return False # already up-to-date + except IOError: + tmp_file = '%s.~%d' % (target_file, os.getpid()) + with open(tmp_file, 'w') as f1: + f1.write(output) + try: + os.rename(tmp_file, target_file) + except OSError: + os.unlink(target_file) + os.rename(tmp_file, target_file) + return True + +def make_c_source(ffi, module_name, preamble, target_c_file, verbose=False): + assert preamble is not None + return _make_c_or_py_source(ffi, module_name, preamble, target_c_file, + verbose) + +def make_py_source(ffi, module_name, target_py_file, verbose=False): + return _make_c_or_py_source(ffi, module_name, None, target_py_file, + verbose) + +def _modname_to_file(outputdir, modname, extension): + parts = modname.split('.') + try: + os.makedirs(os.path.join(outputdir, *parts[:-1])) + except OSError: + pass + parts[-1] += extension + return os.path.join(outputdir, *parts), parts + + +# Aaargh. Distutils is not tested at all for the purpose of compiling +# DLLs that are not extension modules. Here are some hacks to work +# around that, in the _patch_for_*() functions... 
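+# (Illustrative sketch, not upstream cffi text: recompile() further down
+# uses the helpers below in the following pattern --
+#
+#     patchlist = []
+#     try:
+#         _patch_for_embedding(patchlist)   # calls _patch_meth(...)
+#         ...                               # run the distutils build
+#     finally:
+#         _unpatch_meths(patchlist)         # always restore the originals
+# )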
+ +def _patch_meth(patchlist, cls, name, new_meth): + old = getattr(cls, name) + patchlist.append((cls, name, old)) + setattr(cls, name, new_meth) + return old + +def _unpatch_meths(patchlist): + for cls, name, old_meth in reversed(patchlist): + setattr(cls, name, old_meth) + +def _patch_for_embedding(patchlist): + if sys.platform == 'win32': + # we must not remove the manifest when building for embedding! + from distutils.msvc9compiler import MSVCCompiler + _patch_meth(patchlist, MSVCCompiler, '_remove_visual_c_ref', + lambda self, manifest_file: manifest_file) + + if sys.platform == 'darwin': + # we must not make a '-bundle', but a '-dynamiclib' instead + from distutils.ccompiler import CCompiler + def my_link_shared_object(self, *args, **kwds): + if '-bundle' in self.linker_so: + self.linker_so = list(self.linker_so) + i = self.linker_so.index('-bundle') + self.linker_so[i] = '-dynamiclib' + return old_link_shared_object(self, *args, **kwds) + old_link_shared_object = _patch_meth(patchlist, CCompiler, + 'link_shared_object', + my_link_shared_object) + +def _patch_for_target(patchlist, target): + from distutils.command.build_ext import build_ext + # if 'target' is different from '*', we need to patch some internal + # method to just return this 'target' value, instead of having it + # built from module_name + if target.endswith('.*'): + target = target[:-2] + if sys.platform == 'win32': + target += '.dll' + elif sys.platform == 'darwin': + target += '.dylib' + else: + target += '.so' + _patch_meth(patchlist, build_ext, 'get_ext_filename', + lambda self, ext_name: target) + + +def recompile(ffi, module_name, preamble, tmpdir='.', call_c_compiler=True, + c_file=None, source_extension='.c', extradir=None, + compiler_verbose=1, target=None, debug=None, **kwds): + if not isinstance(module_name, str): + module_name = module_name.encode('ascii') + if ffi._windows_unicode: + ffi._apply_windows_unicode(kwds) + if preamble is not None: + embedding = (ffi._embedding is not None) + if embedding: + ffi._apply_embedding_fix(kwds) + if c_file is None: + c_file, parts = _modname_to_file(tmpdir, module_name, + source_extension) + if extradir: + parts = [extradir] + parts + ext_c_file = os.path.join(*parts) + else: + ext_c_file = c_file + # + if target is None: + if embedding: + target = '%s.*' % module_name + else: + target = '*' + # + ext = ffiplatform.get_extension(ext_c_file, module_name, **kwds) + updated = make_c_source(ffi, module_name, preamble, c_file, + verbose=compiler_verbose) + if call_c_compiler: + patchlist = [] + cwd = os.getcwd() + try: + if embedding: + _patch_for_embedding(patchlist) + if target != '*': + _patch_for_target(patchlist, target) + if compiler_verbose: + if tmpdir == '.': + msg = 'the current directory is' + else: + msg = 'setting the current directory to' + print('%s %r' % (msg, os.path.abspath(tmpdir))) + os.chdir(tmpdir) + outputfilename = ffiplatform.compile('.', ext, + compiler_verbose, debug) + finally: + os.chdir(cwd) + _unpatch_meths(patchlist) + return outputfilename + else: + return ext, updated + else: + if c_file is None: + c_file, _ = _modname_to_file(tmpdir, module_name, '.py') + updated = make_py_source(ffi, module_name, c_file, + verbose=compiler_verbose) + if call_c_compiler: + return c_file + else: + return None, updated + diff --git a/env/lib/python3.8/site-packages/cffi/setuptools_ext.py b/env/lib/python3.8/site-packages/cffi/setuptools_ext.py new file mode 100644 index 00000000..8fe36148 --- /dev/null +++ 
b/env/lib/python3.8/site-packages/cffi/setuptools_ext.py @@ -0,0 +1,219 @@ +import os +import sys + +try: + basestring +except NameError: + # Python 3.x + basestring = str + +def error(msg): + from distutils.errors import DistutilsSetupError + raise DistutilsSetupError(msg) + + +def execfile(filename, glob): + # We use execfile() (here rewritten for Python 3) instead of + # __import__() to load the build script. The problem with + # a normal import is that in some packages, the intermediate + # __init__.py files may already try to import the file that + # we are generating. + with open(filename) as f: + src = f.read() + src += '\n' # Python 2.6 compatibility + code = compile(src, filename, 'exec') + exec(code, glob, glob) + + +def add_cffi_module(dist, mod_spec): + from cffi.api import FFI + + if not isinstance(mod_spec, basestring): + error("argument to 'cffi_modules=...' must be a str or a list of str," + " not %r" % (type(mod_spec).__name__,)) + mod_spec = str(mod_spec) + try: + build_file_name, ffi_var_name = mod_spec.split(':') + except ValueError: + error("%r must be of the form 'path/build.py:ffi_variable'" % + (mod_spec,)) + if not os.path.exists(build_file_name): + ext = '' + rewritten = build_file_name.replace('.', '/') + '.py' + if os.path.exists(rewritten): + ext = ' (rewrite cffi_modules to [%r])' % ( + rewritten + ':' + ffi_var_name,) + error("%r does not name an existing file%s" % (build_file_name, ext)) + + mod_vars = {'__name__': '__cffi__', '__file__': build_file_name} + execfile(build_file_name, mod_vars) + + try: + ffi = mod_vars[ffi_var_name] + except KeyError: + error("%r: object %r not found in module" % (mod_spec, + ffi_var_name)) + if not isinstance(ffi, FFI): + ffi = ffi() # maybe it's a function instead of directly an ffi + if not isinstance(ffi, FFI): + error("%r is not an FFI instance (got %r)" % (mod_spec, + type(ffi).__name__)) + if not hasattr(ffi, '_assigned_source'): + error("%r: the set_source() method was not called" % (mod_spec,)) + module_name, source, source_extension, kwds = ffi._assigned_source + if ffi._windows_unicode: + kwds = kwds.copy() + ffi._apply_windows_unicode(kwds) + + if source is None: + _add_py_module(dist, ffi, module_name) + else: + _add_c_module(dist, ffi, module_name, source, source_extension, kwds) + +def _set_py_limited_api(Extension, kwds): + """ + Add py_limited_api to kwds if setuptools >= 26 is in use. + Do not alter the setting if it already exists. + Setuptools takes care of ignoring the flag on Python 2 and PyPy. + + CPython itself should ignore the flag in a debugging version + (by not listing .abi3.so in the extensions it supports), but + it doesn't so far, creating troubles. That's why we check + for "not hasattr(sys, 'gettotalrefcount')" (the 2.7 compatible equivalent + of 'd' not in sys.abiflags). (http://bugs.python.org/issue28401) + + On Windows, with CPython <= 3.4, it's better not to use py_limited_api + because virtualenv *still* doesn't copy PYTHON3.DLL on these versions. + Recently (2020) we started shipping only >= 3.5 wheels, though. So + we'll give it another try and set py_limited_api on Windows >= 3.5. 
+ """ + from cffi import recompiler + + if ('py_limited_api' not in kwds and not hasattr(sys, 'gettotalrefcount') + and recompiler.USE_LIMITED_API): + import setuptools + try: + setuptools_major_version = int(setuptools.__version__.partition('.')[0]) + if setuptools_major_version >= 26: + kwds['py_limited_api'] = True + except ValueError: # certain development versions of setuptools + # If we don't know the version number of setuptools, we + # try to set 'py_limited_api' anyway. At worst, we get a + # warning. + kwds['py_limited_api'] = True + return kwds + +def _add_c_module(dist, ffi, module_name, source, source_extension, kwds): + from distutils.core import Extension + # We are a setuptools extension. Need this build_ext for py_limited_api. + from setuptools.command.build_ext import build_ext + from distutils.dir_util import mkpath + from distutils import log + from cffi import recompiler + + allsources = ['$PLACEHOLDER'] + allsources.extend(kwds.pop('sources', [])) + kwds = _set_py_limited_api(Extension, kwds) + ext = Extension(name=module_name, sources=allsources, **kwds) + + def make_mod(tmpdir, pre_run=None): + c_file = os.path.join(tmpdir, module_name + source_extension) + log.info("generating cffi module %r" % c_file) + mkpath(tmpdir) + # a setuptools-only, API-only hook: called with the "ext" and "ffi" + # arguments just before we turn the ffi into C code. To use it, + # subclass the 'distutils.command.build_ext.build_ext' class and + # add a method 'def pre_run(self, ext, ffi)'. + if pre_run is not None: + pre_run(ext, ffi) + updated = recompiler.make_c_source(ffi, module_name, source, c_file) + if not updated: + log.info("already up-to-date") + return c_file + + if dist.ext_modules is None: + dist.ext_modules = [] + dist.ext_modules.append(ext) + + base_class = dist.cmdclass.get('build_ext', build_ext) + class build_ext_make_mod(base_class): + def run(self): + if ext.sources[0] == '$PLACEHOLDER': + pre_run = getattr(self, 'pre_run', None) + ext.sources[0] = make_mod(self.build_temp, pre_run) + base_class.run(self) + dist.cmdclass['build_ext'] = build_ext_make_mod + # NB. multiple runs here will create multiple 'build_ext_make_mod' + # classes. Even in this case the 'build_ext' command should be + # run once; but just in case, the logic above does nothing if + # called again. + + +def _add_py_module(dist, ffi, module_name): + from distutils.dir_util import mkpath + from setuptools.command.build_py import build_py + from setuptools.command.build_ext import build_ext + from distutils import log + from cffi import recompiler + + def generate_mod(py_file): + log.info("generating cffi module %r" % py_file) + mkpath(os.path.dirname(py_file)) + updated = recompiler.make_py_source(ffi, module_name, py_file) + if not updated: + log.info("already up-to-date") + + base_class = dist.cmdclass.get('build_py', build_py) + class build_py_make_mod(base_class): + def run(self): + base_class.run(self) + module_path = module_name.split('.') + module_path[-1] += '.py' + generate_mod(os.path.join(self.build_lib, *module_path)) + def get_source_files(self): + # This is called from 'setup.py sdist' only. Exclude + # the generate .py module in this case. 
+ saved_py_modules = self.py_modules + try: + if saved_py_modules: + self.py_modules = [m for m in saved_py_modules + if m != module_name] + return base_class.get_source_files(self) + finally: + self.py_modules = saved_py_modules + dist.cmdclass['build_py'] = build_py_make_mod + + # distutils and setuptools have no notion I could find of a + # generated python module. If we don't add module_name to + # dist.py_modules, then things mostly work but there are some + # combination of options (--root and --record) that will miss + # the module. So we add it here, which gives a few apparently + # harmless warnings about not finding the file outside the + # build directory. + # Then we need to hack more in get_source_files(); see above. + if dist.py_modules is None: + dist.py_modules = [] + dist.py_modules.append(module_name) + + # the following is only for "build_ext -i" + base_class_2 = dist.cmdclass.get('build_ext', build_ext) + class build_ext_make_mod(base_class_2): + def run(self): + base_class_2.run(self) + if self.inplace: + # from get_ext_fullpath() in distutils/command/build_ext.py + module_path = module_name.split('.') + package = '.'.join(module_path[:-1]) + build_py = self.get_finalized_command('build_py') + package_dir = build_py.get_package_dir(package) + file_name = module_path[-1] + '.py' + generate_mod(os.path.join(package_dir, file_name)) + dist.cmdclass['build_ext'] = build_ext_make_mod + +def cffi_modules(dist, attr, value): + assert attr == 'cffi_modules' + if isinstance(value, basestring): + value = [value] + + for cffi_module in value: + add_cffi_module(dist, cffi_module) diff --git a/env/lib/python3.8/site-packages/cffi/vengine_cpy.py b/env/lib/python3.8/site-packages/cffi/vengine_cpy.py new file mode 100644 index 00000000..6de0df0e --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/vengine_cpy.py @@ -0,0 +1,1076 @@ +# +# DEPRECATED: implementation for ffi.verify() +# +import sys, imp +from . import model +from .error import VerificationError + + +class VCPythonEngine(object): + _class_key = 'x' + _gen_python_module = True + + def __init__(self, verifier): + self.verifier = verifier + self.ffi = verifier.ffi + self._struct_pending_verification = {} + self._types_of_builtin_functions = {} + + def patch_extension_kwds(self, kwds): + pass + + def find_module(self, module_name, path, so_suffixes): + try: + f, filename, descr = imp.find_module(module_name, path) + except ImportError: + return None + if f is not None: + f.close() + # Note that after a setuptools installation, there are both .py + # and .so files with the same basename. The code here relies on + # imp.find_module() locating the .so in priority. + if descr[0] not in so_suffixes: + return None + return filename + + def collect_types(self): + self._typesdict = {} + self._generate("collecttype") + + def _prnt(self, what=''): + self._f.write(what + '\n') + + def _gettypenum(self, type): + # a KeyError here is a bug. please report it! :-) + return self._typesdict[type] + + def _do_collect_type(self, tp): + if ((not isinstance(tp, model.PrimitiveType) + or tp.name == 'long double') + and tp not in self._typesdict): + num = len(self._typesdict) + self._typesdict[tp] = num + + def write_source_to_f(self): + self.collect_types() + # + # The new module will have a _cffi_setup() function that receives + # objects from the ffi world, and that calls some setup code in + # the module. This setup code is split in several independent + # functions, e.g. one per constant. 
The functions are "chained"
+        # by ending in a tail call to each other.
+        #
+        # This is further split in two chained lists, depending on if we
+        # can do it at import-time or if we must wait for _cffi_setup() to
+        # provide us with the <ctype> objects.  This is needed because we
+        # need the values of the enum constants in order to build the
+        # <ctype 'enum ...'> that we may have to pass to _cffi_setup().
+        #
+        # The following two 'chained_list_constants' items contains
+        # the head of these two chained lists, as a string that gives the
+        # call to do, if any.
+        self._chained_list_constants = ['((void)lib,0)', '((void)lib,0)']
+        #
+        prnt = self._prnt
+        # first paste some standard set of lines that are mostly '#define'
+        prnt(cffimod_header)
+        prnt()
+        # then paste the C source given by the user, verbatim.
+        prnt(self.verifier.preamble)
+        prnt()
+        #
+        # call generate_cpy_xxx_decl(), for every xxx found from
+        # ffi._parser._declarations.  This generates all the functions.
+        self._generate("decl")
+        #
+        # implement the function _cffi_setup_custom() as calling the
+        # head of the chained list.
+        self._generate_setup_custom()
+        prnt()
+        #
+        # produce the method table, including the entries for the
+        # generated Python->C function wrappers, which are done
+        # by generate_cpy_function_method().
+        prnt('static PyMethodDef _cffi_methods[] = {')
+        self._generate("method")
+        prnt('  {"_cffi_setup", _cffi_setup, METH_VARARGS, NULL},')
+        prnt('  {NULL, NULL, 0, NULL}    /* Sentinel */')
+        prnt('};')
+        prnt()
+        #
+        # standard init.
+        modname = self.verifier.get_module_name()
+        constants = self._chained_list_constants[False]
+        prnt('#if PY_MAJOR_VERSION >= 3')
+        prnt()
+        prnt('static struct PyModuleDef _cffi_module_def = {')
+        prnt('  PyModuleDef_HEAD_INIT,')
+        prnt('  "%s",' % modname)
+        prnt('  NULL,')
+        prnt('  -1,')
+        prnt('  _cffi_methods,')
+        prnt('  NULL, NULL, NULL, NULL')
+        prnt('};')
+        prnt()
+        prnt('PyMODINIT_FUNC')
+        prnt('PyInit_%s(void)' % modname)
+        prnt('{')
+        prnt('  PyObject *lib;')
+        prnt('  lib = PyModule_Create(&_cffi_module_def);')
+        prnt('  if (lib == NULL)')
+        prnt('    return NULL;')
+        prnt('  if (%s < 0 || _cffi_init() < 0) {' % (constants,))
+        prnt('    Py_DECREF(lib);')
+        prnt('    return NULL;')
+        prnt('  }')
+        prnt('  return lib;')
+        prnt('}')
+        prnt()
+        prnt('#else')
+        prnt()
+        prnt('PyMODINIT_FUNC')
+        prnt('init%s(void)' % modname)
+        prnt('{')
+        prnt('  PyObject *lib;')
+        prnt('  lib = Py_InitModule("%s", _cffi_methods);' % modname)
+        prnt('  if (lib == NULL)')
+        prnt('    return;')
+        prnt('  if (%s < 0 || _cffi_init() < 0)' % (constants,))
+        prnt('    return;')
+        prnt('  return;')
+        prnt('}')
+        prnt()
+        prnt('#endif')
+
+    def load_library(self, flags=None):
+        # XXX review all usages of 'self' here!
+        # import it as a new extension module
+        imp.acquire_lock()
+        try:
+            if hasattr(sys, "getdlopenflags"):
+                previous_flags = sys.getdlopenflags()
+            try:
+                if hasattr(sys, "setdlopenflags") and flags is not None:
+                    sys.setdlopenflags(flags)
+                module = imp.load_dynamic(self.verifier.get_module_name(),
+                                          self.verifier.modulefilename)
+            except ImportError as e:
+                error = "importing %r: %s" % (self.verifier.modulefilename, e)
+                raise VerificationError(error)
+            finally:
+                if hasattr(sys, "setdlopenflags"):
+                    sys.setdlopenflags(previous_flags)
+        finally:
+            imp.release_lock()
+        #
+        # call loading_cpy_struct() to get the struct layout inferred by
+        # the C compiler
+        self._load(module, 'loading')
+        #
+        # the C code will need the <ctype> objects.  Collect them in
+        # order in a list.
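+        # (Illustrative, not upstream: _typesdict maps each type to an
+        # index, e.g. {<int>: 0, <char *>: 1}; inverting it below yields
+        # [<int>, <char *>], the same order in which the generated C code
+        # refers to them via _cffi_type(0), _cffi_type(1), ...)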
+ revmapping = dict([(value, key) + for (key, value) in self._typesdict.items()]) + lst = [revmapping[i] for i in range(len(revmapping))] + lst = list(map(self.ffi._get_cached_btype, lst)) + # + # build the FFILibrary class and instance and call _cffi_setup(). + # this will set up some fields like '_cffi_types', and only then + # it will invoke the chained list of functions that will really + # build (notably) the constant objects, as if they are + # pointers, and store them as attributes on the 'library' object. + class FFILibrary(object): + _cffi_python_module = module + _cffi_ffi = self.ffi + _cffi_dir = [] + def __dir__(self): + return FFILibrary._cffi_dir + list(self.__dict__) + library = FFILibrary() + if module._cffi_setup(lst, VerificationError, library): + import warnings + warnings.warn("reimporting %r might overwrite older definitions" + % (self.verifier.get_module_name())) + # + # finally, call the loaded_cpy_xxx() functions. This will perform + # the final adjustments, like copying the Python->C wrapper + # functions from the module to the 'library' object, and setting + # up the FFILibrary class with properties for the global C variables. + self._load(module, 'loaded', library=library) + module._cffi_original_ffi = self.ffi + module._cffi_types_of_builtin_funcs = self._types_of_builtin_functions + return library + + def _get_declarations(self): + lst = [(key, tp) for (key, (tp, qual)) in + self.ffi._parser._declarations.items()] + lst.sort() + return lst + + def _generate(self, step_name): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + try: + method = getattr(self, '_generate_cpy_%s_%s' % (kind, + step_name)) + except AttributeError: + raise VerificationError( + "not implemented in verify(): %r" % name) + try: + method(tp, realname) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _load(self, module, step_name, **kwds): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + method = getattr(self, '_%s_cpy_%s' % (step_name, kind)) + try: + method(tp, realname, module, **kwds) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _generate_nothing(self, tp, name): + pass + + def _loaded_noop(self, tp, name, module, **kwds): + pass + + # ---------- + + def _convert_funcarg_to_c(self, tp, fromvar, tovar, errcode): + extraarg = '' + if isinstance(tp, model.PrimitiveType): + if tp.is_integer_type() and tp.name != '_Bool': + converter = '_cffi_to_c_int' + extraarg = ', %s' % tp.name + else: + converter = '(%s)_cffi_to_c_%s' % (tp.get_c_name(''), + tp.name.replace(' ', '_')) + errvalue = '-1' + # + elif isinstance(tp, model.PointerType): + self._convert_funcarg_to_c_ptr_or_array(tp, fromvar, + tovar, errcode) + return + # + elif isinstance(tp, (model.StructOrUnion, model.EnumType)): + # a struct (not a struct pointer) as a function argument + self._prnt(' if (_cffi_to_c((char *)&%s, _cffi_type(%d), %s) < 0)' + % (tovar, self._gettypenum(tp), fromvar)) + self._prnt(' %s;' % errcode) + return + # + elif isinstance(tp, model.FunctionPtrType): + converter = '(%s)_cffi_to_c_pointer' % tp.get_c_name('') + extraarg = ', _cffi_type(%d)' % self._gettypenum(tp) + errvalue = 'NULL' + # + else: + raise NotImplementedError(tp) + # + self._prnt(' %s = %s(%s%s);' % (tovar, converter, fromvar, extraarg)) + self._prnt(' if (%s == (%s)%s && PyErr_Occurred())' % ( + tovar, tp.get_c_name(''), errvalue)) + self._prnt(' %s;' % errcode) + + def _extra_local_variables(self, tp, 
localvars, freelines): + if isinstance(tp, model.PointerType): + localvars.add('Py_ssize_t datasize') + localvars.add('struct _cffi_freeme_s *large_args_free = NULL') + freelines.add('if (large_args_free != NULL)' + ' _cffi_free_array_arguments(large_args_free);') + + def _convert_funcarg_to_c_ptr_or_array(self, tp, fromvar, tovar, errcode): + self._prnt(' datasize = _cffi_prepare_pointer_call_argument(') + self._prnt(' _cffi_type(%d), %s, (char **)&%s);' % ( + self._gettypenum(tp), fromvar, tovar)) + self._prnt(' if (datasize != 0) {') + self._prnt(' %s = ((size_t)datasize) <= 640 ? ' + 'alloca((size_t)datasize) : NULL;' % (tovar,)) + self._prnt(' if (_cffi_convert_array_argument(_cffi_type(%d), %s, ' + '(char **)&%s,' % (self._gettypenum(tp), fromvar, tovar)) + self._prnt(' datasize, &large_args_free) < 0)') + self._prnt(' %s;' % errcode) + self._prnt(' }') + + def _convert_expr_from_c(self, tp, var, context): + if isinstance(tp, model.PrimitiveType): + if tp.is_integer_type() and tp.name != '_Bool': + return '_cffi_from_c_int(%s, %s)' % (var, tp.name) + elif tp.name != 'long double': + return '_cffi_from_c_%s(%s)' % (tp.name.replace(' ', '_'), var) + else: + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, (model.PointerType, model.FunctionPtrType)): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.ArrayType): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(model.PointerType(tp.item))) + elif isinstance(tp, model.StructOrUnion): + if tp.fldnames is None: + raise TypeError("'%s' is used as %s, but is opaque" % ( + tp._get_c_name(), context)) + return '_cffi_from_c_struct((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.EnumType): + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + else: + raise NotImplementedError(tp) + + # ---------- + # typedefs: generates no code so far + + _generate_cpy_typedef_collecttype = _generate_nothing + _generate_cpy_typedef_decl = _generate_nothing + _generate_cpy_typedef_method = _generate_nothing + _loading_cpy_typedef = _loaded_noop + _loaded_cpy_typedef = _loaded_noop + + # ---------- + # function declarations + + def _generate_cpy_function_collecttype(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + self._do_collect_type(tp) + else: + # don't call _do_collect_type(tp) in this common case, + # otherwise test_autofilled_struct_as_argument fails + for type in tp.args: + self._do_collect_type(type) + self._do_collect_type(tp.result) + + def _generate_cpy_function_decl(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + # cannot support vararg functions better than this: check for its + # exact type (including the fixed arguments), and build it as a + # constant function pointer (no CPython wrapper) + self._generate_cpy_const(False, name, tp) + return + prnt = self._prnt + numargs = len(tp.args) + if numargs == 0: + argname = 'noarg' + elif numargs == 1: + argname = 'arg0' + else: + argname = 'args' + prnt('static PyObject *') + prnt('_cffi_f_%s(PyObject *self, PyObject *%s)' % (name, argname)) + prnt('{') + # + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + prnt(' %s;' % type.get_c_name(' x%d' % i, context)) + # + localvars = set() + freelines = set() + for type in tp.args: + self._extra_local_variables(type, 
localvars, freelines) + for decl in sorted(localvars): + prnt(' %s;' % (decl,)) + # + if not isinstance(tp.result, model.VoidType): + result_code = 'result = ' + context = 'result of %s' % name + prnt(' %s;' % tp.result.get_c_name(' result', context)) + prnt(' PyObject *pyresult;') + else: + result_code = '' + # + if len(tp.args) > 1: + rng = range(len(tp.args)) + for i in rng: + prnt(' PyObject *arg%d;' % i) + prnt() + prnt(' if (!PyArg_ParseTuple(args, "%s:%s", %s))' % ( + 'O' * numargs, name, ', '.join(['&arg%d' % i for i in rng]))) + prnt(' return NULL;') + prnt() + # + for i, type in enumerate(tp.args): + self._convert_funcarg_to_c(type, 'arg%d' % i, 'x%d' % i, + 'return NULL') + prnt() + # + prnt(' Py_BEGIN_ALLOW_THREADS') + prnt(' _cffi_restore_errno();') + prnt(' { %s%s(%s); }' % ( + result_code, name, + ', '.join(['x%d' % i for i in range(len(tp.args))]))) + prnt(' _cffi_save_errno();') + prnt(' Py_END_ALLOW_THREADS') + prnt() + # + prnt(' (void)self; /* unused */') + if numargs == 0: + prnt(' (void)noarg; /* unused */') + if result_code: + prnt(' pyresult = %s;' % + self._convert_expr_from_c(tp.result, 'result', 'result type')) + for freeline in freelines: + prnt(' ' + freeline) + prnt(' return pyresult;') + else: + for freeline in freelines: + prnt(' ' + freeline) + prnt(' Py_INCREF(Py_None);') + prnt(' return Py_None;') + prnt('}') + prnt() + + def _generate_cpy_function_method(self, tp, name): + if tp.ellipsis: + return + numargs = len(tp.args) + if numargs == 0: + meth = 'METH_NOARGS' + elif numargs == 1: + meth = 'METH_O' + else: + meth = 'METH_VARARGS' + self._prnt(' {"%s", _cffi_f_%s, %s, NULL},' % (name, name, meth)) + + _loading_cpy_function = _loaded_noop + + def _loaded_cpy_function(self, tp, name, module, library): + if tp.ellipsis: + return + func = getattr(module, name) + setattr(library, name, func) + self._types_of_builtin_functions[func] = tp + + # ---------- + # named structs + + _generate_cpy_struct_collecttype = _generate_nothing + def _generate_cpy_struct_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'struct', name) + def _generate_cpy_struct_method(self, tp, name): + self._generate_struct_or_union_method(tp, 'struct', name) + def _loading_cpy_struct(self, tp, name, module): + self._loading_struct_or_union(tp, 'struct', name, module) + def _loaded_cpy_struct(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + _generate_cpy_union_collecttype = _generate_nothing + def _generate_cpy_union_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'union', name) + def _generate_cpy_union_method(self, tp, name): + self._generate_struct_or_union_method(tp, 'union', name) + def _loading_cpy_union(self, tp, name, module): + self._loading_struct_or_union(tp, 'union', name, module) + def _loaded_cpy_union(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + def _generate_struct_or_union_decl(self, tp, prefix, name): + if tp.fldnames is None: + return # nothing to do with opaque structs + checkfuncname = '_cffi_check_%s_%s' % (prefix, name) + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + cname = ('%s %s' % (prefix, name)).strip() + # + prnt = self._prnt + prnt('static void %s(%s *p)' % (checkfuncname, cname)) + prnt('{') + prnt(' /* only to generate compile-time warnings or errors */') + prnt(' (void)p;') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if (isinstance(ftype, model.PrimitiveType) + and ftype.is_integer_type()) or fbitsize >= 
0: + # accept all integers, but complain on float or double + prnt(' (void)((p->%s) << 1);' % fname) + else: + # only accept exactly the type declared. + try: + prnt(' { %s = &p->%s; (void)tmp; }' % ( + ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), + fname)) + except VerificationError as e: + prnt(' /* %s */' % str(e)) # cannot verify it, ignore + prnt('}') + prnt('static PyObject *') + prnt('%s(PyObject *self, PyObject *noarg)' % (layoutfuncname,)) + prnt('{') + prnt(' struct _cffi_aligncheck { char x; %s y; };' % cname) + prnt(' static Py_ssize_t nums[] = {') + prnt(' sizeof(%s),' % cname) + prnt(' offsetof(struct _cffi_aligncheck, y),') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + prnt(' offsetof(%s, %s),' % (cname, fname)) + if isinstance(ftype, model.ArrayType) and ftype.length is None: + prnt(' 0, /* %s */' % ftype._get_c_name()) + else: + prnt(' sizeof(((%s *)0)->%s),' % (cname, fname)) + prnt(' -1') + prnt(' };') + prnt(' (void)self; /* unused */') + prnt(' (void)noarg; /* unused */') + prnt(' return _cffi_get_struct_layout(nums);') + prnt(' /* the next line is not executed, but compiled */') + prnt(' %s(0);' % (checkfuncname,)) + prnt('}') + prnt() + + def _generate_struct_or_union_method(self, tp, prefix, name): + if tp.fldnames is None: + return # nothing to do with opaque structs + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + self._prnt(' {"%s", %s, METH_NOARGS, NULL},' % (layoutfuncname, + layoutfuncname)) + + def _loading_struct_or_union(self, tp, prefix, name, module): + if tp.fldnames is None: + return # nothing to do with opaque structs + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + # + function = getattr(module, layoutfuncname) + layout = function() + if isinstance(tp, model.StructOrUnion) and tp.partial: + # use the function()'s sizes and offsets to guide the + # layout of the struct + totalsize = layout[0] + totalalignment = layout[1] + fieldofs = layout[2::2] + fieldsize = layout[3::2] + tp.force_flatten() + assert len(fieldofs) == len(fieldsize) == len(tp.fldnames) + tp.fixedlayout = fieldofs, fieldsize, totalsize, totalalignment + else: + cname = ('%s %s' % (prefix, name)).strip() + self._struct_pending_verification[tp] = layout, cname + + def _loaded_struct_or_union(self, tp): + if tp.fldnames is None: + return # nothing to do with opaque structs + self.ffi._get_cached_btype(tp) # force 'fixedlayout' to be considered + + if tp in self._struct_pending_verification: + # check that the layout sizes and offsets match the real ones + def check(realvalue, expectedvalue, msg): + if realvalue != expectedvalue: + raise VerificationError( + "%s (we have %d, but C compiler says %d)" + % (msg, expectedvalue, realvalue)) + ffi = self.ffi + BStruct = ffi._get_cached_btype(tp) + layout, cname = self._struct_pending_verification.pop(tp) + check(layout[0], ffi.sizeof(BStruct), "wrong total size") + check(layout[1], ffi.alignof(BStruct), "wrong total alignment") + i = 2 + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + check(layout[i], ffi.offsetof(BStruct, fname), + "wrong offset for field %r" % (fname,)) + if layout[i+1] != 0: + BField = ffi._get_cached_btype(ftype) + check(layout[i+1], ffi.sizeof(BField), + "wrong size for field %r" % (fname,)) + i += 2 + assert i == len(layout) + + # ---------- + # 'anonymous' declarations. 
These are produced for anonymous structs + # or unions; the 'name' is obtained by a typedef. + + _generate_cpy_anonymous_collecttype = _generate_nothing + + def _generate_cpy_anonymous_decl(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_cpy_enum_decl(tp, name, '') + else: + self._generate_struct_or_union_decl(tp, '', name) + + def _generate_cpy_anonymous_method(self, tp, name): + if not isinstance(tp, model.EnumType): + self._generate_struct_or_union_method(tp, '', name) + + def _loading_cpy_anonymous(self, tp, name, module): + if isinstance(tp, model.EnumType): + self._loading_cpy_enum(tp, name, module) + else: + self._loading_struct_or_union(tp, '', name, module) + + def _loaded_cpy_anonymous(self, tp, name, module, **kwds): + if isinstance(tp, model.EnumType): + self._loaded_cpy_enum(tp, name, module, **kwds) + else: + self._loaded_struct_or_union(tp) + + # ---------- + # constants, likely declared with '#define' + + def _generate_cpy_const(self, is_int, name, tp=None, category='const', + vartp=None, delayed=True, size_too=False, + check_value=None): + prnt = self._prnt + funcname = '_cffi_%s_%s' % (category, name) + prnt('static int %s(PyObject *lib)' % funcname) + prnt('{') + prnt(' PyObject *o;') + prnt(' int res;') + if not is_int: + prnt(' %s;' % (vartp or tp).get_c_name(' i', name)) + else: + assert category == 'const' + # + if check_value is not None: + self._check_int_constant_value(name, check_value) + # + if not is_int: + if category == 'var': + realexpr = '&' + name + else: + realexpr = name + prnt(' i = (%s);' % (realexpr,)) + prnt(' o = %s;' % (self._convert_expr_from_c(tp, 'i', + 'variable type'),)) + assert delayed + else: + prnt(' o = _cffi_from_c_int_const(%s);' % name) + prnt(' if (o == NULL)') + prnt(' return -1;') + if size_too: + prnt(' {') + prnt(' PyObject *o1 = o;') + prnt(' o = Py_BuildValue("On", o1, (Py_ssize_t)sizeof(%s));' + % (name,)) + prnt(' Py_DECREF(o1);') + prnt(' if (o == NULL)') + prnt(' return -1;') + prnt(' }') + prnt(' res = PyObject_SetAttrString(lib, "%s", o);' % name) + prnt(' Py_DECREF(o);') + prnt(' if (res < 0)') + prnt(' return -1;') + prnt(' return %s;' % self._chained_list_constants[delayed]) + self._chained_list_constants[delayed] = funcname + '(lib)' + prnt('}') + prnt() + + def _generate_cpy_constant_collecttype(self, tp, name): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + if not is_int: + self._do_collect_type(tp) + + def _generate_cpy_constant_decl(self, tp, name): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + self._generate_cpy_const(is_int, name, tp) + + _generate_cpy_constant_method = _generate_nothing + _loading_cpy_constant = _loaded_noop + _loaded_cpy_constant = _loaded_noop + + # ---------- + # enums + + def _check_int_constant_value(self, name, value, err_prefix=''): + prnt = self._prnt + if value <= 0: + prnt(' if ((%s) > 0 || (long)(%s) != %dL) {' % ( + name, name, value)) + else: + prnt(' if ((%s) <= 0 || (unsigned long)(%s) != %dUL) {' % ( + name, name, value)) + prnt(' char buf[64];') + prnt(' if ((%s) <= 0)' % name) + prnt(' snprintf(buf, 63, "%%ld", (long)(%s));' % name) + prnt(' else') + prnt(' snprintf(buf, 63, "%%lu", (unsigned long)(%s));' % + name) + prnt(' PyErr_Format(_cffi_VerificationError,') + prnt(' "%s%s has the real value %s, not %s",') + prnt(' "%s", "%s", buf, "%d");' % ( + err_prefix, name, value)) + prnt(' return -1;') + prnt(' }') + + def _enum_funcname(self, prefix, name): + # "$enum_$1" => "___D_enum____D_1" + name = 
name.replace('$', '___D_')
+        return '_cffi_e_%s_%s' % (prefix, name)
+
+    def _generate_cpy_enum_decl(self, tp, name, prefix='enum'):
+        if tp.partial:
+            for enumerator in tp.enumerators:
+                self._generate_cpy_const(True, enumerator, delayed=False)
+            return
+        #
+        funcname = self._enum_funcname(prefix, name)
+        prnt = self._prnt
+        prnt('static int %s(PyObject *lib)' % funcname)
+        prnt('{')
+        for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues):
+            self._check_int_constant_value(enumerator, enumvalue,
+                                           "enum %s: " % name)
+        prnt('  return %s;' % self._chained_list_constants[True])
+        self._chained_list_constants[True] = funcname + '(lib)'
+        prnt('}')
+        prnt()
+
+    _generate_cpy_enum_collecttype = _generate_nothing
+    _generate_cpy_enum_method = _generate_nothing
+
+    def _loading_cpy_enum(self, tp, name, module):
+        if tp.partial:
+            enumvalues = [getattr(module, enumerator)
+                          for enumerator in tp.enumerators]
+            tp.enumvalues = tuple(enumvalues)
+            tp.partial_resolved = True
+
+    def _loaded_cpy_enum(self, tp, name, module, library):
+        for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues):
+            setattr(library, enumerator, enumvalue)
+
+    # ----------
+    # macros: for now only for integers
+
+    def _generate_cpy_macro_decl(self, tp, name):
+        if tp == '...':
+            check_value = None
+        else:
+            check_value = tp     # an integer
+        self._generate_cpy_const(True, name, check_value=check_value)
+
+    _generate_cpy_macro_collecttype = _generate_nothing
+    _generate_cpy_macro_method = _generate_nothing
+    _loading_cpy_macro = _loaded_noop
+    _loaded_cpy_macro = _loaded_noop
+
+    # ----------
+    # global variables
+
+    def _generate_cpy_variable_collecttype(self, tp, name):
+        if isinstance(tp, model.ArrayType):
+            tp_ptr = model.PointerType(tp.item)
+        else:
+            tp_ptr = model.PointerType(tp)
+        self._do_collect_type(tp_ptr)
+
+    def _generate_cpy_variable_decl(self, tp, name):
+        if isinstance(tp, model.ArrayType):
+            tp_ptr = model.PointerType(tp.item)
+            self._generate_cpy_const(False, name, tp, vartp=tp_ptr,
+                                     size_too = tp.length_is_unknown())
+        else:
+            tp_ptr = model.PointerType(tp)
+            self._generate_cpy_const(False, name, tp_ptr, category='var')
+
+    _generate_cpy_variable_method = _generate_nothing
+    _loading_cpy_variable = _loaded_noop
+
+    def _loaded_cpy_variable(self, tp, name, module, library):
+        value = getattr(library, name)
+        if isinstance(tp, model.ArrayType):   # int a[5] is "constant" in the
+                                              # sense that "a=..." is forbidden
+            if tp.length_is_unknown():
+                assert isinstance(value, tuple)
+                (value, size) = value
+                BItemType = self.ffi._get_cached_btype(tp.item)
+                length, rest = divmod(size, self.ffi.sizeof(BItemType))
+                if rest != 0:
+                    raise VerificationError(
+                        "bad size: %r does not seem to be an array of %s" %
+                        (name, tp.item))
+                tp = tp.resolve_length(length)
+            # 'value' is a <cdata 'type *'> which we have to replace with
+            # a <cdata 'type[N]'> if the N is actually known
+            if tp.length is not None:
+                BArray = self.ffi._get_cached_btype(tp)
+                value = self.ffi.cast(BArray, value)
+                setattr(library, name, value)
+                return
+        # remove ptr=<cdata 'type *'> from the library instance, and replace
+        # it by a property on the class, which reads/writes into ptr[0].
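+        # (Illustrative, not upstream: after this, reading "lib.myvar"
+        # returns ptr[0] and assigning "lib.myvar = 42" stores into ptr[0],
+        # so the attribute tracks the C global in both directions.)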
+        ptr = value
+        delattr(library, name)
+        def getter(library):
+            return ptr[0]
+        def setter(library, value):
+            ptr[0] = value
+        setattr(type(library), name, property(getter, setter))
+        type(library)._cffi_dir.append(name)
+
+    # ----------
+
+    def _generate_setup_custom(self):
+        prnt = self._prnt
+        prnt('static int _cffi_setup_custom(PyObject *lib)')
+        prnt('{')
+        prnt('  return %s;' % self._chained_list_constants[True])
+        prnt('}')
+
+cffimod_header = r'''
+#include <Python.h>
+#include <stddef.h>
+
+/* this block of #ifs should be kept exactly identical between
+   c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py
+   and cffi/_cffi_include.h */
+#if defined(_MSC_VER)
+# include <malloc.h>   /* for alloca() */
+# if _MSC_VER < 1600   /* MSVC < 2010 */
+   typedef __int8 int8_t;
+   typedef __int16 int16_t;
+   typedef __int32 int32_t;
+   typedef __int64 int64_t;
+   typedef unsigned __int8 uint8_t;
+   typedef unsigned __int16 uint16_t;
+   typedef unsigned __int32 uint32_t;
+   typedef unsigned __int64 uint64_t;
+   typedef __int8 int_least8_t;
+   typedef __int16 int_least16_t;
+   typedef __int32 int_least32_t;
+   typedef __int64 int_least64_t;
+   typedef unsigned __int8 uint_least8_t;
+   typedef unsigned __int16 uint_least16_t;
+   typedef unsigned __int32 uint_least32_t;
+   typedef unsigned __int64 uint_least64_t;
+   typedef __int8 int_fast8_t;
+   typedef __int16 int_fast16_t;
+   typedef __int32 int_fast32_t;
+   typedef __int64 int_fast64_t;
+   typedef unsigned __int8 uint_fast8_t;
+   typedef unsigned __int16 uint_fast16_t;
+   typedef unsigned __int32 uint_fast32_t;
+   typedef unsigned __int64 uint_fast64_t;
+   typedef __int64 intmax_t;
+   typedef unsigned __int64 uintmax_t;
+# else
+#  include <stdint.h>
+# endif
+# if _MSC_VER < 1800   /* MSVC < 2013 */
+#  ifndef __cplusplus
+    typedef unsigned char _Bool;
+#  endif
+# endif
+#else
+# include <stdint.h>
+# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux)
+#  include <alloca.h>
+# endif
+#endif
+
+#if PY_MAJOR_VERSION < 3
+# undef PyCapsule_CheckExact
+# undef PyCapsule_GetPointer
+# define PyCapsule_CheckExact(capsule) (PyCObject_Check(capsule))
+# define PyCapsule_GetPointer(capsule, name) \
+    (PyCObject_AsVoidPtr(capsule))
+#endif
+
+#if PY_MAJOR_VERSION >= 3
+# define PyInt_FromLong PyLong_FromLong
+#endif
+
+#define _cffi_from_c_double PyFloat_FromDouble
+#define _cffi_from_c_float PyFloat_FromDouble
+#define _cffi_from_c_long PyInt_FromLong
+#define _cffi_from_c_ulong PyLong_FromUnsignedLong
+#define _cffi_from_c_longlong PyLong_FromLongLong
+#define _cffi_from_c_ulonglong PyLong_FromUnsignedLongLong
+#define _cffi_from_c__Bool PyBool_FromLong
+
+#define _cffi_to_c_double PyFloat_AsDouble
+#define _cffi_to_c_float PyFloat_AsDouble
+
+#define _cffi_from_c_int_const(x)                                        \
+    (((x) > 0) ?                                                         \
+        ((unsigned long long)(x) <= (unsigned long long)LONG_MAX) ?      \
+            PyInt_FromLong((long)(x)) :                                  \
+            PyLong_FromUnsignedLongLong((unsigned long long)(x)) :       \
+        ((long long)(x) >= (long long)LONG_MIN) ?                        \
+            PyInt_FromLong((long)(x)) :                                  \
+            PyLong_FromLongLong((long long)(x)))
+
+#define _cffi_from_c_int(x, type)                                        \
+    (((type)-1) > 0 ? /* unsigned */                                     \
+        (sizeof(type) < sizeof(long) ?                                   \
+            PyInt_FromLong((long)x) :                                    \
+         sizeof(type) == sizeof(long) ?                                  \
+            PyLong_FromUnsignedLong((unsigned long)x) :                  \
+            PyLong_FromUnsignedLongLong((unsigned long long)x)) :        \
+        (sizeof(type) <= sizeof(long) ?                                  \
+            PyInt_FromLong((long)x) :                                    \
+            PyLong_FromLongLong((long long)x)))
+
+#define _cffi_to_c_int(o, type)                                          \
+    ((type)(                                                             \
+     sizeof(type) == 1 ? (((type)-1) > 0 ? (type)_cffi_to_c_u8(o)        \
+                                         : (type)_cffi_to_c_i8(o)) :     \
+     sizeof(type) == 2 ? 
(((type)-1) > 0 ? (type)_cffi_to_c_u16(o) \ + : (type)_cffi_to_c_i16(o)) : \ + sizeof(type) == 4 ? (((type)-1) > 0 ? (type)_cffi_to_c_u32(o) \ + : (type)_cffi_to_c_i32(o)) : \ + sizeof(type) == 8 ? (((type)-1) > 0 ? (type)_cffi_to_c_u64(o) \ + : (type)_cffi_to_c_i64(o)) : \ + (Py_FatalError("unsupported size for type " #type), (type)0))) + +#define _cffi_to_c_i8 \ + ((int(*)(PyObject *))_cffi_exports[1]) +#define _cffi_to_c_u8 \ + ((int(*)(PyObject *))_cffi_exports[2]) +#define _cffi_to_c_i16 \ + ((int(*)(PyObject *))_cffi_exports[3]) +#define _cffi_to_c_u16 \ + ((int(*)(PyObject *))_cffi_exports[4]) +#define _cffi_to_c_i32 \ + ((int(*)(PyObject *))_cffi_exports[5]) +#define _cffi_to_c_u32 \ + ((unsigned int(*)(PyObject *))_cffi_exports[6]) +#define _cffi_to_c_i64 \ + ((long long(*)(PyObject *))_cffi_exports[7]) +#define _cffi_to_c_u64 \ + ((unsigned long long(*)(PyObject *))_cffi_exports[8]) +#define _cffi_to_c_char \ + ((int(*)(PyObject *))_cffi_exports[9]) +#define _cffi_from_c_pointer \ + ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[10]) +#define _cffi_to_c_pointer \ + ((char *(*)(PyObject *, CTypeDescrObject *))_cffi_exports[11]) +#define _cffi_get_struct_layout \ + ((PyObject *(*)(Py_ssize_t[]))_cffi_exports[12]) +#define _cffi_restore_errno \ + ((void(*)(void))_cffi_exports[13]) +#define _cffi_save_errno \ + ((void(*)(void))_cffi_exports[14]) +#define _cffi_from_c_char \ + ((PyObject *(*)(char))_cffi_exports[15]) +#define _cffi_from_c_deref \ + ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[16]) +#define _cffi_to_c \ + ((int(*)(char *, CTypeDescrObject *, PyObject *))_cffi_exports[17]) +#define _cffi_from_c_struct \ + ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[18]) +#define _cffi_to_c_wchar_t \ + ((wchar_t(*)(PyObject *))_cffi_exports[19]) +#define _cffi_from_c_wchar_t \ + ((PyObject *(*)(wchar_t))_cffi_exports[20]) +#define _cffi_to_c_long_double \ + ((long double(*)(PyObject *))_cffi_exports[21]) +#define _cffi_to_c__Bool \ + ((_Bool(*)(PyObject *))_cffi_exports[22]) +#define _cffi_prepare_pointer_call_argument \ + ((Py_ssize_t(*)(CTypeDescrObject *, PyObject *, char **))_cffi_exports[23]) +#define _cffi_convert_array_from_object \ + ((int(*)(char *, CTypeDescrObject *, PyObject *))_cffi_exports[24]) +#define _CFFI_NUM_EXPORTS 25 + +typedef struct _ctypedescr CTypeDescrObject; + +static void *_cffi_exports[_CFFI_NUM_EXPORTS]; +static PyObject *_cffi_types, *_cffi_VerificationError; + +static int _cffi_setup_custom(PyObject *lib); /* forward */ + +static PyObject *_cffi_setup(PyObject *self, PyObject *args) +{ + PyObject *library; + int was_alive = (_cffi_types != NULL); + (void)self; /* unused */ + if (!PyArg_ParseTuple(args, "OOO", &_cffi_types, &_cffi_VerificationError, + &library)) + return NULL; + Py_INCREF(_cffi_types); + Py_INCREF(_cffi_VerificationError); + if (_cffi_setup_custom(library) < 0) + return NULL; + return PyBool_FromLong(was_alive); +} + +union _cffi_union_alignment_u { + unsigned char m_char; + unsigned short m_short; + unsigned int m_int; + unsigned long m_long; + unsigned long long m_longlong; + float m_float; + double m_double; + long double m_longdouble; +}; + +struct _cffi_freeme_s { + struct _cffi_freeme_s *next; + union _cffi_union_alignment_u alignment; +}; + +#ifdef __GNUC__ + __attribute__((unused)) +#endif +static int _cffi_convert_array_argument(CTypeDescrObject *ctptr, PyObject *arg, + char **output_data, Py_ssize_t datasize, + struct _cffi_freeme_s **freeme) +{ + char *p; + if (datasize < 0) + return -1; + + 
p = *output_data; + if (p == NULL) { + struct _cffi_freeme_s *fp = (struct _cffi_freeme_s *)PyObject_Malloc( + offsetof(struct _cffi_freeme_s, alignment) + (size_t)datasize); + if (fp == NULL) + return -1; + fp->next = *freeme; + *freeme = fp; + p = *output_data = (char *)&fp->alignment; + } + memset((void *)p, 0, (size_t)datasize); + return _cffi_convert_array_from_object(p, ctptr, arg); +} + +#ifdef __GNUC__ + __attribute__((unused)) +#endif +static void _cffi_free_array_arguments(struct _cffi_freeme_s *freeme) +{ + do { + void *p = (void *)freeme; + freeme = freeme->next; + PyObject_Free(p); + } while (freeme != NULL); +} + +static int _cffi_init(void) +{ + PyObject *module, *c_api_object = NULL; + + module = PyImport_ImportModule("_cffi_backend"); + if (module == NULL) + goto failure; + + c_api_object = PyObject_GetAttrString(module, "_C_API"); + if (c_api_object == NULL) + goto failure; + if (!PyCapsule_CheckExact(c_api_object)) { + PyErr_SetNone(PyExc_ImportError); + goto failure; + } + memcpy(_cffi_exports, PyCapsule_GetPointer(c_api_object, "cffi"), + _CFFI_NUM_EXPORTS * sizeof(void *)); + + Py_DECREF(module); + Py_DECREF(c_api_object); + return 0; + + failure: + Py_XDECREF(module); + Py_XDECREF(c_api_object); + return -1; +} + +#define _cffi_type(num) ((CTypeDescrObject *)PyList_GET_ITEM(_cffi_types, num)) + +/**********/ +''' diff --git a/env/lib/python3.8/site-packages/cffi/vengine_gen.py b/env/lib/python3.8/site-packages/cffi/vengine_gen.py new file mode 100644 index 00000000..26421526 --- /dev/null +++ b/env/lib/python3.8/site-packages/cffi/vengine_gen.py @@ -0,0 +1,675 @@ +# +# DEPRECATED: implementation for ffi.verify() +# +import sys, os +import types + +from . import model +from .error import VerificationError + + +class VGenericEngine(object): + _class_key = 'g' + _gen_python_module = False + + def __init__(self, verifier): + self.verifier = verifier + self.ffi = verifier.ffi + self.export_symbols = [] + self._struct_pending_verification = {} + + def patch_extension_kwds(self, kwds): + # add 'export_symbols' to the dictionary. Note that we add the + # list before filling it. When we fill it, it will thus also show + # up in kwds['export_symbols']. + kwds.setdefault('export_symbols', self.export_symbols) + + def find_module(self, module_name, path, so_suffixes): + for so_suffix in so_suffixes: + basename = module_name + so_suffix + if path is None: + path = sys.path + for dirname in path: + filename = os.path.join(dirname, basename) + if os.path.isfile(filename): + return filename + + def collect_types(self): + pass # not needed in the generic engine + + def _prnt(self, what=''): + self._f.write(what + '\n') + + def write_source_to_f(self): + prnt = self._prnt + # first paste some standard set of lines that are mostly '#include' + prnt(cffimod_header) + # then paste the C source given by the user, verbatim. + prnt(self.verifier.preamble) + # + # call generate_gen_xxx_decl(), for every xxx found from + # ffi._parser._declarations. This generates all the functions. 
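+        # (Illustrative, not upstream: for a declaration keyed
+        # "function foo", _generate('decl') dispatches to
+        # self._generate_gen_function_decl(tp, 'foo'); the same naming
+        # scheme covers 'struct', 'union', 'macro', 'variable', ... --
+        # see _generate() below.)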
+ self._generate('decl') + # + # on Windows, distutils insists on putting init_cffi_xyz in + # 'export_symbols', so instead of fighting it, just give up and + # give it one + if sys.platform == 'win32': + if sys.version_info >= (3,): + prefix = 'PyInit_' + else: + prefix = 'init' + modname = self.verifier.get_module_name() + prnt("void %s%s(void) { }\n" % (prefix, modname)) + + def load_library(self, flags=0): + # import it with the CFFI backend + backend = self.ffi._backend + # needs to make a path that contains '/', on Posix + filename = os.path.join(os.curdir, self.verifier.modulefilename) + module = backend.load_library(filename, flags) + # + # call loading_gen_struct() to get the struct layout inferred by + # the C compiler + self._load(module, 'loading') + + # build the FFILibrary class and instance, this is a module subclass + # because modules are expected to have usually-constant-attributes and + # in PyPy this means the JIT is able to treat attributes as constant, + # which we want. + class FFILibrary(types.ModuleType): + _cffi_generic_module = module + _cffi_ffi = self.ffi + _cffi_dir = [] + def __dir__(self): + return FFILibrary._cffi_dir + library = FFILibrary("") + # + # finally, call the loaded_gen_xxx() functions. This will set + # up the 'library' object. + self._load(module, 'loaded', library=library) + return library + + def _get_declarations(self): + lst = [(key, tp) for (key, (tp, qual)) in + self.ffi._parser._declarations.items()] + lst.sort() + return lst + + def _generate(self, step_name): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + try: + method = getattr(self, '_generate_gen_%s_%s' % (kind, + step_name)) + except AttributeError: + raise VerificationError( + "not implemented in verify(): %r" % name) + try: + method(tp, realname) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _load(self, module, step_name, **kwds): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + method = getattr(self, '_%s_gen_%s' % (step_name, kind)) + try: + method(tp, realname, module, **kwds) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _generate_nothing(self, tp, name): + pass + + def _loaded_noop(self, tp, name, module, **kwds): + pass + + # ---------- + # typedefs: generates no code so far + + _generate_gen_typedef_decl = _generate_nothing + _loading_gen_typedef = _loaded_noop + _loaded_gen_typedef = _loaded_noop + + # ---------- + # function declarations + + def _generate_gen_function_decl(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + # cannot support vararg functions better than this: check for its + # exact type (including the fixed arguments), and build it as a + # constant function pointer (no _cffi_f_%s wrapper) + self._generate_gen_const(False, name, tp) + return + prnt = self._prnt + numargs = len(tp.args) + argnames = [] + for i, type in enumerate(tp.args): + indirection = '' + if isinstance(type, model.StructOrUnion): + indirection = '*' + argnames.append('%sx%d' % (indirection, i)) + context = 'argument of %s' % name + arglist = [type.get_c_name(' %s' % arg, context) + for type, arg in zip(tp.args, argnames)] + tpresult = tp.result + if isinstance(tpresult, model.StructOrUnion): + arglist.insert(0, tpresult.get_c_name(' *r', context)) + tpresult = model.void_type + arglist = ', '.join(arglist) or 'void' + wrappername = '_cffi_f_%s' % name + self.export_symbols.append(wrappername) + if tp.abi: + abi = 
tp.abi + ' ' + else: + abi = '' + funcdecl = ' %s%s(%s)' % (abi, wrappername, arglist) + context = 'result of %s' % name + prnt(tpresult.get_c_name(funcdecl, context)) + prnt('{') + # + if isinstance(tp.result, model.StructOrUnion): + result_code = '*r = ' + elif not isinstance(tp.result, model.VoidType): + result_code = 'return ' + else: + result_code = '' + prnt(' %s%s(%s);' % (result_code, name, ', '.join(argnames))) + prnt('}') + prnt() + + _loading_gen_function = _loaded_noop + + def _loaded_gen_function(self, tp, name, module, library): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + newfunction = self._load_constant(False, tp, name, module) + else: + indirections = [] + base_tp = tp + if (any(isinstance(typ, model.StructOrUnion) for typ in tp.args) + or isinstance(tp.result, model.StructOrUnion)): + indirect_args = [] + for i, typ in enumerate(tp.args): + if isinstance(typ, model.StructOrUnion): + typ = model.PointerType(typ) + indirections.append((i, typ)) + indirect_args.append(typ) + indirect_result = tp.result + if isinstance(indirect_result, model.StructOrUnion): + if indirect_result.fldtypes is None: + raise TypeError("'%s' is used as result type, " + "but is opaque" % ( + indirect_result._get_c_name(),)) + indirect_result = model.PointerType(indirect_result) + indirect_args.insert(0, indirect_result) + indirections.insert(0, ("result", indirect_result)) + indirect_result = model.void_type + tp = model.FunctionPtrType(tuple(indirect_args), + indirect_result, tp.ellipsis) + BFunc = self.ffi._get_cached_btype(tp) + wrappername = '_cffi_f_%s' % name + newfunction = module.load_function(BFunc, wrappername) + for i, typ in indirections: + newfunction = self._make_struct_wrapper(newfunction, i, typ, + base_tp) + setattr(library, name, newfunction) + type(library)._cffi_dir.append(name) + + def _make_struct_wrapper(self, oldfunc, i, tp, base_tp): + backend = self.ffi._backend + BType = self.ffi._get_cached_btype(tp) + if i == "result": + ffi = self.ffi + def newfunc(*args): + res = ffi.new(BType) + oldfunc(res, *args) + return res[0] + else: + def newfunc(*args): + args = args[:i] + (backend.newp(BType, args[i]),) + args[i+1:] + return oldfunc(*args) + newfunc._cffi_base_type = base_tp + return newfunc + + # ---------- + # named structs + + def _generate_gen_struct_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'struct', name) + + def _loading_gen_struct(self, tp, name, module): + self._loading_struct_or_union(tp, 'struct', name, module) + + def _loaded_gen_struct(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + def _generate_gen_union_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'union', name) + + def _loading_gen_union(self, tp, name, module): + self._loading_struct_or_union(tp, 'union', name, module) + + def _loaded_gen_union(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + def _generate_struct_or_union_decl(self, tp, prefix, name): + if tp.fldnames is None: + return # nothing to do with opaque structs + checkfuncname = '_cffi_check_%s_%s' % (prefix, name) + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + cname = ('%s %s' % (prefix, name)).strip() + # + prnt = self._prnt + prnt('static void %s(%s *p)' % (checkfuncname, cname)) + prnt('{') + prnt(' /* only to generate compile-time warnings or errors */') + prnt(' (void)p;') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if (isinstance(ftype, model.PrimitiveType) + 
and ftype.is_integer_type()) or fbitsize >= 0: + # accept all integers, but complain on float or double + prnt(' (void)((p->%s) << 1);' % fname) + else: + # only accept exactly the type declared. + try: + prnt(' { %s = &p->%s; (void)tmp; }' % ( + ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), + fname)) + except VerificationError as e: + prnt(' /* %s */' % str(e)) # cannot verify it, ignore + prnt('}') + self.export_symbols.append(layoutfuncname) + prnt('intptr_t %s(intptr_t i)' % (layoutfuncname,)) + prnt('{') + prnt(' struct _cffi_aligncheck { char x; %s y; };' % cname) + prnt(' static intptr_t nums[] = {') + prnt(' sizeof(%s),' % cname) + prnt(' offsetof(struct _cffi_aligncheck, y),') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + prnt(' offsetof(%s, %s),' % (cname, fname)) + if isinstance(ftype, model.ArrayType) and ftype.length is None: + prnt(' 0, /* %s */' % ftype._get_c_name()) + else: + prnt(' sizeof(((%s *)0)->%s),' % (cname, fname)) + prnt(' -1') + prnt(' };') + prnt(' return nums[i];') + prnt(' /* the next line is not executed, but compiled */') + prnt(' %s(0);' % (checkfuncname,)) + prnt('}') + prnt() + + def _loading_struct_or_union(self, tp, prefix, name, module): + if tp.fldnames is None: + return # nothing to do with opaque structs + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + # + BFunc = self.ffi._typeof_locked("intptr_t(*)(intptr_t)")[0] + function = module.load_function(BFunc, layoutfuncname) + layout = [] + num = 0 + while True: + x = function(num) + if x < 0: break + layout.append(x) + num += 1 + if isinstance(tp, model.StructOrUnion) and tp.partial: + # use the function()'s sizes and offsets to guide the + # layout of the struct + totalsize = layout[0] + totalalignment = layout[1] + fieldofs = layout[2::2] + fieldsize = layout[3::2] + tp.force_flatten() + assert len(fieldofs) == len(fieldsize) == len(tp.fldnames) + tp.fixedlayout = fieldofs, fieldsize, totalsize, totalalignment + else: + cname = ('%s %s' % (prefix, name)).strip() + self._struct_pending_verification[tp] = layout, cname + + def _loaded_struct_or_union(self, tp): + if tp.fldnames is None: + return # nothing to do with opaque structs + self.ffi._get_cached_btype(tp) # force 'fixedlayout' to be considered + + if tp in self._struct_pending_verification: + # check that the layout sizes and offsets match the real ones + def check(realvalue, expectedvalue, msg): + if realvalue != expectedvalue: + raise VerificationError( + "%s (we have %d, but C compiler says %d)" + % (msg, expectedvalue, realvalue)) + ffi = self.ffi + BStruct = ffi._get_cached_btype(tp) + layout, cname = self._struct_pending_verification.pop(tp) + check(layout[0], ffi.sizeof(BStruct), "wrong total size") + check(layout[1], ffi.alignof(BStruct), "wrong total alignment") + i = 2 + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + check(layout[i], ffi.offsetof(BStruct, fname), + "wrong offset for field %r" % (fname,)) + if layout[i+1] != 0: + BField = ffi._get_cached_btype(ftype) + check(layout[i+1], ffi.sizeof(BField), + "wrong size for field %r" % (fname,)) + i += 2 + assert i == len(layout) + + # ---------- + # 'anonymous' declarations. These are produced for anonymous structs + # or unions; the 'name' is obtained by a typedef. 
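+    # (for instance, "typedef struct { int x; } point_t;" is recorded as an
+    # anonymous struct whose usable name is 'point_t')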
+ + def _generate_gen_anonymous_decl(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_gen_enum_decl(tp, name, '') + else: + self._generate_struct_or_union_decl(tp, '', name) + + def _loading_gen_anonymous(self, tp, name, module): + if isinstance(tp, model.EnumType): + self._loading_gen_enum(tp, name, module, '') + else: + self._loading_struct_or_union(tp, '', name, module) + + def _loaded_gen_anonymous(self, tp, name, module, **kwds): + if isinstance(tp, model.EnumType): + self._loaded_gen_enum(tp, name, module, **kwds) + else: + self._loaded_struct_or_union(tp) + + # ---------- + # constants, likely declared with '#define' + + def _generate_gen_const(self, is_int, name, tp=None, category='const', + check_value=None): + prnt = self._prnt + funcname = '_cffi_%s_%s' % (category, name) + self.export_symbols.append(funcname) + if check_value is not None: + assert is_int + assert category == 'const' + prnt('int %s(char *out_error)' % funcname) + prnt('{') + self._check_int_constant_value(name, check_value) + prnt(' return 0;') + prnt('}') + elif is_int: + assert category == 'const' + prnt('int %s(long long *out_value)' % funcname) + prnt('{') + prnt(' *out_value = (long long)(%s);' % (name,)) + prnt(' return (%s) <= 0;' % (name,)) + prnt('}') + else: + assert tp is not None + assert check_value is None + if category == 'var': + ampersand = '&' + else: + ampersand = '' + extra = '' + if category == 'const' and isinstance(tp, model.StructOrUnion): + extra = 'const *' + ampersand = '&' + prnt(tp.get_c_name(' %s%s(void)' % (extra, funcname), name)) + prnt('{') + prnt(' return (%s%s);' % (ampersand, name)) + prnt('}') + prnt() + + def _generate_gen_constant_decl(self, tp, name): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + self._generate_gen_const(is_int, name, tp) + + _loading_gen_constant = _loaded_noop + + def _load_constant(self, is_int, tp, name, module, check_value=None): + funcname = '_cffi_const_%s' % name + if check_value is not None: + assert is_int + self._load_known_int_constant(module, funcname) + value = check_value + elif is_int: + BType = self.ffi._typeof_locked("long long*")[0] + BFunc = self.ffi._typeof_locked("int(*)(long long*)")[0] + function = module.load_function(BFunc, funcname) + p = self.ffi.new(BType) + negative = function(p) + value = int(p[0]) + if value < 0 and not negative: + BLongLong = self.ffi._typeof_locked("long long")[0] + value += (1 << (8*self.ffi.sizeof(BLongLong))) + else: + assert check_value is None + fntypeextra = '(*)(void)' + if isinstance(tp, model.StructOrUnion): + fntypeextra = '*' + fntypeextra + BFunc = self.ffi._typeof_locked(tp.get_c_name(fntypeextra, name))[0] + function = module.load_function(BFunc, funcname) + value = function() + if isinstance(tp, model.StructOrUnion): + value = value[0] + return value + + def _loaded_gen_constant(self, tp, name, module, library): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + value = self._load_constant(is_int, tp, name, module) + setattr(library, name, value) + type(library)._cffi_dir.append(name) + + # ---------- + # enums + + def _check_int_constant_value(self, name, value): + prnt = self._prnt + if value <= 0: + prnt(' if ((%s) > 0 || (long)(%s) != %dL) {' % ( + name, name, value)) + else: + prnt(' if ((%s) <= 0 || (unsigned long)(%s) != %dUL) {' % ( + name, name, value)) + prnt(' char buf[64];') + prnt(' if ((%s) <= 0)' % name) + prnt(' sprintf(buf, "%%ld", (long)(%s));' % name) + prnt(' else') + prnt(' sprintf(buf, "%%lu", 
(unsigned long)(%s));' % + name) + prnt(' sprintf(out_error, "%s has the real value %s, not %s",') + prnt(' "%s", buf, "%d");' % (name[:100], value)) + prnt(' return -1;') + prnt(' }') + + def _load_known_int_constant(self, module, funcname): + BType = self.ffi._typeof_locked("char[]")[0] + BFunc = self.ffi._typeof_locked("int(*)(char*)")[0] + function = module.load_function(BFunc, funcname) + p = self.ffi.new(BType, 256) + if function(p) < 0: + error = self.ffi.string(p) + if sys.version_info >= (3,): + error = str(error, 'utf-8') + raise VerificationError(error) + + def _enum_funcname(self, prefix, name): + # "$enum_$1" => "___D_enum____D_1" + name = name.replace('$', '___D_') + return '_cffi_e_%s_%s' % (prefix, name) + + def _generate_gen_enum_decl(self, tp, name, prefix='enum'): + if tp.partial: + for enumerator in tp.enumerators: + self._generate_gen_const(True, enumerator) + return + # + funcname = self._enum_funcname(prefix, name) + self.export_symbols.append(funcname) + prnt = self._prnt + prnt('int %s(char *out_error)' % funcname) + prnt('{') + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + self._check_int_constant_value(enumerator, enumvalue) + prnt(' return 0;') + prnt('}') + prnt() + + def _loading_gen_enum(self, tp, name, module, prefix='enum'): + if tp.partial: + enumvalues = [self._load_constant(True, tp, enumerator, module) + for enumerator in tp.enumerators] + tp.enumvalues = tuple(enumvalues) + tp.partial_resolved = True + else: + funcname = self._enum_funcname(prefix, name) + self._load_known_int_constant(module, funcname) + + def _loaded_gen_enum(self, tp, name, module, library): + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + setattr(library, enumerator, enumvalue) + type(library)._cffi_dir.append(enumerator) + + # ---------- + # macros: for now only for integers + + def _generate_gen_macro_decl(self, tp, name): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + self._generate_gen_const(True, name, check_value=check_value) + + _loading_gen_macro = _loaded_noop + + def _loaded_gen_macro(self, tp, name, module, library): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + value = self._load_constant(True, tp, name, module, + check_value=check_value) + setattr(library, name, value) + type(library)._cffi_dir.append(name) + + # ---------- + # global variables + + def _generate_gen_variable_decl(self, tp, name): + if isinstance(tp, model.ArrayType): + if tp.length_is_unknown(): + prnt = self._prnt + funcname = '_cffi_sizeof_%s' % (name,) + self.export_symbols.append(funcname) + prnt("size_t %s(void)" % funcname) + prnt("{") + prnt(" return sizeof(%s);" % (name,)) + prnt("}") + tp_ptr = model.PointerType(tp.item) + self._generate_gen_const(False, name, tp_ptr) + else: + tp_ptr = model.PointerType(tp) + self._generate_gen_const(False, name, tp_ptr, category='var') + + _loading_gen_variable = _loaded_noop + + def _loaded_gen_variable(self, tp, name, module, library): + if isinstance(tp, model.ArrayType): # int a[5] is "constant" in the + # sense that "a=..." 
is forbidden
+            if tp.length_is_unknown():
+                funcname = '_cffi_sizeof_%s' % (name,)
+                BFunc = self.ffi._typeof_locked('size_t(*)(void)')[0]
+                function = module.load_function(BFunc, funcname)
+                size = function()
+                BItemType = self.ffi._get_cached_btype(tp.item)
+                length, rest = divmod(size, self.ffi.sizeof(BItemType))
+                if rest != 0:
+                    raise VerificationError(
+                        "bad size: %r does not seem to be an array of %s" %
+                        (name, tp.item))
+                tp = tp.resolve_length(length)
+            tp_ptr = model.PointerType(tp.item)
+            value = self._load_constant(False, tp_ptr, name, module)
+            # 'value' is a <cdata 'type *'> which we have to replace with
+            # a <cdata 'type[N]'> if the N is actually known
+            if tp.length is not None:
+                BArray = self.ffi._get_cached_btype(tp)
+                value = self.ffi.cast(BArray, value)
+            setattr(library, name, value)
+            type(library)._cffi_dir.append(name)
+            return
+        # remove ptr=<cdata 'type *'> from the library instance, and replace
+        # it by a property on the class, which reads/writes into ptr[0].
+        funcname = '_cffi_var_%s' % name
+        BFunc = self.ffi._typeof_locked(tp.get_c_name('*(*)(void)', name))[0]
+        function = module.load_function(BFunc, funcname)
+        ptr = function()
+        def getter(library):
+            return ptr[0]
+        def setter(library, value):
+            ptr[0] = value
+        setattr(type(library), name, property(getter, setter))
+        type(library)._cffi_dir.append(name)
+
+cffimod_header = r'''
+#include <stdio.h>
+#include <stddef.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <sys/types.h>   /* XXX for ssize_t on some platforms */
+
+/* this block of #ifs should be kept exactly identical between
+   c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py
+   and cffi/_cffi_include.h */
+#if defined(_MSC_VER)
+# include <malloc.h>   /* for alloca() */
+# if _MSC_VER < 1600   /* MSVC < 2010 */
+   typedef __int8 int8_t;
+   typedef __int16 int16_t;
+   typedef __int32 int32_t;
+   typedef __int64 int64_t;
+   typedef unsigned __int8 uint8_t;
+   typedef unsigned __int16 uint16_t;
+   typedef unsigned __int32 uint32_t;
+   typedef unsigned __int64 uint64_t;
+   typedef __int8 int_least8_t;
+   typedef __int16 int_least16_t;
+   typedef __int32 int_least32_t;
+   typedef __int64 int_least64_t;
+   typedef unsigned __int8 uint_least8_t;
+   typedef unsigned __int16 uint_least16_t;
+   typedef unsigned __int32 uint_least32_t;
+   typedef unsigned __int64 uint_least64_t;
+   typedef __int8 int_fast8_t;
+   typedef __int16 int_fast16_t;
+   typedef __int32 int_fast32_t;
+   typedef __int64 int_fast64_t;
+   typedef unsigned __int8 uint_fast8_t;
+   typedef unsigned __int16 uint_fast16_t;
+   typedef unsigned __int32 uint_fast32_t;
+   typedef unsigned __int64 uint_fast64_t;
+   typedef __int64 intmax_t;
+   typedef unsigned __int64 uintmax_t;
+# else
+#  include <stdint.h>
+# endif
+# if _MSC_VER < 1800   /* MSVC < 2013 */
+#  ifndef __cplusplus
+    typedef unsigned char _Bool;
+#  endif
+# endif
+#else
+# include <stdint.h>
+# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux)
+#  include <alloca.h>
+# endif
+#endif
+'''
diff --git a/env/lib/python3.8/site-packages/cffi/verifier.py b/env/lib/python3.8/site-packages/cffi/verifier.py
new file mode 100644
index 00000000..a500c781
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cffi/verifier.py
@@ -0,0 +1,307 @@
+#
+# DEPRECATED: implementation for ffi.verify()
+#
+import sys, os, binascii, shutil, io
+from . import __version_verifier_modules__
+from .
import ffiplatform +from .error import VerificationError + +if sys.version_info >= (3, 3): + import importlib.machinery + def _extension_suffixes(): + return importlib.machinery.EXTENSION_SUFFIXES[:] +else: + import imp + def _extension_suffixes(): + return [suffix for suffix, _, type in imp.get_suffixes() + if type == imp.C_EXTENSION] + + +if sys.version_info >= (3,): + NativeIO = io.StringIO +else: + class NativeIO(io.BytesIO): + def write(self, s): + if isinstance(s, unicode): + s = s.encode('ascii') + super(NativeIO, self).write(s) + + +class Verifier(object): + + def __init__(self, ffi, preamble, tmpdir=None, modulename=None, + ext_package=None, tag='', force_generic_engine=False, + source_extension='.c', flags=None, relative_to=None, **kwds): + if ffi._parser._uses_new_feature: + raise VerificationError( + "feature not supported with ffi.verify(), but only " + "with ffi.set_source(): %s" % (ffi._parser._uses_new_feature,)) + self.ffi = ffi + self.preamble = preamble + if not modulename: + flattened_kwds = ffiplatform.flatten(kwds) + vengine_class = _locate_engine_class(ffi, force_generic_engine) + self._vengine = vengine_class(self) + self._vengine.patch_extension_kwds(kwds) + self.flags = flags + self.kwds = self.make_relative_to(kwds, relative_to) + # + if modulename: + if tag: + raise TypeError("can't specify both 'modulename' and 'tag'") + else: + key = '\x00'.join(['%d.%d' % sys.version_info[:2], + __version_verifier_modules__, + preamble, flattened_kwds] + + ffi._cdefsources) + if sys.version_info >= (3,): + key = key.encode('utf-8') + k1 = hex(binascii.crc32(key[0::2]) & 0xffffffff) + k1 = k1.lstrip('0x').rstrip('L') + k2 = hex(binascii.crc32(key[1::2]) & 0xffffffff) + k2 = k2.lstrip('0').rstrip('L') + modulename = '_cffi_%s_%s%s%s' % (tag, self._vengine._class_key, + k1, k2) + suffix = _get_so_suffixes()[0] + self.tmpdir = tmpdir or _caller_dir_pycache() + self.sourcefilename = os.path.join(self.tmpdir, modulename + source_extension) + self.modulefilename = os.path.join(self.tmpdir, modulename + suffix) + self.ext_package = ext_package + self._has_source = False + self._has_module = False + + def write_source(self, file=None): + """Write the C source code. It is produced in 'self.sourcefilename', + which can be tweaked beforehand.""" + with self.ffi._lock: + if self._has_source and file is None: + raise VerificationError( + "source code already written") + self._write_source(file) + + def compile_module(self): + """Write the C source code (if not done already) and compile it. + This produces a dynamic link library in 'self.modulefilename'.""" + with self.ffi._lock: + if self._has_module: + raise VerificationError("module already compiled") + if not self._has_source: + self._write_source() + self._compile_module() + + def load_library(self): + """Get a C module from this Verifier instance. + Returns an instance of a FFILibrary class that behaves like the + objects returned by ffi.dlopen(), but that delegates all + operations to the C module. If necessary, the C code is written + and compiled first. 
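+
+        Illustrative sketch (assumes an `ffi` object with cdef()s already
+        declared; not part of the original docstring):
+
+            lib = Verifier(ffi, "#include <math.h>").load_library()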
+ """ + with self.ffi._lock: + if not self._has_module: + self._locate_module() + if not self._has_module: + if not self._has_source: + self._write_source() + self._compile_module() + return self._load_library() + + def get_module_name(self): + basename = os.path.basename(self.modulefilename) + # kill both the .so extension and the other .'s, as introduced + # by Python 3: 'basename.cpython-33m.so' + basename = basename.split('.', 1)[0] + # and the _d added in Python 2 debug builds --- but try to be + # conservative and not kill a legitimate _d + if basename.endswith('_d') and hasattr(sys, 'gettotalrefcount'): + basename = basename[:-2] + return basename + + def get_extension(self): + ffiplatform._hack_at_distutils() # backward compatibility hack + if not self._has_source: + with self.ffi._lock: + if not self._has_source: + self._write_source() + sourcename = ffiplatform.maybe_relative_path(self.sourcefilename) + modname = self.get_module_name() + return ffiplatform.get_extension(sourcename, modname, **self.kwds) + + def generates_python_module(self): + return self._vengine._gen_python_module + + def make_relative_to(self, kwds, relative_to): + if relative_to and os.path.dirname(relative_to): + dirname = os.path.dirname(relative_to) + kwds = kwds.copy() + for key in ffiplatform.LIST_OF_FILE_NAMES: + if key in kwds: + lst = kwds[key] + if not isinstance(lst, (list, tuple)): + raise TypeError("keyword '%s' should be a list or tuple" + % (key,)) + lst = [os.path.join(dirname, fn) for fn in lst] + kwds[key] = lst + return kwds + + # ---------- + + def _locate_module(self): + if not os.path.isfile(self.modulefilename): + if self.ext_package: + try: + pkg = __import__(self.ext_package, None, None, ['__doc__']) + except ImportError: + return # cannot import the package itself, give up + # (e.g. it might be called differently before installation) + path = pkg.__path__ + else: + path = None + filename = self._vengine.find_module(self.get_module_name(), path, + _get_so_suffixes()) + if filename is None: + return + self.modulefilename = filename + self._vengine.collect_types() + self._has_module = True + + def _write_source_to(self, file): + self._vengine._f = file + try: + self._vengine.write_source_to_f() + finally: + del self._vengine._f + + def _write_source(self, file=None): + if file is not None: + self._write_source_to(file) + else: + # Write our source file to an in memory file. 
+ f = NativeIO() + self._write_source_to(f) + source_data = f.getvalue() + + # Determine if this matches the current file + if os.path.exists(self.sourcefilename): + with open(self.sourcefilename, "r") as fp: + needs_written = not (fp.read() == source_data) + else: + needs_written = True + + # Actually write the file out if it doesn't match + if needs_written: + _ensure_dir(self.sourcefilename) + with open(self.sourcefilename, "w") as fp: + fp.write(source_data) + + # Set this flag + self._has_source = True + + def _compile_module(self): + # compile this C source + tmpdir = os.path.dirname(self.sourcefilename) + outputfilename = ffiplatform.compile(tmpdir, self.get_extension()) + try: + same = ffiplatform.samefile(outputfilename, self.modulefilename) + except OSError: + same = False + if not same: + _ensure_dir(self.modulefilename) + shutil.move(outputfilename, self.modulefilename) + self._has_module = True + + def _load_library(self): + assert self._has_module + if self.flags is not None: + return self._vengine.load_library(self.flags) + else: + return self._vengine.load_library() + +# ____________________________________________________________ + +_FORCE_GENERIC_ENGINE = False # for tests + +def _locate_engine_class(ffi, force_generic_engine): + if _FORCE_GENERIC_ENGINE: + force_generic_engine = True + if not force_generic_engine: + if '__pypy__' in sys.builtin_module_names: + force_generic_engine = True + else: + try: + import _cffi_backend + except ImportError: + _cffi_backend = '?' + if ffi._backend is not _cffi_backend: + force_generic_engine = True + if force_generic_engine: + from . import vengine_gen + return vengine_gen.VGenericEngine + else: + from . import vengine_cpy + return vengine_cpy.VCPythonEngine + +# ____________________________________________________________ + +_TMPDIR = None + +def _caller_dir_pycache(): + if _TMPDIR: + return _TMPDIR + result = os.environ.get('CFFI_TMPDIR') + if result: + return result + filename = sys._getframe(2).f_code.co_filename + return os.path.abspath(os.path.join(os.path.dirname(filename), + '__pycache__')) + +def set_tmpdir(dirname): + """Set the temporary directory to use instead of __pycache__.""" + global _TMPDIR + _TMPDIR = dirname + +def cleanup_tmpdir(tmpdir=None, keep_so=False): + """Clean up the temporary directory by removing all files in it + called `_cffi_*.{c,so}` as well as the `build` subdirectory.""" + tmpdir = tmpdir or _caller_dir_pycache() + try: + filelist = os.listdir(tmpdir) + except OSError: + return + if keep_so: + suffix = '.c' # only remove .c files + else: + suffix = _get_so_suffixes()[0].lower() + for fn in filelist: + if fn.lower().startswith('_cffi_') and ( + fn.lower().endswith(suffix) or fn.lower().endswith('.c')): + try: + os.unlink(os.path.join(tmpdir, fn)) + except OSError: + pass + clean_dir = [os.path.join(tmpdir, 'build')] + for dir in clean_dir: + try: + for fn in os.listdir(dir): + fn = os.path.join(dir, fn) + if os.path.isdir(fn): + clean_dir.append(fn) + else: + os.unlink(fn) + except OSError: + pass + +def _get_so_suffixes(): + suffixes = _extension_suffixes() + if not suffixes: + # bah, no C_EXTENSION available. 
Occurs on pypy without cpyext + if sys.platform == 'win32': + suffixes = [".pyd"] + else: + suffixes = [".so"] + + return suffixes + +def _ensure_dir(filename): + dirname = os.path.dirname(filename) + if dirname and not os.path.isdir(dirname): + os.makedirs(dirname) diff --git a/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/INSTALLER b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/LICENSE b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/LICENSE new file mode 100644 index 00000000..ad82355b --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2019 TAHRI Ahmed R. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/METADATA b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/METADATA new file mode 100644 index 00000000..c6ebd7d4 --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/METADATA @@ -0,0 +1,269 @@ +Metadata-Version: 2.1 +Name: charset-normalizer +Version: 2.1.1 +Summary: The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet. 
+Home-page: https://github.com/ousret/charset_normalizer +Author: Ahmed TAHRI @Ousret +Author-email: ahmed.tahri@cloudnursery.dev +License: MIT +Project-URL: Bug Reports, https://github.com/Ousret/charset_normalizer/issues +Project-URL: Documentation, https://charset-normalizer.readthedocs.io/en/latest +Keywords: encoding,i18n,txt,text,charset,charset-detector,normalization,unicode,chardet +Classifier: Development Status :: 5 - Production/Stable +Classifier: License :: OSI Approved :: MIT License +Classifier: Intended Audience :: Developers +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Topic :: Text Processing :: Linguistic +Classifier: Topic :: Utilities +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Typing :: Typed +Requires-Python: >=3.6.0 +Description-Content-Type: text/markdown +License-File: LICENSE +Provides-Extra: unicode_backport +Requires-Dist: unicodedata2 ; extra == 'unicode_backport' + + +

+# Charset Detection, for Everyone 👋
+
+*The Real First Universal Charset Detector*
+
+*(badges: Download Count Total, etc.)*
+
+> A library that helps you read text from an unknown charset encoding. Motivated by `chardet`,
+> I'm trying to resolve the issue by taking a new approach.
+> All IANA character set names for which the Python core library provides codecs are supported.
+

+>>>>> 👉 Try Me Online Now, Then Adopt Me 👈 <<<<<

+
+This project offers you an alternative to **Universal Charset Encoding Detector**, also known as **Chardet**.
+
+| Feature | [Chardet](https://github.com/chardet/chardet) | Charset Normalizer | [cChardet](https://github.com/PyYoshi/cChardet) |
+| ------------- | :-------------: | :------------------: | :------------------: |
+| `Fast` | ❌ | ✅ | ✅ |
+| `Universal**` | ❌ | ✅ | ❌ |
+| `Reliable` **without** distinguishable standards | ❌ | ✅ | ✅ |
+| `Reliable` **with** distinguishable standards | ✅ | ✅ | ✅ |
+| `License` | LGPL-2.1 _restrictive_ | MIT | MPL-1.1 _restrictive_ |
+| `Native Python` | ✅ | ✅ | ❌ |
+| `Detect spoken language` | ❌ | ✅ | N/A |
+| `UnicodeDecodeError Safety` | ❌ | ✅ | ❌ |
+| `Whl Size` | 193.6 kB | 39.5 kB | ~200 kB |
+| `Supported Encoding` | 33 | :tada: [93](https://charset-normalizer.readthedocs.io/en/latest/user/support.html#supported-encodings) | 40 |
+

+*(demo images: "Reading Normalized Text", "Cat Reading Text")*
+
+*\*\* : They are clearly using code specific to each encoding, even if it covers most of the encodings in use.*
+
+Did you get here because of the logs? See [https://charset-normalizer.readthedocs.io/en/latest/user/miscellaneous.html](https://charset-normalizer.readthedocs.io/en/latest/user/miscellaneous.html)
+
+## ⭐ Your support
+
+*Fork, test-it, star-it, submit your ideas! We do listen.*
+
+## ⚡ Performance
+
+This package offers better performance than its counterpart Chardet. Here are some numbers.
+
+| Package | Accuracy | Mean per file (ms) | File per sec (est) |
+| ------------- | :-------------: | :------------------: | :------------------: |
+| [chardet](https://github.com/chardet/chardet) | 86 % | 200 ms | 5 file/sec |
+| charset-normalizer | **98 %** | **39 ms** | 26 file/sec |
+
+| Package | 99th percentile | 95th percentile | 50th percentile |
+| ------------- | :-------------: | :------------------: | :------------------: |
+| [chardet](https://github.com/chardet/chardet) | 1200 ms | 287 ms | 23 ms |
+| charset-normalizer | 400 ms | 200 ms | 15 ms |
+
+Chardet's performance on larger files (1MB+) is very poor. Expect a huge difference on large payloads.
+
+> Stats are generated using 400+ files with default parameters. For more details on the files used, see the GHA workflows.
+> And yes, these results might change at any time. The dataset can be updated to include more files.
+> The actual delays depend heavily on your CPU capabilities. The factors should remain the same.
+> Keep in mind that the stats are generous and that Chardet's accuracy versus ours is measured using Chardet's initial capability
+> (e.g. supported encodings). Challenge them if you want.
+
+[cchardet](https://github.com/PyYoshi/cChardet) is a non-native (C++ binding), unmaintained, faster alternative with
+better accuracy than chardet but lower than this package's. If speed is the most important factor, you should try it.
+
+## ✨ Installation
+
+Using PyPI for the latest stable release:
+```sh
+pip install charset-normalizer -U
+```
+
+If you want a more up-to-date `unicodedata` than the one available in your Python setup:
+```sh
+pip install charset-normalizer[unicode_backport] -U
+```
+
+## 🚀 Basic Usage
+
+### CLI
+This package comes with a CLI.
+
+```
+usage: normalizer [-h] [-v] [-a] [-n] [-m] [-r] [-f] [-t THRESHOLD]
+                  file [file ...]
+
+The Real First Universal Charset Detector. Discover originating encoding used
+on text file. Normalize text to unicode.
+
+positional arguments:
+  files                 File(s) to be analysed
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -v, --verbose         Display complementary information about file if any.
+                        Stdout will contain logs about the detection process.
+  -a, --with-alternative
+                        Output complementary possibilities if any. Top-level
+                        JSON WILL be a list.
+  -n, --normalize       Permit to normalize input file. If not set, program
+                        does not write anything.
+  -m, --minimal         Only output the charset detected to STDOUT. Disabling
+                        JSON output.
+  -r, --replace         Replace file when trying to normalize it instead of
+                        creating a new one.
+  -f, --force           Replace file without asking if you are sure, use this
+                        flag with caution.
+  -t THRESHOLD, --threshold THRESHOLD
+                        Define a custom maximum amount of chaos allowed in
+                        decoded content. 0. <= chaos <= 1.
+  --version             Show version information and exit.
+```
+
+```bash
+normalizer ./data/sample.1.fr.srt
+```
+
+:tada: Since version 1.4.0 the CLI produces an easily usable stdout result in JSON format.
+
+```json
+{
+    "path": "/home/default/projects/charset_normalizer/data/sample.1.fr.srt",
+    "encoding": "cp1252",
+    "encoding_aliases": [
+        "1252",
+        "windows_1252"
+    ],
+    "alternative_encodings": [
+        "cp1254",
+        "cp1256",
+        "cp1258",
+        "iso8859_14",
+        "iso8859_15",
+        "iso8859_16",
+        "iso8859_3",
+        "iso8859_9",
+        "latin_1",
+        "mbcs"
+    ],
+    "language": "French",
+    "alphabets": [
+        "Basic Latin",
+        "Latin-1 Supplement"
+    ],
+    "has_sig_or_bom": false,
+    "chaos": 0.149,
+    "coherence": 97.152,
+    "unicode_path": null,
+    "is_preferred": true
+}
+```
+
+### Python
+*Just print out normalized text*
+```python
+from charset_normalizer import from_path
+
+results = from_path('./my_subtitle.srt')
+
+print(str(results.best()))
+```
+
+*Normalize any text file*
+```python
+from charset_normalizer import normalize
+try:
+    normalize('./my_subtitle.srt') # should write to disk my_subtitle-***.srt
+except IOError as e:
+    print('Sadly, we are unable to perform charset normalization.', str(e))
+```
+
+*Upgrade your code without effort*
+```python
+from charset_normalizer import detect
+```
+
+The above code will behave the same as **chardet**. We ensure that we offer the best (reasonable) BC result possible.
+
+See the docs for advanced usage: [readthedocs.io](https://charset-normalizer.readthedocs.io/en/latest/)
+
+## 😇 Why
+
+When I started using Chardet, I noticed that it was not suited to my expectations, and I wanted to propose a
+reliable alternative using a completely different method. Also! I never back down on a good challenge!
+
+I **don't care** about the **originating charset** encoding, because **two different tables** can
+produce **two identical rendered strings.**
+What I want is to get readable text, the best I can.
+
+In a way, **I'm brute forcing text decoding.** How cool is that? 😎
+
+Don't confuse the package **ftfy** with charset-normalizer or chardet. ftfy's goal is to repair broken Unicode strings, whereas charset-normalizer converts a raw file in an unknown encoding to Unicode.
+
+## 🍰 How
+
+ - Discard all charset encoding tables that could not fit the binary content.
+ - Measure the chaos, or the mess, once opened (by chunks) with a corresponding charset encoding.
+ - Extract the matches with the lowest mess detected.
+ - Additionally, we measure coherence / probe for a language.
+
+**Wait a minute**, what is chaos/mess and coherence according to **YOU?**
+
+*Chaos:* I opened hundreds of text files, **written by humans**, with the wrong encoding table. **I observed**, then
+**I established** some ground rules about **what is obvious** when **it seems like** a mess.
+ I know that my interpretation of what is chaotic is very subjective, feel free to contribute in order to
+ improve or rewrite it.
+
+*Coherence:* For each language on earth, we have computed ranked letter appearance occurrences (the best we can). So I thought
+that intel is worth something here. So I use those records against decoded text to check if I can detect intelligent design.
+
+## ⚡ Known limitations
+
+ - Language detection is unreliable when the text contains two or more languages sharing identical letters. (e.g. HTML (English tags) + Turkish content (sharing Latin characters))
+ - Every charset detector heavily depends on sufficient content. In common cases, do not bother running detection on very tiny content.
+
+## 👤 Contributing
+
+Contributions, issues and feature requests are very much welcome.
+Feel free to check the [issues page](https://github.com/ousret/charset_normalizer/issues) if you want to contribute.
+
+## 📝 License
+
+Copyright © 2019 [Ahmed TAHRI @Ousret](https://github.com/Ousret).
    +This project is [MIT](https://github.com/Ousret/charset_normalizer/blob/master/LICENSE) licensed. + +Characters frequencies used in this project © 2012 [Denny Vrandečić](http://simia.net/letters/) diff --git a/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/RECORD b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/RECORD new file mode 100644 index 00000000..47f8a480 --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/RECORD @@ -0,0 +1,33 @@ +../../../bin/normalizer,sha256=AL3p9DCoxmYVJbiy6hK1C_pjqa9hXn01u9m5ZdhUGhI,294 +charset_normalizer-2.1.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +charset_normalizer-2.1.1.dist-info/LICENSE,sha256=6zGgxaT7Cbik4yBV0lweX5w1iidS_vPNcgIT0cz-4kE,1070 +charset_normalizer-2.1.1.dist-info/METADATA,sha256=C99l12g4d1E9_UiW-mqPCWx7v2M_lYGWxy1GTOjXSsA,11942 +charset_normalizer-2.1.1.dist-info/RECORD,, +charset_normalizer-2.1.1.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +charset_normalizer-2.1.1.dist-info/entry_points.txt,sha256=uYo8aIGLWv8YgWfSna5HnfY_En4pkF1w4bgawNAXzP0,76 +charset_normalizer-2.1.1.dist-info/top_level.txt,sha256=7ASyzePr8_xuZWJsnqJjIBtyV8vhEo0wBCv1MPRRi3Q,19 +charset_normalizer/__init__.py,sha256=jGhhf1IcOgCpZsr593E9fPvjWKnflVqHe_LwkOJjInU,1790 +charset_normalizer/__pycache__/__init__.cpython-38.pyc,, +charset_normalizer/__pycache__/api.cpython-38.pyc,, +charset_normalizer/__pycache__/cd.cpython-38.pyc,, +charset_normalizer/__pycache__/constant.cpython-38.pyc,, +charset_normalizer/__pycache__/legacy.cpython-38.pyc,, +charset_normalizer/__pycache__/md.cpython-38.pyc,, +charset_normalizer/__pycache__/models.cpython-38.pyc,, +charset_normalizer/__pycache__/utils.cpython-38.pyc,, +charset_normalizer/__pycache__/version.cpython-38.pyc,, +charset_normalizer/api.py,sha256=euVPmjAMbjpqhEHPjfKtyy1mK52U0TOUBUQgM_Qy6eE,19191 +charset_normalizer/assets/__init__.py,sha256=r7aakPaRIc2FFG2mw2V8NOTvkl25_euKZ3wPf5SAVa4,15222 +charset_normalizer/assets/__pycache__/__init__.cpython-38.pyc,, +charset_normalizer/cd.py,sha256=Pxdkbn4cy0iZF42KTb1FiWIqqKobuz_fDjGwc6JMNBc,10811 +charset_normalizer/cli/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +charset_normalizer/cli/__pycache__/__init__.cpython-38.pyc,, +charset_normalizer/cli/__pycache__/normalizer.cpython-38.pyc,, +charset_normalizer/cli/normalizer.py,sha256=FmD1RXeMpRBg_mjR0MaJhNUpM2qZ8wz2neAE7AayBeg,9521 +charset_normalizer/constant.py,sha256=NgU-pY8JH2a9lkVT8oKwAFmIUYNKOuSBwZgF9MrlNCM,19157 +charset_normalizer/legacy.py,sha256=XKeZOts_HdYQU_Jb3C9ZfOjY2CiUL132k9_nXer8gig,3384 +charset_normalizer/md.py,sha256=pZP8IVpSC82D8INA9Tf_y0ijJSRI-UIncZvLdfTWEd4,17642 +charset_normalizer/models.py,sha256=i68YdlSLTEI3EEBVXq8TLNAbyyjrLC2OWszc-OBAk9I,13167 +charset_normalizer/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +charset_normalizer/utils.py,sha256=ykOznhcAeH-ODLBWJuI7t1nbwa1SAfN_bDYTCJGyh4U,11771 +charset_normalizer/version.py,sha256=_eh2MA3qS__IajlePQxKBmlw6zaBDvPYlLdEgxgIojw,79 diff --git a/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/WHEEL b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git 
a/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/entry_points.txt b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/entry_points.txt
new file mode 100644
index 00000000..a06d3600
--- /dev/null
+++ b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/entry_points.txt
@@ -0,0 +1,2 @@
+[console_scripts]
+normalizer = charset_normalizer.cli.normalizer:cli_detect
diff --git a/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/top_level.txt b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/top_level.txt
new file mode 100644
index 00000000..66958f0a
--- /dev/null
+++ b/env/lib/python3.8/site-packages/charset_normalizer-2.1.1.dist-info/top_level.txt
@@ -0,0 +1 @@
+charset_normalizer
diff --git a/env/lib/python3.8/site-packages/charset_normalizer/__init__.py b/env/lib/python3.8/site-packages/charset_normalizer/__init__.py
new file mode 100644
index 00000000..2dcaf56f
--- /dev/null
+++ b/env/lib/python3.8/site-packages/charset_normalizer/__init__.py
@@ -0,0 +1,56 @@
+# -*- coding: utf-8 -*-
+"""
+Charset-Normalizer
+~~~~~~~~~~~~~~
+The Real First Universal Charset Detector.
+A library that helps you read text from an unknown charset encoding.
+Motivated by chardet, this package is trying to resolve the issue by taking a new approach.
+All IANA character set names for which the Python core library provides codecs are supported.
+
+Basic usage:
+   >>> from charset_normalizer import from_bytes
+   >>> results = from_bytes('Bсеки човек има право на образование. Oбразованието!'.encode('utf_8'))
+   >>> best_guess = results.best()
+   >>> str(best_guess)
+   'Bсеки човек има право на образование. Oбразованието!'
+
+Other methods and usages are available - see the full documentation
+at <https://charset-normalizer.readthedocs.io/en/latest/>.
+:copyright: (c) 2021 by Ahmed TAHRI
+:license: MIT, see LICENSE for more details.
+""" +import logging + +from .api import from_bytes, from_fp, from_path, normalize +from .legacy import ( + CharsetDetector, + CharsetDoctor, + CharsetNormalizerMatch, + CharsetNormalizerMatches, + detect, +) +from .models import CharsetMatch, CharsetMatches +from .utils import set_logging_handler +from .version import VERSION, __version__ + +__all__ = ( + "from_fp", + "from_path", + "from_bytes", + "normalize", + "detect", + "CharsetMatch", + "CharsetMatches", + "CharsetNormalizerMatch", + "CharsetNormalizerMatches", + "CharsetDetector", + "CharsetDoctor", + "__version__", + "VERSION", + "set_logging_handler", +) + +# Attach a NullHandler to the top level logger by default +# https://docs.python.org/3.3/howto/logging.html#configuring-logging-for-a-library + +logging.getLogger("charset_normalizer").addHandler(logging.NullHandler()) diff --git a/env/lib/python3.8/site-packages/charset_normalizer/api.py b/env/lib/python3.8/site-packages/charset_normalizer/api.py new file mode 100644 index 00000000..72907f94 --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer/api.py @@ -0,0 +1,584 @@ +import logging +import warnings +from os import PathLike +from os.path import basename, splitext +from typing import Any, BinaryIO, List, Optional, Set + +from .cd import ( + coherence_ratio, + encoding_languages, + mb_encoding_languages, + merge_coherence_ratios, +) +from .constant import IANA_SUPPORTED, TOO_BIG_SEQUENCE, TOO_SMALL_SEQUENCE, TRACE +from .md import mess_ratio +from .models import CharsetMatch, CharsetMatches +from .utils import ( + any_specified_encoding, + cut_sequence_chunks, + iana_name, + identify_sig_or_bom, + is_cp_similar, + is_multi_byte_encoding, + should_strip_sig_or_bom, +) + +# Will most likely be controversial +# logging.addLevelName(TRACE, "TRACE") +logger = logging.getLogger("charset_normalizer") +explain_handler = logging.StreamHandler() +explain_handler.setFormatter( + logging.Formatter("%(asctime)s | %(levelname)s | %(message)s") +) + + +def from_bytes( + sequences: bytes, + steps: int = 5, + chunk_size: int = 512, + threshold: float = 0.2, + cp_isolation: Optional[List[str]] = None, + cp_exclusion: Optional[List[str]] = None, + preemptive_behaviour: bool = True, + explain: bool = False, +) -> CharsetMatches: + """ + Given a raw bytes sequence, return the best possibles charset usable to render str objects. + If there is no results, it is a strong indicator that the source is binary/not text. + By default, the process will extract 5 blocs of 512o each to assess the mess and coherence of a given sequence. + And will give up a particular code page after 20% of measured mess. Those criteria are customizable at will. + + The preemptive behavior DOES NOT replace the traditional detection workflow, it prioritize a particular code page + but never take it for granted. Can improve the performance. + + You may want to focus your attention to some code page or/and not others, use cp_isolation and cp_exclusion for that + purpose. + + This function will strip the SIG in the payload/sequence every time except on UTF-16, UTF-32. + By default the library does not setup any handler other than the NullHandler, if you choose to set the 'explain' + toggle to True it will alter the logger configuration to add a StreamHandler that is suitable for debugging. + Custom logging format and handler can be set manually. 
+ """ + + if not isinstance(sequences, (bytearray, bytes)): + raise TypeError( + "Expected object of type bytes or bytearray, got: {0}".format( + type(sequences) + ) + ) + + if explain: + previous_logger_level: int = logger.level + logger.addHandler(explain_handler) + logger.setLevel(TRACE) + + length: int = len(sequences) + + if length == 0: + logger.debug("Encoding detection on empty bytes, assuming utf_8 intention.") + if explain: + logger.removeHandler(explain_handler) + logger.setLevel(previous_logger_level or logging.WARNING) + return CharsetMatches([CharsetMatch(sequences, "utf_8", 0.0, False, [], "")]) + + if cp_isolation is not None: + logger.log( + TRACE, + "cp_isolation is set. use this flag for debugging purpose. " + "limited list of encoding allowed : %s.", + ", ".join(cp_isolation), + ) + cp_isolation = [iana_name(cp, False) for cp in cp_isolation] + else: + cp_isolation = [] + + if cp_exclusion is not None: + logger.log( + TRACE, + "cp_exclusion is set. use this flag for debugging purpose. " + "limited list of encoding excluded : %s.", + ", ".join(cp_exclusion), + ) + cp_exclusion = [iana_name(cp, False) for cp in cp_exclusion] + else: + cp_exclusion = [] + + if length <= (chunk_size * steps): + logger.log( + TRACE, + "override steps (%i) and chunk_size (%i) as content does not fit (%i byte(s) given) parameters.", + steps, + chunk_size, + length, + ) + steps = 1 + chunk_size = length + + if steps > 1 and length / steps < chunk_size: + chunk_size = int(length / steps) + + is_too_small_sequence: bool = len(sequences) < TOO_SMALL_SEQUENCE + is_too_large_sequence: bool = len(sequences) >= TOO_BIG_SEQUENCE + + if is_too_small_sequence: + logger.log( + TRACE, + "Trying to detect encoding from a tiny portion of ({}) byte(s).".format( + length + ), + ) + elif is_too_large_sequence: + logger.log( + TRACE, + "Using lazy str decoding because the payload is quite large, ({}) byte(s).".format( + length + ), + ) + + prioritized_encodings: List[str] = [] + + specified_encoding: Optional[str] = ( + any_specified_encoding(sequences) if preemptive_behaviour else None + ) + + if specified_encoding is not None: + prioritized_encodings.append(specified_encoding) + logger.log( + TRACE, + "Detected declarative mark in sequence. Priority +1 given for %s.", + specified_encoding, + ) + + tested: Set[str] = set() + tested_but_hard_failure: List[str] = [] + tested_but_soft_failure: List[str] = [] + + fallback_ascii: Optional[CharsetMatch] = None + fallback_u8: Optional[CharsetMatch] = None + fallback_specified: Optional[CharsetMatch] = None + + results: CharsetMatches = CharsetMatches() + + sig_encoding, sig_payload = identify_sig_or_bom(sequences) + + if sig_encoding is not None: + prioritized_encodings.append(sig_encoding) + logger.log( + TRACE, + "Detected a SIG or BOM mark on first %i byte(s). 
Priority +1 given for %s.", + len(sig_payload), + sig_encoding, + ) + + prioritized_encodings.append("ascii") + + if "utf_8" not in prioritized_encodings: + prioritized_encodings.append("utf_8") + + for encoding_iana in prioritized_encodings + IANA_SUPPORTED: + + if cp_isolation and encoding_iana not in cp_isolation: + continue + + if cp_exclusion and encoding_iana in cp_exclusion: + continue + + if encoding_iana in tested: + continue + + tested.add(encoding_iana) + + decoded_payload: Optional[str] = None + bom_or_sig_available: bool = sig_encoding == encoding_iana + strip_sig_or_bom: bool = bom_or_sig_available and should_strip_sig_or_bom( + encoding_iana + ) + + if encoding_iana in {"utf_16", "utf_32"} and not bom_or_sig_available: + logger.log( + TRACE, + "Encoding %s wont be tested as-is because it require a BOM. Will try some sub-encoder LE/BE.", + encoding_iana, + ) + continue + + try: + is_multi_byte_decoder: bool = is_multi_byte_encoding(encoding_iana) + except (ModuleNotFoundError, ImportError): + logger.log( + TRACE, + "Encoding %s does not provide an IncrementalDecoder", + encoding_iana, + ) + continue + + try: + if is_too_large_sequence and is_multi_byte_decoder is False: + str( + sequences[: int(50e4)] + if strip_sig_or_bom is False + else sequences[len(sig_payload) : int(50e4)], + encoding=encoding_iana, + ) + else: + decoded_payload = str( + sequences + if strip_sig_or_bom is False + else sequences[len(sig_payload) :], + encoding=encoding_iana, + ) + except (UnicodeDecodeError, LookupError) as e: + if not isinstance(e, LookupError): + logger.log( + TRACE, + "Code page %s does not fit given bytes sequence at ALL. %s", + encoding_iana, + str(e), + ) + tested_but_hard_failure.append(encoding_iana) + continue + + similar_soft_failure_test: bool = False + + for encoding_soft_failed in tested_but_soft_failure: + if is_cp_similar(encoding_iana, encoding_soft_failed): + similar_soft_failure_test = True + break + + if similar_soft_failure_test: + logger.log( + TRACE, + "%s is deemed too similar to code page %s and was consider unsuited already. Continuing!", + encoding_iana, + encoding_soft_failed, + ) + continue + + r_ = range( + 0 if not bom_or_sig_available else len(sig_payload), + length, + int(length / steps), + ) + + multi_byte_bonus: bool = ( + is_multi_byte_decoder + and decoded_payload is not None + and len(decoded_payload) < length + ) + + if multi_byte_bonus: + logger.log( + TRACE, + "Code page %s is a multi byte encoding table and it appear that at least one character " + "was encoded using n-bytes.", + encoding_iana, + ) + + max_chunk_gave_up: int = int(len(r_) / 4) + + max_chunk_gave_up = max(max_chunk_gave_up, 2) + early_stop_count: int = 0 + lazy_str_hard_failure = False + + md_chunks: List[str] = [] + md_ratios = [] + + try: + for chunk in cut_sequence_chunks( + sequences, + encoding_iana, + r_, + chunk_size, + bom_or_sig_available, + strip_sig_or_bom, + sig_payload, + is_multi_byte_decoder, + decoded_payload, + ): + md_chunks.append(chunk) + + md_ratios.append(mess_ratio(chunk, threshold)) + + if md_ratios[-1] >= threshold: + early_stop_count += 1 + + if (early_stop_count >= max_chunk_gave_up) or ( + bom_or_sig_available and strip_sig_or_bom is False + ): + break + except UnicodeDecodeError as e: # Lazy str loading may have missed something there + logger.log( + TRACE, + "LazyStr Loading: After MD chunk decode, code page %s does not fit given bytes sequence at ALL. 
%s", + encoding_iana, + str(e), + ) + early_stop_count = max_chunk_gave_up + lazy_str_hard_failure = True + + # We might want to check the sequence again with the whole content + # Only if initial MD tests passes + if ( + not lazy_str_hard_failure + and is_too_large_sequence + and not is_multi_byte_decoder + ): + try: + sequences[int(50e3) :].decode(encoding_iana, errors="strict") + except UnicodeDecodeError as e: + logger.log( + TRACE, + "LazyStr Loading: After final lookup, code page %s does not fit given bytes sequence at ALL. %s", + encoding_iana, + str(e), + ) + tested_but_hard_failure.append(encoding_iana) + continue + + mean_mess_ratio: float = sum(md_ratios) / len(md_ratios) if md_ratios else 0.0 + if mean_mess_ratio >= threshold or early_stop_count >= max_chunk_gave_up: + tested_but_soft_failure.append(encoding_iana) + logger.log( + TRACE, + "%s was excluded because of initial chaos probing. Gave up %i time(s). " + "Computed mean chaos is %f %%.", + encoding_iana, + early_stop_count, + round(mean_mess_ratio * 100, ndigits=3), + ) + # Preparing those fallbacks in case we got nothing. + if ( + encoding_iana in ["ascii", "utf_8", specified_encoding] + and not lazy_str_hard_failure + ): + fallback_entry = CharsetMatch( + sequences, encoding_iana, threshold, False, [], decoded_payload + ) + if encoding_iana == specified_encoding: + fallback_specified = fallback_entry + elif encoding_iana == "ascii": + fallback_ascii = fallback_entry + else: + fallback_u8 = fallback_entry + continue + + logger.log( + TRACE, + "%s passed initial chaos probing. Mean measured chaos is %f %%", + encoding_iana, + round(mean_mess_ratio * 100, ndigits=3), + ) + + if not is_multi_byte_decoder: + target_languages: List[str] = encoding_languages(encoding_iana) + else: + target_languages = mb_encoding_languages(encoding_iana) + + if target_languages: + logger.log( + TRACE, + "{} should target any language(s) of {}".format( + encoding_iana, str(target_languages) + ), + ) + + cd_ratios = [] + + # We shall skip the CD when its about ASCII + # Most of the time its not relevant to run "language-detection" on it. + if encoding_iana != "ascii": + for chunk in md_chunks: + chunk_languages = coherence_ratio( + chunk, 0.1, ",".join(target_languages) if target_languages else None + ) + + cd_ratios.append(chunk_languages) + + cd_ratios_merged = merge_coherence_ratios(cd_ratios) + + if cd_ratios_merged: + logger.log( + TRACE, + "We detected language {} using {}".format( + cd_ratios_merged, encoding_iana + ), + ) + + results.append( + CharsetMatch( + sequences, + encoding_iana, + mean_mess_ratio, + bom_or_sig_available, + cd_ratios_merged, + decoded_payload, + ) + ) + + if ( + encoding_iana in [specified_encoding, "ascii", "utf_8"] + and mean_mess_ratio < 0.1 + ): + logger.debug( + "Encoding detection: %s is most likely the one.", encoding_iana + ) + if explain: + logger.removeHandler(explain_handler) + logger.setLevel(previous_logger_level) + return CharsetMatches([results[encoding_iana]]) + + if encoding_iana == sig_encoding: + logger.debug( + "Encoding detection: %s is most likely the one as we detected a BOM or SIG within " + "the beginning of the sequence.", + encoding_iana, + ) + if explain: + logger.removeHandler(explain_handler) + logger.setLevel(previous_logger_level) + return CharsetMatches([results[encoding_iana]]) + + if len(results) == 0: + if fallback_u8 or fallback_ascii or fallback_specified: + logger.log( + TRACE, + "Nothing got out of the detection process. 
Using ASCII/UTF-8/Specified fallback.", + ) + + if fallback_specified: + logger.debug( + "Encoding detection: %s will be used as a fallback match", + fallback_specified.encoding, + ) + results.append(fallback_specified) + elif ( + (fallback_u8 and fallback_ascii is None) + or ( + fallback_u8 + and fallback_ascii + and fallback_u8.fingerprint != fallback_ascii.fingerprint + ) + or (fallback_u8 is not None) + ): + logger.debug("Encoding detection: utf_8 will be used as a fallback match") + results.append(fallback_u8) + elif fallback_ascii: + logger.debug("Encoding detection: ascii will be used as a fallback match") + results.append(fallback_ascii) + + if results: + logger.debug( + "Encoding detection: Found %s as plausible (best-candidate) for content. With %i alternatives.", + results.best().encoding, # type: ignore + len(results) - 1, + ) + else: + logger.debug("Encoding detection: Unable to determine any suitable charset.") + + if explain: + logger.removeHandler(explain_handler) + logger.setLevel(previous_logger_level) + + return results + + +def from_fp( + fp: BinaryIO, + steps: int = 5, + chunk_size: int = 512, + threshold: float = 0.20, + cp_isolation: Optional[List[str]] = None, + cp_exclusion: Optional[List[str]] = None, + preemptive_behaviour: bool = True, + explain: bool = False, +) -> CharsetMatches: + """ + Same thing than the function from_bytes but using a file pointer that is already ready. + Will not close the file pointer. + """ + return from_bytes( + fp.read(), + steps, + chunk_size, + threshold, + cp_isolation, + cp_exclusion, + preemptive_behaviour, + explain, + ) + + +def from_path( + path: "PathLike[Any]", + steps: int = 5, + chunk_size: int = 512, + threshold: float = 0.20, + cp_isolation: Optional[List[str]] = None, + cp_exclusion: Optional[List[str]] = None, + preemptive_behaviour: bool = True, + explain: bool = False, +) -> CharsetMatches: + """ + Same thing than the function from_bytes but with one extra step. Opening and reading given file path in binary mode. + Can raise IOError. + """ + with open(path, "rb") as fp: + return from_fp( + fp, + steps, + chunk_size, + threshold, + cp_isolation, + cp_exclusion, + preemptive_behaviour, + explain, + ) + + +def normalize( + path: "PathLike[Any]", + steps: int = 5, + chunk_size: int = 512, + threshold: float = 0.20, + cp_isolation: Optional[List[str]] = None, + cp_exclusion: Optional[List[str]] = None, + preemptive_behaviour: bool = True, +) -> CharsetMatch: + """ + Take a (text-based) file path and try to create another file next to it, this time using UTF-8. 
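+    The new file keeps the original name, with "-" plus the guessed encoding
+    appended to the stem (e.g. sample.txt -> sample-cp1252.txt). Raises IOError
+    when no charset seems to fit.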
+ """ + warnings.warn( + "normalize is deprecated and will be removed in 3.0", + DeprecationWarning, + ) + + results = from_path( + path, + steps, + chunk_size, + threshold, + cp_isolation, + cp_exclusion, + preemptive_behaviour, + ) + + filename = basename(path) + target_extensions = list(splitext(filename)) + + if len(results) == 0: + raise IOError( + 'Unable to normalize "{}", no encoding charset seems to fit.'.format( + filename + ) + ) + + result = results.best() + + target_extensions[0] += "-" + result.encoding # type: ignore + + with open( + "{}".format(str(path).replace(filename, "".join(target_extensions))), "wb" + ) as fp: + fp.write(result.output()) # type: ignore + + return result # type: ignore diff --git a/env/lib/python3.8/site-packages/charset_normalizer/assets/__init__.py b/env/lib/python3.8/site-packages/charset_normalizer/assets/__init__.py new file mode 100644 index 00000000..3c33ba30 --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer/assets/__init__.py @@ -0,0 +1,1122 @@ +# -*- coding: utf-8 -*- +from typing import Dict, List + +FREQUENCIES: Dict[str, List[str]] = { + "English": [ + "e", + "a", + "t", + "i", + "o", + "n", + "s", + "r", + "h", + "l", + "d", + "c", + "u", + "m", + "f", + "p", + "g", + "w", + "y", + "b", + "v", + "k", + "x", + "j", + "z", + "q", + ], + "German": [ + "e", + "n", + "i", + "r", + "s", + "t", + "a", + "d", + "h", + "u", + "l", + "g", + "o", + "c", + "m", + "b", + "f", + "k", + "w", + "z", + "p", + "v", + "ü", + "ä", + "ö", + "j", + ], + "French": [ + "e", + "a", + "s", + "n", + "i", + "t", + "r", + "l", + "u", + "o", + "d", + "c", + "p", + "m", + "é", + "v", + "g", + "f", + "b", + "h", + "q", + "à", + "x", + "è", + "y", + "j", + ], + "Dutch": [ + "e", + "n", + "a", + "i", + "r", + "t", + "o", + "d", + "s", + "l", + "g", + "h", + "v", + "m", + "u", + "k", + "c", + "p", + "b", + "w", + "j", + "z", + "f", + "y", + "x", + "ë", + ], + "Italian": [ + "e", + "i", + "a", + "o", + "n", + "l", + "t", + "r", + "s", + "c", + "d", + "u", + "p", + "m", + "g", + "v", + "f", + "b", + "z", + "h", + "q", + "è", + "à", + "k", + "y", + "ò", + ], + "Polish": [ + "a", + "i", + "o", + "e", + "n", + "r", + "z", + "w", + "s", + "c", + "t", + "k", + "y", + "d", + "p", + "m", + "u", + "l", + "j", + "ł", + "g", + "b", + "h", + "ą", + "ę", + "ó", + ], + "Spanish": [ + "e", + "a", + "o", + "n", + "s", + "r", + "i", + "l", + "d", + "t", + "c", + "u", + "m", + "p", + "b", + "g", + "v", + "f", + "y", + "ó", + "h", + "q", + "í", + "j", + "z", + "á", + ], + "Russian": [ + "о", + "а", + "е", + "и", + "н", + "с", + "т", + "р", + "в", + "л", + "к", + "м", + "д", + "п", + "у", + "г", + "я", + "ы", + "з", + "б", + "й", + "ь", + "ч", + "х", + "ж", + "ц", + ], + "Japanese": [ + "の", + "に", + "る", + "た", + "は", + "ー", + "と", + "し", + "を", + "で", + "て", + "が", + "い", + "ン", + "れ", + "な", + "年", + "ス", + "っ", + "ル", + "か", + "ら", + "あ", + "さ", + "も", + "り", + ], + "Portuguese": [ + "a", + "e", + "o", + "s", + "i", + "r", + "d", + "n", + "t", + "m", + "u", + "c", + "l", + "p", + "g", + "v", + "b", + "f", + "h", + "ã", + "q", + "é", + "ç", + "á", + "z", + "í", + ], + "Swedish": [ + "e", + "a", + "n", + "r", + "t", + "s", + "i", + "l", + "d", + "o", + "m", + "k", + "g", + "v", + "h", + "f", + "u", + "p", + "ä", + "c", + "b", + "ö", + "å", + "y", + "j", + "x", + ], + "Chinese": [ + "的", + "一", + "是", + "不", + "了", + "在", + "人", + "有", + "我", + "他", + "这", + "个", + "们", + "中", + "来", + "上", + "大", + "为", + "和", + "国", + "地", + "到", + "以", + "说", + "时", + "要", + "就", + 
"出", + "会", + ], + "Ukrainian": [ + "о", + "а", + "н", + "і", + "и", + "р", + "в", + "т", + "е", + "с", + "к", + "л", + "у", + "д", + "м", + "п", + "з", + "я", + "ь", + "б", + "г", + "й", + "ч", + "х", + "ц", + "ї", + ], + "Norwegian": [ + "e", + "r", + "n", + "t", + "a", + "s", + "i", + "o", + "l", + "d", + "g", + "k", + "m", + "v", + "f", + "p", + "u", + "b", + "h", + "å", + "y", + "j", + "ø", + "c", + "æ", + "w", + ], + "Finnish": [ + "a", + "i", + "n", + "t", + "e", + "s", + "l", + "o", + "u", + "k", + "ä", + "m", + "r", + "v", + "j", + "h", + "p", + "y", + "d", + "ö", + "g", + "c", + "b", + "f", + "w", + "z", + ], + "Vietnamese": [ + "n", + "h", + "t", + "i", + "c", + "g", + "a", + "o", + "u", + "m", + "l", + "r", + "à", + "đ", + "s", + "e", + "v", + "p", + "b", + "y", + "ư", + "d", + "á", + "k", + "ộ", + "ế", + ], + "Czech": [ + "o", + "e", + "a", + "n", + "t", + "s", + "i", + "l", + "v", + "r", + "k", + "d", + "u", + "m", + "p", + "í", + "c", + "h", + "z", + "á", + "y", + "j", + "b", + "ě", + "é", + "ř", + ], + "Hungarian": [ + "e", + "a", + "t", + "l", + "s", + "n", + "k", + "r", + "i", + "o", + "z", + "á", + "é", + "g", + "m", + "b", + "y", + "v", + "d", + "h", + "u", + "p", + "j", + "ö", + "f", + "c", + ], + "Korean": [ + "이", + "다", + "에", + "의", + "는", + "로", + "하", + "을", + "가", + "고", + "지", + "서", + "한", + "은", + "기", + "으", + "년", + "대", + "사", + "시", + "를", + "리", + "도", + "인", + "스", + "일", + ], + "Indonesian": [ + "a", + "n", + "e", + "i", + "r", + "t", + "u", + "s", + "d", + "k", + "m", + "l", + "g", + "p", + "b", + "o", + "h", + "y", + "j", + "c", + "w", + "f", + "v", + "z", + "x", + "q", + ], + "Turkish": [ + "a", + "e", + "i", + "n", + "r", + "l", + "ı", + "k", + "d", + "t", + "s", + "m", + "y", + "u", + "o", + "b", + "ü", + "ş", + "v", + "g", + "z", + "h", + "c", + "p", + "ç", + "ğ", + ], + "Romanian": [ + "e", + "i", + "a", + "r", + "n", + "t", + "u", + "l", + "o", + "c", + "s", + "d", + "p", + "m", + "ă", + "f", + "v", + "î", + "g", + "b", + "ș", + "ț", + "z", + "h", + "â", + "j", + ], + "Farsi": [ + "ا", + "ی", + "ر", + "د", + "ن", + "ه", + "و", + "م", + "ت", + "ب", + "س", + "ل", + "ک", + "ش", + "ز", + "ف", + "گ", + "ع", + "خ", + "ق", + "ج", + "آ", + "پ", + "ح", + "ط", + "ص", + ], + "Arabic": [ + "ا", + "ل", + "ي", + "م", + "و", + "ن", + "ر", + "ت", + "ب", + "ة", + "ع", + "د", + "س", + "ف", + "ه", + "ك", + "ق", + "أ", + "ح", + "ج", + "ش", + "ط", + "ص", + "ى", + "خ", + "إ", + ], + "Danish": [ + "e", + "r", + "n", + "t", + "a", + "i", + "s", + "d", + "l", + "o", + "g", + "m", + "k", + "f", + "v", + "u", + "b", + "h", + "p", + "å", + "y", + "ø", + "æ", + "c", + "j", + "w", + ], + "Serbian": [ + "а", + "и", + "о", + "е", + "н", + "р", + "с", + "у", + "т", + "к", + "ј", + "в", + "д", + "м", + "п", + "л", + "г", + "з", + "б", + "a", + "i", + "e", + "o", + "n", + "ц", + "ш", + ], + "Lithuanian": [ + "i", + "a", + "s", + "o", + "r", + "e", + "t", + "n", + "u", + "k", + "m", + "l", + "p", + "v", + "d", + "j", + "g", + "ė", + "b", + "y", + "ų", + "š", + "ž", + "c", + "ą", + "į", + ], + "Slovene": [ + "e", + "a", + "i", + "o", + "n", + "r", + "s", + "l", + "t", + "j", + "v", + "k", + "d", + "p", + "m", + "u", + "z", + "b", + "g", + "h", + "č", + "c", + "š", + "ž", + "f", + "y", + ], + "Slovak": [ + "o", + "a", + "e", + "n", + "i", + "r", + "v", + "t", + "s", + "l", + "k", + "d", + "m", + "p", + "u", + "c", + "h", + "j", + "b", + "z", + "á", + "y", + "ý", + "í", + "č", + "é", + ], + "Hebrew": [ + "י", + "ו", + "ה", + "ל", + "ר", + "ב", + "ת", + "מ", + "א", + "ש", + "נ", + 
"ע", + "ם", + "ד", + "ק", + "ח", + "פ", + "ס", + "כ", + "ג", + "ט", + "צ", + "ן", + "ז", + "ך", + ], + "Bulgarian": [ + "а", + "и", + "о", + "е", + "н", + "т", + "р", + "с", + "в", + "л", + "к", + "д", + "п", + "м", + "з", + "г", + "я", + "ъ", + "у", + "б", + "ч", + "ц", + "й", + "ж", + "щ", + "х", + ], + "Croatian": [ + "a", + "i", + "o", + "e", + "n", + "r", + "j", + "s", + "t", + "u", + "k", + "l", + "v", + "d", + "m", + "p", + "g", + "z", + "b", + "c", + "č", + "h", + "š", + "ž", + "ć", + "f", + ], + "Hindi": [ + "क", + "र", + "स", + "न", + "त", + "म", + "ह", + "प", + "य", + "ल", + "व", + "ज", + "द", + "ग", + "ब", + "श", + "ट", + "अ", + "ए", + "थ", + "भ", + "ड", + "च", + "ध", + "ष", + "इ", + ], + "Estonian": [ + "a", + "i", + "e", + "s", + "t", + "l", + "u", + "n", + "o", + "k", + "r", + "d", + "m", + "v", + "g", + "p", + "j", + "h", + "ä", + "b", + "õ", + "ü", + "f", + "c", + "ö", + "y", + ], + "Simple English": [ + "e", + "a", + "t", + "i", + "o", + "n", + "s", + "r", + "h", + "l", + "d", + "c", + "m", + "u", + "f", + "p", + "g", + "w", + "b", + "y", + "v", + "k", + "j", + "x", + "z", + "q", + ], + "Thai": [ + "า", + "น", + "ร", + "อ", + "ก", + "เ", + "ง", + "ม", + "ย", + "ล", + "ว", + "ด", + "ท", + "ส", + "ต", + "ะ", + "ป", + "บ", + "ค", + "ห", + "แ", + "จ", + "พ", + "ช", + "ข", + "ใ", + ], + "Greek": [ + "α", + "τ", + "ο", + "ι", + "ε", + "ν", + "ρ", + "σ", + "κ", + "η", + "π", + "ς", + "υ", + "μ", + "λ", + "ί", + "ό", + "ά", + "γ", + "έ", + "δ", + "ή", + "ω", + "χ", + "θ", + "ύ", + ], + "Tamil": [ + "க", + "த", + "ப", + "ட", + "ர", + "ம", + "ல", + "ன", + "வ", + "ற", + "ய", + "ள", + "ச", + "ந", + "இ", + "ண", + "அ", + "ஆ", + "ழ", + "ங", + "எ", + "உ", + "ஒ", + "ஸ", + ], + "Classical Chinese": [ + "之", + "年", + "為", + "也", + "以", + "一", + "人", + "其", + "者", + "國", + "有", + "二", + "十", + "於", + "曰", + "三", + "不", + "大", + "而", + "子", + "中", + "五", + "四", + ], + "Kazakh": [ + "а", + "ы", + "е", + "н", + "т", + "р", + "л", + "і", + "д", + "с", + "м", + "қ", + "к", + "о", + "б", + "и", + "у", + "ғ", + "ж", + "ң", + "з", + "ш", + "й", + "п", + "г", + "ө", + ], +} diff --git a/env/lib/python3.8/site-packages/charset_normalizer/cd.py b/env/lib/python3.8/site-packages/charset_normalizer/cd.py new file mode 100644 index 00000000..ee4b7424 --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer/cd.py @@ -0,0 +1,339 @@ +import importlib +from codecs import IncrementalDecoder +from collections import Counter +from functools import lru_cache +from typing import Counter as TypeCounter, Dict, List, Optional, Tuple + +from .assets import FREQUENCIES +from .constant import KO_NAMES, LANGUAGE_SUPPORTED_COUNT, TOO_SMALL_SEQUENCE, ZH_NAMES +from .md import is_suspiciously_successive_range +from .models import CoherenceMatches +from .utils import ( + is_accentuated, + is_latin, + is_multi_byte_encoding, + is_unicode_range_secondary, + unicode_range, +) + + +def encoding_unicode_range(iana_name: str) -> List[str]: + """ + Return associated unicode ranges in a single byte code page. 
+ """ + if is_multi_byte_encoding(iana_name): + raise IOError("Function not supported on multi-byte code page") + + decoder = importlib.import_module( + "encodings.{}".format(iana_name) + ).IncrementalDecoder + + p: IncrementalDecoder = decoder(errors="ignore") + seen_ranges: Dict[str, int] = {} + character_count: int = 0 + + for i in range(0x40, 0xFF): + chunk: str = p.decode(bytes([i])) + + if chunk: + character_range: Optional[str] = unicode_range(chunk) + + if character_range is None: + continue + + if is_unicode_range_secondary(character_range) is False: + if character_range not in seen_ranges: + seen_ranges[character_range] = 0 + seen_ranges[character_range] += 1 + character_count += 1 + + return sorted( + [ + character_range + for character_range in seen_ranges + if seen_ranges[character_range] / character_count >= 0.15 + ] + ) + + +def unicode_range_languages(primary_range: str) -> List[str]: + """ + Return inferred languages used with a unicode range. + """ + languages: List[str] = [] + + for language, characters in FREQUENCIES.items(): + for character in characters: + if unicode_range(character) == primary_range: + languages.append(language) + break + + return languages + + +@lru_cache() +def encoding_languages(iana_name: str) -> List[str]: + """ + Single-byte encoding language association. Some code page are heavily linked to particular language(s). + This function does the correspondence. + """ + unicode_ranges: List[str] = encoding_unicode_range(iana_name) + primary_range: Optional[str] = None + + for specified_range in unicode_ranges: + if "Latin" not in specified_range: + primary_range = specified_range + break + + if primary_range is None: + return ["Latin Based"] + + return unicode_range_languages(primary_range) + + +@lru_cache() +def mb_encoding_languages(iana_name: str) -> List[str]: + """ + Multi-byte encoding language association. Some code page are heavily linked to particular language(s). + This function does the correspondence. + """ + if ( + iana_name.startswith("shift_") + or iana_name.startswith("iso2022_jp") + or iana_name.startswith("euc_j") + or iana_name == "cp932" + ): + return ["Japanese"] + if iana_name.startswith("gb") or iana_name in ZH_NAMES: + return ["Chinese", "Classical Chinese"] + if iana_name.startswith("iso2022_kr") or iana_name in KO_NAMES: + return ["Korean"] + + return [] + + +@lru_cache(maxsize=LANGUAGE_SUPPORTED_COUNT) +def get_target_features(language: str) -> Tuple[bool, bool]: + """ + Determine main aspects from a supported language if it contains accents and if is pure Latin. + """ + target_have_accents: bool = False + target_pure_latin: bool = True + + for character in FREQUENCIES[language]: + if not target_have_accents and is_accentuated(character): + target_have_accents = True + if target_pure_latin and is_latin(character) is False: + target_pure_latin = False + + return target_have_accents, target_pure_latin + + +def alphabet_languages( + characters: List[str], ignore_non_latin: bool = False +) -> List[str]: + """ + Return associated languages associated to given characters. 
+ """ + languages: List[Tuple[str, float]] = [] + + source_have_accents = any(is_accentuated(character) for character in characters) + + for language, language_characters in FREQUENCIES.items(): + + target_have_accents, target_pure_latin = get_target_features(language) + + if ignore_non_latin and target_pure_latin is False: + continue + + if target_have_accents is False and source_have_accents: + continue + + character_count: int = len(language_characters) + + character_match_count: int = len( + [c for c in language_characters if c in characters] + ) + + ratio: float = character_match_count / character_count + + if ratio >= 0.2: + languages.append((language, ratio)) + + languages = sorted(languages, key=lambda x: x[1], reverse=True) + + return [compatible_language[0] for compatible_language in languages] + + +def characters_popularity_compare( + language: str, ordered_characters: List[str] +) -> float: + """ + Determine if a ordered characters list (by occurrence from most appearance to rarest) match a particular language. + The result is a ratio between 0. (absolutely no correspondence) and 1. (near perfect fit). + Beware that is function is not strict on the match in order to ease the detection. (Meaning close match is 1.) + """ + if language not in FREQUENCIES: + raise ValueError("{} not available".format(language)) + + character_approved_count: int = 0 + FREQUENCIES_language_set = set(FREQUENCIES[language]) + + for character in ordered_characters: + if character not in FREQUENCIES_language_set: + continue + + characters_before_source: List[str] = FREQUENCIES[language][ + 0 : FREQUENCIES[language].index(character) + ] + characters_after_source: List[str] = FREQUENCIES[language][ + FREQUENCIES[language].index(character) : + ] + characters_before: List[str] = ordered_characters[ + 0 : ordered_characters.index(character) + ] + characters_after: List[str] = ordered_characters[ + ordered_characters.index(character) : + ] + + before_match_count: int = len( + set(characters_before) & set(characters_before_source) + ) + + after_match_count: int = len( + set(characters_after) & set(characters_after_source) + ) + + if len(characters_before_source) == 0 and before_match_count <= 4: + character_approved_count += 1 + continue + + if len(characters_after_source) == 0 and after_match_count <= 4: + character_approved_count += 1 + continue + + if ( + before_match_count / len(characters_before_source) >= 0.4 + or after_match_count / len(characters_after_source) >= 0.4 + ): + character_approved_count += 1 + continue + + return character_approved_count / len(ordered_characters) + + +def alpha_unicode_split(decoded_sequence: str) -> List[str]: + """ + Given a decoded text sequence, return a list of str. Unicode range / alphabet separation. + Ex. a text containing English/Latin with a bit a Hebrew will return two items in the resulting list; + One containing the latin letters and the other hebrew. 
+ """ + layers: Dict[str, str] = {} + + for character in decoded_sequence: + if character.isalpha() is False: + continue + + character_range: Optional[str] = unicode_range(character) + + if character_range is None: + continue + + layer_target_range: Optional[str] = None + + for discovered_range in layers: + if ( + is_suspiciously_successive_range(discovered_range, character_range) + is False + ): + layer_target_range = discovered_range + break + + if layer_target_range is None: + layer_target_range = character_range + + if layer_target_range not in layers: + layers[layer_target_range] = character.lower() + continue + + layers[layer_target_range] += character.lower() + + return list(layers.values()) + + +def merge_coherence_ratios(results: List[CoherenceMatches]) -> CoherenceMatches: + """ + This function merge results previously given by the function coherence_ratio. + The return type is the same as coherence_ratio. + """ + per_language_ratios: Dict[str, List[float]] = {} + for result in results: + for sub_result in result: + language, ratio = sub_result + if language not in per_language_ratios: + per_language_ratios[language] = [ratio] + continue + per_language_ratios[language].append(ratio) + + merge = [ + ( + language, + round( + sum(per_language_ratios[language]) / len(per_language_ratios[language]), + 4, + ), + ) + for language in per_language_ratios + ] + + return sorted(merge, key=lambda x: x[1], reverse=True) + + +@lru_cache(maxsize=2048) +def coherence_ratio( + decoded_sequence: str, threshold: float = 0.1, lg_inclusion: Optional[str] = None +) -> CoherenceMatches: + """ + Detect ANY language that can be identified in given sequence. The sequence will be analysed by layers. + A layer = Character extraction by alphabets/ranges. + """ + + results: List[Tuple[str, float]] = [] + ignore_non_latin: bool = False + + sufficient_match_count: int = 0 + + lg_inclusion_list = lg_inclusion.split(",") if lg_inclusion is not None else [] + if "Latin Based" in lg_inclusion_list: + ignore_non_latin = True + lg_inclusion_list.remove("Latin Based") + + for layer in alpha_unicode_split(decoded_sequence): + sequence_frequencies: TypeCounter[str] = Counter(layer) + most_common = sequence_frequencies.most_common() + + character_count: int = sum(o for c, o in most_common) + + if character_count <= TOO_SMALL_SEQUENCE: + continue + + popular_character_ordered: List[str] = [c for c, o in most_common] + + for language in lg_inclusion_list or alphabet_languages( + popular_character_ordered, ignore_non_latin + ): + ratio: float = characters_popularity_compare( + language, popular_character_ordered + ) + + if ratio < threshold: + continue + elif ratio >= 0.8: + sufficient_match_count += 1 + + results.append((language, round(ratio, 4))) + + if sufficient_match_count >= 3: + break + + return sorted(results, key=lambda x: x[1], reverse=True) diff --git a/env/lib/python3.8/site-packages/charset_normalizer/cli/__init__.py b/env/lib/python3.8/site-packages/charset_normalizer/cli/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/charset_normalizer/cli/normalizer.py b/env/lib/python3.8/site-packages/charset_normalizer/cli/normalizer.py new file mode 100644 index 00000000..b8b652a5 --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer/cli/normalizer.py @@ -0,0 +1,295 @@ +import argparse +import sys +from json import dumps +from os.path import abspath +from platform import python_version +from typing import List, Optional + +try: + from unicodedata2 
import unidata_version +except ImportError: + from unicodedata import unidata_version + +from charset_normalizer import from_fp +from charset_normalizer.models import CliDetectionResult +from charset_normalizer.version import __version__ + + +def query_yes_no(question: str, default: str = "yes") -> bool: + """Ask a yes/no question via input() and return their answer. + + "question" is a string that is presented to the user. + "default" is the presumed answer if the user just hits . + It must be "yes" (the default), "no" or None (meaning + an answer is required of the user). + + The "answer" return value is True for "yes" or False for "no". + + Credit goes to (c) https://stackoverflow.com/questions/3041986/apt-command-line-interface-like-yes-no-input + """ + valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False} + if default is None: + prompt = " [y/n] " + elif default == "yes": + prompt = " [Y/n] " + elif default == "no": + prompt = " [y/N] " + else: + raise ValueError("invalid default answer: '%s'" % default) + + while True: + sys.stdout.write(question + prompt) + choice = input().lower() + if default is not None and choice == "": + return valid[default] + elif choice in valid: + return valid[choice] + else: + sys.stdout.write("Please respond with 'yes' or 'no' " "(or 'y' or 'n').\n") + + +def cli_detect(argv: Optional[List[str]] = None) -> int: + """ + CLI assistant using ARGV and ArgumentParser + :param argv: + :return: 0 if everything is fine, anything else equal trouble + """ + parser = argparse.ArgumentParser( + description="The Real First Universal Charset Detector. " + "Discover originating encoding used on text file. " + "Normalize text to unicode." + ) + + parser.add_argument( + "files", type=argparse.FileType("rb"), nargs="+", help="File(s) to be analysed" + ) + parser.add_argument( + "-v", + "--verbose", + action="store_true", + default=False, + dest="verbose", + help="Display complementary information about file if any. " + "Stdout will contain logs about the detection process.", + ) + parser.add_argument( + "-a", + "--with-alternative", + action="store_true", + default=False, + dest="alternatives", + help="Output complementary possibilities if any. Top-level JSON WILL be a list.", + ) + parser.add_argument( + "-n", + "--normalize", + action="store_true", + default=False, + dest="normalize", + help="Permit to normalize input file. If not set, program does not write anything.", + ) + parser.add_argument( + "-m", + "--minimal", + action="store_true", + default=False, + dest="minimal", + help="Only output the charset detected to STDOUT. Disabling JSON output.", + ) + parser.add_argument( + "-r", + "--replace", + action="store_true", + default=False, + dest="replace", + help="Replace file when trying to normalize it instead of creating a new one.", + ) + parser.add_argument( + "-f", + "--force", + action="store_true", + default=False, + dest="force", + help="Replace file without asking if you are sure, use this flag with caution.", + ) + parser.add_argument( + "-t", + "--threshold", + action="store", + default=0.2, + type=float, + dest="threshold", + help="Define a custom maximum amount of chaos allowed in decoded content. 0. 
<= chaos <= 1.", + ) + parser.add_argument( + "--version", + action="version", + version="Charset-Normalizer {} - Python {} - Unicode {}".format( + __version__, python_version(), unidata_version + ), + help="Show version information and exit.", + ) + + args = parser.parse_args(argv) + + if args.replace is True and args.normalize is False: + print("Use --replace in addition of --normalize only.", file=sys.stderr) + return 1 + + if args.force is True and args.replace is False: + print("Use --force in addition of --replace only.", file=sys.stderr) + return 1 + + if args.threshold < 0.0 or args.threshold > 1.0: + print("--threshold VALUE should be between 0. AND 1.", file=sys.stderr) + return 1 + + x_ = [] + + for my_file in args.files: + + matches = from_fp(my_file, threshold=args.threshold, explain=args.verbose) + + best_guess = matches.best() + + if best_guess is None: + print( + 'Unable to identify originating encoding for "{}". {}'.format( + my_file.name, + "Maybe try increasing maximum amount of chaos." + if args.threshold < 1.0 + else "", + ), + file=sys.stderr, + ) + x_.append( + CliDetectionResult( + abspath(my_file.name), + None, + [], + [], + "Unknown", + [], + False, + 1.0, + 0.0, + None, + True, + ) + ) + else: + x_.append( + CliDetectionResult( + abspath(my_file.name), + best_guess.encoding, + best_guess.encoding_aliases, + [ + cp + for cp in best_guess.could_be_from_charset + if cp != best_guess.encoding + ], + best_guess.language, + best_guess.alphabets, + best_guess.bom, + best_guess.percent_chaos, + best_guess.percent_coherence, + None, + True, + ) + ) + + if len(matches) > 1 and args.alternatives: + for el in matches: + if el != best_guess: + x_.append( + CliDetectionResult( + abspath(my_file.name), + el.encoding, + el.encoding_aliases, + [ + cp + for cp in el.could_be_from_charset + if cp != el.encoding + ], + el.language, + el.alphabets, + el.bom, + el.percent_chaos, + el.percent_coherence, + None, + False, + ) + ) + + if args.normalize is True: + + if best_guess.encoding.startswith("utf") is True: + print( + '"{}" file does not need to be normalized, as it already came from unicode.'.format( + my_file.name + ), + file=sys.stderr, + ) + if my_file.closed is False: + my_file.close() + continue + + o_: List[str] = my_file.name.split(".") + + if args.replace is False: + o_.insert(-1, best_guess.encoding) + if my_file.closed is False: + my_file.close() + elif ( + args.force is False + and query_yes_no( + 'Are you sure to normalize "{}" by replacing it ?'.format( + my_file.name + ), + "no", + ) + is False + ): + if my_file.closed is False: + my_file.close() + continue + + try: + x_[0].unicode_path = abspath("./{}".format(".".join(o_))) + + with open(x_[0].unicode_path, "w", encoding="utf-8") as fp: + fp.write(str(best_guess)) + except IOError as e: + print(str(e), file=sys.stderr) + if my_file.closed is False: + my_file.close() + return 2 + + if my_file.closed is False: + my_file.close() + + if args.minimal is False: + print( + dumps( + [el.__dict__ for el in x_] if len(x_) > 1 else x_[0].__dict__, + ensure_ascii=True, + indent=4, + ) + ) + else: + for my_file in args.files: + print( + ", ".join( + [ + el.encoding or "undefined" + for el in x_ + if el.path == abspath(my_file.name) + ] + ) + ) + + return 0 + + +if __name__ == "__main__": + cli_detect() diff --git a/env/lib/python3.8/site-packages/charset_normalizer/constant.py b/env/lib/python3.8/site-packages/charset_normalizer/constant.py new file mode 100644 index 00000000..ac840c46 --- /dev/null +++ 
b/env/lib/python3.8/site-packages/charset_normalizer/constant.py @@ -0,0 +1,497 @@ +from codecs import BOM_UTF8, BOM_UTF16_BE, BOM_UTF16_LE, BOM_UTF32_BE, BOM_UTF32_LE +from encodings.aliases import aliases +from re import IGNORECASE, compile as re_compile +from typing import Dict, List, Set, Union + +from .assets import FREQUENCIES + +# Contain for each eligible encoding a list of/item bytes SIG/BOM +ENCODING_MARKS: Dict[str, Union[bytes, List[bytes]]] = { + "utf_8": BOM_UTF8, + "utf_7": [ + b"\x2b\x2f\x76\x38", + b"\x2b\x2f\x76\x39", + b"\x2b\x2f\x76\x2b", + b"\x2b\x2f\x76\x2f", + b"\x2b\x2f\x76\x38\x2d", + ], + "gb18030": b"\x84\x31\x95\x33", + "utf_32": [BOM_UTF32_BE, BOM_UTF32_LE], + "utf_16": [BOM_UTF16_BE, BOM_UTF16_LE], +} + +TOO_SMALL_SEQUENCE: int = 32 +TOO_BIG_SEQUENCE: int = int(10e6) + +UTF8_MAXIMAL_ALLOCATION: int = 1112064 + +UNICODE_RANGES_COMBINED: Dict[str, range] = { + "Control character": range(31 + 1), + "Basic Latin": range(32, 127 + 1), + "Latin-1 Supplement": range(128, 255 + 1), + "Latin Extended-A": range(256, 383 + 1), + "Latin Extended-B": range(384, 591 + 1), + "IPA Extensions": range(592, 687 + 1), + "Spacing Modifier Letters": range(688, 767 + 1), + "Combining Diacritical Marks": range(768, 879 + 1), + "Greek and Coptic": range(880, 1023 + 1), + "Cyrillic": range(1024, 1279 + 1), + "Cyrillic Supplement": range(1280, 1327 + 1), + "Armenian": range(1328, 1423 + 1), + "Hebrew": range(1424, 1535 + 1), + "Arabic": range(1536, 1791 + 1), + "Syriac": range(1792, 1871 + 1), + "Arabic Supplement": range(1872, 1919 + 1), + "Thaana": range(1920, 1983 + 1), + "NKo": range(1984, 2047 + 1), + "Samaritan": range(2048, 2111 + 1), + "Mandaic": range(2112, 2143 + 1), + "Syriac Supplement": range(2144, 2159 + 1), + "Arabic Extended-A": range(2208, 2303 + 1), + "Devanagari": range(2304, 2431 + 1), + "Bengali": range(2432, 2559 + 1), + "Gurmukhi": range(2560, 2687 + 1), + "Gujarati": range(2688, 2815 + 1), + "Oriya": range(2816, 2943 + 1), + "Tamil": range(2944, 3071 + 1), + "Telugu": range(3072, 3199 + 1), + "Kannada": range(3200, 3327 + 1), + "Malayalam": range(3328, 3455 + 1), + "Sinhala": range(3456, 3583 + 1), + "Thai": range(3584, 3711 + 1), + "Lao": range(3712, 3839 + 1), + "Tibetan": range(3840, 4095 + 1), + "Myanmar": range(4096, 4255 + 1), + "Georgian": range(4256, 4351 + 1), + "Hangul Jamo": range(4352, 4607 + 1), + "Ethiopic": range(4608, 4991 + 1), + "Ethiopic Supplement": range(4992, 5023 + 1), + "Cherokee": range(5024, 5119 + 1), + "Unified Canadian Aboriginal Syllabics": range(5120, 5759 + 1), + "Ogham": range(5760, 5791 + 1), + "Runic": range(5792, 5887 + 1), + "Tagalog": range(5888, 5919 + 1), + "Hanunoo": range(5920, 5951 + 1), + "Buhid": range(5952, 5983 + 1), + "Tagbanwa": range(5984, 6015 + 1), + "Khmer": range(6016, 6143 + 1), + "Mongolian": range(6144, 6319 + 1), + "Unified Canadian Aboriginal Syllabics Extended": range(6320, 6399 + 1), + "Limbu": range(6400, 6479 + 1), + "Tai Le": range(6480, 6527 + 1), + "New Tai Lue": range(6528, 6623 + 1), + "Khmer Symbols": range(6624, 6655 + 1), + "Buginese": range(6656, 6687 + 1), + "Tai Tham": range(6688, 6831 + 1), + "Combining Diacritical Marks Extended": range(6832, 6911 + 1), + "Balinese": range(6912, 7039 + 1), + "Sundanese": range(7040, 7103 + 1), + "Batak": range(7104, 7167 + 1), + "Lepcha": range(7168, 7247 + 1), + "Ol Chiki": range(7248, 7295 + 1), + "Cyrillic Extended C": range(7296, 7311 + 1), + "Sundanese Supplement": range(7360, 7375 + 1), + "Vedic Extensions": range(7376, 7423 + 1), + "Phonetic 
Extensions": range(7424, 7551 + 1), + "Phonetic Extensions Supplement": range(7552, 7615 + 1), + "Combining Diacritical Marks Supplement": range(7616, 7679 + 1), + "Latin Extended Additional": range(7680, 7935 + 1), + "Greek Extended": range(7936, 8191 + 1), + "General Punctuation": range(8192, 8303 + 1), + "Superscripts and Subscripts": range(8304, 8351 + 1), + "Currency Symbols": range(8352, 8399 + 1), + "Combining Diacritical Marks for Symbols": range(8400, 8447 + 1), + "Letterlike Symbols": range(8448, 8527 + 1), + "Number Forms": range(8528, 8591 + 1), + "Arrows": range(8592, 8703 + 1), + "Mathematical Operators": range(8704, 8959 + 1), + "Miscellaneous Technical": range(8960, 9215 + 1), + "Control Pictures": range(9216, 9279 + 1), + "Optical Character Recognition": range(9280, 9311 + 1), + "Enclosed Alphanumerics": range(9312, 9471 + 1), + "Box Drawing": range(9472, 9599 + 1), + "Block Elements": range(9600, 9631 + 1), + "Geometric Shapes": range(9632, 9727 + 1), + "Miscellaneous Symbols": range(9728, 9983 + 1), + "Dingbats": range(9984, 10175 + 1), + "Miscellaneous Mathematical Symbols-A": range(10176, 10223 + 1), + "Supplemental Arrows-A": range(10224, 10239 + 1), + "Braille Patterns": range(10240, 10495 + 1), + "Supplemental Arrows-B": range(10496, 10623 + 1), + "Miscellaneous Mathematical Symbols-B": range(10624, 10751 + 1), + "Supplemental Mathematical Operators": range(10752, 11007 + 1), + "Miscellaneous Symbols and Arrows": range(11008, 11263 + 1), + "Glagolitic": range(11264, 11359 + 1), + "Latin Extended-C": range(11360, 11391 + 1), + "Coptic": range(11392, 11519 + 1), + "Georgian Supplement": range(11520, 11567 + 1), + "Tifinagh": range(11568, 11647 + 1), + "Ethiopic Extended": range(11648, 11743 + 1), + "Cyrillic Extended-A": range(11744, 11775 + 1), + "Supplemental Punctuation": range(11776, 11903 + 1), + "CJK Radicals Supplement": range(11904, 12031 + 1), + "Kangxi Radicals": range(12032, 12255 + 1), + "Ideographic Description Characters": range(12272, 12287 + 1), + "CJK Symbols and Punctuation": range(12288, 12351 + 1), + "Hiragana": range(12352, 12447 + 1), + "Katakana": range(12448, 12543 + 1), + "Bopomofo": range(12544, 12591 + 1), + "Hangul Compatibility Jamo": range(12592, 12687 + 1), + "Kanbun": range(12688, 12703 + 1), + "Bopomofo Extended": range(12704, 12735 + 1), + "CJK Strokes": range(12736, 12783 + 1), + "Katakana Phonetic Extensions": range(12784, 12799 + 1), + "Enclosed CJK Letters and Months": range(12800, 13055 + 1), + "CJK Compatibility": range(13056, 13311 + 1), + "CJK Unified Ideographs Extension A": range(13312, 19903 + 1), + "Yijing Hexagram Symbols": range(19904, 19967 + 1), + "CJK Unified Ideographs": range(19968, 40959 + 1), + "Yi Syllables": range(40960, 42127 + 1), + "Yi Radicals": range(42128, 42191 + 1), + "Lisu": range(42192, 42239 + 1), + "Vai": range(42240, 42559 + 1), + "Cyrillic Extended-B": range(42560, 42655 + 1), + "Bamum": range(42656, 42751 + 1), + "Modifier Tone Letters": range(42752, 42783 + 1), + "Latin Extended-D": range(42784, 43007 + 1), + "Syloti Nagri": range(43008, 43055 + 1), + "Common Indic Number Forms": range(43056, 43071 + 1), + "Phags-pa": range(43072, 43135 + 1), + "Saurashtra": range(43136, 43231 + 1), + "Devanagari Extended": range(43232, 43263 + 1), + "Kayah Li": range(43264, 43311 + 1), + "Rejang": range(43312, 43359 + 1), + "Hangul Jamo Extended-A": range(43360, 43391 + 1), + "Javanese": range(43392, 43487 + 1), + "Myanmar Extended-B": range(43488, 43519 + 1), + "Cham": range(43520, 43615 + 1), + "Myanmar 
Extended-A": range(43616, 43647 + 1), + "Tai Viet": range(43648, 43743 + 1), + "Meetei Mayek Extensions": range(43744, 43775 + 1), + "Ethiopic Extended-A": range(43776, 43823 + 1), + "Latin Extended-E": range(43824, 43887 + 1), + "Cherokee Supplement": range(43888, 43967 + 1), + "Meetei Mayek": range(43968, 44031 + 1), + "Hangul Syllables": range(44032, 55215 + 1), + "Hangul Jamo Extended-B": range(55216, 55295 + 1), + "High Surrogates": range(55296, 56191 + 1), + "High Private Use Surrogates": range(56192, 56319 + 1), + "Low Surrogates": range(56320, 57343 + 1), + "Private Use Area": range(57344, 63743 + 1), + "CJK Compatibility Ideographs": range(63744, 64255 + 1), + "Alphabetic Presentation Forms": range(64256, 64335 + 1), + "Arabic Presentation Forms-A": range(64336, 65023 + 1), + "Variation Selectors": range(65024, 65039 + 1), + "Vertical Forms": range(65040, 65055 + 1), + "Combining Half Marks": range(65056, 65071 + 1), + "CJK Compatibility Forms": range(65072, 65103 + 1), + "Small Form Variants": range(65104, 65135 + 1), + "Arabic Presentation Forms-B": range(65136, 65279 + 1), + "Halfwidth and Fullwidth Forms": range(65280, 65519 + 1), + "Specials": range(65520, 65535 + 1), + "Linear B Syllabary": range(65536, 65663 + 1), + "Linear B Ideograms": range(65664, 65791 + 1), + "Aegean Numbers": range(65792, 65855 + 1), + "Ancient Greek Numbers": range(65856, 65935 + 1), + "Ancient Symbols": range(65936, 65999 + 1), + "Phaistos Disc": range(66000, 66047 + 1), + "Lycian": range(66176, 66207 + 1), + "Carian": range(66208, 66271 + 1), + "Coptic Epact Numbers": range(66272, 66303 + 1), + "Old Italic": range(66304, 66351 + 1), + "Gothic": range(66352, 66383 + 1), + "Old Permic": range(66384, 66431 + 1), + "Ugaritic": range(66432, 66463 + 1), + "Old Persian": range(66464, 66527 + 1), + "Deseret": range(66560, 66639 + 1), + "Shavian": range(66640, 66687 + 1), + "Osmanya": range(66688, 66735 + 1), + "Osage": range(66736, 66815 + 1), + "Elbasan": range(66816, 66863 + 1), + "Caucasian Albanian": range(66864, 66927 + 1), + "Linear A": range(67072, 67455 + 1), + "Cypriot Syllabary": range(67584, 67647 + 1), + "Imperial Aramaic": range(67648, 67679 + 1), + "Palmyrene": range(67680, 67711 + 1), + "Nabataean": range(67712, 67759 + 1), + "Hatran": range(67808, 67839 + 1), + "Phoenician": range(67840, 67871 + 1), + "Lydian": range(67872, 67903 + 1), + "Meroitic Hieroglyphs": range(67968, 67999 + 1), + "Meroitic Cursive": range(68000, 68095 + 1), + "Kharoshthi": range(68096, 68191 + 1), + "Old South Arabian": range(68192, 68223 + 1), + "Old North Arabian": range(68224, 68255 + 1), + "Manichaean": range(68288, 68351 + 1), + "Avestan": range(68352, 68415 + 1), + "Inscriptional Parthian": range(68416, 68447 + 1), + "Inscriptional Pahlavi": range(68448, 68479 + 1), + "Psalter Pahlavi": range(68480, 68527 + 1), + "Old Turkic": range(68608, 68687 + 1), + "Old Hungarian": range(68736, 68863 + 1), + "Rumi Numeral Symbols": range(69216, 69247 + 1), + "Brahmi": range(69632, 69759 + 1), + "Kaithi": range(69760, 69839 + 1), + "Sora Sompeng": range(69840, 69887 + 1), + "Chakma": range(69888, 69967 + 1), + "Mahajani": range(69968, 70015 + 1), + "Sharada": range(70016, 70111 + 1), + "Sinhala Archaic Numbers": range(70112, 70143 + 1), + "Khojki": range(70144, 70223 + 1), + "Multani": range(70272, 70319 + 1), + "Khudawadi": range(70320, 70399 + 1), + "Grantha": range(70400, 70527 + 1), + "Newa": range(70656, 70783 + 1), + "Tirhuta": range(70784, 70879 + 1), + "Siddham": range(71040, 71167 + 1), + "Modi": range(71168, 
71263 + 1), + "Mongolian Supplement": range(71264, 71295 + 1), + "Takri": range(71296, 71375 + 1), + "Ahom": range(71424, 71487 + 1), + "Warang Citi": range(71840, 71935 + 1), + "Zanabazar Square": range(72192, 72271 + 1), + "Soyombo": range(72272, 72367 + 1), + "Pau Cin Hau": range(72384, 72447 + 1), + "Bhaiksuki": range(72704, 72815 + 1), + "Marchen": range(72816, 72895 + 1), + "Masaram Gondi": range(72960, 73055 + 1), + "Cuneiform": range(73728, 74751 + 1), + "Cuneiform Numbers and Punctuation": range(74752, 74879 + 1), + "Early Dynastic Cuneiform": range(74880, 75087 + 1), + "Egyptian Hieroglyphs": range(77824, 78895 + 1), + "Anatolian Hieroglyphs": range(82944, 83583 + 1), + "Bamum Supplement": range(92160, 92735 + 1), + "Mro": range(92736, 92783 + 1), + "Bassa Vah": range(92880, 92927 + 1), + "Pahawh Hmong": range(92928, 93071 + 1), + "Miao": range(93952, 94111 + 1), + "Ideographic Symbols and Punctuation": range(94176, 94207 + 1), + "Tangut": range(94208, 100351 + 1), + "Tangut Components": range(100352, 101119 + 1), + "Kana Supplement": range(110592, 110847 + 1), + "Kana Extended-A": range(110848, 110895 + 1), + "Nushu": range(110960, 111359 + 1), + "Duployan": range(113664, 113823 + 1), + "Shorthand Format Controls": range(113824, 113839 + 1), + "Byzantine Musical Symbols": range(118784, 119039 + 1), + "Musical Symbols": range(119040, 119295 + 1), + "Ancient Greek Musical Notation": range(119296, 119375 + 1), + "Tai Xuan Jing Symbols": range(119552, 119647 + 1), + "Counting Rod Numerals": range(119648, 119679 + 1), + "Mathematical Alphanumeric Symbols": range(119808, 120831 + 1), + "Sutton SignWriting": range(120832, 121519 + 1), + "Glagolitic Supplement": range(122880, 122927 + 1), + "Mende Kikakui": range(124928, 125151 + 1), + "Adlam": range(125184, 125279 + 1), + "Arabic Mathematical Alphabetic Symbols": range(126464, 126719 + 1), + "Mahjong Tiles": range(126976, 127023 + 1), + "Domino Tiles": range(127024, 127135 + 1), + "Playing Cards": range(127136, 127231 + 1), + "Enclosed Alphanumeric Supplement": range(127232, 127487 + 1), + "Enclosed Ideographic Supplement": range(127488, 127743 + 1), + "Miscellaneous Symbols and Pictographs": range(127744, 128511 + 1), + "Emoticons range(Emoji)": range(128512, 128591 + 1), + "Ornamental Dingbats": range(128592, 128639 + 1), + "Transport and Map Symbols": range(128640, 128767 + 1), + "Alchemical Symbols": range(128768, 128895 + 1), + "Geometric Shapes Extended": range(128896, 129023 + 1), + "Supplemental Arrows-C": range(129024, 129279 + 1), + "Supplemental Symbols and Pictographs": range(129280, 129535 + 1), + "CJK Unified Ideographs Extension B": range(131072, 173791 + 1), + "CJK Unified Ideographs Extension C": range(173824, 177983 + 1), + "CJK Unified Ideographs Extension D": range(177984, 178207 + 1), + "CJK Unified Ideographs Extension E": range(178208, 183983 + 1), + "CJK Unified Ideographs Extension F": range(183984, 191471 + 1), + "CJK Compatibility Ideographs Supplement": range(194560, 195103 + 1), + "Tags": range(917504, 917631 + 1), + "Variation Selectors Supplement": range(917760, 917999 + 1), +} + + +UNICODE_SECONDARY_RANGE_KEYWORD: List[str] = [ + "Supplement", + "Extended", + "Extensions", + "Modifier", + "Marks", + "Punctuation", + "Symbols", + "Forms", + "Operators", + "Miscellaneous", + "Drawing", + "Block", + "Shapes", + "Supplemental", + "Tags", +] + +RE_POSSIBLE_ENCODING_INDICATION = re_compile( + r"(?:(?:encoding)|(?:charset)|(?:coding))(?:[\:= ]{1,10})(?:[\"\']?)([a-zA-Z0-9\-_]+)(?:[\"\']?)", + IGNORECASE, +) + 
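+# Illustrative (assumed) matches for the pattern above:
+#   b'<meta charset="utf-8">'        -> captures "utf-8"
+#   b"# -*- coding: iso-8859-1 -*-"  -> captures "iso-8859-1"
+# Presumably used by the preemptive behaviour to honour an encoding declared
+# within the payload itself.
+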
+IANA_SUPPORTED: List[str] = sorted( + filter( + lambda x: x.endswith("_codec") is False + and x not in {"rot_13", "tactis", "mbcs"}, + list(set(aliases.values())), + ) +) + +IANA_SUPPORTED_COUNT: int = len(IANA_SUPPORTED) + +# pre-computed code page that are similar using the function cp_similarity. +IANA_SUPPORTED_SIMILAR: Dict[str, List[str]] = { + "cp037": ["cp1026", "cp1140", "cp273", "cp500"], + "cp1026": ["cp037", "cp1140", "cp273", "cp500"], + "cp1125": ["cp866"], + "cp1140": ["cp037", "cp1026", "cp273", "cp500"], + "cp1250": ["iso8859_2"], + "cp1251": ["kz1048", "ptcp154"], + "cp1252": ["iso8859_15", "iso8859_9", "latin_1"], + "cp1253": ["iso8859_7"], + "cp1254": ["iso8859_15", "iso8859_9", "latin_1"], + "cp1257": ["iso8859_13"], + "cp273": ["cp037", "cp1026", "cp1140", "cp500"], + "cp437": ["cp850", "cp858", "cp860", "cp861", "cp862", "cp863", "cp865"], + "cp500": ["cp037", "cp1026", "cp1140", "cp273"], + "cp850": ["cp437", "cp857", "cp858", "cp865"], + "cp857": ["cp850", "cp858", "cp865"], + "cp858": ["cp437", "cp850", "cp857", "cp865"], + "cp860": ["cp437", "cp861", "cp862", "cp863", "cp865"], + "cp861": ["cp437", "cp860", "cp862", "cp863", "cp865"], + "cp862": ["cp437", "cp860", "cp861", "cp863", "cp865"], + "cp863": ["cp437", "cp860", "cp861", "cp862", "cp865"], + "cp865": ["cp437", "cp850", "cp857", "cp858", "cp860", "cp861", "cp862", "cp863"], + "cp866": ["cp1125"], + "iso8859_10": ["iso8859_14", "iso8859_15", "iso8859_4", "iso8859_9", "latin_1"], + "iso8859_11": ["tis_620"], + "iso8859_13": ["cp1257"], + "iso8859_14": [ + "iso8859_10", + "iso8859_15", + "iso8859_16", + "iso8859_3", + "iso8859_9", + "latin_1", + ], + "iso8859_15": [ + "cp1252", + "cp1254", + "iso8859_10", + "iso8859_14", + "iso8859_16", + "iso8859_3", + "iso8859_9", + "latin_1", + ], + "iso8859_16": [ + "iso8859_14", + "iso8859_15", + "iso8859_2", + "iso8859_3", + "iso8859_9", + "latin_1", + ], + "iso8859_2": ["cp1250", "iso8859_16", "iso8859_4"], + "iso8859_3": ["iso8859_14", "iso8859_15", "iso8859_16", "iso8859_9", "latin_1"], + "iso8859_4": ["iso8859_10", "iso8859_2", "iso8859_9", "latin_1"], + "iso8859_7": ["cp1253"], + "iso8859_9": [ + "cp1252", + "cp1254", + "cp1258", + "iso8859_10", + "iso8859_14", + "iso8859_15", + "iso8859_16", + "iso8859_3", + "iso8859_4", + "latin_1", + ], + "kz1048": ["cp1251", "ptcp154"], + "latin_1": [ + "cp1252", + "cp1254", + "cp1258", + "iso8859_10", + "iso8859_14", + "iso8859_15", + "iso8859_16", + "iso8859_3", + "iso8859_4", + "iso8859_9", + ], + "mac_iceland": ["mac_roman", "mac_turkish"], + "mac_roman": ["mac_iceland", "mac_turkish"], + "mac_turkish": ["mac_iceland", "mac_roman"], + "ptcp154": ["cp1251", "kz1048"], + "tis_620": ["iso8859_11"], +} + + +CHARDET_CORRESPONDENCE: Dict[str, str] = { + "iso2022_kr": "ISO-2022-KR", + "iso2022_jp": "ISO-2022-JP", + "euc_kr": "EUC-KR", + "tis_620": "TIS-620", + "utf_32": "UTF-32", + "euc_jp": "EUC-JP", + "koi8_r": "KOI8-R", + "iso8859_1": "ISO-8859-1", + "iso8859_2": "ISO-8859-2", + "iso8859_5": "ISO-8859-5", + "iso8859_6": "ISO-8859-6", + "iso8859_7": "ISO-8859-7", + "iso8859_8": "ISO-8859-8", + "utf_16": "UTF-16", + "cp855": "IBM855", + "mac_cyrillic": "MacCyrillic", + "gb2312": "GB2312", + "gb18030": "GB18030", + "cp932": "CP932", + "cp866": "IBM866", + "utf_8": "utf-8", + "utf_8_sig": "UTF-8-SIG", + "shift_jis": "SHIFT_JIS", + "big5": "Big5", + "cp1250": "windows-1250", + "cp1251": "windows-1251", + "cp1252": "Windows-1252", + "cp1253": "windows-1253", + "cp1255": "windows-1255", + "cp1256": "windows-1256", + "cp1254": 
"Windows-1254", + "cp949": "CP949", +} + + +COMMON_SAFE_ASCII_CHARACTERS: Set[str] = { + "<", + ">", + "=", + ":", + "/", + "&", + ";", + "{", + "}", + "[", + "]", + ",", + "|", + '"', + "-", +} + + +KO_NAMES: Set[str] = {"johab", "cp949", "euc_kr"} +ZH_NAMES: Set[str] = {"big5", "cp950", "big5hkscs", "hz"} + +NOT_PRINTABLE_PATTERN = re_compile(r"[0-9\W\n\r\t]+") + +LANGUAGE_SUPPORTED_COUNT: int = len(FREQUENCIES) + +# Logging LEVEL bellow DEBUG +TRACE: int = 5 diff --git a/env/lib/python3.8/site-packages/charset_normalizer/legacy.py b/env/lib/python3.8/site-packages/charset_normalizer/legacy.py new file mode 100644 index 00000000..cdebe2b8 --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer/legacy.py @@ -0,0 +1,95 @@ +import warnings +from typing import Dict, Optional, Union + +from .api import from_bytes, from_fp, from_path, normalize +from .constant import CHARDET_CORRESPONDENCE +from .models import CharsetMatch, CharsetMatches + + +def detect(byte_str: bytes) -> Dict[str, Optional[Union[str, float]]]: + """ + chardet legacy method + Detect the encoding of the given byte string. It should be mostly backward-compatible. + Encoding name will match Chardet own writing whenever possible. (Not on encoding name unsupported by it) + This function is deprecated and should be used to migrate your project easily, consult the documentation for + further information. Not planned for removal. + + :param byte_str: The byte sequence to examine. + """ + if not isinstance(byte_str, (bytearray, bytes)): + raise TypeError( # pragma: nocover + "Expected object of type bytes or bytearray, got: " + "{0}".format(type(byte_str)) + ) + + if isinstance(byte_str, bytearray): + byte_str = bytes(byte_str) + + r = from_bytes(byte_str).best() + + encoding = r.encoding if r is not None else None + language = r.language if r is not None and r.language != "Unknown" else "" + confidence = 1.0 - r.chaos if r is not None else None + + # Note: CharsetNormalizer does not return 'UTF-8-SIG' as the sig get stripped in the detection/normalization process + # but chardet does return 'utf-8-sig' and it is a valid codec name. 
+ if r is not None and encoding == "utf_8" and r.bom: + encoding += "_sig" + + return { + "encoding": encoding + if encoding not in CHARDET_CORRESPONDENCE + else CHARDET_CORRESPONDENCE[encoding], + "language": language, + "confidence": confidence, + } + + +class CharsetNormalizerMatch(CharsetMatch): + pass + + +class CharsetNormalizerMatches(CharsetMatches): + @staticmethod + def from_fp(*args, **kwargs): # type: ignore + warnings.warn( # pragma: nocover + "staticmethod from_fp, from_bytes, from_path and normalize are deprecated " + "and scheduled to be removed in 3.0", + DeprecationWarning, + ) + return from_fp(*args, **kwargs) # pragma: nocover + + @staticmethod + def from_bytes(*args, **kwargs): # type: ignore + warnings.warn( # pragma: nocover + "staticmethod from_fp, from_bytes, from_path and normalize are deprecated " + "and scheduled to be removed in 3.0", + DeprecationWarning, + ) + return from_bytes(*args, **kwargs) # pragma: nocover + + @staticmethod + def from_path(*args, **kwargs): # type: ignore + warnings.warn( # pragma: nocover + "staticmethod from_fp, from_bytes, from_path and normalize are deprecated " + "and scheduled to be removed in 3.0", + DeprecationWarning, + ) + return from_path(*args, **kwargs) # pragma: nocover + + @staticmethod + def normalize(*args, **kwargs): # type: ignore + warnings.warn( # pragma: nocover + "staticmethod from_fp, from_bytes, from_path and normalize are deprecated " + "and scheduled to be removed in 3.0", + DeprecationWarning, + ) + return normalize(*args, **kwargs) # pragma: nocover + + +class CharsetDetector(CharsetNormalizerMatches): + pass + + +class CharsetDoctor(CharsetNormalizerMatches): + pass diff --git a/env/lib/python3.8/site-packages/charset_normalizer/md.py b/env/lib/python3.8/site-packages/charset_normalizer/md.py new file mode 100644 index 00000000..31808af8 --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer/md.py @@ -0,0 +1,553 @@ +from functools import lru_cache +from typing import List, Optional + +from .constant import COMMON_SAFE_ASCII_CHARACTERS, UNICODE_SECONDARY_RANGE_KEYWORD +from .utils import ( + is_accentuated, + is_ascii, + is_case_variable, + is_cjk, + is_emoticon, + is_hangul, + is_hiragana, + is_katakana, + is_latin, + is_punctuation, + is_separator, + is_symbol, + is_thai, + is_unprintable, + remove_accent, + unicode_range, +) + + +class MessDetectorPlugin: + """ + Base abstract class used for mess detection plugins. + All detectors MUST extend and implement given methods. + """ + + def eligible(self, character: str) -> bool: + """ + Determine if given character should be fed in. + """ + raise NotImplementedError # pragma: nocover + + def feed(self, character: str) -> None: + """ + The main routine to be executed upon character. + Insert the logic in witch the text would be considered chaotic. + """ + raise NotImplementedError # pragma: nocover + + def reset(self) -> None: # pragma: no cover + """ + Permit to reset the plugin to the initial state. + """ + raise NotImplementedError + + @property + def ratio(self) -> float: + """ + Compute the chaos ratio based on what your feed() has seen. + Must NOT be lower than 0.; No restriction gt 0. 
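+        Ratios are weighted; a plugin may legitimately return values above 1.0
+        (e.g. UnprintablePlugin multiplies its count by 8).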
+ """ + raise NotImplementedError # pragma: nocover + + +class TooManySymbolOrPunctuationPlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._punctuation_count: int = 0 + self._symbol_count: int = 0 + self._character_count: int = 0 + + self._last_printable_char: Optional[str] = None + self._frenzy_symbol_in_word: bool = False + + def eligible(self, character: str) -> bool: + return character.isprintable() + + def feed(self, character: str) -> None: + self._character_count += 1 + + if ( + character != self._last_printable_char + and character not in COMMON_SAFE_ASCII_CHARACTERS + ): + if is_punctuation(character): + self._punctuation_count += 1 + elif ( + character.isdigit() is False + and is_symbol(character) + and is_emoticon(character) is False + ): + self._symbol_count += 2 + + self._last_printable_char = character + + def reset(self) -> None: # pragma: no cover + self._punctuation_count = 0 + self._character_count = 0 + self._symbol_count = 0 + + @property + def ratio(self) -> float: + if self._character_count == 0: + return 0.0 + + ratio_of_punctuation: float = ( + self._punctuation_count + self._symbol_count + ) / self._character_count + + return ratio_of_punctuation if ratio_of_punctuation >= 0.3 else 0.0 + + +class TooManyAccentuatedPlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._character_count: int = 0 + self._accentuated_count: int = 0 + + def eligible(self, character: str) -> bool: + return character.isalpha() + + def feed(self, character: str) -> None: + self._character_count += 1 + + if is_accentuated(character): + self._accentuated_count += 1 + + def reset(self) -> None: # pragma: no cover + self._character_count = 0 + self._accentuated_count = 0 + + @property + def ratio(self) -> float: + if self._character_count == 0: + return 0.0 + ratio_of_accentuation: float = self._accentuated_count / self._character_count + return ratio_of_accentuation if ratio_of_accentuation >= 0.35 else 0.0 + + +class UnprintablePlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._unprintable_count: int = 0 + self._character_count: int = 0 + + def eligible(self, character: str) -> bool: + return True + + def feed(self, character: str) -> None: + if is_unprintable(character): + self._unprintable_count += 1 + self._character_count += 1 + + def reset(self) -> None: # pragma: no cover + self._unprintable_count = 0 + + @property + def ratio(self) -> float: + if self._character_count == 0: + return 0.0 + + return (self._unprintable_count * 8) / self._character_count + + +class SuspiciousDuplicateAccentPlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._successive_count: int = 0 + self._character_count: int = 0 + + self._last_latin_character: Optional[str] = None + + def eligible(self, character: str) -> bool: + return character.isalpha() and is_latin(character) + + def feed(self, character: str) -> None: + self._character_count += 1 + if ( + self._last_latin_character is not None + and is_accentuated(character) + and is_accentuated(self._last_latin_character) + ): + if character.isupper() and self._last_latin_character.isupper(): + self._successive_count += 1 + # Worse if its the same char duplicated with different accent. 
+ if remove_accent(character) == remove_accent(self._last_latin_character): + self._successive_count += 1 + self._last_latin_character = character + + def reset(self) -> None: # pragma: no cover + self._successive_count = 0 + self._character_count = 0 + self._last_latin_character = None + + @property + def ratio(self) -> float: + if self._character_count == 0: + return 0.0 + + return (self._successive_count * 2) / self._character_count + + +class SuspiciousRange(MessDetectorPlugin): + def __init__(self) -> None: + self._suspicious_successive_range_count: int = 0 + self._character_count: int = 0 + self._last_printable_seen: Optional[str] = None + + def eligible(self, character: str) -> bool: + return character.isprintable() + + def feed(self, character: str) -> None: + self._character_count += 1 + + if ( + character.isspace() + or is_punctuation(character) + or character in COMMON_SAFE_ASCII_CHARACTERS + ): + self._last_printable_seen = None + return + + if self._last_printable_seen is None: + self._last_printable_seen = character + return + + unicode_range_a: Optional[str] = unicode_range(self._last_printable_seen) + unicode_range_b: Optional[str] = unicode_range(character) + + if is_suspiciously_successive_range(unicode_range_a, unicode_range_b): + self._suspicious_successive_range_count += 1 + + self._last_printable_seen = character + + def reset(self) -> None: # pragma: no cover + self._character_count = 0 + self._suspicious_successive_range_count = 0 + self._last_printable_seen = None + + @property + def ratio(self) -> float: + if self._character_count == 0: + return 0.0 + + ratio_of_suspicious_range_usage: float = ( + self._suspicious_successive_range_count * 2 + ) / self._character_count + + if ratio_of_suspicious_range_usage < 0.1: + return 0.0 + + return ratio_of_suspicious_range_usage + + +class SuperWeirdWordPlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._word_count: int = 0 + self._bad_word_count: int = 0 + self._foreign_long_count: int = 0 + + self._is_current_word_bad: bool = False + self._foreign_long_watch: bool = False + + self._character_count: int = 0 + self._bad_character_count: int = 0 + + self._buffer: str = "" + self._buffer_accent_count: int = 0 + + def eligible(self, character: str) -> bool: + return True + + def feed(self, character: str) -> None: + if character.isalpha(): + self._buffer += character + if is_accentuated(character): + self._buffer_accent_count += 1 + if ( + self._foreign_long_watch is False + and (is_latin(character) is False or is_accentuated(character)) + and is_cjk(character) is False + and is_hangul(character) is False + and is_katakana(character) is False + and is_hiragana(character) is False + and is_thai(character) is False + ): + self._foreign_long_watch = True + return + if not self._buffer: + return + if ( + character.isspace() or is_punctuation(character) or is_separator(character) + ) and self._buffer: + self._word_count += 1 + buffer_length: int = len(self._buffer) + + self._character_count += buffer_length + + if buffer_length >= 4: + if self._buffer_accent_count / buffer_length > 0.34: + self._is_current_word_bad = True + # Word/Buffer ending with a upper case accentuated letter are so rare, + # that we will consider them all as suspicious. Same weight as foreign_long suspicious. 
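+                # e.g. a word ending in "É" or "Ä" lands here and is flagged
+                # as suspicious.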
+                if is_accentuated(self._buffer[-1]) and self._buffer[-1].isupper():
+                    self._foreign_long_count += 1
+                    self._is_current_word_bad = True
+            if buffer_length >= 24 and self._foreign_long_watch:
+                self._foreign_long_count += 1
+                self._is_current_word_bad = True
+
+            if self._is_current_word_bad:
+                self._bad_word_count += 1
+                self._bad_character_count += len(self._buffer)
+                self._is_current_word_bad = False
+
+            self._foreign_long_watch = False
+            self._buffer = ""
+            self._buffer_accent_count = 0
+        elif (
+            character not in {"<", ">", "-", "=", "~", "|", "_"}
+            and character.isdigit() is False
+            and is_symbol(character)
+        ):
+            self._is_current_word_bad = True
+            self._buffer += character
+
+    def reset(self) -> None:  # pragma: no cover
+        self._buffer = ""
+        self._is_current_word_bad = False
+        self._foreign_long_watch = False
+        self._bad_word_count = 0
+        self._word_count = 0
+        self._character_count = 0
+        self._bad_character_count = 0
+        self._foreign_long_count = 0
+
+    @property
+    def ratio(self) -> float:
+        if self._word_count <= 10 and self._foreign_long_count == 0:
+            return 0.0
+
+        return self._bad_character_count / self._character_count
+
+
+class CjkInvalidStopPlugin(MessDetectorPlugin):
+    """
+    GB (Chinese) based encodings often render the stop incorrectly when the content does not fit;
+    this can be easily detected by searching for an overuse of '丅' and '丄'.
+    """
+
+    def __init__(self) -> None:
+        self._wrong_stop_count: int = 0
+        self._cjk_character_count: int = 0
+
+    def eligible(self, character: str) -> bool:
+        return True
+
+    def feed(self, character: str) -> None:
+        if character in {"丅", "丄"}:
+            self._wrong_stop_count += 1
+            return
+        if is_cjk(character):
+            self._cjk_character_count += 1
+
+    def reset(self) -> None:  # pragma: no cover
+        self._wrong_stop_count = 0
+        self._cjk_character_count = 0
+
+    @property
+    def ratio(self) -> float:
+        if self._cjk_character_count < 16:
+            return 0.0
+        return self._wrong_stop_count / self._cjk_character_count
+
+
+class ArchaicUpperLowerPlugin(MessDetectorPlugin):
+    def __init__(self) -> None:
+        self._buf: bool = False
+
+        self._character_count_since_last_sep: int = 0
+
+        self._successive_upper_lower_count: int = 0
+        self._successive_upper_lower_count_final: int = 0
+
+        self._character_count: int = 0
+
+        self._last_alpha_seen: Optional[str] = None
+        self._current_ascii_only: bool = True
+
+    def eligible(self, character: str) -> bool:
+        return True
+
+    def feed(self, character: str) -> None:
+        is_concerned = character.isalpha() and is_case_variable(character)
+        chunk_sep = is_concerned is False
+
+        if chunk_sep and self._character_count_since_last_sep > 0:
+            if (
+                self._character_count_since_last_sep <= 64
+                and character.isdigit() is False
+                and self._current_ascii_only is False
+            ):
+                self._successive_upper_lower_count_final += (
+                    self._successive_upper_lower_count
+                )
+
+            self._successive_upper_lower_count = 0
+            self._character_count_since_last_sep = 0
+            self._last_alpha_seen = None
+            self._buf = False
+            self._character_count += 1
+            self._current_ascii_only = True
+
+            return
+
+        if self._current_ascii_only is True and is_ascii(character) is False:
+            self._current_ascii_only = False
+
+        if self._last_alpha_seen is not None:
+            if (character.isupper() and self._last_alpha_seen.islower()) or (
+                character.islower() and self._last_alpha_seen.isupper()
+            ):
+                if self._buf is True:
+                    self._successive_upper_lower_count += 2
+                    self._buf = False
+                else:
+                    self._buf = True
+            else:
+                self._buf = False
+
+        self._character_count += 1
+        self._character_count_since_last_sep += 1
+        self._last_alpha_seen = character
+
+    def reset(self) -> None:  # pragma: no cover
+        self._character_count = 0
+        self._character_count_since_last_sep = 0
+        self._successive_upper_lower_count = 0
+        self._successive_upper_lower_count_final = 0
+        self._last_alpha_seen = None
+        self._buf = False
+        self._current_ascii_only = True
+
+    @property
+    def ratio(self) -> float:
+        if self._character_count == 0:
+            return 0.0
+
+        return self._successive_upper_lower_count_final / self._character_count
+
+
+@lru_cache(maxsize=1024)
+def is_suspiciously_successive_range(
+    unicode_range_a: Optional[str], unicode_range_b: Optional[str]
+) -> bool:
+    """
+    Determine if two Unicode ranges seen next to each other can be considered suspicious.
+    """
+    if unicode_range_a is None or unicode_range_b is None:
+        return True
+
+    if unicode_range_a == unicode_range_b:
+        return False
+
+    if "Latin" in unicode_range_a and "Latin" in unicode_range_b:
+        return False
+
+    if "Emoticons" in unicode_range_a or "Emoticons" in unicode_range_b:
+        return False
+
+    # Latin characters can be accompanied with a combining diacritical mark,
+    # e.g. Vietnamese.
+    if ("Latin" in unicode_range_a or "Latin" in unicode_range_b) and (
+        "Combining" in unicode_range_a or "Combining" in unicode_range_b
+    ):
+        return False
+
+    keywords_range_a, keywords_range_b = unicode_range_a.split(
+        " "
+    ), unicode_range_b.split(" ")
+
+    for el in keywords_range_a:
+        if el in UNICODE_SECONDARY_RANGE_KEYWORD:
+            continue
+        if el in keywords_range_b:
+            return False
+
+    # Japanese Exception
+    range_a_jp_chars, range_b_jp_chars = (
+        unicode_range_a
+        in (
+            "Hiragana",
+            "Katakana",
+        ),
+        unicode_range_b in ("Hiragana", "Katakana"),
+    )
+    if (range_a_jp_chars or range_b_jp_chars) and (
+        "CJK" in unicode_range_a or "CJK" in unicode_range_b
+    ):
+        return False
+    if range_a_jp_chars and range_b_jp_chars:
+        return False
+
+    if "Hangul" in unicode_range_a or "Hangul" in unicode_range_b:
+        if "CJK" in unicode_range_a or "CJK" in unicode_range_b:
+            return False
+        if unicode_range_a == "Basic Latin" or unicode_range_b == "Basic Latin":
+            return False
+
+    # Chinese/Japanese use dedicated range for punctuation and/or separators.
+    if ("CJK" in unicode_range_a or "CJK" in unicode_range_b) or (
+        unicode_range_a in ["Katakana", "Hiragana"]
+        and unicode_range_b in ["Katakana", "Hiragana"]
+    ):
+        if "Punctuation" in unicode_range_a or "Punctuation" in unicode_range_b:
+            return False
+        if "Forms" in unicode_range_a or "Forms" in unicode_range_b:
+            return False
+
+    return True
+
+
+@lru_cache(maxsize=2048)
+def mess_ratio(
+    decoded_sequence: str, maximum_threshold: float = 0.2, debug: bool = False
+) -> float:
+    """
+    Compute a mess ratio given a decoded bytes sequence. The maximum threshold stops the computation early.
+    """
+
+    detectors: List[MessDetectorPlugin] = [
+        md_class() for md_class in MessDetectorPlugin.__subclasses__()
+    ]
+
+    length: int = len(decoded_sequence) + 1
+
+    mean_mess_ratio: float = 0.0
+
+    if length < 512:
+        intermediary_mean_mess_ratio_calc: int = 32
+    elif length <= 1024:
+        intermediary_mean_mess_ratio_calc = 64
+    else:
+        intermediary_mean_mess_ratio_calc = 128
+
+    for character, index in zip(decoded_sequence + "\n", range(length)):
+        for detector in detectors:
+            if detector.eligible(character):
+                detector.feed(character)
+
+        if (
+            index > 0 and index % intermediary_mean_mess_ratio_calc == 0
+        ) or index == length - 1:
+            mean_mess_ratio = sum(dt.ratio for dt in detectors)
+
+            if mean_mess_ratio >= maximum_threshold:
+                break
+
+    if debug:
+        for dt in detectors:  # pragma: nocover
+            print(dt.__class__, dt.ratio)
+
+    return round(mean_mess_ratio, 3)
diff --git a/env/lib/python3.8/site-packages/charset_normalizer/models.py b/env/lib/python3.8/site-packages/charset_normalizer/models.py
new file mode 100644
index 00000000..ccb0d475
--- /dev/null
+++ b/env/lib/python3.8/site-packages/charset_normalizer/models.py
@@ -0,0 +1,401 @@
+import warnings
+from collections import Counter
+from encodings.aliases import aliases
+from hashlib import sha256
+from json import dumps
+from re import sub
+from typing import (
+    Any,
+    Counter as TypeCounter,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Tuple,
+    Union,
+)
+
+from .constant import NOT_PRINTABLE_PATTERN, TOO_BIG_SEQUENCE
+from .md import mess_ratio
+from .utils import iana_name, is_multi_byte_encoding, unicode_range
+
+
+class CharsetMatch:
+    def __init__(
+        self,
+        payload: bytes,
+        guessed_encoding: str,
+        mean_mess_ratio: float,
+        has_sig_or_bom: bool,
+        languages: "CoherenceMatches",
+        decoded_payload: Optional[str] = None,
+    ):
+        self._payload: bytes = payload
+
+        self._encoding: str = guessed_encoding
+        self._mean_mess_ratio: float = mean_mess_ratio
+        self._languages: CoherenceMatches = languages
+        self._has_sig_or_bom: bool = has_sig_or_bom
+        self._unicode_ranges: Optional[List[str]] = None
+
+        self._leaves: List[CharsetMatch] = []
+        self._mean_coherence_ratio: float = 0.0
+
+        self._output_payload: Optional[bytes] = None
+        self._output_encoding: Optional[str] = None
+
+        self._string: Optional[str] = decoded_payload
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, CharsetMatch):
+            raise TypeError(
+                "__eq__ cannot be invoked on {} and {}.".format(
+                    str(other.__class__), str(self.__class__)
+                )
+            )
+        return self.encoding == other.encoding and self.fingerprint == other.fingerprint
+
+    def __lt__(self, other: object) -> bool:
+        """
+        Implemented to make sorted() work properly on CharsetMatch items.
+        """
+        if not isinstance(other, CharsetMatch):
+            raise ValueError
+
+        chaos_difference: float = abs(self.chaos - other.chaos)
+        coherence_difference: float = abs(self.coherence - other.coherence)
+
+        # Below 1% difference --> Use Coherence
+        if chaos_difference < 0.01 and coherence_difference > 0.02:
+            # When having a tough decision, use the result that decoded as many multi-byte as possible.
+            if chaos_difference == 0.0 and self.coherence == other.coherence:
+                return self.multi_byte_usage > other.multi_byte_usage
+            return self.coherence > other.coherence
+
+        return self.chaos < other.chaos
+
+    @property
+    def multi_byte_usage(self) -> float:
+        return 1.0 - len(str(self)) / len(self.raw)
+
+    @property
+    def chaos_secondary_pass(self) -> float:
+        """
+        Check once again chaos in decoded text, except this time, with full content.
+        Use with caution, this can be very slow.
+        Notice: Will be removed in 3.0
+        """
+        warnings.warn(
+            "chaos_secondary_pass is deprecated and will be removed in 3.0",
+            DeprecationWarning,
+        )
+        return mess_ratio(str(self), 1.0)
+
+    @property
+    def coherence_non_latin(self) -> float:
+        """
+        Coherence ratio on the first non-Latin language detected, if ANY.
+        Notice: Will be removed in 3.0
+        """
+        warnings.warn(
+            "coherence_non_latin is deprecated and will be removed in 3.0",
+            DeprecationWarning,
+        )
+        return 0.0
+
+    @property
+    def w_counter(self) -> TypeCounter[str]:
+        """
+        Word counter instance on decoded text.
+        Notice: Will be removed in 3.0
+        """
+        warnings.warn(
+            "w_counter is deprecated and will be removed in 3.0", DeprecationWarning
+        )
+
+        string_printable_only = sub(NOT_PRINTABLE_PATTERN, " ", str(self).lower())
+
+        return Counter(string_printable_only.split())
+
+    def __str__(self) -> str:
+        # Lazy Str Loading
+        if self._string is None:
+            self._string = str(self._payload, self._encoding, "strict")
+        return self._string
+
+    def __repr__(self) -> str:
+        return "<CharsetMatch '{}' bytes({})>".format(self.encoding, self.fingerprint)
+
+    def add_submatch(self, other: "CharsetMatch") -> None:
+        if not isinstance(other, CharsetMatch) or other == self:
+            raise ValueError(
+                "Unable to add instance <{}> as a submatch of a CharsetMatch".format(
+                    other.__class__
+                )
+            )
+
+        other._string = None  # Unload RAM usage; dirty trick.
+        self._leaves.append(other)
+
+    @property
+    def encoding(self) -> str:
+        return self._encoding
+
+    @property
+    def encoding_aliases(self) -> List[str]:
+        """
+        Encodings are known by many names; this can help when searching for IBM855 when it is listed as CP855.
+        """
+        also_known_as: List[str] = []
+        for u, p in aliases.items():
+            if self.encoding == u:
+                also_known_as.append(p)
+            elif self.encoding == p:
+                also_known_as.append(u)
+        return also_known_as
+
+    @property
+    def bom(self) -> bool:
+        return self._has_sig_or_bom
+
+    @property
+    def byte_order_mark(self) -> bool:
+        return self._has_sig_or_bom
+
+    @property
+    def languages(self) -> List[str]:
+        """
+        Return the complete list of possible languages found in decoded sequence.
+        Usually not really useful. The returned list may be empty even if the 'language' property returns something != 'Unknown'.
+        """
+        return [e[0] for e in self._languages]
+
+    @property
+    def language(self) -> str:
+        """
+        Most probable language found in decoded sequence. If none were detected or inferred, the property will return
+        "Unknown".
+        """
+        if not self._languages:
+            # Trying to infer the language based on the given encoding
+            # It's either English or we should not pronounce ourselves in certain cases.
+            if "ascii" in self.could_be_from_charset:
+                return "English"
+
+            # doing it here to avoid a circular import
+            from charset_normalizer.cd import encoding_languages, mb_encoding_languages
+
+            languages = (
+                mb_encoding_languages(self.encoding)
+                if is_multi_byte_encoding(self.encoding)
+                else encoding_languages(self.encoding)
+            )
+
+            if len(languages) == 0 or "Latin Based" in languages:
+                return "Unknown"
+
+            return languages[0]
+
+        return self._languages[0][0]
+
+    @property
+    def chaos(self) -> float:
+        return self._mean_mess_ratio
+
+    @property
+    def coherence(self) -> float:
+        if not self._languages:
+            return 0.0
+        return self._languages[0][1]
+
+    @property
+    def percent_chaos(self) -> float:
+        return round(self.chaos * 100, ndigits=3)
+
+    @property
+    def percent_coherence(self) -> float:
+        return round(self.coherence * 100, ndigits=3)
+
+    @property
+    def raw(self) -> bytes:
+        """
+        Original untouched bytes.
+        """
+        return self._payload
+
+    @property
+    def submatch(self) -> List["CharsetMatch"]:
+        return self._leaves
+
+    @property
+    def has_submatch(self) -> bool:
+        return len(self._leaves) > 0
+
+    @property
+    def alphabets(self) -> List[str]:
+        if self._unicode_ranges is not None:
+            return self._unicode_ranges
+        # list detected ranges
+        detected_ranges: List[Optional[str]] = [
+            unicode_range(char) for char in str(self)
+        ]
+        # filter and sort
+        self._unicode_ranges = sorted(list({r for r in detected_ranges if r}))
+        return self._unicode_ranges
+
+    @property
+    def could_be_from_charset(self) -> List[str]:
+        """
+        The complete list of encodings that output the exact SAME str result and therefore could be the originating
+        encoding.
+        This list does include the encoding available in property 'encoding'.
+        """
+        return [self._encoding] + [m.encoding for m in self._leaves]
+
+    def first(self) -> "CharsetMatch":
+        """
+        Kept for BC reasons. Will be removed in 3.0.
+        """
+        return self
+
+    def best(self) -> "CharsetMatch":
+        """
+        Kept for BC reasons. Will be removed in 3.0.
+        """
+        return self
+
+    def output(self, encoding: str = "utf_8") -> bytes:
+        """
+        Method to get re-encoded bytes payload using given target encoding. Defaults to UTF-8.
+        Any errors will be replaced (not raised) by the encoder.
+        """
+        if self._output_encoding is None or self._output_encoding != encoding:
+            self._output_encoding = encoding
+            self._output_payload = str(self).encode(encoding, "replace")
+
+        return self._output_payload  # type: ignore
+
+    @property
+    def fingerprint(self) -> str:
+        """
+        Retrieve the unique SHA256 computed using the transformed (re-encoded) payload. Not the original one.
+        """
+        return sha256(self.output()).hexdigest()
+
+
+class CharsetMatches:
+    """
+    Container with every CharsetMatch item, ordered by default from the most probable to the least.
+    Acts like a list(iterable) but does not implement all the related methods.
+    """
+
+    def __init__(self, results: Optional[List[CharsetMatch]] = None):
+        self._results: List[CharsetMatch] = sorted(results) if results else []
+
+    def __iter__(self) -> Iterator[CharsetMatch]:
+        yield from self._results
+
+    def __getitem__(self, item: Union[int, str]) -> CharsetMatch:
+        """
+        Retrieve a single item either by its position or encoding name (alias may be used here).
+        Raise KeyError upon invalid index or encoding not present in results.
+        """
+        if isinstance(item, int):
+            return self._results[item]
+        if isinstance(item, str):
+            item = iana_name(item, False)
+            for result in self._results:
+                if item in result.could_be_from_charset:
+                    return result
+        raise KeyError
+
+    def __len__(self) -> int:
+        return len(self._results)
+
+    def __bool__(self) -> bool:
+        return len(self._results) > 0
+
+    def append(self, item: CharsetMatch) -> None:
+        """
+        Insert a single match. It will be inserted so as to preserve sort order.
+        Can be inserted as a submatch.
+        """
+        if not isinstance(item, CharsetMatch):
+            raise ValueError(
+                "Cannot append instance '{}' to CharsetMatches".format(
+                    str(item.__class__)
+                )
+            )
+        # We should disable the submatch factoring when the input file is too heavy (conserve RAM usage)
+        if len(item.raw) <= TOO_BIG_SEQUENCE:
+            for match in self._results:
+                if match.fingerprint == item.fingerprint and match.chaos == item.chaos:
+                    match.add_submatch(item)
+                    return
+        self._results.append(item)
+        self._results = sorted(self._results)
+
+    def best(self) -> Optional["CharsetMatch"]:
+        """
+        Simply return the first match. Strict equivalent to matches[0].
+        """
+        if not self._results:
+            return None
+        return self._results[0]
+
+    def first(self) -> Optional["CharsetMatch"]:
+        """
+        Redundant method; calls best(). Kept for BC reasons.
+        """
+        return self.best()
+
+
+CoherenceMatch = Tuple[str, float]
+CoherenceMatches = List[CoherenceMatch]
+
+
+class CliDetectionResult:
+    def __init__(
+        self,
+        path: str,
+        encoding: Optional[str],
+        encoding_aliases: List[str],
+        alternative_encodings: List[str],
+        language: str,
+        alphabets: List[str],
+        has_sig_or_bom: bool,
+        chaos: float,
+        coherence: float,
+        unicode_path: Optional[str],
+        is_preferred: bool,
+    ):
+        self.path: str = path
+        self.unicode_path: Optional[str] = unicode_path
+        self.encoding: Optional[str] = encoding
+        self.encoding_aliases: List[str] = encoding_aliases
+        self.alternative_encodings: List[str] = alternative_encodings
+        self.language: str = language
+        self.alphabets: List[str] = alphabets
+        self.has_sig_or_bom: bool = has_sig_or_bom
+        self.chaos: float = chaos
+        self.coherence: float = coherence
+        self.is_preferred: bool = is_preferred
+
+    @property
+    def __dict__(self) -> Dict[str, Any]:  # type: ignore
+        return {
+            "path": self.path,
+            "encoding": self.encoding,
+            "encoding_aliases": self.encoding_aliases,
+            "alternative_encodings": self.alternative_encodings,
+            "language": self.language,
+            "alphabets": self.alphabets,
+            "has_sig_or_bom": self.has_sig_or_bom,
+            "chaos": self.chaos,
+            "coherence": self.coherence,
+            "unicode_path": self.unicode_path,
+            "is_preferred": self.is_preferred,
+        }
+
+    def to_json(self) -> str:
+        return dumps(self.__dict__, ensure_ascii=True, indent=4)
diff --git a/env/lib/python3.8/site-packages/charset_normalizer/py.typed b/env/lib/python3.8/site-packages/charset_normalizer/py.typed
new file mode 100644
index 00000000..e69de29b
diff --git a/env/lib/python3.8/site-packages/charset_normalizer/utils.py b/env/lib/python3.8/site-packages/charset_normalizer/utils.py
new file mode 100644
index 00000000..859f212b
--- /dev/null
+++ b/env/lib/python3.8/site-packages/charset_normalizer/utils.py
@@ -0,0 +1,424 @@
+try:
+    # WARNING: unicodedata2 support is going to be removed in 3.0
+    # Python is quickly catching up.
+ import unicodedata2 as unicodedata +except ImportError: + import unicodedata # type: ignore[no-redef] + +import importlib +import logging +from codecs import IncrementalDecoder +from encodings.aliases import aliases +from functools import lru_cache +from re import findall +from typing import Generator, List, Optional, Set, Tuple, Union + +from _multibytecodec import MultibyteIncrementalDecoder + +from .constant import ( + ENCODING_MARKS, + IANA_SUPPORTED_SIMILAR, + RE_POSSIBLE_ENCODING_INDICATION, + UNICODE_RANGES_COMBINED, + UNICODE_SECONDARY_RANGE_KEYWORD, + UTF8_MAXIMAL_ALLOCATION, +) + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_accentuated(character: str) -> bool: + try: + description: str = unicodedata.name(character) + except ValueError: + return False + return ( + "WITH GRAVE" in description + or "WITH ACUTE" in description + or "WITH CEDILLA" in description + or "WITH DIAERESIS" in description + or "WITH CIRCUMFLEX" in description + or "WITH TILDE" in description + ) + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def remove_accent(character: str) -> str: + decomposed: str = unicodedata.decomposition(character) + if not decomposed: + return character + + codes: List[str] = decomposed.split(" ") + + return chr(int(codes[0], 16)) + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def unicode_range(character: str) -> Optional[str]: + """ + Retrieve the Unicode range official name from a single character. + """ + character_ord: int = ord(character) + + for range_name, ord_range in UNICODE_RANGES_COMBINED.items(): + if character_ord in ord_range: + return range_name + + return None + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_latin(character: str) -> bool: + try: + description: str = unicodedata.name(character) + except ValueError: + return False + return "LATIN" in description + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_ascii(character: str) -> bool: + try: + character.encode("ascii") + except UnicodeEncodeError: + return False + return True + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_punctuation(character: str) -> bool: + character_category: str = unicodedata.category(character) + + if "P" in character_category: + return True + + character_range: Optional[str] = unicode_range(character) + + if character_range is None: + return False + + return "Punctuation" in character_range + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_symbol(character: str) -> bool: + character_category: str = unicodedata.category(character) + + if "S" in character_category or "N" in character_category: + return True + + character_range: Optional[str] = unicode_range(character) + + if character_range is None: + return False + + return "Forms" in character_range + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_emoticon(character: str) -> bool: + character_range: Optional[str] = unicode_range(character) + + if character_range is None: + return False + + return "Emoticons" in character_range + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_separator(character: str) -> bool: + if character.isspace() or character in {"|", "+", ",", ";", "<", ">"}: + return True + + character_category: str = unicodedata.category(character) + + return "Z" in character_category + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_case_variable(character: str) -> bool: + return character.islower() != character.isupper() + + +def is_private_use_only(character: str) -> bool: + character_category: str = unicodedata.category(character) + + return 
character_category == "Co"
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_cjk(character: str) -> bool:
+    try:
+        character_name = unicodedata.name(character)
+    except ValueError:
+        return False
+
+    return "CJK" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_hiragana(character: str) -> bool:
+    try:
+        character_name = unicodedata.name(character)
+    except ValueError:
+        return False
+
+    return "HIRAGANA" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_katakana(character: str) -> bool:
+    try:
+        character_name = unicodedata.name(character)
+    except ValueError:
+        return False
+
+    return "KATAKANA" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_hangul(character: str) -> bool:
+    try:
+        character_name = unicodedata.name(character)
+    except ValueError:
+        return False
+
+    return "HANGUL" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_thai(character: str) -> bool:
+    try:
+        character_name = unicodedata.name(character)
+    except ValueError:
+        return False
+
+    return "THAI" in character_name
+
+
+@lru_cache(maxsize=len(UNICODE_RANGES_COMBINED))
+def is_unicode_range_secondary(range_name: str) -> bool:
+    return any(keyword in range_name for keyword in UNICODE_SECONDARY_RANGE_KEYWORD)
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_unprintable(character: str) -> bool:
+    return (
+        character.isspace() is False  # includes \n \t \r \v
+        and character.isprintable() is False
+        and character != "\x1A"  # Why? It's the ASCII substitute character.
+        and character != "\ufeff"  # bug discovered in Python,
+        # Zero Width No-Break Space located in Arabic Presentation Forms-B, Unicode 1.1 not acknowledged as space.
+    )
+
+
+def any_specified_encoding(sequence: bytes, search_zone: int = 4096) -> Optional[str]:
+    """
+    Extract, using an ASCII-only decoder, any encoding specified in the first n bytes.
+    """
+    if not isinstance(sequence, bytes):
+        raise TypeError
+
+    seq_len: int = len(sequence)
+
+    results: List[str] = findall(
+        RE_POSSIBLE_ENCODING_INDICATION,
+        sequence[: min(seq_len, search_zone)].decode("ascii", errors="ignore"),
+    )
+
+    if len(results) == 0:
+        return None
+
+    for specified_encoding in results:
+        specified_encoding = specified_encoding.lower().replace("-", "_")
+
+        encoding_alias: str
+        encoding_iana: str
+
+        for encoding_alias, encoding_iana in aliases.items():
+            if encoding_alias == specified_encoding:
+                return encoding_iana
+            if encoding_iana == specified_encoding:
+                return encoding_iana
+
+    return None
+
+
+@lru_cache(maxsize=128)
+def is_multi_byte_encoding(name: str) -> bool:
+    """
+    Verify whether a specific encoding is multi-byte, based on its IANA name.
+    """
+    return name in {
+        "utf_8",
+        "utf_8_sig",
+        "utf_16",
+        "utf_16_be",
+        "utf_16_le",
+        "utf_32",
+        "utf_32_le",
+        "utf_32_be",
+        "utf_7",
+    } or issubclass(
+        importlib.import_module("encodings.{}".format(name)).IncrementalDecoder,
+        MultibyteIncrementalDecoder,
+    )
+
+
+def identify_sig_or_bom(sequence: bytes) -> Tuple[Optional[str], bytes]:
+    """
+    Identify and extract a SIG/BOM from the given sequence.
+    """
+
+    for iana_encoding in ENCODING_MARKS:
+        marks: Union[bytes, List[bytes]] = ENCODING_MARKS[iana_encoding]
+
+        if isinstance(marks, bytes):
+            marks = [marks]
+
+        for mark in marks:
+            if sequence.startswith(mark):
+                return iana_encoding, mark
+
+    return None, b""
+
+
+def should_strip_sig_or_bom(iana_encoding: str) -> bool:
+    return iana_encoding not in {"utf_16", "utf_32"}
+
+
+def iana_name(cp_name: str, strict: bool = True) -> str:
+    cp_name = cp_name.lower().replace("-", "_")
+
+    encoding_alias: str
+    encoding_iana: str
+
+    for encoding_alias, encoding_iana in aliases.items():
+        if cp_name in [encoding_alias, encoding_iana]:
+            return encoding_iana
+
+    if strict:
+        raise ValueError("Unable to retrieve IANA for '{}'".format(cp_name))
+
+    return cp_name
+
+
+def range_scan(decoded_sequence: str) -> List[str]:
+    ranges: Set[str] = set()
+
+    for character in decoded_sequence:
+        character_range: Optional[str] = unicode_range(character)
+
+        if character_range is None:
+            continue
+
+        ranges.add(character_range)
+
+    return list(ranges)
+
+
+def cp_similarity(iana_name_a: str, iana_name_b: str) -> float:
+
+    if is_multi_byte_encoding(iana_name_a) or is_multi_byte_encoding(iana_name_b):
+        return 0.0
+
+    decoder_a = importlib.import_module(
+        "encodings.{}".format(iana_name_a)
+    ).IncrementalDecoder
+    decoder_b = importlib.import_module(
+        "encodings.{}".format(iana_name_b)
+    ).IncrementalDecoder
+
+    id_a: IncrementalDecoder = decoder_a(errors="ignore")
+    id_b: IncrementalDecoder = decoder_b(errors="ignore")
+
+    character_match_count: int = 0
+
+    for i in range(255):
+        to_be_decoded: bytes = bytes([i])
+        if id_a.decode(to_be_decoded) == id_b.decode(to_be_decoded):
+            character_match_count += 1
+
+    return character_match_count / 254
+
+
+def is_cp_similar(iana_name_a: str, iana_name_b: str) -> bool:
+    """
+    Determine if two code pages are at least 80% similar. The IANA_SUPPORTED_SIMILAR dict was generated using
+    the function cp_similarity.
+    """
+    return (
+        iana_name_a in IANA_SUPPORTED_SIMILAR
+        and iana_name_b in IANA_SUPPORTED_SIMILAR[iana_name_a]
+    )
+
+
+def set_logging_handler(
+    name: str = "charset_normalizer",
+    level: int = logging.INFO,
+    format_string: str = "%(asctime)s | %(levelname)s | %(message)s",
+) -> None:
+
+    logger = logging.getLogger(name)
+    logger.setLevel(level)
+
+    handler = logging.StreamHandler()
+    handler.setFormatter(logging.Formatter(format_string))
+    logger.addHandler(handler)
+
+
+def cut_sequence_chunks(
+    sequences: bytes,
+    encoding_iana: str,
+    offsets: range,
+    chunk_size: int,
+    bom_or_sig_available: bool,
+    strip_sig_or_bom: bool,
+    sig_payload: bytes,
+    is_multi_byte_decoder: bool,
+    decoded_payload: Optional[str] = None,
+) -> Generator[str, None, None]:
+
+    if decoded_payload and is_multi_byte_decoder is False:
+        for i in offsets:
+            chunk = decoded_payload[i : i + chunk_size]
+            if not chunk:
+                break
+            yield chunk
+    else:
+        for i in offsets:
+            chunk_end = i + chunk_size
+            if chunk_end > len(sequences) + 8:
+                continue
+
+            cut_sequence = sequences[i : i + chunk_size]
+
+            if bom_or_sig_available and strip_sig_or_bom is False:
+                cut_sequence = sig_payload + cut_sequence
+
+            chunk = cut_sequence.decode(
+                encoding_iana,
+                errors="ignore" if is_multi_byte_decoder else "strict",
+            )
+
+            # multi-byte bad cutting detector and adjustment
+            # not the cleanest way to perform that fix, but clever enough for now.
+ if is_multi_byte_decoder and i > 0 and sequences[i] >= 0x80: + + chunk_partial_size_chk: int = min(chunk_size, 16) + + if ( + decoded_payload + and chunk[:chunk_partial_size_chk] not in decoded_payload + ): + for j in range(i, i - 4, -1): + cut_sequence = sequences[j:chunk_end] + + if bom_or_sig_available and strip_sig_or_bom is False: + cut_sequence = sig_payload + cut_sequence + + chunk = cut_sequence.decode(encoding_iana, errors="ignore") + + if chunk[:chunk_partial_size_chk] in decoded_payload: + break + + yield chunk diff --git a/env/lib/python3.8/site-packages/charset_normalizer/version.py b/env/lib/python3.8/site-packages/charset_normalizer/version.py new file mode 100644 index 00000000..64c0dbde --- /dev/null +++ b/env/lib/python3.8/site-packages/charset_normalizer/version.py @@ -0,0 +1,6 @@ +""" +Expose version +""" + +__version__ = "2.1.1" +VERSION = __version__.split(".") diff --git a/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/INSTALLER b/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/LICENSE b/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/LICENSE new file mode 100644 index 00000000..3f0aed7f --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/LICENSE @@ -0,0 +1,20 @@ +Copyright (c) 2013 Sébastien Eustace + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/METADATA b/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/METADATA new file mode 100644 index 00000000..d1ea8597 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/METADATA @@ -0,0 +1,489 @@ +Metadata-Version: 2.1 +Name: cleo +Version: 1.0.0a5 +Summary: Cleo allows you to create beautiful and testable command-line interfaces. 
+Home-page: https://github.com/python-poetry/cleo
+License: MIT
+Keywords: cli,commands
+Author: Sébastien Eustace
+Author-email: sebastien@eustace.io
+Requires-Python: >=3.7,<4.0
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Requires-Dist: crashtest (>=0.3.1,<0.4.0)
+Requires-Dist: pylev (>=1.3.0,<2.0.0)
+Description-Content-Type: text/markdown
+
+# Cleo
+
+[![Tests](https://github.com/python-poetry/cleo/actions/workflows/tests.yml/badge.svg)](https://github.com/python-poetry/cleo/actions/workflows/tests.yml)
+[![PyPI version](https://img.shields.io/pypi/v/cleo)](https://pypi.org/project/cleo/)
+
+Create beautiful and testable command-line interfaces.
+
+## Resources
+
+- [Documentation](http://cleo.readthedocs.io)
+- [Issue Tracker](https://github.com/python-poetry/cleo/issues)
+
+## Usage
+
+To make a command that greets you from the command line, create
+`greet_command.py` and add the following to it:
+
+```python
+from cleo.commands.command import Command
+
+
+class GreetCommand(Command):
+    """
+    Greets someone
+
+    greet
+        {name? : Who do you want to greet?}
+        {--y|yell : If set, the task will yell in uppercase letters}
+    """
+
+    def handle(self):
+        name = self.argument("name")
+
+        if name:
+            text = f"Hello {name}"
+        else:
+            text = "Hello"
+
+        if self.option("yell"):
+            text = text.upper()
+
+        self.line(text)
+```
+
+You also need to create the file `application.py`, to be run at the command
+line, which creates an `Application` and adds commands to it:
+
+```python
+#!/usr/bin/env python
+
+from greet_command import GreetCommand
+
+from cleo.application import Application
+
+
+application = Application()
+application.add(GreetCommand())
+
+if __name__ == "__main__":
+    application.run()
+```
+
+Test the new command by running the following:
+
+```bash
+$ python application.py greet John
+```
+
+This will print the following to the command line:
+
+```text
+Hello John
+```
+
+You can also use the `--yell` option to make everything uppercase:
+
+```bash
+$ python application.py greet John --yell
+```
+
+This prints:
+
+```text
+HELLO JOHN
+```
+
+As you may have already seen, Cleo uses the command docstring to
+determine the command definition. The docstring must be in the following
+form:
+
+```python
+"""
+Command description
+
+Command signature
+"""
+```
+
+The signature takes the following form:
+
+```python
+"""
+command:name {argument : Argument description} {--option : Option description}
+"""
+```
+
+The signature can span multiple lines.
+
+```python
+"""
+command:name
+    {argument : Argument description}
+    {--option : Option description}
+"""
+```
+
+### Coloring the Output
+
+Whenever you output text, you can surround the text with tags to color
+its output. For example:
+
+```python
+# green text
+self.line("<fg=green>foo</>")
+
+# yellow text
+self.line("<fg=yellow>foo</>")
+
+# black text on a cyan background
+self.line("<fg=black;bg=cyan>foo</>")
+
+# white text on a red background
+self.line("<error>foo</error>")
+```
+
+The closing tag can be replaced by `</>`, which revokes all formatting
+options established by the last opened tag.
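+
+As a minimal sketch of that reset behavior (assuming the same `fg` tag
+syntax shown above), a tag can be closed mid-line without repeating its
+name:
+
+```python
+# "foo" is rendered green; "bar" falls back to the default style
+self.line("<fg=green>foo</> bar")
+```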
+
+It is possible to define your own styles using the `add_style()` method:
+
+```python
+self.add_style("fire", fg="red", bg="yellow", options=["bold", "blink"])
+self.line("<fire>foo</fire>")
+```
+
+Available foreground and background colors are: `black`, `red`, `green`,
+`yellow`, `blue`, `magenta`, `cyan` and `white`.
+
+And available options are: `bold`, `underscore`, `blink`, `reverse` and
+`conceal`.
+
+You can also set these colors and options inside the tag name:
+
+```python
+# green text
+self.line("<fg=green>foo</>")
+
+# black text on a cyan background
+self.line("<fg=black;bg=cyan>foo</>")
+
+# bold text on a yellow background
+self.line("<bg=yellow;options=bold>foo</>")
+```
+
+### Verbosity Levels
+
+Cleo has five verbosity levels. These are defined in the `Output` class:
+
+| Mode                     | Meaning                            | Console option    |
+| ------------------------ | ---------------------------------- | ----------------- |
+| `Verbosity.QUIET`        | Do not output any messages         | `-q` or `--quiet` |
+| `Verbosity.NORMAL`       | The default verbosity level        | (none)            |
+| `Verbosity.VERBOSE`      | Increased verbosity of messages    | `-v`              |
+| `Verbosity.VERY_VERBOSE` | Informative non-essential messages | `-vv`             |
+| `Verbosity.DEBUG`        | Debug messages                     | `-vvv`            |
+
+It is possible to print a message in a command for only a specific
+verbosity level. For example:
+
+```python
+if Verbosity.VERBOSE <= self.io.verbosity:
+    self.line(...)
+```
+
+There are also more semantic methods you can use to test for each of the
+verbosity levels:
+
+```python
+if self.output.is_quiet():
+    # ...
+
+if self.output.is_verbose():
+    # ...
+```
+
+You can also pass the verbosity flag directly to `line()`.
+
+```python
+self.line("", verbosity=Verbosity.VERBOSE)
+```
+
+When the quiet level is used, all output is suppressed.
+
+### Using Arguments
+
+The most interesting parts of commands are the arguments and options
+that you can make available. Arguments are the strings - separated by
+spaces - that come after the command name itself. They are ordered, and
+can be optional or required. For example, add an optional `last_name`
+argument to the command and make the `name` argument required:
+
+```python
+class GreetCommand(Command):
+    """
+    Greets someone
+
+    greet
+        {name : Who do you want to greet?}
+        {last_name? : Your last name?}
+        {--y|yell : If set, the task will yell in uppercase letters}
+    """
+```
+
+You now have access to a `last_name` argument in your command:
+
+```python
+last_name = self.argument("last_name")
+if last_name:
+    text += f" {last_name}"
+```
+
+The command can now be used in either of the following ways:
+
+```bash
+$ python application.py greet John
+$ python application.py greet John Doe
+```
+
+It is also possible to let an argument take a list of values (imagine
+you want to greet all your friends).
+For this it must be specified at
+the end of the argument list:
+
+```python
+class GreetCommand(Command):
+    """
+    Greets someone
+
+    greet
+        {names* : Who do you want to greet?}
+        {--y|yell : If set, the task will yell in uppercase letters}
+    """
+```
+
+To use this, just specify as many names as you want:
+
+```bash
+$ python application.py greet John Jane
+```
+
+You can access the `names` argument as a list:
+
+```python
+names = self.argument("names")
+if names:
+    text = "Hello " + ", ".join(names)
+```
+
+There are 3 argument variants you can use:
+
+| Mode     | Notation                            | Value                                                                                                    |
+| -------- | ----------------------------------- | -------------------------------------------------------------------------------------------------------- |
+| Required | none (just write the argument name) | The argument is required                                                                                 |
+| Optional | `argument?`                         | The argument is optional and therefore can be omitted                                                    |
+| List     | `argument*`                         | The argument can contain an indefinite number of values and must be used at the end of the argument list |
+
+You can combine them like this:
+
+```python
+class GreetCommand(Command):
+    """
+    Greets someone
+
+    greet
+        {names?* : Who do you want to greet?}
+        {--y|yell : If set, the task will yell in uppercase letters}
+    """
+```
+
+If you want to set a default value, you can do it like so:
+
+```text
+argument=default
+```
+
+The argument will then be considered optional.
+
+### Using Options
+
+Unlike arguments, options are not ordered (meaning you can specify them
+in any order) and are specified with two dashes (e.g. `--yell` - you can
+also declare a one-letter shortcut that you can call with a single dash
+like `-y`). Options are _always_ optional, and can be set up to accept a
+value (e.g. `--dir=src`) or simply as a boolean flag without a value
+(e.g. `--yell`).
+
+> _Tip_: It is also possible to make an option _optionally_ accept a value (so
+> that `--yell` or `--yell=loud` work). Options can also be configured to
+> accept a list of values.
+
+For example, add a new option to the command that can be used to specify
+how many times in a row the message should be printed:
+
+```python
+class GreetCommand(Command):
+    """
+    Greets someone
+
+    greet
+        {name? : Who do you want to greet?}
+        {--y|yell : If set, the task will yell in uppercase letters}
+        {--iterations=1 : How many times should the message be printed?}
+    """
+```
+
+Next, use this in the command to print the message multiple times:
+
+```python
+for _ in range(0, int(self.option("iterations"))):
+    self.line(text)
+```
+
+Now, when you run the task, you can optionally specify a `--iterations`
+flag:
+
+```bash
+$ python application.py greet John
+$ python application.py greet John --iterations=5
+```
+
+The first example will only print once, since `iterations` is empty and
+defaults to `1`. The second example will print five times.
+
+Recall that options don't care about their order. So, either of the
+following will work:
+
+```bash
+$ python application.py greet John --iterations=5 --yell
+$ python application.py greet John --yell --iterations=5
+```
+
+There are 4 option variants you can use:
+
+| Option         | Notation     | Value                                                                               |
+| -------------- | ------------ | ----------------------------------------------------------------------------------- |
+| List           | `--option=*` | This option accepts multiple values (e.g. `--dir=/foo --dir=/bar`)                  |
+| Flag           | `--option`   | Do not accept input for this option (e.g. `--yell`)                                 |
+| Requires value | `--option=`  | This value is required (e.g. `--iterations=5`); the option itself is still optional |
+| Optional value | `--option=?` | This option may or may not have a value (e.g. `--yell` or `--yell=loud`)            |
+
+You can combine them like this:
+
+```python
+class GreetCommand(Command):
+    """
+    Greets someone
+
+    greet
+        {name? : Who do you want to greet?}
+        {--y|yell : If set, the task will yell in uppercase letters}
+        {--iterations=?*1 : How many times should the message be printed?}
+    """
+```
+
+### Testing Commands
+
+Cleo provides several tools to help you test your commands. The most
+useful one is the `CommandTester` class. It uses a special IO class to
+ease testing without a real console:
+
+```python
+from greet_command import GreetCommand
+
+from cleo.application import Application
+from cleo.testers.command_tester import CommandTester
+
+
+def test_execute():
+    application = Application()
+    application.add(GreetCommand())
+
+    command = application.find("greet")
+    command_tester = CommandTester(command)
+    command_tester.execute()
+
+    assert "..." == command_tester.io.fetch_output()
+```
+
+The `CommandTester.io.fetch_output()` method returns what would have
+been displayed during a normal call from the console.
+`CommandTester.io.fetch_error()` is also available to get what has
+been written to stderr.
+
+You can test sending arguments and options to the command by passing
+them as a string to the `CommandTester.execute()` method:
+
+```python
+from greet_command import GreetCommand
+
+from cleo.application import Application
+from cleo.testers.command_tester import CommandTester
+
+
+def test_execute():
+    application = Application()
+    application.add(GreetCommand())
+
+    command = application.find("greet")
+    command_tester = CommandTester(command)
+    command_tester.execute("John")
+
+    assert "John" in command_tester.io.fetch_output()
+```
+
+You can also test a whole console application by using the
+`ApplicationTester` class.
+
+### Calling an existing Command
+
+If a command depends on another one being run before it, instead of
+asking the user to remember the order of execution, you can call it
+directly yourself. This is also useful if you want to create a "meta"
+command that just runs a bunch of other commands.
+
+Calling a command from another one is straightforward:
+
+```python
+def handle(self):
+    return_code = self.call("greet", "John --yell")
+    return return_code
+```
+
+If you want to suppress the output of the executed command, you can use
+the `call_silent()` method instead.
+
+### Autocompletion
+
+Cleo supports automatic (tab) completion in `bash`, `zsh` and `fish`.
+
+By default, your application will have a `completions` command.
To register these completions for your application, run one of the following in a terminal (replacing `[program]` with the command you use to run your application): + +```bash +# Bash +[program] completions bash | sudo tee /etc/bash_completion.d/[program].bash-completion + +# Bash - macOS/Homebrew (requires `brew install bash-completion`) +[program] completions bash > $(brew --prefix)/etc/bash_completion.d/[program].bash-completion + +# Zsh +mkdir ~/.zfunc +echo "fpath+=~/.zfunc" >> ~/.zshrc +[program] completions zsh > ~/.zfunc/_[program] + +# Zsh - macOS/Homebrew +[program] completions zsh > $(brew --prefix)/share/zsh/site-functions/_[program] + +# Fish +[program] completions fish > ~/.config/fish/completions/[program].fish +``` + diff --git a/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/RECORD b/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/RECORD new file mode 100644 index 00000000..b081926e --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/RECORD @@ -0,0 +1,151 @@ +cleo-1.0.0a5.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +cleo-1.0.0a5.dist-info/LICENSE,sha256=orng86NcVUmwuC7avMDiKXIUN5lNJlzKlhhen5nzppg,1061 +cleo-1.0.0a5.dist-info/METADATA,sha256=iSQmoU7q4EEgsrqgIS1wkPOiLmT_1dCtO8TFFbWanIk,13776 +cleo-1.0.0a5.dist-info/RECORD,, +cleo-1.0.0a5.dist-info/WHEEL,sha256=FWzWAvOOsDP6aqtZj_z3pIhVEk8S9F03Xwafa0tLqIY,90 +cleo/__init__.py,sha256=4vUCn9zLBy-cQ91Jo0aumQDyXd8m2W0z0O7jYbxIR6s,61 +cleo/__pycache__/__init__.cpython-38.pyc,, +cleo/__pycache__/_compat.cpython-38.pyc,, +cleo/__pycache__/_utils.cpython-38.pyc,, +cleo/__pycache__/application.cpython-38.pyc,, +cleo/__pycache__/color.cpython-38.pyc,, +cleo/__pycache__/cursor.cpython-38.pyc,, +cleo/__pycache__/helpers.cpython-38.pyc,, +cleo/__pycache__/parser.cpython-38.pyc,, +cleo/__pycache__/terminal.cpython-38.pyc,, +cleo/_compat.py,sha256=u78g05pp08qeiYpzyGubSCg28tpe3zD2n0zq3gPtlWs,247 +cleo/_utils.py,sha256=yScgKRtkxXlcYCY6A2XrAfJ_P99VPdOQAv9DkBgCoGE,2471 +cleo/application.py,sha256=7W0BAx5lcMKB96NibNqfr5dn3OB6VH0-ZsaP0nriRZY,19787 +cleo/color.py,sha256=oH8GP6uUun_cRvtpYiXM4NBWjgVFLz-X4tWsSSJqL6I,4190 +cleo/commands/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/commands/__pycache__/__init__.cpython-38.pyc,, +cleo/commands/__pycache__/base_command.cpython-38.pyc,, +cleo/commands/__pycache__/command.cpython-38.pyc,, +cleo/commands/__pycache__/completions_command.cpython-38.pyc,, +cleo/commands/__pycache__/help_command.cpython-38.pyc,, +cleo/commands/__pycache__/list_command.cpython-38.pyc,, +cleo/commands/base_command.py,sha256=-rVKL4ZoKew504CSA6UaHlcUJdpR6FOd5H6SaOeEtTs,3888 +cleo/commands/command.py,sha256=Gm4HnxrDd5PGqQ2o2ZYbJHZgdu_HDDsurl2pMtirORI,9734 +cleo/commands/completions/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/commands/completions/__pycache__/__init__.cpython-38.pyc,, +cleo/commands/completions/__pycache__/templates.cpython-38.pyc,, +cleo/commands/completions/templates.py,sha256=1fRUqkgEUwdOhShGcANkUqrmprHEnPBCCJ0KNsfe4ss,2209 +cleo/commands/completions_command.py,sha256=050FC27ZGk9PQY4Fy3iGiMXLaUiPzWsb-oWDKTh86A4,14100 +cleo/commands/help_command.py,sha256=9gLHq-mPQfRxBaenD6CMuIa9Ja9rMx9Fa5H9jRS6QHc,1181 +cleo/commands/list_command.py,sha256=KmoBGAioRBMgOtu2JsqVt60pkH3lscDd40oi0WTroro,772 +cleo/cursor.py,sha256=y5IIS0-w36gsKlcGMinzdo51Xn0XNqDzCCn-hoozO88,2352 +cleo/descriptors/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/descriptors/__pycache__/__init__.cpython-38.pyc,, 
+cleo/descriptors/__pycache__/application_description.cpython-38.pyc,, +cleo/descriptors/__pycache__/descriptor.cpython-38.pyc,, +cleo/descriptors/__pycache__/text_descriptor.cpython-38.pyc,, +cleo/descriptors/application_description.py,sha256=i_l-oaoPnnORCWJayQJSEzV1_XIajoNaHtrypH8QX1g,2795 +cleo/descriptors/descriptor.py,sha256=hKjEbfY7Ny63N-UXXbBYd3sezTCUSfT5AniMMNutSoo,1805 +cleo/descriptors/text_descriptor.py,sha256=TzyL2J4mCcXaYyKx_1dAHGvDkd_wqfX_K6ClmD-CoxQ,9625 +cleo/events/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/events/__pycache__/__init__.cpython-38.pyc,, +cleo/events/__pycache__/console_command_event.cpython-38.pyc,, +cleo/events/__pycache__/console_error_event.cpython-38.pyc,, +cleo/events/__pycache__/console_event.cpython-38.pyc,, +cleo/events/__pycache__/console_events.cpython-38.pyc,, +cleo/events/__pycache__/console_signal_event.cpython-38.pyc,, +cleo/events/__pycache__/console_terminate_event.cpython-38.pyc,, +cleo/events/__pycache__/event.cpython-38.pyc,, +cleo/events/__pycache__/event_dispatcher.cpython-38.pyc,, +cleo/events/console_command_event.py,sha256=mN_KUXGTPrplBz3ocXbS9r8jWMzaC42sicKEJewOyCM,833 +cleo/events/console_error_event.py,sha256=Szyt2_N6_ONcC8oN8zRUs4Mz02h_gRF3lD7ehvX7l1c,1113 +cleo/events/console_event.py,sha256=BgiMusr6UmnNJO2HcSqWSqvF3-_uhjLB9l1IsZAgbk8,591 +cleo/events/console_events.py,sha256=F2n-kfRK1SwY-Pz8AigxBpDxjmIUbPPZBezAh_fZOQM,685 +cleo/events/console_signal_event.py,sha256=tlJrdHeasf52nvh7SnE50U7M-7C1Sx0OJC4HaxL3vec,633 +cleo/events/console_terminate_event.py,sha256=-1KLcWykW3FHcJSOD5o_F4mdUBuUUiQWRS0Tn4TQUUw,663 +cleo/events/event.py,sha256=EaugliZMxk7CCbEcQJ8vKvk-d8nTIzH_7NSQpcQLXjI,321 +cleo/events/event_dispatcher.py,sha256=60LkUH2WELS49mj_pcvmR8eWAZUMcRw3kxWIJUHp468,2871 +cleo/exceptions/__init__.py,sha256=hmSgdBArIetubZpZGtTm8YvJjkT04zM6CgJQJsM-df0,1751 +cleo/exceptions/__pycache__/__init__.cpython-38.pyc,, +cleo/formatters/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/formatters/__pycache__/__init__.cpython-38.pyc,, +cleo/formatters/__pycache__/formatter.cpython-38.pyc,, +cleo/formatters/__pycache__/style.cpython-38.pyc,, +cleo/formatters/__pycache__/style_stack.cpython-38.pyc,, +cleo/formatters/formatter.py,sha256=GeBR-H__wwwzjUJXuu6Ry05mBvonhxmqdyXWHJwkJU4,6382 +cleo/formatters/style.py,sha256=pY0CM9IY9fxr8VMIJsyxNeJXtnSinPlre6LjM-R8tSQ,2381 +cleo/formatters/style_stack.py,sha256=nmLMePA6YLUmOp-981rpJ2AmAyF_j7j_DWH4N7Fyfl8,1088 +cleo/helpers.py,sha256=udUL9Xu0mWeXBhZqvE7gVjA23dWn6qe0pGTgw9NmOMo,912 +cleo/io/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/io/__pycache__/__init__.cpython-38.pyc,, +cleo/io/__pycache__/buffered_io.cpython-38.pyc,, +cleo/io/__pycache__/io.cpython-38.pyc,, +cleo/io/__pycache__/null_io.cpython-38.pyc,, +cleo/io/buffered_io.py,sha256=S40bmqznneUf4EbtQQUgf22cGEe0v7AfXdXsZHYW9Vc,1504 +cleo/io/inputs/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/io/inputs/__pycache__/__init__.cpython-38.pyc,, +cleo/io/inputs/__pycache__/argument.cpython-38.pyc,, +cleo/io/inputs/__pycache__/argv_input.cpython-38.pyc,, +cleo/io/inputs/__pycache__/definition.cpython-38.pyc,, +cleo/io/inputs/__pycache__/input.cpython-38.pyc,, +cleo/io/inputs/__pycache__/option.cpython-38.pyc,, +cleo/io/inputs/__pycache__/string_input.cpython-38.pyc,, +cleo/io/inputs/__pycache__/token_parser.cpython-38.pyc,, +cleo/io/inputs/argument.py,sha256=9I3BLecMnzCN5npm3Xp-4vaxe6fNcjyQOpa7Mco7-Y8,1753 
+cleo/io/inputs/argv_input.py,sha256=1emuUUtGuWjNaPI4hfSEZswSd5Vkhjo_6ww0f12p_o8,9853 +cleo/io/inputs/definition.py,sha256=lSHCGs5HSZh-SMOQag-5suCfAKgaYQvrDaiyCjb3i_U,6757 +cleo/io/inputs/input.py,sha256=7J_Y0LPLbgcUqXz_7CFgliOHo-MQKUI2S7l6xKF8pNA,5124 +cleo/io/inputs/option.py,sha256=HWLdwLUV99ZF_IqZqk7zvKNLT1kAN3oRKIZUe1NUshc,2497 +cleo/io/inputs/string_input.py,sha256=106yPcXpkzZOq8OmSbiDZeBPRwcEp6xxuFcIIFXLoPY,445 +cleo/io/inputs/token_parser.py,sha256=8yxTPZVmlEnNHHAfwo11-aw7aJyGX1umqVEnqLn3Ljc,2858 +cleo/io/io.py,sha256=Bxa1DjYipWm6cdKS5V6Cgul5g9JP0qf0sX5fkKkL6rQ,4201 +cleo/io/null_io.py,sha256=aoXu_KiqG71tN3b9S99SBA4Sit3DX8jgtew-mRPAJk8,465 +cleo/io/outputs/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/io/outputs/__pycache__/__init__.cpython-38.pyc,, +cleo/io/outputs/__pycache__/buffered_output.cpython-38.pyc,, +cleo/io/outputs/__pycache__/null_output.cpython-38.pyc,, +cleo/io/outputs/__pycache__/output.cpython-38.pyc,, +cleo/io/outputs/__pycache__/section_output.cpython-38.pyc,, +cleo/io/outputs/__pycache__/stream_output.cpython-38.pyc,, +cleo/io/outputs/buffered_output.py,sha256=6_FV2pKNytUgtwkElRxtTf4Z-7SJAiFTxysRSftdnmw,1328 +cleo/io/outputs/null_output.py,sha256=uH7QuZyaMYUITgiuj-khs0jHqpnPPEzTNjsV2CmGFCg,1309 +cleo/io/outputs/output.py,sha256=2Lg6TXRbcCLIfXLflr1lQiPxauVlkKtjj1x3RVhXU7E,3207 +cleo/io/outputs/section_output.py,sha256=LI9SLXuZJEO9cLRbclKxeAPVpPKCeegKMpAAc7xYMyQ,3065 +cleo/io/outputs/stream_output.py,sha256=BspfNRQzO8p4AOnQ4qnLxEifaHtvqDLoKcnPH9BjEWY,4309 +cleo/loaders/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/loaders/__pycache__/__init__.cpython-38.pyc,, +cleo/loaders/__pycache__/command_loader.cpython-38.pyc,, +cleo/loaders/__pycache__/factory_command_loader.cpython-38.pyc,, +cleo/loaders/command_loader.py,sha256=oIGVCii45aRIf7bbC6TbQqX8nFPXIN2KZ5MYpx9M-9o,578 +cleo/loaders/factory_command_loader.py,sha256=XbKW1maO4VExWKACiBCxZJNgISZE7JR-xDHYJhberAM,854 +cleo/parser.py,sha256=U3bdTy1l7R9jjBB-ist3OOf2zNlZIG5qlHJE5QMz1y4,5015 +cleo/terminal.py,sha256=mSgYxvnBMRlOZW1ll6gyPI1jixND5Zyk0wDby52VWoM,3931 +cleo/testers/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/testers/__pycache__/__init__.cpython-38.pyc,, +cleo/testers/__pycache__/application_tester.cpython-38.pyc,, +cleo/testers/__pycache__/command_tester.cpython-38.pyc,, +cleo/testers/application_tester.py,sha256=UsoospX3KuJ5dIDJgbrzuyMDBzz4XHRUPgv-rl8apYo,1852 +cleo/testers/command_tester.py,sha256=rN17IbFFb8QLo5n3V5KRfyY0rJkP3g28rFXNRARYklg,2288 +cleo/ui/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cleo/ui/__pycache__/__init__.cpython-38.pyc,, +cleo/ui/__pycache__/choice_question.cpython-38.pyc,, +cleo/ui/__pycache__/component.cpython-38.pyc,, +cleo/ui/__pycache__/confirmation_question.cpython-38.pyc,, +cleo/ui/__pycache__/exception_trace.cpython-38.pyc,, +cleo/ui/__pycache__/progress_bar.cpython-38.pyc,, +cleo/ui/__pycache__/progress_indicator.cpython-38.pyc,, +cleo/ui/__pycache__/question.cpython-38.pyc,, +cleo/ui/__pycache__/table.cpython-38.pyc,, +cleo/ui/__pycache__/table_cell.cpython-38.pyc,, +cleo/ui/__pycache__/table_cell_style.cpython-38.pyc,, +cleo/ui/__pycache__/table_separator.cpython-38.pyc,, +cleo/ui/__pycache__/table_style.cpython-38.pyc,, +cleo/ui/__pycache__/ui.cpython-38.pyc,, +cleo/ui/choice_question.py,sha256=wTAQvzk6d0hr8HyLJEnuw5riSsjsi_Vvv-euBuQEy-s,4573 +cleo/ui/component.py,sha256=wiJ6S7sT48q-TJjnYANqW8mIPG60OdabTMxelbOaVOY,71 
+cleo/ui/confirmation_question.py,sha256=XqlSaZpE0WW8yKkivrX8GIlPW7VSpdXFwIs5h2TJSuk,1173 +cleo/ui/exception_trace.py,sha256=3USOFwtb-t47yMr7wfIVzg8pq72rmNpDT7sU4GSOP2A,15535 +cleo/ui/progress_bar.py,sha256=i9X4Rtc-ZLU--RW5ojaGb34zd0XhdSYeet0W5q7gPEk,13030 +cleo/ui/progress_indicator.py,sha256=EAFKIlOVyCcvsnSSHVmH5YQroBigUJKDON25WDb6MRM,5461 +cleo/ui/question.py,sha256=ZvrhsMWBhqsMXkeJxBE6C_XfcDKqYQUQUHQg3bQ1Ewo,7692 +cleo/ui/table.py,sha256=J-V90iUJjjEHYnrfrqUaizZS5NASW3fbNZ-eYTq9bVY,22915 +cleo/ui/table_cell.py,sha256=7M-cUm5yOqlIZdnrZtFuJpClc4RWelwoeWad7j3HsKw,882 +cleo/ui/table_cell_style.py,sha256=3jgXTw2M2GW0TxwoTpiCW_8D2hpdUggZxDBTRVn-XDU,891 +cleo/ui/table_separator.py,sha256=PU3dteo7SpJVg2DiqSjqWwjEecHd_xSWzvh1RkWU8vM,173 +cleo/ui/table_style.py,sha256=sZTpoAFX3b6NN6ceTWzyjiueEZINzMJnMMB133jpbZ0,10763 +cleo/ui/ui.py,sha256=emAfpv7CQKR-UpRKkIHaXhSFzFNVevGLovuqiBVr3is,980 diff --git a/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/WHEEL b/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/WHEEL new file mode 100644 index 00000000..2d3103db --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo-1.0.0a5.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: poetry-core 1.1.0b2 +Root-Is-Purelib: true +Tag: py3-none-any diff --git a/env/lib/python3.8/site-packages/cleo/__init__.py b/env/lib/python3.8/site-packages/cleo/__init__.py new file mode 100644 index 00000000..82ac1e3f --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/__init__.py @@ -0,0 +1,4 @@ +from __future__ import annotations + + +__version__ = "1.0.0a5" diff --git a/env/lib/python3.8/site-packages/cleo/_compat.py b/env/lib/python3.8/site-packages/cleo/_compat.py new file mode 100644 index 00000000..82636122 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/_compat.py @@ -0,0 +1,15 @@ +from __future__ import annotations + +import shlex +import subprocess +import sys + + +WINDOWS = sys.platform == "win32" + + +def shell_quote(token: str) -> str: + if WINDOWS: + return subprocess.list2cmdline([token]) + + return shlex.quote(token) diff --git a/env/lib/python3.8/site-packages/cleo/_utils.py b/env/lib/python3.8/site-packages/cleo/_utils.py new file mode 100644 index 00000000..527a6f91 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/_utils.py @@ -0,0 +1,106 @@ +from __future__ import annotations + +import math + +from html.parser import HTMLParser +from typing import Any + +from pylev import levenshtein + + +class TagStripper(HTMLParser): + def __init__(self) -> None: + super().__init__(convert_charrefs=False) + + self.reset() + self.fed = [] + + def handle_data(self, d) -> None: + self.fed.append(d) + + def handle_entityref(self, name) -> None: + self.fed.append("&%s;" % name) + + def handle_charref(self, name) -> None: + self.fed.append("&#%s;" % name) + + def get_data(self) -> str: + return "".join(self.fed) + + +def _strip(value) -> str: + s = TagStripper() + s.feed(value) + s.close() + + return s.get_data() + + +def strip_tags(value: Any) -> str: + value = str(value) + while "<" in value and ">" in value: + new_value = _strip(value) + if value.count("<") == new_value.count("<"): + break + + value = new_value + + return value + + +def find_similar_names(name: str, names: list[str]) -> list[str]: + """ + Finds names similar to a given command name. 
+ """ + threshold = 1e3 + distance_by_name = {} + suggested_names = [] + + for actual_name in names: + # Get Levenshtein distance between the input and each command name + distance = levenshtein(name, actual_name) + + is_similar = distance <= len(name) / 3 + is_sub_string = actual_name.find(name) != -1 + + if is_similar or is_sub_string: + distance_by_name[actual_name] = ( + distance, + actual_name.find(name) if is_sub_string else float("inf"), + ) + + # Only keep results with a distance below the threshold + distance_by_name = { + k: v for k, v in distance_by_name.items() if v[0] < 2 * threshold + } + + # Display results with shortest distance first + for k, _v in sorted(distance_by_name.items(), key=lambda i: (i[1][0], i[1][1])): + if k not in suggested_names: + suggested_names.append(k) + + return suggested_names + + +_TIME_FORMATS = [ + (0, "< 1 sec"), + (2, "1 sec"), + (59, "secs", 1), + (60, "1 min"), + (3600, "mins", 60), + (5400, "1 hr"), + (86400, "hrs", 3600), + (129600, "1 day"), + (604800, "days", 86400), +] + + +def format_time(secs: float) -> str: + for fmt in _TIME_FORMATS: + if secs > fmt[0]: + continue + + if len(fmt) == 2: + return fmt[1] + + return f"{math.ceil(secs / fmt[2])} {fmt[1]}" diff --git a/env/lib/python3.8/site-packages/cleo/application.py b/env/lib/python3.8/site-packages/cleo/application.py new file mode 100644 index 00000000..8c6683db --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/application.py @@ -0,0 +1,633 @@ +from __future__ import annotations + +import os +import re +import sys + +from contextlib import suppress +from typing import TYPE_CHECKING +from typing import cast + +from cleo.commands.completions_command import CompletionsCommand +from cleo.commands.help_command import HelpCommand +from cleo.commands.list_command import ListCommand +from cleo.events.console_command_event import ConsoleCommandEvent +from cleo.events.console_error_event import ConsoleErrorEvent +from cleo.events.console_events import COMMAND +from cleo.events.console_events import ERROR +from cleo.events.console_events import TERMINATE +from cleo.events.console_terminate_event import ConsoleTerminateEvent +from cleo.exceptions import CleoException +from cleo.exceptions import CleoSimpleException +from cleo.exceptions import CommandNotFoundException +from cleo.exceptions import LogicException +from cleo.exceptions import NamespaceNotFoundException +from cleo.io.inputs.argument import Argument +from cleo.io.inputs.argv_input import ArgvInput +from cleo.io.inputs.definition import Definition +from cleo.io.inputs.option import Option +from cleo.io.io import IO +from cleo.io.outputs.output import Verbosity +from cleo.io.outputs.stream_output import StreamOutput +from cleo.terminal import Terminal +from cleo.ui.ui import UI + + +if TYPE_CHECKING: + from crashtest.solution_providers.solution_provider_repository import ( + SolutionProviderRepository, + ) + + from cleo.commands.command import Command + from cleo.events.event_dispatcher import EventDispatcher + from cleo.io.inputs.input import Input + from cleo.io.outputs.output import Output + from cleo.loaders.command_loader import CommandLoader + + +class Application: + """ + An Application is the container for a collection of commands. + + This class is optimized for a standard CLI environment. 
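+
+    Commands are registered with ``add()`` (or lazily through a command
+    loader) and are looked up by name when the application runs.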
+ + Usage: + >>> app = Application('myapp', '1.0 (stable)') + >>> app.add(Command()) + >>> app.run() + """ + + def __init__(self, name: str = "console", version: str = "") -> None: + self._name = name + self._version = version + self._display_name: str | None = None + self._terminal = Terminal() + self._default_command = "list" + self._single_command = False + self._commands: dict[str, Command] = {} + self._running_command = None + self._want_helps = False + self._definition: Definition | None = None + self._catch_exceptions = True + self._auto_exit = True + self._initialized = False + self._ui: UI | None = None + + # TODO: signals support + self._event_dispatcher: EventDispatcher | None = None + + self._command_loader: CommandLoader | None = None + + self._solution_provider_repository: SolutionProviderRepository | None = None + + @property + def name(self) -> str: + return self._name + + @property + def display_name(self) -> str: + if self._display_name is None: + return re.sub(r"[\s\-_]+", " ", self._name).title() + + return self._display_name + + @property + def version(self) -> str: + return self._version + + @property + def long_version(self) -> str: + if self._name: + if self._version: + return f"{self.display_name} (version {self._version})" + + return f"{self.display_name}" + + return "Console application" + + @property + def definition(self) -> Definition: + if self._definition is None: + self._definition = self._default_definition + + if self._single_command: + definition = self._definition + definition.set_arguments([]) + + return definition + + return self._definition + + @property + def default_commands(self) -> list[Command]: + return [HelpCommand(), ListCommand(), CompletionsCommand()] + + @property + def help(self) -> str: + return self.long_version + + @property + def ui(self) -> UI: + if self._ui is None: + self._ui = self._get_default_ui() + + return self._ui + + @property + def event_dispatcher(self) -> EventDispatcher | None: + return self._event_dispatcher + + def set_event_dispatcher(self, event_dispatcher: EventDispatcher) -> None: + self._event_dispatcher = event_dispatcher + + def set_name(self, name: str) -> None: + self._name = name + + def set_display_name(self, display_name: str) -> None: + self._display_name = display_name + + def set_version(self, version: str) -> None: + self._version = version + + def set_ui(self, ui: UI) -> None: + self._ui = ui + + def set_command_loader(self, command_loader: CommandLoader) -> None: + self._command_loader = command_loader + + def auto_exits(self, auto_exits: bool = True) -> None: + self._auto_exit = auto_exits + + def is_auto_exit_enabled(self) -> bool: + return self._auto_exit + + def are_exceptions_caught(self) -> bool: + return self._catch_exceptions + + def catch_exceptions(self, catch_exceptions: bool = True) -> None: + self._catch_exceptions = catch_exceptions + + def is_single_command(self) -> bool: + return self._single_command + + def set_solution_provider_repository( + self, solution_provider_repository: SolutionProviderRepository + ) -> None: + self._solution_provider_repository = solution_provider_repository + + def add(self, command: Command) -> Command | None: + self._init() + + command.set_application(self) + + if not command.enabled: + command.set_application() + + return None + + if not command.name: + raise LogicException( + 'The command "{}" cannot have an empty name'.format( + command.__class__.__name__ + ) + ) + + self._commands[command.name] = command + + for alias in command.aliases: + 
self._commands[alias] = command + + return command + + def get(self, name: str) -> Command: + self._init() + + if not self.has(name): + raise CommandNotFoundException(name) + + if name not in self._commands: + # The command was registered in a different name in the command loader + raise CommandNotFoundException(name) + + command = self._commands[name] + + if self._want_helps: + self._want_helps = False + + help_command: HelpCommand = cast(HelpCommand, self.get("help")) + help_command.set_command(command) + + return help_command + + return command + + def has(self, name: str) -> bool: + self._init() + + if name in self._commands: + return True + + if not self._command_loader: + return False + + return self._command_loader.has(name) and self.add( + self._command_loader.get(name) + ) + + def get_namespaces(self) -> list[str]: + namespaces = [] + seen = set() + + for command in self.all().values(): + if command.hidden: + continue + + for namespace in self._extract_all_namespaces(command.name): + if namespace in seen: + continue + + namespaces.append(namespace) + seen.add(namespace) + + for alias in command.aliases: + for namespace in self._extract_all_namespaces(alias): + if namespace in seen: + continue + + namespaces.append(namespace) + seen.add(namespace) + + return namespaces + + def find_namespace(self, namespace: str) -> str: + all_namespaces = self.get_namespaces() + + if namespace not in all_namespaces: + raise NamespaceNotFoundException(namespace, all_namespaces) + + return namespace + + def find(self, name: str) -> Command: + self._init() + + if self.has(name): + return self.get(name) + + all_commands = [] + if self._command_loader: + all_commands += self._command_loader.names + + all_commands += [ + name for name, command in self._commands.items() if not command.hidden + ] + + raise CommandNotFoundException(name, all_commands) + + def all(self, namespace: str | None = None) -> dict[str, Command]: + self._init() + + if namespace is None: + commands = self._commands.copy() + if not self._command_loader: + return commands + + for name in self._command_loader.names: + if name not in commands and self.has(name): + commands[name] = self.get(name) + + return commands + + commands = {} + + for name, command in self._commands.items(): + if namespace == self.extract_namespace(name, name.count(" ") + 1): + commands[name] = command + + if self._command_loader: + for name in self._command_loader.names: + if ( + name not in commands + and namespace == self.extract_namespace(name, name.count(" ") + 1) + and self.has(name) + ): + commands[name] = self.get(name) + + return commands + + def run( + self, + input: Input | None = None, + output: Output | None = None, + error_output: Output | None = None, + ) -> int: + try: + io = self.create_io(input, output, error_output) + + self._configure_io(io) + + try: + exit_code = self._run(io) + except Exception as e: + if not self._catch_exceptions: + raise + + self.render_error(e, io) + + exit_code = 1 + # TODO: Custom error exit codes + except KeyboardInterrupt: + exit_code = 1 + + if self._auto_exit: + sys.exit(exit_code) + + return exit_code + + def _run(self, io: IO) -> int: + if io.input.has_parameter_option(["--version", "-V"], True): + io.write_line(self.long_version) + + return 0 + + definition = self.definition + input_definition = Definition() + for argument in definition.arguments: + if argument.name == "command": + argument = Argument( + "command", + required=True, + is_list=True, + description=definition.argument("command").description, + ) + + 
input_definition.add_argument(argument) + + input_definition.set_options(definition.options) + + # Errors must be ignored, full binding/validation happens later when the command is known. + with suppress(CleoException): + # Makes ArgvInput.first_argument() able to distinguish an option from an argument. + io.input.bind(input_definition) + + name = self._get_command_name(io) + if io.input.has_parameter_option(["--help", "-h"], True): + if not name: + name = "help" + io.set_input(ArgvInput(["console", "help", self._default_command])) + else: + self._want_helps = True + + if not name: + name = self._default_command + definition = self.definition + arguments = definition.arguments + if not definition.has_argument("command"): + arguments.append( + Argument( + "command", + required=False, + description=definition.argument("command").description, + default=name, + ) + ) + definition.set_arguments(arguments) + + self._running_command = None + command = self.find(name) + + self._running_command = command + + if " " in name and isinstance(io.input, ArgvInput): + # If the command is namespaced we rearrange + # the input to parse it as a single argument + argv = io._input._tokens[:] + + if io.input.script_name is not None: + argv.insert(0, io.input.script_name) + + namespace = name.split(" ")[0] + index = None + for i, arg in enumerate(argv): + if arg == namespace and i > 0: + argv[i] = name + index = i + break + + if index is not None: + del argv[index + 1 : index + 1 + (len(name.split(" ")) - 1)] + + stream = io.input.stream + io.set_input(ArgvInput(argv)) + io.input.set_stream(stream) + + exit_code = self._run_command(command, io) + self._running_command = None + + return exit_code + + def _run_command(self, command: Command, io: IO) -> int: + if self._event_dispatcher is None: + return command.run(io) + + # Bind before the console.command event, + # so the listeners have access to the arguments and options + try: + command.merge_application_definition() + io.input.bind(command.definition) + except CleoException: + # Ignore invalid option/arguments for now, + # to allow the listeners to customize the definition + pass + + event = ConsoleCommandEvent(command, io) + error = None + + try: + self._event_dispatcher.dispatch(event, COMMAND) + + if event.command_should_run(): + exit_code = command.run(io) + else: + exit_code = ConsoleCommandEvent.RETURN_CODE_DISABLED + except Exception as e: + event = ConsoleErrorEvent(command, io, e) + self._event_dispatcher.dispatch(event, ERROR) + error = event.error + exit_code = event.exit_code + + if exit_code == 0: + error = None + + event = ConsoleTerminateEvent(command, io, exit_code) + self._event_dispatcher.dispatch(event, TERMINATE) + + if error is not None: + raise error + + return event.exit_code + + def create_io( + self, + input: Input | None = None, + output: Output | None = None, + error_output: Output | None = None, + ) -> IO: + if input is None: + input = ArgvInput() + input.set_stream(sys.stdin) + + if output is None: + output = StreamOutput(sys.stdout) + + if error_output is None: + error_output = StreamOutput(sys.stderr) + + return IO(input, output, error_output) + + def render_error(self, error: Exception, io: IO) -> None: + from cleo.ui.exception_trace import ExceptionTrace + + trace = ExceptionTrace( + error, solution_provider_repository=self._solution_provider_repository + ) + simple = not io.is_verbose() or isinstance(error, CleoSimpleException) + trace.render(io.error_output, simple) + + def _configure_io(self, io: IO) -> None: + if 
io.input.has_parameter_option("--ansi", True): + io.decorated(True) + elif io.input.has_parameter_option("--no-ansi", True): + io.decorated(False) + + if io.input.has_parameter_option(["--no-interaction", "-n"], True): + io.interactive(False) + + shell_verbosity = int(os.getenv("SHELL_VERBOSITY", 0)) + if shell_verbosity == -1: + io.set_verbosity(Verbosity.QUIET) + elif shell_verbosity == 1: + io.set_verbosity(Verbosity.VERBOSE) + elif shell_verbosity == 2: + io.set_verbosity(Verbosity.VERY_VERBOSE) + elif shell_verbosity == 3: + io.set_verbosity(Verbosity.DEBUG) + else: + shell_verbosity = 0 + + if io.input.has_parameter_option(["--quiet", "-q"], True): + io.set_verbosity(Verbosity.QUIET) + shell_verbosity = -1 + else: + if io.input.has_parameter_option("-vvv", True): + io.set_verbosity(Verbosity.DEBUG) + shell_verbosity = 3 + elif io.input.has_parameter_option("-vv", True): + io.set_verbosity(Verbosity.VERY_VERBOSE) + shell_verbosity = 2 + elif io.input.has_parameter_option( + "-v", True + ) or io.input.has_parameter_option("--verbose", only_params=True): + io.set_verbosity(Verbosity.VERBOSE) + shell_verbosity = 1 + + if shell_verbosity == -1: + io.interactive(False) + + @property + def _default_definition(self) -> Definition: + return Definition( + [ + Argument( + "command", + required=True, + description="The command to execute.", + ), + Option( + "--help", + "-h", + flag=True, + description=( + "Display help for the given command. " + f"When no command is given display help for the {self._default_command} command." + ), + ), + Option( + "--quiet", "-q", flag=True, description="Do not output any message." + ), + Option( + "--verbose", + "-v|vv|vvv", + flag=True, + description=( + "Increase the verbosity of messages: " + "1 for normal output, 2 for more verbose output and 3 for debug." 
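+                        # the three verbosity forms (-v, -vv, -vvv) are
+                        # matched explicitly in _configure_io above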
+ ), + ), + Option( + "--version", + "-V", + flag=True, + description="Display this application version.", + ), + Option("--ansi", flag=True, description="Force ANSI output."), + Option("--no-ansi", flag=True, description="Disable ANSI output."), + Option( + "--no-interaction", + "-n", + flag=True, + description="Do not ask any interactive question.", + ), + ] + ) + + def _get_command_name(self, io: IO) -> str: + if self._single_command: + return self._default_command + + if "command" in io.input.arguments and io.input.argument("command"): + candidates = [] + for command_part in io.input.argument("command"): + if candidates: + candidates.append(candidates[-1] + " " + command_part) + else: + candidates.append(command_part) + + for candidate in reversed(candidates): + if self.has(candidate): + return candidate + + return io.input.first_argument + + def extract_namespace(self, name: str, limit: int | None = None) -> str: + parts = name.split(" ")[:-1] + + if limit is not None: + return " ".join(parts[:limit]) + + return " ".join(parts) + + def _get_default_ui(self) -> UI: + from cleo.ui.progress_bar import ProgressBar + + return UI([ProgressBar()]) + + def _extract_all_namespaces(self, name: str) -> list[str]: + parts = name.split(" ")[:-1] + namespaces = [] + + for part in parts: + if namespaces: + namespaces.append(namespaces[-1] + " " + part) + else: + namespaces.append(part) + + return namespaces + + def _init(self) -> None: + if self._initialized: + return + + self._initialized = True + + for command in self.default_commands: + self.add(command) diff --git a/env/lib/python3.8/site-packages/cleo/color.py b/env/lib/python3.8/site-packages/cleo/color.py new file mode 100644 index 00000000..ff2fc365 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/color.py @@ -0,0 +1,152 @@ +from __future__ import annotations + +import os + +from cleo.exceptions import ValueException + + +class Color: + + COLORS = { + "black": (30, 40), + "red": (31, 41), + "green": (32, 42), + "yellow": (33, 43), + "blue": (34, 44), + "magenta": (35, 45), + "cyan": (36, 46), + "light_gray": (37, 47), + "default": (39, 49), + "dark_gray": (90, 100), + "light_red": (91, 101), + "light_green": (92, 102), + "light_yellow": (93, 103), + "light_blue": (94, 104), + "light_magenta": (95, 105), + "light_cyan": (96, 106), + "white": (97, 107), + } + + AVAILABLE_OPTIONS = { + "bold": {"set": 1, "unset": 22}, + "dark": {"set": 2, "unset": 22}, + "italic": {"set": 3, "unset": 23}, + "underline": {"set": 4, "unset": 24}, + "blink": {"set": 5, "unset": 25}, + "reverse": {"set": 7, "unset": 27}, + "conceal": {"set": 8, "unset": 28}, + } + + def __init__( + self, + foreground: str = "", + background: str = "", + options: list[str] | None = None, + ) -> None: + self._foreground = self._parse_color(foreground, False) + self._background = self._parse_color(background, True) + + if options is None: + options = [] + + self._options = {} + for option in options: + if option not in self.AVAILABLE_OPTIONS: + raise ValueError( + '"{}" is not a valid color option. 
It must be one of {}'.format( + option, ", ".join(self.AVAILABLE_OPTIONS.keys()) + ) + ) + + self._options[option] = self.AVAILABLE_OPTIONS[option] + + def apply(self, text: str) -> str: + return self.set() + text + self.unset() + + def set(self) -> str: + codes = [] + + if self._foreground: + codes.append(self._foreground) + + if self._background: + codes.append(self._background) + + for option in self._options.values(): + codes.append(str(option["set"])) + + if not codes: + return "" + + return "\033[{}m".format(";".join(codes)) + + def unset(self) -> str: + codes = [] + + if self._foreground: + codes.append("39") + + if self._background: + codes.append("49") + + for option in self._options.values(): + codes.append(str(option["unset"])) + + if not codes: + return "" + + return "\033[{}m".format(";".join(codes)) + + def _parse_color(self, color: str, background: bool) -> str: + if not color: + return "" + + if color.startswith("#"): + color = color[1:] + + if len(color) == 3: + color = color[0] * 2 + color[1] * 2 + color[2] * 2 + + if len(color) != 6: + raise ValueException(f'"{color}" is an invalid color') + + return ("4" if background else "3") + self._convert_hex_color_to_ansi( + int(color, 16) + ) + + if color not in self.COLORS: + raise ValueException( + '"{}" is an invalid color. It must be one of {}'.format( + color, ", ".join(self.COLORS.keys()) + ) + ) + + return str(self.COLORS[color][int(background)]) + + def _convert_hex_color_to_ansi(self, color: int) -> str: + r = (color >> 16) & 255 + g = (color >> 8) & 255 + b = color & 255 + + if os.getenv("COLORTERM") != "truecolor": + return str(self._degrade_hex_color_to_ansi(r, g, b)) + + return f"8;2;{r};{g};{b}" + + def _degrade_hex_color_to_ansi(self, r: int, g: int, b: int) -> int: + if round(self._get_saturation(r, g, b) / 50) == 0: + return 0 + + return (round(b / 255) << 2) | (round(g / 255) << 1) | round(r / 255) + + def _get_saturation(self, r: int, g: int, b: int) -> int: + r_float = r / 255 + g_float = g / 255 + b_float = b / 255 + v = max(r_float, g_float, b_float) + + diff = v - min(r_float, g_float, b_float) + if diff == 0: + return 0 + + return int(diff * 100 / v) diff --git a/env/lib/python3.8/site-packages/cleo/commands/__init__.py b/env/lib/python3.8/site-packages/cleo/commands/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/commands/base_command.py b/env/lib/python3.8/site-packages/cleo/commands/base_command.py new file mode 100644 index 00000000..c6ba04e6 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/commands/base_command.py @@ -0,0 +1,148 @@ +from __future__ import annotations + +import inspect + +from typing import TYPE_CHECKING + +from cleo.exceptions import CleoException +from cleo.io.inputs.definition import Definition + + +if TYPE_CHECKING: + from cleo.application import Application + from cleo.io.io import IO + + +class BaseCommand: + + name: str | None = None + + description = "" + + help = "" + + enabled = True + hidden = False + + usages: list[str] = [] + + def __init__(self) -> None: + self._definition = Definition() + self._full_definition = None + self._application = None + self._ignore_validation_errors = False + self._synopsis: dict[str, str] = {} + + self.configure() + + for i, usage in enumerate(self.usages): + if self.name and usage.find(self.name) != 0: + self.usages[i] = f"{self.name} {usage}" + + @property + def application(self) -> Application: + return self._application + + @property + def definition(self) -> Definition: + 
if self._full_definition is not None: + return self._full_definition + + return self._definition + + @property + def processed_help(self) -> str: + help_text = self.help + if not self.help: + help_text = self.description + + is_single_command = self._application and self._application.is_single_command() + + if self._application: + current_script = self._application.name + else: + current_script = inspect.stack()[-1][1] + + return help_text.format( + command_name=self.name, + command_full_name=current_script + if is_single_command + else current_script + " " + self.name, + script_name=current_script, + ) + + def ignore_validation_errors(self) -> None: + self._ignore_validation_errors = True + + def set_application(self, application: Application | None = None) -> None: + self._application = application + + self._full_definition = None + + def configure(self) -> None: + """ + Configures the current command. + """ + pass + + def execute(self, io: IO) -> int: + raise NotImplementedError() + + def interact(self, io: IO) -> None: + """ + Interacts with the user. + """ + pass + + def initialize(self, io: IO) -> None: + pass + + def run(self, io: IO) -> int: + self.merge_application_definition() + + try: + io.input.bind(self.definition) + except CleoException: + if not self._ignore_validation_errors: + raise + + self.initialize(io) + + if io.is_interactive(): + self.interact(io) + + if io.input.has_argument("command") and io.input.argument("command") is None: + io.input.set_argument("command", self.name) + + io.input.validate() + + status_code = self.execute(io) + + if status_code is None: + status_code = 0 + + return status_code + + def merge_application_definition(self, merge_args: bool = True) -> None: + if self._application is None: + return + + self._full_definition = Definition() + self._full_definition.add_options(self._definition.options) + self._full_definition.add_options(self._application.definition.options) + + if merge_args: + self._full_definition.set_arguments(self._application.definition.arguments) + self._full_definition.add_arguments(self._definition.arguments) + else: + self._full_definition.set_arguments(self._definition.arguments) + + def synopsis(self, short: bool = False) -> str: + key = "short" if short else "long" + + if key not in self._synopsis: + self._synopsis[key] = "{} {}".format( + self.name, self.definition.synopsis(short) + ) + + return self._synopsis[key] diff --git a/env/lib/python3.8/site-packages/cleo/commands/command.py b/env/lib/python3.8/site-packages/cleo/commands/command.py new file mode 100644 index 00000000..ca2033ec --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/commands/command.py @@ -0,0 +1,377 @@ +from __future__ import annotations + +import inspect +import re + +from typing import TYPE_CHECKING +from typing import Any + +from cleo.commands.base_command import BaseCommand +from cleo.formatters.style import Style +from cleo.io.inputs.string_input import StringInput +from cleo.io.null_io import NullIO +from cleo.io.outputs.output import Verbosity +from cleo.parser import Parser +from cleo.ui.table_separator import TableSeparator + + +if TYPE_CHECKING: + from cleo.io.io import IO + from cleo.ui.progress_bar import ProgressBar + from cleo.ui.progress_indicator import ProgressIndicator + from cleo.ui.question import Question + + +class Command(BaseCommand): + + arguments = [] + options = [] + + aliases = [] + + usages = [] + + commands = [] + + def __init__(self) -> None: + self._io: IO | None = None + super().__init__() + + @property + def 
io(self) -> IO:
+        return self._io
+
+    def configure(self) -> None:
+        if not self.name:
+            doc = self.__doc__
+
+            if not doc:
+                for base in inspect.getmro(self.__class__):
+                    if base.__doc__ is not None:
+                        doc = base.__doc__
+                        break
+
+            if doc:
+                self._parse_doc(doc)
+
+        for argument in self.arguments:
+            self._definition.add_argument(argument)
+
+        for option in self.options:
+            self._definition.add_option(option)
+
+    def _parse_doc(self, doc):
+        doc = doc.strip().split("\n", 1)
+        if len(doc) > 1:
+            self.description = doc[0].strip()
+            signature = re.sub(r"\s{2,}", " ", doc[1].strip())
+            definition = Parser.parse(signature)
+            self.name = definition["name"]
+
+            for argument in definition["arguments"]:
+                self._definition.add_argument(argument)
+
+            for option in definition["options"]:
+                self._definition.add_option(option)
+        else:
+            self.description = doc[0].strip()
+
+    def execute(self, io: IO) -> int:
+        self._io = io
+
+        try:
+            return self.handle()
+        except KeyboardInterrupt:
+            return 1
+
+    def handle(self) -> int:
+        """
+        Executes the command.
+        """
+        raise NotImplementedError()
+
+    def call(self, name: str, args: str | None = None) -> int:
+        """
+        Call another command.
+        """
+        if args is None:
+            args = ""
+
+        input = StringInput(args)
+        command = self.application.get(name)
+
+        return self.application._run_command(command, self._io.with_input(input))
+
+    def call_silent(self, name: str, args: str | None = None) -> int:
+        """
+        Call another command silently.
+        """
+        if args is None:
+            args = ""
+
+        input = StringInput(args)
+        command = self.application.get(name)
+
+        return self.application._run_command(command, NullIO(input))
+
+    def argument(self, name: str):
+        """
+        Get the value of a command argument.
+        """
+        return self._io.input.argument(name)
+
+    def option(self, name: str):
+        """
+        Get the value of a command option.
+        """
+        return self._io.input.option(name)
+
+    def confirm(
+        self, question: str, default: bool = False, true_answer_regex: str = "(?i)^y"
+    ) -> bool:
+        """
+        Confirm a question with the user.
+        """
+        from cleo.ui.confirmation_question import ConfirmationQuestion
+
+        question = ConfirmationQuestion(
+            question, default=default, true_answer_regex=true_answer_regex
+        )
+
+        return question.ask(self._io)
+
+    def ask(self, question: str | Question, default: Any | None = None) -> Any:
+        """
+        Prompt the user for input.
+        """
+        from cleo.ui.question import Question
+
+        if not isinstance(question, Question):
+            question = Question(question, default=default)
+
+        return question.ask(self._io)
+
+    def secret(self, question: str | Question, default: Any | None = None) -> Any:
+        """
+        Prompt the user for input but hide the answer from the console.
+        """
+        from cleo.ui.question import Question
+
+        if not isinstance(question, Question):
+            question = Question(question, default=default)
+
+        question.hide()
+
+        return question.ask(self._io)
+
+    def choice(
+        self,
+        question: str,
+        choices: list[str],
+        default: Any | None = None,
+        attempts: int | None = None,
+        multiple: bool = False,
+    ) -> Any:
+        """
+        Give the user a single choice from a list of answers.
+        """
+        from cleo.ui.choice_question import ChoiceQuestion
+
+        question = ChoiceQuestion(question, choices, default)
+
+        question.set_max_attempts(attempts)
+        question.set_multi_select(multiple)
+
+        return question.ask(self._io)
+
+    def create_question(self, question, type=None, **kwargs):
+        """
+        Returns a Question of specified type.
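+
+        ``type`` may be None (a plain ``Question``), ``"choice"`` or
+        ``"confirmation"``; any other value falls through and returns None.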
+        """
+        from cleo.ui.choice_question import ChoiceQuestion
+        from cleo.ui.confirmation_question import ConfirmationQuestion
+        from cleo.ui.question import Question
+
+        if not type:
+            return Question(question, **kwargs)
+
+        if type == "choice":
+            return ChoiceQuestion(question, **kwargs)
+
+        if type == "confirmation":
+            return ConfirmationQuestion(question, **kwargs)
+
+    def table(self, header=None, rows=None, style=None):
+        """
+        Return a Table instance.
+        """
+        from cleo.ui.table import Table
+
+        table = Table(self._io, style=style)
+
+        if header:
+            table.set_headers([header])
+
+        if rows:
+            table.set_rows(rows)
+
+        return table
+
+    def table_separator(self) -> TableSeparator:
+        """
+        Return a TableSeparator instance.
+        """
+        from cleo.ui.table_separator import TableSeparator
+
+        return TableSeparator()
+
+    def render_table(self, headers, rows, style=None) -> None:
+        """
+        Format input to textual table.
+        """
+        table = self.table(headers, rows, style)
+
+        table.render()
+
+    def write(self, text: str, style: str | None = None) -> None:
+        """
+        Writes a string without a new line.
+        Useful if you want to use overwrite().
+        """
+        if style:
+            styled = f"<{style}>{text}</{style}>"
+        else:
+            styled = text
+
+        self._io.write(styled)
+
+    def line(
+        self,
+        text: str,
+        style: str | None = None,
+        verbosity: Verbosity = Verbosity.NORMAL,
+    ) -> None:
+        """
+        Write a string as information output.
+        """
+        if style:
+            styled = f"<{style}>{text}</{style}>"
+        else:
+            styled = text
+
+        self._io.write_line(styled, verbosity=verbosity)
+
+    def line_error(
+        self,
+        text: str,
+        style: str | None = None,
+        verbosity: Verbosity = Verbosity.NORMAL,
+    ) -> None:
+        """
+        Write a string as information output to stderr.
+        """
+        if style:
+            styled = f"<{style}>{text}</{style}>"
+        else:
+            styled = text
+
+        self._io.write_error_line(styled, verbosity)
+
+    def info(self, text: str) -> None:
+        """
+        Write a string as information output.
+
+        :param text: The line to write
+        :type text: str
+        """
+        self.line(text, "info")
+
+    def comment(self, text: str) -> None:
+        """
+        Write a string as comment output.
+
+        :param text: The line to write
+        :type text: str
+        """
+        self.line(text, "comment")
+
+    def question(self, text: str) -> None:
+        """
+        Write a string as question output.
+
+        :param text: The line to write
+        :type text: str
+        """
+        self.line(text, "question")
+
+    def progress_bar(self, max: int = 0) -> ProgressBar:
+        """
+        Creates a new progress bar.
+        """
+        from cleo.ui.progress_bar import ProgressBar
+
+        return ProgressBar(self._io, max=max)
+
+    def progress_indicator(
+        self,
+        fmt: str | None = None,
+        interval: int = 100,
+        values: list[str] | None = None,
+    ) -> ProgressIndicator:
+        """
+        Creates a new progress indicator.
+        """
+        from cleo.ui.progress_indicator import ProgressIndicator
+
+        return ProgressIndicator(self.io, fmt, interval, values)
+
+    def spin(
+        self,
+        start_message: str,
+        end_message: str,
+        fmt: str | None = None,
+        interval=100,
+        values: list[str] | None = None,
+    ):
+        """
+        Automatically spin a progress indicator.
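+
+        A sketch of typical use, assuming ``ProgressIndicator.auto()``
+        behaves as a context manager::
+
+            with self.spin("Working...", "Done"):
+                do_work()  # illustrative placeholder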
+ """ + spinner = self.progress_indicator(fmt, interval, values) + + return spinner.auto(start_message, end_message) + + def add_style( + self, + name: str, + fg: str | None = None, + bg: str | None = None, + options: list[str] | None = None, + ) -> None: + """ + Adds a new style + """ + style = Style(name) + if fg is not None: + style.fg(fg) + + if bg is not None: + style.bg(bg) + + if options is not None: + if "bold" in options: + style.bold() + + if "underline" in options: + style.underlined() + + self._io.output.formatter.add_style(style) + self._io.error_output.formatter.add_style(style) + + def overwrite(self, text: str) -> None: + """ + Overwrites the current line. + + It will not add a new line so use line('') + if necessary. + """ + self._io.overwrite(text) diff --git a/env/lib/python3.8/site-packages/cleo/commands/completions/__init__.py b/env/lib/python3.8/site-packages/cleo/commands/completions/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/commands/completions/templates.py b/env/lib/python3.8/site-packages/cleo/commands/completions/templates.py new file mode 100644 index 00000000..0e134c37 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/commands/completions/templates.py @@ -0,0 +1,120 @@ +from __future__ import annotations + + +BASH_TEMPLATE = """%(function)s() +{ + local cur script coms opts com + COMPREPLY=() + _get_comp_words_by_ref -n : cur words + + # for an alias, get the real script behind it + if [[ $(type -t ${words[0]}) == "alias" ]]; then + script=$(alias ${words[0]} | sed -E "s/alias ${words[0]}='(.*)'/\\1/") + else + script=${words[0]} + fi + + # lookup for command + for word in ${words[@]:1}; do + if [[ $word != -* ]]; then + com=$word + break + fi + done + + # completing for an option + if [[ ${cur} == --* ]] ; then + opts="%(opts)s" + + case "$com" in + +%(command_list)s + + esac + + COMPREPLY=($(compgen -W "${opts}" -- ${cur})) + __ltrim_colon_completions "$cur" + + return 0; + fi + + # completing for a command + if [[ $cur == $com ]]; then + coms="%(coms)s" + + COMPREPLY=($(compgen -W "${coms}" -- ${cur})) + __ltrim_colon_completions "$cur" + + return 0 + fi +} + +%(compdefs)s""" + +ZSH_TEMPLATE = """#compdef %(script_name)s + +%(function)s() +{ + local com coms cur opts state + + cur=${words[${#words[@]}]} + + # lookup for command + for word in ${words[@]:1}; do + if [[ $word != -* ]]; then + com=$word + break + fi + done + + if [[ ${cur} == --* ]]; then + state="option" + opts=(%(opts)s) + elif [[ $cur == $com ]]; then + state="command" + coms=(%(coms)s) + fi + + case $state in + (command) + _describe 'command' coms + ;; + (option) + case "$com" in + +%(command_list)s + + esac + + _describe 'option' opts + ;; + *) + # fallback to file completion + _arguments '*:file:_files' + esac +} + +%(function)s "$@" +%(compdefs)s""" + +FISH_TEMPLATE = """function __fish%(function)s_no_subcommand + for i in (commandline -opc) + if contains -- $i %(cmds_names)s + return 1 + end + end + return 0 +end + +# global options +%(opts)s + +# commands +%(cmds)s + +# command options + +%(cmds_opts)s""" + + +TEMPLATES = {"bash": BASH_TEMPLATE, "zsh": ZSH_TEMPLATE, "fish": FISH_TEMPLATE} diff --git a/env/lib/python3.8/site-packages/cleo/commands/completions_command.py b/env/lib/python3.8/site-packages/cleo/commands/completions_command.py new file mode 100644 index 00000000..58a2ec0e --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/commands/completions_command.py @@ -0,0 +1,417 @@ +from __future__ import 
annotations
+
+import hashlib
+import inspect
+import os
+import posixpath
+import re
+import subprocess
+
+from cleo import helpers
+from cleo.commands.command import Command
+from cleo.commands.completions.templates import TEMPLATES
+
+
+class CompletionsCommand(Command):
+
+    name = "completions"
+    description = "Generate completion scripts for your shell."
+
+    arguments = [
+        helpers.argument(
+            "shell", "The shell to generate the scripts for.", optional=True
+        )
+    ]
+    options = [
+        helpers.option(
+            "alias", None, "Alias for the current command.", flag=False, multiple=True
+        )
+    ]
+
+    SUPPORTED_SHELLS = ("bash", "zsh", "fish")
+
+    hidden = True
+
+    help = """
+One can generate a completion script for `{script_name}` that is compatible with \
+a given shell. The script is output on `stdout`, allowing one to redirect \
+the output to the file of their choosing. Where you place the file will \
+depend on which shell and which operating system you are using. Your \
+particular configuration may also determine where these scripts need \
+to be placed.
+
+Here are some common setups for the three supported shells under \
+Unix and similar operating systems (such as GNU/Linux).
+
+BASH:
+
+Completion files are commonly stored in `/etc/bash_completion.d/`
+
+Run the command:
+
+`{script_name} {command_name} bash > /etc/bash_completion.d/{script_name}.bash-completion`
+
+This installs the completion script. You may have to log out and log \
+back in to your shell session for the changes to take effect.
+
+FISH:
+
+Fish completion files are commonly stored in \
+`$HOME/.config/fish/completions`
+
+Run the command:
+
+`{script_name} {command_name} fish > ~/.config/fish/completions/{script_name}.fish`
+
+This installs the completion script. You may have to log out and log \
+back in to your shell session for the changes to take effect.
+
+ZSH:
+
+ZSH completions are commonly stored in any directory listed in your \
+`$fpath` variable. To use these completions, you must either add the \
+generated script to one of those directories, or add your own \
+to this list.
+
+Adding a custom directory is often the safest bet if you're unsure \
+which directory to use. First create the directory; for this \
+example we'll create a hidden directory inside our `$HOME` directory:
+
+`mkdir ~/.zfunc`
+
+Then add the following lines to your `.zshrc` just before `compinit`:
+
+`fpath+=~/.zfunc`
+
+Now you can install the completions script using the following command:
+
+`{script_name} {command_name} zsh > ~/.zfunc/_{script_name}`
+
+You must then either log out and log back in, or simply run
+
+`exec zsh`
+
+for the new completions to take effect.
+
+CUSTOM LOCATIONS:
+
+Alternatively, you could save these files to the place of your choosing, \
+such as a custom directory inside your $HOME. Doing so will require you \
+to add the proper directives, such as `source`ing inside your login \
+script. Consult your shell's documentation for how to add such directives.
+""" + + def handle(self) -> int: + shell = self.argument("shell") + if not shell: + shell = self.get_shell_type() + + if shell not in self.SUPPORTED_SHELLS: + raise ValueError( + "[shell] argument must be one of {}".format( + ", ".join(self.SUPPORTED_SHELLS) + ) + ) + + self.line(self.render(shell)) + + return 0 + + def render(self, shell: str) -> str: + return getattr(self, f"render_{shell}")() + + def render_bash(self) -> str: + template = TEMPLATES["bash"] + + script_name = self._io.input.script_name + if not script_name: + script_name = inspect.stack()[-1][1] + + script_path = posixpath.realpath(script_name) + script_name = os.path.basename(script_path) + aliases = [script_name, script_path] + aliases += self.option("alias") + + function = self._generate_function_name(script_name, script_path) + + commands = [] + global_options = set() + options_descriptions = {} + commands_options = {} + for option in self.application.definition.options: + options_descriptions[ + "--" + option.name + ] = self.io.output.formatter.remove_format(option.description) + global_options.add("--" + option.name) + + for command in self.application.all().values(): + if not command.enabled or command.hidden: + continue + + command_options = [] + commands.append(command.name) + + options = command.definition.options + for option in sorted(options, key=lambda o: o.name): + name = "--" + option.name + description = option.description + command_options.append(name) + options_descriptions[name] = description + + commands_options[command.name] = command_options + + compdefs = "\n".join( + [f"complete -o default -F {function} {alias}" for alias in aliases] + ) + + commands = sorted(commands) + + command_list = [] + for i, command in enumerate(commands): + options = set(commands_options[command]).difference(global_options) + options = sorted(options) + options = [self._zsh_describe(opt, None).strip('"') for opt in options] + + desc = [ + f" ({command})", + ' opts="${{opts}} {}"'.format(" ".join(options)), + " ;;", + ] + + if i < len(commands) - 1: + desc.append("") + + command_list.append("\n".join(desc)) + + output = template % { + "script_name": script_name, + "function": function, + "opts": " ".join(sorted(global_options)), + "coms": " ".join(commands), + "command_list": "\n".join(command_list), + "compdefs": compdefs, + } + + return output + + def render_zsh(self): + template = TEMPLATES["zsh"] + + script_name = self._io.input.script_name + if not script_name: + script_name = inspect.stack()[-1][1] + + script_path = posixpath.realpath(script_name) + script_name = os.path.basename(script_path) + aliases = [script_path] + aliases += self.option("alias") + + function = self._generate_function_name(script_name, script_path) + + global_options = set() + commands_descriptions = [] + options_descriptions = {} + commands_options_descriptions = {} + commands_options = {} + for option in self.application.definition.options: + options_descriptions[ + "--" + option.name + ] = self.io.output.formatter.remove_format(option.description) + global_options.add("--" + option.name) + + for command in self.application.all().values(): + if not command.enabled or command.hidden: + continue + + command_options = [] + commands_options_descriptions[command.name] = {} + command_description = self._io.output.formatter.remove_format( + command.description + ) + commands_descriptions.append( + self._zsh_describe(command.name, command_description) + ) + + options = command.definition.options + for option in sorted(options, key=lambda o: o.name): 
+ name = "--" + option.name + description = self.io.output.formatter.remove_format(option.description) + command_options.append(name) + options_descriptions[name] = description + commands_options_descriptions[command.name][name] = description + + commands_options[command.name] = command_options + + compdefs = "\n".join([f"compdef {function} {alias}" for alias in aliases]) + + commands = sorted(commands_options.keys()) + command_list = [] + for i, command in enumerate(commands): + options = set(commands_options[command]).difference(global_options) + options = sorted(options) + options = [ + self._zsh_describe(opt, commands_options_descriptions[command][opt]) + for opt in options + ] + + desc = [ + f" ({command})", + " opts+=({})".format(" ".join(options)), + " ;;", + ] + + if i < len(commands) - 1: + desc.append("") + + command_list.append("\n".join(desc)) + + opts = [] + for opt in global_options: + opts.append(self._zsh_describe(opt, options_descriptions[opt])) + + output = template % { + "script_name": script_name, + "function": function, + "opts": " ".join(sorted(opts)), + "coms": " ".join(sorted(commands_descriptions)), + "command_list": "\n".join(command_list), + "compdefs": compdefs, + } + + return output + + def render_fish(self): + template = TEMPLATES["fish"] + + script_name = self._io.input.script_name + if not script_name: + script_name = inspect.stack()[-1][1] + + script_path = posixpath.realpath(script_name) + script_name = os.path.basename(script_path) + aliases = [script_name] + aliases += self.option("alias") + + function = self._generate_function_name(script_name, script_path) + + global_options = set() + commands_descriptions = {} + options_descriptions = {} + commands_options_descriptions = {} + commands_options = {} + for option in self.application.definition.options: + options_descriptions[ + "--" + option.name + ] = self._io.output.formatter.remove_format(option.description) + global_options.add("--" + option.name) + + for command in self.application.all().values(): + if not command.enabled or command.hidden: + continue + + command_options = [] + commands_options_descriptions[command.name] = {} + commands_descriptions[ + command.name + ] = self._io.output.formatter.remove_format(command.description) + + options = command.definition.options + for option in sorted(options, key=lambda o: o.name): + name = "--" + option.name + description = self._io.output.formatter.remove_format( + option.description + ) + command_options.append(name) + options_descriptions[name] = description + commands_options_descriptions[command.name][name] = description + + commands_options[command.name] = command_options + + opts = [] + for opt in sorted(global_options): + opts.append( + "complete -c {} -n '__fish{}_no_subcommand' " + "-l {} -d '{}'".format( + script_name, + function, + opt[2:], + options_descriptions[opt].replace("'", "\\'"), + ) + ) + + cmds_names = sorted(commands_options.keys()) + + cmds = [] + cmds_opts = [] + for i, cmd in enumerate(cmds_names): + cmds.append( + "complete -c {} -f -n '__fish{}_no_subcommand' " + "-a {} -d '{}'".format( + script_name, + function, + cmd, + commands_descriptions[cmd].replace("'", "\\'"), + ) + ) + + cmds_opts += [f"# {cmd}"] + options = set(commands_options[cmd]).difference(global_options) + options = sorted(options) + + for opt in options: + cmds_opts.append( + "complete -c {} -A -n '__fish_seen_subcommand_from {}' " + "-l {} -d '{}'".format( + script_name, + cmd, + opt[2:], + commands_options_descriptions[cmd][opt].replace("'", "\\'"), + ) + ) 
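+
+            # a blank line separates each command's option block from the next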
+ + if i < len(cmds_names) - 1: + cmds_opts.append("") + + output = template % { + "script_name": script_name, + "function": function, + "cmds_names": " ".join(cmds_names), + "opts": "\n".join(opts), + "cmds": "\n".join(cmds), + "cmds_opts": "\n".join(cmds_opts), + } + + return output + + def get_shell_type(self) -> str: + shell = os.getenv("SHELL") + if not shell: + raise RuntimeError( + "Could not read SHELL environment variable. " + "Please specify your shell type by passing it as the first argument." + ) + + return os.path.basename(shell) + + def _generate_function_name(self, script_name: str, script_path: str) -> str: + return "_{}_{}_complete".format( + self._sanitize_for_function_name(script_name), + hashlib.md5(script_path.encode()).hexdigest()[0:16], + ) + + def _sanitize_for_function_name(self, name: str) -> str: + name = name.replace("-", "_") + + return re.sub("[^A-Za-z0-9_]+", "", name) + + def _zsh_describe(self, value: str, description: str | None = None) -> str: + value = '"' + value.replace(":", "\\:") + if description: + description = re.sub( + r'(["\'#&;`|*?~<>^()\[\]{}$\\\x0A\xFF])', r"\\\1", description + ) + value += ":{}".format(subprocess.list2cmdline([description]).strip('"')) + + value += '"' + + return value diff --git a/env/lib/python3.8/site-packages/cleo/commands/help_command.py b/env/lib/python3.8/site-packages/cleo/commands/help_command.py new file mode 100644 index 00000000..3b6b2b61 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/commands/help_command.py @@ -0,0 +1,51 @@ +from __future__ import annotations + +from cleo.commands.command import Command +from cleo.io.inputs.argument import Argument + + +class HelpCommand(Command): + + name = "help" + + description = "Displays help for a command." + + arguments = [ + Argument( + "command_name", + required=False, + description="The command name", + default="help", + ) + ] + + help = """\ +The {command_name} command displays help for a given command: + + {command_full_name} list + +To display the list of available commands, please use the list command. +""" + + _command = None + + def set_command(self, command: Command) -> None: + self._command = command + + def configure(self) -> None: + self.ignore_validation_errors() + + super().configure() + + def handle(self) -> int: + from cleo.descriptors.text_descriptor import TextDescriptor + + if self._command is None: + self._command = self._application.find(self.argument("command_name")) + + self.line("") + TextDescriptor().describe(self._io, self._command) + + self._command = None + + return 0 diff --git a/env/lib/python3.8/site-packages/cleo/commands/list_command.py b/env/lib/python3.8/site-packages/cleo/commands/list_command.py new file mode 100644 index 00000000..528e4eca --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/commands/list_command.py @@ -0,0 +1,34 @@ +from __future__ import annotations + +from cleo.commands.command import Command +from cleo.io.inputs.argument import Argument + + +class ListCommand(Command): + + name = "list" + + description = "Lists commands." 
+ + help = """\ +The {command_name} command lists all commands: + + {command_full_name} + +You can also display the commands for a specific namespace: + + {command_full_name} test +""" + + arguments = [ + Argument("namespace", required=False, description="The namespace name") + ] + + def handle(self) -> int: + from cleo.descriptors.text_descriptor import TextDescriptor + + TextDescriptor().describe( + self._io, self.application, namespace=self.argument("namespace") + ) + + return 0 diff --git a/env/lib/python3.8/site-packages/cleo/cursor.py b/env/lib/python3.8/site-packages/cleo/cursor.py new file mode 100644 index 00000000..dddd1d1a --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/cursor.py @@ -0,0 +1,108 @@ +from __future__ import annotations + +import sys + +from typing import TYPE_CHECKING +from typing import TextIO + +from cleo.io.io import IO + + +if TYPE_CHECKING: + from cleo.io.outputs.output import Output + + +class Cursor: + def __init__(self, io: IO | Output, input: TextIO | None = None) -> None: + if isinstance(io, IO): + io = io.output + + self._output = io + + if input is None: + input = sys.stdin + + self._input = input + + def move_up(self, lines: int = 1) -> Cursor: + self._output.write(f"\x1b[{lines}A") + + return self + + def move_down(self, lines: int = 1) -> Cursor: + self._output.write(f"\x1b[{lines}B") + + return self + + def move_right(self, columns: int = 1) -> Cursor: + self._output.write(f"\x1b[{columns}C") + + return self + + def move_left(self, columns: int = 1) -> Cursor: + self._output.write(f"\x1b[{columns}D") + + return self + + def move_to_column(self, column: int) -> Cursor: + self._output.write(f"\x1b[{column}G") + + return self + + def move_to_position(self, column: int, row: int) -> Cursor: + self._output.write(f"\x1b[{row + 1};{column}H") + + return self + + def save_position(self) -> Cursor: + self._output.write("\x1b7") + + return self + + def restore_position(self) -> Cursor: + self._output.write("\x1b8") + + return self + + def hide(self) -> Cursor: + self._output.write("\x1b[?25l") + + return self + + def show(self) -> Cursor: + self._output.write("\x1b[?25h\x1b[?0c") + + return self + + def clear_line(self) -> Cursor: + """ + Clears all the output from the current line. + """ + self._output.write("\x1b[2K") + + return self + + def clear_line_after(self) -> Cursor: + """ + Clears all the output from the current line after the current position. + """ + self._output.write("\x1b[K") + + return self + + def clear_output(self) -> Cursor: + """ + Clears all the output from the cursors' current position + to the end of the screen. + """ + self._output.write("\x1b[0J") + + return self + + def clear_screen(self) -> Cursor: + """ + Clears the entire screen. 
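+        (Writes the ANSI erase-display sequence ``ESC[2J``.)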
+ """ + self._output.write("\x1b[2J") + + return self diff --git a/env/lib/python3.8/site-packages/cleo/descriptors/__init__.py b/env/lib/python3.8/site-packages/cleo/descriptors/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/descriptors/application_description.py b/env/lib/python3.8/site-packages/cleo/descriptors/application_description.py new file mode 100644 index 00000000..511d553e --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/descriptors/application_description.py @@ -0,0 +1,93 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING + +from cleo.exceptions import CommandNotFoundException + + +if TYPE_CHECKING: + from cleo.application import Application + from cleo.commands.command import Command + + +class ApplicationDescription: + + GLOBAL_NAMESPACE = "_global" + + def __init__( + self, + application: Application, + namespace: str | None = None, + show_hidden: bool = False, + ) -> None: + self._application: Application = application + self._namespace = namespace + self._show_hidden = show_hidden + self._namespaces: dict[str, dict[str, str | list[Command]]] = {} + self._commands = {} + self._aliases = {} + + self._inspect_application() + + @property + def namespaces(self) -> dict[str, dict[str, str | list[Command]]]: + return self._namespaces + + @property + def commands(self) -> dict[str, Command]: + return self._commands + + def command(self, name: str) -> Command: + if name not in self._commands and name not in self._aliases: + raise CommandNotFoundException(name) + + return self._commands.get(name, self._aliases.get(name)) + + def _inspect_application(self) -> None: + namespace = None + if self._namespace: + namespace = self._application.find_namespace(self._namespace) + + all_commands = self._application.all(namespace) + + for namespace, commands in self._sort_commands(all_commands): + names = [] + + for name, command in commands: + if not command.name or command.hidden: + continue + + if command.name == name: + self._commands[name] = command + else: + self._aliases[name] = command + + names.append(name) + + self._namespaces[namespace] = {"id": namespace, "commands": names} + + def _sort_commands( + self, commands: dict[str, Command] + ) -> list[tuple[str, list[tuple[str, Command]]]]: + """ + Sorts command in alphabetical order + """ + namespaced_commands = {} + for name, command in commands.items(): + key = self._application.extract_namespace(name, 1) + if not key: + key = "_global" + + if key in namespaced_commands: + namespaced_commands[key][name] = command + else: + namespaced_commands[key] = {name: command} + + for namespace, commands in namespaced_commands.items(): + namespaced_commands[namespace] = sorted( + commands.items(), key=lambda x: x[0] + ) + + namespaced_commands = sorted(namespaced_commands.items(), key=lambda x: x[0]) + + return namespaced_commands diff --git a/env/lib/python3.8/site-packages/cleo/descriptors/descriptor.py b/env/lib/python3.8/site-packages/cleo/descriptors/descriptor.py new file mode 100644 index 00000000..bfbcc2cf --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/descriptors/descriptor.py @@ -0,0 +1,54 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING +from typing import Any + +from cleo.application import Application +from cleo.commands.command import Command +from cleo.io.inputs.argument import Argument +from cleo.io.inputs.definition import Definition +from cleo.io.inputs.option import Option +from cleo.io.outputs.output 
import Type + + +if TYPE_CHECKING: + from cleo.io.io import IO + + +class Descriptor: + def __init__(self) -> None: + self._io: IO | None = None + + def describe(self, io: IO, obj: Any, **options: Any) -> None: + self._io = io + + if isinstance(obj, Argument): + self._describe_argument(obj, **options) + elif isinstance(obj, Option): + self._describe_option(obj, **options) + elif isinstance(obj, Definition): + self._describe_definition(obj, **options) + elif isinstance(obj, Command): + self._describe_command(obj, **options) + elif isinstance(obj, Application): + self._describe_application(obj, **options) + + def _write(self, content: str, decorated: bool = True) -> None: + self._io.write( + content, new_line=False, type=Type.NORMAL if decorated else Type.RAW + ) + + def _describe_argument(self, argument: Argument, **options: Any) -> None: + raise NotImplementedError() + + def _describe_option(self, option: Option, **options: Any) -> None: + raise NotImplementedError() + + def _describe_definition(self, definition: Definition, **options: Any) -> None: + raise NotImplementedError() + + def _describe_command(self, command: Command, **options: Any) -> None: + raise NotImplementedError() + + def _describe_application(self, application: Application, **options: Any) -> None: + raise NotImplementedError() diff --git a/env/lib/python3.8/site-packages/cleo/descriptors/text_descriptor.py b/env/lib/python3.8/site-packages/cleo/descriptors/text_descriptor.py new file mode 100644 index 00000000..b053ad4b --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/descriptors/text_descriptor.py @@ -0,0 +1,293 @@ +from __future__ import annotations + +import json +import re + +from typing import TYPE_CHECKING +from typing import Any +from typing import Sequence + +from cleo.commands.command import Command +from cleo.descriptors.descriptor import Descriptor +from cleo.formatters.formatter import Formatter +from cleo.io.inputs.definition import Definition + + +if TYPE_CHECKING: + from cleo.application import Application + from cleo.io.inputs.argument import Argument + from cleo.io.inputs.option import Option + + +class TextDescriptor(Descriptor): + def _describe_argument(self, argument: Argument, **options: Any) -> None: + if argument.default is not None and ( + not isinstance(argument.default, list) or argument.default + ): + default = " [default: {}]".format( + self._format_default_value(argument.default) + ) + else: + default = "" + + total_width = options.get("total_width", len(argument.name)) + + spacing_width = total_width - len(argument.name) + self._write( + " {} {}{}{}".format( + argument.name, + " " * spacing_width, + re.sub( + r"\s*[\r\n]\s*", + "\n" + " " * (total_width + 4), + argument.description, + ), + default, + ) + ) + + def _describe_option(self, option: Option, **options: Any) -> None: + if ( + option.accepts_value() + and option.default is not None + and (not isinstance(option.default, list) or option.default) + ): + default = " [default: {}]".format( + self._format_default_value(option.default) + ) + else: + default = "" + + value = "" + if option.accepts_value(): + value = "=" + option.name.upper() + + if not option.requires_value(): + value = "[" + value + "]" + + total_width = options.get( + "total_width", self._calculate_total_width_for_options([option]) + ) + + synopsis = "{}{}".format( + f"-{option.shortcut}, " if option.shortcut else " ", + f"--{option.name}{value}", + ) + + spacing_width = total_width - len(synopsis) + + self._write( + " {} {}{}{}{}".format( + synopsis, + " " * 
spacing_width, + re.sub( + r"\s*[\r\n]\s*", + "\n" + " " * (total_width + 4), + option.description, + ), + default, + " (multiple values allowed)" + if option.is_list() + else "", + ) + ) + + def _describe_definition(self, definition: Definition, **options: Any) -> None: + arguments = definition.arguments + definition_options = definition.options + total_width = self._calculate_total_width_for_options(definition_options) + + for argument in arguments: + total_width = max(total_width, len(argument.name)) + + if arguments: + self._write("Arguments:") + self._write("\n") + + for argument in arguments: + self._describe_argument(argument, total_width=total_width) + self._write("\n") + + if arguments and definition_options: + self._write("\n") + + if definition_options: + later_options = [] + + self._write("Options:") + + for option in definition_options: + if option.shortcut and len(option.shortcut) > 1: + later_options.append(option) + continue + + self._write("\n") + self._describe_option(option, total_width=total_width) + + for option in later_options: + self._write("\n") + self._describe_option(option, total_width=total_width) + + def _describe_command(self, command: Command, **options: Any) -> None: + command.merge_application_definition(False) + + description = command.description + if description: + self._write("Description:") + self._write("\n") + self._write(" " + description) + self._write("\n\n") + + self._write("Usage:") + for usage in [command.synopsis(True)] + command.aliases + command.usages: + self._write("\n") + self._write(" " + Formatter.escape(usage)) + + self._write("\n") + + definition = command.definition + if definition.options or definition.arguments: + self._write("\n") + self._describe_definition(definition, **options) + self._write("\n") + + help_text = command.processed_help + if help_text and help_text != description: + self._write("\n") + self._write("Help:") + self._write("\n") + self._write(" " + help_text.replace("\n", "\n ")) + self._write("\n") + + def _describe_application(self, application: Application, **options: Any) -> None: + from cleo.descriptors.application_description import ApplicationDescription + + described_namespace = options.get("namespace") + description = ApplicationDescription(application, namespace=described_namespace) + + help_text = application.help + if help_text: + self._write(f"{help_text}\n\n") + + self._write("Usage:\n") + self._write(" command [options] [arguments]\n\n") + + self._describe_definition(Definition(application.definition.options), **options) + + self._write("\n") + self._write("\n") + + commands = description.commands + namespaces = description.namespaces + + if described_namespace and namespaces: + described_namespace_info = list(namespaces.values())[0] + for name in described_namespace_info["commands"]: + commands[name] = description.command(name) + + # calculate max width based on available commands per namespace + all_commands = list(commands.keys()) + for namespace in namespaces.values(): + all_commands += namespace["commands"] + + width = self._get_column_width(all_commands) + if described_namespace: + self._write( + 'Available commands for the "{}" namespace:'.format( + described_namespace + ) + ) + else: + self._write("Available commands:") + + for namespace in namespaces.values(): + namespace["commands"] = [c for c in namespace["commands"] if c in commands] + + if not namespace["commands"]: + continue + + if ( + not described_namespace + and namespace["id"] != ApplicationDescription.GLOBAL_NAMESPACE + ): + 
self._write("\n") + self._write(" {}".format(namespace["id"])) + + for name in namespace["commands"]: + self._write("\n") + spacing_width = width - len(name) + command = commands[name] + command_aliases = ( + self._get_command_aliases_text(command) + if command.name == name + else "" + ) + self._write( + " {}{}{}".format( + name, " " * spacing_width, command_aliases + command.description + ) + ) + + self._write("\n") + + def _format_default_value(self, default: Any) -> str: + if isinstance(default, str): + default = Formatter.escape(default) + elif isinstance(default, list): + new_default = [] + for value in default: + if isinstance(value, str): + new_default.append(Formatter.escape(value)) + + default = new_default + elif isinstance(default, dict): + new_default = {} + for key, value in default.items(): + if isinstance(value, str): + new_default[key] = Formatter.escape(value) + + default = new_default + + return json.dumps(default).replace("\\\\", "\\") + + def _calculate_total_width_for_options(self, options: list[Option]) -> int: + total_width = 0 + + for option in options: + name_length = 1 + max(len(option.shortcut or ""), 1) + 4 + len(option.name) + + if option.accepts_value(): + value_length = 1 + len(option.name) + if not option.requires_value(): + value_length += 2 + + name_length += value_length + + total_width = max(total_width, name_length) + + return total_width + + def _get_column_width(self, commands: Sequence[Command | str]) -> int: + widths = [] + + for command in commands: + if isinstance(command, Command): + widths.append(len(command.name)) + for alias in command.aliases: + widths.append(len(alias)) + else: + widths.append(len(command)) + + if not widths: + return 0 + + return max(widths) + 2 + + def _get_command_aliases_text(self, command: Command) -> str: + text = "" + aliases = command.aliases + + if aliases: + text = "[" + "|".join(aliases) + "] " + + return text diff --git a/env/lib/python3.8/site-packages/cleo/events/__init__.py b/env/lib/python3.8/site-packages/cleo/events/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/events/console_command_event.py b/env/lib/python3.8/site-packages/cleo/events/console_command_event.py new file mode 100644 index 00000000..fe34387e --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/events/console_command_event.py @@ -0,0 +1,34 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING + +from cleo.events.console_event import ConsoleEvent + + +if TYPE_CHECKING: + from cleo.commands.command import Command + from cleo.io.io import IO + + +class ConsoleCommandEvent(ConsoleEvent): + """ + An event triggered before the command is executed. + + It allows to do things like skipping the command or changing the input. 
+ """ + + RETURN_CODE_DISABLED: int = 113 + + def __init__(self, command: Command | None, io: IO) -> None: + super().__init__(command, io) + + self._command_should_run = True + + def disable_command(self) -> None: + self._command_should_run = False + + def enable_command(self) -> None: + self._command_should_run = True + + def command_should_run(self) -> bool: + return self._command_should_run diff --git a/env/lib/python3.8/site-packages/cleo/events/console_error_event.py b/env/lib/python3.8/site-packages/cleo/events/console_error_event.py new file mode 100644 index 00000000..1795b40e --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/events/console_error_event.py @@ -0,0 +1,43 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING + +from cleo.events.console_event import ConsoleEvent +from cleo.exceptions import CleoException + + +if TYPE_CHECKING: + from cleo.commands.command import Command + from cleo.io.io import IO + + +class ConsoleErrorEvent(ConsoleEvent): + """ + An event triggered when an exception is raised during the execution of a command. + """ + + def __init__(self, command: Command | None, io: IO, error: Exception) -> None: + super().__init__(command, io) + + self._error = error + self._exit_code: int | None = None + + @property + def error(self) -> Exception: + return self._error + + @property + def exit_code(self) -> int: + if self._exit_code is not None: + return self._exit_code + + if isinstance(self._error, CleoException) and self._error.exit_code is not None: + return self._error.exit_code + + return 1 + + def set_error(self, error: Exception) -> None: + self._error = error + + def set_exit_code(self, exit_code: int) -> None: + self._exit_code = exit_code diff --git a/env/lib/python3.8/site-packages/cleo/events/console_event.py b/env/lib/python3.8/site-packages/cleo/events/console_event.py new file mode 100644 index 00000000..3436d985 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/events/console_event.py @@ -0,0 +1,30 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING + +from cleo.events.event import Event + + +if TYPE_CHECKING: + from cleo.commands.command import Command + from cleo.io.io import IO + + +class ConsoleEvent(Event): + """ + An event that gives access to the IO of a command. + """ + + def __init__(self, command: Command | None, io: IO) -> None: + super().__init__() + + self._command = command + self._io = io + + @property + def command(self) -> Command: + return self._command + + @property + def io(self) -> IO: + return self._io diff --git a/env/lib/python3.8/site-packages/cleo/events/console_events.py b/env/lib/python3.8/site-packages/cleo/events/console_events.py new file mode 100644 index 00000000..e9f1f7c9 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/events/console_events.py @@ -0,0 +1,21 @@ +# The COMMAND event allows to attach listeners before any command +# is executed. It also allows the modification of the command and IO +# before it's handed to the command. +from __future__ import annotations + + +COMMAND = "console.command" + +# The SIGNAL event allows some actions to be performed after +# the command execution is interrupted. +SIGNAL = "console.signal" + +# The TERMINATE event allows listeners to be attached after the command +# is executed by the console. +TERMINATE = "console.terminate" + +# The ERROR event occurs when an uncaught exception is raised. +# +# This event gives the ability to deal with the exception or to modify +# the raised exception. 
+ERROR = "console.error" diff --git a/env/lib/python3.8/site-packages/cleo/events/console_signal_event.py b/env/lib/python3.8/site-packages/cleo/events/console_signal_event.py new file mode 100644 index 00000000..a1bb02e6 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/events/console_signal_event.py @@ -0,0 +1,28 @@ +from __future__ import annotations + +import signal + +from typing import TYPE_CHECKING + +from cleo.events.console_event import ConsoleEvent + + +if TYPE_CHECKING: + from cleo.commands.command import Command + from cleo.io.io import IO + + +class ConsoleSignalEvent(ConsoleEvent): + """ + An event triggered by a system signal. + """ + + def __init__( + self, command: Command | None, io: IO, handling_signal: signal.Signals + ) -> None: + super().__init__(command, io) + self._handling_signal = handling_signal + + @property + def handling_signal(self) -> signal.Signals: + return self._handling_signal diff --git a/env/lib/python3.8/site-packages/cleo/events/console_terminate_event.py b/env/lib/python3.8/site-packages/cleo/events/console_terminate_event.py new file mode 100644 index 00000000..e9c13dd5 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/events/console_terminate_event.py @@ -0,0 +1,28 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING + +from cleo.events.console_event import ConsoleEvent + + +if TYPE_CHECKING: + from cleo.commands.command import Command + from cleo.io.io import IO + + +class ConsoleTerminateEvent(ConsoleEvent): + """ + An event triggered by after the execution of a command. + """ + + def __init__(self, command: Command | None, io: IO, exit_code: int) -> None: + super().__init__(command, io) + + self._exit_code = exit_code + + @property + def exit_code(self) -> int: + return self._exit_code + + def set_exit_code(self, exit_code: int) -> None: + self._exit_code = exit_code diff --git a/env/lib/python3.8/site-packages/cleo/events/event.py b/env/lib/python3.8/site-packages/cleo/events/event.py new file mode 100644 index 00000000..282d6ce4 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/events/event.py @@ -0,0 +1,16 @@ +from __future__ import annotations + + +class Event: + """ + Event + """ + + def __init__(self) -> None: + self._propagation_stopped = False + + def is_propagation_stopped(self) -> bool: + return self._propagation_stopped + + def stop_propagation(self) -> None: + self._propagation_stopped = True diff --git a/env/lib/python3.8/site-packages/cleo/events/event_dispatcher.py b/env/lib/python3.8/site-packages/cleo/events/event_dispatcher.py new file mode 100644 index 00000000..75d4f76c --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/events/event_dispatcher.py @@ -0,0 +1,96 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING +from typing import Callable + + +if TYPE_CHECKING: + from cleo.events.event import Event + + +class EventDispatcher: + def __init__(self) -> None: + self._listeners = {} + self._sorted = {} + + def dispatch(self, event: Event, event_name: str | None = None) -> Event: + if event_name is None: + event_name = event.__class__.__name__ + + listeners = self.get_listeners(event_name) + + if listeners: + self._do_dispatch(listeners, event_name, event) + + return event + + def get_listeners( + self, event_name: str | None = None + ) -> list[Callable] | dict[str, Callable]: + if event_name is not None: + if event_name not in self._listeners: + return [] + + if event_name not in self._sorted: + self._sort_listeners(event_name) + + return 
self._sorted[event_name] + + for event_name in self._listeners.keys(): + if event_name not in self._sorted: + self._sort_listeners(event_name) + + return self._sorted + + def get_listener_priority(self, event_name: str, listener: Callable) -> int | None: + if event_name not in self._listeners: + return + + for priority, listeners in self._listeners[event_name].items(): + for v in listeners: + if v == listener: + return priority + + def has_listeners(self, event_name: str | None = None) -> bool: + if event_name is not None: + if event_name not in self._listeners: + return False + + return len(self._listeners[event_name]) > 0 + + return any(self._listeners.values()) + + def add_listener( + self, event_name: str, listener: Callable, priority: int = 0 + ) -> None: + if event_name not in self._listeners: + self._listeners[event_name] = {} + + if priority not in self._listeners[event_name]: + self._listeners[event_name][priority] = [] + + self._listeners[event_name][priority].append(listener) + + if event_name in self._sorted: + del self._sorted[event_name] + + def _do_dispatch( + self, listeners: list[Callable], event_name: str, event: Event + ) -> None: + for listener in listeners: + if event.is_propagation_stopped(): + break + + listener(event, event_name, self) + + def _sort_listeners(self, event_name: str) -> None: + """ + Sorts the internal list of listeners for the given event by priority. + """ + self._sorted[event_name] = [] + + for _, listeners in sorted( + self._listeners[event_name].items(), key=lambda t: -t[0] + ): + for listener in listeners: + self._sorted[event_name].append(listener) diff --git a/env/lib/python3.8/site-packages/cleo/exceptions/__init__.py b/env/lib/python3.8/site-packages/cleo/exceptions/__init__.py new file mode 100644 index 00000000..41883ba5 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/exceptions/__init__.py @@ -0,0 +1,77 @@ +from __future__ import annotations + +from typing import List +from typing import Optional + +from cleo._utils import find_similar_names + + +class CleoException(Exception): + + exit_code: int | None = None + + +class CleoSimpleException(Exception): + + pass + + +class LogicException(CleoException): + + pass + + +class RuntimeException(CleoException): + + pass + + +class ValueException(CleoException): + + pass + + +class MissingArgumentsException(CleoSimpleException): + + pass + + +class NoSuchOptionException(CleoException): + + pass + + +class CommandNotFoundException(CleoSimpleException): + def __init__(self, name: str, commands: list[str] | None = None) -> None: + message = f'The command "{name}" does not exist.' + + if commands: + suggested_names = find_similar_names(name, commands) + + if suggested_names: + if len(suggested_names) == 1: + message += "\n\nDid you mean this?\n " + else: + message += "\n\nDid you mean one of these?\n " + + message += "\n ".join(suggested_names) + + super().__init__(message) + + +class NamespaceNotFoundException(CleoSimpleException): + def __init__(self, name: str, namespaces: list[str] | None = None) -> None: + message = f'There are no commands in the "{name}" namespace.' 
+ + if namespaces: + suggested_names = find_similar_names(name, namespaces) + + if suggested_names: + if len(suggested_names) == 1: + message += "\n\nDid you mean this?\n " + else: + message += "\n\nDid you mean one of these?\n " + + message += "\n ".join(suggested_names) + + super().__init__(message) diff --git a/env/lib/python3.8/site-packages/cleo/formatters/__init__.py b/env/lib/python3.8/site-packages/cleo/formatters/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/formatters/formatter.py b/env/lib/python3.8/site-packages/cleo/formatters/formatter.py new file mode 100644 index 00000000..6fe010c8 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/formatters/formatter.py @@ -0,0 +1,212 @@ +from __future__ import annotations + +import re + +from cleo.exceptions import ValueException +from cleo.formatters.style import Style +from cleo.formatters.style_stack import StyleStack + + +class Formatter: + + TAG_REGEX = re.compile(r"(?ix)<(([a-z](?:[^<>]*)) | /([a-z](?:[^<>]*))?)>") + + _inline_styles_cache: dict[str, Style] = {} + + def __init__( + self, decorated: bool = False, styles: dict[str, Style] | None = None + ) -> None: + self._decorated = decorated + self._styles: dict[str, Style] = {} + + self.set_style("error", Style("red", options=["bold"])) + self.set_style("info", Style("blue")) + self.set_style("comment", Style("green")) + self.set_style("question", Style("cyan")) + self.set_style("c1", Style("cyan")) + self.set_style("c2", Style("default", options=["bold"])) + self.set_style("b", Style("default", options=["bold"])) + + if styles is None: + styles = {} + + for name, style in styles.items(): + self.set_style(name, style) + + self._style_stack = StyleStack() + + @classmethod + def escape(cls, text: str) -> str: + """ + Escapes "<" special char in given text. + """ + text = re.sub(r"([^\\]?)<", "\\1\\<", text) + + return cls.escape_trailing_backslash(text) + + @classmethod + def escape_trailing_backslash(cls, text: str) -> str: + """ + Escapes trailing "\" in given text. 
+ """ + if text and text[-1] == "\\": + length = len(text) + text = text.rstrip("\\") + text = text.replace("\0", "") + text += "\0" * (length - len(text)) + + return text + + def decorated(self, decorated: bool = True) -> None: + self._decorated = decorated + + def is_decorated(self) -> bool: + return self._decorated + + def set_style(self, name: str, style: Style) -> None: + self._styles[name] = style + + def has_style(self, name: str) -> bool: + return name in self._styles + + def style(self, name: str) -> Style: + if not self.has_style(name): + raise ValueException(f'Undefined style: "{name}"') + + return self._styles[name] + + def format(self, message: str) -> str: + return self.format_and_wrap(message, 0) + + def format_and_wrap(self, message: str, width: int) -> str: + offset = 0 + output = "" + current_line_length = 0 + for match in self.TAG_REGEX.finditer(message): + pos = match.start() + text = match.group(0) + + if pos != 0 and message[pos - 1] == "\\": + continue + + # add the text up to the next tag + formatted, current_line_length = self._apply_current_style( + message[offset:pos], output, width, current_line_length + ) + output += formatted + offset = pos + len(text) + + # Opening tag + open = text[1] != "/" + if open: + tag = match.group(1) + else: + tag = match.group(2) + + style = None + if tag: + style = self._create_style_from_string(tag) + + if not open and not tag: + # + self._style_stack.pop() + elif style is None: + formatted, current_line_length = self._apply_current_style( + text, output, width, current_line_length + ) + output += formatted + elif open: + self._style_stack.push(style) + else: + self._style_stack.pop(style) + + formatted, current_line_length = self._apply_current_style( + message[offset:], output, width, current_line_length + ) + output += formatted + + if output.find("\0") != -1: + return output.replace("\0", "\\").replace("\\<", "<") + + return output.replace("\\<", "<") + + def remove_format(self, text: str) -> str: + decorated = self._decorated + self._decorated = False + + text = self.format(text) + text = re.sub(r"\033\[[^m]*m", "", text) + + self._decorated = decorated + + return text + + def _create_style_from_string(self, string: str) -> Style | None: + if string in self._styles: + return self._styles[string] + + if string in self._inline_styles_cache: + return self._inline_styles_cache[string] + + matches = re.findall("([^=]+)=([^;]+)(;|$)", string.lower()) + if not matches: + return None + + style = Style() + + for match in matches: + if match[0] == "fg": + style.foreground(match[1]) + elif match[0] == "bg": + style.background(match[1]) + else: + try: + for option in match[1].split(","): + style.set_option(option.strip()) + except ValueError: + return None + + self._inline_styles_cache[string] = style + + return style + + def _apply_current_style( + self, text: str, current: str, width: int, current_line_length: int + ) -> tuple[str, int]: + if not text: + return "", current_line_length + + if not width: + if self.is_decorated(): + return self._style_stack.current.apply(text), current_line_length + + return text, current_line_length + + if not current_line_length and current: + text = text.lstrip() + + if current_line_length: + i = width - current_line_length + prefix = text[:i] + "\n" + text = text[i:] + else: + prefix = "" + + m = re.match(r"(\n)$", text) + text = prefix + re.sub(rf"([^\n]{{{width}}})\ *", "\\1\n", text) + text = text.rstrip("\n") + (m.group(1) if m else "") + + if not current_line_length and current and current[-1] != 
"\n": + text = "\n" + text + + lines = text.split("\n") + for line in lines: + current_line_length += len(line) + if current_line_length >= width: + current_line_length = 0 + + if self.is_decorated(): + for i, line in enumerate(lines): + lines[i] = self._style_stack.current.apply(line) + + return "\n".join(lines), current_line_length diff --git a/env/lib/python3.8/site-packages/cleo/formatters/style.py b/env/lib/python3.8/site-packages/cleo/formatters/style.py new file mode 100644 index 00000000..d273240f --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/formatters/style.py @@ -0,0 +1,73 @@ +from __future__ import annotations + +from cleo.color import Color + + +class Style: + def __init__( + self, + foreground: str | None = None, + background: str | None = None, + options: list[str] | None = None, + ) -> None: + self._foreground = foreground or "" + self._background = background or "" + self._options = options or [] + + self._color = Color(self._foreground, self._background, self._options) + + def foreground(self, foreground: str) -> Style: + self._color = Color(foreground, self._background, self._options) + self._foreground = foreground + + return self + + def background(self, background: str) -> Style: + self._color = Color(self._foreground, background, self._options) + self._background = background + + return self + + def bold(self, bold: bool = True) -> Style: + return self.set_option("bold") if bold else self.unset_option("bold") + + def dark(self, dark: bool = True) -> Style: + return self.set_option("dark") if dark else self.unset_option("dark") + + def underlines(self, underlined: bool = True) -> Style: + return ( + self.set_option("underline") + if underlined + else self.unset_option("underline") + ) + + def italic(self, italic: bool = True) -> Style: + return self.set_option("italic") if italic else self.unset_option("italic") + + def blinking(self, blinking: bool = True) -> Style: + return self.set_option("blink") if blinking else self.unset_option("blink") + + def inverse(self, inverse: bool = True) -> Style: + return self.set_option("reverse") if inverse else self.unset_option("reverse") + + def hidden(self, hidden: bool = True) -> Style: + return self.set_option("conceal") if hidden else self.unset_option("conceal") + + def set_option(self, option: str) -> Style: + self._options.append(option) + self._color = Color(self._foreground, self._background, self._options) + + return self + + def unset_option(self, option: str) -> Style: + try: + index = self._options.index(option) + except IndexError: + return self + + del self._options[index] + + self._color = Color(self._foreground, self._background, self._options) + + def apply(self, text: str) -> str: + return self._color.apply(text) diff --git a/env/lib/python3.8/site-packages/cleo/formatters/style_stack.py b/env/lib/python3.8/site-packages/cleo/formatters/style_stack.py new file mode 100644 index 00000000..f4037148 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/formatters/style_stack.py @@ -0,0 +1,41 @@ +from __future__ import annotations + +from cleo.exceptions import ValueException +from cleo.formatters.style import Style + + +class StyleStack: + def __init__(self, empty_style: Style | None = None): + if empty_style is None: + empty_style = Style() + + self._empty_style = empty_style + self._styles: list[Style] = [] + + @property + def current(self) -> Style: + if not self._styles: + return self._empty_style + + return self._styles[-1] + + def reset(self) -> None: + self._styles = [] + + def push(self, 
style: Style) -> None: + self._styles.append(style) + + def pop(self, style: Style | None = None) -> Style: + if not self._styles: + return self._empty_style + + if style is None: + return self._styles.pop() + + for i, stacked_style in reversed(list(enumerate(self._styles))): + if style.apply("") == stacked_style.apply(""): + self._styles = self._styles[:i] + + return stacked_style + + raise ValueException("Invalid nested tag found") diff --git a/env/lib/python3.8/site-packages/cleo/helpers.py b/env/lib/python3.8/site-packages/cleo/helpers.py new file mode 100644 index 00000000..a922aa7d --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/helpers.py @@ -0,0 +1,42 @@ +from __future__ import annotations + +from typing import Any + +from cleo.io.inputs.argument import Argument +from cleo.io.inputs.option import Option + + +def argument( + name: str, + description: str | None = None, + optional: bool = False, + multiple: bool = False, + default: Any | None = None, +) -> Argument: + return Argument( + name, + required=not optional, + is_list=multiple, + description=description, + default=default, + ) + + +def option( + long_name: str, + short_name: str | None = None, + description: str | None = None, + flag: bool = True, + value_required: bool = True, + multiple: bool = False, + default: Any | None = None, +) -> Option: + return Option( + long_name, + short_name, + flag=flag, + requires_value=value_required, + is_list=multiple, + description=description, + default=default, + ) diff --git a/env/lib/python3.8/site-packages/cleo/io/__init__.py b/env/lib/python3.8/site-packages/cleo/io/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/io/buffered_io.py b/env/lib/python3.8/site-packages/cleo/io/buffered_io.py new file mode 100644 index 00000000..713dbb81 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/buffered_io.py @@ -0,0 +1,57 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING +from typing import cast + +from cleo.io.io import IO +from cleo.io.outputs.buffered_output import BufferedOutput + + +if TYPE_CHECKING: + from cleo.io.inputs.input import Input + + +class BufferedIO(IO): + def __init__( + self, + input: Input | None = None, + decorated: bool = False, + supports_utf8: bool = True, + ) -> None: + super().__init__( + input, + BufferedOutput(decorated=decorated, supports_utf8=supports_utf8), + BufferedOutput(decorated=decorated, supports_utf8=supports_utf8), + ) + + self._output = cast(BufferedOutput, self._output) + self._error_output = cast(BufferedOutput, self._error_output) + + def fetch_output(self) -> str: + return self._output.fetch() + + def fetch_error(self) -> str: + return self._error_output.fetch() + + def clear(self) -> None: + self._output.clear() + self._error_output.clear() + + def clear_output(self) -> None: + self._output.clear() + + def clear_error(self) -> None: + self._error_output.clear() + + def supports_utf8(self) -> bool: + return self._output.supports_utf8() + + def clear_user_input(self) -> None: + self._input.stream.truncate(0) + self._input.stream.seek(0) + + def set_user_input(self, user_input: str) -> None: + self.clear_user_input() + + self._input.stream.write(user_input) + self._input.stream.seek(0) diff --git a/env/lib/python3.8/site-packages/cleo/io/inputs/__init__.py b/env/lib/python3.8/site-packages/cleo/io/inputs/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/io/inputs/argument.py 
b/env/lib/python3.8/site-packages/cleo/io/inputs/argument.py new file mode 100644 index 00000000..5938385c --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/inputs/argument.py @@ -0,0 +1,70 @@ +from __future__ import annotations + +from typing import Any + +from cleo.exceptions import LogicException + + +class Argument: + """ + A command line argument. + """ + + def __init__( + self, + name: str, + required: bool = True, + is_list: bool = False, + description: str | None = None, + default: Any | None = None, + ) -> None: + self._name = name + self._required = required + self._is_list = is_list + self._description = description + self._default = None + + self.set_default(default) + + @property + def name(self) -> str: + return self._name + + @property + def default(self) -> str: + return self._default + + @property + def description(self) -> str: + return self._description + + def is_required(self) -> bool: + return self._required + + def is_list(self) -> bool: + return self._is_list + + def set_default(self, default: Any | None = None) -> None: + if self._required and default is not None: + raise LogicException("Cannot set a default value for required arguments") + + if self._is_list: + if default is None: + default = [] + elif not isinstance(default, list): + raise LogicException( + "A default value for a list argument must be a list" + ) + + self._default = default + + def __repr__(self) -> str: + return ( + "Argument({}, required={}, is_list={}, description={}, default={})".format( + repr(self._name), + self._required, + self._is_list, + repr(self._description), + repr(self._default), + ) + ) diff --git a/env/lib/python3.8/site-packages/cleo/io/inputs/argv_input.py b/env/lib/python3.8/site-packages/cleo/io/inputs/argv_input.py new file mode 100644 index 00000000..44e50dd0 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/inputs/argv_input.py @@ -0,0 +1,301 @@ +from __future__ import annotations + +import sys + +from typing import TYPE_CHECKING +from typing import Any + +from cleo.exceptions import NoSuchOptionException +from cleo.exceptions import RuntimeException +from cleo.io.inputs.input import Input + + +if TYPE_CHECKING: + from cleo.io.inputs.definition import Definition + + +class ArgvInput(Input): + """ + Represents an input coming from the command line. + """ + + def __init__( + self, argv: list[str] | None = None, definition: Definition | None = None + ) -> None: + if argv is None: + argv = sys.argv + + argv = argv[:] + + # Strip the application name + try: + self._script_name = argv.pop(0) + except IndexError: + self._script_name = None + + self._tokens = argv + self._parsed = [] + + super().__init__(definition=definition) + + @property + def first_argument(self) -> str | None: + is_option = False + + for i, token in enumerate(self._tokens): + if token and token[0] == "-": + if token.find("=") != -1 or len(self._tokens) <= (i + 1): + continue + + # If it's a long option, consider that + # everything after "--" is the option name. + # Otherwise, use the last character + # (if it's a short option set, only the last one + # can take a value with space separator). 
+ if len(token) > 1 and token[1] == "-": + name = token[2:] + else: + name = token[-1] + + if name not in self._options and not self._definition.has_shortcut( + name + ): + # noop + continue + + if name not in self._options: + name = self._definition.shortcut_to_name(name) + + if name in self._options and self._tokens[i + 1] == self._options[name]: + is_option = True + + continue + + if is_option: + is_option = False + continue + + return token + + return + + @property + def script_name(self) -> str | None: + return self._script_name + + def has_parameter_option( + self, values: str | list[str], only_params: bool = False + ) -> bool: + """ + Returns true if the raw parameters (not parsed) contain a value. + """ + if not isinstance(values, list): + values = [values] + + for token in self._tokens: + if only_params and token == "--": + return False + + for value in values: + # Options with values: + # For long options, test for '--option=' at beginning + # For short options, test for '-o' at beginning + if value.find("--") == 0: + leading = value + "=" + else: + leading = value + + if token == value or leading != "" and token.find(leading) == 0: + return True + + return False + + def parameter_option( + self, + values: str | list[str], + default: Any = False, + only_params: bool = False, + ) -> Any: + if not isinstance(values, list): + values = [values] + + tokens = self._tokens[:] + while len(tokens) > 0: + token = tokens.pop(0) + if only_params and token == "--": + return default + + for value in values: + if token == value: + try: + return tokens.pop(0) + except IndexError: + return + + # Options with values: + # For long options, test for '--option=' at beginning + # For short options, test for '-o' at beginning + if value.find("--") == 0: + leading = value + "=" + else: + leading = value + + if token == value or leading != "" and token.find(leading) == 0: + return token[len(leading) :] + + return False + + def _set_tokens(self, tokens: list[str]) -> None: + self._tokens = tokens + + def _parse(self) -> None: + parse_options = True + self._parsed = self._tokens[:] + + try: + token = self._parsed.pop(0) + except IndexError: + token = None + + while token is not None: + if parse_options and token == "": + self._parse_argument(token) + elif parse_options and token == "--": + parse_options = False + elif parse_options and token.find("--") == 0: + self._parse_long_option(token) + elif parse_options and token[0] == "-" and token != "-": + self._parse_short_option(token) + else: + self._parse_argument(token) + + try: + token = self._parsed.pop(0) + except IndexError: + token = None + + def _parse_short_option(self, token: str) -> None: + name = token[1:] + + if len(name) > 1: + if ( + self._definition.has_shortcut(name[0]) + and self._definition.option_for_shortcut(name[0]).accepts_value() + ): + # An option with a value and no space + self._add_short_option(name[0], name[1:]) + else: + self._parse_short_option_set(name) + else: + self._add_short_option(name, None) + + def _parse_short_option_set(self, name: str) -> None: + length = len(name) + for i in range(length): + if not self._definition.has_shortcut(name[i]): + raise RuntimeException(f'The option "{name[i]}" does not exist') + + option = self._definition.option_for_shortcut(name[i]) + if option.accepts_value(): + self._add_long_option( + option.name, None if i == length - 1 else name[i + 1 :] + ) + + break + + self._add_long_option(option.name, None) + + def _parse_long_option(self, token: str) -> None: + name = token[2:] + + pos =
name.find("=") + if pos != -1: + value = name[pos + 1 :] + if not value: + self._parsed.insert(0, value) + + self._add_long_option(name[0:pos], value) + else: + self._add_long_option(name, None) + + def _parse_argument(self, token: str) -> None: + count = len(self._arguments) + + # If the input is expecting another argument, add it + if self._definition.has_argument(count): + argument = self._definition.argument(count) + self._arguments[argument.name] = [token] if argument.is_list() else token + # If the last argument is a list, append the token to it + elif ( + self._definition.has_argument(count - 1) + and self._definition.argument(count - 1).is_list() + ): + argument = self._definition.argument(count - 1) + self._arguments[argument.name].append(token) + # Unexpected argument + else: + all = self._definition.arguments.copy() + command_name = None + argument = all[0] + if argument and argument.name == "command": + command_name = self._arguments.get("command") + del all[0] + + if all: + if command_name: + message = 'Too many arguments to "{}" command, expected arguments "{}"'.format( + command_name, '" "'.join([a.name for a in all]) + ) + else: + message = 'Too many arguments, expected arguments "{}"'.format( + '" "'.join([a.name for a in all]) + ) + elif command_name: + message = 'No arguments expected for "{}" command, got "{}"'.format( + command_name, token + ) + else: + message = f'No arguments expected, got "{token}"' + + raise RuntimeException(message) + + def _add_short_option(self, shortcut: str, value: Any) -> None: + if not self._definition.has_shortcut(shortcut): + raise NoSuchOptionException(f'The option "-{shortcut}" does not exist') + + self._add_long_option( + self._definition.option_for_shortcut(shortcut).name, value + ) + + def _add_long_option(self, name: str, value: Any) -> None: + if not self._definition.has_option(name): + raise NoSuchOptionException(f'The option "--{name}" does not exist') + + option = self._definition.option(name) + + if value is not None and not option.accepts_value(): + raise RuntimeException(f'The "--{name}" option does not accept a value') + + if value in ["", None] and option.accepts_value() and self._parsed: + # If the option accepts a value, either required or optional, + # we check if there is one + next_token = self._parsed.pop(0) + if (next_token and next_token[0] != "-") or next_token in ["", None]: + value = next_token + else: + self._parsed.insert(0, next_token) + + if value is None: + if option.requires_value(): + raise RuntimeException(f'The "--{name}" option requires a value') + + if not option.is_list() and option.is_flag(): + value = True + + if option.is_list(): + if name not in self._options: + self._options[name] = [] + + self._options[name].append(value) + else: + self._options[name] = value diff --git a/env/lib/python3.8/site-packages/cleo/io/inputs/definition.py b/env/lib/python3.8/site-packages/cleo/io/inputs/definition.py new file mode 100644 index 00000000..a6feef23 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/inputs/definition.py @@ -0,0 +1,224 @@ +from __future__ import annotations + +import sys + +from typing import TYPE_CHECKING +from typing import Any +from typing import Sequence + +from cleo.exceptions import LogicException +from cleo.io.inputs.option import Option + + +if TYPE_CHECKING: + from cleo.io.inputs.argument import Argument + + +class Definition: + """ + A Definition represents a set of command line arguments and options. 
+ """ + + def __init__(self, definition: Sequence[Argument | Option] | None = None) -> None: + self._arguments: dict[str, Argument] = {} + self._required_count = 0 + self._has_a_list_argument = False + self._has_optional = False + self._options: dict[str, Option] = {} + self._shortcuts: dict[str, str] = {} + + if definition is None: + definition = [] + + self.set_definition(definition) + + @property + def arguments(self) -> list[Argument]: + return list(self._arguments.values()) + + @property + def argument_count(self) -> int: + if self._has_a_list_argument: + return sys.maxsize + + return len(self._arguments) + + @property + def required_argument_count(self) -> int: + return self._required_count + + @property + def argument_defaults(self) -> dict[str, Any]: + values = {} + + for argument in self._arguments.values(): + values[argument.name] = argument.default + + return values + + @property + def options(self) -> list[Option]: + return list(self._options.values()) + + @property + def option_defaults(self) -> dict[str, Any]: + values = {} + for option in self._options.values(): + values[option.name] = option.default + + return values + + def set_definition(self, definition: Sequence[Argument | Option]) -> None: + arguments = [] + options = [] + + for item in definition: + if isinstance(item, Option): + options.append(item) + else: + arguments.append(item) + + self.set_arguments(arguments) + self.set_options(options) + + def set_arguments(self, arguments: list[Argument]) -> None: + self._arguments = {} + self._required_count = 0 + self._has_a_list_argument = False + self._has_optional = False + self.add_arguments(arguments) + + def add_arguments(self, arguments: list[Argument]) -> None: + for argument in arguments: + self.add_argument(argument) + + def add_argument(self, argument: Argument) -> None: + if argument.name in self._arguments: + raise LogicException( + f'An argument with name "{argument.name}" already exists' + ) + + if self._has_a_list_argument: + raise LogicException("Cannot add an argument after a list argument") + + if argument.is_required() and self._has_optional: + raise LogicException("Cannot add a required argument after an optional one") + + if argument.is_list(): + self._has_a_list_argument = True + + if argument.is_required(): + self._required_count += 1 + else: + self._has_optional = True + + self._arguments[argument.name] = argument + + def argument(self, name: str | int) -> Argument: + if not self.has_argument(name): + raise ValueError(f'The "{name}" argument does not exist') + + if isinstance(name, int): + arguments = list(self._arguments.values()) + else: + arguments = self._arguments + + return arguments[name] + + def has_argument(self, name: str | int) -> bool: + if isinstance(name, int): + arguments = list(self._arguments.values()) + else: + arguments = self._arguments + + try: + arguments[name] + except (KeyError, IndexError): + return False + + return True + + def set_options(self, options: list[Option]) -> None: + self._options = {} + self._shortcuts = {} + self.add_options(options) + + def add_options(self, options: list[Option]) -> None: + for option in options: + self.add_option(option) + + def add_option(self, option: Option) -> None: + if option.name in self._options and option != self._options[option.name]: + raise LogicException(f'An option named "{option.name}" already exists') + + if option.shortcut: + for shortcut in option.shortcut.split("|"): + if shortcut in self._shortcuts and option != self._shortcuts[shortcut]: + raise LogicException( + f'An 
option with shortcut "{shortcut}" already exists' + ) + + self._options[option.name] = option + + if option.shortcut: + for shortcut in option.shortcut.split("|"): + self._shortcuts[shortcut] = option.name + + def option(self, name: str) -> Option: + if not self.has_option(name): + raise ValueError(f'The "--{name}" option does not exist') + + return self._options[name] + + def has_option(self, name: str) -> bool: + return name in self._options + + def has_shortcut(self, shortcut: str) -> bool: + return shortcut in self._shortcuts + + def option_for_shortcut(self, shortcut: str) -> Option: + return self._options[self.shortcut_to_name(shortcut)] + + def shortcut_to_name(self, shortcut: str) -> str: + if shortcut not in self._shortcuts: + raise ValueError(f'The "-{shortcut}" option does not exist') + + return self._shortcuts[shortcut] + + def synopsis(self, short: bool = False) -> str: + elements = [] + + if short and self._options: + elements.append("[options]") + elif not short: + for option in self._options.values(): + value = "" + if option.accepts_value(): + value = " {}{}{}".format( + "[" if not option.requires_value() else "", + option.name.upper(), + "]" if not option.requires_value() else "", + ) + + shortcut = "" + if option.shortcut: + shortcut = f"-{option.shortcut}|" + + elements.append(f"[{shortcut}--{option.name}{value}]") + + if elements and self._arguments: + elements.append("[--]") + + tail = "" + for argument in self._arguments.values(): + element = f"<{argument.name}>" + if argument.is_list(): + element += "..." + + if not argument.is_required(): + element = "[" + element + tail += "]" + + elements.append(element) + + return " ".join(elements) + tail diff --git a/env/lib/python3.8/site-packages/cleo/io/inputs/input.py b/env/lib/python3.8/site-packages/cleo/io/inputs/input.py new file mode 100644 index 00000000..c776cbb2 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/inputs/input.py @@ -0,0 +1,181 @@ +from __future__ import annotations + +import re + +from typing import Any +from typing import TextIO + +from cleo._compat import shell_quote +from cleo.exceptions import MissingArgumentsException +from cleo.exceptions import ValueException +from cleo.io.inputs.definition import Definition + + +class Input: + """ + This class is the base class for concrete Input implementations. + """ + + def __init__(self, definition: Definition | None = None) -> None: + self._definition = None + self._stream = None + self._options = {} + self._arguments = {} + self._interactive = True + + if definition is None: + self._definition = Definition() + else: + self.bind(definition) + self.validate() + + @property + def arguments(self) -> dict[str, Any]: + return {**self._definition.argument_defaults, **self._arguments} + + @property + def options(self) -> dict[str, Any]: + return {**self._definition.option_defaults, **self._options} + + @property + def stream(self) -> TextIO: + return self._stream + + @property + def first_argument(self) -> str | None: + """ + Returns the first argument from the raw parameters (not parsed). + """ + raise NotImplementedError() + + @property + def script_name(self) -> str | None: + raise NotImplementedError() + + def read(self, length: int, default: str | None = None) -> str: + """ + Reads the given amount of characters from the input stream.
+ """ + if not self._interactive: + return default + + return self._stream.read(length) + + def read_line(self, length: int | None = None, default: str | None = None) -> str: + """ + Reads a line from the input stream. + """ + if not self._interactive: + return default + + return self._stream.readline(length) + + def close(self) -> None: + """ + Closes the input. + """ + self._stream.close() + + def is_closed(self) -> bool: + """ + Returns whether the input is closed. + """ + return self._stream.is_closed() + + def is_interactive(self) -> bool: + return self._interactive + + def interactive(self, interactive: bool = True) -> None: + self._interactive = interactive + + def bind(self, definition: Definition) -> None: + """ + Binds the current Input instance with + the given definition's arguments and options. + """ + self._arguments = {} + self._options = {} + self._definition = definition + + self._parse() + + def validate(self) -> None: + missing_arguments = [] + + for argument in self._definition.arguments: + if argument.name not in self._arguments and argument.is_required(): + missing_arguments.append(argument.name) + + if missing_arguments: + raise MissingArgumentsException( + 'Not enough arguments (missing: "{}")'.format( + ", ".join(missing_arguments) + ) + ) + + def argument(self, name: str) -> Any: + if not self._definition.has_argument(name): + raise ValueException(f'The argument "{name}" does not exist') + + if name in self._arguments: + return self._arguments[name] + + return self._definition.argument(name).default + + def set_argument(self, name: str, value: Any) -> None: + if not self._definition.has_argument(name): + raise ValueException(f'The argument "{name}" does not exist') + + self._arguments[name] = value + + def has_argument(self, name: str) -> bool: + return self._definition.has_argument(name) + + def option(self, name: str) -> Any: + if not self._definition.has_option(name): + raise ValueException(f'The option "--{name}" does not exist') + + if name in self._options: + return self._options[name] + + return self._definition.option(name).default + + def set_option(self, name: str, value: Any) -> None: + if not self._definition.has_option(name): + raise ValueException(f'The option "--{name}" does not exist') + + self._options[name] = value + + def has_option(self, name: str) -> bool: + return self._definition.has_option(name) + + def escape_token(self, token: str) -> str: + if re.match(r"^[\w-]+$"): + return token + + return shell_quote(token) + + def set_stream(self, stream: TextIO) -> None: + self._stream = stream + + def has_parameter_option( + self, values: str | list[str], only_params: bool = False + ) -> bool: + """ + Returns true if the raw parameters (not parsed) contain a value. + """ + raise NotImplementedError() + + def parameter_option( + self, + values: str | list[str], + only_params: bool = False, + default: Any = False, + ) -> Any: + """ + Returns the value of a raw option (not parsed). + """ + raise NotImplementedError() + + def _parse(self) -> None: + raise NotImplementedError() diff --git a/env/lib/python3.8/site-packages/cleo/io/inputs/option.py b/env/lib/python3.8/site-packages/cleo/io/inputs/option.py new file mode 100644 index 00000000..6cf63663 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/inputs/option.py @@ -0,0 +1,97 @@ +from __future__ import annotations + +import re + +from typing import Any + +from cleo.exceptions import LogicException +from cleo.exceptions import ValueException + + +class Option: + """ + A command line option. 
+ """ + + def __init__( + self, + name: str, + shortcut: str | None = None, + flag: bool = True, + requires_value: bool = True, + is_list: bool = False, + description: str | None = None, + default: Any | None = None, + ) -> None: + if name.startswith("--"): + name = name[2:] + + if not name: + raise ValueException("An option name cannot be empty") + + if shortcut is not None: + shortcuts = re.split(r"\|-?", shortcut.lstrip("-")) + shortcuts = [s for s in shortcuts if s] + shortcut = "|".join(shortcuts) + + if not shortcut: + raise ValueException("An option shortcut cannot be empty") + + self._name = name + self._shortcut = shortcut + self._flag = flag + self._requires_value = requires_value + self._is_list = is_list + self._description = description or "" + self._default = None + + if self._is_list and self._flag: + raise LogicException("A flag option cannot be a list as well") + + self.set_default(default) + + @property + def name(self) -> str: + return self._name + + @property + def shortcut(self) -> str | None: + return self._shortcut + + @property + def description(self) -> str: + return self._description + + @property + def default(self) -> Any | None: + return self._default + + def is_flag(self) -> bool: + return self._flag + + def accepts_value(self) -> bool: + return not self._flag + + def requires_value(self) -> bool: + return not self._flag and self._requires_value + + def is_list(self) -> bool: + return self._is_list + + def set_default(self, default: Any | None = None) -> None: + if self._flag and default is not None: + raise LogicException("A flag option cannot have a default value") + + if self._is_list: + if default is None: + default = [] + elif not isinstance(default, list): + raise LogicException("A default value for a list option must be a list") + + if self._flag: + default = False + + self._default = default + + def __repr__(self) -> str: + return f"Option({self._name})" diff --git a/env/lib/python3.8/site-packages/cleo/io/inputs/string_input.py b/env/lib/python3.8/site-packages/cleo/io/inputs/string_input.py new file mode 100644 index 00000000..de894b63 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/inputs/string_input.py @@ -0,0 +1,18 @@ +from __future__ import annotations + +from cleo.io.inputs.argv_input import ArgvInput +from cleo.io.inputs.token_parser import TokenParser + + +class StringInput(ArgvInput): + """ + Represents an input provided as a string + """ + + def __init__(self, input: str) -> None: + super().__init__([]) + + self._set_tokens(self._tokenize(input)) + + def _tokenize(self, input: str) -> list[str]: + return TokenParser().parse(input) diff --git a/env/lib/python3.8/site-packages/cleo/io/inputs/token_parser.py b/env/lib/python3.8/site-packages/cleo/io/inputs/token_parser.py new file mode 100644 index 00000000..79ed90db --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/inputs/token_parser.py @@ -0,0 +1,116 @@ +from __future__ import annotations + + +class TokenParser: + """ + Parses tokens from a string passed to StringArgs. 
+ """ + + def __init__(self) -> None: + self._string: str = "" + self._cursor: int = 0 + self._current: str | None = None + self._next_: str | None = None + + def parse(self, string: str) -> list[str]: + self._string = string + self._cursor = 0 + self._current = None + if len(string) > 0: + self._current = string[0] + + self._next_ = None + if len(string) > 1: + self._next_ = string[1] + + tokens = self._parse() + + return tokens + + def _parse(self) -> list[str]: + tokens = [] + + while self._is_valid(): + if self._current.isspace(): + # Skip spaces + self._next() + + continue + + if self._is_valid(): + tokens.append(self._parse_token()) + + return tokens + + def _is_valid(self) -> bool: + return self._current is not None + + def _next(self) -> None: + """ + Advances the cursor to the next position. + """ + if not self._is_valid(): + return + + self._cursor += 1 + self._current = self._next_ + + if self._cursor + 1 < len(self._string): + self._next_ = self._string[self._cursor + 1] + else: + self._next_ = None + + def _parse_token(self) -> str: + token = "" + + while self._is_valid(): + if self._current.isspace(): + self._next() + + break + + if self._current == "\\": + token += self._parse_escape_sequence() + elif self._current in ["'", '"']: + token += self._parse_quoted_string() + else: + token += self._current + self._next() + + return token + + def _parse_quoted_string(self) -> str: + string = "" + delimiter = self._current + + # Skip first delimiter + self._next() + while self._is_valid(): + if self._current == delimiter: + # Skip last delimiter + self._next() + + break + + if self._current == "\\": + string += self._parse_escape_sequence() + elif self._current == '"': + string += f'"{self._parse_quoted_string()}"' + elif self._current == "'": + string += f"'{self._parse_quoted_string()}'" + else: + string += self._current + self._next() + + return string + + def _parse_escape_sequence(self) -> str: + if self._next_ in ['"', "'"]: + sequence = self._next_ + else: + sequence = "\\" + self._next_ + + self._next() + self._next() + + return sequence diff --git a/env/lib/python3.8/site-packages/cleo/io/io.py b/env/lib/python3.8/site-packages/cleo/io/io.py new file mode 100644 index 00000000..c228a6f6 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/io.py @@ -0,0 +1,140 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING +from typing import Iterable + +from cleo.io.outputs.output import Type as OutputType +from cleo.io.outputs.output import Verbosity + + +if TYPE_CHECKING: + from cleo.io.inputs.input import Input + from cleo.io.outputs.output import Output + from cleo.io.outputs.section_output import SectionOutput + + +class IO: + def __init__(self, input: Input, output: Output, error_output: Output) -> None: + self._input = input + self._output = output + self._error_output = error_output + + @property + def input(self) -> Input: + return self._input + + @property + def output(self) -> Output: + return self._output + + @property + def error_output(self) -> Output: + return self._error_output + + def read(self, length: int, default: str | None = None) -> str: + """ + Reads the given amount of characters from the input stream. + """ + return self._input.read(length, default=default) + + def read_line(self, length: int | None = None, default: str | None = None) -> str: + """ + Reads a line from the input stream. 
+ """ + return self._input.read_line(length=length, default=default) + + def write_line( + self, + messages: str | Iterable[str], + verbosity: Verbosity = Verbosity.NORMAL, + type: OutputType = OutputType.NORMAL, + ) -> None: + self._output.write_line(messages, verbosity=verbosity, type=type) + + def write( + self, + messages: str | Iterable[str], + new_line: bool = False, + verbosity: Verbosity = Verbosity.NORMAL, + type: OutputType = OutputType.NORMAL, + ) -> None: + self._output.write(messages, new_line=new_line, verbosity=verbosity, type=type) + + def write_error_line( + self, + messages: str | Iterable[str], + verbosity: Verbosity = Verbosity.NORMAL, + type: OutputType = OutputType.NORMAL, + ) -> None: + self._error_output.write_line(messages, verbosity=verbosity, type=type) + + def write_error( + self, + messages: str | Iterable[str], + new_line: bool = False, + verbosity: Verbosity = Verbosity.NORMAL, + type: OutputType = OutputType.NORMAL, + ) -> None: + self._error_output.write( + messages, new_line=new_line, verbosity=verbosity, type=type + ) + + def overwrite(self, messages: str | Iterable[str]) -> None: + from cleo.cursor import Cursor + + cursor = Cursor(self._output) + cursor.move_to_column(1) + cursor.clear_line() + self.write(messages) + + def overwrite_error(self, messages: str | Iterable[str]) -> None: + from cleo.cursor import Cursor + + cursor = Cursor(self._error_output) + cursor.move_to_column(1) + cursor.clear_line() + self.write_error(messages) + + def flush(self) -> None: + self._output.flush() + + def is_interactive(self) -> bool: + return self._input.is_interactive() + + def interactive(self, interactive: bool = True) -> None: + self._input.interactive(interactive) + + def decorated(self, decorated: bool = True) -> None: + self._output.decorated(decorated) + self._error_output.decorated(decorated) + + def is_decorated(self) -> bool: + return self._output.is_decorated() + + def supports_utf8(self) -> bool: + return self._output.supports_utf8() + + def set_verbosity(self, verbosity: Verbosity) -> None: + self._output.set_verbosity(verbosity) + self._error_output.set_verbosity(verbosity) + + def is_verbose(self) -> bool: + return self.output.is_verbose() + + def is_very_verbose(self) -> bool: + return self.output.is_very_verbose() + + def is_debug(self) -> bool: + return self.output.is_debug() + + def set_input(self, input: Input) -> None: + self._input = input + + def with_input(self, input: Input) -> IO: + return self.__class__(input, self._output, self._error_output) + + def remove_format(self, text: str) -> str: + return self._output.remove_format(text) + + def section(self) -> SectionOutput: + return self._output.section() diff --git a/env/lib/python3.8/site-packages/cleo/io/null_io.py b/env/lib/python3.8/site-packages/cleo/io/null_io.py new file mode 100644 index 00000000..0a6b9ca1 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/null_io.py @@ -0,0 +1,19 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING + +from cleo.io.inputs.string_input import StringInput +from cleo.io.io import IO +from cleo.io.outputs.null_output import NullOutput + + +if TYPE_CHECKING: + from cleo.io.inputs.input import Input + + +class NullIO(IO): + def __init__(self, input: Input | None = None) -> None: + if input is None: + input = StringInput("") + + super().__init__(input, NullOutput(), NullOutput()) diff --git a/env/lib/python3.8/site-packages/cleo/io/outputs/__init__.py b/env/lib/python3.8/site-packages/cleo/io/outputs/__init__.py new file mode 
100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/io/outputs/buffered_output.py b/env/lib/python3.8/site-packages/cleo/io/outputs/buffered_output.py new file mode 100644 index 00000000..993c4ab8 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/outputs/buffered_output.py @@ -0,0 +1,50 @@ +from __future__ import annotations + +from io import StringIO + +from cleo.io.outputs.output import Output +from cleo.io.outputs.section_output import SectionOutput + + +class BufferedOutput(Output): + def __init__(self, decorated: bool = False, supports_utf8: bool = True) -> None: + super().__init__(decorated=decorated) + + self._buffer = StringIO() + self._supports_utf8 = supports_utf8 + + def fetch(self) -> str: + """ + Empties the buffer and returns its content. + """ + content = self._buffer.getvalue() + self._buffer = StringIO() + + return content + + def clear(self) -> None: + """ + Empties the buffer. + """ + self._buffer = StringIO() + + def supports_utf8(self) -> bool: + return self._supports_utf8 + + def set_supports_utf8(self, supports_utf8: bool) -> None: + self._supports_utf8 = supports_utf8 + + def section(self) -> SectionOutput: + return SectionOutput( + self._buffer, + self._section_outputs, + verbosity=self.verbosity, + decorated=self.is_decorated(), + formatter=self.formatter, + ) + + def _write(self, message: str, new_line: bool = False) -> None: + self._buffer.write(message) + + if new_line: + self._buffer.write("\n") diff --git a/env/lib/python3.8/site-packages/cleo/io/outputs/null_output.py b/env/lib/python3.8/site-packages/cleo/io/outputs/null_output.py new file mode 100644 index 00000000..08026706 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/outputs/null_output.py @@ -0,0 +1,60 @@ +from __future__ import annotations + +from typing import Iterable + +from cleo.io.outputs.output import Output +from cleo.io.outputs.output import Type +from cleo.io.outputs.output import Verbosity + + +class NullOutput(Output): + @property + def verbosity(self) -> Verbosity: + return Verbosity.QUIET + + def is_decorated(self) -> bool: + return False + + def decorated(self, decorated: bool = True) -> None: + pass + + def supports_utf8(self) -> bool: + return True + + def set_verbosity(self, verbosity: Verbosity) -> None: + pass + + def is_quiet(self) -> bool: + return True + + def is_verbose(self) -> bool: + return False + + def is_very_verbose(self) -> bool: + return False + + def is_debug(self) -> bool: + return False + + def write_line( + self, + messages: str | Iterable[str], + verbosity: Verbosity = Verbosity.NORMAL, + type: Type = Type.NORMAL, + ) -> None: + pass + + def write( + self, + messages: str | Iterable[str], + new_line: bool = False, + verbosity: Verbosity = Verbosity.NORMAL, + type: Type = Type.NORMAL, + ) -> None: + pass + + def flush(self) -> None: + pass + + def _write(self, message: str, new_line: bool = False) -> None: + pass diff --git a/env/lib/python3.8/site-packages/cleo/io/outputs/output.py b/env/lib/python3.8/site-packages/cleo/io/outputs/output.py new file mode 100644 index 00000000..8d7a73bb --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/outputs/output.py @@ -0,0 +1,124 @@ +from __future__ import annotations + +from enum import Enum +from typing import TYPE_CHECKING +from typing import Iterable + +from cleo._utils import strip_tags +from cleo.formatters.formatter import Formatter + + +if TYPE_CHECKING: + from cleo.io.outputs.section_output import SectionOutput + + +class Verbosity(Enum): + + QUIET: int = 
16 + NORMAL: int = 32 + VERBOSE: int = 64 + VERY_VERBOSE: int = 128 + DEBUG: int = 256 + + +class Type(Enum): + + NORMAL: int = 1 + RAW: int = 2 + PLAIN: int = 4 + + +class Output: + def __init__( + self, + verbosity: Verbosity = Verbosity.NORMAL, + decorated: bool = False, + formatter: Formatter | None = None, + ) -> None: + self._verbosity: Verbosity = verbosity + if formatter is None: + formatter = Formatter() + + self._formatter = formatter + self._formatter.decorated(decorated) + + self._section_outputs: list[SectionOutput] = [] + + @property + def formatter(self) -> Formatter: + return self._formatter + + @property + def verbosity(self) -> Verbosity: + return self._verbosity + + def set_formatter(self, formatter: Formatter) -> None: + self._formatter = formatter + + def is_decorated(self) -> bool: + return self._formatter.is_decorated() + + def decorated(self, decorated: bool = True) -> None: + self._formatter.decorated(decorated) + + def supports_utf8(self) -> bool: + """ + Returns whether the stream supports the UTF-8 encoding. + """ + return True + + def set_verbosity(self, verbosity: Verbosity) -> None: + self._verbosity = verbosity + + def is_quiet(self) -> bool: + return self._verbosity == Verbosity.QUIET + + def is_verbose(self) -> bool: + return self._verbosity.value >= Verbosity.VERBOSE.value + + def is_very_verbose(self) -> bool: + return self._verbosity.value >= Verbosity.VERY_VERBOSE.value + + def is_debug(self) -> bool: + return self._verbosity == Verbosity.DEBUG + + def write_line( + self, + messages: str | Iterable[str], + verbosity: Verbosity = Verbosity.NORMAL, + type: Type = Type.NORMAL, + ) -> None: + self.write(messages, new_line=True, verbosity=verbosity, type=type) + + def write( + self, + messages: str | Iterable[str], + new_line: bool = False, + verbosity: Verbosity = Verbosity.NORMAL, + type: Type = Type.NORMAL, + ) -> None: + if isinstance(messages, str): + messages = [messages] + + if verbosity.value > self.verbosity.value: + return + + for message in messages: + if type == Type.NORMAL: + message = self._formatter.format(message) + elif type == Type.PLAIN: + message = strip_tags(self._formatter.format(message)) + + self._write(message, new_line=new_line) + + def flush(self) -> None: + pass + + def remove_format(self, text: str) -> str: + return self.formatter.remove_format(text) + + def section(self) -> SectionOutput: + raise NotImplementedError() + + def _write(self, message: str, new_line: bool = False) -> None: + raise NotImplementedError() diff --git a/env/lib/python3.8/site-packages/cleo/io/outputs/section_output.py b/env/lib/python3.8/site-packages/cleo/io/outputs/section_output.py new file mode 100644 index 00000000..ea99fc55 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/outputs/section_output.py @@ -0,0 +1,106 @@ +from __future__ import annotations + +import math + +from typing import TYPE_CHECKING +from typing import TextIO + +from cleo.io.outputs.output import Verbosity +from cleo.io.outputs.stream_output import StreamOutput +from cleo.terminal import Terminal + + +if TYPE_CHECKING: + from cleo.formatters.formatter import Formatter + + +class SectionOutput(StreamOutput): + def __init__( + self, + stream: TextIO, + sections: list[SectionOutput], + verbosity: Verbosity = Verbosity.NORMAL, + decorated: bool | None = None, + formatter: Formatter | None = None, + ) -> None: + super().__init__( + stream, verbosity=verbosity, decorated=decorated, formatter=formatter + ) + + self._content: list[str] = [] + self._lines = 0 + 
sections.insert(0, self) + self._sections = sections + self._terminal = Terminal() + + @property + def content(self) -> str: + return "".join(self._content) + + @property + def lines(self) -> int: + return self._lines + + def clear(self, lines: int | None = None) -> None: + if not self._content or not self.is_decorated(): + return + + if lines: + # Multiply lines by 2 to cater for each new line added between content + del self._content[-(lines * 2) :] + else: + lines = self._lines + self._content = [] + + self._lines -= lines + + super()._write( + self._pop_stream_content_until_current_section(lines), new_line=False + ) + + def overwrite(self, message: str) -> None: + self.clear() + self.write_line(message) + + def add_content(self, content: str) -> None: + for line_content in content.split("\n"): + self._lines += ( + math.ceil( + len(self.remove_format(line_content).replace("\t", " ")) + / self._terminal.width + ) + or 1 + ) + self._content.append(line_content) + self._content.append("\n") + + def _write(self, message: str, new_line: bool = False) -> None: + if not self.is_decorated(): + return super()._write(message, new_line=new_line) + + erased_content = self._pop_stream_content_until_current_section() + + self.add_content(message) + + super()._write(message, new_line=True) + super()._write(erased_content, new_line=False) + + def _pop_stream_content_until_current_section( + self, lines_to_clear_count: int = 0 + ) -> str: + erased_content = [] + + for section in self._sections: + if section is self: + break + + lines_to_clear_count += section.lines + erased_content.append(section.content) + + if lines_to_clear_count > 0: + # Move cursor up n lines + super()._write(f"\x1b[{lines_to_clear_count}A", new_line=False) + # Erase to end of screen + super()._write("\x1b[0J", new_line=False) + + return "".join(reversed(erased_content)) diff --git a/env/lib/python3.8/site-packages/cleo/io/outputs/stream_output.py b/env/lib/python3.8/site-packages/cleo/io/outputs/stream_output.py new file mode 100644 index 00000000..d7f16637 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/io/outputs/stream_output.py @@ -0,0 +1,155 @@ +from __future__ import annotations + +import codecs +import io +import locale +import os +import platform +import sys + +from typing import TYPE_CHECKING +from typing import TextIO + +from cleo.io.outputs.output import Output +from cleo.io.outputs.output import Verbosity + + +if TYPE_CHECKING: + from cleo.formatters.formatter import Formatter + from cleo.io.outputs.section_output import SectionOutput + + +class StreamOutput(Output): + def __init__( + self, + stream: TextIO, + verbosity: Verbosity = Verbosity.NORMAL, + decorated: bool | None = None, + formatter: Formatter | None = None, + ) -> None: + self._stream = stream + self._supports_utf8 = None + + if decorated is None: + decorated = self._has_color_support() + + super().__init__(verbosity=verbosity, decorated=decorated, formatter=formatter) + + @property + def stream(self) -> TextIO: + return self._stream + + def supports_utf8(self) -> bool: + """ + Returns whether the stream supports the UTF-8 encoding. 
+ """ + if self._supports_utf8 is not None: + return self._supports_utf8 + + encoding = self._stream.encoding + if encoding is None: + encoding = locale.getpreferredencoding(False) + + try: + encoding = codecs.lookup(encoding).name + except Exception: + encoding = "utf-8" + + self._supports_utf8 = encoding == "utf-8" + + return self._supports_utf8 + + def flush(self) -> None: + self._stream.flush() + + def section(self) -> SectionOutput: + from cleo.io.outputs.section_output import SectionOutput + + return SectionOutput( + self._stream, + self._section_outputs, + verbosity=self.verbosity, + decorated=self.is_decorated(), + formatter=self.formatter, + ) + + def _write(self, message: str, new_line: bool = False) -> None: + if new_line: + message += "\n" + + self._stream.write(message) + self._stream.flush() + + def _has_color_support(self) -> bool: + # Follow https://no-color.org/ + if "NO_COLOR" in os.environ: + return False + + if os.getenv("TERM_PROGRAM") == "Hyper": + return True + + if platform.system().lower() == "windows": + shell_supported = ( + os.getenv("ANSICON") is not None + or os.getenv("ConEmuANSI") == "ON" + or os.getenv("TERM") == "xterm" + ) + + if shell_supported: + return True + + if not hasattr(self._stream, "fileno"): + return False + + # Checking for Windows version + # If we have a compatible version + # activate color support + windows_version = sys.getwindowsversion() + major, build = windows_version[0], windows_version[2] + if (major, build) < (10, 14393): + return False + + # Activate colors if possible + import ctypes + import ctypes.wintypes + + FILE_TYPE_CHAR = 0x0002 + FILE_TYPE_REMOTE = 0x8000 + ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004 + + kernel32 = ctypes.windll.kernel32 + + fileno = self._stream.fileno() + + if fileno == 1: + h = kernel32.GetStdHandle(-11) + elif fileno == 2: + h = kernel32.GetStdHandle(-12) + else: + return False + + if h is None or h == ctypes.wintypes.HANDLE(-1): + return False + + if (kernel32.GetFileType(h) & ~FILE_TYPE_REMOTE) != FILE_TYPE_CHAR: + return False + + mode = ctypes.wintypes.DWORD() + if not kernel32.GetConsoleMode(h, ctypes.byref(mode)): + return False + + if (mode.value & ENABLE_VIRTUAL_TERMINAL_PROCESSING) == 0: + kernel32.SetConsoleMode( + h, mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING + ) + return True + + return False + + if not hasattr(self._stream, "fileno"): + return False + + try: + return os.isatty(self._stream.fileno()) + except io.UnsupportedOperation: + return False diff --git a/env/lib/python3.8/site-packages/cleo/loaders/__init__.py b/env/lib/python3.8/site-packages/cleo/loaders/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/loaders/command_loader.py b/env/lib/python3.8/site-packages/cleo/loaders/command_loader.py new file mode 100644 index 00000000..5e6d81b2 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/loaders/command_loader.py @@ -0,0 +1,28 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING + + +if TYPE_CHECKING: + from cleo.commands.command import Command + + +class CommandLoader: + @property + def names(self) -> list[str]: + """ + All registered command names. + """ + raise NotImplementedError() + + def get(self, name: str) -> Command: + """ + Loads a command. + """ + raise NotImplementedError() + + def has(self, name: str) -> bool: + """ + Checks whether a command exists or not. 
+ """ + raise NotImplementedError() diff --git a/env/lib/python3.8/site-packages/cleo/loaders/factory_command_loader.py b/env/lib/python3.8/site-packages/cleo/loaders/factory_command_loader.py new file mode 100644 index 00000000..f1dde598 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/loaders/factory_command_loader.py @@ -0,0 +1,34 @@ +from __future__ import annotations + +from typing import Callable + +from cleo.commands.command import Command +from cleo.exceptions import CommandNotFoundException +from cleo.loaders.command_loader import CommandLoader + + +Factory = Callable[[], Command] + + +class FactoryCommandLoader(CommandLoader): + """ + A simple command loader using factories to instantiate commands lazily. + """ + + def __init__(self, factories: dict[str, Factory]) -> None: + self._factories = factories + + @property + def names(self) -> list[str]: + return list(self._factories.keys()) + + def has(self, name: str) -> bool: + return name in self._factories + + def get(self, name: str) -> Command: + if name not in self._factories: + raise CommandNotFoundException(name) + + factory = self._factories[name] + + return factory() diff --git a/env/lib/python3.8/site-packages/cleo/parser.py b/env/lib/python3.8/site-packages/cleo/parser.py new file mode 100644 index 00000000..6dd608bb --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/parser.py @@ -0,0 +1,185 @@ +from __future__ import annotations + +import os +import re + +from typing import Any + +from cleo.io.inputs.argument import Argument +from cleo.io.inputs.option import Option + + +class Parser: + @classmethod + def parse(cls, expression: str) -> dict[str, Any]: + """ + Parse the given console command definition into a dict. + """ + parsed: dict[str, Any] = {"name": None, "arguments": [], "options": []} + + if not expression.strip(): + raise ValueError("Console command signature is empty.") + + expression = expression.replace(os.linesep, "") + + matches = re.match(r"[^\s]+", expression) + + if not matches: + raise ValueError("Unable to determine command name from signature.") + + name = matches.group(0) + parsed["name"] = name + + tokens = re.findall(r"\{\s*(.*?)\s*\}", expression) + + if tokens: + parsed.update(cls._parameters(tokens)) + + return parsed + + @classmethod + def _parameters(cls, tokens: list[str]) -> dict[str, Any]: + """ + Extract all of the parameters from the tokens. + """ + arguments = [] + options = [] + + for token in tokens: + if not token.startswith("--"): + arguments.append(cls._parse_argument(token)) + else: + options.append(cls._parse_option(token)) + + return {"arguments": arguments, "options": options} + + @classmethod + def _parse_argument(cls, token: str) -> Argument: + """ + Parse an argument expression. 
+ """ + description = "" + + if " : " in token: + token, description = tuple(token.split(" : ", 2)) + + token = token.strip() + + description = description.strip() + + # Checking validator: + matches = re.match(r"(.*)\((.*?)\)", token) + if matches: + token = matches.group(1).strip() + + if token.endswith("?*"): + return Argument( + token.rstrip("?*"), + required=False, + is_list=True, + description=description, + ) + elif token.endswith("*"): + return Argument( + token.rstrip("*"), + is_list=True, + description=description, + ) + elif token.endswith("?"): + return Argument( + token.rstrip("?"), + required=False, + description=description, + ) + + matches = re.match(r"(.+)=(.+)", token) + if matches: + return Argument( + matches.group(1), + required=False, + description=description, + default=matches.group(2), + ) + + return Argument( + token, + description=description, + ) + + @classmethod + def _parse_option(cls, token: str) -> Option: + """ + Parse an option expression. + """ + description = "" + + if " : " in token: + token, description = tuple(token.split(" : ", 2)) + + token = token.strip() + + description = description.strip() + + # Checking validator: + matches = re.match(r"(.*)\((.*?)\)", token) + if matches: + token = matches.group(1).strip() + + shortcut = None + + matches = re.split(r"\s*\|\s*", token, 2) + + if len(matches) > 1: + shortcut = matches[0].lstrip("-") + token = matches[1] + else: + token = token.lstrip("-") + + default = None + flag = True + requires_value = False + is_list = False + + if token.endswith("=*"): + flag = False + is_list = True + requires_value = True + token = token.rstrip("=*") + elif token.endswith("=?*"): + flag = False + is_list = True + token = token.rstrip("=?*") + elif token.endswith("=?"): + flag = False + token = token.rstrip("=?") + elif token.endswith("="): + flag = False + requires_value = True + token = token.rstrip("=") + else: + matches = re.match(r"(.+)(=[?*]*)(.+)", token) + if matches: + flag = False + token = matches.group(1) + operator = matches.group(2) + default = matches.group(3) + + if operator == "=*": + requires_value = True + is_list = True + elif operator == "=?*": + is_list = True + elif operator == "=?": + requires_value = False + elif operator == "=": + requires_value = True + + return Option( + token, + shortcut, + flag=flag, + requires_value=requires_value, + is_list=is_list, + description=description, + default=default, + ) diff --git a/env/lib/python3.8/site-packages/cleo/terminal.py b/env/lib/python3.8/site-packages/cleo/terminal.py new file mode 100644 index 00000000..f39271df --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/terminal.py @@ -0,0 +1,142 @@ +from __future__ import annotations + +import os +import platform +import shlex +import struct +import subprocess + + +class Terminal: + """ + Represents the current terminal. 
+ """ + + _width = None + _height = None + + @property + def width(self) -> int: + width = os.getenv("COLUMNS", "").strip() + if width: + return int(width) + + if self.__class__._width is None: + self._init_dimensions() + + return self.__class__._width + + @property + def height(self) -> int: + height = os.getenv("LINES", "").strip() + if height: + return int(height) + + if self.__class__._height is None: + self._init_dimensions() + + return self.__class__._height + + @classmethod + def _init_dimensions(cls) -> None: + current_os = platform.system().lower() + dimensions = None + + if current_os.lower() == "windows": + dimensions = cls._get_terminal_size_windows() + if dimensions is None: + dimensions = cls._get_terminal_size_tput() + elif current_os.lower() in ["linux", "darwin"] or current_os.startswith( + "cygwin" + ): + dimensions = cls._get_terminal_size_linux() + + if dimensions is None: + dimensions = 80, 25 + + # Ensure we have a valid width + if dimensions[0] <= 0: + dimensions = 80, dimensions[1] + + cls._width, cls._height = dimensions + + @classmethod + def _get_terminal_size_windows(cls) -> tuple[int, int] | None: + try: + from ctypes import create_string_buffer + from ctypes import windll + + # stdin handle is -10 + # stdout handle is -11 + # stderr handle is -12 + h = windll.kernel32.GetStdHandle(-12) + csbi = create_string_buffer(22) + res = windll.kernel32.GetConsoleScreenBufferInfo(h, csbi) + if res: + ( + bufx, + bufy, + curx, + cury, + wattr, + left, + top, + right, + bottom, + maxx, + maxy, + ) = struct.unpack("hhhhHhhhhhh", csbi.raw) + sizex = right - left + 1 + sizey = bottom - top + 1 + return sizex, sizey + except Exception: + return + + @classmethod + def _get_terminal_size_tput(cls) -> tuple[int, int] | None: + # get terminal width + # src: http://stackoverflow.com/questions/263890/how-do-i-find-the-width-height-of-a-terminal-window + try: + cols = int( + subprocess.check_output( + shlex.split("tput cols"), stderr=subprocess.STDOUT + ) + ) + rows = int( + subprocess.check_output( + shlex.split("tput lines"), stderr=subprocess.STDOUT + ) + ) + + return cols, rows + except Exception: + pass + + @classmethod + def _get_terminal_size_linux(cls) -> tuple[int, int] | None: + def ioctl_GWINSZ(fd): + try: + import fcntl + import termios + + cr = struct.unpack("hh", fcntl.ioctl(fd, termios.TIOCGWINSZ, "1234")) + return cr + except Exception: + pass + + cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2) + if not cr: + try: + fd = os.open(os.ctermid(), os.O_RDONLY) + cr = ioctl_GWINSZ(fd) + os.close(fd) + except Exception: + pass + + if not cr: + try: + cr = (os.environ["LINES"], os.environ["COLUMNS"]) + except Exception: + return + + return int(cr[1]), int(cr[0]) diff --git a/env/lib/python3.8/site-packages/cleo/testers/__init__.py b/env/lib/python3.8/site-packages/cleo/testers/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/testers/application_tester.py b/env/lib/python3.8/site-packages/cleo/testers/application_tester.py new file mode 100644 index 00000000..3c3fac16 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/testers/application_tester.py @@ -0,0 +1,73 @@ +from __future__ import annotations + +from io import StringIO +from typing import TYPE_CHECKING + +from cleo.io.buffered_io import BufferedIO +from cleo.io.inputs.string_input import StringInput + + +if TYPE_CHECKING: + from cleo.application import Application + from cleo.io.outputs.output import Verbosity + + +class ApplicationTester: + """ 
+ Eases the testing of console applications. + """ + + def __init__(self, application: Application) -> None: + self._application = application + self._application.auto_exits(False) + self._io = BufferedIO() + self._status_code = 0 + + @property + def application(self) -> Application: + return self._application + + @property + def io(self) -> BufferedIO: + return self._io + + @property + def status_code(self) -> int: + return self._status_code + + def execute( + self, + args: str | None = "", + inputs: str | None = None, + interactive: bool | None = None, + verbosity: Verbosity | None = None, + decorated: bool = False, + supports_utf8: bool = True, + ) -> int: + """ + Executes the command + """ + self._io.clear() + + input = StringInput(args) + self._io.set_input(input) + self._io.decorated(decorated) + self._io.output.set_supports_utf8(supports_utf8) + self._io.error_output.set_supports_utf8(supports_utf8) + + if inputs is not None: + self._io.input.set_stream(StringIO(inputs)) + + if interactive is not None: + self._io.interactive(interactive) + + if verbosity is not None: + self._io.set_verbosity(verbosity) + + self._status_code = self._application.run( + self._io.input, + self._io.output, + self._io.error_output, + ) + + return self._status_code diff --git a/env/lib/python3.8/site-packages/cleo/testers/command_tester.py b/env/lib/python3.8/site-packages/cleo/testers/command_tester.py new file mode 100644 index 00000000..1f06acc6 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/testers/command_tester.py @@ -0,0 +1,83 @@ +from __future__ import annotations + +from io import StringIO +from typing import TYPE_CHECKING + +from cleo.io.buffered_io import BufferedIO +from cleo.io.inputs.argv_input import ArgvInput +from cleo.io.inputs.string_input import StringInput + + +if TYPE_CHECKING: + from cleo.commands.command import Command + from cleo.io.outputs.output import Verbosity + + +class CommandTester: + """ + Eases the testing of console commands. 
+ """ + + def __init__(self, command: Command) -> None: + self._command = command + self._io = BufferedIO() + self._inputs = [] + self._status_code = None + + @property + def command(self) -> Command: + return self._command + + @property + def io(self) -> BufferedIO: + return self._io + + @property + def status_code(self) -> int: + return self._status_code + + def execute( + self, + args: str | None = "", + inputs: str | None = None, + interactive: bool | None = None, + verbosity: Verbosity | None = None, + decorated: bool | None = None, + supports_utf8: bool = True, + ) -> int: + """ + Executes the command + """ + application = self._command.application + + input = StringInput(args) + if application is not None and application.definition.has_argument("command"): + name = self._command.name + if " " in name: + # If the command is namespaced we rearrange + # the input to parse it as a single argument + argv = [application.name, self._command.name] + input._tokens + + input = ArgvInput(argv) + else: + input = StringInput(name + " " + args) + + self._io.set_input(input) + self._io.output.set_supports_utf8(supports_utf8) + self._io.error_output.set_supports_utf8(supports_utf8) + + if inputs is not None: + self._io.input.set_stream(StringIO(inputs)) + + if interactive is not None: + self._io.interactive(interactive) + + if verbosity is not None: + self._io.set_verbosity(verbosity) + + if decorated is not None: + self._io.decorated(decorated) + + self._status_code = self._command.run(self._io) + + return self._status_code diff --git a/env/lib/python3.8/site-packages/cleo/ui/__init__.py b/env/lib/python3.8/site-packages/cleo/ui/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cleo/ui/choice_question.py b/env/lib/python3.8/site-packages/cleo/ui/choice_question.py new file mode 100644 index 00000000..371cc62f --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/ui/choice_question.py @@ -0,0 +1,158 @@ +from __future__ import annotations + +import re + +from typing import TYPE_CHECKING +from typing import Any + +from cleo.exceptions import ValueException +from cleo.ui.question import Question + + +if TYPE_CHECKING: + from cleo.io.io import IO + + +class SelectChoiceValidator: + def __init__(self, question: Question) -> None: + """ + Constructor. + """ + self._question = question + self._values = question.choices + + def validate(self, selected: str | int) -> str | None: + """ + Validate a choice. + """ + # Collapse all spaces. + if isinstance(selected, int): + selected = str(selected) + + if selected is None: + return None + + selected_choices = selected.replace(" ", "") + + if self._question.supports_multiple_choices(): + # Check for a separated comma values + if not re.match("^[a-zA-Z0-9_-]+(?:,[a-zA-Z0-9_-]+)*$", selected_choices): + raise ValueException(self._question.error_message.format(selected)) + + selected_choices = selected_choices.split(",") + else: + selected_choices = [selected] + + multiselect_choices = [] + for value in selected_choices: + results = [] + + for key, choice in enumerate(self._values): + if choice == value: + results.append(key) + + if len(results) > 1: + raise ValueException( + "The provided answer is ambiguous. 
Value should be one of {}.".format( + " or ".join(str(r) for r in results) + ) + ) + + try: + result = self._values.index(value) + result = self._values[result] + except ValueError: + try: + value = int(value) + + if 0 <= value < len(self._values): + result = self._values[value] + else: + result = False + except ValueError: + result = False + + if result is False: + raise ValueException(self._question.error_message.format(value)) + + multiselect_choices.append(result) + + if self._question.supports_multiple_choices(): + return multiselect_choices + + return multiselect_choices[0] + + +class ChoiceQuestion(Question): + """ + Multiple choice question. + """ + + def __init__( + self, question: str, choices: list[str], default: Any | None = None + ) -> None: + super().__init__(question, default) + + self._multi_select = False + self._choices = choices + self._validator = SelectChoiceValidator(self).validate + self._autocomplete_values = choices + self._prompt = " > " + self._error_message = 'Value "{}" is invalid' + + @property + def error_message(self) -> str: + return self._error_message + + @property + def choices(self) -> list[str]: + return self._choices + + def supports_multiple_choices(self) -> bool: + return self._multi_select + + def set_multi_select(self, multi_select: bool) -> None: + self._multi_select = multi_select + + def set_error_message(self, message: str) -> None: + self._error_message = message + + def _write_prompt(self, io: IO) -> None: + """ + Outputs the question prompt. + """ + message = self._question + default = self._default + + if default is None: + message = f"{message}: " + elif self._multi_select: + choices = self._choices + default = default.split(",") + + for i, value in enumerate(default): + default[i] = choices[int(value.strip())] + + message = "{} [{}]:".format( + message, ", ".join(default) + ) + else: + choices = self._choices + message = "{} [{}]:".format( + message, choices[int(default)] + ) + + if len(self._choices) > 1: + width = max(*map(len, [str(k) for k, _ in enumerate(self._choices)])) + else: + width = 1 + + messages = [message] + for key, value in enumerate(self._choices): + messages.append(" [{:{}}] {}".format(key, width, value)) + + io.write_error_line("\n".join(messages)) + + message = self._prompt + + io.write_error(message) diff --git a/env/lib/python3.8/site-packages/cleo/ui/component.py b/env/lib/python3.8/site-packages/cleo/ui/component.py new file mode 100644 index 00000000..3bcf8d22 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/ui/component.py @@ -0,0 +1,6 @@ +from __future__ import annotations + + +class Component: + + name = None diff --git a/env/lib/python3.8/site-packages/cleo/ui/confirmation_question.py b/env/lib/python3.8/site-packages/cleo/ui/confirmation_question.py new file mode 100644 index 00000000..7a3b3f04 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/ui/confirmation_question.py @@ -0,0 +1,47 @@ +from __future__ import annotations + +import re + +from typing import TYPE_CHECKING + +from cleo.ui.question import Question + + +if TYPE_CHECKING: + from cleo.io.io import IO + + +class ConfirmationQuestion(Question): + """ + Represents a yes/no question. 
+ """ + + def __init__( + self, question: str, default: bool = True, true_answer_regex: str = "(?i)^y" + ) -> None: + super().__init__(question, default) + + self._true_answer_regex = true_answer_regex + self._normalizer = self._get_default_normalizer + + def _write_prompt(self, io: IO) -> None: + message = self._question + + message = "{} (yes/no) [{}] ".format( + message, "yes" if self._default else "no" + ) + + io.write_error(message) + + def _get_default_normalizer(self, answer: str | bool) -> bool: + """ + Default answer normalizer. + """ + if isinstance(answer, bool): + return answer + + answer_is_true = re.match(self._true_answer_regex, answer) is not None + if self.default is False: + return answer and answer_is_true + + return not answer or answer_is_true diff --git a/env/lib/python3.8/site-packages/cleo/ui/exception_trace.py b/env/lib/python3.8/site-packages/cleo/ui/exception_trace.py new file mode 100644 index 00000000..b9dad735 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/ui/exception_trace.py @@ -0,0 +1,461 @@ +from __future__ import annotations + +import ast +import inspect +import io +import keyword +import os +import re +import sys +import tokenize + +from typing import TYPE_CHECKING + +from crashtest.frame_collection import FrameCollection + +from cleo.formatters.formatter import Formatter + + +if TYPE_CHECKING: + from crashtest.frame import Frame + from crashtest.solution_providers.solution_provider_repository import ( + SolutionProviderRepository, + ) + + from cleo.io.io import IO + from cleo.io.outputs.output import Output + + +class Highlighter: + + TOKEN_DEFAULT = "token_default" + TOKEN_COMMENT = "token_comment" + TOKEN_STRING = "token_string" + TOKEN_NUMBER = "token_number" + TOKEN_KEYWORD = "token_keyword" + TOKEN_BUILTIN = "token_builtin" + TOKEN_OP = "token_op" + LINE_MARKER = "line_marker" + LINE_NUMBER = "line_number" + + DEFAULT_THEME = { + TOKEN_STRING: "fg=yellow;options=bold", + TOKEN_NUMBER: "fg=blue;options=bold", + TOKEN_COMMENT: "fg=default;options=dark,italic", + TOKEN_KEYWORD: "fg=magenta;options=bold", + TOKEN_BUILTIN: "fg=default;options=bold", + TOKEN_DEFAULT: "fg=default", + TOKEN_OP: "fg=default;options=dark", + LINE_MARKER: "fg=red;options=bold", + LINE_NUMBER: "fg=default;options=dark", + } + + KEYWORDS = set(keyword.kwlist) + BUILTINS = set( + __builtins__.keys() if type(__builtins__) is dict else dir(__builtins__) + ) + + UI = { + False: {"arrow": ">", "delimiter": "|"}, + True: {"arrow": "→", "delimiter": "│"}, + } + + def __init__(self, supports_utf8: bool = True) -> None: + self._theme = self.DEFAULT_THEME.copy() + self._ui = self.UI[supports_utf8] + + def code_snippet( + self, source: str, line: int, lines_before: int = 2, lines_after: int = 2 + ) -> list[str]: + token_lines = self.highlighted_lines(source) + token_lines = self.line_numbers(token_lines, line) + + offset = line - lines_before - 1 + offset = max(offset, 0) + length = lines_after + lines_before + 1 + token_lines = token_lines[offset : offset + length] + + return token_lines + + def highlighted_lines(self, source: str) -> list[str]: + source = source.replace("\r\n", "\n").replace("\r", "\n") + + return self.split_to_lines(source) + + def split_to_lines(self, source: str) -> list[str]: + lines = [] + current_line = 1 + current_col = 0 + buffer = "" + current_type = None + source_io = io.BytesIO(source.encode()) + formatter = Formatter() + + def readline() -> bytes: + return formatter.format( + formatter.escape(source_io.readline().decode()) + ).encode() + + tokens 
= tokenize.tokenize(readline) + line = "" + for token_info in tokens: + token_type, token_string, start, end, _ = token_info + lineno = start[0] + if lineno == 0: + # Encoding line + continue + + if token_type == tokenize.ENDMARKER: + # End of source + if current_type is None: + current_type = self.TOKEN_DEFAULT + + line += f"<{self._theme[current_type]}>{buffer}" + lines.append(line) + break + + if lineno > current_line: + if current_type is None: + current_type = self.TOKEN_DEFAULT + + diff = lineno - current_line + if diff > 1: + lines += [""] * (diff - 1) + + line += "<{}>{}".format( + self._theme[current_type], buffer.rstrip("\n") + ) + + # New line + lines.append(line) + line = "" + current_line = lineno + current_col = 0 + buffer = "" + + if token_string in self.KEYWORDS: + new_type = self.TOKEN_KEYWORD + elif token_string in self.BUILTINS or token_string == "self": + new_type = self.TOKEN_BUILTIN + elif token_type == tokenize.STRING: + new_type = self.TOKEN_STRING + elif token_type == tokenize.NUMBER: + new_type = self.TOKEN_NUMBER + elif token_type == tokenize.COMMENT: + new_type = self.TOKEN_COMMENT + elif token_type == tokenize.OP: + new_type = self.TOKEN_OP + elif token_type == tokenize.NEWLINE: + continue + else: + new_type = self.TOKEN_DEFAULT + + if current_type is None: + current_type = new_type + + if start[1] > current_col: + buffer += token_info.line[current_col : start[1]] + + if current_type != new_type: + line += f"<{self._theme[current_type]}>{buffer}" + buffer = "" + current_type = new_type + + if lineno < end[0]: + # The token spans multiple lines + token_lines = token_string.split("\n") + line += f"<{self._theme[current_type]}>{token_lines[0]}" + lines.append(line) + for token_line in token_lines[1:-1]: + lines.append(f"<{self._theme[current_type]}>{token_line}") + + current_line = end[0] + buffer = token_lines[-1][: end[1]] + line = "" + continue + + buffer += token_string + current_col = end[1] + current_line = lineno + + return lines + + def line_numbers(self, lines: list[str], mark_line: int | None = None) -> list[str]: + max_line_length = max(3, len(str(len(lines)))) + + snippet_lines = [] + marker = "<{}>{} ".format(self._theme[self.LINE_MARKER], self._ui["arrow"]) + no_marker = " " + for i, line in enumerate(lines): + if mark_line is not None: + if mark_line == i + 1: + snippet = marker + else: + snippet = no_marker + + line_number = "{:>{}}".format(i + 1, max_line_length) + snippet += "<{}>{}<{}>{} {}".format( + "fg=default;options=bold" + if mark_line == i + 1 + else self._theme[self.LINE_NUMBER], + line_number, + self._theme[self.LINE_NUMBER], + self._ui["delimiter"], + line, + ) + snippet_lines.append(snippet) + + return snippet_lines + + +class ExceptionTrace: + """ + Renders the trace of an exception. 
+ """ + + THEME = { + "comment": "", + "keyword": "", + "builtin": "", + "literal": "", + } + + AST_ELEMENTS = { + "builtins": __builtins__.keys() + if type(__builtins__) is dict + else dir(__builtins__), + "keywords": [ + getattr(ast, cls) + for cls in dir(ast) + if keyword.iskeyword(cls.lower()) + and inspect.isclass(getattr(ast, cls)) + and issubclass(getattr(ast, cls), ast.AST) + ], + } + + _FRAME_SNIPPET_CACHE = {} + + def __init__( + self, + exception: Exception, + solution_provider_repository: SolutionProviderRepository | None = None, + ) -> None: + self._exception = exception + self._solution_provider_repository = solution_provider_repository + self._exc_info = sys.exc_info() + self._ignore = None + + def ignore_files_in(self, ignore: str) -> ExceptionTrace: + self._ignore = ignore + + return self + + def render(self, io: IO | Output, simple: bool = False) -> None: + if simple: + io.write_line("") + io.write_line(f"{str(self._exception)}") + else: + self._render_exception(io, self._exception) + + self._render_solution(io, self._exception) + + def _render_exception(self, io: IO | Output, exception: Exception) -> None: + from crashtest.inspector import Inspector + + inspector = Inspector(exception) + if not inspector.frames: + return + + if inspector.has_previous_exception(): + self._render_exception(io, inspector.previous_exception) + io.write_line("") + io.write_line( + "The following error occurred when trying to handle this error:" + ) + io.write_line("") + + self._render_trace(io, inspector.frames) + + self._render_line(io, f"{inspector.exception_name}", True) + io.write_line("") + exception_message = ( + Formatter().format(inspector.exception_message).replace("\n", "\n ") + ) + self._render_line(io, f"{exception_message}") + + current_frame = inspector.frames[-1] + self._render_snippet(io, current_frame) + + def _render_snippet(self, io: IO | Output, frame: Frame) -> None: + self._render_line( + io, + "at {}:{} in {}".format( + self._get_relative_file_path(frame.filename), + frame.lineno, + frame.function, + ), + True, + ) + + code_lines = Highlighter(supports_utf8=io.supports_utf8()).code_snippet( + frame.file_content, frame.lineno, 4, 4 + ) + + for code_line in code_lines: + self._render_line(io, code_line, indent=4) + + def _render_solution(self, io: IO | Output, exception: Exception) -> None: + if self._solution_provider_repository is None: + return + + solutions = self._solution_provider_repository.get_solutions_for_exception( + exception + ) + symbol = "•" + if not io.supports_utf8(): + symbol = "*" + + for solution in solutions: + title = solution.solution_title + description = solution.solution_description + links = solution.documentation_links + + description = description.replace("\n", "\n ").strip(" ") + + self._render_line( + io, + "{} {}: {}{}".format( + symbol, + title.rstrip("."), + description, + ",".join(f"\n {link}" for link in links), + ), + True, + ) + + def _render_trace(self, io: IO | Output, frames: FrameCollection) -> None: + stack_frames = FrameCollection() + for frame in frames: + if ( + self._ignore + and re.match(self._ignore, frame.filename) + and not io.is_debug() + ): + continue + + stack_frames.append(frame) + + remaining_frames_length = len(stack_frames) - 1 + if io.is_very_verbose() and remaining_frames_length: + self._render_line(io, "Stack trace:", True) + max_frame_length = len(str(remaining_frames_length)) + frame_collections = stack_frames.compact() + i = remaining_frames_length + for collection in frame_collections: + if 
collection.is_repeated(): + if len(collection) > 1: + frames_message = "{} frames".format( + len(collection) + ) + else: + frames_message = "frame" + + self._render_line( + io, + "{:>{}} Previous {} repeated {} times".format( + "...", + max_frame_length, + frames_message, + collection.repetitions + 1, + ), + True, + ) + + i -= len(collection) * (collection.repetitions + 1) + + for frame in collection: + relative_file_path = self._get_relative_file_path(frame.filename) + relative_file_path_parts = relative_file_path.split(os.path.sep) + relative_file_path = "{}".format( + "{}".format( + Formatter.escape(os.sep) + ).join( + relative_file_path_parts[:-1] + + [ + "{}".format( + relative_file_path_parts[-1] + ) + ] + ), + ) + + self._render_line( + io, + "{:>{}} {}:{} in {}".format( + i, + max_frame_length, + relative_file_path, + frame.lineno, + frame.function, + ), + True, + ) + + if io.is_debug(): + if (frame, 2, 2) not in self._FRAME_SNIPPET_CACHE: + code_lines = Highlighter( + supports_utf8=io.supports_utf8() + ).code_snippet( + frame.file_content, + frame.lineno, + ) + + self._FRAME_SNIPPET_CACHE[(frame, 2, 2)] = code_lines + + code_lines = self._FRAME_SNIPPET_CACHE[(frame, 2, 2)] + + for code_line in code_lines: + self._render_line( + io, + "{:>{}}{}".format(" ", max_frame_length, code_line), + indent=3, + ) + else: + highlighter = Highlighter(supports_utf8=io.supports_utf8()) + try: + code_line = highlighter.highlighted_lines( + frame.line.strip() + )[0] + except tokenize.TokenError: + code_line = frame.line.strip() + + self._render_line( + io, + "{:>{}} {}".format( + " ", + max_frame_length, + code_line, + ), + ) + + i -= 1 + + def _render_line( + self, io: IO | Output, line: str, new_line: bool = False, indent: int = 2 + ) -> None: + if new_line: + io.write_line("") + + io.write_line("{}{}".format(indent * " ", line)) + + def _get_relative_file_path(self, filepath: str) -> str: + cwd = os.getcwd() + + if cwd: + filepath = filepath.replace(cwd + os.path.sep, "") + + home = os.path.expanduser("~") + if home: + filepath = filepath.replace(home + os.path.sep, "~" + os.path.sep) + + return filepath diff --git a/env/lib/python3.8/site-packages/cleo/ui/progress_bar.py b/env/lib/python3.8/site-packages/cleo/ui/progress_bar.py new file mode 100644 index 00000000..cd064a79 --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/ui/progress_bar.py @@ -0,0 +1,451 @@ +from __future__ import annotations + +import math +import re +import time + +from typing import TYPE_CHECKING +from typing import Match + +from cleo._utils import format_time +from cleo.cursor import Cursor +from cleo.io.io import IO +from cleo.io.outputs.section_output import SectionOutput +from cleo.terminal import Terminal +from cleo.ui.component import Component + + +if TYPE_CHECKING: + from cleo.io.outputs.output import Output + + +class ProgressBar(Component): + """ + The ProgressBar provides helpers to display progress output. 
+ """ + + name = "progress_bar" + + # Options + bar_width = 28 + bar_char = None + empty_bar_char = "-" + progress_char = ">" + redraw_freq = 1 + + formats = { + "normal": " %current%/%max% [%bar%] %percent:3s%%", + "normal_nomax": " %current% [%bar%]", + "verbose": " %current%/%max% [%bar%] %percent:3s%% %elapsed:-6s%", + "verbose_nomax": " %current% [%bar%] %elapsed:6s%", + "very_verbose": " %current%/%max% [%bar%] %percent:3s%% %elapsed:6s%/%estimated:-6s%", + "very_verbose_nomax": " %current% [%bar%] %elapsed:6s%", + "debug": " %current%/%max% [%bar%] %percent:3s%% %elapsed:6s%/%estimated:-6s%", + "debug_nomax": " %current% [%bar%] %elapsed:6s%", + } + + def __init__( + self, + io: IO | Output, + max: int = 0, + min_seconds_between_redraws: float = 0.1, + ) -> None: + # If we have an IO, ensure we write to the error output + if isinstance(io, IO): + io = io.error_output + + self._io = io + self._terminal = Terminal() + self._max = 0 + self._step_width = None + self._set_max_steps(max) + self._step = 0 + self._percent = 0.0 + self._format = None + self._internal_format = None + self._format_line_count = 0 + self._previous_message = None + self._should_overwrite = True + self._min_seconds_between_redraws = 0 + self._max_seconds_between_redraws = 1 + self._write_count = 0 + + if min_seconds_between_redraws > 0: + self.redraw_freq = None + self._min_seconds_between_redraws = min_seconds_between_redraws + + if not self._io.formatter.is_decorated(): + # Disable overwrite when output does not support ANSI codes. + self._should_overwrite = False + + # Set a reasonable redraw frequency so output isn't flooded + self.redraw_freq = None + + self._messages = {} + + self._start_time = time.time() + self._last_write_time = 0 + self._cursor = Cursor(self._io) + + def set_message(self, message: str, name: str = "message") -> None: + self._messages[name] = message + + def get_message(self, name: str = "message") -> str: + return self._messages[name] + + def get_start_time(self) -> float: + return self._start_time + + def get_max_steps(self) -> int: + return self._max + + def get_progress(self) -> int: + return self._step + + def get_progress_percent(self) -> float: + return self._percent + + def set_bar_character(self, character: str) -> ProgressBar: + self.bar_char = character + + return self + + def get_bar_character(self) -> str: + if self.bar_char is None: + if self._max: + return "=" + + return self.empty_bar_char + + return self.bar_char + + def set_bar_width(self, width: int) -> ProgressBar: + self.bar_width = width + + return self + + def get_empty_bar_character(self) -> str: + return self.empty_bar_char + + def set_empty_bar_character(self, character: str) -> ProgressBar: + self.empty_bar_char = character + + return self + + def get_progress_character(self) -> str: + return self.progress_char + + def set_progress_character(self, character: str) -> ProgressBar: + self.progress_char = character + + return self + + def set_format(self, fmt: str) -> None: + self._format = None + self._internal_format = fmt + + def set_redraw_frequency(self, freq: int | None) -> None: + if self.redraw_freq is not None: + self.redraw_freq = max(freq, 1) + + def min_seconds_between_redraws(self, freq: float) -> None: + if freq > 0: + self.redraw_freq = None + self._min_seconds_between_redraws = freq + + def max_seconds_between_redraws(self, freq: float) -> None: + self._max_seconds_between_redraws = freq + + def start(self, max: int | None = None) -> None: + """ + Start the progress output. 
+ """ + self._start_time = time.time() + self._step = 0 + self._percent = 0.0 + + if max is not None: + self._set_max_steps(max) + + self.display() + + def advance(self, step: int = 1) -> None: + """ + Advances the progress output X steps. + """ + self.set_progress(self._step + step) + + def set_progress(self, step: int) -> None: + """ + Sets the current progress. + """ + if self._max and step > self._max: + self._max = step + elif step < 0: + step = 0 + + redraw_freq = ( + (self._max or 10) / 10 if self.redraw_freq is None else self.redraw_freq + ) + prev_period = int(self._step / redraw_freq) + curr_period = int(step / redraw_freq) + + self._step = step + + if self._max: + self._percent = self._step / self._max + else: + self._percent = 0.0 + + time_interval = time.time() - self._last_write_time + + # Draw regardless of other limits + if step == self._max: + self.display() + + return + + # Throttling + if time_interval < self._min_seconds_between_redraws: + return + + # Draw each step period, but not too late + if ( + prev_period != curr_period + or time_interval >= self._max_seconds_between_redraws + ): + self.display() + + def finish(self) -> None: + """ + Finish the progress output. + """ + if not self._max: + self._max = self._step + + if self._step == self._max and not self._should_overwrite: + return + + self.set_progress(self._max) + + def display(self) -> None: + """ + Output the current progress string. + """ + if self._io.is_quiet(): + return + + if self._format is None: + self._set_real_format( + self._internal_format or self._determine_best_format() + ) + + self._overwrite(self._build_line()) + + def _overwrite_callback(self, matches: Match) -> str: + if hasattr(self, f"_formatter_{matches.group(1)}"): + text = str(getattr(self, f"_formatter_{matches.group(1)}")()) + elif matches.group(1) in self._messages: + text = self._messages[matches.group(1)] + else: + return matches.group(0) + + if matches.group(2): + if matches.group(2).startswith("-"): + text = text.ljust(int(matches.group(2).lstrip("-").rstrip("s"))) + else: + text = text.rjust(int(matches.group(2).rstrip("s"))) + + return text + + def clear(self) -> None: + """ + Removes the progress bar from the current line. + + This is useful if you wish to write some output + while a progress bar is running. + Call display() to show the progress bar again. + """ + if not self._should_overwrite: + return + + if self._format is None: + self._set_real_format( + self._internal_format or self._determine_best_format() + ) + + self._overwrite("\n" * self._format_line_count) + + def _set_real_format(self, fmt: str) -> None: + """ + Sets the progress bar format. + """ + # try to use the _nomax variant if available + if not self._max and fmt + "_nomax" in self.formats: + self._format = self.formats[fmt + "_nomax"] + else: + self._format = self.formats.get(fmt, fmt) + + self._format_line_count = self._format.count("\n") + + def _set_max_steps(self, mx: int) -> None: + """ + Sets the progress bar maximal steps. + """ + self._max = max(0, mx) + + if self._max: + self._step_width = len(str(self._max)) + else: + self._step_width = 4 + + def _overwrite(self, message: str) -> None: + """ + Overwrites a previous message to the output. 
+ """ + if self._previous_message == message: + return + + original_message = message + + if self._should_overwrite: + if self._previous_message is not None: + if isinstance(self._io, SectionOutput): + lines_to_clear = ( + int( + math.floor( + len(self._io.remove_format(message)) + / self._terminal.width + ) + ) + + self._format_line_count + + 1 + ) + self._io.clear(lines_to_clear) + else: + if self._format_line_count: + self._cursor.move_up(self._format_line_count) + + self._cursor.move_to_column(1) + self._cursor.clear_line() + elif self._step > 0: + message = "\n" + message + + self._previous_message = original_message + self._last_write_time = time.time() + + self._io.write(message) + self._write_count += 1 + + def _determine_best_format(self) -> str: + if self._io.is_debug(): + if self._max: + return "debug" + + return "debug_nomax" + elif self._io.is_very_verbose(): + if self._max: + return "very_verbose" + + return "very_verbose_nomax" + elif self._io.is_verbose(): + if self._max: + return "verbose" + + return "verbose_nomax" + + if self._max: + return "normal" + + return "normal_nomax" + + @property + def bar_offset(self) -> int: + if self._max: + return math.floor(self._percent * self.bar_width) + else: + if self.redraw_freq is None: + return math.floor( + (min(5, self.bar_width // 15) * self._write_count) % self.bar_width + ) + + return math.floor(self._step % self.bar_width) + + def _formatter_bar(self) -> str: + complete_bars = self.bar_offset + + display = self.get_bar_character() * int(complete_bars) + + if complete_bars < self.bar_width: + empty_bars = ( + self.bar_width + - complete_bars + - len(self._io.remove_format(self.progress_char)) + ) + display += self.progress_char + self.empty_bar_char * int(empty_bars) + + return display + + def _formatter_elapsed(self) -> str: + return format_time(time.time() - self._start_time) + + def _formatter_remaining(self) -> str: + if not self._max: + raise RuntimeError( + "Unable to display the remaining time " + "if the maximum number of steps is not set." + ) + + if not self._step: + remaining = 0 + else: + remaining = round( + (time.time() - self._start_time) / self._step * (self._max - self._max) + ) + + return format_time(remaining) + + def _formatter_estimated(self) -> int: + if not self._max: + raise RuntimeError( + "Unable to display the estimated time " + "if the maximum number of steps is not set." 
+ ) + + if not self._step: + estimated = 0 + else: + estimated = round((time.time() - self._start_time) / self._step * self._max) + + return estimated + + def _formatter_current(self) -> str: + return str(self._step).rjust(self._step_width, " ") + + def _formatter_max(self) -> int: + return self._max + + def _formatter_percent(self) -> int: + return int(math.floor(self._percent * 100)) + + def _build_line(self) -> str: + regex = re.compile(r"(?i)%([a-z\-_]+)(?::([^%]+))?%") + + line = regex.sub(self._overwrite_callback, self._format) + + # gets string length for each sub line with multiline format + lines_length = [ + len(self._io.remove_format(sub_line.rstrip("\r"))) + for sub_line in line.split("\n") + ] + + lines_width = max(lines_length) + + terminal_width = self._terminal.width + + if lines_width <= terminal_width: + return line + + self.set_bar_width(self.bar_width - lines_width + terminal_width) + + return regex.sub(self._overwrite_callback, self._format) diff --git a/env/lib/python3.8/site-packages/cleo/ui/progress_indicator.py b/env/lib/python3.8/site-packages/cleo/ui/progress_indicator.py new file mode 100644 index 00000000..d101ffdf --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/ui/progress_indicator.py @@ -0,0 +1,213 @@ +from __future__ import annotations + +import re +import threading +import time + +from contextlib import contextmanager +from typing import TYPE_CHECKING + +from cleo._utils import format_time +from cleo.io.io import IO + + +if TYPE_CHECKING: + from cleo.io.outputs.output import Output + + +class ProgressIndicator: + """ + A process indicator. + """ + + NORMAL = " {indicator} {message}" + NORMAL_NO_ANSI = " {message}" + VERBOSE = " {indicator} {message} ({elapsed:6s})" + VERBOSE_NO_ANSI = " {message} ({elapsed:6s})" + VERY_VERBOSE = " {indicator} {message} ({elapsed:6s})" + VERY_VERBOSE_NO_ANSI = " {message} ({elapsed:6s})" + + def __init__( + self, + io: IO | Output, + fmt: str | None = None, + interval: int = 100, + values: list[str] | None = None, + ) -> None: + if isinstance(io, IO): + io = io.error_output + + self._io = io + + if fmt is None: + fmt = self._determine_best_format() + + self._fmt = fmt + + if values is None: + values = ["-", "\\", "|", "/"] + + if len(values) < 2: + raise ValueError( + "The progress indicator must have at least 2 indicator value characters." 
+ ) + + self._interval = interval + self._values = values + + self._message = None + self._update_time = None + self._started = False + self._current = 0 + + self._auto_running = None + self._auto_thread = None + + self._start_time = None + self._last_message_length = 0 + + @property + def message(self) -> str | None: + return self._message + + def set_message(self, message: str | None) -> None: + self._message = message + + self._display() + + @property + def current_value(self) -> str: + return self._values[self._current % len(self._values)] + + def start(self, message: str) -> None: + if self._started: + raise RuntimeError("Progress indicator already started.") + + self._message = message + self._started = True + self._start_time = time.time() + self._update_time = self._get_current_time_in_milliseconds() + self._interval + self._current = 0 + + self._display() + + def advance(self) -> None: + if not self._started: + raise RuntimeError("Progress indicator has not yet been started.") + + if not self._io.is_decorated(): + return + + current_time = self._get_current_time_in_milliseconds() + if current_time < self._update_time: + return + + self._update_time = current_time + self._interval + self._current += 1 + + self._display() + + def finish(self, message: str, reset_indicator: bool = False) -> None: + if not self._started: + raise RuntimeError("Progress indicator has not yet been started.") + + if self._auto_thread is not None: + self._auto_running.set() + self._auto_thread.join() + + self._message = message + + if reset_indicator: + self._current = 0 + + self._display() + self._io.write_line("") + self._started = False + + @contextmanager + def auto(self, start_message: str, end_message: str): + """ + Auto progress. + """ + self._auto_running = threading.Event() + self._auto_thread = threading.Thread(target=self._spin) + + self.start(start_message) + self._auto_thread.start() + + try: + yield self + except (Exception, KeyboardInterrupt): + self._io.write_line("") + + self._auto_running.set() + self._auto_thread.join() + + raise + + self.finish(end_message, reset_indicator=True) + + def _spin(self) -> None: + while not self._auto_running.is_set(): + self.advance() + + time.sleep(0.1) + + def _display(self) -> None: + if self._io.is_quiet(): + return + + self._overwrite( + re.sub( + r"(?i){([a-z\-_]+)(?::([^}]+))?}", self._overwrite_callback, self._fmt + ) + ) + + def _overwrite_callback(self, matches): + if hasattr(self, f"_formatter_{matches.group(1)}"): + text = str(getattr(self, f"_formatter_{matches.group(1)}")()) + else: + text = matches.group(0) + + return text + + def _overwrite(self, message) -> None: + """ + Overwrites a previous message to the output. 
+ """ + if self._io.is_decorated(): + self._io.write("\x0D\x1B[2K") + self._io.write(message) + else: + self._io.write_line(message) + + def _determine_best_format(self): + decorated = self._io.is_decorated() + + if self._io.is_very_verbose(): + if decorated: + return self.VERY_VERBOSE + + return self.VERY_VERBOSE_NO_ANSI + elif self._io.is_verbose(): + if decorated: + return self.VERY_VERBOSE + + return self.VERBOSE_NO_ANSI + + if decorated: + return self.NORMAL + + return self.NORMAL_NO_ANSI + + def _get_current_time_in_milliseconds(self) -> int: + return round(time.time() * 1000) + + def _formatter_indicator(self) -> str: + return self.current_value + + def _formatter_message(self): + return self.message + + def _formatter_elapsed(self) -> str: + return format_time(time.time() - self._start_time) diff --git a/env/lib/python3.8/site-packages/cleo/ui/question.py b/env/lib/python3.8/site-packages/cleo/ui/question.py new file mode 100644 index 00000000..6559bf7d --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/ui/question.py @@ -0,0 +1,279 @@ +from __future__ import annotations + +import getpass +import os +import subprocess + +from typing import TYPE_CHECKING +from typing import Any +from typing import Callable + +from cleo.formatters.style import Style + + +if TYPE_CHECKING: + from cleo.io.io import IO + + +class Question: + """ + A question that will be asked in a Console. + """ + + def __init__(self, question: str, default: Any | None = None) -> None: + self._question = question + self._default = default + + self._attempts = None + self._hidden = False + self._hidden_fallback = True + self._autocomplete_values = None + self._validator = None + self._normalizer = None + self._error_message = None + + @property + def question(self) -> str: + return self._question + + @property + def default(self) -> str: + return self._default + + @property + def autocomplete_values(self): + return self._autocomplete_values + + @property + def max_attempts(self) -> int | None: + return self._attempts + + def is_hidden(self) -> bool: + return self._hidden + + def hide(self, hidden: bool = True) -> None: + if hidden is True and self._autocomplete_values: + raise RuntimeError("A hidden question cannot use the autocompleter.") + + self._hidden = hidden + + def set_autocomplete_values(self, autocomplete_values) -> None: + if self.is_hidden(): + raise RuntimeError("A hidden question cannot use the autocompleter.") + + self._autocomplete_values = autocomplete_values + + def set_max_attempts(self, attempts: int): + self._attempts = attempts + + def set_validator(self, validator: Callable) -> None: + self._validator = validator + + def ask(self, io: IO) -> str | None: + """ + Asks the question to the user. + """ + if not io.is_interactive(): + return self.default + + if not self._validator: + return self._do_ask(io) + + def interviewer(): + return self._do_ask(io) + + return self._validate_attempts(interviewer, io) + + def _do_ask(self, io: IO) -> str | None: + """ + Asks the question to the user. 
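+
+        Illustrative usage (editor's sketch; ``io`` stands for any
+        configured cleo IO and is not defined in this file):
+
+            question = Question("Package name", default="demo")
+            answer = question.ask(io)  # hypothetical call
+
+        Note that ``ask`` short-circuits to the default on non-interactive
+        IO before this method is ever reached.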
+        """
+        self._write_prompt(io)
+
+        autocomplete = self._autocomplete_values
+
+        if autocomplete is None or not self._has_stty_available():
+            ret = False
+
+            if self.is_hidden():
+                try:
+                    ret = self._get_hidden_response(io)
+                except RuntimeError:
+                    if not self._hidden_fallback:
+                        raise
+
+            if not ret:
+                ret = self._read_from_input(io)
+        else:
+            ret = self._autocomplete(io)
+
+        if len(ret) <= 0:
+            ret = self._default
+
+        if self._normalizer:
+            return self._normalizer(ret)
+
+        return ret
+
+    def _write_prompt(self, io: IO) -> None:
+        """
+        Outputs the question prompt.
+        """
+        message = self._question
+
+        io.write_error(f"<question>{message}</question> ")
+
+    def _write_error(self, io: IO, error: Exception) -> None:
+        """
+        Outputs an error message.
+        """
+        message = f"<error>{str(error)}</error>"
+
+        io.write_error_line(message)
+
+    def _autocomplete(self, io: IO) -> str:
+        """
+        Autocomplete a question.
+        """
+        autocomplete = self._autocomplete_values
+
+        ret = ""
+
+        i = 0
+        ofs = -1
+        matches = list(autocomplete)
+        num_matches = len(matches)
+
+        stty_mode = subprocess.check_output(["stty", "-g"]).decode().rstrip("\n")
+
+        # Disable icanon (so we can read each keypress) and echo (we'll do echoing here instead)
+        subprocess.check_output(["stty", "-icanon", "-echo"])
+
+        # Add highlighted text style
+        style = Style(options=["reverse"])
+        io.error_output.formatter.set_style("hl", style)
+
+        # Read a keypress
+        while True:
+            c = io.read(1)
+
+            # Backspace character
+            if c == "\177":
+                if num_matches == 0 and i != 0:
+                    i -= 1
+                    # Move cursor backwards
+                    io.write_error("\033[1D")
+
+                if i == 0:
+                    ofs = -1
+                    matches = list(autocomplete)
+                    num_matches = len(matches)
+                else:
+                    num_matches = 0
+
+                # Pop the last character off the end of our string
+                ret = ret[:i]
+            # Did we read an escape sequence?
+            elif c == "\033":
+                c += io.read(2)
+
+                # A = Up Arrow. B = Down Arrow
+                if c[2] == "A" or c[2] == "B":
+                    if c[2] == "A" and ofs == -1:
+                        ofs = 0
+
+                    if num_matches == 0:
+                        continue
+
+                    ofs += -1 if c[2] == "A" else 1
+                    ofs = (num_matches + ofs) % num_matches
+            elif ord(c) < 32:
+                if c in ["\t", "\n"]:
+                    if num_matches > 0 and ofs != -1:
+                        ret = matches[ofs]
+                        # Echo out remaining chars for current match
+                        io.write_error(ret[i:])
+                        i = len(ret)
+
+                    if c == "\n":
+                        io.write_error(c)
+                        break
+
+                    num_matches = 0
+
+                continue
+            else:
+                io.write_error(c)
+                ret += c
+                i += 1
+
+                num_matches = 0
+                ofs = 0
+
+                for value in autocomplete:
+                    # If typed characters match the beginning chunk of value (e.g. [AcmeDe]moBundle)
+                    if value.startswith(ret) and i != len(value):
+                        num_matches += 1
+                        matches[num_matches - 1] = value
+
+            # Erase characters from cursor to end of line
+            io.write_error("\033[K")
+
+            if num_matches > 0 and ofs != -1:
+                # Save cursor position
+                io.write_error("\0337")
+                # Write highlighted text
+                io.write_error("<hl>" + matches[ofs][i:] + "</hl>")
+                # Restore cursor position
+                io.write_error("\0338")
+
+        subprocess.call(["stty", f"{stty_mode}"])
+
+        return ret
+
+    def _get_hidden_response(self, io: IO) -> str:
+        """
+        Gets a hidden response from user.
+        """
+        return getpass.getpass("", stream=io.error_output.stream)
+
+    def _validate_attempts(self, interviewer: Callable, io: IO) -> str:
+        """
+        Validates an attempt.
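+
+        Editor's sketch (assumed names): the loop below re-asks until the
+        validator stops raising or the attempt budget is exhausted:
+
+            question.set_validator(int)   # int() raises ValueError on bad input
+            question.set_max_attempts(3)  # re-raise the last error after 3 tries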
+        """
+        error = None
+        attempts = self._attempts
+
+        while attempts is None or attempts:
+            if error is not None:
+                self._write_error(io, error)
+
+            try:
+                return self._validator(interviewer())
+            except Exception as e:
+                error = e
+
+            if attempts is not None:
+                attempts -= 1
+
+        raise error
+
+    def _read_from_input(self, io: IO) -> str:
+        """
+        Read user input.
+        """
+        ret = io.read_line(4096)
+
+        if not ret:
+            raise RuntimeError("Aborted")
+
+        return ret.strip()
+
+    def _has_stty_available(self) -> bool:
+        with open(os.devnull, "w") as devnull:
+            try:
+                exit_code = subprocess.call(["stty"], stdout=devnull, stderr=devnull)
+            except Exception:
+                exit_code = 2
+
+        return exit_code == 0
diff --git a/env/lib/python3.8/site-packages/cleo/ui/table.py b/env/lib/python3.8/site-packages/cleo/ui/table.py
new file mode 100644
index 00000000..15744cad
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cleo/ui/table.py
@@ -0,0 +1,710 @@
+from __future__ import annotations
+
+import math
+import re
+
+from copy import deepcopy
+from typing import TYPE_CHECKING
+from typing import Generator
+from typing import List
+from typing import Union
+
+from cleo.formatters.formatter import Formatter
+from cleo.ui.table_cell import TableCell
+from cleo.ui.table_cell_style import TableCellStyle
+from cleo.ui.table_separator import TableSeparator
+from cleo.ui.table_style import TableStyle
+
+
+if TYPE_CHECKING:
+    from cleo.io.io import IO
+    from cleo.io.outputs.output import Output
+
+_Row = List[Union[str, TableCell]]
+_Rows = List[Union[_Row, TableSeparator]]
+
+
+class Table:
+
+    SEPARATOR_TOP: int = 0
+    SEPARATOR_TOP_BOTTOM: int = 1
+    SEPARATOR_MID: int = 2
+    SEPARATOR_BOTTOM: int = 3
+
+    BORDER_OUTSIDE: int = 0
+    BORDER_INSIDE: int = 1
+
+    _styles: dict[str, TableStyle] | None = None
+
+    def __init__(self, io: IO | Output, style: str | None = None) -> None:
+        self._io = io
+
+        if style is None:
+            style = "default"
+
+        self._header_title = None
+        self._footer_title = None
+
+        self._headers = []
+
+        self._rows = []
+        self._horizontal = False
+
+        self._effective_column_widths: dict[int, int] = {}
+
+        self._number_of_columns = None
+
+        self._column_styles: dict[int, TableStyle] = {}
+        self._column_widths: dict[int, int] = {}
+        self._column_max_widths: dict[int, int] = {}
+
+        self._rendered = False
+
+        self._style: TableStyle | None = None
+        self._init_styles()
+        self.set_style(style)
+
+    @property
+    def style(self) -> TableStyle:
+        return self._style
+
+    def set_style(self, name: str) -> Table:
+        self._init_styles()
+
+        self._style = self._resolve_style(name)
+
+        return self
+
+    def column_style(self, column_index: int) -> TableStyle:
+        if column_index in self._column_styles:
+            return self._column_styles[column_index]
+
+        return self._style
+
+    def set_column_style(self, column_index: int, style: str | TableStyle) -> Table:
+        self._column_styles[column_index] = self._resolve_style(style)
+
+        return self
+
+    def set_column_width(self, column_index: int, width: int) -> Table:
+        self._column_widths[column_index] = width
+
+        return self
+
+    def set_column_widths(self, widths: list[int]) -> Table:
+        self._column_widths = {}
+
+        for i, width in enumerate(widths):
+            self._column_widths[i] = width
+
+        return self
+
+    def set_column_max_width(self, column_index: int, width: int) -> Table:
+        self._column_max_widths[column_index] = width
+
+        return self
+
+    def set_headers(self, headers: list[str]) -> Table:
+        if headers and not isinstance(headers[0], list):
+            headers = [headers]
+
+        self._headers = headers
+
+        return self
+
+    def 
set_rows(self, rows: _Rows) -> Table: + self._rows = [] + + return self.add_rows(rows) + + def add_rows(self, rows: _Rows) -> Table: + for row in rows: + self.add_row(row) + + return self + + def add_row(self, row: _Row | TableSeparator) -> Table: + if isinstance(row, TableSeparator): + self._rows.append(row) + + return self + + self._rows.append(row) + + return self + + def set_header_title(self, header_title: str) -> Table: + self._header_title = header_title + + return self + + def set_footer_title(self, footer_title: str) -> Table: + self._footer_title = footer_title + + return self + + def horizontal(self, horizontal: bool = True) -> Table: + self._horizontal = horizontal + + return self + + def render(self) -> None: + divider = TableSeparator() + + if self._horizontal: + rows = [] + headers = self._headers[0] if self._headers else [] + for i, header in enumerate(headers): + rows.append([header]) + for row in self._rows: + if isinstance(row, TableSeparator): + continue + + if len(row) > i: + rows[i].append(row[i]) + elif isinstance(rows[i][0], TableCell) and rows[i][0].colspan >= 2: + # There is a title + pass + else: + rows[i].append(None) + else: + rows = self._headers + [divider] + self._rows + + self._calculate_number_of_columns(rows) + rows = list(self._build_table_rows(rows)) + self._calculate_column_widths(rows) + + is_header = not self._horizontal + is_first_row = self._horizontal + + for row in rows: + if row is divider: + is_header = False + is_first_row = True + + continue + + if isinstance(row, TableSeparator): + self._render_row_separator() + + continue + + if not row: + continue + + if is_header or is_first_row: + if is_first_row: + self._render_row_separator(self.SEPARATOR_TOP_BOTTOM) + is_first_row = False + else: + self._render_row_separator( + self.SEPARATOR_TOP, + self._header_title, + self._style.header_title_format, + ) + + if self._horizontal: + self._render_row( + row, self._style.cell_row_format, self._style.cell_header_format + ) + else: + self._render_row( + row, + self._style.cell_header_format + if is_header + else self._style.cell_row_format, + ) + + self._render_row_separator( + self.SEPARATOR_BOTTOM, + self._footer_title, + self._style.footer_title_format, + ) + + self._cleanup() + self._rendered = True + + def _render_row_separator( + self, + type: int = SEPARATOR_MID, + title: str | None = None, + title_format: str | None = None, + ) -> None: + """ + Renders horizontal header separator. 
+ + Example: + + +-----+-----------+-------+ + """ + count = self._number_of_columns + if not count: + return + + borders = self._style.border_chars + if not borders[0] and not borders[2] and not self._style.crossing_char: + return + + crossings = self._style.crossing_chars + if type == self.SEPARATOR_MID: + horizontal, left_char, mid_char, right_char = ( + borders[2], + crossings[8], + crossings[0], + crossings[4], + ) + elif type == self.SEPARATOR_TOP: + horizontal, left_char, mid_char, right_char = ( + borders[0], + crossings[1], + crossings[2], + crossings[3], + ) + elif type == self.SEPARATOR_TOP_BOTTOM: + horizontal, left_char, mid_char, right_char = ( + borders[0], + crossings[9], + crossings[10], + crossings[11], + ) + else: + horizontal, left_char, mid_char, right_char = ( + borders[0], + crossings[7], + crossings[6], + crossings[5], + ) + + markup = left_char + for column in range(count): + markup += horizontal * self._effective_column_widths[column] + markup += right_char if column == count - 1 else mid_char + + if title is not None: + formatted_title = title_format.format(title) + title_length = len(self._io.remove_format(formatted_title)) + markup_length = len(markup) + limit = markup_length - 4 + + if title_length > limit: + title_length = limit + format_length = len(self._io.remove_format(title_format.format(""))) + formatted_title = title_format.format( + title[: limit - format_length - 3] + "..." + ) + + title_start = (markup_length - title_length) // 2 + markup = ( + markup[:title_start] + + formatted_title + + markup[title_start + title_length :] + ) + + self._io.write_line(self._style.border_format.format(markup)) + + def _render_column_separator(self, type: int = BORDER_OUTSIDE) -> str: + """ + Renders vertical column separator. + """ + borders = self._style.border_chars + + return self._style.border_format.format( + borders[1] if type == self.BORDER_OUTSIDE else borders[3] + ) + + def _render_row( + self, row: list[str], cell_format: str, first_cell_format: str | None = None + ) -> None: + """ + Renders table row. + + Example: + + | 9971-5-0210-0 | A Tale of Two Cities | Charles Dickens | + """ + row_content = self._render_column_separator(self.BORDER_OUTSIDE) + columns = self._get_row_columns(row) + last = len(columns) - 1 + for i, column in enumerate(columns): + if first_cell_format and i == 0: + row_content += self._render_cell(row, column, first_cell_format) + else: + row_content += self._render_cell(row, column, cell_format) + + row_content += self._render_column_separator( + self.BORDER_OUTSIDE if i == last else self.BORDER_INSIDE + ) + + self._io.write_line(row_content) + + def _render_cell(self, row: _Row, column: int, cell_format: str) -> str: + """ + Renders a table cell with padding. + """ + try: + cell = row[column] + except IndexError: + cell = "" + + width = self._effective_column_widths[column] + if isinstance(cell, TableCell) and cell.colspan > 1: + # add the width of the following columns(numbers of colspan). 
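+            # Editor's note: the spanned width is the sum of the covered
+            # columns plus the separators between them, e.g. colspan=2 over
+            # column widths 10 and 8 gives width = 10 + separator_width + 8.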
+            for next_column in range(column + 1, column + cell.colspan):
+                width += (
+                    self._get_column_separator_width()
+                    + self._effective_column_widths[next_column]
+                )
+
+        style = self.column_style(column)
+
+        if isinstance(cell, TableSeparator):
+            return style.border_format.format(style.border_chars[2] * width)
+
+        width += len(cell) - len(self._io.remove_format(cell))
+        content = style.cell_row_content_format.format(cell)
+
+        pad = style.pad
+        if isinstance(cell, TableCell) and isinstance(cell.style, TableCellStyle):
+            is_not_styled_by_tag = not re.match(
+                r"^<(\w+|(\w+=[\w,]+;?)*)>.+$", str(cell)
+            )
+            if is_not_styled_by_tag:
+                cell_format = cell.style.cell_format
+                if cell_format is None:
+                    cell_format = f"<{cell.style.tag}>{{}}</>"
+
+                if "</>" in content:
+                    content = content.replace("</>", "")
+                    width -= 3
+
+                if "<fg=default;bg=default>" in content:
+                    content = content.replace("<fg=default;bg=default>", "")
+                    width -= len("<fg=default;bg=default>")
+
+                pad = cell.style.pad
+
+        return cell_format.format(pad(content, width, style.padding_char))
+
+    def _calculate_number_of_columns(self, rows: _Rows) -> None:
+        columns = [0]
+        for row in rows:
+            if isinstance(row, TableSeparator):
+                continue
+
+            columns.append(self._get_number_of_columns(row))
+
+        self._number_of_columns = max(columns)
+
+    def _build_table_rows(self, rows: _Rows) -> Generator:
+        unmerged_rows = {}
+        row_key = 0
+        while row_key < len(rows):
+            rows = self._fill_next_rows(rows, row_key)
+
+            # Remove any new line breaks and replace it with a new line
+            for column, cell in enumerate(rows[row_key]):
+                colspan = cell.colspan if isinstance(cell, TableCell) else 1
+
+                if column in self._column_max_widths and self._column_max_widths[
+                    column
+                ] < len(self._io.remove_format(cell)):
+                    cell = self._io.formatter.format_and_wrap(
+                        cell, self._column_max_widths[column] * colspan
+                    )
+
+                if "\n" not in cell:
+                    continue
+
+                escaped = "\n".join(
+                    Formatter.escape_trailing_backslash(c) for c in cell.split("\n")
+                )
+                cell = (
+                    TableCell(escaped, colspan=cell.colspan)
+                    if isinstance(cell, TableCell)
+                    else escaped
+                )
+                lines = cell.replace("\n", "<fg=default;bg=default>\n</>").split("\n")
+
+                for line_key, line in enumerate(lines):
+                    if colspan > 1:
+                        line = TableCell(line, colspan=colspan)
+
+                    if line_key == 0:
+                        rows[row_key][column] = line
+                    else:
+                        if row_key not in unmerged_rows:
+                            unmerged_rows[row_key] = {}
+
+                        if line_key not in unmerged_rows[row_key]:
+                            unmerged_rows[row_key][line_key] = self._copy_row(
+                                rows, row_key
+                            )
+
+                        unmerged_rows[row_key][line_key][column] = line
+
+            row_key += 1
+
+        for row_key, row in enumerate(rows):
+            yield self._fill_cells(row)
+
+            if row_key in unmerged_rows:
+                for unmerged_row in unmerged_rows[row_key].values():
+                    yield self._fill_cells(unmerged_row)
+
+    def _calculate_row_count(self) -> int:
+        number_of_rows = len(
+            list(
+                self._build_table_rows(self._headers + [TableSeparator()] + self._rows)
+            )
+        )
+
+        if self._headers:
+            number_of_rows += 1
+
+        if len(self._rows) > 0:
+            number_of_rows += 1
+
+        return number_of_rows
+
+    def _fill_next_rows(self, rows: _Rows, line: int) -> _Rows:
+        """
+        Fill rows that contain rowspan > 1.
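+
+        Illustrative example (editor's sketch): given a row such as
+
+            [TableCell("a\nb", rowspan=2), "x"]
+
+        "a" stays in the current row while "b" is carried into a
+        placeholder row that is merged into, or inserted below, the
+        following rows so the rendered grid stays rectangular.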
+        """
+        unmerged_rows = {}
+
+        for column, cell in enumerate(rows[line]):
+            if isinstance(cell, TableCell) and cell.rowspan > 1:
+                nb_lines = cell.rowspan - 1
+                lines = [cell]
+                if "\n" in cell:
+                    lines = cell.replace("\n", "<fg=default;bg=default>\n</>").split(
+                        "\n"
+                    )
+                    if len(lines) > nb_lines:
+                        nb_lines = cell.count("\n")
+
+                rows[line][column] = TableCell(
+                    lines[0], colspan=cell.colspan, style=cell.style
+                )
+
+                # Create a two dimensional dict (rowspan x colspan)
+                placeholder = {k: {} for k in range(line + 1, line + 1 + nb_lines)}
+                for k, v in unmerged_rows.items():
+                    if k in placeholder:
+                        for l, m in unmerged_rows[k].items():  # noqa: E741
+                            if l in placeholder[k]:
+                                placeholder[k][l].update(m)
+                            else:
+                                placeholder[k][l] = m
+                    else:
+                        placeholder[k] = v
+
+                unmerged_rows = placeholder
+
+                for unmerged_row_key, _ in unmerged_rows.items():
+                    value = ""
+                    if unmerged_row_key - line < len(lines):
+                        value = lines[unmerged_row_key - line]
+
+                    unmerged_rows[unmerged_row_key][column] = TableCell(
+                        value, colspan=cell.colspan, style=cell.style
+                    )
+                    if nb_lines == unmerged_row_key - line:
+                        break
+
+        for unmerged_row_key, unmerged_row in unmerged_rows.items():
+            # we need to know if unmerged_row will be merged or inserted into rows
+            if (
+                unmerged_row_key < len(rows)
+                and isinstance(rows[unmerged_row_key], list)
+                and (
+                    (
+                        self._get_number_of_columns(rows[unmerged_row_key])
+                        + self._get_number_of_columns(
+                            list(unmerged_rows[unmerged_row_key].values())
+                        )
+                    )
+                    <= self._number_of_columns
+                )
+            ):
+                # insert cell into row at cell_key position
+                for cell_key, cell in unmerged_row.items():
+                    rows[unmerged_row_key].insert(cell_key, cell)
+            else:
+                row = self._copy_row(rows, unmerged_row_key - 1)
+                for column, cell in unmerged_row.items():
+                    if len(cell):
+                        row[column] = unmerged_row[column]
+
+                rows.insert(unmerged_row_key, row)
+
+        return rows
+
+    def _fill_cells(self, row: _Row) -> list[str | TableCell]:
+        """
+        Fills cells for a row that contains colspan > 1.
+        """
+        new_row = []
+
+        for column, cell in enumerate(row):
+            new_row.append(cell)
+
+            if isinstance(cell, TableCell) and cell.colspan > 1:
+                for _ in range(column + 1, column + cell.colspan):
+                    # insert empty value at column position
+                    new_row.append("")
+
+        if new_row:
+            return new_row
+
+        return row
+
+    def _copy_row(self, rows: _Rows, line: int) -> _Row:
+        """
+        Copies a row.
+        """
+        row = list(rows[line])
+
+        for cell_key, cell_value in enumerate(row):
+            row[cell_key] = ""
+            if isinstance(cell_value, TableCell):
+                row[cell_key] = TableCell("", colspan=cell_value.colspan)
+
+        return row
+
+    def _get_number_of_columns(self, row: _Row):
+        """
+        Gets number of columns by row.
+        """
+        columns = len(row)
+        for column in row:
+            if isinstance(column, TableCell):
+                columns += column.colspan - 1
+
+        return columns
+
+    def _get_row_columns(self, row: _Row) -> list[int]:
+        """
+        Gets list of columns for the given row.
+        """
+        columns = list(range(0, self._number_of_columns))
+
+        for cell_key, cell in enumerate(row):
+            if isinstance(cell, TableCell) and cell.colspan > 1:
+                # exclude grouped columns.
+                columns = [
+                    x
+                    for x in columns
+                    if x not in list(range(cell_key + 1, cell_key + cell.colspan))
+                ]
+
+        return columns
+
+    def _calculate_column_widths(self, rows: _Rows) -> None:
+        """
+        Calculates column widths.
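+
+        Editor's note: a cell spanning several columns is first split into
+        ``colspan`` roughly equal chunks, so e.g. a 10-character cell with
+        colspan=2 contributes about 5 characters of width to each column.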
+ """ + for column in range(0, self._number_of_columns): + lengths = [0] + for row in rows: + if isinstance(row, TableSeparator): + continue + + row_ = row.copy() + for i, cell in enumerate(row_): + if isinstance(cell, TableCell): + text_content = self._io.remove_format(cell) + text_length = len(text_content) + if text_length: + length = math.ceil(text_length / cell.colspan) + content_columns = [ + text_content[i : i + length] + for i in range(0, text_length, length) + ] + + for position, content in enumerate(content_columns): + try: + row_[i + position] = content + except IndexError: + row_.append(content) + + lengths.append(self._get_cell_width(row_, column)) + + self._effective_column_widths[column] = ( + max(lengths) + len(self._style.cell_row_content_format) - 2 + ) + + def _get_column_separator_width(self) -> int: + return len(self._style.border_format.format(self._style.border_chars[3])) + + def _get_cell_width(self, row: _Row, column: int) -> int: + """ + Gets cell width. + """ + cell_width = 0 + + try: + cell = row[column] + cell_width = len(self._io.remove_format(cell)) + except IndexError: + pass + + column_width = ( + self._column_widths[column] if column in self._column_widths else 0 + ) + cell_width = max(cell_width, column_width) + + if column in self._column_max_widths: + return min(self._column_max_widths[column], cell_width) + + return cell_width + + def _cleanup(self): + self._column_widths = {} + self._number_of_columns = None + + @classmethod + def _init_styles(cls) -> None: + if cls._styles is not None: + return + + borderless = TableStyle() + borderless.set_horizontal_border_chars("=") + borderless.set_vertical_border_chars(" ") + borderless.set_default_crossing_char(" ") + + compact = TableStyle() + compact.set_horizontal_border_chars("") + compact.set_vertical_border_chars(" ") + compact.set_default_crossing_char("") + compact.set_cell_row_content_format("{}") + + box = TableStyle() + box.set_horizontal_border_chars("─") + box.set_vertical_border_chars("│") + box.set_crossing_chars("┼", "┌", "┬", "┐", "┤", "┘", "┴", "└", "├") + + box_double = TableStyle() + box_double.set_horizontal_border_chars("═", "─") + box_double.set_vertical_border_chars("║", "│") + box_double.set_crossing_chars( + "┼", "╔", "╤", "╗", "╢", "╝", "╧", "╚", "╟", "╠", "╪", "╣" + ) + + cls._styles = { + "default": TableStyle(), + "borderless": borderless, + "compact": compact, + "box": box, + "box-double": box_double, + } + + @classmethod + def _resolve_style(cls, name: str | TableStyle) -> TableStyle: + if isinstance(name, TableStyle): + return name + + if name in cls._styles: + return deepcopy(cls._styles[name]) + + raise ValueError(f'Table style "{name}" is not defined.') diff --git a/env/lib/python3.8/site-packages/cleo/ui/table_cell.py b/env/lib/python3.8/site-packages/cleo/ui/table_cell.py new file mode 100644 index 00000000..c066260c --- /dev/null +++ b/env/lib/python3.8/site-packages/cleo/ui/table_cell.py @@ -0,0 +1,41 @@ +from __future__ import annotations + +from typing import TYPE_CHECKING + + +if TYPE_CHECKING: + from cleo.ui.table_cell_style import TableCellStyle + + +class TableCell(str): + def __new__( + cls, + value: str = "", + rowspan: int = 1, + colspan: int = 1, + style: TableCellStyle | None = None, + ) -> TableCell: + return super().__new__(cls, value) + + def __init__( + self, + value: str = "", + rowspan: int = 1, + colspan: int = 1, + style: TableCellStyle | None = None, + ) -> None: + self._rowspan = rowspan + self._colspan = colspan + self._style = style + + @property + 
def rowspan(self) -> int:
+        return self._rowspan
+
+    @property
+    def colspan(self) -> int:
+        return self._colspan
+
+    @property
+    def style(self) -> TableCellStyle | None:
+        return self._style
diff --git a/env/lib/python3.8/site-packages/cleo/ui/table_cell_style.py b/env/lib/python3.8/site-packages/cleo/ui/table_cell_style.py
new file mode 100644
index 00000000..36713a7b
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cleo/ui/table_cell_style.py
@@ -0,0 +1,36 @@
+from __future__ import annotations
+
+
+class TableCellStyle:
+    def __init__(
+        self, fg="default", bg="default", options=None, align="left", cell_format=None
+    ) -> None:
+        self._fg = fg
+        self._bg = bg
+        self._options = options
+        self._align = align
+        self._cell_format = cell_format
+
+    @property
+    def cell_format(self) -> str | None:
+        return self._cell_format
+
+    @property
+    def tag(self) -> str:
+        tag = "fg=" + self._fg + ";bg=" + self._bg
+
+        if self._options:
+            tag += ";options=" + ",".join(self._options)
+
+        return tag
+
+    def pad(self, string: str, length: int, char: str = " ") -> str:
+        if self._align == "left":
+            return string.rjust(length, char)
+
+        if self._align == "right":
+            return string.ljust(length, char)
+
+        return string.center(length, char)
diff --git a/env/lib/python3.8/site-packages/cleo/ui/table_separator.py b/env/lib/python3.8/site-packages/cleo/ui/table_separator.py
new file mode 100644
index 00000000..4feb09d2
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cleo/ui/table_separator.py
@@ -0,0 +1,8 @@
+from __future__ import annotations
+
+from cleo.ui.table_cell import TableCell
+
+
+class TableSeparator(TableCell):
+    def __init__(self) -> None:
+        super().__init__("")
diff --git a/env/lib/python3.8/site-packages/cleo/ui/table_style.py b/env/lib/python3.8/site-packages/cleo/ui/table_style.py
new file mode 100644
index 00000000..82fe187e
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cleo/ui/table_style.py
@@ -0,0 +1,276 @@
+from __future__ import annotations
+
+
+class TableStyle:
+    """
+    Defines styles for Table instances.
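+
+    Illustrative usage (editor's sketch; ``table`` is assumed to be an
+    existing Table instance):
+
+        style = TableStyle()
+        style.set_horizontal_border_chars("=")
+        style.set_default_crossing_char("+")
+        table.set_style(style)  # set_style accepts instances as well as names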
+    """
+
+    def __init__(self) -> None:
+        self._padding_char = " "
+        self._horizontal_outside_border_char = "-"
+        self._horizontal_inside_border_char = "-"
+        self._vertical_outside_border_char = "|"
+        self._vertical_inside_border_char = "|"
+        self._crossing_char = "+"
+        self._crossing_top_right_char = "+"
+        self._crossing_top_mid_char = "+"
+        self._crossing_top_left_char = "+"
+        self._crossing_mid_right_char = "+"
+        self._crossing_bottom_right_char = "+"
+        self._crossing_bottom_mid_char = "+"
+        self._crossing_bottom_left_char = "+"
+        self._crossing_mid_left_char = "+"
+        self._crossing_top_left_bottom_char = "+"
+        self._crossing_top_mid_bottom_char = "+"
+        self._crossing_top_right_bottom_char = "+"
+        self._header_title_format = "<fg=black;bg=white;options=bold> {} </>"
+        self._footer_title_format = "<fg=black;bg=white;options=bold> {} </>"
+        self._cell_header_format = "<info>{}</info>"
+        self._cell_row_format = "{}"
+        self._cell_row_content_format = " {} "
+        self._border_format = "{}"
+        self._pad_type = "right"
+
+    @property
+    def padding_char(self) -> str:
+        return self._padding_char
+
+    @property
+    def border_chars(self) -> list[str]:
+        return [
+            self._horizontal_outside_border_char,
+            self._vertical_outside_border_char,
+            self._horizontal_inside_border_char,
+            self._vertical_inside_border_char,
+        ]
+
+    @property
+    def crossing_char(self) -> str:
+        return self._crossing_char
+
+    @property
+    def crossing_chars(self) -> list[str]:
+        return [
+            self._crossing_char,
+            self._crossing_top_left_char,
+            self._crossing_top_mid_char,
+            self._crossing_top_right_char,
+            self._crossing_mid_right_char,
+            self._crossing_bottom_right_char,
+            self._crossing_bottom_mid_char,
+            self._crossing_bottom_left_char,
+            self._crossing_mid_left_char,
+            self._crossing_top_left_bottom_char,
+            self._crossing_top_mid_bottom_char,
+            self._crossing_top_right_bottom_char,
+        ]
+
+    @property
+    def cell_header_format(self) -> str:
+        return self._cell_header_format
+
+    @property
+    def cell_row_format(self) -> str:
+        return self._cell_row_format
+
+    @property
+    def cell_row_content_format(self) -> str:
+        return self._cell_row_content_format
+
+    @property
+    def border_format(self) -> str:
+        return self._border_format
+
+    @property
+    def header_title_format(self) -> str:
+        return self._header_title_format
+
+    @property
+    def footer_title_format(self) -> str:
+        return self._footer_title_format
+
+    @property
+    def pad_type(self) -> str:
+        return self._pad_type
+
+    def set_padding_char(self, padding_char: str) -> TableStyle:
+        """
+        Sets padding character, used for cell padding.
+        """
+        if not padding_char:
+            raise ValueError("The padding char must not be empty.")
+
+        self._padding_char = padding_char
+
+        return self
+
+    def set_horizontal_border_chars(
+        self, outside: str, inside: str | None = None
+    ) -> TableStyle:
+        """
+        Sets horizontal border characters ("1" marks the outside character,
+        "2" the inside character):
+
+        ╔═══════════════╤══════════════════════════╤══════════════════╗
+        ║ ISBN          │ Title                    │ Author           ║
+        ╠═══════1═══════╪══════════════════════════╪══════════════════╣
+        ║ 99921-58-10-7 │ Divine Comedy            │ Dante Alighieri  ║
+        ║ 9971-5-0210-0 │ A Tale of Two Cities     │ Charles Dickens  ║
+        ╟───────2───────┼──────────────────────────┼──────────────────╢
+        ║ 960-425-059-0 │ The Lord of the Rings    │ J. R. R. Tolkien ║
+        ║ 80-902734-1-6 │ And Then There Were None │ Agatha Christie  ║
+        ╚═══════════════╧══════════════════════════╧══════════════════╝
+        """
+        self._horizontal_outside_border_char = outside
+        self._horizontal_inside_border_char = inside if inside is not None else outside
+
+        return self
+
+    def set_vertical_border_chars(
+        self, outside: str, inside: str | None = None
+    ) -> TableStyle:
+        """
+        Sets vertical border characters ("1" marks the outside character,
+        "2" the inside character):
+
+        ╔═══════════════╤══════════════════════════╤══════════════════╗
+        1 ISBN          2 Title                    │ Author           ║
+        ╠═══════════════╪══════════════════════════╪══════════════════╣
+        ║ 99921-58-10-7 │ Divine Comedy            │ Dante Alighieri  ║
+        ║ 9971-5-0210-0 │ A Tale of Two Cities     │ Charles Dickens  ║
+        ╟───────────────┼──────────────────────────┼──────────────────╢
+        ║ 960-425-059-0 │ The Lord of the Rings    │ J. R. R. Tolkien ║
+        ║ 80-902734-1-6 │ And Then There Were None │ Agatha Christie  ║
+        ╚═══════════════╧══════════════════════════╧══════════════════╝
+        """
+        self._vertical_outside_border_char = outside
+        self._vertical_inside_border_char = inside if inside is not None else outside
+
+        return self
+
+    def set_crossing_chars(
+        self,
+        cross: str,
+        top_left: str,
+        top_mid: str,
+        top_right: str,
+        mid_right: str,
+        bottom_right: str,
+        bottom_mid: str,
+        bottom_left: str,
+        mid_left: str,
+        top_left_bottom: str | None = None,
+        top_mid_bottom: str | None = None,
+        top_right_bottom: str | None = None,
+    ) -> TableStyle:
+        """
+        Sets crossing characters.
+
+        Example:
+
+        1═══════════════2══════════════════════════2══════════════════3
+        ║ ISBN          │ Title                    │ Author           ║
+        8'══════════════0'═════════════════════════0'═════════════════4'
+        ║ 99921-58-10-7 │ Divine Comedy            │ Dante Alighieri  ║
+        ║ 9971-5-0210-0 │ A Tale of Two Cities     │ Charles Dickens  ║
+        8───────────────0──────────────────────────0──────────────────4
+        ║ 960-425-059-0 │ The Lord of the Rings    │ J. R. R. Tolkien ║
+        ║ 80-902734-1-6 │ And Then There Were None │ Agatha Christie  ║
+        7═══════════════6══════════════════════════6══════════════════5
+        """
+        self._crossing_char = cross
+        self._crossing_top_left_char = top_left
+        self._crossing_top_mid_char = top_mid
+        self._crossing_top_right_char = top_right
+        self._crossing_mid_right_char = mid_right
+        self._crossing_bottom_right_char = bottom_right
+        self._crossing_bottom_mid_char = bottom_mid
+        self._crossing_bottom_left_char = bottom_left
+        self._crossing_mid_left_char = mid_left
+        self._crossing_top_left_bottom_char = (
+            top_left_bottom if top_left_bottom is not None else mid_left
+        )
+        self._crossing_top_mid_bottom_char = (
+            top_mid_bottom if top_mid_bottom is not None else cross
+        )
+        self._crossing_top_right_bottom_char = (
+            top_right_bottom if top_right_bottom is not None else mid_right
+        )
+
+        return self
+
+    def set_default_crossing_char(self, char: str) -> TableStyle:
+        """
+        Sets default crossing character used for each cross.
+        """
+        return self.set_crossing_chars(
+            char, char, char, char, char, char, char, char, char
+        )
+
+    def set_cell_header_format(self, cell_header_format: str) -> TableStyle:
+        """
+        Sets the header cell format.
+        """
+        self._cell_header_format = cell_header_format
+
+        return self
+
+    def set_cell_row_format(self, cell_row_format: str) -> TableStyle:
+        """
+        Sets the row cell format.
+        """
+        self._cell_row_format = cell_row_format
+
+        return self
+
+    def set_cell_row_content_format(self, cell_row_content_format: str) -> TableStyle:
+        """
+        Sets the row cell content format.
+        """
+        self._cell_row_content_format = cell_row_content_format
+
+        return self
+
+    def set_border_format(self, border_format: str) -> TableStyle:
+        """
+        Sets the border format.
+        """
+        self._border_format = border_format
+
+        return self
+
+    def set_header_title_format(self, header_title_format: str) -> TableStyle:
+        """
+        Sets the header title format.
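+
+        Editor's note: the value is a ``str.format`` template applied to
+        the title, e.g. ``" {} "`` renders a padded " Report " that
+        table.py centers into the top border rule.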
+        """
+        self._header_title_format = header_title_format
+
+        return self
+
+    def set_footer_title_format(self, footer_title_format: str) -> TableStyle:
+        """
+        Sets the footer title format.
+        """
+        self._footer_title_format = footer_title_format
+
+        return self
+
+    def set_pad_type(self, pad_type: str) -> TableStyle:
+        """
+        Sets the padding type.
+        """
+        if pad_type not in {"left", "right", "center"}:
+            raise ValueError(
+                'Invalid padding type. Expected one of "left", "right", "center".'
+            )
+
+        self._pad_type = pad_type
+
+        return self
+
+    def pad(self, string: str, length: int, char: str = " ") -> str:
+        if self._pad_type == "left":
+            return string.rjust(length, char)
+
+        if self._pad_type == "right":
+            return string.ljust(length, char)
+
+        return string.center(length, char)
diff --git a/env/lib/python3.8/site-packages/cleo/ui/ui.py b/env/lib/python3.8/site-packages/cleo/ui/ui.py
new file mode 100644
index 00000000..fd4d00bd
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cleo/ui/ui.py
@@ -0,0 +1,32 @@
+from __future__ import annotations
+
+from cleo.exceptions import ValueException
+from cleo.ui.component import Component
+
+
+class UI:
+    def __init__(self, components: list[Component] | None = None) -> None:
+        self._components: dict[str, Component] = {}
+
+        if components is None:
+            components = []
+
+        for component in components:
+            self.register(component)
+
+    def register(self, component: Component) -> None:
+        if not isinstance(component, Component):
+            raise ValueException(
+                "A UI component must inherit from the Component class."
+            )
+
+        if not component.name:
+            raise ValueException("A UI component cannot be anonymous.")
+
+        self._components[component.name] = component
+
+    def component(self, name: str) -> Component:
+        if name not in self._components:
+            raise ValueException(f'UI component "{name}" does not exist.')
+
+        return self._components[name]
diff --git a/env/lib/python3.8/site-packages/click-8.1.3.dist-info/INSTALLER b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/INSTALLER
new file mode 100644
index 00000000..a1b589e3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/env/lib/python3.8/site-packages/click-8.1.3.dist-info/LICENSE.rst b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/LICENSE.rst
new file mode 100644
index 00000000..d12a8491
--- /dev/null
+++ b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/LICENSE.rst
@@ -0,0 +1,28 @@
+Copyright 2014 Pallets
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+1.  Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+
+2.  Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+3.  Neither the name of the copyright holder nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/env/lib/python3.8/site-packages/click-8.1.3.dist-info/METADATA b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/METADATA new file mode 100644 index 00000000..8e5dc1e0 --- /dev/null +++ b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/METADATA @@ -0,0 +1,111 @@ +Metadata-Version: 2.1 +Name: click +Version: 8.1.3 +Summary: Composable command line interface toolkit +Home-page: https://palletsprojects.com/p/click/ +Author: Armin Ronacher +Author-email: armin.ronacher@active-4.com +Maintainer: Pallets +Maintainer-email: contact@palletsprojects.com +License: BSD-3-Clause +Project-URL: Donate, https://palletsprojects.com/donate +Project-URL: Documentation, https://click.palletsprojects.com/ +Project-URL: Changes, https://click.palletsprojects.com/changes/ +Project-URL: Source Code, https://github.com/pallets/click/ +Project-URL: Issue Tracker, https://github.com/pallets/click/issues/ +Project-URL: Twitter, https://twitter.com/PalletsTeam +Project-URL: Chat, https://discord.gg/pallets +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Requires-Python: >=3.7 +Description-Content-Type: text/x-rst +License-File: LICENSE.rst +Requires-Dist: colorama ; platform_system == "Windows" +Requires-Dist: importlib-metadata ; python_version < "3.8" + +\$ click\_ +========== + +Click is a Python package for creating beautiful command line interfaces +in a composable way with as little code as necessary. It's the "Command +Line Interface Creation Kit". It's highly configurable but comes with +sensible defaults out of the box. + +It aims to make the process of writing command line tools quick and fun +while also preventing any frustration caused by the inability to +implement an intended CLI API. + +Click in three points: + +- Arbitrary nesting of commands +- Automatic help page generation +- Supports lazy loading of subcommands at runtime + + +Installing +---------- + +Install and update using `pip`_: + +.. code-block:: text + + $ pip install -U click + +.. _pip: https://pip.pypa.io/en/stable/getting-started/ + + +A Simple Example +---------------- + +.. code-block:: python + + import click + + @click.command() + @click.option("--count", default=1, help="Number of greetings.") + @click.option("--name", prompt="Your name", help="The person to greet.") + def hello(count, name): + """Simple program that greets NAME for a total of COUNT times.""" + for _ in range(count): + click.echo(f"Hello, {name}!") + + if __name__ == '__main__': + hello() + +.. code-block:: text + + $ python hello.py --count=3 + Your name: Click + Hello, Click! + Hello, Click! + Hello, Click! + + +Donate +------ + +The Pallets organization develops and supports Click and other popular +packages. 
In order to grow the community of contributors and users, and +allow the maintainers to devote more time to the projects, `please +donate today`_. + +.. _please donate today: https://palletsprojects.com/donate + + +Links +----- + +- Documentation: https://click.palletsprojects.com/ +- Changes: https://click.palletsprojects.com/changes/ +- PyPI Releases: https://pypi.org/project/click/ +- Source Code: https://github.com/pallets/click +- Issue Tracker: https://github.com/pallets/click/issues +- Website: https://palletsprojects.com/p/click +- Twitter: https://twitter.com/PalletsTeam +- Chat: https://discord.gg/pallets + + diff --git a/env/lib/python3.8/site-packages/click-8.1.3.dist-info/RECORD b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/RECORD new file mode 100644 index 00000000..927fa43b --- /dev/null +++ b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/RECORD @@ -0,0 +1,39 @@ +click-8.1.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +click-8.1.3.dist-info/LICENSE.rst,sha256=morRBqOU6FO_4h9C9OctWSgZoigF2ZG18ydQKSkrZY0,1475 +click-8.1.3.dist-info/METADATA,sha256=tFJIX5lOjx7c5LjZbdTPFVDJSgyv9F74XY0XCPp_gnc,3247 +click-8.1.3.dist-info/RECORD,, +click-8.1.3.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +click-8.1.3.dist-info/top_level.txt,sha256=J1ZQogalYS4pphY_lPECoNMfw0HzTSrZglC4Yfwo4xA,6 +click/__init__.py,sha256=rQBLutqg-z6m8nOzivIfigDn_emijB_dKv9BZ2FNi5s,3138 +click/__pycache__/__init__.cpython-38.pyc,, +click/__pycache__/_compat.cpython-38.pyc,, +click/__pycache__/_termui_impl.cpython-38.pyc,, +click/__pycache__/_textwrap.cpython-38.pyc,, +click/__pycache__/_winconsole.cpython-38.pyc,, +click/__pycache__/core.cpython-38.pyc,, +click/__pycache__/decorators.cpython-38.pyc,, +click/__pycache__/exceptions.cpython-38.pyc,, +click/__pycache__/formatting.cpython-38.pyc,, +click/__pycache__/globals.cpython-38.pyc,, +click/__pycache__/parser.cpython-38.pyc,, +click/__pycache__/shell_completion.cpython-38.pyc,, +click/__pycache__/termui.cpython-38.pyc,, +click/__pycache__/testing.cpython-38.pyc,, +click/__pycache__/types.cpython-38.pyc,, +click/__pycache__/utils.cpython-38.pyc,, +click/_compat.py,sha256=JIHLYs7Jzz4KT9t-ds4o4jBzLjnwCiJQKqur-5iwCKI,18810 +click/_termui_impl.py,sha256=qK6Cfy4mRFxvxE8dya8RBhLpSC8HjF-lvBc6aNrPdwg,23451 +click/_textwrap.py,sha256=10fQ64OcBUMuK7mFvh8363_uoOxPlRItZBmKzRJDgoY,1353 +click/_winconsole.py,sha256=5ju3jQkcZD0W27WEMGqmEP4y_crUVzPCqsX_FYb7BO0,7860 +click/core.py,sha256=mz87bYEKzIoNYEa56BFAiOJnvt1Y0L-i7wD4_ZecieE,112782 +click/decorators.py,sha256=yo3zvzgUm5q7h5CXjyV6q3h_PJAiUaem178zXwdWUFI,16350 +click/exceptions.py,sha256=7gDaLGuFZBeCNwY9ERMsF2-Z3R9Fvq09Zc6IZSKjseo,9167 +click/formatting.py,sha256=Frf0-5W33-loyY_i9qrwXR8-STnW3m5gvyxLVUdyxyk,9706 +click/globals.py,sha256=TP-qM88STzc7f127h35TD_v920FgfOD2EwzqA0oE8XU,1961 +click/parser.py,sha256=cAEt1uQR8gq3-S9ysqbVU-fdAZNvilxw4ReJ_T1OQMk,19044 +click/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +click/shell_completion.py,sha256=qOp_BeC9esEOSZKyu5G7RIxEUaLsXUX-mTb7hB1r4QY,18018 +click/termui.py,sha256=ACBQVOvFCTSqtD5VREeCAdRtlHd-Imla-Lte4wSfMjA,28355 +click/testing.py,sha256=ptpMYgRY7dVfE3UDgkgwayu9ePw98sQI3D7zZXiCpj4,16063 +click/types.py,sha256=rEb1aZSQKq3ciCMmjpG2Uva9vk498XRL7ThrcK2GRss,35805 +click/utils.py,sha256=33D6E7poH_nrKB-xr-UyDEXnxOcCiQqxuRLtrqeVv6o,18682 diff --git a/env/lib/python3.8/site-packages/click-8.1.3.dist-info/WHEEL b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/WHEEL new file mode 100644 index 
00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/click-8.1.3.dist-info/top_level.txt b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/top_level.txt new file mode 100644 index 00000000..dca9a909 --- /dev/null +++ b/env/lib/python3.8/site-packages/click-8.1.3.dist-info/top_level.txt @@ -0,0 +1 @@ +click diff --git a/env/lib/python3.8/site-packages/click/__init__.py b/env/lib/python3.8/site-packages/click/__init__.py new file mode 100644 index 00000000..e3ef423b --- /dev/null +++ b/env/lib/python3.8/site-packages/click/__init__.py @@ -0,0 +1,73 @@ +""" +Click is a simple Python module inspired by the stdlib optparse to make +writing command line scripts fun. Unlike other modules, it's based +around a simple API that does not come with too much magic and is +composable. +""" +from .core import Argument as Argument +from .core import BaseCommand as BaseCommand +from .core import Command as Command +from .core import CommandCollection as CommandCollection +from .core import Context as Context +from .core import Group as Group +from .core import MultiCommand as MultiCommand +from .core import Option as Option +from .core import Parameter as Parameter +from .decorators import argument as argument +from .decorators import command as command +from .decorators import confirmation_option as confirmation_option +from .decorators import group as group +from .decorators import help_option as help_option +from .decorators import make_pass_decorator as make_pass_decorator +from .decorators import option as option +from .decorators import pass_context as pass_context +from .decorators import pass_obj as pass_obj +from .decorators import password_option as password_option +from .decorators import version_option as version_option +from .exceptions import Abort as Abort +from .exceptions import BadArgumentUsage as BadArgumentUsage +from .exceptions import BadOptionUsage as BadOptionUsage +from .exceptions import BadParameter as BadParameter +from .exceptions import ClickException as ClickException +from .exceptions import FileError as FileError +from .exceptions import MissingParameter as MissingParameter +from .exceptions import NoSuchOption as NoSuchOption +from .exceptions import UsageError as UsageError +from .formatting import HelpFormatter as HelpFormatter +from .formatting import wrap_text as wrap_text +from .globals import get_current_context as get_current_context +from .parser import OptionParser as OptionParser +from .termui import clear as clear +from .termui import confirm as confirm +from .termui import echo_via_pager as echo_via_pager +from .termui import edit as edit +from .termui import getchar as getchar +from .termui import launch as launch +from .termui import pause as pause +from .termui import progressbar as progressbar +from .termui import prompt as prompt +from .termui import secho as secho +from .termui import style as style +from .termui import unstyle as unstyle +from .types import BOOL as BOOL +from .types import Choice as Choice +from .types import DateTime as DateTime +from .types import File as File +from .types import FLOAT as FLOAT +from .types import FloatRange as FloatRange +from .types import INT as INT +from .types import IntRange as IntRange +from .types import ParamType as ParamType +from .types import Path as Path +from .types import STRING as STRING +from 
.types import Tuple as Tuple +from .types import UNPROCESSED as UNPROCESSED +from .types import UUID as UUID +from .utils import echo as echo +from .utils import format_filename as format_filename +from .utils import get_app_dir as get_app_dir +from .utils import get_binary_stream as get_binary_stream +from .utils import get_text_stream as get_text_stream +from .utils import open_file as open_file + +__version__ = "8.1.3" diff --git a/env/lib/python3.8/site-packages/click/_compat.py b/env/lib/python3.8/site-packages/click/_compat.py new file mode 100644 index 00000000..766d286b --- /dev/null +++ b/env/lib/python3.8/site-packages/click/_compat.py @@ -0,0 +1,626 @@ +import codecs +import io +import os +import re +import sys +import typing as t +from weakref import WeakKeyDictionary + +CYGWIN = sys.platform.startswith("cygwin") +MSYS2 = sys.platform.startswith("win") and ("GCC" in sys.version) +# Determine local App Engine environment, per Google's own suggestion +APP_ENGINE = "APPENGINE_RUNTIME" in os.environ and "Development/" in os.environ.get( + "SERVER_SOFTWARE", "" +) +WIN = sys.platform.startswith("win") and not APP_ENGINE and not MSYS2 +auto_wrap_for_ansi: t.Optional[t.Callable[[t.TextIO], t.TextIO]] = None +_ansi_re = re.compile(r"\033\[[;?0-9]*[a-zA-Z]") + + +def get_filesystem_encoding() -> str: + return sys.getfilesystemencoding() or sys.getdefaultencoding() + + +def _make_text_stream( + stream: t.BinaryIO, + encoding: t.Optional[str], + errors: t.Optional[str], + force_readable: bool = False, + force_writable: bool = False, +) -> t.TextIO: + if encoding is None: + encoding = get_best_encoding(stream) + if errors is None: + errors = "replace" + return _NonClosingTextIOWrapper( + stream, + encoding, + errors, + line_buffering=True, + force_readable=force_readable, + force_writable=force_writable, + ) + + +def is_ascii_encoding(encoding: str) -> bool: + """Checks if a given encoding is ascii.""" + try: + return codecs.lookup(encoding).name == "ascii" + except LookupError: + return False + + +def get_best_encoding(stream: t.IO) -> str: + """Returns the default stream encoding if not found.""" + rv = getattr(stream, "encoding", None) or sys.getdefaultencoding() + if is_ascii_encoding(rv): + return "utf-8" + return rv + + +class _NonClosingTextIOWrapper(io.TextIOWrapper): + def __init__( + self, + stream: t.BinaryIO, + encoding: t.Optional[str], + errors: t.Optional[str], + force_readable: bool = False, + force_writable: bool = False, + **extra: t.Any, + ) -> None: + self._stream = stream = t.cast( + t.BinaryIO, _FixupStream(stream, force_readable, force_writable) + ) + super().__init__(stream, encoding, errors, **extra) + + def __del__(self) -> None: + try: + self.detach() + except Exception: + pass + + def isatty(self) -> bool: + # https://bitbucket.org/pypy/pypy/issue/1803 + return self._stream.isatty() + + +class _FixupStream: + """The new io interface needs more from streams than streams + traditionally implement. As such, this fix-up code is necessary in + some circumstances. + + The forcing of readable and writable flags are there because some tools + put badly patched objects on sys (one such offender are certain version + of jupyter notebook). 
+ """ + + def __init__( + self, + stream: t.BinaryIO, + force_readable: bool = False, + force_writable: bool = False, + ): + self._stream = stream + self._force_readable = force_readable + self._force_writable = force_writable + + def __getattr__(self, name: str) -> t.Any: + return getattr(self._stream, name) + + def read1(self, size: int) -> bytes: + f = getattr(self._stream, "read1", None) + + if f is not None: + return t.cast(bytes, f(size)) + + return self._stream.read(size) + + def readable(self) -> bool: + if self._force_readable: + return True + x = getattr(self._stream, "readable", None) + if x is not None: + return t.cast(bool, x()) + try: + self._stream.read(0) + except Exception: + return False + return True + + def writable(self) -> bool: + if self._force_writable: + return True + x = getattr(self._stream, "writable", None) + if x is not None: + return t.cast(bool, x()) + try: + self._stream.write("") # type: ignore + except Exception: + try: + self._stream.write(b"") + except Exception: + return False + return True + + def seekable(self) -> bool: + x = getattr(self._stream, "seekable", None) + if x is not None: + return t.cast(bool, x()) + try: + self._stream.seek(self._stream.tell()) + except Exception: + return False + return True + + +def _is_binary_reader(stream: t.IO, default: bool = False) -> bool: + try: + return isinstance(stream.read(0), bytes) + except Exception: + return default + # This happens in some cases where the stream was already + # closed. In this case, we assume the default. + + +def _is_binary_writer(stream: t.IO, default: bool = False) -> bool: + try: + stream.write(b"") + except Exception: + try: + stream.write("") + return False + except Exception: + pass + return default + return True + + +def _find_binary_reader(stream: t.IO) -> t.Optional[t.BinaryIO]: + # We need to figure out if the given stream is already binary. + # This can happen because the official docs recommend detaching + # the streams to get binary streams. Some code might do this, so + # we need to deal with this case explicitly. + if _is_binary_reader(stream, False): + return t.cast(t.BinaryIO, stream) + + buf = getattr(stream, "buffer", None) + + # Same situation here; this time we assume that the buffer is + # actually binary in case it's closed. + if buf is not None and _is_binary_reader(buf, True): + return t.cast(t.BinaryIO, buf) + + return None + + +def _find_binary_writer(stream: t.IO) -> t.Optional[t.BinaryIO]: + # We need to figure out if the given stream is already binary. + # This can happen because the official docs recommend detaching + # the streams to get binary streams. Some code might do this, so + # we need to deal with this case explicitly. + if _is_binary_writer(stream, False): + return t.cast(t.BinaryIO, stream) + + buf = getattr(stream, "buffer", None) + + # Same situation here; this time we assume that the buffer is + # actually binary in case it's closed. + if buf is not None and _is_binary_writer(buf, True): + return t.cast(t.BinaryIO, buf) + + return None + + +def _stream_is_misconfigured(stream: t.TextIO) -> bool: + """A stream is misconfigured if its encoding is ASCII.""" + # If the stream does not have an encoding set, we assume it's set + # to ASCII. This appears to happen in certain unittest + # environments. It's not quite clear what the correct behavior is + # but this at least will force Click to recover somehow. 
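+    # Editor's note (illustrative): "US-ASCII" is normalized to "ascii" by
+    # codecs.lookup() and therefore counts as misconfigured, and a stream
+    # with no encoding attribute is treated as ascii too; "utf-8" passes.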
+ return is_ascii_encoding(getattr(stream, "encoding", None) or "ascii") + + +def _is_compat_stream_attr(stream: t.TextIO, attr: str, value: t.Optional[str]) -> bool: + """A stream attribute is compatible if it is equal to the + desired value or the desired value is unset and the attribute + has a value. + """ + stream_value = getattr(stream, attr, None) + return stream_value == value or (value is None and stream_value is not None) + + +def _is_compatible_text_stream( + stream: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str] +) -> bool: + """Check if a stream's encoding and errors attributes are + compatible with the desired values. + """ + return _is_compat_stream_attr( + stream, "encoding", encoding + ) and _is_compat_stream_attr(stream, "errors", errors) + + +def _force_correct_text_stream( + text_stream: t.IO, + encoding: t.Optional[str], + errors: t.Optional[str], + is_binary: t.Callable[[t.IO, bool], bool], + find_binary: t.Callable[[t.IO], t.Optional[t.BinaryIO]], + force_readable: bool = False, + force_writable: bool = False, +) -> t.TextIO: + if is_binary(text_stream, False): + binary_reader = t.cast(t.BinaryIO, text_stream) + else: + text_stream = t.cast(t.TextIO, text_stream) + # If the stream looks compatible, and won't default to a + # misconfigured ascii encoding, return it as-is. + if _is_compatible_text_stream(text_stream, encoding, errors) and not ( + encoding is None and _stream_is_misconfigured(text_stream) + ): + return text_stream + + # Otherwise, get the underlying binary reader. + possible_binary_reader = find_binary(text_stream) + + # If that's not possible, silently use the original reader + # and get mojibake instead of exceptions. + if possible_binary_reader is None: + return text_stream + + binary_reader = possible_binary_reader + + # Default errors to replace instead of strict in order to get + # something that works. + if errors is None: + errors = "replace" + + # Wrap the binary stream in a text stream with the correct + # encoding parameters. 
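+    # Editor's sketch of the net effect: an ascii-configured sys.stdout is
+    # unwrapped to its binary buffer and rewrapped; _make_text_stream then
+    # falls back to get_best_encoding() (UTF-8 when ascii is detected) with
+    # errors="replace", so output degrades instead of raising.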
+ return _make_text_stream( + binary_reader, + encoding, + errors, + force_readable=force_readable, + force_writable=force_writable, + ) + + +def _force_correct_text_reader( + text_reader: t.IO, + encoding: t.Optional[str], + errors: t.Optional[str], + force_readable: bool = False, +) -> t.TextIO: + return _force_correct_text_stream( + text_reader, + encoding, + errors, + _is_binary_reader, + _find_binary_reader, + force_readable=force_readable, + ) + + +def _force_correct_text_writer( + text_writer: t.IO, + encoding: t.Optional[str], + errors: t.Optional[str], + force_writable: bool = False, +) -> t.TextIO: + return _force_correct_text_stream( + text_writer, + encoding, + errors, + _is_binary_writer, + _find_binary_writer, + force_writable=force_writable, + ) + + +def get_binary_stdin() -> t.BinaryIO: + reader = _find_binary_reader(sys.stdin) + if reader is None: + raise RuntimeError("Was not able to determine binary stream for sys.stdin.") + return reader + + +def get_binary_stdout() -> t.BinaryIO: + writer = _find_binary_writer(sys.stdout) + if writer is None: + raise RuntimeError("Was not able to determine binary stream for sys.stdout.") + return writer + + +def get_binary_stderr() -> t.BinaryIO: + writer = _find_binary_writer(sys.stderr) + if writer is None: + raise RuntimeError("Was not able to determine binary stream for sys.stderr.") + return writer + + +def get_text_stdin( + encoding: t.Optional[str] = None, errors: t.Optional[str] = None +) -> t.TextIO: + rv = _get_windows_console_stream(sys.stdin, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_reader(sys.stdin, encoding, errors, force_readable=True) + + +def get_text_stdout( + encoding: t.Optional[str] = None, errors: t.Optional[str] = None +) -> t.TextIO: + rv = _get_windows_console_stream(sys.stdout, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_writer(sys.stdout, encoding, errors, force_writable=True) + + +def get_text_stderr( + encoding: t.Optional[str] = None, errors: t.Optional[str] = None +) -> t.TextIO: + rv = _get_windows_console_stream(sys.stderr, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_writer(sys.stderr, encoding, errors, force_writable=True) + + +def _wrap_io_open( + file: t.Union[str, os.PathLike, int], + mode: str, + encoding: t.Optional[str], + errors: t.Optional[str], +) -> t.IO: + """Handles not passing ``encoding`` and ``errors`` in binary mode.""" + if "b" in mode: + return open(file, mode) + + return open(file, mode, encoding=encoding, errors=errors) + + +def open_stream( + filename: str, + mode: str = "r", + encoding: t.Optional[str] = None, + errors: t.Optional[str] = "strict", + atomic: bool = False, +) -> t.Tuple[t.IO, bool]: + binary = "b" in mode + + # Standard streams first. These are simple because they ignore the + # atomic flag. Use fsdecode to handle Path("-"). + if os.fsdecode(filename) == "-": + if any(m in mode for m in ["w", "a", "x"]): + if binary: + return get_binary_stdout(), False + return get_text_stdout(encoding=encoding, errors=errors), False + if binary: + return get_binary_stdin(), False + return get_text_stdin(encoding=encoding, errors=errors), False + + # Non-atomic writes directly go out through the regular open functions. 
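+    # Editor's note (hypothetical calls): open_stream("out.txt", "w")
+    # returns (file, True), the flag meaning the caller owns and must close
+    # the handle; open_stream("-", "w") returns (stdout wrapper, False).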
+ if not atomic: + return _wrap_io_open(filename, mode, encoding, errors), True + + # Some usability stuff for atomic writes + if "a" in mode: + raise ValueError( + "Appending to an existing file is not supported, because that" + " would involve an expensive `copy`-operation to a temporary" + " file. Open the file in normal `w`-mode and copy explicitly" + " if that's what you're after." + ) + if "x" in mode: + raise ValueError("Use the `overwrite`-parameter instead.") + if "w" not in mode: + raise ValueError("Atomic writes only make sense with `w`-mode.") + + # Atomic writes are more complicated. They work by opening a file + # as a proxy in the same folder and then using the fdopen + # functionality to wrap it in a Python file. Then we wrap it in an + # atomic file that moves the file over on close. + import errno + import random + + try: + perm: t.Optional[int] = os.stat(filename).st_mode + except OSError: + perm = None + + flags = os.O_RDWR | os.O_CREAT | os.O_EXCL + + if binary: + flags |= getattr(os, "O_BINARY", 0) + + while True: + tmp_filename = os.path.join( + os.path.dirname(filename), + f".__atomic-write{random.randrange(1 << 32):08x}", + ) + try: + fd = os.open(tmp_filename, flags, 0o666 if perm is None else perm) + break + except OSError as e: + if e.errno == errno.EEXIST or ( + os.name == "nt" + and e.errno == errno.EACCES + and os.path.isdir(e.filename) + and os.access(e.filename, os.W_OK) + ): + continue + raise + + if perm is not None: + os.chmod(tmp_filename, perm) # in case perm includes bits in umask + + f = _wrap_io_open(fd, mode, encoding, errors) + af = _AtomicFile(f, tmp_filename, os.path.realpath(filename)) + return t.cast(t.IO, af), True + + +class _AtomicFile: + def __init__(self, f: t.IO, tmp_filename: str, real_filename: str) -> None: + self._f = f + self._tmp_filename = tmp_filename + self._real_filename = real_filename + self.closed = False + + @property + def name(self) -> str: + return self._real_filename + + def close(self, delete: bool = False) -> None: + if self.closed: + return + self._f.close() + os.replace(self._tmp_filename, self._real_filename) + self.closed = True + + def __getattr__(self, name: str) -> t.Any: + return getattr(self._f, name) + + def __enter__(self) -> "_AtomicFile": + return self + + def __exit__(self, exc_type, exc_value, tb): # type: ignore + self.close(delete=exc_type is not None) + + def __repr__(self) -> str: + return repr(self._f) + + +def strip_ansi(value: str) -> str: + return _ansi_re.sub("", value) + + +def _is_jupyter_kernel_output(stream: t.IO) -> bool: + while isinstance(stream, (_FixupStream, _NonClosingTextIOWrapper)): + stream = stream._stream + + return stream.__class__.__module__.startswith("ipykernel.") + + +def should_strip_ansi( + stream: t.Optional[t.IO] = None, color: t.Optional[bool] = None +) -> bool: + if color is None: + if stream is None: + stream = sys.stdin + return not isatty(stream) and not _is_jupyter_kernel_output(stream) + return not color + + +# On Windows, wrap the output streams with colorama to support ANSI +# color codes. 
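+# colorama works by intercepting writes on the wrapped stream and
+# translating ANSI escape sequences into Win32 console API calls.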
+# NOTE: double check is needed so mypy does not analyze this on Linux +if sys.platform.startswith("win") and WIN: + from ._winconsole import _get_windows_console_stream + + def _get_argv_encoding() -> str: + import locale + + return locale.getpreferredencoding() + + _ansi_stream_wrappers: t.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary() + + def auto_wrap_for_ansi( + stream: t.TextIO, color: t.Optional[bool] = None + ) -> t.TextIO: + """Support ANSI color and style codes on Windows by wrapping a + stream with colorama. + """ + try: + cached = _ansi_stream_wrappers.get(stream) + except Exception: + cached = None + + if cached is not None: + return cached + + import colorama + + strip = should_strip_ansi(stream, color) + ansi_wrapper = colorama.AnsiToWin32(stream, strip=strip) + rv = t.cast(t.TextIO, ansi_wrapper.stream) + _write = rv.write + + def _safe_write(s): + try: + return _write(s) + except BaseException: + ansi_wrapper.reset_all() + raise + + rv.write = _safe_write + + try: + _ansi_stream_wrappers[stream] = rv + except Exception: + pass + + return rv + +else: + + def _get_argv_encoding() -> str: + return getattr(sys.stdin, "encoding", None) or get_filesystem_encoding() + + def _get_windows_console_stream( + f: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str] + ) -> t.Optional[t.TextIO]: + return None + + +def term_len(x: str) -> int: + return len(strip_ansi(x)) + + +def isatty(stream: t.IO) -> bool: + try: + return stream.isatty() + except Exception: + return False + + +def _make_cached_stream_func( + src_func: t.Callable[[], t.TextIO], wrapper_func: t.Callable[[], t.TextIO] +) -> t.Callable[[], t.TextIO]: + cache: t.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary() + + def func() -> t.TextIO: + stream = src_func() + try: + rv = cache.get(stream) + except Exception: + rv = None + if rv is not None: + return rv + rv = wrapper_func() + try: + cache[stream] = rv + except Exception: + pass + return rv + + return func + + +_default_text_stdin = _make_cached_stream_func(lambda: sys.stdin, get_text_stdin) +_default_text_stdout = _make_cached_stream_func(lambda: sys.stdout, get_text_stdout) +_default_text_stderr = _make_cached_stream_func(lambda: sys.stderr, get_text_stderr) + + +binary_streams: t.Mapping[str, t.Callable[[], t.BinaryIO]] = { + "stdin": get_binary_stdin, + "stdout": get_binary_stdout, + "stderr": get_binary_stderr, +} + +text_streams: t.Mapping[ + str, t.Callable[[t.Optional[str], t.Optional[str]], t.TextIO] +] = { + "stdin": get_text_stdin, + "stdout": get_text_stdout, + "stderr": get_text_stderr, +} diff --git a/env/lib/python3.8/site-packages/click/_termui_impl.py b/env/lib/python3.8/site-packages/click/_termui_impl.py new file mode 100644 index 00000000..4b979bcc --- /dev/null +++ b/env/lib/python3.8/site-packages/click/_termui_impl.py @@ -0,0 +1,717 @@ +""" +This module contains implementations for the termui module. To keep the +import time of Click down, some infrequently used functionality is +placed in this module and only imported as needed. 
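+It currently hosts the progress bar, the pager, the editor wrapper,
+URL opening, and the raw ``getchar`` implementations.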
+""" +import contextlib +import math +import os +import sys +import time +import typing as t +from gettext import gettext as _ + +from ._compat import _default_text_stdout +from ._compat import CYGWIN +from ._compat import get_best_encoding +from ._compat import isatty +from ._compat import open_stream +from ._compat import strip_ansi +from ._compat import term_len +from ._compat import WIN +from .exceptions import ClickException +from .utils import echo + +V = t.TypeVar("V") + +if os.name == "nt": + BEFORE_BAR = "\r" + AFTER_BAR = "\n" +else: + BEFORE_BAR = "\r\033[?25l" + AFTER_BAR = "\033[?25h\n" + + +class ProgressBar(t.Generic[V]): + def __init__( + self, + iterable: t.Optional[t.Iterable[V]], + length: t.Optional[int] = None, + fill_char: str = "#", + empty_char: str = " ", + bar_template: str = "%(bar)s", + info_sep: str = " ", + show_eta: bool = True, + show_percent: t.Optional[bool] = None, + show_pos: bool = False, + item_show_func: t.Optional[t.Callable[[t.Optional[V]], t.Optional[str]]] = None, + label: t.Optional[str] = None, + file: t.Optional[t.TextIO] = None, + color: t.Optional[bool] = None, + update_min_steps: int = 1, + width: int = 30, + ) -> None: + self.fill_char = fill_char + self.empty_char = empty_char + self.bar_template = bar_template + self.info_sep = info_sep + self.show_eta = show_eta + self.show_percent = show_percent + self.show_pos = show_pos + self.item_show_func = item_show_func + self.label = label or "" + if file is None: + file = _default_text_stdout() + self.file = file + self.color = color + self.update_min_steps = update_min_steps + self._completed_intervals = 0 + self.width = width + self.autowidth = width == 0 + + if length is None: + from operator import length_hint + + length = length_hint(iterable, -1) + + if length == -1: + length = None + if iterable is None: + if length is None: + raise TypeError("iterable or length is required") + iterable = t.cast(t.Iterable[V], range(length)) + self.iter = iter(iterable) + self.length = length + self.pos = 0 + self.avg: t.List[float] = [] + self.start = self.last_eta = time.time() + self.eta_known = False + self.finished = False + self.max_width: t.Optional[int] = None + self.entered = False + self.current_item: t.Optional[V] = None + self.is_hidden = not isatty(self.file) + self._last_line: t.Optional[str] = None + + def __enter__(self) -> "ProgressBar": + self.entered = True + self.render_progress() + return self + + def __exit__(self, exc_type, exc_value, tb): # type: ignore + self.render_finish() + + def __iter__(self) -> t.Iterator[V]: + if not self.entered: + raise RuntimeError("You need to use progress bars in a with block.") + self.render_progress() + return self.generator() + + def __next__(self) -> V: + # Iteration is defined in terms of a generator function, + # returned by iter(self); use that to define next(). This works + # because `self.iter` is an iterable consumed by that generator, + # so it is re-entry safe. Calling `next(self.generator())` + # twice works and does "what you want". 
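+        # Illustrative only: with `with ProgressBar(range(3)) as bar:`,
+        # `next(bar)` yields 0, the first item of the wrapped iterable.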
+ return next(iter(self)) + + def render_finish(self) -> None: + if self.is_hidden: + return + self.file.write(AFTER_BAR) + self.file.flush() + + @property + def pct(self) -> float: + if self.finished: + return 1.0 + return min(self.pos / (float(self.length or 1) or 1), 1.0) + + @property + def time_per_iteration(self) -> float: + if not self.avg: + return 0.0 + return sum(self.avg) / float(len(self.avg)) + + @property + def eta(self) -> float: + if self.length is not None and not self.finished: + return self.time_per_iteration * (self.length - self.pos) + return 0.0 + + def format_eta(self) -> str: + if self.eta_known: + t = int(self.eta) + seconds = t % 60 + t //= 60 + minutes = t % 60 + t //= 60 + hours = t % 24 + t //= 24 + if t > 0: + return f"{t}d {hours:02}:{minutes:02}:{seconds:02}" + else: + return f"{hours:02}:{minutes:02}:{seconds:02}" + return "" + + def format_pos(self) -> str: + pos = str(self.pos) + if self.length is not None: + pos += f"/{self.length}" + return pos + + def format_pct(self) -> str: + return f"{int(self.pct * 100): 4}%"[1:] + + def format_bar(self) -> str: + if self.length is not None: + bar_length = int(self.pct * self.width) + bar = self.fill_char * bar_length + bar += self.empty_char * (self.width - bar_length) + elif self.finished: + bar = self.fill_char * self.width + else: + chars = list(self.empty_char * (self.width or 1)) + if self.time_per_iteration != 0: + chars[ + int( + (math.cos(self.pos * self.time_per_iteration) / 2.0 + 0.5) + * self.width + ) + ] = self.fill_char + bar = "".join(chars) + return bar + + def format_progress_line(self) -> str: + show_percent = self.show_percent + + info_bits = [] + if self.length is not None and show_percent is None: + show_percent = not self.show_pos + + if self.show_pos: + info_bits.append(self.format_pos()) + if show_percent: + info_bits.append(self.format_pct()) + if self.show_eta and self.eta_known and not self.finished: + info_bits.append(self.format_eta()) + if self.item_show_func is not None: + item_info = self.item_show_func(self.current_item) + if item_info is not None: + info_bits.append(item_info) + + return ( + self.bar_template + % { + "label": self.label, + "bar": self.format_bar(), + "info": self.info_sep.join(info_bits), + } + ).rstrip() + + def render_progress(self) -> None: + import shutil + + if self.is_hidden: + # Only output the label as it changes if the output is not a + # TTY. Use file=stderr if you expect to be piping stdout. + if self._last_line != self.label: + self._last_line = self.label + echo(self.label, file=self.file, color=self.color) + + return + + buf = [] + # Update width in case the terminal has been resized + if self.autowidth: + old_width = self.width + self.width = 0 + clutter_length = term_len(self.format_progress_line()) + new_width = max(0, shutil.get_terminal_size().columns - clutter_length) + if new_width < old_width: + buf.append(BEFORE_BAR) + buf.append(" " * self.max_width) # type: ignore + self.max_width = new_width + self.width = new_width + + clear_width = self.width + if self.max_width is not None: + clear_width = self.max_width + + buf.append(BEFORE_BAR) + line = self.format_progress_line() + line_len = term_len(line) + if self.max_width is None or self.max_width < line_len: + self.max_width = line_len + + buf.append(line) + buf.append(" " * (clear_width - line_len)) + line = "".join(buf) + # Render the line only if it changed. 
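+        # Skipping unchanged frames avoids needless writes and flicker
+        # when update() is called faster than the bar visibly changes.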
+ + if line != self._last_line: + self._last_line = line + echo(line, file=self.file, color=self.color, nl=False) + self.file.flush() + + def make_step(self, n_steps: int) -> None: + self.pos += n_steps + if self.length is not None and self.pos >= self.length: + self.finished = True + + if (time.time() - self.last_eta) < 1.0: + return + + self.last_eta = time.time() + + # self.avg is a rolling list of length <= 7 of steps where steps are + # defined as time elapsed divided by the total progress through + # self.length. + if self.pos: + step = (time.time() - self.start) / self.pos + else: + step = time.time() - self.start + + self.avg = self.avg[-6:] + [step] + + self.eta_known = self.length is not None + + def update(self, n_steps: int, current_item: t.Optional[V] = None) -> None: + """Update the progress bar by advancing a specified number of + steps, and optionally set the ``current_item`` for this new + position. + + :param n_steps: Number of steps to advance. + :param current_item: Optional item to set as ``current_item`` + for the updated position. + + .. versionchanged:: 8.0 + Added the ``current_item`` optional parameter. + + .. versionchanged:: 8.0 + Only render when the number of steps meets the + ``update_min_steps`` threshold. + """ + if current_item is not None: + self.current_item = current_item + + self._completed_intervals += n_steps + + if self._completed_intervals >= self.update_min_steps: + self.make_step(self._completed_intervals) + self.render_progress() + self._completed_intervals = 0 + + def finish(self) -> None: + self.eta_known = False + self.current_item = None + self.finished = True + + def generator(self) -> t.Iterator[V]: + """Return a generator which yields the items added to the bar + during construction, and updates the progress bar *after* the + yielded block returns. + """ + # WARNING: the iterator interface for `ProgressBar` relies on + # this and only works because this is a simple generator which + # doesn't create or manage additional state. If this function + # changes, the impact should be evaluated both against + # `iter(bar)` and `next(bar)`. `next()` in particular may call + # `self.generator()` repeatedly, and this must remain safe in + # order for that interface to work. + if not self.entered: + raise RuntimeError("You need to use progress bars in a with block.") + + if self.is_hidden: + yield from self.iter + else: + for rv in self.iter: + self.current_item = rv + + # This allows show_item_func to be updated before the + # item is processed. Only trigger at the beginning of + # the update interval. 
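+            # (When a batch is mid-interval, rendering is deferred until
+            # update() accumulates update_min_steps steps.)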
+            if self._completed_intervals == 0:
+                self.render_progress()
+
+            yield rv
+            self.update(1)
+
+        self.finish()
+        self.render_progress()
+
+
+def pager(generator: t.Iterable[str], color: t.Optional[bool] = None) -> None:
+    """Decide what method to use for paging through text."""
+    stdout = _default_text_stdout()
+    if not isatty(sys.stdin) or not isatty(stdout):
+        return _nullpager(stdout, generator, color)
+    pager_cmd = (os.environ.get("PAGER", None) or "").strip()
+    if pager_cmd:
+        if WIN:
+            return _tempfilepager(generator, pager_cmd, color)
+        return _pipepager(generator, pager_cmd, color)
+    if os.environ.get("TERM") in ("dumb", "emacs"):
+        return _nullpager(stdout, generator, color)
+    if WIN or sys.platform.startswith("os2"):
+        return _tempfilepager(generator, "more <", color)
+    if hasattr(os, "system") and os.system("(less) 2>/dev/null") == 0:
+        return _pipepager(generator, "less", color)
+
+    import tempfile
+
+    fd, filename = tempfile.mkstemp()
+    os.close(fd)
+    try:
+        if hasattr(os, "system") and os.system(f'more "{filename}"') == 0:
+            return _pipepager(generator, "more", color)
+        return _nullpager(stdout, generator, color)
+    finally:
+        os.unlink(filename)
+
+
+def _pipepager(generator: t.Iterable[str], cmd: str, color: t.Optional[bool]) -> None:
+    """Page through text by feeding it to another program. Invoking a
+    pager through this might support colors.
+    """
+    import subprocess
+
+    env = dict(os.environ)
+
+    # If we're piping to less we might support colors under the
+    # condition that raw control characters are allowed through: either
+    # no flags are set at all (then we pass -R ourselves) or the -r/-R
+    # flag is already present in $LESS or on the command line.
+    cmd_detail = cmd.rsplit("/", 1)[-1].split()
+    if color is None and cmd_detail[0] == "less":
+        less_flags = f"{os.environ.get('LESS', '')}{' '.join(cmd_detail[1:])}"
+        if not less_flags:
+            env["LESS"] = "-R"
+            color = True
+        elif "r" in less_flags or "R" in less_flags:
+            color = True
+
+    c = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, env=env)
+    stdin = t.cast(t.BinaryIO, c.stdin)
+    encoding = get_best_encoding(stdin)
+    try:
+        for text in generator:
+            if not color:
+                text = strip_ansi(text)
+
+            stdin.write(text.encode(encoding, "replace"))
+    except (OSError, KeyboardInterrupt):
+        pass
+    else:
+        stdin.close()
+
+    # Less doesn't respect ^C, but catches it for its own UI purposes (aborting
+    # search or other commands inside less).
+    #
+    # That means when the user hits ^C, the parent process (click) terminates,
+    # but less is still alive, paging the output and messing up the terminal.
+    #
+    # If the user wants to make the pager exit on ^C, they should set
+    # `LESS='-K'`. It's not our decision to make.
+    while True:
+        try:
+            c.wait()
+        except KeyboardInterrupt:
+            pass
+        else:
+            break
+
+
+def _tempfilepager(
+    generator: t.Iterable[str], cmd: str, color: t.Optional[bool]
+) -> None:
+    """Page through text by invoking a program on a temporary file."""
+    import tempfile
+
+    fd, filename = tempfile.mkstemp()
+    # TODO: This never terminates if the passed generator never terminates.
+    text = "".join(generator)
+    if not color:
+        text = strip_ansi(text)
+    encoding = get_best_encoding(sys.stdout)
+    with open_stream(filename, "wb")[0] as f:
+        f.write(text.encode(encoding))
+    try:
+        os.system(f'{cmd} "{filename}"')
+    finally:
+        os.close(fd)
+        os.unlink(filename)
+
+
+def _nullpager(
+    stream: t.TextIO, generator: t.Iterable[str], color: t.Optional[bool]
+) -> None:
+    """Simply print unformatted text.
This is the ultimate fallback.""" + for text in generator: + if not color: + text = strip_ansi(text) + stream.write(text) + + +class Editor: + def __init__( + self, + editor: t.Optional[str] = None, + env: t.Optional[t.Mapping[str, str]] = None, + require_save: bool = True, + extension: str = ".txt", + ) -> None: + self.editor = editor + self.env = env + self.require_save = require_save + self.extension = extension + + def get_editor(self) -> str: + if self.editor is not None: + return self.editor + for key in "VISUAL", "EDITOR": + rv = os.environ.get(key) + if rv: + return rv + if WIN: + return "notepad" + for editor in "sensible-editor", "vim", "nano": + if os.system(f"which {editor} >/dev/null 2>&1") == 0: + return editor + return "vi" + + def edit_file(self, filename: str) -> None: + import subprocess + + editor = self.get_editor() + environ: t.Optional[t.Dict[str, str]] = None + + if self.env: + environ = os.environ.copy() + environ.update(self.env) + + try: + c = subprocess.Popen(f'{editor} "{filename}"', env=environ, shell=True) + exit_code = c.wait() + if exit_code != 0: + raise ClickException( + _("{editor}: Editing failed").format(editor=editor) + ) + except OSError as e: + raise ClickException( + _("{editor}: Editing failed: {e}").format(editor=editor, e=e) + ) from e + + def edit(self, text: t.Optional[t.AnyStr]) -> t.Optional[t.AnyStr]: + import tempfile + + if not text: + data = b"" + elif isinstance(text, (bytes, bytearray)): + data = text + else: + if text and not text.endswith("\n"): + text += "\n" + + if WIN: + data = text.replace("\n", "\r\n").encode("utf-8-sig") + else: + data = text.encode("utf-8") + + fd, name = tempfile.mkstemp(prefix="editor-", suffix=self.extension) + f: t.BinaryIO + + try: + with os.fdopen(fd, "wb") as f: + f.write(data) + + # If the filesystem resolution is 1 second, like Mac OS + # 10.12 Extended, or 2 seconds, like FAT32, and the editor + # closes very fast, require_save can fail. Set the modified + # time to be 2 seconds in the past to work around this. + os.utime(name, (os.path.getatime(name), os.path.getmtime(name) - 2)) + # Depending on the resolution, the exact value might not be + # recorded, so get the new recorded value. 
+ timestamp = os.path.getmtime(name) + + self.edit_file(name) + + if self.require_save and os.path.getmtime(name) == timestamp: + return None + + with open(name, "rb") as f: + rv = f.read() + + if isinstance(text, (bytes, bytearray)): + return rv + + return rv.decode("utf-8-sig").replace("\r\n", "\n") # type: ignore + finally: + os.unlink(name) + + +def open_url(url: str, wait: bool = False, locate: bool = False) -> int: + import subprocess + + def _unquote_file(url: str) -> str: + from urllib.parse import unquote + + if url.startswith("file://"): + url = unquote(url[7:]) + + return url + + if sys.platform == "darwin": + args = ["open"] + if wait: + args.append("-W") + if locate: + args.append("-R") + args.append(_unquote_file(url)) + null = open("/dev/null", "w") + try: + return subprocess.Popen(args, stderr=null).wait() + finally: + null.close() + elif WIN: + if locate: + url = _unquote_file(url.replace('"', "")) + args = f'explorer /select,"{url}"' + else: + url = url.replace('"', "") + wait_str = "/WAIT" if wait else "" + args = f'start {wait_str} "" "{url}"' + return os.system(args) + elif CYGWIN: + if locate: + url = os.path.dirname(_unquote_file(url).replace('"', "")) + args = f'cygstart "{url}"' + else: + url = url.replace('"', "") + wait_str = "-w" if wait else "" + args = f'cygstart {wait_str} "{url}"' + return os.system(args) + + try: + if locate: + url = os.path.dirname(_unquote_file(url)) or "." + else: + url = _unquote_file(url) + c = subprocess.Popen(["xdg-open", url]) + if wait: + return c.wait() + return 0 + except OSError: + if url.startswith(("http://", "https://")) and not locate and not wait: + import webbrowser + + webbrowser.open(url) + return 0 + return 1 + + +def _translate_ch_to_exc(ch: str) -> t.Optional[BaseException]: + if ch == "\x03": + raise KeyboardInterrupt() + + if ch == "\x04" and not WIN: # Unix-like, Ctrl+D + raise EOFError() + + if ch == "\x1a" and WIN: # Windows, Ctrl+Z + raise EOFError() + + return None + + +if WIN: + import msvcrt + + @contextlib.contextmanager + def raw_terminal() -> t.Iterator[int]: + yield -1 + + def getchar(echo: bool) -> str: + # The function `getch` will return a bytes object corresponding to + # the pressed character. Since Windows 10 build 1803, it will also + # return \x00 when called a second time after pressing a regular key. + # + # `getwch` does not share this probably-bugged behavior. Moreover, it + # returns a Unicode object by default, which is what we want. + # + # Either of these functions will return \x00 or \xe0 to indicate + # a special key, and you need to call the same function again to get + # the "rest" of the code. The fun part is that \u00e0 is + # "latin small letter a with grave", so if you type that on a French + # keyboard, you _also_ get a \xe0. + # E.g., consider the Up arrow. This returns \xe0 and then \x48. The + # resulting Unicode string reads as "a with grave" + "capital H". + # This is indistinguishable from when the user actually types + # "a with grave" and then "capital H". + # + # When \xe0 is returned, we assume it's part of a special-key sequence + # and call `getwch` again, but that means that when the user types + # the \u00e0 character, `getchar` doesn't return until a second + # character is typed. + # The alternative is returning immediately, but that would mess up + # cross-platform handling of arrow keys and others that start with + # \xe0. 
Another option is using `getch`, but then we can't reliably + # read non-ASCII characters, because return values of `getch` are + # limited to the current 8-bit codepage. + # + # Anyway, Click doesn't claim to do this Right(tm), and using `getwch` + # is doing the right thing in more situations than with `getch`. + func: t.Callable[[], str] + + if echo: + func = msvcrt.getwche # type: ignore + else: + func = msvcrt.getwch # type: ignore + + rv = func() + + if rv in ("\x00", "\xe0"): + # \x00 and \xe0 are control characters that indicate special key, + # see above. + rv += func() + + _translate_ch_to_exc(rv) + return rv + +else: + import tty + import termios + + @contextlib.contextmanager + def raw_terminal() -> t.Iterator[int]: + f: t.Optional[t.TextIO] + fd: int + + if not isatty(sys.stdin): + f = open("/dev/tty") + fd = f.fileno() + else: + fd = sys.stdin.fileno() + f = None + + try: + old_settings = termios.tcgetattr(fd) + + try: + tty.setraw(fd) + yield fd + finally: + termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) + sys.stdout.flush() + + if f is not None: + f.close() + except termios.error: + pass + + def getchar(echo: bool) -> str: + with raw_terminal() as fd: + ch = os.read(fd, 32).decode(get_best_encoding(sys.stdin), "replace") + + if echo and isatty(sys.stdout): + sys.stdout.write(ch) + + _translate_ch_to_exc(ch) + return ch diff --git a/env/lib/python3.8/site-packages/click/_textwrap.py b/env/lib/python3.8/site-packages/click/_textwrap.py new file mode 100644 index 00000000..b47dcbd4 --- /dev/null +++ b/env/lib/python3.8/site-packages/click/_textwrap.py @@ -0,0 +1,49 @@ +import textwrap +import typing as t +from contextlib import contextmanager + + +class TextWrapper(textwrap.TextWrapper): + def _handle_long_word( + self, + reversed_chunks: t.List[str], + cur_line: t.List[str], + cur_len: int, + width: int, + ) -> None: + space_left = max(width - cur_len, 1) + + if self.break_long_words: + last = reversed_chunks[-1] + cut = last[:space_left] + res = last[space_left:] + cur_line.append(cut) + reversed_chunks[-1] = res + elif not cur_line: + cur_line.append(reversed_chunks.pop()) + + @contextmanager + def extra_indent(self, indent: str) -> t.Iterator[None]: + old_initial_indent = self.initial_indent + old_subsequent_indent = self.subsequent_indent + self.initial_indent += indent + self.subsequent_indent += indent + + try: + yield + finally: + self.initial_indent = old_initial_indent + self.subsequent_indent = old_subsequent_indent + + def indent_only(self, text: str) -> str: + rv = [] + + for idx, line in enumerate(text.splitlines()): + indent = self.initial_indent + + if idx > 0: + indent = self.subsequent_indent + + rv.append(f"{indent}{line}") + + return "\n".join(rv) diff --git a/env/lib/python3.8/site-packages/click/_winconsole.py b/env/lib/python3.8/site-packages/click/_winconsole.py new file mode 100644 index 00000000..6b20df31 --- /dev/null +++ b/env/lib/python3.8/site-packages/click/_winconsole.py @@ -0,0 +1,279 @@ +# This module is based on the excellent work by Adam Bartoš who +# provided a lot of what went into the implementation here in +# the discussion to issue1602 in the Python bug tracker. +# +# There are some general differences in regards to how this works +# compared to the original patches as we do not need to patch +# the entire interpreter but just work in our little world of +# echo and prompt. 
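+#
+# In short: stdin/stdout/stderr get raw IO objects backed by the
+# ReadConsoleW/WriteConsoleW APIs, so the console is always addressed
+# in UTF-16-LE regardless of the configured codepage.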
+import io +import sys +import time +import typing as t +from ctypes import byref +from ctypes import c_char +from ctypes import c_char_p +from ctypes import c_int +from ctypes import c_ssize_t +from ctypes import c_ulong +from ctypes import c_void_p +from ctypes import POINTER +from ctypes import py_object +from ctypes import Structure +from ctypes.wintypes import DWORD +from ctypes.wintypes import HANDLE +from ctypes.wintypes import LPCWSTR +from ctypes.wintypes import LPWSTR + +from ._compat import _NonClosingTextIOWrapper + +assert sys.platform == "win32" +import msvcrt # noqa: E402 +from ctypes import windll # noqa: E402 +from ctypes import WINFUNCTYPE # noqa: E402 + +c_ssize_p = POINTER(c_ssize_t) + +kernel32 = windll.kernel32 +GetStdHandle = kernel32.GetStdHandle +ReadConsoleW = kernel32.ReadConsoleW +WriteConsoleW = kernel32.WriteConsoleW +GetConsoleMode = kernel32.GetConsoleMode +GetLastError = kernel32.GetLastError +GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32)) +CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))( + ("CommandLineToArgvW", windll.shell32) +) +LocalFree = WINFUNCTYPE(c_void_p, c_void_p)(("LocalFree", windll.kernel32)) + +STDIN_HANDLE = GetStdHandle(-10) +STDOUT_HANDLE = GetStdHandle(-11) +STDERR_HANDLE = GetStdHandle(-12) + +PyBUF_SIMPLE = 0 +PyBUF_WRITABLE = 1 + +ERROR_SUCCESS = 0 +ERROR_NOT_ENOUGH_MEMORY = 8 +ERROR_OPERATION_ABORTED = 995 + +STDIN_FILENO = 0 +STDOUT_FILENO = 1 +STDERR_FILENO = 2 + +EOF = b"\x1a" +MAX_BYTES_WRITTEN = 32767 + +try: + from ctypes import pythonapi +except ImportError: + # On PyPy we cannot get buffers so our ability to operate here is + # severely limited. + get_buffer = None +else: + + class Py_buffer(Structure): + _fields_ = [ + ("buf", c_void_p), + ("obj", py_object), + ("len", c_ssize_t), + ("itemsize", c_ssize_t), + ("readonly", c_int), + ("ndim", c_int), + ("format", c_char_p), + ("shape", c_ssize_p), + ("strides", c_ssize_p), + ("suboffsets", c_ssize_p), + ("internal", c_void_p), + ] + + PyObject_GetBuffer = pythonapi.PyObject_GetBuffer + PyBuffer_Release = pythonapi.PyBuffer_Release + + def get_buffer(obj, writable=False): + buf = Py_buffer() + flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE + PyObject_GetBuffer(py_object(obj), byref(buf), flags) + + try: + buffer_type = c_char * buf.len + return buffer_type.from_address(buf.buf) + finally: + PyBuffer_Release(byref(buf)) + + +class _WindowsConsoleRawIOBase(io.RawIOBase): + def __init__(self, handle): + self.handle = handle + + def isatty(self): + super().isatty() + return True + + +class _WindowsConsoleReader(_WindowsConsoleRawIOBase): + def readable(self): + return True + + def readinto(self, b): + bytes_to_be_read = len(b) + if not bytes_to_be_read: + return 0 + elif bytes_to_be_read % 2: + raise ValueError( + "cannot read odd number of bytes from UTF-16-LE encoded console" + ) + + buffer = get_buffer(b, writable=True) + code_units_to_be_read = bytes_to_be_read // 2 + code_units_read = c_ulong() + + rv = ReadConsoleW( + HANDLE(self.handle), + buffer, + code_units_to_be_read, + byref(code_units_read), + None, + ) + if GetLastError() == ERROR_OPERATION_ABORTED: + # wait for KeyboardInterrupt + time.sleep(0.1) + if not rv: + raise OSError(f"Windows error: {GetLastError()}") + + if buffer[0] == EOF: + return 0 + return 2 * code_units_read.value + + +class _WindowsConsoleWriter(_WindowsConsoleRawIOBase): + def writable(self): + return True + + @staticmethod + def _get_error_message(errno): + if errno == ERROR_SUCCESS: + 
return "ERROR_SUCCESS"
+        elif errno == ERROR_NOT_ENOUGH_MEMORY:
+            return "ERROR_NOT_ENOUGH_MEMORY"
+        return f"Windows error {errno}"
+
+    def write(self, b):
+        bytes_to_be_written = len(b)
+        buf = get_buffer(b)
+        code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2
+        code_units_written = c_ulong()
+
+        WriteConsoleW(
+            HANDLE(self.handle),
+            buf,
+            code_units_to_be_written,
+            byref(code_units_written),
+            None,
+        )
+        bytes_written = 2 * code_units_written.value
+
+        if bytes_written == 0 and bytes_to_be_written > 0:
+            raise OSError(self._get_error_message(GetLastError()))
+        return bytes_written
+
+
+class ConsoleStream:
+    def __init__(self, text_stream: t.TextIO, byte_stream: t.BinaryIO) -> None:
+        self._text_stream = text_stream
+        self.buffer = byte_stream
+
+    @property
+    def name(self) -> str:
+        return self.buffer.name
+
+    def write(self, x: t.AnyStr) -> int:
+        if isinstance(x, str):
+            return self._text_stream.write(x)
+        try:
+            self.flush()
+        except Exception:
+            pass
+        return self.buffer.write(x)
+
+    def writelines(self, lines: t.Iterable[t.AnyStr]) -> None:
+        for line in lines:
+            self.write(line)
+
+    def __getattr__(self, name: str) -> t.Any:
+        return getattr(self._text_stream, name)
+
+    def isatty(self) -> bool:
+        return self.buffer.isatty()
+
+    def __repr__(self):
+        return f"<ConsoleStream name={self.name!r} encoding={self.encoding!r}>"
+
+
+def _get_text_stdin(buffer_stream: t.BinaryIO) -> t.TextIO:
+    text_stream = _NonClosingTextIOWrapper(
+        io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)),
+        "utf-16-le",
+        "strict",
+        line_buffering=True,
+    )
+    return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream))
+
+
+def _get_text_stdout(buffer_stream: t.BinaryIO) -> t.TextIO:
+    text_stream = _NonClosingTextIOWrapper(
+        io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)),
+        "utf-16-le",
+        "strict",
+        line_buffering=True,
+    )
+    return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream))
+
+
+def _get_text_stderr(buffer_stream: t.BinaryIO) -> t.TextIO:
+    text_stream = _NonClosingTextIOWrapper(
+        io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)),
+        "utf-16-le",
+        "strict",
+        line_buffering=True,
+    )
+    return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream))
+
+
+_stream_factories: t.Mapping[int, t.Callable[[t.BinaryIO], t.TextIO]] = {
+    0: _get_text_stdin,
+    1: _get_text_stdout,
+    2: _get_text_stderr,
+}
+
+
+def _is_console(f: t.TextIO) -> bool:
+    if not hasattr(f, "fileno"):
+        return False
+
+    try:
+        fileno = f.fileno()
+    except (OSError, io.UnsupportedOperation):
+        return False
+
+    handle = msvcrt.get_osfhandle(fileno)
+    return bool(GetConsoleMode(handle, byref(DWORD())))
+
+
+def _get_windows_console_stream(
+    f: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str]
+) -> t.Optional[t.TextIO]:
+    if (
+        get_buffer is not None
+        and encoding in {"utf-16-le", None}
+        and errors in {"strict", None}
+        and _is_console(f)
+    ):
+        func = _stream_factories.get(f.fileno())
+        if func is not None:
+            b = getattr(f, "buffer", None)
+
+            if b is None:
+                return None
+
+            return func(b)
diff --git a/env/lib/python3.8/site-packages/click/core.py b/env/lib/python3.8/site-packages/click/core.py
new file mode 100644
index 00000000..5abfb0f3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/click/core.py
@@ -0,0 +1,2998 @@
+import enum
+import errno
+import inspect
+import os
+import sys
+import typing as t
+from collections import abc
+from contextlib import contextmanager
+from contextlib import ExitStack
+from functools import partial
+from functools import update_wrapper
+from gettext import gettext as
_ +from gettext import ngettext +from itertools import repeat + +from . import types +from .exceptions import Abort +from .exceptions import BadParameter +from .exceptions import ClickException +from .exceptions import Exit +from .exceptions import MissingParameter +from .exceptions import UsageError +from .formatting import HelpFormatter +from .formatting import join_options +from .globals import pop_context +from .globals import push_context +from .parser import _flag_needs_value +from .parser import OptionParser +from .parser import split_opt +from .termui import confirm +from .termui import prompt +from .termui import style +from .utils import _detect_program_name +from .utils import _expand_args +from .utils import echo +from .utils import make_default_short_help +from .utils import make_str +from .utils import PacifyFlushWrapper + +if t.TYPE_CHECKING: + import typing_extensions as te + from .shell_completion import CompletionItem + +F = t.TypeVar("F", bound=t.Callable[..., t.Any]) +V = t.TypeVar("V") + + +def _complete_visible_commands( + ctx: "Context", incomplete: str +) -> t.Iterator[t.Tuple[str, "Command"]]: + """List all the subcommands of a group that start with the + incomplete value and aren't hidden. + + :param ctx: Invocation context for the group. + :param incomplete: Value being completed. May be empty. + """ + multi = t.cast(MultiCommand, ctx.command) + + for name in multi.list_commands(ctx): + if name.startswith(incomplete): + command = multi.get_command(ctx, name) + + if command is not None and not command.hidden: + yield name, command + + +def _check_multicommand( + base_command: "MultiCommand", cmd_name: str, cmd: "Command", register: bool = False +) -> None: + if not base_command.chain or not isinstance(cmd, MultiCommand): + return + if register: + hint = ( + "It is not possible to add multi commands as children to" + " another multi command that is in chain mode." + ) + else: + hint = ( + "Found a multi command as subcommand to a multi command" + " that is in chain mode. This is not supported." + ) + raise RuntimeError( + f"{hint}. Command {base_command.name!r} is set to chain and" + f" {cmd_name!r} was added as a subcommand but it in itself is a" + f" multi command. ({cmd_name!r} is a {type(cmd).__name__}" + f" within a chained {type(base_command).__name__} named" + f" {base_command.name!r})." + ) + + +def batch(iterable: t.Iterable[V], batch_size: int) -> t.List[t.Tuple[V, ...]]: + return list(zip(*repeat(iter(iterable), batch_size))) + + +@contextmanager +def augment_usage_errors( + ctx: "Context", param: t.Optional["Parameter"] = None +) -> t.Iterator[None]: + """Context manager that attaches extra information to exceptions.""" + try: + yield + except BadParameter as e: + if e.ctx is None: + e.ctx = ctx + if param is not None and e.param is None: + e.param = param + raise + except UsageError as e: + if e.ctx is None: + e.ctx = ctx + raise + + +def iter_params_for_processing( + invocation_order: t.Sequence["Parameter"], + declaration_order: t.Sequence["Parameter"], +) -> t.List["Parameter"]: + """Given a sequence of parameters in the order as should be considered + for processing and an iterable of parameters that exist, this returns + a list in the correct order as they should be processed. 
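+
+    Eager parameters sort to the front; within the eager and non-eager
+    groups, parameters keep their invocation order, and parameters that
+    were never invoked sort last (in declaration order).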
+    """
+
+    def sort_key(item: "Parameter") -> t.Tuple[bool, float]:
+        try:
+            idx: float = invocation_order.index(item)
+        except ValueError:
+            idx = float("inf")
+
+        return not item.is_eager, idx
+
+    return sorted(declaration_order, key=sort_key)
+
+
+class ParameterSource(enum.Enum):
+    """This is an :class:`~enum.Enum` that indicates the source of a
+    parameter's value.
+
+    Use :meth:`click.Context.get_parameter_source` to get the
+    source for a parameter by name.
+
+    .. versionchanged:: 8.0
+        Use :class:`~enum.Enum` and drop the ``validate`` method.
+
+    .. versionchanged:: 8.0
+        Added the ``PROMPT`` value.
+    """
+
+    COMMANDLINE = enum.auto()
+    """The value was provided by the command line args."""
+    ENVIRONMENT = enum.auto()
+    """The value was provided with an environment variable."""
+    DEFAULT = enum.auto()
+    """Used the default specified by the parameter."""
+    DEFAULT_MAP = enum.auto()
+    """Used a default provided by :attr:`Context.default_map`."""
+    PROMPT = enum.auto()
+    """Used a prompt to confirm a default or provide a value."""
+
+
+class Context:
+    """The context is a special internal object that holds state relevant
+    for the script execution at every single level. It's normally invisible
+    to commands unless they opt-in to getting access to it.
+
+    The context is useful as it can pass internal objects around and can
+    control special execution features such as reading data from
+    environment variables.
+
+    A context can be used as a context manager in which case it will call
+    :meth:`close` on teardown.
+
+    :param command: the command class for this context.
+    :param parent: the parent context.
+    :param info_name: the info name for this invocation. Generally this
+                      is the most descriptive name for the script or
+                      command. For the toplevel script it is usually
+                      the name of the script, for commands below it it's
+                      the name of the command.
+    :param obj: an arbitrary object of user data.
+    :param auto_envvar_prefix: the prefix to use for automatic environment
+                               variables. If this is `None` then reading
+                               from environment variables is disabled. This
+                               does not affect manually set environment
+                               variables which are always read.
+    :param default_map: a dictionary (like object) with default values
+                        for parameters.
+    :param terminal_width: the width of the terminal. The default is to
+                           inherit from the parent context. If no context
+                           defines the terminal width then auto
+                           detection will be applied.
+    :param max_content_width: the maximum width for content rendered by
+                              Click (this currently only affects help
+                              pages). This defaults to 80 characters if
+                              not overridden. In other words: even if the
+                              terminal is larger than that, Click will not
+                              format things wider than 80 characters by
+                              default. In addition to that, formatters might
+                              add some safety margin on the right.
+    :param resilient_parsing: if this flag is enabled then Click will
+                              parse without any interactivity or callback
+                              invocation. Default values will also be
+                              ignored. This is useful for implementing
+                              things such as completion support.
+    :param allow_extra_args: if this is set to `True` then extra arguments
+                             at the end will not raise an error and will be
+                             kept on the context. The default is to inherit
+                             from the command.
+    :param allow_interspersed_args: if this is set to `False` then options
+                                    and arguments cannot be mixed. The
+                                    default is to inherit from the command.
+    :param ignore_unknown_options: instructs click to ignore options it does
+                                   not know and keep them for later
+                                   processing.
+    :param help_option_names: optionally a list of strings that define how
+                              the default help parameter is named. The
+                              default is ``['--help']``.
+    :param token_normalize_func: an optional function that is used to
+                                 normalize tokens (options, choices,
+                                 etc.). This for instance can be used to
+                                 implement case insensitive behavior.
+    :param color: controls if the terminal supports ANSI colors or not. The
+                  default is autodetection. This is only needed if ANSI
+                  codes are used in texts that Click prints which is by
+                  default not the case. This for instance would affect
+                  help output.
+    :param show_default: Show the default value for commands. If this
+        value is not set, it defaults to the value from the parent
+        context. ``Command.show_default`` overrides this default for the
+        specific command.
+
+    .. versionchanged:: 8.1
+        The ``show_default`` parameter is overridden by
+        ``Command.show_default``, instead of the other way around.
+
+    .. versionchanged:: 8.0
+        The ``show_default`` parameter defaults to the value from the
+        parent context.
+
+    .. versionchanged:: 7.1
+        Added the ``show_default`` parameter.
+
+    .. versionchanged:: 4.0
+        Added the ``color``, ``ignore_unknown_options``, and
+        ``max_content_width`` parameters.
+
+    .. versionchanged:: 3.0
+        Added the ``allow_extra_args`` and ``allow_interspersed_args``
+        parameters.
+
+    .. versionchanged:: 2.0
+        Added the ``resilient_parsing``, ``help_option_names``, and
+        ``token_normalize_func`` parameters.
+    """
+
+    #: The formatter class to create with :meth:`make_formatter`.
+    #:
+    #: .. versionadded:: 8.0
+    formatter_class: t.Type["HelpFormatter"] = HelpFormatter
+
+    def __init__(
+        self,
+        command: "Command",
+        parent: t.Optional["Context"] = None,
+        info_name: t.Optional[str] = None,
+        obj: t.Optional[t.Any] = None,
+        auto_envvar_prefix: t.Optional[str] = None,
+        default_map: t.Optional[t.Dict[str, t.Any]] = None,
+        terminal_width: t.Optional[int] = None,
+        max_content_width: t.Optional[int] = None,
+        resilient_parsing: bool = False,
+        allow_extra_args: t.Optional[bool] = None,
+        allow_interspersed_args: t.Optional[bool] = None,
+        ignore_unknown_options: t.Optional[bool] = None,
+        help_option_names: t.Optional[t.List[str]] = None,
+        token_normalize_func: t.Optional[t.Callable[[str], str]] = None,
+        color: t.Optional[bool] = None,
+        show_default: t.Optional[bool] = None,
+    ) -> None:
+        #: the parent context or `None` if none exists.
+        self.parent = parent
+        #: the :class:`Command` for this context.
+        self.command = command
+        #: the descriptive information name
+        self.info_name = info_name
+        #: Map of parameter names to their parsed values. Parameters
+        #: with ``expose_value=False`` are not stored.
+        self.params: t.Dict[str, t.Any] = {}
+        #: the leftover arguments.
+        self.args: t.List[str] = []
+        #: protected arguments. These are arguments that are prepended
+        #: to `args` when certain parsing scenarios are encountered but
+        #: must never be propagated to other arguments. This is used
+        #: to implement nested parsing.
+        self.protected_args: t.List[str] = []
+        #: the collected prefixes of the command's options.
+        self._opt_prefixes: t.Set[str] = set(parent._opt_prefixes) if parent else set()
+
+        if obj is None and parent is not None:
+            obj = parent.obj
+
+        #: the user object stored.
+        self.obj: t.Any = obj
+        self._meta: t.Dict[str, t.Any] = getattr(parent, "meta", {})
+
+        #: A dictionary (-like object) with defaults for parameters.
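+        # Default maps may nest per-subcommand maps (an illustrative
+        # shape: {"serve": {"port": 8080}}), so pull out the sub-map
+        # registered under this command's info_name, if any.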
+        if (
+            default_map is None
+            and info_name is not None
+            and parent is not None
+            and parent.default_map is not None
+        ):
+            default_map = parent.default_map.get(info_name)
+
+        self.default_map: t.Optional[t.Dict[str, t.Any]] = default_map
+
+        #: This flag indicates if a subcommand is going to be executed. A
+        #: group callback can use this information to figure out if it's
+        #: being executed directly or because the execution flow passes
+        #: onwards to a subcommand. By default it's None, but it can be
+        #: the name of the subcommand to execute.
+        #:
+        #: If chaining is enabled this will be set to ``'*'`` in case
+        #: any commands are executed. It is however not possible to
+        #: figure out which ones. If you require this knowledge you
+        #: should use a :func:`result_callback`.
+        self.invoked_subcommand: t.Optional[str] = None
+
+        if terminal_width is None and parent is not None:
+            terminal_width = parent.terminal_width
+
+        #: The width of the terminal (None is autodetection).
+        self.terminal_width: t.Optional[int] = terminal_width
+
+        if max_content_width is None and parent is not None:
+            max_content_width = parent.max_content_width
+
+        #: The maximum width of formatted content (None implies a sensible
+        #: default which is 80 for most things).
+        self.max_content_width: t.Optional[int] = max_content_width
+
+        if allow_extra_args is None:
+            allow_extra_args = command.allow_extra_args
+
+        #: Indicates if the context allows extra args or if it should
+        #: fail on parsing.
+        #:
+        #: .. versionadded:: 3.0
+        self.allow_extra_args = allow_extra_args
+
+        if allow_interspersed_args is None:
+            allow_interspersed_args = command.allow_interspersed_args
+
+        #: Indicates if the context allows mixing of arguments and
+        #: options or not.
+        #:
+        #: .. versionadded:: 3.0
+        self.allow_interspersed_args: bool = allow_interspersed_args
+
+        if ignore_unknown_options is None:
+            ignore_unknown_options = command.ignore_unknown_options
+
+        #: Instructs click to ignore options that a command does not
+        #: understand and store them on the context for later
+        #: processing. This is primarily useful for situations where you
+        #: want to call into external programs. Generally this pattern is
+        #: strongly discouraged because it's not possible to losslessly
+        #: forward all arguments.
+        #:
+        #: .. versionadded:: 4.0
+        self.ignore_unknown_options: bool = ignore_unknown_options
+
+        if help_option_names is None:
+            if parent is not None:
+                help_option_names = parent.help_option_names
+            else:
+                help_option_names = ["--help"]
+
+        #: The names for the help options.
+        self.help_option_names: t.List[str] = help_option_names
+
+        if token_normalize_func is None and parent is not None:
+            token_normalize_func = parent.token_normalize_func
+
+        #: An optional normalization function for tokens. This is
+        #: options, choices, commands etc.
+        self.token_normalize_func: t.Optional[
+            t.Callable[[str], str]
+        ] = token_normalize_func
+
+        #: Indicates if resilient parsing is enabled. In that case Click
+        #: will do its best to not cause any failures and default values
+        #: will be ignored. Useful for completion.
+        self.resilient_parsing: bool = resilient_parsing
+
+        # If there is no envvar prefix yet, but the parent has one and
+        # the command on this level has a name, we can expand the envvar
+        # prefix automatically.
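+        # For example, a parent prefix "MYTOOL" and a subcommand named
+        # "serve" produce the automatic prefix "MYTOOL_SERVE".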
+ if auto_envvar_prefix is None: + if ( + parent is not None + and parent.auto_envvar_prefix is not None + and self.info_name is not None + ): + auto_envvar_prefix = ( + f"{parent.auto_envvar_prefix}_{self.info_name.upper()}" + ) + else: + auto_envvar_prefix = auto_envvar_prefix.upper() + + if auto_envvar_prefix is not None: + auto_envvar_prefix = auto_envvar_prefix.replace("-", "_") + + self.auto_envvar_prefix: t.Optional[str] = auto_envvar_prefix + + if color is None and parent is not None: + color = parent.color + + #: Controls if styling output is wanted or not. + self.color: t.Optional[bool] = color + + if show_default is None and parent is not None: + show_default = parent.show_default + + #: Show option default values when formatting help text. + self.show_default: t.Optional[bool] = show_default + + self._close_callbacks: t.List[t.Callable[[], t.Any]] = [] + self._depth = 0 + self._parameter_source: t.Dict[str, ParameterSource] = {} + self._exit_stack = ExitStack() + + def to_info_dict(self) -> t.Dict[str, t.Any]: + """Gather information that could be useful for a tool generating + user-facing documentation. This traverses the entire CLI + structure. + + .. code-block:: python + + with Context(cli) as ctx: + info = ctx.to_info_dict() + + .. versionadded:: 8.0 + """ + return { + "command": self.command.to_info_dict(self), + "info_name": self.info_name, + "allow_extra_args": self.allow_extra_args, + "allow_interspersed_args": self.allow_interspersed_args, + "ignore_unknown_options": self.ignore_unknown_options, + "auto_envvar_prefix": self.auto_envvar_prefix, + } + + def __enter__(self) -> "Context": + self._depth += 1 + push_context(self) + return self + + def __exit__(self, exc_type, exc_value, tb): # type: ignore + self._depth -= 1 + if self._depth == 0: + self.close() + pop_context() + + @contextmanager + def scope(self, cleanup: bool = True) -> t.Iterator["Context"]: + """This helper method can be used with the context object to promote + it to the current thread local (see :func:`get_current_context`). + The default behavior of this is to invoke the cleanup functions which + can be disabled by setting `cleanup` to `False`. The cleanup + functions are typically used for things such as closing file handles. + + If the cleanup is intended the context object can also be directly + used as a context manager. + + Example usage:: + + with ctx.scope(): + assert get_current_context() is ctx + + This is equivalent:: + + with ctx: + assert get_current_context() is ctx + + .. versionadded:: 5.0 + + :param cleanup: controls if the cleanup functions should be run or + not. The default is to run these functions. In + some situations the context only wants to be + temporarily pushed in which case this can be disabled. + Nested pushes automatically defer the cleanup. + """ + if not cleanup: + self._depth += 1 + try: + with self as rv: + yield rv + finally: + if not cleanup: + self._depth -= 1 + + @property + def meta(self) -> t.Dict[str, t.Any]: + """This is a dictionary which is shared with all the contexts + that are nested. It exists so that click utilities can store some + state here if they need to. It is however the responsibility of + that code to manage this dictionary well. + + The keys are supposed to be unique dotted strings. For instance + module paths are a good choice for it. What is stored in there is + irrelevant for the operation of click. However what is important is + that code that places data here adheres to the general semantics of + the system. 
+ + Example usage:: + + LANG_KEY = f'{__name__}.lang' + + def set_language(value): + ctx = get_current_context() + ctx.meta[LANG_KEY] = value + + def get_language(): + return get_current_context().meta.get(LANG_KEY, 'en_US') + + .. versionadded:: 5.0 + """ + return self._meta + + def make_formatter(self) -> HelpFormatter: + """Creates the :class:`~click.HelpFormatter` for the help and + usage output. + + To quickly customize the formatter class used without overriding + this method, set the :attr:`formatter_class` attribute. + + .. versionchanged:: 8.0 + Added the :attr:`formatter_class` attribute. + """ + return self.formatter_class( + width=self.terminal_width, max_width=self.max_content_width + ) + + def with_resource(self, context_manager: t.ContextManager[V]) -> V: + """Register a resource as if it were used in a ``with`` + statement. The resource will be cleaned up when the context is + popped. + + Uses :meth:`contextlib.ExitStack.enter_context`. It calls the + resource's ``__enter__()`` method and returns the result. When + the context is popped, it closes the stack, which calls the + resource's ``__exit__()`` method. + + To register a cleanup function for something that isn't a + context manager, use :meth:`call_on_close`. Or use something + from :mod:`contextlib` to turn it into a context manager first. + + .. code-block:: python + + @click.group() + @click.option("--name") + @click.pass_context + def cli(ctx): + ctx.obj = ctx.with_resource(connect_db(name)) + + :param context_manager: The context manager to enter. + :return: Whatever ``context_manager.__enter__()`` returns. + + .. versionadded:: 8.0 + """ + return self._exit_stack.enter_context(context_manager) + + def call_on_close(self, f: t.Callable[..., t.Any]) -> t.Callable[..., t.Any]: + """Register a function to be called when the context tears down. + + This can be used to close resources opened during the script + execution. Resources that support Python's context manager + protocol which would be used in a ``with`` statement should be + registered with :meth:`with_resource` instead. + + :param f: The function to execute on teardown. + """ + return self._exit_stack.callback(f) + + def close(self) -> None: + """Invoke all close callbacks registered with + :meth:`call_on_close`, and exit all context managers entered + with :meth:`with_resource`. + """ + self._exit_stack.close() + # In case the context is reused, create a new exit stack. + self._exit_stack = ExitStack() + + @property + def command_path(self) -> str: + """The computed command path. This is used for the ``usage`` + information on the help page. It's automatically created by + combining the info names of the chain of contexts to the root. 
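+
+        For example, with a group named ``cli`` and a subcommand ``sync``,
+        the subcommand's context reports ``cli sync`` (names illustrative).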
+ """ + rv = "" + if self.info_name is not None: + rv = self.info_name + if self.parent is not None: + parent_command_path = [self.parent.command_path] + + if isinstance(self.parent.command, Command): + for param in self.parent.command.get_params(self): + parent_command_path.extend(param.get_usage_pieces(self)) + + rv = f"{' '.join(parent_command_path)} {rv}" + return rv.lstrip() + + def find_root(self) -> "Context": + """Finds the outermost context.""" + node = self + while node.parent is not None: + node = node.parent + return node + + def find_object(self, object_type: t.Type[V]) -> t.Optional[V]: + """Finds the closest object of a given type.""" + node: t.Optional["Context"] = self + + while node is not None: + if isinstance(node.obj, object_type): + return node.obj + + node = node.parent + + return None + + def ensure_object(self, object_type: t.Type[V]) -> V: + """Like :meth:`find_object` but sets the innermost object to a + new instance of `object_type` if it does not exist. + """ + rv = self.find_object(object_type) + if rv is None: + self.obj = rv = object_type() + return rv + + @t.overload + def lookup_default( + self, name: str, call: "te.Literal[True]" = True + ) -> t.Optional[t.Any]: + ... + + @t.overload + def lookup_default( + self, name: str, call: "te.Literal[False]" = ... + ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: + ... + + def lookup_default(self, name: str, call: bool = True) -> t.Optional[t.Any]: + """Get the default for a parameter from :attr:`default_map`. + + :param name: Name of the parameter. + :param call: If the default is a callable, call it. Disable to + return the callable instead. + + .. versionchanged:: 8.0 + Added the ``call`` parameter. + """ + if self.default_map is not None: + value = self.default_map.get(name) + + if call and callable(value): + return value() + + return value + + return None + + def fail(self, message: str) -> "te.NoReturn": + """Aborts the execution of the program with a specific error + message. + + :param message: the error message to fail with. + """ + raise UsageError(message, self) + + def abort(self) -> "te.NoReturn": + """Aborts the script.""" + raise Abort() + + def exit(self, code: int = 0) -> "te.NoReturn": + """Exits the application with a given exit code.""" + raise Exit(code) + + def get_usage(self) -> str: + """Helper method to get formatted usage string for the current + context and command. + """ + return self.command.get_usage(self) + + def get_help(self) -> str: + """Helper method to get formatted help page for the current + context and command. + """ + return self.command.get_help(self) + + def _make_sub_context(self, command: "Command") -> "Context": + """Create a new context of the same type as this context, but + for a new command. + + :meta private: + """ + return type(self)(command, info_name=command.name, parent=self) + + def invoke( + __self, # noqa: B902 + __callback: t.Union["Command", t.Callable[..., t.Any]], + *args: t.Any, + **kwargs: t.Any, + ) -> t.Any: + """Invokes a command callback in exactly the way it expects. There + are two ways to invoke this method: + + 1. the first argument can be a callback and all other arguments and + keyword arguments are forwarded directly to the function. + 2. the first argument is a click command object. In that case all + arguments are forwarded as well but proper click parameters + (options and click arguments) must be keyword arguments and Click + will fill in defaults. 
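+
+        A hypothetical sketch of the second form (``other_command`` is
+        assumed to be a click command with a ``count`` option defined
+        elsewhere)::
+
+            @click.command()
+            @click.pass_context
+            def repeat(ctx):
+                ctx.invoke(other_command, count=3)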
+ + Note that before Click 3.2 keyword arguments were not properly filled + in against the intention of this code and no context was created. For + more information about this change and why it was done in a bugfix + release see :ref:`upgrade-to-3.2`. + + .. versionchanged:: 8.0 + All ``kwargs`` are tracked in :attr:`params` so they will be + passed if :meth:`forward` is called at multiple levels. + """ + if isinstance(__callback, Command): + other_cmd = __callback + + if other_cmd.callback is None: + raise TypeError( + "The given command does not have a callback that can be invoked." + ) + else: + __callback = other_cmd.callback + + ctx = __self._make_sub_context(other_cmd) + + for param in other_cmd.params: + if param.name not in kwargs and param.expose_value: + kwargs[param.name] = param.type_cast_value( # type: ignore + ctx, param.get_default(ctx) + ) + + # Track all kwargs as params, so that forward() will pass + # them on in subsequent calls. + ctx.params.update(kwargs) + else: + ctx = __self + + with augment_usage_errors(__self): + with ctx: + return __callback(*args, **kwargs) + + def forward( + __self, __cmd: "Command", *args: t.Any, **kwargs: t.Any # noqa: B902 + ) -> t.Any: + """Similar to :meth:`invoke` but fills in default keyword + arguments from the current context if the other command expects + it. This cannot invoke callbacks directly, only other commands. + + .. versionchanged:: 8.0 + All ``kwargs`` are tracked in :attr:`params` so they will be + passed if ``forward`` is called at multiple levels. + """ + # Can only forward to other commands, not direct callbacks. + if not isinstance(__cmd, Command): + raise TypeError("Callback is not a command.") + + for param in __self.params: + if param not in kwargs: + kwargs[param] = __self.params[param] + + return __self.invoke(__cmd, *args, **kwargs) + + def set_parameter_source(self, name: str, source: ParameterSource) -> None: + """Set the source of a parameter. This indicates the location + from which the value of the parameter was obtained. + + :param name: The name of the parameter. + :param source: A member of :class:`~click.core.ParameterSource`. + """ + self._parameter_source[name] = source + + def get_parameter_source(self, name: str) -> t.Optional[ParameterSource]: + """Get the source of a parameter. This indicates the location + from which the value of the parameter was obtained. + + This can be useful for determining when a user specified a value + on the command line that is the same as the default value. It + will be :attr:`~click.core.ParameterSource.DEFAULT` only if the + value was actually taken from the default. + + :param name: The name of the parameter. + :rtype: ParameterSource + + .. versionchanged:: 8.0 + Returns ``None`` if the parameter was not provided from any + source. + """ + return self._parameter_source.get(name) + + +class BaseCommand: + """The base command implements the minimal API contract of commands. + Most code will never use this as it does not implement a lot of useful + functionality but it can act as the direct subclass of alternative + parsing methods that do not depend on the Click parser. + + For instance, this can be used to bridge Click and other systems like + argparse or docopt. + + Because base commands do not implement a lot of the API that other + parts of Click take for granted, they are not supported for all + operations. For instance, they cannot be used with the decorators + usually and they have no built-in callback system. + + .. 
versionchanged:: 2.0 + Added the `context_settings` parameter. + + :param name: the name of the command to use unless a group overrides it. + :param context_settings: an optional dictionary with defaults that are + passed to the context object. + """ + + #: The context class to create with :meth:`make_context`. + #: + #: .. versionadded:: 8.0 + context_class: t.Type[Context] = Context + #: the default for the :attr:`Context.allow_extra_args` flag. + allow_extra_args = False + #: the default for the :attr:`Context.allow_interspersed_args` flag. + allow_interspersed_args = True + #: the default for the :attr:`Context.ignore_unknown_options` flag. + ignore_unknown_options = False + + def __init__( + self, + name: t.Optional[str], + context_settings: t.Optional[t.Dict[str, t.Any]] = None, + ) -> None: + #: the name the command thinks it has. Upon registering a command + #: on a :class:`Group` the group will default the command name + #: with this information. You should instead use the + #: :class:`Context`\'s :attr:`~Context.info_name` attribute. + self.name = name + + if context_settings is None: + context_settings = {} + + #: an optional dictionary with defaults passed to the context. + self.context_settings: t.Dict[str, t.Any] = context_settings + + def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]: + """Gather information that could be useful for a tool generating + user-facing documentation. This traverses the entire structure + below this command. + + Use :meth:`click.Context.to_info_dict` to traverse the entire + CLI structure. + + :param ctx: A :class:`Context` representing this command. + + .. versionadded:: 8.0 + """ + return {"name": self.name} + + def __repr__(self) -> str: + return f"<{self.__class__.__name__} {self.name}>" + + def get_usage(self, ctx: Context) -> str: + raise NotImplementedError("Base commands cannot get usage") + + def get_help(self, ctx: Context) -> str: + raise NotImplementedError("Base commands cannot get help") + + def make_context( + self, + info_name: t.Optional[str], + args: t.List[str], + parent: t.Optional[Context] = None, + **extra: t.Any, + ) -> Context: + """This function when given an info name and arguments will kick + off the parsing and create a new :class:`Context`. It does not + invoke the actual command callback though. + + To quickly customize the context class used without overriding + this method, set the :attr:`context_class` attribute. + + :param info_name: the info name for this invocation. Generally this + is the most descriptive name for the script or + command. For the toplevel script it's usually + the name of the script, for commands below it it's + the name of the command. + :param args: the arguments to parse as list of strings. + :param parent: the parent context if available. + :param extra: extra keyword arguments forwarded to the context + constructor. + + .. versionchanged:: 8.0 + Added the :attr:`context_class` attribute. + """ + for key, value in self.context_settings.items(): + if key not in extra: + extra[key] = value + + ctx = self.context_class( + self, info_name=info_name, parent=parent, **extra # type: ignore + ) + + with ctx.scope(cleanup=False): + self.parse_args(ctx, args) + return ctx + + def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]: + """Given a context and a list of arguments this creates the parser + and parses the arguments, then modifies the context as necessary. + This is automatically invoked by :meth:`make_context`. 
+        """
+        raise NotImplementedError("Base commands do not know how to parse arguments.")
+
+    def invoke(self, ctx: Context) -> t.Any:
+        """Given a context, this invokes the command. The default
+        implementation is raising a not implemented error.
+        """
+        raise NotImplementedError("Base commands are not invokable by default")
+
+    def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]:
+        """Return a list of completions for the incomplete value. Looks
+        at the names of chained multi-commands.
+
+        Any command could be part of a chained multi-command, so sibling
+        commands are valid at any point during command completion. Other
+        command classes will return more completions.
+
+        :param ctx: Invocation context for this command.
+        :param incomplete: Value being completed. May be empty.
+
+        .. versionadded:: 8.0
+        """
+        from click.shell_completion import CompletionItem
+
+        results: t.List["CompletionItem"] = []
+
+        while ctx.parent is not None:
+            ctx = ctx.parent
+
+            if isinstance(ctx.command, MultiCommand) and ctx.command.chain:
+                results.extend(
+                    CompletionItem(name, help=command.get_short_help_str())
+                    for name, command in _complete_visible_commands(ctx, incomplete)
+                    if name not in ctx.protected_args
+                )
+
+        return results
+
+    @t.overload
+    def main(
+        self,
+        args: t.Optional[t.Sequence[str]] = None,
+        prog_name: t.Optional[str] = None,
+        complete_var: t.Optional[str] = None,
+        standalone_mode: "te.Literal[True]" = True,
+        **extra: t.Any,
+    ) -> "te.NoReturn":
+        ...
+
+    @t.overload
+    def main(
+        self,
+        args: t.Optional[t.Sequence[str]] = None,
+        prog_name: t.Optional[str] = None,
+        complete_var: t.Optional[str] = None,
+        standalone_mode: bool = ...,
+        **extra: t.Any,
+    ) -> t.Any:
+        ...
+
+    def main(
+        self,
+        args: t.Optional[t.Sequence[str]] = None,
+        prog_name: t.Optional[str] = None,
+        complete_var: t.Optional[str] = None,
+        standalone_mode: bool = True,
+        windows_expand_args: bool = True,
+        **extra: t.Any,
+    ) -> t.Any:
+        """This is the way to invoke a script with all the bells and
+        whistles as a command line application. This will always terminate
+        the application after a call. If this is not wanted, ``SystemExit``
+        needs to be caught.
+
+        This method is also available by directly calling the instance of
+        a :class:`Command`.
+
+        :param args: the arguments that should be used for parsing. If not
+                     provided, ``sys.argv[1:]`` is used.
+        :param prog_name: the program name that should be used. By default
+                          the program name is constructed by taking the file
+                          name from ``sys.argv[0]``.
+        :param complete_var: the environment variable that controls the
+                             bash completion support. The default is
+                             ``"_<prog_name>_COMPLETE"`` with prog_name in
+                             uppercase.
+        :param standalone_mode: the default behavior is to invoke the script
+                                in standalone mode. Click will then
+                                handle exceptions and convert them into
+                                error messages and the function will never
+                                return but shut down the interpreter. If
+                                this is set to `False` they will be
+                                propagated to the caller and the return
+                                value of this function is the return value
+                                of :meth:`invoke`.
+        :param windows_expand_args: Expand glob patterns, user dir, and
+            env vars in command line args on Windows.
+        :param extra: extra keyword arguments are forwarded to the context
+                      constructor. See :class:`Context` for more information.
+
+        .. versionchanged:: 8.0.1
+            Added the ``windows_expand_args`` parameter to allow
+            disabling command line arg expansion on Windows.
+
+        ..
versionchanged:: 8.0 + When taking arguments from ``sys.argv`` on Windows, glob + patterns, user dir, and env vars are expanded. + + .. versionchanged:: 3.0 + Added the ``standalone_mode`` parameter. + """ + if args is None: + args = sys.argv[1:] + + if os.name == "nt" and windows_expand_args: + args = _expand_args(args) + else: + args = list(args) + + if prog_name is None: + prog_name = _detect_program_name() + + # Process shell completion requests and exit early. + self._main_shell_completion(extra, prog_name, complete_var) + + try: + try: + with self.make_context(prog_name, args, **extra) as ctx: + rv = self.invoke(ctx) + if not standalone_mode: + return rv + # it's not safe to `ctx.exit(rv)` here! + # note that `rv` may actually contain data like "1" which + # has obvious effects + # more subtle case: `rv=[None, None]` can come out of + # chained commands which all returned `None` -- so it's not + # even always obvious that `rv` indicates success/failure + # by its truthiness/falsiness + ctx.exit() + except (EOFError, KeyboardInterrupt): + echo(file=sys.stderr) + raise Abort() from None + except ClickException as e: + if not standalone_mode: + raise + e.show() + sys.exit(e.exit_code) + except OSError as e: + if e.errno == errno.EPIPE: + sys.stdout = t.cast(t.TextIO, PacifyFlushWrapper(sys.stdout)) + sys.stderr = t.cast(t.TextIO, PacifyFlushWrapper(sys.stderr)) + sys.exit(1) + else: + raise + except Exit as e: + if standalone_mode: + sys.exit(e.exit_code) + else: + # in non-standalone mode, return the exit code + # note that this is only reached if `self.invoke` above raises + # an Exit explicitly -- thus bypassing the check there which + # would return its result + # the results of non-standalone execution may therefore be + # somewhat ambiguous: if there are codepaths which lead to + # `ctx.exit(1)` and to `return 1`, the caller won't be able to + # tell the difference between the two + return e.exit_code + except Abort: + if not standalone_mode: + raise + echo(_("Aborted!"), file=sys.stderr) + sys.exit(1) + + def _main_shell_completion( + self, + ctx_args: t.Dict[str, t.Any], + prog_name: str, + complete_var: t.Optional[str] = None, + ) -> None: + """Check if the shell is asking for tab completion, process + that, then exit early. Called from :meth:`main` before the + program is invoked. + + :param prog_name: Name of the executable in the shell. + :param complete_var: Name of the environment variable that holds + the completion instruction. Defaults to + ``_{PROG_NAME}_COMPLETE``. + """ + if complete_var is None: + complete_var = f"_{prog_name}_COMPLETE".replace("-", "_").upper() + + instruction = os.environ.get(complete_var) + + if not instruction: + return + + from .shell_completion import shell_complete + + rv = shell_complete(self, ctx_args, prog_name, complete_var, instruction) + sys.exit(rv) + + def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Any: + """Alias for :meth:`main`.""" + return self.main(*args, **kwargs) + + +class Command(BaseCommand): + """Commands are the basic building block of command line interfaces in + Click. A basic command handles command line parsing and might dispatch + more parsing to commands nested below it. + + :param name: the name of the command to use unless a group overrides it. + :param context_settings: an optional dictionary with defaults that are + passed to the context object. + :param callback: the callback to invoke. This is optional. + :param params: the parameters to register with this command. 
This can + be either :class:`Option` or :class:`Argument` objects. + :param help: the help string to use for this command. + :param epilog: like the help string but it's printed at the end of the + help page after everything else. + :param short_help: the short help to use for this command. This is + shown on the command listing of the parent command. + :param add_help_option: by default each command registers a ``--help`` + option. This can be disabled by this parameter. + :param no_args_is_help: this controls what happens if no arguments are + provided. This option is disabled by default. + If enabled this will add ``--help`` as argument + if no arguments are passed + :param hidden: hide this command from help outputs. + + :param deprecated: issues a message indicating that + the command is deprecated. + + .. versionchanged:: 8.1 + ``help``, ``epilog``, and ``short_help`` are stored unprocessed, + all formatting is done when outputting help text, not at init, + and is done even if not using the ``@command`` decorator. + + .. versionchanged:: 8.0 + Added a ``repr`` showing the command name. + + .. versionchanged:: 7.1 + Added the ``no_args_is_help`` parameter. + + .. versionchanged:: 2.0 + Added the ``context_settings`` parameter. + """ + + def __init__( + self, + name: t.Optional[str], + context_settings: t.Optional[t.Dict[str, t.Any]] = None, + callback: t.Optional[t.Callable[..., t.Any]] = None, + params: t.Optional[t.List["Parameter"]] = None, + help: t.Optional[str] = None, + epilog: t.Optional[str] = None, + short_help: t.Optional[str] = None, + options_metavar: t.Optional[str] = "[OPTIONS]", + add_help_option: bool = True, + no_args_is_help: bool = False, + hidden: bool = False, + deprecated: bool = False, + ) -> None: + super().__init__(name, context_settings) + #: the callback to execute when the command fires. This might be + #: `None` in which case nothing happens. + self.callback = callback + #: the list of parameters for this command in the order they + #: should show up in the help page and execute. Eager parameters + #: will automatically be handled before non eager ones. + self.params: t.List["Parameter"] = params or [] + self.help = help + self.epilog = epilog + self.options_metavar = options_metavar + self.short_help = short_help + self.add_help_option = add_help_option + self.no_args_is_help = no_args_is_help + self.hidden = hidden + self.deprecated = deprecated + + def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict(ctx) + info_dict.update( + params=[param.to_info_dict() for param in self.get_params(ctx)], + help=self.help, + epilog=self.epilog, + short_help=self.short_help, + hidden=self.hidden, + deprecated=self.deprecated, + ) + return info_dict + + def get_usage(self, ctx: Context) -> str: + """Formats the usage line into a string and returns it. + + Calls :meth:`format_usage` internally. + """ + formatter = ctx.make_formatter() + self.format_usage(ctx, formatter) + return formatter.getvalue().rstrip("\n") + + def get_params(self, ctx: Context) -> t.List["Parameter"]: + rv = self.params + help_option = self.get_help_option(ctx) + + if help_option is not None: + rv = [*rv, help_option] + + return rv + + def format_usage(self, ctx: Context, formatter: HelpFormatter) -> None: + """Writes the usage line into the formatter. + + This is a low-level method called by :meth:`get_usage`. 
+ """ + pieces = self.collect_usage_pieces(ctx) + formatter.write_usage(ctx.command_path, " ".join(pieces)) + + def collect_usage_pieces(self, ctx: Context) -> t.List[str]: + """Returns all the pieces that go into the usage line and returns + it as a list of strings. + """ + rv = [self.options_metavar] if self.options_metavar else [] + + for param in self.get_params(ctx): + rv.extend(param.get_usage_pieces(ctx)) + + return rv + + def get_help_option_names(self, ctx: Context) -> t.List[str]: + """Returns the names for the help option.""" + all_names = set(ctx.help_option_names) + for param in self.params: + all_names.difference_update(param.opts) + all_names.difference_update(param.secondary_opts) + return list(all_names) + + def get_help_option(self, ctx: Context) -> t.Optional["Option"]: + """Returns the help option object.""" + help_options = self.get_help_option_names(ctx) + + if not help_options or not self.add_help_option: + return None + + def show_help(ctx: Context, param: "Parameter", value: str) -> None: + if value and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + return Option( + help_options, + is_flag=True, + is_eager=True, + expose_value=False, + callback=show_help, + help=_("Show this message and exit."), + ) + + def make_parser(self, ctx: Context) -> OptionParser: + """Creates the underlying option parser for this command.""" + parser = OptionParser(ctx) + for param in self.get_params(ctx): + param.add_to_parser(parser, ctx) + return parser + + def get_help(self, ctx: Context) -> str: + """Formats the help into a string and returns it. + + Calls :meth:`format_help` internally. + """ + formatter = ctx.make_formatter() + self.format_help(ctx, formatter) + return formatter.getvalue().rstrip("\n") + + def get_short_help_str(self, limit: int = 45) -> str: + """Gets short help for the command or makes it by shortening the + long help string. + """ + if self.short_help: + text = inspect.cleandoc(self.short_help) + elif self.help: + text = make_default_short_help(self.help, limit) + else: + text = "" + + if self.deprecated: + text = _("(Deprecated) {text}").format(text=text) + + return text.strip() + + def format_help(self, ctx: Context, formatter: HelpFormatter) -> None: + """Writes the help into the formatter if it exists. + + This is a low-level method called by :meth:`get_help`. 
+ + This calls the following methods: + + - :meth:`format_usage` + - :meth:`format_help_text` + - :meth:`format_options` + - :meth:`format_epilog` + """ + self.format_usage(ctx, formatter) + self.format_help_text(ctx, formatter) + self.format_options(ctx, formatter) + self.format_epilog(ctx, formatter) + + def format_help_text(self, ctx: Context, formatter: HelpFormatter) -> None: + """Writes the help text to the formatter if it exists.""" + text = self.help if self.help is not None else "" + + if self.deprecated: + text = _("(Deprecated) {text}").format(text=text) + + if text: + text = inspect.cleandoc(text).partition("\f")[0] + formatter.write_paragraph() + + with formatter.indentation(): + formatter.write_text(text) + + def format_options(self, ctx: Context, formatter: HelpFormatter) -> None: + """Writes all the options into the formatter if they exist.""" + opts = [] + for param in self.get_params(ctx): + rv = param.get_help_record(ctx) + if rv is not None: + opts.append(rv) + + if opts: + with formatter.section(_("Options")): + formatter.write_dl(opts) + + def format_epilog(self, ctx: Context, formatter: HelpFormatter) -> None: + """Writes the epilog into the formatter if it exists.""" + if self.epilog: + epilog = inspect.cleandoc(self.epilog) + formatter.write_paragraph() + + with formatter.indentation(): + formatter.write_text(epilog) + + def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]: + if not args and self.no_args_is_help and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + parser = self.make_parser(ctx) + opts, args, param_order = parser.parse_args(args=args) + + for param in iter_params_for_processing(param_order, self.get_params(ctx)): + value, args = param.handle_parse_result(ctx, opts, args) + + if args and not ctx.allow_extra_args and not ctx.resilient_parsing: + ctx.fail( + ngettext( + "Got unexpected extra argument ({args})", + "Got unexpected extra arguments ({args})", + len(args), + ).format(args=" ".join(map(str, args))) + ) + + ctx.args = args + ctx._opt_prefixes.update(parser._opt_prefixes) + return args + + def invoke(self, ctx: Context) -> t.Any: + """Given a context, this invokes the attached callback (if it exists) + in the right way. + """ + if self.deprecated: + message = _( + "DeprecationWarning: The command {name!r} is deprecated." + ).format(name=self.name) + echo(style(message, fg="red"), err=True) + + if self.callback is not None: + return ctx.invoke(self.callback, **ctx.params) + + def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: + """Return a list of completions for the incomplete value. Looks + at the names of options and chained multi-commands. + + :param ctx: Invocation context for this command. + :param incomplete: Value being completed. May be empty. + + .. 
versionadded:: 8.0 + """ + from click.shell_completion import CompletionItem + + results: t.List["CompletionItem"] = [] + + if incomplete and not incomplete[0].isalnum(): + for param in self.get_params(ctx): + if ( + not isinstance(param, Option) + or param.hidden + or ( + not param.multiple + and ctx.get_parameter_source(param.name) # type: ignore + is ParameterSource.COMMANDLINE + ) + ): + continue + + results.extend( + CompletionItem(name, help=param.help) + for name in [*param.opts, *param.secondary_opts] + if name.startswith(incomplete) + ) + + results.extend(super().shell_complete(ctx, incomplete)) + return results + + +class MultiCommand(Command): + """A multi command is the basic implementation of a command that + dispatches to subcommands. The most common version is the + :class:`Group`. + + :param invoke_without_command: this controls how the multi command itself + is invoked. By default it's only invoked + if a subcommand is provided. + :param no_args_is_help: this controls what happens if no arguments are + provided. This option is enabled by default if + `invoke_without_command` is disabled or disabled + if it's enabled. If enabled this will add + ``--help`` as argument if no arguments are + passed. + :param subcommand_metavar: the string that is used in the documentation + to indicate the subcommand place. + :param chain: if this is set to `True` chaining of multiple subcommands + is enabled. This restricts the form of commands in that + they cannot have optional arguments but it allows + multiple commands to be chained together. + :param result_callback: The result callback to attach to this multi + command. This can be set or changed later with the + :meth:`result_callback` decorator. + """ + + allow_extra_args = True + allow_interspersed_args = False + + def __init__( + self, + name: t.Optional[str] = None, + invoke_without_command: bool = False, + no_args_is_help: t.Optional[bool] = None, + subcommand_metavar: t.Optional[str] = None, + chain: bool = False, + result_callback: t.Optional[t.Callable[..., t.Any]] = None, + **attrs: t.Any, + ) -> None: + super().__init__(name, **attrs) + + if no_args_is_help is None: + no_args_is_help = not invoke_without_command + + self.no_args_is_help = no_args_is_help + self.invoke_without_command = invoke_without_command + + if subcommand_metavar is None: + if chain: + subcommand_metavar = "COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]..." + else: + subcommand_metavar = "COMMAND [ARGS]..." + + self.subcommand_metavar = subcommand_metavar + self.chain = chain + # The result callback that is stored. This can be set or + # overridden with the :func:`result_callback` decorator. + self._result_callback = result_callback + + if self.chain: + for param in self.params: + if isinstance(param, Argument) and not param.required: + raise RuntimeError( + "Multi commands in chain mode cannot have" + " optional arguments." 
+ ) + + def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict(ctx) + commands = {} + + for name in self.list_commands(ctx): + command = self.get_command(ctx, name) + + if command is None: + continue + + sub_ctx = ctx._make_sub_context(command) + + with sub_ctx.scope(cleanup=False): + commands[name] = command.to_info_dict(sub_ctx) + + info_dict.update(commands=commands, chain=self.chain) + return info_dict + + def collect_usage_pieces(self, ctx: Context) -> t.List[str]: + rv = super().collect_usage_pieces(ctx) + rv.append(self.subcommand_metavar) + return rv + + def format_options(self, ctx: Context, formatter: HelpFormatter) -> None: + super().format_options(ctx, formatter) + self.format_commands(ctx, formatter) + + def result_callback(self, replace: bool = False) -> t.Callable[[F], F]: + """Adds a result callback to the command. By default if a + result callback is already registered this will chain them but + this can be disabled with the `replace` parameter. The result + callback is invoked with the return value of the subcommand + (or the list of return values from all subcommands if chaining + is enabled) as well as the parameters as they would be passed + to the main callback. + + Example:: + + @click.group() + @click.option('-i', '--input', default=23) + def cli(input): + return 42 + + @cli.result_callback() + def process_result(result, input): + return result + input + + :param replace: if set to `True` an already existing result + callback will be removed. + + .. versionchanged:: 8.0 + Renamed from ``resultcallback``. + + .. versionadded:: 3.0 + """ + + def decorator(f: F) -> F: + old_callback = self._result_callback + + if old_callback is None or replace: + self._result_callback = f + return f + + def function(__value, *args, **kwargs): # type: ignore + inner = old_callback(__value, *args, **kwargs) # type: ignore + return f(inner, *args, **kwargs) + + self._result_callback = rv = update_wrapper(t.cast(F, function), f) + return rv + + return decorator + + def format_commands(self, ctx: Context, formatter: HelpFormatter) -> None: + """Extra format methods for multi methods that adds all the commands + after the options. + """ + commands = [] + for subcommand in self.list_commands(ctx): + cmd = self.get_command(ctx, subcommand) + # What is this, the tool lied about a command. 
Ignore it + if cmd is None: + continue + if cmd.hidden: + continue + + commands.append((subcommand, cmd)) + + # allow for 3 times the default spacing + if len(commands): + limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands) + + rows = [] + for subcommand, cmd in commands: + help = cmd.get_short_help_str(limit) + rows.append((subcommand, help)) + + if rows: + with formatter.section(_("Commands")): + formatter.write_dl(rows) + + def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]: + if not args and self.no_args_is_help and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + rest = super().parse_args(ctx, args) + + if self.chain: + ctx.protected_args = rest + ctx.args = [] + elif rest: + ctx.protected_args, ctx.args = rest[:1], rest[1:] + + return ctx.args + + def invoke(self, ctx: Context) -> t.Any: + def _process_result(value: t.Any) -> t.Any: + if self._result_callback is not None: + value = ctx.invoke(self._result_callback, value, **ctx.params) + return value + + if not ctx.protected_args: + if self.invoke_without_command: + # No subcommand was invoked, so the result callback is + # invoked with the group return value for regular + # groups, or an empty list for chained groups. + with ctx: + rv = super().invoke(ctx) + return _process_result([] if self.chain else rv) + ctx.fail(_("Missing command.")) + + # Fetch args back out + args = [*ctx.protected_args, *ctx.args] + ctx.args = [] + ctx.protected_args = [] + + # If we're not in chain mode, we only allow the invocation of a + # single command but we also inform the current context about the + # name of the command to invoke. + if not self.chain: + # Make sure the context is entered so we do not clean up + # resources until the result processor has worked. + with ctx: + cmd_name, cmd, args = self.resolve_command(ctx, args) + assert cmd is not None + ctx.invoked_subcommand = cmd_name + super().invoke(ctx) + sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) + with sub_ctx: + return _process_result(sub_ctx.command.invoke(sub_ctx)) + + # In chain mode we create the contexts step by step, but after the + # base command has been invoked. Because at that point we do not + # know the subcommands yet, the invoked subcommand attribute is + # set to ``*`` to inform the command that subcommands are executed + # but nothing else. + with ctx: + ctx.invoked_subcommand = "*" if args else None + super().invoke(ctx) + + # Otherwise we make every single context and invoke them in a + # chain. In that case the return value to the result processor + # is the list of all invoked subcommand's results. + contexts = [] + while args: + cmd_name, cmd, args = self.resolve_command(ctx, args) + assert cmd is not None + sub_ctx = cmd.make_context( + cmd_name, + args, + parent=ctx, + allow_extra_args=True, + allow_interspersed_args=False, + ) + contexts.append(sub_ctx) + args, sub_ctx.args = sub_ctx.args, [] + + rv = [] + for sub_ctx in contexts: + with sub_ctx: + rv.append(sub_ctx.command.invoke(sub_ctx)) + return _process_result(rv) + + def resolve_command( + self, ctx: Context, args: t.List[str] + ) -> t.Tuple[t.Optional[str], t.Optional[Command], t.List[str]]: + cmd_name = make_str(args[0]) + original_cmd_name = cmd_name + + # Get the command + cmd = self.get_command(ctx, cmd_name) + + # If we can't find the command but there is a normalization + # function available, we try with that one. 
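+        # Illustrative sketch (editor's note, not part of click itself):
+        # ``token_normalize_func`` is supplied through ``context_settings``;
+        # a group declared as
+        #
+        #     @click.group(context_settings={"token_normalize_func": str.lower})
+        #     def cli(): ...
+        #
+        # lets the lookup below match user input like ``SYNC`` to a
+        # registered subcommand named ``sync``.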
+        if cmd is None and ctx.token_normalize_func is not None:
+            cmd_name = ctx.token_normalize_func(cmd_name)
+            cmd = self.get_command(ctx, cmd_name)
+
+        # If we don't find the command we want to show an error message
+        # to the user that it was not provided. However, there is
+        # something else we should do: if the first argument looks like
+        # an option we want to kick off parsing again for arguments to
+        # resolve things like --help which now should go to the main
+        # place.
+        if cmd is None and not ctx.resilient_parsing:
+            if split_opt(cmd_name)[0]:
+                self.parse_args(ctx, ctx.args)
+            ctx.fail(_("No such command {name!r}.").format(name=original_cmd_name))
+        return cmd_name if cmd else None, cmd, args[1:]
+
+    def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]:
+        """Given a context and a command name, this returns a
+        :class:`Command` object if it exists or returns `None`.
+        """
+        raise NotImplementedError
+
+    def list_commands(self, ctx: Context) -> t.List[str]:
+        """Returns a list of subcommand names in the order they should
+        appear.
+        """
+        return []
+
+    def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]:
+        """Return a list of completions for the incomplete value. Looks
+        at the names of options, subcommands, and chained
+        multi-commands.
+
+        :param ctx: Invocation context for this command.
+        :param incomplete: Value being completed. May be empty.
+
+        .. versionadded:: 8.0
+        """
+        from click.shell_completion import CompletionItem
+
+        results = [
+            CompletionItem(name, help=command.get_short_help_str())
+            for name, command in _complete_visible_commands(ctx, incomplete)
+        ]
+        results.extend(super().shell_complete(ctx, incomplete))
+        return results
+
+
+class Group(MultiCommand):
+    """A group allows a command to have subcommands attached. This is
+    the most common way to implement nesting in Click.
+
+    :param name: The name of the group command.
+    :param commands: A dict mapping names to :class:`Command` objects.
+        Can also be a list of :class:`Command`, which will use
+        :attr:`Command.name` to create the dict.
+    :param attrs: Other command arguments described in
+        :class:`MultiCommand`, :class:`Command`, and
+        :class:`BaseCommand`.
+
+    .. versionchanged:: 8.0
+        The ``commands`` argument can be a list of command objects.
+    """
+
+    #: If set, this is used by the group's :meth:`command` decorator
+    #: as the default :class:`Command` class. This is useful to make all
+    #: subcommands use a custom command class.
+    #:
+    #: .. versionadded:: 8.0
+    command_class: t.Optional[t.Type[Command]] = None
+
+    #: If set, this is used by the group's :meth:`group` decorator
+    #: as the default :class:`Group` class. This is useful to make all
+    #: subgroups use a custom group class.
+    #:
+    #: If set to the special value :class:`type` (literally
+    #: ``group_class = type``), this group's class will be used as the
+    #: default class. This makes a custom group class continue to make
+    #: custom groups.
+    #:
+    #: ..
versionadded:: 8.0 + group_class: t.Optional[t.Union[t.Type["Group"], t.Type[type]]] = None + # Literal[type] isn't valid, so use Type[type] + + def __init__( + self, + name: t.Optional[str] = None, + commands: t.Optional[t.Union[t.Dict[str, Command], t.Sequence[Command]]] = None, + **attrs: t.Any, + ) -> None: + super().__init__(name, **attrs) + + if commands is None: + commands = {} + elif isinstance(commands, abc.Sequence): + commands = {c.name: c for c in commands if c.name is not None} + + #: The registered subcommands by their exported names. + self.commands: t.Dict[str, Command] = commands + + def add_command(self, cmd: Command, name: t.Optional[str] = None) -> None: + """Registers another :class:`Command` with this group. If the name + is not provided, the name of the command is used. + """ + name = name or cmd.name + if name is None: + raise TypeError("Command has no name.") + _check_multicommand(self, name, cmd, register=True) + self.commands[name] = cmd + + @t.overload + def command(self, __func: t.Callable[..., t.Any]) -> Command: + ... + + @t.overload + def command( + self, *args: t.Any, **kwargs: t.Any + ) -> t.Callable[[t.Callable[..., t.Any]], Command]: + ... + + def command( + self, *args: t.Any, **kwargs: t.Any + ) -> t.Union[t.Callable[[t.Callable[..., t.Any]], Command], Command]: + """A shortcut decorator for declaring and attaching a command to + the group. This takes the same arguments as :func:`command` and + immediately registers the created command with this group by + calling :meth:`add_command`. + + To customize the command class used, set the + :attr:`command_class` attribute. + + .. versionchanged:: 8.1 + This decorator can be applied without parentheses. + + .. versionchanged:: 8.0 + Added the :attr:`command_class` attribute. + """ + from .decorators import command + + if self.command_class and kwargs.get("cls") is None: + kwargs["cls"] = self.command_class + + func: t.Optional[t.Callable] = None + + if args and callable(args[0]): + assert ( + len(args) == 1 and not kwargs + ), "Use 'command(**kwargs)(callable)' to provide arguments." + (func,) = args + args = () + + def decorator(f: t.Callable[..., t.Any]) -> Command: + cmd: Command = command(*args, **kwargs)(f) + self.add_command(cmd) + return cmd + + if func is not None: + return decorator(func) + + return decorator + + @t.overload + def group(self, __func: t.Callable[..., t.Any]) -> "Group": + ... + + @t.overload + def group( + self, *args: t.Any, **kwargs: t.Any + ) -> t.Callable[[t.Callable[..., t.Any]], "Group"]: + ... + + def group( + self, *args: t.Any, **kwargs: t.Any + ) -> t.Union[t.Callable[[t.Callable[..., t.Any]], "Group"], "Group"]: + """A shortcut decorator for declaring and attaching a group to + the group. This takes the same arguments as :func:`group` and + immediately registers the created group with this group by + calling :meth:`add_command`. + + To customize the group class used, set the :attr:`group_class` + attribute. + + .. versionchanged:: 8.1 + This decorator can be applied without parentheses. + + .. versionchanged:: 8.0 + Added the :attr:`group_class` attribute. + """ + from .decorators import group + + func: t.Optional[t.Callable] = None + + if args and callable(args[0]): + assert ( + len(args) == 1 and not kwargs + ), "Use 'group(**kwargs)(callable)' to provide arguments." 
+            (func,) = args
+            args = ()
+
+        if self.group_class is not None and kwargs.get("cls") is None:
+            if self.group_class is type:
+                kwargs["cls"] = type(self)
+            else:
+                kwargs["cls"] = self.group_class
+
+        def decorator(f: t.Callable[..., t.Any]) -> "Group":
+            cmd: Group = group(*args, **kwargs)(f)
+            self.add_command(cmd)
+            return cmd
+
+        if func is not None:
+            return decorator(func)
+
+        return decorator
+
+    def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]:
+        return self.commands.get(cmd_name)
+
+    def list_commands(self, ctx: Context) -> t.List[str]:
+        return sorted(self.commands)
+
+
+class CommandCollection(MultiCommand):
+    """A command collection is a multi command that merges multiple multi
+    commands together into one. This is a straightforward implementation
+    that accepts a list of different multi commands as sources and
+    provides all the commands for each of them.
+    """
+
+    def __init__(
+        self,
+        name: t.Optional[str] = None,
+        sources: t.Optional[t.List[MultiCommand]] = None,
+        **attrs: t.Any,
+    ) -> None:
+        super().__init__(name, **attrs)
+        #: The list of registered multi commands.
+        self.sources: t.List[MultiCommand] = sources or []
+
+    def add_source(self, multi_cmd: MultiCommand) -> None:
+        """Adds a new multi command to the chain dispatcher."""
+        self.sources.append(multi_cmd)
+
+    def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]:
+        for source in self.sources:
+            rv = source.get_command(ctx, cmd_name)
+
+            if rv is not None:
+                if self.chain:
+                    _check_multicommand(self, cmd_name, rv)
+
+                return rv
+
+        return None
+
+    def list_commands(self, ctx: Context) -> t.List[str]:
+        rv: t.Set[str] = set()
+
+        for source in self.sources:
+            rv.update(source.list_commands(ctx))
+
+        return sorted(rv)
+
+
+def _check_iter(value: t.Any) -> t.Iterator[t.Any]:
+    """Check if the value is iterable but not a string. Raises a type
+    error, or returns an iterator over the value.
+    """
+    if isinstance(value, str):
+        raise TypeError
+
+    return iter(value)
+
+
+class Parameter:
+    r"""A parameter to a command comes in two versions: they are either
+    :class:`Option`\s or :class:`Argument`\s. Other subclasses are currently
+    not supported by design as some of the internals for parsing are
+    intentionally not finalized.
+
+    Some settings are supported by both options and arguments.
+
+    :param param_decls: the parameter declarations for this option or
+                        argument. This is a list of flags or argument
+                        names.
+    :param type: the type that should be used. Either a :class:`ParamType`
+                 or a Python type. The latter is converted into the former
+                 automatically if supported.
+    :param required: controls if this is optional or not.
+    :param default: the default value if omitted. This can also be a callable,
+                    in which case it's invoked when the default is needed
+                    without any arguments.
+    :param callback: A function to further process or validate the value
+        after type conversion. It is called as ``f(ctx, param, value)``
+        and must return the value. It is called for all sources,
+        including prompts.
+    :param nargs: the number of arguments to match. If not ``1`` the return
+                  value is a tuple instead of single value. The default for
+                  nargs is ``1`` (except if the type is a tuple, then it's
+                  the arity of the tuple). If ``nargs=-1``, all remaining
+                  parameters are collected.
+    :param metavar: how the value is represented in the help page.
+ :param expose_value: if this is `True` then the value is passed onwards + to the command callback and stored on the context, + otherwise it's skipped. + :param is_eager: eager values are processed before non eager ones. This + should not be set for arguments or it will inverse the + order of processing. + :param envvar: a string or list of strings that are environment variables + that should be checked. + :param shell_complete: A function that returns custom shell + completions. Used instead of the param's type completion if + given. Takes ``ctx, param, incomplete`` and must return a list + of :class:`~click.shell_completion.CompletionItem` or a list of + strings. + + .. versionchanged:: 8.0 + ``process_value`` validates required parameters and bounded + ``nargs``, and invokes the parameter callback before returning + the value. This allows the callback to validate prompts. + ``full_process_value`` is removed. + + .. versionchanged:: 8.0 + ``autocompletion`` is renamed to ``shell_complete`` and has new + semantics described above. The old name is deprecated and will + be removed in 8.1, until then it will be wrapped to match the + new requirements. + + .. versionchanged:: 8.0 + For ``multiple=True, nargs>1``, the default must be a list of + tuples. + + .. versionchanged:: 8.0 + Setting a default is no longer required for ``nargs>1``, it will + default to ``None``. ``multiple=True`` or ``nargs=-1`` will + default to ``()``. + + .. versionchanged:: 7.1 + Empty environment variables are ignored rather than taking the + empty string value. This makes it possible for scripts to clear + variables if they can't unset them. + + .. versionchanged:: 2.0 + Changed signature for parameter callback to also be passed the + parameter. The old callback format will still work, but it will + raise a warning to give you a chance to migrate the code easier. + """ + + param_type_name = "parameter" + + def __init__( + self, + param_decls: t.Optional[t.Sequence[str]] = None, + type: t.Optional[t.Union[types.ParamType, t.Any]] = None, + required: bool = False, + default: t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]] = None, + callback: t.Optional[t.Callable[[Context, "Parameter", t.Any], t.Any]] = None, + nargs: t.Optional[int] = None, + multiple: bool = False, + metavar: t.Optional[str] = None, + expose_value: bool = True, + is_eager: bool = False, + envvar: t.Optional[t.Union[str, t.Sequence[str]]] = None, + shell_complete: t.Optional[ + t.Callable[ + [Context, "Parameter", str], + t.Union[t.List["CompletionItem"], t.List[str]], + ] + ] = None, + ) -> None: + self.name, self.opts, self.secondary_opts = self._parse_decls( + param_decls or (), expose_value + ) + self.type = types.convert_type(type, default) + + # Default nargs to what the type tells us if we have that + # information available. + if nargs is None: + if self.type.is_composite: + nargs = self.type.arity + else: + nargs = 1 + + self.required = required + self.callback = callback + self.nargs = nargs + self.multiple = multiple + self.expose_value = expose_value + self.default = default + self.is_eager = is_eager + self.metavar = metavar + self.envvar = envvar + self._custom_shell_complete = shell_complete + + if __debug__: + if self.type.is_composite and nargs != self.type.arity: + raise ValueError( + f"'nargs' must be {self.type.arity} (or None) for" + f" type {self.type!r}, but it was {nargs}." + ) + + # Skip no default or callable default. 
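+            # Illustrative sketch (editor's note, not part of click itself):
+            # the shape checked below is, for an option declared as e.g.
+            #
+            #     @click.option("--pt", nargs=2, type=float, multiple=True,
+            #                   default=[(0.0, 0.0), (1.0, 1.0)])
+            #
+            # a list of 2-tuples: ``multiple`` contributes the outer list
+            # and ``nargs=2`` fixes the length of each inner tuple.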
+ check_default = default if not callable(default) else None + + if check_default is not None: + if multiple: + try: + # Only check the first value against nargs. + check_default = next(_check_iter(check_default), None) + except TypeError: + raise ValueError( + "'default' must be a list when 'multiple' is true." + ) from None + + # Can be None for multiple with empty default. + if nargs != 1 and check_default is not None: + try: + _check_iter(check_default) + except TypeError: + if multiple: + message = ( + "'default' must be a list of lists when 'multiple' is" + " true and 'nargs' != 1." + ) + else: + message = "'default' must be a list when 'nargs' != 1." + + raise ValueError(message) from None + + if nargs > 1 and len(check_default) != nargs: + subject = "item length" if multiple else "length" + raise ValueError( + f"'default' {subject} must match nargs={nargs}." + ) + + def to_info_dict(self) -> t.Dict[str, t.Any]: + """Gather information that could be useful for a tool generating + user-facing documentation. + + Use :meth:`click.Context.to_info_dict` to traverse the entire + CLI structure. + + .. versionadded:: 8.0 + """ + return { + "name": self.name, + "param_type_name": self.param_type_name, + "opts": self.opts, + "secondary_opts": self.secondary_opts, + "type": self.type.to_info_dict(), + "required": self.required, + "nargs": self.nargs, + "multiple": self.multiple, + "default": self.default, + "envvar": self.envvar, + } + + def __repr__(self) -> str: + return f"<{self.__class__.__name__} {self.name}>" + + def _parse_decls( + self, decls: t.Sequence[str], expose_value: bool + ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]: + raise NotImplementedError() + + @property + def human_readable_name(self) -> str: + """Returns the human readable name of this parameter. This is the + same as the name for options, but the metavar for arguments. + """ + return self.name # type: ignore + + def make_metavar(self) -> str: + if self.metavar is not None: + return self.metavar + + metavar = self.type.get_metavar(self) + + if metavar is None: + metavar = self.type.name.upper() + + if self.nargs != 1: + metavar += "..." + + return metavar + + @t.overload + def get_default( + self, ctx: Context, call: "te.Literal[True]" = True + ) -> t.Optional[t.Any]: + ... + + @t.overload + def get_default( + self, ctx: Context, call: bool = ... + ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: + ... + + def get_default( + self, ctx: Context, call: bool = True + ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: + """Get the default for the parameter. Tries + :meth:`Context.lookup_default` first, then the local default. + + :param ctx: Current context. + :param call: If the default is a callable, call it. Disable to + return the callable instead. + + .. versionchanged:: 8.0.2 + Type casting is no longer performed when getting a default. + + .. versionchanged:: 8.0.1 + Type casting can fail in resilient parsing mode. Invalid + defaults will not prevent showing help text. + + .. versionchanged:: 8.0 + Looks at ``ctx.default_map`` first. + + .. versionchanged:: 8.0 + Added the ``call`` parameter. 
+ """ + value = ctx.lookup_default(self.name, call=False) # type: ignore + + if value is None: + value = self.default + + if call and callable(value): + value = value() + + return value + + def add_to_parser(self, parser: OptionParser, ctx: Context) -> None: + raise NotImplementedError() + + def consume_value( + self, ctx: Context, opts: t.Mapping[str, t.Any] + ) -> t.Tuple[t.Any, ParameterSource]: + value = opts.get(self.name) # type: ignore + source = ParameterSource.COMMANDLINE + + if value is None: + value = self.value_from_envvar(ctx) + source = ParameterSource.ENVIRONMENT + + if value is None: + value = ctx.lookup_default(self.name) # type: ignore + source = ParameterSource.DEFAULT_MAP + + if value is None: + value = self.get_default(ctx) + source = ParameterSource.DEFAULT + + return value, source + + def type_cast_value(self, ctx: Context, value: t.Any) -> t.Any: + """Convert and validate a value against the option's + :attr:`type`, :attr:`multiple`, and :attr:`nargs`. + """ + if value is None: + return () if self.multiple or self.nargs == -1 else None + + def check_iter(value: t.Any) -> t.Iterator: + try: + return _check_iter(value) + except TypeError: + # This should only happen when passing in args manually, + # the parser should construct an iterable when parsing + # the command line. + raise BadParameter( + _("Value must be an iterable."), ctx=ctx, param=self + ) from None + + if self.nargs == 1 or self.type.is_composite: + convert: t.Callable[[t.Any], t.Any] = partial( + self.type, param=self, ctx=ctx + ) + elif self.nargs == -1: + + def convert(value: t.Any) -> t.Tuple: + return tuple(self.type(x, self, ctx) for x in check_iter(value)) + + else: # nargs > 1 + + def convert(value: t.Any) -> t.Tuple: + value = tuple(check_iter(value)) + + if len(value) != self.nargs: + raise BadParameter( + ngettext( + "Takes {nargs} values but 1 was given.", + "Takes {nargs} values but {len} were given.", + len(value), + ).format(nargs=self.nargs, len=len(value)), + ctx=ctx, + param=self, + ) + + return tuple(self.type(x, self, ctx) for x in value) + + if self.multiple: + return tuple(convert(x) for x in check_iter(value)) + + return convert(value) + + def value_is_missing(self, value: t.Any) -> bool: + if value is None: + return True + + if (self.nargs != 1 or self.multiple) and value == (): + return True + + return False + + def process_value(self, ctx: Context, value: t.Any) -> t.Any: + value = self.type_cast_value(ctx, value) + + if self.required and self.value_is_missing(value): + raise MissingParameter(ctx=ctx, param=self) + + if self.callback is not None: + value = self.callback(ctx, self, value) + + return value + + def resolve_envvar_value(self, ctx: Context) -> t.Optional[str]: + if self.envvar is None: + return None + + if isinstance(self.envvar, str): + rv = os.environ.get(self.envvar) + + if rv: + return rv + else: + for envvar in self.envvar: + rv = os.environ.get(envvar) + + if rv: + return rv + + return None + + def value_from_envvar(self, ctx: Context) -> t.Optional[t.Any]: + rv: t.Optional[t.Any] = self.resolve_envvar_value(ctx) + + if rv is not None and self.nargs != 1: + rv = self.type.split_envvar_value(rv) + + return rv + + def handle_parse_result( + self, ctx: Context, opts: t.Mapping[str, t.Any], args: t.List[str] + ) -> t.Tuple[t.Any, t.List[str]]: + with augment_usage_errors(ctx, param=self): + value, source = self.consume_value(ctx, opts) + ctx.set_parameter_source(self.name, source) # type: ignore + + try: + value = self.process_value(ctx, value) + except Exception: 
+ if not ctx.resilient_parsing: + raise + + value = None + + if self.expose_value: + ctx.params[self.name] = value # type: ignore + + return value, args + + def get_help_record(self, ctx: Context) -> t.Optional[t.Tuple[str, str]]: + pass + + def get_usage_pieces(self, ctx: Context) -> t.List[str]: + return [] + + def get_error_hint(self, ctx: Context) -> str: + """Get a stringified version of the param for use in error messages to + indicate which param caused the error. + """ + hint_list = self.opts or [self.human_readable_name] + return " / ".join(f"'{x}'" for x in hint_list) + + def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: + """Return a list of completions for the incomplete value. If a + ``shell_complete`` function was given during init, it is used. + Otherwise, the :attr:`type` + :meth:`~click.types.ParamType.shell_complete` function is used. + + :param ctx: Invocation context for this command. + :param incomplete: Value being completed. May be empty. + + .. versionadded:: 8.0 + """ + if self._custom_shell_complete is not None: + results = self._custom_shell_complete(ctx, self, incomplete) + + if results and isinstance(results[0], str): + from click.shell_completion import CompletionItem + + results = [CompletionItem(c) for c in results] + + return t.cast(t.List["CompletionItem"], results) + + return self.type.shell_complete(ctx, self, incomplete) + + +class Option(Parameter): + """Options are usually optional values on the command line and + have some extra features that arguments don't have. + + All other parameters are passed onwards to the parameter constructor. + + :param show_default: Show the default value for this option in its + help text. Values are not shown by default, unless + :attr:`Context.show_default` is ``True``. If this value is a + string, it shows that string in parentheses instead of the + actual value. This is particularly useful for dynamic options. + For single option boolean flags, the default remains hidden if + its value is ``False``. + :param show_envvar: Controls if an environment variable should be + shown on the help page. Normally, environment variables are not + shown. + :param prompt: If set to ``True`` or a non empty string then the + user will be prompted for input. If set to ``True`` the prompt + will be the option name capitalized. + :param confirmation_prompt: Prompt a second time to confirm the + value if it was prompted for. Can be set to a string instead of + ``True`` to customize the message. + :param prompt_required: If set to ``False``, the user will be + prompted for input only when the option was specified as a flag + without a value. + :param hide_input: If this is ``True`` then the input on the prompt + will be hidden from the user. This is useful for password input. + :param is_flag: forces this option to act as a flag. The default is + auto detection. + :param flag_value: which value should be used for this flag if it's + enabled. This is set to a boolean automatically if + the option string contains a slash to mark two options. + :param multiple: if this is set to `True` then the argument is accepted + multiple times and recorded. This is similar to ``nargs`` + in how it works but supports arbitrary number of + arguments. + :param count: this flag makes an option increment an integer. + :param allow_from_autoenv: if this is enabled then the value of this + parameter will be pulled from an environment + variable in case a prefix is defined on the + context. + :param help: the help string. 
+ :param hidden: hide this option from help outputs. + + .. versionchanged:: 8.1.0 + Help text indentation is cleaned here instead of only in the + ``@option`` decorator. + + .. versionchanged:: 8.1.0 + The ``show_default`` parameter overrides + ``Context.show_default``. + + .. versionchanged:: 8.1.0 + The default of a single option boolean flag is not shown if the + default value is ``False``. + + .. versionchanged:: 8.0.1 + ``type`` is detected from ``flag_value`` if given. + """ + + param_type_name = "option" + + def __init__( + self, + param_decls: t.Optional[t.Sequence[str]] = None, + show_default: t.Union[bool, str, None] = None, + prompt: t.Union[bool, str] = False, + confirmation_prompt: t.Union[bool, str] = False, + prompt_required: bool = True, + hide_input: bool = False, + is_flag: t.Optional[bool] = None, + flag_value: t.Optional[t.Any] = None, + multiple: bool = False, + count: bool = False, + allow_from_autoenv: bool = True, + type: t.Optional[t.Union[types.ParamType, t.Any]] = None, + help: t.Optional[str] = None, + hidden: bool = False, + show_choices: bool = True, + show_envvar: bool = False, + **attrs: t.Any, + ) -> None: + if help: + help = inspect.cleandoc(help) + + default_is_missing = "default" not in attrs + super().__init__(param_decls, type=type, multiple=multiple, **attrs) + + if prompt is True: + if self.name is None: + raise TypeError("'name' is required with 'prompt=True'.") + + prompt_text: t.Optional[str] = self.name.replace("_", " ").capitalize() + elif prompt is False: + prompt_text = None + else: + prompt_text = prompt + + self.prompt = prompt_text + self.confirmation_prompt = confirmation_prompt + self.prompt_required = prompt_required + self.hide_input = hide_input + self.hidden = hidden + + # If prompt is enabled but not required, then the option can be + # used as a flag to indicate using prompt or flag_value. + self._flag_needs_value = self.prompt is not None and not self.prompt_required + + if is_flag is None: + if flag_value is not None: + # Implicitly a flag because flag_value was set. + is_flag = True + elif self._flag_needs_value: + # Not a flag, but when used as a flag it shows a prompt. + is_flag = False + else: + # Implicitly a flag because flag options were given. + is_flag = bool(self.secondary_opts) + elif is_flag is False and not self._flag_needs_value: + # Not a flag, and prompt is not enabled, can be used as a + # flag if flag_value is set. + self._flag_needs_value = flag_value is not None + + if is_flag and default_is_missing and not self.required: + self.default: t.Union[t.Any, t.Callable[[], t.Any]] = False + + if flag_value is None: + flag_value = not self.default + + if is_flag and type is None: + # Re-guess the type from the flag value instead of the + # default. 
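+            # Illustrative sketch (editor's note, not part of click itself):
+            # for two flags writing to one parameter, e.g.
+            #
+            #     @click.option("--upper", "transform", flag_value="upper")
+            #     @click.option("--lower", "transform", flag_value="lower")
+            #
+            # the str ``flag_value`` (rather than the boolean default) is
+            # what determines that the parameter's type is STRING.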
+ self.type = types.convert_type(None, flag_value) + + self.is_flag: bool = is_flag + self.is_bool_flag = is_flag and isinstance(self.type, types.BoolParamType) + self.flag_value: t.Any = flag_value + + # Counting + self.count = count + if count: + if type is None: + self.type = types.IntRange(min=0) + if default_is_missing: + self.default = 0 + + self.allow_from_autoenv = allow_from_autoenv + self.help = help + self.show_default = show_default + self.show_choices = show_choices + self.show_envvar = show_envvar + + if __debug__: + if self.nargs == -1: + raise TypeError("nargs=-1 is not supported for options.") + + if self.prompt and self.is_flag and not self.is_bool_flag: + raise TypeError("'prompt' is not valid for non-boolean flag.") + + if not self.is_bool_flag and self.secondary_opts: + raise TypeError("Secondary flag is not valid for non-boolean flag.") + + if self.is_bool_flag and self.hide_input and self.prompt is not None: + raise TypeError( + "'prompt' with 'hide_input' is not valid for boolean flag." + ) + + if self.count: + if self.multiple: + raise TypeError("'count' is not valid with 'multiple'.") + + if self.is_flag: + raise TypeError("'count' is not valid with 'is_flag'.") + + if self.multiple and self.is_flag: + raise TypeError("'multiple' is not valid with 'is_flag', use 'count'.") + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict.update( + help=self.help, + prompt=self.prompt, + is_flag=self.is_flag, + flag_value=self.flag_value, + count=self.count, + hidden=self.hidden, + ) + return info_dict + + def _parse_decls( + self, decls: t.Sequence[str], expose_value: bool + ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]: + opts = [] + secondary_opts = [] + name = None + possible_names = [] + + for decl in decls: + if decl.isidentifier(): + if name is not None: + raise TypeError(f"Name '{name}' defined twice") + name = decl + else: + split_char = ";" if decl[:1] == "/" else "/" + if split_char in decl: + first, second = decl.split(split_char, 1) + first = first.rstrip() + if first: + possible_names.append(split_opt(first)) + opts.append(first) + second = second.lstrip() + if second: + secondary_opts.append(second.lstrip()) + if first == second: + raise ValueError( + f"Boolean option {decl!r} cannot use the" + " same flag for true/false." + ) + else: + possible_names.append(split_opt(decl)) + opts.append(decl) + + if name is None and possible_names: + possible_names.sort(key=lambda x: -len(x[0])) # group long options first + name = possible_names[0][1].replace("-", "_").lower() + if not name.isidentifier(): + name = None + + if name is None: + if not expose_value: + return None, opts, secondary_opts + raise TypeError("Could not determine name for option") + + if not opts and not secondary_opts: + raise TypeError( + f"No options defined but a name was passed ({name})." + " Did you mean to declare an argument instead? Did" + f" you mean to pass '--{name}'?" 
+ ) + + return name, opts, secondary_opts + + def add_to_parser(self, parser: OptionParser, ctx: Context) -> None: + if self.multiple: + action = "append" + elif self.count: + action = "count" + else: + action = "store" + + if self.is_flag: + action = f"{action}_const" + + if self.is_bool_flag and self.secondary_opts: + parser.add_option( + obj=self, opts=self.opts, dest=self.name, action=action, const=True + ) + parser.add_option( + obj=self, + opts=self.secondary_opts, + dest=self.name, + action=action, + const=False, + ) + else: + parser.add_option( + obj=self, + opts=self.opts, + dest=self.name, + action=action, + const=self.flag_value, + ) + else: + parser.add_option( + obj=self, + opts=self.opts, + dest=self.name, + action=action, + nargs=self.nargs, + ) + + def get_help_record(self, ctx: Context) -> t.Optional[t.Tuple[str, str]]: + if self.hidden: + return None + + any_prefix_is_slash = False + + def _write_opts(opts: t.Sequence[str]) -> str: + nonlocal any_prefix_is_slash + + rv, any_slashes = join_options(opts) + + if any_slashes: + any_prefix_is_slash = True + + if not self.is_flag and not self.count: + rv += f" {self.make_metavar()}" + + return rv + + rv = [_write_opts(self.opts)] + + if self.secondary_opts: + rv.append(_write_opts(self.secondary_opts)) + + help = self.help or "" + extra = [] + + if self.show_envvar: + envvar = self.envvar + + if envvar is None: + if ( + self.allow_from_autoenv + and ctx.auto_envvar_prefix is not None + and self.name is not None + ): + envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}" + + if envvar is not None: + var_str = ( + envvar + if isinstance(envvar, str) + else ", ".join(str(d) for d in envvar) + ) + extra.append(_("env var: {var}").format(var=var_str)) + + # Temporarily enable resilient parsing to avoid type casting + # failing for the default. Might be possible to extend this to + # help formatting in general. + resilient = ctx.resilient_parsing + ctx.resilient_parsing = True + + try: + default_value = self.get_default(ctx, call=False) + finally: + ctx.resilient_parsing = resilient + + show_default = False + show_default_is_str = False + + if self.show_default is not None: + if isinstance(self.show_default, str): + show_default_is_str = show_default = True + else: + show_default = self.show_default + elif ctx.show_default is not None: + show_default = ctx.show_default + + if show_default_is_str or (show_default and (default_value is not None)): + if show_default_is_str: + default_string = f"({self.show_default})" + elif isinstance(default_value, (list, tuple)): + default_string = ", ".join(str(d) for d in default_value) + elif inspect.isfunction(default_value): + default_string = _("(dynamic)") + elif self.is_bool_flag and self.secondary_opts: + # For boolean flags that have distinct True/False opts, + # use the opt without prefix instead of the value. 
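+                # Illustrative sketch: a flag declared as
+                # "--shout/--no-shout" with default=False renders its
+                # default in help as "no-shout" rather than "False".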
+ default_string = split_opt( + (self.opts if self.default else self.secondary_opts)[0] + )[1] + elif self.is_bool_flag and not self.secondary_opts and not default_value: + default_string = "" + else: + default_string = str(default_value) + + if default_string: + extra.append(_("default: {default}").format(default=default_string)) + + if ( + isinstance(self.type, types._NumberRangeBase) + # skip count with default range type + and not (self.count and self.type.min == 0 and self.type.max is None) + ): + range_str = self.type._describe_range() + + if range_str: + extra.append(range_str) + + if self.required: + extra.append(_("required")) + + if extra: + extra_str = "; ".join(extra) + help = f"{help} [{extra_str}]" if help else f"[{extra_str}]" + + return ("; " if any_prefix_is_slash else " / ").join(rv), help + + @t.overload + def get_default( + self, ctx: Context, call: "te.Literal[True]" = True + ) -> t.Optional[t.Any]: + ... + + @t.overload + def get_default( + self, ctx: Context, call: bool = ... + ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: + ... + + def get_default( + self, ctx: Context, call: bool = True + ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]: + # If we're a non boolean flag our default is more complex because + # we need to look at all flags in the same group to figure out + # if we're the default one in which case we return the flag + # value as default. + if self.is_flag and not self.is_bool_flag: + for param in ctx.command.params: + if param.name == self.name and param.default: + return param.flag_value # type: ignore + + return None + + return super().get_default(ctx, call=call) + + def prompt_for_value(self, ctx: Context) -> t.Any: + """This is an alternative flow that can be activated in the full + value processing if a value does not exist. It will prompt the + user until a valid value exists and then returns the processed + value as result. + """ + assert self.prompt is not None + + # Calculate the default before prompting anything to be stable. + default = self.get_default(ctx) + + # If this is a prompt for a flag we need to handle this + # differently. + if self.is_bool_flag: + return confirm(self.prompt, default) + + return prompt( + self.prompt, + default=default, + type=self.type, + hide_input=self.hide_input, + show_choices=self.show_choices, + confirmation_prompt=self.confirmation_prompt, + value_proc=lambda x: self.process_value(ctx, x), + ) + + def resolve_envvar_value(self, ctx: Context) -> t.Optional[str]: + rv = super().resolve_envvar_value(ctx) + + if rv is not None: + return rv + + if ( + self.allow_from_autoenv + and ctx.auto_envvar_prefix is not None + and self.name is not None + ): + envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}" + rv = os.environ.get(envvar) + + if rv: + return rv + + return None + + def value_from_envvar(self, ctx: Context) -> t.Optional[t.Any]: + rv: t.Optional[t.Any] = self.resolve_envvar_value(ctx) + + if rv is None: + return None + + value_depth = (self.nargs != 1) + bool(self.multiple) + + if value_depth > 0: + rv = self.type.split_envvar_value(rv) + + if self.multiple and self.nargs != 1: + rv = batch(rv, self.nargs) + + return rv + + def consume_value( + self, ctx: Context, opts: t.Mapping[str, "Parameter"] + ) -> t.Tuple[t.Any, ParameterSource]: + value, source = super().consume_value(ctx, opts) + + # The parser will emit a sentinel value if the option can be + # given as a flag without a value. This is different from None + # to distinguish from the flag not being given at all. 
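+        # Illustrative example (names hypothetical): for an option with
+        # prompt=True and prompt_required=False, "cli --name" (bare flag)
+        # produces the sentinel and triggers a prompt, while
+        # "cli --name Alice" simply stores "Alice".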
+ if value is _flag_needs_value: + if self.prompt is not None and not ctx.resilient_parsing: + value = self.prompt_for_value(ctx) + source = ParameterSource.PROMPT + else: + value = self.flag_value + source = ParameterSource.COMMANDLINE + + elif ( + self.multiple + and value is not None + and any(v is _flag_needs_value for v in value) + ): + value = [self.flag_value if v is _flag_needs_value else v for v in value] + source = ParameterSource.COMMANDLINE + + # The value wasn't set, or used the param's default, prompt if + # prompting is enabled. + elif ( + source in {None, ParameterSource.DEFAULT} + and self.prompt is not None + and (self.required or self.prompt_required) + and not ctx.resilient_parsing + ): + value = self.prompt_for_value(ctx) + source = ParameterSource.PROMPT + + return value, source + + +class Argument(Parameter): + """Arguments are positional parameters to a command. They generally + provide fewer features than options but can have infinite ``nargs`` + and are required by default. + + All parameters are passed onwards to the parameter constructor. + """ + + param_type_name = "argument" + + def __init__( + self, + param_decls: t.Sequence[str], + required: t.Optional[bool] = None, + **attrs: t.Any, + ) -> None: + if required is None: + if attrs.get("default") is not None: + required = False + else: + required = attrs.get("nargs", 1) > 0 + + if "multiple" in attrs: + raise TypeError("__init__() got an unexpected keyword argument 'multiple'.") + + super().__init__(param_decls, required=required, **attrs) + + if __debug__: + if self.default is not None and self.nargs == -1: + raise TypeError("'default' is not supported for nargs=-1.") + + @property + def human_readable_name(self) -> str: + if self.metavar is not None: + return self.metavar + return self.name.upper() # type: ignore + + def make_metavar(self) -> str: + if self.metavar is not None: + return self.metavar + var = self.type.get_metavar(self) + if not var: + var = self.name.upper() # type: ignore + if not self.required: + var = f"[{var}]" + if self.nargs != 1: + var += "..." + return var + + def _parse_decls( + self, decls: t.Sequence[str], expose_value: bool + ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]: + if not decls: + if not expose_value: + return None, [], [] + raise TypeError("Could not determine name for argument") + if len(decls) == 1: + name = arg = decls[0] + name = name.replace("-", "_").lower() + else: + raise TypeError( + "Arguments take exactly one parameter declaration, got" + f" {len(decls)}." 
+ ) + return name, [arg], [] + + def get_usage_pieces(self, ctx: Context) -> t.List[str]: + return [self.make_metavar()] + + def get_error_hint(self, ctx: Context) -> str: + return f"'{self.make_metavar()}'" + + def add_to_parser(self, parser: OptionParser, ctx: Context) -> None: + parser.add_argument(dest=self.name, nargs=self.nargs, obj=self) diff --git a/env/lib/python3.8/site-packages/click/decorators.py b/env/lib/python3.8/site-packages/click/decorators.py new file mode 100644 index 00000000..28618dc5 --- /dev/null +++ b/env/lib/python3.8/site-packages/click/decorators.py @@ -0,0 +1,497 @@ +import inspect +import types +import typing as t +from functools import update_wrapper +from gettext import gettext as _ + +from .core import Argument +from .core import Command +from .core import Context +from .core import Group +from .core import Option +from .core import Parameter +from .globals import get_current_context +from .utils import echo + +F = t.TypeVar("F", bound=t.Callable[..., t.Any]) +FC = t.TypeVar("FC", bound=t.Union[t.Callable[..., t.Any], Command]) + + +def pass_context(f: F) -> F: + """Marks a callback as wanting to receive the current context + object as first argument. + """ + + def new_func(*args, **kwargs): # type: ignore + return f(get_current_context(), *args, **kwargs) + + return update_wrapper(t.cast(F, new_func), f) + + +def pass_obj(f: F) -> F: + """Similar to :func:`pass_context`, but only pass the object on the + context onwards (:attr:`Context.obj`). This is useful if that object + represents the state of a nested system. + """ + + def new_func(*args, **kwargs): # type: ignore + return f(get_current_context().obj, *args, **kwargs) + + return update_wrapper(t.cast(F, new_func), f) + + +def make_pass_decorator( + object_type: t.Type, ensure: bool = False +) -> "t.Callable[[F], F]": + """Given an object type this creates a decorator that will work + similar to :func:`pass_obj` but instead of passing the object of the + current context, it will find the innermost context of type + :func:`object_type`. + + This generates a decorator that works roughly like this:: + + from functools import update_wrapper + + def decorator(f): + @pass_context + def new_func(ctx, *args, **kwargs): + obj = ctx.find_object(object_type) + return ctx.invoke(f, obj, *args, **kwargs) + return update_wrapper(new_func, f) + return decorator + + :param object_type: the type of the object to pass. + :param ensure: if set to `True`, a new object will be created and + remembered on the context if it's not there yet. + """ + + def decorator(f: F) -> F: + def new_func(*args, **kwargs): # type: ignore + ctx = get_current_context() + + if ensure: + obj = ctx.ensure_object(object_type) + else: + obj = ctx.find_object(object_type) + + if obj is None: + raise RuntimeError( + "Managed to invoke callback without a context" + f" object of type {object_type.__name__!r}" + " existing." + ) + + return ctx.invoke(f, obj, *args, **kwargs) + + return update_wrapper(t.cast(F, new_func), f) + + return decorator + + +def pass_meta_key( + key: str, *, doc_description: t.Optional[str] = None +) -> "t.Callable[[F], F]": + """Create a decorator that passes a key from + :attr:`click.Context.meta` as the first argument to the decorated + function. + + :param key: Key in ``Context.meta`` to pass. + :param doc_description: Description of the object being passed, + inserted into the decorator's docstring. Defaults to "the 'key' + key from Context.meta". + + .. 
versionadded:: 8.0 + """ + + def decorator(f: F) -> F: + def new_func(*args, **kwargs): # type: ignore + ctx = get_current_context() + obj = ctx.meta[key] + return ctx.invoke(f, obj, *args, **kwargs) + + return update_wrapper(t.cast(F, new_func), f) + + if doc_description is None: + doc_description = f"the {key!r} key from :attr:`click.Context.meta`" + + decorator.__doc__ = ( + f"Decorator that passes {doc_description} as the first argument" + " to the decorated function." + ) + return decorator + + +CmdType = t.TypeVar("CmdType", bound=Command) + + +@t.overload +def command( + __func: t.Callable[..., t.Any], +) -> Command: + ... + + +@t.overload +def command( + name: t.Optional[str] = None, + **attrs: t.Any, +) -> t.Callable[..., Command]: + ... + + +@t.overload +def command( + name: t.Optional[str] = None, + cls: t.Type[CmdType] = ..., + **attrs: t.Any, +) -> t.Callable[..., CmdType]: + ... + + +def command( + name: t.Union[str, t.Callable[..., t.Any], None] = None, + cls: t.Optional[t.Type[Command]] = None, + **attrs: t.Any, +) -> t.Union[Command, t.Callable[..., Command]]: + r"""Creates a new :class:`Command` and uses the decorated function as + callback. This will also automatically attach all decorated + :func:`option`\s and :func:`argument`\s as parameters to the command. + + The name of the command defaults to the name of the function with + underscores replaced by dashes. If you want to change that, you can + pass the intended name as the first argument. + + All keyword arguments are forwarded to the underlying command class. + For the ``params`` argument, any decorated params are appended to + the end of the list. + + Once decorated the function turns into a :class:`Command` instance + that can be invoked as a command line utility or be attached to a + command :class:`Group`. + + :param name: the name of the command. This defaults to the function + name with underscores replaced by dashes. + :param cls: the command class to instantiate. This defaults to + :class:`Command`. + + .. versionchanged:: 8.1 + This decorator can be applied without parentheses. + + .. versionchanged:: 8.1 + The ``params`` argument can be used. Decorated params are + appended to the end of the list. + """ + + func: t.Optional[t.Callable[..., t.Any]] = None + + if callable(name): + func = name + name = None + assert cls is None, "Use 'command(cls=cls)(callable)' to specify a class." + assert not attrs, "Use 'command(**kwargs)(callable)' to provide arguments." + + if cls is None: + cls = Command + + def decorator(f: t.Callable[..., t.Any]) -> Command: + if isinstance(f, Command): + raise TypeError("Attempted to convert a callback into a command twice.") + + attr_params = attrs.pop("params", None) + params = attr_params if attr_params is not None else [] + + try: + decorator_params = f.__click_params__ # type: ignore + except AttributeError: + pass + else: + del f.__click_params__ # type: ignore + params.extend(reversed(decorator_params)) + + if attrs.get("help") is None: + attrs["help"] = f.__doc__ + + cmd = cls( # type: ignore[misc] + name=name or f.__name__.lower().replace("_", "-"), # type: ignore[arg-type] + callback=f, + params=params, + **attrs, + ) + cmd.__doc__ = f.__doc__ + return cmd + + if func is not None: + return decorator(func) + + return decorator + + +@t.overload +def group( + __func: t.Callable[..., t.Any], +) -> Group: + ... + + +@t.overload +def group( + name: t.Optional[str] = None, + **attrs: t.Any, +) -> t.Callable[[F], Group]: + ... 
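+
+# A minimal usage sketch for the decorators in this module (all names
+# are illustrative, not part of this module):
+#
+#     @group()
+#     def cli():
+#         """Top-level entry point."""
+#
+#     @cli.command()
+#     @option("--count", default=1, help="Number of greetings.")
+#     @argument("name")
+#     def hello(count, name):
+#         for _ in range(count):
+#             echo(f"Hello, {name}!")
+#
+# Invoking "cli hello --count 2 Alice" would print the greeting twice.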
+ + +def group( + name: t.Union[str, t.Callable[..., t.Any], None] = None, **attrs: t.Any +) -> t.Union[Group, t.Callable[[F], Group]]: + """Creates a new :class:`Group` with a function as callback. This + works otherwise the same as :func:`command` just that the `cls` + parameter is set to :class:`Group`. + + .. versionchanged:: 8.1 + This decorator can be applied without parentheses. + """ + if attrs.get("cls") is None: + attrs["cls"] = Group + + if callable(name): + grp: t.Callable[[F], Group] = t.cast(Group, command(**attrs)) + return grp(name) + + return t.cast(Group, command(name, **attrs)) + + +def _param_memo(f: FC, param: Parameter) -> None: + if isinstance(f, Command): + f.params.append(param) + else: + if not hasattr(f, "__click_params__"): + f.__click_params__ = [] # type: ignore + + f.__click_params__.append(param) # type: ignore + + +def argument(*param_decls: str, **attrs: t.Any) -> t.Callable[[FC], FC]: + """Attaches an argument to the command. All positional arguments are + passed as parameter declarations to :class:`Argument`; all keyword + arguments are forwarded unchanged (except ``cls``). + This is equivalent to creating an :class:`Argument` instance manually + and attaching it to the :attr:`Command.params` list. + + :param cls: the argument class to instantiate. This defaults to + :class:`Argument`. + """ + + def decorator(f: FC) -> FC: + ArgumentClass = attrs.pop("cls", None) or Argument + _param_memo(f, ArgumentClass(param_decls, **attrs)) + return f + + return decorator + + +def option(*param_decls: str, **attrs: t.Any) -> t.Callable[[FC], FC]: + """Attaches an option to the command. All positional arguments are + passed as parameter declarations to :class:`Option`; all keyword + arguments are forwarded unchanged (except ``cls``). + This is equivalent to creating an :class:`Option` instance manually + and attaching it to the :attr:`Command.params` list. + + :param cls: the option class to instantiate. This defaults to + :class:`Option`. + """ + + def decorator(f: FC) -> FC: + # Issue 926, copy attrs, so pre-defined options can re-use the same cls= + option_attrs = attrs.copy() + OptionClass = option_attrs.pop("cls", None) or Option + _param_memo(f, OptionClass(param_decls, **option_attrs)) + return f + + return decorator + + +def confirmation_option(*param_decls: str, **kwargs: t.Any) -> t.Callable[[FC], FC]: + """Add a ``--yes`` option which shows a prompt before continuing if + not passed. If the prompt is declined, the program will exit. + + :param param_decls: One or more option names. Defaults to the single + value ``"--yes"``. + :param kwargs: Extra arguments are passed to :func:`option`. + """ + + def callback(ctx: Context, param: Parameter, value: bool) -> None: + if not value: + ctx.abort() + + if not param_decls: + param_decls = ("--yes",) + + kwargs.setdefault("is_flag", True) + kwargs.setdefault("callback", callback) + kwargs.setdefault("expose_value", False) + kwargs.setdefault("prompt", "Do you want to continue?") + kwargs.setdefault("help", "Confirm the action without prompting.") + return option(*param_decls, **kwargs) + + +def password_option(*param_decls: str, **kwargs: t.Any) -> t.Callable[[FC], FC]: + """Add a ``--password`` option which prompts for a password, hiding + input and asking to enter the value again for confirmation. + + :param param_decls: One or more option names. Defaults to the single + value ``"--password"``. + :param kwargs: Extra arguments are passed to :func:`option`. 
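+
+    A short, illustrative use (the command name is hypothetical)::
+
+        @command()
+        @password_option()
+        def login(password):
+            echo(f"Password has {len(password)} characters.")
+
+    Running ``login`` prompts twice with hidden input and passes the
+    confirmed value as ``password``.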
+ """ + if not param_decls: + param_decls = ("--password",) + + kwargs.setdefault("prompt", True) + kwargs.setdefault("confirmation_prompt", True) + kwargs.setdefault("hide_input", True) + return option(*param_decls, **kwargs) + + +def version_option( + version: t.Optional[str] = None, + *param_decls: str, + package_name: t.Optional[str] = None, + prog_name: t.Optional[str] = None, + message: t.Optional[str] = None, + **kwargs: t.Any, +) -> t.Callable[[FC], FC]: + """Add a ``--version`` option which immediately prints the version + number and exits the program. + + If ``version`` is not provided, Click will try to detect it using + :func:`importlib.metadata.version` to get the version for the + ``package_name``. On Python < 3.8, the ``importlib_metadata`` + backport must be installed. + + If ``package_name`` is not provided, Click will try to detect it by + inspecting the stack frames. This will be used to detect the + version, so it must match the name of the installed package. + + :param version: The version number to show. If not provided, Click + will try to detect it. + :param param_decls: One or more option names. Defaults to the single + value ``"--version"``. + :param package_name: The package name to detect the version from. If + not provided, Click will try to detect it. + :param prog_name: The name of the CLI to show in the message. If not + provided, it will be detected from the command. + :param message: The message to show. The values ``%(prog)s``, + ``%(package)s``, and ``%(version)s`` are available. Defaults to + ``"%(prog)s, version %(version)s"``. + :param kwargs: Extra arguments are passed to :func:`option`. + :raise RuntimeError: ``version`` could not be detected. + + .. versionchanged:: 8.0 + Add the ``package_name`` parameter, and the ``%(package)s`` + value for messages. + + .. versionchanged:: 8.0 + Use :mod:`importlib.metadata` instead of ``pkg_resources``. The + version is detected based on the package name, not the entry + point name. The Python package name must match the installed + package name, or be passed with ``package_name=``. + """ + if message is None: + message = _("%(prog)s, version %(version)s") + + if version is None and package_name is None: + frame = inspect.currentframe() + f_back = frame.f_back if frame is not None else None + f_globals = f_back.f_globals if f_back is not None else None + # break reference cycle + # https://docs.python.org/3/library/inspect.html#the-interpreter-stack + del frame + + if f_globals is not None: + package_name = f_globals.get("__name__") + + if package_name == "__main__": + package_name = f_globals.get("__package__") + + if package_name: + package_name = package_name.partition(".")[0] + + def callback(ctx: Context, param: Parameter, value: bool) -> None: + if not value or ctx.resilient_parsing: + return + + nonlocal prog_name + nonlocal version + + if prog_name is None: + prog_name = ctx.find_root().info_name + + if version is None and package_name is not None: + metadata: t.Optional[types.ModuleType] + + try: + from importlib import metadata # type: ignore + except ImportError: + # Python < 3.8 + import importlib_metadata as metadata # type: ignore + + try: + version = metadata.version(package_name) # type: ignore + except metadata.PackageNotFoundError: # type: ignore + raise RuntimeError( + f"{package_name!r} is not installed. Try passing" + " 'package_name' instead." + ) from None + + if version is None: + raise RuntimeError( + f"Could not determine the version for {package_name!r} automatically." 
+ ) + + echo( + t.cast(str, message) + % {"prog": prog_name, "package": package_name, "version": version}, + color=ctx.color, + ) + ctx.exit() + + if not param_decls: + param_decls = ("--version",) + + kwargs.setdefault("is_flag", True) + kwargs.setdefault("expose_value", False) + kwargs.setdefault("is_eager", True) + kwargs.setdefault("help", _("Show the version and exit.")) + kwargs["callback"] = callback + return option(*param_decls, **kwargs) + + +def help_option(*param_decls: str, **kwargs: t.Any) -> t.Callable[[FC], FC]: + """Add a ``--help`` option which immediately prints the help page + and exits the program. + + This is usually unnecessary, as the ``--help`` option is added to + each command automatically unless ``add_help_option=False`` is + passed. + + :param param_decls: One or more option names. Defaults to the single + value ``"--help"``. + :param kwargs: Extra arguments are passed to :func:`option`. + """ + + def callback(ctx: Context, param: Parameter, value: bool) -> None: + if not value or ctx.resilient_parsing: + return + + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + if not param_decls: + param_decls = ("--help",) + + kwargs.setdefault("is_flag", True) + kwargs.setdefault("expose_value", False) + kwargs.setdefault("is_eager", True) + kwargs.setdefault("help", _("Show this message and exit.")) + kwargs["callback"] = callback + return option(*param_decls, **kwargs) diff --git a/env/lib/python3.8/site-packages/click/exceptions.py b/env/lib/python3.8/site-packages/click/exceptions.py new file mode 100644 index 00000000..9e20b3eb --- /dev/null +++ b/env/lib/python3.8/site-packages/click/exceptions.py @@ -0,0 +1,287 @@ +import os +import typing as t +from gettext import gettext as _ +from gettext import ngettext + +from ._compat import get_text_stderr +from .utils import echo + +if t.TYPE_CHECKING: + from .core import Context + from .core import Parameter + + +def _join_param_hints( + param_hint: t.Optional[t.Union[t.Sequence[str], str]] +) -> t.Optional[str]: + if param_hint is not None and not isinstance(param_hint, str): + return " / ".join(repr(x) for x in param_hint) + + return param_hint + + +class ClickException(Exception): + """An exception that Click can handle and show to the user.""" + + #: The exit code for this exception. + exit_code = 1 + + def __init__(self, message: str) -> None: + super().__init__(message) + self.message = message + + def format_message(self) -> str: + return self.message + + def __str__(self) -> str: + return self.message + + def show(self, file: t.Optional[t.IO] = None) -> None: + if file is None: + file = get_text_stderr() + + echo(_("Error: {message}").format(message=self.format_message()), file=file) + + +class UsageError(ClickException): + """An internal exception that signals a usage error. This typically + aborts any further handling. + + :param message: the error message to display. + :param ctx: optionally the context that caused this error. Click will + fill in the context automatically in some situations. 
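+
+    For instance (an illustrative transcript), an unknown option raises
+    :exc:`NoSuchOption`, a subclass of this error, and the program
+    exits with status 2::
+
+        $ cli --no-such-flag
+        Usage: cli [OPTIONS]
+        Try 'cli --help' for help.
+
+        Error: No such option: --no-such-flag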
+ """ + + exit_code = 2 + + def __init__(self, message: str, ctx: t.Optional["Context"] = None) -> None: + super().__init__(message) + self.ctx = ctx + self.cmd = self.ctx.command if self.ctx else None + + def show(self, file: t.Optional[t.IO] = None) -> None: + if file is None: + file = get_text_stderr() + color = None + hint = "" + if ( + self.ctx is not None + and self.ctx.command.get_help_option(self.ctx) is not None + ): + hint = _("Try '{command} {option}' for help.").format( + command=self.ctx.command_path, option=self.ctx.help_option_names[0] + ) + hint = f"{hint}\n" + if self.ctx is not None: + color = self.ctx.color + echo(f"{self.ctx.get_usage()}\n{hint}", file=file, color=color) + echo( + _("Error: {message}").format(message=self.format_message()), + file=file, + color=color, + ) + + +class BadParameter(UsageError): + """An exception that formats out a standardized error message for a + bad parameter. This is useful when thrown from a callback or type as + Click will attach contextual information to it (for instance, which + parameter it is). + + .. versionadded:: 2.0 + + :param param: the parameter object that caused this error. This can + be left out, and Click will attach this info itself + if possible. + :param param_hint: a string that shows up as parameter name. This + can be used as alternative to `param` in cases + where custom validation should happen. If it is + a string it's used as such, if it's a list then + each item is quoted and separated. + """ + + def __init__( + self, + message: str, + ctx: t.Optional["Context"] = None, + param: t.Optional["Parameter"] = None, + param_hint: t.Optional[str] = None, + ) -> None: + super().__init__(message, ctx) + self.param = param + self.param_hint = param_hint + + def format_message(self) -> str: + if self.param_hint is not None: + param_hint = self.param_hint + elif self.param is not None: + param_hint = self.param.get_error_hint(self.ctx) # type: ignore + else: + return _("Invalid value: {message}").format(message=self.message) + + return _("Invalid value for {param_hint}: {message}").format( + param_hint=_join_param_hints(param_hint), message=self.message + ) + + +class MissingParameter(BadParameter): + """Raised if click required an option or argument but it was not + provided when invoking the script. + + .. versionadded:: 4.0 + + :param param_type: a string that indicates the type of the parameter. + The default is to inherit the parameter type from + the given `param`. Valid values are ``'parameter'``, + ``'option'`` or ``'argument'``. + """ + + def __init__( + self, + message: t.Optional[str] = None, + ctx: t.Optional["Context"] = None, + param: t.Optional["Parameter"] = None, + param_hint: t.Optional[str] = None, + param_type: t.Optional[str] = None, + ) -> None: + super().__init__(message or "", ctx, param, param_hint) + self.param_type = param_type + + def format_message(self) -> str: + if self.param_hint is not None: + param_hint: t.Optional[str] = self.param_hint + elif self.param is not None: + param_hint = self.param.get_error_hint(self.ctx) # type: ignore + else: + param_hint = None + + param_hint = _join_param_hints(param_hint) + param_hint = f" {param_hint}" if param_hint else "" + + param_type = self.param_type + if param_type is None and self.param is not None: + param_type = self.param.param_type_name + + msg = self.message + if self.param is not None: + msg_extra = self.param.type.get_missing_message(self.param) + if msg_extra: + if msg: + msg += f". 
{msg_extra}" + else: + msg = msg_extra + + msg = f" {msg}" if msg else "" + + # Translate param_type for known types. + if param_type == "argument": + missing = _("Missing argument") + elif param_type == "option": + missing = _("Missing option") + elif param_type == "parameter": + missing = _("Missing parameter") + else: + missing = _("Missing {param_type}").format(param_type=param_type) + + return f"{missing}{param_hint}.{msg}" + + def __str__(self) -> str: + if not self.message: + param_name = self.param.name if self.param else None + return _("Missing parameter: {param_name}").format(param_name=param_name) + else: + return self.message + + +class NoSuchOption(UsageError): + """Raised if click attempted to handle an option that does not + exist. + + .. versionadded:: 4.0 + """ + + def __init__( + self, + option_name: str, + message: t.Optional[str] = None, + possibilities: t.Optional[t.Sequence[str]] = None, + ctx: t.Optional["Context"] = None, + ) -> None: + if message is None: + message = _("No such option: {name}").format(name=option_name) + + super().__init__(message, ctx) + self.option_name = option_name + self.possibilities = possibilities + + def format_message(self) -> str: + if not self.possibilities: + return self.message + + possibility_str = ", ".join(sorted(self.possibilities)) + suggest = ngettext( + "Did you mean {possibility}?", + "(Possible options: {possibilities})", + len(self.possibilities), + ).format(possibility=possibility_str, possibilities=possibility_str) + return f"{self.message} {suggest}" + + +class BadOptionUsage(UsageError): + """Raised if an option is generally supplied but the use of the option + was incorrect. This is for instance raised if the number of arguments + for an option is not correct. + + .. versionadded:: 4.0 + + :param option_name: the name of the option being used incorrectly. + """ + + def __init__( + self, option_name: str, message: str, ctx: t.Optional["Context"] = None + ) -> None: + super().__init__(message, ctx) + self.option_name = option_name + + +class BadArgumentUsage(UsageError): + """Raised if an argument is generally supplied but the use of the argument + was incorrect. This is for instance raised if the number of values + for an argument is not correct. + + .. versionadded:: 6.0 + """ + + +class FileError(ClickException): + """Raised if a file cannot be opened.""" + + def __init__(self, filename: str, hint: t.Optional[str] = None) -> None: + if hint is None: + hint = _("unknown error") + + super().__init__(hint) + self.ui_filename = os.fsdecode(filename) + self.filename = filename + + def format_message(self) -> str: + return _("Could not open file {filename!r}: {message}").format( + filename=self.ui_filename, message=self.message + ) + + +class Abort(RuntimeError): + """An internal signalling exception that signals Click to abort.""" + + +class Exit(RuntimeError): + """An exception that indicates that the application should exit with some + status code. + + :param code: the status code to exit with. 
+ """ + + __slots__ = ("exit_code",) + + def __init__(self, code: int = 0) -> None: + self.exit_code = code diff --git a/env/lib/python3.8/site-packages/click/formatting.py b/env/lib/python3.8/site-packages/click/formatting.py new file mode 100644 index 00000000..ddd2a2f8 --- /dev/null +++ b/env/lib/python3.8/site-packages/click/formatting.py @@ -0,0 +1,301 @@ +import typing as t +from contextlib import contextmanager +from gettext import gettext as _ + +from ._compat import term_len +from .parser import split_opt + +# Can force a width. This is used by the test system +FORCED_WIDTH: t.Optional[int] = None + + +def measure_table(rows: t.Iterable[t.Tuple[str, str]]) -> t.Tuple[int, ...]: + widths: t.Dict[int, int] = {} + + for row in rows: + for idx, col in enumerate(row): + widths[idx] = max(widths.get(idx, 0), term_len(col)) + + return tuple(y for x, y in sorted(widths.items())) + + +def iter_rows( + rows: t.Iterable[t.Tuple[str, str]], col_count: int +) -> t.Iterator[t.Tuple[str, ...]]: + for row in rows: + yield row + ("",) * (col_count - len(row)) + + +def wrap_text( + text: str, + width: int = 78, + initial_indent: str = "", + subsequent_indent: str = "", + preserve_paragraphs: bool = False, +) -> str: + """A helper function that intelligently wraps text. By default, it + assumes that it operates on a single paragraph of text but if the + `preserve_paragraphs` parameter is provided it will intelligently + handle paragraphs (defined by two empty lines). + + If paragraphs are handled, a paragraph can be prefixed with an empty + line containing the ``\\b`` character (``\\x08``) to indicate that + no rewrapping should happen in that block. + + :param text: the text that should be rewrapped. + :param width: the maximum width for the text. + :param initial_indent: the initial indent that should be placed on the + first line as a string. + :param subsequent_indent: the indent string that should be placed on + each consecutive line. + :param preserve_paragraphs: if this flag is set then the wrapping will + intelligently handle paragraphs. + """ + from ._textwrap import TextWrapper + + text = text.expandtabs() + wrapper = TextWrapper( + width, + initial_indent=initial_indent, + subsequent_indent=subsequent_indent, + replace_whitespace=False, + ) + if not preserve_paragraphs: + return wrapper.fill(text) + + p: t.List[t.Tuple[int, bool, str]] = [] + buf: t.List[str] = [] + indent = None + + def _flush_par() -> None: + if not buf: + return + if buf[0].strip() == "\b": + p.append((indent or 0, True, "\n".join(buf[1:]))) + else: + p.append((indent or 0, False, " ".join(buf))) + del buf[:] + + for line in text.splitlines(): + if not line: + _flush_par() + indent = None + else: + if indent is None: + orig_len = term_len(line) + line = line.lstrip() + indent = orig_len - term_len(line) + buf.append(line) + _flush_par() + + rv = [] + for indent, raw, text in p: + with wrapper.extra_indent(" " * indent): + if raw: + rv.append(wrapper.indent_only(text)) + else: + rv.append(wrapper.fill(text)) + + return "\n\n".join(rv) + + +class HelpFormatter: + """This class helps with formatting text-based help pages. It's + usually just needed for very special internal cases, but it's also + exposed so that developers can write their own fancy outputs. + + At present, it always writes into memory. + + :param indent_increment: the additional increment for each level. + :param width: the width for the text. This defaults to the terminal + width clamped to a maximum of 78. 
+ """ + + def __init__( + self, + indent_increment: int = 2, + width: t.Optional[int] = None, + max_width: t.Optional[int] = None, + ) -> None: + import shutil + + self.indent_increment = indent_increment + if max_width is None: + max_width = 80 + if width is None: + width = FORCED_WIDTH + if width is None: + width = max(min(shutil.get_terminal_size().columns, max_width) - 2, 50) + self.width = width + self.current_indent = 0 + self.buffer: t.List[str] = [] + + def write(self, string: str) -> None: + """Writes a unicode string into the internal buffer.""" + self.buffer.append(string) + + def indent(self) -> None: + """Increases the indentation.""" + self.current_indent += self.indent_increment + + def dedent(self) -> None: + """Decreases the indentation.""" + self.current_indent -= self.indent_increment + + def write_usage( + self, prog: str, args: str = "", prefix: t.Optional[str] = None + ) -> None: + """Writes a usage line into the buffer. + + :param prog: the program name. + :param args: whitespace separated list of arguments. + :param prefix: The prefix for the first line. Defaults to + ``"Usage: "``. + """ + if prefix is None: + prefix = f"{_('Usage:')} " + + usage_prefix = f"{prefix:>{self.current_indent}}{prog} " + text_width = self.width - self.current_indent + + if text_width >= (term_len(usage_prefix) + 20): + # The arguments will fit to the right of the prefix. + indent = " " * term_len(usage_prefix) + self.write( + wrap_text( + args, + text_width, + initial_indent=usage_prefix, + subsequent_indent=indent, + ) + ) + else: + # The prefix is too long, put the arguments on the next line. + self.write(usage_prefix) + self.write("\n") + indent = " " * (max(self.current_indent, term_len(prefix)) + 4) + self.write( + wrap_text( + args, text_width, initial_indent=indent, subsequent_indent=indent + ) + ) + + self.write("\n") + + def write_heading(self, heading: str) -> None: + """Writes a heading into the buffer.""" + self.write(f"{'':>{self.current_indent}}{heading}:\n") + + def write_paragraph(self) -> None: + """Writes a paragraph into the buffer.""" + if self.buffer: + self.write("\n") + + def write_text(self, text: str) -> None: + """Writes re-indented text into the buffer. This rewraps and + preserves paragraphs. + """ + indent = " " * self.current_indent + self.write( + wrap_text( + text, + self.width, + initial_indent=indent, + subsequent_indent=indent, + preserve_paragraphs=True, + ) + ) + self.write("\n") + + def write_dl( + self, + rows: t.Sequence[t.Tuple[str, str]], + col_max: int = 30, + col_spacing: int = 2, + ) -> None: + """Writes a definition list into the buffer. This is how options + and commands are usually formatted. + + :param rows: a list of two item tuples for the terms and values. + :param col_max: the maximum width of the first column. + :param col_spacing: the number of spaces between the first and + second column. 
+ """ + rows = list(rows) + widths = measure_table(rows) + if len(widths) != 2: + raise TypeError("Expected two columns for definition list") + + first_col = min(widths[0], col_max) + col_spacing + + for first, second in iter_rows(rows, len(widths)): + self.write(f"{'':>{self.current_indent}}{first}") + if not second: + self.write("\n") + continue + if term_len(first) <= first_col - col_spacing: + self.write(" " * (first_col - term_len(first))) + else: + self.write("\n") + self.write(" " * (first_col + self.current_indent)) + + text_width = max(self.width - first_col - 2, 10) + wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True) + lines = wrapped_text.splitlines() + + if lines: + self.write(f"{lines[0]}\n") + + for line in lines[1:]: + self.write(f"{'':>{first_col + self.current_indent}}{line}\n") + else: + self.write("\n") + + @contextmanager + def section(self, name: str) -> t.Iterator[None]: + """Helpful context manager that writes a paragraph, a heading, + and the indents. + + :param name: the section name that is written as heading. + """ + self.write_paragraph() + self.write_heading(name) + self.indent() + try: + yield + finally: + self.dedent() + + @contextmanager + def indentation(self) -> t.Iterator[None]: + """A context manager that increases the indentation.""" + self.indent() + try: + yield + finally: + self.dedent() + + def getvalue(self) -> str: + """Returns the buffer contents.""" + return "".join(self.buffer) + + +def join_options(options: t.Sequence[str]) -> t.Tuple[str, bool]: + """Given a list of option strings this joins them in the most appropriate + way and returns them in the form ``(formatted_string, + any_prefix_is_slash)`` where the second item in the tuple is a flag that + indicates if any of the option prefixes was a slash. + """ + rv = [] + any_prefix_is_slash = False + + for opt in options: + prefix = split_opt(opt)[0] + + if prefix == "/": + any_prefix_is_slash = True + + rv.append((len(prefix), opt)) + + rv.sort(key=lambda x: x[0]) + return ", ".join(x[1] for x in rv), any_prefix_is_slash diff --git a/env/lib/python3.8/site-packages/click/globals.py b/env/lib/python3.8/site-packages/click/globals.py new file mode 100644 index 00000000..480058f1 --- /dev/null +++ b/env/lib/python3.8/site-packages/click/globals.py @@ -0,0 +1,68 @@ +import typing as t +from threading import local + +if t.TYPE_CHECKING: + import typing_extensions as te + from .core import Context + +_local = local() + + +@t.overload +def get_current_context(silent: "te.Literal[False]" = False) -> "Context": + ... + + +@t.overload +def get_current_context(silent: bool = ...) -> t.Optional["Context"]: + ... + + +def get_current_context(silent: bool = False) -> t.Optional["Context"]: + """Returns the current click context. This can be used as a way to + access the current context object from anywhere. This is a more implicit + alternative to the :func:`pass_context` decorator. This function is + primarily useful for helpers such as :func:`echo` which might be + interested in changing its behavior based on the current context. + + To push the current context, :meth:`Context.scope` can be used. + + .. versionadded:: 5.0 + + :param silent: if set to `True` the return value is `None` if no context + is available. The default behavior is to raise a + :exc:`RuntimeError`. 
+ """ + try: + return t.cast("Context", _local.stack[-1]) + except (AttributeError, IndexError) as e: + if not silent: + raise RuntimeError("There is no active click context.") from e + + return None + + +def push_context(ctx: "Context") -> None: + """Pushes a new context to the current stack.""" + _local.__dict__.setdefault("stack", []).append(ctx) + + +def pop_context() -> None: + """Removes the top level from the stack.""" + _local.stack.pop() + + +def resolve_color_default(color: t.Optional[bool] = None) -> t.Optional[bool]: + """Internal helper to get the default value of the color flag. If a + value is passed it's returned unchanged, otherwise it's looked up from + the current context. + """ + if color is not None: + return color + + ctx = get_current_context(silent=True) + + if ctx is not None: + return ctx.color + + return None diff --git a/env/lib/python3.8/site-packages/click/parser.py b/env/lib/python3.8/site-packages/click/parser.py new file mode 100644 index 00000000..2d5a2ed7 --- /dev/null +++ b/env/lib/python3.8/site-packages/click/parser.py @@ -0,0 +1,529 @@ +""" +This module started out as largely a copy paste from the stdlib's +optparse module with the features removed that we do not need from +optparse because we implement them in Click on a higher level (for +instance type handling, help formatting and a lot more). + +The plan is to remove more and more from here over time. + +The reason this is a different module and not optparse from the stdlib +is that there are differences in 2.x and 3.x about the error messages +generated and optparse in the stdlib uses gettext for no good reason +and might cause us issues. + +Click uses parts of optparse written by Gregory P. Ward and maintained +by the Python Software Foundation. This is limited to code in parser.py. + +Copyright 2001-2006 Gregory P. Ward. All rights reserved. +Copyright 2002-2006 Python Software Foundation. All rights reserved. +""" +# This code uses parts of optparse written by Gregory P. Ward and +# maintained by the Python Software Foundation. +# Copyright 2001-2006 Gregory P. Ward +# Copyright 2002-2006 Python Software Foundation +import typing as t +from collections import deque +from gettext import gettext as _ +from gettext import ngettext + +from .exceptions import BadArgumentUsage +from .exceptions import BadOptionUsage +from .exceptions import NoSuchOption +from .exceptions import UsageError + +if t.TYPE_CHECKING: + import typing_extensions as te + from .core import Argument as CoreArgument + from .core import Context + from .core import Option as CoreOption + from .core import Parameter as CoreParameter + +V = t.TypeVar("V") + +# Sentinel value that indicates an option was passed as a flag without a +# value but is not a flag option. Option.consume_value uses this to +# prompt or use the flag_value. +_flag_needs_value = object() + + +def _unpack_args( + args: t.Sequence[str], nargs_spec: t.Sequence[int] +) -> t.Tuple[t.Sequence[t.Union[str, t.Sequence[t.Optional[str]], None]], t.List[str]]: + """Given an iterable of arguments and an iterable of nargs specifications, + it returns a tuple with all the unpacked arguments at the first index + and all remaining arguments as the second. + + The nargs specification is the number of arguments that should be consumed + or `-1` to indicate that this position should eat up all the remainders. + + Missing items are filled with `None`. 
+ """ + args = deque(args) + nargs_spec = deque(nargs_spec) + rv: t.List[t.Union[str, t.Tuple[t.Optional[str], ...], None]] = [] + spos: t.Optional[int] = None + + def _fetch(c: "te.Deque[V]") -> t.Optional[V]: + try: + if spos is None: + return c.popleft() + else: + return c.pop() + except IndexError: + return None + + while nargs_spec: + nargs = _fetch(nargs_spec) + + if nargs is None: + continue + + if nargs == 1: + rv.append(_fetch(args)) + elif nargs > 1: + x = [_fetch(args) for _ in range(nargs)] + + # If we're reversed, we're pulling in the arguments in reverse, + # so we need to turn them around. + if spos is not None: + x.reverse() + + rv.append(tuple(x)) + elif nargs < 0: + if spos is not None: + raise TypeError("Cannot have two nargs < 0") + + spos = len(rv) + rv.append(None) + + # spos is the position of the wildcard (star). If it's not `None`, + # we fill it with the remainder. + if spos is not None: + rv[spos] = tuple(args) + args = [] + rv[spos + 1 :] = reversed(rv[spos + 1 :]) + + return tuple(rv), list(args) + + +def split_opt(opt: str) -> t.Tuple[str, str]: + first = opt[:1] + if first.isalnum(): + return "", opt + if opt[1:2] == first: + return opt[:2], opt[2:] + return first, opt[1:] + + +def normalize_opt(opt: str, ctx: t.Optional["Context"]) -> str: + if ctx is None or ctx.token_normalize_func is None: + return opt + prefix, opt = split_opt(opt) + return f"{prefix}{ctx.token_normalize_func(opt)}" + + +def split_arg_string(string: str) -> t.List[str]: + """Split an argument string as with :func:`shlex.split`, but don't + fail if the string is incomplete. Ignores a missing closing quote or + incomplete escape sequence and uses the partial token as-is. + + .. code-block:: python + + split_arg_string("example 'my file") + ["example", "my file"] + + split_arg_string("example my\\") + ["example", "my"] + + :param string: String to split. + """ + import shlex + + lex = shlex.shlex(string, posix=True) + lex.whitespace_split = True + lex.commenters = "" + out = [] + + try: + for token in lex: + out.append(token) + except ValueError: + # Raised when end-of-string is reached in an invalid state. Use + # the partial token as-is. The quote or escape character is in + # lex.state, not lex.token. 
+ out.append(lex.token) + + return out + + +class Option: + def __init__( + self, + obj: "CoreOption", + opts: t.Sequence[str], + dest: t.Optional[str], + action: t.Optional[str] = None, + nargs: int = 1, + const: t.Optional[t.Any] = None, + ): + self._short_opts = [] + self._long_opts = [] + self.prefixes = set() + + for opt in opts: + prefix, value = split_opt(opt) + if not prefix: + raise ValueError(f"Invalid start character for option ({opt})") + self.prefixes.add(prefix[0]) + if len(prefix) == 1 and len(value) == 1: + self._short_opts.append(opt) + else: + self._long_opts.append(opt) + self.prefixes.add(prefix) + + if action is None: + action = "store" + + self.dest = dest + self.action = action + self.nargs = nargs + self.const = const + self.obj = obj + + @property + def takes_value(self) -> bool: + return self.action in ("store", "append") + + def process(self, value: str, state: "ParsingState") -> None: + if self.action == "store": + state.opts[self.dest] = value # type: ignore + elif self.action == "store_const": + state.opts[self.dest] = self.const # type: ignore + elif self.action == "append": + state.opts.setdefault(self.dest, []).append(value) # type: ignore + elif self.action == "append_const": + state.opts.setdefault(self.dest, []).append(self.const) # type: ignore + elif self.action == "count": + state.opts[self.dest] = state.opts.get(self.dest, 0) + 1 # type: ignore + else: + raise ValueError(f"unknown action '{self.action}'") + state.order.append(self.obj) + + +class Argument: + def __init__(self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1): + self.dest = dest + self.nargs = nargs + self.obj = obj + + def process( + self, + value: t.Union[t.Optional[str], t.Sequence[t.Optional[str]]], + state: "ParsingState", + ) -> None: + if self.nargs > 1: + assert value is not None + holes = sum(1 for x in value if x is None) + if holes == len(value): + value = None + elif holes != 0: + raise BadArgumentUsage( + _("Argument {name!r} takes {nargs} values.").format( + name=self.dest, nargs=self.nargs + ) + ) + + if self.nargs == -1 and self.obj.envvar is not None and value == (): + # Replace empty tuple with None so that a value from the + # environment may be tried. + value = None + + state.opts[self.dest] = value # type: ignore + state.order.append(self.obj) + + +class ParsingState: + def __init__(self, rargs: t.List[str]) -> None: + self.opts: t.Dict[str, t.Any] = {} + self.largs: t.List[str] = [] + self.rargs = rargs + self.order: t.List["CoreParameter"] = [] + + +class OptionParser: + """The option parser is an internal class that is ultimately used to + parse options and arguments. It's modelled after optparse and brings + a similar but vastly simplified API. It should generally not be used + directly as the high level Click classes wrap it for you. + + It's not nearly as extensible as optparse or argparse as it does not + implement features that are implemented on a higher level (such as + types or defaults). + + :param ctx: optionally the :class:`~click.Context` where this parser + should go with. + """ + + def __init__(self, ctx: t.Optional["Context"] = None) -> None: + #: The :class:`~click.Context` for this parser. This might be + #: `None` for some advanced use cases. + self.ctx = ctx + #: This controls how the parser deals with interspersed arguments. + #: If this is set to `False`, the parser will stop on the first + #: non-option. Click uses this to implement nested subcommands + #: safely. 
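+        #: Illustratively, with this enabled "prog ARG --opt" and
+        #: "prog --opt ARG" parse alike; with it disabled, anything
+        #: after the first positional is left unparsed for a subcommand.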
+ self.allow_interspersed_args = True + #: This tells the parser how to deal with unknown options. By + #: default it will error out (which is sensible), but there is a + #: second mode where it will ignore it and continue processing + #: after shifting all the unknown options into the resulting args. + self.ignore_unknown_options = False + + if ctx is not None: + self.allow_interspersed_args = ctx.allow_interspersed_args + self.ignore_unknown_options = ctx.ignore_unknown_options + + self._short_opt: t.Dict[str, Option] = {} + self._long_opt: t.Dict[str, Option] = {} + self._opt_prefixes = {"-", "--"} + self._args: t.List[Argument] = [] + + def add_option( + self, + obj: "CoreOption", + opts: t.Sequence[str], + dest: t.Optional[str], + action: t.Optional[str] = None, + nargs: int = 1, + const: t.Optional[t.Any] = None, + ) -> None: + """Adds a new option named `dest` to the parser. The destination + is not inferred (unlike with optparse) and needs to be explicitly + provided. Action can be any of ``store``, ``store_const``, + ``append``, ``append_const`` or ``count``. + + The `obj` can be used to identify the option in the order list + that is returned from the parser. + """ + opts = [normalize_opt(opt, self.ctx) for opt in opts] + option = Option(obj, opts, dest, action=action, nargs=nargs, const=const) + self._opt_prefixes.update(option.prefixes) + for opt in option._short_opts: + self._short_opt[opt] = option + for opt in option._long_opts: + self._long_opt[opt] = option + + def add_argument( + self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1 + ) -> None: + """Adds a positional argument named `dest` to the parser. + + The `obj` can be used to identify the option in the order list + that is returned from the parser. + """ + self._args.append(Argument(obj, dest=dest, nargs=nargs)) + + def parse_args( + self, args: t.List[str] + ) -> t.Tuple[t.Dict[str, t.Any], t.List[str], t.List["CoreParameter"]]: + """Parses positional arguments and returns ``(values, args, order)`` + for the parsed options and arguments as well as the leftover + arguments if there are any. The order is a list of objects as they + appear on the command line. If arguments appear multiple times they + will be memorized multiple times as well. + """ + state = ParsingState(args) + try: + self._process_args_for_options(state) + self._process_args_for_args(state) + except UsageError: + if self.ctx is None or not self.ctx.resilient_parsing: + raise + return state.opts, state.largs, state.order + + def _process_args_for_args(self, state: ParsingState) -> None: + pargs, args = _unpack_args( + state.largs + state.rargs, [x.nargs for x in self._args] + ) + + for idx, arg in enumerate(self._args): + arg.process(pargs[idx], state) + + state.largs = args + state.rargs = [] + + def _process_args_for_options(self, state: ParsingState) -> None: + while state.rargs: + arg = state.rargs.pop(0) + arglen = len(arg) + # Double dashes always handled explicitly regardless of what + # prefixes are valid. + if arg == "--": + return + elif arg[:1] in self._opt_prefixes and arglen > 1: + self._process_opts(arg, state) + elif self.allow_interspersed_args: + state.largs.append(arg) + else: + state.rargs.insert(0, arg) + return + + # Say this is the original argument list: + # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)] + # ^ + # (we are about to process arg(i)). 
+        #
+        # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of
+        # [arg0, ..., arg(i-1)] (any options and their arguments will have
+        # been removed from largs).
+        #
+        # The while loop will usually consume 1 or more arguments per pass.
+        # If it consumes 1 (e.g. arg is an option that takes no arguments),
+        # then after _process_arg() is done the situation is:
+        #
+        #   largs = subset of [arg0, ..., arg(i)]
+        #   rargs = [arg(i+1), ..., arg(N-1)]
+        #
+        # If allow_interspersed_args is false, largs will always be
+        # *empty* -- still a subset of [arg0, ..., arg(i-1)], but
+        # not a very interesting subset!
+
+    def _match_long_opt(
+        self, opt: str, explicit_value: t.Optional[str], state: ParsingState
+    ) -> None:
+        if opt not in self._long_opt:
+            from difflib import get_close_matches
+
+            possibilities = get_close_matches(opt, self._long_opt)
+            raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx)
+
+        option = self._long_opt[opt]
+        if option.takes_value:
+            # At this point it's safe to modify rargs by injecting the
+            # explicit value, because no exception is raised in this
+            # branch. This means that the inserted value will be fully
+            # consumed.
+            if explicit_value is not None:
+                state.rargs.insert(0, explicit_value)
+
+            value = self._get_value_from_state(opt, option, state)
+
+        elif explicit_value is not None:
+            raise BadOptionUsage(
+                opt, _("Option {name!r} does not take a value.").format(name=opt)
+            )
+
+        else:
+            value = None
+
+        option.process(value, state)
+
+    def _match_short_opt(self, arg: str, state: ParsingState) -> None:
+        stop = False
+        i = 1
+        prefix = arg[0]
+        unknown_options = []
+
+        for ch in arg[1:]:
+            opt = normalize_opt(f"{prefix}{ch}", self.ctx)
+            option = self._short_opt.get(opt)
+            i += 1
+
+            if not option:
+                if self.ignore_unknown_options:
+                    unknown_options.append(ch)
+                    continue
+                raise NoSuchOption(opt, ctx=self.ctx)
+            if option.takes_value:
+                # Any characters left in arg? Pretend they're the
+                # next arg, and stop consuming characters of arg.
+                if i < len(arg):
+                    state.rargs.insert(0, arg[i:])
+                    stop = True
+
+                value = self._get_value_from_state(opt, option, state)
+
+            else:
+                value = None
+
+            option.process(value, state)
+
+            if stop:
+                break
+
+        # If we got any unknown options, we recombine the remaining option
+        # characters, re-attach the prefix, and report the result to the
+        # state as a new leftover arg. This preserves basic option grouping
+        # while still ignoring unknown options.
+        if self.ignore_unknown_options and unknown_options:
+            state.largs.append(f"{prefix}{''.join(unknown_options)}")
+
+    def _get_value_from_state(
+        self, option_name: str, option: Option, state: ParsingState
+    ) -> t.Any:
+        nargs = option.nargs
+
+        if len(state.rargs) < nargs:
+            if option.obj._flag_needs_value:
+                # Option allows omitting the value.
+                value = _flag_needs_value
+            else:
+                raise BadOptionUsage(
+                    option_name,
+                    ngettext(
+                        "Option {name!r} requires an argument.",
+                        "Option {name!r} requires {nargs} arguments.",
+                        nargs,
+                    ).format(name=option_name, nargs=nargs),
+                )
+        elif nargs == 1:
+            next_rarg = state.rargs[0]
+
+            if (
+                option.obj._flag_needs_value
+                and isinstance(next_rarg, str)
+                and next_rarg[:1] in self._opt_prefixes
+                and len(next_rarg) > 1
+            ):
+                # The next arg looks like the start of an option, don't
+                # use it as the value if omitting the value is allowed.
+ value = _flag_needs_value + else: + value = state.rargs.pop(0) + else: + value = tuple(state.rargs[:nargs]) + del state.rargs[:nargs] + + return value + + def _process_opts(self, arg: str, state: ParsingState) -> None: + explicit_value = None + # Long option handling happens in two parts. The first part is + # supporting explicitly attached values. In any case, we will try + # to long match the option first. + if "=" in arg: + long_opt, explicit_value = arg.split("=", 1) + else: + long_opt = arg + norm_long_opt = normalize_opt(long_opt, self.ctx) + + # At this point we will match the (assumed) long option through + # the long option matching code. Note that this allows options + # like "-foo" to be matched as long options. + try: + self._match_long_opt(norm_long_opt, explicit_value, state) + except NoSuchOption: + # At this point the long option matching failed, and we need + # to try with short options. However there is a special rule + # which says, that if we have a two character options prefix + # (applies to "--foo" for instance), we do not dispatch to the + # short option code and will instead raise the no option + # error. + if arg[:2] not in self._opt_prefixes: + self._match_short_opt(arg, state) + return + + if not self.ignore_unknown_options: + raise + + state.largs.append(arg) diff --git a/env/lib/python3.8/site-packages/click/py.typed b/env/lib/python3.8/site-packages/click/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/click/shell_completion.py b/env/lib/python3.8/site-packages/click/shell_completion.py new file mode 100644 index 00000000..c17a8e64 --- /dev/null +++ b/env/lib/python3.8/site-packages/click/shell_completion.py @@ -0,0 +1,580 @@ +import os +import re +import typing as t +from gettext import gettext as _ + +from .core import Argument +from .core import BaseCommand +from .core import Context +from .core import MultiCommand +from .core import Option +from .core import Parameter +from .core import ParameterSource +from .parser import split_arg_string +from .utils import echo + + +def shell_complete( + cli: BaseCommand, + ctx_args: t.Dict[str, t.Any], + prog_name: str, + complete_var: str, + instruction: str, +) -> int: + """Perform shell completion for the given CLI program. + + :param cli: Command being called. + :param ctx_args: Extra arguments to pass to + ``cli.make_context``. + :param prog_name: Name of the executable in the shell. + :param complete_var: Name of the environment variable that holds + the completion instruction. + :param instruction: Value of ``complete_var`` with the completion + instruction and shell, in the form ``instruction_shell``. + :return: Status code to exit with. + """ + shell, _, instruction = instruction.partition("_") + comp_cls = get_completion_class(shell) + + if comp_cls is None: + return 1 + + comp = comp_cls(cli, ctx_args, prog_name, complete_var) + + if instruction == "source": + echo(comp.source()) + return 0 + + if instruction == "complete": + echo(comp.complete()) + return 0 + + return 1 + + +class CompletionItem: + """Represents a completion value and metadata about the value. The + default metadata is ``type`` to indicate special shell handling, + and ``help`` if a shell supports showing a help string next to the + value. + + Arbitrary parameters can be passed when creating the object, and + accessed using ``item.attr``. If an attribute wasn't passed, + accessing it returns ``None``. + + :param value: The completion suggestion. 
+ :param type: Tells the shell script to provide special completion + support for the type. Click uses ``"dir"`` and ``"file"``. + :param help: String shown next to the value if supported. + :param kwargs: Arbitrary metadata. The built-in implementations + don't use this, but custom type completions paired with custom + shell support could use it. + """ + + __slots__ = ("value", "type", "help", "_info") + + def __init__( + self, + value: t.Any, + type: str = "plain", + help: t.Optional[str] = None, + **kwargs: t.Any, + ) -> None: + self.value = value + self.type = type + self.help = help + self._info = kwargs + + def __getattr__(self, name: str) -> t.Any: + return self._info.get(name) + + +# Only Bash >= 4.4 has the nosort option. +_SOURCE_BASH = """\ +%(complete_func)s() { + local IFS=$'\\n' + local response + + response=$(env COMP_WORDS="${COMP_WORDS[*]}" COMP_CWORD=$COMP_CWORD \ +%(complete_var)s=bash_complete $1) + + for completion in $response; do + IFS=',' read type value <<< "$completion" + + if [[ $type == 'dir' ]]; then + COMPREPLY=() + compopt -o dirnames + elif [[ $type == 'file' ]]; then + COMPREPLY=() + compopt -o default + elif [[ $type == 'plain' ]]; then + COMPREPLY+=($value) + fi + done + + return 0 +} + +%(complete_func)s_setup() { + complete -o nosort -F %(complete_func)s %(prog_name)s +} + +%(complete_func)s_setup; +""" + +_SOURCE_ZSH = """\ +#compdef %(prog_name)s + +%(complete_func)s() { + local -a completions + local -a completions_with_descriptions + local -a response + (( ! $+commands[%(prog_name)s] )) && return 1 + + response=("${(@f)$(env COMP_WORDS="${words[*]}" COMP_CWORD=$((CURRENT-1)) \ +%(complete_var)s=zsh_complete %(prog_name)s)}") + + for type key descr in ${response}; do + if [[ "$type" == "plain" ]]; then + if [[ "$descr" == "_" ]]; then + completions+=("$key") + else + completions_with_descriptions+=("$key":"$descr") + fi + elif [[ "$type" == "dir" ]]; then + _path_files -/ + elif [[ "$type" == "file" ]]; then + _path_files -f + fi + done + + if [ -n "$completions_with_descriptions" ]; then + _describe -V unsorted completions_with_descriptions -U + fi + + if [ -n "$completions" ]; then + compadd -U -V unsorted -a completions + fi +} + +compdef %(complete_func)s %(prog_name)s; +""" + +_SOURCE_FISH = """\ +function %(complete_func)s; + set -l response; + + for value in (env %(complete_var)s=fish_complete COMP_WORDS=(commandline -cp) \ +COMP_CWORD=(commandline -t) %(prog_name)s); + set response $response $value; + end; + + for completion in $response; + set -l metadata (string split "," $completion); + + if test $metadata[1] = "dir"; + __fish_complete_directories $metadata[2]; + else if test $metadata[1] = "file"; + __fish_complete_path $metadata[2]; + else if test $metadata[1] = "plain"; + echo $metadata[2]; + end; + end; +end; + +complete --no-files --command %(prog_name)s --arguments \ +"(%(complete_func)s)"; +""" + + +class ShellComplete: + """Base class for providing shell completion support. A subclass for + a given shell will override attributes and methods to implement the + completion instructions (``source`` and ``complete``). + + :param cli: Command being called. + :param prog_name: Name of the executable in the shell. + :param complete_var: Name of the environment variable that holds + the completion instruction. + + .. versionadded:: 8.0 + """ + + name: t.ClassVar[str] + """Name to register the shell as with :func:`add_completion_class`. + This is used in completion instructions (``{name}_source`` and + ``{name}_complete``). 
+    """
+
+    source_template: t.ClassVar[str]
+    """Completion script template formatted by :meth:`source`. This must
+    be provided by subclasses.
+    """
+
+    def __init__(
+        self,
+        cli: BaseCommand,
+        ctx_args: t.Dict[str, t.Any],
+        prog_name: str,
+        complete_var: str,
+    ) -> None:
+        self.cli = cli
+        self.ctx_args = ctx_args
+        self.prog_name = prog_name
+        self.complete_var = complete_var
+
+    @property
+    def func_name(self) -> str:
+        """The name of the shell function defined by the completion
+        script.
+        """
+        # ``flags`` must be passed by keyword here: the third positional
+        # argument of ``re.sub`` is ``count``, not ``flags``.
+        safe_name = re.sub(
+            r"\W*", "", self.prog_name.replace("-", "_"), flags=re.ASCII
+        )
+        return f"_{safe_name}_completion"
+
+    def source_vars(self) -> t.Dict[str, t.Any]:
+        """Vars for formatting :attr:`source_template`.
+
+        By default this provides ``complete_func``, ``complete_var``,
+        and ``prog_name``.
+        """
+        return {
+            "complete_func": self.func_name,
+            "complete_var": self.complete_var,
+            "prog_name": self.prog_name,
+        }
+
+    def source(self) -> str:
+        """Produce the shell script that defines the completion
+        function. By default this ``%``-style formats
+        :attr:`source_template` with the dict returned by
+        :meth:`source_vars`.
+        """
+        return self.source_template % self.source_vars()
+
+    def get_completion_args(self) -> t.Tuple[t.List[str], str]:
+        """Use the env vars defined by the shell script to return a
+        tuple of ``args, incomplete``. This must be implemented by
+        subclasses.
+        """
+        raise NotImplementedError
+
+    def get_completions(
+        self, args: t.List[str], incomplete: str
+    ) -> t.List[CompletionItem]:
+        """Determine the context and last complete command or parameter
+        from the complete args. Call that object's ``shell_complete``
+        method to get the completions for the incomplete value.
+
+        :param args: List of complete args before the incomplete value.
+        :param incomplete: Value being completed. May be empty.
+        """
+        ctx = _resolve_context(self.cli, self.ctx_args, self.prog_name, args)
+        obj, incomplete = _resolve_incomplete(ctx, args, incomplete)
+        return obj.shell_complete(ctx, incomplete)
+
+    def format_completion(self, item: CompletionItem) -> str:
+        """Format a completion item into the form recognized by the
+        shell script. This must be implemented by subclasses.
+
+        :param item: Completion item to format.
+        """
+        raise NotImplementedError
+
+    def complete(self) -> str:
+        """Produce the completion data to send back to the shell.
+
+        By default this calls :meth:`get_completion_args`, gets the
+        completions, then calls :meth:`format_completion` for each
+        completion.
+        """
+        args, incomplete = self.get_completion_args()
+        completions = self.get_completions(args, incomplete)
+        out = [self.format_completion(item) for item in completions]
+        return "\n".join(out)
+
+
+class BashComplete(ShellComplete):
+    """Shell completion for Bash."""
+
+    name = "bash"
+    source_template = _SOURCE_BASH
+
+    def _check_version(self) -> None:
+        import subprocess
+
+        output = subprocess.run(
+            ["bash", "-c", "echo ${BASH_VERSION}"], stdout=subprocess.PIPE
+        )
+        match = re.search(r"^(\d+)\.(\d+)\.\d+", output.stdout.decode())
+
+        if match is not None:
+            major, minor = match.groups()
+
+            if major < "4" or major == "4" and minor < "4":
+                raise RuntimeError(
+                    _(
+                        "Shell completion is not supported for Bash"
+                        " versions older than 4.4."
+ ) + ) + else: + raise RuntimeError( + _("Couldn't detect Bash version, shell completion is not supported.") + ) + + def source(self) -> str: + self._check_version() + return super().source() + + def get_completion_args(self) -> t.Tuple[t.List[str], str]: + cwords = split_arg_string(os.environ["COMP_WORDS"]) + cword = int(os.environ["COMP_CWORD"]) + args = cwords[1:cword] + + try: + incomplete = cwords[cword] + except IndexError: + incomplete = "" + + return args, incomplete + + def format_completion(self, item: CompletionItem) -> str: + return f"{item.type},{item.value}" + + +class ZshComplete(ShellComplete): + """Shell completion for Zsh.""" + + name = "zsh" + source_template = _SOURCE_ZSH + + def get_completion_args(self) -> t.Tuple[t.List[str], str]: + cwords = split_arg_string(os.environ["COMP_WORDS"]) + cword = int(os.environ["COMP_CWORD"]) + args = cwords[1:cword] + + try: + incomplete = cwords[cword] + except IndexError: + incomplete = "" + + return args, incomplete + + def format_completion(self, item: CompletionItem) -> str: + return f"{item.type}\n{item.value}\n{item.help if item.help else '_'}" + + +class FishComplete(ShellComplete): + """Shell completion for Fish.""" + + name = "fish" + source_template = _SOURCE_FISH + + def get_completion_args(self) -> t.Tuple[t.List[str], str]: + cwords = split_arg_string(os.environ["COMP_WORDS"]) + incomplete = os.environ["COMP_CWORD"] + args = cwords[1:] + + # Fish stores the partial word in both COMP_WORDS and + # COMP_CWORD, remove it from complete args. + if incomplete and args and args[-1] == incomplete: + args.pop() + + return args, incomplete + + def format_completion(self, item: CompletionItem) -> str: + if item.help: + return f"{item.type},{item.value}\t{item.help}" + + return f"{item.type},{item.value}" + + +_available_shells: t.Dict[str, t.Type[ShellComplete]] = { + "bash": BashComplete, + "fish": FishComplete, + "zsh": ZshComplete, +} + + +def add_completion_class( + cls: t.Type[ShellComplete], name: t.Optional[str] = None +) -> None: + """Register a :class:`ShellComplete` subclass under the given name. + The name will be provided by the completion instruction environment + variable during completion. + + :param cls: The completion class that will handle completion for the + shell. + :param name: Name to register the class under. Defaults to the + class's ``name`` attribute. + """ + if name is None: + name = cls.name + + _available_shells[name] = cls + + +def get_completion_class(shell: str) -> t.Optional[t.Type[ShellComplete]]: + """Look up a registered :class:`ShellComplete` subclass by the name + provided by the completion instruction environment variable. If the + name isn't registered, returns ``None``. + + :param shell: Name the class is registered under. + """ + return _available_shells.get(shell) + + +def _is_incomplete_argument(ctx: Context, param: Parameter) -> bool: + """Determine if the given parameter is an argument that can still + accept values. + + :param ctx: Invocation context for the command represented by the + parsed complete args. + :param param: Argument object being checked. 
+ """ + if not isinstance(param, Argument): + return False + + assert param.name is not None + value = ctx.params[param.name] + return ( + param.nargs == -1 + or ctx.get_parameter_source(param.name) is not ParameterSource.COMMANDLINE + or ( + param.nargs > 1 + and isinstance(value, (tuple, list)) + and len(value) < param.nargs + ) + ) + + +def _start_of_option(ctx: Context, value: str) -> bool: + """Check if the value looks like the start of an option.""" + if not value: + return False + + c = value[0] + return c in ctx._opt_prefixes + + +def _is_incomplete_option(ctx: Context, args: t.List[str], param: Parameter) -> bool: + """Determine if the given parameter is an option that needs a value. + + :param args: List of complete args before the incomplete value. + :param param: Option object being checked. + """ + if not isinstance(param, Option): + return False + + if param.is_flag or param.count: + return False + + last_option = None + + for index, arg in enumerate(reversed(args)): + if index + 1 > param.nargs: + break + + if _start_of_option(ctx, arg): + last_option = arg + + return last_option is not None and last_option in param.opts + + +def _resolve_context( + cli: BaseCommand, ctx_args: t.Dict[str, t.Any], prog_name: str, args: t.List[str] +) -> Context: + """Produce the context hierarchy starting with the command and + traversing the complete arguments. This only follows the commands, + it doesn't trigger input prompts or callbacks. + + :param cli: Command being called. + :param prog_name: Name of the executable in the shell. + :param args: List of complete args before the incomplete value. + """ + ctx_args["resilient_parsing"] = True + ctx = cli.make_context(prog_name, args.copy(), **ctx_args) + args = ctx.protected_args + ctx.args + + while args: + command = ctx.command + + if isinstance(command, MultiCommand): + if not command.chain: + name, cmd, args = command.resolve_command(ctx, args) + + if cmd is None: + return ctx + + ctx = cmd.make_context(name, args, parent=ctx, resilient_parsing=True) + args = ctx.protected_args + ctx.args + else: + while args: + name, cmd, args = command.resolve_command(ctx, args) + + if cmd is None: + return ctx + + sub_ctx = cmd.make_context( + name, + args, + parent=ctx, + allow_extra_args=True, + allow_interspersed_args=False, + resilient_parsing=True, + ) + args = sub_ctx.args + + ctx = sub_ctx + args = [*sub_ctx.protected_args, *sub_ctx.args] + else: + break + + return ctx + + +def _resolve_incomplete( + ctx: Context, args: t.List[str], incomplete: str +) -> t.Tuple[t.Union[BaseCommand, Parameter], str]: + """Find the Click object that will handle the completion of the + incomplete value. Return the object and the incomplete value. + + :param ctx: Invocation context for the command represented by + the parsed complete args. + :param args: List of complete args before the incomplete value. + :param incomplete: Value being completed. May be empty. + """ + # Different shells treat an "=" between a long option name and + # value differently. Might keep the value joined, return the "=" + # as a separate item, or return the split name and value. Always + # split and discard the "=" to make completion easier. + if incomplete == "=": + incomplete = "" + elif "=" in incomplete and _start_of_option(ctx, incomplete): + name, _, incomplete = incomplete.partition("=") + args.append(name) + + # The "--" marker tells Click to stop treating values as options + # even if they start with the option character. 
If it hasn't been + # given and the incomplete arg looks like an option, the current + # command will provide option name completions. + if "--" not in args and _start_of_option(ctx, incomplete): + return ctx.command, incomplete + + params = ctx.command.get_params(ctx) + + # If the last complete arg is an option name with an incomplete + # value, the option will provide value completions. + for param in params: + if _is_incomplete_option(ctx, args, param): + return param, incomplete + + # It's not an option name or value. The first argument without a + # parsed value will provide value completions. + for param in params: + if _is_incomplete_argument(ctx, param): + return param, incomplete + + # There were no unparsed arguments, the command may be a group that + # will provide command name completions. + return ctx.command, incomplete diff --git a/env/lib/python3.8/site-packages/click/termui.py b/env/lib/python3.8/site-packages/click/termui.py new file mode 100644 index 00000000..bfb2f5ae --- /dev/null +++ b/env/lib/python3.8/site-packages/click/termui.py @@ -0,0 +1,787 @@ +import inspect +import io +import itertools +import os +import sys +import typing as t +from gettext import gettext as _ + +from ._compat import isatty +from ._compat import strip_ansi +from ._compat import WIN +from .exceptions import Abort +from .exceptions import UsageError +from .globals import resolve_color_default +from .types import Choice +from .types import convert_type +from .types import ParamType +from .utils import echo +from .utils import LazyFile + +if t.TYPE_CHECKING: + from ._termui_impl import ProgressBar + +V = t.TypeVar("V") + +# The prompt functions to use. The doc tools currently override these +# functions to customize how they work. +visible_prompt_func: t.Callable[[str], str] = input + +_ansi_colors = { + "black": 30, + "red": 31, + "green": 32, + "yellow": 33, + "blue": 34, + "magenta": 35, + "cyan": 36, + "white": 37, + "reset": 39, + "bright_black": 90, + "bright_red": 91, + "bright_green": 92, + "bright_yellow": 93, + "bright_blue": 94, + "bright_magenta": 95, + "bright_cyan": 96, + "bright_white": 97, +} +_ansi_reset_all = "\033[0m" + + +def hidden_prompt_func(prompt: str) -> str: + import getpass + + return getpass.getpass(prompt) + + +def _build_prompt( + text: str, + suffix: str, + show_default: bool = False, + default: t.Optional[t.Any] = None, + show_choices: bool = True, + type: t.Optional[ParamType] = None, +) -> str: + prompt = text + if type is not None and show_choices and isinstance(type, Choice): + prompt += f" ({', '.join(map(str, type.choices))})" + if default is not None and show_default: + prompt = f"{prompt} [{_format_default(default)}]" + return f"{prompt}{suffix}" + + +def _format_default(default: t.Any) -> t.Any: + if isinstance(default, (io.IOBase, LazyFile)) and hasattr(default, "name"): + return default.name # type: ignore + + return default + + +def prompt( + text: str, + default: t.Optional[t.Any] = None, + hide_input: bool = False, + confirmation_prompt: t.Union[bool, str] = False, + type: t.Optional[t.Union[ParamType, t.Any]] = None, + value_proc: t.Optional[t.Callable[[str], t.Any]] = None, + prompt_suffix: str = ": ", + show_default: bool = True, + err: bool = False, + show_choices: bool = True, +) -> t.Any: + """Prompts a user for input. This is a convenience function that can + be used to prompt a user for input later. + + If the user aborts the input by sending an interrupt signal, this + function will catch it and raise a :exc:`Abort` exception. 
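+
+    A minimal sketch of a typical call (the prompt text, type, and
+    default shown here are illustrative)::
+
+        value = click.prompt("Please enter a number", type=int, default=42)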
+ + :param text: the text to show for the prompt. + :param default: the default value to use if no input happens. If this + is not given it will prompt until it's aborted. + :param hide_input: if this is set to true then the input value will + be hidden. + :param confirmation_prompt: Prompt a second time to confirm the + value. Can be set to a string instead of ``True`` to customize + the message. + :param type: the type to use to check the value against. + :param value_proc: if this parameter is provided it's a function that + is invoked instead of the type conversion to + convert a value. + :param prompt_suffix: a suffix that should be added to the prompt. + :param show_default: shows or hides the default value in the prompt. + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``, the same as with echo. + :param show_choices: Show or hide choices if the passed type is a Choice. + For example if type is a Choice of either day or week, + show_choices is true and text is "Group by" then the + prompt will be "Group by (day, week): ". + + .. versionadded:: 8.0 + ``confirmation_prompt`` can be a custom string. + + .. versionadded:: 7.0 + Added the ``show_choices`` parameter. + + .. versionadded:: 6.0 + Added unicode support for cmd.exe on Windows. + + .. versionadded:: 4.0 + Added the `err` parameter. + + """ + + def prompt_func(text: str) -> str: + f = hidden_prompt_func if hide_input else visible_prompt_func + try: + # Write the prompt separately so that we get nice + # coloring through colorama on Windows + echo(text.rstrip(" "), nl=False, err=err) + # Echo a space to stdout to work around an issue where + # readline causes backspace to clear the whole line. + return f(" ") + except (KeyboardInterrupt, EOFError): + # getpass doesn't print a newline if the user aborts input with ^C. + # Allegedly this behavior is inherited from getpass(3). + # A doc bug has been filed at https://bugs.python.org/issue24711 + if hide_input: + echo(None, err=err) + raise Abort() from None + + if value_proc is None: + value_proc = convert_type(type, default) + + prompt = _build_prompt( + text, prompt_suffix, show_default, default, show_choices, type + ) + + if confirmation_prompt: + if confirmation_prompt is True: + confirmation_prompt = _("Repeat for confirmation") + + confirmation_prompt = _build_prompt(confirmation_prompt, prompt_suffix) + + while True: + while True: + value = prompt_func(prompt) + if value: + break + elif default is not None: + value = default + break + try: + result = value_proc(value) + except UsageError as e: + if hide_input: + echo(_("Error: The value you entered was invalid."), err=err) + else: + echo(_("Error: {e.message}").format(e=e), err=err) # noqa: B306 + continue + if not confirmation_prompt: + return result + while True: + value2 = prompt_func(confirmation_prompt) + is_empty = not value and not value2 + if value2 or is_empty: + break + if value == value2: + return result + echo(_("Error: The two entered values do not match."), err=err) + + +def confirm( + text: str, + default: t.Optional[bool] = False, + abort: bool = False, + prompt_suffix: str = ": ", + show_default: bool = True, + err: bool = False, +) -> bool: + """Prompts for confirmation (yes/no question). + + If the user aborts the input by sending a interrupt signal this + function will catch it and raise a :exc:`Abort` exception. + + :param text: the question to ask. + :param default: The default value to use when no input is given. If + ``None``, repeat until input is given. 
+ :param abort: if this is set to `True` a negative answer aborts the + exception by raising :exc:`Abort`. + :param prompt_suffix: a suffix that should be added to the prompt. + :param show_default: shows or hides the default value in the prompt. + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``, the same as with echo. + + .. versionchanged:: 8.0 + Repeat until input is given if ``default`` is ``None``. + + .. versionadded:: 4.0 + Added the ``err`` parameter. + """ + prompt = _build_prompt( + text, + prompt_suffix, + show_default, + "y/n" if default is None else ("Y/n" if default else "y/N"), + ) + + while True: + try: + # Write the prompt separately so that we get nice + # coloring through colorama on Windows + echo(prompt.rstrip(" "), nl=False, err=err) + # Echo a space to stdout to work around an issue where + # readline causes backspace to clear the whole line. + value = visible_prompt_func(" ").lower().strip() + except (KeyboardInterrupt, EOFError): + raise Abort() from None + if value in ("y", "yes"): + rv = True + elif value in ("n", "no"): + rv = False + elif default is not None and value == "": + rv = default + else: + echo(_("Error: invalid input"), err=err) + continue + break + if abort and not rv: + raise Abort() + return rv + + +def echo_via_pager( + text_or_generator: t.Union[t.Iterable[str], t.Callable[[], t.Iterable[str]], str], + color: t.Optional[bool] = None, +) -> None: + """This function takes a text and shows it via an environment specific + pager on stdout. + + .. versionchanged:: 3.0 + Added the `color` flag. + + :param text_or_generator: the text to page, or alternatively, a + generator emitting the text to page. + :param color: controls if the pager supports ANSI colors or not. The + default is autodetection. + """ + color = resolve_color_default(color) + + if inspect.isgeneratorfunction(text_or_generator): + i = t.cast(t.Callable[[], t.Iterable[str]], text_or_generator)() + elif isinstance(text_or_generator, str): + i = [text_or_generator] + else: + i = iter(t.cast(t.Iterable[str], text_or_generator)) + + # convert every element of i to a text type if necessary + text_generator = (el if isinstance(el, str) else str(el) for el in i) + + from ._termui_impl import pager + + return pager(itertools.chain(text_generator, "\n"), color) + + +def progressbar( + iterable: t.Optional[t.Iterable[V]] = None, + length: t.Optional[int] = None, + label: t.Optional[str] = None, + show_eta: bool = True, + show_percent: t.Optional[bool] = None, + show_pos: bool = False, + item_show_func: t.Optional[t.Callable[[t.Optional[V]], t.Optional[str]]] = None, + fill_char: str = "#", + empty_char: str = "-", + bar_template: str = "%(label)s [%(bar)s] %(info)s", + info_sep: str = " ", + width: int = 36, + file: t.Optional[t.TextIO] = None, + color: t.Optional[bool] = None, + update_min_steps: int = 1, +) -> "ProgressBar[V]": + """This function creates an iterable context manager that can be used + to iterate over something while showing a progress bar. It will + either iterate over the `iterable` or `length` items (that are counted + up). While iteration happens, this function will print a rendered + progress bar to the given `file` (defaults to stdout) and will attempt + to calculate remaining time and more. By default, this progress bar + will not be rendered if the file is not a terminal. + + The context manager creates the progress bar. When the context + manager is entered the progress bar is already created. 
With every + iteration over the progress bar, the iterable passed to the bar is + advanced and the bar is updated. When the context manager exits, + a newline is printed and the progress bar is finalized on screen. + + Note: The progress bar is currently designed for use cases where the + total progress can be expected to take at least several seconds. + Because of this, the ProgressBar class object won't display + progress that is considered too fast, and progress where the time + between steps is less than a second. + + No printing must happen or the progress bar will be unintentionally + destroyed. + + Example usage:: + + with progressbar(items) as bar: + for item in bar: + do_something_with(item) + + Alternatively, if no iterable is specified, one can manually update the + progress bar through the `update()` method instead of directly + iterating over the progress bar. The update method accepts the number + of steps to increment the bar with:: + + with progressbar(length=chunks.total_bytes) as bar: + for chunk in chunks: + process_chunk(chunk) + bar.update(chunks.bytes) + + The ``update()`` method also takes an optional value specifying the + ``current_item`` at the new position. This is useful when used + together with ``item_show_func`` to customize the output for each + manual step:: + + with click.progressbar( + length=total_size, + label='Unzipping archive', + item_show_func=lambda a: a.filename + ) as bar: + for archive in zip_file: + archive.extract() + bar.update(archive.size, archive) + + :param iterable: an iterable to iterate over. If not provided the length + is required. + :param length: the number of items to iterate over. By default the + progressbar will attempt to ask the iterator about its + length, which might or might not work. If an iterable is + also provided this parameter can be used to override the + length. If an iterable is not provided the progress bar + will iterate over a range of that length. + :param label: the label to show next to the progress bar. + :param show_eta: enables or disables the estimated time display. This is + automatically disabled if the length cannot be + determined. + :param show_percent: enables or disables the percentage display. The + default is `True` if the iterable has a length or + `False` if not. + :param show_pos: enables or disables the absolute position display. The + default is `False`. + :param item_show_func: A function called with the current item which + can return a string to show next to the progress bar. If the + function returns ``None`` nothing is shown. The current item can + be ``None``, such as when entering and exiting the bar. + :param fill_char: the character to use to show the filled part of the + progress bar. + :param empty_char: the character to use to show the non-filled part of + the progress bar. + :param bar_template: the format string to use as template for the bar. + The parameters in it are ``label`` for the label, + ``bar`` for the progress bar and ``info`` for the + info section. + :param info_sep: the separator between multiple info items (eta etc.) + :param width: the width of the progress bar in characters, 0 means full + terminal width + :param file: The file to write to. If this is not a terminal then + only the label is printed. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. This is only needed if ANSI + codes are included anywhere in the progress bar output + which is not the case by default. 
+ :param update_min_steps: Render only when this many updates have + completed. This allows tuning for very fast iterators. + + .. versionchanged:: 8.0 + Output is shown even if execution time is less than 0.5 seconds. + + .. versionchanged:: 8.0 + ``item_show_func`` shows the current item, not the previous one. + + .. versionchanged:: 8.0 + Labels are echoed if the output is not a TTY. Reverts a change + in 7.0 that removed all output. + + .. versionadded:: 8.0 + Added the ``update_min_steps`` parameter. + + .. versionchanged:: 4.0 + Added the ``color`` parameter. Added the ``update`` method to + the object. + + .. versionadded:: 2.0 + """ + from ._termui_impl import ProgressBar + + color = resolve_color_default(color) + return ProgressBar( + iterable=iterable, + length=length, + show_eta=show_eta, + show_percent=show_percent, + show_pos=show_pos, + item_show_func=item_show_func, + fill_char=fill_char, + empty_char=empty_char, + bar_template=bar_template, + info_sep=info_sep, + file=file, + label=label, + width=width, + color=color, + update_min_steps=update_min_steps, + ) + + +def clear() -> None: + """Clears the terminal screen. This will have the effect of clearing + the whole visible space of the terminal and moving the cursor to the + top left. This does not do anything if not connected to a terminal. + + .. versionadded:: 2.0 + """ + if not isatty(sys.stdout): + return + if WIN: + os.system("cls") + else: + sys.stdout.write("\033[2J\033[1;1H") + + +def _interpret_color( + color: t.Union[int, t.Tuple[int, int, int], str], offset: int = 0 +) -> str: + if isinstance(color, int): + return f"{38 + offset};5;{color:d}" + + if isinstance(color, (tuple, list)): + r, g, b = color + return f"{38 + offset};2;{r:d};{g:d};{b:d}" + + return str(_ansi_colors[color] + offset) + + +def style( + text: t.Any, + fg: t.Optional[t.Union[int, t.Tuple[int, int, int], str]] = None, + bg: t.Optional[t.Union[int, t.Tuple[int, int, int], str]] = None, + bold: t.Optional[bool] = None, + dim: t.Optional[bool] = None, + underline: t.Optional[bool] = None, + overline: t.Optional[bool] = None, + italic: t.Optional[bool] = None, + blink: t.Optional[bool] = None, + reverse: t.Optional[bool] = None, + strikethrough: t.Optional[bool] = None, + reset: bool = True, +) -> str: + """Styles a text with ANSI styles and returns the new string. By + default the styling is self contained which means that at the end + of the string a reset code is issued. This can be prevented by + passing ``reset=False``. + + Examples:: + + click.echo(click.style('Hello World!', fg='green')) + click.echo(click.style('ATTENTION!', blink=True)) + click.echo(click.style('Some things', reverse=True, fg='cyan')) + click.echo(click.style('More colors', fg=(255, 12, 128), bg=117)) + + Supported color names: + + * ``black`` (might be a gray) + * ``red`` + * ``green`` + * ``yellow`` (might be an orange) + * ``blue`` + * ``magenta`` + * ``cyan`` + * ``white`` (might be light gray) + * ``bright_black`` + * ``bright_red`` + * ``bright_green`` + * ``bright_yellow`` + * ``bright_blue`` + * ``bright_magenta`` + * ``bright_cyan`` + * ``bright_white`` + * ``reset`` (reset the color code only) + + If the terminal supports it, color may also be specified as: + + - An integer in the interval [0, 255]. The terminal must support + 8-bit/256-color mode. + - An RGB tuple of three integers in [0, 255]. The terminal must + support 24-bit/true-color mode. + + See https://en.wikipedia.org/wiki/ANSI_color and + https://gist.github.com/XVilka/8346728 for more information. 
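+
+    A short sketch of the numeric forms described above (assuming a
+    terminal with 256-color / true-color support)::
+
+        click.echo(click.style("8-bit color", fg=196))
+        click.echo(click.style("24-bit color", fg=(255, 12, 128)))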
+ + :param text: the string to style with ansi codes. + :param fg: if provided this will become the foreground color. + :param bg: if provided this will become the background color. + :param bold: if provided this will enable or disable bold mode. + :param dim: if provided this will enable or disable dim mode. This is + badly supported. + :param underline: if provided this will enable or disable underline. + :param overline: if provided this will enable or disable overline. + :param italic: if provided this will enable or disable italic. + :param blink: if provided this will enable or disable blinking. + :param reverse: if provided this will enable or disable inverse + rendering (foreground becomes background and the + other way round). + :param strikethrough: if provided this will enable or disable + striking through text. + :param reset: by default a reset-all code is added at the end of the + string which means that styles do not carry over. This + can be disabled to compose styles. + + .. versionchanged:: 8.0 + A non-string ``message`` is converted to a string. + + .. versionchanged:: 8.0 + Added support for 256 and RGB color codes. + + .. versionchanged:: 8.0 + Added the ``strikethrough``, ``italic``, and ``overline`` + parameters. + + .. versionchanged:: 7.0 + Added support for bright colors. + + .. versionadded:: 2.0 + """ + if not isinstance(text, str): + text = str(text) + + bits = [] + + if fg: + try: + bits.append(f"\033[{_interpret_color(fg)}m") + except KeyError: + raise TypeError(f"Unknown color {fg!r}") from None + + if bg: + try: + bits.append(f"\033[{_interpret_color(bg, 10)}m") + except KeyError: + raise TypeError(f"Unknown color {bg!r}") from None + + if bold is not None: + bits.append(f"\033[{1 if bold else 22}m") + if dim is not None: + bits.append(f"\033[{2 if dim else 22}m") + if underline is not None: + bits.append(f"\033[{4 if underline else 24}m") + if overline is not None: + bits.append(f"\033[{53 if overline else 55}m") + if italic is not None: + bits.append(f"\033[{3 if italic else 23}m") + if blink is not None: + bits.append(f"\033[{5 if blink else 25}m") + if reverse is not None: + bits.append(f"\033[{7 if reverse else 27}m") + if strikethrough is not None: + bits.append(f"\033[{9 if strikethrough else 29}m") + bits.append(text) + if reset: + bits.append(_ansi_reset_all) + return "".join(bits) + + +def unstyle(text: str) -> str: + """Removes ANSI styling information from a string. Usually it's not + necessary to use this function as Click's echo function will + automatically remove styling if necessary. + + .. versionadded:: 2.0 + + :param text: the text to remove style information from. + """ + return strip_ansi(text) + + +def secho( + message: t.Optional[t.Any] = None, + file: t.Optional[t.IO[t.AnyStr]] = None, + nl: bool = True, + err: bool = False, + color: t.Optional[bool] = None, + **styles: t.Any, +) -> None: + """This function combines :func:`echo` and :func:`style` into one + call. As such the following two calls are the same:: + + click.secho('Hello World!', fg='green') + click.echo(click.style('Hello World!', fg='green')) + + All keyword arguments are forwarded to the underlying functions + depending on which one they go with. + + Non-string types will be converted to :class:`str`. However, + :class:`bytes` are passed directly to :meth:`echo` without applying + style. If you want to style bytes that represent text, call + :meth:`bytes.decode` first. + + .. versionchanged:: 8.0 + A non-string ``message`` is converted to a string. 
Bytes are + passed through without style applied. + + .. versionadded:: 2.0 + """ + if message is not None and not isinstance(message, (bytes, bytearray)): + message = style(message, **styles) + + return echo(message, file=file, nl=nl, err=err, color=color) + + +def edit( + text: t.Optional[t.AnyStr] = None, + editor: t.Optional[str] = None, + env: t.Optional[t.Mapping[str, str]] = None, + require_save: bool = True, + extension: str = ".txt", + filename: t.Optional[str] = None, +) -> t.Optional[t.AnyStr]: + r"""Edits the given text in the defined editor. If an editor is given + (should be the full path to the executable but the regular operating + system search path is used for finding the executable) it overrides + the detected editor. Optionally, some environment variables can be + used. If the editor is closed without changes, `None` is returned. In + case a file is edited directly the return value is always `None` and + `require_save` and `extension` are ignored. + + If the editor cannot be opened a :exc:`UsageError` is raised. + + Note for Windows: to simplify cross-platform usage, the newlines are + automatically converted from POSIX to Windows and vice versa. As such, + the message here will have ``\n`` as newline markers. + + :param text: the text to edit. + :param editor: optionally the editor to use. Defaults to automatic + detection. + :param env: environment variables to forward to the editor. + :param require_save: if this is true, then not saving in the editor + will make the return value become `None`. + :param extension: the extension to tell the editor about. This defaults + to `.txt` but changing this might change syntax + highlighting. + :param filename: if provided it will edit this file instead of the + provided text contents. It will not use a temporary + file as an indirection in that case. + """ + from ._termui_impl import Editor + + ed = Editor(editor=editor, env=env, require_save=require_save, extension=extension) + + if filename is None: + return ed.edit(text) + + ed.edit_file(filename) + return None + + +def launch(url: str, wait: bool = False, locate: bool = False) -> int: + """This function launches the given URL (or filename) in the default + viewer application for this file type. If this is an executable, it + might launch the executable in a new session. The return value is + the exit code of the launched application. Usually, ``0`` indicates + success. + + Examples:: + + click.launch('https://click.palletsprojects.com/') + click.launch('/my/downloaded/file', locate=True) + + .. versionadded:: 2.0 + + :param url: URL or filename of the thing to launch. + :param wait: Wait for the program to exit before returning. This + only works if the launched program blocks. In particular, + ``xdg-open`` on Linux does not block. + :param locate: if this is set to `True` then instead of launching the + application associated with the URL it will attempt to + launch a file manager with the file located. This + might have weird effects if the URL does not point to + the filesystem. + """ + from ._termui_impl import open_url + + return open_url(url, wait=wait, locate=locate) + + +# If this is provided, getchar() calls into this instead. This is used +# for unittesting purposes. +_getchar: t.Optional[t.Callable[[bool], str]] = None + + +def getchar(echo: bool = False) -> str: + """Fetches a single character from the terminal and returns it. This + will always return a unicode character and under certain rare + circumstances this might return more than one character. 
The + situations which more than one character is returned is when for + whatever reason multiple characters end up in the terminal buffer or + standard input was not actually a terminal. + + Note that this will always read from the terminal, even if something + is piped into the standard input. + + Note for Windows: in rare cases when typing non-ASCII characters, this + function might wait for a second character and then return both at once. + This is because certain Unicode characters look like special-key markers. + + .. versionadded:: 2.0 + + :param echo: if set to `True`, the character read will also show up on + the terminal. The default is to not show it. + """ + global _getchar + + if _getchar is None: + from ._termui_impl import getchar as f + + _getchar = f + + return _getchar(echo) + + +def raw_terminal() -> t.ContextManager[int]: + from ._termui_impl import raw_terminal as f + + return f() + + +def pause(info: t.Optional[str] = None, err: bool = False) -> None: + """This command stops execution and waits for the user to press any + key to continue. This is similar to the Windows batch "pause" + command. If the program is not run through a terminal, this command + will instead do nothing. + + .. versionadded:: 2.0 + + .. versionadded:: 4.0 + Added the `err` parameter. + + :param info: The message to print before pausing. Defaults to + ``"Press any key to continue..."``. + :param err: if set to message goes to ``stderr`` instead of + ``stdout``, the same as with echo. + """ + if not isatty(sys.stdin) or not isatty(sys.stdout): + return + + if info is None: + info = _("Press any key to continue...") + + try: + if info: + echo(info, nl=False, err=err) + try: + getchar() + except (KeyboardInterrupt, EOFError): + pass + finally: + if info: + echo(err=err) diff --git a/env/lib/python3.8/site-packages/click/testing.py b/env/lib/python3.8/site-packages/click/testing.py new file mode 100644 index 00000000..e395c2ed --- /dev/null +++ b/env/lib/python3.8/site-packages/click/testing.py @@ -0,0 +1,479 @@ +import contextlib +import io +import os +import shlex +import shutil +import sys +import tempfile +import typing as t +from types import TracebackType + +from . import formatting +from . import termui +from . 
import utils +from ._compat import _find_binary_reader + +if t.TYPE_CHECKING: + from .core import BaseCommand + + +class EchoingStdin: + def __init__(self, input: t.BinaryIO, output: t.BinaryIO) -> None: + self._input = input + self._output = output + self._paused = False + + def __getattr__(self, x: str) -> t.Any: + return getattr(self._input, x) + + def _echo(self, rv: bytes) -> bytes: + if not self._paused: + self._output.write(rv) + + return rv + + def read(self, n: int = -1) -> bytes: + return self._echo(self._input.read(n)) + + def read1(self, n: int = -1) -> bytes: + return self._echo(self._input.read1(n)) # type: ignore + + def readline(self, n: int = -1) -> bytes: + return self._echo(self._input.readline(n)) + + def readlines(self) -> t.List[bytes]: + return [self._echo(x) for x in self._input.readlines()] + + def __iter__(self) -> t.Iterator[bytes]: + return iter(self._echo(x) for x in self._input) + + def __repr__(self) -> str: + return repr(self._input) + + +@contextlib.contextmanager +def _pause_echo(stream: t.Optional[EchoingStdin]) -> t.Iterator[None]: + if stream is None: + yield + else: + stream._paused = True + yield + stream._paused = False + + +class _NamedTextIOWrapper(io.TextIOWrapper): + def __init__( + self, buffer: t.BinaryIO, name: str, mode: str, **kwargs: t.Any + ) -> None: + super().__init__(buffer, **kwargs) + self._name = name + self._mode = mode + + @property + def name(self) -> str: + return self._name + + @property + def mode(self) -> str: + return self._mode + + +def make_input_stream( + input: t.Optional[t.Union[str, bytes, t.IO]], charset: str +) -> t.BinaryIO: + # Is already an input stream. + if hasattr(input, "read"): + rv = _find_binary_reader(t.cast(t.IO, input)) + + if rv is not None: + return rv + + raise TypeError("Could not find binary reader for input stream.") + + if input is None: + input = b"" + elif isinstance(input, str): + input = input.encode(charset) + + return io.BytesIO(t.cast(bytes, input)) + + +class Result: + """Holds the captured result of an invoked CLI script.""" + + def __init__( + self, + runner: "CliRunner", + stdout_bytes: bytes, + stderr_bytes: t.Optional[bytes], + return_value: t.Any, + exit_code: int, + exception: t.Optional[BaseException], + exc_info: t.Optional[ + t.Tuple[t.Type[BaseException], BaseException, TracebackType] + ] = None, + ): + #: The runner that created the result + self.runner = runner + #: The standard output as bytes. + self.stdout_bytes = stdout_bytes + #: The standard error as bytes, or None if not available + self.stderr_bytes = stderr_bytes + #: The value returned from the invoked command. + #: + #: .. versionadded:: 8.0 + self.return_value = return_value + #: The exit code as integer. + self.exit_code = exit_code + #: The exception that happened if one did. 
+        self.exception = exception
+        #: The traceback
+        self.exc_info = exc_info
+
+    @property
+    def output(self) -> str:
+        """The (standard) output as unicode string."""
+        return self.stdout
+
+    @property
+    def stdout(self) -> str:
+        """The standard output as unicode string."""
+        return self.stdout_bytes.decode(self.runner.charset, "replace").replace(
+            "\r\n", "\n"
+        )
+
+    @property
+    def stderr(self) -> str:
+        """The standard error as unicode string."""
+        if self.stderr_bytes is None:
+            raise ValueError("stderr not separately captured")
+        return self.stderr_bytes.decode(self.runner.charset, "replace").replace(
+            "\r\n", "\n"
+        )
+
+    def __repr__(self) -> str:
+        exc_str = repr(self.exception) if self.exception else "okay"
+        return f"<{type(self).__name__} {exc_str}>"
+
+
+class CliRunner:
+    """The CLI runner provides functionality to invoke a Click command line
+    script for unittesting purposes in an isolated environment. This only
+    works in single-threaded systems without any concurrency as it changes the
+    global interpreter state.
+
+    :param charset: the character set for the input and output data.
+    :param env: a dictionary with environment variables for overriding.
+    :param echo_stdin: if this is set to `True`, then reading from stdin writes
+                       to stdout. This is useful for showing examples in
+                       some circumstances. Note that regular prompts
+                       will automatically echo the input.
+    :param mix_stderr: if this is set to `False`, then stdout and stderr are
+                       preserved as independent streams. This is useful for
+                       Unix-philosophy apps that have predictable stdout and
+                       noisy stderr, such that each may be measured
+                       independently.
+    """
+
+    def __init__(
+        self,
+        charset: str = "utf-8",
+        env: t.Optional[t.Mapping[str, t.Optional[str]]] = None,
+        echo_stdin: bool = False,
+        mix_stderr: bool = True,
+    ) -> None:
+        self.charset = charset
+        self.env = env or {}
+        self.echo_stdin = echo_stdin
+        self.mix_stderr = mix_stderr
+
+    def get_default_prog_name(self, cli: "BaseCommand") -> str:
+        """Given a command object it will return the default program name
+        for it. The default is the `name` attribute or ``"root"`` if not
+        set.
+        """
+        return cli.name or "root"
+
+    def make_env(
+        self, overrides: t.Optional[t.Mapping[str, t.Optional[str]]] = None
+    ) -> t.Mapping[str, t.Optional[str]]:
+        """Returns the environment overrides for invoking a script."""
+        rv = dict(self.env)
+        if overrides:
+            rv.update(overrides)
+        return rv
+
+    @contextlib.contextmanager
+    def isolation(
+        self,
+        input: t.Optional[t.Union[str, bytes, t.IO]] = None,
+        env: t.Optional[t.Mapping[str, t.Optional[str]]] = None,
+        color: bool = False,
+    ) -> t.Iterator[t.Tuple[io.BytesIO, t.Optional[io.BytesIO]]]:
+        """A context manager that sets up the isolation for invoking a
+        command line tool. This sets up stdin with the given input data
+        and `os.environ` with the overrides from the given dictionary.
+        This also rebinds some internals in Click to be mocked (like the
+        prompt functionality).
+
+        This is automatically done in the :meth:`invoke` method.
+
+        :param input: the input stream to put into sys.stdin.
+        :param env: the environment overrides as dictionary.
+        :param color: whether the output should contain color codes. The
+                      application can still override this explicitly.
+
+        .. versionchanged:: 8.0
+            ``stderr`` is opened with ``errors="backslashreplace"``
+            instead of the default ``"strict"``.
+
+        .. versionchanged:: 4.0
+            Added the ``color`` parameter.
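+
+        A brief sketch of direct use (normally :meth:`invoke` calls this
+        for you; the input value here is illustrative)::
+
+            runner = CliRunner()
+            with runner.isolation(input="hello\n") as (out, err):
+                value = click.prompt("Name")  # reads "hello" from stdin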
+ """ + bytes_input = make_input_stream(input, self.charset) + echo_input = None + + old_stdin = sys.stdin + old_stdout = sys.stdout + old_stderr = sys.stderr + old_forced_width = formatting.FORCED_WIDTH + formatting.FORCED_WIDTH = 80 + + env = self.make_env(env) + + bytes_output = io.BytesIO() + + if self.echo_stdin: + bytes_input = echo_input = t.cast( + t.BinaryIO, EchoingStdin(bytes_input, bytes_output) + ) + + sys.stdin = text_input = _NamedTextIOWrapper( + bytes_input, encoding=self.charset, name="", mode="r" + ) + + if self.echo_stdin: + # Force unbuffered reads, otherwise TextIOWrapper reads a + # large chunk which is echoed early. + text_input._CHUNK_SIZE = 1 # type: ignore + + sys.stdout = _NamedTextIOWrapper( + bytes_output, encoding=self.charset, name="", mode="w" + ) + + bytes_error = None + if self.mix_stderr: + sys.stderr = sys.stdout + else: + bytes_error = io.BytesIO() + sys.stderr = _NamedTextIOWrapper( + bytes_error, + encoding=self.charset, + name="", + mode="w", + errors="backslashreplace", + ) + + @_pause_echo(echo_input) # type: ignore + def visible_input(prompt: t.Optional[str] = None) -> str: + sys.stdout.write(prompt or "") + val = text_input.readline().rstrip("\r\n") + sys.stdout.write(f"{val}\n") + sys.stdout.flush() + return val + + @_pause_echo(echo_input) # type: ignore + def hidden_input(prompt: t.Optional[str] = None) -> str: + sys.stdout.write(f"{prompt or ''}\n") + sys.stdout.flush() + return text_input.readline().rstrip("\r\n") + + @_pause_echo(echo_input) # type: ignore + def _getchar(echo: bool) -> str: + char = sys.stdin.read(1) + + if echo: + sys.stdout.write(char) + + sys.stdout.flush() + return char + + default_color = color + + def should_strip_ansi( + stream: t.Optional[t.IO] = None, color: t.Optional[bool] = None + ) -> bool: + if color is None: + return not default_color + return not color + + old_visible_prompt_func = termui.visible_prompt_func + old_hidden_prompt_func = termui.hidden_prompt_func + old__getchar_func = termui._getchar + old_should_strip_ansi = utils.should_strip_ansi # type: ignore + termui.visible_prompt_func = visible_input + termui.hidden_prompt_func = hidden_input + termui._getchar = _getchar + utils.should_strip_ansi = should_strip_ansi # type: ignore + + old_env = {} + try: + for key, value in env.items(): + old_env[key] = os.environ.get(key) + if value is None: + try: + del os.environ[key] + except Exception: + pass + else: + os.environ[key] = value + yield (bytes_output, bytes_error) + finally: + for key, value in old_env.items(): + if value is None: + try: + del os.environ[key] + except Exception: + pass + else: + os.environ[key] = value + sys.stdout = old_stdout + sys.stderr = old_stderr + sys.stdin = old_stdin + termui.visible_prompt_func = old_visible_prompt_func + termui.hidden_prompt_func = old_hidden_prompt_func + termui._getchar = old__getchar_func + utils.should_strip_ansi = old_should_strip_ansi # type: ignore + formatting.FORCED_WIDTH = old_forced_width + + def invoke( + self, + cli: "BaseCommand", + args: t.Optional[t.Union[str, t.Sequence[str]]] = None, + input: t.Optional[t.Union[str, bytes, t.IO]] = None, + env: t.Optional[t.Mapping[str, t.Optional[str]]] = None, + catch_exceptions: bool = True, + color: bool = False, + **extra: t.Any, + ) -> Result: + """Invokes a command in an isolated environment. The arguments are + forwarded directly to the command line script, the `extra` keyword + arguments are passed to the :meth:`~clickpkg.Command.main` function of + the command. 
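+
+        A minimal sketch (``hello`` stands in for any Click command
+        under test)::
+
+            runner = CliRunner()
+            result = runner.invoke(hello, ["--name", "World"])
+            assert result.exit_code == 0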
+ + This returns a :class:`Result` object. + + :param cli: the command to invoke + :param args: the arguments to invoke. It may be given as an iterable + or a string. When given as string it will be interpreted + as a Unix shell command. More details at + :func:`shlex.split`. + :param input: the input data for `sys.stdin`. + :param env: the environment overrides. + :param catch_exceptions: Whether to catch any other exceptions than + ``SystemExit``. + :param extra: the keyword arguments to pass to :meth:`main`. + :param color: whether the output should contain color codes. The + application can still override this explicitly. + + .. versionchanged:: 8.0 + The result object has the ``return_value`` attribute with + the value returned from the invoked command. + + .. versionchanged:: 4.0 + Added the ``color`` parameter. + + .. versionchanged:: 3.0 + Added the ``catch_exceptions`` parameter. + + .. versionchanged:: 3.0 + The result object has the ``exc_info`` attribute with the + traceback if available. + """ + exc_info = None + with self.isolation(input=input, env=env, color=color) as outstreams: + return_value = None + exception: t.Optional[BaseException] = None + exit_code = 0 + + if isinstance(args, str): + args = shlex.split(args) + + try: + prog_name = extra.pop("prog_name") + except KeyError: + prog_name = self.get_default_prog_name(cli) + + try: + return_value = cli.main(args=args or (), prog_name=prog_name, **extra) + except SystemExit as e: + exc_info = sys.exc_info() + e_code = t.cast(t.Optional[t.Union[int, t.Any]], e.code) + + if e_code is None: + e_code = 0 + + if e_code != 0: + exception = e + + if not isinstance(e_code, int): + sys.stdout.write(str(e_code)) + sys.stdout.write("\n") + e_code = 1 + + exit_code = e_code + + except Exception as e: + if not catch_exceptions: + raise + exception = e + exit_code = 1 + exc_info = sys.exc_info() + finally: + sys.stdout.flush() + stdout = outstreams[0].getvalue() + if self.mix_stderr: + stderr = None + else: + stderr = outstreams[1].getvalue() # type: ignore + + return Result( + runner=self, + stdout_bytes=stdout, + stderr_bytes=stderr, + return_value=return_value, + exit_code=exit_code, + exception=exception, + exc_info=exc_info, # type: ignore + ) + + @contextlib.contextmanager + def isolated_filesystem( + self, temp_dir: t.Optional[t.Union[str, os.PathLike]] = None + ) -> t.Iterator[str]: + """A context manager that creates a temporary directory and + changes the current working directory to it. This isolates tests + that affect the contents of the CWD to prevent them from + interfering with each other. + + :param temp_dir: Create the temporary directory under this + directory. If given, the created directory is not removed + when exiting. + + .. versionchanged:: 8.0 + Added the ``temp_dir`` parameter. 
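+
+        A sketch of the typical pattern (any test code that touches the
+        current working directory belongs inside the ``with`` block)::
+
+            runner = CliRunner()
+
+            with runner.isolated_filesystem():
+                with open("hello.txt", "w") as f:
+                    f.write("Hello World!")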
+ """ + cwd = os.getcwd() + dt = tempfile.mkdtemp(dir=temp_dir) # type: ignore[type-var] + os.chdir(dt) + + try: + yield t.cast(str, dt) + finally: + os.chdir(cwd) + + if temp_dir is None: + try: + shutil.rmtree(dt) + except OSError: # noqa: B014 + pass diff --git a/env/lib/python3.8/site-packages/click/types.py b/env/lib/python3.8/site-packages/click/types.py new file mode 100644 index 00000000..b45ee53d --- /dev/null +++ b/env/lib/python3.8/site-packages/click/types.py @@ -0,0 +1,1073 @@ +import os +import stat +import typing as t +from datetime import datetime +from gettext import gettext as _ +from gettext import ngettext + +from ._compat import _get_argv_encoding +from ._compat import get_filesystem_encoding +from ._compat import open_stream +from .exceptions import BadParameter +from .utils import LazyFile +from .utils import safecall + +if t.TYPE_CHECKING: + import typing_extensions as te + from .core import Context + from .core import Parameter + from .shell_completion import CompletionItem + + +class ParamType: + """Represents the type of a parameter. Validates and converts values + from the command line or Python into the correct type. + + To implement a custom type, subclass and implement at least the + following: + + - The :attr:`name` class attribute must be set. + - Calling an instance of the type with ``None`` must return + ``None``. This is already implemented by default. + - :meth:`convert` must convert string values to the correct type. + - :meth:`convert` must accept values that are already the correct + type. + - It must be able to convert a value if the ``ctx`` and ``param`` + arguments are ``None``. This can occur when converting prompt + input. + """ + + is_composite: t.ClassVar[bool] = False + arity: t.ClassVar[int] = 1 + + #: the descriptive name of this type + name: str + + #: if a list of this type is expected and the value is pulled from a + #: string environment variable, this is what splits it up. `None` + #: means any whitespace. For all parameters the general rule is that + #: whitespace splits them up. The exception are paths and files which + #: are split by ``os.path.pathsep`` by default (":" on Unix and ";" on + #: Windows). + envvar_list_splitter: t.ClassVar[t.Optional[str]] = None + + def to_info_dict(self) -> t.Dict[str, t.Any]: + """Gather information that could be useful for a tool generating + user-facing documentation. + + Use :meth:`click.Context.to_info_dict` to traverse the entire + CLI structure. + + .. versionadded:: 8.0 + """ + # The class name without the "ParamType" suffix. + param_type = type(self).__name__.partition("ParamType")[0] + param_type = param_type.partition("ParameterType")[0] + + # Custom subclasses might not remember to set a name. + if hasattr(self, "name"): + name = self.name + else: + name = param_type + + return {"param_type": param_type, "name": name} + + def __call__( + self, + value: t.Any, + param: t.Optional["Parameter"] = None, + ctx: t.Optional["Context"] = None, + ) -> t.Any: + if value is not None: + return self.convert(value, param, ctx) + + def get_metavar(self, param: "Parameter") -> t.Optional[str]: + """Returns the metavar default for this param if it provides one.""" + + def get_missing_message(self, param: "Parameter") -> t.Optional[str]: + """Optionally might return extra information about a missing + parameter. + + .. versionadded:: 2.0 + """ + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + """Convert the value to the correct type. 
This is not called if + the value is ``None`` (the missing value). + + This must accept string values from the command line, as well as + values that are already the correct type. It may also convert + other compatible types. + + The ``param`` and ``ctx`` arguments may be ``None`` in certain + situations, such as when converting prompt input. + + If the value cannot be converted, call :meth:`fail` with a + descriptive message. + + :param value: The value to convert. + :param param: The parameter that is using this type to convert + its value. May be ``None``. + :param ctx: The current context that arrived at this value. May + be ``None``. + """ + return value + + def split_envvar_value(self, rv: str) -> t.Sequence[str]: + """Given a value from an environment variable this splits it up + into small chunks depending on the defined envvar list splitter. + + If the splitter is set to `None`, which means that whitespace splits, + then leading and trailing whitespace is ignored. Otherwise, leading + and trailing splitters usually lead to empty items being included. + """ + return (rv or "").split(self.envvar_list_splitter) + + def fail( + self, + message: str, + param: t.Optional["Parameter"] = None, + ctx: t.Optional["Context"] = None, + ) -> "t.NoReturn": + """Helper method to fail with an invalid value message.""" + raise BadParameter(message, ctx=ctx, param=param) + + def shell_complete( + self, ctx: "Context", param: "Parameter", incomplete: str + ) -> t.List["CompletionItem"]: + """Return a list of + :class:`~click.shell_completion.CompletionItem` objects for the + incomplete value. Most types do not provide completions, but + some do, and this allows custom types to provide custom + completions as well. + + :param ctx: Invocation context for this command. + :param param: The parameter that is requesting completion. + :param incomplete: Value being completed. May be empty. + + .. 
versionadded:: 8.0 + """ + return [] + + +class CompositeParamType(ParamType): + is_composite = True + + @property + def arity(self) -> int: # type: ignore + raise NotImplementedError() + + +class FuncParamType(ParamType): + def __init__(self, func: t.Callable[[t.Any], t.Any]) -> None: + self.name = func.__name__ + self.func = func + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict["func"] = self.func + return info_dict + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + try: + return self.func(value) + except ValueError: + try: + value = str(value) + except UnicodeError: + value = value.decode("utf-8", "replace") + + self.fail(value, param, ctx) + + +class UnprocessedParamType(ParamType): + name = "text" + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + return value + + def __repr__(self) -> str: + return "UNPROCESSED" + + +class StringParamType(ParamType): + name = "text" + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + if isinstance(value, bytes): + enc = _get_argv_encoding() + try: + value = value.decode(enc) + except UnicodeError: + fs_enc = get_filesystem_encoding() + if fs_enc != enc: + try: + value = value.decode(fs_enc) + except UnicodeError: + value = value.decode("utf-8", "replace") + else: + value = value.decode("utf-8", "replace") + return value + return str(value) + + def __repr__(self) -> str: + return "STRING" + + +class Choice(ParamType): + """The choice type allows a value to be checked against a fixed set + of supported values. All of these values have to be strings. + + You should only pass a list or tuple of choices. Other iterables + (like generators) may lead to surprising results. + + The resulting value will always be one of the originally passed choices + regardless of ``case_sensitive`` or any ``ctx.token_normalize_func`` + being specified. + + See :ref:`choice-opts` for an example. + + :param case_sensitive: Set to false to make choices case + insensitive. Defaults to true. + """ + + name = "choice" + + def __init__(self, choices: t.Sequence[str], case_sensitive: bool = True) -> None: + self.choices = choices + self.case_sensitive = case_sensitive + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict["choices"] = self.choices + info_dict["case_sensitive"] = self.case_sensitive + return info_dict + + def get_metavar(self, param: "Parameter") -> str: + choices_str = "|".join(self.choices) + + # Use curly braces to indicate a required argument. + if param.required and param.param_type_name == "argument": + return f"{{{choices_str}}}" + + # Use square braces to indicate an option or optional argument. 
+ return f"[{choices_str}]" + + def get_missing_message(self, param: "Parameter") -> str: + return _("Choose from:\n\t{choices}").format(choices=",\n\t".join(self.choices)) + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + # Match through normalization and case sensitivity + # first do token_normalize_func, then lowercase + # preserve original `value` to produce an accurate message in + # `self.fail` + normed_value = value + normed_choices = {choice: choice for choice in self.choices} + + if ctx is not None and ctx.token_normalize_func is not None: + normed_value = ctx.token_normalize_func(value) + normed_choices = { + ctx.token_normalize_func(normed_choice): original + for normed_choice, original in normed_choices.items() + } + + if not self.case_sensitive: + normed_value = normed_value.casefold() + normed_choices = { + normed_choice.casefold(): original + for normed_choice, original in normed_choices.items() + } + + if normed_value in normed_choices: + return normed_choices[normed_value] + + choices_str = ", ".join(map(repr, self.choices)) + self.fail( + ngettext( + "{value!r} is not {choice}.", + "{value!r} is not one of {choices}.", + len(self.choices), + ).format(value=value, choice=choices_str, choices=choices_str), + param, + ctx, + ) + + def __repr__(self) -> str: + return f"Choice({list(self.choices)})" + + def shell_complete( + self, ctx: "Context", param: "Parameter", incomplete: str + ) -> t.List["CompletionItem"]: + """Complete choices that start with the incomplete value. + + :param ctx: Invocation context for this command. + :param param: The parameter that is requesting completion. + :param incomplete: Value being completed. May be empty. + + .. versionadded:: 8.0 + """ + from click.shell_completion import CompletionItem + + str_choices = map(str, self.choices) + + if self.case_sensitive: + matched = (c for c in str_choices if c.startswith(incomplete)) + else: + incomplete = incomplete.lower() + matched = (c for c in str_choices if c.lower().startswith(incomplete)) + + return [CompletionItem(c) for c in matched] + + +class DateTime(ParamType): + """The DateTime type converts date strings into `datetime` objects. + + The format strings which are checked are configurable, but default to some + common (non-timezone aware) ISO 8601 formats. + + When specifying *DateTime* formats, you should only pass a list or a tuple. + Other iterables, like generators, may lead to surprising results. + + The format strings are processed using ``datetime.strptime``, and this + consequently defines the format strings which are allowed. + + Parsing is tried using each format, in order, and the first format which + parses successfully is used. + + :param formats: A list or tuple of date format strings, in the order in + which they should be tried. Defaults to + ``'%Y-%m-%d'``, ``'%Y-%m-%dT%H:%M:%S'``, + ``'%Y-%m-%d %H:%M:%S'``. 
+ """ + + name = "datetime" + + def __init__(self, formats: t.Optional[t.Sequence[str]] = None): + self.formats = formats or ["%Y-%m-%d", "%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S"] + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict["formats"] = self.formats + return info_dict + + def get_metavar(self, param: "Parameter") -> str: + return f"[{'|'.join(self.formats)}]" + + def _try_to_convert_date(self, value: t.Any, format: str) -> t.Optional[datetime]: + try: + return datetime.strptime(value, format) + except ValueError: + return None + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + if isinstance(value, datetime): + return value + + for format in self.formats: + converted = self._try_to_convert_date(value, format) + + if converted is not None: + return converted + + formats_str = ", ".join(map(repr, self.formats)) + self.fail( + ngettext( + "{value!r} does not match the format {format}.", + "{value!r} does not match the formats {formats}.", + len(self.formats), + ).format(value=value, format=formats_str, formats=formats_str), + param, + ctx, + ) + + def __repr__(self) -> str: + return "DateTime" + + +class _NumberParamTypeBase(ParamType): + _number_class: t.ClassVar[t.Type] + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + try: + return self._number_class(value) + except ValueError: + self.fail( + _("{value!r} is not a valid {number_type}.").format( + value=value, number_type=self.name + ), + param, + ctx, + ) + + +class _NumberRangeBase(_NumberParamTypeBase): + def __init__( + self, + min: t.Optional[float] = None, + max: t.Optional[float] = None, + min_open: bool = False, + max_open: bool = False, + clamp: bool = False, + ) -> None: + self.min = min + self.max = max + self.min_open = min_open + self.max_open = max_open + self.clamp = clamp + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict.update( + min=self.min, + max=self.max, + min_open=self.min_open, + max_open=self.max_open, + clamp=self.clamp, + ) + return info_dict + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + import operator + + rv = super().convert(value, param, ctx) + lt_min: bool = self.min is not None and ( + operator.le if self.min_open else operator.lt + )(rv, self.min) + gt_max: bool = self.max is not None and ( + operator.ge if self.max_open else operator.gt + )(rv, self.max) + + if self.clamp: + if lt_min: + return self._clamp(self.min, 1, self.min_open) # type: ignore + + if gt_max: + return self._clamp(self.max, -1, self.max_open) # type: ignore + + if lt_min or gt_max: + self.fail( + _("{value} is not in the range {range}.").format( + value=rv, range=self._describe_range() + ), + param, + ctx, + ) + + return rv + + def _clamp(self, bound: float, dir: "te.Literal[1, -1]", open: bool) -> float: + """Find the valid value to clamp to bound in the given + direction. + + :param bound: The boundary value. + :param dir: 1 or -1 indicating the direction to move. + :param open: If true, the range does not include the bound. 
+ """ + raise NotImplementedError + + def _describe_range(self) -> str: + """Describe the range for use in help text.""" + if self.min is None: + op = "<" if self.max_open else "<=" + return f"x{op}{self.max}" + + if self.max is None: + op = ">" if self.min_open else ">=" + return f"x{op}{self.min}" + + lop = "<" if self.min_open else "<=" + rop = "<" if self.max_open else "<=" + return f"{self.min}{lop}x{rop}{self.max}" + + def __repr__(self) -> str: + clamp = " clamped" if self.clamp else "" + return f"<{type(self).__name__} {self._describe_range()}{clamp}>" + + +class IntParamType(_NumberParamTypeBase): + name = "integer" + _number_class = int + + def __repr__(self) -> str: + return "INT" + + +class IntRange(_NumberRangeBase, IntParamType): + """Restrict an :data:`click.INT` value to a range of accepted + values. See :ref:`ranges`. + + If ``min`` or ``max`` are not passed, any value is accepted in that + direction. If ``min_open`` or ``max_open`` are enabled, the + corresponding boundary is not included in the range. + + If ``clamp`` is enabled, a value outside the range is clamped to the + boundary instead of failing. + + .. versionchanged:: 8.0 + Added the ``min_open`` and ``max_open`` parameters. + """ + + name = "integer range" + + def _clamp( # type: ignore + self, bound: int, dir: "te.Literal[1, -1]", open: bool + ) -> int: + if not open: + return bound + + return bound + dir + + +class FloatParamType(_NumberParamTypeBase): + name = "float" + _number_class = float + + def __repr__(self) -> str: + return "FLOAT" + + +class FloatRange(_NumberRangeBase, FloatParamType): + """Restrict a :data:`click.FLOAT` value to a range of accepted + values. See :ref:`ranges`. + + If ``min`` or ``max`` are not passed, any value is accepted in that + direction. If ``min_open`` or ``max_open`` are enabled, the + corresponding boundary is not included in the range. + + If ``clamp`` is enabled, a value outside the range is clamped to the + boundary instead of failing. This is not supported if either + boundary is marked ``open``. + + .. versionchanged:: 8.0 + Added the ``min_open`` and ``max_open`` parameters. + """ + + name = "float range" + + def __init__( + self, + min: t.Optional[float] = None, + max: t.Optional[float] = None, + min_open: bool = False, + max_open: bool = False, + clamp: bool = False, + ) -> None: + super().__init__( + min=min, max=max, min_open=min_open, max_open=max_open, clamp=clamp + ) + + if (min_open or max_open) and clamp: + raise TypeError("Clamping is not supported for open bounds.") + + def _clamp(self, bound: float, dir: "te.Literal[1, -1]", open: bool) -> float: + if not open: + return bound + + # Could use Python 3.9's math.nextafter here, but clamping an + # open float range doesn't seem to be particularly useful. It's + # left up to the user to write a callback to do it if needed. 
+ raise RuntimeError("Clamping is not supported for open bounds.") + + +class BoolParamType(ParamType): + name = "boolean" + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + if value in {False, True}: + return bool(value) + + norm = value.strip().lower() + + if norm in {"1", "true", "t", "yes", "y", "on"}: + return True + + if norm in {"0", "false", "f", "no", "n", "off"}: + return False + + self.fail( + _("{value!r} is not a valid boolean.").format(value=value), param, ctx + ) + + def __repr__(self) -> str: + return "BOOL" + + +class UUIDParameterType(ParamType): + name = "uuid" + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + import uuid + + if isinstance(value, uuid.UUID): + return value + + value = value.strip() + + try: + return uuid.UUID(value) + except ValueError: + self.fail( + _("{value!r} is not a valid UUID.").format(value=value), param, ctx + ) + + def __repr__(self) -> str: + return "UUID" + + +class File(ParamType): + """Declares a parameter to be a file for reading or writing. The file + is automatically closed once the context tears down (after the command + finished working). + + Files can be opened for reading or writing. The special value ``-`` + indicates stdin or stdout depending on the mode. + + By default, the file is opened for reading text data, but it can also be + opened in binary mode or for writing. The encoding parameter can be used + to force a specific encoding. + + The `lazy` flag controls if the file should be opened immediately or upon + first IO. The default is to be non-lazy for standard input and output + streams as well as files opened for reading, `lazy` otherwise. When opening a + file lazily for reading, it is still opened temporarily for validation, but + will not be held open until first IO. lazy is mainly useful when opening + for writing to avoid creating the file until it is needed. + + Starting with Click 2.0, files can also be opened atomically in which + case all writes go into a separate file in the same folder and upon + completion the file will be moved over to the original location. This + is useful if a file regularly read by other users is modified. + + See :ref:`file-args` for more information. 
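+
+    A minimal sketch (``cat`` is a placeholder command name)::
+
+        @click.command()
+        @click.argument("src", type=click.File("r"))
+        def cat(src):
+            click.echo(src.read())
+
+    Invoking ``cat -`` reads from stdin instead of opening a file on
+    disk.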
+ """ + + name = "filename" + envvar_list_splitter = os.path.pathsep + + def __init__( + self, + mode: str = "r", + encoding: t.Optional[str] = None, + errors: t.Optional[str] = "strict", + lazy: t.Optional[bool] = None, + atomic: bool = False, + ) -> None: + self.mode = mode + self.encoding = encoding + self.errors = errors + self.lazy = lazy + self.atomic = atomic + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict.update(mode=self.mode, encoding=self.encoding) + return info_dict + + def resolve_lazy_flag(self, value: t.Any) -> bool: + if self.lazy is not None: + return self.lazy + if value == "-": + return False + elif "w" in self.mode: + return True + return False + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + try: + if hasattr(value, "read") or hasattr(value, "write"): + return value + + lazy = self.resolve_lazy_flag(value) + + if lazy: + f: t.IO = t.cast( + t.IO, + LazyFile( + value, self.mode, self.encoding, self.errors, atomic=self.atomic + ), + ) + + if ctx is not None: + ctx.call_on_close(f.close_intelligently) # type: ignore + + return f + + f, should_close = open_stream( + value, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + + # If a context is provided, we automatically close the file + # at the end of the context execution (or flush out). If a + # context does not exist, it's the caller's responsibility to + # properly close the file. This for instance happens when the + # type is used with prompts. + if ctx is not None: + if should_close: + ctx.call_on_close(safecall(f.close)) + else: + ctx.call_on_close(safecall(f.flush)) + + return f + except OSError as e: # noqa: B014 + self.fail(f"'{os.fsdecode(value)}': {e.strerror}", param, ctx) + + def shell_complete( + self, ctx: "Context", param: "Parameter", incomplete: str + ) -> t.List["CompletionItem"]: + """Return a special completion marker that tells the completion + system to use the shell to provide file path completions. + + :param ctx: Invocation context for this command. + :param param: The parameter that is requesting completion. + :param incomplete: Value being completed. May be empty. + + .. versionadded:: 8.0 + """ + from click.shell_completion import CompletionItem + + return [CompletionItem(incomplete, type="file")] + + +class Path(ParamType): + """The ``Path`` type is similar to the :class:`File` type, but + returns the filename instead of an open file. Various checks can be + enabled to validate the type of file and permissions. + + :param exists: The file or directory needs to exist for the value to + be valid. If this is not set to ``True``, and the file does not + exist, then all further checks are silently skipped. + :param file_okay: Allow a file as a value. + :param dir_okay: Allow a directory as a value. + :param readable: if true, a readable check is performed. + :param writable: if true, a writable check is performed. + :param executable: if true, an executable check is performed. + :param resolve_path: Make the value absolute and resolve any + symlinks. A ``~`` is not expanded, as this is supposed to be + done by the shell only. + :param allow_dash: Allow a single dash as a value, which indicates + a standard stream (but does not open it). Use + :func:`~click.open_file` to handle opening this value. + :param path_type: Convert the incoming path value to this type. If + ``None``, keep Python's default, which is ``str``. Useful to + convert to :class:`pathlib.Path`. + + .. 
versionchanged:: 8.1 + Added the ``executable`` parameter. + + .. versionchanged:: 8.0 + Allow passing ``type=pathlib.Path``. + + .. versionchanged:: 6.0 + Added the ``allow_dash`` parameter. + """ + + envvar_list_splitter = os.path.pathsep + + def __init__( + self, + exists: bool = False, + file_okay: bool = True, + dir_okay: bool = True, + writable: bool = False, + readable: bool = True, + resolve_path: bool = False, + allow_dash: bool = False, + path_type: t.Optional[t.Type] = None, + executable: bool = False, + ): + self.exists = exists + self.file_okay = file_okay + self.dir_okay = dir_okay + self.readable = readable + self.writable = writable + self.executable = executable + self.resolve_path = resolve_path + self.allow_dash = allow_dash + self.type = path_type + + if self.file_okay and not self.dir_okay: + self.name = _("file") + elif self.dir_okay and not self.file_okay: + self.name = _("directory") + else: + self.name = _("path") + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict.update( + exists=self.exists, + file_okay=self.file_okay, + dir_okay=self.dir_okay, + writable=self.writable, + readable=self.readable, + allow_dash=self.allow_dash, + ) + return info_dict + + def coerce_path_result(self, rv: t.Any) -> t.Any: + if self.type is not None and not isinstance(rv, self.type): + if self.type is str: + rv = os.fsdecode(rv) + elif self.type is bytes: + rv = os.fsencode(rv) + else: + rv = self.type(rv) + + return rv + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + rv = value + + is_dash = self.file_okay and self.allow_dash and rv in (b"-", "-") + + if not is_dash: + if self.resolve_path: + # os.path.realpath doesn't resolve symlinks on Windows + # until Python 3.8. Use pathlib for now. + import pathlib + + rv = os.fsdecode(pathlib.Path(rv).resolve()) + + try: + st = os.stat(rv) + except OSError: + if not self.exists: + return self.coerce_path_result(rv) + self.fail( + _("{name} {filename!r} does not exist.").format( + name=self.name.title(), filename=os.fsdecode(value) + ), + param, + ctx, + ) + + if not self.file_okay and stat.S_ISREG(st.st_mode): + self.fail( + _("{name} {filename!r} is a file.").format( + name=self.name.title(), filename=os.fsdecode(value) + ), + param, + ctx, + ) + if not self.dir_okay and stat.S_ISDIR(st.st_mode): + self.fail( + _("{name} '{filename}' is a directory.").format( + name=self.name.title(), filename=os.fsdecode(value) + ), + param, + ctx, + ) + + if self.readable and not os.access(rv, os.R_OK): + self.fail( + _("{name} {filename!r} is not readable.").format( + name=self.name.title(), filename=os.fsdecode(value) + ), + param, + ctx, + ) + + if self.writable and not os.access(rv, os.W_OK): + self.fail( + _("{name} {filename!r} is not writable.").format( + name=self.name.title(), filename=os.fsdecode(value) + ), + param, + ctx, + ) + + if self.executable and not os.access(value, os.X_OK): + self.fail( + _("{name} {filename!r} is not executable.").format( + name=self.name.title(), filename=os.fsdecode(value) + ), + param, + ctx, + ) + + return self.coerce_path_result(rv) + + def shell_complete( + self, ctx: "Context", param: "Parameter", incomplete: str + ) -> t.List["CompletionItem"]: + """Return a special completion marker that tells the completion + system to use the shell to provide path completions for only + directories or any paths. + + :param ctx: Invocation context for this command. 
+ :param param: The parameter that is requesting completion. + :param incomplete: Value being completed. May be empty. + + .. versionadded:: 8.0 + """ + from click.shell_completion import CompletionItem + + type = "dir" if self.dir_okay and not self.file_okay else "file" + return [CompletionItem(incomplete, type=type)] + + +class Tuple(CompositeParamType): + """The default behavior of Click is to apply a type on a value directly. + This works well in most cases, except for when `nargs` is set to a fixed + count and different types should be used for different items. In this + case the :class:`Tuple` type can be used. This type can only be used + if `nargs` is set to a fixed number. + + For more information see :ref:`tuple-type`. + + This can be selected by using a Python tuple literal as a type. + + :param types: a list of types that should be used for the tuple items. + """ + + def __init__(self, types: t.Sequence[t.Union[t.Type, ParamType]]) -> None: + self.types = [convert_type(ty) for ty in types] + + def to_info_dict(self) -> t.Dict[str, t.Any]: + info_dict = super().to_info_dict() + info_dict["types"] = [t.to_info_dict() for t in self.types] + return info_dict + + @property + def name(self) -> str: # type: ignore + return f"<{' '.join(ty.name for ty in self.types)}>" + + @property + def arity(self) -> int: # type: ignore + return len(self.types) + + def convert( + self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] + ) -> t.Any: + len_type = len(self.types) + len_value = len(value) + + if len_value != len_type: + self.fail( + ngettext( + "{len_type} values are required, but {len_value} was given.", + "{len_type} values are required, but {len_value} were given.", + len_value, + ).format(len_type=len_type, len_value=len_value), + param=param, + ctx=ctx, + ) + + return tuple(ty(x, param, ctx) for ty, x in zip(self.types, value)) + + +def convert_type(ty: t.Optional[t.Any], default: t.Optional[t.Any] = None) -> ParamType: + """Find the most appropriate :class:`ParamType` for the given Python + type. If the type isn't provided, it can be inferred from a default + value. + """ + guessed_type = False + + if ty is None and default is not None: + if isinstance(default, (tuple, list)): + # If the default is empty, ty will remain None and will + # return STRING. + if default: + item = default[0] + + # A tuple of tuples needs to detect the inner types. + # Can't call convert recursively because that would + # incorrectly unwind the tuple to a single type. + if isinstance(item, (tuple, list)): + ty = tuple(map(type, item)) + else: + ty = type(item) + else: + ty = type(default) + + guessed_type = True + + if isinstance(ty, tuple): + return Tuple(ty) + + if isinstance(ty, ParamType): + return ty + + if ty is str or ty is None: + return STRING + + if ty is int: + return INT + + if ty is float: + return FLOAT + + if ty is bool: + return BOOL + + if guessed_type: + return STRING + + if __debug__: + try: + if issubclass(ty, ParamType): + raise AssertionError( + f"Attempted to use an uninstantiated parameter type ({ty})." + ) + except TypeError: + # ty is an instance (correct), so issubclass fails. + pass + + return FuncParamType(ty) + + +#: A dummy parameter type that just does nothing. From a user's +#: perspective this appears to just be the same as `STRING` but +#: internally no string conversion takes place if the input was bytes. +#: This is usually useful when working with file paths as they can +#: appear in bytes and unicode. 
+#:
+#: For path-related uses the :class:`Path` type is a better choice but
+#: there are situations where an unprocessed type is useful, which is why
+#: it is provided.
+#:
+#: .. versionadded:: 4.0
+UNPROCESSED = UnprocessedParamType()
+
+#: A unicode string parameter type which is the implicit default. This
+#: can also be selected by using ``str`` as type.
+STRING = StringParamType()
+
+#: An integer parameter. This can also be selected by using ``int`` as
+#: type.
+INT = IntParamType()
+
+#: A floating point value parameter. This can also be selected by using
+#: ``float`` as type.
+FLOAT = FloatParamType()
+
+#: A boolean parameter. This is the default for boolean flags. This can
+#: also be selected by using ``bool`` as a type.
+BOOL = BoolParamType()
+
+#: A UUID parameter.
+UUID = UUIDParameterType()
diff --git a/env/lib/python3.8/site-packages/click/utils.py b/env/lib/python3.8/site-packages/click/utils.py
new file mode 100644
index 00000000..8283788a
--- /dev/null
+++ b/env/lib/python3.8/site-packages/click/utils.py
@@ -0,0 +1,580 @@
+import os
+import re
+import sys
+import typing as t
+from functools import update_wrapper
+from types import ModuleType
+
+from ._compat import _default_text_stderr
+from ._compat import _default_text_stdout
+from ._compat import _find_binary_writer
+from ._compat import auto_wrap_for_ansi
+from ._compat import binary_streams
+from ._compat import get_filesystem_encoding
+from ._compat import open_stream
+from ._compat import should_strip_ansi
+from ._compat import strip_ansi
+from ._compat import text_streams
+from ._compat import WIN
+from .globals import resolve_color_default
+
+if t.TYPE_CHECKING:
+    import typing_extensions as te
+
+F = t.TypeVar("F", bound=t.Callable[..., t.Any])
+
+
+def _posixify(name: str) -> str:
+    return "-".join(name.split()).lower()
+
+
+def safecall(func: F) -> F:
+    """Wraps a function so that it swallows exceptions."""
+
+    def wrapper(*args, **kwargs):  # type: ignore
+        try:
+            return func(*args, **kwargs)
+        except Exception:
+            pass
+
+    return update_wrapper(t.cast(F, wrapper), func)
+
+
+def make_str(value: t.Any) -> str:
+    """Converts a value into a valid string."""
+    if isinstance(value, bytes):
+        try:
+            return value.decode(get_filesystem_encoding())
+        except UnicodeError:
+            return value.decode("utf-8", "replace")
+    return str(value)
+
+
+def make_default_short_help(help: str, max_length: int = 45) -> str:
+    """Returns a condensed version of help string."""
+    # Consider only the first paragraph.
+    paragraph_end = help.find("\n\n")
+
+    if paragraph_end != -1:
+        help = help[:paragraph_end]
+
+    # Collapse newlines, tabs, and spaces.
+    words = help.split()
+
+    if not words:
+        return ""
+
+    # The first paragraph started with a "no rewrap" marker, ignore it.
+    if words[0] == "\b":
+        words = words[1:]
+
+    total_length = 0
+    last_index = len(words) - 1
+
+    for i, word in enumerate(words):
+        total_length += len(word) + (i > 0)
+
+        if total_length > max_length:  # too long, truncate
+            break
+
+        if word[-1] == ".":  # sentence end, truncate without "..."
+            return " ".join(words[: i + 1])
+
+        if total_length == max_length and i != last_index:
+            break  # not at sentence end, truncate with "..."
+    else:
+        return " ".join(words)  # no truncation needed
+
+    # Account for the length of the suffix.
+    total_length += len("...")
+
+    # remove words until the length is short enough
+    while i > 0:
+        total_length -= len(words[i]) + (i > 0)
+
+        if total_length <= max_length:
+            break
+
+        i -= 1
+
+    return " ".join(words[:i]) + "..."
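+
+# For illustration, a sketch of how the helper above condenses help text
+# (the input strings are made-up examples; outputs shown as comments):
+#
+#     make_default_short_help("Say hello. Then exit.")
+#     # -> "Say hello."  (stops at the first sentence end, no "...")
+#
+#     make_default_short_help("A very long first paragraph " * 5)
+#     # -> a prefix of the paragraph, truncated with "..."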
+
+
+class LazyFile:
+    """A lazy file works like a regular file, but it does not fully open
+    the file; it performs some basic checks early to see whether the
+    filename parameter makes sense. This is useful for safely opening
+    files for writing.
+    """
+
+    def __init__(
+        self,
+        filename: str,
+        mode: str = "r",
+        encoding: t.Optional[str] = None,
+        errors: t.Optional[str] = "strict",
+        atomic: bool = False,
+    ):
+        self.name = filename
+        self.mode = mode
+        self.encoding = encoding
+        self.errors = errors
+        self.atomic = atomic
+        self._f: t.Optional[t.IO]
+
+        if filename == "-":
+            self._f, self.should_close = open_stream(filename, mode, encoding, errors)
+        else:
+            if "r" in mode:
+                # Open and close the file in case we're opening it for
+                # reading so that we can catch at least some errors in
+                # some cases early.
+                open(filename, mode).close()
+            self._f = None
+            self.should_close = True
+
+    def __getattr__(self, name: str) -> t.Any:
+        return getattr(self.open(), name)
+
+    def __repr__(self) -> str:
+        if self._f is not None:
+            return repr(self._f)
+        return f"<unopened file '{self.name}' {self.mode}>"
+
+    def open(self) -> t.IO:
+        """Opens the file if it's not yet open. This call might fail with
+        a :exc:`FileError`. Not handling this error will produce an error
+        that Click shows.
+        """
+        if self._f is not None:
+            return self._f
+        try:
+            rv, self.should_close = open_stream(
+                self.name, self.mode, self.encoding, self.errors, atomic=self.atomic
+            )
+        except OSError as e:  # noqa: E402
+            from .exceptions import FileError
+
+            raise FileError(self.name, hint=e.strerror) from e
+        self._f = rv
+        return rv
+
+    def close(self) -> None:
+        """Closes the underlying file, no matter what."""
+        if self._f is not None:
+            self._f.close()
+
+    def close_intelligently(self) -> None:
+        """This function only closes the file if it was opened by the lazy
+        file wrapper. For instance this will never close stdin.
+        """
+        if self.should_close:
+            self.close()
+
+    def __enter__(self) -> "LazyFile":
+        return self
+
+    def __exit__(self, exc_type, exc_value, tb):  # type: ignore
+        self.close_intelligently()
+
+    def __iter__(self) -> t.Iterator[t.AnyStr]:
+        self.open()
+        return iter(self._f)  # type: ignore
+
+
+class KeepOpenFile:
+    def __init__(self, file: t.IO) -> None:
+        self._file = file
+
+    def __getattr__(self, name: str) -> t.Any:
+        return getattr(self._file, name)
+
+    def __enter__(self) -> "KeepOpenFile":
+        return self
+
+    def __exit__(self, exc_type, exc_value, tb):  # type: ignore
+        pass
+
+    def __repr__(self) -> str:
+        return repr(self._file)
+
+    def __iter__(self) -> t.Iterator[t.AnyStr]:
+        return iter(self._file)
+
+
+def echo(
+    message: t.Optional[t.Any] = None,
+    file: t.Optional[t.IO[t.Any]] = None,
+    nl: bool = True,
+    err: bool = False,
+    color: t.Optional[bool] = None,
+) -> None:
+    """Print a message and newline to stdout or a file. This should be
+    used instead of :func:`print` because it provides better support
+    for different data, files, and environments.
+
+    Compared to :func:`print`, this does the following:
+
+    - Ensures that the output encoding is not misconfigured on Linux.
+    - Supports Unicode in the Windows console.
+    - Supports writing to binary outputs, and supports writing bytes
+      to text outputs.
+    - Supports colors and styles on Windows.
+    - Removes ANSI color and style codes if the output does not look
+      like an interactive terminal.
+    - Always flushes the output.
+
+    :param message: The string or bytes to output. Other objects are
+        converted to strings.
+    :param file: The file to write to. Defaults to ``stdout``.
+ :param err: Write to ``stderr`` instead of ``stdout``. + :param nl: Print a newline after the message. Enabled by default. + :param color: Force showing or hiding colors and other styles. By + default Click will remove color if the output does not look like + an interactive terminal. + + .. versionchanged:: 6.0 + Support Unicode output on the Windows console. Click does not + modify ``sys.stdout``, so ``sys.stdout.write()`` and ``print()`` + will still not support Unicode. + + .. versionchanged:: 4.0 + Added the ``color`` parameter. + + .. versionadded:: 3.0 + Added the ``err`` parameter. + + .. versionchanged:: 2.0 + Support colors on Windows if colorama is installed. + """ + if file is None: + if err: + file = _default_text_stderr() + else: + file = _default_text_stdout() + + # Convert non bytes/text into the native string type. + if message is not None and not isinstance(message, (str, bytes, bytearray)): + out: t.Optional[t.Union[str, bytes]] = str(message) + else: + out = message + + if nl: + out = out or "" + if isinstance(out, str): + out += "\n" + else: + out += b"\n" + + if not out: + file.flush() + return + + # If there is a message and the value looks like bytes, we manually + # need to find the binary stream and write the message in there. + # This is done separately so that most stream types will work as you + # would expect. Eg: you can write to StringIO for other cases. + if isinstance(out, (bytes, bytearray)): + binary_file = _find_binary_writer(file) + + if binary_file is not None: + file.flush() + binary_file.write(out) + binary_file.flush() + return + + # ANSI style code support. For no message or bytes, nothing happens. + # When outputting to a file instead of a terminal, strip codes. + else: + color = resolve_color_default(color) + + if should_strip_ansi(file, color): + out = strip_ansi(out) + elif WIN: + if auto_wrap_for_ansi is not None: + file = auto_wrap_for_ansi(file) # type: ignore + elif not color: + out = strip_ansi(out) + + file.write(out) # type: ignore + file.flush() + + +def get_binary_stream(name: "te.Literal['stdin', 'stdout', 'stderr']") -> t.BinaryIO: + """Returns a system stream for byte processing. + + :param name: the name of the stream to open. Valid names are ``'stdin'``, + ``'stdout'`` and ``'stderr'`` + """ + opener = binary_streams.get(name) + if opener is None: + raise TypeError(f"Unknown standard stream '{name}'") + return opener() + + +def get_text_stream( + name: "te.Literal['stdin', 'stdout', 'stderr']", + encoding: t.Optional[str] = None, + errors: t.Optional[str] = "strict", +) -> t.TextIO: + """Returns a system stream for text processing. This usually returns + a wrapped stream around a binary stream returned from + :func:`get_binary_stream` but it also can take shortcuts for already + correctly configured streams. + + :param name: the name of the stream to open. Valid names are ``'stdin'``, + ``'stdout'`` and ``'stderr'`` + :param encoding: overrides the detected default encoding. + :param errors: overrides the default error mode. + """ + opener = text_streams.get(name) + if opener is None: + raise TypeError(f"Unknown standard stream '{name}'") + return opener(encoding, errors) + + +def open_file( + filename: str, + mode: str = "r", + encoding: t.Optional[str] = None, + errors: t.Optional[str] = "strict", + lazy: bool = False, + atomic: bool = False, +) -> t.IO: + """Open a file, with extra behavior to handle ``'-'`` to indicate + a standard stream, lazy open on write, and atomic write. 
Similar to
+    the behavior of the :class:`~click.File` param type.
+
+    If ``'-'`` is given to open ``stdout`` or ``stdin``, the stream is
+    wrapped so that using it in a context manager will not close it.
+    This makes it possible to use the function without accidentally
+    closing a standard stream:
+
+    .. code-block:: python
+
+        with open_file(filename) as f:
+            ...
+
+    :param filename: The name of the file to open, or ``'-'`` for
+        ``stdin``/``stdout``.
+    :param mode: The mode in which to open the file.
+    :param encoding: The encoding to decode or encode a file opened in
+        text mode.
+    :param errors: The error handling mode.
+    :param lazy: Wait to open the file until it is accessed. For read
+        mode, the file is temporarily opened to raise access errors
+        early, then closed until it is read again.
+    :param atomic: Write to a temporary file and replace the given file
+        on close.
+
+    .. versionadded:: 3.0
+    """
+    if lazy:
+        return t.cast(t.IO, LazyFile(filename, mode, encoding, errors, atomic=atomic))
+
+    f, should_close = open_stream(filename, mode, encoding, errors, atomic=atomic)
+
+    if not should_close:
+        f = t.cast(t.IO, KeepOpenFile(f))
+
+    return f
+
+
+def format_filename(
+    filename: t.Union[str, bytes, os.PathLike], shorten: bool = False
+) -> str:
+    """Formats a filename for user display. The main purpose of this
+    function is to ensure that the filename can be displayed at all. This
+    will decode the filename to unicode if necessary in a way that it will
+    not fail. Optionally, it can shorten the filename to not include the
+    full path to the filename.
+
+    :param filename: formats a filename for UI display. This will also convert
+                     the filename into unicode without failing.
+    :param shorten: this optionally shortens the filename to strip off the
+                    path that leads up to it.
+    """
+    if shorten:
+        filename = os.path.basename(filename)
+
+    return os.fsdecode(filename)
+
+
+def get_app_dir(app_name: str, roaming: bool = True, force_posix: bool = False) -> str:
+    r"""Returns the config folder for the application. The default behavior
+    is to return whatever is most appropriate for the operating system.
+
+    To give you an idea, for an app called ``"Foo Bar"``, something like
+    the following folders could be returned:
+
+    Mac OS X:
+      ``~/Library/Application Support/Foo Bar``
+    Mac OS X (POSIX):
+      ``~/.foo-bar``
+    Unix:
+      ``~/.config/foo-bar``
+    Unix (POSIX):
+      ``~/.foo-bar``
+    Windows (roaming):
+      ``C:\Users\<user>\AppData\Roaming\Foo Bar``
+    Windows (not roaming):
+      ``C:\Users\<user>\AppData\Local\Foo Bar``
+
+    .. versionadded:: 2.0
+
+    :param app_name: the application name. This should be properly capitalized
+        and can contain whitespace.
+    :param roaming: controls if the folder should be roaming or not on Windows.
+        Has no effect otherwise.
+    :param force_posix: if this is set to `True` then on any POSIX system the
+        folder will be stored in the home folder with a leading
+        dot instead of the XDG config home or darwin's
+        application support folder.
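+
+    A short sketch (the Unix path shown assumes ``XDG_CONFIG_HOME`` is
+    unset)::
+
+        cfg = os.path.join(get_app_dir("Foo Bar"), "config.ini")
+        # e.g. ~/.config/foo-bar/config.ini on Unix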
+ """ + if WIN: + key = "APPDATA" if roaming else "LOCALAPPDATA" + folder = os.environ.get(key) + if folder is None: + folder = os.path.expanduser("~") + return os.path.join(folder, app_name) + if force_posix: + return os.path.join(os.path.expanduser(f"~/.{_posixify(app_name)}")) + if sys.platform == "darwin": + return os.path.join( + os.path.expanduser("~/Library/Application Support"), app_name + ) + return os.path.join( + os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")), + _posixify(app_name), + ) + + +class PacifyFlushWrapper: + """This wrapper is used to catch and suppress BrokenPipeErrors resulting + from ``.flush()`` being called on broken pipe during the shutdown/final-GC + of the Python interpreter. Notably ``.flush()`` is always called on + ``sys.stdout`` and ``sys.stderr``. So as to have minimal impact on any + other cleanup code, and the case where the underlying file is not a broken + pipe, all calls and attributes are proxied. + """ + + def __init__(self, wrapped: t.IO) -> None: + self.wrapped = wrapped + + def flush(self) -> None: + try: + self.wrapped.flush() + except OSError as e: + import errno + + if e.errno != errno.EPIPE: + raise + + def __getattr__(self, attr: str) -> t.Any: + return getattr(self.wrapped, attr) + + +def _detect_program_name( + path: t.Optional[str] = None, _main: t.Optional[ModuleType] = None +) -> str: + """Determine the command used to run the program, for use in help + text. If a file or entry point was executed, the file name is + returned. If ``python -m`` was used to execute a module or package, + ``python -m name`` is returned. + + This doesn't try to be too precise, the goal is to give a concise + name for help text. Files are only shown as their name without the + path. ``python`` is only shown for modules, and the full path to + ``sys.executable`` is not shown. + + :param path: The Python file being executed. Python puts this in + ``sys.argv[0]``, which is used by default. + :param _main: The ``__main__`` module. This should only be passed + during internal testing. + + .. versionadded:: 8.0 + Based on command args detection in the Werkzeug reloader. + + :meta private: + """ + if _main is None: + _main = sys.modules["__main__"] + + if not path: + path = sys.argv[0] + + # The value of __package__ indicates how Python was called. It may + # not exist if a setuptools script is installed as an egg. It may be + # set incorrectly for entry points created with pip on Windows. + if getattr(_main, "__package__", None) is None or ( + os.name == "nt" + and _main.__package__ == "" + and not os.path.exists(path) + and os.path.exists(f"{path}.exe") + ): + # Executed a file, like "python app.py". + return os.path.basename(path) + + # Executed a module, like "python -m example". + # Rewritten by Python from "-m script" to "/path/to/script.py". + # Need to look at main module to determine how it was executed. + py_module = t.cast(str, _main.__package__) + name = os.path.splitext(os.path.basename(path))[0] + + # A submodule like "example.cli". + if name != "__main__": + py_module = f"{py_module}.{name}" + + return f"python -m {py_module.lstrip('.')}" + + +def _expand_args( + args: t.Iterable[str], + *, + user: bool = True, + env: bool = True, + glob_recursive: bool = True, +) -> t.List[str]: + """Simulate Unix shell expansion with Python functions. + + See :func:`glob.glob`, :func:`os.path.expanduser`, and + :func:`os.path.expandvars`. + + This is intended for use on Windows, where the shell does not do any + expansion. 
It may not exactly match what a Unix shell would do. + + :param args: List of command line arguments to expand. + :param user: Expand user home directory. + :param env: Expand environment variables. + :param glob_recursive: ``**`` matches directories recursively. + + .. versionchanged:: 8.1 + Invalid glob patterns are treated as empty expansions rather + than raising an error. + + .. versionadded:: 8.0 + + :meta private: + """ + from glob import glob + + out = [] + + for arg in args: + if user: + arg = os.path.expanduser(arg) + + if env: + arg = os.path.expandvars(arg) + + try: + matches = glob(arg, recursive=glob_recursive) + except re.error: + matches = [] + + if not matches: + out.append(arg) + else: + out.extend(matches) + + return out diff --git a/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/INSTALLER b/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/LICENSE b/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/LICENSE new file mode 100644 index 00000000..230c44fe --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/LICENSE @@ -0,0 +1,20 @@ +Copyright (c) 2020 Sébastien Eustace + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/METADATA b/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/METADATA new file mode 100644 index 00000000..63485161 --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/METADATA @@ -0,0 +1,22 @@ +Metadata-Version: 2.1 +Name: crashtest +Version: 0.3.1 +Summary: Manage Python errors with ease +Home-page: https://github.com/sdispater/crashtest +License: MIT +Author: Sébastien Eustace +Author-email: sebastien@eustace.io +Requires-Python: >=3.6,<4.0 +Classifier: License :: OSI Approved :: MIT License +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Project-URL: Repository, https://github.com/sdispater/crashtest +Description-Content-Type: text/markdown + +# Crashtest + +Crashtest is a Python library that makes exceptions handling and inspection easier. 
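+
+A minimal sketch of the contracts API (using the ``BaseSolution`` class
+vendored further down in this diff):
+
+    from crashtest.contracts.base_solution import BaseSolution
+
+    solution = BaseSolution(
+        title="Check the path",
+        description="The configuration file could not be found.",
+    )
+    assert solution.solution_title == "Check the path"
+    assert solution.documentation_links == []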
+ diff --git a/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/RECORD b/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/RECORD new file mode 100644 index 00000000..83e9a41c --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/RECORD @@ -0,0 +1,29 @@ +crashtest-0.3.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +crashtest-0.3.1.dist-info/LICENSE,sha256=8ZeBM3grkPRzO8MI3bGSZ8P-BHl8iNntO8IZAySVqYI,1062 +crashtest-0.3.1.dist-info/METADATA,sha256=PfMGWTb6n_tkrHQQb3LH_vfp5VsYvJVCMl_1Smytflg,748 +crashtest-0.3.1.dist-info/RECORD,, +crashtest-0.3.1.dist-info/WHEEL,sha256=jM75OLmO0bF2QZxXmlw3A-IE58epFjnoGYhyw7ywvsc,85 +crashtest/__init__.py,sha256=r4xAFihOf72W9TD-lpMi6ntWSTKTP2SlzKP1ytkjRbI,22 +crashtest/__pycache__/__init__.cpython-38.pyc,, +crashtest/__pycache__/frame.cpython-38.pyc,, +crashtest/__pycache__/frame_collection.cpython-38.pyc,, +crashtest/__pycache__/inspector.cpython-38.pyc,, +crashtest/contracts/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +crashtest/contracts/__pycache__/__init__.cpython-38.pyc,, +crashtest/contracts/__pycache__/base_solution.cpython-38.pyc,, +crashtest/contracts/__pycache__/has_solutions_for_exception.cpython-38.pyc,, +crashtest/contracts/__pycache__/provides_solution.cpython-38.pyc,, +crashtest/contracts/__pycache__/solution.cpython-38.pyc,, +crashtest/contracts/__pycache__/solution_provider_repository.cpython-38.pyc,, +crashtest/contracts/base_solution.py,sha256=XwGYk4seqED9tMmBhED3PdJHFT3qCIbi8i3tyUGqG4Q,517 +crashtest/contracts/has_solutions_for_exception.py,sha256=x6QLIK-n3e-yw4basppwwdwm3I6lzjCJ-_9mHbFbAbU,287 +crashtest/contracts/provides_solution.py,sha256=4KSZJkWAH5dzYDR2h5H6AaXqNXOVja-307BANcEALk0,143 +crashtest/contracts/solution.py,sha256=4Cnq1oTF2VUd0ZIm14SeQmv5rCA99BkwNTiSX-ZfjR0,322 +crashtest/contracts/solution_provider_repository.py,sha256=tZYxED5MRPJmQoVXto6yOhbah0lgwxc7lav9x3JvgzY,556 +crashtest/frame.py,sha256=nr5Mfglwws6iWaLiX5XFoBITWXqQvJDEidkMMpAovJY,2050 +crashtest/frame_collection.py,sha256=x9Kg2zMnNLoiMm4sXschkzpzmGP6dXjjTcuV3O8yh4s,2113 +crashtest/inspector.py,sha256=dPottVpq0KX0xErBqmCP2k_lH7wafP4EXcmXSxSDF7I,1279 +crashtest/solution_providers/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +crashtest/solution_providers/__pycache__/__init__.cpython-38.pyc,, +crashtest/solution_providers/__pycache__/solution_provider_repository.cpython-38.pyc,, +crashtest/solution_providers/solution_provider_repository.py,sha256=vUonL8oKrC8yF9c_EorwY81lQQQKErQHkwLD6UXDwnc,2165 diff --git a/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/WHEEL b/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/WHEEL new file mode 100644 index 00000000..07c22b7e --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest-0.3.1.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: poetry 1.0.0a9 +Root-Is-Purelib: true +Tag: py3-none-any diff --git a/env/lib/python3.8/site-packages/crashtest/__init__.py b/env/lib/python3.8/site-packages/crashtest/__init__.py new file mode 100644 index 00000000..260c070a --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest/__init__.py @@ -0,0 +1 @@ +__version__ = "0.3.1" diff --git a/env/lib/python3.8/site-packages/crashtest/contracts/__init__.py b/env/lib/python3.8/site-packages/crashtest/contracts/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/crashtest/contracts/base_solution.py 
b/env/lib/python3.8/site-packages/crashtest/contracts/base_solution.py new file mode 100644 index 00000000..860c0abd --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest/contracts/base_solution.py @@ -0,0 +1,22 @@ +from typing import List + +from .solution import Solution + + +class BaseSolution(Solution): + def __init__(self, title: str = None, description: str = None) -> None: + self._title = title + self._description = description + self._links = [] + + @property + def solution_title(self) -> str: + return self._title + + @property + def solution_description(self) -> str: + return self._description + + @property + def documentation_links(self) -> List[str]: + return self._links diff --git a/env/lib/python3.8/site-packages/crashtest/contracts/has_solutions_for_exception.py b/env/lib/python3.8/site-packages/crashtest/contracts/has_solutions_for_exception.py new file mode 100644 index 00000000..8aa3cfc8 --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest/contracts/has_solutions_for_exception.py @@ -0,0 +1,11 @@ +from typing import List + +from .solution import Solution + + +class HasSolutionsForException: + def can_solve(self, exception: Exception) -> bool: + raise NotImplementedError() + + def get_solutions(self, exception: Exception) -> List[Solution]: + raise NotImplementedError() diff --git a/env/lib/python3.8/site-packages/crashtest/contracts/provides_solution.py b/env/lib/python3.8/site-packages/crashtest/contracts/provides_solution.py new file mode 100644 index 00000000..eba72871 --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest/contracts/provides_solution.py @@ -0,0 +1,7 @@ +from .solution import Solution + + +class ProvidesSolution: + @property + def solution(self) -> Solution: + raise NotImplementedError() diff --git a/env/lib/python3.8/site-packages/crashtest/contracts/solution.py b/env/lib/python3.8/site-packages/crashtest/contracts/solution.py new file mode 100644 index 00000000..82f2a48e --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest/contracts/solution.py @@ -0,0 +1,15 @@ +from typing import List + + +class Solution: + @property + def solution_title(self) -> str: + raise NotImplementedError() + + @property + def solution_description(self) -> str: + raise NotImplementedError() + + @property + def documentation_links(self) -> List[str]: + raise NotImplementedError() diff --git a/env/lib/python3.8/site-packages/crashtest/contracts/solution_provider_repository.py b/env/lib/python3.8/site-packages/crashtest/contracts/solution_provider_repository.py new file mode 100644 index 00000000..bc8a0695 --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest/contracts/solution_provider_repository.py @@ -0,0 +1,19 @@ +from typing import List +from typing import Type + +from .solution import Solution + + +class SolutionProviderRepository: + def register_solution_provider( + self, solution_provider_class: Type + ) -> "SolutionProviderRepository": + raise NotImplementedError() + + def register_solution_providers( + self, solution_provider_classes: List[Type] + ) -> "SolutionProviderRepository": + raise NotImplementedError() + + def get_solutions_for_exception(self, exception: Exception) -> List[Solution]: + raise NotImplementedError() diff --git a/env/lib/python3.8/site-packages/crashtest/frame.py b/env/lib/python3.8/site-packages/crashtest/frame.py new file mode 100644 index 00000000..8436aae9 --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest/frame.py @@ -0,0 +1,74 @@ +import inspect + +from types import FrameType +from typing 
import Dict + + +class Frame: + _content_cache: Dict[str, str] = {} + + def __init__(self, frame_info: inspect.FrameInfo) -> None: + self._frame = frame_info.frame + self._frame_info = frame_info + self._lineno = frame_info.lineno + self._filename = frame_info.filename + self._function = frame_info.function + self._lines = None + self._file_content = None + + @property + def frame(self) -> FrameType: + return self._frame + + @property + def lineno(self) -> int: + return self._lineno + + @property + def filename(self) -> str: + return self._filename + + @property + def function(self) -> str: + return self._function + + @property + def line(self) -> str: + if not self._frame_info.code_context: + return "" + + return self._frame_info.code_context[0] + + @property + def file_content(self) -> str: + if self._file_content is None: + if not self._filename: + file_content = "" + else: + if self._filename not in self.__class__._content_cache: + try: + with open(self._filename) as f: + file_content = f.read() + except OSError: + file_content = "" + + self.__class__._content_cache[self._filename] = file_content + + file_content = self.__class__._content_cache[self._filename] + + self._file_content = file_content + + return self._file_content + + def __hash__(self) -> int: + return hash(self._filename) ^ hash(self._function) ^ hash(self._lineno) + + def __eq__(self, other: "Frame") -> bool: + return ( + self._filename == other.filename + and self._function == other.function + and self._lineno == other.lineno + ) + + def __repr__(self) -> str: + return f"<Frame {self._filename}, {self._function}, {self._lineno}>" diff --git a/env/lib/python3.8/site-packages/crashtest/frame_collection.py b/env/lib/python3.8/site-packages/crashtest/frame_collection.py new file mode 100644 index 00000000..9002fdcc --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest/frame_collection.py @@ -0,0 +1,72 @@ +from typing import List +from typing import Optional + +from .frame import Frame + + +class FrameCollection(list): + def __init__(self, frames: Optional[List[Frame]] = None, count: int = 0) -> None: + if frames is None: + frames = [] + + super().__init__(frames) + + self._count = count + + @property + def repetitions(self) -> int: + return self._count - 1 + + def is_repeated(self) -> bool: + return self._count > 1 + + def increment_count(self, increment: int = 1) -> "FrameCollection": + self._count += increment + + return self + + def compact(self) -> List["FrameCollection"]: + """ + Compacts the frames to deduplicate recursive calls.
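+
+        Consecutive duplicate runs of frames (e.g. produced by recursive
+        calls) are merged into a single FrameCollection whose repetition
+        count is incremented instead of the run being emitted again.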
+ """ + collections = [] + current_collection = FrameCollection() + + i = 0 + while i < len(self) - 1: + frame = self[i] + if frame in self[i + 1 :]: + duplicate_indices = [] + for sub_index, sub_frame in enumerate(self[i + 1 :]): + if frame == sub_frame: + duplicate_indices.append(sub_index + i + 1) + + found_duplicate = False + for duplicate_index in duplicate_indices: + collection = FrameCollection(self[i:duplicate_index]) + if collection == current_collection: + current_collection.increment_count() + i = duplicate_index + found_duplicate = True + break + + if found_duplicate: + continue + + collections.append(current_collection) + current_collection = FrameCollection(self[i : duplicate_indices[0]]) + + i = duplicate_indices[0] + + continue + + if current_collection.is_repeated(): + collections.append(current_collection) + current_collection = FrameCollection() + + current_collection.append(frame) + i += 1 + + collections.append(current_collection) + + return collections diff --git a/env/lib/python3.8/site-packages/crashtest/inspector.py b/env/lib/python3.8/site-packages/crashtest/inspector.py new file mode 100644 index 00000000..e74c4ed9 --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest/inspector.py @@ -0,0 +1,50 @@ +import inspect + +from typing import Optional + +from .frame import Frame +from .frame_collection import FrameCollection + + +class Inspector: + def __init__(self, exception: Exception): + self._exception = exception + self._frames = None + self._outer_frames = None + self._inner_frames = None + self._previous_exception = exception.__context__ + + @property + def exception(self) -> Exception: + return self._exception + + @property + def exception_name(self) -> str: + return self._exception.__class__.__name__ + + @property + def exception_message(self) -> str: + return str(self._exception) + + @property + def frames(self) -> FrameCollection: + if self._frames is not None: + return self._frames + + self._frames = FrameCollection() + + tb = self._exception.__traceback__ + + while tb: + frame_info = inspect.getframeinfo(tb) + self._frames.append(Frame(inspect.FrameInfo(tb.tb_frame, *frame_info))) + tb = tb.tb_next + + return self._frames + + @property + def previous_exception(self) -> Optional[Exception]: + return self._previous_exception + + def has_previous_exception(self) -> bool: + return self._previous_exception is not None diff --git a/env/lib/python3.8/site-packages/crashtest/solution_providers/__init__.py b/env/lib/python3.8/site-packages/crashtest/solution_providers/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/crashtest/solution_providers/solution_provider_repository.py b/env/lib/python3.8/site-packages/crashtest/solution_providers/solution_provider_repository.py new file mode 100644 index 00000000..db8d56c3 --- /dev/null +++ b/env/lib/python3.8/site-packages/crashtest/solution_providers/solution_provider_repository.py @@ -0,0 +1,64 @@ +from typing import List +from typing import Optional +from typing import Type + +from crashtest.contracts.has_solutions_for_exception import HasSolutionsForException +from crashtest.contracts.provides_solution import ProvidesSolution +from crashtest.contracts.solution import Solution +from crashtest.contracts.solution_provider_repository import ( + SolutionProviderRepository as BaseSolutionProviderRepository, +) + + +class SolutionProviderRepository(BaseSolutionProviderRepository): + def __init__(self, solution_providers: Optional[List[Type]] = None) -> None: + 
self._solution_providers = [] + + if solution_providers is None: + solution_providers = [] + + self.register_solution_providers(solution_providers) + + def register_solution_provider( + self, solution_provider_class: Type + ) -> "SolutionProviderRepository": + self._solution_providers.append(solution_provider_class) + + return self + + def register_solution_providers( + self, solution_provider_classes: List[Type] + ) -> "SolutionProviderRepository": + for solution_provider_class in solution_provider_classes: + self.register_solution_provider(solution_provider_class) + + return self + + def get_solutions_for_exception(self, exception: Exception) -> List[Solution]: + solutions = [] + + if isinstance(exception, Solution): + solutions.append(exception) + + if isinstance(exception, ProvidesSolution): + solutions.append(exception.solution) + + for solution_provider_class in self._solution_providers: + if not issubclass(solution_provider_class, HasSolutionsForException): + continue + + solution_provider: HasSolutionsForException = solution_provider_class() + + try: + if not solution_provider.can_solve(exception): + continue + except Exception: + continue + + try: + for solution in solution_provider.get_solutions(exception): + solutions.append(solution) + except Exception: + continue + + return solutions diff --git a/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/INSTALLER b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE new file mode 100644 index 00000000..07074259 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE @@ -0,0 +1,6 @@ +This software is made available under the terms of *either* of the licenses +found in LICENSE.APACHE or LICENSE.BSD. Contributions to cryptography are made +under the terms of *both* these licenses. + +The code used in the OS random engine is derived from CPython, and is licensed +under the terms of the PSF License Agreement. diff --git a/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE.APACHE b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE.APACHE new file mode 100644 index 00000000..62589edd --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE.APACHE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + https://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. 
+ + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + https://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE.BSD b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE.BSD new file mode 100644 index 00000000..ec1a29d3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE.BSD @@ -0,0 +1,27 @@ +Copyright (c) Individual contributors. +All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of PyCA Cryptography nor the names of its contributors + may be used to endorse or promote products derived from this software + without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON +ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE.PSF b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE.PSF new file mode 100644 index 00000000..4d3a4f57 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/LICENSE.PSF @@ -0,0 +1,41 @@ +1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and + the Individual or Organization ("Licensee") accessing and otherwise using Python + 2.7.12 software in source or binary form and its associated documentation. + +2. Subject to the terms and conditions of this License Agreement, PSF hereby + grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, + analyze, test, perform and/or display publicly, prepare derivative works, + distribute, and otherwise use Python 2.7.12 alone or in any derivative + version, provided, however, that PSF's License Agreement and PSF's notice of + copyright, i.e., "Copyright © 2001-2016 Python Software Foundation; All Rights + Reserved" are retained in Python 2.7.12 alone or in any derivative version + prepared by Licensee. + +3. In the event Licensee prepares a derivative work that is based on or + incorporates Python 2.7.12 or any part thereof, and wants to make the + derivative work available to others as provided herein, then Licensee hereby + agrees to include in any such work a brief summary of the changes made to Python + 2.7.12. + +4. PSF is making Python 2.7.12 available to Licensee on an "AS IS" basis. + PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF + EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR + WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE + USE OF PYTHON 2.7.12 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. + +5. 
PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 2.7.12 + FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF + MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 2.7.12, OR ANY DERIVATIVE + THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +6. This License Agreement will automatically terminate upon a material breach of + its terms and conditions. + +7. Nothing in this License Agreement shall be deemed to create any relationship + of agency, partnership, or joint venture between PSF and Licensee. This License + Agreement does not grant permission to use PSF trademarks or trade name in a + trademark sense to endorse or promote products or services of Licensee, or any + third party. + +8. By copying, installing or otherwise using Python 2.7.12, Licensee agrees + to be bound by the terms and conditions of this License Agreement. diff --git a/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/METADATA b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/METADATA new file mode 100644 index 00000000..155e9b14 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/METADATA @@ -0,0 +1,138 @@ +Metadata-Version: 2.1 +Name: cryptography +Version: 38.0.3 +Summary: cryptography is a package which provides cryptographic recipes and primitives to Python developers. +Home-page: https://github.com/pyca/cryptography +Author: The Python Cryptographic Authority and individual contributors +Author-email: cryptography-dev@python.org +License: BSD-3-Clause OR Apache-2.0 +Project-URL: Documentation, https://cryptography.io/ +Project-URL: Source, https://github.com/pyca/cryptography/ +Project-URL: Issues, https://github.com/pyca/cryptography/issues +Project-URL: Changelog, https://cryptography.io/en/latest/changelog/ +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: Apache Software License +Classifier: License :: OSI Approved :: BSD License +Classifier: Natural Language :: English +Classifier: Operating System :: MacOS :: MacOS X +Classifier: Operating System :: POSIX +Classifier: Operating System :: POSIX :: BSD +Classifier: Operating System :: POSIX :: Linux +Classifier: Operating System :: Microsoft :: Windows +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Security :: Cryptography +Requires-Python: >=3.6 +Description-Content-Type: text/x-rst +License-File: LICENSE +License-File: LICENSE.APACHE +License-File: LICENSE.BSD +License-File: LICENSE.PSF +Requires-Dist: cffi (>=1.12) +Provides-Extra: docs +Requires-Dist: sphinx (!=1.8.0,!=3.1.0,!=3.1.1,>=1.6.5) ; extra == 'docs' +Requires-Dist: sphinx-rtd-theme ; extra == 'docs' +Provides-Extra: docstest +Requires-Dist: pyenchant (>=1.6.11) ; extra == 'docstest' +Requires-Dist: twine (>=1.12.0) ; extra == 'docstest' +Requires-Dist: sphinxcontrib-spelling (>=4.0.1) ; extra == 'docstest' +Provides-Extra: pep8test +Requires-Dist: black ; extra == 
'pep8test' +Requires-Dist: flake8 ; extra == 'pep8test' +Requires-Dist: flake8-import-order ; extra == 'pep8test' +Requires-Dist: pep8-naming ; extra == 'pep8test' +Provides-Extra: sdist +Requires-Dist: setuptools-rust (>=0.11.4) ; extra == 'sdist' +Provides-Extra: ssh +Requires-Dist: bcrypt (>=3.1.5) ; extra == 'ssh' +Provides-Extra: test +Requires-Dist: pytest (>=6.2.0) ; extra == 'test' +Requires-Dist: pytest-benchmark ; extra == 'test' +Requires-Dist: pytest-cov ; extra == 'test' +Requires-Dist: pytest-subtests ; extra == 'test' +Requires-Dist: pytest-xdist ; extra == 'test' +Requires-Dist: pretend ; extra == 'test' +Requires-Dist: iso8601 ; extra == 'test' +Requires-Dist: pytz ; extra == 'test' +Requires-Dist: hypothesis (!=3.79.2,>=1.11.4) ; extra == 'test' + +pyca/cryptography +================= + +.. image:: https://img.shields.io/pypi/v/cryptography.svg + :target: https://pypi.org/project/cryptography/ + :alt: Latest Version + +.. image:: https://readthedocs.org/projects/cryptography/badge/?version=latest + :target: https://cryptography.io + :alt: Latest Docs + +.. image:: https://github.com/pyca/cryptography/workflows/CI/badge.svg?branch=main + :target: https://github.com/pyca/cryptography/actions?query=workflow%3ACI+branch%3Amain + + +``cryptography`` is a package which provides cryptographic recipes and +primitives to Python developers. Our goal is for it to be your "cryptographic +standard library". It supports Python 3.6+ and PyPy3 7.2+. + +``cryptography`` includes both high level recipes and low level interfaces to +common cryptographic algorithms such as symmetric ciphers, message digests, and +key derivation functions. For example, to encrypt something with +``cryptography``'s high level symmetric encryption recipe: + +.. code-block:: pycon + + >>> from cryptography.fernet import Fernet + >>> # Put this somewhere safe! + >>> key = Fernet.generate_key() + >>> f = Fernet(key) + >>> token = f.encrypt(b"A really secret message. Not for prying eyes.") + >>> token + b'...' + >>> f.decrypt(token) + b'A really secret message. Not for prying eyes.' + +You can find more information in the `documentation`_. + +You can install ``cryptography`` with: + +.. code-block:: console + + $ pip install cryptography + +For full details see `the installation documentation`_. + +Discussion +~~~~~~~~~~ + +If you run into bugs, you can file them in our `issue tracker`_. + +We maintain a `cryptography-dev`_ mailing list for development discussion. + +You can also join ``#pyca`` on ``irc.libera.chat`` to ask questions or get +involved. + +Security +~~~~~~~~ + +Need to report a security issue? Please consult our `security reporting`_ +documentation. + + +.. _`documentation`: https://cryptography.io/ +.. _`the installation documentation`: https://cryptography.io/en/latest/installation/ +.. _`issue tracker`: https://github.com/pyca/cryptography/issues +.. _`cryptography-dev`: https://mail.python.org/mailman/listinfo/cryptography-dev +.. 
_`security reporting`: https://cryptography.io/en/latest/security/ + + diff --git a/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/RECORD b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/RECORD new file mode 100644 index 00000000..1b0255aa --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/RECORD @@ -0,0 +1,181 @@ +cryptography-38.0.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +cryptography-38.0.3.dist-info/LICENSE,sha256=Q9rSzHUqtyHNmp827OcPtTq3cTVR8tPYaU2OjFoG1uI,323 +cryptography-38.0.3.dist-info/LICENSE.APACHE,sha256=qsc7MUj20dcRHbyjIJn2jSbGRMaBOuHk8F9leaomY_4,11360 +cryptography-38.0.3.dist-info/LICENSE.BSD,sha256=YCxMdILeZHndLpeTzaJ15eY9dz2s0eymiSMqtwCPtPs,1532 +cryptography-38.0.3.dist-info/LICENSE.PSF,sha256=aT7ApmKzn5laTyUrA6YiKUVHDBtvEsoCkY5O_g32S58,2415 +cryptography-38.0.3.dist-info/METADATA,sha256=ihu2jLKmqhkSRg4b9ABhBLdc2-cKJFz5cgo2ipkdXLI,5304 +cryptography-38.0.3.dist-info/RECORD,, +cryptography-38.0.3.dist-info/WHEEL,sha256=VrEAJcM75jIpq44oA5iX1CYRpwmHq5Dpav8-kETpfhI,148 +cryptography-38.0.3.dist-info/top_level.txt,sha256=KNaT-Sn2K4uxNaEbe6mYdDn3qWDMlp4y-MtWfB73nJc,13 +cryptography/__about__.py,sha256=YT2FtC_aPiqMfxR2qRpIsSAuVImJc8wiwoJga9iC5B8,417 +cryptography/__init__.py,sha256=j08JCN_u_m8eL-zxbXRxgsriW6Oe29oSSo_e2hyyasg,748 +cryptography/__pycache__/__about__.cpython-38.pyc,, +cryptography/__pycache__/__init__.cpython-38.pyc,, +cryptography/__pycache__/exceptions.cpython-38.pyc,, +cryptography/__pycache__/fernet.cpython-38.pyc,, +cryptography/__pycache__/utils.cpython-38.pyc,, +cryptography/exceptions.py,sha256=sN_VVTF_LuKMM6R-lIASFFuzAmz1uZ2Qbcdko9WyS64,1471 +cryptography/fernet.py,sha256=pRH_HKl1ESBUBTDYcfHTk_5w31WPaB5MW0RuDhr6xa0,6843 +cryptography/hazmat/__init__.py,sha256=OYlvgprzULzZlsf3yYTsd6VUVyQmpsbHjgJdNnsyRwE,418 +cryptography/hazmat/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/__pycache__/_oid.cpython-38.pyc,, +cryptography/hazmat/_oid.py,sha256=iM71y6BjXe7YElHGMU92gNvukXcrIiejdhzuQkshmuk,13635 +cryptography/hazmat/backends/__init__.py,sha256=bgrjB1SX2vXX-rmfG7A4PqGkq-isqQVXGaZtjWHAgj0,324 +cryptography/hazmat/backends/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__init__.py,sha256=7rpz1Z3eV9vZy_d2iLrwC8Oz0vEruDFrjJlc6W2ZDXA,271 +cryptography/hazmat/backends/openssl/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/aead.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/backend.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/ciphers.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/cmac.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/decode_asn1.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/dh.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/dsa.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/ec.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/ed25519.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/ed448.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/hashes.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/hmac.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/poly1305.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/rsa.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/utils.cpython-38.pyc,, 
+cryptography/hazmat/backends/openssl/__pycache__/x25519.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/x448.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/x509.cpython-38.pyc,, +cryptography/hazmat/backends/openssl/aead.py,sha256=1GASyrJPO8a-mDPTT7VJZXPb_0zEdrkW-Wu_rxV-6RQ,8442 +cryptography/hazmat/backends/openssl/backend.py,sha256=hzepYfPZxq57fj4SKFe9voksS4E7sGz2hx3bZVUFklg,101162 +cryptography/hazmat/backends/openssl/ciphers.py,sha256=n3rrPQZi1blJBKqIWeMG6-U6YTvEb8rXGQKn8i-kFog,10342 +cryptography/hazmat/backends/openssl/cmac.py,sha256=K5-S0H72KHZH-WPAcHL5jCtcNyoBZSMe7VmxGn8_VWA,3005 +cryptography/hazmat/backends/openssl/decode_asn1.py,sha256=nSqtgO5MJVf_UUkvw9tez10zhGnsGHq24OP1X2GKOe4,1113 +cryptography/hazmat/backends/openssl/dh.py,sha256=9fwPordToELTkeJ-c7TuO9NiE1vfUBejk2QEUZbvo4s,12230 +cryptography/hazmat/backends/openssl/dsa.py,sha256=awfP80ykAfb4C_I-aOo-PnGU1DF6uf8bnEi-jld18ec,8888 +cryptography/hazmat/backends/openssl/ec.py,sha256=kgxwW508FTXDwGG-7pSywBLlICZKKfo4bcTHnNpsvJY,11103 +cryptography/hazmat/backends/openssl/ed25519.py,sha256=irHT-jSbpTNMMHqw5T885uzAi3Syf3kaaHuTnKgQPSg,5920 +cryptography/hazmat/backends/openssl/ed448.py,sha256=K8HDEiXl98QGJ-4llT4SVZf5-xe8aCuci00DkZf0lhw,5874 +cryptography/hazmat/backends/openssl/hashes.py,sha256=3L5bkCOo2LbRSVNGLca_9rpCZ2zb8ISBrMLtts1BkEw,3241 +cryptography/hazmat/backends/openssl/hmac.py,sha256=9RX8bo9ywJievoodxjmqCXmD2iUWyH2jBmw78Hb-pOY,3095 +cryptography/hazmat/backends/openssl/poly1305.py,sha256=_qyGCXNaQVCFpa1qjb_9UtsI6lmnki_15Jbc5vihbeE,2514 +cryptography/hazmat/backends/openssl/rsa.py,sha256=R2qpvJbkHmx9AHJHRKvU7crYx7BBbpAgEg8bCfQRqLQ,21562 +cryptography/hazmat/backends/openssl/utils.py,sha256=7Ara81KkY0QCLPqW6kUG9dEsp52cZ3kOUJczwEpecJ0,1977 +cryptography/hazmat/backends/openssl/x25519.py,sha256=oA_ao4o27ki_OAx0UXNeI2ItZ84Xg_li7It1DxFlrZ0,4753 +cryptography/hazmat/backends/openssl/x448.py,sha256=a_zgqGUpGFvyKEoKRR1vgNdD_gk1gxGYpBp1a6x9HuE,4338 +cryptography/hazmat/backends/openssl/x509.py,sha256=WUoRC6UDM9FkOdn6xR5Mk-v_WCq7eJryenGN9ni8L-A,1452 +cryptography/hazmat/bindings/__init__.py,sha256=s9oKCQ2ycFdXoERdS1imafueSkBsL9kvbyfghaauZ9Y,180 +cryptography/hazmat/bindings/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/bindings/_openssl.abi3.so,sha256=2YMlT880lz8dWGyDsKNXCYKFzI5cdB3y7N8hfIvTgLU,8607136 +cryptography/hazmat/bindings/_openssl.pyi,sha256=mpNJLuYLbCVrd5i33FBTmWwL_55Dw7JPkSLlSX9Q7oI,230 +cryptography/hazmat/bindings/_rust.abi3.so,sha256=XOBXetJpDr16L1w8dH6Ec1B9qwlx49G6sAkd2vFU4w0,3586376 +cryptography/hazmat/bindings/_rust/__init__.pyi,sha256=GztNJPaJWej18fxbmSUvGy-3_g4je8sn6--jnPcaM9c,859 +cryptography/hazmat/bindings/_rust/asn1.pyi,sha256=9CyI-grOsLQB_hfnhJPoG9dNOdJ7Zg6B0iUpzCowh44,592 +cryptography/hazmat/bindings/_rust/ocsp.pyi,sha256=-rbDG2McEOyKdbLLCCZJLZpt6-Vw7y6RhttmqXl30Ds,949 +cryptography/hazmat/bindings/_rust/x509.pyi,sha256=jx2XffYsHNB_Cc-evLnlM3fluMuT3_L2bC_-ZR-mwek,1678 +cryptography/hazmat/bindings/openssl/__init__.py,sha256=s9oKCQ2ycFdXoERdS1imafueSkBsL9kvbyfghaauZ9Y,180 +cryptography/hazmat/bindings/openssl/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/bindings/openssl/__pycache__/_conditional.cpython-38.pyc,, +cryptography/hazmat/bindings/openssl/__pycache__/binding.cpython-38.pyc,, +cryptography/hazmat/bindings/openssl/_conditional.py,sha256=usiy9QHCkbs-grw5vpzUNvJLrGptCpevnvqcN_6kfsU,10667 +cryptography/hazmat/bindings/openssl/binding.py,sha256=ubeB3piiCcAHBoupGrf6QwtZ2zhZv7vY5qn7yWLw4S8,8023 
+cryptography/hazmat/primitives/__init__.py,sha256=s9oKCQ2ycFdXoERdS1imafueSkBsL9kvbyfghaauZ9Y,180 +cryptography/hazmat/primitives/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/primitives/__pycache__/_asymmetric.cpython-38.pyc,, +cryptography/hazmat/primitives/__pycache__/_cipheralgorithm.cpython-38.pyc,, +cryptography/hazmat/primitives/__pycache__/_serialization.cpython-38.pyc,, +cryptography/hazmat/primitives/__pycache__/cmac.cpython-38.pyc,, +cryptography/hazmat/primitives/__pycache__/constant_time.cpython-38.pyc,, +cryptography/hazmat/primitives/__pycache__/hashes.cpython-38.pyc,, +cryptography/hazmat/primitives/__pycache__/hmac.cpython-38.pyc,, +cryptography/hazmat/primitives/__pycache__/keywrap.cpython-38.pyc,, +cryptography/hazmat/primitives/__pycache__/padding.cpython-38.pyc,, +cryptography/hazmat/primitives/__pycache__/poly1305.cpython-38.pyc,, +cryptography/hazmat/primitives/_asymmetric.py,sha256=nVJwmxkakirAXfFp410pC4kY_CinzN5FSJwhEn2IE34,485 +cryptography/hazmat/primitives/_cipheralgorithm.py,sha256=zd7N8rBYWaf8tPM7GDtZ9vUgarK_P0_PUNCFi3A0u0c,1016 +cryptography/hazmat/primitives/_serialization.py,sha256=HssBsIm3rNVPct1nZTACJzbymZc2WaZAWdkg1l5slD0,5196 +cryptography/hazmat/primitives/asymmetric/__init__.py,sha256=s9oKCQ2ycFdXoERdS1imafueSkBsL9kvbyfghaauZ9Y,180 +cryptography/hazmat/primitives/asymmetric/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/dh.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/dsa.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/ec.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/ed25519.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/ed448.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/padding.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/rsa.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/types.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/utils.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/x25519.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/x448.cpython-38.pyc,, +cryptography/hazmat/primitives/asymmetric/dh.py,sha256=pjjgKFcn2bCAaL_5zr0ygwXM8pHJFyAO6koJFkzhQb8,6604 +cryptography/hazmat/primitives/asymmetric/dsa.py,sha256=dIo6lYiHWRWUCxwejAi01w1-3jjmzEuJovqaVqDO3_g,7870 +cryptography/hazmat/primitives/asymmetric/ec.py,sha256=wX8wH9bD7g7-YxmINam_s9cPc9RUPrui2QdIKg6Q3Nc,14468 +cryptography/hazmat/primitives/asymmetric/ed25519.py,sha256=1qOl1UWV_-cXKHhwlFSyPBdhpx2HMDRukkI6eI5i8vM,2728 +cryptography/hazmat/primitives/asymmetric/ed448.py,sha256=oR-j4jGcWUnGxWi1GygHxVZbgkSOKHsR6y1E3Lf6wYM,2647 +cryptography/hazmat/primitives/asymmetric/padding.py,sha256=EkKuY9e6UFqSuQ0LvyKYKl_L19tOfNCTlHWEiKgHeUc,2690 +cryptography/hazmat/primitives/asymmetric/rsa.py,sha256=GiCIuBuJdpeew-yJ7mnTF4KFH_FUJaut1r-d6TRs31s,11322 +cryptography/hazmat/primitives/asymmetric/types.py,sha256=VidxjUWPOMB8q8vGiaXMhY715Zw8U9Vd4_rkqjOagRE,1814 +cryptography/hazmat/primitives/asymmetric/utils.py,sha256=5hD4KjfMbmozeFq08PLVunHr4FgeVzV1NkKalECM26s,756 +cryptography/hazmat/primitives/asymmetric/x25519.py,sha256=-nbaGlgT1sufO9Ic-urwKDql8Da0U3GL6hZJIMqHgVc,2588 +cryptography/hazmat/primitives/asymmetric/x448.py,sha256=V3lxb1VOiRTa3bzVUC3uZat2ogfExUOdktCIGUUMZ2Y,2556 +cryptography/hazmat/primitives/ciphers/__init__.py,sha256=Qp78Y3PDSRfwp7DDa3pezlLrED_QFhic_LvDw4LM9ZQ,646 
+cryptography/hazmat/primitives/ciphers/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/primitives/ciphers/__pycache__/aead.cpython-38.pyc,, +cryptography/hazmat/primitives/ciphers/__pycache__/algorithms.cpython-38.pyc,, +cryptography/hazmat/primitives/ciphers/__pycache__/base.cpython-38.pyc,, +cryptography/hazmat/primitives/ciphers/__pycache__/modes.cpython-38.pyc,, +cryptography/hazmat/primitives/ciphers/aead.py,sha256=QnJD2doZ8XdpCIrDwqJBNgaw2eG9Tx4FWirIP159MAg,11488 +cryptography/hazmat/primitives/ciphers/algorithms.py,sha256=J1qeJK97fpSaX1E0ENsWvJG_qKGoOZvm57baBGOQofQ,5135 +cryptography/hazmat/primitives/ciphers/base.py,sha256=AiCYCzXbSZ9wQbXWMYc60IKmzqz5619YdaaF0zVr4rY,8251 +cryptography/hazmat/primitives/ciphers/modes.py,sha256=jRQocLF-I2BnZRKV66ZL9sSIHGhUN8YdvA3CXKXoxqg,8298 +cryptography/hazmat/primitives/cmac.py,sha256=ODkc7EonY1cRxyJ0SYOuwtiYQv6B0ZPxJQm3rXxfXd4,2037 +cryptography/hazmat/primitives/constant_time.py,sha256=6bkW00QjhKusdgsQbexXhMlGX0XRN59XNmxWS2W38NA,387 +cryptography/hazmat/primitives/hashes.py,sha256=cpaYjgkazlq7Xw0MVoR3cp17mD0TgyEvhZQbyoAWHzU,5996 +cryptography/hazmat/primitives/hmac.py,sha256=M_sa4smPIkO8ra17Xl_cM0daRhGCozUu_8gnHePEIb0,2131 +cryptography/hazmat/primitives/kdf/__init__.py,sha256=DcZhzfLG8d8IYBH771lGTVU5S87OQDpu3nrfOwZnsmA,715 +cryptography/hazmat/primitives/kdf/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/concatkdf.cpython-38.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/hkdf.cpython-38.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/kbkdf.cpython-38.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/pbkdf2.cpython-38.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/scrypt.cpython-38.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/x963kdf.cpython-38.pyc,, +cryptography/hazmat/primitives/kdf/concatkdf.py,sha256=5YXw8cLZCBYT6rVDGS5URQEeFiPW-ZRBRcPdZQIxTMA,3772 +cryptography/hazmat/primitives/kdf/hkdf.py,sha256=LlDQbCvlNzuLa_UJXrkG5fXGjAjor5Wunv2378TBmms,3031 +cryptography/hazmat/primitives/kdf/kbkdf.py,sha256=Ys2ITSbEw49V1v_DagQBd17owQr2A2iyPue4mot4Z_g,9196 +cryptography/hazmat/primitives/kdf/pbkdf2.py,sha256=wEMH4CJfPccCg9apQLXyWUWBrZLTpYLLnoZEnzvaHQo,2032 +cryptography/hazmat/primitives/kdf/scrypt.py,sha256=JvX_cD0o0Op5EcFNeZhr-vI5sYv_LdnJ6kNEbW3u5ow,2228 +cryptography/hazmat/primitives/kdf/x963kdf.py,sha256=JsdrJhw2IJVYkl8JIWUN66h7DrKZM2RoQ_tw_iKAvdI,2018 +cryptography/hazmat/primitives/keywrap.py,sha256=TWqyG9K7k-Ymq4kcIw7u3NIKUPVDtv6bimwxIJYTe20,5643 +cryptography/hazmat/primitives/padding.py,sha256=xruasOE5Cd8KEQ-yp9W6v9WKPvKH-GudHCPKQ7A8HfI,6207 +cryptography/hazmat/primitives/poly1305.py,sha256=QvxPMrqjgKJt0mOZSeZKk4NcxsNCd2kgfI-X1CmyUW4,1837 +cryptography/hazmat/primitives/serialization/__init__.py,sha256=f5dOLmDEvZROlG7HysPXY9DhvM9VXVRA2kwErJobzWw,1197 +cryptography/hazmat/primitives/serialization/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/primitives/serialization/__pycache__/base.cpython-38.pyc,, +cryptography/hazmat/primitives/serialization/__pycache__/pkcs12.cpython-38.pyc,, +cryptography/hazmat/primitives/serialization/__pycache__/pkcs7.cpython-38.pyc,, +cryptography/hazmat/primitives/serialization/__pycache__/ssh.cpython-38.pyc,, +cryptography/hazmat/primitives/serialization/base.py,sha256=yw8_yzIvruT6fmS-KrTmIXbAF00rItH48WXTPOSLdJ4,1761 +cryptography/hazmat/primitives/serialization/pkcs12.py,sha256=D0kpFOQdXdRGir1qg_mCchUpr1mlwhphGGl8OD7DR4Y,6725 
+cryptography/hazmat/primitives/serialization/pkcs7.py,sha256=LnISP-1SEDXCpsoEbR0EfuIlWm8eJAgWupt0gvHyyIU,5870 +cryptography/hazmat/primitives/serialization/ssh.py,sha256=YWlhKV8uTil_LOLD9Hm5X5EZB1slviW1eg1AL4Jid1A,23935 +cryptography/hazmat/primitives/twofactor/__init__.py,sha256=ZHo4zwWidFP2RWFl8luiNuYkVMZPghzx54izPNSCtD4,222 +cryptography/hazmat/primitives/twofactor/__pycache__/__init__.cpython-38.pyc,, +cryptography/hazmat/primitives/twofactor/__pycache__/hotp.cpython-38.pyc,, +cryptography/hazmat/primitives/twofactor/__pycache__/totp.cpython-38.pyc,, +cryptography/hazmat/primitives/twofactor/hotp.py,sha256=v4wkTbdc1E53POx6pdNnEUBvANbmt4f6scQSsTgABeU,2989 +cryptography/hazmat/primitives/twofactor/totp.py,sha256=bIIxOI-LcLGNahB5kN7A_TwEyYMTsLjHd8eJc4b2cLg,1449 +cryptography/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cryptography/utils.py,sha256=eZS8-xeVAaGoyJYI23fN8uo0wPqo0ahl8rxobLOw7Zk,5031 +cryptography/x509/__init__.py,sha256=yC0TbuvPmWL1U4rEY-0m46SayuxCfPVNFWjJJdi5lY0,7654 +cryptography/x509/__pycache__/__init__.cpython-38.pyc,, +cryptography/x509/__pycache__/base.cpython-38.pyc,, +cryptography/x509/__pycache__/certificate_transparency.cpython-38.pyc,, +cryptography/x509/__pycache__/extensions.cpython-38.pyc,, +cryptography/x509/__pycache__/general_name.cpython-38.pyc,, +cryptography/x509/__pycache__/name.cpython-38.pyc,, +cryptography/x509/__pycache__/ocsp.cpython-38.pyc,, +cryptography/x509/__pycache__/oid.cpython-38.pyc,, +cryptography/x509/base.py,sha256=_LXtW3M-PeFGPwhY3BYS3tU1Tt9Bof0IzWmtgR_Uf1E,33942 +cryptography/x509/certificate_transparency.py,sha256=ks_yQPK8cCSSFF6htSs_Fcujwys7tc9OAPWn2TsPLqY,2130 +cryptography/x509/extensions.py,sha256=6-O2XdVn3OtYnX-qMT265RczUIDw1Dd79r5MVKEfI3E,65316 +cryptography/x509/general_name.py,sha256=S_kJd4ZsNGrMfi2osfFJEWqPxy3oPCAWpLb91yhxzPs,7896 +cryptography/x509/name.py,sha256=SlgpPyg_zEE-ysK36SKJIOgKYH1AKwNZ7wT8mC1YfKc,14846 +cryptography/x509/ocsp.py,sha256=OQKsqW_Y4mWY53UT_JG79RJR19xt53Q-iQSSw4m0kZM,16691 +cryptography/x509/oid.py,sha256=CLIlQwzE3PQXMvkKep4JbzVUaRDl_stwcX_U6-s2cNw,794 diff --git a/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/WHEEL b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/WHEEL new file mode 100644 index 00000000..25950b4e --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: false +Tag: cp36-abi3-manylinux_2_17_x86_64 +Tag: cp36-abi3-manylinux2014_x86_64 + diff --git a/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/top_level.txt b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/top_level.txt new file mode 100644 index 00000000..0d38bc5e --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography-38.0.3.dist-info/top_level.txt @@ -0,0 +1 @@ +cryptography diff --git a/env/lib/python3.8/site-packages/cryptography/__about__.py b/env/lib/python3.8/site-packages/cryptography/__about__.py new file mode 100644 index 00000000..15425c91 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/__about__.py @@ -0,0 +1,15 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ + +__all__ = [ + "__version__", + "__author__", + "__copyright__", +] + +__version__ = "38.0.3" + +__author__ = "The Python Cryptographic Authority and individual contributors" +__copyright__ = "Copyright 2013-2022 {}".format(__author__) diff --git a/env/lib/python3.8/site-packages/cryptography/__init__.py b/env/lib/python3.8/site-packages/cryptography/__init__.py new file mode 100644 index 00000000..599bf516 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/__init__.py @@ -0,0 +1,29 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import sys +import warnings + +from cryptography.__about__ import ( + __author__, + __copyright__, + __version__, +) +from cryptography.utils import CryptographyDeprecationWarning + + +__all__ = [ + "__version__", + "__author__", + "__copyright__", +] + +if sys.version_info[:2] == (3, 6): + warnings.warn( + "Python 3.6 is no longer supported by the Python core team. " + "Therefore, support for it is deprecated in cryptography and will be" + " removed in a future release.", + CryptographyDeprecationWarning, + stacklevel=2, + ) diff --git a/env/lib/python3.8/site-packages/cryptography/exceptions.py b/env/lib/python3.8/site-packages/cryptography/exceptions.py new file mode 100644 index 00000000..a315703c --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/exceptions.py @@ -0,0 +1,68 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import typing + +from cryptography import utils + +if typing.TYPE_CHECKING: + from cryptography.hazmat.bindings.openssl.binding import ( + _OpenSSLErrorWithText, + ) + + +class _Reasons(utils.Enum): + BACKEND_MISSING_INTERFACE = 0 + UNSUPPORTED_HASH = 1 + UNSUPPORTED_CIPHER = 2 + UNSUPPORTED_PADDING = 3 + UNSUPPORTED_MGF = 4 + UNSUPPORTED_PUBLIC_KEY_ALGORITHM = 5 + UNSUPPORTED_ELLIPTIC_CURVE = 6 + UNSUPPORTED_SERIALIZATION = 7 + UNSUPPORTED_X509 = 8 + UNSUPPORTED_EXCHANGE_ALGORITHM = 9 + UNSUPPORTED_DIFFIE_HELLMAN = 10 + UNSUPPORTED_MAC = 11 + + +class UnsupportedAlgorithm(Exception): + def __init__( + self, message: str, reason: typing.Optional[_Reasons] = None + ) -> None: + super(UnsupportedAlgorithm, self).__init__(message) + self._reason = reason + + +class AlreadyFinalized(Exception): + pass + + +class AlreadyUpdated(Exception): + pass + + +class NotYetFinalized(Exception): + pass + + +class InvalidTag(Exception): + pass + + +class InvalidSignature(Exception): + pass + + +class InternalError(Exception): + def __init__( + self, msg: str, err_code: typing.List["_OpenSSLErrorWithText"] + ) -> None: + super(InternalError, self).__init__(msg) + self.err_code = err_code + + +class InvalidKey(Exception): + pass diff --git a/env/lib/python3.8/site-packages/cryptography/fernet.py b/env/lib/python3.8/site-packages/cryptography/fernet.py new file mode 100644 index 00000000..c18c3bcd --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/fernet.py @@ -0,0 +1,220 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
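+
+# Fernet implements symmetric authenticated encryption: AES-128-CBC with
+# PKCS7 padding for confidentiality and HMAC-SHA256 for integrity, emitted
+# as a URL-safe base64 token that embeds a timestamp for TTL enforcement.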
+ + +import base64 +import binascii +import os +import time +import typing + +from cryptography import utils +from cryptography.exceptions import InvalidSignature +from cryptography.hazmat.primitives import hashes, padding +from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes +from cryptography.hazmat.primitives.hmac import HMAC + + +class InvalidToken(Exception): + pass + + +_MAX_CLOCK_SKEW = 60 + + +class Fernet: + def __init__( + self, + key: typing.Union[bytes, str], + backend: typing.Any = None, + ): + try: + key = base64.urlsafe_b64decode(key) + except binascii.Error as exc: + raise ValueError( + "Fernet key must be 32 url-safe base64-encoded bytes." + ) from exc + if len(key) != 32: + raise ValueError( + "Fernet key must be 32 url-safe base64-encoded bytes." + ) + + self._signing_key = key[:16] + self._encryption_key = key[16:] + + @classmethod + def generate_key(cls) -> bytes: + return base64.urlsafe_b64encode(os.urandom(32)) + + def encrypt(self, data: bytes) -> bytes: + return self.encrypt_at_time(data, int(time.time())) + + def encrypt_at_time(self, data: bytes, current_time: int) -> bytes: + iv = os.urandom(16) + return self._encrypt_from_parts(data, current_time, iv) + + def _encrypt_from_parts( + self, data: bytes, current_time: int, iv: bytes + ) -> bytes: + utils._check_bytes("data", data) + + padder = padding.PKCS7(algorithms.AES.block_size).padder() + padded_data = padder.update(data) + padder.finalize() + encryptor = Cipher( + algorithms.AES(self._encryption_key), + modes.CBC(iv), + ).encryptor() + ciphertext = encryptor.update(padded_data) + encryptor.finalize() + + basic_parts = ( + b"\x80" + + current_time.to_bytes(length=8, byteorder="big") + + iv + + ciphertext + ) + + h = HMAC(self._signing_key, hashes.SHA256()) + h.update(basic_parts) + hmac = h.finalize() + return base64.urlsafe_b64encode(basic_parts + hmac) + + def decrypt( + self, token: typing.Union[bytes, str], ttl: typing.Optional[int] = None + ) -> bytes: + timestamp, data = Fernet._get_unverified_token_data(token) + if ttl is None: + time_info = None + else: + time_info = (ttl, int(time.time())) + return self._decrypt_data(data, timestamp, time_info) + + def decrypt_at_time( + self, token: typing.Union[bytes, str], ttl: int, current_time: int + ) -> bytes: + if ttl is None: + raise ValueError( + "decrypt_at_time() can only be used with a non-None ttl" + ) + timestamp, data = Fernet._get_unverified_token_data(token) + return self._decrypt_data(data, timestamp, (ttl, current_time)) + + def extract_timestamp(self, token: typing.Union[bytes, str]) -> int: + timestamp, data = Fernet._get_unverified_token_data(token) + # Verify the token was not tampered with. 
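+        # A token is laid out as: 0x80 version byte | 8-byte big-endian
+        # timestamp | 16-byte IV | ciphertext | 32-byte HMAC-SHA256. The HMAC
+        # covers everything before it, so verifying the signature also
+        # authenticates the timestamp returned here.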
+ self._verify_signature(data) + return timestamp + + @staticmethod + def _get_unverified_token_data( + token: typing.Union[bytes, str] + ) -> typing.Tuple[int, bytes]: + if not isinstance(token, (str, bytes)): + raise TypeError("token must be bytes or str") + + try: + data = base64.urlsafe_b64decode(token) + except (TypeError, binascii.Error): + raise InvalidToken + + if not data or data[0] != 0x80: + raise InvalidToken + + if len(data) < 9: + raise InvalidToken + + timestamp = int.from_bytes(data[1:9], byteorder="big") + return timestamp, data + + def _verify_signature(self, data: bytes) -> None: + h = HMAC(self._signing_key, hashes.SHA256()) + h.update(data[:-32]) + try: + h.verify(data[-32:]) + except InvalidSignature: + raise InvalidToken + + def _decrypt_data( + self, + data: bytes, + timestamp: int, + time_info: typing.Optional[typing.Tuple[int, int]], + ) -> bytes: + if time_info is not None: + ttl, current_time = time_info + if timestamp + ttl < current_time: + raise InvalidToken + + if current_time + _MAX_CLOCK_SKEW < timestamp: + raise InvalidToken + + self._verify_signature(data) + + iv = data[9:25] + ciphertext = data[25:-32] + decryptor = Cipher( + algorithms.AES(self._encryption_key), modes.CBC(iv) + ).decryptor() + plaintext_padded = decryptor.update(ciphertext) + try: + plaintext_padded += decryptor.finalize() + except ValueError: + raise InvalidToken + unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder() + + unpadded = unpadder.update(plaintext_padded) + try: + unpadded += unpadder.finalize() + except ValueError: + raise InvalidToken + return unpadded + + +class MultiFernet: + def __init__(self, fernets: typing.Iterable[Fernet]): + fernets = list(fernets) + if not fernets: + raise ValueError( + "MultiFernet requires at least one Fernet instance" + ) + self._fernets = fernets + + def encrypt(self, msg: bytes) -> bytes: + return self.encrypt_at_time(msg, int(time.time())) + + def encrypt_at_time(self, msg: bytes, current_time: int) -> bytes: + return self._fernets[0].encrypt_at_time(msg, current_time) + + def rotate(self, msg: typing.Union[bytes, str]) -> bytes: + timestamp, data = Fernet._get_unverified_token_data(msg) + for f in self._fernets: + try: + p = f._decrypt_data(data, timestamp, None) + break + except InvalidToken: + pass + else: + raise InvalidToken + + iv = os.urandom(16) + return self._fernets[0]._encrypt_from_parts(p, timestamp, iv) + + def decrypt( + self, msg: typing.Union[bytes, str], ttl: typing.Optional[int] = None + ) -> bytes: + for f in self._fernets: + try: + return f.decrypt(msg, ttl) + except InvalidToken: + pass + raise InvalidToken + + def decrypt_at_time( + self, msg: typing.Union[bytes, str], ttl: int, current_time: int + ) -> bytes: + for f in self._fernets: + try: + return f.decrypt_at_time(msg, ttl, current_time) + except InvalidToken: + pass + raise InvalidToken diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/__init__.py new file mode 100644 index 00000000..007694bc --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/__init__.py @@ -0,0 +1,10 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. +""" +Hazardous Materials + +This is a "Hazardous Materials" module. 
You should ONLY use it if you're +100% absolutely sure that you know what you're doing because this module +is full of land mines, dragons, and dinosaurs with laser guns. +""" diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/_oid.py b/env/lib/python3.8/site-packages/cryptography/hazmat/_oid.py new file mode 100644 index 00000000..604cf07b --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/_oid.py @@ -0,0 +1,285 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.bindings._rust import ( + ObjectIdentifier as ObjectIdentifier, +) +from cryptography.hazmat.primitives import hashes + + +class ExtensionOID: + SUBJECT_DIRECTORY_ATTRIBUTES = ObjectIdentifier("2.5.29.9") + SUBJECT_KEY_IDENTIFIER = ObjectIdentifier("2.5.29.14") + KEY_USAGE = ObjectIdentifier("2.5.29.15") + SUBJECT_ALTERNATIVE_NAME = ObjectIdentifier("2.5.29.17") + ISSUER_ALTERNATIVE_NAME = ObjectIdentifier("2.5.29.18") + BASIC_CONSTRAINTS = ObjectIdentifier("2.5.29.19") + NAME_CONSTRAINTS = ObjectIdentifier("2.5.29.30") + CRL_DISTRIBUTION_POINTS = ObjectIdentifier("2.5.29.31") + CERTIFICATE_POLICIES = ObjectIdentifier("2.5.29.32") + POLICY_MAPPINGS = ObjectIdentifier("2.5.29.33") + AUTHORITY_KEY_IDENTIFIER = ObjectIdentifier("2.5.29.35") + POLICY_CONSTRAINTS = ObjectIdentifier("2.5.29.36") + EXTENDED_KEY_USAGE = ObjectIdentifier("2.5.29.37") + FRESHEST_CRL = ObjectIdentifier("2.5.29.46") + INHIBIT_ANY_POLICY = ObjectIdentifier("2.5.29.54") + ISSUING_DISTRIBUTION_POINT = ObjectIdentifier("2.5.29.28") + AUTHORITY_INFORMATION_ACCESS = ObjectIdentifier("1.3.6.1.5.5.7.1.1") + SUBJECT_INFORMATION_ACCESS = ObjectIdentifier("1.3.6.1.5.5.7.1.11") + OCSP_NO_CHECK = ObjectIdentifier("1.3.6.1.5.5.7.48.1.5") + TLS_FEATURE = ObjectIdentifier("1.3.6.1.5.5.7.1.24") + CRL_NUMBER = ObjectIdentifier("2.5.29.20") + DELTA_CRL_INDICATOR = ObjectIdentifier("2.5.29.27") + PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS = ObjectIdentifier( + "1.3.6.1.4.1.11129.2.4.2" + ) + PRECERT_POISON = ObjectIdentifier("1.3.6.1.4.1.11129.2.4.3") + SIGNED_CERTIFICATE_TIMESTAMPS = ObjectIdentifier("1.3.6.1.4.1.11129.2.4.5") + + +class OCSPExtensionOID: + NONCE = ObjectIdentifier("1.3.6.1.5.5.7.48.1.2") + + +class CRLEntryExtensionOID: + CERTIFICATE_ISSUER = ObjectIdentifier("2.5.29.29") + CRL_REASON = ObjectIdentifier("2.5.29.21") + INVALIDITY_DATE = ObjectIdentifier("2.5.29.24") + + +class NameOID: + COMMON_NAME = ObjectIdentifier("2.5.4.3") + COUNTRY_NAME = ObjectIdentifier("2.5.4.6") + LOCALITY_NAME = ObjectIdentifier("2.5.4.7") + STATE_OR_PROVINCE_NAME = ObjectIdentifier("2.5.4.8") + STREET_ADDRESS = ObjectIdentifier("2.5.4.9") + ORGANIZATION_NAME = ObjectIdentifier("2.5.4.10") + ORGANIZATIONAL_UNIT_NAME = ObjectIdentifier("2.5.4.11") + SERIAL_NUMBER = ObjectIdentifier("2.5.4.5") + SURNAME = ObjectIdentifier("2.5.4.4") + GIVEN_NAME = ObjectIdentifier("2.5.4.42") + TITLE = ObjectIdentifier("2.5.4.12") + GENERATION_QUALIFIER = ObjectIdentifier("2.5.4.44") + X500_UNIQUE_IDENTIFIER = ObjectIdentifier("2.5.4.45") + DN_QUALIFIER = ObjectIdentifier("2.5.4.46") + PSEUDONYM = ObjectIdentifier("2.5.4.65") + USER_ID = ObjectIdentifier("0.9.2342.19200300.100.1.1") + DOMAIN_COMPONENT = ObjectIdentifier("0.9.2342.19200300.100.1.25") + EMAIL_ADDRESS = ObjectIdentifier("1.2.840.113549.1.9.1") + JURISDICTION_COUNTRY_NAME = ObjectIdentifier("1.3.6.1.4.1.311.60.2.1.3") + 
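+    # The 1.3.6.1.4.1.311.60.2.1.x identifiers below live under the Microsoft
+    # private-enterprise arc and carry the jurisdiction-of-incorporation
+    # attributes used in EV certificates.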
JURISDICTION_LOCALITY_NAME = ObjectIdentifier("1.3.6.1.4.1.311.60.2.1.1") + JURISDICTION_STATE_OR_PROVINCE_NAME = ObjectIdentifier( + "1.3.6.1.4.1.311.60.2.1.2" + ) + BUSINESS_CATEGORY = ObjectIdentifier("2.5.4.15") + POSTAL_ADDRESS = ObjectIdentifier("2.5.4.16") + POSTAL_CODE = ObjectIdentifier("2.5.4.17") + INN = ObjectIdentifier("1.2.643.3.131.1.1") + OGRN = ObjectIdentifier("1.2.643.100.1") + SNILS = ObjectIdentifier("1.2.643.100.3") + UNSTRUCTURED_NAME = ObjectIdentifier("1.2.840.113549.1.9.2") + + +class SignatureAlgorithmOID: + RSA_WITH_MD5 = ObjectIdentifier("1.2.840.113549.1.1.4") + RSA_WITH_SHA1 = ObjectIdentifier("1.2.840.113549.1.1.5") + # This is an alternate OID for RSA with SHA1 that is occasionally seen + _RSA_WITH_SHA1 = ObjectIdentifier("1.3.14.3.2.29") + RSA_WITH_SHA224 = ObjectIdentifier("1.2.840.113549.1.1.14") + RSA_WITH_SHA256 = ObjectIdentifier("1.2.840.113549.1.1.11") + RSA_WITH_SHA384 = ObjectIdentifier("1.2.840.113549.1.1.12") + RSA_WITH_SHA512 = ObjectIdentifier("1.2.840.113549.1.1.13") + RSA_WITH_SHA3_224 = ObjectIdentifier("2.16.840.1.101.3.4.3.13") + RSA_WITH_SHA3_256 = ObjectIdentifier("2.16.840.1.101.3.4.3.14") + RSA_WITH_SHA3_384 = ObjectIdentifier("2.16.840.1.101.3.4.3.15") + RSA_WITH_SHA3_512 = ObjectIdentifier("2.16.840.1.101.3.4.3.16") + RSASSA_PSS = ObjectIdentifier("1.2.840.113549.1.1.10") + ECDSA_WITH_SHA1 = ObjectIdentifier("1.2.840.10045.4.1") + ECDSA_WITH_SHA224 = ObjectIdentifier("1.2.840.10045.4.3.1") + ECDSA_WITH_SHA256 = ObjectIdentifier("1.2.840.10045.4.3.2") + ECDSA_WITH_SHA384 = ObjectIdentifier("1.2.840.10045.4.3.3") + ECDSA_WITH_SHA512 = ObjectIdentifier("1.2.840.10045.4.3.4") + ECDSA_WITH_SHA3_224 = ObjectIdentifier("2.16.840.1.101.3.4.3.9") + ECDSA_WITH_SHA3_256 = ObjectIdentifier("2.16.840.1.101.3.4.3.10") + ECDSA_WITH_SHA3_384 = ObjectIdentifier("2.16.840.1.101.3.4.3.11") + ECDSA_WITH_SHA3_512 = ObjectIdentifier("2.16.840.1.101.3.4.3.12") + DSA_WITH_SHA1 = ObjectIdentifier("1.2.840.10040.4.3") + DSA_WITH_SHA224 = ObjectIdentifier("2.16.840.1.101.3.4.3.1") + DSA_WITH_SHA256 = ObjectIdentifier("2.16.840.1.101.3.4.3.2") + DSA_WITH_SHA384 = ObjectIdentifier("2.16.840.1.101.3.4.3.3") + DSA_WITH_SHA512 = ObjectIdentifier("2.16.840.1.101.3.4.3.4") + ED25519 = ObjectIdentifier("1.3.101.112") + ED448 = ObjectIdentifier("1.3.101.113") + GOSTR3411_94_WITH_3410_2001 = ObjectIdentifier("1.2.643.2.2.3") + GOSTR3410_2012_WITH_3411_2012_256 = ObjectIdentifier("1.2.643.7.1.1.3.2") + GOSTR3410_2012_WITH_3411_2012_512 = ObjectIdentifier("1.2.643.7.1.1.3.3") + + +_SIG_OIDS_TO_HASH: typing.Dict[ + ObjectIdentifier, typing.Optional[hashes.HashAlgorithm] +] = { + SignatureAlgorithmOID.RSA_WITH_MD5: hashes.MD5(), + SignatureAlgorithmOID.RSA_WITH_SHA1: hashes.SHA1(), + SignatureAlgorithmOID._RSA_WITH_SHA1: hashes.SHA1(), + SignatureAlgorithmOID.RSA_WITH_SHA224: hashes.SHA224(), + SignatureAlgorithmOID.RSA_WITH_SHA256: hashes.SHA256(), + SignatureAlgorithmOID.RSA_WITH_SHA384: hashes.SHA384(), + SignatureAlgorithmOID.RSA_WITH_SHA512: hashes.SHA512(), + SignatureAlgorithmOID.ECDSA_WITH_SHA1: hashes.SHA1(), + SignatureAlgorithmOID.ECDSA_WITH_SHA224: hashes.SHA224(), + SignatureAlgorithmOID.ECDSA_WITH_SHA256: hashes.SHA256(), + SignatureAlgorithmOID.ECDSA_WITH_SHA384: hashes.SHA384(), + SignatureAlgorithmOID.ECDSA_WITH_SHA512: hashes.SHA512(), + SignatureAlgorithmOID.DSA_WITH_SHA1: hashes.SHA1(), + SignatureAlgorithmOID.DSA_WITH_SHA224: hashes.SHA224(), + SignatureAlgorithmOID.DSA_WITH_SHA256: hashes.SHA256(), + SignatureAlgorithmOID.ED25519: None, + 
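+    # Ed25519, Ed448, and the GOST schemes hash the message internally as
+    # part of the signature algorithm, so no separate prehash is recorded
+    # for them.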
SignatureAlgorithmOID.ED448: None, + SignatureAlgorithmOID.GOSTR3411_94_WITH_3410_2001: None, + SignatureAlgorithmOID.GOSTR3410_2012_WITH_3411_2012_256: None, + SignatureAlgorithmOID.GOSTR3410_2012_WITH_3411_2012_512: None, +} + + +class ExtendedKeyUsageOID: + SERVER_AUTH = ObjectIdentifier("1.3.6.1.5.5.7.3.1") + CLIENT_AUTH = ObjectIdentifier("1.3.6.1.5.5.7.3.2") + CODE_SIGNING = ObjectIdentifier("1.3.6.1.5.5.7.3.3") + EMAIL_PROTECTION = ObjectIdentifier("1.3.6.1.5.5.7.3.4") + TIME_STAMPING = ObjectIdentifier("1.3.6.1.5.5.7.3.8") + OCSP_SIGNING = ObjectIdentifier("1.3.6.1.5.5.7.3.9") + ANY_EXTENDED_KEY_USAGE = ObjectIdentifier("2.5.29.37.0") + SMARTCARD_LOGON = ObjectIdentifier("1.3.6.1.4.1.311.20.2.2") + KERBEROS_PKINIT_KDC = ObjectIdentifier("1.3.6.1.5.2.3.5") + IPSEC_IKE = ObjectIdentifier("1.3.6.1.5.5.7.3.17") + CERTIFICATE_TRANSPARENCY = ObjectIdentifier("1.3.6.1.4.1.11129.2.4.4") + + +class AuthorityInformationAccessOID: + CA_ISSUERS = ObjectIdentifier("1.3.6.1.5.5.7.48.2") + OCSP = ObjectIdentifier("1.3.6.1.5.5.7.48.1") + + +class SubjectInformationAccessOID: + CA_REPOSITORY = ObjectIdentifier("1.3.6.1.5.5.7.48.5") + + +class CertificatePoliciesOID: + CPS_QUALIFIER = ObjectIdentifier("1.3.6.1.5.5.7.2.1") + CPS_USER_NOTICE = ObjectIdentifier("1.3.6.1.5.5.7.2.2") + ANY_POLICY = ObjectIdentifier("2.5.29.32.0") + + +class AttributeOID: + CHALLENGE_PASSWORD = ObjectIdentifier("1.2.840.113549.1.9.7") + UNSTRUCTURED_NAME = ObjectIdentifier("1.2.840.113549.1.9.2") + + +_OID_NAMES = { + NameOID.COMMON_NAME: "commonName", + NameOID.COUNTRY_NAME: "countryName", + NameOID.LOCALITY_NAME: "localityName", + NameOID.STATE_OR_PROVINCE_NAME: "stateOrProvinceName", + NameOID.STREET_ADDRESS: "streetAddress", + NameOID.ORGANIZATION_NAME: "organizationName", + NameOID.ORGANIZATIONAL_UNIT_NAME: "organizationalUnitName", + NameOID.SERIAL_NUMBER: "serialNumber", + NameOID.SURNAME: "surname", + NameOID.GIVEN_NAME: "givenName", + NameOID.TITLE: "title", + NameOID.GENERATION_QUALIFIER: "generationQualifier", + NameOID.X500_UNIQUE_IDENTIFIER: "x500UniqueIdentifier", + NameOID.DN_QUALIFIER: "dnQualifier", + NameOID.PSEUDONYM: "pseudonym", + NameOID.USER_ID: "userID", + NameOID.DOMAIN_COMPONENT: "domainComponent", + NameOID.EMAIL_ADDRESS: "emailAddress", + NameOID.JURISDICTION_COUNTRY_NAME: "jurisdictionCountryName", + NameOID.JURISDICTION_LOCALITY_NAME: "jurisdictionLocalityName", + NameOID.JURISDICTION_STATE_OR_PROVINCE_NAME: ( + "jurisdictionStateOrProvinceName" + ), + NameOID.BUSINESS_CATEGORY: "businessCategory", + NameOID.POSTAL_ADDRESS: "postalAddress", + NameOID.POSTAL_CODE: "postalCode", + NameOID.INN: "INN", + NameOID.OGRN: "OGRN", + NameOID.SNILS: "SNILS", + NameOID.UNSTRUCTURED_NAME: "unstructuredName", + SignatureAlgorithmOID.RSA_WITH_MD5: "md5WithRSAEncryption", + SignatureAlgorithmOID.RSA_WITH_SHA1: "sha1WithRSAEncryption", + SignatureAlgorithmOID.RSA_WITH_SHA224: "sha224WithRSAEncryption", + SignatureAlgorithmOID.RSA_WITH_SHA256: "sha256WithRSAEncryption", + SignatureAlgorithmOID.RSA_WITH_SHA384: "sha384WithRSAEncryption", + SignatureAlgorithmOID.RSA_WITH_SHA512: "sha512WithRSAEncryption", + SignatureAlgorithmOID.RSASSA_PSS: "RSASSA-PSS", + SignatureAlgorithmOID.ECDSA_WITH_SHA1: "ecdsa-with-SHA1", + SignatureAlgorithmOID.ECDSA_WITH_SHA224: "ecdsa-with-SHA224", + SignatureAlgorithmOID.ECDSA_WITH_SHA256: "ecdsa-with-SHA256", + SignatureAlgorithmOID.ECDSA_WITH_SHA384: "ecdsa-with-SHA384", + SignatureAlgorithmOID.ECDSA_WITH_SHA512: "ecdsa-with-SHA512", + SignatureAlgorithmOID.DSA_WITH_SHA1: 
"dsa-with-sha1", + SignatureAlgorithmOID.DSA_WITH_SHA224: "dsa-with-sha224", + SignatureAlgorithmOID.DSA_WITH_SHA256: "dsa-with-sha256", + SignatureAlgorithmOID.ED25519: "ed25519", + SignatureAlgorithmOID.ED448: "ed448", + SignatureAlgorithmOID.GOSTR3411_94_WITH_3410_2001: ( + "GOST R 34.11-94 with GOST R 34.10-2001" + ), + SignatureAlgorithmOID.GOSTR3410_2012_WITH_3411_2012_256: ( + "GOST R 34.10-2012 with GOST R 34.11-2012 (256 bit)" + ), + SignatureAlgorithmOID.GOSTR3410_2012_WITH_3411_2012_512: ( + "GOST R 34.10-2012 with GOST R 34.11-2012 (512 bit)" + ), + ExtendedKeyUsageOID.SERVER_AUTH: "serverAuth", + ExtendedKeyUsageOID.CLIENT_AUTH: "clientAuth", + ExtendedKeyUsageOID.CODE_SIGNING: "codeSigning", + ExtendedKeyUsageOID.EMAIL_PROTECTION: "emailProtection", + ExtendedKeyUsageOID.TIME_STAMPING: "timeStamping", + ExtendedKeyUsageOID.OCSP_SIGNING: "OCSPSigning", + ExtendedKeyUsageOID.SMARTCARD_LOGON: "msSmartcardLogin", + ExtendedKeyUsageOID.KERBEROS_PKINIT_KDC: "pkInitKDC", + ExtensionOID.SUBJECT_DIRECTORY_ATTRIBUTES: "subjectDirectoryAttributes", + ExtensionOID.SUBJECT_KEY_IDENTIFIER: "subjectKeyIdentifier", + ExtensionOID.KEY_USAGE: "keyUsage", + ExtensionOID.SUBJECT_ALTERNATIVE_NAME: "subjectAltName", + ExtensionOID.ISSUER_ALTERNATIVE_NAME: "issuerAltName", + ExtensionOID.BASIC_CONSTRAINTS: "basicConstraints", + ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS: ( + "signedCertificateTimestampList" + ), + ExtensionOID.SIGNED_CERTIFICATE_TIMESTAMPS: ( + "signedCertificateTimestampList" + ), + ExtensionOID.PRECERT_POISON: "ctPoison", + CRLEntryExtensionOID.CRL_REASON: "cRLReason", + CRLEntryExtensionOID.INVALIDITY_DATE: "invalidityDate", + CRLEntryExtensionOID.CERTIFICATE_ISSUER: "certificateIssuer", + ExtensionOID.NAME_CONSTRAINTS: "nameConstraints", + ExtensionOID.CRL_DISTRIBUTION_POINTS: "cRLDistributionPoints", + ExtensionOID.CERTIFICATE_POLICIES: "certificatePolicies", + ExtensionOID.POLICY_MAPPINGS: "policyMappings", + ExtensionOID.AUTHORITY_KEY_IDENTIFIER: "authorityKeyIdentifier", + ExtensionOID.POLICY_CONSTRAINTS: "policyConstraints", + ExtensionOID.EXTENDED_KEY_USAGE: "extendedKeyUsage", + ExtensionOID.FRESHEST_CRL: "freshestCRL", + ExtensionOID.INHIBIT_ANY_POLICY: "inhibitAnyPolicy", + ExtensionOID.ISSUING_DISTRIBUTION_POINT: ("issuingDistributionPoint"), + ExtensionOID.AUTHORITY_INFORMATION_ACCESS: "authorityInfoAccess", + ExtensionOID.SUBJECT_INFORMATION_ACCESS: "subjectInfoAccess", + ExtensionOID.OCSP_NO_CHECK: "OCSPNoCheck", + ExtensionOID.CRL_NUMBER: "cRLNumber", + ExtensionOID.DELTA_CRL_INDICATOR: "deltaCRLIndicator", + ExtensionOID.TLS_FEATURE: "TLSFeature", + AuthorityInformationAccessOID.OCSP: "OCSP", + AuthorityInformationAccessOID.CA_ISSUERS: "caIssuers", + SubjectInformationAccessOID.CA_REPOSITORY: "caRepository", + CertificatePoliciesOID.CPS_QUALIFIER: "id-qt-cps", + CertificatePoliciesOID.CPS_USER_NOTICE: "id-qt-unotice", + OCSPExtensionOID.NONCE: "OCSPNonce", + AttributeOID.CHALLENGE_PASSWORD: "challengePassword", +} diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/__init__.py new file mode 100644 index 00000000..3926f85f --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/__init__.py @@ -0,0 +1,10 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+from typing import Any + + +def default_backend() -> Any: + from cryptography.hazmat.backends.openssl.backend import backend + + return backend diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/__init__.py new file mode 100644 index 00000000..31fd17c3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/__init__.py @@ -0,0 +1,9 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +from cryptography.hazmat.backends.openssl.backend import backend + + +__all__ = ["backend"] diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/aead.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/aead.py new file mode 100644 index 00000000..f7914af5 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/aead.py @@ -0,0 +1,251 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.exceptions import InvalidTag + + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + from cryptography.hazmat.primitives.ciphers.aead import ( + AESCCM, + AESGCM, + AESOCB3, + AESSIV, + ChaCha20Poly1305, + ) + + _AEAD_TYPES = typing.Union[ + AESCCM, AESGCM, AESOCB3, AESSIV, ChaCha20Poly1305 + ] + +_ENCRYPT = 1 +_DECRYPT = 0 + + +def _aead_cipher_name(cipher: "_AEAD_TYPES") -> bytes: + from cryptography.hazmat.primitives.ciphers.aead import ( + AESCCM, + AESGCM, + AESOCB3, + AESSIV, + ChaCha20Poly1305, + ) + + if isinstance(cipher, ChaCha20Poly1305): + return b"chacha20-poly1305" + elif isinstance(cipher, AESCCM): + return f"aes-{len(cipher._key) * 8}-ccm".encode("ascii") + elif isinstance(cipher, AESOCB3): + return f"aes-{len(cipher._key) * 8}-ocb".encode("ascii") + elif isinstance(cipher, AESSIV): + return f"aes-{len(cipher._key) * 8 // 2}-siv".encode("ascii") + else: + assert isinstance(cipher, AESGCM) + return f"aes-{len(cipher._key) * 8}-gcm".encode("ascii") + + +def _evp_cipher(cipher_name: bytes, backend: "Backend"): + if cipher_name.endswith(b"-siv"): + evp_cipher = backend._lib.EVP_CIPHER_fetch( + backend._ffi.NULL, + cipher_name, + backend._ffi.NULL, + ) + backend.openssl_assert(evp_cipher != backend._ffi.NULL) + evp_cipher = backend._ffi.gc(evp_cipher, backend._lib.EVP_CIPHER_free) + else: + evp_cipher = backend._lib.EVP_get_cipherbyname(cipher_name) + backend.openssl_assert(evp_cipher != backend._ffi.NULL) + + return evp_cipher + + +def _aead_setup( + backend: "Backend", + cipher_name: bytes, + key: bytes, + nonce: bytes, + tag: typing.Optional[bytes], + tag_len: int, + operation: int, +): + evp_cipher = _evp_cipher(cipher_name, backend) + ctx = backend._lib.EVP_CIPHER_CTX_new() + ctx = backend._ffi.gc(ctx, backend._lib.EVP_CIPHER_CTX_free) + res = backend._lib.EVP_CipherInit_ex( + ctx, + evp_cipher, + backend._ffi.NULL, + backend._ffi.NULL, + backend._ffi.NULL, + int(operation == _ENCRYPT), + ) + backend.openssl_assert(res != 0) + res = backend._lib.EVP_CIPHER_CTX_set_key_length(ctx, len(key)) + backend.openssl_assert(res != 0) + res = backend._lib.EVP_CIPHER_CTX_ctrl( + ctx, + backend._lib.EVP_CTRL_AEAD_SET_IVLEN, + len(nonce), + 
backend._ffi.NULL, + ) + backend.openssl_assert(res != 0) + if operation == _DECRYPT: + assert tag is not None + res = backend._lib.EVP_CIPHER_CTX_ctrl( + ctx, backend._lib.EVP_CTRL_AEAD_SET_TAG, len(tag), tag + ) + backend.openssl_assert(res != 0) + elif cipher_name.endswith(b"-ccm"): + res = backend._lib.EVP_CIPHER_CTX_ctrl( + ctx, backend._lib.EVP_CTRL_AEAD_SET_TAG, tag_len, backend._ffi.NULL + ) + backend.openssl_assert(res != 0) + + nonce_ptr = backend._ffi.from_buffer(nonce) + key_ptr = backend._ffi.from_buffer(key) + res = backend._lib.EVP_CipherInit_ex( + ctx, + backend._ffi.NULL, + backend._ffi.NULL, + key_ptr, + nonce_ptr, + int(operation == _ENCRYPT), + ) + backend.openssl_assert(res != 0) + return ctx + + +def _set_length(backend: "Backend", ctx, data_len: int) -> None: + intptr = backend._ffi.new("int *") + res = backend._lib.EVP_CipherUpdate( + ctx, backend._ffi.NULL, intptr, backend._ffi.NULL, data_len + ) + backend.openssl_assert(res != 0) + + +def _process_aad(backend: "Backend", ctx, associated_data: bytes) -> None: + outlen = backend._ffi.new("int *") + res = backend._lib.EVP_CipherUpdate( + ctx, backend._ffi.NULL, outlen, associated_data, len(associated_data) + ) + backend.openssl_assert(res != 0) + + +def _process_data(backend: "Backend", ctx, data: bytes) -> bytes: + outlen = backend._ffi.new("int *") + buf = backend._ffi.new("unsigned char[]", len(data)) + res = backend._lib.EVP_CipherUpdate(ctx, buf, outlen, data, len(data)) + if res == 0: + # AES SIV can error here if the data is invalid on decrypt + backend._consume_errors() + raise InvalidTag + return backend._ffi.buffer(buf, outlen[0])[:] + + +def _encrypt( + backend: "Backend", + cipher: "_AEAD_TYPES", + nonce: bytes, + data: bytes, + associated_data: typing.List[bytes], + tag_length: int, +) -> bytes: + from cryptography.hazmat.primitives.ciphers.aead import AESCCM, AESSIV + + cipher_name = _aead_cipher_name(cipher) + ctx = _aead_setup( + backend, cipher_name, cipher._key, nonce, None, tag_length, _ENCRYPT + ) + # CCM requires us to pass the length of the data before processing anything + # However calling this with any other AEAD results in an error + if isinstance(cipher, AESCCM): + _set_length(backend, ctx, len(data)) + + for ad in associated_data: + _process_aad(backend, ctx, ad) + processed_data = _process_data(backend, ctx, data) + outlen = backend._ffi.new("int *") + # All AEADs we support besides OCB are streaming so they return nothing + # in finalization. OCB can return up to (16 byte block - 1) bytes so + # we need a buffer here too. + buf = backend._ffi.new("unsigned char[]", 16) + res = backend._lib.EVP_CipherFinal_ex(ctx, buf, outlen) + backend.openssl_assert(res != 0) + processed_data += backend._ffi.buffer(buf, outlen[0])[:] + tag_buf = backend._ffi.new("unsigned char[]", tag_length) + res = backend._lib.EVP_CIPHER_CTX_ctrl( + ctx, backend._lib.EVP_CTRL_AEAD_GET_TAG, tag_length, tag_buf + ) + backend.openssl_assert(res != 0) + tag = backend._ffi.buffer(tag_buf)[:] + + if isinstance(cipher, AESSIV): + # RFC 5297 defines the output as IV || C, where the tag we generate is + # the "IV" and C is the ciphertext. 
This is the opposite of our + # other AEADs, which are Ciphertext || Tag + backend.openssl_assert(len(tag) == 16) + return tag + processed_data + else: + return processed_data + tag + + +def _decrypt( + backend: "Backend", + cipher: "_AEAD_TYPES", + nonce: bytes, + data: bytes, + associated_data: typing.List[bytes], + tag_length: int, +) -> bytes: + from cryptography.hazmat.primitives.ciphers.aead import AESCCM, AESSIV + + if len(data) < tag_length: + raise InvalidTag + + if isinstance(cipher, AESSIV): + # RFC 5297 defines the output as IV || C, where the tag we generate is + # the "IV" and C is the ciphertext. This is the opposite of our + # other AEADs, which are Ciphertext || Tag + tag = data[:tag_length] + data = data[tag_length:] + else: + tag = data[-tag_length:] + data = data[:-tag_length] + cipher_name = _aead_cipher_name(cipher) + ctx = _aead_setup( + backend, cipher_name, cipher._key, nonce, tag, tag_length, _DECRYPT + ) + # CCM requires us to pass the length of the data before processing anything + # However calling this with any other AEAD results in an error + if isinstance(cipher, AESCCM): + _set_length(backend, ctx, len(data)) + + for ad in associated_data: + _process_aad(backend, ctx, ad) + # CCM has a different error path if the tag doesn't match. Errors are + # raised in Update and Final is irrelevant. + if isinstance(cipher, AESCCM): + outlen = backend._ffi.new("int *") + buf = backend._ffi.new("unsigned char[]", len(data)) + res = backend._lib.EVP_CipherUpdate(ctx, buf, outlen, data, len(data)) + if res != 1: + backend._consume_errors() + raise InvalidTag + + processed_data = backend._ffi.buffer(buf, outlen[0])[:] + else: + processed_data = _process_data(backend, ctx, data) + outlen = backend._ffi.new("int *") + # OCB can return up to 15 bytes (16 byte block - 1) in finalization + buf = backend._ffi.new("unsigned char[]", 16) + res = backend._lib.EVP_CipherFinal_ex(ctx, buf, outlen) + processed_data += backend._ffi.buffer(buf, outlen[0])[:] + if res == 0: + backend._consume_errors() + raise InvalidTag + + return processed_data diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/backend.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/backend.py new file mode 100644 index 00000000..f8776b73 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/backend.py @@ -0,0 +1,2649 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
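+"""
+OpenSSL implementation of the backend interface.
+
+A minimal sketch, assuming the module-level backend instance that
+cryptography.hazmat.backends.openssl re-exports from this module:
+
+    from cryptography.hazmat.backends.openssl.backend import backend
+
+    print(backend.openssl_version_text())  # e.g. OpenSSL 1.1.1d  10 Sep 2019
+"""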
+ + +import collections +import contextlib +import itertools +import typing +import warnings +from contextlib import contextmanager + +from cryptography import utils, x509 +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.backends.openssl import aead +from cryptography.hazmat.backends.openssl.ciphers import _CipherContext +from cryptography.hazmat.backends.openssl.cmac import _CMACContext +from cryptography.hazmat.backends.openssl.dh import ( + _DHParameters, + _DHPrivateKey, + _DHPublicKey, + _dh_params_dup, +) +from cryptography.hazmat.backends.openssl.dsa import ( + _DSAParameters, + _DSAPrivateKey, + _DSAPublicKey, +) +from cryptography.hazmat.backends.openssl.ec import ( + _EllipticCurvePrivateKey, + _EllipticCurvePublicKey, +) +from cryptography.hazmat.backends.openssl.ed25519 import ( + _Ed25519PrivateKey, + _Ed25519PublicKey, +) +from cryptography.hazmat.backends.openssl.ed448 import ( + _ED448_KEY_SIZE, + _Ed448PrivateKey, + _Ed448PublicKey, +) +from cryptography.hazmat.backends.openssl.hashes import _HashContext +from cryptography.hazmat.backends.openssl.hmac import _HMACContext +from cryptography.hazmat.backends.openssl.poly1305 import ( + _POLY1305_KEY_SIZE, + _Poly1305Context, +) +from cryptography.hazmat.backends.openssl.rsa import ( + _RSAPrivateKey, + _RSAPublicKey, +) +from cryptography.hazmat.backends.openssl.x25519 import ( + _X25519PrivateKey, + _X25519PublicKey, +) +from cryptography.hazmat.backends.openssl.x448 import ( + _X448PrivateKey, + _X448PublicKey, +) +from cryptography.hazmat.bindings._rust import ( + x509 as rust_x509, +) +from cryptography.hazmat.bindings.openssl import binding +from cryptography.hazmat.primitives import hashes, serialization +from cryptography.hazmat.primitives._asymmetric import AsymmetricPadding +from cryptography.hazmat.primitives.asymmetric import ( + dh, + dsa, + ec, + ed25519, + ed448, + rsa, + x25519, + x448, +) +from cryptography.hazmat.primitives.asymmetric.padding import ( + MGF1, + OAEP, + PKCS1v15, + PSS, +) +from cryptography.hazmat.primitives.asymmetric.types import ( + CERTIFICATE_ISSUER_PUBLIC_KEY_TYPES, + PRIVATE_KEY_TYPES, + PUBLIC_KEY_TYPES, +) +from cryptography.hazmat.primitives.ciphers import ( + BlockCipherAlgorithm, + CipherAlgorithm, +) +from cryptography.hazmat.primitives.ciphers.algorithms import ( + AES, + AES128, + AES256, + ARC4, + Camellia, + ChaCha20, + SM4, + TripleDES, + _BlowfishInternal, + _CAST5Internal, + _IDEAInternal, + _SEEDInternal, +) +from cryptography.hazmat.primitives.ciphers.modes import ( + CBC, + CFB, + CFB8, + CTR, + ECB, + GCM, + Mode, + OFB, + XTS, +) +from cryptography.hazmat.primitives.kdf import scrypt +from cryptography.hazmat.primitives.serialization import pkcs7, ssh +from cryptography.hazmat.primitives.serialization.pkcs12 import ( + PBES, + PKCS12Certificate, + PKCS12KeyAndCertificates, + _ALLOWED_PKCS12_TYPES, + _PKCS12_CAS_TYPES, +) + + +_MemoryBIO = collections.namedtuple("_MemoryBIO", ["bio", "char_ptr"]) + + +# Not actually supported, just used as a marker for some serialization tests. +class _RC2: + pass + + +class Backend: + """ + OpenSSL API binding interfaces. + """ + + name = "openssl" + + # FIPS has opinions about acceptable algorithms and key sizes, but the + # disallowed algorithms are still present in OpenSSL. They just error if + # you try to use them. To avoid that we allowlist the algorithms in + # FIPS 140-3. This isn't ideal, but FIPS 140-3 is trash so here we are. 
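+    # These entries are the byte strings produced by aead._aead_cipher_name
+    # (e.g. b"aes-128-gcm"), so the allowlist can be checked directly against
+    # the derived cipher name.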
+ _fips_aead = { + b"aes-128-ccm", + b"aes-192-ccm", + b"aes-256-ccm", + b"aes-128-gcm", + b"aes-192-gcm", + b"aes-256-gcm", + } + # TripleDES encryption is disallowed/deprecated throughout 2023 in + # FIPS 140-3. To keep it simple we denylist any use of TripleDES (TDEA). + _fips_ciphers = (AES,) + # Sometimes SHA1 is still permissible. That logic is contained + # within the various *_supported methods. + _fips_hashes = ( + hashes.SHA224, + hashes.SHA256, + hashes.SHA384, + hashes.SHA512, + hashes.SHA512_224, + hashes.SHA512_256, + hashes.SHA3_224, + hashes.SHA3_256, + hashes.SHA3_384, + hashes.SHA3_512, + hashes.SHAKE128, + hashes.SHAKE256, + ) + _fips_ecdh_curves = ( + ec.SECP224R1, + ec.SECP256R1, + ec.SECP384R1, + ec.SECP521R1, + ) + _fips_rsa_min_key_size = 2048 + _fips_rsa_min_public_exponent = 65537 + _fips_dsa_min_modulus = 1 << 2048 + _fips_dh_min_key_size = 2048 + _fips_dh_min_modulus = 1 << _fips_dh_min_key_size + + def __init__(self): + self._binding = binding.Binding() + self._ffi = self._binding.ffi + self._lib = self._binding.lib + self._rsa_skip_check_key = False + self._fips_enabled = self._is_fips_enabled() + + self._cipher_registry = {} + self._register_default_ciphers() + if self._fips_enabled and self._lib.CRYPTOGRAPHY_NEEDS_OSRANDOM_ENGINE: + warnings.warn( + "OpenSSL FIPS mode is enabled. Can't enable DRBG fork safety.", + UserWarning, + ) + else: + self.activate_osrandom_engine() + self._dh_types = [self._lib.EVP_PKEY_DH] + if self._lib.Cryptography_HAS_EVP_PKEY_DHX: + self._dh_types.append(self._lib.EVP_PKEY_DHX) + + def __repr__(self) -> str: + return "".format( + self.openssl_version_text(), self._fips_enabled + ) + + def openssl_assert( + self, + ok: bool, + errors: typing.Optional[typing.List[binding._OpenSSLError]] = None, + ) -> None: + return binding._openssl_assert(self._lib, ok, errors=errors) + + def _is_fips_enabled(self) -> bool: + if self._lib.Cryptography_HAS_300_FIPS: + mode = self._lib.EVP_default_properties_is_fips_enabled( + self._ffi.NULL + ) + else: + mode = getattr(self._lib, "FIPS_mode", lambda: 0)() + + if mode == 0: + # OpenSSL without FIPS pushes an error on the error stack + self._lib.ERR_clear_error() + return bool(mode) + + def _enable_fips(self) -> None: + # This function enables FIPS mode for OpenSSL 3.0.0 on installs that + # have the FIPS provider installed properly. + self._binding._enable_fips() + assert self._is_fips_enabled() + self._fips_enabled = self._is_fips_enabled() + + def activate_builtin_random(self) -> None: + if self._lib.CRYPTOGRAPHY_NEEDS_OSRANDOM_ENGINE: + # Obtain a new structural reference. + e = self._lib.ENGINE_get_default_RAND() + if e != self._ffi.NULL: + self._lib.ENGINE_unregister_RAND(e) + # Reset the RNG to use the built-in. + res = self._lib.RAND_set_rand_method(self._ffi.NULL) + self.openssl_assert(res == 1) + # decrement the structural reference from get_default_RAND + res = self._lib.ENGINE_finish(e) + self.openssl_assert(res == 1) + + @contextlib.contextmanager + def _get_osurandom_engine(self): + # Fetches an engine by id and returns it. This creates a structural + # reference. + e = self._lib.ENGINE_by_id(self._lib.Cryptography_osrandom_engine_id) + self.openssl_assert(e != self._ffi.NULL) + # Initialize the engine for use. This adds a functional reference. + res = self._lib.ENGINE_init(e) + self.openssl_assert(res == 1) + + try: + yield e + finally: + # Decrement the structural ref incremented by ENGINE_by_id. 
+ res = self._lib.ENGINE_free(e) + self.openssl_assert(res == 1) + # Decrement the functional ref incremented by ENGINE_init. + res = self._lib.ENGINE_finish(e) + self.openssl_assert(res == 1) + + def activate_osrandom_engine(self) -> None: + if self._lib.CRYPTOGRAPHY_NEEDS_OSRANDOM_ENGINE: + # Unregister and free the current engine. + self.activate_builtin_random() + with self._get_osurandom_engine() as e: + # Set the engine as the default RAND provider. + res = self._lib.ENGINE_set_default_RAND(e) + self.openssl_assert(res == 1) + # Reset the RNG to use the engine + res = self._lib.RAND_set_rand_method(self._ffi.NULL) + self.openssl_assert(res == 1) + + def osrandom_engine_implementation(self) -> str: + buf = self._ffi.new("char[]", 64) + with self._get_osurandom_engine() as e: + res = self._lib.ENGINE_ctrl_cmd( + e, b"get_implementation", len(buf), buf, self._ffi.NULL, 0 + ) + self.openssl_assert(res > 0) + return self._ffi.string(buf).decode("ascii") + + def openssl_version_text(self) -> str: + """ + Friendly string name of the loaded OpenSSL library. This is not + necessarily the same version as it was compiled against. + + Example: OpenSSL 1.1.1d 10 Sep 2019 + """ + return self._ffi.string( + self._lib.OpenSSL_version(self._lib.OPENSSL_VERSION) + ).decode("ascii") + + def openssl_version_number(self) -> int: + return self._lib.OpenSSL_version_num() + + def create_hmac_ctx( + self, key: bytes, algorithm: hashes.HashAlgorithm + ) -> _HMACContext: + return _HMACContext(self, key, algorithm) + + def _evp_md_from_algorithm(self, algorithm: hashes.HashAlgorithm): + if algorithm.name == "blake2b" or algorithm.name == "blake2s": + alg = "{}{}".format( + algorithm.name, algorithm.digest_size * 8 + ).encode("ascii") + else: + alg = algorithm.name.encode("ascii") + + evp_md = self._lib.EVP_get_digestbyname(alg) + return evp_md + + def _evp_md_non_null_from_algorithm(self, algorithm: hashes.HashAlgorithm): + evp_md = self._evp_md_from_algorithm(algorithm) + self.openssl_assert(evp_md != self._ffi.NULL) + return evp_md + + def hash_supported(self, algorithm: hashes.HashAlgorithm) -> bool: + if self._fips_enabled and not isinstance(algorithm, self._fips_hashes): + return False + + evp_md = self._evp_md_from_algorithm(algorithm) + return evp_md != self._ffi.NULL + + def signature_hash_supported( + self, algorithm: hashes.HashAlgorithm + ) -> bool: + # Dedicated check for hashing algorithm use in message digest for + # signatures, e.g. RSA PKCS#1 v1.5 SHA1 (sha1WithRSAEncryption). + if self._fips_enabled and isinstance(algorithm, hashes.SHA1): + return False + return self.hash_supported(algorithm) + + def scrypt_supported(self) -> bool: + if self._fips_enabled: + return False + else: + return self._lib.Cryptography_HAS_SCRYPT == 1 + + def hmac_supported(self, algorithm: hashes.HashAlgorithm) -> bool: + # FIPS mode still allows SHA1 for HMAC + if self._fips_enabled and isinstance(algorithm, hashes.SHA1): + return True + + return self.hash_supported(algorithm) + + def create_hash_ctx( + self, algorithm: hashes.HashAlgorithm + ) -> hashes.HashContext: + return _HashContext(self, algorithm) + + def cipher_supported(self, cipher: CipherAlgorithm, mode: Mode) -> bool: + if self._fips_enabled: + # FIPS mode requires AES. TripleDES is disallowed/deprecated in + # FIPS 140-3. 
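+            # _fips_ciphers is (AES,), so algorithms such as Camellia,
+            # ChaCha20, and SM4 are rejected here even though OpenSSL itself
+            # could provide them.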
+ if not isinstance(cipher, self._fips_ciphers): + return False + + try: + adapter = self._cipher_registry[type(cipher), type(mode)] + except KeyError: + return False + evp_cipher = adapter(self, cipher, mode) + return self._ffi.NULL != evp_cipher + + def register_cipher_adapter(self, cipher_cls, mode_cls, adapter): + if (cipher_cls, mode_cls) in self._cipher_registry: + raise ValueError( + "Duplicate registration for: {} {}.".format( + cipher_cls, mode_cls + ) + ) + self._cipher_registry[cipher_cls, mode_cls] = adapter + + def _register_default_ciphers(self) -> None: + for cipher_cls in [AES, AES128, AES256]: + for mode_cls in [CBC, CTR, ECB, OFB, CFB, CFB8, GCM]: + self.register_cipher_adapter( + cipher_cls, + mode_cls, + GetCipherByName( + "{cipher.name}-{cipher.key_size}-{mode.name}" + ), + ) + for mode_cls in [CBC, CTR, ECB, OFB, CFB]: + self.register_cipher_adapter( + Camellia, + mode_cls, + GetCipherByName("{cipher.name}-{cipher.key_size}-{mode.name}"), + ) + for mode_cls in [CBC, CFB, CFB8, OFB]: + self.register_cipher_adapter( + TripleDES, mode_cls, GetCipherByName("des-ede3-{mode.name}") + ) + self.register_cipher_adapter( + TripleDES, ECB, GetCipherByName("des-ede3") + ) + for mode_cls in [CBC, CFB, OFB, ECB]: + self.register_cipher_adapter( + _BlowfishInternal, mode_cls, GetCipherByName("bf-{mode.name}") + ) + for mode_cls in [CBC, CFB, OFB, ECB]: + self.register_cipher_adapter( + _SEEDInternal, mode_cls, GetCipherByName("seed-{mode.name}") + ) + for cipher_cls, mode_cls in itertools.product( + [_CAST5Internal, _IDEAInternal], + [CBC, OFB, CFB, ECB], + ): + self.register_cipher_adapter( + cipher_cls, + mode_cls, + GetCipherByName("{cipher.name}-{mode.name}"), + ) + self.register_cipher_adapter(ARC4, type(None), GetCipherByName("rc4")) + # We don't actually support RC2, this is just used by some tests. 
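+        # (_RC2 is the marker class defined near the top of this module;
+        # it is registered only for use by tests.)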
+ self.register_cipher_adapter(_RC2, type(None), GetCipherByName("rc2")) + self.register_cipher_adapter( + ChaCha20, type(None), GetCipherByName("chacha20") + ) + self.register_cipher_adapter(AES, XTS, _get_xts_cipher) + for mode_cls in [ECB, CBC, OFB, CFB, CTR]: + self.register_cipher_adapter( + SM4, mode_cls, GetCipherByName("sm4-{mode.name}") + ) + + def create_symmetric_encryption_ctx( + self, cipher: CipherAlgorithm, mode: Mode + ) -> _CipherContext: + return _CipherContext(self, cipher, mode, _CipherContext._ENCRYPT) + + def create_symmetric_decryption_ctx( + self, cipher: CipherAlgorithm, mode: Mode + ) -> _CipherContext: + return _CipherContext(self, cipher, mode, _CipherContext._DECRYPT) + + def pbkdf2_hmac_supported(self, algorithm: hashes.HashAlgorithm) -> bool: + return self.hmac_supported(algorithm) + + def derive_pbkdf2_hmac( + self, + algorithm: hashes.HashAlgorithm, + length: int, + salt: bytes, + iterations: int, + key_material: bytes, + ) -> bytes: + buf = self._ffi.new("unsigned char[]", length) + evp_md = self._evp_md_non_null_from_algorithm(algorithm) + key_material_ptr = self._ffi.from_buffer(key_material) + res = self._lib.PKCS5_PBKDF2_HMAC( + key_material_ptr, + len(key_material), + salt, + len(salt), + iterations, + evp_md, + length, + buf, + ) + self.openssl_assert(res == 1) + return self._ffi.buffer(buf)[:] + + def _consume_errors(self) -> typing.List[binding._OpenSSLError]: + return binding._consume_errors(self._lib) + + def _consume_errors_with_text( + self, + ) -> typing.List[binding._OpenSSLErrorWithText]: + return binding._consume_errors_with_text(self._lib) + + def _bn_to_int(self, bn) -> int: + assert bn != self._ffi.NULL + self.openssl_assert(not self._lib.BN_is_negative(bn)) + + bn_num_bytes = self._lib.BN_num_bytes(bn) + bin_ptr = self._ffi.new("unsigned char[]", bn_num_bytes) + bin_len = self._lib.BN_bn2bin(bn, bin_ptr) + # A zero length means the BN has value 0 + self.openssl_assert(bin_len >= 0) + val = int.from_bytes(self._ffi.buffer(bin_ptr)[:bin_len], "big") + return val + + def _int_to_bn(self, num: int, bn=None): + """ + Converts a python integer to a BIGNUM. The returned BIGNUM will not + be garbage collected (to support adding them to structs that take + ownership of the object). Be sure to register it for GC if it will + be discarded after use. 
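+
+        The byte length int(num.bit_length() / 8.0 + 1) always covers the
+        value: e.g. 65537 has bit_length 17, so int(17 / 8.0 + 1) == 3
+        bytes are used, b"\x01\x00\x01".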
+ """ + assert bn is None or bn != self._ffi.NULL + + if bn is None: + bn = self._ffi.NULL + + binary = num.to_bytes(int(num.bit_length() / 8.0 + 1), "big") + bn_ptr = self._lib.BN_bin2bn(binary, len(binary), bn) + self.openssl_assert(bn_ptr != self._ffi.NULL) + return bn_ptr + + def generate_rsa_private_key( + self, public_exponent: int, key_size: int + ) -> rsa.RSAPrivateKey: + rsa._verify_rsa_parameters(public_exponent, key_size) + + rsa_cdata = self._lib.RSA_new() + self.openssl_assert(rsa_cdata != self._ffi.NULL) + rsa_cdata = self._ffi.gc(rsa_cdata, self._lib.RSA_free) + + bn = self._int_to_bn(public_exponent) + bn = self._ffi.gc(bn, self._lib.BN_free) + + res = self._lib.RSA_generate_key_ex( + rsa_cdata, key_size, bn, self._ffi.NULL + ) + self.openssl_assert(res == 1) + evp_pkey = self._rsa_cdata_to_evp_pkey(rsa_cdata) + + return _RSAPrivateKey( + self, rsa_cdata, evp_pkey, self._rsa_skip_check_key + ) + + def generate_rsa_parameters_supported( + self, public_exponent: int, key_size: int + ) -> bool: + return ( + public_exponent >= 3 + and public_exponent & 1 != 0 + and key_size >= 512 + ) + + def load_rsa_private_numbers( + self, numbers: rsa.RSAPrivateNumbers + ) -> rsa.RSAPrivateKey: + rsa._check_private_key_components( + numbers.p, + numbers.q, + numbers.d, + numbers.dmp1, + numbers.dmq1, + numbers.iqmp, + numbers.public_numbers.e, + numbers.public_numbers.n, + ) + rsa_cdata = self._lib.RSA_new() + self.openssl_assert(rsa_cdata != self._ffi.NULL) + rsa_cdata = self._ffi.gc(rsa_cdata, self._lib.RSA_free) + p = self._int_to_bn(numbers.p) + q = self._int_to_bn(numbers.q) + d = self._int_to_bn(numbers.d) + dmp1 = self._int_to_bn(numbers.dmp1) + dmq1 = self._int_to_bn(numbers.dmq1) + iqmp = self._int_to_bn(numbers.iqmp) + e = self._int_to_bn(numbers.public_numbers.e) + n = self._int_to_bn(numbers.public_numbers.n) + res = self._lib.RSA_set0_factors(rsa_cdata, p, q) + self.openssl_assert(res == 1) + res = self._lib.RSA_set0_key(rsa_cdata, n, e, d) + self.openssl_assert(res == 1) + res = self._lib.RSA_set0_crt_params(rsa_cdata, dmp1, dmq1, iqmp) + self.openssl_assert(res == 1) + evp_pkey = self._rsa_cdata_to_evp_pkey(rsa_cdata) + + return _RSAPrivateKey( + self, rsa_cdata, evp_pkey, self._rsa_skip_check_key + ) + + def load_rsa_public_numbers( + self, numbers: rsa.RSAPublicNumbers + ) -> rsa.RSAPublicKey: + rsa._check_public_key_components(numbers.e, numbers.n) + rsa_cdata = self._lib.RSA_new() + self.openssl_assert(rsa_cdata != self._ffi.NULL) + rsa_cdata = self._ffi.gc(rsa_cdata, self._lib.RSA_free) + e = self._int_to_bn(numbers.e) + n = self._int_to_bn(numbers.n) + res = self._lib.RSA_set0_key(rsa_cdata, n, e, self._ffi.NULL) + self.openssl_assert(res == 1) + evp_pkey = self._rsa_cdata_to_evp_pkey(rsa_cdata) + + return _RSAPublicKey(self, rsa_cdata, evp_pkey) + + def _create_evp_pkey_gc(self): + evp_pkey = self._lib.EVP_PKEY_new() + self.openssl_assert(evp_pkey != self._ffi.NULL) + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + return evp_pkey + + def _rsa_cdata_to_evp_pkey(self, rsa_cdata): + evp_pkey = self._create_evp_pkey_gc() + res = self._lib.EVP_PKEY_set1_RSA(evp_pkey, rsa_cdata) + self.openssl_assert(res == 1) + return evp_pkey + + def _bytes_to_bio(self, data: bytes): + """ + Return a _MemoryBIO namedtuple of (BIO, char*). + + The char* is the storage for the BIO and it must stay alive until the + BIO is finished with. 
+ """ + data_ptr = self._ffi.from_buffer(data) + bio = self._lib.BIO_new_mem_buf(data_ptr, len(data)) + self.openssl_assert(bio != self._ffi.NULL) + + return _MemoryBIO(self._ffi.gc(bio, self._lib.BIO_free), data_ptr) + + def _create_mem_bio_gc(self): + """ + Creates an empty memory BIO. + """ + bio_method = self._lib.BIO_s_mem() + self.openssl_assert(bio_method != self._ffi.NULL) + bio = self._lib.BIO_new(bio_method) + self.openssl_assert(bio != self._ffi.NULL) + bio = self._ffi.gc(bio, self._lib.BIO_free) + return bio + + def _read_mem_bio(self, bio) -> bytes: + """ + Reads a memory BIO. This only works on memory BIOs. + """ + buf = self._ffi.new("char **") + buf_len = self._lib.BIO_get_mem_data(bio, buf) + self.openssl_assert(buf_len > 0) + self.openssl_assert(buf[0] != self._ffi.NULL) + bio_data = self._ffi.buffer(buf[0], buf_len)[:] + return bio_data + + def _evp_pkey_to_private_key(self, evp_pkey) -> PRIVATE_KEY_TYPES: + """ + Return the appropriate type of PrivateKey given an evp_pkey cdata + pointer. + """ + + key_type = self._lib.EVP_PKEY_id(evp_pkey) + + if key_type == self._lib.EVP_PKEY_RSA: + rsa_cdata = self._lib.EVP_PKEY_get1_RSA(evp_pkey) + self.openssl_assert(rsa_cdata != self._ffi.NULL) + rsa_cdata = self._ffi.gc(rsa_cdata, self._lib.RSA_free) + return _RSAPrivateKey( + self, rsa_cdata, evp_pkey, self._rsa_skip_check_key + ) + elif ( + key_type == self._lib.EVP_PKEY_RSA_PSS + and not self._lib.CRYPTOGRAPHY_IS_LIBRESSL + and not self._lib.CRYPTOGRAPHY_IS_BORINGSSL + and not self._lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_111E + ): + # At the moment the way we handle RSA PSS keys is to strip the + # PSS constraints from them and treat them as normal RSA keys + # Unfortunately the RSA * itself tracks this data so we need to + # extract, serialize, and reload it without the constraints. 
+ rsa_cdata = self._lib.EVP_PKEY_get1_RSA(evp_pkey) + self.openssl_assert(rsa_cdata != self._ffi.NULL) + rsa_cdata = self._ffi.gc(rsa_cdata, self._lib.RSA_free) + bio = self._create_mem_bio_gc() + res = self._lib.i2d_RSAPrivateKey_bio(bio, rsa_cdata) + self.openssl_assert(res == 1) + return self.load_der_private_key( + self._read_mem_bio(bio), password=None + ) + elif key_type == self._lib.EVP_PKEY_DSA: + dsa_cdata = self._lib.EVP_PKEY_get1_DSA(evp_pkey) + self.openssl_assert(dsa_cdata != self._ffi.NULL) + dsa_cdata = self._ffi.gc(dsa_cdata, self._lib.DSA_free) + return _DSAPrivateKey(self, dsa_cdata, evp_pkey) + elif key_type == self._lib.EVP_PKEY_EC: + ec_cdata = self._lib.EVP_PKEY_get1_EC_KEY(evp_pkey) + self.openssl_assert(ec_cdata != self._ffi.NULL) + ec_cdata = self._ffi.gc(ec_cdata, self._lib.EC_KEY_free) + return _EllipticCurvePrivateKey(self, ec_cdata, evp_pkey) + elif key_type in self._dh_types: + dh_cdata = self._lib.EVP_PKEY_get1_DH(evp_pkey) + self.openssl_assert(dh_cdata != self._ffi.NULL) + dh_cdata = self._ffi.gc(dh_cdata, self._lib.DH_free) + return _DHPrivateKey(self, dh_cdata, evp_pkey) + elif key_type == getattr(self._lib, "EVP_PKEY_ED25519", None): + # EVP_PKEY_ED25519 is not present in OpenSSL < 1.1.1 + return _Ed25519PrivateKey(self, evp_pkey) + elif key_type == getattr(self._lib, "EVP_PKEY_X448", None): + # EVP_PKEY_X448 is not present in OpenSSL < 1.1.1 + return _X448PrivateKey(self, evp_pkey) + elif key_type == getattr(self._lib, "EVP_PKEY_X25519", None): + # EVP_PKEY_X25519 is not present in OpenSSL < 1.1.0 + return _X25519PrivateKey(self, evp_pkey) + elif key_type == getattr(self._lib, "EVP_PKEY_ED448", None): + # EVP_PKEY_ED448 is not present in OpenSSL < 1.1.1 + return _Ed448PrivateKey(self, evp_pkey) + else: + raise UnsupportedAlgorithm("Unsupported key type.") + + def _evp_pkey_to_public_key(self, evp_pkey) -> PUBLIC_KEY_TYPES: + """ + Return the appropriate type of PublicKey given an evp_pkey cdata + pointer. 
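+
+        Dispatch is on EVP_PKEY_id; unrecognized key types raise
+        UnsupportedAlgorithm.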
+ """ + + key_type = self._lib.EVP_PKEY_id(evp_pkey) + + if key_type == self._lib.EVP_PKEY_RSA: + rsa_cdata = self._lib.EVP_PKEY_get1_RSA(evp_pkey) + self.openssl_assert(rsa_cdata != self._ffi.NULL) + rsa_cdata = self._ffi.gc(rsa_cdata, self._lib.RSA_free) + return _RSAPublicKey(self, rsa_cdata, evp_pkey) + elif ( + key_type == self._lib.EVP_PKEY_RSA_PSS + and not self._lib.CRYPTOGRAPHY_IS_LIBRESSL + and not self._lib.CRYPTOGRAPHY_IS_BORINGSSL + and not self._lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_111E + ): + rsa_cdata = self._lib.EVP_PKEY_get1_RSA(evp_pkey) + self.openssl_assert(rsa_cdata != self._ffi.NULL) + rsa_cdata = self._ffi.gc(rsa_cdata, self._lib.RSA_free) + bio = self._create_mem_bio_gc() + res = self._lib.i2d_RSAPublicKey_bio(bio, rsa_cdata) + self.openssl_assert(res == 1) + return self.load_der_public_key(self._read_mem_bio(bio)) + elif key_type == self._lib.EVP_PKEY_DSA: + dsa_cdata = self._lib.EVP_PKEY_get1_DSA(evp_pkey) + self.openssl_assert(dsa_cdata != self._ffi.NULL) + dsa_cdata = self._ffi.gc(dsa_cdata, self._lib.DSA_free) + return _DSAPublicKey(self, dsa_cdata, evp_pkey) + elif key_type == self._lib.EVP_PKEY_EC: + ec_cdata = self._lib.EVP_PKEY_get1_EC_KEY(evp_pkey) + if ec_cdata == self._ffi.NULL: + errors = self._consume_errors_with_text() + raise ValueError("Unable to load EC key", errors) + ec_cdata = self._ffi.gc(ec_cdata, self._lib.EC_KEY_free) + return _EllipticCurvePublicKey(self, ec_cdata, evp_pkey) + elif key_type in self._dh_types: + dh_cdata = self._lib.EVP_PKEY_get1_DH(evp_pkey) + self.openssl_assert(dh_cdata != self._ffi.NULL) + dh_cdata = self._ffi.gc(dh_cdata, self._lib.DH_free) + return _DHPublicKey(self, dh_cdata, evp_pkey) + elif key_type == getattr(self._lib, "EVP_PKEY_ED25519", None): + # EVP_PKEY_ED25519 is not present in OpenSSL < 1.1.1 + return _Ed25519PublicKey(self, evp_pkey) + elif key_type == getattr(self._lib, "EVP_PKEY_X448", None): + # EVP_PKEY_X448 is not present in OpenSSL < 1.1.1 + return _X448PublicKey(self, evp_pkey) + elif key_type == getattr(self._lib, "EVP_PKEY_X25519", None): + # EVP_PKEY_X25519 is not present in OpenSSL < 1.1.0 + return _X25519PublicKey(self, evp_pkey) + elif key_type == getattr(self._lib, "EVP_PKEY_ED448", None): + # EVP_PKEY_X25519 is not present in OpenSSL < 1.1.1 + return _Ed448PublicKey(self, evp_pkey) + else: + raise UnsupportedAlgorithm("Unsupported key type.") + + def _oaep_hash_supported(self, algorithm: hashes.HashAlgorithm) -> bool: + return isinstance( + algorithm, + ( + hashes.SHA1, + hashes.SHA224, + hashes.SHA256, + hashes.SHA384, + hashes.SHA512, + ), + ) + + def rsa_padding_supported(self, padding: AsymmetricPadding) -> bool: + if isinstance(padding, PKCS1v15): + return True + elif isinstance(padding, PSS) and isinstance(padding._mgf, MGF1): + # SHA1 is permissible in MGF1 in FIPS even when SHA1 is blocked + # as signature algorithm. + if self._fips_enabled and isinstance( + padding._mgf._algorithm, hashes.SHA1 + ): + return True + else: + return self.hash_supported(padding._mgf._algorithm) + elif isinstance(padding, OAEP) and isinstance(padding._mgf, MGF1): + return self._oaep_hash_supported( + padding._mgf._algorithm + ) and self._oaep_hash_supported(padding._algorithm) + else: + return False + + def generate_dsa_parameters(self, key_size: int) -> dsa.DSAParameters: + if key_size not in (1024, 2048, 3072, 4096): + raise ValueError( + "Key size must be 1024, 2048, 3072, or 4096 bits." 
+ ) + + ctx = self._lib.DSA_new() + self.openssl_assert(ctx != self._ffi.NULL) + ctx = self._ffi.gc(ctx, self._lib.DSA_free) + + res = self._lib.DSA_generate_parameters_ex( + ctx, + key_size, + self._ffi.NULL, + 0, + self._ffi.NULL, + self._ffi.NULL, + self._ffi.NULL, + ) + + self.openssl_assert(res == 1) + + return _DSAParameters(self, ctx) + + def generate_dsa_private_key( + self, parameters: dsa.DSAParameters + ) -> dsa.DSAPrivateKey: + ctx = self._lib.DSAparams_dup( + parameters._dsa_cdata # type: ignore[attr-defined] + ) + self.openssl_assert(ctx != self._ffi.NULL) + ctx = self._ffi.gc(ctx, self._lib.DSA_free) + self._lib.DSA_generate_key(ctx) + evp_pkey = self._dsa_cdata_to_evp_pkey(ctx) + + return _DSAPrivateKey(self, ctx, evp_pkey) + + def generate_dsa_private_key_and_parameters( + self, key_size: int + ) -> dsa.DSAPrivateKey: + parameters = self.generate_dsa_parameters(key_size) + return self.generate_dsa_private_key(parameters) + + def _dsa_cdata_set_values(self, dsa_cdata, p, q, g, pub_key, priv_key): + res = self._lib.DSA_set0_pqg(dsa_cdata, p, q, g) + self.openssl_assert(res == 1) + res = self._lib.DSA_set0_key(dsa_cdata, pub_key, priv_key) + self.openssl_assert(res == 1) + + def load_dsa_private_numbers( + self, numbers: dsa.DSAPrivateNumbers + ) -> dsa.DSAPrivateKey: + dsa._check_dsa_private_numbers(numbers) + parameter_numbers = numbers.public_numbers.parameter_numbers + + dsa_cdata = self._lib.DSA_new() + self.openssl_assert(dsa_cdata != self._ffi.NULL) + dsa_cdata = self._ffi.gc(dsa_cdata, self._lib.DSA_free) + + p = self._int_to_bn(parameter_numbers.p) + q = self._int_to_bn(parameter_numbers.q) + g = self._int_to_bn(parameter_numbers.g) + pub_key = self._int_to_bn(numbers.public_numbers.y) + priv_key = self._int_to_bn(numbers.x) + self._dsa_cdata_set_values(dsa_cdata, p, q, g, pub_key, priv_key) + + evp_pkey = self._dsa_cdata_to_evp_pkey(dsa_cdata) + + return _DSAPrivateKey(self, dsa_cdata, evp_pkey) + + def load_dsa_public_numbers( + self, numbers: dsa.DSAPublicNumbers + ) -> dsa.DSAPublicKey: + dsa._check_dsa_parameters(numbers.parameter_numbers) + dsa_cdata = self._lib.DSA_new() + self.openssl_assert(dsa_cdata != self._ffi.NULL) + dsa_cdata = self._ffi.gc(dsa_cdata, self._lib.DSA_free) + + p = self._int_to_bn(numbers.parameter_numbers.p) + q = self._int_to_bn(numbers.parameter_numbers.q) + g = self._int_to_bn(numbers.parameter_numbers.g) + pub_key = self._int_to_bn(numbers.y) + priv_key = self._ffi.NULL + self._dsa_cdata_set_values(dsa_cdata, p, q, g, pub_key, priv_key) + + evp_pkey = self._dsa_cdata_to_evp_pkey(dsa_cdata) + + return _DSAPublicKey(self, dsa_cdata, evp_pkey) + + def load_dsa_parameter_numbers( + self, numbers: dsa.DSAParameterNumbers + ) -> dsa.DSAParameters: + dsa._check_dsa_parameters(numbers) + dsa_cdata = self._lib.DSA_new() + self.openssl_assert(dsa_cdata != self._ffi.NULL) + dsa_cdata = self._ffi.gc(dsa_cdata, self._lib.DSA_free) + + p = self._int_to_bn(numbers.p) + q = self._int_to_bn(numbers.q) + g = self._int_to_bn(numbers.g) + res = self._lib.DSA_set0_pqg(dsa_cdata, p, q, g) + self.openssl_assert(res == 1) + + return _DSAParameters(self, dsa_cdata) + + def _dsa_cdata_to_evp_pkey(self, dsa_cdata): + evp_pkey = self._create_evp_pkey_gc() + res = self._lib.EVP_PKEY_set1_DSA(evp_pkey, dsa_cdata) + self.openssl_assert(res == 1) + return evp_pkey + + def dsa_supported(self) -> bool: + return not self._fips_enabled + + def dsa_hash_supported(self, algorithm: hashes.HashAlgorithm) -> bool: + if not self.dsa_supported(): + return False + return 
self.signature_hash_supported(algorithm) + + def cmac_algorithm_supported(self, algorithm) -> bool: + return self.cipher_supported( + algorithm, CBC(b"\x00" * algorithm.block_size) + ) + + def create_cmac_ctx(self, algorithm: BlockCipherAlgorithm) -> _CMACContext: + return _CMACContext(self, algorithm) + + def load_pem_private_key( + self, data: bytes, password: typing.Optional[bytes] + ) -> PRIVATE_KEY_TYPES: + return self._load_key( + self._lib.PEM_read_bio_PrivateKey, + self._evp_pkey_to_private_key, + data, + password, + ) + + def load_pem_public_key(self, data: bytes) -> PUBLIC_KEY_TYPES: + mem_bio = self._bytes_to_bio(data) + # In OpenSSL 3.0.x the PEM_read_bio_PUBKEY function will invoke + # the default password callback if you pass an encrypted private + # key. This is very, very, very bad as the default callback can + # trigger an interactive console prompt, which will hang the + # Python process. We therefore provide our own callback to + # catch this and error out properly. + userdata = self._ffi.new("CRYPTOGRAPHY_PASSWORD_DATA *") + evp_pkey = self._lib.PEM_read_bio_PUBKEY( + mem_bio.bio, + self._ffi.NULL, + self._ffi.addressof( + self._lib._original_lib, "Cryptography_pem_password_cb" + ), + userdata, + ) + if evp_pkey != self._ffi.NULL: + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + return self._evp_pkey_to_public_key(evp_pkey) + else: + # It's not a (RSA/DSA/ECDSA) subjectPublicKeyInfo, but we still + # need to check to see if it is a pure PKCS1 RSA public key (not + # embedded in a subjectPublicKeyInfo) + self._consume_errors() + res = self._lib.BIO_reset(mem_bio.bio) + self.openssl_assert(res == 1) + rsa_cdata = self._lib.PEM_read_bio_RSAPublicKey( + mem_bio.bio, + self._ffi.NULL, + self._ffi.addressof( + self._lib._original_lib, "Cryptography_pem_password_cb" + ), + userdata, + ) + if rsa_cdata != self._ffi.NULL: + rsa_cdata = self._ffi.gc(rsa_cdata, self._lib.RSA_free) + evp_pkey = self._rsa_cdata_to_evp_pkey(rsa_cdata) + return _RSAPublicKey(self, rsa_cdata, evp_pkey) + else: + self._handle_key_loading_error() + + def load_pem_parameters(self, data: bytes) -> dh.DHParameters: + mem_bio = self._bytes_to_bio(data) + # only DH is supported currently + dh_cdata = self._lib.PEM_read_bio_DHparams( + mem_bio.bio, self._ffi.NULL, self._ffi.NULL, self._ffi.NULL + ) + if dh_cdata != self._ffi.NULL: + dh_cdata = self._ffi.gc(dh_cdata, self._lib.DH_free) + return _DHParameters(self, dh_cdata) + else: + self._handle_key_loading_error() + + def load_der_private_key( + self, data: bytes, password: typing.Optional[bytes] + ) -> PRIVATE_KEY_TYPES: + # OpenSSL has a function called d2i_AutoPrivateKey that in theory + # handles this automatically, however it doesn't handle encrypted + # private keys. Instead we try to load the key two different ways. + # First we'll try to load it as a traditional key. + bio_data = self._bytes_to_bio(data) + key = self._evp_pkey_from_der_traditional_key(bio_data, password) + if key: + return self._evp_pkey_to_private_key(key) + else: + # Finally we try to load it with the method that handles encrypted + # PKCS8 properly. + return self._load_key( + self._lib.d2i_PKCS8PrivateKey_bio, + self._evp_pkey_to_private_key, + data, + password, + ) + + def _evp_pkey_from_der_traditional_key(self, bio_data, password): + key = self._lib.d2i_PrivateKey_bio(bio_data.bio, self._ffi.NULL) + if key != self._ffi.NULL: + # In OpenSSL 3.0.0-alpha15 there exist scenarios where the key will + # successfully load but errors are still put on the stack. 
Tracked + # as https://github.com/openssl/openssl/issues/14996 + self._consume_errors() + + key = self._ffi.gc(key, self._lib.EVP_PKEY_free) + if password is not None: + raise TypeError( + "Password was given but private key is not encrypted." + ) + + return key + else: + self._consume_errors() + return None + + def load_der_public_key(self, data: bytes) -> PUBLIC_KEY_TYPES: + mem_bio = self._bytes_to_bio(data) + evp_pkey = self._lib.d2i_PUBKEY_bio(mem_bio.bio, self._ffi.NULL) + if evp_pkey != self._ffi.NULL: + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + return self._evp_pkey_to_public_key(evp_pkey) + else: + # It's not a (RSA/DSA/ECDSA) subjectPublicKeyInfo, but we still + # need to check to see if it is a pure PKCS1 RSA public key (not + # embedded in a subjectPublicKeyInfo) + self._consume_errors() + res = self._lib.BIO_reset(mem_bio.bio) + self.openssl_assert(res == 1) + rsa_cdata = self._lib.d2i_RSAPublicKey_bio( + mem_bio.bio, self._ffi.NULL + ) + if rsa_cdata != self._ffi.NULL: + rsa_cdata = self._ffi.gc(rsa_cdata, self._lib.RSA_free) + evp_pkey = self._rsa_cdata_to_evp_pkey(rsa_cdata) + return _RSAPublicKey(self, rsa_cdata, evp_pkey) + else: + self._handle_key_loading_error() + + def load_der_parameters(self, data: bytes) -> dh.DHParameters: + mem_bio = self._bytes_to_bio(data) + dh_cdata = self._lib.d2i_DHparams_bio(mem_bio.bio, self._ffi.NULL) + if dh_cdata != self._ffi.NULL: + dh_cdata = self._ffi.gc(dh_cdata, self._lib.DH_free) + return _DHParameters(self, dh_cdata) + elif self._lib.Cryptography_HAS_EVP_PKEY_DHX: + # We check to see if the is dhx. + self._consume_errors() + res = self._lib.BIO_reset(mem_bio.bio) + self.openssl_assert(res == 1) + dh_cdata = self._lib.Cryptography_d2i_DHxparams_bio( + mem_bio.bio, self._ffi.NULL + ) + if dh_cdata != self._ffi.NULL: + dh_cdata = self._ffi.gc(dh_cdata, self._lib.DH_free) + return _DHParameters(self, dh_cdata) + + self._handle_key_loading_error() + + def _cert2ossl(self, cert: x509.Certificate) -> typing.Any: + data = cert.public_bytes(serialization.Encoding.DER) + mem_bio = self._bytes_to_bio(data) + x509 = self._lib.d2i_X509_bio(mem_bio.bio, self._ffi.NULL) + self.openssl_assert(x509 != self._ffi.NULL) + x509 = self._ffi.gc(x509, self._lib.X509_free) + return x509 + + def _ossl2cert(self, x509: typing.Any) -> x509.Certificate: + bio = self._create_mem_bio_gc() + res = self._lib.i2d_X509_bio(bio, x509) + self.openssl_assert(res == 1) + return rust_x509.load_der_x509_certificate(self._read_mem_bio(bio)) + + def _csr2ossl(self, csr: x509.CertificateSigningRequest) -> typing.Any: + data = csr.public_bytes(serialization.Encoding.DER) + mem_bio = self._bytes_to_bio(data) + x509_req = self._lib.d2i_X509_REQ_bio(mem_bio.bio, self._ffi.NULL) + self.openssl_assert(x509_req != self._ffi.NULL) + x509_req = self._ffi.gc(x509_req, self._lib.X509_REQ_free) + return x509_req + + def _ossl2csr( + self, x509_req: typing.Any + ) -> x509.CertificateSigningRequest: + bio = self._create_mem_bio_gc() + res = self._lib.i2d_X509_REQ_bio(bio, x509_req) + self.openssl_assert(res == 1) + return rust_x509.load_der_x509_csr(self._read_mem_bio(bio)) + + def _crl2ossl(self, crl: x509.CertificateRevocationList) -> typing.Any: + data = crl.public_bytes(serialization.Encoding.DER) + mem_bio = self._bytes_to_bio(data) + x509_crl = self._lib.d2i_X509_CRL_bio(mem_bio.bio, self._ffi.NULL) + self.openssl_assert(x509_crl != self._ffi.NULL) + x509_crl = self._ffi.gc(x509_crl, self._lib.X509_CRL_free) + return x509_crl + + def _ossl2crl( + self, x509_crl: 
typing.Any + ) -> x509.CertificateRevocationList: + bio = self._create_mem_bio_gc() + res = self._lib.i2d_X509_CRL_bio(bio, x509_crl) + self.openssl_assert(res == 1) + return rust_x509.load_der_x509_crl(self._read_mem_bio(bio)) + + def _crl_is_signature_valid( + self, + crl: x509.CertificateRevocationList, + public_key: CERTIFICATE_ISSUER_PUBLIC_KEY_TYPES, + ) -> bool: + if not isinstance( + public_key, + ( + _DSAPublicKey, + _RSAPublicKey, + _EllipticCurvePublicKey, + ), + ): + raise TypeError( + "Expecting one of DSAPublicKey, RSAPublicKey," + " or EllipticCurvePublicKey." + ) + x509_crl = self._crl2ossl(crl) + res = self._lib.X509_CRL_verify(x509_crl, public_key._evp_pkey) + + if res != 1: + self._consume_errors() + return False + + return True + + def _csr_is_signature_valid( + self, csr: x509.CertificateSigningRequest + ) -> bool: + x509_req = self._csr2ossl(csr) + pkey = self._lib.X509_REQ_get_pubkey(x509_req) + self.openssl_assert(pkey != self._ffi.NULL) + pkey = self._ffi.gc(pkey, self._lib.EVP_PKEY_free) + res = self._lib.X509_REQ_verify(x509_req, pkey) + + if res != 1: + self._consume_errors() + return False + + return True + + def _check_keys_correspond(self, key1, key2): + if self._lib.EVP_PKEY_cmp(key1._evp_pkey, key2._evp_pkey) != 1: + raise ValueError("Keys do not correspond") + + def _load_key(self, openssl_read_func, convert_func, data, password): + mem_bio = self._bytes_to_bio(data) + + userdata = self._ffi.new("CRYPTOGRAPHY_PASSWORD_DATA *") + if password is not None: + utils._check_byteslike("password", password) + password_ptr = self._ffi.from_buffer(password) + userdata.password = password_ptr + userdata.length = len(password) + + evp_pkey = openssl_read_func( + mem_bio.bio, + self._ffi.NULL, + self._ffi.addressof( + self._lib._original_lib, "Cryptography_pem_password_cb" + ), + userdata, + ) + + if evp_pkey == self._ffi.NULL: + if userdata.error != 0: + self._consume_errors() + if userdata.error == -1: + raise TypeError( + "Password was not given but private key is encrypted" + ) + else: + assert userdata.error == -2 + raise ValueError( + "Passwords longer than {} bytes are not supported " + "by this backend.".format(userdata.maxsize - 1) + ) + else: + self._handle_key_loading_error() + + # In OpenSSL 3.0.0-alpha15 there exist scenarios where the key will + # successfully load but errors are still put on the stack. Tracked + # as https://github.com/openssl/openssl/issues/14996 + self._consume_errors() + + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + + if password is not None and userdata.called == 0: + raise TypeError( + "Password was given but private key is not encrypted." + ) + + assert ( + password is not None and userdata.called == 1 + ) or password is None + + return convert_func(evp_pkey) + + def _handle_key_loading_error(self) -> typing.NoReturn: + errors = self._consume_errors() + + if not errors: + raise ValueError( + "Could not deserialize key data. The data may be in an " + "incorrect format or it may be encrypted with an unsupported " + "algorithm." + ) + + elif ( + errors[0]._lib_reason_match( + self._lib.ERR_LIB_EVP, self._lib.EVP_R_BAD_DECRYPT + ) + or errors[0]._lib_reason_match( + self._lib.ERR_LIB_PKCS12, + self._lib.PKCS12_R_PKCS12_CIPHERFINAL_ERROR, + ) + or ( + self._lib.Cryptography_HAS_PROVIDERS + and errors[0]._lib_reason_match( + self._lib.ERR_LIB_PROV, + self._lib.PROV_R_BAD_DECRYPT, + ) + ) + ): + raise ValueError("Bad decrypt. 
Incorrect password?") + + elif any( + error._lib_reason_match( + self._lib.ERR_LIB_EVP, + self._lib.EVP_R_UNSUPPORTED_PRIVATE_KEY_ALGORITHM, + ) + for error in errors + ): + raise ValueError("Unsupported public key algorithm.") + + else: + errors_with_text = binding._errors_with_text(errors) + raise ValueError( + "Could not deserialize key data. The data may be in an " + "incorrect format, it may be encrypted with an unsupported " + "algorithm, or it may be an unsupported key type (e.g. EC " + "curves with explicit parameters).", + errors_with_text, + ) + + def elliptic_curve_supported(self, curve: ec.EllipticCurve) -> bool: + try: + curve_nid = self._elliptic_curve_to_nid(curve) + except UnsupportedAlgorithm: + curve_nid = self._lib.NID_undef + + group = self._lib.EC_GROUP_new_by_curve_name(curve_nid) + + if group == self._ffi.NULL: + self._consume_errors() + return False + else: + self.openssl_assert(curve_nid != self._lib.NID_undef) + self._lib.EC_GROUP_free(group) + return True + + def elliptic_curve_signature_algorithm_supported( + self, + signature_algorithm: ec.EllipticCurveSignatureAlgorithm, + curve: ec.EllipticCurve, + ) -> bool: + # We only support ECDSA right now. + if not isinstance(signature_algorithm, ec.ECDSA): + return False + + return self.elliptic_curve_supported(curve) + + def generate_elliptic_curve_private_key( + self, curve: ec.EllipticCurve + ) -> ec.EllipticCurvePrivateKey: + """ + Generate a new private key on the named curve. + """ + + if self.elliptic_curve_supported(curve): + ec_cdata = self._ec_key_new_by_curve(curve) + + res = self._lib.EC_KEY_generate_key(ec_cdata) + self.openssl_assert(res == 1) + + evp_pkey = self._ec_cdata_to_evp_pkey(ec_cdata) + + return _EllipticCurvePrivateKey(self, ec_cdata, evp_pkey) + else: + raise UnsupportedAlgorithm( + "Backend object does not support {}.".format(curve.name), + _Reasons.UNSUPPORTED_ELLIPTIC_CURVE, + ) + + def load_elliptic_curve_private_numbers( + self, numbers: ec.EllipticCurvePrivateNumbers + ) -> ec.EllipticCurvePrivateKey: + public = numbers.public_numbers + + ec_cdata = self._ec_key_new_by_curve(public.curve) + + private_value = self._ffi.gc( + self._int_to_bn(numbers.private_value), self._lib.BN_clear_free + ) + res = self._lib.EC_KEY_set_private_key(ec_cdata, private_value) + if res != 1: + self._consume_errors() + raise ValueError("Invalid EC key.") + + self._ec_key_set_public_key_affine_coordinates( + ec_cdata, public.x, public.y + ) + + evp_pkey = self._ec_cdata_to_evp_pkey(ec_cdata) + + return _EllipticCurvePrivateKey(self, ec_cdata, evp_pkey) + + def load_elliptic_curve_public_numbers( + self, numbers: ec.EllipticCurvePublicNumbers + ) -> ec.EllipticCurvePublicKey: + ec_cdata = self._ec_key_new_by_curve(numbers.curve) + self._ec_key_set_public_key_affine_coordinates( + ec_cdata, numbers.x, numbers.y + ) + evp_pkey = self._ec_cdata_to_evp_pkey(ec_cdata) + + return _EllipticCurvePublicKey(self, ec_cdata, evp_pkey) + + def load_elliptic_curve_public_bytes( + self, curve: ec.EllipticCurve, point_bytes: bytes + ) -> ec.EllipticCurvePublicKey: + ec_cdata = self._ec_key_new_by_curve(curve) + group = self._lib.EC_KEY_get0_group(ec_cdata) + self.openssl_assert(group != self._ffi.NULL) + point = self._lib.EC_POINT_new(group) + self.openssl_assert(point != self._ffi.NULL) + point = self._ffi.gc(point, self._lib.EC_POINT_free) + with self._tmp_bn_ctx() as bn_ctx: + res = self._lib.EC_POINT_oct2point( + group, point, point_bytes, len(point_bytes), bn_ctx + ) + if res != 1: + self._consume_errors() + raise 
ValueError("Invalid public bytes for the given curve") + + res = self._lib.EC_KEY_set_public_key(ec_cdata, point) + self.openssl_assert(res == 1) + evp_pkey = self._ec_cdata_to_evp_pkey(ec_cdata) + return _EllipticCurvePublicKey(self, ec_cdata, evp_pkey) + + def derive_elliptic_curve_private_key( + self, private_value: int, curve: ec.EllipticCurve + ) -> ec.EllipticCurvePrivateKey: + ec_cdata = self._ec_key_new_by_curve(curve) + + get_func, group = self._ec_key_determine_group_get_func(ec_cdata) + + point = self._lib.EC_POINT_new(group) + self.openssl_assert(point != self._ffi.NULL) + point = self._ffi.gc(point, self._lib.EC_POINT_free) + + value = self._int_to_bn(private_value) + value = self._ffi.gc(value, self._lib.BN_clear_free) + + with self._tmp_bn_ctx() as bn_ctx: + res = self._lib.EC_POINT_mul( + group, point, value, self._ffi.NULL, self._ffi.NULL, bn_ctx + ) + self.openssl_assert(res == 1) + + bn_x = self._lib.BN_CTX_get(bn_ctx) + bn_y = self._lib.BN_CTX_get(bn_ctx) + + res = get_func(group, point, bn_x, bn_y, bn_ctx) + if res != 1: + self._consume_errors() + raise ValueError("Unable to derive key from private_value") + + res = self._lib.EC_KEY_set_public_key(ec_cdata, point) + self.openssl_assert(res == 1) + private = self._int_to_bn(private_value) + private = self._ffi.gc(private, self._lib.BN_clear_free) + res = self._lib.EC_KEY_set_private_key(ec_cdata, private) + self.openssl_assert(res == 1) + + evp_pkey = self._ec_cdata_to_evp_pkey(ec_cdata) + + return _EllipticCurvePrivateKey(self, ec_cdata, evp_pkey) + + def _ec_key_new_by_curve(self, curve: ec.EllipticCurve): + curve_nid = self._elliptic_curve_to_nid(curve) + return self._ec_key_new_by_curve_nid(curve_nid) + + def _ec_key_new_by_curve_nid(self, curve_nid: int): + ec_cdata = self._lib.EC_KEY_new_by_curve_name(curve_nid) + self.openssl_assert(ec_cdata != self._ffi.NULL) + return self._ffi.gc(ec_cdata, self._lib.EC_KEY_free) + + def elliptic_curve_exchange_algorithm_supported( + self, algorithm: ec.ECDH, curve: ec.EllipticCurve + ) -> bool: + if self._fips_enabled and not isinstance( + curve, self._fips_ecdh_curves + ): + return False + + return self.elliptic_curve_supported(curve) and isinstance( + algorithm, ec.ECDH + ) + + def _ec_cdata_to_evp_pkey(self, ec_cdata): + evp_pkey = self._create_evp_pkey_gc() + res = self._lib.EVP_PKEY_set1_EC_KEY(evp_pkey, ec_cdata) + self.openssl_assert(res == 1) + return evp_pkey + + def _elliptic_curve_to_nid(self, curve: ec.EllipticCurve) -> int: + """ + Get the NID for a curve name. + """ + + curve_aliases = {"secp192r1": "prime192v1", "secp256r1": "prime256v1"} + + curve_name = curve_aliases.get(curve.name, curve.name) + + curve_nid = self._lib.OBJ_sn2nid(curve_name.encode()) + if curve_nid == self._lib.NID_undef: + raise UnsupportedAlgorithm( + "{} is not a supported elliptic curve".format(curve.name), + _Reasons.UNSUPPORTED_ELLIPTIC_CURVE, + ) + return curve_nid + + @contextmanager + def _tmp_bn_ctx(self): + bn_ctx = self._lib.BN_CTX_new() + self.openssl_assert(bn_ctx != self._ffi.NULL) + bn_ctx = self._ffi.gc(bn_ctx, self._lib.BN_CTX_free) + self._lib.BN_CTX_start(bn_ctx) + try: + yield bn_ctx + finally: + self._lib.BN_CTX_end(bn_ctx) + + def _ec_key_determine_group_get_func(self, ctx): + """ + Given an EC_KEY determine the group and what function is required to + get point coordinates. 
+ """ + self.openssl_assert(ctx != self._ffi.NULL) + + nid_two_field = self._lib.OBJ_sn2nid(b"characteristic-two-field") + self.openssl_assert(nid_two_field != self._lib.NID_undef) + + group = self._lib.EC_KEY_get0_group(ctx) + self.openssl_assert(group != self._ffi.NULL) + + method = self._lib.EC_GROUP_method_of(group) + self.openssl_assert(method != self._ffi.NULL) + + nid = self._lib.EC_METHOD_get_field_type(method) + self.openssl_assert(nid != self._lib.NID_undef) + + if nid == nid_two_field and self._lib.Cryptography_HAS_EC2M: + get_func = self._lib.EC_POINT_get_affine_coordinates_GF2m + else: + get_func = self._lib.EC_POINT_get_affine_coordinates_GFp + + assert get_func + + return get_func, group + + def _ec_key_set_public_key_affine_coordinates(self, ctx, x: int, y: int): + """ + Sets the public key point in the EC_KEY context to the affine x and y + values. + """ + + if x < 0 or y < 0: + raise ValueError( + "Invalid EC key. Both x and y must be non-negative." + ) + + x = self._ffi.gc(self._int_to_bn(x), self._lib.BN_free) + y = self._ffi.gc(self._int_to_bn(y), self._lib.BN_free) + res = self._lib.EC_KEY_set_public_key_affine_coordinates(ctx, x, y) + if res != 1: + self._consume_errors() + raise ValueError("Invalid EC key.") + + def _private_key_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PrivateFormat, + encryption_algorithm: serialization.KeySerializationEncryption, + key, + evp_pkey, + cdata, + ) -> bytes: + # validate argument types + if not isinstance(encoding, serialization.Encoding): + raise TypeError("encoding must be an item from the Encoding enum") + if not isinstance(format, serialization.PrivateFormat): + raise TypeError( + "format must be an item from the PrivateFormat enum" + ) + if not isinstance( + encryption_algorithm, serialization.KeySerializationEncryption + ): + raise TypeError( + "Encryption algorithm must be a KeySerializationEncryption " + "instance" + ) + + # validate password + if isinstance(encryption_algorithm, serialization.NoEncryption): + password = b"" + elif isinstance( + encryption_algorithm, serialization.BestAvailableEncryption + ): + password = encryption_algorithm.password + if len(password) > 1023: + raise ValueError( + "Passwords longer than 1023 bytes are not supported by " + "this backend" + ) + elif ( + isinstance( + encryption_algorithm, serialization._KeySerializationEncryption + ) + and encryption_algorithm._format + is format + is serialization.PrivateFormat.OpenSSH + ): + password = encryption_algorithm.password + else: + raise ValueError("Unsupported encryption type") + + # PKCS8 + PEM/DER + if format is serialization.PrivateFormat.PKCS8: + if encoding is serialization.Encoding.PEM: + write_bio = self._lib.PEM_write_bio_PKCS8PrivateKey + elif encoding is serialization.Encoding.DER: + write_bio = self._lib.i2d_PKCS8PrivateKey_bio + else: + raise ValueError("Unsupported encoding for PKCS8") + return self._private_key_bytes_via_bio( + write_bio, evp_pkey, password + ) + + # TraditionalOpenSSL + PEM/DER + if format is serialization.PrivateFormat.TraditionalOpenSSL: + if self._fips_enabled and not isinstance( + encryption_algorithm, serialization.NoEncryption + ): + raise ValueError( + "Encrypted traditional OpenSSL format is not " + "supported in FIPS mode." 
+ ) + key_type = self._lib.EVP_PKEY_id(evp_pkey) + + if encoding is serialization.Encoding.PEM: + if key_type == self._lib.EVP_PKEY_RSA: + write_bio = self._lib.PEM_write_bio_RSAPrivateKey + elif key_type == self._lib.EVP_PKEY_DSA: + write_bio = self._lib.PEM_write_bio_DSAPrivateKey + elif key_type == self._lib.EVP_PKEY_EC: + write_bio = self._lib.PEM_write_bio_ECPrivateKey + else: + raise ValueError( + "Unsupported key type for TraditionalOpenSSL" + ) + return self._private_key_bytes_via_bio( + write_bio, cdata, password + ) + + if encoding is serialization.Encoding.DER: + if password: + raise ValueError( + "Encryption is not supported for DER encoded " + "traditional OpenSSL keys" + ) + if key_type == self._lib.EVP_PKEY_RSA: + write_bio = self._lib.i2d_RSAPrivateKey_bio + elif key_type == self._lib.EVP_PKEY_EC: + write_bio = self._lib.i2d_ECPrivateKey_bio + elif key_type == self._lib.EVP_PKEY_DSA: + write_bio = self._lib.i2d_DSAPrivateKey_bio + else: + raise ValueError( + "Unsupported key type for TraditionalOpenSSL" + ) + return self._bio_func_output(write_bio, cdata) + + raise ValueError("Unsupported encoding for TraditionalOpenSSL") + + # OpenSSH + PEM + if format is serialization.PrivateFormat.OpenSSH: + if encoding is serialization.Encoding.PEM: + return ssh._serialize_ssh_private_key( + key, password, encryption_algorithm + ) + + raise ValueError( + "OpenSSH private key format can only be used" + " with PEM encoding" + ) + + # Anything that key-specific code was supposed to handle earlier, + # like Raw. + raise ValueError("format is invalid with this key") + + def _private_key_bytes_via_bio(self, write_bio, evp_pkey, password): + if not password: + evp_cipher = self._ffi.NULL + else: + # This is a curated value that we will update over time. + evp_cipher = self._lib.EVP_get_cipherbyname(b"aes-256-cbc") + + return self._bio_func_output( + write_bio, + evp_pkey, + evp_cipher, + password, + len(password), + self._ffi.NULL, + self._ffi.NULL, + ) + + def _bio_func_output(self, write_bio, *args): + bio = self._create_mem_bio_gc() + res = write_bio(bio, *args) + self.openssl_assert(res == 1) + return self._read_mem_bio(bio) + + def _public_key_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PublicFormat, + key, + evp_pkey, + cdata, + ) -> bytes: + if not isinstance(encoding, serialization.Encoding): + raise TypeError("encoding must be an item from the Encoding enum") + if not isinstance(format, serialization.PublicFormat): + raise TypeError( + "format must be an item from the PublicFormat enum" + ) + + # SubjectPublicKeyInfo + PEM/DER + if format is serialization.PublicFormat.SubjectPublicKeyInfo: + if encoding is serialization.Encoding.PEM: + write_bio = self._lib.PEM_write_bio_PUBKEY + elif encoding is serialization.Encoding.DER: + write_bio = self._lib.i2d_PUBKEY_bio + else: + raise ValueError( + "SubjectPublicKeyInfo works only with PEM or DER encoding" + ) + return self._bio_func_output(write_bio, evp_pkey) + + # PKCS1 + PEM/DER + if format is serialization.PublicFormat.PKCS1: + # Only RSA is supported here. 
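+            # (Editor's note: the lines below are an illustrative aside, not
+            # part of the upstream file. A caller would typically reach this
+            # branch through the public API, e.g.
+            #     pem = rsa_public_key.public_bytes(
+            #         serialization.Encoding.PEM,
+            #         serialization.PublicFormat.PKCS1,
+            #     )
+            # where `rsa_public_key` is an assumed RSA public key object.)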
+ key_type = self._lib.EVP_PKEY_id(evp_pkey) + if key_type != self._lib.EVP_PKEY_RSA: + raise ValueError("PKCS1 format is supported only for RSA keys") + + if encoding is serialization.Encoding.PEM: + write_bio = self._lib.PEM_write_bio_RSAPublicKey + elif encoding is serialization.Encoding.DER: + write_bio = self._lib.i2d_RSAPublicKey_bio + else: + raise ValueError("PKCS1 works only with PEM or DER encoding") + return self._bio_func_output(write_bio, cdata) + + # OpenSSH + OpenSSH + if format is serialization.PublicFormat.OpenSSH: + if encoding is serialization.Encoding.OpenSSH: + return ssh.serialize_ssh_public_key(key) + + raise ValueError( + "OpenSSH format must be used with OpenSSH encoding" + ) + + # Anything that key-specific code was supposed to handle earlier, + # like Raw, CompressedPoint, UncompressedPoint + raise ValueError("format is invalid with this key") + + def dh_supported(self) -> bool: + return not self._lib.CRYPTOGRAPHY_IS_BORINGSSL + + def generate_dh_parameters( + self, generator: int, key_size: int + ) -> dh.DHParameters: + if key_size < dh._MIN_MODULUS_SIZE: + raise ValueError( + "DH key_size must be at least {} bits".format( + dh._MIN_MODULUS_SIZE + ) + ) + + if generator not in (2, 5): + raise ValueError("DH generator must be 2 or 5") + + dh_param_cdata = self._lib.DH_new() + self.openssl_assert(dh_param_cdata != self._ffi.NULL) + dh_param_cdata = self._ffi.gc(dh_param_cdata, self._lib.DH_free) + + res = self._lib.DH_generate_parameters_ex( + dh_param_cdata, key_size, generator, self._ffi.NULL + ) + self.openssl_assert(res == 1) + + return _DHParameters(self, dh_param_cdata) + + def _dh_cdata_to_evp_pkey(self, dh_cdata): + evp_pkey = self._create_evp_pkey_gc() + res = self._lib.EVP_PKEY_set1_DH(evp_pkey, dh_cdata) + self.openssl_assert(res == 1) + return evp_pkey + + def generate_dh_private_key( + self, parameters: dh.DHParameters + ) -> dh.DHPrivateKey: + dh_key_cdata = _dh_params_dup( + parameters._dh_cdata, self # type: ignore[attr-defined] + ) + + res = self._lib.DH_generate_key(dh_key_cdata) + self.openssl_assert(res == 1) + + evp_pkey = self._dh_cdata_to_evp_pkey(dh_key_cdata) + + return _DHPrivateKey(self, dh_key_cdata, evp_pkey) + + def generate_dh_private_key_and_parameters( + self, generator: int, key_size: int + ) -> dh.DHPrivateKey: + return self.generate_dh_private_key( + self.generate_dh_parameters(generator, key_size) + ) + + def load_dh_private_numbers( + self, numbers: dh.DHPrivateNumbers + ) -> dh.DHPrivateKey: + parameter_numbers = numbers.public_numbers.parameter_numbers + + dh_cdata = self._lib.DH_new() + self.openssl_assert(dh_cdata != self._ffi.NULL) + dh_cdata = self._ffi.gc(dh_cdata, self._lib.DH_free) + + p = self._int_to_bn(parameter_numbers.p) + g = self._int_to_bn(parameter_numbers.g) + + if parameter_numbers.q is not None: + q = self._int_to_bn(parameter_numbers.q) + else: + q = self._ffi.NULL + + pub_key = self._int_to_bn(numbers.public_numbers.y) + priv_key = self._int_to_bn(numbers.x) + + res = self._lib.DH_set0_pqg(dh_cdata, p, q, g) + self.openssl_assert(res == 1) + + res = self._lib.DH_set0_key(dh_cdata, pub_key, priv_key) + self.openssl_assert(res == 1) + + codes = self._ffi.new("int[]", 1) + res = self._lib.Cryptography_DH_check(dh_cdata, codes) + self.openssl_assert(res == 1) + + # DH_check will return DH_NOT_SUITABLE_GENERATOR if p % 24 does not + # equal 11 when the generator is 2 (a quadratic nonresidue). + # We want to ignore that error because p % 24 == 23 is also fine. + # Specifically, g is then a quadratic residue. 
Within the context of + # Diffie-Hellman this means it can only generate half the possible + # values. That sounds bad, but quadratic nonresidues leak a bit of + # the key to the attacker in exchange for having the full key space + # available. See: https://crypto.stackexchange.com/questions/12961 + if codes[0] != 0 and not ( + parameter_numbers.g == 2 + and codes[0] ^ self._lib.DH_NOT_SUITABLE_GENERATOR == 0 + ): + raise ValueError("DH private numbers did not pass safety checks.") + + evp_pkey = self._dh_cdata_to_evp_pkey(dh_cdata) + + return _DHPrivateKey(self, dh_cdata, evp_pkey) + + def load_dh_public_numbers( + self, numbers: dh.DHPublicNumbers + ) -> dh.DHPublicKey: + dh_cdata = self._lib.DH_new() + self.openssl_assert(dh_cdata != self._ffi.NULL) + dh_cdata = self._ffi.gc(dh_cdata, self._lib.DH_free) + + parameter_numbers = numbers.parameter_numbers + + p = self._int_to_bn(parameter_numbers.p) + g = self._int_to_bn(parameter_numbers.g) + + if parameter_numbers.q is not None: + q = self._int_to_bn(parameter_numbers.q) + else: + q = self._ffi.NULL + + pub_key = self._int_to_bn(numbers.y) + + res = self._lib.DH_set0_pqg(dh_cdata, p, q, g) + self.openssl_assert(res == 1) + + res = self._lib.DH_set0_key(dh_cdata, pub_key, self._ffi.NULL) + self.openssl_assert(res == 1) + + evp_pkey = self._dh_cdata_to_evp_pkey(dh_cdata) + + return _DHPublicKey(self, dh_cdata, evp_pkey) + + def load_dh_parameter_numbers( + self, numbers: dh.DHParameterNumbers + ) -> dh.DHParameters: + dh_cdata = self._lib.DH_new() + self.openssl_assert(dh_cdata != self._ffi.NULL) + dh_cdata = self._ffi.gc(dh_cdata, self._lib.DH_free) + + p = self._int_to_bn(numbers.p) + g = self._int_to_bn(numbers.g) + + if numbers.q is not None: + q = self._int_to_bn(numbers.q) + else: + q = self._ffi.NULL + + res = self._lib.DH_set0_pqg(dh_cdata, p, q, g) + self.openssl_assert(res == 1) + + return _DHParameters(self, dh_cdata) + + def dh_parameters_supported( + self, p: int, g: int, q: typing.Optional[int] = None + ) -> bool: + dh_cdata = self._lib.DH_new() + self.openssl_assert(dh_cdata != self._ffi.NULL) + dh_cdata = self._ffi.gc(dh_cdata, self._lib.DH_free) + + p = self._int_to_bn(p) + g = self._int_to_bn(g) + + if q is not None: + q = self._int_to_bn(q) + else: + q = self._ffi.NULL + + res = self._lib.DH_set0_pqg(dh_cdata, p, q, g) + self.openssl_assert(res == 1) + + codes = self._ffi.new("int[]", 1) + res = self._lib.Cryptography_DH_check(dh_cdata, codes) + self.openssl_assert(res == 1) + + return codes[0] == 0 + + def dh_x942_serialization_supported(self) -> bool: + return self._lib.Cryptography_HAS_EVP_PKEY_DHX == 1 + + def x25519_load_public_bytes(self, data: bytes) -> x25519.X25519PublicKey: + # When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_111 we can + # switch this to EVP_PKEY_new_raw_public_key + if len(data) != 32: + raise ValueError("An X25519 public key is 32 bytes long") + + evp_pkey = self._create_evp_pkey_gc() + res = self._lib.EVP_PKEY_set_type(evp_pkey, self._lib.NID_X25519) + self.openssl_assert(res == 1) + res = self._lib.EVP_PKEY_set1_tls_encodedpoint( + evp_pkey, data, len(data) + ) + self.openssl_assert(res == 1) + return _X25519PublicKey(self, evp_pkey) + + def x25519_load_private_bytes( + self, data: bytes + ) -> x25519.X25519PrivateKey: + # When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_111 we can + # switch this to EVP_PKEY_new_raw_private_key and drop the + # zeroed_bytearray garbage. 
+ # OpenSSL only has facilities for loading PKCS8 formatted private + # keys using the algorithm identifiers specified in + # https://tools.ietf.org/html/draft-ietf-curdle-pkix-09. + # This is the standard PKCS8 prefix for a 32 byte X25519 key. + # The form is: + # 0:d=0 hl=2 l= 46 cons: SEQUENCE + # 2:d=1 hl=2 l= 1 prim: INTEGER :00 + # 5:d=1 hl=2 l= 5 cons: SEQUENCE + # 7:d=2 hl=2 l= 3 prim: OBJECT :1.3.101.110 + # 12:d=1 hl=2 l= 34 prim: OCTET STRING (the key) + # Of course there's a bit more complexity. In reality OCTET STRING + # contains an OCTET STRING of length 32! So the last two bytes here + # are \x04\x20, which is an OCTET STRING of length 32. + if len(data) != 32: + raise ValueError("An X25519 private key is 32 bytes long") + + pkcs8_prefix = b'0.\x02\x01\x000\x05\x06\x03+en\x04"\x04 ' + with self._zeroed_bytearray(48) as ba: + ba[0:16] = pkcs8_prefix + ba[16:] = data + bio = self._bytes_to_bio(ba) + evp_pkey = self._lib.d2i_PrivateKey_bio(bio.bio, self._ffi.NULL) + + self.openssl_assert(evp_pkey != self._ffi.NULL) + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + self.openssl_assert( + self._lib.EVP_PKEY_id(evp_pkey) == self._lib.EVP_PKEY_X25519 + ) + return _X25519PrivateKey(self, evp_pkey) + + def _evp_pkey_keygen_gc(self, nid): + evp_pkey_ctx = self._lib.EVP_PKEY_CTX_new_id(nid, self._ffi.NULL) + self.openssl_assert(evp_pkey_ctx != self._ffi.NULL) + evp_pkey_ctx = self._ffi.gc(evp_pkey_ctx, self._lib.EVP_PKEY_CTX_free) + res = self._lib.EVP_PKEY_keygen_init(evp_pkey_ctx) + self.openssl_assert(res == 1) + evp_ppkey = self._ffi.new("EVP_PKEY **") + res = self._lib.EVP_PKEY_keygen(evp_pkey_ctx, evp_ppkey) + self.openssl_assert(res == 1) + self.openssl_assert(evp_ppkey[0] != self._ffi.NULL) + evp_pkey = self._ffi.gc(evp_ppkey[0], self._lib.EVP_PKEY_free) + return evp_pkey + + def x25519_generate_key(self) -> x25519.X25519PrivateKey: + evp_pkey = self._evp_pkey_keygen_gc(self._lib.NID_X25519) + return _X25519PrivateKey(self, evp_pkey) + + def x25519_supported(self) -> bool: + if self._fips_enabled: + return False + return not self._lib.CRYPTOGRAPHY_IS_LIBRESSL + + def x448_load_public_bytes(self, data: bytes) -> x448.X448PublicKey: + if len(data) != 56: + raise ValueError("An X448 public key is 56 bytes long") + + evp_pkey = self._lib.EVP_PKEY_new_raw_public_key( + self._lib.NID_X448, self._ffi.NULL, data, len(data) + ) + self.openssl_assert(evp_pkey != self._ffi.NULL) + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + return _X448PublicKey(self, evp_pkey) + + def x448_load_private_bytes(self, data: bytes) -> x448.X448PrivateKey: + if len(data) != 56: + raise ValueError("An X448 private key is 56 bytes long") + + data_ptr = self._ffi.from_buffer(data) + evp_pkey = self._lib.EVP_PKEY_new_raw_private_key( + self._lib.NID_X448, self._ffi.NULL, data_ptr, len(data) + ) + self.openssl_assert(evp_pkey != self._ffi.NULL) + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + return _X448PrivateKey(self, evp_pkey) + + def x448_generate_key(self) -> x448.X448PrivateKey: + evp_pkey = self._evp_pkey_keygen_gc(self._lib.NID_X448) + return _X448PrivateKey(self, evp_pkey) + + def x448_supported(self) -> bool: + if self._fips_enabled: + return False + return ( + not self._lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_111 + and not self._lib.CRYPTOGRAPHY_IS_BORINGSSL + ) + + def ed25519_supported(self) -> bool: + if self._fips_enabled: + return False + return not self._lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_111B + + def ed25519_load_public_bytes( + self, data: bytes + ) -> 
ed25519.Ed25519PublicKey: + utils._check_bytes("data", data) + + if len(data) != ed25519._ED25519_KEY_SIZE: + raise ValueError("An Ed25519 public key is 32 bytes long") + + evp_pkey = self._lib.EVP_PKEY_new_raw_public_key( + self._lib.NID_ED25519, self._ffi.NULL, data, len(data) + ) + self.openssl_assert(evp_pkey != self._ffi.NULL) + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + + return _Ed25519PublicKey(self, evp_pkey) + + def ed25519_load_private_bytes( + self, data: bytes + ) -> ed25519.Ed25519PrivateKey: + if len(data) != ed25519._ED25519_KEY_SIZE: + raise ValueError("An Ed25519 private key is 32 bytes long") + + utils._check_byteslike("data", data) + data_ptr = self._ffi.from_buffer(data) + evp_pkey = self._lib.EVP_PKEY_new_raw_private_key( + self._lib.NID_ED25519, self._ffi.NULL, data_ptr, len(data) + ) + self.openssl_assert(evp_pkey != self._ffi.NULL) + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + + return _Ed25519PrivateKey(self, evp_pkey) + + def ed25519_generate_key(self) -> ed25519.Ed25519PrivateKey: + evp_pkey = self._evp_pkey_keygen_gc(self._lib.NID_ED25519) + return _Ed25519PrivateKey(self, evp_pkey) + + def ed448_supported(self) -> bool: + if self._fips_enabled: + return False + return ( + not self._lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_111B + and not self._lib.CRYPTOGRAPHY_IS_BORINGSSL + ) + + def ed448_load_public_bytes(self, data: bytes) -> ed448.Ed448PublicKey: + utils._check_bytes("data", data) + if len(data) != _ED448_KEY_SIZE: + raise ValueError("An Ed448 public key is 57 bytes long") + + evp_pkey = self._lib.EVP_PKEY_new_raw_public_key( + self._lib.NID_ED448, self._ffi.NULL, data, len(data) + ) + self.openssl_assert(evp_pkey != self._ffi.NULL) + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + + return _Ed448PublicKey(self, evp_pkey) + + def ed448_load_private_bytes(self, data: bytes) -> ed448.Ed448PrivateKey: + utils._check_byteslike("data", data) + if len(data) != _ED448_KEY_SIZE: + raise ValueError("An Ed448 private key is 57 bytes long") + + data_ptr = self._ffi.from_buffer(data) + evp_pkey = self._lib.EVP_PKEY_new_raw_private_key( + self._lib.NID_ED448, self._ffi.NULL, data_ptr, len(data) + ) + self.openssl_assert(evp_pkey != self._ffi.NULL) + evp_pkey = self._ffi.gc(evp_pkey, self._lib.EVP_PKEY_free) + + return _Ed448PrivateKey(self, evp_pkey) + + def ed448_generate_key(self) -> ed448.Ed448PrivateKey: + evp_pkey = self._evp_pkey_keygen_gc(self._lib.NID_ED448) + return _Ed448PrivateKey(self, evp_pkey) + + def derive_scrypt( + self, + key_material: bytes, + salt: bytes, + length: int, + n: int, + r: int, + p: int, + ) -> bytes: + buf = self._ffi.new("unsigned char[]", length) + key_material_ptr = self._ffi.from_buffer(key_material) + res = self._lib.EVP_PBE_scrypt( + key_material_ptr, + len(key_material), + salt, + len(salt), + n, + r, + p, + scrypt._MEM_LIMIT, + buf, + length, + ) + if res != 1: + errors = self._consume_errors_with_text() + # memory required formula explained here: + # https://blog.filippo.io/the-scrypt-parameters/ + min_memory = 128 * n * r // (1024**2) + raise MemoryError( + "Not enough memory to derive key. These parameters require" + " {} MB of memory.".format(min_memory), + errors, + ) + return self._ffi.buffer(buf)[:] + + def aead_cipher_supported(self, cipher) -> bool: + cipher_name = aead._aead_cipher_name(cipher) + if self._fips_enabled and cipher_name not in self._fips_aead: + return False + # SIV isn't loaded through get_cipherbyname but instead a new fetch API + # only available in 3.0+. 
But if we know we're on 3.0+ then we know + # it's supported. + if cipher_name.endswith(b"-siv"): + return self._lib.CRYPTOGRAPHY_OPENSSL_300_OR_GREATER == 1 + else: + return ( + self._lib.EVP_get_cipherbyname(cipher_name) != self._ffi.NULL + ) + + @contextlib.contextmanager + def _zeroed_bytearray(self, length: int) -> typing.Iterator[bytearray]: + """ + This method creates a bytearray, which we copy data into (hopefully + also from a mutable buffer that can be dynamically erased!), and then + zero when we're done. + """ + ba = bytearray(length) + try: + yield ba + finally: + self._zero_data(ba, length) + + def _zero_data(self, data, length: int) -> None: + # We clear things this way because at the moment we're not + # sure of a better way that can guarantee it overwrites the + # memory of a bytearray and doesn't just replace the underlying char *. + for i in range(length): + data[i] = 0 + + @contextlib.contextmanager + def _zeroed_null_terminated_buf(self, data): + """ + This method takes bytes, which can be a bytestring or a mutable + buffer like a bytearray, and yields a null-terminated version of that + data. This is required because PKCS12_parse doesn't take a length with + its password char * and ffi.from_buffer doesn't provide null + termination. So, to support zeroing the data via bytearray we + need to build this ridiculous construct that copies the memory, but + zeroes it after use. + """ + if data is None: + yield self._ffi.NULL + else: + data_len = len(data) + buf = self._ffi.new("char[]", data_len + 1) + self._ffi.memmove(buf, data, data_len) + try: + yield buf + finally: + # Cast to a uint8_t * so we can assign by integer + self._zero_data(self._ffi.cast("uint8_t *", buf), data_len) + + def load_key_and_certificates_from_pkcs12( + self, data: bytes, password: typing.Optional[bytes] + ) -> typing.Tuple[ + typing.Optional[PRIVATE_KEY_TYPES], + typing.Optional[x509.Certificate], + typing.List[x509.Certificate], + ]: + pkcs12 = self.load_pkcs12(data, password) + return ( + pkcs12.key, + pkcs12.cert.certificate if pkcs12.cert else None, + [cert.certificate for cert in pkcs12.additional_certs], + ) + + def load_pkcs12( + self, data: bytes, password: typing.Optional[bytes] + ) -> PKCS12KeyAndCertificates: + if password is not None: + utils._check_byteslike("password", password) + + bio = self._bytes_to_bio(data) + p12 = self._lib.d2i_PKCS12_bio(bio.bio, self._ffi.NULL) + if p12 == self._ffi.NULL: + self._consume_errors() + raise ValueError("Could not deserialize PKCS12 data") + + p12 = self._ffi.gc(p12, self._lib.PKCS12_free) + evp_pkey_ptr = self._ffi.new("EVP_PKEY **") + x509_ptr = self._ffi.new("X509 **") + sk_x509_ptr = self._ffi.new("Cryptography_STACK_OF_X509 **") + with self._zeroed_null_terminated_buf(password) as password_buf: + res = self._lib.PKCS12_parse( + p12, password_buf, evp_pkey_ptr, x509_ptr, sk_x509_ptr + ) + # OpenSSL 3.0.6 leaves errors on the stack even in success, so + # we consume all errors unconditionally. 
+ # https://github.com/openssl/openssl/issues/19389 + self._consume_errors() + if res == 0: + raise ValueError("Invalid password or PKCS12 data") + + cert = None + key = None + additional_certificates = [] + + if evp_pkey_ptr[0] != self._ffi.NULL: + evp_pkey = self._ffi.gc(evp_pkey_ptr[0], self._lib.EVP_PKEY_free) + key = self._evp_pkey_to_private_key(evp_pkey) + + if x509_ptr[0] != self._ffi.NULL: + x509 = self._ffi.gc(x509_ptr[0], self._lib.X509_free) + cert_obj = self._ossl2cert(x509) + name = None + maybe_name = self._lib.X509_alias_get0(x509, self._ffi.NULL) + if maybe_name != self._ffi.NULL: + name = self._ffi.string(maybe_name) + cert = PKCS12Certificate(cert_obj, name) + + if sk_x509_ptr[0] != self._ffi.NULL: + sk_x509 = self._ffi.gc(sk_x509_ptr[0], self._lib.sk_X509_free) + num = self._lib.sk_X509_num(sk_x509_ptr[0]) + + # In OpenSSL < 3.0.0 PKCS12 parsing reverses the order of the + # certificates. + indices: typing.Iterable[int] + if ( + self._lib.CRYPTOGRAPHY_OPENSSL_300_OR_GREATER + or self._lib.CRYPTOGRAPHY_IS_BORINGSSL + ): + indices = range(num) + else: + indices = reversed(range(num)) + + for i in indices: + x509 = self._lib.sk_X509_value(sk_x509, i) + self.openssl_assert(x509 != self._ffi.NULL) + x509 = self._ffi.gc(x509, self._lib.X509_free) + addl_cert = self._ossl2cert(x509) + addl_name = None + maybe_name = self._lib.X509_alias_get0(x509, self._ffi.NULL) + if maybe_name != self._ffi.NULL: + addl_name = self._ffi.string(maybe_name) + additional_certificates.append( + PKCS12Certificate(addl_cert, addl_name) + ) + + return PKCS12KeyAndCertificates(key, cert, additional_certificates) + + def serialize_key_and_certificates_to_pkcs12( + self, + name: typing.Optional[bytes], + key: typing.Optional[_ALLOWED_PKCS12_TYPES], + cert: typing.Optional[x509.Certificate], + cas: typing.Optional[typing.List[_PKCS12_CAS_TYPES]], + encryption_algorithm: serialization.KeySerializationEncryption, + ) -> bytes: + password = None + if name is not None: + utils._check_bytes("name", name) + + if isinstance(encryption_algorithm, serialization.NoEncryption): + nid_cert = -1 + nid_key = -1 + pkcs12_iter = 0 + mac_iter = 0 + mac_alg = self._ffi.NULL + elif isinstance( + encryption_algorithm, serialization.BestAvailableEncryption + ): + # PKCS12 encryption is hopeless trash and can never be fixed. + # OpenSSL 3 supports PBESv2, but Libre and Boring do not, so + # we use PBESv1 with 3DES on the older paths. + if self._lib.CRYPTOGRAPHY_OPENSSL_300_OR_GREATER: + nid_cert = self._lib.NID_aes_256_cbc + nid_key = self._lib.NID_aes_256_cbc + else: + nid_cert = self._lib.NID_pbe_WithSHA1And3_Key_TripleDES_CBC + nid_key = self._lib.NID_pbe_WithSHA1And3_Key_TripleDES_CBC + # At least we can set this higher than OpenSSL's default + pkcs12_iter = 20000 + # mac_iter chosen for compatibility reasons, see: + # https://www.openssl.org/docs/man1.1.1/man3/PKCS12_create.html + # Did we mention how lousy PKCS12 encryption is? + mac_iter = 1 + # MAC algorithm can only be set on OpenSSL 3.0.0+ + mac_alg = self._ffi.NULL + password = encryption_algorithm.password + elif ( + isinstance( + encryption_algorithm, serialization._KeySerializationEncryption + ) + and encryption_algorithm._format + is serialization.PrivateFormat.PKCS12 + ): + # Default to OpenSSL's defaults. Behavior will vary based on the + # version of OpenSSL cryptography is compiled against. 
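+            # (Editor's note, a hedged sketch rather than upstream code: this
+            # branch corresponds to the key-serialization builder API of this
+            # cryptography version, roughly
+            #     enc = serialization.PrivateFormat.PKCS12.encryption_builder(
+            #     ).kdf_rounds(50000).build(b"password")
+            # the round count and password here are illustrative only.)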
+ nid_cert = 0 + nid_key = 0 + # Use the default iters we use in best available + pkcs12_iter = 20000 + # See the Best Available comment for why this is 1 + mac_iter = 1 + password = encryption_algorithm.password + keycertalg = encryption_algorithm._key_cert_algorithm + if keycertalg is PBES.PBESv1SHA1And3KeyTripleDESCBC: + nid_cert = self._lib.NID_pbe_WithSHA1And3_Key_TripleDES_CBC + nid_key = self._lib.NID_pbe_WithSHA1And3_Key_TripleDES_CBC + elif keycertalg is PBES.PBESv2SHA256AndAES256CBC: + if not self._lib.CRYPTOGRAPHY_OPENSSL_300_OR_GREATER: + raise UnsupportedAlgorithm( + "PBESv2 is not supported by this version of OpenSSL" + ) + nid_cert = self._lib.NID_aes_256_cbc + nid_key = self._lib.NID_aes_256_cbc + else: + assert keycertalg is None + # We use OpenSSL's defaults + + if encryption_algorithm._hmac_hash is not None: + if not self._lib.Cryptography_HAS_PKCS12_SET_MAC: + raise UnsupportedAlgorithm( + "Setting MAC algorithm is not supported by this " + "version of OpenSSL." + ) + mac_alg = self._evp_md_non_null_from_algorithm( + encryption_algorithm._hmac_hash + ) + self.openssl_assert(mac_alg != self._ffi.NULL) + else: + mac_alg = self._ffi.NULL + + if encryption_algorithm._kdf_rounds is not None: + pkcs12_iter = encryption_algorithm._kdf_rounds + + else: + raise ValueError("Unsupported key encryption type") + + if cas is None or len(cas) == 0: + sk_x509 = self._ffi.NULL + else: + sk_x509 = self._lib.sk_X509_new_null() + sk_x509 = self._ffi.gc(sk_x509, self._lib.sk_X509_free) + + # This list is to keep the x509 values alive until end of function + ossl_cas = [] + for ca in cas: + if isinstance(ca, PKCS12Certificate): + ca_alias = ca.friendly_name + ossl_ca = self._cert2ossl(ca.certificate) + with self._zeroed_null_terminated_buf( + ca_alias + ) as ca_name_buf: + res = self._lib.X509_alias_set1( + ossl_ca, ca_name_buf, -1 + ) + self.openssl_assert(res == 1) + else: + ossl_ca = self._cert2ossl(ca) + ossl_cas.append(ossl_ca) + res = self._lib.sk_X509_push(sk_x509, ossl_ca) + backend.openssl_assert(res >= 1) + + with self._zeroed_null_terminated_buf(password) as password_buf: + with self._zeroed_null_terminated_buf(name) as name_buf: + ossl_cert = self._cert2ossl(cert) if cert else self._ffi.NULL + if key is not None: + evp_pkey = key._evp_pkey # type: ignore[union-attr] + else: + evp_pkey = self._ffi.NULL + + p12 = self._lib.PKCS12_create( + password_buf, + name_buf, + evp_pkey, + ossl_cert, + sk_x509, + nid_key, + nid_cert, + pkcs12_iter, + mac_iter, + 0, + ) + + if ( + self._lib.Cryptography_HAS_PKCS12_SET_MAC + and mac_alg != self._ffi.NULL + ): + self._lib.PKCS12_set_mac( + p12, + password_buf, + -1, + self._ffi.NULL, + 0, + mac_iter, + mac_alg, + ) + + self.openssl_assert(p12 != self._ffi.NULL) + p12 = self._ffi.gc(p12, self._lib.PKCS12_free) + + bio = self._create_mem_bio_gc() + res = self._lib.i2d_PKCS12_bio(bio, p12) + self.openssl_assert(res > 0) + return self._read_mem_bio(bio) + + def poly1305_supported(self) -> bool: + if self._fips_enabled: + return False + return self._lib.Cryptography_HAS_POLY1305 == 1 + + def create_poly1305_ctx(self, key: bytes) -> _Poly1305Context: + utils._check_byteslike("key", key) + if len(key) != _POLY1305_KEY_SIZE: + raise ValueError("A poly1305 key is 32 bytes long") + + return _Poly1305Context(self, key) + + def pkcs7_supported(self) -> bool: + return not self._lib.CRYPTOGRAPHY_IS_BORINGSSL + + def load_pem_pkcs7_certificates( + self, data: bytes + ) -> typing.List[x509.Certificate]: + utils._check_bytes("data", data) + bio = 
self._bytes_to_bio(data)
+        p7 = self._lib.PEM_read_bio_PKCS7(
+            bio.bio, self._ffi.NULL, self._ffi.NULL, self._ffi.NULL
+        )
+        if p7 == self._ffi.NULL:
+            self._consume_errors()
+            raise ValueError("Unable to parse PKCS7 data")
+
+        p7 = self._ffi.gc(p7, self._lib.PKCS7_free)
+        return self._load_pkcs7_certificates(p7)
+
+    def load_der_pkcs7_certificates(
+        self, data: bytes
+    ) -> typing.List[x509.Certificate]:
+        utils._check_bytes("data", data)
+        bio = self._bytes_to_bio(data)
+        p7 = self._lib.d2i_PKCS7_bio(bio.bio, self._ffi.NULL)
+        if p7 == self._ffi.NULL:
+            self._consume_errors()
+            raise ValueError("Unable to parse PKCS7 data")
+
+        p7 = self._ffi.gc(p7, self._lib.PKCS7_free)
+        return self._load_pkcs7_certificates(p7)
+
+    def _load_pkcs7_certificates(self, p7):
+        nid = self._lib.OBJ_obj2nid(p7.type)
+        self.openssl_assert(nid != self._lib.NID_undef)
+        if nid != self._lib.NID_pkcs7_signed:
+            raise UnsupportedAlgorithm(
+                "Only basic signed structures are currently supported. NID"
+                " for this data was {}".format(nid),
+                _Reasons.UNSUPPORTED_SERIALIZATION,
+            )
+
+        sk_x509 = p7.d.sign.cert
+        num = self._lib.sk_X509_num(sk_x509)
+        certs = []
+        for i in range(num):
+            x509 = self._lib.sk_X509_value(sk_x509, i)
+            self.openssl_assert(x509 != self._ffi.NULL)
+            res = self._lib.X509_up_ref(x509)
+            # When OpenSSL is less than 1.1.0 up_ref returns the current
+            # refcount. On 1.1.0+ it returns 1 for success.
+            self.openssl_assert(res >= 1)
+            x509 = self._ffi.gc(x509, self._lib.X509_free)
+            cert = self._ossl2cert(x509)
+            certs.append(cert)
+
+        return certs
+
+    def pkcs7_serialize_certificates(
+        self,
+        certs: typing.List[x509.Certificate],
+        encoding: serialization.Encoding,
+    ):
+        certs = list(certs)
+        if not certs or not all(
+            isinstance(cert, x509.Certificate) for cert in certs
+        ):
+            raise TypeError("certs must be a list of certs with length >= 1")
+
+        if encoding not in (
+            serialization.Encoding.PEM,
+            serialization.Encoding.DER,
+        ):
+            raise TypeError(
+                "encoding must be DER or PEM from the Encoding enum"
+            )
+
+        certs_sk = self._lib.sk_X509_new_null()
+        certs_sk = self._ffi.gc(certs_sk, self._lib.sk_X509_free)
+        # This list is to keep the x509 values alive until end of function
+        ossl_certs = []
+        for cert in certs:
+            ossl_cert = self._cert2ossl(cert)
+            ossl_certs.append(ossl_cert)
+            res = self._lib.sk_X509_push(certs_sk, ossl_cert)
+            self.openssl_assert(res >= 1)
+        # We use PKCS7_sign here because it creates the PKCS7 and PKCS7_SIGNED
+        # structures for us rather than requiring manual assignment.
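+        # (Editor's note, hedged: the public entry point for this path is
+        # pkcs7.serialize_certificates(certs, encoding) from
+        # cryptography.hazmat.primitives.serialization.pkcs7.)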
+        p7 = self._lib.PKCS7_sign(
+            self._ffi.NULL,
+            self._ffi.NULL,
+            certs_sk,
+            self._ffi.NULL,
+            self._lib.PKCS7_PARTIAL,
+        )
+        bio_out = self._create_mem_bio_gc()
+        if encoding is serialization.Encoding.PEM:
+            res = self._lib.PEM_write_bio_PKCS7_stream(
+                bio_out, p7, self._ffi.NULL, 0
+            )
+        else:
+            assert encoding is serialization.Encoding.DER
+            res = self._lib.i2d_PKCS7_bio(bio_out, p7)
+
+        self.openssl_assert(res == 1)
+        return self._read_mem_bio(bio_out)
+
+    def pkcs7_sign(
+        self,
+        builder: pkcs7.PKCS7SignatureBuilder,
+        encoding: serialization.Encoding,
+        options: typing.List[pkcs7.PKCS7Options],
+    ) -> bytes:
+        assert builder._data is not None
+        bio = self._bytes_to_bio(builder._data)
+        init_flags = self._lib.PKCS7_PARTIAL
+        final_flags = 0
+
+        if len(builder._additional_certs) == 0:
+            certs = self._ffi.NULL
+        else:
+            certs = self._lib.sk_X509_new_null()
+            certs = self._ffi.gc(certs, self._lib.sk_X509_free)
+            # This list is to keep the x509 values alive until end of function
+            ossl_certs = []
+            for cert in builder._additional_certs:
+                ossl_cert = self._cert2ossl(cert)
+                ossl_certs.append(ossl_cert)
+                res = self._lib.sk_X509_push(certs, ossl_cert)
+                self.openssl_assert(res >= 1)
+
+        if pkcs7.PKCS7Options.DetachedSignature in options:
+            # Don't embed the data in the PKCS7 structure
+            init_flags |= self._lib.PKCS7_DETACHED
+            final_flags |= self._lib.PKCS7_DETACHED
+
+        # This just inits a structure for us. However, there
+        # are flags we need to set, joy.
+        p7 = self._lib.PKCS7_sign(
+            self._ffi.NULL,
+            self._ffi.NULL,
+            certs,
+            self._ffi.NULL,
+            init_flags,
+        )
+        self.openssl_assert(p7 != self._ffi.NULL)
+        p7 = self._ffi.gc(p7, self._lib.PKCS7_free)
+        signer_flags = 0
+        # These flags are configurable on a per-signature basis
+        # but we've deliberately chosen to make the API only allow
+        # setting it across all signatures for now.
+        if pkcs7.PKCS7Options.NoCapabilities in options:
+            signer_flags |= self._lib.PKCS7_NOSMIMECAP
+        elif pkcs7.PKCS7Options.NoAttributes in options:
+            signer_flags |= self._lib.PKCS7_NOATTR
+
+        if pkcs7.PKCS7Options.NoCerts in options:
+            signer_flags |= self._lib.PKCS7_NOCERTS
+
+        for certificate, private_key, hash_algorithm in builder._signers:
+            ossl_cert = self._cert2ossl(certificate)
+            md = self._evp_md_non_null_from_algorithm(hash_algorithm)
+            p7signerinfo = self._lib.PKCS7_sign_add_signer(
+                p7,
+                ossl_cert,
+                private_key._evp_pkey,  # type: ignore[union-attr]
+                md,
+                signer_flags,
+            )
+            self.openssl_assert(p7signerinfo != self._ffi.NULL)
+
+        for option in options:
+            # DetachedSignature, NoCapabilities, and NoAttributes are already
+            # handled so we just need to check these last two options.
+            if option is pkcs7.PKCS7Options.Text:
+                final_flags |= self._lib.PKCS7_TEXT
+            elif option is pkcs7.PKCS7Options.Binary:
+                final_flags |= self._lib.PKCS7_BINARY
+
+        bio_out = self._create_mem_bio_gc()
+        if encoding is serialization.Encoding.SMIME:
+            # This finalizes the structure
+            res = self._lib.SMIME_write_PKCS7(
+                bio_out, p7, bio.bio, final_flags
+            )
+        elif encoding is serialization.Encoding.PEM:
+            res = self._lib.PKCS7_final(p7, bio.bio, final_flags)
+            self.openssl_assert(res == 1)
+            res = self._lib.PEM_write_bio_PKCS7_stream(
+                bio_out, p7, bio.bio, final_flags
+            )
+        else:
+            assert encoding is serialization.Encoding.DER
+            # We need to call finalize here because i2d_PKCS7_bio does not
+            # finalize.
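+            # (Editor's note, an illustrative sketch rather than upstream
+            # code: this DER branch is reached through the public builder,
+            # roughly
+            #     pkcs7.PKCS7SignatureBuilder().set_data(
+            #         b"data"
+            #     ).add_signer(
+            #         cert, key, hashes.SHA256()
+            #     ).sign(serialization.Encoding.DER, [])
+            # where `cert` and `key` are assumed caller-supplied objects.)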
+ res = self._lib.PKCS7_final(p7, bio.bio, final_flags) + self.openssl_assert(res == 1) + # OpenSSL 3.0 leaves a random bio error on the stack: + # https://github.com/openssl/openssl/issues/16681 + if self._lib.CRYPTOGRAPHY_OPENSSL_300_OR_GREATER: + self._consume_errors() + res = self._lib.i2d_PKCS7_bio(bio_out, p7) + self.openssl_assert(res == 1) + return self._read_mem_bio(bio_out) + + +class GetCipherByName: + def __init__(self, fmt: str): + self._fmt = fmt + + def __call__(self, backend: Backend, cipher: CipherAlgorithm, mode: Mode): + cipher_name = self._fmt.format(cipher=cipher, mode=mode).lower() + evp_cipher = backend._lib.EVP_get_cipherbyname( + cipher_name.encode("ascii") + ) + + # try EVP_CIPHER_fetch if present + if ( + evp_cipher == backend._ffi.NULL + and backend._lib.Cryptography_HAS_300_EVP_CIPHER + ): + evp_cipher = backend._lib.EVP_CIPHER_fetch( + backend._ffi.NULL, + cipher_name.encode("ascii"), + backend._ffi.NULL, + ) + + backend._consume_errors() + return evp_cipher + + +def _get_xts_cipher(backend: Backend, cipher: AES, mode): + cipher_name = "aes-{}-xts".format(cipher.key_size // 2) + return backend._lib.EVP_get_cipherbyname(cipher_name.encode("ascii")) + + +backend = Backend() diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ciphers.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ciphers.py new file mode 100644 index 00000000..1058de9f --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ciphers.py @@ -0,0 +1,282 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.exceptions import InvalidTag, UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.primitives import ciphers +from cryptography.hazmat.primitives.ciphers import algorithms, modes + + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +class _CipherContext: + _ENCRYPT = 1 + _DECRYPT = 0 + _MAX_CHUNK_SIZE = 2**30 - 1 + + def __init__( + self, backend: "Backend", cipher, mode, operation: int + ) -> None: + self._backend = backend + self._cipher = cipher + self._mode = mode + self._operation = operation + self._tag: typing.Optional[bytes] = None + + if isinstance(self._cipher, ciphers.BlockCipherAlgorithm): + self._block_size_bytes = self._cipher.block_size // 8 + else: + self._block_size_bytes = 1 + + ctx = self._backend._lib.EVP_CIPHER_CTX_new() + ctx = self._backend._ffi.gc( + ctx, self._backend._lib.EVP_CIPHER_CTX_free + ) + + registry = self._backend._cipher_registry + try: + adapter = registry[type(cipher), type(mode)] + except KeyError: + raise UnsupportedAlgorithm( + "cipher {} in {} mode is not supported " + "by this backend.".format( + cipher.name, mode.name if mode else mode + ), + _Reasons.UNSUPPORTED_CIPHER, + ) + + evp_cipher = adapter(self._backend, cipher, mode) + if evp_cipher == self._backend._ffi.NULL: + msg = "cipher {0.name} ".format(cipher) + if mode is not None: + msg += "in {0.name} mode ".format(mode) + msg += ( + "is not supported by this backend (Your version of OpenSSL " + "may be too old. 
Current version: {}.)" + ).format(self._backend.openssl_version_text()) + raise UnsupportedAlgorithm(msg, _Reasons.UNSUPPORTED_CIPHER) + + if isinstance(mode, modes.ModeWithInitializationVector): + iv_nonce = self._backend._ffi.from_buffer( + mode.initialization_vector + ) + elif isinstance(mode, modes.ModeWithTweak): + iv_nonce = self._backend._ffi.from_buffer(mode.tweak) + elif isinstance(mode, modes.ModeWithNonce): + iv_nonce = self._backend._ffi.from_buffer(mode.nonce) + elif isinstance(cipher, algorithms.ChaCha20): + iv_nonce = self._backend._ffi.from_buffer(cipher.nonce) + else: + iv_nonce = self._backend._ffi.NULL + # begin init with cipher and operation type + res = self._backend._lib.EVP_CipherInit_ex( + ctx, + evp_cipher, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + operation, + ) + self._backend.openssl_assert(res != 0) + # set the key length to handle variable key ciphers + res = self._backend._lib.EVP_CIPHER_CTX_set_key_length( + ctx, len(cipher.key) + ) + self._backend.openssl_assert(res != 0) + if isinstance(mode, modes.GCM): + res = self._backend._lib.EVP_CIPHER_CTX_ctrl( + ctx, + self._backend._lib.EVP_CTRL_AEAD_SET_IVLEN, + len(iv_nonce), + self._backend._ffi.NULL, + ) + self._backend.openssl_assert(res != 0) + if mode.tag is not None: + res = self._backend._lib.EVP_CIPHER_CTX_ctrl( + ctx, + self._backend._lib.EVP_CTRL_AEAD_SET_TAG, + len(mode.tag), + mode.tag, + ) + self._backend.openssl_assert(res != 0) + self._tag = mode.tag + + # pass key/iv + res = self._backend._lib.EVP_CipherInit_ex( + ctx, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._backend._ffi.from_buffer(cipher.key), + iv_nonce, + operation, + ) + + # Check for XTS mode duplicate keys error + errors = self._backend._consume_errors() + lib = self._backend._lib + if res == 0 and ( + ( + lib.CRYPTOGRAPHY_OPENSSL_111D_OR_GREATER + and errors[0]._lib_reason_match( + lib.ERR_LIB_EVP, lib.EVP_R_XTS_DUPLICATED_KEYS + ) + ) + or ( + lib.Cryptography_HAS_PROVIDERS + and errors[0]._lib_reason_match( + lib.ERR_LIB_PROV, lib.PROV_R_XTS_DUPLICATED_KEYS + ) + ) + ): + raise ValueError("In XTS mode duplicated keys are not allowed") + + self._backend.openssl_assert(res != 0, errors=errors) + + # We purposely disable padding here as it's handled higher up in the + # API. + self._backend._lib.EVP_CIPHER_CTX_set_padding(ctx, 0) + self._ctx = ctx + + def update(self, data: bytes) -> bytes: + buf = bytearray(len(data) + self._block_size_bytes - 1) + n = self.update_into(data, buf) + return bytes(buf[:n]) + + def update_into(self, data: bytes, buf: bytes) -> int: + total_data_len = len(data) + if len(buf) < (total_data_len + self._block_size_bytes - 1): + raise ValueError( + "buffer must be at least {} bytes for this " + "payload".format(len(data) + self._block_size_bytes - 1) + ) + + data_processed = 0 + total_out = 0 + outlen = self._backend._ffi.new("int *") + baseoutbuf = self._backend._ffi.from_buffer(buf) + baseinbuf = self._backend._ffi.from_buffer(data) + + while data_processed != total_data_len: + outbuf = baseoutbuf + total_out + inbuf = baseinbuf + data_processed + inlen = min(self._MAX_CHUNK_SIZE, total_data_len - data_processed) + + res = self._backend._lib.EVP_CipherUpdate( + self._ctx, outbuf, outlen, inbuf, inlen + ) + if res == 0 and isinstance(self._mode, modes.XTS): + self._backend._consume_errors() + raise ValueError( + "In XTS mode you must supply at least a full block in the " + "first update call. For AES this is 16 bytes." 
+ ) + else: + self._backend.openssl_assert(res != 0) + data_processed += inlen + total_out += outlen[0] + + return total_out + + def finalize(self) -> bytes: + if ( + self._operation == self._DECRYPT + and isinstance(self._mode, modes.ModeWithAuthenticationTag) + and self.tag is None + ): + raise ValueError( + "Authentication tag must be provided when decrypting." + ) + + buf = self._backend._ffi.new("unsigned char[]", self._block_size_bytes) + outlen = self._backend._ffi.new("int *") + res = self._backend._lib.EVP_CipherFinal_ex(self._ctx, buf, outlen) + if res == 0: + errors = self._backend._consume_errors() + + if not errors and isinstance(self._mode, modes.GCM): + raise InvalidTag + + lib = self._backend._lib + self._backend.openssl_assert( + errors[0]._lib_reason_match( + lib.ERR_LIB_EVP, + lib.EVP_R_DATA_NOT_MULTIPLE_OF_BLOCK_LENGTH, + ) + or ( + lib.Cryptography_HAS_PROVIDERS + and errors[0]._lib_reason_match( + lib.ERR_LIB_PROV, + lib.PROV_R_WRONG_FINAL_BLOCK_LENGTH, + ) + ) + or ( + lib.CRYPTOGRAPHY_IS_BORINGSSL + and errors[0].reason + == lib.CIPHER_R_DATA_NOT_MULTIPLE_OF_BLOCK_LENGTH + ), + errors=errors, + ) + raise ValueError( + "The length of the provided data is not a multiple of " + "the block length." + ) + + if ( + isinstance(self._mode, modes.GCM) + and self._operation == self._ENCRYPT + ): + tag_buf = self._backend._ffi.new( + "unsigned char[]", self._block_size_bytes + ) + res = self._backend._lib.EVP_CIPHER_CTX_ctrl( + self._ctx, + self._backend._lib.EVP_CTRL_AEAD_GET_TAG, + self._block_size_bytes, + tag_buf, + ) + self._backend.openssl_assert(res != 0) + self._tag = self._backend._ffi.buffer(tag_buf)[:] + + res = self._backend._lib.EVP_CIPHER_CTX_reset(self._ctx) + self._backend.openssl_assert(res == 1) + return self._backend._ffi.buffer(buf)[: outlen[0]] + + def finalize_with_tag(self, tag: bytes) -> bytes: + tag_len = len(tag) + if tag_len < self._mode._min_tag_length: + raise ValueError( + "Authentication tag must be {} bytes or longer.".format( + self._mode._min_tag_length + ) + ) + elif tag_len > self._block_size_bytes: + raise ValueError( + "Authentication tag cannot be more than {} bytes.".format( + self._block_size_bytes + ) + ) + res = self._backend._lib.EVP_CIPHER_CTX_ctrl( + self._ctx, self._backend._lib.EVP_CTRL_AEAD_SET_TAG, len(tag), tag + ) + self._backend.openssl_assert(res != 0) + self._tag = tag + return self.finalize() + + def authenticate_additional_data(self, data: bytes) -> None: + outlen = self._backend._ffi.new("int *") + res = self._backend._lib.EVP_CipherUpdate( + self._ctx, + self._backend._ffi.NULL, + outlen, + self._backend._ffi.from_buffer(data), + len(data), + ) + self._backend.openssl_assert(res != 0) + + @property + def tag(self) -> typing.Optional[bytes]: + return self._tag diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/cmac.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/cmac.py new file mode 100644 index 00000000..35f50c5c --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/cmac.py @@ -0,0 +1,87 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
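+# (Editor's note, hedged: _CMACContext below backs the public
+# cryptography.hazmat.primitives.cmac.CMAC interface. A minimal usage
+# sketch, assuming a caller-supplied AES key `key`:
+#     c = cmac.CMAC(algorithms.AES(key))
+#     c.update(b"message to authenticate")
+#     tag = c.finalize()
+# )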
+ +import typing + +from cryptography.exceptions import ( + InvalidSignature, + UnsupportedAlgorithm, + _Reasons, +) +from cryptography.hazmat.primitives import constant_time +from cryptography.hazmat.primitives.ciphers.modes import CBC + +if typing.TYPE_CHECKING: + from cryptography.hazmat.primitives import ciphers + from cryptography.hazmat.backends.openssl.backend import Backend + + +class _CMACContext: + def __init__( + self, + backend: "Backend", + algorithm: "ciphers.BlockCipherAlgorithm", + ctx=None, + ) -> None: + if not backend.cmac_algorithm_supported(algorithm): + raise UnsupportedAlgorithm( + "This backend does not support CMAC.", + _Reasons.UNSUPPORTED_CIPHER, + ) + + self._backend = backend + self._key = algorithm.key + self._algorithm = algorithm + self._output_length = algorithm.block_size // 8 + + if ctx is None: + registry = self._backend._cipher_registry + adapter = registry[type(algorithm), CBC] + + evp_cipher = adapter(self._backend, algorithm, CBC) + + ctx = self._backend._lib.CMAC_CTX_new() + + self._backend.openssl_assert(ctx != self._backend._ffi.NULL) + ctx = self._backend._ffi.gc(ctx, self._backend._lib.CMAC_CTX_free) + + key_ptr = self._backend._ffi.from_buffer(self._key) + res = self._backend._lib.CMAC_Init( + ctx, + key_ptr, + len(self._key), + evp_cipher, + self._backend._ffi.NULL, + ) + self._backend.openssl_assert(res == 1) + + self._ctx = ctx + + def update(self, data: bytes) -> None: + res = self._backend._lib.CMAC_Update(self._ctx, data, len(data)) + self._backend.openssl_assert(res == 1) + + def finalize(self) -> bytes: + buf = self._backend._ffi.new("unsigned char[]", self._output_length) + length = self._backend._ffi.new("size_t *", self._output_length) + res = self._backend._lib.CMAC_Final(self._ctx, buf, length) + self._backend.openssl_assert(res == 1) + + self._ctx = None + + return self._backend._ffi.buffer(buf)[:] + + def copy(self) -> "_CMACContext": + copied_ctx = self._backend._lib.CMAC_CTX_new() + copied_ctx = self._backend._ffi.gc( + copied_ctx, self._backend._lib.CMAC_CTX_free + ) + res = self._backend._lib.CMAC_CTX_copy(copied_ctx, self._ctx) + self._backend.openssl_assert(res == 1) + return _CMACContext(self._backend, self._algorithm, ctx=copied_ctx) + + def verify(self, signature: bytes) -> None: + digest = self.finalize() + if not constant_time.bytes_eq(digest, signature): + raise InvalidSignature("Signature did not match digest.") diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/decode_asn1.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/decode_asn1.py new file mode 100644 index 00000000..df91d6d8 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/decode_asn1.py @@ -0,0 +1,31 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
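+# (Editor's note, hedged usage sketch: encoders consult the table below as in
+#     code = _CRL_ENTRY_REASON_ENUM_TO_CODE[x509.ReasonFlags.key_compromise]
+# which yields 1; value 7 is deliberately unused per RFC 5280.)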
+ + +from cryptography import x509 + +# CRLReason ::= ENUMERATED { +# unspecified (0), +# keyCompromise (1), +# cACompromise (2), +# affiliationChanged (3), +# superseded (4), +# cessationOfOperation (5), +# certificateHold (6), +# -- value 7 is not used +# removeFromCRL (8), +# privilegeWithdrawn (9), +# aACompromise (10) } +_CRL_ENTRY_REASON_ENUM_TO_CODE = { + x509.ReasonFlags.unspecified: 0, + x509.ReasonFlags.key_compromise: 1, + x509.ReasonFlags.ca_compromise: 2, + x509.ReasonFlags.affiliation_changed: 3, + x509.ReasonFlags.superseded: 4, + x509.ReasonFlags.cessation_of_operation: 5, + x509.ReasonFlags.certificate_hold: 6, + x509.ReasonFlags.remove_from_crl: 8, + x509.ReasonFlags.privilege_withdrawn: 9, + x509.ReasonFlags.aa_compromise: 10, +} diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/dh.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/dh.py new file mode 100644 index 00000000..70364a3c --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/dh.py @@ -0,0 +1,318 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.primitives import serialization +from cryptography.hazmat.primitives.asymmetric import dh + + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +def _dh_params_dup(dh_cdata, backend: "Backend"): + lib = backend._lib + ffi = backend._ffi + + param_cdata = lib.DHparams_dup(dh_cdata) + backend.openssl_assert(param_cdata != ffi.NULL) + param_cdata = ffi.gc(param_cdata, lib.DH_free) + if lib.CRYPTOGRAPHY_IS_LIBRESSL: + # In libressl DHparams_dup don't copy q + q = ffi.new("BIGNUM **") + lib.DH_get0_pqg(dh_cdata, ffi.NULL, q, ffi.NULL) + q_dup = lib.BN_dup(q[0]) + res = lib.DH_set0_pqg(param_cdata, ffi.NULL, q_dup, ffi.NULL) + backend.openssl_assert(res == 1) + + return param_cdata + + +def _dh_cdata_to_parameters(dh_cdata, backend: "Backend") -> "_DHParameters": + param_cdata = _dh_params_dup(dh_cdata, backend) + return _DHParameters(backend, param_cdata) + + +class _DHParameters(dh.DHParameters): + def __init__(self, backend: "Backend", dh_cdata): + self._backend = backend + self._dh_cdata = dh_cdata + + def parameter_numbers(self) -> dh.DHParameterNumbers: + p = self._backend._ffi.new("BIGNUM **") + g = self._backend._ffi.new("BIGNUM **") + q = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DH_get0_pqg(self._dh_cdata, p, q, g) + self._backend.openssl_assert(p[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(g[0] != self._backend._ffi.NULL) + q_val: typing.Optional[int] + if q[0] == self._backend._ffi.NULL: + q_val = None + else: + q_val = self._backend._bn_to_int(q[0]) + return dh.DHParameterNumbers( + p=self._backend._bn_to_int(p[0]), + g=self._backend._bn_to_int(g[0]), + q=q_val, + ) + + def generate_private_key(self) -> dh.DHPrivateKey: + return self._backend.generate_dh_private_key(self) + + def parameter_bytes( + self, + encoding: serialization.Encoding, + format: serialization.ParameterFormat, + ) -> bytes: + if encoding is serialization.Encoding.OpenSSH: + raise TypeError("OpenSSH encoding is not supported") + + if format is not serialization.ParameterFormat.PKCS3: + raise ValueError("Only PKCS3 serialization is supported") + + q = self._backend._ffi.new("BIGNUM 
**") + self._backend._lib.DH_get0_pqg( + self._dh_cdata, self._backend._ffi.NULL, q, self._backend._ffi.NULL + ) + if ( + q[0] != self._backend._ffi.NULL + and not self._backend._lib.Cryptography_HAS_EVP_PKEY_DHX + ): + raise UnsupportedAlgorithm( + "DH X9.42 serialization is not supported", + _Reasons.UNSUPPORTED_SERIALIZATION, + ) + + if encoding is serialization.Encoding.PEM: + if q[0] != self._backend._ffi.NULL: + write_bio = self._backend._lib.PEM_write_bio_DHxparams + else: + write_bio = self._backend._lib.PEM_write_bio_DHparams + elif encoding is serialization.Encoding.DER: + if q[0] != self._backend._ffi.NULL: + write_bio = self._backend._lib.Cryptography_i2d_DHxparams_bio + else: + write_bio = self._backend._lib.i2d_DHparams_bio + else: + raise TypeError("encoding must be an item from the Encoding enum") + + bio = self._backend._create_mem_bio_gc() + res = write_bio(bio, self._dh_cdata) + self._backend.openssl_assert(res == 1) + return self._backend._read_mem_bio(bio) + + +def _get_dh_num_bits(backend, dh_cdata) -> int: + p = backend._ffi.new("BIGNUM **") + backend._lib.DH_get0_pqg(dh_cdata, p, backend._ffi.NULL, backend._ffi.NULL) + backend.openssl_assert(p[0] != backend._ffi.NULL) + return backend._lib.BN_num_bits(p[0]) + + +class _DHPrivateKey(dh.DHPrivateKey): + def __init__(self, backend: "Backend", dh_cdata, evp_pkey): + self._backend = backend + self._dh_cdata = dh_cdata + self._evp_pkey = evp_pkey + self._key_size_bytes = self._backend._lib.DH_size(dh_cdata) + + @property + def key_size(self) -> int: + return _get_dh_num_bits(self._backend, self._dh_cdata) + + def private_numbers(self) -> dh.DHPrivateNumbers: + p = self._backend._ffi.new("BIGNUM **") + g = self._backend._ffi.new("BIGNUM **") + q = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DH_get0_pqg(self._dh_cdata, p, q, g) + self._backend.openssl_assert(p[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(g[0] != self._backend._ffi.NULL) + if q[0] == self._backend._ffi.NULL: + q_val = None + else: + q_val = self._backend._bn_to_int(q[0]) + pub_key = self._backend._ffi.new("BIGNUM **") + priv_key = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DH_get0_key(self._dh_cdata, pub_key, priv_key) + self._backend.openssl_assert(pub_key[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(priv_key[0] != self._backend._ffi.NULL) + return dh.DHPrivateNumbers( + public_numbers=dh.DHPublicNumbers( + parameter_numbers=dh.DHParameterNumbers( + p=self._backend._bn_to_int(p[0]), + g=self._backend._bn_to_int(g[0]), + q=q_val, + ), + y=self._backend._bn_to_int(pub_key[0]), + ), + x=self._backend._bn_to_int(priv_key[0]), + ) + + def exchange(self, peer_public_key: dh.DHPublicKey) -> bytes: + if not isinstance(peer_public_key, _DHPublicKey): + raise TypeError("peer_public_key must be a DHPublicKey") + + ctx = self._backend._lib.EVP_PKEY_CTX_new( + self._evp_pkey, self._backend._ffi.NULL + ) + self._backend.openssl_assert(ctx != self._backend._ffi.NULL) + ctx = self._backend._ffi.gc(ctx, self._backend._lib.EVP_PKEY_CTX_free) + res = self._backend._lib.EVP_PKEY_derive_init(ctx) + self._backend.openssl_assert(res == 1) + res = self._backend._lib.EVP_PKEY_derive_set_peer( + ctx, peer_public_key._evp_pkey + ) + # Invalid kex errors here in OpenSSL 3.0 because checks were moved + # to EVP_PKEY_derive_set_peer + self._exchange_assert(res == 1) + keylen = self._backend._ffi.new("size_t *") + res = self._backend._lib.EVP_PKEY_derive( + ctx, self._backend._ffi.NULL, keylen + ) + # Invalid kex errors here in 
OpenSSL < 3 + self._exchange_assert(res == 1) + self._backend.openssl_assert(keylen[0] > 0) + buf = self._backend._ffi.new("unsigned char[]", keylen[0]) + res = self._backend._lib.EVP_PKEY_derive(ctx, buf, keylen) + self._backend.openssl_assert(res == 1) + + key = self._backend._ffi.buffer(buf, keylen[0])[:] + pad = self._key_size_bytes - len(key) + + if pad > 0: + key = (b"\x00" * pad) + key + + return key + + def _exchange_assert(self, ok: bool) -> None: + if not ok: + errors_with_text = self._backend._consume_errors_with_text() + raise ValueError( + "Error computing shared key.", + errors_with_text, + ) + + def public_key(self) -> dh.DHPublicKey: + dh_cdata = _dh_params_dup(self._dh_cdata, self._backend) + pub_key = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DH_get0_key( + self._dh_cdata, pub_key, self._backend._ffi.NULL + ) + self._backend.openssl_assert(pub_key[0] != self._backend._ffi.NULL) + pub_key_dup = self._backend._lib.BN_dup(pub_key[0]) + self._backend.openssl_assert(pub_key_dup != self._backend._ffi.NULL) + + res = self._backend._lib.DH_set0_key( + dh_cdata, pub_key_dup, self._backend._ffi.NULL + ) + self._backend.openssl_assert(res == 1) + evp_pkey = self._backend._dh_cdata_to_evp_pkey(dh_cdata) + return _DHPublicKey(self._backend, dh_cdata, evp_pkey) + + def parameters(self) -> dh.DHParameters: + return _dh_cdata_to_parameters(self._dh_cdata, self._backend) + + def private_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PrivateFormat, + encryption_algorithm: serialization.KeySerializationEncryption, + ) -> bytes: + if format is not serialization.PrivateFormat.PKCS8: + raise ValueError( + "DH private keys support only PKCS8 serialization" + ) + if not self._backend._lib.Cryptography_HAS_EVP_PKEY_DHX: + q = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DH_get0_pqg( + self._dh_cdata, + self._backend._ffi.NULL, + q, + self._backend._ffi.NULL, + ) + if q[0] != self._backend._ffi.NULL: + raise UnsupportedAlgorithm( + "DH X9.42 serialization is not supported", + _Reasons.UNSUPPORTED_SERIALIZATION, + ) + + return self._backend._private_key_bytes( + encoding, + format, + encryption_algorithm, + self, + self._evp_pkey, + self._dh_cdata, + ) + + +class _DHPublicKey(dh.DHPublicKey): + def __init__(self, backend: "Backend", dh_cdata, evp_pkey): + self._backend = backend + self._dh_cdata = dh_cdata + self._evp_pkey = evp_pkey + self._key_size_bits = _get_dh_num_bits(self._backend, self._dh_cdata) + + @property + def key_size(self) -> int: + return self._key_size_bits + + def public_numbers(self) -> dh.DHPublicNumbers: + p = self._backend._ffi.new("BIGNUM **") + g = self._backend._ffi.new("BIGNUM **") + q = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DH_get0_pqg(self._dh_cdata, p, q, g) + self._backend.openssl_assert(p[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(g[0] != self._backend._ffi.NULL) + if q[0] == self._backend._ffi.NULL: + q_val = None + else: + q_val = self._backend._bn_to_int(q[0]) + pub_key = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DH_get0_key( + self._dh_cdata, pub_key, self._backend._ffi.NULL + ) + self._backend.openssl_assert(pub_key[0] != self._backend._ffi.NULL) + return dh.DHPublicNumbers( + parameter_numbers=dh.DHParameterNumbers( + p=self._backend._bn_to_int(p[0]), + g=self._backend._bn_to_int(g[0]), + q=q_val, + ), + y=self._backend._bn_to_int(pub_key[0]), + ) + + def parameters(self) -> dh.DHParameters: + return _dh_cdata_to_parameters(self._dh_cdata, self._backend) 
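+
+    # Usage sketch (illustrative): these backend classes sit behind the
+    # public DH API; a typical key exchange looks like the following,
+    # assuming both sides share the same parameters:
+    #
+    #     from cryptography.hazmat.primitives.asymmetric import dh
+    #
+    #     parameters = dh.generate_parameters(generator=2, key_size=2048)
+    #     private_key = parameters.generate_private_key()
+    #     peer_key = parameters.generate_private_key()
+    #     shared_secret = private_key.exchange(peer_key.public_key())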
+ + def public_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PublicFormat, + ) -> bytes: + if format is not serialization.PublicFormat.SubjectPublicKeyInfo: + raise ValueError( + "DH public keys support only " + "SubjectPublicKeyInfo serialization" + ) + + if not self._backend._lib.Cryptography_HAS_EVP_PKEY_DHX: + q = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DH_get0_pqg( + self._dh_cdata, + self._backend._ffi.NULL, + q, + self._backend._ffi.NULL, + ) + if q[0] != self._backend._ffi.NULL: + raise UnsupportedAlgorithm( + "DH X9.42 serialization is not supported", + _Reasons.UNSUPPORTED_SERIALIZATION, + ) + + return self._backend._public_key_bytes( + encoding, format, self, self._evp_pkey, None + ) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/dsa.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/dsa.py new file mode 100644 index 00000000..8634b725 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/dsa.py @@ -0,0 +1,239 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import typing + +from cryptography.exceptions import InvalidSignature +from cryptography.hazmat.backends.openssl.utils import ( + _calculate_digest_and_algorithm, +) +from cryptography.hazmat.primitives import hashes, serialization +from cryptography.hazmat.primitives.asymmetric import ( + dsa, + utils as asym_utils, +) + + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +def _dsa_sig_sign( + backend: "Backend", private_key: "_DSAPrivateKey", data: bytes +) -> bytes: + sig_buf_len = backend._lib.DSA_size(private_key._dsa_cdata) + sig_buf = backend._ffi.new("unsigned char[]", sig_buf_len) + buflen = backend._ffi.new("unsigned int *") + + # The first parameter passed to DSA_sign is unused by OpenSSL but + # must be an integer. + res = backend._lib.DSA_sign( + 0, data, len(data), sig_buf, buflen, private_key._dsa_cdata + ) + backend.openssl_assert(res == 1) + backend.openssl_assert(buflen[0]) + + return backend._ffi.buffer(sig_buf)[: buflen[0]] + + +def _dsa_sig_verify( + backend: "Backend", + public_key: "_DSAPublicKey", + signature: bytes, + data: bytes, +) -> None: + # The first parameter passed to DSA_verify is unused by OpenSSL but + # must be an integer. 
+ res = backend._lib.DSA_verify( + 0, data, len(data), signature, len(signature), public_key._dsa_cdata + ) + + if res != 1: + backend._consume_errors() + raise InvalidSignature + + +class _DSAParameters(dsa.DSAParameters): + def __init__(self, backend: "Backend", dsa_cdata): + self._backend = backend + self._dsa_cdata = dsa_cdata + + def parameter_numbers(self) -> dsa.DSAParameterNumbers: + p = self._backend._ffi.new("BIGNUM **") + q = self._backend._ffi.new("BIGNUM **") + g = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DSA_get0_pqg(self._dsa_cdata, p, q, g) + self._backend.openssl_assert(p[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(q[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(g[0] != self._backend._ffi.NULL) + return dsa.DSAParameterNumbers( + p=self._backend._bn_to_int(p[0]), + q=self._backend._bn_to_int(q[0]), + g=self._backend._bn_to_int(g[0]), + ) + + def generate_private_key(self) -> dsa.DSAPrivateKey: + return self._backend.generate_dsa_private_key(self) + + +class _DSAPrivateKey(dsa.DSAPrivateKey): + _key_size: int + + def __init__(self, backend: "Backend", dsa_cdata, evp_pkey): + self._backend = backend + self._dsa_cdata = dsa_cdata + self._evp_pkey = evp_pkey + + p = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DSA_get0_pqg( + dsa_cdata, p, self._backend._ffi.NULL, self._backend._ffi.NULL + ) + self._backend.openssl_assert(p[0] != backend._ffi.NULL) + self._key_size = self._backend._lib.BN_num_bits(p[0]) + + @property + def key_size(self) -> int: + return self._key_size + + def private_numbers(self) -> dsa.DSAPrivateNumbers: + p = self._backend._ffi.new("BIGNUM **") + q = self._backend._ffi.new("BIGNUM **") + g = self._backend._ffi.new("BIGNUM **") + pub_key = self._backend._ffi.new("BIGNUM **") + priv_key = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DSA_get0_pqg(self._dsa_cdata, p, q, g) + self._backend.openssl_assert(p[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(q[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(g[0] != self._backend._ffi.NULL) + self._backend._lib.DSA_get0_key(self._dsa_cdata, pub_key, priv_key) + self._backend.openssl_assert(pub_key[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(priv_key[0] != self._backend._ffi.NULL) + return dsa.DSAPrivateNumbers( + public_numbers=dsa.DSAPublicNumbers( + parameter_numbers=dsa.DSAParameterNumbers( + p=self._backend._bn_to_int(p[0]), + q=self._backend._bn_to_int(q[0]), + g=self._backend._bn_to_int(g[0]), + ), + y=self._backend._bn_to_int(pub_key[0]), + ), + x=self._backend._bn_to_int(priv_key[0]), + ) + + def public_key(self) -> dsa.DSAPublicKey: + dsa_cdata = self._backend._lib.DSAparams_dup(self._dsa_cdata) + self._backend.openssl_assert(dsa_cdata != self._backend._ffi.NULL) + dsa_cdata = self._backend._ffi.gc( + dsa_cdata, self._backend._lib.DSA_free + ) + pub_key = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DSA_get0_key( + self._dsa_cdata, pub_key, self._backend._ffi.NULL + ) + self._backend.openssl_assert(pub_key[0] != self._backend._ffi.NULL) + pub_key_dup = self._backend._lib.BN_dup(pub_key[0]) + res = self._backend._lib.DSA_set0_key( + dsa_cdata, pub_key_dup, self._backend._ffi.NULL + ) + self._backend.openssl_assert(res == 1) + evp_pkey = self._backend._dsa_cdata_to_evp_pkey(dsa_cdata) + return _DSAPublicKey(self._backend, dsa_cdata, evp_pkey) + + def parameters(self) -> dsa.DSAParameters: + dsa_cdata = self._backend._lib.DSAparams_dup(self._dsa_cdata) + 
self._backend.openssl_assert(dsa_cdata != self._backend._ffi.NULL) + dsa_cdata = self._backend._ffi.gc( + dsa_cdata, self._backend._lib.DSA_free + ) + return _DSAParameters(self._backend, dsa_cdata) + + def private_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PrivateFormat, + encryption_algorithm: serialization.KeySerializationEncryption, + ) -> bytes: + return self._backend._private_key_bytes( + encoding, + format, + encryption_algorithm, + self, + self._evp_pkey, + self._dsa_cdata, + ) + + def sign( + self, + data: bytes, + algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm], + ) -> bytes: + data, _ = _calculate_digest_and_algorithm(data, algorithm) + return _dsa_sig_sign(self._backend, self, data) + + +class _DSAPublicKey(dsa.DSAPublicKey): + _key_size: int + + def __init__(self, backend: "Backend", dsa_cdata, evp_pkey): + self._backend = backend + self._dsa_cdata = dsa_cdata + self._evp_pkey = evp_pkey + p = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DSA_get0_pqg( + dsa_cdata, p, self._backend._ffi.NULL, self._backend._ffi.NULL + ) + self._backend.openssl_assert(p[0] != backend._ffi.NULL) + self._key_size = self._backend._lib.BN_num_bits(p[0]) + + @property + def key_size(self) -> int: + return self._key_size + + def public_numbers(self) -> dsa.DSAPublicNumbers: + p = self._backend._ffi.new("BIGNUM **") + q = self._backend._ffi.new("BIGNUM **") + g = self._backend._ffi.new("BIGNUM **") + pub_key = self._backend._ffi.new("BIGNUM **") + self._backend._lib.DSA_get0_pqg(self._dsa_cdata, p, q, g) + self._backend.openssl_assert(p[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(q[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(g[0] != self._backend._ffi.NULL) + self._backend._lib.DSA_get0_key( + self._dsa_cdata, pub_key, self._backend._ffi.NULL + ) + self._backend.openssl_assert(pub_key[0] != self._backend._ffi.NULL) + return dsa.DSAPublicNumbers( + parameter_numbers=dsa.DSAParameterNumbers( + p=self._backend._bn_to_int(p[0]), + q=self._backend._bn_to_int(q[0]), + g=self._backend._bn_to_int(g[0]), + ), + y=self._backend._bn_to_int(pub_key[0]), + ) + + def parameters(self) -> dsa.DSAParameters: + dsa_cdata = self._backend._lib.DSAparams_dup(self._dsa_cdata) + dsa_cdata = self._backend._ffi.gc( + dsa_cdata, self._backend._lib.DSA_free + ) + return _DSAParameters(self._backend, dsa_cdata) + + def public_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PublicFormat, + ) -> bytes: + return self._backend._public_key_bytes( + encoding, format, self, self._evp_pkey, None + ) + + def verify( + self, + signature: bytes, + data: bytes, + algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm], + ) -> None: + data, _ = _calculate_digest_and_algorithm(data, algorithm) + return _dsa_sig_verify(self._backend, self, signature, data) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ec.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ec.py new file mode 100644 index 00000000..9bc6dd38 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ec.py @@ -0,0 +1,315 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
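+#
+# Usage sketch (illustrative only): these classes back the public EC API.
+# Assuming a NIST P-256 key, ECDSA signing and verification look like:
+#
+#     from cryptography.hazmat.primitives import hashes
+#     from cryptography.hazmat.primitives.asymmetric import ec
+#
+#     private_key = ec.generate_private_key(ec.SECP256R1())
+#     signature = private_key.sign(b"message", ec.ECDSA(hashes.SHA256()))
+#     private_key.public_key().verify(
+#         signature, b"message", ec.ECDSA(hashes.SHA256())
+#     )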
+ +import typing + +from cryptography.exceptions import ( + InvalidSignature, + UnsupportedAlgorithm, + _Reasons, +) +from cryptography.hazmat.backends.openssl.utils import ( + _calculate_digest_and_algorithm, + _evp_pkey_derive, +) +from cryptography.hazmat.primitives import serialization +from cryptography.hazmat.primitives.asymmetric import ec + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +def _check_signature_algorithm( + signature_algorithm: ec.EllipticCurveSignatureAlgorithm, +) -> None: + if not isinstance(signature_algorithm, ec.ECDSA): + raise UnsupportedAlgorithm( + "Unsupported elliptic curve signature algorithm.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + +def _ec_key_curve_sn(backend: "Backend", ec_key) -> str: + group = backend._lib.EC_KEY_get0_group(ec_key) + backend.openssl_assert(group != backend._ffi.NULL) + + nid = backend._lib.EC_GROUP_get_curve_name(group) + # The following check is to find EC keys with unnamed curves and raise + # an error for now. + if nid == backend._lib.NID_undef: + raise ValueError( + "ECDSA keys with explicit parameters are unsupported at this time" + ) + + # This is like the above check, but it also catches the case where you + # explicitly encoded a curve with the same parameters as a named curve. + # Don't do that. + if ( + not backend._lib.CRYPTOGRAPHY_IS_LIBRESSL + and backend._lib.EC_GROUP_get_asn1_flag(group) == 0 + ): + raise ValueError( + "ECDSA keys with explicit parameters are unsupported at this time" + ) + + curve_name = backend._lib.OBJ_nid2sn(nid) + backend.openssl_assert(curve_name != backend._ffi.NULL) + + sn = backend._ffi.string(curve_name).decode("ascii") + return sn + + +def _mark_asn1_named_ec_curve(backend: "Backend", ec_cdata): + """ + Set the named curve flag on the EC_KEY. This causes OpenSSL to + serialize EC keys along with their curve OID which makes + deserialization easier. 
+ """ + + backend._lib.EC_KEY_set_asn1_flag( + ec_cdata, backend._lib.OPENSSL_EC_NAMED_CURVE + ) + + +def _check_key_infinity(backend: "Backend", ec_cdata) -> None: + point = backend._lib.EC_KEY_get0_public_key(ec_cdata) + backend.openssl_assert(point != backend._ffi.NULL) + group = backend._lib.EC_KEY_get0_group(ec_cdata) + backend.openssl_assert(group != backend._ffi.NULL) + if backend._lib.EC_POINT_is_at_infinity(group, point): + raise ValueError( + "Cannot load an EC public key where the point is at infinity" + ) + + +def _sn_to_elliptic_curve(backend: "Backend", sn: str) -> ec.EllipticCurve: + try: + return ec._CURVE_TYPES[sn]() + except KeyError: + raise UnsupportedAlgorithm( + "{} is not a supported elliptic curve".format(sn), + _Reasons.UNSUPPORTED_ELLIPTIC_CURVE, + ) + + +def _ecdsa_sig_sign( + backend: "Backend", private_key: "_EllipticCurvePrivateKey", data: bytes +) -> bytes: + max_size = backend._lib.ECDSA_size(private_key._ec_key) + backend.openssl_assert(max_size > 0) + + sigbuf = backend._ffi.new("unsigned char[]", max_size) + siglen_ptr = backend._ffi.new("unsigned int[]", 1) + res = backend._lib.ECDSA_sign( + 0, data, len(data), sigbuf, siglen_ptr, private_key._ec_key + ) + backend.openssl_assert(res == 1) + return backend._ffi.buffer(sigbuf)[: siglen_ptr[0]] + + +def _ecdsa_sig_verify( + backend: "Backend", + public_key: "_EllipticCurvePublicKey", + signature: bytes, + data: bytes, +) -> None: + res = backend._lib.ECDSA_verify( + 0, data, len(data), signature, len(signature), public_key._ec_key + ) + if res != 1: + backend._consume_errors() + raise InvalidSignature + + +class _EllipticCurvePrivateKey(ec.EllipticCurvePrivateKey): + def __init__(self, backend: "Backend", ec_key_cdata, evp_pkey): + self._backend = backend + self._ec_key = ec_key_cdata + self._evp_pkey = evp_pkey + + sn = _ec_key_curve_sn(backend, ec_key_cdata) + self._curve = _sn_to_elliptic_curve(backend, sn) + _mark_asn1_named_ec_curve(backend, ec_key_cdata) + _check_key_infinity(backend, ec_key_cdata) + + @property + def curve(self) -> ec.EllipticCurve: + return self._curve + + @property + def key_size(self) -> int: + return self.curve.key_size + + def exchange( + self, algorithm: ec.ECDH, peer_public_key: ec.EllipticCurvePublicKey + ) -> bytes: + if not ( + self._backend.elliptic_curve_exchange_algorithm_supported( + algorithm, self.curve + ) + ): + raise UnsupportedAlgorithm( + "This backend does not support the ECDH algorithm.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + + if peer_public_key.curve.name != self.curve.name: + raise ValueError( + "peer_public_key and self are not on the same curve" + ) + + return _evp_pkey_derive(self._backend, self._evp_pkey, peer_public_key) + + def public_key(self) -> ec.EllipticCurvePublicKey: + group = self._backend._lib.EC_KEY_get0_group(self._ec_key) + self._backend.openssl_assert(group != self._backend._ffi.NULL) + + curve_nid = self._backend._lib.EC_GROUP_get_curve_name(group) + public_ec_key = self._backend._ec_key_new_by_curve_nid(curve_nid) + + point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key) + self._backend.openssl_assert(point != self._backend._ffi.NULL) + + res = self._backend._lib.EC_KEY_set_public_key(public_ec_key, point) + self._backend.openssl_assert(res == 1) + + evp_pkey = self._backend._ec_cdata_to_evp_pkey(public_ec_key) + + return _EllipticCurvePublicKey(self._backend, public_ec_key, evp_pkey) + + def private_numbers(self) -> ec.EllipticCurvePrivateNumbers: + bn = self._backend._lib.EC_KEY_get0_private_key(self._ec_key) + 
private_value = self._backend._bn_to_int(bn) + return ec.EllipticCurvePrivateNumbers( + private_value=private_value, + public_numbers=self.public_key().public_numbers(), + ) + + def private_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PrivateFormat, + encryption_algorithm: serialization.KeySerializationEncryption, + ) -> bytes: + return self._backend._private_key_bytes( + encoding, + format, + encryption_algorithm, + self, + self._evp_pkey, + self._ec_key, + ) + + def sign( + self, + data: bytes, + signature_algorithm: ec.EllipticCurveSignatureAlgorithm, + ) -> bytes: + _check_signature_algorithm(signature_algorithm) + data, _ = _calculate_digest_and_algorithm( + data, + signature_algorithm.algorithm, + ) + return _ecdsa_sig_sign(self._backend, self, data) + + +class _EllipticCurvePublicKey(ec.EllipticCurvePublicKey): + def __init__(self, backend: "Backend", ec_key_cdata, evp_pkey): + self._backend = backend + self._ec_key = ec_key_cdata + self._evp_pkey = evp_pkey + + sn = _ec_key_curve_sn(backend, ec_key_cdata) + self._curve = _sn_to_elliptic_curve(backend, sn) + _mark_asn1_named_ec_curve(backend, ec_key_cdata) + _check_key_infinity(backend, ec_key_cdata) + + @property + def curve(self) -> ec.EllipticCurve: + return self._curve + + @property + def key_size(self) -> int: + return self.curve.key_size + + def public_numbers(self) -> ec.EllipticCurvePublicNumbers: + get_func, group = self._backend._ec_key_determine_group_get_func( + self._ec_key + ) + point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key) + self._backend.openssl_assert(point != self._backend._ffi.NULL) + + with self._backend._tmp_bn_ctx() as bn_ctx: + bn_x = self._backend._lib.BN_CTX_get(bn_ctx) + bn_y = self._backend._lib.BN_CTX_get(bn_ctx) + + res = get_func(group, point, bn_x, bn_y, bn_ctx) + self._backend.openssl_assert(res == 1) + + x = self._backend._bn_to_int(bn_x) + y = self._backend._bn_to_int(bn_y) + + return ec.EllipticCurvePublicNumbers(x=x, y=y, curve=self._curve) + + def _encode_point(self, format: serialization.PublicFormat) -> bytes: + if format is serialization.PublicFormat.CompressedPoint: + conversion = self._backend._lib.POINT_CONVERSION_COMPRESSED + else: + assert format is serialization.PublicFormat.UncompressedPoint + conversion = self._backend._lib.POINT_CONVERSION_UNCOMPRESSED + + group = self._backend._lib.EC_KEY_get0_group(self._ec_key) + self._backend.openssl_assert(group != self._backend._ffi.NULL) + point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key) + self._backend.openssl_assert(point != self._backend._ffi.NULL) + with self._backend._tmp_bn_ctx() as bn_ctx: + buflen = self._backend._lib.EC_POINT_point2oct( + group, point, conversion, self._backend._ffi.NULL, 0, bn_ctx + ) + self._backend.openssl_assert(buflen > 0) + buf = self._backend._ffi.new("char[]", buflen) + res = self._backend._lib.EC_POINT_point2oct( + group, point, conversion, buf, buflen, bn_ctx + ) + self._backend.openssl_assert(buflen == res) + + return self._backend._ffi.buffer(buf)[:] + + def public_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PublicFormat, + ) -> bytes: + if ( + encoding is serialization.Encoding.X962 + or format is serialization.PublicFormat.CompressedPoint + or format is serialization.PublicFormat.UncompressedPoint + ): + if encoding is not serialization.Encoding.X962 or format not in ( + serialization.PublicFormat.CompressedPoint, + serialization.PublicFormat.UncompressedPoint, + ): + raise ValueError( + "X962 encoding must be 
used with CompressedPoint or " + "UncompressedPoint format" + ) + + return self._encode_point(format) + else: + return self._backend._public_key_bytes( + encoding, format, self, self._evp_pkey, None + ) + + def verify( + self, + signature: bytes, + data: bytes, + signature_algorithm: ec.EllipticCurveSignatureAlgorithm, + ) -> None: + _check_signature_algorithm(signature_algorithm) + data, _ = _calculate_digest_and_algorithm( + data, + signature_algorithm.algorithm, + ) + _ecdsa_sig_verify(self._backend, self, signature, data) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ed25519.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ed25519.py new file mode 100644 index 00000000..5cfdffb7 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ed25519.py @@ -0,0 +1,155 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography import exceptions +from cryptography.hazmat.primitives import serialization +from cryptography.hazmat.primitives.asymmetric.ed25519 import ( + Ed25519PrivateKey, + Ed25519PublicKey, + _ED25519_KEY_SIZE, + _ED25519_SIG_SIZE, +) + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +class _Ed25519PublicKey(Ed25519PublicKey): + def __init__(self, backend: "Backend", evp_pkey): + self._backend = backend + self._evp_pkey = evp_pkey + + def public_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PublicFormat, + ) -> bytes: + if ( + encoding is serialization.Encoding.Raw + or format is serialization.PublicFormat.Raw + ): + if ( + encoding is not serialization.Encoding.Raw + or format is not serialization.PublicFormat.Raw + ): + raise ValueError( + "When using Raw both encoding and format must be Raw" + ) + + return self._raw_public_bytes() + + return self._backend._public_key_bytes( + encoding, format, self, self._evp_pkey, None + ) + + def _raw_public_bytes(self) -> bytes: + buf = self._backend._ffi.new("unsigned char []", _ED25519_KEY_SIZE) + buflen = self._backend._ffi.new("size_t *", _ED25519_KEY_SIZE) + res = self._backend._lib.EVP_PKEY_get_raw_public_key( + self._evp_pkey, buf, buflen + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _ED25519_KEY_SIZE) + return self._backend._ffi.buffer(buf, _ED25519_KEY_SIZE)[:] + + def verify(self, signature: bytes, data: bytes) -> None: + evp_md_ctx = self._backend._lib.EVP_MD_CTX_new() + self._backend.openssl_assert(evp_md_ctx != self._backend._ffi.NULL) + evp_md_ctx = self._backend._ffi.gc( + evp_md_ctx, self._backend._lib.EVP_MD_CTX_free + ) + res = self._backend._lib.EVP_DigestVerifyInit( + evp_md_ctx, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._evp_pkey, + ) + self._backend.openssl_assert(res == 1) + res = self._backend._lib.EVP_DigestVerify( + evp_md_ctx, signature, len(signature), data, len(data) + ) + if res != 1: + self._backend._consume_errors() + raise exceptions.InvalidSignature + + +class _Ed25519PrivateKey(Ed25519PrivateKey): + def __init__(self, backend: "Backend", evp_pkey): + self._backend = backend + self._evp_pkey = evp_pkey + + def public_key(self) -> Ed25519PublicKey: + buf = self._backend._ffi.new("unsigned char []", _ED25519_KEY_SIZE) + buflen = self._backend._ffi.new("size_t *", _ED25519_KEY_SIZE) + res = 
self._backend._lib.EVP_PKEY_get_raw_public_key( + self._evp_pkey, buf, buflen + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _ED25519_KEY_SIZE) + public_bytes = self._backend._ffi.buffer(buf)[:] + return self._backend.ed25519_load_public_bytes(public_bytes) + + def sign(self, data: bytes) -> bytes: + evp_md_ctx = self._backend._lib.EVP_MD_CTX_new() + self._backend.openssl_assert(evp_md_ctx != self._backend._ffi.NULL) + evp_md_ctx = self._backend._ffi.gc( + evp_md_ctx, self._backend._lib.EVP_MD_CTX_free + ) + res = self._backend._lib.EVP_DigestSignInit( + evp_md_ctx, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._evp_pkey, + ) + self._backend.openssl_assert(res == 1) + buf = self._backend._ffi.new("unsigned char[]", _ED25519_SIG_SIZE) + buflen = self._backend._ffi.new("size_t *", len(buf)) + res = self._backend._lib.EVP_DigestSign( + evp_md_ctx, buf, buflen, data, len(data) + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _ED25519_SIG_SIZE) + return self._backend._ffi.buffer(buf, buflen[0])[:] + + def private_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PrivateFormat, + encryption_algorithm: serialization.KeySerializationEncryption, + ) -> bytes: + if ( + encoding is serialization.Encoding.Raw + or format is serialization.PublicFormat.Raw + ): + if ( + format is not serialization.PrivateFormat.Raw + or encoding is not serialization.Encoding.Raw + or not isinstance( + encryption_algorithm, serialization.NoEncryption + ) + ): + raise ValueError( + "When using Raw both encoding and format must be Raw " + "and encryption_algorithm must be NoEncryption()" + ) + + return self._raw_private_bytes() + + return self._backend._private_key_bytes( + encoding, format, encryption_algorithm, self, self._evp_pkey, None + ) + + def _raw_private_bytes(self) -> bytes: + buf = self._backend._ffi.new("unsigned char []", _ED25519_KEY_SIZE) + buflen = self._backend._ffi.new("size_t *", _ED25519_KEY_SIZE) + res = self._backend._lib.EVP_PKEY_get_raw_private_key( + self._evp_pkey, buf, buflen + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _ED25519_KEY_SIZE) + return self._backend._ffi.buffer(buf, _ED25519_KEY_SIZE)[:] diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ed448.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ed448.py new file mode 100644 index 00000000..dad93c6c --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/ed448.py @@ -0,0 +1,156 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
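+#
+# Usage sketch (illustrative only): the public Ed448 API wraps these classes;
+# signatures are always 114 bytes (_ED448_SIG_SIZE below):
+#
+#     from cryptography.hazmat.primitives.asymmetric.ed448 import (
+#         Ed448PrivateKey,
+#     )
+#
+#     private_key = Ed448PrivateKey.generate()
+#     signature = private_key.sign(b"message")
+#     private_key.public_key().verify(signature, b"message")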
+ +import typing + +from cryptography import exceptions +from cryptography.hazmat.primitives import serialization +from cryptography.hazmat.primitives.asymmetric.ed448 import ( + Ed448PrivateKey, + Ed448PublicKey, +) + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + +_ED448_KEY_SIZE = 57 +_ED448_SIG_SIZE = 114 + + +class _Ed448PublicKey(Ed448PublicKey): + def __init__(self, backend: "Backend", evp_pkey): + self._backend = backend + self._evp_pkey = evp_pkey + + def public_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PublicFormat, + ) -> bytes: + if ( + encoding is serialization.Encoding.Raw + or format is serialization.PublicFormat.Raw + ): + if ( + encoding is not serialization.Encoding.Raw + or format is not serialization.PublicFormat.Raw + ): + raise ValueError( + "When using Raw both encoding and format must be Raw" + ) + + return self._raw_public_bytes() + + return self._backend._public_key_bytes( + encoding, format, self, self._evp_pkey, None + ) + + def _raw_public_bytes(self) -> bytes: + buf = self._backend._ffi.new("unsigned char []", _ED448_KEY_SIZE) + buflen = self._backend._ffi.new("size_t *", _ED448_KEY_SIZE) + res = self._backend._lib.EVP_PKEY_get_raw_public_key( + self._evp_pkey, buf, buflen + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _ED448_KEY_SIZE) + return self._backend._ffi.buffer(buf, _ED448_KEY_SIZE)[:] + + def verify(self, signature: bytes, data: bytes) -> None: + evp_md_ctx = self._backend._lib.EVP_MD_CTX_new() + self._backend.openssl_assert(evp_md_ctx != self._backend._ffi.NULL) + evp_md_ctx = self._backend._ffi.gc( + evp_md_ctx, self._backend._lib.EVP_MD_CTX_free + ) + res = self._backend._lib.EVP_DigestVerifyInit( + evp_md_ctx, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._evp_pkey, + ) + self._backend.openssl_assert(res == 1) + res = self._backend._lib.EVP_DigestVerify( + evp_md_ctx, signature, len(signature), data, len(data) + ) + if res != 1: + self._backend._consume_errors() + raise exceptions.InvalidSignature + + +class _Ed448PrivateKey(Ed448PrivateKey): + def __init__(self, backend: "Backend", evp_pkey): + self._backend = backend + self._evp_pkey = evp_pkey + + def public_key(self) -> Ed448PublicKey: + buf = self._backend._ffi.new("unsigned char []", _ED448_KEY_SIZE) + buflen = self._backend._ffi.new("size_t *", _ED448_KEY_SIZE) + res = self._backend._lib.EVP_PKEY_get_raw_public_key( + self._evp_pkey, buf, buflen + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _ED448_KEY_SIZE) + public_bytes = self._backend._ffi.buffer(buf)[:] + return self._backend.ed448_load_public_bytes(public_bytes) + + def sign(self, data: bytes) -> bytes: + evp_md_ctx = self._backend._lib.EVP_MD_CTX_new() + self._backend.openssl_assert(evp_md_ctx != self._backend._ffi.NULL) + evp_md_ctx = self._backend._ffi.gc( + evp_md_ctx, self._backend._lib.EVP_MD_CTX_free + ) + res = self._backend._lib.EVP_DigestSignInit( + evp_md_ctx, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._evp_pkey, + ) + self._backend.openssl_assert(res == 1) + buf = self._backend._ffi.new("unsigned char[]", _ED448_SIG_SIZE) + buflen = self._backend._ffi.new("size_t *", len(buf)) + res = self._backend._lib.EVP_DigestSign( + evp_md_ctx, buf, buflen, data, len(data) + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _ED448_SIG_SIZE) + return 
self._backend._ffi.buffer(buf, buflen[0])[:] + + def private_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PrivateFormat, + encryption_algorithm: serialization.KeySerializationEncryption, + ) -> bytes: + if ( + encoding is serialization.Encoding.Raw + or format is serialization.PublicFormat.Raw + ): + if ( + format is not serialization.PrivateFormat.Raw + or encoding is not serialization.Encoding.Raw + or not isinstance( + encryption_algorithm, serialization.NoEncryption + ) + ): + raise ValueError( + "When using Raw both encoding and format must be Raw " + "and encryption_algorithm must be NoEncryption()" + ) + + return self._raw_private_bytes() + + return self._backend._private_key_bytes( + encoding, format, encryption_algorithm, self, self._evp_pkey, None + ) + + def _raw_private_bytes(self) -> bytes: + buf = self._backend._ffi.new("unsigned char []", _ED448_KEY_SIZE) + buflen = self._backend._ffi.new("size_t *", _ED448_KEY_SIZE) + res = self._backend._lib.EVP_PKEY_get_raw_private_key( + self._evp_pkey, buf, buflen + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _ED448_KEY_SIZE) + return self._backend._ffi.buffer(buf, _ED448_KEY_SIZE)[:] diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/hashes.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/hashes.py new file mode 100644 index 00000000..278b3815 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/hashes.py @@ -0,0 +1,87 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.primitives import hashes + + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +class _HashContext(hashes.HashContext): + def __init__( + self, backend: "Backend", algorithm: hashes.HashAlgorithm, ctx=None + ) -> None: + self._algorithm = algorithm + + self._backend = backend + + if ctx is None: + ctx = self._backend._lib.EVP_MD_CTX_new() + ctx = self._backend._ffi.gc( + ctx, self._backend._lib.EVP_MD_CTX_free + ) + evp_md = self._backend._evp_md_from_algorithm(algorithm) + if evp_md == self._backend._ffi.NULL: + raise UnsupportedAlgorithm( + "{} is not a supported hash on this backend.".format( + algorithm.name + ), + _Reasons.UNSUPPORTED_HASH, + ) + res = self._backend._lib.EVP_DigestInit_ex( + ctx, evp_md, self._backend._ffi.NULL + ) + self._backend.openssl_assert(res != 0) + + self._ctx = ctx + + @property + def algorithm(self) -> hashes.HashAlgorithm: + return self._algorithm + + def copy(self) -> "_HashContext": + copied_ctx = self._backend._lib.EVP_MD_CTX_new() + copied_ctx = self._backend._ffi.gc( + copied_ctx, self._backend._lib.EVP_MD_CTX_free + ) + res = self._backend._lib.EVP_MD_CTX_copy_ex(copied_ctx, self._ctx) + self._backend.openssl_assert(res != 0) + return _HashContext(self._backend, self.algorithm, ctx=copied_ctx) + + def update(self, data: bytes) -> None: + data_ptr = self._backend._ffi.from_buffer(data) + res = self._backend._lib.EVP_DigestUpdate( + self._ctx, data_ptr, len(data) + ) + self._backend.openssl_assert(res != 0) + + def finalize(self) -> bytes: + if isinstance(self.algorithm, hashes.ExtendableOutputFunction): + # extendable output functions use a different finalize + return 
self._finalize_xof() + else: + buf = self._backend._ffi.new( + "unsigned char[]", self._backend._lib.EVP_MAX_MD_SIZE + ) + outlen = self._backend._ffi.new("unsigned int *") + res = self._backend._lib.EVP_DigestFinal_ex(self._ctx, buf, outlen) + self._backend.openssl_assert(res != 0) + self._backend.openssl_assert( + outlen[0] == self.algorithm.digest_size + ) + return self._backend._ffi.buffer(buf)[: outlen[0]] + + def _finalize_xof(self) -> bytes: + buf = self._backend._ffi.new( + "unsigned char[]", self.algorithm.digest_size + ) + res = self._backend._lib.EVP_DigestFinalXOF( + self._ctx, buf, self.algorithm.digest_size + ) + self._backend.openssl_assert(res != 0) + return self._backend._ffi.buffer(buf)[: self.algorithm.digest_size] diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/hmac.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/hmac.py new file mode 100644 index 00000000..5fd54074 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/hmac.py @@ -0,0 +1,85 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.exceptions import ( + InvalidSignature, + UnsupportedAlgorithm, + _Reasons, +) +from cryptography.hazmat.primitives import constant_time, hashes + + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +class _HMACContext(hashes.HashContext): + def __init__( + self, + backend: "Backend", + key: bytes, + algorithm: hashes.HashAlgorithm, + ctx=None, + ): + self._algorithm = algorithm + self._backend = backend + + if ctx is None: + ctx = self._backend._lib.HMAC_CTX_new() + self._backend.openssl_assert(ctx != self._backend._ffi.NULL) + ctx = self._backend._ffi.gc(ctx, self._backend._lib.HMAC_CTX_free) + evp_md = self._backend._evp_md_from_algorithm(algorithm) + if evp_md == self._backend._ffi.NULL: + raise UnsupportedAlgorithm( + "{} is not a supported hash on this backend".format( + algorithm.name + ), + _Reasons.UNSUPPORTED_HASH, + ) + key_ptr = self._backend._ffi.from_buffer(key) + res = self._backend._lib.HMAC_Init_ex( + ctx, key_ptr, len(key), evp_md, self._backend._ffi.NULL + ) + self._backend.openssl_assert(res != 0) + + self._ctx = ctx + self._key = key + + @property + def algorithm(self) -> hashes.HashAlgorithm: + return self._algorithm + + def copy(self) -> "_HMACContext": + copied_ctx = self._backend._lib.HMAC_CTX_new() + self._backend.openssl_assert(copied_ctx != self._backend._ffi.NULL) + copied_ctx = self._backend._ffi.gc( + copied_ctx, self._backend._lib.HMAC_CTX_free + ) + res = self._backend._lib.HMAC_CTX_copy(copied_ctx, self._ctx) + self._backend.openssl_assert(res != 0) + return _HMACContext( + self._backend, self._key, self.algorithm, ctx=copied_ctx + ) + + def update(self, data: bytes) -> None: + data_ptr = self._backend._ffi.from_buffer(data) + res = self._backend._lib.HMAC_Update(self._ctx, data_ptr, len(data)) + self._backend.openssl_assert(res != 0) + + def finalize(self) -> bytes: + buf = self._backend._ffi.new( + "unsigned char[]", self._backend._lib.EVP_MAX_MD_SIZE + ) + outlen = self._backend._ffi.new("unsigned int *") + res = self._backend._lib.HMAC_Final(self._ctx, buf, outlen) + self._backend.openssl_assert(res != 0) + self._backend.openssl_assert(outlen[0] == self.algorithm.digest_size) + return self._backend._ffi.buffer(buf)[: outlen[0]] + + def 
verify(self, signature: bytes) -> None: + digest = self.finalize() + if not constant_time.bytes_eq(digest, signature): + raise InvalidSignature("Signature did not match digest.") diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/poly1305.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/poly1305.py new file mode 100644 index 00000000..dd6d376f --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/poly1305.py @@ -0,0 +1,68 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.exceptions import InvalidSignature +from cryptography.hazmat.primitives import constant_time + + +_POLY1305_TAG_SIZE = 16 +_POLY1305_KEY_SIZE = 32 + + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +class _Poly1305Context: + def __init__(self, backend: "Backend", key: bytes) -> None: + self._backend = backend + + key_ptr = self._backend._ffi.from_buffer(key) + # This function copies the key into OpenSSL-owned memory so we don't + # need to retain it ourselves + evp_pkey = self._backend._lib.EVP_PKEY_new_raw_private_key( + self._backend._lib.NID_poly1305, + self._backend._ffi.NULL, + key_ptr, + len(key), + ) + self._backend.openssl_assert(evp_pkey != self._backend._ffi.NULL) + self._evp_pkey = self._backend._ffi.gc( + evp_pkey, self._backend._lib.EVP_PKEY_free + ) + ctx = self._backend._lib.EVP_MD_CTX_new() + self._backend.openssl_assert(ctx != self._backend._ffi.NULL) + self._ctx = self._backend._ffi.gc( + ctx, self._backend._lib.EVP_MD_CTX_free + ) + res = self._backend._lib.EVP_DigestSignInit( + self._ctx, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + self._evp_pkey, + ) + self._backend.openssl_assert(res == 1) + + def update(self, data: bytes) -> None: + data_ptr = self._backend._ffi.from_buffer(data) + res = self._backend._lib.EVP_DigestSignUpdate( + self._ctx, data_ptr, len(data) + ) + self._backend.openssl_assert(res != 0) + + def finalize(self) -> bytes: + buf = self._backend._ffi.new("unsigned char[]", _POLY1305_TAG_SIZE) + outlen = self._backend._ffi.new("size_t *", _POLY1305_TAG_SIZE) + res = self._backend._lib.EVP_DigestSignFinal(self._ctx, buf, outlen) + self._backend.openssl_assert(res != 0) + self._backend.openssl_assert(outlen[0] == _POLY1305_TAG_SIZE) + return self._backend._ffi.buffer(buf)[: outlen[0]] + + def verify(self, tag: bytes) -> None: + mac = self.finalize() + if not constant_time.bytes_eq(mac, tag): + raise InvalidSignature("Value did not match computed tag.") diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/rsa.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/rsa.py new file mode 100644 index 00000000..31cff162 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/rsa.py @@ -0,0 +1,586 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
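+#
+# Usage sketch (illustrative only): the classes in this module back the
+# public RSA API; for example, OAEP encryption round-trips like this:
+#
+#     from cryptography.hazmat.primitives import hashes
+#     from cryptography.hazmat.primitives.asymmetric import padding, rsa
+#
+#     private_key = rsa.generate_private_key(
+#         public_exponent=65537, key_size=2048
+#     )
+#     oaep = padding.OAEP(
+#         mgf=padding.MGF1(algorithm=hashes.SHA256()),
+#         algorithm=hashes.SHA256(),
+#         label=None,
+#     )
+#     ct = private_key.public_key().encrypt(b"secret", oaep)
+#     assert private_key.decrypt(ct, oaep) == b"secret"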
+ +import threading +import typing + +from cryptography.exceptions import ( + InvalidSignature, + UnsupportedAlgorithm, + _Reasons, +) +from cryptography.hazmat.backends.openssl.utils import ( + _calculate_digest_and_algorithm, +) +from cryptography.hazmat.primitives import hashes, serialization +from cryptography.hazmat.primitives.asymmetric import ( + utils as asym_utils, +) +from cryptography.hazmat.primitives.asymmetric.padding import ( + AsymmetricPadding, + MGF1, + OAEP, + PKCS1v15, + PSS, + _Auto, + _DigestLength, + _MaxLength, + calculate_max_pss_salt_length, +) +from cryptography.hazmat.primitives.asymmetric.rsa import ( + RSAPrivateKey, + RSAPrivateNumbers, + RSAPublicKey, + RSAPublicNumbers, +) + + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +def _get_rsa_pss_salt_length( + backend: "Backend", + pss: PSS, + key: typing.Union[RSAPrivateKey, RSAPublicKey], + hash_algorithm: hashes.HashAlgorithm, +) -> int: + salt = pss._salt_length + + if isinstance(salt, _MaxLength): + return calculate_max_pss_salt_length(key, hash_algorithm) + elif isinstance(salt, _DigestLength): + return hash_algorithm.digest_size + elif isinstance(salt, _Auto): + if isinstance(key, RSAPrivateKey): + raise ValueError( + "PSS salt length can only be set to AUTO when verifying" + ) + return backend._lib.RSA_PSS_SALTLEN_AUTO + else: + return salt + + +def _enc_dec_rsa( + backend: "Backend", + key: typing.Union["_RSAPrivateKey", "_RSAPublicKey"], + data: bytes, + padding: AsymmetricPadding, +) -> bytes: + if not isinstance(padding, AsymmetricPadding): + raise TypeError("Padding must be an instance of AsymmetricPadding.") + + if isinstance(padding, PKCS1v15): + padding_enum = backend._lib.RSA_PKCS1_PADDING + elif isinstance(padding, OAEP): + padding_enum = backend._lib.RSA_PKCS1_OAEP_PADDING + + if not isinstance(padding._mgf, MGF1): + raise UnsupportedAlgorithm( + "Only MGF1 is supported by this backend.", + _Reasons.UNSUPPORTED_MGF, + ) + + if not backend.rsa_padding_supported(padding): + raise UnsupportedAlgorithm( + "This combination of padding and hash algorithm is not " + "supported by this backend.", + _Reasons.UNSUPPORTED_PADDING, + ) + + else: + raise UnsupportedAlgorithm( + "{} is not supported by this backend.".format(padding.name), + _Reasons.UNSUPPORTED_PADDING, + ) + + return _enc_dec_rsa_pkey_ctx(backend, key, data, padding_enum, padding) + + +def _enc_dec_rsa_pkey_ctx( + backend: "Backend", + key: typing.Union["_RSAPrivateKey", "_RSAPublicKey"], + data: bytes, + padding_enum: int, + padding: AsymmetricPadding, +) -> bytes: + init: typing.Callable[[typing.Any], int] + crypt: typing.Callable[[typing.Any, typing.Any, int, bytes, int], int] + if isinstance(key, _RSAPublicKey): + init = backend._lib.EVP_PKEY_encrypt_init + crypt = backend._lib.EVP_PKEY_encrypt + else: + init = backend._lib.EVP_PKEY_decrypt_init + crypt = backend._lib.EVP_PKEY_decrypt + + pkey_ctx = backend._lib.EVP_PKEY_CTX_new(key._evp_pkey, backend._ffi.NULL) + backend.openssl_assert(pkey_ctx != backend._ffi.NULL) + pkey_ctx = backend._ffi.gc(pkey_ctx, backend._lib.EVP_PKEY_CTX_free) + res = init(pkey_ctx) + backend.openssl_assert(res == 1) + res = backend._lib.EVP_PKEY_CTX_set_rsa_padding(pkey_ctx, padding_enum) + backend.openssl_assert(res > 0) + buf_size = backend._lib.EVP_PKEY_size(key._evp_pkey) + backend.openssl_assert(buf_size > 0) + if isinstance(padding, OAEP): + mgf1_md = backend._evp_md_non_null_from_algorithm( + padding._mgf._algorithm + ) + res = 
backend._lib.EVP_PKEY_CTX_set_rsa_mgf1_md(pkey_ctx, mgf1_md) + backend.openssl_assert(res > 0) + oaep_md = backend._evp_md_non_null_from_algorithm(padding._algorithm) + res = backend._lib.EVP_PKEY_CTX_set_rsa_oaep_md(pkey_ctx, oaep_md) + backend.openssl_assert(res > 0) + + if ( + isinstance(padding, OAEP) + and padding._label is not None + and len(padding._label) > 0 + ): + # set0_rsa_oaep_label takes ownership of the char * so we need to + # copy it into some new memory + labelptr = backend._lib.OPENSSL_malloc(len(padding._label)) + backend.openssl_assert(labelptr != backend._ffi.NULL) + backend._ffi.memmove(labelptr, padding._label, len(padding._label)) + res = backend._lib.EVP_PKEY_CTX_set0_rsa_oaep_label( + pkey_ctx, labelptr, len(padding._label) + ) + backend.openssl_assert(res == 1) + + outlen = backend._ffi.new("size_t *", buf_size) + buf = backend._ffi.new("unsigned char[]", buf_size) + # Everything from this line onwards is written with the goal of being as + # constant-time as is practical given the constraints of Python and our + # API. See Bleichenbacher's '98 attack on RSA, and its many many variants. + # As such, you should not attempt to change this (particularly to "clean it + # up") without understanding why it was written this way (see + # Chesterton's Fence), and without measuring to verify you have not + # introduced observable time differences. + res = crypt(pkey_ctx, buf, outlen, data, len(data)) + resbuf = backend._ffi.buffer(buf)[: outlen[0]] + backend._lib.ERR_clear_error() + if res <= 0: + raise ValueError("Encryption/decryption failed.") + return resbuf + + +def _rsa_sig_determine_padding( + backend: "Backend", + key: typing.Union["_RSAPrivateKey", "_RSAPublicKey"], + padding: AsymmetricPadding, + algorithm: typing.Optional[hashes.HashAlgorithm], +) -> int: + if not isinstance(padding, AsymmetricPadding): + raise TypeError("Expected provider of AsymmetricPadding.") + + pkey_size = backend._lib.EVP_PKEY_size(key._evp_pkey) + backend.openssl_assert(pkey_size > 0) + + if isinstance(padding, PKCS1v15): + # Hash algorithm is ignored for PKCS1v15-padding, may be None. + padding_enum = backend._lib.RSA_PKCS1_PADDING + elif isinstance(padding, PSS): + if not isinstance(padding._mgf, MGF1): + raise UnsupportedAlgorithm( + "Only MGF1 is supported by this backend.", + _Reasons.UNSUPPORTED_MGF, + ) + + # PSS padding requires a hash algorithm + if not isinstance(algorithm, hashes.HashAlgorithm): + raise TypeError("Expected instance of hashes.HashAlgorithm.") + + # Size of key in bytes - 2 is the maximum + # PSS signature length (salt length is checked later) + if pkey_size - algorithm.digest_size - 2 < 0: + raise ValueError( + "Digest too large for key size. Use a larger " + "key or different digest." + ) + + padding_enum = backend._lib.RSA_PKCS1_PSS_PADDING + else: + raise UnsupportedAlgorithm( + "{} is not supported by this backend.".format(padding.name), + _Reasons.UNSUPPORTED_PADDING, + ) + + return padding_enum + + +# Hash algorithm can be absent (None) to initialize the context without setting +# any message digest algorithm. This is currently only valid for the PKCS1v15 +# padding type, where it means that the signature data is encoded/decoded +# as provided, without being wrapped in a DigestInfo structure. 
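+#
+# A hedged example of where the algorithm=None path is exercised (assuming
+# the public recover_data_from_signature API, which is implemented via
+# _rsa_sig_recover further down in this module). With algorithm=None the
+# recovered bytes are returned as-is, without DigestInfo unwrapping:
+#
+#     from cryptography.hazmat.primitives.asymmetric import padding
+#
+#     data = public_key.recover_data_from_signature(
+#         signature, padding.PKCS1v15(), None
+#     )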
+def _rsa_sig_setup( + backend: "Backend", + padding: AsymmetricPadding, + algorithm: typing.Optional[hashes.HashAlgorithm], + key: typing.Union["_RSAPublicKey", "_RSAPrivateKey"], + init_func: typing.Callable[[typing.Any], int], +): + padding_enum = _rsa_sig_determine_padding(backend, key, padding, algorithm) + pkey_ctx = backend._lib.EVP_PKEY_CTX_new(key._evp_pkey, backend._ffi.NULL) + backend.openssl_assert(pkey_ctx != backend._ffi.NULL) + pkey_ctx = backend._ffi.gc(pkey_ctx, backend._lib.EVP_PKEY_CTX_free) + res = init_func(pkey_ctx) + if res != 1: + errors = backend._consume_errors() + raise ValueError("Unable to sign/verify with this key", errors) + + if algorithm is not None: + evp_md = backend._evp_md_non_null_from_algorithm(algorithm) + res = backend._lib.EVP_PKEY_CTX_set_signature_md(pkey_ctx, evp_md) + if res <= 0: + backend._consume_errors() + raise UnsupportedAlgorithm( + "{} is not supported by this backend for RSA signing.".format( + algorithm.name + ), + _Reasons.UNSUPPORTED_HASH, + ) + res = backend._lib.EVP_PKEY_CTX_set_rsa_padding(pkey_ctx, padding_enum) + if res <= 0: + backend._consume_errors() + raise UnsupportedAlgorithm( + "{} is not supported for the RSA signature operation.".format( + padding.name + ), + _Reasons.UNSUPPORTED_PADDING, + ) + if isinstance(padding, PSS): + assert isinstance(algorithm, hashes.HashAlgorithm) + res = backend._lib.EVP_PKEY_CTX_set_rsa_pss_saltlen( + pkey_ctx, + _get_rsa_pss_salt_length(backend, padding, key, algorithm), + ) + backend.openssl_assert(res > 0) + + mgf1_md = backend._evp_md_non_null_from_algorithm( + padding._mgf._algorithm + ) + res = backend._lib.EVP_PKEY_CTX_set_rsa_mgf1_md(pkey_ctx, mgf1_md) + backend.openssl_assert(res > 0) + + return pkey_ctx + + +def _rsa_sig_sign( + backend: "Backend", + padding: AsymmetricPadding, + algorithm: hashes.HashAlgorithm, + private_key: "_RSAPrivateKey", + data: bytes, +) -> bytes: + pkey_ctx = _rsa_sig_setup( + backend, + padding, + algorithm, + private_key, + backend._lib.EVP_PKEY_sign_init, + ) + buflen = backend._ffi.new("size_t *") + res = backend._lib.EVP_PKEY_sign( + pkey_ctx, backend._ffi.NULL, buflen, data, len(data) + ) + backend.openssl_assert(res == 1) + buf = backend._ffi.new("unsigned char[]", buflen[0]) + res = backend._lib.EVP_PKEY_sign(pkey_ctx, buf, buflen, data, len(data)) + if res != 1: + errors = backend._consume_errors_with_text() + raise ValueError( + "Digest or salt length too long for key size. Use a larger key " + "or shorter salt length if you are specifying a PSS salt", + errors, + ) + + return backend._ffi.buffer(buf)[:] + + +def _rsa_sig_verify( + backend: "Backend", + padding: AsymmetricPadding, + algorithm: hashes.HashAlgorithm, + public_key: "_RSAPublicKey", + signature: bytes, + data: bytes, +) -> None: + pkey_ctx = _rsa_sig_setup( + backend, + padding, + algorithm, + public_key, + backend._lib.EVP_PKEY_verify_init, + ) + res = backend._lib.EVP_PKEY_verify( + pkey_ctx, signature, len(signature), data, len(data) + ) + # The previous call can return negative numbers in the event of an + # error. This is not a signature failure but we need to fail if it + # occurs. 
+ backend.openssl_assert(res >= 0) + if res == 0: + backend._consume_errors() + raise InvalidSignature + + +def _rsa_sig_recover( + backend: "Backend", + padding: AsymmetricPadding, + algorithm: typing.Optional[hashes.HashAlgorithm], + public_key: "_RSAPublicKey", + signature: bytes, +) -> bytes: + pkey_ctx = _rsa_sig_setup( + backend, + padding, + algorithm, + public_key, + backend._lib.EVP_PKEY_verify_recover_init, + ) + + # Attempt to keep the rest of the code in this function as constant/time + # as possible. See the comment in _enc_dec_rsa_pkey_ctx. Note that the + # buflen parameter is used even though its value may be undefined in the + # error case. Due to the tolerant nature of Python slicing this does not + # trigger any exceptions. + maxlen = backend._lib.EVP_PKEY_size(public_key._evp_pkey) + backend.openssl_assert(maxlen > 0) + buf = backend._ffi.new("unsigned char[]", maxlen) + buflen = backend._ffi.new("size_t *", maxlen) + res = backend._lib.EVP_PKEY_verify_recover( + pkey_ctx, buf, buflen, signature, len(signature) + ) + resbuf = backend._ffi.buffer(buf)[: buflen[0]] + backend._lib.ERR_clear_error() + # Assume that all parameter errors are handled during the setup phase and + # any error here is due to invalid signature. + if res != 1: + raise InvalidSignature + return resbuf + + +class _RSAPrivateKey(RSAPrivateKey): + _evp_pkey: object + _rsa_cdata: object + _key_size: int + + def __init__( + self, backend: "Backend", rsa_cdata, evp_pkey, _skip_check_key: bool + ): + res: int + # RSA_check_key is slower in OpenSSL 3.0.0 due to improved + # primality checking. In normal use this is unlikely to be a problem + # since users don't load new keys constantly, but for TESTING we've + # added an init arg that allows skipping the checks. You should not + # use this in production code unless you understand the consequences. + if not _skip_check_key: + res = backend._lib.RSA_check_key(rsa_cdata) + if res != 1: + errors = backend._consume_errors_with_text() + raise ValueError("Invalid private key", errors) + # 2 is prime and passes an RSA key check, so we also check + # if p and q are odd just to be safe. + p = backend._ffi.new("BIGNUM **") + q = backend._ffi.new("BIGNUM **") + backend._lib.RSA_get0_factors(rsa_cdata, p, q) + backend.openssl_assert(p[0] != backend._ffi.NULL) + backend.openssl_assert(q[0] != backend._ffi.NULL) + p_odd = backend._lib.BN_is_odd(p[0]) + q_odd = backend._lib.BN_is_odd(q[0]) + if p_odd != 1 or q_odd != 1: + errors = backend._consume_errors_with_text() + raise ValueError("Invalid private key", errors) + + self._backend = backend + self._rsa_cdata = rsa_cdata + self._evp_pkey = evp_pkey + # Used for lazy blinding + self._blinded = False + self._blinding_lock = threading.Lock() + + n = self._backend._ffi.new("BIGNUM **") + self._backend._lib.RSA_get0_key( + self._rsa_cdata, + n, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + ) + self._backend.openssl_assert(n[0] != self._backend._ffi.NULL) + self._key_size = self._backend._lib.BN_num_bits(n[0]) + + def _enable_blinding(self) -> None: + # If you call blind on an already blinded RSA key OpenSSL will turn + # it off and back on, which is a performance hit we want to avoid. + if not self._blinded: + with self._blinding_lock: + self._non_threadsafe_enable_blinding() + + def _non_threadsafe_enable_blinding(self) -> None: + # This is only a separate function to allow for testing to cover both + # branches. It should never be invoked except through _enable_blinding. 
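+        # Together with the cheap unlocked check in _enable_blinding and the
+        # re-check under the lock below, this implements double-checked
+        # locking.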
+ # Check if it's not True again in case another thread raced past the + # first non-locked check. + if not self._blinded: + res = self._backend._lib.RSA_blinding_on( + self._rsa_cdata, self._backend._ffi.NULL + ) + self._backend.openssl_assert(res == 1) + self._blinded = True + + @property + def key_size(self) -> int: + return self._key_size + + def decrypt(self, ciphertext: bytes, padding: AsymmetricPadding) -> bytes: + self._enable_blinding() + key_size_bytes = (self.key_size + 7) // 8 + if key_size_bytes != len(ciphertext): + raise ValueError("Ciphertext length must be equal to key size.") + + return _enc_dec_rsa(self._backend, self, ciphertext, padding) + + def public_key(self) -> RSAPublicKey: + ctx = self._backend._lib.RSAPublicKey_dup(self._rsa_cdata) + self._backend.openssl_assert(ctx != self._backend._ffi.NULL) + ctx = self._backend._ffi.gc(ctx, self._backend._lib.RSA_free) + evp_pkey = self._backend._rsa_cdata_to_evp_pkey(ctx) + return _RSAPublicKey(self._backend, ctx, evp_pkey) + + def private_numbers(self) -> RSAPrivateNumbers: + n = self._backend._ffi.new("BIGNUM **") + e = self._backend._ffi.new("BIGNUM **") + d = self._backend._ffi.new("BIGNUM **") + p = self._backend._ffi.new("BIGNUM **") + q = self._backend._ffi.new("BIGNUM **") + dmp1 = self._backend._ffi.new("BIGNUM **") + dmq1 = self._backend._ffi.new("BIGNUM **") + iqmp = self._backend._ffi.new("BIGNUM **") + self._backend._lib.RSA_get0_key(self._rsa_cdata, n, e, d) + self._backend.openssl_assert(n[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(e[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(d[0] != self._backend._ffi.NULL) + self._backend._lib.RSA_get0_factors(self._rsa_cdata, p, q) + self._backend.openssl_assert(p[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(q[0] != self._backend._ffi.NULL) + self._backend._lib.RSA_get0_crt_params( + self._rsa_cdata, dmp1, dmq1, iqmp + ) + self._backend.openssl_assert(dmp1[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(dmq1[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(iqmp[0] != self._backend._ffi.NULL) + return RSAPrivateNumbers( + p=self._backend._bn_to_int(p[0]), + q=self._backend._bn_to_int(q[0]), + d=self._backend._bn_to_int(d[0]), + dmp1=self._backend._bn_to_int(dmp1[0]), + dmq1=self._backend._bn_to_int(dmq1[0]), + iqmp=self._backend._bn_to_int(iqmp[0]), + public_numbers=RSAPublicNumbers( + e=self._backend._bn_to_int(e[0]), + n=self._backend._bn_to_int(n[0]), + ), + ) + + def private_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PrivateFormat, + encryption_algorithm: serialization.KeySerializationEncryption, + ) -> bytes: + return self._backend._private_key_bytes( + encoding, + format, + encryption_algorithm, + self, + self._evp_pkey, + self._rsa_cdata, + ) + + def sign( + self, + data: bytes, + padding: AsymmetricPadding, + algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm], + ) -> bytes: + self._enable_blinding() + data, algorithm = _calculate_digest_and_algorithm(data, algorithm) + return _rsa_sig_sign(self._backend, padding, algorithm, self, data) + + +class _RSAPublicKey(RSAPublicKey): + _evp_pkey: object + _rsa_cdata: object + _key_size: int + + def __init__(self, backend: "Backend", rsa_cdata, evp_pkey): + self._backend = backend + self._rsa_cdata = rsa_cdata + self._evp_pkey = evp_pkey + + n = self._backend._ffi.new("BIGNUM **") + self._backend._lib.RSA_get0_key( + self._rsa_cdata, + n, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + ) + 
self._backend.openssl_assert(n[0] != self._backend._ffi.NULL) + self._key_size = self._backend._lib.BN_num_bits(n[0]) + + @property + def key_size(self) -> int: + return self._key_size + + def encrypt(self, plaintext: bytes, padding: AsymmetricPadding) -> bytes: + return _enc_dec_rsa(self._backend, self, plaintext, padding) + + def public_numbers(self) -> RSAPublicNumbers: + n = self._backend._ffi.new("BIGNUM **") + e = self._backend._ffi.new("BIGNUM **") + self._backend._lib.RSA_get0_key( + self._rsa_cdata, n, e, self._backend._ffi.NULL + ) + self._backend.openssl_assert(n[0] != self._backend._ffi.NULL) + self._backend.openssl_assert(e[0] != self._backend._ffi.NULL) + return RSAPublicNumbers( + e=self._backend._bn_to_int(e[0]), + n=self._backend._bn_to_int(n[0]), + ) + + def public_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PublicFormat, + ) -> bytes: + return self._backend._public_key_bytes( + encoding, format, self, self._evp_pkey, self._rsa_cdata + ) + + def verify( + self, + signature: bytes, + data: bytes, + padding: AsymmetricPadding, + algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm], + ) -> None: + data, algorithm = _calculate_digest_and_algorithm(data, algorithm) + _rsa_sig_verify( + self._backend, padding, algorithm, self, signature, data + ) + + def recover_data_from_signature( + self, + signature: bytes, + padding: AsymmetricPadding, + algorithm: typing.Optional[hashes.HashAlgorithm], + ) -> bytes: + if isinstance(algorithm, asym_utils.Prehashed): + raise TypeError( + "Prehashed is only supported in the sign and verify methods. " + "It cannot be used with recover_data_from_signature." + ) + return _rsa_sig_recover( + self._backend, padding, algorithm, self, signature + ) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/utils.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/utils.py new file mode 100644 index 00000000..3a70a581 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/utils.py @@ -0,0 +1,52 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
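The `_rsa_sig_sign` and `_rsa_sig_verify` helpers above are reached through the public key objects rather than called directly. A minimal usage sketch, assuming the vendored `cryptography` package is importable (the message and key parameters are illustrative):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"message to sign"
pss = padding.PSS(
    mgf=padding.MGF1(hashes.SHA256()),
    salt_length=padding.PSS.MAX_LENGTH,
)

signature = private_key.sign(message, pss, hashes.SHA256())

# verify() returns None on success and raises InvalidSignature on mismatch.
private_key.public_key().verify(signature, message, pss, hashes.SHA256())
```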
+ +import typing + +from cryptography.hazmat.primitives import hashes +from cryptography.hazmat.primitives.asymmetric.utils import Prehashed + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +def _evp_pkey_derive(backend: "Backend", evp_pkey, peer_public_key) -> bytes: + ctx = backend._lib.EVP_PKEY_CTX_new(evp_pkey, backend._ffi.NULL) + backend.openssl_assert(ctx != backend._ffi.NULL) + ctx = backend._ffi.gc(ctx, backend._lib.EVP_PKEY_CTX_free) + res = backend._lib.EVP_PKEY_derive_init(ctx) + backend.openssl_assert(res == 1) + res = backend._lib.EVP_PKEY_derive_set_peer(ctx, peer_public_key._evp_pkey) + backend.openssl_assert(res == 1) + keylen = backend._ffi.new("size_t *") + res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen) + backend.openssl_assert(res == 1) + backend.openssl_assert(keylen[0] > 0) + buf = backend._ffi.new("unsigned char[]", keylen[0]) + res = backend._lib.EVP_PKEY_derive(ctx, buf, keylen) + if res != 1: + errors_with_text = backend._consume_errors_with_text() + raise ValueError("Error computing shared key.", errors_with_text) + + return backend._ffi.buffer(buf, keylen[0])[:] + + +def _calculate_digest_and_algorithm( + data: bytes, + algorithm: typing.Union[Prehashed, hashes.HashAlgorithm], +) -> typing.Tuple[bytes, hashes.HashAlgorithm]: + if not isinstance(algorithm, Prehashed): + hash_ctx = hashes.Hash(algorithm) + hash_ctx.update(data) + data = hash_ctx.finalize() + else: + algorithm = algorithm._algorithm + + if len(data) != algorithm.digest_size: + raise ValueError( + "The provided data must be the same length as the hash " + "algorithm's digest size." + ) + + return (data, algorithm) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/x25519.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/x25519.py new file mode 100644 index 00000000..f68501a3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/x25519.py @@ -0,0 +1,132 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
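`_calculate_digest_and_algorithm` above is also what makes `Prehashed` work: a caller that already holds the digest wraps the hash algorithm in `Prehashed`, and the helper only length-checks the data instead of hashing it again. A hedged sketch:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa, utils

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Digest computed elsewhere (e.g. by another service); 32 bytes for SHA-256.
h = hashes.Hash(hashes.SHA256())
h.update(b"data hashed out of band")
digest = h.finalize()

signature = private_key.sign(
    digest,
    padding.PKCS1v15(),
    utils.Prehashed(hashes.SHA256()),  # skip re-hashing, length check only
)
```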
+ +import typing + +from cryptography.hazmat.backends.openssl.utils import _evp_pkey_derive +from cryptography.hazmat.primitives import serialization +from cryptography.hazmat.primitives.asymmetric.x25519 import ( + X25519PrivateKey, + X25519PublicKey, +) + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + + +_X25519_KEY_SIZE = 32 + + +class _X25519PublicKey(X25519PublicKey): + def __init__(self, backend: "Backend", evp_pkey): + self._backend = backend + self._evp_pkey = evp_pkey + + def public_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PublicFormat, + ) -> bytes: + if ( + encoding is serialization.Encoding.Raw + or format is serialization.PublicFormat.Raw + ): + if ( + encoding is not serialization.Encoding.Raw + or format is not serialization.PublicFormat.Raw + ): + raise ValueError( + "When using Raw both encoding and format must be Raw" + ) + + return self._raw_public_bytes() + + return self._backend._public_key_bytes( + encoding, format, self, self._evp_pkey, None + ) + + def _raw_public_bytes(self) -> bytes: + ucharpp = self._backend._ffi.new("unsigned char **") + res = self._backend._lib.EVP_PKEY_get1_tls_encodedpoint( + self._evp_pkey, ucharpp + ) + self._backend.openssl_assert(res == 32) + self._backend.openssl_assert(ucharpp[0] != self._backend._ffi.NULL) + data = self._backend._ffi.gc( + ucharpp[0], self._backend._lib.OPENSSL_free + ) + return self._backend._ffi.buffer(data, res)[:] + + +class _X25519PrivateKey(X25519PrivateKey): + def __init__(self, backend: "Backend", evp_pkey): + self._backend = backend + self._evp_pkey = evp_pkey + + def public_key(self) -> X25519PublicKey: + bio = self._backend._create_mem_bio_gc() + res = self._backend._lib.i2d_PUBKEY_bio(bio, self._evp_pkey) + self._backend.openssl_assert(res == 1) + evp_pkey = self._backend._lib.d2i_PUBKEY_bio( + bio, self._backend._ffi.NULL + ) + self._backend.openssl_assert(evp_pkey != self._backend._ffi.NULL) + evp_pkey = self._backend._ffi.gc( + evp_pkey, self._backend._lib.EVP_PKEY_free + ) + return _X25519PublicKey(self._backend, evp_pkey) + + def exchange(self, peer_public_key: X25519PublicKey) -> bytes: + if not isinstance(peer_public_key, X25519PublicKey): + raise TypeError("peer_public_key must be X25519PublicKey.") + + return _evp_pkey_derive(self._backend, self._evp_pkey, peer_public_key) + + def private_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PrivateFormat, + encryption_algorithm: serialization.KeySerializationEncryption, + ) -> bytes: + if ( + encoding is serialization.Encoding.Raw + or format is serialization.PublicFormat.Raw + ): + if ( + format is not serialization.PrivateFormat.Raw + or encoding is not serialization.Encoding.Raw + or not isinstance( + encryption_algorithm, serialization.NoEncryption + ) + ): + raise ValueError( + "When using Raw both encoding and format must be Raw " + "and encryption_algorithm must be NoEncryption()" + ) + + return self._raw_private_bytes() + + return self._backend._private_key_bytes( + encoding, format, encryption_algorithm, self, self._evp_pkey, None + ) + + def _raw_private_bytes(self) -> bytes: + # When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_111 we can + # switch this to EVP_PKEY_new_raw_private_key + # The trick we use here is serializing to a PKCS8 key and just + # using the last 32 bytes, which is the key itself. 
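+        # (A PKCS8-serialized X25519 private key is 48 bytes: a fixed
+        # 16-byte DER prefix followed by the 32-byte raw private key, hence
+        # the length assertion below.)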
+ bio = self._backend._create_mem_bio_gc() + res = self._backend._lib.i2d_PKCS8PrivateKey_bio( + bio, + self._evp_pkey, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + 0, + self._backend._ffi.NULL, + self._backend._ffi.NULL, + ) + self._backend.openssl_assert(res == 1) + pkcs8 = self._backend._read_mem_bio(bio) + self._backend.openssl_assert(len(pkcs8) == 48) + return pkcs8[-_X25519_KEY_SIZE:] diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/x448.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/x448.py new file mode 100644 index 00000000..f45db56e --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/x448.py @@ -0,0 +1,117 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.backends.openssl.utils import _evp_pkey_derive +from cryptography.hazmat.primitives import serialization +from cryptography.hazmat.primitives.asymmetric.x448 import ( + X448PrivateKey, + X448PublicKey, +) + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.backend import Backend + +_X448_KEY_SIZE = 56 + + +class _X448PublicKey(X448PublicKey): + def __init__(self, backend: "Backend", evp_pkey): + self._backend = backend + self._evp_pkey = evp_pkey + + def public_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PublicFormat, + ) -> bytes: + if ( + encoding is serialization.Encoding.Raw + or format is serialization.PublicFormat.Raw + ): + if ( + encoding is not serialization.Encoding.Raw + or format is not serialization.PublicFormat.Raw + ): + raise ValueError( + "When using Raw both encoding and format must be Raw" + ) + + return self._raw_public_bytes() + + return self._backend._public_key_bytes( + encoding, format, self, self._evp_pkey, None + ) + + def _raw_public_bytes(self) -> bytes: + buf = self._backend._ffi.new("unsigned char []", _X448_KEY_SIZE) + buflen = self._backend._ffi.new("size_t *", _X448_KEY_SIZE) + res = self._backend._lib.EVP_PKEY_get_raw_public_key( + self._evp_pkey, buf, buflen + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _X448_KEY_SIZE) + return self._backend._ffi.buffer(buf, _X448_KEY_SIZE)[:] + + +class _X448PrivateKey(X448PrivateKey): + def __init__(self, backend: "Backend", evp_pkey): + self._backend = backend + self._evp_pkey = evp_pkey + + def public_key(self) -> X448PublicKey: + buf = self._backend._ffi.new("unsigned char []", _X448_KEY_SIZE) + buflen = self._backend._ffi.new("size_t *", _X448_KEY_SIZE) + res = self._backend._lib.EVP_PKEY_get_raw_public_key( + self._evp_pkey, buf, buflen + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _X448_KEY_SIZE) + public_bytes = self._backend._ffi.buffer(buf)[:] + return self._backend.x448_load_public_bytes(public_bytes) + + def exchange(self, peer_public_key: X448PublicKey) -> bytes: + if not isinstance(peer_public_key, X448PublicKey): + raise TypeError("peer_public_key must be X448PublicKey.") + + return _evp_pkey_derive(self._backend, self._evp_pkey, peer_public_key) + + def private_bytes( + self, + encoding: serialization.Encoding, + format: serialization.PrivateFormat, + encryption_algorithm: serialization.KeySerializationEncryption, + ) -> bytes: + if ( + encoding is serialization.Encoding.Raw + or format is serialization.PublicFormat.Raw + ): + 
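+            # At least one of encoding/format requested Raw; the combined
+            # Raw/Raw/NoEncryption requirement is enforced below.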
if ( + format is not serialization.PrivateFormat.Raw + or encoding is not serialization.Encoding.Raw + or not isinstance( + encryption_algorithm, serialization.NoEncryption + ) + ): + raise ValueError( + "When using Raw both encoding and format must be Raw " + "and encryption_algorithm must be NoEncryption()" + ) + + return self._raw_private_bytes() + + return self._backend._private_key_bytes( + encoding, format, encryption_algorithm, self, self._evp_pkey, None + ) + + def _raw_private_bytes(self) -> bytes: + buf = self._backend._ffi.new("unsigned char []", _X448_KEY_SIZE) + buflen = self._backend._ffi.new("size_t *", _X448_KEY_SIZE) + res = self._backend._lib.EVP_PKEY_get_raw_private_key( + self._evp_pkey, buf, buflen + ) + self._backend.openssl_assert(res == 1) + self._backend.openssl_assert(buflen[0] == _X448_KEY_SIZE) + return self._backend._ffi.buffer(buf, _X448_KEY_SIZE)[:] diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/x509.py b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/x509.py new file mode 100644 index 00000000..aa4ed106 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/x509.py @@ -0,0 +1,45 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import warnings + +from cryptography import utils, x509 + + +# This exists for pyOpenSSL compatibility and SHOULD NOT BE USED +# WE WILL REMOVE THIS VERY SOON. +def _Certificate(backend, x509) -> x509.Certificate: # noqa: N802 + warnings.warn( + "This version of cryptography contains a temporary pyOpenSSL " + "fallback path. Upgrade pyOpenSSL now.", + utils.DeprecatedIn35, + ) + return backend._ossl2cert(x509) + + +# This exists for pyOpenSSL compatibility and SHOULD NOT BE USED +# WE WILL REMOVE THIS VERY SOON. +def _CertificateSigningRequest( # noqa: N802 + backend, x509_req +) -> x509.CertificateSigningRequest: + warnings.warn( + "This version of cryptography contains a temporary pyOpenSSL " + "fallback path. Upgrade pyOpenSSL now.", + utils.DeprecatedIn35, + ) + return backend._ossl2csr(x509_req) + + +# This exists for pyOpenSSL compatibility and SHOULD NOT BE USED +# WE WILL REMOVE THIS VERY SOON. +def _CertificateRevocationList( # noqa: N802 + backend, x509_crl +) -> x509.CertificateRevocationList: + warnings.warn( + "This version of cryptography contains a temporary pyOpenSSL " + "fallback path. Upgrade pyOpenSSL now.", + utils.DeprecatedIn35, + ) + return backend._ossl2crl(x509_crl) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/__init__.py new file mode 100644 index 00000000..b5093362 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/__init__.py @@ -0,0 +1,3 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
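The X25519/X448 `exchange` methods earlier in this patch both delegate to `_evp_pkey_derive`. A usage sketch with the public X448 API:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x448 import X448PrivateKey

alice = X448PrivateKey.generate()
bob = X448PrivateKey.generate()

# Both sides derive the same 56-byte (_X448_KEY_SIZE) shared secret.
shared = alice.exchange(bob.public_key())
assert shared == bob.exchange(alice.public_key())
assert len(shared) == 56

# Raw serialization must pair Encoding.Raw with PublicFormat.Raw.
raw_public = bob.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)
assert len(raw_public) == 56
```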
diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so new file mode 100644 index 00000000..aa9cabe1 Binary files /dev/null and b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so differ diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_openssl.pyi b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_openssl.pyi new file mode 100644 index 00000000..80100082 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_openssl.pyi @@ -0,0 +1,8 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +lib = typing.Any +ffi = typing.Any diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust.abi3.so b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust.abi3.so new file mode 100644 index 00000000..f851634f Binary files /dev/null and b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust.abi3.so differ diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/__init__.pyi b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/__init__.pyi new file mode 100644 index 00000000..bab90a5a --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/__init__.pyi @@ -0,0 +1,29 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +def check_pkcs7_padding(data: bytes) -> bool: ... +def check_ansix923_padding(data: bytes) -> bool: ... + +class ObjectIdentifier: + def __init__(self, val: str) -> None: ... + @property + def dotted_string(self) -> str: ... + @property + def _name(self) -> str: ... + +T = typing.TypeVar("T") + +class FixedPool(typing.Generic[T]): + def __init__( + self, + create: typing.Callable[[], T], + destroy: typing.Callable[[T], None], + ) -> None: ... + def acquire(self) -> PoolAcquisition[T]: ... + +class PoolAcquisition(typing.Generic[T]): + def __enter__(self) -> T: ... + def __exit__(self, exc_type, exc_value, exc_tb) -> None: ... diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/asn1.pyi b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/asn1.pyi new file mode 100644 index 00000000..a8369ba8 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/asn1.pyi @@ -0,0 +1,16 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +class TestCertificate: + not_after_tag: int + not_before_tag: int + issuer_value_tags: typing.List[int] + subject_value_tags: typing.List[int] + +def decode_dss_signature(signature: bytes) -> typing.Tuple[int, int]: ... +def encode_dss_signature(r: int, s: int) -> bytes: ... +def parse_spki_for_data(data: bytes) -> bytes: ... +def test_parse_certificate(data: bytes) -> TestCertificate: ... 
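The `encode_dss_signature`/`decode_dss_signature` stubs above are re-exported through `cryptography.hazmat.primitives.asymmetric.utils`; they round-trip an `(r, s)` pair through the DER `Dss-Sig-Value` encoding:

```python
from cryptography.hazmat.primitives.asymmetric.utils import (
    decode_dss_signature,
    encode_dss_signature,
)

der = encode_dss_signature(12345, 67890)  # DER SEQUENCE of two INTEGERs
assert decode_dss_signature(der) == (12345, 67890)
```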
diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/ocsp.pyi b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/ocsp.pyi new file mode 100644 index 00000000..acdea3dd --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/ocsp.pyi @@ -0,0 +1,26 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives.asymmetric.types import PRIVATE_KEY_TYPES +from cryptography.hazmat.primitives import hashes +from cryptography.x509 import Extension +from cryptography.x509.ocsp import ( + OCSPRequest, + OCSPRequestBuilder, + OCSPResponse, + OCSPResponseStatus, + OCSPResponseBuilder, +) + +def load_der_ocsp_request(data: bytes) -> OCSPRequest: ... +def load_der_ocsp_response(data: bytes) -> OCSPResponse: ... +def create_ocsp_request(builder: OCSPRequestBuilder) -> OCSPRequest: ... +def create_ocsp_response( + status: OCSPResponseStatus, + builder: typing.Optional[OCSPResponseBuilder], + private_key: typing.Optional[PRIVATE_KEY_TYPES], + hash_algorithm: typing.Optional[hashes.HashAlgorithm], +) -> OCSPResponse: ... diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/x509.pyi b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/x509.pyi new file mode 100644 index 00000000..317deb6e --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/_rust/x509.pyi @@ -0,0 +1,40 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import datetime +import typing + +from cryptography import x509 +from cryptography.hazmat.primitives import hashes +from cryptography.hazmat.primitives.asymmetric.types import PRIVATE_KEY_TYPES + +def load_pem_x509_certificate(data: bytes) -> x509.Certificate: ... +def load_der_x509_certificate(data: bytes) -> x509.Certificate: ... +def load_pem_x509_crl(data: bytes) -> x509.CertificateRevocationList: ... +def load_der_x509_crl(data: bytes) -> x509.CertificateRevocationList: ... +def load_pem_x509_csr(data: bytes) -> x509.CertificateSigningRequest: ... +def load_der_x509_csr(data: bytes) -> x509.CertificateSigningRequest: ... +def encode_name_bytes(name: x509.Name) -> bytes: ... +def encode_extension_value(extension: x509.ExtensionType) -> bytes: ... +def create_x509_certificate( + builder: x509.CertificateBuilder, + private_key: PRIVATE_KEY_TYPES, + hash_algorithm: typing.Optional[hashes.HashAlgorithm], +) -> x509.Certificate: ... +def create_x509_csr( + builder: x509.CertificateSigningRequestBuilder, + private_key: PRIVATE_KEY_TYPES, + hash_algorithm: typing.Optional[hashes.HashAlgorithm], +) -> x509.CertificateSigningRequest: ... +def create_x509_crl( + builder: x509.CertificateRevocationListBuilder, + private_key: PRIVATE_KEY_TYPES, + hash_algorithm: typing.Optional[hashes.HashAlgorithm], +) -> x509.CertificateRevocationList: ... + +class Sct: ... +class Certificate: ... +class RevokedCertificate: ... +class CertificateRevocationList: ... +class CertificateSigningRequest: ... 
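The loader stubs above back the top-level `cryptography.x509` functions. A sketch, where `cert.pem` is a hypothetical file holding a PEM-encoded certificate:

```python
from cryptography import x509

# Hypothetical input file; loading malformed PEM raises ValueError.
with open("cert.pem", "rb") as f:
    pem_bytes = f.read()

cert = x509.load_pem_x509_certificate(pem_bytes)
print(cert.serial_number)
print(cert.not_valid_after)
```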
diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/openssl/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/openssl/__init__.py new file mode 100644 index 00000000..b5093362 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/openssl/__init__.py @@ -0,0 +1,3 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/openssl/_conditional.py b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/openssl/_conditional.py new file mode 100644 index 00000000..10f307af --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/openssl/_conditional.py @@ -0,0 +1,388 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + + +def cryptography_has_ec2m() -> typing.List[str]: + return [ + "EC_POINT_get_affine_coordinates_GF2m", + ] + + +def cryptography_has_ssl3_method() -> typing.List[str]: + return [ + "SSLv3_method", + "SSLv3_client_method", + "SSLv3_server_method", + ] + + +def cryptography_has_110_verification_params() -> typing.List[str]: + return ["X509_CHECK_FLAG_NEVER_CHECK_SUBJECT"] + + +def cryptography_has_set_cert_cb() -> typing.List[str]: + return [ + "SSL_CTX_set_cert_cb", + "SSL_set_cert_cb", + ] + + +def cryptography_has_ssl_st() -> typing.List[str]: + return [ + "SSL_ST_BEFORE", + "SSL_ST_OK", + "SSL_ST_INIT", + "SSL_ST_RENEGOTIATE", + ] + + +def cryptography_has_tls_st() -> typing.List[str]: + return [ + "TLS_ST_BEFORE", + "TLS_ST_OK", + ] + + +def cryptography_has_scrypt() -> typing.List[str]: + return [ + "EVP_PBE_scrypt", + ] + + +def cryptography_has_evp_pkey_dhx() -> typing.List[str]: + return [ + "EVP_PKEY_DHX", + ] + + +def cryptography_has_mem_functions() -> typing.List[str]: + return [ + "Cryptography_CRYPTO_set_mem_functions", + ] + + +def cryptography_has_x509_store_ctx_get_issuer() -> typing.List[str]: + return [ + "X509_STORE_get_get_issuer", + "X509_STORE_set_get_issuer", + ] + + +def cryptography_has_ed448() -> typing.List[str]: + return [ + "EVP_PKEY_ED448", + "NID_ED448", + ] + + +def cryptography_has_ed25519() -> typing.List[str]: + return [ + "NID_ED25519", + "EVP_PKEY_ED25519", + ] + + +def cryptography_has_poly1305() -> typing.List[str]: + return [ + "NID_poly1305", + "EVP_PKEY_POLY1305", + ] + + +def cryptography_has_oneshot_evp_digest_sign_verify() -> typing.List[str]: + return [ + "EVP_DigestSign", + "EVP_DigestVerify", + ] + + +def cryptography_has_evp_digestfinal_xof() -> typing.List[str]: + return [ + "EVP_DigestFinalXOF", + ] + + +def cryptography_has_evp_pkey_get_set_tls_encodedpoint() -> typing.List[str]: + return [ + "EVP_PKEY_get1_tls_encodedpoint", + "EVP_PKEY_set1_tls_encodedpoint", + ] + + +def cryptography_has_fips() -> typing.List[str]: + return [ + "FIPS_mode_set", + "FIPS_mode", + ] + + +def cryptography_has_ssl_sigalgs() -> typing.List[str]: + return [ + "SSL_CTX_set1_sigalgs_list", + ] + + +def cryptography_has_psk() -> typing.List[str]: + return [ + "SSL_CTX_use_psk_identity_hint", + "SSL_CTX_set_psk_server_callback", + "SSL_CTX_set_psk_client_callback", + ] + + +def cryptography_has_psk_tlsv13() -> typing.List[str]: + return [ + "SSL_CTX_set_psk_find_session_callback", + 
"SSL_CTX_set_psk_use_session_callback", + "Cryptography_SSL_SESSION_new", + "SSL_CIPHER_find", + "SSL_SESSION_set1_master_key", + "SSL_SESSION_set_cipher", + "SSL_SESSION_set_protocol_version", + ] + + +def cryptography_has_custom_ext() -> typing.List[str]: + return [ + "SSL_CTX_add_client_custom_ext", + "SSL_CTX_add_server_custom_ext", + "SSL_extension_supported", + ] + + +def cryptography_has_openssl_cleanup() -> typing.List[str]: + return [ + "OPENSSL_cleanup", + ] + + +def cryptography_has_tlsv13() -> typing.List[str]: + return [ + "TLS1_3_VERSION", + "SSL_OP_NO_TLSv1_3", + ] + + +def cryptography_has_tlsv13_functions() -> typing.List[str]: + return [ + "SSL_VERIFY_POST_HANDSHAKE", + "SSL_CTX_set_ciphersuites", + "SSL_verify_client_post_handshake", + "SSL_CTX_set_post_handshake_auth", + "SSL_set_post_handshake_auth", + "SSL_SESSION_get_max_early_data", + "SSL_write_early_data", + "SSL_read_early_data", + "SSL_CTX_set_max_early_data", + ] + + +def cryptography_has_keylog() -> typing.List[str]: + return [ + "SSL_CTX_set_keylog_callback", + "SSL_CTX_get_keylog_callback", + ] + + +def cryptography_has_raw_key() -> typing.List[str]: + return [ + "EVP_PKEY_new_raw_private_key", + "EVP_PKEY_new_raw_public_key", + "EVP_PKEY_get_raw_private_key", + "EVP_PKEY_get_raw_public_key", + ] + + +def cryptography_has_engine() -> typing.List[str]: + return [ + "ENGINE_by_id", + "ENGINE_init", + "ENGINE_finish", + "ENGINE_get_default_RAND", + "ENGINE_set_default_RAND", + "ENGINE_unregister_RAND", + "ENGINE_ctrl_cmd", + "ENGINE_free", + "ENGINE_get_name", + "Cryptography_add_osrandom_engine", + "ENGINE_ctrl_cmd_string", + "ENGINE_load_builtin_engines", + "ENGINE_load_private_key", + "ENGINE_load_public_key", + "SSL_CTX_set_client_cert_engine", + ] + + +def cryptography_has_verified_chain() -> typing.List[str]: + return [ + "SSL_get0_verified_chain", + ] + + +def cryptography_has_srtp() -> typing.List[str]: + return [ + "SSL_CTX_set_tlsext_use_srtp", + "SSL_set_tlsext_use_srtp", + "SSL_get_selected_srtp_profile", + ] + + +def cryptography_has_get_proto_version() -> typing.List[str]: + return [ + "SSL_CTX_get_min_proto_version", + "SSL_CTX_get_max_proto_version", + "SSL_get_min_proto_version", + "SSL_get_max_proto_version", + ] + + +def cryptography_has_providers() -> typing.List[str]: + return [ + "OSSL_PROVIDER_load", + "OSSL_PROVIDER_unload", + "ERR_LIB_PROV", + "PROV_R_WRONG_FINAL_BLOCK_LENGTH", + "PROV_R_BAD_DECRYPT", + ] + + +def cryptography_has_op_no_renegotiation() -> typing.List[str]: + return [ + "SSL_OP_NO_RENEGOTIATION", + ] + + +def cryptography_has_dtls_get_data_mtu() -> typing.List[str]: + return [ + "DTLS_get_data_mtu", + ] + + +def cryptography_has_300_fips() -> typing.List[str]: + return [ + "EVP_default_properties_is_fips_enabled", + "EVP_default_properties_enable_fips", + ] + + +def cryptography_has_ssl_cookie() -> typing.List[str]: + return [ + "SSL_OP_COOKIE_EXCHANGE", + "DTLSv1_listen", + "SSL_CTX_set_cookie_generate_cb", + "SSL_CTX_set_cookie_verify_cb", + ] + + +def cryptography_has_pkcs7_funcs() -> typing.List[str]: + return [ + "SMIME_write_PKCS7", + "PEM_write_bio_PKCS7_stream", + "PKCS7_sign_add_signer", + "PKCS7_final", + "PKCS7_verify", + "SMIME_read_PKCS7", + "PKCS7_get0_signers", + ] + + +def cryptography_has_bn_flags() -> typing.List[str]: + return [ + "BN_FLG_CONSTTIME", + "BN_set_flags", + "BN_prime_checks_for_size", + ] + + +def cryptography_has_evp_pkey_dh() -> typing.List[str]: + return [ + "EVP_PKEY_set1_DH", + ] + + +def cryptography_has_300_evp_cipher() -> 
typing.List[str]: + return ["EVP_CIPHER_fetch", "EVP_CIPHER_free"] + + +def cryptography_has_unexpected_eof_while_reading() -> typing.List[str]: + return ["SSL_R_UNEXPECTED_EOF_WHILE_READING"] + + +def cryptography_has_pkcs12_set_mac() -> typing.List[str]: + return ["PKCS12_set_mac"] + + +def cryptography_has_ssl_op_ignore_unexpected_eof() -> typing.List[str]: + return [ + "SSL_OP_IGNORE_UNEXPECTED_EOF", + ] + + +# This is a mapping of +# {condition: function-returning-names-dependent-on-that-condition} so we can +# loop over them and delete unsupported names at runtime. It will be removed +# when cffi supports #if in cdef. We use functions instead of just a dict of +# lists so we can use coverage to measure which are used. +CONDITIONAL_NAMES = { + "Cryptography_HAS_EC2M": cryptography_has_ec2m, + "Cryptography_HAS_SSL3_METHOD": cryptography_has_ssl3_method, + "Cryptography_HAS_110_VERIFICATION_PARAMS": ( + cryptography_has_110_verification_params + ), + "Cryptography_HAS_SET_CERT_CB": cryptography_has_set_cert_cb, + "Cryptography_HAS_SSL_ST": cryptography_has_ssl_st, + "Cryptography_HAS_TLS_ST": cryptography_has_tls_st, + "Cryptography_HAS_SCRYPT": cryptography_has_scrypt, + "Cryptography_HAS_EVP_PKEY_DHX": cryptography_has_evp_pkey_dhx, + "Cryptography_HAS_MEM_FUNCTIONS": cryptography_has_mem_functions, + "Cryptography_HAS_X509_STORE_CTX_GET_ISSUER": ( + cryptography_has_x509_store_ctx_get_issuer + ), + "Cryptography_HAS_ED448": cryptography_has_ed448, + "Cryptography_HAS_ED25519": cryptography_has_ed25519, + "Cryptography_HAS_POLY1305": cryptography_has_poly1305, + "Cryptography_HAS_ONESHOT_EVP_DIGEST_SIGN_VERIFY": ( + cryptography_has_oneshot_evp_digest_sign_verify + ), + "Cryptography_HAS_EVP_PKEY_get_set_tls_encodedpoint": ( + cryptography_has_evp_pkey_get_set_tls_encodedpoint + ), + "Cryptography_HAS_FIPS": cryptography_has_fips, + "Cryptography_HAS_SIGALGS": cryptography_has_ssl_sigalgs, + "Cryptography_HAS_PSK": cryptography_has_psk, + "Cryptography_HAS_PSK_TLSv1_3": cryptography_has_psk_tlsv13, + "Cryptography_HAS_CUSTOM_EXT": cryptography_has_custom_ext, + "Cryptography_HAS_OPENSSL_CLEANUP": cryptography_has_openssl_cleanup, + "Cryptography_HAS_TLSv1_3": cryptography_has_tlsv13, + "Cryptography_HAS_TLSv1_3_FUNCTIONS": cryptography_has_tlsv13_functions, + "Cryptography_HAS_KEYLOG": cryptography_has_keylog, + "Cryptography_HAS_RAW_KEY": cryptography_has_raw_key, + "Cryptography_HAS_EVP_DIGESTFINAL_XOF": ( + cryptography_has_evp_digestfinal_xof + ), + "Cryptography_HAS_ENGINE": cryptography_has_engine, + "Cryptography_HAS_VERIFIED_CHAIN": cryptography_has_verified_chain, + "Cryptography_HAS_SRTP": cryptography_has_srtp, + "Cryptography_HAS_GET_PROTO_VERSION": cryptography_has_get_proto_version, + "Cryptography_HAS_PROVIDERS": cryptography_has_providers, + "Cryptography_HAS_OP_NO_RENEGOTIATION": ( + cryptography_has_op_no_renegotiation + ), + "Cryptography_HAS_DTLS_GET_DATA_MTU": cryptography_has_dtls_get_data_mtu, + "Cryptography_HAS_300_FIPS": cryptography_has_300_fips, + "Cryptography_HAS_SSL_COOKIE": cryptography_has_ssl_cookie, + "Cryptography_HAS_PKCS7_FUNCS": cryptography_has_pkcs7_funcs, + "Cryptography_HAS_BN_FLAGS": cryptography_has_bn_flags, + "Cryptography_HAS_EVP_PKEY_DH": cryptography_has_evp_pkey_dh, + "Cryptography_HAS_300_EVP_CIPHER": cryptography_has_300_evp_cipher, + "Cryptography_HAS_UNEXPECTED_EOF_WHILE_READING": ( + cryptography_has_unexpected_eof_while_reading + ), + "Cryptography_HAS_PKCS12_SET_MAC": cryptography_has_pkcs12_set_mac, + 
"Cryptography_HAS_SSL_OP_IGNORE_UNEXPECTED_EOF": ( + cryptography_has_ssl_op_ignore_unexpected_eof + ), +} diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/openssl/binding.py b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/openssl/binding.py new file mode 100644 index 00000000..2b4c574b --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/bindings/openssl/binding.py @@ -0,0 +1,230 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import threading +import types +import typing +import warnings + +import cryptography +from cryptography import utils +from cryptography.exceptions import InternalError +from cryptography.hazmat.bindings._openssl import ffi, lib +from cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES + +_OpenSSLErrorWithText = typing.NamedTuple( + "_OpenSSLErrorWithText", + [("code", int), ("lib", int), ("reason", int), ("reason_text", bytes)], +) + + +class _OpenSSLError: + def __init__(self, code: int, lib: int, reason: int): + self._code = code + self._lib = lib + self._reason = reason + + def _lib_reason_match(self, lib: int, reason: int) -> bool: + return lib == self.lib and reason == self.reason + + @property + def code(self) -> int: + return self._code + + @property + def lib(self) -> int: + return self._lib + + @property + def reason(self) -> int: + return self._reason + + +def _consume_errors(lib) -> typing.List[_OpenSSLError]: + errors = [] + while True: + code: int = lib.ERR_get_error() + if code == 0: + break + + err_lib: int = lib.ERR_GET_LIB(code) + err_reason: int = lib.ERR_GET_REASON(code) + + errors.append(_OpenSSLError(code, err_lib, err_reason)) + + return errors + + +def _errors_with_text( + errors: typing.List[_OpenSSLError], +) -> typing.List[_OpenSSLErrorWithText]: + errors_with_text = [] + for err in errors: + buf = ffi.new("char[]", 256) + lib.ERR_error_string_n(err.code, buf, len(buf)) + err_text_reason: bytes = ffi.string(buf) + + errors_with_text.append( + _OpenSSLErrorWithText( + err.code, err.lib, err.reason, err_text_reason + ) + ) + + return errors_with_text + + +def _consume_errors_with_text(lib): + return _errors_with_text(_consume_errors(lib)) + + +def _openssl_assert( + lib, ok: bool, errors: typing.Optional[typing.List[_OpenSSLError]] = None +) -> None: + if not ok: + if errors is None: + errors = _consume_errors(lib) + errors_with_text = _errors_with_text(errors) + + raise InternalError( + "Unknown OpenSSL error. This error is commonly encountered when " + "another library is not cleaning up the OpenSSL error stack. If " + "you are using cryptography with another library that uses " + "OpenSSL try disabling it before reporting a bug. Otherwise " + "please file an issue at https://github.com/pyca/cryptography/" + "issues with information on how to reproduce " + "this. 
({0!r})".format(errors_with_text), + errors_with_text, + ) + + +def build_conditional_library(lib, conditional_names): + conditional_lib = types.ModuleType("lib") + conditional_lib._original_lib = lib # type: ignore[attr-defined] + excluded_names = set() + for condition, names_cb in conditional_names.items(): + if not getattr(lib, condition): + excluded_names.update(names_cb()) + + for attr in dir(lib): + if attr not in excluded_names: + setattr(conditional_lib, attr, getattr(lib, attr)) + + return conditional_lib + + +class Binding: + """ + OpenSSL API wrapper. + """ + + lib: typing.ClassVar = None + ffi = ffi + _lib_loaded = False + _init_lock = threading.Lock() + _legacy_provider: typing.Any = None + _default_provider: typing.Any = None + + def __init__(self): + self._ensure_ffi_initialized() + + def _enable_fips(self) -> None: + # This function enables FIPS mode for OpenSSL 3.0.0 on installs that + # have the FIPS provider installed properly. + _openssl_assert(self.lib, self.lib.CRYPTOGRAPHY_OPENSSL_300_OR_GREATER) + self._base_provider = self.lib.OSSL_PROVIDER_load( + self.ffi.NULL, b"base" + ) + _openssl_assert(self.lib, self._base_provider != self.ffi.NULL) + self.lib._fips_provider = self.lib.OSSL_PROVIDER_load( + self.ffi.NULL, b"fips" + ) + _openssl_assert(self.lib, self.lib._fips_provider != self.ffi.NULL) + + res = self.lib.EVP_default_properties_enable_fips(self.ffi.NULL, 1) + _openssl_assert(self.lib, res == 1) + + @classmethod + def _register_osrandom_engine(cls): + # Clear any errors extant in the queue before we start. In many + # scenarios other things may be interacting with OpenSSL in the same + # process space and it has proven untenable to assume that they will + # reliably clear the error queue. Once we clear it here we will + # error on any subsequent unexpected item in the stack. + cls.lib.ERR_clear_error() + if cls.lib.CRYPTOGRAPHY_NEEDS_OSRANDOM_ENGINE: + result = cls.lib.Cryptography_add_osrandom_engine() + _openssl_assert(cls.lib, result in (1, 2)) + + @classmethod + def _ensure_ffi_initialized(cls): + with cls._init_lock: + if not cls._lib_loaded: + cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES) + cls._lib_loaded = True + cls._register_osrandom_engine() + # As of OpenSSL 3.0.0 we must register a legacy cipher provider + # to get RC2 (needed for junk asymmetric private key + # serialization), RC4, Blowfish, IDEA, SEED, etc. These things + # are ugly legacy, but we aren't going to get rid of them + # any time soon. + if cls.lib.CRYPTOGRAPHY_OPENSSL_300_OR_GREATER: + cls._legacy_provider = cls.lib.OSSL_PROVIDER_load( + cls.ffi.NULL, b"legacy" + ) + _openssl_assert( + cls.lib, cls._legacy_provider != cls.ffi.NULL + ) + cls._default_provider = cls.lib.OSSL_PROVIDER_load( + cls.ffi.NULL, b"default" + ) + _openssl_assert( + cls.lib, cls._default_provider != cls.ffi.NULL + ) + + @classmethod + def init_static_locks(cls): + cls._ensure_ffi_initialized() + + +def _verify_openssl_version(lib): + if ( + lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_111 + and not lib.CRYPTOGRAPHY_IS_LIBRESSL + and not lib.CRYPTOGRAPHY_IS_BORINGSSL + ): + warnings.warn( + "OpenSSL version 1.1.0 is no longer supported by the OpenSSL " + "project, please upgrade. The next release of cryptography will " + "drop support for OpenSSL 1.1.0.", + utils.DeprecatedIn37, + ) + + +def _verify_package_version(version): + # Occasionally we run into situations where the version of the Python + # package does not match the version of the shared object that is loaded. 
+ # This may occur in environments where multiple versions of cryptography + # are installed and available in the python path. To avoid errors cropping + # up later this code checks that the currently imported package and the + # shared object that were loaded have the same version and raise an + # ImportError if they do not + so_package_version = ffi.string(lib.CRYPTOGRAPHY_PACKAGE_VERSION) + if version.encode("ascii") != so_package_version: + raise ImportError( + "The version of cryptography does not match the loaded " + "shared object. This can happen if you have multiple copies of " + "cryptography installed in your Python path. Please try creating " + "a new virtual environment to resolve this issue. " + "Loaded python version: {}, shared object version: {}".format( + version, so_package_version + ) + ) + + +_verify_package_version(cryptography.__version__) + +Binding.init_static_locks() + +_verify_openssl_version(Binding.lib) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/__init__.py new file mode 100644 index 00000000..b5093362 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/__init__.py @@ -0,0 +1,3 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/_asymmetric.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/_asymmetric.py new file mode 100644 index 00000000..cdadbdef --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/_asymmetric.py @@ -0,0 +1,17 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import abc + + +# This exists to break an import cycle. It is normally accessible from the +# asymmetric padding module. + + +class AsymmetricPadding(metaclass=abc.ABCMeta): + @abc.abstractproperty + def name(self) -> str: + """ + A string naming this padding (e.g. "PSS", "PKCS1"). + """ diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/_cipheralgorithm.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/_cipheralgorithm.py new file mode 100644 index 00000000..7f322048 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/_cipheralgorithm.py @@ -0,0 +1,40 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import abc +import typing + + +# This exists to break an import cycle. It is normally accessible from the +# ciphers module. + + +class CipherAlgorithm(metaclass=abc.ABCMeta): + @abc.abstractproperty + def name(self) -> str: + """ + A string naming this mode (e.g. "AES", "Camellia"). + """ + + @abc.abstractproperty + def key_sizes(self) -> typing.FrozenSet[int]: + """ + Valid key sizes for this algorithm in bits + """ + + @abc.abstractproperty + def key_size(self) -> int: + """ + The size of the key being used as an integer in bits (e.g. 128, 256). + """ + + +class BlockCipherAlgorithm(metaclass=abc.ABCMeta): + key: bytes + + @abc.abstractproperty + def block_size(self) -> int: + """ + The size of a block as an integer in bits (e.g. 64, 128). 
+ """ diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/_serialization.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/_serialization.py new file mode 100644 index 00000000..fddb4c85 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/_serialization.py @@ -0,0 +1,168 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import abc +import typing + +from cryptography import utils +from cryptography.hazmat.primitives.hashes import HashAlgorithm + +# This exists to break an import cycle. These classes are normally accessible +# from the serialization module. + + +class PBES(utils.Enum): + PBESv1SHA1And3KeyTripleDESCBC = "PBESv1 using SHA1 and 3-Key TripleDES" + PBESv2SHA256AndAES256CBC = "PBESv2 using SHA256 PBKDF2 and AES256 CBC" + + +class Encoding(utils.Enum): + PEM = "PEM" + DER = "DER" + OpenSSH = "OpenSSH" + Raw = "Raw" + X962 = "ANSI X9.62" + SMIME = "S/MIME" + + +class PrivateFormat(utils.Enum): + PKCS8 = "PKCS8" + TraditionalOpenSSL = "TraditionalOpenSSL" + Raw = "Raw" + OpenSSH = "OpenSSH" + PKCS12 = "PKCS12" + + def encryption_builder(self) -> "KeySerializationEncryptionBuilder": + if self not in (PrivateFormat.OpenSSH, PrivateFormat.PKCS12): + raise ValueError( + "encryption_builder only supported with PrivateFormat.OpenSSH" + " and PrivateFormat.PKCS12" + ) + return KeySerializationEncryptionBuilder(self) + + +class PublicFormat(utils.Enum): + SubjectPublicKeyInfo = "X.509 subjectPublicKeyInfo with PKCS#1" + PKCS1 = "Raw PKCS#1" + OpenSSH = "OpenSSH" + Raw = "Raw" + CompressedPoint = "X9.62 Compressed Point" + UncompressedPoint = "X9.62 Uncompressed Point" + + +class ParameterFormat(utils.Enum): + PKCS3 = "PKCS3" + + +class KeySerializationEncryption(metaclass=abc.ABCMeta): + pass + + +class BestAvailableEncryption(KeySerializationEncryption): + def __init__(self, password: bytes): + if not isinstance(password, bytes) or len(password) == 0: + raise ValueError("Password must be 1 or more bytes.") + + self.password = password + + +class NoEncryption(KeySerializationEncryption): + pass + + +class KeySerializationEncryptionBuilder(object): + def __init__( + self, + format: PrivateFormat, + *, + _kdf_rounds: typing.Optional[int] = None, + _hmac_hash: typing.Optional[HashAlgorithm] = None, + _key_cert_algorithm: typing.Optional[PBES] = None, + ) -> None: + self._format = format + + self._kdf_rounds = _kdf_rounds + self._hmac_hash = _hmac_hash + self._key_cert_algorithm = _key_cert_algorithm + + def kdf_rounds(self, rounds: int) -> "KeySerializationEncryptionBuilder": + if self._kdf_rounds is not None: + raise ValueError("kdf_rounds already set") + + if not isinstance(rounds, int): + raise TypeError("kdf_rounds must be an integer") + + if rounds < 1: + raise ValueError("kdf_rounds must be a positive integer") + + return KeySerializationEncryptionBuilder( + self._format, + _kdf_rounds=rounds, + _hmac_hash=self._hmac_hash, + _key_cert_algorithm=self._key_cert_algorithm, + ) + + def hmac_hash( + self, algorithm: HashAlgorithm + ) -> "KeySerializationEncryptionBuilder": + if self._format is not PrivateFormat.PKCS12: + raise TypeError( + "hmac_hash only supported with PrivateFormat.PKCS12" + ) + + if self._hmac_hash is not None: + raise ValueError("hmac_hash already set") + return KeySerializationEncryptionBuilder( + self._format, + _kdf_rounds=self._kdf_rounds, + 
_hmac_hash=algorithm, + _key_cert_algorithm=self._key_cert_algorithm, + ) + + def key_cert_algorithm( + self, algorithm: PBES + ) -> "KeySerializationEncryptionBuilder": + if self._format is not PrivateFormat.PKCS12: + raise TypeError( + "key_cert_algorithm only supported with " + "PrivateFormat.PKCS12" + ) + if self._key_cert_algorithm is not None: + raise ValueError("key_cert_algorithm already set") + return KeySerializationEncryptionBuilder( + self._format, + _kdf_rounds=self._kdf_rounds, + _hmac_hash=self._hmac_hash, + _key_cert_algorithm=algorithm, + ) + + def build(self, password: bytes) -> KeySerializationEncryption: + if not isinstance(password, bytes) or len(password) == 0: + raise ValueError("Password must be 1 or more bytes.") + + return _KeySerializationEncryption( + self._format, + password, + kdf_rounds=self._kdf_rounds, + hmac_hash=self._hmac_hash, + key_cert_algorithm=self._key_cert_algorithm, + ) + + +class _KeySerializationEncryption(KeySerializationEncryption): + def __init__( + self, + format: PrivateFormat, + password: bytes, + *, + kdf_rounds: typing.Optional[int], + hmac_hash: typing.Optional[HashAlgorithm], + key_cert_algorithm: typing.Optional[PBES], + ): + self._format = format + self.password = password + + self._kdf_rounds = kdf_rounds + self._hmac_hash = hmac_hash + self._key_cert_algorithm = key_cert_algorithm diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/__init__.py new file mode 100644 index 00000000..b5093362 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/__init__.py @@ -0,0 +1,3 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dh.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dh.py new file mode 100644 index 00000000..2093ad4a --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dh.py @@ -0,0 +1,250 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
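The `KeySerializationEncryptionBuilder` defined above gates each option by format. A sketch of the OpenSSH branch (the Ed25519 key is just one example of a serializable private key):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()

# OpenSSH accepts only kdf_rounds(); hmac_hash()/key_cert_algorithm()
# raise TypeError for any format other than PKCS12.
encryption = (
    serialization.PrivateFormat.OpenSSH.encryption_builder()
    .kdf_rounds(30)
    .build(b"a strong passphrase")
)

pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.OpenSSH,
    encryption,
)
```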
+ + +import abc +import typing + +from cryptography.hazmat.primitives import _serialization + + +_MIN_MODULUS_SIZE = 512 + + +def generate_parameters( + generator: int, key_size: int, backend: typing.Any = None +) -> "DHParameters": + from cryptography.hazmat.backends.openssl.backend import backend as ossl + + return ossl.generate_dh_parameters(generator, key_size) + + +class DHParameterNumbers: + def __init__(self, p: int, g: int, q: typing.Optional[int] = None) -> None: + if not isinstance(p, int) or not isinstance(g, int): + raise TypeError("p and g must be integers") + if q is not None and not isinstance(q, int): + raise TypeError("q must be integer or None") + + if g < 2: + raise ValueError("DH generator must be 2 or greater") + + if p.bit_length() < _MIN_MODULUS_SIZE: + raise ValueError( + "p (modulus) must be at least {}-bit".format(_MIN_MODULUS_SIZE) + ) + + self._p = p + self._g = g + self._q = q + + def __eq__(self, other: object) -> bool: + if not isinstance(other, DHParameterNumbers): + return NotImplemented + + return ( + self._p == other._p and self._g == other._g and self._q == other._q + ) + + def parameters(self, backend: typing.Any = None) -> "DHParameters": + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + return ossl.load_dh_parameter_numbers(self) + + @property + def p(self) -> int: + return self._p + + @property + def g(self) -> int: + return self._g + + @property + def q(self) -> typing.Optional[int]: + return self._q + + +class DHPublicNumbers: + def __init__(self, y: int, parameter_numbers: DHParameterNumbers) -> None: + if not isinstance(y, int): + raise TypeError("y must be an integer.") + + if not isinstance(parameter_numbers, DHParameterNumbers): + raise TypeError( + "parameters must be an instance of DHParameterNumbers." + ) + + self._y = y + self._parameter_numbers = parameter_numbers + + def __eq__(self, other: object) -> bool: + if not isinstance(other, DHPublicNumbers): + return NotImplemented + + return ( + self._y == other._y + and self._parameter_numbers == other._parameter_numbers + ) + + def public_key(self, backend: typing.Any = None) -> "DHPublicKey": + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + return ossl.load_dh_public_numbers(self) + + @property + def y(self) -> int: + return self._y + + @property + def parameter_numbers(self) -> DHParameterNumbers: + return self._parameter_numbers + + +class DHPrivateNumbers: + def __init__(self, x: int, public_numbers: DHPublicNumbers) -> None: + if not isinstance(x, int): + raise TypeError("x must be an integer.") + + if not isinstance(public_numbers, DHPublicNumbers): + raise TypeError( + "public_numbers must be an instance of " "DHPublicNumbers." + ) + + self._x = x + self._public_numbers = public_numbers + + def __eq__(self, other: object) -> bool: + if not isinstance(other, DHPrivateNumbers): + return NotImplemented + + return ( + self._x == other._x + and self._public_numbers == other._public_numbers + ) + + def private_key(self, backend: typing.Any = None) -> "DHPrivateKey": + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + return ossl.load_dh_private_numbers(self) + + @property + def public_numbers(self) -> DHPublicNumbers: + return self._public_numbers + + @property + def x(self) -> int: + return self._x + + +class DHParameters(metaclass=abc.ABCMeta): + @abc.abstractmethod + def generate_private_key(self) -> "DHPrivateKey": + """ + Generates and returns a DHPrivateKey. 
+ """ + + @abc.abstractmethod + def parameter_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.ParameterFormat, + ) -> bytes: + """ + Returns the parameters serialized as bytes. + """ + + @abc.abstractmethod + def parameter_numbers(self) -> DHParameterNumbers: + """ + Returns a DHParameterNumbers. + """ + + +DHParametersWithSerialization = DHParameters + + +class DHPublicKey(metaclass=abc.ABCMeta): + @abc.abstractproperty + def key_size(self) -> int: + """ + The bit length of the prime modulus. + """ + + @abc.abstractmethod + def parameters(self) -> DHParameters: + """ + The DHParameters object associated with this public key. + """ + + @abc.abstractmethod + def public_numbers(self) -> DHPublicNumbers: + """ + Returns a DHPublicNumbers. + """ + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + +DHPublicKeyWithSerialization = DHPublicKey + + +class DHPrivateKey(metaclass=abc.ABCMeta): + @abc.abstractproperty + def key_size(self) -> int: + """ + The bit length of the prime modulus. + """ + + @abc.abstractmethod + def public_key(self) -> DHPublicKey: + """ + The DHPublicKey associated with this private key. + """ + + @abc.abstractmethod + def parameters(self) -> DHParameters: + """ + The DHParameters object associated with this private key. + """ + + @abc.abstractmethod + def exchange(self, peer_public_key: DHPublicKey) -> bytes: + """ + Given peer's DHPublicKey, carry out the key exchange and + return shared key as bytes. + """ + + @abc.abstractmethod + def private_numbers(self) -> DHPrivateNumbers: + """ + Returns a DHPrivateNumbers. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + +DHPrivateKeyWithSerialization = DHPrivateKey diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py new file mode 100644 index 00000000..5e587097 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py @@ -0,0 +1,288 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc +import typing + +from cryptography.hazmat.primitives import _serialization, hashes +from cryptography.hazmat.primitives.asymmetric import ( + utils as asym_utils, +) + + +class DSAParameters(metaclass=abc.ABCMeta): + @abc.abstractmethod + def generate_private_key(self) -> "DSAPrivateKey": + """ + Generates and returns a DSAPrivateKey. + """ + + @abc.abstractmethod + def parameter_numbers(self) -> "DSAParameterNumbers": + """ + Returns a DSAParameterNumbers. + """ + + +DSAParametersWithNumbers = DSAParameters + + +class DSAPrivateKey(metaclass=abc.ABCMeta): + @abc.abstractproperty + def key_size(self) -> int: + """ + The bit length of the prime modulus. + """ + + @abc.abstractmethod + def public_key(self) -> "DSAPublicKey": + """ + The DSAPublicKey associated with this private key. + """ + + @abc.abstractmethod + def parameters(self) -> DSAParameters: + """ + The DSAParameters object associated with this private key. 
+        """
+
+    @abc.abstractmethod
+    def sign(
+        self,
+        data: bytes,
+        algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm],
+    ) -> bytes:
+        """
+        Signs the data
+        """
+
+    @abc.abstractmethod
+    def private_numbers(self) -> "DSAPrivateNumbers":
+        """
+        Returns a DSAPrivateNumbers.
+        """
+
+    @abc.abstractmethod
+    def private_bytes(
+        self,
+        encoding: _serialization.Encoding,
+        format: _serialization.PrivateFormat,
+        encryption_algorithm: _serialization.KeySerializationEncryption,
+    ) -> bytes:
+        """
+        Returns the key serialized as bytes.
+        """
+
+
+DSAPrivateKeyWithSerialization = DSAPrivateKey
+
+
+class DSAPublicKey(metaclass=abc.ABCMeta):
+    @abc.abstractproperty
+    def key_size(self) -> int:
+        """
+        The bit length of the prime modulus.
+        """
+
+    @abc.abstractmethod
+    def parameters(self) -> DSAParameters:
+        """
+        The DSAParameters object associated with this public key.
+        """
+
+    @abc.abstractmethod
+    def public_numbers(self) -> "DSAPublicNumbers":
+        """
+        Returns a DSAPublicNumbers.
+        """
+
+    @abc.abstractmethod
+    def public_bytes(
+        self,
+        encoding: _serialization.Encoding,
+        format: _serialization.PublicFormat,
+    ) -> bytes:
+        """
+        Returns the key serialized as bytes.
+        """
+
+    @abc.abstractmethod
+    def verify(
+        self,
+        signature: bytes,
+        data: bytes,
+        algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm],
+    ) -> None:
+        """
+        Verifies the signature of the data.
+        """
+
+
+DSAPublicKeyWithSerialization = DSAPublicKey
+
+
+class DSAParameterNumbers:
+    def __init__(self, p: int, q: int, g: int):
+        if (
+            not isinstance(p, int)
+            or not isinstance(q, int)
+            or not isinstance(g, int)
+        ):
+            raise TypeError(
+                "DSAParameterNumbers p, q, and g arguments must be integers."
+            )
+
+        self._p = p
+        self._q = q
+        self._g = g
+
+    @property
+    def p(self) -> int:
+        return self._p
+
+    @property
+    def q(self) -> int:
+        return self._q
+
+    @property
+    def g(self) -> int:
+        return self._g
+
+    def parameters(self, backend: typing.Any = None) -> DSAParameters:
+        from cryptography.hazmat.backends.openssl.backend import (
+            backend as ossl,
+        )
+
+        return ossl.load_dsa_parameter_numbers(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, DSAParameterNumbers):
+            return NotImplemented
+
+        return self.p == other.p and self.q == other.q and self.g == other.g
+
+    def __repr__(self) -> str:
+        return (
+            "<DSAParameterNumbers(p={self.p}, q={self.q}, "
+            "g={self.g})>".format(self=self)
+        )
+
+
+class DSAPublicNumbers:
+    def __init__(self, y: int, parameter_numbers: DSAParameterNumbers):
+        if not isinstance(y, int):
+            raise TypeError("DSAPublicNumbers y argument must be an integer.")
+
+        if not isinstance(parameter_numbers, DSAParameterNumbers):
+            raise TypeError(
+                "parameter_numbers must be a DSAParameterNumbers instance."
+            )
+
+        self._y = y
+        self._parameter_numbers = parameter_numbers
+
+    @property
+    def y(self) -> int:
+        return self._y
+
+    @property
+    def parameter_numbers(self) -> DSAParameterNumbers:
+        return self._parameter_numbers
+
+    def public_key(self, backend: typing.Any = None) -> DSAPublicKey:
+        from cryptography.hazmat.backends.openssl.backend import (
+            backend as ossl,
+        )
+
+        return ossl.load_dsa_public_numbers(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, DSAPublicNumbers):
+            return NotImplemented
+
+        return (
+            self.y == other.y
+            and self.parameter_numbers == other.parameter_numbers
+        )
+
+    def __repr__(self) -> str:
+        return (
+            "<DSAPublicNumbers(y={self.y}, "
+            "parameter_numbers={self.parameter_numbers})>".format(self=self)
+        )
+
+
+class DSAPrivateNumbers:
+    def __init__(self, x: int, public_numbers: DSAPublicNumbers):
+        if not isinstance(x, int):
+            raise TypeError("DSAPrivateNumbers x argument must be an integer.")
+
+        if not isinstance(public_numbers, DSAPublicNumbers):
+            raise TypeError(
+                "public_numbers must be a DSAPublicNumbers instance."
+            )
+        self._public_numbers = public_numbers
+        self._x = x
+
+    @property
+    def x(self) -> int:
+        return self._x
+
+    @property
+    def public_numbers(self) -> DSAPublicNumbers:
+        return self._public_numbers
+
+    def private_key(self, backend: typing.Any = None) -> DSAPrivateKey:
+        from cryptography.hazmat.backends.openssl.backend import (
+            backend as ossl,
+        )
+
+        return ossl.load_dsa_private_numbers(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, DSAPrivateNumbers):
+            return NotImplemented
+
+        return (
+            self.x == other.x and self.public_numbers == other.public_numbers
+        )
+
+
+def generate_parameters(
+    key_size: int, backend: typing.Any = None
+) -> DSAParameters:
+    from cryptography.hazmat.backends.openssl.backend import backend as ossl
+
+    return ossl.generate_dsa_parameters(key_size)
+
+
+def generate_private_key(
+    key_size: int, backend: typing.Any = None
+) -> DSAPrivateKey:
+    from cryptography.hazmat.backends.openssl.backend import backend as ossl
+
+    return ossl.generate_dsa_private_key_and_parameters(key_size)
+
+
+def _check_dsa_parameters(parameters: DSAParameterNumbers) -> None:
+    if parameters.p.bit_length() not in [1024, 2048, 3072, 4096]:
+        raise ValueError(
+            "p must be exactly 1024, 2048, 3072, or 4096 bits long"
+        )
+    if parameters.q.bit_length() not in [160, 224, 256]:
+        raise ValueError("q must be exactly 160, 224, or 256 bits long")
+
+    if not (1 < parameters.g < parameters.p):
+        raise ValueError("g, p don't satisfy 1 < g < p.")
+
+
+def _check_dsa_private_numbers(numbers: DSAPrivateNumbers) -> None:
+    parameters = numbers.public_numbers.parameter_numbers
+    _check_dsa_parameters(parameters)
+    if numbers.x <= 0 or numbers.x >= parameters.q:
+        raise ValueError("x must be > 0 and < q.")
+
+    if numbers.public_numbers.y != pow(parameters.g, numbers.x, parameters.p):
+        raise ValueError("y must be equal to (g ** x % p).")
diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/ec.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/ec.py
new file mode 100644
index 00000000..3aaa382a
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/ec.py
@@ -0,0 +1,523 @@
+# This file is dual licensed under the terms of the Apache License, Version
+# 2.0, and the BSD License. See the LICENSE file in the root of this repository
+# for complete details.
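The dh.py hunk above covers the full Diffie-Hellman flow: shared parameters, one private key per party, and exchange() to derive the shared secret. An end-to-end sketch, assuming upstream-compatible behavior; the raw secret should be passed through a KDF (HKDF here) rather than used directly as a symmetric key:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import dh
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Parameter generation is slow; deployments typically reuse
    # well-known parameters rather than generating them per session.
    parameters = dh.generate_parameters(generator=2, key_size=2048)

    server_key = parameters.generate_private_key()
    client_key = parameters.generate_private_key()

    # Each side combines its own private key with the peer's public key
    # and arrives at the same shared secret.
    shared_1 = server_key.exchange(client_key.public_key())
    shared_2 = client_key.exchange(server_key.public_key())
    assert shared_1 == shared_2

    # Stretch the secret into a fixed-length symmetric key.
    sym_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"handshake"
    ).derive(shared_1)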
+ + +import abc +import typing +import warnings + +from cryptography import utils +from cryptography.hazmat._oid import ObjectIdentifier +from cryptography.hazmat.primitives import _serialization, hashes +from cryptography.hazmat.primitives.asymmetric import ( + utils as asym_utils, +) + + +class EllipticCurveOID: + SECP192R1 = ObjectIdentifier("1.2.840.10045.3.1.1") + SECP224R1 = ObjectIdentifier("1.3.132.0.33") + SECP256K1 = ObjectIdentifier("1.3.132.0.10") + SECP256R1 = ObjectIdentifier("1.2.840.10045.3.1.7") + SECP384R1 = ObjectIdentifier("1.3.132.0.34") + SECP521R1 = ObjectIdentifier("1.3.132.0.35") + BRAINPOOLP256R1 = ObjectIdentifier("1.3.36.3.3.2.8.1.1.7") + BRAINPOOLP384R1 = ObjectIdentifier("1.3.36.3.3.2.8.1.1.11") + BRAINPOOLP512R1 = ObjectIdentifier("1.3.36.3.3.2.8.1.1.13") + SECT163K1 = ObjectIdentifier("1.3.132.0.1") + SECT163R2 = ObjectIdentifier("1.3.132.0.15") + SECT233K1 = ObjectIdentifier("1.3.132.0.26") + SECT233R1 = ObjectIdentifier("1.3.132.0.27") + SECT283K1 = ObjectIdentifier("1.3.132.0.16") + SECT283R1 = ObjectIdentifier("1.3.132.0.17") + SECT409K1 = ObjectIdentifier("1.3.132.0.36") + SECT409R1 = ObjectIdentifier("1.3.132.0.37") + SECT571K1 = ObjectIdentifier("1.3.132.0.38") + SECT571R1 = ObjectIdentifier("1.3.132.0.39") + + +class EllipticCurve(metaclass=abc.ABCMeta): + @abc.abstractproperty + def name(self) -> str: + """ + The name of the curve. e.g. secp256r1. + """ + + @abc.abstractproperty + def key_size(self) -> int: + """ + Bit size of a secret scalar for the curve. + """ + + +class EllipticCurveSignatureAlgorithm(metaclass=abc.ABCMeta): + @abc.abstractproperty + def algorithm( + self, + ) -> typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm]: + """ + The digest algorithm used with this signature. + """ + + +class EllipticCurvePrivateKey(metaclass=abc.ABCMeta): + @abc.abstractmethod + def exchange( + self, algorithm: "ECDH", peer_public_key: "EllipticCurvePublicKey" + ) -> bytes: + """ + Performs a key exchange operation using the provided algorithm with the + provided peer's public key. + """ + + @abc.abstractmethod + def public_key(self) -> "EllipticCurvePublicKey": + """ + The EllipticCurvePublicKey for this private key. + """ + + @abc.abstractproperty + def curve(self) -> EllipticCurve: + """ + The EllipticCurve that this key is on. + """ + + @abc.abstractproperty + def key_size(self) -> int: + """ + Bit size of a secret scalar for the curve. + """ + + @abc.abstractmethod + def sign( + self, + data: bytes, + signature_algorithm: EllipticCurveSignatureAlgorithm, + ) -> bytes: + """ + Signs the data + """ + + @abc.abstractmethod + def private_numbers(self) -> "EllipticCurvePrivateNumbers": + """ + Returns an EllipticCurvePrivateNumbers. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + +EllipticCurvePrivateKeyWithSerialization = EllipticCurvePrivateKey + + +class EllipticCurvePublicKey(metaclass=abc.ABCMeta): + @abc.abstractproperty + def curve(self) -> EllipticCurve: + """ + The EllipticCurve that this key is on. + """ + + @abc.abstractproperty + def key_size(self) -> int: + """ + Bit size of a secret scalar for the curve. + """ + + @abc.abstractmethod + def public_numbers(self) -> "EllipticCurvePublicNumbers": + """ + Returns an EllipticCurvePublicNumbers. 
+ """ + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + @abc.abstractmethod + def verify( + self, + signature: bytes, + data: bytes, + signature_algorithm: EllipticCurveSignatureAlgorithm, + ) -> None: + """ + Verifies the signature of the data. + """ + + @classmethod + def from_encoded_point( + cls, curve: EllipticCurve, data: bytes + ) -> "EllipticCurvePublicKey": + utils._check_bytes("data", data) + + if not isinstance(curve, EllipticCurve): + raise TypeError("curve must be an EllipticCurve instance") + + if len(data) == 0: + raise ValueError("data must not be an empty byte string") + + if data[0] not in [0x02, 0x03, 0x04]: + raise ValueError("Unsupported elliptic curve point type") + + from cryptography.hazmat.backends.openssl.backend import backend + + return backend.load_elliptic_curve_public_bytes(curve, data) + + +EllipticCurvePublicKeyWithSerialization = EllipticCurvePublicKey + + +class SECT571R1(EllipticCurve): + name = "sect571r1" + key_size = 570 + + +class SECT409R1(EllipticCurve): + name = "sect409r1" + key_size = 409 + + +class SECT283R1(EllipticCurve): + name = "sect283r1" + key_size = 283 + + +class SECT233R1(EllipticCurve): + name = "sect233r1" + key_size = 233 + + +class SECT163R2(EllipticCurve): + name = "sect163r2" + key_size = 163 + + +class SECT571K1(EllipticCurve): + name = "sect571k1" + key_size = 571 + + +class SECT409K1(EllipticCurve): + name = "sect409k1" + key_size = 409 + + +class SECT283K1(EllipticCurve): + name = "sect283k1" + key_size = 283 + + +class SECT233K1(EllipticCurve): + name = "sect233k1" + key_size = 233 + + +class SECT163K1(EllipticCurve): + name = "sect163k1" + key_size = 163 + + +class SECP521R1(EllipticCurve): + name = "secp521r1" + key_size = 521 + + +class SECP384R1(EllipticCurve): + name = "secp384r1" + key_size = 384 + + +class SECP256R1(EllipticCurve): + name = "secp256r1" + key_size = 256 + + +class SECP256K1(EllipticCurve): + name = "secp256k1" + key_size = 256 + + +class SECP224R1(EllipticCurve): + name = "secp224r1" + key_size = 224 + + +class SECP192R1(EllipticCurve): + name = "secp192r1" + key_size = 192 + + +class BrainpoolP256R1(EllipticCurve): + name = "brainpoolP256r1" + key_size = 256 + + +class BrainpoolP384R1(EllipticCurve): + name = "brainpoolP384r1" + key_size = 384 + + +class BrainpoolP512R1(EllipticCurve): + name = "brainpoolP512r1" + key_size = 512 + + +_CURVE_TYPES: typing.Dict[str, typing.Type[EllipticCurve]] = { + "prime192v1": SECP192R1, + "prime256v1": SECP256R1, + "secp192r1": SECP192R1, + "secp224r1": SECP224R1, + "secp256r1": SECP256R1, + "secp384r1": SECP384R1, + "secp521r1": SECP521R1, + "secp256k1": SECP256K1, + "sect163k1": SECT163K1, + "sect233k1": SECT233K1, + "sect283k1": SECT283K1, + "sect409k1": SECT409K1, + "sect571k1": SECT571K1, + "sect163r2": SECT163R2, + "sect233r1": SECT233R1, + "sect283r1": SECT283R1, + "sect409r1": SECT409R1, + "sect571r1": SECT571R1, + "brainpoolP256r1": BrainpoolP256R1, + "brainpoolP384r1": BrainpoolP384R1, + "brainpoolP512r1": BrainpoolP512R1, +} + + +class ECDSA(EllipticCurveSignatureAlgorithm): + def __init__( + self, + algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm], + ): + self._algorithm = algorithm + + @property + def algorithm( + self, + ) -> typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm]: + return self._algorithm + + +def generate_private_key( + curve: EllipticCurve, backend: typing.Any = None 
+) -> EllipticCurvePrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend as ossl + + return ossl.generate_elliptic_curve_private_key(curve) + + +def derive_private_key( + private_value: int, + curve: EllipticCurve, + backend: typing.Any = None, +) -> EllipticCurvePrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend as ossl + + if not isinstance(private_value, int): + raise TypeError("private_value must be an integer type.") + + if private_value <= 0: + raise ValueError("private_value must be a positive integer.") + + if not isinstance(curve, EllipticCurve): + raise TypeError("curve must provide the EllipticCurve interface.") + + return ossl.derive_elliptic_curve_private_key(private_value, curve) + + +class EllipticCurvePublicNumbers: + def __init__(self, x: int, y: int, curve: EllipticCurve): + if not isinstance(x, int) or not isinstance(y, int): + raise TypeError("x and y must be integers.") + + if not isinstance(curve, EllipticCurve): + raise TypeError("curve must provide the EllipticCurve interface.") + + self._y = y + self._x = x + self._curve = curve + + def public_key(self, backend: typing.Any = None) -> EllipticCurvePublicKey: + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + return ossl.load_elliptic_curve_public_numbers(self) + + def encode_point(self) -> bytes: + warnings.warn( + "encode_point has been deprecated on EllipticCurvePublicNumbers" + " and will be removed in a future version. Please use " + "EllipticCurvePublicKey.public_bytes to obtain both " + "compressed and uncompressed point encoding.", + utils.PersistentlyDeprecated2019, + stacklevel=2, + ) + # key_size is in bits. Convert to bytes and round up + byte_length = (self.curve.key_size + 7) // 8 + return ( + b"\x04" + + utils.int_to_bytes(self.x, byte_length) + + utils.int_to_bytes(self.y, byte_length) + ) + + @classmethod + def from_encoded_point( + cls, curve: EllipticCurve, data: bytes + ) -> "EllipticCurvePublicNumbers": + if not isinstance(curve, EllipticCurve): + raise TypeError("curve must be an EllipticCurve instance") + + warnings.warn( + "Support for unsafe construction of public numbers from " + "encoded data will be removed in a future version. " + "Please use EllipticCurvePublicKey.from_encoded_point", + utils.PersistentlyDeprecated2019, + stacklevel=2, + ) + + if data.startswith(b"\x04"): + # key_size is in bits. 
+            # Convert to bytes and round up
+            byte_length = (curve.key_size + 7) // 8
+            if len(data) == 2 * byte_length + 1:
+                x = int.from_bytes(data[1 : byte_length + 1], "big")
+                y = int.from_bytes(data[byte_length + 1 :], "big")
+                return cls(x, y, curve)
+            else:
+                raise ValueError("Invalid elliptic curve point data length")
+        else:
+            raise ValueError("Unsupported elliptic curve point type")
+
+    @property
+    def curve(self) -> EllipticCurve:
+        return self._curve
+
+    @property
+    def x(self) -> int:
+        return self._x
+
+    @property
+    def y(self) -> int:
+        return self._y
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, EllipticCurvePublicNumbers):
+            return NotImplemented
+
+        return (
+            self.x == other.x
+            and self.y == other.y
+            and self.curve.name == other.curve.name
+            and self.curve.key_size == other.curve.key_size
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.x, self.y, self.curve.name, self.curve.key_size))
+
+    def __repr__(self) -> str:
+        return (
+            "<EllipticCurvePublicNumbers(curve={0.curve.name}, x={0.x}, "
+            "y={0.y})>".format(self)
+        )
+
+
+class EllipticCurvePrivateNumbers:
+    def __init__(
+        self, private_value: int, public_numbers: EllipticCurvePublicNumbers
+    ):
+        if not isinstance(private_value, int):
+            raise TypeError("private_value must be an integer.")
+
+        if not isinstance(public_numbers, EllipticCurvePublicNumbers):
+            raise TypeError(
+                "public_numbers must be an EllipticCurvePublicNumbers "
+                "instance."
+            )
+
+        self._private_value = private_value
+        self._public_numbers = public_numbers
+
+    def private_key(
+        self, backend: typing.Any = None
+    ) -> EllipticCurvePrivateKey:
+        from cryptography.hazmat.backends.openssl.backend import (
+            backend as ossl,
+        )
+
+        return ossl.load_elliptic_curve_private_numbers(self)
+
+    @property
+    def private_value(self) -> int:
+        return self._private_value
+
+    @property
+    def public_numbers(self) -> EllipticCurvePublicNumbers:
+        return self._public_numbers
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, EllipticCurvePrivateNumbers):
+            return NotImplemented
+
+        return (
+            self.private_value == other.private_value
+            and self.public_numbers == other.public_numbers
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.private_value, self.public_numbers))
+
+
+class ECDH:
+    pass
+
+
+_OID_TO_CURVE = {
+    EllipticCurveOID.SECP192R1: SECP192R1,
+    EllipticCurveOID.SECP224R1: SECP224R1,
+    EllipticCurveOID.SECP256K1: SECP256K1,
+    EllipticCurveOID.SECP256R1: SECP256R1,
+    EllipticCurveOID.SECP384R1: SECP384R1,
+    EllipticCurveOID.SECP521R1: SECP521R1,
+    EllipticCurveOID.BRAINPOOLP256R1: BrainpoolP256R1,
+    EllipticCurveOID.BRAINPOOLP384R1: BrainpoolP384R1,
+    EllipticCurveOID.BRAINPOOLP512R1: BrainpoolP512R1,
+    EllipticCurveOID.SECT163K1: SECT163K1,
+    EllipticCurveOID.SECT163R2: SECT163R2,
+    EllipticCurveOID.SECT233K1: SECT233K1,
+    EllipticCurveOID.SECT233R1: SECT233R1,
+    EllipticCurveOID.SECT283K1: SECT283K1,
+    EllipticCurveOID.SECT283R1: SECT283R1,
+    EllipticCurveOID.SECT409K1: SECT409K1,
+    EllipticCurveOID.SECT409R1: SECT409R1,
+    EllipticCurveOID.SECT571K1: SECT571K1,
+    EllipticCurveOID.SECT571R1: SECT571R1,
+}
+
+
+def get_curve_for_oid(oid: ObjectIdentifier) -> typing.Type[EllipticCurve]:
+    try:
+        return _OID_TO_CURVE[oid]
+    except KeyError:
+        raise LookupError(
+            "The provided object identifier has no matching elliptic "
+            "curve class"
+        )
diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py
new file mode 100644
index 00000000..43277028
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py @@ -0,0 +1,92 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc + +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.primitives import _serialization + + +_ED25519_KEY_SIZE = 32 +_ED25519_SIG_SIZE = 64 + + +class Ed25519PublicKey(metaclass=abc.ABCMeta): + @classmethod + def from_public_bytes(cls, data: bytes) -> "Ed25519PublicKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed25519_supported(): + raise UnsupportedAlgorithm( + "ed25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return backend.ed25519_load_public_bytes(data) + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + The serialized bytes of the public key. + """ + + @abc.abstractmethod + def verify(self, signature: bytes, data: bytes) -> None: + """ + Verify the signature. + """ + + +class Ed25519PrivateKey(metaclass=abc.ABCMeta): + @classmethod + def generate(cls) -> "Ed25519PrivateKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed25519_supported(): + raise UnsupportedAlgorithm( + "ed25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return backend.ed25519_generate_key() + + @classmethod + def from_private_bytes(cls, data: bytes) -> "Ed25519PrivateKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed25519_supported(): + raise UnsupportedAlgorithm( + "ed25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return backend.ed25519_load_private_bytes(data) + + @abc.abstractmethod + def public_key(self) -> Ed25519PublicKey: + """ + The Ed25519PublicKey derived from the private key. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + The serialized bytes of the private key. + """ + + @abc.abstractmethod + def sign(self, data: bytes) -> bytes: + """ + Signs the data. + """ diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/ed448.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/ed448.py new file mode 100644 index 00000000..27bc27c6 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/ed448.py @@ -0,0 +1,87 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
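The ed25519.py hunk above is the complete Ed25519 surface: 32-byte keys, 64-byte signatures, and no algorithm parameters to choose. A sign/verify sketch, assuming upstream-compatible behavior:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    private_key = Ed25519PrivateKey.generate()
    signature = private_key.sign(b"my authenticated message")  # 64 bytes

    public_key = private_key.public_key()
    try:
        # verify() returns None on success and raises on any mismatch.
        public_key.verify(signature, b"my authenticated message")
    except InvalidSignature:
        print("signature did not match")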
+ + +import abc + +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.primitives import _serialization + + +class Ed448PublicKey(metaclass=abc.ABCMeta): + @classmethod + def from_public_bytes(cls, data: bytes) -> "Ed448PublicKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed448_supported(): + raise UnsupportedAlgorithm( + "ed448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return backend.ed448_load_public_bytes(data) + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + The serialized bytes of the public key. + """ + + @abc.abstractmethod + def verify(self, signature: bytes, data: bytes) -> None: + """ + Verify the signature. + """ + + +class Ed448PrivateKey(metaclass=abc.ABCMeta): + @classmethod + def generate(cls) -> "Ed448PrivateKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed448_supported(): + raise UnsupportedAlgorithm( + "ed448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + return backend.ed448_generate_key() + + @classmethod + def from_private_bytes(cls, data: bytes) -> "Ed448PrivateKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed448_supported(): + raise UnsupportedAlgorithm( + "ed448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return backend.ed448_load_private_bytes(data) + + @abc.abstractmethod + def public_key(self) -> Ed448PublicKey: + """ + The Ed448PublicKey derived from the private key. + """ + + @abc.abstractmethod + def sign(self, data: bytes) -> bytes: + """ + Signs the data. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + The serialized bytes of the private key. + """ diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/padding.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/padding.py new file mode 100644 index 00000000..dd3c648f --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/padding.py @@ -0,0 +1,101 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc +import typing + +from cryptography.hazmat.primitives import hashes +from cryptography.hazmat.primitives._asymmetric import ( + AsymmetricPadding as AsymmetricPadding, +) +from cryptography.hazmat.primitives.asymmetric import rsa + + +class PKCS1v15(AsymmetricPadding): + name = "EMSA-PKCS1-v1_5" + + +class _MaxLength: + "Sentinel value for `MAX_LENGTH`." + + +class _Auto: + "Sentinel value for `AUTO`." + + +class _DigestLength: + "Sentinel value for `DIGEST_LENGTH`." 
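PKCS1v15 above is the deterministic RSASSA-PKCS1-v1_5 signature padding; PSS, defined next, is the randomized scheme usually preferred for new designs. A sketch pairing PKCS1v15 with the RSA keys from rsa.py later in this diff, assuming upstream-compatible behavior:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(
        public_exponent=65537, key_size=2048
    )

    # Sign with PKCS#1 v1.5 padding over a SHA-256 digest.
    signature = private_key.sign(
        b"payload", padding.PKCS1v15(), hashes.SHA256()
    )

    # verify() raises cryptography.exceptions.InvalidSignature on failure.
    private_key.public_key().verify(
        signature, b"payload", padding.PKCS1v15(), hashes.SHA256()
    )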
+ + +class PSS(AsymmetricPadding): + MAX_LENGTH = _MaxLength() + AUTO = _Auto() + DIGEST_LENGTH = _DigestLength() + name = "EMSA-PSS" + _salt_length: typing.Union[int, _MaxLength, _Auto, _DigestLength] + + def __init__( + self, + mgf: "MGF", + salt_length: typing.Union[int, _MaxLength, _Auto, _DigestLength], + ) -> None: + self._mgf = mgf + + if not isinstance( + salt_length, (int, _MaxLength, _Auto, _DigestLength) + ): + raise TypeError( + "salt_length must be an integer, MAX_LENGTH, " + "DIGEST_LENGTH, or AUTO" + ) + + if isinstance(salt_length, int) and salt_length < 0: + raise ValueError("salt_length must be zero or greater.") + + self._salt_length = salt_length + + +class OAEP(AsymmetricPadding): + name = "EME-OAEP" + + def __init__( + self, + mgf: "MGF", + algorithm: hashes.HashAlgorithm, + label: typing.Optional[bytes], + ): + if not isinstance(algorithm, hashes.HashAlgorithm): + raise TypeError("Expected instance of hashes.HashAlgorithm.") + + self._mgf = mgf + self._algorithm = algorithm + self._label = label + + +class MGF(metaclass=abc.ABCMeta): + _algorithm: hashes.HashAlgorithm + + +class MGF1(MGF): + MAX_LENGTH = _MaxLength() + + def __init__(self, algorithm: hashes.HashAlgorithm): + if not isinstance(algorithm, hashes.HashAlgorithm): + raise TypeError("Expected instance of hashes.HashAlgorithm.") + + self._algorithm = algorithm + + +def calculate_max_pss_salt_length( + key: typing.Union["rsa.RSAPrivateKey", "rsa.RSAPublicKey"], + hash_algorithm: hashes.HashAlgorithm, +) -> int: + if not isinstance(key, (rsa.RSAPrivateKey, rsa.RSAPublicKey)): + raise TypeError("key must be an RSA public or private key") + # bit length - 1 per RFC 3447 + emlen = (key.key_size + 6) // 8 + salt_length = emlen - hash_algorithm.digest_size - 2 + assert salt_length >= 0 + return salt_length diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/rsa.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/rsa.py new file mode 100644 index 00000000..5ffe767c --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/rsa.py @@ -0,0 +1,425 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc +import typing +from math import gcd + +from cryptography.hazmat.primitives import _serialization, hashes +from cryptography.hazmat.primitives._asymmetric import AsymmetricPadding +from cryptography.hazmat.primitives.asymmetric import ( + utils as asym_utils, +) + + +class RSAPrivateKey(metaclass=abc.ABCMeta): + @abc.abstractmethod + def decrypt(self, ciphertext: bytes, padding: AsymmetricPadding) -> bytes: + """ + Decrypts the provided ciphertext. + """ + + @abc.abstractproperty + def key_size(self) -> int: + """ + The bit length of the public modulus. + """ + + @abc.abstractmethod + def public_key(self) -> "RSAPublicKey": + """ + The RSAPublicKey associated with this private key. + """ + + @abc.abstractmethod + def sign( + self, + data: bytes, + padding: AsymmetricPadding, + algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm], + ) -> bytes: + """ + Signs the data. + """ + + @abc.abstractmethod + def private_numbers(self) -> "RSAPrivateNumbers": + """ + Returns an RSAPrivateNumbers. 
+ """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + +RSAPrivateKeyWithSerialization = RSAPrivateKey + + +class RSAPublicKey(metaclass=abc.ABCMeta): + @abc.abstractmethod + def encrypt(self, plaintext: bytes, padding: AsymmetricPadding) -> bytes: + """ + Encrypts the given plaintext. + """ + + @abc.abstractproperty + def key_size(self) -> int: + """ + The bit length of the public modulus. + """ + + @abc.abstractmethod + def public_numbers(self) -> "RSAPublicNumbers": + """ + Returns an RSAPublicNumbers + """ + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + @abc.abstractmethod + def verify( + self, + signature: bytes, + data: bytes, + padding: AsymmetricPadding, + algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm], + ) -> None: + """ + Verifies the signature of the data. + """ + + @abc.abstractmethod + def recover_data_from_signature( + self, + signature: bytes, + padding: AsymmetricPadding, + algorithm: typing.Optional[hashes.HashAlgorithm], + ) -> bytes: + """ + Recovers the original data from the signature. + """ + + +RSAPublicKeyWithSerialization = RSAPublicKey + + +def generate_private_key( + public_exponent: int, + key_size: int, + backend: typing.Any = None, +) -> RSAPrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend as ossl + + _verify_rsa_parameters(public_exponent, key_size) + return ossl.generate_rsa_private_key(public_exponent, key_size) + + +def _verify_rsa_parameters(public_exponent: int, key_size: int) -> None: + if public_exponent not in (3, 65537): + raise ValueError( + "public_exponent must be either 3 (for legacy compatibility) or " + "65537. Almost everyone should choose 65537 here!" + ) + + if key_size < 512: + raise ValueError("key_size must be at least 512-bits.") + + +def _check_private_key_components( + p: int, + q: int, + private_exponent: int, + dmp1: int, + dmq1: int, + iqmp: int, + public_exponent: int, + modulus: int, +) -> None: + if modulus < 3: + raise ValueError("modulus must be >= 3.") + + if p >= modulus: + raise ValueError("p must be < modulus.") + + if q >= modulus: + raise ValueError("q must be < modulus.") + + if dmp1 >= modulus: + raise ValueError("dmp1 must be < modulus.") + + if dmq1 >= modulus: + raise ValueError("dmq1 must be < modulus.") + + if iqmp >= modulus: + raise ValueError("iqmp must be < modulus.") + + if private_exponent >= modulus: + raise ValueError("private_exponent must be < modulus.") + + if public_exponent < 3 or public_exponent >= modulus: + raise ValueError("public_exponent must be >= 3 and < modulus.") + + if public_exponent & 1 == 0: + raise ValueError("public_exponent must be odd.") + + if dmp1 & 1 == 0: + raise ValueError("dmp1 must be odd.") + + if dmq1 & 1 == 0: + raise ValueError("dmq1 must be odd.") + + if p * q != modulus: + raise ValueError("p*q must equal modulus.") + + +def _check_public_key_components(e: int, n: int) -> None: + if n < 3: + raise ValueError("n must be >= 3.") + + if e < 3 or e >= n: + raise ValueError("e must be >= 3 and < n.") + + if e & 1 == 0: + raise ValueError("e must be odd.") + + +def _modinv(e: int, m: int) -> int: + """ + Modular Multiplicative Inverse. 
Returns x such that: (x*e) mod m == 1 + """ + x1, x2 = 1, 0 + a, b = e, m + while b > 0: + q, r = divmod(a, b) + xn = x1 - q * x2 + a, b, x1, x2 = b, r, x2, xn + return x1 % m + + +def rsa_crt_iqmp(p: int, q: int) -> int: + """ + Compute the CRT (q ** -1) % p value from RSA primes p and q. + """ + return _modinv(q, p) + + +def rsa_crt_dmp1(private_exponent: int, p: int) -> int: + """ + Compute the CRT private_exponent % (p - 1) value from the RSA + private_exponent (d) and p. + """ + return private_exponent % (p - 1) + + +def rsa_crt_dmq1(private_exponent: int, q: int) -> int: + """ + Compute the CRT private_exponent % (q - 1) value from the RSA + private_exponent (d) and q. + """ + return private_exponent % (q - 1) + + +# Controls the number of iterations rsa_recover_prime_factors will perform +# to obtain the prime factors. Each iteration increments by 2 so the actual +# maximum attempts is half this number. +_MAX_RECOVERY_ATTEMPTS = 1000 + + +def rsa_recover_prime_factors( + n: int, e: int, d: int +) -> typing.Tuple[int, int]: + """ + Compute factors p and q from the private exponent d. We assume that n has + no more than two factors. This function is adapted from code in PyCrypto. + """ + # See 8.2.2(i) in Handbook of Applied Cryptography. + ktot = d * e - 1 + # The quantity d*e-1 is a multiple of phi(n), even, + # and can be represented as t*2^s. + t = ktot + while t % 2 == 0: + t = t // 2 + # Cycle through all multiplicative inverses in Zn. + # The algorithm is non-deterministic, but there is a 50% chance + # any candidate a leads to successful factoring. + # See "Digitalized Signatures and Public Key Functions as Intractable + # as Factorization", M. Rabin, 1979 + spotted = False + a = 2 + while not spotted and a < _MAX_RECOVERY_ATTEMPTS: + k = t + # Cycle through all values a^{t*2^i}=a^k + while k < ktot: + cand = pow(a, k, n) + # Check if a^k is a non-trivial root of unity (mod n) + if cand != 1 and cand != (n - 1) and pow(cand, 2, n) == 1: + # We have found a number such that (cand-1)(cand+1)=0 (mod n). + # Either of the terms divides n. + p = gcd(cand + 1, n) + spotted = True + break + k *= 2 + # This value was not any good... let's try another! + a += 2 + if not spotted: + raise ValueError("Unable to compute factors p and q from exponent d.") + # Found ! + q, r = divmod(n, p) + assert r == 0 + p, q = sorted((p, q), reverse=True) + return (p, q) + + +class RSAPrivateNumbers: + def __init__( + self, + p: int, + q: int, + d: int, + dmp1: int, + dmq1: int, + iqmp: int, + public_numbers: "RSAPublicNumbers", + ): + if ( + not isinstance(p, int) + or not isinstance(q, int) + or not isinstance(d, int) + or not isinstance(dmp1, int) + or not isinstance(dmq1, int) + or not isinstance(iqmp, int) + ): + raise TypeError( + "RSAPrivateNumbers p, q, d, dmp1, dmq1, iqmp arguments must" + " all be an integers." + ) + + if not isinstance(public_numbers, RSAPublicNumbers): + raise TypeError( + "RSAPrivateNumbers public_numbers must be an RSAPublicNumbers" + " instance." 
+            )
+
+        self._p = p
+        self._q = q
+        self._d = d
+        self._dmp1 = dmp1
+        self._dmq1 = dmq1
+        self._iqmp = iqmp
+        self._public_numbers = public_numbers
+
+    @property
+    def p(self) -> int:
+        return self._p
+
+    @property
+    def q(self) -> int:
+        return self._q
+
+    @property
+    def d(self) -> int:
+        return self._d
+
+    @property
+    def dmp1(self) -> int:
+        return self._dmp1
+
+    @property
+    def dmq1(self) -> int:
+        return self._dmq1
+
+    @property
+    def iqmp(self) -> int:
+        return self._iqmp
+
+    @property
+    def public_numbers(self) -> "RSAPublicNumbers":
+        return self._public_numbers
+
+    def private_key(self, backend: typing.Any = None) -> RSAPrivateKey:
+        from cryptography.hazmat.backends.openssl.backend import (
+            backend as ossl,
+        )
+
+        return ossl.load_rsa_private_numbers(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, RSAPrivateNumbers):
+            return NotImplemented
+
+        return (
+            self.p == other.p
+            and self.q == other.q
+            and self.d == other.d
+            and self.dmp1 == other.dmp1
+            and self.dmq1 == other.dmq1
+            and self.iqmp == other.iqmp
+            and self.public_numbers == other.public_numbers
+        )
+
+    def __hash__(self) -> int:
+        return hash(
+            (
+                self.p,
+                self.q,
+                self.d,
+                self.dmp1,
+                self.dmq1,
+                self.iqmp,
+                self.public_numbers,
+            )
+        )
+
+
+class RSAPublicNumbers:
+    def __init__(self, e: int, n: int):
+        if not isinstance(e, int) or not isinstance(n, int):
+            raise TypeError("RSAPublicNumbers arguments must be integers.")
+
+        self._e = e
+        self._n = n
+
+    @property
+    def e(self) -> int:
+        return self._e
+
+    @property
+    def n(self) -> int:
+        return self._n
+
+    def public_key(self, backend: typing.Any = None) -> RSAPublicKey:
+        from cryptography.hazmat.backends.openssl.backend import (
+            backend as ossl,
+        )
+
+        return ossl.load_rsa_public_numbers(self)
+
+    def __repr__(self) -> str:
+        return "<RSAPublicNumbers(e={0.e}, n={0.n})>".format(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, RSAPublicNumbers):
+            return NotImplemented
+
+        return self.e == other.e and self.n == other.n
+
+    def __hash__(self) -> int:
+        return hash((self.e, self.n))
diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/types.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/types.py
new file mode 100644
index 00000000..d4978152
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/types.py
@@ -0,0 +1,69 @@
+# This file is dual licensed under the terms of the Apache License, Version
+# 2.0, and the BSD License. See the LICENSE file in the root of this repository
+# for complete details.
+
+import typing
+
+from cryptography.hazmat.primitives.asymmetric import (
+    dh,
+    dsa,
+    ec,
+    ed25519,
+    ed448,
+    rsa,
+    x25519,
+    x448,
+)
+
+
+# Every asymmetric key type
+PUBLIC_KEY_TYPES = typing.Union[
+    dh.DHPublicKey,
+    dsa.DSAPublicKey,
+    rsa.RSAPublicKey,
+    ec.EllipticCurvePublicKey,
+    ed25519.Ed25519PublicKey,
+    ed448.Ed448PublicKey,
+    x25519.X25519PublicKey,
+    x448.X448PublicKey,
+]
+# Every asymmetric key type
+PRIVATE_KEY_TYPES = typing.Union[
+    dh.DHPrivateKey,
+    ed25519.Ed25519PrivateKey,
+    ed448.Ed448PrivateKey,
+    rsa.RSAPrivateKey,
+    dsa.DSAPrivateKey,
+    ec.EllipticCurvePrivateKey,
+    x25519.X25519PrivateKey,
+    x448.X448PrivateKey,
+]
+# Just the key types we allow to be used for x509 signing.
This mirrors +# the certificate public key types +CERTIFICATE_PRIVATE_KEY_TYPES = typing.Union[ + ed25519.Ed25519PrivateKey, + ed448.Ed448PrivateKey, + rsa.RSAPrivateKey, + dsa.DSAPrivateKey, + ec.EllipticCurvePrivateKey, +] +# Just the key types we allow to be used for x509 signing. This mirrors +# the certificate private key types +CERTIFICATE_ISSUER_PUBLIC_KEY_TYPES = typing.Union[ + dsa.DSAPublicKey, + rsa.RSAPublicKey, + ec.EllipticCurvePublicKey, + ed25519.Ed25519PublicKey, + ed448.Ed448PublicKey, +] +# This type removes DHPublicKey. x448/x25519 can be a public key +# but cannot be used in signing so they are allowed here. +CERTIFICATE_PUBLIC_KEY_TYPES = typing.Union[ + dsa.DSAPublicKey, + rsa.RSAPublicKey, + ec.EllipticCurvePublicKey, + ed25519.Ed25519PublicKey, + ed448.Ed448PublicKey, + x25519.X25519PublicKey, + x448.X448PublicKey, +] diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/utils.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/utils.py new file mode 100644 index 00000000..638ecb35 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/utils.py @@ -0,0 +1,24 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +from cryptography.hazmat.bindings._rust import asn1 +from cryptography.hazmat.primitives import hashes + + +decode_dss_signature = asn1.decode_dss_signature +encode_dss_signature = asn1.encode_dss_signature + + +class Prehashed: + def __init__(self, algorithm: hashes.HashAlgorithm): + if not isinstance(algorithm, hashes.HashAlgorithm): + raise TypeError("Expected instance of HashAlgorithm.") + + self._algorithm = algorithm + self._digest_size = algorithm.digest_size + + @property + def digest_size(self) -> int: + return self._digest_size diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/x25519.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/x25519.py new file mode 100644 index 00000000..690af78c --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/x25519.py @@ -0,0 +1,81 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc + +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.primitives import _serialization + + +class X25519PublicKey(metaclass=abc.ABCMeta): + @classmethod + def from_public_bytes(cls, data: bytes) -> "X25519PublicKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x25519_supported(): + raise UnsupportedAlgorithm( + "X25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + + return backend.x25519_load_public_bytes(data) + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + The serialized bytes of the public key. 
+ """ + + +class X25519PrivateKey(metaclass=abc.ABCMeta): + @classmethod + def generate(cls) -> "X25519PrivateKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x25519_supported(): + raise UnsupportedAlgorithm( + "X25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + return backend.x25519_generate_key() + + @classmethod + def from_private_bytes(cls, data: bytes) -> "X25519PrivateKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x25519_supported(): + raise UnsupportedAlgorithm( + "X25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + + return backend.x25519_load_private_bytes(data) + + @abc.abstractmethod + def public_key(self) -> X25519PublicKey: + """ + The serialized bytes of the public key. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + The serialized bytes of the private key. + """ + + @abc.abstractmethod + def exchange(self, peer_public_key: X25519PublicKey) -> bytes: + """ + Performs a key exchange operation using the provided peer's public key. + """ diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/x448.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/x448.py new file mode 100644 index 00000000..7f71c272 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/x448.py @@ -0,0 +1,81 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc + +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.primitives import _serialization + + +class X448PublicKey(metaclass=abc.ABCMeta): + @classmethod + def from_public_bytes(cls, data: bytes) -> "X448PublicKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x448_supported(): + raise UnsupportedAlgorithm( + "X448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + + return backend.x448_load_public_bytes(data) + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + The serialized bytes of the public key. + """ + + +class X448PrivateKey(metaclass=abc.ABCMeta): + @classmethod + def generate(cls) -> "X448PrivateKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x448_supported(): + raise UnsupportedAlgorithm( + "X448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + return backend.x448_generate_key() + + @classmethod + def from_private_bytes(cls, data: bytes) -> "X448PrivateKey": + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x448_supported(): + raise UnsupportedAlgorithm( + "X448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + + return backend.x448_load_private_bytes(data) + + @abc.abstractmethod + def public_key(self) -> X448PublicKey: + """ + The serialized bytes of the public key. 
+ """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + The serialized bytes of the private key. + """ + + @abc.abstractmethod + def exchange(self, peer_public_key: X448PublicKey) -> bytes: + """ + Performs a key exchange operation using the provided peer's public key. + """ diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/__init__.py new file mode 100644 index 00000000..874dbd45 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/__init__.py @@ -0,0 +1,27 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +from cryptography.hazmat.primitives._cipheralgorithm import ( + BlockCipherAlgorithm, + CipherAlgorithm, +) +from cryptography.hazmat.primitives.ciphers.base import ( + AEADCipherContext, + AEADDecryptionContext, + AEADEncryptionContext, + Cipher, + CipherContext, +) + + +__all__ = [ + "Cipher", + "CipherAlgorithm", + "BlockCipherAlgorithm", + "CipherContext", + "AEADCipherContext", + "AEADDecryptionContext", + "AEADEncryptionContext", +] diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/aead.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/aead.py new file mode 100644 index 00000000..3cdb3ebe --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/aead.py @@ -0,0 +1,361 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import os +import typing + +from cryptography import exceptions, utils +from cryptography.hazmat.backends.openssl import aead +from cryptography.hazmat.backends.openssl.backend import backend + + +class ChaCha20Poly1305: + _MAX_SIZE = 2**31 - 1 + + def __init__(self, key: bytes): + if not backend.aead_cipher_supported(self): + raise exceptions.UnsupportedAlgorithm( + "ChaCha20Poly1305 is not supported by this version of OpenSSL", + exceptions._Reasons.UNSUPPORTED_CIPHER, + ) + utils._check_byteslike("key", key) + + if len(key) != 32: + raise ValueError("ChaCha20Poly1305 key must be 32 bytes.") + + self._key = key + + @classmethod + def generate_key(cls) -> bytes: + return os.urandom(32) + + def encrypt( + self, + nonce: bytes, + data: bytes, + associated_data: typing.Optional[bytes], + ) -> bytes: + if associated_data is None: + associated_data = b"" + + if len(data) > self._MAX_SIZE or len(associated_data) > self._MAX_SIZE: + # This is OverflowError to match what cffi would raise + raise OverflowError( + "Data or associated data too long. 
Max 2**31 - 1 bytes" + ) + + self._check_params(nonce, data, associated_data) + return aead._encrypt(backend, self, nonce, data, [associated_data], 16) + + def decrypt( + self, + nonce: bytes, + data: bytes, + associated_data: typing.Optional[bytes], + ) -> bytes: + if associated_data is None: + associated_data = b"" + + self._check_params(nonce, data, associated_data) + return aead._decrypt(backend, self, nonce, data, [associated_data], 16) + + def _check_params( + self, + nonce: bytes, + data: bytes, + associated_data: bytes, + ) -> None: + utils._check_byteslike("nonce", nonce) + utils._check_bytes("data", data) + utils._check_bytes("associated_data", associated_data) + if len(nonce) != 12: + raise ValueError("Nonce must be 12 bytes") + + +class AESCCM: + _MAX_SIZE = 2**31 - 1 + + def __init__(self, key: bytes, tag_length: int = 16): + utils._check_byteslike("key", key) + if len(key) not in (16, 24, 32): + raise ValueError("AESCCM key must be 128, 192, or 256 bits.") + + self._key = key + if not isinstance(tag_length, int): + raise TypeError("tag_length must be an integer") + + if tag_length not in (4, 6, 8, 10, 12, 14, 16): + raise ValueError("Invalid tag_length") + + self._tag_length = tag_length + + if not backend.aead_cipher_supported(self): + raise exceptions.UnsupportedAlgorithm( + "AESCCM is not supported by this version of OpenSSL", + exceptions._Reasons.UNSUPPORTED_CIPHER, + ) + + @classmethod + def generate_key(cls, bit_length: int) -> bytes: + if not isinstance(bit_length, int): + raise TypeError("bit_length must be an integer") + + if bit_length not in (128, 192, 256): + raise ValueError("bit_length must be 128, 192, or 256") + + return os.urandom(bit_length // 8) + + def encrypt( + self, + nonce: bytes, + data: bytes, + associated_data: typing.Optional[bytes], + ) -> bytes: + if associated_data is None: + associated_data = b"" + + if len(data) > self._MAX_SIZE or len(associated_data) > self._MAX_SIZE: + # This is OverflowError to match what cffi would raise + raise OverflowError( + "Data or associated data too long. 
Max 2**31 - 1 bytes" + ) + + self._check_params(nonce, data, associated_data) + self._validate_lengths(nonce, len(data)) + return aead._encrypt( + backend, self, nonce, data, [associated_data], self._tag_length + ) + + def decrypt( + self, + nonce: bytes, + data: bytes, + associated_data: typing.Optional[bytes], + ) -> bytes: + if associated_data is None: + associated_data = b"" + + self._check_params(nonce, data, associated_data) + return aead._decrypt( + backend, self, nonce, data, [associated_data], self._tag_length + ) + + def _validate_lengths(self, nonce: bytes, data_len: int) -> None: + # For information about computing this, see + # https://tools.ietf.org/html/rfc3610#section-2.1 + l_val = 15 - len(nonce) + if 2 ** (8 * l_val) < data_len: + raise ValueError("Data too long for nonce") + + def _check_params( + self, nonce: bytes, data: bytes, associated_data: bytes + ) -> None: + utils._check_byteslike("nonce", nonce) + utils._check_bytes("data", data) + utils._check_bytes("associated_data", associated_data) + if not 7 <= len(nonce) <= 13: + raise ValueError("Nonce must be between 7 and 13 bytes") + + +class AESGCM: + _MAX_SIZE = 2**31 - 1 + + def __init__(self, key: bytes): + utils._check_byteslike("key", key) + if len(key) not in (16, 24, 32): + raise ValueError("AESGCM key must be 128, 192, or 256 bits.") + + self._key = key + + @classmethod + def generate_key(cls, bit_length: int) -> bytes: + if not isinstance(bit_length, int): + raise TypeError("bit_length must be an integer") + + if bit_length not in (128, 192, 256): + raise ValueError("bit_length must be 128, 192, or 256") + + return os.urandom(bit_length // 8) + + def encrypt( + self, + nonce: bytes, + data: bytes, + associated_data: typing.Optional[bytes], + ) -> bytes: + if associated_data is None: + associated_data = b"" + + if len(data) > self._MAX_SIZE or len(associated_data) > self._MAX_SIZE: + # This is OverflowError to match what cffi would raise + raise OverflowError( + "Data or associated data too long. 
Max 2**31 - 1 bytes" + ) + + self._check_params(nonce, data, associated_data) + return aead._encrypt(backend, self, nonce, data, [associated_data], 16) + + def decrypt( + self, + nonce: bytes, + data: bytes, + associated_data: typing.Optional[bytes], + ) -> bytes: + if associated_data is None: + associated_data = b"" + + self._check_params(nonce, data, associated_data) + return aead._decrypt(backend, self, nonce, data, [associated_data], 16) + + def _check_params( + self, + nonce: bytes, + data: bytes, + associated_data: bytes, + ) -> None: + utils._check_byteslike("nonce", nonce) + utils._check_bytes("data", data) + utils._check_bytes("associated_data", associated_data) + if len(nonce) < 8 or len(nonce) > 128: + raise ValueError("Nonce must be between 8 and 128 bytes") + + +class AESOCB3: + _MAX_SIZE = 2**31 - 1 + + def __init__(self, key: bytes): + utils._check_byteslike("key", key) + if len(key) not in (16, 24, 32): + raise ValueError("AESOCB3 key must be 128, 192, or 256 bits.") + + self._key = key + + if not backend.aead_cipher_supported(self): + raise exceptions.UnsupportedAlgorithm( + "OCB3 is not supported by this version of OpenSSL", + exceptions._Reasons.UNSUPPORTED_CIPHER, + ) + + @classmethod + def generate_key(cls, bit_length: int) -> bytes: + if not isinstance(bit_length, int): + raise TypeError("bit_length must be an integer") + + if bit_length not in (128, 192, 256): + raise ValueError("bit_length must be 128, 192, or 256") + + return os.urandom(bit_length // 8) + + def encrypt( + self, + nonce: bytes, + data: bytes, + associated_data: typing.Optional[bytes], + ) -> bytes: + if associated_data is None: + associated_data = b"" + + if len(data) > self._MAX_SIZE or len(associated_data) > self._MAX_SIZE: + # This is OverflowError to match what cffi would raise + raise OverflowError( + "Data or associated data too long. 
Max 2**31 - 1 bytes" + ) + + self._check_params(nonce, data, associated_data) + return aead._encrypt(backend, self, nonce, data, [associated_data], 16) + + def decrypt( + self, + nonce: bytes, + data: bytes, + associated_data: typing.Optional[bytes], + ) -> bytes: + if associated_data is None: + associated_data = b"" + + self._check_params(nonce, data, associated_data) + return aead._decrypt(backend, self, nonce, data, [associated_data], 16) + + def _check_params( + self, + nonce: bytes, + data: bytes, + associated_data: bytes, + ) -> None: + utils._check_byteslike("nonce", nonce) + utils._check_bytes("data", data) + utils._check_bytes("associated_data", associated_data) + if len(nonce) < 12 or len(nonce) > 15: + raise ValueError("Nonce must be between 12 and 15 bytes") + + +class AESSIV(object): + _MAX_SIZE = 2**31 - 1 + + def __init__(self, key: bytes): + utils._check_byteslike("key", key) + if len(key) not in (32, 48, 64): + raise ValueError("AESSIV key must be 256, 384, or 512 bits.") + + self._key = key + + if not backend.aead_cipher_supported(self): + raise exceptions.UnsupportedAlgorithm( + "AES-SIV is not supported by this version of OpenSSL", + exceptions._Reasons.UNSUPPORTED_CIPHER, + ) + + @classmethod + def generate_key(cls, bit_length: int) -> bytes: + if not isinstance(bit_length, int): + raise TypeError("bit_length must be an integer") + + if bit_length not in (256, 384, 512): + raise ValueError("bit_length must be 256, 384, or 512") + + return os.urandom(bit_length // 8) + + def encrypt( + self, + data: bytes, + associated_data: typing.Optional[typing.List[bytes]], + ) -> bytes: + if associated_data is None: + associated_data = [] + + self._check_params(data, associated_data) + + if len(data) > self._MAX_SIZE or any( + len(ad) > self._MAX_SIZE for ad in associated_data + ): + # This is OverflowError to match what cffi would raise + raise OverflowError( + "Data or associated data too long. Max 2**31 - 1 bytes" + ) + + return aead._encrypt(backend, self, b"", data, associated_data, 16) + + def decrypt( + self, + data: bytes, + associated_data: typing.Optional[typing.List[bytes]], + ) -> bytes: + if associated_data is None: + associated_data = [] + + self._check_params(data, associated_data) + + return aead._decrypt(backend, self, b"", data, associated_data, 16) + + def _check_params( + self, + data: bytes, + associated_data: typing.List, + ) -> None: + utils._check_bytes("data", data) + if not isinstance(associated_data, list) or not all( + isinstance(x, bytes) for x in associated_data + ): + raise TypeError("associated_data must be a list of bytes or None") diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/algorithms.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/algorithms.py new file mode 100644 index 00000000..61385426 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/algorithms.py @@ -0,0 +1,227 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ + +from cryptography import utils +from cryptography.hazmat.primitives.ciphers import ( + BlockCipherAlgorithm, + CipherAlgorithm, +) + + +def _verify_key_size(algorithm: CipherAlgorithm, key: bytes) -> bytes: + # Verify that the key is instance of bytes + utils._check_byteslike("key", key) + + # Verify that the key size matches the expected key size + if len(key) * 8 not in algorithm.key_sizes: + raise ValueError( + "Invalid key size ({}) for {}.".format( + len(key) * 8, algorithm.name + ) + ) + return key + + +class AES(CipherAlgorithm, BlockCipherAlgorithm): + name = "AES" + block_size = 128 + # 512 added to support AES-256-XTS, which uses 512-bit keys + key_sizes = frozenset([128, 192, 256, 512]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class AES128(CipherAlgorithm, BlockCipherAlgorithm): + name = "AES" + block_size = 128 + key_sizes = frozenset([128]) + key_size = 128 + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + +class AES256(CipherAlgorithm, BlockCipherAlgorithm): + name = "AES" + block_size = 128 + key_sizes = frozenset([256]) + key_size = 256 + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + +class Camellia(CipherAlgorithm, BlockCipherAlgorithm): + name = "camellia" + block_size = 128 + key_sizes = frozenset([128, 192, 256]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class TripleDES(CipherAlgorithm, BlockCipherAlgorithm): + name = "3DES" + block_size = 64 + key_sizes = frozenset([64, 128, 192]) + + def __init__(self, key: bytes): + if len(key) == 8: + key += key + key + elif len(key) == 16: + key += key[:8] + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class Blowfish(CipherAlgorithm, BlockCipherAlgorithm): + name = "Blowfish" + block_size = 64 + key_sizes = frozenset(range(32, 449, 8)) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +_BlowfishInternal = Blowfish +utils.deprecated( + Blowfish, + __name__, + "Blowfish has been deprecated", + utils.DeprecatedIn37, + name="Blowfish", +) + + +class CAST5(CipherAlgorithm, BlockCipherAlgorithm): + name = "CAST5" + block_size = 64 + key_sizes = frozenset(range(40, 129, 8)) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +_CAST5Internal = CAST5 +utils.deprecated( + CAST5, + __name__, + "CAST5 has been deprecated", + utils.DeprecatedIn37, + name="CAST5", +) + + +class ARC4(CipherAlgorithm): + name = "RC4" + key_sizes = frozenset([40, 56, 64, 80, 128, 160, 192, 256]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class IDEA(CipherAlgorithm, BlockCipherAlgorithm): + name = "IDEA" + block_size = 64 + key_sizes = frozenset([128]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +_IDEAInternal = IDEA +utils.deprecated( + IDEA, + __name__, + "IDEA has been deprecated", + utils.DeprecatedIn37, + name="IDEA", +) + + +class SEED(CipherAlgorithm, BlockCipherAlgorithm): + name = "SEED" + 
block_size = 128 + key_sizes = frozenset([128]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +_SEEDInternal = SEED +utils.deprecated( + SEED, + __name__, + "SEED has been deprecated", + utils.DeprecatedIn37, + name="SEED", +) + + +class ChaCha20(CipherAlgorithm): + name = "ChaCha20" + key_sizes = frozenset([256]) + + def __init__(self, key: bytes, nonce: bytes): + self.key = _verify_key_size(self, key) + utils._check_byteslike("nonce", nonce) + + if len(nonce) != 16: + raise ValueError("nonce must be 128-bits (16 bytes)") + + self._nonce = nonce + + @property + def nonce(self) -> bytes: + return self._nonce + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class SM4(CipherAlgorithm, BlockCipherAlgorithm): + name = "SM4" + block_size = 128 + key_sizes = frozenset([128]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/base.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/base.py new file mode 100644 index 00000000..2ea7fc63 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/base.py @@ -0,0 +1,269 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc +import typing + +from cryptography.exceptions import ( + AlreadyFinalized, + AlreadyUpdated, + NotYetFinalized, +) +from cryptography.hazmat.primitives._cipheralgorithm import CipherAlgorithm +from cryptography.hazmat.primitives.ciphers import modes + + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.ciphers import ( + _CipherContext as _BackendCipherContext, + ) + + +class CipherContext(metaclass=abc.ABCMeta): + @abc.abstractmethod + def update(self, data: bytes) -> bytes: + """ + Processes the provided bytes through the cipher and returns the results + as bytes. + """ + + @abc.abstractmethod + def update_into(self, data: bytes, buf: bytes) -> int: + """ + Processes the provided bytes and writes the resulting data into the + provided buffer. Returns the number of bytes written. + """ + + @abc.abstractmethod + def finalize(self) -> bytes: + """ + Returns the results of processing the final block as bytes. + """ + + +class AEADCipherContext(CipherContext, metaclass=abc.ABCMeta): + @abc.abstractmethod + def authenticate_additional_data(self, data: bytes) -> None: + """ + Authenticates the provided bytes. + """ + + +class AEADDecryptionContext(AEADCipherContext, metaclass=abc.ABCMeta): + @abc.abstractmethod + def finalize_with_tag(self, tag: bytes) -> bytes: + """ + Returns the results of processing the final block as bytes and allows + delayed passing of the authentication tag. + """ + + +class AEADEncryptionContext(AEADCipherContext, metaclass=abc.ABCMeta): + @abc.abstractproperty + def tag(self) -> bytes: + """ + Returns tag bytes. This is only available after encryption is + finalized. 
+ """ + + +Mode = typing.TypeVar( + "Mode", bound=typing.Optional[modes.Mode], covariant=True +) + + +class Cipher(typing.Generic[Mode]): + def __init__( + self, + algorithm: CipherAlgorithm, + mode: Mode, + backend: typing.Any = None, + ): + + if not isinstance(algorithm, CipherAlgorithm): + raise TypeError("Expected interface of CipherAlgorithm.") + + if mode is not None: + # mypy needs this assert to narrow the type from our generic + # type. Maybe it won't some time in the future. + assert isinstance(mode, modes.Mode) + mode.validate_for_algorithm(algorithm) + + self.algorithm = algorithm + self.mode = mode + + @typing.overload + def encryptor( + self: "Cipher[modes.ModeWithAuthenticationTag]", + ) -> AEADEncryptionContext: + ... + + @typing.overload + def encryptor( + self: "_CIPHER_TYPE", + ) -> CipherContext: + ... + + def encryptor(self): + if isinstance(self.mode, modes.ModeWithAuthenticationTag): + if self.mode.tag is not None: + raise ValueError( + "Authentication tag must be None when encrypting." + ) + from cryptography.hazmat.backends.openssl.backend import backend + + ctx = backend.create_symmetric_encryption_ctx( + self.algorithm, self.mode + ) + return self._wrap_ctx(ctx, encrypt=True) + + @typing.overload + def decryptor( + self: "Cipher[modes.ModeWithAuthenticationTag]", + ) -> AEADDecryptionContext: + ... + + @typing.overload + def decryptor( + self: "_CIPHER_TYPE", + ) -> CipherContext: + ... + + def decryptor(self): + from cryptography.hazmat.backends.openssl.backend import backend + + ctx = backend.create_symmetric_decryption_ctx( + self.algorithm, self.mode + ) + return self._wrap_ctx(ctx, encrypt=False) + + def _wrap_ctx( + self, ctx: "_BackendCipherContext", encrypt: bool + ) -> typing.Union[ + AEADEncryptionContext, AEADDecryptionContext, CipherContext + ]: + if isinstance(self.mode, modes.ModeWithAuthenticationTag): + if encrypt: + return _AEADEncryptionContext(ctx) + else: + return _AEADDecryptionContext(ctx) + else: + return _CipherContext(ctx) + + +_CIPHER_TYPE = Cipher[ + typing.Union[ + modes.ModeWithNonce, + modes.ModeWithTweak, + None, + modes.ECB, + modes.ModeWithInitializationVector, + ] +] + + +class _CipherContext(CipherContext): + _ctx: typing.Optional["_BackendCipherContext"] + + def __init__(self, ctx: "_BackendCipherContext") -> None: + self._ctx = ctx + + def update(self, data: bytes) -> bytes: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + return self._ctx.update(data) + + def update_into(self, data: bytes, buf: bytes) -> int: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + return self._ctx.update_into(data, buf) + + def finalize(self) -> bytes: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + data = self._ctx.finalize() + self._ctx = None + return data + + +class _AEADCipherContext(AEADCipherContext): + _ctx: typing.Optional["_BackendCipherContext"] + _tag: typing.Optional[bytes] + + def __init__(self, ctx: "_BackendCipherContext") -> None: + self._ctx = ctx + self._bytes_processed = 0 + self._aad_bytes_processed = 0 + self._tag = None + self._updated = False + + def _check_limit(self, data_size: int) -> None: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + self._updated = True + self._bytes_processed += data_size + if self._bytes_processed > self._ctx._mode._MAX_ENCRYPTED_BYTES: + raise ValueError( + "{} has a maximum encrypted byte limit of {}".format( + self._ctx._mode.name, 
self._ctx._mode._MAX_ENCRYPTED_BYTES + ) + ) + + def update(self, data: bytes) -> bytes: + self._check_limit(len(data)) + # mypy needs this assert even though _check_limit already checked + assert self._ctx is not None + return self._ctx.update(data) + + def update_into(self, data: bytes, buf: bytes) -> int: + self._check_limit(len(data)) + # mypy needs this assert even though _check_limit already checked + assert self._ctx is not None + return self._ctx.update_into(data, buf) + + def finalize(self) -> bytes: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + data = self._ctx.finalize() + self._tag = self._ctx.tag + self._ctx = None + return data + + def authenticate_additional_data(self, data: bytes) -> None: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + if self._updated: + raise AlreadyUpdated("Update has been called on this context.") + + self._aad_bytes_processed += len(data) + if self._aad_bytes_processed > self._ctx._mode._MAX_AAD_BYTES: + raise ValueError( + "{} has a maximum AAD byte limit of {}".format( + self._ctx._mode.name, self._ctx._mode._MAX_AAD_BYTES + ) + ) + + self._ctx.authenticate_additional_data(data) + + +class _AEADDecryptionContext(_AEADCipherContext, AEADDecryptionContext): + def finalize_with_tag(self, tag: bytes) -> bytes: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + data = self._ctx.finalize_with_tag(tag) + self._tag = self._ctx.tag + self._ctx = None + return data + + +class _AEADEncryptionContext(_AEADCipherContext, AEADEncryptionContext): + @property + def tag(self) -> bytes: + if self._ctx is not None: + raise NotYetFinalized( + "You must finalize encryption before " "getting the tag." + ) + assert self._tag is not None + return self._tag diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/modes.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/modes.py new file mode 100644 index 00000000..d04e08cc --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/ciphers/modes.py @@ -0,0 +1,270 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc +import typing + +from cryptography import utils +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.primitives._cipheralgorithm import ( + BlockCipherAlgorithm, + CipherAlgorithm, +) +from cryptography.hazmat.primitives.ciphers import algorithms + + +class Mode(metaclass=abc.ABCMeta): + @abc.abstractproperty + def name(self) -> str: + """ + A string naming this mode (e.g. "ECB", "CBC"). + """ + + @abc.abstractmethod + def validate_for_algorithm(self, algorithm: CipherAlgorithm) -> None: + """ + Checks that all the necessary invariants of this (mode, algorithm) + combination are met. + """ + + +class ModeWithInitializationVector(Mode, metaclass=abc.ABCMeta): + @abc.abstractproperty + def initialization_vector(self) -> bytes: + """ + The value of the initialization vector for this mode as bytes. + """ + + +class ModeWithTweak(Mode, metaclass=abc.ABCMeta): + @abc.abstractproperty + def tweak(self) -> bytes: + """ + The value of the tweak for this mode as bytes. + """ + + +class ModeWithNonce(Mode, metaclass=abc.ABCMeta): + @abc.abstractproperty + def nonce(self) -> bytes: + """ + The value of the nonce for this mode as bytes. 
+ """ + + +class ModeWithAuthenticationTag(Mode, metaclass=abc.ABCMeta): + @abc.abstractproperty + def tag(self) -> typing.Optional[bytes]: + """ + The value of the tag supplied to the constructor of this mode. + """ + + +def _check_aes_key_length(self: Mode, algorithm: CipherAlgorithm) -> None: + if algorithm.key_size > 256 and algorithm.name == "AES": + raise ValueError( + "Only 128, 192, and 256 bit keys are allowed for this AES mode" + ) + + +def _check_iv_length( + self: ModeWithInitializationVector, algorithm: BlockCipherAlgorithm +) -> None: + if len(self.initialization_vector) * 8 != algorithm.block_size: + raise ValueError( + "Invalid IV size ({}) for {}.".format( + len(self.initialization_vector), self.name + ) + ) + + +def _check_nonce_length( + nonce: bytes, name: str, algorithm: CipherAlgorithm +) -> None: + if not isinstance(algorithm, BlockCipherAlgorithm): + raise UnsupportedAlgorithm( + f"{name} requires a block cipher algorithm", + _Reasons.UNSUPPORTED_CIPHER, + ) + if len(nonce) * 8 != algorithm.block_size: + raise ValueError( + "Invalid nonce size ({}) for {}.".format(len(nonce), name) + ) + + +def _check_iv_and_key_length( + self: ModeWithInitializationVector, algorithm: CipherAlgorithm +) -> None: + if not isinstance(algorithm, BlockCipherAlgorithm): + raise UnsupportedAlgorithm( + f"{self} requires a block cipher algorithm", + _Reasons.UNSUPPORTED_CIPHER, + ) + _check_aes_key_length(self, algorithm) + _check_iv_length(self, algorithm) + + +class CBC(ModeWithInitializationVector): + name = "CBC" + + def __init__(self, initialization_vector: bytes): + utils._check_byteslike("initialization_vector", initialization_vector) + self._initialization_vector = initialization_vector + + @property + def initialization_vector(self) -> bytes: + return self._initialization_vector + + validate_for_algorithm = _check_iv_and_key_length + + +class XTS(ModeWithTweak): + name = "XTS" + + def __init__(self, tweak: bytes): + utils._check_byteslike("tweak", tweak) + + if len(tweak) != 16: + raise ValueError("tweak must be 128-bits (16 bytes)") + + self._tweak = tweak + + @property + def tweak(self) -> bytes: + return self._tweak + + def validate_for_algorithm(self, algorithm: CipherAlgorithm) -> None: + if isinstance(algorithm, (algorithms.AES128, algorithms.AES256)): + raise TypeError( + "The AES128 and AES256 classes do not support XTS, please use " + "the standard AES class instead." 
+ ) + + if algorithm.key_size not in (256, 512): + raise ValueError( + "The XTS specification requires a 256-bit key for AES-128-XTS" + " and 512-bit key for AES-256-XTS" + ) + + +class ECB(Mode): + name = "ECB" + + validate_for_algorithm = _check_aes_key_length + + +class OFB(ModeWithInitializationVector): + name = "OFB" + + def __init__(self, initialization_vector: bytes): + utils._check_byteslike("initialization_vector", initialization_vector) + self._initialization_vector = initialization_vector + + @property + def initialization_vector(self) -> bytes: + return self._initialization_vector + + validate_for_algorithm = _check_iv_and_key_length + + +class CFB(ModeWithInitializationVector): + name = "CFB" + + def __init__(self, initialization_vector: bytes): + utils._check_byteslike("initialization_vector", initialization_vector) + self._initialization_vector = initialization_vector + + @property + def initialization_vector(self) -> bytes: + return self._initialization_vector + + validate_for_algorithm = _check_iv_and_key_length + + +class CFB8(ModeWithInitializationVector): + name = "CFB8" + + def __init__(self, initialization_vector: bytes): + utils._check_byteslike("initialization_vector", initialization_vector) + self._initialization_vector = initialization_vector + + @property + def initialization_vector(self) -> bytes: + return self._initialization_vector + + validate_for_algorithm = _check_iv_and_key_length + + +class CTR(ModeWithNonce): + name = "CTR" + + def __init__(self, nonce: bytes): + utils._check_byteslike("nonce", nonce) + self._nonce = nonce + + @property + def nonce(self) -> bytes: + return self._nonce + + def validate_for_algorithm(self, algorithm: CipherAlgorithm) -> None: + _check_aes_key_length(self, algorithm) + _check_nonce_length(self.nonce, self.name, algorithm) + + +class GCM(ModeWithInitializationVector, ModeWithAuthenticationTag): + name = "GCM" + _MAX_ENCRYPTED_BYTES = (2**39 - 256) // 8 + _MAX_AAD_BYTES = (2**64) // 8 + + def __init__( + self, + initialization_vector: bytes, + tag: typing.Optional[bytes] = None, + min_tag_length: int = 16, + ): + # OpenSSL 3.0.0 constrains GCM IVs to [64, 1024] bits inclusive + # This is a sane limit anyway so we'll enforce it here. + utils._check_byteslike("initialization_vector", initialization_vector) + if len(initialization_vector) < 8 or len(initialization_vector) > 128: + raise ValueError( + "initialization_vector must be between 8 and 128 bytes (64 " + "and 1024 bits)." 
+ ) + self._initialization_vector = initialization_vector + if tag is not None: + utils._check_bytes("tag", tag) + if min_tag_length < 4: + raise ValueError("min_tag_length must be >= 4") + if len(tag) < min_tag_length: + raise ValueError( + "Authentication tag must be {} bytes or longer.".format( + min_tag_length + ) + ) + self._tag = tag + self._min_tag_length = min_tag_length + + @property + def tag(self) -> typing.Optional[bytes]: + return self._tag + + @property + def initialization_vector(self) -> bytes: + return self._initialization_vector + + def validate_for_algorithm(self, algorithm: CipherAlgorithm) -> None: + _check_aes_key_length(self, algorithm) + if not isinstance(algorithm, BlockCipherAlgorithm): + raise UnsupportedAlgorithm( + "GCM requires a block cipher algorithm", + _Reasons.UNSUPPORTED_CIPHER, + ) + block_size_bytes = algorithm.block_size // 8 + if self._tag is not None and len(self._tag) > block_size_bytes: + raise ValueError( + "Authentication tag cannot be more than {} bytes.".format( + block_size_bytes + ) + ) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/cmac.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/cmac.py new file mode 100644 index 00000000..e08d65e1 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/cmac.py @@ -0,0 +1,66 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, +) +from cryptography.hazmat.primitives import ciphers + +if typing.TYPE_CHECKING: + from cryptography.hazmat.backends.openssl.cmac import _CMACContext + + +class CMAC: + _ctx: typing.Optional["_CMACContext"] + _algorithm: ciphers.BlockCipherAlgorithm + + def __init__( + self, + algorithm: ciphers.BlockCipherAlgorithm, + backend: typing.Any = None, + ctx: typing.Optional["_CMACContext"] = None, + ): + if not isinstance(algorithm, ciphers.BlockCipherAlgorithm): + raise TypeError("Expected instance of BlockCipherAlgorithm.") + self._algorithm = algorithm + + if ctx is None: + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + self._ctx = ossl.create_cmac_ctx(self._algorithm) + else: + self._ctx = ctx + + def update(self, data: bytes) -> None: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + + utils._check_bytes("data", data) + self._ctx.update(data) + + def finalize(self) -> bytes: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + digest = self._ctx.finalize() + self._ctx = None + return digest + + def verify(self, signature: bytes) -> None: + utils._check_bytes("signature", signature) + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + + ctx, self._ctx = self._ctx, None + ctx.verify(signature) + + def copy(self) -> "CMAC": + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + return CMAC(self._algorithm, ctx=self._ctx.copy()) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/constant_time.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/constant_time.py new file mode 100644 index 00000000..a02fa9c4 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/constant_time.py @@ -0,0 +1,13 @@ +# This file is dual licensed under the 
terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import hmac + + +def bytes_eq(a: bytes, b: bytes) -> bool: + if not isinstance(a, bytes) or not isinstance(b, bytes): + raise TypeError("a and b must be bytes.") + + return hmac.compare_digest(a, b) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/hashes.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/hashes.py new file mode 100644 index 00000000..cc0771d4 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/hashes.py @@ -0,0 +1,259 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import abc +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, +) + + +class HashAlgorithm(metaclass=abc.ABCMeta): + @abc.abstractproperty + def name(self) -> str: + """ + A string naming this algorithm (e.g. "sha256", "md5"). + """ + + @abc.abstractproperty + def digest_size(self) -> int: + """ + The size of the resulting digest in bytes. + """ + + @abc.abstractproperty + def block_size(self) -> typing.Optional[int]: + """ + The internal block size of the hash function, or None if the hash + function does not use blocks internally (e.g. SHA3). + """ + + +class HashContext(metaclass=abc.ABCMeta): + @abc.abstractproperty + def algorithm(self) -> HashAlgorithm: + """ + A HashAlgorithm that will be used by this context. + """ + + @abc.abstractmethod + def update(self, data: bytes) -> None: + """ + Processes the provided bytes through the hash. + """ + + @abc.abstractmethod + def finalize(self) -> bytes: + """ + Finalizes the hash context and returns the hash digest as bytes. + """ + + @abc.abstractmethod + def copy(self) -> "HashContext": + """ + Return a HashContext that is a copy of the current context. + """ + + +class ExtendableOutputFunction(metaclass=abc.ABCMeta): + """ + An interface for extendable output functions. 
+ """ + + +class Hash(HashContext): + _ctx: typing.Optional[HashContext] + + def __init__( + self, + algorithm: HashAlgorithm, + backend: typing.Any = None, + ctx: typing.Optional["HashContext"] = None, + ): + if not isinstance(algorithm, HashAlgorithm): + raise TypeError("Expected instance of hashes.HashAlgorithm.") + self._algorithm = algorithm + + if ctx is None: + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + self._ctx = ossl.create_hash_ctx(self.algorithm) + else: + self._ctx = ctx + + @property + def algorithm(self) -> HashAlgorithm: + return self._algorithm + + def update(self, data: bytes) -> None: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + utils._check_byteslike("data", data) + self._ctx.update(data) + + def copy(self) -> "Hash": + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + return Hash(self.algorithm, ctx=self._ctx.copy()) + + def finalize(self) -> bytes: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + digest = self._ctx.finalize() + self._ctx = None + return digest + + +class SHA1(HashAlgorithm): + name = "sha1" + digest_size = 20 + block_size = 64 + + +class SHA512_224(HashAlgorithm): # noqa: N801 + name = "sha512-224" + digest_size = 28 + block_size = 128 + + +class SHA512_256(HashAlgorithm): # noqa: N801 + name = "sha512-256" + digest_size = 32 + block_size = 128 + + +class SHA224(HashAlgorithm): + name = "sha224" + digest_size = 28 + block_size = 64 + + +class SHA256(HashAlgorithm): + name = "sha256" + digest_size = 32 + block_size = 64 + + +class SHA384(HashAlgorithm): + name = "sha384" + digest_size = 48 + block_size = 128 + + +class SHA512(HashAlgorithm): + name = "sha512" + digest_size = 64 + block_size = 128 + + +class SHA3_224(HashAlgorithm): # noqa: N801 + name = "sha3-224" + digest_size = 28 + block_size = None + + +class SHA3_256(HashAlgorithm): # noqa: N801 + name = "sha3-256" + digest_size = 32 + block_size = None + + +class SHA3_384(HashAlgorithm): # noqa: N801 + name = "sha3-384" + digest_size = 48 + block_size = None + + +class SHA3_512(HashAlgorithm): # noqa: N801 + name = "sha3-512" + digest_size = 64 + block_size = None + + +class SHAKE128(HashAlgorithm, ExtendableOutputFunction): + name = "shake128" + block_size = None + + def __init__(self, digest_size: int): + if not isinstance(digest_size, int): + raise TypeError("digest_size must be an integer") + + if digest_size < 1: + raise ValueError("digest_size must be a positive integer") + + self._digest_size = digest_size + + @property + def digest_size(self) -> int: + return self._digest_size + + +class SHAKE256(HashAlgorithm, ExtendableOutputFunction): + name = "shake256" + block_size = None + + def __init__(self, digest_size: int): + if not isinstance(digest_size, int): + raise TypeError("digest_size must be an integer") + + if digest_size < 1: + raise ValueError("digest_size must be a positive integer") + + self._digest_size = digest_size + + @property + def digest_size(self) -> int: + return self._digest_size + + +class MD5(HashAlgorithm): + name = "md5" + digest_size = 16 + block_size = 64 + + +class BLAKE2b(HashAlgorithm): + name = "blake2b" + _max_digest_size = 64 + _min_digest_size = 1 + block_size = 128 + + def __init__(self, digest_size: int): + + if digest_size != 64: + raise ValueError("Digest size must be 64") + + self._digest_size = digest_size + + @property + def digest_size(self) -> int: + return self._digest_size + + +class 
BLAKE2s(HashAlgorithm): + name = "blake2s" + block_size = 64 + _max_digest_size = 32 + _min_digest_size = 1 + + def __init__(self, digest_size: int): + + if digest_size != 32: + raise ValueError("Digest size must be 32") + + self._digest_size = digest_size + + @property + def digest_size(self) -> int: + return self._digest_size + + +class SM3(HashAlgorithm): + name = "sm3" + digest_size = 32 + block_size = 64 diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/hmac.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/hmac.py new file mode 100644 index 00000000..1577326c --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/hmac.py @@ -0,0 +1,72 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, +) +from cryptography.hazmat.backends.openssl.hmac import _HMACContext +from cryptography.hazmat.primitives import hashes + + +class HMAC(hashes.HashContext): + _ctx: typing.Optional[_HMACContext] + + def __init__( + self, + key: bytes, + algorithm: hashes.HashAlgorithm, + backend: typing.Any = None, + ctx=None, + ): + if not isinstance(algorithm, hashes.HashAlgorithm): + raise TypeError("Expected instance of hashes.HashAlgorithm.") + self._algorithm = algorithm + + self._key = key + if ctx is None: + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + self._ctx = ossl.create_hmac_ctx(key, self.algorithm) + else: + self._ctx = ctx + + @property + def algorithm(self) -> hashes.HashAlgorithm: + return self._algorithm + + def update(self, data: bytes) -> None: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + utils._check_byteslike("data", data) + self._ctx.update(data) + + def copy(self) -> "HMAC": + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + return HMAC( + self._key, + self.algorithm, + ctx=self._ctx.copy(), + ) + + def finalize(self) -> bytes: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + digest = self._ctx.finalize() + self._ctx = None + return digest + + def verify(self, signature: bytes) -> None: + utils._check_bytes("signature", signature) + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + + ctx, self._ctx = self._ctx, None + ctx.verify(signature) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/__init__.py new file mode 100644 index 00000000..38e2f8bc --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/__init__.py @@ -0,0 +1,22 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc + + +class KeyDerivationFunction(metaclass=abc.ABCMeta): + @abc.abstractmethod + def derive(self, key_material: bytes) -> bytes: + """ + Deterministically generates and returns a new key based on the existing + key material. + """ + + @abc.abstractmethod + def verify(self, key_material: bytes, expected_key: bytes) -> None: + """ + Checks whether the key generated by the key material matches the + expected derived key. 
Raises an exception if they do not match. + """ diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/concatkdf.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/concatkdf.py new file mode 100644 index 00000000..0b0262eb --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/concatkdf.py @@ -0,0 +1,130 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, + InvalidKey, +) +from cryptography.hazmat.primitives import constant_time, hashes, hmac +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + + +def _int_to_u32be(n: int) -> bytes: + return n.to_bytes(length=4, byteorder="big") + + +def _common_args_checks( + algorithm: hashes.HashAlgorithm, + length: int, + otherinfo: typing.Optional[bytes], +) -> None: + max_length = algorithm.digest_size * (2**32 - 1) + if length > max_length: + raise ValueError( + "Cannot derive keys larger than {} bits.".format(max_length) + ) + if otherinfo is not None: + utils._check_bytes("otherinfo", otherinfo) + + +def _concatkdf_derive( + key_material: bytes, + length: int, + auxfn: typing.Callable[[], hashes.HashContext], + otherinfo: bytes, +) -> bytes: + utils._check_byteslike("key_material", key_material) + output = [b""] + outlen = 0 + counter = 1 + + while length > outlen: + h = auxfn() + h.update(_int_to_u32be(counter)) + h.update(key_material) + h.update(otherinfo) + output.append(h.finalize()) + outlen += len(output[-1]) + counter += 1 + + return b"".join(output)[:length] + + +class ConcatKDFHash(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + otherinfo: typing.Optional[bytes], + backend: typing.Any = None, + ): + _common_args_checks(algorithm, length, otherinfo) + self._algorithm = algorithm + self._length = length + self._otherinfo: bytes = otherinfo if otherinfo is not None else b"" + + self._used = False + + def _hash(self) -> hashes.Hash: + return hashes.Hash(self._algorithm) + + def derive(self, key_material: bytes) -> bytes: + if self._used: + raise AlreadyFinalized + self._used = True + return _concatkdf_derive( + key_material, self._length, self._hash, self._otherinfo + ) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey + + +class ConcatKDFHMAC(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + salt: typing.Optional[bytes], + otherinfo: typing.Optional[bytes], + backend: typing.Any = None, + ): + _common_args_checks(algorithm, length, otherinfo) + self._algorithm = algorithm + self._length = length + self._otherinfo: bytes = otherinfo if otherinfo is not None else b"" + + if algorithm.block_size is None: + raise TypeError( + "{} is unsupported for ConcatKDF".format(algorithm.name) + ) + + if salt is None: + salt = b"\x00" * algorithm.block_size + else: + utils._check_bytes("salt", salt) + + self._salt = salt + + self._used = False + + def _hmac(self) -> hmac.HMAC: + return hmac.HMAC(self._salt, self._algorithm) + + def derive(self, key_material: bytes) -> bytes: + if self._used: + raise AlreadyFinalized + self._used = True + return _concatkdf_derive( + key_material, self._length, self._hmac, 
self._otherinfo + ) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/hkdf.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/hkdf.py new file mode 100644 index 00000000..44889b67 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/hkdf.py @@ -0,0 +1,103 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, + InvalidKey, +) +from cryptography.hazmat.primitives import constant_time, hashes, hmac +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + + +class HKDF(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + salt: typing.Optional[bytes], + info: typing.Optional[bytes], + backend: typing.Any = None, + ): + self._algorithm = algorithm + + if salt is None: + salt = b"\x00" * self._algorithm.digest_size + else: + utils._check_bytes("salt", salt) + + self._salt = salt + + self._hkdf_expand = HKDFExpand(self._algorithm, length, info) + + def _extract(self, key_material: bytes) -> bytes: + h = hmac.HMAC(self._salt, self._algorithm) + h.update(key_material) + return h.finalize() + + def derive(self, key_material: bytes) -> bytes: + utils._check_byteslike("key_material", key_material) + return self._hkdf_expand.derive(self._extract(key_material)) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey + + +class HKDFExpand(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + info: typing.Optional[bytes], + backend: typing.Any = None, + ): + self._algorithm = algorithm + + max_length = 255 * algorithm.digest_size + + if length > max_length: + raise ValueError( + "Cannot derive keys larger than {} octets.".format(max_length) + ) + + self._length = length + + if info is None: + info = b"" + else: + utils._check_bytes("info", info) + + self._info = info + + self._used = False + + def _expand(self, key_material: bytes) -> bytes: + output = [b""] + counter = 1 + + while self._algorithm.digest_size * (len(output) - 1) < self._length: + h = hmac.HMAC(key_material, self._algorithm) + h.update(output[-1]) + h.update(self._info) + h.update(bytes([counter])) + output.append(h.finalize()) + counter += 1 + + return b"".join(output)[: self._length] + + def derive(self, key_material: bytes) -> bytes: + utils._check_byteslike("key_material", key_material) + if self._used: + raise AlreadyFinalized + + self._used = True + return self._expand(key_material) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/kbkdf.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/kbkdf.py new file mode 100644 index 00000000..7f185a9a --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/kbkdf.py @@ -0,0 +1,297 @@ +# This file is dual licensed under the terms of the Apache License, 
Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, + InvalidKey, + UnsupportedAlgorithm, + _Reasons, +) +from cryptography.hazmat.primitives import ( + ciphers, + cmac, + constant_time, + hashes, + hmac, +) +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + + +class Mode(utils.Enum): + CounterMode = "ctr" + + +class CounterLocation(utils.Enum): + BeforeFixed = "before_fixed" + AfterFixed = "after_fixed" + MiddleFixed = "middle_fixed" + + +class _KBKDFDeriver: + def __init__( + self, + prf: typing.Callable, + mode: Mode, + length: int, + rlen: int, + llen: typing.Optional[int], + location: CounterLocation, + break_location: typing.Optional[int], + label: typing.Optional[bytes], + context: typing.Optional[bytes], + fixed: typing.Optional[bytes], + ): + assert callable(prf) + + if not isinstance(mode, Mode): + raise TypeError("mode must be of type Mode") + + if not isinstance(location, CounterLocation): + raise TypeError("location must be of type CounterLocation") + + if break_location is None and location is CounterLocation.MiddleFixed: + raise ValueError("Please specify a break_location") + + if ( + break_location is not None + and location != CounterLocation.MiddleFixed + ): + raise ValueError( + "break_location is ignored when location is not" + " CounterLocation.MiddleFixed" + ) + + if break_location is not None and not isinstance(break_location, int): + raise TypeError("break_location must be an integer") + + if break_location is not None and break_location < 0: + raise ValueError("break_location must be a positive integer") + + if (label or context) and fixed: + raise ValueError( + "When supplying fixed data, " "label and context are ignored." + ) + + if rlen is None or not self._valid_byte_length(rlen): + raise ValueError("rlen must be between 1 and 4") + + if llen is None and fixed is None: + raise ValueError("Please specify an llen") + + if llen is not None and not isinstance(llen, int): + raise TypeError("llen must be an integer") + + if label is None: + label = b"" + + if context is None: + context = b"" + + utils._check_bytes("label", label) + utils._check_bytes("context", context) + self._prf = prf + self._mode = mode + self._length = length + self._rlen = rlen + self._llen = llen + self._location = location + self._break_location = break_location + self._label = label + self._context = context + self._used = False + self._fixed_data = fixed + + @staticmethod + def _valid_byte_length(value: int) -> bool: + if not isinstance(value, int): + raise TypeError("value must be of type int") + + value_bin = utils.int_to_bytes(1, value) + if not 1 <= len(value_bin) <= 4: + return False + return True + + def derive(self, key_material: bytes, prf_output_size: int) -> bytes: + if self._used: + raise AlreadyFinalized + + utils._check_byteslike("key_material", key_material) + self._used = True + + # inverse floor division (equivalent to ceiling) + rounds = -(-self._length // prf_output_size) + + output = [b""] + + # For counter mode, the number of iterations shall not be + # larger than 2^r-1, where r <= 32 is the binary length of the counter + # This ensures that the counter values used as an input to the + # PRF will not repeat during a particular call to the KDF function. 
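+        # For example, with rlen=2 the counter occupies 16 bits, so at most
+        # 2**16 - 1 = 65535 PRF iterations are permitted per derive() call.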
+ r_bin = utils.int_to_bytes(1, self._rlen) + if rounds > pow(2, len(r_bin) * 8) - 1: + raise ValueError("There are too many iterations.") + + fixed = self._generate_fixed_input() + + if self._location == CounterLocation.BeforeFixed: + data_before_ctr = b"" + data_after_ctr = fixed + elif self._location == CounterLocation.AfterFixed: + data_before_ctr = fixed + data_after_ctr = b"" + else: + if isinstance( + self._break_location, int + ) and self._break_location > len(fixed): + raise ValueError("break_location offset > len(fixed)") + data_before_ctr = fixed[: self._break_location] + data_after_ctr = fixed[self._break_location :] + + for i in range(1, rounds + 1): + h = self._prf(key_material) + + counter = utils.int_to_bytes(i, self._rlen) + input_data = data_before_ctr + counter + data_after_ctr + + h.update(input_data) + + output.append(h.finalize()) + + return b"".join(output)[: self._length] + + def _generate_fixed_input(self) -> bytes: + if self._fixed_data and isinstance(self._fixed_data, bytes): + return self._fixed_data + + l_val = utils.int_to_bytes(self._length * 8, self._llen) + + return b"".join([self._label, b"\x00", self._context, l_val]) + + +class KBKDFHMAC(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + mode: Mode, + length: int, + rlen: int, + llen: typing.Optional[int], + location: CounterLocation, + label: typing.Optional[bytes], + context: typing.Optional[bytes], + fixed: typing.Optional[bytes], + backend: typing.Any = None, + *, + break_location: typing.Optional[int] = None, + ): + if not isinstance(algorithm, hashes.HashAlgorithm): + raise UnsupportedAlgorithm( + "Algorithm supplied is not a supported hash algorithm.", + _Reasons.UNSUPPORTED_HASH, + ) + + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + if not ossl.hmac_supported(algorithm): + raise UnsupportedAlgorithm( + "Algorithm supplied is not a supported hmac algorithm.", + _Reasons.UNSUPPORTED_HASH, + ) + + self._algorithm = algorithm + + self._deriver = _KBKDFDeriver( + self._prf, + mode, + length, + rlen, + llen, + location, + break_location, + label, + context, + fixed, + ) + + def _prf(self, key_material: bytes) -> hmac.HMAC: + return hmac.HMAC(key_material, self._algorithm) + + def derive(self, key_material: bytes) -> bytes: + return self._deriver.derive(key_material, self._algorithm.digest_size) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey + + +class KBKDFCMAC(KeyDerivationFunction): + def __init__( + self, + algorithm, + mode: Mode, + length: int, + rlen: int, + llen: typing.Optional[int], + location: CounterLocation, + label: typing.Optional[bytes], + context: typing.Optional[bytes], + fixed: typing.Optional[bytes], + backend: typing.Any = None, + *, + break_location: typing.Optional[int] = None, + ): + if not issubclass( + algorithm, ciphers.BlockCipherAlgorithm + ) or not issubclass(algorithm, ciphers.CipherAlgorithm): + raise UnsupportedAlgorithm( + "Algorithm supplied is not a supported cipher algorithm.", + _Reasons.UNSUPPORTED_CIPHER, + ) + + self._algorithm = algorithm + self._cipher: typing.Optional[ciphers.BlockCipherAlgorithm] = None + + self._deriver = _KBKDFDeriver( + self._prf, + mode, + length, + rlen, + llen, + location, + break_location, + label, + context, + fixed, + ) + + def _prf(self, _: bytes) -> cmac.CMAC: + assert self._cipher is not None + + return cmac.CMAC(self._cipher) + + def 
derive(self, key_material: bytes) -> bytes: + self._cipher = self._algorithm(key_material) + + assert self._cipher is not None + + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + if not ossl.cmac_algorithm_supported(self._cipher): + raise UnsupportedAlgorithm( + "Algorithm supplied is not a supported cipher algorithm.", + _Reasons.UNSUPPORTED_CIPHER, + ) + + return self._deriver.derive(key_material, self._cipher.block_size // 8) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/pbkdf2.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/pbkdf2.py new file mode 100644 index 00000000..8d23f8c2 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/pbkdf2.py @@ -0,0 +1,65 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, + InvalidKey, + UnsupportedAlgorithm, + _Reasons, +) +from cryptography.hazmat.primitives import constant_time, hashes +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + + +class PBKDF2HMAC(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + salt: bytes, + iterations: int, + backend: typing.Any = None, + ): + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + if not ossl.pbkdf2_hmac_supported(algorithm): + raise UnsupportedAlgorithm( + "{} is not supported for PBKDF2 by this backend.".format( + algorithm.name + ), + _Reasons.UNSUPPORTED_HASH, + ) + self._used = False + self._algorithm = algorithm + self._length = length + utils._check_bytes("salt", salt) + self._salt = salt + self._iterations = iterations + + def derive(self, key_material: bytes) -> bytes: + if self._used: + raise AlreadyFinalized("PBKDF2 instances can only be used once.") + self._used = True + + utils._check_byteslike("key_material", key_material) + from cryptography.hazmat.backends.openssl.backend import backend + + return backend.derive_pbkdf2_hmac( + self._algorithm, + self._length, + self._salt, + self._iterations, + key_material, + ) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + derived_key = self.derive(key_material) + if not constant_time.bytes_eq(derived_key, expected_key): + raise InvalidKey("Keys do not match.") diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/scrypt.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/scrypt.py new file mode 100644 index 00000000..ff81bbb1 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/scrypt.py @@ -0,0 +1,74 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
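The PBKDF2HMAC class above exposes the usual one-shot KDF interface. A brief sketch, again assuming the vendored module matches upstream; the iteration count is purely illustrative and should be tuned to your hardware:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

salt = os.urandom(16)
kdf = PBKDF2HMAC(
    algorithm=hashes.SHA256(),
    length=32,
    salt=salt,
    iterations=480_000,  # illustrative work factor
)
key = kdf.derive(b"my great password")

# Instances are single-use (AlreadyFinalized on reuse), so verify with a
# fresh KDF built from the same parameters.
PBKDF2HMAC(
    algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000
).verify(b"my great password", key)
```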
+ + +import sys +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, + InvalidKey, + UnsupportedAlgorithm, +) +from cryptography.hazmat.primitives import constant_time +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + + +# This is used by the scrypt tests to skip tests that require more memory +# than the MEM_LIMIT +_MEM_LIMIT = sys.maxsize // 2 + + +class Scrypt(KeyDerivationFunction): + def __init__( + self, + salt: bytes, + length: int, + n: int, + r: int, + p: int, + backend: typing.Any = None, + ): + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + if not ossl.scrypt_supported(): + raise UnsupportedAlgorithm( + "This version of OpenSSL does not support scrypt" + ) + self._length = length + utils._check_bytes("salt", salt) + if n < 2 or (n & (n - 1)) != 0: + raise ValueError("n must be greater than 1 and be a power of 2.") + + if r < 1: + raise ValueError("r must be greater than or equal to 1.") + + if p < 1: + raise ValueError("p must be greater than or equal to 1.") + + self._used = False + self._salt = salt + self._n = n + self._r = r + self._p = p + + def derive(self, key_material: bytes) -> bytes: + if self._used: + raise AlreadyFinalized("Scrypt instances can only be used once.") + self._used = True + + utils._check_byteslike("key_material", key_material) + from cryptography.hazmat.backends.openssl.backend import backend + + return backend.derive_scrypt( + key_material, self._salt, self._length, self._n, self._r, self._p + ) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + derived_key = self.derive(key_material) + if not constant_time.bytes_eq(derived_key, expected_key): + raise InvalidKey("Keys do not match.") diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/x963kdf.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/x963kdf.py new file mode 100644 index 00000000..aa6bcc1d --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/kdf/x963kdf.py @@ -0,0 +1,65 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
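The Scrypt class above validates its cost parameters in `__init__` (`n` must be a power of two greater than 1, `r` and `p` at least 1). A usage sketch under the same upstream-parity assumption; the parameter values are illustrative:

```python
import os

from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

salt = os.urandom(16)
kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
key = kdf.derive(b"my great password")

# One-shot, like the other KDFs: build a new instance to verify.
Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).verify(
    b"my great password", key
)
```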
+ + +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, + InvalidKey, +) +from cryptography.hazmat.primitives import constant_time, hashes +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + + +def _int_to_u32be(n: int) -> bytes: + return n.to_bytes(length=4, byteorder="big") + + +class X963KDF(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + sharedinfo: typing.Optional[bytes], + backend: typing.Any = None, + ): + max_len = algorithm.digest_size * (2**32 - 1) + if length > max_len: + raise ValueError( + "Cannot derive keys larger than {} bits.".format(max_len) + ) + if sharedinfo is not None: + utils._check_bytes("sharedinfo", sharedinfo) + + self._algorithm = algorithm + self._length = length + self._sharedinfo = sharedinfo + self._used = False + + def derive(self, key_material: bytes) -> bytes: + if self._used: + raise AlreadyFinalized + self._used = True + utils._check_byteslike("key_material", key_material) + output = [b""] + outlen = 0 + counter = 1 + + while self._length > outlen: + h = hashes.Hash(self._algorithm) + h.update(key_material) + h.update(_int_to_u32be(counter)) + if self._sharedinfo is not None: + h.update(self._sharedinfo) + output.append(h.finalize()) + outlen += len(output[-1]) + counter += 1 + + return b"".join(output)[: self._length] + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/keywrap.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/keywrap.py new file mode 100644 index 00000000..64771ca3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/keywrap.py @@ -0,0 +1,176 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
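X963KDF above implements the ANSI X9.63 construction: it hashes `key_material || counter || sharedinfo` with an incrementing 32-bit counter and concatenates the digests until `length` bytes are produced. A short sketch, assuming the file matches upstream; `sharedinfo` is optional and may be None:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.x963kdf import X963KDF

sharedinfo = b"ANSI X9.63 example"  # illustrative context string
xkdf = X963KDF(algorithm=hashes.SHA256(), length=32, sharedinfo=sharedinfo)
key = xkdf.derive(b"input key material")

# Single-use as well; verify with a fresh instance.
X963KDF(
    algorithm=hashes.SHA256(), length=32, sharedinfo=sharedinfo
).verify(b"input key material", key)
```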
+ + +import typing + +from cryptography.hazmat.primitives.ciphers import Cipher +from cryptography.hazmat.primitives.ciphers.algorithms import AES +from cryptography.hazmat.primitives.ciphers.modes import ECB +from cryptography.hazmat.primitives.constant_time import bytes_eq + + +def _wrap_core( + wrapping_key: bytes, + a: bytes, + r: typing.List[bytes], +) -> bytes: + # RFC 3394 Key Wrap - 2.2.1 (index method) + encryptor = Cipher(AES(wrapping_key), ECB()).encryptor() + n = len(r) + for j in range(6): + for i in range(n): + # every encryption operation is a discrete 16 byte chunk (because + # AES has a 128-bit block size) and since we're using ECB it is + # safe to reuse the encryptor for the entire operation + b = encryptor.update(a + r[i]) + a = ( + int.from_bytes(b[:8], byteorder="big") ^ ((n * j) + i + 1) + ).to_bytes(length=8, byteorder="big") + r[i] = b[-8:] + + assert encryptor.finalize() == b"" + + return a + b"".join(r) + + +def aes_key_wrap( + wrapping_key: bytes, + key_to_wrap: bytes, + backend: typing.Any = None, +) -> bytes: + if len(wrapping_key) not in [16, 24, 32]: + raise ValueError("The wrapping key must be a valid AES key length") + + if len(key_to_wrap) < 16: + raise ValueError("The key to wrap must be at least 16 bytes") + + if len(key_to_wrap) % 8 != 0: + raise ValueError("The key to wrap must be a multiple of 8 bytes") + + a = b"\xa6\xa6\xa6\xa6\xa6\xa6\xa6\xa6" + r = [key_to_wrap[i : i + 8] for i in range(0, len(key_to_wrap), 8)] + return _wrap_core(wrapping_key, a, r) + + +def _unwrap_core( + wrapping_key: bytes, + a: bytes, + r: typing.List[bytes], +) -> typing.Tuple[bytes, typing.List[bytes]]: + # Implement RFC 3394 Key Unwrap - 2.2.2 (index method) + decryptor = Cipher(AES(wrapping_key), ECB()).decryptor() + n = len(r) + for j in reversed(range(6)): + for i in reversed(range(n)): + atr = ( + int.from_bytes(a, byteorder="big") ^ ((n * j) + i + 1) + ).to_bytes(length=8, byteorder="big") + r[i] + # every decryption operation is a discrete 16 byte chunk so + # it is safe to reuse the decryptor for the entire operation + b = decryptor.update(atr) + a = b[:8] + r[i] = b[-8:] + + assert decryptor.finalize() == b"" + return a, r + + +def aes_key_wrap_with_padding( + wrapping_key: bytes, + key_to_wrap: bytes, + backend: typing.Any = None, +) -> bytes: + if len(wrapping_key) not in [16, 24, 32]: + raise ValueError("The wrapping key must be a valid AES key length") + + aiv = b"\xA6\x59\x59\xA6" + len(key_to_wrap).to_bytes( + length=4, byteorder="big" + ) + # pad the key to wrap if necessary + pad = (8 - (len(key_to_wrap) % 8)) % 8 + key_to_wrap = key_to_wrap + b"\x00" * pad + if len(key_to_wrap) == 8: + # RFC 5649 - 4.1 - exactly 8 octets after padding + encryptor = Cipher(AES(wrapping_key), ECB()).encryptor() + b = encryptor.update(aiv + key_to_wrap) + assert encryptor.finalize() == b"" + return b + else: + r = [key_to_wrap[i : i + 8] for i in range(0, len(key_to_wrap), 8)] + return _wrap_core(wrapping_key, aiv, r) + + +def aes_key_unwrap_with_padding( + wrapping_key: bytes, + wrapped_key: bytes, + backend: typing.Any = None, +) -> bytes: + if len(wrapped_key) < 16: + raise InvalidUnwrap("Must be at least 16 bytes") + + if len(wrapping_key) not in [16, 24, 32]: + raise ValueError("The wrapping key must be a valid AES key length") + + if len(wrapped_key) == 16: + # RFC 5649 - 4.2 - exactly two 64-bit blocks + decryptor = Cipher(AES(wrapping_key), ECB()).decryptor() + out = decryptor.update(wrapped_key) + assert decryptor.finalize() == b"" + a = out[:8] + data = out[8:] + 
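+        # RFC 5649 section 4.2 single-block case: out[:8] is the recovered
+        # AIV and out[8:] the padded key; n counts 8-byte semi-blocks of data.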
n = 1 + else: + r = [wrapped_key[i : i + 8] for i in range(0, len(wrapped_key), 8)] + encrypted_aiv = r.pop(0) + n = len(r) + a, r = _unwrap_core(wrapping_key, encrypted_aiv, r) + data = b"".join(r) + + # 1) Check that MSB(32,A) = A65959A6. + # 2) Check that 8*(n-1) < LSB(32,A) <= 8*n. If so, let + # MLI = LSB(32,A). + # 3) Let b = (8*n)-MLI, and then check that the rightmost b octets of + # the output data are zero. + mli = int.from_bytes(a[4:], byteorder="big") + b = (8 * n) - mli + if ( + not bytes_eq(a[:4], b"\xa6\x59\x59\xa6") + or not 8 * (n - 1) < mli <= 8 * n + or (b != 0 and not bytes_eq(data[-b:], b"\x00" * b)) + ): + raise InvalidUnwrap() + + if b == 0: + return data + else: + return data[:-b] + + +def aes_key_unwrap( + wrapping_key: bytes, + wrapped_key: bytes, + backend: typing.Any = None, +) -> bytes: + if len(wrapped_key) < 24: + raise InvalidUnwrap("Must be at least 24 bytes") + + if len(wrapped_key) % 8 != 0: + raise InvalidUnwrap("The wrapped key must be a multiple of 8 bytes") + + if len(wrapping_key) not in [16, 24, 32]: + raise ValueError("The wrapping key must be a valid AES key length") + + aiv = b"\xa6\xa6\xa6\xa6\xa6\xa6\xa6\xa6" + r = [wrapped_key[i : i + 8] for i in range(0, len(wrapped_key), 8)] + a = r.pop(0) + a, r = _unwrap_core(wrapping_key, a, r) + if not bytes_eq(a, aiv): + raise InvalidUnwrap() + + return b"".join(r) + + +class InvalidUnwrap(Exception): + pass diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/padding.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/padding.py new file mode 100644 index 00000000..d6c1d915 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/padding.py @@ -0,0 +1,224 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc +import typing + +from cryptography import utils +from cryptography.exceptions import AlreadyFinalized +from cryptography.hazmat.bindings._rust import ( + check_ansix923_padding, + check_pkcs7_padding, +) + + +class PaddingContext(metaclass=abc.ABCMeta): + @abc.abstractmethod + def update(self, data: bytes) -> bytes: + """ + Pads the provided bytes and returns any available data as bytes. + """ + + @abc.abstractmethod + def finalize(self) -> bytes: + """ + Finalize the padding, returns bytes. 
+ """ + + +def _byte_padding_check(block_size: int) -> None: + if not (0 <= block_size <= 2040): + raise ValueError("block_size must be in range(0, 2041).") + + if block_size % 8 != 0: + raise ValueError("block_size must be a multiple of 8.") + + +def _byte_padding_update( + buffer_: typing.Optional[bytes], data: bytes, block_size: int +) -> typing.Tuple[bytes, bytes]: + if buffer_ is None: + raise AlreadyFinalized("Context was already finalized.") + + utils._check_byteslike("data", data) + + buffer_ += bytes(data) + + finished_blocks = len(buffer_) // (block_size // 8) + + result = buffer_[: finished_blocks * (block_size // 8)] + buffer_ = buffer_[finished_blocks * (block_size // 8) :] + + return buffer_, result + + +def _byte_padding_pad( + buffer_: typing.Optional[bytes], + block_size: int, + paddingfn: typing.Callable[[int], bytes], +) -> bytes: + if buffer_ is None: + raise AlreadyFinalized("Context was already finalized.") + + pad_size = block_size // 8 - len(buffer_) + return buffer_ + paddingfn(pad_size) + + +def _byte_unpadding_update( + buffer_: typing.Optional[bytes], data: bytes, block_size: int +) -> typing.Tuple[bytes, bytes]: + if buffer_ is None: + raise AlreadyFinalized("Context was already finalized.") + + utils._check_byteslike("data", data) + + buffer_ += bytes(data) + + finished_blocks = max(len(buffer_) // (block_size // 8) - 1, 0) + + result = buffer_[: finished_blocks * (block_size // 8)] + buffer_ = buffer_[finished_blocks * (block_size // 8) :] + + return buffer_, result + + +def _byte_unpadding_check( + buffer_: typing.Optional[bytes], + block_size: int, + checkfn: typing.Callable[[bytes], int], +) -> bytes: + if buffer_ is None: + raise AlreadyFinalized("Context was already finalized.") + + if len(buffer_) != block_size // 8: + raise ValueError("Invalid padding bytes.") + + valid = checkfn(buffer_) + + if not valid: + raise ValueError("Invalid padding bytes.") + + pad_size = buffer_[-1] + return buffer_[:-pad_size] + + +class PKCS7: + def __init__(self, block_size: int): + _byte_padding_check(block_size) + self.block_size = block_size + + def padder(self) -> PaddingContext: + return _PKCS7PaddingContext(self.block_size) + + def unpadder(self) -> PaddingContext: + return _PKCS7UnpaddingContext(self.block_size) + + +class _PKCS7PaddingContext(PaddingContext): + _buffer: typing.Optional[bytes] + + def __init__(self, block_size: int): + self.block_size = block_size + # TODO: more copies than necessary, we should use zero-buffer (#193) + self._buffer = b"" + + def update(self, data: bytes) -> bytes: + self._buffer, result = _byte_padding_update( + self._buffer, data, self.block_size + ) + return result + + def _padding(self, size: int) -> bytes: + return bytes([size]) * size + + def finalize(self) -> bytes: + result = _byte_padding_pad( + self._buffer, self.block_size, self._padding + ) + self._buffer = None + return result + + +class _PKCS7UnpaddingContext(PaddingContext): + _buffer: typing.Optional[bytes] + + def __init__(self, block_size: int): + self.block_size = block_size + # TODO: more copies than necessary, we should use zero-buffer (#193) + self._buffer = b"" + + def update(self, data: bytes) -> bytes: + self._buffer, result = _byte_unpadding_update( + self._buffer, data, self.block_size + ) + return result + + def finalize(self) -> bytes: + result = _byte_unpadding_check( + self._buffer, self.block_size, check_pkcs7_padding + ) + self._buffer = None + return result + + +class ANSIX923: + def __init__(self, block_size: int): + _byte_padding_check(block_size) 
+ self.block_size = block_size + + def padder(self) -> PaddingContext: + return _ANSIX923PaddingContext(self.block_size) + + def unpadder(self) -> PaddingContext: + return _ANSIX923UnpaddingContext(self.block_size) + + +class _ANSIX923PaddingContext(PaddingContext): + _buffer: typing.Optional[bytes] + + def __init__(self, block_size: int): + self.block_size = block_size + # TODO: more copies than necessary, we should use zero-buffer (#193) + self._buffer = b"" + + def update(self, data: bytes) -> bytes: + self._buffer, result = _byte_padding_update( + self._buffer, data, self.block_size + ) + return result + + def _padding(self, size: int) -> bytes: + return bytes([0]) * (size - 1) + bytes([size]) + + def finalize(self) -> bytes: + result = _byte_padding_pad( + self._buffer, self.block_size, self._padding + ) + self._buffer = None + return result + + +class _ANSIX923UnpaddingContext(PaddingContext): + _buffer: typing.Optional[bytes] + + def __init__(self, block_size: int): + self.block_size = block_size + # TODO: more copies than necessary, we should use zero-buffer (#193) + self._buffer = b"" + + def update(self, data: bytes) -> bytes: + self._buffer, result = _byte_unpadding_update( + self._buffer, data, self.block_size + ) + return result + + def finalize(self) -> bytes: + result = _byte_unpadding_check( + self._buffer, + self.block_size, + check_ansix923_padding, + ) + self._buffer = None + return result diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/poly1305.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/poly1305.py new file mode 100644 index 00000000..7fcf4a50 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/poly1305.py @@ -0,0 +1,60 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
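+# Usage sketch for the one-shot helpers below, assuming a fresh random
+# 32-byte key (a Poly1305 key must never be reused for a second message):
+#
+#     import os
+#     from cryptography.hazmat.primitives.poly1305 import Poly1305
+#
+#     key = os.urandom(32)
+#     tag = Poly1305.generate_tag(key, b"message to authenticate")
+#     # verify_tag raises cryptography.exceptions.InvalidSignature on mismatch
+#     Poly1305.verify_tag(key, b"message to authenticate", tag)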
+ +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, + UnsupportedAlgorithm, + _Reasons, +) +from cryptography.hazmat.backends.openssl.poly1305 import _Poly1305Context + + +class Poly1305: + _ctx: typing.Optional[_Poly1305Context] + + def __init__(self, key: bytes): + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.poly1305_supported(): + raise UnsupportedAlgorithm( + "poly1305 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_MAC, + ) + self._ctx = backend.create_poly1305_ctx(key) + + def update(self, data: bytes) -> None: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + utils._check_byteslike("data", data) + self._ctx.update(data) + + def finalize(self) -> bytes: + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + mac = self._ctx.finalize() + self._ctx = None + return mac + + def verify(self, tag: bytes) -> None: + utils._check_bytes("tag", tag) + if self._ctx is None: + raise AlreadyFinalized("Context was already finalized.") + + ctx, self._ctx = self._ctx, None + ctx.verify(tag) + + @classmethod + def generate_tag(cls, key: bytes, data: bytes) -> bytes: + p = Poly1305(key) + p.update(data) + return p.finalize() + + @classmethod + def verify_tag(cls, key: bytes, data: bytes, tag: bytes) -> None: + p = Poly1305(key) + p.update(data) + p.verify(tag) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/__init__.py new file mode 100644 index 00000000..60241500 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/__init__.py @@ -0,0 +1,47 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +from cryptography.hazmat.primitives._serialization import ( + BestAvailableEncryption, + Encoding, + KeySerializationEncryption, + NoEncryption, + ParameterFormat, + PrivateFormat, + PublicFormat, + _KeySerializationEncryption, +) +from cryptography.hazmat.primitives.serialization.base import ( + load_der_parameters, + load_der_private_key, + load_der_public_key, + load_pem_parameters, + load_pem_private_key, + load_pem_public_key, +) +from cryptography.hazmat.primitives.serialization.ssh import ( + load_ssh_private_key, + load_ssh_public_key, +) + + +__all__ = [ + "load_der_parameters", + "load_der_private_key", + "load_der_public_key", + "load_pem_parameters", + "load_pem_private_key", + "load_pem_public_key", + "load_ssh_private_key", + "load_ssh_public_key", + "Encoding", + "PrivateFormat", + "PublicFormat", + "ParameterFormat", + "KeySerializationEncryption", + "BestAvailableEncryption", + "NoEncryption", + "_KeySerializationEncryption", +] diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/base.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/base.py new file mode 100644 index 00000000..059b6e40 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/base.py @@ -0,0 +1,64 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
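+# Usage sketch for the PEM/DER loaders below, assuming ``pem_data`` holds a
+# PEM-encoded private key; ``password`` is ``None`` for unencrypted keys:
+#
+#     from cryptography.hazmat.primitives.serialization import (
+#         load_pem_private_key,
+#     )
+#
+#     private_key = load_pem_private_key(pem_data, password=None)
+#
+# The returned object is one of the PRIVATE_KEY_TYPES (RSA, DSA, EC, ...),
+# depending on what the encoded key actually contains.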
+ + +import typing + +from cryptography.hazmat.primitives.asymmetric import dh +from cryptography.hazmat.primitives.asymmetric.types import ( + PRIVATE_KEY_TYPES, + PUBLIC_KEY_TYPES, +) + + +def load_pem_private_key( + data: bytes, + password: typing.Optional[bytes], + backend: typing.Any = None, +) -> PRIVATE_KEY_TYPES: + from cryptography.hazmat.backends.openssl.backend import backend as ossl + + return ossl.load_pem_private_key(data, password) + + +def load_pem_public_key( + data: bytes, backend: typing.Any = None +) -> PUBLIC_KEY_TYPES: + from cryptography.hazmat.backends.openssl.backend import backend as ossl + + return ossl.load_pem_public_key(data) + + +def load_pem_parameters( + data: bytes, backend: typing.Any = None +) -> "dh.DHParameters": + from cryptography.hazmat.backends.openssl.backend import backend as ossl + + return ossl.load_pem_parameters(data) + + +def load_der_private_key( + data: bytes, + password: typing.Optional[bytes], + backend: typing.Any = None, +) -> PRIVATE_KEY_TYPES: + from cryptography.hazmat.backends.openssl.backend import backend as ossl + + return ossl.load_der_private_key(data, password) + + +def load_der_public_key( + data: bytes, backend: typing.Any = None +) -> PUBLIC_KEY_TYPES: + from cryptography.hazmat.backends.openssl.backend import backend as ossl + + return ossl.load_der_public_key(data) + + +def load_der_parameters( + data: bytes, backend: typing.Any = None +) -> "dh.DHParameters": + from cryptography.hazmat.backends.openssl.backend import backend as ossl + + return ossl.load_der_parameters(data) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/pkcs12.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/pkcs12.py new file mode 100644 index 00000000..662ea75a --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/pkcs12.py @@ -0,0 +1,228 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
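+# Usage sketch for the loaders below, assuming ``p12_data`` holds a PKCS#12
+# blob encrypted with the passphrase ``b"password"``:
+#
+#     from cryptography.hazmat.primitives.serialization import pkcs12
+#
+#     key, cert, additional_certs = pkcs12.load_key_and_certificates(
+#         p12_data, b"password"
+#     )
+#
+# load_pkcs12 returns the same data wrapped in a PKCS12KeyAndCertificates
+# object, which also preserves per-entry friendly names.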
+
+import typing
+
+from cryptography import x509
+from cryptography.hazmat.primitives import serialization
+from cryptography.hazmat.primitives._serialization import PBES as PBES
+from cryptography.hazmat.primitives.asymmetric import (
+    dsa,
+    ec,
+    ed25519,
+    ed448,
+    rsa,
+)
+from cryptography.hazmat.primitives.asymmetric.types import (
+    PRIVATE_KEY_TYPES,
+)
+
+__all__ = [
+    "PBES",
+    "PKCS12Certificate",
+    "PKCS12KeyAndCertificates",
+    "load_key_and_certificates",
+    "load_pkcs12",
+    "serialize_key_and_certificates",
+]
+
+_ALLOWED_PKCS12_TYPES = typing.Union[
+    rsa.RSAPrivateKey,
+    dsa.DSAPrivateKey,
+    ec.EllipticCurvePrivateKey,
+    ed25519.Ed25519PrivateKey,
+    ed448.Ed448PrivateKey,
+]
+
+
+class PKCS12Certificate:
+    def __init__(
+        self,
+        cert: x509.Certificate,
+        friendly_name: typing.Optional[bytes],
+    ):
+        if not isinstance(cert, x509.Certificate):
+            raise TypeError("Expecting x509.Certificate object")
+        if friendly_name is not None and not isinstance(friendly_name, bytes):
+            raise TypeError("friendly_name must be bytes or None")
+        self._cert = cert
+        self._friendly_name = friendly_name
+
+    @property
+    def friendly_name(self) -> typing.Optional[bytes]:
+        return self._friendly_name
+
+    @property
+    def certificate(self) -> x509.Certificate:
+        return self._cert
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, PKCS12Certificate):
+            return NotImplemented
+
+        return (
+            self.certificate == other.certificate
+            and self.friendly_name == other.friendly_name
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.certificate, self.friendly_name))
+
+    def __repr__(self) -> str:
+        return "<PKCS12Certificate({}, friendly_name={!r})>".format(
+            self.certificate, self.friendly_name
+        )
+
+
+class PKCS12KeyAndCertificates:
+    def __init__(
+        self,
+        key: typing.Optional[PRIVATE_KEY_TYPES],
+        cert: typing.Optional[PKCS12Certificate],
+        additional_certs: typing.List[PKCS12Certificate],
+    ):
+        if key is not None and not isinstance(
+            key,
+            (
+                rsa.RSAPrivateKey,
+                dsa.DSAPrivateKey,
+                ec.EllipticCurvePrivateKey,
+                ed25519.Ed25519PrivateKey,
+                ed448.Ed448PrivateKey,
+            ),
+        ):
+            raise TypeError(
+                "Key must be RSA, DSA, EllipticCurve, ED25519, or ED448"
+                " private key, or None."
+            )
+        if cert is not None and not isinstance(cert, PKCS12Certificate):
+            raise TypeError("cert must be a PKCS12Certificate object or None")
+        if not all(
+            isinstance(add_cert, PKCS12Certificate)
+            for add_cert in additional_certs
+        ):
+            raise TypeError(
+                "all values in additional_certs must be PKCS12Certificate"
+                " objects"
+            )
+        self._key = key
+        self._cert = cert
+        self._additional_certs = additional_certs
+
+    @property
+    def key(self) -> typing.Optional[PRIVATE_KEY_TYPES]:
+        return self._key
+
+    @property
+    def cert(self) -> typing.Optional[PKCS12Certificate]:
+        return self._cert
+
+    @property
+    def additional_certs(self) -> typing.List[PKCS12Certificate]:
+        return self._additional_certs
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, PKCS12KeyAndCertificates):
+            return NotImplemented
+
+        return (
+            self.key == other.key
+            and self.cert == other.cert
+            and self.additional_certs == other.additional_certs
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.key, self.cert, tuple(self.additional_certs)))
+
+    def __repr__(self) -> str:
+        fmt = (
+            "<PKCS12KeyAndCertificates(key={}, cert={}, additional_certs={})>"
+        )
+        return fmt.format(self.key, self.cert, self.additional_certs)
+
+
+def load_key_and_certificates(
+    data: bytes,
+    password: typing.Optional[bytes],
+    backend: typing.Any = None,
+) -> typing.Tuple[
+    typing.Optional[PRIVATE_KEY_TYPES],
+    typing.Optional[x509.Certificate],
+    typing.List[x509.Certificate],
+]:
+    from cryptography.hazmat.backends.openssl.backend import backend as ossl
+
+    return ossl.load_key_and_certificates_from_pkcs12(data, password)
+
+
+def load_pkcs12(
+    data: bytes,
+    password: typing.Optional[bytes],
+    backend: typing.Any = None,
+) -> PKCS12KeyAndCertificates:
+    from cryptography.hazmat.backends.openssl.backend import backend as ossl
+
+    return ossl.load_pkcs12(data, password)
+
+
+_PKCS12_CAS_TYPES = typing.Union[
+    x509.Certificate,
+    PKCS12Certificate,
+]
+
+
+def serialize_key_and_certificates(
+    name: typing.Optional[bytes],
+    key: typing.Optional[_ALLOWED_PKCS12_TYPES],
+    cert: typing.Optional[x509.Certificate],
+    cas: typing.Optional[typing.Iterable[_PKCS12_CAS_TYPES]],
+    encryption_algorithm: serialization.KeySerializationEncryption,
+) -> bytes:
+    if key is not None and not isinstance(
+        key,
+        (
+            rsa.RSAPrivateKey,
+            dsa.DSAPrivateKey,
+            ec.EllipticCurvePrivateKey,
+            ed25519.Ed25519PrivateKey,
+            ed448.Ed448PrivateKey,
+        ),
+    ):
+        raise TypeError(
+            "Key must be RSA, DSA, EllipticCurve, ED25519, or ED448"
+            " private key, or None."
+ ) + if cert is not None and not isinstance(cert, x509.Certificate): + raise TypeError("cert must be a certificate or None") + + if cas is not None: + cas = list(cas) + if not all( + isinstance( + val, + ( + x509.Certificate, + PKCS12Certificate, + ), + ) + for val in cas + ): + raise TypeError("all values in cas must be certificates") + + if not isinstance( + encryption_algorithm, serialization.KeySerializationEncryption + ): + raise TypeError( + "Key encryption algorithm must be a " + "KeySerializationEncryption instance" + ) + + if key is None and cert is None and not cas: + raise ValueError("You must supply at least one of key, cert, or cas") + + from cryptography.hazmat.backends.openssl.backend import backend + + return backend.serialize_key_and_certificates_to_pkcs12( + name, key, cert, cas, encryption_algorithm + ) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/pkcs7.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/pkcs7.py new file mode 100644 index 00000000..66414168 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/pkcs7.py @@ -0,0 +1,180 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography import utils +from cryptography import x509 +from cryptography.hazmat.primitives import hashes, serialization +from cryptography.hazmat.primitives.asymmetric import ec, rsa +from cryptography.utils import _check_byteslike + + +def load_pem_pkcs7_certificates(data: bytes) -> typing.List[x509.Certificate]: + from cryptography.hazmat.backends.openssl.backend import backend + + return backend.load_pem_pkcs7_certificates(data) + + +def load_der_pkcs7_certificates(data: bytes) -> typing.List[x509.Certificate]: + from cryptography.hazmat.backends.openssl.backend import backend + + return backend.load_der_pkcs7_certificates(data) + + +def serialize_certificates( + certs: typing.List[x509.Certificate], + encoding: serialization.Encoding, +) -> bytes: + from cryptography.hazmat.backends.openssl.backend import backend + + return backend.pkcs7_serialize_certificates(certs, encoding) + + +_ALLOWED_PKCS7_HASH_TYPES = typing.Union[ + hashes.SHA1, + hashes.SHA224, + hashes.SHA256, + hashes.SHA384, + hashes.SHA512, +] + +_ALLOWED_PRIVATE_KEY_TYPES = typing.Union[ + rsa.RSAPrivateKey, ec.EllipticCurvePrivateKey +] + + +class PKCS7Options(utils.Enum): + Text = "Add text/plain MIME type" + Binary = "Don't translate input data into canonical MIME format" + DetachedSignature = "Don't embed data in the PKCS7 structure" + NoCapabilities = "Don't embed SMIME capabilities" + NoAttributes = "Don't embed authenticatedAttributes" + NoCerts = "Don't embed signer certificate" + + +class PKCS7SignatureBuilder: + def __init__( + self, + data: typing.Optional[bytes] = None, + signers: typing.List[ + typing.Tuple[ + x509.Certificate, + _ALLOWED_PRIVATE_KEY_TYPES, + _ALLOWED_PKCS7_HASH_TYPES, + ] + ] = [], + additional_certs: typing.List[x509.Certificate] = [], + ): + self._data = data + self._signers = signers + self._additional_certs = additional_certs + + def set_data(self, data: bytes) -> "PKCS7SignatureBuilder": + _check_byteslike("data", data) + if self._data is not None: + raise ValueError("data may only be set once") + + return PKCS7SignatureBuilder(data, self._signers) + + def add_signer( + self, + certificate: x509.Certificate, + 
private_key: _ALLOWED_PRIVATE_KEY_TYPES, + hash_algorithm: _ALLOWED_PKCS7_HASH_TYPES, + ) -> "PKCS7SignatureBuilder": + if not isinstance( + hash_algorithm, + ( + hashes.SHA1, + hashes.SHA224, + hashes.SHA256, + hashes.SHA384, + hashes.SHA512, + ), + ): + raise TypeError( + "hash_algorithm must be one of hashes.SHA1, SHA224, " + "SHA256, SHA384, or SHA512" + ) + if not isinstance(certificate, x509.Certificate): + raise TypeError("certificate must be a x509.Certificate") + + if not isinstance( + private_key, (rsa.RSAPrivateKey, ec.EllipticCurvePrivateKey) + ): + raise TypeError("Only RSA & EC keys are supported at this time.") + + return PKCS7SignatureBuilder( + self._data, + self._signers + [(certificate, private_key, hash_algorithm)], + ) + + def add_certificate( + self, certificate: x509.Certificate + ) -> "PKCS7SignatureBuilder": + if not isinstance(certificate, x509.Certificate): + raise TypeError("certificate must be a x509.Certificate") + + return PKCS7SignatureBuilder( + self._data, self._signers, self._additional_certs + [certificate] + ) + + def sign( + self, + encoding: serialization.Encoding, + options: typing.Iterable[PKCS7Options], + backend: typing.Any = None, + ) -> bytes: + if len(self._signers) == 0: + raise ValueError("Must have at least one signer") + if self._data is None: + raise ValueError("You must add data to sign") + options = list(options) + if not all(isinstance(x, PKCS7Options) for x in options): + raise ValueError("options must be from the PKCS7Options enum") + if encoding not in ( + serialization.Encoding.PEM, + serialization.Encoding.DER, + serialization.Encoding.SMIME, + ): + raise ValueError( + "Must be PEM, DER, or SMIME from the Encoding enum" + ) + + # Text is a meaningless option unless it is accompanied by + # DetachedSignature + if ( + PKCS7Options.Text in options + and PKCS7Options.DetachedSignature not in options + ): + raise ValueError( + "When passing the Text option you must also pass " + "DetachedSignature" + ) + + if PKCS7Options.Text in options and encoding in ( + serialization.Encoding.DER, + serialization.Encoding.PEM, + ): + raise ValueError( + "The Text option is only available for SMIME serialization" + ) + + # No attributes implies no capabilities so we'll error if you try to + # pass both. + if ( + PKCS7Options.NoAttributes in options + and PKCS7Options.NoCapabilities in options + ): + raise ValueError( + "NoAttributes is a superset of NoCapabilities. Do not pass " + "both values." + ) + + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + return ossl.pkcs7_sign(self, encoding, options) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/ssh.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/ssh.py new file mode 100644 index 00000000..e06b8230 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/ssh.py @@ -0,0 +1,756 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
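+# Usage sketch for the loaders below, assuming ``priv_bytes`` holds an
+# unencrypted "-----BEGIN OPENSSH PRIVATE KEY-----" blob and ``pub_line``
+# a one-line "ssh-ed25519 AAAA..." public key:
+#
+#     from cryptography.hazmat.primitives.serialization import (
+#         load_ssh_private_key,
+#         load_ssh_public_key,
+#     )
+#
+#     private_key = load_ssh_private_key(priv_bytes, password=None)
+#     public_key = load_ssh_public_key(pub_line)
+#
+# Decrypting password-protected private keys additionally requires the
+# optional bcrypt module.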
+ + +import binascii +import os +import re +import typing +from base64 import encodebytes as _base64_encode + +from cryptography import utils +from cryptography.exceptions import UnsupportedAlgorithm +from cryptography.hazmat.primitives.asymmetric import dsa, ec, ed25519, rsa +from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes +from cryptography.hazmat.primitives.serialization import ( + Encoding, + KeySerializationEncryption, + NoEncryption, + PrivateFormat, + PublicFormat, + _KeySerializationEncryption, +) + +try: + from bcrypt import kdf as _bcrypt_kdf + + _bcrypt_supported = True +except ImportError: + _bcrypt_supported = False + + def _bcrypt_kdf( + password: bytes, + salt: bytes, + desired_key_bytes: int, + rounds: int, + ignore_few_rounds: bool = False, + ) -> bytes: + raise UnsupportedAlgorithm("Need bcrypt module") + + +_SSH_ED25519 = b"ssh-ed25519" +_SSH_RSA = b"ssh-rsa" +_SSH_DSA = b"ssh-dss" +_ECDSA_NISTP256 = b"ecdsa-sha2-nistp256" +_ECDSA_NISTP384 = b"ecdsa-sha2-nistp384" +_ECDSA_NISTP521 = b"ecdsa-sha2-nistp521" +_CERT_SUFFIX = b"-cert-v01@openssh.com" + +_SSH_PUBKEY_RC = re.compile(rb"\A(\S+)[ \t]+(\S+)") +_SK_MAGIC = b"openssh-key-v1\0" +_SK_START = b"-----BEGIN OPENSSH PRIVATE KEY-----" +_SK_END = b"-----END OPENSSH PRIVATE KEY-----" +_BCRYPT = b"bcrypt" +_NONE = b"none" +_DEFAULT_CIPHER = b"aes256-ctr" +_DEFAULT_ROUNDS = 16 + +# re is only way to work on bytes-like data +_PEM_RC = re.compile(_SK_START + b"(.*?)" + _SK_END, re.DOTALL) + +# padding for max blocksize +_PADDING = memoryview(bytearray(range(1, 1 + 16))) + +# ciphers that are actually used in key wrapping +_SSH_CIPHERS: typing.Dict[ + bytes, + typing.Tuple[ + typing.Type[algorithms.AES], + int, + typing.Union[typing.Type[modes.CTR], typing.Type[modes.CBC]], + int, + ], +] = { + b"aes256-ctr": (algorithms.AES, 32, modes.CTR, 16), + b"aes256-cbc": (algorithms.AES, 32, modes.CBC, 16), +} + +# map local curve name to key type +_ECDSA_KEY_TYPE = { + "secp256r1": _ECDSA_NISTP256, + "secp384r1": _ECDSA_NISTP384, + "secp521r1": _ECDSA_NISTP521, +} + + +def _ecdsa_key_type(public_key: ec.EllipticCurvePublicKey) -> bytes: + """Return SSH key_type and curve_name for private key.""" + curve = public_key.curve + if curve.name not in _ECDSA_KEY_TYPE: + raise ValueError( + f"Unsupported curve for ssh private key: {curve.name!r}" + ) + return _ECDSA_KEY_TYPE[curve.name] + + +def _ssh_pem_encode( + data: bytes, + prefix: bytes = _SK_START + b"\n", + suffix: bytes = _SK_END + b"\n", +) -> bytes: + return b"".join([prefix, _base64_encode(data), suffix]) + + +def _check_block_size(data: bytes, block_len: int) -> None: + """Require data to be full blocks""" + if not data or len(data) % block_len != 0: + raise ValueError("Corrupt data: missing padding") + + +def _check_empty(data: bytes) -> None: + """All data should have been parsed.""" + if data: + raise ValueError("Corrupt data: unparsed data") + + +def _init_cipher( + ciphername: bytes, + password: typing.Optional[bytes], + salt: bytes, + rounds: int, +) -> Cipher[typing.Union[modes.CBC, modes.CTR]]: + """Generate key + iv and return cipher.""" + if not password: + raise ValueError("Key is password-protected.") + + algo, key_len, mode, iv_len = _SSH_CIPHERS[ciphername] + seed = _bcrypt_kdf(password, salt, key_len + iv_len, rounds, True) + return Cipher(algo(seed[:key_len]), mode(seed[key_len:])) + + +def _get_u32(data: memoryview) -> typing.Tuple[int, memoryview]: + """Uint32""" + if len(data) < 4: + raise ValueError("Invalid data") + return 
int.from_bytes(data[:4], byteorder="big"), data[4:] + + +def _get_u64(data: memoryview) -> typing.Tuple[int, memoryview]: + """Uint64""" + if len(data) < 8: + raise ValueError("Invalid data") + return int.from_bytes(data[:8], byteorder="big"), data[8:] + + +def _get_sshstr(data: memoryview) -> typing.Tuple[memoryview, memoryview]: + """Bytes with u32 length prefix""" + n, data = _get_u32(data) + if n > len(data): + raise ValueError("Invalid data") + return data[:n], data[n:] + + +def _get_mpint(data: memoryview) -> typing.Tuple[int, memoryview]: + """Big integer.""" + val, data = _get_sshstr(data) + if val and val[0] > 0x7F: + raise ValueError("Invalid data") + return int.from_bytes(val, "big"), data + + +def _to_mpint(val: int) -> bytes: + """Storage format for signed bigint.""" + if val < 0: + raise ValueError("negative mpint not allowed") + if not val: + return b"" + nbytes = (val.bit_length() + 8) // 8 + return utils.int_to_bytes(val, nbytes) + + +class _FragList: + """Build recursive structure without data copy.""" + + flist: typing.List[bytes] + + def __init__(self, init: typing.List[bytes] = None) -> None: + self.flist = [] + if init: + self.flist.extend(init) + + def put_raw(self, val: bytes) -> None: + """Add plain bytes""" + self.flist.append(val) + + def put_u32(self, val: int) -> None: + """Big-endian uint32""" + self.flist.append(val.to_bytes(length=4, byteorder="big")) + + def put_sshstr(self, val: typing.Union[bytes, "_FragList"]) -> None: + """Bytes prefixed with u32 length""" + if isinstance(val, (bytes, memoryview, bytearray)): + self.put_u32(len(val)) + self.flist.append(val) + else: + self.put_u32(val.size()) + self.flist.extend(val.flist) + + def put_mpint(self, val: int) -> None: + """Big-endian bigint prefixed with u32 length""" + self.put_sshstr(_to_mpint(val)) + + def size(self) -> int: + """Current number of bytes""" + return sum(map(len, self.flist)) + + def render(self, dstbuf: memoryview, pos: int = 0) -> int: + """Write into bytearray""" + for frag in self.flist: + flen = len(frag) + start, pos = pos, pos + flen + dstbuf[start:pos] = frag + return pos + + def tobytes(self) -> bytes: + """Return as bytes""" + buf = memoryview(bytearray(self.size())) + self.render(buf) + return buf.tobytes() + + +class _SSHFormatRSA: + """Format for RSA keys. 
+ + Public: + mpint e, n + Private: + mpint n, e, d, iqmp, p, q + """ + + def get_public(self, data: memoryview): + """RSA public fields""" + e, data = _get_mpint(data) + n, data = _get_mpint(data) + return (e, n), data + + def load_public( + self, data: memoryview + ) -> typing.Tuple[rsa.RSAPublicKey, memoryview]: + """Make RSA public key from data.""" + (e, n), data = self.get_public(data) + public_numbers = rsa.RSAPublicNumbers(e, n) + public_key = public_numbers.public_key() + return public_key, data + + def load_private( + self, data: memoryview, pubfields + ) -> typing.Tuple[rsa.RSAPrivateKey, memoryview]: + """Make RSA private key from data.""" + n, data = _get_mpint(data) + e, data = _get_mpint(data) + d, data = _get_mpint(data) + iqmp, data = _get_mpint(data) + p, data = _get_mpint(data) + q, data = _get_mpint(data) + + if (e, n) != pubfields: + raise ValueError("Corrupt data: rsa field mismatch") + dmp1 = rsa.rsa_crt_dmp1(d, p) + dmq1 = rsa.rsa_crt_dmq1(d, q) + public_numbers = rsa.RSAPublicNumbers(e, n) + private_numbers = rsa.RSAPrivateNumbers( + p, q, d, dmp1, dmq1, iqmp, public_numbers + ) + private_key = private_numbers.private_key() + return private_key, data + + def encode_public( + self, public_key: rsa.RSAPublicKey, f_pub: _FragList + ) -> None: + """Write RSA public key""" + pubn = public_key.public_numbers() + f_pub.put_mpint(pubn.e) + f_pub.put_mpint(pubn.n) + + def encode_private( + self, private_key: rsa.RSAPrivateKey, f_priv: _FragList + ) -> None: + """Write RSA private key""" + private_numbers = private_key.private_numbers() + public_numbers = private_numbers.public_numbers + + f_priv.put_mpint(public_numbers.n) + f_priv.put_mpint(public_numbers.e) + + f_priv.put_mpint(private_numbers.d) + f_priv.put_mpint(private_numbers.iqmp) + f_priv.put_mpint(private_numbers.p) + f_priv.put_mpint(private_numbers.q) + + +class _SSHFormatDSA: + """Format for DSA keys. 
+ + Public: + mpint p, q, g, y + Private: + mpint p, q, g, y, x + """ + + def get_public( + self, data: memoryview + ) -> typing.Tuple[typing.Tuple, memoryview]: + """DSA public fields""" + p, data = _get_mpint(data) + q, data = _get_mpint(data) + g, data = _get_mpint(data) + y, data = _get_mpint(data) + return (p, q, g, y), data + + def load_public( + self, data: memoryview + ) -> typing.Tuple[dsa.DSAPublicKey, memoryview]: + """Make DSA public key from data.""" + (p, q, g, y), data = self.get_public(data) + parameter_numbers = dsa.DSAParameterNumbers(p, q, g) + public_numbers = dsa.DSAPublicNumbers(y, parameter_numbers) + self._validate(public_numbers) + public_key = public_numbers.public_key() + return public_key, data + + def load_private( + self, data: memoryview, pubfields + ) -> typing.Tuple[dsa.DSAPrivateKey, memoryview]: + """Make DSA private key from data.""" + (p, q, g, y), data = self.get_public(data) + x, data = _get_mpint(data) + + if (p, q, g, y) != pubfields: + raise ValueError("Corrupt data: dsa field mismatch") + parameter_numbers = dsa.DSAParameterNumbers(p, q, g) + public_numbers = dsa.DSAPublicNumbers(y, parameter_numbers) + self._validate(public_numbers) + private_numbers = dsa.DSAPrivateNumbers(x, public_numbers) + private_key = private_numbers.private_key() + return private_key, data + + def encode_public( + self, public_key: dsa.DSAPublicKey, f_pub: _FragList + ) -> None: + """Write DSA public key""" + public_numbers = public_key.public_numbers() + parameter_numbers = public_numbers.parameter_numbers + self._validate(public_numbers) + + f_pub.put_mpint(parameter_numbers.p) + f_pub.put_mpint(parameter_numbers.q) + f_pub.put_mpint(parameter_numbers.g) + f_pub.put_mpint(public_numbers.y) + + def encode_private( + self, private_key: dsa.DSAPrivateKey, f_priv: _FragList + ) -> None: + """Write DSA private key""" + self.encode_public(private_key.public_key(), f_priv) + f_priv.put_mpint(private_key.private_numbers().x) + + def _validate(self, public_numbers: dsa.DSAPublicNumbers) -> None: + parameter_numbers = public_numbers.parameter_numbers + if parameter_numbers.p.bit_length() != 1024: + raise ValueError("SSH supports only 1024 bit DSA keys") + + +class _SSHFormatECDSA: + """Format for ECDSA keys. 
+ + Public: + str curve + bytes point + Private: + str curve + bytes point + mpint secret + """ + + def __init__(self, ssh_curve_name: bytes, curve: ec.EllipticCurve): + self.ssh_curve_name = ssh_curve_name + self.curve = curve + + def get_public( + self, data: memoryview + ) -> typing.Tuple[typing.Tuple, memoryview]: + """ECDSA public fields""" + curve, data = _get_sshstr(data) + point, data = _get_sshstr(data) + if curve != self.ssh_curve_name: + raise ValueError("Curve name mismatch") + if point[0] != 4: + raise NotImplementedError("Need uncompressed point") + return (curve, point), data + + def load_public( + self, data: memoryview + ) -> typing.Tuple[ec.EllipticCurvePublicKey, memoryview]: + """Make ECDSA public key from data.""" + (curve_name, point), data = self.get_public(data) + public_key = ec.EllipticCurvePublicKey.from_encoded_point( + self.curve, point.tobytes() + ) + return public_key, data + + def load_private( + self, data: memoryview, pubfields + ) -> typing.Tuple[ec.EllipticCurvePrivateKey, memoryview]: + """Make ECDSA private key from data.""" + (curve_name, point), data = self.get_public(data) + secret, data = _get_mpint(data) + + if (curve_name, point) != pubfields: + raise ValueError("Corrupt data: ecdsa field mismatch") + private_key = ec.derive_private_key(secret, self.curve) + return private_key, data + + def encode_public( + self, public_key: ec.EllipticCurvePublicKey, f_pub: _FragList + ) -> None: + """Write ECDSA public key""" + point = public_key.public_bytes( + Encoding.X962, PublicFormat.UncompressedPoint + ) + f_pub.put_sshstr(self.ssh_curve_name) + f_pub.put_sshstr(point) + + def encode_private( + self, private_key: ec.EllipticCurvePrivateKey, f_priv: _FragList + ) -> None: + """Write ECDSA private key""" + public_key = private_key.public_key() + private_numbers = private_key.private_numbers() + + self.encode_public(public_key, f_priv) + f_priv.put_mpint(private_numbers.private_value) + + +class _SSHFormatEd25519: + """Format for Ed25519 keys. 
+ + Public: + bytes point + Private: + bytes point + bytes secret_and_point + """ + + def get_public( + self, data: memoryview + ) -> typing.Tuple[typing.Tuple, memoryview]: + """Ed25519 public fields""" + point, data = _get_sshstr(data) + return (point,), data + + def load_public( + self, data: memoryview + ) -> typing.Tuple[ed25519.Ed25519PublicKey, memoryview]: + """Make Ed25519 public key from data.""" + (point,), data = self.get_public(data) + public_key = ed25519.Ed25519PublicKey.from_public_bytes( + point.tobytes() + ) + return public_key, data + + def load_private( + self, data: memoryview, pubfields + ) -> typing.Tuple[ed25519.Ed25519PrivateKey, memoryview]: + """Make Ed25519 private key from data.""" + (point,), data = self.get_public(data) + keypair, data = _get_sshstr(data) + + secret = keypair[:32] + point2 = keypair[32:] + if point != point2 or (point,) != pubfields: + raise ValueError("Corrupt data: ed25519 field mismatch") + private_key = ed25519.Ed25519PrivateKey.from_private_bytes(secret) + return private_key, data + + def encode_public( + self, public_key: ed25519.Ed25519PublicKey, f_pub: _FragList + ) -> None: + """Write Ed25519 public key""" + raw_public_key = public_key.public_bytes( + Encoding.Raw, PublicFormat.Raw + ) + f_pub.put_sshstr(raw_public_key) + + def encode_private( + self, private_key: ed25519.Ed25519PrivateKey, f_priv: _FragList + ) -> None: + """Write Ed25519 private key""" + public_key = private_key.public_key() + raw_private_key = private_key.private_bytes( + Encoding.Raw, PrivateFormat.Raw, NoEncryption() + ) + raw_public_key = public_key.public_bytes( + Encoding.Raw, PublicFormat.Raw + ) + f_keypair = _FragList([raw_private_key, raw_public_key]) + + self.encode_public(public_key, f_priv) + f_priv.put_sshstr(f_keypair) + + +_KEY_FORMATS = { + _SSH_RSA: _SSHFormatRSA(), + _SSH_DSA: _SSHFormatDSA(), + _SSH_ED25519: _SSHFormatEd25519(), + _ECDSA_NISTP256: _SSHFormatECDSA(b"nistp256", ec.SECP256R1()), + _ECDSA_NISTP384: _SSHFormatECDSA(b"nistp384", ec.SECP384R1()), + _ECDSA_NISTP521: _SSHFormatECDSA(b"nistp521", ec.SECP521R1()), +} + + +def _lookup_kformat(key_type: bytes): + """Return valid format or throw error""" + if not isinstance(key_type, bytes): + key_type = memoryview(key_type).tobytes() + if key_type in _KEY_FORMATS: + return _KEY_FORMATS[key_type] + raise UnsupportedAlgorithm(f"Unsupported key type: {key_type!r}") + + +_SSH_PRIVATE_KEY_TYPES = typing.Union[ + ec.EllipticCurvePrivateKey, + rsa.RSAPrivateKey, + dsa.DSAPrivateKey, + ed25519.Ed25519PrivateKey, +] + + +def load_ssh_private_key( + data: bytes, + password: typing.Optional[bytes], + backend: typing.Any = None, +) -> _SSH_PRIVATE_KEY_TYPES: + """Load private key from OpenSSH custom encoding.""" + utils._check_byteslike("data", data) + if password is not None: + utils._check_bytes("password", password) + + m = _PEM_RC.search(data) + if not m: + raise ValueError("Not OpenSSH private key format") + p1 = m.start(1) + p2 = m.end(1) + data = binascii.a2b_base64(memoryview(data)[p1:p2]) + if not data.startswith(_SK_MAGIC): + raise ValueError("Not OpenSSH private key format") + data = memoryview(data)[len(_SK_MAGIC) :] + + # parse header + ciphername, data = _get_sshstr(data) + kdfname, data = _get_sshstr(data) + kdfoptions, data = _get_sshstr(data) + nkeys, data = _get_u32(data) + if nkeys != 1: + raise ValueError("Only one key supported") + + # load public key data + pubdata, data = _get_sshstr(data) + pub_key_type, pubdata = _get_sshstr(pubdata) + kformat = _lookup_kformat(pub_key_type) + 
pubfields, pubdata = kformat.get_public(pubdata) + _check_empty(pubdata) + + # load secret data + edata, data = _get_sshstr(data) + _check_empty(data) + + if (ciphername, kdfname) != (_NONE, _NONE): + ciphername_bytes = ciphername.tobytes() + if ciphername_bytes not in _SSH_CIPHERS: + raise UnsupportedAlgorithm( + f"Unsupported cipher: {ciphername_bytes!r}" + ) + if kdfname != _BCRYPT: + raise UnsupportedAlgorithm(f"Unsupported KDF: {kdfname!r}") + blklen = _SSH_CIPHERS[ciphername_bytes][3] + _check_block_size(edata, blklen) + salt, kbuf = _get_sshstr(kdfoptions) + rounds, kbuf = _get_u32(kbuf) + _check_empty(kbuf) + ciph = _init_cipher(ciphername_bytes, password, salt.tobytes(), rounds) + edata = memoryview(ciph.decryptor().update(edata)) + else: + blklen = 8 + _check_block_size(edata, blklen) + ck1, edata = _get_u32(edata) + ck2, edata = _get_u32(edata) + if ck1 != ck2: + raise ValueError("Corrupt data: broken checksum") + + # load per-key struct + key_type, edata = _get_sshstr(edata) + if key_type != pub_key_type: + raise ValueError("Corrupt data: key type mismatch") + private_key, edata = kformat.load_private(edata, pubfields) + comment, edata = _get_sshstr(edata) + + # yes, SSH does padding check *after* all other parsing is done. + # need to follow as it writes zero-byte padding too. + if edata != _PADDING[: len(edata)]: + raise ValueError("Corrupt data: invalid padding") + + return private_key + + +def _serialize_ssh_private_key( + private_key: _SSH_PRIVATE_KEY_TYPES, + password: bytes, + encryption_algorithm: KeySerializationEncryption, +) -> bytes: + """Serialize private key with OpenSSH custom encoding.""" + utils._check_bytes("password", password) + + if isinstance(private_key, ec.EllipticCurvePrivateKey): + key_type = _ecdsa_key_type(private_key.public_key()) + elif isinstance(private_key, rsa.RSAPrivateKey): + key_type = _SSH_RSA + elif isinstance(private_key, dsa.DSAPrivateKey): + key_type = _SSH_DSA + elif isinstance(private_key, ed25519.Ed25519PrivateKey): + key_type = _SSH_ED25519 + else: + raise ValueError("Unsupported key type") + kformat = _lookup_kformat(key_type) + + # setup parameters + f_kdfoptions = _FragList() + if password: + ciphername = _DEFAULT_CIPHER + blklen = _SSH_CIPHERS[ciphername][3] + kdfname = _BCRYPT + rounds = _DEFAULT_ROUNDS + if ( + isinstance(encryption_algorithm, _KeySerializationEncryption) + and encryption_algorithm._kdf_rounds is not None + ): + rounds = encryption_algorithm._kdf_rounds + salt = os.urandom(16) + f_kdfoptions.put_sshstr(salt) + f_kdfoptions.put_u32(rounds) + ciph = _init_cipher(ciphername, password, salt, rounds) + else: + ciphername = kdfname = _NONE + blklen = 8 + ciph = None + nkeys = 1 + checkval = os.urandom(4) + comment = b"" + + # encode public and private parts together + f_public_key = _FragList() + f_public_key.put_sshstr(key_type) + kformat.encode_public(private_key.public_key(), f_public_key) + + f_secrets = _FragList([checkval, checkval]) + f_secrets.put_sshstr(key_type) + kformat.encode_private(private_key, f_secrets) + f_secrets.put_sshstr(comment) + f_secrets.put_raw(_PADDING[: blklen - (f_secrets.size() % blklen)]) + + # top-level structure + f_main = _FragList() + f_main.put_raw(_SK_MAGIC) + f_main.put_sshstr(ciphername) + f_main.put_sshstr(kdfname) + f_main.put_sshstr(f_kdfoptions) + f_main.put_u32(nkeys) + f_main.put_sshstr(f_public_key) + f_main.put_sshstr(f_secrets) + + # copy result info bytearray + slen = f_secrets.size() + mlen = f_main.size() + buf = memoryview(bytearray(mlen + blklen)) + 
f_main.render(buf) + ofs = mlen - slen + + # encrypt in-place + if ciph is not None: + ciph.encryptor().update_into(buf[ofs:mlen], buf[ofs:]) + + return _ssh_pem_encode(buf[:mlen]) + + +_SSH_PUBLIC_KEY_TYPES = typing.Union[ + ec.EllipticCurvePublicKey, + rsa.RSAPublicKey, + dsa.DSAPublicKey, + ed25519.Ed25519PublicKey, +] + + +def load_ssh_public_key( + data: bytes, backend: typing.Any = None +) -> _SSH_PUBLIC_KEY_TYPES: + """Load public key from OpenSSH one-line format.""" + utils._check_byteslike("data", data) + + m = _SSH_PUBKEY_RC.match(data) + if not m: + raise ValueError("Invalid line format") + key_type = orig_key_type = m.group(1) + key_body = m.group(2) + with_cert = False + if _CERT_SUFFIX == key_type[-len(_CERT_SUFFIX) :]: + with_cert = True + key_type = key_type[: -len(_CERT_SUFFIX)] + kformat = _lookup_kformat(key_type) + + try: + rest = memoryview(binascii.a2b_base64(key_body)) + except (TypeError, binascii.Error): + raise ValueError("Invalid key format") + + inner_key_type, rest = _get_sshstr(rest) + if inner_key_type != orig_key_type: + raise ValueError("Invalid key format") + if with_cert: + nonce, rest = _get_sshstr(rest) + public_key, rest = kformat.load_public(rest) + if with_cert: + serial, rest = _get_u64(rest) + cctype, rest = _get_u32(rest) + key_id, rest = _get_sshstr(rest) + principals, rest = _get_sshstr(rest) + valid_after, rest = _get_u64(rest) + valid_before, rest = _get_u64(rest) + crit_options, rest = _get_sshstr(rest) + extensions, rest = _get_sshstr(rest) + reserved, rest = _get_sshstr(rest) + sig_key, rest = _get_sshstr(rest) + signature, rest = _get_sshstr(rest) + _check_empty(rest) + return public_key + + +def serialize_ssh_public_key(public_key: _SSH_PUBLIC_KEY_TYPES) -> bytes: + """One-line public key format for OpenSSH""" + if isinstance(public_key, ec.EllipticCurvePublicKey): + key_type = _ecdsa_key_type(public_key) + elif isinstance(public_key, rsa.RSAPublicKey): + key_type = _SSH_RSA + elif isinstance(public_key, dsa.DSAPublicKey): + key_type = _SSH_DSA + elif isinstance(public_key, ed25519.Ed25519PublicKey): + key_type = _SSH_ED25519 + else: + raise ValueError("Unsupported key type") + kformat = _lookup_kformat(key_type) + + f_pub = _FragList() + f_pub.put_sshstr(key_type) + kformat.encode_public(public_key, f_pub) + + pub = binascii.b2a_base64(f_pub.tobytes()).strip() + return b"".join([key_type, b" ", pub]) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/twofactor/__init__.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/twofactor/__init__.py new file mode 100644 index 00000000..8a8b30f2 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/twofactor/__init__.py @@ -0,0 +1,7 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +class InvalidToken(Exception): + pass diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/twofactor/hotp.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/twofactor/hotp.py new file mode 100644 index 00000000..9730af2b --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/twofactor/hotp.py @@ -0,0 +1,92 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
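+# Usage sketch for the HOTP class below, assuming a random 160-bit shared
+# secret and 6-digit codes:
+#
+#     import os
+#
+#     from cryptography.hazmat.primitives.hashes import SHA1
+#     from cryptography.hazmat.primitives.twofactor.hotp import HOTP
+#
+#     key = os.urandom(20)
+#     hotp = HOTP(key, 6, SHA1())
+#     code = hotp.generate(0)   # e.g. b"123456" for counter value 0
+#     hotp.verify(code, 0)      # raises InvalidToken on mismatch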
+ + +import base64 +import typing +from urllib.parse import quote, urlencode + +from cryptography.hazmat.primitives import constant_time, hmac +from cryptography.hazmat.primitives.hashes import SHA1, SHA256, SHA512 +from cryptography.hazmat.primitives.twofactor import InvalidToken + + +_ALLOWED_HASH_TYPES = typing.Union[SHA1, SHA256, SHA512] + + +def _generate_uri( + hotp: "HOTP", + type_name: str, + account_name: str, + issuer: typing.Optional[str], + extra_parameters: typing.List[typing.Tuple[str, int]], +) -> str: + parameters = [ + ("digits", hotp._length), + ("secret", base64.b32encode(hotp._key)), + ("algorithm", hotp._algorithm.name.upper()), + ] + + if issuer is not None: + parameters.append(("issuer", issuer)) + + parameters.extend(extra_parameters) + + label = ( + f"{quote(issuer)}:{quote(account_name)}" + if issuer + else quote(account_name) + ) + return f"otpauth://{type_name}/{label}?{urlencode(parameters)}" + + +class HOTP: + def __init__( + self, + key: bytes, + length: int, + algorithm: _ALLOWED_HASH_TYPES, + backend: typing.Any = None, + enforce_key_length: bool = True, + ) -> None: + if len(key) < 16 and enforce_key_length is True: + raise ValueError("Key length has to be at least 128 bits.") + + if not isinstance(length, int): + raise TypeError("Length parameter must be an integer type.") + + if length < 6 or length > 8: + raise ValueError("Length of HOTP has to be between 6 to 8.") + + if not isinstance(algorithm, (SHA1, SHA256, SHA512)): + raise TypeError("Algorithm must be SHA1, SHA256 or SHA512.") + + self._key = key + self._length = length + self._algorithm = algorithm + + def generate(self, counter: int) -> bytes: + truncated_value = self._dynamic_truncate(counter) + hotp = truncated_value % (10**self._length) + return "{0:0{1}}".format(hotp, self._length).encode() + + def verify(self, hotp: bytes, counter: int) -> None: + if not constant_time.bytes_eq(self.generate(counter), hotp): + raise InvalidToken("Supplied HOTP value does not match.") + + def _dynamic_truncate(self, counter: int) -> int: + ctx = hmac.HMAC(self._key, self._algorithm) + ctx.update(counter.to_bytes(length=8, byteorder="big")) + hmac_value = ctx.finalize() + + offset = hmac_value[len(hmac_value) - 1] & 0b1111 + p = hmac_value[offset : offset + 4] + return int.from_bytes(p, byteorder="big") & 0x7FFFFFFF + + def get_provisioning_uri( + self, account_name: str, counter: int, issuer: typing.Optional[str] + ) -> str: + return _generate_uri( + self, "hotp", account_name, issuer, [("counter", int(counter))] + ) diff --git a/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/twofactor/totp.py b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/twofactor/totp.py new file mode 100644 index 00000000..317baba3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/hazmat/primitives/twofactor/totp.py @@ -0,0 +1,48 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
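+# Usage sketch for the TOTP class below, assuming a random 160-bit shared
+# secret, 6-digit codes, and the common 30-second time step:
+#
+#     import os
+#     import time
+#
+#     from cryptography.hazmat.primitives.hashes import SHA1
+#     from cryptography.hazmat.primitives.twofactor.totp import TOTP
+#
+#     key = os.urandom(20)
+#     totp = TOTP(key, 6, SHA1(), 30)
+#     code = totp.generate(time.time())
+#     totp.verify(code, int(time.time()))  # raises InvalidToken on mismatch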
+ +import typing + +from cryptography.hazmat.primitives import constant_time +from cryptography.hazmat.primitives.twofactor import InvalidToken +from cryptography.hazmat.primitives.twofactor.hotp import ( + HOTP, + _ALLOWED_HASH_TYPES, + _generate_uri, +) + + +class TOTP: + def __init__( + self, + key: bytes, + length: int, + algorithm: _ALLOWED_HASH_TYPES, + time_step: int, + backend: typing.Any = None, + enforce_key_length: bool = True, + ): + self._time_step = time_step + self._hotp = HOTP( + key, length, algorithm, enforce_key_length=enforce_key_length + ) + + def generate(self, time: typing.Union[int, float]) -> bytes: + counter = int(time / self._time_step) + return self._hotp.generate(counter) + + def verify(self, totp: bytes, time: int) -> None: + if not constant_time.bytes_eq(self.generate(time), totp): + raise InvalidToken("Supplied TOTP value does not match.") + + def get_provisioning_uri( + self, account_name: str, issuer: typing.Optional[str] + ) -> str: + return _generate_uri( + self._hotp, + "totp", + account_name, + issuer, + [("period", int(self._time_step))], + ) diff --git a/env/lib/python3.8/site-packages/cryptography/py.typed b/env/lib/python3.8/site-packages/cryptography/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/cryptography/utils.py b/env/lib/python3.8/site-packages/cryptography/utils.py new file mode 100644 index 00000000..67d813be --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/utils.py @@ -0,0 +1,158 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc +import enum +import inspect +import sys +import types +import typing +import warnings + + +# We use a UserWarning subclass, instead of DeprecationWarning, because CPython +# decided deprecation warnings should be invisble by default. +class CryptographyDeprecationWarning(UserWarning): + pass + + +# Several APIs were deprecated with no specific end-of-life date because of the +# ubiquity of their use. They should not be removed until we agree on when that +# cycle ends. 
+PersistentlyDeprecated2019 = CryptographyDeprecationWarning +DeprecatedIn35 = CryptographyDeprecationWarning +DeprecatedIn36 = CryptographyDeprecationWarning +DeprecatedIn37 = CryptographyDeprecationWarning +DeprecatedIn38 = CryptographyDeprecationWarning + + +def _check_bytes(name: str, value: bytes) -> None: + if not isinstance(value, bytes): + raise TypeError("{} must be bytes".format(name)) + + +def _check_byteslike(name: str, value: bytes) -> None: + try: + memoryview(value) + except TypeError: + raise TypeError("{} must be bytes-like".format(name)) + + +def int_to_bytes(integer: int, length: typing.Optional[int] = None) -> bytes: + return integer.to_bytes( + length or (integer.bit_length() + 7) // 8 or 1, "big" + ) + + +class InterfaceNotImplemented(Exception): + pass + + +def strip_annotation(signature: inspect.Signature) -> inspect.Signature: + return inspect.Signature( + [ + param.replace(annotation=inspect.Parameter.empty) + for param in signature.parameters.values() + ] + ) + + +def verify_interface( + iface: abc.ABCMeta, klass: object, *, check_annotations: bool = False +): + for method in iface.__abstractmethods__: + if not hasattr(klass, method): + raise InterfaceNotImplemented( + "{} is missing a {!r} method".format(klass, method) + ) + if isinstance(getattr(iface, method), abc.abstractproperty): + # Can't properly verify these yet. + continue + sig = inspect.signature(getattr(iface, method)) + actual = inspect.signature(getattr(klass, method)) + if check_annotations: + ok = sig == actual + else: + ok = strip_annotation(sig) == strip_annotation(actual) + if not ok: + raise InterfaceNotImplemented( + "{}.{}'s signature differs from the expected. Expected: " + "{!r}. Received: {!r}".format(klass, method, sig, actual) + ) + + +class _DeprecatedValue: + def __init__(self, value: object, message: str, warning_class): + self.value = value + self.message = message + self.warning_class = warning_class + + +class _ModuleWithDeprecations(types.ModuleType): + def __init__(self, module: types.ModuleType): + super().__init__(module.__name__) + self.__dict__["_module"] = module + + def __getattr__(self, attr: str) -> object: + obj = getattr(self._module, attr) + if isinstance(obj, _DeprecatedValue): + warnings.warn(obj.message, obj.warning_class, stacklevel=2) + obj = obj.value + return obj + + def __setattr__(self, attr: str, value: object) -> None: + setattr(self._module, attr, value) + + def __delattr__(self, attr: str) -> None: + obj = getattr(self._module, attr) + if isinstance(obj, _DeprecatedValue): + warnings.warn(obj.message, obj.warning_class, stacklevel=2) + + delattr(self._module, attr) + + def __dir__(self) -> typing.Sequence[str]: + return ["_module"] + dir(self._module) + + +def deprecated( + value: object, + module_name: str, + message: str, + warning_class: typing.Type[Warning], + name: typing.Optional[str] = None, +) -> _DeprecatedValue: + module = sys.modules[module_name] + if not isinstance(module, _ModuleWithDeprecations): + sys.modules[module_name] = module = _ModuleWithDeprecations(module) + dv = _DeprecatedValue(value, message, warning_class) + # Maintain backwards compatibility with `name is None` for pyOpenSSL. 
+ if name is not None: + setattr(module, name, dv) + return dv + + +def cached_property(func: typing.Callable) -> property: + cached_name = "_cached_{}".format(func) + sentinel = object() + + def inner(instance: object): + cache = getattr(instance, cached_name, sentinel) + if cache is not sentinel: + return cache + result = func(instance) + setattr(instance, cached_name, result) + return result + + return property(inner) + + +# Python 3.10 changed representation of enums. We use well-defined object +# representation and string representation from Python 3.9. +class Enum(enum.Enum): + def __repr__(self) -> str: + return f"<{self.__class__.__name__}.{self._name_}: {self._value_!r}>" + + def __str__(self) -> str: + return f"{self.__class__.__name__}.{self._name_}" diff --git a/env/lib/python3.8/site-packages/cryptography/x509/__init__.py b/env/lib/python3.8/site-packages/cryptography/x509/__init__.py new file mode 100644 index 00000000..906075d1 --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/x509/__init__.py @@ -0,0 +1,249 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +from cryptography.x509 import certificate_transparency +from cryptography.x509.base import ( + Attribute, + AttributeNotFound, + Attributes, + Certificate, + CertificateBuilder, + CertificateRevocationList, + CertificateRevocationListBuilder, + CertificateSigningRequest, + CertificateSigningRequestBuilder, + InvalidVersion, + RevokedCertificate, + RevokedCertificateBuilder, + Version, + load_der_x509_certificate, + load_der_x509_crl, + load_der_x509_csr, + load_pem_x509_certificate, + load_pem_x509_crl, + load_pem_x509_csr, + random_serial_number, +) +from cryptography.x509.extensions import ( + AccessDescription, + AuthorityInformationAccess, + AuthorityKeyIdentifier, + BasicConstraints, + CRLDistributionPoints, + CRLNumber, + CRLReason, + CertificateIssuer, + CertificatePolicies, + DeltaCRLIndicator, + DistributionPoint, + DuplicateExtension, + ExtendedKeyUsage, + Extension, + ExtensionNotFound, + ExtensionType, + Extensions, + FreshestCRL, + GeneralNames, + InhibitAnyPolicy, + InvalidityDate, + IssuerAlternativeName, + IssuingDistributionPoint, + KeyUsage, + NameConstraints, + NoticeReference, + OCSPNoCheck, + OCSPNonce, + PolicyConstraints, + PolicyInformation, + PrecertPoison, + PrecertificateSignedCertificateTimestamps, + ReasonFlags, + SignedCertificateTimestamps, + SubjectAlternativeName, + SubjectInformationAccess, + SubjectKeyIdentifier, + TLSFeature, + TLSFeatureType, + UnrecognizedExtension, + UserNotice, +) +from cryptography.x509.general_name import ( + DNSName, + DirectoryName, + GeneralName, + IPAddress, + OtherName, + RFC822Name, + RegisteredID, + UniformResourceIdentifier, + UnsupportedGeneralNameType, +) +from cryptography.x509.name import ( + Name, + NameAttribute, + RelativeDistinguishedName, +) +from cryptography.x509.oid import ( + AuthorityInformationAccessOID, + CRLEntryExtensionOID, + CertificatePoliciesOID, + ExtendedKeyUsageOID, + ExtensionOID, + NameOID, + ObjectIdentifier, + SignatureAlgorithmOID, +) + + +OID_AUTHORITY_INFORMATION_ACCESS = ExtensionOID.AUTHORITY_INFORMATION_ACCESS +OID_AUTHORITY_KEY_IDENTIFIER = ExtensionOID.AUTHORITY_KEY_IDENTIFIER +OID_BASIC_CONSTRAINTS = ExtensionOID.BASIC_CONSTRAINTS +OID_CERTIFICATE_POLICIES = ExtensionOID.CERTIFICATE_POLICIES +OID_CRL_DISTRIBUTION_POINTS = ExtensionOID.CRL_DISTRIBUTION_POINTS 
+OID_EXTENDED_KEY_USAGE = ExtensionOID.EXTENDED_KEY_USAGE +OID_FRESHEST_CRL = ExtensionOID.FRESHEST_CRL +OID_INHIBIT_ANY_POLICY = ExtensionOID.INHIBIT_ANY_POLICY +OID_ISSUER_ALTERNATIVE_NAME = ExtensionOID.ISSUER_ALTERNATIVE_NAME +OID_KEY_USAGE = ExtensionOID.KEY_USAGE +OID_NAME_CONSTRAINTS = ExtensionOID.NAME_CONSTRAINTS +OID_OCSP_NO_CHECK = ExtensionOID.OCSP_NO_CHECK +OID_POLICY_CONSTRAINTS = ExtensionOID.POLICY_CONSTRAINTS +OID_POLICY_MAPPINGS = ExtensionOID.POLICY_MAPPINGS +OID_SUBJECT_ALTERNATIVE_NAME = ExtensionOID.SUBJECT_ALTERNATIVE_NAME +OID_SUBJECT_DIRECTORY_ATTRIBUTES = ExtensionOID.SUBJECT_DIRECTORY_ATTRIBUTES +OID_SUBJECT_INFORMATION_ACCESS = ExtensionOID.SUBJECT_INFORMATION_ACCESS +OID_SUBJECT_KEY_IDENTIFIER = ExtensionOID.SUBJECT_KEY_IDENTIFIER + +OID_DSA_WITH_SHA1 = SignatureAlgorithmOID.DSA_WITH_SHA1 +OID_DSA_WITH_SHA224 = SignatureAlgorithmOID.DSA_WITH_SHA224 +OID_DSA_WITH_SHA256 = SignatureAlgorithmOID.DSA_WITH_SHA256 +OID_ECDSA_WITH_SHA1 = SignatureAlgorithmOID.ECDSA_WITH_SHA1 +OID_ECDSA_WITH_SHA224 = SignatureAlgorithmOID.ECDSA_WITH_SHA224 +OID_ECDSA_WITH_SHA256 = SignatureAlgorithmOID.ECDSA_WITH_SHA256 +OID_ECDSA_WITH_SHA384 = SignatureAlgorithmOID.ECDSA_WITH_SHA384 +OID_ECDSA_WITH_SHA512 = SignatureAlgorithmOID.ECDSA_WITH_SHA512 +OID_RSA_WITH_MD5 = SignatureAlgorithmOID.RSA_WITH_MD5 +OID_RSA_WITH_SHA1 = SignatureAlgorithmOID.RSA_WITH_SHA1 +OID_RSA_WITH_SHA224 = SignatureAlgorithmOID.RSA_WITH_SHA224 +OID_RSA_WITH_SHA256 = SignatureAlgorithmOID.RSA_WITH_SHA256 +OID_RSA_WITH_SHA384 = SignatureAlgorithmOID.RSA_WITH_SHA384 +OID_RSA_WITH_SHA512 = SignatureAlgorithmOID.RSA_WITH_SHA512 +OID_RSASSA_PSS = SignatureAlgorithmOID.RSASSA_PSS + +OID_COMMON_NAME = NameOID.COMMON_NAME +OID_COUNTRY_NAME = NameOID.COUNTRY_NAME +OID_DOMAIN_COMPONENT = NameOID.DOMAIN_COMPONENT +OID_DN_QUALIFIER = NameOID.DN_QUALIFIER +OID_EMAIL_ADDRESS = NameOID.EMAIL_ADDRESS +OID_GENERATION_QUALIFIER = NameOID.GENERATION_QUALIFIER +OID_GIVEN_NAME = NameOID.GIVEN_NAME +OID_LOCALITY_NAME = NameOID.LOCALITY_NAME +OID_ORGANIZATIONAL_UNIT_NAME = NameOID.ORGANIZATIONAL_UNIT_NAME +OID_ORGANIZATION_NAME = NameOID.ORGANIZATION_NAME +OID_PSEUDONYM = NameOID.PSEUDONYM +OID_SERIAL_NUMBER = NameOID.SERIAL_NUMBER +OID_STATE_OR_PROVINCE_NAME = NameOID.STATE_OR_PROVINCE_NAME +OID_SURNAME = NameOID.SURNAME +OID_TITLE = NameOID.TITLE + +OID_CLIENT_AUTH = ExtendedKeyUsageOID.CLIENT_AUTH +OID_CODE_SIGNING = ExtendedKeyUsageOID.CODE_SIGNING +OID_EMAIL_PROTECTION = ExtendedKeyUsageOID.EMAIL_PROTECTION +OID_OCSP_SIGNING = ExtendedKeyUsageOID.OCSP_SIGNING +OID_SERVER_AUTH = ExtendedKeyUsageOID.SERVER_AUTH +OID_TIME_STAMPING = ExtendedKeyUsageOID.TIME_STAMPING + +OID_ANY_POLICY = CertificatePoliciesOID.ANY_POLICY +OID_CPS_QUALIFIER = CertificatePoliciesOID.CPS_QUALIFIER +OID_CPS_USER_NOTICE = CertificatePoliciesOID.CPS_USER_NOTICE + +OID_CERTIFICATE_ISSUER = CRLEntryExtensionOID.CERTIFICATE_ISSUER +OID_CRL_REASON = CRLEntryExtensionOID.CRL_REASON +OID_INVALIDITY_DATE = CRLEntryExtensionOID.INVALIDITY_DATE + +OID_CA_ISSUERS = AuthorityInformationAccessOID.CA_ISSUERS +OID_OCSP = AuthorityInformationAccessOID.OCSP + +__all__ = [ + "certificate_transparency", + "load_pem_x509_certificate", + "load_der_x509_certificate", + "load_pem_x509_csr", + "load_der_x509_csr", + "load_pem_x509_crl", + "load_der_x509_crl", + "random_serial_number", + "Attribute", + "AttributeNotFound", + "Attributes", + "InvalidVersion", + "DeltaCRLIndicator", + "DuplicateExtension", + "ExtensionNotFound", + "UnsupportedGeneralNameType", + "NameAttribute", + 
"Name", + "RelativeDistinguishedName", + "ObjectIdentifier", + "ExtensionType", + "Extensions", + "Extension", + "ExtendedKeyUsage", + "FreshestCRL", + "IssuingDistributionPoint", + "TLSFeature", + "TLSFeatureType", + "OCSPNoCheck", + "BasicConstraints", + "CRLNumber", + "KeyUsage", + "AuthorityInformationAccess", + "SubjectInformationAccess", + "AccessDescription", + "CertificatePolicies", + "PolicyInformation", + "UserNotice", + "NoticeReference", + "SubjectKeyIdentifier", + "NameConstraints", + "CRLDistributionPoints", + "DistributionPoint", + "ReasonFlags", + "InhibitAnyPolicy", + "SubjectAlternativeName", + "IssuerAlternativeName", + "AuthorityKeyIdentifier", + "GeneralNames", + "GeneralName", + "RFC822Name", + "DNSName", + "UniformResourceIdentifier", + "RegisteredID", + "DirectoryName", + "IPAddress", + "OtherName", + "Certificate", + "CertificateRevocationList", + "CertificateRevocationListBuilder", + "CertificateSigningRequest", + "RevokedCertificate", + "RevokedCertificateBuilder", + "CertificateSigningRequestBuilder", + "CertificateBuilder", + "Version", + "OID_CA_ISSUERS", + "OID_OCSP", + "CertificateIssuer", + "CRLReason", + "InvalidityDate", + "UnrecognizedExtension", + "PolicyConstraints", + "PrecertificateSignedCertificateTimestamps", + "PrecertPoison", + "OCSPNonce", + "SignedCertificateTimestamps", + "SignatureAlgorithmOID", + "NameOID", +] diff --git a/env/lib/python3.8/site-packages/cryptography/x509/base.py b/env/lib/python3.8/site-packages/cryptography/x509/base.py new file mode 100644 index 00000000..9f6c41af --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/x509/base.py @@ -0,0 +1,1097 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ + +import abc +import datetime +import os +import typing + +from cryptography import utils +from cryptography.hazmat.bindings._rust import x509 as rust_x509 +from cryptography.hazmat.primitives import hashes, serialization +from cryptography.hazmat.primitives.asymmetric import ( + dsa, + ec, + ed25519, + ed448, + rsa, + x25519, + x448, +) +from cryptography.hazmat.primitives.asymmetric.types import ( + CERTIFICATE_ISSUER_PUBLIC_KEY_TYPES, + CERTIFICATE_PRIVATE_KEY_TYPES, + CERTIFICATE_PUBLIC_KEY_TYPES, +) +from cryptography.x509.extensions import ( + Extension, + ExtensionType, + Extensions, + _make_sequence_methods, +) +from cryptography.x509.name import Name, _ASN1Type +from cryptography.x509.oid import ObjectIdentifier + + +_EARLIEST_UTC_TIME = datetime.datetime(1950, 1, 1) + + +class AttributeNotFound(Exception): + def __init__(self, msg: str, oid: ObjectIdentifier) -> None: + super(AttributeNotFound, self).__init__(msg) + self.oid = oid + + +def _reject_duplicate_extension( + extension: Extension[ExtensionType], + extensions: typing.List[Extension[ExtensionType]], +) -> None: + # This is quadratic in the number of extensions + for e in extensions: + if e.oid == extension.oid: + raise ValueError("This extension has already been set.") + + +def _reject_duplicate_attribute( + oid: ObjectIdentifier, + attributes: typing.List[ + typing.Tuple[ObjectIdentifier, bytes, typing.Optional[int]] + ], +) -> None: + # This is quadratic in the number of attributes + for attr_oid, _, _ in attributes: + if attr_oid == oid: + raise ValueError("This attribute has already been set.") + + +def _convert_to_naive_utc_time(time: datetime.datetime) -> datetime.datetime: + """Normalizes a datetime to a naive datetime in UTC. + + time -- datetime to normalize. Assumed to be in UTC if not timezone + aware. 
+    """
+    if time.tzinfo is not None:
+        offset = time.utcoffset()
+        offset = offset if offset else datetime.timedelta()
+        return time.replace(tzinfo=None) - offset
+    else:
+        return time
+
+
+class Attribute:
+    def __init__(
+        self,
+        oid: ObjectIdentifier,
+        value: bytes,
+        _type: int = _ASN1Type.UTF8String.value,
+    ) -> None:
+        self._oid = oid
+        self._value = value
+        self._type = _type
+
+    @property
+    def oid(self) -> ObjectIdentifier:
+        return self._oid
+
+    @property
+    def value(self) -> bytes:
+        return self._value
+
+    def __repr__(self) -> str:
+        return "<Attribute(oid={}, value={!r})>".format(self.oid, self.value)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, Attribute):
+            return NotImplemented
+
+        return (
+            self.oid == other.oid
+            and self.value == other.value
+            and self._type == other._type
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.oid, self.value, self._type))
+
+
+class Attributes:
+    def __init__(
+        self,
+        attributes: typing.Iterable[Attribute],
+    ) -> None:
+        self._attributes = list(attributes)
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_attributes")
+
+    def __repr__(self) -> str:
+        return "<Attributes({})>".format(self._attributes)
+
+    def get_attribute_for_oid(self, oid: ObjectIdentifier) -> Attribute:
+        for attr in self:
+            if attr.oid == oid:
+                return attr
+
+        raise AttributeNotFound("No {} attribute was found".format(oid), oid)
+
+
+class Version(utils.Enum):
+    v1 = 0
+    v3 = 2
+
+
+class InvalidVersion(Exception):
+    def __init__(self, msg: str, parsed_version: int) -> None:
+        super(InvalidVersion, self).__init__(msg)
+        self.parsed_version = parsed_version
+
+
+class Certificate(metaclass=abc.ABCMeta):
+    @abc.abstractmethod
+    def fingerprint(self, algorithm: hashes.HashAlgorithm) -> bytes:
+        """
+        Returns bytes using digest passed.
+        """
+
+    @abc.abstractproperty
+    def serial_number(self) -> int:
+        """
+        Returns certificate serial number
+        """
+
+    @abc.abstractproperty
+    def version(self) -> Version:
+        """
+        Returns the certificate version
+        """
+
+    @abc.abstractmethod
+    def public_key(self) -> CERTIFICATE_PUBLIC_KEY_TYPES:
+        """
+        Returns the public key
+        """
+
+    @abc.abstractproperty
+    def not_valid_before(self) -> datetime.datetime:
+        """
+        Not before time (represented as UTC datetime)
+        """
+
+    @abc.abstractproperty
+    def not_valid_after(self) -> datetime.datetime:
+        """
+        Not after time (represented as UTC datetime)
+        """
+
+    @abc.abstractproperty
+    def issuer(self) -> Name:
+        """
+        Returns the issuer name object.
+        """
+
+    @abc.abstractproperty
+    def subject(self) -> Name:
+        """
+        Returns the subject name object.
+        """
+
+    @abc.abstractproperty
+    def signature_hash_algorithm(
+        self,
+    ) -> typing.Optional[hashes.HashAlgorithm]:
+        """
+        Returns a HashAlgorithm corresponding to the type of the digest signed
+        in the certificate.
+        """
+
+    @abc.abstractproperty
+    def signature_algorithm_oid(self) -> ObjectIdentifier:
+        """
+        Returns the ObjectIdentifier of the signature algorithm.
+        """
+
+    @abc.abstractproperty
+    def extensions(self) -> Extensions:
+        """
+        Returns an Extensions object.
+        """
+
+    @abc.abstractproperty
+    def signature(self) -> bytes:
+        """
+        Returns the signature bytes.
+        """
+
+    @abc.abstractproperty
+    def tbs_certificate_bytes(self) -> bytes:
+        """
+        Returns the tbsCertificate payload bytes as defined in RFC 5280.
+        """
+
+    @abc.abstractproperty
+    def tbs_precertificate_bytes(self) -> bytes:
+        """
+        Returns the tbsCertificate payload bytes with the SCT list extension
+        stripped.
+ """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + @abc.abstractmethod + def __hash__(self) -> int: + """ + Computes a hash. + """ + + @abc.abstractmethod + def public_bytes(self, encoding: serialization.Encoding) -> bytes: + """ + Serializes the certificate to PEM or DER format. + """ + + +# Runtime isinstance checks need this since the rust class is not a subclass. +Certificate.register(rust_x509.Certificate) + + +class RevokedCertificate(metaclass=abc.ABCMeta): + @abc.abstractproperty + def serial_number(self) -> int: + """ + Returns the serial number of the revoked certificate. + """ + + @abc.abstractproperty + def revocation_date(self) -> datetime.datetime: + """ + Returns the date of when this certificate was revoked. + """ + + @abc.abstractproperty + def extensions(self) -> Extensions: + """ + Returns an Extensions object containing a list of Revoked extensions. + """ + + +# Runtime isinstance checks need this since the rust class is not a subclass. +RevokedCertificate.register(rust_x509.RevokedCertificate) + + +class _RawRevokedCertificate(RevokedCertificate): + def __init__( + self, + serial_number: int, + revocation_date: datetime.datetime, + extensions: Extensions, + ): + self._serial_number = serial_number + self._revocation_date = revocation_date + self._extensions = extensions + + @property + def serial_number(self) -> int: + return self._serial_number + + @property + def revocation_date(self) -> datetime.datetime: + return self._revocation_date + + @property + def extensions(self) -> Extensions: + return self._extensions + + +class CertificateRevocationList(metaclass=abc.ABCMeta): + @abc.abstractmethod + def public_bytes(self, encoding: serialization.Encoding) -> bytes: + """ + Serializes the CRL to PEM or DER format. + """ + + @abc.abstractmethod + def fingerprint(self, algorithm: hashes.HashAlgorithm) -> bytes: + """ + Returns bytes using digest passed. + """ + + @abc.abstractmethod + def get_revoked_certificate_by_serial_number( + self, serial_number: int + ) -> typing.Optional[RevokedCertificate]: + """ + Returns an instance of RevokedCertificate or None if the serial_number + is not in the CRL. + """ + + @abc.abstractproperty + def signature_hash_algorithm( + self, + ) -> typing.Optional[hashes.HashAlgorithm]: + """ + Returns a HashAlgorithm corresponding to the type of the digest signed + in the certificate. + """ + + @abc.abstractproperty + def signature_algorithm_oid(self) -> ObjectIdentifier: + """ + Returns the ObjectIdentifier of the signature algorithm. + """ + + @abc.abstractproperty + def issuer(self) -> Name: + """ + Returns the X509Name with the issuer of this CRL. + """ + + @abc.abstractproperty + def next_update(self) -> typing.Optional[datetime.datetime]: + """ + Returns the date of next update for this CRL. + """ + + @abc.abstractproperty + def last_update(self) -> datetime.datetime: + """ + Returns the date of last update for this CRL. + """ + + @abc.abstractproperty + def extensions(self) -> Extensions: + """ + Returns an Extensions object containing a list of CRL extensions. + """ + + @abc.abstractproperty + def signature(self) -> bytes: + """ + Returns the signature bytes. + """ + + @abc.abstractproperty + def tbs_certlist_bytes(self) -> bytes: + """ + Returns the tbsCertList payload bytes as defined in RFC 5280. + """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. 
+ """ + + @abc.abstractmethod + def __len__(self) -> int: + """ + Number of revoked certificates in the CRL. + """ + + @typing.overload + def __getitem__(self, idx: int) -> RevokedCertificate: + ... + + @typing.overload + def __getitem__(self, idx: slice) -> typing.List[RevokedCertificate]: + ... + + @abc.abstractmethod + def __getitem__( + self, idx: typing.Union[int, slice] + ) -> typing.Union[RevokedCertificate, typing.List[RevokedCertificate]]: + """ + Returns a revoked certificate (or slice of revoked certificates). + """ + + @abc.abstractmethod + def __iter__(self) -> typing.Iterator[RevokedCertificate]: + """ + Iterator over the revoked certificates + """ + + @abc.abstractmethod + def is_signature_valid( + self, public_key: CERTIFICATE_ISSUER_PUBLIC_KEY_TYPES + ) -> bool: + """ + Verifies signature of revocation list against given public key. + """ + + +CertificateRevocationList.register(rust_x509.CertificateRevocationList) + + +class CertificateSigningRequest(metaclass=abc.ABCMeta): + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + @abc.abstractmethod + def __hash__(self) -> int: + """ + Computes a hash. + """ + + @abc.abstractmethod + def public_key(self) -> CERTIFICATE_PUBLIC_KEY_TYPES: + """ + Returns the public key + """ + + @abc.abstractproperty + def subject(self) -> Name: + """ + Returns the subject name object. + """ + + @abc.abstractproperty + def signature_hash_algorithm( + self, + ) -> typing.Optional[hashes.HashAlgorithm]: + """ + Returns a HashAlgorithm corresponding to the type of the digest signed + in the certificate. + """ + + @abc.abstractproperty + def signature_algorithm_oid(self) -> ObjectIdentifier: + """ + Returns the ObjectIdentifier of the signature algorithm. + """ + + @abc.abstractproperty + def extensions(self) -> Extensions: + """ + Returns the extensions in the signing request. + """ + + @abc.abstractproperty + def attributes(self) -> Attributes: + """ + Returns an Attributes object. + """ + + @abc.abstractmethod + def public_bytes(self, encoding: serialization.Encoding) -> bytes: + """ + Encodes the request to PEM or DER format. + """ + + @abc.abstractproperty + def signature(self) -> bytes: + """ + Returns the signature bytes. + """ + + @abc.abstractproperty + def tbs_certrequest_bytes(self) -> bytes: + """ + Returns the PKCS#10 CertificationRequestInfo bytes as defined in RFC + 2986. + """ + + @abc.abstractproperty + def is_signature_valid(self) -> bool: + """ + Verifies signature of signing request. + """ + + @abc.abstractmethod + def get_attribute_for_oid(self, oid: ObjectIdentifier) -> bytes: + """ + Get the attribute value for a given OID. + """ + + +# Runtime isinstance checks need this since the rust class is not a subclass. +CertificateSigningRequest.register(rust_x509.CertificateSigningRequest) + + +# Backend argument preserved for API compatibility, but ignored. +def load_pem_x509_certificate( + data: bytes, backend: typing.Any = None +) -> Certificate: + return rust_x509.load_pem_x509_certificate(data) + + +# Backend argument preserved for API compatibility, but ignored. +def load_der_x509_certificate( + data: bytes, backend: typing.Any = None +) -> Certificate: + return rust_x509.load_der_x509_certificate(data) + + +# Backend argument preserved for API compatibility, but ignored. +def load_pem_x509_csr( + data: bytes, backend: typing.Any = None +) -> CertificateSigningRequest: + return rust_x509.load_pem_x509_csr(data) + + +# Backend argument preserved for API compatibility, but ignored. 
+def load_der_x509_csr( + data: bytes, backend: typing.Any = None +) -> CertificateSigningRequest: + return rust_x509.load_der_x509_csr(data) + + +# Backend argument preserved for API compatibility, but ignored. +def load_pem_x509_crl( + data: bytes, backend: typing.Any = None +) -> CertificateRevocationList: + return rust_x509.load_pem_x509_crl(data) + + +# Backend argument preserved for API compatibility, but ignored. +def load_der_x509_crl( + data: bytes, backend: typing.Any = None +) -> CertificateRevocationList: + return rust_x509.load_der_x509_crl(data) + + +class CertificateSigningRequestBuilder: + def __init__( + self, + subject_name: typing.Optional[Name] = None, + extensions: typing.List[Extension[ExtensionType]] = [], + attributes: typing.List[ + typing.Tuple[ObjectIdentifier, bytes, typing.Optional[int]] + ] = [], + ): + """ + Creates an empty X.509 certificate request (v1). + """ + self._subject_name = subject_name + self._extensions = extensions + self._attributes = attributes + + def subject_name(self, name: Name) -> "CertificateSigningRequestBuilder": + """ + Sets the certificate requestor's distinguished name. + """ + if not isinstance(name, Name): + raise TypeError("Expecting x509.Name object.") + if self._subject_name is not None: + raise ValueError("The subject name may only be set once.") + return CertificateSigningRequestBuilder( + name, self._extensions, self._attributes + ) + + def add_extension( + self, extval: ExtensionType, critical: bool + ) -> "CertificateSigningRequestBuilder": + """ + Adds an X.509 extension to the certificate request. + """ + if not isinstance(extval, ExtensionType): + raise TypeError("extension must be an ExtensionType") + + extension = Extension(extval.oid, critical, extval) + _reject_duplicate_extension(extension, self._extensions) + + return CertificateSigningRequestBuilder( + self._subject_name, + self._extensions + [extension], + self._attributes, + ) + + def add_attribute( + self, + oid: ObjectIdentifier, + value: bytes, + *, + _tag: typing.Optional[_ASN1Type] = None, + ) -> "CertificateSigningRequestBuilder": + """ + Adds an X.509 attribute with an OID and associated value. + """ + if not isinstance(oid, ObjectIdentifier): + raise TypeError("oid must be an ObjectIdentifier") + + if not isinstance(value, bytes): + raise TypeError("value must be bytes") + + if _tag is not None and not isinstance(_tag, _ASN1Type): + raise TypeError("tag must be _ASN1Type") + + _reject_duplicate_attribute(oid, self._attributes) + + if _tag is not None: + tag = _tag.value + else: + tag = None + + return CertificateSigningRequestBuilder( + self._subject_name, + self._extensions, + self._attributes + [(oid, value, tag)], + ) + + def sign( + self, + private_key: CERTIFICATE_PRIVATE_KEY_TYPES, + algorithm: typing.Optional[hashes.HashAlgorithm], + backend: typing.Any = None, + ) -> CertificateSigningRequest: + """ + Signs the request using the requestor's private key. 
+ """ + if self._subject_name is None: + raise ValueError("A CertificateSigningRequest must have a subject") + return rust_x509.create_x509_csr(self, private_key, algorithm) + + +class CertificateBuilder: + _extensions: typing.List[Extension[ExtensionType]] + + def __init__( + self, + issuer_name: typing.Optional[Name] = None, + subject_name: typing.Optional[Name] = None, + public_key: typing.Optional[CERTIFICATE_PUBLIC_KEY_TYPES] = None, + serial_number: typing.Optional[int] = None, + not_valid_before: typing.Optional[datetime.datetime] = None, + not_valid_after: typing.Optional[datetime.datetime] = None, + extensions: typing.List[Extension[ExtensionType]] = [], + ) -> None: + self._version = Version.v3 + self._issuer_name = issuer_name + self._subject_name = subject_name + self._public_key = public_key + self._serial_number = serial_number + self._not_valid_before = not_valid_before + self._not_valid_after = not_valid_after + self._extensions = extensions + + def issuer_name(self, name: Name) -> "CertificateBuilder": + """ + Sets the CA's distinguished name. + """ + if not isinstance(name, Name): + raise TypeError("Expecting x509.Name object.") + if self._issuer_name is not None: + raise ValueError("The issuer name may only be set once.") + return CertificateBuilder( + name, + self._subject_name, + self._public_key, + self._serial_number, + self._not_valid_before, + self._not_valid_after, + self._extensions, + ) + + def subject_name(self, name: Name) -> "CertificateBuilder": + """ + Sets the requestor's distinguished name. + """ + if not isinstance(name, Name): + raise TypeError("Expecting x509.Name object.") + if self._subject_name is not None: + raise ValueError("The subject name may only be set once.") + return CertificateBuilder( + self._issuer_name, + name, + self._public_key, + self._serial_number, + self._not_valid_before, + self._not_valid_after, + self._extensions, + ) + + def public_key( + self, + key: CERTIFICATE_PUBLIC_KEY_TYPES, + ) -> "CertificateBuilder": + """ + Sets the requestor's public key (as found in the signing request). + """ + if not isinstance( + key, + ( + dsa.DSAPublicKey, + rsa.RSAPublicKey, + ec.EllipticCurvePublicKey, + ed25519.Ed25519PublicKey, + ed448.Ed448PublicKey, + x25519.X25519PublicKey, + x448.X448PublicKey, + ), + ): + raise TypeError( + "Expecting one of DSAPublicKey, RSAPublicKey," + " EllipticCurvePublicKey, Ed25519PublicKey," + " Ed448PublicKey, X25519PublicKey, or " + "X448PublicKey." + ) + if self._public_key is not None: + raise ValueError("The public key may only be set once.") + return CertificateBuilder( + self._issuer_name, + self._subject_name, + key, + self._serial_number, + self._not_valid_before, + self._not_valid_after, + self._extensions, + ) + + def serial_number(self, number: int) -> "CertificateBuilder": + """ + Sets the certificate serial number. + """ + if not isinstance(number, int): + raise TypeError("Serial number must be of integral type.") + if self._serial_number is not None: + raise ValueError("The serial number may only be set once.") + if number <= 0: + raise ValueError("The serial number should be positive.") + + # ASN.1 integers are always signed, so most significant bit must be + # zero. + if number.bit_length() >= 160: # As defined in RFC 5280 + raise ValueError( + "The serial number should not be more than 159 " "bits." 
+            )
+        return CertificateBuilder(
+            self._issuer_name,
+            self._subject_name,
+            self._public_key,
+            number,
+            self._not_valid_before,
+            self._not_valid_after,
+            self._extensions,
+        )
+
+    def not_valid_before(
+        self, time: datetime.datetime
+    ) -> "CertificateBuilder":
+        """
+        Sets the certificate activation time.
+        """
+        if not isinstance(time, datetime.datetime):
+            raise TypeError("Expecting datetime object.")
+        if self._not_valid_before is not None:
+            raise ValueError("The not valid before may only be set once.")
+        time = _convert_to_naive_utc_time(time)
+        if time < _EARLIEST_UTC_TIME:
+            raise ValueError(
+                "The not valid before date must be on or after"
+                " 1950 January 1."
+            )
+        if self._not_valid_after is not None and time > self._not_valid_after:
+            raise ValueError(
+                "The not valid before date must be before the not valid after "
+                "date."
+            )
+        return CertificateBuilder(
+            self._issuer_name,
+            self._subject_name,
+            self._public_key,
+            self._serial_number,
+            time,
+            self._not_valid_after,
+            self._extensions,
+        )
+
+    def not_valid_after(self, time: datetime.datetime) -> "CertificateBuilder":
+        """
+        Sets the certificate expiration time.
+        """
+        if not isinstance(time, datetime.datetime):
+            raise TypeError("Expecting datetime object.")
+        if self._not_valid_after is not None:
+            raise ValueError("The not valid after may only be set once.")
+        time = _convert_to_naive_utc_time(time)
+        if time < _EARLIEST_UTC_TIME:
+            raise ValueError(
+                "The not valid after date must be on or after"
+                " 1950 January 1."
+            )
+        if (
+            self._not_valid_before is not None
+            and time < self._not_valid_before
+        ):
+            raise ValueError(
+                "The not valid after date must be after the not valid before "
+                "date."
+            )
+        return CertificateBuilder(
+            self._issuer_name,
+            self._subject_name,
+            self._public_key,
+            self._serial_number,
+            self._not_valid_before,
+            time,
+            self._extensions,
+        )
+
+    def add_extension(
+        self, extval: ExtensionType, critical: bool
+    ) -> "CertificateBuilder":
+        """
+        Adds an X.509 extension to the certificate.
+        """
+        if not isinstance(extval, ExtensionType):
+            raise TypeError("extension must be an ExtensionType")
+
+        extension = Extension(extval.oid, critical, extval)
+        _reject_duplicate_extension(extension, self._extensions)
+
+        return CertificateBuilder(
+            self._issuer_name,
+            self._subject_name,
+            self._public_key,
+            self._serial_number,
+            self._not_valid_before,
+            self._not_valid_after,
+            self._extensions + [extension],
+        )
+
+    def sign(
+        self,
+        private_key: CERTIFICATE_PRIVATE_KEY_TYPES,
+        algorithm: typing.Optional[hashes.HashAlgorithm],
+        backend: typing.Any = None,
+    ) -> Certificate:
+        """
+        Signs the certificate using the CA's private key.
+        """
+        if self._subject_name is None:
+            raise ValueError("A certificate must have a subject name")
+
+        if self._issuer_name is None:
+            raise ValueError("A certificate must have an issuer name")
+
+        if self._serial_number is None:
+            raise ValueError("A certificate must have a serial number")
+
+        if self._not_valid_before is None:
+            raise ValueError("A certificate must have a not valid before time")
+
+        if self._not_valid_after is None:
+            raise ValueError("A certificate must have a not valid after time")
+
+        if self._public_key is None:
+            raise ValueError("A certificate must have a public key")
+
+        return rust_x509.create_x509_certificate(self, private_key, algorithm)
+
+
+class CertificateRevocationListBuilder:
+    _extensions: typing.List[Extension[ExtensionType]]
+    _revoked_certificates: typing.List[RevokedCertificate]
+
+    def __init__(
+        self,
+        issuer_name: typing.Optional[Name] = None,
+        last_update: typing.Optional[datetime.datetime] = None,
+        next_update: typing.Optional[datetime.datetime] = None,
+        extensions: typing.List[Extension[ExtensionType]] = [],
+        revoked_certificates: typing.List[RevokedCertificate] = [],
+    ):
+        self._issuer_name = issuer_name
+        self._last_update = last_update
+        self._next_update = next_update
+        self._extensions = extensions
+        self._revoked_certificates = revoked_certificates
+
+    def issuer_name(
+        self, issuer_name: Name
+    ) -> "CertificateRevocationListBuilder":
+        if not isinstance(issuer_name, Name):
+            raise TypeError("Expecting x509.Name object.")
+        if self._issuer_name is not None:
+            raise ValueError("The issuer name may only be set once.")
+        return CertificateRevocationListBuilder(
+            issuer_name,
+            self._last_update,
+            self._next_update,
+            self._extensions,
+            self._revoked_certificates,
+        )
+
+    def last_update(
+        self, last_update: datetime.datetime
+    ) -> "CertificateRevocationListBuilder":
+        if not isinstance(last_update, datetime.datetime):
+            raise TypeError("Expecting datetime object.")
+        if self._last_update is not None:
+            raise ValueError("Last update may only be set once.")
+        last_update = _convert_to_naive_utc_time(last_update)
+        if last_update < _EARLIEST_UTC_TIME:
+            raise ValueError(
+                "The last update date must be on or after" " 1950 January 1."
+            )
+        if self._next_update is not None and last_update > self._next_update:
+            raise ValueError(
+                "The last update date must be before the next update date."
+            )
+        return CertificateRevocationListBuilder(
+            self._issuer_name,
+            last_update,
+            self._next_update,
+            self._extensions,
+            self._revoked_certificates,
+        )
+
+    def next_update(
+        self, next_update: datetime.datetime
+    ) -> "CertificateRevocationListBuilder":
+        if not isinstance(next_update, datetime.datetime):
+            raise TypeError("Expecting datetime object.")
+        if self._next_update is not None:
+            raise ValueError("Next update may only be set once.")
+        next_update = _convert_to_naive_utc_time(next_update)
+        if next_update < _EARLIEST_UTC_TIME:
+            raise ValueError(
+                "The next update date must be on or after" " 1950 January 1."
+            )
+        if self._last_update is not None and next_update < self._last_update:
+            raise ValueError(
+                "The next update date must be after the last update date."
+            )
+        return CertificateRevocationListBuilder(
+            self._issuer_name,
+            self._last_update,
+            next_update,
+            self._extensions,
+            self._revoked_certificates,
+        )
+
+    def add_extension(
+        self, extval: ExtensionType, critical: bool
+    ) -> "CertificateRevocationListBuilder":
+        """
+        Adds an X.509 extension to the certificate revocation list.
+ """ + if not isinstance(extval, ExtensionType): + raise TypeError("extension must be an ExtensionType") + + extension = Extension(extval.oid, critical, extval) + _reject_duplicate_extension(extension, self._extensions) + return CertificateRevocationListBuilder( + self._issuer_name, + self._last_update, + self._next_update, + self._extensions + [extension], + self._revoked_certificates, + ) + + def add_revoked_certificate( + self, revoked_certificate: RevokedCertificate + ) -> "CertificateRevocationListBuilder": + """ + Adds a revoked certificate to the CRL. + """ + if not isinstance(revoked_certificate, RevokedCertificate): + raise TypeError("Must be an instance of RevokedCertificate") + + return CertificateRevocationListBuilder( + self._issuer_name, + self._last_update, + self._next_update, + self._extensions, + self._revoked_certificates + [revoked_certificate], + ) + + def sign( + self, + private_key: CERTIFICATE_PRIVATE_KEY_TYPES, + algorithm: typing.Optional[hashes.HashAlgorithm], + backend: typing.Any = None, + ) -> CertificateRevocationList: + if self._issuer_name is None: + raise ValueError("A CRL must have an issuer name") + + if self._last_update is None: + raise ValueError("A CRL must have a last update time") + + if self._next_update is None: + raise ValueError("A CRL must have a next update time") + + return rust_x509.create_x509_crl(self, private_key, algorithm) + + +class RevokedCertificateBuilder: + def __init__( + self, + serial_number: typing.Optional[int] = None, + revocation_date: typing.Optional[datetime.datetime] = None, + extensions: typing.List[Extension[ExtensionType]] = [], + ): + self._serial_number = serial_number + self._revocation_date = revocation_date + self._extensions = extensions + + def serial_number(self, number: int) -> "RevokedCertificateBuilder": + if not isinstance(number, int): + raise TypeError("Serial number must be of integral type.") + if self._serial_number is not None: + raise ValueError("The serial number may only be set once.") + if number <= 0: + raise ValueError("The serial number should be positive") + + # ASN.1 integers are always signed, so most significant bit must be + # zero. + if number.bit_length() >= 160: # As defined in RFC 5280 + raise ValueError( + "The serial number should not be more than 159 " "bits." + ) + return RevokedCertificateBuilder( + number, self._revocation_date, self._extensions + ) + + def revocation_date( + self, time: datetime.datetime + ) -> "RevokedCertificateBuilder": + if not isinstance(time, datetime.datetime): + raise TypeError("Expecting datetime object.") + if self._revocation_date is not None: + raise ValueError("The revocation date may only be set once.") + time = _convert_to_naive_utc_time(time) + if time < _EARLIEST_UTC_TIME: + raise ValueError( + "The revocation date must be on or after" " 1950 January 1." 
+            )
+        return RevokedCertificateBuilder(
+            self._serial_number, time, self._extensions
+        )
+
+    def add_extension(
+        self, extval: ExtensionType, critical: bool
+    ) -> "RevokedCertificateBuilder":
+        if not isinstance(extval, ExtensionType):
+            raise TypeError("extension must be an ExtensionType")
+
+        extension = Extension(extval.oid, critical, extval)
+        _reject_duplicate_extension(extension, self._extensions)
+        return RevokedCertificateBuilder(
+            self._serial_number,
+            self._revocation_date,
+            self._extensions + [extension],
+        )
+
+    def build(self, backend: typing.Any = None) -> RevokedCertificate:
+        if self._serial_number is None:
+            raise ValueError("A revoked certificate must have a serial number")
+        if self._revocation_date is None:
+            raise ValueError(
+                "A revoked certificate must have a revocation date"
+            )
+        return _RawRevokedCertificate(
+            self._serial_number,
+            self._revocation_date,
+            Extensions(self._extensions),
+        )
+
+
+def random_serial_number() -> int:
+    return int.from_bytes(os.urandom(20), "big") >> 1
diff --git a/env/lib/python3.8/site-packages/cryptography/x509/certificate_transparency.py b/env/lib/python3.8/site-packages/cryptography/x509/certificate_transparency.py
new file mode 100644
index 00000000..18c7cf79
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cryptography/x509/certificate_transparency.py
@@ -0,0 +1,88 @@
+# This file is dual licensed under the terms of the Apache License, Version
+# 2.0, and the BSD License. See the LICENSE file in the root of this repository
+# for complete details.
+
+
+import abc
+import datetime
+
+from cryptography import utils
+from cryptography.hazmat.bindings._rust import x509 as rust_x509
+from cryptography.hazmat.primitives.hashes import HashAlgorithm
+
+
+class LogEntryType(utils.Enum):
+    X509_CERTIFICATE = 0
+    PRE_CERTIFICATE = 1
+
+
+class Version(utils.Enum):
+    v1 = 0
+
+
+class SignatureAlgorithm(utils.Enum):
+    """
+    Signature algorithms that are valid for SCTs.
+
+    These are exactly the same as SignatureAlgorithm in RFC 5246 (TLS 1.2).
+
+    See: <https://datatracker.ietf.org/doc/html/rfc5246#section-7.4.1.4.1>
+    """
+
+    ANONYMOUS = 0
+    RSA = 1
+    DSA = 2
+    ECDSA = 3
+
+
+class SignedCertificateTimestamp(metaclass=abc.ABCMeta):
+    @abc.abstractproperty
+    def version(self) -> Version:
+        """
+        Returns the SCT version.
+        """
+
+    @abc.abstractproperty
+    def log_id(self) -> bytes:
+        """
+        Returns an identifier indicating which log this SCT is for.
+        """
+
+    @abc.abstractproperty
+    def timestamp(self) -> datetime.datetime:
+        """
+        Returns the timestamp for this SCT.
+        """
+
+    @abc.abstractproperty
+    def entry_type(self) -> LogEntryType:
+        """
+        Returns whether this is an SCT for a certificate or pre-certificate.
+        """
+
+    @abc.abstractproperty
+    def signature_hash_algorithm(self) -> HashAlgorithm:
+        """
+        Returns the hash algorithm used for the SCT's signature.
+        """
+
+    @abc.abstractproperty
+    def signature_algorithm(self) -> SignatureAlgorithm:
+        """
+        Returns the signing algorithm used for the SCT's signature.
+        """
+
+    @abc.abstractproperty
+    def signature(self) -> bytes:
+        """
+        Returns the signature for this SCT.
+        """
+
+    @abc.abstractproperty
+    def extension_bytes(self) -> bytes:
+        """
+        Returns the raw bytes of any extensions for this SCT.
+ """ + + +SignedCertificateTimestamp.register(rust_x509.Sct) diff --git a/env/lib/python3.8/site-packages/cryptography/x509/extensions.py b/env/lib/python3.8/site-packages/cryptography/x509/extensions.py new file mode 100644 index 00000000..cc8f25ef --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/x509/extensions.py @@ -0,0 +1,2114 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + + +import abc +import datetime +import hashlib +import ipaddress +import typing + +from cryptography import utils +from cryptography.hazmat.bindings._rust import asn1 +from cryptography.hazmat.bindings._rust import x509 as rust_x509 +from cryptography.hazmat.primitives import constant_time, serialization +from cryptography.hazmat.primitives.asymmetric.ec import EllipticCurvePublicKey +from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicKey +from cryptography.hazmat.primitives.asymmetric.types import ( + CERTIFICATE_ISSUER_PUBLIC_KEY_TYPES, + CERTIFICATE_PUBLIC_KEY_TYPES, +) +from cryptography.x509.certificate_transparency import ( + SignedCertificateTimestamp, +) +from cryptography.x509.general_name import ( + DNSName, + DirectoryName, + GeneralName, + IPAddress, + OtherName, + RFC822Name, + RegisteredID, + UniformResourceIdentifier, + _IPADDRESS_TYPES, +) +from cryptography.x509.name import Name, RelativeDistinguishedName +from cryptography.x509.oid import ( + CRLEntryExtensionOID, + ExtensionOID, + OCSPExtensionOID, + ObjectIdentifier, +) + +ExtensionTypeVar = typing.TypeVar( + "ExtensionTypeVar", bound="ExtensionType", covariant=True +) + + +def _key_identifier_from_public_key( + public_key: CERTIFICATE_PUBLIC_KEY_TYPES, +) -> bytes: + if isinstance(public_key, RSAPublicKey): + data = public_key.public_bytes( + serialization.Encoding.DER, + serialization.PublicFormat.PKCS1, + ) + elif isinstance(public_key, EllipticCurvePublicKey): + data = public_key.public_bytes( + serialization.Encoding.X962, + serialization.PublicFormat.UncompressedPoint, + ) + else: + # This is a very slow way to do this. + serialized = public_key.public_bytes( + serialization.Encoding.DER, + serialization.PublicFormat.SubjectPublicKeyInfo, + ) + data = asn1.parse_spki_for_data(serialized) + + return hashlib.sha1(data).digest() + + +def _make_sequence_methods(field_name: str): + def len_method(self) -> int: + return len(getattr(self, field_name)) + + def iter_method(self): + return iter(getattr(self, field_name)) + + def getitem_method(self, idx): + return getattr(self, field_name)[idx] + + return len_method, iter_method, getitem_method + + +class DuplicateExtension(Exception): + def __init__(self, msg: str, oid: ObjectIdentifier) -> None: + super(DuplicateExtension, self).__init__(msg) + self.oid = oid + + +class ExtensionNotFound(Exception): + def __init__(self, msg: str, oid: ObjectIdentifier) -> None: + super(ExtensionNotFound, self).__init__(msg) + self.oid = oid + + +class ExtensionType(metaclass=abc.ABCMeta): + oid: typing.ClassVar[ObjectIdentifier] + + def public_bytes(self) -> bytes: + """ + Serializes the extension type to DER. 
+        """
+        raise NotImplementedError(
+            "public_bytes is not implemented for extension type {0!r}".format(
+                self
+            )
+        )
+
+
+class Extensions:
+    def __init__(
+        self, extensions: typing.Iterable["Extension[ExtensionType]"]
+    ) -> None:
+        self._extensions = list(extensions)
+
+    def get_extension_for_oid(
+        self, oid: ObjectIdentifier
+    ) -> "Extension[ExtensionType]":
+        for ext in self:
+            if ext.oid == oid:
+                return ext
+
+        raise ExtensionNotFound("No {} extension was found".format(oid), oid)
+
+    def get_extension_for_class(
+        self, extclass: typing.Type[ExtensionTypeVar]
+    ) -> "Extension[ExtensionTypeVar]":
+        if extclass is UnrecognizedExtension:
+            raise TypeError(
+                "UnrecognizedExtension can't be used with "
+                "get_extension_for_class because more than one instance of the"
+                " class may be present."
+            )
+
+        for ext in self:
+            if isinstance(ext.value, extclass):
+                return ext
+
+        raise ExtensionNotFound(
+            "No {} extension was found".format(extclass), extclass.oid
+        )
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_extensions")
+
+    def __repr__(self) -> str:
+        return "<Extensions({})>".format(self._extensions)
+
+
+class CRLNumber(ExtensionType):
+    oid = ExtensionOID.CRL_NUMBER
+
+    def __init__(self, crl_number: int) -> None:
+        if not isinstance(crl_number, int):
+            raise TypeError("crl_number must be an integer")
+
+        self._crl_number = crl_number
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, CRLNumber):
+            return NotImplemented
+
+        return self.crl_number == other.crl_number
+
+    def __hash__(self) -> int:
+        return hash(self.crl_number)
+
+    def __repr__(self) -> str:
+        return "<CRLNumber({})>".format(self.crl_number)
+
+    @property
+    def crl_number(self) -> int:
+        return self._crl_number
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class AuthorityKeyIdentifier(ExtensionType):
+    oid = ExtensionOID.AUTHORITY_KEY_IDENTIFIER
+
+    def __init__(
+        self,
+        key_identifier: typing.Optional[bytes],
+        authority_cert_issuer: typing.Optional[typing.Iterable[GeneralName]],
+        authority_cert_serial_number: typing.Optional[int],
+    ) -> None:
+        if (authority_cert_issuer is None) != (
+            authority_cert_serial_number is None
+        ):
+            raise ValueError(
+                "authority_cert_issuer and authority_cert_serial_number "
+                "must both be present or both None"
+            )
+
+        if authority_cert_issuer is not None:
+            authority_cert_issuer = list(authority_cert_issuer)
+            if not all(
+                isinstance(x, GeneralName) for x in authority_cert_issuer
+            ):
+                raise TypeError(
+                    "authority_cert_issuer must be a list of GeneralName "
+                    "objects"
+                )
+
+        if authority_cert_serial_number is not None and not isinstance(
+            authority_cert_serial_number, int
+        ):
+            raise TypeError("authority_cert_serial_number must be an integer")
+
+        self._key_identifier = key_identifier
+        self._authority_cert_issuer = authority_cert_issuer
+        self._authority_cert_serial_number = authority_cert_serial_number
+
+    # This takes a subset of CERTIFICATE_PUBLIC_KEY_TYPES because an issuer
+    # cannot have an X25519/X448 key. This introduces some unfortunate
+    # asymmetry that requires typing users to explicitly
+    # narrow their type, but we should make this accurate and not just
+    # convenient.
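+    # For instance (illustrative sketch; `issuer_key` is a hypothetical
+    # issuer private key):
+    #
+    #     >>> aki = AuthorityKeyIdentifier.from_issuer_public_key(
+    #     ...     issuer_key.public_key()
+    #     ... )
+    #
+    # The key identifier is derived via _key_identifier_from_public_key,
+    # i.e. the RFC 5280 SHA-1 hash of the issuer's public key material.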
+    @classmethod
+    def from_issuer_public_key(
+        cls, public_key: CERTIFICATE_ISSUER_PUBLIC_KEY_TYPES
+    ) -> "AuthorityKeyIdentifier":
+        digest = _key_identifier_from_public_key(public_key)
+        return cls(
+            key_identifier=digest,
+            authority_cert_issuer=None,
+            authority_cert_serial_number=None,
+        )
+
+    @classmethod
+    def from_issuer_subject_key_identifier(
+        cls, ski: "SubjectKeyIdentifier"
+    ) -> "AuthorityKeyIdentifier":
+        return cls(
+            key_identifier=ski.digest,
+            authority_cert_issuer=None,
+            authority_cert_serial_number=None,
+        )
+
+    def __repr__(self) -> str:
+        return (
+            "<AuthorityKeyIdentifier(key_identifier={0.key_identifier!r}, "
+            "authority_cert_issuer={0.authority_cert_issuer}, "
+            "authority_cert_serial_number={0.authority_cert_serial_number}"
+            ")>".format(self)
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, AuthorityKeyIdentifier):
+            return NotImplemented
+
+        return (
+            self.key_identifier == other.key_identifier
+            and self.authority_cert_issuer == other.authority_cert_issuer
+            and self.authority_cert_serial_number
+            == other.authority_cert_serial_number
+        )
+
+    def __hash__(self) -> int:
+        if self.authority_cert_issuer is None:
+            aci = None
+        else:
+            aci = tuple(self.authority_cert_issuer)
+        return hash(
+            (self.key_identifier, aci, self.authority_cert_serial_number)
+        )
+
+    @property
+    def key_identifier(self) -> typing.Optional[bytes]:
+        return self._key_identifier
+
+    @property
+    def authority_cert_issuer(
+        self,
+    ) -> typing.Optional[typing.List[GeneralName]]:
+        return self._authority_cert_issuer
+
+    @property
+    def authority_cert_serial_number(self) -> typing.Optional[int]:
+        return self._authority_cert_serial_number
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class SubjectKeyIdentifier(ExtensionType):
+    oid = ExtensionOID.SUBJECT_KEY_IDENTIFIER
+
+    def __init__(self, digest: bytes) -> None:
+        self._digest = digest
+
+    @classmethod
+    def from_public_key(
+        cls, public_key: CERTIFICATE_PUBLIC_KEY_TYPES
+    ) -> "SubjectKeyIdentifier":
+        return cls(_key_identifier_from_public_key(public_key))
+
+    @property
+    def digest(self) -> bytes:
+        return self._digest
+
+    @property
+    def key_identifier(self) -> bytes:
+        return self._digest
+
+    def __repr__(self) -> str:
+        return "<SubjectKeyIdentifier(digest={0!r})>".format(self.digest)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, SubjectKeyIdentifier):
+            return NotImplemented
+
+        return constant_time.bytes_eq(self.digest, other.digest)
+
+    def __hash__(self) -> int:
+        return hash(self.digest)
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class AuthorityInformationAccess(ExtensionType):
+    oid = ExtensionOID.AUTHORITY_INFORMATION_ACCESS
+
+    def __init__(
+        self, descriptions: typing.Iterable["AccessDescription"]
+    ) -> None:
+        descriptions = list(descriptions)
+        if not all(isinstance(x, AccessDescription) for x in descriptions):
+            raise TypeError(
+                "Every item in the descriptions list must be an "
+                "AccessDescription"
+            )
+
+        self._descriptions = descriptions
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_descriptions")
+
+    def __repr__(self) -> str:
+        return "<AuthorityInformationAccess({})>".format(self._descriptions)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, AuthorityInformationAccess):
+            return NotImplemented
+
+        return self._descriptions == other._descriptions
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._descriptions))
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class SubjectInformationAccess(ExtensionType):
+    oid = ExtensionOID.SUBJECT_INFORMATION_ACCESS
+
+    def __init__(
+        self, descriptions: typing.Iterable["AccessDescription"]
+    ) -> None:
+        descriptions = list(descriptions)
+        if not all(isinstance(x, AccessDescription) for x in descriptions):
+            raise TypeError(
+                "Every item in the descriptions list must be an "
+                "AccessDescription"
+            )
+
+        self._descriptions = descriptions
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_descriptions")
+
+    def __repr__(self) -> str:
+        return "<SubjectInformationAccess({})>".format(self._descriptions)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, SubjectInformationAccess):
+            return NotImplemented
+
+        return self._descriptions == other._descriptions
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._descriptions))
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class AccessDescription:
+    def __init__(
+        self, access_method: ObjectIdentifier, access_location: GeneralName
+    ) -> None:
+        if not isinstance(access_method, ObjectIdentifier):
+            raise TypeError("access_method must be an ObjectIdentifier")
+
+        if not isinstance(access_location, GeneralName):
+            raise TypeError("access_location must be a GeneralName")
+
+        self._access_method = access_method
+        self._access_location = access_location
+
+    def __repr__(self) -> str:
+        return (
+            "<AccessDescription(access_method={0.access_method}, "
+            "access_location={0.access_location})>".format(self)
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, AccessDescription):
+            return NotImplemented
+
+        return (
+            self.access_method == other.access_method
+            and self.access_location == other.access_location
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.access_method, self.access_location))
+
+    @property
+    def access_method(self) -> ObjectIdentifier:
+        return self._access_method
+
+    @property
+    def access_location(self) -> GeneralName:
+        return self._access_location
+
+
+class BasicConstraints(ExtensionType):
+    oid = ExtensionOID.BASIC_CONSTRAINTS
+
+    def __init__(self, ca: bool, path_length: typing.Optional[int]) -> None:
+        if not isinstance(ca, bool):
+            raise TypeError("ca must be a boolean value")
+
+        if path_length is not None and not ca:
+            raise ValueError("path_length must be None when ca is False")
+
+        if path_length is not None and (
+            not isinstance(path_length, int) or path_length < 0
+        ):
+            raise TypeError(
+                "path_length must be a non-negative integer or None"
+            )
+
+        self._ca = ca
+        self._path_length = path_length
+
+    @property
+    def ca(self) -> bool:
+        return self._ca
+
+    @property
+    def path_length(self) -> typing.Optional[int]:
+        return self._path_length
+
+    def __repr__(self) -> str:
+        return (
+            "<BasicConstraints(ca={0.ca}, " "path_length={0.path_length})>"
+        ).format(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, BasicConstraints):
+            return NotImplemented
+
+        return self.ca == other.ca and self.path_length == other.path_length
+
+    def __hash__(self) -> int:
+        return hash((self.ca, self.path_length))
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class DeltaCRLIndicator(ExtensionType):
+    oid = ExtensionOID.DELTA_CRL_INDICATOR
+
+    def __init__(self, crl_number: int) -> None:
+        if not isinstance(crl_number, int):
+            raise TypeError("crl_number must be an integer")
+
+        self._crl_number = crl_number
+
+    @property
+    def crl_number(self) -> int:
+        return self._crl_number
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, DeltaCRLIndicator):
+            return NotImplemented
+
+        return self.crl_number == other.crl_number
+
+    def __hash__(self) -> int:
+        return hash(self.crl_number)
+
+    def __repr__(self) -> str:
+        return "<DeltaCRLIndicator(crl_number={0.crl_number})>".format(self)
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class CRLDistributionPoints(ExtensionType):
+    oid = ExtensionOID.CRL_DISTRIBUTION_POINTS
+
+    def __init__(
+        self, distribution_points: typing.Iterable["DistributionPoint"]
+    ) -> None:
+        distribution_points = list(distribution_points)
+        if not all(
+            isinstance(x, DistributionPoint) for x in distribution_points
+        ):
+            raise TypeError(
+                "distribution_points must be a list of DistributionPoint "
+                "objects"
+            )
+
+        self._distribution_points = distribution_points
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods(
+        "_distribution_points"
+    )
+
+    def __repr__(self) -> str:
+        return "<CRLDistributionPoints({})>".format(self._distribution_points)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, CRLDistributionPoints):
+            return NotImplemented
+
+        return self._distribution_points == other._distribution_points
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._distribution_points))
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class FreshestCRL(ExtensionType):
+    oid = ExtensionOID.FRESHEST_CRL
+
+    def __init__(
+        self, distribution_points: typing.Iterable["DistributionPoint"]
+    ) -> None:
+        distribution_points = list(distribution_points)
+        if not all(
+            isinstance(x, DistributionPoint) for x in distribution_points
+        ):
+            raise TypeError(
+                "distribution_points must be a list of DistributionPoint "
+                "objects"
+            )
+
+        self._distribution_points = distribution_points
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods(
+        "_distribution_points"
+    )
+
+    def __repr__(self) -> str:
+        return "<FreshestCRL({})>".format(self._distribution_points)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, FreshestCRL):
+            return NotImplemented
+
+        return self._distribution_points == other._distribution_points
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._distribution_points))
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class DistributionPoint:
+    def __init__(
+        self,
+        full_name: typing.Optional[typing.Iterable[GeneralName]],
+        relative_name: typing.Optional[RelativeDistinguishedName],
+        reasons: typing.Optional[typing.FrozenSet["ReasonFlags"]],
+        crl_issuer: typing.Optional[typing.Iterable[GeneralName]],
+    ) -> None:
+        if full_name and relative_name:
+            raise ValueError(
+                "You cannot provide both full_name and relative_name, at "
+                "least one must be None."
+            )
+
+        if full_name is not None:
+            full_name = list(full_name)
+            if not all(isinstance(x, GeneralName) for x in full_name):
+                raise TypeError(
+                    "full_name must be a list of GeneralName objects"
+                )
+
+        if relative_name:
+            if not isinstance(relative_name, RelativeDistinguishedName):
+                raise TypeError(
+                    "relative_name must be a RelativeDistinguishedName"
+                )
+
+        if crl_issuer is not None:
+            crl_issuer = list(crl_issuer)
+            if not all(isinstance(x, GeneralName) for x in crl_issuer):
+                raise TypeError(
+                    "crl_issuer must be None or a list of general names"
+                )
+
+        if reasons and (
+            not isinstance(reasons, frozenset)
+            or not all(isinstance(x, ReasonFlags) for x in reasons)
+        ):
+            raise TypeError("reasons must be None or frozenset of ReasonFlags")
+
+        if reasons and (
+            ReasonFlags.unspecified in reasons
+            or ReasonFlags.remove_from_crl in reasons
+        ):
+            raise ValueError(
+                "unspecified and remove_from_crl are not valid reasons in a "
+                "DistributionPoint"
+            )
+
+        if reasons and not crl_issuer and not (full_name or relative_name):
+            raise ValueError(
+                "You must supply crl_issuer, full_name, or relative_name when "
+                "reasons is not None"
+            )
+
+        self._full_name = full_name
+        self._relative_name = relative_name
+        self._reasons = reasons
+        self._crl_issuer = crl_issuer
+
+    def __repr__(self) -> str:
+        return (
+            "<DistributionPoint(full_name={0.full_name}, "
+            "relative_name={0.relative_name}, reasons={0.reasons}, "
+            "crl_issuer={0.crl_issuer})>".format(self)
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, DistributionPoint):
+            return NotImplemented
+
+        return (
+            self.full_name == other.full_name
+            and self.relative_name == other.relative_name
+            and self.reasons == other.reasons
+            and self.crl_issuer == other.crl_issuer
+        )
+
+    def __hash__(self) -> int:
+        if self.full_name is not None:
+            fn: typing.Optional[typing.Tuple[GeneralName, ...]] = tuple(
+                self.full_name
+            )
+        else:
+            fn = None
+
+        if self.crl_issuer is not None:
+            crl_issuer: typing.Optional[
+                typing.Tuple[GeneralName, ...]
+            ] = tuple(self.crl_issuer)
+        else:
+            crl_issuer = None
+
+        return hash((fn, self.relative_name, self.reasons, crl_issuer))
+
+    @property
+    def full_name(self) -> typing.Optional[typing.List[GeneralName]]:
+        return self._full_name
+
+    @property
+    def relative_name(self) -> typing.Optional[RelativeDistinguishedName]:
+        return self._relative_name
+
+    @property
+    def reasons(self) -> typing.Optional[typing.FrozenSet["ReasonFlags"]]:
+        return self._reasons
+
+    @property
+    def crl_issuer(self) -> typing.Optional[typing.List[GeneralName]]:
+        return self._crl_issuer
+
+
+class ReasonFlags(utils.Enum):
+    unspecified = "unspecified"
+    key_compromise = "keyCompromise"
+    ca_compromise = "cACompromise"
+    affiliation_changed = "affiliationChanged"
+    superseded = "superseded"
+    cessation_of_operation = "cessationOfOperation"
+    certificate_hold = "certificateHold"
+    privilege_withdrawn = "privilegeWithdrawn"
+    aa_compromise = "aACompromise"
+    remove_from_crl = "removeFromCRL"
+
+
+# These are distribution point bit string mappings. Not to be confused with
+# CRLReason reason flags bit string mappings.
+# ReasonFlags ::= BIT STRING { +# unused (0), +# keyCompromise (1), +# cACompromise (2), +# affiliationChanged (3), +# superseded (4), +# cessationOfOperation (5), +# certificateHold (6), +# privilegeWithdrawn (7), +# aACompromise (8) } +_REASON_BIT_MAPPING = { + 1: ReasonFlags.key_compromise, + 2: ReasonFlags.ca_compromise, + 3: ReasonFlags.affiliation_changed, + 4: ReasonFlags.superseded, + 5: ReasonFlags.cessation_of_operation, + 6: ReasonFlags.certificate_hold, + 7: ReasonFlags.privilege_withdrawn, + 8: ReasonFlags.aa_compromise, +} + +_CRLREASONFLAGS = { + ReasonFlags.key_compromise: 1, + ReasonFlags.ca_compromise: 2, + ReasonFlags.affiliation_changed: 3, + ReasonFlags.superseded: 4, + ReasonFlags.cessation_of_operation: 5, + ReasonFlags.certificate_hold: 6, + ReasonFlags.privilege_withdrawn: 7, + ReasonFlags.aa_compromise: 8, +} + + +class PolicyConstraints(ExtensionType): + oid = ExtensionOID.POLICY_CONSTRAINTS + + def __init__( + self, + require_explicit_policy: typing.Optional[int], + inhibit_policy_mapping: typing.Optional[int], + ) -> None: + if require_explicit_policy is not None and not isinstance( + require_explicit_policy, int + ): + raise TypeError( + "require_explicit_policy must be a non-negative integer or " + "None" + ) + + if inhibit_policy_mapping is not None and not isinstance( + inhibit_policy_mapping, int + ): + raise TypeError( + "inhibit_policy_mapping must be a non-negative integer or None" + ) + + if inhibit_policy_mapping is None and require_explicit_policy is None: + raise ValueError( + "At least one of require_explicit_policy and " + "inhibit_policy_mapping must not be None" + ) + + self._require_explicit_policy = require_explicit_policy + self._inhibit_policy_mapping = inhibit_policy_mapping + + def __repr__(self) -> str: + return ( + "".format(self) + ) + + def __eq__(self, other: object) -> bool: + if not isinstance(other, PolicyConstraints): + return NotImplemented + + return ( + self.require_explicit_policy == other.require_explicit_policy + and self.inhibit_policy_mapping == other.inhibit_policy_mapping + ) + + def __hash__(self) -> int: + return hash( + (self.require_explicit_policy, self.inhibit_policy_mapping) + ) + + @property + def require_explicit_policy(self) -> typing.Optional[int]: + return self._require_explicit_policy + + @property + def inhibit_policy_mapping(self) -> typing.Optional[int]: + return self._inhibit_policy_mapping + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class CertificatePolicies(ExtensionType): + oid = ExtensionOID.CERTIFICATE_POLICIES + + def __init__(self, policies: typing.Iterable["PolicyInformation"]) -> None: + policies = list(policies) + if not all(isinstance(x, PolicyInformation) for x in policies): + raise TypeError( + "Every item in the policies list must be a " + "PolicyInformation" + ) + + self._policies = policies + + __len__, __iter__, __getitem__ = _make_sequence_methods("_policies") + + def __repr__(self) -> str: + return "".format(self._policies) + + def __eq__(self, other: object) -> bool: + if not isinstance(other, CertificatePolicies): + return NotImplemented + + return self._policies == other._policies + + def __hash__(self) -> int: + return hash(tuple(self._policies)) + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class PolicyInformation: + def __init__( + self, + policy_identifier: ObjectIdentifier, + policy_qualifiers: typing.Optional[ + typing.Iterable[typing.Union[str, "UserNotice"]] + ], + ) -> None: + if not 
+
+
+class PolicyInformation:
+    def __init__(
+        self,
+        policy_identifier: ObjectIdentifier,
+        policy_qualifiers: typing.Optional[
+            typing.Iterable[typing.Union[str, "UserNotice"]]
+        ],
+    ) -> None:
+        if not isinstance(policy_identifier, ObjectIdentifier):
+            raise TypeError("policy_identifier must be an ObjectIdentifier")
+
+        self._policy_identifier = policy_identifier
+
+        if policy_qualifiers is not None:
+            policy_qualifiers = list(policy_qualifiers)
+            if not all(
+                isinstance(x, (str, UserNotice)) for x in policy_qualifiers
+            ):
+                raise TypeError(
+                    "policy_qualifiers must be a list of strings and/or "
+                    "UserNotice objects or None"
+                )
+
+        self._policy_qualifiers = policy_qualifiers
+
+    def __repr__(self) -> str:
+        return (
+            "<PolicyInformation(policy_identifier={0.policy_identifier}, "
+            "policy_qualifiers={0.policy_qualifiers})>".format(self)
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, PolicyInformation):
+            return NotImplemented
+
+        return (
+            self.policy_identifier == other.policy_identifier
+            and self.policy_qualifiers == other.policy_qualifiers
+        )
+
+    def __hash__(self) -> int:
+        if self.policy_qualifiers is not None:
+            pq: typing.Optional[
+                typing.Tuple[typing.Union[str, "UserNotice"], ...]
+            ] = tuple(self.policy_qualifiers)
+        else:
+            pq = None
+
+        return hash((self.policy_identifier, pq))
+
+    @property
+    def policy_identifier(self) -> ObjectIdentifier:
+        return self._policy_identifier
+
+    @property
+    def policy_qualifiers(
+        self,
+    ) -> typing.Optional[typing.List[typing.Union[str, "UserNotice"]]]:
+        return self._policy_qualifiers
+
+
+class UserNotice:
+    def __init__(
+        self,
+        notice_reference: typing.Optional["NoticeReference"],
+        explicit_text: typing.Optional[str],
+    ) -> None:
+        if notice_reference and not isinstance(
+            notice_reference, NoticeReference
+        ):
+            raise TypeError(
+                "notice_reference must be None or a NoticeReference"
+            )
+
+        self._notice_reference = notice_reference
+        self._explicit_text = explicit_text
+
+    def __repr__(self) -> str:
+        return (
+            "<UserNotice(notice_reference={0.notice_reference}, "
+            "explicit_text={0.explicit_text!r})>".format(self)
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, UserNotice):
+            return NotImplemented
+
+        return (
+            self.notice_reference == other.notice_reference
+            and self.explicit_text == other.explicit_text
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.notice_reference, self.explicit_text))
+
+    @property
+    def notice_reference(self) -> typing.Optional["NoticeReference"]:
+        return self._notice_reference
+
+    @property
+    def explicit_text(self) -> typing.Optional[str]:
+        return self._explicit_text
+
+
+class NoticeReference:
+    def __init__(
+        self,
+        organization: typing.Optional[str],
+        notice_numbers: typing.Iterable[int],
+    ) -> None:
+        self._organization = organization
+        notice_numbers = list(notice_numbers)
+        if not all(isinstance(x, int) for x in notice_numbers):
+            raise TypeError("notice_numbers must be a list of integers")
+
+        self._notice_numbers = notice_numbers
+
+    def __repr__(self) -> str:
+        return (
+            "<NoticeReference(organization={0.organization!r}, "
+            "notice_numbers={0.notice_numbers})>".format(self)
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, NoticeReference):
+            return NotImplemented
+
+        return (
+            self.organization == other.organization
+            and self.notice_numbers == other.notice_numbers
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.organization, tuple(self.notice_numbers)))
+
+    @property
+    def organization(self) -> typing.Optional[str]:
+        return self._organization
+
+    @property
+    def notice_numbers(self) -> typing.List[int]:
+        return self._notice_numbers
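+
+
+# Illustrative sketch (not part of the upstream module): a UserNotice
+# qualifier carrying human-readable policy text, referencing notice
+# number 1 of a hypothetical "Example Org" notice document.
+#
+#     >>> UserNotice(
+#     ...     NoticeReference(organization="Example Org", notice_numbers=[1]),
+#     ...     explicit_text="Issued under Example Org policy 1",
+#     ... )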
+
+
+class ExtendedKeyUsage(ExtensionType):
+    oid = ExtensionOID.EXTENDED_KEY_USAGE
+
+    def __init__(self, usages: typing.Iterable[ObjectIdentifier]) -> None:
+        usages = list(usages)
+        if not all(isinstance(x, ObjectIdentifier) for x in usages):
+            raise TypeError(
+                "Every item in the usages list must be an ObjectIdentifier"
+            )
+
+        self._usages = usages
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_usages")
+
+    def __repr__(self) -> str:
+        return "<ExtendedKeyUsage({})>".format(self._usages)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, ExtendedKeyUsage):
+            return NotImplemented
+
+        return self._usages == other._usages
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._usages))
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class OCSPNoCheck(ExtensionType):
+    oid = ExtensionOID.OCSP_NO_CHECK
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, OCSPNoCheck):
+            return NotImplemented
+
+        return True
+
+    def __hash__(self) -> int:
+        return hash(OCSPNoCheck)
+
+    def __repr__(self) -> str:
+        return "<OCSPNoCheck()>"
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class PrecertPoison(ExtensionType):
+    oid = ExtensionOID.PRECERT_POISON
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, PrecertPoison):
+            return NotImplemented
+
+        return True
+
+    def __hash__(self) -> int:
+        return hash(PrecertPoison)
+
+    def __repr__(self) -> str:
+        return "<PrecertPoison()>"
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class TLSFeature(ExtensionType):
+    oid = ExtensionOID.TLS_FEATURE
+
+    def __init__(self, features: typing.Iterable["TLSFeatureType"]) -> None:
+        features = list(features)
+        if (
+            not all(isinstance(x, TLSFeatureType) for x in features)
+            or len(features) == 0
+        ):
+            raise TypeError(
+                "features must be a list of elements from the TLSFeatureType "
+                "enum"
+            )
+
+        self._features = features
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_features")
+
+    def __repr__(self) -> str:
+        return "<TLSFeature(features={0._features})>".format(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, TLSFeature):
+            return NotImplemented
+
+        return self._features == other._features
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._features))
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
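+
+
+# Illustrative sketch (not part of the upstream module): the marker
+# commonly called "OCSP Must-Staple" is a TLSFeature extension containing
+# the status_request member of the TLSFeatureType enum defined below.
+#
+#     >>> TLSFeature([TLSFeatureType.status_request])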
+
+
+class TLSFeatureType(utils.Enum):
+    # status_request is defined in RFC 6066 and is used for what is commonly
+    # called OCSP Must-Staple when present in the TLS Feature extension in an
+    # X.509 certificate.
+    status_request = 5
+    # status_request_v2 is defined in RFC 6961 and allows multiple OCSP
+    # responses to be provided. It is not currently in use by clients or
+    # servers.
+    status_request_v2 = 17
+
+
+_TLS_FEATURE_TYPE_TO_ENUM = {x.value: x for x in TLSFeatureType}
+
+
+class InhibitAnyPolicy(ExtensionType):
+    oid = ExtensionOID.INHIBIT_ANY_POLICY
+
+    def __init__(self, skip_certs: int) -> None:
+        if not isinstance(skip_certs, int):
+            raise TypeError("skip_certs must be an integer")
+
+        if skip_certs < 0:
+            raise ValueError("skip_certs must be a non-negative integer")
+
+        self._skip_certs = skip_certs
+
+    def __repr__(self) -> str:
+        return "<InhibitAnyPolicy(skip_certs={0.skip_certs})>".format(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, InhibitAnyPolicy):
+            return NotImplemented
+
+        return self.skip_certs == other.skip_certs
+
+    def __hash__(self) -> int:
+        return hash(self.skip_certs)
+
+    @property
+    def skip_certs(self) -> int:
+        return self._skip_certs
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
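+
+
+# Illustrative sketch (not part of the upstream module): skip_certs is the
+# number of additional certificates that may still assert the special
+# anyPolicy OID; zero inhibits it immediately for the rest of the chain.
+#
+#     >>> InhibitAnyPolicy(0)
+#     <InhibitAnyPolicy(skip_certs=0)>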
+
+
+class KeyUsage(ExtensionType):
+    oid = ExtensionOID.KEY_USAGE
+
+    def __init__(
+        self,
+        digital_signature: bool,
+        content_commitment: bool,
+        key_encipherment: bool,
+        data_encipherment: bool,
+        key_agreement: bool,
+        key_cert_sign: bool,
+        crl_sign: bool,
+        encipher_only: bool,
+        decipher_only: bool,
+    ) -> None:
+        if not key_agreement and (encipher_only or decipher_only):
+            raise ValueError(
+                "encipher_only and decipher_only can only be true when "
+                "key_agreement is true"
+            )
+
+        self._digital_signature = digital_signature
+        self._content_commitment = content_commitment
+        self._key_encipherment = key_encipherment
+        self._data_encipherment = data_encipherment
+        self._key_agreement = key_agreement
+        self._key_cert_sign = key_cert_sign
+        self._crl_sign = crl_sign
+        self._encipher_only = encipher_only
+        self._decipher_only = decipher_only
+
+    @property
+    def digital_signature(self) -> bool:
+        return self._digital_signature
+
+    @property
+    def content_commitment(self) -> bool:
+        return self._content_commitment
+
+    @property
+    def key_encipherment(self) -> bool:
+        return self._key_encipherment
+
+    @property
+    def data_encipherment(self) -> bool:
+        return self._data_encipherment
+
+    @property
+    def key_agreement(self) -> bool:
+        return self._key_agreement
+
+    @property
+    def key_cert_sign(self) -> bool:
+        return self._key_cert_sign
+
+    @property
+    def crl_sign(self) -> bool:
+        return self._crl_sign
+
+    @property
+    def encipher_only(self) -> bool:
+        if not self.key_agreement:
+            raise ValueError(
+                "encipher_only is undefined unless key_agreement is true"
+            )
+        else:
+            return self._encipher_only
+
+    @property
+    def decipher_only(self) -> bool:
+        if not self.key_agreement:
+            raise ValueError(
+                "decipher_only is undefined unless key_agreement is true"
+            )
+        else:
+            return self._decipher_only
+
+    def __repr__(self) -> str:
+        try:
+            encipher_only = self.encipher_only
+            decipher_only = self.decipher_only
+        except ValueError:
+            # Users found None confusing because even though encipher/decipher
+            # have no meaning unless key_agreement is true, to construct an
+            # instance of the class you still need to pass False.
+            encipher_only = False
+            decipher_only = False
+
+        return (
+            "<KeyUsage(digital_signature={0.digital_signature}, "
+            "content_commitment={0.content_commitment}, "
+            "key_encipherment={0.key_encipherment}, "
+            "data_encipherment={0.data_encipherment}, "
+            "key_agreement={0.key_agreement}, "
+            "key_cert_sign={0.key_cert_sign}, crl_sign={0.crl_sign}, "
+            "encipher_only={1}, decipher_only={2})>"
+        ).format(self, encipher_only, decipher_only)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, KeyUsage):
+            return NotImplemented
+
+        return (
+            self.digital_signature == other.digital_signature
+            and self.content_commitment == other.content_commitment
+            and self.key_encipherment == other.key_encipherment
+            and self.data_encipherment == other.data_encipherment
+            and self.key_agreement == other.key_agreement
+            and self.key_cert_sign == other.key_cert_sign
+            and self.crl_sign == other.crl_sign
+            and self._encipher_only == other._encipher_only
+            and self._decipher_only == other._decipher_only
+        )
+
+    def __hash__(self) -> int:
+        return hash(
+            (
+                self.digital_signature,
+                self.content_commitment,
+                self.key_encipherment,
+                self.data_encipherment,
+                self.key_agreement,
+                self.key_cert_sign,
+                self.crl_sign,
+                self._encipher_only,
+                self._decipher_only,
+            )
+        )
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
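+
+
+# Illustrative sketch (not part of the upstream module): a typical
+# key-usage profile for a TLS server certificate. Because key_agreement
+# is False here, reading encipher_only/decipher_only on the result raises
+# ValueError, as implemented above.
+#
+#     >>> KeyUsage(
+#     ...     digital_signature=True, content_commitment=False,
+#     ...     key_encipherment=True, data_encipherment=False,
+#     ...     key_agreement=False, key_cert_sign=False, crl_sign=False,
+#     ...     encipher_only=False, decipher_only=False,
+#     ... )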
+
+
+class NameConstraints(ExtensionType):
+    oid = ExtensionOID.NAME_CONSTRAINTS
+
+    def __init__(
+        self,
+        permitted_subtrees: typing.Optional[typing.Iterable[GeneralName]],
+        excluded_subtrees: typing.Optional[typing.Iterable[GeneralName]],
+    ) -> None:
+        if permitted_subtrees is not None:
+            permitted_subtrees = list(permitted_subtrees)
+            if not permitted_subtrees:
+                raise ValueError(
+                    "permitted_subtrees must be a non-empty list or None"
+                )
+            if not all(isinstance(x, GeneralName) for x in permitted_subtrees):
+                raise TypeError(
+                    "permitted_subtrees must be a list of GeneralName objects "
+                    "or None"
+                )
+
+            self._validate_ip_name(permitted_subtrees)
+
+        if excluded_subtrees is not None:
+            excluded_subtrees = list(excluded_subtrees)
+            if not excluded_subtrees:
+                raise ValueError(
+                    "excluded_subtrees must be a non-empty list or None"
+                )
+            if not all(isinstance(x, GeneralName) for x in excluded_subtrees):
+                raise TypeError(
+                    "excluded_subtrees must be a list of GeneralName objects "
+                    "or None"
+                )
+
+            self._validate_ip_name(excluded_subtrees)
+
+        if permitted_subtrees is None and excluded_subtrees is None:
+            raise ValueError(
+                "At least one of permitted_subtrees and excluded_subtrees "
+                "must not be None"
+            )
+
+        self._permitted_subtrees = permitted_subtrees
+        self._excluded_subtrees = excluded_subtrees
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, NameConstraints):
+            return NotImplemented
+
+        return (
+            self.excluded_subtrees == other.excluded_subtrees
+            and self.permitted_subtrees == other.permitted_subtrees
+        )
+
+    def _validate_ip_name(self, tree: typing.Iterable[GeneralName]) -> None:
+        if any(
+            isinstance(name, IPAddress)
+            and not isinstance(
+                name.value, (ipaddress.IPv4Network, ipaddress.IPv6Network)
+            )
+            for name in tree
+        ):
+            raise TypeError(
+                "IPAddress name constraints must be an IPv4Network or"
+                " IPv6Network object"
+            )
+
+    def __repr__(self) -> str:
+        return (
+            "<NameConstraints(permitted_subtrees={0.permitted_subtrees}, "
+            "excluded_subtrees={0.excluded_subtrees})>".format(self)
+        )
+
+    def __hash__(self) -> int:
+        if self.permitted_subtrees is not None:
+            ps: typing.Optional[typing.Tuple[GeneralName, ...]] = tuple(
+                self.permitted_subtrees
+            )
+        else:
+            ps = None
+
+        if self.excluded_subtrees is not None:
+            es: typing.Optional[typing.Tuple[GeneralName, ...]] = tuple(
+                self.excluded_subtrees
+            )
+        else:
+            es = None
+
+        return hash((ps, es))
+
+    @property
+    def permitted_subtrees(
+        self,
+    ) -> typing.Optional[typing.List[GeneralName]]:
+        return self._permitted_subtrees
+
+    @property
+    def excluded_subtrees(
+        self,
+    ) -> typing.Optional[typing.List[GeneralName]]:
+        return self._excluded_subtrees
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class Extension(typing.Generic[ExtensionTypeVar]):
+    def __init__(
+        self, oid: ObjectIdentifier, critical: bool, value: ExtensionTypeVar
+    ) -> None:
+        if not isinstance(oid, ObjectIdentifier):
+            raise TypeError(
+                "oid argument must be an ObjectIdentifier instance."
+            )
+
+        if not isinstance(critical, bool):
+            raise TypeError("critical must be a boolean value")
+
+        self._oid = oid
+        self._critical = critical
+        self._value = value
+
+    @property
+    def oid(self) -> ObjectIdentifier:
+        return self._oid
+
+    @property
+    def critical(self) -> bool:
+        return self._critical
+
+    @property
+    def value(self) -> ExtensionTypeVar:
+        return self._value
+
+    def __repr__(self) -> str:
+        return (
+            "<Extension(oid={0.oid}, critical={0.critical}, "
+            "value={0.value})>"
+        ).format(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, Extension):
+            return NotImplemented
+
+        return (
+            self.oid == other.oid
+            and self.critical == other.critical
+            and self.value == other.value
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.oid, self.critical, self.value))
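+
+
+# Illustrative sketch (not part of the upstream module): a NameConstraints
+# extension (defined above) permitting only names under example.com and a
+# single IPv4 network. Note that IPAddress constraints must wrap networks,
+# not bare addresses, per _validate_ip_name.
+#
+#     >>> NameConstraints(
+#     ...     permitted_subtrees=[
+#     ...         DNSName("example.com"),
+#     ...         IPAddress(ipaddress.ip_network("192.0.2.0/24")),
+#     ...     ],
+#     ...     excluded_subtrees=None,
+#     ... )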
+
+
+class GeneralNames:
+    def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
+        general_names = list(general_names)
+        if not all(isinstance(x, GeneralName) for x in general_names):
+            raise TypeError(
+                "Every item in the general_names list must be an "
+                "object conforming to the GeneralName interface"
+            )
+
+        self._general_names = general_names
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Union[
+            typing.Type[DNSName],
+            typing.Type[UniformResourceIdentifier],
+            typing.Type[RFC822Name],
+        ],
+    ) -> typing.List[str]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Type[DirectoryName],
+    ) -> typing.List[Name]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Type[RegisteredID],
+    ) -> typing.List[ObjectIdentifier]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: typing.Type[IPAddress]
+    ) -> typing.List[_IPADDRESS_TYPES]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: typing.Type[OtherName]
+    ) -> typing.List[OtherName]:
+        ...
+
+    def get_values_for_type(
+        self,
+        type: typing.Union[
+            typing.Type[DNSName],
+            typing.Type[DirectoryName],
+            typing.Type[IPAddress],
+            typing.Type[OtherName],
+            typing.Type[RFC822Name],
+            typing.Type[RegisteredID],
+            typing.Type[UniformResourceIdentifier],
+        ],
+    ) -> typing.Union[
+        typing.List[_IPADDRESS_TYPES],
+        typing.List[str],
+        typing.List[OtherName],
+        typing.List[Name],
+        typing.List[ObjectIdentifier],
+    ]:
+        # Return the value of each GeneralName, except for OtherName instances,
+        # which we return directly because they have two important properties,
+        # not just one value.
+        objs = (i for i in self if isinstance(i, type))
+        if type != OtherName:
+            return [i.value for i in objs]
+        return list(objs)
+
+    def __repr__(self) -> str:
+        return "<GeneralNames({})>".format(self._general_names)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, GeneralNames):
+            return NotImplemented
+
+        return self._general_names == other._general_names
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._general_names))
+
+
+class SubjectAlternativeName(ExtensionType):
+    oid = ExtensionOID.SUBJECT_ALTERNATIVE_NAME
+
+    def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
+        self._general_names = GeneralNames(general_names)
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Union[
+            typing.Type[DNSName],
+            typing.Type[UniformResourceIdentifier],
+            typing.Type[RFC822Name],
+        ],
+    ) -> typing.List[str]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Type[DirectoryName],
+    ) -> typing.List[Name]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Type[RegisteredID],
+    ) -> typing.List[ObjectIdentifier]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: typing.Type[IPAddress]
+    ) -> typing.List[_IPADDRESS_TYPES]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: typing.Type[OtherName]
+    ) -> typing.List[OtherName]:
+        ...
+
+    def get_values_for_type(
+        self,
+        type: typing.Union[
+            typing.Type[DNSName],
+            typing.Type[DirectoryName],
+            typing.Type[IPAddress],
+            typing.Type[OtherName],
+            typing.Type[RFC822Name],
+            typing.Type[RegisteredID],
+            typing.Type[UniformResourceIdentifier],
+        ],
+    ) -> typing.Union[
+        typing.List[_IPADDRESS_TYPES],
+        typing.List[str],
+        typing.List[OtherName],
+        typing.List[Name],
+        typing.List[ObjectIdentifier],
+    ]:
+        return self._general_names.get_values_for_type(type)
+
+    def __repr__(self) -> str:
+        return "<SubjectAlternativeName({})>".format(self._general_names)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, SubjectAlternativeName):
+            return NotImplemented
+
+        return self._general_names == other._general_names
+
+    def __hash__(self) -> int:
+        return hash(self._general_names)
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
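+
+
+# Illustrative sketch (not part of the upstream module): extracting all
+# DNS names from a SubjectAlternativeName via get_values_for_type.
+#
+#     >>> san = SubjectAlternativeName(
+#     ...     [DNSName("example.com"), DNSName("www.example.com")]
+#     ... )
+#     >>> san.get_values_for_type(DNSName)
+#     ['example.com', 'www.example.com']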
+
+
+class IssuerAlternativeName(ExtensionType):
+    oid = ExtensionOID.ISSUER_ALTERNATIVE_NAME
+
+    def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
+        self._general_names = GeneralNames(general_names)
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Union[
+            typing.Type[DNSName],
+            typing.Type[UniformResourceIdentifier],
+            typing.Type[RFC822Name],
+        ],
+    ) -> typing.List[str]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Type[DirectoryName],
+    ) -> typing.List[Name]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Type[RegisteredID],
+    ) -> typing.List[ObjectIdentifier]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: typing.Type[IPAddress]
+    ) -> typing.List[_IPADDRESS_TYPES]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: typing.Type[OtherName]
+    ) -> typing.List[OtherName]:
+        ...
+
+    def get_values_for_type(
+        self,
+        type: typing.Union[
+            typing.Type[DNSName],
+            typing.Type[DirectoryName],
+            typing.Type[IPAddress],
+            typing.Type[OtherName],
+            typing.Type[RFC822Name],
+            typing.Type[RegisteredID],
+            typing.Type[UniformResourceIdentifier],
+        ],
+    ) -> typing.Union[
+        typing.List[_IPADDRESS_TYPES],
+        typing.List[str],
+        typing.List[OtherName],
+        typing.List[Name],
+        typing.List[ObjectIdentifier],
+    ]:
+        return self._general_names.get_values_for_type(type)
+
+    def __repr__(self) -> str:
+        return "<IssuerAlternativeName({})>".format(self._general_names)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, IssuerAlternativeName):
+            return NotImplemented
+
+        return self._general_names == other._general_names
+
+    def __hash__(self) -> int:
+        return hash(self._general_names)
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class CertificateIssuer(ExtensionType):
+    oid = CRLEntryExtensionOID.CERTIFICATE_ISSUER
+
+    def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
+        self._general_names = GeneralNames(general_names)
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Union[
+            typing.Type[DNSName],
+            typing.Type[UniformResourceIdentifier],
+            typing.Type[RFC822Name],
+        ],
+    ) -> typing.List[str]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Type[DirectoryName],
+    ) -> typing.List[Name]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: typing.Type[RegisteredID],
+    ) -> typing.List[ObjectIdentifier]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: typing.Type[IPAddress]
+    ) -> typing.List[_IPADDRESS_TYPES]:
+        ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: typing.Type[OtherName]
+    ) -> typing.List[OtherName]:
+        ...
+
+    def get_values_for_type(
+        self,
+        type: typing.Union[
+            typing.Type[DNSName],
+            typing.Type[DirectoryName],
+            typing.Type[IPAddress],
+            typing.Type[OtherName],
+            typing.Type[RFC822Name],
+            typing.Type[RegisteredID],
+            typing.Type[UniformResourceIdentifier],
+        ],
+    ) -> typing.Union[
+        typing.List[_IPADDRESS_TYPES],
+        typing.List[str],
+        typing.List[OtherName],
+        typing.List[Name],
+        typing.List[ObjectIdentifier],
+    ]:
+        return self._general_names.get_values_for_type(type)
+
+    def __repr__(self) -> str:
+        return "<CertificateIssuer({})>".format(self._general_names)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, CertificateIssuer):
+            return NotImplemented
+
+        return self._general_names == other._general_names
+
+    def __hash__(self) -> int:
+        return hash(self._general_names)
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class CRLReason(ExtensionType):
+    oid = CRLEntryExtensionOID.CRL_REASON
+
+    def __init__(self, reason: ReasonFlags) -> None:
+        if not isinstance(reason, ReasonFlags):
+            raise TypeError("reason must be an element from ReasonFlags")
+
+        self._reason = reason
+
+    def __repr__(self) -> str:
+        return "<CRLReason(reason={})>".format(self._reason)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, CRLReason):
+            return NotImplemented
+
+        return self.reason == other.reason
+
+    def __hash__(self) -> int:
+        return hash(self.reason)
+
+    @property
+    def reason(self) -> ReasonFlags:
+        return self._reason
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class InvalidityDate(ExtensionType):
+    oid = CRLEntryExtensionOID.INVALIDITY_DATE
+
+    def __init__(self, invalidity_date: datetime.datetime) -> None:
+        if not isinstance(invalidity_date, datetime.datetime):
+            raise TypeError("invalidity_date must be a datetime.datetime")
+
+        self._invalidity_date = invalidity_date
+
+    def __repr__(self) -> str:
+        return "<InvalidityDate(invalidity_date={})>".format(
+            self._invalidity_date
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, InvalidityDate):
+            return NotImplemented
+
+        return self.invalidity_date == other.invalidity_date
+
+    def __hash__(self) -> int:
+        return hash(self.invalidity_date)
+
+    @property
+    def invalidity_date(self) -> datetime.datetime:
+        return self._invalidity_date
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class PrecertificateSignedCertificateTimestamps(ExtensionType):
+    oid = ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS
+
+    def __init__(
+        self,
+        signed_certificate_timestamps: typing.Iterable[
+            SignedCertificateTimestamp
+        ],
+    ) -> None:
+        signed_certificate_timestamps = list(signed_certificate_timestamps)
+        if not all(
+            isinstance(sct, SignedCertificateTimestamp)
+            for sct in signed_certificate_timestamps
+        ):
+            raise TypeError(
+                "Every item in the signed_certificate_timestamps list must be "
+                "a SignedCertificateTimestamp"
+            )
+        self._signed_certificate_timestamps = signed_certificate_timestamps
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods(
+        "_signed_certificate_timestamps"
+    )
+
+    def __repr__(self) -> str:
+        return "<PrecertificateSignedCertificateTimestamps({})>".format(
+            list(self)
+        )
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._signed_certificate_timestamps))
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, PrecertificateSignedCertificateTimestamps):
+            return NotImplemented
+
+        return (
+            self._signed_certificate_timestamps
+            == other._signed_certificate_timestamps
+        )
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
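+
+
+# Note (illustrative, not part of the upstream module): instances of this
+# extension are normally obtained from a parsed certificate rather than
+# constructed by hand, e.g. via
+#
+#     >>> cert.extensions.get_extension_for_class(
+#     ...     PrecertificateSignedCertificateTimestamps
+#     ... )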
+
+
+class SignedCertificateTimestamps(ExtensionType):
+    oid = ExtensionOID.SIGNED_CERTIFICATE_TIMESTAMPS
+
+    def __init__(
+        self,
+        signed_certificate_timestamps: typing.Iterable[
+            SignedCertificateTimestamp
+        ],
+    ) -> None:
+        signed_certificate_timestamps = list(signed_certificate_timestamps)
+        if not all(
+            isinstance(sct, SignedCertificateTimestamp)
+            for sct in signed_certificate_timestamps
+        ):
+            raise TypeError(
+                "Every item in the signed_certificate_timestamps list must be "
+                "a SignedCertificateTimestamp"
+            )
+        self._signed_certificate_timestamps = signed_certificate_timestamps
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods(
+        "_signed_certificate_timestamps"
+    )
+
+    def __repr__(self) -> str:
+        return "<SignedCertificateTimestamps({})>".format(list(self))
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._signed_certificate_timestamps))
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, SignedCertificateTimestamps):
+            return NotImplemented
+
+        return (
+            self._signed_certificate_timestamps
+            == other._signed_certificate_timestamps
+        )
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class OCSPNonce(ExtensionType):
+    oid = OCSPExtensionOID.NONCE
+
+    def __init__(self, nonce: bytes) -> None:
+        if not isinstance(nonce, bytes):
+            raise TypeError("nonce must be bytes")
+
+        self._nonce = nonce
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, OCSPNonce):
+            return NotImplemented
+
+        return self.nonce == other.nonce
+
+    def __hash__(self) -> int:
+        return hash(self.nonce)
+
+    def __repr__(self) -> str:
+        return "<OCSPNonce(nonce={0.nonce!r})>".format(self)
+
+    @property
+    def nonce(self) -> bytes:
+        return self._nonce
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
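+
+
+# Illustrative sketch (not part of the upstream module): an OCSP nonce is
+# opaque bytes; a random value is typical (assumes `import os`).
+#
+#     >>> OCSPNonce(os.urandom(16))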
+
+
+class IssuingDistributionPoint(ExtensionType):
+    oid = ExtensionOID.ISSUING_DISTRIBUTION_POINT
+
+    def __init__(
+        self,
+        full_name: typing.Optional[typing.Iterable[GeneralName]],
+        relative_name: typing.Optional[RelativeDistinguishedName],
+        only_contains_user_certs: bool,
+        only_contains_ca_certs: bool,
+        only_some_reasons: typing.Optional[typing.FrozenSet[ReasonFlags]],
+        indirect_crl: bool,
+        only_contains_attribute_certs: bool,
+    ) -> None:
+        if full_name is not None:
+            full_name = list(full_name)
+
+        if only_some_reasons and (
+            not isinstance(only_some_reasons, frozenset)
+            or not all(isinstance(x, ReasonFlags) for x in only_some_reasons)
+        ):
+            raise TypeError(
+                "only_some_reasons must be None or frozenset of ReasonFlags"
+            )
+
+        if only_some_reasons and (
+            ReasonFlags.unspecified in only_some_reasons
+            or ReasonFlags.remove_from_crl in only_some_reasons
+        ):
+            raise ValueError(
+                "unspecified and remove_from_crl are not valid reasons in an "
+                "IssuingDistributionPoint"
+            )
+
+        if not (
+            isinstance(only_contains_user_certs, bool)
+            and isinstance(only_contains_ca_certs, bool)
+            and isinstance(indirect_crl, bool)
+            and isinstance(only_contains_attribute_certs, bool)
+        ):
+            raise TypeError(
+                "only_contains_user_certs, only_contains_ca_certs, "
+                "indirect_crl and only_contains_attribute_certs "
+                "must all be boolean."
+            )
+
+        crl_constraints = [
+            only_contains_user_certs,
+            only_contains_ca_certs,
+            indirect_crl,
+            only_contains_attribute_certs,
+        ]
+
+        if len([x for x in crl_constraints if x]) > 1:
+            raise ValueError(
+                "Only one of the following can be set to True: "
+                "only_contains_user_certs, only_contains_ca_certs, "
+                "indirect_crl, only_contains_attribute_certs"
+            )
+
+        if not any(
+            [
+                only_contains_user_certs,
+                only_contains_ca_certs,
+                indirect_crl,
+                only_contains_attribute_certs,
+                full_name,
+                relative_name,
+                only_some_reasons,
+            ]
+        ):
+            raise ValueError(
+                "Cannot create empty extension: "
+                "if only_contains_user_certs, only_contains_ca_certs, "
+                "indirect_crl, and only_contains_attribute_certs are all False"
+                ", then either full_name, relative_name, or only_some_reasons "
+                "must have a value."
+            )
+
+        self._only_contains_user_certs = only_contains_user_certs
+        self._only_contains_ca_certs = only_contains_ca_certs
+        self._indirect_crl = indirect_crl
+        self._only_contains_attribute_certs = only_contains_attribute_certs
+        self._only_some_reasons = only_some_reasons
+        self._full_name = full_name
+        self._relative_name = relative_name
+
+    def __repr__(self) -> str:
+        return (
+            "<IssuingDistributionPoint(full_name={0.full_name}, "
+            "relative_name={0.relative_name}, "
+            "only_contains_user_certs={0.only_contains_user_certs}, "
+            "only_contains_ca_certs={0.only_contains_ca_certs}, "
+            "only_some_reasons={0.only_some_reasons}, "
+            "indirect_crl={0.indirect_crl}, "
+            "only_contains_attribute_certs="
+            "{0.only_contains_attribute_certs})>".format(self)
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, IssuingDistributionPoint):
+            return NotImplemented
+
+        return (
+            self.full_name == other.full_name
+            and self.relative_name == other.relative_name
+            and self.only_contains_user_certs == other.only_contains_user_certs
+            and self.only_contains_ca_certs == other.only_contains_ca_certs
+            and self.only_some_reasons == other.only_some_reasons
+            and self.indirect_crl == other.indirect_crl
+            and self.only_contains_attribute_certs
+            == other.only_contains_attribute_certs
+        )
+
+    def __hash__(self) -> int:
+        return hash(
+            (
+                self.full_name,
+                self.relative_name,
+                self.only_contains_user_certs,
+                self.only_contains_ca_certs,
+                self.only_some_reasons,
+                self.indirect_crl,
+                self.only_contains_attribute_certs,
+            )
+        )
+
+    @property
+    def full_name(self) -> typing.Optional[typing.List[GeneralName]]:
+        return self._full_name
+
+    @property
+    def relative_name(self) -> typing.Optional[RelativeDistinguishedName]:
+        return self._relative_name
+
+    @property
+    def only_contains_user_certs(self) -> bool:
+        return self._only_contains_user_certs
+
+    @property
+    def only_contains_ca_certs(self) -> bool:
+        return self._only_contains_ca_certs
+
+    @property
+    def only_some_reasons(
+        self,
+    ) -> typing.Optional[typing.FrozenSet[ReasonFlags]]:
+        return self._only_some_reasons
+
+    @property
+    def indirect_crl(self) -> bool:
+        return self._indirect_crl
+
+    @property
+    def only_contains_attribute_certs(self) -> bool:
+        return self._only_contains_attribute_certs
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class UnrecognizedExtension(ExtensionType):
+    def __init__(self, oid: ObjectIdentifier, value: bytes) -> None:
+        if not isinstance(oid, ObjectIdentifier):
+            raise TypeError("oid must be an ObjectIdentifier")
+        self._oid = oid
+        self._value = value
+
+    @property
+    def oid(self) -> ObjectIdentifier:  # type: ignore[override]
+        return self._oid
+
+    @property
+    def value(self) -> bytes:
+        return self._value
+
+    def __repr__(self) -> str:
+        return (
+            "<UnrecognizedExtension(oid={0.oid}, "
+            "value={0.value!r})>".format(self)
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, UnrecognizedExtension):
+            return NotImplemented
+
+        return self.oid == other.oid and self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash((self.oid, self.value))
+
+    def public_bytes(self) -> bytes:
+        return self.value
diff --git a/env/lib/python3.8/site-packages/cryptography/x509/general_name.py b/env/lib/python3.8/site-packages/cryptography/x509/general_name.py
new file mode 100644
index 00000000..9939233f
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cryptography/x509/general_name.py
@@ -0,0 +1,284 @@
+# This file is dual licensed under the terms of the Apache License, Version
+# 2.0, and the BSD License. See the LICENSE file in the root of this repository
+# for complete details.
+
+
+import abc
+import ipaddress
+import typing
+from email.utils import parseaddr
+
+from cryptography.x509.name import Name
+from cryptography.x509.oid import ObjectIdentifier
+
+
+_IPADDRESS_TYPES = typing.Union[
+    ipaddress.IPv4Address,
+    ipaddress.IPv6Address,
+    ipaddress.IPv4Network,
+    ipaddress.IPv6Network,
+]
+
+
+class UnsupportedGeneralNameType(Exception):
+    pass
+
+
+class GeneralName(metaclass=abc.ABCMeta):
+    @abc.abstractproperty
+    def value(self) -> typing.Any:
+        """
+        Return the value of the object
+        """
+
+
+class RFC822Name(GeneralName):
+    def __init__(self, value: str) -> None:
+        if isinstance(value, str):
+            try:
+                value.encode("ascii")
+            except UnicodeEncodeError:
+                raise ValueError(
+                    "RFC822Name values should be passed as an A-label string. "
+                    "This means unicode characters should be encoded via "
+                    "a library like idna."
+                )
+        else:
+            raise TypeError("value must be string")
+
+        name, address = parseaddr(value)
+        if name or not address:
+            # parseaddr has found a name (e.g. Name <email>) or the entire
+            # value is an empty string.
+            raise ValueError("Invalid rfc822name value")
+
+        self._value = value
+
+    @property
+    def value(self) -> str:
+        return self._value
+
+    @classmethod
+    def _init_without_validation(cls, value: str) -> "RFC822Name":
+        instance = cls.__new__(cls)
+        instance._value = value
+        return instance
+
+    def __repr__(self) -> str:
+        return "<RFC822Name(value={!r})>".format(self.value)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, RFC822Name):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class DNSName(GeneralName):
+    def __init__(self, value: str) -> None:
+        if isinstance(value, str):
+            try:
+                value.encode("ascii")
+            except UnicodeEncodeError:
+                raise ValueError(
+                    "DNSName values should be passed as an A-label string. "
+                    "This means unicode characters should be encoded via "
+                    "a library like idna."
+                )
+        else:
+            raise TypeError("value must be string")
+
+        self._value = value
+
+    @property
+    def value(self) -> str:
+        return self._value
+
+    @classmethod
+    def _init_without_validation(cls, value: str) -> "DNSName":
+        instance = cls.__new__(cls)
+        instance._value = value
+        return instance
+
+    def __repr__(self) -> str:
+        return "<DNSName(value={!r})>".format(self.value)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, DNSName):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
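+
+
+# Illustrative sketch (not part of the upstream module): internationalized
+# names must be pre-encoded to A-labels before construction, as enforced
+# above.
+#
+#     >>> DNSName("xn--bcher-kva.example")  # "bücher.example" as an A-label
+#     >>> DNSName("bücher.example")         # raises ValueError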
+
+
+class UniformResourceIdentifier(GeneralName):
+    def __init__(self, value: str) -> None:
+        if isinstance(value, str):
+            try:
+                value.encode("ascii")
+            except UnicodeEncodeError:
+                raise ValueError(
+                    "URI values should be passed as an A-label string. "
+                    "This means unicode characters should be encoded via "
+                    "a library like idna."
+                )
+        else:
+            raise TypeError("value must be string")
+
+        self._value = value
+
+    @property
+    def value(self) -> str:
+        return self._value
+
+    @classmethod
+    def _init_without_validation(
+        cls, value: str
+    ) -> "UniformResourceIdentifier":
+        instance = cls.__new__(cls)
+        instance._value = value
+        return instance
+
+    def __repr__(self) -> str:
+        return "<UniformResourceIdentifier(value={!r})>".format(self.value)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, UniformResourceIdentifier):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class DirectoryName(GeneralName):
+    def __init__(self, value: Name) -> None:
+        if not isinstance(value, Name):
+            raise TypeError("value must be a Name")
+
+        self._value = value
+
+    @property
+    def value(self) -> Name:
+        return self._value
+
+    def __repr__(self) -> str:
+        return "<DirectoryName(value={})>".format(self.value)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, DirectoryName):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class RegisteredID(GeneralName):
+    def __init__(self, value: ObjectIdentifier) -> None:
+        if not isinstance(value, ObjectIdentifier):
+            raise TypeError("value must be an ObjectIdentifier")
+
+        self._value = value
+
+    @property
+    def value(self) -> ObjectIdentifier:
+        return self._value
+
+    def __repr__(self) -> str:
+        return "<RegisteredID(value={})>".format(self.value)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, RegisteredID):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class IPAddress(GeneralName):
+    def __init__(self, value: _IPADDRESS_TYPES) -> None:
+        if not isinstance(
+            value,
+            (
+                ipaddress.IPv4Address,
+                ipaddress.IPv6Address,
+                ipaddress.IPv4Network,
+                ipaddress.IPv6Network,
+            ),
+        ):
+            raise TypeError(
+                "value must be an instance of ipaddress.IPv4Address, "
+                "ipaddress.IPv6Address, ipaddress.IPv4Network, or "
+                "ipaddress.IPv6Network"
+            )
+
+        self._value = value
+
+    @property
+    def value(self) -> _IPADDRESS_TYPES:
+        return self._value
+
+    def _packed(self) -> bytes:
+        if isinstance(
+            self.value, (ipaddress.IPv4Address, ipaddress.IPv6Address)
+        ):
+            return self.value.packed
+        else:
+            return (
+                self.value.network_address.packed + self.value.netmask.packed
+            )
+
+    def __repr__(self) -> str:
+        return "<IPAddress(value={})>".format(self.value)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, IPAddress):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class OtherName(GeneralName):
+    def __init__(self, type_id: ObjectIdentifier, value: bytes) -> None:
+        if not isinstance(type_id, ObjectIdentifier):
+            raise TypeError("type_id must be an ObjectIdentifier")
+        if not isinstance(value, bytes):
+            raise TypeError("value must be a binary string")
+
+        self._type_id = type_id
+        self._value = value
+
+    @property
+    def type_id(self) -> ObjectIdentifier:
+        return self._type_id
+
+    @property
+    def value(self) -> bytes:
+        return self._value
+
+    def __repr__(self) -> str:
+        return "<OtherName(type_id={}, value={!r})>".format(
+            self.type_id, self.value
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, OtherName):
+            return NotImplemented
+
+        return self.type_id == other.type_id and self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash((self.type_id, self.value))
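+
+
+# Illustrative sketch (not part of the upstream module): IPAddress wraps a
+# single address when used in a SubjectAlternativeName, but must wrap a
+# network when used as a NameConstraints subtree (see _validate_ip_name in
+# extensions.py).
+#
+#     >>> IPAddress(ipaddress.ip_address("192.0.2.1"))
+#     >>> IPAddress(ipaddress.ip_network("192.0.2.0/24"))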
diff --git a/env/lib/python3.8/site-packages/cryptography/x509/name.py b/env/lib/python3.8/site-packages/cryptography/x509/name.py
new file mode 100644
index 00000000..702fb4b2
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cryptography/x509/name.py
@@ -0,0 +1,462 @@
+# This file is dual licensed under the terms of the Apache License, Version
+# 2.0, and the BSD License. See the LICENSE file in the root of this repository
+# for complete details.
+
+import binascii
+import re
+import sys
+import typing
+import warnings
+
+from cryptography import utils
+from cryptography.hazmat.bindings._rust import (
+    x509 as rust_x509,
+)
+from cryptography.x509.oid import NameOID, ObjectIdentifier
+
+
+class _ASN1Type(utils.Enum):
+    BitString = 3
+    OctetString = 4
+    UTF8String = 12
+    NumericString = 18
+    PrintableString = 19
+    T61String = 20
+    IA5String = 22
+    UTCTime = 23
+    GeneralizedTime = 24
+    VisibleString = 26
+    UniversalString = 28
+    BMPString = 30
+
+
+_ASN1_TYPE_TO_ENUM = {i.value: i for i in _ASN1Type}
+_NAMEOID_DEFAULT_TYPE: typing.Dict[ObjectIdentifier, _ASN1Type] = {
+    NameOID.COUNTRY_NAME: _ASN1Type.PrintableString,
+    NameOID.JURISDICTION_COUNTRY_NAME: _ASN1Type.PrintableString,
+    NameOID.SERIAL_NUMBER: _ASN1Type.PrintableString,
+    NameOID.DN_QUALIFIER: _ASN1Type.PrintableString,
+    NameOID.EMAIL_ADDRESS: _ASN1Type.IA5String,
+    NameOID.DOMAIN_COMPONENT: _ASN1Type.IA5String,
+}
+
+# Type aliases
+_OidNameMap = typing.Mapping[ObjectIdentifier, str]
+_NameOidMap = typing.Mapping[str, ObjectIdentifier]
+
+#: Short attribute names from RFC 4514:
+#: https://tools.ietf.org/html/rfc4514#page-7
+_NAMEOID_TO_NAME: _OidNameMap = {
+    NameOID.COMMON_NAME: "CN",
+    NameOID.LOCALITY_NAME: "L",
+    NameOID.STATE_OR_PROVINCE_NAME: "ST",
+    NameOID.ORGANIZATION_NAME: "O",
+    NameOID.ORGANIZATIONAL_UNIT_NAME: "OU",
+    NameOID.COUNTRY_NAME: "C",
+    NameOID.STREET_ADDRESS: "STREET",
+    NameOID.DOMAIN_COMPONENT: "DC",
+    NameOID.USER_ID: "UID",
+}
+_NAME_TO_NAMEOID = {v: k for k, v in _NAMEOID_TO_NAME.items()}
+
+
+def _escape_dn_value(val: typing.Union[str, bytes]) -> str:
+    """Escape special characters in RFC4514 Distinguished Name value."""
+
+    if not val:
+        return ""
+
+    # RFC 4514 Section 2.4 defines the value as being the # (U+0023) character
+    # followed by the hexadecimal encoding of the octets.
+    if isinstance(val, bytes):
+        return "#" + binascii.hexlify(val).decode("utf8")
+
+    # See https://tools.ietf.org/html/rfc4514#section-2.4
+    val = val.replace("\\", "\\\\")
+    val = val.replace('"', '\\"')
+    val = val.replace("+", "\\+")
+    val = val.replace(",", "\\,")
+    val = val.replace(";", "\\;")
+    val = val.replace("<", "\\<")
+    val = val.replace(">", "\\>")
+    val = val.replace("\0", "\\00")
+
+    if val[0] in ("#", " "):
+        val = "\\" + val
+    if val[-1] == " ":
+        val = val[:-1] + "\\ "
+
+    return val
+
+
+def _unescape_dn_value(val: str) -> str:
+    if not val:
+        return ""
+
+    # See https://tools.ietf.org/html/rfc4514#section-3
+
+    # special = escaped / SPACE / SHARP / EQUALS
+    # escaped = DQUOTE / PLUS / COMMA / SEMI / LANGLE / RANGLE
+    def sub(m):
+        val = m.group(1)
+        # Regular escape
+        if len(val) == 1:
+            return val
+        # Hex-value escape
+        return chr(int(val, 16))
+
+    return _RFC4514NameParser._PAIR_RE.sub(sub, val)
+
+
+class NameAttribute:
+    def __init__(
+        self,
+        oid: ObjectIdentifier,
+        value: typing.Union[str, bytes],
+        _type: typing.Optional[_ASN1Type] = None,
+        *,
+        _validate: bool = True,
+    ) -> None:
+        if not isinstance(oid, ObjectIdentifier):
+            raise TypeError(
+                "oid argument must be an ObjectIdentifier instance."
+            )
+        if _type == _ASN1Type.BitString:
+            if oid != NameOID.X500_UNIQUE_IDENTIFIER:
+                raise TypeError(
+                    "oid must be X500_UNIQUE_IDENTIFIER for BitString type."
+                )
+            if not isinstance(value, bytes):
+                raise TypeError("value must be bytes for BitString")
+        else:
+            if not isinstance(value, str):
+                raise TypeError("value argument must be a str")
+
+        if (
+            oid == NameOID.COUNTRY_NAME
+            or oid == NameOID.JURISDICTION_COUNTRY_NAME
+        ):
+            assert isinstance(value, str)
+            c_len = len(value.encode("utf8"))
+            if c_len != 2 and _validate is True:
+                raise ValueError(
+                    "Country name must be a 2 character country code"
+                )
+            elif c_len != 2:
+                warnings.warn(
+                    "Country names should be two characters, but the "
+                    "attribute is {} characters in length.".format(c_len),
+                    stacklevel=2,
+                )
+
+        # The appropriate ASN1 string type varies by OID and is defined across
+        # multiple RFCs including 2459, 3280, and 5280. In general UTF8String
+        # is preferred (2459), but 3280 and 5280 specify several OIDs with
+        # alternate types. This means when we see the sentinel value we need
+        # to look up whether the OID has a non-UTF8 type. If it does, set it
+        # to that. Otherwise, UTF8!
+        if _type is None:
+            _type = _NAMEOID_DEFAULT_TYPE.get(oid, _ASN1Type.UTF8String)
+
+        if not isinstance(_type, _ASN1Type):
+            raise TypeError("_type must be from the _ASN1Type enum")
+
+        self._oid = oid
+        self._value = value
+        self._type = _type
+
+    @property
+    def oid(self) -> ObjectIdentifier:
+        return self._oid
+
+    @property
+    def value(self) -> typing.Union[str, bytes]:
+        return self._value
+
+    @property
+    def rfc4514_attribute_name(self) -> str:
+        """
+        The short attribute name (for example "CN") if available,
+        otherwise the OID dotted string.
+        """
+        return _NAMEOID_TO_NAME.get(self.oid, self.oid.dotted_string)
+
+    def rfc4514_string(
+        self, attr_name_overrides: typing.Optional[_OidNameMap] = None
+    ) -> str:
+        """
+        Format as RFC4514 Distinguished Name string.
+
+        Use short attribute name if available, otherwise fall back to OID
+        dotted string.
+        """
+        attr_name = (
+            attr_name_overrides.get(self.oid) if attr_name_overrides else None
+        )
+        if attr_name is None:
+            attr_name = self.rfc4514_attribute_name
+
+        return f"{attr_name}={_escape_dn_value(self.value)}"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, NameAttribute):
+            return NotImplemented
+
+        return self.oid == other.oid and self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash((self.oid, self.value))
+
+    def __repr__(self) -> str:
+        return "<NameAttribute(oid={0.oid}, value={0.value!r})>".format(self)
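+
+
+# Illustrative sketch (not part of the upstream module): rfc4514_string
+# applies the RFC 4514 escaping implemented in _escape_dn_value above.
+#
+#     >>> NameAttribute(NameOID.COMMON_NAME, "a+b").rfc4514_string()
+#     'CN=a\\+b'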
+
+
+class RelativeDistinguishedName:
+    def __init__(self, attributes: typing.Iterable[NameAttribute]):
+        attributes = list(attributes)
+        if not attributes:
+            raise ValueError("a relative distinguished name cannot be empty")
+        if not all(isinstance(x, NameAttribute) for x in attributes):
+            raise TypeError("attributes must be an iterable of NameAttribute")
+
+        # Keep list and frozenset to preserve attribute order where it matters
+        self._attributes = attributes
+        self._attribute_set = frozenset(attributes)
+
+        if len(self._attribute_set) != len(attributes):
+            raise ValueError("duplicate attributes are not allowed")
+
+    def get_attributes_for_oid(
+        self, oid: ObjectIdentifier
+    ) -> typing.List[NameAttribute]:
+        return [i for i in self if i.oid == oid]
+
+    def rfc4514_string(
+        self, attr_name_overrides: typing.Optional[_OidNameMap] = None
+    ) -> str:
+        """
+        Format as RFC4514 Distinguished Name string.
+
+        Within each RDN, attributes are joined by '+', although that is rarely
+        used in certificates.
+        """
+        return "+".join(
+            attr.rfc4514_string(attr_name_overrides)
+            for attr in self._attributes
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, RelativeDistinguishedName):
+            return NotImplemented
+
+        return self._attribute_set == other._attribute_set
+
+    def __hash__(self) -> int:
+        return hash(self._attribute_set)
+
+    def __iter__(self) -> typing.Iterator[NameAttribute]:
+        return iter(self._attributes)
+
+    def __len__(self) -> int:
+        return len(self._attributes)
+
+    def __repr__(self) -> str:
+        return "<RelativeDistinguishedName({})>".format(self.rfc4514_string())
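+
+
+# Illustrative sketch (not part of the upstream module): a multi-valued
+# RDN joins its attributes with '+', as documented above.
+#
+#     >>> RelativeDistinguishedName([
+#     ...     NameAttribute(NameOID.COMMON_NAME, "x"),
+#     ...     NameAttribute(NameOID.ORGANIZATION_NAME, "y"),
+#     ... ]).rfc4514_string()
+#     'CN=x+O=y'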
+
+
+class Name:
+    @typing.overload
+    def __init__(self, attributes: typing.Iterable[NameAttribute]) -> None:
+        ...
+
+    @typing.overload
+    def __init__(
+        self, attributes: typing.Iterable[RelativeDistinguishedName]
+    ) -> None:
+        ...
+
+    def __init__(
+        self,
+        attributes: typing.Iterable[
+            typing.Union[NameAttribute, RelativeDistinguishedName]
+        ],
+    ) -> None:
+        attributes = list(attributes)
+        if all(isinstance(x, NameAttribute) for x in attributes):
+            self._attributes = [
+                RelativeDistinguishedName([typing.cast(NameAttribute, x)])
+                for x in attributes
+            ]
+        elif all(isinstance(x, RelativeDistinguishedName) for x in attributes):
+            self._attributes = typing.cast(
+                typing.List[RelativeDistinguishedName], attributes
+            )
+        else:
+            raise TypeError(
+                "attributes must be a list of NameAttribute"
+                " or a list of RelativeDistinguishedName"
+            )
+
+    @classmethod
+    def from_rfc4514_string(
+        cls,
+        data: str,
+        attr_name_overrides: typing.Optional[_NameOidMap] = None,
+    ) -> "Name":
+        return _RFC4514NameParser(data, attr_name_overrides or {}).parse()
+
+    def rfc4514_string(
+        self, attr_name_overrides: typing.Optional[_OidNameMap] = None
+    ) -> str:
+        """
+        Format as RFC4514 Distinguished Name string.
+        For example 'CN=foobar.com,O=Foo Corp,C=US'
+
+        An X.509 name is a two-level structure: a list of sets of attributes.
+        Each list element is separated by ',' and within each list element, set
+        elements are separated by '+'. The latter is almost never used in
+        real world certificates. According to RFC4514 section 2.1 the
+        RDNSequence must be reversed when converting to string representation.
+        """
+        return ",".join(
+            attr.rfc4514_string(attr_name_overrides)
+            for attr in reversed(self._attributes)
+        )
+
+    def get_attributes_for_oid(
+        self, oid: ObjectIdentifier
+    ) -> typing.List[NameAttribute]:
+        return [i for i in self if i.oid == oid]
+
+    @property
+    def rdns(self) -> typing.List[RelativeDistinguishedName]:
+        return self._attributes
+
+    def public_bytes(self, backend: typing.Any = None) -> bytes:
+        return rust_x509.encode_name_bytes(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, Name):
+            return NotImplemented
+
+        return self._attributes == other._attributes
+
+    def __hash__(self) -> int:
+        # TODO: this is relatively expensive, if this looks like a bottleneck
+        # for you, consider optimizing!
+        return hash(tuple(self._attributes))
+
+    def __iter__(self) -> typing.Iterator[NameAttribute]:
+        for rdn in self._attributes:
+            for ava in rdn:
+                yield ava
+
+    def __len__(self) -> int:
+        return sum(len(rdn) for rdn in self._attributes)
+
+    def __repr__(self) -> str:
+        rdns = ",".join(attr.rfc4514_string() for attr in self._attributes)
+        return "<Name({})>".format(rdns)
+
+
+class _RFC4514NameParser:
+    _OID_RE = re.compile(r"(0|([1-9]\d*))(\.(0|([1-9]\d*)))+")
+    _DESCR_RE = re.compile(r"[a-zA-Z][a-zA-Z\d-]*")
+
+    _PAIR = r"\\([\\ #=\"\+,;<>]|[\da-zA-Z]{2})"
+    _PAIR_RE = re.compile(_PAIR)
+    _LUTF1 = r"[\x01-\x1f\x21\x24-\x2A\x2D-\x3A\x3D\x3F-\x5B\x5D-\x7F]"
+    _SUTF1 = r"[\x01-\x21\x23-\x2A\x2D-\x3A\x3D\x3F-\x5B\x5D-\x7F]"
+    _TUTF1 = r"[\x01-\x1F\x21\x23-\x2A\x2D-\x3A\x3D\x3F-\x5B\x5D-\x7F]"
+    _UTFMB = rf"[\x80-{chr(sys.maxunicode)}]"
+    _LEADCHAR = rf"{_LUTF1}|{_UTFMB}"
+    _STRINGCHAR = rf"{_SUTF1}|{_UTFMB}"
+    _TRAILCHAR = rf"{_TUTF1}|{_UTFMB}"
+    _STRING_RE = re.compile(
+        rf"""
+        (
+            ({_LEADCHAR}|{_PAIR})
+            (
+                ({_STRINGCHAR}|{_PAIR})*
+                ({_TRAILCHAR}|{_PAIR})
+            )?
+        )?
+        """,
+        re.VERBOSE,
+    )
+    _HEXSTRING_RE = re.compile(r"#([\da-zA-Z]{2})+")
+
+    def __init__(self, data: str, attr_name_overrides: _NameOidMap) -> None:
+        self._data = data
+        self._idx = 0
+
+        self._attr_name_overrides = attr_name_overrides
+
+    def _has_data(self) -> bool:
+        return self._idx < len(self._data)
+
+    def _peek(self) -> typing.Optional[str]:
+        if self._has_data():
+            return self._data[self._idx]
+        return None
+
+    def _read_char(self, ch: str) -> None:
+        if self._peek() != ch:
+            raise ValueError
+        self._idx += 1
+
+    def _read_re(self, pat) -> str:
+        match = pat.match(self._data, pos=self._idx)
+        if match is None:
+            raise ValueError
+        val = match.group()
+        self._idx += len(val)
+        return val
+
+    def parse(self) -> Name:
+        """
+        Parses the `data` string and converts it to a Name.
+
+        According to RFC4514 section 2.1 the RDNSequence must be
+        reversed when converting to string representation. So, when
+        we parse it, we need to reverse again to get the RDNs in the
+        correct order.
+        """
+        rdns = [self._parse_rdn()]
+
+        while self._has_data():
+            self._read_char(",")
+            rdns.append(self._parse_rdn())
+
+        return Name(reversed(rdns))
+
+    def _parse_rdn(self) -> RelativeDistinguishedName:
+        nas = [self._parse_na()]
+        while self._peek() == "+":
+            self._read_char("+")
+            nas.append(self._parse_na())
+
+        return RelativeDistinguishedName(nas)
+
+    def _parse_na(self) -> NameAttribute:
+        try:
+            oid_value = self._read_re(self._OID_RE)
+        except ValueError:
+            name = self._read_re(self._DESCR_RE)
+            oid = self._attr_name_overrides.get(
+                name, _NAME_TO_NAMEOID.get(name)
+            )
+            if oid is None:
+                raise ValueError
+        else:
+            oid = ObjectIdentifier(oid_value)
+
+        self._read_char("=")
+        if self._peek() == "#":
+            value = self._read_re(self._HEXSTRING_RE)
+            value = binascii.unhexlify(value[1:]).decode()
+        else:
+            raw_value = self._read_re(self._STRING_RE)
+            value = _unescape_dn_value(raw_value)
+
+        return NameAttribute(oid, value)
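+
+
+# Illustrative sketch (not part of the upstream module): parsing and
+# re-serializing an RFC 4514 string round-trips, because both operations
+# reverse the RDN order as described above.
+#
+#     >>> Name.from_rfc4514_string("CN=foo,O=Org,C=US").rfc4514_string()
+#     'CN=foo,O=Org,C=US'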
diff --git a/env/lib/python3.8/site-packages/cryptography/x509/ocsp.py b/env/lib/python3.8/site-packages/cryptography/x509/ocsp.py
new file mode 100644
index 00000000..c01e77a8
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cryptography/x509/ocsp.py
@@ -0,0 +1,551 @@
+# This file is dual licensed under the terms of the Apache License, Version
+# 2.0, and the BSD License. See the LICENSE file in the root of this repository
+# for complete details.
+
+
+import abc
+import datetime
+import typing
+
+from cryptography import utils
+from cryptography import x509
+from cryptography.hazmat.bindings._rust import ocsp
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric.types import (
+    CERTIFICATE_PRIVATE_KEY_TYPES,
+)
+from cryptography.x509.base import (
+    _EARLIEST_UTC_TIME,
+    _convert_to_naive_utc_time,
+    _reject_duplicate_extension,
+)
+
+
+class OCSPResponderEncoding(utils.Enum):
+    HASH = "By Hash"
+    NAME = "By Name"
+
+
+class OCSPResponseStatus(utils.Enum):
+    SUCCESSFUL = 0
+    MALFORMED_REQUEST = 1
+    INTERNAL_ERROR = 2
+    TRY_LATER = 3
+    SIG_REQUIRED = 5
+    UNAUTHORIZED = 6
+
+
+_ALLOWED_HASHES = (
+    hashes.SHA1,
+    hashes.SHA224,
+    hashes.SHA256,
+    hashes.SHA384,
+    hashes.SHA512,
+)
+
+
+def _verify_algorithm(algorithm: hashes.HashAlgorithm) -> None:
+    if not isinstance(algorithm, _ALLOWED_HASHES):
+        raise ValueError(
+            "Algorithm must be SHA1, SHA224, SHA256, SHA384, or SHA512"
+        )
+
+
+class OCSPCertStatus(utils.Enum):
+    GOOD = 0
+    REVOKED = 1
+    UNKNOWN = 2
+
+
+class _SingleResponse:
+    def __init__(
+        self,
+        cert: x509.Certificate,
+        issuer: x509.Certificate,
+        algorithm: hashes.HashAlgorithm,
+        cert_status: OCSPCertStatus,
+        this_update: datetime.datetime,
+        next_update: typing.Optional[datetime.datetime],
+        revocation_time: typing.Optional[datetime.datetime],
+        revocation_reason: typing.Optional[x509.ReasonFlags],
+    ):
+        if not isinstance(cert, x509.Certificate) or not isinstance(
+            issuer, x509.Certificate
+        ):
+            raise TypeError("cert and issuer must be a Certificate")
+
+        _verify_algorithm(algorithm)
+        if not isinstance(this_update, datetime.datetime):
+            raise TypeError("this_update must be a datetime object")
+        if next_update is not None and not isinstance(
+            next_update, datetime.datetime
+        ):
+            raise TypeError("next_update must be a datetime object or None")
+
+        self._cert = cert
+        self._issuer = issuer
+        self._algorithm = algorithm
+        self._this_update = this_update
+        self._next_update = next_update
+
+        if not isinstance(cert_status, OCSPCertStatus):
+            raise TypeError(
+                "cert_status must be an item from the OCSPCertStatus enum"
+            )
+        if cert_status is not OCSPCertStatus.REVOKED:
+            if revocation_time is not None:
+                raise ValueError(
+                    "revocation_time can only be provided if the certificate "
+                    "is revoked"
+                )
+            if revocation_reason is not None:
+                raise ValueError(
+                    "revocation_reason can only be provided if the certificate"
+                    " is revoked"
+                )
+        else:
+            if not isinstance(revocation_time, datetime.datetime):
+                raise TypeError("revocation_time must be a datetime object")
+
+            revocation_time = _convert_to_naive_utc_time(revocation_time)
+            if revocation_time < _EARLIEST_UTC_TIME:
+                raise ValueError(
+                    "The revocation_time must be on or after"
+                    " 1950 January 1."
+                )
+
+            if revocation_reason is not None and not isinstance(
+                revocation_reason, x509.ReasonFlags
+            ):
+                raise TypeError(
+                    "revocation_reason must be an item from the ReasonFlags "
+                    "enum or None"
+                )
+
+        self._cert_status = cert_status
+        self._revocation_time = revocation_time
+        self._revocation_reason = revocation_reason
+
+
+class OCSPRequest(metaclass=abc.ABCMeta):
+    @abc.abstractproperty
+    def issuer_key_hash(self) -> bytes:
+        """
+        The hash of the issuer public key
+        """
+
+    @abc.abstractproperty
+    def issuer_name_hash(self) -> bytes:
+        """
+        The hash of the issuer name
+        """
+
+    @abc.abstractproperty
+    def hash_algorithm(self) -> hashes.HashAlgorithm:
+        """
+        The hash algorithm used in the issuer name and key hashes
+        """
+
+    @abc.abstractproperty
+    def serial_number(self) -> int:
+        """
+        The serial number of the cert whose status is being checked
+        """
+
+    @abc.abstractmethod
+    def public_bytes(self, encoding: serialization.Encoding) -> bytes:
+        """
+        Serializes the request to DER
+        """
+
+    @abc.abstractproperty
+    def extensions(self) -> x509.Extensions:
+        """
+        The list of request extensions. Not single request extensions.
+        """
+
+
+class OCSPSingleResponse(metaclass=abc.ABCMeta):
+    @abc.abstractproperty
+    def certificate_status(self) -> OCSPCertStatus:
+        """
+        The status of the certificate (an element from the OCSPCertStatus enum)
+        """
+
+    @abc.abstractproperty
+    def revocation_time(self) -> typing.Optional[datetime.datetime]:
+        """
+        The date of when the certificate was revoked or None if not
+        revoked.
+        """
+
+    @abc.abstractproperty
+    def revocation_reason(self) -> typing.Optional[x509.ReasonFlags]:
+        """
+        The reason the certificate was revoked or None if not specified or
+        not revoked.
+        """
+
+    @abc.abstractproperty
+    def this_update(self) -> datetime.datetime:
+        """
+        The most recent time at which the status being indicated is known by
+        the responder to have been correct
+        """
+
+    @abc.abstractproperty
+    def next_update(self) -> typing.Optional[datetime.datetime]:
+        """
+        The time when newer information will be available
+        """
+
+    @abc.abstractproperty
+    def issuer_key_hash(self) -> bytes:
+        """
+        The hash of the issuer public key
+        """
+
+    @abc.abstractproperty
+    def issuer_name_hash(self) -> bytes:
+        """
+        The hash of the issuer name
+        """
+
+    @abc.abstractproperty
+    def hash_algorithm(self) -> hashes.HashAlgorithm:
+        """
+        The hash algorithm used in the issuer name and key hashes
+        """
+
+    @abc.abstractproperty
+    def serial_number(self) -> int:
+        """
+        The serial number of the cert whose status is being checked
+        """
+
+
+class OCSPResponse(metaclass=abc.ABCMeta):
+    @abc.abstractproperty
+    def responses(self) -> typing.Iterator[OCSPSingleResponse]:
+        """
+        An iterator over the individual SINGLERESP structures in the
+        response
+        """
This is a value from the OCSPResponseStatus + enumeration + """ + + @abc.abstractproperty + def signature_algorithm_oid(self) -> x509.ObjectIdentifier: + """ + The ObjectIdentifier of the signature algorithm + """ + + @abc.abstractproperty + def signature_hash_algorithm( + self, + ) -> typing.Optional[hashes.HashAlgorithm]: + """ + Returns a HashAlgorithm corresponding to the type of the digest signed + """ + + @abc.abstractproperty + def signature(self) -> bytes: + """ + The signature bytes + """ + + @abc.abstractproperty + def tbs_response_bytes(self) -> bytes: + """ + The tbsResponseData bytes + """ + + @abc.abstractproperty + def certificates(self) -> typing.List[x509.Certificate]: + """ + A list of certificates used to help build a chain to verify the OCSP + response. This situation occurs when the OCSP responder uses a delegate + certificate. + """ + + @abc.abstractproperty + def responder_key_hash(self) -> typing.Optional[bytes]: + """ + The responder's key hash or None + """ + + @abc.abstractproperty + def responder_name(self) -> typing.Optional[x509.Name]: + """ + The responder's Name or None + """ + + @abc.abstractproperty + def produced_at(self) -> datetime.datetime: + """ + The time the response was produced + """ + + @abc.abstractproperty + def certificate_status(self) -> OCSPCertStatus: + """ + The status of the certificate (an element from the OCSPCertStatus enum) + """ + + @abc.abstractproperty + def revocation_time(self) -> typing.Optional[datetime.datetime]: + """ + The date of when the certificate was revoked or None if not + revoked. + """ + + @abc.abstractproperty + def revocation_reason(self) -> typing.Optional[x509.ReasonFlags]: + """ + The reason the certificate was revoked or None if not specified or + not revoked. + """ + + @abc.abstractproperty + def this_update(self) -> datetime.datetime: + """ + The most recent time at which the status being indicated is known by + the responder to have been correct + """ + + @abc.abstractproperty + def next_update(self) -> typing.Optional[datetime.datetime]: + """ + The time when newer information will be available + """ + + @abc.abstractproperty + def issuer_key_hash(self) -> bytes: + """ + The hash of the issuer public key + """ + + @abc.abstractproperty + def issuer_name_hash(self) -> bytes: + """ + The hash of the issuer name + """ + + @abc.abstractproperty + def hash_algorithm(self) -> hashes.HashAlgorithm: + """ + The hash algorithm used in the issuer name and key hashes + """ + + @abc.abstractproperty + def serial_number(self) -> int: + """ + The serial number of the cert whose status is being checked + """ + + @abc.abstractproperty + def extensions(self) -> x509.Extensions: + """ + The list of response extensions. Not single response extensions. + """ + + @abc.abstractproperty + def single_extensions(self) -> x509.Extensions: + """ + The list of single response extensions. Not response extensions. 
+ """ + + @abc.abstractmethod + def public_bytes(self, encoding: serialization.Encoding) -> bytes: + """ + Serializes the response to DER + """ + + +class OCSPRequestBuilder: + def __init__( + self, + request: typing.Optional[ + typing.Tuple[ + x509.Certificate, x509.Certificate, hashes.HashAlgorithm + ] + ] = None, + extensions: typing.List[x509.Extension[x509.ExtensionType]] = [], + ) -> None: + self._request = request + self._extensions = extensions + + def add_certificate( + self, + cert: x509.Certificate, + issuer: x509.Certificate, + algorithm: hashes.HashAlgorithm, + ) -> "OCSPRequestBuilder": + if self._request is not None: + raise ValueError("Only one certificate can be added to a request") + + _verify_algorithm(algorithm) + if not isinstance(cert, x509.Certificate) or not isinstance( + issuer, x509.Certificate + ): + raise TypeError("cert and issuer must be a Certificate") + + return OCSPRequestBuilder((cert, issuer, algorithm), self._extensions) + + def add_extension( + self, extval: x509.ExtensionType, critical: bool + ) -> "OCSPRequestBuilder": + if not isinstance(extval, x509.ExtensionType): + raise TypeError("extension must be an ExtensionType") + + extension = x509.Extension(extval.oid, critical, extval) + _reject_duplicate_extension(extension, self._extensions) + + return OCSPRequestBuilder( + self._request, self._extensions + [extension] + ) + + def build(self) -> OCSPRequest: + if self._request is None: + raise ValueError("You must add a certificate before building") + + return ocsp.create_ocsp_request(self) + + +class OCSPResponseBuilder: + def __init__( + self, + response: typing.Optional[_SingleResponse] = None, + responder_id: typing.Optional[ + typing.Tuple[x509.Certificate, OCSPResponderEncoding] + ] = None, + certs: typing.Optional[typing.List[x509.Certificate]] = None, + extensions: typing.List[x509.Extension[x509.ExtensionType]] = [], + ): + self._response = response + self._responder_id = responder_id + self._certs = certs + self._extensions = extensions + + def add_response( + self, + cert: x509.Certificate, + issuer: x509.Certificate, + algorithm: hashes.HashAlgorithm, + cert_status: OCSPCertStatus, + this_update: datetime.datetime, + next_update: typing.Optional[datetime.datetime], + revocation_time: typing.Optional[datetime.datetime], + revocation_reason: typing.Optional[x509.ReasonFlags], + ) -> "OCSPResponseBuilder": + if self._response is not None: + raise ValueError("Only one response per OCSPResponse.") + + singleresp = _SingleResponse( + cert, + issuer, + algorithm, + cert_status, + this_update, + next_update, + revocation_time, + revocation_reason, + ) + return OCSPResponseBuilder( + singleresp, + self._responder_id, + self._certs, + self._extensions, + ) + + def responder_id( + self, encoding: OCSPResponderEncoding, responder_cert: x509.Certificate + ) -> "OCSPResponseBuilder": + if self._responder_id is not None: + raise ValueError("responder_id can only be set once") + if not isinstance(responder_cert, x509.Certificate): + raise TypeError("responder_cert must be a Certificate") + if not isinstance(encoding, OCSPResponderEncoding): + raise TypeError( + "encoding must be an element from OCSPResponderEncoding" + ) + + return OCSPResponseBuilder( + self._response, + (responder_cert, encoding), + self._certs, + self._extensions, + ) + + def certificates( + self, certs: typing.Iterable[x509.Certificate] + ) -> "OCSPResponseBuilder": + if self._certs is not None: + raise ValueError("certificates may only be set once") + certs = list(certs) + if 
len(certs) == 0: + raise ValueError("certs must not be an empty list") + if not all(isinstance(x, x509.Certificate) for x in certs): + raise TypeError("certs must be a list of Certificates") + return OCSPResponseBuilder( + self._response, + self._responder_id, + certs, + self._extensions, + ) + + def add_extension( + self, extval: x509.ExtensionType, critical: bool + ) -> "OCSPResponseBuilder": + if not isinstance(extval, x509.ExtensionType): + raise TypeError("extension must be an ExtensionType") + + extension = x509.Extension(extval.oid, critical, extval) + _reject_duplicate_extension(extension, self._extensions) + + return OCSPResponseBuilder( + self._response, + self._responder_id, + self._certs, + self._extensions + [extension], + ) + + def sign( + self, + private_key: CERTIFICATE_PRIVATE_KEY_TYPES, + algorithm: typing.Optional[hashes.HashAlgorithm], + ) -> OCSPResponse: + if self._response is None: + raise ValueError("You must add a response before signing") + if self._responder_id is None: + raise ValueError("You must add a responder_id before signing") + + return ocsp.create_ocsp_response( + OCSPResponseStatus.SUCCESSFUL, self, private_key, algorithm + ) + + @classmethod + def build_unsuccessful( + cls, response_status: OCSPResponseStatus + ) -> OCSPResponse: + if not isinstance(response_status, OCSPResponseStatus): + raise TypeError( + "response_status must be an item from OCSPResponseStatus" + ) + if response_status is OCSPResponseStatus.SUCCESSFUL: + raise ValueError("response_status cannot be SUCCESSFUL") + + return ocsp.create_ocsp_response(response_status, None, None, None) + + +def load_der_ocsp_request(data: bytes) -> OCSPRequest: + return ocsp.load_der_ocsp_request(data) + + +def load_der_ocsp_response(data: bytes) -> OCSPResponse: + return ocsp.load_der_ocsp_response(data) diff --git a/env/lib/python3.8/site-packages/cryptography/x509/oid.py b/env/lib/python3.8/site-packages/cryptography/x509/oid.py new file mode 100644 index 00000000..9bfac75a --- /dev/null +++ b/env/lib/python3.8/site-packages/cryptography/x509/oid.py @@ -0,0 +1,32 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ +from cryptography.hazmat._oid import ( + AttributeOID, + AuthorityInformationAccessOID, + CRLEntryExtensionOID, + CertificatePoliciesOID, + ExtendedKeyUsageOID, + ExtensionOID, + NameOID, + OCSPExtensionOID, + ObjectIdentifier, + SignatureAlgorithmOID, + SubjectInformationAccessOID, +) + + +__all__ = [ + "AttributeOID", + "AuthorityInformationAccessOID", + "CRLEntryExtensionOID", + "CertificatePoliciesOID", + "ExtendedKeyUsageOID", + "ExtensionOID", + "NameOID", + "OCSPExtensionOID", + "ObjectIdentifier", + "SignatureAlgorithmOID", + "SubjectInformationAccessOID", +] diff --git a/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/AUTHORS.md b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/AUTHORS.md new file mode 100644 index 00000000..15cd4f8e --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/AUTHORS.md @@ -0,0 +1,13 @@ +Erik Welch [@eriknw](https://github.com/eriknw/) + +[Matthew Rocklin](http://matthewrocklin.com) [@mrocklin](http://github.com/mrocklin/) + +Lars Buitinck [@larsmans](http://github.com/larsmans) + +[Thouis (Ray) Jones](http://people.seas.harvard.edu/~thouis) [@thouis](https://github.com/thouis/) + +scoder [@scoder](https://github.com/scoder/) + +Phillip Cloud [@cpcloud](https://github.com/cpcloud) + +Joe Jevnik [@llllllllll](https://github.com/llllllllll) diff --git a/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/LICENSE.txt b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/LICENSE.txt new file mode 100644 index 00000000..7eee84e6 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/LICENSE.txt @@ -0,0 +1,28 @@ +Copyright (c) 2014-2021 Erik Welch + +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + a. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + b. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + c. Neither the name of cytoolz nor the names of its contributors + may be used to endorse or promote products derived from this software + without specific prior written permission. + + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH +DAMAGE. 
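
For orientation, here is a minimal sketch of how the request-builder API from the `ocsp.py` module above is typically driven. The `cert` and `issuer` variables are assumptions: they stand for `x509.Certificate` objects loaded elsewhere (e.g. via `x509.load_pem_x509_certificate`). Only one certificate may be added per request, and `build()` requires one.

```python
# Minimal sketch of the OCSPRequestBuilder flow defined above.
# `cert` and `issuer` are assumed to be x509.Certificate objects
# loaded elsewhere (hypothetical placeholders).
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

builder = ocsp.OCSPRequestBuilder()
# Each builder method returns a new builder instance.
builder = builder.add_certificate(cert, issuer, hashes.SHA1())
request = builder.build()
der_bytes = request.public_bytes(serialization.Encoding.DER)
```
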
diff --git a/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/METADATA b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/METADATA
new file mode 100644
index 00000000..050d54bc
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/METADATA
@@ -0,0 +1,119 @@
+Metadata-Version: 2.1
+Name: cytoolz
+Version: 0.12.0
+Summary: Cython implementation of Toolz: High performance functional utilities
+Home-page: https://github.com/pytoolz/cytoolz
+Author: https://raw.github.com/pytoolz/cytoolz/master/AUTHORS.md
+Author-email: erik.n.welch@gmail.com
+Maintainer: Erik Welch
+Maintainer-email: erik.n.welch@gmail.com
+License: BSD
+Keywords: functional utility itertools functools iterator generator curry memoize lazy streaming bigdata cython toolz cytoolz
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Intended Audience :: Education
+Classifier: Intended Audience :: Science/Research
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Cython
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.5
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Classifier: Programming Language :: Python :: Implementation :: PyPy
+Classifier: Topic :: Scientific/Engineering
+Classifier: Topic :: Scientific/Engineering :: Information Analysis
+Classifier: Topic :: Software Development
+Classifier: Topic :: Software Development :: Libraries
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: Utilities
+Requires-Python: >=3.5
+License-File: LICENSE.txt
+License-File: AUTHORS.md
+Requires-Dist: toolz (>=0.8.0)
+Provides-Extra: cython
+Requires-Dist: cython ; extra == 'cython'
+
+CyToolz
+=======
+
+|Build Status| |Version Status|
+
+Cython implementation of the
+|literal toolz|_ `package `__, which
+provides high performance utility functions for iterables, functions,
+and dictionaries.
+
+.. |literal toolz| replace:: ``toolz``
+.. _literal toolz: https://github.com/pytoolz/toolz
+
+``toolz`` is a pure Python package that borrows heavily from contemporary
+functional languages. It is designed to interoperate seamlessly with other
+libraries including ``itertools``, ``functools``, and third party libraries.
+High performance functional data analysis is possible with builtin types
+like ``list`` and ``dict``, and user-defined data structures; and low memory
+usage is achieved by using the iterator protocol and returning iterators
+whenever possible.
+
+``cytoolz`` implements the same API as ``toolz``. The main differences are
+that ``cytoolz`` is faster (typically 2-5x faster with a few spectacular
+exceptions) and ``cytoolz`` offers a C API that is accessible to other
+projects developed in Cython. Since ``toolz`` is able to process very
+large (potentially infinite) data sets, the performance increase gained by
+using ``cytoolz`` can be significant.
+
+See the PyToolz documentation at https://toolz.readthedocs.io and the full
+`API Documentation `__
+for more details.
+
+LICENSE
+-------
+
+New BSD. See `License File `__.
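+
+Example
+-------
+
+A minimal illustration of the shared ``toolz``/``cytoolz`` API (``cytoolz``
+is a drop-in replacement, so the same call works with
+``from toolz import groupby``)::
+
+    >>> from cytoolz import groupby
+    >>> groupby(len, ['cat', 'dog', 'mouse'])
+    {3: ['cat', 'dog'], 5: ['mouse']}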
+
+
+Install
+-------
+
+``cytoolz`` is on the Python Package Index (PyPI):
+
+::
+
+    pip install cytoolz
+
+Dependencies
+------------
+
+``cytoolz`` supports Python 3.5+ with a common codebase.
+It is developed in Cython, but requires no dependencies other than CPython
+and a C compiler. Like ``toolz``, it is a lightweight dependency.
+
+Contributions Welcome
+---------------------
+
+``toolz`` (and ``cytoolz``) aims to be a repository for utility functions,
+particularly those that come from the functional programming and list
+processing traditions. We welcome contributions that fall within this scope
+and encourage users to scrape their ``util.py`` files for functions that are
+broadly useful.
+
+Please take a look at our issue pages for
+`toolz `__ and
+`cytoolz `__
+for contribution ideas.
+
+Community
+---------
+
+See our `mailing list `__.
+We're friendly.
+
+.. |Build Status| image:: https://travis-ci.org/pytoolz/cytoolz.svg?branch=master
+   :target: https://travis-ci.org/pytoolz/cytoolz
+.. |Version Status| image:: https://badge.fury.io/py/cytoolz.svg
+   :target: http://badge.fury.io/py/cytoolz
diff --git a/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/RECORD b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/RECORD
new file mode 100644
index 00000000..ec1cc305
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/RECORD
@@ -0,0 +1,74 @@
+cytoolz-0.12.0.dist-info/AUTHORS.md,sha256=Kx3ozbiUw_ptRrzKqNBFzNMYYZVKL2wcwzQ7M_PC4FU,635
+cytoolz-0.12.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+cytoolz-0.12.0.dist-info/LICENSE.txt,sha256=s-TP3GpQHCyAiYUwMQPzgT7JurqONS6pzr7cM00ybHU,1494
+cytoolz-0.12.0.dist-info/METADATA,sha256=XNHVrUsT2v8e_voHXCLyVEqOV6iMf2-h2sDlq4lWGiE,4539
+cytoolz-0.12.0.dist-info/RECORD,,
+cytoolz-0.12.0.dist-info/WHEEL,sha256=-ijGDuALlPxm3HbhKntps0QzHsi-DPlXqgerYTTJkFE,148
+cytoolz-0.12.0.dist-info/top_level.txt,sha256=-Vu2k6UHqAaFZ_x4-YH9irvOPFZwXqhiFngLWb9_Kl0,8
+cytoolz/__init__.pxd,sha256=aNLxQn7hhDu2T1VTmtIwEVaDz6w4W585NMRLLl1icEE,742
+cytoolz/__init__.py,sha256=-rfuQorT_Zwv-R2m94nXNtZubrkIbJ-FvKHsiyeoCzY,589
+cytoolz/__pycache__/__init__.cpython-38.pyc,,
+cytoolz/__pycache__/_signatures.cpython-38.pyc,,
+cytoolz/__pycache__/_version.cpython-38.pyc,,
+cytoolz/__pycache__/compatibility.cpython-38.pyc,,
+cytoolz/_signatures.py,sha256=UpqRVGeM6x-vEx9cThUK7zOweRAhAetAwuZxICIi_UY,4356
+cytoolz/_version.py,sha256=myRQlP4ve1gF0nxbYns98H18z7J3Ag6cR1xT40vsTzI,498
+cytoolz/compatibility.py,sha256=giOYcwv1TaOWDfB-C2JP2pFIJ5YZX9aP1s4UPzCQnw4,997
+cytoolz/cpython.pxd,sha256=etdpUVoXUozbumX7IGb2dnZFV-Q_VtXsTRxdRJb7hdw,497
+cytoolz/curried/__init__.py,sha256=o-zsXhKEsigezaTqM4a-Ic3ZCsQzxqfUnTzOi9gvmhU,2884
+cytoolz/curried/__pycache__/__init__.cpython-38.pyc,,
+cytoolz/curried/__pycache__/exceptions.cpython-38.pyc,,
+cytoolz/curried/__pycache__/operator.cpython-38.pyc,,
+cytoolz/curried/exceptions.py,sha256=w1v9TGEi6M2YyZQqBGuo_Zw2lkcq8zIikpgSR1O7RI8,350
+cytoolz/curried/operator.py,sha256=kVpn_xnQNKN5N3XDhumzGCI2cDqkKtm5ZMZya9PDknI,521
+cytoolz/dicttoolz.cpython-38-x86_64-linux-gnu.so,sha256=nGYKPOtoTzeNLWiluduqrcV3MzSw84WdmD4LYnjiKMs,975552
+cytoolz/dicttoolz.pxd,sha256=sSVGV5w6YOJ_7mKmMxUAdtB3t6jejvqlqJx6SVPuD7M,1368
+cytoolz/dicttoolz.pyx,sha256=qP11TWBTby_8ahWaEkXAhrO-57Ic1-PzmWusbvdfCYw,15431
+cytoolz/functoolz.cpython-38-x86_64-linux-gnu.so,sha256=Zq3C4QBuEAuy9TBniPSgVWod2ClXgyoHti3H1Ld3tgw,1902808
+cytoolz/functoolz.pxd,sha256=eAlD9ukpfwo-ZOrM8E5enKiAs5-ZOKEFf4EsstVTPKU,1210
+cytoolz/functoolz.pyx,sha256=1MVQCOfYWdlvN2Fc-f8sBUHEZXHPkRiJXad0NFECRrw,25000 +cytoolz/itertoolz.cpython-38-x86_64-linux-gnu.so,sha256=znMpkUeiGkpHepOL-otXHSgUFl9DnCCljVRKNn8hHy0,3050864 +cytoolz/itertoolz.pxd,sha256=T_RBzsTt4NKSAhYP5fErl_wwyu6kObKbP1CiBG7TfJo,4695 +cytoolz/itertoolz.pyx,sha256=dOty1BCa7fz7HAmmX6sOhQt_0ngqH0yaIWzPd3v-IBY,51360 +cytoolz/recipes.cpython-38-x86_64-linux-gnu.so,sha256=WC-yS8qrOoynMxfOQcAgSep4j47L3aV-mZCwFg4PoEg,281232 +cytoolz/recipes.pxd,sha256=hW0TezjEMRNZW1hzTQAhdKQutoIgk0twcJ4fKESXY6g,100 +cytoolz/recipes.pyx,sha256=gP6w_U4Ejd75FEF2tQX_w7YPnscBi_SWi4BTBW7JDvo,1600 +cytoolz/tests/__pycache__/dev_skip_test.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_compatibility.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_curried.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_curried_toolzlike.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_dev_skip_test.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_dicttoolz.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_docstrings.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_doctests.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_embedded_sigs.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_functoolz.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_inspect_args.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_itertoolz.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_none_safe.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_recipes.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_serialization.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_signatures.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_tlz.cpython-38.pyc,, +cytoolz/tests/__pycache__/test_utils.cpython-38.pyc,, +cytoolz/tests/dev_skip_test.py,sha256=K9hiMFsJYi4p-EUN-wDGW2OmWHvTkuV66FYA8eWHdxQ,935 +cytoolz/tests/test_compatibility.py,sha256=ZwADjcMF1Wx-5Qg6eoUxvAxJ4GJ7A12TH0347RfqAa4,265 +cytoolz/tests/test_curried.py,sha256=hFzAcFSUrDy4GHOVQ1ffuQ9T3KpkW6Oz45V31hlb_ng,3782 +cytoolz/tests/test_curried_toolzlike.py,sha256=m4Ii86p2x5K9mUz6sr8FD2_LrCnpkydLlLLDx_JC5g4,1399 +cytoolz/tests/test_dev_skip_test.py,sha256=IWK2fnpV8h5yNWAsdmKIcAFYHDYsM3ET7DdyXw5jsdk,380 +cytoolz/tests/test_dicttoolz.py,sha256=c91Xj89NsgETulHFKytL9p0vfo8Y4-sfrTKJd7EAQso,9080 +cytoolz/tests/test_docstrings.py,sha256=wXPfOxUUH3rsg8nVAKuFR_iCqNiZz2_XwyYmYLaDlDw,3034 +cytoolz/tests/test_doctests.py,sha256=hy_r_ooPwGOc7IRz8iymMG1WhqW6pHV4dVJTyVHly-Y,465 +cytoolz/tests/test_embedded_sigs.py,sha256=JpYW3jZqDKvzujER0alKTSDXqVsIx0qCIVpJD1lvPzY,3788 +cytoolz/tests/test_functoolz.py,sha256=_UfIySZsh8cYGoQi05WqTUQ5xMj_XeGqa3pkHRLJAxY,20217 +cytoolz/tests/test_inspect_args.py,sha256=bzMTE4sH8w8bDqHGOSqyJ8kb-h3en52CELSypH_jPY4,15994 +cytoolz/tests/test_itertoolz.py,sha256=UmAX8oFobqvd6uftIuXXMnseM2tOFKlqCHlZwoo7xZM,18189 +cytoolz/tests/test_none_safe.py,sha256=ZZTnrSnnjSc_78qyjKY9kfHJwHSmGHejczoAh0DAtfs,12222 +cytoolz/tests/test_recipes.py,sha256=N---bJ9H9WYHbV3DreCSDKJAdEZIb1C55Bzi7Otzwnc,822 +cytoolz/tests/test_serialization.py,sha256=5dFP4fRPzTjW2T7Wb_p6eux0X-YB-boAEvySvY-p19E,5825 +cytoolz/tests/test_signatures.py,sha256=xfyFOcLyBIrle6DX_qfR1DfWitAKxzYcU9tr0UFeCEk,2877 +cytoolz/tests/test_tlz.py,sha256=sRPLZQK7h7kQGKM3GlxQcN7lGk1wyps565oNYgub2tk,1486 +cytoolz/tests/test_utils.py,sha256=iOvAXl9gUYSLdrjrz9oZdx2jl9uPxA9uDXezma6fYx4,385 +cytoolz/utils.cpython-38-x86_64-linux-gnu.so,sha256=4qFGPptQquoF87rXxxTHG75ozS9GwGmpAeeposjHbfQ,269432 +cytoolz/utils.pxd,sha256=dEfMVFagOaTBMb7-nzGVEG3ZuQSYz4ea91KtdO3XUeg,33 +cytoolz/utils.pyx,sha256=Q2F_yd1wCNBuUfXX6CH5_nezMrFbU2d60QJCE6w9WXg,1353 diff 
--git a/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/WHEEL new file mode 100644 index 00000000..3a48d348 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: false +Tag: cp38-cp38-manylinux_2_17_x86_64 +Tag: cp38-cp38-manylinux2014_x86_64 + diff --git a/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/top_level.txt b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/top_level.txt new file mode 100644 index 00000000..60cebe31 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz-0.12.0.dist-info/top_level.txt @@ -0,0 +1 @@ +cytoolz diff --git a/env/lib/python3.8/site-packages/cytoolz/__init__.pxd b/env/lib/python3.8/site-packages/cytoolz/__init__.pxd new file mode 100644 index 00000000..6b5c0986 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/__init__.pxd @@ -0,0 +1,19 @@ +from cytoolz.itertoolz cimport ( + accumulate, c_merge_sorted, cons, count, drop, get, groupby, first, + frequencies, interleave, interpose, isdistinct, isiterable, iterate, + last, mapcat, nth, partition, partition_all, pluck, reduceby, remove, + rest, second, sliding_window, take, tail, take_nth, unique, join, + c_diff, topk, peek, random_sample, concat) + + +from cytoolz.functoolz cimport ( + c_compose, memoize, c_pipe, c_thread_first, c_thread_last, + complement, curry, do, identity, excepts, flip) + + +from cytoolz.dicttoolz cimport ( + assoc, c_merge, c_merge_with, c_dissoc, get_in, keyfilter, keymap, + itemfilter, itemmap, update_in, valfilter, valmap, assoc_in) + + +from cytoolz.recipes cimport countby, partitionby diff --git a/env/lib/python3.8/site-packages/cytoolz/__init__.py b/env/lib/python3.8/site-packages/cytoolz/__init__.py new file mode 100644 index 00000000..a45c1654 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/__init__.py @@ -0,0 +1,32 @@ +from .itertoolz import * + +from .functoolz import * + +from .dicttoolz import * + +from .recipes import * + +from functools import partial, reduce + +sorted = sorted +map = map +filter = filter + +# Aliases +comp = compose + +# Always-curried functions +flip = functoolz.flip = curry(functoolz.flip) +memoize = functoolz.memoize = curry(functoolz.memoize) + +from . import curried # sandbox + +functoolz._sigs.update_signature_registry() + +# What version of toolz does cytoolz implement? 
+__toolz_version__ = '0.12.0' + +from ._version import get_versions + +__version__ = get_versions()['version'] +del get_versions diff --git a/env/lib/python3.8/site-packages/cytoolz/_signatures.py b/env/lib/python3.8/site-packages/cytoolz/_signatures.py new file mode 100644 index 00000000..aded849f --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/_signatures.py @@ -0,0 +1,170 @@ +from toolz._signatures import * +from toolz._signatures import (_is_arity, _has_varargs, _has_keywords, + _num_required_args, _is_partial_args, _is_valid_args) + +cytoolz_info = {} + +cytoolz_info['cytoolz.dicttoolz'] = dict( + assoc=[ + lambda d, key, value, factory=dict: None], + assoc_in=[ + lambda d, keys, value, factory=dict: None], + dissoc=[ + lambda d, *keys, **kwargs: None], + get_in=[ + lambda keys, coll, default=None, no_default=False: None], + itemfilter=[ + lambda predicate, d, factory=dict: None], + itemmap=[ + lambda func, d, factory=dict: None], + keyfilter=[ + lambda predicate, d, factory=dict: None], + keymap=[ + lambda func, d, factory=dict: None], + merge=[ + lambda *dicts, **kwargs: None], + merge_with=[ + lambda func, *dicts, **kwargs: None], + update_in=[ + lambda d, keys, func, default=None, factory=dict: None], + valfilter=[ + lambda predicate, d, factory=dict: None], + valmap=[ + lambda func, d, factory=dict: None], +) + +cytoolz_info['cytoolz.functoolz'] = dict( + apply=[ + lambda *func_and_args, **kwargs: None], + Compose=[ + lambda *funcs: None], + complement=[ + lambda func: None], + compose=[ + lambda *funcs: None], + compose_left=[ + lambda *funcs: None], + curry=[ + lambda *args, **kwargs: None], + do=[ + lambda func, x: None], + excepts=[ + lambda exc, func, handler=None: None], + flip=[ + lambda: None, + lambda func: None, + lambda func, a: None, + lambda func, a, b: None], + _flip=[ + lambda func, a, b: None], + identity=[ + lambda x: None], + juxt=[ + lambda *funcs: None], + memoize=[ + lambda cache=None, key=None: None, + lambda func, cache=None, key=None: None], + _memoize=[ + lambda func, cache=None, key=None: None], + pipe=[ + lambda data, *funcs: None], + return_none=[ + lambda exc: None], + thread_first=[ + lambda val, *forms: None], + thread_last=[ + lambda val, *forms: None], +) + +cytoolz_info['cytoolz.itertoolz'] = dict( + accumulate=[ + lambda binop, seq, initial='__no__default__': None], + concat=[ + lambda seqs: None], + concatv=[ + lambda *seqs: None], + cons=[ + lambda el, seq: None], + count=[ + lambda seq: None], + diff=[ + lambda *seqs, **kwargs: None], + drop=[ + lambda n, seq: None], + first=[ + lambda seq: None], + frequencies=[ + lambda seq: None], + get=[ + lambda ind, seq, default=None: None], + getter=[ + lambda index: None], + groupby=[ + lambda key, seq: None], + identity=[ + lambda x: None], + interleave=[ + lambda seqs: None], + interpose=[ + lambda el, seq: None], + isdistinct=[ + lambda seq: None], + isiterable=[ + lambda x: None], + iterate=[ + lambda func, x: None], + join=[ + lambda leftkey, leftseq, rightkey, rightseq, left_default=None, right_default=None: None], + last=[ + lambda seq: None], + mapcat=[ + lambda func, seqs: None], + merge_sorted=[ + lambda *seqs, **kwargs: None], + nth=[ + lambda n, seq: None], + partition=[ + lambda n, seq, pad=None: None], + partition_all=[ + lambda n, seq: None], + peek=[ + lambda seq: None], + peekn=[ + lambda n, seq: None], + pluck=[ + lambda ind, seqs, default=None: None], + random_sample=[ + lambda prob, seq, random_state=None: None], + reduceby=[ + lambda key, binop, seq, init=None: None], 
+ remove=[ + lambda predicate, seq: None], + rest=[ + lambda seq: None], + second=[ + lambda seq: None], + sliding_window=[ + lambda n, seq: None], + tail=[ + lambda n, seq: None], + take=[ + lambda n, seq: None], + take_nth=[ + lambda n, seq: None], + topk=[ + lambda k, seq, key=None: None], + unique=[ + lambda seq, key=None: None], +) + +cytoolz_info['cytoolz.recipes'] = dict( + countby=[ + lambda key, seq: None], + partitionby=[ + lambda func, seq: None], +) + +def update_signature_registry(): + create_signature_registry(cytoolz_info) + module_info.update(cytoolz_info) + diff --git a/env/lib/python3.8/site-packages/cytoolz/_version.py b/env/lib/python3.8/site-packages/cytoolz/_version.py new file mode 100644 index 00000000..21bb2b9a --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/_version.py @@ -0,0 +1,21 @@ + +# This file was generated by 'versioneer.py' (0.22) from +# revision-control system data, or from the parent directory name of an +# unpacked source archive. Distribution tarballs contain a pre-generated copy +# of this file. + +import json + +version_json = ''' +{ + "date": "2022-07-10T19:30:15-0500", + "dirty": false, + "error": null, + "full-revisionid": "812a1ab8fb64109f5cb185cc2f1d39dab8e196fc", + "version": "0.12.0" +} +''' # END VERSION_JSON + + +def get_versions(): + return json.loads(version_json) diff --git a/env/lib/python3.8/site-packages/cytoolz/compatibility.py b/env/lib/python3.8/site-packages/cytoolz/compatibility.py new file mode 100644 index 00000000..28bef91d --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/compatibility.py @@ -0,0 +1,30 @@ +import warnings +warnings.warn("The toolz.compatibility module is no longer " + "needed in Python 3 and has been deprecated. Please " + "import these utilities directly from the standard library. " + "This module will be removed in a future release.", + category=DeprecationWarning, stacklevel=2) + +import operator +import sys + +PY3 = sys.version_info[0] > 2 +PY34 = sys.version_info[0] == 3 and sys.version_info[1] == 4 +PYPY = hasattr(sys, 'pypy_version_info') and PY3 + +__all__ = ('map', 'filter', 'range', 'zip', 'reduce', 'zip_longest', + 'iteritems', 'iterkeys', 'itervalues', 'filterfalse', + 'PY3', 'PY34', 'PYPY') + + +map = map +filter = filter +range = range +zip = zip +from functools import reduce +from itertools import zip_longest +from itertools import filterfalse +iteritems = operator.methodcaller('items') +iterkeys = operator.methodcaller('keys') +itervalues = operator.methodcaller('values') +from collections.abc import Sequence diff --git a/env/lib/python3.8/site-packages/cytoolz/cpython.pxd b/env/lib/python3.8/site-packages/cytoolz/cpython.pxd new file mode 100644 index 00000000..d7df24bb --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/cpython.pxd @@ -0,0 +1,12 @@ +""" Additional bindings to Python's C-API. + +These differ from Cython's bindings in ``cpython``. 
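+
+In particular, these declarations return (or accept) raw ``PyObject*``
+pointers rather than ``object``, so callers can test for NULL directly;
+for example, ``PtrIter_Next`` signals an exhausted iterator by returning
+NULL instead of raising StopIteration.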
+""" +from cpython.ref cimport PyObject + +cdef extern from "Python.h": + PyObject* PtrIter_Next "PyIter_Next"(object o) + PyObject* PtrObject_Call "PyObject_Call"(object callable_object, object args, object kw) + PyObject* PtrObject_GetItem "PyObject_GetItem"(object o, object key) + int PyDict_Next_Compat "PyDict_Next"(object p, Py_ssize_t *ppos, PyObject* *pkey, PyObject* *pvalue) except -1 + diff --git a/env/lib/python3.8/site-packages/cytoolz/curried/__init__.py b/env/lib/python3.8/site-packages/cytoolz/curried/__init__.py new file mode 100644 index 00000000..87fa7f23 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/curried/__init__.py @@ -0,0 +1,103 @@ +""" +Alternate namespace for cytoolz such that all functions are curried + +Currying provides implicit partial evaluation of all functions + +Example: + + Get usually requires two arguments, an index and a collection + >>> from cytoolz.curried import get + >>> get(0, ('a', 'b')) + 'a' + + When we use it in higher order functions we often want to pass a partially + evaluated form + >>> data = [(1, 2), (11, 22), (111, 222)] + >>> list(map(lambda seq: get(0, seq), data)) + [1, 11, 111] + + The curried version allows simple expression of partial evaluation + >>> list(map(get(0), data)) + [1, 11, 111] + +See Also: + cytoolz.functoolz.curry +""" +import cytoolz +from . import operator +from cytoolz import ( + apply, + comp, + complement, + compose, + compose_left, + concat, + concatv, + count, + curry, + diff, + first, + flip, + frequencies, + identity, + interleave, + isdistinct, + isiterable, + juxt, + last, + memoize, + merge_sorted, + peek, + pipe, + second, + thread_first, + thread_last, +) +from .exceptions import merge, merge_with + +accumulate = cytoolz.curry(cytoolz.accumulate) +assoc = cytoolz.curry(cytoolz.assoc) +assoc_in = cytoolz.curry(cytoolz.assoc_in) +cons = cytoolz.curry(cytoolz.cons) +countby = cytoolz.curry(cytoolz.countby) +dissoc = cytoolz.curry(cytoolz.dissoc) +do = cytoolz.curry(cytoolz.do) +drop = cytoolz.curry(cytoolz.drop) +excepts = cytoolz.curry(cytoolz.excepts) +filter = cytoolz.curry(cytoolz.filter) +get = cytoolz.curry(cytoolz.get) +get_in = cytoolz.curry(cytoolz.get_in) +groupby = cytoolz.curry(cytoolz.groupby) +interpose = cytoolz.curry(cytoolz.interpose) +itemfilter = cytoolz.curry(cytoolz.itemfilter) +itemmap = cytoolz.curry(cytoolz.itemmap) +iterate = cytoolz.curry(cytoolz.iterate) +join = cytoolz.curry(cytoolz.join) +keyfilter = cytoolz.curry(cytoolz.keyfilter) +keymap = cytoolz.curry(cytoolz.keymap) +map = cytoolz.curry(cytoolz.map) +mapcat = cytoolz.curry(cytoolz.mapcat) +nth = cytoolz.curry(cytoolz.nth) +partial = cytoolz.curry(cytoolz.partial) +partition = cytoolz.curry(cytoolz.partition) +partition_all = cytoolz.curry(cytoolz.partition_all) +partitionby = cytoolz.curry(cytoolz.partitionby) +peekn = cytoolz.curry(cytoolz.peekn) +pluck = cytoolz.curry(cytoolz.pluck) +random_sample = cytoolz.curry(cytoolz.random_sample) +reduce = cytoolz.curry(cytoolz.reduce) +reduceby = cytoolz.curry(cytoolz.reduceby) +remove = cytoolz.curry(cytoolz.remove) +sliding_window = cytoolz.curry(cytoolz.sliding_window) +sorted = cytoolz.curry(cytoolz.sorted) +tail = cytoolz.curry(cytoolz.tail) +take = cytoolz.curry(cytoolz.take) +take_nth = cytoolz.curry(cytoolz.take_nth) +topk = cytoolz.curry(cytoolz.topk) +unique = cytoolz.curry(cytoolz.unique) +update_in = cytoolz.curry(cytoolz.update_in) +valfilter = cytoolz.curry(cytoolz.valfilter) +valmap = cytoolz.curry(cytoolz.valmap) + +del exceptions +del cytoolz diff 
--git a/env/lib/python3.8/site-packages/cytoolz/curried/exceptions.py b/env/lib/python3.8/site-packages/cytoolz/curried/exceptions.py new file mode 100644 index 00000000..70dd8bc5 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/curried/exceptions.py @@ -0,0 +1,17 @@ +import cytoolz + +__all__ = ['merge', 'merge_with'] + + +@cytoolz.curry +def merge(d, *dicts, **kwargs): + return cytoolz.merge(d, *dicts, **kwargs) + + +@cytoolz.curry +def merge_with(func, d, *dicts, **kwargs): + return cytoolz.merge_with(func, d, *dicts, **kwargs) + + +merge.__doc__ = cytoolz.merge.__doc__ +merge_with.__doc__ = cytoolz.merge_with.__doc__ diff --git a/env/lib/python3.8/site-packages/cytoolz/curried/operator.py b/env/lib/python3.8/site-packages/cytoolz/curried/operator.py new file mode 100644 index 00000000..9597305f --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/curried/operator.py @@ -0,0 +1,22 @@ +from __future__ import absolute_import + +import operator + +from cytoolz.functoolz import curry + + +# Tests will catch if/when this needs updated +IGNORE = { + "__abs__", "__index__", "__inv__", "__invert__", "__neg__", "__not__", + "__pos__", "_abs", "abs", "attrgetter", "index", "inv", "invert", + "itemgetter", "neg", "not_", "pos", "truth" +} + +locals().update( + {name: f if name in IGNORE else curry(f) + for name, f in vars(operator).items() if callable(f)} +) + +# Clean up the namespace. +del curry +del operator diff --git a/env/lib/python3.8/site-packages/cytoolz/dicttoolz.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/cytoolz/dicttoolz.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..71c0d030 Binary files /dev/null and b/env/lib/python3.8/site-packages/cytoolz/dicttoolz.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/cytoolz/dicttoolz.pxd b/env/lib/python3.8/site-packages/cytoolz/dicttoolz.pxd new file mode 100644 index 00000000..22d91bcf --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/dicttoolz.pxd @@ -0,0 +1,51 @@ +from cpython.ref cimport PyObject + +# utility functions to perform iteration over dicts or generic mapping +cdef class _iter_mapping: + cdef object it + cdef object cur + +ctypedef int (*f_map_next)(object p, Py_ssize_t *ppos, PyObject* *pkey, PyObject* *pval) except -1 + +cdef f_map_next get_map_iter(object d, PyObject* *ptr) except NULL + +cdef int PyMapping_Next(object p, Py_ssize_t *ppos, PyObject* *pkey, PyObject* *pval) except -1 + + +cdef object c_merge(object dicts, object factory=*) + + +cdef object c_merge_with(object func, object dicts, object factory=*) + + +cpdef object valmap(object func, object d, object factory=*) + + +cpdef object keymap(object func, object d, object factory=*) + + +cpdef object itemmap(object func, object d, object factory=*) + + +cpdef object valfilter(object predicate, object d, object factory=*) + + +cpdef object keyfilter(object predicate, object d, object factory=*) + + +cpdef object itemfilter(object predicate, object d, object factory=*) + + +cpdef object assoc(object d, object key, object value, object factory=*) + + +cpdef object assoc_in(object d, object keys, object value, object factory=*) + + +cdef object c_dissoc(object d, object keys, object factory=*) + + +cpdef object update_in(object d, object keys, object func, object default=*, object factory=*) + + +cpdef object get_in(object keys, object coll, object default=*, object no_default=*) diff --git a/env/lib/python3.8/site-packages/cytoolz/dicttoolz.pyx 
b/env/lib/python3.8/site-packages/cytoolz/dicttoolz.pyx
new file mode 100644
index 00000000..6a72db6a
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cytoolz/dicttoolz.pyx
@@ -0,0 +1,575 @@
+from cpython.dict cimport (PyDict_Check, PyDict_CheckExact, PyDict_GetItem,
+                           PyDict_New, PyDict_Next,
+                           PyDict_SetItem, PyDict_Update, PyDict_DelItem)
+from cpython.list cimport PyList_Append, PyList_New
+from cpython.object cimport PyObject_SetItem
+from cpython.ref cimport PyObject, Py_DECREF, Py_INCREF, Py_XDECREF
+
+# Locally defined bindings that differ from `cython.cpython` bindings
+from cytoolz.cpython cimport PyDict_Next_Compat, PtrIter_Next
+
+from copy import copy
+from collections.abc import Mapping
+
+
+__all__ = ['merge', 'merge_with', 'valmap', 'keymap', 'itemmap', 'valfilter',
+           'keyfilter', 'itemfilter', 'assoc', 'dissoc', 'assoc_in', 'get_in',
+           'update_in']
+
+
+cdef class _iter_mapping:
+    """ Keep a handle on the current item to prevent memory cleanup too early"""
+    def __cinit__(self, object it):
+        self.it = it
+        self.cur = None
+
+    def __iter__(self):
+        return self
+
+    def __next__(self):
+        self.cur = next(self.it)
+        return self.cur
+
+
+cdef int PyMapping_Next(object p, Py_ssize_t *ppos, PyObject* *pkey, PyObject* *pval) except -1:
+    """Mimic "PyDict_Next" interface, but for any mapping"""
+    cdef PyObject *obj
+    obj = PtrIter_Next(p)
+    if obj is NULL:
+        return 0
+    pkey[0] = <PyObject*>(<object>obj)[0]
+    pval[0] = <PyObject*>(<object>obj)[1]
+    Py_XDECREF(obj)  # removing this results in memory leak
+    return 1
+
+
+cdef f_map_next get_map_iter(object d, PyObject* *ptr) except NULL:
+    """Return a function pointer that performs iteration over the object returned in ptr.
+
+    The returned function signature matches "PyDict_Next". If ``d`` is a dict,
+    then the returned function *is* PyDict_Next, so iteration will be very fast.
+
+    The object returned through ``ptr`` needs to have its reference count
+    reduced by one once the caller "owns" the object.
+
+    This function lets us control exactly how iteration should be performed
+    over a given mapping. The current rules are:
+
+    1) If ``d`` is exactly a dict, use PyDict_Next
+    2) If ``d`` is a subtype of dict, use PyMapping_Next. This lets the user
+       control the order of iteration, such as for ordereddict.
+    3) If using PyMapping_Next, iterate using ``iteritems`` if possible,
+       otherwise iterate using ``items``.
+
+    """
+    cdef object val
+    cdef f_map_next rv
+    if PyDict_CheckExact(d):
+        val = d
+        rv = &PyDict_Next_Compat
+    else:
+        val = _iter_mapping(iter(d.items()))
+        rv = &PyMapping_Next
+    Py_INCREF(val)
+    ptr[0] = <PyObject*>val
+    return rv
+
+
+cdef get_factory(name, kwargs):
+    factory = kwargs.pop('factory', dict)
+    if kwargs:
+        raise TypeError("{0}() got an unexpected keyword argument "
+                        "'{1}'".format(name, kwargs.popitem()[0]))
+    return factory
+
+
+cdef object c_merge(object dicts, object factory=dict):
+    cdef object rv
+    rv = factory()
+    if PyDict_CheckExact(rv):
+        for d in dicts:
+            PyDict_Update(rv, d)
+    else:
+        for d in dicts:
+            rv.update(d)
+    return rv
+
+
+def merge(*dicts, **kwargs):
+    """
+    Merge a collection of dictionaries
+
+    >>> merge({1: 'one'}, {2: 'two'})
+    {1: 'one', 2: 'two'}
+
+    Later dictionaries have precedence
+
+    >>> merge({1: 2, 3: 4}, {3: 3, 4: 4})
+    {1: 2, 3: 3, 4: 4}
+
+    See Also:
+        merge_with
+    """
+    if len(dicts) == 1 and not isinstance(dicts[0], Mapping):
+        dicts = dicts[0]
+    factory = get_factory('merge', kwargs)
+    return c_merge(dicts, factory)
+
+
+cdef object c_merge_with(object func, object dicts, object factory=dict):
+    cdef:
+        dict result
+        object rv, d
+        list seq
+        f_map_next f
+        PyObject *obj
+        PyObject *pkey
+        PyObject *pval
+        Py_ssize_t pos
+
+    result = PyDict_New()
+    rv = factory()
+    for d in dicts:
+        f = get_map_iter(d, &obj)
+        d = <object>obj
+        Py_DECREF(d)
+        pos = 0
+        while f(d, &pos, &pkey, &pval):
+            obj = PyDict_GetItem(result, <object>pkey)
+            if obj is NULL:
+                seq = PyList_New(0)
+                PyList_Append(seq, <object>pval)
+                PyDict_SetItem(result, <object>pkey, seq)
+            else:
+                PyList_Append(<object>obj, <object>pval)
+
+    f = get_map_iter(result, &obj)
+    d = <object>obj
+    Py_DECREF(d)
+    pos = 0
+    while f(d, &pos, &pkey, &pval):
+        PyObject_SetItem(rv, <object>pkey, func(<object>pval))
+    return rv
+
+
+def merge_with(func, *dicts, **kwargs):
+    """
+    Merge dictionaries and apply function to combined values
+
+    A key may occur in more than one dict, and all values mapped from the key
+    will be passed to the function as a list, such as func([val1, val2, ...]).
+
+    >>> merge_with(sum, {1: 1, 2: 2}, {1: 10, 2: 20})
+    {1: 11, 2: 22}
+
+    >>> merge_with(first, {1: 1, 2: 2}, {2: 20, 3: 30})  # doctest: +SKIP
+    {1: 1, 2: 2, 3: 30}
+
+    See Also:
+        merge
+    """
+
+    if len(dicts) == 1 and not isinstance(dicts[0], Mapping):
+        dicts = dicts[0]
+    factory = get_factory('merge_with', kwargs)
+    return c_merge_with(func, dicts, factory)
+
+
+cpdef object valmap(object func, object d, object factory=dict):
+    """
+    Apply function to values of dictionary
+
+    >>> bills = {"Alice": [20, 15, 30], "Bob": [10, 35]}
+    >>> valmap(sum, bills)  # doctest: +SKIP
+    {'Alice': 65, 'Bob': 45}
+
+    See Also:
+        keymap
+        itemmap
+    """
+    cdef:
+        object rv
+        f_map_next f
+        PyObject *obj
+        PyObject *pkey
+        PyObject *pval
+        Py_ssize_t pos = 0
+
+    rv = factory()
+    f = get_map_iter(d, &obj)
+    d = <object>obj
+    Py_DECREF(d)
+    while f(d, &pos, &pkey, &pval):
+        rv[<object>pkey] = func(<object>pval)
+    return rv
+
+
+cpdef object keymap(object func, object d, object factory=dict):
+    """
+    Apply function to keys of dictionary
+
+    >>> bills = {"Alice": [20, 15, 30], "Bob": [10, 35]}
+    >>> keymap(str.lower, bills)  # doctest: +SKIP
+    {'alice': [20, 15, 30], 'bob': [10, 35]}
+
+    See Also:
+        valmap
+        itemmap
+    """
+    cdef:
+        object rv
+        f_map_next f
+        PyObject *obj
+        PyObject *pkey
+        PyObject *pval
+        Py_ssize_t pos = 0
+
+    rv = factory()
+    f = get_map_iter(d, &obj)
+    d = <object>obj
+    Py_DECREF(d)
+    while f(d, &pos, &pkey, &pval):
+        rv[func(<object>pkey)] = <object>pval
+    return rv
+
+
+cpdef object itemmap(object func, object d, object factory=dict):
+    """
+    Apply function to items of dictionary
+
+    >>> accountids = {"Alice": 10, "Bob": 20}
+    >>> itemmap(reversed, accountids)  # doctest: +SKIP
+    {10: "Alice", 20: "Bob"}
+
+    See Also:
+        keymap
+        valmap
+    """
+    cdef:
+        object rv, k, v
+        f_map_next f
+        PyObject *obj
+        PyObject *pkey
+        PyObject *pval
+        Py_ssize_t pos = 0
+
+    rv = factory()
+    f = get_map_iter(d, &obj)
+    d = <object>obj
+    Py_DECREF(d)
+    while f(d, &pos, &pkey, &pval):
+        k, v = func((<object>pkey, <object>pval))
+        rv[k] = v
+    return rv
+
+
+cpdef object valfilter(object predicate, object d, object factory=dict):
+    """
+    Filter items in dictionary by value
+
+    >>> iseven = lambda x: x % 2 == 0
+    >>> d = {1: 2, 2: 3, 3: 4, 4: 5}
+    >>> valfilter(iseven, d)
+    {1: 2, 3: 4}
+
+    See Also:
+        keyfilter
+        itemfilter
+        valmap
+    """
+    cdef:
+        object rv
+        f_map_next f
+        PyObject *obj
+        PyObject *pkey
+        PyObject *pval
+        Py_ssize_t pos = 0
+
+    rv = factory()
+    f = get_map_iter(d, &obj)
+    d = <object>obj
+    Py_DECREF(d)
+    while f(d, &pos, &pkey, &pval):
+        if predicate(<object>pval):
+            rv[<object>pkey] = <object>pval
+    return rv
+
+
+cpdef object keyfilter(object predicate, object d, object factory=dict):
+    """
+    Filter items in dictionary by key
+
+    >>> iseven = lambda x: x % 2 == 0
+    >>> d = {1: 2, 2: 3, 3: 4, 4: 5}
+    >>> keyfilter(iseven, d)
+    {2: 3, 4: 5}
+
+    See Also:
+        valfilter
+        itemfilter
+        keymap
+    """
+    cdef:
+        object rv
+        f_map_next f
+        PyObject *obj
+        PyObject *pkey
+        PyObject *pval
+        Py_ssize_t pos = 0
+
+    rv = factory()
+    f = get_map_iter(d, &obj)
+    d = <object>obj
+    Py_DECREF(d)
+    while f(d, &pos, &pkey, &pval):
+        if predicate(<object>pkey):
+            rv[<object>pkey] = <object>pval
+    return rv
+
+
+cpdef object itemfilter(object predicate, object d, object factory=dict):
+    """
+    Filter items in dictionary by item
+
+    >>> def isvalid(item):
+    ...     k, v = item
+    ...     return k % 2 == 0 and v < 4
+
+    >>> d = {1: 2, 2: 3, 3: 4, 4: 5}
+    >>> itemfilter(isvalid, d)
+    {2: 3}
+
+    See Also:
+        keyfilter
+        valfilter
+        itemmap
+    """
+    cdef:
+        object rv, k, v
+        f_map_next f
+        PyObject *obj
+        PyObject *pkey
+        PyObject *pval
+        Py_ssize_t pos = 0
+
+    rv = factory()
+    f = get_map_iter(d, &obj)
+    d = <object>obj
+    Py_DECREF(d)
+    while f(d, &pos, &pkey, &pval):
+        k = <object>pkey
+        v = <object>pval
+        if predicate((k, v)):
+            rv[k] = v
+    return rv
+
+
+cpdef object assoc(object d, object key, object value, object factory=dict):
+    """
+    Return a new dict with new key value pair
+
+    New dict has d[key] set to value. Does not modify the initial dictionary.
+
+    >>> assoc({'x': 1}, 'x', 2)
+    {'x': 2}
+    >>> assoc({'x': 1}, 'y', 3)   # doctest: +SKIP
+    {'x': 1, 'y': 3}
+    """
+    cdef object rv
+    rv = factory()
+    if PyDict_CheckExact(rv):
+        PyDict_Update(rv, d)
+    else:
+        rv.update(d)
+    rv[key] = value
+    return rv
+
+
+cpdef object assoc_in(object d, object keys, object value, object factory=dict):
+    """
+    Return a new dict with new, potentially nested, key value pair
+
+    >>> purchase = {'name': 'Alice',
+    ...             'order': {'items': ['Apple', 'Orange'],
+    ...                       'costs': [0.50, 1.25]},
+    ...             'credit card': '5555-1234-1234-1234'}
+    >>> assoc_in(purchase, ['order', 'costs'], [0.25, 1.00]) # doctest: +SKIP
+    {'credit card': '5555-1234-1234-1234',
+     'name': 'Alice',
+     'order': {'costs': [0.25, 1.00], 'items': ['Apple', 'Orange']}}
+    """
+    cdef object prevkey, key
+    cdef object rv, inner, dtemp
+    prevkey, keys = keys[0], keys[1:]
+    rv = factory()
+    if PyDict_CheckExact(rv):
+        PyDict_Update(rv, d)
+    else:
+        rv.update(d)
+    inner = rv
+
+    for key in keys:
+        if prevkey in d:
+            d = d[prevkey]
+            dtemp = factory()
+            if PyDict_CheckExact(dtemp):
+                PyDict_Update(dtemp, d)
+            else:
+                dtemp.update(d)
+        else:
+            d = factory()
+            dtemp = d
+        inner[prevkey] = dtemp
+        prevkey = key
+        inner = dtemp
+
+    inner[prevkey] = value
+    return rv
+
+
+cdef object c_dissoc(object d, object keys, object factory=dict):
+    # implementation copied from toolz. Not benchmarked.
+    cdef object rv
+    rv = factory()
+    if len(keys) < len(d) * 0.6:
+        rv.update(d)
+        for key in keys:
+            if key in rv:
+                del rv[key]
+    else:
+        remaining = set(d)
+        remaining.difference_update(keys)
+        for k in remaining:
+            rv[k] = d[k]
+    return rv
+
+
+def dissoc(d, *keys, **kwargs):
+    """
+    Return a new dict with the given key(s) removed.
+
+    New dict has d[key] deleted for each supplied key.
+    Does not modify the initial dictionary.
+
+    >>> dissoc({'x': 1, 'y': 2}, 'y')
+    {'x': 1}
+    >>> dissoc({'x': 1, 'y': 2}, 'y', 'x')
+    {}
+    >>> dissoc({'x': 1}, 'y') # Ignores missing keys
+    {'x': 1}
+    """
+    return c_dissoc(d, keys, get_factory('dissoc', kwargs))
+
+
+cpdef object update_in(object d, object keys, object func, object default=None, object factory=dict):
+    """
+    Update value in a (potentially) nested dictionary
+
+    inputs:
+    d - dictionary on which to operate
+    keys - list or tuple giving the location of the value to be changed in d
+    func - function to operate on that value
+
+    If keys == [k0,..,kX] and d[k0]..[kX] == v, update_in returns a copy of the
+    original dictionary with v replaced by func(v), but does not mutate the
+    original dictionary.
+
+    If k0 is not a key in d, update_in creates nested dictionaries to the depth
+    specified by the keys, with the innermost value set to func(default).
+
+    >>> inc = lambda x: x + 1
+    >>> update_in({'a': 0}, ['a'], inc)
+    {'a': 1}
+
+    >>> transaction = {'name': 'Alice',
+    ...                'purchase': {'items': ['Apple', 'Orange'],
+    ...                             
'costs': [0.50, 1.25]}, + ... 'credit card': '5555-1234-1234-1234'} + >>> update_in(transaction, ['purchase', 'costs'], sum) # doctest: +SKIP + {'credit card': '5555-1234-1234-1234', + 'name': 'Alice', + 'purchase': {'costs': 1.75, 'items': ['Apple', 'Orange']}} + + >>> # updating a value when k0 is not in d + >>> update_in({}, [1, 2, 3], str, default="bar") + {1: {2: {3: 'bar'}}} + >>> update_in({1: 'foo'}, [2, 3, 4], inc, 0) + {1: 'foo', 2: {3: {4: 1}}} + """ + cdef object prevkey, key + cdef object rv, inner, dtemp + prevkey, keys = keys[0], keys[1:] + rv = factory() + if PyDict_CheckExact(rv): + PyDict_Update(rv, d) + else: + rv.update(d) + inner = rv + + for key in keys: + if prevkey in d: + d = d[prevkey] + dtemp = factory() + if PyDict_CheckExact(dtemp): + PyDict_Update(dtemp, d) + else: + dtemp.update(d) + else: + d = factory() + dtemp = d + inner[prevkey] = dtemp + prevkey = key + inner = dtemp + + if prevkey in d: + key = func(d[prevkey]) + else: + key = func(default) + inner[prevkey] = key + return rv + + +cdef tuple _get_in_exceptions = (KeyError, IndexError, TypeError) + + +cpdef object get_in(object keys, object coll, object default=None, object no_default=False): + """ + Returns coll[i0][i1]...[iX] where [i0, i1, ..., iX]==keys. + + If coll[i0][i1]...[iX] cannot be found, returns ``default``, unless + ``no_default`` is specified, then it raises KeyError or IndexError. + + ``get_in`` is a generalization of ``operator.getitem`` for nested data + structures such as dictionaries and lists. + + >>> transaction = {'name': 'Alice', + ... 'purchase': {'items': ['Apple', 'Orange'], + ... 'costs': [0.50, 1.25]}, + ... 'credit card': '5555-1234-1234-1234'} + >>> get_in(['purchase', 'items', 0], transaction) + 'Apple' + >>> get_in(['name'], transaction) + 'Alice' + >>> get_in(['purchase', 'total'], transaction) + >>> get_in(['purchase', 'items', 'apple'], transaction) + >>> get_in(['purchase', 'items', 10], transaction) + >>> get_in(['purchase', 'total'], transaction, 0) + 0 + >>> get_in(['y'], {}, no_default=True) + Traceback (most recent call last): + ... 
+ KeyError: 'y' + + See Also: + itertoolz.get + operator.getitem + """ + cdef object item + try: + for item in keys: + coll = coll[item] + return coll + except _get_in_exceptions: + if no_default: + raise + return default diff --git a/env/lib/python3.8/site-packages/cytoolz/functoolz.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/cytoolz/functoolz.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..806ad475 Binary files /dev/null and b/env/lib/python3.8/site-packages/cytoolz/functoolz.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/cytoolz/functoolz.pxd b/env/lib/python3.8/site-packages/cytoolz/functoolz.pxd new file mode 100644 index 00000000..5425c8b4 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/functoolz.pxd @@ -0,0 +1,68 @@ +cpdef object identity(object x) + + +cdef object c_thread_first(object val, object forms) + + +cdef object c_thread_last(object val, object forms) + + +cdef class curry: + cdef readonly object _sigspec + cdef readonly object _has_unknown_args + cdef readonly object func + cdef readonly tuple args + cdef readonly dict keywords + cdef public object __doc__ + cdef public object __name__ + cdef public object __module__ + cdef public object __qualname__ + + +cpdef object memoize(object func, object cache=*, object key=*) + + +cdef class _memoize: + cdef object func + cdef object cache + cdef object key + cdef bint is_unary + cdef bint may_have_kwargs + + +cdef class Compose: + cdef public object first + cdef public tuple funcs + + +cdef object c_compose(object funcs) + + +cdef object c_compose_left(object funcs) + + +cdef object c_pipe(object data, object funcs) + + +cdef class complement: + cdef object func + + +cdef class juxt: + cdef public tuple funcs + + +cpdef object do(object func, object x) + + +cpdef object flip(object func, object a, object b) + + +cpdef object return_none(object exc) + + +cdef class excepts: + cdef public object exc + cdef public object func + cdef public object handler + diff --git a/env/lib/python3.8/site-packages/cytoolz/functoolz.pyx b/env/lib/python3.8/site-packages/cytoolz/functoolz.pyx new file mode 100644 index 00000000..ea5d0e35 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/functoolz.pyx @@ -0,0 +1,876 @@ +import inspect +import sys +from functools import partial +from importlib import import_module +from operator import attrgetter +from types import MethodType +from cytoolz.utils import no_default +import cytoolz._signatures as _sigs + +from toolz.functoolz import (InstanceProperty, instanceproperty, is_arity, + num_required_args, has_varargs, has_keywords, + is_valid_args, is_partial_args) + +cimport cython +from cpython.dict cimport PyDict_Merge, PyDict_New +from cpython.object cimport (PyCallable_Check, PyObject_Call, PyObject_CallObject, + PyObject_RichCompare, Py_EQ, Py_NE) +from cpython.ref cimport PyObject +from cpython.sequence cimport PySequence_Concat +from cpython.set cimport PyFrozenSet_New +from cpython.tuple cimport PyTuple_Check, PyTuple_GET_SIZE + + +__all__ = ['identity', 'thread_first', 'thread_last', 'memoize', 'compose', 'compose_left', + 'pipe', 'complement', 'juxt', 'do', 'curry', 'memoize', 'flip', + 'excepts', 'apply'] + + +cpdef object identity(object x): + return x + + +def apply(*func_and_args, **kwargs): + """ + Applies a function and returns the results + + >>> def double(x): return 2*x + >>> def inc(x): return x + 1 + >>> apply(double, 5) + 10 + + >>> tuple(map(apply, [double, inc, double], [10, 500, 
8000]))
+    (20, 501, 16000)
+    """
+    if not func_and_args:
+        raise TypeError('func argument is required')
+    return func_and_args[0](*func_and_args[1:], **kwargs)
+
+
+cdef object c_thread_first(object val, object forms):
+    cdef object form, func
+    cdef tuple args
+    for form in forms:
+        if PyCallable_Check(form):
+            val = form(val)
+        elif PyTuple_Check(form):
+            func, args = form[0], (val,) + form[1:]
+            val = PyObject_CallObject(func, args)
+        else:
+            val = None
+    return val
+
+
+def thread_first(val, *forms):
+    """
+    Thread value through a sequence of functions/forms
+
+    >>> def double(x): return 2*x
+    >>> def inc(x): return x + 1
+    >>> thread_first(1, inc, double)
+    4
+
+    If the function expects more than one input you can specify those inputs
+    in a tuple. The value is used as the first input.
+
+    >>> def add(x, y): return x + y
+    >>> def pow(x, y): return x**y
+    >>> thread_first(1, (add, 4), (pow, 2))  # pow(add(1, 4), 2)
+    25
+
+    So in general
+        thread_first(x, f, (g, y, z))
+    expands to
+        g(f(x), y, z)
+
+    See Also:
+        thread_last
+    """
+    return c_thread_first(val, forms)
+
+
+cdef object c_thread_last(object val, object forms):
+    cdef object form, func
+    cdef tuple args
+    for form in forms:
+        if PyCallable_Check(form):
+            val = form(val)
+        elif PyTuple_Check(form):
+            func, args = form[0], form[1:] + (val,)
+            val = PyObject_CallObject(func, args)
+        else:
+            val = None
+    return val
+
+
+def thread_last(val, *forms):
+    """
+    Thread value through a sequence of functions/forms
+
+    >>> def double(x): return 2*x
+    >>> def inc(x): return x + 1
+    >>> thread_last(1, inc, double)
+    4
+
+    If the function expects more than one input you can specify those inputs
+    in a tuple. The value is used as the last input.
+
+    >>> def add(x, y): return x + y
+    >>> def pow(x, y): return x**y
+    >>> thread_last(1, (add, 4), (pow, 2))  # pow(2, add(4, 1))
+    32
+
+    So in general
+        thread_last(x, f, (g, y, z))
+    expands to
+        g(y, z, f(x))
+
+    >>> def iseven(x):
+    ...     return x % 2 == 0
+    >>> list(thread_last([1, 2, 3], (map, inc), (filter, iseven)))
+    [2, 4]
+
+    See Also:
+        thread_first
+    """
+    return c_thread_last(val, forms)
+
+
+cdef struct partialobject:
+    PyObject _
+    PyObject *fn
+    PyObject *args
+    PyObject *kw
+    PyObject *dict
+    PyObject *weakreflist
+
+
+cdef object _partial = partial(lambda: None)
+
+
+cdef object _empty_kwargs():
+    if <object> (<partialobject*> _partial).kw is None:
+        return None
+    return PyDict_New()
+
+
+cdef class curry:
+    """ curry(self, *args, **kwargs)
+
+    Curry a callable function
+
+    Enables partial application of arguments through calling a function with an
+    incomplete set of arguments.
+
+    >>> def mul(x, y):
+    ...     return x * y
+    >>> mul = curry(mul)
+
+    >>> double = mul(2)
+    >>> double(10)
+    20
+
+    Also supports keyword arguments
+
+    >>> @curry  # Can use curry as a decorator
+    ... def f(x, y, a=10):
+    ...     return a * (x + y)
+
+    >>> add = f(a=1)
+    >>> add(2, 3)
+    5
+
+    See Also:
+        cytoolz.curried - namespace of curried functions
+        https://toolz.readthedocs.io/en/latest/curry.html
+    """
+
+    def __cinit__(self, *args, **kwargs):
+        if not args:
+            raise TypeError('__init__() takes at least 2 arguments (1 given)')
+        func, args = args[0], args[1:]
+        if not PyCallable_Check(func):
+            raise TypeError("Input must be callable")
+
+        # curry- or functools.partial-like object? 
Unpack and merge arguments + if (hasattr(func, 'func') + and hasattr(func, 'args') + and hasattr(func, 'keywords') + and isinstance(func.args, tuple)): + if func.keywords: + PyDict_Merge(kwargs, func.keywords, False) + ## Equivalent to: + # for key, val in func.keywords.items(): + # if key not in kwargs: + # kwargs[key] = val + args = func.args + args + func = func.func + + self.func = func + self.args = args + self.keywords = kwargs if kwargs else _empty_kwargs() + self.__doc__ = getattr(func, '__doc__', None) + self.__name__ = getattr(func, '__name__', '') + self.__module__ = getattr(func, '__module__', None) + self.__qualname__ = getattr(func, '__qualname__', None) + self._sigspec = None + self._has_unknown_args = None + + def __str__(self): + return str(self.func) + + def __repr__(self): + return repr(self.func) + + def __hash__(self): + return hash((self.func, self.args, + frozenset(self.keywords.items()) if self.keywords + else None)) + + def __richcmp__(self, other, int op): + is_equal = (isinstance(other, curry) and self.func == other.func and + self.args == other.args and self.keywords == other.keywords) + if op == Py_EQ: + return is_equal + if op == Py_NE: + return not is_equal + return PyObject_RichCompare(id(self), id(other), op) + + def __call__(self, *args, **kwargs): + cdef object val + + if PyTuple_GET_SIZE(args) == 0: + args = self.args + elif PyTuple_GET_SIZE(self.args) != 0: + args = PySequence_Concat(self.args, args) + if self.keywords is not None: + PyDict_Merge(kwargs, self.keywords, False) + try: + return self.func(*args, **kwargs) + except TypeError as val: + if self._should_curry_internal(args, kwargs, val): + return type(self)(self.func, *args, **kwargs) + raise + + def _should_curry(self, args, kwargs, exc=None): + if PyTuple_GET_SIZE(args) == 0: + args = self.args + elif PyTuple_GET_SIZE(self.args) != 0: + args = PySequence_Concat(self.args, args) + if self.keywords is not None: + PyDict_Merge(kwargs, self.keywords, False) + return self._should_curry_internal(args, kwargs) + + def _should_curry_internal(self, args, kwargs, exc=None): + func = self.func + + # `toolz` has these three lines + #args = self.args + args + #if self.keywords: + # kwargs = dict(self.keywords, **kwargs) + + if self._sigspec is None: + sigspec = self._sigspec = _sigs.signature_or_spec(func) + self._has_unknown_args = has_varargs(func, sigspec) is not False + else: + sigspec = self._sigspec + + if is_partial_args(func, args, kwargs, sigspec) is False: + # Nothing can make the call valid + return False + elif self._has_unknown_args: + # The call may be valid and raised a TypeError, but we curry + # anyway because the function may have `*args`. This is useful + # for decorators with signature `func(*args, **kwargs)`. 
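+            # e.g. for `def f(*args, **kwargs): ...` a TypeError raised
+            # from the body is indistinguishable from a signature mismatch,
+            # so currying again is the safe choice here.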
+ return True + elif not is_valid_args(func, args, kwargs, sigspec): + # Adding more arguments may make the call valid + return True + else: + # There was a genuine TypeError + return False + + def bind(self, *args, **kwargs): + return type(self)(self, *args, **kwargs) + + def call(self, *args, **kwargs): + cdef object val + + if PyTuple_GET_SIZE(args) == 0: + args = self.args + elif PyTuple_GET_SIZE(self.args) != 0: + args = PySequence_Concat(self.args, args) + if self.keywords is not None: + PyDict_Merge(kwargs, self.keywords, False) + return self.func(*args, **kwargs) + + def __get__(self, instance, owner): + if instance is None: + return self + return type(self)(self, instance) + + property __signature__: + def __get__(self): + sig = inspect.signature(self.func) + args = self.args or () + keywords = self.keywords or {} + if is_partial_args(self.func, args, keywords, sig) is False: + raise TypeError('curry object has incorrect arguments') + + params = list(sig.parameters.values()) + skip = 0 + for param in params[:len(args)]: + if param.kind == param.VAR_POSITIONAL: + break + skip += 1 + + kwonly = False + newparams = [] + for param in params[skip:]: + kind = param.kind + default = param.default + if kind == param.VAR_KEYWORD: + pass + elif kind == param.VAR_POSITIONAL: + if kwonly: + continue + elif param.name in keywords: + default = keywords[param.name] + kind = param.KEYWORD_ONLY + kwonly = True + else: + if kwonly: + kind = param.KEYWORD_ONLY + if default is param.empty: + default = no_default + newparams.append(param.replace(default=default, kind=kind)) + + return sig.replace(parameters=newparams) + + def __reduce__(self): + func = self.func + modname = getattr(func, '__module__', None) + qualname = getattr(func, '__qualname__', None) + if qualname is None: + qualname = getattr(func, '__name__', None) + is_decorated = None + if modname and qualname: + attrs = [] + obj = import_module(modname) + for attr in qualname.split('.'): + if isinstance(obj, curry): + attrs.append('func') + obj = obj.func + obj = getattr(obj, attr, None) + if obj is None: + break + attrs.append(attr) + if isinstance(obj, curry) and obj.func is func: + is_decorated = obj is self + qualname = '.'.join(attrs) + func = '%s:%s' % (modname, qualname) + + state = (type(self), func, self.args, self.keywords, is_decorated) + return (_restore_curry, state) + + +cpdef object _restore_curry(cls, func, args, kwargs, is_decorated): + if isinstance(func, str): + modname, qualname = func.rsplit(':', 1) + obj = import_module(modname) + for attr in qualname.split('.'): + obj = getattr(obj, attr) + if is_decorated: + return obj + func = obj.func + obj = cls(func, *args, **(kwargs or {})) + return obj + + +cpdef object memoize(object func, object cache=None, object key=None): + """ + Cache a function's result for speedy future evaluation + + Considerations: + Trades memory for speed. + Only use on pure functions. + + >>> def add(x, y): return x + y + >>> add = memoize(add) + + Or use as a decorator + + >>> @memoize + ... def add(x, y): + ... return x + y + + Use the ``cache`` keyword to provide a dict-like object as an initial cache + + >>> @memoize(cache={(1, 2): 3}) + ... def add(x, y): + ... return x + y + + Note that the above works as a decorator because ``memoize`` is curried. + + It is also possible to provide a ``key(args, kwargs)`` function that + calculates keys used for the cache, which receives an ``args`` tuple and + ``kwargs`` dict as input, and must return a hashable value. 
However, + the default key function should be sufficient most of the time. + + >>> # Use key function that ignores extraneous keyword arguments + >>> @memoize(key=lambda args, kwargs: args) + ... def add(x, y, verbose=False): + ... if verbose: + ... print('Calculating %s + %s' % (x, y)) + ... return x + y + """ + return _memoize(func, cache, key) + + +cdef class _memoize: + + property __doc__: + def __get__(self): + return self.func.__doc__ + + property __name__: + def __get__(self): + return self.func.__name__ + + property __wrapped__: + def __get__(self): + return self.func + + def __cinit__(self, func, cache, key): + self.func = func + if cache is None: + self.cache = PyDict_New() + else: + self.cache = cache + self.key = key + + try: + self.may_have_kwargs = has_keywords(func) is not False + # Is unary function (single arg, no variadic argument or keywords)? + self.is_unary = is_arity(1, func) + except TypeError: + self.is_unary = False + self.may_have_kwargs = True + + def __call__(self, *args, **kwargs): + cdef object key + if self.key is not None: + key = self.key(args, kwargs) + elif self.is_unary: + key = args[0] + elif self.may_have_kwargs: + key = (args or None, + PyFrozenSet_New(kwargs.items()) if kwargs else None) + else: + key = args + + if key in self.cache: + return self.cache[key] + else: + result = PyObject_Call(self.func, args, kwargs) + self.cache[key] = result + return result + + def __get__(self, instance, owner): + if instance is None: + return self + return curry(self, instance) + + +cdef class Compose: + """ Compose(self, *funcs) + + A composition of functions + + See Also: + compose + """ + # fix for #103, note: we cannot use __name__ at module-scope in cython + __module__ = 'cytooz.functoolz' + + def __cinit__(self, *funcs): + self.first = funcs[-1] + self.funcs = tuple(reversed(funcs[:-1])) + + def __call__(self, *args, **kwargs): + cdef object func, ret + ret = PyObject_Call(self.first, args, kwargs) + for func in self.funcs: + ret = func(ret) + return ret + + def __reduce__(self): + return (Compose, (self.first,), self.funcs) + + def __setstate__(self, state): + self.funcs = state + + def __repr__(self): + return '{.__class__.__name__}{!r}'.format( + self, tuple(reversed((self.first, ) + self.funcs))) + + def __eq__(self, other): + if isinstance(other, Compose): + return other.first == self.first and other.funcs == self.funcs + return NotImplemented + + def __ne__(self, other): + if isinstance(other, Compose): + return other.first != self.first or other.funcs != self.funcs + return NotImplemented + + def __hash__(self): + return hash(self.first) ^ hash(self.funcs) + + def __get__(self, obj, objtype): + if obj is None: + return self + else: + return MethodType(self, obj) + + property __wrapped__: + def __get__(self): + return self.first + + property __signature__: + def __get__(self): + base = inspect.signature(self.first) + last = inspect.signature(self.funcs[-1]) + return base.replace(return_annotation=last.return_annotation) + + property __name__: + def __get__(self): + try: + return '_of_'.join( + f.__name__ for f in reversed((self.first,) + self.funcs) + ) + except AttributeError: + return type(self).__name__ + + property __doc__: + def __get__(self): + def composed_doc(*fs): + """Generate a docstring for the composition of fs. + """ + if not fs: + # Argument name for the docstring. 
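+                    # Base case of the recursion: with no functions left,
+                    # render the bare argument list itself.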
+ return '*args, **kwargs' + + return '{f}({g})'.format(f=fs[0].__name__, g=composed_doc(*fs[1:])) + + try: + return ( + 'lambda *args, **kwargs: ' + + composed_doc(*reversed((self.first,) + self.funcs)) + ) + except AttributeError: + # One of our callables does not have a `__name__`, whatever. + return 'A composition of functions' + + +cdef object c_compose(object funcs): + if not funcs: + return identity + elif len(funcs) == 1: + return funcs[0] + else: + return Compose(*funcs) + + +def compose(*funcs): + """ + Compose functions to operate in series. + + Returns a function that applies other functions in sequence. + + Functions are applied from right to left so that + ``compose(f, g, h)(x, y)`` is the same as ``f(g(h(x, y)))``. + + If no arguments are provided, the identity function (f(x) = x) is returned. + + >>> inc = lambda i: i + 1 + >>> compose(str, inc)(3) + '4' + + See Also: + compose_left + pipe + """ + return c_compose(funcs) + + +cdef object c_compose_left(object funcs): + if not funcs: + return identity + elif len(funcs) == 1: + return funcs[0] + else: + return Compose(*reversed(funcs)) + + +def compose_left(*funcs): + """ + Compose functions to operate in series. + + Returns a function that applies other functions in sequence. + + Functions are applied from left to right so that + ``compose_left(f, g, h)(x, y)`` is the same as ``h(g(f(x, y)))``. + + If no arguments are provided, the identity function (f(x) = x) is returned. + + >>> inc = lambda i: i + 1 + >>> compose_left(inc, str)(3) + '4' + + See Also: + compose + pipe + """ + return c_compose_left(funcs) + + +cdef object c_pipe(object data, object funcs): + cdef object func + for func in funcs: + data = func(data) + return data + + +def pipe(data, *funcs): + """ + Pipe a value through a sequence of functions + + I.e. ``pipe(data, f, g, h)`` is equivalent to ``h(g(f(data)))`` + + We think of the value as progressing through a pipe of several + transformations, much like pipes in UNIX + + ``$ cat data | f | g | h`` + + >>> double = lambda i: 2 * i + >>> pipe(3, double, str) + '6' + + See Also: + compose + compose_left + thread_first + thread_last + """ + return c_pipe(data, funcs) + + +cdef class complement: + """ complement(func) + + Convert a predicate function to its logical complement. + + In other words, return a function that, for inputs that normally + yield True, yields False, and vice-versa. + + >>> def iseven(n): return n % 2 == 0 + >>> isodd = complement(iseven) + >>> iseven(2) + True + >>> isodd(2) + False + """ + def __cinit__(self, func): + self.func = func + + def __call__(self, *args, **kwargs): + return not PyObject_Call(self.func, args, kwargs) # use PyObject_Not? + + def __reduce__(self): + return (complement, (self.func,)) + + +cdef class juxt: + """ juxt(self, *funcs) + + Creates a function that calls several functions with the same arguments + + Takes several functions and returns a function that applies its arguments + to each of those functions then returns a tuple of the results. + + Name comes from juxtaposition: the fact of two things being seen or placed + close together with contrasting effect. 
+ + >>> inc = lambda x: x + 1 + >>> double = lambda x: x * 2 + >>> juxt(inc, double)(10) + (11, 20) + >>> juxt([inc, double])(10) + (11, 20) + """ + def __cinit__(self, *funcs): + if len(funcs) == 1 and not PyCallable_Check(funcs[0]): + funcs = funcs[0] + self.funcs = tuple(funcs) + + def __call__(self, *args, **kwargs): + if kwargs: + return tuple(PyObject_Call(func, args, kwargs) for func in self.funcs) + else: + return tuple(PyObject_CallObject(func, args) for func in self.funcs) + + def __reduce__(self): + return (juxt, (self.funcs,)) + + +cpdef object do(object func, object x): + """ + Runs ``func`` on ``x``, returns ``x`` + + Because the results of ``func`` are not returned, only the side + effects of ``func`` are relevant. + + Logging functions can be made by composing ``do`` with a storage function + like ``list.append`` or ``file.write`` + + >>> from cytoolz import compose + >>> from cytoolz.curried import do + + >>> log = [] + >>> inc = lambda x: x + 1 + >>> inc = compose(inc, do(log.append)) + >>> inc(1) + 2 + >>> inc(11) + 12 + >>> log + [1, 11] + """ + func(x) + return x + + +cpdef object flip(object func, object a, object b): + """ + Call the function call with the arguments flipped + + This function is curried. + + >>> def div(a, b): + ... return a // b + ... + >>> flip(div, 2, 6) + 3 + >>> div_by_two = flip(div, 2) + >>> div_by_two(4) + 2 + + This is particularly useful for built in functions and functions defined + in C extensions that accept positional only arguments. For example: + isinstance, issubclass. + + >>> data = [1, 'a', 'b', 2, 1.5, object(), 3] + >>> only_ints = list(filter(flip(isinstance, int), data)) + >>> only_ints + [1, 2, 3] + """ + return PyObject_CallObject(func, (b, a)) + + +_flip = flip # uncurried + + +cpdef object return_none(object exc): + """ + Returns None. + """ + return None + + +cdef class excepts: + """ excepts(self, exc, func, handler=return_none) + + A wrapper around a function to catch exceptions and + dispatch to a handler. + + This is like a functional try/except block, in the same way that + ifexprs are functional if/else blocks. + + Examples + -------- + >>> excepting = excepts( + ... ValueError, + ... lambda a: [1, 2].index(a), + ... lambda _: -1, + ... ) + >>> excepting(1) + 0 + >>> excepting(3) + -1 + + Multiple exceptions and default except clause. + >>> excepting = excepts((IndexError, KeyError), lambda a: a[0]) + >>> excepting([]) + >>> excepting([1]) + 1 + >>> excepting({}) + >>> excepting({0: 1}) + 1 + """ + + def __cinit__(self, exc, func, handler=return_none): + self.exc = exc + self.func = func + self.handler = handler + + def __call__(self, *args, **kwargs): + try: + return self.func(*args, **kwargs) + except self.exc as e: + return self.handler(e) + + property __name__: + def __get__(self): + exc = self.exc + try: + if isinstance(exc, tuple): + exc_name = '_or_'.join(map(attrgetter('__name__'), exc)) + else: + exc_name = exc.__name__ + return '%s_excepting_%s' % (self.func.__name__, exc_name) + except AttributeError: + return 'excepting' + + property __doc__: + def __get__(self): + from textwrap import dedent + + exc = self.exc + try: + if isinstance(exc, tuple): + exc_name = '(%s)' % ', '.join( + map(attrgetter('__name__'), exc), + ) + else: + exc_name = exc.__name__ + + return dedent( + """\ + A wrapper around {inst.func.__name__!r} that will except: + {exc} + and handle any exceptions with {inst.handler.__name__!r}. 
+ + Docs for {inst.func.__name__!r}: + {inst.func.__doc__} + + Docs for {inst.handler.__name__!r}: + {inst.handler.__doc__} + """ + ).format( + inst=self, + exc=exc_name, + ) + except AttributeError: + return type(self).__doc__ + diff --git a/env/lib/python3.8/site-packages/cytoolz/itertoolz.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/cytoolz/itertoolz.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..e9bcdbb6 Binary files /dev/null and b/env/lib/python3.8/site-packages/cytoolz/itertoolz.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/cytoolz/itertoolz.pxd b/env/lib/python3.8/site-packages/cytoolz/itertoolz.pxd new file mode 100644 index 00000000..57e06098 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/itertoolz.pxd @@ -0,0 +1,281 @@ +cdef class remove: + cdef object predicate + cdef object iter_seq + + +cdef class accumulate: + cdef object binop + cdef object iter_seq + cdef object result + cdef object initial + + +cpdef dict groupby(object key, object seq) + + +cdef class _merge_sorted: + cdef object seq1 + cdef object seq2 + cdef object val1 + cdef object val2 + cdef Py_ssize_t loop + +cdef class _merge_sorted_key: + cdef object seq1 + cdef object seq2 + cdef object val1 + cdef object val2 + cdef object key + cdef object key1 + cdef object key2 + cdef Py_ssize_t loop + + +cdef object c_merge_sorted(object seqs, object key=*) + + +cdef class interleave: + cdef list iters + cdef list newiters + cdef Py_ssize_t i + cdef Py_ssize_t n + + +cdef class _unique_key: + cdef object key + cdef object iter_seq + cdef object seen + + +cdef class _unique_identity: + cdef object iter_seq + cdef object seen + + +cpdef object unique(object seq, object key=*) + + +cpdef object isiterable(object x) + + +cpdef object isdistinct(object seq) + + +cpdef object take(Py_ssize_t n, object seq) + + +cpdef object tail(Py_ssize_t n, object seq) + + +cpdef object drop(Py_ssize_t n, object seq) + + +cpdef object take_nth(Py_ssize_t n, object seq) + + +cpdef object first(object seq) + + +cpdef object second(object seq) + + +cpdef object nth(Py_ssize_t n, object seq) + + +cpdef object last(object seq) + + +cpdef object rest(object seq) + + +cpdef object get(object ind, object seq, object default=*) + + +cpdef object cons(object el, object seq) + + +cpdef object concat(object seqs) + + +cpdef object mapcat(object func, object seqs) + + +cdef class interpose: + cdef object el + cdef object iter_seq + cdef object val + cdef bint do_el + + +cpdef dict frequencies(object seq) + + +cpdef dict reduceby(object key, object binop, object seq, object init=*) + + +cdef class iterate: + cdef object func + cdef object x + cdef object val + + +cdef class sliding_window: + cdef object iterseq + cdef tuple prev + cdef Py_ssize_t n + + +cpdef object partition(Py_ssize_t n, object seq, object pad=*) + + +cdef class partition_all: + cdef Py_ssize_t n + cdef object iterseq + + +cpdef object count(object seq) + + +cdef class _pluck_index: + cdef object ind + cdef object iterseqs + + +cdef class _pluck_index_default: + cdef object ind + cdef object iterseqs + cdef object default + + +cdef class _pluck_list: + cdef list ind + cdef object iterseqs + cdef Py_ssize_t n + + +cdef class _pluck_list_default: + cdef list ind + cdef object iterseqs + cdef object default + cdef Py_ssize_t n + + +cpdef object pluck(object ind, object seqs, object default=*) + + +cdef class _getter_index: + cdef object ind + + +cdef class _getter_list: + cdef list ind + cdef 
Py_ssize_t n + + +cdef class _getter_null: + pass + + +cpdef object getter(object index) + + +cpdef object join(object leftkey, object leftseq, + object rightkey, object rightseq, + object left_default=*, + object right_default=*) + +cdef class _join: + cdef dict d + cdef list matches + cdef set seen_keys + cdef object leftseq + cdef object rightseq + cdef object _rightkey + cdef object right + cdef object left_default + cdef object right_default + cdef object keys + cdef Py_ssize_t N + cdef Py_ssize_t i + cdef bint is_rightseq_exhausted + + cdef object rightkey(self) + + +cdef class _inner_join(_join): + pass + +cdef class _right_outer_join(_join): + pass + +cdef class _left_outer_join(_join): + pass + +cdef class _outer_join(_join): + pass + + +cdef class _inner_join_key(_inner_join): + pass + +cdef class _inner_join_index(_inner_join): + pass + +cdef class _inner_join_indices(_inner_join): + pass + +cdef class _right_outer_join_key(_right_outer_join): + pass + +cdef class _right_outer_join_index(_right_outer_join): + pass + +cdef class _right_outer_join_indices(_right_outer_join): + pass + +cdef class _left_outer_join_key(_left_outer_join): + pass + +cdef class _left_outer_join_index(_left_outer_join): + pass + +cdef class _left_outer_join_indices(_left_outer_join): + pass + +cdef class _outer_join_key(_outer_join): + pass + +cdef class _outer_join_index(_outer_join): + pass + +cdef class _outer_join_indices(_outer_join): + pass + + +cdef class _diff_key: + cdef Py_ssize_t N + cdef object iters + cdef object key + + +cdef class _diff_identity: + cdef Py_ssize_t N + cdef object iters + + +cdef object c_diff(object seqs, object default=*, object key=*) + + +cpdef object topk(Py_ssize_t k, object seq, object key=*) + + +cpdef object peek(object seq) + + +cpdef object peekn(Py_ssize_t n, object seq) + + +cdef class random_sample: + cdef object iter_seq + cdef object prob + cdef object random_func diff --git a/env/lib/python3.8/site-packages/cytoolz/itertoolz.pyx b/env/lib/python3.8/site-packages/cytoolz/itertoolz.pyx new file mode 100644 index 00000000..0cce8d53 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/itertoolz.pyx @@ -0,0 +1,1810 @@ +from cpython.dict cimport PyDict_GetItem, PyDict_SetItem +from cpython.exc cimport PyErr_Clear, PyErr_GivenExceptionMatches, PyErr_Occurred +from cpython.list cimport PyList_Append, PyList_GET_ITEM, PyList_GET_SIZE +from cpython.object cimport PyObject_RichCompareBool, Py_NE +from cpython.ref cimport PyObject, Py_INCREF, Py_XDECREF +from cpython.sequence cimport PySequence_Check +from cpython.set cimport PySet_Add, PySet_Contains +from cpython.tuple cimport PyTuple_GET_ITEM, PyTuple_GetSlice, PyTuple_New, PyTuple_SET_ITEM + +# Locally defined bindings that differ from `cython.cpython` bindings +from cytoolz.cpython cimport PtrIter_Next, PtrObject_GetItem + +from collections import deque +from heapq import heapify, heappop, heapreplace +from itertools import chain, islice, zip_longest +from operator import itemgetter +from cytoolz.utils import no_default + + +__all__ = ['remove', 'accumulate', 'groupby', 'merge_sorted', 'interleave', + 'unique', 'isiterable', 'isdistinct', 'take', 'drop', 'take_nth', + 'first', 'second', 'nth', 'last', 'get', 'concat', 'concatv', + 'mapcat', 'cons', 'interpose', 'frequencies', 'reduceby', 'iterate', + 'sliding_window', 'partition', 'partition_all', 'count', 'pluck', + 'join', 'tail', 'diff', 'topk', 'peek', 'peekn', 'random_sample'] + + +cpdef object identity(object x): + return x + + +cdef class remove: + 
""" remove(predicate, seq) + + Return those items of sequence for which predicate(item) is False + + >>> def iseven(x): + ... return x % 2 == 0 + >>> list(remove(iseven, [1, 2, 3, 4])) + [1, 3] + """ + def __cinit__(self, object predicate, object seq): + self.predicate = predicate + self.iter_seq = iter(seq) + + def __iter__(self): + return self + + def __next__(self): + cdef object val + val = next(self.iter_seq) + while self.predicate(val): + val = next(self.iter_seq) + return val + + +cdef class accumulate: + """ accumulate(binop, seq, initial='__no__default__') + + Repeatedly apply binary function to a sequence, accumulating results + + >>> from operator import add, mul + >>> list(accumulate(add, [1, 2, 3, 4, 5])) + [1, 3, 6, 10, 15] + >>> list(accumulate(mul, [1, 2, 3, 4, 5])) + [1, 2, 6, 24, 120] + + Accumulate is similar to ``reduce`` and is good for making functions like + cumulative sum: + + >>> from functools import partial, reduce + >>> sum = partial(reduce, add) + >>> cumsum = partial(accumulate, add) + + Accumulate also takes an optional argument that will be used as the first + value. This is similar to reduce. + + >>> list(accumulate(add, [1, 2, 3], -1)) + [-1, 0, 2, 5] + >>> list(accumulate(add, [], 1)) + [1] + + See Also: + itertools.accumulate : In standard itertools for Python 3.2+ + """ + def __cinit__(self, object binop, object seq, object initial='__no__default__'): + self.binop = binop + self.iter_seq = iter(seq) + self.result = self # sentinel + self.initial = initial + + def __iter__(self): + return self + + def __next__(self): + if self.result is self: + if self.initial != no_default: + self.result = self.initial + else: + self.result = next(self.iter_seq) + else: + self.result = self.binop(self.result, next(self.iter_seq)) + return self.result + + +cdef inline object _groupby_core(dict d, object key, object item): + cdef PyObject *obj = PyDict_GetItem(d, key) + if obj is NULL: + val = [] + PyList_Append(val, item) + PyDict_SetItem(d, key, val) + else: + PyList_Append(obj, item) + + +cpdef dict groupby(object key, object seq): + """ + Group a collection by a key function + + >>> names = ['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank'] + >>> groupby(len, names) # doctest: +SKIP + {3: ['Bob', 'Dan'], 5: ['Alice', 'Edith', 'Frank'], 7: ['Charlie']} + + >>> iseven = lambda x: x % 2 == 0 + >>> groupby(iseven, [1, 2, 3, 4, 5, 6, 7, 8]) # doctest: +SKIP + {False: [1, 3, 5, 7], True: [2, 4, 6, 8]} + + Non-callable keys imply grouping on a member. + + >>> groupby('gender', [{'name': 'Alice', 'gender': 'F'}, + ... {'name': 'Bob', 'gender': 'M'}, + ... 
{'name': 'Charlie', 'gender': 'M'}]) # doctest:+SKIP + {'F': [{'gender': 'F', 'name': 'Alice'}], + 'M': [{'gender': 'M', 'name': 'Bob'}, + {'gender': 'M', 'name': 'Charlie'}]} + + Not to be confused with ``itertools.groupby`` + + See Also: + countby + """ + cdef dict d = {} + cdef object item, keyval + cdef Py_ssize_t i, N + if callable(key): + for item in seq: + keyval = key(item) + _groupby_core(d, keyval, item) + elif isinstance(key, list): + N = PyList_GET_SIZE(key) + for item in seq: + keyval = PyTuple_New(N) + for i in range(N): + val = PyList_GET_ITEM(key, i) + val = item[val] + Py_INCREF(val) + PyTuple_SET_ITEM(keyval, i, val) + _groupby_core(d, keyval, item) + else: + for item in seq: + keyval = item[key] + _groupby_core(d, keyval, item) + return d + + +cdef object _merge_sorted_binary(object seqs): + mid = len(seqs) // 2 + L1 = seqs[:mid] + if len(L1) == 1: + seq1 = iter(L1[0]) + else: + seq1 = _merge_sorted_binary(L1) + L2 = seqs[mid:] + if len(L2) == 1: + seq2 = iter(L2[0]) + else: + seq2 = _merge_sorted_binary(L2) + try: + val2 = next(seq2) + except StopIteration: + return seq1 + return _merge_sorted(seq1, seq2, val2) + + +cdef class _merge_sorted: + def __cinit__(self, seq1, seq2, val2): + self.seq1 = seq1 + self.seq2 = seq2 + self.val1 = None + self.val2 = val2 + self.loop = 0 + + def __iter__(self): + return self + + def __next__(self): + if self.loop == 0: + try: + self.val1 = next(self.seq1) + except StopIteration: + self.loop = 2 + return self.val2 + if self.val2 < self.val1: + self.loop = 1 + return self.val2 + return self.val1 + elif self.loop == 1: + try: + self.val2 = next(self.seq2) + except StopIteration: + self.loop = 3 + return self.val1 + if self.val2 < self.val1: + return self.val2 + self.loop = 0 + return self.val1 + elif self.loop == 2: + return next(self.seq2) + return next(self.seq1) + + +cdef object _merge_sorted_binary_key(object seqs, object key): + mid = len(seqs) // 2 + L1 = seqs[:mid] + if len(L1) == 1: + seq1 = iter(L1[0]) + else: + seq1 = _merge_sorted_binary_key(L1, key) + L2 = seqs[mid:] + if len(L2) == 1: + seq2 = iter(L2[0]) + else: + seq2 = _merge_sorted_binary_key(L2, key) + try: + val2 = next(seq2) + except StopIteration: + return seq1 + return _merge_sorted_key(seq1, seq2, val2, key) + + +cdef class _merge_sorted_key: + def __cinit__(self, seq1, seq2, val2, key): + self.seq1 = seq1 + self.seq2 = seq2 + self.key = key + self.val1 = None + self.key1 = None + self.val2 = val2 + self.key2 = key(val2) + self.loop = 0 + + def __iter__(self): + return self + + def __next__(self): + if self.loop == 0: + try: + self.val1 = next(self.seq1) + except StopIteration: + self.loop = 2 + return self.val2 + self.key1 = self.key(self.val1) + if self.key2 < self.key1: + self.loop = 1 + return self.val2 + return self.val1 + elif self.loop == 1: + try: + self.val2 = next(self.seq2) + except StopIteration: + self.loop = 3 + return self.val1 + self.key2 = self.key(self.val2) + if self.key2 < self.key1: + return self.val2 + self.loop = 0 + return self.val1 + elif self.loop == 2: + return next(self.seq2) + return next(self.seq1) + + +cdef object c_merge_sorted(object seqs, object key=None): + if len(seqs) == 0: + return iter([]) + elif len(seqs) == 1: + return iter(seqs[0]) + elif key is None: + return _merge_sorted_binary(seqs) + return _merge_sorted_binary_key(seqs, key) + + +def merge_sorted(*seqs, **kwargs): + """ + Merge and sort a collection of sorted collections + + This works lazily and only keeps one value from each iterable in memory. 
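+
+    Internally the sequences are combined as a balanced tree of pairwise
+    two-way merges (``_merge_sorted_binary`` above) rather than with a heap.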
+ + >>> list(merge_sorted([1, 3, 5], [2, 4, 6])) + [1, 2, 3, 4, 5, 6] + + >>> ''.join(merge_sorted('abc', 'abc', 'abc')) + 'aaabbbccc' + + The "key" function used to sort the input may be passed as a keyword. + + >>> list(merge_sorted([2, 3], [1, 3], key=lambda x: x // 3)) + [2, 1, 3, 3] + """ + if 'key' in kwargs: + return c_merge_sorted(seqs, kwargs['key']) + return c_merge_sorted(seqs) + + +cdef class interleave: + """ interleave(seqs) + + Interleave a sequence of sequences + + >>> list(interleave([[1, 2], [3, 4]])) + [1, 3, 2, 4] + + >>> ''.join(interleave(('ABC', 'XY'))) + 'AXBYC' + + Both the individual sequences and the sequence of sequences may be infinite + + Returns a lazy iterator + """ + def __cinit__(self, seqs): + self.iters = [iter(seq) for seq in seqs] + self.newiters = [] + self.i = 0 + self.n = PyList_GET_SIZE(self.iters) + + def __iter__(self): + return self + + def __next__(self): + # This implementation is similar to what is done in `toolz` in that we + # construct a new list of iterators, `self.newiters`, when a value is + # successfully retrieved from an iterator from `self.iters`. + cdef PyObject *obj + cdef object val + + if self.i == self.n: + self.n = PyList_GET_SIZE(self.newiters) + self.i = 0 + if self.n == 0: + raise StopIteration + self.iters = self.newiters + self.newiters = [] + val = PyList_GET_ITEM(self.iters, self.i) + self.i += 1 + obj = PtrIter_Next(val) + + # TODO: optimization opportunity. Previously, it was possible to + # continue on given exceptions, `self.pass_exceptions`, which is + # why this code is structured this way. Time to clean up? + while obj is NULL: + obj = PyErr_Occurred() + if obj is not NULL: + val = obj + PyErr_Clear() + raise val + + if self.i == self.n: + self.n = PyList_GET_SIZE(self.newiters) + self.i = 0 + if self.n == 0: + raise StopIteration + self.iters = self.newiters + self.newiters = [] + val = PyList_GET_ITEM(self.iters, self.i) + self.i += 1 + obj = PtrIter_Next(val) + + PyList_Append(self.newiters, val) + val = obj + Py_XDECREF(obj) + return val + + +cdef class _unique_key: + def __cinit__(self, object seq, object key): + self.iter_seq = iter(seq) + self.key = key + self.seen = set() + + def __iter__(self): + return self + + def __next__(self): + cdef object item, tag + item = next(self.iter_seq) + tag = self.key(item) + while PySet_Contains(self.seen, tag): + item = next(self.iter_seq) + tag = self.key(item) + PySet_Add(self.seen, tag) + return item + + +cdef class _unique_identity: + def __cinit__(self, object seq): + self.iter_seq = iter(seq) + self.seen = set() + + def __iter__(self): + return self + + def __next__(self): + cdef object item + item = next(self.iter_seq) + while PySet_Contains(self.seen, item): + item = next(self.iter_seq) + PySet_Add(self.seen, item) + return item + + +cpdef object unique(object seq, object key=None): + """ + Return only unique elements of a sequence + + >>> tuple(unique((1, 2, 3))) + (1, 2, 3) + >>> tuple(unique((1, 2, 1, 3))) + (1, 2, 3) + + Uniqueness can be defined by key keyword + + >>> tuple(unique(['cat', 'mouse', 'dog', 'hen'], key=len)) + ('cat', 'mouse') + """ + if key is None: + return _unique_identity(seq) + else: + return _unique_key(seq, key) + + +cpdef object isiterable(object x): + """ + Is x iterable? 
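+    This simply attempts ``iter(x)`` and treats a ``TypeError`` as a
+    negative answer.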
+ + >>> isiterable([1, 2, 3]) + True + >>> isiterable('abc') + True + >>> isiterable(5) + False + """ + try: + iter(x) + return True + except TypeError: + pass + return False + + +cpdef object isdistinct(object seq): + """ + All values in sequence are distinct + + >>> isdistinct([1, 2, 3]) + True + >>> isdistinct([1, 2, 1]) + False + + >>> isdistinct("Hello") + False + >>> isdistinct("World") + True + """ + if iter(seq) is seq: + seen = set() + for item in seq: + if PySet_Contains(seen, item): + return False + seen.add(item) + return True + else: + return len(seq) == len(set(seq)) + + +cpdef object take(Py_ssize_t n, object seq): + """ + The first n elements of a sequence + + >>> list(take(2, [10, 20, 30, 40, 50])) + [10, 20] + + See Also: + drop + tail + """ + return islice(seq, n) + + +cpdef object tail(Py_ssize_t n, object seq): + """ + The last n elements of a sequence + + >>> tail(2, [10, 20, 30, 40, 50]) + [40, 50] + + See Also: + drop + take + """ + if PySequence_Check(seq): + return seq[-n:] + return tuple(deque(seq, n)) + + +cpdef object drop(Py_ssize_t n, object seq): + """ + The sequence following the first n elements + + >>> list(drop(2, [10, 20, 30, 40, 50])) + [30, 40, 50] + + See Also: + take + tail + """ + if n < 0: + raise ValueError('n argument for drop() must be non-negative') + cdef Py_ssize_t i + cdef object iter_seq + iter_seq = iter(seq) + try: + for i in range(n): + next(iter_seq) + except StopIteration: + pass + return iter_seq + + +cpdef object take_nth(Py_ssize_t n, object seq): + """ + Every nth item in seq + + >>> list(take_nth(2, [10, 20, 30, 40, 50])) + [10, 30, 50] + """ + return islice(seq, 0, None, n) + + +cpdef object first(object seq): + """ + The first element in a sequence + + >>> first('ABC') + 'A' + """ + return next(iter(seq)) + + +cpdef object second(object seq): + """ + The second element in a sequence + + >>> second('ABC') + 'B' + """ + seq = iter(seq) + next(seq) + return next(seq) + + +cpdef object nth(Py_ssize_t n, object seq): + """ + The nth element in a sequence + + >>> nth(1, 'ABC') + 'B' + """ + if PySequence_Check(seq): + return seq[n] + if n < 0: + raise ValueError('"n" must be positive when indexing an iterator') + seq = iter(seq) + while n > 0: + n -= 1 + next(seq) + return next(seq) + + +cpdef object last(object seq): + """ + The last element in a sequence + + >>> last('ABC') + 'C' + """ + cdef object val + if PySequence_Check(seq): + return seq[-1] + val = no_default + for val in seq: + pass + if val == no_default: + raise IndexError + return val + + +cpdef object rest(object seq): + seq = iter(seq) + next(seq) + return seq + + +cdef tuple _get_exceptions = (IndexError, KeyError, TypeError) +cdef tuple _get_list_exc = (IndexError, KeyError) + + +cpdef object get(object ind, object seq, object default='__no__default__'): + """ + Get element in a sequence or dict + + Provides standard indexing + + >>> get(1, 'ABC') # Same as 'ABC'[1] + 'B' + + Pass a list to get multiple values + + >>> get([1, 2], 'ABC') # ('ABC'[1], 'ABC'[2]) + ('B', 'C') + + Works on any value that supports indexing/getitem + For example here we see that it works with dictionaries + + >>> phonebook = {'Alice': '555-1234', + ... 'Bob': '555-5678', + ... 
'Charlie':'555-9999'} + >>> get('Alice', phonebook) + '555-1234' + + >>> get(['Alice', 'Bob'], phonebook) + ('555-1234', '555-5678') + + Provide a default for missing values + + >>> get(['Alice', 'Dennis'], phonebook, None) + ('555-1234', None) + + See Also: + pluck + """ + cdef Py_ssize_t i + cdef object val + cdef tuple result + cdef PyObject *obj + if isinstance(ind, list): + i = PyList_GET_SIZE(ind) + result = PyTuple_New(i) + # List of indices, no default + if default == no_default: + for i, val in enumerate(ind): + val = seq[val] + Py_INCREF(val) + PyTuple_SET_ITEM(result, i, val) + return result + + # List of indices with default + for i, val in enumerate(ind): + obj = PtrObject_GetItem(seq, val) + if obj is NULL: + val = PyErr_Occurred() + PyErr_Clear() + if not PyErr_GivenExceptionMatches(val, _get_list_exc): + raise val + Py_INCREF(default) + PyTuple_SET_ITEM(result, i, default) + else: + val = obj + PyTuple_SET_ITEM(result, i, val) + return result + + obj = PtrObject_GetItem(seq, ind) + if obj is NULL: + val = PyErr_Occurred() + PyErr_Clear() + if default == no_default: + raise val + if PyErr_GivenExceptionMatches(val, _get_exceptions): + return default + raise val + val = obj + Py_XDECREF(obj) + return val + + +cpdef object concat(object seqs): + """ + Concatenate zero or more iterables, any of which may be infinite. + + An infinite sequence will prevent the rest of the arguments from + being included. + + We use chain.from_iterable rather than ``chain(*seqs)`` so that seqs + can be a generator. + + >>> list(concat([[], [1], [2, 3]])) + [1, 2, 3] + + See also: + itertools.chain.from_iterable equivalent + """ + return chain.from_iterable(seqs) + + +def concatv(*seqs): + """ + Variadic version of concat + + >>> list(concatv([], ["a"], ["b", "c"])) + ['a', 'b', 'c'] + + See also: + itertools.chain + """ + return chain.from_iterable(seqs) + + +cpdef object mapcat(object func, object seqs): + """ + Apply func to each sequence in seqs, concatenating results. + + >>> list(mapcat(lambda s: [c.upper() for c in s], + ... [["a", "b"], ["c", "d", "e"]])) + ['A', 'B', 'C', 'D', 'E'] + """ + return concat(map(func, seqs)) + + +cpdef object cons(object el, object seq): + """ + Add el to beginning of (possibly infinite) sequence seq. 
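+
+    Equivalent to ``itertools.chain((el,), seq)``.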
+ + >>> list(cons(1, [2, 3])) + [1, 2, 3] + """ + return chain((el,), seq) + + +cdef class interpose: + """ interpose(el, seq) + + Introduce element between each pair of elements in seq + + >>> list(interpose("a", [1, 2, 3])) + [1, 'a', 2, 'a', 3] + """ + def __cinit__(self, object el, object seq): + self.el = el + self.iter_seq = iter(seq) + self.do_el = False + try: + self.val = next(self.iter_seq) + except StopIteration: + self.do_el = True + + def __iter__(self): + return self + + def __next__(self): + if self.do_el: + self.val = next(self.iter_seq) + self.do_el = False + return self.el + else: + self.do_el = True + return self.val + + +cpdef dict frequencies(object seq): + """ + Find number of occurrences of each value in seq + + >>> frequencies(['cat', 'cat', 'ox', 'pig', 'pig', 'cat']) #doctest: +SKIP + {'cat': 3, 'ox': 1, 'pig': 2} + + See Also: + countby + groupby + """ + cdef dict d = {} + cdef PyObject *obj + cdef Py_ssize_t val + for item in seq: + obj = PyDict_GetItem(d, item) + if obj is NULL: + d[item] = 1 + else: + val = obj + d[item] = val + 1 + return d + + +cdef inline object _reduceby_core(dict d, object key, object item, object binop, + object init, bint skip_init, bint call_init): + cdef PyObject *obj = PyDict_GetItem(d, key) + if obj is not NULL: + PyDict_SetItem(d, key, binop(obj, item)) + elif skip_init: + PyDict_SetItem(d, key, item) + elif call_init: + PyDict_SetItem(d, key, binop(init(), item)) + else: + PyDict_SetItem(d, key, binop(init, item)) + + +cpdef dict reduceby(object key, object binop, object seq, object init='__no__default__'): + """ + Perform a simultaneous groupby and reduction + + The computation: + + >>> result = reduceby(key, binop, seq, init) # doctest: +SKIP + + is equivalent to the following: + + >>> def reduction(group): # doctest: +SKIP + ... return reduce(binop, group, init) # doctest: +SKIP + + >>> groups = groupby(key, seq) # doctest: +SKIP + >>> result = valmap(reduction, groups) # doctest: +SKIP + + But the former does not build the intermediate groups, allowing it to + operate in much less space. This makes it suitable for larger datasets + that do not fit comfortably in memory + + The ``init`` keyword argument is the default initialization of the + reduction. This can be either a constant value like ``0`` or a callable + like ``lambda : 0`` as might be used in ``defaultdict``. + + Simple Examples + --------------- + + >>> from operator import add, mul + >>> iseven = lambda x: x % 2 == 0 + + >>> data = [1, 2, 3, 4, 5] + + >>> reduceby(iseven, add, data) # doctest: +SKIP + {False: 9, True: 6} + + >>> reduceby(iseven, mul, data) # doctest: +SKIP + {False: 15, True: 8} + + Complex Example + --------------- + + >>> projects = [{'name': 'build roads', 'state': 'CA', 'cost': 1000000}, + ... {'name': 'fight crime', 'state': 'IL', 'cost': 100000}, + ... {'name': 'help farmers', 'state': 'IL', 'cost': 2000000}, + ... {'name': 'help farmers', 'state': 'CA', 'cost': 200000}] + + >>> reduceby('state', # doctest: +SKIP + ... lambda acc, x: acc + x['cost'], + ... projects, 0) + {'CA': 1200000, 'IL': 2100000} + + Example Using ``init`` + ---------------------- + + >>> def set_add(s, i): + ... s.add(i) + ... 
return s + + >>> reduceby(iseven, set_add, [1, 2, 3, 4, 1, 2, 3], set) # doctest: +SKIP + {True: set([2, 4]), + False: set([1, 3])} + """ + cdef dict d = {} + cdef object item, keyval + cdef Py_ssize_t i, N + cdef bint skip_init = init == no_default + cdef bint call_init = callable(init) + if callable(key): + for item in seq: + keyval = key(item) + _reduceby_core(d, keyval, item, binop, init, skip_init, call_init) + elif isinstance(key, list): + N = PyList_GET_SIZE(key) + for item in seq: + keyval = PyTuple_New(N) + for i in range(N): + val = PyList_GET_ITEM(key, i) + val = item[val] + Py_INCREF(val) + PyTuple_SET_ITEM(keyval, i, val) + _reduceby_core(d, keyval, item, binop, init, skip_init, call_init) + else: + for item in seq: + keyval = item[key] + _reduceby_core(d, keyval, item, binop, init, skip_init, call_init) + return d + + +cdef class iterate: + """ iterate(func, x) + + Repeatedly apply a function func onto an original input + + Yields x, then func(x), then func(func(x)), then func(func(func(x))), etc.. + + >>> def inc(x): return x + 1 + >>> counter = iterate(inc, 0) + >>> next(counter) + 0 + >>> next(counter) + 1 + >>> next(counter) + 2 + + >>> double = lambda x: x * 2 + >>> powers_of_two = iterate(double, 1) + >>> next(powers_of_two) + 1 + >>> next(powers_of_two) + 2 + >>> next(powers_of_two) + 4 + >>> next(powers_of_two) + 8 + """ + def __cinit__(self, object func, object x): + self.func = func + self.x = x + self.val = self # sentinel + + def __iter__(self): + return self + + def __next__(self): + if self.val is self: + self.val = self.x + else: + self.x = self.func(self.x) + return self.x + + +cdef class sliding_window: + """ sliding_window(n, seq) + + A sequence of overlapping subsequences + + >>> list(sliding_window(2, [1, 2, 3, 4])) + [(1, 2), (2, 3), (3, 4)] + + This function creates a sliding window suitable for transformations like + sliding means / smoothing + + >>> mean = lambda seq: float(sum(seq)) / len(seq) + >>> list(map(mean, sliding_window(2, [1, 2, 3, 4]))) + [1.5, 2.5, 3.5] + """ + def __cinit__(self, Py_ssize_t n, object seq): + cdef Py_ssize_t i + self.iterseq = iter(seq) + self.prev = PyTuple_New(n) + Py_INCREF(None) + PyTuple_SET_ITEM(self.prev, 0, None) + for i, seq in enumerate(islice(self.iterseq, n-1), 1): + Py_INCREF(seq) + PyTuple_SET_ITEM(self.prev, i, seq) + self.n = n + + def __iter__(self): + return self + + def __next__(self): + cdef tuple current + cdef object item + cdef Py_ssize_t i + + item = next(self.iterseq) + current = PyTuple_New(self.n) + Py_INCREF(item) + PyTuple_SET_ITEM(current, self.n-1, item) + for i in range(1, self.n): + item = self.prev[i] + Py_INCREF(item) + PyTuple_SET_ITEM(current, i-1, item) + self.prev = current + return current + + +no_pad = '__no__pad__' + + +cpdef object partition(Py_ssize_t n, object seq, object pad='__no__pad__'): + """ + Partition sequence into tuples of length n + + >>> list(partition(2, [1, 2, 3, 4])) + [(1, 2), (3, 4)] + + If the length of ``seq`` is not evenly divisible by ``n``, the final tuple + is dropped if ``pad`` is not specified, or filled to length ``n`` by pad: + + >>> list(partition(2, [1, 2, 3, 4, 5])) + [(1, 2), (3, 4)] + + >>> list(partition(2, [1, 2, 3, 4, 5], pad=None)) + [(1, 2), (3, 4), (5, None)] + + See Also: + partition_all + """ + args = [iter(seq)] * n + if pad == '__no__pad__': + return zip(*args) + else: + return zip_longest(*args, fillvalue=pad) + + +cdef class partition_all: + """ partition_all(n, seq) + + Partition all elements of sequence into tuples of length at most 
n + + The final tuple may be shorter to accommodate extra elements. + + >>> list(partition_all(2, [1, 2, 3, 4])) + [(1, 2), (3, 4)] + + >>> list(partition_all(2, [1, 2, 3, 4, 5])) + [(1, 2), (3, 4), (5,)] + + See Also: + partition + """ + def __cinit__(self, Py_ssize_t n, object seq): + self.n = n + self.iterseq = iter(seq) + + def __iter__(self): + return self + + def __next__(self): + cdef tuple result + cdef object item + cdef Py_ssize_t i = 0 + result = PyTuple_New(self.n) + for item in self.iterseq: + Py_INCREF(item) + PyTuple_SET_ITEM(result, i, item) + i += 1 + if i == self.n: + return result + # iterable exhausted before filling the tuple + if i == 0: + raise StopIteration + for j in range(i, self.n): + Py_INCREF(None) + PyTuple_SET_ITEM(result, j, None) + return PyTuple_GetSlice(result, 0, i) + + +cpdef object count(object seq): + """ + Count the number of items in seq + + Like the builtin ``len`` but works on lazy sequences. + + Not to be confused with ``itertools.count`` + + See also: + len + """ + if iter(seq) is not seq and hasattr(seq, '__len__'): + return len(seq) + cdef Py_ssize_t i = 0 + for _ in seq: + i += 1 + return i + + +cdef class _pluck_index: + def __cinit__(self, object ind, object seqs): + self.ind = ind + self.iterseqs = iter(seqs) + + def __iter__(self): + return self + + def __next__(self): + val = next(self.iterseqs) + return val[self.ind] + + +cdef class _pluck_index_default: + def __cinit__(self, object ind, object seqs, object default): + self.ind = ind + self.iterseqs = iter(seqs) + self.default = default + + def __iter__(self): + return self + + def __next__(self): + cdef PyObject *obj + cdef object val + val = next(self.iterseqs) + obj = PtrObject_GetItem(val, self.ind) + if obj is NULL: + val = PyErr_Occurred() + PyErr_Clear() + if not PyErr_GivenExceptionMatches(val, _get_exceptions): + raise val + return self.default + val = obj + Py_XDECREF(obj) + return val + + +cdef class _pluck_list: + def __cinit__(self, list ind not None, object seqs): + self.ind = ind + self.iterseqs = iter(seqs) + self.n = len(ind) + + def __iter__(self): + return self + + def __next__(self): + cdef Py_ssize_t i + cdef tuple result + cdef object val, seq + seq = next(self.iterseqs) + result = PyTuple_New(self.n) + for i, val in enumerate(self.ind): + val = seq[val] + Py_INCREF(val) + PyTuple_SET_ITEM(result, i, val) + return result + + +cdef class _pluck_list_default: + def __cinit__(self, list ind not None, object seqs, object default): + self.ind = ind + self.iterseqs = iter(seqs) + self.default = default + self.n = len(ind) + + def __iter__(self): + return self + + def __next__(self): + cdef Py_ssize_t i + cdef object val, seq + cdef tuple result + seq = next(self.iterseqs) + result = PyTuple_New(self.n) + for i, val in enumerate(self.ind): + obj = PtrObject_GetItem(seq, val) + if obj is NULL: + val = PyErr_Occurred() + PyErr_Clear() + if not PyErr_GivenExceptionMatches(val, _get_list_exc): + raise val + Py_INCREF(self.default) + PyTuple_SET_ITEM(result, i, self.default) + else: + val = obj + PyTuple_SET_ITEM(result, i, val) + return result + + +cpdef object pluck(object ind, object seqs, object default='__no__default__'): + """ + plucks an element or several elements from each item in a sequence. + + ``pluck`` maps ``itertoolz.get`` over a sequence and returns one or more + elements of each item in the sequence. + + This is equivalent to running `map(curried.get(ind), seqs)` + + ``ind`` can be either a single string/index or a list of strings/indices. 
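+    With a list of indices, each yielded item is a tuple of the
+    corresponding elements.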
+ ``seqs`` should be sequence containing sequences or dicts. + + e.g. + + >>> data = [{'id': 1, 'name': 'Cheese'}, {'id': 2, 'name': 'Pies'}] + >>> list(pluck('name', data)) + ['Cheese', 'Pies'] + >>> list(pluck([0, 1], [[1, 2, 3], [4, 5, 7]])) + [(1, 2), (4, 5)] + + See Also: + get + map + """ + if isinstance(ind, list): + if default != no_default: + return _pluck_list_default(ind, seqs, default) + if PyList_GET_SIZE(ind) < 10: + return _pluck_list(ind, seqs) + return map(itemgetter(*ind), seqs) + if default == no_default: + return _pluck_index(ind, seqs) + return _pluck_index_default(ind, seqs, default) + + +cdef class _getter_index: + def __cinit__(self, object ind): + self.ind = ind + + def __call__(self, object seq): + return seq[self.ind] + + +cdef class _getter_list: + def __cinit__(self, list ind not None): + self.ind = ind + self.n = len(ind) + + def __call__(self, object seq): + cdef Py_ssize_t i + cdef tuple result + cdef object val + result = PyTuple_New(self.n) + for i, val in enumerate(self.ind): + val = seq[val] + Py_INCREF(val) + PyTuple_SET_ITEM(result, i, val) + return result + + +cdef class _getter_null: + def __call__(self, object seq): + return () + + +# TODO: benchmark getters (and compare against itemgetter) +cpdef object getter(object index): + if isinstance(index, list): + if PyList_GET_SIZE(index) == 0: + return _getter_null() + elif PyList_GET_SIZE(index) < 10: + return _getter_list(index) + return itemgetter(*index) + return _getter_index(index) + + +cpdef object join(object leftkey, object leftseq, + object rightkey, object rightseq, + object left_default='__no__default__', + object right_default='__no__default__'): + """ + Join two sequences on common attributes + + This is a semi-streaming operation. The LEFT sequence is fully evaluated + and placed into memory. The RIGHT sequence is evaluated lazily and so can + be arbitrarily large. + (Note: If right_default is defined, then unique keys of rightseq + will also be stored in memory.) + + >>> friends = [('Alice', 'Edith'), + ... ('Alice', 'Zhao'), + ... ('Edith', 'Alice'), + ... ('Zhao', 'Alice'), + ... ('Zhao', 'Edith')] + + >>> cities = [('Alice', 'NYC'), + ... ('Alice', 'Chicago'), + ... ('Dan', 'Syndey'), + ... ('Edith', 'Paris'), + ... ('Edith', 'Berlin'), + ... ('Zhao', 'Shanghai')] + + >>> # Vacation opportunities + >>> # In what cities do people have friends? + >>> result = join(second, friends, + ... first, cities) + >>> for ((a, b), (c, d)) in sorted(unique(result)): + ... print((a, d)) + ('Alice', 'Berlin') + ('Alice', 'Paris') + ('Alice', 'Shanghai') + ('Edith', 'Chicago') + ('Edith', 'NYC') + ('Zhao', 'Chicago') + ('Zhao', 'NYC') + ('Zhao', 'Berlin') + ('Zhao', 'Paris') + + Specify outer joins with keyword arguments ``left_default`` and/or + ``right_default``. Here is a full outer join in which unmatched elements + are paired with None. + + >>> identity = lambda x: x + >>> list(join(identity, [1, 2, 3], + ... identity, [2, 3, 4], + ... left_default=None, right_default=None)) + [(2, 2), (3, 3), (None, 4), (1, None)] + + Usually the key arguments are callables to be applied to the sequences. If + the keys are not obviously callable then it is assumed that indexing was + intended, e.g. the following is a legal change. + The join is implemented as a hash join and the keys of leftseq must be + hashable. Additionally, if right_default is defined, then keys of rightseq + must also be hashable. 
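+
+    Internally, a specialized iterator class is selected for each
+    combination of join type (inner, left outer, right outer, outer) and
+    key form (callable, single index, or list of indices).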
+ + >>> # result = join(second, friends, first, cities) + >>> result = join(1, friends, 0, cities) # doctest: +SKIP + """ + if left_default == no_default and right_default == no_default: + if callable(rightkey): + return _inner_join_key(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + elif isinstance(rightkey, list): + return _inner_join_indices(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + else: + return _inner_join_index(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + elif left_default != no_default and right_default == no_default: + if callable(rightkey): + return _right_outer_join_key(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + elif isinstance(rightkey, list): + return _right_outer_join_indices(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + else: + return _right_outer_join_index(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + elif left_default == no_default and right_default != no_default: + if callable(rightkey): + return _left_outer_join_key(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + elif isinstance(rightkey, list): + return _left_outer_join_indices(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + else: + return _left_outer_join_index(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + else: + if callable(rightkey): + return _outer_join_key(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + elif isinstance(rightkey, list): + return _outer_join_indices(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + else: + return _outer_join_index(leftkey, leftseq, rightkey, rightseq, + left_default, right_default) + +cdef class _join: + def __cinit__(self, + object leftkey, object leftseq, + object rightkey, object rightseq, + object left_default=no_default, + object right_default=no_default): + self.left_default = left_default + self.right_default = right_default + + self._rightkey = rightkey + self.rightseq = iter(rightseq) + if isinstance(rightkey, list): + self.N = len(rightkey) + + self.d = groupby(leftkey, leftseq) + self.seen_keys = set() + self.matches = [] + self.right = None + + self.is_rightseq_exhausted = False + + def __iter__(self): + return self + + cdef object rightkey(self): + pass + + +cdef class _right_outer_join(_join): + def __next__(self): + cdef PyObject *obj + if self.i == PyList_GET_SIZE(self.matches): + self.right = next(self.rightseq) + key = self.rightkey() + obj = PyDict_GetItem(self.d, key) + if obj is NULL: + return (self.left_default, self.right) + self.matches = obj + self.i = 0 + match = PyList_GET_ITEM(self.matches, self.i) # skip error checking + self.i += 1 + return (match, self.right) + + +cdef class _right_outer_join_key(_right_outer_join): + cdef object rightkey(self): + return self._rightkey(self.right) + + +cdef class _right_outer_join_index(_right_outer_join): + cdef object rightkey(self): + return self.right[self._rightkey] + + +cdef class _right_outer_join_indices(_right_outer_join): + cdef object rightkey(self): + keyval = PyTuple_New(self.N) + for i in range(self.N): + val = PyList_GET_ITEM(self._rightkey, i) + val = self.right[val] + Py_INCREF(val) + PyTuple_SET_ITEM(keyval, i, val) + return keyval + + +cdef class _outer_join(_join): + def __next__(self): + cdef PyObject *obj + if not self.is_rightseq_exhausted: + if self.i == PyList_GET_SIZE(self.matches): + try: + self.right = next(self.rightseq) + 
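+                    # StopIteration (handled just below) flips this iterator
+                    # into its second phase: emitting left-side groups whose
+                    # keys were never seen on the right.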
except StopIteration: + self.is_rightseq_exhausted = True + self.keys = iter(self.d) + return next(self) + key = self.rightkey() + PySet_Add(self.seen_keys, key) + obj = PyDict_GetItem(self.d, key) + if obj is NULL: + return (self.left_default, self.right) + self.matches = obj + self.i = 0 + match = PyList_GET_ITEM(self.matches, self.i) # skip error checking + self.i += 1 + return (match, self.right) + + else: + if self.i == PyList_GET_SIZE(self.matches): + key = next(self.keys) + while key in self.seen_keys: + key = next(self.keys) + obj = PyDict_GetItem(self.d, key) + self.matches = obj + self.i = 0 + match = PyList_GET_ITEM(self.matches, self.i) # skip error checking + self.i += 1 + return (match, self.right_default) + + +cdef class _outer_join_key(_outer_join): + cdef object rightkey(self): + return self._rightkey(self.right) + + +cdef class _outer_join_index(_outer_join): + cdef object rightkey(self): + return self.right[self._rightkey] + + +cdef class _outer_join_indices(_outer_join): + cdef object rightkey(self): + keyval = PyTuple_New(self.N) + for i in range(self.N): + val = PyList_GET_ITEM(self._rightkey, i) + val = self.right[val] + Py_INCREF(val) + PyTuple_SET_ITEM(keyval, i, val) + return keyval + + +cdef class _left_outer_join(_join): + def __next__(self): + cdef PyObject *obj + if not self.is_rightseq_exhausted: + if self.i == PyList_GET_SIZE(self.matches): + obj = NULL + while obj is NULL: + try: + self.right = next(self.rightseq) + except StopIteration: + self.is_rightseq_exhausted = True + self.keys = iter(self.d) + return next(self) + key = self.rightkey() + PySet_Add(self.seen_keys, key) + obj = PyDict_GetItem(self.d, key) + self.matches = obj + self.i = 0 + match = PyList_GET_ITEM(self.matches, self.i) # skip error checking + self.i += 1 + return (match, self.right) + + else: + if self.i == PyList_GET_SIZE(self.matches): + key = next(self.keys) + while key in self.seen_keys: + key = next(self.keys) + obj = PyDict_GetItem(self.d, key) + self.matches = obj + self.i = 0 + match = PyList_GET_ITEM(self.matches, self.i) # skip error checking + self.i += 1 + return (match, self.right_default) + + +cdef class _left_outer_join_key(_left_outer_join): + cdef object rightkey(self): + return self._rightkey(self.right) + + +cdef class _left_outer_join_index(_left_outer_join): + cdef object rightkey(self): + return self.right[self._rightkey] + + +cdef class _left_outer_join_indices(_left_outer_join): + cdef object rightkey(self): + keyval = PyTuple_New(self.N) + for i in range(self.N): + val = PyList_GET_ITEM(self._rightkey, i) + val = self.right[val] + Py_INCREF(val) + PyTuple_SET_ITEM(keyval, i, val) + return keyval + + +cdef class _inner_join(_join): + def __next__(self): + cdef PyObject *obj = NULL + if self.i == PyList_GET_SIZE(self.matches): + while obj is NULL: + self.right = next(self.rightseq) + key = self.rightkey() + obj = PyDict_GetItem(self.d, key) + self.matches = obj + self.i = 0 + match = PyList_GET_ITEM(self.matches, self.i) # skip error checking + self.i += 1 + return (match, self.right) + + +cdef class _inner_join_key(_inner_join): + cdef object rightkey(self): + return self._rightkey(self.right) + + +cdef class _inner_join_index(_inner_join): + cdef object rightkey(self): + return self.right[self._rightkey] + + +cdef class _inner_join_indices(_inner_join): + cdef object rightkey(self): + keyval = PyTuple_New(self.N) + for i in range(self.N): + val = PyList_GET_ITEM(self._rightkey, i) + val = self.right[val] + Py_INCREF(val) + PyTuple_SET_ITEM(keyval, i, val) + 
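+        # the assembled tuple of extracted fields is the composite join key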
return keyval + + +cdef class _diff_key: + def __cinit__(self, object seqs, object key, object default=no_default): + self.N = len(seqs) + if self.N < 2: + raise TypeError('Too few sequences given (min 2 required)') + if default == no_default: + self.iters = zip(*seqs) + else: + self.iters = zip_longest(*seqs, fillvalue=default) + self.key = key + + def __iter__(self): + return self + + def __next__(self): + cdef object val, val2, items + cdef Py_ssize_t i + while True: + items = next(self.iters) + val = self.key(PyTuple_GET_ITEM(items, 0)) + for i in range(1, self.N): + val2 = self.key(PyTuple_GET_ITEM(items, i)) + if PyObject_RichCompareBool(val, val2, Py_NE): + return items + +cdef class _diff_identity: + def __cinit__(self, object seqs, object default=no_default): + self.N = len(seqs) + if self.N < 2: + raise TypeError('Too few sequences given (min 2 required)') + if default == no_default: + self.iters = zip(*seqs) + else: + self.iters = zip_longest(*seqs, fillvalue=default) + + def __iter__(self): + return self + + def __next__(self): + cdef object val, val2, items + cdef Py_ssize_t i + while True: + items = next(self.iters) + val = PyTuple_GET_ITEM(items, 0) + for i in range(1, self.N): + val2 = PyTuple_GET_ITEM(items, i) + if PyObject_RichCompareBool(val, val2, Py_NE): + return items + + +cdef object c_diff(object seqs, object default=no_default, object key=None): + if key is None: + return _diff_identity(seqs, default=default) + else: + return _diff_key(seqs, key, default=default) + + +def diff(*seqs, **kwargs): + """ + Return those items that differ between sequences + + >>> list(diff([1, 2, 3], [1, 2, 10, 100])) + [(3, 10)] + + Shorter sequences may be padded with a ``default`` value: + + >>> list(diff([1, 2, 3], [1, 2, 10, 100], default=None)) + [(3, 10), (None, 100)] + + A ``key`` function may also be applied to each item to use during + comparisons: + + >>> list(diff(['apples', 'bananas'], ['Apples', 'Oranges'], key=str.lower)) + [('bananas', 'Oranges')] + """ + N = len(seqs) + if N == 1 and isinstance(seqs[0], list): + seqs = seqs[0] + default = kwargs.get('default', no_default) + key = kwargs.get('key') + return c_diff(seqs, default=default, key=key) + + +cpdef object topk(Py_ssize_t k, object seq, object key=None): + """ + Find the k largest elements of a sequence + + Operates lazily in ``n*log(k)`` time + + >>> topk(2, [1, 100, 10, 1000]) + (1000, 100) + + Use a key function to change sorted order + + >>> topk(2, ['Alice', 'Bob', 'Charlie', 'Dan'], key=len) + ('Charlie', 'Alice') + + See also: + heapq.nlargest + """ + cdef object item, val, top + cdef object it = iter(seq) + cdef object _heapreplace = heapreplace + cdef Py_ssize_t i = k + cdef list pq = [] + + if key is not None and not callable(key): + key = getter(key) + + if k < 2: + if k < 1: + return () + top = list(take(1, it)) + if len(top) == 0: + return () + it = concatv(top, it) + if key is None: + return (max(it),) + else: + return (max(it, key=key),) + + for item in it: + if key is None: + PyList_Append(pq, (item, i)) + else: + PyList_Append(pq, (key(item), i, item)) + i -= 1 + if i == 0: + break + if i != 0: + pq.sort(reverse=True) + k = 0 if key is None else 2 + return tuple([item[k] for item in pq]) + + heapify(pq) + top = pq[0][0] + if key is None: + for item in it: + if top < item: + _heapreplace(pq, (item, i)) + top = pq[0][0] + i -= 1 + else: + for item in it: + val = key(item) + if top < val: + _heapreplace(pq, (val, i, item)) + top = pq[0][0] + i -= 1 + + pq.sort(reverse=True) + k = 0 if key is None 
else 2 + return tuple([item[k] for item in pq]) + + +cpdef object peek(object seq): + """ + Retrieve the next element of a sequence + + Returns the first element and an iterable equivalent to the original + sequence, still having the element retrieved. + + >>> seq = [0, 1, 2, 3, 4] + >>> first, seq = peek(seq) + >>> first + 0 + >>> list(seq) + [0, 1, 2, 3, 4] + """ + iterator = iter(seq) + item = next(iterator) + return item, chain((item,), iterator) + + +cpdef object peekn(Py_ssize_t n, object seq): + """ + Retrieve the next n elements of a sequence + + Returns a tuple of the first n elements and an iterable equivalent + to the original, still having the elements retrieved. + + >>> seq = [0, 1, 2, 3, 4] + >>> first_two, seq = peekn(2, seq) + >>> first_two + (0, 1) + >>> list(seq) + [0, 1, 2, 3, 4] + """ + iterator = iter(seq) + peeked = tuple(take(n, iterator)) + return peeked, chain(iter(peeked), iterator) + + +cdef class random_sample: + """ random_sample(prob, seq, random_state=None) + + Return elements from a sequence with probability of prob + + Returns a lazy iterator of random items from seq. + + ``random_sample`` considers each item independently and without + replacement. See below how the first time it returned 13 items and the + next time it returned 6 items. + + >>> seq = list(range(100)) + >>> list(random_sample(0.1, seq)) # doctest: +SKIP + [6, 9, 19, 35, 45, 50, 58, 62, 68, 72, 78, 86, 95] + >>> list(random_sample(0.1, seq)) # doctest: +SKIP + [6, 44, 54, 61, 69, 94] + + Providing an integer seed for ``random_state`` will result in + deterministic sampling. Given the same seed it will return the same sample + every time. + + >>> list(random_sample(0.1, seq, random_state=2016)) + [7, 9, 19, 25, 30, 32, 34, 48, 59, 60, 81, 98] + >>> list(random_sample(0.1, seq, random_state=2016)) + [7, 9, 19, 25, 30, 32, 34, 48, 59, 60, 81, 98] + + ``random_state`` can also be any object with a method ``random`` that + returns floats between 0.0 and 1.0 (exclusive). 
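+    For example, an instance of ``random.Random`` satisfies this interface: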
+ + >>> from random import Random + >>> randobj = Random(2016) + >>> list(random_sample(0.1, seq, random_state=randobj)) + [7, 9, 19, 25, 30, 32, 34, 48, 59, 60, 81, 98] + """ + def __cinit__(self, object prob, object seq, random_state=None): + float(prob) + self.prob = prob + self.iter_seq = iter(seq) + if not hasattr(random_state, 'random'): + from random import Random + + random_state = Random(random_state) + self.random_func = random_state.random + + def __iter__(self): + return self + + def __next__(self): + while True: + if self.random_func() < self.prob: + return next(self.iter_seq) + next(self.iter_seq) diff --git a/env/lib/python3.8/site-packages/cytoolz/recipes.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/cytoolz/recipes.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..d18591e9 Binary files /dev/null and b/env/lib/python3.8/site-packages/cytoolz/recipes.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/cytoolz/recipes.pxd b/env/lib/python3.8/site-packages/cytoolz/recipes.pxd new file mode 100644 index 00000000..aa0bc393 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/recipes.pxd @@ -0,0 +1,5 @@ +cpdef object countby(object key, object seq) + + +cdef class partitionby: + cdef object iter_groupby diff --git a/env/lib/python3.8/site-packages/cytoolz/recipes.pyx b/env/lib/python3.8/site-packages/cytoolz/recipes.pyx new file mode 100644 index 00000000..323381a6 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/recipes.pyx @@ -0,0 +1,60 @@ +from cpython.sequence cimport PySequence_Tuple +from cytoolz.itertoolz cimport frequencies, pluck + +from itertools import groupby + + +__all__ = ['countby', 'partitionby'] + + +cpdef object countby(object key, object seq): + """ + Count elements of a collection by a key function + + >>> countby(len, ['cat', 'mouse', 'dog']) + {3: 2, 5: 1} + + >>> def iseven(x): return x % 2 == 0 + >>> countby(iseven, [1, 2, 3]) # doctest:+SKIP + {True: 1, False: 2} + + See Also: + groupby + """ + if not callable(key): + return frequencies(pluck(key, seq)) + return frequencies(map(key, seq)) + + +cdef class partitionby: + """ partitionby(func, seq) + + Partition a sequence according to a function + + Partition `s` into a sequence of lists such that, when traversing + `s`, every time the output of `func` changes a new list is started + and that and subsequent items are collected into that list. + + >>> is_space = lambda c: c == " " + >>> list(partitionby(is_space, "I have space")) + [('I',), (' ',), ('h', 'a', 'v', 'e'), (' ',), ('s', 'p', 'a', 'c', 'e')] + + >>> is_large = lambda x: x > 10 + >>> list(partitionby(is_large, [1, 2, 1, 99, 88, 33, 99, -1, 5])) + [(1, 2, 1), (99, 88, 33, 99), (-1, 5)] + + See also: + partition + groupby + itertools.groupby + """ + def __cinit__(self, object func, object seq): + self.iter_groupby = groupby(seq, key=func) + + def __iter__(self): + return self + + def __next__(self): + cdef object key, val + key, val = next(self.iter_groupby) + return PySequence_Tuple(val) diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/dev_skip_test.py b/env/lib/python3.8/site-packages/cytoolz/tests/dev_skip_test.py new file mode 100644 index 00000000..c5da5b8a --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/dev_skip_test.py @@ -0,0 +1,32 @@ +""" +Determine when dev tests should be skipped by regular users. + +Some tests are only intended to be tested during development right +before performing a release. 
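+(They compare the compiled `cytoolz` against the pure-Python `toolz`
+reference, so both packages must be importable and in agreement.)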
These do not test core functionality +of `cytoolz` and may be skipped. These tests are only run if the +following conditions are true: + + - toolz is installed + - toolz is the correct version + - cytoolz is a release version +""" +import cytoolz + +istest = lambda func: setattr(func, '__test__', True) or func +nottest = lambda func: setattr(func, '__test__', False) or func + +try: + import toolz + do_toolz_tests = True +except ImportError: + do_toolz_tests = False + +if do_toolz_tests: + do_toolz_tests = toolz.__version__.startswith(cytoolz.__toolz_version__) + do_toolz_tests &= '+' not in cytoolz.__version__ + +# Decorator used to skip tests for developmental versions of CyToolz +if do_toolz_tests: + dev_skip_test = istest +else: + dev_skip_test = nottest diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_compatibility.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_compatibility.py new file mode 100644 index 00000000..9b8468e0 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_compatibility.py @@ -0,0 +1,10 @@ + +import pytest +import importlib + +def test_compat_warn(): + with pytest.warns(DeprecationWarning): + # something else is importing this, + import cytoolz.compatibility + # reload to be sure we warn + importlib.reload(cytoolz.compatibility) diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_curried.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_curried.py new file mode 100644 index 00000000..185eec5d --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_curried.py @@ -0,0 +1,117 @@ +import cytoolz +import cytoolz.curried +from cytoolz.curried import (take, first, second, sorted, merge_with, reduce, + merge, operator as cop) +from collections import defaultdict +from importlib import import_module +from operator import add + + +def test_take(): + assert list(take(2)([1, 2, 3])) == [1, 2] + + +def test_first(): + assert first is cytoolz.itertoolz.first + + +def test_merge(): + assert merge(factory=lambda: defaultdict(int))({1: 1}) == {1: 1} + assert merge({1: 1}) == {1: 1} + assert merge({1: 1}, factory=lambda: defaultdict(int)) == {1: 1} + + +def test_merge_with(): + assert merge_with(sum)({1: 1}, {1: 2}) == {1: 3} + + +def test_merge_with_list(): + assert merge_with(sum, [{'a': 1}, {'a': 2}]) == {'a': 3} + + +def test_sorted(): + assert sorted(key=second)([(1, 2), (2, 1)]) == [(2, 1), (1, 2)] + + +def test_reduce(): + assert reduce(add)((1, 2, 3)) == 6 + + +def test_module_name(): + assert cytoolz.curried.__name__ == 'cytoolz.curried' + + +def should_curry(func): + if not callable(func) or isinstance(func, cytoolz.curry): + return False + nargs = cytoolz.functoolz.num_required_args(func) + if nargs is None or nargs > 1: + return True + return nargs == 1 and cytoolz.functoolz.has_keywords(func) + + +def test_curried_operator(): + import operator + + for k, v in vars(cop).items(): + if not callable(v): + continue + + if not isinstance(v, cytoolz.curry): + try: + # Make sure it is unary + v(1) + except TypeError: + try: + v('x') + except TypeError: + pass + else: + continue + raise AssertionError( + 'cytoolz.curried.operator.%s is not curried!' % k, + ) + assert should_curry(getattr(operator, k)) == isinstance(v, cytoolz.curry), k + + # Make sure this isn't totally empty. 
+ assert len(set(vars(cop)) & {'add', 'sub', 'mul'}) == 3 + + +def test_curried_namespace(): + exceptions = import_module('cytoolz.curried.exceptions') + namespace = {} + + + def curry_namespace(ns): + return { + name: cytoolz.curry(f) if should_curry(f) else f + for name, f in ns.items() if '__' not in name + } + + from_cytoolz = curry_namespace(vars(cytoolz)) + from_exceptions = curry_namespace(vars(exceptions)) + namespace.update(cytoolz.merge(from_cytoolz, from_exceptions)) + + namespace = cytoolz.valfilter(callable, namespace) + curried_namespace = cytoolz.valfilter(callable, cytoolz.curried.__dict__) + + if namespace != curried_namespace: + missing = set(namespace) - set(curried_namespace) + if missing: + raise AssertionError('There are missing functions in cytoolz.curried:\n %s' + % ' \n'.join(sorted(missing))) + extra = set(curried_namespace) - set(namespace) + if extra: + raise AssertionError('There are extra functions in cytoolz.curried:\n %s' + % ' \n'.join(sorted(extra))) + unequal = cytoolz.merge_with(list, namespace, curried_namespace) + unequal = cytoolz.valfilter(lambda x: x[0] != x[1], unequal) + messages = [] + for name, (orig_func, auto_func) in sorted(unequal.items()): + if name in from_exceptions: + messages.append('%s should come from cytoolz.curried.exceptions' % name) + elif should_curry(getattr(cytoolz, name)): + messages.append('%s should be curried from cytoolz' % name) + else: + messages.append('%s should come from cytoolz and NOT be curried' % name) + raise AssertionError('\n'.join(messages)) diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_curried_toolzlike.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_curried_toolzlike.py new file mode 100644 index 00000000..7fdcdb8b --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_curried_toolzlike.py @@ -0,0 +1,40 @@ +import cytoolz +import cytoolz.curried +import types +from dev_skip_test import dev_skip_test + + +# Note that the tests in this file assume `toolz.curry` is a class, but we +# may some day make `toolz.curry` a function and `toolz.Curry` a class. + +@dev_skip_test +def test_toolzcurry_is_class(): + import toolz + assert isinstance(toolz.curry, type) is True + assert isinstance(toolz.curry, types.FunctionType) is False + + +@dev_skip_test +def test_cytoolz_like_toolz(): + import toolz + import toolz.curried + for key, val in vars(toolz.curried).items(): + if isinstance(val, toolz.curry): + if val.func is toolz.curry: # XXX: Python 3.4 work-around! 
+ continue + assert hasattr(cytoolz.curried, key), ( + 'cytoolz.curried.%s does not exist' % key) + assert isinstance(getattr(cytoolz.curried, key), cytoolz.curry), ( + 'cytoolz.curried.%s should be curried' % key) + + +@dev_skip_test +def test_toolz_like_cytoolz(): + import toolz + import toolz.curried + for key, val in vars(cytoolz.curried).items(): + if isinstance(val, cytoolz.curry): + assert hasattr(toolz.curried, key), ( + 'cytoolz.curried.%s should not exist' % key) + assert isinstance(getattr(toolz.curried, key), toolz.curry), ( + 'cytoolz.curried.%s should not be curried' % key) diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_dev_skip_test.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_dev_skip_test.py new file mode 100644 index 00000000..0c252711 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_dev_skip_test.py @@ -0,0 +1,21 @@ +from dev_skip_test import istest, nottest, dev_skip_test + +d = {} + + +@istest +def test_passes(): + d['istest'] = True + assert True + + +@nottest +def test_fails(): + d['nottest'] = True + assert False + + +def test_dev_skip_test(): + assert dev_skip_test is istest or dev_skip_test is nottest + assert d.get('istest', False) is True + assert d.get('nottest', False) is False diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_dicttoolz.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_dicttoolz.py new file mode 100644 index 00000000..4cc8019f --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_dicttoolz.py @@ -0,0 +1,270 @@ +from collections import defaultdict as _defaultdict +from collections.abc import Mapping +import os +from cytoolz.dicttoolz import (merge, merge_with, valmap, keymap, update_in, + assoc, dissoc, keyfilter, valfilter, itemmap, + itemfilter, assoc_in) +from cytoolz.functoolz import identity +from cytoolz.utils import raises + + +def inc(x): + return x + 1 + + +def iseven(i): + return i % 2 == 0 + + +class TestDict(object): + """Test typical usage: dict inputs, no factory keyword. 
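+    Subclasses rerun every test here against other mapping types simply by
+    overriding ``D`` and ``kw`` (see TestDefaultDict and TestCustomMapping
+    below).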
+ + Class attributes: + D: callable that inputs a dict and creates or returns a MutableMapping + kw: kwargs dict to specify "factory" keyword (if applicable) + """ + D = dict + kw = {} + + def test_merge(self): + D, kw = self.D, self.kw + assert merge(D({1: 1, 2: 2}), D({3: 4}), **kw) == D({1: 1, 2: 2, 3: 4}) + + def test_merge_iterable_arg(self): + D, kw = self.D, self.kw + assert merge([D({1: 1, 2: 2}), D({3: 4})], **kw) == D({1: 1, 2: 2, 3: 4}) + + def test_merge_with(self): + D, kw = self.D, self.kw + dicts = D({1: 1, 2: 2}), D({1: 10, 2: 20}) + assert merge_with(sum, *dicts, **kw) == D({1: 11, 2: 22}) + assert merge_with(tuple, *dicts, **kw) == D({1: (1, 10), 2: (2, 20)}) + + dicts = D({1: 1, 2: 2, 3: 3}), D({1: 10, 2: 20}) + assert merge_with(sum, *dicts, **kw) == D({1: 11, 2: 22, 3: 3}) + assert merge_with(tuple, *dicts, **kw) == D({1: (1, 10), 2: (2, 20), 3: (3,)}) + + assert not merge_with(sum) + + def test_merge_with_iterable_arg(self): + D, kw = self.D, self.kw + dicts = D({1: 1, 2: 2}), D({1: 10, 2: 20}) + assert merge_with(sum, *dicts, **kw) == D({1: 11, 2: 22}) + assert merge_with(sum, dicts, **kw) == D({1: 11, 2: 22}) + assert merge_with(sum, iter(dicts), **kw) == D({1: 11, 2: 22}) + + def test_valmap(self): + D, kw = self.D, self.kw + assert valmap(inc, D({1: 1, 2: 2}), **kw) == D({1: 2, 2: 3}) + + def test_keymap(self): + D, kw = self.D, self.kw + assert keymap(inc, D({1: 1, 2: 2}), **kw) == D({2: 1, 3: 2}) + + def test_itemmap(self): + D, kw = self.D, self.kw + assert itemmap(reversed, D({1: 2, 2: 4}), **kw) == D({2: 1, 4: 2}) + + def test_valfilter(self): + D, kw = self.D, self.kw + assert valfilter(iseven, D({1: 2, 2: 3}), **kw) == D({1: 2}) + + def test_keyfilter(self): + D, kw = self.D, self.kw + assert keyfilter(iseven, D({1: 2, 2: 3}), **kw) == D({2: 3}) + + def test_itemfilter(self): + D, kw = self.D, self.kw + assert itemfilter(lambda item: iseven(item[0]), D({1: 2, 2: 3}), **kw) == D({2: 3}) + assert itemfilter(lambda item: iseven(item[1]), D({1: 2, 2: 3}), **kw) == D({1: 2}) + + def test_assoc(self): + D, kw = self.D, self.kw + assert assoc(D({}), "a", 1, **kw) == D({"a": 1}) + assert assoc(D({"a": 1}), "a", 3, **kw) == D({"a": 3}) + assert assoc(D({"a": 1}), "b", 3, **kw) == D({"a": 1, "b": 3}) + + # Verify immutability: + d = D({'x': 1}) + oldd = d + assoc(d, 'x', 2, **kw) + assert d is oldd + + def test_dissoc(self): + D, kw = self.D, self.kw + assert dissoc(D({"a": 1}), "a", **kw) == D({}) + assert dissoc(D({"a": 1, "b": 2}), "a", **kw) == D({"b": 2}) + assert dissoc(D({"a": 1, "b": 2}), "b", **kw) == D({"a": 1}) + assert dissoc(D({"a": 1, "b": 2}), "a", "b", **kw) == D({}) + assert dissoc(D({"a": 1}), "a", **kw) == dissoc(dissoc(D({"a": 1}), "a", **kw), "a", **kw) + + # Verify immutability: + d = D({'x': 1}) + oldd = d + d2 = dissoc(d, 'x', **kw) + assert d is oldd + assert d2 is not oldd + + def test_assoc_in(self): + D, kw = self.D, self.kw + assert assoc_in(D({"a": 1}), ["a"], 2, **kw) == D({"a": 2}) + assert (assoc_in(D({"a": D({"b": 1})}), ["a", "b"], 2, **kw) == + D({"a": D({"b": 2})})) + assert assoc_in(D({}), ["a", "b"], 1, **kw) == D({"a": D({"b": 1})}) + + # Verify immutability: + d = D({'x': 1}) + oldd = d + d2 = assoc_in(d, ['x'], 2, **kw) + assert d is oldd + assert d2 is not oldd + + def test_update_in(self): + D, kw = self.D, self.kw + assert update_in(D({"a": 0}), ["a"], inc, **kw) == D({"a": 1}) + assert update_in(D({"a": 0, "b": 1}), ["b"], str, **kw) == D({"a": 0, "b": "1"}) + assert (update_in(D({"t": 1, "v": D({"a": 0})}), ["v", "a"], 
inc, **kw) == + D({"t": 1, "v": D({"a": 1})})) + # Handle one missing key. + assert update_in(D({}), ["z"], str, None, **kw) == D({"z": "None"}) + assert update_in(D({}), ["z"], inc, 0, **kw) == D({"z": 1}) + assert update_in(D({}), ["z"], lambda x: x+"ar", default="b", **kw) == D({"z": "bar"}) + # Same semantics as Clojure for multiple missing keys, ie. recursively + # create nested empty dictionaries to the depth specified by the + # keys with the innermost value set to f(default). + assert update_in(D({}), [0, 1], inc, default=-1, **kw) == D({0: D({1: 0})}) + assert update_in(D({}), [0, 1], str, default=100, **kw) == D({0: D({1: "100"})}) + assert (update_in(D({"foo": "bar", 1: 50}), ["d", 1, 0], str, 20, **kw) == + D({"foo": "bar", 1: 50, "d": D({1: D({0: "20"})})})) + # Verify immutability: + d = D({'x': 1}) + oldd = d + update_in(d, ['x'], inc, **kw) + assert d is oldd + + def test_factory(self): + D, kw = self.D, self.kw + assert merge(defaultdict(int, D({1: 2})), D({2: 3})) == {1: 2, 2: 3} + assert (merge(defaultdict(int, D({1: 2})), D({2: 3}), + factory=lambda: defaultdict(int)) == + defaultdict(int, D({1: 2, 2: 3}))) + assert not (merge(defaultdict(int, D({1: 2})), D({2: 3}), + factory=lambda: defaultdict(int)) == {1: 2, 2: 3}) + assert raises(TypeError, lambda: merge(D({1: 2}), D({2: 3}), factoryy=dict)) + + +class defaultdict(_defaultdict): + def __eq__(self, other): + return (super(defaultdict, self).__eq__(other) and + isinstance(other, _defaultdict) and + self.default_factory == other.default_factory) + + +class TestDefaultDict(TestDict): + """Test defaultdict as input and factory + + Class attributes: + D: callable that inputs a dict and creates or returns a MutableMapping + kw: kwargs dict to specify "factory" keyword (if applicable) + """ + @staticmethod + def D(dict_): + return defaultdict(int, dict_) + + kw = {'factory': lambda: defaultdict(int)} + + +class CustomMapping(object): + """Define methods of the MutableMapping protocol required by dicttoolz""" + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + + def __getitem__(self, key): + return self._d[key] + + def __setitem__(self, key, val): + self._d[key] = val + + def __delitem__(self, key): + del self._d[key] + + def __iter__(self): + return iter(self._d) + + def __len__(self): + return len(self._d) + + def __contains__(self, key): + return key in self._d + + def __eq__(self, other): + return isinstance(other, CustomMapping) and self._d == other._d + + def __ne__(self, other): + return not isinstance(other, CustomMapping) or self._d != other._d + + def keys(self): + return self._d.keys() + + def values(self): + return self._d.values() + + def items(self): + return self._d.items() + + def update(self, *args, **kwargs): + self._d.update(*args, **kwargs) + + # Unused methods that are part of the MutableMapping protocol + #def get(self, key, *args): + # return self._d.get(key, *args) + + #def pop(self, key, *args): + # return self._d.pop(key, *args) + + #def popitem(self, key): + # return self._d.popitem() + + #def clear(self): + # self._d.clear() + + #def setdefault(self, key, *args): + # return self._d.setdefault(self, key, *args) + + +class TestCustomMapping(TestDict): + """Test CustomMapping as input and factory + + Class attributes: + D: callable that inputs a dict and creates or returns a MutableMapping + kw: kwargs dict to specify "factory" keyword (if applicable) + """ + D = CustomMapping + kw = {'factory': lambda: CustomMapping()} + + +def test_environ(): + # See: 
https://github.com/pycytoolz/cycytoolz/issues/127 + assert keymap(identity, os.environ) == os.environ + assert valmap(identity, os.environ) == os.environ + assert itemmap(identity, os.environ) == os.environ + + +def test_merge_with_non_dict_mappings(): + class Foo(Mapping): + def __init__(self, d): + self.d = d + + def __iter__(self): + return iter(self.d) + + def __getitem__(self, key): + return self.d[key] + + def __len__(self): + return len(self.d) + + d = Foo({1: 1}) + + assert merge(d) is d or merge(d) == {1: 1} + assert merge_with(sum, d) == {1: 1} diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_docstrings.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_docstrings.py new file mode 100644 index 00000000..85654e62 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_docstrings.py @@ -0,0 +1,85 @@ +import difflib +import cytoolz + +from cytoolz import curry, identity, keyfilter, valfilter, merge_with +from cytoolz.utils import raises +from dev_skip_test import dev_skip_test + + +# `cytoolz` functions for which "# doctest: +SKIP" were added. +# This may have been done because the error message may not exactly match. +# The skipped tests should be added below with results and explanations. +skipped_doctests = ['get_in'] + + +@curry +def isfrommod(modname, func): + mod = getattr(func, '__module__', '') or '' + return mod.startswith(modname) or 'toolz.functoolz.curry' in str(type(func)) + + +def convertdoc(doc): + """ Convert docstring from `toolz` to `cytoolz`.""" + if hasattr(doc, '__doc__'): + doc = doc.__doc__ + doc = doc.replace('toolz', 'cytoolz') + doc = doc.replace('dictcytoolz', 'dicttoolz') + doc = doc.replace('funccytoolz', 'functoolz') + doc = doc.replace('itercytoolz', 'itertoolz') + doc = doc.replace('cytoolz.readthedocs', 'toolz.readthedocs') + return doc + + +@dev_skip_test +def test_docstrings_uptodate(): + import toolz + differ = difflib.Differ() + + # only consider items created in both `toolz` and `cytoolz` + toolz_dict = valfilter(isfrommod('toolz'), toolz.__dict__) + cytoolz_dict = valfilter(isfrommod('cytoolz'), cytoolz.__dict__) + + # only test functions that have docstrings defined in `toolz` + toolz_dict = valfilter(lambda x: getattr(x, '__doc__', ''), toolz_dict) + + # full API coverage should be tested elsewhere + toolz_dict = keyfilter(lambda x: x in cytoolz_dict, toolz_dict) + cytoolz_dict = keyfilter(lambda x: x in toolz_dict, cytoolz_dict) + + d = merge_with(identity, toolz_dict, cytoolz_dict) + for key, (toolz_func, cytoolz_func) in d.items(): + # only check if the new doctstring *contains* the expected docstring + toolz_doc = convertdoc(toolz_func) + cytoolz_doc = cytoolz_func.__doc__ + if toolz_doc not in cytoolz_doc: + diff = list(differ.compare(toolz_doc.splitlines(), + cytoolz_doc.splitlines())) + fulldiff = list(diff) + # remove additional lines at the beginning + while diff and diff[0].startswith('+'): + diff.pop(0) + # remove additional lines at the end + while diff and diff[-1].startswith('+'): + diff.pop() + + def checkbad(line): + return (line.startswith('+') and + not ('# doctest: +SKIP' in line and + key in skipped_doctests)) + + if any(map(checkbad, diff)): + assert False, 'Error: cytoolz.%s has a bad docstring:\n%s\n' % ( + key, '\n'.join(fulldiff)) + + +def test_get_in_doctest(): + # Original doctest: + # >>> get_in(['y'], {}, no_default=True) + # Traceback (most recent call last): + # ... 
+ # KeyError: 'y' + + # cytoolz result: + # KeyError: + + raises(KeyError, lambda: cytoolz.get_in(['y'], {}, no_default=True)) diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_doctests.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_doctests.py new file mode 100644 index 00000000..d9b72774 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_doctests.py @@ -0,0 +1,19 @@ +import doctest + +import cytoolz +import cytoolz.dicttoolz +import cytoolz.functoolz +import cytoolz.itertoolz +import cytoolz.recipes + + +def module_doctest(m, *args, **kwargs): + return doctest.testmod(m, *args, **kwargs).failed == 0 + + +def test_doctest(): + assert module_doctest(cytoolz) + assert module_doctest(cytoolz.dicttoolz) + assert module_doctest(cytoolz.functoolz) + assert module_doctest(cytoolz.itertoolz) + assert module_doctest(cytoolz.recipes) diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_embedded_sigs.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_embedded_sigs.py new file mode 100644 index 00000000..c289b04d --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_embedded_sigs.py @@ -0,0 +1,100 @@ +import inspect +import cytoolz + +from types import BuiltinFunctionType, FunctionType +from cytoolz import curry, identity, keyfilter, valfilter, merge_with +from dev_skip_test import dev_skip_test + + +@curry +def isfrommod(modname, func): + mod = getattr(func, '__module__', '') or '' + return mod.startswith(modname) or 'toolz.functoolz.curry' in str(type(func)) + + +@dev_skip_test +def test_class_sigs(): + """ Test that all ``cdef class`` extension types in ``cytoolz`` have + correctly embedded the function signature as done in ``toolz``. + """ + import toolz + # only consider items created in both `toolz` and `cytoolz` + toolz_dict = valfilter(isfrommod('toolz'), toolz.__dict__) + cytoolz_dict = valfilter(isfrommod('cytoolz'), cytoolz.__dict__) + + # only test `cdef class` extensions from `cytoolz` + cytoolz_dict = valfilter(lambda x: not isinstance(x, BuiltinFunctionType), + cytoolz_dict) + + # full API coverage should be tested elsewhere + toolz_dict = keyfilter(lambda x: x in cytoolz_dict, toolz_dict) + cytoolz_dict = keyfilter(lambda x: x in toolz_dict, cytoolz_dict) + + class wrap: + """e.g., allow `factory=` to instead be `factory=dict` in signature""" + def __init__(self, obj): + self.obj = obj + + def __repr__(self): + return getattr(self.obj, '__name__', repr(self.obj)) + + d = merge_with(identity, toolz_dict, cytoolz_dict) + for key, (toolz_func, cytoolz_func) in d.items(): + if isinstance(toolz_func, FunctionType): + # function + toolz_spec = inspect.signature(toolz_func) + elif isinstance(toolz_func, toolz.curry): + # curried object + toolz_spec = inspect.signature(toolz_func.func) + else: + # class + toolz_spec = inspect.signature(toolz_func.__init__) + toolz_spec = toolz_spec.replace( + parameters=[ + v.replace(default=wrap(v.default)) + if v.default is not inspect._empty + else v + for v in toolz_spec.parameters.values() + ] + ) + # Hmm, Cython is showing str as unicode, such as `default=u'__no__default__'` + doc = cytoolz_func.__doc__ + doc_alt = doc.replace('Py_ssize_t ', '').replace("=u'", "='") + toolz_sig = toolz_func.__name__ + str(toolz_spec) + if not (toolz_sig in doc or toolz_sig in doc_alt): + message = ('cytoolz.%s does not have correct function signature.' 
+ '\n\nExpected: %s' + '\n\nDocstring in cytoolz is:\n%s' + % (key, toolz_sig, cytoolz_func.__doc__)) + assert False, message + + +skip_sigs = ['identity'] +aliases = {'comp': 'compose'} + + +@dev_skip_test +def test_sig_at_beginning(): + """ Test that the function signature is at the beginning of the docstring + and is followed by exactly one blank line. + """ + cytoolz_dict = valfilter(isfrommod('cytoolz'), cytoolz.__dict__) + cytoolz_dict = keyfilter(lambda x: x not in skip_sigs, cytoolz_dict) + + for key, val in cytoolz_dict.items(): + doclines = val.__doc__.splitlines() + assert len(doclines) > 2, ( + 'cytoolz.%s docstring too short:\n\n%s' % (key, val.__doc__)) + + sig = '%s(' % aliases.get(key, key) + assert sig in doclines[0], ( + 'cytoolz.%s docstring missing signature at beginning:\n\n%s' + % (key, val.__doc__)) + + assert not doclines[1], ( + 'cytoolz.%s docstring missing blank line after signature:\n\n%s' + % (key, val.__doc__)) + + assert doclines[2], ( + 'cytoolz.%s docstring too many blank lines after signature:\n\n%s' + % (key, val.__doc__)) diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_functoolz.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_functoolz.py new file mode 100644 index 00000000..47484f63 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_functoolz.py @@ -0,0 +1,796 @@ +import inspect +import cytoolz +from cytoolz.functoolz import (thread_first, thread_last, memoize, curry, + compose, compose_left, pipe, complement, do, juxt, + flip, excepts, apply) +from operator import add, mul, itemgetter +from cytoolz.utils import raises +from functools import partial + + +def iseven(x): + return x % 2 == 0 + + +def isodd(x): + return x % 2 == 1 + + +def inc(x): + return x + 1 + + +def double(x): + return 2 * x + + +class AlwaysEquals(object): + """useful to test correct __eq__ implementation of other objects""" + + def __eq__(self, other): + return True + + def __ne__(self, other): + return False + + +class NeverEquals(object): + """useful to test correct __eq__ implementation of other objects""" + + def __eq__(self, other): + return False + + def __ne__(self, other): + return True + + +def test_apply(): + assert apply(double, 5) == 10 + assert tuple(map(apply, [double, inc, double], [10, 500, 8000])) == (20, 501, 16000) + assert raises(TypeError, apply) + + +def test_thread_first(): + assert thread_first(2) == 2 + assert thread_first(2, inc) == 3 + assert thread_first(2, inc, inc) == 4 + assert thread_first(2, double, inc) == 5 + assert thread_first(2, (add, 5), double) == 14 + + +def test_thread_last(): + assert list(thread_last([1, 2, 3], (map, inc), (filter, iseven))) == [2, 4] + assert list(thread_last([1, 2, 3], (map, inc), (filter, isodd))) == [3] + assert thread_last(2, (add, 5), double) == 14 + + +def test_memoize(): + fn_calls = [0] # Storage for side effects + + def f(x, y): + """ A docstring """ + fn_calls[0] += 1 + return x + y + mf = memoize(f) + + assert mf(2, 3) is mf(2, 3) + assert fn_calls == [1] # function was only called once + assert mf.__doc__ == f.__doc__ + assert raises(TypeError, lambda: mf(1, {})) + + +def test_memoize_kwargs(): + fn_calls = [0] # Storage for side effects + + def f(x, y=0): + return x + y + + mf = memoize(f) + + assert mf(1) == f(1) + assert mf(1, 2) == f(1, 2) + assert mf(1, y=2) == f(1, y=2) + assert mf(1, y=3) == f(1, y=3) + + +def test_memoize_curried(): + @curry + def f(x, y=0): + return x + y + + f2 = f(y=1) + fm2 = memoize(f2) + + assert fm2(3) == f2(3) + assert fm2(3) == f2(3) 
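+    # Added note: memoize treats a curried partial like any other callable;
+    # the cache key is derived from the remaining call arguments, so the
+    # second fm2(3) above is served from the cache without calling f again.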
+ + +def test_memoize_partial(): + def f(x, y=0): + return x + y + + f2 = partial(f, y=1) + fm2 = memoize(f2) + + assert fm2(3) == f2(3) + assert fm2(3) == f2(3) + + +def test_memoize_key_signature(): + # Single argument should not be tupled as a key. No keywords. + mf = memoize(lambda x: False, cache={1: True}) + assert mf(1) is True + assert mf(2) is False + + # Single argument must be tupled if signature has varargs. No keywords. + mf = memoize(lambda x, *args: False, cache={(1,): True, (1, 2): 2}) + assert mf(1) is True + assert mf(2) is False + assert mf(1, 1) is False + assert mf(1, 2) == 2 + assert mf((1, 2)) is False + + # More than one argument is always tupled. No keywords. + mf = memoize(lambda x, y: False, cache={(1, 2): True}) + assert mf(1, 2) is True + assert mf(1, 3) is False + assert raises(TypeError, lambda: mf((1, 2))) + + # Nullary function (no inputs) uses empty tuple as the key + mf = memoize(lambda: False, cache={(): True}) + assert mf() is True + + # Single argument must be tupled if there are keyword arguments, because + # keyword arguments may be passed as unnamed args. + mf = memoize(lambda x, y=0: False, + cache={((1,), frozenset((('y', 2),))): 2, + ((1, 2), None): 3}) + assert mf(1, y=2) == 2 + assert mf(1, 2) == 3 + assert mf(2, y=2) is False + assert mf(2, 2) is False + assert mf(1) is False + assert mf((1, 2)) is False + + # Keyword-only signatures must still have an "args" tuple. + mf = memoize(lambda x=0: False, cache={(None, frozenset((('x', 1),))): 1, + ((1,), None): 2}) + assert mf() is False + assert mf(x=1) == 1 + assert mf(1) == 2 + + +def test_memoize_curry_cache(): + @memoize(cache={1: True}) + def f(x): + return False + + assert f(1) is True + assert f(2) is False + + +def test_memoize_key(): + @memoize(key=lambda args, kwargs: args[0]) + def f(x, y, *args, **kwargs): + return x + y + + assert f(1, 2) == 3 + assert f(1, 3) == 3 + + +def test_memoize_wrapped(): + + def foo(): + """ + Docstring + """ + pass + memoized_foo = memoize(foo) + assert memoized_foo.__wrapped__ is foo + + +def test_curry_simple(): + cmul = curry(mul) + double = cmul(2) + assert callable(double) + assert double(10) == 20 + assert repr(cmul) == repr(mul) + + cmap = curry(map) + assert list(cmap(inc)([1, 2, 3])) == [2, 3, 4] + + assert raises(TypeError, lambda: curry()) + assert raises(TypeError, lambda: curry({1: 2})) + + +def test_curry_kwargs(): + def f(a, b, c=10): + return (a + b) * c + + f = curry(f) + assert f(1, 2, 3) == 9 + assert f(1)(2, 3) == 9 + assert f(1, 2) == 30 + assert f(1, c=3)(2) == 9 + assert f(c=3)(1, 2) == 9 + + def g(a=1, b=10, c=0): + return a + b + c + + cg = curry(g, b=2) + assert cg() == 3 + assert cg(b=3) == 4 + assert cg(a=0) == 2 + assert cg(a=0, b=1) == 1 + assert cg(0) == 2 # pass "a" as arg, not kwarg + assert raises(TypeError, lambda: cg(1, 2)) # pass "b" as arg AND kwarg + + def h(x, func=int): + return func(x) + + # __init__ must not pick func as positional arg + assert curry(h)(0.0) == 0 + assert curry(h)(func=str)(0.0) == '0.0' + assert curry(h, func=str)(0.0) == '0.0' + + +def test_curry_passes_errors(): + @curry + def f(a, b): + if not isinstance(a, int): + raise TypeError() + return a + b + + assert f(1, 2) == 3 + assert raises(TypeError, lambda: f('1', 2)) + assert raises(TypeError, lambda: f('1')(2)) + assert raises(TypeError, lambda: f(1, 2, 3)) + + +def test_curry_docstring(): + def f(x, y): + """ A docstring """ + return x + + g = curry(f) + assert g.__doc__ == f.__doc__ + assert str(g) == str(f) + assert f(1, 2) == g(1, 2) + + 
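+# Editorial sketch (uses only behaviour asserted in the test below): curry
+# stores state the same way functools.partial does (.func, .args, .keywords),
+# but an underapplied call returns another curry instead of failing, e.g.
+#
+#     c = curry(foo)(1, c=2)          # foo(a, b, c=1) still needs b
+#     p = partial(foo, 1, c=2)
+#     assert c.args == p.args and c.keywords == p.keywords
+#     assert c(3) == p(3) == 6        # identical once complete
+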
+def test_curry_is_like_partial(): + def foo(a, b, c=1): + return a + b + c + + p, c = partial(foo, 1, c=2), curry(foo)(1, c=2) + assert p.keywords == c.keywords + assert p.args == c.args + assert p(3) == c(3) + + p, c = partial(foo, 1), curry(foo)(1) + assert p.keywords == c.keywords + assert p.args == c.args + assert p(3) == c(3) + assert p(3, c=2) == c(3, c=2) + + p, c = partial(foo, c=1), curry(foo)(c=1) + assert p.keywords == c.keywords + assert p.args == c.args + assert p(1, 2) == c(1, 2) + + +def test_curry_is_idempotent(): + def foo(a, b, c=1): + return a + b + c + + f = curry(foo, 1, c=2) + g = curry(f) + assert isinstance(f, curry) + assert isinstance(g, curry) + assert not isinstance(g.func, curry) + assert not hasattr(g.func, 'func') + assert f.func == g.func + assert f.args == g.args + assert f.keywords == g.keywords + + +def test_curry_attributes_readonly(): + def foo(a, b, c=1): + return a + b + c + + f = curry(foo, 1, c=2) + assert raises(AttributeError, lambda: setattr(f, 'args', (2,))) + assert raises(AttributeError, lambda: setattr(f, 'keywords', {'c': 3})) + assert raises(AttributeError, lambda: setattr(f, 'func', f)) + assert raises(AttributeError, lambda: delattr(f, 'args')) + assert raises(AttributeError, lambda: delattr(f, 'keywords')) + assert raises(AttributeError, lambda: delattr(f, 'func')) + + +def test_curry_attributes_writable(): + def foo(a, b, c=1): + return a + b + c + foo.__qualname__ = 'this.is.foo' + f = curry(foo, 1, c=2) + assert f.__qualname__ == 'this.is.foo' + f.__name__ = 'newname' + f.__doc__ = 'newdoc' + f.__module__ = 'newmodule' + f.__qualname__ = 'newqualname' + assert f.__name__ == 'newname' + assert f.__doc__ == 'newdoc' + assert f.__module__ == 'newmodule' + assert f.__qualname__ == 'newqualname' + if hasattr(f, 'func_name'): + assert f.__name__ == f.func_name + + +def test_curry_module(): + from cytoolz.curried.exceptions import merge + assert merge.__module__ == 'cytoolz.curried.exceptions' + + +def test_curry_comparable(): + def foo(a, b, c=1): + return a + b + c + f1 = curry(foo, 1, c=2) + f2 = curry(foo, 1, c=2) + g1 = curry(foo, 1, c=3) + h1 = curry(foo, c=2) + h2 = h1(c=2) + h3 = h1() + assert f1 == f2 + assert not (f1 != f2) + assert f1 != g1 + assert not (f1 == g1) + assert f1 != h1 + assert h1 == h2 + assert h1 == h3 + + # test function comparison works + def bar(a, b, c=1): + return a + b + c + b1 = curry(bar, 1, c=2) + assert b1 != f1 + + assert {f1, f2, g1, h1, h2, h3, b1, b1()} == {f1, g1, h1, b1} + + # test unhashable input + unhash1 = curry(foo, []) + assert raises(TypeError, lambda: hash(unhash1)) + unhash2 = curry(foo, c=[]) + assert raises(TypeError, lambda: hash(unhash2)) + + +def test_curry_doesnot_transmogrify(): + # Early versions of `curry` transmogrified to `partial` objects if + # only one positional argument remained even if keyword arguments + # were present. Now, `curry` should always remain `curry`. 
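+    # Added comment: under the old behaviour the chained keyword refinement
+    # below, cf(y=1)(y=2)(y=3), would fail, because a partial called with
+    # only a keyword argument immediately invokes f without the required x.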
+ def f(x, y=0): + return x + y + + cf = curry(f) + assert cf(y=1)(y=2)(y=3)(1) == f(1, 3) + + +def test_curry_on_classmethods(): + class A(object): + BASE = 10 + + def __init__(self, base): + self.BASE = base + + @curry + def addmethod(self, x, y): + return self.BASE + x + y + + @classmethod + @curry + def addclass(cls, x, y): + return cls.BASE + x + y + + @staticmethod + @curry + def addstatic(x, y): + return x + y + + a = A(100) + assert a.addmethod(3, 4) == 107 + assert a.addmethod(3)(4) == 107 + assert A.addmethod(a, 3, 4) == 107 + assert A.addmethod(a)(3)(4) == 107 + + assert a.addclass(3, 4) == 17 + assert a.addclass(3)(4) == 17 + assert A.addclass(3, 4) == 17 + assert A.addclass(3)(4) == 17 + + assert a.addstatic(3, 4) == 7 + assert a.addstatic(3)(4) == 7 + assert A.addstatic(3, 4) == 7 + assert A.addstatic(3)(4) == 7 + + # we want this to be of type curry + assert isinstance(a.addmethod, curry) + assert isinstance(A.addmethod, curry) + + +def test_memoize_on_classmethods(): + class A(object): + BASE = 10 + HASH = 10 + + def __init__(self, base): + self.BASE = base + + @memoize + def addmethod(self, x, y): + return self.BASE + x + y + + @classmethod + @memoize + def addclass(cls, x, y): + return cls.BASE + x + y + + @staticmethod + @memoize + def addstatic(x, y): + return x + y + + def __hash__(self): + return self.HASH + + a = A(100) + assert a.addmethod(3, 4) == 107 + assert A.addmethod(a, 3, 4) == 107 + + a.BASE = 200 + assert a.addmethod(3, 4) == 107 + a.HASH = 200 + assert a.addmethod(3, 4) == 207 + + assert a.addclass(3, 4) == 17 + assert A.addclass(3, 4) == 17 + A.BASE = 20 + assert A.addclass(3, 4) == 17 + A.HASH = 20 # hashing of class is handled by metaclass + assert A.addclass(3, 4) == 17 # hence, != 27 + + assert a.addstatic(3, 4) == 7 + assert A.addstatic(3, 4) == 7 + + +def test_curry_call(): + @curry + def add(x, y): + return x + y + assert raises(TypeError, lambda: add.call(1)) + assert add(1)(2) == add.call(1, 2) + assert add(1)(2) == add(1).call(2) + + +def test_curry_bind(): + @curry + def add(x=1, y=2): + return x + y + assert add() == add(1, 2) + assert add.bind(10)(20) == add(10, 20) + assert add.bind(10).bind(20)() == add(10, 20) + assert add.bind(x=10)(y=20) == add(10, 20) + assert add.bind(x=10).bind(y=20)() == add(10, 20) + + +def test_curry_unknown_args(): + def add3(x, y, z): + return x + y + z + + @curry + def f(*args): + return add3(*args) + + assert f()(1)(2)(3) == 6 + assert f(1)(2)(3) == 6 + assert f(1, 2)(3) == 6 + assert f(1, 2, 3) == 6 + assert f(1, 2)(3, 4) == f(1, 2, 3, 4) + + +def test_curry_bad_types(): + assert raises(TypeError, lambda: curry(1)) + + +def test_curry_subclassable(): + class mycurry(curry): + pass + + add = mycurry(lambda x, y: x+y) + assert isinstance(add, curry) + assert isinstance(add, mycurry) + assert isinstance(add(1), mycurry) + assert isinstance(add()(1), mycurry) + assert add(1)(2) == 3 + + # Should we make `_should_curry` public? + """ + class curry2(curry): + def _should_curry(self, args, kwargs, exc=None): + return len(self.args) + len(args) < 2 + + add = curry2(lambda x, y: x+y) + assert isinstance(add(1), curry2) + assert add(1)(2) == 3 + assert isinstance(add(1)(x=2), curry2) + assert raises(TypeError, lambda: add(1)(x=2)(3)) + """ + + +def generate_compose_test_cases(): + """ + Generate test cases for parametrized tests of the compose function. 
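+
+    Each case is a tuple of (arguments to compose(), positional and keyword
+    args for the composed object, expected result), as annotated inline in
+    the first case below.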
+ """ + + def add_then_multiply(a, b, c=10): + return (a + b) * c + + return ( + ( + (), # arguments to compose() + (0,), {}, # positional and keyword args to the Composed object + 0 # expected result + ), + ( + (inc,), + (0,), {}, + 1 + ), + ( + (double, inc), + (0,), {}, + 2 + ), + ( + (str, iseven, inc, double), + (3,), {}, + "False" + ), + ( + (str, add), + (1, 2), {}, + '3' + ), + ( + (str, inc, add_then_multiply), + (1, 2), {"c": 3}, + '10' + ), + ) + + +def test_compose(): + for (compose_args, args, kw, expected) in generate_compose_test_cases(): + assert compose(*compose_args)(*args, **kw) == expected + + +def test_compose_metadata(): + + # Define two functions with different names + def f(a): + return a + + def g(a): + return a + + composed = compose(f, g) + assert composed.__name__ == 'f_of_g' + assert composed.__doc__ == 'lambda *args, **kwargs: f(g(*args, **kwargs))' + + # Create an object with no __name__. + h = object() + + composed = compose(f, h) + assert composed.__name__ == 'Compose' + assert composed.__doc__ == 'A composition of functions' + + assert repr(composed) == 'Compose({!r}, {!r})'.format(f, h) + + assert composed == compose(f, h) + assert composed == AlwaysEquals() + assert not composed == compose(h, f) + assert not composed == object() + assert not composed == NeverEquals() + + assert composed != compose(h, f) + assert composed != NeverEquals() + assert composed != object() + assert not composed != compose(f, h) + assert not composed != AlwaysEquals() + + assert hash(composed) == hash(compose(f, h)) + assert hash(composed) != hash(compose(h, f)) + + bindable = compose(str, lambda x: x*2, lambda x, y=0: int(x) + y) + + class MyClass: + + def __int__(self): + return 8 + + my_method = bindable + my_static_method = staticmethod(bindable) + + assert MyClass.my_method(3) == '6' + assert MyClass.my_method(3, y=2) == '10' + assert MyClass.my_static_method(2) == '4' + assert MyClass().my_method() == '16' + assert MyClass().my_method(y=3) == '22' + assert MyClass().my_static_method(0) == '0' + assert MyClass().my_static_method(0, 1) == '2' + + assert compose(f, h).__wrapped__ is h + if hasattr(cytoolz, 'sandbox'): # only test this with Python version (i.e., not Cython) + assert compose(f, h).__class__.__wrapped__ is None + + # __signature__ is python3 only + + def myfunc(a, b, c, *d, **e): + return 4 + + def otherfunc(f): + return 'result: {}'.format(f) + + # set annotations compatibly with python2 syntax + myfunc.__annotations__ = { + 'a': int, + 'b': str, + 'c': float, + 'd': int, + 'e': bool, + 'return': int, + } + otherfunc.__annotations__ = {'f': int, 'return': str} + + composed = compose(otherfunc, myfunc) + sig = inspect.signature(composed) + assert sig.parameters == inspect.signature(myfunc).parameters + assert sig.return_annotation == str + + class MyClass: + method = composed + + assert len(inspect.signature(MyClass().method).parameters) == 4 + + +def generate_compose_left_test_cases(): + """ + Generate test cases for parametrized tests of the compose function. + + These are based on, and equivalent to, those produced by + enerate_compose_test_cases(). 
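+    (That is, the same cases with the composed functions listed in reverse
+    order, since compose_left applies its functions left to right.)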
+ """ + return tuple( + (tuple(reversed(compose_args)), args, kwargs, expected) + for (compose_args, args, kwargs, expected) + in generate_compose_test_cases() + ) + + +def test_compose_left(): + for (compose_left_args, args, kw, expected) in generate_compose_left_test_cases(): + assert compose_left(*compose_left_args)(*args, **kw) == expected + + +def test_pipe(): + assert pipe(1, inc) == 2 + assert pipe(1, inc, inc) == 3 + assert pipe(1, double, inc, iseven) is False + + +def test_complement(): + # No args: + assert complement(lambda: False)() + assert not complement(lambda: True)() + + # Single arity: + assert complement(iseven)(1) + assert not complement(iseven)(2) + assert complement(complement(iseven))(2) + assert not complement(complement(isodd))(2) + + # Multiple arities: + both_even = lambda a, b: iseven(a) and iseven(b) + assert complement(both_even)(1, 2) + assert not complement(both_even)(2, 2) + + # Generic truthiness: + assert complement(lambda: "")() + assert complement(lambda: 0)() + assert complement(lambda: None)() + assert complement(lambda: [])() + + assert not complement(lambda: "x")() + assert not complement(lambda: 1)() + assert not complement(lambda: [1])() + + +def test_do(): + inc = lambda x: x + 1 + assert do(inc, 1) == 1 + + log = [] + assert do(log.append, 1) == 1 + assert log == [1] + + +def test_juxt_generator_input(): + data = list(range(10)) + juxtfunc = juxt(itemgetter(2*i) for i in range(5)) + assert juxtfunc(data) == (0, 2, 4, 6, 8) + assert juxtfunc(data) == (0, 2, 4, 6, 8) + + +def test_flip(): + def f(a, b): + return a, b + + assert flip(f, 'a', 'b') == ('b', 'a') + + +def test_excepts(): + # These are descriptors, make sure this works correctly. + assert excepts.__name__ == 'excepts' + assert ( + 'A wrapper around a function to catch exceptions and\n' + ' dispatch to a handler.\n' + ) in excepts.__doc__ + + def idx(a): + """idx docstring + """ + return [1, 2].index(a) + + def handler(e): + """handler docstring + """ + assert isinstance(e, ValueError) + return -1 + + excepting = excepts(ValueError, idx, handler) + assert excepting(1) == 0 + assert excepting(2) == 1 + assert excepting(3) == -1 + + assert excepting.__name__ == 'idx_excepting_ValueError' + assert 'idx docstring' in excepting.__doc__ + assert 'ValueError' in excepting.__doc__ + assert 'handler docstring' in excepting.__doc__ + + def getzero(a): + """getzero docstring + """ + return a[0] + + excepting = excepts((IndexError, KeyError), getzero) + assert excepting([]) is None + assert excepting([1]) == 1 + assert excepting({}) is None + assert excepting({0: 1}) == 1 + + assert excepting.__name__ == 'getzero_excepting_IndexError_or_KeyError' + assert 'getzero docstring' in excepting.__doc__ + assert 'return_none' in excepting.__doc__ + assert 'Returns None' in excepting.__doc__ + + def raise_(a): + """A function that raises an instance of the exception type given. 
+ """ + raise a() + + excepting = excepts((ValueError, KeyError), raise_) + assert excepting(ValueError) is None + assert excepting(KeyError) is None + assert raises(TypeError, lambda: excepting(TypeError)) + assert raises(NotImplementedError, lambda: excepting(NotImplementedError)) + + excepting = excepts(object(), object(), object()) + assert excepting.__name__ == 'excepting' + assert excepting.__doc__ == excepts.__doc__ diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_inspect_args.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_inspect_args.py new file mode 100644 index 00000000..b2c56692 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_inspect_args.py @@ -0,0 +1,487 @@ +import functools +import inspect +import itertools +import operator +import cytoolz +from cytoolz.functoolz import (curry, is_valid_args, is_partial_args, is_arity, + num_required_args, has_varargs, has_keywords) +from cytoolz._signatures import builtins +import cytoolz._signatures as _sigs +from cytoolz.utils import raises + + +def make_func(param_string, raise_if_called=True): + if not param_string.startswith('('): + param_string = '(%s)' % param_string + if raise_if_called: + body = 'raise ValueError("function should not be called")' + else: + body = 'return True' + d = {} + exec('def func%s:\n %s' % (param_string, body), globals(), d) + return d['func'] + + +def test_make_func(): + f = make_func('') + assert raises(ValueError, lambda: f()) + assert raises(TypeError, lambda: f(1)) + + f = make_func('', raise_if_called=False) + assert f() + assert raises(TypeError, lambda: f(1)) + + f = make_func('x, y=1', raise_if_called=False) + assert f(1) + assert f(x=1) + assert f(1, 2) + assert f(x=1, y=2) + assert raises(TypeError, lambda: f(1, 2, 3)) + + f = make_func('(x, y=1)', raise_if_called=False) + assert f(1) + assert f(x=1) + assert f(1, 2) + assert f(x=1, y=2) + assert raises(TypeError, lambda: f(1, 2, 3)) + + +def test_is_valid(check_valid=is_valid_args, incomplete=False): + orig_check_valid = check_valid + check_valid = lambda func, *args, **kwargs: orig_check_valid(func, args, kwargs) + + f = make_func('') + assert check_valid(f) + assert check_valid(f, 1) is False + assert check_valid(f, x=1) is False + + f = make_func('x') + assert check_valid(f) is incomplete + assert check_valid(f, 1) + assert check_valid(f, x=1) + assert check_valid(f, 1, x=2) is False + assert check_valid(f, 1, y=2) is False + assert check_valid(f, 1, 2) is False + assert check_valid(f, x=1, y=2) is False + + f = make_func('x=1') + assert check_valid(f) + assert check_valid(f, 1) + assert check_valid(f, x=1) + assert check_valid(f, 1, x=2) is False + assert check_valid(f, 1, y=2) is False + assert check_valid(f, 1, 2) is False + assert check_valid(f, x=1, y=2) is False + + f = make_func('*args') + assert check_valid(f) + assert check_valid(f, 1) + assert check_valid(f, 1, 2) + assert check_valid(f, x=1) is False + + f = make_func('**kwargs') + assert check_valid(f) + assert check_valid(f, x=1) + assert check_valid(f, x=1, y=2) + assert check_valid(f, 1) is False + + f = make_func('x, *args') + assert check_valid(f) is incomplete + assert check_valid(f, 1) + assert check_valid(f, 1, 2) + assert check_valid(f, x=1) + assert check_valid(f, 1, x=1) is False + assert check_valid(f, 1, y=1) is False + + f = make_func('x, y=1, **kwargs') + assert check_valid(f) is incomplete + assert check_valid(f, 1) + assert check_valid(f, x=1) + assert check_valid(f, 1, 2) + assert check_valid(f, x=1, y=2, z=3) + 
assert check_valid(f, 1, 2, y=3) is False + + f = make_func('a, b, c=3, d=4') + assert check_valid(f) is incomplete + assert check_valid(f, 1) is incomplete + assert check_valid(f, 1, 2) + assert check_valid(f, 1, c=3) is incomplete + assert check_valid(f, 1, e=3) is False + assert check_valid(f, 1, 2, e=3) is False + assert check_valid(f, 1, 2, b=3) is False + + assert check_valid(1) is False + + +def test_is_valid_py3(check_valid=is_valid_args, incomplete=False): + orig_check_valid = check_valid + check_valid = lambda func, *args, **kwargs: orig_check_valid(func, args, kwargs) + + f = make_func('x, *, y=1') + assert check_valid(f) is incomplete + assert check_valid(f, 1) + assert check_valid(f, x=1) + assert check_valid(f, 1, y=2) + assert check_valid(f, 1, 2) is False + assert check_valid(f, 1, z=2) is False + + f = make_func('x, *args, y=1') + assert check_valid(f) is incomplete + assert check_valid(f, 1) + assert check_valid(f, x=1) + assert check_valid(f, 1, y=2) + assert check_valid(f, 1, 2, y=2) + assert check_valid(f, 1, 2) + assert check_valid(f, 1, z=2) is False + + f = make_func('*, y=1') + assert check_valid(f) + assert check_valid(f, 1) is False + assert check_valid(f, y=1) + assert check_valid(f, z=1) is False + + f = make_func('x, *, y') + assert check_valid(f) is incomplete + assert check_valid(f, 1) is incomplete + assert check_valid(f, x=1) is incomplete + assert check_valid(f, 1, y=2) + assert check_valid(f, x=1, y=2) + assert check_valid(f, 1, 2) is False + assert check_valid(f, 1, z=2) is False + assert check_valid(f, 1, y=1, z=2) is False + + f = make_func('x=1, *, y, z=3') + assert check_valid(f) is incomplete + assert check_valid(f, 1, z=3) is incomplete + assert check_valid(f, y=2) + assert check_valid(f, 1, y=2) + assert check_valid(f, x=1, y=2) + assert check_valid(f, x=1, y=2, z=3) + assert check_valid(f, 1, x=1, y=2) is False + assert check_valid(f, 1, 3, y=2) is False + + f = make_func('w, x=2, *args, y, z=4') + assert check_valid(f) is incomplete + assert check_valid(f, 1) is incomplete + assert check_valid(f, 1, y=3) + + f = make_func('a, b, c=3, d=4, *args, e=5, f=6, g, h') + assert check_valid(f) is incomplete + assert check_valid(f, 1) is incomplete + assert check_valid(f, 1, 2) is incomplete + assert check_valid(f, 1, 2, g=7) is incomplete + assert check_valid(f, 1, 2, g=7, h=8) + assert check_valid(f, 1, 2, 3, 4, 5, 6, 7, 8, 9) is incomplete + + f = make_func('a: int, b: float') + assert check_valid(f) is incomplete + assert check_valid(f, 1) is incomplete + assert check_valid(f, b=1) is incomplete + assert check_valid(f, 1, 2) + + f = make_func('(a: int, b: float) -> float') + assert check_valid(f) is incomplete + assert check_valid(f, 1) is incomplete + assert check_valid(f, b=1) is incomplete + assert check_valid(f, 1, 2) + + f.__signature__ = 34 + assert check_valid(f) is False + + class RaisesValueError(object): + def __call__(self): + pass + @property + def __signature__(self): + raise ValueError('Testing Python 3.4') + + f = RaisesValueError() + assert check_valid(f) is None + + +def test_is_partial(): + test_is_valid(check_valid=is_partial_args, incomplete=True) + test_is_valid_py3(check_valid=is_partial_args, incomplete=True) + + +def test_is_valid_curry(): + def check_curry(func, args, kwargs, incomplete=True): + try: + curry(func)(*args, **kwargs) + curry(func, *args)(**kwargs) + curry(func, **kwargs)(*args) + curry(func, *args, **kwargs)() + if not isinstance(func, type(lambda: None)): + return None + return incomplete + except ValueError: 
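+                # Added comment: make_func bodies raise ValueError when the
+                # wrapped function is actually invoked, so reaching this
+                # handler proves the curried call was complete and valid
+                # (TypeError below means the arguments were invalid).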
+ return True + except TypeError: + return False + + check_valid = functools.partial(check_curry, incomplete=True) + test_is_valid(check_valid=check_valid, incomplete=True) + test_is_valid_py3(check_valid=check_valid, incomplete=True) + + check_valid = functools.partial(check_curry, incomplete=False) + test_is_valid(check_valid=check_valid, incomplete=False) + test_is_valid_py3(check_valid=check_valid, incomplete=False) + + +def test_func_keyword(): + def f(func=None): + pass + assert is_valid_args(f, (), {}) + assert is_valid_args(f, (None,), {}) + assert is_valid_args(f, (), {'func': None}) + assert is_valid_args(f, (None,), {'func': None}) is False + assert is_partial_args(f, (), {}) + assert is_partial_args(f, (None,), {}) + assert is_partial_args(f, (), {'func': None}) + assert is_partial_args(f, (None,), {'func': None}) is False + + +def test_has_unknown_args(): + assert has_varargs(1) is False + assert has_varargs(map) + assert has_varargs(make_func('')) is False + assert has_varargs(make_func('x, y, z')) is False + assert has_varargs(make_func('*args')) + assert has_varargs(make_func('**kwargs')) is False + assert has_varargs(make_func('x, y, *args, **kwargs')) + assert has_varargs(make_func('x, y, z=1')) is False + assert has_varargs(make_func('x, y, z=1, **kwargs')) is False + + f = make_func('*args') + f.__signature__ = 34 + assert has_varargs(f) is False + + class RaisesValueError(object): + def __call__(self): + pass + @property + def __signature__(self): + raise ValueError('Testing Python 3.4') + + f = RaisesValueError() + assert has_varargs(f) is None + + +def test_num_required_args(): + assert num_required_args(lambda: None) == 0 + assert num_required_args(lambda x: None) == 1 + assert num_required_args(lambda x, *args: None) == 1 + assert num_required_args(lambda x, **kwargs: None) == 1 + assert num_required_args(lambda x, y, *args, **kwargs: None) == 2 + assert num_required_args(map) == 2 + assert num_required_args(dict) is None + + +def test_has_keywords(): + assert has_keywords(lambda: None) is False + assert has_keywords(lambda x: None) is False + assert has_keywords(lambda x=1: None) + assert has_keywords(lambda **kwargs: None) + assert has_keywords(int) + assert has_keywords(sorted) + assert has_keywords(max) + assert has_keywords(map) is False + assert has_keywords(bytearray) is None + + +def test_has_varargs(): + assert has_varargs(lambda: None) is False + assert has_varargs(lambda *args: None) + assert has_varargs(lambda **kwargs: None) is False + assert has_varargs(map) + assert has_varargs(max) is None + + +def test_is_arity(): + assert is_arity(0, lambda: None) + assert is_arity(1, lambda: None) is False + assert is_arity(1, lambda x: None) + assert is_arity(3, lambda x, y, z: None) + assert is_arity(1, lambda x, *args: None) is False + assert is_arity(1, lambda x, **kwargs: None) is False + assert is_arity(1, all) + assert is_arity(2, map) is False + assert is_arity(2, range) is None + + +def test_introspect_curry_valid_py3(check_valid=is_valid_args, incomplete=False): + orig_check_valid = check_valid + check_valid = lambda _func, *args, **kwargs: orig_check_valid(_func, args, kwargs) + + f = cytoolz.curry(make_func('x, y, z=0')) + assert check_valid(f) + assert check_valid(f, 1) + assert check_valid(f, 1, 2) + assert check_valid(f, 1, 2, 3) + assert check_valid(f, 1, 2, 3, 4) is False + assert check_valid(f, invalid_keyword=True) is False + assert check_valid(f(1)) + assert check_valid(f(1), 2) + assert check_valid(f(1), 2, 3) + assert check_valid(f(1), 2, 3, 
4) is False + assert check_valid(f(1), x=2) is False + assert check_valid(f(1), y=2) + assert check_valid(f(x=1), 2) is False + assert check_valid(f(x=1), y=2) + assert check_valid(f(y=2), 1) + assert check_valid(f(y=2), 1, z=3) + assert check_valid(f(y=2), 1, 3) is False + + f = cytoolz.curry(make_func('x, y, z=0'), 1, x=1) + assert check_valid(f) is False + assert check_valid(f, z=3) is False + + f = cytoolz.curry(make_func('x, y, *args, z')) + assert check_valid(f) + assert check_valid(f, 0) + assert check_valid(f(1), 0) + assert check_valid(f(1, 2), 0) + assert check_valid(f(1, 2, 3), 0) + assert check_valid(f(1, 2, 3, 4), 0) + assert check_valid(f(1, 2, 3, 4), z=4) + assert check_valid(f(x=1)) + assert check_valid(f(x=1), 1) is False + assert check_valid(f(x=1), y=2) + + +def test_introspect_curry_partial_py3(): + test_introspect_curry_valid_py3(check_valid=is_partial_args, incomplete=True) + + +def test_introspect_curry_py3(): + f = cytoolz.curry(make_func('')) + assert num_required_args(f) == 0 + assert is_arity(0, f) + assert has_varargs(f) is False + assert has_keywords(f) is False + + f = cytoolz.curry(make_func('x')) + assert num_required_args(f) == 0 + assert is_arity(0, f) is False + assert is_arity(1, f) is False + assert has_varargs(f) is False + assert has_keywords(f) # A side-effect of being curried + + f = cytoolz.curry(make_func('x, y, z=0')) + assert num_required_args(f) == 0 + assert is_arity(0, f) is False + assert is_arity(1, f) is False + assert is_arity(2, f) is False + assert is_arity(3, f) is False + assert has_varargs(f) is False + assert has_keywords(f) + + f = cytoolz.curry(make_func('*args, **kwargs')) + assert num_required_args(f) == 0 + assert has_varargs(f) + assert has_keywords(f) + + +def test_introspect_builtin_modules(): + mods = [builtins, functools, itertools, operator, cytoolz, + cytoolz.functoolz, cytoolz.itertoolz, cytoolz.dicttoolz, cytoolz.recipes] + + denylist = set() + + def add_denylist(mod, attr): + if hasattr(mod, attr): + denylist.add(getattr(mod, attr)) + + add_denylist(builtins, 'basestring') + add_denylist(builtins, 'NoneType') + add_denylist(builtins, '__metaclass__') + add_denylist(builtins, 'sequenceiterator') + + def is_missing(modname, name, func): + if name.startswith('_') and not name.startswith('__'): + return False + if name.startswith('__pyx_unpickle_') or name.endswith('_cython__'): + return False + try: + if issubclass(func, BaseException): + return False + except TypeError: + pass + try: + return (callable(func) + and func.__module__ is not None + and modname in func.__module__ + and is_partial_args(func, (), {}) is not True + and func not in denylist) + except AttributeError: + return False + + missing = {} + for mod in mods: + modname = mod.__name__ + for name, func in vars(mod).items(): + if is_missing(modname, name, func): + if modname not in missing: + missing[modname] = [] + missing[modname].append(name) + if missing: + messages = [] + for modname, names in sorted(missing.items()): + msg = '{}:\n {}'.format(modname, '\n '.join(sorted(names))) + messages.append(msg) + message = 'Missing introspection for the following callables:\n\n' + raise AssertionError(message + '\n\n'.join(messages)) + + +def test_inspect_signature_property(): + + # By adding AddX to our signature registry, we can inspect the class + # itself and objects of the class. `inspect.signature` doesn't like + # it when `obj.__signature__` is a property. 
+ class AddX(object): + def __init__(self, func): + self.func = func + + def __call__(self, addx, *args, **kwargs): + return addx + self.func(*args, **kwargs) + + @property + def __signature__(self): + sig = inspect.signature(self.func) + params = list(sig.parameters.values()) + kind = inspect.Parameter.POSITIONAL_OR_KEYWORD + newparam = inspect.Parameter('addx', kind) + params = [newparam] + params + return sig.replace(parameters=params) + + addx = AddX(lambda x: x) + sig = inspect.signature(addx) + assert sig == inspect.Signature(parameters=[ + inspect.Parameter('addx', inspect.Parameter.POSITIONAL_OR_KEYWORD), + inspect.Parameter('x', inspect.Parameter.POSITIONAL_OR_KEYWORD)]) + + assert num_required_args(AddX) is False + _sigs.signatures[AddX] = (_sigs.expand_sig((0, lambda func: None)),) + assert num_required_args(AddX) == 1 + del _sigs.signatures[AddX] + + +def test_inspect_wrapped_property(): + class Wrapped(object): + def __init__(self, func): + self.func = func + + def __call__(self, *args, **kwargs): + return self.func(*args, **kwargs) + + @property + def __wrapped__(self): + return self.func + + func = lambda x: x + wrapped = Wrapped(func) + assert inspect.signature(func) == inspect.signature(wrapped) + + assert num_required_args(Wrapped) is None + _sigs.signatures[Wrapped] = (_sigs.expand_sig((0, lambda func: None)),) + assert num_required_args(Wrapped) == 1 diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_itertoolz.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_itertoolz.py new file mode 100644 index 00000000..a9d87e1a --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_itertoolz.py @@ -0,0 +1,549 @@ +import itertools +from itertools import starmap +from cytoolz.utils import raises +from functools import partial +from random import Random +from pickle import dumps, loads +from cytoolz.itertoolz import (remove, groupby, merge_sorted, + concat, concatv, interleave, unique, + isiterable, getter, + mapcat, isdistinct, first, second, + nth, take, tail, drop, interpose, get, + rest, last, cons, frequencies, + reduceby, iterate, accumulate, + sliding_window, count, partition, + partition_all, take_nth, pluck, join, + diff, topk, peek, peekn, random_sample) +from operator import add, mul + + +# is comparison will fail between this and no_default +no_default2 = loads(dumps('__no__default__')) + + +def identity(x): + return x + + +def iseven(x): + return x % 2 == 0 + + +def isodd(x): + return x % 2 == 1 + + +def inc(x): + return x + 1 + + +def double(x): + return 2 * x + + +def test_remove(): + r = remove(iseven, range(5)) + assert type(r) is not list + assert list(r) == list(filter(isodd, range(5))) + + +def test_groupby(): + assert groupby(iseven, [1, 2, 3, 4]) == {True: [2, 4], False: [1, 3]} + + +def test_groupby_non_callable(): + assert groupby(0, [(1, 2), (1, 3), (2, 2), (2, 4)]) == \ + {1: [(1, 2), (1, 3)], + 2: [(2, 2), (2, 4)]} + + assert groupby([0], [(1, 2), (1, 3), (2, 2), (2, 4)]) == \ + {(1,): [(1, 2), (1, 3)], + (2,): [(2, 2), (2, 4)]} + + assert groupby([0, 0], [(1, 2), (1, 3), (2, 2), (2, 4)]) == \ + {(1, 1): [(1, 2), (1, 3)], + (2, 2): [(2, 2), (2, 4)]} + + +def test_merge_sorted(): + assert list(merge_sorted([1, 2, 3], [1, 2, 3])) == [1, 1, 2, 2, 3, 3] + assert list(merge_sorted([1, 3, 5], [2, 4, 6])) == [1, 2, 3, 4, 5, 6] + assert list(merge_sorted([1], [2, 4], [3], [])) == [1, 2, 3, 4] + assert list(merge_sorted([5, 3, 1], [6, 4, 3], [], + key=lambda x: -x)) == [6, 5, 4, 3, 3, 1] + assert list(merge_sorted([2, 1, 3], [1, 2, 
3], + key=lambda x: x // 3)) == [2, 1, 1, 2, 3, 3] + assert list(merge_sorted([2, 3], [1, 3], + key=lambda x: x // 3)) == [2, 1, 3, 3] + assert ''.join(merge_sorted('abc', 'abc', 'abc')) == 'aaabbbccc' + assert ''.join(merge_sorted('abc', 'abc', 'abc', key=ord)) == 'aaabbbccc' + assert ''.join(merge_sorted('cba', 'cba', 'cba', + key=lambda x: -ord(x))) == 'cccbbbaaa' + assert list(merge_sorted([1], [2, 3, 4], key=identity)) == [1, 2, 3, 4] + + data = [[(1, 2), (0, 4), (3, 6)], [(5, 3), (6, 5), (8, 8)], + [(9, 1), (9, 8), (9, 9)]] + assert list(merge_sorted(*data, key=lambda x: x[1])) == [ + (9, 1), (1, 2), (5, 3), (0, 4), (6, 5), (3, 6), (8, 8), (9, 8), (9, 9)] + assert list(merge_sorted()) == [] + assert list(merge_sorted([1, 2, 3])) == [1, 2, 3] + assert list(merge_sorted([1, 4, 5], [2, 3])) == [1, 2, 3, 4, 5] + assert list(merge_sorted([1, 4, 5], [2, 3], key=identity)) == [ + 1, 2, 3, 4, 5] + assert list(merge_sorted([1, 5], [2], [4, 7], [3, 6], key=identity)) == [ + 1, 2, 3, 4, 5, 6, 7] + + +def test_interleave(): + assert ''.join(interleave(('ABC', '123'))) == 'A1B2C3' + assert ''.join(interleave(('ABC', '1'))) == 'A1BC' + + +def test_unique(): + assert tuple(unique((1, 2, 3))) == (1, 2, 3) + assert tuple(unique((1, 2, 1, 3))) == (1, 2, 3) + assert tuple(unique((1, 2, 3), key=iseven)) == (1, 2) + + +def test_isiterable(): + assert isiterable([1, 2, 3]) is True + assert isiterable('abc') is True + assert isiterable(5) is False + + +def test_isdistinct(): + assert isdistinct([1, 2, 3]) is True + assert isdistinct([1, 2, 1]) is False + + assert isdistinct("Hello") is False + assert isdistinct("World") is True + + assert isdistinct(iter([1, 2, 3])) is True + assert isdistinct(iter([1, 2, 1])) is False + + +def test_nth(): + assert nth(2, 'ABCDE') == 'C' + assert nth(2, iter('ABCDE')) == 'C' + assert nth(1, (3, 2, 1)) == 2 + assert nth(0, {'foo': 'bar'}) == 'foo' + assert raises(StopIteration, lambda: nth(10, {10: 'foo'})) + assert nth(-2, 'ABCDE') == 'D' + assert raises(ValueError, lambda: nth(-2, iter('ABCDE'))) + + +def test_first(): + assert first('ABCDE') == 'A' + assert first((3, 2, 1)) == 3 + assert isinstance(first({0: 'zero', 1: 'one'}), int) + + +def test_second(): + assert second('ABCDE') == 'B' + assert second((3, 2, 1)) == 2 + assert isinstance(second({0: 'zero', 1: 'one'}), int) + + +def test_last(): + assert last('ABCDE') == 'E' + assert last((3, 2, 1)) == 1 + assert isinstance(last({0: 'zero', 1: 'one'}), int) + + +def test_rest(): + assert list(rest('ABCDE')) == list('BCDE') + assert list(rest((3, 2, 1))) == list((2, 1)) + + +def test_take(): + assert list(take(3, 'ABCDE')) == list('ABC') + assert list(take(2, (3, 2, 1))) == list((3, 2)) + + +def test_tail(): + assert list(tail(3, 'ABCDE')) == list('CDE') + assert list(tail(3, iter('ABCDE'))) == list('CDE') + assert list(tail(2, (3, 2, 1))) == list((2, 1)) + + +def test_drop(): + assert list(drop(3, 'ABCDE')) == list('DE') + assert list(drop(1, (3, 2, 1))) == list((2, 1)) + + +def test_take_nth(): + assert list(take_nth(2, 'ABCDE')) == list('ACE') + + +def test_get(): + assert get(1, 'ABCDE') == 'B' + assert list(get([1, 3], 'ABCDE')) == list('BD') + assert get('a', {'a': 1, 'b': 2, 'c': 3}) == 1 + assert get(['a', 'b'], {'a': 1, 'b': 2, 'c': 3}) == (1, 2) + + assert get('foo', {}, default='bar') == 'bar' + assert get({}, [1, 2, 3], default='bar') == 'bar' + assert get([0, 2], 'AB', 'C') == ('A', 'C') + + assert get([0], 'AB') == ('A',) + assert get([], 'AB') == () + + assert raises(IndexError, lambda: get(10, 'ABC')) + 
assert raises(KeyError, lambda: get(10, {'a': 1})) + assert raises(TypeError, lambda: get({}, [1, 2, 3])) + assert raises(TypeError, lambda: get([1, 2, 3], 1, None)) + assert raises(KeyError, lambda: get('foo', {}, default=no_default2)) + + +def test_mapcat(): + assert (list(mapcat(identity, [[1, 2, 3], [4, 5, 6]])) == + [1, 2, 3, 4, 5, 6]) + + assert (list(mapcat(reversed, [[3, 2, 1, 0], [6, 5, 4], [9, 8, 7]])) == + list(range(10))) + + inc = lambda i: i + 1 + assert ([4, 5, 6, 7, 8, 9] == + list(mapcat(partial(map, inc), [[3, 4, 5], [6, 7, 8]]))) + + +def test_cons(): + assert list(cons(1, [2, 3])) == [1, 2, 3] + + +def test_concat(): + assert list(concat([[], [], []])) == [] + assert (list(take(5, concat([['a', 'b'], range(1000000000)]))) == + ['a', 'b', 0, 1, 2]) + + +def test_concatv(): + assert list(concatv([], [], [])) == [] + assert (list(take(5, concatv(['a', 'b'], range(1000000000)))) == + ['a', 'b', 0, 1, 2]) + + +def test_interpose(): + assert "a" == first(rest(interpose("a", range(1000000000)))) + assert "tXaXrXzXaXn" == "".join(interpose("X", "tarzan")) + assert list(interpose(0, itertools.repeat(1, 4))) == [1, 0, 1, 0, 1, 0, 1] + assert list(interpose('.', ['a', 'b', 'c'])) == ['a', '.', 'b', '.', 'c'] + + +def test_frequencies(): + assert (frequencies(["cat", "pig", "cat", "eel", + "pig", "dog", "dog", "dog"]) == + {"cat": 2, "eel": 1, "pig": 2, "dog": 3}) + assert frequencies([]) == {} + assert frequencies("onomatopoeia") == {"a": 2, "e": 1, "i": 1, "m": 1, + "o": 4, "n": 1, "p": 1, "t": 1} + + +def test_reduceby(): + data = [1, 2, 3, 4, 5] + iseven = lambda x: x % 2 == 0 + assert reduceby(iseven, add, data, 0) == {False: 9, True: 6} + assert reduceby(iseven, mul, data, 1) == {False: 15, True: 8} + + projects = [{'name': 'build roads', 'state': 'CA', 'cost': 1000000}, + {'name': 'fight crime', 'state': 'IL', 'cost': 100000}, + {'name': 'help farmers', 'state': 'IL', 'cost': 2000000}, + {'name': 'help farmers', 'state': 'CA', 'cost': 200000}] + assert reduceby(lambda x: x['state'], + lambda acc, x: acc + x['cost'], + projects, 0) == {'CA': 1200000, 'IL': 2100000} + + assert reduceby('state', + lambda acc, x: acc + x['cost'], + projects, 0) == {'CA': 1200000, 'IL': 2100000} + + +def test_reduce_by_init(): + assert reduceby(iseven, add, [1, 2, 3, 4]) == {True: 2 + 4, False: 1 + 3} + assert reduceby(iseven, add, [1, 2, 3, 4], no_default2) == {True: 2 + 4, + False: 1 + 3} + + +def test_reduce_by_callable_default(): + def set_add(s, i): + s.add(i) + return s + + assert reduceby(iseven, set_add, [1, 2, 3, 4, 1, 2], set) == \ + {True: {2, 4}, False: {1, 3}} + + +def test_iterate(): + assert list(itertools.islice(iterate(inc, 0), 0, 5)) == [0, 1, 2, 3, 4] + assert list(take(4, iterate(double, 1))) == [1, 2, 4, 8] + + +def test_accumulate(): + assert list(accumulate(add, [1, 2, 3, 4, 5])) == [1, 3, 6, 10, 15] + assert list(accumulate(mul, [1, 2, 3, 4, 5])) == [1, 2, 6, 24, 120] + assert list(accumulate(add, [1, 2, 3, 4, 5], -1)) == [-1, 0, 2, 5, 9, 14] + + def binop(a, b): + raise AssertionError('binop should not be called') + + start = object() + assert list(accumulate(binop, [], start)) == [start] + assert list(accumulate(binop, [])) == [] + assert list(accumulate(add, [1, 2, 3], no_default2)) == [1, 3, 6] + + +def test_accumulate_works_on_consumable_iterables(): + assert list(accumulate(add, iter((1, 2, 3)))) == [1, 3, 6] + + +def test_sliding_window(): + assert list(sliding_window(2, [1, 2, 3, 4])) == [(1, 2), (2, 3), (3, 4)] + assert list(sliding_window(3, [1, 2, 3, 4])) == 
[(1, 2, 3), (2, 3, 4)] + + +def test_sliding_window_of_short_iterator(): + assert list(sliding_window(3, [1, 2])) == [] + assert list(sliding_window(7, [1, 2])) == [] + + +def test_partition(): + assert list(partition(2, [1, 2, 3, 4])) == [(1, 2), (3, 4)] + assert list(partition(3, range(7))) == [(0, 1, 2), (3, 4, 5)] + assert list(partition(3, range(4), pad=-1)) == [(0, 1, 2), + (3, -1, -1)] + assert list(partition(2, [])) == [] + + +def test_partition_all(): + assert list(partition_all(2, [1, 2, 3, 4])) == [(1, 2), (3, 4)] + assert list(partition_all(3, range(5))) == [(0, 1, 2), (3, 4)] + assert list(partition_all(2, [])) == [] + + # Regression test: https://github.com/pycytoolz/cytoolz/issues/387 + class NoCompare(object): + def __eq__(self, other): + if self.__class__ == other.__class__: + return True + raise ValueError() + obj = NoCompare() + result = [(obj, obj, obj, obj), (obj, obj, obj)] + assert list(partition_all(4, [obj]*7)) == result + assert list(partition_all(4, iter([obj]*7))) == result + + +def test_count(): + assert count((1, 2, 3)) == 3 + assert count([]) == 0 + assert count(iter((1, 2, 3, 4))) == 4 + + assert count('hello') == 5 + assert count(iter('hello')) == 5 + + +def test_pluck(): + assert list(pluck(0, [[0, 1], [2, 3], [4, 5]])) == [0, 2, 4] + assert list(pluck([0, 1], [[0, 1, 2], [3, 4, 5]])) == [(0, 1), (3, 4)] + assert list(pluck(1, [[0], [0, 1]], None)) == [None, 1] + + data = [{'id': 1, 'name': 'cheese'}, {'id': 2, 'name': 'pies', 'price': 1}] + assert list(pluck('id', data)) == [1, 2] + assert list(pluck('price', data, 0)) == [0, 1] + assert list(pluck(['id', 'name'], data)) == [(1, 'cheese'), (2, 'pies')] + assert list(pluck(['name'], data)) == [('cheese',), ('pies',)] + assert list(pluck(['price', 'other'], data, 0)) == [(0, 0), (1, 0)] + + assert raises(IndexError, lambda: list(pluck(1, [[0]]))) + assert raises(KeyError, lambda: list(pluck('name', [{'id': 1}]))) + + assert list(pluck(0, [[0, 1], [2, 3], [4, 5]], no_default2)) == [0, 2, 4] + assert raises(IndexError, lambda: list(pluck(1, [[0]], no_default2))) + + +def test_join(): + names = [(1, 'one'), (2, 'two'), (3, 'three')] + fruit = [('apple', 1), ('orange', 1), ('banana', 2), ('coconut', 2)] + + def addpair(pair): + return pair[0] + pair[1] + + result = set(starmap(add, join(first, names, second, fruit))) + + expected = {(1, 'one', 'apple', 1), + (1, 'one', 'orange', 1), + (2, 'two', 'banana', 2), + (2, 'two', 'coconut', 2)} + + assert result == expected + + result = set(starmap(add, join(first, names, second, fruit, + left_default=no_default2, + right_default=no_default2))) + assert result == expected + + +def test_getter(): + assert getter(0)('Alice') == 'A' + assert getter([0])('Alice') == ('A',) + assert getter([])('Alice') == () + + +def test_key_as_getter(): + squares = [(i, i**2) for i in range(5)] + pows = [(i, i**2, i**3) for i in range(5)] + + assert set(join(0, squares, 0, pows)) == set(join(lambda x: x[0], squares, + lambda x: x[0], pows)) + + get = lambda x: (x[0], x[1]) + assert set(join([0, 1], squares, [0, 1], pows)) == set(join(get, squares, + get, pows)) + + get = lambda x: (x[0],) + assert set(join([0], squares, [0], pows)) == set(join(get, squares, + get, pows)) + + +def test_join_double_repeats(): + names = [(1, 'one'), (2, 'two'), (3, 'three'), (1, 'uno'), (2, 'dos')] + fruit = [('apple', 1), ('orange', 1), ('banana', 2), ('coconut', 2)] + + result = set(starmap(add, join(first, names, second, fruit))) + + expected = {(1, 'one', 'apple', 1), + (1, 'one', 'orange', 1), + (2, 
'two', 'banana', 2),
+                (2, 'two', 'coconut', 2),
+                (1, 'uno', 'apple', 1),
+                (1, 'uno', 'orange', 1),
+                (2, 'dos', 'banana', 2),
+                (2, 'dos', 'coconut', 2)}
+
+    assert result == expected
+
+
+def test_join_missing_element():
+    names = [(1, 'one'), (2, 'two'), (3, 'three')]
+    fruit = [('apple', 5), ('orange', 1)]
+
+    result = set(starmap(add, join(first, names, second, fruit)))
+
+    expected = {(1, 'one', 'orange', 1)}
+
+    assert result == expected
+
+
+def test_left_outer_join():
+    result = set(join(identity, [1, 2], identity, [2, 3], left_default=None))
+    expected = {(2, 2), (None, 3)}
+
+    assert result == expected
+
+
+def test_right_outer_join():
+    result = set(join(identity, [1, 2], identity, [2, 3], right_default=None))
+    expected = {(2, 2), (1, None)}
+
+    assert result == expected
+
+
+def test_outer_join():
+    result = set(join(identity, [1, 2], identity, [2, 3],
+                      left_default=None, right_default=None))
+    expected = {(2, 2), (1, None), (None, 3)}
+
+    assert result == expected
+
+
+def test_diff():
+    assert raises(TypeError, lambda: list(diff()))
+    assert raises(TypeError, lambda: list(diff([1, 2])))
+    assert raises(TypeError, lambda: list(diff([1, 2], 3)))
+    assert list(diff([1, 2], (1, 2), iter([1, 2]))) == []
+    assert list(diff([1, 2, 3], (1, 10, 3), iter([1, 2, 10]))) == [
+        (2, 10, 2), (3, 3, 10)]
+    assert list(diff([1, 2], [10])) == [(1, 10)]
+    assert list(diff([1, 2], [10], default=None)) == [(1, 10), (2, None)]
+    # non-variadic usage
+    assert raises(TypeError, lambda: list(diff([])))
+    assert raises(TypeError, lambda: list(diff([[]])))
+    assert raises(TypeError, lambda: list(diff([[1, 2]])))
+    assert raises(TypeError, lambda: list(diff([[1, 2], 3])))
+    assert list(diff([(1, 2), (1, 3)])) == [(2, 3)]
+
+    data1 = [{'cost': 1, 'currency': 'dollar'},
+             {'cost': 2, 'currency': 'dollar'}]
+
+    data2 = [{'cost': 100, 'currency': 'yen'},
+             {'cost': 300, 'currency': 'yen'}]
+
+    conversions = {'dollar': 1, 'yen': 0.01}
+
+    def indollars(item):
+        return conversions[item['currency']] * item['cost']
+
+    assert list(diff(data1, data2, key=indollars)) == [
+        ({'cost': 2, 'currency': 'dollar'}, {'cost': 300, 'currency': 'yen'})]
+
+
+def test_topk():
+    assert topk(2, [4, 1, 5, 2]) == (5, 4)
+    assert topk(2, [4, 1, 5, 2], key=lambda x: -x) == (1, 2)
+    assert topk(2, iter([5, 1, 4, 2]), key=lambda x: -x) == (1, 2)
+
+    assert topk(2, [{'a': 1, 'b': 10}, {'a': 2, 'b': 9},
+                    {'a': 10, 'b': 1}, {'a': 9, 'b': 2}], key='a') == \
+        ({'a': 10, 'b': 1}, {'a': 9, 'b': 2})
+
+    assert topk(2, [{'a': 1, 'b': 10}, {'a': 2, 'b': 9},
+                    {'a': 10, 'b': 1}, {'a': 9, 'b': 2}], key='b') == \
+        ({'a': 1, 'b': 10}, {'a': 2, 'b': 9})
+    assert topk(2, [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)], 0) == \
+        ((4, 0), (3, 1))
+
+
+def test_topk_is_stable():
+    assert topk(4, [5, 9, 2, 1, 5, 3], key=lambda x: 1) == (5, 9, 2, 1)
+
+
+def test_peek():
+    alist = ["Alice", "Bob", "Carol"]
+    element, blist = peek(alist)
+    assert element == alist[0]
+    assert list(blist) == alist
+
+    assert raises(StopIteration, lambda: peek([]))
+
+
+def test_peekn():
+    alist = ("Alice", "Bob", "Carol")
+    elements, blist = peekn(2, alist)
+    assert elements == alist[:2]
+    assert tuple(blist) == alist
+
+    elements, blist = peekn(len(alist) * 4, alist)
+    assert elements == alist
+    assert tuple(blist) == alist
+
+
+def test_random_sample():
+    alist = list(range(100))
+
+    assert list(random_sample(prob=1, seq=alist, random_state=2016)) == alist
+
+    mk_rsample = lambda rs=1: list(random_sample(prob=0.1,
+                                                 seq=alist,
+                                                 random_state=rs))
+    rsample1 = mk_rsample()
+    assert rsample1 == mk_rsample()
+
+    rsample2 = mk_rsample(1984)
+    randobj = Random(1984)
+    assert rsample2 == mk_rsample(randobj)
+
+    assert rsample1 != rsample2
+
+    assert mk_rsample(hash(object)) == mk_rsample(hash(object))
+    assert mk_rsample(hash(object)) != mk_rsample(hash(object()))
+    assert mk_rsample(b"a") == mk_rsample(u"a")
+
+    assert raises(TypeError, lambda: mk_rsample([]))
diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_none_safe.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_none_safe.py
new file mode 100644
index 00000000..7d00a935
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_none_safe.py
@@ -0,0 +1,353 @@
+""" Test that functions are reasonably behaved with None as input.
+
+Typed Cython objects (like dict) may also be None.  Using functions from
+Python's C API that expect a specific type but receive None instead can cause
+problems such as throwing an uncatchable SystemError (and some systems may
+segfault instead).  We obviously don't want that to happen!  As the tests
+below discovered, this turned out to be a rare occurrence.  The only changes
+required were to use `d.copy()` instead of `PyDict_Copy(d)`, and to always
+return Python objects from functions instead of int or bint (so exceptions
+can propagate).
+
+The vast majority of functions throw TypeError.  The vast majority of
+functions also behave the same in `toolz` and `cytoolz`.  However, there
+are a few minor exceptions.  Since passing None to functions is an edge case
+that doesn't have well-established behavior yet (other than raising
+TypeError), the tests in this file serve to verify that the behavior is at
+least reasonably well-behaved and doesn't cause SystemErrors.
+
+"""
+# XXX: This file could be back-ported to `toolz` once unified testing exists.
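+# A minimal, hypothetical Cython sketch (not part of cytoolz) of the failure
+# mode described above.  Under older Cython defaults, a function declared
+# with a C return type and no `except` clause has no way to signal an
+# exception raised in its body, so the error can be swallowed or can surface
+# later as an uncatchable SystemError at a C-API boundary:
+#
+#     cpdef bint iseven_unsafe(x):   # C return type, no `except?` clause
+#         return x % 2 == 0          # x=None raises TypeError with no way out
+#
+#     cpdef object iseven_safe(x):   # returning a Python object instead
+#         return x % 2 == 0          # lets the TypeError propagate normally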
+import cytoolz +from cytoolz import * +from cytoolz.utils import raises +from operator import add + + +class GenException(object): + def __init__(self, exc): + self.exc = exc + + def __iter__(self): + return self + + def __next__(self): + raise self.exc + + def next(self): + raise self.exc + + +def test_dicttoolz(): + tested = [] + assert raises((TypeError, AttributeError), lambda: assoc(None, 1, 2)) + tested.append('assoc') + + assert raises((TypeError, AttributeError), lambda: dissoc(None, 1)) + tested.append('dissoc') + + # XXX + assert (raises(TypeError, lambda: get_in(None, {})) or + get_in(None, {}) is None) + + assert raises(TypeError, lambda: get_in(None, {}, no_default=True)) + assert get_in([0, 1], None) is None + assert raises(TypeError, lambda: get_in([0, 1], None, no_default=True)) + tested.append('get_in') + + assert raises(TypeError, lambda: keyfilter(None, {1: 2})) + assert raises((AttributeError, TypeError), lambda: keyfilter(identity, None)) + tested.append('keyfilter') + + # XXX + assert (raises(TypeError, lambda: keymap(None, {1: 2})) or + keymap(None, {1: 2}) == {(1,): 2}) + assert raises((AttributeError, TypeError), lambda: keymap(identity, None)) + tested.append('keymap') + + assert raises(TypeError, lambda: merge(None)) + assert raises((TypeError, AttributeError), lambda: merge(None, None)) + tested.append('merge') + + assert raises(TypeError, lambda: merge_with(None, {1: 2}, {3: 4})) + assert raises(TypeError, lambda: merge_with(identity, None)) + assert raises((TypeError, AttributeError), + lambda: merge_with(identity, None, None)) + tested.append('merge_with') + + assert raises(TypeError, lambda: update_in({1: {2: 3}}, [1, 2], None)) + assert raises(TypeError, lambda: update_in({1: {2: 3}}, None, identity)) + assert raises((TypeError, AttributeError), + lambda: update_in(None, [1, 2], identity)) + tested.append('update_in') + + assert raises(TypeError, lambda: valfilter(None, {1: 2})) + assert raises((AttributeError, TypeError), lambda: valfilter(identity, None)) + tested.append('valfilter') + + # XXX + assert (raises(TypeError, lambda: valmap(None, {1: 2})) or + valmap(None, {1: 2}) == {1: (2,)}) + assert raises((AttributeError, TypeError), lambda: valmap(identity, None)) + tested.append('valmap') + + assert (raises(TypeError, lambda: itemmap(None, {1: 2})) or + itemmap(None, {1: 2}) == {1: (2,)}) + assert raises((AttributeError, TypeError), lambda: itemmap(identity, None)) + tested.append('itemmap') + + assert raises(TypeError, lambda: itemfilter(None, {1: 2})) + assert raises((AttributeError, TypeError), lambda: itemfilter(identity, None)) + tested.append('itemfilter') + + assert raises((AttributeError, TypeError), lambda: assoc_in(None, [2, 2], 3)) + assert raises(TypeError, lambda: assoc_in({}, None, 3)) + tested.append('assoc_in') + + s1 = set(tested) + s2 = set(cytoolz.dicttoolz.__all__) + assert s1 == s2, '%s not tested for being None-safe' % ', '.join(s2 - s1) + + +def test_functoolz(): + tested = [] + assert raises(TypeError, lambda: complement(None)()) + tested.append('complement') + + assert compose(None) is None + assert raises(TypeError, lambda: compose(None, None)()) + tested.append('compose') + + assert compose_left(None) is None + assert raises(TypeError, lambda: compose_left(None, None)()) + tested.append('compose_left') + + assert raises(TypeError, lambda: curry(None)) + tested.append('curry') + + assert raises(TypeError, lambda: do(None, 1)) + tested.append('do') + + assert identity(None) is None + tested.append('identity') + + assert 
raises(TypeError, lambda: juxt(None)) + assert raises(TypeError, lambda: list(juxt(None, None)())) + tested.append('juxt') + + assert memoize(identity, key=None)(1) == 1 + assert memoize(identity, cache=None)(1) == 1 + tested.append('memoize') + + assert raises(TypeError, lambda: pipe(1, None)) + tested.append('pipe') + + assert thread_first(1, None) is None + tested.append('thread_first') + + assert thread_last(1, None) is None + tested.append('thread_last') + + assert flip(lambda a, b: (a, b))(None)(None) == (None, None) + tested.append('flip') + + assert apply(identity, None) is None + assert raises(TypeError, lambda: apply(None)) + tested.append('apply') + + excepts(None, lambda x: x) + excepts(TypeError, None) + tested.append('excepts') + + s1 = set(tested) + s2 = set(cytoolz.functoolz.__all__) + assert s1 == s2, '%s not tested for being None-safe' % ', '.join(s2 - s1) + + +def test_itertoolz(): + tested = [] + assert raises(TypeError, lambda: list(accumulate(None, [1, 2]))) + assert raises(TypeError, lambda: list(accumulate(identity, None))) + tested.append('accumulate') + + assert raises(TypeError, lambda: concat(None)) + assert raises(TypeError, lambda: list(concat([None]))) + tested.append('concat') + + assert raises(TypeError, lambda: list(concatv(None))) + tested.append('concatv') + + assert raises(TypeError, lambda: list(cons(1, None))) + tested.append('cons') + + assert raises(TypeError, lambda: count(None)) + tested.append('count') + + # XXX + assert (raises(TypeError, lambda: list(drop(None, [1, 2]))) or + list(drop(None, [1, 2])) == [1, 2]) + + assert raises(TypeError, lambda: list(drop(1, None))) + tested.append('drop') + + assert raises(TypeError, lambda: first(None)) + tested.append('first') + + assert raises(TypeError, lambda: frequencies(None)) + tested.append('frequencies') + + assert raises(TypeError, lambda: get(1, None)) + assert raises(TypeError, lambda: get([1, 2], None)) + tested.append('get') + + assert raises(TypeError, lambda: groupby(None, [1, 2])) + assert raises(TypeError, lambda: groupby(identity, None)) + tested.append('groupby') + + assert raises(TypeError, lambda: list(interleave(None))) + assert raises(TypeError, lambda: list(interleave([None, None]))) + assert raises(TypeError, + lambda: list(interleave([[1, 2], GenException(ValueError)], + pass_exceptions=None))) + tested.append('interleave') + + assert raises(TypeError, lambda: list(interpose(1, None))) + tested.append('interpose') + + assert raises(TypeError, lambda: isdistinct(None)) + tested.append('isdistinct') + + assert isiterable(None) is False + tested.append('isiterable') + + assert raises(TypeError, lambda: list(iterate(None, 1))) + tested.append('iterate') + + assert raises(TypeError, lambda: last(None)) + tested.append('last') + + # XXX + assert (raises(TypeError, lambda: list(mapcat(None, [[1], [2]]))) or + list(mapcat(None, [[1], [2]])) == [[1], [2]]) + assert raises(TypeError, lambda: list(mapcat(identity, [None, [2]]))) + assert raises(TypeError, lambda: list(mapcat(identity, None))) + tested.append('mapcat') + + assert raises(TypeError, lambda: list(merge_sorted(None, [1, 2]))) + tested.append('merge_sorted') + + assert raises(TypeError, lambda: nth(None, [1, 2])) + assert raises(TypeError, lambda: nth(0, None)) + tested.append('nth') + + assert raises(TypeError, lambda: partition(None, [1, 2, 3])) + assert raises(TypeError, lambda: partition(1, None)) + tested.append('partition') + + assert raises(TypeError, lambda: list(partition_all(None, [1, 2, 3]))) + assert raises(TypeError, 
lambda: list(partition_all(1, None))) + tested.append('partition_all') + + assert raises(TypeError, lambda: list(pluck(None, [[1], [2]]))) + assert raises(TypeError, lambda: list(pluck(0, [None, [2]]))) + assert raises(TypeError, lambda: list(pluck(0, None))) + tested.append('pluck') + + assert raises(TypeError, lambda: reduceby(None, add, [1, 2, 3], 0)) + assert raises(TypeError, lambda: reduceby(identity, None, [1, 2, 3], 0)) + assert raises(TypeError, lambda: reduceby(identity, add, None, 0)) + tested.append('reduceby') + + assert raises(TypeError, lambda: list(remove(None, [1, 2]))) + assert raises(TypeError, lambda: list(remove(identity, None))) + tested.append('remove') + + assert raises(TypeError, lambda: second(None)) + tested.append('second') + + # XXX + assert (raises(TypeError, lambda: list(sliding_window(None, [1, 2, 3]))) or + list(sliding_window(None, [1, 2, 3])) == []) + assert raises(TypeError, lambda: list(sliding_window(1, None))) + tested.append('sliding_window') + + # XXX + assert (raises(TypeError, lambda: list(take(None, [1, 2])) == [1, 2]) or + list(take(None, [1, 2])) == [1, 2]) + assert raises(TypeError, lambda: list(take(1, None))) + tested.append('take') + + # XXX + assert (raises(TypeError, lambda: list(tail(None, [1, 2])) == [1, 2]) or + list(tail(None, [1, 2])) == [1, 2]) + assert raises(TypeError, lambda: list(tail(1, None))) + tested.append('tail') + + # XXX + assert (raises(TypeError, lambda: list(take_nth(None, [1, 2]))) or + list(take_nth(None, [1, 2])) == [1, 2]) + assert raises(TypeError, lambda: list(take_nth(1, None))) + tested.append('take_nth') + + assert raises(TypeError, lambda: list(unique(None))) + assert list(unique([1, 1, 2], key=None)) == [1, 2] + tested.append('unique') + + assert raises(TypeError, lambda: join(first, None, second, (1, 2, 3))) + assert raises(TypeError, lambda: join(first, (1, 2, 3), second, None)) + tested.append('join') + + assert raises(TypeError, lambda: topk(None, [1, 2, 3])) + assert raises(TypeError, lambda: topk(3, None)) + tested.append('topk') + + assert raises(TypeError, lambda: list(diff(None, [1, 2, 3]))) + assert raises(TypeError, lambda: list(diff(None))) + assert raises(TypeError, lambda: list(diff([None]))) + assert raises(TypeError, lambda: list(diff([None, None]))) + tested.append('diff') + + assert raises(TypeError, lambda: peek(None)) + tested.append('peek') + + assert raises(TypeError, lambda: peekn(None, [1, 2, 3])) + assert raises(TypeError, lambda: peekn(3, None)) + tested.append('peekn') + + assert raises(TypeError, lambda: list(random_sample(None, [1]))) + assert raises(TypeError, lambda: list(random_sample(0.1, None))) + tested.append('random_sample') + + s1 = set(tested) + s2 = set(cytoolz.itertoolz.__all__) + assert s1 == s2, '%s not tested for being None-safe' % ', '.join(s2 - s1) + + +def test_recipes(): + tested = [] + # XXX + assert (raises(TypeError, lambda: countby(None, [1, 2])) or + countby(None, [1, 2]) == {(1,): 1, (2,): 1}) + assert raises(TypeError, lambda: countby(identity, None)) + tested.append('countby') + + # XXX + assert (raises(TypeError, lambda: list(partitionby(None, [1, 2]))) or + list(partitionby(None, [1, 2])) == [(1,), (2,)]) + assert raises(TypeError, lambda: list(partitionby(identity, None))) + tested.append('partitionby') + + s1 = set(tested) + s2 = set(cytoolz.recipes.__all__) + assert s1 == s2, '%s not tested for being None-safe' % ', '.join(s2 - s1) diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_recipes.py 
b/env/lib/python3.8/site-packages/cytoolz/tests/test_recipes.py new file mode 100644 index 00000000..13309f44 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_recipes.py @@ -0,0 +1,27 @@ +from cytoolz import first, identity, countby, partitionby + + +def iseven(x): + return x % 2 == 0 + + +def test_countby(): + assert countby(iseven, [1, 2, 3]) == {True: 1, False: 2} + assert countby(len, ['cat', 'dog', 'mouse']) == {3: 2, 5: 1} + assert countby(0, ('ab', 'ac', 'bc')) == {'a': 2, 'b': 1} + + +def test_partitionby(): + assert list(partitionby(identity, [])) == [] + + vowels = "aeiou" + assert (list(partitionby(vowels.__contains__, "abcdefghi")) == + [("a",), ("b", "c", "d"), ("e",), ("f", "g", "h"), ("i",)]) + + assert (list(map(first, + partitionby(identity, + [1, 1, 1, 2, 3, 3, 2, 2, 3]))) == + [1, 2, 3, 2, 3]) + + assert ''.join(map(first, + partitionby(identity, "Khhhaaaaannnnn!!!!"))) == 'Khan!' diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_serialization.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_serialization.py new file mode 100644 index 00000000..fd828486 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_serialization.py @@ -0,0 +1,192 @@ +from cytoolz import * +import cytoolz +import cytoolz.curried +import pickle +from cytoolz.utils import raises + + +def test_compose(): + f = compose(str, sum) + g = pickle.loads(pickle.dumps(f)) + assert f((1, 2)) == g((1, 2)) + + +def test_curry(): + f = curry(map)(str) + g = pickle.loads(pickle.dumps(f)) + assert list(f((1, 2, 3))) == list(g((1, 2, 3))) + + +def test_juxt(): + f = juxt(str, int, bool) + g = pickle.loads(pickle.dumps(f)) + assert f(1) == g(1) + assert f.funcs == g.funcs + + +def test_complement(): + f = complement(bool) + assert f(True) is False + assert f(False) is True + g = pickle.loads(pickle.dumps(f)) + assert f(True) == g(True) + assert f(False) == g(False) + + +def test_instanceproperty(): + p = cytoolz.functoolz.InstanceProperty(bool) + assert p.__get__(None) is None + assert p.__get__(0) is False + assert p.__get__(1) is True + p2 = pickle.loads(pickle.dumps(p)) + assert p2.__get__(None) is None + assert p2.__get__(0) is False + assert p2.__get__(1) is True + + +def f(x, y): + return x, y + + +def test_flip(): + flip = pickle.loads(pickle.dumps(cytoolz.functoolz.flip)) + assert flip is cytoolz.functoolz.flip + g1 = flip(f) + g2 = pickle.loads(pickle.dumps(g1)) + assert g1(1, 2) == g2(1, 2) == f(2, 1) + g1 = flip(f)(1) + g2 = pickle.loads(pickle.dumps(g1)) + assert g1(2) == g2(2) == f(2, 1) + + +def test_curried_exceptions(): + # This tests a global curried object that isn't defined in cytoolz.functoolz + merge = pickle.loads(pickle.dumps(cytoolz.curried.merge)) + assert merge is cytoolz.curried.merge + + +@cytoolz.curry +class GlobalCurried(object): + def __init__(self, x, y): + self.x = x + self.y = y + + @cytoolz.curry + def f1(self, a, b): + return self.x + self.y + a + b + + def g1(self): + pass + + def __reduce__(self): + """Allow us to serialize instances of GlobalCurried""" + return GlobalCurried, (self.x, self.y) + + @cytoolz.curry + class NestedCurried(object): + def __init__(self, x, y): + self.x = x + self.y = y + + @cytoolz.curry + def f2(self, a, b): + return self.x + self.y + a + b + + def g2(self): + pass + + def __reduce__(self): + """Allow us to serialize instances of NestedCurried""" + return GlobalCurried.NestedCurried, (self.x, self.y) + + class Nested(object): + def __init__(self, x, y): + self.x = x + self.y = y + + 
@cytoolz.curry + def f3(self, a, b): + return self.x + self.y + a + b + + def g3(self): + pass + + +def test_curried_qualname(): + + def preserves_identity(obj): + return pickle.loads(pickle.dumps(obj)) is obj + + assert preserves_identity(GlobalCurried) + assert preserves_identity(GlobalCurried.func.f1) + assert preserves_identity(GlobalCurried.func.NestedCurried) + assert preserves_identity(GlobalCurried.func.NestedCurried.func.f2) + assert preserves_identity(GlobalCurried.func.Nested.f3) + + global_curried1 = GlobalCurried(1) + global_curried2 = pickle.loads(pickle.dumps(global_curried1)) + assert global_curried1 is not global_curried2 + assert global_curried1(2).f1(3, 4) == global_curried2(2).f1(3, 4) == 10 + + global_curried3 = global_curried1(2) + global_curried4 = pickle.loads(pickle.dumps(global_curried3)) + assert global_curried3 is not global_curried4 + assert global_curried3.f1(3, 4) == global_curried4.f1(3, 4) == 10 + + func1 = global_curried1(2).f1(3) + func2 = pickle.loads(pickle.dumps(func1)) + assert func1 is not func2 + assert func1(4) == func2(4) == 10 + + nested_curried1 = GlobalCurried.func.NestedCurried(1) + nested_curried2 = pickle.loads(pickle.dumps(nested_curried1)) + assert nested_curried1 is not nested_curried2 + assert nested_curried1(2).f2(3, 4) == nested_curried2(2).f2(3, 4) == 10 + + # If we add `curry.__getattr__` forwarding, the following tests will pass + + # if not PY34: + # assert preserves_identity(GlobalCurried.func.g1) + # assert preserves_identity(GlobalCurried.func.NestedCurried.func.g2) + # assert preserves_identity(GlobalCurried.func.Nested) + # assert preserves_identity(GlobalCurried.func.Nested.g3) + # + # # Rely on curry.__getattr__ + # assert preserves_identity(GlobalCurried.f1) + # assert preserves_identity(GlobalCurried.NestedCurried) + # assert preserves_identity(GlobalCurried.NestedCurried.f2) + # assert preserves_identity(GlobalCurried.Nested.f3) + # if not PY34: + # assert preserves_identity(GlobalCurried.g1) + # assert preserves_identity(GlobalCurried.NestedCurried.g2) + # assert preserves_identity(GlobalCurried.Nested) + # assert preserves_identity(GlobalCurried.Nested.g3) + # + # nested_curried3 = nested_curried1(2) + # nested_curried4 = pickle.loads(pickle.dumps(nested_curried3)) + # assert nested_curried3 is not nested_curried4 + # assert nested_curried3.f2(3, 4) == nested_curried4.f2(3, 4) == 10 + # + # func1 = nested_curried1(2).f2(3) + # func2 = pickle.loads(pickle.dumps(func1)) + # assert func1 is not func2 + # assert func1(4) == func2(4) == 10 + # + # if not PY34: + # nested3 = GlobalCurried.func.Nested(1, 2) + # nested4 = pickle.loads(pickle.dumps(nested3)) + # assert nested3 is not nested4 + # assert nested3.f3(3, 4) == nested4.f3(3, 4) == 10 + # + # func1 = nested3.f3(3) + # func2 = pickle.loads(pickle.dumps(func1)) + # assert func1 is not func2 + # assert func1(4) == func2(4) == 10 + + +def test_curried_bad_qualname(): + @cytoolz.curry + class Bad(object): + __qualname__ = 'cytoolz.functoolz.not.a.valid.path' + + assert raises(pickle.PicklingError, lambda: pickle.dumps(Bad)) diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_signatures.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_signatures.py new file mode 100644 index 00000000..ab331e8b --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_signatures.py @@ -0,0 +1,85 @@ +import functools +import cytoolz._signatures as _sigs +from cytoolz._signatures import builtins, _is_valid_args, _is_partial_args + + +def 
test_is_valid(check_valid=_is_valid_args, incomplete=False): + orig_check_valid = check_valid + check_valid = lambda func, *args, **kwargs: orig_check_valid(func, args, kwargs) + + assert check_valid(lambda x: None) is None + + f = builtins.abs + assert check_valid(f) is incomplete + assert check_valid(f, 1) + assert check_valid(f, x=1) is False + assert check_valid(f, 1, 2) is False + + f = builtins.complex + assert check_valid(f) + assert check_valid(f, 1) + assert check_valid(f, real=1) + assert check_valid(f, 1, 2) + assert check_valid(f, 1, imag=2) + assert check_valid(f, 1, real=2) is False + assert check_valid(f, 1, 2, 3) is False + assert check_valid(f, 1, 2, imag=3) is False + + f = builtins.int + assert check_valid(f) + assert check_valid(f, 1) + assert check_valid(f, x=1) + assert check_valid(f, 1, 2) + assert check_valid(f, 1, base=2) + assert check_valid(f, x=1, base=2) + assert check_valid(f, base=2) is incomplete + assert check_valid(f, 1, 2, 3) is False + + f = builtins.map + assert check_valid(f) is incomplete + assert check_valid(f, 1) is incomplete + assert check_valid(f, 1, 2) + assert check_valid(f, 1, 2, 3) + assert check_valid(f, 1, 2, 3, 4) + + f = builtins.min + assert check_valid(f) is incomplete + assert check_valid(f, 1) + assert check_valid(f, iterable=1) is False + assert check_valid(f, 1, 2) + assert check_valid(f, 1, 2, 3) + assert check_valid(f, key=None) is incomplete + assert check_valid(f, 1, key=None) + assert check_valid(f, 1, 2, key=None) + assert check_valid(f, 1, 2, 3, key=None) + assert check_valid(f, key=None, default=None) is incomplete + assert check_valid(f, 1, key=None, default=None) + assert check_valid(f, 1, 2, key=None, default=None) is False + assert check_valid(f, 1, 2, 3, key=None, default=None) is False + + f = builtins.range + assert check_valid(f) is incomplete + assert check_valid(f, 1) + assert check_valid(f, 1, 2) + assert check_valid(f, 1, 2, 3) + assert check_valid(f, 1, 2, step=3) is False + assert check_valid(f, 1, 2, 3, 4) is False + + f = functools.partial + assert orig_check_valid(f, (), {}) is incomplete + assert orig_check_valid(f, (), {'func': 1}) is incomplete + assert orig_check_valid(f, (1,), {}) + assert orig_check_valid(f, (1,), {'func': 1}) + assert orig_check_valid(f, (1, 2), {}) + + +def test_is_partial(): + test_is_valid(check_valid=_is_partial_args, incomplete=True) + + +def test_for_coverage(): # :) + assert _sigs._is_arity(1, 1) is None + assert _sigs._is_arity(1, all) + assert _sigs._has_varargs(None) is None + assert _sigs._has_keywords(None) is None + assert _sigs._num_required_args(None) is None diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_tlz.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_tlz.py new file mode 100644 index 00000000..d1fe7913 --- /dev/null +++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_tlz.py @@ -0,0 +1,54 @@ +import toolz + + +def test_tlz(): + import tlz + tlz.curry + tlz.functoolz.curry + assert tlz.__package__ == 'tlz' + assert tlz.__name__ == 'tlz' + import tlz.curried + assert tlz.curried.__package__ == 'tlz.curried' + assert tlz.curried.__name__ == 'tlz.curried' + tlz.curried.curry + import tlz.curried.operator + assert tlz.curried.operator.__package__ in (None, 'tlz.curried') + assert tlz.curried.operator.__name__ == 'tlz.curried.operator' + assert tlz.functoolz.__name__ == 'tlz.functoolz' + m1 = tlz.functoolz + import tlz.functoolz as m2 + assert m1 is m2 + import tlz.sandbox + try: + import tlzthisisabadname.curried + 1/0 + except ImportError: 
+        pass
+    try:
+        import tlz.curry
+        1/0
+    except ImportError:
+        pass
+    try:
+        import tlz.badsubmodulename
+        1/0
+    except ImportError:
+        pass
+
+    assert toolz.__package__ == 'toolz'
+    assert toolz.curried.__package__ == 'toolz.curried'
+    assert toolz.functoolz.__name__ == 'toolz.functoolz'
+
+    import cytoolz
+    assert cytoolz.__package__ == 'cytoolz'
+    assert cytoolz.curried.__package__ == 'cytoolz.curried'
+    assert cytoolz.functoolz.__name__ == 'cytoolz.functoolz'
+    assert tlz.pluck is cytoolz.pluck
+
+    assert tlz.__file__ == toolz.__file__
+    assert tlz.functoolz.__file__ == toolz.functoolz.__file__
+
+    assert tlz.pipe is toolz.pipe
+
+    assert 'tlz' in tlz.__doc__
+    assert tlz.curried.__doc__ is not None
diff --git a/env/lib/python3.8/site-packages/cytoolz/tests/test_utils.py b/env/lib/python3.8/site-packages/cytoolz/tests/test_utils.py
new file mode 100644
index 00000000..2356b47b
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cytoolz/tests/test_utils.py
@@ -0,0 +1,15 @@
+from cytoolz.utils import consume, raises
+
+
+def test_raises():
+    assert raises(ZeroDivisionError, lambda: 1 / 0)
+    assert not raises(ZeroDivisionError, lambda: 1)
+
+
+def test_consume():
+    l = [1, 2, 3]
+    assert consume(l) is None
+    il = iter(l)
+    assert consume(il) is None
+    assert raises(StopIteration, lambda: next(il))
+    assert raises(TypeError, lambda: consume(1))
diff --git a/env/lib/python3.8/site-packages/cytoolz/utils.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/cytoolz/utils.cpython-38-x86_64-linux-gnu.so
new file mode 100644
index 00000000..070aaf4a
Binary files /dev/null and b/env/lib/python3.8/site-packages/cytoolz/utils.cpython-38-x86_64-linux-gnu.so differ
diff --git a/env/lib/python3.8/site-packages/cytoolz/utils.pxd b/env/lib/python3.8/site-packages/cytoolz/utils.pxd
new file mode 100644
index 00000000..412e8fa5
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cytoolz/utils.pxd
@@ -0,0 +1 @@
+cpdef object consume(object seq)
diff --git a/env/lib/python3.8/site-packages/cytoolz/utils.pyx b/env/lib/python3.8/site-packages/cytoolz/utils.pyx
new file mode 100644
index 00000000..010e50a0
--- /dev/null
+++ b/env/lib/python3.8/site-packages/cytoolz/utils.pyx
@@ -0,0 +1,58 @@
+import os.path
+import cytoolz
+
+
+__all__ = ['raises', 'no_default', 'include_dirs', 'consume']
+
+
+try:
+    # Attempt to get the no_default sentinel object from toolz
+    from toolz.utils import no_default
+except ImportError:
+    no_default = '__no__default__'
+
+
+def raises(err, lamda):
+    # ``lamda`` is deliberately misspelled: ``lambda`` is a reserved word.
+    try:
+        lamda()
+        return False
+    except err:
+        return True
+
+
+def include_dirs():
+    """ Return a list of directories containing the *.pxd files for ``cytoolz``
+
+    Use this to include ``cytoolz`` in your own Cython project, which allows
+    fast C bindings to be imported such as ``from cytoolz cimport get``.
+
+    Below is a minimal "setup.py" file using ``include_dirs``:
+
+        from distutils.core import setup
+        from distutils.extension import Extension
+        from Cython.Distutils import build_ext
+
+        import cytoolz.utils
+
+        ext_modules=[
+            Extension("mymodule",
+                      ["mymodule.pyx"],
+                      include_dirs=cytoolz.utils.include_dirs()
+            )
+        ]
+
+        setup(
+            name = "mymodule",
+            cmdclass = {"build_ext": build_ext},
+            ext_modules = ext_modules
+        )
+    """
+    return os.path.split(cytoolz.__path__[0])[:1]  # parent dir only, not the bare package name
+
+
+cpdef object consume(object seq):
+    """
+    Efficiently consume an iterable """
+    for _ in seq:
+        pass
diff --git a/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/INSTALLER b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/INSTALLER
new file mode 100644
index 00000000..a1b589e3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/LICENSE.txt b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/LICENSE.txt
new file mode 100644
index 00000000..c31ac56d
--- /dev/null
+++ b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/LICENSE.txt
@@ -0,0 +1,284 @@
+A. HISTORY OF THE SOFTWARE
+==========================
+
+Python was created in the early 1990s by Guido van Rossum at Stichting
+Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
+as a successor of a language called ABC.  Guido remains Python's
+principal author, although it includes many contributions from others.
+
+In 1995, Guido continued his work on Python at the Corporation for
+National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
+in Reston, Virginia where he released several versions of the
+software.
+
+In May 2000, Guido and the Python core development team moved to
+BeOpen.com to form the BeOpen PythonLabs team.  In October of the same
+year, the PythonLabs team moved to Digital Creations (now Zope
+Corporation, see http://www.zope.com).  In 2001, the Python Software
+Foundation (PSF, see http://www.python.org/psf/) was formed, a
+non-profit organization created specifically to own Python-related
+Intellectual Property.  Zope Corporation is a sponsoring member of
+the PSF.
+
+All Python releases are Open Source (see http://www.opensource.org for
+the Open Source Definition).  Historically, most, but not all, Python
+releases have also been GPL-compatible; the table below summarizes
+the various releases.
+
+    Release         Derived     Year        Owner       GPL-
+                    from                                compatible? 
(1) + + 0.9.0 thru 1.2 1991-1995 CWI yes + 1.3 thru 1.5.2 1.2 1995-1999 CNRI yes + 1.6 1.5.2 2000 CNRI no + 2.0 1.6 2000 BeOpen.com no + 1.6.1 1.6 2001 CNRI yes (2) + 2.1 2.0+1.6.1 2001 PSF no + 2.0.1 2.0+1.6.1 2001 PSF yes + 2.1.1 2.1+2.0.1 2001 PSF yes + 2.2 2.1.1 2001 PSF yes + 2.1.2 2.1.1 2002 PSF yes + 2.1.3 2.1.2 2002 PSF yes + 2.2.1 2.2 2002 PSF yes + 2.2.2 2.2.1 2002 PSF yes + 2.2.3 2.2.2 2003 PSF yes + 2.3 2.2.2 2002-2003 PSF yes + 2.3.1 2.3 2002-2003 PSF yes + 2.3.2 2.3.1 2002-2003 PSF yes + 2.3.3 2.3.2 2002-2003 PSF yes + 2.3.4 2.3.3 2004 PSF yes + 2.3.5 2.3.4 2005 PSF yes + 2.4 2.3 2004 PSF yes + 2.4.1 2.4 2005 PSF yes + 2.4.2 2.4.1 2005 PSF yes + 2.4.3 2.4.2 2006 PSF yes + 2.4.4 2.4.3 2006 PSF yes + 2.5 2.4 2006 PSF yes + 2.5.1 2.5 2007 PSF yes + 2.5.2 2.5.1 2008 PSF yes + 2.5.3 2.5.2 2008 PSF yes + 2.6 2.5 2008 PSF yes + 2.6.1 2.6 2008 PSF yes + 2.6.2 2.6.1 2009 PSF yes + 2.6.3 2.6.2 2009 PSF yes + 2.6.4 2.6.3 2009 PSF yes + 2.6.5 2.6.4 2010 PSF yes + 3.0 2.6 2008 PSF yes + 3.0.1 3.0 2009 PSF yes + 3.1 3.0.1 2009 PSF yes + 3.1.1 3.1 2009 PSF yes + 3.1.2 3.1 2010 PSF yes + 3.2 3.1 2010 PSF yes + +Footnotes: + +(1) GPL-compatible doesn't mean that we're distributing Python under + the GPL. All Python licenses, unlike the GPL, let you distribute + a modified version without making your changes open source. The + GPL-compatible licenses make it possible to combine Python with + other software that is released under the GPL; the others don't. + +(2) According to Richard Stallman, 1.6.1 is not GPL-compatible, + because its license has a choice of law clause. According to + CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1 + is "not incompatible" with the GPL. + +Thanks to the many outside volunteers who have worked under Guido's +direction to make these releases possible. + + +B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON +=============================================================== + +PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 +-------------------------------------------- + +1. This LICENSE AGREEMENT is between the Python Software Foundation +("PSF"), and the Individual or Organization ("Licensee") accessing and +otherwise using this software ("Python") in source or binary form and +its associated documentation. + +2. Subject to the terms and conditions of this License Agreement, PSF hereby +grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, +analyze, test, perform and/or display publicly, prepare derivative works, +distribute, and otherwise use Python alone or in any derivative version, +provided, however, that PSF's License Agreement and PSF's notice of copyright, +i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 +Python Software Foundation; All Rights Reserved" are retained in Python alone or +in any derivative version prepared by Licensee. + +3. In the event Licensee prepares a derivative work that is based on +or incorporates Python or any part thereof, and wants to make +the derivative work available to others as provided herein, then +Licensee hereby agrees to include in any such work a brief summary of +the changes made to Python. + +4. PSF is making Python available to Licensee on an "AS IS" +basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR +IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND +DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS +FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT +INFRINGE ANY THIRD PARTY RIGHTS. + +5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON +FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS +A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, +OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +6. This License Agreement will automatically terminate upon a material +breach of its terms and conditions. + +7. Nothing in this License Agreement shall be deemed to create any +relationship of agency, partnership, or joint venture between PSF and +Licensee. This License Agreement does not grant permission to use PSF +trademarks or trade name in a trademark sense to endorse or promote +products or services of Licensee, or any third party. + +8. By copying, installing or otherwise using Python, Licensee +agrees to be bound by the terms and conditions of this License +Agreement. + + +BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0 +------------------------------------------- + +BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1 + +1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an +office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the +Individual or Organization ("Licensee") accessing and otherwise using +this software in source or binary form and its associated +documentation ("the Software"). + +2. Subject to the terms and conditions of this BeOpen Python License +Agreement, BeOpen hereby grants Licensee a non-exclusive, +royalty-free, world-wide license to reproduce, analyze, test, perform +and/or display publicly, prepare derivative works, distribute, and +otherwise use the Software alone or in any derivative version, +provided, however, that the BeOpen Python License is retained in the +Software, alone or in any derivative version prepared by Licensee. + +3. BeOpen is making the Software available to Licensee on an "AS IS" +basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR +IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND +DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS +FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT +INFRINGE ANY THIRD PARTY RIGHTS. + +4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE +SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS +AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY +DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +5. This License Agreement will automatically terminate upon a material +breach of its terms and conditions. + +6. This License Agreement shall be governed by and interpreted in all +respects by the law of the State of California, excluding conflict of +law provisions. Nothing in this License Agreement shall be deemed to +create any relationship of agency, partnership, or joint venture +between BeOpen and Licensee. This License Agreement does not grant +permission to use BeOpen trademarks or trade names in a trademark +sense to endorse or promote products or services of Licensee, or any +third party. As an exception, the "BeOpen Python" logos available at +http://www.pythonlabs.com/logos.html may be used according to the +permissions granted on that web page. + +7. 
By copying, installing or otherwise using the software, Licensee +agrees to be bound by the terms and conditions of this License +Agreement. + + +CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1 +--------------------------------------- + +1. This LICENSE AGREEMENT is between the Corporation for National +Research Initiatives, having an office at 1895 Preston White Drive, +Reston, VA 20191 ("CNRI"), and the Individual or Organization +("Licensee") accessing and otherwise using Python 1.6.1 software in +source or binary form and its associated documentation. + +2. Subject to the terms and conditions of this License Agreement, CNRI +hereby grants Licensee a nonexclusive, royalty-free, world-wide +license to reproduce, analyze, test, perform and/or display publicly, +prepare derivative works, distribute, and otherwise use Python 1.6.1 +alone or in any derivative version, provided, however, that CNRI's +License Agreement and CNRI's notice of copyright, i.e., "Copyright (c) +1995-2001 Corporation for National Research Initiatives; All Rights +Reserved" are retained in Python 1.6.1 alone or in any derivative +version prepared by Licensee. Alternately, in lieu of CNRI's License +Agreement, Licensee may substitute the following text (omitting the +quotes): "Python 1.6.1 is made available subject to the terms and +conditions in CNRI's License Agreement. This Agreement together with +Python 1.6.1 may be located on the Internet using the following +unique, persistent identifier (known as a handle): 1895.22/1013. This +Agreement may also be obtained from a proxy server on the Internet +using the following URL: http://hdl.handle.net/1895.22/1013". + +3. In the event Licensee prepares a derivative work that is based on +or incorporates Python 1.6.1 or any part thereof, and wants to make +the derivative work available to others as provided herein, then +Licensee hereby agrees to include in any such work a brief summary of +the changes made to Python 1.6.1. + +4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS" +basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR +IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND +DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS +FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT +INFRINGE ANY THIRD PARTY RIGHTS. + +5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON +1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS +A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1, +OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +6. This License Agreement will automatically terminate upon a material +breach of its terms and conditions. + +7. This License Agreement shall be governed by the federal +intellectual property law of the United States, including without +limitation the federal copyright law, and, to the extent such +U.S. federal law does not apply, by the law of the Commonwealth of +Virginia, excluding Virginia's conflict of law provisions. +Notwithstanding the foregoing, with regard to derivative works based +on Python 1.6.1 that incorporate non-separable material that was +previously distributed under the GNU General Public License (GPL), the +law of the Commonwealth of Virginia shall govern this License +Agreement only as to issues arising under or with respect to +Paragraphs 4, 5, and 7 of this License Agreement. 
Nothing in this +License Agreement shall be deemed to create any relationship of +agency, partnership, or joint venture between CNRI and Licensee. This +License Agreement does not grant permission to use CNRI trademarks or +trade name in a trademark sense to endorse or promote products or +services of Licensee, or any third party. + +8. By clicking on the "ACCEPT" button where indicated, or by copying, +installing or otherwise using Python 1.6.1, Licensee agrees to be +bound by the terms and conditions of this License Agreement. + + ACCEPT + + +CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2 +-------------------------------------------------- + +Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam, +The Netherlands. All rights reserved. + +Permission to use, copy, modify, and distribute this software and its +documentation for any purpose and without fee is hereby granted, +provided that the above copyright notice appear in all copies and that +both that copyright notice and this permission notice appear in +supporting documentation, and that the name of Stichting Mathematisch +Centrum or CWI not be used in advertising or publicity pertaining to +distribution of the software without specific, written prior +permission. + +STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO +THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND +FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE +FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES +WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN +ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT +OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. diff --git a/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/METADATA b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/METADATA new file mode 100644 index 00000000..806df8bf --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/METADATA @@ -0,0 +1,116 @@ +Metadata-Version: 2.1 +Name: distlib +Version: 0.3.6 +Summary: Distribution utilities +Home-page: https://github.com/pypa/distlib +Author: Vinay Sajip +Author-email: vinay_sajip@red-dove.com +License: Python license +Project-URL: Documentation, https://distlib.readthedocs.io/ +Project-URL: Source, https://github.com/pypa/distlib +Project-URL: Tracker, https://github.com/pypa/distlib/issues +Platform: any +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Console +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: Python Software Foundation License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Topic :: Software Development +License-File: LICENSE.txt + +|badge1| |badge2| + +.. |badge1| image:: https://img.shields.io/github/workflow/status/pypa/distlib/Tests + :alt: GitHub test status + +.. 
|badge2| image:: https://img.shields.io/codecov/c/github/pypa/distlib
+   :target: https://app.codecov.io/gh/pypa/distlib
+   :alt: GitHub coverage status
+
+What is it?
+-----------
+
+Distlib is a library which implements low-level functions that relate to
+packaging and distribution of Python software. It is intended to be used as the
+basis for third-party packaging tools. The documentation is available at
+
+https://distlib.readthedocs.io/
+
+Main features
+-------------
+
+Distlib currently offers the following features:
+
+* The package ``distlib.database``, which implements a database of installed
+  distributions, as defined by :pep:`376`, and distribution dependency graph
+  logic. Support is also provided for non-installed distributions (i.e.
+  distributions registered with metadata on an index like PyPI), including
+  the ability to scan for dependencies and building dependency graphs.
+* The package ``distlib.index``, which implements an interface to perform
+  operations on an index, such as registering a project, uploading a
+  distribution or uploading documentation. Support is included for verifying
+  SSL connections (with domain matching) and signing/verifying packages using
+  GnuPG.
+* The package ``distlib.metadata``, which implements distribution metadata as
+  defined by :pep:`643`, :pep:`566`, :pep:`345`, :pep:`314` and :pep:`241`.
+* The package ``distlib.markers``, which implements environment markers as
+  defined by :pep:`508`.
+* The package ``distlib.manifest``, which implements lists of files used
+  in packaging source distributions.
+* The package ``distlib.locators``, which allows finding distributions, whether
+  on PyPI (XML-RPC or via the "simple" interface), local directories or some
+  other source.
+* The package ``distlib.resources``, which allows access to data files stored
+  in Python packages, both in the file system and in .zip files.
+* The package ``distlib.scripts``, which allows installing of scripts with
+  adjustment of shebang lines and support for native Windows executable
+  launchers.
+* The package ``distlib.version``, which implements version specifiers as
+  defined by :pep:`440`, but also support for working with "legacy" versions and
+  semantic versions.
+* The package ``distlib.wheel``, which provides support for building and
+  installing from the Wheel format for binary distributions (see :pep:`427`).
+* The package ``distlib.util``, which contains miscellaneous functions and
+  classes which are useful in packaging, but which do not fit neatly into
+  one of the other packages in ``distlib``. The package implements enhanced
+  globbing functionality such as the ability to use ``**`` in patterns to
+  specify recursing into subdirectories.
+
+
+Python version and platform compatibility
+-----------------------------------------
+
+Distlib is intended to be used on, and is tested on, Python versions 2.7 and
+3.6 - 3.10, pypy-2.7 and pypy3 on Linux, Windows, and macOS.
+
+Project status
+--------------
+
+The project has reached a mature status in its development: there is a
+comprehensive test suite and it has been exercised on Windows, Ubuntu and
+macOS. The project is used by well-known projects such as
+`pip <https://pypi.org/project/pip/>`_ and
+`caniusepython3 <https://pypi.org/project/caniusepython3/>`_.
+
+This project was migrated from Mercurial to Git and from BitBucket to GitHub, and
+although all information of importance has been retained across the migration, some
+commit references in issues and issue comments may have become invalid.
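A quick illustration of the ``distlib.version`` entry above: the sketch below
exercises the same ``get_scheme``/``matcher`` API that the vendored
``distlib/database.py`` later in this diff relies on. It is a minimal sketch,
assuming the vendored distlib 0.3.6 is importable; the ``requests``
requirement string is purely illustrative::

    # PEP 440 matching via distlib's scheme/matcher objects.
    from distlib.version import get_scheme

    scheme = get_scheme('default')                      # PEP 440 semantics
    matcher = scheme.matcher('requests (>=2.0, <3.0)')  # 'name (constraints)'
    print(matcher.key)              # 'requests' (case-normalised name)
    print(matcher.match('2.31.0'))  # True: inside the specified range
    print(matcher.match('3.0.0'))   # False: excluded by '<3.0'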
+ +Code of Conduct +--------------- + +Everyone interacting in the distlib project's codebases, issue trackers, chat +rooms, and mailing lists is expected to follow the `PyPA Code of Conduct`_. + +.. _PyPA Code of Conduct: https://www.pypa.io/en/latest/code-of-conduct/ + + diff --git a/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/RECORD b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/RECORD new file mode 100644 index 00000000..5432cff2 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/RECORD @@ -0,0 +1,38 @@ +distlib-0.3.6.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +distlib-0.3.6.dist-info/LICENSE.txt,sha256=gI4QyKarjesUn_mz-xn0R6gICUYG1xKpylf-rTVSWZ0,14531 +distlib-0.3.6.dist-info/METADATA,sha256=t7pBTqEAPspwna_64nyQC2sOA4OSqbCKqMwI1cr0th8,5112 +distlib-0.3.6.dist-info/RECORD,, +distlib-0.3.6.dist-info/WHEEL,sha256=z9j0xAa_JmUKMpmz72K0ZGALSM_n-wQVmGbleXx2VHg,110 +distlib-0.3.6.dist-info/top_level.txt,sha256=9BERqitu_vzyeyILOcGzX9YyA2AB_xlC4-81V6xoizk,8 +distlib/__init__.py,sha256=acgfseOC55dNrVAzaBKpUiH3Z6V7Q1CaxsiQ3K7pC-E,581 +distlib/__pycache__/__init__.cpython-38.pyc,, +distlib/__pycache__/compat.cpython-38.pyc,, +distlib/__pycache__/database.cpython-38.pyc,, +distlib/__pycache__/index.cpython-38.pyc,, +distlib/__pycache__/locators.cpython-38.pyc,, +distlib/__pycache__/manifest.cpython-38.pyc,, +distlib/__pycache__/markers.cpython-38.pyc,, +distlib/__pycache__/metadata.cpython-38.pyc,, +distlib/__pycache__/resources.cpython-38.pyc,, +distlib/__pycache__/scripts.cpython-38.pyc,, +distlib/__pycache__/util.cpython-38.pyc,, +distlib/__pycache__/version.cpython-38.pyc,, +distlib/__pycache__/wheel.cpython-38.pyc,, +distlib/compat.py,sha256=tfoMrj6tujk7G4UC2owL6ArgDuCKabgBxuJRGZSmpko,41259 +distlib/database.py,sha256=o_mw0fAr93NDAHHHfqG54Y1Hi9Rkfrp2BX15XWZYK50,51697 +distlib/index.py,sha256=HFiDG7LMoaBs829WuotrfIwcErOOExUOR_AeBtw_TCU,20834 +distlib/locators.py,sha256=wNzG-zERzS_XGls-nBPVVyLRHa2skUlkn0-5n0trMWA,51991 +distlib/manifest.py,sha256=nQEhYmgoreaBZzyFzwYsXxJARu3fo4EkunU163U16iE,14811 +distlib/markers.py,sha256=TpHHHLgkzyT7YHbwj-2i6weRaq-Ivy2-MUnrDkjau-U,5058 +distlib/metadata.py,sha256=g_DIiu8nBXRzA-mWPRpatHGbmFZqaFoss7z9TG7QSUU,39801 +distlib/resources.py,sha256=LwbPksc0A1JMbi6XnuPdMBUn83X7BPuFNWqPGEKI698,10820 +distlib/scripts.py,sha256=BmkTKmiTk4m2cj-iueliatwz3ut_9SsABBW51vnQnZU,18102 +distlib/t32.exe,sha256=a0GV5kCoWsMutvliiCKmIgV98eRZ33wXoS-XrqvJQVs,97792 +distlib/t64-arm.exe,sha256=68TAa32V504xVBnufojh0PcenpR3U4wAqTqf-MZqbPw,182784 +distlib/t64.exe,sha256=gaYY8hy4fbkHYTTnA4i26ct8IQZzkBG2pRdy0iyuBrc,108032 +distlib/util.py,sha256=31dPXn3Rfat0xZLeVoFpuniyhe6vsbl9_QN-qd9Lhlk,66262 +distlib/version.py,sha256=WG__LyAa2GwmA6qSoEJtvJE8REA1LZpbSizy8WvhJLk,23513 +distlib/w32.exe,sha256=R4csx3-OGM9kL4aPIzQKRo5TfmRSHZo6QWyLhDhNBks,91648 +distlib/w64-arm.exe,sha256=xdyYhKj0WDcVUOCb05blQYvzdYIKMbmJn2SZvzkcey4,168448 +distlib/w64.exe,sha256=ejGf-rojoBfXseGLpya6bFTFPWRG21X5KvU8J5iU-K0,101888 +distlib/wheel.py,sha256=Rgqs658VsJ3R2845qwnZD8XQryV2CzWw2mghwLvxxsI,43898 diff --git a/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/WHEEL b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/WHEEL new file mode 100644 index 00000000..0b18a281 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git 
a/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/top_level.txt b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/top_level.txt new file mode 100644 index 00000000..f68bb072 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib-0.3.6.dist-info/top_level.txt @@ -0,0 +1 @@ +distlib diff --git a/env/lib/python3.8/site-packages/distlib/__init__.py b/env/lib/python3.8/site-packages/distlib/__init__.py new file mode 100644 index 00000000..962173c8 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/__init__.py @@ -0,0 +1,23 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2022 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +import logging + +__version__ = '0.3.6' + +class DistlibException(Exception): + pass + +try: + from logging import NullHandler +except ImportError: # pragma: no cover + class NullHandler(logging.Handler): + def handle(self, record): pass + def emit(self, record): pass + def createLock(self): self.lock = None + +logger = logging.getLogger(__name__) +logger.addHandler(NullHandler()) diff --git a/env/lib/python3.8/site-packages/distlib/compat.py b/env/lib/python3.8/site-packages/distlib/compat.py new file mode 100644 index 00000000..1fe3d225 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/compat.py @@ -0,0 +1,1116 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013-2017 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +from __future__ import absolute_import + +import os +import re +import sys + +try: + import ssl +except ImportError: # pragma: no cover + ssl = None + +if sys.version_info[0] < 3: # pragma: no cover + from StringIO import StringIO + string_types = basestring, + text_type = unicode + from types import FileType as file_type + import __builtin__ as builtins + import ConfigParser as configparser + from urlparse import urlparse, urlunparse, urljoin, urlsplit, urlunsplit + from urllib import (urlretrieve, quote as _quote, unquote, url2pathname, + pathname2url, ContentTooShortError, splittype) + + def quote(s): + if isinstance(s, unicode): + s = s.encode('utf-8') + return _quote(s) + + import urllib2 + from urllib2 import (Request, urlopen, URLError, HTTPError, + HTTPBasicAuthHandler, HTTPPasswordMgr, + HTTPHandler, HTTPRedirectHandler, + build_opener) + if ssl: + from urllib2 import HTTPSHandler + import httplib + import xmlrpclib + import Queue as queue + from HTMLParser import HTMLParser + import htmlentitydefs + raw_input = raw_input + from itertools import ifilter as filter + from itertools import ifilterfalse as filterfalse + + # Leaving this around for now, in case it needs resurrecting in some way + # _userprog = None + # def splituser(host): + # """splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'.""" + # global _userprog + # if _userprog is None: + # import re + # _userprog = re.compile('^(.*)@(.*)$') + + # match = _userprog.match(host) + # if match: return match.group(1, 2) + # return None, host + +else: # pragma: no cover + from io import StringIO + string_types = str, + text_type = str + from io import TextIOWrapper as file_type + import builtins + import configparser + import shutil + from urllib.parse import (urlparse, urlunparse, urljoin, quote, + unquote, urlsplit, urlunsplit, splittype) + from urllib.request import (urlopen, urlretrieve, Request, url2pathname, + pathname2url, + HTTPBasicAuthHandler, 
HTTPPasswordMgr, + HTTPHandler, HTTPRedirectHandler, + build_opener) + if ssl: + from urllib.request import HTTPSHandler + from urllib.error import HTTPError, URLError, ContentTooShortError + import http.client as httplib + import urllib.request as urllib2 + import xmlrpc.client as xmlrpclib + import queue + from html.parser import HTMLParser + import html.entities as htmlentitydefs + raw_input = input + from itertools import filterfalse + filter = filter + + +try: + from ssl import match_hostname, CertificateError +except ImportError: # pragma: no cover + class CertificateError(ValueError): + pass + + + def _dnsname_match(dn, hostname, max_wildcards=1): + """Matching according to RFC 6125, section 6.4.3 + + http://tools.ietf.org/html/rfc6125#section-6.4.3 + """ + pats = [] + if not dn: + return False + + parts = dn.split('.') + leftmost, remainder = parts[0], parts[1:] + + wildcards = leftmost.count('*') + if wildcards > max_wildcards: + # Issue #17980: avoid denials of service by refusing more + # than one wildcard per fragment. A survey of established + # policy among SSL implementations showed it to be a + # reasonable choice. + raise CertificateError( + "too many wildcards in certificate DNS name: " + repr(dn)) + + # speed up common case w/o wildcards + if not wildcards: + return dn.lower() == hostname.lower() + + # RFC 6125, section 6.4.3, subitem 1. + # The client SHOULD NOT attempt to match a presented identifier in which + # the wildcard character comprises a label other than the left-most label. + if leftmost == '*': + # When '*' is a fragment by itself, it matches a non-empty dotless + # fragment. + pats.append('[^.]+') + elif leftmost.startswith('xn--') or hostname.startswith('xn--'): + # RFC 6125, section 6.4.3, subitem 3. + # The client SHOULD NOT attempt to match a presented identifier + # where the wildcard character is embedded within an A-label or + # U-label of an internationalized domain name. + pats.append(re.escape(leftmost)) + else: + # Otherwise, '*' matches any dotless string, e.g. www* + pats.append(re.escape(leftmost).replace(r'\*', '[^.]*')) + + # add the remaining fragments, ignore any wildcards + for frag in remainder: + pats.append(re.escape(frag)) + + pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE) + return pat.match(hostname) + + + def match_hostname(cert, hostname): + """Verify that *cert* (in decoded format as returned by + SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125 + rules are followed, but IP addresses are not accepted for *hostname*. + + CertificateError is raised on failure. On success, the function + returns nothing. + """ + if not cert: + raise ValueError("empty or no certificate, match_hostname needs a " + "SSL socket or SSL context with either " + "CERT_OPTIONAL or CERT_REQUIRED") + dnsnames = [] + san = cert.get('subjectAltName', ()) + for key, value in san: + if key == 'DNS': + if _dnsname_match(value, hostname): + return + dnsnames.append(value) + if not dnsnames: + # The subject is only checked when there is no dNSName entry + # in subjectAltName + for sub in cert.get('subject', ()): + for key, value in sub: + # XXX according to RFC 2818, the most specific Common Name + # must be used. 
+ if key == 'commonName': + if _dnsname_match(value, hostname): + return + dnsnames.append(value) + if len(dnsnames) > 1: + raise CertificateError("hostname %r " + "doesn't match either of %s" + % (hostname, ', '.join(map(repr, dnsnames)))) + elif len(dnsnames) == 1: + raise CertificateError("hostname %r " + "doesn't match %r" + % (hostname, dnsnames[0])) + else: + raise CertificateError("no appropriate commonName or " + "subjectAltName fields were found") + + +try: + from types import SimpleNamespace as Container +except ImportError: # pragma: no cover + class Container(object): + """ + A generic container for when multiple values need to be returned + """ + def __init__(self, **kwargs): + self.__dict__.update(kwargs) + + +try: + from shutil import which +except ImportError: # pragma: no cover + # Implementation from Python 3.3 + def which(cmd, mode=os.F_OK | os.X_OK, path=None): + """Given a command, mode, and a PATH string, return the path which + conforms to the given mode on the PATH, or None if there is no such + file. + + `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result + of os.environ.get("PATH"), or can be overridden with a custom search + path. + + """ + # Check that a given file can be accessed with the correct mode. + # Additionally check that `file` is not a directory, as on Windows + # directories pass the os.access check. + def _access_check(fn, mode): + return (os.path.exists(fn) and os.access(fn, mode) + and not os.path.isdir(fn)) + + # If we're given a path with a directory part, look it up directly rather + # than referring to PATH directories. This includes checking relative to the + # current directory, e.g. ./script + if os.path.dirname(cmd): + if _access_check(cmd, mode): + return cmd + return None + + if path is None: + path = os.environ.get("PATH", os.defpath) + if not path: + return None + path = path.split(os.pathsep) + + if sys.platform == "win32": + # The current directory takes precedence on Windows. + if not os.curdir in path: + path.insert(0, os.curdir) + + # PATHEXT is necessary to check on Windows. + pathext = os.environ.get("PATHEXT", "").split(os.pathsep) + # See if the given file matches any of the expected path extensions. + # This will allow us to short circuit when given "python.exe". + # If it does match, only test that one, otherwise we have to try + # others. + if any(cmd.lower().endswith(ext.lower()) for ext in pathext): + files = [cmd] + else: + files = [cmd + ext for ext in pathext] + else: + # On other platforms you don't have things like PATHEXT to tell you + # what file suffixes are executable, so just pass on cmd as-is. 
+ files = [cmd] + + seen = set() + for dir in path: + normdir = os.path.normcase(dir) + if not normdir in seen: + seen.add(normdir) + for thefile in files: + name = os.path.join(dir, thefile) + if _access_check(name, mode): + return name + return None + + +# ZipFile is a context manager in 2.7, but not in 2.6 + +from zipfile import ZipFile as BaseZipFile + +if hasattr(BaseZipFile, '__enter__'): # pragma: no cover + ZipFile = BaseZipFile +else: # pragma: no cover + from zipfile import ZipExtFile as BaseZipExtFile + + class ZipExtFile(BaseZipExtFile): + def __init__(self, base): + self.__dict__.update(base.__dict__) + + def __enter__(self): + return self + + def __exit__(self, *exc_info): + self.close() + # return None, so if an exception occurred, it will propagate + + class ZipFile(BaseZipFile): + def __enter__(self): + return self + + def __exit__(self, *exc_info): + self.close() + # return None, so if an exception occurred, it will propagate + + def open(self, *args, **kwargs): + base = BaseZipFile.open(self, *args, **kwargs) + return ZipExtFile(base) + +try: + from platform import python_implementation +except ImportError: # pragma: no cover + def python_implementation(): + """Return a string identifying the Python implementation.""" + if 'PyPy' in sys.version: + return 'PyPy' + if os.name == 'java': + return 'Jython' + if sys.version.startswith('IronPython'): + return 'IronPython' + return 'CPython' + +import shutil +import sysconfig + +try: + callable = callable +except NameError: # pragma: no cover + from collections.abc import Callable + + def callable(obj): + return isinstance(obj, Callable) + + +try: + fsencode = os.fsencode + fsdecode = os.fsdecode +except AttributeError: # pragma: no cover + # Issue #99: on some systems (e.g. containerised), + # sys.getfilesystemencoding() returns None, and we need a real value, + # so fall back to utf-8. From the CPython 2.7 docs relating to Unix and + # sys.getfilesystemencoding(): the return value is "the user’s preference + # according to the result of nl_langinfo(CODESET), or None if the + # nl_langinfo(CODESET) failed." + _fsencoding = sys.getfilesystemencoding() or 'utf-8' + if _fsencoding == 'mbcs': + _fserrors = 'strict' + else: + _fserrors = 'surrogateescape' + + def fsencode(filename): + if isinstance(filename, bytes): + return filename + elif isinstance(filename, text_type): + return filename.encode(_fsencoding, _fserrors) + else: + raise TypeError("expect bytes or str, not %s" % + type(filename).__name__) + + def fsdecode(filename): + if isinstance(filename, text_type): + return filename + elif isinstance(filename, bytes): + return filename.decode(_fsencoding, _fserrors) + else: + raise TypeError("expect bytes or str, not %s" % + type(filename).__name__) + +try: + from tokenize import detect_encoding +except ImportError: # pragma: no cover + from codecs import BOM_UTF8, lookup + import re + + cookie_re = re.compile(r"coding[:=]\s*([-\w.]+)") + + def _get_normal_name(orig_enc): + """Imitates get_normal_name in tokenizer.c.""" + # Only care about the first 12 characters. + enc = orig_enc[:12].lower().replace("_", "-") + if enc == "utf-8" or enc.startswith("utf-8-"): + return "utf-8" + if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \ + enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")): + return "iso-8859-1" + return orig_enc + + def detect_encoding(readline): + """ + The detect_encoding() function is used to detect the encoding that should + be used to decode a Python source file. 
It requires one argument, readline, + in the same way as the tokenize() generator. + + It will call readline a maximum of twice, and return the encoding used + (as a string) and a list of any lines (left as bytes) it has read in. + + It detects the encoding from the presence of a utf-8 bom or an encoding + cookie as specified in pep-0263. If both a bom and a cookie are present, + but disagree, a SyntaxError will be raised. If the encoding cookie is an + invalid charset, raise a SyntaxError. Note that if a utf-8 bom is found, + 'utf-8-sig' is returned. + + If no encoding is specified, then the default of 'utf-8' will be returned. + """ + try: + filename = readline.__self__.name + except AttributeError: + filename = None + bom_found = False + encoding = None + default = 'utf-8' + def read_or_stop(): + try: + return readline() + except StopIteration: + return b'' + + def find_cookie(line): + try: + # Decode as UTF-8. Either the line is an encoding declaration, + # in which case it should be pure ASCII, or it must be UTF-8 + # per default encoding. + line_string = line.decode('utf-8') + except UnicodeDecodeError: + msg = "invalid or missing encoding declaration" + if filename is not None: + msg = '{} for {!r}'.format(msg, filename) + raise SyntaxError(msg) + + matches = cookie_re.findall(line_string) + if not matches: + return None + encoding = _get_normal_name(matches[0]) + try: + codec = lookup(encoding) + except LookupError: + # This behaviour mimics the Python interpreter + if filename is None: + msg = "unknown encoding: " + encoding + else: + msg = "unknown encoding for {!r}: {}".format(filename, + encoding) + raise SyntaxError(msg) + + if bom_found: + if codec.name != 'utf-8': + # This behaviour mimics the Python interpreter + if filename is None: + msg = 'encoding problem: utf-8' + else: + msg = 'encoding problem for {!r}: utf-8'.format(filename) + raise SyntaxError(msg) + encoding += '-sig' + return encoding + + first = read_or_stop() + if first.startswith(BOM_UTF8): + bom_found = True + first = first[3:] + default = 'utf-8-sig' + if not first: + return default, [] + + encoding = find_cookie(first) + if encoding: + return encoding, [first] + + second = read_or_stop() + if not second: + return default, [first] + + encoding = find_cookie(second) + if encoding: + return encoding, [first, second] + + return default, [first, second] + +# For converting & <-> & etc. 
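Before the HTML-escaping shims that the comment above introduces, a brief
sketch of how the ``detect_encoding`` backport just defined behaves. The names
are illustrative: ``io.BytesIO`` stands in for an open binary file, and on
Python 3 the stdlib ``tokenize.detect_encoding`` imported by the ``try``
branch behaves the same way.

    import io

    buf = io.BytesIO(b'# -*- coding: latin-1 -*-\nx = 1\n')
    encoding, consumed = detect_encoding(buf.readline)
    print(encoding)   # 'iso-8859-1' (normalised from the 'latin-1' cookie)
    print(consumed)   # [b'# -*- coding: latin-1 -*-\n'], the line read while sniffing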
+try: + from html import escape +except ImportError: + from cgi import escape +if sys.version_info[:2] < (3, 4): + unescape = HTMLParser().unescape +else: + from html import unescape + +try: + from collections import ChainMap +except ImportError: # pragma: no cover + from collections import MutableMapping + + try: + from reprlib import recursive_repr as _recursive_repr + except ImportError: + def _recursive_repr(fillvalue='...'): + ''' + Decorator to make a repr function return fillvalue for a recursive + call + ''' + + def decorating_function(user_function): + repr_running = set() + + def wrapper(self): + key = id(self), get_ident() + if key in repr_running: + return fillvalue + repr_running.add(key) + try: + result = user_function(self) + finally: + repr_running.discard(key) + return result + + # Can't use functools.wraps() here because of bootstrap issues + wrapper.__module__ = getattr(user_function, '__module__') + wrapper.__doc__ = getattr(user_function, '__doc__') + wrapper.__name__ = getattr(user_function, '__name__') + wrapper.__annotations__ = getattr(user_function, '__annotations__', {}) + return wrapper + + return decorating_function + + class ChainMap(MutableMapping): + ''' A ChainMap groups multiple dicts (or other mappings) together + to create a single, updateable view. + + The underlying mappings are stored in a list. That list is public and can + accessed or updated using the *maps* attribute. There is no other state. + + Lookups search the underlying mappings successively until a key is found. + In contrast, writes, updates, and deletions only operate on the first + mapping. + + ''' + + def __init__(self, *maps): + '''Initialize a ChainMap by setting *maps* to the given mappings. + If no mappings are provided, a single empty dictionary is used. + + ''' + self.maps = list(maps) or [{}] # always at least one map + + def __missing__(self, key): + raise KeyError(key) + + def __getitem__(self, key): + for mapping in self.maps: + try: + return mapping[key] # can't use 'key in mapping' with defaultdict + except KeyError: + pass + return self.__missing__(key) # support subclasses that define __missing__ + + def get(self, key, default=None): + return self[key] if key in self else default + + def __len__(self): + return len(set().union(*self.maps)) # reuses stored hash values if possible + + def __iter__(self): + return iter(set().union(*self.maps)) + + def __contains__(self, key): + return any(key in m for m in self.maps) + + def __bool__(self): + return any(self.maps) + + @_recursive_repr() + def __repr__(self): + return '{0.__class__.__name__}({1})'.format( + self, ', '.join(map(repr, self.maps))) + + @classmethod + def fromkeys(cls, iterable, *args): + 'Create a ChainMap with a single dict created from the iterable.' + return cls(dict.fromkeys(iterable, *args)) + + def copy(self): + 'New ChainMap or subclass with a new copy of maps[0] and refs to maps[1:]' + return self.__class__(self.maps[0].copy(), *self.maps[1:]) + + __copy__ = copy + + def new_child(self): # like Django's Context.push() + 'New ChainMap with a new dict followed by all previous maps.' + return self.__class__({}, *self.maps) + + @property + def parents(self): # like Django's Context.pop() + 'New ChainMap from maps[1:].' 
+ return self.__class__(*self.maps[1:]) + + def __setitem__(self, key, value): + self.maps[0][key] = value + + def __delitem__(self, key): + try: + del self.maps[0][key] + except KeyError: + raise KeyError('Key not found in the first mapping: {!r}'.format(key)) + + def popitem(self): + 'Remove and return an item pair from maps[0]. Raise KeyError is maps[0] is empty.' + try: + return self.maps[0].popitem() + except KeyError: + raise KeyError('No keys found in the first mapping.') + + def pop(self, key, *args): + 'Remove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].' + try: + return self.maps[0].pop(key, *args) + except KeyError: + raise KeyError('Key not found in the first mapping: {!r}'.format(key)) + + def clear(self): + 'Clear maps[0], leaving maps[1:] intact.' + self.maps[0].clear() + +try: + from importlib.util import cache_from_source # Python >= 3.4 +except ImportError: # pragma: no cover + def cache_from_source(path, debug_override=None): + assert path.endswith('.py') + if debug_override is None: + debug_override = __debug__ + if debug_override: + suffix = 'c' + else: + suffix = 'o' + return path + suffix + +try: + from collections import OrderedDict +except ImportError: # pragma: no cover +## {{{ http://code.activestate.com/recipes/576693/ (r9) +# Backport of OrderedDict() class that runs on Python 2.4, 2.5, 2.6, 2.7 and pypy. +# Passes Python2.7's test suite and incorporates all the latest updates. + try: + from thread import get_ident as _get_ident + except ImportError: + from dummy_thread import get_ident as _get_ident + + try: + from _abcoll import KeysView, ValuesView, ItemsView + except ImportError: + pass + + + class OrderedDict(dict): + 'Dictionary that remembers insertion order' + # An inherited dict maps keys to values. + # The inherited dict provides __getitem__, __len__, __contains__, and get. + # The remaining methods are order-aware. + # Big-O running times for all methods are the same as for regular dictionaries. + + # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The circular doubly linked list starts and ends with a sentinel element. + # The sentinel element never gets deleted (this simplifies the algorithm). + # Each link is stored as a list of length three: [PREV, NEXT, KEY]. + + def __init__(self, *args, **kwds): + '''Initialize an ordered dictionary. Signature is the same as for + regular dictionaries, but keyword arguments are not recommended + because their insertion order is arbitrary. + + ''' + if len(args) > 1: + raise TypeError('expected at most 1 arguments, got %d' % len(args)) + try: + self.__root + except AttributeError: + self.__root = root = [] # sentinel node + root[:] = [root, root, None] + self.__map = {} + self.__update(*args, **kwds) + + def __setitem__(self, key, value, dict_setitem=dict.__setitem__): + 'od.__setitem__(i, y) <==> od[i]=y' + # Setting a new item creates a new link which goes at the end of the linked + # list, and the inherited dictionary is updated with the new key/value pair. + if key not in self: + root = self.__root + last = root[0] + last[1] = root[0] = self.__map[key] = [last, root, key] + dict_setitem(self, key, value) + + def __delitem__(self, key, dict_delitem=dict.__delitem__): + 'od.__delitem__(y) <==> del od[y]' + # Deleting an existing item uses self.__map to find the link which is + # then removed by updating the links in the predecessor and successor nodes. 
+ dict_delitem(self, key) + link_prev, link_next, key = self.__map.pop(key) + link_prev[1] = link_next + link_next[0] = link_prev + + def __iter__(self): + 'od.__iter__() <==> iter(od)' + root = self.__root + curr = root[1] + while curr is not root: + yield curr[2] + curr = curr[1] + + def __reversed__(self): + 'od.__reversed__() <==> reversed(od)' + root = self.__root + curr = root[0] + while curr is not root: + yield curr[2] + curr = curr[0] + + def clear(self): + 'od.clear() -> None. Remove all items from od.' + try: + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + except AttributeError: + pass + dict.clear(self) + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + root = self.__root + if last: + link = root[0] + link_prev = link[0] + link_prev[1] = root + root[0] = link_prev + else: + link = root[1] + link_next = link[1] + root[1] = link_next + link_next[0] = root + key = link[2] + del self.__map[key] + value = dict.pop(self, key) + return key, value + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) items in od' + for k in self: + yield (k, self[k]) + + def update(*args, **kwds): + '''od.update(E, **F) -> None. Update od from dict/iterable E and F. + + If E is a dict instance, does: for k in E: od[k] = E[k] + If E has a .keys() method, does: for k in E.keys(): od[k] = E[k] + Or if E is an iterable of items, does: for k, v in E: od[k] = v + In either case, this is followed by: for k, v in F.items(): od[k] = v + + ''' + if len(args) > 2: + raise TypeError('update() takes at most 2 positional ' + 'arguments (%d given)' % (len(args),)) + elif not args: + raise TypeError('update() takes at least 1 argument (0 given)') + self = args[0] + # Make progressively weaker assumptions about "other" + other = () + if len(args) == 2: + other = args[1] + if isinstance(other, dict): + for key in other: + self[key] = other[key] + elif hasattr(other, 'keys'): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding value. + If key is not found, d is returned if given, otherwise KeyError is raised. 
+ + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def __repr__(self, _repr_running=None): + 'od.__repr__() <==> repr(od)' + if not _repr_running: _repr_running = {} + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' + _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + + def __reduce__(self): + 'Return state information for pickling' + items = [[k, self[k]] for k in self] + inst_dict = vars(self).copy() + for k in vars(OrderedDict()): + inst_dict.pop(k, None) + if inst_dict: + return (self.__class__, (items,), inst_dict) + return self.__class__, (items,) + + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) + + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S + and values equal to v (which defaults to None). + + ''' + d = cls() + for key in iterable: + d[key] = value + return d + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + return not self == other + + # -- the following methods are only used in Python 2.7 -- + + def viewkeys(self): + "od.viewkeys() -> a set-like object providing a view on od's keys" + return KeysView(self) + + def viewvalues(self): + "od.viewvalues() -> an object providing a view on od's values" + return ValuesView(self) + + def viewitems(self): + "od.viewitems() -> a set-like object providing a view on od's items" + return ItemsView(self) + +try: + from logging.config import BaseConfigurator, valid_ident +except ImportError: # pragma: no cover + IDENTIFIER = re.compile('^[a-z_][a-z0-9_]*$', re.I) + + + def valid_ident(s): + m = IDENTIFIER.match(s) + if not m: + raise ValueError('Not a valid Python identifier: %r' % s) + return True + + + # The ConvertingXXX classes are wrappers around standard Python containers, + # and they serve to convert any suitable values in the container. The + # conversion converts base dicts, lists and tuples to their wrapped + # equivalents, whereas strings which match a conversion format are converted + # appropriately. + # + # Each wrapper should have a configurator attribute holding the actual + # configurator to use for conversion. 
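To make the conversion scheme described in the comment above concrete, here is
a sketch of how the wrapper classes and ``BaseConfigurator`` defined just
below behave. The config keys are invented for illustration, and on modern
Pythons the identically-behaving ``logging.config.BaseConfigurator`` is
imported instead of this fallback.

    # 'cfg://' paths index into the configuration itself; 'ext://' values
    # are resolved by importing (see value_converters below).
    cfg = BaseConfigurator({
        'root':   {'level': 'DEBUG'},
        'alias':  'cfg://root.level',   # looks up config['root']['level']
        'stream': 'ext://sys.stderr',   # imports sys, yields sys.stderr
    })
    print(cfg.config['alias'])    # 'DEBUG' (ConvertingDict converts on access)
    print(cfg.config['stream'])   # the sys.stderr stream object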
+
+    class ConvertingDict(dict):
+        """A converting dictionary wrapper."""
+
+        def __getitem__(self, key):
+            value = dict.__getitem__(self, key)
+            result = self.configurator.convert(value)
+            #If the converted value is different, save for next time
+            if value is not result:
+                self[key] = result
+                if type(result) in (ConvertingDict, ConvertingList,
+                                    ConvertingTuple):
+                    result.parent = self
+                    result.key = key
+            return result
+
+        def get(self, key, default=None):
+            value = dict.get(self, key, default)
+            result = self.configurator.convert(value)
+            #If the converted value is different, save for next time
+            if value is not result:
+                self[key] = result
+                if type(result) in (ConvertingDict, ConvertingList,
+                                    ConvertingTuple):
+                    result.parent = self
+                    result.key = key
+            return result
+
+        def pop(self, key, default=None):
+            value = dict.pop(self, key, default)
+            result = self.configurator.convert(value)
+            if value is not result:
+                if type(result) in (ConvertingDict, ConvertingList,
+                                    ConvertingTuple):
+                    result.parent = self
+                    result.key = key
+            return result
+
+    class ConvertingList(list):
+        """A converting list wrapper."""
+        def __getitem__(self, key):
+            value = list.__getitem__(self, key)
+            result = self.configurator.convert(value)
+            #If the converted value is different, save for next time
+            if value is not result:
+                self[key] = result
+                if type(result) in (ConvertingDict, ConvertingList,
+                                    ConvertingTuple):
+                    result.parent = self
+                    result.key = key
+            return result
+
+        def pop(self, idx=-1):
+            value = list.pop(self, idx)
+            result = self.configurator.convert(value)
+            if value is not result:
+                if type(result) in (ConvertingDict, ConvertingList,
+                                    ConvertingTuple):
+                    result.parent = self
+            return result
+
+    class ConvertingTuple(tuple):
+        """A converting tuple wrapper."""
+        def __getitem__(self, key):
+            value = tuple.__getitem__(self, key)
+            result = self.configurator.convert(value)
+            if value is not result:
+                if type(result) in (ConvertingDict, ConvertingList,
+                                    ConvertingTuple):
+                    result.parent = self
+                    result.key = key
+            return result
+
+    class BaseConfigurator(object):
+        """
+        The configurator base class which defines some useful defaults.
+        """
+
+        CONVERT_PATTERN = re.compile(r'^(?P<prefix>[a-z]+)://(?P<suffix>.*)$')
+
+        WORD_PATTERN = re.compile(r'^\s*(\w+)\s*')
+        DOT_PATTERN = re.compile(r'^\.\s*(\w+)\s*')
+        INDEX_PATTERN = re.compile(r'^\[\s*(\w+)\s*\]\s*')
+        DIGIT_PATTERN = re.compile(r'^\d+$')
+
+        value_converters = {
+            'ext' : 'ext_convert',
+            'cfg' : 'cfg_convert',
+        }
+
+        # We might want to use a different one, e.g. importlib
+        importer = staticmethod(__import__)
+
+        def __init__(self, config):
+            self.config = ConvertingDict(config)
+            self.config.configurator = self
+
+        def resolve(self, s):
+            """
+            Resolve strings to objects using standard import and attribute
+            syntax.
+            """
+            name = s.split('.')
+            used = name.pop(0)
+            try:
+                found = self.importer(used)
+                for frag in name:
+                    used += '.'
+ frag + try: + found = getattr(found, frag) + except AttributeError: + self.importer(used) + found = getattr(found, frag) + return found + except ImportError: + e, tb = sys.exc_info()[1:] + v = ValueError('Cannot resolve %r: %s' % (s, e)) + v.__cause__, v.__traceback__ = e, tb + raise v + + def ext_convert(self, value): + """Default converter for the ext:// protocol.""" + return self.resolve(value) + + def cfg_convert(self, value): + """Default converter for the cfg:// protocol.""" + rest = value + m = self.WORD_PATTERN.match(rest) + if m is None: + raise ValueError("Unable to convert %r" % value) + else: + rest = rest[m.end():] + d = self.config[m.groups()[0]] + #print d, rest + while rest: + m = self.DOT_PATTERN.match(rest) + if m: + d = d[m.groups()[0]] + else: + m = self.INDEX_PATTERN.match(rest) + if m: + idx = m.groups()[0] + if not self.DIGIT_PATTERN.match(idx): + d = d[idx] + else: + try: + n = int(idx) # try as number first (most likely) + d = d[n] + except TypeError: + d = d[idx] + if m: + rest = rest[m.end():] + else: + raise ValueError('Unable to convert ' + '%r at %r' % (value, rest)) + #rest should be empty + return d + + def convert(self, value): + """ + Convert values to an appropriate type. dicts, lists and tuples are + replaced by their converting alternatives. Strings are checked to + see if they have a conversion format and are converted if they do. + """ + if not isinstance(value, ConvertingDict) and isinstance(value, dict): + value = ConvertingDict(value) + value.configurator = self + elif not isinstance(value, ConvertingList) and isinstance(value, list): + value = ConvertingList(value) + value.configurator = self + elif not isinstance(value, ConvertingTuple) and\ + isinstance(value, tuple): + value = ConvertingTuple(value) + value.configurator = self + elif isinstance(value, string_types): + m = self.CONVERT_PATTERN.match(value) + if m: + d = m.groupdict() + prefix = d['prefix'] + converter = self.value_converters.get(prefix, None) + if converter: + suffix = d['suffix'] + converter = getattr(self, converter) + value = converter(suffix) + return value + + def configure_custom(self, config): + """Configure an object with a user-supplied factory.""" + c = config.pop('()') + if not callable(c): + c = self.resolve(c) + props = config.pop('.', None) + # Check for valid identifiers + kwargs = dict([(k, config[k]) for k in config if valid_ident(k)]) + result = c(**kwargs) + if props: + for name, value in props.items(): + setattr(result, name, value) + return result + + def as_tuple(self, value): + """Utility function which converts lists to tuples.""" + if isinstance(value, list): + value = tuple(value) + return value diff --git a/env/lib/python3.8/site-packages/distlib/database.py b/env/lib/python3.8/site-packages/distlib/database.py new file mode 100644 index 00000000..5db5d7f5 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/database.py @@ -0,0 +1,1350 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2017 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +"""PEP 376 implementation.""" + +from __future__ import unicode_literals + +import base64 +import codecs +import contextlib +import hashlib +import logging +import os +import posixpath +import sys +import zipimport + +from . 
import DistlibException, resources +from .compat import StringIO +from .version import get_scheme, UnsupportedVersionError +from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME, + LEGACY_METADATA_FILENAME) +from .util import (parse_requirement, cached_property, parse_name_and_version, + read_exports, write_exports, CSVReader, CSVWriter) + + +__all__ = ['Distribution', 'BaseInstalledDistribution', + 'InstalledDistribution', 'EggInfoDistribution', + 'DistributionPath'] + + +logger = logging.getLogger(__name__) + +EXPORTS_FILENAME = 'pydist-exports.json' +COMMANDS_FILENAME = 'pydist-commands.json' + +DIST_FILES = ('INSTALLER', METADATA_FILENAME, 'RECORD', 'REQUESTED', + 'RESOURCES', EXPORTS_FILENAME, 'SHARED') + +DISTINFO_EXT = '.dist-info' + + +class _Cache(object): + """ + A simple cache mapping names and .dist-info paths to distributions + """ + def __init__(self): + """ + Initialise an instance. There is normally one for each DistributionPath. + """ + self.name = {} + self.path = {} + self.generated = False + + def clear(self): + """ + Clear the cache, setting it to its initial state. + """ + self.name.clear() + self.path.clear() + self.generated = False + + def add(self, dist): + """ + Add a distribution to the cache. + :param dist: The distribution to add. + """ + if dist.path not in self.path: + self.path[dist.path] = dist + self.name.setdefault(dist.key, []).append(dist) + + +class DistributionPath(object): + """ + Represents a set of distributions installed on a path (typically sys.path). + """ + def __init__(self, path=None, include_egg=False): + """ + Create an instance from a path, optionally including legacy (distutils/ + setuptools/distribute) distributions. + :param path: The path to use, as a list of directories. If not specified, + sys.path is used. + :param include_egg: If True, this instance will look for and return legacy + distributions as well as those based on PEP 376. + """ + if path is None: + path = sys.path + self.path = path + self._include_dist = True + self._include_egg = include_egg + + self._cache = _Cache() + self._cache_egg = _Cache() + self._cache_enabled = True + self._scheme = get_scheme('default') + + def _get_cache_enabled(self): + return self._cache_enabled + + def _set_cache_enabled(self, value): + self._cache_enabled = value + + cache_enabled = property(_get_cache_enabled, _set_cache_enabled) + + def clear_cache(self): + """ + Clears the internal cache. + """ + self._cache.clear() + self._cache_egg.clear() + + + def _yield_distributions(self): + """ + Yield .dist-info and/or .egg(-info) distributions. + """ + # We need to check if we've seen some resources already, because on + # some Linux systems (e.g. some Debian/Ubuntu variants) there are + # symlinks which alias other files in the environment. 
+ seen = set() + for path in self.path: + finder = resources.finder_for_path(path) + if finder is None: + continue + r = finder.find('') + if not r or not r.is_container: + continue + rset = sorted(r.resources) + for entry in rset: + r = finder.find(entry) + if not r or r.path in seen: + continue + try: + if self._include_dist and entry.endswith(DISTINFO_EXT): + possible_filenames = [METADATA_FILENAME, + WHEEL_METADATA_FILENAME, + LEGACY_METADATA_FILENAME] + for metadata_filename in possible_filenames: + metadata_path = posixpath.join(entry, metadata_filename) + pydist = finder.find(metadata_path) + if pydist: + break + else: + continue + + with contextlib.closing(pydist.as_stream()) as stream: + metadata = Metadata(fileobj=stream, scheme='legacy') + logger.debug('Found %s', r.path) + seen.add(r.path) + yield new_dist_class(r.path, metadata=metadata, + env=self) + elif self._include_egg and entry.endswith(('.egg-info', + '.egg')): + logger.debug('Found %s', r.path) + seen.add(r.path) + yield old_dist_class(r.path, self) + except Exception as e: + msg = 'Unable to read distribution at %s, perhaps due to bad metadata: %s' + logger.warning(msg, r.path, e) + import warnings + warnings.warn(msg % (r.path, e), stacklevel=2) + + def _generate_cache(self): + """ + Scan the path for distributions and populate the cache with + those that are found. + """ + gen_dist = not self._cache.generated + gen_egg = self._include_egg and not self._cache_egg.generated + if gen_dist or gen_egg: + for dist in self._yield_distributions(): + if isinstance(dist, InstalledDistribution): + self._cache.add(dist) + else: + self._cache_egg.add(dist) + + if gen_dist: + self._cache.generated = True + if gen_egg: + self._cache_egg.generated = True + + @classmethod + def distinfo_dirname(cls, name, version): + """ + The *name* and *version* parameters are converted into their + filename-escaped form, i.e. any ``'-'`` characters are replaced + with ``'_'`` other than the one in ``'dist-info'`` and the one + separating the name from the version number. + + :parameter name: is converted to a standard distribution name by replacing + any runs of non- alphanumeric characters with a single + ``'-'``. + :type name: string + :parameter version: is converted to a standard version string. Spaces + become dots, and all other non-alphanumeric characters + (except dots) become dashes, with runs of multiple + dashes condensed to a single dash. + :type version: string + :returns: directory name + :rtype: string""" + name = name.replace('-', '_') + return '-'.join([name, version]) + DISTINFO_EXT + + def get_distributions(self): + """ + Provides an iterator that looks for distributions and returns + :class:`InstalledDistribution` or + :class:`EggInfoDistribution` instances for each one of them. + + :rtype: iterator of :class:`InstalledDistribution` and + :class:`EggInfoDistribution` instances + """ + if not self._cache_enabled: + for dist in self._yield_distributions(): + yield dist + else: + self._generate_cache() + + for dist in self._cache.path.values(): + yield dist + + if self._include_egg: + for dist in self._cache_egg.path.values(): + yield dist + + def get_distribution(self, name): + """ + Looks for a named distribution on the path. + + This function only returns the first result found, as no more than one + value is expected. If nothing is found, ``None`` is returned. 
+ + :rtype: :class:`InstalledDistribution`, :class:`EggInfoDistribution` + or ``None`` + """ + result = None + name = name.lower() + if not self._cache_enabled: + for dist in self._yield_distributions(): + if dist.key == name: + result = dist + break + else: + self._generate_cache() + + if name in self._cache.name: + result = self._cache.name[name][0] + elif self._include_egg and name in self._cache_egg.name: + result = self._cache_egg.name[name][0] + return result + + def provides_distribution(self, name, version=None): + """ + Iterates over all distributions to find which distributions provide *name*. + If a *version* is provided, it will be used to filter the results. + + This function only returns the first result found, since no more than + one values are expected. If the directory is not found, returns ``None``. + + :parameter version: a version specifier that indicates the version + required, conforming to the format in ``PEP-345`` + + :type name: string + :type version: string + """ + matcher = None + if version is not None: + try: + matcher = self._scheme.matcher('%s (%s)' % (name, version)) + except ValueError: + raise DistlibException('invalid name or version: %r, %r' % + (name, version)) + + for dist in self.get_distributions(): + # We hit a problem on Travis where enum34 was installed and doesn't + # have a provides attribute ... + if not hasattr(dist, 'provides'): + logger.debug('No "provides": %s', dist) + else: + provided = dist.provides + + for p in provided: + p_name, p_ver = parse_name_and_version(p) + if matcher is None: + if p_name == name: + yield dist + break + else: + if p_name == name and matcher.match(p_ver): + yield dist + break + + def get_file_path(self, name, relative_path): + """ + Return the path to a resource file. + """ + dist = self.get_distribution(name) + if dist is None: + raise LookupError('no distribution named %r found' % name) + return dist.get_resource_path(relative_path) + + def get_exported_entries(self, category, name=None): + """ + Return all of the exported entries in a particular category. + + :param category: The category to search for entries. + :param name: If specified, only entries with that name are returned. + """ + for dist in self.get_distributions(): + r = dist.exports + if category in r: + d = r[category] + if name is not None: + if name in d: + yield d[name] + else: + for v in d.values(): + yield v + + +class Distribution(object): + """ + A base class for distributions, whether installed or from indexes. + Either way, it must have some metadata, so that's all that's needed + for construction. + """ + + build_time_dependency = False + """ + Set to True if it's known to be only a build-time dependency (i.e. + not needed after installation). + """ + + requested = False + """A boolean that indicates whether the ``REQUESTED`` metadata file is + present (in other words, whether the package was installed by user + request or it was installed as a dependency).""" + + def __init__(self, metadata): + """ + Initialise an instance. + :param metadata: The instance of :class:`Metadata` describing this + distribution. 
+        """
+        self.metadata = metadata
+        self.name = metadata.name
+        self.key = self.name.lower()    # for case-insensitive comparisons
+        self.version = metadata.version
+        self.locator = None
+        self.digest = None
+        self.extras = None      # additional features requested
+        self.context = None     # environment marker overrides
+        self.download_urls = set()
+        self.digests = {}
+
+    @property
+    def source_url(self):
+        """
+        The source archive download URL for this distribution.
+        """
+        return self.metadata.source_url
+
+    download_url = source_url   # Backward compatibility
+
+    @property
+    def name_and_version(self):
+        """
+        A utility property which displays the name and version in parentheses.
+        """
+        return '%s (%s)' % (self.name, self.version)
+
+    @property
+    def provides(self):
+        """
+        A set of distribution names and versions provided by this distribution.
+        :return: A set of "name (version)" strings.
+        """
+        plist = self.metadata.provides
+        s = '%s (%s)' % (self.name, self.version)
+        if s not in plist:
+            plist.append(s)
+        return plist
+
+    def _get_requirements(self, req_attr):
+        md = self.metadata
+        reqts = getattr(md, req_attr)
+        logger.debug('%s: got requirements %r from metadata: %r', self.name, req_attr,
+                     reqts)
+        return set(md.get_requirements(reqts, extras=self.extras,
+                                       env=self.context))
+
+    @property
+    def run_requires(self):
+        return self._get_requirements('run_requires')
+
+    @property
+    def meta_requires(self):
+        return self._get_requirements('meta_requires')
+
+    @property
+    def build_requires(self):
+        return self._get_requirements('build_requires')
+
+    @property
+    def test_requires(self):
+        return self._get_requirements('test_requires')
+
+    @property
+    def dev_requires(self):
+        return self._get_requirements('dev_requires')
+
+    def matches_requirement(self, req):
+        """
+        Say if this instance matches (fulfills) a requirement.
+        :param req: The requirement to match.
+        :rtype req: str
+        :return: True if it matches, else False.
+        """
+        # Requirement may contain extras - parse to lose those
+        # from what's passed to the matcher
+        r = parse_requirement(req)
+        scheme = get_scheme(self.metadata.scheme)
+        try:
+            matcher = scheme.matcher(r.requirement)
+        except UnsupportedVersionError:
+            # XXX compat-mode if cannot read the version
+            logger.warning('could not read version %r - using name only',
+                           req)
+            name = req.split()[0]
+            matcher = scheme.matcher(name)
+
+        name = matcher.key      # case-insensitive
+
+        result = False
+        for p in self.provides:
+            p_name, p_ver = parse_name_and_version(p)
+            if p_name != name:
+                continue
+            try:
+                result = matcher.match(p_ver)
+                break
+            except UnsupportedVersionError:
+                pass
+        return result
+
+    def __repr__(self):
+        """
+        Return a textual representation of this instance.
+        """
+        if self.source_url:
+            suffix = ' [%s]' % self.source_url
+        else:
+            suffix = ''
+        return '<Distribution %s (%s)%s>' % (self.name, self.version, suffix)
+
+    def __eq__(self, other):
+        """
+        See if this distribution is the same as another.
+        :param other: The distribution to compare with. To be equal to one
+                      another, distributions must have the same type, name,
+                      version and source_url.
+        :return: True if it is the same, else False.
+        """
+        if type(other) is not type(self):
+            result = False
+        else:
+            result = (self.name == other.name and
+                      self.version == other.version and
+                      self.source_url == other.source_url)
+        return result
+
+    def __hash__(self):
+        """
+        Compute hash in a way which matches the equality test.
+        """
+        return hash(self.name) + hash(self.version) + hash(self.source_url)
+
+
+class BaseInstalledDistribution(Distribution):
+    """
+    This is the base class for installed distributions (whether PEP 376 or
+    legacy).
+    """
+
+    hasher = None
+
+    def __init__(self, metadata, path, env=None):
+        """
+        Initialise an instance.
+        :param metadata: An instance of :class:`Metadata` which describes the
+                         distribution. This will normally have been initialised
+                         from a metadata file in the ``path``.
+        :param path: The path of the ``.dist-info`` or ``.egg-info``
+                     directory for the distribution.
+        :param env: This is normally the :class:`DistributionPath`
+                    instance where this distribution was found.
+        """
+        super(BaseInstalledDistribution, self).__init__(metadata)
+        self.path = path
+        self.dist_path = env
+
+    def get_hash(self, data, hasher=None):
+        """
+        Get the hash of some data, using a particular hash algorithm, if
+        specified.
+
+        :param data: The data to be hashed.
+        :type data: bytes
+        :param hasher: The name of a hash implementation, supported by hashlib,
+                       or ``None``. Examples of valid values are ``'sha1'``,
+                       ``'sha224'``, ``'sha384'``, ``'sha256'``, ``'md5'`` and
+                       ``'sha512'``. If no hasher is specified, the ``hasher``
+                       attribute of the :class:`InstalledDistribution` instance
+                       is used. If the hasher is determined to be ``None``, MD5
+                       is used as the hashing algorithm.
+        :returns: The hash of the data. If a hasher was explicitly specified,
+                  the returned hash will be prefixed with the specified hasher
+                  followed by '='.
+        :rtype: str
+        """
+        if hasher is None:
+            hasher = self.hasher
+        if hasher is None:
+            hasher = hashlib.md5
+            prefix = ''
+        else:
+            hasher = getattr(hashlib, hasher)
+            prefix = '%s=' % self.hasher
+        digest = hasher(data).digest()
+        digest = base64.urlsafe_b64encode(digest).rstrip(b'=').decode('ascii')
+        return '%s%s' % (prefix, digest)
+
+
+class InstalledDistribution(BaseInstalledDistribution):
+    """
+    Created with the *path* of the ``.dist-info`` directory provided to the
+    constructor. It reads the metadata contained in ``pydist.json`` when it is
+    instantiated, or uses a passed-in Metadata instance (useful for when
+    dry-run mode is being used).
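+
+    A minimal, hypothetical construction (the path is illustrative only)::
+
+        dist = InstalledDistribution(
+            '/srv/venv/lib/python3.8/site-packages/foo-1.0.dist-info')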
+    """
+
+    hasher = 'sha256'
+
+    def __init__(self, path, metadata=None, env=None):
+        self.modules = []
+        self.finder = finder = resources.finder_for_path(path)
+        if finder is None:
+            raise ValueError('finder unavailable for %s' % path)
+        if env and env._cache_enabled and path in env._cache.path:
+            metadata = env._cache.path[path].metadata
+        elif metadata is None:
+            r = finder.find(METADATA_FILENAME)
+            # Temporary - for Wheel 0.23 support
+            if r is None:
+                r = finder.find(WHEEL_METADATA_FILENAME)
+            # Temporary - for legacy support
+            if r is None:
+                r = finder.find(LEGACY_METADATA_FILENAME)
+            if r is None:
+                raise ValueError('no %s found in %s' % (METADATA_FILENAME,
+                                                        path))
+            with contextlib.closing(r.as_stream()) as stream:
+                metadata = Metadata(fileobj=stream, scheme='legacy')
+
+        super(InstalledDistribution, self).__init__(metadata, path, env)
+
+        if env and env._cache_enabled:
+            env._cache.add(self)
+
+        r = finder.find('REQUESTED')
+        self.requested = r is not None
+        p = os.path.join(path, 'top_level.txt')
+        if os.path.exists(p):
+            with open(p, 'rb') as f:
+                data = f.read().decode('utf-8')
+            self.modules = data.splitlines()
+
+    def __repr__(self):
+        return '<InstalledDistribution %r %s at %r>' % (
+            self.name, self.version, self.path)
+
+    def __str__(self):
+        return "%s %s" % (self.name, self.version)
+
+    def _get_records(self):
+        """
+        Get the list of installed files for the distribution.
+        :return: A list of tuples of path, hash and size. Note that hash and
+                 size might be ``None`` for some entries. The path is exactly
+                 as stored in the file (which is as in PEP 376).
+        """
+        results = []
+        r = self.get_distinfo_resource('RECORD')
+        with contextlib.closing(r.as_stream()) as stream:
+            with CSVReader(stream=stream) as record_reader:
+                # Base location is parent dir of .dist-info dir
+                #base_location = os.path.dirname(self.path)
+                #base_location = os.path.abspath(base_location)
+                for row in record_reader:
+                    missing = [None for i in range(len(row), 3)]
+                    path, checksum, size = row + missing
+                    #if not os.path.isabs(path):
+                    #    path = path.replace('/', os.sep)
+                    #    path = os.path.join(base_location, path)
+                    results.append((path, checksum, size))
+        return results
+
+    @cached_property
+    def exports(self):
+        """
+        Return the information exported by this distribution.
+        :return: A dictionary of exports, mapping an export category to a dict
+                 of :class:`ExportEntry` instances describing the individual
+                 export entries, and keyed by name.
+        """
+        result = {}
+        r = self.get_distinfo_resource(EXPORTS_FILENAME)
+        if r:
+            result = self.read_exports()
+        return result
+
+    def read_exports(self):
+        """
+        Read exports data from a file in .ini format.
+
+        :return: A dictionary of exports, mapping an export category to a list
+                 of :class:`ExportEntry` instances describing the individual
+                 export entries.
+        """
+        result = {}
+        r = self.get_distinfo_resource(EXPORTS_FILENAME)
+        if r:
+            with contextlib.closing(r.as_stream()) as stream:
+                result = read_exports(stream)
+        return result
+
+    def write_exports(self, exports):
+        """
+        Write a dictionary of exports to a file in .ini format.
+        :param exports: A dictionary of exports, mapping an export category to
+                        a list of :class:`ExportEntry` instances describing the
+                        individual export entries.
+        """
+        rf = self.get_distinfo_file(EXPORTS_FILENAME)
+        with open(rf, 'w') as f:
+            write_exports(exports, f)
+
+    def get_resource_path(self, relative_path):
+        """
+        NOTE: This API may change in the future.
+
+        Return the absolute path to a resource file with the given relative
+        path.
+ + :param relative_path: The path, relative to .dist-info, of the resource + of interest. + :return: The absolute path where the resource is to be found. + """ + r = self.get_distinfo_resource('RESOURCES') + with contextlib.closing(r.as_stream()) as stream: + with CSVReader(stream=stream) as resources_reader: + for relative, destination in resources_reader: + if relative == relative_path: + return destination + raise KeyError('no resource file with relative path %r ' + 'is installed' % relative_path) + + def list_installed_files(self): + """ + Iterates over the ``RECORD`` entries and returns a tuple + ``(path, hash, size)`` for each line. + + :returns: iterator of (path, hash, size) + """ + for result in self._get_records(): + yield result + + def write_installed_files(self, paths, prefix, dry_run=False): + """ + Writes the ``RECORD`` file, using the ``paths`` iterable passed in. Any + existing ``RECORD`` file is silently overwritten. + + prefix is used to determine when to write absolute paths. + """ + prefix = os.path.join(prefix, '') + base = os.path.dirname(self.path) + base_under_prefix = base.startswith(prefix) + base = os.path.join(base, '') + record_path = self.get_distinfo_file('RECORD') + logger.info('creating %s', record_path) + if dry_run: + return None + with CSVWriter(record_path) as writer: + for path in paths: + if os.path.isdir(path) or path.endswith(('.pyc', '.pyo')): + # do not put size and hash, as in PEP-376 + hash_value = size = '' + else: + size = '%d' % os.path.getsize(path) + with open(path, 'rb') as fp: + hash_value = self.get_hash(fp.read()) + if path.startswith(base) or (base_under_prefix and + path.startswith(prefix)): + path = os.path.relpath(path, base) + writer.writerow((path, hash_value, size)) + + # add the RECORD file itself + if record_path.startswith(base): + record_path = os.path.relpath(record_path, base) + writer.writerow((record_path, '', '')) + return record_path + + def check_installed_files(self): + """ + Checks that the hashes and sizes of the files in ``RECORD`` are + matched by the files themselves. Returns a (possibly empty) list of + mismatches. Each entry in the mismatch list will be a tuple consisting + of the path, 'exists', 'size' or 'hash' according to what didn't match + (existence is checked first, then size, then hash), the expected + value and the actual value. + """ + mismatches = [] + base = os.path.dirname(self.path) + record_path = self.get_distinfo_file('RECORD') + for path, hash_value, size in self.list_installed_files(): + if not os.path.isabs(path): + path = os.path.join(base, path) + if path == record_path: + continue + if not os.path.exists(path): + mismatches.append((path, 'exists', True, False)) + elif os.path.isfile(path): + actual_size = str(os.path.getsize(path)) + if size and actual_size != size: + mismatches.append((path, 'size', size, actual_size)) + elif hash_value: + if '=' in hash_value: + hasher = hash_value.split('=', 1)[0] + else: + hasher = None + + with open(path, 'rb') as f: + actual_hash = self.get_hash(f.read(), hasher) + if actual_hash != hash_value: + mismatches.append((path, 'hash', hash_value, actual_hash)) + return mismatches + + @cached_property + def shared_locations(self): + """ + A dictionary of shared locations whose keys are in the set 'prefix', + 'purelib', 'platlib', 'scripts', 'headers', 'data' and 'namespace'. + The corresponding value is the absolute path of that category for + this distribution, and takes into account any paths selected by the + user at installation time (e.g. 
via command-line arguments). In the + case of the 'namespace' key, this would be a list of absolute paths + for the roots of namespace packages in this distribution. + + The first time this property is accessed, the relevant information is + read from the SHARED file in the .dist-info directory. + """ + result = {} + shared_path = os.path.join(self.path, 'SHARED') + if os.path.isfile(shared_path): + with codecs.open(shared_path, 'r', encoding='utf-8') as f: + lines = f.read().splitlines() + for line in lines: + key, value = line.split('=', 1) + if key == 'namespace': + result.setdefault(key, []).append(value) + else: + result[key] = value + return result + + def write_shared_locations(self, paths, dry_run=False): + """ + Write shared location information to the SHARED file in .dist-info. + :param paths: A dictionary as described in the documentation for + :meth:`shared_locations`. + :param dry_run: If True, the action is logged but no file is actually + written. + :return: The path of the file written to. + """ + shared_path = os.path.join(self.path, 'SHARED') + logger.info('creating %s', shared_path) + if dry_run: + return None + lines = [] + for key in ('prefix', 'lib', 'headers', 'scripts', 'data'): + path = paths[key] + if os.path.isdir(paths[key]): + lines.append('%s=%s' % (key, path)) + for ns in paths.get('namespace', ()): + lines.append('namespace=%s' % ns) + + with codecs.open(shared_path, 'w', encoding='utf-8') as f: + f.write('\n'.join(lines)) + return shared_path + + def get_distinfo_resource(self, path): + if path not in DIST_FILES: + raise DistlibException('invalid path for a dist-info file: ' + '%r at %r' % (path, self.path)) + finder = resources.finder_for_path(self.path) + if finder is None: + raise DistlibException('Unable to get a finder for %s' % self.path) + return finder.find(path) + + def get_distinfo_file(self, path): + """ + Returns a path located under the ``.dist-info`` directory. Returns a + string representing the path. + + :parameter path: a ``'/'``-separated path relative to the + ``.dist-info`` directory or an absolute path; + If *path* is an absolute path and doesn't start + with the ``.dist-info`` directory path, + a :class:`DistlibException` is raised + :type path: str + :rtype: str + """ + # Check if it is an absolute path # XXX use relpath, add tests + if path.find(os.sep) >= 0: + # it's an absolute path? + distinfo_dirname, path = path.split(os.sep)[-2:] + if distinfo_dirname != self.path.split(os.sep)[-1]: + raise DistlibException( + 'dist-info file %r does not belong to the %r %s ' + 'distribution' % (path, self.name, self.version)) + + # The file must be relative + if path not in DIST_FILES: + raise DistlibException('invalid path for a dist-info file: ' + '%r at %r' % (path, self.path)) + + return os.path.join(self.path, path) + + def list_distinfo_files(self): + """ + Iterates over the ``RECORD`` entries and returns paths for each line if + the path is pointing to a file located in the ``.dist-info`` directory + or one of its subdirectories. 
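+
+        For example (``dist`` is illustrative)::
+
+            for p in dist.list_distinfo_files():
+                print(p)    # e.g. paths to RECORD, METADATA, ...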
+ + :returns: iterator of paths + """ + base = os.path.dirname(self.path) + for path, checksum, size in self._get_records(): + # XXX add separator or use real relpath algo + if not os.path.isabs(path): + path = os.path.join(base, path) + if path.startswith(self.path): + yield path + + def __eq__(self, other): + return (isinstance(other, InstalledDistribution) and + self.path == other.path) + + # See http://docs.python.org/reference/datamodel#object.__hash__ + __hash__ = object.__hash__ + + +class EggInfoDistribution(BaseInstalledDistribution): + """Created with the *path* of the ``.egg-info`` directory or file provided + to the constructor. It reads the metadata contained in the file itself, or + if the given path happens to be a directory, the metadata is read from the + file ``PKG-INFO`` under that directory.""" + + requested = True # as we have no way of knowing, assume it was + shared_locations = {} + + def __init__(self, path, env=None): + def set_name_and_version(s, n, v): + s.name = n + s.key = n.lower() # for case-insensitive comparisons + s.version = v + + self.path = path + self.dist_path = env + if env and env._cache_enabled and path in env._cache_egg.path: + metadata = env._cache_egg.path[path].metadata + set_name_and_version(self, metadata.name, metadata.version) + else: + metadata = self._get_metadata(path) + + # Need to be set before caching + set_name_and_version(self, metadata.name, metadata.version) + + if env and env._cache_enabled: + env._cache_egg.add(self) + super(EggInfoDistribution, self).__init__(metadata, path, env) + + def _get_metadata(self, path): + requires = None + + def parse_requires_data(data): + """Create a list of dependencies from a requires.txt file. + + *data*: the contents of a setuptools-produced requires.txt file. + """ + reqs = [] + lines = data.splitlines() + for line in lines: + line = line.strip() + if line.startswith('['): + logger.warning('Unexpected line: quitting requirement scan: %r', + line) + break + r = parse_requirement(line) + if not r: + logger.warning('Not recognised as a requirement: %r', line) + continue + if r.extras: + logger.warning('extra requirements in requires.txt are ' + 'not supported') + if not r.constraints: + reqs.append(r.name) + else: + cons = ', '.join('%s%s' % c for c in r.constraints) + reqs.append('%s (%s)' % (r.name, cons)) + return reqs + + def parse_requires_path(req_path): + """Create a list of dependencies from a requires.txt file. + + *req_path*: the path to a setuptools-produced requires.txt file. 
+            """
+
+            reqs = []
+            try:
+                with codecs.open(req_path, 'r', 'utf-8') as fp:
+                    reqs = parse_requires_data(fp.read())
+            except IOError:
+                pass
+            return reqs
+
+        tl_path = tl_data = None
+        if path.endswith('.egg'):
+            if os.path.isdir(path):
+                p = os.path.join(path, 'EGG-INFO')
+                meta_path = os.path.join(p, 'PKG-INFO')
+                metadata = Metadata(path=meta_path, scheme='legacy')
+                req_path = os.path.join(p, 'requires.txt')
+                tl_path = os.path.join(p, 'top_level.txt')
+                requires = parse_requires_path(req_path)
+            else:
+                # FIXME handle the case where zipfile is not available
+                zipf = zipimport.zipimporter(path)
+                fileobj = StringIO(
+                    zipf.get_data('EGG-INFO/PKG-INFO').decode('utf8'))
+                metadata = Metadata(fileobj=fileobj, scheme='legacy')
+                try:
+                    data = zipf.get_data('EGG-INFO/requires.txt')
+                    tl_data = zipf.get_data('EGG-INFO/top_level.txt').decode('utf-8')
+                    requires = parse_requires_data(data.decode('utf-8'))
+                except IOError:
+                    requires = None
+        elif path.endswith('.egg-info'):
+            if os.path.isdir(path):
+                req_path = os.path.join(path, 'requires.txt')
+                requires = parse_requires_path(req_path)
+                path = os.path.join(path, 'PKG-INFO')
+                tl_path = os.path.join(path, 'top_level.txt')
+            metadata = Metadata(path=path, scheme='legacy')
+        else:
+            raise DistlibException('path must end with .egg-info or .egg, '
+                                   'got %r' % path)
+
+        if requires:
+            metadata.add_requirements(requires)
+        # look for top-level modules in top_level.txt, if present
+        if tl_data is None:
+            if tl_path is not None and os.path.exists(tl_path):
+                with open(tl_path, 'rb') as f:
+                    tl_data = f.read().decode('utf-8')
+        if not tl_data:
+            tl_data = []
+        else:
+            tl_data = tl_data.splitlines()
+        self.modules = tl_data
+        return metadata
+
+    def __repr__(self):
+        return '<EggInfoDistribution %r %s at %r>' % (
+            self.name, self.version, self.path)
+
+    def __str__(self):
+        return "%s %s" % (self.name, self.version)
+
+    def check_installed_files(self):
+        """
+        Checks that the hashes and sizes of the files in ``RECORD`` are
+        matched by the files themselves. Returns a (possibly empty) list of
+        mismatches. Each entry in the mismatch list will be a tuple consisting
+        of the path, 'exists', 'size' or 'hash' according to what didn't match
+        (existence is checked first, then size, then hash), the expected
+        value and the actual value.
+        """
+        mismatches = []
+        record_path = os.path.join(self.path, 'installed-files.txt')
+        if os.path.exists(record_path):
+            for path, _, _ in self.list_installed_files():
+                if path == record_path:
+                    continue
+                if not os.path.exists(path):
+                    mismatches.append((path, 'exists', True, False))
+        return mismatches
+
+    def list_installed_files(self):
+        """
+        Iterates over the ``installed-files.txt`` entries and returns a tuple
+        ``(path, hash, size)`` for each line.
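+
+        A sketch of typical use (``dist`` is illustrative)::
+
+            for path, file_hash, size in dist.list_installed_files():
+                print(path, size)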
+ + :returns: a list of (path, hash, size) + """ + + def _md5(path): + f = open(path, 'rb') + try: + content = f.read() + finally: + f.close() + return hashlib.md5(content).hexdigest() + + def _size(path): + return os.stat(path).st_size + + record_path = os.path.join(self.path, 'installed-files.txt') + result = [] + if os.path.exists(record_path): + with codecs.open(record_path, 'r', encoding='utf-8') as f: + for line in f: + line = line.strip() + p = os.path.normpath(os.path.join(self.path, line)) + # "./" is present as a marker between installed files + # and installation metadata files + if not os.path.exists(p): + logger.warning('Non-existent file: %s', p) + if p.endswith(('.pyc', '.pyo')): + continue + #otherwise fall through and fail + if not os.path.isdir(p): + result.append((p, _md5(p), _size(p))) + result.append((record_path, None, None)) + return result + + def list_distinfo_files(self, absolute=False): + """ + Iterates over the ``installed-files.txt`` entries and returns paths for + each line if the path is pointing to a file located in the + ``.egg-info`` directory or one of its subdirectories. + + :parameter absolute: If *absolute* is ``True``, each returned path is + transformed into a local absolute path. Otherwise the + raw value from ``installed-files.txt`` is returned. + :type absolute: boolean + :returns: iterator of paths + """ + record_path = os.path.join(self.path, 'installed-files.txt') + if os.path.exists(record_path): + skip = True + with codecs.open(record_path, 'r', encoding='utf-8') as f: + for line in f: + line = line.strip() + if line == './': + skip = False + continue + if not skip: + p = os.path.normpath(os.path.join(self.path, line)) + if p.startswith(self.path): + if absolute: + yield p + else: + yield line + + def __eq__(self, other): + return (isinstance(other, EggInfoDistribution) and + self.path == other.path) + + # See http://docs.python.org/reference/datamodel#object.__hash__ + __hash__ = object.__hash__ + +new_dist_class = InstalledDistribution +old_dist_class = EggInfoDistribution + + +class DependencyGraph(object): + """ + Represents a dependency graph between distributions. + + The dependency relationships are stored in an ``adjacency_list`` that maps + distributions to a list of ``(other, label)`` tuples where ``other`` + is a distribution and the edge is labeled with ``label`` (i.e. the version + specifier, if such was provided). Also, for more efficient traversal, for + every distribution ``x``, a list of predecessors is kept in + ``reverse_list[x]``. An edge from distribution ``a`` to + distribution ``b`` means that ``a`` depends on ``b``. If any missing + dependencies are found, they are stored in ``missing``, which is a + dictionary that maps distributions to a list of requirements that were not + provided by any other distributions. + """ + + def __init__(self): + self.adjacency_list = {} + self.reverse_list = {} + self.missing = {} + + def add_distribution(self, distribution): + """Add the *distribution* to the graph. + + :type distribution: :class:`distutils2.database.InstalledDistribution` + or :class:`distutils2.database.EggInfoDistribution` + """ + self.adjacency_list[distribution] = [] + self.reverse_list[distribution] = [] + #self.missing[distribution] = [] + + def add_edge(self, x, y, label=None): + """Add an edge from distribution *x* to distribution *y* with the given + *label*. 
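+
+        For example, with two hypothetical distributions ``a`` and ``b``
+        already added via :meth:`add_distribution`::
+
+            graph.add_edge(a, b, '>=1.0')    # a depends on b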
+ + :type x: :class:`distutils2.database.InstalledDistribution` or + :class:`distutils2.database.EggInfoDistribution` + :type y: :class:`distutils2.database.InstalledDistribution` or + :class:`distutils2.database.EggInfoDistribution` + :type label: ``str`` or ``None`` + """ + self.adjacency_list[x].append((y, label)) + # multiple edges are allowed, so be careful + if x not in self.reverse_list[y]: + self.reverse_list[y].append(x) + + def add_missing(self, distribution, requirement): + """ + Add a missing *requirement* for the given *distribution*. + + :type distribution: :class:`distutils2.database.InstalledDistribution` + or :class:`distutils2.database.EggInfoDistribution` + :type requirement: ``str`` + """ + logger.debug('%s missing %r', distribution, requirement) + self.missing.setdefault(distribution, []).append(requirement) + + def _repr_dist(self, dist): + return '%s %s' % (dist.name, dist.version) + + def repr_node(self, dist, level=1): + """Prints only a subgraph""" + output = [self._repr_dist(dist)] + for other, label in self.adjacency_list[dist]: + dist = self._repr_dist(other) + if label is not None: + dist = '%s [%s]' % (dist, label) + output.append(' ' * level + str(dist)) + suboutput = self.repr_node(other, level + 1) + subs = suboutput.split('\n') + output.extend(subs[1:]) + return '\n'.join(output) + + def to_dot(self, f, skip_disconnected=True): + """Writes a DOT output for the graph to the provided file *f*. + + If *skip_disconnected* is set to ``True``, then all distributions + that are not dependent on any other distribution are skipped. + + :type f: has to support ``file``-like operations + :type skip_disconnected: ``bool`` + """ + disconnected = [] + + f.write("digraph dependencies {\n") + for dist, adjs in self.adjacency_list.items(): + if len(adjs) == 0 and not skip_disconnected: + disconnected.append(dist) + for other, label in adjs: + if not label is None: + f.write('"%s" -> "%s" [label="%s"]\n' % + (dist.name, other.name, label)) + else: + f.write('"%s" -> "%s"\n' % (dist.name, other.name)) + if not skip_disconnected and len(disconnected) > 0: + f.write('subgraph disconnected {\n') + f.write('label = "Disconnected"\n') + f.write('bgcolor = red\n') + + for dist in disconnected: + f.write('"%s"' % dist.name) + f.write('\n') + f.write('}\n') + f.write('}\n') + + def topological_sort(self): + """ + Perform a topological sort of the graph. + :return: A tuple, the first element of which is a topologically sorted + list of distributions, and the second element of which is a + list of distributions that cannot be sorted because they have + circular dependencies and so form a cycle. + """ + result = [] + # Make a shallow copy of the adjacency list + alist = {} + for k, v in self.adjacency_list.items(): + alist[k] = v[:] + while True: + # See what we can remove in this run + to_remove = [] + for k, v in list(alist.items())[:]: + if not v: + to_remove.append(k) + del alist[k] + if not to_remove: + # What's left in alist (if anything) is a cycle. 
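+                # (Each pass removes nodes with no outgoing edges; if a
+                # pass removes nothing, every remaining node is part of
+                # a cycle.)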
+                break
+            # Remove from the adjacency list of others
+            for k, v in alist.items():
+                alist[k] = [(d, r) for d, r in v if d not in to_remove]
+            logger.debug('Moving to result: %s',
+                         ['%s (%s)' % (d.name, d.version) for d in to_remove])
+            result.extend(to_remove)
+        return result, list(alist.keys())
+
+    def __repr__(self):
+        """Representation of the graph"""
+        output = []
+        for dist, adjs in self.adjacency_list.items():
+            output.append(self.repr_node(dist))
+        return '\n'.join(output)
+
+
+def make_graph(dists, scheme='default'):
+    """Makes a dependency graph from the given distributions.
+
+    :parameter dists: a list of distributions
+    :type dists: list of :class:`distutils2.database.InstalledDistribution` and
+                 :class:`distutils2.database.EggInfoDistribution` instances
+    :rtype: a :class:`DependencyGraph` instance
+    """
+    scheme = get_scheme(scheme)
+    graph = DependencyGraph()
+    provided = {}  # maps names to lists of (version, dist) tuples
+
+    # first, build the graph and find out what's provided
+    for dist in dists:
+        graph.add_distribution(dist)
+
+        for p in dist.provides:
+            name, version = parse_name_and_version(p)
+            logger.debug('Add to provided: %s, %s, %s', name, version, dist)
+            provided.setdefault(name, []).append((version, dist))
+
+    # now make the edges
+    for dist in dists:
+        requires = (dist.run_requires | dist.meta_requires |
+                    dist.build_requires | dist.dev_requires)
+        for req in requires:
+            try:
+                matcher = scheme.matcher(req)
+            except UnsupportedVersionError:
+                # XXX compat-mode if cannot read the version
+                logger.warning('could not read version %r - using name only',
+                               req)
+                name = req.split()[0]
+                matcher = scheme.matcher(name)
+
+            name = matcher.key  # case-insensitive
+
+            matched = False
+            if name in provided:
+                for version, provider in provided[name]:
+                    try:
+                        match = matcher.match(version)
+                    except UnsupportedVersionError:
+                        match = False
+
+                    if match:
+                        graph.add_edge(dist, provider, req)
+                        matched = True
+                        break
+            if not matched:
+                graph.add_missing(dist, req)
+    return graph
+
+
+def get_dependent_dists(dists, dist):
+    """Recursively generate a list of distributions from *dists* that are
+    dependent on *dist*.
+
+    :param dists: a list of distributions
+    :param dist: a distribution, member of *dists*, for which we are
+                 interested in finding the distributions that depend on it
+    """
+    if dist not in dists:
+        raise DistlibException('given distribution %r is not a member '
+                               'of the list' % dist.name)
+    graph = make_graph(dists)
+
+    dep = [dist]  # dependent distributions
+    todo = graph.reverse_list[dist]  # list of nodes we should inspect
+
+    while todo:
+        d = todo.pop()
+        dep.append(d)
+        for succ in graph.reverse_list[d]:
+            if succ not in dep:
+                todo.append(succ)
+
+    dep.pop(0)  # remove dist from dep, was there to prevent infinite loops
+    return dep
+
+
+def get_required_dists(dists, dist):
+    """Recursively generate a list of distributions from *dists* that are
+    required by *dist*.
+
+    :param dists: a list of distributions
+    :param dist: a distribution, member of *dists* for which we are interested
+                 in finding the dependencies.
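+
+    For example (hypothetical; *dists* would normally come from a
+    :class:`DistributionPath`)::
+
+        dists = list(DistributionPath().get_distributions())
+        required = get_required_dists(dists, dists[0])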
+ """ + if dist not in dists: + raise DistlibException('given distribution %r is not a member ' + 'of the list' % dist.name) + graph = make_graph(dists) + + req = set() # required distributions + todo = graph.adjacency_list[dist] # list of nodes we should inspect + seen = set(t[0] for t in todo) # already added to todo + + while todo: + d = todo.pop()[0] + req.add(d) + pred_list = graph.adjacency_list[d] + for pred in pred_list: + d = pred[0] + if d not in req and d not in seen: + seen.add(d) + todo.append(pred) + return req + + +def make_dist(name, version, **kwargs): + """ + A convenience method for making a dist given just a name and version. + """ + summary = kwargs.pop('summary', 'Placeholder for summary') + md = Metadata(**kwargs) + md.name = name + md.version = version + md.summary = summary or 'Placeholder for summary' + return Distribution(md) diff --git a/env/lib/python3.8/site-packages/distlib/index.py b/env/lib/python3.8/site-packages/distlib/index.py new file mode 100644 index 00000000..9b6d129e --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/index.py @@ -0,0 +1,508 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +import hashlib +import logging +import os +import shutil +import subprocess +import tempfile +try: + from threading import Thread +except ImportError: # pragma: no cover + from dummy_threading import Thread + +from . import DistlibException +from .compat import (HTTPBasicAuthHandler, Request, HTTPPasswordMgr, + urlparse, build_opener, string_types) +from .util import zip_dir, ServerProxy + +logger = logging.getLogger(__name__) + +DEFAULT_INDEX = 'https://pypi.org/pypi' +DEFAULT_REALM = 'pypi' + +class PackageIndex(object): + """ + This class represents a package index compatible with PyPI, the Python + Package Index. + """ + + boundary = b'----------ThIs_Is_tHe_distlib_index_bouNdaRY_$' + + def __init__(self, url=None): + """ + Initialise an instance. + + :param url: The URL of the index. If not specified, the URL for PyPI is + used. + """ + self.url = url or DEFAULT_INDEX + self.read_configuration() + scheme, netloc, path, params, query, frag = urlparse(self.url) + if params or query or frag or scheme not in ('http', 'https'): + raise DistlibException('invalid repository: %s' % self.url) + self.password_handler = None + self.ssl_verifier = None + self.gpg = None + self.gpg_home = None + with open(os.devnull, 'w') as sink: + # Use gpg by default rather than gpg2, as gpg2 insists on + # prompting for passwords + for s in ('gpg', 'gpg2'): + try: + rc = subprocess.check_call([s, '--version'], stdout=sink, + stderr=sink) + if rc == 0: + self.gpg = s + break + except OSError: + pass + + def _get_pypirc_command(self): + """ + Get the distutils command for interacting with PyPI configurations. + :return: the command. + """ + from .util import _get_pypirc_command as cmd + return cmd() + + def read_configuration(self): + """ + Read the PyPI access configuration as supported by distutils. This populates + ``username``, ``password``, ``realm`` and ``url`` attributes from the + configuration. + """ + from .util import _load_pypirc + cfg = _load_pypirc(self) + self.username = cfg.get('username') + self.password = cfg.get('password') + self.realm = cfg.get('realm', 'pypi') + self.url = cfg.get('repository', self.url) + + def save_configuration(self): + """ + Save the PyPI access configuration. 
You must have set ``username`` and
+        ``password`` attributes before calling this method.
+        """
+        self.check_credentials()
+        from .util import _store_pypirc
+        _store_pypirc(self)
+
+    def check_credentials(self):
+        """
+        Check that ``username`` and ``password`` have been set, and raise an
+        exception if not.
+        """
+        if self.username is None or self.password is None:
+            raise DistlibException('username and password must be set')
+        pm = HTTPPasswordMgr()
+        _, netloc, _, _, _, _ = urlparse(self.url)
+        pm.add_password(self.realm, netloc, self.username, self.password)
+        self.password_handler = HTTPBasicAuthHandler(pm)
+
+    def register(self, metadata):  # pragma: no cover
+        """
+        Register a distribution on PyPI, using the provided metadata.
+
+        :param metadata: A :class:`Metadata` instance defining at least a name
+                         and version number for the distribution to be
+                         registered.
+        :return: The HTTP response received from PyPI upon submission of the
+                 request.
+        """
+        self.check_credentials()
+        metadata.validate()
+        d = metadata.todict()
+        d[':action'] = 'verify'
+        request = self.encode_request(d.items(), [])
+        response = self.send_request(request)
+        d[':action'] = 'submit'
+        request = self.encode_request(d.items(), [])
+        return self.send_request(request)
+
+    def _reader(self, name, stream, outbuf):
+        """
+        Thread runner for reading lines from a subprocess into a buffer.
+
+        :param name: The logical name of the stream (used for logging only).
+        :param stream: The stream to read from. This will typically be a pipe
+                       connected to the output stream of a subprocess.
+        :param outbuf: The list to append the read lines to.
+        """
+        while True:
+            s = stream.readline()
+            if not s:
+                break
+            s = s.decode('utf-8').rstrip()
+            outbuf.append(s)
+            logger.debug('%s: %s' % (name, s))
+        stream.close()
+
+    def get_sign_command(self, filename, signer, sign_password, keystore=None):  # pragma: no cover
+        """
+        Return a suitable command for signing a file.
+
+        :param filename: The pathname to the file to be signed.
+        :param signer: The identifier of the signer of the file.
+        :param sign_password: The passphrase for the signer's
+                              private key used for signing.
+        :param keystore: The path to a directory which contains the keys
+                         used in verification. If not specified, the
+                         instance's ``gpg_home`` attribute is used instead.
+        :return: The signing command as a list suitable to be
+                 passed to :class:`subprocess.Popen`.
+        """
+        cmd = [self.gpg, '--status-fd', '2', '--no-tty']
+        if keystore is None:
+            keystore = self.gpg_home
+        if keystore:
+            cmd.extend(['--homedir', keystore])
+        if sign_password is not None:
+            cmd.extend(['--batch', '--passphrase-fd', '0'])
+        td = tempfile.mkdtemp()
+        sf = os.path.join(td, os.path.basename(filename) + '.asc')
+        cmd.extend(['--detach-sign', '--armor', '--local-user',
+                    signer, '--output', sf, filename])
+        logger.debug('invoking: %s', ' '.join(cmd))
+        return cmd, sf
+
+    def run_command(self, cmd, input_data=None):
+        """
+        Run a command in a child process, passing it any input data specified.
+
+        :param cmd: The command to run.
+        :param input_data: If specified, this must be a byte string containing
+                           data to be sent to the child process.
+        :return: A tuple consisting of the subprocess' exit code, a list of
+                 lines read from the subprocess' ``stdout``, and a list of
+                 lines read from the subprocess' ``stderr``.
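+
+        A hypothetical invocation::
+
+            rc, out, err = index.run_command(['gpg', '--version'])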
+ """ + kwargs = { + 'stdout': subprocess.PIPE, + 'stderr': subprocess.PIPE, + } + if input_data is not None: + kwargs['stdin'] = subprocess.PIPE + stdout = [] + stderr = [] + p = subprocess.Popen(cmd, **kwargs) + # We don't use communicate() here because we may need to + # get clever with interacting with the command + t1 = Thread(target=self._reader, args=('stdout', p.stdout, stdout)) + t1.start() + t2 = Thread(target=self._reader, args=('stderr', p.stderr, stderr)) + t2.start() + if input_data is not None: + p.stdin.write(input_data) + p.stdin.close() + + p.wait() + t1.join() + t2.join() + return p.returncode, stdout, stderr + + def sign_file(self, filename, signer, sign_password, keystore=None): # pragma: no cover + """ + Sign a file. + + :param filename: The pathname to the file to be signed. + :param signer: The identifier of the signer of the file. + :param sign_password: The passphrase for the signer's + private key used for signing. + :param keystore: The path to a directory which contains the keys + used in signing. If not specified, the instance's + ``gpg_home`` attribute is used instead. + :return: The absolute pathname of the file where the signature is + stored. + """ + cmd, sig_file = self.get_sign_command(filename, signer, sign_password, + keystore) + rc, stdout, stderr = self.run_command(cmd, + sign_password.encode('utf-8')) + if rc != 0: + raise DistlibException('sign command failed with error ' + 'code %s' % rc) + return sig_file + + def upload_file(self, metadata, filename, signer=None, sign_password=None, + filetype='sdist', pyversion='source', keystore=None): + """ + Upload a release file to the index. + + :param metadata: A :class:`Metadata` instance defining at least a name + and version number for the file to be uploaded. + :param filename: The pathname of the file to be uploaded. + :param signer: The identifier of the signer of the file. + :param sign_password: The passphrase for the signer's + private key used for signing. + :param filetype: The type of the file being uploaded. This is the + distutils command which produced that file, e.g. + ``sdist`` or ``bdist_wheel``. + :param pyversion: The version of Python which the release relates + to. For code compatible with any Python, this would + be ``source``, otherwise it would be e.g. ``3.2``. + :param keystore: The path to a directory which contains the keys + used in signing. If not specified, the instance's + ``gpg_home`` attribute is used instead. + :return: The HTTP response received from PyPI upon submission of the + request. 
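+
+        A sketch of typical use (all names and paths are illustrative)::
+
+            index = PackageIndex()
+            index.username, index.password = 'user', 'secret'
+            response = index.upload_file(metadata, 'dist/foo-1.0.tar.gz')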
+ """ + self.check_credentials() + if not os.path.exists(filename): + raise DistlibException('not found: %s' % filename) + metadata.validate() + d = metadata.todict() + sig_file = None + if signer: + if not self.gpg: + logger.warning('no signing program available - not signed') + else: + sig_file = self.sign_file(filename, signer, sign_password, + keystore) + with open(filename, 'rb') as f: + file_data = f.read() + md5_digest = hashlib.md5(file_data).hexdigest() + sha256_digest = hashlib.sha256(file_data).hexdigest() + d.update({ + ':action': 'file_upload', + 'protocol_version': '1', + 'filetype': filetype, + 'pyversion': pyversion, + 'md5_digest': md5_digest, + 'sha256_digest': sha256_digest, + }) + files = [('content', os.path.basename(filename), file_data)] + if sig_file: + with open(sig_file, 'rb') as f: + sig_data = f.read() + files.append(('gpg_signature', os.path.basename(sig_file), + sig_data)) + shutil.rmtree(os.path.dirname(sig_file)) + request = self.encode_request(d.items(), files) + return self.send_request(request) + + def upload_documentation(self, metadata, doc_dir): # pragma: no cover + """ + Upload documentation to the index. + + :param metadata: A :class:`Metadata` instance defining at least a name + and version number for the documentation to be + uploaded. + :param doc_dir: The pathname of the directory which contains the + documentation. This should be the directory that + contains the ``index.html`` for the documentation. + :return: The HTTP response received from PyPI upon submission of the + request. + """ + self.check_credentials() + if not os.path.isdir(doc_dir): + raise DistlibException('not a directory: %r' % doc_dir) + fn = os.path.join(doc_dir, 'index.html') + if not os.path.exists(fn): + raise DistlibException('not found: %r' % fn) + metadata.validate() + name, version = metadata.name, metadata.version + zip_data = zip_dir(doc_dir).getvalue() + fields = [(':action', 'doc_upload'), + ('name', name), ('version', version)] + files = [('content', name, zip_data)] + request = self.encode_request(fields, files) + return self.send_request(request) + + def get_verify_command(self, signature_filename, data_filename, + keystore=None): + """ + Return a suitable command for verifying a file. + + :param signature_filename: The pathname to the file containing the + signature. + :param data_filename: The pathname to the file containing the + signed data. + :param keystore: The path to a directory which contains the keys + used in verification. If not specified, the + instance's ``gpg_home`` attribute is used instead. + :return: The verifying command as a list suitable to be + passed to :class:`subprocess.Popen`. + """ + cmd = [self.gpg, '--status-fd', '2', '--no-tty'] + if keystore is None: + keystore = self.gpg_home + if keystore: + cmd.extend(['--homedir', keystore]) + cmd.extend(['--verify', signature_filename, data_filename]) + logger.debug('invoking: %s', ' '.join(cmd)) + return cmd + + def verify_signature(self, signature_filename, data_filename, + keystore=None): + """ + Verify a signature for a file. + + :param signature_filename: The pathname to the file containing the + signature. + :param data_filename: The pathname to the file containing the + signed data. + :param keystore: The path to a directory which contains the keys + used in verification. If not specified, the + instance's ``gpg_home`` attribute is used instead. + :return: True if the signature was verified, else False. 
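+
+        For example (file names are illustrative)::
+
+            ok = index.verify_signature('foo-1.0.tar.gz.asc',
+                                        'foo-1.0.tar.gz')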
+ """ + if not self.gpg: + raise DistlibException('verification unavailable because gpg ' + 'unavailable') + cmd = self.get_verify_command(signature_filename, data_filename, + keystore) + rc, stdout, stderr = self.run_command(cmd) + if rc not in (0, 1): + raise DistlibException('verify command failed with error ' + 'code %s' % rc) + return rc == 0 + + def download_file(self, url, destfile, digest=None, reporthook=None): + """ + This is a convenience method for downloading a file from an URL. + Normally, this will be a file from the index, though currently + no check is made for this (i.e. a file can be downloaded from + anywhere). + + The method is just like the :func:`urlretrieve` function in the + standard library, except that it allows digest computation to be + done during download and checking that the downloaded data + matched any expected value. + + :param url: The URL of the file to be downloaded (assumed to be + available via an HTTP GET request). + :param destfile: The pathname where the downloaded file is to be + saved. + :param digest: If specified, this must be a (hasher, value) + tuple, where hasher is the algorithm used (e.g. + ``'md5'``) and ``value`` is the expected value. + :param reporthook: The same as for :func:`urlretrieve` in the + standard library. + """ + if digest is None: + digester = None + logger.debug('No digest specified') + else: + if isinstance(digest, (list, tuple)): + hasher, digest = digest + else: + hasher = 'md5' + digester = getattr(hashlib, hasher)() + logger.debug('Digest specified: %s' % digest) + # The following code is equivalent to urlretrieve. + # We need to do it this way so that we can compute the + # digest of the file as we go. + with open(destfile, 'wb') as dfp: + # addinfourl is not a context manager on 2.x + # so we have to use try/finally + sfp = self.send_request(Request(url)) + try: + headers = sfp.info() + blocksize = 8192 + size = -1 + read = 0 + blocknum = 0 + if "content-length" in headers: + size = int(headers["Content-Length"]) + if reporthook: + reporthook(blocknum, blocksize, size) + while True: + block = sfp.read(blocksize) + if not block: + break + read += len(block) + dfp.write(block) + if digester: + digester.update(block) + blocknum += 1 + if reporthook: + reporthook(blocknum, blocksize, size) + finally: + sfp.close() + + # check that we got the whole file, if we can + if size >= 0 and read < size: + raise DistlibException( + 'retrieval incomplete: got only %d out of %d bytes' + % (read, size)) + # if we have a digest, it must match. + if digester: + actual = digester.hexdigest() + if digest != actual: + raise DistlibException('%s digest mismatch for %s: expected ' + '%s, got %s' % (hasher, destfile, + digest, actual)) + logger.debug('Digest verified: %s', digest) + + def send_request(self, req): + """ + Send a standard library :class:`Request` to PyPI and return its + response. + + :param req: The request to send. + :return: The HTTP response from PyPI (a standard library HTTPResponse). + """ + handlers = [] + if self.password_handler: + handlers.append(self.password_handler) + if self.ssl_verifier: + handlers.append(self.ssl_verifier) + opener = build_opener(*handlers) + return opener.open(req) + + def encode_request(self, fields, files): + """ + Encode fields and files for posting to an HTTP server. + + :param fields: The fields to send as a list of (fieldname, value) + tuples. + :param files: The files to send as a list of (fieldname, filename, + file_bytes) tuple. 
+ """ + # Adapted from packaging, which in turn was adapted from + # http://code.activestate.com/recipes/146306 + + parts = [] + boundary = self.boundary + for k, values in fields: + if not isinstance(values, (list, tuple)): + values = [values] + + for v in values: + parts.extend(( + b'--' + boundary, + ('Content-Disposition: form-data; name="%s"' % + k).encode('utf-8'), + b'', + v.encode('utf-8'))) + for key, filename, value in files: + parts.extend(( + b'--' + boundary, + ('Content-Disposition: form-data; name="%s"; filename="%s"' % + (key, filename)).encode('utf-8'), + b'', + value)) + + parts.extend((b'--' + boundary + b'--', b'')) + + body = b'\r\n'.join(parts) + ct = b'multipart/form-data; boundary=' + boundary + headers = { + 'Content-type': ct, + 'Content-length': str(len(body)) + } + return Request(self.url, body, headers) + + def search(self, terms, operator=None): # pragma: no cover + if isinstance(terms, string_types): + terms = {'name': terms} + rpc_proxy = ServerProxy(self.url, timeout=3.0) + try: + return rpc_proxy.search(terms, operator or 'and') + finally: + rpc_proxy('close')() diff --git a/env/lib/python3.8/site-packages/distlib/locators.py b/env/lib/python3.8/site-packages/distlib/locators.py new file mode 100644 index 00000000..966ebc0e --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/locators.py @@ -0,0 +1,1300 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2015 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# + +import gzip +from io import BytesIO +import json +import logging +import os +import posixpath +import re +try: + import threading +except ImportError: # pragma: no cover + import dummy_threading as threading +import zlib + +from . import DistlibException +from .compat import (urljoin, urlparse, urlunparse, url2pathname, pathname2url, + queue, quote, unescape, build_opener, + HTTPRedirectHandler as BaseRedirectHandler, text_type, + Request, HTTPError, URLError) +from .database import Distribution, DistributionPath, make_dist +from .metadata import Metadata, MetadataInvalidError +from .util import (cached_property, ensure_slash, split_filename, get_project_data, + parse_requirement, parse_name_and_version, ServerProxy, + normalize_name) +from .version import get_scheme, UnsupportedVersionError +from .wheel import Wheel, is_compatible + +logger = logging.getLogger(__name__) + +HASHER_HASH = re.compile(r'^(\w+)=([a-f0-9]+)') +CHARSET = re.compile(r';\s*charset\s*=\s*(.*)\s*$', re.I) +HTML_CONTENT_TYPE = re.compile('text/html|application/x(ht)?ml') +DEFAULT_INDEX = 'https://pypi.org/pypi' + +def get_all_distribution_names(url=None): + """ + Return all distribution names known by an index. + :param url: The URL of the index. + :return: A list of all known distribution names. + """ + if url is None: + url = DEFAULT_INDEX + client = ServerProxy(url, timeout=3.0) + try: + return client.list_packages() + finally: + client('close')() + +class RedirectHandler(BaseRedirectHandler): + """ + A class to work around a bug in some Python 3.2.x releases. + """ + # There's a bug in the base version for some 3.2.x + # (e.g. 3.2.2 on Ubuntu Oneiric). If a Location header + # returns e.g. /abc, it bails because it says the scheme '' + # is bogus, when actually it should use the request's + # URL for the scheme. See Python issue #13696. 
+ def http_error_302(self, req, fp, code, msg, headers): + # Some servers (incorrectly) return multiple Location headers + # (so probably same goes for URI). Use first header. + newurl = None + for key in ('location', 'uri'): + if key in headers: + newurl = headers[key] + break + if newurl is None: # pragma: no cover + return + urlparts = urlparse(newurl) + if urlparts.scheme == '': + newurl = urljoin(req.get_full_url(), newurl) + if hasattr(headers, 'replace_header'): + headers.replace_header(key, newurl) + else: + headers[key] = newurl + return BaseRedirectHandler.http_error_302(self, req, fp, code, msg, + headers) + + http_error_301 = http_error_303 = http_error_307 = http_error_302 + +class Locator(object): + """ + A base class for locators - things that locate distributions. + """ + source_extensions = ('.tar.gz', '.tar.bz2', '.tar', '.zip', '.tgz', '.tbz') + binary_extensions = ('.egg', '.exe', '.whl') + excluded_extensions = ('.pdf',) + + # A list of tags indicating which wheels you want to match. The default + # value of None matches against the tags compatible with the running + # Python. If you want to match other values, set wheel_tags on a locator + # instance to a list of tuples (pyver, abi, arch) which you want to match. + wheel_tags = None + + downloadable_extensions = source_extensions + ('.whl',) + + def __init__(self, scheme='default'): + """ + Initialise an instance. + :param scheme: Because locators look for most recent versions, they + need to know the version scheme to use. This specifies + the current PEP-recommended scheme - use ``'legacy'`` + if you need to support existing distributions on PyPI. + """ + self._cache = {} + self.scheme = scheme + # Because of bugs in some of the handlers on some of the platforms, + # we use our own opener rather than just using urlopen. + self.opener = build_opener(RedirectHandler()) + # If get_project() is called from locate(), the matcher instance + # is set from the requirement passed to locate(). See issue #18 for + # why this can be useful to know. + self.matcher = None + self.errors = queue.Queue() + + def get_errors(self): + """ + Return any errors which have occurred. + """ + result = [] + while not self.errors.empty(): # pragma: no cover + try: + e = self.errors.get(False) + result.append(e) + except self.errors.Empty: + continue + self.errors.task_done() + return result + + def clear_errors(self): + """ + Clear any errors which may have been logged. + """ + # Just get the errors and throw them away + self.get_errors() + + def clear_cache(self): + self._cache.clear() + + def _get_scheme(self): + return self._scheme + + def _set_scheme(self, value): + self._scheme = value + + scheme = property(_get_scheme, _set_scheme) + + def _get_project(self, name): + """ + For a given project, get a dictionary mapping available versions to Distribution + instances. + + This should be implemented in subclasses. + + If called from a locate() request, self.matcher will be set to a + matcher for the requirement to satisfy, otherwise it will be None. + """ + raise NotImplementedError('Please implement in the subclass') + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + raise NotImplementedError('Please implement in the subclass') + + def get_project(self, name): + """ + For a given project, get a dictionary mapping available versions to Distribution + instances. + + This calls _get_project to do all the work, and just implements a caching layer on top. 
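+
+        A hypothetical call (the project name is illustrative)::
+
+            versions = locator.get_project('foo')
+            # e.g. {'1.0': <Distribution ...>, 'urls': {...}, 'digests': {...}}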
+        """
+        if self._cache is None:  # pragma: no cover
+            result = self._get_project(name)
+        elif name in self._cache:
+            result = self._cache[name]
+        else:
+            self.clear_errors()
+            result = self._get_project(name)
+            self._cache[name] = result
+        return result
+
+    def score_url(self, url):
+        """
+        Give a URL a score which can be used to choose preferred URLs
+        for a given project release.
+        """
+        t = urlparse(url)
+        basename = posixpath.basename(t.path)
+        compatible = True
+        is_wheel = basename.endswith('.whl')
+        is_downloadable = basename.endswith(self.downloadable_extensions)
+        if is_wheel:
+            compatible = is_compatible(Wheel(basename), self.wheel_tags)
+        return (t.scheme == 'https', 'pypi.org' in t.netloc,
+                is_downloadable, is_wheel, compatible, basename)
+
+    def prefer_url(self, url1, url2):
+        """
+        Choose one of two URLs where both are candidates for distribution
+        archives for the same version of a distribution (for example,
+        .tar.gz vs. zip).
+
+        The current implementation favours https:// URLs over http://, archives
+        from PyPI over those from other locations, wheel compatibility (if a
+        wheel) and then the archive name.
+        """
+        result = url2
+        if url1:
+            s1 = self.score_url(url1)
+            s2 = self.score_url(url2)
+            if s1 > s2:
+                result = url1
+            if result != url2:
+                logger.debug('Not replacing %r with %r', url1, url2)
+            else:
+                logger.debug('Replacing %r with %r', url1, url2)
+        return result
+
+    def split_filename(self, filename, project_name):
+        """
+        Attempt to split a filename into project name, version and Python version.
+        """
+        return split_filename(filename, project_name)
+
+    def convert_url_to_download_info(self, url, project_name):
+        """
+        See if a URL is a candidate for a download URL for a project (the URL
+        has typically been scraped from an HTML page).
+
+        If it is, a dictionary is returned with keys "name", "version",
+        "filename" and "url"; otherwise, None is returned.
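+
+        For example (a made-up URL)::
+
+            info = locator.convert_url_to_download_info(
+                'https://example.com/foo-1.0.tar.gz', 'foo')
+            # -> {'name': 'foo', 'version': '1.0', ...} or None
+        """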
+ """ + def same_project(name1, name2): + return normalize_name(name1) == normalize_name(name2) + + result = None + scheme, netloc, path, params, query, frag = urlparse(url) + if frag.lower().startswith('egg='): # pragma: no cover + logger.debug('%s: version hint in fragment: %r', + project_name, frag) + m = HASHER_HASH.match(frag) + if m: + algo, digest = m.groups() + else: + algo, digest = None, None + origpath = path + if path and path[-1] == '/': # pragma: no cover + path = path[:-1] + if path.endswith('.whl'): + try: + wheel = Wheel(path) + if not is_compatible(wheel, self.wheel_tags): + logger.debug('Wheel not compatible: %s', path) + else: + if project_name is None: + include = True + else: + include = same_project(wheel.name, project_name) + if include: + result = { + 'name': wheel.name, + 'version': wheel.version, + 'filename': wheel.filename, + 'url': urlunparse((scheme, netloc, origpath, + params, query, '')), + 'python-version': ', '.join( + ['.'.join(list(v[2:])) for v in wheel.pyver]), + } + except Exception as e: # pragma: no cover + logger.warning('invalid path for wheel: %s', path) + elif not path.endswith(self.downloadable_extensions): # pragma: no cover + logger.debug('Not downloadable: %s', path) + else: # downloadable extension + path = filename = posixpath.basename(path) + for ext in self.downloadable_extensions: + if path.endswith(ext): + path = path[:-len(ext)] + t = self.split_filename(path, project_name) + if not t: # pragma: no cover + logger.debug('No match for project/version: %s', path) + else: + name, version, pyver = t + if not project_name or same_project(project_name, name): + result = { + 'name': name, + 'version': version, + 'filename': filename, + 'url': urlunparse((scheme, netloc, origpath, + params, query, '')), + #'packagetype': 'sdist', + } + if pyver: # pragma: no cover + result['python-version'] = pyver + break + if result and algo: + result['%s_digest' % algo] = digest + return result + + def _get_digest(self, info): + """ + Get a digest from a dictionary by looking at a "digests" dictionary + or keys of the form 'algo_digest'. + + Returns a 2-tuple (algo, digest) if found, else None. Currently + looks only for SHA256, then MD5. + """ + result = None + if 'digests' in info: + digests = info['digests'] + for algo in ('sha256', 'md5'): + if algo in digests: + result = (algo, digests[algo]) + break + if not result: + for algo in ('sha256', 'md5'): + key = '%s_digest' % algo + if key in info: + result = (algo, info[key]) + break + return result + + def _update_version_data(self, result, info): + """ + Update a result dictionary (the final result from _get_project) with a + dictionary for a specific version, which typically holds information + gleaned from a filename or URL for an archive for the distribution. + """ + name = info.pop('name') + version = info.pop('version') + if version in result: + dist = result[version] + md = dist.metadata + else: + dist = make_dist(name, version, scheme=self.scheme) + md = dist.metadata + dist.digest = digest = self._get_digest(info) + url = info['url'] + result['digests'][url] = digest + if md.source_url != info['url']: + md.source_url = self.prefer_url(md.source_url, url) + result['urls'].setdefault(version, set()).add(url) + dist.locator = self + result[version] = dist + + def locate(self, requirement, prereleases=False): + """ + Find the most recent distribution which matches the given + requirement. 
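+
+        For example (a hypothetical requirement)::
+
+            dist = locator.locate('foo (>= 1.0)')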
+ + :param requirement: A requirement of the form 'foo (1.0)' or perhaps + 'foo (>= 1.0, < 2.0, != 1.3)' + :param prereleases: If ``True``, allow pre-release versions + to be located. Otherwise, pre-release versions + are not returned. + :return: A :class:`Distribution` instance, or ``None`` if no such + distribution could be located. + """ + result = None + r = parse_requirement(requirement) + if r is None: # pragma: no cover + raise DistlibException('Not a valid requirement: %r' % requirement) + scheme = get_scheme(self.scheme) + self.matcher = matcher = scheme.matcher(r.requirement) + logger.debug('matcher: %s (%s)', matcher, type(matcher).__name__) + versions = self.get_project(r.name) + if len(versions) > 2: # urls and digests keys are present + # sometimes, versions are invalid + slist = [] + vcls = matcher.version_class + for k in versions: + if k in ('urls', 'digests'): + continue + try: + if not matcher.match(k): + pass # logger.debug('%s did not match %r', matcher, k) + else: + if prereleases or not vcls(k).is_prerelease: + slist.append(k) + # else: + # logger.debug('skipping pre-release ' + # 'version %s of %s', k, matcher.name) + except Exception: # pragma: no cover + logger.warning('error matching %s with %r', matcher, k) + pass # slist.append(k) + if len(slist) > 1: + slist = sorted(slist, key=scheme.key) + if slist: + logger.debug('sorted list: %s', slist) + version = slist[-1] + result = versions[version] + if result: + if r.extras: + result.extras = r.extras + result.download_urls = versions.get('urls', {}).get(version, set()) + d = {} + sd = versions.get('digests', {}) + for url in result.download_urls: + if url in sd: # pragma: no cover + d[url] = sd[url] + result.digests = d + self.matcher = None + return result + + +class PyPIRPCLocator(Locator): + """ + This locator uses XML-RPC to locate distributions. It therefore + cannot be used with simple mirrors (that only mirror file content). + """ + def __init__(self, url, **kwargs): + """ + Initialise an instance. + + :param url: The URL to use for XML-RPC. + :param kwargs: Passed to the superclass constructor. + """ + super(PyPIRPCLocator, self).__init__(**kwargs) + self.base_url = url + self.client = ServerProxy(url, timeout=3.0) + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + return set(self.client.list_packages()) + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + versions = self.client.package_releases(name, True) + for v in versions: + urls = self.client.release_urls(name, v) + data = self.client.release_data(name, v) + metadata = Metadata(scheme=self.scheme) + metadata.name = data['name'] + metadata.version = data['version'] + metadata.license = data.get('license') + metadata.keywords = data.get('keywords', []) + metadata.summary = data.get('summary') + dist = Distribution(metadata) + if urls: + info = urls[0] + metadata.source_url = info['url'] + dist.digest = self._get_digest(info) + dist.locator = self + result[v] = dist + for info in urls: + url = info['url'] + digest = self._get_digest(info) + result['urls'].setdefault(v, set()).add(url) + result['digests'][url] = digest + return result + +class PyPIJSONLocator(Locator): + """ + This locator uses PyPI's JSON interface. It's very limited in functionality + and probably not worth using. 
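Backing up to the `locate()` method completed above: it is also exposed at module level (`locate = default_locator.locate`, further down in this file). A minimal sketch, assuming network access to PyPI and using `requests` purely as an example project name:

```python
from distlib.locators import locate

dist = locate('requests (>= 2.0)')      # newest matching release, or None
if dist is not None:
    print(dist.name_and_version)        # e.g. 'requests (2.x.y)'
    print(sorted(dist.download_urls))   # archive URLs gathered by the locator
```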
+ """ + def __init__(self, url, **kwargs): + super(PyPIJSONLocator, self).__init__(**kwargs) + self.base_url = ensure_slash(url) + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + raise NotImplementedError('Not available from this locator') + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + url = urljoin(self.base_url, '%s/json' % quote(name)) + try: + resp = self.opener.open(url) + data = resp.read().decode() # for now + d = json.loads(data) + md = Metadata(scheme=self.scheme) + data = d['info'] + md.name = data['name'] + md.version = data['version'] + md.license = data.get('license') + md.keywords = data.get('keywords', []) + md.summary = data.get('summary') + dist = Distribution(md) + dist.locator = self + urls = d['urls'] + result[md.version] = dist + for info in d['urls']: + url = info['url'] + dist.download_urls.add(url) + dist.digests[url] = self._get_digest(info) + result['urls'].setdefault(md.version, set()).add(url) + result['digests'][url] = self._get_digest(info) + # Now get other releases + for version, infos in d['releases'].items(): + if version == md.version: + continue # already done + omd = Metadata(scheme=self.scheme) + omd.name = md.name + omd.version = version + odist = Distribution(omd) + odist.locator = self + result[version] = odist + for info in infos: + url = info['url'] + odist.download_urls.add(url) + odist.digests[url] = self._get_digest(info) + result['urls'].setdefault(version, set()).add(url) + result['digests'][url] = self._get_digest(info) +# for info in urls: +# md.source_url = info['url'] +# dist.digest = self._get_digest(info) +# dist.locator = self +# for info in urls: +# url = info['url'] +# result['urls'].setdefault(md.version, set()).add(url) +# result['digests'][url] = self._get_digest(info) + except Exception as e: + self.errors.put(text_type(e)) + logger.exception('JSON fetch failed: %s', e) + return result + + +class Page(object): + """ + This class represents a scraped HTML page. + """ + # The following slightly hairy-looking regex just looks for the contents of + # an anchor link, which has an attribute "href" either immediately preceded + # or immediately followed by a "rel" attribute. The attribute values can be + # declared with double quotes, single quotes or no quotes - which leads to + # the length of the expression. + _href = re.compile(""" +(rel\\s*=\\s*(?:"(?P[^"]*)"|'(?P[^']*)'|(?P[^>\\s\n]*))\\s+)? +href\\s*=\\s*(?:"(?P[^"]*)"|'(?P[^']*)'|(?P[^>\\s\n]*)) +(\\s+rel\\s*=\\s*(?:"(?P[^"]*)"|'(?P[^']*)'|(?P[^>\\s\n]*)))? +""", re.I | re.S | re.X) + _base = re.compile(r"""]+)""", re.I | re.S) + + def __init__(self, data, url): + """ + Initialise an instance with the Unicode page contents and the URL they + came from. + """ + self.data = data + self.base_url = self.url = url + m = self._base.search(self.data) + if m: + self.base_url = m.group(1) + + _clean_re = re.compile(r'[^a-z0-9$&+,/:;=?@.#%_\\|-]', re.I) + + @cached_property + def links(self): + """ + Return the URLs of all the links on a page together with information + about their "rel" attribute, for determining which ones to treat as + downloads and which ones to queue for further scraping. + """ + def clean(url): + "Tidy up an URL." 
+ scheme, netloc, path, params, query, frag = urlparse(url) + return urlunparse((scheme, netloc, quote(path), + params, query, frag)) + + result = set() + for match in self._href.finditer(self.data): + d = match.groupdict('') + rel = (d['rel1'] or d['rel2'] or d['rel3'] or + d['rel4'] or d['rel5'] or d['rel6']) + url = d['url1'] or d['url2'] or d['url3'] + url = urljoin(self.base_url, url) + url = unescape(url) + url = self._clean_re.sub(lambda m: '%%%2x' % ord(m.group(0)), url) + result.add((url, rel)) + # We sort the result, hoping to bring the most recent versions + # to the front + result = sorted(result, key=lambda t: t[0], reverse=True) + return result + + +class SimpleScrapingLocator(Locator): + """ + A locator which scrapes HTML pages to locate downloads for a distribution. + This runs multiple threads to do the I/O; performance is at least as good + as pip's PackageFinder, which works in an analogous fashion. + """ + + # These are used to deal with various Content-Encoding schemes. + decoders = { + 'deflate': zlib.decompress, + 'gzip': lambda b: gzip.GzipFile(fileobj=BytesIO(b)).read(), + 'none': lambda b: b, + } + + def __init__(self, url, timeout=None, num_workers=10, **kwargs): + """ + Initialise an instance. + :param url: The root URL to use for scraping. + :param timeout: The timeout, in seconds, to be applied to requests. + This defaults to ``None`` (no timeout specified). + :param num_workers: The number of worker threads you want to do I/O, + This defaults to 10. + :param kwargs: Passed to the superclass. + """ + super(SimpleScrapingLocator, self).__init__(**kwargs) + self.base_url = ensure_slash(url) + self.timeout = timeout + self._page_cache = {} + self._seen = set() + self._to_fetch = queue.Queue() + self._bad_hosts = set() + self.skip_externals = False + self.num_workers = num_workers + self._lock = threading.RLock() + # See issue #45: we need to be resilient when the locator is used + # in a thread, e.g. with concurrent.futures. We can't use self._lock + # as it is for coordinating our internal threads - the ones created + # in _prepare_threads. + self._gplock = threading.RLock() + self.platform_check = False # See issue #112 + + def _prepare_threads(self): + """ + Threads are created only when get_project is called, and terminate + before it returns. They are there primarily to parallelise I/O (i.e. + fetching web pages). + """ + self._threads = [] + for i in range(self.num_workers): + t = threading.Thread(target=self._fetch) + t.daemon = True + t.start() + self._threads.append(t) + + def _wait_threads(self): + """ + Tell all the threads to terminate (by sending a sentinel value) and + wait for them to do so. + """ + # Note that you need two loops, since you can't say which + # thread will get each sentinel + for t in self._threads: + self._to_fetch.put(None) # sentinel + for t in self._threads: + t.join() + self._threads = [] + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + with self._gplock: + self.result = result + self.project_name = name + url = urljoin(self.base_url, '%s/' % quote(name)) + self._seen.clear() + self._page_cache.clear() + self._prepare_threads() + try: + logger.debug('Queueing %s', url) + self._to_fetch.put(url) + self._to_fetch.join() + finally: + self._wait_threads() + del self.result + return result + + platform_dependent = re.compile(r'\b(linux_(i\d86|x86_64|arm\w+)|' + r'win(32|_amd64)|macosx_?\d+)\b', re.I) + + def _is_platform_dependent(self, url): + """ + Does an URL refer to a platform-specific download? 
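A hedged sketch of driving the scraping locator just defined; the tuning values are arbitrary and `pip` is only an example project name. `get_project()` (inherited from `Locator` above) queues the project URL, spins up the worker threads, and joins them before returning:

```python
from distlib.locators import SimpleScrapingLocator

loc = SimpleScrapingLocator('https://pypi.org/simple/',
                            timeout=5.0, num_workers=4)
loc.platform_check = True              # opt in to skipping platform-specific URLs
info = loc.get_project('pip')          # network access required
versions = sorted(v for v in info if v not in ('urls', 'digests'))
```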
+ """ + return self.platform_dependent.search(url) + + def _process_download(self, url): + """ + See if an URL is a suitable download for a project. + + If it is, register information in the result dictionary (for + _get_project) about the specific version it's for. + + Note that the return value isn't actually used other than as a boolean + value. + """ + if self.platform_check and self._is_platform_dependent(url): + info = None + else: + info = self.convert_url_to_download_info(url, self.project_name) + logger.debug('process_download: %s -> %s', url, info) + if info: + with self._lock: # needed because self.result is shared + self._update_version_data(self.result, info) + return info + + def _should_queue(self, link, referrer, rel): + """ + Determine whether a link URL from a referring page and with a + particular "rel" attribute should be queued for scraping. + """ + scheme, netloc, path, _, _, _ = urlparse(link) + if path.endswith(self.source_extensions + self.binary_extensions + + self.excluded_extensions): + result = False + elif self.skip_externals and not link.startswith(self.base_url): + result = False + elif not referrer.startswith(self.base_url): + result = False + elif rel not in ('homepage', 'download'): + result = False + elif scheme not in ('http', 'https', 'ftp'): + result = False + elif self._is_platform_dependent(link): + result = False + else: + host = netloc.split(':', 1)[0] + if host.lower() == 'localhost': + result = False + else: + result = True + logger.debug('should_queue: %s (%s) from %s -> %s', link, rel, + referrer, result) + return result + + def _fetch(self): + """ + Get a URL to fetch from the work queue, get the HTML page, examine its + links for download candidates and candidates for further scraping. + + This is a handy method to run in a thread. + """ + while True: + url = self._to_fetch.get() + try: + if url: + page = self.get_page(url) + if page is None: # e.g. after an error + continue + for link, rel in page.links: + if link not in self._seen: + try: + self._seen.add(link) + if (not self._process_download(link) and + self._should_queue(link, url, rel)): + logger.debug('Queueing %s from %s', link, url) + self._to_fetch.put(link) + except MetadataInvalidError: # e.g. invalid versions + pass + except Exception as e: # pragma: no cover + self.errors.put(text_type(e)) + finally: + # always do this, to avoid hangs :-) + self._to_fetch.task_done() + if not url: + #logger.debug('Sentinel seen, quitting.') + break + + def get_page(self, url): + """ + Get the HTML for an URL, possibly from an in-memory cache. + + XXX TODO Note: this cache is never actually cleared. It's assumed that + the data won't get stale over the lifetime of a locator instance (not + necessarily true for the default_locator). 
+ """ + # http://peak.telecommunity.com/DevCenter/EasyInstall#package-index-api + scheme, netloc, path, _, _, _ = urlparse(url) + if scheme == 'file' and os.path.isdir(url2pathname(path)): + url = urljoin(ensure_slash(url), 'index.html') + + if url in self._page_cache: + result = self._page_cache[url] + logger.debug('Returning %s from cache: %s', url, result) + else: + host = netloc.split(':', 1)[0] + result = None + if host in self._bad_hosts: + logger.debug('Skipping %s due to bad host %s', url, host) + else: + req = Request(url, headers={'Accept-encoding': 'identity'}) + try: + logger.debug('Fetching %s', url) + resp = self.opener.open(req, timeout=self.timeout) + logger.debug('Fetched %s', url) + headers = resp.info() + content_type = headers.get('Content-Type', '') + if HTML_CONTENT_TYPE.match(content_type): + final_url = resp.geturl() + data = resp.read() + encoding = headers.get('Content-Encoding') + if encoding: + decoder = self.decoders[encoding] # fail if not found + data = decoder(data) + encoding = 'utf-8' + m = CHARSET.search(content_type) + if m: + encoding = m.group(1) + try: + data = data.decode(encoding) + except UnicodeError: # pragma: no cover + data = data.decode('latin-1') # fallback + result = Page(data, final_url) + self._page_cache[final_url] = result + except HTTPError as e: + if e.code != 404: + logger.exception('Fetch failed: %s: %s', url, e) + except URLError as e: # pragma: no cover + logger.exception('Fetch failed: %s: %s', url, e) + with self._lock: + self._bad_hosts.add(host) + except Exception as e: # pragma: no cover + logger.exception('Fetch failed: %s: %s', url, e) + finally: + self._page_cache[url] = result # even if None (failure) + return result + + _distname_re = re.compile(']*>([^<]+)<') + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + result = set() + page = self.get_page(self.base_url) + if not page: + raise DistlibException('Unable to get %s' % self.base_url) + for match in self._distname_re.finditer(page.data): + result.add(match.group(1)) + return result + +class DirectoryLocator(Locator): + """ + This class locates distributions in a directory tree. + """ + + def __init__(self, path, **kwargs): + """ + Initialise an instance. + :param path: The root of the directory tree to search. + :param kwargs: Passed to the superclass constructor, + except for: + * recursive - if True (the default), subdirectories are + recursed into. If False, only the top-level directory + is searched, + """ + self.recursive = kwargs.pop('recursive', True) + super(DirectoryLocator, self).__init__(**kwargs) + path = os.path.abspath(path) + if not os.path.isdir(path): # pragma: no cover + raise DistlibException('Not a directory: %r' % path) + self.base_dir = path + + def should_include(self, filename, parent): + """ + Should a filename be considered as a candidate for a distribution + archive? As well as the filename, the directory which contains it + is provided, though not used by the current implementation. 
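A sketch of the `DirectoryLocator` being defined here (its `_get_project` and `get_distribution_names` follow just below); `/tmp/wheelhouse` is a placeholder that must be an existing directory of distribution archives:

```python
from distlib.locators import DirectoryLocator

loc = DirectoryLocator('/tmp/wheelhouse', recursive=False)  # placeholder path
print(loc.get_distribution_names())  # names parsed from the archive filenames
dist = loc.locate('foo (== 1.0)')    # 'foo' is a placeholder; served as file:// URLs
```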
+ """ + return filename.endswith(self.downloadable_extensions) + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + for root, dirs, files in os.walk(self.base_dir): + for fn in files: + if self.should_include(fn, root): + fn = os.path.join(root, fn) + url = urlunparse(('file', '', + pathname2url(os.path.abspath(fn)), + '', '', '')) + info = self.convert_url_to_download_info(url, name) + if info: + self._update_version_data(result, info) + if not self.recursive: + break + return result + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + result = set() + for root, dirs, files in os.walk(self.base_dir): + for fn in files: + if self.should_include(fn, root): + fn = os.path.join(root, fn) + url = urlunparse(('file', '', + pathname2url(os.path.abspath(fn)), + '', '', '')) + info = self.convert_url_to_download_info(url, None) + if info: + result.add(info['name']) + if not self.recursive: + break + return result + +class JSONLocator(Locator): + """ + This locator uses special extended metadata (not available on PyPI) and is + the basis of performant dependency resolution in distlib. Other locators + require archive downloads before dependencies can be determined! As you + might imagine, that can be slow. + """ + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + raise NotImplementedError('Not available from this locator') + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + data = get_project_data(name) + if data: + for info in data.get('files', []): + if info['ptype'] != 'sdist' or info['pyversion'] != 'source': + continue + # We don't store summary in project metadata as it makes + # the data bigger for no benefit during dependency + # resolution + dist = make_dist(data['name'], info['version'], + summary=data.get('summary', + 'Placeholder for summary'), + scheme=self.scheme) + md = dist.metadata + md.source_url = info['url'] + # TODO SHA256 digest + if 'digest' in info and info['digest']: + dist.digest = ('md5', info['digest']) + md.dependencies = info.get('requirements', {}) + dist.exports = info.get('exports', {}) + result[dist.version] = dist + result['urls'].setdefault(dist.version, set()).add(info['url']) + return result + +class DistPathLocator(Locator): + """ + This locator finds installed distributions in a path. It can be useful for + adding to an :class:`AggregatingLocator`. + """ + def __init__(self, distpath, **kwargs): + """ + Initialise an instance. + + :param distpath: A :class:`DistributionPath` instance to search. + """ + super(DistPathLocator, self).__init__(**kwargs) + assert isinstance(distpath, DistributionPath) + self.distpath = distpath + + def _get_project(self, name): + dist = self.distpath.get_distribution(name) + if dist is None: + result = {'urls': {}, 'digests': {}} + else: + result = { + dist.version: dist, + 'urls': {dist.version: set([dist.source_url])}, + 'digests': {dist.version: set([None])} + } + return result + + +class AggregatingLocator(Locator): + """ + This class allows you to chain and/or merge a list of locators. + """ + def __init__(self, *locators, **kwargs): + """ + Initialise an instance. + + :param locators: The list of locators to search. + :param kwargs: Passed to the superclass constructor, + except for: + * merge - if False (the default), the first successful + search from any of the locators is returned. If True, + the results from all locators are merged (this can be + slow). 
+ """ + self.merge = kwargs.pop('merge', False) + self.locators = locators + super(AggregatingLocator, self).__init__(**kwargs) + + def clear_cache(self): + super(AggregatingLocator, self).clear_cache() + for locator in self.locators: + locator.clear_cache() + + def _set_scheme(self, value): + self._scheme = value + for locator in self.locators: + locator.scheme = value + + scheme = property(Locator.scheme.fget, _set_scheme) + + def _get_project(self, name): + result = {} + for locator in self.locators: + d = locator.get_project(name) + if d: + if self.merge: + files = result.get('urls', {}) + digests = result.get('digests', {}) + # next line could overwrite result['urls'], result['digests'] + result.update(d) + df = result.get('urls') + if files and df: + for k, v in files.items(): + if k in df: + df[k] |= v + else: + df[k] = v + dd = result.get('digests') + if digests and dd: + dd.update(digests) + else: + # See issue #18. If any dists are found and we're looking + # for specific constraints, we only return something if + # a match is found. For example, if a DirectoryLocator + # returns just foo (1.0) while we're looking for + # foo (>= 2.0), we'll pretend there was nothing there so + # that subsequent locators can be queried. Otherwise we + # would just return foo (1.0) which would then lead to a + # failure to find foo (>= 2.0), because other locators + # weren't searched. Note that this only matters when + # merge=False. + if self.matcher is None: + found = True + else: + found = False + for k in d: + if self.matcher.match(k): + found = True + break + if found: + result = d + break + return result + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + result = set() + for locator in self.locators: + try: + result |= locator.get_distribution_names() + except NotImplementedError: + pass + return result + + +# We use a legacy scheme simply because most of the dists on PyPI use legacy +# versions which don't conform to PEP 440. +default_locator = AggregatingLocator( + # JSONLocator(), # don't use as PEP 426 is withdrawn + SimpleScrapingLocator('https://pypi.org/simple/', + timeout=3.0), + scheme='legacy') + +locate = default_locator.locate + + +class DependencyFinder(object): + """ + Locate dependencies for distributions. + """ + + def __init__(self, locator=None): + """ + Initialise an instance, using the specified locator + to locate distributions. + """ + self.locator = locator or default_locator + self.scheme = get_scheme(self.locator.scheme) + + def add_distribution(self, dist): + """ + Add a distribution to the finder. This will update internal information + about who provides what. + :param dist: The distribution to add. + """ + logger.debug('adding distribution %s', dist) + name = dist.key + self.dists_by_name[name] = dist + self.dists[(name, dist.version)] = dist + for p in dist.provides: + name, version = parse_name_and_version(p) + logger.debug('Add to provided: %s, %s, %s', name, version, dist) + self.provided.setdefault(name, set()).add((version, dist)) + + def remove_distribution(self, dist): + """ + Remove a distribution from the finder. This will update internal + information about who provides what. + :param dist: The distribution to remove. 
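Stepping back to the `AggregatingLocator` defined above, and mirroring the `default_locator` construction just shown: a sketch of chaining a local wheelhouse ahead of PyPI, where the path and project name are placeholders. Because `merge` defaults to False, the first locator that yields a match wins, so local archives shadow the index:

```python
from distlib.locators import (AggregatingLocator, DirectoryLocator,
                              SimpleScrapingLocator)

locator = AggregatingLocator(
    DirectoryLocator('/tmp/wheelhouse'),                          # placeholder
    SimpleScrapingLocator('https://pypi.org/simple/', timeout=3.0),
    scheme='legacy')
dist = locator.locate('foo (>= 1.0)')
```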
+ """ + logger.debug('removing distribution %s', dist) + name = dist.key + del self.dists_by_name[name] + del self.dists[(name, dist.version)] + for p in dist.provides: + name, version = parse_name_and_version(p) + logger.debug('Remove from provided: %s, %s, %s', name, version, dist) + s = self.provided[name] + s.remove((version, dist)) + if not s: + del self.provided[name] + + def get_matcher(self, reqt): + """ + Get a version matcher for a requirement. + :param reqt: The requirement + :type reqt: str + :return: A version matcher (an instance of + :class:`distlib.version.Matcher`). + """ + try: + matcher = self.scheme.matcher(reqt) + except UnsupportedVersionError: # pragma: no cover + # XXX compat-mode if cannot read the version + name = reqt.split()[0] + matcher = self.scheme.matcher(name) + return matcher + + def find_providers(self, reqt): + """ + Find the distributions which can fulfill a requirement. + + :param reqt: The requirement. + :type reqt: str + :return: A set of distribution which can fulfill the requirement. + """ + matcher = self.get_matcher(reqt) + name = matcher.key # case-insensitive + result = set() + provided = self.provided + if name in provided: + for version, provider in provided[name]: + try: + match = matcher.match(version) + except UnsupportedVersionError: + match = False + + if match: + result.add(provider) + break + return result + + def try_to_replace(self, provider, other, problems): + """ + Attempt to replace one provider with another. This is typically used + when resolving dependencies from multiple sources, e.g. A requires + (B >= 1.0) while C requires (B >= 1.1). + + For successful replacement, ``provider`` must meet all the requirements + which ``other`` fulfills. + + :param provider: The provider we are trying to replace with. + :param other: The provider we're trying to replace. + :param problems: If False is returned, this will contain what + problems prevented replacement. This is currently + a tuple of the literal string 'cantreplace', + ``provider``, ``other`` and the set of requirements + that ``provider`` couldn't fulfill. + :return: True if we can replace ``other`` with ``provider``, else + False. + """ + rlist = self.reqts[other] + unmatched = set() + for s in rlist: + matcher = self.get_matcher(s) + if not matcher.match(provider.version): + unmatched.add(s) + if unmatched: + # can't replace other with provider + problems.add(('cantreplace', provider, other, + frozenset(unmatched))) + result = False + else: + # can replace other with provider + self.remove_distribution(other) + del self.reqts[other] + for s in rlist: + self.reqts.setdefault(provider, set()).add(s) + self.add_distribution(provider) + result = True + return result + + def find(self, requirement, meta_extras=None, prereleases=False): + """ + Find a distribution and all distributions it depends on. + + :param requirement: The requirement specifying the distribution to + find, or a Distribution instance. + :param meta_extras: A list of meta extras such as :test:, :build: and + so on. + :param prereleases: If ``True``, allow pre-release versions to be + returned - otherwise, don't return prereleases + unless they're all that's available. + + Return a set of :class:`Distribution` instances and a set of + problems. 
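A hedged sketch of the `find()` contract documented here (its docstring continues below), assuming the default PyPI-backed locator and using `requests` as an example requirement:

```python
from distlib.locators import DependencyFinder

finder = DependencyFinder()                    # uses default_locator
dists, problems = finder.find('requests (>= 2.0)')
for d in sorted(dists, key=lambda d: d.name_and_version):
    print(d.name_and_version, d.build_time_dependency)
for problem in problems:
    print('problem:', problem)                 # e.g. ('unsatisfied', <requirement>)
```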
+ + The distributions returned should be such that they have the + :attr:`required` attribute set to ``True`` if they were + from the ``requirement`` passed to ``find()``, and they have the + :attr:`build_time_dependency` attribute set to ``True`` unless they + are post-installation dependencies of the ``requirement``. + + The problems should be a tuple consisting of the string + ``'unsatisfied'`` and the requirement which couldn't be satisfied + by any distribution known to the locator. + """ + + self.provided = {} + self.dists = {} + self.dists_by_name = {} + self.reqts = {} + + meta_extras = set(meta_extras or []) + if ':*:' in meta_extras: + meta_extras.remove(':*:') + # :meta: and :run: are implicitly included + meta_extras |= set([':test:', ':build:', ':dev:']) + + if isinstance(requirement, Distribution): + dist = odist = requirement + logger.debug('passed %s as requirement', odist) + else: + dist = odist = self.locator.locate(requirement, + prereleases=prereleases) + if dist is None: + raise DistlibException('Unable to locate %r' % requirement) + logger.debug('located %s', odist) + dist.requested = True + problems = set() + todo = set([dist]) + install_dists = set([odist]) + while todo: + dist = todo.pop() + name = dist.key # case-insensitive + if name not in self.dists_by_name: + self.add_distribution(dist) + else: + #import pdb; pdb.set_trace() + other = self.dists_by_name[name] + if other != dist: + self.try_to_replace(dist, other, problems) + + ireqts = dist.run_requires | dist.meta_requires + sreqts = dist.build_requires + ereqts = set() + if meta_extras and dist in install_dists: + for key in ('test', 'build', 'dev'): + e = ':%s:' % key + if e in meta_extras: + ereqts |= getattr(dist, '%s_requires' % key) + all_reqts = ireqts | sreqts | ereqts + for r in all_reqts: + providers = self.find_providers(r) + if not providers: + logger.debug('No providers found for %r', r) + provider = self.locator.locate(r, prereleases=prereleases) + # If no provider is found and we didn't consider + # prereleases, consider them now. + if provider is None and not prereleases: + provider = self.locator.locate(r, prereleases=True) + if provider is None: + logger.debug('Cannot satisfy %r', r) + problems.add(('unsatisfied', r)) + else: + n, v = provider.key, provider.version + if (n, v) not in self.dists: + todo.add(provider) + providers.add(provider) + if r in ireqts and dist in install_dists: + install_dists.add(provider) + logger.debug('Adding %s to install_dists', + provider.name_and_version) + for p in providers: + name = p.key + if name not in self.dists_by_name: + self.reqts.setdefault(p, set()).add(r) + else: + other = self.dists_by_name[name] + if other != p: + # see if other can be replaced by p + self.try_to_replace(p, other, problems) + + dists = set(self.dists.values()) + for dist in dists: + dist.build_time_dependency = dist not in install_dists + if dist.build_time_dependency: + logger.debug('%s is a build-time dependency only.', + dist.name_and_version) + logger.debug('find done for %s', odist) + return dists, problems diff --git a/env/lib/python3.8/site-packages/distlib/manifest.py b/env/lib/python3.8/site-packages/distlib/manifest.py new file mode 100644 index 00000000..ca0fe442 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/manifest.py @@ -0,0 +1,393 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2013 Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +""" +Class representing the list of files in a distribution. 
+ +Equivalent to distutils.filelist, but fixes some problems. +""" +import fnmatch +import logging +import os +import re +import sys + +from . import DistlibException +from .compat import fsdecode +from .util import convert_path + + +__all__ = ['Manifest'] + +logger = logging.getLogger(__name__) + +# a \ followed by some spaces + EOL +_COLLAPSE_PATTERN = re.compile('\\\\w*\n', re.M) +_COMMENTED_LINE = re.compile('#.*?(?=\n)|\n(?=$)', re.M | re.S) + +# +# Due to the different results returned by fnmatch.translate, we need +# to do slightly different processing for Python 2.7 and 3.2 ... this needed +# to be brought in for Python 3.6 onwards. +# +_PYTHON_VERSION = sys.version_info[:2] + +class Manifest(object): + """A list of files built by on exploring the filesystem and filtered by + applying various patterns to what we find there. + """ + + def __init__(self, base=None): + """ + Initialise an instance. + + :param base: The base directory to explore under. + """ + self.base = os.path.abspath(os.path.normpath(base or os.getcwd())) + self.prefix = self.base + os.sep + self.allfiles = None + self.files = set() + + # + # Public API + # + + def findall(self): + """Find all files under the base and set ``allfiles`` to the absolute + pathnames of files found. + """ + from stat import S_ISREG, S_ISDIR, S_ISLNK + + self.allfiles = allfiles = [] + root = self.base + stack = [root] + pop = stack.pop + push = stack.append + + while stack: + root = pop() + names = os.listdir(root) + + for name in names: + fullname = os.path.join(root, name) + + # Avoid excess stat calls -- just one will do, thank you! + stat = os.stat(fullname) + mode = stat.st_mode + if S_ISREG(mode): + allfiles.append(fsdecode(fullname)) + elif S_ISDIR(mode) and not S_ISLNK(mode): + push(fullname) + + def add(self, item): + """ + Add a file to the manifest. + + :param item: The pathname to add. This can be relative to the base. + """ + if not item.startswith(self.prefix): + item = os.path.join(self.base, item) + self.files.add(os.path.normpath(item)) + + def add_many(self, items): + """ + Add a list of files to the manifest. + + :param items: The pathnames to add. These can be relative to the base. + """ + for item in items: + self.add(item) + + def sorted(self, wantdirs=False): + """ + Return sorted files in directory order + """ + + def add_dir(dirs, d): + dirs.add(d) + logger.debug('add_dir added %s', d) + if d != self.base: + parent, _ = os.path.split(d) + assert parent not in ('', '/') + add_dir(dirs, parent) + + result = set(self.files) # make a copy! + if wantdirs: + dirs = set() + for f in result: + add_dir(dirs, os.path.dirname(f)) + result |= dirs + return [os.path.join(*path_tuple) for path_tuple in + sorted(os.path.split(path) for path in result)] + + def clear(self): + """Clear all collected files.""" + self.files = set() + self.allfiles = [] + + def process_directive(self, directive): + """ + Process a directive which either adds some files from ``allfiles`` to + ``files``, or removes some files from ``files``. + + :param directive: The directive to process. This should be in a format + compatible with distutils ``MANIFEST.in`` files: + + http://docs.python.org/distutils/sourcedist.html#commands + """ + # Parse the line: split it up, make sure the right number of words + # is there, and return the relevant words. 'action' is always + # defined: it's the first word of the line. Which of the other + # three are defined depends on the action; it'll be either + # patterns, (dir and patterns), or (dirpattern). 
+ action, patterns, thedir, dirpattern = self._parse_directive(directive) + + # OK, now we know that the action is valid and we have the + # right number of words on the line for that action -- so we + # can proceed with minimal error-checking. + if action == 'include': + for pattern in patterns: + if not self._include_pattern(pattern, anchor=True): + logger.warning('no files found matching %r', pattern) + + elif action == 'exclude': + for pattern in patterns: + found = self._exclude_pattern(pattern, anchor=True) + #if not found: + # logger.warning('no previously-included files ' + # 'found matching %r', pattern) + + elif action == 'global-include': + for pattern in patterns: + if not self._include_pattern(pattern, anchor=False): + logger.warning('no files found matching %r ' + 'anywhere in distribution', pattern) + + elif action == 'global-exclude': + for pattern in patterns: + found = self._exclude_pattern(pattern, anchor=False) + #if not found: + # logger.warning('no previously-included files ' + # 'matching %r found anywhere in ' + # 'distribution', pattern) + + elif action == 'recursive-include': + for pattern in patterns: + if not self._include_pattern(pattern, prefix=thedir): + logger.warning('no files found matching %r ' + 'under directory %r', pattern, thedir) + + elif action == 'recursive-exclude': + for pattern in patterns: + found = self._exclude_pattern(pattern, prefix=thedir) + #if not found: + # logger.warning('no previously-included files ' + # 'matching %r found under directory %r', + # pattern, thedir) + + elif action == 'graft': + if not self._include_pattern(None, prefix=dirpattern): + logger.warning('no directories found matching %r', + dirpattern) + + elif action == 'prune': + if not self._exclude_pattern(None, prefix=dirpattern): + logger.warning('no previously-included directories found ' + 'matching %r', dirpattern) + else: # pragma: no cover + # This should never happen, as it should be caught in + # _parse_template_line + raise DistlibException( + 'invalid action %r' % action) + + # + # Private API + # + + def _parse_directive(self, directive): + """ + Validate a directive. + :param directive: The directive to validate. + :return: A tuple of action, patterns, thedir, dir_patterns + """ + words = directive.split() + if len(words) == 1 and words[0] not in ('include', 'exclude', + 'global-include', + 'global-exclude', + 'recursive-include', + 'recursive-exclude', + 'graft', 'prune'): + # no action given, let's use the default 'include' + words.insert(0, 'include') + + action = words[0] + patterns = thedir = dir_pattern = None + + if action in ('include', 'exclude', + 'global-include', 'global-exclude'): + if len(words) < 2: + raise DistlibException( + '%r expects ...' % action) + + patterns = [convert_path(word) for word in words[1:]] + + elif action in ('recursive-include', 'recursive-exclude'): + if len(words) < 3: + raise DistlibException( + '%r expects ...' % action) + + thedir = convert_path(words[1]) + patterns = [convert_path(word) for word in words[2:]] + + elif action in ('graft', 'prune'): + if len(words) != 2: + raise DistlibException( + '%r expects a single ' % action) + + dir_pattern = convert_path(words[1]) + + else: + raise DistlibException('unknown action %r' % action) + + return action, patterns, thedir, dir_pattern + + def _include_pattern(self, pattern, anchor=True, prefix=None, + is_regex=False): + """Select strings (presumably filenames) from 'self.files' that + match 'pattern', a Unix-style wildcard (glob) pattern. 
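A sketch of feeding `MANIFEST.in`-style directives to the machinery above; the project root is a placeholder directory that must exist:

```python
from distlib.manifest import Manifest

m = Manifest('/path/to/project')                   # placeholder project root
m.process_directive('include README.md')          # anchored include
m.process_directive('recursive-include src *.py')  # dir + pattern
m.process_directive('prune build')                 # exclude a subtree
print(m.sorted())                # matched files, in directory order
```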
+ + Patterns are not quite the same as implemented by the 'fnmatch' + module: '*' and '?' match non-special characters, where "special" + is platform-dependent: slash on Unix; colon, slash, and backslash on + DOS/Windows; and colon on Mac OS. + + If 'anchor' is true (the default), then the pattern match is more + stringent: "*.py" will match "foo.py" but not "foo/bar.py". If + 'anchor' is false, both of these will match. + + If 'prefix' is supplied, then only filenames starting with 'prefix' + (itself a pattern) and ending with 'pattern', with anything in between + them, will match. 'anchor' is ignored in this case. + + If 'is_regex' is true, 'anchor' and 'prefix' are ignored, and + 'pattern' is assumed to be either a string containing a regex or a + regex object -- no translation is done, the regex is just compiled + and used as-is. + + Selected strings will be added to self.files. + + Return True if files are found. + """ + # XXX docstring lying about what the special chars are? + found = False + pattern_re = self._translate_pattern(pattern, anchor, prefix, is_regex) + + # delayed loading of allfiles list + if self.allfiles is None: + self.findall() + + for name in self.allfiles: + if pattern_re.search(name): + self.files.add(name) + found = True + return found + + def _exclude_pattern(self, pattern, anchor=True, prefix=None, + is_regex=False): + """Remove strings (presumably filenames) from 'files' that match + 'pattern'. + + Other parameters are the same as for 'include_pattern()', above. + The list 'self.files' is modified in place. Return True if files are + found. + + This API is public to allow e.g. exclusion of SCM subdirs, e.g. when + packaging source distributions + """ + found = False + pattern_re = self._translate_pattern(pattern, anchor, prefix, is_regex) + for f in list(self.files): + if pattern_re.search(f): + self.files.remove(f) + found = True + return found + + def _translate_pattern(self, pattern, anchor=True, prefix=None, + is_regex=False): + """Translate a shell-like wildcard pattern to a compiled regular + expression. + + Return the compiled regex. If 'is_regex' true, + then 'pattern' is directly compiled to a regex (if it's a string) + or just returned as-is (assumes it's a regex object). 
+        """
+        if is_regex:
+            if isinstance(pattern, str):
+                return re.compile(pattern)
+            else:
+                return pattern
+
+        if _PYTHON_VERSION > (3, 2):
+            # ditch start and end characters
+            start, _, end = self._glob_to_re('_').partition('_')
+
+        if pattern:
+            pattern_re = self._glob_to_re(pattern)
+            if _PYTHON_VERSION > (3, 2):
+                assert pattern_re.startswith(start) and pattern_re.endswith(end)
+        else:
+            pattern_re = ''
+
+        base = re.escape(os.path.join(self.base, ''))
+        if prefix is not None:
+            # ditch end of pattern character
+            if _PYTHON_VERSION <= (3, 2):
+                empty_pattern = self._glob_to_re('')
+                prefix_re = self._glob_to_re(prefix)[:-len(empty_pattern)]
+            else:
+                prefix_re = self._glob_to_re(prefix)
+                assert prefix_re.startswith(start) and prefix_re.endswith(end)
+                prefix_re = prefix_re[len(start): len(prefix_re) - len(end)]
+            sep = os.sep
+            if os.sep == '\\':
+                sep = r'\\'
+            if _PYTHON_VERSION <= (3, 2):
+                pattern_re = '^' + base + sep.join((prefix_re,
+                                                    '.*' + pattern_re))
+            else:
+                pattern_re = pattern_re[len(start): len(pattern_re) - len(end)]
+                pattern_re = r'%s%s%s%s.*%s%s' % (start, base, prefix_re, sep,
+                                                  pattern_re, end)
+        else:  # no prefix -- respect anchor flag
+            if anchor:
+                if _PYTHON_VERSION <= (3, 2):
+                    pattern_re = '^' + base + pattern_re
+                else:
+                    pattern_re = r'%s%s%s' % (start, base, pattern_re[len(start):])
+
+        return re.compile(pattern_re)
+
+    def _glob_to_re(self, pattern):
+        """Translate a shell-like glob pattern to a regular expression.
+
+        Return a string containing the regex. Differs from
+        'fnmatch.translate()' in that '*' does not match "special characters"
+        (which are platform-specific).
+        """
+        pattern_re = fnmatch.translate(pattern)
+
+        # '?' and '*' in the glob pattern become '.' and '.*' in the RE, which
+        # IMHO is wrong -- '?' and '*' aren't supposed to match slash in Unix,
+        # and by extension they shouldn't match such "special characters" under
+        # any OS. So change all non-escaped dots in the RE to match any
+        # character except the special characters (currently: just os.sep).
+        sep = os.sep
+        if os.sep == '\\':
+            # we're using a regex to manipulate a regex, so we need
+            # to escape the backslash twice
+            sep = r'\\\\'
+        escaped = r'\1[^%s]' % sep
+        pattern_re = re.sub(r'((?<!\\)(\\\\)*)\.', escaped, pattern_re)
+        return pattern_re
diff --git a/env/lib/python3.8/site-packages/distlib/markers.py b/env/lib/python3.8/site-packages/distlib/markers.py
new file mode 100644
--- /dev/null
+++ b/env/lib/python3.8/site-packages/distlib/markers.py
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2012-2017 Vinay Sajip.
+# Licensed to the Python Software Foundation under a contributor agreement.
+# See LICENSE.txt and CONTRIBUTORS.txt.
+#
+"""
+Parser for the environment markers micro-language defined in PEP 508.
+"""
+import os
+import re
+import sys
+import platform
+
+from .compat import string_types
+from .util import in_venv, parse_marker
+from .version import NormalizedVersion as NV
+
+__all__ = ['interpret']
+
+_VERSION_PATTERN = re.compile(r'((\d+(\.\d+)*\w*)|\'(\d+(\.\d+)*\w*)\'|\"(\d+(\.\d+)*\w*)\")')
+
+
+def _is_literal(o):
+    if not isinstance(o, string_types):
+        return False
+    return o[0] in '\'"'
+
+
+def _get_versions(s):
+    result = []
+    for m in _VERSION_PATTERN.finditer(s):
+        result.append(NV(m.groups()[0]))
+    return set(result)
+
+
+class Evaluator(object):
+    """
+    This class is used to evaluate marker expressions.
+    """
+
+    operations = {
+        '==': lambda x, y: x == y,
+        '===': lambda x, y: x == y,
+        '~=': lambda x, y: x == y or x > y,
+        '!=': lambda x, y: x != y,
+        '<': lambda x, y: x < y,
+        '<=': lambda x, y: x == y or x < y,
+        '>': lambda x, y: x > y,
+        '>=': lambda x, y: x == y or x > y,
+        'and': lambda x, y: x and y,
+        'or': lambda x, y: x or y,
+        'in': lambda x, y: x in y,
+        'not in': lambda x, y: x not in y,
+    }
+
+    def evaluate(self, expr, context):
+        """
+        Evaluate a marker expression returned by the :func:`parse_requirement`
+        function in the specified context.
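This `Evaluator` backs the module-level `interpret()` helper defined at the end of markers.py below. A minimal sketch of both the default context and a caller-supplied one:

```python
from distlib.markers import interpret

print(interpret('python_version >= "3.6"'))    # True on any 3.6+ interpreter
print(interpret('sys_platform == "linux" and os_name == "posix"'))
# Names outside the default context (e.g. 'extra') come from the caller:
print(interpret('extra == "test"', execution_context={'extra': 'test'}))
```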
+ """ + if isinstance(expr, string_types): + if expr[0] in '\'"': + result = expr[1:-1] + else: + if expr not in context: + raise SyntaxError('unknown variable: %s' % expr) + result = context[expr] + else: + assert isinstance(expr, dict) + op = expr['op'] + if op not in self.operations: + raise NotImplementedError('op not implemented: %s' % op) + elhs = expr['lhs'] + erhs = expr['rhs'] + if _is_literal(expr['lhs']) and _is_literal(expr['rhs']): + raise SyntaxError('invalid comparison: %s %s %s' % (elhs, op, erhs)) + + lhs = self.evaluate(elhs, context) + rhs = self.evaluate(erhs, context) + if ((elhs == 'python_version' or erhs == 'python_version') and + op in ('<', '<=', '>', '>=', '===', '==', '!=', '~=')): + lhs = NV(lhs) + rhs = NV(rhs) + elif elhs == 'python_version' and op in ('in', 'not in'): + lhs = NV(lhs) + rhs = _get_versions(rhs) + result = self.operations[op](lhs, rhs) + return result + +_DIGITS = re.compile(r'\d+\.\d+') + +def default_context(): + def format_full_version(info): + version = '%s.%s.%s' % (info.major, info.minor, info.micro) + kind = info.releaselevel + if kind != 'final': + version += kind[0] + str(info.serial) + return version + + if hasattr(sys, 'implementation'): + implementation_version = format_full_version(sys.implementation.version) + implementation_name = sys.implementation.name + else: + implementation_version = '0' + implementation_name = '' + + ppv = platform.python_version() + m = _DIGITS.match(ppv) + pv = m.group(0) + result = { + 'implementation_name': implementation_name, + 'implementation_version': implementation_version, + 'os_name': os.name, + 'platform_machine': platform.machine(), + 'platform_python_implementation': platform.python_implementation(), + 'platform_release': platform.release(), + 'platform_system': platform.system(), + 'platform_version': platform.version(), + 'platform_in_venv': str(in_venv()), + 'python_full_version': ppv, + 'python_version': pv, + 'sys_platform': sys.platform, + } + return result + +DEFAULT_CONTEXT = default_context() +del default_context + +evaluator = Evaluator() + +def interpret(marker, execution_context=None): + """ + Interpret a marker and return a result depending on environment. + + :param marker: The marker to interpret. + :type marker: str + :param execution_context: The context used for name lookup. + :type execution_context: mapping + """ + try: + expr, rest = parse_marker(marker) + except Exception as e: + raise SyntaxError('Unable to interpret marker syntax: %s: %s' % (marker, e)) + if rest and rest[0] != '#': + raise SyntaxError('unexpected trailing data in marker: %s: %s' % (marker, rest)) + context = dict(DEFAULT_CONTEXT) + if execution_context: + context.update(execution_context) + return evaluator.evaluate(expr, context) diff --git a/env/lib/python3.8/site-packages/distlib/metadata.py b/env/lib/python3.8/site-packages/distlib/metadata.py new file mode 100644 index 00000000..c329e197 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/metadata.py @@ -0,0 +1,1076 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +"""Implementation of the Metadata for Python packages PEPs. + +Supports all metadata formats (1.0, 1.1, 1.2, 1.3/2.1 and 2.2). +""" +from __future__ import unicode_literals + +import codecs +from email import message_from_file +import json +import logging +import re + + +from . 
import DistlibException, __version__ +from .compat import StringIO, string_types, text_type +from .markers import interpret +from .util import extract_by_key, get_extras +from .version import get_scheme, PEP440_VERSION_RE + +logger = logging.getLogger(__name__) + + +class MetadataMissingError(DistlibException): + """A required metadata is missing""" + + +class MetadataConflictError(DistlibException): + """Attempt to read or write metadata fields that are conflictual.""" + + +class MetadataUnrecognizedVersionError(DistlibException): + """Unknown metadata version number.""" + + +class MetadataInvalidError(DistlibException): + """A metadata value is invalid""" + +# public API of this module +__all__ = ['Metadata', 'PKG_INFO_ENCODING', 'PKG_INFO_PREFERRED_VERSION'] + +# Encoding used for the PKG-INFO files +PKG_INFO_ENCODING = 'utf-8' + +# preferred version. Hopefully will be changed +# to 1.2 once PEP 345 is supported everywhere +PKG_INFO_PREFERRED_VERSION = '1.1' + +_LINE_PREFIX_1_2 = re.compile('\n \\|') +_LINE_PREFIX_PRE_1_2 = re.compile('\n ') +_241_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'License') + +_314_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Supported-Platform', 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'License', 'Classifier', 'Download-URL', 'Obsoletes', + 'Provides', 'Requires') + +_314_MARKERS = ('Obsoletes', 'Provides', 'Requires', 'Classifier', + 'Download-URL') + +_345_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Supported-Platform', 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'Maintainer', 'Maintainer-email', 'License', + 'Classifier', 'Download-URL', 'Obsoletes-Dist', + 'Project-URL', 'Provides-Dist', 'Requires-Dist', + 'Requires-Python', 'Requires-External') + +_345_MARKERS = ('Provides-Dist', 'Requires-Dist', 'Requires-Python', + 'Obsoletes-Dist', 'Requires-External', 'Maintainer', + 'Maintainer-email', 'Project-URL') + +_426_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Supported-Platform', 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'Maintainer', 'Maintainer-email', 'License', + 'Classifier', 'Download-URL', 'Obsoletes-Dist', + 'Project-URL', 'Provides-Dist', 'Requires-Dist', + 'Requires-Python', 'Requires-External', 'Private-Version', + 'Obsoleted-By', 'Setup-Requires-Dist', 'Extension', + 'Provides-Extra') + +_426_MARKERS = ('Private-Version', 'Provides-Extra', 'Obsoleted-By', + 'Setup-Requires-Dist', 'Extension') + +# See issue #106: Sometimes 'Requires' and 'Provides' occur wrongly in +# the metadata. Include them in the tuple literal below to allow them +# (for now). +# Ditto for Obsoletes - see issue #140. 
+_566_FIELDS = _426_FIELDS + ('Description-Content-Type', + 'Requires', 'Provides', 'Obsoletes') + +_566_MARKERS = ('Description-Content-Type',) + +_643_MARKERS = ('Dynamic', 'License-File') + +_643_FIELDS = _566_FIELDS + _643_MARKERS + +_ALL_FIELDS = set() +_ALL_FIELDS.update(_241_FIELDS) +_ALL_FIELDS.update(_314_FIELDS) +_ALL_FIELDS.update(_345_FIELDS) +_ALL_FIELDS.update(_426_FIELDS) +_ALL_FIELDS.update(_566_FIELDS) +_ALL_FIELDS.update(_643_FIELDS) + +EXTRA_RE = re.compile(r'''extra\s*==\s*("([^"]+)"|'([^']+)')''') + + +def _version2fieldlist(version): + if version == '1.0': + return _241_FIELDS + elif version == '1.1': + return _314_FIELDS + elif version == '1.2': + return _345_FIELDS + elif version in ('1.3', '2.1'): + # avoid adding field names if already there + return _345_FIELDS + tuple(f for f in _566_FIELDS if f not in _345_FIELDS) + elif version == '2.0': + raise ValueError('Metadata 2.0 is withdrawn and not supported') + # return _426_FIELDS + elif version == '2.2': + return _643_FIELDS + raise MetadataUnrecognizedVersionError(version) + + +def _best_version(fields): + """Detect the best version depending on the fields used.""" + def _has_marker(keys, markers): + for marker in markers: + if marker in keys: + return True + return False + + keys = [] + for key, value in fields.items(): + if value in ([], 'UNKNOWN', None): + continue + keys.append(key) + + possible_versions = ['1.0', '1.1', '1.2', '1.3', '2.1', '2.2'] # 2.0 removed + + # first let's try to see if a field is not part of one of the version + for key in keys: + if key not in _241_FIELDS and '1.0' in possible_versions: + possible_versions.remove('1.0') + logger.debug('Removed 1.0 due to %s', key) + if key not in _314_FIELDS and '1.1' in possible_versions: + possible_versions.remove('1.1') + logger.debug('Removed 1.1 due to %s', key) + if key not in _345_FIELDS and '1.2' in possible_versions: + possible_versions.remove('1.2') + logger.debug('Removed 1.2 due to %s', key) + if key not in _566_FIELDS and '1.3' in possible_versions: + possible_versions.remove('1.3') + logger.debug('Removed 1.3 due to %s', key) + if key not in _566_FIELDS and '2.1' in possible_versions: + if key != 'Description': # In 2.1, description allowed after headers + possible_versions.remove('2.1') + logger.debug('Removed 2.1 due to %s', key) + if key not in _643_FIELDS and '2.2' in possible_versions: + possible_versions.remove('2.2') + logger.debug('Removed 2.2 due to %s', key) + # if key not in _426_FIELDS and '2.0' in possible_versions: + # possible_versions.remove('2.0') + # logger.debug('Removed 2.0 due to %s', key) + + # possible_version contains qualified versions + if len(possible_versions) == 1: + return possible_versions[0] # found ! 
+ elif len(possible_versions) == 0: + logger.debug('Out of options - unknown metadata set: %s', fields) + raise MetadataConflictError('Unknown metadata set') + + # let's see if one unique marker is found + is_1_1 = '1.1' in possible_versions and _has_marker(keys, _314_MARKERS) + is_1_2 = '1.2' in possible_versions and _has_marker(keys, _345_MARKERS) + is_2_1 = '2.1' in possible_versions and _has_marker(keys, _566_MARKERS) + # is_2_0 = '2.0' in possible_versions and _has_marker(keys, _426_MARKERS) + is_2_2 = '2.2' in possible_versions and _has_marker(keys, _643_MARKERS) + if int(is_1_1) + int(is_1_2) + int(is_2_1) + int(is_2_2) > 1: + raise MetadataConflictError('You used incompatible 1.1/1.2/2.1/2.2 fields') + + # we have the choice, 1.0, or 1.2, 2.1 or 2.2 + # - 1.0 has a broken Summary field but works with all tools + # - 1.1 is to avoid + # - 1.2 fixes Summary but has little adoption + # - 2.1 adds more features + # - 2.2 is the latest + if not is_1_1 and not is_1_2 and not is_2_1 and not is_2_2: + # we couldn't find any specific marker + if PKG_INFO_PREFERRED_VERSION in possible_versions: + return PKG_INFO_PREFERRED_VERSION + if is_1_1: + return '1.1' + if is_1_2: + return '1.2' + if is_2_1: + return '2.1' + # if is_2_2: + # return '2.2' + + return '2.2' + +# This follows the rules about transforming keys as described in +# https://www.python.org/dev/peps/pep-0566/#id17 +_ATTR2FIELD = { + name.lower().replace("-", "_"): name for name in _ALL_FIELDS +} +_FIELD2ATTR = {field: attr for attr, field in _ATTR2FIELD.items()} + +_PREDICATE_FIELDS = ('Requires-Dist', 'Obsoletes-Dist', 'Provides-Dist') +_VERSIONS_FIELDS = ('Requires-Python',) +_VERSION_FIELDS = ('Version',) +_LISTFIELDS = ('Platform', 'Classifier', 'Obsoletes', + 'Requires', 'Provides', 'Obsoletes-Dist', + 'Provides-Dist', 'Requires-Dist', 'Requires-External', + 'Project-URL', 'Supported-Platform', 'Setup-Requires-Dist', + 'Provides-Extra', 'Extension', 'License-File') +_LISTTUPLEFIELDS = ('Project-URL',) + +_ELEMENTSFIELD = ('Keywords',) + +_UNICODEFIELDS = ('Author', 'Maintainer', 'Summary', 'Description') + +_MISSING = object() + +_FILESAFE = re.compile('[^A-Za-z0-9.]+') + + +def _get_name_and_version(name, version, for_filename=False): + """Return the distribution name with version. + + If for_filename is true, return a filename-escaped form.""" + if for_filename: + # For both name and version any runs of non-alphanumeric or '.' + # characters are replaced with a single '-'. Additionally any + # spaces in the version string become '.' + name = _FILESAFE.sub('-', name) + version = _FILESAFE.sub('-', version.replace(' ', '.')) + return '%s-%s' % (name, version) + + +class LegacyMetadata(object): + """The legacy metadata of a release. + + Supports versions 1.0, 1.1, 1.2, 2.0 and 1.3/2.1 (auto-detected). 
You can + instantiate the class with one of these arguments (or none): + - *path*, the path to a metadata file + - *fileobj* give a file-like object with metadata as content + - *mapping* is a dict-like object + - *scheme* is a version scheme name + """ + # TODO document the mapping API and UNKNOWN default key + + def __init__(self, path=None, fileobj=None, mapping=None, + scheme='default'): + if [path, fileobj, mapping].count(None) < 2: + raise TypeError('path, fileobj and mapping are exclusive') + self._fields = {} + self.requires_files = [] + self._dependencies = None + self.scheme = scheme + if path is not None: + self.read(path) + elif fileobj is not None: + self.read_file(fileobj) + elif mapping is not None: + self.update(mapping) + self.set_metadata_version() + + def set_metadata_version(self): + self._fields['Metadata-Version'] = _best_version(self._fields) + + def _write_field(self, fileobj, name, value): + fileobj.write('%s: %s\n' % (name, value)) + + def __getitem__(self, name): + return self.get(name) + + def __setitem__(self, name, value): + return self.set(name, value) + + def __delitem__(self, name): + field_name = self._convert_name(name) + try: + del self._fields[field_name] + except KeyError: + raise KeyError(name) + + def __contains__(self, name): + return (name in self._fields or + self._convert_name(name) in self._fields) + + def _convert_name(self, name): + if name in _ALL_FIELDS: + return name + name = name.replace('-', '_').lower() + return _ATTR2FIELD.get(name, name) + + def _default_value(self, name): + if name in _LISTFIELDS or name in _ELEMENTSFIELD: + return [] + return 'UNKNOWN' + + def _remove_line_prefix(self, value): + if self.metadata_version in ('1.0', '1.1'): + return _LINE_PREFIX_PRE_1_2.sub('\n', value) + else: + return _LINE_PREFIX_1_2.sub('\n', value) + + def __getattr__(self, name): + if name in _ATTR2FIELD: + return self[name] + raise AttributeError(name) + + # + # Public API + # + +# dependencies = property(_get_dependencies, _set_dependencies) + + def get_fullname(self, filesafe=False): + """Return the distribution name with version. 
+ + If filesafe is true, return a filename-escaped form.""" + return _get_name_and_version(self['Name'], self['Version'], filesafe) + + def is_field(self, name): + """return True if name is a valid metadata key""" + name = self._convert_name(name) + return name in _ALL_FIELDS + + def is_multi_field(self, name): + name = self._convert_name(name) + return name in _LISTFIELDS + + def read(self, filepath): + """Read the metadata values from a file path.""" + fp = codecs.open(filepath, 'r', encoding='utf-8') + try: + self.read_file(fp) + finally: + fp.close() + + def read_file(self, fileob): + """Read the metadata values from a file object.""" + msg = message_from_file(fileob) + self._fields['Metadata-Version'] = msg['metadata-version'] + + # When reading, get all the fields we can + for field in _ALL_FIELDS: + if field not in msg: + continue + if field in _LISTFIELDS: + # we can have multiple lines + values = msg.get_all(field) + if field in _LISTTUPLEFIELDS and values is not None: + values = [tuple(value.split(',')) for value in values] + self.set(field, values) + else: + # single line + value = msg[field] + if value is not None and value != 'UNKNOWN': + self.set(field, value) + + # PEP 566 specifies that the body be used for the description, if + # available + body = msg.get_payload() + self["Description"] = body if body else self["Description"] + # logger.debug('Attempting to set metadata for %s', self) + # self.set_metadata_version() + + def write(self, filepath, skip_unknown=False): + """Write the metadata fields to filepath.""" + fp = codecs.open(filepath, 'w', encoding='utf-8') + try: + self.write_file(fp, skip_unknown) + finally: + fp.close() + + def write_file(self, fileobject, skip_unknown=False): + """Write the PKG-INFO format data to a file object.""" + self.set_metadata_version() + + for field in _version2fieldlist(self['Metadata-Version']): + values = self.get(field) + if skip_unknown and values in ('UNKNOWN', [], ['UNKNOWN']): + continue + if field in _ELEMENTSFIELD: + self._write_field(fileobject, field, ','.join(values)) + continue + if field not in _LISTFIELDS: + if field == 'Description': + if self.metadata_version in ('1.0', '1.1'): + values = values.replace('\n', '\n ') + else: + values = values.replace('\n', '\n |') + values = [values] + + if field in _LISTTUPLEFIELDS: + values = [','.join(value) for value in values] + + for value in values: + self._write_field(fileobject, field, value) + + def update(self, other=None, **kwargs): + """Set metadata values from the given iterable `other` and kwargs. + + Behavior is like `dict.update`: If `other` has a ``keys`` method, + they are looped over and ``self[key]`` is assigned ``other[key]``. + Else, ``other`` is an iterable of ``(key, value)`` iterables. + + Keys that don't match a metadata field or that have an empty value are + dropped. 
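A sketch of the `dict.update`-like behaviour just described; every field value here is a made-up placeholder:

```python
from distlib.metadata import LegacyMetadata

md = LegacyMetadata()
md.update({'name': 'foo', 'version': '1.0', 'not_a_field': 'dropped'})
md['Summary'] = 'A placeholder summary'
print(md['Name'], md['Version'])   # -> foo 1.0
print('not_a_field' in md)         # -> False: unknown keys are dropped
```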
+ """ + def _set(key, value): + if key in _ATTR2FIELD and value: + self.set(self._convert_name(key), value) + + if not other: + # other is None or empty container + pass + elif hasattr(other, 'keys'): + for k in other.keys(): + _set(k, other[k]) + else: + for k, v in other: + _set(k, v) + + if kwargs: + for k, v in kwargs.items(): + _set(k, v) + + def set(self, name, value): + """Control then set a metadata field.""" + name = self._convert_name(name) + + if ((name in _ELEMENTSFIELD or name == 'Platform') and + not isinstance(value, (list, tuple))): + if isinstance(value, string_types): + value = [v.strip() for v in value.split(',')] + else: + value = [] + elif (name in _LISTFIELDS and + not isinstance(value, (list, tuple))): + if isinstance(value, string_types): + value = [value] + else: + value = [] + + if logger.isEnabledFor(logging.WARNING): + project_name = self['Name'] + + scheme = get_scheme(self.scheme) + if name in _PREDICATE_FIELDS and value is not None: + for v in value: + # check that the values are valid + if not scheme.is_valid_matcher(v.split(';')[0]): + logger.warning( + "'%s': '%s' is not valid (field '%s')", + project_name, v, name) + # FIXME this rejects UNKNOWN, is that right? + elif name in _VERSIONS_FIELDS and value is not None: + if not scheme.is_valid_constraint_list(value): + logger.warning("'%s': '%s' is not a valid version (field '%s')", + project_name, value, name) + elif name in _VERSION_FIELDS and value is not None: + if not scheme.is_valid_version(value): + logger.warning("'%s': '%s' is not a valid version (field '%s')", + project_name, value, name) + + if name in _UNICODEFIELDS: + if name == 'Description': + value = self._remove_line_prefix(value) + + self._fields[name] = value + + def get(self, name, default=_MISSING): + """Get a metadata field.""" + name = self._convert_name(name) + if name not in self._fields: + if default is _MISSING: + default = self._default_value(name) + return default + if name in _UNICODEFIELDS: + value = self._fields[name] + return value + elif name in _LISTFIELDS: + value = self._fields[name] + if value is None: + return [] + res = [] + for val in value: + if name not in _LISTTUPLEFIELDS: + res.append(val) + else: + # That's for Project-URL + res.append((val[0], val[1])) + return res + + elif name in _ELEMENTSFIELD: + value = self._fields[name] + if isinstance(value, string_types): + return value.split(',') + return self._fields[name] + + def check(self, strict=False): + """Check if the metadata is compliant. 
If strict is True then raise if + no Name or Version are provided""" + self.set_metadata_version() + + # XXX should check the versions (if the file was loaded) + missing, warnings = [], [] + + for attr in ('Name', 'Version'): # required by PEP 345 + if attr not in self: + missing.append(attr) + + if strict and missing != []: + msg = 'missing required metadata: %s' % ', '.join(missing) + raise MetadataMissingError(msg) + + for attr in ('Home-page', 'Author'): + if attr not in self: + missing.append(attr) + + # checking metadata 1.2 (XXX needs to check 1.1, 1.0) + if self['Metadata-Version'] != '1.2': + return missing, warnings + + scheme = get_scheme(self.scheme) + + def are_valid_constraints(value): + for v in value: + if not scheme.is_valid_matcher(v.split(';')[0]): + return False + return True + + for fields, controller in ((_PREDICATE_FIELDS, are_valid_constraints), + (_VERSIONS_FIELDS, + scheme.is_valid_constraint_list), + (_VERSION_FIELDS, + scheme.is_valid_version)): + for field in fields: + value = self.get(field, None) + if value is not None and not controller(value): + warnings.append("Wrong value for '%s': %s" % (field, value)) + + return missing, warnings + + def todict(self, skip_missing=False): + """Return fields as a dict. + + Field names will be converted to use the underscore-lowercase style + instead of hyphen-mixed case (i.e. home_page instead of Home-page). + This is as per https://www.python.org/dev/peps/pep-0566/#id17. + """ + self.set_metadata_version() + + fields = _version2fieldlist(self['Metadata-Version']) + + data = {} + + for field_name in fields: + if not skip_missing or field_name in self._fields: + key = _FIELD2ATTR[field_name] + if key != 'project_url': + data[key] = self[field_name] + else: + data[key] = [','.join(u) for u in self[field_name]] + + return data + + def add_requirements(self, requirements): + if self['Metadata-Version'] == '1.1': + # we can't have 1.1 metadata *and* Setuptools requires + for field in ('Obsoletes', 'Requires', 'Provides'): + if field in self: + del self[field] + self['Requires-Dist'] += requirements + + # Mapping API + # TODO could add iter* variants + + def keys(self): + return list(_version2fieldlist(self['Metadata-Version'])) + + def __iter__(self): + for key in self.keys(): + yield key + + def values(self): + return [self[key] for key in self.keys()] + + def items(self): + return [(key, self[key]) for key in self.keys()] + + def __repr__(self): + return '<%s %s %s>' % (self.__class__.__name__, self.name, + self.version) + + +METADATA_FILENAME = 'pydist.json' +WHEEL_METADATA_FILENAME = 'metadata.json' +LEGACY_METADATA_FILENAME = 'METADATA' + + +class Metadata(object): + """ + The metadata of a release. This implementation uses 2.1 + metadata where possible. If not possible, it wraps a LegacyMetadata + instance which handles the key-value metadata format. 
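+    # --- Editor's sketch of LegacyMetadata.check() above (illustration). ---
+    #
+    #   >>> md = LegacyMetadata(mapping={'name': 'demo'})   # no Version
+    #   >>> missing, warnings = md.check()
+    #   >>> 'Version' in missing
+    #   True
+    #   >>> md.check(strict=True)   # raises MetadataMissingError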
+ """ + + METADATA_VERSION_MATCHER = re.compile(r'^\d+(\.\d+)*$') + + NAME_MATCHER = re.compile('^[0-9A-Z]([0-9A-Z_.-]*[0-9A-Z])?$', re.I) + + FIELDNAME_MATCHER = re.compile('^[A-Z]([0-9A-Z-]*[0-9A-Z])?$', re.I) + + VERSION_MATCHER = PEP440_VERSION_RE + + SUMMARY_MATCHER = re.compile('.{1,2047}') + + METADATA_VERSION = '2.0' + + GENERATOR = 'distlib (%s)' % __version__ + + MANDATORY_KEYS = { + 'name': (), + 'version': (), + 'summary': ('legacy',), + } + + INDEX_KEYS = ('name version license summary description author ' + 'author_email keywords platform home_page classifiers ' + 'download_url') + + DEPENDENCY_KEYS = ('extras run_requires test_requires build_requires ' + 'dev_requires provides meta_requires obsoleted_by ' + 'supports_environments') + + SYNTAX_VALIDATORS = { + 'metadata_version': (METADATA_VERSION_MATCHER, ()), + 'name': (NAME_MATCHER, ('legacy',)), + 'version': (VERSION_MATCHER, ('legacy',)), + 'summary': (SUMMARY_MATCHER, ('legacy',)), + 'dynamic': (FIELDNAME_MATCHER, ('legacy',)), + } + + __slots__ = ('_legacy', '_data', 'scheme') + + def __init__(self, path=None, fileobj=None, mapping=None, + scheme='default'): + if [path, fileobj, mapping].count(None) < 2: + raise TypeError('path, fileobj and mapping are exclusive') + self._legacy = None + self._data = None + self.scheme = scheme + #import pdb; pdb.set_trace() + if mapping is not None: + try: + self._validate_mapping(mapping, scheme) + self._data = mapping + except MetadataUnrecognizedVersionError: + self._legacy = LegacyMetadata(mapping=mapping, scheme=scheme) + self.validate() + else: + data = None + if path: + with open(path, 'rb') as f: + data = f.read() + elif fileobj: + data = fileobj.read() + if data is None: + # Initialised with no args - to be added + self._data = { + 'metadata_version': self.METADATA_VERSION, + 'generator': self.GENERATOR, + } + else: + if not isinstance(data, text_type): + data = data.decode('utf-8') + try: + self._data = json.loads(data) + self._validate_mapping(self._data, scheme) + except ValueError: + # Note: MetadataUnrecognizedVersionError does not + # inherit from ValueError (it's a DistlibException, + # which should not inherit from ValueError). 
+ # The ValueError comes from the json.load - if that + # succeeds and we get a validation error, we want + # that to propagate + self._legacy = LegacyMetadata(fileobj=StringIO(data), + scheme=scheme) + self.validate() + + common_keys = set(('name', 'version', 'license', 'keywords', 'summary')) + + none_list = (None, list) + none_dict = (None, dict) + + mapped_keys = { + 'run_requires': ('Requires-Dist', list), + 'build_requires': ('Setup-Requires-Dist', list), + 'dev_requires': none_list, + 'test_requires': none_list, + 'meta_requires': none_list, + 'extras': ('Provides-Extra', list), + 'modules': none_list, + 'namespaces': none_list, + 'exports': none_dict, + 'commands': none_dict, + 'classifiers': ('Classifier', list), + 'source_url': ('Download-URL', None), + 'metadata_version': ('Metadata-Version', None), + } + + del none_list, none_dict + + def __getattribute__(self, key): + common = object.__getattribute__(self, 'common_keys') + mapped = object.__getattribute__(self, 'mapped_keys') + if key in mapped: + lk, maker = mapped[key] + if self._legacy: + if lk is None: + result = None if maker is None else maker() + else: + result = self._legacy.get(lk) + else: + value = None if maker is None else maker() + if key not in ('commands', 'exports', 'modules', 'namespaces', + 'classifiers'): + result = self._data.get(key, value) + else: + # special cases for PEP 459 + sentinel = object() + result = sentinel + d = self._data.get('extensions') + if d: + if key == 'commands': + result = d.get('python.commands', value) + elif key == 'classifiers': + d = d.get('python.details') + if d: + result = d.get(key, value) + else: + d = d.get('python.exports') + if not d: + d = self._data.get('python.exports') + if d: + result = d.get(key, value) + if result is sentinel: + result = value + elif key not in common: + result = object.__getattribute__(self, key) + elif self._legacy: + result = self._legacy.get(key) + else: + result = self._data.get(key) + return result + + def _validate_value(self, key, value, scheme=None): + if key in self.SYNTAX_VALIDATORS: + pattern, exclusions = self.SYNTAX_VALIDATORS[key] + if (scheme or self.scheme) not in exclusions: + m = pattern.match(value) + if not m: + raise MetadataInvalidError("'%s' is an invalid value for " + "the '%s' property" % (value, + key)) + + def __setattr__(self, key, value): + self._validate_value(key, value) + common = object.__getattribute__(self, 'common_keys') + mapped = object.__getattribute__(self, 'mapped_keys') + if key in mapped: + lk, _ = mapped[key] + if self._legacy: + if lk is None: + raise NotImplementedError + self._legacy[lk] = value + elif key not in ('commands', 'exports', 'modules', 'namespaces', + 'classifiers'): + self._data[key] = value + else: + # special cases for PEP 459 + d = self._data.setdefault('extensions', {}) + if key == 'commands': + d['python.commands'] = value + elif key == 'classifiers': + d = d.setdefault('python.details', {}) + d[key] = value + else: + d = d.setdefault('python.exports', {}) + d[key] = value + elif key not in common: + object.__setattr__(self, key, value) + else: + if key == 'keywords': + if isinstance(value, string_types): + value = value.strip() + if value: + value = value.split() + else: + value = [] + if self._legacy: + self._legacy[key] = value + else: + self._data[key] = value + + @property + def name_and_version(self): + return _get_name_and_version(self.name, self.version, True) + + @property + def provides(self): + if self._legacy: + result = self._legacy['Provides-Dist'] + else: + result = 
self._data.setdefault('provides', []) + s = '%s (%s)' % (self.name, self.version) + if s not in result: + result.append(s) + return result + + @provides.setter + def provides(self, value): + if self._legacy: + self._legacy['Provides-Dist'] = value + else: + self._data['provides'] = value + + def get_requirements(self, reqts, extras=None, env=None): + """ + Base method to get dependencies, given a set of extras + to satisfy and an optional environment context. + :param reqts: A list of sometimes-wanted dependencies, + perhaps dependent on extras and environment. + :param extras: A list of optional components being requested. + :param env: An optional environment for marker evaluation. + """ + if self._legacy: + result = reqts + else: + result = [] + extras = get_extras(extras or [], self.extras) + for d in reqts: + if 'extra' not in d and 'environment' not in d: + # unconditional + include = True + else: + if 'extra' not in d: + # Not extra-dependent - only environment-dependent + include = True + else: + include = d.get('extra') in extras + if include: + # Not excluded because of extras, check environment + marker = d.get('environment') + if marker: + include = interpret(marker, env) + if include: + result.extend(d['requires']) + for key in ('build', 'dev', 'test'): + e = ':%s:' % key + if e in extras: + extras.remove(e) + # A recursive call, but it should terminate since 'test' + # has been removed from the extras + reqts = self._data.get('%s_requires' % key, []) + result.extend(self.get_requirements(reqts, extras=extras, + env=env)) + return result + + @property + def dictionary(self): + if self._legacy: + return self._from_legacy() + return self._data + + @property + def dependencies(self): + if self._legacy: + raise NotImplementedError + else: + return extract_by_key(self._data, self.DEPENDENCY_KEYS) + + @dependencies.setter + def dependencies(self, value): + if self._legacy: + raise NotImplementedError + else: + self._data.update(value) + + def _validate_mapping(self, mapping, scheme): + if mapping.get('metadata_version') != self.METADATA_VERSION: + raise MetadataUnrecognizedVersionError() + missing = [] + for key, exclusions in self.MANDATORY_KEYS.items(): + if key not in mapping: + if scheme not in exclusions: + missing.append(key) + if missing: + msg = 'Missing metadata items: %s' % ', '.join(missing) + raise MetadataMissingError(msg) + for k, v in mapping.items(): + self._validate_value(k, v, scheme) + + def validate(self): + if self._legacy: + missing, warnings = self._legacy.check(True) + if missing or warnings: + logger.warning('Metadata: missing: %s, warnings: %s', + missing, warnings) + else: + self._validate_mapping(self._data, self.scheme) + + def todict(self): + if self._legacy: + return self._legacy.todict(True) + else: + result = extract_by_key(self._data, self.INDEX_KEYS) + return result + + def _from_legacy(self): + assert self._legacy and not self._data + result = { + 'metadata_version': self.METADATA_VERSION, + 'generator': self.GENERATOR, + } + lmd = self._legacy.todict(True) # skip missing ones + for k in ('name', 'version', 'license', 'summary', 'description', + 'classifier'): + if k in lmd: + if k == 'classifier': + nk = 'classifiers' + else: + nk = k + result[nk] = lmd[k] + kw = lmd.get('Keywords', []) + if kw == ['']: + kw = [] + result['keywords'] = kw + keys = (('requires_dist', 'run_requires'), + ('setup_requires_dist', 'build_requires')) + for ok, nk in keys: + if ok in lmd and lmd[ok]: + result[nk] = [{'requires': lmd[ok]}] + result['provides'] = 
self.provides + author = {} + maintainer = {} + return result + + LEGACY_MAPPING = { + 'name': 'Name', + 'version': 'Version', + ('extensions', 'python.details', 'license'): 'License', + 'summary': 'Summary', + 'description': 'Description', + ('extensions', 'python.project', 'project_urls', 'Home'): 'Home-page', + ('extensions', 'python.project', 'contacts', 0, 'name'): 'Author', + ('extensions', 'python.project', 'contacts', 0, 'email'): 'Author-email', + 'source_url': 'Download-URL', + ('extensions', 'python.details', 'classifiers'): 'Classifier', + } + + def _to_legacy(self): + def process_entries(entries): + reqts = set() + for e in entries: + extra = e.get('extra') + env = e.get('environment') + rlist = e['requires'] + for r in rlist: + if not env and not extra: + reqts.add(r) + else: + marker = '' + if extra: + marker = 'extra == "%s"' % extra + if env: + if marker: + marker = '(%s) and %s' % (env, marker) + else: + marker = env + reqts.add(';'.join((r, marker))) + return reqts + + assert self._data and not self._legacy + result = LegacyMetadata() + nmd = self._data + # import pdb; pdb.set_trace() + for nk, ok in self.LEGACY_MAPPING.items(): + if not isinstance(nk, tuple): + if nk in nmd: + result[ok] = nmd[nk] + else: + d = nmd + found = True + for k in nk: + try: + d = d[k] + except (KeyError, IndexError): + found = False + break + if found: + result[ok] = d + r1 = process_entries(self.run_requires + self.meta_requires) + r2 = process_entries(self.build_requires + self.dev_requires) + if self.extras: + result['Provides-Extra'] = sorted(self.extras) + result['Requires-Dist'] = sorted(r1) + result['Setup-Requires-Dist'] = sorted(r2) + # TODO: any other fields wanted + return result + + def write(self, path=None, fileobj=None, legacy=False, skip_unknown=True): + if [path, fileobj].count(None) != 1: + raise ValueError('Exactly one of path and fileobj is needed') + self.validate() + if legacy: + if self._legacy: + legacy_md = self._legacy + else: + legacy_md = self._to_legacy() + if path: + legacy_md.write(path, skip_unknown=skip_unknown) + else: + legacy_md.write_file(fileobj, skip_unknown=skip_unknown) + else: + if self._legacy: + d = self._from_legacy() + else: + d = self._data + if fileobj: + json.dump(d, fileobj, ensure_ascii=True, indent=2, + sort_keys=True) + else: + with codecs.open(path, 'w', 'utf-8') as f: + json.dump(d, f, ensure_ascii=True, indent=2, + sort_keys=True) + + def add_requirements(self, requirements): + if self._legacy: + self._legacy.add_requirements(requirements) + else: + run_requires = self._data.setdefault('run_requires', []) + always = None + for entry in run_requires: + if 'environment' not in entry and 'extra' not in entry: + always = entry + break + if always is None: + always = { 'requires': requirements } + run_requires.insert(0, always) + else: + rset = set(always['requires']) | set(requirements) + always['requires'] = sorted(rset) + + def __repr__(self): + name = self.name or '(no name)' + version = self.version or 'no version' + return '<%s %s %s (%s)>' % (self.__class__.__name__, + self.metadata_version, name, version) diff --git a/env/lib/python3.8/site-packages/distlib/resources.py b/env/lib/python3.8/site-packages/distlib/resources.py new file mode 100644 index 00000000..fef52aa1 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/resources.py @@ -0,0 +1,358 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013-2017 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. 
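+# --- Editor's sketch for Metadata.write() defined in metadata.py above
+# (illustration only). legacy=True routes 2.x-style data through
+# _to_legacy() into the key-value PKG-INFO format:
+#
+#   >>> from distlib.metadata import Metadata
+#   >>> m = Metadata(mapping={'metadata_version': '2.0', 'name': 'demo',
+#   ...                       'version': '0.1', 'summary': 'A demo'})
+#   >>> import io; buf = io.StringIO()
+#   >>> m.write(fileobj=buf, legacy=True)
+#   >>> 'Name: demo' in buf.getvalue()
+#   True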
+# See LICENSE.txt and CONTRIBUTORS.txt. +# +from __future__ import unicode_literals + +import bisect +import io +import logging +import os +import pkgutil +import sys +import types +import zipimport + +from . import DistlibException +from .util import cached_property, get_cache_base, Cache + +logger = logging.getLogger(__name__) + + +cache = None # created when needed + + +class ResourceCache(Cache): + def __init__(self, base=None): + if base is None: + # Use native string to avoid issues on 2.x: see Python #20140. + base = os.path.join(get_cache_base(), str('resource-cache')) + super(ResourceCache, self).__init__(base) + + def is_stale(self, resource, path): + """ + Is the cache stale for the given resource? + + :param resource: The :class:`Resource` being cached. + :param path: The path of the resource in the cache. + :return: True if the cache is stale. + """ + # Cache invalidation is a hard problem :-) + return True + + def get(self, resource): + """ + Get a resource into the cache, + + :param resource: A :class:`Resource` instance. + :return: The pathname of the resource in the cache. + """ + prefix, path = resource.finder.get_cache_info(resource) + if prefix is None: + result = path + else: + result = os.path.join(self.base, self.prefix_to_dir(prefix), path) + dirname = os.path.dirname(result) + if not os.path.isdir(dirname): + os.makedirs(dirname) + if not os.path.exists(result): + stale = True + else: + stale = self.is_stale(resource, path) + if stale: + # write the bytes of the resource to the cache location + with open(result, 'wb') as f: + f.write(resource.bytes) + return result + + +class ResourceBase(object): + def __init__(self, finder, name): + self.finder = finder + self.name = name + + +class Resource(ResourceBase): + """ + A class representing an in-package resource, such as a data file. This is + not normally instantiated by user code, but rather by a + :class:`ResourceFinder` which manages the resource. + """ + is_container = False # Backwards compatibility + + def as_stream(self): + """ + Get the resource as a stream. + + This is not a property to make it obvious that it returns a new stream + each time. + """ + return self.finder.get_stream(self) + + @cached_property + def file_path(self): + global cache + if cache is None: + cache = ResourceCache() + return cache.get(self) + + @cached_property + def bytes(self): + return self.finder.get_bytes(self) + + @cached_property + def size(self): + return self.finder.get_size(self) + + +class ResourceContainer(ResourceBase): + is_container = True # Backwards compatibility + + @cached_property + def resources(self): + return self.finder.get_resources(self) + + +class ResourceFinder(object): + """ + Resource finder for file system resources. 
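+# --- Editor's usage sketch for Resource/ResourceCache (names such as
+# 'mypkg' and 'data.txt' are hypothetical):
+#
+#   >>> from distlib.resources import finder
+#   >>> f = finder('mypkg')
+#   >>> r = f.find('data.txt')
+#   >>> payload = r.bytes            # cached after first access
+#   >>> with r.as_stream() as s:     # a fresh stream on every call
+#   ...     head = s.read(16)
+#   >>> local = r.file_path          # materialised on disk via ResourceCache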
+ """ + + if sys.platform.startswith('java'): + skipped_extensions = ('.pyc', '.pyo', '.class') + else: + skipped_extensions = ('.pyc', '.pyo') + + def __init__(self, module): + self.module = module + self.loader = getattr(module, '__loader__', None) + self.base = os.path.dirname(getattr(module, '__file__', '')) + + def _adjust_path(self, path): + return os.path.realpath(path) + + def _make_path(self, resource_name): + # Issue #50: need to preserve type of path on Python 2.x + # like os.path._get_sep + if isinstance(resource_name, bytes): # should only happen on 2.x + sep = b'/' + else: + sep = '/' + parts = resource_name.split(sep) + parts.insert(0, self.base) + result = os.path.join(*parts) + return self._adjust_path(result) + + def _find(self, path): + return os.path.exists(path) + + def get_cache_info(self, resource): + return None, resource.path + + def find(self, resource_name): + path = self._make_path(resource_name) + if not self._find(path): + result = None + else: + if self._is_directory(path): + result = ResourceContainer(self, resource_name) + else: + result = Resource(self, resource_name) + result.path = path + return result + + def get_stream(self, resource): + return open(resource.path, 'rb') + + def get_bytes(self, resource): + with open(resource.path, 'rb') as f: + return f.read() + + def get_size(self, resource): + return os.path.getsize(resource.path) + + def get_resources(self, resource): + def allowed(f): + return (f != '__pycache__' and not + f.endswith(self.skipped_extensions)) + return set([f for f in os.listdir(resource.path) if allowed(f)]) + + def is_container(self, resource): + return self._is_directory(resource.path) + + _is_directory = staticmethod(os.path.isdir) + + def iterator(self, resource_name): + resource = self.find(resource_name) + if resource is not None: + todo = [resource] + while todo: + resource = todo.pop(0) + yield resource + if resource.is_container: + rname = resource.name + for name in resource.resources: + if not rname: + new_name = name + else: + new_name = '/'.join([rname, name]) + child = self.find(new_name) + if child.is_container: + todo.append(child) + else: + yield child + + +class ZipResourceFinder(ResourceFinder): + """ + Resource finder for resources in .zip files. 
+ """ + def __init__(self, module): + super(ZipResourceFinder, self).__init__(module) + archive = self.loader.archive + self.prefix_len = 1 + len(archive) + # PyPy doesn't have a _files attr on zipimporter, and you can't set one + if hasattr(self.loader, '_files'): + self._files = self.loader._files + else: + self._files = zipimport._zip_directory_cache[archive] + self.index = sorted(self._files) + + def _adjust_path(self, path): + return path + + def _find(self, path): + path = path[self.prefix_len:] + if path in self._files: + result = True + else: + if path and path[-1] != os.sep: + path = path + os.sep + i = bisect.bisect(self.index, path) + try: + result = self.index[i].startswith(path) + except IndexError: + result = False + if not result: + logger.debug('_find failed: %r %r', path, self.loader.prefix) + else: + logger.debug('_find worked: %r %r', path, self.loader.prefix) + return result + + def get_cache_info(self, resource): + prefix = self.loader.archive + path = resource.path[1 + len(prefix):] + return prefix, path + + def get_bytes(self, resource): + return self.loader.get_data(resource.path) + + def get_stream(self, resource): + return io.BytesIO(self.get_bytes(resource)) + + def get_size(self, resource): + path = resource.path[self.prefix_len:] + return self._files[path][3] + + def get_resources(self, resource): + path = resource.path[self.prefix_len:] + if path and path[-1] != os.sep: + path += os.sep + plen = len(path) + result = set() + i = bisect.bisect(self.index, path) + while i < len(self.index): + if not self.index[i].startswith(path): + break + s = self.index[i][plen:] + result.add(s.split(os.sep, 1)[0]) # only immediate children + i += 1 + return result + + def _is_directory(self, path): + path = path[self.prefix_len:] + if path and path[-1] != os.sep: + path += os.sep + i = bisect.bisect(self.index, path) + try: + result = self.index[i].startswith(path) + except IndexError: + result = False + return result + + +_finder_registry = { + type(None): ResourceFinder, + zipimport.zipimporter: ZipResourceFinder +} + +try: + # In Python 3.6, _frozen_importlib -> _frozen_importlib_external + try: + import _frozen_importlib_external as _fi + except ImportError: + import _frozen_importlib as _fi + _finder_registry[_fi.SourceFileLoader] = ResourceFinder + _finder_registry[_fi.FileFinder] = ResourceFinder + # See issue #146 + _finder_registry[_fi.SourcelessFileLoader] = ResourceFinder + del _fi +except (ImportError, AttributeError): + pass + + +def register_finder(loader, finder_maker): + _finder_registry[type(loader)] = finder_maker + + +_finder_cache = {} + + +def finder(package): + """ + Return a resource finder for a package. + :param package: The name of the package. + :return: A :class:`ResourceFinder` instance for the package. 
+ """ + if package in _finder_cache: + result = _finder_cache[package] + else: + if package not in sys.modules: + __import__(package) + module = sys.modules[package] + path = getattr(module, '__path__', None) + if path is None: + raise DistlibException('You cannot get a finder for a module, ' + 'only for a package') + loader = getattr(module, '__loader__', None) + finder_maker = _finder_registry.get(type(loader)) + if finder_maker is None: + raise DistlibException('Unable to locate finder for %r' % package) + result = finder_maker(module) + _finder_cache[package] = result + return result + + +_dummy_module = types.ModuleType(str('__dummy__')) + + +def finder_for_path(path): + """ + Return a resource finder for a path, which should represent a container. + + :param path: The path. + :return: A :class:`ResourceFinder` instance for the path. + """ + result = None + # calls any path hooks, gets importer into cache + pkgutil.get_importer(path) + loader = sys.path_importer_cache.get(path) + finder = _finder_registry.get(type(loader)) + if finder: + module = _dummy_module + module.__file__ = os.path.join(path, '') + module.__loader__ = loader + result = finder(module) + return result diff --git a/env/lib/python3.8/site-packages/distlib/scripts.py b/env/lib/python3.8/site-packages/distlib/scripts.py new file mode 100644 index 00000000..d2706242 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/scripts.py @@ -0,0 +1,437 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013-2015 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +from io import BytesIO +import logging +import os +import re +import struct +import sys +import time +from zipfile import ZipInfo + +from .compat import sysconfig, detect_encoding, ZipFile +from .resources import finder +from .util import (FileOperator, get_export_entry, convert_path, + get_executable, get_platform, in_venv) + +logger = logging.getLogger(__name__) + +_DEFAULT_MANIFEST = ''' + + + + + + + + + + + + +'''.strip() + +# check if Python is called on the first line with this expression +FIRST_LINE_RE = re.compile(b'^#!.*pythonw?[0-9.]*([ \t].*)?$') +SCRIPT_TEMPLATE = r'''# -*- coding: utf-8 -*- +import re +import sys +from %(module)s import %(import_name)s +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(%(func)s()) +''' + + +def enquote_executable(executable): + if ' ' in executable: + # make sure we quote only the executable in case of env + # for example /usr/bin/env "/dir with spaces/bin/jython" + # instead of "/usr/bin/env /dir with spaces/bin/jython" + # otherwise whole + if executable.startswith('/usr/bin/env '): + env, _executable = executable.split(' ', 1) + if ' ' in _executable and not _executable.startswith('"'): + executable = '%s "%s"' % (env, _executable) + else: + if not executable.startswith('"'): + executable = '"%s"' % executable + return executable + +# Keep the old name around (for now), as there is at least one project using it! +_enquote_executable = enquote_executable + +class ScriptMaker(object): + """ + A class to copy or create scripts from source scripts or callable + specifications. 
+ """ + script_template = SCRIPT_TEMPLATE + + executable = None # for shebangs + + def __init__(self, source_dir, target_dir, add_launchers=True, + dry_run=False, fileop=None): + self.source_dir = source_dir + self.target_dir = target_dir + self.add_launchers = add_launchers + self.force = False + self.clobber = False + # It only makes sense to set mode bits on POSIX. + self.set_mode = (os.name == 'posix') or (os.name == 'java' and + os._name == 'posix') + self.variants = set(('', 'X.Y')) + self._fileop = fileop or FileOperator(dry_run) + + self._is_nt = os.name == 'nt' or ( + os.name == 'java' and os._name == 'nt') + self.version_info = sys.version_info + + def _get_alternate_executable(self, executable, options): + if options.get('gui', False) and self._is_nt: # pragma: no cover + dn, fn = os.path.split(executable) + fn = fn.replace('python', 'pythonw') + executable = os.path.join(dn, fn) + return executable + + if sys.platform.startswith('java'): # pragma: no cover + def _is_shell(self, executable): + """ + Determine if the specified executable is a script + (contains a #! line) + """ + try: + with open(executable) as fp: + return fp.read(2) == '#!' + except (OSError, IOError): + logger.warning('Failed to open %s', executable) + return False + + def _fix_jython_executable(self, executable): + if self._is_shell(executable): + # Workaround for Jython is not needed on Linux systems. + import java + + if java.lang.System.getProperty('os.name') == 'Linux': + return executable + elif executable.lower().endswith('jython.exe'): + # Use wrapper exe for Jython on Windows + return executable + return '/usr/bin/env %s' % executable + + def _build_shebang(self, executable, post_interp): + """ + Build a shebang line. In the simple case (on Windows, or a shebang line + which is not too long or contains spaces) use a simple formulation for + the shebang. Otherwise, use /bin/sh as the executable, with a contrived + shebang which allows the script to run either under Python or sh, using + suitable quoting. Thanks to Harald Nordgren for his input. + + See also: http://www.in-ulm.de/~mascheck/various/shebang/#length + https://hg.mozilla.org/mozilla-central/file/tip/mach + """ + if os.name != 'posix': + simple_shebang = True + else: + # Add 3 for '#!' prefix and newline suffix. + shebang_length = len(executable) + len(post_interp) + 3 + if sys.platform == 'darwin': + max_shebang_length = 512 + else: + max_shebang_length = 127 + simple_shebang = ((b' ' not in executable) and + (shebang_length <= max_shebang_length)) + + if simple_shebang: + result = b'#!' 
+ executable + post_interp + b'\n' + else: + result = b'#!/bin/sh\n' + result += b"'''exec' " + executable + post_interp + b' "$0" "$@"\n' + result += b"' '''" + return result + + def _get_shebang(self, encoding, post_interp=b'', options=None): + enquote = True + if self.executable: + executable = self.executable + enquote = False # assume this will be taken care of + elif not sysconfig.is_python_build(): + executable = get_executable() + elif in_venv(): # pragma: no cover + executable = os.path.join(sysconfig.get_path('scripts'), + 'python%s' % sysconfig.get_config_var('EXE')) + else: # pragma: no cover + executable = os.path.join( + sysconfig.get_config_var('BINDIR'), + 'python%s%s' % (sysconfig.get_config_var('VERSION'), + sysconfig.get_config_var('EXE'))) + if not os.path.isfile(executable): + # for Python builds from source on Windows, no Python executables with + # a version suffix are created, so we use python.exe + executable = os.path.join(sysconfig.get_config_var('BINDIR'), + 'python%s' % (sysconfig.get_config_var('EXE'))) + if options: + executable = self._get_alternate_executable(executable, options) + + if sys.platform.startswith('java'): # pragma: no cover + executable = self._fix_jython_executable(executable) + + # Normalise case for Windows - COMMENTED OUT + # executable = os.path.normcase(executable) + # N.B. The normalising operation above has been commented out: See + # issue #124. Although paths in Windows are generally case-insensitive, + # they aren't always. For example, a path containing a ẞ (which is a + # LATIN CAPITAL LETTER SHARP S - U+1E9E) is normcased to ß (which is a + # LATIN SMALL LETTER SHARP S' - U+00DF). The two are not considered by + # Windows as equivalent in path names. + + # If the user didn't specify an executable, it may be necessary to + # cater for executable paths with spaces (not uncommon on Windows) + if enquote: + executable = enquote_executable(executable) + # Issue #51: don't use fsencode, since we later try to + # check that the shebang is decodable using utf-8. + executable = executable.encode('utf-8') + # in case of IronPython, play safe and enable frames support + if (sys.platform == 'cli' and '-X:Frames' not in post_interp + and '-X:FullFrames' not in post_interp): # pragma: no cover + post_interp += b' -X:Frames' + shebang = self._build_shebang(executable, post_interp) + # Python parser starts to read a script using UTF-8 until + # it gets a #coding:xxx cookie. The shebang has to be the + # first line of a file, the #coding:xxx cookie cannot be + # written before. So the shebang has to be decodable from + # UTF-8. + try: + shebang.decode('utf-8') + except UnicodeDecodeError: # pragma: no cover + raise ValueError( + 'The shebang (%r) is not decodable from utf-8' % shebang) + # If the script is encoded to a custom encoding (use a + # #coding:xxx cookie), the shebang has to be decodable from + # the script encoding too. 
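+        # --- Editor's illustration (hypothetical paths): the two shebang
+        # forms built by _build_shebang(). A short, space-free interpreter
+        # yields the simple form:
+        #
+        #   #!/usr/bin/python3
+        #
+        # while an over-long or space-containing one on POSIX yields the
+        # /bin/sh trampoline:
+        #
+        #   #!/bin/sh
+        #   '''exec' /very/long/path/to/python3 "$0" "$@"
+        #   ' '''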
+ if encoding != 'utf-8': + try: + shebang.decode(encoding) + except UnicodeDecodeError: # pragma: no cover + raise ValueError( + 'The shebang (%r) is not decodable ' + 'from the script encoding (%r)' % (shebang, encoding)) + return shebang + + def _get_script_text(self, entry): + return self.script_template % dict(module=entry.prefix, + import_name=entry.suffix.split('.')[0], + func=entry.suffix) + + manifest = _DEFAULT_MANIFEST + + def get_manifest(self, exename): + base = os.path.basename(exename) + return self.manifest % base + + def _write_script(self, names, shebang, script_bytes, filenames, ext): + use_launcher = self.add_launchers and self._is_nt + linesep = os.linesep.encode('utf-8') + if not shebang.endswith(linesep): + shebang += linesep + if not use_launcher: + script_bytes = shebang + script_bytes + else: # pragma: no cover + if ext == 'py': + launcher = self._get_launcher('t') + else: + launcher = self._get_launcher('w') + stream = BytesIO() + with ZipFile(stream, 'w') as zf: + source_date_epoch = os.environ.get('SOURCE_DATE_EPOCH') + if source_date_epoch: + date_time = time.gmtime(int(source_date_epoch))[:6] + zinfo = ZipInfo(filename='__main__.py', date_time=date_time) + zf.writestr(zinfo, script_bytes) + else: + zf.writestr('__main__.py', script_bytes) + zip_data = stream.getvalue() + script_bytes = launcher + shebang + zip_data + for name in names: + outname = os.path.join(self.target_dir, name) + if use_launcher: # pragma: no cover + n, e = os.path.splitext(outname) + if e.startswith('.py'): + outname = n + outname = '%s.exe' % outname + try: + self._fileop.write_binary_file(outname, script_bytes) + except Exception: + # Failed writing an executable - it might be in use. + logger.warning('Failed to write executable - trying to ' + 'use .deleteme logic') + dfname = '%s.deleteme' % outname + if os.path.exists(dfname): + os.remove(dfname) # Not allowed to fail here + os.rename(outname, dfname) # nor here + self._fileop.write_binary_file(outname, script_bytes) + logger.debug('Able to replace executable using ' + '.deleteme logic') + try: + os.remove(dfname) + except Exception: + pass # still in use - ignore error + else: + if self._is_nt and not outname.endswith('.' 
+ ext): # pragma: no cover + outname = '%s.%s' % (outname, ext) + if os.path.exists(outname) and not self.clobber: + logger.warning('Skipping existing file %s', outname) + continue + self._fileop.write_binary_file(outname, script_bytes) + if self.set_mode: + self._fileop.set_executable_mode([outname]) + filenames.append(outname) + + variant_separator = '-' + + def get_script_filenames(self, name): + result = set() + if '' in self.variants: + result.add(name) + if 'X' in self.variants: + result.add('%s%s' % (name, self.version_info[0])) + if 'X.Y' in self.variants: + result.add('%s%s%s.%s' % (name, self.variant_separator, + self.version_info[0], self.version_info[1])) + return result + + def _make_script(self, entry, filenames, options=None): + post_interp = b'' + if options: + args = options.get('interpreter_args', []) + if args: + args = ' %s' % ' '.join(args) + post_interp = args.encode('utf-8') + shebang = self._get_shebang('utf-8', post_interp, options=options) + script = self._get_script_text(entry).encode('utf-8') + scriptnames = self.get_script_filenames(entry.name) + if options and options.get('gui', False): + ext = 'pyw' + else: + ext = 'py' + self._write_script(scriptnames, shebang, script, filenames, ext) + + def _copy_script(self, script, filenames): + adjust = False + script = os.path.join(self.source_dir, convert_path(script)) + outname = os.path.join(self.target_dir, os.path.basename(script)) + if not self.force and not self._fileop.newer(script, outname): + logger.debug('not copying %s (up-to-date)', script) + return + + # Always open the file, but ignore failures in dry-run mode -- + # that way, we'll get accurate feedback if we can read the + # script. + try: + f = open(script, 'rb') + except IOError: # pragma: no cover + if not self.dry_run: + raise + f = None + else: + first_line = f.readline() + if not first_line: # pragma: no cover + logger.warning('%s is an empty file (skipping)', script) + return + + match = FIRST_LINE_RE.match(first_line.replace(b'\r\n', b'\n')) + if match: + adjust = True + post_interp = match.group(1) or b'' + + if not adjust: + if f: + f.close() + self._fileop.copy_file(script, outname) + if self.set_mode: + self._fileop.set_executable_mode([outname]) + filenames.append(outname) + else: + logger.info('copying and adjusting %s -> %s', script, + self.target_dir) + if not self._fileop.dry_run: + encoding, lines = detect_encoding(f.readline) + f.seek(0) + shebang = self._get_shebang(encoding, post_interp) + if b'pythonw' in first_line: # pragma: no cover + ext = 'pyw' + else: + ext = 'py' + n = os.path.basename(outname) + self._write_script([n], shebang, f.read(), filenames, ext) + if f: + f.close() + + @property + def dry_run(self): + return self._fileop.dry_run + + @dry_run.setter + def dry_run(self, value): + self._fileop.dry_run = value + + if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'): # pragma: no cover + # Executable launcher support. 
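+        # --- Editor's sketch for get_script_filenames() above (the target
+        # directory is hypothetical; output assumes Python 3.8 and the
+        # default variants {'', 'X.Y'}):
+        #
+        #   >>> sm = ScriptMaker(source_dir=None, target_dir='/tmp/bin')
+        #   >>> sorted(sm.get_script_filenames('foo'))
+        #   ['foo', 'foo-3.8']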
+ # Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/ + + def _get_launcher(self, kind): + if struct.calcsize('P') == 8: # 64-bit + bits = '64' + else: + bits = '32' + platform_suffix = '-arm' if get_platform() == 'win-arm64' else '' + name = '%s%s%s.exe' % (kind, bits, platform_suffix) + # Issue 31: don't hardcode an absolute package name, but + # determine it relative to the current package + distlib_package = __name__.rsplit('.', 1)[0] + resource = finder(distlib_package).find(name) + if not resource: + msg = ('Unable to find resource %s in package %s' % (name, + distlib_package)) + raise ValueError(msg) + return resource.bytes + + # Public API follows + + def make(self, specification, options=None): + """ + Make a script. + + :param specification: The specification, which is either a valid export + entry specification (to make a script from a + callable) or a filename (to make a script by + copying from a source location). + :param options: A dictionary of options controlling script generation. + :return: A list of all absolute pathnames written to. + """ + filenames = [] + entry = get_export_entry(specification) + if entry is None: + self._copy_script(specification, filenames) + else: + self._make_script(entry, filenames, options=options) + return filenames + + def make_multiple(self, specifications, options=None): + """ + Take a list of specifications and make scripts from them, + :param specifications: A list of specifications. + :return: A list of all absolute pathnames written to, + """ + filenames = [] + for specification in specifications: + filenames.extend(self.make(specification, options)) + return filenames diff --git a/env/lib/python3.8/site-packages/distlib/t32.exe b/env/lib/python3.8/site-packages/distlib/t32.exe new file mode 100644 index 00000000..52154f0b Binary files /dev/null and b/env/lib/python3.8/site-packages/distlib/t32.exe differ diff --git a/env/lib/python3.8/site-packages/distlib/t64-arm.exe b/env/lib/python3.8/site-packages/distlib/t64-arm.exe new file mode 100644 index 00000000..e1ab8f8f Binary files /dev/null and b/env/lib/python3.8/site-packages/distlib/t64-arm.exe differ diff --git a/env/lib/python3.8/site-packages/distlib/t64.exe b/env/lib/python3.8/site-packages/distlib/t64.exe new file mode 100644 index 00000000..e8bebdba Binary files /dev/null and b/env/lib/python3.8/site-packages/distlib/t64.exe differ diff --git a/env/lib/python3.8/site-packages/distlib/util.py b/env/lib/python3.8/site-packages/distlib/util.py new file mode 100644 index 00000000..dd01849d --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/util.py @@ -0,0 +1,1932 @@ +# +# Copyright (C) 2012-2021 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +import codecs +from collections import deque +import contextlib +import csv +from glob import iglob as std_iglob +import io +import json +import logging +import os +import py_compile +import re +import socket +try: + import ssl +except ImportError: # pragma: no cover + ssl = None +import subprocess +import sys +import tarfile +import tempfile +import textwrap + +try: + import threading +except ImportError: # pragma: no cover + import dummy_threading as threading +import time + +from . 
import DistlibException +from .compat import (string_types, text_type, shutil, raw_input, StringIO, + cache_from_source, urlopen, urljoin, httplib, xmlrpclib, + splittype, HTTPHandler, BaseConfigurator, valid_ident, + Container, configparser, URLError, ZipFile, fsdecode, + unquote, urlparse) + +logger = logging.getLogger(__name__) + +# +# Requirement parsing code as per PEP 508 +# + +IDENTIFIER = re.compile(r'^([\w\.-]+)\s*') +VERSION_IDENTIFIER = re.compile(r'^([\w\.*+-]+)\s*') +COMPARE_OP = re.compile(r'^(<=?|>=?|={2,3}|[~!]=)\s*') +MARKER_OP = re.compile(r'^((<=?)|(>=?)|={2,3}|[~!]=|in|not\s+in)\s*') +OR = re.compile(r'^or\b\s*') +AND = re.compile(r'^and\b\s*') +NON_SPACE = re.compile(r'(\S+)\s*') +STRING_CHUNK = re.compile(r'([\s\w\.{}()*+#:;,/?!~`@$%^&=|<>\[\]-]+)') + + +def parse_marker(marker_string): + """ + Parse a marker string and return a dictionary containing a marker expression. + + The dictionary will contain keys "op", "lhs" and "rhs" for non-terminals in + the expression grammar, or strings. A string contained in quotes is to be + interpreted as a literal string, and a string not contained in quotes is a + variable (such as os_name). + """ + def marker_var(remaining): + # either identifier, or literal string + m = IDENTIFIER.match(remaining) + if m: + result = m.groups()[0] + remaining = remaining[m.end():] + elif not remaining: + raise SyntaxError('unexpected end of input') + else: + q = remaining[0] + if q not in '\'"': + raise SyntaxError('invalid expression: %s' % remaining) + oq = '\'"'.replace(q, '') + remaining = remaining[1:] + parts = [q] + while remaining: + # either a string chunk, or oq, or q to terminate + if remaining[0] == q: + break + elif remaining[0] == oq: + parts.append(oq) + remaining = remaining[1:] + else: + m = STRING_CHUNK.match(remaining) + if not m: + raise SyntaxError('error in string literal: %s' % remaining) + parts.append(m.groups()[0]) + remaining = remaining[m.end():] + else: + s = ''.join(parts) + raise SyntaxError('unterminated string: %s' % s) + parts.append(q) + result = ''.join(parts) + remaining = remaining[1:].lstrip() # skip past closing quote + return result, remaining + + def marker_expr(remaining): + if remaining and remaining[0] == '(': + result, remaining = marker(remaining[1:].lstrip()) + if remaining[0] != ')': + raise SyntaxError('unterminated parenthesis: %s' % remaining) + remaining = remaining[1:].lstrip() + else: + lhs, remaining = marker_var(remaining) + while remaining: + m = MARKER_OP.match(remaining) + if not m: + break + op = m.groups()[0] + remaining = remaining[m.end():] + rhs, remaining = marker_var(remaining) + lhs = {'op': op, 'lhs': lhs, 'rhs': rhs} + result = lhs + return result, remaining + + def marker_and(remaining): + lhs, remaining = marker_expr(remaining) + while remaining: + m = AND.match(remaining) + if not m: + break + remaining = remaining[m.end():] + rhs, remaining = marker_expr(remaining) + lhs = {'op': 'and', 'lhs': lhs, 'rhs': rhs} + return lhs, remaining + + def marker(remaining): + lhs, remaining = marker_and(remaining) + while remaining: + m = OR.match(remaining) + if not m: + break + remaining = remaining[m.end():] + rhs, remaining = marker_and(remaining) + lhs = {'op': 'or', 'lhs': lhs, 'rhs': rhs} + return lhs, remaining + + return marker(marker_string) + + +def parse_requirement(req): + """ + Parse a requirement passed in as a string. Return a Container + whose attributes contain the various parts of the requirement. 
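+    # --- Editor's illustration of the tree parse_marker() returns. ---
+    #
+    #   >>> from distlib.util import parse_marker
+    #   >>> expr, rest = parse_marker(
+    #   ...     'python_version >= "3.6" and os_name == "posix"')
+    #   >>> expr == {'op': 'and',
+    #   ...          'lhs': {'op': '>=', 'lhs': 'python_version', 'rhs': '"3.6"'},
+    #   ...          'rhs': {'op': '==', 'lhs': 'os_name', 'rhs': '"posix"'}}
+    #   True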
+ """ + remaining = req.strip() + if not remaining or remaining.startswith('#'): + return None + m = IDENTIFIER.match(remaining) + if not m: + raise SyntaxError('name expected: %s' % remaining) + distname = m.groups()[0] + remaining = remaining[m.end():] + extras = mark_expr = versions = uri = None + if remaining and remaining[0] == '[': + i = remaining.find(']', 1) + if i < 0: + raise SyntaxError('unterminated extra: %s' % remaining) + s = remaining[1:i] + remaining = remaining[i + 1:].lstrip() + extras = [] + while s: + m = IDENTIFIER.match(s) + if not m: + raise SyntaxError('malformed extra: %s' % s) + extras.append(m.groups()[0]) + s = s[m.end():] + if not s: + break + if s[0] != ',': + raise SyntaxError('comma expected in extras: %s' % s) + s = s[1:].lstrip() + if not extras: + extras = None + if remaining: + if remaining[0] == '@': + # it's a URI + remaining = remaining[1:].lstrip() + m = NON_SPACE.match(remaining) + if not m: + raise SyntaxError('invalid URI: %s' % remaining) + uri = m.groups()[0] + t = urlparse(uri) + # there are issues with Python and URL parsing, so this test + # is a bit crude. See bpo-20271, bpo-23505. Python doesn't + # always parse invalid URLs correctly - it should raise + # exceptions for malformed URLs + if not (t.scheme and t.netloc): + raise SyntaxError('Invalid URL: %s' % uri) + remaining = remaining[m.end():].lstrip() + else: + + def get_versions(ver_remaining): + """ + Return a list of operator, version tuples if any are + specified, else None. + """ + m = COMPARE_OP.match(ver_remaining) + versions = None + if m: + versions = [] + while True: + op = m.groups()[0] + ver_remaining = ver_remaining[m.end():] + m = VERSION_IDENTIFIER.match(ver_remaining) + if not m: + raise SyntaxError('invalid version: %s' % ver_remaining) + v = m.groups()[0] + versions.append((op, v)) + ver_remaining = ver_remaining[m.end():] + if not ver_remaining or ver_remaining[0] != ',': + break + ver_remaining = ver_remaining[1:].lstrip() + # Some packages have a trailing comma which would break things + # See issue #148 + if not ver_remaining: + break + m = COMPARE_OP.match(ver_remaining) + if not m: + raise SyntaxError('invalid constraint: %s' % ver_remaining) + if not versions: + versions = None + return versions, ver_remaining + + if remaining[0] != '(': + versions, remaining = get_versions(remaining) + else: + i = remaining.find(')', 1) + if i < 0: + raise SyntaxError('unterminated parenthesis: %s' % remaining) + s = remaining[1:i] + remaining = remaining[i + 1:].lstrip() + # As a special diversion from PEP 508, allow a version number + # a.b.c in parentheses as a synonym for ~= a.b.c (because this + # is allowed in earlier PEPs) + if COMPARE_OP.match(s): + versions, _ = get_versions(s) + else: + m = VERSION_IDENTIFIER.match(s) + if not m: + raise SyntaxError('invalid constraint: %s' % s) + v = m.groups()[0] + s = s[m.end():].lstrip() + if s: + raise SyntaxError('invalid constraint: %s' % s) + versions = [('~=', v)] + + if remaining: + if remaining[0] != ';': + raise SyntaxError('invalid requirement: %s' % remaining) + remaining = remaining[1:].lstrip() + + mark_expr, remaining = parse_marker(remaining) + + if remaining and remaining[0] != '#': + raise SyntaxError('unexpected trailing data: %s' % remaining) + + if not versions: + rs = distname + else: + rs = '%s %s' % (distname, ', '.join(['%s %s' % con for con in versions])) + return Container(name=distname, extras=extras, constraints=versions, + marker=mark_expr, url=uri, requirement=rs) + + +def 
get_resources_dests(resources_root, rules): + """Find destinations for resources files""" + + def get_rel_path(root, path): + # normalizes and returns a lstripped-/-separated path + root = root.replace(os.path.sep, '/') + path = path.replace(os.path.sep, '/') + assert path.startswith(root) + return path[len(root):].lstrip('/') + + destinations = {} + for base, suffix, dest in rules: + prefix = os.path.join(resources_root, base) + for abs_base in iglob(prefix): + abs_glob = os.path.join(abs_base, suffix) + for abs_path in iglob(abs_glob): + resource_file = get_rel_path(resources_root, abs_path) + if dest is None: # remove the entry if it was here + destinations.pop(resource_file, None) + else: + rel_path = get_rel_path(abs_base, abs_path) + rel_dest = dest.replace(os.path.sep, '/').rstrip('/') + destinations[resource_file] = rel_dest + '/' + rel_path + return destinations + + +def in_venv(): + if hasattr(sys, 'real_prefix'): + # virtualenv venvs + result = True + else: + # PEP 405 venvs + result = sys.prefix != getattr(sys, 'base_prefix', sys.prefix) + return result + + +def get_executable(): +# The __PYVENV_LAUNCHER__ dance is apparently no longer needed, as +# changes to the stub launcher mean that sys.executable always points +# to the stub on OS X +# if sys.platform == 'darwin' and ('__PYVENV_LAUNCHER__' +# in os.environ): +# result = os.environ['__PYVENV_LAUNCHER__'] +# else: +# result = sys.executable +# return result + # Avoid normcasing: see issue #143 + # result = os.path.normcase(sys.executable) + result = sys.executable + if not isinstance(result, text_type): + result = fsdecode(result) + return result + + +def proceed(prompt, allowed_chars, error_prompt=None, default=None): + p = prompt + while True: + s = raw_input(p) + p = prompt + if not s and default: + s = default + if s: + c = s[0].lower() + if c in allowed_chars: + break + if error_prompt: + p = '%c: %s\n%s' % (c, error_prompt, prompt) + return c + + +def extract_by_key(d, keys): + if isinstance(keys, string_types): + keys = keys.split() + result = {} + for key in keys: + if key in d: + result[key] = d[key] + return result + +def read_exports(stream): + if sys.version_info[0] >= 3: + # needs to be a text stream + stream = codecs.getreader('utf-8')(stream) + # Try to load as JSON, falling back on legacy format + data = stream.read() + stream = StringIO(data) + try: + jdata = json.load(stream) + result = jdata['extensions']['python.exports']['exports'] + for group, entries in result.items(): + for k, v in entries.items(): + s = '%s = %s' % (k, v) + entry = get_export_entry(s) + assert entry is not None + entries[k] = entry + return result + except Exception: + stream.seek(0, 0) + + def read_stream(cp, stream): + if hasattr(cp, 'read_file'): + cp.read_file(stream) + else: + cp.readfp(stream) + + cp = configparser.ConfigParser() + try: + read_stream(cp, stream) + except configparser.MissingSectionHeaderError: + stream.close() + data = textwrap.dedent(data) + stream = StringIO(data) + read_stream(cp, stream) + + result = {} + for key in cp.sections(): + result[key] = entries = {} + for name, value in cp.items(key): + s = '%s = %s' % (name, value) + entry = get_export_entry(s) + assert entry is not None + #entry.dist = self + entries[name] = entry + return result + + +def write_exports(exports, stream): + if sys.version_info[0] >= 3: + # needs to be a text stream + stream = codecs.getwriter('utf-8')(stream) + cp = configparser.ConfigParser() + for k, v in exports.items(): + # TODO check k, v for valid values + cp.add_section(k) 
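+    # --- Editor's illustration for parse_requirement() defined above. The
+    # returned Container exposes the parsed pieces as attributes:
+    #
+    #   >>> from distlib.util import parse_requirement
+    #   >>> r = parse_requirement('demo[test] (>=1.0, <2.0); python_version >= "3.6"')
+    #   >>> r.name, r.extras
+    #   ('demo', ['test'])
+    #   >>> r.constraints
+    #   [('>=', '1.0'), ('<', '2.0')]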
+ for entry in v.values(): + if entry.suffix is None: + s = entry.prefix + else: + s = '%s:%s' % (entry.prefix, entry.suffix) + if entry.flags: + s = '%s [%s]' % (s, ', '.join(entry.flags)) + cp.set(k, entry.name, s) + cp.write(stream) + + +@contextlib.contextmanager +def tempdir(): + td = tempfile.mkdtemp() + try: + yield td + finally: + shutil.rmtree(td) + +@contextlib.contextmanager +def chdir(d): + cwd = os.getcwd() + try: + os.chdir(d) + yield + finally: + os.chdir(cwd) + + +@contextlib.contextmanager +def socket_timeout(seconds=15): + cto = socket.getdefaulttimeout() + try: + socket.setdefaulttimeout(seconds) + yield + finally: + socket.setdefaulttimeout(cto) + + +class cached_property(object): + def __init__(self, func): + self.func = func + #for attr in ('__name__', '__module__', '__doc__'): + # setattr(self, attr, getattr(func, attr, None)) + + def __get__(self, obj, cls=None): + if obj is None: + return self + value = self.func(obj) + object.__setattr__(obj, self.func.__name__, value) + #obj.__dict__[self.func.__name__] = value = self.func(obj) + return value + +def convert_path(pathname): + """Return 'pathname' as a name that will work on the native filesystem. + + The path is split on '/' and put back together again using the current + directory separator. Needed because filenames in the setup script are + always supplied in Unix style, and have to be converted to the local + convention before we can actually use them in the filesystem. Raises + ValueError on non-Unix-ish systems if 'pathname' either starts or + ends with a slash. + """ + if os.sep == '/': + return pathname + if not pathname: + return pathname + if pathname[0] == '/': + raise ValueError("path '%s' cannot be absolute" % pathname) + if pathname[-1] == '/': + raise ValueError("path '%s' cannot end with '/'" % pathname) + + paths = pathname.split('/') + while os.curdir in paths: + paths.remove(os.curdir) + if not paths: + return os.curdir + return os.path.join(*paths) + + +class FileOperator(object): + def __init__(self, dry_run=False): + self.dry_run = dry_run + self.ensured = set() + self._init_record() + + def _init_record(self): + self.record = False + self.files_written = set() + self.dirs_created = set() + + def record_as_written(self, path): + if self.record: + self.files_written.add(path) + + def newer(self, source, target): + """Tell if the target is newer than the source. + + Returns true if 'source' exists and is more recently modified than + 'target', or if 'source' exists and 'target' doesn't. + + Returns false if both exist and 'target' is the same age or younger + than 'source'. Raise PackagingFileError if 'source' does not exist. + + Note that this test is not very accurate: files created in the same + second will have the same "age". + """ + if not os.path.exists(source): + raise DistlibException("file '%r' does not exist" % + os.path.abspath(source)) + if not os.path.exists(target): + return True + + return os.stat(source).st_mtime > os.stat(target).st_mtime + + def copy_file(self, infile, outfile, check=True): + """Copy a file respecting dry-run and force flags. 
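+        # --- Editor's sketch: composing the context-manager helpers defined
+        # above (illustration only):
+        #
+        #   >>> from distlib.util import tempdir, chdir
+        #   >>> with tempdir() as td:        # made via mkdtemp, rmtree'd after
+        #   ...     with chdir(td):          # cwd restored on exit
+        #   ...         open('scratch.txt', 'w').close()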
+ """ + self.ensure_dir(os.path.dirname(outfile)) + logger.info('Copying %s to %s', infile, outfile) + if not self.dry_run: + msg = None + if check: + if os.path.islink(outfile): + msg = '%s is a symlink' % outfile + elif os.path.exists(outfile) and not os.path.isfile(outfile): + msg = '%s is a non-regular file' % outfile + if msg: + raise ValueError(msg + ' which would be overwritten') + shutil.copyfile(infile, outfile) + self.record_as_written(outfile) + + def copy_stream(self, instream, outfile, encoding=None): + assert not os.path.isdir(outfile) + self.ensure_dir(os.path.dirname(outfile)) + logger.info('Copying stream %s to %s', instream, outfile) + if not self.dry_run: + if encoding is None: + outstream = open(outfile, 'wb') + else: + outstream = codecs.open(outfile, 'w', encoding=encoding) + try: + shutil.copyfileobj(instream, outstream) + finally: + outstream.close() + self.record_as_written(outfile) + + def write_binary_file(self, path, data): + self.ensure_dir(os.path.dirname(path)) + if not self.dry_run: + if os.path.exists(path): + os.remove(path) + with open(path, 'wb') as f: + f.write(data) + self.record_as_written(path) + + def write_text_file(self, path, data, encoding): + self.write_binary_file(path, data.encode(encoding)) + + def set_mode(self, bits, mask, files): + if os.name == 'posix' or (os.name == 'java' and os._name == 'posix'): + # Set the executable bits (owner, group, and world) on + # all the files specified. + for f in files: + if self.dry_run: + logger.info("changing mode of %s", f) + else: + mode = (os.stat(f).st_mode | bits) & mask + logger.info("changing mode of %s to %o", f, mode) + os.chmod(f, mode) + + set_executable_mode = lambda s, f: s.set_mode(0o555, 0o7777, f) + + def ensure_dir(self, path): + path = os.path.abspath(path) + if path not in self.ensured and not os.path.exists(path): + self.ensured.add(path) + d, f = os.path.split(path) + self.ensure_dir(d) + logger.info('Creating %s' % path) + if not self.dry_run: + os.mkdir(path) + if self.record: + self.dirs_created.add(path) + + def byte_compile(self, path, optimize=False, force=False, prefix=None, hashed_invalidation=False): + dpath = cache_from_source(path, not optimize) + logger.info('Byte-compiling %s to %s', path, dpath) + if not self.dry_run: + if force or self.newer(path, dpath): + if not prefix: + diagpath = None + else: + assert path.startswith(prefix) + diagpath = path[len(prefix):] + compile_kwargs = {} + if hashed_invalidation and hasattr(py_compile, 'PycInvalidationMode'): + compile_kwargs['invalidation_mode'] = py_compile.PycInvalidationMode.CHECKED_HASH + py_compile.compile(path, dpath, diagpath, True, **compile_kwargs) # raise error + self.record_as_written(dpath) + return dpath + + def ensure_removed(self, path): + if os.path.exists(path): + if os.path.isdir(path) and not os.path.islink(path): + logger.debug('Removing directory tree at %s', path) + if not self.dry_run: + shutil.rmtree(path) + if self.record: + if path in self.dirs_created: + self.dirs_created.remove(path) + else: + if os.path.islink(path): + s = 'link' + else: + s = 'file' + logger.debug('Removing %s %s', s, path) + if not self.dry_run: + os.remove(path) + if self.record: + if path in self.files_written: + self.files_written.remove(path) + + def is_writable(self, path): + result = False + while not result: + if os.path.exists(path): + result = os.access(path, os.W_OK) + break + parent = os.path.dirname(path) + if parent == path: + break + path = parent + return result + + def commit(self): + """ + Commit recorded 
changes, turn off recording, return
+        changes.
+        """
+        assert self.record
+        result = self.files_written, self.dirs_created
+        self._init_record()
+        return result
+
+    def rollback(self):
+        if not self.dry_run:
+            for f in list(self.files_written):
+                if os.path.exists(f):
+                    os.remove(f)
+            # dirs should all be empty now, except perhaps for
+            # __pycache__ subdirs
+            # reverse so that subdirs appear before their parents
+            dirs = sorted(self.dirs_created, reverse=True)
+            for d in dirs:
+                flist = os.listdir(d)
+                if flist:
+                    assert flist == ['__pycache__']
+                    sd = os.path.join(d, flist[0])
+                    os.rmdir(sd)
+                os.rmdir(d)     # should fail if non-empty
+        self._init_record()
+
+def resolve(module_name, dotted_path):
+    if module_name in sys.modules:
+        mod = sys.modules[module_name]
+    else:
+        mod = __import__(module_name)
+    if dotted_path is None:
+        result = mod
+    else:
+        parts = dotted_path.split('.')
+        result = getattr(mod, parts.pop(0))
+        for p in parts:
+            result = getattr(result, p)
+    return result
+
+
+class ExportEntry(object):
+    def __init__(self, name, prefix, suffix, flags):
+        self.name = name
+        self.prefix = prefix
+        self.suffix = suffix
+        self.flags = flags
+
+    @cached_property
+    def value(self):
+        return resolve(self.prefix, self.suffix)
+
+    def __repr__(self):  # pragma: no cover
+        return '<ExportEntry %s = %s:%s %s>' % (self.name, self.prefix,
+                                                self.suffix, self.flags)
+
+    def __eq__(self, other):
+        if not isinstance(other, ExportEntry):
+            result = False
+        else:
+            result = (self.name == other.name and
+                      self.prefix == other.prefix and
+                      self.suffix == other.suffix and
+                      self.flags == other.flags)
+        return result
+
+    __hash__ = object.__hash__
+
+
+ENTRY_RE = re.compile(r'''(?P<name>(\w|[-.+])+)
+                      \s*=\s*(?P<callable>(\w+)([:\.]\w+)*)
+                      \s*(\[\s*(?P<flags>[\w-]+(=\w+)?(,\s*\w+(=\w+)?)*)\s*\])?
+                      ''', re.VERBOSE)
+
+def get_export_entry(specification):
+    m = ENTRY_RE.search(specification)
+    if not m:
+        result = None
+        if '[' in specification or ']' in specification:
+            raise DistlibException("Invalid specification "
+                                   "'%s'" % specification)
+    else:
+        d = m.groupdict()
+        name = d['name']
+        path = d['callable']
+        colons = path.count(':')
+        if colons == 0:
+            prefix, suffix = path, None
+        else:
+            if colons != 1:
+                raise DistlibException("Invalid specification "
+                                       "'%s'" % specification)
+            prefix, suffix = path.split(':')
+        flags = d['flags']
+        if flags is None:
+            if '[' in specification or ']' in specification:
+                raise DistlibException("Invalid specification "
+                                       "'%s'" % specification)
+            flags = []
+        else:
+            flags = [f.strip() for f in flags.split(',')]
+        result = ExportEntry(name, prefix, suffix, flags)
+    return result
+
+
+def get_cache_base(suffix=None):
+    """
+    Return the default base location for distlib caches. If the directory does
+    not exist, it is created. Use the suffix provided for the base directory,
+    and default to '.distlib' if it isn't provided.
+
+    On Windows, if LOCALAPPDATA is defined in the environment, then it is
+    assumed to be a directory, and will be the parent directory of the result.
+    On POSIX, and on Windows if LOCALAPPDATA is not defined, the user's home
+    directory - using os.expanduser('~') - will be the parent directory of
+    the result.
+
+    The result is just the directory '.distlib' in the parent directory as
+    determined above, or with the name specified with ``suffix``.
+    """
+    if suffix is None:
+        suffix = '.distlib'
+    if os.name == 'nt' and 'LOCALAPPDATA' in os.environ:
+        result = os.path.expandvars('$localappdata')
+    else:
+        # Assume posix, or old Windows
+        result = os.path.expanduser('~')
+    # we use 'isdir' instead of 'exists', because we want to
+    # fail if there's a file with that name
+    if os.path.isdir(result):
+        usable = os.access(result, os.W_OK)
+        if not usable:
+            logger.warning('Directory exists but is not writable: %s', result)
+    else:
+        try:
+            os.makedirs(result)
+            usable = True
+        except OSError:
+            logger.warning('Unable to create %s', result, exc_info=True)
+            usable = False
+    if not usable:
+        result = tempfile.mkdtemp()
+        logger.warning('Default location unusable, using %s', result)
+    return os.path.join(result, suffix)
+
+
+def path_to_cache_dir(path):
+    """
+    Convert an absolute path to a directory name for use in a cache.
+
+    The algorithm used is:
+
+    #. On Windows, any ``':'`` in the drive is replaced with ``'---'``.
+    #. Any occurrence of ``os.sep`` is replaced with ``'--'``.
+    #. ``'.cache'`` is appended.
+    """
+    d, p = os.path.splitdrive(os.path.abspath(path))
+    if d:
+        d = d.replace(':', '---')
+    p = p.replace(os.sep, '--')
+    return d + p + '.cache'
+
+
+def ensure_slash(s):
+    if not s.endswith('/'):
+        return s + '/'
+    return s
+
+
+def parse_credentials(netloc):
+    username = password = None
+    if '@' in netloc:
+        prefix, netloc = netloc.rsplit('@', 1)
+        if ':' not in prefix:
+            username = prefix
+        else:
+            username, password = prefix.split(':', 1)
+    if username:
+        username = unquote(username)
+    if password:
+        password = unquote(password)
+    return username, password, netloc
+
+
+def get_process_umask():
+    result = os.umask(0o22)
+    os.umask(result)
+    return result
+
+def is_string_sequence(seq):
+    result = True
+    i = None
+    for i, s in enumerate(seq):
+        if not isinstance(s, string_types):
+            result = False
+            break
+    assert i is not None
+    return result
+
+PROJECT_NAME_AND_VERSION = re.compile('([a-z0-9_]+([.-][a-z_][a-z0-9_]*)*)-'
+                                      '([a-z0-9_.+-]+)', re.I)
+PYTHON_VERSION = re.compile(r'-py(\d\.?\d?)')
+
+
+def split_filename(filename, project_name=None):
+    """
+    Extract name, version, python version from a filename (no extension)
+
+    Return name, version, pyver or None
+    """
+    result = None
+    pyver = None
+    filename = unquote(filename).replace(' ', '-')
+    m = PYTHON_VERSION.search(filename)
+    if m:
+        pyver = m.group(1)
+        filename = filename[:m.start()]
+    if project_name and len(filename) > len(project_name) + 1:
+        m = re.match(re.escape(project_name) + r'\b', filename)
+        if m:
+            n = m.end()
+            result = filename[:n], filename[n + 1:], pyver
+    if result is None:
+        m = PROJECT_NAME_AND_VERSION.match(filename)
+        if m:
+            result = m.group(1), m.group(3), pyver
+    return result
+
+# Allow spaces in name because of legacy dists like "Twisted Core"
+NAME_VERSION_RE = re.compile(r'(?P<name>[\w .-]+)\s*'
+                             r'\(\s*(?P<ver>[^\s)]+)\)$')
+
+def parse_name_and_version(p):
+    """
+    A utility method used to get name and version from a string.
+
+    From e.g. a Provides-Dist value.
+
+    :param p: A value in a form 'foo (1.0)'
+    :return: The name and version as a tuple.
+ """ + m = NAME_VERSION_RE.match(p) + if not m: + raise DistlibException('Ill-formed name/version string: \'%s\'' % p) + d = m.groupdict() + return d['name'].strip().lower(), d['ver'] + +def get_extras(requested, available): + result = set() + requested = set(requested or []) + available = set(available or []) + if '*' in requested: + requested.remove('*') + result |= available + for r in requested: + if r == '-': + result.add(r) + elif r.startswith('-'): + unwanted = r[1:] + if unwanted not in available: + logger.warning('undeclared extra: %s' % unwanted) + if unwanted in result: + result.remove(unwanted) + else: + if r not in available: + logger.warning('undeclared extra: %s' % r) + result.add(r) + return result +# +# Extended metadata functionality +# + +def _get_external_data(url): + result = {} + try: + # urlopen might fail if it runs into redirections, + # because of Python issue #13696. Fixed in locators + # using a custom redirect handler. + resp = urlopen(url) + headers = resp.info() + ct = headers.get('Content-Type') + if not ct.startswith('application/json'): + logger.debug('Unexpected response for JSON request: %s', ct) + else: + reader = codecs.getreader('utf-8')(resp) + #data = reader.read().decode('utf-8') + #result = json.loads(data) + result = json.load(reader) + except Exception as e: + logger.exception('Failed to get external data for %s: %s', url, e) + return result + +_external_data_base_url = 'https://www.red-dove.com/pypi/projects/' + +def get_project_data(name): + url = '%s/%s/project.json' % (name[0].upper(), name) + url = urljoin(_external_data_base_url, url) + result = _get_external_data(url) + return result + +def get_package_data(name, version): + url = '%s/%s/package-%s.json' % (name[0].upper(), name, version) + url = urljoin(_external_data_base_url, url) + return _get_external_data(url) + + +class Cache(object): + """ + A class implementing a cache for resources that need to live in the file system + e.g. shared libraries. This class was moved from resources to here because it + could be used by other modules, e.g. the wheel module. + """ + + def __init__(self, base): + """ + Initialise an instance. + + :param base: The base directory where the cache should be located. + """ + # we use 'isdir' instead of 'exists', because we want to + # fail if there's a file with that name + if not os.path.isdir(base): # pragma: no cover + os.makedirs(base) + if (os.stat(base).st_mode & 0o77) != 0: + logger.warning('Directory \'%s\' is not private', base) + self.base = os.path.abspath(os.path.normpath(base)) + + def prefix_to_dir(self, prefix): + """ + Converts a resource prefix to a directory name in the cache. + """ + return path_to_cache_dir(prefix) + + def clear(self): + """ + Clear the cache. + """ + not_removed = [] + for fn in os.listdir(self.base): + fn = os.path.join(self.base, fn) + try: + if os.path.islink(fn) or os.path.isfile(fn): + os.remove(fn) + elif os.path.isdir(fn): + shutil.rmtree(fn) + except Exception: + not_removed.append(fn) + return not_removed + + +class EventMixin(object): + """ + A very simple publish/subscribe system. + """ + def __init__(self): + self._subscribers = {} + + def add(self, event, subscriber, append=True): + """ + Add a subscriber for an event. + + :param event: The name of an event. + :param subscriber: The subscriber to be added (and called when the + event is published). + :param append: Whether to append or prepend the subscriber to an + existing subscriber list for the event. 
+ """ + subs = self._subscribers + if event not in subs: + subs[event] = deque([subscriber]) + else: + sq = subs[event] + if append: + sq.append(subscriber) + else: + sq.appendleft(subscriber) + + def remove(self, event, subscriber): + """ + Remove a subscriber for an event. + + :param event: The name of an event. + :param subscriber: The subscriber to be removed. + """ + subs = self._subscribers + if event not in subs: + raise ValueError('No subscribers: %r' % event) + subs[event].remove(subscriber) + + def get_subscribers(self, event): + """ + Return an iterator for the subscribers for an event. + :param event: The event to return subscribers for. + """ + return iter(self._subscribers.get(event, ())) + + def publish(self, event, *args, **kwargs): + """ + Publish a event and return a list of values returned by its + subscribers. + + :param event: The event to publish. + :param args: The positional arguments to pass to the event's + subscribers. + :param kwargs: The keyword arguments to pass to the event's + subscribers. + """ + result = [] + for subscriber in self.get_subscribers(event): + try: + value = subscriber(event, *args, **kwargs) + except Exception: + logger.exception('Exception during event publication') + value = None + result.append(value) + logger.debug('publish %s: args = %s, kwargs = %s, result = %s', + event, args, kwargs, result) + return result + +# +# Simple sequencing +# +class Sequencer(object): + def __init__(self): + self._preds = {} + self._succs = {} + self._nodes = set() # nodes with no preds/succs + + def add_node(self, node): + self._nodes.add(node) + + def remove_node(self, node, edges=False): + if node in self._nodes: + self._nodes.remove(node) + if edges: + for p in set(self._preds.get(node, ())): + self.remove(p, node) + for s in set(self._succs.get(node, ())): + self.remove(node, s) + # Remove empties + for k, v in list(self._preds.items()): + if not v: + del self._preds[k] + for k, v in list(self._succs.items()): + if not v: + del self._succs[k] + + def add(self, pred, succ): + assert pred != succ + self._preds.setdefault(succ, set()).add(pred) + self._succs.setdefault(pred, set()).add(succ) + + def remove(self, pred, succ): + assert pred != succ + try: + preds = self._preds[succ] + succs = self._succs[pred] + except KeyError: # pragma: no cover + raise ValueError('%r not a successor of anything' % succ) + try: + preds.remove(pred) + succs.remove(succ) + except KeyError: # pragma: no cover + raise ValueError('%r not a successor of %r' % (succ, pred)) + + def is_step(self, step): + return (step in self._preds or step in self._succs or + step in self._nodes) + + def get_steps(self, final): + if not self.is_step(final): + raise ValueError('Unknown: %r' % final) + result = [] + todo = [] + seen = set() + todo.append(final) + while todo: + step = todo.pop(0) + if step in seen: + # if a step was already seen, + # move it to the end (so it will appear earlier + # when reversed on return) ... 
but not for the + # final step, as that would be confusing for + # users + if step != final: + result.remove(step) + result.append(step) + else: + seen.add(step) + result.append(step) + preds = self._preds.get(step, ()) + todo.extend(preds) + return reversed(result) + + @property + def strong_connections(self): + #http://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm + index_counter = [0] + stack = [] + lowlinks = {} + index = {} + result = [] + + graph = self._succs + + def strongconnect(node): + # set the depth index for this node to the smallest unused index + index[node] = index_counter[0] + lowlinks[node] = index_counter[0] + index_counter[0] += 1 + stack.append(node) + + # Consider successors + try: + successors = graph[node] + except Exception: + successors = [] + for successor in successors: + if successor not in lowlinks: + # Successor has not yet been visited + strongconnect(successor) + lowlinks[node] = min(lowlinks[node],lowlinks[successor]) + elif successor in stack: + # the successor is in the stack and hence in the current + # strongly connected component (SCC) + lowlinks[node] = min(lowlinks[node],index[successor]) + + # If `node` is a root node, pop the stack and generate an SCC + if lowlinks[node] == index[node]: + connected_component = [] + + while True: + successor = stack.pop() + connected_component.append(successor) + if successor == node: break + component = tuple(connected_component) + # storing the result + result.append(component) + + for node in graph: + if node not in lowlinks: + strongconnect(node) + + return result + + @property + def dot(self): + result = ['digraph G {'] + for succ in self._preds: + preds = self._preds[succ] + for pred in preds: + result.append(' %s -> %s;' % (pred, succ)) + for node in self._nodes: + result.append(' %s;' % node) + result.append('}') + return '\n'.join(result) + +# +# Unarchiving functionality for zip, tar, tgz, tbz, whl +# + +ARCHIVE_EXTENSIONS = ('.tar.gz', '.tar.bz2', '.tar', '.zip', + '.tgz', '.tbz', '.whl') + +def unarchive(archive_filename, dest_dir, format=None, check=True): + + def check_path(path): + if not isinstance(path, text_type): + path = path.decode('utf-8') + p = os.path.abspath(os.path.join(dest_dir, path)) + if not p.startswith(dest_dir) or p[plen] != os.sep: + raise ValueError('path outside destination: %r' % p) + + dest_dir = os.path.abspath(dest_dir) + plen = len(dest_dir) + archive = None + if format is None: + if archive_filename.endswith(('.zip', '.whl')): + format = 'zip' + elif archive_filename.endswith(('.tar.gz', '.tgz')): + format = 'tgz' + mode = 'r:gz' + elif archive_filename.endswith(('.tar.bz2', '.tbz')): + format = 'tbz' + mode = 'r:bz2' + elif archive_filename.endswith('.tar'): + format = 'tar' + mode = 'r' + else: # pragma: no cover + raise ValueError('Unknown format for %r' % archive_filename) + try: + if format == 'zip': + archive = ZipFile(archive_filename, 'r') + if check: + names = archive.namelist() + for name in names: + check_path(name) + else: + archive = tarfile.open(archive_filename, mode) + if check: + names = archive.getnames() + for name in names: + check_path(name) + if format != 'zip' and sys.version_info[0] < 3: + # See Python issue 17153. If the dest path contains Unicode, + # tarfile extraction fails on Python 2.x if a member path name + # contains non-ASCII characters - it leads to an implicit + # bytes -> unicode conversion using ASCII to decode. 
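Since `Sequencer` is the least self-explanatory utility in this stretch, here is a minimal usage sketch; the step names are hypothetical, and it assumes the vendored `distlib` package is importable:

```python
from distlib.util import Sequencer

# Each add(pred, succ) records that pred must run before succ.
seq = Sequencer()
seq.add('fetch', 'compile')
seq.add('compile', 'link')
seq.add('link', 'package')

# get_steps() walks predecessors back from the final step and reverses,
# yielding an order in which every step follows its prerequisites.
print(list(seq.get_steps('package')))
# -> ['fetch', 'compile', 'link', 'package']
```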
+ for tarinfo in archive.getmembers(): + if not isinstance(tarinfo.name, text_type): + tarinfo.name = tarinfo.name.decode('utf-8') + archive.extractall(dest_dir) + + finally: + if archive: + archive.close() + + +def zip_dir(directory): + """zip a directory tree into a BytesIO object""" + result = io.BytesIO() + dlen = len(directory) + with ZipFile(result, "w") as zf: + for root, dirs, files in os.walk(directory): + for name in files: + full = os.path.join(root, name) + rel = root[dlen:] + dest = os.path.join(rel, name) + zf.write(full, dest) + return result + +# +# Simple progress bar +# + +UNITS = ('', 'K', 'M', 'G','T','P') + + +class Progress(object): + unknown = 'UNKNOWN' + + def __init__(self, minval=0, maxval=100): + assert maxval is None or maxval >= minval + self.min = self.cur = minval + self.max = maxval + self.started = None + self.elapsed = 0 + self.done = False + + def update(self, curval): + assert self.min <= curval + assert self.max is None or curval <= self.max + self.cur = curval + now = time.time() + if self.started is None: + self.started = now + else: + self.elapsed = now - self.started + + def increment(self, incr): + assert incr >= 0 + self.update(self.cur + incr) + + def start(self): + self.update(self.min) + return self + + def stop(self): + if self.max is not None: + self.update(self.max) + self.done = True + + @property + def maximum(self): + return self.unknown if self.max is None else self.max + + @property + def percentage(self): + if self.done: + result = '100 %' + elif self.max is None: + result = ' ?? %' + else: + v = 100.0 * (self.cur - self.min) / (self.max - self.min) + result = '%3d %%' % v + return result + + def format_duration(self, duration): + if (duration <= 0) and self.max is None or self.cur == self.min: + result = '??:??:??' 
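A quick usage sketch for the `Progress` class being defined here (its `ETA` and `speed` properties follow just below); the byte counts are made up, and this assumes the vendored `distlib` is importable:

```python
import time
from distlib.util import Progress

p = Progress(maxval=500).start()
for _ in range(5):
    time.sleep(0.1)        # stand-in for real work
    p.increment(100)       # e.g. 100 bytes transferred
    print(p.percentage, p.ETA, p.speed)
p.stop()
assert p.done and p.percentage == '100 %'
```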
+ #elif duration < 1: + # result = '--:--:--' + else: + result = time.strftime('%H:%M:%S', time.gmtime(duration)) + return result + + @property + def ETA(self): + if self.done: + prefix = 'Done' + t = self.elapsed + #import pdb; pdb.set_trace() + else: + prefix = 'ETA ' + if self.max is None: + t = -1 + elif self.elapsed == 0 or (self.cur == self.min): + t = 0 + else: + #import pdb; pdb.set_trace() + t = float(self.max - self.min) + t /= self.cur - self.min + t = (t - 1) * self.elapsed + return '%s: %s' % (prefix, self.format_duration(t)) + + @property + def speed(self): + if self.elapsed == 0: + result = 0.0 + else: + result = (self.cur - self.min) / self.elapsed + for unit in UNITS: + if result < 1000: + break + result /= 1000.0 + return '%d %sB/s' % (result, unit) + +# +# Glob functionality +# + +RICH_GLOB = re.compile(r'\{([^}]*)\}') +_CHECK_RECURSIVE_GLOB = re.compile(r'[^/\\,{]\*\*|\*\*[^/\\,}]') +_CHECK_MISMATCH_SET = re.compile(r'^[^{]*\}|\{[^}]*$') + + +def iglob(path_glob): + """Extended globbing function that supports ** and {opt1,opt2,opt3}.""" + if _CHECK_RECURSIVE_GLOB.search(path_glob): + msg = """invalid glob %r: recursive glob "**" must be used alone""" + raise ValueError(msg % path_glob) + if _CHECK_MISMATCH_SET.search(path_glob): + msg = """invalid glob %r: mismatching set marker '{' or '}'""" + raise ValueError(msg % path_glob) + return _iglob(path_glob) + + +def _iglob(path_glob): + rich_path_glob = RICH_GLOB.split(path_glob, 1) + if len(rich_path_glob) > 1: + assert len(rich_path_glob) == 3, rich_path_glob + prefix, set, suffix = rich_path_glob + for item in set.split(','): + for path in _iglob(''.join((prefix, item, suffix))): + yield path + else: + if '**' not in path_glob: + for item in std_iglob(path_glob): + yield item + else: + prefix, radical = path_glob.split('**', 1) + if prefix == '': + prefix = '.' 
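A brief sketch of the extended globbing that `iglob` above provides (the `_iglob` body continues below); the directory layout implied by the pattern is hypothetical:

```python
from distlib.util import iglob

# '**' recurses into subdirectories; '{...}' expands the listed
# alternatives, so this matches .py and .txt files under src/.
for path in iglob('src/**/*.{py,txt}'):
    print(path)
```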
+ if radical == '': + radical = '*' + else: + # we support both + radical = radical.lstrip('/') + radical = radical.lstrip('\\') + for path, dir, files in os.walk(prefix): + path = os.path.normpath(path) + for fn in _iglob(os.path.join(path, radical)): + yield fn + +if ssl: + from .compat import (HTTPSHandler as BaseHTTPSHandler, match_hostname, + CertificateError) + + +# +# HTTPSConnection which verifies certificates/matches domains +# + + class HTTPSConnection(httplib.HTTPSConnection): + ca_certs = None # set this to the path to the certs file (.pem) + check_domain = True # only used if ca_certs is not None + + # noinspection PyPropertyAccess + def connect(self): + sock = socket.create_connection((self.host, self.port), self.timeout) + if getattr(self, '_tunnel_host', False): + self.sock = sock + self._tunnel() + + context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) + if hasattr(ssl, 'OP_NO_SSLv2'): + context.options |= ssl.OP_NO_SSLv2 + if self.cert_file: + context.load_cert_chain(self.cert_file, self.key_file) + kwargs = {} + if self.ca_certs: + context.verify_mode = ssl.CERT_REQUIRED + context.load_verify_locations(cafile=self.ca_certs) + if getattr(ssl, 'HAS_SNI', False): + kwargs['server_hostname'] = self.host + + self.sock = context.wrap_socket(sock, **kwargs) + if self.ca_certs and self.check_domain: + try: + match_hostname(self.sock.getpeercert(), self.host) + logger.debug('Host verified: %s', self.host) + except CertificateError: # pragma: no cover + self.sock.shutdown(socket.SHUT_RDWR) + self.sock.close() + raise + + class HTTPSHandler(BaseHTTPSHandler): + def __init__(self, ca_certs, check_domain=True): + BaseHTTPSHandler.__init__(self) + self.ca_certs = ca_certs + self.check_domain = check_domain + + def _conn_maker(self, *args, **kwargs): + """ + This is called to create a connection instance. Normally you'd + pass a connection class to do_open, but it doesn't actually check for + a class, and just expects a callable. As long as we behave just as a + constructor would have, we should be OK. If it ever changes so that + we *must* pass a class, we'll create an UnsafeHTTPSConnection class + which just sets check_domain to False in the class definition, and + choose which one to pass to do_open. + """ + result = HTTPSConnection(*args, **kwargs) + if self.ca_certs: + result.ca_certs = self.ca_certs + result.check_domain = self.check_domain + return result + + def https_open(self, req): + try: + return self.do_open(self._conn_maker, req) + except URLError as e: + if 'certificate verify failed' in str(e.reason): + raise CertificateError('Unable to verify server certificate ' + 'for %s' % req.host) + else: + raise + + # + # To prevent against mixing HTTP traffic with HTTPS (examples: A Man-In-The- + # Middle proxy using HTTP listens on port 443, or an index mistakenly serves + # HTML containing a http://xyz link when it should be https://xyz), + # you can use the following handler class, which does not allow HTTP traffic. + # + # It works by inheriting from HTTPHandler - so build_opener won't add a + # handler for HTTP itself. 
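Before the HTTP-rejecting variant that the comment above describes, a hedged sketch of plugging the verifying `HTTPSHandler` into a standard opener; the CA-bundle path is a placeholder, and the classes only exist when `ssl` is available:

```python
import urllib.request
from distlib.util import HTTPSHandler  # defined only if ssl is importable

# Verify server certificates against a specific CA bundle and check
# that the certificate matches the requested host.
opener = urllib.request.build_opener(
    HTTPSHandler('/path/to/ca-bundle.pem'))  # hypothetical bundle path
with opener.open('https://example.com/') as resp:
    print(resp.status)
```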
+ # + class HTTPSOnlyHandler(HTTPSHandler, HTTPHandler): + def http_open(self, req): + raise URLError('Unexpected HTTP request on what should be a secure ' + 'connection: %s' % req) + +# +# XML-RPC with timeouts +# +class Transport(xmlrpclib.Transport): + def __init__(self, timeout, use_datetime=0): + self.timeout = timeout + xmlrpclib.Transport.__init__(self, use_datetime) + + def make_connection(self, host): + h, eh, x509 = self.get_host_info(host) + if not self._connection or host != self._connection[0]: + self._extra_headers = eh + self._connection = host, httplib.HTTPConnection(h) + return self._connection[1] + +if ssl: + class SafeTransport(xmlrpclib.SafeTransport): + def __init__(self, timeout, use_datetime=0): + self.timeout = timeout + xmlrpclib.SafeTransport.__init__(self, use_datetime) + + def make_connection(self, host): + h, eh, kwargs = self.get_host_info(host) + if not kwargs: + kwargs = {} + kwargs['timeout'] = self.timeout + if not self._connection or host != self._connection[0]: + self._extra_headers = eh + self._connection = host, httplib.HTTPSConnection(h, None, + **kwargs) + return self._connection[1] + + +class ServerProxy(xmlrpclib.ServerProxy): + def __init__(self, uri, **kwargs): + self.timeout = timeout = kwargs.pop('timeout', None) + # The above classes only come into play if a timeout + # is specified + if timeout is not None: + # scheme = splittype(uri) # deprecated as of Python 3.8 + scheme = urlparse(uri)[0] + use_datetime = kwargs.get('use_datetime', 0) + if scheme == 'https': + tcls = SafeTransport + else: + tcls = Transport + kwargs['transport'] = t = tcls(timeout, use_datetime=use_datetime) + self.transport = t + xmlrpclib.ServerProxy.__init__(self, uri, **kwargs) + +# +# CSV functionality. This is provided because on 2.x, the csv module can't +# handle Unicode. However, we need to deal with Unicode in e.g. RECORD files. +# + +def _csv_open(fn, mode, **kwargs): + if sys.version_info[0] < 3: + mode += 'b' + else: + kwargs['newline'] = '' + # Python 3 determines encoding from locale. 
Force 'utf-8' + # file encoding to match other forced utf-8 encoding + kwargs['encoding'] = 'utf-8' + return open(fn, mode, **kwargs) + + +class CSVBase(object): + defaults = { + 'delimiter': str(','), # The strs are used because we need native + 'quotechar': str('"'), # str in the csv API (2.x won't take + 'lineterminator': str('\n') # Unicode) + } + + def __enter__(self): + return self + + def __exit__(self, *exc_info): + self.stream.close() + + +class CSVReader(CSVBase): + def __init__(self, **kwargs): + if 'stream' in kwargs: + stream = kwargs['stream'] + if sys.version_info[0] >= 3: + # needs to be a text stream + stream = codecs.getreader('utf-8')(stream) + self.stream = stream + else: + self.stream = _csv_open(kwargs['path'], 'r') + self.reader = csv.reader(self.stream, **self.defaults) + + def __iter__(self): + return self + + def next(self): + result = next(self.reader) + if sys.version_info[0] < 3: + for i, item in enumerate(result): + if not isinstance(item, text_type): + result[i] = item.decode('utf-8') + return result + + __next__ = next + +class CSVWriter(CSVBase): + def __init__(self, fn, **kwargs): + self.stream = _csv_open(fn, 'w') + self.writer = csv.writer(self.stream, **self.defaults) + + def writerow(self, row): + if sys.version_info[0] < 3: + r = [] + for item in row: + if isinstance(item, text_type): + item = item.encode('utf-8') + r.append(item) + row = r + self.writer.writerow(row) + +# +# Configurator functionality +# + +class Configurator(BaseConfigurator): + + value_converters = dict(BaseConfigurator.value_converters) + value_converters['inc'] = 'inc_convert' + + def __init__(self, config, base=None): + super(Configurator, self).__init__(config) + self.base = base or os.getcwd() + + def configure_custom(self, config): + def convert(o): + if isinstance(o, (list, tuple)): + result = type(o)([convert(i) for i in o]) + elif isinstance(o, dict): + if '()' in o: + result = self.configure_custom(o) + else: + result = {} + for k in o: + result[k] = convert(o[k]) + else: + result = self.convert(o) + return result + + c = config.pop('()') + if not callable(c): + c = self.resolve(c) + props = config.pop('.', None) + # Check for valid identifiers + args = config.pop('[]', ()) + if args: + args = tuple([convert(o) for o in args]) + items = [(k, convert(config[k])) for k in config if valid_ident(k)] + kwargs = dict(items) + result = c(*args, **kwargs) + if props: + for n, v in props.items(): + setattr(result, n, convert(v)) + return result + + def __getitem__(self, key): + result = self.config[key] + if isinstance(result, dict) and '()' in result: + self.config[key] = result = self.configure_custom(result) + return result + + def inc_convert(self, value): + """Default converter for the inc:// protocol.""" + if not os.path.isabs(value): + value = os.path.join(self.base, value) + with codecs.open(value, 'r', encoding='utf-8') as f: + result = json.load(f) + return result + + +class SubprocessMixin(object): + """ + Mixin for running subprocesses and capturing their output + """ + def __init__(self, verbose=False, progress=None): + self.verbose = verbose + self.progress = progress + + def reader(self, stream, context): + """ + Read lines from a subprocess' output stream and either pass to a progress + callable (if specified) or write progress information to sys.stderr. 
+ """ + progress = self.progress + verbose = self.verbose + while True: + s = stream.readline() + if not s: + break + if progress is not None: + progress(s, context) + else: + if not verbose: + sys.stderr.write('.') + else: + sys.stderr.write(s.decode('utf-8')) + sys.stderr.flush() + stream.close() + + def run_command(self, cmd, **kwargs): + p = subprocess.Popen(cmd, stdout=subprocess.PIPE, + stderr=subprocess.PIPE, **kwargs) + t1 = threading.Thread(target=self.reader, args=(p.stdout, 'stdout')) + t1.start() + t2 = threading.Thread(target=self.reader, args=(p.stderr, 'stderr')) + t2.start() + p.wait() + t1.join() + t2.join() + if self.progress is not None: + self.progress('done.', 'main') + elif self.verbose: + sys.stderr.write('done.\n') + return p + + +def normalize_name(name): + """Normalize a python package name a la PEP 503""" + # https://www.python.org/dev/peps/pep-0503/#normalized-names + return re.sub('[-_.]+', '-', name).lower() + +# def _get_pypirc_command(): + # """ + # Get the distutils command for interacting with PyPI configurations. + # :return: the command. + # """ + # from distutils.core import Distribution + # from distutils.config import PyPIRCCommand + # d = Distribution() + # return PyPIRCCommand(d) + +class PyPIRCFile(object): + + DEFAULT_REPOSITORY = 'https://upload.pypi.org/legacy/' + DEFAULT_REALM = 'pypi' + + def __init__(self, fn=None, url=None): + if fn is None: + fn = os.path.join(os.path.expanduser('~'), '.pypirc') + self.filename = fn + self.url = url + + def read(self): + result = {} + + if os.path.exists(self.filename): + repository = self.url or self.DEFAULT_REPOSITORY + + config = configparser.RawConfigParser() + config.read(self.filename) + sections = config.sections() + if 'distutils' in sections: + # let's get the list of servers + index_servers = config.get('distutils', 'index-servers') + _servers = [server.strip() for server in + index_servers.split('\n') + if server.strip() != ''] + if _servers == []: + # nothing set, let's try to get the default pypi + if 'pypi' in sections: + _servers = ['pypi'] + else: + for server in _servers: + result = {'server': server} + result['username'] = config.get(server, 'username') + + # optional params + for key, default in (('repository', self.DEFAULT_REPOSITORY), + ('realm', self.DEFAULT_REALM), + ('password', None)): + if config.has_option(server, key): + result[key] = config.get(server, key) + else: + result[key] = default + + # work around people having "repository" for the "pypi" + # section of their config set to the HTTP (rather than + # HTTPS) URL + if (server == 'pypi' and + repository in (self.DEFAULT_REPOSITORY, 'pypi')): + result['repository'] = self.DEFAULT_REPOSITORY + elif (result['server'] != repository and + result['repository'] != repository): + result = {} + elif 'server-login' in sections: + # old format + server = 'server-login' + if config.has_option(server, 'repository'): + repository = config.get(server, 'repository') + else: + repository = self.DEFAULT_REPOSITORY + result = { + 'username': config.get(server, 'username'), + 'password': config.get(server, 'password'), + 'repository': repository, + 'server': server, + 'realm': self.DEFAULT_REALM + } + return result + + def update(self, username, password): + # import pdb; pdb.set_trace() + config = configparser.RawConfigParser() + fn = self.filename + config.read(fn) + if not config.has_section('pypi'): + config.add_section('pypi') + config.set('pypi', 'username', username) + config.set('pypi', 'password', password) + with open(fn, 'w') as f: + 
config.write(f) + +def _load_pypirc(index): + """ + Read the PyPI access configuration as supported by distutils. + """ + return PyPIRCFile(url=index.url).read() + +def _store_pypirc(index): + PyPIRCFile().update(index.username, index.password) + +# +# get_platform()/get_host_platform() copied from Python 3.10.a0 source, with some minor +# tweaks +# + +def get_host_platform(): + """Return a string that identifies the current platform. This is used mainly to + distinguish platform-specific build directories and platform-specific built + distributions. Typically includes the OS name and version and the + architecture (as supplied by 'os.uname()'), although the exact information + included depends on the OS; eg. on Linux, the kernel version isn't + particularly important. + + Examples of returned values: + linux-i586 + linux-alpha (?) + solaris-2.6-sun4u + + Windows will return one of: + win-amd64 (64bit Windows on AMD64 (aka x86_64, Intel64, EM64T, etc) + win32 (all others - specifically, sys.platform is returned) + + For other non-POSIX platforms, currently just returns 'sys.platform'. + + """ + if os.name == 'nt': + if 'amd64' in sys.version.lower(): + return 'win-amd64' + if '(arm)' in sys.version.lower(): + return 'win-arm32' + if '(arm64)' in sys.version.lower(): + return 'win-arm64' + return sys.platform + + # Set for cross builds explicitly + if "_PYTHON_HOST_PLATFORM" in os.environ: + return os.environ["_PYTHON_HOST_PLATFORM"] + + if os.name != 'posix' or not hasattr(os, 'uname'): + # XXX what about the architecture? NT is Intel or Alpha, + # Mac OS is M68k or PPC, etc. + return sys.platform + + # Try to distinguish various flavours of Unix + + (osname, host, release, version, machine) = os.uname() + + # Convert the OS name to lowercase, remove '/' characters, and translate + # spaces (for "Power Macintosh") + osname = osname.lower().replace('/', '') + machine = machine.replace(' ', '_').replace('/', '-') + + if osname[:5] == 'linux': + # At least on Linux/Intel, 'machine' is the processor -- + # i386, etc. + # XXX what about Alpha, SPARC, etc? + return "%s-%s" % (osname, machine) + + elif osname[:5] == 'sunos': + if release[0] >= '5': # SunOS 5 == Solaris 2 + osname = 'solaris' + release = '%d.%s' % (int(release[0]) - 3, release[2:]) + # We can't use 'platform.architecture()[0]' because a + # bootstrap problem. We use a dict to get an error + # if some suspicious happens. 
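The platform helpers in this stretch are easiest to sanity-check interactively; a tiny sketch, assuming the vendored `distlib` is on the path (`get_platform` itself is defined a little further down):

```python
from distlib.util import get_platform

# e.g. 'linux-x86_64' on Linux, or 'win-amd64' on 64-bit Windows Python.
print(get_platform())
```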
+ bitness = {2147483647:'32bit', 9223372036854775807:'64bit'} + machine += '.%s' % bitness[sys.maxsize] + # fall through to standard osname-release-machine representation + elif osname[:3] == 'aix': + from _aix_support import aix_platform + return aix_platform() + elif osname[:6] == 'cygwin': + osname = 'cygwin' + rel_re = re.compile (r'[\d.]+', re.ASCII) + m = rel_re.match(release) + if m: + release = m.group() + elif osname[:6] == 'darwin': + import _osx_support, distutils.sysconfig + osname, release, machine = _osx_support.get_platform_osx( + distutils.sysconfig.get_config_vars(), + osname, release, machine) + + return '%s-%s-%s' % (osname, release, machine) + + +_TARGET_TO_PLAT = { + 'x86' : 'win32', + 'x64' : 'win-amd64', + 'arm' : 'win-arm32', +} + + +def get_platform(): + if os.name != 'nt': + return get_host_platform() + cross_compilation_target = os.environ.get('VSCMD_ARG_TGT_ARCH') + if cross_compilation_target not in _TARGET_TO_PLAT: + return get_host_platform() + return _TARGET_TO_PLAT[cross_compilation_target] diff --git a/env/lib/python3.8/site-packages/distlib/version.py b/env/lib/python3.8/site-packages/distlib/version.py new file mode 100644 index 00000000..c7c8bb6f --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/version.py @@ -0,0 +1,739 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2017 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +""" +Implementation of a flexible versioning scheme providing support for PEP-440, +setuptools-compatible and semantic versioning. +""" + +import logging +import re + +from .compat import string_types +from .util import parse_requirement + +__all__ = ['NormalizedVersion', 'NormalizedMatcher', + 'LegacyVersion', 'LegacyMatcher', + 'SemanticVersion', 'SemanticMatcher', + 'UnsupportedVersionError', 'get_scheme'] + +logger = logging.getLogger(__name__) + + +class UnsupportedVersionError(ValueError): + """This is an unsupported version.""" + pass + + +class Version(object): + def __init__(self, s): + self._string = s = s.strip() + self._parts = parts = self.parse(s) + assert isinstance(parts, tuple) + assert len(parts) > 0 + + def parse(self, s): + raise NotImplementedError('please implement in a subclass') + + def _check_compatible(self, other): + if type(self) != type(other): + raise TypeError('cannot compare %r and %r' % (self, other)) + + def __eq__(self, other): + self._check_compatible(other) + return self._parts == other._parts + + def __ne__(self, other): + return not self.__eq__(other) + + def __lt__(self, other): + self._check_compatible(other) + return self._parts < other._parts + + def __gt__(self, other): + return not (self.__lt__(other) or self.__eq__(other)) + + def __le__(self, other): + return self.__lt__(other) or self.__eq__(other) + + def __ge__(self, other): + return self.__gt__(other) or self.__eq__(other) + + # See http://docs.python.org/reference/datamodel#object.__hash__ + def __hash__(self): + return hash(self._parts) + + def __repr__(self): + return "%s('%s')" % (self.__class__.__name__, self._string) + + def __str__(self): + return self._string + + @property + def is_prerelease(self): + raise NotImplementedError('Please implement in subclasses.') + + +class Matcher(object): + version_class = None + + # value is either a callable or the name of a method + _operators = { + '<': lambda v, c, p: v < c, + '>': lambda v, c, p: v > c, + '<=': lambda v, c, p: v == c or v < c, + '>=': lambda v, c, p: v == c or v > c, + '==': lambda v, c, p: v == c, + '===': lambda v, c, p: v == c, 
+ # by default, compatible => >=. + '~=': lambda v, c, p: v == c or v > c, + '!=': lambda v, c, p: v != c, + } + + # this is a method only to support alternative implementations + # via overriding + def parse_requirement(self, s): + return parse_requirement(s) + + def __init__(self, s): + if self.version_class is None: + raise ValueError('Please specify a version class') + self._string = s = s.strip() + r = self.parse_requirement(s) + if not r: + raise ValueError('Not valid: %r' % s) + self.name = r.name + self.key = self.name.lower() # for case-insensitive comparisons + clist = [] + if r.constraints: + # import pdb; pdb.set_trace() + for op, s in r.constraints: + if s.endswith('.*'): + if op not in ('==', '!='): + raise ValueError('\'.*\' not allowed for ' + '%r constraints' % op) + # Could be a partial version (e.g. for '2.*') which + # won't parse as a version, so keep it as a string + vn, prefix = s[:-2], True + # Just to check that vn is a valid version + self.version_class(vn) + else: + # Should parse as a version, so we can create an + # instance for the comparison + vn, prefix = self.version_class(s), False + clist.append((op, vn, prefix)) + self._parts = tuple(clist) + + def match(self, version): + """ + Check if the provided version matches the constraints. + + :param version: The version to match against this instance. + :type version: String or :class:`Version` instance. + """ + if isinstance(version, string_types): + version = self.version_class(version) + for operator, constraint, prefix in self._parts: + f = self._operators.get(operator) + if isinstance(f, string_types): + f = getattr(self, f) + if not f: + msg = ('%r not implemented ' + 'for %s' % (operator, self.__class__.__name__)) + raise NotImplementedError(msg) + if not f(version, constraint, prefix): + return False + return True + + @property + def exact_version(self): + result = None + if len(self._parts) == 1 and self._parts[0][0] in ('==', '==='): + result = self._parts[0][1] + return result + + def _check_compatible(self, other): + if type(self) != type(other) or self.name != other.name: + raise TypeError('cannot compare %s and %s' % (self, other)) + + def __eq__(self, other): + self._check_compatible(other) + return self.key == other.key and self._parts == other._parts + + def __ne__(self, other): + return not self.__eq__(other) + + # See http://docs.python.org/reference/datamodel#object.__hash__ + def __hash__(self): + return hash(self.key) + hash(self._parts) + + def __repr__(self): + return "%s(%r)" % (self.__class__.__name__, self._string) + + def __str__(self): + return self._string + + +PEP440_VERSION_RE = re.compile(r'^v?(\d+!)?(\d+(\.\d+)*)((a|b|c|rc)(\d+))?' + r'(\.(post)(\d+))?(\.(dev)(\d+))?' 
+ r'(\+([a-zA-Z\d]+(\.[a-zA-Z\d]+)?))?$') + + +def _pep_440_key(s): + s = s.strip() + m = PEP440_VERSION_RE.match(s) + if not m: + raise UnsupportedVersionError('Not a valid version: %s' % s) + groups = m.groups() + nums = tuple(int(v) for v in groups[1].split('.')) + while len(nums) > 1 and nums[-1] == 0: + nums = nums[:-1] + + if not groups[0]: + epoch = 0 + else: + epoch = int(groups[0][:-1]) + pre = groups[4:6] + post = groups[7:9] + dev = groups[10:12] + local = groups[13] + if pre == (None, None): + pre = () + else: + pre = pre[0], int(pre[1]) + if post == (None, None): + post = () + else: + post = post[0], int(post[1]) + if dev == (None, None): + dev = () + else: + dev = dev[0], int(dev[1]) + if local is None: + local = () + else: + parts = [] + for part in local.split('.'): + # to ensure that numeric compares as > lexicographic, avoid + # comparing them directly, but encode a tuple which ensures + # correct sorting + if part.isdigit(): + part = (1, int(part)) + else: + part = (0, part) + parts.append(part) + local = tuple(parts) + if not pre: + # either before pre-release, or final release and after + if not post and dev: + # before pre-release + pre = ('a', -1) # to sort before a0 + else: + pre = ('z',) # to sort after all pre-releases + # now look at the state of post and dev. + if not post: + post = ('_',) # sort before 'a' + if not dev: + dev = ('final',) + + #print('%s -> %s' % (s, m.groups())) + return epoch, nums, pre, post, dev, local + + +_normalized_key = _pep_440_key + + +class NormalizedVersion(Version): + """A rational version. + + Good: + 1.2 # equivalent to "1.2.0" + 1.2.0 + 1.2a1 + 1.2.3a2 + 1.2.3b1 + 1.2.3c1 + 1.2.3.4 + TODO: fill this out + + Bad: + 1 # minimum two numbers + 1.2a # release level must have a release serial + 1.2.3b + """ + def parse(self, s): + result = _normalized_key(s) + # _normalized_key loses trailing zeroes in the release + # clause, since that's needed to ensure that X.Y == X.Y.0 == X.Y.0.0 + # However, PEP 440 prefix matching needs it: for example, + # (~= 1.4.5.0) matches differently to (~= 1.4.5.0.0). + m = PEP440_VERSION_RE.match(s) # must succeed + groups = m.groups() + self._release_clause = tuple(int(v) for v in groups[1].split('.')) + return result + + PREREL_TAGS = set(['a', 'b', 'c', 'rc', 'dev']) + + @property + def is_prerelease(self): + return any(t[0] in self.PREREL_TAGS for t in self._parts if t) + + +def _match_prefix(x, y): + x = str(x) + y = str(y) + if x == y: + return True + if not x.startswith(y): + return False + n = len(y) + return x[n] == '.' + + +class NormalizedMatcher(Matcher): + version_class = NormalizedVersion + + # value is either a callable or the name of a method + _operators = { + '~=': '_match_compatible', + '<': '_match_lt', + '>': '_match_gt', + '<=': '_match_le', + '>=': '_match_ge', + '==': '_match_eq', + '===': '_match_arbitrary', + '!=': '_match_ne', + } + + def _adjust_local(self, version, constraint, prefix): + if prefix: + strip_local = '+' not in constraint and version._parts[-1] + else: + # both constraint and version are + # NormalizedVersion instances. + # If constraint does not have a local component, + # ensure the version doesn't, either. 
+ strip_local = not constraint._parts[-1] and version._parts[-1] + if strip_local: + s = version._string.split('+', 1)[0] + version = self.version_class(s) + return version, constraint + + def _match_lt(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + if version >= constraint: + return False + release_clause = constraint._release_clause + pfx = '.'.join([str(i) for i in release_clause]) + return not _match_prefix(version, pfx) + + def _match_gt(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + if version <= constraint: + return False + release_clause = constraint._release_clause + pfx = '.'.join([str(i) for i in release_clause]) + return not _match_prefix(version, pfx) + + def _match_le(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + return version <= constraint + + def _match_ge(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + return version >= constraint + + def _match_eq(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + if not prefix: + result = (version == constraint) + else: + result = _match_prefix(version, constraint) + return result + + def _match_arbitrary(self, version, constraint, prefix): + return str(version) == str(constraint) + + def _match_ne(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + if not prefix: + result = (version != constraint) + else: + result = not _match_prefix(version, constraint) + return result + + def _match_compatible(self, version, constraint, prefix): + version, constraint = self._adjust_local(version, constraint, prefix) + if version == constraint: + return True + if version < constraint: + return False +# if not prefix: +# return True + release_clause = constraint._release_clause + if len(release_clause) > 1: + release_clause = release_clause[:-1] + pfx = '.'.join([str(i) for i in release_clause]) + return _match_prefix(version, pfx) + +_REPLACEMENTS = ( + (re.compile('[.+-]$'), ''), # remove trailing puncts + (re.compile(r'^[.](\d)'), r'0.\1'), # .N -> 0.N at start + (re.compile('^[.-]'), ''), # remove leading puncts + (re.compile(r'^\((.*)\)$'), r'\1'), # remove parentheses + (re.compile(r'^v(ersion)?\s*(\d+)'), r'\2'), # remove leading v(ersion) + (re.compile(r'^r(ev)?\s*(\d+)'), r'\2'), # remove leading v(ersion) + (re.compile('[.]{2,}'), '.'), # multiple runs of '.' + (re.compile(r'\b(alfa|apha)\b'), 'alpha'), # misspelt alpha + (re.compile(r'\b(pre-alpha|prealpha)\b'), + 'pre.alpha'), # standardise + (re.compile(r'\(beta\)$'), 'beta'), # remove parentheses +) + +_SUFFIX_REPLACEMENTS = ( + (re.compile('^[:~._+-]+'), ''), # remove leading puncts + (re.compile('[,*")([\\]]'), ''), # remove unwanted chars + (re.compile('[~:+_ -]'), '.'), # replace illegal chars + (re.compile('[.]{2,}'), '.'), # multiple runs of '.' + (re.compile(r'\.$'), ''), # trailing '.' +) + +_NUMERIC_PREFIX = re.compile(r'(\d+(\.\d+)*)') + + +def _suggest_semantic_version(s): + """ + Try to suggest a semantic form for a version for which + _suggest_normalized_version couldn't come up with anything. + """ + result = s.strip().lower() + for pat, repl in _REPLACEMENTS: + result = pat.sub(repl, result) + if not result: + result = '0.0.0' + + # Now look for numeric prefix, and separate it out from + # the rest. 
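The matcher machinery above can be exercised with a short sketch; the requirement strings are illustrative, and this assumes the vendored `distlib` is importable:

```python
from distlib.version import NormalizedMatcher

m = NormalizedMatcher('requests (>= 2.0, < 3.0)')
print(m.match('2.31.0'))   # True: satisfies both constraints
print(m.match('3.0.0'))    # False: fails the '< 3.0' constraint

# '~=' is the PEP 440 compatible-release operator.
m2 = NormalizedMatcher('requests (~= 2.4)')
print(m2.match('2.9.0'))   # True: same major series, >= 2.4
```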
+ #import pdb; pdb.set_trace() + m = _NUMERIC_PREFIX.match(result) + if not m: + prefix = '0.0.0' + suffix = result + else: + prefix = m.groups()[0].split('.') + prefix = [int(i) for i in prefix] + while len(prefix) < 3: + prefix.append(0) + if len(prefix) == 3: + suffix = result[m.end():] + else: + suffix = '.'.join([str(i) for i in prefix[3:]]) + result[m.end():] + prefix = prefix[:3] + prefix = '.'.join([str(i) for i in prefix]) + suffix = suffix.strip() + if suffix: + #import pdb; pdb.set_trace() + # massage the suffix. + for pat, repl in _SUFFIX_REPLACEMENTS: + suffix = pat.sub(repl, suffix) + + if not suffix: + result = prefix + else: + sep = '-' if 'dev' in suffix else '+' + result = prefix + sep + suffix + if not is_semver(result): + result = None + return result + + +def _suggest_normalized_version(s): + """Suggest a normalized version close to the given version string. + + If you have a version string that isn't rational (i.e. NormalizedVersion + doesn't like it) then you might be able to get an equivalent (or close) + rational version from this function. + + This does a number of simple normalizations to the given string, based + on observation of versions currently in use on PyPI. Given a dump of + those version during PyCon 2009, 4287 of them: + - 2312 (53.93%) match NormalizedVersion without change + with the automatic suggestion + - 3474 (81.04%) match when using this suggestion method + + @param s {str} An irrational version string. + @returns A rational version string, or None, if couldn't determine one. + """ + try: + _normalized_key(s) + return s # already rational + except UnsupportedVersionError: + pass + + rs = s.lower() + + # part of this could use maketrans + for orig, repl in (('-alpha', 'a'), ('-beta', 'b'), ('alpha', 'a'), + ('beta', 'b'), ('rc', 'c'), ('-final', ''), + ('-pre', 'c'), + ('-release', ''), ('.release', ''), ('-stable', ''), + ('+', '.'), ('_', '.'), (' ', ''), ('.final', ''), + ('final', '')): + rs = rs.replace(orig, repl) + + # if something ends with dev or pre, we add a 0 + rs = re.sub(r"pre$", r"pre0", rs) + rs = re.sub(r"dev$", r"dev0", rs) + + # if we have something like "b-2" or "a.2" at the end of the + # version, that is probably beta, alpha, etc + # let's remove the dash or dot + rs = re.sub(r"([abc]|rc)[\-\.](\d+)$", r"\1\2", rs) + + # 1.0-dev-r371 -> 1.0.dev371 + # 0.1-dev-r79 -> 0.1.dev79 + rs = re.sub(r"[\-\.](dev)[\-\.]?r?(\d+)$", r".\1\2", rs) + + # Clean: 2.0.a.3, 2.0.b1, 0.9.0~c1 + rs = re.sub(r"[.~]?([abc])\.?", r"\1", rs) + + # Clean: v0.3, v1.0 + if rs.startswith('v'): + rs = rs[1:] + + # Clean leading '0's on numbers. + #TODO: unintended side-effect on, e.g., "2003.05.09" + # PyPI stats: 77 (~2%) better + rs = re.sub(r"\b0+(\d+)(?!\d)", r"\1", rs) + + # Clean a/b/c with no version. E.g. "1.0a" -> "1.0a0". Setuptools infers + # zero. 
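To see the suggestion pipeline end to end, a small sketch via the `get_scheme` helper defined at the bottom of this module (the input strings are illustrative):

```python
from distlib.version import get_scheme

scheme = get_scheme('normalized')
print(scheme.is_valid_version('1.0-alpha-2'))   # False: not PEP 440
print(scheme.suggest('1.0-alpha-2'))            # '1.0a2'
print(scheme.suggest('2.1-rc2'))                # '2.1c2'
```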
+ # PyPI stats: 245 (7.56%) better + rs = re.sub(r"(\d+[abc])$", r"\g<1>0", rs) + + # the 'dev-rNNN' tag is a dev tag + rs = re.sub(r"\.?(dev-r|dev\.r)\.?(\d+)$", r".dev\2", rs) + + # clean the - when used as a pre delimiter + rs = re.sub(r"-(a|b|c)(\d+)$", r"\1\2", rs) + + # a terminal "dev" or "devel" can be changed into ".dev0" + rs = re.sub(r"[\.\-](dev|devel)$", r".dev0", rs) + + # a terminal "dev" can be changed into ".dev0" + rs = re.sub(r"(?![\.\-])dev$", r".dev0", rs) + + # a terminal "final" or "stable" can be removed + rs = re.sub(r"(final|stable)$", "", rs) + + # The 'r' and the '-' tags are post release tags + # 0.4a1.r10 -> 0.4a1.post10 + # 0.9.33-17222 -> 0.9.33.post17222 + # 0.9.33-r17222 -> 0.9.33.post17222 + rs = re.sub(r"\.?(r|-|-r)\.?(\d+)$", r".post\2", rs) + + # Clean 'r' instead of 'dev' usage: + # 0.9.33+r17222 -> 0.9.33.dev17222 + # 1.0dev123 -> 1.0.dev123 + # 1.0.git123 -> 1.0.dev123 + # 1.0.bzr123 -> 1.0.dev123 + # 0.1a0dev.123 -> 0.1a0.dev123 + # PyPI stats: ~150 (~4%) better + rs = re.sub(r"\.?(dev|git|bzr)\.?(\d+)$", r".dev\2", rs) + + # Clean '.pre' (normalized from '-pre' above) instead of 'c' usage: + # 0.2.pre1 -> 0.2c1 + # 0.2-c1 -> 0.2c1 + # 1.0preview123 -> 1.0c123 + # PyPI stats: ~21 (0.62%) better + rs = re.sub(r"\.?(pre|preview|-c)(\d+)$", r"c\g<2>", rs) + + # Tcl/Tk uses "px" for their post release markers + rs = re.sub(r"p(\d+)$", r".post\1", rs) + + try: + _normalized_key(rs) + except UnsupportedVersionError: + rs = None + return rs + +# +# Legacy version processing (distribute-compatible) +# + +_VERSION_PART = re.compile(r'([a-z]+|\d+|[\.-])', re.I) +_VERSION_REPLACE = { + 'pre': 'c', + 'preview': 'c', + '-': 'final-', + 'rc': 'c', + 'dev': '@', + '': None, + '.': None, +} + + +def _legacy_key(s): + def get_parts(s): + result = [] + for p in _VERSION_PART.split(s.lower()): + p = _VERSION_REPLACE.get(p, p) + if p: + if '0' <= p[:1] <= '9': + p = p.zfill(8) + else: + p = '*' + p + result.append(p) + result.append('*final') + return result + + result = [] + for p in get_parts(s): + if p.startswith('*'): + if p < '*final': + while result and result[-1] == '*final-': + result.pop() + while result and result[-1] == '00000000': + result.pop() + result.append(p) + return tuple(result) + + +class LegacyVersion(Version): + def parse(self, s): + return _legacy_key(s) + + @property + def is_prerelease(self): + result = False + for x in self._parts: + if (isinstance(x, string_types) and x.startswith('*') and + x < '*final'): + result = True + break + return result + + +class LegacyMatcher(Matcher): + version_class = LegacyVersion + + _operators = dict(Matcher._operators) + _operators['~='] = '_match_compatible' + + numeric_re = re.compile(r'^(\d+(\.\d+)*)') + + def _match_compatible(self, version, constraint, prefix): + if version < constraint: + return False + m = self.numeric_re.match(str(constraint)) + if not m: + logger.warning('Cannot compute compatible match for version %s ' + ' and constraint %s', version, constraint) + return True + s = m.groups()[0] + if '.' in s: + s = s.rsplit('.', 1)[0] + return _match_prefix(version, s) + +# +# Semantic versioning +# + +_SEMVER_RE = re.compile(r'^(\d+)\.(\d+)\.(\d+)' + r'(-[a-z0-9]+(\.[a-z0-9-]+)*)?' 
+ r'(\+[a-z0-9]+(\.[a-z0-9-]+)*)?$', re.I) + + +def is_semver(s): + return _SEMVER_RE.match(s) + + +def _semantic_key(s): + def make_tuple(s, absent): + if s is None: + result = (absent,) + else: + parts = s[1:].split('.') + # We can't compare ints and strings on Python 3, so fudge it + # by zero-filling numeric values so simulate a numeric comparison + result = tuple([p.zfill(8) if p.isdigit() else p for p in parts]) + return result + + m = is_semver(s) + if not m: + raise UnsupportedVersionError(s) + groups = m.groups() + major, minor, patch = [int(i) for i in groups[:3]] + # choose the '|' and '*' so that versions sort correctly + pre, build = make_tuple(groups[3], '|'), make_tuple(groups[5], '*') + return (major, minor, patch), pre, build + + +class SemanticVersion(Version): + def parse(self, s): + return _semantic_key(s) + + @property + def is_prerelease(self): + return self._parts[1][0] != '|' + + +class SemanticMatcher(Matcher): + version_class = SemanticVersion + + +class VersionScheme(object): + def __init__(self, key, matcher, suggester=None): + self.key = key + self.matcher = matcher + self.suggester = suggester + + def is_valid_version(self, s): + try: + self.matcher.version_class(s) + result = True + except UnsupportedVersionError: + result = False + return result + + def is_valid_matcher(self, s): + try: + self.matcher(s) + result = True + except UnsupportedVersionError: + result = False + return result + + def is_valid_constraint_list(self, s): + """ + Used for processing some metadata fields + """ + # See issue #140. Be tolerant of a single trailing comma. + if s.endswith(','): + s = s[:-1] + return self.is_valid_matcher('dummy_name (%s)' % s) + + def suggest(self, s): + if self.suggester is None: + result = None + else: + result = self.suggester(s) + return result + +_SCHEMES = { + 'normalized': VersionScheme(_normalized_key, NormalizedMatcher, + _suggest_normalized_version), + 'legacy': VersionScheme(_legacy_key, LegacyMatcher, lambda self, s: s), + 'semantic': VersionScheme(_semantic_key, SemanticMatcher, + _suggest_semantic_version), +} + +_SCHEMES['default'] = _SCHEMES['normalized'] + + +def get_scheme(name): + if name not in _SCHEMES: + raise ValueError('unknown scheme name: %r' % name) + return _SCHEMES[name] diff --git a/env/lib/python3.8/site-packages/distlib/w32.exe b/env/lib/python3.8/site-packages/distlib/w32.exe new file mode 100644 index 00000000..4ee2d3a3 Binary files /dev/null and b/env/lib/python3.8/site-packages/distlib/w32.exe differ diff --git a/env/lib/python3.8/site-packages/distlib/w64-arm.exe b/env/lib/python3.8/site-packages/distlib/w64-arm.exe new file mode 100644 index 00000000..951d5817 Binary files /dev/null and b/env/lib/python3.8/site-packages/distlib/w64-arm.exe differ diff --git a/env/lib/python3.8/site-packages/distlib/w64.exe b/env/lib/python3.8/site-packages/distlib/w64.exe new file mode 100644 index 00000000..5763076d Binary files /dev/null and b/env/lib/python3.8/site-packages/distlib/w64.exe differ diff --git a/env/lib/python3.8/site-packages/distlib/wheel.py b/env/lib/python3.8/site-packages/distlib/wheel.py new file mode 100644 index 00000000..028c2d99 --- /dev/null +++ b/env/lib/python3.8/site-packages/distlib/wheel.py @@ -0,0 +1,1082 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013-2020 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. 
+#
+from __future__ import unicode_literals
+
+import base64
+import codecs
+import datetime
+from email import message_from_file
+import hashlib
+import json
+import logging
+import os
+import posixpath
+import re
+import shutil
+import sys
+import tempfile
+import zipfile
+
+from . import __version__, DistlibException
+from .compat import sysconfig, ZipFile, fsdecode, text_type, filter
+from .database import InstalledDistribution
+from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME,
+                       LEGACY_METADATA_FILENAME)
+from .util import (FileOperator, convert_path, CSVReader, CSVWriter, Cache,
+                   cached_property, get_cache_base, read_exports, tempdir,
+                   get_platform)
+from .version import NormalizedVersion, UnsupportedVersionError
+
+logger = logging.getLogger(__name__)
+
+cache = None    # created when needed
+
+if hasattr(sys, 'pypy_version_info'):  # pragma: no cover
+    IMP_PREFIX = 'pp'
+elif sys.platform.startswith('java'):  # pragma: no cover
+    IMP_PREFIX = 'jy'
+elif sys.platform == 'cli':  # pragma: no cover
+    IMP_PREFIX = 'ip'
+else:
+    IMP_PREFIX = 'cp'
+
+VER_SUFFIX = sysconfig.get_config_var('py_version_nodot')
+if not VER_SUFFIX:   # pragma: no cover
+    VER_SUFFIX = '%s%s' % sys.version_info[:2]
+PYVER = 'py' + VER_SUFFIX
+IMPVER = IMP_PREFIX + VER_SUFFIX
+
+ARCH = get_platform().replace('-', '_').replace('.', '_')
+
+ABI = sysconfig.get_config_var('SOABI')
+if ABI and ABI.startswith('cpython-'):
+    ABI = ABI.replace('cpython-', 'cp').split('-')[0]
+else:
+    def _derive_abi():
+        parts = ['cp', VER_SUFFIX]
+        if sysconfig.get_config_var('Py_DEBUG'):
+            parts.append('d')
+        if IMP_PREFIX == 'cp':
+            vi = sys.version_info[:2]
+            if vi < (3, 8):
+                wpm = sysconfig.get_config_var('WITH_PYMALLOC')
+                if wpm is None:
+                    wpm = True
+                if wpm:
+                    parts.append('m')
+                if vi < (3, 3):
+                    us = sysconfig.get_config_var('Py_UNICODE_SIZE')
+                    if us == 4 or (us is None and sys.maxunicode == 0x10FFFF):
+                        parts.append('u')
+        return ''.join(parts)
+    ABI = _derive_abi()
+    del _derive_abi
+
+FILENAME_RE = re.compile(r'''
+(?P<nm>[^-]+)
+-(?P<vn>\d+[^-]*)
+(-(?P<bn>\d+[^-]*))?
+-(?P<py>\w+\d+(\.\w+\d+)*)
+-(?P<bi>\w+)
+-(?P<ar>\w+(\.\w+)*)
+\.whl$
+''', re.IGNORECASE | re.VERBOSE)
+
+NAME_VERSION_RE = re.compile(r'''
+(?P<nm>[^-]+)
+-(?P<vn>\d+[^-]*)
+(-(?P<bn>\d+[^-]*))?$
+''', re.IGNORECASE | re.VERBOSE)
+
+SHEBANG_RE = re.compile(br'\s*#![^\r\n]*')
+SHEBANG_DETAIL_RE = re.compile(br'^(\s*#!("[^"]+"|\S+))\s+(.*)$')
+SHEBANG_PYTHON = b'#!python'
+SHEBANG_PYTHONW = b'#!pythonw'
+
+if os.sep == '/':
+    to_posix = lambda o: o
+else:
+    to_posix = lambda o: o.replace(os.sep, '/')
+
+if sys.version_info[0] < 3:
+    import imp
+else:
+    imp = None
+    import importlib.machinery
+    import importlib.util
+
+def _get_suffixes():
+    if imp:
+        return [s[0] for s in imp.get_suffixes()]
+    else:
+        return importlib.machinery.EXTENSION_SUFFIXES
+
+def _load_dynamic(name, path):
+    # https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
+    if imp:
+        return imp.load_dynamic(name, path)
+    else:
+        spec = importlib.util.spec_from_file_location(name, path)
+        module = importlib.util.module_from_spec(spec)
+        sys.modules[name] = module
+        spec.loader.exec_module(module)
+        return module
+
+class Mounter(object):
+    def __init__(self):
+        self.impure_wheels = {}
+        self.libs = {}
+
+    def add(self, pathname, extensions):
+        self.impure_wheels[pathname] = extensions
+        self.libs.update(extensions)
+
+    def remove(self, pathname):
+        extensions = self.impure_wheels.pop(pathname)
+        for k, v in extensions:
+            if k in self.libs:
+                del self.libs[k]
+
+    def find_module(self, fullname, path=None):
+        if fullname in self.libs:
+            result = self
+        else:
+            result = None
+        return result
+
+    def load_module(self, fullname):
+        if fullname in sys.modules:
+            result = sys.modules[fullname]
+        else:
+            if fullname not in self.libs:
+                raise ImportError('unable to find extension for %s' % fullname)
+            result = _load_dynamic(fullname, self.libs[fullname])
+            result.__loader__ = self
+            parts = fullname.rsplit('.', 1)
+            if len(parts) > 1:
+                result.__package__ = parts[0]
+        return result
+
+_hook = Mounter()
+
+
+class Wheel(object):
+    """
+    Class to build and install from Wheel files (PEP 427).
+    """
+
+    wheel_version = (1, 1)
+    hash_kind = 'sha256'
+
+    def __init__(self, filename=None, sign=False, verify=False):
+        """
+        Initialise an instance using a (valid) filename.
+        """
+        self.sign = sign
+        self.should_verify = verify
+        self.buildver = ''
+        self.pyver = [PYVER]
+        self.abi = ['none']
+        self.arch = ['any']
+        self.dirname = os.getcwd()
+        if filename is None:
+            self.name = 'dummy'
+            self.version = '0.1'
+            self._filename = self.filename
+        else:
+            m = NAME_VERSION_RE.match(filename)
+            if m:
+                info = m.groupdict('')
+                self.name = info['nm']
+                # Reinstate the local version separator
+                self.version = info['vn'].replace('_', '-')
+                self.buildver = info['bn']
+                self._filename = self.filename
+            else:
+                dirname, filename = os.path.split(filename)
+                m = FILENAME_RE.match(filename)
+                if not m:
+                    raise DistlibException('Invalid name or '
+                                           'filename: %r' % filename)
+                if dirname:
+                    self.dirname = os.path.abspath(dirname)
+                self._filename = filename
+                info = m.groupdict('')
+                self.name = info['nm']
+                self.version = info['vn']
+                self.buildver = info['bn']
+                self.pyver = info['py'].split('.')
+                self.abi = info['bi'].split('.')
+                self.arch = info['ar'].split('.')
+
+    @property
+    def filename(self):
+        """
+        Build and return a filename from the various components.
+ """ + if self.buildver: + buildver = '-' + self.buildver + else: + buildver = '' + pyver = '.'.join(self.pyver) + abi = '.'.join(self.abi) + arch = '.'.join(self.arch) + # replace - with _ as a local version separator + version = self.version.replace('-', '_') + return '%s-%s%s-%s-%s-%s.whl' % (self.name, version, buildver, + pyver, abi, arch) + + @property + def exists(self): + path = os.path.join(self.dirname, self.filename) + return os.path.isfile(path) + + @property + def tags(self): + for pyver in self.pyver: + for abi in self.abi: + for arch in self.arch: + yield pyver, abi, arch + + @cached_property + def metadata(self): + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + wrapper = codecs.getreader('utf-8') + with ZipFile(pathname, 'r') as zf: + wheel_metadata = self.get_wheel_metadata(zf) + wv = wheel_metadata['Wheel-Version'].split('.', 1) + file_version = tuple([int(i) for i in wv]) + # if file_version < (1, 1): + # fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME, + # LEGACY_METADATA_FILENAME] + # else: + # fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME] + fns = [WHEEL_METADATA_FILENAME, LEGACY_METADATA_FILENAME] + result = None + for fn in fns: + try: + metadata_filename = posixpath.join(info_dir, fn) + with zf.open(metadata_filename) as bf: + wf = wrapper(bf) + result = Metadata(fileobj=wf) + if result: + break + except KeyError: + pass + if not result: + raise ValueError('Invalid wheel, because metadata is ' + 'missing: looked in %s' % ', '.join(fns)) + return result + + def get_wheel_metadata(self, zf): + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + metadata_filename = posixpath.join(info_dir, 'WHEEL') + with zf.open(metadata_filename) as bf: + wf = codecs.getreader('utf-8')(bf) + message = message_from_file(wf) + return dict(message) + + @cached_property + def info(self): + pathname = os.path.join(self.dirname, self.filename) + with ZipFile(pathname, 'r') as zf: + result = self.get_wheel_metadata(zf) + return result + + def process_shebang(self, data): + m = SHEBANG_RE.match(data) + if m: + end = m.end() + shebang, data_after_shebang = data[:end], data[end:] + # Preserve any arguments after the interpreter + if b'pythonw' in shebang.lower(): + shebang_python = SHEBANG_PYTHONW + else: + shebang_python = SHEBANG_PYTHON + m = SHEBANG_DETAIL_RE.match(shebang) + if m: + args = b' ' + m.groups()[-1] + else: + args = b'' + shebang = shebang_python + args + data = shebang + data_after_shebang + else: + cr = data.find(b'\r') + lf = data.find(b'\n') + if cr < 0 or cr > lf: + term = b'\n' + else: + if data[cr:cr + 2] == b'\r\n': + term = b'\r\n' + else: + term = b'\r' + data = SHEBANG_PYTHON + term + data + return data + + def get_hash(self, data, hash_kind=None): + if hash_kind is None: + hash_kind = self.hash_kind + try: + hasher = getattr(hashlib, hash_kind) + except AttributeError: + raise DistlibException('Unsupported hash algorithm: %r' % hash_kind) + result = hasher(data).digest() + result = base64.urlsafe_b64encode(result).rstrip(b'=').decode('ascii') + return hash_kind, result + + def write_record(self, records, record_path, archive_record_path): + records = list(records) # make a copy, as mutated + records.append((archive_record_path, '', '')) + with CSVWriter(record_path) as writer: + for row in records: + writer.writerow(row) + + def write_records(self, info, libdir, archive_paths): + records = [] + distinfo, info_dir = info + hasher 
= getattr(hashlib, self.hash_kind) + for ap, p in archive_paths: + with open(p, 'rb') as f: + data = f.read() + digest = '%s=%s' % self.get_hash(data) + size = os.path.getsize(p) + records.append((ap, digest, size)) + + p = os.path.join(distinfo, 'RECORD') + ap = to_posix(os.path.join(info_dir, 'RECORD')) + self.write_record(records, p, ap) + archive_paths.append((ap, p)) + + def build_zip(self, pathname, archive_paths): + with ZipFile(pathname, 'w', zipfile.ZIP_DEFLATED) as zf: + for ap, p in archive_paths: + logger.debug('Wrote %s to %s in wheel', p, ap) + zf.write(p, ap) + + def build(self, paths, tags=None, wheel_version=None): + """ + Build a wheel from files in specified paths, and use any specified tags + when determining the name of the wheel. + """ + if tags is None: + tags = {} + + libkey = list(filter(lambda o: o in paths, ('purelib', 'platlib')))[0] + if libkey == 'platlib': + is_pure = 'false' + default_pyver = [IMPVER] + default_abi = [ABI] + default_arch = [ARCH] + else: + is_pure = 'true' + default_pyver = [PYVER] + default_abi = ['none'] + default_arch = ['any'] + + self.pyver = tags.get('pyver', default_pyver) + self.abi = tags.get('abi', default_abi) + self.arch = tags.get('arch', default_arch) + + libdir = paths[libkey] + + name_ver = '%s-%s' % (self.name, self.version) + data_dir = '%s.data' % name_ver + info_dir = '%s.dist-info' % name_ver + + archive_paths = [] + + # First, stuff which is not in site-packages + for key in ('data', 'headers', 'scripts'): + if key not in paths: + continue + path = paths[key] + if os.path.isdir(path): + for root, dirs, files in os.walk(path): + for fn in files: + p = fsdecode(os.path.join(root, fn)) + rp = os.path.relpath(p, path) + ap = to_posix(os.path.join(data_dir, key, rp)) + archive_paths.append((ap, p)) + if key == 'scripts' and not p.endswith('.exe'): + with open(p, 'rb') as f: + data = f.read() + data = self.process_shebang(data) + with open(p, 'wb') as f: + f.write(data) + + # Now, stuff which is in site-packages, other than the + # distinfo stuff. + path = libdir + distinfo = None + for root, dirs, files in os.walk(path): + if root == path: + # At the top level only, save distinfo for later + # and skip it for now + for i, dn in enumerate(dirs): + dn = fsdecode(dn) + if dn.endswith('.dist-info'): + distinfo = os.path.join(root, dn) + del dirs[i] + break + assert distinfo, '.dist-info directory expected, not found' + + for fn in files: + # comment out next suite to leave .pyc files in + if fsdecode(fn).endswith(('.pyc', '.pyo')): + continue + p = os.path.join(root, fn) + rp = to_posix(os.path.relpath(p, path)) + archive_paths.append((rp, p)) + + # Now distinfo. Assumed to be flat, i.e. os.listdir is enough. + files = os.listdir(distinfo) + for fn in files: + if fn not in ('RECORD', 'INSTALLER', 'SHARED', 'WHEEL'): + p = fsdecode(os.path.join(distinfo, fn)) + ap = to_posix(os.path.join(info_dir, fn)) + archive_paths.append((ap, p)) + + wheel_metadata = [ + 'Wheel-Version: %d.%d' % (wheel_version or self.wheel_version), + 'Generator: distlib %s' % __version__, + 'Root-Is-Purelib: %s' % is_pure, + ] + for pyver, abi, arch in self.tags: + wheel_metadata.append('Tag: %s-%s-%s' % (pyver, abi, arch)) + p = os.path.join(distinfo, 'WHEEL') + with open(p, 'w') as f: + f.write('\n'.join(wheel_metadata)) + ap = to_posix(os.path.join(info_dir, 'WHEEL')) + archive_paths.append((ap, p)) + + # sort the entries by archive path. Not needed by any spec, but it + # keeps the archive listing and RECORD tidier than they would otherwise + # be. 
Use the number of path segments to keep directory entries together, + # and keep the dist-info stuff at the end. + def sorter(t): + ap = t[0] + n = ap.count('/') + if '.dist-info' in ap: + n += 10000 + return (n, ap) + archive_paths = sorted(archive_paths, key=sorter) + + # Now, at last, RECORD. + # Paths in here are archive paths - nothing else makes sense. + self.write_records((distinfo, info_dir), libdir, archive_paths) + # Now, ready to build the zip file + pathname = os.path.join(self.dirname, self.filename) + self.build_zip(pathname, archive_paths) + return pathname + + def skip_entry(self, arcname): + """ + Determine whether an archive entry should be skipped when verifying + or installing. + """ + # The signature file won't be in RECORD, + # and we don't currently don't do anything with it + # We also skip directories, as they won't be in RECORD + # either. See: + # + # https://github.com/pypa/wheel/issues/294 + # https://github.com/pypa/wheel/issues/287 + # https://github.com/pypa/wheel/pull/289 + # + return arcname.endswith(('/', '/RECORD.jws')) + + def install(self, paths, maker, **kwargs): + """ + Install a wheel to the specified paths. If kwarg ``warner`` is + specified, it should be a callable, which will be called with two + tuples indicating the wheel version of this software and the wheel + version in the file, if there is a discrepancy in the versions. + This can be used to issue any warnings to raise any exceptions. + If kwarg ``lib_only`` is True, only the purelib/platlib files are + installed, and the headers, scripts, data and dist-info metadata are + not written. If kwarg ``bytecode_hashed_invalidation`` is True, written + bytecode will try to use file-hash based invalidation (PEP-552) on + supported interpreter versions (CPython 2.7+). + + The return value is a :class:`InstalledDistribution` instance unless + ``options.lib_only`` is True, in which case the return value is ``None``. + """ + + dry_run = maker.dry_run + warner = kwargs.get('warner') + lib_only = kwargs.get('lib_only', False) + bc_hashed_invalidation = kwargs.get('bytecode_hashed_invalidation', False) + + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + data_dir = '%s.data' % name_ver + info_dir = '%s.dist-info' % name_ver + + metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME) + wheel_metadata_name = posixpath.join(info_dir, 'WHEEL') + record_name = posixpath.join(info_dir, 'RECORD') + + wrapper = codecs.getreader('utf-8') + + with ZipFile(pathname, 'r') as zf: + with zf.open(wheel_metadata_name) as bwf: + wf = wrapper(bwf) + message = message_from_file(wf) + wv = message['Wheel-Version'].split('.', 1) + file_version = tuple([int(i) for i in wv]) + if (file_version != self.wheel_version) and warner: + warner(self.wheel_version, file_version) + + if message['Root-Is-Purelib'] == 'true': + libdir = paths['purelib'] + else: + libdir = paths['platlib'] + + records = {} + with zf.open(record_name) as bf: + with CSVReader(stream=bf) as reader: + for row in reader: + p = row[0] + records[p] = row + + data_pfx = posixpath.join(data_dir, '') + info_pfx = posixpath.join(info_dir, '') + script_pfx = posixpath.join(data_dir, 'scripts', '') + + # make a new instance rather than a copy of maker's, + # as we mutate it + fileop = FileOperator(dry_run=dry_run) + fileop.record = True # so we can rollback if needed + + bc = not sys.dont_write_bytecode # Double negatives. Lovely! 
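+
+            # What follows walks every entry in the archive, checks its size
+            # and hash against the RECORD data read above, and only then
+            # writes it out; script entries are staged in a temporary
+            # directory first so that `maker` can rewrite shebangs and
+            # create launchers.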
+ + outfiles = [] # for RECORD writing + + # for script copying/shebang processing + workdir = tempfile.mkdtemp() + # set target dir later + # we default add_launchers to False, as the + # Python Launcher should be used instead + maker.source_dir = workdir + maker.target_dir = None + try: + for zinfo in zf.infolist(): + arcname = zinfo.filename + if isinstance(arcname, text_type): + u_arcname = arcname + else: + u_arcname = arcname.decode('utf-8') + if self.skip_entry(u_arcname): + continue + row = records[u_arcname] + if row[2] and str(zinfo.file_size) != row[2]: + raise DistlibException('size mismatch for ' + '%s' % u_arcname) + if row[1]: + kind, value = row[1].split('=', 1) + with zf.open(arcname) as bf: + data = bf.read() + _, digest = self.get_hash(data, kind) + if digest != value: + raise DistlibException('digest mismatch for ' + '%s' % arcname) + + if lib_only and u_arcname.startswith((info_pfx, data_pfx)): + logger.debug('lib_only: skipping %s', u_arcname) + continue + is_script = (u_arcname.startswith(script_pfx) + and not u_arcname.endswith('.exe')) + + if u_arcname.startswith(data_pfx): + _, where, rp = u_arcname.split('/', 2) + outfile = os.path.join(paths[where], convert_path(rp)) + else: + # meant for site-packages. + if u_arcname in (wheel_metadata_name, record_name): + continue + outfile = os.path.join(libdir, convert_path(u_arcname)) + if not is_script: + with zf.open(arcname) as bf: + fileop.copy_stream(bf, outfile) + # Issue #147: permission bits aren't preserved. Using + # zf.extract(zinfo, libdir) should have worked, but didn't, + # see https://www.thetopsites.net/article/53834422.shtml + # So ... manually preserve permission bits as given in zinfo + if os.name == 'posix': + # just set the normal permission bits + os.chmod(outfile, (zinfo.external_attr >> 16) & 0x1FF) + outfiles.append(outfile) + # Double check the digest of the written file + if not dry_run and row[1]: + with open(outfile, 'rb') as bf: + data = bf.read() + _, newdigest = self.get_hash(data, kind) + if newdigest != digest: + raise DistlibException('digest mismatch ' + 'on write for ' + '%s' % outfile) + if bc and outfile.endswith('.py'): + try: + pyc = fileop.byte_compile(outfile, + hashed_invalidation=bc_hashed_invalidation) + outfiles.append(pyc) + except Exception: + # Don't give up if byte-compilation fails, + # but log it and perhaps warn the user + logger.warning('Byte-compilation failed', + exc_info=True) + else: + fn = os.path.basename(convert_path(arcname)) + workname = os.path.join(workdir, fn) + with zf.open(arcname) as bf: + fileop.copy_stream(bf, workname) + + dn, fn = os.path.split(outfile) + maker.target_dir = dn + filenames = maker.make(fn) + fileop.set_executable_mode(filenames) + outfiles.extend(filenames) + + if lib_only: + logger.debug('lib_only: returning None') + dist = None + else: + # Generate scripts + + # Try to get pydist.json so we can see if there are + # any commands to generate. If this fails (e.g. because + # of a legacy wheel), log a warning but don't give up. 
+ commands = None + file_version = self.info['Wheel-Version'] + if file_version == '1.0': + # Use legacy info + ep = posixpath.join(info_dir, 'entry_points.txt') + try: + with zf.open(ep) as bwf: + epdata = read_exports(bwf) + commands = {} + for key in ('console', 'gui'): + k = '%s_scripts' % key + if k in epdata: + commands['wrap_%s' % key] = d = {} + for v in epdata[k].values(): + s = '%s:%s' % (v.prefix, v.suffix) + if v.flags: + s += ' [%s]' % ','.join(v.flags) + d[v.name] = s + except Exception: + logger.warning('Unable to read legacy script ' + 'metadata, so cannot generate ' + 'scripts') + else: + try: + with zf.open(metadata_name) as bwf: + wf = wrapper(bwf) + commands = json.load(wf).get('extensions') + if commands: + commands = commands.get('python.commands') + except Exception: + logger.warning('Unable to read JSON metadata, so ' + 'cannot generate scripts') + if commands: + console_scripts = commands.get('wrap_console', {}) + gui_scripts = commands.get('wrap_gui', {}) + if console_scripts or gui_scripts: + script_dir = paths.get('scripts', '') + if not os.path.isdir(script_dir): + raise ValueError('Valid script path not ' + 'specified') + maker.target_dir = script_dir + for k, v in console_scripts.items(): + script = '%s = %s' % (k, v) + filenames = maker.make(script) + fileop.set_executable_mode(filenames) + + if gui_scripts: + options = {'gui': True } + for k, v in gui_scripts.items(): + script = '%s = %s' % (k, v) + filenames = maker.make(script, options) + fileop.set_executable_mode(filenames) + + p = os.path.join(libdir, info_dir) + dist = InstalledDistribution(p) + + # Write SHARED + paths = dict(paths) # don't change passed in dict + del paths['purelib'] + del paths['platlib'] + paths['lib'] = libdir + p = dist.write_shared_locations(paths, dry_run) + if p: + outfiles.append(p) + + # Write RECORD + dist.write_installed_files(outfiles, paths['prefix'], + dry_run) + return dist + except Exception: # pragma: no cover + logger.exception('installation failed.') + fileop.rollback() + raise + finally: + shutil.rmtree(workdir) + + def _get_dylib_cache(self): + global cache + if cache is None: + # Use native string to avoid issues on 2.x: see Python #20140. + base = os.path.join(get_cache_base(), str('dylib-cache'), + '%s.%s' % sys.version_info[:2]) + cache = Cache(base) + return cache + + def _get_extensions(self): + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + arcname = posixpath.join(info_dir, 'EXTENSIONS') + wrapper = codecs.getreader('utf-8') + result = [] + with ZipFile(pathname, 'r') as zf: + try: + with zf.open(arcname) as bf: + wf = wrapper(bf) + extensions = json.load(wf) + cache = self._get_dylib_cache() + prefix = cache.prefix_to_dir(pathname) + cache_base = os.path.join(cache.base, prefix) + if not os.path.isdir(cache_base): + os.makedirs(cache_base) + for name, relpath in extensions.items(): + dest = os.path.join(cache_base, convert_path(relpath)) + if not os.path.exists(dest): + extract = True + else: + file_time = os.stat(dest).st_mtime + file_time = datetime.datetime.fromtimestamp(file_time) + info = zf.getinfo(relpath) + wheel_time = datetime.datetime(*info.date_time) + extract = wheel_time > file_time + if extract: + zf.extract(relpath, cache_base) + result.append((name, dest)) + except KeyError: + pass + return result + + def is_compatible(self): + """ + Determine if a wheel is compatible with the running system. 
+ """ + return is_compatible(self) + + def is_mountable(self): + """ + Determine if a wheel is asserted as mountable by its metadata. + """ + return True # for now - metadata details TBD + + def mount(self, append=False): + pathname = os.path.abspath(os.path.join(self.dirname, self.filename)) + if not self.is_compatible(): + msg = 'Wheel %s not compatible with this Python.' % pathname + raise DistlibException(msg) + if not self.is_mountable(): + msg = 'Wheel %s is marked as not mountable.' % pathname + raise DistlibException(msg) + if pathname in sys.path: + logger.debug('%s already in path', pathname) + else: + if append: + sys.path.append(pathname) + else: + sys.path.insert(0, pathname) + extensions = self._get_extensions() + if extensions: + if _hook not in sys.meta_path: + sys.meta_path.append(_hook) + _hook.add(pathname, extensions) + + def unmount(self): + pathname = os.path.abspath(os.path.join(self.dirname, self.filename)) + if pathname not in sys.path: + logger.debug('%s not in path', pathname) + else: + sys.path.remove(pathname) + if pathname in _hook.impure_wheels: + _hook.remove(pathname) + if not _hook.impure_wheels: + if _hook in sys.meta_path: + sys.meta_path.remove(_hook) + + def verify(self): + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + data_dir = '%s.data' % name_ver + info_dir = '%s.dist-info' % name_ver + + metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME) + wheel_metadata_name = posixpath.join(info_dir, 'WHEEL') + record_name = posixpath.join(info_dir, 'RECORD') + + wrapper = codecs.getreader('utf-8') + + with ZipFile(pathname, 'r') as zf: + with zf.open(wheel_metadata_name) as bwf: + wf = wrapper(bwf) + message = message_from_file(wf) + wv = message['Wheel-Version'].split('.', 1) + file_version = tuple([int(i) for i in wv]) + # TODO version verification + + records = {} + with zf.open(record_name) as bf: + with CSVReader(stream=bf) as reader: + for row in reader: + p = row[0] + records[p] = row + + for zinfo in zf.infolist(): + arcname = zinfo.filename + if isinstance(arcname, text_type): + u_arcname = arcname + else: + u_arcname = arcname.decode('utf-8') + # See issue #115: some wheels have .. in their entries, but + # in the filename ... e.g. __main__..py ! So the check is + # updated to look for .. in the directory portions + p = u_arcname.split('/') + if '..' in p: + raise DistlibException('invalid entry in ' + 'wheel: %r' % u_arcname) + + if self.skip_entry(u_arcname): + continue + row = records[u_arcname] + if row[2] and str(zinfo.file_size) != row[2]: + raise DistlibException('size mismatch for ' + '%s' % u_arcname) + if row[1]: + kind, value = row[1].split('=', 1) + with zf.open(arcname) as bf: + data = bf.read() + _, digest = self.get_hash(data, kind) + if digest != value: + raise DistlibException('digest mismatch for ' + '%s' % arcname) + + def update(self, modifier, dest_dir=None, **kwargs): + """ + Update the contents of a wheel in a generic way. The modifier should + be a callable which expects a dictionary argument: its keys are + archive-entry paths, and its values are absolute filesystem paths + where the contents the corresponding archive entries can be found. The + modifier is free to change the contents of the files pointed to, add + new entries and remove entries, before returning. This method will + extract the entire contents of the wheel to a temporary location, call + the modifier, and then use the passed (and possibly updated) + dictionary to write a new wheel. 
If ``dest_dir`` is specified, the new + wheel is written there -- otherwise, the original wheel is overwritten. + + The modifier should return True if it updated the wheel, else False. + This method returns the same value the modifier returns. + """ + + def get_version(path_map, info_dir): + version = path = None + key = '%s/%s' % (info_dir, LEGACY_METADATA_FILENAME) + if key not in path_map: + key = '%s/PKG-INFO' % info_dir + if key in path_map: + path = path_map[key] + version = Metadata(path=path).version + return version, path + + def update_version(version, path): + updated = None + try: + v = NormalizedVersion(version) + i = version.find('-') + if i < 0: + updated = '%s+1' % version + else: + parts = [int(s) for s in version[i + 1:].split('.')] + parts[-1] += 1 + updated = '%s+%s' % (version[:i], + '.'.join(str(i) for i in parts)) + except UnsupportedVersionError: + logger.debug('Cannot update non-compliant (PEP-440) ' + 'version %r', version) + if updated: + md = Metadata(path=path) + md.version = updated + legacy = path.endswith(LEGACY_METADATA_FILENAME) + md.write(path=path, legacy=legacy) + logger.debug('Version updated from %r to %r', version, + updated) + + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + record_name = posixpath.join(info_dir, 'RECORD') + with tempdir() as workdir: + with ZipFile(pathname, 'r') as zf: + path_map = {} + for zinfo in zf.infolist(): + arcname = zinfo.filename + if isinstance(arcname, text_type): + u_arcname = arcname + else: + u_arcname = arcname.decode('utf-8') + if u_arcname == record_name: + continue + if '..' in u_arcname: + raise DistlibException('invalid entry in ' + 'wheel: %r' % u_arcname) + zf.extract(zinfo, workdir) + path = os.path.join(workdir, convert_path(u_arcname)) + path_map[u_arcname] = path + + # Remember the version. + original_version, _ = get_version(path_map, info_dir) + # Files extracted. Call the modifier. + modified = modifier(path_map, **kwargs) + if modified: + # Something changed - need to build a new wheel. + current_version, path = get_version(path_map, info_dir) + if current_version and (current_version == original_version): + # Add or update local version to signify changes. + update_version(current_version, path) + # Decide where the new wheel goes. + if dest_dir is None: + fd, newpath = tempfile.mkstemp(suffix='.whl', + prefix='wheel-update-', + dir=workdir) + os.close(fd) + else: + if not os.path.isdir(dest_dir): + raise DistlibException('Not a directory: %r' % dest_dir) + newpath = os.path.join(dest_dir, self.filename) + archive_paths = list(path_map.items()) + distinfo = os.path.join(workdir, info_dir) + info = distinfo, info_dir + self.write_records(info, workdir, archive_paths) + self.build_zip(newpath, archive_paths) + if dest_dir is None: + shutil.copyfile(newpath, pathname) + return modified + +def _get_glibc_version(): + import platform + ver = platform.libc_ver() + result = [] + if ver[0] == 'glibc': + for s in ver[1].split('.'): + result.append(int(s) if s.isdigit() else 0) + result = tuple(result) + return result + +def compatible_tags(): + """ + Return (pyver, abi, arch) tuples compatible with this Python. 
+ """ + versions = [VER_SUFFIX] + major = VER_SUFFIX[0] + for minor in range(sys.version_info[1] - 1, - 1, -1): + versions.append(''.join([major, str(minor)])) + + abis = [] + for suffix in _get_suffixes(): + if suffix.startswith('.abi'): + abis.append(suffix.split('.', 2)[1]) + abis.sort() + if ABI != 'none': + abis.insert(0, ABI) + abis.append('none') + result = [] + + arches = [ARCH] + if sys.platform == 'darwin': + m = re.match(r'(\w+)_(\d+)_(\d+)_(\w+)$', ARCH) + if m: + name, major, minor, arch = m.groups() + minor = int(minor) + matches = [arch] + if arch in ('i386', 'ppc'): + matches.append('fat') + if arch in ('i386', 'ppc', 'x86_64'): + matches.append('fat3') + if arch in ('ppc64', 'x86_64'): + matches.append('fat64') + if arch in ('i386', 'x86_64'): + matches.append('intel') + if arch in ('i386', 'x86_64', 'intel', 'ppc', 'ppc64'): + matches.append('universal') + while minor >= 0: + for match in matches: + s = '%s_%s_%s_%s' % (name, major, minor, match) + if s != ARCH: # already there + arches.append(s) + minor -= 1 + + # Most specific - our Python version, ABI and arch + for abi in abis: + for arch in arches: + result.append((''.join((IMP_PREFIX, versions[0])), abi, arch)) + # manylinux + if abi != 'none' and sys.platform.startswith('linux'): + arch = arch.replace('linux_', '') + parts = _get_glibc_version() + if len(parts) == 2: + if parts >= (2, 5): + result.append((''.join((IMP_PREFIX, versions[0])), abi, + 'manylinux1_%s' % arch)) + if parts >= (2, 12): + result.append((''.join((IMP_PREFIX, versions[0])), abi, + 'manylinux2010_%s' % arch)) + if parts >= (2, 17): + result.append((''.join((IMP_PREFIX, versions[0])), abi, + 'manylinux2014_%s' % arch)) + result.append((''.join((IMP_PREFIX, versions[0])), abi, + 'manylinux_%s_%s_%s' % (parts[0], parts[1], + arch))) + + # where no ABI / arch dependency, but IMP_PREFIX dependency + for i, version in enumerate(versions): + result.append((''.join((IMP_PREFIX, version)), 'none', 'any')) + if i == 0: + result.append((''.join((IMP_PREFIX, version[0])), 'none', 'any')) + + # no IMP_PREFIX, ABI or arch dependency + for i, version in enumerate(versions): + result.append((''.join(('py', version)), 'none', 'any')) + if i == 0: + result.append((''.join(('py', version[0])), 'none', 'any')) + + return set(result) + + +COMPATIBLE_TAGS = compatible_tags() + +del compatible_tags + + +def is_compatible(wheel, tags=None): + if not isinstance(wheel, Wheel): + wheel = Wheel(wheel) # assume it's a filename + result = False + if tags is None: + tags = COMPATIBLE_TAGS + for ver, abi, arch in tags: + if ver in wheel.pyver and abi in wheel.abi and arch in wheel.arch: + result = True + break + return result diff --git a/env/lib/python3.8/site-packages/distutils-precedence.pth b/env/lib/python3.8/site-packages/distutils-precedence.pth new file mode 100644 index 00000000..7f009fe9 --- /dev/null +++ b/env/lib/python3.8/site-packages/distutils-precedence.pth @@ -0,0 +1 @@ +import os; var = 'SETUPTOOLS_USE_DISTUTILS'; enabled = os.environ.get(var, 'local') == 'local'; enabled and __import__('_distutils_hack').add_shim(); diff --git a/env/lib/python3.8/site-packages/docs/tutorial/conclusion.txt b/env/lib/python3.8/site-packages/docs/tutorial/conclusion.txt new file mode 100644 index 00000000..70f1f01e --- /dev/null +++ b/env/lib/python3.8/site-packages/docs/tutorial/conclusion.txt @@ -0,0 +1,13 @@ +.. _tutorial-conclusion: + +Conclusion +========== + +This tutorial currently only covers a small (but important) part of Dulwich. 
+It still needs to be extended to cover packs, refs, reflogs and network
+communication.
+
+Dulwich abstracts much of the Git plumbing, so there is more to see.
+
+For now, that's all folks!
diff --git a/env/lib/python3.8/site-packages/docs/tutorial/encoding.txt b/env/lib/python3.8/site-packages/docs/tutorial/encoding.txt
new file mode 100644
index 00000000..9f1ea60e
--- /dev/null
+++ b/env/lib/python3.8/site-packages/docs/tutorial/encoding.txt
@@ -0,0 +1,26 @@
+Encoding
+========
+
+You will notice that all lower-level functions in Dulwich take byte strings
+rather than unicode strings. This is intentional.
+
+Although `C git`_ recommends the use of UTF-8 for encoding, this is not
+strictly enforced and C git treats filenames as sequences of non-NUL bytes.
+There are repositories in the wild that use non-UTF-8 encoding for filenames
+and commit messages.
+
+.. _C git: https://github.com/git/git/blob/master/Documentation/i18n.txt
+
+The library should be able to read *all* existing git repositories,
+regardless of what encoding they use. This is the main reason why Dulwich
+does not convert paths to unicode strings.
+
+A further consideration is that converting back and forth to unicode
+is an extra performance penalty. E.g. if you are just iterating over file
+contents, there is no need to consider encoded strings. Users of the library
+may have specific assumptions they can make about the encoding - e.g. they
+could just decide that all their data is latin-1, or the default Python
+encoding.
+
+Higher level functions, such as the porcelain in dulwich.porcelain, will
+automatically convert unicode strings to UTF-8 bytestrings.
diff --git a/env/lib/python3.8/site-packages/docs/tutorial/file-format.txt b/env/lib/python3.8/site-packages/docs/tutorial/file-format.txt
new file mode 100644
index 00000000..afd21cec
--- /dev/null
+++ b/env/lib/python3.8/site-packages/docs/tutorial/file-format.txt
@@ -0,0 +1,100 @@
+Git File format
+===============
+
+For a better understanding of Dulwich, we'll start by explaining most of the
+Git secrets.
+
+Open the ".git" folder of any Git-managed repository. You'll find folders
+like "branches", "hooks"... We're only interested in "objects" here. Open it.
+
+You'll mostly see 2-hex-digit folders. Git identifies content by its SHA-1
+digest. The 2 hex digits plus the 38 hex digits of the files inside these
+folders form the 40-character (or 20-byte) id of the Git objects you'll
+manage in Dulwich.
+
+We'll first study the three main objects:
+
+- The Commit;
+
+- The Tree;
+
+- The Blob.
+
+The Commit
+----------
+
+You're used to generating commits using Git. You have set up your name and
+e-mail, and you know how to see the history using ``git log``.
+
+A commit file looks like this::
+
+  commit <content length><NUL>tree <tree sha>
+  parent <parent sha>
+  [parent <parent sha> if several parents from merges]
+  author <author name> <author e-mail> <timestamp> <timezone>
+  committer <author name> <author e-mail> <timestamp> <timezone>
+
+  <commit message>
+
+But where are the changes you committed? The commit contains a reference to a
+tree.
+
+The Tree
+--------
+
+A tree is a collection of file information, the state of a single directory at
+a given point in time.
+
+A tree file looks like this::
+
+  tree <content length><NUL><file mode> <filename><NUL><item sha>...
+
+And repeats for every file in the tree.
+
+Note that the SHA-1 digest is in binary form here.
+
+The file mode is like the octal argument you could give to the ``chmod``
+command. Except it is in extended form to tell regular files from
+directories and other types.
+
+We now know how our files are referenced but we haven't found their actual
+content yet. That's where the reference to a blob comes in.
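+
+Before moving on, note that these mode values are ordinary ``stat``-style
+constants, so Python's ``stat`` module can classify them. A minimal
+illustrative sketch (plain Python, not part of Dulwich's API; the
+``describe_mode`` helper is hypothetical)::
+
+  import stat
+
+  def describe_mode(mode):
+      """Classify a Git tree-entry mode such as 0o100644."""
+      if stat.S_ISDIR(mode):
+          return "directory (subtree)"
+      if stat.S_ISLNK(mode):
+          return "symlink"
+      if mode & 0o111:
+          return "executable file"
+      return "regular file"
+
+  print(describe_mode(0o100644))  # regular file
+  print(describe_mode(0o040000))  # directory (subtree)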
+
+The Blob
+--------
+
+A blob is simply the content of the files you are versioning.
+
+A blob file looks like this::
+
+  blob <content length><NUL><content>
+
+If you change a single line, another blob will be generated by Git each time
+you successfully run ``git add``. This is how Git can quickly check out any
+version in time.
+
+Conversely, several identical files with different filenames generate only
+one blob. That's mostly why renames are so cheap and efficient in Git.
+
+Dulwich Objects
+---------------
+
+Dulwich implements these three objects with an API to easily access the
+information you need, while abstracting some more secrets Git is using to
+accelerate operations and reduce space.
+
+More About Git formats
+----------------------
+
+These three objects make up most of the contents of a Git repository and are
+used for the history. They can either appear as simple files on disk (one file
+per object) or in a ``pack`` file, which is a container for a number of these
+objects.
+
+There is also an index of the current state of the working copy in the
+repository as well as files to track the existing branches and tags.
+
+For a more detailed explanation of object formats and SHA-1 digests, see:
+http://www-cs-students.stanford.edu/~blynn/gitmagic/ch08.html
+
+Just note that recent versions of Git compress object files using zlib.
diff --git a/env/lib/python3.8/site-packages/docs/tutorial/index.txt b/env/lib/python3.8/site-packages/docs/tutorial/index.txt
new file mode 100644
index 00000000..5a249de3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/docs/tutorial/index.txt
@@ -0,0 +1,19 @@
+.. _tutorial:
+
+========
+Tutorial
+========
+
+.. toctree::
+   :maxdepth: 2
+
+   introduction
+   encoding
+   file-format
+   repo
+   object-store
+   remote
+   tag
+   porcelain
+   conclusion
+
diff --git a/env/lib/python3.8/site-packages/docs/tutorial/introduction.txt b/env/lib/python3.8/site-packages/docs/tutorial/introduction.txt
new file mode 100644
index 00000000..9fee9d86
--- /dev/null
+++ b/env/lib/python3.8/site-packages/docs/tutorial/introduction.txt
@@ -0,0 +1,20 @@
+.. _tutorial-introduction:
+
+Introduction
+============
+
+Like Git itself, Dulwich consists of two main layers: the so-called plumbing
+and the porcelain.
+
+The plumbing is the lower layer, and it deals with the Git object database and
+the nitty-gritty internals. The porcelain is roughly what you would expect to
+be exposed to as a user of the ``git`` command-line tool.
+
+Dulwich has a fairly complete plumbing implementation, and a more recently
+added porcelain implementation. The porcelain code lives in
+``dulwich.porcelain``.
+
+
+For the most part, this tutorial introduces you to the internal concepts of
+Git and the main plumbing parts of Dulwich. The last chapter covers
+the porcelain.
diff --git a/env/lib/python3.8/site-packages/docs/tutorial/object-store.txt b/env/lib/python3.8/site-packages/docs/tutorial/object-store.txt
new file mode 100644
index 00000000..b1746d3d
--- /dev/null
+++ b/env/lib/python3.8/site-packages/docs/tutorial/object-store.txt
@@ -0,0 +1,189 @@
+.. _tutorial-object-store:
+
+The object store
+================
+
+The objects are stored in the ``object store`` of the repository.
+
+    >>> from dulwich.repo import Repo
+    >>> repo = Repo.init("myrepo", mkdir=True)
+
+Initial commit
+--------------
+
+When you use Git, you generally add or modify content.
As our repository is +empty for now, we'll start by adding a new file:: + + >>> from dulwich.objects import Blob + >>> blob = Blob.from_string(b"My file content\n") + >>> print(blob.id.decode('ascii')) + c55063a4d5d37aa1af2b2dad3a70aa34dae54dc6 + +Of course you could create a blob from an existing file using ``from_file`` +instead. + +As said in the introduction, file content is separated from file name. Let's +give this content a name:: + + >>> from dulwich.objects import Tree + >>> tree = Tree() + >>> tree.add(b"spam", 0o100644, blob.id) + +Note that "0o100644" is the octal form for a regular file with common +permissions. You can hardcode them or you can use the ``stat`` module. + +The tree state of our repository still needs to be placed in time. That's the +job of the commit:: + + >>> from dulwich.objects import Commit, parse_timezone + >>> from time import time + >>> commit = Commit() + >>> commit.tree = tree.id + >>> author = b"Your Name " + >>> commit.author = commit.committer = author + >>> commit.commit_time = commit.author_time = int(time()) + >>> tz = parse_timezone(b'-0200')[0] + >>> commit.commit_timezone = commit.author_timezone = tz + >>> commit.encoding = b"UTF-8" + >>> commit.message = b"Initial commit" + +Note that the initial commit has no parents. + +At this point, the repository is still empty because all operations happen in +memory. Let's "commit" it. + + >>> object_store = repo.object_store + >>> object_store.add_object(blob) + +Now the ".git/objects" folder contains a first SHA-1 file. Let's continue +saving the changes:: + + >>> object_store.add_object(tree) + >>> object_store.add_object(commit) + +Now the physical repository contains three objects but still has no branch. +Let's create the master branch like Git would:: + + >>> repo.refs[b'refs/heads/master'] = commit.id + +The master branch now has a commit where to start. When we commit to master, we +are also moving HEAD, which is Git's currently checked out branch: + + >>> head = repo.refs[b'HEAD'] + >>> head == commit.id + True + >>> head == repo.refs[b'refs/heads/master'] + True + +How did that work? As it turns out, HEAD is a special kind of ref called a +symbolic ref, and it points at master. Most functions on the refs container +work transparently with symbolic refs, but we can also take a peek inside HEAD: + + >>> import sys + >>> print(repo.refs.read_ref(b'HEAD').decode(sys.getfilesystemencoding())) + ref: refs/heads/master + +Normally, you won't need to use read_ref. If you want to change what ref HEAD +points to, in order to check out another branch, just use set_symbolic_ref. + +Now our repository is officially tracking a branch named "master" referring to a +single commit. + +Playing again with Git +---------------------- + +At this point you can come back to the shell, go into the "myrepo" folder and +type ``git status`` to let Git confirm that this is a regular repository on +branch "master". + +Git will tell you that the file "spam" is deleted, which is normal because +Git is comparing the repository state with the current working copy. And we +have absolutely no working copy using Dulwich because we don't need it at +all! + +You can checkout the last state using ``git checkout -f``. The force flag +will prevent Git from complaining that there are uncommitted changes in the +working copy. 
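+
+If you would rather stay in Python, the working copy can also be
+materialised with Dulwich itself. A hedged sketch using
+``dulwich.index.build_index_from_tree`` (argument order as in recent
+Dulwich releases; check the signature of your installed version)::
+
+    >>> from dulwich.index import build_index_from_tree
+    >>> build_index_from_tree(repo.path, repo.index_path(),
+    ...                       repo.object_store, tree.id)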
+
+The file ``spam`` appears and, unsurprisingly, contains the same bytes as the
+blob::
+
+    $ cat spam
+    My file content
+
+Changing a File and Committing it
+---------------------------------
+
+Now that we have a first commit, the next one will show a difference.
+
+As seen in the introduction, it's about making a path in a tree point to a
+new blob. The old blob will remain to compute the diff. The tree is altered
+and the new commit's task is to point to this new version.
+
+Let's first build the blob::
+
+    >>> from dulwich.objects import Blob
+    >>> spam = Blob.from_string(b"My new file content\n")
+    >>> print(spam.id.decode('ascii'))
+    16ee2682887a962f854ebd25a61db16ef4efe49f
+
+An alternative is to alter the previously constructed blob object::
+
+    >>> blob.data = b"My new file content\n"
+    >>> print(blob.id.decode('ascii'))
+    16ee2682887a962f854ebd25a61db16ef4efe49f
+
+In any case, update the blob id known as "spam". You also have the
+opportunity of changing its mode::
+
+    >>> tree[b"spam"] = (0o100644, spam.id)
+
+Now let's record the change::
+
+    >>> from dulwich.objects import Commit
+    >>> from time import time
+    >>> c2 = Commit()
+    >>> c2.tree = tree.id
+    >>> c2.parents = [commit.id]
+    >>> c2.author = c2.committer = b"John Doe "
+    >>> c2.commit_time = c2.author_time = int(time())
+    >>> c2.commit_timezone = c2.author_timezone = 0
+    >>> c2.encoding = b"UTF-8"
+    >>> c2.message = b'Changing "spam"'
+
+In this new commit we record the changed tree id and, most importantly, the
+previous commit as the parent. Parents are actually a list because a commit
+may happen to have several parents after merging branches.
+
+Let's put the objects in the object store::
+
+    >>> repo.object_store.add_object(spam)
+    >>> repo.object_store.add_object(tree)
+    >>> repo.object_store.add_object(c2)
+
+You can already ask git to introspect this commit using ``git show`` and the
+value of ``c2.id`` as an argument. You'll see the difference with the
+previous blob recorded as "spam".
+
+The diff between the previous head and the new one can be printed using
+write_tree_diff::
+
+    >>> from dulwich.patch import write_tree_diff
+    >>> from io import BytesIO
+    >>> out = BytesIO()
+    >>> write_tree_diff(out, repo.object_store, commit.tree, tree.id)
+    >>> import sys; _ = sys.stdout.write(out.getvalue().decode('ascii'))
+    diff --git a/spam b/spam
+    index c55063a..16ee268 100644
+    --- a/spam
+    +++ b/spam
+    @@ -1 +1 @@
+    -My file content
+    +My new file content
+
+You won't see it using git log because the head is still the previous
+commit. It's easy to remedy::
+
+    >>> repo.refs[b'refs/heads/master'] = c2.id
+
+Now all git tools will work as expected.
diff --git a/env/lib/python3.8/site-packages/docs/tutorial/porcelain.txt b/env/lib/python3.8/site-packages/docs/tutorial/porcelain.txt
new file mode 100644
index 00000000..84da8ebe
--- /dev/null
+++ b/env/lib/python3.8/site-packages/docs/tutorial/porcelain.txt
@@ -0,0 +1,47 @@
+Porcelain
+=========
+
+The ``porcelain`` is the higher level interface, built on top of the lower
+level implementation covered in previous chapters of this tutorial. The
+``dulwich.porcelain`` module in Dulwich aims to closely resemble
+the Git command-line API that you are familiar with.
+
+Basic concepts
+--------------
+The porcelain operations are implemented as top-level functions in the
+``dulwich.porcelain`` module. Most arguments can either be strings or
+more complex Dulwich objects; e.g.
a repository argument will either take +a string with a path to the repository or an instance of a ``Repo`` object. + +Initializing a new repository +----------------------------- + + >>> from dulwich import porcelain + + >>> repo = porcelain.init("myrepo") + +Clone a repository +------------------ + + >>> porcelain.clone("git://github.com/jelmer/dulwich", "dulwich-clone") + +Basic authentication works using the ``username`` and ``password`` parameters: + + >>> porcelain.clone( + "https://example.com/a-private-repo.git", + "a-private-repo-clone", + username="user", password="password") + +Commit changes +-------------- + + >>> r = porcelain.init("testrepo") + >>> open("testrepo/testfile", "w").write("data") + >>> porcelain.add(r, "testfile") + >>> porcelain.commit(r, b"A sample commit") + +Push changes +------------ + + >>> tr = porcelain.init("targetrepo") + >>> r = porcelain.push("testrepo", "targetrepo", "master") diff --git a/env/lib/python3.8/site-packages/docs/tutorial/remote.txt b/env/lib/python3.8/site-packages/docs/tutorial/remote.txt new file mode 100644 index 00000000..94fd0a48 --- /dev/null +++ b/env/lib/python3.8/site-packages/docs/tutorial/remote.txt @@ -0,0 +1,87 @@ +.. _tutorial-remote: + +Most of the tests in this file require a Dulwich server, so let's start one: + + >>> from dulwich.repo import Repo + >>> from dulwich.server import DictBackend, TCPGitServer + >>> import threading + >>> repo = Repo.init("remote", mkdir=True) + >>> cid = repo.do_commit(b"message", committer=b"Jelmer ") + >>> backend = DictBackend({b'/': repo}) + >>> dul_server = TCPGitServer(backend, b'localhost', 0) + >>> threading.Thread(target=dul_server.serve).start() + >>> server_address, server_port=dul_server.socket.getsockname() + +Remote repositories +=================== + +The interface for remote Git repositories is different from that +for local repositories. + +The Git smart server protocol provides three basic operations: + + * upload-pack - provides a pack with objects requested by the client + * receive-pack - imports a pack with objects provided by the client + * upload-archive - provides a tarball with the contents of a specific revision + +The smart server protocol can be accessed over either plain TCP (git://), +SSH (git+ssh://) or tunneled over HTTP (http://). + +Dulwich provides support for accessing remote repositories in +``dulwich.client``. To create a new client, you can construct +one manually:: + + >>> from dulwich.client import TCPGitClient + >>> client = TCPGitClient(server_address, server_port) + +Retrieving raw pack files +------------------------- + +The client object can then be used to retrieve a pack. The ``fetch_pack`` +method takes a ``determine_wants`` callback argument, which allows the +client to determine which objects it wants to end up with:: + + >>> def determine_wants(refs, depth=None): + ... # retrieve all objects + ... return refs.values() + +Note that the ``depth`` keyword argument will contain an optional requested +shallow fetch depth. + +Another required object is a "graph walker", which is used to determine +which objects that the client already has should not be sent again +by the server. Here in the tutorial we'll just use a dummy graph walker +which claims that the client doesn't have any objects:: + + >>> class DummyGraphWalker(object): + ... def ack(self, sha): pass + ... def next(self): pass + ... 
def __next__(self): pass + +With the ``determine_wants`` function in place, we can now fetch a pack, +which we will write to a ``BytesIO`` object:: + + >>> from io import BytesIO + >>> f = BytesIO() + >>> result = client.fetch_pack(b"/", determine_wants, + ... DummyGraphWalker(), pack_data=f.write) + +``f`` will now contain a full pack file:: + + >>> print(f.getvalue()[:4].decode('ascii')) + PACK + +Fetching objects into a local repository +---------------------------------------- + +It is also possible to fetch from a remote repository into a local repository, +in which case Dulwich takes care of providing the right graph walker, and +importing the received pack file into the local repository:: + + >>> from dulwich.repo import Repo + >>> local = Repo.init("local", mkdir=True) + >>> remote_refs = client.fetch(b"/", local) + +Let's shut down the server now that all tests have been run:: + + >>> dul_server.shutdown() diff --git a/env/lib/python3.8/site-packages/docs/tutorial/repo.txt b/env/lib/python3.8/site-packages/docs/tutorial/repo.txt new file mode 100644 index 00000000..1597ea3e --- /dev/null +++ b/env/lib/python3.8/site-packages/docs/tutorial/repo.txt @@ -0,0 +1,101 @@ +.. _tutorial-repo: + +The repository +============== + +After this introduction, let's start directly with code:: + + >>> from dulwich.repo import Repo + +The access to a repository is through the Repo object. You can open an +existing repository or you can create a new one. There are two types of Git +repositories: + + Regular Repositories -- They are the ones you create using ``git init`` and + you daily use. They contain a ``.git`` folder. + + Bare Repositories -- There is no ".git" folder. The top-level folder + contains itself the "branches", "hooks"... folders. These are used for + published repositories (mirrors). They do not have a working tree. + +Creating a repository +--------------------- + +Let's create a folder and turn it into a repository, like ``git init`` would:: + + >>> from os import mkdir + >>> import sys + >>> mkdir("myrepo") + >>> repo = Repo.init("myrepo") + >>> repo + + +You can already look at the structure of the "myrepo/.git" folder, though it +is mostly empty for now. + +Opening an existing repository +------------------------------ + +To reopen an existing repository, simply pass its path to the constructor +of ``Repo``:: + + >>> repo = Repo("myrepo") + >>> repo + + +Opening the index +----------------- + +The index is used as a staging area. Once you do a commit, +the files tracked in the index will be recorded as the contents of the new +commit. As mentioned earlier, only non-bare repositories have a working tree, +so only non-bare repositories will have an index, too. To open the index, simply +call:: + + >>> index = repo.open_index() + >>> print(index.path) + myrepo/.git/index + +Since the repository was just created, the index will be empty:: + + >>> list(index) + [] + +Staging new files +----------------- + +The repository allows "staging" files. Only files can be staged - directories +aren't tracked explicitly by git. Let's create a simple text file and stage it:: + + >>> f = open('myrepo/foo', 'wb') + >>> _ = f.write(b"monty") + >>> f.close() + + >>> repo.stage([b"foo"]) + +It will now show up in the index:: + + >>> print(",".join([f.decode(sys.getfilesystemencoding()) for f in repo.open_index()])) + foo + + +Creating new commits +-------------------- + +Now that we have staged a change, we can commit it. The easiest way to +do this is by using ``Repo.do_commit``. 
It is also possible to manipulate
+the lower-level objects involved in this, but we'll leave that for a
+separate chapter of the tutorial.
+
+To create a simple commit on the current branch, it is only necessary
+to specify the message. The committer and author will be retrieved from the
+repository configuration or global configuration if they are not specified::
+
+    >>> commit_id = repo.do_commit(
+    ...     b"The first commit", committer=b"Jelmer Vernooij ")
+
+``do_commit`` returns the SHA-1 of the commit. Since the commit was made on
+the default branch, the repository's head will now be set to that commit::
+
+    >>> repo.head() == commit_id
+    True
diff --git a/env/lib/python3.8/site-packages/docs/tutorial/tag.txt b/env/lib/python3.8/site-packages/docs/tutorial/tag.txt
new file mode 100644
index 00000000..663e9a00
--- /dev/null
+++ b/env/lib/python3.8/site-packages/docs/tutorial/tag.txt
@@ -0,0 +1,57 @@
+.. _tutorial-tag:
+
+Tagging
+=======
+
+This tutorial will demonstrate how to add a tag to a commit via dulwich.
+
+First let's initialize the repository:
+
+    >>> from dulwich.repo import Repo
+    >>> _repo = Repo.init("myrepo", mkdir=True)
+
+Next we build the commit object and add it to the object store:
+
+    >>> from dulwich.objects import Blob, Tree, Commit, parse_timezone
+    >>> from time import time
+    >>> permissions = 0o100644
+    >>> author = b"John Smith"
+    >>> blob = Blob.from_string(b"empty")
+    >>> tree = Tree()
+    >>> tree.add(b"spam", permissions, blob.id)
+    >>> commit = Commit()
+    >>> commit.tree = tree.id
+    >>> commit.author = commit.committer = author
+    >>> commit.commit_time = commit.author_time = int(time())
+    >>> tz = parse_timezone(b'-0200')[0]
+    >>> commit.commit_timezone = commit.author_timezone = tz
+    >>> commit.encoding = b"UTF-8"
+    >>> commit.message = b"Tagging repo"
+
+Add objects to the repo store instance:
+
+    >>> object_store = _repo.object_store
+    >>> object_store.add_object(blob)
+    >>> object_store.add_object(tree)
+    >>> object_store.add_object(commit)
+    >>> master_branch = b'master'
+    >>> _repo.refs[b'refs/heads/' + master_branch] = commit.id
+
+Finally, add the (lightweight) tag to the repo:
+
+    >>> _repo.refs[b'refs/tags/v0.1'] = commit.id
+
+Alternatively, we can use a tag object if we'd like to annotate the tag:
+
+    >>> from dulwich.objects import Tag
+    >>> tag_message = b"Tag Annotation"
+    >>> tag = Tag()
+    >>> tag.tagger = author
+    >>> tag.message = tag_message
+    >>> tag.name = b"v0.1"
+    >>> tag.object = (Commit, commit.id)
+    >>> tag.tag_time = commit.author_time
+    >>> tag.tag_timezone = tz
+    >>> object_store.add_object(tag)
+    >>> _repo.refs[b'refs/tags/' + tag.name] = tag.id
+
+
diff --git a/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/COPYING b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/COPYING
new file mode 100644
index 00000000..551c4a93
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/COPYING
@@ -0,0 +1,548 @@
+Dulwich may be used under the conditions of either of two licenses,
+the Apache License (version 2.0 or later) or the GNU General Public License,
+version 2.0 or later.
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+ + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+ + GNU GENERAL PUBLIC LICENSE + Version 2, June 1991 + + Copyright (C) 1989, 1991 Free Software Foundation, Inc., + 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. By contrast, the GNU General Public +License is intended to guarantee your freedom to share and change free +software--to make sure the software is free for all its users. This +General Public License applies to most of the Free Software +Foundation's software and to any other program whose authors commit to +using it. (Some other Free Software Foundation software is covered by +the GNU Lesser General Public License instead.) You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +this service if you wish), that you receive source code or can get it +if you want it, that you can change the software or use pieces of it +in new free programs; and that you know you can do these things. + + To protect your rights, we need to make restrictions that forbid +anyone to deny you these rights or to ask you to surrender the rights. +These restrictions translate to certain responsibilities for you if you +distribute copies of the software, or if you modify it. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must give the recipients all the rights that +you have. You must make sure that they, too, receive or can get the +source code. And you must show them these terms so they know their +rights. + + We protect your rights with two steps: (1) copyright the software, and +(2) offer you this license which gives you legal permission to copy, +distribute and/or modify the software. + + Also, for each author's protection and ours, we want to make certain +that everyone understands that there is no warranty for this free +software. If the software is modified by someone else and passed on, we +want its recipients to know that what they have is not the original, so +that any problems introduced by others will not reflect on the original +authors' reputations. + + Finally, any free program is threatened constantly by software +patents. We wish to avoid the danger that redistributors of a free +program will individually obtain patent licenses, in effect making the +program proprietary. To prevent this, we have made it clear that any +patent must be licensed for everyone's free use or not licensed at all. + + The precise terms and conditions for copying, distribution and +modification follow. + + GNU GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. This License applies to any program or other work which contains +a notice placed by the copyright holder saying it may be distributed +under the terms of this General Public License. The "Program", below, +refers to any such program or work, and a "work based on the Program" +means either the Program or any derivative work under copyright law: +that is to say, a work containing the Program or a portion of it, +either verbatim or with modifications and/or translated into another +language. (Hereinafter, translation is included without limitation in +the term "modification".) 
Each licensee is addressed as "you". + +Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of +running the Program is not restricted, and the output from the Program +is covered only if its contents constitute a work based on the +Program (independent of having been made by running the Program). +Whether that is true depends on what the Program does. + + 1. You may copy and distribute verbatim copies of the Program's +source code as you receive it, in any medium, provided that you +conspicuously and appropriately publish on each copy an appropriate +copyright notice and disclaimer of warranty; keep intact all the +notices that refer to this License and to the absence of any warranty; +and give any other recipients of the Program a copy of this License +along with the Program. + +You may charge a fee for the physical act of transferring a copy, and +you may at your option offer warranty protection in exchange for a fee. + + 2. You may modify your copy or copies of the Program or any portion +of it, thus forming a work based on the Program, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) You must cause the modified files to carry prominent notices + stating that you changed the files and the date of any change. + + b) You must cause any work that you distribute or publish, that in + whole or in part contains or is derived from the Program or any + part thereof, to be licensed as a whole at no charge to all third + parties under the terms of this License. + + c) If the modified program normally reads commands interactively + when run, you must cause it, when started running for such + interactive use in the most ordinary way, to print or display an + announcement including an appropriate copyright notice and a + notice that there is no warranty (or else, saying that you provide + a warranty) and that users may redistribute the program under + these conditions, and telling the user how to view a copy of this + License. (Exception: if the Program itself is interactive but + does not normally print such an announcement, your work based on + the Program is not required to print an announcement.) + +These requirements apply to the modified work as a whole. If +identifiable sections of that work are not derived from the Program, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. But when you +distribute the same sections as part of a whole which is a work based +on the Program, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Program. + +In addition, mere aggregation of another work not based on the Program +with the Program (or with a work based on the Program) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. 
You may copy and distribute the Program (or a work based on it, +under Section 2) in object code or executable form under the terms of +Sections 1 and 2 above provided that you also do one of the following: + + a) Accompany it with the complete corresponding machine-readable + source code, which must be distributed under the terms of Sections + 1 and 2 above on a medium customarily used for software interchange; or, + + b) Accompany it with a written offer, valid for at least three + years, to give any third party, for a charge no more than your + cost of physically performing source distribution, a complete + machine-readable copy of the corresponding source code, to be + distributed under the terms of Sections 1 and 2 above on a medium + customarily used for software interchange; or, + + c) Accompany it with the information you received as to the offer + to distribute corresponding source code. (This alternative is + allowed only for noncommercial distribution and only if you + received the program in object code or executable form with such + an offer, in accord with Subsection b above.) + +The source code for a work means the preferred form of the work for +making modifications to it. For an executable work, complete source +code means all the source code for all modules it contains, plus any +associated interface definition files, plus the scripts used to +control compilation and installation of the executable. However, as a +special exception, the source code distributed need not include +anything that is normally distributed (in either source or binary +form) with the major components (compiler, kernel, and so on) of the +operating system on which the executable runs, unless that component +itself accompanies the executable. + +If distribution of executable or object code is made by offering +access to copy from a designated place, then offering equivalent +access to copy the source code from the same place counts as +distribution of the source code, even though third parties are not +compelled to copy the source along with the object code. + + 4. You may not copy, modify, sublicense, or distribute the Program +except as expressly provided under this License. Any attempt +otherwise to copy, modify, sublicense or distribute the Program is +void, and will automatically terminate your rights under this License. +However, parties who have received copies, or rights, from you under +this License will not have their licenses terminated so long as such +parties remain in full compliance. + + 5. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Program or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Program (or any work based on the +Program), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Program or works based on it. + + 6. Each time you redistribute the Program (or any work based on the +Program), the recipient automatically receives a license from the +original licensor to copy, distribute or modify the Program subject to +these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties to +this License. + + 7. 
If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Program at all. For example, if a patent +license would not permit royalty-free redistribution of the Program by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Program. + +If any portion of this section is held invalid or unenforceable under +any particular circumstance, the balance of the section is intended to +apply and the section as a whole is intended to apply in other +circumstances. + +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system, which is +implemented by public license practices. Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 8. If the distribution and/or use of the Program is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Program under this License +may add an explicit geographical distribution limitation excluding +those countries, so that distribution is permitted only in or among +countries not thus excluded. In such case, this License incorporates +the limitation as if written in the body of this License. + + 9. The Free Software Foundation may publish revised and/or new versions +of the General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + +Each version is given a distinguishing version number. If the Program +specifies a version number of this License which applies to it and "any +later version", you have the option of following the terms and conditions +either of that version or of any later version published by the Free +Software Foundation. If the Program does not specify a version number of +this License, you may choose any version ever published by the Free Software +Foundation. + + 10. If you wish to incorporate parts of the Program into other free +programs whose distribution conditions are different, write to the author +to ask for permission. For software which is copyrighted by the Free +Software Foundation, write to the Free Software Foundation; we sometimes +make exceptions for this. Our decision will be guided by the two goals +of preserving the free status of all derivatives of our free software and +of promoting the sharing and reuse of software generally. + + NO WARRANTY + + 11. 
BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License along
+    with this program; if not, write to the Free Software Foundation, Inc.,
+    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+    Gnomovision version 69, Copyright (C) year name of author
+    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary.
Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+  `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+  <signature of Ty Coon>, 1 April 1989
+  Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs.  If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.
+
+
diff --git a/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/INSTALLER b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/INSTALLER
new file mode 100644
index 00000000..a1b589e3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/METADATA b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/METADATA
new file mode 100644
index 00000000..c67096bb
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/METADATA
@@ -0,0 +1,128 @@
+Metadata-Version: 2.1
+Name: dulwich
+Version: 0.20.50
+Summary: Python Git Library
+Home-page: https://www.dulwich.io/
+Author: Jelmer Vernooij
+Author-email: jelmer@jelmer.uk
+License: Apachev2 or later or GPLv2
+Project-URL: Repository, https://www.dulwich.io/code/
+Project-URL: GitHub, https://github.com/dulwich/dulwich
+Project-URL: Bug Tracker, https://github.com/dulwich/dulwich/issues
+Keywords: vcs,git
+Classifier: Development Status :: 4 - Beta
+Classifier: License :: OSI Approved :: Apache Software License
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Classifier: Programming Language :: Python :: Implementation :: PyPy
+Classifier: Operating System :: POSIX
+Classifier: Operating System :: Microsoft :: Windows
+Classifier: Topic :: Software Development :: Version Control
+Requires-Python: >=3.6
+License-File: COPYING
+Requires-Dist: urllib3 (>=1.25)
+Provides-Extra: fastimport
+Requires-Dist: fastimport ; extra == 'fastimport'
+Provides-Extra: https
+Requires-Dist: urllib3 (>=1.24.1) ; extra == 'https'
+Provides-Extra: paramiko
+Requires-Dist: paramiko ; extra == 'paramiko'
+Provides-Extra: pgp
+Requires-Dist: gpg ; extra == 'pgp'
+
+Dulwich
+=======
+
+This is the Dulwich project.
+
+It aims to provide an interface to git repos (both local and remote) that
+doesn't call out to git directly but instead uses pure Python.
+
+**Main website**: <https://www.dulwich.io/>
+
+**License**: Apache License, version 2 or GNU General Public License, version 2 or later.
+
+The project is named after the part of London that Mr. and Mrs. Git live in,
+in the particular Monty Python sketch.
+
+Installation
+------------
+
+By default, Dulwich's setup.py will attempt to build and install the optional C
+extensions. The reason for this is that they significantly improve the performance,
+since some frequently executed low-level operations are much slower in pure Python.
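+
+To confirm that the compiled extensions were actually built and installed, a
+quick check (a minimal sketch, not an official dulwich API -- it simply
+imports one of the C modules shipped in this wheel) is::
+
+    >>> import dulwich._diff_tree  # an ImportError means the pure-Python build is in use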
+ +If you don't want to install the C bindings, specify the --pure argument to setup.py:: + + $ python setup.py --pure install + +or if you are installing from pip:: + + $ pip install dulwich --global-option="--pure" + +Note that you can also specify --global-option in a +`requirements.txt `_ +file, e.g. like this:: + + dulwich --global-option=--pure + +Getting started +--------------- + +Dulwich comes with both a lower-level API and higher-level plumbing ("porcelain"). + +For example, to use the lower level API to access the commit message of the +last commit:: + + >>> from dulwich.repo import Repo + >>> r = Repo('.') + >>> r.head() + '57fbe010446356833a6ad1600059d80b1e731e15' + >>> c = r[r.head()] + >>> c + + >>> c.message + 'Add note about encoding.\n' + +And to print it using porcelain:: + + >>> from dulwich import porcelain + >>> porcelain.log('.', max_entries=1) + -------------------------------------------------- + commit: 57fbe010446356833a6ad1600059d80b1e731e15 + Author: Jelmer Vernooij + Date: Sat Apr 29 2017 23:57:34 +0000 + + Add note about encoding. + +Further documentation +--------------------- + +The dulwich documentation can be found in docs/ and built by running ``make +doc``. It can also be found `on the web `_. + +Help +---- + +There is a *#dulwich* IRC channel on the `OFTC `_, and +a `dulwich-discuss `_ +mailing list. + +Contributing +------------ + +For a full list of contributors, see the git logs or `AUTHORS `_. + +If you'd like to contribute to Dulwich, see the `CONTRIBUTING `_ +file and `list of open issues `_. + +Supported versions of Python +---------------------------- + +At the moment, Dulwich supports (and is tested on) CPython 3.6 and later and +Pypy. diff --git a/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/RECORD b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/RECORD new file mode 100644 index 00000000..aa58eb03 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/RECORD @@ -0,0 +1,222 @@ +../../../bin/dul-receive-pack,sha256=lxw5R6kL8MCBuXUXhhrJfWQ7VGgN1L3BgeVG_zUQhU0,1192 +../../../bin/dul-upload-pack,sha256=IXkL1JyWmXVNwWzv54otG2Rl4aJe1KzhPtybRO_CTbU,1188 +../../../bin/dulwich,sha256=jEXk80AumBRQle0txBYZiBO6n_qYtpd6pbA1l4u5MrM,261 +docs/tutorial/conclusion.txt,sha256=jF_ye3QMRfa4pOxJxbo2x33AVg1JpZHMs-nEX-n5AAU,322 +docs/tutorial/encoding.txt,sha256=l0IJ02ak8HGtLCul-5gxX3zSF0zy-0rGtuRE18dC1ws,1164 +docs/tutorial/file-format.txt,sha256=OdbtQQXERYpaDDg3mUxVzPhbupmXkaGGL7mz-hs_wsY,3098 +docs/tutorial/index.txt,sha256=o5ENguWHJvn-dC35Um9kCLGgcYWMxqsK1R4vdlqYmmY,186 +docs/tutorial/introduction.txt,sha256=7gYQp0nWDvpipJupTb7LQJaskQWv8aknn0Sekyl2yfw,687 +docs/tutorial/object-store.txt,sha256=EgG54dzuU2GxpVVso6S-PMMR3V8jg497lt1YKnyrKmY,6242 +docs/tutorial/porcelain.txt,sha256=K5ztpfz_R1uJ_XDVRNSRaRn0vMm_6Fn1Bo3TECVHcts,1421 +docs/tutorial/remote.txt,sha256=BqwTIJM_RaolizQUVlzz6CwkpfnhJSYhv75w98j9mjc,3147 +docs/tutorial/repo.txt,sha256=Cr4Lu7s5EL9NiLo4KQEIPiSWK6gRUfoetn7J61Iy8Pw,3020 +docs/tutorial/tag.txt,sha256=Zki-_knHbp88qaGluo2jJB6BuJPE9sNto3lX0j8wFvg,1762 +dulwich-0.20.50.dist-info/COPYING,sha256=wrOh9MbCR1FO_Ony4eKMj_8xqtb9FTmqtWdtrTd5zok,29516 +dulwich-0.20.50.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +dulwich-0.20.50.dist-info/METADATA,sha256=uuV0DLHJAeWTBo-qY6BwBRjzGxy-SK7jk2nL7K1oeIQ,4198 +dulwich-0.20.50.dist-info/RECORD,, +dulwich-0.20.50.dist-info/WHEEL,sha256=-ijGDuALlPxm3HbhKntps0QzHsi-DPlXqgerYTTJkFE,148 
+dulwich-0.20.50.dist-info/entry_points.txt,sha256=xlouG51seFuiV2AgaQ0toSLk_O3lQkt3Yi0Xz6Hpm-I,45 +dulwich-0.20.50.dist-info/top_level.txt,sha256=nM2tVoc5Jm7klW6oKwk6y7y2CbgLcs4NHm-lKZ-up6I,8 +dulwich/__init__.py,sha256=q3URVZV7icDYois_FgWsX29vueLgh10MBHALcSWHoYU,1112 +dulwich/__main__.py,sha256=FeAQQ-Ghak6zVgKtQyJxtfxDTRMPzFJ5-RAdF7oei9M,62 +dulwich/__pycache__/__init__.cpython-38.pyc,, +dulwich/__pycache__/__main__.cpython-38.pyc,, +dulwich/__pycache__/archive.cpython-38.pyc,, +dulwich/__pycache__/bundle.cpython-38.pyc,, +dulwich/__pycache__/cli.cpython-38.pyc,, +dulwich/__pycache__/client.cpython-38.pyc,, +dulwich/__pycache__/config.cpython-38.pyc,, +dulwich/__pycache__/credentials.cpython-38.pyc,, +dulwich/__pycache__/diff_tree.cpython-38.pyc,, +dulwich/__pycache__/errors.cpython-38.pyc,, +dulwich/__pycache__/fastexport.cpython-38.pyc,, +dulwich/__pycache__/file.cpython-38.pyc,, +dulwich/__pycache__/graph.cpython-38.pyc,, +dulwich/__pycache__/greenthreads.cpython-38.pyc,, +dulwich/__pycache__/hooks.cpython-38.pyc,, +dulwich/__pycache__/ignore.cpython-38.pyc,, +dulwich/__pycache__/index.cpython-38.pyc,, +dulwich/__pycache__/lfs.cpython-38.pyc,, +dulwich/__pycache__/line_ending.cpython-38.pyc,, +dulwich/__pycache__/log_utils.cpython-38.pyc,, +dulwich/__pycache__/lru_cache.cpython-38.pyc,, +dulwich/__pycache__/mailmap.cpython-38.pyc,, +dulwich/__pycache__/object_store.cpython-38.pyc,, +dulwich/__pycache__/objects.cpython-38.pyc,, +dulwich/__pycache__/objectspec.cpython-38.pyc,, +dulwich/__pycache__/pack.cpython-38.pyc,, +dulwich/__pycache__/patch.cpython-38.pyc,, +dulwich/__pycache__/porcelain.cpython-38.pyc,, +dulwich/__pycache__/protocol.cpython-38.pyc,, +dulwich/__pycache__/reflog.cpython-38.pyc,, +dulwich/__pycache__/refs.cpython-38.pyc,, +dulwich/__pycache__/repo.cpython-38.pyc,, +dulwich/__pycache__/server.cpython-38.pyc,, +dulwich/__pycache__/stash.cpython-38.pyc,, +dulwich/__pycache__/submodule.cpython-38.pyc,, +dulwich/__pycache__/walk.cpython-38.pyc,, +dulwich/__pycache__/web.cpython-38.pyc,, +dulwich/_diff_tree.c,sha256=LfOGMLE-IvFLuo8BYBkoMkOVF7IExb4Cf1Vv1gruaQA,11125 +dulwich/_diff_tree.cpython-38-x86_64-linux-gnu.so,sha256=W-j3mjHM19jEkIfPe6vAk0vTja0xzBmQzIpwT_om28s,66760 +dulwich/_objects.c,sha256=FyUOqiXQ5Wu_Ixir62GhSA3_hJn442DlK7giikU57iA,8096 +dulwich/_objects.cpython-38-x86_64-linux-gnu.so,sha256=Ae3iFtMsiLu4FdaoR9fyVIoalhHy--qpAAPON3ciitE,42960 +dulwich/_pack.c,sha256=JTVULj9S6B9Uubjy5jD85wEWR5o2ztRVSyI_FRj7uPs,6732 +dulwich/_pack.cpython-38-x86_64-linux-gnu.so,sha256=Na0Dul6VAmt2-rlfHbZhVbFRc88QeSQVlWEcnWS2yAo,41352 +dulwich/archive.py,sha256=6OdtmA-tiRjkX8LervTqVEyUU-zb9FPKjlkIbT6s2XI,4810 +dulwich/bundle.py,sha256=N1f9O8T7CxFk3puIvzycmyfT2jpRTfQbxSKMAU-dSgM,4148 +dulwich/cli.py,sha256=fSziY4xnbFKJitirGbDcvSVF3y4b8YCWWNGWBxBBE9U,22545 +dulwich/client.py,sha256=UQBXTlnHZ6IfZtN5kR0nFpxQtOBqw-YzTwwv9hFGFOo,76405 +dulwich/cloud/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +dulwich/cloud/__pycache__/__init__.cpython-38.pyc,, +dulwich/cloud/__pycache__/gcs.cpython-38.pyc,, +dulwich/cloud/gcs.py,sha256=xdzwOmeEqdwdlBuhffNgnKa7rwzjUH5vBz8d9aTctkY,3039 +dulwich/config.py,sha256=sX-TQKe_sn3WpcWatnCatUtnH74gDrskvm8mlaR6DGc,24785 +dulwich/contrib/__init__.py,sha256=pmqf26IsjVLGdGR8RhLP4vdTpH1qo5ILvFclipWNBxk,1241 +dulwich/contrib/__pycache__/__init__.cpython-38.pyc,, +dulwich/contrib/__pycache__/diffstat.cpython-38.pyc,, +dulwich/contrib/__pycache__/paramiko_vendor.cpython-38.pyc,, +dulwich/contrib/__pycache__/release_robot.cpython-38.pyc,, 
+dulwich/contrib/__pycache__/requests_vendor.cpython-38.pyc,, +dulwich/contrib/__pycache__/swift.cpython-38.pyc,, +dulwich/contrib/__pycache__/test_paramiko_vendor.cpython-38.pyc,, +dulwich/contrib/__pycache__/test_release_robot.cpython-38.pyc,, +dulwich/contrib/__pycache__/test_swift.cpython-38.pyc,, +dulwich/contrib/__pycache__/test_swift_smoke.cpython-38.pyc,, +dulwich/contrib/diffstat.py,sha256=nzFRyw-fPrHnvcN_sMQGdY6wwdbpu1Qiqeczr4pEn8U,13496 +dulwich/contrib/paramiko_vendor.py,sha256=9Z3Sehr6Nyjtdr99QYo3SBFW67XbX-wWb8FIZU_7Cc0,3472 +dulwich/contrib/release_robot.py,sha256=Sd-S2KkHi4Dzy1fRmDJerncYDniwk_gkvAXfLlfiHNE,5319 +dulwich/contrib/requests_vendor.py,sha256=x9qrJas46r-tg8KNalef5BRC_IBX01avIerg2-MJVds,4870 +dulwich/contrib/swift.py,sha256=F5y8g9ePR9LamSCS-0ChVnGqcCjS2Wxx5lQyOPlnkEM,34236 +dulwich/contrib/test_paramiko_vendor.py,sha256=xpfFRi6rXhu045ffwTPs3jp37VywUc5zDjbLdsbS2T0,8541 +dulwich/contrib/test_release_robot.py,sha256=I0DSUil44r2SemnAk_1lEFycH7wmA_tSl4vNyo6T1Js,4922 +dulwich/contrib/test_swift.py,sha256=0tcZixXtFLIgvqUIpCPnzSoQFRbcw2OOrdlWK-R8U40,16663 +dulwich/contrib/test_swift_smoke.py,sha256=pmvJ3QT3-saN4v_TP6hJ6M1HdgcrikQHLYD7z2LDvXk,12270 +dulwich/credentials.py,sha256=EC7LnTB6Cw9IWByBLHwcek35PF9C66SDQPA2oVjBOOI,3239 +dulwich/diff_tree.py,sha256=0PKlpMsYin6L26MhlJP-6T4WwwLXNlKJeErmy5jUcLU,22733 +dulwich/errors.py,sha256=X_YzvIKC7g9hl-bFdTCyUaU3oo-wsfNyhla7dgm1vwM,6019 +dulwich/fastexport.py,sha256=cMr91F0lsxDDm6QIszvw9jEd8popd6-EfKncdmqah-Y,8902 +dulwich/file.py,sha256=RyHg9cM69qHpWJbBNm9nD2_Y2JTrYu0puK4G5tMBby8,6984 +dulwich/graph.py,sha256=RfMu8YvuVW5YoOZYWPCnsQU-XI2Lf-2ppRXauCQs1WA,4555 +dulwich/greenthreads.py,sha256=Y3VMtGWRCPNeOG5lCMR_4IbGY7oe35hbjC4GIyqiwzA,4953 +dulwich/hooks.py,sha256=xCoPbPMW9l3htehO1Mtq8huZD6pfU6pPn5ery88Q52s,6483 +dulwich/ignore.py,sha256=r_XNWDSOfZOpCqJk2u2ngkovUHm8uFH39OOum2ZdNN4,11868 +dulwich/index.py,sha256=_yAA0Xyr_j5ZQFcK78fblbq6zUOYfE64lIG5dQqmT6k,30591 +dulwich/lfs.py,sha256=29jZc5T9XtOi8UWSxz6uw3ZJN3Zysz3mX2KDvrO1Z1E,2505 +dulwich/line_ending.py,sha256=tZWOv3Y7RVNG3xJeSm7za69jmPCY1qoCqkOGjP16D1Q,11560 +dulwich/log_utils.py,sha256=AtmtOdYq1UVQn_HKvG3_HJPZPu8B9Rz57wrbigdWgpM,2575 +dulwich/lru_cache.py,sha256=jmRIuL_iWf7j68zUGF-LHRM9CjIHUDGprXdcdTuRF8E,14414 +dulwich/mailmap.py,sha256=iR5SIVerCYfppGXyJY9YbE5KTp4VSttJVXkoNns5Sh4,4011 +dulwich/object_store.py,sha256=2dOaUGht8mp1Y-XGv13vfT9C6sGSFO5bsbrNHmrR0rY,52664 +dulwich/objects.py,sha256=1h6h-x0E5EROPvL09s3Ppn3ivI_w0Lw64R1UuVrv0u0,51862 +dulwich/objectspec.py,sha256=pA552T7xKU0RehXhrtscAnIEMS0ivUhBGoqQh_0wdb4,6860 +dulwich/pack.py,sha256=jYkc7CQVgszZtR4-xvMzXRsrjC6lqs55TOQ-66tHc1E,75546 +dulwich/patch.py,sha256=gRtD5SZIWOWuG70NZCdHrHs0UYMMOZnjkN2cNKVJ7eM,12590 +dulwich/porcelain.py,sha256=9Ub7YOBBfp7mvYLLW2LIJ-g0Rt9QRMCbC6YQc35Y-tk,62922 +dulwich/protocol.py,sha256=EjMTk9jvf_fhBxfKbrowN6lDWYmRpIz1tIYhuAnjreY,18273 +dulwich/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +dulwich/reflog.py,sha256=igG2xfR2kznfRm-Q5PdCITbJ_iR5UTo5SKIyx6L3cyc,4150 +dulwich/refs.py,sha256=MLPoIP25XHyv0F_LS-b1bpoaIp-qzbEkNFMMSTVmXYg,37813 +dulwich/repo.py,sha256=m8Vggv9FXcgzqC8OmHtlbESy91Et0bBKjaLBb_rWP1w,60195 +dulwich/server.py,sha256=Jk4F8FaDuYSL_Snu1NE9SIGa4XgvKSne3kzjIe0hXic,43401 +dulwich/stash.py,sha256=SU7uxMqyf-9aZyNtV-FhLiDRzelkRbGRCE4h8UbCa5A,4088 +dulwich/stdint.h,sha256=B36r9z8CPO4QzpDIi2Fa8M0lj68ZLblCdaoP5HVrAEs,385 +dulwich/submodule.py,sha256=XFt30B27uMRwWpV_B8BSw_puwgbSLno01xqZI2WyLzw,1485 
+dulwich/tests/__init__.py,sha256=qWfbcNVXQk4WwJ0YxV-QbY9Hb9ebBae_76sukwSp8YI,5926 +dulwich/tests/__pycache__/__init__.cpython-38.pyc,, +dulwich/tests/__pycache__/test_archive.cpython-38.pyc,, +dulwich/tests/__pycache__/test_blackbox.cpython-38.pyc,, +dulwich/tests/__pycache__/test_bundle.cpython-38.pyc,, +dulwich/tests/__pycache__/test_client.cpython-38.pyc,, +dulwich/tests/__pycache__/test_config.cpython-38.pyc,, +dulwich/tests/__pycache__/test_credentials.cpython-38.pyc,, +dulwich/tests/__pycache__/test_diff_tree.cpython-38.pyc,, +dulwich/tests/__pycache__/test_fastexport.cpython-38.pyc,, +dulwich/tests/__pycache__/test_file.cpython-38.pyc,, +dulwich/tests/__pycache__/test_grafts.cpython-38.pyc,, +dulwich/tests/__pycache__/test_graph.cpython-38.pyc,, +dulwich/tests/__pycache__/test_greenthreads.cpython-38.pyc,, +dulwich/tests/__pycache__/test_hooks.cpython-38.pyc,, +dulwich/tests/__pycache__/test_ignore.cpython-38.pyc,, +dulwich/tests/__pycache__/test_index.cpython-38.pyc,, +dulwich/tests/__pycache__/test_lfs.cpython-38.pyc,, +dulwich/tests/__pycache__/test_line_ending.cpython-38.pyc,, +dulwich/tests/__pycache__/test_lru_cache.cpython-38.pyc,, +dulwich/tests/__pycache__/test_mailmap.cpython-38.pyc,, +dulwich/tests/__pycache__/test_missing_obj_finder.cpython-38.pyc,, +dulwich/tests/__pycache__/test_object_store.cpython-38.pyc,, +dulwich/tests/__pycache__/test_objects.cpython-38.pyc,, +dulwich/tests/__pycache__/test_objectspec.cpython-38.pyc,, +dulwich/tests/__pycache__/test_pack.cpython-38.pyc,, +dulwich/tests/__pycache__/test_patch.cpython-38.pyc,, +dulwich/tests/__pycache__/test_porcelain.cpython-38.pyc,, +dulwich/tests/__pycache__/test_protocol.cpython-38.pyc,, +dulwich/tests/__pycache__/test_reflog.cpython-38.pyc,, +dulwich/tests/__pycache__/test_refs.cpython-38.pyc,, +dulwich/tests/__pycache__/test_repository.cpython-38.pyc,, +dulwich/tests/__pycache__/test_server.cpython-38.pyc,, +dulwich/tests/__pycache__/test_stash.cpython-38.pyc,, +dulwich/tests/__pycache__/test_utils.cpython-38.pyc,, +dulwich/tests/__pycache__/test_walk.cpython-38.pyc,, +dulwich/tests/__pycache__/test_web.cpython-38.pyc,, +dulwich/tests/__pycache__/utils.cpython-38.pyc,, +dulwich/tests/compat/__init__.py,sha256=evDAcsgJMUihgBVpDsyo10QVTUQh5OGqzLMSVz5ObO0,1442 +dulwich/tests/compat/__pycache__/__init__.cpython-38.pyc,, +dulwich/tests/compat/__pycache__/server_utils.cpython-38.pyc,, +dulwich/tests/compat/__pycache__/test_client.cpython-38.pyc,, +dulwich/tests/compat/__pycache__/test_pack.cpython-38.pyc,, +dulwich/tests/compat/__pycache__/test_patch.cpython-38.pyc,, +dulwich/tests/compat/__pycache__/test_porcelain.cpython-38.pyc,, +dulwich/tests/compat/__pycache__/test_repository.cpython-38.pyc,, +dulwich/tests/compat/__pycache__/test_server.cpython-38.pyc,, +dulwich/tests/compat/__pycache__/test_utils.cpython-38.pyc,, +dulwich/tests/compat/__pycache__/test_web.cpython-38.pyc,, +dulwich/tests/compat/__pycache__/utils.cpython-38.pyc,, +dulwich/tests/compat/server_utils.py,sha256=tX5A03HTuycMYWHDLOO1P-P3rv4_6_yjSkfm0Kdb3jE,13225 +dulwich/tests/compat/test_client.py,sha256=uwF1FwYx_Alc-GCKmNzowUe6xPoLuq2VK4_rSugXJuA,24174 +dulwich/tests/compat/test_pack.py,sha256=mAbROivVgBvCNTN_IKB70KZElP7LVST_zELEY67XF-w,6553 +dulwich/tests/compat/test_patch.py,sha256=84NOyzTjBHTI8b1bbyYD7CxyUO1ZHsz__tY6DBYinBw,4155 +dulwich/tests/compat/test_porcelain.py,sha256=CSFvD6n4LOJZmNZyVgwJXKxrnCbfm8O0TJuDiNsJoFw,3335 +dulwich/tests/compat/test_repository.py,sha256=2p1Ylv0TCGe1E7ZdvadXqdUb2qJNP8EDE-HpLK-LNT8,9252 
+dulwich/tests/compat/test_server.py,sha256=xyTMTpq5MX6-QfKEy5pviAMZDmn8mXSsoinyaQt4UYA,3567 +dulwich/tests/compat/test_utils.py,sha256=H_5HkvCsxfGXUqK5VgudBFlEMdCkGw5eKkRJdfNXeRA,3510 +dulwich/tests/compat/test_web.py,sha256=DBkKbn6Pugo9ow842wE-D4YZUWQeA4USJb2jPX2nxAA,7367 +dulwich/tests/compat/utils.py,sha256=43oAd-SPqYyLc_hrQzbYn70hmlvDfyBykP0p2SRwTu0,9326 +dulwich/tests/test_archive.py,sha256=ame0fR4-aeoSM5DgAVdoQikDJNRtVk6I-7ndG4KQk40,3864 +dulwich/tests/test_blackbox.py,sha256=EvL-FydxdPeWuYDNyScduUDGhB6CIMbfJDN8t_cKkr0,2626 +dulwich/tests/test_bundle.py,sha256=dEKPWWjK0LUZJhgNGLU3pLAUZgsOXQidL6Oe_iiITX8,1750 +dulwich/tests/test_client.py,sha256=v_g_SrRPLlP_rSWgBmVYQYrI30JpFX6IPbASSZAcDkY,55211 +dulwich/tests/test_config.py,sha256=qlYccPUa04pAVYs2qgw-6ioM2tkptOFXTURhI5HdT6c,16217 +dulwich/tests/test_credentials.py,sha256=PXV1OBKcahKm7nB6mmf6TciaNpVhT735H2-BknYXkCA,3011 +dulwich/tests/test_diff_tree.py,sha256=XeGGVLz-1n6klMLE7JTyHrbk8vEPh9F1aRjUFt9_EfM,44923 +dulwich/tests/test_fastexport.py,sha256=O114UP-BDU-zGDumF9mPtV85EGGoGXsNrImWrN2QqH0,10330 +dulwich/tests/test_file.py,sha256=OjhK3qOfrPNWBZM9sjlEGAAjQMzz96UxnwgDcP9QEpk,6353 +dulwich/tests/test_grafts.py,sha256=CSt2yZL1w-5QfBLoelCH-Qj6aDMMsYBTYR4cb5gOnpg,6573 +dulwich/tests/test_graph.py,sha256=9q0jV_6cJV8TGrOr-Ahx4jvEYGa4lWnJdT8uI0LYZpc,5855 +dulwich/tests/test_greenthreads.py,sha256=gx5VkxVOhBB4YT5PrrcteYt8guHKIDPmHSexHDjqXzc,4645 +dulwich/tests/test_hooks.py,sha256=2igto0t20ptMmM4SaBH44DUiO678FvbCXfTKU-MsS6E,5725 +dulwich/tests/test_ignore.py,sha256=37IRV2_lHi--8doDRdtwVC3zdT1soVh6ehFGodM7cfo,10245 +dulwich/tests/test_index.py,sha256=3iucEXaVXf7Q00uUJBOAciXy2jNL0AimG8Yeqo5632g,26720 +dulwich/tests/test_lfs.py,sha256=OigkU8tJus7dBjz3VkO-HUMdNTsPwpFwOe6FOOaEpJU,1567 +dulwich/tests/test_line_ending.py,sha256=GdbpRCTGfjRqMXvxblLCx3HguYl2jc5kAHzKzKPyHI8,7202 +dulwich/tests/test_lru_cache.py,sha256=lHzsPnD1eFWrOiKBSopF571KDapfoSpHWAyvBgwaY7U,16144 +dulwich/tests/test_mailmap.py,sha256=Jf-wVhcsHe2RzKsXgDqyc3su6xki2LOV4DV_W-s_ypI,3821 +dulwich/tests/test_missing_obj_finder.py,sha256=-XYly_r3HHaYhGF6B0GPMzh2gdNU4VP2Q3CH2eURxsE,11286 +dulwich/tests/test_object_store.py,sha256=ueFXShC3vZgLS2SDUawUydUeZtnXyGHQvzZzPBrEyuM,27969 +dulwich/tests/test_objects.py,sha256=aEeraTMW-KZUPFGYJFtXQV8Ev_OTgPP7DcDBrgAi-g0,51986 +dulwich/tests/test_objectspec.py,sha256=K8_O5RwIDdbRFCWvf6cA_4RdL_25_ElyHc7GClX56OA,8541 +dulwich/tests/test_pack.py,sha256=cYfrU8ysu1NzHCdoHq55SxCasvn-KckhM0PWdLOtGtA,44707 +dulwich/tests/test_patch.py,sha256=WfO5-R0MAT46qaH6SdifKDtd8JaZzyRQ1RYuClgRvk8,21432 +dulwich/tests/test_porcelain.py,sha256=13vtEhrDiIQwyDB7ZNVv8wnrJ02BWpfTowze9j4ZLvw,112329 +dulwich/tests/test_protocol.py,sha256=AMMBGVDebdGHAWjy3Yv1Pft8ppVl4B9sTDujbeexADg,11070 +dulwich/tests/test_reflog.py,sha256=P6ckJnTmxpkSw-p_tOd20afIYLAfjchm4s8fOd9D3ec,5118 +dulwich/tests/test_refs.py,sha256=mj8ox9bmKPFrM6YHoEpa9NfjtddlnpXD66JqaiatDZg,28385 +dulwich/tests/test_repository.py,sha256=HkTP5SiyYjRJrIt9NHU4emPrioVN8VCtnU5W4Sz8Y48,55104 +dulwich/tests/test_server.py,sha256=gPCQDb865aHNpRleDhh3xBBcsVJPVSfAoDLNfbDWVmk,37714 +dulwich/tests/test_stash.py,sha256=NhzWhC-kpgCfuxrPRGIMyDFivGNcdf8tURU9zvRjIdU,1248 +dulwich/tests/test_utils.py,sha256=EpEZ8aPMnkdDZSTA9eMN4yvvFY9_1ZYtaks6ogU-_Uo,3299 +dulwich/tests/test_walk.py,sha256=2bCmXdXUvHLsJ66iVccPgJvZHGS8oYXhNP-SzTlpCIE,24121 +dulwich/tests/test_web.py,sha256=mGlyd89A5jbqbHSPuHOXdYDGmFikzCqN_4iT6ScVt0I,20179 +dulwich/tests/utils.py,sha256=plkOs2_wLwP1V5QtbYHH2XCffLr8WV7beGBPE7punME,12504 
+dulwich/walk.py,sha256=mj7QFPSEQ2hGWFeYKOWUElIbgmpjCX06m9-AU7BlQr4,16213 +dulwich/web.py,sha256=Fuc5S5YiPNy0V-U8vU8PIGT8Z6_Ji3dnW5mzWNmBbgY,19210 diff --git a/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/WHEEL b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/WHEEL new file mode 100644 index 00000000..3a48d348 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: false +Tag: cp38-cp38-manylinux_2_17_x86_64 +Tag: cp38-cp38-manylinux2014_x86_64 + diff --git a/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/entry_points.txt b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/entry_points.txt new file mode 100644 index 00000000..af5f138b --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/entry_points.txt @@ -0,0 +1,2 @@ +[console_scripts] +dulwich = dulwich.cli:main diff --git a/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/top_level.txt b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/top_level.txt new file mode 100644 index 00000000..e8413a49 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich-0.20.50.dist-info/top_level.txt @@ -0,0 +1 @@ +dulwich diff --git a/env/lib/python3.8/site-packages/dulwich/__init__.py b/env/lib/python3.8/site-packages/dulwich/__init__.py new file mode 100644 index 00000000..ca08b38b --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/__init__.py @@ -0,0 +1,25 @@ +# __init__.py -- The git module of dulwich +# Copyright (C) 2007 James Westby +# Copyright (C) 2008 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + + +"""Python implementation of the Git file formats and protocols.""" + +__version__ = (0, 20, 50) diff --git a/env/lib/python3.8/site-packages/dulwich/__main__.py b/env/lib/python3.8/site-packages/dulwich/__main__.py new file mode 100644 index 00000000..3453a7e6 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/__main__.py @@ -0,0 +1,4 @@ +from . import cli + +if __name__ == "__main__": + cli._main() diff --git a/env/lib/python3.8/site-packages/dulwich/_diff_tree.c b/env/lib/python3.8/site-packages/dulwich/_diff_tree.c new file mode 100644 index 00000000..88d17ed9 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/_diff_tree.c @@ -0,0 +1,470 @@ +/* + * Copyright (C) 2010 Google, Inc. + * + * Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU + * General Public License as public by the Free Software Foundation; version 2.0 + * or (at your option) any later version. You can redistribute it and/or + * modify it under the terms of either of these two licenses. 
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * You should have received a copy of the licenses; if not, see
+ * <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+ * and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+ * License, Version 2.0.
+ */
+
+#define PY_SSIZE_T_CLEAN
+#include <Python.h>
+#include <sys/stat.h>
+
+#ifdef _MSC_VER
+typedef unsigned short mode_t;
+#endif
+
+static PyObject *tree_entry_cls = NULL, *null_entry = NULL,
+    *defaultdict_cls = NULL, *int_cls = NULL;
+static int block_size;
+
+/**
+ * Free an array of PyObject pointers, decrementing any references.
+ */
+static void free_objects(PyObject **objs, Py_ssize_t n)
+{
+    Py_ssize_t i;
+    for (i = 0; i < n; i++)
+        Py_XDECREF(objs[i]);
+    PyMem_Free(objs);
+}
+
+/**
+ * Get the entries of a tree, prepending the given path.
+ *
+ * Args:
+ *   path: The path to prepend, without trailing slashes.
+ *   path_len: The length of path.
+ *   tree: The Tree object to iterate.
+ *   n: Set to the length of result.
+ * Returns: A (C) array of PyObject pointers to TreeEntry objects for each path
+ *   in tree.
+ */
+static PyObject **tree_entries(char *path, Py_ssize_t path_len, PyObject *tree,
+        Py_ssize_t *n)
+{
+    PyObject *iteritems, *items, **result = NULL;
+    PyObject *old_entry, *name, *sha;
+    Py_ssize_t i = 0, name_len, new_path_len;
+    char *new_path;
+
+    if (tree == Py_None) {
+        *n = 0;
+        result = PyMem_New(PyObject*, 0);
+        if (!result) {
+            PyErr_NoMemory();
+            return NULL;
+        }
+        return result;
+    }
+
+    iteritems = PyObject_GetAttrString(tree, "iteritems");
+    if (!iteritems)
+        return NULL;
+    items = PyObject_CallFunctionObjArgs(iteritems, Py_True, NULL);
+    Py_DECREF(iteritems);
+    if (items == NULL) {
+        return NULL;
+    }
+    /* The C implementation of iteritems returns a list, so depend on that. */
+    if (!PyList_Check(items)) {
+        PyErr_SetString(PyExc_TypeError,
+            "Tree.iteritems() did not return a list");
+        return NULL;
+    }
+
+    *n = PyList_Size(items);
+    result = PyMem_New(PyObject*, *n);
+    if (!result) {
+        PyErr_NoMemory();
+        goto error;
+    }
+    for (i = 0; i < *n; i++) {
+        old_entry = PyList_GetItem(items, i);
+        if (!old_entry)
+            goto error;
+        sha = PyTuple_GetItem(old_entry, 2);
+        if (!sha)
+            goto error;
+        name = PyTuple_GET_ITEM(old_entry, 0);
+        name_len = PyBytes_Size(name);
+        if (PyErr_Occurred())
+            goto error;
+
+        new_path_len = name_len;
+        if (path_len)
+            new_path_len += path_len + 1;
+        new_path = PyMem_Malloc(new_path_len);
+        if (!new_path) {
+            PyErr_NoMemory();
+            goto error;
+        }
+        if (path_len) {
+            memcpy(new_path, path, path_len);
+            new_path[path_len] = '/';
+            memcpy(new_path + path_len + 1, PyBytes_AS_STRING(name), name_len);
+        } else {
+            memcpy(new_path, PyBytes_AS_STRING(name), name_len);
+        }
+
+        result[i] = PyObject_CallFunction(tree_entry_cls, "y#OO", new_path,
+            new_path_len, PyTuple_GET_ITEM(old_entry, 1), sha);
+        PyMem_Free(new_path);
+        if (!result[i]) {
+            goto error;
+        }
+    }
+    Py_DECREF(items);
+    return result;
+
+error:
+    if (result)
+        free_objects(result, i);
+    Py_DECREF(items);
+    return NULL;
+}
+
+/**
+ * Use strcmp to compare the paths of two TreeEntry objects.
+ */
+static int entry_path_cmp(PyObject *entry1, PyObject *entry2)
+{
+    PyObject *path1 = NULL, *path2 = NULL;
+    int result = 0;
+
+    path1 = PyObject_GetAttrString(entry1, "path");
+    if (!path1)
+        goto done;
+
+    if (!PyBytes_Check(path1)) {
+        PyErr_SetString(PyExc_TypeError, "path is not a (byte)string");
+        goto done;
+    }
+
+    path2 = PyObject_GetAttrString(entry2, "path");
+    if (!path2)
+        goto done;
+
+    if (!PyBytes_Check(path2)) {
+        PyErr_SetString(PyExc_TypeError, "path is not a (byte)string");
+        goto done;
+    }
+
+    result = strcmp(PyBytes_AS_STRING(path1), PyBytes_AS_STRING(path2));
+
+done:
+    Py_XDECREF(path1);
+    Py_XDECREF(path2);
+    return result;
+}
+
+static PyObject *py_merge_entries(PyObject *self, PyObject *args)
+{
+    PyObject *tree1, *tree2, **entries1 = NULL, **entries2 = NULL;
+    PyObject *e1, *e2, *pair, *result = NULL;
+    Py_ssize_t n1 = 0, n2 = 0, i1 = 0, i2 = 0, path_len;
+    char *path_str;
+    int cmp;
+
+    if (!PyArg_ParseTuple(args, "y#OO", &path_str, &path_len, &tree1, &tree2))
+        return NULL;
+
+    entries1 = tree_entries(path_str, path_len, tree1, &n1);
+    if (!entries1)
+        goto error;
+
+    entries2 = tree_entries(path_str, path_len, tree2, &n2);
+    if (!entries2)
+        goto error;
+
+    result = PyList_New(0);
+    if (!result)
+        goto error;
+
+    while (i1 < n1 && i2 < n2) {
+        cmp = entry_path_cmp(entries1[i1], entries2[i2]);
+        if (PyErr_Occurred())
+            goto error;
+        if (!cmp) {
+            e1 = entries1[i1++];
+            e2 = entries2[i2++];
+        } else if (cmp < 0) {
+            e1 = entries1[i1++];
+            e2 = null_entry;
+        } else {
+            e1 = null_entry;
+            e2 = entries2[i2++];
+        }
+        pair = PyTuple_Pack(2, e1, e2);
+        if (!pair)
+            goto error;
+        PyList_Append(result, pair);
+        Py_DECREF(pair);
+    }
+
+    while (i1 < n1) {
+        pair = PyTuple_Pack(2, entries1[i1++], null_entry);
+        if (!pair)
+            goto error;
+        PyList_Append(result, pair);
+        Py_DECREF(pair);
+    }
+    while (i2 < n2) {
+        pair = PyTuple_Pack(2, null_entry, entries2[i2++]);
+        if (!pair)
+            goto error;
+        PyList_Append(result, pair);
+        Py_DECREF(pair);
+    }
+    goto done;
+
+error:
+    Py_XDECREF(result);
+    result = NULL;
+
+done:
+    if (entries1)
+        free_objects(entries1, n1);
+    if (entries2)
+        free_objects(entries2, n2);
+    return result;
+}
+
+/* Not all environments define S_ISDIR */
+#if !defined(S_ISDIR) && defined(S_IFMT) && defined(S_IFDIR)
+#define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR)
+#endif
+
+static PyObject *py_is_tree(PyObject *self, PyObject *args)
+{
+    PyObject *entry, *mode, *result;
+    long lmode;
+
+    if (!PyArg_ParseTuple(args, "O", &entry))
+        return NULL;
+
+    mode = PyObject_GetAttrString(entry, "mode");
+    if (!mode)
+        return NULL;
+
+    if (mode == Py_None) {
+        result = Py_False;
+        Py_INCREF(result);
+    } else {
+        lmode = PyLong_AsLong(mode);
+        if (lmode == -1 && PyErr_Occurred()) {
+            Py_DECREF(mode);
+            return NULL;
+        }
+        result = PyBool_FromLong(S_ISDIR((mode_t)lmode));
+    }
+    Py_DECREF(mode);
+    return result;
+}
+
+static Py_hash_t add_hash(PyObject *get, PyObject *set, char *str, int n)
+{
+    PyObject *str_obj = NULL, *hash_obj = NULL, *value = NULL,
+        *set_value = NULL;
+    Py_hash_t hash;
+
+    /* It would be nice to hash without copying str into a PyString, but that
+     * isn't exposed by the API. */
*/ + str_obj = PyBytes_FromStringAndSize(str, n); + if (!str_obj) + goto error; + hash = PyObject_Hash(str_obj); + if (hash == -1) + goto error; + hash_obj = PyLong_FromLong(hash); + if (!hash_obj) + goto error; + + value = PyObject_CallFunctionObjArgs(get, hash_obj, NULL); + if (!value) + goto error; + set_value = PyObject_CallFunction(set, "(Ol)", hash_obj, + PyLong_AS_LONG(value) + n); + if (!set_value) + goto error; + + Py_DECREF(str_obj); + Py_DECREF(hash_obj); + Py_DECREF(value); + Py_DECREF(set_value); + return 0; + +error: + Py_XDECREF(str_obj); + Py_XDECREF(hash_obj); + Py_XDECREF(value); + Py_XDECREF(set_value); + return -1; +} + +static PyObject *py_count_blocks(PyObject *self, PyObject *args) +{ + PyObject *obj, *chunks = NULL, *chunk, *counts = NULL, *get = NULL, + *set = NULL; + char *chunk_str, *block = NULL; + Py_ssize_t num_chunks, chunk_len; + int i, j, n = 0; + char c; + + if (!PyArg_ParseTuple(args, "O", &obj)) + goto error; + + counts = PyObject_CallFunctionObjArgs(defaultdict_cls, int_cls, NULL); + if (!counts) + goto error; + get = PyObject_GetAttrString(counts, "__getitem__"); + set = PyObject_GetAttrString(counts, "__setitem__"); + + chunks = PyObject_CallMethod(obj, "as_raw_chunks", NULL); + if (!chunks) + goto error; + if (!PyList_Check(chunks)) { + PyErr_SetString(PyExc_TypeError, + "as_raw_chunks() did not return a list"); + goto error; + } + num_chunks = PyList_GET_SIZE(chunks); + block = PyMem_New(char, block_size); + if (!block) { + PyErr_NoMemory(); + goto error; + } + + for (i = 0; i < num_chunks; i++) { + chunk = PyList_GET_ITEM(chunks, i); + if (!PyBytes_Check(chunk)) { + PyErr_SetString(PyExc_TypeError, "chunk is not a string"); + goto error; + } + if (PyBytes_AsStringAndSize(chunk, &chunk_str, &chunk_len) == -1) + goto error; + + for (j = 0; j < chunk_len; j++) { + c = chunk_str[j]; + block[n++] = c; + if (c == '\n' || n == block_size) { + if (add_hash(get, set, block, n) == -1) + goto error; + n = 0; + } + } + } + if (n && add_hash(get, set, block, n) == -1) + goto error; + + Py_DECREF(chunks); + Py_DECREF(get); + Py_DECREF(set); + PyMem_Free(block); + return counts; + +error: + Py_XDECREF(chunks); + Py_XDECREF(get); + Py_XDECREF(set); + Py_XDECREF(counts); + PyMem_Free(block); + return NULL; +} + +static PyMethodDef py_diff_tree_methods[] = { + { "_is_tree", (PyCFunction)py_is_tree, METH_VARARGS, NULL }, + { "_merge_entries", (PyCFunction)py_merge_entries, METH_VARARGS, NULL }, + { "_count_blocks", (PyCFunction)py_count_blocks, METH_VARARGS, NULL }, + { NULL, NULL, 0, NULL } +}; + +static PyObject * +moduleinit(void) +{ + PyObject *m, *objects_mod = NULL, *diff_tree_mod = NULL; + PyObject *block_size_obj = NULL; + + static struct PyModuleDef moduledef = { + PyModuleDef_HEAD_INIT, + "_diff_tree", /* m_name */ + NULL, /* m_doc */ + -1, /* m_size */ + py_diff_tree_methods, /* m_methods */ + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear*/ + NULL, /* m_free */ + }; + m = PyModule_Create(&moduledef); + if (!m) + goto error; + + objects_mod = PyImport_ImportModule("dulwich.objects"); + if (!objects_mod) + goto error; + + tree_entry_cls = PyObject_GetAttrString(objects_mod, "TreeEntry"); + Py_DECREF(objects_mod); + if (!tree_entry_cls) + goto error; + + diff_tree_mod = PyImport_ImportModule("dulwich.diff_tree"); + if (!diff_tree_mod) + goto error; + + null_entry = PyObject_GetAttrString(diff_tree_mod, "_NULL_ENTRY"); + if (!null_entry) + goto error; + + block_size_obj = PyObject_GetAttrString(diff_tree_mod, "_BLOCK_SIZE"); + if 
(!block_size_obj)
+        goto error;
+    block_size = (int)PyLong_AsLong(block_size_obj);
+
+    if (PyErr_Occurred())
+        goto error;
+
+    defaultdict_cls = PyObject_GetAttrString(diff_tree_mod, "defaultdict");
+    if (!defaultdict_cls)
+        goto error;
+
+    /* This is kind of hacky, but I don't know of a better way to get the
+     * PyObject* version of int. */
+    int_cls = PyDict_GetItemString(PyEval_GetBuiltins(), "int");
+    if (!int_cls) {
+        PyErr_SetString(PyExc_NameError, "int");
+        goto error;
+    }
+
+    Py_DECREF(diff_tree_mod);
+
+    return m;
+
+error:
+    Py_XDECREF(objects_mod);
+    Py_XDECREF(diff_tree_mod);
+    Py_XDECREF(null_entry);
+    Py_XDECREF(block_size_obj);
+    Py_XDECREF(defaultdict_cls);
+    Py_XDECREF(int_cls);
+    return NULL;
+}
+
+PyMODINIT_FUNC
+PyInit__diff_tree(void)
+{
+    return moduleinit();
+}
diff --git a/env/lib/python3.8/site-packages/dulwich/_diff_tree.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/dulwich/_diff_tree.cpython-38-x86_64-linux-gnu.so
new file mode 100644
index 00000000..9949e487
Binary files /dev/null and b/env/lib/python3.8/site-packages/dulwich/_diff_tree.cpython-38-x86_64-linux-gnu.so differ
diff --git a/env/lib/python3.8/site-packages/dulwich/_objects.c b/env/lib/python3.8/site-packages/dulwich/_objects.c
new file mode 100644
index 00000000..34635e82
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/_objects.c
@@ -0,0 +1,309 @@
+/*
+ * Copyright (C) 2009 Jelmer Vernooij
+ *
+ * Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+ * General Public License as public by the Free Software Foundation; version 2.0
+ * or (at your option) any later version. You can redistribute it and/or
+ * modify it under the terms of either of these two licenses.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * You should have received a copy of the licenses; if not, see
+ * for a copy of the GNU General Public License
+ * and for a copy of the Apache
+ * License, Version 2.0.
+ */
+
+#define PY_SSIZE_T_CLEAN
+#include <Python.h>
+#include <stdlib.h>
+#include <sys/stat.h>
+
+#if defined(__MINGW32_VERSION) || defined(__APPLE__)
+size_t rep_strnlen(char *text, size_t maxlen);
+size_t rep_strnlen(char *text, size_t maxlen)
+{
+    const char *last = memchr(text, '\0', maxlen);
+    return last ? (size_t) (last - text) : maxlen;
+}
+#define strnlen rep_strnlen
+#endif
+
+#define bytehex(x) (((x)<0xa)?('0'+(x)):('a'-0xa+(x)))
+
+static PyObject *tree_entry_cls;
+static PyObject *object_format_exception_cls;
+
+static PyObject *sha_to_pyhex(const unsigned char *sha)
+{
+    char hexsha[41];
+    int i;
+    for (i = 0; i < 20; i++) {
+        hexsha[i*2] = bytehex((sha[i] & 0xF0) >> 4);
+        hexsha[i*2+1] = bytehex(sha[i] & 0x0F);
+    }
+
+    return PyBytes_FromStringAndSize(hexsha, 40);
+}
+
+static PyObject *py_parse_tree(PyObject *self, PyObject *args, PyObject *kw)
+{
+    char *text, *start, *end;
+    Py_ssize_t len; int strict;
+    size_t namelen;
+    PyObject *ret, *item, *name, *sha, *py_strict = NULL;
+    static char *kwlist[] = {"text", "strict", NULL};
+
+    if (!PyArg_ParseTupleAndKeywords(args, kw, "y#|O", kwlist,
+                                     &text, &len, &py_strict))
+        return NULL;
+    strict = py_strict ?
PyObject_IsTrue(py_strict) : 0; + /* TODO: currently this returns a list; if memory usage is a concern, + * consider rewriting as a custom iterator object */ + ret = PyList_New(0); + if (ret == NULL) { + return NULL; + } + start = text; + end = text + len; + while (text < end) { + long mode; + if (strict && text[0] == '0') { + PyErr_SetString(object_format_exception_cls, + "Illegal leading zero on mode"); + Py_DECREF(ret); + return NULL; + } + mode = strtol(text, &text, 8); + if (*text != ' ') { + PyErr_SetString(PyExc_ValueError, "Expected space"); + Py_DECREF(ret); + return NULL; + } + text++; + namelen = strnlen(text, len - (text - start)); + name = PyBytes_FromStringAndSize(text, namelen); + if (name == NULL) { + Py_DECREF(ret); + return NULL; + } + if (text + namelen + 20 >= end) { + PyErr_SetString(PyExc_ValueError, "SHA truncated"); + Py_DECREF(ret); + Py_DECREF(name); + return NULL; + } + sha = sha_to_pyhex((unsigned char *)text+namelen+1); + if (sha == NULL) { + Py_DECREF(ret); + Py_DECREF(name); + return NULL; + } + item = Py_BuildValue("(NlN)", name, mode, sha); + if (item == NULL) { + Py_DECREF(ret); + Py_DECREF(sha); + Py_DECREF(name); + return NULL; + } + if (PyList_Append(ret, item) == -1) { + Py_DECREF(ret); + Py_DECREF(item); + return NULL; + } + Py_DECREF(item); + text += namelen+21; + } + return ret; +} + +struct tree_item { + const char *name; + int mode; + PyObject *tuple; +}; + +/* Not all environments define S_ISDIR */ +#if !defined(S_ISDIR) && defined(S_IFMT) && defined(S_IFDIR) +#define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR) +#endif + +int cmp_tree_item(const void *_a, const void *_b) +{ + const struct tree_item *a = _a, *b = _b; + const char *remain_a, *remain_b; + int ret; + size_t common; + if (strlen(a->name) > strlen(b->name)) { + common = strlen(b->name); + remain_a = a->name + common; + remain_b = (S_ISDIR(b->mode)?"/":""); + } else if (strlen(b->name) > strlen(a->name)) { + common = strlen(a->name); + remain_a = (S_ISDIR(a->mode)?"/":""); + remain_b = b->name + common; + } else { /* strlen(a->name) == strlen(b->name) */ + common = 0; + remain_a = a->name; + remain_b = b->name; + } + ret = strncmp(a->name, b->name, common); + if (ret != 0) + return ret; + return strcmp(remain_a, remain_b); +} + +int cmp_tree_item_name_order(const void *_a, const void *_b) { + const struct tree_item *a = _a, *b = _b; + return strcmp(a->name, b->name); +} + +static PyObject *py_sorted_tree_items(PyObject *self, PyObject *args) +{ + struct tree_item *qsort_entries = NULL; + int name_order, n = 0, i; + PyObject *entries, *py_name_order, *ret, *key, *value, *py_mode, *py_sha; + Py_ssize_t pos = 0, num_entries; + int (*cmp)(const void *, const void *); + + if (!PyArg_ParseTuple(args, "OO", &entries, &py_name_order)) + goto error; + + if (!PyDict_Check(entries)) { + PyErr_SetString(PyExc_TypeError, "Argument not a dictionary"); + goto error; + } + + name_order = PyObject_IsTrue(py_name_order); + if (name_order == -1) + goto error; + cmp = name_order ? 
cmp_tree_item_name_order : cmp_tree_item;
+
+    num_entries = PyDict_Size(entries);
+    if (PyErr_Occurred())
+        goto error;
+    qsort_entries = PyMem_New(struct tree_item, num_entries);
+    if (!qsort_entries) {
+        PyErr_NoMemory();
+        goto error;
+    }
+
+    while (PyDict_Next(entries, &pos, &key, &value)) {
+        if (!PyBytes_Check(key)) {
+            PyErr_SetString(PyExc_TypeError, "Name is not a string");
+            goto error;
+        }
+
+        if (PyTuple_Size(value) != 2) {
+            PyErr_SetString(PyExc_ValueError, "Tuple has invalid size");
+            goto error;
+        }
+
+        py_mode = PyTuple_GET_ITEM(value, 0);
+        if (!PyLong_Check(py_mode)) {
+            PyErr_SetString(PyExc_TypeError, "Mode is not an integral type");
+            goto error;
+        }
+
+        py_sha = PyTuple_GET_ITEM(value, 1);
+        if (!PyBytes_Check(py_sha)) {
+            PyErr_SetString(PyExc_TypeError, "SHA is not a string");
+            goto error;
+        }
+        qsort_entries[n].name = PyBytes_AS_STRING(key);
+        qsort_entries[n].mode = PyLong_AsLong(py_mode);
+
+        qsort_entries[n].tuple = PyObject_CallFunctionObjArgs(
+            tree_entry_cls, key, py_mode, py_sha, NULL);
+        if (qsort_entries[n].tuple == NULL)
+            goto error;
+        n++;
+    }
+
+    qsort(qsort_entries, num_entries, sizeof(struct tree_item), cmp);
+
+    ret = PyList_New(num_entries);
+    if (ret == NULL) {
+        PyErr_NoMemory();
+        goto error;
+    }
+
+    for (i = 0; i < num_entries; i++) {
+        PyList_SET_ITEM(ret, i, qsort_entries[i].tuple);
+    }
+    PyMem_Free(qsort_entries);
+    return ret;
+
+error:
+    for (i = 0; i < n; i++) {
+        Py_XDECREF(qsort_entries[i].tuple);
+    }
+    PyMem_Free(qsort_entries);
+    return NULL;
+}
+
+static PyMethodDef py_objects_methods[] = {
+    { "parse_tree", (PyCFunction)py_parse_tree, METH_VARARGS | METH_KEYWORDS,
+      NULL },
+    { "sorted_tree_items", py_sorted_tree_items, METH_VARARGS, NULL },
+    { NULL, NULL, 0, NULL }
+};
+
+static PyObject *
+moduleinit(void)
+{
+    PyObject *m, *objects_mod, *errors_mod;
+
+    static struct PyModuleDef moduledef = {
+        PyModuleDef_HEAD_INIT,
+        "_objects", /* m_name */
+        NULL, /* m_doc */
+        -1, /* m_size */
+        py_objects_methods, /* m_methods */
+        NULL, /* m_reload */
+        NULL, /* m_traverse */
+        NULL, /* m_clear*/
+        NULL, /* m_free */
+    };
+    m = PyModule_Create(&moduledef);
+    if (m == NULL) {
+        return NULL;
+    }
+
+    errors_mod = PyImport_ImportModule("dulwich.errors");
+    if (errors_mod == NULL) {
+        return NULL;
+    }
+
+    object_format_exception_cls = PyObject_GetAttrString(
+        errors_mod, "ObjectFormatException");
+    Py_DECREF(errors_mod);
+    if (object_format_exception_cls == NULL) {
+        return NULL;
+    }
+
+    /* This is a circular import but should be safe since this module is
+     * imported at the very bottom of objects.py.
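+     * By that point TreeEntry is already defined, so the attribute lookup
+     * below succeeds even though objects.py has not finished executing.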
+     */
+    objects_mod = PyImport_ImportModule("dulwich.objects");
+    if (objects_mod == NULL) {
+        return NULL;
+    }
+
+    tree_entry_cls = PyObject_GetAttrString(objects_mod, "TreeEntry");
+    Py_DECREF(objects_mod);
+    if (tree_entry_cls == NULL) {
+        return NULL;
+    }
+
+    return m;
+}
+
+PyMODINIT_FUNC
+PyInit__objects(void)
+{
+    return moduleinit();
+}
diff --git a/env/lib/python3.8/site-packages/dulwich/_objects.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/dulwich/_objects.cpython-38-x86_64-linux-gnu.so
new file mode 100644
index 00000000..3500c5fd
Binary files /dev/null and b/env/lib/python3.8/site-packages/dulwich/_objects.cpython-38-x86_64-linux-gnu.so differ
diff --git a/env/lib/python3.8/site-packages/dulwich/_pack.c b/env/lib/python3.8/site-packages/dulwich/_pack.c
new file mode 100644
index 00000000..3ee9828d
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/_pack.c
@@ -0,0 +1,282 @@
+/*
+ * Copyright (C) 2009 Jelmer Vernooij
+ *
+ * Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+ * General Public License as public by the Free Software Foundation; version 2.0
+ * or (at your option) any later version. You can redistribute it and/or
+ * modify it under the terms of either of these two licenses.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * You should have received a copy of the licenses; if not, see
+ * for a copy of the GNU General Public License
+ * and for a copy of the Apache
+ * License, Version 2.0.
+ */
+
+#define PY_SSIZE_T_CLEAN
+#include <Python.h>
+#include <stdint.h>
+
+static PyObject *PyExc_ApplyDeltaError = NULL;
+
+static int py_is_sha(PyObject *sha)
+{
+    if (!PyBytes_CheckExact(sha))
+        return 0;
+
+    if (PyBytes_Size(sha) != 20)
+        return 0;
+
+    return 1;
+}
+
+
+static size_t get_delta_header_size(uint8_t *delta, size_t *index, size_t length)
+{
+    size_t size = 0;
+    size_t i = 0;
+    while ((*index) < length) {
+        uint8_t cmd = delta[*index];
+        (*index)++;
+        size |= (cmd & ~0x80) << i;
+        i += 7;
+        if (!(cmd & 0x80))
+            break;
+    }
+    return size;
+}
+
+static PyObject *py_chunked_as_string(PyObject *py_buf)
+{
+    if (PyList_Check(py_buf)) {
+        PyObject *sep = PyBytes_FromString("");
+        if (sep == NULL) {
+            PyErr_NoMemory();
+            return NULL;
+        }
+        py_buf = _PyBytes_Join(sep, py_buf);
+        Py_DECREF(sep);
+        if (py_buf == NULL) {
+            PyErr_NoMemory();
+            return NULL;
+        }
+    } else if (PyBytes_Check(py_buf)) {
+        Py_INCREF(py_buf);
+    } else {
+        PyErr_SetString(PyExc_TypeError,
+                        "src_buf is not a string or a list of chunks");
+        return NULL;
+    }
+    return py_buf;
+}
+
+static PyObject *py_apply_delta(PyObject *self, PyObject *args)
+{
+    uint8_t *src_buf, *delta;
+    size_t src_buf_len, delta_len;
+    size_t src_size, dest_size;
+    size_t outindex = 0;
+    size_t index;
+    uint8_t *out;
+    PyObject *ret, *py_src_buf, *py_delta, *ret_list;
+
+    if (!PyArg_ParseTuple(args, "OO", &py_src_buf, &py_delta))
+        return NULL;
+
+    py_src_buf = py_chunked_as_string(py_src_buf);
+    if (py_src_buf == NULL)
+        return NULL;
+
+    py_delta = py_chunked_as_string(py_delta);
+    if (py_delta == NULL) {
+        Py_DECREF(py_src_buf);
+        return NULL;
+    }
+
+    src_buf = (uint8_t *)PyBytes_AS_STRING(py_src_buf);
+    src_buf_len = (size_t)PyBytes_GET_SIZE(py_src_buf);
+
+    delta = (uint8_t *)PyBytes_AS_STRING(py_delta);
+    
delta_len = (size_t)PyBytes_GET_SIZE(py_delta); + + index = 0; + src_size = get_delta_header_size(delta, &index, delta_len); + if (src_size != src_buf_len) { + PyErr_Format(PyExc_ApplyDeltaError, + "Unexpected source buffer size: %lu vs %ld", src_size, src_buf_len); + Py_DECREF(py_src_buf); + Py_DECREF(py_delta); + return NULL; + } + dest_size = get_delta_header_size(delta, &index, delta_len); + ret = PyBytes_FromStringAndSize(NULL, dest_size); + if (ret == NULL) { + PyErr_NoMemory(); + Py_DECREF(py_src_buf); + Py_DECREF(py_delta); + return NULL; + } + out = (uint8_t *)PyBytes_AS_STRING(ret); + while (index < delta_len) { + uint8_t cmd = delta[index]; + index++; + if (cmd & 0x80) { + size_t cp_off = 0, cp_size = 0; + int i; + for (i = 0; i < 4; i++) { + if (cmd & (1 << i)) { + uint8_t x = delta[index]; + index++; + cp_off |= x << (i * 8); + } + } + for (i = 0; i < 3; i++) { + if (cmd & (1 << (4+i))) { + uint8_t x = delta[index]; + index++; + cp_size |= x << (i * 8); + } + } + if (cp_size == 0) + cp_size = 0x10000; + if (cp_off + cp_size < cp_size || + cp_off + cp_size > src_size || + cp_size > dest_size) + break; + memcpy(out+outindex, src_buf+cp_off, cp_size); + outindex += cp_size; + dest_size -= cp_size; + } else if (cmd != 0) { + if (cmd > dest_size) + break; + memcpy(out+outindex, delta+index, cmd); + outindex += cmd; + index += cmd; + dest_size -= cmd; + } else { + PyErr_SetString(PyExc_ApplyDeltaError, "Invalid opcode 0"); + Py_DECREF(ret); + Py_DECREF(py_delta); + Py_DECREF(py_src_buf); + return NULL; + } + } + Py_DECREF(py_src_buf); + Py_DECREF(py_delta); + + if (index != delta_len) { + PyErr_SetString(PyExc_ApplyDeltaError, "delta not empty"); + Py_DECREF(ret); + return NULL; + } + + if (dest_size != 0) { + PyErr_SetString(PyExc_ApplyDeltaError, "dest size incorrect"); + Py_DECREF(ret); + return NULL; + } + + ret_list = Py_BuildValue("[N]", ret); + if (ret_list == NULL) { + Py_DECREF(ret); + return NULL; + } + return ret_list; +} + +static PyObject *py_bisect_find_sha(PyObject *self, PyObject *args) +{ + PyObject *unpack_name; + char *sha; + Py_ssize_t sha_len; + int start, end; + if (!PyArg_ParseTuple(args, "iiy#O", &start, &end, + &sha, &sha_len, &unpack_name)) + return NULL; + + if (sha_len != 20) { + PyErr_SetString(PyExc_ValueError, "Sha is not 20 bytes long"); + return NULL; + } + if (start > end) { + PyErr_SetString(PyExc_AssertionError, "start > end"); + return NULL; + } + + while (start <= end) { + PyObject *file_sha; + Py_ssize_t i = (start + end)/2; + int cmp; + file_sha = PyObject_CallFunction(unpack_name, "i", i); + if (file_sha == NULL) { + return NULL; + } + if (!py_is_sha(file_sha)) { + PyErr_SetString(PyExc_TypeError, "unpack_name returned non-sha object"); + Py_DECREF(file_sha); + return NULL; + } + cmp = memcmp(PyBytes_AS_STRING(file_sha), sha, 20); + Py_DECREF(file_sha); + if (cmp < 0) + start = i + 1; + else if (cmp > 0) + end = i - 1; + else { + return PyLong_FromLong(i); + } + } + Py_RETURN_NONE; +} + + +static PyMethodDef py_pack_methods[] = { + { "apply_delta", (PyCFunction)py_apply_delta, METH_VARARGS, NULL }, + { "bisect_find_sha", (PyCFunction)py_bisect_find_sha, METH_VARARGS, NULL }, + { NULL, NULL, 0, NULL } +}; + +static PyObject * +moduleinit(void) +{ + PyObject *m; + PyObject *errors_module; + + static struct PyModuleDef moduledef = { + PyModuleDef_HEAD_INIT, + "_pack", /* m_name */ + NULL, /* m_doc */ + -1, /* m_size */ + py_pack_methods, /* m_methods */ + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear*/ + NULL, /* m_free */ + 
}; + + errors_module = PyImport_ImportModule("dulwich.errors"); + if (errors_module == NULL) + return NULL; + + PyExc_ApplyDeltaError = PyObject_GetAttrString(errors_module, "ApplyDeltaError"); + Py_DECREF(errors_module); + if (PyExc_ApplyDeltaError == NULL) + return NULL; + + m = PyModule_Create(&moduledef); + if (m == NULL) + return NULL; + + return m; +} + +PyMODINIT_FUNC +PyInit__pack(void) +{ + return moduleinit(); +} diff --git a/env/lib/python3.8/site-packages/dulwich/_pack.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/dulwich/_pack.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..2aa5dd89 Binary files /dev/null and b/env/lib/python3.8/site-packages/dulwich/_pack.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/dulwich/archive.py b/env/lib/python3.8/site-packages/dulwich/archive.py new file mode 100644 index 00000000..decae764 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/archive.py @@ -0,0 +1,135 @@ +# archive.py -- Creating an archive from a tarball +# Copyright (C) 2015 Jonas Haag +# Copyright (C) 2015 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Generates tarballs for Git trees. + +""" + +import posixpath +import stat +import tarfile +import struct +from os import SEEK_END +from io import BytesIO +from contextlib import closing + + +class ChunkedBytesIO(object): + """Turn a list of bytestrings into a file-like object. + + This is similar to creating a `BytesIO` from a concatenation of the + bytestring list, but saves memory by NOT creating one giant bytestring + first:: + + BytesIO(b''.join(list_of_bytestrings)) =~= ChunkedBytesIO( + list_of_bytestrings) + """ + + def __init__(self, contents): + self.contents = contents + self.pos = (0, 0) + + def read(self, maxbytes=None): + if maxbytes < 0: + maxbytes = float("inf") + + buf = [] + chunk, cursor = self.pos + + while chunk < len(self.contents): + if maxbytes < len(self.contents[chunk]) - cursor: + buf.append(self.contents[chunk][cursor : cursor + maxbytes]) + cursor += maxbytes + self.pos = (chunk, cursor) + break + else: + buf.append(self.contents[chunk][cursor:]) + maxbytes -= len(self.contents[chunk]) - cursor + chunk += 1 + cursor = 0 + self.pos = (chunk, cursor) + return b"".join(buf) + + +def tar_stream(store, tree, mtime, prefix=b"", format=""): + """Generate a tar stream for the contents of a Git tree. + + Returns a generator that lazily assembles a .tar.gz archive, yielding it in + pieces (bytestrings). To obtain the complete .tar.gz binary file, simply + concatenate these chunks. 
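+    For example, b"".join(tar_stream(store, tree, mtime, format="gz"))
+    yields the complete gzipped archive as a single bytestring.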
+
+    Args:
+      store: Object store to retrieve objects from
+      tree: Tree object for the tree root
+      mtime: UNIX timestamp that is assigned as the modification time for
+           all files, and the gzip header modification time if format='gz'
+      format: Optional compression format for tarball
+    Returns:
+      Bytestrings
+    """
+    buf = BytesIO()
+    with closing(tarfile.open(None, "w:%s" % format, buf)) as tar:
+        if format == "gz":
+            # Manually correct the gzip header file modification time so that
+            # archives created from the same Git tree are always identical.
+            # The gzip header file modification time is not currently
+            # accessible from the tarfile API, see:
+            # https://bugs.python.org/issue31526
+            buf.seek(0)
+            assert buf.read(2) == b"\x1f\x8b", "Invalid gzip header"
+            buf.seek(4)
+            buf.write(struct.pack("<L", mtime))
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# for a copy of the GNU General Public License
+# and for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Bundle format support.
+"""
+
+from typing import Dict, List, Tuple, Optional, Union, Sequence
+from .pack import PackData, write_pack_data
+
+
+class Bundle(object):
+
+    version = None  # type: Optional[int]
+
+    capabilities = {}  # type: Dict[str, str]
+    prerequisites = []  # type: List[Tuple[bytes, str]]
+    references = {}  # type: Dict[str, bytes]
+    pack_data = []  # type: Union[PackData, Sequence[bytes]]
+
+    def __eq__(self, other):
+        if not isinstance(other, type(self)):
+            return False
+        if self.version != other.version:
+            return False
+        if self.capabilities != other.capabilities:
+            return False
+        if self.prerequisites != other.prerequisites:
+            return False
+        if self.references != other.references:
+            return False
+        if self.pack_data != other.pack_data:
+            return False
+        return True
+
+
+def _read_bundle(f, version):
+    capabilities = {}
+    prerequisites = []
+    references = {}
+    line = f.readline()
+    if version >= 3:
+        while line.startswith(b"@"):
+            line = line[1:].rstrip(b"\n")
+            try:
+                key, value = line.split(b"=", 1)
+            except ValueError:
+                key = line
+                value = None
+            else:
+                value = value.decode("utf-8")
+            capabilities[key.decode("utf-8")] = value
+            line = f.readline()
+    while line.startswith(b"-"):
+        (obj_id, comment) = line[1:].rstrip(b"\n").split(b" ", 1)
+        prerequisites.append((obj_id, comment.decode("utf-8")))
+        line = f.readline()
+    while line != b"\n":
+        (obj_id, ref) = line.rstrip(b"\n").split(b" ", 1)
+        references[ref] = obj_id
+        line = f.readline()
+    pack_data = PackData.from_file(f)
+    ret = Bundle()
+    ret.references = references
+    ret.capabilities = capabilities
+    ret.prerequisites = prerequisites
+    ret.pack_data = pack_data
+    ret.version = version
+    return ret
+
+
+def read_bundle(f):
+    """Read a bundle file."""
+    firstline = f.readline()
+    if firstline == b"# v2 git bundle\n":
+        return _read_bundle(f, 2)
+    if firstline == b"# v3 git bundle\n":
+        return _read_bundle(f, 3)
+    raise 
AssertionError("unsupported bundle format header: %r" % firstline)
+
+
+def write_bundle(f, bundle):
+    version = bundle.version
+    if version is None:
+        if bundle.capabilities:
+            version = 3
+        else:
+            version = 2
+    if version == 2:
+        f.write(b"# v2 git bundle\n")
+    elif version == 3:
+        f.write(b"# v3 git bundle\n")
+    else:
+        raise AssertionError("unknown version %d" % version)
+    if version == 3:
+        for key, value in bundle.capabilities.items():
+            f.write(b"@" + key.encode("utf-8"))
+            if value is not None:
+                f.write(b"=" + value.encode("utf-8"))
+            f.write(b"\n")
+    for (obj_id, comment) in bundle.prerequisites:
+        f.write(b"-%s %s\n" % (obj_id, comment.encode("utf-8")))
+    for ref, obj_id in bundle.references.items():
+        f.write(b"%s %s\n" % (obj_id, ref))
+    f.write(b"\n")
+    write_pack_data(f.write, records=bundle.pack_data)
diff --git a/env/lib/python3.8/site-packages/dulwich/cli.py b/env/lib/python3.8/site-packages/dulwich/cli.py
new file mode 100644
index 00000000..184bd9c9
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/cli.py
@@ -0,0 +1,795 @@
+#!/usr/bin/python3 -u
+#
+# dulwich - Simple command-line interface to Dulwich
+# Copyright (C) 2008-2011 Jelmer Vernooij
+# vim: expandtab
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# for a copy of the GNU General Public License
+# and for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Simple command-line interface to Dulwich.
+
+This is a very simple command-line wrapper for Dulwich. It is by
+no means intended to be a full-blown Git command-line interface but just
+a way to test Dulwich.
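+
+Invocation follows the pattern `dulwich COMMAND [OPTIONS...]`, for example
+`dulwich log` or `dulwich clone host:path [PATH]`.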
+""" + +import os +import sys +from getopt import getopt +import argparse +import optparse +import signal +from typing import Dict, Type, Optional + +from dulwich import porcelain +from dulwich.client import get_transport_and_path +from dulwich.errors import ApplyDeltaError +from dulwich.index import Index +from dulwich.objectspec import parse_commit +from dulwich.pack import Pack, sha_to_hex +from dulwich.repo import Repo + + +def signal_int(signal, frame): + sys.exit(1) + + +def signal_quit(signal, frame): + import pdb + + pdb.set_trace() + + +class Command(object): + """A Dulwich subcommand.""" + + def run(self, args): + """Run the command.""" + raise NotImplementedError(self.run) + + +class cmd_archive(Command): + def run(self, args): + parser = argparse.ArgumentParser() + parser.add_argument( + "--remote", + type=str, + help="Retrieve archive from specified remote repo", + ) + parser.add_argument('committish', type=str, nargs='?') + args = parser.parse_args(args) + if args.remote: + client, path = get_transport_and_path(args.remote) + client.archive( + path, + args.committish, + sys.stdout.write, + write_error=sys.stderr.write, + ) + else: + porcelain.archive( + ".", args.committish, outstream=sys.stdout.buffer, + errstream=sys.stderr + ) + + +class cmd_add(Command): + def run(self, argv): + parser = argparse.ArgumentParser() + args = parser.parse_args(argv) + + porcelain.add(".", paths=args) + + +class cmd_rm(Command): + def run(self, argv): + parser = argparse.ArgumentParser() + args = parser.parse_args(argv) + + porcelain.rm(".", paths=args) + + +class cmd_fetch_pack(Command): + def run(self, argv): + parser = argparse.ArgumentParser() + parser.add_argument('--all', action='store_true') + parser.add_argument('location', nargs='?', type=str) + args = parser.parse_args(argv) + client, path = get_transport_and_path(args.location) + r = Repo(".") + if args.all: + determine_wants = r.object_store.determine_wants_all + else: + + def determine_wants(x, **kwargs): + return [y for y in args if y not in r.object_store] + + client.fetch(path, r, determine_wants) + + +class cmd_fetch(Command): + def run(self, args): + opts, args = getopt(args, "", []) + opts = dict(opts) + client, path = get_transport_and_path(args.pop(0)) + r = Repo(".") + refs = client.fetch(path, r, progress=sys.stdout.write) + print("Remote refs:") + for item in refs.items(): + print("%s -> %s" % item) + + +class cmd_fsck(Command): + def run(self, args): + opts, args = getopt(args, "", []) + opts = dict(opts) + for (obj, msg) in porcelain.fsck("."): + print("%s: %s" % (obj, msg)) + + +class cmd_log(Command): + def run(self, args): + parser = optparse.OptionParser() + parser.add_option( + "--reverse", + dest="reverse", + action="store_true", + help="Reverse order in which entries are printed", + ) + parser.add_option( + "--name-status", + dest="name_status", + action="store_true", + help="Print name/status for each changed file", + ) + options, args = parser.parse_args(args) + + porcelain.log( + ".", + paths=args, + reverse=options.reverse, + name_status=options.name_status, + outstream=sys.stdout, + ) + + +class cmd_diff(Command): + def run(self, args): + opts, args = getopt(args, "", []) + + r = Repo(".") + if args == []: + commit_id = b'HEAD' + else: + commit_id = args[0] + commit = parse_commit(r, commit_id) + parent_commit = r[commit.parents[0]] + porcelain.diff_tree( + r, parent_commit.tree, commit.tree, outstream=sys.stdout.buffer) + + +class cmd_dump_pack(Command): + def run(self, args): + opts, args = getopt(args, 
"", []) + + if args == []: + print("Usage: dulwich dump-pack FILENAME") + sys.exit(1) + + basename, _ = os.path.splitext(args[0]) + x = Pack(basename) + print("Object names checksum: %s" % x.name()) + print("Checksum: %s" % sha_to_hex(x.get_stored_checksum())) + if not x.check(): + print("CHECKSUM DOES NOT MATCH") + print("Length: %d" % len(x)) + for name in x: + try: + print("\t%s" % x[name]) + except KeyError as k: + print("\t%s: Unable to resolve base %s" % (name, k)) + except ApplyDeltaError as e: + print("\t%s: Unable to apply delta: %r" % (name, e)) + + +class cmd_dump_index(Command): + def run(self, args): + opts, args = getopt(args, "", []) + + if args == []: + print("Usage: dulwich dump-index FILENAME") + sys.exit(1) + + filename = args[0] + idx = Index(filename) + + for o in idx: + print(o, idx[o]) + + +class cmd_init(Command): + def run(self, args): + opts, args = getopt(args, "", ["bare"]) + opts = dict(opts) + + if args == []: + path = os.getcwd() + else: + path = args[0] + + porcelain.init(path, bare=("--bare" in opts)) + + +class cmd_clone(Command): + def run(self, args): + parser = optparse.OptionParser() + parser.add_option( + "--bare", + dest="bare", + help="Whether to create a bare repository.", + action="store_true", + ) + parser.add_option( + "--depth", dest="depth", type=int, help="Depth at which to fetch" + ) + parser.add_option( + "-b", "--branch", dest="branch", type=str, + help=("Check out branch instead of branch pointed to by remote " + "HEAD")) + options, args = parser.parse_args(args) + + if args == []: + print("usage: dulwich clone host:path [PATH]") + sys.exit(1) + + source = args.pop(0) + if len(args) > 0: + target = args.pop(0) + else: + target = None + + porcelain.clone(source, target, bare=options.bare, depth=options.depth, + branch=options.branch) + + +class cmd_commit(Command): + def run(self, args): + opts, args = getopt(args, "", ["message"]) + opts = dict(opts) + porcelain.commit(".", message=opts["--message"]) + + +class cmd_commit_tree(Command): + def run(self, args): + opts, args = getopt(args, "", ["message"]) + if args == []: + print("usage: dulwich commit-tree tree") + sys.exit(1) + opts = dict(opts) + porcelain.commit_tree(".", tree=args[0], message=opts["--message"]) + + +class cmd_update_server_info(Command): + def run(self, args): + porcelain.update_server_info(".") + + +class cmd_symbolic_ref(Command): + def run(self, args): + opts, args = getopt(args, "", ["ref-name", "force"]) + if not args: + print("Usage: dulwich symbolic-ref REF_NAME [--force]") + sys.exit(1) + + ref_name = args.pop(0) + porcelain.symbolic_ref(".", ref_name=ref_name, force="--force" in args) + + +class cmd_show(Command): + def run(self, argv): + parser = argparse.ArgumentParser() + parser.add_argument('objectish', type=str, nargs='*') + args = parser.parse_args(argv) + porcelain.show(".", args.objectish or None) + + +class cmd_diff_tree(Command): + def run(self, args): + opts, args = getopt(args, "", []) + if len(args) < 2: + print("Usage: dulwich diff-tree OLD-TREE NEW-TREE") + sys.exit(1) + porcelain.diff_tree(".", args[0], args[1]) + + +class cmd_rev_list(Command): + def run(self, args): + opts, args = getopt(args, "", []) + if len(args) < 1: + print("Usage: dulwich rev-list COMMITID...") + sys.exit(1) + porcelain.rev_list(".", args) + + +class cmd_tag(Command): + def run(self, args): + parser = optparse.OptionParser() + parser.add_option( + "-a", + "--annotated", + help="Create an annotated tag.", + action="store_true", + ) + parser.add_option( + "-s", "--sign", 
help="Sign the annotated tag.", action="store_true" + ) + options, args = parser.parse_args(args) + porcelain.tag_create( + ".", args[0], annotated=options.annotated, sign=options.sign + ) + + +class cmd_repack(Command): + def run(self, args): + opts, args = getopt(args, "", []) + opts = dict(opts) + porcelain.repack(".") + + +class cmd_reset(Command): + def run(self, args): + opts, args = getopt(args, "", ["hard", "soft", "mixed"]) + opts = dict(opts) + mode = "" + if "--hard" in opts: + mode = "hard" + elif "--soft" in opts: + mode = "soft" + elif "--mixed" in opts: + mode = "mixed" + porcelain.reset(".", mode=mode, *args) + + +class cmd_daemon(Command): + def run(self, args): + from dulwich import log_utils + from dulwich.protocol import TCP_GIT_PORT + + parser = optparse.OptionParser() + parser.add_option( + "-l", + "--listen_address", + dest="listen_address", + default="localhost", + help="Binding IP address.", + ) + parser.add_option( + "-p", + "--port", + dest="port", + type=int, + default=TCP_GIT_PORT, + help="Binding TCP port.", + ) + options, args = parser.parse_args(args) + + log_utils.default_logging_config() + if len(args) >= 1: + gitdir = args[0] + else: + gitdir = "." + + porcelain.daemon(gitdir, address=options.listen_address, port=options.port) + + +class cmd_web_daemon(Command): + def run(self, args): + from dulwich import log_utils + + parser = optparse.OptionParser() + parser.add_option( + "-l", + "--listen_address", + dest="listen_address", + default="", + help="Binding IP address.", + ) + parser.add_option( + "-p", + "--port", + dest="port", + type=int, + default=8000, + help="Binding TCP port.", + ) + options, args = parser.parse_args(args) + + log_utils.default_logging_config() + if len(args) >= 1: + gitdir = args[0] + else: + gitdir = "." + + porcelain.web_daemon(gitdir, address=options.listen_address, port=options.port) + + +class cmd_write_tree(Command): + def run(self, args): + parser = optparse.OptionParser() + options, args = parser.parse_args(args) + sys.stdout.write("%s\n" % porcelain.write_tree(".")) + + +class cmd_receive_pack(Command): + def run(self, args): + parser = optparse.OptionParser() + options, args = parser.parse_args(args) + if len(args) >= 1: + gitdir = args[0] + else: + gitdir = "." + porcelain.receive_pack(gitdir) + + +class cmd_upload_pack(Command): + def run(self, args): + parser = optparse.OptionParser() + options, args = parser.parse_args(args) + if len(args) >= 1: + gitdir = args[0] + else: + gitdir = "." + porcelain.upload_pack(gitdir) + + +class cmd_status(Command): + def run(self, args): + parser = optparse.OptionParser() + options, args = parser.parse_args(args) + if len(args) >= 1: + gitdir = args[0] + else: + gitdir = "." 
+ status = porcelain.status(gitdir) + if any(names for (kind, names) in status.staged.items()): + sys.stdout.write("Changes to be committed:\n\n") + for kind, names in status.staged.items(): + for name in names: + sys.stdout.write( + "\t%s: %s\n" % (kind, name.decode(sys.getfilesystemencoding())) + ) + sys.stdout.write("\n") + if status.unstaged: + sys.stdout.write("Changes not staged for commit:\n\n") + for name in status.unstaged: + sys.stdout.write("\t%s\n" % name.decode(sys.getfilesystemencoding())) + sys.stdout.write("\n") + if status.untracked: + sys.stdout.write("Untracked files:\n\n") + for name in status.untracked: + sys.stdout.write("\t%s\n" % name) + sys.stdout.write("\n") + + +class cmd_ls_remote(Command): + def run(self, args): + opts, args = getopt(args, "", []) + if len(args) < 1: + print("Usage: dulwich ls-remote URL") + sys.exit(1) + refs = porcelain.ls_remote(args[0]) + for ref in sorted(refs): + sys.stdout.write("%s\t%s\n" % (ref, refs[ref])) + + +class cmd_ls_tree(Command): + def run(self, args): + parser = optparse.OptionParser() + parser.add_option( + "-r", + "--recursive", + action="store_true", + help="Recursively list tree contents.", + ) + parser.add_option("--name-only", action="store_true", help="Only display name.") + options, args = parser.parse_args(args) + try: + treeish = args.pop(0) + except IndexError: + treeish = None + porcelain.ls_tree( + ".", + treeish, + outstream=sys.stdout, + recursive=options.recursive, + name_only=options.name_only, + ) + + +class cmd_pack_objects(Command): + def run(self, args): + opts, args = getopt(args, "", ["stdout"]) + opts = dict(opts) + if len(args) < 1 and "--stdout" not in args: + print("Usage: dulwich pack-objects basename") + sys.exit(1) + object_ids = [line.strip() for line in sys.stdin.readlines()] + basename = args[0] + if "--stdout" in opts: + packf = getattr(sys.stdout, "buffer", sys.stdout) + idxf = None + close = [] + else: + packf = open(basename + ".pack", "w") + idxf = open(basename + ".idx", "w") + close = [packf, idxf] + porcelain.pack_objects(".", object_ids, packf, idxf) + for f in close: + f.close() + + +class cmd_pull(Command): + def run(self, args): + parser = optparse.OptionParser() + options, args = parser.parse_args(args) + try: + from_location = args[0] + except IndexError: + from_location = None + porcelain.pull(".", from_location) + + +class cmd_push(Command): + + def run(self, argv): + parser = argparse.ArgumentParser() + parser.add_argument('-f', '--force', action='store_true', help='Force') + parser.add_argument('to_location', type=str) + parser.add_argument('refspec', type=str, nargs='*') + args = parser.parse_args(argv) + try: + porcelain.push('.', args.to_location, args.refspec or None, force=args.force) + except porcelain.DivergedBranches: + sys.stderr.write('Diverged branches; specify --force to override') + return 1 + + +class cmd_remote_add(Command): + def run(self, args): + parser = optparse.OptionParser() + options, args = parser.parse_args(args) + porcelain.remote_add(".", args[0], args[1]) + + +class SuperCommand(Command): + + subcommands: Dict[str, Type[Command]] = {} + default_command: Optional[Type[Command]] = None + + def run(self, args): + if not args and not self.default_command: + print("Supported subcommands: %s" % ", ".join(self.subcommands.keys())) + return False + cmd = args[0] + try: + cmd_kls = self.subcommands[cmd] + except KeyError: + print("No such subcommand: %s" % args[0]) + return False + return cmd_kls().run(args[1:]) + + +class cmd_remote(SuperCommand): + + 
subcommands = {
+        "add": cmd_remote_add,
+    }
+
+
+class cmd_submodule_list(Command):
+    def run(self, argv):
+        parser = argparse.ArgumentParser()
+        parser.parse_args(argv)
+        for path, sha in porcelain.submodule_list("."):
+            sys.stdout.write(' %s %s\n' % (sha, path))
+
+
+class cmd_submodule_init(Command):
+    def run(self, argv):
+        parser = argparse.ArgumentParser()
+        parser.parse_args(argv)
+        porcelain.submodule_init(".")
+
+
+class cmd_submodule(SuperCommand):
+
+    subcommands = {
+        "init": cmd_submodule_init,
+    }
+
+    default_command = cmd_submodule_init
+
+
+class cmd_check_ignore(Command):
+    def run(self, args):
+        parser = optparse.OptionParser()
+        options, args = parser.parse_args(args)
+        ret = 1
+        for path in porcelain.check_ignore(".", args):
+            print(path)
+            ret = 0
+        return ret
+
+
+class cmd_check_mailmap(Command):
+    def run(self, args):
+        parser = optparse.OptionParser()
+        options, args = parser.parse_args(args)
+        for arg in args:
+            canonical_identity = porcelain.check_mailmap(".", arg)
+            print(canonical_identity)
+
+
+class cmd_stash_list(Command):
+    def run(self, args):
+        parser = optparse.OptionParser()
+        options, args = parser.parse_args(args)
+        for i, entry in porcelain.stash_list("."):
+            print("stash@{%d}: %s" % (i, entry.message.rstrip("\n")))
+
+
+class cmd_stash_push(Command):
+    def run(self, args):
+        parser = optparse.OptionParser()
+        options, args = parser.parse_args(args)
+        porcelain.stash_push(".")
+        print("Saved working directory and index state")
+
+
+class cmd_stash_pop(Command):
+    def run(self, args):
+        parser = optparse.OptionParser()
+        options, args = parser.parse_args(args)
+        porcelain.stash_pop(".")
+        print("Restored working directory and index state")
+
+
+class cmd_stash(SuperCommand):
+
+    subcommands = {
+        "list": cmd_stash_list,
+        "pop": cmd_stash_pop,
+        "push": cmd_stash_push,
+    }
+
+
+class cmd_ls_files(Command):
+    def run(self, args):
+        parser = optparse.OptionParser()
+        options, args = parser.parse_args(args)
+        for name in porcelain.ls_files("."):
+            print(name)
+
+
+class cmd_describe(Command):
+    def run(self, args):
+        parser = optparse.OptionParser()
+        options, args = parser.parse_args(args)
+        print(porcelain.describe("."))
+
+
+class cmd_help(Command):
+    def run(self, args):
+        parser = optparse.OptionParser()
+        parser.add_option(
+            "-a",
+            "--all",
+            dest="all",
+            action="store_true",
+            help="List all commands.",
+        )
+        options, args = parser.parse_args(args)
+
+        if options.all:
+            print("Available commands:")
+            for cmd in sorted(commands):
+                print("  %s" % cmd)
+        else:
+            print(
+                """\
+The dulwich command line tool is currently a very basic frontend for the
+Dulwich python module. For full functionality, please see the API reference.
+
+For a list of supported commands, see 'dulwich help -a'.
+""" + ) + + +commands = { + "add": cmd_add, + "archive": cmd_archive, + "check-ignore": cmd_check_ignore, + "check-mailmap": cmd_check_mailmap, + "clone": cmd_clone, + "commit": cmd_commit, + "commit-tree": cmd_commit_tree, + "describe": cmd_describe, + "daemon": cmd_daemon, + "diff": cmd_diff, + "diff-tree": cmd_diff_tree, + "dump-pack": cmd_dump_pack, + "dump-index": cmd_dump_index, + "fetch-pack": cmd_fetch_pack, + "fetch": cmd_fetch, + "fsck": cmd_fsck, + "help": cmd_help, + "init": cmd_init, + "log": cmd_log, + "ls-files": cmd_ls_files, + "ls-remote": cmd_ls_remote, + "ls-tree": cmd_ls_tree, + "pack-objects": cmd_pack_objects, + "pull": cmd_pull, + "push": cmd_push, + "receive-pack": cmd_receive_pack, + "remote": cmd_remote, + "repack": cmd_repack, + "reset": cmd_reset, + "rev-list": cmd_rev_list, + "rm": cmd_rm, + "show": cmd_show, + "stash": cmd_stash, + "status": cmd_status, + "symbolic-ref": cmd_symbolic_ref, + "submodule": cmd_submodule, + "tag": cmd_tag, + "update-server-info": cmd_update_server_info, + "upload-pack": cmd_upload_pack, + "web-daemon": cmd_web_daemon, + "write-tree": cmd_write_tree, +} + + +def main(argv=None): + if argv is None: + argv = sys.argv[1:] + + if len(argv) < 1: + print("Usage: dulwich <%s> [OPTIONS...]" % ("|".join(commands.keys()))) + return 1 + + cmd = argv[0] + try: + cmd_kls = commands[cmd] + except KeyError: + print("No such subcommand: %s" % cmd) + return 1 + # TODO(jelmer): Return non-0 on errors + return cmd_kls().run(argv[1:]) + + +def _main(): + if "DULWICH_PDB" in os.environ and getattr(signal, "SIGQUIT", None): + signal.signal(signal.SIGQUIT, signal_quit) # type: ignore + signal.signal(signal.SIGINT, signal_int) + + sys.exit(main()) + + +if __name__ == "__main__": + _main() diff --git a/env/lib/python3.8/site-packages/dulwich/client.py b/env/lib/python3.8/site-packages/dulwich/client.py new file mode 100644 index 00000000..a8b70fc9 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/client.py @@ -0,0 +1,2398 @@ +# client.py -- Implementation of the client side git protocols +# Copyright (C) 2008-2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Client side support for the Git protocol. 
+ +The Dulwich client supports the following capabilities: + + * thin-pack + * multi_ack_detailed + * multi_ack + * side-band-64k + * ofs-delta + * quiet + * report-status + * delete-refs + * shallow + +Known capabilities that are not supported: + + * no-progress + * include-tag +""" + +from contextlib import closing +from io import BytesIO, BufferedReader +import logging +import os +import select +import socket +import subprocess +import sys +from typing import ( + Any, + Callable, + Dict, + List, + Optional, + Set, + Tuple, + IO, + Union, + TYPE_CHECKING, +) + +from urllib.parse import ( + quote as urlquote, + unquote as urlunquote, + urlparse, + urljoin, + urlunsplit, + urlunparse, +) + +if TYPE_CHECKING: + import urllib3 + + +import dulwich +from dulwich.config import get_xdg_config_home_path, Config, apply_instead_of +from dulwich.errors import ( + GitProtocolError, + NotGitRepository, + SendPackError, +) +from dulwich.protocol import ( + HangupException, + _RBUFSIZE, + agent_string, + capability_agent, + extract_capability_names, + CAPABILITY_AGENT, + CAPABILITY_DELETE_REFS, + CAPABILITY_INCLUDE_TAG, + CAPABILITY_MULTI_ACK, + CAPABILITY_MULTI_ACK_DETAILED, + CAPABILITY_OFS_DELTA, + CAPABILITY_QUIET, + CAPABILITY_REPORT_STATUS, + CAPABILITY_SHALLOW, + CAPABILITY_SYMREF, + CAPABILITY_SIDE_BAND_64K, + CAPABILITY_THIN_PACK, + CAPABILITIES_REF, + KNOWN_RECEIVE_CAPABILITIES, + KNOWN_UPLOAD_CAPABILITIES, + COMMAND_DEEPEN, + COMMAND_SHALLOW, + COMMAND_UNSHALLOW, + COMMAND_DONE, + COMMAND_HAVE, + COMMAND_WANT, + SIDE_BAND_CHANNEL_DATA, + SIDE_BAND_CHANNEL_PROGRESS, + SIDE_BAND_CHANNEL_FATAL, + PktLineParser, + Protocol, + TCP_GIT_PORT, + ZERO_SHA, + extract_capabilities, + parse_capability, + pkt_line, +) +from dulwich.pack import ( + write_pack_objects, + PackChunkGenerator, +) +from dulwich.refs import ( + read_info_refs, + ANNOTATED_TAG_SUFFIX, + _import_remote_refs, +) +from dulwich.repo import Repo + + +# url2pathname is lazily imported +url2pathname = None + + +logger = logging.getLogger(__name__) + + +class InvalidWants(Exception): + """Invalid wants.""" + + def __init__(self, wants): + Exception.__init__( + self, "requested wants not in server provided refs: %r" % wants + ) + + +class HTTPUnauthorized(Exception): + """Raised when authentication fails.""" + + def __init__(self, www_authenticate, url): + Exception.__init__(self, "No valid credentials provided") + self.www_authenticate = www_authenticate + self.url = url + + +class HTTPProxyUnauthorized(Exception): + """Raised when proxy authentication fails.""" + + def __init__(self, proxy_authenticate, url): + Exception.__init__(self, "No valid proxy credentials provided") + self.proxy_authenticate = proxy_authenticate + self.url = url + + +def _fileno_can_read(fileno): + """Check if a file descriptor is readable.""" + return len(select.select([fileno], [], [], 0)[0]) > 0 + + +def _win32_peek_avail(handle): + """Wrapper around PeekNamedPipe to check how many bytes are available.""" + from ctypes import byref, wintypes, windll + + c_avail = wintypes.DWORD() + c_message = wintypes.DWORD() + success = windll.kernel32.PeekNamedPipe( + handle, None, 0, None, byref(c_avail), byref(c_message) + ) + if not success: + raise OSError(wintypes.GetLastError()) + return c_avail.value + + +COMMON_CAPABILITIES = [CAPABILITY_OFS_DELTA, CAPABILITY_SIDE_BAND_64K] +UPLOAD_CAPABILITIES = [ + CAPABILITY_THIN_PACK, + CAPABILITY_MULTI_ACK, + CAPABILITY_MULTI_ACK_DETAILED, + CAPABILITY_SHALLOW, +] + COMMON_CAPABILITIES +RECEIVE_CAPABILITIES = [ + 
CAPABILITY_REPORT_STATUS, + CAPABILITY_DELETE_REFS, +] + COMMON_CAPABILITIES + + +class ReportStatusParser(object): + """Handle status as reported by servers with 'report-status' capability.""" + + def __init__(self): + self._done = False + self._pack_status = None + self._ref_statuses = [] + + def check(self): + """Check if there were any errors and, if so, raise exceptions. + + Raises: + SendPackError: Raised when the server could not unpack + Returns: + iterator over refs + """ + if self._pack_status not in (b"unpack ok", None): + raise SendPackError(self._pack_status) + for status in self._ref_statuses: + try: + status, rest = status.split(b" ", 1) + except ValueError: + # malformed response, move on to the next one + continue + if status == b"ng": + ref, error = rest.split(b" ", 1) + yield ref, error.decode("utf-8") + elif status == b"ok": + yield rest, None + else: + raise GitProtocolError("invalid ref status %r" % status) + + def handle_packet(self, pkt): + """Handle a packet. + + Raises: + GitProtocolError: Raised when packets are received after a flush + packet. + """ + if self._done: + raise GitProtocolError("received more data after status report") + if pkt is None: + self._done = True + return + if self._pack_status is None: + self._pack_status = pkt.strip() + else: + ref_status = pkt.strip() + self._ref_statuses.append(ref_status) + + +def read_pkt_refs(pkt_seq): + server_capabilities = None + refs = {} + # Receive refs from server + for pkt in pkt_seq: + (sha, ref) = pkt.rstrip(b"\n").split(None, 1) + if sha == b"ERR": + raise GitProtocolError(ref.decode("utf-8", "replace")) + if server_capabilities is None: + (ref, server_capabilities) = extract_capabilities(ref) + refs[ref] = sha + + if len(refs) == 0: + return {}, set([]) + if refs == {CAPABILITIES_REF: ZERO_SHA}: + refs = {} + return refs, set(server_capabilities) + + +class FetchPackResult(object): + """Result of a fetch-pack operation. 
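+
+    For backwards compatibility this object also emulates the plain refs
+    dict that older versions of the API returned, emitting a
+    DeprecationWarning when used that way (see _FORWARDED_ATTRS below).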
+ + Attributes: + refs: Dictionary with all remote refs + symrefs: Dictionary with remote symrefs + agent: User agent string + """ + + _FORWARDED_ATTRS = [ + "clear", + "copy", + "fromkeys", + "get", + "items", + "keys", + "pop", + "popitem", + "setdefault", + "update", + "values", + "viewitems", + "viewkeys", + "viewvalues", + ] + + def __init__(self, refs, symrefs, agent, new_shallow=None, new_unshallow=None): + self.refs = refs + self.symrefs = symrefs + self.agent = agent + self.new_shallow = new_shallow + self.new_unshallow = new_unshallow + + def _warn_deprecated(self): + import warnings + + warnings.warn( + "Use FetchPackResult.refs instead.", + DeprecationWarning, + stacklevel=3, + ) + + def __eq__(self, other): + if isinstance(other, dict): + self._warn_deprecated() + return self.refs == other + return ( + self.refs == other.refs + and self.symrefs == other.symrefs + and self.agent == other.agent + ) + + def __contains__(self, name): + self._warn_deprecated() + return name in self.refs + + def __getitem__(self, name): + self._warn_deprecated() + return self.refs[name] + + def __len__(self): + self._warn_deprecated() + return len(self.refs) + + def __iter__(self): + self._warn_deprecated() + return iter(self.refs) + + def __getattribute__(self, name): + if name in type(self)._FORWARDED_ATTRS: + self._warn_deprecated() + return getattr(self.refs, name) + return super(FetchPackResult, self).__getattribute__(name) + + def __repr__(self): + return "%s(%r, %r, %r)" % ( + self.__class__.__name__, + self.refs, + self.symrefs, + self.agent, + ) + + +class SendPackResult(object): + """Result of a upload-pack operation. + + Attributes: + refs: Dictionary with all remote refs + agent: User agent string + ref_status: Optional dictionary mapping ref name to error message (if it + failed to update), or None if it was updated successfully + """ + + _FORWARDED_ATTRS = [ + "clear", + "copy", + "fromkeys", + "get", + "items", + "keys", + "pop", + "popitem", + "setdefault", + "update", + "values", + "viewitems", + "viewkeys", + "viewvalues", + ] + + def __init__(self, refs, agent=None, ref_status=None): + self.refs = refs + self.agent = agent + self.ref_status = ref_status + + def _warn_deprecated(self): + import warnings + + warnings.warn( + "Use SendPackResult.refs instead.", + DeprecationWarning, + stacklevel=3, + ) + + def __eq__(self, other): + if isinstance(other, dict): + self._warn_deprecated() + return self.refs == other + return self.refs == other.refs and self.agent == other.agent + + def __contains__(self, name): + self._warn_deprecated() + return name in self.refs + + def __getitem__(self, name): + self._warn_deprecated() + return self.refs[name] + + def __len__(self): + self._warn_deprecated() + return len(self.refs) + + def __iter__(self): + self._warn_deprecated() + return iter(self.refs) + + def __getattribute__(self, name): + if name in type(self)._FORWARDED_ATTRS: + self._warn_deprecated() + return getattr(self.refs, name) + return super(SendPackResult, self).__getattribute__(name) + + def __repr__(self): + return "%s(%r, %r)" % (self.__class__.__name__, self.refs, self.agent) + + +def _read_shallow_updates(pkt_seq): + new_shallow = set() + new_unshallow = set() + for pkt in pkt_seq: + cmd, sha = pkt.split(b" ", 1) + if cmd == COMMAND_SHALLOW: + new_shallow.add(sha.strip()) + elif cmd == COMMAND_UNSHALLOW: + new_unshallow.add(sha.strip()) + else: + raise GitProtocolError("unknown command %s" % pkt) + return (new_shallow, new_unshallow) + + +class _v1ReceivePackHeader(object): + + 
def __init__(self, capabilities, old_refs, new_refs): + self.want = [] + self.have = [] + self._it = self._handle_receive_pack_head(capabilities, old_refs, new_refs) + self.sent_capabilities = False + + def __iter__(self): + return self._it + + def _handle_receive_pack_head(self, capabilities, old_refs, new_refs): + """Handle the head of a 'git-receive-pack' request. + + Args: + capabilities: List of negotiated capabilities + old_refs: Old refs, as received from the server + new_refs: Refs to change + + Yields: + pkt-line data to send to the server, terminated by None + """ + self.have = [x for x in old_refs.values() if not x == ZERO_SHA] + + for refname in new_refs: + if not isinstance(refname, bytes): + raise TypeError("refname is not a bytestring: %r" % refname) + old_sha1 = old_refs.get(refname, ZERO_SHA) + if not isinstance(old_sha1, bytes): + raise TypeError( + "old sha1 for %s is not a bytestring: %r" % (refname, old_sha1) + ) + new_sha1 = new_refs.get(refname, ZERO_SHA) + if not isinstance(new_sha1, bytes): + raise TypeError( + "new sha1 for %s is not a bytestring: %r" % (refname, new_sha1) + ) + + if old_sha1 != new_sha1: + logger.debug( + 'Sending updated ref %r: %r -> %r', + refname, old_sha1, new_sha1) + if self.sent_capabilities: + yield old_sha1 + b" " + new_sha1 + b" " + refname + else: + yield ( + old_sha1 + + b" " + + new_sha1 + + b" " + + refname + + b"\0" + + b" ".join(sorted(capabilities)) + ) + self.sent_capabilities = True + if new_sha1 not in self.have and new_sha1 != ZERO_SHA: + self.want.append(new_sha1) + yield None + + +def _read_side_band64k_data(pkt_seq, channel_callbacks): + """Read per-channel data. + + This requires the side-band-64k capability. + + Args: + pkt_seq: Sequence of packets to read + channel_callbacks: Dictionary mapping channels to packet + handlers to use. None for a callback discards channel data. + """ + for pkt in pkt_seq: + channel = ord(pkt[:1]) + pkt = pkt[1:] + try: + cb = channel_callbacks[channel] + except KeyError as exc: + raise AssertionError( + "Invalid sideband channel %d" % channel) from exc + else: + if cb is not None: + cb(pkt) + + +def _handle_upload_pack_head( + proto, capabilities, graph_walker, wants, can_read, depth +): + """Handle the head of a 'git-upload-pack' request.
+ + Args: + proto: Protocol object to read from + capabilities: List of negotiated capabilities + graph_walker: GraphWalker instance to call .ack() on + wants: List of commits to fetch + can_read: function that returns a boolean that indicates + whether there is extra graph data to read on proto + depth: Depth for request + + Returns: + Tuple of (new_shallow, new_unshallow) + """ + assert isinstance(wants, list) and isinstance(wants[0], bytes) + proto.write_pkt_line( + COMMAND_WANT + + b" " + + wants[0] + + b" " + + b" ".join(sorted(capabilities)) + + b"\n" + ) + for want in wants[1:]: + proto.write_pkt_line(COMMAND_WANT + b" " + want + b"\n") + if depth not in (0, None) or getattr(graph_walker, "shallow", None): + if CAPABILITY_SHALLOW not in capabilities: + raise GitProtocolError( + "server does not support shallow capability required for depth" + ) + for sha in graph_walker.shallow: + proto.write_pkt_line(COMMAND_SHALLOW + b" " + sha + b"\n") + if depth is not None: + proto.write_pkt_line( + COMMAND_DEEPEN + b" " + str(depth).encode("ascii") + b"\n" + ) + proto.write_pkt_line(None) + if can_read is not None: + (new_shallow, new_unshallow) = _read_shallow_updates(proto.read_pkt_seq()) + else: + new_shallow = new_unshallow = None + else: + new_shallow = new_unshallow = set() + proto.write_pkt_line(None) + have = next(graph_walker) + while have: + proto.write_pkt_line(COMMAND_HAVE + b" " + have + b"\n") + if can_read is not None and can_read(): + pkt = proto.read_pkt_line() + parts = pkt.rstrip(b"\n").split(b" ") + if parts[0] == b"ACK": + graph_walker.ack(parts[1]) + if parts[2] in (b"continue", b"common"): + pass + elif parts[2] == b"ready": + break + else: + raise AssertionError( + "%s not in ('continue', 'ready', 'common')" % parts[2] + ) + have = next(graph_walker) + proto.write_pkt_line(COMMAND_DONE + b"\n") + return (new_shallow, new_unshallow) + + +def _handle_upload_pack_tail( + proto, + capabilities, + graph_walker, + pack_data, + progress=None, + rbufsize=_RBUFSIZE, +): + """Handle the tail of a 'git-upload-pack' request. + + Args: + proto: Protocol object to read from + capabilities: List of negotiated capabilities + graph_walker: GraphWalker instance to call .ack() on + pack_data: Function to call with pack data + progress: Optional progress reporting function + rbufsize: Read buffer size + + Returns: + + """ + pkt = proto.read_pkt_line() + while pkt: + parts = pkt.rstrip(b"\n").split(b" ") + if parts[0] == b"ACK": + graph_walker.ack(parts[1]) + if len(parts) < 3 or parts[2] not in ( + b"ready", + b"continue", + b"common", + ): + break + pkt = proto.read_pkt_line() + if CAPABILITY_SIDE_BAND_64K in capabilities: + if progress is None: + # Just ignore progress data + + def progress(x): + pass + + _read_side_band64k_data( + proto.read_pkt_seq(), + { + SIDE_BAND_CHANNEL_DATA: pack_data, + SIDE_BAND_CHANNEL_PROGRESS: progress, + }, + ) + else: + while True: + data = proto.read(rbufsize) + if data == b"": + break + pack_data(data) + + +# TODO(durin42): this doesn't correctly degrade if the server doesn't +# support some capabilities. This should work properly with servers +# that don't support multi_ack. +class GitClient(object): + """Git smart server client.""" + + def __init__( + self, + thin_packs=True, + report_activity=None, + quiet=False, + include_tags=False, + ): + """Create a new GitClient instance. + + Args: + thin_packs: Whether or not thin packs should be retrieved + report_activity: Optional callback for reporting transport + activity.
+ include_tags: send annotated tags when sending the objects they point + to + """ + self._report_activity = report_activity + self._report_status_parser = None + self._fetch_capabilities = set(UPLOAD_CAPABILITIES) + self._fetch_capabilities.add(capability_agent()) + self._send_capabilities = set(RECEIVE_CAPABILITIES) + self._send_capabilities.add(capability_agent()) + if quiet: + self._send_capabilities.add(CAPABILITY_QUIET) + if not thin_packs: + self._fetch_capabilities.remove(CAPABILITY_THIN_PACK) + if include_tags: + self._fetch_capabilities.add(CAPABILITY_INCLUDE_TAG) + + def get_url(self, path): + """Retrieves full url to given path. + + Args: + path: Repository path (as string) + + Returns: + Url to path (as string) + + """ + raise NotImplementedError(self.get_url) + + @classmethod + def from_parsedurl(cls, parsedurl, **kwargs): + """Create an instance of this client from a urlparse.parsed object. + + Args: + parsedurl: Result of urlparse() + + Returns: + A `GitClient` object + """ + raise NotImplementedError(cls.from_parsedurl) + + def send_pack(self, path, update_refs, generate_pack_data, progress=None): + """Upload a pack to a remote repository. + + Args: + path: Repository path (as bytestring) + update_refs: Function to determine changes to remote refs. Receive + dict with existing remote refs, returns dict with + changed refs (name -> sha, where sha=ZERO_SHA for deletions) + generate_pack_data: Function that can return a tuple + with number of objects and list of pack data to include + progress: Optional progress function + + Returns: + SendPackResult object + + Raises: + SendPackError: if server rejects the pack data + + """ + raise NotImplementedError(self.send_pack) + + def clone(self, path, target_path, mkdir: bool = True, bare=False, origin="origin", + checkout=None, branch=None, progress=None, depth=None): + """Clone a repository.""" + from .refs import _set_origin_head, _set_default_branch, _set_head + + if mkdir: + os.mkdir(target_path) + + try: + target = None + if not bare: + target = Repo.init(target_path) + if checkout is None: + checkout = True + else: + if checkout: + raise ValueError("checkout and bare are incompatible") + target = Repo.init_bare(target_path) + + # TODO(jelmer): abstract method for get_location? 
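(Editorial aside: the `update_refs` callback taken by `send_pack` above simply maps the current remote refs to the desired new state; a hypothetical example, with invented ref names and SHAs:)

def example_update_refs(old_refs):
    # Receives {refname: sha}; returns the refs as they should end up.
    new_refs = dict(old_refs)
    new_refs[b"refs/heads/main"] = b"1" * 40      # invented new sha to push
    new_refs[b"refs/heads/old-topic"] = ZERO_SHA  # ZERO_SHA requests deletion
    return new_refs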
+ if isinstance(self, (LocalGitClient, SubprocessGitClient)): + encoded_path = path.encode('utf-8') + else: + encoded_path = self.get_url(path).encode('utf-8') + + assert target is not None + target_config = target.get_config() + target_config.set((b"remote", origin.encode('utf-8')), b"url", encoded_path) + target_config.set( + (b"remote", origin.encode('utf-8')), + b"fetch", + b"+refs/heads/*:refs/remotes/" + origin.encode('utf-8') + b"/*", + ) + target_config.write_to_path() + + ref_message = b"clone: from " + encoded_path + result = self.fetch(path, target, progress=progress, depth=depth) + _import_remote_refs( + target.refs, origin, result.refs, message=ref_message) + + origin_head = result.symrefs.get(b"HEAD") + origin_sha = result.refs.get(b'HEAD') + if origin_sha and not origin_head: + # set detached HEAD + target.refs[b"HEAD"] = origin_sha + + _set_origin_head(target.refs, origin.encode('utf-8'), origin_head) + head_ref = _set_default_branch( + target.refs, origin.encode('utf-8'), origin_head, branch, ref_message + ) + + # Update target head + if head_ref: + head = _set_head(target.refs, head_ref, ref_message) + else: + head = None + + if checkout and head is not None: + target.reset_index() + except BaseException: + if target is not None: + target.close() + if mkdir: + import shutil + shutil.rmtree(target_path) + raise + return target + + def fetch( + self, + path: str, + target: Repo, + determine_wants: Optional[ + Callable[[Dict[bytes, bytes], Optional[int]], List[bytes]] + ] = None, + progress: Optional[Callable[[bytes], None]] = None, + depth: Optional[int] = None + ) -> FetchPackResult: + """Fetch into a target repository. + + Args: + path: Path to fetch from (as bytestring) + target: Target repository to fetch into + determine_wants: Optional function to determine what refs to fetch. + Receives dictionary of name->sha, should return + list of shas to fetch. Defaults to all shas. + progress: Optional progress function + depth: Depth to fetch at + + Returns: + FetchPackResult object with all remote refs (not just those fetched) + + """ + if determine_wants is None: + determine_wants = target.object_store.determine_wants_all + if CAPABILITY_THIN_PACK in self._fetch_capabilities: + # TODO(jelmer): Avoid reading entire file into memory and + # only processing it after the whole file has been fetched. + from tempfile import SpooledTemporaryFile + f = SpooledTemporaryFile() # type: IO[bytes] + + def commit(): + if f.tell(): + f.seek(0) + target.object_store.add_thin_pack(f.read, None) + f.close() + + def abort(): + f.close() + + else: + f, commit, abort = target.object_store.add_pack() + try: + result = self.fetch_pack( + path, + determine_wants, + target.get_graph_walker(), + f.write, + progress=progress, + depth=depth, + ) + except BaseException: + abort() + raise + else: + commit() + target.update_shallow(result.new_shallow, result.new_unshallow) + return result + + def fetch_pack( + self, + path, + determine_wants, + graph_walker, + pack_data, + progress=None, + depth=None, + ): + """Retrieve a pack from a git smart server. + + Args: + path: Remote path to fetch from + determine_wants: Function determine what refs + to fetch. Receives dictionary of name->sha, should return + list of shas to fetch. + graph_walker: Object with next() and ack().
+ pack_data: Callback called for each bit of data in the pack + progress: Callback for progress reports (strings) + depth: Shallow fetch depth + + Returns: + FetchPackResult object + + """ + raise NotImplementedError(self.fetch_pack) + + def get_refs(self, path): + """Retrieve the current refs from a git smart server. + + Args: + path: Path to the repo to fetch from. (as bytestring) + + Returns: + + """ + raise NotImplementedError(self.get_refs) + + @staticmethod + def _should_send_pack(new_refs): + # The packfile MUST NOT be sent if the only command used is delete. + return any(sha != ZERO_SHA for sha in new_refs.values()) + + def _negotiate_receive_pack_capabilities(self, server_capabilities): + negotiated_capabilities = self._send_capabilities & server_capabilities + agent = None + for capability in server_capabilities: + k, v = parse_capability(capability) + if k == CAPABILITY_AGENT: + agent = v + unknown_capabilities = ( # noqa: F841 + extract_capability_names(server_capabilities) - KNOWN_RECEIVE_CAPABILITIES + ) + # TODO(jelmer): warn about unknown capabilities + return negotiated_capabilities, agent + + def _handle_receive_pack_tail( + self, + proto: Protocol, + capabilities: Set[bytes], + progress: Callable[[bytes], None] = None, + ) -> Optional[Dict[bytes, Optional[str]]]: + """Handle the tail of a 'git-receive-pack' request. + + Args: + proto: Protocol object to read from + capabilities: List of negotiated capabilities + progress: Optional progress reporting function + + Returns: + dict mapping ref name to: + error message if the ref failed to update + None if it was updated successfully + """ + if CAPABILITY_SIDE_BAND_64K in capabilities: + if progress is None: + + def progress(x): + pass + + channel_callbacks = {2: progress} + if CAPABILITY_REPORT_STATUS in capabilities: + channel_callbacks[1] = PktLineParser( + self._report_status_parser.handle_packet + ).parse + _read_side_band64k_data(proto.read_pkt_seq(), channel_callbacks) + else: + if CAPABILITY_REPORT_STATUS in capabilities: + for pkt in proto.read_pkt_seq(): + self._report_status_parser.handle_packet(pkt) + if self._report_status_parser is not None: + return dict(self._report_status_parser.check()) + + return None + + def _negotiate_upload_pack_capabilities(self, server_capabilities): + unknown_capabilities = ( # noqa: F841 + extract_capability_names(server_capabilities) - KNOWN_UPLOAD_CAPABILITIES + ) + # TODO(jelmer): warn about unknown capabilities + symrefs = {} + agent = None + for capability in server_capabilities: + k, v = parse_capability(capability) + if k == CAPABILITY_SYMREF: + (src, dst) = v.split(b":", 1) + symrefs[src] = dst + if k == CAPABILITY_AGENT: + agent = v + + negotiated_capabilities = self._fetch_capabilities & server_capabilities + return (negotiated_capabilities, symrefs, agent) + + def archive( + self, + path, + committish, + write_data, + progress=None, + write_error=None, + format=None, + subdirs=None, + prefix=None, + ): + """Retrieve an archive of the specified tree. + """ + raise NotImplementedError(self.archive) + + +def check_wants(wants, refs): + """Check that a set of wants is valid. 
+ + Args: + wants: Set of object SHAs to fetch + refs: Refs dictionary to check against + + Returns: + + """ + missing = set(wants) - { + v for (k, v) in refs.items() if not k.endswith(ANNOTATED_TAG_SUFFIX) + } + if missing: + raise InvalidWants(missing) + + +def _remote_error_from_stderr(stderr): + if stderr is None: + return HangupException() + lines = [line.rstrip(b"\n") for line in stderr.readlines()] + for line in lines: + if line.startswith(b"ERROR: "): + return GitProtocolError(line[len(b"ERROR: ") :].decode("utf-8", "replace")) + return HangupException(lines) + + +class TraditionalGitClient(GitClient): + """Traditional Git client.""" + + DEFAULT_ENCODING = "utf-8" + + def __init__(self, path_encoding=DEFAULT_ENCODING, **kwargs): + self._remote_path_encoding = path_encoding + super(TraditionalGitClient, self).__init__(**kwargs) + + def _connect(self, cmd, path): + """Create a connection to the server. + + This method is abstract - concrete implementations should + implement their own variant which connects to the server and + returns an initialized Protocol object with the service ready + for use and a can_read function which may be used to see if + reads would block. + + Args: + cmd: The git service name to which we should connect. + path: The path we should pass to the service. (as bytestring) + """ + raise NotImplementedError() + + def send_pack(self, path, update_refs, generate_pack_data, progress=None): + """Upload a pack to a remote repository. + + Args: + path: Repository path (as bytestring) + update_refs: Function to determine changes to remote refs. + Receive dict with existing remote refs, returns dict with + changed refs (name -> sha, where sha=ZERO_SHA for deletions) + generate_pack_data: Function that can return a tuple with + number of objects and pack data to upload. + progress: Optional callback called with progress updates + + Returns: + SendPackResult + + Raises: + SendPackError: if server rejects the pack data + + """ + proto, unused_can_read, stderr = self._connect(b"receive-pack", path) + with proto: + try: + old_refs, server_capabilities = read_pkt_refs(proto.read_pkt_seq()) + except HangupException as exc: + raise _remote_error_from_stderr(stderr) from exc + ( + negotiated_capabilities, + agent, + ) = self._negotiate_receive_pack_capabilities(server_capabilities) + if CAPABILITY_REPORT_STATUS in negotiated_capabilities: + self._report_status_parser = ReportStatusParser() + report_status_parser = self._report_status_parser + + try: + new_refs = orig_new_refs = update_refs(dict(old_refs)) + except BaseException: + proto.write_pkt_line(None) + raise + + if set(new_refs.items()).issubset(set(old_refs.items())): + proto.write_pkt_line(None) + return SendPackResult(new_refs, agent=agent, ref_status={}) + + if CAPABILITY_DELETE_REFS not in server_capabilities: + # Server does not support deletions. Fail later.
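(Editorial aside: the update commands streamed next via `_v1ReceivePackHeader` use the receive-pack framing `old-sha SP new-sha SP refname`, with the capability list appended after a NUL byte on the first command only; a standalone sketch with invented values, not dulwich API:)

old, new, ref = b"a" * 40, b"b" * 40, b"refs/heads/main"
caps = [b"delete-refs", b"report-status"]
first = old + b" " + new + b" " + ref + b"\x00" + b" ".join(sorted(caps))
later = old + b" " + new + b" " + ref  # subsequent commands omit capabilities
assert first.split(b"\x00", 1)[0] == later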
+ new_refs = dict(orig_new_refs) + for ref, sha in orig_new_refs.items(): + if sha == ZERO_SHA: + if CAPABILITY_REPORT_STATUS in negotiated_capabilities: + report_status_parser._ref_statuses.append( + b"ng " + ref + b" remote does not support deleting refs" + ) + report_status_parser._ref_status_ok = False + del new_refs[ref] + + if new_refs is None: + proto.write_pkt_line(None) + return SendPackResult(old_refs, agent=agent, ref_status={}) + + if len(new_refs) == 0 and orig_new_refs: + # NOOP - Original new refs filtered out by policy + proto.write_pkt_line(None) + if report_status_parser is not None: + ref_status = dict(report_status_parser.check()) + else: + ref_status = None + return SendPackResult(old_refs, agent=agent, ref_status=ref_status) + + header_handler = _v1ReceivePackHeader(negotiated_capabilities, old_refs, new_refs) + + for pkt in header_handler: + proto.write_pkt_line(pkt) + + pack_data_count, pack_data = generate_pack_data( + header_handler.have, + header_handler.want, + ofs_delta=(CAPABILITY_OFS_DELTA in negotiated_capabilities), + ) + + if self._should_send_pack(new_refs): + for chunk in PackChunkGenerator(pack_data_count, pack_data): + proto.write(chunk) + + ref_status = self._handle_receive_pack_tail( + proto, negotiated_capabilities, progress + ) + return SendPackResult(new_refs, agent=agent, ref_status=ref_status) + + def fetch_pack( + self, + path, + determine_wants, + graph_walker, + pack_data, + progress=None, + depth=None, + ): + """Retrieve a pack from a git smart server. + + Args: + path: Remote path to fetch from + determine_wants: Function determine what refs + to fetch. Receives dictionary of name->sha, should return + list of shas to fetch. + graph_walker: Object with next() and ack(). + pack_data: Callback called for each bit of data in the pack + progress: Callback for progress reports (strings) + depth: Shallow fetch depth + + Returns: + FetchPackResult object + + """ + proto, can_read, stderr = self._connect(b"upload-pack", path) + with proto: + try: + refs, server_capabilities = read_pkt_refs(proto.read_pkt_seq()) + except HangupException as exc: + raise _remote_error_from_stderr(stderr) from exc + ( + negotiated_capabilities, + symrefs, + agent, + ) = self._negotiate_upload_pack_capabilities(server_capabilities) + + if refs is None: + proto.write_pkt_line(None) + return FetchPackResult(refs, symrefs, agent) + + try: + if depth is not None: + wants = determine_wants(refs, depth=depth) + else: + wants = determine_wants(refs) + except BaseException: + proto.write_pkt_line(None) + raise + if wants is not None: + wants = [cid for cid in wants if cid != ZERO_SHA] + if not wants: + proto.write_pkt_line(None) + return FetchPackResult(refs, symrefs, agent) + (new_shallow, new_unshallow) = _handle_upload_pack_head( + proto, + negotiated_capabilities, + graph_walker, + wants, + can_read, + depth=depth, + ) + _handle_upload_pack_tail( + proto, + negotiated_capabilities, + graph_walker, + pack_data, + progress, + ) + return FetchPackResult(refs, symrefs, agent, new_shallow, new_unshallow) + + def get_refs(self, path): + """Retrieve the current refs from a git smart server.""" + # stock `git ls-remote` uses upload-pack + proto, _, stderr = self._connect(b"upload-pack", path) + with proto: + try: + refs, _ = read_pkt_refs(proto.read_pkt_seq()) + except HangupException as exc: + raise _remote_error_from_stderr(stderr) from exc + proto.write_pkt_line(None) + return refs + + def archive( + self, + path, + committish, + write_data, + progress=None, + write_error=None, 
+ format=None, + subdirs=None, + prefix=None, + ): + proto, can_read, stderr = self._connect(b"upload-archive", path) + with proto: + if format is not None: + proto.write_pkt_line(b"argument --format=" + format) + proto.write_pkt_line(b"argument " + committish) + if subdirs is not None: + for subdir in subdirs: + proto.write_pkt_line(b"argument " + subdir) + if prefix is not None: + proto.write_pkt_line(b"argument --prefix=" + prefix) + proto.write_pkt_line(None) + try: + pkt = proto.read_pkt_line() + except HangupException as exc: + raise _remote_error_from_stderr(stderr) from exc + if pkt == b"NACK\n" or pkt == b"NACK": + return + elif pkt == b"ACK\n" or pkt == b"ACK": + pass + elif pkt.startswith(b"ERR "): + raise GitProtocolError(pkt[4:].rstrip(b"\n").decode("utf-8", "replace")) + else: + raise AssertionError("invalid response %r" % pkt) + ret = proto.read_pkt_line() + if ret is not None: + raise AssertionError("expected pkt tail") + _read_side_band64k_data( + proto.read_pkt_seq(), + { + SIDE_BAND_CHANNEL_DATA: write_data, + SIDE_BAND_CHANNEL_PROGRESS: progress, + SIDE_BAND_CHANNEL_FATAL: write_error, + }, + ) + + +class TCPGitClient(TraditionalGitClient): + """A Git Client that works over TCP directly (i.e. git://).""" + + def __init__(self, host, port=None, **kwargs): + if port is None: + port = TCP_GIT_PORT + self._host = host + self._port = port + super(TCPGitClient, self).__init__(**kwargs) + + @classmethod + def from_parsedurl(cls, parsedurl, **kwargs): + return cls(parsedurl.hostname, port=parsedurl.port, **kwargs) + + def get_url(self, path): + netloc = self._host + if self._port is not None and self._port != TCP_GIT_PORT: + netloc += ":%d" % self._port + return urlunsplit(("git", netloc, path, "", "")) + + def _connect(self, cmd, path): + if not isinstance(cmd, bytes): + raise TypeError(cmd) + if not isinstance(path, bytes): + path = path.encode(self._remote_path_encoding) + sockaddrs = socket.getaddrinfo( + self._host, self._port, socket.AF_UNSPEC, socket.SOCK_STREAM + ) + s = None + err = socket.error("no address found for %s" % self._host) + for (family, socktype, proto, canonname, sockaddr) in sockaddrs: + s = socket.socket(family, socktype, proto) + s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) + try: + s.connect(sockaddr) + break + except socket.error as e: + err = e + if s is not None: + s.close() + s = None + if s is None: + raise err + # -1 means system default buffering + rfile = s.makefile("rb", -1) + # 0 means unbuffered + wfile = s.makefile("wb", 0) + + def close(): + rfile.close() + wfile.close() + s.close() + + proto = Protocol( + rfile.read, + wfile.write, + close, + report_activity=self._report_activity, + ) + if path.startswith(b"/~"): + path = path[1:] + # TODO(jelmer): Alternative to ascii? 
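(Editorial aside: the `send_cmd` call below emits the classic git:// request, a pkt-line whose payload is the service name plus NUL-terminated arguments such as `host=`; a standalone sketch of the framing, not Protocol's actual implementation:)

def frame_pkt_line(payload: bytes) -> bytes:
    # pkt-line framing: four hex digits of total length, then the payload.
    return ("%04x" % (len(payload) + 4)).encode("ascii") + payload

req = b"git-upload-pack /project.git\x00host=example.com\x00"
assert frame_pkt_line(req).startswith(b"0032")  # 46 payload bytes + 4 for the header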
+ proto.send_cmd(b"git-" + cmd, path, b"host=" + self._host.encode("ascii")) + return proto, lambda: _fileno_can_read(s), None + + +class SubprocessWrapper(object): + """A socket-like object that talks to a subprocess via pipes.""" + + def __init__(self, proc): + self.proc = proc + self.read = BufferedReader(proc.stdout).read + self.write = proc.stdin.write + + @property + def stderr(self): + return self.proc.stderr + + def can_read(self): + if sys.platform == "win32": + from msvcrt import get_osfhandle + + handle = get_osfhandle(self.proc.stdout.fileno()) + return _win32_peek_avail(handle) != 0 + else: + return _fileno_can_read(self.proc.stdout.fileno()) + + def close(self): + self.proc.stdin.close() + self.proc.stdout.close() + if self.proc.stderr: + self.proc.stderr.close() + self.proc.wait() + + +def find_git_command() -> List[str]: + """Find command to run for system Git (usually C Git).""" + if sys.platform == "win32": # support .exe, .bat and .cmd + try: # to avoid overhead + import win32api + except ImportError: # run through cmd.exe with some overhead + return ["cmd", "/c", "git"] + else: + status, git = win32api.FindExecutable("git") + return [git] + else: + return ["git"] + + +class SubprocessGitClient(TraditionalGitClient): + """Git client that talks to a server using a subprocess.""" + + @classmethod + def from_parsedurl(cls, parsedurl, **kwargs): + return cls(**kwargs) + + git_command = None + + def _connect(self, service, path): + if not isinstance(service, bytes): + raise TypeError(service) + if isinstance(path, bytes): + path = path.decode(self._remote_path_encoding) + if self.git_command is None: + git_command = find_git_command() + argv = git_command + [service.decode("ascii"), path] + p = subprocess.Popen( + argv, + bufsize=0, + stdin=subprocess.PIPE, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + ) + pw = SubprocessWrapper(p) + return ( + Protocol( + pw.read, + pw.write, + pw.close, + report_activity=self._report_activity, + ), + pw.can_read, + p.stderr, + ) + + +class LocalGitClient(GitClient): + """Git Client that just uses a local Repo.""" + + def __init__(self, thin_packs=True, report_activity=None, + config: Optional[Config] = None): + """Create a new LocalGitClient instance. + + Args: + thin_packs: Whether or not thin packs should be retrieved + report_activity: Optional callback for reporting transport + activity. + """ + self._report_activity = report_activity + # Ignore the thin_packs argument + + def get_url(self, path): + return urlunsplit(("file", "", path, "", "")) + + @classmethod + def from_parsedurl(cls, parsedurl, **kwargs): + return cls(**kwargs) + + @classmethod + def _open_repo(cls, path): + + if not isinstance(path, str): + path = os.fsdecode(path) + return closing(Repo(path)) + + def send_pack(self, path, update_refs, generate_pack_data, progress=None): + """Upload a pack to a remote repository. + + Args: + path: Repository path (as bytestring) + update_refs: Function to determine changes to remote refs. + Receive dict with existing remote refs, returns dict with + changed refs (name -> sha, where sha=ZERO_SHA for deletions) + with number of items and pack data to upload. 
+ progress: Optional progress function + + Returns: + SendPackResult + + Raises: + SendPackError: if server rejects the pack data + + """ + if not progress: + + def progress(x): + pass + + with self._open_repo(path) as target: + old_refs = target.get_refs() + new_refs = update_refs(dict(old_refs)) + + have = [sha1 for sha1 in old_refs.values() if sha1 != ZERO_SHA] + want = [] + for refname, new_sha1 in new_refs.items(): + if ( + new_sha1 not in have + and new_sha1 not in want + and new_sha1 != ZERO_SHA + ): + want.append(new_sha1) + + if not want and set(new_refs.items()).issubset(set(old_refs.items())): + return SendPackResult(new_refs, ref_status={}) + + target.object_store.add_pack_data( + *generate_pack_data(have, want, ofs_delta=True) + ) + + ref_status = {} + + for refname, new_sha1 in new_refs.items(): + old_sha1 = old_refs.get(refname, ZERO_SHA) + if new_sha1 != ZERO_SHA: + if not target.refs.set_if_equals(refname, old_sha1, new_sha1): + msg = "unable to set %s to %s" % (refname, new_sha1) + progress(msg) + ref_status[refname] = msg + else: + if not target.refs.remove_if_equals(refname, old_sha1): + progress("unable to remove %s" % refname) + ref_status[refname] = "unable to remove" + + return SendPackResult(new_refs, ref_status=ref_status) + + def fetch(self, path, target, determine_wants=None, progress=None, depth=None): + """Fetch into a target repository. + + Args: + path: Path to fetch from (as bytestring) + target: Target repository to fetch into + determine_wants: Optional function determine what refs + to fetch. Receives dictionary of name->sha, should return + list of shas to fetch. Defaults to all shas. + progress: Optional progress function + depth: Shallow fetch depth + + Returns: + FetchPackResult object + + """ + with self._open_repo(path) as r: + refs = r.fetch( + target, + determine_wants=determine_wants, + progress=progress, + depth=depth, + ) + return FetchPackResult(refs, r.refs.get_symrefs(), agent_string()) + + def fetch_pack( + self, + path, + determine_wants, + graph_walker, + pack_data, + progress=None, + depth=None, + ): + """Retrieve a pack from a git smart server. + + Args: + path: Remote path to fetch from + determine_wants: Function determine what refs + to fetch. Receives dictionary of name->sha, should return + list of shas to fetch. + graph_walker: Object with next() and ack(). + pack_data: Callback called for each bit of data in the pack + progress: Callback for progress reports (strings) + depth: Shallow fetch depth + + Returns: + FetchPackResult object + + """ + with self._open_repo(path) as r: + objects_iter = r.fetch_objects( + determine_wants, graph_walker, progress=progress, depth=depth + ) + symrefs = r.refs.get_symrefs() + agent = agent_string() + + # Did the process short-circuit (e.g. in a stateless RPC call)? + # Note that the client still expects a 0-object pack in most cases. + if objects_iter is None: + return FetchPackResult(None, symrefs, agent) + write_pack_objects(pack_data, objects_iter) + return FetchPackResult(r.get_refs(), symrefs, agent) + + def get_refs(self, path): + """Retrieve the current refs from a git smart server.""" + + with self._open_repo(path) as target: + return target.get_refs() + + +# What Git client to use for local access +default_local_git_client_cls = LocalGitClient + + +class SSHVendor(object): + """A client side SSH implementation.""" + + def run_command( + self, + host, + command, + username=None, + port=None, + password=None, + key_filename=None, + ssh_command=None, + ): + """Connect to an SSH server. 
+ + Run a command remotely and return a file-like object for interaction + with the remote command. + + Args: + host: Host name + command: Command to run (as argv array) + username: Optional name of user to log in as + port: Optional SSH port to use + password: Optional ssh password for login or private key + key_filename: Optional path to private keyfile + ssh_command: Optional SSH command + + Returns: + + """ + raise NotImplementedError(self.run_command) + + +class StrangeHostname(Exception): + """Refusing to connect to strange SSH hostname.""" + + def __init__(self, hostname): + super(StrangeHostname, self).__init__(hostname) + + +class SubprocessSSHVendor(SSHVendor): + """SSH vendor that shells out to the local 'ssh' command.""" + + def run_command( + self, + host, + command, + username=None, + port=None, + password=None, + key_filename=None, + ssh_command=None, + ): + + if password is not None: + raise NotImplementedError( + "Setting password not supported by SubprocessSSHVendor." + ) + + if ssh_command: + import shlex + args = shlex.split( + ssh_command, posix=(sys.platform != 'win32')) + ["-x"] + else: + args = ["ssh", "-x"] + + if port: + args.extend(["-p", str(port)]) + + if key_filename: + args.extend(["-i", str(key_filename)]) + + if username: + host = "%s@%s" % (username, host) + if host.startswith("-"): + raise StrangeHostname(hostname=host) + args.append(host) + + proc = subprocess.Popen( + args + [command], + bufsize=0, + stdin=subprocess.PIPE, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + ) + return SubprocessWrapper(proc) + + +class PLinkSSHVendor(SSHVendor): + """SSH vendor that shells out to the local 'plink' command.""" + + def run_command( + self, + host, + command, + username=None, + port=None, + password=None, + key_filename=None, + ssh_command=None, + ): + + if ssh_command: + import shlex + args = shlex.split( + ssh_command, posix=(sys.platform != 'win32')) + ["-ssh"] + elif sys.platform == "win32": + args = ["plink.exe", "-ssh"] + else: + args = ["plink", "-ssh"] + + if password is not None: + import warnings + + warnings.warn( + "Invoking PLink with a password exposes the password in the " + "process list."
+ ) + args.extend(["-pw", str(password)]) + + if port: + args.extend(["-P", str(port)]) + + if key_filename: + args.extend(["-i", str(key_filename)]) + + if username: + host = "%s@%s" % (username, host) + if host.startswith("-"): + raise StrangeHostname(hostname=host) + args.append(host) + + proc = subprocess.Popen( + args + [command], + bufsize=0, + stdin=subprocess.PIPE, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + ) + return SubprocessWrapper(proc) + + +def ParamikoSSHVendor(**kwargs): + import warnings + + warnings.warn( + "ParamikoSSHVendor has been moved to dulwich.contrib.paramiko_vendor.", + DeprecationWarning, + ) + from dulwich.contrib.paramiko_vendor import ParamikoSSHVendor + + return ParamikoSSHVendor(**kwargs) + + +# Can be overridden by users +get_ssh_vendor = SubprocessSSHVendor + + +class SSHGitClient(TraditionalGitClient): + def __init__( + self, + host, + port=None, + username=None, + vendor=None, + config=None, + password=None, + key_filename=None, + ssh_command=None, + **kwargs + ): + self.host = host + self.port = port + self.username = username + self.password = password + self.key_filename = key_filename + self.ssh_command = ssh_command or os.environ.get( + "GIT_SSH_COMMAND", os.environ.get("GIT_SSH") + ) + super(SSHGitClient, self).__init__(**kwargs) + self.alternative_paths = {} + if vendor is not None: + self.ssh_vendor = vendor + else: + self.ssh_vendor = get_ssh_vendor() + + def get_url(self, path): + netloc = self.host + if self.port is not None: + netloc += ":%d" % self.port + + if self.username is not None: + netloc = urlquote(self.username, "@/:") + "@" + netloc + + return urlunsplit(("ssh", netloc, path, "", "")) + + @classmethod + def from_parsedurl(cls, parsedurl, **kwargs): + return cls( + host=parsedurl.hostname, + port=parsedurl.port, + username=parsedurl.username, + **kwargs + ) + + def _get_cmd_path(self, cmd): + cmd = self.alternative_paths.get(cmd, b"git-" + cmd) + assert isinstance(cmd, bytes) + return cmd + + def _connect(self, cmd, path): + if not isinstance(cmd, bytes): + raise TypeError(cmd) + if isinstance(path, bytes): + path = path.decode(self._remote_path_encoding) + if path.startswith("/~"): + path = path[1:] + argv = ( + self._get_cmd_path(cmd).decode(self._remote_path_encoding) + + " '" + + path + + "'" + ) + kwargs = {} + if self.password is not None: + kwargs["password"] = self.password + if self.key_filename is not None: + kwargs["key_filename"] = self.key_filename + # GIT_SSH_COMMAND takes precedence over GIT_SSH + if self.ssh_command is not None: + kwargs["ssh_command"] = self.ssh_command + con = self.ssh_vendor.run_command( + self.host, argv, port=self.port, username=self.username, **kwargs + ) + return ( + Protocol( + con.read, + con.write, + con.close, + report_activity=self._report_activity, + ), + con.can_read, + getattr(con, "stderr", None), + ) + + +def default_user_agent_string(): + # Start user agent with "git/", because GitHub requires this. :-( See + # https://github.com/jelmer/dulwich/issues/562 for details. + return "git/dulwich/%s" % ".".join([str(x) for x in dulwich.__version__]) + + +def default_urllib3_manager( # noqa: C901 + config, pool_manager_cls=None, proxy_manager_cls=None, **override_kwargs +) -> Union["urllib3.ProxyManager", "urllib3.PoolManager"]: + """Return urllib3 connection pool manager. + + Honour detected proxy configurations. + + Args: + config: `dulwich.config.ConfigDict` instance with Git configuration. 
+ override_kwargs: Additional arguments for `urllib3.ProxyManager` + + Returns: + Either proxy_manager_cls (defaults to `urllib3.ProxyManager`) instance for + proxy configurations, or pool_manager_cls + (defaults to `urllib3.PoolManager`) instance otherwise + + """ + proxy_server = user_agent = None + ca_certs = ssl_verify = None + + if proxy_server is None: + for proxyname in ("https_proxy", "http_proxy", "all_proxy"): + proxy_server = os.environ.get(proxyname) + if proxy_server is not None: + break + + if config is not None: + if proxy_server is None: + try: + proxy_server = config.get(b"http", b"proxy") + except KeyError: + pass + try: + user_agent = config.get(b"http", b"useragent") + except KeyError: + pass + + # TODO(jelmer): Support per-host settings + try: + ssl_verify = config.get_boolean(b"http", b"sslVerify") + except KeyError: + ssl_verify = True + + try: + ca_certs = config.get(b"http", b"sslCAInfo") + except KeyError: + ca_certs = None + + if user_agent is None: + user_agent = default_user_agent_string() + + headers = {"User-agent": user_agent} + + kwargs = { + "ca_certs": ca_certs, + } + if ssl_verify is True: + kwargs["cert_reqs"] = "CERT_REQUIRED" + elif ssl_verify is False: + kwargs["cert_reqs"] = "CERT_NONE" + else: + # Default to SSL verification + kwargs["cert_reqs"] = "CERT_REQUIRED" + + kwargs.update(override_kwargs) + + import urllib3 + + if proxy_server is not None: + if proxy_manager_cls is None: + proxy_manager_cls = urllib3.ProxyManager + if not isinstance(proxy_server, str): + proxy_server = proxy_server.decode() + manager = proxy_manager_cls(proxy_server, headers=headers, **kwargs) + else: + if pool_manager_cls is None: + pool_manager_cls = urllib3.PoolManager + manager = pool_manager_cls(headers=headers, **kwargs) + + return manager + + +class AbstractHttpGitClient(GitClient): + """Abstract base class for HTTP Git Clients. + + This is agnostic of the actual HTTP implementation. + + Subclasses should provide an implementation of the + _http_request method. + """ + + def __init__(self, base_url, dumb=False, **kwargs): + self._base_url = base_url.rstrip("/") + "/" + self.dumb = dumb + GitClient.__init__(self, **kwargs) + + def _http_request(self, url, headers=None, data=None): + """Perform HTTP request. + + Args: + url: Request URL. + headers: Optional custom headers to override defaults. + data: Request data. + + Returns: + Tuple (response, read), where response is an urllib3 + response object with additional content_type and + redirect_location properties, and read is a consumable read + method for the response data.
+ + """ + + raise NotImplementedError(self._http_request) + + def _discover_references(self, service, base_url): + assert base_url[-1] == "/" + tail = "info/refs" + headers = {"Accept": "*/*"} + if self.dumb is not True: + tail += "?service=%s" % service.decode("ascii") + url = urljoin(base_url, tail) + resp, read = self._http_request(url, headers) + + if resp.redirect_location: + # Something changed (redirect!), so let's update the base URL + if not resp.redirect_location.endswith(tail): + raise GitProtocolError( + "Redirected from URL %s to URL %s without %s" + % (url, resp.redirect_location, tail) + ) + base_url = resp.redirect_location[: -len(tail)] + + try: + self.dumb = not resp.content_type.startswith("application/x-git-") + if not self.dumb: + proto = Protocol(read, None) + # The first line should mention the service + try: + [pkt] = list(proto.read_pkt_seq()) + except ValueError as exc: + raise GitProtocolError( + "unexpected number of packets received") from exc + if pkt.rstrip(b"\n") != (b"# service=" + service): + raise GitProtocolError( + "unexpected first line %r from smart server" % pkt + ) + return read_pkt_refs(proto.read_pkt_seq()) + (base_url,) + else: + return read_info_refs(resp), set(), base_url + finally: + resp.close() + + def _smart_request(self, service, url, data): + """Send a 'smart' HTTP request. + + This is a simple wrapper around _http_request that sets + a couple of extra headers. + """ + assert url[-1] == "/" + url = urljoin(url, service) + result_content_type = "application/x-%s-result" % service + headers = { + "Content-Type": "application/x-%s-request" % service, + "Accept": result_content_type, + } + if isinstance(data, bytes): + headers["Content-Length"] = str(len(data)) + resp, read = self._http_request(url, headers, data) + if resp.content_type != result_content_type: + raise GitProtocolError( + "Invalid content-type from server: %s" % resp.content_type + ) + return resp, read + + def send_pack(self, path, update_refs, generate_pack_data, progress=None): + """Upload a pack to a remote repository. + + Args: + path: Repository path (as bytestring) + update_refs: Function to determine changes to remote refs. + Receives dict with existing remote refs, returns dict with + changed refs (name -> sha, where sha=ZERO_SHA for deletions) + generate_pack_data: Function that can return a tuple + with number of elements and pack data to upload. + progress: Optional progress function + + Returns: + SendPackResult + + Raises: + SendPackError: if server rejects the pack data + + """ + url = self._get_url(path) + old_refs, server_capabilities, url = self._discover_references( + b"git-receive-pack", url + ) + ( + negotiated_capabilities, + agent, + ) = self._negotiate_receive_pack_capabilities(server_capabilities) + negotiated_capabilities.add(capability_agent()) + + if CAPABILITY_REPORT_STATUS in negotiated_capabilities: + self._report_status_parser = ReportStatusParser() + + new_refs = update_refs(dict(old_refs)) + if new_refs is None: + # Determine wants function is aborting the push. 
+ return SendPackResult(old_refs, agent=agent, ref_status={}) + if set(new_refs.items()).issubset(set(old_refs.items())): + return SendPackResult(new_refs, agent=agent, ref_status={}) + if self.dumb: + raise NotImplementedError(self.send_pack) + + def body_generator(): + header_handler = _v1ReceivePackHeader(negotiated_capabilities, old_refs, new_refs) + for pkt in header_handler: + yield pkt_line(pkt) + pack_data_count, pack_data = generate_pack_data( + header_handler.have, + header_handler.want, + ofs_delta=(CAPABILITY_OFS_DELTA in negotiated_capabilities), + ) + if self._should_send_pack(new_refs): + yield from PackChunkGenerator(pack_data_count, pack_data) + + resp, read = self._smart_request( + "git-receive-pack", url, data=body_generator() + ) + try: + resp_proto = Protocol(read, None) + ref_status = self._handle_receive_pack_tail( + resp_proto, negotiated_capabilities, progress + ) + return SendPackResult(new_refs, agent=agent, ref_status=ref_status) + finally: + resp.close() + + def fetch_pack( + self, + path, + determine_wants, + graph_walker, + pack_data, + progress=None, + depth=None, + ): + """Retrieve a pack from a git smart server. + + Args: + path: Path to fetch from + determine_wants: Callback that returns list of commits to fetch + graph_walker: Object with next() and ack(). + pack_data: Callback called for each bit of data in the pack + progress: Callback for progress reports (strings) + depth: Depth for request + + Returns: + FetchPackResult object + + """ + url = self._get_url(path) + refs, server_capabilities, url = self._discover_references( + b"git-upload-pack", url + ) + ( + negotiated_capabilities, + symrefs, + agent, + ) = self._negotiate_upload_pack_capabilities(server_capabilities) + if depth is not None: + wants = determine_wants(refs, depth=depth) + else: + wants = determine_wants(refs) + if wants is not None: + wants = [cid for cid in wants if cid != ZERO_SHA] + if not wants: + return FetchPackResult(refs, symrefs, agent) + if self.dumb: + raise NotImplementedError(self.fetch_pack) + req_data = BytesIO() + req_proto = Protocol(None, req_data.write) + (new_shallow, new_unshallow) = _handle_upload_pack_head( + req_proto, + negotiated_capabilities, + graph_walker, + wants, + can_read=None, + depth=depth, + ) + resp, read = self._smart_request( + "git-upload-pack", url, data=req_data.getvalue() + ) + try: + resp_proto = Protocol(read, None) + if new_shallow is None and new_unshallow is None: + (new_shallow, new_unshallow) = _read_shallow_updates( + resp_proto.read_pkt_seq()) + _handle_upload_pack_tail( + resp_proto, + negotiated_capabilities, + graph_walker, + pack_data, + progress, + ) + return FetchPackResult(refs, symrefs, agent, new_shallow, new_unshallow) + finally: + resp.close() + + def get_refs(self, path): + """Retrieve the current refs from a git smart server.""" + url = self._get_url(path) + refs, _, _ = self._discover_references(b"git-upload-pack", url) + return refs + + def get_url(self, path): + return self._get_url(path).rstrip("/") + + def _get_url(self, path): + return urljoin(self._base_url, path).rstrip("/") + "/" + + @classmethod + def from_parsedurl(cls, parsedurl, **kwargs): + password = parsedurl.password + if password is not None: + kwargs["password"] = urlunquote(password) + username = parsedurl.username + if username is not None: + kwargs["username"] = urlunquote(username) + netloc = parsedurl.hostname + if parsedurl.port: + netloc = "%s:%s" % (netloc, parsedurl.port) + if parsedurl.username: + netloc = "%s@%s" % (parsedurl.username,
netloc) + parsedurl = parsedurl._replace(netloc=netloc) + return cls(urlunparse(parsedurl), **kwargs) + + def __repr__(self): + return "%s(%r, dumb=%r)" % ( + type(self).__name__, + self._base_url, + self.dumb, + ) + + +class Urllib3HttpGitClient(AbstractHttpGitClient): + def __init__( + self, + base_url, + dumb=None, + pool_manager=None, + config=None, + username=None, + password=None, + **kwargs + ): + self._username = username + self._password = password + + if pool_manager is None: + self.pool_manager = default_urllib3_manager(config) + else: + self.pool_manager = pool_manager + + if username is not None: + # No escaping needed: ":" is not allowed in username: + # https://tools.ietf.org/html/rfc2617#section-2 + credentials = f"{username}:{password or ''}" + import urllib3.util + + basic_auth = urllib3.util.make_headers(basic_auth=credentials) + self.pool_manager.headers.update(basic_auth) + + self.config = config + + super(Urllib3HttpGitClient, self).__init__( + base_url=base_url, dumb=dumb, **kwargs) + + def _get_url(self, path): + if not isinstance(path, str): + # urllib3.util.url._encode_invalid_chars() converts the path back + # to bytes using the utf-8 codec. + path = path.decode("utf-8") + return urljoin(self._base_url, path).rstrip("/") + "/" + + def _http_request(self, url, headers=None, data=None): + req_headers = self.pool_manager.headers.copy() + if headers is not None: + req_headers.update(headers) + req_headers["Pragma"] = "no-cache" + + if data is None: + resp = self.pool_manager.request( + "GET", url, headers=req_headers, preload_content=False) + else: + resp = self.pool_manager.request( + "POST", url, headers=req_headers, body=data, preload_content=False + ) + + if resp.status == 404: + raise NotGitRepository() + if resp.status == 401: + raise HTTPUnauthorized(resp.getheader("WWW-Authenticate"), url) + if resp.status == 407: + raise HTTPProxyUnauthorized(resp.getheader("Proxy-Authenticate"), url) + if resp.status != 200: + raise GitProtocolError( + "unexpected http resp %d for %s" % (resp.status, url) + ) + + resp.content_type = resp.getheader("Content-Type") + # Check if geturl() is available (urllib3 version >= 1.23) + try: + resp_url = resp.geturl() + except AttributeError: + # get_redirect_location() is available for urllib3 >= 1.1 + resp.redirect_location = resp.get_redirect_location() + else: + resp.redirect_location = resp_url if resp_url != url else "" + return resp, resp.read + + +HttpGitClient = Urllib3HttpGitClient + + +def _win32_url_to_path(parsed) -> str: + """Convert a file: URL to a path. + + https://datatracker.ietf.org/doc/html/rfc8089 + """ + assert sys.platform == "win32" or os.name == "nt" + assert parsed.scheme == "file" + + _, netloc, path, _, _, _ = parsed + + if netloc == "localhost" or not netloc: + netloc = "" + elif ( + netloc + and len(netloc) >= 2 + and netloc[0].isalpha() + and netloc[1:2] in (":", ":/") + ): + # file://C:/foo.bar/baz or file://C://foo.bar//baz + netloc = netloc[:2] + else: + raise NotImplementedError("Non-local file URLs are not supported") + + global url2pathname + if url2pathname is None: + from urllib.request import url2pathname # type: ignore + return url2pathname(netloc + path) # type: ignore + + +def get_transport_and_path_from_url( + url: str, config: Optional[Config] = None, + operation: Optional[str] = None, **kwargs) -> Tuple[GitClient, str]: + """Obtain a git client from a URL. 
+ + Args: + url: URL to open (a unicode string) + config: Optional config object + operation: Kind of operation that'll be performed; "pull" or "push" + thin_packs: Whether or not thin packs should be retrieved + report_activity: Optional callback for reporting transport + activity. + + Returns: + Tuple with client instance and relative path. + + """ + if config is not None: + url = apply_instead_of(config, url, push=(operation == "push")) + + return _get_transport_and_path_from_url( + url, config=config, operation=operation, **kwargs) + + +def _get_transport_and_path_from_url(url, config, operation, **kwargs): + parsed = urlparse(url) + if parsed.scheme == "git": + return (TCPGitClient.from_parsedurl(parsed, **kwargs), parsed.path) + elif parsed.scheme in ("git+ssh", "ssh"): + return SSHGitClient.from_parsedurl(parsed, **kwargs), parsed.path + elif parsed.scheme in ("http", "https"): + return ( + HttpGitClient.from_parsedurl(parsed, config=config, **kwargs), + parsed.path, + ) + elif parsed.scheme == "file": + if sys.platform == "win32" or os.name == "nt": + return default_local_git_client_cls(**kwargs), _win32_url_to_path(parsed) + return ( + default_local_git_client_cls.from_parsedurl(parsed, **kwargs), + parsed.path, + ) + + raise ValueError("unknown scheme '%s'" % parsed.scheme) + + +def parse_rsync_url(location: str) -> Tuple[Optional[str], str, str]: + """Parse a rsync-style URL.""" + if ":" in location and "@" not in location: + # SSH with no user@, zero or one leading slash. + (host, path) = location.split(":", 1) + user = None + elif ":" in location: + # SSH with user@host:foo. + user_host, path = location.split(":", 1) + if "@" in user_host: + user, host = user_host.rsplit("@", 1) + else: + user = None + host = user_host + else: + raise ValueError("not a valid rsync-style URL") + return (user, host, path) + + +def get_transport_and_path( + location: str, + config: Optional[Config] = None, + operation: Optional[str] = None, + **kwargs: Any +) -> Tuple[GitClient, str]: + """Obtain a git client from a URL. + + Args: + location: URL or path (a string) + config: Optional config object + operation: Kind of operation that'll be performed; "pull" or "push" + thin_packs: Whether or not thin packs should be retrieved + report_activity: Optional callback for reporting transport + activity. + + Returns: + Tuple with client instance and relative path. + + """ + if config is not None: + location = apply_instead_of(config, location, push=(operation == "push")) + + # First, try to parse it as a URL + try: + return _get_transport_and_path_from_url( + location, config=config, operation=operation, **kwargs) + except ValueError: + pass + + if sys.platform == "win32" and location[0].isalpha() and location[1:3] == ":\\": + # Windows local path + return default_local_git_client_cls(**kwargs), location + + try: + (username, hostname, path) = parse_rsync_url(location) + except ValueError: + # Otherwise, assume it's a local path. 
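(Editorial aside: the scp-style fallback handled above by `parse_rsync_url` covers locations that `urlparse` rejects; expected results, following its implementation earlier in this file:)

assert parse_rsync_url("git@example.com:org/repo.git") == ("git", "example.com", "org/repo.git")
assert parse_rsync_url("example.com:/srv/git/repo.git") == (None, "example.com", "/srv/git/repo.git")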
+ return default_local_git_client_cls(**kwargs), location + else: + return SSHGitClient(hostname, username=username, **kwargs), path + + +DEFAULT_GIT_CREDENTIALS_PATHS = [ + os.path.expanduser("~/.git-credentials"), + get_xdg_config_home_path("git", "credentials"), +] + + +def get_credentials_from_store( + scheme, hostname, username=None, fnames=DEFAULT_GIT_CREDENTIALS_PATHS +): + for fname in fnames: + try: + with open(fname, "rb") as f: + for line in f: + parsed_line = urlparse(line.strip()) + if ( + parsed_line.scheme == scheme + and parsed_line.hostname == hostname + and (username is None or parsed_line.username == username) + ): + return parsed_line.username, parsed_line.password + except FileNotFoundError: + # If the file doesn't exist, try the next one. + continue diff --git a/env/lib/python3.8/site-packages/dulwich/cloud/__init__.py b/env/lib/python3.8/site-packages/dulwich/cloud/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/dulwich/cloud/gcs.py b/env/lib/python3.8/site-packages/dulwich/cloud/gcs.py new file mode 100644 index 00000000..eb566efd --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/cloud/gcs.py @@ -0,0 +1,82 @@ +# object_store.py -- Object store for git objects +# Copyright (C) 2021 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + + +"""Storage of repositories on GCS.""" + +import posixpath +import tempfile + +from ..object_store import BucketBasedObjectStore +from ..pack import PackData, Pack, load_pack_index_file + + +# TODO(jelmer): For performance, read ranges? + + +class GcsObjectStore(BucketBasedObjectStore): + + def __init__(self, bucket, subpath=''): + super(GcsObjectStore, self).__init__() + self.bucket = bucket + self.subpath = subpath + + def __repr__(self): + return "%s(%r, subpath=%r)" % ( + type(self).__name__, self.bucket, self.subpath) + + def _remove_pack(self, name): + self.bucket.delete_blobs([ + posixpath.join(self.subpath, name) + '.' 
+ ext + for ext in ['pack', 'idx']]) + + def _iter_pack_names(self): + packs = {} + for blob in self.bucket.list_blobs(prefix=self.subpath): + name, ext = posixpath.splitext(posixpath.basename(blob.name)) + packs.setdefault(name, set()).add(ext) + for name, exts in packs.items(): + if exts == set(['.pack', '.idx']): + yield name + + def _load_pack_data(self, name): + b = self.bucket.blob(posixpath.join(self.subpath, name + '.pack')) + f = tempfile.SpooledTemporaryFile() + b.download_to_file(f) + f.seek(0) + return PackData(name + '.pack', f) + + def _load_pack_index(self, name): + b = self.bucket.blob(posixpath.join(self.subpath, name + '.idx')) + f = tempfile.SpooledTemporaryFile() + b.download_to_file(f) + f.seek(0) + return load_pack_index_file(name + '.idx', f) + + def _get_pack(self, name): + return Pack.from_lazy_objects( + lambda: self._load_pack_data(name), + lambda: self._load_pack_index(name)) + + def _upload_pack(self, basename, pack_file, index_file): + idxblob = self.bucket.blob(posixpath.join(self.subpath, basename + '.idx')) + datablob = self.bucket.blob(posixpath.join(self.subpath, basename + '.pack')) + idxblob.upload_from_file(index_file) + datablob.upload_from_file(pack_file) diff --git a/env/lib/python3.8/site-packages/dulwich/config.py b/env/lib/python3.8/site-packages/dulwich/config.py new file mode 100644 index 00000000..fb057b0a --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/config.py @@ -0,0 +1,813 @@ +# config.py - Reading and writing Git config files +# Copyright (C) 2011-2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Reading and writing Git configuration files. 
+ +TODO: + * preserve formatting when updating configuration files + * treat subsection names as case-insensitive for [branch.foo] style + subsections +""" + +import os +import sys +from typing import ( + BinaryIO, + Iterable, + Iterator, + KeysView, + List, + MutableMapping, + Optional, + Tuple, + Union, + overload, +) + +from dulwich.file import GitFile + + +SENTINEL = object() + + +def lower_key(key): + if isinstance(key, (bytes, str)): + return key.lower() + + if isinstance(key, Iterable): + return type(key)(map(lower_key, key)) + + return key + + +class CaseInsensitiveOrderedMultiDict(MutableMapping): + + def __init__(self): + self._real = [] + self._keyed = {} + + @classmethod + def make(cls, dict_in=None): + + if isinstance(dict_in, cls): + return dict_in + + out = cls() + + if dict_in is None: + return out + + if not isinstance(dict_in, MutableMapping): + raise TypeError + + for key, value in dict_in.items(): + out[key] = value + + return out + + def __len__(self): + return len(self._keyed) + + def keys(self) -> KeysView[Tuple[bytes, ...]]: + return self._keyed.keys() + + def items(self): + return iter(self._real) + + def __iter__(self): + return self._keyed.__iter__() + + def values(self): + return self._keyed.values() + + def __setitem__(self, key, value): + self._real.append((key, value)) + self._keyed[lower_key(key)] = value + + def __delitem__(self, key): + key = lower_key(key) + del self._keyed[key] + for i, (actual, unused_value) in reversed(list(enumerate(self._real))): + if lower_key(actual) == key: + del self._real[i] + + def __getitem__(self, item): + return self._keyed[lower_key(item)] + + def get(self, key, default=SENTINEL): + try: + return self[key] + except KeyError: + pass + + if default is SENTINEL: + return type(self)() + + return default + + def get_all(self, key): + key = lower_key(key) + for actual, value in self._real: + if lower_key(actual) == key: + yield value + + def setdefault(self, key, default=SENTINEL): + try: + return self[key] + except KeyError: + self[key] = self.get(key, default) + + return self[key] + + +Name = bytes +NameLike = Union[bytes, str] +Section = Tuple[bytes, ...] +SectionLike = Union[bytes, str, Tuple[Union[bytes, str], ...]] +Value = bytes +ValueLike = Union[bytes, str] + + +class Config(object): + """A Git configuration.""" + + def get(self, section: SectionLike, name: NameLike) -> Value: + """Retrieve the contents of a configuration setting. + + Args: + section: Tuple with section name and optional subsection name + name: Variable name + Returns: + Contents of the setting + Raises: + KeyError: if the value is not set + """ + raise NotImplementedError(self.get) + + def get_multivar(self, section: SectionLike, name: NameLike) -> Iterator[Value]: + """Retrieve the contents of a multivar configuration setting. + + Args: + section: Tuple with section name and optional subsection name + name: Variable name + Returns: + Contents of the setting as iterable + Raises: + KeyError: if the value is not set + """ + raise NotImplementedError(self.get_multivar) + + @overload + def get_boolean(self, section: SectionLike, name: NameLike, default: bool) -> bool: + ... + + @overload + def get_boolean(self, section: SectionLike, name: NameLike) -> Optional[bool]: + ... + + def get_boolean( + self, section: SectionLike, name: NameLike, default: Optional[bool] = None + ) -> Optional[bool]: + """Retrieve a configuration setting as boolean.
+ + Args: + section: Tuple with section name and optional subsection name + name: Name of the setting, including section and possible + subsection. + Returns: + Contents of the setting + """ + try: + value = self.get(section, name) + except KeyError: + return default + if value.lower() == b"true": + return True + elif value.lower() == b"false": + return False + raise ValueError("not a valid boolean string: %r" % value) + + def set( + self, + section: SectionLike, + name: NameLike, + value: Union[ValueLike, bool] + ) -> None: + """Set a configuration value. + + Args: + section: Tuple with section name and optional subsection namee + name: Name of the configuration value, including section + and optional subsection + value: value of the setting + """ + raise NotImplementedError(self.set) + + def items(self, section: SectionLike) -> Iterator[Tuple[Name, Value]]: + """Iterate over the configuration pairs for a specific section. + + Args: + section: Tuple with section name and optional subsection namee + Returns: + Iterator over (name, value) pairs + """ + raise NotImplementedError(self.items) + + def sections(self) -> Iterator[Section]: + """Iterate over the sections. + + Returns: Iterator over section tuples + """ + raise NotImplementedError(self.sections) + + def has_section(self, name: Section) -> bool: + """Check if a specified section exists. + + Args: + name: Name of section to check for + Returns: + boolean indicating whether the section exists + """ + return name in self.sections() + + +class ConfigDict(Config, MutableMapping[Section, MutableMapping[Name, Value]]): + """Git configuration stored in a dictionary.""" + + def __init__( + self, + values: Union[ + MutableMapping[Section, MutableMapping[Name, Value]], None + ] = None, + encoding: Union[str, None] = None + ) -> None: + """Create a new ConfigDict.""" + if encoding is None: + encoding = sys.getdefaultencoding() + self.encoding = encoding + self._values = CaseInsensitiveOrderedMultiDict.make(values) + + def __repr__(self) -> str: + return "%s(%r)" % (self.__class__.__name__, self._values) + + def __eq__(self, other: object) -> bool: + return isinstance(other, self.__class__) and other._values == self._values + + def __getitem__(self, key: Section) -> MutableMapping[Name, Value]: + return self._values.__getitem__(key) + + def __setitem__( + self, + key: Section, + value: MutableMapping[Name, Value] + ) -> None: + return self._values.__setitem__(key, value) + + def __delitem__(self, key: Section) -> None: + return self._values.__delitem__(key) + + def __iter__(self) -> Iterator[Section]: + return self._values.__iter__() + + def __len__(self) -> int: + return self._values.__len__() + + @classmethod + def _parse_setting(cls, name: str): + parts = name.split(".") + if len(parts) == 3: + return (parts[0], parts[1], parts[2]) + else: + return (parts[0], None, parts[1]) + + def _check_section_and_name( + self, + section: SectionLike, + name: NameLike + ) -> Tuple[Section, Name]: + if not isinstance(section, tuple): + section = (section,) + + checked_section = tuple( + [ + subsection.encode(self.encoding) + if not isinstance(subsection, bytes) + else subsection + for subsection in section + ] + ) + + if not isinstance(name, bytes): + name = name.encode(self.encoding) + + return checked_section, name + + def get_multivar( + self, + section: SectionLike, + name: NameLike + ) -> Iterator[Value]: + section, name = self._check_section_and_name(section, name) + + if len(section) > 1: + try: + return self._values[section].get_all(name) + except 
KeyError: + pass + + return self._values[(section[0],)].get_all(name) + + def get( # type: ignore[override] + self, + section: SectionLike, + name: NameLike, + ) -> Value: + section, name = self._check_section_and_name(section, name) + + if len(section) > 1: + try: + return self._values[section][name] + except KeyError: + pass + + return self._values[(section[0],)][name] + + def set( + self, + section: SectionLike, + name: NameLike, + value: Union[ValueLike, bool], + ) -> None: + section, name = self._check_section_and_name(section, name) + + if isinstance(value, bool): + value = b"true" if value else b"false" + + if not isinstance(value, bytes): + value = value.encode(self.encoding) + + self._values.setdefault(section)[name] = value + + def items( # type: ignore[override] + self, + section: Section + ) -> Iterator[Tuple[Name, Value]]: + return self._values.get(section).items() + + def sections(self) -> Iterator[Section]: + return self._values.keys() + + +def _format_string(value: bytes) -> bytes: + if ( + value.startswith(b" ") + or value.startswith(b"\t") + or value.endswith(b" ") + or b"#" in value + or value.endswith(b"\t") + ): + return b'"' + _escape_value(value) + b'"' + else: + return _escape_value(value) + + +_ESCAPE_TABLE = { + ord(b"\\"): ord(b"\\"), + ord(b'"'): ord(b'"'), + ord(b"n"): ord(b"\n"), + ord(b"t"): ord(b"\t"), + ord(b"b"): ord(b"\b"), +} +_COMMENT_CHARS = [ord(b"#"), ord(b";")] +_WHITESPACE_CHARS = [ord(b"\t"), ord(b" ")] + + +def _parse_string(value: bytes) -> bytes: + value = bytearray(value.strip()) + ret = bytearray() + whitespace = bytearray() + in_quotes = False + i = 0 + while i < len(value): + c = value[i] + if c == ord(b"\\"): + i += 1 + try: + v = _ESCAPE_TABLE[value[i]] + except IndexError as exc: + raise ValueError( + "escape character in %r at %d before end of string" % (value, i) + ) from exc + except KeyError as exc: + raise ValueError( + "escape character followed by unknown character " + "%s at %d in %r" % (value[i], i, value) + ) from exc + if whitespace: + ret.extend(whitespace) + whitespace = bytearray() + ret.append(v) + elif c == ord(b'"'): + in_quotes = not in_quotes + elif c in _COMMENT_CHARS and not in_quotes: + # the rest of the line is a comment + break + elif c in _WHITESPACE_CHARS: + whitespace.append(c) + else: + if whitespace: + ret.extend(whitespace) + whitespace = bytearray() + ret.append(c) + i += 1 + + if in_quotes: + raise ValueError("missing end quote") + + return bytes(ret) + + +def _escape_value(value: bytes) -> bytes: + """Escape a value.""" + value = value.replace(b"\\", b"\\\\") + value = value.replace(b"\r", b"\\r") + value = value.replace(b"\n", b"\\n") + value = value.replace(b"\t", b"\\t") + value = value.replace(b'"', b'\\"') + return value + + +def _check_variable_name(name: bytes) -> bool: + for i in range(len(name)): + c = name[i : i + 1] + if not c.isalnum() and c != b"-": + return False + return True + + +def _check_section_name(name: bytes) -> bool: + for i in range(len(name)): + c = name[i : i + 1] + if not c.isalnum() and c not in (b"-", b"."): + return False + return True + + +def _strip_comments(line: bytes) -> bytes: + comment_bytes = {ord(b"#"), ord(b";")} + quote = ord(b'"') + string_open = False + # Normalize line to bytearray for simple 2/3 compatibility + for i, character in enumerate(bytearray(line)): + # Comment characters outside balanced quotes denote comment start + if character == quote: + string_open = not string_open + elif not string_open and character in comment_bytes: + return line[:i] + 
return line + + +def _parse_section_header_line(line: bytes) -> Tuple[Section, bytes]: + # Parse section header ("[bla]") + line = _strip_comments(line).rstrip() + in_quotes = False + escaped = False + for i, c in enumerate(line): + if escaped: + escaped = False + continue + if c == ord(b'"'): + in_quotes = not in_quotes + if c == ord(b'\\'): + escaped = True + if c == ord(b']') and not in_quotes: + last = i + break + else: + raise ValueError("expected trailing ]") + pts = line[1:last].split(b" ", 1) + line = line[last + 1:] + section: Section + if len(pts) == 2: + if pts[1][:1] != b'"' or pts[1][-1:] != b'"': + raise ValueError("Invalid subsection %r" % pts[1]) + else: + pts[1] = pts[1][1:-1] + if not _check_section_name(pts[0]): + raise ValueError("invalid section name %r" % pts[0]) + section = (pts[0], pts[1]) + else: + if not _check_section_name(pts[0]): + raise ValueError("invalid section name %r" % pts[0]) + pts = pts[0].split(b".", 1) + if len(pts) == 2: + section = (pts[0], pts[1]) + else: + section = (pts[0],) + return section, line + + +class ConfigFile(ConfigDict): + """A Git configuration file, like .git/config or ~/.gitconfig.""" + + def __init__( + self, + values: Union[ + MutableMapping[Section, MutableMapping[Name, Value]], None + ] = None, + encoding: Union[str, None] = None + ) -> None: + super(ConfigFile, self).__init__(values=values, encoding=encoding) + self.path: Optional[str] = None + + @classmethod # noqa: C901 + def from_file(cls, f: BinaryIO) -> "ConfigFile": # noqa: C901 + """Read configuration from a file-like object.""" + ret = cls() + section: Optional[Section] = None + setting = None + continuation = None + for lineno, line in enumerate(f.readlines()): + if lineno == 0 and line.startswith(b'\xef\xbb\xbf'): + line = line[3:] + line = line.lstrip() + if setting is None: + if len(line) > 0 and line[:1] == b"[": + section, line = _parse_section_header_line(line) + ret._values.setdefault(section) + if _strip_comments(line).strip() == b"": + continue + if section is None: + raise ValueError("setting %r without section" % line) + try: + setting, value = line.split(b"=", 1) + except ValueError: + setting = line + value = b"true" + setting = setting.strip() + if not _check_variable_name(setting): + raise ValueError("invalid variable name %r" % setting) + if value.endswith(b"\\\n"): + continuation = value[:-2] + elif value.endswith(b"\\\r\n"): + continuation = value[:-3] + else: + continuation = None + value = _parse_string(value) + ret._values[section][setting] = value + setting = None + else: # continuation line + if line.endswith(b"\\\n"): + continuation += line[:-2] + elif line.endswith(b"\\\r\n"): + continuation += line[:-3] + else: + continuation += line + value = _parse_string(continuation) + ret._values[section][setting] = value + continuation = None + setting = None + return ret + + @classmethod + def from_path(cls, path: str) -> "ConfigFile": + """Read configuration from a file on disk.""" + with GitFile(path, "rb") as f: + ret = cls.from_file(f) + ret.path = path + return ret + + def write_to_path(self, path: Optional[str] = None) -> None: + """Write configuration to a file on disk.""" + if path is None: + path = self.path + with GitFile(path, "wb") as f: + self.write_to_file(f) + + def write_to_file(self, f: BinaryIO) -> None: + """Write configuration to a file-like object.""" + for section, values in self._values.items(): + try: + section_name, subsection_name = section + except ValueError: + (section_name,) = section + subsection_name = None + if 
subsection_name is None: + f.write(b"[" + section_name + b"]\n") + else: + f.write(b"[" + section_name + b' "' + subsection_name + b'"]\n') + for key, value in values.items(): + value = _format_string(value) + f.write(b"\t" + key + b" = " + value + b"\n") + + +def get_xdg_config_home_path(*path_segments): + xdg_config_home = os.environ.get( + "XDG_CONFIG_HOME", + os.path.expanduser("~/.config/"), + ) + return os.path.join(xdg_config_home, *path_segments) + + +def _find_git_in_win_path(): + for exe in ("git.exe", "git.cmd"): + for path in os.environ.get("PATH", "").split(";"): + if os.path.exists(os.path.join(path, exe)): + # exe path is .../Git/bin/git.exe or .../Git/cmd/git.exe + git_dir, _bin_dir = os.path.split(path) + yield git_dir + break + + +def _find_git_in_win_reg(): + import platform + import winreg + + if platform.machine() == "AMD64": + subkey = ( + "SOFTWARE\\Wow6432Node\\Microsoft\\Windows\\" + "CurrentVersion\\Uninstall\\Git_is1" + ) + else: + subkey = ( + "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\" + "Uninstall\\Git_is1" + ) + + for key in (winreg.HKEY_CURRENT_USER, winreg.HKEY_LOCAL_MACHINE): + try: + with winreg.OpenKey(key, subkey) as k: + val, typ = winreg.QueryValueEx(k, "InstallLocation") + if typ == winreg.REG_SZ: + yield val + except OSError: + pass + + +# There is no set standard for system config dirs on windows. We try the +# following: +# - %PROGRAMDATA%/Git/config - (deprecated) Windows config dir per CGit docs +# - %PROGRAMFILES%/Git/etc/gitconfig - Git for Windows (msysgit) config dir +# Used if CGit installation (Git/bin/git.exe) is found in PATH in the +# system registry +def get_win_system_paths(): + if "PROGRAMDATA" in os.environ: + yield os.path.join(os.environ["PROGRAMDATA"], "Git", "config") + + for git_dir in _find_git_in_win_path(): + yield os.path.join(git_dir, "etc", "gitconfig") + for git_dir in _find_git_in_win_reg(): + yield os.path.join(git_dir, "etc", "gitconfig") + + +class StackedConfig(Config): + """Configuration which reads from multiple config files..""" + + def __init__( + self, backends: List[ConfigFile], writable: Optional[ConfigFile] = None + ): + self.backends = backends + self.writable = writable + + def __repr__(self) -> str: + return "<%s for %r>" % (self.__class__.__name__, self.backends) + + @classmethod + def default(cls) -> "StackedConfig": + return cls(cls.default_backends()) + + @classmethod + def default_backends(cls) -> List[ConfigFile]: + """Retrieve the default configuration. + + See git-config(1) for details on the files searched. 
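+
+        Here that means ~/.gitconfig, $XDG_CONFIG_HOME/git/config and, unless
+        GIT_CONFIG_NOSYSTEM is set, /etc/gitconfig plus the known Git install
+        locations on Windows.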
+ """ + paths = [] + paths.append(os.path.expanduser("~/.gitconfig")) + paths.append(get_xdg_config_home_path("git", "config")) + + if "GIT_CONFIG_NOSYSTEM" not in os.environ: + paths.append("/etc/gitconfig") + if sys.platform == "win32": + paths.extend(get_win_system_paths()) + + backends = [] + for path in paths: + try: + cf = ConfigFile.from_path(path) + except FileNotFoundError: + continue + backends.append(cf) + return backends + + def get(self, section: SectionLike, name: NameLike) -> Value: + if not isinstance(section, tuple): + section = (section,) + for backend in self.backends: + try: + return backend.get(section, name) + except KeyError: + pass + raise KeyError(name) + + def get_multivar(self, section: SectionLike, name: NameLike) -> Iterator[Value]: + if not isinstance(section, tuple): + section = (section,) + for backend in self.backends: + try: + yield from backend.get_multivar(section, name) + except KeyError: + pass + + def set( + self, + section: SectionLike, + name: NameLike, + value: Union[ValueLike, bool] + ) -> None: + if self.writable is None: + raise NotImplementedError(self.set) + return self.writable.set(section, name, value) + + def sections(self) -> Iterator[Section]: + seen = set() + for backend in self.backends: + for section in backend.sections(): + if section not in seen: + seen.add(section) + yield section + + +def read_submodules(path: str) -> Iterator[Tuple[bytes, bytes, bytes]]: + """read a .gitmodules file.""" + cfg = ConfigFile.from_path(path) + return parse_submodules(cfg) + + +def parse_submodules(config: ConfigFile) -> Iterator[Tuple[bytes, bytes, bytes]]: + """Parse a gitmodules GitConfig file, returning submodules. + + Args: + config: A `ConfigFile` + Returns: + list of tuples (submodule path, url, name), + where name is quoted part of the section's name. + """ + for section in config.keys(): + section_kind, section_name = section + if section_kind == b"submodule": + sm_path = config.get(section, b"path") + sm_url = config.get(section, b"url") + yield (sm_path, sm_url, section_name) + + +def iter_instead_of(config: Config, push: bool = False) -> Iterable[Tuple[str, str]]: + """Iterate over insteadOf / pushInsteadOf values. + """ + for section in config.sections(): + if section[0] != b'url': + continue + replacement = section[1] + try: + needles = list(config.get_multivar(section, "insteadOf")) + except KeyError: + needles = [] + if push: + try: + needles += list(config.get_multivar(section, "pushInsteadOf")) + except KeyError: + pass + for needle in needles: + assert isinstance(needle, bytes) + yield needle.decode('utf-8'), replacement.decode('utf-8') + + +def apply_instead_of(config: Config, orig_url: str, push: bool = False) -> str: + """Apply insteadOf / pushInsteadOf to a URL. 
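+
+    The longest matching insteadOf prefix wins; for example, a
+    [url "git@github.com:"] section carrying insteadOf = https://github.com/
+    rewrites https://github.com/jelmer/dulwich to
+    git@github.com:jelmer/dulwich.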
+ """ + longest_needle = "" + updated_url = orig_url + for needle, replacement in iter_instead_of(config, push): + if not orig_url.startswith(needle): + continue + if len(longest_needle) < len(needle): + longest_needle = needle + updated_url = replacement + orig_url[len(needle):] + return updated_url diff --git a/env/lib/python3.8/site-packages/dulwich/contrib/__init__.py b/env/lib/python3.8/site-packages/dulwich/contrib/__init__.py new file mode 100644 index 00000000..cc23d2af --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/contrib/__init__.py @@ -0,0 +1,32 @@ +# __init__.py -- Contrib module for Dulwich +# Copyright (C) 2014 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + + +def test_suite(): + import unittest + + names = [ + "paramiko_vendor", + "release_robot", + "swift", + ] + module_names = ["dulwich.contrib.test_" + name for name in names] + loader = unittest.TestLoader() + return loader.loadTestsFromNames(module_names) diff --git a/env/lib/python3.8/site-packages/dulwich/contrib/diffstat.py b/env/lib/python3.8/site-packages/dulwich/contrib/diffstat.py new file mode 100644 index 00000000..7a552ed4 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/contrib/diffstat.py @@ -0,0 +1,345 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab + +# Copyright (c) 2020 Kevin B. Hendricks, Stratford Ontario Canada +# All rights reserved. +# +# This diffstat code was extracted and heavily modified from: +# +# https://github.com/techtonik/python-patch +# Under the following license: +# +# Patch utility to apply unified diffs +# Brute-force line-by-line non-recursive parsing +# +# Copyright (c) 2008-2016 anatoly techtonik +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in +# all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +# THE SOFTWARE. 
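+
+# Typical usage is a minimal sketch along these lines, where `patch_bytes`
+# is a hypothetical bytestring holding a git-style diff:
+#
+#     from dulwich.contrib.diffstat import diffstat
+#     print(diffstat(patch_bytes.split(b"\n")).decode("utf-8"))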
+ +import sys +import re + +# only needs to detect git style diffs as this is for +# use with dulwich + +_git_header_name = re.compile(br"diff --git a/(.*) b/(.*)") + +_GIT_HEADER_START = b"diff --git a/" +_GIT_BINARY_START = b"Binary file" +_GIT_RENAMEFROM_START = b"rename from" +_GIT_RENAMETO_START = b"rename to" +_GIT_CHUNK_START = b"@@" +_GIT_ADDED_START = b"+" +_GIT_DELETED_START = b"-" +_GIT_UNCHANGED_START = b" " + +# emulate original full Patch class by just extracting +# filename and minimal chunk added/deleted information to +# properly interface with diffstat routine + + +def _parse_patch(lines): + """Parse a git style diff or patch to generate diff stats. + + Args: + lines: list of byte string lines from the diff to be parsed + Returns: A tuple (names, is_binary, counts) of three lists + """ + names = [] + nametypes = [] + counts = [] + in_patch_chunk = in_git_header = binaryfile = False + currentfile = None + added = deleted = 0 + for line in lines: + if line.startswith(_GIT_HEADER_START): + if currentfile is not None: + names.append(currentfile) + nametypes.append(binaryfile) + counts.append((added, deleted)) + currentfile = _git_header_name.search(line).group(2) + binaryfile = False + added = deleted = 0 + in_git_header = True + in_patch_chunk = False + elif line.startswith(_GIT_BINARY_START) and in_git_header: + binaryfile = True + in_git_header = False + elif line.startswith(_GIT_RENAMEFROM_START) and in_git_header: + currentfile = line[12:] + elif line.startswith(_GIT_RENAMETO_START) and in_git_header: + currentfile += b" => %s" % line[10:] + elif line.startswith(_GIT_CHUNK_START) and (in_patch_chunk or in_git_header): + in_patch_chunk = True + in_git_header = False + elif line.startswith(_GIT_ADDED_START) and in_patch_chunk: + added += 1 + elif line.startswith(_GIT_DELETED_START) and in_patch_chunk: + deleted += 1 + elif not line.startswith(_GIT_UNCHANGED_START) and in_patch_chunk: + in_patch_chunk = False + # handle end of input + if currentfile is not None: + names.append(currentfile) + nametypes.append(binaryfile) + counts.append((added, deleted)) + return names, nametypes, counts + + +# note must all done using bytes not string because on linux filenames +# may not be encodable even to utf-8 +def diffstat(lines, max_width=80): + """Generate summary statistics from a git style diff ala + (git diff tag1 tag2 --stat) + Args: + lines: list of byte string "lines" from the diff to be parsed + max_width: maximum line length for generating the summary + statistics (default 80) + Returns: A byte string that lists the changed files with change + counts and histogram + """ + names, nametypes, counts = _parse_patch(lines) + insert = [] + delete = [] + namelen = 0 + maxdiff = 0 # max changes for any file used for histogram width calc + for i, filename in enumerate(names): + i, d = counts[i] + insert.append(i) + delete.append(d) + namelen = max(namelen, len(filename)) + maxdiff = max(maxdiff, i + d) + output = b"" + statlen = len(str(maxdiff)) # stats column width + for i, n in enumerate(names): + binaryfile = nametypes[i] + # %-19s | %-4d %s + # note b'%d' % namelen is not supported until Python 3.5 + # To convert an int to a format width specifier for byte + # strings use str(namelen).encode('ascii') + format = ( + b" %-" + + str(namelen).encode("ascii") + + b"s | %" + + str(statlen).encode("ascii") + + b"s %s\n" + ) + binformat = b" %-" + str(namelen).encode("ascii") + b"s | %s\n" + if not binaryfile: + hist = b"" + # -- calculating histogram -- + width = len(format % (b"", 
b"", b"")) + histwidth = max(2, max_width - width) + if maxdiff < histwidth: + hist = b"+" * insert[i] + b"-" * delete[i] + else: + iratio = (float(insert[i]) / maxdiff) * histwidth + dratio = (float(delete[i]) / maxdiff) * histwidth + iwidth = dwidth = 0 + # make sure every entry that had actual insertions gets + # at least one + + if insert[i] > 0: + iwidth = int(iratio) + if iwidth == 0 and 0 < iratio < 1: + iwidth = 1 + # make sure every entry that had actual deletions gets + # at least one - + if delete[i] > 0: + dwidth = int(dratio) + if dwidth == 0 and 0 < dratio < 1: + dwidth = 1 + hist = b"+" * int(iwidth) + b"-" * int(dwidth) + output += format % ( + bytes(names[i]), + str(insert[i] + delete[i]).encode("ascii"), + hist, + ) + else: + output += binformat % (bytes(names[i]), b"Bin") + + output += b" %d files changed, %d insertions(+), %d deletions(-)" % ( + len(names), + sum(insert), + sum(delete), + ) + return output + + +def main(): + argv = sys.argv + # allow diffstat.py to also be used from the command line + if len(sys.argv) > 1: + diffpath = argv[1] + data = b"" + with open(diffpath, "rb") as f: + data = f.read() + lines = data.split(b"\n") + result = diffstat(lines) + print(result.decode("utf-8")) + return 0 + + # if no path argument to a diff file is passed in, run + # a self test. The test case includes tricky things like + # a diff of diff, binary files, renames with further changes + # added files and removed files. + # All extracted from Sigil-Ebook/Sigil's github repo with + # full permission to use under this license. + selftest = b""" +diff --git a/docs/qt512.7_remove_bad_workaround.patch b/docs/qt512.7_remove_bad_workaround.patch +new file mode 100644 +index 00000000..64e34192 +--- /dev/null ++++ b/docs/qt512.7_remove_bad_workaround.patch +@@ -0,0 +1,15 @@ ++--- qtbase/src/gui/kernel/qwindow.cpp.orig 2019-12-12 09:15:59.000000000 -0500 +++++ qtbase/src/gui/kernel/qwindow.cpp 2020-01-10 10:36:53.000000000 -0500 ++@@ -218,12 +218,6 @@ ++ QGuiApplicationPrivate::window_list.removeAll(this); ++ if (!QGuiApplicationPrivate::is_app_closing) ++ QGuiApplicationPrivate::instance()->modalWindowList.removeOne(this); ++- ++- // focus_window is normally cleared in destroy(), but the window may in ++- // some cases end up becoming the focus window again. Clear it again ++- // here as a workaround. See QTBUG-75326. 
++- if (QGuiApplicationPrivate::focus_window == this) ++- QGuiApplicationPrivate::focus_window = 0; ++ } ++ ++ void QWindowPrivate::init(QScreen *targetScreen) +diff --git a/docs/testplugin_v017.zip b/docs/testplugin_v017.zip +new file mode 100644 +index 00000000..a4cf4c4c +Binary files /dev/null and b/docs/testplugin_v017.zip differ +diff --git a/ci_scripts/macgddeploy.py b/ci_scripts/gddeploy.py +similarity index 73% +rename from ci_scripts/macgddeploy.py +rename to ci_scripts/gddeploy.py +index a512d075..f9dacd33 100644 +--- a/ci_scripts/macgddeploy.py ++++ b/ci_scripts/gddeploy.py +@@ -1,19 +1,32 @@ + #!/usr/bin/env python3 + + import os ++import sys + import subprocess + import datetime + import shutil ++import glob + + gparent = os.path.expandvars('$GDRIVE_DIR') + grefresh_token = os.path.expandvars('$GDRIVE_REFRESH_TOKEN') + +-travis_branch = os.path.expandvars('$TRAVIS_BRANCH') +-travis_commit = os.path.expandvars('$TRAVIS_COMMIT') +-travis_build_number = os.path.expandvars('$TRAVIS_BUILD_NUMBER') ++if sys.platform.lower().startswith('darwin'): ++ travis_branch = os.path.expandvars('$TRAVIS_BRANCH') ++ travis_commit = os.path.expandvars('$TRAVIS_COMMIT') ++ travis_build_number = os.path.expandvars('$TRAVIS_BUILD_NUMBER') ++ ++ origfilename = './bin/Sigil.tar.xz' ++ newfilename = './bin/Sigil-{}-{}-build_num-{}.tar.xz'.format(travis_branch, travis_commit[:7],travis_build_numbe\ +r) ++else: ++ appveyor_branch = os.path.expandvars('$APPVEYOR_REPO_BRANCH') ++ appveyor_commit = os.path.expandvars('$APPVEYOR_REPO_COMMIT') ++ appveyor_build_number = os.path.expandvars('$APPVEYOR_BUILD_NUMBER') ++ names = glob.glob('.\\installer\\Sigil-*-Setup.exe') ++ if not names: ++ exit(1) ++ origfilename = names[0] ++ newfilename = '.\\installer\\Sigil-{}-{}-build_num-{}-Setup.exe'.format(appveyor_branch, appveyor_commit[:7], ap\ +pveyor_build_number) + +-origfilename = './bin/Sigil.tar.xz' +-newfilename = './bin/Sigil-{}-{}-build_num-{}.tar.xz'.format(travis_branch, travis_commit[:7],travis_build_number) + shutil.copy2(origfilename, newfilename) + + folder_name = datetime.date.today() +diff --git a/docs/qt512.6_backport_009abcd_fix.patch b/docs/qt512.6_backport_009abcd_fix.patch +deleted file mode 100644 +index f4724347..00000000 +--- a/docs/qt512.6_backport_009abcd_fix.patch ++++ /dev/null +@@ -1,26 +0,0 @@ +---- qtbase/src/widgets/kernel/qwidget.cpp.orig 2019-11-08 10:57:07.000000000 -0500 +-+++ qtbase/src/widgets/kernel/qwidget.cpp 2019-12-11 12:32:24.000000000 -0500 +-@@ -8934,6 +8934,23 @@ +- } +- } +- switch (event->type()) { +-+ case QEvent::PlatformSurface: { +-+ // Sync up QWidget's view of whether or not the widget has been created +-+ switch (static_cast(event)->surfaceEventType()) { +-+ case QPlatformSurfaceEvent::SurfaceCreated: +-+ if (!testAttribute(Qt::WA_WState_Created)) +-+ create(); +-+ break; +-+ case QPlatformSurfaceEvent::SurfaceAboutToBeDestroyed: +-+ if (testAttribute(Qt::WA_WState_Created)) { +-+ // Child windows have already been destroyed by QWindow, +-+ // so we skip them here. 
+-+ destroy(false, false); +-+ } +-+ break; +-+ } +-+ break; +-+ } +- case QEvent::MouseMove: +- mouseMoveEvent((QMouseEvent*)event); +- break; +diff --git a/docs/Building_Sigil_On_MacOSX.txt b/docs/Building_Sigil_On_MacOSX.txt +index 3b41fd80..64914c78 100644 +--- a/docs/Building_Sigil_On_MacOSX.txt ++++ b/docs/Building_Sigil_On_MacOSX.txt +@@ -113,7 +113,7 @@ install_name_tool -add_rpath @loader_path/../../Frameworks ./bin/Sigil.app/Content + + # To test if the newly bundled python 3 version of Sigil is working properly ypou can do the following: + +-1. download testplugin_v014.zip from https://github.com/Sigil-Ebook/Sigil/tree/master/docs ++1. download testplugin_v017.zip from https://github.com/Sigil-Ebook/Sigil/tree/master/docs + 2. open Sigil.app to the normal nearly blank template epub it generates when opened + 3. use Plugins->Manage Plugins menu and make sure the "Use Bundled Python" checkbox is checked + 4. use the "Add Plugin" button to navigate to and add testplugin.zip and then hit "Okay" to exit the Manage Plugins Dialog +""" + + testoutput = b""" docs/qt512.7_remove_bad_workaround.patch | 15 ++++++++++++ + docs/testplugin_v017.zip | Bin + ci_scripts/macgddeploy.py => ci_scripts/gddeploy.py | 0 + docs/qt512.6_backport_009abcd_fix.patch | 26 --------------------- + docs/Building_Sigil_On_MacOSX.txt | 2 +- + 5 files changed, 16 insertions(+), 27 deletions(-)""" + + # return 0 on success otherwise return -1 + result = diffstat(selftest.split(b"\n")) + if result == testoutput: + print("self test passed") + return 0 + print("self test failed") + print("Received:") + print(result.decode("utf-8")) + print("Expected:") + print(testoutput.decode("utf-8")) + return -1 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/env/lib/python3.8/site-packages/dulwich/contrib/paramiko_vendor.py b/env/lib/python3.8/site-packages/dulwich/contrib/paramiko_vendor.py new file mode 100644 index 00000000..541d99b5 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/contrib/paramiko_vendor.py @@ -0,0 +1,117 @@ +# paramiko_vendor.py -- paramiko implementation of the SSHVendor interface +# Copyright (C) 2013 Aaron O'Mullan +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Paramiko SSH support for Dulwich. + +To use this implementation as the SSH implementation in Dulwich, override +the dulwich.client.get_ssh_vendor attribute: + + >>> from dulwich import client as _mod_client + >>> from dulwich.contrib.paramiko_vendor import ParamikoSSHVendor + >>> _mod_client.get_ssh_vendor = ParamikoSSHVendor + +This implementation is experimental and does not have any tests. 
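+
+A hypothetical direct use of the vendor (host and command are only
+illustrative):
+
+  >>> vendor = ParamikoSSHVendor()
+  >>> con = vendor.run_command('git.example.com', 'git-upload-pack /repo.git')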
+""" + +import paramiko +import paramiko.client + + +class _ParamikoWrapper(object): + def __init__(self, client, channel): + self.client = client + self.channel = channel + + # Channel must block + self.channel.setblocking(True) + + @property + def stderr(self): + return self.channel.makefile_stderr('rb') + + def can_read(self): + return self.channel.recv_ready() + + def write(self, data): + return self.channel.sendall(data) + + def read(self, n=None): + data = self.channel.recv(n) + data_len = len(data) + + # Closed socket + if not data: + return b"" + + # Read more if needed + if n and data_len < n: + diff_len = n - data_len + return data + self.read(diff_len) + return data + + def close(self): + self.channel.close() + + +class ParamikoSSHVendor(object): + # http://docs.paramiko.org/en/2.4/api/client.html + + def __init__(self, **kwargs): + self.kwargs = kwargs + + def run_command( + self, + host, + command, + username=None, + port=None, + password=None, + pkey=None, + key_filename=None, + **kwargs + ): + + client = paramiko.SSHClient() + + connection_kwargs = {"hostname": host} + connection_kwargs.update(self.kwargs) + if username: + connection_kwargs["username"] = username + if port: + connection_kwargs["port"] = port + if password: + connection_kwargs["password"] = password + if pkey: + connection_kwargs["pkey"] = pkey + if key_filename: + connection_kwargs["key_filename"] = key_filename + connection_kwargs.update(kwargs) + + policy = paramiko.client.MissingHostKeyPolicy() + client.set_missing_host_key_policy(policy) + client.connect(**connection_kwargs) + + # Open SSH session + channel = client.get_transport().open_session() + + # Run commands + channel.exec_command(command) + + return _ParamikoWrapper(client, channel) diff --git a/env/lib/python3.8/site-packages/dulwich/contrib/release_robot.py b/env/lib/python3.8/site-packages/dulwich/contrib/release_robot.py new file mode 100644 index 00000000..5b2734e7 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/contrib/release_robot.py @@ -0,0 +1,147 @@ +# release_robot.py +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Determine last version string from tags. + +Alternate to `Versioneer `_ using +`Dulwich `_ to sort tags by time from +newest to oldest. + +Copy the following into the package ``__init__.py`` module:: + + from dulwich.contrib.release_robot import get_current_version + __version__ = get_current_version() + +This example assumes the tags have a leading "v" like "v0.3", and that the +``.git`` folder is in a project folder that contains the package folder. + +EG:: + + * project + | + * .git + | + +-* package + | + * __init__.py <-- put __version__ here + + +""" + +import datetime +import re +import sys +import time + +from dulwich.repo import Repo + +# CONSTANTS +PROJDIR = "." 
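+# PATTERN strips an arbitrary prefix of spaces, letters, underscores and
+# hyphens (e.g. "v" or "Release-") and captures the version itself in
+# group 1, so "v0.3.1" should yield "0.3.1".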
+PATTERN = r"[ a-zA-Z_\-]*([\d\.]+[\-\w\.]*)"
+
+
+def get_recent_tags(projdir=PROJDIR):
+    """Get list of tags in order from newest to oldest and their datetimes.
+
+    Args:
+      projdir: path to ``.git``
+    Returns:
+      list of tags sorted by commit time from newest to oldest
+
+    Each tag in the list contains the tag name, commit time, commit id, author
+    and any tag meta. If a tag isn't annotated, then its tag meta is ``None``.
+    Otherwise the tag meta is a tuple containing the tag time, tag id and tag
+    name. Time is in UTC.
+    """
+    with Repo(projdir) as project:  # dulwich repository object
+        refs = project.get_refs()  # dictionary of refs and their SHA-1 values
+        tags = {}  # empty dictionary to hold tags, commits and datetimes
+        # iterate over refs in repository
+        for key, value in refs.items():
+            key = key.decode("utf-8")  # compatible with Python-3
+            obj = project.get_object(value)  # dulwich object from SHA-1
+            # don't just check if object is "tag" b/c it could be a "commit"
+            # instead check if "tags" is in the ref-name
+            if u"tags" not in key:
+                # skip ref if not a tag
+                continue
+            # strip the leading text from refs to get "tag name"
+            _, tag = key.rsplit(u"/", 1)
+            # check if tag object is "commit" or "tag" pointing to a "commit"
+            try:
+                commit = obj.object  # a tuple (commit class, commit id)
+            except AttributeError:
+                commit = obj
+                tag_meta = None
+            else:
+                tag_meta = (
+                    datetime.datetime(*time.gmtime(obj.tag_time)[:6]),
+                    obj.id.decode("utf-8"),
+                    obj.name.decode("utf-8"),
+                )  # compatible with Python-3
+                commit = project.get_object(commit[1])  # commit object
+            # get tag commit datetime, but dulwich returns seconds since
+            # beginning of epoch, so use Python time module to convert it to
+            # timetuple then convert to datetime
+            tags[tag] = [
+                datetime.datetime(*time.gmtime(commit.commit_time)[:6]),
+                commit.id.decode("utf-8"),
+                commit.author.decode("utf-8"),
+                tag_meta,
+            ]  # compatible with Python-3
+
+    # return list of tags sorted by their datetimes from newest to oldest
+    return sorted(tags.items(), key=lambda tag: tag[1][0], reverse=True)
+
+
+def get_current_version(projdir=PROJDIR, pattern=PATTERN, logger=None):
+    """Return the most recent tag, using an optional regular expression pattern.
+
+    The default pattern will strip any characters preceding the first semantic
+    version. *EG*: "Release-0.2.1-rc.1" will become "0.2.1-rc.1". If no match
+    is found, then the most recent tag is returned without modification.
+ + Args: + projdir: path to ``.git`` + pattern: regular expression pattern with group that matches version + logger: a Python logging instance to capture exception + Returns: + tag matching first group in regular expression pattern + """ + tags = get_recent_tags(projdir) + try: + tag = tags[0][0] + except IndexError: + return + matches = re.match(pattern, tag) + try: + current_version = matches.group(1) + except (IndexError, AttributeError) as err: + if logger: + logger.exception(err) + return tag + return current_version + + +if __name__ == "__main__": + if len(sys.argv) > 1: + _PROJDIR = sys.argv[1] + else: + _PROJDIR = PROJDIR + print(get_current_version(projdir=_PROJDIR)) diff --git a/env/lib/python3.8/site-packages/dulwich/contrib/requests_vendor.py b/env/lib/python3.8/site-packages/dulwich/contrib/requests_vendor.py new file mode 100644 index 00000000..c7f0183a --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/contrib/requests_vendor.py @@ -0,0 +1,145 @@ +# requests_vendor.py -- requests implementation of the AbstractHttpGitClient interface +# Copyright (C) 2022 Eden Shalit +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. + + +"""Requests HTTP client support for Dulwich. + +To use this implementation as the HTTP implementation in Dulwich, override +the dulwich.client.HttpGitClient attribute: + + >>> from dulwich import client as _mod_client + >>> from dulwich.contrib.requests_vendor import RequestsHttpGitClient + >>> _mod_client.HttpGitClient = RequestsHttpGitClient + +This implementation is experimental and does not have any tests. 
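+
+For instance (a sketch; the repository URL is only illustrative):
+
+  >>> client = RequestsHttpGitClient("https://github.com/jelmer/dulwich")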
+""" +from io import BytesIO + +from requests import Session + +from dulwich.client import AbstractHttpGitClient, HTTPUnauthorized, HTTPProxyUnauthorized, default_user_agent_string +from dulwich.errors import NotGitRepository, GitProtocolError + + +class RequestsHttpGitClient(AbstractHttpGitClient): + def __init__( + self, + base_url, + dumb=None, + config=None, + username=None, + password=None, + **kwargs + ): + self._username = username + self._password = password + + self.session = get_session(config) + + if username is not None: + self.session.auth = (username, password) + + super(RequestsHttpGitClient, self).__init__( + base_url=base_url, dumb=dumb, **kwargs) + + def _http_request(self, url, headers=None, data=None, allow_compression=False): + req_headers = self.session.headers.copy() + if headers is not None: + req_headers.update(headers) + + if allow_compression: + req_headers["Accept-Encoding"] = "gzip" + else: + req_headers["Accept-Encoding"] = "identity" + + if data: + resp = self.session.post(url, headers=req_headers, data=data) + else: + resp = self.session.get(url, headers=req_headers) + + if resp.status_code == 404: + raise NotGitRepository() + if resp.status_code == 401: + raise HTTPUnauthorized(resp.headers.get("WWW-Authenticate"), url) + if resp.status_code == 407: + raise HTTPProxyUnauthorized(resp.headers.get("Proxy-Authenticate"), url) + if resp.status_code != 200: + raise GitProtocolError( + "unexpected http resp %d for %s" % (resp.status_code, url) + ) + + # Add required fields as stated in AbstractHttpGitClient._http_request + resp.content_type = resp.headers.get("Content-Type") + resp.redirect_location = "" + if resp.history: + resp.redirect_location = resp.url + + read = BytesIO(resp.content).read + + return resp, read + + +def get_session(config): + session = Session() + session.headers.update({"Pragma": "no-cache"}) + + proxy_server = user_agent = ca_certs = ssl_verify = None + + if config is not None: + try: + proxy_server = config.get(b"http", b"proxy") + if isinstance(proxy_server, bytes): + proxy_server = proxy_server.decode() + except KeyError: + pass + + try: + user_agent = config.get(b"http", b"useragent") + if isinstance(user_agent, bytes): + user_agent = user_agent.decode() + except KeyError: + pass + + try: + ssl_verify = config.get_boolean(b"http", b"sslVerify") + except KeyError: + ssl_verify = True + + try: + ca_certs = config.get(b"http", b"sslCAInfo") + if isinstance(ca_certs, bytes): + ca_certs = ca_certs.decode() + except KeyError: + ca_certs = None + + if user_agent is None: + user_agent = default_user_agent_string() + session.headers.update({"User-agent": user_agent}) + + if ca_certs: + session.verify = ca_certs + elif ssl_verify is False: + session.verify = ssl_verify + + if proxy_server: + session.proxies.update({ + "http": proxy_server, + "https": proxy_server + }) + return session diff --git a/env/lib/python3.8/site-packages/dulwich/contrib/swift.py b/env/lib/python3.8/site-packages/dulwich/contrib/swift.py new file mode 100644 index 00000000..4deac806 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/contrib/swift.py @@ -0,0 +1,1080 @@ +# swift.py -- Repo implementation atop OpenStack SWIFT +# Copyright (C) 2013 eNovance SAS +# +# Author: Fabien Boucher +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. 
You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Repo implementation atop OpenStack SWIFT.""" + +# TODO: Refactor to share more code with dulwich/repo.py. +# TODO(fbo): Second attempt to _send() must be notified via real log +# TODO(fbo): More logs for operations + +import os +import stat +import zlib +import tempfile +import posixpath + +import urllib.parse as urlparse + +from io import BytesIO +from configparser import ConfigParser +from geventhttpclient import HTTPClient + +from dulwich.greenthreads import ( + GreenThreadsMissingObjectFinder, + GreenThreadsObjectStoreIterator, +) + +from dulwich.lru_cache import LRUSizeCache +from dulwich.objects import ( + Blob, + Commit, + Tree, + Tag, + S_ISGITLINK, +) +from dulwich.object_store import ( + PackBasedObjectStore, + PACKDIR, + INFODIR, +) +from dulwich.pack import ( + PackData, + Pack, + PackIndexer, + PackStreamCopier, + write_pack_header, + compute_file_sha, + iter_sha1, + write_pack_index_v2, + load_pack_index_file, + read_pack_header, + _compute_object_size, + unpack_object, + write_pack_object, +) +from dulwich.protocol import TCP_GIT_PORT +from dulwich.refs import ( + InfoRefsContainer, + read_info_refs, + write_info_refs, +) +from dulwich.repo import ( + BaseRepo, + OBJECTDIR, +) +from dulwich.server import ( + Backend, + TCPGitServer, +) + +import json + +import sys + + +""" +# Configuration file sample +[swift] +# Authentication URL (Keystone or Swift) +auth_url = http://127.0.0.1:5000/v2.0 +# Authentication version to use +auth_ver = 2 +# The tenant and username separated by a semicolon +username = admin;admin +# The user password +password = pass +# The Object storage region to use (auth v2) (Default RegionOne) +region_name = RegionOne +# The Object storage endpoint URL to use (auth v2) (Default internalURL) +endpoint_type = internalURL +# Concurrency to use for parallel tasks (Default 10) +concurrency = 10 +# Size of the HTTP pool (Default 10) +http_pool_length = 10 +# Timeout delay for HTTP connections (Default 20) +http_timeout = 20 +# Chunk size to read from pack (Bytes) (Default 12228) +chunk_length = 12228 +# Cache size (MBytes) (Default 20) +cache_length = 20 +""" + + +class PackInfoObjectStoreIterator(GreenThreadsObjectStoreIterator): + def __len__(self): + while self.finder.objects_to_send: + for _ in range(0, len(self.finder.objects_to_send)): + sha = self.finder.next() + self._shas.append(sha) + return len(self._shas) + + +class PackInfoMissingObjectFinder(GreenThreadsMissingObjectFinder): + def next(self): + while True: + if not self.objects_to_send: + return None + (sha, name, leaf) = self.objects_to_send.pop() + if sha not in self.sha_done: + break + if not leaf: + info = self.object_store.pack_info_get(sha) + if info[0] == Commit.type_num: + self.add_todo([(info[2], "", False)]) + elif info[0] == Tree.type_num: + self.add_todo([tuple(i) for i in info[1]]) + elif info[0] == Tag.type_num: + self.add_todo([(info[1], None, False)]) + if sha in self._tagged: + 
self.add_todo([(self._tagged[sha], None, True)]) + self.sha_done.add(sha) + self.progress("counting objects: %d\r" % len(self.sha_done)) + return (sha, name) + + +def load_conf(path=None, file=None): + """Load configuration in global var CONF + + Args: + path: The path to the configuration file + file: If provided read instead the file like object + """ + conf = ConfigParser() + if file: + try: + conf.read_file(file, path) + except AttributeError: + # read_file only exists in Python3 + conf.readfp(file) + return conf + confpath = None + if not path: + try: + confpath = os.environ["DULWICH_SWIFT_CFG"] + except KeyError as exc: + raise Exception( + "You need to specify a configuration file") from exc + else: + confpath = path + if not os.path.isfile(confpath): + raise Exception("Unable to read configuration file %s" % confpath) + conf.read(confpath) + return conf + + +def swift_load_pack_index(scon, filename): + """Read a pack index file from Swift + + Args: + scon: a `SwiftConnector` instance + filename: Path to the index file objectise + Returns: a `PackIndexer` instance + """ + with scon.get_object(filename) as f: + return load_pack_index_file(filename, f) + + +def pack_info_create(pack_data, pack_index): + pack = Pack.from_objects(pack_data, pack_index) + info = {} + for obj in pack.iterobjects(): + # Commit + if obj.type_num == Commit.type_num: + info[obj.id] = (obj.type_num, obj.parents, obj.tree) + # Tree + elif obj.type_num == Tree.type_num: + shas = [ + (s, n, not stat.S_ISDIR(m)) + for n, m, s in obj.items() + if not S_ISGITLINK(m) + ] + info[obj.id] = (obj.type_num, shas) + # Blob + elif obj.type_num == Blob.type_num: + info[obj.id] = None + # Tag + elif obj.type_num == Tag.type_num: + info[obj.id] = (obj.type_num, obj.object[1]) + return zlib.compress(json.dumps(info)) + + +def load_pack_info(filename, scon=None, file=None): + if not file: + f = scon.get_object(filename) + else: + f = file + if not f: + return None + try: + return json.loads(zlib.decompress(f.read())) + finally: + f.close() + + +class SwiftException(Exception): + pass + + +class SwiftConnector(object): + """A Connector to swift that manage authentication and errors catching""" + + def __init__(self, root, conf): + """Initialize a SwiftConnector + + Args: + root: The swift container that will act as Git bare repository + conf: A ConfigParser Object + """ + self.conf = conf + self.auth_ver = self.conf.get("swift", "auth_ver") + if self.auth_ver not in ["1", "2"]: + raise NotImplementedError("Wrong authentication version use either 1 or 2") + self.auth_url = self.conf.get("swift", "auth_url") + self.user = self.conf.get("swift", "username") + self.password = self.conf.get("swift", "password") + self.concurrency = self.conf.getint("swift", "concurrency") or 10 + self.http_timeout = self.conf.getint("swift", "http_timeout") or 20 + self.http_pool_length = self.conf.getint("swift", "http_pool_length") or 10 + self.region_name = self.conf.get("swift", "region_name") or "RegionOne" + self.endpoint_type = self.conf.get("swift", "endpoint_type") or "internalURL" + self.cache_length = self.conf.getint("swift", "cache_length") or 20 + self.chunk_length = self.conf.getint("swift", "chunk_length") or 12228 + self.root = root + block_size = 1024 * 12 # 12KB + if self.auth_ver == "1": + self.storage_url, self.token = self.swift_auth_v1() + else: + self.storage_url, self.token = self.swift_auth_v2() + + token_header = {"X-Auth-Token": str(self.token)} + self.httpclient = HTTPClient.from_url( + str(self.storage_url), + 
concurrency=self.http_pool_length, + block_size=block_size, + connection_timeout=self.http_timeout, + network_timeout=self.http_timeout, + headers=token_header, + ) + self.base_path = str( + posixpath.join(urlparse.urlparse(self.storage_url).path, self.root) + ) + + def swift_auth_v1(self): + self.user = self.user.replace(";", ":") + auth_httpclient = HTTPClient.from_url( + self.auth_url, + connection_timeout=self.http_timeout, + network_timeout=self.http_timeout, + ) + headers = {"X-Auth-User": self.user, "X-Auth-Key": self.password} + path = urlparse.urlparse(self.auth_url).path + + ret = auth_httpclient.request("GET", path, headers=headers) + + # Should do something with redirections (301 in my case) + + if ret.status_code < 200 or ret.status_code >= 300: + raise SwiftException( + "AUTH v1.0 request failed on " + + "%s with error code %s (%s)" + % ( + str(auth_httpclient.get_base_url()) + path, + ret.status_code, + str(ret.items()), + ) + ) + storage_url = ret["X-Storage-Url"] + token = ret["X-Auth-Token"] + return storage_url, token + + def swift_auth_v2(self): + self.tenant, self.user = self.user.split(";") + auth_dict = {} + auth_dict["auth"] = { + "passwordCredentials": { + "username": self.user, + "password": self.password, + }, + "tenantName": self.tenant, + } + auth_json = json.dumps(auth_dict) + headers = {"Content-Type": "application/json"} + auth_httpclient = HTTPClient.from_url( + self.auth_url, + connection_timeout=self.http_timeout, + network_timeout=self.http_timeout, + ) + path = urlparse.urlparse(self.auth_url).path + if not path.endswith("tokens"): + path = posixpath.join(path, "tokens") + ret = auth_httpclient.request("POST", path, body=auth_json, headers=headers) + + if ret.status_code < 200 or ret.status_code >= 300: + raise SwiftException( + "AUTH v2.0 request failed on " + + "%s with error code %s (%s)" + % ( + str(auth_httpclient.get_base_url()) + path, + ret.status_code, + str(ret.items()), + ) + ) + auth_ret_json = json.loads(ret.read()) + token = auth_ret_json["access"]["token"]["id"] + catalogs = auth_ret_json["access"]["serviceCatalog"] + object_store = [ + o_store for o_store in catalogs if o_store["type"] == "object-store" + ][0] + endpoints = object_store["endpoints"] + endpoint = [endp for endp in endpoints if endp["region"] == self.region_name][0] + return endpoint[self.endpoint_type], token + + def test_root_exists(self): + """Check that Swift container exist + + Returns: True if exist or None it not + """ + ret = self.httpclient.request("HEAD", self.base_path) + if ret.status_code == 404: + return None + if ret.status_code < 200 or ret.status_code > 300: + raise SwiftException( + "HEAD request failed with error code %s" % ret.status_code + ) + return True + + def create_root(self): + """Create the Swift container + + Raises: + SwiftException: if unable to create + """ + if not self.test_root_exists(): + ret = self.httpclient.request("PUT", self.base_path) + if ret.status_code < 200 or ret.status_code > 300: + raise SwiftException( + "PUT request failed with error code %s" % ret.status_code + ) + + def get_container_objects(self): + """Retrieve objects list in a container + + Returns: A list of dict that describe objects + or None if container does not exist + """ + qs = "?format=json" + path = self.base_path + qs + ret = self.httpclient.request("GET", path) + if ret.status_code == 404: + return None + if ret.status_code < 200 or ret.status_code > 300: + raise SwiftException( + "GET request failed with error code %s" % ret.status_code + ) + content = 
ret.read() + return json.loads(content) + + def get_object_stat(self, name): + """Retrieve object stat + + Args: + name: The object name + Returns: + A dict that describe the object or None if object does not exist + """ + path = self.base_path + "/" + name + ret = self.httpclient.request("HEAD", path) + if ret.status_code == 404: + return None + if ret.status_code < 200 or ret.status_code > 300: + raise SwiftException( + "HEAD request failed with error code %s" % ret.status_code + ) + resp_headers = {} + for header, value in ret.items(): + resp_headers[header.lower()] = value + return resp_headers + + def put_object(self, name, content): + """Put an object + + Args: + name: The object name + content: A file object + Raises: + SwiftException: if unable to create + """ + content.seek(0) + data = content.read() + path = self.base_path + "/" + name + headers = {"Content-Length": str(len(data))} + + def _send(): + ret = self.httpclient.request("PUT", path, body=data, headers=headers) + return ret + + try: + # Sometime got Broken Pipe - Dirty workaround + ret = _send() + except Exception: + # Second attempt work + ret = _send() + + if ret.status_code < 200 or ret.status_code > 300: + raise SwiftException( + "PUT request failed with error code %s" % ret.status_code + ) + + def get_object(self, name, range=None): + """Retrieve an object + + Args: + name: The object name + range: A string range like "0-10" to + retrieve specified bytes in object content + Returns: + A file like instance or bytestring if range is specified + """ + headers = {} + if range: + headers["Range"] = "bytes=%s" % range + path = self.base_path + "/" + name + ret = self.httpclient.request("GET", path, headers=headers) + if ret.status_code == 404: + return None + if ret.status_code < 200 or ret.status_code > 300: + raise SwiftException( + "GET request failed with error code %s" % ret.status_code + ) + content = ret.read() + + if range: + return content + return BytesIO(content) + + def del_object(self, name): + """Delete an object + + Args: + name: The object name + Raises: + SwiftException: if unable to delete + """ + path = self.base_path + "/" + name + ret = self.httpclient.request("DELETE", path) + if ret.status_code < 200 or ret.status_code > 300: + raise SwiftException( + "DELETE request failed with error code %s" % ret.status_code + ) + + def del_root(self): + """Delete the root container by removing container content + + Raises: + SwiftException: if unable to delete + """ + for obj in self.get_container_objects(): + self.del_object(obj["name"]) + ret = self.httpclient.request("DELETE", self.base_path) + if ret.status_code < 200 or ret.status_code > 300: + raise SwiftException( + "DELETE request failed with error code %s" % ret.status_code + ) + + +class SwiftPackReader(object): + """A SwiftPackReader that mimic read and sync method + + The reader allows to read a specified amount of bytes from + a given offset of a Swift object. A read offset is kept internally. + The reader will read from Swift a specified amount of data to complete + its internal buffer. chunk_length specify the amount of data + to read from Swift. 
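+
+    When a read overruns the buffered window, the buffer length is doubled
+    and the window is fetched again, so sequential reads need only a few
+    ranged GET requests per pack.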
+    """
+
+    def __init__(self, scon, filename, pack_length):
+        """Initialize a SwiftPackReader.
+
+        Args:
+          scon: a `SwiftConnector` instance
+          filename: the pack filename
+          pack_length: The size of the pack object
+        """
+        self.scon = scon
+        self.filename = filename
+        self.pack_length = pack_length
+        self.offset = 0
+        self.base_offset = 0
+        self.buff = b""
+        self.buff_length = self.scon.chunk_length
+
+    def _read(self, more=False):
+        if more:
+            self.buff_length = self.buff_length * 2
+        offset = self.base_offset
+        r = min(self.base_offset + self.buff_length, self.pack_length)
+        ret = self.scon.get_object(self.filename, range="%s-%s" % (offset, r))
+        self.buff = ret
+
+    def read(self, length):
+        """Read a specified number of bytes from the pack object.
+
+        Args:
+          length: the number of bytes to read
+        Returns:
+          a bytestring
+        """
+        end = self.offset + length
+        if self.base_offset + end > self.pack_length:
+            data = self.buff[self.offset :]
+            self.offset = end
+            return data
+        if end > len(self.buff):
+            # Need to read more from Swift
+            self._read(more=True)
+            return self.read(length)
+        data = self.buff[self.offset : end]
+        self.offset = end
+        return data
+
+    def seek(self, offset):
+        """Seek to a specified offset.
+
+        Args:
+          offset: the offset to seek to
+        """
+        self.base_offset = offset
+        self._read()
+        self.offset = 0
+
+    def read_checksum(self):
+        """Read the checksum from the pack.
+
+        Returns: the checksum bytestring
+        """
+        return self.scon.get_object(self.filename, range="-20")
+
+
+class SwiftPackData(PackData):
+    """The data contained in a packfile.
+
+    We use the SwiftPackReader to read bytes from packs stored in Swift
+    using the Range header feature of Swift.
+    """
+
+    def __init__(self, scon, filename):
+        """Initialize a SwiftPackData.
+
+        Args:
+          scon: a `SwiftConnector` instance
+          filename: the pack filename
+        """
+        self.scon = scon
+        self._filename = filename
+        self._header_size = 12
+        headers = self.scon.get_object_stat(self._filename)
+        self.pack_length = int(headers["content-length"])
+        pack_reader = SwiftPackReader(self.scon, self._filename, self.pack_length)
+        (version, self._num_objects) = read_pack_header(pack_reader.read)
+        self._offset_cache = LRUSizeCache(
+            1024 * 1024 * self.scon.cache_length,
+            compute_size=_compute_object_size,
+        )
+        self.pack = None
+
+    def get_object_at(self, offset):
+        if offset in self._offset_cache:
+            return self._offset_cache[offset]
+        assert offset >= self._header_size
+        pack_reader = SwiftPackReader(self.scon, self._filename, self.pack_length)
+        pack_reader.seek(offset)
+        unpacked, _ = unpack_object(pack_reader.read)
+        return (unpacked.pack_type_num, unpacked._obj())
+
+    def get_stored_checksum(self):
+        pack_reader = SwiftPackReader(self.scon, self._filename, self.pack_length)
+        return pack_reader.read_checksum()
+
+    def close(self):
+        pass
+
+
+class SwiftPack(Pack):
+    """A Git pack object.
+
+    Same implementation as pack.Pack, except that _idx_load and
+    _data_load are bound to the Swift versions of load_pack_index and
+    PackData.
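+
+    A construction sketch, mirroring `SwiftObjectStore._update_pack_cache`
+    below (``scon`` and the pack basename are assumed to exist):
+
+        pack = SwiftPack("objects/pack/pack-abc", scon=scon)
+        idx = pack.index        # loaded via swift_load_pack_index
+        info = pack.pack_info   # loaded from the companion ".info" object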
+    """
+
+    def __init__(self, *args, **kwargs):
+        self.scon = kwargs["scon"]
+        del kwargs["scon"]
+        super(SwiftPack, self).__init__(*args, **kwargs)
+        self._pack_info_path = self._basename + ".info"
+        self._pack_info = None
+        self._pack_info_load = lambda: load_pack_info(self._pack_info_path, self.scon)
+        self._idx_load = lambda: swift_load_pack_index(self.scon, self._idx_path)
+        self._data_load = lambda: SwiftPackData(self.scon, self._data_path)
+
+    @property
+    def pack_info(self):
+        """The pack info object being used."""
+        if self._pack_info is None:
+            self._pack_info = self._pack_info_load()
+        return self._pack_info
+
+
+class SwiftObjectStore(PackBasedObjectStore):
+    """A Swift Object Store.
+
+    Allows managing a bare Git repository on top of OpenStack Swift.
+    This object store only supports pack files, not loose objects.
+    """
+
+    def __init__(self, scon):
+        """Open a Swift object store.
+
+        Args:
+          scon: A `SwiftConnector` instance
+        """
+        super(SwiftObjectStore, self).__init__()
+        self.scon = scon
+        self.root = self.scon.root
+        self.pack_dir = posixpath.join(OBJECTDIR, PACKDIR)
+        self._alternates = None
+
+    def _update_pack_cache(self):
+        objects = self.scon.get_container_objects()
+        pack_files = [
+            o["name"].replace(".pack", "")
+            for o in objects
+            if o["name"].endswith(".pack")
+        ]
+        ret = []
+        for basename in pack_files:
+            pack = SwiftPack(basename, scon=self.scon)
+            self._pack_cache[basename] = pack
+            ret.append(pack)
+        return ret
+
+    def _iter_loose_objects(self):
+        """Loose objects are not supported by this repository."""
+        return []
+
+    def iter_shas(self, finder):
+        """Iterate over the SHAs yielded by the given finder.
+
+        Returns: an `ObjectStoreIterator`, or a
+            `GreenThreadsObjectStoreIterator` instance if gevent is enabled
+        """
+        shas = iter(finder.next, None)
+        return PackInfoObjectStoreIterator(self, shas, finder, self.scon.concurrency)
+
+    def find_missing_objects(self, *args, **kwargs):
+        kwargs["concurrency"] = self.scon.concurrency
+        return PackInfoMissingObjectFinder(self, *args, **kwargs)
+
+    def pack_info_get(self, sha):
+        for pack in self.packs:
+            if sha in pack:
+                return pack.pack_info[sha]
+
+    def _collect_ancestors(self, heads, common=set()):
+        def _find_parents(commit):
+            for pack in self.packs:
+                if commit in pack:
+                    try:
+                        parents = pack.pack_info[commit][1]
+                    except KeyError:
+                        # The commit has no parents recorded.
+                        return []
+                    return parents
+
+        bases = set()
+        commits = set()
+        queue = []
+        queue.extend(heads)
+        while queue:
+            e = queue.pop(0)
+            if e in common:
+                bases.add(e)
+            elif e not in commits:
+                commits.add(e)
+                parents = _find_parents(e)
+                queue.extend(parents)
+        return (commits, bases)
+
+    def add_pack(self):
+        """Add a new pack to this object store.
+
+        Returns: Fileobject to write pack data to, plus a commit function
+            to call when the pack is finished and an abort function to
+            discard it.
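+
+        A usage sketch (``store`` stands in for a `SwiftObjectStore`
+        and ``pack_bytes`` for fully written pack data):
+
+            f, commit, abort = store.add_pack()
+            try:
+                f.write(pack_bytes)
+            except BaseException:
+                abort()
+                raise
+            else:
+                commit()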
+ """ + f = BytesIO() + + def commit(): + f.seek(0) + pack = PackData(file=f, filename="") + entries = pack.sorted_entries() + if entries: + basename = posixpath.join( + self.pack_dir, + "pack-%s" % iter_sha1(entry[0] for entry in entries), + ) + index = BytesIO() + write_pack_index_v2(index, entries, pack.get_stored_checksum()) + self.scon.put_object(basename + ".pack", f) + f.close() + self.scon.put_object(basename + ".idx", index) + index.close() + final_pack = SwiftPack(basename, scon=self.scon) + final_pack.check_length_and_checksum() + self._add_cached_pack(basename, final_pack) + return final_pack + else: + return None + + def abort(): + pass + + return f, commit, abort + + def add_object(self, obj): + self.add_objects( + [ + (obj, None), + ] + ) + + def _pack_cache_stale(self): + return False + + def _get_loose_object(self, sha): + return None + + def add_thin_pack(self, read_all, read_some): + """Read a thin pack + + Read it from a stream and complete it in a temporary file. + Then the pack and the corresponding index file are uploaded to Swift. + """ + fd, path = tempfile.mkstemp(prefix="tmp_pack_") + f = os.fdopen(fd, "w+b") + try: + indexer = PackIndexer(f, resolve_ext_ref=self.get_raw) + copier = PackStreamCopier(read_all, read_some, f, delta_iter=indexer) + copier.verify() + return self._complete_thin_pack(f, path, copier, indexer) + finally: + f.close() + os.unlink(path) + + def _complete_thin_pack(self, f, path, copier, indexer): + entries = list(indexer) + + # Update the header with the new number of objects. + f.seek(0) + write_pack_header(f, len(entries) + len(indexer.ext_refs())) + + # Must flush before reading (http://bugs.python.org/issue3207) + f.flush() + + # Rescan the rest of the pack, computing the SHA with the new header. + new_sha = compute_file_sha(f, end_ofs=-20) + + # Must reposition before writing (http://bugs.python.org/issue3207) + f.seek(0, os.SEEK_CUR) + + # Complete the pack. + for ext_sha in indexer.ext_refs(): + assert len(ext_sha) == 20 + type_num, data = self.get_raw(ext_sha) + offset = f.tell() + crc32 = write_pack_object(f, type_num, data, sha=new_sha) + entries.append((ext_sha, offset, crc32)) + pack_sha = new_sha.digest() + f.write(pack_sha) + f.flush() + + # Move the pack in. + entries.sort() + pack_base_name = posixpath.join( + self.pack_dir, + "pack-" + os.fsdecode(iter_sha1(e[0] for e in entries)), + ) + self.scon.put_object(pack_base_name + ".pack", f) + + # Write the index. + filename = pack_base_name + ".idx" + index_file = BytesIO() + write_pack_index_v2(index_file, entries, pack_sha) + self.scon.put_object(filename, index_file) + + # Write pack info. + f.seek(0) + pack_data = PackData(filename="", file=f) + index_file.seek(0) + pack_index = load_pack_index_file("", index_file) + serialized_pack_info = pack_info_create(pack_data, pack_index) + f.close() + index_file.close() + pack_info_file = BytesIO(serialized_pack_info) + filename = pack_base_name + ".info" + self.scon.put_object(filename, pack_info_file) + pack_info_file.close() + + # Add the pack to the store and return it. 
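+        # Checking length and checksum below re-reads the uploaded pack
+        # from Swift through SwiftPackReader, so it also validates the
+        # upload round-trip.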
+        final_pack = SwiftPack(pack_base_name, scon=self.scon)
+        final_pack.check_length_and_checksum()
+        self._add_cached_pack(pack_base_name, final_pack)
+        return final_pack
+
+
+class SwiftInfoRefsContainer(InfoRefsContainer):
+    """Manage references in an info/refs object."""
+
+    def __init__(self, scon, store):
+        self.scon = scon
+        self.filename = "info/refs"
+        self.store = store
+        f = self.scon.get_object(self.filename)
+        if not f:
+            f = BytesIO(b"")
+        super(SwiftInfoRefsContainer, self).__init__(f)
+
+    def _load_check_ref(self, name, old_ref):
+        self._check_refname(name)
+        f = self.scon.get_object(self.filename)
+        if not f:
+            return {}
+        refs = read_info_refs(f)
+        if old_ref is not None:
+            if refs[name] != old_ref:
+                return False
+        return refs
+
+    def _write_refs(self, refs):
+        f = BytesIO()
+        f.writelines(write_info_refs(refs, self.store))
+        self.scon.put_object(self.filename, f)
+
+    def set_if_equals(self, name, old_ref, new_ref):
+        """Set a refname to new_ref only if it currently equals old_ref."""
+        if name == "HEAD":
+            return True
+        refs = self._load_check_ref(name, old_ref)
+        if not isinstance(refs, dict):
+            return False
+        refs[name] = new_ref
+        self._write_refs(refs)
+        self._refs[name] = new_ref
+        return True
+
+    def remove_if_equals(self, name, old_ref):
+        """Remove a refname only if it currently equals old_ref."""
+        if name == "HEAD":
+            return True
+        refs = self._load_check_ref(name, old_ref)
+        if not isinstance(refs, dict):
+            return False
+        del refs[name]
+        self._write_refs(refs)
+        del self._refs[name]
+        return True
+
+    def allkeys(self):
+        try:
+            self._refs["HEAD"] = self._refs["refs/heads/master"]
+        except KeyError:
+            pass
+        return self._refs.keys()
+
+
+class SwiftRepo(BaseRepo):
+    def __init__(self, root, conf):
+        """Init a bare Git repository on top of a Swift container.
+
+        References are managed in info/refs objects by
+        `SwiftInfoRefsContainer`. The root attribute is the Swift
+        container that contains the bare Git repository.
+
+        Args:
+          root: The container which contains the bare repo
+          conf: A ConfigParser object
+        """
+        self.root = root.lstrip("/")
+        self.conf = conf
+        self.scon = SwiftConnector(self.root, self.conf)
+        objects = self.scon.get_container_objects()
+        if not objects:
+            raise Exception("There is no Git repository here: %s" % self.root)
+        objects = [o["name"].split("/")[0] for o in objects]
+        if OBJECTDIR not in objects:
+            raise Exception("This repository (%s) is not bare." % self.root)
+        self.bare = True
+        self._controldir = self.root
+        object_store = SwiftObjectStore(self.scon)
+        refs = SwiftInfoRefsContainer(self.scon, object_store)
+        BaseRepo.__init__(self, object_store, refs)
+
+    def _determine_file_mode(self):
+        """Probe the file-system to determine whether permissions can be trusted.
+
+        Returns: True if permissions can be trusted, False otherwise.
+        """
+        return False
+
+    def _put_named_file(self, filename, contents):
+        """Put an object in a Swift container.
+
+        Args:
+          filename: the path to the object to put on Swift
+          contents: the content as a bytestring
+        """
+        with BytesIO() as f:
+            f.write(contents)
+            self.scon.put_object(filename, f)
+
+    @classmethod
+    def init_bare(cls, scon, conf):
+        """Create a new bare repository.
+ + Args: + scon: a `SwiftConnector` instance + conf: a ConfigParser object + Returns: + a `SwiftRepo` instance + """ + scon.create_root() + for obj in [ + posixpath.join(OBJECTDIR, PACKDIR), + posixpath.join(INFODIR, "refs"), + ]: + scon.put_object(obj, BytesIO(b"")) + ret = cls(scon.root, conf) + ret._init_files(True) + return ret + + +class SwiftSystemBackend(Backend): + def __init__(self, logger, conf): + self.conf = conf + self.logger = logger + + def open_repository(self, path): + self.logger.info("opening repository at %s", path) + return SwiftRepo(path, self.conf) + + +def cmd_daemon(args): + """Entry point for starting a TCP git server.""" + import optparse + + parser = optparse.OptionParser() + parser.add_option( + "-l", + "--listen_address", + dest="listen_address", + default="127.0.0.1", + help="Binding IP address.", + ) + parser.add_option( + "-p", + "--port", + dest="port", + type=int, + default=TCP_GIT_PORT, + help="Binding TCP port.", + ) + parser.add_option( + "-c", + "--swift_config", + dest="swift_config", + default="", + help="Path to the configuration file for Swift backend.", + ) + options, args = parser.parse_args(args) + + try: + import gevent + import geventhttpclient # noqa: F401 + except ImportError: + print( + "gevent and geventhttpclient libraries are mandatory " + " for use the Swift backend." + ) + sys.exit(1) + import gevent.monkey + + gevent.monkey.patch_socket() + from dulwich import log_utils + + logger = log_utils.getLogger(__name__) + conf = load_conf(options.swift_config) + backend = SwiftSystemBackend(logger, conf) + + log_utils.default_logging_config() + server = TCPGitServer(backend, options.listen_address, port=options.port) + server.serve_forever() + + +def cmd_init(args): + import optparse + + parser = optparse.OptionParser() + parser.add_option( + "-c", + "--swift_config", + dest="swift_config", + default="", + help="Path to the configuration file for Swift backend.", + ) + options, args = parser.parse_args(args) + + conf = load_conf(options.swift_config) + if args == []: + parser.error("missing repository name") + repo = args[0] + scon = SwiftConnector(repo, conf) + SwiftRepo.init_bare(scon, conf) + + +def main(argv=sys.argv): + commands = { + "init": cmd_init, + "daemon": cmd_daemon, + } + + if len(sys.argv) < 2: + print("Usage: %s <%s> [OPTIONS...]" % (sys.argv[0], "|".join(commands.keys()))) + sys.exit(1) + + cmd = sys.argv[1] + if cmd not in commands: + print("No such subcommand: %s" % cmd) + sys.exit(1) + commands[cmd](sys.argv[2:]) + + +if __name__ == "__main__": + main() diff --git a/env/lib/python3.8/site-packages/dulwich/contrib/test_paramiko_vendor.py b/env/lib/python3.8/site-packages/dulwich/contrib/test_paramiko_vendor.py new file mode 100644 index 00000000..712b4f6c --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/contrib/test_paramiko_vendor.py @@ -0,0 +1,201 @@ +# test_paramiko_vendor.py +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for paramiko_vendor.""" + +import socket +import threading + +from dulwich.tests import TestCase + +from io import StringIO +from unittest import skipIf + +try: + import paramiko +except ImportError: + has_paramiko = False +else: + has_paramiko = True + from dulwich.contrib.paramiko_vendor import ParamikoSSHVendor + + class Server(paramiko.ServerInterface): + """http://docs.paramiko.org/en/2.4/api/server.html""" + def __init__(self, commands, *args, **kwargs): + super(Server, self).__init__(*args, **kwargs) + self.commands = commands + + def check_channel_exec_request(self, channel, command): + self.commands.append(command) + return True + + def check_auth_password(self, username, password): + if username == USER and password == PASSWORD: + return paramiko.AUTH_SUCCESSFUL + return paramiko.AUTH_FAILED + + def check_auth_publickey(self, username, key): + pubkey = paramiko.RSAKey.from_private_key(StringIO(CLIENT_KEY)) + if username == USER and key == pubkey: + return paramiko.AUTH_SUCCESSFUL + return paramiko.AUTH_FAILED + + def check_channel_request(self, kind, chanid): + if kind == "session": + return paramiko.OPEN_SUCCEEDED + return paramiko.OPEN_FAILED_ADMINISTRATIVELY_PROHIBITED + + def get_allowed_auths(self, username): + return "password,publickey" + + +USER = 'testuser' +PASSWORD = 'test' +SERVER_KEY = """\ +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEAy/L1sSYAzxsMprtNXW4u/1jGXXkQmQ2xtmKVlR+RlIL3a1BH +bzTpPlZyjltAAwzIP8XRh0iJFKz5y3zSQChhX47ZGN0NvQsVct8R+YwsUonwfAJ+ +JN0KBKKvC8fPHlzqBr3gX+ZxqsFH934tQ6wdQPH5eQWtdM8L826lMsH1737uyTGk ++mCSDjL3c6EzY83g7qhkJU2R4qbi6ne01FaWADzG8sOzXnHT+xpxtk8TTT8yCVUY +MmBNsSoA/ka3iWz70ghB+6Xb0WpFJZXWq1oYovviPAfZGZSrxBZMxsWMye70SdLl +TqsBEt0+miIcm9s0fvjWvQuhaHX6mZs5VO4r5QIDAQABAoIBAGYqeYWaYgFdrYLA +hUrubUCg+g3NHdFuGL4iuIgRXl4lFUh+2KoOuWDu8Uf60iA1AQNhV0sLvQ/Mbv3O +s4xMLisuZfaclctDiCUZNenqnDFkxEF7BjH1QJV94W5nU4wEQ3/JEmM4D2zYkfKb +FJW33JeyH6TOgUvohDYYEU1R+J9V8qA243p+ui1uVtNI6Pb0TXJnG5y9Ny4vkSWH +Fi0QoMPR1r9xJ4SEearGzA/crb4SmmDTKhGSoMsT3d5ATieLmwcS66xWz8w4oFGJ +yzDq24s4Fp9ccNjMf/xR8XRiekJv835gjEqwF9IXyvgOaq6XJ1iCqGPFDKa25nui +JnEstOkCgYEA/ZXk7aIanvdeJlTqpX578sJfCnrXLydzE8emk1b7+5mrzGxQ4/pM +PBQs2f8glT3t0O0mRX9NoRqnwrid88/b+cY4NCOICFZeasX336/gYQxyVeRLJS6Z +hnGEQqry8qS7PdKAyeHMNmZFrUh4EiHiObymEfQS+mkRUObn0cGBTw8CgYEAzeQU +D2baec1DawjppKaRynAvWjp+9ry1lZx9unryKVRwjRjkEpw+b3/+hdaF1IvsVSce +cNj+6W2guZ2tyHuPhZ64/4SJVyE2hKDSKD4xTb2nVjsMeN0bLD2UWXC9mwbx8nWa +2tmtUZ7a/okQb2cSdosJinRewLNqXIsBXamT1csCgYEA0cXb2RCOQQ6U3dTFPx4A +3vMXuA2iUKmrsqMoEx6T2LBow/Sefdkik1iFOdipVYwjXP+w9zC2QR1Rxez/DR/X +8ymceNUjxPHdrSoTQQG29dFcC92MpDeGXQcuyA+uZjcLhbrLOzYEvsOfxBb87NMG +14hNQPDNekTMREafYo9WrtUCgYAREK54+FVzcwf7fymedA/xb4r9N4v+d3W1iNsC +8d3Qfyc1CrMct8aVB07ZWQaOr2pPRIbJY7L9NhD0UZVt4I/sy1MaGqonhqE2LP4+ +R6legDG2e/50ph7yc8gwAaA1kUXMiuLi8Nfkw/3yyvmJwklNegi4aRzRbA2Mzhi2 +4q9WMQKBgQCb0JNyxHG4pvLWCF/j0Sm1FfvrpnqSv5678n1j4GX7Ka/TubOK1Y4K +U+Oib7dKa/zQMWehVFNTayrsq6bKVZ6q7zG+IHiRLw4wjeAxREFH6WUjDrn9vl2l +D48DKbBuBwuVOJWyq3qbfgJXojscgNQklrsPdXVhDwOF0dYxP89HnA== +-----END RSA PRIVATE KEY-----""" +CLIENT_KEY = """\ +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEAxvREKSElPOm/0z/nPO+j5rk2tjdgGcGc7We1QZ6TRXYLu7nN +GeEFIL4p8N1i6dmB+Eydt7xqCU79MWD6Yy4prFe1+/K1wCDUxIbFMxqQcX5zjJzd +i8j8PbcaUlVhP/OkjtkSxrXaGDO1BzfdV4iEBtTV/2l3zmLKJlt3jnOHLczP24CB +DTQKp3rKshbRefzot9Y+wnaK692RsYgsyo9YEP0GyWKG9topCHk13r46J6vGLeuj 
+ryUKqmbLJkzbJbIcEqwTDo5iHaCVqaMr5Hrb8BdMucSseqZQJsXSd+9tdRcIblUQ +38kZjmFMm4SFbruJcpZCNM2wNSZPIRX+3eiwNwIDAQABAoIBAHSacOBSJsr+jIi5 +KUOTh9IPtzswVUiDKwARCjB9Sf8p4lKR4N1L/n9kNJyQhApeikgGT2GCMftmqgoo +tlculQoHFgemBlOmak0MV8NNzF5YKEy/GzF0CDH7gJfEpoyetVFrdA+2QS5yD6U9 +XqKQxiBi2VEqdScmyyeT8AwzNYTnPeH/DOEcnbdRjqiy/CD79F49CQ1lX1Fuqm0K +I7BivBH1xo/rVnUP4F+IzocDqoga+Pjdj0LTXIgJlHQDSbhsQqWujWQDDuKb+MAw +sNK4Zf8ErV3j1PyA7f/M5LLq6zgstkW4qikDHo4SpZX8kFOO8tjqb7kujj7XqeaB +CxqrOTECgYEA73uWkrohcmDJ4KqbuL3tbExSCOUiaIV+sT1eGPNi7GCmXD4eW5Z4 +75v2IHymW83lORSu/DrQ6sKr1nkuRpqr2iBzRmQpl/H+wahIhBXlnJ25uUjDsuPO +1Pq2LcmyD+jTxVnmbSe/q7O09gZQw3I6H4+BMHmpbf8tC97lqimzpJ0CgYEA1K0W +ZL70Xtn9quyHvbtae/BW07NZnxvUg4UaVIAL9Zu34JyplJzyzbIjrmlDbv6aRogH +/KtuG9tfbf55K/jjqNORiuRtzt1hUN1ye4dyW7tHx2/7lXdlqtyK40rQl8P0kqf8 +zaS6BqjnobgSdSpg32rWoL/pcBHPdJCJEgQ8zeMCgYEA0/PK8TOhNIzrP1dgGSKn +hkkJ9etuB5nW5mEM7gJDFDf6JPupfJ/xiwe6z0fjKK9S57EhqgUYMB55XYnE5iIw +ZQ6BV9SAZ4V7VsRs4dJLdNC3tn/rDGHJBgCaym2PlbsX6rvFT+h1IC8dwv0V79Ui +Ehq9WTzkMoE8yhvNokvkPZUCgYEAgBAFxv5xGdh79ftdtXLmhnDvZ6S8l6Fjcxqo +Ay/jg66Tp43OU226iv/0mmZKM8Dd1xC8dnon4GBVc19jSYYiWBulrRPlx0Xo/o+K +CzZBN1lrXH1i6dqufpc0jq8TMf/N+q1q/c1uMupsKCY1/xVYpc+ok71b7J7c49zQ +nOeuUW8CgYA9Infooy65FTgbzca0c9kbCUBmcAPQ2ItH3JcPKWPQTDuV62HcT00o +fZdIV47Nez1W5Clk191RMy8TXuqI54kocciUWpThc6j44hz49oUueb8U4bLcEHzA +WxtWBWHwxfSmqgTXilEA3ALJp0kNolLnEttnhENwJpZHlqtes0ZA4w== +-----END RSA PRIVATE KEY-----""" + + +@skipIf(not has_paramiko, "paramiko is not installed") +class ParamikoSSHVendorTests(TestCase): + + def setUp(self): + import paramiko.transport + + # re-enable server functionality for tests + if hasattr(paramiko.transport, "SERVER_DISABLED_BY_GENTOO"): + paramiko.transport.SERVER_DISABLED_BY_GENTOO = False + + self.commands = [] + socket.setdefaulttimeout(10) + self.addCleanup(socket.setdefaulttimeout, None) + self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + self.socket.bind(('127.0.0.1', 0)) + self.socket.listen(5) + self.addCleanup(self.socket.close) + self.port = self.socket.getsockname()[1] + self.thread = threading.Thread(target=self._run) + self.thread.start() + + def tearDown(self): + self.thread.join() + + def _run(self): + try: + conn, addr = self.socket.accept() + except socket.error: + return False + self.transport = paramiko.Transport(conn) + self.addCleanup(self.transport.close) + host_key = paramiko.RSAKey.from_private_key(StringIO(SERVER_KEY)) + self.transport.add_server_key(host_key) + server = Server(self.commands) + self.transport.start_server(server=server) + + def test_run_command_password(self): + vendor = ParamikoSSHVendor(allow_agent=False, look_for_keys=False,) + vendor.run_command( + '127.0.0.1', 'test_run_command_password', + username=USER, port=self.port, password=PASSWORD) + + self.assertIn(b'test_run_command_password', self.commands) + + def test_run_command_with_privkey(self): + key = paramiko.RSAKey.from_private_key(StringIO(CLIENT_KEY)) + + vendor = ParamikoSSHVendor(allow_agent=False, look_for_keys=False,) + vendor.run_command( + '127.0.0.1', 'test_run_command_with_privkey', + username=USER, port=self.port, pkey=key) + + self.assertIn(b'test_run_command_with_privkey', self.commands) + + def test_run_command_data_transfer(self): + vendor = ParamikoSSHVendor(allow_agent=False, look_for_keys=False,) + con = vendor.run_command( + '127.0.0.1', 'test_run_command_data_transfer', + username=USER, port=self.port, password=PASSWORD) + + self.assertIn(b'test_run_command_data_transfer', self.commands) + + channel = self.transport.accept(5) + 
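+        # Drive the server side of the accepted channel so the client
+        # connection opened above has stdout/stderr data to read back.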
channel.send(b'stdout\n') + channel.send_stderr(b'stderr\n') + channel.close() + + # Fixme: it's return false + # self.assertTrue(con.can_read()) + + self.assertEqual(b'stdout\n', con.read(4096)) + + # Fixme: it's return empty string + # self.assertEqual(b'stderr\n', con.read_stderr(4096)) diff --git a/env/lib/python3.8/site-packages/dulwich/contrib/test_release_robot.py b/env/lib/python3.8/site-packages/dulwich/contrib/test_release_robot.py new file mode 100644 index 00000000..5cdb91ff --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/contrib/test_release_robot.py @@ -0,0 +1,134 @@ +# release_robot.py +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for release_robot.""" + +import datetime +import os +import re +import shutil +import tempfile +import time +import unittest + +from dulwich.contrib import release_robot +from dulwich.repo import Repo +from dulwich.tests.utils import make_commit, make_tag + +BASEDIR = os.path.abspath(os.path.dirname(__file__)) # this directory + + +def gmtime_to_datetime(gmt): + return datetime.datetime(*time.gmtime(gmt)[:6]) + + +class TagPatternTests(unittest.TestCase): + """test tag patterns""" + + def test_tag_pattern(self): + """test tag patterns""" + test_cases = { + "0.3": "0.3", + "v0.3": "0.3", + "release0.3": "0.3", + "Release-0.3": "0.3", + "v0.3rc1": "0.3rc1", + "v0.3-rc1": "0.3-rc1", + "v0.3-rc.1": "0.3-rc.1", + "version 0.3": "0.3", + "version_0.3_rc_1": "0.3_rc_1", + "v1": "1", + "0.3rc1": "0.3rc1", + } + for testcase, version in test_cases.items(): + matches = re.match(release_robot.PATTERN, testcase) + self.assertEqual(matches.group(1), version) + + +class GetRecentTagsTest(unittest.TestCase): + """test get recent tags""" + + # Git repo for dulwich project + test_repo = os.path.join(BASEDIR, "dulwich_test_repo.zip") + committer = b"Mark Mikofski " + test_tags = [b"v0.1a", b"v0.1"] + tag_test_data = { + test_tags[0]: [1484788003, b"3" * 40, None], + test_tags[1]: [1484788314, b"1" * 40, (1484788401, b"2" * 40)], + } + + @classmethod + def setUpClass(cls): + cls.projdir = tempfile.mkdtemp() # temporary project directory + cls.repo = Repo.init(cls.projdir) # test repo + obj_store = cls.repo.object_store # test repo object store + # commit 1 ('2017-01-19T01:06:43') + cls.c1 = make_commit( + id=cls.tag_test_data[cls.test_tags[0]][1], + commit_time=cls.tag_test_data[cls.test_tags[0]][0], + message=b"unannotated tag", + author=cls.committer, + ) + obj_store.add_object(cls.c1) + # tag 1: unannotated + cls.t1 = cls.test_tags[0] + cls.repo[b"refs/tags/" + cls.t1] = cls.c1.id # add unannotated tag + # commit 2 ('2017-01-19T01:11:54') + cls.c2 = make_commit( + id=cls.tag_test_data[cls.test_tags[1]][1], + commit_time=cls.tag_test_data[cls.test_tags[1]][0], + message=b"annotated tag", + 
parents=[cls.c1.id], + author=cls.committer, + ) + obj_store.add_object(cls.c2) + # tag 2: annotated ('2017-01-19T01:13:21') + cls.t2 = make_tag( + cls.c2, + id=cls.tag_test_data[cls.test_tags[1]][2][1], + name=cls.test_tags[1], + tag_time=cls.tag_test_data[cls.test_tags[1]][2][0], + ) + obj_store.add_object(cls.t2) + cls.repo[b"refs/heads/master"] = cls.c2.id + cls.repo[b"refs/tags/" + cls.t2.name] = cls.t2.id # add annotated tag + + @classmethod + def tearDownClass(cls): + cls.repo.close() + shutil.rmtree(cls.projdir) + + def test_get_recent_tags(self): + """test get recent tags""" + tags = release_robot.get_recent_tags(self.projdir) # get test tags + for tag, metadata in tags: + tag = tag.encode("utf-8") + test_data = self.tag_test_data[tag] # test data tag + # test commit date, id and author name + self.assertEqual(metadata[0], gmtime_to_datetime(test_data[0])) + self.assertEqual(metadata[1].encode("utf-8"), test_data[1]) + self.assertEqual(metadata[2].encode("utf-8"), self.committer) + # skip unannotated tags + tag_obj = test_data[2] + if not tag_obj: + continue + # tag date, id and name + self.assertEqual(metadata[3][0], gmtime_to_datetime(tag_obj[0])) + self.assertEqual(metadata[3][1].encode("utf-8"), tag_obj[1]) + self.assertEqual(metadata[3][2].encode("utf-8"), tag) diff --git a/env/lib/python3.8/site-packages/dulwich/contrib/test_swift.py b/env/lib/python3.8/site-packages/dulwich/contrib/test_swift.py new file mode 100644 index 00000000..a6a5f1d6 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/contrib/test_swift.py @@ -0,0 +1,500 @@ +# test_swift.py -- Unittests for the Swift backend. +# Copyright (C) 2013 eNovance SAS +# +# Author: Fabien Boucher +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +"""Tests for dulwich.contrib.swift.""" + +import posixpath + +from time import time +from io import BytesIO, StringIO + +from unittest import skipIf + +from dulwich.tests import ( + TestCase, +) +from dulwich.tests.test_object_store import ( + ObjectStoreTests, +) +from dulwich.objects import ( + Blob, + Commit, + Tree, + Tag, + parse_timezone, +) + +import json + +missing_libs = [] + +try: + import gevent # noqa:F401 +except ModuleNotFoundError: + missing_libs.append("gevent") + +try: + import geventhttpclient # noqa:F401 +except ModuleNotFoundError: + missing_libs.append("geventhttpclient") + +try: + from unittest.mock import patch +except ModuleNotFoundError: + missing_libs.append("mock") + +skipmsg = "Required libraries are not installed (%r)" % missing_libs + + +if not missing_libs: + from dulwich.contrib import swift + +config_file = """[swift] +auth_url = http://127.0.0.1:8080/auth/%(version_str)s +auth_ver = %(version_int)s +username = test;tester +password = testing +region_name = %(region_name)s +endpoint_type = %(endpoint_type)s +concurrency = %(concurrency)s +chunk_length = %(chunk_length)s +cache_length = %(cache_length)s +http_pool_length = %(http_pool_length)s +http_timeout = %(http_timeout)s +""" + +def_config_file = { + "version_str": "v1.0", + "version_int": 1, + "concurrency": 1, + "chunk_length": 12228, + "cache_length": 1, + "region_name": "test", + "endpoint_type": "internalURL", + "http_pool_length": 1, + "http_timeout": 1, +} + + +def create_swift_connector(store={}): + return lambda root, conf: FakeSwiftConnector(root, conf=conf, store=store) + + +class Response(object): + def __init__(self, headers={}, status=200, content=None): + self.headers = headers + self.status_code = status + self.content = content + + def __getitem__(self, key): + return self.headers[key] + + def items(self): + return self.headers.items() + + def read(self): + return self.content + + +def fake_auth_request_v1(*args, **kwargs): + ret = Response( + { + "X-Storage-Url": "http://127.0.0.1:8080/v1.0/AUTH_fakeuser", + "X-Auth-Token": "12" * 10, + }, + 200, + ) + return ret + + +def fake_auth_request_v1_error(*args, **kwargs): + ret = Response({}, 401) + return ret + + +def fake_auth_request_v2(*args, **kwargs): + s_url = "http://127.0.0.1:8080/v1.0/AUTH_fakeuser" + resp = { + "access": { + "token": {"id": "12" * 10}, + "serviceCatalog": [ + { + "type": "object-store", + "endpoints": [ + { + "region": "test", + "internalURL": s_url, + }, + ], + }, + ], + } + } + ret = Response(status=200, content=json.dumps(resp)) + return ret + + +def create_commit(data, marker=b"Default", blob=None): + if not blob: + blob = Blob.from_string(b"The blob content " + marker) + tree = Tree() + tree.add(b"thefile_" + marker, 0o100644, blob.id) + cmt = Commit() + if data: + assert isinstance(data[-1], Commit) + cmt.parents = [data[-1].id] + cmt.tree = tree.id + author = b"John Doe " + marker + b" " + cmt.author = cmt.committer = author + tz = parse_timezone(b"-0200")[0] + cmt.commit_time = cmt.author_time = int(time()) + cmt.commit_timezone = cmt.author_timezone = tz + cmt.encoding = b"UTF-8" + cmt.message = b"The commit message " + marker + tag = Tag() + tag.tagger = b"john@doe.net" + tag.message = b"Annotated tag" + tag.tag_timezone = parse_timezone(b"-0200")[0] + tag.tag_time = cmt.author_time + tag.object = (Commit, cmt.id) + tag.name = b"v_" + marker + b"_0.1" + return blob, tree, tag, cmt + + +def create_commits(length=1, marker=b"Default"): + data = [] + for i in range(0, length): + _marker = ("%s_%s" % 
(marker, i)).encode() + blob, tree, tag, cmt = create_commit(data, _marker) + data.extend([blob, tree, tag, cmt]) + return data + + +@skipIf(missing_libs, skipmsg) +class FakeSwiftConnector(object): + def __init__(self, root, conf, store=None): + if store: + self.store = store + else: + self.store = {} + self.conf = conf + self.root = root + self.concurrency = 1 + self.chunk_length = 12228 + self.cache_length = 1 + + def put_object(self, name, content): + name = posixpath.join(self.root, name) + if hasattr(content, "seek"): + content.seek(0) + content = content.read() + self.store[name] = content + + def get_object(self, name, range=None): + name = posixpath.join(self.root, name) + if not range: + try: + return BytesIO(self.store[name]) + except KeyError: + return None + else: + l, r = range.split("-") + try: + if not l: + r = -int(r) + return self.store[name][r:] + else: + return self.store[name][int(l) : int(r)] + except KeyError: + return None + + def get_container_objects(self): + return [{"name": k.replace(self.root + "/", "")} for k in self.store] + + def create_root(self): + if self.root in self.store.keys(): + pass + else: + self.store[self.root] = "" + + def get_object_stat(self, name): + name = posixpath.join(self.root, name) + if name not in self.store: + return None + return {"content-length": len(self.store[name])} + + +@skipIf(missing_libs, skipmsg) +class TestSwiftRepo(TestCase): + def setUp(self): + super(TestSwiftRepo, self).setUp() + self.conf = swift.load_conf(file=StringIO(config_file % def_config_file)) + + def test_init(self): + store = {"fakerepo/objects/pack": ""} + with patch( + "dulwich.contrib.swift.SwiftConnector", + new_callable=create_swift_connector, + store=store, + ): + swift.SwiftRepo("fakerepo", conf=self.conf) + + def test_init_no_data(self): + with patch( + "dulwich.contrib.swift.SwiftConnector", + new_callable=create_swift_connector, + ): + self.assertRaises(Exception, swift.SwiftRepo, "fakerepo", self.conf) + + def test_init_bad_data(self): + store = {"fakerepo/.git/objects/pack": ""} + with patch( + "dulwich.contrib.swift.SwiftConnector", + new_callable=create_swift_connector, + store=store, + ): + self.assertRaises(Exception, swift.SwiftRepo, "fakerepo", self.conf) + + def test_put_named_file(self): + store = {"fakerepo/objects/pack": ""} + with patch( + "dulwich.contrib.swift.SwiftConnector", + new_callable=create_swift_connector, + store=store, + ): + repo = swift.SwiftRepo("fakerepo", conf=self.conf) + desc = b"Fake repo" + repo._put_named_file("description", desc) + self.assertEqual(repo.scon.store["fakerepo/description"], desc) + + def test_init_bare(self): + fsc = FakeSwiftConnector("fakeroot", conf=self.conf) + with patch( + "dulwich.contrib.swift.SwiftConnector", + new_callable=create_swift_connector, + store=fsc.store, + ): + swift.SwiftRepo.init_bare(fsc, conf=self.conf) + self.assertIn("fakeroot/objects/pack", fsc.store) + self.assertIn("fakeroot/info/refs", fsc.store) + self.assertIn("fakeroot/description", fsc.store) + + +@skipIf(missing_libs, skipmsg) +class TestSwiftInfoRefsContainer(TestCase): + def setUp(self): + super(TestSwiftInfoRefsContainer, self).setUp() + content = ( + b"22effb216e3a82f97da599b8885a6cadb488b4c5\trefs/heads/master\n" + b"cca703b0e1399008b53a1a236d6b4584737649e4\trefs/heads/dev" + ) + self.store = {"fakerepo/info/refs": content} + self.conf = swift.load_conf(file=StringIO(config_file % def_config_file)) + self.fsc = FakeSwiftConnector("fakerepo", conf=self.conf) + self.object_store = {} + + def test_init(self): 
+ """info/refs does not exists""" + irc = swift.SwiftInfoRefsContainer(self.fsc, self.object_store) + self.assertEqual(len(irc._refs), 0) + self.fsc.store = self.store + irc = swift.SwiftInfoRefsContainer(self.fsc, self.object_store) + self.assertIn(b"refs/heads/dev", irc.allkeys()) + self.assertIn(b"refs/heads/master", irc.allkeys()) + + def test_set_if_equals(self): + self.fsc.store = self.store + irc = swift.SwiftInfoRefsContainer(self.fsc, self.object_store) + irc.set_if_equals( + b"refs/heads/dev", + b"cca703b0e1399008b53a1a236d6b4584737649e4", + b"1" * 40, + ) + self.assertEqual(irc[b"refs/heads/dev"], b"1" * 40) + + def test_remove_if_equals(self): + self.fsc.store = self.store + irc = swift.SwiftInfoRefsContainer(self.fsc, self.object_store) + irc.remove_if_equals( + b"refs/heads/dev", b"cca703b0e1399008b53a1a236d6b4584737649e4" + ) + self.assertNotIn(b"refs/heads/dev", irc.allkeys()) + + +@skipIf(missing_libs, skipmsg) +class TestSwiftConnector(TestCase): + def setUp(self): + super(TestSwiftConnector, self).setUp() + self.conf = swift.load_conf(file=StringIO(config_file % def_config_file)) + with patch("geventhttpclient.HTTPClient.request", fake_auth_request_v1): + self.conn = swift.SwiftConnector("fakerepo", conf=self.conf) + + def test_init_connector(self): + self.assertEqual(self.conn.auth_ver, "1") + self.assertEqual(self.conn.auth_url, "http://127.0.0.1:8080/auth/v1.0") + self.assertEqual(self.conn.user, "test:tester") + self.assertEqual(self.conn.password, "testing") + self.assertEqual(self.conn.root, "fakerepo") + self.assertEqual( + self.conn.storage_url, "http://127.0.0.1:8080/v1.0/AUTH_fakeuser" + ) + self.assertEqual(self.conn.token, "12" * 10) + self.assertEqual(self.conn.http_timeout, 1) + self.assertEqual(self.conn.http_pool_length, 1) + self.assertEqual(self.conn.concurrency, 1) + self.conf.set("swift", "auth_ver", "2") + self.conf.set("swift", "auth_url", "http://127.0.0.1:8080/auth/v2.0") + with patch("geventhttpclient.HTTPClient.request", fake_auth_request_v2): + conn = swift.SwiftConnector("fakerepo", conf=self.conf) + self.assertEqual(conn.user, "tester") + self.assertEqual(conn.tenant, "test") + self.conf.set("swift", "auth_ver", "1") + self.conf.set("swift", "auth_url", "http://127.0.0.1:8080/auth/v1.0") + with patch("geventhttpclient.HTTPClient.request", fake_auth_request_v1_error): + self.assertRaises( + swift.SwiftException, + lambda: swift.SwiftConnector("fakerepo", conf=self.conf), + ) + + def test_root_exists(self): + with patch("geventhttpclient.HTTPClient.request", lambda *args: Response()): + self.assertEqual(self.conn.test_root_exists(), True) + + def test_root_not_exists(self): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args: Response(status=404), + ): + self.assertEqual(self.conn.test_root_exists(), None) + + def test_create_root(self): + with patch( + "dulwich.contrib.swift.SwiftConnector.test_root_exists", + lambda *args: None, + ): + with patch("geventhttpclient.HTTPClient.request", lambda *args: Response()): + self.assertEqual(self.conn.create_root(), None) + + def test_create_root_fails(self): + with patch( + "dulwich.contrib.swift.SwiftConnector.test_root_exists", + lambda *args: None, + ): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args: Response(status=404), + ): + self.assertRaises(swift.SwiftException, self.conn.create_root) + + def test_get_container_objects(self): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args: Response( + content=json.dumps((({"name": "a"}, {"name": 
"b"}))) + ), + ): + self.assertEqual(len(self.conn.get_container_objects()), 2) + + def test_get_container_objects_fails(self): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args: Response(status=404), + ): + self.assertEqual(self.conn.get_container_objects(), None) + + def test_get_object_stat(self): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args: Response(headers={"content-length": "10"}), + ): + self.assertEqual(self.conn.get_object_stat("a")["content-length"], "10") + + def test_get_object_stat_fails(self): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args: Response(status=404), + ): + self.assertEqual(self.conn.get_object_stat("a"), None) + + def test_put_object(self): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args, **kwargs: Response(), + ): + self.assertEqual(self.conn.put_object("a", BytesIO(b"content")), None) + + def test_put_object_fails(self): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args, **kwargs: Response(status=400), + ): + self.assertRaises( + swift.SwiftException, + lambda: self.conn.put_object("a", BytesIO(b"content")), + ) + + def test_get_object(self): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args, **kwargs: Response(content=b"content"), + ): + self.assertEqual(self.conn.get_object("a").read(), b"content") + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args, **kwargs: Response(content=b"content"), + ): + self.assertEqual(self.conn.get_object("a", range="0-6"), b"content") + + def test_get_object_fails(self): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args, **kwargs: Response(status=404), + ): + self.assertEqual(self.conn.get_object("a"), None) + + def test_del_object(self): + with patch("geventhttpclient.HTTPClient.request", lambda *args: Response()): + self.assertEqual(self.conn.del_object("a"), None) + + def test_del_root(self): + with patch( + "dulwich.contrib.swift.SwiftConnector.del_object", + lambda *args: None, + ): + with patch( + "dulwich.contrib.swift.SwiftConnector." "get_container_objects", + lambda *args: ({"name": "a"}, {"name": "b"}), + ): + with patch( + "geventhttpclient.HTTPClient.request", + lambda *args: Response(), + ): + self.assertEqual(self.conn.del_root(), None) + + +@skipIf(missing_libs, skipmsg) +class SwiftObjectStoreTests(ObjectStoreTests, TestCase): + def setUp(self): + TestCase.setUp(self) + conf = swift.load_conf(file=StringIO(config_file % def_config_file)) + fsc = FakeSwiftConnector("fakerepo", conf=conf) + self.store = swift.SwiftObjectStore(fsc) diff --git a/env/lib/python3.8/site-packages/dulwich/contrib/test_swift_smoke.py b/env/lib/python3.8/site-packages/dulwich/contrib/test_swift_smoke.py new file mode 100644 index 00000000..461f75f8 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/contrib/test_swift_smoke.py @@ -0,0 +1,313 @@ +# test_smoke.py -- Functional tests for the Swift backend. +# Copyright (C) 2013 eNovance SAS +# +# Author: Fabien Boucher +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. 
+# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Start functional tests + +A Swift installation must be available before +starting those tests. The account and authentication method used +during this functional tests must be changed in the configuration file +passed as environment variable. +The container used to create a fake repository is defined +in cls.fakerepo and will be deleted after the tests. + +DULWICH_SWIFT_CFG=/tmp/conf.cfg PYTHONPATH=. python -m unittest \ + dulwich.tests_swift.test_smoke +""" + +import os +import unittest +import tempfile +import shutil + +import gevent +from gevent import monkey + +monkey.patch_all() + +from dulwich import ( # noqa:E402 + server, + repo, + index, + client, + objects, +) +from dulwich.contrib import swift # noqa:E402 + + +class DulwichServer: + """Start the TCPGitServer with Swift backend""" + + def __init__(self, backend, port): + self.port = port + self.backend = backend + + def run(self): + self.server = server.TCPGitServer(self.backend, "localhost", port=self.port) + self.job = gevent.spawn(self.server.serve_forever) + + def stop(self): + self.server.shutdown() + gevent.joinall((self.job,)) + + +class SwiftSystemBackend(server.Backend): + def open_repository(self, path): + return swift.SwiftRepo(path, conf=swift.load_conf()) + + +class SwiftRepoSmokeTest(unittest.TestCase): + @classmethod + def setUpClass(cls): + cls.backend = SwiftSystemBackend() + cls.port = 9148 + cls.server_address = "localhost" + cls.fakerepo = "fakerepo" + cls.th_server = DulwichServer(cls.backend, cls.port) + cls.th_server.run() + cls.conf = swift.load_conf() + + @classmethod + def tearDownClass(cls): + cls.th_server.stop() + + def setUp(self): + self.scon = swift.SwiftConnector(self.fakerepo, self.conf) + if self.scon.test_root_exists(): + try: + self.scon.del_root() + except swift.SwiftException: + pass + self.temp_d = tempfile.mkdtemp() + if os.path.isdir(self.temp_d): + shutil.rmtree(self.temp_d) + + def tearDown(self): + if self.scon.test_root_exists(): + try: + self.scon.del_root() + except swift.SwiftException: + pass + if os.path.isdir(self.temp_d): + shutil.rmtree(self.temp_d) + + def test_init_bare(self): + swift.SwiftRepo.init_bare(self.scon, self.conf) + self.assertTrue(self.scon.test_root_exists()) + obj = self.scon.get_container_objects() + filtered = [ + o for o in obj if o["name"] == "info/refs" or o["name"] == "objects/pack" + ] + self.assertEqual(len(filtered), 2) + + def test_clone_bare(self): + local_repo = repo.Repo.init(self.temp_d, mkdir=True) + swift.SwiftRepo.init_bare(self.scon, self.conf) + tcp_client = client.TCPGitClient(self.server_address, port=self.port) + remote_refs = tcp_client.fetch(self.fakerepo, local_repo) + # The remote repo is empty (no refs retrieved) + self.assertEqual(remote_refs, None) + + def test_push_commit(self): + def determine_wants(*args, **kwargs): + return {"refs/heads/master": local_repo.refs["HEAD"]} + + local_repo = repo.Repo.init(self.temp_d, mkdir=True) + # Nothing in the staging area + local_repo.do_commit("Test commit", "fbo@localhost") + sha = 
local_repo.refs.read_loose_ref("refs/heads/master") + swift.SwiftRepo.init_bare(self.scon, self.conf) + tcp_client = client.TCPGitClient(self.server_address, port=self.port) + tcp_client.send_pack( + self.fakerepo, determine_wants, local_repo.generate_pack_data + ) + swift_repo = swift.SwiftRepo("fakerepo", self.conf) + remote_sha = swift_repo.refs.read_loose_ref("refs/heads/master") + self.assertEqual(sha, remote_sha) + + def test_push_branch(self): + def determine_wants(*args, **kwargs): + return {"refs/heads/mybranch": local_repo.refs["refs/heads/mybranch"]} + + local_repo = repo.Repo.init(self.temp_d, mkdir=True) + # Nothing in the staging area + local_repo.do_commit("Test commit", "fbo@localhost", ref="refs/heads/mybranch") + sha = local_repo.refs.read_loose_ref("refs/heads/mybranch") + swift.SwiftRepo.init_bare(self.scon, self.conf) + tcp_client = client.TCPGitClient(self.server_address, port=self.port) + tcp_client.send_pack( + "/fakerepo", determine_wants, local_repo.generate_pack_data + ) + swift_repo = swift.SwiftRepo(self.fakerepo, self.conf) + remote_sha = swift_repo.refs.read_loose_ref("refs/heads/mybranch") + self.assertEqual(sha, remote_sha) + + def test_push_multiple_branch(self): + def determine_wants(*args, **kwargs): + return { + "refs/heads/mybranch": local_repo.refs["refs/heads/mybranch"], + "refs/heads/master": local_repo.refs["refs/heads/master"], + "refs/heads/pullr-108": local_repo.refs["refs/heads/pullr-108"], + } + + local_repo = repo.Repo.init(self.temp_d, mkdir=True) + # Nothing in the staging area + local_shas = {} + remote_shas = {} + for branch in ("master", "mybranch", "pullr-108"): + local_shas[branch] = local_repo.do_commit( + "Test commit %s" % branch, + "fbo@localhost", + ref="refs/heads/%s" % branch, + ) + swift.SwiftRepo.init_bare(self.scon, self.conf) + tcp_client = client.TCPGitClient(self.server_address, port=self.port) + tcp_client.send_pack( + self.fakerepo, determine_wants, local_repo.generate_pack_data + ) + swift_repo = swift.SwiftRepo("fakerepo", self.conf) + for branch in ("master", "mybranch", "pullr-108"): + remote_shas[branch] = swift_repo.refs.read_loose_ref( + "refs/heads/%s" % branch + ) + self.assertDictEqual(local_shas, remote_shas) + + def test_push_data_branch(self): + def determine_wants(*args, **kwargs): + return {"refs/heads/master": local_repo.refs["HEAD"]} + + local_repo = repo.Repo.init(self.temp_d, mkdir=True) + os.mkdir(os.path.join(self.temp_d, "dir")) + files = ("testfile", "testfile2", "dir/testfile3") + i = 0 + for f in files: + open(os.path.join(self.temp_d, f), "w").write("DATA %s" % i) + i += 1 + local_repo.stage(files) + local_repo.do_commit("Test commit", "fbo@localhost", ref="refs/heads/master") + swift.SwiftRepo.init_bare(self.scon, self.conf) + tcp_client = client.TCPGitClient(self.server_address, port=self.port) + tcp_client.send_pack( + self.fakerepo, determine_wants, local_repo.generate_pack_data + ) + swift_repo = swift.SwiftRepo("fakerepo", self.conf) + commit_sha = swift_repo.refs.read_loose_ref("refs/heads/master") + otype, data = swift_repo.object_store.get_raw(commit_sha) + commit = objects.ShaFile.from_raw_string(otype, data) + otype, data = swift_repo.object_store.get_raw(commit._tree) + tree = objects.ShaFile.from_raw_string(otype, data) + objs = tree.items() + objs_ = [] + for tree_entry in objs: + objs_.append(swift_repo.object_store.get_raw(tree_entry.sha)) + # Blob + self.assertEqual(objs_[1][1], "DATA 0") + self.assertEqual(objs_[2][1], "DATA 1") + # Tree + self.assertEqual(objs_[0][0], 2) + + 
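+    # The next test reuses the remote state seeded by test_push_data_branch:
+    # it clones the repository back into a fresh work tree, verifies the
+    # checkout, then stages and pushes a new commit on top.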
def test_clone_then_push_data(self): + self.test_push_data_branch() + shutil.rmtree(self.temp_d) + local_repo = repo.Repo.init(self.temp_d, mkdir=True) + tcp_client = client.TCPGitClient(self.server_address, port=self.port) + remote_refs = tcp_client.fetch(self.fakerepo, local_repo) + files = ( + os.path.join(self.temp_d, "testfile"), + os.path.join(self.temp_d, "testfile2"), + ) + local_repo["HEAD"] = remote_refs["refs/heads/master"] + indexfile = local_repo.index_path() + tree = local_repo["HEAD"].tree + index.build_index_from_tree( + local_repo.path, indexfile, local_repo.object_store, tree + ) + for f in files: + self.assertEqual(os.path.isfile(f), True) + + def determine_wants(*args, **kwargs): + return {"refs/heads/master": local_repo.refs["HEAD"]} + + os.mkdir(os.path.join(self.temp_d, "test")) + files = ("testfile11", "testfile22", "test/testfile33") + i = 0 + for f in files: + open(os.path.join(self.temp_d, f), "w").write("DATA %s" % i) + i += 1 + local_repo.stage(files) + local_repo.do_commit("Test commit", "fbo@localhost", ref="refs/heads/master") + tcp_client.send_pack( + "/fakerepo", determine_wants, local_repo.generate_pack_data + ) + + def test_push_remove_branch(self): + def determine_wants(*args, **kwargs): + return { + "refs/heads/pullr-108": objects.ZERO_SHA, + "refs/heads/master": local_repo.refs["refs/heads/master"], + "refs/heads/mybranch": local_repo.refs["refs/heads/mybranch"], + } + + self.test_push_multiple_branch() + local_repo = repo.Repo(self.temp_d) + tcp_client = client.TCPGitClient(self.server_address, port=self.port) + tcp_client.send_pack( + self.fakerepo, determine_wants, local_repo.generate_pack_data + ) + swift_repo = swift.SwiftRepo("fakerepo", self.conf) + self.assertNotIn("refs/heads/pullr-108", swift_repo.refs.allkeys()) + + def test_push_annotated_tag(self): + def determine_wants(*args, **kwargs): + return { + "refs/heads/master": local_repo.refs["HEAD"], + "refs/tags/v1.0": local_repo.refs["refs/tags/v1.0"], + } + + local_repo = repo.Repo.init(self.temp_d, mkdir=True) + # Nothing in the staging area + sha = local_repo.do_commit("Test commit", "fbo@localhost") + otype, data = local_repo.object_store.get_raw(sha) + commit = objects.ShaFile.from_raw_string(otype, data) + tag = objects.Tag() + tag.tagger = "fbo@localhost" + tag.message = "Annotated tag" + tag.tag_timezone = objects.parse_timezone("-0200")[0] + tag.tag_time = commit.author_time + tag.object = (objects.Commit, commit.id) + tag.name = "v0.1" + local_repo.object_store.add_object(tag) + local_repo.refs["refs/tags/v1.0"] = tag.id + swift.SwiftRepo.init_bare(self.scon, self.conf) + tcp_client = client.TCPGitClient(self.server_address, port=self.port) + tcp_client.send_pack( + self.fakerepo, determine_wants, local_repo.generate_pack_data + ) + swift_repo = swift.SwiftRepo(self.fakerepo, self.conf) + tag_sha = swift_repo.refs.read_loose_ref("refs/tags/v1.0") + otype, data = swift_repo.object_store.get_raw(tag_sha) + rtag = objects.ShaFile.from_raw_string(otype, data) + self.assertEqual(rtag.object[1], commit.id) + self.assertEqual(rtag.id, tag.id) + + +if __name__ == "__main__": + unittest.main() diff --git a/env/lib/python3.8/site-packages/dulwich/credentials.py b/env/lib/python3.8/site-packages/dulwich/credentials.py new file mode 100644 index 00000000..683bb6c0 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/credentials.py @@ -0,0 +1,89 @@ +# credentials.py -- support for git credential helpers + +# Copyright (C) 2022 Daniele Trifirò +# +# Dulwich is dual-licensed under the 
Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Support for git credential helpers + +https://git-scm.com/book/en/v2/Git-Tools-Credential-Storage + +""" +import sys +from typing import Iterator, Optional +from urllib.parse import ParseResult, urlparse + +from dulwich.config import ConfigDict, SectionLike + + +def match_urls(url: ParseResult, url_prefix: ParseResult) -> bool: + base_match = ( + url.scheme == url_prefix.scheme + and url.hostname == url_prefix.hostname + and url.port == url_prefix.port + ) + user_match = url.username == url_prefix.username if url_prefix.username else True + path_match = url.path.rstrip("/").startswith(url_prefix.path.rstrip()) + return base_match and user_match and path_match + + +def match_partial_url(valid_url: ParseResult, partial_url: str) -> bool: + """matches a parsed url with a partial url (no scheme/netloc)""" + if "://" not in partial_url: + parsed = urlparse("scheme://" + partial_url) + else: + parsed = urlparse(partial_url) + if valid_url.scheme != parsed.scheme: + return False + + if any( + ( + (parsed.hostname and valid_url.hostname != parsed.hostname), + (parsed.username and valid_url.username != parsed.username), + (parsed.port and valid_url.port != parsed.port), + (parsed.path and parsed.path.rstrip("/") != valid_url.path.rstrip("/")), + ), + ): + return False + + return True + + +def urlmatch_credential_sections( + config: ConfigDict, url: Optional[str] +) -> Iterator[SectionLike]: + """Returns credential sections from the config which match the given URL""" + encoding = config.encoding or sys.getdefaultencoding() + parsed_url = urlparse(url or "") + for config_section in config.sections(): + if config_section[0] != b"credential": + continue + + if len(config_section) < 2: + yield config_section + continue + + config_url = config_section[1].decode(encoding) + parsed_config_url = urlparse(config_url) + if parsed_config_url.scheme and parsed_config_url.netloc: + is_match = match_urls(parsed_url, parsed_config_url) + else: + is_match = match_partial_url(parsed_url, config_url) + + if is_match: + yield config_section diff --git a/env/lib/python3.8/site-packages/dulwich/diff_tree.py b/env/lib/python3.8/site-packages/dulwich/diff_tree.py new file mode 100644 index 00000000..e6cabff0 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/diff_tree.py @@ -0,0 +1,648 @@ +# diff_tree.py -- Utilities for diffing files and trees. +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. 
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# for a copy of the GNU General Public License
+# and for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Utilities for diffing files and trees."""
+
+from collections import (
+    defaultdict,
+    namedtuple,
+)
+
+from io import BytesIO
+from itertools import chain
+import stat
+
+from dulwich.objects import (
+    S_ISGITLINK,
+    TreeEntry,
+)
+
+
+# TreeChange type constants.
+CHANGE_ADD = "add"
+CHANGE_MODIFY = "modify"
+CHANGE_DELETE = "delete"
+CHANGE_RENAME = "rename"
+CHANGE_COPY = "copy"
+CHANGE_UNCHANGED = "unchanged"
+
+RENAME_CHANGE_TYPES = (CHANGE_RENAME, CHANGE_COPY)
+
+_NULL_ENTRY = TreeEntry(None, None, None)
+
+_MAX_SCORE = 100
+RENAME_THRESHOLD = 60
+MAX_FILES = 200
+REWRITE_THRESHOLD = None
+
+
+class TreeChange(namedtuple("TreeChange", ["type", "old", "new"])):
+    """Named tuple representing a single change between two trees."""
+
+    @classmethod
+    def add(cls, new):
+        return cls(CHANGE_ADD, _NULL_ENTRY, new)
+
+    @classmethod
+    def delete(cls, old):
+        return cls(CHANGE_DELETE, old, _NULL_ENTRY)
+
+
+def _tree_entries(path, tree):
+    result = []
+    if not tree:
+        return result
+    for entry in tree.iteritems(name_order=True):
+        result.append(entry.in_path(path))
+    return result
+
+
+def _merge_entries(path, tree1, tree2):
+    """Merge the entries of two trees.
+
+    Args:
+      path: A path to prepend to all tree entry names.
+      tree1: The first Tree object to iterate, or None.
+      tree2: The second Tree object to iterate, or None.
+    Returns:
+      A list of pairs of TreeEntry objects for each pair of entries in
+        the trees. If an entry exists in one tree but not the other, the other
+        entry will have all attributes set to None. If neither entry's path is
+        None, they are guaranteed to match.
+    """
+    entries1 = _tree_entries(path, tree1)
+    entries2 = _tree_entries(path, tree2)
+    i1 = i2 = 0
+    len1 = len(entries1)
+    len2 = len(entries2)
+
+    result = []
+    while i1 < len1 and i2 < len2:
+        entry1 = entries1[i1]
+        entry2 = entries2[i2]
+        if entry1.path < entry2.path:
+            result.append((entry1, _NULL_ENTRY))
+            i1 += 1
+        elif entry1.path > entry2.path:
+            result.append((_NULL_ENTRY, entry2))
+            i2 += 1
+        else:
+            result.append((entry1, entry2))
+            i1 += 1
+            i2 += 1
+    for i in range(i1, len1):
+        result.append((entries1[i], _NULL_ENTRY))
+    for i in range(i2, len2):
+        result.append((_NULL_ENTRY, entries2[i]))
+    return result
+
+
+def _is_tree(entry):
+    mode = entry.mode
+    if mode is None:
+        return False
+    return stat.S_ISDIR(mode)
+
+
+def walk_trees(store, tree1_id, tree2_id, prune_identical=False):
+    """Recursively walk all the entries of two trees.
+
+    Iteration is depth-first pre-order, as in e.g. os.walk.
+
+    Args:
+      store: An ObjectStore for looking up objects.
+      tree1_id: The SHA of the first Tree object to iterate, or None.
+      tree2_id: The SHA of the second Tree object to iterate, or None.
+      prune_identical: If True, identical subtrees will not be walked.
+    Returns:
+      Iterator over pairs of TreeEntry objects for each pair of entries
+        in the trees and their subtrees recursively. If an entry exists in one
+        tree but not the other, the other entry will have all attributes set
+        to None.
If neither entry's path is None, they are guaranteed to + match. + """ + # This could be fairly easily generalized to >2 trees if we find a use + # case. + mode1 = tree1_id and stat.S_IFDIR or None + mode2 = tree2_id and stat.S_IFDIR or None + todo = [(TreeEntry(b"", mode1, tree1_id), TreeEntry(b"", mode2, tree2_id))] + while todo: + entry1, entry2 = todo.pop() + is_tree1 = _is_tree(entry1) + is_tree2 = _is_tree(entry2) + if prune_identical and is_tree1 and is_tree2 and entry1 == entry2: + continue + + tree1 = is_tree1 and store[entry1.sha] or None + tree2 = is_tree2 and store[entry2.sha] or None + path = entry1.path or entry2.path + todo.extend(reversed(_merge_entries(path, tree1, tree2))) + yield entry1, entry2 + + +def _skip_tree(entry, include_trees): + if entry.mode is None or (not include_trees and stat.S_ISDIR(entry.mode)): + return _NULL_ENTRY + return entry + + +def tree_changes( + store, + tree1_id, + tree2_id, + want_unchanged=False, + rename_detector=None, + include_trees=False, + change_type_same=False, +): + """Find the differences between the contents of two trees. + + Args: + store: An ObjectStore for looking up objects. + tree1_id: The SHA of the source tree. + tree2_id: The SHA of the target tree. + want_unchanged: If True, include TreeChanges for unmodified entries + as well. + include_trees: Whether to include trees + rename_detector: RenameDetector object for detecting renames. + change_type_same: Whether to report change types in the same + entry or as delete+add. + Returns: + Iterator over TreeChange instances for each change between the + source and target tree. + """ + if rename_detector is not None and tree1_id is not None and tree2_id is not None: + for change in rename_detector.changes_with_renames( + tree1_id, + tree2_id, + want_unchanged=want_unchanged, + include_trees=include_trees, + ): + yield change + return + + entries = walk_trees( + store, tree1_id, tree2_id, prune_identical=(not want_unchanged) + ) + for entry1, entry2 in entries: + if entry1 == entry2 and not want_unchanged: + continue + + # Treat entries for trees as missing. + entry1 = _skip_tree(entry1, include_trees) + entry2 = _skip_tree(entry2, include_trees) + + if entry1 != _NULL_ENTRY and entry2 != _NULL_ENTRY: + if ( + stat.S_IFMT(entry1.mode) != stat.S_IFMT(entry2.mode) + and not change_type_same + ): + # File type changed: report as delete/add. + yield TreeChange.delete(entry1) + entry1 = _NULL_ENTRY + change_type = CHANGE_ADD + elif entry1 == entry2: + change_type = CHANGE_UNCHANGED + else: + change_type = CHANGE_MODIFY + elif entry1 != _NULL_ENTRY: + change_type = CHANGE_DELETE + elif entry2 != _NULL_ENTRY: + change_type = CHANGE_ADD + else: + # Both were None because at least one was a tree. + continue + yield TreeChange(change_type, entry1, entry2) + + +def _all_eq(seq, key, value): + for e in seq: + if key(e) != value: + return False + return True + + +def _all_same(seq, key): + return _all_eq(seq[1:], key, key(seq[0])) + + +def tree_changes_for_merge(store, parent_tree_ids, tree_id, rename_detector=None): + """Get the tree changes for a merge tree relative to all its parents. + + Args: + store: An ObjectStore for looking up objects. + parent_tree_ids: An iterable of the SHAs of the parent trees. + tree_id: The SHA of the merge tree. + rename_detector: RenameDetector object for detecting renames. + + Returns: + Iterator over lists of TreeChange objects, one per conflicted path + in the merge. 
+ + Each list contains one element per parent, with the TreeChange for that + path relative to that parent. An element may be None if it never + existed in one parent and was deleted in two others. + + A path is only included in the output if it is a conflict, i.e. its SHA + in the merge tree is not found in any of the parents, or in the case of + deletes, if not all of the old SHAs match. + """ + all_parent_changes = [ + tree_changes(store, t, tree_id, rename_detector=rename_detector) + for t in parent_tree_ids + ] + num_parents = len(parent_tree_ids) + changes_by_path = defaultdict(lambda: [None] * num_parents) + + # Organize by path. + for i, parent_changes in enumerate(all_parent_changes): + for change in parent_changes: + if change.type == CHANGE_DELETE: + path = change.old.path + else: + path = change.new.path + changes_by_path[path][i] = change + + def old_sha(c): + return c.old.sha + + def change_type(c): + return c.type + + # Yield only conflicting changes. + for _, changes in sorted(changes_by_path.items()): + assert len(changes) == num_parents + have = [c for c in changes if c is not None] + if _all_eq(have, change_type, CHANGE_DELETE): + if not _all_same(have, old_sha): + yield changes + elif not _all_same(have, change_type): + yield changes + elif None not in changes: + # If no change was found relative to one parent, that means the SHA + # must have matched the SHA in that parent, so it is not a + # conflict. + yield changes + + +_BLOCK_SIZE = 64 + + +def _count_blocks(obj): + """Count the blocks in an object. + + Splits the data into blocks either on lines or <=64-byte chunks of lines. + + Args: + obj: The object to count blocks for. + Returns: + A dict of block hashcode -> total bytes occurring. + """ + block_counts = defaultdict(int) + block = BytesIO() + n = 0 + + # Cache attrs as locals to avoid expensive lookups in the inner loop. + block_write = block.write + block_seek = block.seek + block_truncate = block.truncate + block_getvalue = block.getvalue + + for c in chain.from_iterable(obj.as_raw_chunks()): + c = c.to_bytes(1, "big") + block_write(c) + n += 1 + if c == b"\n" or n == _BLOCK_SIZE: + value = block_getvalue() + block_counts[hash(value)] += len(value) + block_seek(0) + block_truncate() + n = 0 + if n > 0: + last_block = block_getvalue() + block_counts[hash(last_block)] += len(last_block) + return block_counts + + +def _common_bytes(blocks1, blocks2): + """Count the number of common bytes in two block count dicts. + + Args: + blocks1: The first dict of block hashcode -> total bytes. + blocks2: The second dict of block hashcode -> total bytes. + Returns: + The number of bytes in common between blocks1 and blocks2. This is + only approximate due to possible hash collisions. + """ + # Iterate over the smaller of the two dicts, since this is symmetrical. + if len(blocks1) > len(blocks2): + blocks1, blocks2 = blocks2, blocks1 + score = 0 + for block, count1 in blocks1.items(): + count2 = blocks2.get(block) + if count2: + score += min(count1, count2) + return score + + +def _similarity_score(obj1, obj2, block_cache=None): + """Compute a similarity score for two objects. + + Args: + obj1: The first object to score. + obj2: The second object to score. + block_cache: An optional dict of SHA to block counts to cache + results between calls. + Returns: + The similarity score between the two objects, defined as the + number of bytes in common between the two objects divided by the + maximum size, scaled to the range 0-100. 
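+
+    As a worked example of this formula: two 100-byte objects sharing 50
+    common bytes score int(50 * 100 / 100) == 50, identical objects score
+    100, and objects with no blocks in common score 0.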
+ """ + if block_cache is None: + block_cache = {} + if obj1.id not in block_cache: + block_cache[obj1.id] = _count_blocks(obj1) + if obj2.id not in block_cache: + block_cache[obj2.id] = _count_blocks(obj2) + + common_bytes = _common_bytes(block_cache[obj1.id], block_cache[obj2.id]) + max_size = max(obj1.raw_length(), obj2.raw_length()) + if not max_size: + return _MAX_SCORE + return int(float(common_bytes) * _MAX_SCORE / max_size) + + +def _tree_change_key(entry): + # Sort by old path then new path. If only one exists, use it for both keys. + path1 = entry.old.path + path2 = entry.new.path + if path1 is None: + path1 = path2 + if path2 is None: + path2 = path1 + return (path1, path2) + + +class RenameDetector(object): + """Object for handling rename detection between two trees.""" + + def __init__( + self, + store, + rename_threshold=RENAME_THRESHOLD, + max_files=MAX_FILES, + rewrite_threshold=REWRITE_THRESHOLD, + find_copies_harder=False, + ): + """Initialize the rename detector. + + Args: + store: An ObjectStore for looking up objects. + rename_threshold: The threshold similarity score for considering + an add/delete pair to be a rename/copy; see _similarity_score. + max_files: The maximum number of adds and deletes to consider, + or None for no limit. The detector is guaranteed to compare no more + than max_files ** 2 add/delete pairs. This limit is provided + because rename detection can be quadratic in the project size. If + the limit is exceeded, no content rename detection is attempted. + rewrite_threshold: The threshold similarity score below which a + modify should be considered a delete/add, or None to not break + modifies; see _similarity_score. + find_copies_harder: If True, consider unmodified files when + detecting copies. + """ + self._store = store + self._rename_threshold = rename_threshold + self._rewrite_threshold = rewrite_threshold + self._max_files = max_files + self._find_copies_harder = find_copies_harder + self._want_unchanged = False + + def _reset(self): + self._adds = [] + self._deletes = [] + self._changes = [] + + def _should_split(self, change): + if ( + self._rewrite_threshold is None + or change.type != CHANGE_MODIFY + or change.old.sha == change.new.sha + ): + return False + old_obj = self._store[change.old.sha] + new_obj = self._store[change.new.sha] + return _similarity_score(old_obj, new_obj) < self._rewrite_threshold + + def _add_change(self, change): + if change.type == CHANGE_ADD: + self._adds.append(change) + elif change.type == CHANGE_DELETE: + self._deletes.append(change) + elif self._should_split(change): + self._deletes.append(TreeChange.delete(change.old)) + self._adds.append(TreeChange.add(change.new)) + elif ( + self._find_copies_harder and change.type == CHANGE_UNCHANGED + ) or change.type == CHANGE_MODIFY: + # Treat all modifies as potential deletes for rename detection, + # but don't split them (to avoid spurious renames). Setting + # find_copies_harder means we treat unchanged the same as + # modified. 
+            self._deletes.append(change)
+        else:
+            self._changes.append(change)
+
+    def _collect_changes(self, tree1_id, tree2_id):
+        want_unchanged = self._find_copies_harder or self._want_unchanged
+        for change in tree_changes(
+            self._store,
+            tree1_id,
+            tree2_id,
+            want_unchanged=want_unchanged,
+            include_trees=self._include_trees,
+        ):
+            self._add_change(change)
+
+    def _prune(self, add_paths, delete_paths):
+        self._adds = [a for a in self._adds if a.new.path not in add_paths]
+        self._deletes = [d for d in self._deletes if d.old.path not in delete_paths]
+
+    def _find_exact_renames(self):
+        add_map = defaultdict(list)
+        for add in self._adds:
+            add_map[add.new.sha].append(add.new)
+        delete_map = defaultdict(list)
+        for delete in self._deletes:
+            # Keep track of whether the delete was actually marked as a delete.
+            # If not, it needs to be marked as a copy.
+            is_delete = delete.type == CHANGE_DELETE
+            delete_map[delete.old.sha].append((delete.old, is_delete))
+
+        add_paths = set()
+        delete_paths = set()
+        for sha, sha_deletes in delete_map.items():
+            sha_adds = add_map[sha]
+            for (old, is_delete), new in zip(sha_deletes, sha_adds):
+                if stat.S_IFMT(old.mode) != stat.S_IFMT(new.mode):
+                    continue
+                if is_delete:
+                    delete_paths.add(old.path)
+                add_paths.add(new.path)
+                new_type = is_delete and CHANGE_RENAME or CHANGE_COPY
+                self._changes.append(TreeChange(new_type, old, new))
+
+            num_extra_adds = len(sha_adds) - len(sha_deletes)
+            # TODO(dborowitz): Less arbitrary way of dealing with extra copies.
+            old = sha_deletes[0][0]
+            if num_extra_adds > 0:
+                for new in sha_adds[-num_extra_adds:]:
+                    add_paths.add(new.path)
+                    self._changes.append(TreeChange(CHANGE_COPY, old, new))
+        self._prune(add_paths, delete_paths)
+
+    def _should_find_content_renames(self):
+        if self._max_files is None:
+            # max_files=None means no limit (see the __init__ docstring).
+            return True
+        return len(self._adds) * len(self._deletes) <= self._max_files ** 2
+
+    def _rename_type(self, check_paths, delete, add):
+        if check_paths and delete.old.path == add.new.path:
+            # If the paths match, this must be a split modify, so make sure it
+            # comes out as a modify.
+            return CHANGE_MODIFY
+        elif delete.type != CHANGE_DELETE:
+            # If it's in deletes but not marked as a delete, it must have been
+            # added due to find_copies_harder, and needs to be marked as a
+            # copy.
+            return CHANGE_COPY
+        return CHANGE_RENAME
+
+    def _find_content_rename_candidates(self):
+        candidates = self._candidates = []
+        # TODO: Optimizations:
+        # - Compare object sizes before counting blocks.
+        # - Skip if delete's S_IFMT differs from all adds.
+        # - Skip if adds or deletes is empty.
+        # Match C git's behavior of not attempting to find content renames if
+        # the matrix size exceeds the threshold.
+        if not self._should_find_content_renames():
+            return
+
+        block_cache = {}
+        check_paths = self._rename_threshold is not None
+        for delete in self._deletes:
+            if S_ISGITLINK(delete.old.mode):
+                continue  # Git links don't exist in this repo.
+            old_sha = delete.old.sha
+            old_obj = self._store[old_sha]
+            block_cache[old_sha] = _count_blocks(old_obj)
+            for add in self._adds:
+                if stat.S_IFMT(delete.old.mode) != stat.S_IFMT(add.new.mode):
+                    continue
+                new_obj = self._store[add.new.sha]
+                score = _similarity_score(old_obj, new_obj, block_cache=block_cache)
+                if score > self._rename_threshold:
+                    new_type = self._rename_type(check_paths, delete, add)
+                    rename = TreeChange(new_type, delete.old, add.new)
+                    candidates.append((-score, rename))
+
+    def _choose_content_renames(self):
+        # Sort scores from highest to lowest, but keep names in ascending
+        # order.
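+        # Each candidate is a (-score, TreeChange) tuple (built in
+        # _find_content_rename_candidates), so a plain ascending sort puts
+        # the highest similarity scores first.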
+        self._candidates.sort()
+
+        delete_paths = set()
+        add_paths = set()
+        for _, change in self._candidates:
+            new_path = change.new.path
+            if new_path in add_paths:
+                continue
+            old_path = change.old.path
+            orig_type = change.type
+            if old_path in delete_paths:
+                change = TreeChange(CHANGE_COPY, change.old, change.new)
+
+            # If the candidate was originally a copy, that means it came from a
+            # modified or unchanged path, so we don't want to prune it.
+            if orig_type != CHANGE_COPY:
+                delete_paths.add(old_path)
+            add_paths.add(new_path)
+            self._changes.append(change)
+        self._prune(add_paths, delete_paths)
+
+    def _join_modifies(self):
+        if self._rewrite_threshold is None:
+            return
+
+        modifies = {}
+        delete_map = {d.old.path: d for d in self._deletes}
+        for add in self._adds:
+            path = add.new.path
+            delete = delete_map.get(path)
+            if delete is not None and stat.S_IFMT(delete.old.mode) == stat.S_IFMT(
+                add.new.mode
+            ):
+                modifies[path] = TreeChange(CHANGE_MODIFY, delete.old, add.new)
+
+        self._adds = [a for a in self._adds if a.new.path not in modifies]
+        self._deletes = [d for d in self._deletes if d.old.path not in modifies]
+        self._changes += modifies.values()
+
+    def _sorted_changes(self):
+        result = []
+        result.extend(self._adds)
+        result.extend(self._deletes)
+        result.extend(self._changes)
+        result.sort(key=_tree_change_key)
+        return result
+
+    def _prune_unchanged(self):
+        if self._want_unchanged:
+            return
+        self._deletes = [d for d in self._deletes if d.type != CHANGE_UNCHANGED]
+
+    def changes_with_renames(
+        self, tree1_id, tree2_id, want_unchanged=False, include_trees=False
+    ):
+        """Iterate TreeChanges between two tree SHAs, with rename detection."""
+        self._reset()
+        self._want_unchanged = want_unchanged
+        self._include_trees = include_trees
+        self._collect_changes(tree1_id, tree2_id)
+        self._find_exact_renames()
+        self._find_content_rename_candidates()
+        self._choose_content_renames()
+        self._join_modifies()
+        self._prune_unchanged()
+        return self._sorted_changes()
+
+
+# Hold on to the pure-python implementations for testing.
+_is_tree_py = _is_tree
+_merge_entries_py = _merge_entries
+_count_blocks_py = _count_blocks
+try:
+    # Try to import C versions
+    from dulwich._diff_tree import (  # type: ignore
+        _is_tree,
+        _merge_entries,
+        _count_blocks,
+    )
+except ImportError:
+    pass
diff --git a/env/lib/python3.8/site-packages/dulwich/errors.py b/env/lib/python3.8/site-packages/dulwich/errors.py
new file mode 100644
index 00000000..9a7a8727
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/errors.py
@@ -0,0 +1,204 @@
+# errors.py -- errors for dulwich
+# Copyright (C) 2007 James Westby
+# Copyright (C) 2009-2012 Jelmer Vernooij
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# for a copy of the GNU General Public License
+# and for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Dulwich-related exception classes and utility functions."""
+
+
+# Please do not add more errors here, but instead add them close to the code
+# that raises the error.
+
+
+import binascii
+
+
+class ChecksumMismatch(Exception):
+    """A checksum didn't match the expected contents."""
+
+    def __init__(self, expected, got, extra=None):
+        if len(expected) == 20:
+            expected = binascii.hexlify(expected)
+        if len(got) == 20:
+            got = binascii.hexlify(got)
+        self.expected = expected
+        self.got = got
+        self.extra = extra
+        if self.extra is None:
+            Exception.__init__(
+                self,
+                "Checksum mismatch: Expected %s, got %s" % (expected, got),
+            )
+        else:
+            Exception.__init__(
+                self,
+                "Checksum mismatch: Expected %s, got %s; %s" % (expected, got, extra),
+            )
+
+
+class WrongObjectException(Exception):
+    """Baseclass for all the _ is not a _ exceptions on objects.
+
+    Do not instantiate directly.
+
+    Subclasses should define a type_name attribute that indicates what
+    was expected if they were raised.
+    """
+
+    def __init__(self, sha, *args, **kwargs):
+        Exception.__init__(self, "%s is not a %s" % (sha, self.type_name))
+
+
+class NotCommitError(WrongObjectException):
+    """Indicates that the sha requested does not point to a commit."""
+
+    type_name = "commit"
+
+
+class NotTreeError(WrongObjectException):
+    """Indicates that the sha requested does not point to a tree."""
+
+    type_name = "tree"
+
+
+class NotTagError(WrongObjectException):
+    """Indicates that the sha requested does not point to a tag."""
+
+    type_name = "tag"
+
+
+class NotBlobError(WrongObjectException):
+    """Indicates that the sha requested does not point to a blob."""
+
+    type_name = "blob"
+
+
+class MissingCommitError(Exception):
+    """Indicates that a commit was not found in the repository"""
+
+    def __init__(self, sha, *args, **kwargs):
+        self.sha = sha
+        Exception.__init__(self, "%s is not in the revision store" % sha)
+
+
+class ObjectMissing(Exception):
+    """Indicates that a requested object is missing."""
+
+    def __init__(self, sha, *args, **kwargs):
+        Exception.__init__(self, "%s is not in the pack" % sha)
+
+
+class ApplyDeltaError(Exception):
+    """Indicates that applying a delta failed."""
+
+    def __init__(self, *args, **kwargs):
+        Exception.__init__(self, *args, **kwargs)
+
+
+class NotGitRepository(Exception):
+    """Indicates that no Git repository was found."""
+
+    def __init__(self, *args, **kwargs):
+        Exception.__init__(self, *args, **kwargs)
+
+
+class GitProtocolError(Exception):
+    """Git protocol exception."""
+
+    def __init__(self, *args, **kwargs):
+        Exception.__init__(self, *args, **kwargs)
+
+    def __eq__(self, other):
+        return isinstance(self, type(other)) and self.args == other.args
+
+
+class SendPackError(GitProtocolError):
+    """An error occurred during send_pack."""
+
+
+# N.B.: UpdateRefsError is no longer used and will be removed in
+# Dulwich 0.21.
+# remove: >= 0.21
+class UpdateRefsError(GitProtocolError):
+    """The server reported errors updating refs."""
+
+    def __init__(self, *args, **kwargs):
+        self.ref_status = kwargs.pop("ref_status")
+        super(UpdateRefsError, self).__init__(*args, **kwargs)
+
+
+class HangupException(GitProtocolError):
+    """Hangup exception."""
+
+    def __init__(self, stderr_lines=None):
+        if stderr_lines:
+            super(HangupException, self).__init__(
+                "\n".join(
+                    [line.decode("utf-8", "surrogateescape") for line in stderr_lines]
+                )
+            )
+        else:
+            super(HangupException, self).__init__(
+                "The remote server unexpectedly closed the connection."
+ ) + self.stderr_lines = stderr_lines + + def __eq__(self, other): + return isinstance(self, type(other)) and self.stderr_lines == other.stderr_lines + + +class UnexpectedCommandError(GitProtocolError): + """Unexpected command received in a proto line.""" + + def __init__(self, command): + if command is None: + command = "flush-pkt" + else: + command = "command %s" % command + super(UnexpectedCommandError, self).__init__( + "Protocol got unexpected %s" % command + ) + + +class FileFormatException(Exception): + """Base class for exceptions relating to reading git file formats.""" + + +class PackedRefsException(FileFormatException): + """Indicates an error parsing a packed-refs file.""" + + +class ObjectFormatException(FileFormatException): + """Indicates an error parsing an object.""" + + +class NoIndexPresent(Exception): + """No index is present.""" + + +class CommitError(Exception): + """An error occurred while performing a commit.""" + + +class RefFormatError(Exception): + """Indicates an invalid ref name.""" + + +class HookError(Exception): + """An error occurred while executing a hook.""" diff --git a/env/lib/python3.8/site-packages/dulwich/fastexport.py b/env/lib/python3.8/site-packages/dulwich/fastexport.py new file mode 100644 index 00000000..5b596b9a --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/fastexport.py @@ -0,0 +1,258 @@ +# __init__.py -- Fast export/import functionality +# Copyright (C) 2010-2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + + +"""Fast export/import functionality.""" + +from dulwich.index import ( + commit_tree, +) +from dulwich.objects import ( + Blob, + Commit, + Tag, + ZERO_SHA, +) +from fastimport import ( + commands, + errors as fastimport_errors, + parser, + processor, +) + +import stat + + +def split_email(text): + (name, email) = text.rsplit(b" <", 1) + return (name, email.rstrip(b">")) + + +class GitFastExporter(object): + """Generate a fast-export output stream for Git objects.""" + + def __init__(self, outf, store): + self.outf = outf + self.store = store + self.markers = {} + self._marker_idx = 0 + + def print_cmd(self, cmd): + self.outf.write(getattr(cmd, "__bytes__", cmd.__repr__)() + b"\n") + + def _allocate_marker(self): + self._marker_idx += 1 + return ("%d" % (self._marker_idx,)).encode("ascii") + + def _export_blob(self, blob): + marker = self._allocate_marker() + self.markers[marker] = blob.id + return (commands.BlobCommand(marker, blob.data), marker) + + def emit_blob(self, blob): + (cmd, marker) = self._export_blob(blob) + self.print_cmd(cmd) + return marker + + def _iter_files(self, base_tree, new_tree): + for ( + (old_path, new_path), + (old_mode, new_mode), + (old_hexsha, new_hexsha), + ) in self.store.tree_changes(base_tree, new_tree): + if new_path is None: + yield commands.FileDeleteCommand(old_path) + continue + if not stat.S_ISDIR(new_mode): + blob = self.store[new_hexsha] + marker = self.emit_blob(blob) + if old_path != new_path and old_path is not None: + yield commands.FileRenameCommand(old_path, new_path) + if old_mode != new_mode or old_hexsha != new_hexsha: + prefixed_marker = b":" + marker + yield commands.FileModifyCommand( + new_path, new_mode, prefixed_marker, None + ) + + def _export_commit(self, commit, ref, base_tree=None): + file_cmds = list(self._iter_files(base_tree, commit.tree)) + marker = self._allocate_marker() + if commit.parents: + from_ = commit.parents[0] + merges = commit.parents[1:] + else: + from_ = None + merges = [] + author, author_email = split_email(commit.author) + committer, committer_email = split_email(commit.committer) + cmd = commands.CommitCommand( + ref, + marker, + (author, author_email, commit.author_time, commit.author_timezone), + ( + committer, + committer_email, + commit.commit_time, + commit.commit_timezone, + ), + commit.message, + from_, + merges, + file_cmds, + ) + return (cmd, marker) + + def emit_commit(self, commit, ref, base_tree=None): + cmd, marker = self._export_commit(commit, ref, base_tree) + self.print_cmd(cmd) + return marker + + +class GitImportProcessor(processor.ImportProcessor): + """An import processor that imports into a Git repository using Dulwich.""" + + # FIXME: Batch creation of objects? 
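+    # A rough usage sketch (names and paths here are illustrative only):
+    #
+    #     from dulwich.repo import Repo
+    #     from dulwich.fastexport import GitImportProcessor
+    #
+    #     repo = Repo("/path/to/repo")
+    #     proc = GitImportProcessor(repo)
+    #     with open("dump.fi", "rb") as f:
+    #         markers = proc.import_stream(f)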
+
+    def __init__(self, repo, params=None, verbose=False, outf=None):
+        processor.ImportProcessor.__init__(self, params, verbose)
+        self.repo = repo
+        self.last_commit = ZERO_SHA
+        self.markers = {}
+        self._contents = {}
+
+    def lookup_object(self, objectish):
+        if objectish.startswith(b":"):
+            return self.markers[objectish[1:]]
+        return objectish
+
+    def import_stream(self, stream):
+        p = parser.ImportParser(stream)
+        self.process(p.iter_commands)
+        return self.markers
+
+    def blob_handler(self, cmd):
+        """Process a BlobCommand."""
+        blob = Blob.from_string(cmd.data)
+        self.repo.object_store.add_object(blob)
+        if cmd.mark:
+            self.markers[cmd.mark] = blob.id
+
+    def checkpoint_handler(self, cmd):
+        """Process a CheckpointCommand."""
+        pass
+
+    def commit_handler(self, cmd):
+        """Process a CommitCommand."""
+        commit = Commit()
+        if cmd.author is not None:
+            author = cmd.author
+        else:
+            author = cmd.committer
+        (author_name, author_email, author_timestamp, author_timezone) = author
+        (
+            committer_name,
+            committer_email,
+            commit_timestamp,
+            commit_timezone,
+        ) = cmd.committer
+        commit.author = author_name + b" <" + author_email + b">"
+        commit.author_timezone = author_timezone
+        commit.author_time = int(author_timestamp)
+        commit.committer = committer_name + b" <" + committer_email + b">"
+        commit.commit_timezone = commit_timezone
+        commit.commit_time = int(commit_timestamp)
+        commit.message = cmd.message
+        commit.parents = []
+        if cmd.from_:
+            cmd.from_ = self.lookup_object(cmd.from_)
+            self._reset_base(cmd.from_)
+        for filecmd in cmd.iter_files():
+            if filecmd.name == b"filemodify":
+                if filecmd.data is not None:
+                    blob = Blob.from_string(filecmd.data)
+                    self.repo.object_store.add_object(blob)
+                    blob_id = blob.id
+                else:
+                    blob_id = self.lookup_object(filecmd.dataref)
+                self._contents[filecmd.path] = (filecmd.mode, blob_id)
+            elif filecmd.name == b"filedelete":
+                del self._contents[filecmd.path]
+            elif filecmd.name == b"filecopy":
+                self._contents[filecmd.dest_path] = self._contents[filecmd.src_path]
+            elif filecmd.name == b"filerename":
+                self._contents[filecmd.new_path] = self._contents[filecmd.old_path]
+                del self._contents[filecmd.old_path]
+            elif filecmd.name == b"filedeleteall":
+                self._contents = {}
+            else:
+                raise Exception("Command %s not supported" % filecmd.name)
+        commit.tree = commit_tree(
+            self.repo.object_store,
+            ((path, hexsha, mode) for (path, (mode, hexsha)) in self._contents.items()),
+        )
+        if self.last_commit != ZERO_SHA:
+            commit.parents.append(self.last_commit)
+        for merge in cmd.merges:
+            commit.parents.append(self.lookup_object(merge))
+        self.repo.object_store.add_object(commit)
+        self.repo[cmd.ref] = commit.id
+        self.last_commit = commit.id
+        if cmd.mark:
+            self.markers[cmd.mark] = commit.id
+
+    def progress_handler(self, cmd):
+        """Process a ProgressCommand."""
+        pass
+
+    def _reset_base(self, commit_id):
+        if self.last_commit == commit_id:
+            return
+        self._contents = {}
+        self.last_commit = commit_id
+        if commit_id != ZERO_SHA:
+            tree_id = self.repo[commit_id].tree
+            for (
+                path,
+                mode,
+                hexsha,
+            ) in self.repo.object_store.iter_tree_contents(tree_id):
+                self._contents[path] = (mode, hexsha)
+
+    def reset_handler(self, cmd):
+        """Process a ResetCommand."""
+        if cmd.from_ is None:
+            from_ = ZERO_SHA
+        else:
+            from_ = self.lookup_object(cmd.from_)
+        self._reset_base(from_)
+        self.repo.refs[cmd.ref] = from_
+
+    def tag_handler(self, cmd):
+        """Process a TagCommand."""
+        tag = Tag()
+        tag.tagger = cmd.tagger
+        tag.message = cmd.message
+        tag.name = cmd.tag
+        self.repo.object_store.add_object(tag)
+        self.repo.refs[b"refs/tags/" + tag.name] = tag.id
+
+    def feature_handler(self, cmd):
+        """Process a FeatureCommand."""
+        raise fastimport_errors.UnknownFeature(cmd.feature_name)
diff --git a/env/lib/python3.8/site-packages/dulwich/file.py b/env/lib/python3.8/site-packages/dulwich/file.py
new file mode 100644
index 00000000..d007a15c
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/file.py
@@ -0,0 +1,218 @@
+# file.py -- Safe access to git files
+# Copyright (C) 2010 Google, Inc.
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# for a copy of the GNU General Public License
+# and for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Safe access to git files."""
+
+import io
+import os
+import sys
+
+
+def ensure_dir_exists(dirname):
+    """Ensure a directory exists, creating if necessary."""
+    try:
+        os.makedirs(dirname)
+    except FileExistsError:
+        pass
+
+
+def _fancy_rename(oldname, newname):
+    """Rename file with temporary backup file to rollback if rename fails"""
+    if not os.path.exists(newname):
+        try:
+            os.rename(oldname, newname)
+        except OSError:
+            raise
+        return
+
+    # Defer the tempfile import since it pulls in a lot of other things.
+    import tempfile
+
+    # destination file exists
+    try:
+        (fd, tmpfile) = tempfile.mkstemp(".tmp", prefix=oldname, dir=".")
+        os.close(fd)
+        os.remove(tmpfile)
+    except OSError:
+        # either file could not be created (e.g. permission problem)
+        # or could not be deleted (e.g. rude virus scanner)
+        raise
+    try:
+        os.rename(newname, tmpfile)
+    except OSError:
+        raise  # no rename occurred
+    try:
+        os.rename(oldname, newname)
+    except OSError:
+        os.rename(tmpfile, newname)
+        raise
+    os.remove(tmpfile)
+
+
+def GitFile(filename, mode="rb", bufsize=-1, mask=0o644):
+    """Create a file object that obeys the git file locking protocol.
+
+    Returns: a builtin file object or a _GitFile object
+
+    Note: See _GitFile for a description of the file locking protocol.
+
+    Only read-only and write-only (binary) modes are supported; r+, w+, and a
+    are not. To read and write from the same file, you can take advantage of
+    the fact that opening a file for write does not actually open the file you
+    request.
+
+    The default file mask makes any created files user-writable and
+    world-readable.
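+
+    A typical locked write, as a rough sketch:
+
+        f = GitFile("config", "wb")   # actually opens "config.lock"
+        try:
+            f.write(b"[core]\n")
+        finally:
+            f.close()                 # renames "config.lock" over "config"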
+ + """ + if "a" in mode: + raise IOError("append mode not supported for Git files") + if "+" in mode: + raise IOError("read/write mode not supported for Git files") + if "b" not in mode: + raise IOError("text mode not supported for Git files") + if "w" in mode: + return _GitFile(filename, mode, bufsize, mask) + else: + return io.open(filename, mode, bufsize) + + +class FileLocked(Exception): + """File is already locked.""" + + def __init__(self, filename, lockfilename): + self.filename = filename + self.lockfilename = lockfilename + super(FileLocked, self).__init__(filename, lockfilename) + + +class _GitFile(object): + """File that follows the git locking protocol for writes. + + All writes to a file foo will be written into foo.lock in the same + directory, and the lockfile will be renamed to overwrite the original file + on close. + + Note: You *must* call close() or abort() on a _GitFile for the lock to be + released. Typically this will happen in a finally block. + """ + + PROXY_PROPERTIES = set( + [ + "closed", + "encoding", + "errors", + "mode", + "name", + "newlines", + "softspace", + ] + ) + PROXY_METHODS = ( + "__iter__", + "flush", + "fileno", + "isatty", + "read", + "readline", + "readlines", + "seek", + "tell", + "truncate", + "write", + "writelines", + ) + + def __init__(self, filename, mode, bufsize, mask): + self._filename = filename + if isinstance(self._filename, bytes): + self._lockfilename = self._filename + b".lock" + else: + self._lockfilename = self._filename + ".lock" + try: + fd = os.open( + self._lockfilename, + os.O_RDWR | os.O_CREAT | os.O_EXCL | getattr(os, "O_BINARY", 0), + mask, + ) + except FileExistsError as exc: + raise FileLocked(filename, self._lockfilename) from exc + self._file = os.fdopen(fd, mode, bufsize) + self._closed = False + + for method in self.PROXY_METHODS: + setattr(self, method, getattr(self._file, method)) + + def abort(self): + """Close and discard the lockfile without overwriting the target. + + If the file is already closed, this is a no-op. + """ + if self._closed: + return + self._file.close() + try: + os.remove(self._lockfilename) + self._closed = True + except FileNotFoundError: + # The file may have been removed already, which is ok. + self._closed = True + + def close(self): + """Close this file, saving the lockfile over the original. + + Note: If this method fails, it will attempt to delete the lockfile. + However, it is not guaranteed to do so (e.g. if a filesystem + becomes suddenly read-only), which will prevent future writes to + this file until the lockfile is removed manually. + Raises: + OSError: if the original file could not be overwritten. The + lock file is still closed, so further attempts to write to the same + file object will raise ValueError. 
+ """ + if self._closed: + return + self._file.flush() + os.fsync(self._file.fileno()) + self._file.close() + try: + if getattr(os, "replace", None) is not None: + os.replace(self._lockfilename, self._filename) + else: + if sys.platform != "win32": + os.rename(self._lockfilename, self._filename) + else: + # Windows versions prior to Vista don't support atomic + # renames + _fancy_rename(self._lockfilename, self._filename) + finally: + self.abort() + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + self.close() + + def __getattr__(self, name): + """Proxy property calls to the underlying file.""" + if name in self.PROXY_PROPERTIES: + return getattr(self._file, name) + raise AttributeError(name) diff --git a/env/lib/python3.8/site-packages/dulwich/graph.py b/env/lib/python3.8/site-packages/dulwich/graph.py new file mode 100644 index 00000000..a948cc91 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/graph.py @@ -0,0 +1,146 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab +# Copyright (c) 2020 Kevin B. Hendricks, Stratford Ontario Canada +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
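+#
+# A rough usage sketch (names assumed): with a dulwich Repo `r` and commit
+# ids `c1` and `c2`, the helpers below can be driven as
+#
+#     from dulwich.graph import find_merge_base, can_fast_forward
+#
+#     bases = find_merge_base(r, [c1, c2])  # lowest common ancestor(s)
+#     ok = can_fast_forward(r, c1, c2)      # True when c2 descends from c1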
+
+"""
+Implementation of merge-base following the approach of git
+"""
+
+from collections import deque
+
+
+def _find_lcas(lookup_parents, c1, c2s):
+    cands = []
+    cstates = {}
+
+    # Flags to Record State
+    _ANC_OF_1 = 1  # ancestor of commit 1
+    _ANC_OF_2 = 2  # ancestor of commit 2
+    _DNC = 4  # Do Not Consider
+    _LCA = 8  # potential LCA
+
+    def _has_candidates(wlst, cstates):
+        for cmt in wlst:
+            if cmt in cstates:
+                if not (cstates[cmt] & _DNC):
+                    return True
+        return False
+
+    # initialize the working list
+    wlst = deque()
+    cstates[c1] = _ANC_OF_1
+    wlst.append(c1)
+    for c2 in c2s:
+        cstates[c2] = _ANC_OF_2
+        wlst.append(c2)
+
+    # loop until no other LCA candidates are viable in working list
+    # adding any parents to the list in a breadth first manner
+    while _has_candidates(wlst, cstates):
+        cmt = wlst.popleft()
+        flags = cstates[cmt]
+        if flags == (_ANC_OF_1 | _ANC_OF_2):
+            # potential common ancestor
+            if not (flags & _LCA):
+                flags = flags | _LCA
+                cstates[cmt] = flags
+                cands.append(cmt)
+            # mark any parents of this node _DNC as all parents
+            # would be one level further removed common ancestors
+            flags = flags | _DNC
+        parents = lookup_parents(cmt)
+        if parents:
+            for pcmt in parents:
+                if pcmt in cstates:
+                    cstates[pcmt] = cstates[pcmt] | flags
+                else:
+                    cstates[pcmt] = flags
+                    wlst.append(pcmt)
+
+    # walk the final candidates, removing any marked _DNC (i.e. superseded
+    # by a lower LCA found later)
+    results = []
+    for cmt in cands:
+        if not (cstates[cmt] & _DNC):
+            results.append(cmt)
+    return results
+
+
+def find_merge_base(repo, commit_ids):
+    """Find lowest common ancestors of commit_ids[0] and *any* of commit_ids[1:]
+
+    Args:
+      repo: Repository object
+      commit_ids: list of commit ids
+    Returns:
+      list of lowest common ancestor commit_ids
+    """
+    if not commit_ids:
+        return []
+    c1 = commit_ids[0]
+    if not len(commit_ids) > 1:
+        return [c1]
+    c2s = commit_ids[1:]
+    if c1 in c2s:
+        return [c1]
+    parents_provider = repo.parents_provider()
+    return _find_lcas(parents_provider.get_parents, c1, c2s)
+
+
+def find_octopus_base(repo, commit_ids):
+    """Find lowest common ancestors of *all* provided commit_ids
+
+    Args:
+      repo: Repository
+      commit_ids: list of commit ids
+    Returns:
+      list of lowest common ancestor commit_ids
+    """
+
+    if not commit_ids:
+        return []
+    if len(commit_ids) <= 2:
+        return find_merge_base(repo, commit_ids)
+    parents_provider = repo.parents_provider()
+    lcas = [commit_ids[0]]
+    others = commit_ids[1:]
+    for cmt in others:
+        next_lcas = []
+        for ca in lcas:
+            res = _find_lcas(parents_provider.get_parents, cmt, [ca])
+            next_lcas.extend(res)
+        lcas = next_lcas[:]
+    return lcas
+
+
+def can_fast_forward(repo, c1, c2):
+    """Is it possible to fast-forward from c1 to c2?
+
+    Args:
+      repo: Repository to retrieve objects from
+      c1: Commit id for first commit
+      c2: Commit id for second commit
+    """
+    if c1 == c2:
+        return True
+
+    # Algorithm: Find the common ancestor
+    parents_provider = repo.parents_provider()
+    lcas = _find_lcas(parents_provider.get_parents, c1, [c2])
+    return lcas == [c1]
diff --git a/env/lib/python3.8/site-packages/dulwich/greenthreads.py b/env/lib/python3.8/site-packages/dulwich/greenthreads.py
new file mode 100644
index 00000000..ec89e8f3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/greenthreads.py
@@ -0,0 +1,146 @@
+# greenthreads.py -- Utility module for querying an ObjectStore with gevent
+# Copyright (C) 2013 eNovance SAS
+#
+# Author: Fabien Boucher
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# for a copy of the GNU General Public License
+# and for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Utility module for querying an ObjectStore with gevent."""
+
+import gevent
+from gevent import pool
+
+from dulwich.objects import (
+    Commit,
+    Tag,
+)
+from dulwich.object_store import (
+    MissingObjectFinder,
+    _collect_filetree_revs,
+    ObjectStoreIterator,
+)
+
+
+def _split_commits_and_tags(obj_store, lst, ignore_unknown=False, pool=None):
+    """Split object id list into two lists with commit SHA1s and tag SHA1s.
+
+    Same implementation as object_store._split_commits_and_tags
+    except we use gevent to parallelize object retrieval.
+    """
+    commits = set()
+    tags = set()
+
+    def find_commit_type(sha):
+        try:
+            o = obj_store[sha]
+        except KeyError:
+            if not ignore_unknown:
+                raise
+        else:
+            if isinstance(o, Commit):
+                commits.add(sha)
+            elif isinstance(o, Tag):
+                tags.add(sha)
+                commits.add(o.object[1])
+            else:
+                raise KeyError("Not a commit or a tag: %s" % sha)
+
+    jobs = [pool.spawn(find_commit_type, s) for s in lst]
+    gevent.joinall(jobs)
+    return (commits, tags)
+
+
+class GreenThreadsMissingObjectFinder(MissingObjectFinder):
+    """Find the objects missing from another object store.
+
+    Same implementation as object_store.MissingObjectFinder
+    except we use gevent to parallelize object retrieval.
+ """ + + def __init__( + self, + object_store, + haves, + wants, + progress=None, + get_tagged=None, + concurrency=1, + get_parents=None, + ): + def collect_tree_sha(sha): + self.sha_done.add(sha) + cmt = object_store[sha] + _collect_filetree_revs(object_store, cmt.tree, self.sha_done) + + self.object_store = object_store + p = pool.Pool(size=concurrency) + + have_commits, have_tags = _split_commits_and_tags(object_store, haves, True, p) + want_commits, want_tags = _split_commits_and_tags(object_store, wants, False, p) + all_ancestors = object_store._collect_ancestors(have_commits)[0] + missing_commits, common_commits = object_store._collect_ancestors( + want_commits, all_ancestors + ) + + self.sha_done = set() + jobs = [p.spawn(collect_tree_sha, c) for c in common_commits] + gevent.joinall(jobs) + for t in have_tags: + self.sha_done.add(t) + missing_tags = want_tags.difference(have_tags) + wants = missing_commits.union(missing_tags) + self.objects_to_send = set([(w, None, False) for w in wants]) + if progress is None: + self.progress = lambda x: None + else: + self.progress = progress + self._tagged = get_tagged and get_tagged() or {} + + +class GreenThreadsObjectStoreIterator(ObjectStoreIterator): + """ObjectIterator that works on top of an ObjectStore. + + Same implementation as object_store.ObjectStoreIterator + except we use gevent to parallelize object retrieval. + """ + + def __init__(self, store, shas, finder, concurrency=1): + self.finder = finder + self.p = pool.Pool(size=concurrency) + super(GreenThreadsObjectStoreIterator, self).__init__(store, shas) + + def retrieve(self, args): + sha, path = args + return self.store[sha], path + + def __iter__(self): + for sha, path in self.p.imap_unordered(self.retrieve, self.itershas()): + yield sha, path + + def __len__(self): + if len(self._shas) > 0: + return len(self._shas) + while self.finder.objects_to_send: + jobs = [] + for _ in range(0, len(self.finder.objects_to_send)): + jobs.append(self.p.spawn(self.finder.next)) + gevent.joinall(jobs) + for j in jobs: + if j.value is not None: + self._shas.append(j.value) + return len(self._shas) diff --git a/env/lib/python3.8/site-packages/dulwich/hooks.py b/env/lib/python3.8/site-packages/dulwich/hooks.py new file mode 100644 index 00000000..0b4e39d1 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/hooks.py @@ -0,0 +1,203 @@ +# hooks.py -- for dealing with git hooks +# Copyright (C) 2012-2013 Jelmer Vernooij and others. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
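+#
+# A rough usage sketch (paths assumed): running a repository's pre-commit
+# hook could look like
+#
+#     from dulwich.hooks import PreCommitShellHook
+#
+#     hook = PreCommitShellHook("/path/to/worktree", "/path/to/worktree/.git")
+#     hook.execute()  # raises HookError if the hook exits non-zero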
+#
+
+"""Access to hooks."""
+
+import os
+import subprocess
+
+from dulwich.errors import (
+    HookError,
+)
+
+
+class Hook(object):
+    """Generic hook object."""
+
+    def execute(self, *args):
+        """Execute the hook with the given args
+
+        Args:
+          args: argument list to hook
+        Raises:
+          HookError: hook execution failure
+        Returns:
+          a hook may return a useful value
+        """
+        raise NotImplementedError(self.execute)
+
+
+class ShellHook(Hook):
+    """Hook by executable file
+
+    Implements standard githooks(5) [0]:
+
+    [0] http://www.kernel.org/pub/software/scm/git/docs/githooks.html
+    """
+
+    def __init__(
+        self,
+        name,
+        path,
+        numparam,
+        pre_exec_callback=None,
+        post_exec_callback=None,
+        cwd=None,
+    ):
+        """Setup shell hook definition
+
+        Args:
+          name: name of hook for error messages
+          path: absolute path to executable file
+          numparam: number of required parameters
+          pre_exec_callback: closure for setup before execution
+            Defaults to None. Takes in the variable argument list from the
+            execute functions and returns a modified argument list for the
+            shell hook.
+          post_exec_callback: closure for cleanup after execution
+            Defaults to None. Takes in a boolean for hook success and the
+            modified argument list and returns the final hook return value
+            if applicable
+          cwd: working directory to switch to when executing the hook
+        """
+        self.name = name
+        self.filepath = path
+        self.numparam = numparam
+
+        self.pre_exec_callback = pre_exec_callback
+        self.post_exec_callback = post_exec_callback
+
+        self.cwd = cwd
+
+    def execute(self, *args):
+        """Execute the hook with given args"""
+
+        if len(args) != self.numparam:
+            raise HookError(
+                "Hook %s executed with wrong number of args. \
+                Expected %d. Saw %d. args: %s"
+                % (self.name, self.numparam, len(args), args)
+            )
+
+        if self.pre_exec_callback is not None:
+            args = self.pre_exec_callback(*args)
+
+        try:
+            ret = subprocess.call(
+                [os.path.relpath(self.filepath, self.cwd)] + list(args),
+                cwd=self.cwd)
+            if ret != 0:
+                if self.post_exec_callback is not None:
+                    self.post_exec_callback(0, *args)
+                raise HookError(
+                    "Hook %s exited with non-zero status %d" % (self.name, ret)
+                )
+            if self.post_exec_callback is not None:
+                return self.post_exec_callback(1, *args)
+        except OSError:  # no file. silent failure.
+ if self.post_exec_callback is not None: + self.post_exec_callback(0, *args) + + +class PreCommitShellHook(ShellHook): + """pre-commit shell hook""" + + def __init__(self, cwd, controldir): + filepath = os.path.join(controldir, "hooks", "pre-commit") + + ShellHook.__init__(self, "pre-commit", filepath, 0, cwd=cwd) + + +class PostCommitShellHook(ShellHook): + """post-commit shell hook""" + + def __init__(self, controldir): + filepath = os.path.join(controldir, "hooks", "post-commit") + + ShellHook.__init__(self, "post-commit", filepath, 0, cwd=controldir) + + +class CommitMsgShellHook(ShellHook): + """commit-msg shell hook + """ + + def __init__(self, controldir): + filepath = os.path.join(controldir, "hooks", "commit-msg") + + def prepare_msg(*args): + import tempfile + + (fd, path) = tempfile.mkstemp() + + with os.fdopen(fd, "wb") as f: + f.write(args[0]) + + return (path,) + + def clean_msg(success, *args): + if success: + with open(args[0], "rb") as f: + new_msg = f.read() + os.unlink(args[0]) + return new_msg + os.unlink(args[0]) + + ShellHook.__init__( + self, "commit-msg", filepath, 1, prepare_msg, clean_msg, controldir + ) + + +class PostReceiveShellHook(ShellHook): + """post-receive shell hook""" + + def __init__(self, controldir): + self.controldir = controldir + filepath = os.path.join(controldir, "hooks", "post-receive") + ShellHook.__init__(self, "post-receive", path=filepath, numparam=0) + + def execute(self, client_refs): + # do nothing if the script doesn't exist + if not os.path.exists(self.filepath): + return None + + try: + env = os.environ.copy() + env["GIT_DIR"] = self.controldir + + p = subprocess.Popen( + self.filepath, + stdin=subprocess.PIPE, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + env=env, + ) + + # client_refs is a list of (oldsha, newsha, ref) + in_data = b"\n".join([b" ".join(ref) for ref in client_refs]) + + out_data, err_data = p.communicate(in_data) + + if (p.returncode != 0) or err_data: + err_fmt = b"post-receive exit code: %d\n" + b"stdout:\n%s\nstderr:\n%s" + err_msg = err_fmt % (p.returncode, out_data, err_data) + raise HookError(err_msg.decode('utf-8', 'backslashreplace')) + return out_data + except OSError as err: + raise HookError(repr(err)) from err diff --git a/env/lib/python3.8/site-packages/dulwich/ignore.py b/env/lib/python3.8/site-packages/dulwich/ignore.py new file mode 100644 index 00000000..1831fe7d --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/ignore.py @@ -0,0 +1,396 @@ +# Copyright (C) 2017 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Parsing of gitignore files. 
+
+For details of the matching rules, see https://git-scm.com/docs/gitignore
+"""
+
+import os.path
+import re
+from typing import (
+    BinaryIO,
+    Iterable,
+    List,
+    Optional,
+    TYPE_CHECKING,
+    Dict,
+    Union,
+)
+
+if TYPE_CHECKING:
+    from dulwich.repo import Repo
+
+from dulwich.config import get_xdg_config_home_path, Config
+
+
+def _translate_segment(segment: bytes) -> bytes:
+    if segment == b"*":
+        return b"[^/]+"
+    res = b""
+    i, n = 0, len(segment)
+    while i < n:
+        c = segment[i : i + 1]
+        i = i + 1
+        if c == b"*":
+            res += b"[^/]*"
+        elif c == b"?":
+            res += b"[^/]"
+        elif c == b"\\":
+            res += re.escape(segment[i : i + 1])
+            i += 1
+        elif c == b"[":
+            j = i
+            if j < n and segment[j : j + 1] == b"!":
+                j = j + 1
+            if j < n and segment[j : j + 1] == b"]":
+                j = j + 1
+            while j < n and segment[j : j + 1] != b"]":
+                j = j + 1
+            if j >= n:
+                res += b"\\["
+            else:
+                stuff = segment[i:j].replace(b"\\", b"\\\\")
+                i = j + 1
+                if stuff.startswith(b"!"):
+                    stuff = b"^" + stuff[1:]
+                elif stuff.startswith(b"^"):
+                    stuff = b"\\" + stuff
+                res += b"[" + stuff + b"]"
+        else:
+            res += re.escape(c)
+    return res
+
+
+def translate(pat: bytes) -> bytes:
+    """Translate a shell PATTERN to a regular expression.
+
+    There is no way to quote meta-characters.
+
+    Originally copied from fnmatch in Python 2.7, but modified for Dulwich
+    to cope with features in Git ignore patterns.
+    """
+
+    res = b"(?ms)"
+
+    if b"/" not in pat[:-1]:
+        # If there's no slash, this is a filename-based match
+        res += b"(.*/)?"
+
+    if pat.startswith(b"**/"):
+        # Leading **/
+        pat = pat[2:]
+        res += b"(.*/)?"
+
+    if pat.startswith(b"/"):
+        pat = pat[1:]
+
+    for i, segment in enumerate(pat.split(b"/")):
+        if segment == b"**":
+            res += b"(/.*)?"
+            continue
+        else:
+            res += (re.escape(b"/") if i > 0 else b"") + _translate_segment(segment)
+
+    if not pat.endswith(b"/"):
+        res += b"/?"
+
+    return res + b"\\Z"
+
+
+def read_ignore_patterns(f: BinaryIO) -> Iterable[bytes]:
+    """Read a git ignore file.
+
+    Args:
+      f: File-like object to read from
+    Returns: Iterator over patterns
+    """
+
+    for line in f:
+        line = line.rstrip(b"\r\n")
+
+        # Ignore blank lines, they're used for readability.
+        if not line.strip():
+            continue
+
+        if line.startswith(b"#"):
+            # Comment
+            continue
+
+        # Trailing spaces are ignored unless they are quoted with a backslash.
+        while line.endswith(b" ") and not line.endswith(b"\\ "):
+            line = line[:-1]
+        line = line.replace(b"\\ ", b" ")
+
+        yield line
+
+
+def match_pattern(path: bytes, pattern: bytes, ignorecase: bool = False) -> bool:
+    """Match a gitignore-style pattern against a path.
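+
+    For example, match_pattern(b"foo/bar.pyc", b"*.pyc") is True, while
+    match_pattern(b"foo/bar.py", b"*.pyc") is False.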
+ + Args: + path: Path to match + pattern: Pattern to match + ignorecase: Whether to do case-sensitive matching + Returns: + bool indicating whether the pattern matched + """ + return Pattern(pattern, ignorecase).match(path) + + +class Pattern(object): + """A single ignore pattern.""" + + def __init__(self, pattern: bytes, ignorecase: bool = False): + self.pattern = pattern + self.ignorecase = ignorecase + if pattern[0:1] == b"!": + self.is_exclude = False + pattern = pattern[1:] + else: + if pattern[0:1] == b"\\": + pattern = pattern[1:] + self.is_exclude = True + flags = 0 + if self.ignorecase: + flags = re.IGNORECASE + self._re = re.compile(translate(pattern), flags) + + def __bytes__(self) -> bytes: + return self.pattern + + def __str__(self) -> str: + return os.fsdecode(self.pattern) + + def __eq__(self, other: object) -> bool: + return ( + isinstance(other, type(self)) + and self.pattern == other.pattern + and self.ignorecase == other.ignorecase + ) + + def __repr__(self) -> str: + return "%s(%r, %r)" % ( + type(self).__name__, + self.pattern, + self.ignorecase, + ) + + def match(self, path: bytes) -> bool: + """Try to match a path against this ignore pattern. + + Args: + path: Path to match (relative to ignore location) + Returns: boolean + """ + return bool(self._re.match(path)) + + +class IgnoreFilter(object): + def __init__(self, patterns: Iterable[bytes], ignorecase: bool = False, path=None): + self._patterns = [] # type: List[Pattern] + self._ignorecase = ignorecase + self._path = path + for pattern in patterns: + self.append_pattern(pattern) + + def append_pattern(self, pattern: bytes) -> None: + """Add a pattern to the set.""" + self._patterns.append(Pattern(pattern, self._ignorecase)) + + def find_matching(self, path: Union[bytes, str]) -> Iterable[Pattern]: + """Yield all matching patterns for path. + + Args: + path: Path to match + Returns: + Iterator over iterators + """ + if not isinstance(path, bytes): + path = os.fsencode(path) + for pattern in self._patterns: + if pattern.match(path): + yield pattern + + def is_ignored(self, path: bytes) -> Optional[bool]: + """Check whether a path is ignored. + + For directories, include a trailing slash. + + Returns: status is None if file is not mentioned, True if it is + included, False if it is explicitly excluded. + """ + status = None + for pattern in self.find_matching(path): + status = pattern.is_exclude + return status + + @classmethod + def from_path(cls, path, ignorecase: bool = False) -> "IgnoreFilter": + with open(path, "rb") as f: + return cls(read_ignore_patterns(f), ignorecase, path=path) + + def __repr__(self) -> str: + path = getattr(self, "_path", None) + if path is not None: + return "%s.from_path(%r)" % (type(self).__name__, path) + else: + return "<%s>" % (type(self).__name__) + + +class IgnoreFilterStack(object): + """Check for ignore status in multiple filters.""" + + def __init__(self, filters): + self._filters = filters + + def is_ignored(self, path: str) -> Optional[bool]: + """Check whether a path is explicitly included or excluded in ignores. + + Args: + path: Path to check + Returns: + None if the file is not mentioned, True if it is included, + False if it is explicitly excluded. + """ + status = None + for filter in self._filters: + status = filter.is_ignored(path) + if status is not None: + return status + return status + + +def default_user_ignore_filter_path(config: Config) -> str: + """Return default user ignore filter path. 
+ + Args: + config: A Config object + Returns: + Path to a global ignore file + """ + try: + value = config.get((b"core",), b"excludesFile") + assert isinstance(value, bytes) + return value.decode(encoding="utf-8") + except KeyError: + pass + + return get_xdg_config_home_path("git", "ignore") + + +class IgnoreFilterManager(object): + """Ignore file manager.""" + + def __init__( + self, + top_path: str, + global_filters: List[IgnoreFilter], + ignorecase: bool, + ): + self._path_filters = {} # type: Dict[str, Optional[IgnoreFilter]] + self._top_path = top_path + self._global_filters = global_filters + self._ignorecase = ignorecase + + def __repr__(self) -> str: + return "%s(%s, %r, %r)" % ( + type(self).__name__, + self._top_path, + self._global_filters, + self._ignorecase, + ) + + def _load_path(self, path: str) -> Optional[IgnoreFilter]: + try: + return self._path_filters[path] + except KeyError: + pass + + p = os.path.join(self._top_path, path, ".gitignore") + try: + self._path_filters[path] = IgnoreFilter.from_path(p, self._ignorecase) + except IOError: + self._path_filters[path] = None + return self._path_filters[path] + + def find_matching(self, path: str) -> Iterable[Pattern]: + """Find matching patterns for path. + + Args: + path: Path to check + Returns: + Iterator over Pattern instances + """ + if os.path.isabs(path): + raise ValueError("%s is an absolute path" % path) + filters = [(0, f) for f in self._global_filters] + if os.path.sep != "/": + path = path.replace(os.path.sep, "/") + parts = path.split("/") + matches = [] + for i in range(len(parts) + 1): + dirname = "/".join(parts[:i]) + for s, f in filters: + relpath = "/".join(parts[s:i]) + if i < len(parts): + # Paths leading up to the final part are all directories, + # so need a trailing slash. + relpath += "/" + matches += list(f.find_matching(relpath)) + ignore_filter = self._load_path(dirname) + if ignore_filter is not None: + filters.insert(0, (i, ignore_filter)) + return iter(matches) + + def is_ignored(self, path: str) -> Optional[bool]: + """Check whether a path is explicitly included or excluded in ignores. + + Args: + path: Path to check + Returns: + None if the file is not mentioned, True if it is included, + False if it is explicitly excluded. + """ + matches = list(self.find_matching(path)) + if matches: + return matches[-1].is_exclude + return None + + @classmethod + def from_repo(cls, repo: "Repo") -> "IgnoreFilterManager": + """Create a IgnoreFilterManager from a repository. + + Args: + repo: Repository object + Returns: + A `IgnoreFilterManager` object + """ + global_filters = [] + for p in [ + os.path.join(repo.controldir(), "info", "exclude"), + default_user_ignore_filter_path(repo.get_config_stack()), + ]: + try: + global_filters.append(IgnoreFilter.from_path(os.path.expanduser(p))) + except IOError: + pass + config = repo.get_config_stack() + ignorecase = config.get_boolean((b"core"), (b"ignorecase"), False) + return cls(repo.path, global_filters, ignorecase) diff --git a/env/lib/python3.8/site-packages/dulwich/index.py b/env/lib/python3.8/site-packages/dulwich/index.py new file mode 100644 index 00000000..7caed095 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/index.py @@ -0,0 +1,1040 @@ +# index.py -- File parser/writer for the git index file +# Copyright (C) 2008-2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. 
You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Parser for the git index file format.""" + +import collections +import os +import stat +import struct +import sys +from typing import ( + Any, + BinaryIO, + Callable, + Dict, + List, + Optional, + TYPE_CHECKING, + Iterable, + Iterator, + Tuple, + Union, +) + +if TYPE_CHECKING: + from dulwich.object_store import BaseObjectStore + +from dulwich.file import GitFile +from dulwich.objects import ( + Blob, + S_IFGITLINK, + S_ISGITLINK, + Tree, + hex_to_sha, + sha_to_hex, + ObjectID, +) +from dulwich.pack import ( + SHA1Reader, + SHA1Writer, +) + + +# TODO(jelmer): Switch to dataclass? +IndexEntry = collections.namedtuple( + "IndexEntry", + [ + "ctime", + "mtime", + "dev", + "ino", + "mode", + "uid", + "gid", + "size", + "sha", + "flags", + "extended_flags", + ], +) + + +# 2-bit stage (during merge) +FLAG_STAGEMASK = 0x3000 + +# assume-valid +FLAG_VALID = 0x8000 + +# extended flag (must be zero in version 2) +FLAG_EXTENDED = 0x4000 + + +# used by sparse checkout +EXTENDED_FLAG_SKIP_WORKTREE = 0x4000 + +# used by "git add -N" +EXTENDED_FLAG_INTEND_TO_ADD = 0x2000 + + +DEFAULT_VERSION = 2 + + +def pathsplit(path: bytes) -> Tuple[bytes, bytes]: + """Split a /-delimited path into a directory part and a basename. + + Args: + path: The path to split. + Returns: + Tuple with directory name and basename + """ + try: + (dirname, basename) = path.rsplit(b"/", 1) + except ValueError: + return (b"", path) + else: + return (dirname, basename) + + +def pathjoin(*args): + """Join a /-delimited path.""" + return b"/".join([p for p in args if p]) + + +def read_cache_time(f): + """Read a cache time. + + Args: + f: File-like object to read from + Returns: + Tuple with seconds and nanoseconds + """ + return struct.unpack(">LL", f.read(8)) + + +def write_cache_time(f, t): + """Write a cache time. + + Args: + f: File-like object to write to + t: Time to write (as int, float or tuple with secs and nsecs) + """ + if isinstance(t, int): + t = (t, 0) + elif isinstance(t, float): + (secs, nsecs) = divmod(t, 1.0) + t = (int(secs), int(nsecs * 1000000000)) + elif not isinstance(t, tuple): + raise TypeError(t) + f.write(struct.pack(">LL", *t)) + + +def read_cache_entry(f, version: int) -> Tuple[str, IndexEntry]: + """Read an entry from a cache file. 
+ + Args: + f: File-like object to read from + Returns: + tuple with: name, IndexEntry + """ + beginoffset = f.tell() + ctime = read_cache_time(f) + mtime = read_cache_time(f) + ( + dev, + ino, + mode, + uid, + gid, + size, + sha, + flags, + ) = struct.unpack(">LLLLLL20sH", f.read(20 + 4 * 6 + 2)) + if flags & FLAG_EXTENDED: + if version < 3: + raise AssertionError( + 'extended flag set in index with version < 3') + (extended_flags, ) = struct.unpack(">H", f.read(2)) + else: + extended_flags = 0 + name = f.read((flags & 0x0FFF)) + # Padding: + if version < 4: + real_size = (f.tell() - beginoffset + 8) & ~7 + f.read((beginoffset + real_size) - f.tell()) + return ( + name, + IndexEntry( + ctime, + mtime, + dev, + ino, + mode, + uid, + gid, + size, + sha_to_hex(sha), + flags & ~0x0FFF, + extended_flags, + )) + + +def write_cache_entry(f, name: bytes, entry: IndexEntry, version: int) -> None: + """Write an index entry to a file. + + Args: + f: File object + entry: IndexEntry to write, tuple with: + """ + beginoffset = f.tell() + write_cache_time(f, entry.ctime) + write_cache_time(f, entry.mtime) + flags = len(name) | (entry.flags & ~0x0FFF) + if entry.extended_flags: + flags |= FLAG_EXTENDED + if flags & FLAG_EXTENDED and version is not None and version < 3: + raise AssertionError('unable to use extended flags in version < 3') + f.write( + struct.pack( + b">LLLLLL20sH", + entry.dev & 0xFFFFFFFF, + entry.ino & 0xFFFFFFFF, + entry.mode, + entry.uid, + entry.gid, + entry.size, + hex_to_sha(entry.sha), + flags, + ) + ) + if flags & FLAG_EXTENDED: + f.write(struct.pack(b">H", entry.extended_flags)) + f.write(name) + if version < 4: + real_size = (f.tell() - beginoffset + 8) & ~7 + f.write(b"\0" * ((beginoffset + real_size) - f.tell())) + + +class UnsupportedIndexFormat(Exception): + """An unsupported index format was encountered.""" + + def __init__(self, version): + self.index_format_version = version + + +def read_index(f: BinaryIO): + """Read an index file, yielding the individual entries.""" + header = f.read(4) + if header != b"DIRC": + raise AssertionError("Invalid index file header: %r" % header) + (version, num_entries) = struct.unpack(b">LL", f.read(4 * 2)) + if version not in (1, 2, 3): + raise UnsupportedIndexFormat(version) + for i in range(num_entries): + yield read_cache_entry(f, version) + + +def read_index_dict(f) -> Dict[bytes, IndexEntry]: + """Read an index file and return it as a dictionary. + + Args: + f: File object to read from + """ + ret = {} + for name, entry in read_index(f): + ret[name] = entry + return ret + + +def write_index(f: BinaryIO, entries: List[Tuple[bytes, IndexEntry]], version: Optional[int] = None): + """Write an index file. + + Args: + f: File-like object to write to + version: Version number to write + entries: Iterable over the entries to write + """ + if version is None: + version = DEFAULT_VERSION + f.write(b"DIRC") + f.write(struct.pack(b">LL", version, len(entries))) + for name, entry in entries: + write_cache_entry(f, name, entry, version) + + +def write_index_dict( + f: BinaryIO, + entries: Dict[bytes, IndexEntry], + version: Optional[int] = None, +) -> None: + """Write an index file based on the contents of a dictionary.""" + entries_list = [] + for name in sorted(entries): + entries_list.append((name, entries[name])) + write_index(f, entries_list, version=version) + + +def cleanup_mode(mode: int) -> int: + """Cleanup a mode value. + + This will return a mode that can be stored in a tree object. + + Args: + mode: Mode to clean up. 
+ Returns: + mode + """ + if stat.S_ISLNK(mode): + return stat.S_IFLNK + elif stat.S_ISDIR(mode): + return stat.S_IFDIR + elif S_ISGITLINK(mode): + return S_IFGITLINK + ret = stat.S_IFREG | 0o644 + if mode & 0o100: + ret |= 0o111 + return ret + + +class Index(object): + """A Git Index file.""" + + def __init__(self, filename: Union[bytes, str], read=True): + """Create an index object associated with the given filename. + + Args: + filename: Path to the index file + read: Whether to initialize the index from the given file, should it exist. + """ + self._filename = filename + # TODO(jelmer): Store the version returned by read_index + self._version = None + self.clear() + if read: + self.read() + + @property + def path(self): + return self._filename + + def __repr__(self): + return "%s(%r)" % (self.__class__.__name__, self._filename) + + def write(self) -> None: + """Write current contents of index to disk.""" + f = GitFile(self._filename, "wb") + try: + f = SHA1Writer(f) + write_index_dict(f, self._byname, version=self._version) + finally: + f.close() + + def read(self): + """Read current contents of index from disk.""" + if not os.path.exists(self._filename): + return + f = GitFile(self._filename, "rb") + try: + f = SHA1Reader(f) + for name, entry in read_index(f): + self[name] = entry + # FIXME: Additional data? + f.read(os.path.getsize(self._filename) - f.tell() - 20) + f.check_sha() + finally: + f.close() + + def __len__(self) -> int: + """Number of entries in this index file.""" + return len(self._byname) + + def __getitem__(self, name: bytes) -> IndexEntry: + """Retrieve entry by relative path. + + Returns: tuple with (ctime, mtime, dev, ino, mode, uid, gid, size, sha, + flags) + """ + return self._byname[name] + + def __iter__(self) -> Iterator[bytes]: + """Iterate over the paths in this index.""" + return iter(self._byname) + + def get_sha1(self, path: bytes) -> bytes: + """Return the (git object) SHA1 for the object at a path.""" + return self[path].sha + + def get_mode(self, path: bytes) -> int: + """Return the POSIX file mode for the object at a path.""" + return self[path].mode + + def iterobjects(self) -> Iterable[Tuple[bytes, bytes, int]]: + """Iterate over path, sha, mode tuples for use with commit_tree.""" + for path in self: + entry = self[path] + yield path, entry.sha, cleanup_mode(entry.mode) + + def clear(self): + """Remove all contents from this index.""" + self._byname = {} + + def __setitem__(self, name: bytes, x: IndexEntry): + assert isinstance(name, bytes) + assert len(x) == len(IndexEntry._fields) + # Remove the old entry if any + self._byname[name] = IndexEntry(*x) + + def __delitem__(self, name: bytes): + assert isinstance(name, bytes) + del self._byname[name] + + def iteritems(self) -> Iterator[Tuple[bytes, IndexEntry]]: + return self._byname.items() + + def items(self) -> Iterator[Tuple[bytes, IndexEntry]]: + return self._byname.items() + + def update(self, entries: Dict[bytes, IndexEntry]): + for name, value in entries.items(): + self[name] = value + + def changes_from_tree( + self, object_store, tree: ObjectID, want_unchanged: bool = False): + """Find the differences between the contents of this index and a tree. 
+ + Args: + object_store: Object store to use for retrieving tree contents + tree: SHA1 of the root tree + want_unchanged: Whether unchanged files should be reported + Returns: Iterator over tuples with (oldpath, newpath), (oldmode, + newmode), (oldsha, newsha) + """ + + def lookup_entry(path): + entry = self[path] + return entry.sha, cleanup_mode(entry.mode) + + for (name, mode, sha) in changes_from_tree( + self._byname.keys(), + lookup_entry, + object_store, + tree, + want_unchanged=want_unchanged, + ): + yield (name, mode, sha) + + def commit(self, object_store): + """Create a new tree from an index. + + Args: + object_store: Object store to save the tree in + Returns: + Root tree SHA + """ + return commit_tree(object_store, self.iterobjects()) + + +def commit_tree( + object_store: "BaseObjectStore", blobs: Iterable[Tuple[bytes, bytes, int]] +) -> bytes: + """Commit a new tree. + + Args: + object_store: Object store to add trees to + blobs: Iterable over blob path, sha, mode entries + Returns: + SHA1 of the created tree. + """ + trees = {b"": {}} # type: Dict[bytes, Any] + + def add_tree(path): + if path in trees: + return trees[path] + dirname, basename = pathsplit(path) + t = add_tree(dirname) + assert isinstance(basename, bytes) + newtree = {} + t[basename] = newtree + trees[path] = newtree + return newtree + + for path, sha, mode in blobs: + tree_path, basename = pathsplit(path) + tree = add_tree(tree_path) + tree[basename] = (mode, sha) + + def build_tree(path): + tree = Tree() + for basename, entry in trees[path].items(): + if isinstance(entry, dict): + mode = stat.S_IFDIR + sha = build_tree(pathjoin(path, basename)) + else: + (mode, sha) = entry + tree.add(basename, mode, sha) + object_store.add_object(tree) + return tree.id + + return build_tree(b"") + + +def commit_index(object_store: "BaseObjectStore", index: Index) -> bytes: + """Create a new tree from an index. + + Args: + object_store: Object store to save the tree in + index: Index file + Note: This function is deprecated, use index.commit() instead. + Returns: Root tree sha. + """ + return commit_tree(object_store, index.iterobjects()) + + +def changes_from_tree( + names: Iterable[bytes], + lookup_entry: Callable[[bytes], Tuple[bytes, int]], + object_store: "BaseObjectStore", + tree: Optional[bytes], + want_unchanged=False, +) -> Iterable[ + Tuple[ + Tuple[Optional[bytes], Optional[bytes]], + Tuple[Optional[int], Optional[int]], + Tuple[Optional[bytes], Optional[bytes]], + ] +]: + """Find the differences between the contents of a tree and + a working copy. 
+ + Args: + names: Iterable of names in the working copy + lookup_entry: Function to lookup an entry in the working copy + object_store: Object store to use for retrieving tree contents + tree: SHA1 of the root tree, or None for an empty tree + want_unchanged: Whether unchanged files should be reported + Returns: Iterator over tuples with (oldpath, newpath), (oldmode, newmode), + (oldsha, newsha) + """ + # TODO(jelmer): Support a include_trees option + other_names = set(names) + + if tree is not None: + for (name, mode, sha) in object_store.iter_tree_contents(tree): + try: + (other_sha, other_mode) = lookup_entry(name) + except KeyError: + # Was removed + yield ((name, None), (mode, None), (sha, None)) + else: + other_names.remove(name) + if want_unchanged or other_sha != sha or other_mode != mode: + yield ((name, name), (mode, other_mode), (sha, other_sha)) + + # Mention added files + for name in other_names: + try: + (other_sha, other_mode) = lookup_entry(name) + except KeyError: + pass + else: + yield ((None, name), (None, other_mode), (None, other_sha)) + + +def index_entry_from_stat( + stat_val, hex_sha: bytes, flags: int, mode: Optional[int] = None, + extended_flags: Optional[int] = None +): + """Create a new index entry from a stat value. + + Args: + stat_val: POSIX stat_result instance + hex_sha: Hex sha of the object + flags: Index flags + """ + if mode is None: + mode = cleanup_mode(stat_val.st_mode) + + return IndexEntry( + stat_val.st_ctime, + stat_val.st_mtime, + stat_val.st_dev, + stat_val.st_ino, + mode, + stat_val.st_uid, + stat_val.st_gid, + stat_val.st_size, + hex_sha, + flags, + extended_flags + ) + + +if sys.platform == 'win32': + # On Windows, creating symlinks either requires administrator privileges + # or developer mode. Raise a more helpful error when we're unable to + # create symlinks + + # https://github.com/jelmer/dulwich/issues/1005 + + class WindowsSymlinkPermissionError(PermissionError): + + def __init__(self, errno, msg, filename): + super(PermissionError, self).__init__( + errno, "Unable to create symlink; " + "do you have developer mode enabled? %s" % msg, + filename) + + def symlink(src, dst, target_is_directory=False, *, dir_fd=None): + try: + return os.symlink( + src, dst, target_is_directory=target_is_directory, + dir_fd=dir_fd) + except PermissionError as e: + raise WindowsSymlinkPermissionError( + e.errno, e.strerror, e.filename) from e +else: + symlink = os.symlink + + +def build_file_from_blob( + blob: Blob, mode: int, target_path: bytes, *, honor_filemode=True, + tree_encoding="utf-8", symlink_fn=None +): + """Build a file or symlink on disk based on a Git object. + + Args: + blob: The git object + mode: File mode + target_path: Path to write to + honor_filemode: An optional flag to honor core.filemode setting in + config file, default is core.filemode=True, change executable bit + symlink: Function to use for creating symlinks + Returns: stat object for the file + """ + try: + oldstat = os.lstat(target_path) + except FileNotFoundError: + oldstat = None + contents = blob.as_raw_string() + if stat.S_ISLNK(mode): + if oldstat: + os.unlink(target_path) + if sys.platform == "win32": + # os.readlink on Python3 on Windows requires a unicode string. 
+ contents = contents.decode(tree_encoding) # type: ignore + target_path = target_path.decode(tree_encoding) # type: ignore + (symlink_fn or symlink)(contents, target_path) + else: + if oldstat is not None and oldstat.st_size == len(contents): + with open(target_path, "rb") as f: + if f.read() == contents: + return oldstat + + with open(target_path, "wb") as f: + # Write out file + f.write(contents) + + if honor_filemode: + os.chmod(target_path, mode) + + return os.lstat(target_path) + + +INVALID_DOTNAMES = (b".git", b".", b"..", b"") + + +def validate_path_element_default(element: bytes) -> bool: + return element.lower() not in INVALID_DOTNAMES + + +def validate_path_element_ntfs(element: bytes) -> bool: + stripped = element.rstrip(b". ").lower() + if stripped in INVALID_DOTNAMES: + return False + if stripped == b"git~1": + return False + return True + + +def validate_path(path: bytes, + element_validator=validate_path_element_default) -> bool: + """Default path validator that just checks for .git/.""" + parts = path.split(b"/") + for p in parts: + if not element_validator(p): + return False + else: + return True + + +def build_index_from_tree( + root_path: Union[str, bytes], + index_path: Union[str, bytes], + object_store: "BaseObjectStore", + tree_id: bytes, + honor_filemode: bool = True, + validate_path_element=validate_path_element_default, + symlink_fn=None +): + """Generate and materialize index from a tree + + Args: + tree_id: Tree to materialize + root_path: Target dir for materialized index files + index_path: Target path for generated index + object_store: Non-empty object store holding tree contents + honor_filemode: An optional flag to honor core.filemode setting in + config file, default is core.filemode=True, change executable bit + validate_path_element: Function to validate path elements to check + out; default just refuses .git and .. directories. + + Note: existing index is wiped and contents are not merged + in a working dir. Suitable only for fresh clones. + """ + index = Index(index_path, read=False) + if not isinstance(root_path, bytes): + root_path = os.fsencode(root_path) + + for entry in object_store.iter_tree_contents(tree_id): + if not validate_path(entry.path, validate_path_element): + continue + full_path = _tree_to_fs_path(root_path, entry.path) + + if not os.path.exists(os.path.dirname(full_path)): + os.makedirs(os.path.dirname(full_path)) + + # TODO(jelmer): Merge new index into working tree + if S_ISGITLINK(entry.mode): + if not os.path.isdir(full_path): + os.mkdir(full_path) + st = os.lstat(full_path) + # TODO(jelmer): record and return submodule paths + else: + obj = object_store[entry.sha] + st = build_file_from_blob( + obj, entry.mode, full_path, + honor_filemode=honor_filemode, + symlink_fn=symlink_fn, + ) + + # Add file to index + if not honor_filemode or S_ISGITLINK(entry.mode): + # we can not use tuple slicing to build a new tuple, + # because on windows that will convert the times to + # longs, which causes errors further along + st_tuple = ( + entry.mode, + st.st_ino, + st.st_dev, + st.st_nlink, + st.st_uid, + st.st_gid, + st.st_size, + st.st_atime, + st.st_mtime, + st.st_ctime, + ) + st = st.__class__(st_tuple) + index[entry.path] = index_entry_from_stat(st, entry.sha, 0) + + index.write() + + +def blob_from_path_and_mode(fs_path: bytes, mode: int, + tree_encoding="utf-8"): + """Create a blob from a path and a stat object. 
+ + Args: + fs_path: Full file system path to file + mode: File mode + Returns: A `Blob` object + """ + assert isinstance(fs_path, bytes) + blob = Blob() + if stat.S_ISLNK(mode): + if sys.platform == "win32": + # os.readlink on Python3 on Windows requires a unicode string. + blob.data = os.readlink(os.fsdecode(fs_path)).encode(tree_encoding) + else: + blob.data = os.readlink(fs_path) + else: + with open(fs_path, "rb") as f: + blob.data = f.read() + return blob + + +def blob_from_path_and_stat(fs_path: bytes, st, tree_encoding="utf-8"): + """Create a blob from a path and a stat object. + + Args: + fs_path: Full file system path to file + st: A stat object + Returns: A `Blob` object + """ + return blob_from_path_and_mode(fs_path, st.st_mode, tree_encoding) + + +def read_submodule_head(path: Union[str, bytes]) -> Optional[bytes]: + """Read the head commit of a submodule. + + Args: + path: path to the submodule + Returns: HEAD sha, None if not a valid head/repository + """ + from dulwich.errors import NotGitRepository + from dulwich.repo import Repo + + # Repo currently expects a "str", so decode if necessary. + # TODO(jelmer): Perhaps move this into Repo() ? + if not isinstance(path, str): + path = os.fsdecode(path) + try: + repo = Repo(path) + except NotGitRepository: + return None + try: + return repo.head() + except KeyError: + return None + + +def _has_directory_changed(tree_path: bytes, entry): + """Check if a directory has changed after getting an error. + + When handling an error trying to create a blob from a path, call this + function. It will check if the path is a directory. If it's a directory + and a submodule, check the submodule head to see if it's has changed. If + not, consider the file as changed as Git tracked a file and not a + directory. + + Return true if the given path should be considered as changed and False + otherwise or if the path is not a directory. + """ + # This is actually a directory + if os.path.exists(os.path.join(tree_path, b".git")): + # Submodule + head = read_submodule_head(tree_path) + if entry.sha != head: + return True + else: + # The file was changed to a directory, so consider it removed. + return True + + return False + + +def get_unstaged_changes( + index: Index, root_path: Union[str, bytes], + filter_blob_callback=None): + """Walk through an index and check for differences against working tree. + + Args: + index: index to check + root_path: path in which to find files + Returns: iterator over paths with unstaged changes + """ + # For each entry in the index check the sha1 & ensure not staged + if not isinstance(root_path, bytes): + root_path = os.fsencode(root_path) + + for tree_path, entry in index.iteritems(): + full_path = _tree_to_fs_path(root_path, tree_path) + try: + st = os.lstat(full_path) + if stat.S_ISDIR(st.st_mode): + if _has_directory_changed(tree_path, entry): + yield tree_path + continue + + if not stat.S_ISREG(st.st_mode) and not stat.S_ISLNK(st.st_mode): + continue + + blob = blob_from_path_and_stat(full_path, st) + + if filter_blob_callback is not None: + blob = filter_blob_callback(blob, tree_path) + except FileNotFoundError: + # The file was removed, so we assume that counts as + # different from whatever file used to exist. + yield tree_path + else: + if blob.id != entry.sha: + yield tree_path + + +os_sep_bytes = os.sep.encode("ascii") + + +def _tree_to_fs_path(root_path: bytes, tree_path: bytes): + """Convert a git tree path to a file system path. 
+ + Args: + root_path: Root filesystem path + tree_path: Git tree path as bytes + + Returns: File system path. + """ + assert isinstance(tree_path, bytes) + if os_sep_bytes != b"/": + sep_corrected_path = tree_path.replace(b"/", os_sep_bytes) + else: + sep_corrected_path = tree_path + return os.path.join(root_path, sep_corrected_path) + + +def _fs_to_tree_path(fs_path: Union[str, bytes]) -> bytes: + """Convert a file system path to a git tree path. + + Args: + fs_path: File system path. + + Returns: Git tree path as bytes + """ + if not isinstance(fs_path, bytes): + fs_path_bytes = os.fsencode(fs_path) + else: + fs_path_bytes = fs_path + if os_sep_bytes != b"/": + tree_path = fs_path_bytes.replace(os_sep_bytes, b"/") + else: + tree_path = fs_path_bytes + return tree_path + + +def index_entry_from_directory(st, path: bytes) -> Optional[IndexEntry]: + if os.path.exists(os.path.join(path, b".git")): + head = read_submodule_head(path) + if head is None: + return None + return index_entry_from_stat(st, head, 0, mode=S_IFGITLINK) + return None + + +def index_entry_from_path( + path: bytes, object_store: Optional["BaseObjectStore"] = None +) -> Optional[IndexEntry]: + """Create an index from a filesystem path. + + This returns an index value for files, symlinks + and tree references. for directories and + non-existent files it returns None + + Args: + path: Path to create an index entry for + object_store: Optional object store to + save new blobs in + Returns: An index entry; None for directories + """ + assert isinstance(path, bytes) + st = os.lstat(path) + if stat.S_ISDIR(st.st_mode): + return index_entry_from_directory(st, path) + + if stat.S_ISREG(st.st_mode) or stat.S_ISLNK(st.st_mode): + blob = blob_from_path_and_stat(path, st) + if object_store is not None: + object_store.add_object(blob) + return index_entry_from_stat(st, blob.id, 0) + + return None + + +def iter_fresh_entries( + paths: Iterable[bytes], root_path: bytes, + object_store: Optional["BaseObjectStore"] = None +) -> Iterator[Tuple[bytes, Optional[IndexEntry]]]: + """Iterate over current versions of index entries on disk. + + Args: + paths: Paths to iterate over + root_path: Root path to access from + object_store: Optional store to save new blobs in + Returns: Iterator over path, index_entry + """ + for path in paths: + p = _tree_to_fs_path(root_path, path) + try: + entry = index_entry_from_path(p, object_store=object_store) + except (FileNotFoundError, IsADirectoryError): + entry = None + yield path, entry + + +def iter_fresh_objects( + paths: Iterable[bytes], root_path: bytes, include_deleted=False, + object_store=None) -> Iterator[ + Tuple[bytes, Optional[bytes], Optional[int]]]: + """Iterate over versions of objects on disk referenced by index. + + Args: + root_path: Root path to access from + include_deleted: Include deleted entries with sha and + mode set to None + object_store: Optional object store to report new items to + Returns: Iterator over path, sha, mode + """ + for path, entry in iter_fresh_entries( + paths, root_path, object_store=object_store): + if entry is None: + if include_deleted: + yield path, None, None + else: + entry = IndexEntry(*entry) + yield path, entry.sha, cleanup_mode(entry.mode) + + +def refresh_index(index: Index, root_path: bytes): + """Refresh the contents of an index. + + This is the equivalent to running 'git commit -a'. 
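+
+    More precisely, it re-reads each path already listed in the index from the
+    working tree and updates those index entries in place; no commit object is
+    created.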
+ + Args: + index: Index to update + root_path: Root filesystem path + """ + for path, entry in iter_fresh_entries(index, root_path): + if entry: + index[path] = entry + + +class locked_index(object): + """Lock the index while making modifications. + + Works as a context manager. + """ + def __init__(self, path: Union[bytes, str]): + self._path = path + + def __enter__(self): + self._file = GitFile(self._path, "wb") + self._index = Index(self._path) + return self._index + + def __exit__(self, exc_type, exc_value, traceback): + if exc_type is not None: + self._file.abort() + return + try: + f = SHA1Writer(self._file) + write_index_dict(f, self._index._byname) + except BaseException: + self._file.abort() + else: + f.close() diff --git a/env/lib/python3.8/site-packages/dulwich/lfs.py b/env/lib/python3.8/site-packages/dulwich/lfs.py new file mode 100644 index 00000000..03b57b26 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/lfs.py @@ -0,0 +1,74 @@ +# lfs.py -- Implementation of the LFS +# Copyright (C) 2020 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +import hashlib +import os +import tempfile + + +class LFSStore(object): + """Stores objects on disk, indexed by SHA256.""" + + def __init__(self, path): + self.path = path + + @classmethod + def create(cls, lfs_dir): + if not os.path.isdir(lfs_dir): + os.mkdir(lfs_dir) + os.mkdir(os.path.join(lfs_dir, "tmp")) + os.mkdir(os.path.join(lfs_dir, "objects")) + return cls(lfs_dir) + + @classmethod + def from_repo(cls, repo, create=False): + lfs_dir = os.path.join(repo.controldir, "lfs") + if create: + return cls.create(lfs_dir) + return cls(lfs_dir) + + def _sha_path(self, sha): + return os.path.join(self.path, "objects", sha[0:2], sha[2:4], sha) + + def open_object(self, sha): + """Open an object by sha.""" + try: + return open(self._sha_path(sha), "rb") + except FileNotFoundError as exc: + raise KeyError(sha) from exc + + def write_object(self, chunks): + """Write an object. 
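+
+        Args:
+          chunks: Iterable over the byte chunks that make up the object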
+
+        Returns: object SHA
+        """
+        sha = hashlib.sha256()
+        tmpdir = os.path.join(self.path, "tmp")
+        with tempfile.NamedTemporaryFile(dir=tmpdir, mode="wb", delete=False) as f:
+            for chunk in chunks:
+                sha.update(chunk)
+                f.write(chunk)
+            f.flush()
+            tmppath = f.name
+        path = self._sha_path(sha.hexdigest())
+        if not os.path.exists(os.path.dirname(path)):
+            os.makedirs(os.path.dirname(path))
+        os.rename(tmppath, path)
+        return sha.hexdigest()
diff --git a/env/lib/python3.8/site-packages/dulwich/line_ending.py b/env/lib/python3.8/site-packages/dulwich/line_ending.py
new file mode 100644
index 00000000..1f931b63
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/line_ending.py
@@ -0,0 +1,306 @@
+# line_ending.py -- Line ending conversion functions
+# Copyright (C) 2018-2018 Boris Feld
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# for a copy of the GNU General Public License
+# and for a copy of the Apache
+# License, Version 2.0.
+#
+"""All line-ending related functions, from conversions to config processing
+
+Line-ending normalization is a complex beast. Here are some notes and details
+about how it seems to work.
+
+The normalization is a two-fold process that happens at two moments:
+
+- When reading a file from the index into the working directory. For example
+  when doing a ``git clone`` or ``git checkout`` call. We call this process the
+  read filter in this module.
+- When writing a file to the index from the working directory. For example
+  when doing a ``git add`` call. We call this process the write filter in this
+  module.
+
+Note that when checking status (getting unstaged changes), whether or not
+normalization is done on write depends on whether or not the file in the
+working dir has also been normalized on read:
+
+- For autocrlf=true all files are always normalized on both read and write.
+- For autocrlf=input files are only normalized on write if they are newly
+  "added". Since files which are already committed are not normalized on
+  checkout into the working tree, they are also left alone when staging
+  modifications into the index.
+
+One thing to know is that Git does line-ending normalization only on text
+files. How does Git know that a file is text? We can either mark a file as a
+text file or a binary file, or ask Git to decide automatically. Git has a
+heuristic to detect whether a file is a text file or a binary file. It seems
+to be based on the percentage of non-printable characters in files.
+
+The code for this heuristic is here:
+https://git.kernel.org/pub/scm/git/git.git/tree/convert.c#n46
+
+Dulwich has an implementation with a slightly different heuristic, the
+`dulwich.patch.is_binary` function.
+
+The binary detection heuristic implementation is close to the one in JGit:
+https://github.com/eclipse/jgit/blob/f6873ffe522bbc3536969a3a3546bf9a819b92bf/org.eclipse.jgit/src/org/eclipse/jgit/diff/RawText.java#L300
+
+There are multiple variables that impact the normalization.
+
+First, a repository can contain a ``.gitattributes`` file (or more than one...)
+that can further customize the operation on some file patterns, for example:
+
+    \\*.txt text
+
+Force all ``.txt`` files to be treated as text files and to have their line
+endings normalized.
+
+    \\*.jpg -text
+
+Force all ``.jpg`` files to be treated as binary files and to not have their
+line endings converted.
+
+    \\*.vcproj text eol=crlf
+
+Force all ``.vcproj`` files to be treated as text files and to have their line
+endings converted to ``CRLF`` in the working directory, no matter the native
+EOL of the platform.
+
+    \\*.sh text eol=lf
+
+Force all ``.sh`` files to be treated as text files and to have their line
+endings converted to ``LF`` in the working directory, no matter the native EOL
+of the platform.
+
+If the ``eol`` attribute is not defined, Git uses the ``core.eol`` configuration
+value described later.
+
+    \\* text=auto
+
+Force all files to be scanned by the text file heuristic detection and to have
+their line endings normalized in case they are detected as text files.
+
+Git also has an obsolete attribute named ``crlf`` that can be translated to the
+corresponding ``text`` attribute value.
+
+Then there are some configuration options (that can be defined at the
+repository or user level):
+
+- core.autocrlf
+- core.eol
+
+``core.autocrlf`` is taken into account for all files that don't have a ``text``
+attribute defined in ``.gitattributes``; it takes three possible values:
+
+    - ``true``: This forces all files in the working directory to have CRLF
+      line endings and converts line endings to LF when writing to the index.
+      When autocrlf is set to true, the eol value is ignored.
+    - ``input``: Quite similar to the ``true`` value, but it only forces the
+      write filter, i.e. the line endings of new files added to the index will
+      be converted to LF.
+    - ``false`` (default): No normalization is done.
+
+``core.eol`` is the top-level configuration defining the line ending to use
+when applying the read filter. It takes three possible values:
+
+    - ``lf``: When normalization is done, force line endings to be ``LF`` in the
+      working directory.
+    - ``crlf``: When normalization is done, force line endings to be ``CRLF`` in
+      the working directory.
+    - ``native`` (default): When normalization is done, force line endings to be
+      the platform's native line ending.
+
+One thing to remember is that when line-ending normalization is done on a file,
+Git always normalizes line endings to ``LF`` when writing to the index.
+
+There are sources that seem to indicate that Git won't do line-ending
+normalization when a file contains mixed line endings. I think this logic
+might be part of the text/binary detection heuristic, but I couldn't find it
+yet.
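+
+As a concrete illustration of the rules above: with ``core.autocrlf=true``, a
+blob stored in the index as ``b"a\\nb\\n"`` is checked out into the working
+directory as ``b"a\\r\\nb\\r\\n"``, and staging that file converts the content
+back to ``b"a\\nb\\n"``.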
+ +Sources: +- https://git-scm.com/docs/git-config#git-config-coreeol +- https://git-scm.com/docs/git-config#git-config-coreautocrlf +- https://git-scm.com/docs/gitattributes#_checking_out_and_checking_in +- https://adaptivepatchwork.com/2012/03/01/mind-the-end-of-your-line/ +""" + +from dulwich.objects import Blob +from dulwich.patch import is_binary + +CRLF = b"\r\n" +LF = b"\n" + + +def convert_crlf_to_lf(text_hunk): + """Convert CRLF in text hunk into LF + + Args: + text_hunk: A bytes string representing a text hunk + Returns: The text hunk with the same type, with CRLF replaced into LF + """ + return text_hunk.replace(CRLF, LF) + + +def convert_lf_to_crlf(text_hunk): + """Convert LF in text hunk into CRLF + + Args: + text_hunk: A bytes string representing a text hunk + Returns: The text hunk with the same type, with LF replaced into CRLF + """ + # TODO find a more efficient way of doing it + intermediary = text_hunk.replace(CRLF, LF) + return intermediary.replace(LF, CRLF) + + +def get_checkout_filter(core_eol, core_autocrlf, git_attributes): + """Returns the correct checkout filter based on the passed arguments""" + # TODO this function should process the git_attributes for the path and if + # the text attribute is not defined, fallback on the + # get_checkout_filter_autocrlf function with the autocrlf value + return get_checkout_filter_autocrlf(core_autocrlf) + + +def get_checkin_filter(core_eol, core_autocrlf, git_attributes): + """Returns the correct checkin filter based on the passed arguments""" + # TODO this function should process the git_attributes for the path and if + # the text attribute is not defined, fallback on the + # get_checkin_filter_autocrlf function with the autocrlf value + return get_checkin_filter_autocrlf(core_autocrlf) + + +def get_checkout_filter_autocrlf(core_autocrlf): + """Returns the correct checkout filter base on autocrlf value + + Args: + core_autocrlf: The bytes configuration value of core.autocrlf. + Valid values are: b'true', b'false' or b'input'. + Returns: Either None if no filter has to be applied or a function + accepting a single argument, a binary text hunk + """ + + if core_autocrlf == b"true": + return convert_lf_to_crlf + + return None + + +def get_checkin_filter_autocrlf(core_autocrlf): + """Returns the correct checkin filter base on autocrlf value + + Args: + core_autocrlf: The bytes configuration value of core.autocrlf. + Valid values are: b'true', b'false' or b'input'. 
+ Returns: Either None if no filter has to be applied or a function + accepting a single argument, a binary text hunk + """ + + if core_autocrlf == b"true" or core_autocrlf == b"input": + return convert_crlf_to_lf + + # Checking filter should never be `convert_lf_to_crlf` + return None + + +class BlobNormalizer(object): + """An object to store computation result of which filter to apply based + on configuration, gitattributes, path and operation (checkin or checkout) + """ + + def __init__(self, config_stack, gitattributes): + self.config_stack = config_stack + self.gitattributes = gitattributes + + # Compute which filters we needs based on parameters + try: + core_eol = config_stack.get("core", "eol") + except KeyError: + core_eol = "native" + + try: + core_autocrlf = config_stack.get("core", "autocrlf").lower() + except KeyError: + core_autocrlf = False + + self.fallback_read_filter = get_checkout_filter( + core_eol, core_autocrlf, self.gitattributes + ) + self.fallback_write_filter = get_checkin_filter( + core_eol, core_autocrlf, self.gitattributes + ) + + def checkin_normalize(self, blob, tree_path): + """Normalize a blob during a checkin operation""" + if self.fallback_write_filter is not None: + return normalize_blob( + blob, self.fallback_write_filter, binary_detection=True + ) + + return blob + + def checkout_normalize(self, blob, tree_path): + """Normalize a blob during a checkout operation""" + if self.fallback_read_filter is not None: + return normalize_blob( + blob, self.fallback_read_filter, binary_detection=True + ) + + return blob + + +def normalize_blob(blob, conversion, binary_detection): + """Takes a blob as input returns either the original blob if + binary_detection is True and the blob content looks like binary, else + return a new blob with converted data + """ + # Read the original blob + data = blob.data + + # If we need to detect if a file is binary and the file is detected as + # binary, do not apply the conversion function and return the original + # chunked text + if binary_detection is True: + if is_binary(data): + return blob + + # Now apply the conversion + converted_data = conversion(data) + + new_blob = Blob() + new_blob.data = converted_data + + return new_blob + + +class TreeBlobNormalizer(BlobNormalizer): + def __init__(self, config_stack, git_attributes, object_store, tree=None): + super().__init__(config_stack, git_attributes) + if tree: + self.existing_paths = { + name + for name, _, _ in object_store.iter_tree_contents(tree) + } + else: + self.existing_paths = set() + + def checkin_normalize(self, blob, tree_path): + # Existing files should only be normalized on checkin if it was + # previously normalized on checkout + if ( + self.fallback_read_filter is not None + or tree_path not in self.existing_paths + ): + return super().checkin_normalize(blob, tree_path) + return blob diff --git a/env/lib/python3.8/site-packages/dulwich/log_utils.py b/env/lib/python3.8/site-packages/dulwich/log_utils.py new file mode 100644 index 00000000..556332df --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/log_utils.py @@ -0,0 +1,73 @@ +# log_utils.py -- Logging utilities for Dulwich +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. 
+# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Logging utilities for Dulwich. + +Any module that uses logging needs to do compile-time initialization to set up +the logging environment. Since Dulwich is also used as a library, clients may +not want to see any logging output. In that case, we need to use a special +handler to suppress spurious warnings like "No handlers could be found for +logger dulwich.foo". + +For details on the _NullHandler approach, see: +http://docs.python.org/library/logging.html#configuring-logging-for-a-library + +For many modules, the only function from the logging module they need is +getLogger; this module exports that function for convenience. If a calling +module needs something else, it can import the standard logging module +directly. +""" + +import logging +import sys + +getLogger = logging.getLogger + + +class _NullHandler(logging.Handler): + """No-op logging handler to avoid unexpected logging warnings.""" + + def emit(self, record): + pass + + +_NULL_HANDLER = _NullHandler() +_DULWICH_LOGGER = getLogger("dulwich") +_DULWICH_LOGGER.addHandler(_NULL_HANDLER) + + +def default_logging_config(): + """Set up the default Dulwich loggers.""" + remove_null_handler() + logging.basicConfig( + level=logging.INFO, + stream=sys.stderr, + format="%(asctime)s %(levelname)s: %(message)s", + ) + + +def remove_null_handler(): + """Remove the null handler from the Dulwich loggers. + + If a caller wants to set up logging using something other than + default_logging_config, calling this function first is a minor optimization + to avoid the overhead of using the _NullHandler. + """ + _DULWICH_LOGGER.removeHandler(_NULL_HANDLER) diff --git a/env/lib/python3.8/site-packages/dulwich/lru_cache.py b/env/lib/python3.8/site-packages/dulwich/lru_cache.py new file mode 100644 index 00000000..be18766f --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/lru_cache.py @@ -0,0 +1,383 @@ +# lru_cache.py -- Simple LRU cache for dulwich +# Copyright (C) 2006, 2008 Canonical Ltd +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +"""A simple least-recently-used (LRU) cache.""" + +_null_key = object() + + +class _LRUNode(object): + """This maintains the linked-list which is the lru internals.""" + + __slots__ = ("prev", "next_key", "key", "value", "cleanup", "size") + + def __init__(self, key, value, cleanup=None): + self.prev = None + self.next_key = _null_key + self.key = key + self.value = value + self.cleanup = cleanup + # TODO: We could compute this 'on-the-fly' like we used to, and remove + # one pointer from this object, we just need to decide if it + # actually costs us much of anything in normal usage + self.size = None + + def __repr__(self): + if self.prev is None: + prev_key = None + else: + prev_key = self.prev.key + return "%s(%r n:%r p:%r)" % ( + self.__class__.__name__, + self.key, + self.next_key, + prev_key, + ) + + def run_cleanup(self): + if self.cleanup is not None: + self.cleanup(self.key, self.value) + self.cleanup = None + # Just make sure to break any refcycles, etc + self.value = None + + +class LRUCache(object): + """A class which manages a cache of entries, removing unused ones.""" + + def __init__(self, max_cache=100, after_cleanup_count=None): + self._cache = {} + # The "HEAD" of the lru linked list + self._most_recently_used = None + # The "TAIL" of the lru linked list + self._least_recently_used = None + self._update_max_cache(max_cache, after_cleanup_count) + + def __contains__(self, key): + return key in self._cache + + def __getitem__(self, key): + cache = self._cache + node = cache[key] + # Inlined from _record_access to decrease the overhead of __getitem__ + # We also have more knowledge about structure if __getitem__ is + # succeeding, then we know that self._most_recently_used must not be + # None, etc. + mru = self._most_recently_used + if node is mru: + # Nothing to do, this node is already at the head of the queue + return node.value + # Remove this node from the old location + node_prev = node.prev + next_key = node.next_key + # benchmarking shows that the lookup of _null_key in globals is faster + # than the attribute lookup for (node is self._least_recently_used) + if next_key is _null_key: + # 'node' is the _least_recently_used, because it doesn't have a + # 'next' item. So move the current lru to the previous node. 
+ self._least_recently_used = node_prev + else: + node_next = cache[next_key] + node_next.prev = node_prev + node_prev.next_key = next_key + # Insert this node at the front of the list + node.next_key = mru.key + mru.prev = node + self._most_recently_used = node + node.prev = None + return node.value + + def __len__(self): + return len(self._cache) + + def _walk_lru(self): + """Walk the LRU list, only meant to be used in tests.""" + node = self._most_recently_used + if node is not None: + if node.prev is not None: + raise AssertionError( + "the _most_recently_used entry is not" + " supposed to have a previous entry" + " %s" % (node,) + ) + while node is not None: + if node.next_key is _null_key: + if node is not self._least_recently_used: + raise AssertionError( + "only the last node should have" " no next value: %s" % (node,) + ) + node_next = None + else: + node_next = self._cache[node.next_key] + if node_next.prev is not node: + raise AssertionError( + "inconsistency found, node.next.prev" " != node: %s" % (node,) + ) + if node.prev is None: + if node is not self._most_recently_used: + raise AssertionError( + "only the _most_recently_used should" + " not have a previous node: %s" % (node,) + ) + else: + if node.prev.next_key != node.key: + raise AssertionError( + "inconsistency found, node.prev.next" " != node: %s" % (node,) + ) + yield node + node = node_next + + def add(self, key, value, cleanup=None): + """Add a new value to the cache. + + Also, if the entry is ever removed from the cache, call + cleanup(key, value). + + Args: + key: The key to store it under + value: The object to store + cleanup: None or a function taking (key, value) to indicate + 'value' should be cleaned up. + """ + if key is _null_key: + raise ValueError("cannot use _null_key as a key") + if key in self._cache: + node = self._cache[key] + node.run_cleanup() + node.value = value + node.cleanup = cleanup + else: + node = _LRUNode(key, value, cleanup=cleanup) + self._cache[key] = node + self._record_access(node) + + if len(self._cache) > self._max_cache: + # Trigger the cleanup + self.cleanup() + + def cache_size(self): + """Get the number of entries we will cache.""" + return self._max_cache + + def get(self, key, default=None): + node = self._cache.get(key, None) + if node is None: + return default + self._record_access(node) + return node.value + + def keys(self): + """Get the list of keys currently cached. + + Note that values returned here may not be available by the time you + request them later. This is simply meant as a peak into the current + state. + + Returns: An unordered list of keys that are currently cached. + """ + return self._cache.keys() + + def items(self): + """Get the key:value pairs as a dict.""" + return {k: n.value for k, n in self._cache.items()} + + def cleanup(self): + """Clear the cache until it shrinks to the requested size. + + This does not completely wipe the cache, just makes sure it is under + the after_cleanup_count. 
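+
+        ``add()`` triggers this cleanup automatically once the number of
+        cached entries exceeds ``max_cache``.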
+ """ + # Make sure the cache is shrunk to the correct size + while len(self._cache) > self._after_cleanup_count: + self._remove_lru() + + def __setitem__(self, key, value): + """Add a value to the cache, there will be no cleanup function.""" + self.add(key, value, cleanup=None) + + def _record_access(self, node): + """Record that key was accessed.""" + # Move 'node' to the front of the queue + if self._most_recently_used is None: + self._most_recently_used = node + self._least_recently_used = node + return + elif node is self._most_recently_used: + # Nothing to do, this node is already at the head of the queue + return + # We've taken care of the tail pointer, remove the node, and insert it + # at the front + # REMOVE + if node is self._least_recently_used: + self._least_recently_used = node.prev + if node.prev is not None: + node.prev.next_key = node.next_key + if node.next_key is not _null_key: + node_next = self._cache[node.next_key] + node_next.prev = node.prev + # INSERT + node.next_key = self._most_recently_used.key + self._most_recently_used.prev = node + self._most_recently_used = node + node.prev = None + + def _remove_node(self, node): + if node is self._least_recently_used: + self._least_recently_used = node.prev + self._cache.pop(node.key) + # If we have removed all entries, remove the head pointer as well + if self._least_recently_used is None: + self._most_recently_used = None + node.run_cleanup() + # Now remove this node from the linked list + if node.prev is not None: + node.prev.next_key = node.next_key + if node.next_key is not _null_key: + node_next = self._cache[node.next_key] + node_next.prev = node.prev + # And remove this node's pointers + node.prev = None + node.next_key = _null_key + + def _remove_lru(self): + """Remove one entry from the lru, and handle consequences. + + If there are no more references to the lru, then this entry should be + removed from the cache. + """ + self._remove_node(self._least_recently_used) + + def clear(self): + """Clear out all of the cache.""" + # Clean up in LRU order + while self._cache: + self._remove_lru() + + def resize(self, max_cache, after_cleanup_count=None): + """Change the number of entries that will be cached.""" + self._update_max_cache(max_cache, after_cleanup_count=after_cleanup_count) + + def _update_max_cache(self, max_cache, after_cleanup_count=None): + self._max_cache = max_cache + if after_cleanup_count is None: + self._after_cleanup_count = self._max_cache * 8 / 10 + else: + self._after_cleanup_count = min(after_cleanup_count, self._max_cache) + self.cleanup() + + +class LRUSizeCache(LRUCache): + """An LRUCache that removes things based on the size of the values. + + This differs in that it doesn't care how many actual items there are, + it just restricts the cache to be cleaned up after so much data is stored. + + The size of items added will be computed using compute_size(value), which + defaults to len() if not supplied. + """ + + def __init__( + self, max_size=1024 * 1024, after_cleanup_size=None, compute_size=None + ): + """Create a new LRUSizeCache. + + Args: + max_size: The max number of bytes to store before we start + clearing out entries. + after_cleanup_size: After cleaning up, shrink everything to this + size. + compute_size: A function to compute the size of the values. We + use a function here, so that you can pass 'len' if you are just + using simple strings, or a more complex function if you are using + something like a list of strings, or even a custom object. 
The function should take the form "compute_size(value) => integer".
+            If not supplied, it defaults to 'len()'
+        """
+        self._value_size = 0
+        self._compute_size = compute_size
+        if compute_size is None:
+            self._compute_size = len
+        self._update_max_size(max_size, after_cleanup_size=after_cleanup_size)
+        LRUCache.__init__(self, max_cache=max(int(max_size / 512), 1))
+
+    def add(self, key, value, cleanup=None):
+        """Add a new value to the cache.
+
+        Also, if the entry is ever removed from the cache, call
+        cleanup(key, value).
+
+        Args:
+          key: The key to store it under
+          value: The object to store
+          cleanup: None or a function taking (key, value) to indicate
+            'value' should be cleaned up.
+        """
+        if key is _null_key:
+            raise ValueError("cannot use _null_key as a key")
+        node = self._cache.get(key, None)
+        value_len = self._compute_size(value)
+        if value_len >= self._after_cleanup_size:
+            # The new value is 'too big to fit', as it would fill up/overflow
+            # the cache all by itself
+            if node is not None:
+                # We won't be replacing the old node, so just remove it
+                self._remove_node(node)
+            if cleanup is not None:
+                cleanup(key, value)
+            return
+        if node is None:
+            node = _LRUNode(key, value, cleanup=cleanup)
+            self._cache[key] = node
+        else:
+            self._value_size -= node.size
+        node.size = value_len
+        self._value_size += value_len
+        self._record_access(node)
+
+        if self._value_size > self._max_size:
+            # Time to cleanup
+            self.cleanup()
+
+    def cleanup(self):
+        """Clear the cache until it shrinks to the requested size.
+
+        This does not completely wipe the cache, just makes sure it is under
+        the after_cleanup_size.
+        """
+        # Make sure the cache is shrunk to the correct size
+        while self._value_size > self._after_cleanup_size:
+            self._remove_lru()
+
+    def _remove_node(self, node):
+        self._value_size -= node.size
+        LRUCache._remove_node(self, node)
+
+    def resize(self, max_size, after_cleanup_size=None):
+        """Change the number of bytes that will be cached."""
+        self._update_max_size(max_size, after_cleanup_size=after_cleanup_size)
+        max_cache = max(int(max_size / 512), 1)
+        self._update_max_cache(max_cache)
+
+    def _update_max_size(self, max_size, after_cleanup_size=None):
+        self._max_size = max_size
+        if after_cleanup_size is None:
+            self._after_cleanup_size = self._max_size * 8 // 10
+        else:
+            self._after_cleanup_size = min(after_cleanup_size, self._max_size)
diff --git a/env/lib/python3.8/site-packages/dulwich/mailmap.py b/env/lib/python3.8/site-packages/dulwich/mailmap.py
new file mode 100644
index 00000000..230e1e6f
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/mailmap.py
@@ -0,0 +1,114 @@
+# mailmap.py -- Mailmap reader
+# Copyright (C) 2018 Jelmer Vernooij
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as published by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Mailmap file reader."""
+
+
+def parse_identity(text):
+    # TODO(jelmer): Integrate this with dulwich.fastexport.split_email and
+    # dulwich.repo.check_user_identity
+    (name, email) = text.rsplit(b"<", 1)
+    name = name.strip()
+    email = email.rstrip(b">").strip()
+    if not name:
+        name = None
+    if not email:
+        email = None
+    return (name, email)
+
+
+def read_mailmap(f):
+    """Read a mailmap.
+
+    Args:
+      f: File-like object to read from
+    Returns: Iterator over
+      ((canonical_name, canonical_email), (from_name, from_email)) tuples
+    """
+    for line in f:
+        # Remove comments
+        line = line.split(b"#")[0]
+        line = line.strip()
+        if not line:
+            continue
+        (canonical_identity, from_identity) = line.split(b">", 1)
+        canonical_identity += b">"
+        if from_identity.strip():
+            parsed_from_identity = parse_identity(from_identity)
+        else:
+            parsed_from_identity = None
+        parsed_canonical_identity = parse_identity(canonical_identity)
+        yield parsed_canonical_identity, parsed_from_identity
+
+
+class Mailmap(object):
+    """Class for accessing a mailmap file."""
+
+    def __init__(self, map=None):
+        self._table = {}
+        if map:
+            for (canonical_identity, from_identity) in map:
+                self.add_entry(canonical_identity, from_identity)
+
+    def add_entry(self, canonical_identity, from_identity=None):
+        """Add an entry to the mailmap.
+
+        Any of the fields can be None, but at least one of them needs to be
+        set.
+
+        Args:
+          canonical_identity: The canonical identity (tuple)
+          from_identity: The from identity (tuple)
+        """
+        if from_identity is None:
+            from_name, from_email = None, None
+        else:
+            (from_name, from_email) = from_identity
+        (canonical_name, canonical_email) = canonical_identity
+        if from_name is None and from_email is None:
+            self._table[canonical_name, None] = canonical_identity
+            self._table[None, canonical_email] = canonical_identity
+        else:
+            self._table[from_name, from_email] = canonical_identity
+
+    def lookup(self, identity):
+        """Lookup an identity in this mailmap."""
+        if not isinstance(identity, tuple):
+            was_tuple = False
+            identity = parse_identity(identity)
+        else:
+            was_tuple = True
+        for query in [identity, (None, identity[1]), (identity[0], None)]:
+            canonical_identity = self._table.get(query)
+            if canonical_identity is not None:
+                identity = (
+                    canonical_identity[0] or identity[0],
+                    canonical_identity[1] or identity[1],
+                )
+                break
+        if was_tuple:
+            return identity
+        else:
+            return identity[0] + b" <" + identity[1] + b">"
+
+    @classmethod
+    def from_path(cls, path):
+        with open(path, "rb") as f:
+            return cls(read_mailmap(f))
diff --git a/env/lib/python3.8/site-packages/dulwich/object_store.py b/env/lib/python3.8/site-packages/dulwich/object_store.py
new file mode 100644
index 00000000..b55d35b3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/object_store.py
@@ -0,0 +1,1612 @@
+# object_store.py -- Object store for git objects
+# Copyright (C) 2008-2013 Jelmer Vernooij
+# and others
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as published by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+
+"""Git object store interfaces and implementation."""
+
+from io import BytesIO
+import os
+import stat
+import sys
+
+from typing import Callable, Dict, List, Optional, Tuple
+
+from dulwich.diff_tree import (
+    tree_changes,
+    walk_trees,
+)
+from dulwich.errors import (
+    NotTreeError,
+)
+from dulwich.file import GitFile
+from dulwich.objects import (
+    ObjectID,
+    Commit,
+    ShaFile,
+    Tag,
+    Tree,
+    ZERO_SHA,
+    hex_to_sha,
+    sha_to_hex,
+    hex_to_filename,
+    S_ISGITLINK,
+    object_class,
+    valid_hexsha,
+)
+from dulwich.pack import (
+    Pack,
+    PackData,
+    PackInflater,
+    PackFileDisappeared,
+    load_pack_index_file,
+    iter_sha1,
+    pack_objects_to_data,
+    write_pack_header,
+    write_pack_index_v2,
+    write_pack_data,
+    write_pack_object,
+    compute_file_sha,
+    PackIndexer,
+    PackStreamCopier,
+)
+from dulwich.protocol import DEPTH_INFINITE
+from dulwich.refs import ANNOTATED_TAG_SUFFIX, Ref
+
+INFODIR = "info"
+PACKDIR = "pack"
+
+# use permissions consistent with Git; just readable by everyone
+# TODO: should packs also be non-writable on Windows? if so, that
+# would require some rather significant adjustments to the test suite
+PACK_MODE = 0o444 if sys.platform != "win32" else 0o644
+
+
+class BaseObjectStore(object):
+    """Object store interface."""
+
+    def determine_wants_all(
+        self,
+        refs: Dict[Ref, ObjectID],
+        depth: Optional[int] = None
+    ) -> List[ObjectID]:
+        def _want_deepen(sha):
+            if not depth:
+                return False
+            if depth == DEPTH_INFINITE:
+                return True
+            return depth > self._get_depth(sha)
+
+        return [
+            sha
+            for (ref, sha) in refs.items()
+            if (sha not in self or _want_deepen(sha))
+            and not ref.endswith(ANNOTATED_TAG_SUFFIX)
+            and not sha == ZERO_SHA
+        ]
+
+    def iter_shas(self, shas):
+        """Iterate over the objects for the specified shas.
+
+        Args:
+          shas: Iterable object with SHAs
+        Returns: Object iterator
+        """
+        return ObjectStoreIterator(self, shas)
+
+    def contains_loose(self, sha):
+        """Check if a particular object is present by SHA1 and is loose."""
+        raise NotImplementedError(self.contains_loose)
+
+    def contains_packed(self, sha):
+        """Check if a particular object is present by SHA1 and is packed."""
+        raise NotImplementedError(self.contains_packed)
+
+    def __contains__(self, sha):
+        """Check if a particular object is present by SHA1.
+
+        This method makes no distinction between loose and packed objects.
+        """
+        return self.contains_packed(sha) or self.contains_loose(sha)
+
+    @property
+    def packs(self):
+        """Iterable of pack objects."""
+        raise NotImplementedError
+
+    def get_raw(self, name):
+        """Obtain the raw text for an object.
+
+        Args:
+          name: sha for the object.
+        Returns: tuple with numeric type and object contents.
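+
+        For example, a concrete store would return something like
+        ``(Blob.type_num, b"file contents")`` for a blob (illustrative).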
+ """ + raise NotImplementedError(self.get_raw) + + def __getitem__(self, sha: ObjectID): + """Obtain an object by SHA1.""" + type_num, uncomp = self.get_raw(sha) + return ShaFile.from_raw_string(type_num, uncomp, sha=sha) + + def __iter__(self): + """Iterate over the SHAs that are present in this store.""" + raise NotImplementedError(self.__iter__) + + def add_pack( + self + ) -> Tuple[BytesIO, Callable[[], None], Callable[[], None]]: + """Add a new pack to this object store.""" + raise NotImplementedError(self.add_pack) + + def add_object(self, obj): + """Add a single object to this object store.""" + raise NotImplementedError(self.add_object) + + def add_objects(self, objects, progress=None): + """Add a set of objects to this object store. + + Args: + objects: Iterable over a list of (object, path) tuples + """ + raise NotImplementedError(self.add_objects) + + def add_pack_data(self, count, pack_data, progress=None): + """Add pack data to this object store. + + Args: + count: Number of items to add + pack_data: Iterator over pack data tuples + """ + if count == 0: + # Don't bother writing an empty pack file + return + f, commit, abort = self.add_pack() + try: + write_pack_data( + f.write, + count, + pack_data, + progress, + compression_level=self.pack_compression_level, + ) + except BaseException: + abort() + raise + else: + return commit() + + def tree_changes( + self, + source, + target, + want_unchanged=False, + include_trees=False, + change_type_same=False, + rename_detector=None, + ): + """Find the differences between the contents of two trees + + Args: + source: SHA1 of the source tree + target: SHA1 of the target tree + want_unchanged: Whether unchanged files should be reported + include_trees: Whether to include trees + change_type_same: Whether to report files changing + type in the same entry. + Returns: Iterator over tuples with + (oldpath, newpath), (oldmode, newmode), (oldsha, newsha) + """ + for change in tree_changes( + self, + source, + target, + want_unchanged=want_unchanged, + include_trees=include_trees, + change_type_same=change_type_same, + rename_detector=rename_detector, + ): + yield ( + (change.old.path, change.new.path), + (change.old.mode, change.new.mode), + (change.old.sha, change.new.sha), + ) + + def iter_tree_contents(self, tree_id, include_trees=False): + """Iterate the contents of a tree and all subtrees. + + Iteration is depth-first pre-order, as in e.g. os.walk. + + Args: + tree_id: SHA1 of the tree. + include_trees: If True, include tree objects in the iteration. + Returns: Iterator over TreeEntry namedtuples for all the objects in a + tree. + """ + for entry, _ in walk_trees(self, tree_id, None): + if ( + entry.mode is not None and not stat.S_ISDIR(entry.mode) + ) or include_trees: + yield entry + + def find_missing_objects( + self, + haves, + wants, + shallow=None, + progress=None, + get_tagged=None, + get_parents=lambda commit: commit.parents, + depth=None, + ): + """Find the missing objects required for a set of revisions. + + Args: + haves: Iterable over SHAs already in common. + wants: Iterable over SHAs of objects to fetch. + shallow: Set of shallow commit SHA1s to skip + progress: Simple progress function that will be called with + updated progress strings. + get_tagged: Function that returns a dict of pointed-to sha -> + tag sha for including tags. + get_parents: Optional function for getting the parents of a + commit. + Returns: Iterator over (sha, path) pairs. 
+ """ + finder = MissingObjectFinder( + self, + haves, + wants, + shallow, + progress, + get_tagged, + get_parents=get_parents, + ) + return iter(finder.next, None) + + def find_common_revisions(self, graphwalker): + """Find which revisions this store has in common using graphwalker. + + Args: + graphwalker: A graphwalker object. + Returns: List of SHAs that are in common + """ + haves = [] + sha = next(graphwalker) + while sha: + if sha in self: + haves.append(sha) + graphwalker.ack(sha) + sha = next(graphwalker) + return haves + + def generate_pack_contents(self, have, want, shallow=None, progress=None): + """Iterate over the contents of a pack file. + + Args: + have: List of SHA1s of objects that should not be sent + want: List of SHA1s of objects that should be sent + shallow: Set of shallow commit SHA1s to skip + progress: Optional progress reporting method + """ + missing = self.find_missing_objects(have, want, shallow, progress) + return self.iter_shas(missing) + + def generate_pack_data( + self, have, want, shallow=None, progress=None, ofs_delta=True + ): + """Generate pack data objects for a set of wants/haves. + + Args: + have: List of SHA1s of objects that should not be sent + want: List of SHA1s of objects that should be sent + shallow: Set of shallow commit SHA1s to skip + ofs_delta: Whether OFS deltas can be included + progress: Optional progress reporting method + """ + # TODO(jelmer): More efficient implementation + return pack_objects_to_data( + self.generate_pack_contents(have, want, shallow, progress) + ) + + def peel_sha(self, sha): + """Peel all tags from a SHA. + + Args: + sha: The object SHA to peel. + Returns: The fully-peeled SHA1 of a tag object, after peeling all + intermediate tags; if the original ref does not point to a tag, + this will equal the original SHA1. + """ + obj = self[sha] + obj_class = object_class(obj.type_name) + while obj_class is Tag: + obj_class, sha = obj.object + obj = self[sha] + return obj + + def _collect_ancestors( + self, + heads, + common=frozenset(), + shallow=frozenset(), + get_parents=lambda commit: commit.parents, + ): + """Collect all ancestors of heads up to (excluding) those in common. + + Args: + heads: commits to start from + common: commits to end at, or empty set to walk repository + completely + get_parents: Optional function for getting the parents of a + commit. + Returns: a tuple (A, B) where A - all commits reachable + from heads but not present in common, B - common (shared) elements + that are directly reachable from heads + """ + bases = set() + commits = set() + queue = [] + queue.extend(heads) + while queue: + e = queue.pop(0) + if e in common: + bases.add(e) + elif e not in commits: + commits.add(e) + if e in shallow: + continue + cmt = self[e] + queue.extend(get_parents(cmt)) + return (commits, bases) + + def _get_depth( + self, head, get_parents=lambda commit: commit.parents, max_depth=None, + ): + """Return the current available depth for the given head. + For commits with multiple parents, the largest possible depth will be + returned. 
+ + Args: + head: commit to start from + get_parents: optional function for getting the parents of a commit + max_depth: maximum depth to search + """ + if head not in self: + return 0 + current_depth = 1 + queue = [(head, current_depth)] + while queue and (max_depth is None or current_depth < max_depth): + e, depth = queue.pop(0) + current_depth = max(current_depth, depth) + cmt = self[e] + if isinstance(cmt, Tag): + _cls, sha = cmt.object + cmt = self[sha] + queue.extend( + (parent, depth + 1) + for parent in get_parents(cmt) + if parent in self + ) + return current_depth + + def close(self): + """Close any files opened by this object store.""" + # Default implementation is a NO-OP + + +class PackBasedObjectStore(BaseObjectStore): + def __init__(self, pack_compression_level=-1): + self._pack_cache = {} + self.pack_compression_level = pack_compression_level + + @property + def alternates(self): + return [] + + def contains_packed(self, sha): + """Check if a particular object is present by SHA1 and is packed. + + This does not check alternates. + """ + for pack in self.packs: + try: + if sha in pack: + return True + except PackFileDisappeared: + pass + return False + + def __contains__(self, sha): + """Check if a particular object is present by SHA1. + + This method makes no distinction between loose and packed objects. + """ + if self.contains_packed(sha) or self.contains_loose(sha): + return True + for alternate in self.alternates: + if sha in alternate: + return True + return False + + def _add_cached_pack(self, base_name, pack): + """Add a newly appeared pack to the cache by path.""" + prev_pack = self._pack_cache.get(base_name) + if prev_pack is not pack: + self._pack_cache[base_name] = pack + if prev_pack: + prev_pack.close() + + def _clear_cached_packs(self): + pack_cache = self._pack_cache + self._pack_cache = {} + while pack_cache: + (name, pack) = pack_cache.popitem() + pack.close() + + def _iter_cached_packs(self): + return self._pack_cache.values() + + def _update_pack_cache(self): + raise NotImplementedError(self._update_pack_cache) + + def close(self): + self._clear_cached_packs() + + @property + def packs(self): + """List with pack objects.""" + return list(self._iter_cached_packs()) + list(self._update_pack_cache()) + + def _iter_alternate_objects(self): + """Iterate over the SHAs of all the objects in alternate stores.""" + for alternate in self.alternates: + for alternate_object in alternate: + yield alternate_object + + def _iter_loose_objects(self): + """Iterate over the SHAs of all loose objects.""" + raise NotImplementedError(self._iter_loose_objects) + + def _get_loose_object(self, sha): + raise NotImplementedError(self._get_loose_object) + + def _remove_loose_object(self, sha): + raise NotImplementedError(self._remove_loose_object) + + def _remove_pack(self, name): + raise NotImplementedError(self._remove_pack) + + def pack_loose_objects(self): + """Pack loose objects. + + Returns: Number of objects packed + """ + objects = set() + for sha in self._iter_loose_objects(): + objects.add((self._get_loose_object(sha), None)) + self.add_objects(list(objects)) + for obj, path in objects: + self._remove_loose_object(obj.id) + return len(objects) + + def repack(self): + """Repack the packs in this repository. + + Note that this implementation is fairly naive and currently keeps all + objects in memory while it repacks. 
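+
+        Illustrative call:
+
+            packed = store.repack()   # returns the number of objects repacked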
+ """ + loose_objects = set() + for sha in self._iter_loose_objects(): + loose_objects.add(self._get_loose_object(sha)) + objects = {(obj, None) for obj in loose_objects} + old_packs = {p.name(): p for p in self.packs} + for name, pack in old_packs.items(): + objects.update((obj, None) for obj in pack.iterobjects()) + + # The name of the consolidated pack might match the name of a + # pre-existing pack. Take care not to remove the newly created + # consolidated pack. + + consolidated = self.add_objects(objects) + old_packs.pop(consolidated.name(), None) + + for obj in loose_objects: + self._remove_loose_object(obj.id) + for name, pack in old_packs.items(): + self._remove_pack(pack) + self._update_pack_cache() + return len(objects) + + def __iter__(self): + """Iterate over the SHAs that are present in this store.""" + self._update_pack_cache() + for pack in self._iter_cached_packs(): + try: + for sha in pack: + yield sha + except PackFileDisappeared: + pass + for sha in self._iter_loose_objects(): + yield sha + for sha in self._iter_alternate_objects(): + yield sha + + def contains_loose(self, sha): + """Check if a particular object is present by SHA1 and is loose. + + This does not check alternates. + """ + return self._get_loose_object(sha) is not None + + def get_raw(self, name): + """Obtain the raw fulltext for an object. + + Args: + name: sha for the object. + Returns: tuple with numeric type and object contents. + """ + if name == ZERO_SHA: + raise KeyError(name) + if len(name) == 40: + sha = hex_to_sha(name) + hexsha = name + elif len(name) == 20: + sha = name + hexsha = None + else: + raise AssertionError("Invalid object name %r" % (name,)) + for pack in self._iter_cached_packs(): + try: + return pack.get_raw(sha) + except (KeyError, PackFileDisappeared): + pass + if hexsha is None: + hexsha = sha_to_hex(name) + ret = self._get_loose_object(hexsha) + if ret is not None: + return ret.type_num, ret.as_raw_string() + # Maybe something else has added a pack with the object + # in the mean time? + for pack in self._update_pack_cache(): + try: + return pack.get_raw(sha) + except KeyError: + pass + for alternate in self.alternates: + try: + return alternate.get_raw(hexsha) + except KeyError: + pass + raise KeyError(hexsha) + + def add_objects(self, objects, progress=None): + """Add a set of objects to this object store. + + Args: + objects: Iterable over (object, path) tuples, should support + __len__. + Returns: Pack object of the objects written. + """ + return self.add_pack_data(*pack_objects_to_data(objects), progress=progress) + + +class DiskObjectStore(PackBasedObjectStore): + """Git-style object store that exists on disk.""" + + def __init__(self, path, loose_compression_level=-1, pack_compression_level=-1): + """Open an object store. + + Args: + path: Path of the object store. 
+ loose_compression_level: zlib compression level for loose objects + pack_compression_level: zlib compression level for pack objects + """ + super(DiskObjectStore, self).__init__( + pack_compression_level=pack_compression_level + ) + self.path = path + self.pack_dir = os.path.join(self.path, PACKDIR) + self._alternates = None + self.loose_compression_level = loose_compression_level + self.pack_compression_level = pack_compression_level + + def __repr__(self): + return "<%s(%r)>" % (self.__class__.__name__, self.path) + + @classmethod + def from_config(cls, path, config): + try: + default_compression_level = int( + config.get((b"core",), b"compression").decode() + ) + except KeyError: + default_compression_level = -1 + try: + loose_compression_level = int( + config.get((b"core",), b"looseCompression").decode() + ) + except KeyError: + loose_compression_level = default_compression_level + try: + pack_compression_level = int( + config.get((b"core",), "packCompression").decode() + ) + except KeyError: + pack_compression_level = default_compression_level + return cls(path, loose_compression_level, pack_compression_level) + + @property + def alternates(self): + if self._alternates is not None: + return self._alternates + self._alternates = [] + for path in self._read_alternate_paths(): + self._alternates.append(DiskObjectStore(path)) + return self._alternates + + def _read_alternate_paths(self): + try: + f = GitFile(os.path.join(self.path, INFODIR, "alternates"), "rb") + except FileNotFoundError: + return + with f: + for line in f.readlines(): + line = line.rstrip(b"\n") + if line.startswith(b"#"): + continue + if os.path.isabs(line): + yield os.fsdecode(line) + else: + yield os.fsdecode(os.path.join(os.fsencode(self.path), line)) + + def add_alternate_path(self, path): + """Add an alternate path to this object store.""" + try: + os.mkdir(os.path.join(self.path, INFODIR)) + except FileExistsError: + pass + alternates_path = os.path.join(self.path, INFODIR, "alternates") + with GitFile(alternates_path, "wb") as f: + try: + orig_f = open(alternates_path, "rb") + except FileNotFoundError: + pass + else: + with orig_f: + f.write(orig_f.read()) + f.write(os.fsencode(path) + b"\n") + + if not os.path.isabs(path): + path = os.path.join(self.path, path) + self.alternates.append(DiskObjectStore(path)) + + def _update_pack_cache(self): + """Read and iterate over new pack files and cache them.""" + try: + pack_dir_contents = os.listdir(self.pack_dir) + except FileNotFoundError: + self.close() + return [] + pack_files = set() + for name in pack_dir_contents: + if name.startswith("pack-") and name.endswith(".pack"): + # verify that idx exists first (otherwise the pack was not yet + # fully written) + idx_name = os.path.splitext(name)[0] + ".idx" + if idx_name in pack_dir_contents: + pack_name = name[: -len(".pack")] + pack_files.add(pack_name) + + # Open newly appeared pack files + new_packs = [] + for f in pack_files: + if f not in self._pack_cache: + pack = Pack(os.path.join(self.pack_dir, f)) + new_packs.append(pack) + self._pack_cache[f] = pack + # Remove disappeared pack files + for f in set(self._pack_cache) - pack_files: + self._pack_cache.pop(f).close() + return new_packs + + def _get_shafile_path(self, sha): + # Check from object dir + return hex_to_filename(self.path, sha) + + def _iter_loose_objects(self): + for base in os.listdir(self.path): + if len(base) != 2: + continue + for rest in os.listdir(os.path.join(self.path, base)): + sha = os.fsencode(base + rest) + if not valid_hexsha(sha): + 
continue + yield sha + + def _get_loose_object(self, sha): + path = self._get_shafile_path(sha) + try: + return ShaFile.from_path(path) + except FileNotFoundError: + return None + + def _remove_loose_object(self, sha): + os.remove(self._get_shafile_path(sha)) + + def _remove_pack(self, pack): + try: + del self._pack_cache[os.path.basename(pack._basename)] + except KeyError: + pass + pack.close() + os.remove(pack.data.path) + os.remove(pack.index.path) + + def _get_pack_basepath(self, entries): + suffix = iter_sha1(entry[0] for entry in entries) + # TODO: Handle self.pack_dir being bytes + suffix = suffix.decode("ascii") + return os.path.join(self.pack_dir, "pack-" + suffix) + + def _complete_thin_pack(self, f, path, copier, indexer): + """Move a specific file containing a pack into the pack directory. + + Note: The file should be on the same file system as the + packs directory. + + Args: + f: Open file object for the pack. + path: Path to the pack file. + copier: A PackStreamCopier to use for writing pack data. + indexer: A PackIndexer for indexing the pack. + """ + entries = list(indexer) + + # Update the header with the new number of objects. + f.seek(0) + write_pack_header(f.write, len(entries) + len(indexer.ext_refs())) + + # Must flush before reading (http://bugs.python.org/issue3207) + f.flush() + + # Rescan the rest of the pack, computing the SHA with the new header. + new_sha = compute_file_sha(f, end_ofs=-20) + + # Must reposition before writing (http://bugs.python.org/issue3207) + f.seek(0, os.SEEK_CUR) + + # Complete the pack. + for ext_sha in indexer.ext_refs(): + assert len(ext_sha) == 20 + type_num, data = self.get_raw(ext_sha) + offset = f.tell() + crc32 = write_pack_object( + f.write, + type_num, + data, + sha=new_sha, + compression_level=self.pack_compression_level, + ) + entries.append((ext_sha, offset, crc32)) + pack_sha = new_sha.digest() + f.write(pack_sha) + f.close() + + # Move the pack in. + entries.sort() + pack_base_name = self._get_pack_basepath(entries) + target_pack = pack_base_name + ".pack" + if sys.platform == "win32": + # Windows might have the target pack file lingering. Attempt + # removal, silently passing if the target does not exist. + try: + os.remove(target_pack) + except FileNotFoundError: + pass + os.rename(path, target_pack) + + # Write the index. + index_file = GitFile(pack_base_name + ".idx", "wb", mask=PACK_MODE) + try: + write_pack_index_v2(index_file, entries, pack_sha) + index_file.close() + finally: + index_file.abort() + + # Add the pack to the store and return it. + final_pack = Pack(pack_base_name) + final_pack.check_length_and_checksum() + self._add_cached_pack(pack_base_name, final_pack) + return final_pack + + def add_thin_pack(self, read_all, read_some): + """Add a new thin pack to this object store. + + Thin packs are packs that contain deltas with parents that exist + outside the pack. They should never be placed in the object store + directly, and always indexed and completed as they are copied. + + Args: + read_all: Read function that blocks until the number of + requested bytes are read. + read_some: Read function that returns at least one byte, but may + not return the number of bytes requested. + Returns: A Pack object pointing at the now-completed thin pack in the + objects/pack directory. 
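+
+        Illustrative call, with ``read_all`` and ``read_some`` supplied by a
+        hypothetical network connection object ``conn``:
+
+            pack = store.add_thin_pack(conn.read, conn.recv)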
+ """ + import tempfile + + fd, path = tempfile.mkstemp(dir=self.path, prefix="tmp_pack_") + with os.fdopen(fd, "w+b") as f: + os.chmod(path, PACK_MODE) + indexer = PackIndexer(f, resolve_ext_ref=self.get_raw) + copier = PackStreamCopier(read_all, read_some, f, delta_iter=indexer) + copier.verify() + return self._complete_thin_pack(f, path, copier, indexer) + + def move_in_pack(self, path): + """Move a specific file containing a pack into the pack directory. + + Note: The file should be on the same file system as the + packs directory. + + Args: + path: Path to the pack file. + """ + with PackData(path) as p: + entries = p.sorted_entries() + basename = self._get_pack_basepath(entries) + index_name = basename + ".idx" + if not os.path.exists(index_name): + with GitFile(index_name, "wb", mask=PACK_MODE) as f: + write_pack_index_v2(f, entries, p.get_stored_checksum()) + for pack in self.packs: + if pack._basename == basename: + return pack + target_pack = basename + ".pack" + if sys.platform == "win32": + # Windows might have the target pack file lingering. Attempt + # removal, silently passing if the target does not exist. + try: + os.remove(target_pack) + except FileNotFoundError: + pass + os.rename(path, target_pack) + final_pack = Pack(basename) + self._add_cached_pack(basename, final_pack) + return final_pack + + def add_pack(self): + """Add a new pack to this object store. + + Returns: Fileobject to write to, a commit function to + call when the pack is finished and an abort + function. + """ + import tempfile + + fd, path = tempfile.mkstemp(dir=self.pack_dir, suffix=".pack") + f = os.fdopen(fd, "wb") + os.chmod(path, PACK_MODE) + + def commit(): + f.flush() + os.fsync(fd) + f.close() + if os.path.getsize(path) > 0: + return self.move_in_pack(path) + else: + os.remove(path) + return None + + def abort(): + f.close() + os.remove(path) + + return f, commit, abort + + def add_object(self, obj): + """Add a single object to this object store. + + Args: + obj: Object to add + """ + path = self._get_shafile_path(obj.id) + dir = os.path.dirname(path) + try: + os.mkdir(dir) + except FileExistsError: + pass + if os.path.exists(path): + return # Already there, no need to write again + with GitFile(path, "wb", mask=PACK_MODE) as f: + f.write( + obj.as_legacy_object(compression_level=self.loose_compression_level) + ) + + @classmethod + def init(cls, path): + try: + os.mkdir(path) + except FileExistsError: + pass + os.mkdir(os.path.join(path, "info")) + os.mkdir(os.path.join(path, PACKDIR)) + return cls(path) + + +class MemoryObjectStore(BaseObjectStore): + """Object store that keeps all objects in memory.""" + + def __init__(self): + super(MemoryObjectStore, self).__init__() + self._data = {} + self.pack_compression_level = -1 + + def _to_hexsha(self, sha): + if len(sha) == 40: + return sha + elif len(sha) == 20: + return sha_to_hex(sha) + else: + raise ValueError("Invalid sha %r" % (sha,)) + + def contains_loose(self, sha): + """Check if a particular object is present by SHA1 and is loose.""" + return self._to_hexsha(sha) in self._data + + def contains_packed(self, sha): + """Check if a particular object is present by SHA1 and is packed.""" + return False + + def __iter__(self): + """Iterate over the SHAs that are present in this store.""" + return iter(self._data.keys()) + + @property + def packs(self): + """List with pack objects.""" + return [] + + def get_raw(self, name: ObjectID): + """Obtain the raw text for an object. + + Args: + name: sha for the object. 
+ Returns: tuple with numeric type and object contents. + """ + obj = self[self._to_hexsha(name)] + return obj.type_num, obj.as_raw_string() + + def __getitem__(self, name: ObjectID): + return self._data[self._to_hexsha(name)].copy() + + def __delitem__(self, name: ObjectID): + """Delete an object from this store, for testing only.""" + del self._data[self._to_hexsha(name)] + + def add_object(self, obj): + """Add a single object to this object store.""" + self._data[obj.id] = obj.copy() + + def add_objects(self, objects, progress=None): + """Add a set of objects to this object store. + + Args: + objects: Iterable over a list of (object, path) tuples + """ + for obj, path in objects: + self.add_object(obj) + + def add_pack(self): + """Add a new pack to this object store. + + Because this object store doesn't support packs, we extract and add the + individual objects. + + Returns: Fileobject to write to and a commit function to + call when the pack is finished. + """ + f = BytesIO() + + def commit(): + p = PackData.from_file(BytesIO(f.getvalue()), f.tell()) + f.close() + for obj in PackInflater.for_pack_data(p, self.get_raw): + self.add_object(obj) + + def abort(): + pass + + return f, commit, abort + + def _complete_thin_pack(self, f, indexer): + """Complete a thin pack by adding external references. + + Args: + f: Open file object for the pack. + indexer: A PackIndexer for indexing the pack. + """ + entries = list(indexer) + + # Update the header with the new number of objects. + f.seek(0) + write_pack_header(f.write, len(entries) + len(indexer.ext_refs())) + + # Rescan the rest of the pack, computing the SHA with the new header. + new_sha = compute_file_sha(f, end_ofs=-20) + + # Complete the pack. + for ext_sha in indexer.ext_refs(): + assert len(ext_sha) == 20 + type_num, data = self.get_raw(ext_sha) + write_pack_object(f.write, type_num, data, sha=new_sha) + pack_sha = new_sha.digest() + f.write(pack_sha) + + def add_thin_pack(self, read_all, read_some): + """Add a new thin pack to this object store. + + Thin packs are packs that contain deltas with parents that exist + outside the pack. Because this object store doesn't support packs, we + extract and add the individual objects. + + Args: + read_all: Read function that blocks until the number of + requested bytes are read. + read_some: Read function that returns at least one byte, but may + not return the number of bytes requested. + """ + f, commit, abort = self.add_pack() + try: + indexer = PackIndexer(f, resolve_ext_ref=self.get_raw) + copier = PackStreamCopier(read_all, read_some, f, delta_iter=indexer) + copier.verify() + self._complete_thin_pack(f, indexer) + except BaseException: + abort() + raise + else: + commit() + + +class ObjectIterator(object): + """Interface for iterating over objects.""" + + def iterobjects(self): + raise NotImplementedError(self.iterobjects) + + +class ObjectStoreIterator(ObjectIterator): + """ObjectIterator that works on top of an ObjectStore.""" + + def __init__(self, store, sha_iter): + """Create a new ObjectIterator. 
+ + Args: + store: Object store to retrieve from + sha_iter: Iterator over (sha, path) tuples + """ + self.store = store + self.sha_iter = sha_iter + self._shas = [] + + def __iter__(self): + """Yield tuple with next object and path.""" + for sha, path in self.itershas(): + yield self.store[sha], path + + def iterobjects(self): + """Iterate over just the objects.""" + for o, path in self: + yield o + + def itershas(self): + """Iterate over the SHAs.""" + for sha in self._shas: + yield sha + for sha in self.sha_iter: + self._shas.append(sha) + yield sha + + def __contains__(self, needle): + """Check if an object is present. + + Note: This checks if the object is present in + the underlying object store, not if it would + be yielded by the iterator. + + Args: + needle: SHA1 of the object to check for + """ + if needle == ZERO_SHA: + return False + return needle in self.store + + def __getitem__(self, key): + """Find an object by SHA1. + + Note: This retrieves the object from the underlying + object store. It will also succeed if the object would + not be returned by the iterator. + """ + return self.store[key] + + def __len__(self): + """Return the number of objects.""" + return len(list(self.itershas())) + + def _empty(self): + it = self.itershas() + try: + next(it) + except StopIteration: + return True + else: + return False + + def __bool__(self): + """Indicate whether this object has contents.""" + return not self._empty() + + +def tree_lookup_path(lookup_obj, root_sha, path): + """Look up an object in a Git tree. + + Args: + lookup_obj: Callback for retrieving object by SHA1 + root_sha: SHA1 of the root tree + path: Path to lookup + Returns: A tuple of (mode, SHA) of the resulting path. + """ + tree = lookup_obj(root_sha) + if not isinstance(tree, Tree): + raise NotTreeError(root_sha) + return tree.lookup_path(lookup_obj, path) + + +def _collect_filetree_revs(obj_store, tree_sha, kset): + """Collect SHA1s of files and directories for specified tree. + + Args: + obj_store: Object store to get objects by SHA from + tree_sha: tree reference to walk + kset: set to fill with references to files and directories + """ + filetree = obj_store[tree_sha] + for name, mode, sha in filetree.iteritems(): + if not S_ISGITLINK(mode) and sha not in kset: + kset.add(sha) + if stat.S_ISDIR(mode): + _collect_filetree_revs(obj_store, sha, kset) + + +def _split_commits_and_tags(obj_store, lst, ignore_unknown=False): + """Split object id list into three lists with commit, tag, and other SHAs. + + Commits referenced by tags are included into commits + list as well. Only SHA1s known in this repository will get + through, and unless ignore_unknown argument is True, KeyError + is thrown for SHA1 missing in the repository + + Args: + obj_store: Object store to get objects by SHA1 from + lst: Collection of commit and tag SHAs + ignore_unknown: True to skip SHA1 missing in the repository + silently. + Returns: A tuple of (commits, tags, others) SHA1s + """ + commits = set() + tags = set() + others = set() + for e in lst: + try: + o = obj_store[e] + except KeyError: + if not ignore_unknown: + raise + else: + if isinstance(o, Commit): + commits.add(e) + elif isinstance(o, Tag): + tags.add(e) + tagged = o.object[1] + c, t, o = _split_commits_and_tags( + obj_store, [tagged], ignore_unknown=ignore_unknown + ) + commits |= c + tags |= t + others |= o + else: + others.add(e) + return (commits, tags, others) + + +class MissingObjectFinder(object): + """Find the objects missing from another object store. 
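+
+    Each call to ``next()`` (or ``__next__``) yields a ``(sha, path)`` pair
+    for one object the target lacks, and returns ``None`` once everything
+    needed has been reported.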
+ + Args: + object_store: Object store containing at least all objects to be + sent + haves: SHA1s of commits not to send (already present in target) + wants: SHA1s of commits to send + progress: Optional function to report progress to. + get_tagged: Function that returns a dict of pointed-to sha -> tag + sha for including tags. + get_parents: Optional function for getting the parents of a commit. + tagged: dict of pointed-to sha -> tag sha for including tags + """ + + def __init__( + self, + object_store, + haves, + wants, + shallow=None, + progress=None, + get_tagged=None, + get_parents=lambda commit: commit.parents, + ): + self.object_store = object_store + if shallow is None: + shallow = set() + self._get_parents = get_parents + # process Commits and Tags differently + # Note, while haves may list commits/tags not available locally, + # and such SHAs would get filtered out by _split_commits_and_tags, + # wants shall list only known SHAs, and otherwise + # _split_commits_and_tags fails with KeyError + have_commits, have_tags, have_others = _split_commits_and_tags( + object_store, haves, True + ) + want_commits, want_tags, want_others = _split_commits_and_tags( + object_store, wants, False + ) + # all_ancestors is a set of commits that shall not be sent + # (complete repository up to 'haves') + all_ancestors = object_store._collect_ancestors( + have_commits, shallow=shallow, get_parents=self._get_parents + )[0] + # all_missing - complete set of commits between haves and wants + # common - commits from all_ancestors we hit into while + # traversing parent hierarchy of wants + missing_commits, common_commits = object_store._collect_ancestors( + want_commits, + all_ancestors, + shallow=shallow, + get_parents=self._get_parents, + ) + self.sha_done = set() + # Now, fill sha_done with commits and revisions of + # files and directories known to be both locally + # and on target. 
Thus these commits and files
+        # won't get selected for fetch
+        for h in common_commits:
+            self.sha_done.add(h)
+            cmt = object_store[h]
+            _collect_filetree_revs(object_store, cmt.tree, self.sha_done)
+        # record tags we have as visited, too
+        for t in have_tags:
+            self.sha_done.add(t)
+
+        missing_tags = want_tags.difference(have_tags)
+        missing_others = want_others.difference(have_others)
+        # in fact, what we 'want' is commits, tags, and others
+        # we've found missing
+        wants = missing_commits.union(missing_tags)
+        wants = wants.union(missing_others)
+
+        self.objects_to_send = set([(w, None, False) for w in wants])
+
+        if progress is None:
+            self.progress = lambda x: None
+        else:
+            self.progress = progress
+        self._tagged = get_tagged and get_tagged() or {}
+
+    def add_todo(self, entries):
+        self.objects_to_send.update([e for e in entries if not e[0] in self.sha_done])
+
+    def next(self):
+        while True:
+            if not self.objects_to_send:
+                return None
+            (sha, name, leaf) = self.objects_to_send.pop()
+            if sha not in self.sha_done:
+                break
+        if not leaf:
+            o = self.object_store[sha]
+            if isinstance(o, Commit):
+                self.add_todo([(o.tree, b"", False)])
+            elif isinstance(o, Tree):
+                self.add_todo(
+                    [
+                        (s, n, not stat.S_ISDIR(m))
+                        for n, m, s in o.iteritems()
+                        if not S_ISGITLINK(m)
+                    ]
+                )
+            elif isinstance(o, Tag):
+                self.add_todo([(o.object[1], None, False)])
+        if sha in self._tagged:
+            self.add_todo([(self._tagged[sha], None, True)])
+        self.sha_done.add(sha)
+        self.progress(("counting objects: %d\r" % len(self.sha_done)).encode("ascii"))
+        return (sha, name)
+
+    __next__ = next
+
+
+class ObjectStoreGraphWalker(object):
+    """Graph walker that finds what commits are missing from an object store.
+
+    Attributes:
+      heads: Revisions without descendants in the local repo
+      get_parents: Function to retrieve parents in the local repo
+    """
+
+    def __init__(self, local_heads, get_parents, shallow=None):
+        """Create a new instance.
+
+        Args:
+          local_heads: Heads to start search with
+          get_parents: Function for finding the parents of a SHA1.
+        """
+        self.heads = set(local_heads)
+        self.get_parents = get_parents
+        self.parents = {}
+        if shallow is None:
+            shallow = set()
+        self.shallow = shallow
+
+    def ack(self, sha):
+        """Ack that a revision and its ancestors are present in the source."""
+        if len(sha) != 40:
+            raise ValueError("unexpected sha %r received" % sha)
+        ancestors = set([sha])
+
+        # stop if we run out of heads to remove
+        while self.heads:
+            for a in ancestors:
+                if a in self.heads:
+                    self.heads.remove(a)
+
+            # collect all ancestors
+            new_ancestors = set()
+            for a in ancestors:
+                ps = self.parents.get(a)
+                if ps is not None:
+                    new_ancestors.update(ps)
+                self.parents[a] = None
+
+            # no more ancestors; stop
+            if not new_ancestors:
+                break
+
+            ancestors = new_ancestors
+
+    def next(self):
+        """Iterate over ancestors of heads in the target."""
+        if self.heads:
+            ret = self.heads.pop()
+            try:
+                ps = self.get_parents(ret)
+            except KeyError:
+                return None
+            self.parents[ret] = ps
+            self.heads.update([p for p in ps if p not in self.parents])
+            return ret
+        return None
+
+    __next__ = next
+
+
+def commit_tree_changes(object_store, tree, changes):
+    """Commit a specified set of changes to a tree structure.
+
+    This will apply a set of changes on top of an existing tree, storing new
+    objects in object_store.
+
+    changes are a list of tuples with (path, mode, object_sha). Paths can be
+    both blobs and trees. Setting the mode and object sha to None deletes the
+    path.
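+
+    For example (illustrative paths and SHAs):
+
+        commit_tree_changes(object_store, tree, [
+            (b"docs/readme", 0o100644, blob_sha),  # add or update a blob
+            (b"obsolete/file", None, None),        # delete a path
+        ])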
+ + This method works especially well if there are only a small + number of changes to a big tree. For a large number of changes + to a large tree, use e.g. commit_tree. + + Args: + object_store: Object store to store new objects in + and retrieve old ones from. + tree: Original tree root + changes: changes to apply + Returns: New tree root object + """ + # TODO(jelmer): Save up the objects and add them using .add_objects + # rather than with individual calls to .add_object. + nested_changes = {} + for (path, new_mode, new_sha) in changes: + try: + (dirname, subpath) = path.split(b"/", 1) + except ValueError: + if new_sha is None: + del tree[path] + else: + tree[path] = (new_mode, new_sha) + else: + nested_changes.setdefault(dirname, []).append((subpath, new_mode, new_sha)) + for name, subchanges in nested_changes.items(): + try: + orig_subtree = object_store[tree[name][1]] + except KeyError: + orig_subtree = Tree() + subtree = commit_tree_changes(object_store, orig_subtree, subchanges) + if len(subtree) == 0: + del tree[name] + else: + tree[name] = (stat.S_IFDIR, subtree.id) + object_store.add_object(tree) + return tree + + +class OverlayObjectStore(BaseObjectStore): + """Object store that can overlay multiple object stores.""" + + def __init__(self, bases, add_store=None): + self.bases = bases + self.add_store = add_store + + def add_object(self, object): + if self.add_store is None: + raise NotImplementedError(self.add_object) + return self.add_store.add_object(object) + + def add_objects(self, objects, progress=None): + if self.add_store is None: + raise NotImplementedError(self.add_object) + return self.add_store.add_objects(objects, progress) + + @property + def packs(self): + ret = [] + for b in self.bases: + ret.extend(b.packs) + return ret + + def __iter__(self): + done = set() + for b in self.bases: + for o_id in b: + if o_id not in done: + yield o_id + done.add(o_id) + + def get_raw(self, sha_id): + for b in self.bases: + try: + return b.get_raw(sha_id) + except KeyError: + pass + raise KeyError(sha_id) + + def contains_packed(self, sha): + for b in self.bases: + if b.contains_packed(sha): + return True + return False + + def contains_loose(self, sha): + for b in self.bases: + if b.contains_loose(sha): + return True + return False + + +def read_packs_file(f): + """Yield the packs listed in a packs file.""" + for line in f.read().splitlines(): + if not line: + continue + (kind, name) = line.split(b" ", 1) + if kind != b"P": + continue + yield os.fsdecode(name) + + +class BucketBasedObjectStore(PackBasedObjectStore): + """Object store implementation that uses a bucket store like S3 as backend. + """ + + def _iter_loose_objects(self): + """Iterate over the SHAs of all loose objects.""" + return iter([]) + + def _get_loose_object(self, sha): + return None + + def _remove_loose_object(self, sha): + # Doesn't exist.. 
+        pass
+
+    def _remove_pack(self, name):
+        raise NotImplementedError(self._remove_pack)
+
+    def _iter_pack_names(self):
+        raise NotImplementedError(self._iter_pack_names)
+
+    def _get_pack(self, name):
+        raise NotImplementedError(self._get_pack)
+
+    def _update_pack_cache(self):
+        pack_files = set(self._iter_pack_names())
+
+        # Open newly appeared pack files
+        new_packs = []
+        for f in pack_files:
+            if f not in self._pack_cache:
+                pack = self._get_pack(f)
+                new_packs.append(pack)
+                self._pack_cache[f] = pack
+        # Remove disappeared pack files
+        for f in set(self._pack_cache) - pack_files:
+            self._pack_cache.pop(f).close()
+        return new_packs
+
+    def _upload_pack(self, basename, pack_file, index_file):
+        raise NotImplementedError
+
+    def add_pack(self):
+        """Add a new pack to this object store.
+
+        Returns: Fileobject to write to, a commit function to
+          call when the pack is finished and an abort
+          function.
+        """
+        import tempfile
+
+        pf = tempfile.SpooledTemporaryFile()
+
+        def commit():
+            if pf.tell() == 0:
+                pf.close()
+                return None
+
+            pf.seek(0)
+            p = PackData(pf.name, pf)
+            entries = p.sorted_entries()
+            basename = iter_sha1(entry[0] for entry in entries).decode('ascii')
+            idxf = tempfile.SpooledTemporaryFile()
+            checksum = p.get_stored_checksum()
+            write_pack_index_v2(idxf, entries, checksum)
+            idxf.seek(0)
+            idx = load_pack_index_file(basename + '.idx', idxf)
+            for pack in self.packs:
+                if pack.get_stored_checksum() == p.get_stored_checksum():
+                    p.close()
+                    idx.close()
+                    return pack
+            pf.seek(0)
+            idxf.seek(0)
+            self._upload_pack(basename, pf, idxf)
+            final_pack = Pack.from_objects(p, idx)
+            self._add_cached_pack(basename, final_pack)
+            return final_pack
+
+        return pf, commit, pf.close
diff --git a/env/lib/python3.8/site-packages/dulwich/objects.py b/env/lib/python3.8/site-packages/dulwich/objects.py
new file mode 100644
index 00000000..492a10ac
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/objects.py
@@ -0,0 +1,1621 @@
+# objects.py -- Access to base git objects
+# Copyright (C) 2007 James Westby
+# Copyright (C) 2008-2013 Jelmer Vernooij
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as published by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+# + +"""Access to base git objects.""" + +import binascii +from io import BytesIO +from collections import namedtuple +import os +import posixpath +import stat +from typing import ( + Optional, + Dict, + Iterable, + Union, + Type, + Iterator, + List, +) +import zlib +from hashlib import sha1 + +from dulwich.errors import ( + ChecksumMismatch, + NotBlobError, + NotCommitError, + NotTagError, + NotTreeError, + ObjectFormatException, + FileFormatException, +) +from dulwich.file import GitFile + + +ZERO_SHA = b"0" * 40 + +# Header fields for commits +_TREE_HEADER = b"tree" +_PARENT_HEADER = b"parent" +_AUTHOR_HEADER = b"author" +_COMMITTER_HEADER = b"committer" +_ENCODING_HEADER = b"encoding" +_MERGETAG_HEADER = b"mergetag" +_GPGSIG_HEADER = b"gpgsig" + +# Header fields for objects +_OBJECT_HEADER = b"object" +_TYPE_HEADER = b"type" +_TAG_HEADER = b"tag" +_TAGGER_HEADER = b"tagger" + + +S_IFGITLINK = 0o160000 + + +MAX_TIME = 9223372036854775807 # (2**63) - 1 - signed long int max + +BEGIN_PGP_SIGNATURE = b"-----BEGIN PGP SIGNATURE-----" + + +ObjectID = bytes + + +class EmptyFileException(FileFormatException): + """An unexpectedly empty file was encountered.""" + + +def S_ISGITLINK(m): + """Check if a mode indicates a submodule. + + Args: + m: Mode to check + Returns: a ``boolean`` + """ + return stat.S_IFMT(m) == S_IFGITLINK + + +def _decompress(string): + dcomp = zlib.decompressobj() + dcomped = dcomp.decompress(string) + dcomped += dcomp.flush() + return dcomped + + +def sha_to_hex(sha): + """Takes a string and returns the hex of the sha within""" + hexsha = binascii.hexlify(sha) + assert len(hexsha) == 40, "Incorrect length of sha1 string: %s" % hexsha + return hexsha + + +def hex_to_sha(hex): + """Takes a hex sha and returns a binary sha""" + assert len(hex) == 40, "Incorrect length of hexsha: %s" % hex + try: + return binascii.unhexlify(hex) + except TypeError as exc: + if not isinstance(hex, bytes): + raise + raise ValueError(exc.args[0]) from exc + + +def valid_hexsha(hex): + if len(hex) != 40: + return False + try: + binascii.unhexlify(hex) + except (TypeError, binascii.Error): + return False + else: + return True + + +def hex_to_filename(path, hex): + """Takes a hex sha and returns its filename relative to the given path.""" + # os.path.join accepts bytes or unicode, but all args must be of the same + # type. Make sure that hex which is expected to be bytes, is the same type + # as path. 
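+    # For example (illustrative): hex_to_filename("objects", "ab" + "c" * 38)
+    # gives os.path.join("objects", "ab", "c" * 38).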
+ if getattr(path, "encode", None) is not None: + hex = hex.decode("ascii") + dir = hex[:2] + file = hex[2:] + # Check from object dir + return os.path.join(path, dir, file) + + +def filename_to_hex(filename): + """Takes an object filename and returns its corresponding hex sha.""" + # grab the last (up to) two path components + names = filename.rsplit(os.path.sep, 2)[-2:] + errmsg = "Invalid object filename: %s" % filename + assert len(names) == 2, errmsg + base, rest = names + assert len(base) == 2 and len(rest) == 38, errmsg + hex = (base + rest).encode("ascii") + hex_to_sha(hex) + return hex + + +def object_header(num_type: int, length: int) -> bytes: + """Return an object header for the given numeric type and text length.""" + cls = object_class(num_type) + if cls is None: + raise AssertionError("unsupported class type num: %d" % num_type) + return cls.type_name + b" " + str(length).encode("ascii") + b"\0" + + +def serializable_property(name: str, docstring: Optional[str] = None): + """A property that helps tracking whether serialization is necessary.""" + + def set(obj, value): + setattr(obj, "_" + name, value) + obj._needs_serialization = True + + def get(obj): + return getattr(obj, "_" + name) + + return property(get, set, doc=docstring) + + +def object_class(type: Union[bytes, int]) -> Optional[Type["ShaFile"]]: + """Get the object class corresponding to the given type. + + Args: + type: Either a type name string or a numeric type. + Returns: The ShaFile subclass corresponding to the given type, or None if + type is not a valid type name/number. + """ + return _TYPE_MAP.get(type, None) + + +def check_hexsha(hex, error_msg): + """Check if a string is a valid hex sha string. + + Args: + hex: Hex string to check + error_msg: Error message to use in exception + Raises: + ObjectFormatException: Raised when the string is not valid + """ + if not valid_hexsha(hex): + raise ObjectFormatException("%s %s" % (error_msg, hex)) + + +def check_identity(identity: bytes, error_msg: str) -> None: + """Check if the specified identity is valid. + + This will raise an exception if the identity is not valid. + + Args: + identity: Identity string + error_msg: Error message to use in exception + """ + email_start = identity.find(b'<') + email_end = identity.find(b'>') + if not all([ + email_start >= 1, + identity[email_start - 1] == b' '[0], + identity.find(b'<', email_start + 1) == -1, + email_end == len(identity) - 1, + b'\0' not in identity, + b'\n' not in identity, + ]): + raise ObjectFormatException(error_msg) + + +def check_time(time_seconds): + """Check if the specified time is not prone to overflow error. + + This will raise an exception if the time is not valid. 
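+
+    For example, a timestamp of 2**63 (one past MAX_TIME) cannot be stored
+    in a signed 64-bit integer and is rejected.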
+ + Args: + time_seconds: time in seconds + + """ + # Prevent overflow error + if time_seconds > MAX_TIME: + raise ObjectFormatException("Date field should not exceed %s" % MAX_TIME) + + +def git_line(*items): + """Formats items into a space separated line.""" + return b" ".join(items) + b"\n" + + +class FixedSha(object): + """SHA object that behaves like hashlib's but is given a fixed value.""" + + __slots__ = ("_hexsha", "_sha") + + def __init__(self, hexsha): + if getattr(hexsha, "encode", None) is not None: + hexsha = hexsha.encode("ascii") + if not isinstance(hexsha, bytes): + raise TypeError("Expected bytes for hexsha, got %r" % hexsha) + self._hexsha = hexsha + self._sha = hex_to_sha(hexsha) + + def digest(self): + """Return the raw SHA digest.""" + return self._sha + + def hexdigest(self): + """Return the hex SHA digest.""" + return self._hexsha.decode("ascii") + + +class ShaFile(object): + """A git SHA file.""" + + __slots__ = ("_chunked_text", "_sha", "_needs_serialization") + + _needs_serialization: bool + type_name: bytes + type_num: int + _chunked_text: Optional[List[bytes]] + + @staticmethod + def _parse_legacy_object_header(magic, f) -> "ShaFile": + """Parse a legacy object, creating it but not reading the file.""" + bufsize = 1024 + decomp = zlib.decompressobj() + header = decomp.decompress(magic) + start = 0 + end = -1 + while end < 0: + extra = f.read(bufsize) + header += decomp.decompress(extra) + magic += extra + end = header.find(b"\0", start) + start = len(header) + header = header[:end] + type_name, size = header.split(b" ", 1) + try: + int(size) # sanity check + except ValueError as exc: + raise ObjectFormatException( + "Object size not an integer: %s" % exc) from exc + obj_class = object_class(type_name) + if not obj_class: + raise ObjectFormatException("Not a known type: %s" % type_name.decode('ascii')) + return obj_class() + + def _parse_legacy_object(self, map) -> None: + """Parse a legacy object, setting the raw string.""" + text = _decompress(map) + header_end = text.find(b"\0") + if header_end < 0: + raise ObjectFormatException("Invalid object header, no \\0") + self.set_raw_string(text[header_end + 1 :]) + + def as_legacy_object_chunks( + self, compression_level: int = -1) -> Iterator[bytes]: + """Return chunks representing the object in the experimental format. + + Returns: List of strings + """ + compobj = zlib.compressobj(compression_level) + yield compobj.compress(self._header()) + for chunk in self.as_raw_chunks(): + yield compobj.compress(chunk) + yield compobj.flush() + + def as_legacy_object(self, compression_level: int = -1) -> bytes: + """Return string representing the object in the experimental format.""" + return b"".join( + self.as_legacy_object_chunks(compression_level=compression_level) + ) + + def as_raw_chunks(self) -> List[bytes]: + """Return chunks with serialization of the object. + + Returns: List of strings, not necessarily one per line + """ + if self._needs_serialization: + self._sha = None + self._chunked_text = self._serialize() + self._needs_serialization = False + return self._chunked_text # type: ignore + + def as_raw_string(self) -> bytes: + """Return raw string with serialization of the object. 
+ + Returns: String object + """ + return b"".join(self.as_raw_chunks()) + + def __bytes__(self) -> bytes: + """Return raw string serialization of this object.""" + return self.as_raw_string() + + def __hash__(self): + """Return unique hash for this object.""" + return hash(self.id) + + def as_pretty_string(self) -> bytes: + """Return a string representing this object, fit for display.""" + return self.as_raw_string() + + def set_raw_string( + self, text: bytes, sha: Optional[ObjectID] = None) -> None: + """Set the contents of this object from a serialized string.""" + if not isinstance(text, bytes): + raise TypeError("Expected bytes for text, got %r" % text) + self.set_raw_chunks([text], sha) + + def set_raw_chunks( + self, chunks: List[bytes], + sha: Optional[ObjectID] = None) -> None: + """Set the contents of this object from a list of chunks.""" + self._chunked_text = chunks + self._deserialize(chunks) + if sha is None: + self._sha = None + else: + self._sha = FixedSha(sha) # type: ignore + self._needs_serialization = False + + @staticmethod + def _parse_object_header(magic, f): + """Parse a new style object, creating it but not reading the file.""" + num_type = (ord(magic[0:1]) >> 4) & 7 + obj_class = object_class(num_type) + if not obj_class: + raise ObjectFormatException("Not a known type %d" % num_type) + return obj_class() + + def _parse_object(self, map) -> None: + """Parse a new style object, setting self._text.""" + # skip type and size; type must have already been determined, and + # we trust zlib to fail if it's otherwise corrupted + byte = ord(map[0:1]) + used = 1 + while (byte & 0x80) != 0: + byte = ord(map[used : used + 1]) + used += 1 + raw = map[used:] + self.set_raw_string(_decompress(raw)) + + @classmethod + def _is_legacy_object(cls, magic: bytes) -> bool: + b0 = ord(magic[0:1]) + b1 = ord(magic[1:2]) + word = (b0 << 8) + b1 + return (b0 & 0x8F) == 0x08 and (word % 31) == 0 + + @classmethod + def _parse_file(cls, f): + map = f.read() + if not map: + raise EmptyFileException("Corrupted empty file detected") + + if cls._is_legacy_object(map): + obj = cls._parse_legacy_object_header(map, f) + obj._parse_legacy_object(map) + else: + obj = cls._parse_object_header(map, f) + obj._parse_object(map) + return obj + + def __init__(self): + """Don't call this directly""" + self._sha = None + self._chunked_text = [] + self._needs_serialization = True + + def _deserialize(self, chunks): + raise NotImplementedError(self._deserialize) + + def _serialize(self): + raise NotImplementedError(self._serialize) + + @classmethod + def from_path(cls, path): + """Open a SHA file from disk.""" + with GitFile(path, "rb") as f: + return cls.from_file(f) + + @classmethod + def from_file(cls, f): + """Get the contents of a SHA file on disk.""" + try: + obj = cls._parse_file(f) + obj._sha = None + return obj + except (IndexError, ValueError) as exc: + raise ObjectFormatException("invalid object header") from exc + + @staticmethod + def from_raw_string(type_num, string, sha=None): + """Creates an object of the indicated type from the raw string given. + + Args: + type_num: The numeric type of the object. + string: The raw uncompressed contents. + sha: Optional known sha for the object + """ + obj = object_class(type_num)() + obj.set_raw_string(string, sha) + return obj + + @staticmethod + def from_raw_chunks( + type_num: int, chunks: List[bytes], + sha: Optional[ObjectID] = None): + """Creates an object of the indicated type from the raw chunks given. 
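The object-id machinery above boils down to hashing `<type> <length>\0` plus the raw content; a self-contained sketch (the blob content is made up):

```python
from hashlib import sha1

content = b"hello world\n"
header = b"blob " + str(len(content)).encode("ascii") + b"\x00"
print(sha1(header + content).hexdigest())
# -> 3b18e512dba79e4c8300dd08aeb37f8e728b8dad, the same id that
# `echo 'hello world' | git hash-object --stdin` reports
```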
+ + Args: + type_num: The numeric type of the object. + chunks: An iterable of the raw uncompressed contents. + sha: Optional known sha for the object + """ + cls = object_class(type_num) + if cls is None: + raise AssertionError("unsupported class type num: %d" % type_num) + obj = cls() + obj.set_raw_chunks(chunks, sha) + return obj + + @classmethod + def from_string(cls, string): + """Create a ShaFile from a string.""" + obj = cls() + obj.set_raw_string(string) + return obj + + def _check_has_member(self, member, error_msg): + """Check that the object has a given member variable. + + Args: + member: the member variable to check for + error_msg: the message for an error if the member is missing + Raises: + ObjectFormatException: with the given error_msg if member is + missing or is None + """ + if getattr(self, member, None) is None: + raise ObjectFormatException(error_msg) + + def check(self) -> None: + """Check this object for internal consistency. + + Raises: + ObjectFormatException: if the object is malformed in some way + ChecksumMismatch: if the object was created with a SHA that does + not match its contents + """ + # TODO: if we find that error-checking during object parsing is a + # performance bottleneck, those checks should be moved to the class's + # check() method during optimization so we can still check the object + # when necessary. + old_sha = self.id + try: + self._deserialize(self.as_raw_chunks()) + self._sha = None + new_sha = self.id + except Exception as exc: + raise ObjectFormatException(exc) from exc + if old_sha != new_sha: + raise ChecksumMismatch(new_sha, old_sha) + + def _header(self): + return object_header(self.type_num, self.raw_length()) + + def raw_length(self) -> int: + """Returns the length of the raw string of this object.""" + ret = 0 + for chunk in self.as_raw_chunks(): + ret += len(chunk) + return ret + + def sha(self): + """The SHA1 object that is the name of this object.""" + if self._sha is None or self._needs_serialization: + # this is a local because as_raw_chunks() overwrites self._sha + new_sha = sha1() + new_sha.update(self._header()) + for chunk in self.as_raw_chunks(): + new_sha.update(chunk) + self._sha = new_sha + return self._sha + + def copy(self): + """Create a new copy of this SHA1 object from its raw string""" + obj_class = object_class(self.type_num) + return obj_class.from_raw_string(self.type_num, self.as_raw_string(), self.id) + + @property + def id(self): + """The hex SHA of this object.""" + return self.sha().hexdigest().encode("ascii") + + def __repr__(self): + return "<%s %s>" % (self.__class__.__name__, self.id) + + def __ne__(self, other): + """Check whether this object does not match the other.""" + return not isinstance(other, ShaFile) or self.id != other.id + + def __eq__(self, other): + """Return True if the SHAs of the two objects match.""" + return isinstance(other, ShaFile) and self.id == other.id + + def __lt__(self, other): + """Return whether SHA of this object is less than the other.""" + if not isinstance(other, ShaFile): + raise TypeError + return self.id < other.id + + def __le__(self, other): + """Check whether SHA of this object is less than or equal to the other.""" + if not isinstance(other, ShaFile): + raise TypeError + return self.id <= other.id + + +class Blob(ShaFile): + """A Git Blob object.""" + + __slots__ = () + + type_name = b"blob" + type_num = 3 + + def __init__(self): + super(Blob, self).__init__() + self._chunked_text = [] + self._needs_serialization = False + + def _get_data(self): + return 
self.as_raw_string() + + def _set_data(self, data): + self.set_raw_string(data) + + data = property( + _get_data, _set_data, doc="The text contained within the blob object." + ) + + def _get_chunked(self): + return self._chunked_text + + def _set_chunked(self, chunks): + self._chunked_text = chunks + + def _serialize(self): + return self._chunked_text + + def _deserialize(self, chunks): + self._chunked_text = chunks + + chunked = property( + _get_chunked, + _set_chunked, + doc="The text in the blob object, as chunks (not necessarily lines)", + ) + + @classmethod + def from_path(cls, path): + blob = ShaFile.from_path(path) + if not isinstance(blob, cls): + raise NotBlobError(path) + return blob + + def check(self): + """Check this object for internal consistency. + + Raises: + ObjectFormatException: if the object is malformed in some way + """ + super(Blob, self).check() + + def splitlines(self) -> List[bytes]: + """Return list of lines in this blob. + + This preserves the original line endings. + """ + chunks = self.chunked + if not chunks: + return [] + if len(chunks) == 1: + return chunks[0].splitlines(True) + remaining = None + ret = [] + for chunk in chunks: + lines = chunk.splitlines(True) + if len(lines) > 1: + ret.append((remaining or b"") + lines[0]) + ret.extend(lines[1:-1]) + remaining = lines[-1] + elif len(lines) == 1: + if remaining is None: + remaining = lines.pop() + else: + remaining += lines.pop() + if remaining is not None: + ret.append(remaining) + return ret + + +def _parse_message(chunks: Iterable[bytes]): + """Parse a message with a list of fields and a body. + + Args: + chunks: the raw chunks of the tag or commit object. + Returns: iterator of tuples of (field, value), one per header line, in the + order read from the text, possibly including duplicates. Includes a + field named None for the freeform tag/commit text. + """ + f = BytesIO(b"".join(chunks)) + k = None + v = b"" + eof = False + + def _strip_last_newline(value): + """Strip the last newline from value""" + if value and value.endswith(b"\n"): + return value[:-1] + return value + + # Parse the headers + # + # Headers can contain newlines. The next line is indented with a space. + # We store the latest key as 'k', and the accumulated value as 'v'. + for line in f: + if line.startswith(b" "): + # Indented continuation of the previous line + v += line[1:] + else: + if k is not None: + # We parsed a new header, return its value + yield (k, _strip_last_newline(v)) + if line == b"\n": + # Empty line indicates end of headers + break + (k, v) = line.split(b" ", 1) + + else: + # We reached end of file before the headers ended. We still need to + # return the previous header, then we need to return a None field for + # the text. + eof = True + if k is not None: + yield (k, _strip_last_newline(v)) + yield (None, None) + + if not eof: + # We didn't reach the end of file while parsing headers. We can return + # the rest of the file as a message. 
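Assuming dulwich is importable as laid out in this patch, a small usage example for the `Blob` API above: `data` and `chunked` are two views of the same content, and `splitlines()` stitches lines back together across chunk boundaries.

```python
from dulwich.objects import Blob

b = Blob()
b.chunked = [b"first li", b"ne\nsecond", b" line\n"]
assert b.data == b"first line\nsecond line\n"
assert b.splitlines() == [b"first line\n", b"second line\n"]
print(b.id)  # hex sha of the blob
```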
+ yield (None, f.read()) + + f.close() + + +class Tag(ShaFile): + """A Git Tag object.""" + + type_name = b"tag" + type_num = 4 + + __slots__ = ( + "_tag_timezone_neg_utc", + "_name", + "_object_sha", + "_object_class", + "_tag_time", + "_tag_timezone", + "_tagger", + "_message", + "_signature", + ) + + def __init__(self): + super(Tag, self).__init__() + self._tagger = None + self._tag_time = None + self._tag_timezone = None + self._tag_timezone_neg_utc = False + self._signature = None + + @classmethod + def from_path(cls, filename): + tag = ShaFile.from_path(filename) + if not isinstance(tag, cls): + raise NotTagError(filename) + return tag + + def check(self): + """Check this object for internal consistency. + + Raises: + ObjectFormatException: if the object is malformed in some way + """ + super(Tag, self).check() + self._check_has_member("_object_sha", "missing object sha") + self._check_has_member("_object_class", "missing object type") + self._check_has_member("_name", "missing tag name") + + if not self._name: + raise ObjectFormatException("empty tag name") + + check_hexsha(self._object_sha, "invalid object sha") + + if getattr(self, "_tagger", None): + check_identity(self._tagger, "invalid tagger") + + self._check_has_member("_tag_time", "missing tag time") + check_time(self._tag_time) + + last = None + for field, _ in _parse_message(self._chunked_text): + if field == _OBJECT_HEADER and last is not None: + raise ObjectFormatException("unexpected object") + elif field == _TYPE_HEADER and last != _OBJECT_HEADER: + raise ObjectFormatException("unexpected type") + elif field == _TAG_HEADER and last != _TYPE_HEADER: + raise ObjectFormatException("unexpected tag name") + elif field == _TAGGER_HEADER and last != _TAG_HEADER: + raise ObjectFormatException("unexpected tagger") + last = field + + def _serialize(self): + chunks = [] + chunks.append(git_line(_OBJECT_HEADER, self._object_sha)) + chunks.append(git_line(_TYPE_HEADER, self._object_class.type_name)) + chunks.append(git_line(_TAG_HEADER, self._name)) + if self._tagger: + if self._tag_time is None: + chunks.append(git_line(_TAGGER_HEADER, self._tagger)) + else: + chunks.append( + git_line( + _TAGGER_HEADER, + self._tagger, + str(self._tag_time).encode("ascii"), + format_timezone(self._tag_timezone, self._tag_timezone_neg_utc), + ) + ) + if self._message is not None: + chunks.append(b"\n") # To close headers + chunks.append(self._message) + if self._signature is not None: + chunks.append(self._signature) + return chunks + + def _deserialize(self, chunks): + """Grab the metadata attached to the tag""" + self._tagger = None + self._tag_time = None + self._tag_timezone = None + self._tag_timezone_neg_utc = False + for field, value in _parse_message(chunks): + if field == _OBJECT_HEADER: + self._object_sha = value + elif field == _TYPE_HEADER: + obj_class = object_class(value) + if not obj_class: + raise ObjectFormatException("Not a known type: %s" % value) + self._object_class = obj_class + elif field == _TAG_HEADER: + self._name = value + elif field == _TAGGER_HEADER: + ( + self._tagger, + self._tag_time, + (self._tag_timezone, self._tag_timezone_neg_utc), + ) = parse_time_entry(value) + elif field is None: + if value is None: + self._message = None + self._signature = None + else: + try: + sig_idx = value.index(BEGIN_PGP_SIGNATURE) + except ValueError: + self._message = value + self._signature = None + else: + self._message = value[:sig_idx] + self._signature = value[sig_idx:] + else: + raise ObjectFormatException("Unknown field %s" % 
field) + + def _get_object(self): + """Get the object pointed to by this tag. + + Returns: tuple of (object class, sha). + """ + return (self._object_class, self._object_sha) + + def _set_object(self, value): + (self._object_class, self._object_sha) = value + self._needs_serialization = True + + object = property(_get_object, _set_object) + + name = serializable_property("name", "The name of this tag") + tagger = serializable_property( + "tagger", "Returns the name of the person who created this tag" + ) + tag_time = serializable_property( + "tag_time", + "The creation timestamp of the tag. As the number of seconds " + "since the epoch", + ) + tag_timezone = serializable_property( + "tag_timezone", "The timezone that tag_time is in." + ) + message = serializable_property("message", "the message attached to this tag") + + signature = serializable_property("signature", "Optional detached GPG signature") + + def sign(self, keyid: Optional[str] = None): + import gpg + with gpg.Context(armor=True) as c: + if keyid is not None: + key = c.get_key(keyid) + with gpg.Context(armor=True, signers=[key]) as ctx: + self.signature, unused_result = ctx.sign( + self.as_raw_string(), + mode=gpg.constants.sig.mode.DETACH, + ) + else: + self.signature, unused_result = c.sign( + self.as_raw_string(), mode=gpg.constants.sig.mode.DETACH + ) + + def verify(self, keyids: Optional[Iterable[str]] = None): + """Verify GPG signature for this tag (if it is signed). + + Args: + keyids: Optional iterable of trusted keyids for this tag. + If this tag is not signed by any key in keyids verification will + fail. If not specified, this function only verifies that the tag + has a valid signature. + + Raises: + gpg.errors.BadSignatures: if GPG signature verification fails + gpg.errors.MissingSignatures: if tag was not signed by a key + specified in keyids + """ + if self._signature is None: + return + + import gpg + + with gpg.Context() as ctx: + data, result = ctx.verify( + self.as_raw_string()[: -len(self._signature)], + signature=self._signature, + ) + if keyids: + keys = [ + ctx.get_key(key) + for key in keyids + ] + for key in keys: + for subkey in keys: + for sig in result.signatures: + if subkey.can_sign and subkey.fpr == sig.fpr: + return + raise gpg.errors.MissingSignatures( + result, keys, results=(data, result) + ) + + +class TreeEntry(namedtuple("TreeEntry", ["path", "mode", "sha"])): + """Named tuple encapsulating a single tree entry.""" + + def in_path(self, path): + """Return a copy of this entry with the given path prepended.""" + if not isinstance(self.path, bytes): + raise TypeError("Expected bytes for path, got %r" % path) + return TreeEntry(posixpath.join(path, self.path), self.mode, self.sha) + + +def parse_tree(text, strict=False): + """Parse a tree text. 
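A hedged sketch of building an annotated tag in memory with the `Tag` class above; the target sha is made up and nothing is written to a repository:

```python
import time
from dulwich.objects import Tag, Commit

tag = Tag()
tag.name = b"v1.0.0"
tag.tagger = b"Jane Doe <jane@example.com>"
tag.message = b"First stable release\n"
tag.object = (Commit, b"a" * 40)  # (object class, made-up hex sha)
tag.tag_time = int(time.time())
tag.tag_timezone = 0
print(tag.id)  # hex sha of the serialized tag
```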
+
+    Args:
+      text: Serialized text to parse
+      strict: If True, reject modes whose octal text has a leading zero
+    Returns: iterator of tuples of (name, mode, sha)
+    Raises:
+      ObjectFormatException: if the object was malformed in some way
+    """
+    count = 0
+    length = len(text)
+    while count < length:
+        mode_end = text.index(b" ", count)
+        mode_text = text[count:mode_end]
+        if strict and mode_text.startswith(b"0"):
+            raise ObjectFormatException("Invalid mode '%s'" % mode_text)
+        try:
+            mode = int(mode_text, 8)
+        except ValueError as exc:
+            raise ObjectFormatException(
+                "Invalid mode '%s'" % mode_text) from exc
+        name_end = text.index(b"\0", mode_end)
+        name = text[mode_end + 1 : name_end]
+        count = name_end + 21
+        sha = text[name_end + 1 : count]
+        if len(sha) != 20:
+            raise ObjectFormatException("Sha has invalid length")
+        hexsha = sha_to_hex(sha)
+        yield (name, mode, hexsha)
+
+
+def serialize_tree(items):
+    """Serialize the items in a tree to a text.
+
+    Args:
+      items: Sorted iterable over (name, mode, sha) tuples
+    Returns: Serialized tree text as chunks
+    """
+    for name, mode, hexsha in items:
+        yield (
+            ("%04o" % mode).encode("ascii") + b" " + name + b"\0" + hex_to_sha(hexsha)
+        )
+
+
+def sorted_tree_items(entries, name_order):
+    """Iterate over a tree entries dictionary.
+
+    Args:
+      name_order: If True, iterate entries in order of their name. If
+        False, iterate entries in tree order, that is, treat subtree entries as
+        having '/' appended.
+      entries: Dictionary mapping names to (mode, sha) tuples
+    Returns: Iterator over (name, mode, hexsha)
+    """
+    key_func = name_order and key_entry_name_order or key_entry
+    for name, entry in sorted(entries.items(), key=key_func):
+        mode, hexsha = entry
+        # Stricter type checks than normal to mirror checks in the C version.
+        mode = int(mode)
+        if not isinstance(hexsha, bytes):
+            raise TypeError("Expected bytes for SHA, got %r" % hexsha)
+        yield TreeEntry(name, mode, hexsha)
+
+
+def key_entry(entry):
+    """Sort key for tree entry.
+
+    Args:
+      entry: (name, value) tuple
+    """
+    (name, value) = entry
+    if stat.S_ISDIR(value[0]):
+        name += b"/"
+    return name
+
+
+def key_entry_name_order(entry):
+    """Sort key for tree entry in name order."""
+    return entry[0]
+
+
+def pretty_format_tree_entry(name, mode, hexsha, encoding="utf-8"):
+    """Pretty format tree entry.
+
+    Args:
+      name: Name of the directory entry
+      mode: Mode of entry
+      hexsha: Hexsha of the referenced object
+    Returns: string describing the tree entry
+    """
+    if mode & stat.S_IFDIR:
+        kind = "tree"
+    else:
+        kind = "blob"
+    return "%04o %s %s\t%s\n" % (
+        mode,
+        kind,
+        hexsha.decode("ascii"),
+        name.decode(encoding, "replace"),
+    )
+
+
+class SubmoduleEncountered(Exception):
+    """A submodule was encountered while resolving a path."""
+
+    def __init__(self, path, sha):
+        self.path = path
+        self.sha = sha
+
+
+class Tree(ShaFile):
+    """A Git tree object"""
+
+    type_name = b"tree"
+    type_num = 2
+
+    __slots__ = "_entries"
+
+    def __init__(self):
+        super(Tree, self).__init__()
+        self._entries = {}
+
+    @classmethod
+    def from_path(cls, filename):
+        tree = ShaFile.from_path(filename)
+        if not isinstance(tree, cls):
+            raise NotTreeError(filename)
+        return tree
+
+    def __contains__(self, name):
+        return name in self._entries
+
+    def __getitem__(self, name):
+        return self._entries[name]
+
+    def __setitem__(self, name, value):
+        """Set a tree entry by name.
+
+        Args:
+          name: The name of the entry, as a string.
+          value: A tuple of (mode, hexsha), where mode is the mode of the
+            entry as an integral type and hexsha is the hex SHA of the entry as
+            a string.
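The tree-order/name-order distinction implemented by `sorted_tree_items()` and `key_entry()` is easiest to see with names that straddle the `/` code point (the sha is made up):

```python
import stat
from dulwich.objects import sorted_tree_items

sha = b"a" * 40  # made-up hex sha
entries = {
    b"a.c": (stat.S_IFREG | 0o644, sha),
    b"a": (stat.S_IFDIR, sha),
}
print([e.path for e in sorted_tree_items(entries, False)])
# -> [b'a.c', b'a']  (tree order: the directory sorts as b"a/")
print([e.path for e in sorted_tree_items(entries, True)])
# -> [b'a', b'a.c']  (plain name order)
```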
+ """ + mode, hexsha = value + self._entries[name] = (mode, hexsha) + self._needs_serialization = True + + def __delitem__(self, name): + del self._entries[name] + self._needs_serialization = True + + def __len__(self): + return len(self._entries) + + def __iter__(self): + return iter(self._entries) + + def add(self, name, mode, hexsha): + """Add an entry to the tree. + + Args: + mode: The mode of the entry as an integral type. Not all + possible modes are supported by git; see check() for details. + name: The name of the entry, as a string. + hexsha: The hex SHA of the entry as a string. + """ + self._entries[name] = mode, hexsha + self._needs_serialization = True + + def iteritems(self, name_order=False): + """Iterate over entries. + + Args: + name_order: If True, iterate in name order instead of tree + order. + Returns: Iterator over (name, mode, sha) tuples + """ + return sorted_tree_items(self._entries, name_order) + + def items(self): + """Return the sorted entries in this tree. + + Returns: List with (name, mode, sha) tuples + """ + return list(self.iteritems()) + + def _deserialize(self, chunks): + """Grab the entries in the tree""" + try: + parsed_entries = parse_tree(b"".join(chunks)) + except ValueError as exc: + raise ObjectFormatException(exc) from exc + # TODO: list comprehension is for efficiency in the common (small) + # case; if memory efficiency in the large case is a concern, use a + # genexp. + self._entries = dict([(n, (m, s)) for n, m, s in parsed_entries]) + + def check(self): + """Check this object for internal consistency. + + Raises: + ObjectFormatException: if the object is malformed in some way + """ + super(Tree, self).check() + last = None + allowed_modes = ( + stat.S_IFREG | 0o755, + stat.S_IFREG | 0o644, + stat.S_IFLNK, + stat.S_IFDIR, + S_IFGITLINK, + # TODO: optionally exclude as in git fsck --strict + stat.S_IFREG | 0o664, + ) + for name, mode, sha in parse_tree(b"".join(self._chunked_text), True): + check_hexsha(sha, "invalid sha %s" % sha) + if b"/" in name or name in (b"", b".", b"..", b".git"): + raise ObjectFormatException( + "invalid name %s" % name.decode("utf-8", "replace") + ) + + if mode not in allowed_modes: + raise ObjectFormatException("invalid mode %06o" % mode) + + entry = (name, (mode, sha)) + if last: + if key_entry(last) > key_entry(entry): + raise ObjectFormatException("entries not sorted") + if name == last[0]: + raise ObjectFormatException("duplicate entry %s" % name) + last = entry + + def _serialize(self): + return list(serialize_tree(self.iteritems())) + + def as_pretty_string(self): + text = [] + for name, mode, hexsha in self.iteritems(): + text.append(pretty_format_tree_entry(name, mode, hexsha)) + return "".join(text) + + def lookup_path(self, lookup_obj, path): + """Look up an object in a Git tree. + + Args: + lookup_obj: Callback for retrieving object by SHA1 + path: Path to lookup + Returns: A tuple of (mode, SHA) of the resulting path. + """ + parts = path.split(b"/") + sha = self.id + mode = None + for i, p in enumerate(parts): + if not p: + continue + if mode is not None and S_ISGITLINK(mode): + raise SubmoduleEncountered(b'/'.join(parts[:i]), sha) + obj = lookup_obj(sha) + if not isinstance(obj, Tree): + raise NotTreeError(sha) + mode, sha = obj[p] + return mode, sha + + +def parse_timezone(text): + """Parse a timezone text fragment (e.g. '+0100'). + + Args: + text: Text to parse. 
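Putting the `Tree` API above together: add a couple of entries, then serialize (the shas are made up):

```python
import stat
from dulwich.objects import Tree

tree = Tree()
tree.add(b"hello.txt", stat.S_IFREG | 0o644, b"a" * 40)
tree.add(b"src", stat.S_IFDIR, b"b" * 40)
print(tree.items())             # sorted list of TreeEntry tuples
print(tree.as_pretty_string())  # `git cat-file -p`-style listing
print(tree.id)                  # hex sha of the serialized tree
```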
+
+    Returns: Tuple with timezone as seconds difference to UTC
+        and a boolean indicating whether this was a UTC timezone
+        prefixed with a negative sign (-0000).
+    """
+    # cgit parses the first character as the sign, and the rest
+    # as an integer (using strtol), which could also be negative.
+    # We do the same for compatibility. See #697828.
+    if not text[0] in b"+-":
+        raise ValueError("Timezone must start with + or - (%(text)s)" % vars())
+    sign = text[:1]
+    offset = int(text[1:])
+    if sign == b"-":
+        offset = -offset
+    unnecessary_negative_timezone = offset >= 0 and sign == b"-"
+    signum = (offset < 0) and -1 or 1
+    offset = abs(offset)
+    hours = int(offset / 100)
+    minutes = offset % 100
+    return (
+        signum * (hours * 3600 + minutes * 60),
+        unnecessary_negative_timezone,
+    )
+
+
+def format_timezone(offset, unnecessary_negative_timezone=False):
+    """Format a timezone for Git serialization.
+
+    Args:
+      offset: Timezone offset as seconds difference to UTC
+      unnecessary_negative_timezone: Whether to use a minus sign for
+        UTC or positive timezones (-0000 and -0700 rather than +0000 / +0700).
+    """
+    if offset % 60 != 0:
+        raise ValueError("Unable to handle non-minute offset.")
+    if offset < 0 or unnecessary_negative_timezone:
+        sign = "-"
+        offset = -offset
+    else:
+        sign = "+"
+    return ("%c%02d%02d" % (sign, offset / 3600, (offset / 60) % 60)).encode("ascii")
+
+
+def parse_time_entry(value):
+    """Parse a time entry, e.g. an author or committer line.
+
+    Args:
+      value: Bytes representing a git commit/tag line
+    Raises:
+      ObjectFormatException in case of parsing error (malformed
+      field date)
+    Returns: Tuple of (author, time, (timezone, timezone_neg_utc))
+    """
+    try:
+        sep = value.rindex(b"> ")
+    except ValueError:
+        return (value, None, (None, False))
+    try:
+        person = value[0 : sep + 1]
+        rest = value[sep + 2 :]
+        timetext, timezonetext = rest.rsplit(b" ", 1)
+        time = int(timetext)
+        timezone, timezone_neg_utc = parse_timezone(timezonetext)
+    except ValueError as exc:
+        raise ObjectFormatException(exc) from exc
+    return person, time, (timezone, timezone_neg_utc)
+
+
+def parse_commit(chunks):
+    """Parse a commit object from chunks.
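A round trip through `parse_timezone()`/`format_timezone()`, including the `-0000` case that only survives because of the neg-utc flag:

```python
from dulwich.objects import parse_timezone, format_timezone

offset, neg_utc = parse_timezone(b"+0530")
assert (offset, neg_utc) == (5 * 3600 + 30 * 60, False)
assert format_timezone(offset) == b"+0530"

offset, neg_utc = parse_timezone(b"-0000")
assert (offset, neg_utc) == (0, True)
assert format_timezone(offset, neg_utc) == b"-0000"
```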
+ + Args: + chunks: Chunks to parse + Returns: Tuple of (tree, parents, author_info, commit_info, + encoding, mergetag, gpgsig, message, extra) + """ + parents = [] + extra = [] + tree = None + author_info = (None, None, (None, None)) + commit_info = (None, None, (None, None)) + encoding = None + mergetag = [] + message = None + gpgsig = None + + for field, value in _parse_message(chunks): + # TODO(jelmer): Enforce ordering + if field == _TREE_HEADER: + tree = value + elif field == _PARENT_HEADER: + parents.append(value) + elif field == _AUTHOR_HEADER: + author_info = parse_time_entry(value) + elif field == _COMMITTER_HEADER: + commit_info = parse_time_entry(value) + elif field == _ENCODING_HEADER: + encoding = value + elif field == _MERGETAG_HEADER: + mergetag.append(Tag.from_string(value + b"\n")) + elif field == _GPGSIG_HEADER: + gpgsig = value + elif field is None: + message = value + else: + extra.append((field, value)) + return ( + tree, + parents, + author_info, + commit_info, + encoding, + mergetag, + gpgsig, + message, + extra, + ) + + +class Commit(ShaFile): + """A git commit object""" + + type_name = b"commit" + type_num = 1 + + __slots__ = ( + "_parents", + "_encoding", + "_extra", + "_author_timezone_neg_utc", + "_commit_timezone_neg_utc", + "_commit_time", + "_author_time", + "_author_timezone", + "_commit_timezone", + "_author", + "_committer", + "_tree", + "_message", + "_mergetag", + "_gpgsig", + ) + + def __init__(self): + super(Commit, self).__init__() + self._parents = [] + self._encoding = None + self._mergetag = [] + self._gpgsig = None + self._extra = [] + self._author_timezone_neg_utc = False + self._commit_timezone_neg_utc = False + + @classmethod + def from_path(cls, path): + commit = ShaFile.from_path(path) + if not isinstance(commit, cls): + raise NotCommitError(path) + return commit + + def _deserialize(self, chunks): + ( + self._tree, + self._parents, + author_info, + commit_info, + self._encoding, + self._mergetag, + self._gpgsig, + self._message, + self._extra, + ) = parse_commit(chunks) + ( + self._author, + self._author_time, + (self._author_timezone, self._author_timezone_neg_utc), + ) = author_info + ( + self._committer, + self._commit_time, + (self._commit_timezone, self._commit_timezone_neg_utc), + ) = commit_info + + def check(self): + """Check this object for internal consistency. 
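A hedged sketch of assembling a minimal commit in memory with the `Commit` class above (tree sha, identity, and timestamp are all made up):

```python
from dulwich.objects import Commit

c = Commit()
c.tree = b"d80c186a03f423a81b39df39dc87fd269736ca86"  # made-up tree sha
c.author = c.committer = b"Jane Doe <jane@example.com>"
c.author_time = c.commit_time = 1577836800
c.author_timezone = c.commit_timezone = 0
c.message = b"Initial commit\n"
print(c.id)  # hex sha of the serialized commit
```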
+ + Raises: + ObjectFormatException: if the object is malformed in some way + """ + super(Commit, self).check() + self._check_has_member("_tree", "missing tree") + self._check_has_member("_author", "missing author") + self._check_has_member("_committer", "missing committer") + self._check_has_member("_author_time", "missing author time") + self._check_has_member("_commit_time", "missing commit time") + + for parent in self._parents: + check_hexsha(parent, "invalid parent sha") + check_hexsha(self._tree, "invalid tree sha") + + check_identity(self._author, "invalid author") + check_identity(self._committer, "invalid committer") + + check_time(self._author_time) + check_time(self._commit_time) + + last = None + for field, _ in _parse_message(self._chunked_text): + if field == _TREE_HEADER and last is not None: + raise ObjectFormatException("unexpected tree") + elif field == _PARENT_HEADER and last not in ( + _PARENT_HEADER, + _TREE_HEADER, + ): + raise ObjectFormatException("unexpected parent") + elif field == _AUTHOR_HEADER and last not in ( + _TREE_HEADER, + _PARENT_HEADER, + ): + raise ObjectFormatException("unexpected author") + elif field == _COMMITTER_HEADER and last != _AUTHOR_HEADER: + raise ObjectFormatException("unexpected committer") + elif field == _ENCODING_HEADER and last != _COMMITTER_HEADER: + raise ObjectFormatException("unexpected encoding") + last = field + + # TODO: optionally check for duplicate parents + + def sign(self, keyid: Optional[str] = None): + import gpg + with gpg.Context(armor=True) as c: + if keyid is not None: + key = c.get_key(keyid) + with gpg.Context(armor=True, signers=[key]) as ctx: + self.gpgsig, unused_result = ctx.sign( + self.as_raw_string(), + mode=gpg.constants.sig.mode.DETACH, + ) + else: + self.gpgsig, unused_result = c.sign( + self.as_raw_string(), mode=gpg.constants.sig.mode.DETACH + ) + + def verify(self, keyids: Optional[Iterable[str]] = None): + """Verify GPG signature for this commit (if it is signed). + + Args: + keyids: Optional iterable of trusted keyids for this commit. + If this commit is not signed by any key in keyids verification will + fail. If not specified, this function only verifies that the commit + has a valid signature. 
+ + Raises: + gpg.errors.BadSignatures: if GPG signature verification fails + gpg.errors.MissingSignatures: if commit was not signed by a key + specified in keyids + """ + if self._gpgsig is None: + return + + import gpg + + with gpg.Context() as ctx: + self_without_gpgsig = self.copy() + self_without_gpgsig._gpgsig = None + self_without_gpgsig.gpgsig = None + data, result = ctx.verify( + self_without_gpgsig.as_raw_string(), + signature=self._gpgsig, + ) + if keyids: + keys = [ + ctx.get_key(key) + for key in keyids + ] + for key in keys: + for subkey in keys: + for sig in result.signatures: + if subkey.can_sign and subkey.fpr == sig.fpr: + return + raise gpg.errors.MissingSignatures( + result, keys, results=(data, result) + ) + + def _serialize(self): + chunks = [] + tree_bytes = self._tree.id if isinstance(self._tree, Tree) else self._tree + chunks.append(git_line(_TREE_HEADER, tree_bytes)) + for p in self._parents: + chunks.append(git_line(_PARENT_HEADER, p)) + chunks.append( + git_line( + _AUTHOR_HEADER, + self._author, + str(self._author_time).encode("ascii"), + format_timezone(self._author_timezone, self._author_timezone_neg_utc), + ) + ) + chunks.append( + git_line( + _COMMITTER_HEADER, + self._committer, + str(self._commit_time).encode("ascii"), + format_timezone(self._commit_timezone, self._commit_timezone_neg_utc), + ) + ) + if self.encoding: + chunks.append(git_line(_ENCODING_HEADER, self.encoding)) + for mergetag in self.mergetag: + mergetag_chunks = mergetag.as_raw_string().split(b"\n") + + chunks.append(git_line(_MERGETAG_HEADER, mergetag_chunks[0])) + # Embedded extra header needs leading space + for chunk in mergetag_chunks[1:]: + chunks.append(b" " + chunk + b"\n") + + # No trailing empty line + if chunks[-1].endswith(b" \n"): + chunks[-1] = chunks[-1][:-2] + for k, v in self.extra: + if b"\n" in k or b"\n" in v: + raise AssertionError("newline in extra data: %r -> %r" % (k, v)) + chunks.append(git_line(k, v)) + if self.gpgsig: + sig_chunks = self.gpgsig.split(b"\n") + chunks.append(git_line(_GPGSIG_HEADER, sig_chunks[0])) + for chunk in sig_chunks[1:]: + chunks.append(git_line(b"", chunk)) + chunks.append(b"\n") # There must be a new line after the headers + chunks.append(self._message) + return chunks + + tree = serializable_property("tree", "Tree that is the state of this commit") + + def _get_parents(self): + """Return a list of parents of this commit.""" + return self._parents + + def _set_parents(self, value): + """Set a list of parents of this commit.""" + self._needs_serialization = True + self._parents = value + + parents = property( + _get_parents, + _set_parents, + doc="Parents of this commit, by their SHA1.", + ) + + def _get_extra(self): + """Return extra settings of this commit.""" + return self._extra + + extra = property( + _get_extra, + doc="Extra header fields not understood (presumably added in a " + "newer version of git). Kept verbatim so the object can " + "be correctly reserialized. For private commit metadata, use " + "pseudo-headers in Commit.message, rather than this field.", + ) + + author = serializable_property("author", "The name of the author of the commit") + + committer = serializable_property( + "committer", "The name of the committer of the commit" + ) + + message = serializable_property("message", "The commit message") + + commit_time = serializable_property( + "commit_time", + "The timestamp of the commit. 
As the number of seconds since the " "epoch.", + ) + + commit_timezone = serializable_property( + "commit_timezone", "The zone the commit time is in" + ) + + author_time = serializable_property( + "author_time", + "The timestamp the commit was written. As the number of " + "seconds since the epoch.", + ) + + author_timezone = serializable_property( + "author_timezone", "Returns the zone the author time is in." + ) + + encoding = serializable_property("encoding", "Encoding of the commit message.") + + mergetag = serializable_property("mergetag", "Associated signed tag.") + + gpgsig = serializable_property("gpgsig", "GPG Signature.") + + +OBJECT_CLASSES = ( + Commit, + Tree, + Blob, + Tag, +) + +_TYPE_MAP: Dict[Union[bytes, int], Type[ShaFile]] = {} + +for cls in OBJECT_CLASSES: + _TYPE_MAP[cls.type_name] = cls + _TYPE_MAP[cls.type_num] = cls + + +# Hold on to the pure-python implementations for testing +_parse_tree_py = parse_tree +_sorted_tree_items_py = sorted_tree_items +try: + # Try to import C versions + from dulwich._objects import parse_tree, sorted_tree_items # type: ignore +except ImportError: + pass diff --git a/env/lib/python3.8/site-packages/dulwich/objectspec.py b/env/lib/python3.8/site-packages/dulwich/objectspec.py new file mode 100644 index 00000000..044e6580 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/objectspec.py @@ -0,0 +1,242 @@ +# objectspec.py -- Object specification +# Copyright (C) 2014 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Object specification.""" + +from typing import Union, List, Tuple + + +def to_bytes(text): + if getattr(text, "encode", None) is not None: + text = text.encode("ascii") + return text + + +def parse_object(repo, objectish): + """Parse a string referring to an object. + + Args: + repo: A `Repo` object + objectish: A string referring to an object + Returns: A git object + Raises: + KeyError: If the object can not be found + """ + objectish = to_bytes(objectish) + return repo[objectish] + + +def parse_tree(repo, treeish): + """Parse a string referring to a tree. + + Args: + repo: A `Repo` object + treeish: A string referring to a tree + Returns: A git object + Raises: + KeyError: If the object can not be found + """ + treeish = to_bytes(treeish) + try: + treeish = parse_ref(repo, treeish) + except KeyError: # treeish is commit sha + pass + o = repo[treeish] + if o.type_name == b"commit": + return repo[o.tree] + return o + + +def parse_ref(container, refspec): + """Parse a string referring to a reference. 
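Because `parse_ref()` only needs membership tests on its container, a plain dict is enough to demonstrate the lookup order it tries (the exact name, then `refs/`, `refs/tags/`, `refs/heads/`, `refs/remotes/`, and finally `refs/remotes/<name>/HEAD`); the shas are made up:

```python
from dulwich.objectspec import parse_ref

refs = {b"refs/heads/main": b"a" * 40, b"refs/tags/v1.0": b"b" * 40}
assert parse_ref(refs, "main") == b"refs/heads/main"
assert parse_ref(refs, b"v1.0") == b"refs/tags/v1.0"
assert parse_ref(refs, b"refs/heads/main") == b"refs/heads/main"
```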
+ + Args: + container: A RefsContainer object + refspec: A string referring to a ref + Returns: A ref + Raises: + KeyError: If the ref can not be found + """ + refspec = to_bytes(refspec) + possible_refs = [ + refspec, + b"refs/" + refspec, + b"refs/tags/" + refspec, + b"refs/heads/" + refspec, + b"refs/remotes/" + refspec, + b"refs/remotes/" + refspec + b"/HEAD", + ] + for ref in possible_refs: + if ref in container: + return ref + raise KeyError(refspec) + + +def parse_reftuple(lh_container, rh_container, refspec, force=False): + """Parse a reftuple spec. + + Args: + lh_container: A RefsContainer object + rh_container: A RefsContainer object + refspec: A string + Returns: A tuple with left and right ref + Raises: + KeyError: If one of the refs can not be found + """ + refspec = to_bytes(refspec) + if refspec.startswith(b"+"): + force = True + refspec = refspec[1:] + if b":" in refspec: + (lh, rh) = refspec.split(b":") + else: + lh = rh = refspec + if lh == b"": + lh = None + else: + lh = parse_ref(lh_container, lh) + if rh == b"": + rh = None + else: + try: + rh = parse_ref(rh_container, rh) + except KeyError: + # TODO: check force? + if b"/" not in rh: + rh = b"refs/heads/" + rh + return (lh, rh, force) + + +def parse_reftuples( + lh_container, rh_container, + refspecs: Union[bytes, List[bytes], List[Tuple[bytes, bytes]]], + force: bool = False): + """Parse a list of reftuple specs to a list of reftuples. + + Args: + lh_container: A RefsContainer object + rh_container: A RefsContainer object + refspecs: A list of refspecs or a string + force: Force overwriting for all reftuples + Returns: A list of refs + Raises: + KeyError: If one of the refs can not be found + """ + if not isinstance(refspecs, list): + refspecs = [refspecs] + ret = [] + # TODO: Support * in refspecs + for refspec in refspecs: + ret.append(parse_reftuple(lh_container, rh_container, refspec, force=force)) + return ret + + +def parse_refs(container, refspecs): + """Parse a list of refspecs to a list of refs. + + Args: + container: A RefsContainer object + refspecs: A list of refspecs or a string + Returns: A list of refs + Raises: + KeyError: If one of the refs can not be found + """ + # TODO: Support * in refspecs + if not isinstance(refspecs, list): + refspecs = [refspecs] + ret = [] + for refspec in refspecs: + ret.append(parse_ref(container, refspec)) + return ret + + +def parse_commit_range(repo, committishs): + """Parse a string referring to a range of commits. + + Args: + repo: A `Repo` object + committishs: A string referring to a range of commits. + Returns: An iterator over `Commit` objects + Raises: + KeyError: When the reference commits can not be found + ValueError: If the range can not be parsed + """ + committishs = to_bytes(committishs) + # TODO(jelmer): Support more than a single commit.. + return iter([parse_commit(repo, committishs)]) + + +class AmbiguousShortId(Exception): + """The short id is ambiguous.""" + + def __init__(self, prefix, options): + self.prefix = prefix + self.options = options + + +def scan_for_short_id(object_store, prefix): + """Scan an object store for a short id.""" + # TODO(jelmer): This could short-circuit looking for objects + # starting with a certain prefix. + ret = [] + for object_id in object_store: + if object_id.startswith(prefix): + ret.append(object_store[object_id]) + if not ret: + raise KeyError(prefix) + if len(ret) == 1: + return ret[0] + raise AmbiguousShortId(prefix, ret) + + +def parse_commit(repo, committish): + """Parse a string referring to a single commit. 
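Likewise for `parse_reftuple()`: a leading `+` forces the update, and a right-hand side that does not resolve falls back to `refs/heads/<name>` (dict-backed containers, made-up sha):

```python
from dulwich.objectspec import parse_reftuple

local = {b"refs/heads/main": b"a" * 40}
remote = {}  # destination ref does not exist yet
assert parse_reftuple(local, remote, b"+main:newbranch") == (
    b"refs/heads/main",
    b"refs/heads/newbranch",
    True,
)
```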
+ + Args: + repo: A` Repo` object + committish: A string referring to a single commit. + Returns: A Commit object + Raises: + KeyError: When the reference commits can not be found + ValueError: If the range can not be parsed + """ + committish = to_bytes(committish) + try: + return repo[committish] + except KeyError: + pass + try: + return repo[parse_ref(repo, committish)] + except KeyError: + pass + if len(committish) >= 4 and len(committish) < 40: + try: + int(committish, 16) + except ValueError: + pass + else: + try: + return scan_for_short_id(repo.object_store, committish) + except KeyError: + pass + raise KeyError(committish) + + +# TODO: parse_path_in_tree(), which handles e.g. v1.0:Documentation diff --git a/env/lib/python3.8/site-packages/dulwich/pack.py b/env/lib/python3.8/site-packages/dulwich/pack.py new file mode 100644 index 00000000..84e4634e --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/pack.py @@ -0,0 +1,2296 @@ +# pack.py -- For dealing with packed git objects. +# Copyright (C) 2007 James Westby +# Copyright (C) 2008-2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Classes for dealing with packed git objects. + +A pack is a compact representation of a bunch of objects, stored +using deltas where possible. + +They have two parts, the pack file, which stores the data, and an index +that tells you where the data is. + +To find an object you look in all of the index files 'til you find a +match for the object name. You then use the pointer got from this as +a pointer in to the corresponding packfile. +""" + +from collections import defaultdict + +import binascii +from io import BytesIO, UnsupportedOperation +from collections import ( + deque, +) +try: + from cdifflib import CSequenceMatcher as SequenceMatcher +except ModuleNotFoundError: + from difflib import SequenceMatcher +import struct + +from itertools import chain + +import os +import sys +from typing import Optional, Callable, Tuple, List +import warnings + +from hashlib import sha1 +from os import ( + SEEK_CUR, + SEEK_END, +) +from struct import unpack_from +import zlib + +try: + import mmap +except ImportError: + has_mmap = False +else: + has_mmap = True + +# For some reason the above try, except fails to set has_mmap = False for plan9 +if sys.platform == "Plan9": + has_mmap = False + +from dulwich.errors import ( + ApplyDeltaError, + ChecksumMismatch, +) +from dulwich.file import GitFile +from dulwich.lru_cache import ( + LRUSizeCache, +) +from dulwich.objects import ( + ShaFile, + hex_to_sha, + sha_to_hex, + object_header, +) + + +OFS_DELTA = 6 +REF_DELTA = 7 + +DELTA_TYPES = (OFS_DELTA, REF_DELTA) + + +DEFAULT_PACK_DELTA_WINDOW_SIZE = 10 + + +def take_msb_bytes(read, crc32=None): + """Read bytes marked with most significant bit. 
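The MSB-continuation scheme that `take_msb_bytes()` reads is the same one `unpack_object()` later decodes into a type and size: each byte carries seven payload bits, and the high bit says whether another byte follows. A standalone sketch of the size-header decoding (buffer contents made up):

```python
from io import BytesIO

def read_size_header(read):
    # First byte: bit 7 = continuation, bits 4-6 = type, bits 0-3 = size.
    byte = ord(read(1))
    type_num = (byte >> 4) & 0x07
    size = byte & 0x0F
    shift = 4
    while byte & 0x80:
        byte = ord(read(1))
        size += (byte & 0x7F) << shift
        shift += 7
    return type_num, size

# 0xB5 = 0b1011_0101: continue, type 3 (blob), low size bits 5;
# 0x01 contributes 1 << 4, so the size is 21.
assert read_size_header(BytesIO(bytes([0xB5, 0x01])).read) == (3, 21)
```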
+ + Args: + read: Read function + """ + ret = [] + while len(ret) == 0 or ret[-1] & 0x80: + b = read(1) + if crc32 is not None: + crc32 = binascii.crc32(b, crc32) + ret.append(ord(b[:1])) + return ret, crc32 + + +class PackFileDisappeared(Exception): + def __init__(self, obj): + self.obj = obj + + +class UnpackedObject(object): + """Class encapsulating an object unpacked from a pack file. + + These objects should only be created from within unpack_object. Most + members start out as empty and are filled in at various points by + read_zlib_chunks, unpack_object, DeltaChainIterator, etc. + + End users of this object should take care that the function they're getting + this object from is guaranteed to set the members they need. + """ + + __slots__ = [ + "offset", # Offset in its pack. + "_sha", # Cached binary SHA. + "obj_type_num", # Type of this object. + "obj_chunks", # Decompressed and delta-resolved chunks. + "pack_type_num", # Type of this object in the pack (may be a delta). + "delta_base", # Delta base offset or SHA. + "comp_chunks", # Compressed object chunks. + "decomp_chunks", # Decompressed object chunks. + "decomp_len", # Decompressed length of this object. + "crc32", # CRC32. + ] + + # TODO(dborowitz): read_zlib_chunks and unpack_object could very well be + # methods of this object. + def __init__(self, pack_type_num, delta_base, decomp_len, crc32): + self.offset = None + self._sha = None + self.pack_type_num = pack_type_num + self.delta_base = delta_base + self.comp_chunks = None + self.decomp_chunks = [] + self.decomp_len = decomp_len + self.crc32 = crc32 + + if pack_type_num in DELTA_TYPES: + self.obj_type_num = None + self.obj_chunks = None + else: + self.obj_type_num = pack_type_num + self.obj_chunks = self.decomp_chunks + self.delta_base = delta_base + + def sha(self): + """Return the binary SHA of this object.""" + if self._sha is None: + self._sha = obj_sha(self.obj_type_num, self.obj_chunks) + return self._sha + + def sha_file(self): + """Return a ShaFile from this object.""" + return ShaFile.from_raw_chunks(self.obj_type_num, self.obj_chunks) + + # Only provided for backwards compatibility with code that expects either + # chunks or a delta tuple. + def _obj(self): + """Return the decompressed chunks, or (delta base, delta chunks).""" + if self.pack_type_num in DELTA_TYPES: + return (self.delta_base, self.decomp_chunks) + else: + return self.decomp_chunks + + def __eq__(self, other): + if not isinstance(other, UnpackedObject): + return False + for slot in self.__slots__: + if getattr(self, slot) != getattr(other, slot): + return False + return True + + def __ne__(self, other): + return not (self == other) + + def __repr__(self): + data = ["%s=%r" % (s, getattr(self, s)) for s in self.__slots__] + return "%s(%s)" % (self.__class__.__name__, ", ".join(data)) + + +_ZLIB_BUFSIZE = 4096 + + +def read_zlib_chunks( + read_some, unpacked, include_comp=False, buffer_size=_ZLIB_BUFSIZE +): + """Read zlib data from a buffer. + + This function requires that the buffer have additional data following the + compressed data, which is guaranteed to be the case for git pack files. + + Args: + read_some: Read function that returns at least one byte, but may + return less than the requested size. + unpacked: An UnpackedObject to write result data to. If its crc32 + attr is not None, the CRC32 of the compressed bytes will be computed + using this starting CRC32. 
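A hedged usage example for `UnpackedObject` together with `read_zlib_chunks()` (defined just below): pack streams always carry data after the compressed payload, and the function hands back whatever it over-read.

```python
import zlib
from io import BytesIO
from dulwich.pack import UnpackedObject, read_zlib_chunks

payload = b"x" * 1000
stream = BytesIO(zlib.compress(payload) + b"TRAILER")
# Args: pack type 3 (blob), no delta base, expected length, CRC32 seed 0.
unpacked = UnpackedObject(3, None, len(payload), 0)
unused = read_zlib_chunks(stream.read, unpacked)
assert b"".join(unpacked.decomp_chunks) == payload
assert unused == b"TRAILER"
```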
+ After this function, will have the following attrs set: + * comp_chunks (if include_comp is True) + * decomp_chunks + * decomp_len + * crc32 + include_comp: If True, include compressed data in the result. + buffer_size: Size of the read buffer. + Returns: Leftover unused data from the decompression. + Raises: + zlib.error: if a decompression error occurred. + """ + if unpacked.decomp_len <= -1: + raise ValueError("non-negative zlib data stream size expected") + decomp_obj = zlib.decompressobj() + + comp_chunks = [] + decomp_chunks = unpacked.decomp_chunks + decomp_len = 0 + crc32 = unpacked.crc32 + + while True: + add = read_some(buffer_size) + if not add: + raise zlib.error("EOF before end of zlib stream") + comp_chunks.append(add) + decomp = decomp_obj.decompress(add) + decomp_len += len(decomp) + decomp_chunks.append(decomp) + unused = decomp_obj.unused_data + if unused: + left = len(unused) + if crc32 is not None: + crc32 = binascii.crc32(add[:-left], crc32) + if include_comp: + comp_chunks[-1] = add[:-left] + break + elif crc32 is not None: + crc32 = binascii.crc32(add, crc32) + if crc32 is not None: + crc32 &= 0xFFFFFFFF + + if decomp_len != unpacked.decomp_len: + raise zlib.error("decompressed data does not match expected size") + + unpacked.crc32 = crc32 + if include_comp: + unpacked.comp_chunks = comp_chunks + return unused + + +def iter_sha1(iter): + """Return the hexdigest of the SHA1 over a set of names. + + Args: + iter: Iterator over string objects + Returns: 40-byte hex sha1 digest + """ + sha = sha1() + for name in iter: + sha.update(name) + return sha.hexdigest().encode("ascii") + + +def load_pack_index(path): + """Load an index file by path. + + Args: + path: Path to the index file + Returns: A PackIndex loaded from the given path + """ + with GitFile(path, "rb") as f: + return load_pack_index_file(path, f) + + +def _load_file_contents(f, size=None): + try: + fd = f.fileno() + except (UnsupportedOperation, AttributeError): + fd = None + # Attempt to use mmap if possible + if fd is not None: + if size is None: + size = os.fstat(fd).st_size + if has_mmap: + try: + contents = mmap.mmap(fd, size, access=mmap.ACCESS_READ) + except mmap.error: + # Perhaps a socket? + pass + else: + return contents, size + contents = f.read() + size = len(contents) + return contents, size + + +def load_pack_index_file(path, f): + """Load an index file from a file-like object. + + Args: + path: Path for the index file + f: File-like object + Returns: A PackIndex loaded from the given file + """ + contents, size = _load_file_contents(f) + if contents[:4] == b"\377tOc": + version = struct.unpack(b">L", contents[4:8])[0] + if version == 2: + return PackIndex2(path, file=f, contents=contents, size=size) + else: + raise KeyError("Unknown pack index format %d" % version) + else: + return PackIndex1(path, file=f, contents=contents, size=size) + + +def bisect_find_sha(start, end, sha, unpack_name): + """Find a SHA in a data blob with sorted SHAs. + + Args: + start: Start index of range to search + end: End index of range to search + sha: Sha to find + unpack_name: Callback to retrieve SHA by index + Returns: Index of the SHA, or None if it wasn't found + """ + assert start <= end + while start <= end: + i = (start + end) // 2 + file_sha = unpack_name(i) + if file_sha < sha: + start = i + 1 + elif file_sha > sha: + end = i - 1 + else: + return i + return None + + +class PackIndex(object): + """An index in to a packfile. 
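`bisect_find_sha()` above works against any indexable collection of sorted 20-byte shas, so a list indexer can stand in for the index-file accessor (values made up):

```python
from dulwich.pack import bisect_find_sha

shas = sorted(bytes([i]) * 20 for i in (1, 5, 9, 200))
assert bisect_find_sha(0, len(shas) - 1, bytes([9]) * 20, shas.__getitem__) == 2
assert bisect_find_sha(0, len(shas) - 1, bytes([7]) * 20, shas.__getitem__) is None
```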
+ + Given a sha id of an object a pack index can tell you the location in the + packfile of that object if it has it. + """ + + def __eq__(self, other): + if not isinstance(other, PackIndex): + return False + + for (name1, _, _), (name2, _, _) in zip( + self.iterentries(), other.iterentries() + ): + if name1 != name2: + return False + return True + + def __ne__(self, other): + return not self.__eq__(other) + + def __len__(self): + """Return the number of entries in this pack index.""" + raise NotImplementedError(self.__len__) + + def __iter__(self): + """Iterate over the SHAs in this pack.""" + return map(sha_to_hex, self._itersha()) + + def iterentries(self): + """Iterate over the entries in this pack index. + + Returns: iterator over tuples with object name, offset in packfile and + crc32 checksum. + """ + raise NotImplementedError(self.iterentries) + + def get_pack_checksum(self): + """Return the SHA1 checksum stored for the corresponding packfile. + + Returns: 20-byte binary digest + """ + raise NotImplementedError(self.get_pack_checksum) + + def object_index(self, sha): + """Return the index in to the corresponding packfile for the object. + + Given the name of an object it will return the offset that object + lives at within the corresponding pack file. If the pack file doesn't + have the object then None will be returned. + """ + raise NotImplementedError(self.object_index) + + def object_sha1(self, index): + """Return the SHA1 corresponding to the index in the pack file.""" + # PERFORMANCE/TODO(jelmer): Avoid scanning entire index + for (name, offset, crc32) in self.iterentries(): + if offset == index: + return name + else: + raise KeyError(index) + + def _object_index(self, sha): + """See object_index. + + Args: + sha: A *binary* SHA string. (20 characters long)_ + """ + raise NotImplementedError(self._object_index) + + def objects_sha1(self): + """Return the hex SHA1 over all the shas of all objects in this pack. + + Note: This is used for the filename of the pack. + """ + return iter_sha1(self._itersha()) + + def _itersha(self): + """Yield all the SHA1's of the objects in the index, sorted.""" + raise NotImplementedError(self._itersha) + + +class MemoryPackIndex(PackIndex): + """Pack index that is stored entirely in memory.""" + + def __init__(self, entries, pack_checksum=None): + """Create a new MemoryPackIndex. + + Args: + entries: Sequence of name, idx, crc32 (sorted) + pack_checksum: Optional pack checksum + """ + self._by_sha = {} + self._by_index = {} + for name, idx, crc32 in entries: + self._by_sha[name] = idx + self._by_index[idx] = name + self._entries = entries + self._pack_checksum = pack_checksum + + def get_pack_checksum(self): + return self._pack_checksum + + def __len__(self): + return len(self._entries) + + def object_index(self, sha): + if len(sha) == 40: + sha = hex_to_sha(sha) + return self._by_sha[sha][0] + + def object_sha1(self, index): + return self._by_index[index] + + def _itersha(self): + return iter(self._by_sha) + + def iterentries(self): + return iter(self._entries) + + +class FilePackIndex(PackIndex): + """Pack index that is based on a file. + + To do the loop it opens the file, and indexes first 256 4 byte groups + with the first byte of the sha id. The value in the four byte group indexed + is the end of the group that shares the same starting byte. Subtract one + from the starting byte and index again to find the start of the group. 
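The 256-entry fan-out table described above can be sketched in a few lines: entry `i` counts the objects whose first sha byte is `<= i`, so the objects starting with byte `b` occupy the half-open range `[fanout[b-1], fanout[b])` of the sorted name table (first bytes made up):

```python
first_bytes = [0x03, 0x03, 0x7F, 0xEA]  # first byte of each (sorted) sha
fanout = [0] * 256
for b in first_bytes:
    fanout[b] += 1
for i in range(1, 256):
    fanout[i] += fanout[i - 1]
assert (fanout[0x02], fanout[0x03]) == (0, 2)  # both 0x03... objects
assert fanout[0xFF] == len(first_bytes)        # last entry = object count
```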
+ The values are sorted by sha id within the group, so do the math to find + the start and end offset and then bisect in to find if the value is + present. + """ + + _fan_out_table: List[int] + + def __init__(self, filename, file=None, contents=None, size=None): + """Create a pack index object. + + Provide it with the name of the index file to consider, and it will map + it whenever required. + """ + self._filename = filename + # Take the size now, so it can be checked each time we map the file to + # ensure that it hasn't changed. + if file is None: + self._file = GitFile(filename, "rb") + else: + self._file = file + if contents is None: + self._contents, self._size = _load_file_contents(self._file, size) + else: + self._contents, self._size = (contents, size) + + @property + def path(self): + return self._filename + + def __eq__(self, other): + # Quick optimization: + if ( + isinstance(other, FilePackIndex) + and self._fan_out_table != other._fan_out_table + ): + return False + + return super(FilePackIndex, self).__eq__(other) + + def close(self): + self._file.close() + if getattr(self._contents, "close", None) is not None: + self._contents.close() + + def __len__(self): + """Return the number of entries in this pack index.""" + return self._fan_out_table[-1] + + def _unpack_entry(self, i): + """Unpack the i-th entry in the index file. + + Returns: Tuple with object name (SHA), offset in pack file and CRC32 + checksum (if known). + """ + raise NotImplementedError(self._unpack_entry) + + def _unpack_name(self, i): + """Unpack the i-th name from the index file.""" + raise NotImplementedError(self._unpack_name) + + def _unpack_offset(self, i): + """Unpack the i-th object offset from the index file.""" + raise NotImplementedError(self._unpack_offset) + + def _unpack_crc32_checksum(self, i): + """Unpack the crc32 checksum for the ith object from the index file.""" + raise NotImplementedError(self._unpack_crc32_checksum) + + def _itersha(self): + for i in range(len(self)): + yield self._unpack_name(i) + + def iterentries(self): + """Iterate over the entries in this pack index. + + Returns: iterator over tuples with object name, offset in packfile and + crc32 checksum. + """ + for i in range(len(self)): + yield self._unpack_entry(i) + + def _read_fan_out_table(self, start_offset): + ret = [] + for i in range(0x100): + fanout_entry = self._contents[ + start_offset + i * 4 : start_offset + (i + 1) * 4 + ] + ret.append(struct.unpack(">L", fanout_entry)[0]) + return ret + + def check(self): + """Check that the stored checksum matches the actual checksum.""" + actual = self.calculate_checksum() + stored = self.get_stored_checksum() + if actual != stored: + raise ChecksumMismatch(stored, actual) + + def calculate_checksum(self): + """Calculate the SHA1 checksum over this pack index. + + Returns: This is a 20-byte binary digest + """ + return sha1(self._contents[:-20]).digest() + + def get_pack_checksum(self): + """Return the SHA1 checksum stored for the corresponding packfile. + + Returns: 20-byte binary digest + """ + return bytes(self._contents[-40:-20]) + + def get_stored_checksum(self): + """Return the SHA1 checksum stored for this index. + + Returns: 20-byte binary digest + """ + return bytes(self._contents[-20:]) + + def object_index(self, sha): + """Return the index in to the corresponding packfile for the object. + + Given the name of an object it will return the offset that object + lives at within the corresponding pack file. 
If the pack file doesn't + have the object then None will be returned. + """ + if len(sha) == 40: + sha = hex_to_sha(sha) + try: + return self._object_index(sha) + except ValueError as exc: + closed = getattr(self._contents, "closed", None) + if closed in (None, True): + raise PackFileDisappeared(self) from exc + raise + + def _object_index(self, sha): + """See object_index. + + Args: + sha: A *binary* SHA string. (20 characters long)_ + """ + assert len(sha) == 20 + idx = ord(sha[:1]) + if idx == 0: + start = 0 + else: + start = self._fan_out_table[idx - 1] + end = self._fan_out_table[idx] + i = bisect_find_sha(start, end, sha, self._unpack_name) + if i is None: + raise KeyError(sha) + return self._unpack_offset(i) + + +class PackIndex1(FilePackIndex): + """Version 1 Pack Index file.""" + + def __init__(self, filename, file=None, contents=None, size=None): + super(PackIndex1, self).__init__(filename, file, contents, size) + self.version = 1 + self._fan_out_table = self._read_fan_out_table(0) + + def _unpack_entry(self, i): + (offset, name) = unpack_from(">L20s", self._contents, (0x100 * 4) + (i * 24)) + return (name, offset, None) + + def _unpack_name(self, i): + offset = (0x100 * 4) + (i * 24) + 4 + return self._contents[offset : offset + 20] + + def _unpack_offset(self, i): + offset = (0x100 * 4) + (i * 24) + return unpack_from(">L", self._contents, offset)[0] + + def _unpack_crc32_checksum(self, i): + # Not stored in v1 index files + return None + + +class PackIndex2(FilePackIndex): + """Version 2 Pack Index file.""" + + def __init__(self, filename, file=None, contents=None, size=None): + super(PackIndex2, self).__init__(filename, file, contents, size) + if self._contents[:4] != b"\377tOc": + raise AssertionError("Not a v2 pack index file") + (self.version,) = unpack_from(b">L", self._contents, 4) + if self.version != 2: + raise AssertionError("Version was %d" % self.version) + self._fan_out_table = self._read_fan_out_table(8) + self._name_table_offset = 8 + 0x100 * 4 + self._crc32_table_offset = self._name_table_offset + 20 * len(self) + self._pack_offset_table_offset = self._crc32_table_offset + 4 * len(self) + self._pack_offset_largetable_offset = self._pack_offset_table_offset + 4 * len( + self + ) + + def _unpack_entry(self, i): + return ( + self._unpack_name(i), + self._unpack_offset(i), + self._unpack_crc32_checksum(i), + ) + + def _unpack_name(self, i): + offset = self._name_table_offset + i * 20 + return self._contents[offset : offset + 20] + + def _unpack_offset(self, i): + offset = self._pack_offset_table_offset + i * 4 + offset = unpack_from(">L", self._contents, offset)[0] + if offset & (2 ** 31): + offset = self._pack_offset_largetable_offset + (offset & (2 ** 31 - 1)) * 8 + offset = unpack_from(">Q", self._contents, offset)[0] + return offset + + def _unpack_crc32_checksum(self, i): + return unpack_from(">L", self._contents, self._crc32_table_offset + i * 4)[0] + + +def read_pack_header(read): + """Read the header of a pack file. + + Args: + read: Read function + Returns: Tuple of (pack version, number of objects). If no data is + available to read, returns (None, None). 
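+
+    A minimal usage sketch (assuming ``f`` is a pack file opened in binary
+    mode)::
+
+        version, num_objects = read_pack_header(f.read)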
+ """ + header = read(12) + if not header: + return None, None + if header[:4] != b"PACK": + raise AssertionError("Invalid pack header %r" % header) + (version,) = unpack_from(b">L", header, 4) + if version not in (2, 3): + raise AssertionError("Version was %d" % version) + (num_objects,) = unpack_from(b">L", header, 8) + return (version, num_objects) + + +def chunks_length(chunks): + if isinstance(chunks, bytes): + return len(chunks) + else: + return sum(map(len, chunks)) + + +def unpack_object( + read_all, + read_some=None, + compute_crc32=False, + include_comp=False, + zlib_bufsize=_ZLIB_BUFSIZE, +): + """Unpack a Git object. + + Args: + read_all: Read function that blocks until the number of requested + bytes are read. + read_some: Read function that returns at least one byte, but may not + return the number of bytes requested. + compute_crc32: If True, compute the CRC32 of the compressed data. If + False, the returned CRC32 will be None. + include_comp: If True, include compressed data in the result. + zlib_bufsize: An optional buffer size for zlib operations. + Returns: A tuple of (unpacked, unused), where unused is the unused data + leftover from decompression, and unpacked in an UnpackedObject with + the following attrs set: + + * obj_chunks (for non-delta types) + * pack_type_num + * delta_base (for delta types) + * comp_chunks (if include_comp is True) + * decomp_chunks + * decomp_len + * crc32 (if compute_crc32 is True) + """ + if read_some is None: + read_some = read_all + if compute_crc32: + crc32 = 0 + else: + crc32 = None + + bytes, crc32 = take_msb_bytes(read_all, crc32=crc32) + type_num = (bytes[0] >> 4) & 0x07 + size = bytes[0] & 0x0F + for i, byte in enumerate(bytes[1:]): + size += (byte & 0x7F) << ((i * 7) + 4) + + raw_base = len(bytes) + if type_num == OFS_DELTA: + bytes, crc32 = take_msb_bytes(read_all, crc32=crc32) + raw_base += len(bytes) + if bytes[-1] & 0x80: + raise AssertionError + delta_base_offset = bytes[0] & 0x7F + for byte in bytes[1:]: + delta_base_offset += 1 + delta_base_offset <<= 7 + delta_base_offset += byte & 0x7F + delta_base = delta_base_offset + elif type_num == REF_DELTA: + delta_base = read_all(20) + if compute_crc32: + crc32 = binascii.crc32(delta_base, crc32) + raw_base += 20 + else: + delta_base = None + + unpacked = UnpackedObject(type_num, delta_base, size, crc32) + unused = read_zlib_chunks( + read_some, + unpacked, + buffer_size=zlib_bufsize, + include_comp=include_comp, + ) + return unpacked, unused + + +def _compute_object_size(value): + """Compute the size of a unresolved object for use with LRUSizeCache.""" + (num, obj) = value + if num in DELTA_TYPES: + return chunks_length(obj[1]) + return chunks_length(obj) + + +class PackStreamReader(object): + """Class to read a pack stream. + + The pack is read from a ReceivableProtocol using read() or recv() as + appropriate. + """ + + def __init__(self, read_all, read_some=None, zlib_bufsize=_ZLIB_BUFSIZE): + self.read_all = read_all + if read_some is None: + self.read_some = read_all + else: + self.read_some = read_some + self.sha = sha1() + self._offset = 0 + self._rbuf = BytesIO() + # trailer is a deque to avoid memory allocation on small reads + self._trailer = deque() + self._zlib_bufsize = zlib_bufsize + + def _read(self, read, size): + """Read up to size bytes using the given callback. + + As a side effect, update the verifier's hash (excluding the last 20 + bytes read). + + Args: + read: The read callback to read from. 
+ size: The maximum number of bytes to read; the particular + behavior is callback-specific. + """ + data = read(size) + + # maintain a trailer of the last 20 bytes we've read + n = len(data) + self._offset += n + tn = len(self._trailer) + if n >= 20: + to_pop = tn + to_add = 20 + else: + to_pop = max(n + tn - 20, 0) + to_add = n + self.sha.update( + bytes(bytearray([self._trailer.popleft() for _ in range(to_pop)])) + ) + self._trailer.extend(data[-to_add:]) + + # hash everything but the trailer + self.sha.update(data[:-to_add]) + return data + + def _buf_len(self): + buf = self._rbuf + start = buf.tell() + buf.seek(0, SEEK_END) + end = buf.tell() + buf.seek(start) + return end - start + + @property + def offset(self): + return self._offset - self._buf_len() + + def read(self, size): + """Read, blocking until size bytes are read.""" + buf_len = self._buf_len() + if buf_len >= size: + return self._rbuf.read(size) + buf_data = self._rbuf.read() + self._rbuf = BytesIO() + return buf_data + self._read(self.read_all, size - buf_len) + + def recv(self, size): + """Read up to size bytes, blocking until one byte is read.""" + buf_len = self._buf_len() + if buf_len: + data = self._rbuf.read(size) + if size >= buf_len: + self._rbuf = BytesIO() + return data + return self._read(self.read_some, size) + + def __len__(self): + return self._num_objects + + def read_objects(self, compute_crc32=False): + """Read the objects in this pack file. + + Args: + compute_crc32: If True, compute the CRC32 of the compressed + data. If False, the returned CRC32 will be None. + Returns: Iterator over UnpackedObjects with the following members set: + offset + obj_type_num + obj_chunks (for non-delta types) + delta_base (for delta types) + decomp_chunks + decomp_len + crc32 (if compute_crc32 is True) + Raises: + ChecksumMismatch: if the checksum of the pack contents does not + match the checksum in the pack trailer. + zlib.error: if an error occurred during zlib decompression. + IOError: if an error occurred writing to the output file. + """ + pack_version, self._num_objects = read_pack_header(self.read) + if pack_version is None: + return + + for i in range(self._num_objects): + offset = self.offset + unpacked, unused = unpack_object( + self.read, + read_some=self.recv, + compute_crc32=compute_crc32, + zlib_bufsize=self._zlib_bufsize, + ) + unpacked.offset = offset + + # prepend any unused data to current read buffer + buf = BytesIO() + buf.write(unused) + buf.write(self._rbuf.read()) + buf.seek(0) + self._rbuf = buf + + yield unpacked + + if self._buf_len() < 20: + # If the read buffer is full, then the last read() got the whole + # trailer off the wire. If not, it means there is still some of the + # trailer to read. We need to read() all 20 bytes; N come from the + # read buffer and (20 - N) come from the wire. + self.read(20) + + pack_sha = bytearray(self._trailer) + if pack_sha != self.sha.digest(): + raise ChecksumMismatch(sha_to_hex(pack_sha), self.sha.hexdigest()) + + +class PackStreamCopier(PackStreamReader): + """Class to verify a pack stream as it is being read. + + The pack is read from a ReceivableProtocol using read() or recv() as + appropriate and written out to the given file-like object. + """ + + def __init__(self, read_all, read_some, outfile, delta_iter=None): + """Initialize the copier. + + Args: + read_all: Read function that blocks until the number of + requested bytes are read. + read_some: Read function that returns at least one byte, but may + not return the number of bytes requested. 
+ outfile: File-like object to write output through. + delta_iter: Optional DeltaChainIterator to record deltas as we + read them. + """ + super(PackStreamCopier, self).__init__(read_all, read_some=read_some) + self.outfile = outfile + self._delta_iter = delta_iter + + def _read(self, read, size): + """Read data from the read callback and write it to the file.""" + data = super(PackStreamCopier, self)._read(read, size) + self.outfile.write(data) + return data + + def verify(self): + """Verify a pack stream and write it to the output file. + + See PackStreamReader.iterobjects for a list of exceptions this may + throw. + """ + if self._delta_iter: + for unpacked in self.read_objects(): + self._delta_iter.record(unpacked) + else: + for _ in self.read_objects(): + pass + + +def obj_sha(type, chunks): + """Compute the SHA for a numeric type and object chunks.""" + sha = sha1() + sha.update(object_header(type, chunks_length(chunks))) + if isinstance(chunks, bytes): + sha.update(chunks) + else: + for chunk in chunks: + sha.update(chunk) + return sha.digest() + + +def compute_file_sha(f, start_ofs=0, end_ofs=0, buffer_size=1 << 16): + """Hash a portion of a file into a new SHA. + + Args: + f: A file-like object to read from that supports seek(). + start_ofs: The offset in the file to start reading at. + end_ofs: The offset in the file to end reading at, relative to the + end of the file. + buffer_size: A buffer size for reading. + Returns: A new SHA object updated with data read from the file. + """ + sha = sha1() + f.seek(0, SEEK_END) + length = f.tell() + if (end_ofs < 0 and length + end_ofs < start_ofs) or end_ofs > length: + raise AssertionError( + "Attempt to read beyond file length. " + "start_ofs: %d, end_ofs: %d, file length: %d" % (start_ofs, end_ofs, length) + ) + todo = length + end_ofs - start_ofs + f.seek(start_ofs) + while todo: + data = f.read(min(todo, buffer_size)) + sha.update(data) + todo -= len(data) + return sha + + +class PackData(object): + """The data contained in a packfile. + + Pack files can be accessed both sequentially for exploding a pack, and + directly with the help of an index to retrieve a specific object. + + The objects within are either complete or a delta against another. + + The header is variable length. If the MSB of each byte is set then it + indicates that the subsequent byte is still part of the header. + For the first byte the next MS bits are the type, which tells you the type + of object, and whether it is a delta. The LS byte is the lowest bits of the + size. For each subsequent byte the LS 7 bits are the next MS bits of the + size, i.e. the last byte of the header contains the MS bits of the size. + + For the complete objects the data is stored as zlib deflated data. + The size in the header is the uncompressed object size, so to uncompress + you need to just keep feeding data to zlib until you get an object back, + or it errors on bad data. This is done here by just giving the complete + buffer from the start of the deflated object on. This is bad, but until I + get mmap sorted out it will have to do. + + Currently there are no integrity checks done. Also no attempt is made to + try and detect the delta case, or a request for an object at the wrong + position. It will all just throw a zlib or KeyError. + """ + + def __init__(self, filename, file=None, size=None): + """Create a PackData object representing the pack in the given filename. + + The file must exist and stay readable until the object is disposed of. + It must also stay the same size. 
It will be mapped whenever needed. + + Currently there is a restriction on the size of the pack as the python + mmap implementation is flawed. + """ + self._filename = filename + self._size = size + self._header_size = 12 + if file is None: + self._file = GitFile(self._filename, "rb") + else: + self._file = file + (version, self._num_objects) = read_pack_header(self._file.read) + self._offset_cache = LRUSizeCache( + 1024 * 1024 * 20, compute_size=_compute_object_size + ) + + @property + def filename(self): + return os.path.basename(self._filename) + + @property + def path(self): + return self._filename + + @classmethod + def from_file(cls, file, size=None): + return cls(str(file), file=file, size=size) + + @classmethod + def from_path(cls, path): + return cls(filename=path) + + def close(self): + self._file.close() + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + self.close() + + def __eq__(self, other): + if isinstance(other, PackData): + return self.get_stored_checksum() == other.get_stored_checksum() + if isinstance(other, list): + if len(self) != len(other): + return False + for o1, o2 in zip(self.iterobjects(), other): + if o1 != o2: + return False + return True + return False + + def _get_size(self): + if self._size is not None: + return self._size + self._size = os.path.getsize(self._filename) + if self._size < self._header_size: + errmsg = "%s is too small for a packfile (%d < %d)" % ( + self._filename, + self._size, + self._header_size, + ) + raise AssertionError(errmsg) + return self._size + + def __len__(self): + """Returns the number of objects in this pack.""" + return self._num_objects + + def calculate_checksum(self): + """Calculate the checksum for this pack. + + Returns: 20-byte binary SHA1 digest + """ + return compute_file_sha(self._file, end_ofs=-20).digest() + + def iterobjects(self, progress=None, compute_crc32=True): + self._file.seek(self._header_size) + for i in range(1, self._num_objects + 1): + offset = self._file.tell() + unpacked, unused = unpack_object( + self._file.read, compute_crc32=compute_crc32 + ) + if progress is not None: + progress(i, self._num_objects) + yield ( + offset, + unpacked.pack_type_num, + unpacked._obj(), + unpacked.crc32, + ) + # Back up over unused data. + self._file.seek(-len(unused), SEEK_CUR) + + def _iter_unpacked(self): + # TODO(dborowitz): Merge this with iterobjects, if we can change its + # return type. + self._file.seek(self._header_size) + + if self._num_objects is None: + return + + for _ in range(self._num_objects): + offset = self._file.tell() + unpacked, unused = unpack_object(self._file.read, compute_crc32=False) + unpacked.offset = offset + yield unpacked + # Back up over unused data. + self._file.seek(-len(unused), SEEK_CUR) + + def iterentries(self, progress=None, resolve_ext_ref=None): + """Yield entries summarizing the contents of this pack. + + Args: + progress: Progress function, called with current and total + object count. + Returns: iterator of tuples with (sha, offset, crc32) + """ + num_objects = self._num_objects + indexer = PackIndexer.for_pack_data(self, resolve_ext_ref=resolve_ext_ref) + for i, result in enumerate(indexer): + if progress is not None: + progress(i, num_objects) + yield result + + def sorted_entries(self, progress=None, resolve_ext_ref=None): + """Return entries in this pack, sorted by SHA. 
+ + Args: + progress: Progress function, called with current and total + object count + Returns: Iterator of tuples with (sha, offset, crc32) + """ + return sorted(self.iterentries( + progress=progress, resolve_ext_ref=resolve_ext_ref)) + + def create_index_v1(self, filename, progress=None, resolve_ext_ref=None): + """Create a version 1 file for this data file. + + Args: + filename: Index filename. + progress: Progress report function + Returns: Checksum of index file + """ + entries = self.sorted_entries( + progress=progress, resolve_ext_ref=resolve_ext_ref) + with GitFile(filename, "wb") as f: + return write_pack_index_v1(f, entries, self.calculate_checksum()) + + def create_index_v2(self, filename, progress=None, resolve_ext_ref=None): + """Create a version 2 index file for this data file. + + Args: + filename: Index filename. + progress: Progress report function + Returns: Checksum of index file + """ + entries = self.sorted_entries( + progress=progress, resolve_ext_ref=resolve_ext_ref) + with GitFile(filename, "wb") as f: + return write_pack_index_v2(f, entries, self.calculate_checksum()) + + def create_index(self, filename, progress=None, version=2, resolve_ext_ref=None): + """Create an index file for this data file. + + Args: + filename: Index filename. + progress: Progress report function + Returns: Checksum of index file + """ + if version == 1: + return self.create_index_v1( + filename, progress, resolve_ext_ref=resolve_ext_ref) + elif version == 2: + return self.create_index_v2( + filename, progress, resolve_ext_ref=resolve_ext_ref) + else: + raise ValueError("unknown index format %d" % version) + + def get_stored_checksum(self): + """Return the expected checksum stored in this pack.""" + self._file.seek(-20, SEEK_END) + return self._file.read(20) + + def check(self): + """Check the consistency of this pack.""" + actual = self.calculate_checksum() + stored = self.get_stored_checksum() + if actual != stored: + raise ChecksumMismatch(stored, actual) + + def get_compressed_data_at(self, offset): + """Given offset in the packfile return compressed data that is there. + + Using the associated index the location of an object can be looked up, + and then the packfile can be asked directly for that object using this + function. + """ + assert offset >= self._header_size + self._file.seek(offset) + unpacked, _ = unpack_object(self._file.read, include_comp=True) + return ( + unpacked.pack_type_num, + unpacked.delta_base, + unpacked.comp_chunks, + ) + + def get_object_at(self, offset): + """Given an offset in to the packfile return the object that is there. + + Using the associated index the location of an object can be looked up, + and then the packfile can be asked directly for that object using this + function. + """ + try: + return self._offset_cache[offset] + except KeyError: + pass + assert offset >= self._header_size + self._file.seek(offset) + unpacked, _ = unpack_object(self._file.read) + return (unpacked.pack_type_num, unpacked._obj()) + + +class DeltaChainIterator(object): + """Abstract iterator over pack data based on delta chains. + + Each object in the pack is guaranteed to be inflated exactly once, + regardless of how many objects reference it as a delta base. As a result, + memory usage is proportional to the length of the longest delta chain. + + Subclasses can override _result to define the result type of the iterator. 
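+    (PackIndexer and PackInflater below are two such subclasses.)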
+ By default, results are UnpackedObjects with the following members set: + + * offset + * obj_type_num + * obj_chunks + * pack_type_num + * delta_base (for delta types) + * comp_chunks (if _include_comp is True) + * decomp_chunks + * decomp_len + * crc32 (if _compute_crc32 is True) + """ + + _compute_crc32 = False + _include_comp = False + + def __init__(self, file_obj, resolve_ext_ref=None): + self._file = file_obj + self._resolve_ext_ref = resolve_ext_ref + self._pending_ofs = defaultdict(list) + self._pending_ref = defaultdict(list) + self._full_ofs = [] + self._shas = {} + self._ext_refs = [] + + @classmethod + def for_pack_data(cls, pack_data, resolve_ext_ref=None): + walker = cls(None, resolve_ext_ref=resolve_ext_ref) + walker.set_pack_data(pack_data) + for unpacked in pack_data._iter_unpacked(): + walker.record(unpacked) + return walker + + def record(self, unpacked): + type_num = unpacked.pack_type_num + offset = unpacked.offset + if type_num == OFS_DELTA: + base_offset = offset - unpacked.delta_base + self._pending_ofs[base_offset].append(offset) + elif type_num == REF_DELTA: + self._pending_ref[unpacked.delta_base].append(offset) + else: + self._full_ofs.append((offset, type_num)) + + def set_pack_data(self, pack_data): + self._file = pack_data._file + + def _walk_all_chains(self): + for offset, type_num in self._full_ofs: + yield from self._follow_chain(offset, type_num, None) + yield from self._walk_ref_chains() + assert not self._pending_ofs + + def _ensure_no_pending(self): + if self._pending_ref: + raise KeyError([sha_to_hex(s) for s in self._pending_ref]) + + def _walk_ref_chains(self): + if not self._resolve_ext_ref: + self._ensure_no_pending() + return + + for base_sha, pending in sorted(self._pending_ref.items()): + if base_sha not in self._pending_ref: + continue + try: + type_num, chunks = self._resolve_ext_ref(base_sha) + except KeyError: + # Not an external ref, but may depend on one. Either it will + # get popped via a _follow_chain call, or we will raise an + # error below. + continue + self._ext_refs.append(base_sha) + self._pending_ref.pop(base_sha) + for new_offset in pending: + yield from self._follow_chain(new_offset, type_num, chunks) + + self._ensure_no_pending() + + def _result(self, unpacked): + return unpacked + + def _resolve_object(self, offset, obj_type_num, base_chunks): + self._file.seek(offset) + unpacked, _ = unpack_object( + self._file.read, + include_comp=self._include_comp, + compute_crc32=self._compute_crc32, + ) + unpacked.offset = offset + if base_chunks is None: + assert unpacked.pack_type_num == obj_type_num + else: + assert unpacked.pack_type_num in DELTA_TYPES + unpacked.obj_type_num = obj_type_num + unpacked.obj_chunks = apply_delta(base_chunks, unpacked.decomp_chunks) + return unpacked + + def _follow_chain(self, offset, obj_type_num, base_chunks): + # Unlike PackData.get_object_at, there is no need to cache offsets as + # this approach by design inflates each object exactly once. 
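+        # Depth-first worklist: each entry is (offset, resolved type, base
+        # chunks). Inflating one object may unblock deltas that were waiting
+        # on it, keyed by offset (OFS_DELTA) or by sha (REF_DELTA); those are
+        # pushed back onto the list with this object as their base.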
+ todo = [(offset, obj_type_num, base_chunks)] + while todo: + (offset, obj_type_num, base_chunks) = todo.pop() + unpacked = self._resolve_object(offset, obj_type_num, base_chunks) + yield self._result(unpacked) + + unblocked = chain( + self._pending_ofs.pop(unpacked.offset, []), + self._pending_ref.pop(unpacked.sha(), []), + ) + todo.extend( + (new_offset, unpacked.obj_type_num, unpacked.obj_chunks) + for new_offset in unblocked + ) + + def __iter__(self): + return self._walk_all_chains() + + def ext_refs(self): + return self._ext_refs + + +class PackIndexer(DeltaChainIterator): + """Delta chain iterator that yields index entries.""" + + _compute_crc32 = True + + def _result(self, unpacked): + return unpacked.sha(), unpacked.offset, unpacked.crc32 + + +class PackInflater(DeltaChainIterator): + """Delta chain iterator that yields ShaFile objects.""" + + def _result(self, unpacked): + return unpacked.sha_file() + + +class SHA1Reader(object): + """Wrapper for file-like object that remembers the SHA1 of its data.""" + + def __init__(self, f): + self.f = f + self.sha1 = sha1(b"") + + def read(self, num=None): + data = self.f.read(num) + self.sha1.update(data) + return data + + def check_sha(self): + stored = self.f.read(20) + if stored != self.sha1.digest(): + raise ChecksumMismatch(self.sha1.hexdigest(), sha_to_hex(stored)) + + def close(self): + return self.f.close() + + def tell(self): + return self.f.tell() + + +class SHA1Writer(object): + """Wrapper for file-like object that remembers the SHA1 of its data.""" + + def __init__(self, f): + self.f = f + self.length = 0 + self.sha1 = sha1(b"") + + def write(self, data): + self.sha1.update(data) + self.f.write(data) + self.length += len(data) + + def write_sha(self): + sha = self.sha1.digest() + assert len(sha) == 20 + self.f.write(sha) + self.length += len(sha) + return sha + + def close(self): + sha = self.write_sha() + self.f.close() + return sha + + def offset(self): + return self.length + + def tell(self): + return self.f.tell() + + +def pack_object_header(type_num, delta_base, size): + """Create a pack object header for the given object info. + + Args: + type_num: Numeric type of the object. + delta_base: Delta base offset or ref, or None for whole objects. + size: Uncompressed object size. + Returns: A header for a packed object. + """ + header = [] + c = (type_num << 4) | (size & 15) + size >>= 4 + while size: + header.append(c | 0x80) + c = size & 0x7F + size >>= 7 + header.append(c) + if type_num == OFS_DELTA: + ret = [delta_base & 0x7F] + delta_base >>= 7 + while delta_base: + delta_base -= 1 + ret.insert(0, 0x80 | (delta_base & 0x7F)) + delta_base >>= 7 + header.extend(ret) + elif type_num == REF_DELTA: + assert len(delta_base) == 20 + header += delta_base + return bytearray(header) + + +def pack_object_chunks(type, object, compression_level=-1): + """Generate chunks for a pack object. + + Args: + type: Numeric type of the object + object: Object to write + compression_level: the zlib compression level + Returns: Chunks + """ + if type in DELTA_TYPES: + delta_base, object = object + else: + delta_base = None + if isinstance(object, bytes): + object = [object] + yield bytes(pack_object_header(type, delta_base, sum(map(len, object)))) + compressor = zlib.compressobj(level=compression_level) + for data in object: + yield compressor.compress(data) + yield compressor.flush() + + +def write_pack_object(write, type, object, sha=None, compression_level=-1): + """Write pack object to a file. 
+
+    Args:
+      write: Write function to use
+      type: Numeric type of the object
+      object: Object to write
+      sha: Optional SHA object to update with the data written
+      compression_level: the zlib compression level
+    Returns: CRC32 checksum of the data written
+    """
+    if hasattr(write, 'write'):
+        warnings.warn(
+            'write_pack_object() now takes a write rather than file argument',
+            DeprecationWarning, stacklevel=2)
+        write = write.write
+    crc32 = 0
+    for chunk in pack_object_chunks(
+            type, object, compression_level=compression_level):
+        write(chunk)
+        if sha is not None:
+            sha.update(chunk)
+        crc32 = binascii.crc32(chunk, crc32)
+    return crc32 & 0xFFFFFFFF
+
+
+def write_pack(
+    filename,
+    objects,
+    deltify=None,
+    delta_window_size=None,
+    compression_level=-1,
+):
+    """Write a new pack data file.
+
+    Args:
+      filename: Path to the new pack file (without .pack extension)
+      objects: (object, path) tuple iterable to write. Should provide __len__
+      delta_window_size: Delta window size
+      deltify: Whether to deltify pack objects
+      compression_level: the zlib compression level
+    Returns: Tuple with checksum of pack file and index file
+    """
+    with GitFile(filename + ".pack", "wb") as f:
+        entries, data_sum = write_pack_objects(
+            f.write,
+            objects,
+            delta_window_size=delta_window_size,
+            deltify=deltify,
+            compression_level=compression_level,
+        )
+    entries = sorted([(k, v[0], v[1]) for (k, v) in entries.items()])
+    with GitFile(filename + ".idx", "wb") as f:
+        return data_sum, write_pack_index_v2(f, entries, data_sum)
+
+
+def pack_header_chunks(num_objects):
+    """Yield chunks for a pack header."""
+    yield b"PACK"  # Pack header
+    yield struct.pack(b">L", 2)  # Pack version
+    yield struct.pack(b">L", num_objects)  # Number of objects in pack
+
+
+def write_pack_header(write, num_objects):
+    """Write a pack header for the given number of objects."""
+    if hasattr(write, 'write'):
+        write = write.write
+        warnings.warn(
+            'write_pack_header() now takes a write rather than file argument',
+            DeprecationWarning, stacklevel=2)
+    for chunk in pack_header_chunks(num_objects):
+        write(chunk)
+
+
+def deltify_pack_objects(objects, window_size=None):
+    """Generate deltas for pack objects.
+
+    Args:
+      objects: An iterable of (object, path) tuples to deltify.
+ window_size: Window size; None for default + Returns: Iterator over type_num, object id, delta_base, content + delta_base is None for full text entries + """ + # TODO(jelmer): Use threads + if window_size is None: + window_size = DEFAULT_PACK_DELTA_WINDOW_SIZE + # Build a list of objects ordered by the magic Linus heuristic + # This helps us find good objects to diff against us + magic = [] + for obj, path in objects: + magic.append((obj.type_num, path, -obj.raw_length(), obj)) + magic.sort() + + possible_bases = deque() + + for type_num, path, neg_length, o in magic: + raw = o.as_raw_chunks() + winner = raw + winner_len = sum(map(len, winner)) + winner_base = None + for base_id, base_type_num, base in possible_bases: + if base_type_num != type_num: + continue + delta_len = 0 + delta = [] + for chunk in create_delta(base, raw): + delta_len += len(chunk) + if delta_len >= winner_len: + break + delta.append(chunk) + else: + winner_base = base_id + winner = delta + winner_len = sum(map(len, winner)) + yield type_num, o.sha().digest(), winner_base, winner + possible_bases.appendleft((o.sha().digest(), type_num, raw)) + while len(possible_bases) > window_size: + possible_bases.pop() + + +def pack_objects_to_data(objects): + """Create pack data from objects + + Args: + objects: Pack objects + Returns: Tuples with (type_num, hexdigest, delta base, object chunks) + """ + count = len(objects) + return ( + count, + ( + (o.type_num, o.sha().digest(), None, o.as_raw_chunks()) + for (o, path) in objects + ), + ) + + +def write_pack_objects( + write, objects, delta_window_size=None, deltify=None, compression_level=-1 +): + """Write a new pack data file. + + Args: + write: write function to use + objects: Iterable of (object, path) tuples to write. Should provide + __len__ + delta_window_size: Sliding window size for searching for deltas; + Set to None for default window size. + deltify: Whether to deltify objects + compression_level: the zlib compression level to use + Returns: Dict mapping id -> (offset, crc32 checksum), pack checksum + """ + if hasattr(write, 'write'): + warnings.warn( + 'write_pack_objects() now takes a write rather than file argument', + DeprecationWarning, stacklevel=2) + write = write.write + + if deltify is None: + # PERFORMANCE/TODO(jelmer): This should be enabled but is *much* too + # slow at the moment. + deltify = False + if deltify: + pack_contents = deltify_pack_objects(objects, delta_window_size) + pack_contents_count = len(objects) + else: + pack_contents_count, pack_contents = pack_objects_to_data(objects) + + return write_pack_data( + write, + pack_contents_count, + pack_contents, + compression_level=compression_level, + ) + + +class PackChunkGenerator(object): + + def __init__(self, num_records=None, records=None, progress=None, compression_level=-1): + self.cs = sha1(b"") + self.entries = {} + self._it = self._pack_data_chunks( + num_records=num_records, records=records, progress=progress, compression_level=compression_level) + + def sha1digest(self): + return self.cs.digest() + + def __iter__(self): + return self._it + + def _pack_data_chunks(self, num_records=None, records=None, progress=None, compression_level=-1): + """Iterate pack data file chunks.. 
+ + Args: + num_records: Number of records (defaults to len(records) if None) + records: Iterator over type_num, object_id, delta_base, raw + progress: Function to report progress to + compression_level: the zlib compression level + Returns: Dict mapping id -> (offset, crc32 checksum), pack checksum + """ + # Write the pack + if num_records is None: + num_records = len(records) + offset = 0 + for chunk in pack_header_chunks(num_records): + yield chunk + self.cs.update(chunk) + offset += len(chunk) + actual_num_records = 0 + for i, (type_num, object_id, delta_base, raw) in enumerate(records): + if progress is not None: + progress(("writing pack data: %d/%d\r" % (i, num_records)).encode("ascii")) + if delta_base is not None: + try: + base_offset, base_crc32 = self.entries[delta_base] + except KeyError: + type_num = REF_DELTA + raw = (delta_base, raw) + else: + type_num = OFS_DELTA + raw = (offset - base_offset, raw) + crc32 = 0 + object_size = 0 + for chunk in pack_object_chunks(type_num, raw, compression_level=compression_level): + yield chunk + crc32 = binascii.crc32(chunk, crc32) + self.cs.update(chunk) + object_size += len(chunk) + actual_num_records += 1 + self.entries[object_id] = (offset, crc32) + offset += object_size + if actual_num_records != num_records: + raise AssertionError( + 'actual records written differs: %d != %d' % ( + actual_num_records, num_records)) + + yield self.cs.digest() + + +def write_pack_data(write, num_records=None, records=None, progress=None, compression_level=-1): + """Write a new pack data file. + + Args: + write: Write function to use + num_records: Number of records (defaults to len(records) if None) + records: Iterator over type_num, object_id, delta_base, raw + progress: Function to report progress to + compression_level: the zlib compression level + Returns: Dict mapping id -> (offset, crc32 checksum), pack checksum + """ + if hasattr(write, 'write'): + warnings.warn( + 'write_pack_data() now takes a write rather than file argument', + DeprecationWarning, stacklevel=2) + write = write.write + chunk_generator = PackChunkGenerator( + num_records=num_records, records=records, progress=progress, + compression_level=compression_level) + for chunk in chunk_generator: + write(chunk) + return chunk_generator.entries, chunk_generator.sha1digest() + + +def write_pack_index_v1(f, entries, pack_checksum): + """Write a new pack index file. + + Args: + f: A file-like object to write to + entries: List of tuples with object name (sha), offset_in_pack, + and crc32_checksum. + pack_checksum: Checksum of the pack file. + Returns: The SHA of the written index file + """ + f = SHA1Writer(f) + fan_out_table = defaultdict(lambda: 0) + for (name, offset, entry_checksum) in entries: + fan_out_table[ord(name[:1])] += 1 + # Fan-out table + for i in range(0x100): + f.write(struct.pack(">L", fan_out_table[i])) + fan_out_table[i + 1] += fan_out_table[i] + for (name, offset, entry_checksum) in entries: + if not (offset <= 0xFFFFFFFF): + raise TypeError("pack format 1 only supports offsets < 2Gb") + f.write(struct.pack(">L20s", offset, name)) + assert len(pack_checksum) == 20 + f.write(pack_checksum) + return f.write_sha() + + +def _delta_encode_size(size) -> bytes: + ret = bytearray() + c = size & 0x7F + size >>= 7 + while size: + ret.append(c | 0x80) + c = size & 0x7F + size >>= 7 + ret.append(c) + return bytes(ret) + + +# The length of delta compression copy operations in version 2 packs is limited +# to 64K. To copy more, we use several copy operations. 
Version 3 packs allow +# 24-bit lengths in copy operations, but we always make version 2 packs. +_MAX_COPY_LEN = 0xFFFF + + +def _encode_copy_operation(start, length): + scratch = bytearray([0x80]) + for i in range(4): + if start & 0xFF << i * 8: + scratch.append((start >> i * 8) & 0xFF) + scratch[0] |= 1 << i + for i in range(2): + if length & 0xFF << i * 8: + scratch.append((length >> i * 8) & 0xFF) + scratch[0] |= 1 << (4 + i) + return bytes(scratch) + + +def create_delta(base_buf, target_buf): + """Use python difflib to work out how to transform base_buf to target_buf. + + Args: + base_buf: Base buffer + target_buf: Target buffer + """ + if isinstance(base_buf, list): + base_buf = b''.join(base_buf) + if isinstance(target_buf, list): + target_buf = b''.join(target_buf) + assert isinstance(base_buf, bytes) + assert isinstance(target_buf, bytes) + # write delta header + yield _delta_encode_size(len(base_buf)) + yield _delta_encode_size(len(target_buf)) + # write out delta opcodes + seq = SequenceMatcher(isjunk=None, a=base_buf, b=target_buf) + for opcode, i1, i2, j1, j2 in seq.get_opcodes(): + # Git patch opcodes don't care about deletes! + # if opcode == 'replace' or opcode == 'delete': + # pass + if opcode == "equal": + # If they are equal, unpacker will use data from base_buf + # Write out an opcode that says what range to use + copy_start = i1 + copy_len = i2 - i1 + while copy_len > 0: + to_copy = min(copy_len, _MAX_COPY_LEN) + yield _encode_copy_operation(copy_start, to_copy) + copy_start += to_copy + copy_len -= to_copy + if opcode == "replace" or opcode == "insert": + # If we are replacing a range or adding one, then we just + # output it to the stream (prefixed by its size) + s = j2 - j1 + o = j1 + while s > 127: + yield bytes([127]) + yield memoryview(target_buf)[o:o + 127] + s -= 127 + o += 127 + yield bytes([s]) + yield memoryview(target_buf)[o:o + s] + + +def apply_delta(src_buf, delta): + """Based on the similar function in git's patch-delta.c. + + Args: + src_buf: Source buffer + delta: Delta instructions + """ + if not isinstance(src_buf, bytes): + src_buf = b"".join(src_buf) + if not isinstance(delta, bytes): + delta = b"".join(delta) + out = [] + index = 0 + delta_length = len(delta) + + def get_delta_header_size(delta, index): + size = 0 + i = 0 + while delta: + cmd = ord(delta[index : index + 1]) + index += 1 + size |= (cmd & ~0x80) << i + i += 7 + if not cmd & 0x80: + break + return size, index + + src_size, index = get_delta_header_size(delta, index) + dest_size, index = get_delta_header_size(delta, index) + assert src_size == len(src_buf), "%d vs %d" % (src_size, len(src_buf)) + while index < delta_length: + cmd = ord(delta[index : index + 1]) + index += 1 + if cmd & 0x80: + cp_off = 0 + for i in range(4): + if cmd & (1 << i): + x = ord(delta[index : index + 1]) + index += 1 + cp_off |= x << (i * 8) + cp_size = 0 + # Version 3 packs can contain copy sizes larger than 64K. 
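+            # Bits 4-6 of cmd flag which of up to three little-endian size
+            # bytes follow, mirroring the offset decoding above.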
+ for i in range(3): + if cmd & (1 << (4 + i)): + x = ord(delta[index : index + 1]) + index += 1 + cp_size |= x << (i * 8) + if cp_size == 0: + cp_size = 0x10000 + if ( + cp_off + cp_size < cp_size + or cp_off + cp_size > src_size + or cp_size > dest_size + ): + break + out.append(src_buf[cp_off : cp_off + cp_size]) + elif cmd != 0: + out.append(delta[index : index + cmd]) + index += cmd + else: + raise ApplyDeltaError("Invalid opcode 0") + + if index != delta_length: + raise ApplyDeltaError("delta not empty: %r" % delta[index:]) + + if dest_size != chunks_length(out): + raise ApplyDeltaError("dest size incorrect") + + return out + + +def write_pack_index_v2(f, entries, pack_checksum): + """Write a new pack index file. + + Args: + f: File-like object to write to + entries: List of tuples with object name (sha), offset_in_pack, and + crc32_checksum. + pack_checksum: Checksum of the pack file. + Returns: The SHA of the index file written + """ + f = SHA1Writer(f) + f.write(b"\377tOc") # Magic! + f.write(struct.pack(">L", 2)) + fan_out_table = defaultdict(lambda: 0) + for (name, offset, entry_checksum) in entries: + fan_out_table[ord(name[:1])] += 1 + # Fan-out table + largetable = [] + for i in range(0x100): + f.write(struct.pack(b">L", fan_out_table[i])) + fan_out_table[i + 1] += fan_out_table[i] + for (name, offset, entry_checksum) in entries: + f.write(name) + for (name, offset, entry_checksum) in entries: + f.write(struct.pack(b">L", entry_checksum)) + for (name, offset, entry_checksum) in entries: + if offset < 2 ** 31: + f.write(struct.pack(b">L", offset)) + else: + f.write(struct.pack(b">L", 2 ** 31 + len(largetable))) + largetable.append(offset) + for offset in largetable: + f.write(struct.pack(b">Q", offset)) + assert len(pack_checksum) == 20 + f.write(pack_checksum) + return f.write_sha() + + +write_pack_index = write_pack_index_v2 + + +class _PackTupleIterable(object): + """Helper for Pack.pack_tuples.""" + + def __init__(self, iterobjects, length): + self._iterobjects = iterobjects + self._length = length + + def __len__(self): + return self._length + + def __iter__(self): + return ((o, None) for o in self._iterobjects()) + + +class Pack(object): + """A Git pack object.""" + + def __init__(self, basename, resolve_ext_ref: Optional[ + Callable[[bytes], Tuple[int, UnpackedObject]]] = None): + self._basename = basename + self._data = None + self._idx = None + self._idx_path = self._basename + ".idx" + self._data_path = self._basename + ".pack" + self._data_load = lambda: PackData(self._data_path) + self._idx_load = lambda: load_pack_index(self._idx_path) + self.resolve_ext_ref = resolve_ext_ref + + @classmethod + def from_lazy_objects(cls, data_fn, idx_fn): + """Create a new pack object from callables to load pack data and + index objects.""" + ret = cls("") + ret._data_load = data_fn + ret._idx_load = idx_fn + return ret + + @classmethod + def from_objects(cls, data, idx): + """Create a new pack object from pack data and index objects.""" + ret = cls("") + ret._data = data + ret._data_load = None + ret._idx = idx + ret._idx_load = None + ret.check_length_and_checksum() + return ret + + def name(self): + """The SHA over the SHAs of the objects in this pack.""" + return self.index.objects_sha1() + + @property + def data(self): + """The pack data object being used.""" + if self._data is None: + self._data = self._data_load() + self.check_length_and_checksum() + return self._data + + @property + def index(self): + """The index being used. 
+ + Note: This may be an in-memory index + """ + if self._idx is None: + self._idx = self._idx_load() + return self._idx + + def close(self): + if self._data is not None: + self._data.close() + if self._idx is not None: + self._idx.close() + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + self.close() + + def __eq__(self, other): + return isinstance(self, type(other)) and self.index == other.index + + def __len__(self): + """Number of entries in this pack.""" + return len(self.index) + + def __repr__(self): + return "%s(%r)" % (self.__class__.__name__, self._basename) + + def __iter__(self): + """Iterate over all the sha1s of the objects in this pack.""" + return iter(self.index) + + def check_length_and_checksum(self): + """Sanity check the length and checksum of the pack index and data.""" + assert len(self.index) == len(self.data) + idx_stored_checksum = self.index.get_pack_checksum() + data_stored_checksum = self.data.get_stored_checksum() + if idx_stored_checksum != data_stored_checksum: + raise ChecksumMismatch( + sha_to_hex(idx_stored_checksum), + sha_to_hex(data_stored_checksum), + ) + + def check(self): + """Check the integrity of this pack. + + Raises: + ChecksumMismatch: if a checksum for the index or data is wrong + """ + self.index.check() + self.data.check() + for obj in self.iterobjects(): + obj.check() + # TODO: object connectivity checks + + def get_stored_checksum(self): + return self.data.get_stored_checksum() + + def __contains__(self, sha1): + """Check whether this pack contains a particular SHA1.""" + try: + self.index.object_index(sha1) + return True + except KeyError: + return False + + def get_raw_unresolved(self, sha1): + """Get raw unresolved data for a SHA. + + Args: + sha1: SHA to return data for + Returns: Tuple with pack object type, delta base (if applicable), + list of data chunks + """ + offset = self.index.object_index(sha1) + (obj_type, delta_base, chunks) = self.data.get_compressed_data_at(offset) + if obj_type == OFS_DELTA: + delta_base = sha_to_hex(self.index.object_sha1(offset - delta_base)) + obj_type = REF_DELTA + return (obj_type, delta_base, chunks) + + def get_raw(self, sha1): + offset = self.index.object_index(sha1) + obj_type, obj = self.data.get_object_at(offset) + type_num, chunks = self.resolve_object(offset, obj_type, obj) + return type_num, b"".join(chunks) + + def __getitem__(self, sha1): + """Retrieve the specified SHA1.""" + type, uncomp = self.get_raw(sha1) + return ShaFile.from_raw_string(type, uncomp, sha=sha1) + + def iterobjects(self): + """Iterate over the objects in this pack.""" + return iter( + PackInflater.for_pack_data(self.data, resolve_ext_ref=self.resolve_ext_ref) + ) + + def pack_tuples(self): + """Provide an iterable for use with write_pack_objects. + + Returns: Object that can iterate over (object, path) tuples + and provides __len__ + """ + + return _PackTupleIterable(self.iterobjects, len(self)) + + def keep(self, msg=None): + """Add a .keep file for the pack, preventing git from garbage collecting it. + + Args: + msg: A message written inside the .keep file; can be used later + to determine whether or not a .keep file is obsolete. + Returns: The path of the .keep file, as a string. 
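+
+        A usage sketch (hypothetical message)::
+
+            path = pack.keep(b"needed by an in-progress fetch")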
+ """ + keepfile_name = "%s.keep" % self._basename + with GitFile(keepfile_name, "wb") as keepfile: + if msg: + keepfile.write(msg) + keepfile.write(b"\n") + return keepfile_name + + def get_ref(self, sha) -> Tuple[int, int, UnpackedObject]: + """Get the object for a ref SHA, only looking in this pack.""" + # TODO: cache these results + try: + offset = self.index.object_index(sha) + except KeyError: + offset = None + if offset: + type, obj = self.data.get_object_at(offset) + elif self.resolve_ext_ref: + type, obj = self.resolve_ext_ref(sha) + else: + raise KeyError(sha) + return offset, type, obj + + def resolve_object(self, offset, type, obj, get_ref=None): + """Resolve an object, possibly resolving deltas when necessary. + + Returns: Tuple with object type and contents. + """ + # Walk down the delta chain, building a stack of deltas to reach + # the requested object. + base_offset = offset + base_type = type + base_obj = obj + delta_stack = [] + while base_type in DELTA_TYPES: + prev_offset = base_offset + if get_ref is None: + get_ref = self.get_ref + if base_type == OFS_DELTA: + (delta_offset, delta) = base_obj + # TODO: clean up asserts and replace with nicer error messages + base_offset = base_offset - delta_offset + base_type, base_obj = self.data.get_object_at(base_offset) + assert isinstance(base_type, int) + elif base_type == REF_DELTA: + (basename, delta) = base_obj + assert isinstance(basename, bytes) and len(basename) == 20 + base_offset, base_type, base_obj = get_ref(basename) + assert isinstance(base_type, int) + delta_stack.append((prev_offset, base_type, delta)) + + # Now grab the base object (mustn't be a delta) and apply the + # deltas all the way up the stack. + chunks = base_obj + for prev_offset, delta_type, delta in reversed(delta_stack): + chunks = apply_delta(chunks, delta) + # TODO(dborowitz): This can result in poor performance if + # large base objects are separated from deltas in the pack. + # We should reorganize so that we apply deltas to all + # objects in a chain one after the other to optimize cache + # performance. + if prev_offset is not None: + self.data._offset_cache[prev_offset] = base_type, chunks + return base_type, chunks + + def entries(self, progress=None): + """Yield entries summarizing the contents of this pack. + + Args: + progress: Progress function, called with current and total + object count. + Returns: iterator of tuples with (sha, offset, crc32) + """ + return self.data.iterentries( + progress=progress, resolve_ext_ref=self.resolve_ext_ref) + + def sorted_entries(self, progress=None): + """Return entries in this pack, sorted by SHA. + + Args: + progress: Progress function, called with current and total + object count + Returns: Iterator of tuples with (sha, offset, crc32) + """ + return self.data.sorted_entries( + progress=progress, resolve_ext_ref=self.resolve_ext_ref) + + +try: + from dulwich._pack import ( # type: ignore # noqa: F811 + apply_delta, + bisect_find_sha, + ) +except ImportError: + pass diff --git a/env/lib/python3.8/site-packages/dulwich/patch.py b/env/lib/python3.8/site-packages/dulwich/patch.py new file mode 100644 index 00000000..8c92a878 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/patch.py @@ -0,0 +1,406 @@ +# patch.py -- For dealing with packed-style patches. +# Copyright (C) 2009-2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. 
You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Classes for dealing with git am-style patches. + +These patches are basically unified diffs with some extra metadata tacked +on. +""" + +from difflib import SequenceMatcher +import email.parser +import time + +from dulwich.objects import ( + Blob, + Commit, + S_ISGITLINK, +) + +FIRST_FEW_BYTES = 8000 + + +def write_commit_patch(f, commit, contents, progress, version=None, encoding=None): + """Write a individual file patch. + + Args: + commit: Commit object + progress: Tuple with current patch number and total. + Returns: + tuple with filename and contents + """ + encoding = encoding or getattr(f, "encoding", "ascii") + if isinstance(contents, str): + contents = contents.encode(encoding) + (num, total) = progress + f.write( + b"From " + + commit.id + + b" " + + time.ctime(commit.commit_time).encode(encoding) + + b"\n" + ) + f.write(b"From: " + commit.author + b"\n") + f.write( + b"Date: " + time.strftime("%a, %d %b %Y %H:%M:%S %Z").encode(encoding) + b"\n" + ) + f.write( + ("Subject: [PATCH %d/%d] " % (num, total)).encode(encoding) + + commit.message + + b"\n" + ) + f.write(b"\n") + f.write(b"---\n") + try: + import subprocess + + p = subprocess.Popen( + ["diffstat"], stdout=subprocess.PIPE, stdin=subprocess.PIPE + ) + except (ImportError, OSError): + pass # diffstat not available? + else: + (diffstat, _) = p.communicate(contents) + f.write(diffstat) + f.write(b"\n") + f.write(contents) + f.write(b"-- \n") + if version is None: + from dulwich import __version__ as dulwich_version + + f.write(b"Dulwich %d.%d.%d\n" % dulwich_version) + else: + f.write(version.encode(encoding) + b"\n") + + +def get_summary(commit): + """Determine the summary line for use in a filename. + + Args: + commit: Commit + Returns: Summary string + """ + decoded = commit.message.decode(errors="replace") + return decoded.splitlines()[0].replace(" ", "-") + + +# Unified Diff +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return "{}".format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return "{},{}".format(beginning, length) + + +def unified_diff( + a, + b, + fromfile="", + tofile="", + fromfiledate="", + tofiledate="", + n=3, + lineterm="\n", + tree_encoding="utf-8", + output_encoding="utf-8", +): + """difflib.unified_diff that can detect "No newline at end of file" as + original "git diff" does. 
+ + Based on the same function in Python2.7 difflib.py + """ + started = False + for group in SequenceMatcher(None, a, b).get_grouped_opcodes(n): + if not started: + started = True + fromdate = "\t{}".format(fromfiledate) if fromfiledate else "" + todate = "\t{}".format(tofiledate) if tofiledate else "" + yield "--- {}{}{}".format( + fromfile.decode(tree_encoding), fromdate, lineterm + ).encode(output_encoding) + yield "+++ {}{}{}".format( + tofile.decode(tree_encoding), todate, lineterm + ).encode(output_encoding) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield "@@ -{} +{} @@{}".format(file1_range, file2_range, lineterm).encode( + output_encoding + ) + + for tag, i1, i2, j1, j2 in group: + if tag == "equal": + for line in a[i1:i2]: + yield b" " + line + continue + if tag in ("replace", "delete"): + for line in a[i1:i2]: + if not line[-1:] == b"\n": + line += b"\n\\ No newline at end of file\n" + yield b"-" + line + if tag in ("replace", "insert"): + for line in b[j1:j2]: + if not line[-1:] == b"\n": + line += b"\n\\ No newline at end of file\n" + yield b"+" + line + + +def is_binary(content): + """See if the first few bytes contain any null characters. + + Args: + content: Bytestring to check for binary content + """ + return b"\0" in content[:FIRST_FEW_BYTES] + + +def shortid(hexsha): + if hexsha is None: + return b"0" * 7 + else: + return hexsha[:7] + + +def patch_filename(p, root): + if p is None: + return b"/dev/null" + else: + return root + b"/" + p + + +def write_object_diff(f, store, old_file, new_file, diff_binary=False): + """Write the diff for an object. + + Args: + f: File-like object to write to + store: Store to retrieve objects from, if necessary + old_file: (path, mode, hexsha) tuple + new_file: (path, mode, hexsha) tuple + diff_binary: Whether to diff files even if they + are considered binary files by is_binary(). + + Note: the tuple elements should be None for nonexistent files + """ + (old_path, old_mode, old_id) = old_file + (new_path, new_mode, new_id) = new_file + patched_old_path = patch_filename(old_path, b"a") + patched_new_path = patch_filename(new_path, b"b") + + def content(mode, hexsha): + if hexsha is None: + return Blob.from_string(b"") + elif S_ISGITLINK(mode): + return Blob.from_string(b"Subproject commit " + hexsha + b"\n") + else: + return store[hexsha] + + def lines(content): + if not content: + return [] + else: + return content.splitlines() + + f.writelines( + gen_diff_header((old_path, new_path), (old_mode, new_mode), (old_id, new_id)) + ) + old_content = content(old_mode, old_id) + new_content = content(new_mode, new_id) + if not diff_binary and (is_binary(old_content.data) or is_binary(new_content.data)): + binary_diff = ( + b"Binary files " + + patched_old_path + + b" and " + + patched_new_path + + b" differ\n" + ) + f.write(binary_diff) + else: + f.writelines( + unified_diff( + lines(old_content), + lines(new_content), + patched_old_path, + patched_new_path, + ) + ) + + +# TODO(jelmer): Support writing unicode, rather than bytes. +def gen_diff_header(paths, modes, shas): + """Write a blob diff header. 
+ + Args: + paths: Tuple with old and new path + modes: Tuple with old and new modes + shas: Tuple with old and new shas + """ + (old_path, new_path) = paths + (old_mode, new_mode) = modes + (old_sha, new_sha) = shas + if old_path is None and new_path is not None: + old_path = new_path + if new_path is None and old_path is not None: + new_path = old_path + old_path = patch_filename(old_path, b"a") + new_path = patch_filename(new_path, b"b") + yield b"diff --git " + old_path + b" " + new_path + b"\n" + + if old_mode != new_mode: + if new_mode is not None: + if old_mode is not None: + yield ("old file mode %o\n" % old_mode).encode("ascii") + yield ("new file mode %o\n" % new_mode).encode("ascii") + else: + yield ("deleted file mode %o\n" % old_mode).encode("ascii") + yield b"index " + shortid(old_sha) + b".." + shortid(new_sha) + if new_mode is not None and old_mode is not None: + yield (" %o" % new_mode).encode("ascii") + yield b"\n" + + +# TODO(jelmer): Support writing unicode, rather than bytes. +def write_blob_diff(f, old_file, new_file): + """Write blob diff. + + Args: + f: File-like object to write to + old_file: (path, mode, hexsha) tuple (None if nonexisting) + new_file: (path, mode, hexsha) tuple (None if nonexisting) + + Note: The use of write_object_diff is recommended over this function. + """ + (old_path, old_mode, old_blob) = old_file + (new_path, new_mode, new_blob) = new_file + patched_old_path = patch_filename(old_path, b"a") + patched_new_path = patch_filename(new_path, b"b") + + def lines(blob): + if blob is not None: + return blob.splitlines() + else: + return [] + + f.writelines( + gen_diff_header( + (old_path, new_path), + (old_mode, new_mode), + (getattr(old_blob, "id", None), getattr(new_blob, "id", None)), + ) + ) + old_contents = lines(old_blob) + new_contents = lines(new_blob) + f.writelines( + unified_diff(old_contents, new_contents, patched_old_path, patched_new_path) + ) + + +def write_tree_diff(f, store, old_tree, new_tree, diff_binary=False): + """Write tree diff. + + Args: + f: File-like object to write to. + old_tree: Old tree id + new_tree: New tree id + diff_binary: Whether to diff files even if they + are considered binary files by is_binary(). + """ + changes = store.tree_changes(old_tree, new_tree) + for (oldpath, newpath), (oldmode, newmode), (oldsha, newsha) in changes: + write_object_diff( + f, + store, + (oldpath, oldmode, oldsha), + (newpath, newmode, newsha), + diff_binary=diff_binary, + ) + + +def git_am_patch_split(f, encoding=None): + """Parse a git-am-style patch and split it up into bits. + + Args: + f: File-like object to parse + encoding: Encoding to use when creating Git objects + Returns: Tuple with commit object, diff contents and git version + """ + encoding = encoding or getattr(f, "encoding", "ascii") + encoding = encoding or "ascii" + contents = f.read() + if isinstance(contents, bytes) and getattr(email.parser, "BytesParser", None): + parser = email.parser.BytesParser() + msg = parser.parsebytes(contents) + else: + parser = email.parser.Parser() + msg = parser.parsestr(contents) + return parse_patch_message(msg, encoding) + + +def parse_patch_message(msg, encoding=None): + """Extract a Commit object and patch from an e-mail message. 
+ + Args: + msg: An email message (email.message.Message) + encoding: Encoding to use to encode Git commits + Returns: Tuple with commit object, diff contents and git version + """ + c = Commit() + c.author = msg["from"].encode(encoding) + c.committer = msg["from"].encode(encoding) + try: + patch_tag_start = msg["subject"].index("[PATCH") + except ValueError: + subject = msg["subject"] + else: + close = msg["subject"].index("] ", patch_tag_start) + subject = msg["subject"][close + 2 :] + c.message = (subject.replace("\n", "") + "\n").encode(encoding) + first = True + + body = msg.get_payload(decode=True) + lines = body.splitlines(True) + line_iter = iter(lines) + + for line in line_iter: + if line == b"---\n": + break + if first: + if line.startswith(b"From: "): + c.author = line[len(b"From: ") :].rstrip() + else: + c.message += b"\n" + line + first = False + else: + c.message += line + diff = b"" + for line in line_iter: + if line == b"-- \n": + break + diff += line + try: + version = next(line_iter).rstrip(b"\n") + except StopIteration: + version = None + return c, diff, version diff --git a/env/lib/python3.8/site-packages/dulwich/porcelain.py b/env/lib/python3.8/site-packages/dulwich/porcelain.py new file mode 100644 index 00000000..8164a6d8 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/porcelain.py @@ -0,0 +1,2086 @@ +# porcelain.py -- Porcelain-like layer on top of Dulwich +# Copyright (C) 2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Simple wrapper that provides porcelain-like functions on top of Dulwich. + +Currently implemented: + * archive + * add + * branch{_create,_delete,_list} + * check-ignore + * checkout + * clone + * commit + * commit-tree + * daemon + * describe + * diff-tree + * fetch + * init + * ls-files + * ls-remote + * ls-tree + * pull + * push + * rm + * remote{_add} + * receive-pack + * reset + * submodule_add + * submodule_init + * submodule_list + * rev-list + * tag{_create,_delete,_list} + * upload-pack + * update-server-info + * status + * symbolic-ref + +These functions are meant to behave similarly to the git subcommands. +Differences in behaviour are considered bugs. + +Note: one of the consequences of this is that paths tend to be +interpreted relative to the current working directory rather than relative +to the repository root. 
+ +Functions should generally accept both unicode strings and bytestrings +""" + +from collections import namedtuple +from contextlib import ( + closing, + contextmanager, +) +from io import BytesIO, RawIOBase +import datetime +import os +from pathlib import Path +import posixpath +import stat +import sys +import time +from typing import ( + Optional, + Tuple, + Union, +) + +from dulwich.archive import ( + tar_stream, +) +from dulwich.client import ( + get_transport_and_path, +) +from dulwich.config import ( + Config, + ConfigFile, + StackedConfig, + read_submodules, +) +from dulwich.diff_tree import ( + CHANGE_ADD, + CHANGE_DELETE, + CHANGE_MODIFY, + CHANGE_RENAME, + CHANGE_COPY, + RENAME_CHANGE_TYPES, +) +from dulwich.errors import ( + SendPackError, +) +from dulwich.graph import ( + can_fast_forward, +) +from dulwich.ignore import IgnoreFilterManager +from dulwich.index import ( + blob_from_path_and_stat, + get_unstaged_changes, + build_file_from_blob, + _fs_to_tree_path, +) +from dulwich.object_store import ( + tree_lookup_path, +) +from dulwich.objects import ( + Commit, + Tag, + format_timezone, + parse_timezone, + pretty_format_tree_entry, +) +from dulwich.objectspec import ( + parse_commit, + parse_object, + parse_ref, + parse_reftuples, + parse_tree, +) +from dulwich.pack import ( + write_pack_index, + write_pack_objects, +) +from dulwich.patch import write_tree_diff +from dulwich.protocol import ( + Protocol, + ZERO_SHA, +) +from dulwich.refs import ( + LOCAL_BRANCH_PREFIX, + LOCAL_TAG_PREFIX, + _import_remote_refs, +) +from dulwich.repo import BaseRepo, Repo +from dulwich.server import ( + FileSystemBackend, + TCPGitServer, + ReceivePackHandler, + UploadPackHandler, + update_server_info as server_update_server_info, +) + + +# Module level tuple definition for status output +GitStatus = namedtuple("GitStatus", "staged unstaged untracked") + + +class NoneStream(RawIOBase): + """Fallback if stdout or stderr are unavailable, does nothing.""" + + def read(self, size=-1): + return None + + def readall(self): + return None + + def readinto(self, b): + return None + + def write(self, b): + return None + + +default_bytes_out_stream = getattr(sys.stdout, "buffer", None) or NoneStream() +default_bytes_err_stream = getattr(sys.stderr, "buffer", None) or NoneStream() + + +DEFAULT_ENCODING = "utf-8" + + +class Error(Exception): + """Porcelain-based error. """ + + def __init__(self, msg): + super(Error, self).__init__(msg) + + +class RemoteExists(Error): + """Raised when the remote already exists.""" + + +class TimezoneFormatError(Error): + """Raised when the timezone cannot be determined from a given string.""" + + +def parse_timezone_format(tz_str): + """Parse given string and attempt to return a timezone offset. + + Different formats are considered in the following order: + + - Git internal format: + - RFC 2822: e.g. Mon, 20 Nov 1995 19:12:08 -0500 + - ISO 8601: e.g. 
1995-11-20T19:12:08-0500 + + Args: + tz_str: datetime string + Returns: Timezone offset as integer + Raises: + TimezoneFormatError: if timezone information cannot be extracted + """ + import re + + # Git internal format + internal_format_pattern = re.compile("^[0-9]+ [+-][0-9]{,4}$") + if re.match(internal_format_pattern, tz_str): + try: + tz_internal = parse_timezone(tz_str.split(" ")[1].encode(DEFAULT_ENCODING)) + return tz_internal[0] + except ValueError: + pass + + # RFC 2822 + import email.utils + rfc_2822 = email.utils.parsedate_tz(tz_str) + if rfc_2822: + return rfc_2822[9] + + # ISO 8601 + + # Supported offsets: + # sHHMM, sHH:MM, sHH + iso_8601_pattern = re.compile("[0-9] ?([+-])([0-9]{2})(?::(?=[0-9]{2}))?([0-9]{2})?$") + match = re.search(iso_8601_pattern, tz_str) + total_secs = 0 + if match: + sign, hours, minutes = match.groups() + total_secs += int(hours) * 3600 + if minutes: + total_secs += int(minutes) * 60 + total_secs = -total_secs if sign == "-" else total_secs + return total_secs + + # YYYY.MM.DD, MM/DD/YYYY, DD.MM.YYYY contain no timezone information + + raise TimezoneFormatError(tz_str) + + +def get_user_timezones(): + """Retrieve local timezone as described in + https://raw.githubusercontent.com/git/git/v2.3.0/Documentation/date-formats.txt + Returns: A tuple containing author timezone, committer timezone + """ + local_timezone = time.localtime().tm_gmtoff + + if os.environ.get("GIT_AUTHOR_DATE"): + author_timezone = parse_timezone_format(os.environ["GIT_AUTHOR_DATE"]) + else: + author_timezone = local_timezone + if os.environ.get("GIT_COMMITTER_DATE"): + commit_timezone = parse_timezone_format(os.environ["GIT_COMMITTER_DATE"]) + else: + commit_timezone = local_timezone + + return author_timezone, commit_timezone + + +def open_repo(path_or_repo): + """Open an argument that can be a repository or a path for a repository.""" + if isinstance(path_or_repo, BaseRepo): + return path_or_repo + return Repo(path_or_repo) + + +@contextmanager +def _noop_context_manager(obj): + """Context manager that has the same api as closing but does nothing.""" + yield obj + + +def open_repo_closing(path_or_repo): + """Open an argument that can be a repository or a path for a repository. + returns a context manager that will close the repo on exit if the argument + is a path, else does nothing if the argument is a repo. + """ + if isinstance(path_or_repo, BaseRepo): + return _noop_context_manager(path_or_repo) + return closing(Repo(path_or_repo)) + + +def path_to_tree_path(repopath, path, tree_encoding=DEFAULT_ENCODING): + """Convert a path to a path usable in an index, e.g. bytes and relative to + the repository root. + + Args: + repopath: Repository path, absolute or relative to the cwd + path: A path, absolute or relative to the cwd + Returns: A path formatted for use in e.g. 
an index + """ + # Resolve might returns a relative path on Windows + # https://bugs.python.org/issue38671 + if sys.platform == "win32": + path = os.path.abspath(path) + + path = Path(path) + resolved_path = path.resolve() + + # Resolve and abspath seems to behave differently regarding symlinks, + # as we are doing abspath on the file path, we need to do the same on + # the repo path or they might not match + if sys.platform == "win32": + repopath = os.path.abspath(repopath) + + repopath = Path(repopath).resolve() + + try: + relpath = resolved_path.relative_to(repopath) + except ValueError: + # If path is a symlink that points to a file outside the repo, we + # want the relpath for the link itself, not the resolved target + if path.is_symlink(): + parent = path.parent.resolve() + relpath = (parent / path.name).relative_to(repopath) + else: + raise + if sys.platform == "win32": + return str(relpath).replace(os.path.sep, "/").encode(tree_encoding) + else: + return bytes(relpath) + + +class DivergedBranches(Error): + """Branches have diverged and fast-forward is not possible.""" + + def __init__(self, current_sha, new_sha): + self.current_sha = current_sha + self.new_sha = new_sha + + +def check_diverged(repo, current_sha, new_sha): + """Check if updating to a sha can be done with fast forwarding. + + Args: + repo: Repository object + current_sha: Current head sha + new_sha: New head sha + """ + try: + can = can_fast_forward(repo, current_sha, new_sha) + except KeyError: + can = False + if not can: + raise DivergedBranches(current_sha, new_sha) + + +def archive( + repo, + committish=None, + outstream=default_bytes_out_stream, + errstream=default_bytes_err_stream, +): + """Create an archive. + + Args: + repo: Path of repository for which to generate an archive. + committish: Commit SHA1 or ref to use + outstream: Output stream (defaults to stdout) + errstream: Error stream (defaults to stderr) + """ + + if committish is None: + committish = "HEAD" + with open_repo_closing(repo) as repo_obj: + c = parse_commit(repo_obj, committish) + for chunk in tar_stream( + repo_obj.object_store, repo_obj.object_store[c.tree], c.commit_time + ): + outstream.write(chunk) + + +def update_server_info(repo="."): + """Update server info files for a repository. + + Args: + repo: path to the repository + """ + with open_repo_closing(repo) as r: + server_update_server_info(r) + + +def symbolic_ref(repo, ref_name, force=False): + """Set git symbolic ref into HEAD. + + Args: + repo: path to the repository + ref_name: short name of the new ref + force: force settings without checking if it exists in refs/heads + """ + with open_repo_closing(repo) as repo_obj: + ref_path = _make_branch_ref(ref_name) + if not force and ref_path not in repo_obj.refs.keys(): + raise Error("fatal: ref `%s` is not a ref" % ref_name) + repo_obj.refs.set_symbolic_ref(b"HEAD", ref_path) + + +def commit( + repo=".", + message=None, + author=None, + author_timezone=None, + committer=None, + commit_timezone=None, + encoding=None, + no_verify=False, + signoff=False, +): + """Create a new commit. 
+ + Args: + repo: Path to repository + message: Optional commit message + author: Optional author name and email + author_timezone: Author timestamp timezone + committer: Optional committer name and email + commit_timezone: Commit timestamp timezone + no_verify: Skip pre-commit and commit-msg hooks + signoff: GPG Sign the commit (bool, defaults to False, + pass True to use default GPG key, + pass a str containing Key ID to use a specific GPG key) + Returns: SHA1 of the new commit + """ + # FIXME: Support --all argument + if getattr(message, "encode", None): + message = message.encode(encoding or DEFAULT_ENCODING) + if getattr(author, "encode", None): + author = author.encode(encoding or DEFAULT_ENCODING) + if getattr(committer, "encode", None): + committer = committer.encode(encoding or DEFAULT_ENCODING) + local_timezone = get_user_timezones() + if author_timezone is None: + author_timezone = local_timezone[0] + if commit_timezone is None: + commit_timezone = local_timezone[1] + with open_repo_closing(repo) as r: + return r.do_commit( + message=message, + author=author, + author_timezone=author_timezone, + committer=committer, + commit_timezone=commit_timezone, + encoding=encoding, + no_verify=no_verify, + sign=signoff if isinstance(signoff, (str, bool)) else None, + ) + + +def commit_tree(repo, tree, message=None, author=None, committer=None): + """Create a new commit object. + + Args: + repo: Path to repository + tree: An existing tree object + author: Optional author name and email + committer: Optional committer name and email + """ + with open_repo_closing(repo) as r: + return r.do_commit( + message=message, tree=tree, committer=committer, author=author + ) + + +def init(path=".", bare=False): + """Create a new git repository. + + Args: + path: Path to repository. + bare: Whether to create a bare repository. + Returns: A Repo instance + """ + if not os.path.exists(path): + os.mkdir(path) + + if bare: + return Repo.init_bare(path) + else: + return Repo.init(path) + + +def clone( + source, + target=None, + bare=False, + checkout=None, + errstream=default_bytes_err_stream, + outstream=None, + origin: Optional[str] = "origin", + depth: Optional[int] = None, + branch: Optional[Union[str, bytes]] = None, + config: Optional[Config] = None, + **kwargs +): + """Clone a local or remote git repository. + + Args: + source: Path or URL for source repository + target: Path to target repository (optional) + bare: Whether or not to create a bare repository + checkout: Whether or not to check-out HEAD after cloning + errstream: Optional stream to write progress to + outstream: Optional stream to write progress to (deprecated) + origin: Name of remote from the repository used to clone + depth: Depth to fetch at + branch: Optional branch or tag to be used as HEAD in the new repository + instead of the cloned repository's HEAD. 
+      config: Configuration to use
+    Returns: The new repository
+    """
+    if outstream is not None:
+        import warnings
+
+        warnings.warn(
+            "outstream= has been deprecated in favour of errstream=.",
+            DeprecationWarning,
+            stacklevel=3,
+        )
+        # TODO(jelmer): Capture logging output and stream to errstream
+
+    if config is None:
+        config = StackedConfig.default()
+
+    if checkout is None:
+        checkout = not bare
+    if checkout and bare:
+        raise Error("checkout and bare are incompatible")
+
+    if target is None:
+        target = source.split("/")[-1]
+
+    if isinstance(branch, str):
+        branch = branch.encode(DEFAULT_ENCODING)
+
+    mkdir = not os.path.exists(target)
+
+    (client, path) = get_transport_and_path(
+        source, config=config, **kwargs)
+
+    return client.clone(
+        path,
+        target,
+        mkdir=mkdir,
+        bare=bare,
+        origin=origin,
+        checkout=checkout,
+        branch=branch,
+        progress=errstream.write,
+        depth=depth,
+    )
+
+
+def add(repo=".", paths=None):
+    """Add files to the staging area.
+
+    Args:
+      repo: Repository for the files
+      paths: Paths to add. No value passed stages all modified files.
+    Returns: Tuple with set of added files and ignored files
+
+    If the repository contains ignored directories, the returned set will
+    contain the path to an ignored directory (with trailing slash). Individual
+    files within ignored directories will not be returned.
+    """
+    ignored = set()
+    with open_repo_closing(repo) as r:
+        repo_path = Path(r.path).resolve()
+        ignore_manager = IgnoreFilterManager.from_repo(r)
+        if not paths:
+            paths = list(
+                get_untracked_paths(
+                    str(Path(os.getcwd()).resolve()),
+                    str(repo_path),
+                    r.open_index(),
+                )
+            )
+        relpaths = []
+        if not isinstance(paths, list):
+            paths = [paths]
+        for p in paths:
+            path = Path(p)
+            relpath = str(path.resolve().relative_to(repo_path))
+            # FIXME: Support patterns
+            if path.is_dir():
+                relpath = os.path.join(relpath, "")
+            if ignore_manager.is_ignored(relpath):
+                ignored.add(relpath)
+                continue
+            relpaths.append(relpath)
+        r.stage(relpaths)
+    return (relpaths, ignored)
+
+
+def _is_subdir(subdir, parentdir):
+    """Check whether subdir is parentdir or a subdir of parentdir.
+
+    If parentdir or subdir is a relative path, it will be disambiguated
+    relative to the pwd.
+    """
+    parentdir_abs = os.path.realpath(parentdir) + os.path.sep
+    subdir_abs = os.path.realpath(subdir) + os.path.sep
+    return subdir_abs.startswith(parentdir_abs)
+
+
+# TODO: option to remove ignored files also, in line with `git clean -fdx`
+def clean(repo=".", target_dir=None):
+    """Remove any untracked files from the target directory recursively.
+
+    Equivalent to running ``git clean -fd`` in target_dir.
+
+    Args:
+      repo: Repository where the files may be tracked
+      target_dir: Directory to clean - current directory if None
+    """
+    if target_dir is None:
+        target_dir = os.getcwd()
+
+    with open_repo_closing(repo) as r:
+        if not _is_subdir(target_dir, r.path):
+            raise Error("target_dir must be in the repo's working dir")
+
+        config = r.get_config_stack()
+        require_force = config.get_boolean(  # noqa: F841
+            (b"clean",), b"requireForce", True
+        )
+
+        # TODO(jelmer): if require_force is set, then make sure that -f, -i or
+        # -n is specified.
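+        # A minimal sketch of what that check might look like, assuming a
+        # hypothetical `force` keyword argument (clean() does not take one
+        # today):
+        #
+        #     if require_force and not force:
+        #         raise Error("clean.requireForce is set; refusing to clean")
+        #
+        # Until then, require_force is intentionally unused (see the
+        # noqa: F841 marker above).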
+ + index = r.open_index() + ignore_manager = IgnoreFilterManager.from_repo(r) + + paths_in_wd = _walk_working_dir_paths(target_dir, r.path) + # Reverse file visit order, so that files and subdirectories are + # removed before containing directory + for ap, is_dir in reversed(list(paths_in_wd)): + if is_dir: + # All subdirectories and files have been removed if untracked, + # so dir contains no tracked files iff it is empty. + is_empty = len(os.listdir(ap)) == 0 + if is_empty: + os.rmdir(ap) + else: + ip = path_to_tree_path(r.path, ap) + is_tracked = ip in index + + rp = os.path.relpath(ap, r.path) + is_ignored = ignore_manager.is_ignored(rp) + + if not is_tracked and not is_ignored: + os.remove(ap) + + +def remove(repo=".", paths=None, cached=False): + """Remove files from the staging area. + + Args: + repo: Repository for the files + paths: Paths to remove + """ + with open_repo_closing(repo) as r: + index = r.open_index() + for p in paths: + full_path = os.fsencode(os.path.abspath(p)) + tree_path = path_to_tree_path(r.path, p) + try: + index_sha = index[tree_path].sha + except KeyError as exc: + raise Error("%s did not match any files" % p) from exc + + if not cached: + try: + st = os.lstat(full_path) + except OSError: + pass + else: + try: + blob = blob_from_path_and_stat(full_path, st) + except IOError: + pass + else: + try: + committed_sha = tree_lookup_path( + r.__getitem__, r[r.head()].tree, tree_path + )[1] + except KeyError: + committed_sha = None + + if blob.id != index_sha and index_sha != committed_sha: + raise Error( + "file has staged content differing " + "from both the file and head: %s" % p + ) + + if index_sha != committed_sha: + raise Error("file has staged changes: %s" % p) + os.remove(full_path) + del index[tree_path] + index.write() + + +rm = remove + + +def commit_decode(commit, contents, default_encoding=DEFAULT_ENCODING): + if commit.encoding: + encoding = commit.encoding.decode("ascii") + else: + encoding = default_encoding + return contents.decode(encoding, "replace") + + +def commit_encode(commit, contents, default_encoding=DEFAULT_ENCODING): + if commit.encoding: + encoding = commit.encoding.decode("ascii") + else: + encoding = default_encoding + return contents.encode(encoding) + + +def print_commit(commit, decode, outstream=sys.stdout): + """Write a human-readable commit log entry. + + Args: + commit: A `Commit` object + outstream: A stream file to write to + """ + outstream.write("-" * 50 + "\n") + outstream.write("commit: " + commit.id.decode("ascii") + "\n") + if len(commit.parents) > 1: + outstream.write( + "merge: " + + "...".join([c.decode("ascii") for c in commit.parents[1:]]) + + "\n" + ) + outstream.write("Author: " + decode(commit.author) + "\n") + if commit.author != commit.committer: + outstream.write("Committer: " + decode(commit.committer) + "\n") + + time_tuple = time.gmtime(commit.author_time + commit.author_timezone) + time_str = time.strftime("%a %b %d %Y %H:%M:%S", time_tuple) + timezone_str = format_timezone(commit.author_timezone).decode("ascii") + outstream.write("Date: " + time_str + " " + timezone_str + "\n") + outstream.write("\n") + outstream.write(decode(commit.message) + "\n") + outstream.write("\n") + + +def print_tag(tag, decode, outstream=sys.stdout): + """Write a human-readable tag. 
+ + Args: + tag: A `Tag` object + decode: Function for decoding bytes to unicode string + outstream: A stream to write to + """ + outstream.write("Tagger: " + decode(tag.tagger) + "\n") + time_tuple = time.gmtime(tag.tag_time + tag.tag_timezone) + time_str = time.strftime("%a %b %d %Y %H:%M:%S", time_tuple) + timezone_str = format_timezone(tag.tag_timezone).decode("ascii") + outstream.write("Date: " + time_str + " " + timezone_str + "\n") + outstream.write("\n") + outstream.write(decode(tag.message)) + outstream.write("\n") + + +def show_blob(repo, blob, decode, outstream=sys.stdout): + """Write a blob to a stream. + + Args: + repo: A `Repo` object + blob: A `Blob` object + decode: Function for decoding bytes to unicode string + outstream: A stream file to write to + """ + outstream.write(decode(blob.data)) + + +def show_commit(repo, commit, decode, outstream=sys.stdout): + """Show a commit to a stream. + + Args: + repo: A `Repo` object + commit: A `Commit` object + decode: Function for decoding bytes to unicode string + outstream: Stream to write to + """ + print_commit(commit, decode=decode, outstream=outstream) + if commit.parents: + parent_commit = repo[commit.parents[0]] + base_tree = parent_commit.tree + else: + base_tree = None + diffstream = BytesIO() + write_tree_diff(diffstream, repo.object_store, base_tree, commit.tree) + diffstream.seek(0) + outstream.write(commit_decode(commit, diffstream.getvalue())) + + +def show_tree(repo, tree, decode, outstream=sys.stdout): + """Print a tree to a stream. + + Args: + repo: A `Repo` object + tree: A `Tree` object + decode: Function for decoding bytes to unicode string + outstream: Stream to write to + """ + for n in tree: + outstream.write(decode(n) + "\n") + + +def show_tag(repo, tag, decode, outstream=sys.stdout): + """Print a tag to a stream. + + Args: + repo: A `Repo` object + tag: A `Tag` object + decode: Function for decoding bytes to unicode string + outstream: Stream to write to + """ + print_tag(tag, decode, outstream) + show_object(repo, repo[tag.object[1]], decode, outstream) + + +def show_object(repo, obj, decode, outstream): + return { + b"tree": show_tree, + b"blob": show_blob, + b"commit": show_commit, + b"tag": show_tag, + }[obj.type_name](repo, obj, decode, outstream) + + +def print_name_status(changes): + """Print a simple status summary, listing changed files.""" + for change in changes: + if not change: + continue + if isinstance(change, list): + change = change[0] + if change.type == CHANGE_ADD: + path1 = change.new.path + path2 = "" + kind = "A" + elif change.type == CHANGE_DELETE: + path1 = change.old.path + path2 = "" + kind = "D" + elif change.type == CHANGE_MODIFY: + path1 = change.new.path + path2 = "" + kind = "M" + elif change.type in RENAME_CHANGE_TYPES: + path1 = change.old.path + path2 = change.new.path + if change.type == CHANGE_RENAME: + kind = "R" + elif change.type == CHANGE_COPY: + kind = "C" + yield "%-8s%-20s%-20s" % (kind, path1, path2) + + +def log( + repo=".", + paths=None, + outstream=sys.stdout, + max_entries=None, + reverse=False, + name_status=False, +): + """Write commit logs. 
+ + Args: + repo: Path to repository + paths: Optional set of specific paths to print entries for + outstream: Stream to write log output to + reverse: Reverse order in which entries are printed + name_status: Print name status + max_entries: Optional maximum number of entries to display + """ + with open_repo_closing(repo) as r: + walker = r.get_walker(max_entries=max_entries, paths=paths, reverse=reverse) + for entry in walker: + + def decode(x): + return commit_decode(entry.commit, x) + + print_commit(entry.commit, decode, outstream) + if name_status: + outstream.writelines( + [line + "\n" for line in print_name_status(entry.changes())] + ) + + +# TODO(jelmer): better default for encoding? +def show( + repo=".", + objects=None, + outstream=sys.stdout, + default_encoding=DEFAULT_ENCODING, +): + """Print the changes in a commit. + + Args: + repo: Path to repository + objects: Objects to show (defaults to [HEAD]) + outstream: Stream to write to + default_encoding: Default encoding to use if none is set in the + commit + """ + if objects is None: + objects = ["HEAD"] + if not isinstance(objects, list): + objects = [objects] + with open_repo_closing(repo) as r: + for objectish in objects: + o = parse_object(r, objectish) + if isinstance(o, Commit): + + def decode(x): + return commit_decode(o, x, default_encoding) + + else: + + def decode(x): + return x.decode(default_encoding) + + show_object(r, o, decode, outstream) + + +def diff_tree(repo, old_tree, new_tree, outstream=default_bytes_out_stream): + """Compares the content and mode of blobs found via two tree objects. + + Args: + repo: Path to repository + old_tree: Id of old tree + new_tree: Id of new tree + outstream: Stream to write to + """ + with open_repo_closing(repo) as r: + write_tree_diff(outstream, r.object_store, old_tree, new_tree) + + +def rev_list(repo, commits, outstream=sys.stdout): + """Lists commit objects in reverse chronological order. + + Args: + repo: Path to repository + commits: Commits over which to iterate + outstream: Stream to write to + """ + with open_repo_closing(repo) as r: + for entry in r.get_walker(include=[r[c].id for c in commits]): + outstream.write(entry.commit.id + b"\n") + + +def _canonical_part(url: str) -> str: + name = url.rsplit('/', 1)[-1] + if name.endswith('.git'): + name = name[:-4] + return name + + +def submodule_add(repo, url, path=None, name=None): + """Add a new submodule. + + Args: + repo: Path to repository + url: URL of repository to add as submodule + path: Path where submodule should live + """ + with open_repo_closing(repo) as r: + if path is None: + path = os.path.relpath(_canonical_part(url), r.path) + if name is None: + name = path + + # TODO(jelmer): Move this logic to dulwich.submodule + gitmodules_path = os.path.join(r.path, ".gitmodules") + try: + config = ConfigFile.from_path(gitmodules_path) + except FileNotFoundError: + config = ConfigFile() + config.path = gitmodules_path + config.set(("submodule", name), "url", url) + config.set(("submodule", name), "path", path) + config.write_to_path() + + +def submodule_init(repo): + """Initialize submodules. + + Args: + repo: Path to repository + """ + with open_repo_closing(repo) as r: + config = r.get_config() + gitmodules_path = os.path.join(r.path, '.gitmodules') + for path, url, name in read_submodules(gitmodules_path): + config.set((b'submodule', name), b'active', True) + config.set((b'submodule', name), b'url', url) + config.write_to_path() + + +def submodule_list(repo): + """List submodules. 
+ + Args: + repo: Path to repository + """ + from .submodule import iter_cached_submodules + with open_repo_closing(repo) as r: + for path, sha in iter_cached_submodules(r.object_store, r[r.head()].tree): + yield path.decode(DEFAULT_ENCODING), sha.decode(DEFAULT_ENCODING) + + +def tag_create( + repo, + tag, + author=None, + message=None, + annotated=False, + objectish="HEAD", + tag_time=None, + tag_timezone=None, + sign=False, + encoding=DEFAULT_ENCODING +): + """Creates a tag in git via dulwich calls: + + Args: + repo: Path to repository + tag: tag string + author: tag author (optional, if annotated is set) + message: tag message (optional) + annotated: whether to create an annotated tag + objectish: object the tag should point at, defaults to HEAD + tag_time: Optional time for annotated tag + tag_timezone: Optional timezone for annotated tag + sign: GPG Sign the tag (bool, defaults to False, + pass True to use default GPG key, + pass a str containing Key ID to use a specific GPG key) + """ + + with open_repo_closing(repo) as r: + object = parse_object(r, objectish) + + if annotated: + # Create the tag object + tag_obj = Tag() + if author is None: + # TODO(jelmer): Don't use repo private method. + author = r._get_user_identity(r.get_config_stack()) + tag_obj.tagger = author + tag_obj.message = message + "\n".encode(encoding) + tag_obj.name = tag + tag_obj.object = (type(object), object.id) + if tag_time is None: + tag_time = int(time.time()) + tag_obj.tag_time = tag_time + if tag_timezone is None: + tag_timezone = get_user_timezones()[1] + elif isinstance(tag_timezone, str): + tag_timezone = parse_timezone(tag_timezone) + tag_obj.tag_timezone = tag_timezone + if sign: + tag_obj.sign(sign if isinstance(sign, str) else None) + + r.object_store.add_object(tag_obj) + tag_id = tag_obj.id + else: + tag_id = object.id + + r.refs[_make_tag_ref(tag)] = tag_id + + +def tag_list(repo, outstream=sys.stdout): + """List all tags. + + Args: + repo: Path to repository + outstream: Stream to write tags to + """ + with open_repo_closing(repo) as r: + tags = sorted(r.refs.as_dict(b"refs/tags")) + return tags + + +def tag_delete(repo, name): + """Remove a tag. + + Args: + repo: Path to repository + name: Name of tag to remove + """ + with open_repo_closing(repo) as r: + if isinstance(name, bytes): + names = [name] + elif isinstance(name, list): + names = name + else: + raise Error("Unexpected tag name type %r" % name) + for name in names: + del r.refs[_make_tag_ref(name)] + + +def reset(repo, mode, treeish="HEAD"): + """Reset current HEAD to the specified state. 
+ + Args: + repo: Path to repository + mode: Mode ("hard", "soft", "mixed") + treeish: Treeish to reset to + """ + + if mode != "hard": + raise Error("hard is the only mode currently supported") + + with open_repo_closing(repo) as r: + tree = parse_tree(r, treeish) + r.reset_index(tree.id) + + +def get_remote_repo( + repo: Repo, remote_location: Optional[Union[str, bytes]] = None +) -> Tuple[Optional[str], str]: + config = repo.get_config() + if remote_location is None: + remote_location = get_branch_remote(repo) + if isinstance(remote_location, str): + encoded_location = remote_location.encode() + else: + encoded_location = remote_location + + section = (b"remote", encoded_location) + + remote_name = None # type: Optional[str] + + if config.has_section(section): + remote_name = encoded_location.decode() + encoded_location = config.get(section, "url") + else: + remote_name = None + + return (remote_name, encoded_location.decode()) + + +def push( + repo, + remote_location=None, + refspecs=None, + outstream=default_bytes_out_stream, + errstream=default_bytes_err_stream, + force=False, + **kwargs +): + """Remote push with dulwich via dulwich.client + + Args: + repo: Path to repository + remote_location: Location of the remote + refspecs: Refs to push to remote + outstream: A stream file to write output + errstream: A stream file to write errors + force: Force overwriting refs + """ + + # Open the repo + with open_repo_closing(repo) as r: + if refspecs is None: + refspecs = [active_branch(r)] + (remote_name, remote_location) = get_remote_repo(r, remote_location) + + # Get the client and path + client, path = get_transport_and_path( + remote_location, config=r.get_config_stack(), **kwargs + ) + + selected_refs = [] + remote_changed_refs = {} + + def update_refs(refs): + selected_refs.extend(parse_reftuples(r.refs, refs, refspecs, force=force)) + new_refs = {} + # TODO: Handle selected_refs == {None: None} + for (lh, rh, force_ref) in selected_refs: + if lh is None: + new_refs[rh] = ZERO_SHA + remote_changed_refs[rh] = None + else: + try: + localsha = r.refs[lh] + except KeyError as exc: + raise Error( + "No valid ref %s in local repository" % lh + ) from exc + if not force_ref and rh in refs: + check_diverged(r, refs[rh], localsha) + new_refs[rh] = localsha + remote_changed_refs[rh] = localsha + return new_refs + + err_encoding = getattr(errstream, "encoding", None) or DEFAULT_ENCODING + remote_location = client.get_url(path) + try: + result = client.send_pack( + path, + update_refs, + generate_pack_data=r.generate_pack_data, + progress=errstream.write, + ) + except SendPackError as exc: + raise Error( + "Push to " + remote_location + " failed -> " + exc.args[0].decode(), + ) from exc + else: + errstream.write( + b"Push to " + remote_location.encode(err_encoding) + b" successful.\n" + ) + + for ref, error in (result.ref_status or {}).items(): + if error is not None: + errstream.write( + b"Push of ref %s failed: %s\n" % (ref, error.encode(err_encoding)) + ) + else: + errstream.write(b"Ref %s updated\n" % ref) + + if remote_name is not None: + _import_remote_refs(r.refs, remote_name, remote_changed_refs) + + +def pull( + repo, + remote_location=None, + refspecs=None, + outstream=default_bytes_out_stream, + errstream=default_bytes_err_stream, + fast_forward=True, + force=False, + **kwargs +): + """Pull from remote via dulwich.client + + Args: + repo: Path to repository + remote_location: Location of the remote + refspecs: refspecs to fetch + outstream: A stream file to write to output + errstream: 
A stream file to write errors
+    """
+    # Open the repo
+    with open_repo_closing(repo) as r:
+        (remote_name, remote_location) = get_remote_repo(r, remote_location)
+
+        if refspecs is None:
+            refspecs = [b"HEAD"]
+        selected_refs = []
+
+        def determine_wants(remote_refs, **kwargs):
+            selected_refs.extend(
+                parse_reftuples(remote_refs, r.refs, refspecs, force=force)
+            )
+            return [
+                remote_refs[lh]
+                for (lh, rh, force_ref) in selected_refs
+                if remote_refs[lh] not in r.object_store
+            ]
+
+        client, path = get_transport_and_path(
+            remote_location, config=r.get_config_stack(), **kwargs
+        )
+        fetch_result = client.fetch(
+            path, r, progress=errstream.write, determine_wants=determine_wants
+        )
+        for (lh, rh, force_ref) in selected_refs:
+            if not force_ref and rh in r.refs:
+                try:
+                    check_diverged(r, r.refs.follow(rh)[1], fetch_result.refs[lh])
+                except DivergedBranches as exc:
+                    if fast_forward:
+                        raise
+                    else:
+                        raise NotImplementedError(
+                            "merge is not yet supported") from exc
+            r.refs[rh] = fetch_result.refs[lh]
+        if selected_refs:
+            r[b"HEAD"] = fetch_result.refs[selected_refs[0][1]]
+
+        # Perform 'git checkout .' - syncs staged changes
+        tree = r[b"HEAD"].tree
+        r.reset_index(tree=tree)
+        if remote_name is not None:
+            _import_remote_refs(r.refs, remote_name, fetch_result.refs)
+
+
+def status(repo=".", ignored=False, untracked_files="all"):
+    """Returns staged, unstaged, and untracked changes relative to the HEAD.
+
+    Args:
+      repo: Path to repository or repository object
+      ignored: Whether to include ignored files in untracked
+      untracked_files: How to handle untracked files, defaults to "all":
+          "no": do not return untracked files
+          "all": include all files in untracked directories
+        Using untracked_files="no" can be faster than "all" when the worktree
+          contains many untracked files/directories.
+
+    Note: untracked_files="normal" (git's default) is not implemented.
+
+    Returns: GitStatus tuple,
+        staged - dict with lists of staged paths (diff index/HEAD)
+        unstaged - list of unstaged paths (diff index/working-tree)
+        untracked - list of untracked, un-ignored & non-.git paths
+    """
+    with open_repo_closing(repo) as r:
+        # 1. Get status of staged
+        tracked_changes = get_tree_changes(r)
+        # 2. Get status of unstaged
+        index = r.open_index()
+        normalizer = r.get_blob_normalizer()
+        filter_callback = normalizer.checkin_normalize
+        unstaged_changes = list(get_unstaged_changes(index, r.path, filter_callback))
+
+        untracked_paths = get_untracked_paths(
+            r.path,
+            r.path,
+            index,
+            exclude_ignored=not ignored,
+            untracked_files=untracked_files,
+        )
+        if sys.platform == "win32":
+            untracked_changes = [
+                path.replace(os.path.sep, "/") for path in untracked_paths
+            ]
+        else:
+            untracked_changes = list(untracked_paths)
+
+        return GitStatus(tracked_changes, unstaged_changes, untracked_changes)
+
+
+def _walk_working_dir_paths(frompath, basepath, prune_dirnames=None):
+    """Get path, is_dir for files in working dir from frompath.
+
+    Args:
+      frompath: Path to begin walk
+      basepath: Path to compare to
+      prune_dirnames: Optional callback to prune dirnames during os.walk
+        dirnames will be set to result of prune_dirnames(dirpath, dirnames)
+    """
+    for dirpath, dirnames, filenames in os.walk(frompath):
+        # Skip .git and below.
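+        # `.git` is dropped from dirnames so os.walk never descends into the
+        # control directory; when the entry turns up in a directory other
+        # than basepath (i.e. a nested repository), that directory's own
+        # files are not yielded either.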
+ if ".git" in dirnames: + dirnames.remove(".git") + if dirpath != basepath: + continue + + if ".git" in filenames: + filenames.remove(".git") + if dirpath != basepath: + continue + + if dirpath != frompath: + yield dirpath, True + + for filename in filenames: + filepath = os.path.join(dirpath, filename) + yield filepath, False + + if prune_dirnames: + dirnames[:] = prune_dirnames(dirpath, dirnames) + + +def get_untracked_paths( + frompath, basepath, index, exclude_ignored=False, untracked_files="all" +): + """Get untracked paths. + + Args: + frompath: Path to walk + basepath: Path to compare to + index: Index to check against + exclude_ignored: Whether to exclude ignored paths + untracked_files: How to handle untracked files: + - "no": return an empty list + - "all": return all files in untracked directories + - "normal": Not implemented + + Note: ignored directories will never be walked for performance reasons. + If exclude_ignored is False, only the path to an ignored directory will + be yielded, no files inside the directory will be returned + """ + if untracked_files == "normal": + raise NotImplementedError("normal is not yet supported") + + if untracked_files not in ("no", "all"): + raise ValueError("untracked_files must be one of (no, all)") + + if untracked_files == "no": + return + + with open_repo_closing(basepath) as r: + ignore_manager = IgnoreFilterManager.from_repo(r) + + ignored_dirs = [] + + def prune_dirnames(dirpath, dirnames): + for i in range(len(dirnames) - 1, -1, -1): + path = os.path.join(dirpath, dirnames[i]) + ip = os.path.join(os.path.relpath(path, basepath), "") + if ignore_manager.is_ignored(ip): + if not exclude_ignored: + ignored_dirs.append( + os.path.join(os.path.relpath(path, frompath), "") + ) + del dirnames[i] + return dirnames + + for ap, is_dir in _walk_working_dir_paths( + frompath, basepath, prune_dirnames=prune_dirnames + ): + if not is_dir: + ip = path_to_tree_path(basepath, ap) + if ip not in index: + if not exclude_ignored or not ignore_manager.is_ignored( + os.path.relpath(ap, basepath) + ): + yield os.path.relpath(ap, frompath) + + yield from ignored_dirs + + +def get_tree_changes(repo): + """Return add/delete/modify changes to tree by comparing index to HEAD. + + Args: + repo: repo path or object + Returns: dict with lists for each type of change + """ + with open_repo_closing(repo) as r: + index = r.open_index() + + # Compares the Index to the HEAD & determines changes + # Iterate through the changes and report add/delete/modify + # TODO: call out to dulwich.diff_tree somehow. + tracked_changes = { + "add": [], + "delete": [], + "modify": [], + } + try: + tree_id = r[b"HEAD"].tree + except KeyError: + tree_id = None + + for change in index.changes_from_tree(r.object_store, tree_id): + if not change[0][0]: + tracked_changes["add"].append(change[0][1]) + elif not change[0][1]: + tracked_changes["delete"].append(change[0][0]) + elif change[0][0] == change[0][1]: + tracked_changes["modify"].append(change[0][0]) + else: + raise NotImplementedError("git mv ops not yet supported") + return tracked_changes + + +def daemon(path=".", address=None, port=None): + """Run a daemon serving Git requests over TCP/IP. + + Args: + path: Path to the directory to serve. + address: Optional address to listen on (defaults to ::) + port: Optional port to listen on (defaults to TCP_GIT_PORT) + """ + # TODO(jelmer): Support git-daemon-export-ok and --export-all. 
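+    # A minimal usage sketch (the path and repository name here are made up):
+    #
+    #     from dulwich import porcelain
+    #     porcelain.daemon("/srv/git", address="localhost", port=9418)
+    #
+    # after which `git clone git://localhost/myrepo` can fetch
+    # /srv/git/myrepo. serve_forever() blocks the calling thread.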
+ backend = FileSystemBackend(path) + server = TCPGitServer(backend, address, port) + server.serve_forever() + + +def web_daemon(path=".", address=None, port=None): + """Run a daemon serving Git requests over HTTP. + + Args: + path: Path to the directory to serve + address: Optional address to listen on (defaults to ::) + port: Optional port to listen on (defaults to 80) + """ + from dulwich.web import ( + make_wsgi_chain, + make_server, + WSGIRequestHandlerLogger, + WSGIServerLogger, + ) + + backend = FileSystemBackend(path) + app = make_wsgi_chain(backend) + server = make_server( + address, + port, + app, + handler_class=WSGIRequestHandlerLogger, + server_class=WSGIServerLogger, + ) + server.serve_forever() + + +def upload_pack(path=".", inf=None, outf=None): + """Upload a pack file after negotiating its contents using smart protocol. + + Args: + path: Path to the repository + inf: Input stream to communicate with client + outf: Output stream to communicate with client + """ + if outf is None: + outf = getattr(sys.stdout, "buffer", sys.stdout) + if inf is None: + inf = getattr(sys.stdin, "buffer", sys.stdin) + path = os.path.expanduser(path) + backend = FileSystemBackend(path) + + def send_fn(data): + outf.write(data) + outf.flush() + + proto = Protocol(inf.read, send_fn) + handler = UploadPackHandler(backend, [path], proto) + # FIXME: Catch exceptions and write a single-line summary to outf. + handler.handle() + return 0 + + +def receive_pack(path=".", inf=None, outf=None): + """Receive a pack file after negotiating its contents using smart protocol. + + Args: + path: Path to the repository + inf: Input stream to communicate with client + outf: Output stream to communicate with client + """ + if outf is None: + outf = getattr(sys.stdout, "buffer", sys.stdout) + if inf is None: + inf = getattr(sys.stdin, "buffer", sys.stdin) + path = os.path.expanduser(path) + backend = FileSystemBackend(path) + + def send_fn(data): + outf.write(data) + outf.flush() + + proto = Protocol(inf.read, send_fn) + handler = ReceivePackHandler(backend, [path], proto) + # FIXME: Catch exceptions and write a single-line summary to outf. + handler.handle() + return 0 + + +def _make_branch_ref(name): + if getattr(name, "encode", None): + name = name.encode(DEFAULT_ENCODING) + return LOCAL_BRANCH_PREFIX + name + + +def _make_tag_ref(name): + if getattr(name, "encode", None): + name = name.encode(DEFAULT_ENCODING) + return LOCAL_TAG_PREFIX + name + + +def branch_delete(repo, name): + """Delete a branch. + + Args: + repo: Path to the repository + name: Name of the branch + """ + with open_repo_closing(repo) as r: + if isinstance(name, list): + names = name + else: + names = [name] + for name in names: + del r.refs[_make_branch_ref(name)] + + +def branch_create(repo, name, objectish=None, force=False): + """Create a branch. + + Args: + repo: Path to the repository + name: Name of the new branch + objectish: Target object to point new branch at (defaults to HEAD) + force: Force creation of branch, even if it already exists + """ + with open_repo_closing(repo) as r: + if objectish is None: + objectish = "HEAD" + object = parse_object(r, objectish) + refname = _make_branch_ref(name) + ref_message = b"branch: Created from " + objectish.encode(DEFAULT_ENCODING) + if force: + r.refs.set_if_equals(refname, None, object.id, message=ref_message) + else: + if not r.refs.add_if_new(refname, object.id, message=ref_message): + raise Error("Branch with name %s already exists." 
% name)
+
+
+def branch_list(repo):
+    """List all branches.
+
+    Args:
+      repo: Path to the repository
+    """
+    with open_repo_closing(repo) as r:
+        return r.refs.keys(base=LOCAL_BRANCH_PREFIX)
+
+
+def active_branch(repo):
+    """Return the active branch in the repository, if any.
+
+    Args:
+      repo: Repository to open
+    Returns:
+      branch name
+    Raises:
+      KeyError: if the repository does not have a working tree
+      IndexError: if HEAD is floating
+    """
+    with open_repo_closing(repo) as r:
+        active_ref = r.refs.follow(b"HEAD")[0][1]
+        if not active_ref.startswith(LOCAL_BRANCH_PREFIX):
+            raise ValueError(active_ref)
+        return active_ref[len(LOCAL_BRANCH_PREFIX) :]
+
+
+def get_branch_remote(repo):
+    """Return the active branch's remote name, if any.
+
+    Args:
+      repo: Repository to open
+    Returns:
+      remote name
+    Raises:
+      KeyError: if the repository does not have a working tree
+    """
+    with open_repo_closing(repo) as r:
+        branch_name = active_branch(r.path)
+        config = r.get_config()
+        try:
+            remote_name = config.get((b"branch", branch_name), b"remote")
+        except KeyError:
+            remote_name = b"origin"
+    return remote_name
+
+
+def fetch(
+    repo,
+    remote_location=None,
+    outstream=sys.stdout,
+    errstream=default_bytes_err_stream,
+    message=None,
+    depth=None,
+    prune=False,
+    prune_tags=False,
+    force=False,
+    **kwargs
+):
+    """Fetch objects from a remote server.
+
+    Args:
+      repo: Path to the repository
+      remote_location: String identifying a remote server
+      outstream: Output stream (defaults to stdout)
+      errstream: Error stream (defaults to stderr)
+      message: Reflog message (defaults to b"fetch: from <remote_name>")
+      depth: Depth to fetch at
+      prune: Prune remote removed refs
+      prune_tags: Prune remote removed tags
+    Returns:
+      Dictionary with refs on the remote
+    """
+    with open_repo_closing(repo) as r:
+        (remote_name, remote_location) = get_remote_repo(r, remote_location)
+        if message is None:
+            message = b"fetch: from " + remote_location.encode(DEFAULT_ENCODING)
+        client, path = get_transport_and_path(
+            remote_location, config=r.get_config_stack(), **kwargs
+        )
+        fetch_result = client.fetch(path, r, progress=errstream.write, depth=depth)
+        if remote_name is not None:
+            _import_remote_refs(
+                r.refs,
+                remote_name,
+                fetch_result.refs,
+                message,
+                prune=prune,
+                prune_tags=prune_tags,
+            )
+    return fetch_result
+
+
+def ls_remote(remote, config: Optional[Config] = None, **kwargs):
+    """List the refs in a remote.
+
+    Args:
+      remote: Remote repository location
+      config: Configuration to use
+    Returns:
+      Dictionary with remote refs
+    """
+    if config is None:
+        config = StackedConfig.default()
+    client, host_path = get_transport_and_path(remote, config=config, **kwargs)
+    return client.get_refs(host_path)
+
+
+def repack(repo):
+    """Repack loose files in a repository.
+
+    Currently this only packs loose objects.
+
+    Args:
+      repo: Path to the repository
+    """
+    with open_repo_closing(repo) as r:
+        r.object_store.pack_loose_objects()
+
+
+def pack_objects(repo, object_ids, packf, idxf, delta_window_size=None):
+    """Pack objects into a file.
+ + Args: + repo: Path to the repository + object_ids: List of object ids to write + packf: File-like object to write to + idxf: File-like object to write to (can be None) + """ + with open_repo_closing(repo) as r: + entries, data_sum = write_pack_objects( + packf.write, + r.object_store.iter_shas((oid, None) for oid in object_ids), + delta_window_size=delta_window_size, + ) + if idxf is not None: + entries = sorted([(k, v[0], v[1]) for (k, v) in entries.items()]) + write_pack_index(idxf, entries, data_sum) + + +def ls_tree( + repo, + treeish=b"HEAD", + outstream=sys.stdout, + recursive=False, + name_only=False, +): + """List contents of a tree. + + Args: + repo: Path to the repository + treeish: Tree id to list + outstream: Output stream (defaults to stdout) + recursive: Whether to recursively list files + name_only: Only print item name + """ + + def list_tree(store, treeid, base): + for (name, mode, sha) in store[treeid].iteritems(): + if base: + name = posixpath.join(base, name) + if name_only: + outstream.write(name + b"\n") + else: + outstream.write(pretty_format_tree_entry(name, mode, sha)) + if stat.S_ISDIR(mode) and recursive: + list_tree(store, sha, name) + + with open_repo_closing(repo) as r: + tree = parse_tree(r, treeish) + list_tree(r.object_store, tree.id, "") + + +def remote_add(repo: Repo, name: Union[bytes, str], url: Union[bytes, str]): + """Add a remote. + + Args: + repo: Path to the repository + name: Remote name + url: Remote URL + """ + if not isinstance(name, bytes): + name = name.encode(DEFAULT_ENCODING) + if not isinstance(url, bytes): + url = url.encode(DEFAULT_ENCODING) + with open_repo_closing(repo) as r: + c = r.get_config() + section = (b"remote", name) + if c.has_section(section): + raise RemoteExists(section) + c.set(section, b"url", url) + c.write_to_path() + + +def remote_remove(repo: Repo, name: Union[bytes, str]): + """Remove a remote + + Args: + repo: Path to the repository + name: Remote name + """ + if not isinstance(name, bytes): + name = name.encode(DEFAULT_ENCODING) + with open_repo_closing(repo) as r: + c = r.get_config() + section = (b"remote", name) + del c[section] + c.write_to_path() + + +def check_ignore(repo, paths, no_index=False): + """Debug gitignore files. + + Args: + repo: Path to the repository + paths: List of paths to check for + no_index: Don't check index + Returns: List of ignored files + """ + with open_repo_closing(repo) as r: + index = r.open_index() + ignore_manager = IgnoreFilterManager.from_repo(r) + for path in paths: + if not no_index and path_to_tree_path(r.path, path) in index: + continue + if os.path.isabs(path): + path = os.path.relpath(path, r.path) + if ignore_manager.is_ignored(path): + yield path + + +def update_head(repo, target, detached=False, new_branch=None): + """Update HEAD to point at a new branch/commit. + + Note that this does not actually update the working tree. + + Args: + repo: Path to the repository + detached: Create a detached head + target: Branch or committish to switch to + new_branch: New branch to create + """ + with open_repo_closing(repo) as r: + if new_branch is not None: + to_set = _make_branch_ref(new_branch) + else: + to_set = b"HEAD" + if detached: + # TODO(jelmer): Provide some way so that the actual ref gets + # updated rather than what it points to, so the delete isn't + # necessary. 
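+            # Assigning a sha to a symbolic HEAD would move the branch that
+            # HEAD points at rather than HEAD itself, so the ref is deleted
+            # first and recreated below as a direct (detached) pointer to
+            # the commit.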
+ del r.refs[to_set] + r.refs[to_set] = parse_commit(r, target).id + else: + r.refs.set_symbolic_ref(to_set, parse_ref(r, target)) + if new_branch is not None: + r.refs.set_symbolic_ref(b"HEAD", to_set) + + +def reset_file(repo, file_path: str, target: bytes = b'HEAD', + symlink_fn=None): + """Reset the file to specific commit or branch. + + Args: + repo: dulwich Repo object + file_path: file to reset, relative to the repository path + target: branch or commit or b'HEAD' to reset + """ + tree = parse_tree(repo, treeish=target) + tree_path = _fs_to_tree_path(file_path) + + file_entry = tree.lookup_path(repo.object_store.__getitem__, tree_path) + full_path = os.path.join(os.fsencode(repo.path), tree_path) + blob = repo.object_store[file_entry[1]] + mode = file_entry[0] + build_file_from_blob(blob, mode, full_path, symlink_fn=symlink_fn) + + +def check_mailmap(repo, contact): + """Check canonical name and email of contact. + + Args: + repo: Path to the repository + contact: Contact name and/or email + Returns: Canonical contact data + """ + with open_repo_closing(repo) as r: + from dulwich.mailmap import Mailmap + + try: + mailmap = Mailmap.from_path(os.path.join(r.path, ".mailmap")) + except FileNotFoundError: + mailmap = Mailmap() + return mailmap.lookup(contact) + + +def fsck(repo): + """Check a repository. + + Args: + repo: A path to the repository + Returns: Iterator over errors/warnings + """ + with open_repo_closing(repo) as r: + # TODO(jelmer): check pack files + # TODO(jelmer): check graph + # TODO(jelmer): check refs + for sha in r.object_store: + o = r.object_store[sha] + try: + o.check() + except Exception as e: + yield (sha, e) + + +def stash_list(repo): + """List all stashes in a repository.""" + with open_repo_closing(repo) as r: + from dulwich.stash import Stash + + stash = Stash.from_repo(r) + return enumerate(list(stash.stashes())) + + +def stash_push(repo): + """Push a new stash onto the stack.""" + with open_repo_closing(repo) as r: + from dulwich.stash import Stash + + stash = Stash.from_repo(r) + stash.push() + + +def stash_pop(repo, index): + """Pop a stash from the stack.""" + with open_repo_closing(repo) as r: + from dulwich.stash import Stash + + stash = Stash.from_repo(r) + stash.pop(index) + + +def stash_drop(repo, index): + """Drop a stash from the stack.""" + with open_repo_closing(repo) as r: + from dulwich.stash import Stash + + stash = Stash.from_repo(r) + stash.drop(index) + + +def ls_files(repo): + """List all files in an index.""" + with open_repo_closing(repo) as r: + return sorted(r.open_index()) + + +def find_unique_abbrev(object_store, object_id): + """For now, just return 7 characters.""" + # TODO(jelmer): Add some logic here to return a number of characters that + # scales relative with the size of the repository + return object_id.decode("ascii")[:7] + + +def describe(repo): + """Describe the repository version. + + Args: + repo: git repository + Returns: a string description of the current git revision + + Examples: "gabcdefh", "v0.1" or "v0.1-5-gabcdefh". 
+ """ + # Get the repository + with open_repo_closing(repo) as r: + # Get a list of all tags + refs = r.get_refs() + tags = {} + for key, value in refs.items(): + key = key.decode() + obj = r.get_object(value) + if u"tags" not in key: + continue + + _, tag = key.rsplit(u"/", 1) + + try: + commit = obj.object + except AttributeError: + continue + else: + commit = r.get_object(commit[1]) + tags[tag] = [ + datetime.datetime(*time.gmtime(commit.commit_time)[:6]), + commit.id.decode("ascii"), + ] + + sorted_tags = sorted(tags.items(), key=lambda tag: tag[1][0], reverse=True) + + # If there are no tags, return the current commit + if len(sorted_tags) == 0: + return "g{}".format(find_unique_abbrev(r.object_store, r[r.head()].id)) + + # We're now 0 commits from the top + commit_count = 0 + + # Get the latest commit + latest_commit = r[r.head()] + + # Walk through all commits + walker = r.get_walker() + for entry in walker: + # Check if tag + commit_id = entry.commit.id.decode("ascii") + for tag in sorted_tags: + tag_name = tag[0] + tag_commit = tag[1][1] + if commit_id == tag_commit: + if commit_count == 0: + return tag_name + else: + return "{}-{}-g{}".format( + tag_name, + commit_count, + latest_commit.id.decode("ascii")[:7], + ) + + commit_count += 1 + + # Return plain commit if no parent tag can be found + return "g{}".format(latest_commit.id.decode("ascii")[:7]) + + +def get_object_by_path(repo, path, committish=None): + """Get an object by path. + + Args: + repo: A path to the repository + path: Path to look up + committish: Commit to look up path in + Returns: A `ShaFile` object + """ + if committish is None: + committish = "HEAD" + # Get the repository + with open_repo_closing(repo) as r: + commit = parse_commit(r, committish) + base_tree = commit.tree + if not isinstance(path, bytes): + path = commit_encode(commit, path) + (mode, sha) = tree_lookup_path(r.object_store.__getitem__, base_tree, path) + return r[sha] + + +def write_tree(repo): + """Write a tree object from the index. + + Args: + repo: Repository for which to write tree + Returns: tree id for the tree that was written + """ + with open_repo_closing(repo) as r: + return r.open_index().commit(r.object_store) diff --git a/env/lib/python3.8/site-packages/dulwich/protocol.py b/env/lib/python3.8/site-packages/dulwich/protocol.py new file mode 100644 index 00000000..a02ef21c --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/protocol.py @@ -0,0 +1,583 @@ +# protocol.py -- Shared parts of the git protocols +# Copyright (C) 2008 John Carr +# Copyright (C) 2008-2012 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
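+# A worked example of the pkt-line framing implemented below (see pkt_line
+# and Protocol.read_pkt_line): the 4-digit hex length prefix counts itself,
+# so pkt_line(b"hello\n") == b"000ahello\n" (6 bytes of payload + 4 == 0x0a),
+# and pkt_line(None) == b"0000", the flush-pkt that ends a section.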
+#
+
+"""Generic functions for talking the git smart server protocol."""
+
+from io import BytesIO
+from os import (
+    SEEK_END,
+)
+import socket
+
+import dulwich
+from dulwich.errors import (
+    HangupException,
+    GitProtocolError,
+)
+
+TCP_GIT_PORT = 9418
+
+ZERO_SHA = b"0" * 40
+
+SINGLE_ACK = 0
+MULTI_ACK = 1
+MULTI_ACK_DETAILED = 2
+
+# pack data
+SIDE_BAND_CHANNEL_DATA = 1
+# progress messages
+SIDE_BAND_CHANNEL_PROGRESS = 2
+# fatal error message just before stream aborts
+SIDE_BAND_CHANNEL_FATAL = 3
+
+CAPABILITY_ATOMIC = b"atomic"
+CAPABILITY_DEEPEN_SINCE = b"deepen-since"
+CAPABILITY_DEEPEN_NOT = b"deepen-not"
+CAPABILITY_DEEPEN_RELATIVE = b"deepen-relative"
+CAPABILITY_DELETE_REFS = b"delete-refs"
+CAPABILITY_INCLUDE_TAG = b"include-tag"
+CAPABILITY_MULTI_ACK = b"multi_ack"
+CAPABILITY_MULTI_ACK_DETAILED = b"multi_ack_detailed"
+CAPABILITY_NO_DONE = b"no-done"
+CAPABILITY_NO_PROGRESS = b"no-progress"
+CAPABILITY_OFS_DELTA = b"ofs-delta"
+CAPABILITY_QUIET = b"quiet"
+CAPABILITY_REPORT_STATUS = b"report-status"
+CAPABILITY_SHALLOW = b"shallow"
+CAPABILITY_SIDE_BAND = b"side-band"
+CAPABILITY_SIDE_BAND_64K = b"side-band-64k"
+CAPABILITY_THIN_PACK = b"thin-pack"
+CAPABILITY_AGENT = b"agent"
+CAPABILITY_SYMREF = b"symref"
+CAPABILITY_ALLOW_TIP_SHA1_IN_WANT = b"allow-tip-sha1-in-want"
+CAPABILITY_ALLOW_REACHABLE_SHA1_IN_WANT = b"allow-reachable-sha1-in-want"
+
+# Magic ref that is used to attach capabilities to when
+# there are no refs. Should always be set to ZERO_SHA.
+CAPABILITIES_REF = b"capabilities^{}"
+
+COMMON_CAPABILITIES = [
+    CAPABILITY_OFS_DELTA,
+    CAPABILITY_SIDE_BAND,
+    CAPABILITY_SIDE_BAND_64K,
+    CAPABILITY_AGENT,
+    CAPABILITY_NO_PROGRESS,
+]
+KNOWN_UPLOAD_CAPABILITIES = set(
+    COMMON_CAPABILITIES
+    + [
+        CAPABILITY_THIN_PACK,
+        CAPABILITY_MULTI_ACK,
+        CAPABILITY_MULTI_ACK_DETAILED,
+        CAPABILITY_INCLUDE_TAG,
+        CAPABILITY_DEEPEN_SINCE,
+        CAPABILITY_SYMREF,
+        CAPABILITY_SHALLOW,
+        CAPABILITY_DEEPEN_NOT,
+        CAPABILITY_DEEPEN_RELATIVE,
+        CAPABILITY_ALLOW_TIP_SHA1_IN_WANT,
+        CAPABILITY_ALLOW_REACHABLE_SHA1_IN_WANT,
+    ]
+)
+KNOWN_RECEIVE_CAPABILITIES = set(
+    COMMON_CAPABILITIES
+    + [
+        CAPABILITY_REPORT_STATUS,
+        CAPABILITY_DELETE_REFS,
+        CAPABILITY_QUIET,
+        CAPABILITY_ATOMIC,
+    ]
+)
+
+DEPTH_INFINITE = 0x7FFFFFFF
+
+NAK_LINE = b"NAK\n"
+
+
+def agent_string():
+    return ("dulwich/%d.%d.%d" % dulwich.__version__).encode("ascii")
+
+
+def capability_agent():
+    return CAPABILITY_AGENT + b"=" + agent_string()
+
+
+def capability_symref(from_ref, to_ref):
+    return CAPABILITY_SYMREF + b"=" + from_ref + b":" + to_ref
+
+
+def extract_capability_names(capabilities):
+    return {parse_capability(c)[0] for c in capabilities}
+
+
+def parse_capability(capability):
+    parts = capability.split(b"=", 1)
+    if len(parts) == 1:
+        return (parts[0], None)
+    return tuple(parts)
+
+
+def symref_capabilities(symrefs):
+    return [capability_symref(*k) for k in symrefs]
+
+
+COMMAND_DEEPEN = b"deepen"
+COMMAND_SHALLOW = b"shallow"
+COMMAND_UNSHALLOW = b"unshallow"
+COMMAND_DONE = b"done"
+COMMAND_WANT = b"want"
+COMMAND_HAVE = b"have"
+
+
+def format_cmd_pkt(cmd, *args):
+    return cmd + b" " + b"".join([(a + b"\0") for a in args])
+
+
+def parse_cmd_pkt(line):
+    splice_at = line.find(b" ")
+    cmd, args = line[:splice_at], line[splice_at + 1 :]
+    assert args[-1:] == b"\x00"
+    return cmd, args[:-1].split(b"\0")
+
+
+def pkt_line(data):
+    """Wrap data in a pkt-line.
+
+    Args:
+      data: The data to wrap, as a str or None.
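The capability helpers above are pure byte-string manipulations, so they can be exercised in isolation; the values in the comments below follow directly from the definitions above:

```python
from dulwich.protocol import (
    extract_capability_names,
    format_cmd_pkt,
    parse_capability,
)

print(parse_capability(b"agent=git/2.40.0"))   # (b'agent', b'git/2.40.0')
print(parse_capability(b"thin-pack"))          # (b'thin-pack', None)
print(extract_capability_names([b"agent=git/2.40.0", b"thin-pack"]))
# {b'agent', b'thin-pack'}
print(format_cmd_pkt(b"git-upload-pack", b"/project.git"))
# b'git-upload-pack /project.git\x00'
```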
+ Returns: The data prefixed with its length in pkt-line format; if data was + None, returns the flush-pkt ('0000'). + """ + if data is None: + return b"0000" + return ("%04x" % (len(data) + 4)).encode("ascii") + data + + +class Protocol(object): + """Class for interacting with a remote git process over the wire. + + Parts of the git wire protocol use 'pkt-lines' to communicate. A pkt-line + consists of the length of the line as a 4-byte hex string, followed by the + payload data. The length includes the 4-byte header. The special line + '0000' indicates the end of a section of input and is called a 'flush-pkt'. + + For details on the pkt-line format, see the cgit distribution: + Documentation/technical/protocol-common.txt + """ + + def __init__(self, read, write, close=None, report_activity=None): + self.read = read + self.write = write + self._close = close + self.report_activity = report_activity + self._readahead = None + + def close(self): + if self._close: + self._close() + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + self.close() + + def read_pkt_line(self): + """Reads a pkt-line from the remote git process. + + This method may read from the readahead buffer; see unread_pkt_line. + + Returns: The next string from the stream, without the length prefix, or + None for a flush-pkt ('0000'). + """ + if self._readahead is None: + read = self.read + else: + read = self._readahead.read + self._readahead = None + + try: + sizestr = read(4) + if not sizestr: + raise HangupException() + size = int(sizestr, 16) + if size == 0: + if self.report_activity: + self.report_activity(4, "read") + return None + if self.report_activity: + self.report_activity(size, "read") + pkt_contents = read(size - 4) + except ConnectionResetError as exc: + raise HangupException() from exc + except socket.error as exc: + raise GitProtocolError(exc) from exc + else: + if len(pkt_contents) + 4 != size: + raise GitProtocolError( + "Length of pkt read %04x does not match length prefix %04x" + % (len(pkt_contents) + 4, size) + ) + return pkt_contents + + def eof(self): + """Test whether the protocol stream has reached EOF. + + Note that this refers to the actual stream EOF and not just a + flush-pkt. + + Returns: True if the stream is at EOF, False otherwise. + """ + try: + next_line = self.read_pkt_line() + except HangupException: + return True + self.unread_pkt_line(next_line) + return False + + def unread_pkt_line(self, data): + """Unread a single line of data into the readahead buffer. + + This method can be used to unread a single pkt-line into a fixed + readahead buffer. + + Args: + data: The data to unread, without the length prefix. + Raises: + ValueError: If more than one pkt-line is unread. + """ + if self._readahead is not None: + raise ValueError("Attempted to unread multiple pkt-lines.") + self._readahead = BytesIO(pkt_line(data)) + + def read_pkt_seq(self): + """Read a sequence of pkt-lines from the remote git process. + + Returns: Yields each line of data up to but not including the next + flush-pkt. + """ + pkt = self.read_pkt_line() + while pkt: + yield pkt + pkt = self.read_pkt_line() + + def write_pkt_line(self, line): + """Sends a pkt-line to the remote git process. + + Args: + line: A string containing the data to send, without the length + prefix. 
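A self-contained sketch of the pkt-line framing described in the `Protocol` docstring, driving `read_pkt_seq()` from an in-memory stream rather than a socket:

```python
from io import BytesIO
from dulwich.protocol import Protocol, pkt_line

# Two pkt-lines terminated by a flush-pkt ('0000').
wire = pkt_line(b"hello\n") + pkt_line(b"world\n") + b"0000"
proto = Protocol(BytesIO(wire).read, None)  # read-only; no write callback needed
print(list(proto.read_pkt_seq()))           # [b'hello\n', b'world\n']
```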
+ """ + try: + line = pkt_line(line) + self.write(line) + if self.report_activity: + self.report_activity(len(line), "write") + except socket.error as exc: + raise GitProtocolError(exc) from exc + + def write_sideband(self, channel, blob): + """Write multiplexed data to the sideband. + + Args: + channel: An int specifying the channel to write to. + blob: A blob of data (as a string) to send on this channel. + """ + # a pktline can be a max of 65520. a sideband line can therefore be + # 65520-5 = 65515 + # WTF: Why have the len in ASCII, but the channel in binary. + while blob: + self.write_pkt_line(bytes(bytearray([channel])) + blob[:65515]) + blob = blob[65515:] + + def send_cmd(self, cmd, *args): + """Send a command and some arguments to a git server. + + Only used for the TCP git protocol (git://). + + Args: + cmd: The remote service to access. + args: List of arguments to send to remove service. + """ + self.write_pkt_line(format_cmd_pkt(cmd, *args)) + + def read_cmd(self): + """Read a command and some arguments from the git client + + Only used for the TCP git protocol (git://). + + Returns: A tuple of (command, [list of arguments]). + """ + line = self.read_pkt_line() + return parse_cmd_pkt(line) + + +_RBUFSIZE = 8192 # Default read buffer size. + + +class ReceivableProtocol(Protocol): + """Variant of Protocol that allows reading up to a size without blocking. + + This class has a recv() method that behaves like socket.recv() in addition + to a read() method. + + If you want to read n bytes from the wire and block until exactly n bytes + (or EOF) are read, use read(n). If you want to read at most n bytes from + the wire but don't care if you get less, use recv(n). Note that recv(n) + will still block until at least one byte is read. + """ + + def __init__( + self, recv, write, close=None, report_activity=None, rbufsize=_RBUFSIZE + ): + super(ReceivableProtocol, self).__init__( + self.read, write, close=close, report_activity=report_activity + ) + self._recv = recv + self._rbuf = BytesIO() + self._rbufsize = rbufsize + + def read(self, size): + # From _fileobj.read in socket.py in the Python 2.6.5 standard library, + # with the following modifications: + # - omit the size <= 0 branch + # - seek back to start rather than 0 in case some buffer has been + # consumed. + # - use SEEK_END instead of the magic number. + # Copyright (c) 2001-2010 Python Software Foundation; All Rights + # Reserved + # Licensed under the Python Software Foundation License. + # TODO: see if buffer is more efficient than cBytesIO. + assert size > 0 + + # Our use of BytesIO rather than lists of string objects returned by + # recv() minimizes memory usage and fragmentation that occurs when + # rbufsize is large compared to the typical return value of recv(). + buf = self._rbuf + start = buf.tell() + buf.seek(0, SEEK_END) + # buffer may have been partially consumed by recv() + buf_len = buf.tell() - start + if buf_len >= size: + # Already have size bytes in our buffer? Extract and return. + buf.seek(start) + rv = buf.read(size) + self._rbuf = BytesIO() + self._rbuf.write(buf.read()) + self._rbuf.seek(0) + return rv + + self._rbuf = BytesIO() # reset _rbuf. we consume it via buf. + while True: + left = size - buf_len + # recv() will malloc the amount of memory given as its + # parameter even though it often returns much less data + # than that. The returned data string is short lived + # as we copy it into a BytesIO and free it. This avoids + # fragmentation issues on many platforms. 
+ data = self._recv(left) + if not data: + break + n = len(data) + if n == size and not buf_len: + # Shortcut. Avoid buffer data copies when: + # - We have no data in our buffer. + # AND + # - Our call to recv returned exactly the + # number of bytes we were asked to read. + return data + if n == left: + buf.write(data) + del data # explicit free + break + assert n <= left, "_recv(%d) returned %d bytes" % (left, n) + buf.write(data) + buf_len += n + del data # explicit free + # assert buf_len == buf.tell() + buf.seek(start) + return buf.read() + + def recv(self, size): + assert size > 0 + + buf = self._rbuf + start = buf.tell() + buf.seek(0, SEEK_END) + buf_len = buf.tell() + buf.seek(start) + + left = buf_len - start + if not left: + # only read from the wire if our read buffer is exhausted + data = self._recv(self._rbufsize) + if len(data) == size: + # shortcut: skip the buffer if we read exactly size bytes + return data + buf = BytesIO() + buf.write(data) + buf.seek(0) + del data # explicit free + self._rbuf = buf + return buf.read(size) + + +def extract_capabilities(text): + """Extract a capabilities list from a string, if present. + + Args: + text: String to extract from + Returns: Tuple with text with capabilities removed and list of capabilities + """ + if b"\0" not in text: + return text, [] + text, capabilities = text.rstrip().split(b"\0") + return (text, capabilities.strip().split(b" ")) + + +def extract_want_line_capabilities(text): + """Extract a capabilities list from a want line, if present. + + Note that want lines have capabilities separated from the rest of the line + by a space instead of a null byte. Thus want lines have the form: + + want obj-id cap1 cap2 ... + + Args: + text: Want line to extract from + Returns: Tuple with text with capabilities removed and list of capabilities + """ + split_text = text.rstrip().split(b" ") + if len(split_text) < 3: + return text, [] + return (b" ".join(split_text[:2]), split_text[2:]) + + +def ack_type(capabilities): + """Extract the ack type from a capabilities list.""" + if b"multi_ack_detailed" in capabilities: + return MULTI_ACK_DETAILED + elif b"multi_ack" in capabilities: + return MULTI_ACK + return SINGLE_ACK + + +class BufferedPktLineWriter(object): + """Writer that wraps its data in pkt-lines and has an independent buffer. + + Consecutive calls to write() wrap the data in a pkt-line and then buffers + it until enough lines have been written such that their total length + (including length prefix) reach the buffer size. + """ + + def __init__(self, write, bufsize=65515): + """Initialize the BufferedPktLineWriter. + + Args: + write: A write callback for the underlying writer. + bufsize: The internal buffer size, including length prefixes. 
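The first line of a ref advertisement carries capabilities after a NUL byte, which is exactly what `extract_capabilities()` splits on; the 40-byte SHA below is a placeholder:

```python
from dulwich.protocol import MULTI_ACK_DETAILED, ack_type, extract_capabilities

advert = b"a" * 40 + b" HEAD\x00multi_ack_detailed side-band-64k"
text, caps = extract_capabilities(advert)
print(text)   # b'aaaa...aaaa HEAD'
print(caps)   # [b'multi_ack_detailed', b'side-band-64k']
assert ack_type(caps) == MULTI_ACK_DETAILED
```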
+ """ + self._write = write + self._bufsize = bufsize + self._wbuf = BytesIO() + self._buflen = 0 + + def write(self, data): + """Write data, wrapping it in a pkt-line.""" + line = pkt_line(data) + line_len = len(line) + over = self._buflen + line_len - self._bufsize + if over >= 0: + start = line_len - over + self._wbuf.write(line[:start]) + self.flush() + else: + start = 0 + saved = line[start:] + self._wbuf.write(saved) + self._buflen += len(saved) + + def flush(self): + """Flush all data from the buffer.""" + data = self._wbuf.getvalue() + if data: + self._write(data) + self._len = 0 + self._wbuf = BytesIO() + + +class PktLineParser(object): + """Packet line parser that hands completed packets off to a callback.""" + + def __init__(self, handle_pkt): + self.handle_pkt = handle_pkt + self._readahead = BytesIO() + + def parse(self, data): + """Parse a fragment of data and call back for any completed packets.""" + self._readahead.write(data) + buf = self._readahead.getvalue() + if len(buf) < 4: + return + while len(buf) >= 4: + size = int(buf[:4], 16) + if size == 0: + self.handle_pkt(None) + buf = buf[4:] + elif size <= len(buf): + self.handle_pkt(buf[4:size]) + buf = buf[size:] + else: + break + self._readahead = BytesIO() + self._readahead.write(buf) + + def get_tail(self): + """Read back any unused data.""" + return self._readahead.getvalue() + + +def format_capability_line(capabilities): + return b"".join([b" " + c for c in capabilities]) + + +def format_ref_line(ref, sha, capabilities=None): + if capabilities is None: + return sha + b" " + ref + b"\n" + else: + return ( + sha + b" " + ref + b"\0" + + format_capability_line(capabilities) + + b"\n") + + +def format_shallow_line(sha): + return COMMAND_SHALLOW + b" " + sha + + +def format_unshallow_line(sha): + return COMMAND_UNSHALLOW + b" " + sha + + +def format_ack_line(sha, ack_type=b""): + if ack_type: + ack_type = b" " + ack_type + return b"ACK " + sha + ack_type + b"\n" diff --git a/env/lib/python3.8/site-packages/dulwich/py.typed b/env/lib/python3.8/site-packages/dulwich/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/dulwich/reflog.py b/env/lib/python3.8/site-packages/dulwich/reflog.py new file mode 100644 index 00000000..0574ddc3 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/reflog.py @@ -0,0 +1,154 @@ +# reflog.py -- Parsing and writing reflog files +# Copyright (C) 2015 Jelmer Vernooij and others. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Utilities for reading and generating reflogs. 
+""" + +import collections + +from dulwich.objects import ( + format_timezone, + parse_timezone, + ZERO_SHA, +) + +Entry = collections.namedtuple( + "Entry", + ["old_sha", "new_sha", "committer", "timestamp", "timezone", "message"], +) + + +def format_reflog_line(old_sha, new_sha, committer, timestamp, timezone, message): + """Generate a single reflog line. + + Args: + old_sha: Old Commit SHA + new_sha: New Commit SHA + committer: Committer name and e-mail + timestamp: Timestamp + timezone: Timezone + message: Message + """ + if old_sha is None: + old_sha = ZERO_SHA + return ( + old_sha + + b" " + + new_sha + + b" " + + committer + + b" " + + str(int(timestamp)).encode("ascii") + + b" " + + format_timezone(timezone) + + b"\t" + + message + ) + + +def parse_reflog_line(line): + """Parse a reflog line. + + Args: + line: Line to parse + Returns: Tuple of (old_sha, new_sha, committer, timestamp, timezone, + message) + """ + (begin, message) = line.split(b"\t", 1) + (old_sha, new_sha, rest) = begin.split(b" ", 2) + (committer, timestamp_str, timezone_str) = rest.rsplit(b" ", 2) + return Entry( + old_sha, + new_sha, + committer, + int(timestamp_str), + parse_timezone(timezone_str)[0], + message, + ) + + +def read_reflog(f): + """Read reflog. + + Args: + f: File-like object + Returns: Iterator over Entry objects + """ + for line in f: + yield parse_reflog_line(line) + + +def drop_reflog_entry(f, index, rewrite=False): + """Drop the specified reflog entry. + + Args: + f: File-like object + index: Reflog entry index (in Git reflog reverse 0-indexed order) + rewrite: If a reflog entry's predecessor is removed, set its + old SHA to the new SHA of the entry that now precedes it + """ + if index < 0: + raise ValueError("Invalid reflog index %d" % index) + + log = [] + offset = f.tell() + for line in f: + log.append((offset, parse_reflog_line(line))) + offset = f.tell() + + inverse_index = len(log) - index - 1 + write_offset = log[inverse_index][0] + f.seek(write_offset) + + if index == 0: + f.truncate() + return + + del log[inverse_index] + if rewrite and index > 0 and log: + if inverse_index == 0: + previous_new = ZERO_SHA + else: + previous_new = log[inverse_index - 1][1].new_sha + offset, entry = log[inverse_index] + log[inverse_index] = ( + offset, + Entry( + previous_new, + entry.new_sha, + entry.committer, + entry.timestamp, + entry.timezone, + entry.message, + ), + ) + + for _, entry in log[inverse_index:]: + f.write( + format_reflog_line( + entry.old_sha, + entry.new_sha, + entry.committer, + entry.timestamp, + entry.timezone, + entry.message, + ) + ) + f.truncate() diff --git a/env/lib/python3.8/site-packages/dulwich/refs.py b/env/lib/python3.8/site-packages/dulwich/refs.py new file mode 100644 index 00000000..91159b95 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/refs.py @@ -0,0 +1,1222 @@ +# refs.py -- For dealing with git refs +# Copyright (C) 2008-2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+
+"""Ref handling.
+
+"""
+import os
+from typing import Dict, Optional
+
+from dulwich.errors import (
+    PackedRefsException,
+    RefFormatError,
+)
+from dulwich.objects import (
+    git_line,
+    valid_hexsha,
+    ZERO_SHA,
+    Tag,
+    ObjectID,
+)
+from dulwich.file import (
+    GitFile,
+    ensure_dir_exists,
+)
+
+Ref = bytes
+
+HEADREF = b"HEAD"
+SYMREF = b"ref: "
+LOCAL_BRANCH_PREFIX = b"refs/heads/"
+LOCAL_TAG_PREFIX = b"refs/tags/"
+BAD_REF_CHARS = set(b"\177 ~^:?*[")
+ANNOTATED_TAG_SUFFIX = b"^{}"
+
+
+class SymrefLoop(Exception):
+    """There is a loop between one or more symrefs."""
+
+    def __init__(self, ref, depth):
+        self.ref = ref
+        self.depth = depth
+
+
+def parse_symref_value(contents):
+    """Parse a symref value.
+
+    Args:
+      contents: Contents to parse
+    Returns: Destination
+    """
+    if contents.startswith(SYMREF):
+        return contents[len(SYMREF) :].rstrip(b"\r\n")
+    raise ValueError(contents)
+
+
+def check_ref_format(refname: Ref):
+    """Check if a refname is correctly formatted.
+
+    Implements all the same rules as git-check-ref-format[1].
+
+    [1]
+    http://www.kernel.org/pub/software/scm/git/docs/git-check-ref-format.html
+
+    Args:
+      refname: The refname to check
+    Returns: True if refname is valid, False otherwise
+    """
+    # These could be combined into one big expression, but are listed
+    # separately to parallel [1].
+    if b"/." in refname or refname.startswith(b"."):
+        return False
+    if b"/" not in refname:
+        return False
+    if b".." in refname:
+        return False
+    for i, c in enumerate(refname):
+        if ord(refname[i : i + 1]) < 0o40 or c in BAD_REF_CHARS:
+            return False
+    if refname[-1] in b"/.":
+        return False
+    if refname.endswith(b".lock"):
+        return False
+    if b"@{" in refname:
+        return False
+    if b"\\" in refname:
+        return False
+    return True
+
+
+class RefsContainer(object):
+    """A container for refs."""
+
+    def __init__(self, logger=None):
+        self._logger = logger
+
+    def _log(
+        self,
+        ref,
+        old_sha,
+        new_sha,
+        committer=None,
+        timestamp=None,
+        timezone=None,
+        message=None,
+    ):
+        if self._logger is None:
+            return
+        if message is None:
+            return
+        self._logger(ref, old_sha, new_sha, committer, timestamp, timezone, message)
+
+    def set_symbolic_ref(
+        self,
+        name,
+        other,
+        committer=None,
+        timestamp=None,
+        timezone=None,
+        message=None,
+    ):
+        """Make a ref point at another ref.
+
+        Args:
+          name: Name of the ref to set
+          other: Name of the ref to point at
+          message: Optional message
+        """
+        raise NotImplementedError(self.set_symbolic_ref)
+
+    def get_packed_refs(self):
+        """Get contents of the packed-refs file.
+
+        Returns: Dictionary mapping ref names to SHA1s
+
+        Note: Will return an empty dictionary when no packed-refs file is
+          present.
+        """
+        raise NotImplementedError(self.get_packed_refs)
+
+    def get_peeled(self, name):
+        """Return the cached peeled value of a ref, if available.
+
+        Args:
+          name: Name of the ref to peel
+        Returns: The peeled value of the ref. If the ref is known not to point
+          to a tag, this will be the SHA the ref refers to. If the ref may
+          point to a tag, but no cached information is available, None is
+          returned.
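A few checks that follow directly from the rules in `check_ref_format()` (note that it expects names without the leading `refs/`):

```python
from dulwich.refs import check_ref_format

assert check_ref_format(b"heads/main")            # valid branch-style name
assert not check_ref_format(b"main")              # needs at least one "/"
assert not check_ref_format(b"heads/a..b")        # ".." is forbidden
assert not check_ref_format(b"heads/main.lock")   # ".lock" suffix is forbidden
assert not check_ref_format(b"heads/a b")         # space is in BAD_REF_CHARS
```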
+ """ + return None + + def import_refs( + self, + base: Ref, + other: Dict[Ref, ObjectID], + committer: Optional[bytes] = None, + timestamp: Optional[bytes] = None, + timezone: Optional[bytes] = None, + message: Optional[bytes] = None, + prune: bool = False, + ): + if prune: + to_delete = set(self.subkeys(base)) + else: + to_delete = set() + for name, value in other.items(): + if value is None: + to_delete.add(name) + else: + self.set_if_equals( + b"/".join((base, name)), None, value, message=message + ) + if to_delete: + try: + to_delete.remove(name) + except KeyError: + pass + for ref in to_delete: + self.remove_if_equals(b"/".join((base, ref)), None, message=message) + + def allkeys(self): + """All refs present in this container.""" + raise NotImplementedError(self.allkeys) + + def __iter__(self): + return iter(self.allkeys()) + + def keys(self, base=None): + """Refs present in this container. + + Args: + base: An optional base to return refs under. + Returns: An unsorted set of valid refs in this container, including + packed refs. + """ + if base is not None: + return self.subkeys(base) + else: + return self.allkeys() + + def subkeys(self, base): + """Refs present in this container under a base. + + Args: + base: The base to return refs under. + Returns: A set of valid refs in this container under the base; the base + prefix is stripped from the ref names returned. + """ + keys = set() + base_len = len(base) + 1 + for refname in self.allkeys(): + if refname.startswith(base): + keys.add(refname[base_len:]) + return keys + + def as_dict(self, base=None): + """Return the contents of this container as a dictionary.""" + ret = {} + keys = self.keys(base) + if base is None: + base = b"" + else: + base = base.rstrip(b"/") + for key in keys: + try: + ret[key] = self[(base + b"/" + key).strip(b"/")] + except (SymrefLoop, KeyError): + continue # Unable to resolve + + return ret + + def _check_refname(self, name): + """Ensure a refname is valid and lives in refs or is HEAD. + + HEAD is not a valid refname according to git-check-ref-format, but this + class needs to be able to touch HEAD. Also, check_ref_format expects + refnames without the leading 'refs/', but this class requires that + so it cannot touch anything outside the refs dir (or HEAD). + + Args: + name: The name of the reference. + Raises: + KeyError: if a refname is not HEAD or is otherwise not valid. + """ + if name in (HEADREF, b"refs/stash"): + return + if not name.startswith(b"refs/") or not check_ref_format(name[5:]): + raise RefFormatError(name) + + def read_ref(self, refname): + """Read a reference without following any references. + + Args: + refname: The name of the reference + Returns: The contents of the ref file, or None if it does + not exist. + """ + contents = self.read_loose_ref(refname) + if not contents: + contents = self.get_packed_refs().get(refname, None) + return contents + + def read_loose_ref(self, name): + """Read a loose reference and return its contents. + + Args: + name: the refname to read + Returns: The contents of the ref file, or None if it does + not exist. + """ + raise NotImplementedError(self.read_loose_ref) + + def follow(self, name): + """Follow a reference name. 
+
+        Returns: a tuple of (refnames, sha), where refnames are the names of
+          references in the chain
+        """
+        contents = SYMREF + name
+        depth = 0
+        refnames = []
+        while contents.startswith(SYMREF):
+            refname = contents[len(SYMREF) :]
+            refnames.append(refname)
+            contents = self.read_ref(refname)
+            if not contents:
+                break
+            depth += 1
+            if depth > 5:
+                raise SymrefLoop(name, depth)
+        return refnames, contents
+
+    def __contains__(self, refname):
+        if self.read_ref(refname):
+            return True
+        return False
+
+    def __getitem__(self, name):
+        """Get the SHA1 for a reference name.
+
+        This method follows all symbolic references.
+        """
+        _, sha = self.follow(name)
+        if sha is None:
+            raise KeyError(name)
+        return sha
+
+    def set_if_equals(
+        self,
+        name,
+        old_ref,
+        new_ref,
+        committer=None,
+        timestamp=None,
+        timezone=None,
+        message=None,
+    ):
+        """Set a refname to new_ref only if it currently equals old_ref.
+
+        This method follows all symbolic references if applicable for the
+        subclass, and can be used to perform an atomic compare-and-swap
+        operation.
+
+        Args:
+          name: The refname to set.
+          old_ref: The old sha the refname must refer to, or None to set
+            unconditionally.
+          new_ref: The new sha the refname will refer to.
+          message: Message for reflog
+        Returns: True if the set was successful, False otherwise.
+        """
+        raise NotImplementedError(self.set_if_equals)
+
+    def add_if_new(self, name, ref, committer=None, timestamp=None,
+                   timezone=None, message=None):
+        """Add a new reference only if it does not already exist.
+
+        Args:
+          name: Ref name
+          ref: Ref value
+        """
+        raise NotImplementedError(self.add_if_new)
+
+    def __setitem__(self, name, ref):
+        """Set a reference name to point to the given SHA1.
+
+        This method follows all symbolic references if applicable for the
+        subclass.
+
+        Note: This method unconditionally overwrites the contents of a
+          reference. To update atomically only if the reference has not
+          changed, use set_if_equals().
+
+        Args:
+          name: The refname to set.
+          ref: The new sha the refname will refer to.
+        """
+        self.set_if_equals(name, None, ref)
+
+    def remove_if_equals(
+        self,
+        name,
+        old_ref,
+        committer=None,
+        timestamp=None,
+        timezone=None,
+        message=None,
+    ):
+        """Remove a refname only if it currently equals old_ref.
+
+        This method does not follow symbolic references, even if applicable for
+        the subclass. It can be used to perform an atomic compare-and-delete
+        operation.
+
+        Args:
+          name: The refname to delete.
+          old_ref: The old sha the refname must refer to, or None to
+            delete unconditionally.
+          message: Message for reflog
+        Returns: True if the delete was successful, False otherwise.
+        """
+        raise NotImplementedError(self.remove_if_equals)
+
+    def __delitem__(self, name):
+        """Remove a refname.
+
+        This method does not follow symbolic references, even if applicable for
+        the subclass.
+
+        Note: This method unconditionally deletes the contents of a reference.
+          To delete atomically only if the reference has not changed, use
+          remove_if_equals().
+
+        Args:
+          name: The refname to delete.
+        """
+        self.remove_if_equals(name, None)
+
+    def get_symrefs(self):
+        """Get a dict with all symrefs in this container.
+
+        Returns: Dictionary mapping source ref to target ref
+        """
+        ret = {}
+        for src in self.allkeys():
+            try:
+                dst = parse_symref_value(self.read_ref(src))
+            except ValueError:
+                pass
+            else:
+                ret[src] = dst
+        return ret
+
+
+class DictRefsContainer(RefsContainer):
+    """RefsContainer backed by a simple dict.
+ + This container does not support symbolic or packed references and is not + threadsafe. + """ + + def __init__(self, refs, logger=None): + super(DictRefsContainer, self).__init__(logger=logger) + self._refs = refs + self._peeled = {} + self._watchers = set() + + def allkeys(self): + return self._refs.keys() + + def read_loose_ref(self, name): + return self._refs.get(name, None) + + def get_packed_refs(self): + return {} + + def _notify(self, ref, newsha): + for watcher in self._watchers: + watcher._notify((ref, newsha)) + + def set_symbolic_ref( + self, + name: Ref, + other: Ref, + committer=None, + timestamp=None, + timezone=None, + message=None, + ): + old = self.follow(name)[-1] + new = SYMREF + other + self._refs[name] = new + self._notify(name, new) + self._log( + name, + old, + new, + committer=committer, + timestamp=timestamp, + timezone=timezone, + message=message, + ) + + def set_if_equals( + self, + name, + old_ref, + new_ref, + committer=None, + timestamp=None, + timezone=None, + message=None, + ): + if old_ref is not None and self._refs.get(name, ZERO_SHA) != old_ref: + return False + realnames, _ = self.follow(name) + for realname in realnames: + self._check_refname(realname) + old = self._refs.get(realname) + self._refs[realname] = new_ref + self._notify(realname, new_ref) + self._log( + realname, + old, + new_ref, + committer=committer, + timestamp=timestamp, + timezone=timezone, + message=message, + ) + return True + + def add_if_new( + self, + name: Ref, + ref: ObjectID, + committer=None, + timestamp=None, + timezone=None, + message: Optional[bytes] = None, + ): + if name in self._refs: + return False + self._refs[name] = ref + self._notify(name, ref) + self._log( + name, + None, + ref, + committer=committer, + timestamp=timestamp, + timezone=timezone, + message=message, + ) + return True + + def remove_if_equals( + self, + name, + old_ref, + committer=None, + timestamp=None, + timezone=None, + message=None, + ): + if old_ref is not None and self._refs.get(name, ZERO_SHA) != old_ref: + return False + try: + old = self._refs.pop(name) + except KeyError: + pass + else: + self._notify(name, None) + self._log( + name, + old, + None, + committer=committer, + timestamp=timestamp, + timezone=timezone, + message=message, + ) + return True + + def get_peeled(self, name): + return self._peeled.get(name) + + def _update(self, refs): + """Update multiple refs; intended only for testing.""" + # TODO(dborowitz): replace this with a public function that uses + # set_if_equal. 
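Since `DictRefsContainer` stores symrefs as `b"ref: ..."` strings in its backing dict, the follow/symref machinery can be exercised without touching disk; the SHA below is a placeholder:

```python
from dulwich.refs import DictRefsContainer

refs = DictRefsContainer({})
sha = b"a" * 40                                    # placeholder object id
refs[b"refs/heads/main"] = sha                     # unconditional set
refs.set_symbolic_ref(b"HEAD", b"refs/heads/main")
print(refs[b"HEAD"] == sha)    # True; lookup follows the symref
print(refs.follow(b"HEAD")[0]) # [b'HEAD', b'refs/heads/main']
print(refs.get_symrefs())      # {b'HEAD': b'refs/heads/main'}
```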
+ for ref, sha in refs.items(): + self.set_if_equals(ref, None, sha) + + def _update_peeled(self, peeled): + """Update cached peeled refs; intended only for testing.""" + self._peeled.update(peeled) + + +class InfoRefsContainer(RefsContainer): + """Refs container that reads refs from a info/refs file.""" + + def __init__(self, f): + self._refs = {} + self._peeled = {} + for line in f.readlines(): + sha, name = line.rstrip(b"\n").split(b"\t") + if name.endswith(ANNOTATED_TAG_SUFFIX): + name = name[:-3] + if not check_ref_format(name): + raise ValueError("invalid ref name %r" % name) + self._peeled[name] = sha + else: + if not check_ref_format(name): + raise ValueError("invalid ref name %r" % name) + self._refs[name] = sha + + def allkeys(self): + return self._refs.keys() + + def read_loose_ref(self, name): + return self._refs.get(name, None) + + def get_packed_refs(self): + return {} + + def get_peeled(self, name): + try: + return self._peeled[name] + except KeyError: + return self._refs[name] + + +class DiskRefsContainer(RefsContainer): + """Refs container that reads refs from disk.""" + + def __init__(self, path, worktree_path=None, logger=None): + super(DiskRefsContainer, self).__init__(logger=logger) + if getattr(path, "encode", None) is not None: + path = os.fsencode(path) + self.path = path + if worktree_path is None: + worktree_path = path + if getattr(worktree_path, "encode", None) is not None: + worktree_path = os.fsencode(worktree_path) + self.worktree_path = worktree_path + self._packed_refs = None + self._peeled_refs = None + + def __repr__(self): + return "%s(%r)" % (self.__class__.__name__, self.path) + + def subkeys(self, base): + subkeys = set() + path = self.refpath(base) + for root, unused_dirs, files in os.walk(path): + dir = root[len(path) :] + if os.path.sep != "/": + dir = dir.replace(os.fsencode(os.path.sep), b"/") + dir = dir.strip(b"/") + for filename in files: + refname = b"/".join(([dir] if dir else []) + [filename]) + # check_ref_format requires at least one /, so we prepend the + # base before calling it. + if check_ref_format(base + b"/" + refname): + subkeys.add(refname) + for key in self.get_packed_refs(): + if key.startswith(base): + subkeys.add(key[len(base) :].strip(b"/")) + return subkeys + + def allkeys(self): + allkeys = set() + if os.path.exists(self.refpath(HEADREF)): + allkeys.add(HEADREF) + path = self.refpath(b"") + refspath = self.refpath(b"refs") + for root, unused_dirs, files in os.walk(refspath): + dir = root[len(path) :] + if os.path.sep != "/": + dir = dir.replace(os.fsencode(os.path.sep), b"/") + for filename in files: + refname = b"/".join([dir, filename]) + if check_ref_format(refname): + allkeys.add(refname) + allkeys.update(self.get_packed_refs()) + return allkeys + + def refpath(self, name): + """Return the disk path of a ref.""" + if os.path.sep != "/": + name = name.replace(b"/", os.fsencode(os.path.sep)) + # TODO: as the 'HEAD' reference is working tree specific, it + # should actually not be a part of RefsContainer + if name == HEADREF: + return os.path.join(self.worktree_path, name) + else: + return os.path.join(self.path, name) + + def get_packed_refs(self): + """Get contents of the packed-refs file. + + Returns: Dictionary mapping ref names to SHA1s + + Note: Will return an empty dictionary when no packed-refs file is + present. + """ + # TODO: invalidate the cache on repacking + if self._packed_refs is None: + # set both to empty because we want _peeled_refs to be + # None if and only if _packed_refs is also None. 
+            self._packed_refs = {}
+            self._peeled_refs = {}
+            path = os.path.join(self.path, b"packed-refs")
+            try:
+                f = GitFile(path, "rb")
+            except FileNotFoundError:
+                return {}
+            with f:
+                first_line = next(iter(f)).rstrip()
+                if first_line.startswith(b"# pack-refs") and b" peeled" in first_line:
+                    for sha, name, peeled in read_packed_refs_with_peeled(f):
+                        self._packed_refs[name] = sha
+                        if peeled:
+                            self._peeled_refs[name] = peeled
+                else:
+                    f.seek(0)
+                    for sha, name in read_packed_refs(f):
+                        self._packed_refs[name] = sha
+        return self._packed_refs
+
+    def get_peeled(self, name):
+        """Return the cached peeled value of a ref, if available.
+
+        Args:
+          name: Name of the ref to peel
+        Returns: The peeled value of the ref. If the ref is known not to point
+          to a tag, this will be the SHA the ref refers to. If the ref may
+          point to a tag, but no cached information is available, None is
+          returned.
+        """
+        self.get_packed_refs()
+        if self._peeled_refs is None or name not in self._packed_refs:
+            # No cache: no peeled refs were read, or this ref is loose
+            return None
+        if name in self._peeled_refs:
+            return self._peeled_refs[name]
+        else:
+            # Known not peelable
+            return self[name]
+
+    def read_loose_ref(self, name):
+        """Read a reference file and return its contents.
+
+        If the reference file is a symbolic reference, only read the first
+        line of the file. Otherwise, only read the first 40 bytes.
+
+        Args:
+          name: the refname to read, relative to refpath
+        Returns: The contents of the ref file, or None if the file does not
+          exist.
+        Raises:
+          IOError: if any other error occurs
+        """
+        filename = self.refpath(name)
+        try:
+            with GitFile(filename, "rb") as f:
+                header = f.read(len(SYMREF))
+                if header == SYMREF:
+                    # Read only the first line
+                    return header + next(iter(f)).rstrip(b"\r\n")
+                else:
+                    # Read only the first 40 bytes
+                    return header + f.read(40 - len(SYMREF))
+        except (FileNotFoundError, IsADirectoryError, NotADirectoryError):
+            return None
+
+    def _remove_packed_ref(self, name):
+        if self._packed_refs is None:
+            return
+        filename = os.path.join(self.path, b"packed-refs")
+        # reread cached refs from disk, while holding the lock
+        f = GitFile(filename, "wb")
+        try:
+            self._packed_refs = None
+            self.get_packed_refs()
+
+            if name not in self._packed_refs:
+                return
+
+            del self._packed_refs[name]
+            if name in self._peeled_refs:
+                del self._peeled_refs[name]
+            write_packed_refs(f, self._packed_refs, self._peeled_refs)
+            f.close()
+        finally:
+            f.abort()
+
+    def set_symbolic_ref(
+        self,
+        name,
+        other,
+        committer=None,
+        timestamp=None,
+        timezone=None,
+        message=None,
+    ):
+        """Make a ref point at another ref.
+
+        Args:
+          name: Name of the ref to set
+          other: Name of the ref to point at
+          message: Optional message to describe the change
+        """
+        self._check_refname(name)
+        self._check_refname(other)
+        filename = self.refpath(name)
+        f = GitFile(filename, "wb")
+        try:
+            f.write(SYMREF + other + b"\n")
+            sha = self.follow(name)[-1]
+            self._log(
+                name,
+                sha,
+                sha,
+                committer=committer,
+                timestamp=timestamp,
+                timezone=timezone,
+                message=message,
+            )
+        except BaseException:
+            f.abort()
+            raise
+        else:
+            f.close()
+
+    def set_if_equals(
+        self,
+        name,
+        old_ref,
+        new_ref,
+        committer=None,
+        timestamp=None,
+        timezone=None,
+        message=None,
+    ):
+        """Set a refname to new_ref only if it currently equals old_ref.
+
+        This method follows all symbolic references, and can be used to perform
+        an atomic compare-and-swap operation.
+
+        Args:
+          name: The refname to set.
+ old_ref: The old sha the refname must refer to, or None to set + unconditionally. + new_ref: The new sha the refname will refer to. + message: Set message for reflog + Returns: True if the set was successful, False otherwise. + """ + self._check_refname(name) + try: + realnames, _ = self.follow(name) + realname = realnames[-1] + except (KeyError, IndexError, SymrefLoop): + realname = name + filename = self.refpath(realname) + + # make sure none of the ancestor folders is in packed refs + probe_ref = os.path.dirname(realname) + packed_refs = self.get_packed_refs() + while probe_ref: + if packed_refs.get(probe_ref, None) is not None: + raise NotADirectoryError(filename) + probe_ref = os.path.dirname(probe_ref) + + ensure_dir_exists(os.path.dirname(filename)) + with GitFile(filename, "wb") as f: + if old_ref is not None: + try: + # read again while holding the lock + orig_ref = self.read_loose_ref(realname) + if orig_ref is None: + orig_ref = self.get_packed_refs().get(realname, ZERO_SHA) + if orig_ref != old_ref: + f.abort() + return False + except (OSError, IOError): + f.abort() + raise + try: + f.write(new_ref + b"\n") + except (OSError, IOError): + f.abort() + raise + self._log( + realname, + old_ref, + new_ref, + committer=committer, + timestamp=timestamp, + timezone=timezone, + message=message, + ) + return True + + def add_if_new( + self, + name: bytes, + ref: bytes, + committer=None, + timestamp=None, + timezone=None, + message: Optional[bytes] = None, + ): + """Add a new reference only if it does not already exist. + + This method follows symrefs, and only ensures that the last ref in the + chain does not exist. + + Args: + name: The refname to set. + ref: The new sha the refname will refer to. + message: Optional message for reflog + Returns: True if the add was successful, False otherwise. + """ + try: + realnames, contents = self.follow(name) + if contents is not None: + return False + realname = realnames[-1] + except (KeyError, IndexError): + realname = name + self._check_refname(realname) + filename = self.refpath(realname) + ensure_dir_exists(os.path.dirname(filename)) + with GitFile(filename, "wb") as f: + if os.path.exists(filename) or name in self.get_packed_refs(): + f.abort() + return False + try: + f.write(ref + b"\n") + except (OSError, IOError): + f.abort() + raise + else: + self._log( + name, + None, + ref, + committer=committer, + timestamp=timestamp, + timezone=timezone, + message=message, + ) + return True + + def remove_if_equals( + self, + name, + old_ref, + committer=None, + timestamp=None, + timezone=None, + message=None, + ): + """Remove a refname only if it currently equals old_ref. + + This method does not follow symbolic references. It can be used to + perform an atomic compare-and-delete operation. + + Args: + name: The refname to delete. + old_ref: The old sha the refname must refer to, or None to + delete unconditionally. + message: Optional message + Returns: True if the delete was successful, False otherwise. 
+ """ + self._check_refname(name) + filename = self.refpath(name) + ensure_dir_exists(os.path.dirname(filename)) + f = GitFile(filename, "wb") + try: + if old_ref is not None: + orig_ref = self.read_loose_ref(name) + if orig_ref is None: + orig_ref = self.get_packed_refs().get(name, ZERO_SHA) + if orig_ref != old_ref: + return False + + # remove the reference file itself + try: + os.remove(filename) + except FileNotFoundError: + pass # may only be packed + + self._remove_packed_ref(name) + self._log( + name, + old_ref, + None, + committer=committer, + timestamp=timestamp, + timezone=timezone, + message=message, + ) + finally: + # never write, we just wanted the lock + f.abort() + + # outside of the lock, clean-up any parent directory that might now + # be empty. this ensures that re-creating a reference of the same + # name of what was previously a directory works as expected + parent = name + while True: + try: + parent, _ = parent.rsplit(b"/", 1) + except ValueError: + break + + if parent == b'refs': + break + parent_filename = self.refpath(parent) + try: + os.rmdir(parent_filename) + except OSError: + # this can be caused by the parent directory being + # removed by another process, being not empty, etc. + # in any case, this is non fatal because we already + # removed the reference, just ignore it + break + + return True + + +def _split_ref_line(line): + """Split a single ref line into a tuple of SHA1 and name.""" + fields = line.rstrip(b"\n\r").split(b" ") + if len(fields) != 2: + raise PackedRefsException("invalid ref line %r" % line) + sha, name = fields + if not valid_hexsha(sha): + raise PackedRefsException("Invalid hex sha %r" % sha) + if not check_ref_format(name): + raise PackedRefsException("invalid ref name %r" % name) + return (sha, name) + + +def read_packed_refs(f): + """Read a packed refs file. + + Args: + f: file-like object to read from + Returns: Iterator over tuples with SHA1s and ref names. + """ + for line in f: + if line.startswith(b"#"): + # Comment + continue + if line.startswith(b"^"): + raise PackedRefsException("found peeled ref in packed-refs without peeled") + yield _split_ref_line(line) + + +def read_packed_refs_with_peeled(f): + """Read a packed refs file including peeled refs. + + Assumes the "# pack-refs with: peeled" line was already read. Yields tuples + with ref names, SHA1s, and peeled SHA1s (or None). + + Args: + f: file-like object to read from, seek'ed to the second line + """ + last = None + for line in f: + if line[0] == b"#": + continue + line = line.rstrip(b"\r\n") + if line.startswith(b"^"): + if not last: + raise PackedRefsException("unexpected peeled ref line") + if not valid_hexsha(line[1:]): + raise PackedRefsException("Invalid hex sha %r" % line[1:]) + sha, name = _split_ref_line(last) + last = None + yield (sha, name, line[1:]) + else: + if last: + sha, name = _split_ref_line(last) + yield (sha, name, None) + last = line + if last: + sha, name = _split_ref_line(last) + yield (sha, name, None) + + +def write_packed_refs(f, packed_refs, peeled_refs=None): + """Write a packed refs file. 
+ + Args: + f: empty file-like object to write to + packed_refs: dict of refname to sha of packed refs to write + peeled_refs: dict of refname to peeled value of sha + """ + if peeled_refs is None: + peeled_refs = {} + else: + f.write(b"# pack-refs with: peeled\n") + for refname in sorted(packed_refs.keys()): + f.write(git_line(packed_refs[refname], refname)) + if refname in peeled_refs: + f.write(b"^" + peeled_refs[refname] + b"\n") + + +def read_info_refs(f): + ret = {} + for line in f.readlines(): + (sha, name) = line.rstrip(b"\r\n").split(b"\t", 1) + ret[name] = sha + return ret + + +def write_info_refs(refs, store): + """Generate info refs.""" + for name, sha in sorted(refs.items()): + # get_refs() includes HEAD as a special case, but we don't want to + # advertise it + if name == HEADREF: + continue + try: + o = store[sha] + except KeyError: + continue + peeled = store.peel_sha(sha) + yield o.id + b"\t" + name + b"\n" + if o.id != peeled.id: + yield peeled.id + b"\t" + name + ANNOTATED_TAG_SUFFIX + b"\n" + + +def is_local_branch(x): + return x.startswith(LOCAL_BRANCH_PREFIX) + + +def strip_peeled_refs(refs): + """Remove all peeled refs""" + return { + ref: sha + for (ref, sha) in refs.items() + if not ref.endswith(ANNOTATED_TAG_SUFFIX) + } + + +def _set_origin_head(refs, origin, origin_head): + # set refs/remotes/origin/HEAD + origin_base = b"refs/remotes/" + origin + b"/" + if origin_head and origin_head.startswith(LOCAL_BRANCH_PREFIX): + origin_ref = origin_base + HEADREF + target_ref = origin_base + origin_head[len(LOCAL_BRANCH_PREFIX) :] + if target_ref in refs: + refs.set_symbolic_ref(origin_ref, target_ref) + + +def _set_default_branch( + refs: RefsContainer, origin: bytes, origin_head: bytes, branch: bytes, + ref_message: Optional[bytes]) -> bytes: + """Set the default branch. 
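A round trip through the packed-refs writer and reader above; the tag and peeled SHAs are placeholders:

```python
from io import BytesIO
from dulwich.refs import read_packed_refs_with_peeled, write_packed_refs

buf = BytesIO()
write_packed_refs(
    buf,
    {b"refs/tags/v1.0": b"1" * 40},   # placeholder tag object SHA
    {b"refs/tags/v1.0": b"2" * 40},   # placeholder peeled (commit) SHA
)
buf.seek(0)
print(buf.readline())   # b'# pack-refs with: peeled\n', consumed before parsing
for sha, name, peeled in read_packed_refs_with_peeled(buf):
    print(sha, name, peeled)
```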
+ """ + origin_base = b"refs/remotes/" + origin + b"/" + if branch: + origin_ref = origin_base + branch + if origin_ref in refs: + local_ref = LOCAL_BRANCH_PREFIX + branch + refs.add_if_new( + local_ref, refs[origin_ref], ref_message + ) + head_ref = local_ref + elif LOCAL_TAG_PREFIX + branch in refs: + head_ref = LOCAL_TAG_PREFIX + branch + else: + raise ValueError( + "%r is not a valid branch or tag" % os.fsencode(branch) + ) + elif origin_head: + head_ref = origin_head + if origin_head.startswith(LOCAL_BRANCH_PREFIX): + origin_ref = origin_base + origin_head[len(LOCAL_BRANCH_PREFIX) :] + else: + origin_ref = origin_head + try: + refs.add_if_new( + head_ref, refs[origin_ref], ref_message + ) + except KeyError: + pass + else: + raise ValueError('neither origin_head nor branch are provided') + return head_ref + + +def _set_head(refs, head_ref, ref_message): + if head_ref.startswith(LOCAL_TAG_PREFIX): + # detach HEAD at specified tag + head = refs[head_ref] + if isinstance(head, Tag): + _cls, obj = head.object + head = obj.get_object(obj).id + del refs[HEADREF] + refs.set_if_equals( + HEADREF, None, head, message=ref_message + ) + else: + # set HEAD to specific branch + try: + head = refs[head_ref] + refs.set_symbolic_ref(HEADREF, head_ref) + refs.set_if_equals(HEADREF, None, head, message=ref_message) + except KeyError: + head = None + return head + + +def _import_remote_refs( + refs_container: RefsContainer, + remote_name: str, + refs: Dict[str, str], + message: Optional[bytes] = None, + prune: bool = False, + prune_tags: bool = False, +): + stripped_refs = strip_peeled_refs(refs) + branches = { + n[len(LOCAL_BRANCH_PREFIX) :]: v + for (n, v) in stripped_refs.items() + if n.startswith(LOCAL_BRANCH_PREFIX) + } + refs_container.import_refs( + b"refs/remotes/" + remote_name.encode(), + branches, + message=message, + prune=prune, + ) + tags = { + n[len(LOCAL_TAG_PREFIX) :]: v + for (n, v) in stripped_refs.items() + if n.startswith(LOCAL_TAG_PREFIX) and not n.endswith(ANNOTATED_TAG_SUFFIX) + } + refs_container.import_refs(LOCAL_TAG_PREFIX, tags, message=message, prune=prune_tags) diff --git a/env/lib/python3.8/site-packages/dulwich/repo.py b/env/lib/python3.8/site-packages/dulwich/repo.py new file mode 100644 index 00000000..b0a2e020 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/repo.py @@ -0,0 +1,1836 @@ +# repo.py -- For dealing with git repositories. +# Copyright (C) 2007 James Westby +# Copyright (C) 2008-2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + + +"""Repository access. + +This module contains the base class for git repositories +(BaseRepo) and an implementation which uses a repository on +local disk (Repo). 
+
+"""
+
+from io import BytesIO
+import os
+import sys
+import stat
+import time
+from typing import (
+    Optional,
+    BinaryIO,
+    Callable,
+    Tuple,
+    TYPE_CHECKING,
+    List,
+    Dict,
+    Union,
+    Iterable,
+    Set
+)
+
+if TYPE_CHECKING:
+    # There are no circular imports here, but we try to defer imports as long
+    # as possible to reduce start-up time for anything that doesn't need
+    # these imports.
+    from dulwich.config import StackedConfig, ConfigFile
+    from dulwich.index import Index
+
+from dulwich.errors import (
+    NoIndexPresent,
+    NotBlobError,
+    NotCommitError,
+    NotGitRepository,
+    NotTreeError,
+    NotTagError,
+    CommitError,
+    RefFormatError,
+    HookError,
+)
+from dulwich.file import (
+    GitFile,
+)
+from dulwich.object_store import (
+    DiskObjectStore,
+    MemoryObjectStore,
+    BaseObjectStore,
+    ObjectStoreGraphWalker,
+)
+from dulwich.objects import (
+    check_hexsha,
+    valid_hexsha,
+    Blob,
+    Commit,
+    ShaFile,
+    Tag,
+    Tree,
+    ObjectID,
+)
+from dulwich.pack import (
+    pack_objects_to_data,
+)
+
+from dulwich.hooks import (
+    Hook,
+    PreCommitShellHook,
+    PostCommitShellHook,
+    CommitMsgShellHook,
+    PostReceiveShellHook,
+)
+
+from dulwich.line_ending import BlobNormalizer, TreeBlobNormalizer
+
+from dulwich.refs import (  # noqa: F401
+    Ref,
+    ANNOTATED_TAG_SUFFIX,
+    LOCAL_BRANCH_PREFIX,
+    LOCAL_TAG_PREFIX,
+    check_ref_format,
+    RefsContainer,
+    DictRefsContainer,
+    InfoRefsContainer,
+    DiskRefsContainer,
+    read_packed_refs,
+    read_packed_refs_with_peeled,
+    write_packed_refs,
+    SYMREF,
+    _set_default_branch,
+    _set_head,
+    _set_origin_head,
+)
+
+
+import warnings
+
+
+CONTROLDIR = ".git"
+OBJECTDIR = "objects"
+REFSDIR = "refs"
+REFSDIR_TAGS = "tags"
+REFSDIR_HEADS = "heads"
+INDEX_FILENAME = "index"
+COMMONDIR = "commondir"
+GITDIR = "gitdir"
+WORKTREES = "worktrees"
+
+BASE_DIRECTORIES = [
+    ["branches"],
+    [REFSDIR],
+    [REFSDIR, REFSDIR_TAGS],
+    [REFSDIR, REFSDIR_HEADS],
+    ["hooks"],
+    ["info"],
+]
+
+DEFAULT_REF = b"refs/heads/master"
+
+
+class InvalidUserIdentity(Exception):
+    """User identity is not of the format 'user <email>'"""
+
+    def __init__(self, identity):
+        self.identity = identity
+
+
+# TODO(jelmer): Cache?
+def _get_default_identity() -> Tuple[str, str]:
+    import socket
+
+    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
+        username = os.environ.get(name)
+        if username:
+            break
+    else:
+        username = None
+
+    try:
+        import pwd
+    except ImportError:
+        fullname = None
+    else:
+        try:
+            entry = pwd.getpwuid(os.getuid())  # type: ignore
+        except KeyError:
+            fullname = None
+        else:
+            if getattr(entry, 'gecos', None):
+                fullname = entry.pw_gecos.split(",")[0]
+            else:
+                fullname = None
+            if username is None:
+                username = entry.pw_name
+    if not fullname:
+        fullname = username
+    email = os.environ.get("EMAIL")
+    if email is None:
+        email = "{}@{}".format(username, socket.gethostname())
+    return (fullname, email)  # type: ignore
+
+
+def get_user_identity(config: "StackedConfig", kind: Optional[str] = None) -> bytes:
+    """Determine the identity to use for new commits.
+
+    If kind is set, this first checks
+    GIT_${KIND}_NAME and GIT_${KIND}_EMAIL.
+
+    If those variables are not set, then it will fall back
+    to reading the user.name and user.email settings from
+    the specified configuration.
+
+    If that also fails, then it will fall back to using
+    the current user's identity as obtained from the host
+    system (e.g. the gecos field, $EMAIL, $USER@$(hostname -f)).
+
+    Args:
+      kind: Optional kind to return identity for,
+        usually either "AUTHOR" or "COMMITTER".
+
+    Returns:
+      A user identity
+    """
+    user = None  # type: Optional[bytes]
+    email = None  # type: Optional[bytes]
+    if kind:
+        user_uc = os.environ.get("GIT_" + kind + "_NAME")
+        if user_uc is not None:
+            user = user_uc.encode("utf-8")
+        email_uc = os.environ.get("GIT_" + kind + "_EMAIL")
+        if email_uc is not None:
+            email = email_uc.encode("utf-8")
+    if user is None:
+        try:
+            user = config.get(("user",), "name")
+        except KeyError:
+            user = None
+    if email is None:
+        try:
+            email = config.get(("user",), "email")
+        except KeyError:
+            email = None
+    default_user, default_email = _get_default_identity()
+    if user is None:
+        user = default_user.encode("utf-8")
+    if email is None:
+        email = default_email.encode("utf-8")
+    if email.startswith(b"<") and email.endswith(b">"):
+        email = email[1:-1]
+    return user + b" <" + email + b">"
+
+
+def check_user_identity(identity):
+    """Verify that a user identity is formatted correctly.
+
+    Args:
+      identity: User identity bytestring
+    Raises:
+      InvalidUserIdentity: Raised when identity is invalid
+    """
+    try:
+        fst, snd = identity.split(b" <", 1)
+    except ValueError as exc:
+        raise InvalidUserIdentity(identity) from exc
+    if b">" not in snd:
+        raise InvalidUserIdentity(identity)
+    if b'\0' in identity or b'\n' in identity:
+        raise InvalidUserIdentity(identity)
+
+
+def parse_graftpoints(
+    graftpoints: Iterable[bytes],
+) -> Dict[bytes, List[bytes]]:
+    """Convert a list of graftpoints into a dict
+
+    Args:
+      graftpoints: Iterator of graftpoint lines
+
+    Each line is formatted as:
+        <commit sha1> <parent sha1> [<parent sha1>]*
+
+    Resulting dictionary is:
+        <commit sha1>: [<parent sha1>*]
+
+    https://git.wiki.kernel.org/index.php/GraftPoint
+    """
+    grafts = {}
+    for line in graftpoints:
+        raw_graft = line.split(None, 1)
+
+        commit = raw_graft[0]
+        if len(raw_graft) == 2:
+            parents = raw_graft[1].split()
+        else:
+            parents = []
+
+        for sha in [commit] + parents:
+            check_hexsha(sha, "Invalid graftpoint")
+
+        grafts[commit] = parents
+    return grafts
+
+
+def serialize_graftpoints(graftpoints: Dict[bytes, List[bytes]]) -> bytes:
+    """Convert a dictionary of grafts into string
+
+    The graft dictionary is:
+        <commit sha1>: [<parent sha1>*]
+
+    Each line is formatted as:
+        <commit sha1> <parent sha1> [<parent sha1>]*
+
+    https://git.wiki.kernel.org/index.php/GraftPoint
+
+    """
+    graft_lines = []
+    for commit, parents in graftpoints.items():
+        if parents:
+            graft_lines.append(commit + b" " + b" ".join(parents))
+        else:
+            graft_lines.append(commit)
+    return b"\n".join(graft_lines)
+
+
+def _set_filesystem_hidden(path):
+    """Mark path as to be hidden if supported by platform and filesystem.
+
+    On win32 uses SetFileAttributesW api:
+    <https://docs.microsoft.com/en-us/windows/desktop/api/fileapi/nf-fileapi-setfileattributesw>
+    """
+    if sys.platform == "win32":
+        import ctypes
+        from ctypes.wintypes import BOOL, DWORD, LPCWSTR
+
+        FILE_ATTRIBUTE_HIDDEN = 2
+        SetFileAttributesW = ctypes.WINFUNCTYPE(BOOL, LPCWSTR, DWORD)(
+            ("SetFileAttributesW", ctypes.windll.kernel32)
+        )
+
+        if isinstance(path, bytes):
+            path = os.fsdecode(path)
+        if not SetFileAttributesW(path, FILE_ATTRIBUTE_HIDDEN):
+            pass  # Could raise or log `ctypes.WinError()` here
+
+    # Could implement other platform specific filesystem hiding here
+
+
+class ParentsProvider(object):
+    def __init__(self, store, grafts={}, shallows=[]):
+        self.store = store
+        self.grafts = grafts
+        self.shallows = set(shallows)
+
+    def get_parents(self, commit_id, commit=None):
+        try:
+            return self.grafts[commit_id]
+        except KeyError:
+            pass
+        if commit_id in self.shallows:
+            return []
+        if commit is None:
+            commit = self.store[commit_id]
+        return commit.parents
+
+
+class BaseRepo(object):
+    """Base class for a git repository.
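The identity and graftpoint helpers above can be exercised directly; the SHAs and e-mail address below are placeholders:

```python
from dulwich.repo import (
    check_user_identity,
    parse_graftpoints,
    serialize_graftpoints,
)

check_user_identity(b"Jane Doe <jane@example.com>")  # passes silently when valid
lines = [b"a" * 40 + b" " + b"b" * 40]               # placeholder commit/parent SHAs
grafts = parse_graftpoints(lines)
print(grafts)                                        # {b'aaa...': [b'bbb...']}
assert serialize_graftpoints(grafts) == lines[0]
```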
+ + This base class is meant to be used for Repository implementations that e.g. + work on top of a different transport than a standard filesystem path. + + Attributes: + object_store: Dictionary-like object for accessing + the objects + refs: Dictionary-like object with the refs in this + repository + """ + + def __init__(self, object_store: BaseObjectStore, refs: RefsContainer): + """Open a repository. + + This shouldn't be called directly, but rather through one of the + base classes, such as MemoryRepo or Repo. + + Args: + object_store: Object store to use + refs: Refs container to use + """ + self.object_store = object_store + self.refs = refs + + self._graftpoints = {} # type: Dict[bytes, List[bytes]] + self.hooks = {} # type: Dict[str, Hook] + + def _determine_file_mode(self) -> bool: + """Probe the file-system to determine whether permissions can be trusted. + + Returns: True if permissions can be trusted, False otherwise. + """ + raise NotImplementedError(self._determine_file_mode) + + def _init_files(self, bare: bool) -> None: + """Initialize a default set of named files.""" + from dulwich.config import ConfigFile + + self._put_named_file("description", b"Unnamed repository") + f = BytesIO() + cf = ConfigFile() + cf.set("core", "repositoryformatversion", "0") + if self._determine_file_mode(): + cf.set("core", "filemode", True) + else: + cf.set("core", "filemode", False) + + cf.set("core", "bare", bare) + cf.set("core", "logallrefupdates", True) + cf.write_to_file(f) + self._put_named_file("config", f.getvalue()) + self._put_named_file(os.path.join("info", "exclude"), b"") + + def get_named_file(self, path: str) -> Optional[BinaryIO]: + """Get a file from the control dir with a specific name. + + Although the filename should be interpreted as a filename relative to + the control dir in a disk-based Repo, the object returned need not be + pointing to a file in that location. + + Args: + path: The path to the file, relative to the control dir. + Returns: An open file object, or None if the file does not exist. + """ + raise NotImplementedError(self.get_named_file) + + def _put_named_file(self, path: str, contents: bytes): + """Write a file to the control dir with the given name and contents. + + Args: + path: The path to the file, relative to the control dir. + contents: A string to write to the file. + """ + raise NotImplementedError(self._put_named_file) + + def _del_named_file(self, path: str): + """Delete a file in the control directory with the given name.""" + raise NotImplementedError(self._del_named_file) + + def open_index(self) -> "Index": + """Open the index for this repository. + + Raises: + NoIndexPresent: If no index is present + Returns: The matching `Index` + """ + raise NotImplementedError(self.open_index) + + def fetch(self, target, determine_wants=None, progress=None, depth=None): + """Fetch objects into another repository. + + Args: + target: The target repository + determine_wants: Optional function to determine what refs to + fetch. 
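+                It receives the refs dict of this repository and returns a
+                list of wanted SHA1s; when omitted, it defaults to
+                target.object_store.determine_wants_all below.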
+ progress: Optional progress function + depth: Optional shallow fetch depth + Returns: The local refs + """ + if determine_wants is None: + determine_wants = target.object_store.determine_wants_all + count, pack_data = self.fetch_pack_data( + determine_wants, + target.get_graph_walker(), + progress=progress, + depth=depth, + ) + target.object_store.add_pack_data(count, pack_data, progress) + return self.get_refs() + + def fetch_pack_data( + self, + determine_wants, + graph_walker, + progress, + get_tagged=None, + depth=None, + ): + """Fetch the pack data required for a set of revisions. + + Args: + determine_wants: Function that takes a dictionary with heads + and returns the list of heads to fetch. + graph_walker: Object that can iterate over the list of revisions + to fetch and has an "ack" method that will be called to acknowledge + that a revision is present. + progress: Simple progress function that will be called with + updated progress strings. + get_tagged: Function that returns a dict of pointed-to sha -> + tag sha for including tags. + depth: Shallow fetch depth + Returns: count and iterator over pack data + """ + # TODO(jelmer): Fetch pack data directly, don't create objects first. + objects = self.fetch_objects( + determine_wants, graph_walker, progress, get_tagged, depth=depth + ) + return pack_objects_to_data(objects) + + def fetch_objects( + self, + determine_wants, + graph_walker, + progress, + get_tagged=None, + depth=None, + ): + """Fetch the missing objects required for a set of revisions. + + Args: + determine_wants: Function that takes a dictionary with heads + and returns the list of heads to fetch. + graph_walker: Object that can iterate over the list of revisions + to fetch and has an "ack" method that will be called to acknowledge + that a revision is present. + progress: Simple progress function that will be called with + updated progress strings. + get_tagged: Function that returns a dict of pointed-to sha -> + tag sha for including tags. + depth: Shallow fetch depth + Returns: iterator over objects, with __len__ implemented + """ + if depth not in (None, 0): + raise NotImplementedError("depth not supported yet") + + refs = {} + for ref, sha in self.get_refs().items(): + try: + obj = self.object_store[sha] + except KeyError: + warnings.warn( + "ref %s points at non-present sha %s" + % (ref.decode("utf-8", "replace"), sha.decode("ascii")), + UserWarning, + ) + continue + else: + if isinstance(obj, Tag): + refs[ref + ANNOTATED_TAG_SUFFIX] = obj.object[1] + refs[ref] = sha + + wants = determine_wants(refs) + if not isinstance(wants, list): + raise TypeError("determine_wants() did not return a list") + + shallows = getattr(graph_walker, "shallow", frozenset()) + unshallows = getattr(graph_walker, "unshallow", frozenset()) + + if wants == []: + # TODO(dborowitz): find a way to short-circuit that doesn't change + # this interface. + + if shallows or unshallows: + # Do not send a pack in shallow short-circuit path + return None + + return [] + + # If the graph walker is set up with an implementation that can + # ACK/NAK to the wire, it will write data to the client through + # this call as a side-effect. + haves = self.object_store.find_common_revisions(graph_walker) + + # Deal with shallow requests separately because the haves do + # not reflect what objects are missing + if shallows or unshallows: + # TODO: filter the haves commits from iter_shas. the specific + # commits aren't missing. 
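+            # Sending no haves means the client may be re-sent objects it
+            # already has, but it keeps the resulting pack complete with
+            # respect to the shallow boundary.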
+ haves = [] + + parents_provider = ParentsProvider(self.object_store, shallows=shallows) + + def get_parents(commit): + return parents_provider.get_parents(commit.id, commit) + + return self.object_store.iter_shas( + self.object_store.find_missing_objects( + haves, + wants, + self.get_shallow(), + progress, + get_tagged, + get_parents=get_parents, + ) + ) + + def generate_pack_data(self, have: List[ObjectID], want: List[ObjectID], + progress: Optional[Callable[[str], None]] = None, + ofs_delta: Optional[bool] = None): + """Generate pack data objects for a set of wants/haves. + + Args: + have: List of SHA1s of objects that should not be sent + want: List of SHA1s of objects that should be sent + ofs_delta: Whether OFS deltas can be included + progress: Optional progress reporting method + """ + return self.object_store.generate_pack_data( + have, + want, + shallow=self.get_shallow(), + progress=progress, + ofs_delta=ofs_delta, + ) + + def get_graph_walker( + self, heads: List[ObjectID] = None) -> ObjectStoreGraphWalker: + """Retrieve a graph walker. + + A graph walker is used by a remote repository (or proxy) + to find out which objects are present in this repository. + + Args: + heads: Repository heads to use (optional) + Returns: A graph walker object + """ + if heads is None: + heads = [ + sha + for sha in self.refs.as_dict(b"refs/heads").values() + if sha in self.object_store + ] + parents_provider = ParentsProvider(self.object_store) + return ObjectStoreGraphWalker( + heads, parents_provider.get_parents, shallow=self.get_shallow() + ) + + def get_refs(self) -> Dict[bytes, bytes]: + """Get dictionary with all refs. + + Returns: A ``dict`` mapping ref names to SHA1s + """ + return self.refs.as_dict() + + def head(self) -> bytes: + """Return the SHA1 pointed at by HEAD.""" + return self.refs[b"HEAD"] + + def _get_object(self, sha, cls): + assert len(sha) in (20, 40) + ret = self.get_object(sha) + if not isinstance(ret, cls): + if cls is Commit: + raise NotCommitError(ret) + elif cls is Blob: + raise NotBlobError(ret) + elif cls is Tree: + raise NotTreeError(ret) + elif cls is Tag: + raise NotTagError(ret) + else: + raise Exception( + "Type invalid: %r != %r" % (ret.type_name, cls.type_name) + ) + return ret + + def get_object(self, sha: bytes) -> ShaFile: + """Retrieve the object with the specified SHA. + + Args: + sha: SHA to retrieve + Returns: A ShaFile object + Raises: + KeyError: when the object can not be found + """ + return self.object_store[sha] + + def parents_provider(self) -> ParentsProvider: + return ParentsProvider( + self.object_store, + grafts=self._graftpoints, + shallows=self.get_shallow(), + ) + + def get_parents(self, sha: bytes, commit: Commit = None) -> List[bytes]: + """Retrieve the parents of a specific commit. + + If the specific commit is a graftpoint, the graft parents + will be returned instead. + + Args: + sha: SHA of the commit for which to retrieve the parents + commit: Optional commit matching the sha + Returns: List of parents + """ + return self.parents_provider().get_parents(sha, commit) + + def get_config(self) -> "ConfigFile": + """Retrieve the config object. + + Returns: `ConfigFile` object for the ``.git/config`` file. + """ + raise NotImplementedError(self.get_config) + + def get_description(self): + """Retrieve the description for this repository. + + Returns: String with the description of the repository + as set by the user. 
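+                Concrete implementations may return None when no
+                description has been set.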
+ """ + raise NotImplementedError(self.get_description) + + def set_description(self, description): + """Set the description for this repository. + + Args: + description: Text to set as description for this repository. + """ + raise NotImplementedError(self.set_description) + + def get_config_stack(self) -> "StackedConfig": + """Return a config stack for this repository. + + This stack accesses the configuration for both this repository + itself (.git/config) and the global configuration, which usually + lives in ~/.gitconfig. + + Returns: `Config` instance for this repository + """ + from dulwich.config import StackedConfig + + backends = [self.get_config()] + StackedConfig.default_backends() + return StackedConfig(backends, writable=backends[0]) + + def get_shallow(self) -> Set[ObjectID]: + """Get the set of shallow commits. + + Returns: Set of shallow commits. + """ + f = self.get_named_file("shallow") + if f is None: + return set() + with f: + return {line.strip() for line in f} + + def update_shallow(self, new_shallow, new_unshallow): + """Update the list of shallow objects. + + Args: + new_shallow: Newly shallow objects + new_unshallow: Newly no longer shallow objects + """ + shallow = self.get_shallow() + if new_shallow: + shallow.update(new_shallow) + if new_unshallow: + shallow.difference_update(new_unshallow) + if shallow: + self._put_named_file( + "shallow", b"".join([sha + b"\n" for sha in shallow]) + ) + else: + self._del_named_file("shallow") + + def get_peeled(self, ref: Ref) -> ObjectID: + """Get the peeled value of a ref. + + Args: + ref: The refname to peel. + Returns: The fully-peeled SHA1 of a tag object, after peeling all + intermediate tags; if the original ref does not point to a tag, + this will equal the original SHA1. + """ + cached = self.refs.get_peeled(ref) + if cached is not None: + return cached + return self.object_store.peel_sha(self.refs[ref]).id + + def get_walker(self, include: Optional[List[bytes]] = None, + *args, **kwargs): + """Obtain a walker for this repository. + + Args: + include: Iterable of SHAs of commits to include along with their + ancestors. Defaults to [HEAD] + exclude: Iterable of SHAs of commits to exclude along with their + ancestors, overriding includes. + order: ORDER_* constant specifying the order of results. + Anything other than ORDER_DATE may result in O(n) memory usage. + reverse: If True, reverse the order of output, requiring O(n) + memory. + max_entries: The maximum number of entries to yield, or None for + no limit. + paths: Iterable of file or subtree paths to show entries for. + rename_detector: diff.RenameDetector object for detecting + renames. + follow: If True, follow path across renames/copies. Forces a + default rename_detector. + since: Timestamp to list commits after. + until: Timestamp to list commits before. + queue_cls: A class to use for a queue of commits, supporting the + iterator protocol. The constructor takes a single argument, the + Walker. + Returns: A `Walker` object + """ + from dulwich.walk import Walker + + if include is None: + include = [self.head()] + + kwargs["get_parents"] = lambda commit: self.get_parents(commit.id, commit) + + return Walker(self.object_store, include, *args, **kwargs) + + def __getitem__(self, name: Union[ObjectID, Ref]): + """Retrieve a Git object by SHA1 or ref. 
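+
+        A 20- or 40-byte name is first tried against the object store;
+        any other name (or a miss) is resolved through the refs container,
+        so e.g. repo[b"HEAD"] returns the commit that HEAD points at.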
+ + Args: + name: A Git object SHA1 or a ref name + Returns: A `ShaFile` object, such as a Commit or Blob + Raises: + KeyError: when the specified ref or object does not exist + """ + if not isinstance(name, bytes): + raise TypeError( + "'name' must be bytestring, not %.80s" % type(name).__name__ + ) + if len(name) in (20, 40): + try: + return self.object_store[name] + except (KeyError, ValueError): + pass + try: + return self.object_store[self.refs[name]] + except RefFormatError as exc: + raise KeyError(name) from exc + + def __contains__(self, name: bytes) -> bool: + """Check if a specific Git object or ref is present. + + Args: + name: Git object SHA1 or ref name + """ + if len(name) == 20 or (len(name) == 40 and valid_hexsha(name)): + return name in self.object_store or name in self.refs + else: + return name in self.refs + + def __setitem__(self, name: bytes, value: Union[ShaFile, bytes]): + """Set a ref. + + Args: + name: ref name + value: Ref value - either a ShaFile object, or a hex sha + """ + if name.startswith(b"refs/") or name == b"HEAD": + if isinstance(value, ShaFile): + self.refs[name] = value.id + elif isinstance(value, bytes): + self.refs[name] = value + else: + raise TypeError(value) + else: + raise ValueError(name) + + def __delitem__(self, name: bytes): + """Remove a ref. + + Args: + name: Name of the ref to remove + """ + if name.startswith(b"refs/") or name == b"HEAD": + del self.refs[name] + else: + raise ValueError(name) + + def _get_user_identity(self, config: "StackedConfig", kind: str = None) -> bytes: + """Determine the identity to use for new commits.""" + # TODO(jelmer): Deprecate this function in favor of get_user_identity + return get_user_identity(config) + + def _add_graftpoints(self, updated_graftpoints: Dict[bytes, List[bytes]]): + """Add or modify graftpoints + + Args: + updated_graftpoints: Dict of commit shas to list of parent shas + """ + + # Simple validation + for commit, parents in updated_graftpoints.items(): + for sha in [commit] + parents: + check_hexsha(sha, "Invalid graftpoint") + + self._graftpoints.update(updated_graftpoints) + + def _remove_graftpoints(self, to_remove: List[bytes] = []) -> None: + """Remove graftpoints + + Args: + to_remove: List of commit shas + """ + for sha in to_remove: + del self._graftpoints[sha] + + def _read_heads(self, name): + f = self.get_named_file(name) + if f is None: + return [] + with f: + return [line.strip() for line in f.readlines() if line.strip()] + + def do_commit( # noqa: C901 + self, + message: Optional[bytes] = None, + committer: Optional[bytes] = None, + author: Optional[bytes] = None, + commit_timestamp=None, + commit_timezone=None, + author_timestamp=None, + author_timezone=None, + tree: Optional[ObjectID] = None, + encoding: Optional[bytes] = None, + ref: Ref = b"HEAD", + merge_heads: Optional[List[ObjectID]] = None, + no_verify: bool = False, + sign: bool = False, + ): + """Create a new commit. + + If not specified, committer and author default to + get_user_identity(..., 'COMMITTER') + and get_user_identity(..., 'AUTHOR') respectively. + + Args: + message: Commit message + committer: Committer fullname + author: Author fullname + commit_timestamp: Commit timestamp (defaults to now) + commit_timezone: Commit timestamp timezone (defaults to GMT) + author_timestamp: Author timestamp (defaults to commit + timestamp) + author_timezone: Author timestamp timezone + (defaults to commit timestamp timezone) + tree: SHA1 of the tree root to use (if not specified the + current index will be committed). 
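+                When a tree is passed explicitly, the index is not
+                consulted at all.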
+ encoding: Encoding + ref: Optional ref to commit to (defaults to current branch) + merge_heads: Merge heads (defaults to .git/MERGE_HEAD) + no_verify: Skip pre-commit and commit-msg hooks + sign: GPG Sign the commit (bool, defaults to False, + pass True to use default GPG key, + pass a str containing Key ID to use a specific GPG key) + + Returns: + New commit SHA1 + """ + + try: + if not no_verify: + self.hooks["pre-commit"].execute() + except HookError as exc: + raise CommitError(exc) from exc + except KeyError: # no hook defined, silent fallthrough + pass + + c = Commit() + if tree is None: + index = self.open_index() + c.tree = index.commit(self.object_store) + else: + if len(tree) != 40: + raise ValueError("tree must be a 40-byte hex sha string") + c.tree = tree + + config = self.get_config_stack() + if merge_heads is None: + merge_heads = self._read_heads("MERGE_HEAD") + if committer is None: + committer = get_user_identity(config, kind="COMMITTER") + check_user_identity(committer) + c.committer = committer + if commit_timestamp is None: + # FIXME: Support GIT_COMMITTER_DATE environment variable + commit_timestamp = time.time() + c.commit_time = int(commit_timestamp) + if commit_timezone is None: + # FIXME: Use current user timezone rather than UTC + commit_timezone = 0 + c.commit_timezone = commit_timezone + if author is None: + author = get_user_identity(config, kind="AUTHOR") + c.author = author + check_user_identity(author) + if author_timestamp is None: + # FIXME: Support GIT_AUTHOR_DATE environment variable + author_timestamp = commit_timestamp + c.author_time = int(author_timestamp) + if author_timezone is None: + author_timezone = commit_timezone + c.author_timezone = author_timezone + if encoding is None: + try: + encoding = config.get(("i18n",), "commitEncoding") + except KeyError: + pass # No dice + if encoding is not None: + c.encoding = encoding + if message is None: + # FIXME: Try to read commit message from .git/MERGE_MSG + raise ValueError("No commit message specified") + + try: + if no_verify: + c.message = message + else: + c.message = self.hooks["commit-msg"].execute(message) + if c.message is None: + c.message = message + except HookError as exc: + raise CommitError(exc) from exc + except KeyError: # no hook defined, message not modified + c.message = message + + keyid = sign if isinstance(sign, str) else None + + if ref is None: + # Create a dangling commit + c.parents = merge_heads + if sign: + c.sign(keyid) + self.object_store.add_object(c) + else: + try: + old_head = self.refs[ref] + c.parents = [old_head] + merge_heads + if sign: + c.sign(keyid) + self.object_store.add_object(c) + ok = self.refs.set_if_equals( + ref, + old_head, + c.id, + message=b"commit: " + message, + committer=committer, + timestamp=commit_timestamp, + timezone=commit_timezone, + ) + except KeyError: + c.parents = merge_heads + if sign: + c.sign(keyid) + self.object_store.add_object(c) + ok = self.refs.add_if_new( + ref, + c.id, + message=b"commit: " + message, + committer=committer, + timestamp=commit_timestamp, + timezone=commit_timezone, + ) + if not ok: + # Fail if the atomic compare-and-swap failed, leaving the + # commit and all its objects as garbage. 
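+                # The stranded objects are unreachable and harmless; a
+                # later garbage collection can prune them, and the caller
+                # may simply retry the commit.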
+ raise CommitError(f"{ref!r} changed during commit") + + self._del_named_file("MERGE_HEAD") + + try: + self.hooks["post-commit"].execute() + except HookError as e: # silent failure + warnings.warn("post-commit hook failed: %s" % e, UserWarning) + except KeyError: # no hook defined, silent fallthrough + pass + + return c.id + + +def read_gitfile(f): + """Read a ``.git`` file. + + The first line of the file should start with "gitdir: " + + Args: + f: File-like object to read from + Returns: A path + """ + cs = f.read() + if not cs.startswith("gitdir: "): + raise ValueError("Expected file to start with 'gitdir: '") + return cs[len("gitdir: ") :].rstrip("\n") + + +class UnsupportedVersion(Exception): + """Unsupported repository version.""" + + def __init__(self, version): + self.version = version + + +class UnsupportedExtension(Exception): + """Unsupported repository extension.""" + + def __init__(self, extension): + self.extension = extension + + +class Repo(BaseRepo): + """A git repository backed by local disk. + + To open an existing repository, call the constructor with + the path of the repository. + + To create a new repository, use the Repo.init class method. + + Note that a repository object may hold on to resources such + as file handles for performance reasons; call .close() to free + up those resources. + + Attributes: + + path: Path to the working copy (if it exists) or repository control + directory (if the repository is bare) + bare: Whether this is a bare repository + """ + + path: str + bare: bool + + def __init__( + self, + root: str, + object_store: Optional[BaseObjectStore] = None, + bare: Optional[bool] = None + ) -> None: + self.symlink_fn = None + hidden_path = os.path.join(root, CONTROLDIR) + if bare is None: + if (os.path.isfile(hidden_path) + or os.path.isdir(os.path.join(hidden_path, OBJECTDIR))): + bare = False + elif (os.path.isdir(os.path.join(root, OBJECTDIR)) + and os.path.isdir(os.path.join(root, REFSDIR))): + bare = True + else: + raise NotGitRepository( + "No git repository was found at %(path)s" % dict(path=root) + ) + + self.bare = bare + if bare is False: + if os.path.isfile(hidden_path): + with open(hidden_path, "r") as f: + path = read_gitfile(f) + self._controldir = os.path.join(root, path) + else: + self._controldir = hidden_path + else: + self._controldir = root + commondir = self.get_named_file(COMMONDIR) + if commondir is not None: + with commondir: + self._commondir = os.path.join( + self.controldir(), + os.fsdecode(commondir.read().rstrip(b"\r\n")), + ) + else: + self._commondir = self._controldir + self.path = root + config = self.get_config() + try: + repository_format_version = config.get( + "core", + "repositoryformatversion" + ) + format_version = ( + 0 + if repository_format_version is None + else int(repository_format_version) + ) + except KeyError: + format_version = 0 + + if format_version not in (0, 1): + raise UnsupportedVersion(format_version) + + for extension in config.items((b"extensions", )): + raise UnsupportedExtension(extension) + + if object_store is None: + object_store = DiskObjectStore.from_config( + os.path.join(self.commondir(), OBJECTDIR), config + ) + refs = DiskRefsContainer( + self.commondir(), self._controldir, logger=self._write_reflog + ) + BaseRepo.__init__(self, object_store, refs) + + self._graftpoints = {} + graft_file = self.get_named_file( + os.path.join("info", "grafts"), basedir=self.commondir() + ) + if graft_file: + with graft_file: + self._graftpoints.update(parse_graftpoints(graft_file)) + graft_file = 
self.get_named_file("shallow", basedir=self.commondir()) + if graft_file: + with graft_file: + self._graftpoints.update(parse_graftpoints(graft_file)) + + self.hooks["pre-commit"] = PreCommitShellHook(self.path, self.controldir()) + self.hooks["commit-msg"] = CommitMsgShellHook(self.controldir()) + self.hooks["post-commit"] = PostCommitShellHook(self.controldir()) + self.hooks["post-receive"] = PostReceiveShellHook(self.controldir()) + + def _write_reflog( + self, ref, old_sha, new_sha, committer, timestamp, timezone, message + ): + from .reflog import format_reflog_line + + path = os.path.join(self.controldir(), "logs", os.fsdecode(ref)) + try: + os.makedirs(os.path.dirname(path)) + except FileExistsError: + pass + if committer is None: + config = self.get_config_stack() + committer = self._get_user_identity(config) + check_user_identity(committer) + if timestamp is None: + timestamp = int(time.time()) + if timezone is None: + timezone = 0 # FIXME + with open(path, "ab") as f: + f.write( + format_reflog_line( + old_sha, new_sha, committer, timestamp, timezone, message + ) + + b"\n" + ) + + @classmethod + def discover(cls, start="."): + """Iterate parent directories to discover a repository + + Return a Repo object for the first parent directory that looks like a + Git repository. + + Args: + start: The directory to start discovery from (defaults to '.') + """ + remaining = True + path = os.path.abspath(start) + while remaining: + try: + return cls(path) + except NotGitRepository: + path, remaining = os.path.split(path) + raise NotGitRepository( + "No git repository was found at %(path)s" % dict(path=start) + ) + + def controldir(self): + """Return the path of the control directory.""" + return self._controldir + + def commondir(self): + """Return the path of the common directory. + + For a main working tree, it is identical to controldir(). + + For a linked working tree, it is the control directory of the + main working tree. + """ + return self._commondir + + def _determine_file_mode(self): + """Probe the file-system to determine whether permissions can be trusted. + + Returns: True if permissions can be trusted, False otherwise. + """ + fname = os.path.join(self.path, ".probe-permissions") + with open(fname, "w") as f: + f.write("") + + st1 = os.lstat(fname) + try: + os.chmod(fname, st1.st_mode ^ stat.S_IXUSR) + except PermissionError: + return False + st2 = os.lstat(fname) + + os.unlink(fname) + + mode_differs = st1.st_mode != st2.st_mode + st2_has_exec = (st2.st_mode & stat.S_IXUSR) != 0 + + return mode_differs and st2_has_exec + + def _put_named_file(self, path, contents): + """Write a file to the control dir with the given name and contents. + + Args: + path: The path to the file, relative to the control dir. + contents: A string to write to the file. + """ + path = path.lstrip(os.path.sep) + with GitFile(os.path.join(self.controldir(), path), "wb") as f: + f.write(contents) + + def _del_named_file(self, path): + try: + os.unlink(os.path.join(self.controldir(), path)) + except FileNotFoundError: + return + + def get_named_file(self, path, basedir=None): + """Get a file from the control dir with a specific name. + + Although the filename should be interpreted as a filename relative to + the control dir in a disk-based Repo, the object returned need not be + pointing to a file in that location. + + Args: + path: The path to the file, relative to the control dir. + basedir: Optional argument that specifies an alternative to the + control dir. 
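+                Used internally, for example, to read files such as
+                "shallow" from the commondir of a linked worktree.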
+ Returns: An open file object, or None if the file does not exist. + """ + # TODO(dborowitz): sanitize filenames, since this is used directly by + # the dumb web serving code. + if basedir is None: + basedir = self.controldir() + path = path.lstrip(os.path.sep) + try: + return open(os.path.join(basedir, path), "rb") + except FileNotFoundError: + return None + + def index_path(self): + """Return path to the index file.""" + return os.path.join(self.controldir(), INDEX_FILENAME) + + def open_index(self) -> "Index": + """Open the index for this repository. + + Raises: + NoIndexPresent: If no index is present + Returns: The matching `Index` + """ + from dulwich.index import Index + + if not self.has_index(): + raise NoIndexPresent() + return Index(self.index_path()) + + def has_index(self): + """Check if an index is present.""" + # Bare repos must never have index files; non-bare repos may have a + # missing index file, which is treated as empty. + return not self.bare + + def stage(self, fs_paths: Union[str, bytes, os.PathLike, Iterable[Union[str, bytes, os.PathLike]]]) -> None: + """Stage a set of paths. + + Args: + fs_paths: List of paths, relative to the repository path + """ + + root_path_bytes = os.fsencode(self.path) + + if isinstance(fs_paths, (str, bytes, os.PathLike)): + fs_paths = [fs_paths] + fs_paths = list(fs_paths) + + from dulwich.index import ( + blob_from_path_and_stat, + index_entry_from_stat, + index_entry_from_directory, + _fs_to_tree_path, + ) + + index = self.open_index() + blob_normalizer = self.get_blob_normalizer() + for fs_path in fs_paths: + if not isinstance(fs_path, bytes): + fs_path = os.fsencode(fs_path) + if os.path.isabs(fs_path): + raise ValueError( + "path %r should be relative to " + "repository root, not absolute" % fs_path + ) + tree_path = _fs_to_tree_path(fs_path) + full_path = os.path.join(root_path_bytes, fs_path) + try: + st = os.lstat(full_path) + except OSError: + # File no longer exists + try: + del index[tree_path] + except KeyError: + pass # already removed + else: + if stat.S_ISDIR(st.st_mode): + entry = index_entry_from_directory(st, full_path) + if entry: + index[tree_path] = entry + else: + try: + del index[tree_path] + except KeyError: + pass + elif not stat.S_ISREG(st.st_mode) and not stat.S_ISLNK(st.st_mode): + try: + del index[tree_path] + except KeyError: + pass + else: + blob = blob_from_path_and_stat(full_path, st) + blob = blob_normalizer.checkin_normalize(blob, fs_path) + self.object_store.add_object(blob) + index[tree_path] = index_entry_from_stat(st, blob.id, 0) + index.write() + + def unstage(self, fs_paths: List[str]): + """unstage specific file in the index + Args: + fs_paths: a list of files to unstage, + relative to the repository path + """ + from dulwich.index import ( + IndexEntry, + _fs_to_tree_path, + ) + + index = self.open_index() + try: + tree_id = self[b'HEAD'].tree + except KeyError: + # no head mean no commit in the repo + for fs_path in fs_paths: + tree_path = _fs_to_tree_path(fs_path) + del index[tree_path] + index.write() + return + + for fs_path in fs_paths: + tree_path = _fs_to_tree_path(fs_path) + try: + tree_entry = self.object_store[tree_id].lookup_path( + self.object_store.__getitem__, tree_path) + except KeyError: + # if tree_entry didn't exist, this file was being added, so + # remove index entry + try: + del index[tree_path] + continue + except KeyError as exc: + raise KeyError( + "file '%s' not in index" % (tree_path.decode()) + ) from exc + + st = None + try: + st = os.lstat(os.path.join(self.path, 
fs_path)) + except FileNotFoundError: + pass + + index_entry = IndexEntry( + ctime=(self[b'HEAD'].commit_time, 0), + mtime=(self[b'HEAD'].commit_time, 0), + dev=st.st_dev if st else 0, + ino=st.st_ino if st else 0, + mode=tree_entry[0], + uid=st.st_uid if st else 0, + gid=st.st_gid if st else 0, + size=len(self[tree_entry[1]].data), + sha=tree_entry[1], + flags=0, + extended_flags=0 + ) + + index[tree_path] = index_entry + index.write() + + def clone( + self, + target_path, + mkdir=True, + bare=False, + origin=b"origin", + checkout=None, + branch=None, + progress=None, + depth=None, + ): + """Clone this repository. + + Args: + target_path: Target path + mkdir: Create the target directory + bare: Whether to create a bare repository + checkout: Whether or not to check-out HEAD after cloning + origin: Base name for refs in target repository + cloned from this repository + branch: Optional branch or tag to be used as HEAD in the new repository + instead of this repository's HEAD. + progress: Optional progress function + depth: Depth at which to fetch + Returns: Created repository as `Repo` + """ + + encoded_path = self.path + if not isinstance(encoded_path, bytes): + encoded_path = os.fsencode(encoded_path) + + if mkdir: + os.mkdir(target_path) + + try: + target = None + if not bare: + target = Repo.init(target_path) + if checkout is None: + checkout = True + else: + if checkout: + raise ValueError("checkout and bare are incompatible") + target = Repo.init_bare(target_path) + + target_config = target.get_config() + target_config.set((b"remote", origin), b"url", encoded_path) + target_config.set( + (b"remote", origin), + b"fetch", + b"+refs/heads/*:refs/remotes/" + origin + b"/*", + ) + target_config.write_to_path() + + ref_message = b"clone: from " + encoded_path + self.fetch(target, depth=depth) + target.refs.import_refs( + b"refs/remotes/" + origin, + self.refs.as_dict(b"refs/heads"), + message=ref_message, + ) + target.refs.import_refs( + b"refs/tags", self.refs.as_dict(b"refs/tags"), message=ref_message + ) + + head_chain, origin_sha = self.refs.follow(b"HEAD") + origin_head = head_chain[-1] if head_chain else None + if origin_sha and not origin_head: + # set detached HEAD + target.refs[b"HEAD"] = origin_sha + + _set_origin_head(target.refs, origin, origin_head) + head_ref = _set_default_branch( + target.refs, origin, origin_head, branch, ref_message + ) + + # Update target head + if head_ref: + head = _set_head(target.refs, head_ref, ref_message) + else: + head = None + + if checkout and head is not None: + target.reset_index() + except BaseException: + if target is not None: + target.close() + if mkdir: + import shutil + shutil.rmtree(target_path) + raise + return target + + def reset_index(self, tree: Optional[bytes] = None): + """Reset the index back to a specific tree. + + Args: + tree: Tree SHA to reset to, None for current HEAD tree. 
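+
+        A minimal usage sketch (assuming an existing non-bare checkout):
+
+            r = Repo(".")
+            r.reset_index()  # rebuild the index from the current HEAD tree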
+
+        """
+        from dulwich.index import (
+            build_index_from_tree,
+            validate_path_element_default,
+            validate_path_element_ntfs,
+        )
+
+        if tree is None:
+            head = self[b"HEAD"]
+            if isinstance(head, Tag):
+                _cls, obj = head.object
+                head = self.get_object(obj)
+            tree = head.tree
+        config = self.get_config()
+        honor_filemode = config.get_boolean(b"core", b"filemode", os.name != "nt")
+        if config.get_boolean(b"core", b"core.protectNTFS", os.name == "nt"):
+            validate_path_element = validate_path_element_ntfs
+        else:
+            validate_path_element = validate_path_element_default
+        return build_index_from_tree(
+            self.path,
+            self.index_path(),
+            self.object_store,
+            tree,
+            honor_filemode=honor_filemode,
+            validate_path_element=validate_path_element,
+            symlink_fn=self.symlink_fn,
+        )
+
+    def get_config(self) -> "ConfigFile":
+        """Retrieve the config object.
+
+        Returns: `ConfigFile` object for the ``.git/config`` file.
+        """
+        from dulwich.config import ConfigFile
+
+        path = os.path.join(self._commondir, "config")
+        try:
+            return ConfigFile.from_path(path)
+        except FileNotFoundError:
+            ret = ConfigFile()
+            ret.path = path
+            return ret
+
+    def get_description(self):
+        """Retrieve the description of this repository.
+
+        Returns: A string describing the repository or None.
+        """
+        path = os.path.join(self._controldir, "description")
+        try:
+            with GitFile(path, "rb") as f:
+                return f.read()
+        except FileNotFoundError:
+            return None
+
+    def __repr__(self):
+        return "<Repo at %r>" % self.path
+
+    def set_description(self, description):
+        """Set the description for this repository.
+
+        Args:
+            description: Text to set as description for this repository.
+        """
+
+        self._put_named_file("description", description)
+
+    @classmethod
+    def _init_maybe_bare(cls, path, controldir, bare, object_store=None):
+        for d in BASE_DIRECTORIES:
+            os.mkdir(os.path.join(controldir, *d))
+        if object_store is None:
+            object_store = DiskObjectStore.init(os.path.join(controldir, OBJECTDIR))
+        ret = cls(path, bare=bare, object_store=object_store)
+        ret.refs.set_symbolic_ref(b"HEAD", DEFAULT_REF)
+        ret._init_files(bare)
+        return ret
+
+    @classmethod
+    def init(cls, path: str, mkdir: bool = False) -> "Repo":
+        """Create a new repository.
+
+        Args:
+            path: Path in which to create the repository
+            mkdir: Whether to create the directory
+        Returns: `Repo` instance
+        """
+        if mkdir:
+            os.mkdir(path)
+        controldir = os.path.join(path, CONTROLDIR)
+        os.mkdir(controldir)
+        _set_filesystem_hidden(controldir)
+        return cls._init_maybe_bare(path, controldir, False)
+
+    @classmethod
+    def _init_new_working_directory(cls, path, main_repo, identifier=None, mkdir=False):
+        """Create a new working directory linked to a repository.
+
+        Args:
+            path: Path in which to create the working tree.
+ main_repo: Main repository to reference + identifier: Worktree identifier + mkdir: Whether to create the directory + Returns: `Repo` instance + """ + if mkdir: + os.mkdir(path) + if identifier is None: + identifier = os.path.basename(path) + main_worktreesdir = os.path.join(main_repo.controldir(), WORKTREES) + worktree_controldir = os.path.join(main_worktreesdir, identifier) + gitdirfile = os.path.join(path, CONTROLDIR) + with open(gitdirfile, "wb") as f: + f.write(b"gitdir: " + os.fsencode(worktree_controldir) + b"\n") + try: + os.mkdir(main_worktreesdir) + except FileExistsError: + pass + try: + os.mkdir(worktree_controldir) + except FileExistsError: + pass + with open(os.path.join(worktree_controldir, GITDIR), "wb") as f: + f.write(os.fsencode(gitdirfile) + b"\n") + with open(os.path.join(worktree_controldir, COMMONDIR), "wb") as f: + f.write(b"../..\n") + with open(os.path.join(worktree_controldir, "HEAD"), "wb") as f: + f.write(main_repo.head() + b"\n") + r = cls(path) + r.reset_index() + return r + + @classmethod + def init_bare(cls, path, mkdir=False, object_store=None): + """Create a new bare repository. + + ``path`` should already exist and be an empty directory. + + Args: + path: Path to create bare repository in + Returns: a `Repo` instance + """ + if mkdir: + os.mkdir(path) + return cls._init_maybe_bare(path, path, True, object_store=object_store) + + create = init_bare + + def close(self): + """Close any files opened by this repository.""" + self.object_store.close() + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + self.close() + + def get_blob_normalizer(self): + """Return a BlobNormalizer object""" + # TODO Parse the git attributes files + git_attributes = {} + config_stack = self.get_config_stack() + try: + tree = self.object_store[self.refs[b"HEAD"]].tree + return TreeBlobNormalizer( + config_stack, + git_attributes, + self.object_store, + tree, + ) + except KeyError: + return BlobNormalizer(config_stack, git_attributes) + + +class MemoryRepo(BaseRepo): + """Repo that stores refs, objects, and named files in memory. + + MemoryRepos are always bare: they have no working tree and no index, since + those have a stronger dependency on the filesystem. + """ + + def __init__(self): + from dulwich.config import ConfigFile + + self._reflog = [] + refs_container = DictRefsContainer({}, logger=self._append_reflog) + BaseRepo.__init__(self, MemoryObjectStore(), refs_container) + self._named_files = {} + self.bare = True + self._config = ConfigFile() + self._description = None + + def _append_reflog(self, *args): + self._reflog.append(args) + + def set_description(self, description): + self._description = description + + def get_description(self): + return self._description + + def _determine_file_mode(self): + """Probe the file-system to determine whether permissions can be trusted. + + Returns: True if permissions can be trusted, False otherwise. + """ + return sys.platform != "win32" + + def _put_named_file(self, path, contents): + """Write a file to the control dir with the given name and contents. + + Args: + path: The path to the file, relative to the control dir. + contents: A string to write to the file. + """ + self._named_files[path] = contents + + def _del_named_file(self, path): + try: + del self._named_files[path] + except KeyError: + pass + + def get_named_file(self, path, basedir=None): + """Get a file from the control dir with a specific name. 
+
+        Although the filename should be interpreted as a filename relative to
+        the control dir in a disk-based Repo, the object returned need not be
+        pointing to a file in that location.
+
+        Args:
+            path: The path to the file, relative to the control dir.
+        Returns: An open file object, or None if the file does not exist.
+        """
+        contents = self._named_files.get(path, None)
+        if contents is None:
+            return None
+        return BytesIO(contents)
+
+    def open_index(self):
+        """Fail to open index for this repo, since it is bare.
+
+        Raises:
+            NoIndexPresent: Raised when no index is present
+        """
+        raise NoIndexPresent()
+
+    def get_config(self):
+        """Retrieve the config object.
+
+        Returns: `ConfigFile` object.
+        """
+        return self._config
+
+    @classmethod
+    def init_bare(cls, objects, refs):
+        """Create a new bare repository in memory.
+
+        Args:
+            objects: Objects for the new repository,
+                as iterable
+            refs: Refs as dictionary, mapping names
+                to object SHA1s
+        """
+        ret = cls()
+        for obj in objects:
+            ret.object_store.add_object(obj)
+        for refname, sha in refs.items():
+            ret.refs.add_if_new(refname, sha)
+        ret._init_files(bare=True)
+        return ret
diff --git a/env/lib/python3.8/site-packages/dulwich/server.py b/env/lib/python3.8/site-packages/dulwich/server.py
new file mode 100644
index 00000000..49eff1b4
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/server.py
@@ -0,0 +1,1273 @@
+# server.py -- Implementation of the server side git protocols
+# Copyright (C) 2008 John Carr
+# Copyright (C) 2011-2012 Jelmer Vernooij
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as published by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Git smart network protocol server implementation.
+ +For more detailed implementation on the network protocol, see the +Documentation/technical directory in the cgit distribution, and in particular: + +* Documentation/technical/protocol-capabilities.txt +* Documentation/technical/pack-protocol.txt + +Currently supported capabilities: + + * include-tag + * thin-pack + * multi_ack_detailed + * multi_ack + * side-band-64k + * ofs-delta + * no-progress + * report-status + * delete-refs + * shallow + * symref +""" + +import collections +import os +import socket +import sys +import time +from typing import List, Tuple, Dict, Optional, Iterable +import zlib + +import socketserver + +from dulwich.archive import tar_stream +from dulwich.errors import ( + ApplyDeltaError, + ChecksumMismatch, + GitProtocolError, + HookError, + NotGitRepository, + UnexpectedCommandError, + ObjectFormatException, +) +from dulwich import log_utils +from dulwich.objects import ( + Commit, + valid_hexsha, +) +from dulwich.pack import ( + write_pack_objects, +) +from dulwich.protocol import ( + BufferedPktLineWriter, + capability_agent, + CAPABILITIES_REF, + CAPABILITY_AGENT, + CAPABILITY_DELETE_REFS, + CAPABILITY_INCLUDE_TAG, + CAPABILITY_MULTI_ACK_DETAILED, + CAPABILITY_MULTI_ACK, + CAPABILITY_NO_DONE, + CAPABILITY_NO_PROGRESS, + CAPABILITY_OFS_DELTA, + CAPABILITY_QUIET, + CAPABILITY_REPORT_STATUS, + CAPABILITY_SHALLOW, + CAPABILITY_SIDE_BAND_64K, + CAPABILITY_THIN_PACK, + COMMAND_DEEPEN, + COMMAND_DONE, + COMMAND_HAVE, + COMMAND_SHALLOW, + COMMAND_UNSHALLOW, + COMMAND_WANT, + MULTI_ACK, + MULTI_ACK_DETAILED, + Protocol, + ReceivableProtocol, + SIDE_BAND_CHANNEL_DATA, + SIDE_BAND_CHANNEL_PROGRESS, + SIDE_BAND_CHANNEL_FATAL, + SINGLE_ACK, + TCP_GIT_PORT, + ZERO_SHA, + ack_type, + extract_capabilities, + extract_want_line_capabilities, + symref_capabilities, + format_ref_line, + format_shallow_line, + format_unshallow_line, + format_ack_line, + NAK_LINE, +) +from dulwich.refs import ( + ANNOTATED_TAG_SUFFIX, + write_info_refs, +) +from dulwich.repo import ( + BaseRepo, + Repo, +) + + +logger = log_utils.getLogger(__name__) + + +class Backend(object): + """A backend for the Git smart server implementation.""" + + def open_repository(self, path): + """Open the repository at a path. + + Args: + path: Path to the repository + Raises: + NotGitRepository: no git repository was found at path + Returns: Instance of BackendRepo + """ + raise NotImplementedError(self.open_repository) + + +class BackendRepo(object): + """Repository abstraction used by the Git server. + + The methods required here are a subset of those provided by + dulwich.repo.Repo. + """ + + object_store = None + refs = None + + def get_refs(self) -> Dict[bytes, bytes]: + """ + Get all the refs in the repository + + Returns: dict of name -> sha + """ + raise NotImplementedError + + def get_peeled(self, name: bytes) -> Optional[bytes]: + """Return the cached peeled value of a ref, if available. + + Args: + name: Name of the ref to peel + Returns: The peeled value of the ref. If the ref is known not point to + a tag, this will be the SHA the ref refers to. If no cached + information about a tag is available, this method may return None, + but it should attempt to peel the tag if possible. + """ + return None + + def fetch_objects(self, determine_wants, graph_walker, progress, get_tagged=None): + """ + Yield the objects required for a list of commits. 
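+
+        The signature mirrors BaseRepo.fetch_objects in dulwich.repo; see
+        that method for the semantics of determine_wants and graph_walker.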
+ + Args: + progress: is a callback to send progress messages to the client + get_tagged: Function that returns a dict of pointed-to sha -> + tag sha for including tags. + """ + raise NotImplementedError + + +class DictBackend(Backend): + """Trivial backend that looks up Git repositories in a dictionary.""" + + def __init__(self, repos): + self.repos = repos + + def open_repository(self, path: str) -> BaseRepo: + logger.debug("Opening repository at %s", path) + try: + return self.repos[path] + except KeyError as exc: + raise NotGitRepository( + "No git repository was found at %(path)s" % dict(path=path) + ) from exc + + +class FileSystemBackend(Backend): + """Simple backend looking up Git repositories in the local file system.""" + + def __init__(self, root=os.sep): + super(FileSystemBackend, self).__init__() + self.root = (os.path.abspath(root) + os.sep).replace(os.sep * 2, os.sep) + + def open_repository(self, path): + logger.debug("opening repository at %s", path) + abspath = os.path.abspath(os.path.join(self.root, path)) + os.sep + normcase_abspath = os.path.normcase(abspath) + normcase_root = os.path.normcase(self.root) + if not normcase_abspath.startswith(normcase_root): + raise NotGitRepository("Path %r not inside root %r" % (path, self.root)) + return Repo(abspath) + + +class Handler(object): + """Smart protocol command handler base class.""" + + def __init__(self, backend, proto, stateless_rpc=False): + self.backend = backend + self.proto = proto + self.stateless_rpc = stateless_rpc + + def handle(self): + raise NotImplementedError(self.handle) + + +class PackHandler(Handler): + """Protocol handler for packs.""" + + def __init__(self, backend, proto, stateless_rpc=False): + super(PackHandler, self).__init__(backend, proto, stateless_rpc) + self._client_capabilities = None + # Flags needed for the no-done capability + self._done_received = False + + @classmethod + def capabilities(cls) -> Iterable[bytes]: + raise NotImplementedError(cls.capabilities) + + @classmethod + def innocuous_capabilities(cls) -> Iterable[bytes]: + return [ + CAPABILITY_INCLUDE_TAG, + CAPABILITY_THIN_PACK, + CAPABILITY_NO_PROGRESS, + CAPABILITY_OFS_DELTA, + capability_agent(), + ] + + @classmethod + def required_capabilities(cls) -> Iterable[bytes]: + """Return a list of capabilities that we require the client to have.""" + return [] + + def set_client_capabilities(self, caps: Iterable[bytes]) -> None: + allowable_caps = set(self.innocuous_capabilities()) + allowable_caps.update(self.capabilities()) + for cap in caps: + if cap.startswith(CAPABILITY_AGENT + b"="): + continue + if cap not in allowable_caps: + raise GitProtocolError( + "Client asked for capability %r that " "was not advertised." % cap + ) + for cap in self.required_capabilities(): + if cap not in caps: + raise GitProtocolError( + "Client does not support required " "capability %r." 
% cap + ) + self._client_capabilities = set(caps) + logger.info("Client capabilities: %s", caps) + + def has_capability(self, cap: bytes) -> bool: + if self._client_capabilities is None: + raise GitProtocolError( + "Server attempted to access capability %r " "before asking client" % cap + ) + return cap in self._client_capabilities + + def notify_done(self) -> None: + self._done_received = True + + +class UploadPackHandler(PackHandler): + """Protocol handler for uploading a pack to the client.""" + + def __init__(self, backend, args, proto, stateless_rpc=False, advertise_refs=False): + super(UploadPackHandler, self).__init__( + backend, proto, stateless_rpc=stateless_rpc + ) + self.repo = backend.open_repository(args[0]) + self._graph_walker = None + self.advertise_refs = advertise_refs + # A state variable for denoting that the have list is still + # being processed, and the client is not accepting any other + # data (such as side-band, see the progress method here). + self._processing_have_lines = False + + @classmethod + def capabilities(cls): + return [ + CAPABILITY_MULTI_ACK_DETAILED, + CAPABILITY_MULTI_ACK, + CAPABILITY_SIDE_BAND_64K, + CAPABILITY_THIN_PACK, + CAPABILITY_OFS_DELTA, + CAPABILITY_NO_PROGRESS, + CAPABILITY_INCLUDE_TAG, + CAPABILITY_SHALLOW, + CAPABILITY_NO_DONE, + ] + + @classmethod + def required_capabilities(cls): + return ( + CAPABILITY_SIDE_BAND_64K, + CAPABILITY_THIN_PACK, + CAPABILITY_OFS_DELTA, + ) + + def progress(self, message): + if self.has_capability(CAPABILITY_NO_PROGRESS) or self._processing_have_lines: + return + self.proto.write_sideband(SIDE_BAND_CHANNEL_PROGRESS, message) + + def get_tagged(self, refs=None, repo=None): + """Get a dict of peeled values of tags to their original tag shas. + + Args: + refs: dict of refname -> sha of possible tags; defaults to all + of the backend's refs. + repo: optional Repo instance for getting peeled refs; defaults + to the backend's repo, if available + Returns: dict of peeled_sha -> tag_sha, where tag_sha is the sha of a + tag whose peeled value is peeled_sha. + """ + if not self.has_capability(CAPABILITY_INCLUDE_TAG): + return {} + if refs is None: + refs = self.repo.get_refs() + if repo is None: + repo = getattr(self.repo, "repo", None) + if repo is None: + # Bail if we don't have a Repo available; this is ok since + # clients must be able to handle if the server doesn't include + # all relevant tags. + # TODO: fix behavior when missing + return {} + # TODO(jelmer): Integrate this with the refs logic in + # Repo.fetch_objects + tagged = {} + for name, sha in refs.items(): + peeled_sha = repo.get_peeled(name) + if peeled_sha != sha: + tagged[peeled_sha] = sha + return tagged + + def handle(self): + def write(x): + return self.proto.write_sideband(SIDE_BAND_CHANNEL_DATA, x) + + graph_walker = _ProtocolGraphWalker( + self, + self.repo.object_store, + self.repo.get_peeled, + self.repo.refs.get_symrefs, + ) + wants = [] + + def wants_wrapper(refs, **kwargs): + wants.extend(graph_walker.determine_wants(refs, **kwargs)) + return wants + + objects_iter = self.repo.fetch_objects( + wants_wrapper, + graph_walker, + self.progress, + get_tagged=self.get_tagged, + ) + + # Note the fact that client is only processing responses related + # to the have lines it sent, and any other data (including side- + # band) will be be considered a fatal error. + self._processing_have_lines = True + + # Did the process short-circuit (e.g. in a stateless RPC call)? Note + # that the client still expects a 0-object pack in most cases. 
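+        # (A 0-object pack is just a pack header immediately followed by
+        # the pack checksum trailer, with no object entries.)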
+ # Also, if it also happens that the object_iter is instantiated + # with a graph walker with an implementation that talks over the + # wire (which is this instance of this class) this will actually + # iterate through everything and write things out to the wire. + if len(wants) == 0: + return + + # The provided haves are processed, and it is safe to send side- + # band data now. + self._processing_have_lines = False + + if not graph_walker.handle_done( + not self.has_capability(CAPABILITY_NO_DONE), self._done_received + ): + return + + self.progress( + ("counting objects: %d, done.\n" % len(objects_iter)).encode("ascii") + ) + write_pack_objects(write, objects_iter) + # we are done + self.proto.write_pkt_line(None) + + +def _split_proto_line(line, allowed): + """Split a line read from the wire. + + Args: + line: The line read from the wire. + allowed: An iterable of command names that should be allowed. + Command names not listed below as possible return values will be + ignored. If None, any commands from the possible return values are + allowed. + Returns: a tuple having one of the following forms: + ('want', obj_id) + ('have', obj_id) + ('done', None) + (None, None) (for a flush-pkt) + + Raises: + UnexpectedCommandError: if the line cannot be parsed into one of the + allowed return values. + """ + if not line: + fields = [None] + else: + fields = line.rstrip(b"\n").split(b" ", 1) + command = fields[0] + if allowed is not None and command not in allowed: + raise UnexpectedCommandError(command) + if len(fields) == 1 and command in (COMMAND_DONE, None): + return (command, None) + elif len(fields) == 2: + if command in ( + COMMAND_WANT, + COMMAND_HAVE, + COMMAND_SHALLOW, + COMMAND_UNSHALLOW, + ): + if not valid_hexsha(fields[1]): + raise GitProtocolError("Invalid sha") + return tuple(fields) + elif command == COMMAND_DEEPEN: + return command, int(fields[1]) + raise GitProtocolError("Received invalid line from client: %r" % line) + + +def _find_shallow(store, heads, depth): + """Find shallow commits according to a given depth. + + Args: + store: An ObjectStore for looking up objects. + heads: Iterable of head SHAs to start walking from. + depth: The depth of ancestors to include. A depth of one includes + only the heads themselves. + Returns: A tuple of (shallow, not_shallow), sets of SHAs that should be + considered shallow and unshallow according to the arguments. Note that + these sets may overlap if a commit is reachable along multiple paths. 
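+
+        For example, with a linear history a <- b <- c and heads=[c],
+        depth=2 reports c as not shallow and b as shallow: the walk stops
+        at b, which becomes the client's history cut-off point.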
+ """ + parents = {} + + def get_parents(sha): + result = parents.get(sha, None) + if not result: + result = store[sha].parents + parents[sha] = result + return result + + todo = [] # stack of (sha, depth) + for head_sha in heads: + obj = store.peel_sha(head_sha) + if isinstance(obj, Commit): + todo.append((obj.id, 1)) + + not_shallow = set() + shallow = set() + while todo: + sha, cur_depth = todo.pop() + if cur_depth < depth: + not_shallow.add(sha) + new_depth = cur_depth + 1 + todo.extend((p, new_depth) for p in get_parents(sha)) + else: + shallow.add(sha) + + return shallow, not_shallow + + +def _want_satisfied(store, haves, want, earliest): + o = store[want] + pending = collections.deque([o]) + known = set([want]) + while pending: + commit = pending.popleft() + if commit.id in haves: + return True + if commit.type_name != b"commit": + # non-commit wants are assumed to be satisfied + continue + for parent in commit.parents: + if parent in known: + continue + known.add(parent) + parent_obj = store[parent] + # TODO: handle parents with later commit times than children + if parent_obj.commit_time >= earliest: + pending.append(parent_obj) + return False + + +def _all_wants_satisfied(store, haves, wants): + """Check whether all the current wants are satisfied by a set of haves. + + Args: + store: Object store to retrieve objects from + haves: A set of commits we know the client has. + wants: A set of commits the client wants + Note: Wants are specified with set_wants rather than passed in since + in the current interface they are determined outside this class. + """ + haves = set(haves) + if haves: + earliest = min([store[h].commit_time for h in haves]) + else: + earliest = 0 + for want in wants: + if not _want_satisfied(store, haves, want, earliest): + return False + + return True + + +class _ProtocolGraphWalker(object): + """A graph walker that knows the git protocol. + + As a graph walker, this class implements ack(), next(), and reset(). It + also contains some base methods for interacting with the wire and walking + the commit tree. + + The work of determining which acks to send is passed on to the + implementation instance stored in _impl. The reason for this is that we do + not know at object creation time what ack level the protocol requires. A + call to set_ack_type() is required to set up the implementation, before + any calls to next() or ack() are made. + """ + + def __init__(self, handler, object_store, get_peeled, get_symrefs): + self.handler = handler + self.store = object_store + self.get_peeled = get_peeled + self.get_symrefs = get_symrefs + self.proto = handler.proto + self.stateless_rpc = handler.stateless_rpc + self.advertise_refs = handler.advertise_refs + self._wants = [] + self.shallow = set() + self.client_shallow = set() + self.unshallow = set() + self._cached = False + self._cache = [] + self._cache_index = 0 + self._impl = None + + def determine_wants(self, heads, depth=None): + """Determine the wants for a set of heads. + + The given heads are advertised to the client, who then specifies which + refs they want using 'want' lines. This portion of the protocol is the + same regardless of ack type, and in fact is used to set the ack type of + the ProtocolGraphWalker. + + If the client has the 'shallow' capability, this method also reads and + responds to the 'shallow' and 'deepen' lines from the client. These are + not part of the wants per se, but they set up necessary state for + walking the graph. 
Additionally, later code depends on this method
+        consuming everything up to the first 'have' line.
+
+        Args:
+          heads: a dict of refname->SHA1 to advertise
+        Returns: a list of SHA1s requested by the client
+        """
+        symrefs = self.get_symrefs()
+        values = set(heads.values())
+        if self.advertise_refs or not self.stateless_rpc:
+            for i, (ref, sha) in enumerate(sorted(heads.items())):
+                try:
+                    peeled_sha = self.get_peeled(ref)
+                except KeyError:
+                    # Skip refs that are inaccessible
+                    # TODO(jelmer): Integrate with Repo.fetch_objects refs
+                    # logic.
+                    continue
+                if i == 0:
+                    logger.info(
+                        "Sending capabilities: %s", self.handler.capabilities())
+                    line = format_ref_line(
+                        ref, sha,
+                        self.handler.capabilities()
+                        + symref_capabilities(symrefs.items()))
+                else:
+                    line = format_ref_line(ref, sha)
+                self.proto.write_pkt_line(line)
+                if peeled_sha != sha:
+                    self.proto.write_pkt_line(
+                        format_ref_line(ref + ANNOTATED_TAG_SUFFIX, peeled_sha))
+
+            # done with the ref advertisement
+            self.proto.write_pkt_line(None)
+
+            if self.advertise_refs:
+                return []
+
+        # The client will now send us a series of 'want' lines.
+        want = self.proto.read_pkt_line()
+        if not want:
+            return []
+        line, caps = extract_want_line_capabilities(want)
+        self.handler.set_client_capabilities(caps)
+        self.set_ack_type(ack_type(caps))
+        allowed = (COMMAND_WANT, COMMAND_SHALLOW, COMMAND_DEEPEN, None)
+        command, sha = _split_proto_line(line, allowed)
+
+        want_revs = []
+        while command == COMMAND_WANT:
+            if sha not in values:
+                raise GitProtocolError("Client wants invalid object %s" % sha)
+            want_revs.append(sha)
+            command, sha = self.read_proto_line(allowed)
+
+        self.set_wants(want_revs)
+        if command in (COMMAND_SHALLOW, COMMAND_DEEPEN):
+            self.unread_proto_line(command, sha)
+            self._handle_shallow_request(want_revs)
+
+        if self.stateless_rpc and self.proto.eof():
+            # The client may close the socket at this point, expecting a
+            # flush-pkt from the server. We might be ready to send a packfile
+            # at this point, so we need to explicitly short-circuit in this
+            # case.
+            return []
+
+        return want_revs
+
+    def unread_proto_line(self, command, value):
+        if isinstance(value, int):
+            value = str(value).encode("ascii")
+        self.proto.unread_pkt_line(command + b" " + value)
+
+    def ack(self, have_ref):
+        if len(have_ref) != 40:
+            raise ValueError("invalid sha %r" % have_ref)
+        return self._impl.ack(have_ref)
+
+    def reset(self):
+        self._cached = True
+        self._cache_index = 0
+
+    def next(self):
+        if not self._cached:
+            if not self._impl and self.stateless_rpc:
+                return None
+            return next(self._impl)
+        self._cache_index += 1
+        if self._cache_index > len(self._cache):
+            return None
+        return self._cache[self._cache_index]
+
+    __next__ = next
+
+    def read_proto_line(self, allowed):
+        """Read a line from the wire.
+
+        Args:
+          allowed: An iterable of command names that should be allowed.
+        Returns: A tuple of (command, value); see _split_proto_line.
+        Raises:
+          UnexpectedCommandError: If an error occurred reading the line.
+        """
+        return _split_proto_line(self.proto.read_pkt_line(), allowed)
+
+    def _handle_shallow_request(self, wants):
+        while True:
+            command, val = self.read_proto_line((COMMAND_DEEPEN, COMMAND_SHALLOW))
+            if command == COMMAND_DEEPEN:
+                depth = val
+                break
+            self.client_shallow.add(val)
+        self.read_proto_line((None,))  # consume client's flush-pkt
+
+        shallow, not_shallow = _find_shallow(self.store, wants, depth)
+
+        # Update self.shallow instead of reassigning it since we passed a
+        # reference to it before this method was called.
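+        # new_shallow below is what we announce to the client as newly
+        # shallow; unshallow holds commits the client had marked shallow
+        # that are again fully reachable within the requested depth.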
+ self.shallow.update(shallow - not_shallow) + new_shallow = self.shallow - self.client_shallow + unshallow = self.unshallow = not_shallow & self.client_shallow + + for sha in sorted(new_shallow): + self.proto.write_pkt_line(format_shallow_line(sha)) + for sha in sorted(unshallow): + self.proto.write_pkt_line(format_unshallow_line(sha)) + + self.proto.write_pkt_line(None) + + def notify_done(self): + # relay the message down to the handler. + self.handler.notify_done() + + def send_ack(self, sha, ack_type=b""): + self.proto.write_pkt_line(format_ack_line(sha, ack_type)) + + def send_nak(self): + self.proto.write_pkt_line(NAK_LINE) + + def handle_done(self, done_required, done_received): + # Delegate this to the implementation. + return self._impl.handle_done(done_required, done_received) + + def set_wants(self, wants): + self._wants = wants + + def all_wants_satisfied(self, haves): + """Check whether all the current wants are satisfied by a set of haves. + + Args: + haves: A set of commits we know the client has. + Note: Wants are specified with set_wants rather than passed in since + in the current interface they are determined outside this class. + """ + return _all_wants_satisfied(self.store, haves, self._wants) + + def set_ack_type(self, ack_type): + impl_classes = { + MULTI_ACK: MultiAckGraphWalkerImpl, + MULTI_ACK_DETAILED: MultiAckDetailedGraphWalkerImpl, + SINGLE_ACK: SingleAckGraphWalkerImpl, + } + self._impl = impl_classes[ack_type](self) + + +_GRAPH_WALKER_COMMANDS = (COMMAND_HAVE, COMMAND_DONE, None) + + +class SingleAckGraphWalkerImpl(object): + """Graph walker implementation that speaks the single-ack protocol.""" + + def __init__(self, walker): + self.walker = walker + self._common = [] + + def ack(self, have_ref): + if not self._common: + self.walker.send_ack(have_ref) + self._common.append(have_ref) + + def next(self): + command, sha = self.walker.read_proto_line(_GRAPH_WALKER_COMMANDS) + if command in (None, COMMAND_DONE): + # defer the handling of done + self.walker.notify_done() + return None + elif command == COMMAND_HAVE: + return sha + + __next__ = next + + def handle_done(self, done_required, done_received): + if not self._common: + self.walker.send_nak() + + if done_required and not done_received: + # we are not done, especially when done is required; skip + # the pack for this request and especially do not handle + # the done. + return False + + if not done_received and not self._common: + # Okay we are not actually done then since the walker picked + # up no haves. This is usually triggered when client attempts + # to pull from a source that has no common base_commit. 
+            # See: test_server.MultiAckDetailedGraphWalkerImplTestCase.\
+            #      test_multi_ack_stateless_nodone
+            return False
+
+        return True
+
+
+class MultiAckGraphWalkerImpl(object):
+    """Graph walker implementation that speaks the multi-ack protocol."""
+
+    def __init__(self, walker):
+        self.walker = walker
+        self._found_base = False
+        self._common = []
+
+    def ack(self, have_ref):
+        self._common.append(have_ref)
+        if not self._found_base:
+            self.walker.send_ack(have_ref, b"continue")
+            if self.walker.all_wants_satisfied(self._common):
+                self._found_base = True
+        # else we blind ack within next
+
+    def next(self):
+        while True:
+            command, sha = self.walker.read_proto_line(_GRAPH_WALKER_COMMANDS)
+            if command is None:
+                self.walker.send_nak()
+                # in multi-ack mode, a flush-pkt indicates the client wants to
+                # flush but more have lines are still coming
+                continue
+            elif command == COMMAND_DONE:
+                self.walker.notify_done()
+                return None
+            elif command == COMMAND_HAVE:
+                if self._found_base:
+                    # blind ack
+                    self.walker.send_ack(sha, b"continue")
+                return sha
+
+    __next__ = next
+
+    def handle_done(self, done_required, done_received):
+        if done_required and not done_received:
+            # we are not done, especially when done is required; skip
+            # the pack for this request and especially do not handle
+            # the done.
+            return False
+
+        if not done_received and not self._common:
+            # Okay we are not actually done then since the walker picked
+            # up no haves. This is usually triggered when client attempts
+            # to pull from a source that has no common base_commit.
+            # See: test_server.MultiAckDetailedGraphWalkerImplTestCase.\
+            #      test_multi_ack_stateless_nodone
+            return False
+
+        # don't nak unless no common commits were found, even if not
+        # everything is satisfied
+        if self._common:
+            self.walker.send_ack(self._common[-1])
+        else:
+            self.walker.send_nak()
+        return True
+
+
+class MultiAckDetailedGraphWalkerImpl(object):
+    """Graph walker implementation speaking the multi-ack-detailed protocol."""
+
+    def __init__(self, walker):
+        self.walker = walker
+        self._common = []
+
+    def ack(self, have_ref):
+        # Should only be called iff have_ref is common
+        self._common.append(have_ref)
+        self.walker.send_ack(have_ref, b"common")
+
+    def next(self):
+        while True:
+            command, sha = self.walker.read_proto_line(_GRAPH_WALKER_COMMANDS)
+            if command is None:
+                if self.walker.all_wants_satisfied(self._common):
+                    self.walker.send_ack(self._common[-1], b"ready")
+                self.walker.send_nak()
+                if self.walker.stateless_rpc:
+                    # In the HTTP version of this request a flush-pkt always
+                    # signifies the end of the request, so we also return
+                    # nothing here as if we are done (but not really, as
+                    # it depends on whether the no-done capability was
+                    # specified and that's handled in handle_done, which
+                    # may or may not call post_nodone_check depending on
+                    # that).
+                    return None
+            elif command == COMMAND_DONE:
+                # Let the walker know that we got a done.
+                self.walker.notify_done()
+                break
+            elif command == COMMAND_HAVE:
+                # return the sha and let the caller ACK it with the
+                # above ack method.
+                return sha
+        # don't nak unless no common commits were found, even if not
+        # everything is satisfied
+
+    __next__ = next
+
+    def handle_done(self, done_required, done_received):
+        if done_required and not done_received:
+            # we are not done, especially when done is required; skip
+            # the pack for this request and especially do not handle
+            # the done.
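+            # (With the no-done capability the client never sends 'done';
+            # handle_done is then called with done_required=False.)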
+            return False
+
+        if not done_received and not self._common:
+            # Okay we are not actually done then since the walker picked
+            # up no haves. This is usually triggered when client attempts
+            # to pull from a source that has no common base_commit.
+            # See: test_server.MultiAckDetailedGraphWalkerImplTestCase.\
+            #      test_multi_ack_stateless_nodone
+            return False
+
+        # don't nak unless no common commits were found, even if not
+        # everything is satisfied
+        if self._common:
+            self.walker.send_ack(self._common[-1])
+        else:
+            self.walker.send_nak()
+        return True
+
+
+class ReceivePackHandler(PackHandler):
+    """Protocol handler for downloading a pack from the client."""
+
+    def __init__(self, backend, args, proto, stateless_rpc=False, advertise_refs=False):
+        super(ReceivePackHandler, self).__init__(
+            backend, proto, stateless_rpc=stateless_rpc
+        )
+        self.repo = backend.open_repository(args[0])
+        self.advertise_refs = advertise_refs
+
+    @classmethod
+    def capabilities(cls) -> Iterable[bytes]:
+        return [
+            CAPABILITY_REPORT_STATUS,
+            CAPABILITY_DELETE_REFS,
+            CAPABILITY_QUIET,
+            CAPABILITY_OFS_DELTA,
+            CAPABILITY_SIDE_BAND_64K,
+            CAPABILITY_NO_DONE,
+        ]
+
+    def _apply_pack(
+        self, refs: List[Tuple[bytes, bytes, bytes]]
+    ) -> List[Tuple[bytes, bytes]]:
+        all_exceptions = (
+            IOError,
+            OSError,
+            ChecksumMismatch,
+            ApplyDeltaError,
+            AssertionError,
+            socket.error,
+            zlib.error,
+            ObjectFormatException,
+        )
+        status = []
+        will_send_pack = False
+
+        for command in refs:
+            if command[1] != ZERO_SHA:
+                will_send_pack = True
+
+        if will_send_pack:
+            # TODO: more informative error messages than just the exception
+            # string
+            try:
+                recv = getattr(self.proto, "recv", None)
+                self.repo.object_store.add_thin_pack(self.proto.read, recv)
+                status.append((b"unpack", b"ok"))
+            except all_exceptions as e:
+                status.append((b"unpack", str(e).replace("\n", "").encode("utf-8")))
+                # The pack may still have been moved in, but it may contain
+                # broken objects. We trust a later GC to clean it up.
+        else:
+            # The git protocol wants to find a status entry related to the
+            # unpack process even if no pack data has been sent.
+            status.append((b"unpack", b"ok"))
+
+        for oldsha, sha, ref in refs:
+            ref_status = b"ok"
+            try:
+                if sha == ZERO_SHA:
+                    if CAPABILITY_DELETE_REFS not in self.capabilities():
+                        raise GitProtocolError(
+                            "Attempted to delete refs without delete-refs "
+                            "capability."
+                        )
+                    try:
+                        self.repo.refs.remove_if_equals(ref, oldsha)
+                    except all_exceptions:
+                        ref_status = b"failed to delete"
+                else:
+                    try:
+                        self.repo.refs.set_if_equals(ref, oldsha, sha)
+                    except all_exceptions:
+                        ref_status = b"failed to write"
+            except KeyError:
+                ref_status = b"bad ref"
+            status.append((ref, ref_status))
+
+        return status
+
+    def _report_status(self, status: List[Tuple[bytes, bytes]]) -> None:
+        if self.has_capability(CAPABILITY_SIDE_BAND_64K):
+            writer = BufferedPktLineWriter(
+                lambda d: self.proto.write_sideband(SIDE_BAND_CHANNEL_DATA, d)
+            )
+            write = writer.write
+
+            def flush():
+                writer.flush()
+                self.proto.write_pkt_line(None)
+
+        else:
+            write = self.proto.write_pkt_line
+
+            def flush():
+                pass
+
+        for name, msg in status:
+            if name == b"unpack":
+                write(b"unpack " + msg + b"\n")
+            elif msg == b"ok":
+                write(b"ok " + name + b"\n")
+            else:
+                write(b"ng " + name + b" " + msg + b"\n")
+        write(None)
+        flush()
+
+    def _on_post_receive(self, client_refs):
+        hook = self.repo.hooks.get("post-receive", None)
+        if not hook:
+            return
+        try:
+            output = hook.execute(client_refs)
+            if output:
+                self.proto.write_sideband(SIDE_BAND_CHANNEL_PROGRESS, output)
+        except HookError as err:
+            self.proto.write_sideband(SIDE_BAND_CHANNEL_FATAL, str(err).encode('utf-8'))
+
+    def handle(self) -> None:
+        if self.advertise_refs or not self.stateless_rpc:
+            refs = sorted(self.repo.get_refs().items())
+            symrefs = sorted(self.repo.refs.get_symrefs().items())
+
+            if not refs:
+                refs = [(CAPABILITIES_REF, ZERO_SHA)]
+            logger.info(
+                "Sending capabilities: %s", self.capabilities())
+            self.proto.write_pkt_line(
+                format_ref_line(
+                    refs[0][0], refs[0][1],
+                    self.capabilities() + symref_capabilities(symrefs)))
+            for i in range(1, len(refs)):
+                ref = refs[i]
+                self.proto.write_pkt_line(format_ref_line(ref[0], ref[1]))
+
+            self.proto.write_pkt_line(None)
+            if self.advertise_refs:
+                return
+
+        client_refs = []
+        ref = self.proto.read_pkt_line()
+
+        # If ref is None (a flush-pkt), the client doesn't want to send us
+        # anything.
+        if ref is None:
+            return
+
+        ref, caps = extract_capabilities(ref)
+        self.set_client_capabilities(caps)
+
+        # The client will now send us a list of (oldsha, newsha, ref) lines.
+        while ref:
+            client_refs.append(ref.split())
+            ref = self.proto.read_pkt_line()
+
+        # The backend can now deal with these refs and read a pack using
+        # self.read.
+        status = self._apply_pack(client_refs)
+
+        self._on_post_receive(client_refs)
+
+        # When we have read all of the pack from the client, send a status
+        # report if the client asked for it.
+        if self.has_capability(CAPABILITY_REPORT_STATUS):
+            self._report_status(status)
+
+
+class UploadArchiveHandler(Handler):
+    def __init__(self, backend, args, proto, stateless_rpc=False):
+        super(UploadArchiveHandler, self).__init__(backend, proto, stateless_rpc)
+        self.repo = backend.open_repository(args[0])
+
+    def handle(self):
+        def write(x):
+            return self.proto.write_sideband(SIDE_BAND_CHANNEL_DATA, x)
+
+        arguments = []
+        for pkt in self.proto.read_pkt_seq():
+            (key, value) = pkt.split(b" ", 1)
+            if key != b"argument":
+                raise GitProtocolError("unknown command %s" % key)
+            arguments.append(value.rstrip(b"\n"))
+        prefix = b""
+        format = "tar"
+        i = 0
+        store = self.repo.object_store
+        while i < len(arguments):
+            argument = arguments[i]
+            if argument == b"--prefix":
+                i += 1
+                prefix = arguments[i]
+            elif argument == b"--format":
+                i += 1
+                format = arguments[i].decode("ascii")
+            else:
+                commit_sha = self.repo.refs[argument]
+                tree = store[store[commit_sha].tree]
+            i += 1
+        self.proto.write_pkt_line(b"ACK")
+        self.proto.write_pkt_line(None)
+        for chunk in tar_stream(
+            store, tree, mtime=time.time(), prefix=prefix, format=format
+        ):
+            write(chunk)
+        self.proto.write_pkt_line(None)
+
+
+# Default handler classes for git services.
+DEFAULT_HANDLERS = {
+    b"git-upload-pack": UploadPackHandler,
+    b"git-receive-pack": ReceivePackHandler,
+    b"git-upload-archive": UploadArchiveHandler,
+}
+
+
+class TCPGitRequestHandler(socketserver.StreamRequestHandler):
+    def __init__(self, handlers, *args, **kwargs):
+        self.handlers = handlers
+        socketserver.StreamRequestHandler.__init__(self, *args, **kwargs)
+
+    def handle(self):
+        proto = ReceivableProtocol(self.connection.recv, self.wfile.write)
+        command, args = proto.read_cmd()
+        logger.info("Handling %s request, args=%s", command, args)
+
+        cls = self.handlers.get(command, None)
+        if not callable(cls):
+            raise GitProtocolError("Invalid service %s" % command)
+        h = cls(self.server.backend, args, proto)
+        h.handle()
+
+
+class TCPGitServer(socketserver.TCPServer):
+
+    allow_reuse_address = True
+    serve = socketserver.TCPServer.serve_forever
+
+    def _make_handler(self, *args, **kwargs):
+        return TCPGitRequestHandler(self.handlers, *args, **kwargs)
+
+    def __init__(self, backend, listen_addr, port=TCP_GIT_PORT, handlers=None):
+        self.handlers = dict(DEFAULT_HANDLERS)
+        if handlers is not None:
+            self.handlers.update(handlers)
+        self.backend = backend
+        logger.info("Listening for TCP connections on %s:%d", listen_addr, port)
+        socketserver.TCPServer.__init__(self, (listen_addr, port), self._make_handler)
+
+    def verify_request(self, request, client_address):
+        logger.info("Handling request from %s", client_address)
+        return True
+
+    def handle_error(self, request, client_address):
+        logger.exception(
+            "Exception happened during processing of request " "from %s",
+            client_address,
+        )
+
+
+def main(argv=sys.argv):
+    """Entry point for starting a TCP git server."""
+    import optparse
+
+    parser = optparse.OptionParser()
+    parser.add_option(
+        "-l",
"--listen_address", + dest="listen_address", + default="localhost", + help="Binding IP address.", + ) + parser.add_option( + "-p", + "--port", + dest="port", + type=int, + default=TCP_GIT_PORT, + help="Binding TCP port.", + ) + options, args = parser.parse_args(argv) + + log_utils.default_logging_config() + if len(args) > 1: + gitdir = args[1] + else: + gitdir = "." + # TODO(jelmer): Support git-daemon-export-ok and --export-all. + backend = FileSystemBackend(gitdir) + server = TCPGitServer(backend, options.listen_address, options.port) + server.serve_forever() + + +def serve_command( + handler_cls, argv=sys.argv, backend=None, inf=sys.stdin, outf=sys.stdout +): + """Serve a single command. + + This is mostly useful for the implementation of commands used by e.g. + git+ssh. + + Args: + handler_cls: `Handler` class to use for the request + argv: execv-style command-line arguments. Defaults to sys.argv. + backend: `Backend` to use + inf: File-like object to read from, defaults to standard input. + outf: File-like object to write to, defaults to standard output. + Returns: Exit code for use with sys.exit. 0 on success, 1 on failure. + """ + if backend is None: + backend = FileSystemBackend() + + def send_fn(data): + outf.write(data) + outf.flush() + + proto = Protocol(inf.read, send_fn) + handler = handler_cls(backend, argv[1:], proto) + # FIXME: Catch exceptions and write a single-line summary to outf. + handler.handle() + return 0 + + +def generate_info_refs(repo): + """Generate an info refs file.""" + refs = repo.get_refs() + return write_info_refs(refs, repo.object_store) + + +def generate_objects_info_packs(repo): + """Generate an index for for packs.""" + for pack in repo.object_store.packs: + yield (b"P " + os.fsencode(pack.data.filename) + b"\n") + + +def update_server_info(repo): + """Generate server info for dumb file access. + + This generates info/refs and objects/info/packs, + similar to "git update-server-info". + """ + repo._put_named_file( + os.path.join("info", "refs"), b"".join(generate_info_refs(repo)) + ) + + repo._put_named_file( + os.path.join("objects", "info", "packs"), + b"".join(generate_objects_info_packs(repo)), + ) + + +if __name__ == "__main__": + main() diff --git a/env/lib/python3.8/site-packages/dulwich/stash.py b/env/lib/python3.8/site-packages/dulwich/stash.py new file mode 100644 index 00000000..34f41022 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/stash.py @@ -0,0 +1,137 @@ +# stash.py +# Copyright (C) 2018 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +"""Stash handling.""" + +from __future__ import absolute_import + +import os + +from dulwich.file import GitFile +from dulwich.index import ( + commit_tree, + iter_fresh_objects, +) +from dulwich.reflog import drop_reflog_entry, read_reflog + + +DEFAULT_STASH_REF = b"refs/stash" + + +class Stash(object): + """A Git stash. + + Note that this doesn't currently update the working tree. + """ + + def __init__(self, repo, ref=DEFAULT_STASH_REF): + self._ref = ref + self._repo = repo + + @property + def _reflog_path(self): + return os.path.join( + self._repo.commondir(), "logs", os.fsdecode(self._ref) + ) + + def stashes(self): + try: + with GitFile(self._reflog_path, "rb") as f: + return reversed(list(read_reflog(f))) + except FileNotFoundError: + return [] + + @classmethod + def from_repo(cls, repo): + """Create a new stash from a Repo object.""" + return cls(repo) + + def drop(self, index): + """Drop entry with specified index.""" + with open(self._reflog_path, "rb+") as f: + drop_reflog_entry(f, index, rewrite=True) + if len(self) == 0: + os.remove(self._reflog_path) + del self._repo.refs[self._ref] + return + if index == 0: + self._repo.refs[self._ref] = self[0].new_sha + + def pop(self, index): + raise NotImplementedError(self.pop) + + def push(self, committer=None, author=None, message=None): + """Create a new stash. + + Args: + committer: Optional committer name to use + author: Optional author name to use + message: Optional commit message + """ + # First, create the index commit. + commit_kwargs = {} + if committer is not None: + commit_kwargs["committer"] = committer + if author is not None: + commit_kwargs["author"] = author + + index = self._repo.open_index() + index_tree_id = index.commit(self._repo.object_store) + index_commit_id = self._repo.do_commit( + ref=None, + tree=index_tree_id, + message=b"Index stash", + merge_heads=[self._repo.head()], + no_verify=True, + **commit_kwargs + ) + + # Then, the working tree one. + stash_tree_id = commit_tree( + self._repo.object_store, + iter_fresh_objects( + index, + os.fsencode(self._repo.path), + object_store=self._repo.object_store, + ), + ) + + if message is None: + message = b"A stash on " + self._repo.head() + + # TODO(jelmer): Just pass parents into do_commit()? 
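+        # Setting the ref to HEAD first makes do_commit() record HEAD as the
+        # first parent of the stash commit; the index commit is added as a
+        # second parent via merge_heads.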
+        self._repo.refs[self._ref] = self._repo.head()
+
+        cid = self._repo.do_commit(
+            ref=self._ref,
+            tree=stash_tree_id,
+            message=message,
+            merge_heads=[index_commit_id],
+            no_verify=True,
+            **commit_kwargs
+        )
+
+        return cid
+
+    def __getitem__(self, index):
+        return list(self.stashes())[index]
+
+    def __len__(self):
+        return len(list(self.stashes()))
diff --git a/env/lib/python3.8/site-packages/dulwich/stdint.h b/env/lib/python3.8/site-packages/dulwich/stdint.h
new file mode 100644
index 00000000..682bbd05
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/stdint.h
@@ -0,0 +1,19 @@
+/**
+ * Replacement of gcc's stdint.h for MSVC
+ */
+
+#ifndef STDINT_H
+#define STDINT_H
+
+typedef signed char int8_t;
+typedef signed short int16_t;
+typedef signed int int32_t;
+typedef signed long long int64_t;
+
+typedef unsigned char uint8_t;
+typedef unsigned short uint16_t;
+typedef unsigned int uint32_t;
+typedef unsigned long long uint64_t;
+
+
+#endif
diff --git a/env/lib/python3.8/site-packages/dulwich/submodule.py b/env/lib/python3.8/site-packages/dulwich/submodule.py
new file mode 100644
index 00000000..dc98d879
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/submodule.py
@@ -0,0 +1,40 @@
+# submodule.py - Working with Git submodules
+# Copyright (C) 2011-2013 Jelmer Vernooij
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Working with Git submodules.
+"""
+
+from typing import Iterator, Tuple
+from .objects import S_ISGITLINK
+
+
+def iter_cached_submodules(store, root_tree_id: bytes) -> Iterator[Tuple[str, bytes]]:
+    """Iterate over cached submodules.
+
+    Args:
+      store: Object store to iterate
+      root_tree_id: SHA of root tree
+
+    Returns:
+      Iterator over (path, sha) tuples
+    """
+    for entry in store.iter_tree_contents(root_tree_id):
+        if S_ISGITLINK(entry.mode):
+            yield entry.path, entry.sha
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/__init__.py b/env/lib/python3.8/site-packages/dulwich/tests/__init__.py
new file mode 100644
index 00000000..17af13b1
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/__init__.py
@@ -0,0 +1,223 @@
+# __init__.py -- The tests for dulwich
+# Copyright (C) 2007 James Westby
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Tests for Dulwich."""
+
+__all__ = [
+    'SkipTest',
+    'TestCase',
+    'BlackboxTestCase',
+    'skipIf',
+    'expectedFailure',
+]
+
+import doctest
+import os
+import shutil
+import subprocess
+import sys
+import tempfile
+
+
+# If Python itself provides an exception, use that
+import unittest
+from unittest import (  # noqa: F401
+    SkipTest,
+    TestCase as _TestCase,
+    skipIf,
+    expectedFailure,
+)
+
+
+class TestCase(_TestCase):
+    def setUp(self):
+        super(TestCase, self).setUp()
+        self._old_home = os.environ.get("HOME")
+        os.environ["HOME"] = "/nonexistent"
+        os.environ["GIT_CONFIG_NOSYSTEM"] = "1"
+
+    def tearDown(self):
+        super(TestCase, self).tearDown()
+        if self._old_home:
+            os.environ["HOME"] = self._old_home
+        else:
+            del os.environ["HOME"]
+
+
+class BlackboxTestCase(TestCase):
+    """Blackbox testing."""
+
+    # TODO(jelmer): Include more possible binary paths.
+    bin_directories = [
+        os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", "bin")),
+        "/usr/bin",
+        "/usr/local/bin",
+    ]
+
+    def bin_path(self, name):
+        """Determine the full path of a binary.
+
+        Args:
+          name: Name of the script
+        Returns: Full path
+        """
+        for d in self.bin_directories:
+            p = os.path.join(d, name)
+            if os.path.isfile(p):
+                return p
+        else:
+            raise SkipTest("Unable to find binary %s" % name)
+
+    def run_command(self, name, args):
+        """Run a Dulwich command.
+
+        Args:
+          name: Name of the command, as it exists in bin/
+          args: Arguments to the command
+        """
+        env = dict(os.environ)
+        env["PYTHONPATH"] = os.pathsep.join(sys.path)
+
+        # Since they don't have any extensions, Windows can't recognize the
+        # executability of the Python files in /bin. Even then, we'd have to
+        # expect the user to set up file associations for .py files.
+        #
+        # Save us from all that headache and call python with the bin script.
+ argv = [sys.executable, self.bin_path(name)] + args + return subprocess.Popen( + argv, + stdout=subprocess.PIPE, + stdin=subprocess.PIPE, + stderr=subprocess.PIPE, + env=env, + ) + + +def self_test_suite(): + names = [ + "archive", + "blackbox", + "bundle", + "client", + "config", + "credentials", + "diff_tree", + "fastexport", + "file", + "grafts", + "graph", + "greenthreads", + "hooks", + "ignore", + "index", + "lfs", + "line_ending", + "lru_cache", + "mailmap", + "objects", + "objectspec", + "object_store", + "missing_obj_finder", + "pack", + "patch", + "porcelain", + "protocol", + "reflog", + "refs", + "repository", + "server", + "stash", + "utils", + "walk", + "web", + ] + module_names = ["dulwich.tests.test_" + name for name in names] + loader = unittest.TestLoader() + return loader.loadTestsFromNames(module_names) + + +def tutorial_test_suite(): + import dulwich.client + import dulwich.config + import dulwich.index + import dulwich.reflog + import dulwich.repo + import dulwich.server + import dulwich.patch # noqa: F401 + + tutorial = [ + "introduction", + "file-format", + "repo", + "object-store", + "remote", + "conclusion", + ] + tutorial_files = ["../../docs/tutorial/%s.txt" % name for name in tutorial] + + def setup(test): + test.__old_cwd = os.getcwd() + test.tempdir = tempfile.mkdtemp() + test.globs.update({"tempdir": test.tempdir}) + os.chdir(test.tempdir) + + def teardown(test): + os.chdir(test.__old_cwd) + shutil.rmtree(test.tempdir) + + return doctest.DocFileSuite( + module_relative=True, + package="dulwich.tests", + setUp=setup, + tearDown=teardown, + *tutorial_files + ) + + +def nocompat_test_suite(): + result = unittest.TestSuite() + result.addTests(self_test_suite()) + result.addTests(tutorial_test_suite()) + from dulwich.contrib import test_suite as contrib_test_suite + + result.addTests(contrib_test_suite()) + return result + + +def compat_test_suite(): + result = unittest.TestSuite() + from dulwich.tests.compat import test_suite as compat_test_suite + + result.addTests(compat_test_suite()) + return result + + +def test_suite(): + result = unittest.TestSuite() + result.addTests(self_test_suite()) + if sys.platform != "win32": + result.addTests(tutorial_test_suite()) + from dulwich.tests.compat import test_suite as compat_test_suite + + result.addTests(compat_test_suite()) + from dulwich.contrib import test_suite as contrib_test_suite + + result.addTests(contrib_test_suite()) + return result diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/__init__.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/__init__.py new file mode 100644 index 00000000..24747775 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/__init__.py @@ -0,0 +1,42 @@ +# __init__.py -- Compatibility tests for dulwich +# Copyright (C) 2010 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Compatibility tests for Dulwich."""
+
+import unittest
+
+
+def test_suite():
+    names = [
+        "client",
+        "pack",
+        "patch",
+        "porcelain",
+        "repository",
+        "server",
+        "utils",
+        "web",
+    ]
+    module_names = ["dulwich.tests.compat.test_" + name for name in names]
+    result = unittest.TestSuite()
+    loader = unittest.TestLoader()
+    suite = loader.loadTestsFromNames(module_names)
+    result.addTests(suite)
+    return result
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/server_utils.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/server_utils.py
new file mode 100644
index 00000000..7837e1b9
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/server_utils.py
@@ -0,0 +1,368 @@
+# server_utils.py -- Git server compatibility utilities
+# Copyright (C) 2010 Google, Inc.
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Utilities for testing git server compatibility."""
+
+import errno
+import os
+import shutil
+import socket
+import tempfile
+
+from dulwich.repo import Repo
+from dulwich.objects import hex_to_sha
+from dulwich.protocol import (
+    CAPABILITY_SIDE_BAND_64K,
+)
+from dulwich.server import (
+    ReceivePackHandler,
+)
+from dulwich.tests.utils import (
+    tear_down_repo,
+)
+from dulwich.tests.compat.utils import (
+    run_git_or_fail,
+)
+from dulwich.tests.compat.utils import require_git_version
+
+
+class _StubRepo(object):
+    """A stub repo that just contains a path to tear down."""
+
+    def __init__(self, name):
+        temp_dir = tempfile.mkdtemp()
+        self.path = os.path.join(temp_dir, name)
+        os.mkdir(self.path)
+
+    def close(self):
+        pass
+
+
+def _get_shallow(repo):
+    shallow_file = repo.get_named_file("shallow")
+    if not shallow_file:
+        return []
+    shallows = []
+    with shallow_file:
+        for line in shallow_file:
+            sha = line.strip()
+            if not sha:
+                continue
+            hex_to_sha(sha)
+            shallows.append(sha)
+    return shallows
+
+
+class ServerTests(object):
+    """Base tests for testing servers.
+
+    Does not inherit from TestCase so tests are not automatically run.
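+    Subclasses are expected to provide import_repo(), _start_server(),
+    addCleanup() and the assertRepos*/assertObjectStoreEqual assertions
+    used by these tests.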
+ """ + + min_single_branch_version = ( + 1, + 7, + 10, + ) + + def import_repos(self): + self._old_repo = self.import_repo("server_old.export") + self._new_repo = self.import_repo("server_new.export") + + def url(self, port): + return "%s://localhost:%s/" % (self.protocol, port) + + def branch_args(self, branches=None): + if branches is None: + branches = ["master", "branch"] + return ["%s:%s" % (b, b) for b in branches] + + def test_push_to_dulwich(self): + self.import_repos() + self.assertReposNotEqual(self._old_repo, self._new_repo) + port = self._start_server(self._old_repo) + + run_git_or_fail( + ["push", self.url(port)] + self.branch_args(), + cwd=self._new_repo.path, + ) + self.assertReposEqual(self._old_repo, self._new_repo) + + def test_push_to_dulwich_no_op(self): + self._old_repo = self.import_repo("server_old.export") + self._new_repo = self.import_repo("server_old.export") + self.assertReposEqual(self._old_repo, self._new_repo) + port = self._start_server(self._old_repo) + + run_git_or_fail( + ["push", self.url(port)] + self.branch_args(), + cwd=self._new_repo.path, + ) + self.assertReposEqual(self._old_repo, self._new_repo) + + def test_push_to_dulwich_remove_branch(self): + self._old_repo = self.import_repo("server_old.export") + self._new_repo = self.import_repo("server_old.export") + self.assertReposEqual(self._old_repo, self._new_repo) + port = self._start_server(self._old_repo) + + run_git_or_fail(["push", self.url(port), ":master"], cwd=self._new_repo.path) + + self.assertEqual(list(self._old_repo.get_refs().keys()), [b"refs/heads/branch"]) + + def test_fetch_from_dulwich(self): + self.import_repos() + self.assertReposNotEqual(self._old_repo, self._new_repo) + port = self._start_server(self._new_repo) + + run_git_or_fail( + ["fetch", self.url(port)] + self.branch_args(), + cwd=self._old_repo.path, + ) + # flush the pack cache so any new packs are picked up + self._old_repo.object_store._pack_cache_time = 0 + self.assertReposEqual(self._old_repo, self._new_repo) + + def test_fetch_from_dulwich_no_op(self): + self._old_repo = self.import_repo("server_old.export") + self._new_repo = self.import_repo("server_old.export") + self.assertReposEqual(self._old_repo, self._new_repo) + port = self._start_server(self._new_repo) + + run_git_or_fail( + ["fetch", self.url(port)] + self.branch_args(), + cwd=self._old_repo.path, + ) + # flush the pack cache so any new packs are picked up + self._old_repo.object_store._pack_cache_time = 0 + self.assertReposEqual(self._old_repo, self._new_repo) + + def test_clone_from_dulwich_empty(self): + old_repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, old_repo_dir) + self._old_repo = Repo.init_bare(old_repo_dir) + port = self._start_server(self._old_repo) + + new_repo_base_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, new_repo_base_dir) + new_repo_dir = os.path.join(new_repo_base_dir, "empty_new") + run_git_or_fail(["clone", self.url(port), new_repo_dir], cwd=new_repo_base_dir) + new_repo = Repo(new_repo_dir) + self.assertReposEqual(self._old_repo, new_repo) + + def test_lsremote_from_dulwich(self): + self._repo = self.import_repo("server_old.export") + port = self._start_server(self._repo) + o = run_git_or_fail(["ls-remote", self.url(port)]) + self.assertEqual(len(o.split(b"\n")), 4) + + def test_new_shallow_clone_from_dulwich(self): + require_git_version(self.min_single_branch_version) + self._source_repo = self.import_repo("server_new.export") + self._stub_repo = _StubRepo("shallow") + self.addCleanup(tear_down_repo, 
self._stub_repo) + port = self._start_server(self._source_repo) + + # Fetch at depth 1 + run_git_or_fail( + [ + "clone", + "--mirror", + "--depth=1", + "--no-single-branch", + self.url(port), + self._stub_repo.path, + ] + ) + clone = self._stub_repo = Repo(self._stub_repo.path) + expected_shallow = [ + b"35e0b59e187dd72a0af294aedffc213eaa4d03ff", + b"514dc6d3fbfe77361bcaef320c4d21b72bc10be9", + ] + self.assertEqual(expected_shallow, _get_shallow(clone)) + self.assertReposNotEqual(clone, self._source_repo) + + def test_shallow_clone_from_git_is_identical(self): + require_git_version(self.min_single_branch_version) + self._source_repo = self.import_repo("server_new.export") + self._stub_repo_git = _StubRepo("shallow-git") + self.addCleanup(tear_down_repo, self._stub_repo_git) + self._stub_repo_dw = _StubRepo("shallow-dw") + self.addCleanup(tear_down_repo, self._stub_repo_dw) + + # shallow clone using stock git, then using dulwich + run_git_or_fail( + [ + "clone", + "--mirror", + "--depth=1", + "--no-single-branch", + "file://" + self._source_repo.path, + self._stub_repo_git.path, + ] + ) + + port = self._start_server(self._source_repo) + run_git_or_fail( + [ + "clone", + "--mirror", + "--depth=1", + "--no-single-branch", + self.url(port), + self._stub_repo_dw.path, + ] + ) + + # compare the two clones; they should be equal + self.assertReposEqual( + Repo(self._stub_repo_git.path), Repo(self._stub_repo_dw.path) + ) + + def test_fetch_same_depth_into_shallow_clone_from_dulwich(self): + require_git_version(self.min_single_branch_version) + self._source_repo = self.import_repo("server_new.export") + self._stub_repo = _StubRepo("shallow") + self.addCleanup(tear_down_repo, self._stub_repo) + port = self._start_server(self._source_repo) + + # Fetch at depth 2 + run_git_or_fail( + [ + "clone", + "--mirror", + "--depth=2", + "--no-single-branch", + self.url(port), + self._stub_repo.path, + ] + ) + clone = self._stub_repo = Repo(self._stub_repo.path) + + # Fetching at the same depth is a no-op. + run_git_or_fail( + ["fetch", "--depth=2", self.url(port)] + self.branch_args(), + cwd=self._stub_repo.path, + ) + expected_shallow = [ + b"94de09a530df27ac3bb613aaecdd539e0a0655e1", + b"da5cd81e1883c62a25bb37c4d1f8ad965b29bf8d", + ] + self.assertEqual(expected_shallow, _get_shallow(clone)) + self.assertReposNotEqual(clone, self._source_repo) + + def test_fetch_full_depth_into_shallow_clone_from_dulwich(self): + require_git_version(self.min_single_branch_version) + self._source_repo = self.import_repo("server_new.export") + self._stub_repo = _StubRepo("shallow") + self.addCleanup(tear_down_repo, self._stub_repo) + port = self._start_server(self._source_repo) + + # Fetch at depth 2 + run_git_or_fail( + [ + "clone", + "--mirror", + "--depth=2", + "--no-single-branch", + self.url(port), + self._stub_repo.path, + ] + ) + clone = self._stub_repo = Repo(self._stub_repo.path) + + # Fetching at the same depth is a no-op. + run_git_or_fail( + ["fetch", "--depth=2", self.url(port)] + self.branch_args(), + cwd=self._stub_repo.path, + ) + + # The whole repo only has depth 4, so it should equal server_new. + run_git_or_fail( + ["fetch", "--depth=4", self.url(port)] + self.branch_args(), + cwd=self._stub_repo.path, + ) + self.assertEqual([], _get_shallow(clone)) + self.assertReposEqual(clone, self._source_repo) + + def test_fetch_from_dulwich_issue_88_standard(self): + # Basically an integration test to see that the ACK/NAK + # generation works on repos with common head. 
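+        # The issue88 exports share history, so the client should send
+        # 'have' lines that the server ACKs before transmitting the pack.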
+        self._source_repo = self.import_repo("issue88_expect_ack_nak_server.export")
+        self._client_repo = self.import_repo("issue88_expect_ack_nak_client.export")
+        port = self._start_server(self._source_repo)
+
+        run_git_or_fail(["fetch", self.url(port), "master"], cwd=self._client_repo.path)
+        self.assertObjectStoreEqual(
+            self._source_repo.object_store, self._client_repo.object_store
+        )
+
+    def test_fetch_from_dulwich_issue_88_alternative(self):
+        # likewise, but the case where the two repos have no common parent
+        self._source_repo = self.import_repo("issue88_expect_ack_nak_other.export")
+        self._client_repo = self.import_repo("issue88_expect_ack_nak_client.export")
+        port = self._start_server(self._source_repo)
+
+        self.assertRaises(
+            KeyError,
+            self._client_repo.get_object,
+            b"02a14da1fc1fc13389bbf32f0af7d8899f2b2323",
+        )
+        run_git_or_fail(["fetch", self.url(port), "master"], cwd=self._client_repo.path)
+        self.assertEqual(
+            b"commit",
+            self._client_repo.get_object(
+                b"02a14da1fc1fc13389bbf32f0af7d8899f2b2323"
+            ).type_name,
+        )
+
+    def test_push_to_dulwich_issue_88_standard(self):
+        # Same thing, but we reverse the role of the server/client
+        # and do a push instead.
+        self._source_repo = self.import_repo("issue88_expect_ack_nak_client.export")
+        self._client_repo = self.import_repo("issue88_expect_ack_nak_server.export")
+        port = self._start_server(self._source_repo)
+
+        run_git_or_fail(["push", self.url(port), "master"], cwd=self._client_repo.path)
+        self.assertReposEqual(self._source_repo, self._client_repo)
+
+
+# TODO(dborowitz): Come up with a better way of testing various permutations of
+# capabilities. The only reason it is the way it is now is that side-band-64k
+# was only recently introduced into git-receive-pack.
+class NoSideBand64kReceivePackHandler(ReceivePackHandler):
+    """ReceivePackHandler that does not support side-band-64k."""
+
+    @classmethod
+    def capabilities(cls):
+        return [
+            c
+            for c in ReceivePackHandler.capabilities()
+            if c != CAPABILITY_SIDE_BAND_64K
+        ]
+
+
+def ignore_error(error):
+    """Check whether this error is safe to ignore."""
+    (e_type, e_value, e_tb) = error
+    return issubclass(e_type, socket.error) and e_value[0] in (
+        errno.ECONNRESET,
+        errno.EPIPE,
+    )
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/test_client.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_client.py
new file mode 100644
index 00000000..e9d5d2d1
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_client.py
@@ -0,0 +1,680 @@
+# test_client.py -- Compatibility tests for git client.
+# Copyright (C) 2010 Google, Inc.
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+# + +"""Compatibility tests between the Dulwich client and the cgit server.""" + +import copy +from io import BytesIO +import os +import select +import signal +import stat +import subprocess +import sys +import tarfile +import tempfile +import threading + +from urllib.parse import unquote + +import http.server + +from dulwich import ( + client, + file, + index, + protocol, + objects, + repo, +) +from dulwich.tests import ( + SkipTest, + expectedFailure, +) +from dulwich.tests.compat.utils import ( + CompatTestCase, + check_for_daemon, + import_repo_to_dir, + rmtree_ro, + run_git_or_fail, + _DEFAULT_GIT, +) + + +if sys.platform == "win32": + import ctypes + + +class DulwichClientTestBase(object): + """Tests for client/server compatibility.""" + + def setUp(self): + self.gitroot = os.path.dirname( + import_repo_to_dir("server_new.export").rstrip(os.sep) + ) + self.dest = os.path.join(self.gitroot, "dest") + file.ensure_dir_exists(self.dest) + run_git_or_fail(["init", "--quiet", "--bare"], cwd=self.dest) + + def tearDown(self): + rmtree_ro(self.gitroot) + + def assertDestEqualsSrc(self): + repo_dir = os.path.join(self.gitroot, "server_new.export") + dest_repo_dir = os.path.join(self.gitroot, "dest") + with repo.Repo(repo_dir) as src: + with repo.Repo(dest_repo_dir) as dest: + self.assertReposEqual(src, dest) + + def _client(self): + raise NotImplementedError() + + def _build_path(self): + raise NotImplementedError() + + def _do_send_pack(self): + c = self._client() + srcpath = os.path.join(self.gitroot, "server_new.export") + with repo.Repo(srcpath) as src: + sendrefs = dict(src.get_refs()) + del sendrefs[b"HEAD"] + c.send_pack( + self._build_path("/dest"), + lambda _: sendrefs, + src.generate_pack_data, + ) + + def test_send_pack(self): + self._do_send_pack() + self.assertDestEqualsSrc() + + def test_send_pack_nothing_to_send(self): + self._do_send_pack() + self.assertDestEqualsSrc() + # nothing to send, but shouldn't raise either. 
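+        # (The second send_pack offers the same refs again, so the server
+        # has nothing to update.)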
+        self._do_send_pack()
+
+    @staticmethod
+    def _add_file(repo, tree_id, filename, contents):
+        tree = repo[tree_id]
+        blob = objects.Blob()
+        blob.data = contents.encode("utf-8")
+        repo.object_store.add_object(blob)
+        tree.add(filename.encode("utf-8"), stat.S_IFREG | 0o644, blob.id)
+        repo.object_store.add_object(tree)
+        return tree.id
+
+    def test_send_pack_from_shallow_clone(self):
+        c = self._client()
+        server_new_path = os.path.join(self.gitroot, "server_new.export")
+        run_git_or_fail(["config", "http.uploadpack", "true"], cwd=server_new_path)
+        run_git_or_fail(["config", "http.receivepack", "true"], cwd=server_new_path)
+        remote_path = self._build_path("/server_new.export")
+        with repo.Repo(self.dest) as local:
+            result = c.fetch(remote_path, local, depth=1)
+            for r in result.refs.items():
+                local.refs.set_if_equals(r[0], None, r[1])
+            tree_id = local[local.head()].tree
+            for filename, contents in [
+                ("bar", "bar contents"),
+                ("zop", "zop contents"),
+            ]:
+                tree_id = self._add_file(local, tree_id, filename, contents)
+                commit_id = local.do_commit(
+                    message=b"add " + filename.encode("utf-8"),
+                    committer=b"Joe Example <joe@example.com>",
+                    tree=tree_id,
+                )
+            sendrefs = dict(local.get_refs())
+            del sendrefs[b"HEAD"]
+            c.send_pack(remote_path, lambda _: sendrefs, local.generate_pack_data)
+        with repo.Repo(server_new_path) as remote:
+            self.assertEqual(remote.head(), commit_id)
+
+    def test_send_without_report_status(self):
+        c = self._client()
+        c._send_capabilities.remove(b"report-status")
+        srcpath = os.path.join(self.gitroot, "server_new.export")
+        with repo.Repo(srcpath) as src:
+            sendrefs = dict(src.get_refs())
+            del sendrefs[b"HEAD"]
+            c.send_pack(
+                self._build_path("/dest"),
+                lambda _: sendrefs,
+                src.generate_pack_data,
+            )
+            self.assertDestEqualsSrc()
+
+    def make_dummy_commit(self, dest):
+        b = objects.Blob.from_string(b"hi")
+        dest.object_store.add_object(b)
+        t = index.commit_tree(dest.object_store, [(b"hi", b.id, 0o100644)])
+        c = objects.Commit()
+        c.author = c.committer = b"Foo Bar <foo@bar.com>"
+        c.author_time = c.commit_time = 0
+        c.author_timezone = c.commit_timezone = 0
+        c.message = b"hi"
+        c.tree = t
+        dest.object_store.add_object(c)
+        return c.id
+
+    def disable_ff_and_make_dummy_commit(self):
+        # disable non-fast-forward pushes to the server
+        dest = repo.Repo(os.path.join(self.gitroot, "dest"))
+        run_git_or_fail(
+            ["config", "receive.denyNonFastForwards", "true"], cwd=dest.path
+        )
+        commit_id = self.make_dummy_commit(dest)
+        return dest, commit_id
+
+    def compute_send(self, src):
+        sendrefs = dict(src.get_refs())
+        del sendrefs[b"HEAD"]
+        return sendrefs, src.generate_pack_data
+
+    def test_send_pack_one_error(self):
+        dest, dummy_commit = self.disable_ff_and_make_dummy_commit()
+        dest.refs[b"refs/heads/master"] = dummy_commit
+        repo_dir = os.path.join(self.gitroot, "server_new.export")
+        with repo.Repo(repo_dir) as src:
+            sendrefs, gen_pack = self.compute_send(src)
+            c = self._client()
+            result = c.send_pack(
+                self._build_path("/dest"), lambda _: sendrefs, gen_pack
+            )
+            self.assertEqual(
+                {
+                    b"refs/heads/branch": None,
+                    b"refs/heads/master": "non-fast-forward",
+                },
+                result.ref_status,
+            )
+
+    def test_send_pack_multiple_errors(self):
+        dest, dummy = self.disable_ff_and_make_dummy_commit()
+        # set up for two non-ff errors
+        branch, master = b"refs/heads/branch", b"refs/heads/master"
+        dest.refs[branch] = dest.refs[master] = dummy
+        repo_dir = os.path.join(self.gitroot, "server_new.export")
+        with repo.Repo(repo_dir) as src:
+            sendrefs, gen_pack = self.compute_send(src)
+            c = 
self._client() + result = c.send_pack( + self._build_path("/dest"), lambda _: sendrefs, gen_pack + ) + self.assertEqual( + {branch: "non-fast-forward", master: "non-fast-forward"}, + result.ref_status, + ) + + def test_archive(self): + c = self._client() + f = BytesIO() + c.archive(self._build_path("/server_new.export"), b"HEAD", f.write) + f.seek(0) + tf = tarfile.open(fileobj=f) + self.assertEqual(["baz", "foo"], tf.getnames()) + + def test_fetch_pack(self): + c = self._client() + with repo.Repo(os.path.join(self.gitroot, "dest")) as dest: + result = c.fetch(self._build_path("/server_new.export"), dest) + for r in result.refs.items(): + dest.refs.set_if_equals(r[0], None, r[1]) + self.assertDestEqualsSrc() + + def test_fetch_pack_depth(self): + c = self._client() + with repo.Repo(os.path.join(self.gitroot, "dest")) as dest: + result = c.fetch(self._build_path("/server_new.export"), dest, depth=1) + for r in result.refs.items(): + dest.refs.set_if_equals(r[0], None, r[1]) + self.assertEqual( + dest.get_shallow(), + set( + [ + b"35e0b59e187dd72a0af294aedffc213eaa4d03ff", + b"514dc6d3fbfe77361bcaef320c4d21b72bc10be9", + ] + ), + ) + + def test_repeat(self): + c = self._client() + with repo.Repo(os.path.join(self.gitroot, "dest")) as dest: + result = c.fetch(self._build_path("/server_new.export"), dest) + for r in result.refs.items(): + dest.refs.set_if_equals(r[0], None, r[1]) + self.assertDestEqualsSrc() + result = c.fetch(self._build_path("/server_new.export"), dest) + for r in result.refs.items(): + dest.refs.set_if_equals(r[0], None, r[1]) + self.assertDestEqualsSrc() + + def test_fetch_empty_pack(self): + c = self._client() + with repo.Repo(os.path.join(self.gitroot, "dest")) as dest: + result = c.fetch(self._build_path("/server_new.export"), dest) + for r in result.refs.items(): + dest.refs.set_if_equals(r[0], None, r[1]) + self.assertDestEqualsSrc() + + def dw(refs, **kwargs): + return list(refs.values()) + + result = c.fetch( + self._build_path("/server_new.export"), + dest, + determine_wants=dw, + ) + for r in result.refs.items(): + dest.refs.set_if_equals(r[0], None, r[1]) + self.assertDestEqualsSrc() + + def test_incremental_fetch_pack(self): + self.test_fetch_pack() + dest, dummy = self.disable_ff_and_make_dummy_commit() + dest.refs[b"refs/heads/master"] = dummy + c = self._client() + repo_dir = os.path.join(self.gitroot, "server_new.export") + with repo.Repo(repo_dir) as dest: + result = c.fetch(self._build_path("/dest"), dest) + for r in result.refs.items(): + dest.refs.set_if_equals(r[0], None, r[1]) + self.assertDestEqualsSrc() + + def test_fetch_pack_no_side_band_64k(self): + c = self._client() + c._fetch_capabilities.remove(b"side-band-64k") + with repo.Repo(os.path.join(self.gitroot, "dest")) as dest: + result = c.fetch(self._build_path("/server_new.export"), dest) + for r in result.refs.items(): + dest.refs.set_if_equals(r[0], None, r[1]) + self.assertDestEqualsSrc() + + def test_fetch_pack_zero_sha(self): + # zero sha1s are already present on the client, and should + # be ignored + c = self._client() + with repo.Repo(os.path.join(self.gitroot, "dest")) as dest: + result = c.fetch( + self._build_path("/server_new.export"), + dest, + lambda refs, **kwargs: [protocol.ZERO_SHA], + ) + for r in result.refs.items(): + dest.refs.set_if_equals(r[0], None, r[1]) + + def test_send_remove_branch(self): + with repo.Repo(os.path.join(self.gitroot, "dest")) as dest: + dummy_commit = self.make_dummy_commit(dest) + dest.refs[b"refs/heads/master"] = dummy_commit + 
dest.refs[b"refs/heads/abranch"] = dummy_commit + sendrefs = dict(dest.refs) + sendrefs[b"refs/heads/abranch"] = b"00" * 20 + del sendrefs[b"HEAD"] + + def gen_pack(have, want, ofs_delta=False): + return 0, [] + + c = self._client() + self.assertEqual(dest.refs[b"refs/heads/abranch"], dummy_commit) + c.send_pack(self._build_path("/dest"), lambda _: sendrefs, gen_pack) + self.assertNotIn(b"refs/heads/abranch", dest.refs) + + def test_send_new_branch_empty_pack(self): + with repo.Repo(os.path.join(self.gitroot, "dest")) as dest: + dummy_commit = self.make_dummy_commit(dest) + dest.refs[b"refs/heads/master"] = dummy_commit + dest.refs[b"refs/heads/abranch"] = dummy_commit + sendrefs = {b"refs/heads/bbranch": dummy_commit} + + def gen_pack(have, want, ofs_delta=False): + return 0, [] + + c = self._client() + self.assertEqual(dest.refs[b"refs/heads/abranch"], dummy_commit) + c.send_pack(self._build_path("/dest"), lambda _: sendrefs, gen_pack) + self.assertEqual(dummy_commit, dest.refs[b"refs/heads/abranch"]) + + def test_get_refs(self): + c = self._client() + refs = c.get_refs(self._build_path("/server_new.export")) + + repo_dir = os.path.join(self.gitroot, "server_new.export") + with repo.Repo(repo_dir) as dest: + self.assertDictEqual(dest.refs.as_dict(), refs) + + +class DulwichTCPClientTest(CompatTestCase, DulwichClientTestBase): + def setUp(self): + CompatTestCase.setUp(self) + DulwichClientTestBase.setUp(self) + if check_for_daemon(limit=1): + raise SkipTest( + "git-daemon was already running on port %s" % protocol.TCP_GIT_PORT + ) + fd, self.pidfile = tempfile.mkstemp( + prefix="dulwich-test-git-client", suffix=".pid" + ) + os.fdopen(fd).close() + args = [ + _DEFAULT_GIT, + "daemon", + "--verbose", + "--export-all", + "--pid-file=%s" % self.pidfile, + "--base-path=%s" % self.gitroot, + "--enable=receive-pack", + "--enable=upload-archive", + "--listen=localhost", + "--reuseaddr", + self.gitroot, + ] + self.process = subprocess.Popen( + args, + cwd=self.gitroot, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + ) + if not check_for_daemon(): + raise SkipTest("git-daemon failed to start") + + def tearDown(self): + with open(self.pidfile) as f: + pid = int(f.read().strip()) + if sys.platform == "win32": + PROCESS_TERMINATE = 1 + handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, pid) + ctypes.windll.kernel32.TerminateProcess(handle, -1) + ctypes.windll.kernel32.CloseHandle(handle) + else: + try: + os.kill(pid, signal.SIGKILL) + os.unlink(self.pidfile) + except (OSError, IOError): + pass + self.process.wait() + self.process.stdout.close() + self.process.stderr.close() + DulwichClientTestBase.tearDown(self) + CompatTestCase.tearDown(self) + + def _client(self): + return client.TCPGitClient("localhost") + + def _build_path(self, path): + return path + + if sys.platform == "win32": + + @expectedFailure + def test_fetch_pack_no_side_band_64k(self): + DulwichClientTestBase.test_fetch_pack_no_side_band_64k(self) + + def test_send_remove_branch(self): + # This test fails intermittently on my machine, probably due to some sort + # of race condition. 
Probably also related to #1015 + self.skipTest('skip flaky test; see #1015') + + +class TestSSHVendor(object): + @staticmethod + def run_command( + host, + command, + username=None, + port=None, + password=None, + key_filename=None, + ): + cmd, path = command.split(" ") + cmd = cmd.split("-", 1) + path = path.replace("'", "") + p = subprocess.Popen( + cmd + [path], + bufsize=0, + stdin=subprocess.PIPE, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + ) + return client.SubprocessWrapper(p) + + +class DulwichMockSSHClientTest(CompatTestCase, DulwichClientTestBase): + def setUp(self): + CompatTestCase.setUp(self) + DulwichClientTestBase.setUp(self) + self.real_vendor = client.get_ssh_vendor + client.get_ssh_vendor = TestSSHVendor + + def tearDown(self): + DulwichClientTestBase.tearDown(self) + CompatTestCase.tearDown(self) + client.get_ssh_vendor = self.real_vendor + + def _client(self): + return client.SSHGitClient("localhost") + + def _build_path(self, path): + return self.gitroot + path + + +class DulwichSubprocessClientTest(CompatTestCase, DulwichClientTestBase): + def setUp(self): + CompatTestCase.setUp(self) + DulwichClientTestBase.setUp(self) + + def tearDown(self): + DulwichClientTestBase.tearDown(self) + CompatTestCase.tearDown(self) + + def _client(self): + return client.SubprocessGitClient() + + def _build_path(self, path): + return self.gitroot + path + + +class GitHTTPRequestHandler(http.server.SimpleHTTPRequestHandler): + """HTTP Request handler that calls out to 'git http-backend'.""" + + # Make rfile unbuffered -- we need to read one line and then pass + # the rest to a subprocess, so we can't use buffered input. + rbufsize = 0 + + def do_POST(self): + self.run_backend() + + def do_GET(self): + self.run_backend() + + def send_head(self): + return self.run_backend() + + def log_request(self, code="-", size="-"): + # Let's be quiet, the test suite is noisy enough already + pass + + def run_backend(self): # noqa: C901 + """Call out to git http-backend.""" + # Based on CGIHTTPServer.CGIHTTPRequestHandler.run_cgi: + # Copyright (c) 2001-2010 Python Software Foundation; + # All Rights Reserved + # Licensed under the Python Software Foundation License. + rest = self.path + # find an explicit query string, if present. 
+ i = rest.rfind("?") + if i >= 0: + rest, query = rest[:i], rest[i + 1 :] + else: + query = "" + + env = copy.deepcopy(os.environ) + env["SERVER_SOFTWARE"] = self.version_string() + env["SERVER_NAME"] = self.server.server_name + env["GATEWAY_INTERFACE"] = "CGI/1.1" + env["SERVER_PROTOCOL"] = self.protocol_version + env["SERVER_PORT"] = str(self.server.server_port) + env["GIT_PROJECT_ROOT"] = self.server.root_path + env["GIT_HTTP_EXPORT_ALL"] = "1" + env["REQUEST_METHOD"] = self.command + uqrest = unquote(rest) + env["PATH_INFO"] = uqrest + env["SCRIPT_NAME"] = "/" + if query: + env["QUERY_STRING"] = query + host = self.address_string() + if host != self.client_address[0]: + env["REMOTE_HOST"] = host + env["REMOTE_ADDR"] = self.client_address[0] + authorization = self.headers.get("authorization") + if authorization: + authorization = authorization.split() + if len(authorization) == 2: + import base64 + import binascii + + env["AUTH_TYPE"] = authorization[0] + if authorization[0].lower() == "basic": + try: + authorization = base64.decodestring(authorization[1]) + except binascii.Error: + pass + else: + authorization = authorization.split(":") + if len(authorization) == 2: + env["REMOTE_USER"] = authorization[0] + # XXX REMOTE_IDENT + content_type = self.headers.get("content-type") + if content_type: + env["CONTENT_TYPE"] = content_type + length = self.headers.get("content-length") + if length: + env["CONTENT_LENGTH"] = length + referer = self.headers.get("referer") + if referer: + env["HTTP_REFERER"] = referer + accept = [] + for line in self.headers.getallmatchingheaders("accept"): + if line[:1] in "\t\n\r ": + accept.append(line.strip()) + else: + accept = accept + line[7:].split(",") + env["HTTP_ACCEPT"] = ",".join(accept) + ua = self.headers.get("user-agent") + if ua: + env["HTTP_USER_AGENT"] = ua + co = self.headers.get("cookie") + if co: + env["HTTP_COOKIE"] = co + # XXX Other HTTP_* headers + # Since we're setting the env in the parent, provide empty + # values to override previously set values + for k in ( + "QUERY_STRING", + "REMOTE_HOST", + "CONTENT_LENGTH", + "HTTP_USER_AGENT", + "HTTP_COOKIE", + "HTTP_REFERER", + ): + env.setdefault(k, "") + + self.wfile.write(b"HTTP/1.1 200 Script output follows\r\n") + self.wfile.write(("Server: %s\r\n" % self.server.server_name).encode("ascii")) + self.wfile.write(("Date: %s\r\n" % self.date_time_string()).encode("ascii")) + + decoded_query = query.replace("+", " ") + + try: + nbytes = int(length) + except (TypeError, ValueError): + nbytes = -1 + if self.command.lower() == "post": + if nbytes > 0: + data = self.rfile.read(nbytes) + elif self.headers.get('transfer-encoding') == 'chunked': + chunks = [] + while True: + line = self.rfile.readline() + length = int(line.rstrip(), 16) + chunk = self.rfile.read(length + 2) + chunks.append(chunk[:-2]) + if length == 0: + break + data = b''.join(chunks) + env["CONTENT_LENGTH"] = str(len(data)) + else: + raise AssertionError + else: + data = None + env["CONTENT_LENGTH"] = "0" + # throw away additional data [see bug #427345] + while select.select([self.rfile._sock], [], [], 0)[0]: + if not self.rfile._sock.recv(1): + break + args = ["http-backend"] + if "=" not in decoded_query: + args.append(decoded_query) + stdout = run_git_or_fail(args, input=data, env=env, stderr=subprocess.PIPE) + self.wfile.write(stdout) + + +class HTTPGitServer(http.server.HTTPServer): + + allow_reuse_address = True + + def __init__(self, server_address, root_path): + http.server.HTTPServer.__init__(self, server_address, 
GitHTTPRequestHandler) + self.root_path = root_path + self.server_name = "localhost" + + def get_url(self): + return "http://%s:%s/" % (self.server_name, self.server_port) + + +class DulwichHttpClientTest(CompatTestCase, DulwichClientTestBase): + + min_git_version = (1, 7, 0, 2) + + def setUp(self): + CompatTestCase.setUp(self) + DulwichClientTestBase.setUp(self) + self._httpd = HTTPGitServer(("localhost", 0), self.gitroot) + self.addCleanup(self._httpd.shutdown) + threading.Thread(target=self._httpd.serve_forever).start() + run_git_or_fail(["config", "http.uploadpack", "true"], cwd=self.dest) + run_git_or_fail(["config", "http.receivepack", "true"], cwd=self.dest) + + def tearDown(self): + DulwichClientTestBase.tearDown(self) + CompatTestCase.tearDown(self) + self._httpd.shutdown() + self._httpd.socket.close() + + def _client(self): + return client.HttpGitClient(self._httpd.get_url()) + + def _build_path(self, path): + return path + + def test_archive(self): + raise SkipTest("exporting archives not supported over http") diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/test_pack.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_pack.py new file mode 100644 index 00000000..6da2f0ff --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_pack.py @@ -0,0 +1,172 @@ +# test_pack.py -- Compatibility tests for git packs. +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+#
+
+"""Compatibility tests for git packs."""
+
+
+import binascii
+import os
+import re
+import shutil
+import tempfile
+
+from dulwich.pack import (
+    write_pack,
+)
+from dulwich.objects import (
+    Blob,
+)
+from dulwich.tests import (
+    SkipTest,
+)
+from dulwich.tests.test_pack import (
+    a_sha,
+    pack1_sha,
+    PackTests,
+)
+from dulwich.tests.compat.utils import (
+    require_git_version,
+    run_git_or_fail,
+)
+
+_NON_DELTA_RE = re.compile(b"non delta: (?P<non_delta>\\d+) objects")
+
+
+def _git_verify_pack_object_list(output):
+    pack_shas = set()
+    for line in output.splitlines():
+        sha = line[:40]
+        try:
+            binascii.unhexlify(sha)
+        except (TypeError, binascii.Error):
+            continue  # non-sha line
+        pack_shas.add(sha)
+    return pack_shas
+
+
+class TestPack(PackTests):
+    """Compatibility tests for reading and writing pack files."""
+
+    def setUp(self):
+        require_git_version((1, 5, 0))
+        super(TestPack, self).setUp()
+        self._tempdir = tempfile.mkdtemp()
+        self.addCleanup(shutil.rmtree, self._tempdir)
+
+    def test_copy(self):
+        with self.get_pack(pack1_sha) as origpack:
+            self.assertSucceeds(origpack.index.check)
+            pack_path = os.path.join(self._tempdir, "Elch")
+            write_pack(pack_path, origpack.pack_tuples())
+            output = run_git_or_fail(["verify-pack", "-v", pack_path])
+            orig_shas = {o.id for o in origpack.iterobjects()}
+            self.assertEqual(orig_shas, _git_verify_pack_object_list(output))
+
+    def test_deltas_work(self):
+        with self.get_pack(pack1_sha) as orig_pack:
+            orig_blob = orig_pack[a_sha]
+            new_blob = Blob()
+            new_blob.data = orig_blob.data + b"x"
+            all_to_pack = list(orig_pack.pack_tuples()) + [(new_blob, None)]
+            pack_path = os.path.join(self._tempdir, "pack_with_deltas")
+            write_pack(pack_path, all_to_pack, deltify=True)
+            output = run_git_or_fail(["verify-pack", "-v", pack_path])
+            self.assertEqual(
+                {x[0].id for x in all_to_pack},
+                _git_verify_pack_object_list(output),
+            )
+            # We specifically made a new blob that should be a delta
+            # against the blob a_sha, so make sure we really got only 3
+            # non-delta objects:
+            got_non_delta = int(_NON_DELTA_RE.search(output).group("non_delta"))
+            self.assertEqual(
+                3,
+                got_non_delta,
+                "Expected 3 non-delta objects, got %d" % got_non_delta,
+            )
+
+    def test_delta_medium_object(self):
+        # This tests an object set that will have a copy operation
+        # 2**20 in size.
+        with self.get_pack(pack1_sha) as orig_pack:
+            orig_blob = orig_pack[a_sha]
+            new_blob = Blob()
+            new_blob.data = orig_blob.data + (b"x" * 2 ** 20)
+            new_blob_2 = Blob()
+            new_blob_2.data = new_blob.data + b"y"
+            all_to_pack = list(orig_pack.pack_tuples()) + [
+                (new_blob, None),
+                (new_blob_2, None),
+            ]
+            pack_path = os.path.join(self._tempdir, "pack_with_deltas")
+            write_pack(pack_path, all_to_pack, deltify=True)
+            output = run_git_or_fail(["verify-pack", "-v", pack_path])
+            self.assertEqual(
+                {x[0].id for x in all_to_pack},
+                _git_verify_pack_object_list(output),
+            )
+            # We specifically made a new blob that should be a delta
+            # against the blob a_sha, so make sure we really got only 3
+            # non-delta objects:
+            got_non_delta = int(_NON_DELTA_RE.search(output).group("non_delta"))
+            self.assertEqual(
+                3,
+                got_non_delta,
+                "Expected 3 non-delta objects, got %d" % got_non_delta,
+            )
+            # We expect one object to have a delta chain length of two
+            # (new_blob_2), so let's verify that actually happens:
+            self.assertIn(b"chain length = 2", output)
+
+    # This test is SUPER slow: over 80 seconds on a 2012-era
+    # laptop.
This is because SequenceMatcher is worst-case quadratic + # on the input size. It's impractical to produce deltas for + # objects this large, but it's still worth doing the right thing + # when it happens. + def test_delta_large_object(self): + # This tests an object set that will have a copy operation + # 2**25 in size. This is a copy large enough that it requires + # two copy operations in git's binary delta format. + raise SkipTest("skipping slow, large test") + with self.get_pack(pack1_sha) as orig_pack: + new_blob = Blob() + new_blob.data = "big blob" + ("x" * 2 ** 25) + new_blob_2 = Blob() + new_blob_2.data = new_blob.data + "y" + all_to_pack = list(orig_pack.pack_tuples()) + [ + (new_blob, None), + (new_blob_2, None), + ] + pack_path = os.path.join(self._tempdir, "pack_with_deltas") + write_pack(pack_path, all_to_pack, deltify=True) + output = run_git_or_fail(["verify-pack", "-v", pack_path]) + self.assertEqual( + {x[0].id for x in all_to_pack}, + _git_verify_pack_object_list(output), + ) + # We specifically made a new blob that should be a delta + # against the blob a_sha, so make sure we really got only 4 + # non-delta objects: + got_non_delta = int(_NON_DELTA_RE.search(output).group("non_delta")) + self.assertEqual( + 4, + got_non_delta, + "Expected 4 non-delta objects, got %d" % got_non_delta, + ) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/test_patch.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_patch.py new file mode 100644 index 00000000..2f442af4 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_patch.py @@ -0,0 +1,120 @@ +# test_patch.py -- test patch compatibility with CGit +# Copyright (C) 2019 Boris Feld +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +"""Tests related to patch compatibility with CGit.""" +from io import BytesIO +import os +import shutil +import tempfile + +from dulwich import porcelain +from dulwich.repo import ( + Repo, +) +from dulwich.tests.compat.utils import ( + CompatTestCase, + run_git_or_fail, +) + + +class CompatPatchTestCase(CompatTestCase): + def setUp(self): + super(CompatPatchTestCase, self).setUp() + self.test_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, self.test_dir) + self.repo_path = os.path.join(self.test_dir, "repo") + self.repo = Repo.init(self.repo_path, mkdir=True) + self.addCleanup(self.repo.close) + + def test_patch_apply(self): + # Prepare the repository + + # Create some files and commit them + file_list = ["to_exists", "to_modify", "to_delete"] + for file in file_list: + file_path = os.path.join(self.repo_path, file) + + # Touch the files + with open(file_path, "w"): + pass + + self.repo.stage(file_list) + + first_commit = self.repo.do_commit(b"The first commit") + + # Make a copy of the repository so we can apply the diff later + copy_path = os.path.join(self.test_dir, "copy") + shutil.copytree(self.repo_path, copy_path) + + # Do some changes + with open(os.path.join(self.repo_path, "to_modify"), "w") as f: + f.write("Modified!") + + os.remove(os.path.join(self.repo_path, "to_delete")) + + with open(os.path.join(self.repo_path, "to_add"), "w"): + pass + + self.repo.stage(["to_modify", "to_delete", "to_add"]) + + second_commit = self.repo.do_commit(b"The second commit") + + # Get the patch + first_tree = self.repo[first_commit].tree + second_tree = self.repo[second_commit].tree + + outstream = BytesIO() + porcelain.diff_tree( + self.repo.path, first_tree, second_tree, outstream=outstream + ) + + # Save it on disk + patch_path = os.path.join(self.test_dir, "patch.patch") + with open(patch_path, "wb") as patch: + patch.write(outstream.getvalue()) + + # And try to apply it to the copy directory + git_command = ["-C", copy_path, "apply", patch_path] + run_git_or_fail(git_command) + + # And now check that the files contents are exactly the same between + # the two repositories + original_files = set(os.listdir(self.repo_path)) + new_files = set(os.listdir(copy_path)) + + # Check that we have the exact same files in both repositories + self.assertEqual(original_files, new_files) + + for file in original_files: + if file == ".git": + continue + + original_file_path = os.path.join(self.repo_path, file) + copy_file_path = os.path.join(copy_path, file) + + self.assertTrue(os.path.isfile(copy_file_path)) + + with open(original_file_path, "rb") as original_file: + original_content = original_file.read() + + with open(copy_file_path, "rb") as copy_file: + copy_content = copy_file.read() + + self.assertEqual(original_content, copy_content) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/test_porcelain.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_porcelain.py new file mode 100644 index 00000000..e0aed812 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_porcelain.py @@ -0,0 +1,101 @@ +# test_porcelain .py -- Tests for dulwich.porcelain/CGit compatibility +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. 
+# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Compatibility tests for dulwich.porcelain.""" + +import os +import platform +import sys +from unittest import skipIf + +from dulwich import porcelain +from dulwich.tests.utils import ( + build_commit_graph, +) +from dulwich.tests.compat.utils import ( + run_git_or_fail, + CompatTestCase, +) +from dulwich.tests.test_porcelain import ( + PorcelainGpgTestCase, +) + + +@skipIf(platform.python_implementation() == "PyPy" or sys.platform == "win32", "gpgme not easily available or supported on Windows and PyPy") +class TagCreateSignTestCase(PorcelainGpgTestCase, CompatTestCase): + def setUp(self): + super(TagCreateSignTestCase, self).setUp() + + def test_sign(self): + # Test that dulwich signatures can be verified by CGit + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + cfg = self.repo.get_config() + cfg.set(("user",), "signingKey", PorcelainGpgTestCase.DEFAULT_KEY_ID) + self.import_default_key() + + porcelain.tag_create( + self.repo.path, + b"tryme", + b"foo ", + b"bar", + annotated=True, + sign=True, + ) + + run_git_or_fail( + [ + "--git-dir={}".format(self.repo.controldir()), + "tag", + "-v", + "tryme" + ], + env={'GNUPGHOME': os.environ['GNUPGHOME']}, + ) + + def test_verify(self): + # Test that CGit signatures can be verified by dulwich + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + self.import_default_key() + + run_git_or_fail( + [ + "--git-dir={}".format(self.repo.controldir()), + "tag", + "-u", + PorcelainGpgTestCase.DEFAULT_KEY_ID, + "-m", + "foo", + "verifyme", + ], + env={ + 'GNUPGHOME': os.environ['GNUPGHOME'], + 'GIT_COMMITTER_NAME': 'Joe Example', + 'GIT_COMMITTER_EMAIL': 'joe@example.com', + }, + ) + tag = self.repo[b"refs/tags/verifyme"] + self.assertNotEqual(tag.signature, None) + tag.verify() diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/test_repository.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_repository.py new file mode 100644 index 00000000..8d2459d1 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_repository.py @@ -0,0 +1,245 @@ +# test_repo.py -- Git repo compatibility tests +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Compatibility tests for dulwich repositories.""" + + +from io import BytesIO +from itertools import chain +import os +import tempfile + +from dulwich.objects import ( + hex_to_sha, +) +from dulwich.repo import ( + check_ref_format, + Repo, +) +from dulwich.tests.compat.utils import ( + require_git_version, + rmtree_ro, + run_git_or_fail, + CompatTestCase, +) + + +class ObjectStoreTestCase(CompatTestCase): + """Tests for git repository compatibility.""" + + def setUp(self): + super(ObjectStoreTestCase, self).setUp() + self._repo = self.import_repo("server_new.export") + + def _run_git(self, args): + return run_git_or_fail(args, cwd=self._repo.path) + + def _parse_refs(self, output): + refs = {} + for line in BytesIO(output): + fields = line.rstrip(b"\n").split(b" ") + self.assertEqual(3, len(fields)) + refname, type_name, sha = fields + check_ref_format(refname[5:]) + hex_to_sha(sha) + refs[refname] = (type_name, sha) + return refs + + def _parse_objects(self, output): + return {s.rstrip(b"\n").split(b" ")[0] for s in BytesIO(output)} + + def test_bare(self): + self.assertTrue(self._repo.bare) + self.assertFalse(os.path.exists(os.path.join(self._repo.path, ".git"))) + + def test_head(self): + output = self._run_git(["rev-parse", "HEAD"]) + head_sha = output.rstrip(b"\n") + hex_to_sha(head_sha) + self.assertEqual(head_sha, self._repo.refs[b"HEAD"]) + + def test_refs(self): + output = self._run_git( + ["for-each-ref", "--format=%(refname) %(objecttype) %(objectname)"] + ) + expected_refs = self._parse_refs(output) + + actual_refs = {} + for refname, sha in self._repo.refs.as_dict().items(): + if refname == b"HEAD": + continue # handled in test_head + obj = self._repo[sha] + self.assertEqual(sha, obj.id) + actual_refs[refname] = (obj.type_name, obj.id) + self.assertEqual(expected_refs, actual_refs) + + # TODO(dborowitz): peeled ref tests + + def _get_loose_shas(self): + output = self._run_git(["rev-list", "--all", "--objects", "--unpacked"]) + return self._parse_objects(output) + + def _get_all_shas(self): + output = self._run_git(["rev-list", "--all", "--objects"]) + return self._parse_objects(output) + + def assertShasMatch(self, expected_shas, actual_shas_iter): + actual_shas = set() + for sha in actual_shas_iter: + obj = self._repo[sha] + self.assertEqual(sha, obj.id) + actual_shas.add(sha) + self.assertEqual(expected_shas, actual_shas) + + def test_loose_objects(self): + # TODO(dborowitz): This is currently not very useful since + # fast-imported repos only contained packed objects. + expected_shas = self._get_loose_shas() + self.assertShasMatch( + expected_shas, self._repo.object_store._iter_loose_objects() + ) + + def test_packed_objects(self): + expected_shas = self._get_all_shas() - self._get_loose_shas() + self.assertShasMatch( + expected_shas, chain.from_iterable(self._repo.object_store.packs) + ) + + def test_all_objects(self): + expected_shas = self._get_all_shas() + self.assertShasMatch(expected_shas, iter(self._repo.object_store)) + + +class WorkingTreeTestCase(ObjectStoreTestCase): + """Test for compatibility with git-worktree.""" + + min_git_version = (2, 5, 0) + + def create_new_worktree(self, repo_dir, branch): + """Create a new worktree using git-worktree. + + Args: + repo_dir: The directory of the main working tree. + branch: The branch or commit to checkout in the new worktree. 
+ + Returns: The path to the new working tree. + """ + temp_dir = tempfile.mkdtemp() + run_git_or_fail(["worktree", "add", temp_dir, branch], cwd=repo_dir) + self.addCleanup(rmtree_ro, temp_dir) + return temp_dir + + def setUp(self): + super(WorkingTreeTestCase, self).setUp() + self._worktree_path = self.create_new_worktree(self._repo.path, "branch") + self._worktree_repo = Repo(self._worktree_path) + self.addCleanup(self._worktree_repo.close) + self._mainworktree_repo = self._repo + self._number_of_working_tree = 2 + self._repo = self._worktree_repo + + def test_refs(self): + super(WorkingTreeTestCase, self).test_refs() + self.assertEqual( + self._mainworktree_repo.refs.allkeys(), self._repo.refs.allkeys() + ) + + def test_head_equality(self): + self.assertNotEqual( + self._repo.refs[b"HEAD"], self._mainworktree_repo.refs[b"HEAD"] + ) + + def test_bare(self): + self.assertFalse(self._repo.bare) + self.assertTrue(os.path.isfile(os.path.join(self._repo.path, ".git"))) + + def _parse_worktree_list(self, output): + worktrees = [] + for line in BytesIO(output): + fields = line.rstrip(b"\n").split() + worktrees.append(tuple(f.decode() for f in fields)) + return worktrees + + def test_git_worktree_list(self): + # 'git worktree list' was introduced in 2.7.0 + require_git_version((2, 7, 0)) + output = run_git_or_fail(["worktree", "list"], cwd=self._repo.path) + worktrees = self._parse_worktree_list(output) + self.assertEqual(len(worktrees), self._number_of_working_tree) + self.assertEqual(worktrees[0][1], "(bare)") + self.assertTrue(os.path.samefile(worktrees[0][0], self._mainworktree_repo.path)) + + output = run_git_or_fail(["worktree", "list"], cwd=self._mainworktree_repo.path) + worktrees = self._parse_worktree_list(output) + self.assertEqual(len(worktrees), self._number_of_working_tree) + self.assertEqual(worktrees[0][1], "(bare)") + self.assertTrue(os.path.samefile(worktrees[0][0], self._mainworktree_repo.path)) + + def test_git_worktree_config(self): + """Test that git worktree config parsing matches the git CLI's behavior.""" + # Set some config value in the main repo using the git CLI + require_git_version((2, 7, 0)) + test_name = "Jelmer" + test_email = "jelmer@apache.org" + run_git_or_fail(["config", "user.name", test_name], cwd=self._repo.path) + run_git_or_fail(["config", "user.email", test_email], cwd=self._repo.path) + + worktree_cfg = self._worktree_repo.get_config() + main_cfg = self._repo.get_config() + + # Assert that both the worktree repo and main repo have the same view of the config, + # and that the config matches what we set with the git cli + self.assertEqual(worktree_cfg, main_cfg) + for c in [worktree_cfg, main_cfg]: + self.assertEqual(test_name.encode(), c.get((b"user",), b"name")) + self.assertEqual(test_email.encode(), c.get((b"user",), b"email")) + + # Read the config values in the worktree with the git cli and assert they match + # the dulwich-parsed configs + output_name = run_git_or_fail(["config", "user.name"], cwd=self._mainworktree_repo.path).decode().rstrip("\n") + output_email = run_git_or_fail(["config", "user.email"], cwd=self._mainworktree_repo.path).decode().rstrip("\n") + self.assertEqual(test_name, output_name) + self.assertEqual(test_email, output_email) + + +class InitNewWorkingDirectoryTestCase(WorkingTreeTestCase): + """Test compatibility of Repo.init_new_working_directory.""" + + min_git_version = (2, 5, 0) + + def setUp(self): + super(InitNewWorkingDirectoryTestCase, self).setUp() + self._other_worktree = self._repo + worktree_repo_path = 
tempfile.mkdtemp() + self.addCleanup(rmtree_ro, worktree_repo_path) + self._repo = Repo._init_new_working_directory( + worktree_repo_path, self._mainworktree_repo + ) + self.addCleanup(self._repo.close) + self._number_of_working_tree = 3 + + def test_head_equality(self): + self.assertEqual( + self._repo.refs[b"HEAD"], self._mainworktree_repo.refs[b"HEAD"] + ) + + def test_bare(self): + self.assertFalse(self._repo.bare) + self.assertTrue(os.path.isfile(os.path.join(self._repo.path, ".git"))) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/test_server.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_server.py new file mode 100644 index 00000000..5efa414c --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_server.py @@ -0,0 +1,97 @@ +# test_server.py -- Compatibility tests for git server. +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Compatibility tests between Dulwich and the cgit server. + +Warning: these tests should be fairly stable, but when writing/debugging new + tests, deadlocks may freeze the test process such that it cannot be + Ctrl-C'ed. On POSIX systems, you can kill the tests with Ctrl-Z, "kill %". +""" + +import threading +import os +import sys + +from dulwich.server import ( + DictBackend, + TCPGitServer, +) +from dulwich.tests import skipIf +from dulwich.tests.compat.server_utils import ( + ServerTests, + NoSideBand64kReceivePackHandler, +) +from dulwich.tests.compat.utils import ( + CompatTestCase, + require_git_version, +) + + +@skipIf(sys.platform == "win32", "Broken on windows, with very long fail time.") +class GitServerTestCase(ServerTests, CompatTestCase): + """Tests for client/server compatibility. + + This server test case does not use side-band-64k in git-receive-pack. 
+ """ + + protocol = "git" + + def _handlers(self): + return {b"git-receive-pack": NoSideBand64kReceivePackHandler} + + def _check_server(self, dul_server): + receive_pack_handler_cls = dul_server.handlers[b"git-receive-pack"] + caps = receive_pack_handler_cls.capabilities() + self.assertNotIn(b"side-band-64k", caps) + + def _start_server(self, repo): + backend = DictBackend({b"/": repo}) + dul_server = TCPGitServer(backend, b"localhost", 0, handlers=self._handlers()) + self._check_server(dul_server) + self.addCleanup(dul_server.shutdown) + self.addCleanup(dul_server.server_close) + threading.Thread(target=dul_server.serve).start() + self._server = dul_server + _, port = self._server.socket.getsockname() + return port + + +@skipIf(sys.platform == "win32", "Broken on windows, with very long fail time.") +class GitServerSideBand64kTestCase(GitServerTestCase): + """Tests for client/server compatibility with side-band-64k support.""" + + # side-band-64k in git-receive-pack was introduced in git 1.7.0.2 + min_git_version = (1, 7, 0, 2) + + def setUp(self): + super(GitServerSideBand64kTestCase, self).setUp() + # side-band-64k is broken in the windows client. + # https://github.com/msysgit/git/issues/101 + # Fix has landed for the 1.9.3 release. + if os.name == "nt": + require_git_version((1, 9, 3)) + + def _handlers(self): + return None # default handlers include side-band-64k + + def _check_server(self, server): + receive_pack_handler_cls = server.handlers[b"git-receive-pack"] + caps = receive_pack_handler_cls.capabilities() + self.assertIn(b"side-band-64k", caps) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/test_utils.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_utils.py new file mode 100644 index 00000000..c8746b5c --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_utils.py @@ -0,0 +1,91 @@ +# test_utils.py -- Tests for git compatibility utilities +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +"""Tests for git compatibility utilities.""" + +from dulwich.tests import ( + SkipTest, + TestCase, +) +from dulwich.tests.compat import utils + + +class GitVersionTests(TestCase): + def setUp(self): + super(GitVersionTests, self).setUp() + self._orig_run_git = utils.run_git + self._version_str = None # tests can override to set stub version + + def run_git(args, **unused_kwargs): + self.assertEqual(["--version"], args) + return 0, self._version_str, '' + + utils.run_git = run_git + + def tearDown(self): + super(GitVersionTests, self).tearDown() + utils.run_git = self._orig_run_git + + def test_git_version_none(self): + self._version_str = b"not a git version" + self.assertEqual(None, utils.git_version()) + + def test_git_version_3(self): + self._version_str = b"git version 1.6.6" + self.assertEqual((1, 6, 6, 0), utils.git_version()) + + def test_git_version_4(self): + self._version_str = b"git version 1.7.0.2" + self.assertEqual((1, 7, 0, 2), utils.git_version()) + + def test_git_version_extra(self): + self._version_str = b"git version 1.7.0.3.295.gd8fa2" + self.assertEqual((1, 7, 0, 3), utils.git_version()) + + def assertRequireSucceeds(self, required_version): + try: + utils.require_git_version(required_version) + except SkipTest: + self.fail() + + def assertRequireFails(self, required_version): + self.assertRaises(SkipTest, utils.require_git_version, required_version) + + def test_require_git_version(self): + try: + self._version_str = b"git version 1.6.6" + self.assertRequireSucceeds((1, 6, 6)) + self.assertRequireSucceeds((1, 6, 6, 0)) + self.assertRequireSucceeds((1, 6, 5)) + self.assertRequireSucceeds((1, 6, 5, 99)) + self.assertRequireFails((1, 7, 0)) + self.assertRequireFails((1, 7, 0, 2)) + self.assertRaises(ValueError, utils.require_git_version, (1, 6, 6, 0, 0)) + + self._version_str = b"git version 1.7.0.2" + self.assertRequireSucceeds((1, 6, 6)) + self.assertRequireSucceeds((1, 6, 6, 0)) + self.assertRequireSucceeds((1, 7, 0)) + self.assertRequireSucceeds((1, 7, 0, 2)) + self.assertRequireFails((1, 7, 0, 3)) + self.assertRequireFails((1, 7, 1)) + except SkipTest as e: + # This test is designed to catch all SkipTest exceptions. + self.fail("Test unexpectedly skipped: %s" % e) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/test_web.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_web.py new file mode 100644 index 00000000..05852dd6 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/test_web.py @@ -0,0 +1,209 @@ +# test_web.py -- Compatibility tests for the git web server. +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Compatibility tests between Dulwich and the cgit HTTP server. 
+
+Warning: these tests should be fairly stable, but when writing/debugging new
+    tests, deadlocks may freeze the test process such that it cannot be
+    Ctrl-C'ed. On POSIX systems, you can kill the tests with Ctrl-Z, "kill %".
+"""
+
+import threading
+from wsgiref import simple_server
+import sys
+from typing import Tuple
+
+from dulwich.server import (
+    DictBackend,
+    UploadPackHandler,
+    ReceivePackHandler,
+)
+from dulwich.tests import (
+    SkipTest,
+    skipIf,
+)
+from dulwich.web import (
+    make_wsgi_chain,
+    HTTPGitApplication,
+    WSGIRequestHandlerLogger,
+    WSGIServerLogger,
+)
+
+from dulwich.tests.compat.server_utils import (
+    ServerTests,
+    NoSideBand64kReceivePackHandler,
+)
+from dulwich.tests.compat.utils import (
+    CompatTestCase,
+)
+
+
+@skipIf(sys.platform == "win32", "Broken on windows, with very long fail time.")
+class WebTests(ServerTests):
+    """Base tests for web server tests.
+
+    Contains utility and setUp/tearDown methods, but does not inherit from
+    TestCase so tests are not automatically run.
+    """
+
+    protocol = "http"
+
+    def _start_server(self, repo):
+        backend = DictBackend({"/": repo})
+        app = self._make_app(backend)
+        dul_server = simple_server.make_server(
+            "localhost",
+            0,
+            app,
+            server_class=WSGIServerLogger,
+            handler_class=WSGIRequestHandlerLogger,
+        )
+        self.addCleanup(dul_server.shutdown)
+        self.addCleanup(dul_server.server_close)
+        threading.Thread(target=dul_server.serve_forever).start()
+        self._server = dul_server
+        _, port = dul_server.socket.getsockname()
+        return port
+
+
+@skipIf(sys.platform == "win32", "Broken on windows, with very long fail time.")
+class SmartWebTestCase(WebTests, CompatTestCase):
+    """Test cases for smart HTTP server.
+
+    This server test case does not use side-band-64k in git-receive-pack.
+    """
+
+    min_git_version = (1, 6, 6)  # type: Tuple[int, ...]
+
+    def _handlers(self):
+        return {b"git-receive-pack": NoSideBand64kReceivePackHandler}
+
+    def _check_app(self, app):
+        receive_pack_handler_cls = app.handlers[b"git-receive-pack"]
+        caps = receive_pack_handler_cls.capabilities()
+        self.assertNotIn(b"side-band-64k", caps)
+
+    def _make_app(self, backend):
+        app = make_wsgi_chain(backend, handlers=self._handlers())
+        to_check = app
+        # peel back layers until we're at the base application
+        while not issubclass(to_check.__class__, HTTPGitApplication):
+            to_check = to_check.app
+        self._check_app(to_check)
+        return app
+
+
+def patch_capabilities(handler, caps_removed):
+    # Patch a handler's capabilities by specifying a list of them to be
+    # removed, and return the original classmethod for restoration.
+ original_capabilities = handler.capabilities + filtered_capabilities = [ + i for i in original_capabilities() if i not in caps_removed + ] + + def capabilities(cls): + return filtered_capabilities + + handler.capabilities = classmethod(capabilities) + return original_capabilities + + +@skipIf(sys.platform == "win32", "Broken on windows, with very long fail time.") +class SmartWebSideBand64kTestCase(SmartWebTestCase): + """Test cases for smart HTTP server with side-band-64k support.""" + + # side-band-64k in git-receive-pack was introduced in git 1.7.0.2 + min_git_version = (1, 7, 0, 2) + + def setUp(self): + self.o_uph_cap = patch_capabilities(UploadPackHandler, (b"no-done",)) + self.o_rph_cap = patch_capabilities(ReceivePackHandler, (b"no-done",)) + super(SmartWebSideBand64kTestCase, self).setUp() + + def tearDown(self): + super(SmartWebSideBand64kTestCase, self).tearDown() + UploadPackHandler.capabilities = self.o_uph_cap + ReceivePackHandler.capabilities = self.o_rph_cap + + def _handlers(self): + return None # default handlers include side-band-64k + + def _check_app(self, app): + receive_pack_handler_cls = app.handlers[b"git-receive-pack"] + caps = receive_pack_handler_cls.capabilities() + self.assertIn(b"side-band-64k", caps) + self.assertNotIn(b"no-done", caps) + + +class SmartWebSideBand64kNoDoneTestCase(SmartWebTestCase): + """Test cases for smart HTTP server with side-band-64k and no-done + support. + """ + + # no-done was introduced in git 1.7.4 + min_git_version = (1, 7, 4) + + def _handlers(self): + return None # default handlers include side-band-64k + + def _check_app(self, app): + receive_pack_handler_cls = app.handlers[b"git-receive-pack"] + caps = receive_pack_handler_cls.capabilities() + self.assertIn(b"side-band-64k", caps) + self.assertIn(b"no-done", caps) + + +@skipIf(sys.platform == "win32", "Broken on windows, with very long fail time.") +class DumbWebTestCase(WebTests, CompatTestCase): + """Test cases for dumb HTTP server.""" + + def _make_app(self, backend): + return make_wsgi_chain(backend, dumb=True) + + def test_push_to_dulwich(self): + # Note: remove this if dulwich implements dumb web pushing. + raise SkipTest("Dumb web pushing not supported.") + + def test_push_to_dulwich_remove_branch(self): + # Note: remove this if dumb pushing is supported + raise SkipTest("Dumb web pushing not supported.") + + def test_new_shallow_clone_from_dulwich(self): + # Note: remove this if C git and dulwich implement dumb web shallow + # clones. + raise SkipTest("Dumb web shallow cloning not supported.") + + def test_shallow_clone_from_git_is_identical(self): + # Note: remove this if C git and dulwich implement dumb web shallow + # clones. + raise SkipTest("Dumb web shallow cloning not supported.") + + def test_fetch_same_depth_into_shallow_clone_from_dulwich(self): + # Note: remove this if C git and dulwich implement dumb web shallow + # clones. + raise SkipTest("Dumb web shallow cloning not supported.") + + def test_fetch_full_depth_into_shallow_clone_from_dulwich(self): + # Note: remove this if C git and dulwich implement dumb web shallow + # clones. 
+ raise SkipTest("Dumb web shallow cloning not supported.") + + def test_push_to_dulwich_issue_88_standard(self): + raise SkipTest("Dumb web pushing not supported.") diff --git a/env/lib/python3.8/site-packages/dulwich/tests/compat/utils.py b/env/lib/python3.8/site-packages/dulwich/tests/compat/utils.py new file mode 100644 index 00000000..47500e2d --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/compat/utils.py @@ -0,0 +1,284 @@ +# utils.py -- Git compatibility utilities +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Utilities for interacting with cgit.""" + +import errno +import functools +import os +import shutil +import socket +import stat +import subprocess +import sys +import tempfile +import time +from typing import Tuple + +from dulwich.repo import Repo +from dulwich.protocol import TCP_GIT_PORT + +from dulwich.tests import ( + SkipTest, + TestCase, +) + +_DEFAULT_GIT = "git" +_VERSION_LEN = 4 +_REPOS_DATA_DIR = os.path.abspath( + os.path.join( + os.path.dirname(__file__), os.pardir, os.pardir, os.pardir, + "testdata", "repos") +) + + +def git_version(git_path=_DEFAULT_GIT): + """Attempt to determine the version of git currently installed. + + Args: + git_path: Path to the git executable; defaults to the version in + the system path. + Returns: A tuple of ints of the form (major, minor, point, sub-point), or + None if no git installation was found. + """ + try: + output = run_git_or_fail(["--version"], git_path=git_path) + except OSError: + return None + version_prefix = b"git version " + if not output.startswith(version_prefix): + return None + + parts = output[len(version_prefix) :].split(b".") + nums = [] + for part in parts: + try: + nums.append(int(part)) + except ValueError: + break + + while len(nums) < _VERSION_LEN: + nums.append(0) + return tuple(nums[:_VERSION_LEN]) + + +def require_git_version(required_version, git_path=_DEFAULT_GIT): + """Require git version >= version, or skip the calling test. + + Args: + required_version: A tuple of ints of the form (major, minor, point, + sub-point); omitted components default to 0. + git_path: Path to the git executable; defaults to the version in + the system path. + Raises: + ValueError: if the required version tuple has too many parts. + SkipTest: if no suitable git version was found at the given path. 
+ """ + found_version = git_version(git_path=git_path) + if found_version is None: + raise SkipTest( + "Test requires git >= %s, but c git not found" % (required_version,) + ) + + if len(required_version) > _VERSION_LEN: + raise ValueError( + "Invalid version tuple %s, expected %i parts" + % (required_version, _VERSION_LEN) + ) + + required_version = list(required_version) + while len(found_version) < len(required_version): + required_version.append(0) + required_version = tuple(required_version) + + if found_version < required_version: + required_version = ".".join(map(str, required_version)) + found_version = ".".join(map(str, found_version)) + raise SkipTest( + "Test requires git >= %s, found %s" % (required_version, found_version) + ) + + +def run_git( + args, git_path=_DEFAULT_GIT, input=None, capture_stdout=False, + capture_stderr=False, **popen_kwargs +): + """Run a git command. + + Input is piped from the input parameter and output is sent to the standard + streams, unless capture_stdout is set. + + Args: + args: A list of args to the git command. + git_path: Path to to the git executable. + input: Input data to be sent to stdin. + capture_stdout: Whether to capture and return stdout. + popen_kwargs: Additional kwargs for subprocess.Popen; + stdin/stdout args are ignored. + Returns: A tuple of (returncode, stdout contents, stderr contents). + If capture_stdout is False, None will be returned as stdout contents. + If capture_stderr is False, None will be returned as stderr contents. + Raises: + OSError: if the git executable was not found. + """ + + env = popen_kwargs.pop("env", {}) + env["LC_ALL"] = env["LANG"] = "C" + env["PATH"] = os.getenv("PATH") + + args = [git_path] + args + popen_kwargs["stdin"] = subprocess.PIPE + if capture_stdout: + popen_kwargs["stdout"] = subprocess.PIPE + else: + popen_kwargs.pop("stdout", None) + if capture_stderr: + popen_kwargs["stderr"] = subprocess.PIPE + else: + popen_kwargs.pop("stderr", None) + p = subprocess.Popen(args, env=env, **popen_kwargs) + stdout, stderr = p.communicate(input=input) + return (p.returncode, stdout, stderr) + + +def run_git_or_fail(args, git_path=_DEFAULT_GIT, input=None, **popen_kwargs): + """Run a git command, capture stdout/stderr, and fail if git fails.""" + if "stderr" not in popen_kwargs: + popen_kwargs["stderr"] = subprocess.STDOUT + returncode, stdout, stderr = run_git( + args, git_path=git_path, input=input, capture_stdout=True, + capture_stderr=True, **popen_kwargs + ) + if returncode != 0: + raise AssertionError( + "git with args %r failed with %d: stdout=%r stderr=%r" % (args, returncode, stdout, stderr) + ) + return stdout + + +def import_repo_to_dir(name): + """Import a repo from a fast-export file in a temporary directory. + + These are used rather than binary repos for compat tests because they are + more compact and human-editable, and we already depend on git. + + Args: + name: The name of the repository export file, relative to + dulwich/tests/data/repos. + Returns: The path to the imported repository. + """ + temp_dir = tempfile.mkdtemp() + export_path = os.path.join(_REPOS_DATA_DIR, name) + temp_repo_dir = os.path.join(temp_dir, name) + export_file = open(export_path, "rb") + run_git_or_fail(["init", "--quiet", "--bare", temp_repo_dir]) + run_git_or_fail(["fast-import"], input=export_file.read(), cwd=temp_repo_dir) + export_file.close() + return temp_repo_dir + + +def check_for_daemon(limit=10, delay=0.1, timeout=0.1, port=TCP_GIT_PORT): + """Check for a running TCP daemon. 
+ + Defaults to checking 10 times with a delay of 0.1 sec between tries. + + Args: + limit: Number of attempts before deciding no daemon is running. + delay: Delay between connection attempts. + timeout: Socket timeout for connection attempts. + port: Port on which we expect the daemon to appear. + Returns: A boolean, true if a daemon is running on the specified port, + false if not. + """ + for _ in range(limit): + time.sleep(delay) + s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + s.settimeout(delay) + try: + s.connect(("localhost", port)) + return True + except socket.timeout: + pass + except socket.error as e: + if getattr(e, "errno", False) and e.errno != errno.ECONNREFUSED: + raise + elif e.args[0] != errno.ECONNREFUSED: + raise + finally: + s.close() + return False + + +class CompatTestCase(TestCase): + """Test case that requires git for compatibility checks. + + Subclasses can change the git version required by overriding + min_git_version. + """ + + min_git_version = (1, 5, 0) # type: Tuple[int, ...] + + def setUp(self): + super(CompatTestCase, self).setUp() + require_git_version(self.min_git_version) + + def assertObjectStoreEqual(self, store1, store2): + self.assertEqual(sorted(set(store1)), sorted(set(store2))) + + def assertReposEqual(self, repo1, repo2): + self.assertEqual(repo1.get_refs(), repo2.get_refs()) + self.assertObjectStoreEqual(repo1.object_store, repo2.object_store) + + def assertReposNotEqual(self, repo1, repo2): + refs1 = repo1.get_refs() + objs1 = set(repo1.object_store) + refs2 = repo2.get_refs() + objs2 = set(repo2.object_store) + self.assertFalse(refs1 == refs2 and objs1 == objs2) + + def import_repo(self, name): + """Import a repo from a fast-export file in a temporary directory. + + Args: + name: The name of the repository export file, relative to + dulwich/tests/data/repos. + Returns: An initialized Repo object that lives in a temporary + directory. + """ + path = import_repo_to_dir(name) + repo = Repo(path) + + def cleanup(): + repo.close() + rmtree_ro(os.path.dirname(path.rstrip(os.sep))) + + self.addCleanup(cleanup) + return repo + + +if sys.platform == "win32": + + def remove_ro(action, name, exc): + os.chmod(name, stat.S_IWRITE) + os.remove(name) + + rmtree_ro = functools.partial(shutil.rmtree, onerror=remove_ro) +else: + rmtree_ro = shutil.rmtree diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_archive.py b/env/lib/python3.8/site-packages/dulwich/tests/test_archive.py new file mode 100644 index 00000000..842a9868 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_archive.py @@ -0,0 +1,111 @@ +# test_archive.py -- tests for archive +# Copyright (C) 2015 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+#
+
+"""Tests for archive support."""
+
+from io import BytesIO
+import tarfile
+import struct
+from unittest import skipUnless
+
+from dulwich.archive import tar_stream
+from dulwich.object_store import (
+    MemoryObjectStore,
+)
+from dulwich.objects import (
+    Blob,
+    Tree,
+)
+from dulwich.tests import (
+    TestCase,
+)
+from dulwich.tests.utils import (
+    build_commit_graph,
+)
+
+try:
+    from unittest.mock import patch
+except ImportError:
+    patch = None  # type: ignore
+
+
+class ArchiveTests(TestCase):
+    def test_empty(self):
+        store = MemoryObjectStore()
+        c1, c2, c3 = build_commit_graph(store, [[1], [2, 1], [3, 1, 2]])
+        tree = store[c3.tree]
+        stream = b"".join(tar_stream(store, tree, 10))
+        out = BytesIO(stream)
+        tf = tarfile.TarFile(fileobj=out)
+        self.addCleanup(tf.close)
+        self.assertEqual([], tf.getnames())
+
+    def _get_example_tar_stream(self, *tar_stream_args, **tar_stream_kwargs):
+        store = MemoryObjectStore()
+        b1 = Blob.from_string(b"somedata")
+        store.add_object(b1)
+        t1 = Tree()
+        t1.add(b"somename", 0o100644, b1.id)
+        store.add_object(t1)
+        stream = b"".join(tar_stream(store, t1, *tar_stream_args, **tar_stream_kwargs))
+        return BytesIO(stream)
+
+    def test_simple(self):
+        stream = self._get_example_tar_stream(mtime=0)
+        tf = tarfile.TarFile(fileobj=stream)
+        self.addCleanup(tf.close)
+        self.assertEqual(["somename"], tf.getnames())
+
+    def test_unicode(self):
+        store = MemoryObjectStore()
+        b1 = Blob.from_string(b"somedata")
+        store.add_object(b1)
+        t1 = Tree()
+        t1.add("ő".encode('utf-8'), 0o100644, b1.id)
+        store.add_object(t1)
+        stream = b"".join(tar_stream(store, t1, mtime=0))
+        tf = tarfile.TarFile(fileobj=BytesIO(stream))
+        self.addCleanup(tf.close)
+        self.assertEqual(["ő"], tf.getnames())
+
+    def test_prefix(self):
+        stream = self._get_example_tar_stream(mtime=0, prefix=b"blah")
+        tf = tarfile.TarFile(fileobj=stream)
+        self.addCleanup(tf.close)
+        self.assertEqual(["blah/somename"], tf.getnames())
+
+    def test_gzip_mtime(self):
+        stream = self._get_example_tar_stream(mtime=1234, format="gz")
+        expected_mtime = struct.pack("<L", 1234)
+        self.assertEqual(stream.getvalue()[4:8], expected_mtime)
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Blackbox tests for Dulwich commands."""
+
+import tempfile
+import shutil
+
+from dulwich.repo import (
+    Repo,
+)
+from dulwich.tests import (
+    BlackboxTestCase,
+)
+
+
+class GitReceivePackTests(BlackboxTestCase):
+    """Blackbox tests for dul-receive-pack."""
+
+    def setUp(self):
+        super(GitReceivePackTests, self).setUp()
+        self.path = tempfile.mkdtemp()
+        self.addCleanup(shutil.rmtree, self.path)
+        self.repo = Repo.init(self.path)
+
+    def test_basic(self):
+        process = self.run_command("dul-receive-pack", [self.path])
+        (stdout, stderr) = process.communicate(b"0000")
+        self.assertEqual(b"0000", stdout[-4:])
+        self.assertEqual(0, process.returncode)
+
+    def test_missing_arg(self):
+        process = self.run_command("dul-receive-pack", [])
+        (stdout, stderr) = process.communicate()
+        self.assertEqual(
+            [b"usage: dul-receive-pack <git-dir>"], stderr.splitlines()[-1:]
+        )
+        self.assertEqual(b"", stdout)
+        self.assertEqual(1, process.returncode)
+
+
+class GitUploadPackTests(BlackboxTestCase):
+    """Blackbox tests for dul-upload-pack."""
+
+    def setUp(self):
+        super(GitUploadPackTests, self).setUp()
+        self.path = tempfile.mkdtemp()
+        self.addCleanup(shutil.rmtree, self.path)
+        self.repo = Repo.init(self.path)
+
+    def test_missing_arg(self):
+        process = self.run_command("dul-upload-pack", [])
+        (stdout, stderr) = process.communicate()
+        self.assertEqual(
+            [b"usage: dul-upload-pack <git-dir>"], stderr.splitlines()[-1:]
+        )
+        self.assertEqual(b"", stdout)
+        self.assertEqual(1, process.returncode)
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_bundle.py b/env/lib/python3.8/site-packages/dulwich/tests/test_bundle.py
new file mode 100644
index 00000000..7a9f38af
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/test_bundle.py
@@ -0,0 +1,51 @@
+# test_bundle.py -- tests for bundle
+# Copyright (C) 2020 Jelmer Vernooij <jelmer@jelmer.uk>
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Tests for bundle support."""
+
+import os
+import tempfile
+
+from dulwich.tests import (
+    TestCase,
+)
+
+from dulwich.bundle import (
+    Bundle,
+    read_bundle,
+    write_bundle,
+)
+
+
+class BundleTests(TestCase):
+    def test_roundtrip_bundle(self):
+        origbundle = Bundle()
+        origbundle.version = 3
+        origbundle.capabilities = {"foo": None}
+        origbundle.references = {b"refs/heads/master": b"ab" * 20}
+        origbundle.prerequisites = [(b"cc" * 20, "comment")]
+        with tempfile.TemporaryDirectory() as td:
+            with open(os.path.join(td, "foo"), "wb") as f:
+                write_bundle(f, origbundle)
+
+            with open(os.path.join(td, "foo"), "rb") as f:
+                newbundle = read_bundle(f)
+
+            self.assertEqual(origbundle, newbundle)
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_client.py b/env/lib/python3.8/site-packages/dulwich/tests/test_client.py
new file mode 100644
index 00000000..eb225100
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/test_client.py
@@ -0,0 +1,1617 @@
+# test_client.py -- Tests for the git protocol, client side
+# Copyright (C) 2009 Jelmer Vernooij
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+# + +from io import BytesIO +import base64 +import os +import sys +import shutil +import tempfile +import warnings + +from urllib.parse import ( + quote as urlquote, + urlparse, +) + +from unittest.mock import patch + +import dulwich +from dulwich import ( + client, +) +from dulwich.client import ( + InvalidWants, + LocalGitClient, + TraditionalGitClient, + TCPGitClient, + SSHGitClient, + HttpGitClient, + FetchPackResult, + ReportStatusParser, + SendPackError, + StrangeHostname, + SubprocessSSHVendor, + PLinkSSHVendor, + HangupException, + GitProtocolError, + check_wants, + default_urllib3_manager, + get_credentials_from_store, + get_transport_and_path, + get_transport_and_path_from_url, + parse_rsync_url, + _remote_error_from_stderr, +) +from dulwich.config import ( + ConfigDict, +) +from dulwich.tests import ( + TestCase, +) +from dulwich.protocol import ( + TCP_GIT_PORT, + Protocol, +) +from dulwich.pack import ( + pack_objects_to_data, + write_pack_data, + write_pack_objects, +) +from dulwich.objects import Commit, Tree +from dulwich.repo import ( + MemoryRepo, + Repo, +) +from dulwich.tests import skipIf +from dulwich.tests.utils import ( + open_repo, + tear_down_repo, + setup_warning_catcher, +) + + +class DummyClient(TraditionalGitClient): + def __init__(self, can_read, read, write): + self.can_read = can_read + self.read = read + self.write = write + TraditionalGitClient.__init__(self) + + def _connect(self, service, path): + return Protocol(self.read, self.write), self.can_read, None + + +class DummyPopen: + def __init__(self, *args, **kwards): + self.stdin = BytesIO(b"stdin") + self.stdout = BytesIO(b"stdout") + self.stderr = BytesIO(b"stderr") + self.returncode = 0 + self.args = args + self.kwargs = kwards + + def communicate(self, *args, **kwards): + return ("Running", "") + + def wait(self, *args, **kwards): + return False + + +# TODO(durin42): add unit-level tests of GitClient +class GitClientTests(TestCase): + def setUp(self): + super(GitClientTests, self).setUp() + self.rout = BytesIO() + self.rin = BytesIO() + self.client = DummyClient(lambda x: True, self.rin.read, self.rout.write) + + def test_caps(self): + agent_cap = ("agent=dulwich/%d.%d.%d" % dulwich.__version__).encode("ascii") + self.assertEqual( + set( + [ + b"multi_ack", + b"side-band-64k", + b"ofs-delta", + b"thin-pack", + b"multi_ack_detailed", + b"shallow", + agent_cap, + ] + ), + set(self.client._fetch_capabilities), + ) + self.assertEqual( + set( + [ + b"delete-refs", + b"ofs-delta", + b"report-status", + b"side-band-64k", + agent_cap, + ] + ), + set(self.client._send_capabilities), + ) + + def test_archive_ack(self): + self.rin.write(b"0009NACK\n" b"0000") + self.rin.seek(0) + self.client.archive(b"bla", b"HEAD", None, None) + self.assertEqual(self.rout.getvalue(), b"0011argument HEAD0000") + + def test_fetch_empty(self): + self.rin.write(b"0000") + self.rin.seek(0) + + def check_heads(heads, **kwargs): + self.assertEqual(heads, {}) + return [] + + ret = self.client.fetch_pack(b"/", check_heads, None, None) + self.assertEqual({}, ret.refs) + self.assertEqual({}, ret.symrefs) + + def test_fetch_pack_ignores_magic_ref(self): + self.rin.write( + b"00000000000000000000000000000000000000000000 capabilities^{}" + b"\x00 multi_ack " + b"thin-pack side-band side-band-64k ofs-delta shallow no-progress " + b"include-tag\n" + b"0000" + ) + self.rin.seek(0) + + def check_heads(heads, **kwargs): + self.assertEqual({}, heads) + return [] + + ret = self.client.fetch_pack(b"bla", check_heads, None, None, None) + 
self.assertEqual({}, ret.refs) + self.assertEqual({}, ret.symrefs) + self.assertEqual(self.rout.getvalue(), b"0000") + + def test_fetch_pack_none(self): + self.rin.write( + b"008855dcc6bf963f922e1ed5c4bbaaefcfacef57b1d7 HEAD\x00multi_ack " + b"thin-pack side-band side-band-64k ofs-delta shallow no-progress " + b"include-tag\n" + b"0000" + ) + self.rin.seek(0) + ret = self.client.fetch_pack(b"bla", lambda heads, **kwargs: [], None, None, None) + self.assertEqual( + {b"HEAD": b"55dcc6bf963f922e1ed5c4bbaaefcfacef57b1d7"}, ret.refs + ) + self.assertEqual({}, ret.symrefs) + self.assertEqual(self.rout.getvalue(), b"0000") + + def test_send_pack_no_sideband64k_with_update_ref_error(self): + # No side-bank-64k reported by server shouldn't try to parse + # side band data + pkts = [ + b"55dcc6bf963f922e1ed5c4bbaaefcfacef57b1d7 capabilities^{}" + b"\x00 report-status delete-refs ofs-delta\n", + b"", + b"unpack ok", + b"ng refs/foo/bar pre-receive hook declined", + b"", + ] + for pkt in pkts: + if pkt == b"": + self.rin.write(b"0000") + else: + self.rin.write(("%04x" % (len(pkt) + 4)).encode("ascii") + pkt) + self.rin.seek(0) + + tree = Tree() + commit = Commit() + commit.tree = tree + commit.parents = [] + commit.author = commit.committer = b"test user" + commit.commit_time = commit.author_time = 1174773719 + commit.commit_timezone = commit.author_timezone = 0 + commit.encoding = b"UTF-8" + commit.message = b"test message" + + def update_refs(refs): + return { + b"refs/foo/bar": commit.id, + } + + def generate_pack_data(have, want, ofs_delta=False): + return pack_objects_to_data( + [ + (commit, None), + (tree, ""), + ] + ) + + result = self.client.send_pack("blah", update_refs, generate_pack_data) + self.assertEqual( + {b"refs/foo/bar": "pre-receive hook declined"}, result.ref_status + ) + self.assertEqual({b"refs/foo/bar": commit.id}, result.refs) + + def test_send_pack_none(self): + # Set ref to current value + self.rin.write( + b"0078310ca9477129b8586fa2afc779c1f57cf64bba6c " + b"refs/heads/master\x00 report-status delete-refs " + b"side-band-64k quiet ofs-delta\n" + b"0000" + ) + self.rin.seek(0) + + def update_refs(refs): + return {b"refs/heads/master": b"310ca9477129b8586fa2afc779c1f57cf64bba6c"} + + def generate_pack_data(have, want, ofs_delta=False): + return 0, [] + + self.client.send_pack(b"/", update_refs, generate_pack_data) + self.assertEqual(self.rout.getvalue(), b"0000") + + def test_send_pack_keep_and_delete(self): + self.rin.write( + b"0063310ca9477129b8586fa2afc779c1f57cf64bba6c " + b"refs/heads/master\x00report-status delete-refs ofs-delta\n" + b"003f310ca9477129b8586fa2afc779c1f57cf64bba6c refs/heads/keepme\n" + b"0000000eunpack ok\n" + b"0019ok refs/heads/master\n" + b"0000" + ) + self.rin.seek(0) + + def update_refs(refs): + return {b"refs/heads/master": b"0" * 40} + + def generate_pack_data(have, want, ofs_delta=False): + return 0, [] + + self.client.send_pack(b"/", update_refs, generate_pack_data) + self.assertEqual( + self.rout.getvalue(), + b"008b310ca9477129b8586fa2afc779c1f57cf64bba6c " + b"0000000000000000000000000000000000000000 " + b"refs/heads/master\x00delete-refs ofs-delta report-status0000", + ) + + def test_send_pack_delete_only(self): + self.rin.write( + b"0063310ca9477129b8586fa2afc779c1f57cf64bba6c " + b"refs/heads/master\x00report-status delete-refs ofs-delta\n" + b"0000000eunpack ok\n" + b"0019ok refs/heads/master\n" + b"0000" + ) + self.rin.seek(0) + + def update_refs(refs): + return {b"refs/heads/master": b"0" * 40} + + def generate_pack_data(have, want, 
ofs_delta=False): + return 0, [] + + self.client.send_pack(b"/", update_refs, generate_pack_data) + self.assertEqual( + self.rout.getvalue(), + b"008b310ca9477129b8586fa2afc779c1f57cf64bba6c " + b"0000000000000000000000000000000000000000 " + b"refs/heads/master\x00delete-refs ofs-delta report-status0000", + ) + + def test_send_pack_new_ref_only(self): + self.rin.write( + b"0063310ca9477129b8586fa2afc779c1f57cf64bba6c " + b"refs/heads/master\x00report-status delete-refs ofs-delta\n" + b"0000000eunpack ok\n" + b"0019ok refs/heads/blah12\n" + b"0000" + ) + self.rin.seek(0) + + def update_refs(refs): + return { + b"refs/heads/blah12": b"310ca9477129b8586fa2afc779c1f57cf64bba6c", + b"refs/heads/master": b"310ca9477129b8586fa2afc779c1f57cf64bba6c", + } + + def generate_pack_data(have, want, ofs_delta=False): + return 0, [] + + f = BytesIO() + write_pack_objects(f.write, {}) + self.client.send_pack("/", update_refs, generate_pack_data) + self.assertEqual( + self.rout.getvalue(), + b"008b0000000000000000000000000000000000000000 " + b"310ca9477129b8586fa2afc779c1f57cf64bba6c " + b"refs/heads/blah12\x00delete-refs ofs-delta report-status0000" + + f.getvalue(), + ) + + def test_send_pack_new_ref(self): + self.rin.write( + b"0064310ca9477129b8586fa2afc779c1f57cf64bba6c " + b"refs/heads/master\x00 report-status delete-refs ofs-delta\n" + b"0000000eunpack ok\n" + b"0019ok refs/heads/blah12\n" + b"0000" + ) + self.rin.seek(0) + + tree = Tree() + commit = Commit() + commit.tree = tree + commit.parents = [] + commit.author = commit.committer = b"test user" + commit.commit_time = commit.author_time = 1174773719 + commit.commit_timezone = commit.author_timezone = 0 + commit.encoding = b"UTF-8" + commit.message = b"test message" + + def update_refs(refs): + return { + b"refs/heads/blah12": commit.id, + b"refs/heads/master": b"310ca9477129b8586fa2afc779c1f57cf64bba6c", + } + + def generate_pack_data(have, want, ofs_delta=False): + return pack_objects_to_data( + [ + (commit, None), + (tree, b""), + ] + ) + + f = BytesIO() + write_pack_data(f.write, *generate_pack_data(None, None)) + self.client.send_pack(b"/", update_refs, generate_pack_data) + self.assertEqual( + self.rout.getvalue(), + b"008b0000000000000000000000000000000000000000 " + + commit.id + + b" refs/heads/blah12\x00delete-refs ofs-delta report-status0000" + + f.getvalue(), + ) + + def test_send_pack_no_deleteref_delete_only(self): + pkts = [ + b"310ca9477129b8586fa2afc779c1f57cf64bba6c refs/heads/master" + b"\x00 report-status ofs-delta\n", + b"", + b"", + ] + for pkt in pkts: + if pkt == b"": + self.rin.write(b"0000") + else: + self.rin.write(("%04x" % (len(pkt) + 4)).encode("ascii") + pkt) + self.rin.seek(0) + + def update_refs(refs): + return {b"refs/heads/master": b"0" * 40} + + def generate_pack_data(have, want, ofs_delta=False): + return 0, [] + + result = self.client.send_pack(b"/", update_refs, generate_pack_data) + self.assertEqual( + result.ref_status, + {b"refs/heads/master": "remote does not support deleting refs"}, + ) + self.assertEqual( + result.refs, + {b"refs/heads/master": b"310ca9477129b8586fa2afc779c1f57cf64bba6c"}, + ) + self.assertEqual(self.rout.getvalue(), b"0000") + + +class TestGetTransportAndPath(TestCase): + def test_tcp(self): + c, path = get_transport_and_path("git://foo.com/bar/baz") + self.assertIsInstance(c, TCPGitClient) + self.assertEqual("foo.com", c._host) + self.assertEqual(TCP_GIT_PORT, c._port) + self.assertEqual("/bar/baz", path) + + def test_tcp_port(self): + c, path = 
get_transport_and_path("git://foo.com:1234/bar/baz") + self.assertIsInstance(c, TCPGitClient) + self.assertEqual("foo.com", c._host) + self.assertEqual(1234, c._port) + self.assertEqual("/bar/baz", path) + + def test_git_ssh_explicit(self): + c, path = get_transport_and_path("git+ssh://foo.com/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(None, c.port) + self.assertEqual(None, c.username) + self.assertEqual("/bar/baz", path) + + def test_ssh_explicit(self): + c, path = get_transport_and_path("ssh://foo.com/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(None, c.port) + self.assertEqual(None, c.username) + self.assertEqual("/bar/baz", path) + + def test_ssh_port_explicit(self): + c, path = get_transport_and_path("git+ssh://foo.com:1234/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(1234, c.port) + self.assertEqual("/bar/baz", path) + + def test_username_and_port_explicit_unknown_scheme(self): + c, path = get_transport_and_path("unknown://git@server:7999/dply/stuff.git") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("unknown", c.host) + self.assertEqual("//git@server:7999/dply/stuff.git", path) + + def test_username_and_port_explicit(self): + c, path = get_transport_and_path("ssh://git@server:7999/dply/stuff.git") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("git", c.username) + self.assertEqual("server", c.host) + self.assertEqual(7999, c.port) + self.assertEqual("/dply/stuff.git", path) + + def test_ssh_abspath_doubleslash(self): + c, path = get_transport_and_path("git+ssh://foo.com//bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(None, c.port) + self.assertEqual(None, c.username) + self.assertEqual("//bar/baz", path) + + def test_ssh_port(self): + c, path = get_transport_and_path("git+ssh://foo.com:1234/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(1234, c.port) + self.assertEqual("/bar/baz", path) + + def test_ssh_implicit(self): + c, path = get_transport_and_path("foo:/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo", c.host) + self.assertEqual(None, c.port) + self.assertEqual(None, c.username) + self.assertEqual("/bar/baz", path) + + def test_ssh_host(self): + c, path = get_transport_and_path("foo.com:/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(None, c.port) + self.assertEqual(None, c.username) + self.assertEqual("/bar/baz", path) + + def test_ssh_user_host(self): + c, path = get_transport_and_path("user@foo.com:/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(None, c.port) + self.assertEqual("user", c.username) + self.assertEqual("/bar/baz", path) + + def test_ssh_relpath(self): + c, path = get_transport_and_path("foo:bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo", c.host) + self.assertEqual(None, c.port) + self.assertEqual(None, c.username) + self.assertEqual("bar/baz", path) + + def test_ssh_host_relpath(self): + c, path = get_transport_and_path("foo.com:bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(None, c.port) + self.assertEqual(None, c.username) + self.assertEqual("bar/baz", path) + + def 
test_ssh_user_host_relpath(self): + c, path = get_transport_and_path("user@foo.com:bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(None, c.port) + self.assertEqual("user", c.username) + self.assertEqual("bar/baz", path) + + def test_local(self): + c, path = get_transport_and_path("foo.bar/baz") + self.assertIsInstance(c, LocalGitClient) + self.assertEqual("foo.bar/baz", path) + + @skipIf(sys.platform != "win32", "Behaviour only happens on windows.") + def test_local_abs_windows_path(self): + c, path = get_transport_and_path("C:\\foo.bar\\baz") + self.assertIsInstance(c, LocalGitClient) + self.assertEqual("C:\\foo.bar\\baz", path) + + def test_error(self): + # Need to use a known urlparse.uses_netloc URL scheme to get the + # expected parsing of the URL on Python versions less than 2.6.5 + c, path = get_transport_and_path("prospero://bar/baz") + self.assertIsInstance(c, SSHGitClient) + + def test_http(self): + url = "https://github.com/jelmer/dulwich" + c, path = get_transport_and_path(url) + self.assertIsInstance(c, HttpGitClient) + self.assertEqual("/jelmer/dulwich", path) + + def test_http_auth(self): + url = "https://user:passwd@github.com/jelmer/dulwich" + + c, path = get_transport_and_path(url) + + self.assertIsInstance(c, HttpGitClient) + self.assertEqual("/jelmer/dulwich", path) + self.assertEqual("user", c._username) + self.assertEqual("passwd", c._password) + + def test_http_auth_with_username(self): + url = "https://github.com/jelmer/dulwich" + + c, path = get_transport_and_path(url, username="user2", password="blah") + + self.assertIsInstance(c, HttpGitClient) + self.assertEqual("/jelmer/dulwich", path) + self.assertEqual("user2", c._username) + self.assertEqual("blah", c._password) + + def test_http_auth_with_username_and_in_url(self): + url = "https://user:passwd@github.com/jelmer/dulwich" + + c, path = get_transport_and_path(url, username="user2", password="blah") + + self.assertIsInstance(c, HttpGitClient) + self.assertEqual("/jelmer/dulwich", path) + self.assertEqual("user", c._username) + self.assertEqual("passwd", c._password) + + def test_http_no_auth(self): + url = "https://github.com/jelmer/dulwich" + + c, path = get_transport_and_path(url) + + self.assertIsInstance(c, HttpGitClient) + self.assertEqual("/jelmer/dulwich", path) + self.assertIs(None, c._username) + self.assertIs(None, c._password) + + +class TestGetTransportAndPathFromUrl(TestCase): + def test_tcp(self): + c, path = get_transport_and_path_from_url("git://foo.com/bar/baz") + self.assertIsInstance(c, TCPGitClient) + self.assertEqual("foo.com", c._host) + self.assertEqual(TCP_GIT_PORT, c._port) + self.assertEqual("/bar/baz", path) + + def test_tcp_port(self): + c, path = get_transport_and_path_from_url("git://foo.com:1234/bar/baz") + self.assertIsInstance(c, TCPGitClient) + self.assertEqual("foo.com", c._host) + self.assertEqual(1234, c._port) + self.assertEqual("/bar/baz", path) + + def test_ssh_explicit(self): + c, path = get_transport_and_path_from_url("git+ssh://foo.com/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(None, c.port) + self.assertEqual(None, c.username) + self.assertEqual("/bar/baz", path) + + def test_ssh_port_explicit(self): + c, path = get_transport_and_path_from_url("git+ssh://foo.com:1234/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(1234, c.port) + self.assertEqual("/bar/baz", path) + + def 
test_ssh_homepath(self): + c, path = get_transport_and_path_from_url("git+ssh://foo.com/~/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(None, c.port) + self.assertEqual(None, c.username) + self.assertEqual("/~/bar/baz", path) + + def test_ssh_port_homepath(self): + c, path = get_transport_and_path_from_url("git+ssh://foo.com:1234/~/bar/baz") + self.assertIsInstance(c, SSHGitClient) + self.assertEqual("foo.com", c.host) + self.assertEqual(1234, c.port) + self.assertEqual("/~/bar/baz", path) + + def test_ssh_host_relpath(self): + self.assertRaises( + ValueError, get_transport_and_path_from_url, "foo.com:bar/baz" + ) + + def test_ssh_user_host_relpath(self): + self.assertRaises( + ValueError, get_transport_and_path_from_url, "user@foo.com:bar/baz" + ) + + def test_local_path(self): + self.assertRaises(ValueError, get_transport_and_path_from_url, "foo.bar/baz") + + def test_error(self): + # Need to use a known urlparse.uses_netloc URL scheme to get the + # expected parsing of the URL on Python versions less than 2.6.5 + self.assertRaises( + ValueError, get_transport_and_path_from_url, "prospero://bar/baz" + ) + + def test_http(self): + url = "https://github.com/jelmer/dulwich" + c, path = get_transport_and_path_from_url(url) + self.assertIsInstance(c, HttpGitClient) + self.assertEqual("https://github.com", c.get_url(b"/")) + self.assertEqual("/jelmer/dulwich", path) + + def test_http_port(self): + url = "https://github.com:9090/jelmer/dulwich" + c, path = get_transport_and_path_from_url(url) + self.assertEqual("https://github.com:9090", c.get_url(b"/")) + self.assertIsInstance(c, HttpGitClient) + self.assertEqual("/jelmer/dulwich", path) + + @patch("os.name", "posix") + @patch("sys.platform", "linux") + def test_file(self): + c, path = get_transport_and_path_from_url("file:///home/jelmer/foo") + self.assertIsInstance(c, LocalGitClient) + self.assertEqual("/home/jelmer/foo", path) + + @patch("os.name", "nt") + @patch("sys.platform", "win32") + def test_file_win(self): + # `_win32_url_to_path` uses urllib.request.url2pathname, which is set to + # `ntutl2path.url2pathname` when `os.name==nt` + from nturl2path import url2pathname + + with patch("dulwich.client.url2pathname", url2pathname): + expected = "C:\\foo.bar\\baz" + for file_url in [ + "file:C:/foo.bar/baz", + "file:/C:/foo.bar/baz", + "file://C:/foo.bar/baz", + "file://C://foo.bar//baz", + "file:///C:/foo.bar/baz", + ]: + c, path = get_transport_and_path(file_url) + self.assertIsInstance(c, LocalGitClient) + self.assertEqual(path, expected) + + for remote_url in [ + "file://host.example.com/C:/foo.bar/baz" + "file://host.example.com/C:/foo.bar/baz" + "file:////host.example/foo.bar/baz", + ]: + with self.assertRaises(NotImplementedError): + c, path = get_transport_and_path(remote_url) + + +class TestSSHVendor(object): + def __init__(self): + self.host = None + self.command = "" + self.username = None + self.port = None + self.password = None + self.key_filename = None + + def run_command( + self, + host, + command, + username=None, + port=None, + password=None, + key_filename=None, + ssh_command=None, + ): + self.host = host + self.command = command + self.username = username + self.port = port + self.password = password + self.key_filename = key_filename + self.ssh_command = ssh_command + + class Subprocess: + pass + + setattr(Subprocess, "read", lambda: None) + setattr(Subprocess, "write", lambda: None) + setattr(Subprocess, "close", lambda: None) + setattr(Subprocess, 
"can_read", lambda: None) + return Subprocess() + + +class SSHGitClientTests(TestCase): + def setUp(self): + super(SSHGitClientTests, self).setUp() + + self.server = TestSSHVendor() + self.real_vendor = client.get_ssh_vendor + client.get_ssh_vendor = lambda: self.server + + self.client = SSHGitClient("git.samba.org") + + def tearDown(self): + super(SSHGitClientTests, self).tearDown() + client.get_ssh_vendor = self.real_vendor + + def test_get_url(self): + path = "/tmp/repo.git" + c = SSHGitClient("git.samba.org") + + url = c.get_url(path) + self.assertEqual("ssh://git.samba.org/tmp/repo.git", url) + + def test_get_url_with_username_and_port(self): + path = "/tmp/repo.git" + c = SSHGitClient("git.samba.org", port=2222, username="user") + + url = c.get_url(path) + self.assertEqual("ssh://user@git.samba.org:2222/tmp/repo.git", url) + + def test_default_command(self): + self.assertEqual(b"git-upload-pack", self.client._get_cmd_path(b"upload-pack")) + + def test_alternative_command_path(self): + self.client.alternative_paths[b"upload-pack"] = b"/usr/lib/git/git-upload-pack" + self.assertEqual( + b"/usr/lib/git/git-upload-pack", + self.client._get_cmd_path(b"upload-pack"), + ) + + def test_alternative_command_path_spaces(self): + self.client.alternative_paths[ + b"upload-pack" + ] = b"/usr/lib/git/git-upload-pack -ibla" + self.assertEqual( + b"/usr/lib/git/git-upload-pack -ibla", + self.client._get_cmd_path(b"upload-pack"), + ) + + def test_connect(self): + server = self.server + client = self.client + + client.username = b"username" + client.port = 1337 + + client._connect(b"command", b"/path/to/repo") + self.assertEqual(b"username", server.username) + self.assertEqual(1337, server.port) + self.assertEqual("git-command '/path/to/repo'", server.command) + + client._connect(b"relative-command", b"/~/path/to/repo") + self.assertEqual("git-relative-command '~/path/to/repo'", server.command) + + def test_ssh_command_precedence(self): + os.environ["GIT_SSH"] = "/path/to/ssh" + test_client = SSHGitClient("git.samba.org") + self.assertEqual(test_client.ssh_command, "/path/to/ssh") + + os.environ["GIT_SSH_COMMAND"] = "/path/to/ssh -o Option=Value" + test_client = SSHGitClient("git.samba.org") + self.assertEqual(test_client.ssh_command, "/path/to/ssh -o Option=Value") + + test_client = SSHGitClient("git.samba.org", ssh_command="ssh -o Option1=Value1") + self.assertEqual(test_client.ssh_command, "ssh -o Option1=Value1") + + del os.environ["GIT_SSH"] + del os.environ["GIT_SSH_COMMAND"] + + +class ReportStatusParserTests(TestCase): + def test_invalid_pack(self): + parser = ReportStatusParser() + parser.handle_packet(b"unpack error - foo bar") + parser.handle_packet(b"ok refs/foo/bar") + parser.handle_packet(None) + self.assertRaises(SendPackError, list, parser.check()) + + def test_update_refs_error(self): + parser = ReportStatusParser() + parser.handle_packet(b"unpack ok") + parser.handle_packet(b"ng refs/foo/bar need to pull") + parser.handle_packet(None) + self.assertEqual([(b"refs/foo/bar", "need to pull")], list(parser.check())) + + def test_ok(self): + parser = ReportStatusParser() + parser.handle_packet(b"unpack ok") + parser.handle_packet(b"ok refs/foo/bar") + parser.handle_packet(None) + self.assertEqual([(b"refs/foo/bar", None)], list(parser.check())) + + +class LocalGitClientTests(TestCase): + def test_get_url(self): + path = "/tmp/repo.git" + c = LocalGitClient() + + url = c.get_url(path) + self.assertEqual("file:///tmp/repo.git", url) + + def test_fetch_into_empty(self): + c = LocalGitClient() 
+ t = MemoryRepo() + s = open_repo("a.git") + self.addCleanup(tear_down_repo, s) + self.assertEqual(s.get_refs(), c.fetch(s.path, t).refs) + + def test_clone(self): + c = LocalGitClient() + s = open_repo("a.git") + self.addCleanup(tear_down_repo, s) + target = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, target) + result_repo = c.clone(s.path, target, mkdir=False) + self.addCleanup(result_repo.close) + expected = dict(s.get_refs()) + expected[b'refs/remotes/origin/HEAD'] = expected[b'HEAD'] + expected[b'refs/remotes/origin/master'] = expected[b'refs/heads/master'] + self.assertEqual(expected, result_repo.get_refs()) + + def test_fetch_empty(self): + c = LocalGitClient() + s = open_repo("a.git") + self.addCleanup(tear_down_repo, s) + out = BytesIO() + walker = {} + ret = c.fetch_pack( + s.path, lambda heads, **kwargs: [], graph_walker=walker, pack_data=out.write + ) + self.assertEqual( + { + b"HEAD": b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + b"refs/heads/master": b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + b"refs/tags/mytag": b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a", + b"refs/tags/mytag-packed": b"b0931cadc54336e78a1d980420e3268903b57a50", + }, + ret.refs, + ) + self.assertEqual({b"HEAD": b"refs/heads/master"}, ret.symrefs) + self.assertEqual( + b"PACK\x00\x00\x00\x02\x00\x00\x00\x00\x02\x9d\x08" + b"\x82;\xd8\xa8\xea\xb5\x10\xadj\xc7\\\x82<\xfd>\xd3\x1e", + out.getvalue(), + ) + + def test_fetch_pack_none(self): + c = LocalGitClient() + s = open_repo("a.git") + self.addCleanup(tear_down_repo, s) + out = BytesIO() + walker = MemoryRepo().get_graph_walker() + ret = c.fetch_pack( + s.path, + lambda heads, **kwargs: [b"a90fa2d900a17e99b433217e988c4eb4a2e9a097"], + graph_walker=walker, + pack_data=out.write, + ) + self.assertEqual({b"HEAD": b"refs/heads/master"}, ret.symrefs) + self.assertEqual( + { + b"HEAD": b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + b"refs/heads/master": b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + b"refs/tags/mytag": b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a", + b"refs/tags/mytag-packed": b"b0931cadc54336e78a1d980420e3268903b57a50", + }, + ret.refs, + ) + # Hardcoding is not ideal, but we'll fix that some other day.. 
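+        # (the prefix checked below is a pack header: b"PACK", version 2,
+        # then a big-endian object count of 7)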
+ self.assertTrue( + out.getvalue().startswith(b"PACK\x00\x00\x00\x02\x00\x00\x00\x07") + ) + + def test_send_pack_without_changes(self): + local = open_repo("a.git") + self.addCleanup(tear_down_repo, local) + + target = open_repo("a.git") + self.addCleanup(tear_down_repo, target) + + self.send_and_verify(b"master", local, target) + + def test_send_pack_with_changes(self): + local = open_repo("a.git") + self.addCleanup(tear_down_repo, local) + + target_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, target_path) + with Repo.init_bare(target_path) as target: + self.send_and_verify(b"master", local, target) + + def test_get_refs(self): + local = open_repo("refs.git") + self.addCleanup(tear_down_repo, local) + + client = LocalGitClient() + refs = client.get_refs(local.path) + self.assertDictEqual(local.refs.as_dict(), refs) + + def send_and_verify(self, branch, local, target): + """Send branch from local to remote repository and verify it worked.""" + client = LocalGitClient() + ref_name = b"refs/heads/" + branch + result = client.send_pack( + target.path, + lambda _: {ref_name: local.refs[ref_name]}, + local.generate_pack_data, + ) + + self.assertEqual(local.refs[ref_name], result.refs[ref_name]) + self.assertIs(None, result.agent) + self.assertEqual({}, result.ref_status) + + obj_local = local.get_object(result.refs[ref_name]) + obj_target = target.get_object(result.refs[ref_name]) + self.assertEqual(obj_local, obj_target) + + +class HttpGitClientTests(TestCase): + def test_get_url(self): + base_url = "https://github.com/jelmer/dulwich" + path = "/jelmer/dulwich" + c = HttpGitClient(base_url) + + url = c.get_url(path) + self.assertEqual("https://github.com/jelmer/dulwich", url) + + def test_get_url_bytes_path(self): + base_url = "https://github.com/jelmer/dulwich" + path_bytes = b"/jelmer/dulwich" + c = HttpGitClient(base_url) + + url = c.get_url(path_bytes) + self.assertEqual("https://github.com/jelmer/dulwich", url) + + def test_get_url_with_username_and_passwd(self): + base_url = "https://github.com/jelmer/dulwich" + path = "/jelmer/dulwich" + c = HttpGitClient(base_url, username="USERNAME", password="PASSWD") + + url = c.get_url(path) + self.assertEqual("https://github.com/jelmer/dulwich", url) + + def test_init_username_passwd_set(self): + url = "https://github.com/jelmer/dulwich" + + c = HttpGitClient(url, config=None, username="user", password="passwd") + self.assertEqual("user", c._username) + self.assertEqual("passwd", c._password) + + basic_auth = c.pool_manager.headers["authorization"] + auth_string = "%s:%s" % ("user", "passwd") + b64_credentials = base64.b64encode(auth_string.encode("latin1")) + expected_basic_auth = "Basic %s" % b64_credentials.decode("latin1") + self.assertEqual(basic_auth, expected_basic_auth) + + def test_init_username_set_no_password(self): + url = "https://github.com/jelmer/dulwich" + + c = HttpGitClient(url, config=None, username="user") + self.assertEqual("user", c._username) + self.assertIs(c._password, None) + + basic_auth = c.pool_manager.headers["authorization"] + auth_string = b"user:" + b64_credentials = base64.b64encode(auth_string) + expected_basic_auth = f"Basic {b64_credentials.decode('ascii')}" + self.assertEqual(basic_auth, expected_basic_auth) + + def test_init_no_username_passwd(self): + url = "https://github.com/jelmer/dulwich" + + c = HttpGitClient(url, config=None) + self.assertIs(None, c._username) + self.assertIs(None, c._password) + self.assertNotIn("authorization", c.pool_manager.headers) + + def 
test_from_parsedurl_username_only(self): + username = "user" + url = f"https://{username}@github.com/jelmer/dulwich" + + c = HttpGitClient.from_parsedurl(urlparse(url)) + self.assertEqual(c._username, username) + self.assertEqual(c._password, None) + + basic_auth = c.pool_manager.headers["authorization"] + auth_string = username.encode('ascii') + b":" + b64_credentials = base64.b64encode(auth_string) + expected_basic_auth = f"Basic {b64_credentials.decode('ascii')}" + self.assertEqual(basic_auth, expected_basic_auth) + + def test_from_parsedurl_on_url_with_quoted_credentials(self): + original_username = "john|the|first" + quoted_username = urlquote(original_username) + + original_password = "Ya#1$2%3" + quoted_password = urlquote(original_password) + + url = "https://{username}:{password}@github.com/jelmer/dulwich".format( + username=quoted_username, password=quoted_password + ) + + c = HttpGitClient.from_parsedurl(urlparse(url)) + self.assertEqual(original_username, c._username) + self.assertEqual(original_password, c._password) + + basic_auth = c.pool_manager.headers["authorization"] + auth_string = "%s:%s" % (original_username, original_password) + b64_credentials = base64.b64encode(auth_string.encode("latin1")) + expected_basic_auth = "Basic %s" % b64_credentials.decode("latin1") + self.assertEqual(basic_auth, expected_basic_auth) + + def test_url_redirect_location(self): + from urllib3.response import HTTPResponse + + test_data = { + "https://gitlab.com/inkscape/inkscape/": { + "redirect_url": "https://gitlab.com/inkscape/inkscape.git/", + "refs_data": ( + b"001e# service=git-upload-pack\n00000032" + b"fb2bebf4919a011f0fd7cec085443d0031228e76 " + b"HEAD\n0000" + ), + }, + "https://github.com/jelmer/dulwich/": { + "redirect_url": "https://github.com/jelmer/dulwich/", + "refs_data": ( + b"001e# service=git-upload-pack\n00000032" + b"3ff25e09724aa4d86ea5bca7d5dd0399a3c8bfcf " + b"HEAD\n0000" + ), + }, + } + + tail = "info/refs?service=git-upload-pack" + + # we need to mock urllib3.PoolManager as this test will fail + # otherwise without an active internet connection + class PoolManagerMock: + def __init__(self): + self.headers = {} + + def request(self, method, url, fields=None, headers=None, redirect=True, preload_content=True): + base_url = url[: -len(tail)] + redirect_base_url = test_data[base_url]["redirect_url"] + redirect_url = redirect_base_url + tail + headers = { + "Content-Type": "application/x-git-upload-pack-advertisement" + } + body = test_data[base_url]["refs_data"] + # urllib3 handles automatic redirection by default + status = 200 + request_url = redirect_url + # simulate urllib3 behavior when redirect parameter is False + if redirect is False: + request_url = url + if redirect_base_url != base_url: + body = b"" + headers["location"] = redirect_url + status = 301 + return HTTPResponse( + body=BytesIO(body), + headers=headers, + request_method=method, + request_url=request_url, + preload_content=preload_content, + status=status, + ) + + pool_manager = PoolManagerMock() + + for base_url in test_data.keys(): + # instantiate HttpGitClient with mocked pool manager + c = HttpGitClient(base_url, pool_manager=pool_manager, config=None) + # call method that detects url redirection + _, _, processed_url = c._discover_references(b"git-upload-pack", base_url) + + # send the same request as the method above without redirection + resp = c.pool_manager.request("GET", base_url + tail, redirect=False) + + # check expected behavior of urllib3 + redirect_location = 
resp.get_redirect_location() + if resp.status == 200: + self.assertFalse(redirect_location) + + if redirect_location: + # check that url redirection has been correctly detected + self.assertEqual(processed_url, redirect_location[: -len(tail)]) + else: + # check also the no redirection case + self.assertEqual(processed_url, base_url) + + +class TCPGitClientTests(TestCase): + def test_get_url(self): + host = "github.com" + path = "/jelmer/dulwich" + c = TCPGitClient(host) + + url = c.get_url(path) + self.assertEqual("git://github.com/jelmer/dulwich", url) + + def test_get_url_with_port(self): + host = "github.com" + path = "/jelmer/dulwich" + port = 9090 + c = TCPGitClient(host, port=port) + + url = c.get_url(path) + self.assertEqual("git://github.com:9090/jelmer/dulwich", url) + + +class DefaultUrllib3ManagerTest(TestCase): + def test_no_config(self): + manager = default_urllib3_manager(config=None) + self.assertEqual(manager.connection_pool_kw["cert_reqs"], "CERT_REQUIRED") + + def test_config_no_proxy(self): + import urllib3 + + manager = default_urllib3_manager(config=ConfigDict()) + self.assertNotIsInstance(manager, urllib3.ProxyManager) + self.assertIsInstance(manager, urllib3.PoolManager) + + def test_config_no_proxy_custom_cls(self): + import urllib3 + + class CustomPoolManager(urllib3.PoolManager): + pass + + manager = default_urllib3_manager( + config=ConfigDict(), pool_manager_cls=CustomPoolManager + ) + self.assertIsInstance(manager, CustomPoolManager) + + def test_config_ssl(self): + config = ConfigDict() + config.set(b"http", b"sslVerify", b"true") + manager = default_urllib3_manager(config=config) + self.assertEqual(manager.connection_pool_kw["cert_reqs"], "CERT_REQUIRED") + + def test_config_no_ssl(self): + config = ConfigDict() + config.set(b"http", b"sslVerify", b"false") + manager = default_urllib3_manager(config=config) + self.assertEqual(manager.connection_pool_kw["cert_reqs"], "CERT_NONE") + + def test_config_proxy(self): + import urllib3 + + config = ConfigDict() + config.set(b"http", b"proxy", b"http://localhost:3128/") + manager = default_urllib3_manager(config=config) + + self.assertIsInstance(manager, urllib3.ProxyManager) + self.assertTrue(hasattr(manager, "proxy")) + self.assertEqual(manager.proxy.scheme, "http") + self.assertEqual(manager.proxy.host, "localhost") + self.assertEqual(manager.proxy.port, 3128) + + def test_environment_proxy(self): + import urllib3 + + config = ConfigDict() + os.environ["http_proxy"] = "http://myproxy:8080" + manager = default_urllib3_manager(config=config) + self.assertIsInstance(manager, urllib3.ProxyManager) + self.assertTrue(hasattr(manager, "proxy")) + self.assertEqual(manager.proxy.scheme, "http") + self.assertEqual(manager.proxy.host, "myproxy") + self.assertEqual(manager.proxy.port, 8080) + del os.environ["http_proxy"] + + def test_config_proxy_custom_cls(self): + import urllib3 + + class CustomProxyManager(urllib3.ProxyManager): + pass + + config = ConfigDict() + config.set(b"http", b"proxy", b"http://localhost:3128/") + manager = default_urllib3_manager( + config=config, proxy_manager_cls=CustomProxyManager + ) + self.assertIsInstance(manager, CustomProxyManager) + + def test_config_no_verify_ssl(self): + manager = default_urllib3_manager(config=None, cert_reqs="CERT_NONE") + self.assertEqual(manager.connection_pool_kw["cert_reqs"], "CERT_NONE") + + +class SubprocessSSHVendorTests(TestCase): + def setUp(self): + # Monkey Patch client subprocess popen + self._orig_popen = dulwich.client.subprocess.Popen + 
dulwich.client.subprocess.Popen = DummyPopen + + def tearDown(self): + dulwich.client.subprocess.Popen = self._orig_popen + + def test_run_command_dashes(self): + vendor = SubprocessSSHVendor() + self.assertRaises( + StrangeHostname, + vendor.run_command, + "--weird-host", + "git-clone-url", + ) + + def test_run_command_password(self): + vendor = SubprocessSSHVendor() + self.assertRaises( + NotImplementedError, + vendor.run_command, + "host", + "git-clone-url", + password="12345", + ) + + def test_run_command_password_and_privkey(self): + vendor = SubprocessSSHVendor() + self.assertRaises( + NotImplementedError, + vendor.run_command, + "host", + "git-clone-url", + password="12345", + key_filename="/tmp/id_rsa", + ) + + def test_run_command_with_port_username_and_privkey(self): + expected = [ + "ssh", + "-x", + "-p", + "2200", + "-i", + "/tmp/id_rsa", + "user@host", + "git-clone-url", + ] + + vendor = SubprocessSSHVendor() + command = vendor.run_command( + "host", + "git-clone-url", + username="user", + port="2200", + key_filename="/tmp/id_rsa", + ) + + args = command.proc.args + + self.assertListEqual(expected, args[0]) + + def test_run_with_ssh_command(self): + expected = [ + "/path/to/ssh", + "-o", + "Option=Value", + "-x", + "host", + "git-clone-url", + ] + + vendor = SubprocessSSHVendor() + command = vendor.run_command( + "host", + "git-clone-url", + ssh_command="/path/to/ssh -o Option=Value", + ) + + args = command.proc.args + self.assertListEqual(expected, args[0]) + + +class PLinkSSHVendorTests(TestCase): + def setUp(self): + # Monkey Patch client subprocess popen + self._orig_popen = dulwich.client.subprocess.Popen + dulwich.client.subprocess.Popen = DummyPopen + + def tearDown(self): + dulwich.client.subprocess.Popen = self._orig_popen + + def test_run_command_dashes(self): + vendor = PLinkSSHVendor() + self.assertRaises( + StrangeHostname, + vendor.run_command, + "--weird-host", + "git-clone-url", + ) + + def test_run_command_password_and_privkey(self): + vendor = PLinkSSHVendor() + + warnings.simplefilter("always", UserWarning) + self.addCleanup(warnings.resetwarnings) + warnings_list, restore_warnings = setup_warning_catcher() + self.addCleanup(restore_warnings) + + command = vendor.run_command( + "host", + "git-clone-url", + password="12345", + key_filename="/tmp/id_rsa", + ) + + expected_warning = UserWarning( + "Invoking PLink with a password exposes the password in the " + "process list." 
+ ) + + for w in warnings_list: + if type(w) == type(expected_warning) and w.args == expected_warning.args: + break + else: + raise AssertionError( + "Expected warning %r not in %r" % (expected_warning, warnings_list) + ) + + args = command.proc.args + + if sys.platform == "win32": + binary = ["plink.exe", "-ssh"] + else: + binary = ["plink", "-ssh"] + expected = binary + [ + "-pw", + "12345", + "-i", + "/tmp/id_rsa", + "host", + "git-clone-url", + ] + self.assertListEqual(expected, args[0]) + + def test_run_command_password(self): + if sys.platform == "win32": + binary = ["plink.exe", "-ssh"] + else: + binary = ["plink", "-ssh"] + expected = binary + ["-pw", "12345", "host", "git-clone-url"] + + vendor = PLinkSSHVendor() + + warnings.simplefilter("always", UserWarning) + self.addCleanup(warnings.resetwarnings) + warnings_list, restore_warnings = setup_warning_catcher() + self.addCleanup(restore_warnings) + + command = vendor.run_command("host", "git-clone-url", password="12345") + + expected_warning = UserWarning( + "Invoking PLink with a password exposes the password in the " + "process list." + ) + + for w in warnings_list: + if type(w) == type(expected_warning) and w.args == expected_warning.args: + break + else: + raise AssertionError( + "Expected warning %r not in %r" % (expected_warning, warnings_list) + ) + + args = command.proc.args + + self.assertListEqual(expected, args[0]) + + def test_run_command_with_port_username_and_privkey(self): + if sys.platform == "win32": + binary = ["plink.exe", "-ssh"] + else: + binary = ["plink", "-ssh"] + expected = binary + [ + "-P", + "2200", + "-i", + "/tmp/id_rsa", + "user@host", + "git-clone-url", + ] + + vendor = PLinkSSHVendor() + command = vendor.run_command( + "host", + "git-clone-url", + username="user", + port="2200", + key_filename="/tmp/id_rsa", + ) + + args = command.proc.args + + self.assertListEqual(expected, args[0]) + + def test_run_with_ssh_command(self): + expected = [ + "/path/to/plink", + "-x", + "host", + "git-clone-url", + ] + + vendor = SubprocessSSHVendor() + command = vendor.run_command( + "host", + "git-clone-url", + ssh_command="/path/to/plink", + ) + + args = command.proc.args + self.assertListEqual(expected, args[0]) + + +class RsyncUrlTests(TestCase): + def test_simple(self): + self.assertEqual(parse_rsync_url("foo:bar/path"), (None, "foo", "bar/path")) + self.assertEqual( + parse_rsync_url("user@foo:bar/path"), ("user", "foo", "bar/path") + ) + + def test_path(self): + self.assertRaises(ValueError, parse_rsync_url, "/path") + + +class CheckWantsTests(TestCase): + def test_fine(self): + check_wants( + [b"2f3dc7a53fb752a6961d3a56683df46d4d3bf262"], + {b"refs/heads/blah": b"2f3dc7a53fb752a6961d3a56683df46d4d3bf262"}, + ) + + def test_missing(self): + self.assertRaises( + InvalidWants, + check_wants, + [b"2f3dc7a53fb752a6961d3a56683df46d4d3bf262"], + {b"refs/heads/blah": b"3f3dc7a53fb752a6961d3a56683df46d4d3bf262"}, + ) + + def test_annotated(self): + self.assertRaises( + InvalidWants, + check_wants, + [b"2f3dc7a53fb752a6961d3a56683df46d4d3bf262"], + { + b"refs/heads/blah": b"3f3dc7a53fb752a6961d3a56683df46d4d3bf262", + b"refs/heads/blah^{}": b"2f3dc7a53fb752a6961d3a56683df46d4d3bf262", + }, + ) + + +class FetchPackResultTests(TestCase): + def test_eq(self): + self.assertEqual( + FetchPackResult( + {b"refs/heads/master": b"2f3dc7a53fb752a6961d3a56683df46d4d3bf262"}, + {}, + b"user/agent", + ), + FetchPackResult( + {b"refs/heads/master": b"2f3dc7a53fb752a6961d3a56683df46d4d3bf262"}, + {}, + b"user/agent", + ), + ) + + 
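+# Note: get_credentials_from_store() reads git-credential-store style files,
+# one URL per line with the credentials embedded in the netloc, e.g.
+# "https://user:pass@example.org". The tests below match such a file by
+# scheme, hostname and (optionally) username.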
+class GitCredentialStoreTests(TestCase):
+    @classmethod
+    def setUpClass(cls):
+        with tempfile.NamedTemporaryFile(delete=False) as f:
+            f.write(b"https://user:pass@example.org\n")
+        cls.fname = f.name
+
+    @classmethod
+    def tearDownClass(cls):
+        os.unlink(cls.fname)
+
+    def test_nonmatching_scheme(self):
+        self.assertEqual(
+            get_credentials_from_store(b"http", b"example.org", fnames=[self.fname]),
+            None,
+        )
+
+    def test_nonmatching_hostname(self):
+        self.assertEqual(
+            get_credentials_from_store(b"https", b"noentry.org", fnames=[self.fname]),
+            None,
+        )
+
+    def test_match_without_username(self):
+        self.assertEqual(
+            get_credentials_from_store(b"https", b"example.org", fnames=[self.fname]),
+            (b"user", b"pass"),
+        )
+
+    def test_match_with_matching_username(self):
+        self.assertEqual(
+            get_credentials_from_store(
+                b"https", b"example.org", b"user", fnames=[self.fname]
+            ),
+            (b"user", b"pass"),
+        )
+
+    def test_no_match_with_nonmatching_username(self):
+        self.assertEqual(
+            get_credentials_from_store(
+                b"https", b"example.org", b"otheruser", fnames=[self.fname]
+            ),
+            None,
+        )
+
+
+class RemoteErrorFromStderrTests(TestCase):
+    def test_nothing(self):
+        self.assertEqual(_remote_error_from_stderr(None), HangupException())
+
+    def test_error_line(self):
+        b = BytesIO(
+            b"""\
+This is some random output.
+ERROR: This is the actual error
+with a tail
+"""
+        )
+        self.assertEqual(
+            _remote_error_from_stderr(b),
+            GitProtocolError("This is the actual error"),
+        )
+
+    def test_no_error_line(self):
+        b = BytesIO(
+            b"""\
+This is output without an error line.
+And this line is just random noise, too.
+"""
+        )
+        self.assertEqual(
+            _remote_error_from_stderr(b),
+            HangupException(
+                [
+                    b"This is output without an error line.",
+                    b"And this line is just random noise, too.",
+                ]
+            ),
+        )
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_config.py b/env/lib/python3.8/site-packages/dulwich/tests/test_config.py
new file mode 100644
index 00000000..5213e4c1
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/test_config.py
@@ -0,0 +1,476 @@
+# test_config.py -- Tests for reading and writing configuration files
+# Copyright (C) 2011 Jelmer Vernooij
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+# + +"""Tests for reading and writing configuration files.""" + +import os +import sys +from io import BytesIO +from unittest import skipIf +from unittest.mock import patch + +from dulwich.config import ( + ConfigDict, + ConfigFile, + StackedConfig, + _check_section_name, + _check_variable_name, + _format_string, + _escape_value, + _parse_string, + parse_submodules, + apply_instead_of, +) +from dulwich.tests import ( + TestCase, +) + + +class ConfigFileTests(TestCase): + def from_file(self, text): + return ConfigFile.from_file(BytesIO(text)) + + def test_empty(self): + ConfigFile() + + def test_eq(self): + self.assertEqual(ConfigFile(), ConfigFile()) + + def test_default_config(self): + cf = self.from_file( + b"""[core] +\trepositoryformatversion = 0 +\tfilemode = true +\tbare = false +\tlogallrefupdates = true +""" + ) + self.assertEqual( + ConfigFile( + { + (b"core",): { + b"repositoryformatversion": b"0", + b"filemode": b"true", + b"bare": b"false", + b"logallrefupdates": b"true", + } + } + ), + cf, + ) + + def test_from_file_empty(self): + cf = self.from_file(b"") + self.assertEqual(ConfigFile(), cf) + + def test_empty_line_before_section(self): + cf = self.from_file(b"\n[section]\n") + self.assertEqual(ConfigFile({(b"section",): {}}), cf) + + def test_comment_before_section(self): + cf = self.from_file(b"# foo\n[section]\n") + self.assertEqual(ConfigFile({(b"section",): {}}), cf) + + def test_comment_after_section(self): + cf = self.from_file(b"[section] # foo\n") + self.assertEqual(ConfigFile({(b"section",): {}}), cf) + + def test_comment_after_variable(self): + cf = self.from_file(b"[section]\nbar= foo # a comment\n") + self.assertEqual(ConfigFile({(b"section",): {b"bar": b"foo"}}), cf) + + def test_comment_character_within_value_string(self): + cf = self.from_file(b'[section]\nbar= "foo#bar"\n') + self.assertEqual(ConfigFile({(b"section",): {b"bar": b"foo#bar"}}), cf) + + def test_comment_character_within_section_string(self): + cf = self.from_file(b'[branch "foo#bar"] # a comment\nbar= foo\n') + self.assertEqual(ConfigFile({(b"branch", b"foo#bar"): {b"bar": b"foo"}}), cf) + + def test_closing_bracket_within_section_string(self): + cf = self.from_file(b'[branch "foo]bar"] # a comment\nbar= foo\n') + self.assertEqual(ConfigFile({(b"branch", b"foo]bar"): {b"bar": b"foo"}}), cf) + + def test_from_file_section(self): + cf = self.from_file(b"[core]\nfoo = bar\n") + self.assertEqual(b"bar", cf.get((b"core",), b"foo")) + self.assertEqual(b"bar", cf.get((b"core", b"foo"), b"foo")) + + def test_from_file_multiple(self): + cf = self.from_file(b"[core]\nfoo = bar\nfoo = blah\n") + self.assertEqual([b"bar", b"blah"], list(cf.get_multivar((b"core",), b"foo"))) + self.assertEqual([], list(cf.get_multivar((b"core", ), b"blah"))) + + def test_from_file_utf8_bom(self): + text = "[core]\nfoo = b\u00e4r\n".encode("utf-8-sig") + cf = self.from_file(text) + self.assertEqual(b"b\xc3\xa4r", cf.get((b"core",), b"foo")) + + def test_from_file_section_case_insensitive_lower(self): + cf = self.from_file(b"[cOre]\nfOo = bar\n") + self.assertEqual(b"bar", cf.get((b"core",), b"foo")) + self.assertEqual(b"bar", cf.get((b"core", b"foo"), b"foo")) + + def test_from_file_section_case_insensitive_mixed(self): + cf = self.from_file(b"[cOre]\nfOo = bar\n") + self.assertEqual(b"bar", cf.get((b"core",), b"fOo")) + self.assertEqual(b"bar", cf.get((b"cOre", b"fOo"), b"fOo")) + + def test_from_file_with_mixed_quoted(self): + cf = self.from_file(b'[core]\nfoo = "bar"la\n') + self.assertEqual(b"barla", cf.get((b"core",), 
b"foo")) + + def test_from_file_section_with_open_brackets(self): + self.assertRaises(ValueError, self.from_file, b"[core\nfoo = bar\n") + + def test_from_file_value_with_open_quoted(self): + self.assertRaises(ValueError, self.from_file, b'[core]\nfoo = "bar\n') + + def test_from_file_with_quotes(self): + cf = self.from_file(b"[core]\n" b'foo = " bar"\n') + self.assertEqual(b" bar", cf.get((b"core",), b"foo")) + + def test_from_file_with_interrupted_line(self): + cf = self.from_file(b"[core]\n" b"foo = bar\\\n" b" la\n") + self.assertEqual(b"barla", cf.get((b"core",), b"foo")) + + def test_from_file_with_boolean_setting(self): + cf = self.from_file(b"[core]\n" b"foo\n") + self.assertEqual(b"true", cf.get((b"core",), b"foo")) + + def test_from_file_subsection(self): + cf = self.from_file(b'[branch "foo"]\nfoo = bar\n') + self.assertEqual(b"bar", cf.get((b"branch", b"foo"), b"foo")) + + def test_from_file_subsection_invalid(self): + self.assertRaises(ValueError, self.from_file, b'[branch "foo]\nfoo = bar\n') + + def test_from_file_subsection_not_quoted(self): + cf = self.from_file(b"[branch.foo]\nfoo = bar\n") + self.assertEqual(b"bar", cf.get((b"branch", b"foo"), b"foo")) + + def test_write_preserve_multivar(self): + cf = self.from_file(b"[core]\nfoo = bar\nfoo = blah\n") + f = BytesIO() + cf.write_to_file(f) + self.assertEqual(b"[core]\n\tfoo = bar\n\tfoo = blah\n", f.getvalue()) + + def test_write_to_file_empty(self): + c = ConfigFile() + f = BytesIO() + c.write_to_file(f) + self.assertEqual(b"", f.getvalue()) + + def test_write_to_file_section(self): + c = ConfigFile() + c.set((b"core",), b"foo", b"bar") + f = BytesIO() + c.write_to_file(f) + self.assertEqual(b"[core]\n\tfoo = bar\n", f.getvalue()) + + def test_write_to_file_subsection(self): + c = ConfigFile() + c.set((b"branch", b"blie"), b"foo", b"bar") + f = BytesIO() + c.write_to_file(f) + self.assertEqual(b'[branch "blie"]\n\tfoo = bar\n', f.getvalue()) + + def test_same_line(self): + cf = self.from_file(b"[branch.foo] foo = bar\n") + self.assertEqual(b"bar", cf.get((b"branch", b"foo"), b"foo")) + + def test_quoted_newlines_windows(self): + cf = self.from_file( + b"[alias]\r\n" + b"c = '!f() { \\\r\n" + b" printf '[git commit -m \\\"%s\\\"]\\n' \\\"$*\\\" && \\\r\n" + b" git commit -m \\\"$*\\\"; \\\r\n" + b" }; f'\r\n") + self.assertEqual(list(cf.sections()), [(b'alias', )]) + self.assertEqual( + b'\'!f() { printf \'[git commit -m "%s"]\n\' ' + b'"$*" && git commit -m "$*"', + cf.get((b"alias", ), b"c")) + + def test_quoted(self): + cf = self.from_file( + b"""[gui] +\tfontdiff = -family \\\"Ubuntu Mono\\\" -size 11 -overstrike 0 +""" + ) + self.assertEqual( + ConfigFile( + { + (b"gui",): { + b"fontdiff": b'-family "Ubuntu Mono" -size 11 -overstrike 0', + } + } + ), + cf, + ) + + def test_quoted_multiline(self): + cf = self.from_file( + b"""[alias] +who = \"!who() {\\ + git log --no-merges --pretty=format:'%an - %ae' $@ | uniq -c | sort -rn;\\ +};\\ +who\" +""" + ) + self.assertEqual( + ConfigFile( + { + (b"alias",): { + b"who": ( + b"!who() {git log --no-merges --pretty=format:'%an - " + b"%ae' $@ | uniq -c | sort -rn;};who" + ) + } + } + ), + cf, + ) + + def test_set_hash_gets_quoted(self): + c = ConfigFile() + c.set(b"xandikos", b"color", b"#665544") + f = BytesIO() + c.write_to_file(f) + self.assertEqual(b'[xandikos]\n\tcolor = "#665544"\n', f.getvalue()) + + +class ConfigDictTests(TestCase): + def test_get_set(self): + cd = ConfigDict() + self.assertRaises(KeyError, cd.get, b"foo", b"core") + cd.set((b"core",), b"foo", b"bla") 
+ self.assertEqual(b"bla", cd.get((b"core",), b"foo")) + cd.set((b"core",), b"foo", b"bloe") + self.assertEqual(b"bloe", cd.get((b"core",), b"foo")) + + def test_get_boolean(self): + cd = ConfigDict() + cd.set((b"core",), b"foo", b"true") + self.assertTrue(cd.get_boolean((b"core",), b"foo")) + cd.set((b"core",), b"foo", b"false") + self.assertFalse(cd.get_boolean((b"core",), b"foo")) + cd.set((b"core",), b"foo", b"invalid") + self.assertRaises(ValueError, cd.get_boolean, (b"core",), b"foo") + + def test_dict(self): + cd = ConfigDict() + cd.set((b"core",), b"foo", b"bla") + cd.set((b"core2",), b"foo", b"bloe") + + self.assertEqual([(b"core",), (b"core2",)], list(cd.keys())) + self.assertEqual(cd[(b"core",)], {b"foo": b"bla"}) + + cd[b"a"] = b"b" + self.assertEqual(cd[b"a"], b"b") + + def test_items(self): + cd = ConfigDict() + cd.set((b"core",), b"foo", b"bla") + cd.set((b"core2",), b"foo", b"bloe") + + self.assertEqual([(b"foo", b"bla")], list(cd.items((b"core",)))) + + def test_items_nonexistant(self): + cd = ConfigDict() + cd.set((b"core2",), b"foo", b"bloe") + + self.assertEqual([], list(cd.items((b"core",)))) + + def test_sections(self): + cd = ConfigDict() + cd.set((b"core2",), b"foo", b"bloe") + + self.assertEqual([(b"core2",)], list(cd.sections())) + + +class StackedConfigTests(TestCase): + def setUp(self): + super(StackedConfigTests, self).setUp() + self._old_path = os.environ.get("PATH") + + def tearDown(self): + super(StackedConfigTests, self).tearDown() + os.environ["PATH"] = self._old_path + + def test_default_backends(self): + StackedConfig.default_backends() + + @skipIf(sys.platform != "win32", "Windows specific config location.") + def test_windows_config_from_path(self): + from dulwich.config import get_win_system_paths + + install_dir = os.path.join("C:", "foo", "Git") + os.environ["PATH"] = os.path.join(install_dir, "cmd") + with patch("os.path.exists", return_value=True): + paths = set(get_win_system_paths()) + self.assertEqual( + { + os.path.join(os.environ.get("PROGRAMDATA"), "Git", "config"), + os.path.join(install_dir, "etc", "gitconfig"), + }, + paths, + ) + + @skipIf(sys.platform != "win32", "Windows specific config location.") + def test_windows_config_from_reg(self): + import winreg + + from dulwich.config import get_win_system_paths + + del os.environ["PATH"] + install_dir = os.path.join("C:", "foo", "Git") + with patch("winreg.OpenKey"): + with patch( + "winreg.QueryValueEx", + return_value=(install_dir, winreg.REG_SZ), + ): + paths = set(get_win_system_paths()) + self.assertEqual( + { + os.path.join(os.environ.get("PROGRAMDATA"), "Git", "config"), + os.path.join(install_dir, "etc", "gitconfig"), + }, + paths, + ) + + +class EscapeValueTests(TestCase): + def test_nothing(self): + self.assertEqual(b"foo", _escape_value(b"foo")) + + def test_backslash(self): + self.assertEqual(b"foo\\\\", _escape_value(b"foo\\")) + + def test_newline(self): + self.assertEqual(b"foo\\n", _escape_value(b"foo\n")) + + +class FormatStringTests(TestCase): + def test_quoted(self): + self.assertEqual(b'" foo"', _format_string(b" foo")) + self.assertEqual(b'"\\tfoo"', _format_string(b"\tfoo")) + + def test_not_quoted(self): + self.assertEqual(b"foo", _format_string(b"foo")) + self.assertEqual(b"foo bar", _format_string(b"foo bar")) + + +class ParseStringTests(TestCase): + def test_quoted(self): + self.assertEqual(b" foo", _parse_string(b'" foo"')) + self.assertEqual(b"\tfoo", _parse_string(b'"\\tfoo"')) + + def test_not_quoted(self): + self.assertEqual(b"foo", _parse_string(b"foo")) + 
self.assertEqual(b"foo bar", _parse_string(b"foo bar")) + + def test_nothing(self): + self.assertEqual(b"", _parse_string(b"")) + + def test_tab(self): + self.assertEqual(b"\tbar\t", _parse_string(b"\\tbar\\t")) + + def test_newline(self): + self.assertEqual(b"\nbar\t", _parse_string(b"\\nbar\\t\t")) + + def test_quote(self): + self.assertEqual(b'"foo"', _parse_string(b'\\"foo\\"')) + + +class CheckVariableNameTests(TestCase): + def test_invalid(self): + self.assertFalse(_check_variable_name(b"foo ")) + self.assertFalse(_check_variable_name(b"bar,bar")) + self.assertFalse(_check_variable_name(b"bar.bar")) + + def test_valid(self): + self.assertTrue(_check_variable_name(b"FOO")) + self.assertTrue(_check_variable_name(b"foo")) + self.assertTrue(_check_variable_name(b"foo-bar")) + + +class CheckSectionNameTests(TestCase): + def test_invalid(self): + self.assertFalse(_check_section_name(b"foo ")) + self.assertFalse(_check_section_name(b"bar,bar")) + + def test_valid(self): + self.assertTrue(_check_section_name(b"FOO")) + self.assertTrue(_check_section_name(b"foo")) + self.assertTrue(_check_section_name(b"foo-bar")) + self.assertTrue(_check_section_name(b"bar.bar")) + + +class SubmodulesTests(TestCase): + def testSubmodules(self): + cf = ConfigFile.from_file( + BytesIO( + b"""\ +[submodule "core/lib"] +\tpath = core/lib +\turl = https://github.com/phhusson/QuasselC.git +""" + ) + ) + got = list(parse_submodules(cf)) + self.assertEqual( + [ + ( + b"core/lib", + b"https://github.com/phhusson/QuasselC.git", + b"core/lib", + ) + ], + got, + ) + + +class ApplyInsteadOfTests(TestCase): + def test_none(self): + config = ConfigDict() + self.assertEqual( + 'https://example.com/', apply_instead_of(config, 'https://example.com/')) + + def test_apply(self): + config = ConfigDict() + config.set( + ('url', 'https://samba.org/'), 'insteadOf', 'https://example.com/') + self.assertEqual( + 'https://samba.org/', + apply_instead_of(config, 'https://example.com/')) + + def test_apply_multiple(self): + config = ConfigDict() + config.set( + ('url', 'https://samba.org/'), 'insteadOf', 'https://blah.com/') + config.set( + ('url', 'https://samba.org/'), 'insteadOf', 'https://example.com/') + self.assertEqual( + [b'https://blah.com/', b'https://example.com/'], + list(config.get_multivar(('url', 'https://samba.org/'), 'insteadOf'))) + self.assertEqual( + 'https://samba.org/', + apply_instead_of(config, 'https://example.com/')) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_credentials.py b/env/lib/python3.8/site-packages/dulwich/tests/test_credentials.py new file mode 100644 index 00000000..7bb90e56 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_credentials.py @@ -0,0 +1,75 @@ +# test_credentials.py -- tests for credentials.py + +# Copyright (C) 2022 Daniele Trifirò +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +from urllib.parse import urlparse + +from dulwich.config import ConfigDict +from dulwich.credentials import ( + match_partial_url, + match_urls, + urlmatch_credential_sections, +) +from dulwich.tests import TestCase + + +class TestCredentialHelpersUtils(TestCase): + + def test_match_urls(self): + url = urlparse("https://github.com/jelmer/dulwich/") + url_1 = urlparse("https://github.com/jelmer/dulwich") + url_2 = urlparse("https://github.com/jelmer") + url_3 = urlparse("https://github.com") + self.assertTrue(match_urls(url, url_1)) + self.assertTrue(match_urls(url, url_2)) + self.assertTrue(match_urls(url, url_3)) + + non_matching = urlparse("https://git.sr.ht/") + self.assertFalse(match_urls(url, non_matching)) + + def test_match_partial_url(self): + url = urlparse("https://github.com/jelmer/dulwich/") + self.assertTrue(match_partial_url(url, "github.com")) + self.assertFalse(match_partial_url(url, "github.com/jelmer/")) + self.assertTrue(match_partial_url(url, "github.com/jelmer/dulwich")) + self.assertFalse(match_partial_url(url, "github.com/jel")) + self.assertFalse(match_partial_url(url, "github.com/jel/")) + + def test_urlmatch_credential_sections(self): + config = ConfigDict() + config.set((b"credential", "https://github.com"), b"helper", "foo") + config.set((b"credential", "git.sr.ht"), b"helper", "foo") + config.set(b"credential", b"helper", "bar") + + self.assertEqual( + list(urlmatch_credential_sections(config, "https://github.com")), [ + (b"credential", b"https://github.com"), + (b"credential",), + ]) + + self.assertEqual( + list(urlmatch_credential_sections(config, "https://git.sr.ht")), [ + (b"credential", b"git.sr.ht"), + (b"credential",), + ]) + + self.assertEqual( + list(urlmatch_credential_sections(config, "missing_url")), [ + (b"credential",)]) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_diff_tree.py b/env/lib/python3.8/site-packages/dulwich/tests/test_diff_tree.py new file mode 100644 index 00000000..20cecaad --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_diff_tree.py @@ -0,0 +1,1157 @@ +# test_diff_tree.py -- Tests for file and tree diff utilities. +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +"""Tests for file and tree diff utilities.""" + +from itertools import permutations +from dulwich.diff_tree import ( + CHANGE_MODIFY, + CHANGE_RENAME, + CHANGE_COPY, + CHANGE_UNCHANGED, + TreeChange, + _merge_entries, + _merge_entries_py, + tree_changes, + tree_changes_for_merge, + _count_blocks, + _count_blocks_py, + _similarity_score, + _tree_change_key, + RenameDetector, + _is_tree, + _is_tree_py, +) +from dulwich.index import ( + commit_tree, +) +from dulwich.object_store import ( + MemoryObjectStore, +) +from dulwich.objects import ( + ShaFile, + Blob, + TreeEntry, + Tree, +) +from dulwich.tests import ( + TestCase, +) +from dulwich.tests.utils import ( + F, + make_object, + functest_builder, + ext_functest_builder, +) + + +class DiffTestCase(TestCase): + def setUp(self): + super(DiffTestCase, self).setUp() + self.store = MemoryObjectStore() + self.empty_tree = self.commit_tree([]) + + def commit_tree(self, entries): + commit_blobs = [] + for entry in entries: + if len(entry) == 2: + path, obj = entry + mode = F + else: + path, obj, mode = entry + if isinstance(obj, Blob): + self.store.add_object(obj) + sha = obj.id + else: + sha = obj + commit_blobs.append((path, sha, mode)) + return self.store[commit_tree(self.store, commit_blobs)] + + +class TreeChangesTest(DiffTestCase): + def setUp(self): + super(TreeChangesTest, self).setUp() + self.detector = RenameDetector(self.store) + + def assertMergeFails(self, merge_entries, name, mode, sha): + t = Tree() + t[name] = (mode, sha) + self.assertRaises((TypeError, ValueError), merge_entries, "", t, t) + + def _do_test_merge_entries(self, merge_entries): + blob_a1 = make_object(Blob, data=b"a1") + blob_a2 = make_object(Blob, data=b"a2") + blob_b1 = make_object(Blob, data=b"b1") + blob_c2 = make_object(Blob, data=b"c2") + tree1 = self.commit_tree([(b"a", blob_a1, 0o100644), (b"b", blob_b1, 0o100755)]) + tree2 = self.commit_tree([(b"a", blob_a2, 0o100644), (b"c", blob_c2, 0o100755)]) + + self.assertEqual([], merge_entries(b"", self.empty_tree, self.empty_tree)) + self.assertEqual( + [ + ((None, None, None), (b"a", 0o100644, blob_a1.id)), + ((None, None, None), (b"b", 0o100755, blob_b1.id)), + ], + merge_entries(b"", self.empty_tree, tree1), + ) + self.assertEqual( + [ + ((None, None, None), (b"x/a", 0o100644, blob_a1.id)), + ((None, None, None), (b"x/b", 0o100755, blob_b1.id)), + ], + merge_entries(b"x", self.empty_tree, tree1), + ) + + self.assertEqual( + [ + ((b"a", 0o100644, blob_a2.id), (None, None, None)), + ((b"c", 0o100755, blob_c2.id), (None, None, None)), + ], + merge_entries(b"", tree2, self.empty_tree), + ) + + self.assertEqual( + [ + ((b"a", 0o100644, blob_a1.id), (b"a", 0o100644, blob_a2.id)), + ((b"b", 0o100755, blob_b1.id), (None, None, None)), + ((None, None, None), (b"c", 0o100755, blob_c2.id)), + ], + merge_entries(b"", tree1, tree2), + ) + + self.assertEqual( + [ + ((b"a", 0o100644, blob_a2.id), (b"a", 0o100644, blob_a1.id)), + ((None, None, None), (b"b", 0o100755, blob_b1.id)), + ((b"c", 0o100755, blob_c2.id), (None, None, None)), + ], + merge_entries(b"", tree2, tree1), + ) + + self.assertMergeFails(merge_entries, 0xDEADBEEF, 0o100644, "1" * 40) + self.assertMergeFails(merge_entries, b"a", b"deadbeef", "1" * 40) + self.assertMergeFails(merge_entries, b"a", 0o100644, 0xDEADBEEF) + + test_merge_entries = functest_builder(_do_test_merge_entries, _merge_entries_py) + test_merge_entries_extension = ext_functest_builder( + _do_test_merge_entries, _merge_entries + ) + + def _do_test_is_tree(self, is_tree): + 
self.assertFalse(is_tree(TreeEntry(None, None, None))) + self.assertFalse(is_tree(TreeEntry(b"a", 0o100644, b"a" * 40))) + self.assertFalse(is_tree(TreeEntry(b"a", 0o100755, b"a" * 40))) + self.assertFalse(is_tree(TreeEntry(b"a", 0o120000, b"a" * 40))) + self.assertTrue(is_tree(TreeEntry(b"a", 0o040000, b"a" * 40))) + self.assertRaises(TypeError, is_tree, TreeEntry(b"a", b"x", b"a" * 40)) + self.assertRaises(AttributeError, is_tree, 1234) + + test_is_tree = functest_builder(_do_test_is_tree, _is_tree_py) + test_is_tree_extension = ext_functest_builder(_do_test_is_tree, _is_tree) + + def assertChangesEqual(self, expected, tree1, tree2, **kwargs): + actual = list(tree_changes(self.store, tree1.id, tree2.id, **kwargs)) + self.assertEqual(expected, actual) + + # For brevity, the following tests use tuples instead of TreeEntry objects. + + def test_tree_changes_empty(self): + self.assertChangesEqual([], self.empty_tree, self.empty_tree) + + def test_tree_changes_no_changes(self): + blob = make_object(Blob, data=b"blob") + tree = self.commit_tree([(b"a", blob), (b"b/c", blob)]) + self.assertChangesEqual([], self.empty_tree, self.empty_tree) + self.assertChangesEqual([], tree, tree) + self.assertChangesEqual( + [ + TreeChange(CHANGE_UNCHANGED, (b"a", F, blob.id), (b"a", F, blob.id)), + TreeChange( + CHANGE_UNCHANGED, + (b"b/c", F, blob.id), + (b"b/c", F, blob.id), + ), + ], + tree, + tree, + want_unchanged=True, + ) + + def test_tree_changes_add_delete(self): + blob_a = make_object(Blob, data=b"a") + blob_b = make_object(Blob, data=b"b") + tree = self.commit_tree([(b"a", blob_a, 0o100644), (b"x/b", blob_b, 0o100755)]) + self.assertChangesEqual( + [ + TreeChange.add((b"a", 0o100644, blob_a.id)), + TreeChange.add((b"x/b", 0o100755, blob_b.id)), + ], + self.empty_tree, + tree, + ) + self.assertChangesEqual( + [ + TreeChange.delete((b"a", 0o100644, blob_a.id)), + TreeChange.delete((b"x/b", 0o100755, blob_b.id)), + ], + tree, + self.empty_tree, + ) + + def test_tree_changes_modify_contents(self): + blob_a1 = make_object(Blob, data=b"a1") + blob_a2 = make_object(Blob, data=b"a2") + tree1 = self.commit_tree([(b"a", blob_a1)]) + tree2 = self.commit_tree([(b"a", blob_a2)]) + self.assertChangesEqual( + [TreeChange(CHANGE_MODIFY, (b"a", F, blob_a1.id), (b"a", F, blob_a2.id))], + tree1, + tree2, + ) + + def test_tree_changes_modify_mode(self): + blob_a = make_object(Blob, data=b"a") + tree1 = self.commit_tree([(b"a", blob_a, 0o100644)]) + tree2 = self.commit_tree([(b"a", blob_a, 0o100755)]) + self.assertChangesEqual( + [ + TreeChange( + CHANGE_MODIFY, + (b"a", 0o100644, blob_a.id), + (b"a", 0o100755, blob_a.id), + ) + ], + tree1, + tree2, + ) + + def test_tree_changes_change_type(self): + blob_a1 = make_object(Blob, data=b"a") + blob_a2 = make_object(Blob, data=b"/foo/bar") + tree1 = self.commit_tree([(b"a", blob_a1, 0o100644)]) + tree2 = self.commit_tree([(b"a", blob_a2, 0o120000)]) + self.assertChangesEqual( + [ + TreeChange.delete((b"a", 0o100644, blob_a1.id)), + TreeChange.add((b"a", 0o120000, blob_a2.id)), + ], + tree1, + tree2, + ) + + def test_tree_changes_change_type_same(self): + blob_a1 = make_object(Blob, data=b"a") + blob_a2 = make_object(Blob, data=b"/foo/bar") + tree1 = self.commit_tree([(b"a", blob_a1, 0o100644)]) + tree2 = self.commit_tree([(b"a", blob_a2, 0o120000)]) + self.assertChangesEqual( + [ + TreeChange( + CHANGE_MODIFY, + (b"a", 0o100644, blob_a1.id), + (b"a", 0o120000, blob_a2.id), + ) + ], + tree1, + tree2, + change_type_same=True, + ) + + def test_tree_changes_to_tree(self): + 
blob_a = make_object(Blob, data=b"a") + blob_x = make_object(Blob, data=b"x") + tree1 = self.commit_tree([(b"a", blob_a)]) + tree2 = self.commit_tree([(b"a/x", blob_x)]) + self.assertChangesEqual( + [ + TreeChange.delete((b"a", F, blob_a.id)), + TreeChange.add((b"a/x", F, blob_x.id)), + ], + tree1, + tree2, + ) + + def test_tree_changes_complex(self): + blob_a_1 = make_object(Blob, data=b"a1_1") + blob_bx1_1 = make_object(Blob, data=b"bx1_1") + blob_bx2_1 = make_object(Blob, data=b"bx2_1") + blob_by1_1 = make_object(Blob, data=b"by1_1") + blob_by2_1 = make_object(Blob, data=b"by2_1") + tree1 = self.commit_tree( + [ + (b"a", blob_a_1), + (b"b/x/1", blob_bx1_1), + (b"b/x/2", blob_bx2_1), + (b"b/y/1", blob_by1_1), + (b"b/y/2", blob_by2_1), + ] + ) + + blob_a_2 = make_object(Blob, data=b"a1_2") + blob_bx1_2 = blob_bx1_1 + blob_by_2 = make_object(Blob, data=b"by_2") + blob_c_2 = make_object(Blob, data=b"c_2") + tree2 = self.commit_tree( + [ + (b"a", blob_a_2), + (b"b/x/1", blob_bx1_2), + (b"b/y", blob_by_2), + (b"c", blob_c_2), + ] + ) + + self.assertChangesEqual( + [ + TreeChange( + CHANGE_MODIFY, + (b"a", F, blob_a_1.id), + (b"a", F, blob_a_2.id), + ), + TreeChange.delete((b"b/x/2", F, blob_bx2_1.id)), + TreeChange.add((b"b/y", F, blob_by_2.id)), + TreeChange.delete((b"b/y/1", F, blob_by1_1.id)), + TreeChange.delete((b"b/y/2", F, blob_by2_1.id)), + TreeChange.add((b"c", F, blob_c_2.id)), + ], + tree1, + tree2, + ) + + def test_tree_changes_name_order(self): + blob = make_object(Blob, data=b"a") + tree1 = self.commit_tree([(b"a", blob), (b"a.", blob), (b"a..", blob)]) + # Tree order is the reverse of this, so if we used tree order, 'a..' + # would not be merged. + tree2 = self.commit_tree([(b"a/x", blob), (b"a./x", blob), (b"a..", blob)]) + + self.assertChangesEqual( + [ + TreeChange.delete((b"a", F, blob.id)), + TreeChange.add((b"a/x", F, blob.id)), + TreeChange.delete((b"a.", F, blob.id)), + TreeChange.add((b"a./x", F, blob.id)), + ], + tree1, + tree2, + ) + + def test_tree_changes_prune(self): + blob_a1 = make_object(Blob, data=b"a1") + blob_a2 = make_object(Blob, data=b"a2") + blob_x = make_object(Blob, data=b"x") + tree1 = self.commit_tree([(b"a", blob_a1), (b"b/x", blob_x)]) + tree2 = self.commit_tree([(b"a", blob_a2), (b"b/x", blob_x)]) + # Remove identical items so lookups will fail unless we prune. 
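+        # Annotation: tree_changes compares entries by SHA and does not recurse
+        # into subtrees that are identical on both sides, so the unchanged b"b"
+        # subtree should never be loaded. Deleting its objects below makes any
+        # accidental lookup fail, which is what proves the pruning happens.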
+ subtree = self.store[tree1[b"b"][1]] + for entry in subtree.items(): + del self.store[entry.sha] + del self.store[subtree.id] + + self.assertChangesEqual( + [TreeChange(CHANGE_MODIFY, (b"a", F, blob_a1.id), (b"a", F, blob_a2.id))], + tree1, + tree2, + ) + + def test_tree_changes_rename_detector(self): + blob_a1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob_a2 = make_object(Blob, data=b"a\nb\nc\ne\n") + blob_b = make_object(Blob, data=b"b") + tree1 = self.commit_tree([(b"a", blob_a1), (b"b", blob_b)]) + tree2 = self.commit_tree([(b"c", blob_a2), (b"b", blob_b)]) + detector = RenameDetector(self.store) + + self.assertChangesEqual( + [ + TreeChange.delete((b"a", F, blob_a1.id)), + TreeChange.add((b"c", F, blob_a2.id)), + ], + tree1, + tree2, + ) + self.assertChangesEqual( + [ + TreeChange.delete((b"a", F, blob_a1.id)), + TreeChange( + CHANGE_UNCHANGED, + (b"b", F, blob_b.id), + (b"b", F, blob_b.id), + ), + TreeChange.add((b"c", F, blob_a2.id)), + ], + tree1, + tree2, + want_unchanged=True, + ) + self.assertChangesEqual( + [TreeChange(CHANGE_RENAME, (b"a", F, blob_a1.id), (b"c", F, blob_a2.id))], + tree1, + tree2, + rename_detector=detector, + ) + self.assertChangesEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob_a1.id), (b"c", F, blob_a2.id)), + TreeChange( + CHANGE_UNCHANGED, + (b"b", F, blob_b.id), + (b"b", F, blob_b.id), + ), + ], + tree1, + tree2, + rename_detector=detector, + want_unchanged=True, + ) + + def assertChangesForMergeEqual(self, expected, parent_trees, merge_tree, **kwargs): + parent_tree_ids = [t.id for t in parent_trees] + actual = list( + tree_changes_for_merge(self.store, parent_tree_ids, merge_tree.id, **kwargs) + ) + self.assertEqual(expected, actual) + + parent_tree_ids.reverse() + expected = [list(reversed(cs)) for cs in expected] + actual = list( + tree_changes_for_merge(self.store, parent_tree_ids, merge_tree.id, **kwargs) + ) + self.assertEqual(expected, actual) + + def test_tree_changes_for_merge_add_no_conflict(self): + blob = make_object(Blob, data=b"blob") + parent1 = self.commit_tree([]) + parent2 = merge = self.commit_tree([(b"a", blob)]) + self.assertChangesForMergeEqual([], [parent1, parent2], merge) + self.assertChangesForMergeEqual([], [parent2, parent2], merge) + + def test_tree_changes_for_merge_add_modify_conflict(self): + blob1 = make_object(Blob, data=b"1") + blob2 = make_object(Blob, data=b"2") + parent1 = self.commit_tree([]) + parent2 = self.commit_tree([(b"a", blob1)]) + merge = self.commit_tree([(b"a", blob2)]) + self.assertChangesForMergeEqual( + [ + [ + TreeChange.add((b"a", F, blob2.id)), + TreeChange(CHANGE_MODIFY, (b"a", F, blob1.id), (b"a", F, blob2.id)), + ] + ], + [parent1, parent2], + merge, + ) + + def test_tree_changes_for_merge_modify_modify_conflict(self): + blob1 = make_object(Blob, data=b"1") + blob2 = make_object(Blob, data=b"2") + blob3 = make_object(Blob, data=b"3") + parent1 = self.commit_tree([(b"a", blob1)]) + parent2 = self.commit_tree([(b"a", blob2)]) + merge = self.commit_tree([(b"a", blob3)]) + self.assertChangesForMergeEqual( + [ + [ + TreeChange(CHANGE_MODIFY, (b"a", F, blob1.id), (b"a", F, blob3.id)), + TreeChange(CHANGE_MODIFY, (b"a", F, blob2.id), (b"a", F, blob3.id)), + ] + ], + [parent1, parent2], + merge, + ) + + def test_tree_changes_for_merge_modify_no_conflict(self): + blob1 = make_object(Blob, data=b"1") + blob2 = make_object(Blob, data=b"2") + parent1 = self.commit_tree([(b"a", blob1)]) + parent2 = merge = self.commit_tree([(b"a", blob2)]) + self.assertChangesForMergeEqual([], [parent1, parent2], 
merge) + + def test_tree_changes_for_merge_delete_delete_conflict(self): + blob1 = make_object(Blob, data=b"1") + blob2 = make_object(Blob, data=b"2") + parent1 = self.commit_tree([(b"a", blob1)]) + parent2 = self.commit_tree([(b"a", blob2)]) + merge = self.commit_tree([]) + self.assertChangesForMergeEqual( + [ + [ + TreeChange.delete((b"a", F, blob1.id)), + TreeChange.delete((b"a", F, blob2.id)), + ] + ], + [parent1, parent2], + merge, + ) + + def test_tree_changes_for_merge_delete_no_conflict(self): + blob = make_object(Blob, data=b"blob") + has = self.commit_tree([(b"a", blob)]) + doesnt_have = self.commit_tree([]) + self.assertChangesForMergeEqual([], [has, has], doesnt_have) + self.assertChangesForMergeEqual([], [has, doesnt_have], doesnt_have) + + def test_tree_changes_for_merge_octopus_no_conflict(self): + r = list(range(5)) + blobs = [make_object(Blob, data=bytes(i)) for i in r] + parents = [self.commit_tree([(b"a", blobs[i])]) for i in r] + for i in r: + # Take the SHA from each of the parents. + self.assertChangesForMergeEqual([], parents, parents[i]) + + def test_tree_changes_for_merge_octopus_modify_conflict(self): + # Because the octopus merge strategy is limited, I doubt it's possible + # to create this with the git command line. But the output is well- + # defined, so test it anyway. + r = list(range(5)) + parent_blobs = [make_object(Blob, data=bytes(i)) for i in r] + merge_blob = make_object(Blob, data=b"merge") + parents = [self.commit_tree([(b"a", parent_blobs[i])]) for i in r] + merge = self.commit_tree([(b"a", merge_blob)]) + expected = [ + [ + TreeChange( + CHANGE_MODIFY, + (b"a", F, parent_blobs[i].id), + (b"a", F, merge_blob.id), + ) + for i in r + ] + ] + self.assertChangesForMergeEqual(expected, parents, merge) + + def test_tree_changes_for_merge_octopus_delete(self): + blob1 = make_object(Blob, data=b"1") + blob2 = make_object(Blob, data=b"3") + parent1 = self.commit_tree([(b"a", blob1)]) + parent2 = self.commit_tree([(b"a", blob2)]) + parent3 = merge = self.commit_tree([]) + self.assertChangesForMergeEqual([], [parent1, parent1, parent1], merge) + self.assertChangesForMergeEqual([], [parent1, parent1, parent3], merge) + self.assertChangesForMergeEqual([], [parent1, parent3, parent3], merge) + self.assertChangesForMergeEqual( + [ + [ + TreeChange.delete((b"a", F, blob1.id)), + TreeChange.delete((b"a", F, blob2.id)), + None, + ] + ], + [parent1, parent2, parent3], + merge, + ) + + def test_tree_changes_for_merge_add_add_same_conflict(self): + blob = make_object(Blob, data=b"a\nb\nc\nd\n") + parent1 = self.commit_tree([(b"a", blob)]) + parent2 = self.commit_tree([]) + merge = self.commit_tree([(b"b", blob)]) + add = TreeChange.add((b"b", F, blob.id)) + self.assertChangesForMergeEqual([[add, add]], [parent1, parent2], merge) + + def test_tree_changes_for_merge_add_exact_rename_conflict(self): + blob = make_object(Blob, data=b"a\nb\nc\nd\n") + parent1 = self.commit_tree([(b"a", blob)]) + parent2 = self.commit_tree([]) + merge = self.commit_tree([(b"b", blob)]) + self.assertChangesForMergeEqual( + [ + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob.id), (b"b", F, blob.id)), + TreeChange.add((b"b", F, blob.id)), + ] + ], + [parent1, parent2], + merge, + rename_detector=self.detector, + ) + + def test_tree_changes_for_merge_add_content_rename_conflict(self): + blob1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob2 = make_object(Blob, data=b"a\nb\nc\ne\n") + parent1 = self.commit_tree([(b"a", blob1)]) + parent2 = self.commit_tree([]) + merge = self.commit_tree([(b"b", 
blob2)])
+        self.assertChangesForMergeEqual(
+            [
+                [
+                    TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"b", F, blob2.id)),
+                    TreeChange.add((b"b", F, blob2.id)),
+                ]
+            ],
+            [parent1, parent2],
+            merge,
+            rename_detector=self.detector,
+        )
+
+    def test_tree_changes_for_merge_modify_rename_conflict(self):
+        blob1 = make_object(Blob, data=b"a\nb\nc\nd\n")
+        blob2 = make_object(Blob, data=b"a\nb\nc\ne\n")
+        parent1 = self.commit_tree([(b"a", blob1)])
+        parent2 = self.commit_tree([(b"b", blob1)])
+        merge = self.commit_tree([(b"b", blob2)])
+        self.assertChangesForMergeEqual(
+            [
+                [
+                    TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"b", F, blob2.id)),
+                    TreeChange(CHANGE_MODIFY, (b"b", F, blob1.id), (b"b", F, blob2.id)),
+                ]
+            ],
+            [parent1, parent2],
+            merge,
+            rename_detector=self.detector,
+        )
+
+
+class RenameDetectionTest(DiffTestCase):
+    def _do_test_count_blocks(self, count_blocks):
+        blob = make_object(Blob, data=b"a\nb\na\n")
+        self.assertBlockCountEqual({b"a\n": 4, b"b\n": 2}, count_blocks(blob))
+
+    test_count_blocks = functest_builder(_do_test_count_blocks, _count_blocks_py)
+    test_count_blocks_extension = ext_functest_builder(
+        _do_test_count_blocks, _count_blocks
+    )
+
+    def _do_test_count_blocks_no_newline(self, count_blocks):
+        blob = make_object(Blob, data=b"a\na")
+        self.assertBlockCountEqual({b"a\n": 2, b"a": 1}, count_blocks(blob))
+
+    test_count_blocks_no_newline = functest_builder(
+        _do_test_count_blocks_no_newline, _count_blocks_py
+    )
+    test_count_blocks_no_newline_extension = ext_functest_builder(
+        _do_test_count_blocks_no_newline, _count_blocks
+    )
+
+    def assertBlockCountEqual(self, expected, got):
+        self.assertEqual(
+            {(hash(l) & 0xFFFFFFFF): c for (l, c) in expected.items()},
+            {(h & 0xFFFFFFFF): c for (h, c) in got.items()},
+        )
+
+    def _do_test_count_blocks_chunks(self, count_blocks):
+        blob = ShaFile.from_raw_chunks(Blob.type_num, [b"a\nb", b"\na\n"])
+        self.assertBlockCountEqual({b"a\n": 4, b"b\n": 2}, count_blocks(blob))
+
+    test_count_blocks_chunks = functest_builder(
+        _do_test_count_blocks_chunks, _count_blocks_py
+    )
+    test_count_blocks_chunks_extension = ext_functest_builder(
+        _do_test_count_blocks_chunks, _count_blocks
+    )
+
+    def _do_test_count_blocks_long_lines(self, count_blocks):
+        a = b"a" * 64
+        data = a + b"xxx\ny\n" + a + b"zzz\n"
+        blob = make_object(Blob, data=data)
+        self.assertBlockCountEqual(
+            {b"a" * 64: 128, b"xxx\n": 4, b"y\n": 2, b"zzz\n": 4},
+            count_blocks(blob),
+        )
+
+    test_count_blocks_long_lines = functest_builder(
+        _do_test_count_blocks_long_lines, _count_blocks_py
+    )
+    test_count_blocks_long_lines_extension = ext_functest_builder(
+        _do_test_count_blocks_long_lines, _count_blocks
+    )
+
+    def assertSimilar(self, expected_score, blob1, blob2):
+        self.assertEqual(expected_score, _similarity_score(blob1, blob2))
+        self.assertEqual(expected_score, _similarity_score(blob2, blob1))
+
+    def test_similarity_score(self):
+        blob0 = make_object(Blob, data=b"")
+        blob1 = make_object(Blob, data=b"ab\ncd\ncd\n")
+        blob2 = make_object(Blob, data=b"ab\n")
+        blob3 = make_object(Blob, data=b"cd\n")
+        blob4 = make_object(Blob, data=b"cd\ncd\n")
+
+        self.assertSimilar(100, blob0, blob0)
+        self.assertSimilar(0, blob0, blob1)
+        self.assertSimilar(33, blob1, blob2)
+        self.assertSimilar(33, blob1, blob3)
+        self.assertSimilar(66, blob1, blob4)
+        self.assertSimilar(0, blob2, blob3)
+        self.assertSimilar(50, blob3, blob4)
+
+    def test_similarity_score_cache(self):
+        blob1 = make_object(Blob, data=b"ab\ncd\n")
+        blob2 = make_object(Blob,
data=b"ab\n") + + block_cache = {} + self.assertEqual(50, _similarity_score(blob1, blob2, block_cache=block_cache)) + self.assertEqual(set([blob1.id, blob2.id]), set(block_cache)) + + def fail_chunks(): + self.fail("Unexpected call to as_raw_chunks()") + + blob1.as_raw_chunks = blob2.as_raw_chunks = fail_chunks + blob1.raw_length = lambda: 6 + blob2.raw_length = lambda: 3 + self.assertEqual(50, _similarity_score(blob1, blob2, block_cache=block_cache)) + + def test_tree_entry_sort(self): + sha = "abcd" * 10 + expected_entries = [ + TreeChange.add(TreeEntry(b"aaa", F, sha)), + TreeChange( + CHANGE_COPY, + TreeEntry(b"bbb", F, sha), + TreeEntry(b"aab", F, sha), + ), + TreeChange( + CHANGE_MODIFY, + TreeEntry(b"bbb", F, sha), + TreeEntry(b"bbb", F, b"dabc" * 10), + ), + TreeChange( + CHANGE_RENAME, + TreeEntry(b"bbc", F, sha), + TreeEntry(b"ddd", F, sha), + ), + TreeChange.delete(TreeEntry(b"ccc", F, sha)), + ] + + for perm in permutations(expected_entries): + self.assertEqual(expected_entries, sorted(perm, key=_tree_change_key)) + + def detect_renames(self, tree1, tree2, want_unchanged=False, **kwargs): + detector = RenameDetector(self.store, **kwargs) + return detector.changes_with_renames( + tree1.id, tree2.id, want_unchanged=want_unchanged + ) + + def test_no_renames(self): + blob1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob2 = make_object(Blob, data=b"a\nb\ne\nf\n") + blob3 = make_object(Blob, data=b"a\nb\ng\nh\n") + tree1 = self.commit_tree([(b"a", blob1), (b"b", blob2)]) + tree2 = self.commit_tree([(b"a", blob1), (b"b", blob3)]) + self.assertEqual( + [TreeChange(CHANGE_MODIFY, (b"b", F, blob2.id), (b"b", F, blob3.id))], + self.detect_renames(tree1, tree2), + ) + + def test_exact_rename_one_to_one(self): + blob1 = make_object(Blob, data=b"1") + blob2 = make_object(Blob, data=b"2") + tree1 = self.commit_tree([(b"a", blob1), (b"b", blob2)]) + tree2 = self.commit_tree([(b"c", blob1), (b"d", blob2)]) + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"c", F, blob1.id)), + TreeChange(CHANGE_RENAME, (b"b", F, blob2.id), (b"d", F, blob2.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_exact_rename_split_different_type(self): + blob = make_object(Blob, data=b"/foo") + tree1 = self.commit_tree([(b"a", blob, 0o100644)]) + tree2 = self.commit_tree([(b"a", blob, 0o120000)]) + self.assertEqual( + [ + TreeChange.add((b"a", 0o120000, blob.id)), + TreeChange.delete((b"a", 0o100644, blob.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_exact_rename_and_different_type(self): + blob1 = make_object(Blob, data=b"1") + blob2 = make_object(Blob, data=b"2") + tree1 = self.commit_tree([(b"a", blob1)]) + tree2 = self.commit_tree([(b"a", blob2, 0o120000), (b"b", blob1)]) + self.assertEqual( + [ + TreeChange.add((b"a", 0o120000, blob2.id)), + TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"b", F, blob1.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_exact_rename_one_to_many(self): + blob = make_object(Blob, data=b"1") + tree1 = self.commit_tree([(b"a", blob)]) + tree2 = self.commit_tree([(b"b", blob), (b"c", blob)]) + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob.id), (b"b", F, blob.id)), + TreeChange(CHANGE_COPY, (b"a", F, blob.id), (b"c", F, blob.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_exact_rename_many_to_one(self): + blob = make_object(Blob, data=b"1") + tree1 = self.commit_tree([(b"a", blob), (b"b", blob)]) + tree2 = self.commit_tree([(b"c", blob)]) + self.assertEqual( + [ + 
TreeChange(CHANGE_RENAME, (b"a", F, blob.id), (b"c", F, blob.id)), + TreeChange.delete((b"b", F, blob.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_exact_rename_many_to_many(self): + blob = make_object(Blob, data=b"1") + tree1 = self.commit_tree([(b"a", blob), (b"b", blob)]) + tree2 = self.commit_tree([(b"c", blob), (b"d", blob), (b"e", blob)]) + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob.id), (b"c", F, blob.id)), + TreeChange(CHANGE_COPY, (b"a", F, blob.id), (b"e", F, blob.id)), + TreeChange(CHANGE_RENAME, (b"b", F, blob.id), (b"d", F, blob.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_exact_copy_modify(self): + blob1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob2 = make_object(Blob, data=b"a\nb\nc\ne\n") + tree1 = self.commit_tree([(b"a", blob1)]) + tree2 = self.commit_tree([(b"a", blob2), (b"b", blob1)]) + self.assertEqual( + [ + TreeChange(CHANGE_MODIFY, (b"a", F, blob1.id), (b"a", F, blob2.id)), + TreeChange(CHANGE_COPY, (b"a", F, blob1.id), (b"b", F, blob1.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_exact_copy_change_mode(self): + blob = make_object(Blob, data=b"a\nb\nc\nd\n") + tree1 = self.commit_tree([(b"a", blob)]) + tree2 = self.commit_tree([(b"a", blob, 0o100755), (b"b", blob)]) + self.assertEqual( + [ + TreeChange( + CHANGE_MODIFY, + (b"a", F, blob.id), + (b"a", 0o100755, blob.id), + ), + TreeChange(CHANGE_COPY, (b"a", F, blob.id), (b"b", F, blob.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_rename_threshold(self): + blob1 = make_object(Blob, data=b"a\nb\nc\n") + blob2 = make_object(Blob, data=b"a\nb\nd\n") + tree1 = self.commit_tree([(b"a", blob1)]) + tree2 = self.commit_tree([(b"b", blob2)]) + self.assertEqual( + [TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"b", F, blob2.id))], + self.detect_renames(tree1, tree2, rename_threshold=50), + ) + self.assertEqual( + [ + TreeChange.delete((b"a", F, blob1.id)), + TreeChange.add((b"b", F, blob2.id)), + ], + self.detect_renames(tree1, tree2, rename_threshold=75), + ) + + def test_content_rename_max_files(self): + blob1 = make_object(Blob, data=b"a\nb\nc\nd") + blob4 = make_object(Blob, data=b"a\nb\nc\ne\n") + blob2 = make_object(Blob, data=b"e\nf\ng\nh\n") + blob3 = make_object(Blob, data=b"e\nf\ng\ni\n") + tree1 = self.commit_tree([(b"a", blob1), (b"b", blob2)]) + tree2 = self.commit_tree([(b"c", blob3), (b"d", blob4)]) + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"d", F, blob4.id)), + TreeChange(CHANGE_RENAME, (b"b", F, blob2.id), (b"c", F, blob3.id)), + ], + self.detect_renames(tree1, tree2), + ) + self.assertEqual( + [ + TreeChange.delete((b"a", F, blob1.id)), + TreeChange.delete((b"b", F, blob2.id)), + TreeChange.add((b"c", F, blob3.id)), + TreeChange.add((b"d", F, blob4.id)), + ], + self.detect_renames(tree1, tree2, max_files=1), + ) + + def test_content_rename_one_to_one(self): + b11 = make_object(Blob, data=b"a\nb\nc\nd\n") + b12 = make_object(Blob, data=b"a\nb\nc\ne\n") + b21 = make_object(Blob, data=b"e\nf\ng\n\nh") + b22 = make_object(Blob, data=b"e\nf\ng\n\ni") + tree1 = self.commit_tree([(b"a", b11), (b"b", b21)]) + tree2 = self.commit_tree([(b"c", b12), (b"d", b22)]) + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, b11.id), (b"c", F, b12.id)), + TreeChange(CHANGE_RENAME, (b"b", F, b21.id), (b"d", F, b22.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_content_rename_one_to_one_ordering(self): + blob1 = make_object(Blob, data=b"a\nb\nc\nd\ne\nf\n") + blob2 = 
make_object(Blob, data=b"a\nb\nc\nd\ng\nh\n") + # 6/10 match to blob1, 8/10 match to blob2 + blob3 = make_object(Blob, data=b"a\nb\nc\nd\ng\ni\n") + tree1 = self.commit_tree([(b"a", blob1), (b"b", blob2)]) + tree2 = self.commit_tree([(b"c", blob3)]) + self.assertEqual( + [ + TreeChange.delete((b"a", F, blob1.id)), + TreeChange(CHANGE_RENAME, (b"b", F, blob2.id), (b"c", F, blob3.id)), + ], + self.detect_renames(tree1, tree2), + ) + + tree3 = self.commit_tree([(b"a", blob2), (b"b", blob1)]) + tree4 = self.commit_tree([(b"c", blob3)]) + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob2.id), (b"c", F, blob3.id)), + TreeChange.delete((b"b", F, blob1.id)), + ], + self.detect_renames(tree3, tree4), + ) + + def test_content_rename_one_to_many(self): + blob1 = make_object(Blob, data=b"aa\nb\nc\nd\ne\n") + blob2 = make_object(Blob, data=b"ab\nb\nc\nd\ne\n") # 8/11 match + blob3 = make_object(Blob, data=b"aa\nb\nc\nd\nf\n") # 9/11 match + tree1 = self.commit_tree([(b"a", blob1)]) + tree2 = self.commit_tree([(b"b", blob2), (b"c", blob3)]) + self.assertEqual( + [ + TreeChange(CHANGE_COPY, (b"a", F, blob1.id), (b"b", F, blob2.id)), + TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"c", F, blob3.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_content_rename_many_to_one(self): + blob1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob2 = make_object(Blob, data=b"a\nb\nc\ne\n") + blob3 = make_object(Blob, data=b"a\nb\nc\nf\n") + tree1 = self.commit_tree([(b"a", blob1), (b"b", blob2)]) + tree2 = self.commit_tree([(b"c", blob3)]) + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"c", F, blob3.id)), + TreeChange.delete((b"b", F, blob2.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_content_rename_many_to_many(self): + blob1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob2 = make_object(Blob, data=b"a\nb\nc\ne\n") + blob3 = make_object(Blob, data=b"a\nb\nc\nf\n") + blob4 = make_object(Blob, data=b"a\nb\nc\ng\n") + tree1 = self.commit_tree([(b"a", blob1), (b"b", blob2)]) + tree2 = self.commit_tree([(b"c", blob3), (b"d", blob4)]) + # TODO(dborowitz): Distribute renames rather than greedily choosing + # copies. 
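+        # Annotation: all four blobs here are pairwise ~75% similar (three of
+        # four lines shared), so the detector greedily pairs blob1 with both
+        # destinations (one rename plus one copy) instead of distributing the
+        # matches as a->c and b->d renames; blob2 is therefore reported as a
+        # plain delete in the expected changes below.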
+ self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"c", F, blob3.id)), + TreeChange(CHANGE_COPY, (b"a", F, blob1.id), (b"d", F, blob4.id)), + TreeChange.delete((b"b", F, blob2.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_content_rename_with_more_deletions(self): + blob1 = make_object(Blob, data=b"") + tree1 = self.commit_tree( + [(b"a", blob1), (b"b", blob1), (b"c", blob1), (b"d", blob1)] + ) + tree2 = self.commit_tree([(b"e", blob1), (b"f", blob1), (b"g", blob1)]) + self.maxDiff = None + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"e", F, blob1.id)), + TreeChange(CHANGE_RENAME, (b"b", F, blob1.id), (b"f", F, blob1.id)), + TreeChange(CHANGE_RENAME, (b"c", F, blob1.id), (b"g", F, blob1.id)), + TreeChange.delete((b"d", F, blob1.id)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_content_rename_gitlink(self): + blob1 = make_object(Blob, data=b"blob1") + blob2 = make_object(Blob, data=b"blob2") + link1 = b"1" * 40 + link2 = b"2" * 40 + tree1 = self.commit_tree([(b"a", blob1), (b"b", link1, 0o160000)]) + tree2 = self.commit_tree([(b"c", blob2), (b"d", link2, 0o160000)]) + self.assertEqual( + [ + TreeChange.delete((b"a", 0o100644, blob1.id)), + TreeChange.delete((b"b", 0o160000, link1)), + TreeChange.add((b"c", 0o100644, blob2.id)), + TreeChange.add((b"d", 0o160000, link2)), + ], + self.detect_renames(tree1, tree2), + ) + + def test_exact_rename_swap(self): + blob1 = make_object(Blob, data=b"1") + blob2 = make_object(Blob, data=b"2") + tree1 = self.commit_tree([(b"a", blob1), (b"b", blob2)]) + tree2 = self.commit_tree([(b"a", blob2), (b"b", blob1)]) + self.assertEqual( + [ + TreeChange(CHANGE_MODIFY, (b"a", F, blob1.id), (b"a", F, blob2.id)), + TreeChange(CHANGE_MODIFY, (b"b", F, blob2.id), (b"b", F, blob1.id)), + ], + self.detect_renames(tree1, tree2), + ) + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"b", F, blob1.id)), + TreeChange(CHANGE_RENAME, (b"b", F, blob2.id), (b"a", F, blob2.id)), + ], + self.detect_renames(tree1, tree2, rewrite_threshold=50), + ) + + def test_content_rename_swap(self): + blob1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob2 = make_object(Blob, data=b"e\nf\ng\nh\n") + blob3 = make_object(Blob, data=b"a\nb\nc\ne\n") + blob4 = make_object(Blob, data=b"e\nf\ng\ni\n") + tree1 = self.commit_tree([(b"a", blob1), (b"b", blob2)]) + tree2 = self.commit_tree([(b"a", blob4), (b"b", blob3)]) + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"b", F, blob3.id)), + TreeChange(CHANGE_RENAME, (b"b", F, blob2.id), (b"a", F, blob4.id)), + ], + self.detect_renames(tree1, tree2, rewrite_threshold=60), + ) + + def test_rewrite_threshold(self): + blob1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob2 = make_object(Blob, data=b"a\nb\nc\ne\n") + blob3 = make_object(Blob, data=b"a\nb\nf\ng\n") + + tree1 = self.commit_tree([(b"a", blob1)]) + tree2 = self.commit_tree([(b"a", blob3), (b"b", blob2)]) + + no_renames = [ + TreeChange(CHANGE_MODIFY, (b"a", F, blob1.id), (b"a", F, blob3.id)), + TreeChange(CHANGE_COPY, (b"a", F, blob1.id), (b"b", F, blob2.id)), + ] + self.assertEqual(no_renames, self.detect_renames(tree1, tree2)) + self.assertEqual( + no_renames, self.detect_renames(tree1, tree2, rewrite_threshold=40) + ) + self.assertEqual( + [ + TreeChange.add((b"a", F, blob3.id)), + TreeChange(CHANGE_RENAME, (b"a", F, blob1.id), (b"b", F, blob2.id)), + ], + self.detect_renames(tree1, tree2, rewrite_threshold=80), + ) + + def 
test_find_copies_harder_exact(self): + blob = make_object(Blob, data=b"blob") + tree1 = self.commit_tree([(b"a", blob)]) + tree2 = self.commit_tree([(b"a", blob), (b"b", blob)]) + self.assertEqual( + [TreeChange.add((b"b", F, blob.id))], + self.detect_renames(tree1, tree2), + ) + self.assertEqual( + [TreeChange(CHANGE_COPY, (b"a", F, blob.id), (b"b", F, blob.id))], + self.detect_renames(tree1, tree2, find_copies_harder=True), + ) + + def test_find_copies_harder_content(self): + blob1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob2 = make_object(Blob, data=b"a\nb\nc\ne\n") + tree1 = self.commit_tree([(b"a", blob1)]) + tree2 = self.commit_tree([(b"a", blob1), (b"b", blob2)]) + self.assertEqual( + [TreeChange.add((b"b", F, blob2.id))], + self.detect_renames(tree1, tree2), + ) + self.assertEqual( + [TreeChange(CHANGE_COPY, (b"a", F, blob1.id), (b"b", F, blob2.id))], + self.detect_renames(tree1, tree2, find_copies_harder=True), + ) + + def test_find_copies_harder_with_rewrites(self): + blob_a1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob_a2 = make_object(Blob, data=b"f\ng\nh\ni\n") + blob_b2 = make_object(Blob, data=b"a\nb\nc\ne\n") + tree1 = self.commit_tree([(b"a", blob_a1)]) + tree2 = self.commit_tree([(b"a", blob_a2), (b"b", blob_b2)]) + self.assertEqual( + [ + TreeChange(CHANGE_MODIFY, (b"a", F, blob_a1.id), (b"a", F, blob_a2.id)), + TreeChange(CHANGE_COPY, (b"a", F, blob_a1.id), (b"b", F, blob_b2.id)), + ], + self.detect_renames(tree1, tree2, find_copies_harder=True), + ) + self.assertEqual( + [ + TreeChange.add((b"a", F, blob_a2.id)), + TreeChange(CHANGE_RENAME, (b"a", F, blob_a1.id), (b"b", F, blob_b2.id)), + ], + self.detect_renames( + tree1, tree2, rewrite_threshold=50, find_copies_harder=True + ), + ) + + def test_reuse_detector(self): + blob = make_object(Blob, data=b"blob") + tree1 = self.commit_tree([(b"a", blob)]) + tree2 = self.commit_tree([(b"b", blob)]) + detector = RenameDetector(self.store) + changes = [TreeChange(CHANGE_RENAME, (b"a", F, blob.id), (b"b", F, blob.id))] + self.assertEqual(changes, detector.changes_with_renames(tree1.id, tree2.id)) + self.assertEqual(changes, detector.changes_with_renames(tree1.id, tree2.id)) + + def test_want_unchanged(self): + blob_a1 = make_object(Blob, data=b"a\nb\nc\nd\n") + blob_b = make_object(Blob, data=b"b") + blob_c2 = make_object(Blob, data=b"a\nb\nc\ne\n") + tree1 = self.commit_tree([(b"a", blob_a1), (b"b", blob_b)]) + tree2 = self.commit_tree([(b"c", blob_c2), (b"b", blob_b)]) + self.assertEqual( + [TreeChange(CHANGE_RENAME, (b"a", F, blob_a1.id), (b"c", F, blob_c2.id))], + self.detect_renames(tree1, tree2), + ) + self.assertEqual( + [ + TreeChange(CHANGE_RENAME, (b"a", F, blob_a1.id), (b"c", F, blob_c2.id)), + TreeChange( + CHANGE_UNCHANGED, + (b"b", F, blob_b.id), + (b"b", F, blob_b.id), + ), + ], + self.detect_renames(tree1, tree2, want_unchanged=True), + ) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_fastexport.py b/env/lib/python3.8/site-packages/dulwich/tests/test_fastexport.py new file mode 100644 index 00000000..1dff9b1c --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_fastexport.py @@ -0,0 +1,317 @@ +# test_fastexport.py -- Fast export/import functionality +# Copyright (C) 2010 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. 
+# You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+from io import BytesIO
+import stat
+
+
+from dulwich.object_store import (
+    MemoryObjectStore,
+)
+from dulwich.objects import (
+    Blob,
+    Commit,
+    Tree,
+    ZERO_SHA,
+)
+from dulwich.repo import (
+    MemoryRepo,
+)
+from dulwich.tests import (
+    SkipTest,
+    TestCase,
+)
+from dulwich.tests.utils import (
+    build_commit_graph,
+)
+
+
+class GitFastExporterTests(TestCase):
+    """Tests for GitFastExporter."""
+
+    def setUp(self):
+        super(GitFastExporterTests, self).setUp()
+        self.store = MemoryObjectStore()
+        self.stream = BytesIO()
+        try:
+            from dulwich.fastexport import GitFastExporter
+        except ImportError as exc:
+            raise SkipTest("python-fastimport not available") from exc
+        self.fastexporter = GitFastExporter(self.stream, self.store)
+
+    def test_emit_blob(self):
+        b = Blob()
+        b.data = b"fooBAR"
+        self.fastexporter.emit_blob(b)
+        self.assertEqual(b"blob\nmark :1\ndata 6\nfooBAR\n", self.stream.getvalue())
+
+    def test_emit_commit(self):
+        b = Blob()
+        b.data = b"FOO"
+        t = Tree()
+        t.add(b"foo", stat.S_IFREG | 0o644, b.id)
+        c = Commit()
+        c.committer = c.author = b"Jelmer <jelmer@samba.org>"
+        c.author_time = c.commit_time = 1271345553
+        c.author_timezone = c.commit_timezone = 0
+        c.message = b"msg"
+        c.tree = t.id
+        self.store.add_objects([(b, None), (t, None), (c, None)])
+        self.fastexporter.emit_commit(c, b"refs/heads/master")
+        self.assertEqual(
+            b"""blob
+mark :1
+data 3
+FOO
+commit refs/heads/master
+mark :2
+author Jelmer <jelmer@samba.org> 1271345553 +0000
+committer Jelmer <jelmer@samba.org> 1271345553 +0000
+data 3
+msg
+M 644 :1 foo
+""",
+            self.stream.getvalue(),
+        )
+
+
+class GitImportProcessorTests(TestCase):
+    """Tests for GitImportProcessor."""
+
+    def setUp(self):
+        super(GitImportProcessorTests, self).setUp()
+        self.repo = MemoryRepo()
+        try:
+            from dulwich.fastexport import GitImportProcessor
+        except ImportError as exc:
+            raise SkipTest("python-fastimport not available") from exc
+        self.processor = GitImportProcessor(self.repo)
+
+    def test_reset_handler(self):
+        from fastimport import commands
+
+        [c1] = build_commit_graph(self.repo.object_store, [[1]])
+        cmd = commands.ResetCommand(b"refs/heads/foo", c1.id)
+        self.processor.reset_handler(cmd)
+        self.assertEqual(c1.id, self.repo.get_refs()[b"refs/heads/foo"])
+        self.assertEqual(c1.id, self.processor.last_commit)
+
+    def test_reset_handler_marker(self):
+        from fastimport import commands
+
+        [c1, c2] = build_commit_graph(self.repo.object_store, [[1], [2]])
+        self.processor.markers[b"10"] = c1.id
+        cmd = commands.ResetCommand(b"refs/heads/foo", b":10")
+        self.processor.reset_handler(cmd)
+        self.assertEqual(c1.id, self.repo.get_refs()[b"refs/heads/foo"])
+
+    def test_reset_handler_default(self):
+        from fastimport import commands
+
+        [c1, c2] = build_commit_graph(self.repo.object_store, [[1], [2]])
+        cmd = commands.ResetCommand(b"refs/heads/foo", None)
+        self.processor.reset_handler(cmd)
+        self.assertEqual(ZERO_SHA, self.repo.get_refs()[b"refs/heads/foo"])
+
+    def test_commit_handler(self):
+        from fastimport import commands
+
+        cmd = commands.CommitCommand(
+            b"refs/heads/foo",
+            b"mrkr",
+            (b"Jelmer", b"jelmer@samba.org", 432432432.0, 3600),
+            (b"Jelmer", b"jelmer@samba.org", 432432432.0, 3600),
+            b"FOO",
+            None,
+            [],
+            [],
+        )
+        self.processor.commit_handler(cmd)
+        commit = self.repo[self.processor.last_commit]
+        self.assertEqual(b"Jelmer <jelmer@samba.org>", commit.author)
+        self.assertEqual(b"Jelmer <jelmer@samba.org>", commit.committer)
+        self.assertEqual(b"FOO", commit.message)
+        self.assertEqual([], commit.parents)
+        self.assertEqual(432432432.0, commit.commit_time)
+        self.assertEqual(432432432.0, commit.author_time)
+        self.assertEqual(3600, commit.commit_timezone)
+        self.assertEqual(3600, commit.author_timezone)
+        self.assertEqual(commit, self.repo[b"refs/heads/foo"])
+
+    def test_commit_handler_markers(self):
+        from fastimport import commands
+
+        [c1, c2, c3] = build_commit_graph(self.repo.object_store, [[1], [2], [3]])
+        self.processor.markers[b"10"] = c1.id
+        self.processor.markers[b"42"] = c2.id
+        self.processor.markers[b"98"] = c3.id
+        cmd = commands.CommitCommand(
+            b"refs/heads/foo",
+            b"mrkr",
+            (b"Jelmer", b"jelmer@samba.org", 432432432.0, 3600),
+            (b"Jelmer", b"jelmer@samba.org", 432432432.0, 3600),
+            b"FOO",
+            b":10",
+            [b":42", b":98"],
+            [],
+        )
+        self.processor.commit_handler(cmd)
+        commit = self.repo[self.processor.last_commit]
+        self.assertEqual(c1.id, commit.parents[0])
+        self.assertEqual(c2.id, commit.parents[1])
+        self.assertEqual(c3.id, commit.parents[2])
+
+    def test_import_stream(self):
+        markers = self.processor.import_stream(
+            BytesIO(
+                b"""blob
+mark :1
+data 11
+text for a
+
+commit refs/heads/master
+mark :2
+committer Joe Foo <joe@foo.com> 1288287382 +0000
+data 20
+<The commit message>
+M 100644 :1 a
+
+"""
+            )
+        )
+        self.assertEqual(2, len(markers))
+        self.assertIsInstance(self.repo[markers[b"1"]], Blob)
+        self.assertIsInstance(self.repo[markers[b"2"]], Commit)
+
+    def test_file_add(self):
+        from fastimport import commands
+
+        cmd = commands.BlobCommand(b"23", b"data")
+        self.processor.blob_handler(cmd)
+        cmd = commands.CommitCommand(
+            b"refs/heads/foo",
+            b"mrkr",
+            (b"Jelmer", b"jelmer@samba.org", 432432432.0, 3600),
+            (b"Jelmer", b"jelmer@samba.org", 432432432.0, 3600),
+            b"FOO",
+            None,
+            [],
+            [commands.FileModifyCommand(b"path", 0o100644, b":23", None)],
+        )
+        self.processor.commit_handler(cmd)
+        commit = self.repo[self.processor.last_commit]
+        self.assertEqual(
+            [(b"path", 0o100644, b"6320cd248dd8aeaab759d5871f8781b5c0505172")],
+            self.repo[commit.tree].items(),
+        )
+
+    def simple_commit(self):
+        from fastimport import commands
+
+        cmd = commands.BlobCommand(b"23", b"data")
+        self.processor.blob_handler(cmd)
+        cmd = commands.CommitCommand(
+            b"refs/heads/foo",
+            b"mrkr",
+            (b"Jelmer", b"jelmer@samba.org", 432432432.0, 3600),
+            (b"Jelmer", b"jelmer@samba.org", 432432432.0, 3600),
+            b"FOO",
+            None,
+            [],
+            [commands.FileModifyCommand(b"path", 0o100644, b":23", None)],
+        )
+        self.processor.commit_handler(cmd)
+        commit = self.repo[self.processor.last_commit]
+        return commit
+
+    def make_file_commit(self, file_cmds):
+        """Create a trivial commit with the specified file commands.
+
+        Args:
+          file_cmds: File commands to run.
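+            For illustration, [commands.FileModifyCommand(b"path", 0o100644,
+            b":23", None)] would add a single file, as in the tests above.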
+ Returns: The created commit object + """ + from fastimport import commands + + cmd = commands.CommitCommand( + b"refs/heads/foo", + b"mrkr", + (b"Jelmer", b"jelmer@samba.org", 432432432.0, 3600), + (b"Jelmer", b"jelmer@samba.org", 432432432.0, 3600), + b"FOO", + None, + [], + file_cmds, + ) + self.processor.commit_handler(cmd) + return self.repo[self.processor.last_commit] + + def test_file_copy(self): + from fastimport import commands + + self.simple_commit() + commit = self.make_file_commit([commands.FileCopyCommand(b"path", b"new_path")]) + self.assertEqual( + [ + ( + b"new_path", + 0o100644, + b"6320cd248dd8aeaab759d5871f8781b5c0505172", + ), + ( + b"path", + 0o100644, + b"6320cd248dd8aeaab759d5871f8781b5c0505172", + ), + ], + self.repo[commit.tree].items(), + ) + + def test_file_move(self): + from fastimport import commands + + self.simple_commit() + commit = self.make_file_commit( + [commands.FileRenameCommand(b"path", b"new_path")] + ) + self.assertEqual( + [ + ( + b"new_path", + 0o100644, + b"6320cd248dd8aeaab759d5871f8781b5c0505172", + ), + ], + self.repo[commit.tree].items(), + ) + + def test_file_delete(self): + from fastimport import commands + + self.simple_commit() + commit = self.make_file_commit([commands.FileDeleteCommand(b"path")]) + self.assertEqual([], self.repo[commit.tree].items()) + + def test_file_deleteall(self): + from fastimport import commands + + self.simple_commit() + commit = self.make_file_commit([commands.FileDeleteAllCommand()]) + self.assertEqual([], self.repo[commit.tree].items()) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_file.py b/env/lib/python3.8/site-packages/dulwich/tests/test_file.py new file mode 100644 index 00000000..cc6d68a5 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_file.py @@ -0,0 +1,212 @@ +# test_file.py -- Test for git files +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +import io +import os +import shutil +import sys +import tempfile + +from dulwich.file import FileLocked, GitFile, _fancy_rename +from dulwich.tests import ( + SkipTest, + TestCase, +) + + +class FancyRenameTests(TestCase): + def setUp(self): + super(FancyRenameTests, self).setUp() + self._tempdir = tempfile.mkdtemp() + self.foo = self.path("foo") + self.bar = self.path("bar") + self.create(self.foo, b"foo contents") + + def tearDown(self): + shutil.rmtree(self._tempdir) + super(FancyRenameTests, self).tearDown() + + def path(self, filename): + return os.path.join(self._tempdir, filename) + + def create(self, path, contents): + f = open(path, "wb") + f.write(contents) + f.close() + + def test_no_dest_exists(self): + self.assertFalse(os.path.exists(self.bar)) + _fancy_rename(self.foo, self.bar) + self.assertFalse(os.path.exists(self.foo)) + + new_f = open(self.bar, "rb") + self.assertEqual(b"foo contents", new_f.read()) + new_f.close() + + def test_dest_exists(self): + self.create(self.bar, b"bar contents") + _fancy_rename(self.foo, self.bar) + self.assertFalse(os.path.exists(self.foo)) + + new_f = open(self.bar, "rb") + self.assertEqual(b"foo contents", new_f.read()) + new_f.close() + + def test_dest_opened(self): + if sys.platform != "win32": + raise SkipTest("platform allows overwriting open files") + self.create(self.bar, b"bar contents") + dest_f = open(self.bar, "rb") + self.assertRaises(OSError, _fancy_rename, self.foo, self.bar) + dest_f.close() + self.assertTrue(os.path.exists(self.path("foo"))) + + new_f = open(self.foo, "rb") + self.assertEqual(b"foo contents", new_f.read()) + new_f.close() + + new_f = open(self.bar, "rb") + self.assertEqual(b"bar contents", new_f.read()) + new_f.close() + + +class GitFileTests(TestCase): + def setUp(self): + super(GitFileTests, self).setUp() + self._tempdir = tempfile.mkdtemp() + f = open(self.path("foo"), "wb") + f.write(b"foo contents") + f.close() + + def tearDown(self): + shutil.rmtree(self._tempdir) + super(GitFileTests, self).tearDown() + + def path(self, filename): + return os.path.join(self._tempdir, filename) + + def test_invalid(self): + foo = self.path("foo") + self.assertRaises(IOError, GitFile, foo, mode="r") + self.assertRaises(IOError, GitFile, foo, mode="ab") + self.assertRaises(IOError, GitFile, foo, mode="r+b") + self.assertRaises(IOError, GitFile, foo, mode="w+b") + self.assertRaises(IOError, GitFile, foo, mode="a+bU") + + def test_readonly(self): + f = GitFile(self.path("foo"), "rb") + self.assertIsInstance(f, io.IOBase) + self.assertEqual(b"foo contents", f.read()) + self.assertEqual(b"", f.read()) + f.seek(4) + self.assertEqual(b"contents", f.read()) + f.close() + + def test_default_mode(self): + f = GitFile(self.path("foo")) + self.assertEqual(b"foo contents", f.read()) + f.close() + + def test_write(self): + foo = self.path("foo") + foo_lock = "%s.lock" % foo + + orig_f = open(foo, "rb") + self.assertEqual(orig_f.read(), b"foo contents") + orig_f.close() + + self.assertFalse(os.path.exists(foo_lock)) + f = GitFile(foo, "wb") + self.assertFalse(f.closed) + self.assertRaises(AttributeError, getattr, f, "not_a_file_property") + + self.assertTrue(os.path.exists(foo_lock)) + f.write(b"new stuff") + f.seek(4) + f.write(b"contents") + f.close() + self.assertFalse(os.path.exists(foo_lock)) + + new_f = open(foo, "rb") + self.assertEqual(b"new contents", new_f.read()) + new_f.close() + + def test_open_twice(self): + foo = self.path("foo") + f1 = GitFile(foo, "wb") + f1.write(b"new") + try: + f2 = GitFile(foo, "wb") + 
self.fail() + except FileLocked: + pass + else: + f2.close() + f1.write(b" contents") + f1.close() + + # Ensure trying to open twice doesn't affect original. + f = open(foo, "rb") + self.assertEqual(b"new contents", f.read()) + f.close() + + def test_abort(self): + foo = self.path("foo") + foo_lock = "%s.lock" % foo + + orig_f = open(foo, "rb") + self.assertEqual(orig_f.read(), b"foo contents") + orig_f.close() + + f = GitFile(foo, "wb") + f.write(b"new contents") + f.abort() + self.assertTrue(f.closed) + self.assertFalse(os.path.exists(foo_lock)) + + new_orig_f = open(foo, "rb") + self.assertEqual(new_orig_f.read(), b"foo contents") + new_orig_f.close() + + def test_abort_close(self): + foo = self.path("foo") + f = GitFile(foo, "wb") + f.abort() + try: + f.close() + except (IOError, OSError): + self.fail() + + f = GitFile(foo, "wb") + f.close() + try: + f.abort() + except (IOError, OSError): + self.fail() + + def test_abort_close_removed(self): + foo = self.path("foo") + f = GitFile(foo, "wb") + + f._file.close() + os.remove(foo + ".lock") + + f.abort() + self.assertTrue(f._closed) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_grafts.py b/env/lib/python3.8/site-packages/dulwich/tests/test_grafts.py new file mode 100644 index 00000000..92fff290 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_grafts.py @@ -0,0 +1,210 @@ +# test_grafts.py -- Tests for graftpoints +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +"""Tests for graftpoints.""" + +import os +import tempfile +import shutil + +from dulwich.errors import ObjectFormatException +from dulwich.tests import TestCase +from dulwich.objects import ( + Tree, +) +from dulwich.repo import ( + parse_graftpoints, + serialize_graftpoints, + MemoryRepo, + Repo, +) + + +def makesha(digit): + return (str(digit).encode("ascii") * 40)[:40] + + +class GraftParserTests(TestCase): + def assertParse(self, expected, graftpoints): + self.assertEqual(expected, parse_graftpoints(iter(graftpoints))) + + def test_no_grafts(self): + self.assertParse({}, []) + + def test_no_parents(self): + self.assertParse({makesha(0): []}, [makesha(0)]) + + def test_parents(self): + self.assertParse( + {makesha(0): [makesha(1), makesha(2)]}, + [b" ".join([makesha(0), makesha(1), makesha(2)])], + ) + + def test_multiple_hybrid(self): + self.assertParse( + { + makesha(0): [], + makesha(1): [makesha(2)], + makesha(3): [makesha(4), makesha(5)], + }, + [ + makesha(0), + b" ".join([makesha(1), makesha(2)]), + b" ".join([makesha(3), makesha(4), makesha(5)]), + ], + ) + + +class GraftSerializerTests(TestCase): + def assertSerialize(self, expected, graftpoints): + self.assertEqual(sorted(expected), sorted(serialize_graftpoints(graftpoints))) + + def test_no_grafts(self): + self.assertSerialize(b"", {}) + + def test_no_parents(self): + self.assertSerialize(makesha(0), {makesha(0): []}) + + def test_parents(self): + self.assertSerialize( + b" ".join([makesha(0), makesha(1), makesha(2)]), + {makesha(0): [makesha(1), makesha(2)]}, + ) + + def test_multiple_hybrid(self): + self.assertSerialize( + b"\n".join( + [ + makesha(0), + b" ".join([makesha(1), makesha(2)]), + b" ".join([makesha(3), makesha(4), makesha(5)]), + ] + ), + { + makesha(0): [], + makesha(1): [makesha(2)], + makesha(3): [makesha(4), makesha(5)], + }, + ) + + +class GraftsInRepositoryBase(object): + def tearDown(self): + super(GraftsInRepositoryBase, self).tearDown() + + def get_repo_with_grafts(self, grafts): + r = self._repo + r._add_graftpoints(grafts) + return r + + def test_no_grafts(self): + r = self.get_repo_with_grafts({}) + + shas = [e.commit.id for e in r.get_walker()] + self.assertEqual(shas, self._shas[::-1]) + + def test_no_parents_graft(self): + r = self.get_repo_with_grafts({self._repo.head(): []}) + + self.assertEqual([e.commit.id for e in r.get_walker()], [r.head()]) + + def test_existing_parent_graft(self): + r = self.get_repo_with_grafts({self._shas[-1]: [self._shas[0]]}) + + self.assertEqual( + [e.commit.id for e in r.get_walker()], + [self._shas[-1], self._shas[0]], + ) + + def test_remove_graft(self): + r = self.get_repo_with_grafts({self._repo.head(): []}) + r._remove_graftpoints([self._repo.head()]) + + self.assertEqual([e.commit.id for e in r.get_walker()], self._shas[::-1]) + + def test_object_store_fail_invalid_parents(self): + r = self._repo + + self.assertRaises( + ObjectFormatException, r._add_graftpoints, {self._shas[-1]: ["1"]} + ) + + +class GraftsInRepoTests(GraftsInRepositoryBase, TestCase): + def setUp(self): + super(GraftsInRepoTests, self).setUp() + self._repo_dir = os.path.join(tempfile.mkdtemp()) + r = self._repo = Repo.init(self._repo_dir) + self.addCleanup(shutil.rmtree, self._repo_dir) + + self._shas = [] + + commit_kwargs = { + "committer": b"Test Committer ", + "author": b"Test Author ", + "commit_timestamp": 12395, + "commit_timezone": 0, + "author_timestamp": 12395, + "author_timezone": 0, + } + + self._shas.append(r.do_commit(b"empty commit", **commit_kwargs)) + 
self._shas.append(r.do_commit(b"empty commit", **commit_kwargs)) + self._shas.append(r.do_commit(b"empty commit", **commit_kwargs)) + + def test_init_with_empty_info_grafts(self): + r = self._repo + r._put_named_file(os.path.join("info", "grafts"), b"") + + r = Repo(self._repo_dir) + self.assertEqual({}, r._graftpoints) + + def test_init_with_info_grafts(self): + r = self._repo + r._put_named_file( + os.path.join("info", "grafts"), + self._shas[-1] + b" " + self._shas[0], + ) + + r = Repo(self._repo_dir) + self.assertEqual({self._shas[-1]: [self._shas[0]]}, r._graftpoints) + + +class GraftsInMemoryRepoTests(GraftsInRepositoryBase, TestCase): + def setUp(self): + super(GraftsInMemoryRepoTests, self).setUp() + r = self._repo = MemoryRepo() + + self._shas = [] + + tree = Tree() + + commit_kwargs = { + "committer": b"Test Committer ", + "author": b"Test Author ", + "commit_timestamp": 12395, + "commit_timezone": 0, + "author_timestamp": 12395, + "author_timezone": 0, + "tree": tree.id, + } + + self._shas.append(r.do_commit(b"empty commit", **commit_kwargs)) + self._shas.append(r.do_commit(b"empty commit", **commit_kwargs)) + self._shas.append(r.do_commit(b"empty commit", **commit_kwargs)) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_graph.py b/env/lib/python3.8/site-packages/dulwich/tests/test_graph.py new file mode 100644 index 00000000..5d5d76ee --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_graph.py @@ -0,0 +1,184 @@ +# -*- coding: utf-8 -*- +# test_index.py -- Tests for merge +# encoding: utf-8 +# Copyright (c) 2020 Kevin B. Hendricks, Stratford Ontario Canada +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
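+
+# A round-trip sketch of the graftpoint helpers tested above, assuming
+# dulwich is importable; the repeated-digit SHAs are placeholders in the
+# same style as makesha().
+from dulwich.repo import parse_graftpoints, serialize_graftpoints
+
+commit = b"1" * 40
+parent = b"2" * 40
+
+# "<commit> <parent> ..." lines map to {commit: [parent, ...]} and back.
+grafts = parse_graftpoints([commit + b" " + parent])
+assert grafts == {commit: [parent]}
+assert serialize_graftpoints(grafts) == commit + b" " + parent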
+ +"""Tests for dulwich.graph.""" + +from dulwich.tests import TestCase +from dulwich.tests.utils import make_commit +from dulwich.repo import MemoryRepo + +from dulwich.graph import _find_lcas, can_fast_forward + + +class FindMergeBaseTests(TestCase): + @staticmethod + def run_test(dag, inputs): + def lookup_parents(commit_id): + return dag[commit_id] + + c1 = inputs[0] + c2s = inputs[1:] + return set(_find_lcas(lookup_parents, c1, c2s)) + + def test_multiple_lca(self): + # two lowest common ancestors + graph = { + "5": ["1", "2"], + "4": ["3", "1"], + "3": ["2"], + "2": ["0"], + "1": [], + "0": [], + } + self.assertEqual(self.run_test(graph, ["4", "5"]), set(["1", "2"])) + + def test_no_common_ancestor(self): + # no common ancestor + graph = { + "4": ["2"], + "3": ["1"], + "2": [], + "1": ["0"], + "0": [], + } + self.assertEqual(self.run_test(graph, ["4", "3"]), set([])) + + def test_ancestor(self): + # ancestor + graph = { + "G": ["D", "F"], + "F": ["E"], + "D": ["C"], + "C": ["B"], + "E": ["B"], + "B": ["A"], + "A": [], + } + self.assertEqual(self.run_test(graph, ["D", "C"]), set(["C"])) + + def test_direct_parent(self): + # parent + graph = { + "G": ["D", "F"], + "F": ["E"], + "D": ["C"], + "C": ["B"], + "E": ["B"], + "B": ["A"], + "A": [], + } + self.assertEqual(self.run_test(graph, ["G", "D"]), set(["D"])) + + def test_another_crossover(self): + # Another cross over + graph = { + "G": ["D", "F"], + "F": ["E", "C"], + "D": ["C", "E"], + "C": ["B"], + "E": ["B"], + "B": ["A"], + "A": [], + } + self.assertEqual(self.run_test(graph, ["D", "F"]), set(["E", "C"])) + + def test_three_way_merge_lca(self): + # three way merge commit straight from git docs + graph = { + "C": ["C1"], + "C1": ["C2"], + "C2": ["C3"], + "C3": ["C4"], + "C4": ["2"], + "B": ["B1"], + "B1": ["B2"], + "B2": ["B3"], + "B3": ["1"], + "A": ["A1"], + "A1": ["A2"], + "A2": ["A3"], + "A3": ["1"], + "1": ["2"], + "2": [], + } + # assumes a theoretical merge M exists that merges B and C first + # which actually means find the first LCA from either of B OR C with A + self.assertEqual(self.run_test(graph, ["A", "B", "C"]), set(["1"])) + + def test_octopus(self): + # octopus algorithm test + # test straight from git docs of A, B, and C + # but this time use octopus to find lcas of A, B, and C simultaneously + graph = { + "C": ["C1"], + "C1": ["C2"], + "C2": ["C3"], + "C3": ["C4"], + "C4": ["2"], + "B": ["B1"], + "B1": ["B2"], + "B2": ["B3"], + "B3": ["1"], + "A": ["A1"], + "A1": ["A2"], + "A2": ["A3"], + "A3": ["1"], + "1": ["2"], + "2": [], + } + + def lookup_parents(cid): + return graph[cid] + + lcas = ["A"] + others = ["B", "C"] + for cmt in others: + next_lcas = [] + for ca in lcas: + res = _find_lcas(lookup_parents, cmt, [ca]) + next_lcas.extend(res) + lcas = next_lcas[:] + self.assertEqual(set(lcas), set(["2"])) + + +class CanFastForwardTests(TestCase): + def test_ff(self): + r = MemoryRepo() + base = make_commit() + c1 = make_commit(parents=[base.id]) + c2 = make_commit(parents=[c1.id]) + r.object_store.add_objects([(base, None), (c1, None), (c2, None)]) + self.assertTrue(can_fast_forward(r, c1.id, c1.id)) + self.assertTrue(can_fast_forward(r, base.id, c1.id)) + self.assertTrue(can_fast_forward(r, c1.id, c2.id)) + self.assertFalse(can_fast_forward(r, c2.id, c1.id)) + + def test_diverged(self): + r = MemoryRepo() + base = make_commit() + c1 = make_commit(parents=[base.id]) + c2a = make_commit(parents=[c1.id], message=b"2a") + c2b = make_commit(parents=[c1.id], message=b"2b") + r.object_store.add_objects([(base, None), 
(c1, None), (c2a, None), (c2b, None)]) + self.assertTrue(can_fast_forward(r, c1.id, c2a.id)) + self.assertTrue(can_fast_forward(r, c1.id, c2b.id)) + self.assertFalse(can_fast_forward(r, c2a.id, c2b.id)) + self.assertFalse(can_fast_forward(r, c2b.id, c2a.id)) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_greenthreads.py b/env/lib/python3.8/site-packages/dulwich/tests/test_greenthreads.py new file mode 100644 index 00000000..2606aea9 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_greenthreads.py @@ -0,0 +1,135 @@ +# test_greenthreads.py -- Unittests for eventlet. +# Copyright (C) 2013 eNovance SAS +# +# Author: Fabien Boucher +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +import time + +from dulwich.tests import ( + skipIf, + TestCase, +) +from dulwich.object_store import ( + MemoryObjectStore, + MissingObjectFinder, +) +from dulwich.objects import ( + Commit, + Blob, + Tree, + parse_timezone, +) + +try: + import gevent # noqa: F401 + + gevent_support = True +except ImportError: + gevent_support = False + +if gevent_support: + from dulwich.greenthreads import ( + GreenThreadsObjectStoreIterator, + GreenThreadsMissingObjectFinder, + ) + +skipmsg = "Gevent library is not installed" + + +def create_commit(marker=None): + blob = Blob.from_string(b"The blob content " + marker) + tree = Tree() + tree.add(b"thefile " + marker, 0o100644, blob.id) + cmt = Commit() + cmt.tree = tree.id + cmt.author = cmt.committer = b"John Doe " + cmt.message = marker + tz = parse_timezone(b"-0200")[0] + cmt.commit_time = cmt.author_time = int(time.time()) + cmt.commit_timezone = cmt.author_timezone = tz + return cmt, tree, blob + + +def init_store(store, count=1): + ret = [] + for i in range(0, count): + objs = create_commit(marker=("%d" % i).encode("ascii")) + for obj in objs: + ret.append(obj) + store.add_object(obj) + return ret + + +@skipIf(not gevent_support, skipmsg) +class TestGreenThreadsObjectStoreIterator(TestCase): + def setUp(self): + super(TestGreenThreadsObjectStoreIterator, self).setUp() + self.store = MemoryObjectStore() + self.cmt_amount = 10 + self.objs = init_store(self.store, self.cmt_amount) + + def test_len(self): + wants = [sha.id for sha in self.objs if isinstance(sha, Commit)] + finder = MissingObjectFinder(self.store, (), wants) + iterator = GreenThreadsObjectStoreIterator( + self.store, iter(finder.next, None), finder + ) + # One commit refers one tree and one blob + self.assertEqual(len(iterator), self.cmt_amount * 3) + haves = wants[0 : self.cmt_amount - 1] + finder = MissingObjectFinder(self.store, haves, wants) + iterator = GreenThreadsObjectStoreIterator( + self.store, iter(finder.next, None), finder + ) + self.assertEqual(len(iterator), 3) + + def test_iter(self): + wants = [sha.id for sha in self.objs if 
 isinstance(sha, Commit)]
+        finder = MissingObjectFinder(self.store, (), wants)
+        iterator = GreenThreadsObjectStoreIterator(
+            self.store, iter(finder.next, None), finder
+        )
+        objs = []
+        for sha, path in iterator:
+            self.assertIn(sha, self.objs)
+            objs.append(sha)
+        self.assertEqual(len(objs), len(self.objs))
+
+
+@skipIf(not gevent_support, skipmsg)
+class TestGreenThreadsMissingObjectFinder(TestCase):
+    def setUp(self):
+        super(TestGreenThreadsMissingObjectFinder, self).setUp()
+        self.store = MemoryObjectStore()
+        self.cmt_amount = 10
+        self.objs = init_store(self.store, self.cmt_amount)
+
+    def test_finder(self):
+        wants = [sha.id for sha in self.objs if isinstance(sha, Commit)]
+        finder = GreenThreadsMissingObjectFinder(self.store, (), wants)
+        self.assertEqual(len(finder.sha_done), 0)
+        self.assertEqual(len(finder.objects_to_send), self.cmt_amount)
+
+        finder = GreenThreadsMissingObjectFinder(
+            self.store, wants[0 : int(self.cmt_amount / 2)], wants
+        )
+        # sha_done will contain the commit ids and the shas of the blobs
+        # referred to in their trees
+        self.assertEqual(len(finder.sha_done), (self.cmt_amount / 2) * 2)
+        self.assertEqual(len(finder.objects_to_send), self.cmt_amount / 2)
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_hooks.py b/env/lib/python3.8/site-packages/dulwich/tests/test_hooks.py
new file mode 100644
index 00000000..6e416f1d
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/test_hooks.py
@@ -0,0 +1,192 @@
+# test_hooks.py -- Tests for executing hooks
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as published by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
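+
+# A condensed sketch of the fast-forward check from test_graph.py above,
+# assuming dulwich is importable; it mirrors the CanFastForwardTests setup.
+from dulwich.graph import can_fast_forward
+from dulwich.repo import MemoryRepo
+from dulwich.tests.utils import make_commit
+
+r = MemoryRepo()
+base = make_commit()
+tip = make_commit(parents=[base.id])
+r.object_store.add_objects([(base, None), (tip, None)])
+
+assert can_fast_forward(r, base.id, tip.id)       # ancestor to descendant
+assert not can_fast_forward(r, tip.id, base.id)   # would rewind history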
+# + +"""Tests for executing hooks.""" + +import os +import stat +import shutil +import sys +import tempfile + +from dulwich import errors + +from dulwich.hooks import ( + PreCommitShellHook, + PostCommitShellHook, + CommitMsgShellHook, +) + +from dulwich.tests import TestCase + + +class ShellHookTests(TestCase): + def setUp(self): + super(ShellHookTests, self).setUp() + if os.name != "posix": + self.skipTest("shell hook tests requires POSIX shell") + self.assertTrue(os.path.exists("/bin/sh")) + + def test_hook_pre_commit(self): + repo_dir = os.path.join(tempfile.mkdtemp()) + os.mkdir(os.path.join(repo_dir, "hooks")) + self.addCleanup(shutil.rmtree, repo_dir) + + pre_commit_fail = """#!/bin/sh +exit 1 +""" + + pre_commit_success = """#!/bin/sh +exit 0 +""" + pre_commit_cwd = ( + """#!/bin/sh +if [ "$(pwd)" != '""" + + repo_dir + + """' ]; then + echo "Expected path '""" + + repo_dir + + """', got '$(pwd)'" + exit 1 +fi + +exit 0 +""" + ) + + pre_commit = os.path.join(repo_dir, "hooks", "pre-commit") + hook = PreCommitShellHook(repo_dir, repo_dir) + + with open(pre_commit, "w") as f: + f.write(pre_commit_fail) + os.chmod(pre_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + self.assertRaises(errors.HookError, hook.execute) + + if sys.platform != "darwin": + # Don't bother running this test on darwin since path + # canonicalization messages with our simple string comparison. + with open(pre_commit, "w") as f: + f.write(pre_commit_cwd) + os.chmod(pre_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + hook.execute() + + with open(pre_commit, "w") as f: + f.write(pre_commit_success) + os.chmod(pre_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + hook.execute() + + def test_hook_commit_msg(self): + + repo_dir = os.path.join(tempfile.mkdtemp()) + os.mkdir(os.path.join(repo_dir, "hooks")) + self.addCleanup(shutil.rmtree, repo_dir) + + commit_msg_fail = """#!/bin/sh +exit 1 +""" + + commit_msg_success = """#!/bin/sh +exit 0 +""" + + commit_msg_cwd = ( + """#!/bin/sh +if [ "$(pwd)" = '""" + + repo_dir + + "' ]; then exit 0; else exit 1; fi\n" + ) + + commit_msg = os.path.join(repo_dir, "hooks", "commit-msg") + hook = CommitMsgShellHook(repo_dir) + + with open(commit_msg, "w") as f: + f.write(commit_msg_fail) + os.chmod(commit_msg, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + self.assertRaises(errors.HookError, hook.execute, b"failed commit") + + if sys.platform != "darwin": + # Don't bother running this test on darwin since path + # canonicalization messages with our simple string comparison. 
+ with open(commit_msg, "w") as f: + f.write(commit_msg_cwd) + os.chmod(commit_msg, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + hook.execute(b"cwd test commit") + + with open(commit_msg, "w") as f: + f.write(commit_msg_success) + os.chmod(commit_msg, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + hook.execute(b"empty commit") + + def test_hook_post_commit(self): + + (fd, path) = tempfile.mkstemp() + os.close(fd) + + repo_dir = os.path.join(tempfile.mkdtemp()) + os.mkdir(os.path.join(repo_dir, "hooks")) + self.addCleanup(shutil.rmtree, repo_dir) + + post_commit_success = ( + """#!/bin/sh +rm """ + + path + + "\n" + ) + + post_commit_fail = """#!/bin/sh +exit 1 +""" + + post_commit_cwd = ( + """#!/bin/sh +if [ "$(pwd)" = '""" + + repo_dir + + "' ]; then exit 0; else exit 1; fi\n" + ) + + post_commit = os.path.join(repo_dir, "hooks", "post-commit") + hook = PostCommitShellHook(repo_dir) + + with open(post_commit, "w") as f: + f.write(post_commit_fail) + os.chmod(post_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + self.assertRaises(errors.HookError, hook.execute) + + if sys.platform != "darwin": + # Don't bother running this test on darwin since path + # canonicalization messages with our simple string comparison. + with open(post_commit, "w") as f: + f.write(post_commit_cwd) + os.chmod(post_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + hook.execute() + + with open(post_commit, "w") as f: + f.write(post_commit_success) + os.chmod(post_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + hook.execute() + self.assertFalse(os.path.exists(path)) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_ignore.py b/env/lib/python3.8/site-packages/dulwich/tests/test_ignore.py new file mode 100644 index 00000000..b3a6ab13 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_ignore.py @@ -0,0 +1,281 @@ +# test_ignore.py -- Tests for ignore files. +# Copyright (C) 2017 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
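+
+# A POSIX-only sketch of driving a pre-commit hook the way the
+# ShellHookTests above do, assuming dulwich is importable; the
+# always-failing hook body is illustrative.
+import os
+import stat
+import tempfile
+
+from dulwich import errors
+from dulwich.hooks import PreCommitShellHook
+
+repo_dir = tempfile.mkdtemp()
+os.mkdir(os.path.join(repo_dir, "hooks"))
+script = os.path.join(repo_dir, "hooks", "pre-commit")
+with open(script, "w") as f:
+    f.write("#!/bin/sh\nexit 1\n")
+os.chmod(script, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
+
+hook = PreCommitShellHook(repo_dir, repo_dir)
+try:
+    hook.execute()              # a nonzero exit surfaces as HookError
+except errors.HookError:
+    pass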
+# + +"""Tests for ignore files.""" + +from io import BytesIO +import os +import re +import shutil +import tempfile +from dulwich.tests import TestCase + +from dulwich.ignore import ( + IgnoreFilter, + IgnoreFilterManager, + IgnoreFilterStack, + Pattern, + match_pattern, + read_ignore_patterns, + translate, +) +from dulwich.repo import Repo + + +POSITIVE_MATCH_TESTS = [ + (b"foo.c", b"*.c"), + (b".c", b"*.c"), + (b"foo/foo.c", b"*.c"), + (b"foo/foo.c", b"foo.c"), + (b"foo.c", b"/*.c"), + (b"foo.c", b"/foo.c"), + (b"foo.c", b"foo.c"), + (b"foo.c", b"foo.[ch]"), + (b"foo/bar/bla.c", b"foo/**"), + (b"foo/bar/bla/blie.c", b"foo/**/blie.c"), + (b"foo/bar/bla.c", b"**/bla.c"), + (b"bla.c", b"**/bla.c"), + (b"foo/bar", b"foo/**/bar"), + (b"foo/bla/bar", b"foo/**/bar"), + (b"foo/bar/", b"bar/"), + (b"foo/bar/", b"bar"), + (b"foo/bar/something", b"foo/bar/*"), +] + +NEGATIVE_MATCH_TESTS = [ + (b"foo.c", b"foo.[dh]"), + (b"foo/foo.c", b"/foo.c"), + (b"foo/foo.c", b"/*.c"), + (b"foo/bar/", b"/bar/"), + (b"foo/bar/", b"foo/bar/*"), + (b"foo/bar", b"foo?bar"), +] + + +TRANSLATE_TESTS = [ + (b"*.c", b"(?ms)(.*/)?[^/]*\\.c/?\\Z"), + (b"foo.c", b"(?ms)(.*/)?foo\\.c/?\\Z"), + (b"/*.c", b"(?ms)[^/]*\\.c/?\\Z"), + (b"/foo.c", b"(?ms)foo\\.c/?\\Z"), + (b"foo.c", b"(?ms)(.*/)?foo\\.c/?\\Z"), + (b"foo.[ch]", b"(?ms)(.*/)?foo\\.[ch]/?\\Z"), + (b"bar/", b"(?ms)(.*/)?bar\\/\\Z"), + (b"foo/**", b"(?ms)foo(/.*)?/?\\Z"), + (b"foo/**/blie.c", b"(?ms)foo(/.*)?\\/blie\\.c/?\\Z"), + (b"**/bla.c", b"(?ms)(.*/)?bla\\.c/?\\Z"), + (b"foo/**/bar", b"(?ms)foo(/.*)?\\/bar/?\\Z"), + (b"foo/bar/*", b"(?ms)foo\\/bar\\/[^/]+/?\\Z"), + (b"/foo\\[bar\\]", b"(?ms)foo\\[bar\\]/?\\Z"), + (b"/foo[bar]", b"(?ms)foo[bar]/?\\Z"), + (b"/foo[0-9]", b"(?ms)foo[0-9]/?\\Z"), +] + + +class TranslateTests(TestCase): + def test_translate(self): + for (pattern, regex) in TRANSLATE_TESTS: + if re.escape(b"/") == b"/": + # Slash is no longer escaped in Python3.7, so undo the escaping + # in the expected return value.. 
+ regex = regex.replace(b"\\/", b"/") + self.assertEqual( + regex, + translate(pattern), + "orig pattern: %r, regex: %r, expected: %r" + % (pattern, translate(pattern), regex), + ) + + +class ReadIgnorePatterns(TestCase): + def test_read_file(self): + f = BytesIO( + b""" +# a comment +\x20\x20 +# and an empty line: + +\\#not a comment +!negative +with trailing whitespace +with escaped trailing whitespace\\ +""" + ) + self.assertEqual( + list(read_ignore_patterns(f)), + [ + b"\\#not a comment", + b"!negative", + b"with trailing whitespace", + b"with escaped trailing whitespace ", + ], + ) + + +class MatchPatternTests(TestCase): + def test_matches(self): + for (path, pattern) in POSITIVE_MATCH_TESTS: + self.assertTrue( + match_pattern(path, pattern), + "path: %r, pattern: %r" % (path, pattern), + ) + + def test_no_matches(self): + for (path, pattern) in NEGATIVE_MATCH_TESTS: + self.assertFalse( + match_pattern(path, pattern), + "path: %r, pattern: %r" % (path, pattern), + ) + + +class IgnoreFilterTests(TestCase): + def test_included(self): + filter = IgnoreFilter([b"a.c", b"b.c"]) + self.assertTrue(filter.is_ignored(b"a.c")) + self.assertIs(None, filter.is_ignored(b"c.c")) + self.assertEqual([Pattern(b"a.c")], list(filter.find_matching(b"a.c"))) + self.assertEqual([], list(filter.find_matching(b"c.c"))) + + def test_included_ignorecase(self): + filter = IgnoreFilter([b"a.c", b"b.c"], ignorecase=False) + self.assertTrue(filter.is_ignored(b"a.c")) + self.assertFalse(filter.is_ignored(b"A.c")) + filter = IgnoreFilter([b"a.c", b"b.c"], ignorecase=True) + self.assertTrue(filter.is_ignored(b"a.c")) + self.assertTrue(filter.is_ignored(b"A.c")) + self.assertTrue(filter.is_ignored(b"A.C")) + + def test_excluded(self): + filter = IgnoreFilter([b"a.c", b"b.c", b"!c.c"]) + self.assertFalse(filter.is_ignored(b"c.c")) + self.assertIs(None, filter.is_ignored(b"d.c")) + self.assertEqual([Pattern(b"!c.c")], list(filter.find_matching(b"c.c"))) + self.assertEqual([], list(filter.find_matching(b"d.c"))) + + def test_include_exclude_include(self): + filter = IgnoreFilter([b"a.c", b"!a.c", b"a.c"]) + self.assertTrue(filter.is_ignored(b"a.c")) + self.assertEqual( + [Pattern(b"a.c"), Pattern(b"!a.c"), Pattern(b"a.c")], + list(filter.find_matching(b"a.c")), + ) + + def test_manpage(self): + # A specific example from the gitignore manpage + filter = IgnoreFilter([b"/*", b"!/foo", b"/foo/*", b"!/foo/bar"]) + self.assertTrue(filter.is_ignored(b"a.c")) + self.assertTrue(filter.is_ignored(b"foo/blie")) + self.assertFalse(filter.is_ignored(b"foo")) + self.assertFalse(filter.is_ignored(b"foo/bar")) + self.assertFalse(filter.is_ignored(b"foo/bar/")) + self.assertFalse(filter.is_ignored(b"foo/bar/bloe")) + + def test_regex_special(self): + # See https://github.com/dulwich/dulwich/issues/930#issuecomment-1026166429 + filter = IgnoreFilter([b"/foo\\[bar\\]", b"/foo"]) + self.assertTrue(filter.is_ignored("foo")) + self.assertTrue(filter.is_ignored("foo[bar]")) + + +class IgnoreFilterStackTests(TestCase): + def test_stack_first(self): + filter1 = IgnoreFilter([b"[a].c", b"[b].c", b"![d].c"]) + filter2 = IgnoreFilter([b"[a].c", b"![b],c", b"[c].c", b"[d].c"]) + stack = IgnoreFilterStack([filter1, filter2]) + self.assertIs(True, stack.is_ignored(b"a.c")) + self.assertIs(True, stack.is_ignored(b"b.c")) + self.assertIs(True, stack.is_ignored(b"c.c")) + self.assertIs(False, stack.is_ignored(b"d.c")) + self.assertIs(None, stack.is_ignored(b"e.c")) + + +class IgnoreFilterManagerTests(TestCase): + def test_load_ignore(self): + tmp_dir = 
tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + repo = Repo.init(tmp_dir) + with open(os.path.join(repo.path, ".gitignore"), "wb") as f: + f.write(b"/foo/bar\n") + f.write(b"/dir2\n") + f.write(b"/dir3/\n") + os.mkdir(os.path.join(repo.path, "dir")) + with open(os.path.join(repo.path, "dir", ".gitignore"), "wb") as f: + f.write(b"/blie\n") + with open(os.path.join(repo.path, "dir", "blie"), "wb") as f: + f.write(b"IGNORED") + p = os.path.join(repo.controldir(), "info", "exclude") + with open(p, "wb") as f: + f.write(b"/excluded\n") + m = IgnoreFilterManager.from_repo(repo) + self.assertTrue(m.is_ignored("dir/blie")) + self.assertIs(None, m.is_ignored(os.path.join("dir", "bloe"))) + self.assertIs(None, m.is_ignored("dir")) + self.assertTrue(m.is_ignored(os.path.join("foo", "bar"))) + self.assertTrue(m.is_ignored(os.path.join("excluded"))) + self.assertTrue(m.is_ignored(os.path.join("dir2", "fileinignoreddir"))) + self.assertFalse(m.is_ignored("dir3")) + self.assertTrue(m.is_ignored("dir3/")) + self.assertTrue(m.is_ignored("dir3/bla")) + + def test_nested_gitignores(self): + tmp_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + repo = Repo.init(tmp_dir) + + with open(os.path.join(repo.path, '.gitignore'), 'wb') as f: + f.write(b'/*\n') + f.write(b'!/foo\n') + + os.mkdir(os.path.join(repo.path, 'foo')) + with open(os.path.join(repo.path, 'foo', '.gitignore'), 'wb') as f: + f.write(b'/bar\n') + + with open(os.path.join(repo.path, 'foo', 'bar'), 'wb') as f: + f.write(b'IGNORED') + + m = IgnoreFilterManager.from_repo(repo) + self.assertTrue(m.is_ignored('foo/bar')) + + def test_load_ignore_ignorecase(self): + tmp_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + repo = Repo.init(tmp_dir) + config = repo.get_config() + config.set(b"core", b"ignorecase", True) + config.write_to_path() + with open(os.path.join(repo.path, ".gitignore"), "wb") as f: + f.write(b"/foo/bar\n") + f.write(b"/dir\n") + m = IgnoreFilterManager.from_repo(repo) + self.assertTrue(m.is_ignored(os.path.join("dir", "blie"))) + self.assertTrue(m.is_ignored(os.path.join("DIR", "blie"))) + + def test_ignored_contents(self): + tmp_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + repo = Repo.init(tmp_dir) + with open(os.path.join(repo.path, ".gitignore"), "wb") as f: + f.write(b"a/*\n") + f.write(b"!a/*.txt\n") + m = IgnoreFilterManager.from_repo(repo) + os.mkdir(os.path.join(repo.path, "a")) + self.assertIs(None, m.is_ignored("a")) + self.assertIs(None, m.is_ignored("a/")) + self.assertFalse(m.is_ignored("a/b.txt")) + self.assertTrue(m.is_ignored("a/c.dat")) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_index.py b/env/lib/python3.8/site-packages/dulwich/tests/test_index.py new file mode 100644 index 00000000..920f344f --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_index.py @@ -0,0 +1,813 @@ +# -*- coding: utf-8 -*- +# test_index.py -- Tests for the git index +# encoding: utf-8 +# Copyright (C) 2008-2009 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. 
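+
+# A self-contained sketch of the three-valued IgnoreFilter semantics from
+# test_ignore.py above, assuming dulwich is importable.
+from dulwich.ignore import IgnoreFilter
+
+f = IgnoreFilter([b"*.c", b"!keep.c"])
+assert f.is_ignored(b"foo.c")                # matched by *.c
+assert f.is_ignored(b"keep.c") is False      # last matching (negated) pattern wins
+assert f.is_ignored(b"foo.py") is None       # no pattern matched at all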
+# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for the index.""" + + +from io import BytesIO +import os +import shutil +import stat +import struct +import sys +import tempfile + +from dulwich.index import ( + Index, + build_index_from_tree, + cleanup_mode, + commit_tree, + get_unstaged_changes, + index_entry_from_stat, + read_index, + read_index_dict, + validate_path_element_default, + validate_path_element_ntfs, + write_cache_time, + write_index, + write_index_dict, + _tree_to_fs_path, + _fs_to_tree_path, + IndexEntry, +) +from dulwich.object_store import ( + MemoryObjectStore, +) +from dulwich.objects import ( + Blob, + Commit, + Tree, + S_IFGITLINK, +) +from dulwich.repo import Repo +from dulwich.tests import ( + TestCase, + skipIf, +) + + +def can_symlink(): + """Return whether running process can create symlinks.""" + if sys.platform != "win32": + # Platforms other than Windows should allow symlinks without issues. + return True + + test_source = tempfile.mkdtemp() + test_target = test_source + "can_symlink" + try: + os.symlink(test_source, test_target) + except (NotImplementedError, OSError): + return False + return True + + +class IndexTestCase(TestCase): + + datadir = os.path.join(os.path.dirname(__file__), "../../testdata/indexes") + + def get_simple_index(self, name): + return Index(os.path.join(self.datadir, name)) + + +class SimpleIndexTestCase(IndexTestCase): + def test_len(self): + self.assertEqual(1, len(self.get_simple_index("index"))) + + def test_iter(self): + self.assertEqual([b"bla"], list(self.get_simple_index("index"))) + + def test_iterobjects(self): + self.assertEqual( + [(b"bla", b"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391", 33188)], + list(self.get_simple_index("index").iterobjects()), + ) + + def test_getitem(self): + self.assertEqual( + ( + (1230680220, 0), + (1230680220, 0), + 2050, + 3761020, + 33188, + 1000, + 1000, + 0, + b"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391", + 0, + 0, + ), + self.get_simple_index("index")[b"bla"], + ) + + def test_empty(self): + i = self.get_simple_index("notanindex") + self.assertEqual(0, len(i)) + self.assertFalse(os.path.exists(i._filename)) + + def test_against_empty_tree(self): + i = self.get_simple_index("index") + changes = list(i.changes_from_tree(MemoryObjectStore(), None)) + self.assertEqual(1, len(changes)) + (oldname, newname), (oldmode, newmode), (oldsha, newsha) = changes[0] + self.assertEqual(b"bla", newname) + self.assertEqual(b"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391", newsha) + + +class SimpleIndexWriterTestCase(IndexTestCase): + def setUp(self): + IndexTestCase.setUp(self) + self.tempdir = tempfile.mkdtemp() + + def tearDown(self): + IndexTestCase.tearDown(self) + shutil.rmtree(self.tempdir) + + def test_simple_write(self): + entries = [ + ( + b"barbla", + IndexEntry( + (1230680220, 0), + (1230680220, 0), + 2050, + 3761020, + 33188, + 1000, + 1000, + 0, + b"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391", + 0, + 0) + ) + ] + filename = os.path.join(self.tempdir, "test-simple-write-index") + with open(filename, "wb+") as x: + write_index(x, entries) + + with 
open(filename, "rb") as x: + self.assertEqual(entries, list(read_index(x))) + + +class ReadIndexDictTests(IndexTestCase): + def setUp(self): + IndexTestCase.setUp(self) + self.tempdir = tempfile.mkdtemp() + + def tearDown(self): + IndexTestCase.tearDown(self) + shutil.rmtree(self.tempdir) + + def test_simple_write(self): + entries = { + b"barbla": IndexEntry( + (1230680220, 0), + (1230680220, 0), + 2050, + 3761020, + 33188, + 1000, + 1000, + 0, + b"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391", + 0, + 0, + ) + } + filename = os.path.join(self.tempdir, "test-simple-write-index") + with open(filename, "wb+") as x: + write_index_dict(x, entries) + + with open(filename, "rb") as x: + self.assertEqual(entries, read_index_dict(x)) + + +class CommitTreeTests(TestCase): + def setUp(self): + super(CommitTreeTests, self).setUp() + self.store = MemoryObjectStore() + + def test_single_blob(self): + blob = Blob() + blob.data = b"foo" + self.store.add_object(blob) + blobs = [(b"bla", blob.id, stat.S_IFREG)] + rootid = commit_tree(self.store, blobs) + self.assertEqual(rootid, b"1a1e80437220f9312e855c37ac4398b68e5c1d50") + self.assertEqual((stat.S_IFREG, blob.id), self.store[rootid][b"bla"]) + self.assertEqual(set([rootid, blob.id]), set(self.store._data.keys())) + + def test_nested(self): + blob = Blob() + blob.data = b"foo" + self.store.add_object(blob) + blobs = [(b"bla/bar", blob.id, stat.S_IFREG)] + rootid = commit_tree(self.store, blobs) + self.assertEqual(rootid, b"d92b959b216ad0d044671981196781b3258fa537") + dirid = self.store[rootid][b"bla"][1] + self.assertEqual(dirid, b"c1a1deb9788150829579a8b4efa6311e7b638650") + self.assertEqual((stat.S_IFDIR, dirid), self.store[rootid][b"bla"]) + self.assertEqual((stat.S_IFREG, blob.id), self.store[dirid][b"bar"]) + self.assertEqual(set([rootid, dirid, blob.id]), set(self.store._data.keys())) + + +class CleanupModeTests(TestCase): + def assertModeEqual(self, expected, got): + self.assertEqual(expected, got, "%o != %o" % (expected, got)) + + def test_file(self): + self.assertModeEqual(0o100644, cleanup_mode(0o100000)) + + def test_executable(self): + self.assertModeEqual(0o100755, cleanup_mode(0o100711)) + self.assertModeEqual(0o100755, cleanup_mode(0o100700)) + + def test_symlink(self): + self.assertModeEqual(0o120000, cleanup_mode(0o120711)) + + def test_dir(self): + self.assertModeEqual(0o040000, cleanup_mode(0o40531)) + + def test_submodule(self): + self.assertModeEqual(0o160000, cleanup_mode(0o160744)) + + +class WriteCacheTimeTests(TestCase): + def test_write_string(self): + f = BytesIO() + self.assertRaises(TypeError, write_cache_time, f, "foo") + + def test_write_int(self): + f = BytesIO() + write_cache_time(f, 434343) + self.assertEqual(struct.pack(">LL", 434343, 0), f.getvalue()) + + def test_write_tuple(self): + f = BytesIO() + write_cache_time(f, (434343, 21)) + self.assertEqual(struct.pack(">LL", 434343, 21), f.getvalue()) + + def test_write_float(self): + f = BytesIO() + write_cache_time(f, 434343.000000021) + self.assertEqual(struct.pack(">LL", 434343, 21), f.getvalue()) + + +class IndexEntryFromStatTests(TestCase): + def test_simple(self): + st = os.stat_result( + ( + 16877, + 131078, + 64769, + 154, + 1000, + 1000, + 12288, + 1323629595, + 1324180496, + 1324180496, + ) + ) + entry = index_entry_from_stat(st, "22" * 20, 0) + self.assertEqual( + entry, + IndexEntry( + 1324180496, + 1324180496, + 64769, + 131078, + 16384, + 1000, + 1000, + 12288, + "2222222222222222222222222222222222222222", + 0, + None, + ), + ) + + def test_override_mode(self): + 
st = os.stat_result( + ( + stat.S_IFREG + 0o644, + 131078, + 64769, + 154, + 1000, + 1000, + 12288, + 1323629595, + 1324180496, + 1324180496, + ) + ) + entry = index_entry_from_stat(st, "22" * 20, 0, mode=stat.S_IFREG + 0o755) + self.assertEqual( + entry, + IndexEntry( + 1324180496, + 1324180496, + 64769, + 131078, + 33261, + 1000, + 1000, + 12288, + "2222222222222222222222222222222222222222", + 0, + None, + ), + ) + + +class BuildIndexTests(TestCase): + def assertReasonableIndexEntry(self, index_entry, mode, filesize, sha): + self.assertEqual(index_entry[4], mode) # mode + self.assertEqual(index_entry[7], filesize) # filesize + self.assertEqual(index_entry[8], sha) # sha + + def assertFileContents(self, path, contents, symlink=False): + if symlink: + self.assertEqual(os.readlink(path), contents) + else: + with open(path, "rb") as f: + self.assertEqual(f.read(), contents) + + def test_empty(self): + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + tree = Tree() + repo.object_store.add_object(tree) + + build_index_from_tree( + repo.path, repo.index_path(), repo.object_store, tree.id + ) + + # Verify index entries + index = repo.open_index() + self.assertEqual(len(index), 0) + + # Verify no files + self.assertEqual([".git"], os.listdir(repo.path)) + + def test_git_dir(self): + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + + # Populate repo + filea = Blob.from_string(b"file a") + filee = Blob.from_string(b"d") + + tree = Tree() + tree[b".git/a"] = (stat.S_IFREG | 0o644, filea.id) + tree[b"c/e"] = (stat.S_IFREG | 0o644, filee.id) + + repo.object_store.add_objects([(o, None) for o in [filea, filee, tree]]) + + build_index_from_tree( + repo.path, repo.index_path(), repo.object_store, tree.id + ) + + # Verify index entries + index = repo.open_index() + self.assertEqual(len(index), 1) + + # filea + apath = os.path.join(repo.path, ".git", "a") + self.assertFalse(os.path.exists(apath)) + + # filee + epath = os.path.join(repo.path, "c", "e") + self.assertTrue(os.path.exists(epath)) + self.assertReasonableIndexEntry( + index[b"c/e"], stat.S_IFREG | 0o644, 1, filee.id + ) + self.assertFileContents(epath, b"d") + + def test_nonempty(self): + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + + # Populate repo + filea = Blob.from_string(b"file a") + fileb = Blob.from_string(b"file b") + filed = Blob.from_string(b"file d") + + tree = Tree() + tree[b"a"] = (stat.S_IFREG | 0o644, filea.id) + tree[b"b"] = (stat.S_IFREG | 0o644, fileb.id) + tree[b"c/d"] = (stat.S_IFREG | 0o644, filed.id) + + repo.object_store.add_objects( + [(o, None) for o in [filea, fileb, filed, tree]] + ) + + build_index_from_tree( + repo.path, repo.index_path(), repo.object_store, tree.id + ) + + # Verify index entries + index = repo.open_index() + self.assertEqual(len(index), 3) + + # filea + apath = os.path.join(repo.path, "a") + self.assertTrue(os.path.exists(apath)) + self.assertReasonableIndexEntry( + index[b"a"], stat.S_IFREG | 0o644, 6, filea.id + ) + self.assertFileContents(apath, b"file a") + + # fileb + bpath = os.path.join(repo.path, "b") + self.assertTrue(os.path.exists(bpath)) + self.assertReasonableIndexEntry( + index[b"b"], stat.S_IFREG | 0o644, 6, fileb.id + ) + self.assertFileContents(bpath, b"file b") + + # filed + dpath = os.path.join(repo.path, "c", "d") + self.assertTrue(os.path.exists(dpath)) + self.assertReasonableIndexEntry( + 
index[b"c/d"], stat.S_IFREG | 0o644, 6, filed.id + ) + self.assertFileContents(dpath, b"file d") + + # Verify no extra files + self.assertEqual([".git", "a", "b", "c"], sorted(os.listdir(repo.path))) + self.assertEqual(["d"], sorted(os.listdir(os.path.join(repo.path, "c")))) + + @skipIf(not getattr(os, "sync", None), "Requires sync support") + def test_norewrite(self): + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + # Populate repo + filea = Blob.from_string(b"file a") + filea_path = os.path.join(repo_dir, "a") + tree = Tree() + tree[b"a"] = (stat.S_IFREG | 0o644, filea.id) + + repo.object_store.add_objects([(o, None) for o in [filea, tree]]) + + # First Write + build_index_from_tree( + repo.path, repo.index_path(), repo.object_store, tree.id + ) + # Use sync as metadata can be cached on some FS + os.sync() + mtime = os.stat(filea_path).st_mtime + + # Test Rewrite + build_index_from_tree( + repo.path, repo.index_path(), repo.object_store, tree.id + ) + os.sync() + self.assertEqual(mtime, os.stat(filea_path).st_mtime) + + # Modify content + with open(filea_path, "wb") as fh: + fh.write(b"test a") + os.sync() + mtime = os.stat(filea_path).st_mtime + + # Test rewrite + build_index_from_tree( + repo.path, repo.index_path(), repo.object_store, tree.id + ) + os.sync() + with open(filea_path, "rb") as fh: + self.assertEqual(b"file a", fh.read()) + + @skipIf(not can_symlink(), "Requires symlink support") + def test_symlink(self): + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + + # Populate repo + filed = Blob.from_string(b"file d") + filee = Blob.from_string(b"d") + + tree = Tree() + tree[b"c/d"] = (stat.S_IFREG | 0o644, filed.id) + tree[b"c/e"] = (stat.S_IFLNK, filee.id) # symlink + + repo.object_store.add_objects([(o, None) for o in [filed, filee, tree]]) + + build_index_from_tree( + repo.path, repo.index_path(), repo.object_store, tree.id + ) + + # Verify index entries + index = repo.open_index() + + # symlink to d + epath = os.path.join(repo.path, "c", "e") + self.assertTrue(os.path.exists(epath)) + self.assertReasonableIndexEntry( + index[b"c/e"], + stat.S_IFLNK, + 0 if sys.platform == "win32" else 1, + filee.id, + ) + self.assertFileContents(epath, "d", symlink=True) + + def test_no_decode_encode(self): + repo_dir = tempfile.mkdtemp() + repo_dir_bytes = os.fsencode(repo_dir) + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + + # Populate repo + file = Blob.from_string(b"foo") + + tree = Tree() + latin1_name = u"À".encode("latin1") + latin1_path = os.path.join(repo_dir_bytes, latin1_name) + utf8_name = u"À".encode("utf8") + utf8_path = os.path.join(repo_dir_bytes, utf8_name) + tree[latin1_name] = (stat.S_IFREG | 0o644, file.id) + tree[utf8_name] = (stat.S_IFREG | 0o644, file.id) + + repo.object_store.add_objects([(o, None) for o in [file, tree]]) + + try: + build_index_from_tree( + repo.path, repo.index_path(), repo.object_store, tree.id + ) + except OSError as e: + if e.errno == 92 and sys.platform == "darwin": + # Our filename isn't supported by the platform :( + self.skipTest("can not write filename %r" % e.filename) + else: + raise + except UnicodeDecodeError: + # This happens e.g. with python3.6 on Windows. + # It implicitly decodes using utf8, which doesn't work. 
+ self.skipTest("can not implicitly convert as utf8") + + # Verify index entries + index = repo.open_index() + self.assertIn(latin1_name, index) + self.assertIn(utf8_name, index) + + self.assertTrue(os.path.exists(latin1_path)) + + self.assertTrue(os.path.exists(utf8_path)) + + def test_git_submodule(self): + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + filea = Blob.from_string(b"file alalala") + + subtree = Tree() + subtree[b"a"] = (stat.S_IFREG | 0o644, filea.id) + + c = Commit() + c.tree = subtree.id + c.committer = c.author = b"Somebody " + c.commit_time = c.author_time = 42342 + c.commit_timezone = c.author_timezone = 0 + c.parents = [] + c.message = b"Subcommit" + + tree = Tree() + tree[b"c"] = (S_IFGITLINK, c.id) + + repo.object_store.add_objects([(o, None) for o in [tree]]) + + build_index_from_tree( + repo.path, repo.index_path(), repo.object_store, tree.id + ) + + # Verify index entries + index = repo.open_index() + self.assertEqual(len(index), 1) + + # filea + apath = os.path.join(repo.path, "c/a") + self.assertFalse(os.path.exists(apath)) + + # dir c + cpath = os.path.join(repo.path, "c") + self.assertTrue(os.path.isdir(cpath)) + self.assertEqual(index[b"c"][4], S_IFGITLINK) # mode + self.assertEqual(index[b"c"][8], c.id) # sha + + def test_git_submodule_exists(self): + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + filea = Blob.from_string(b"file alalala") + + subtree = Tree() + subtree[b"a"] = (stat.S_IFREG | 0o644, filea.id) + + c = Commit() + c.tree = subtree.id + c.committer = c.author = b"Somebody " + c.commit_time = c.author_time = 42342 + c.commit_timezone = c.author_timezone = 0 + c.parents = [] + c.message = b"Subcommit" + + tree = Tree() + tree[b"c"] = (S_IFGITLINK, c.id) + + os.mkdir(os.path.join(repo_dir, "c")) + repo.object_store.add_objects([(o, None) for o in [tree]]) + + build_index_from_tree( + repo.path, repo.index_path(), repo.object_store, tree.id + ) + + # Verify index entries + index = repo.open_index() + self.assertEqual(len(index), 1) + + # filea + apath = os.path.join(repo.path, "c/a") + self.assertFalse(os.path.exists(apath)) + + # dir c + cpath = os.path.join(repo.path, "c") + self.assertTrue(os.path.isdir(cpath)) + self.assertEqual(index[b"c"][4], S_IFGITLINK) # mode + self.assertEqual(index[b"c"][8], c.id) # sha + + +class GetUnstagedChangesTests(TestCase): + def test_get_unstaged_changes(self): + """Unit test for get_unstaged_changes.""" + + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + + # Commit a dummy file then modify it + foo1_fullpath = os.path.join(repo_dir, "foo1") + with open(foo1_fullpath, "wb") as f: + f.write(b"origstuff") + + foo2_fullpath = os.path.join(repo_dir, "foo2") + with open(foo2_fullpath, "wb") as f: + f.write(b"origstuff") + + repo.stage(["foo1", "foo2"]) + repo.do_commit( + b"test status", + author=b"author ", + committer=b"committer ", + ) + + with open(foo1_fullpath, "wb") as f: + f.write(b"newstuff") + + # modify access and modify time of path + os.utime(foo1_fullpath, (0, 0)) + + changes = get_unstaged_changes(repo.open_index(), repo_dir) + + self.assertEqual(list(changes), [b"foo1"]) + + def test_get_unstaged_deleted_changes(self): + """Unit test for get_unstaged_changes.""" + + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + + # Commit a dummy file then remove it 
+ foo1_fullpath = os.path.join(repo_dir, "foo1") + with open(foo1_fullpath, "wb") as f: + f.write(b"origstuff") + + repo.stage(["foo1"]) + repo.do_commit( + b"test status", + author=b"author ", + committer=b"committer ", + ) + + os.unlink(foo1_fullpath) + + changes = get_unstaged_changes(repo.open_index(), repo_dir) + + self.assertEqual(list(changes), [b"foo1"]) + + def test_get_unstaged_changes_removed_replaced_by_directory(self): + """Unit test for get_unstaged_changes.""" + + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + + # Commit a dummy file then modify it + foo1_fullpath = os.path.join(repo_dir, "foo1") + with open(foo1_fullpath, "wb") as f: + f.write(b"origstuff") + + repo.stage(["foo1"]) + repo.do_commit( + b"test status", + author=b"author ", + committer=b"committer ", + ) + + os.remove(foo1_fullpath) + os.mkdir(foo1_fullpath) + + changes = get_unstaged_changes(repo.open_index(), repo_dir) + + self.assertEqual(list(changes), [b"foo1"]) + + @skipIf(not can_symlink(), "Requires symlink support") + def test_get_unstaged_changes_removed_replaced_by_link(self): + """Unit test for get_unstaged_changes.""" + + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + with Repo.init(repo_dir) as repo: + + # Commit a dummy file then modify it + foo1_fullpath = os.path.join(repo_dir, "foo1") + with open(foo1_fullpath, "wb") as f: + f.write(b"origstuff") + + repo.stage(["foo1"]) + repo.do_commit( + b"test status", + author=b"author ", + committer=b"committer ", + ) + + os.remove(foo1_fullpath) + os.symlink(os.path.dirname(foo1_fullpath), foo1_fullpath) + + changes = get_unstaged_changes(repo.open_index(), repo_dir) + + self.assertEqual(list(changes), [b"foo1"]) + + +class TestValidatePathElement(TestCase): + def test_default(self): + self.assertTrue(validate_path_element_default(b"bla")) + self.assertTrue(validate_path_element_default(b".bla")) + self.assertFalse(validate_path_element_default(b".git")) + self.assertFalse(validate_path_element_default(b".giT")) + self.assertFalse(validate_path_element_default(b"..")) + self.assertTrue(validate_path_element_default(b"git~1")) + + def test_ntfs(self): + self.assertTrue(validate_path_element_ntfs(b"bla")) + self.assertTrue(validate_path_element_ntfs(b".bla")) + self.assertFalse(validate_path_element_ntfs(b".git")) + self.assertFalse(validate_path_element_ntfs(b".giT")) + self.assertFalse(validate_path_element_ntfs(b"..")) + self.assertFalse(validate_path_element_ntfs(b"git~1")) + + +class TestTreeFSPathConversion(TestCase): + def test_tree_to_fs_path(self): + tree_path = u"délwíçh/foo".encode("utf8") + fs_path = _tree_to_fs_path(b"/prefix/path", tree_path) + self.assertEqual( + fs_path, + os.fsencode(os.path.join(u"/prefix/path", u"délwíçh", u"foo")), + ) + + def test_fs_to_tree_path_str(self): + fs_path = os.path.join(os.path.join(u"délwíçh", u"foo")) + tree_path = _fs_to_tree_path(fs_path) + self.assertEqual(tree_path, u"délwíçh/foo".encode("utf-8")) + + def test_fs_to_tree_path_bytes(self): + fs_path = os.path.join(os.fsencode(os.path.join(u"délwíçh", u"foo"))) + tree_path = _fs_to_tree_path(fs_path) + self.assertEqual(tree_path, u"délwíçh/foo".encode("utf-8")) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_lfs.py b/env/lib/python3.8/site-packages/dulwich/tests/test_lfs.py new file mode 100644 index 00000000..5ffb6da3 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_lfs.py @@ -0,0 +1,42 @@ +# test_lfs.py -- tests for LFS 
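+
+# A small sketch of commit_tree from the index tests above, assuming
+# dulwich is importable; the path and contents are illustrative.
+import stat
+
+from dulwich.index import commit_tree
+from dulwich.object_store import MemoryObjectStore
+from dulwich.objects import Blob
+
+store = MemoryObjectStore()
+blob = Blob.from_string(b"hello")
+store.add_object(blob)
+
+# commit_tree builds the intermediate tree objects for nested paths.
+tree_id = commit_tree(store, [(b"docs/readme", blob.id, stat.S_IFREG | 0o644)])
+mode, subtree_id = store[tree_id][b"docs"]
+assert mode == stat.S_IFDIR
+assert store[subtree_id][b"readme"] == (stat.S_IFREG | 0o644, blob.id)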
+# Copyright (C) 2020 Jelmer Vernooij
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as published by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Tests for LFS support."""
+
+from . import TestCase
+from ..lfs import LFSStore
+import shutil
+import tempfile
+
+
+class LFSTests(TestCase):
+    def setUp(self):
+        super(LFSTests, self).setUp()
+        self.test_dir = tempfile.mkdtemp()
+        self.addCleanup(shutil.rmtree, self.test_dir)
+        self.lfs = LFSStore.create(self.test_dir)
+
+    def test_create(self):
+        sha = self.lfs.write_object([b"a", b"b"])
+        with self.lfs.open_object(sha) as f:
+            self.assertEqual(b"ab", f.read())
+
+    def test_missing(self):
+        self.assertRaises(KeyError, self.lfs.open_object, "abcdeabcdeabcdeabcde")
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_line_ending.py b/env/lib/python3.8/site-packages/dulwich/tests/test_line_ending.py
new file mode 100644
index 00000000..1e114158
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/test_line_ending.py
@@ -0,0 +1,197 @@
+# -*- coding: utf-8 -*-
+# test_line_ending.py -- Tests for the line ending functions
+# encoding: utf-8
+# Copyright (C) 2018-2019 Boris Feld
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as published by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
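+
+# A compact round trip through LFSStore as covered by test_lfs.py above,
+# assuming dulwich is importable; the chunk contents are illustrative.
+import shutil
+import tempfile
+
+from dulwich.lfs import LFSStore
+
+test_dir = tempfile.mkdtemp()
+lfs = LFSStore.create(test_dir)
+sha = lfs.write_object([b"chunk one, ", b"chunk two"])  # chunks are concatenated
+with lfs.open_object(sha) as f:
+    assert f.read() == b"chunk one, chunk two"
+shutil.rmtree(test_dir)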
+# + +"""Tests for the line ending conversion.""" + +from dulwich.line_ending import ( + normalize_blob, + convert_crlf_to_lf, + convert_lf_to_crlf, + get_checkin_filter_autocrlf, + get_checkout_filter_autocrlf, +) +from dulwich.objects import Blob +from dulwich.tests import TestCase + + +class LineEndingConversion(TestCase): + """Test the line ending conversion functions in various cases""" + + def test_convert_crlf_to_lf_no_op(self): + self.assertEqual(convert_crlf_to_lf(b"foobar"), b"foobar") + + def test_convert_crlf_to_lf(self): + self.assertEqual(convert_crlf_to_lf(b"line1\r\nline2"), b"line1\nline2") + + def test_convert_crlf_to_lf_mixed(self): + self.assertEqual(convert_crlf_to_lf(b"line1\r\n\nline2"), b"line1\n\nline2") + + def test_convert_lf_to_crlf_no_op(self): + self.assertEqual(convert_lf_to_crlf(b"foobar"), b"foobar") + + def test_convert_lf_to_crlf(self): + self.assertEqual(convert_lf_to_crlf(b"line1\nline2"), b"line1\r\nline2") + + def test_convert_lf_to_crlf_mixed(self): + self.assertEqual(convert_lf_to_crlf(b"line1\r\n\nline2"), b"line1\r\n\r\nline2") + + +class GetLineEndingAutocrlfFilters(TestCase): + def test_get_checkin_filter_autocrlf_default(self): + checkin_filter = get_checkin_filter_autocrlf(b"false") + + self.assertEqual(checkin_filter, None) + + def test_get_checkin_filter_autocrlf_true(self): + checkin_filter = get_checkin_filter_autocrlf(b"true") + + self.assertEqual(checkin_filter, convert_crlf_to_lf) + + def test_get_checkin_filter_autocrlf_input(self): + checkin_filter = get_checkin_filter_autocrlf(b"input") + + self.assertEqual(checkin_filter, convert_crlf_to_lf) + + def test_get_checkout_filter_autocrlf_default(self): + checkout_filter = get_checkout_filter_autocrlf(b"false") + + self.assertEqual(checkout_filter, None) + + def test_get_checkout_filter_autocrlf_true(self): + checkout_filter = get_checkout_filter_autocrlf(b"true") + + self.assertEqual(checkout_filter, convert_lf_to_crlf) + + def test_get_checkout_filter_autocrlf_input(self): + checkout_filter = get_checkout_filter_autocrlf(b"input") + + self.assertEqual(checkout_filter, None) + + +class NormalizeBlobTestCase(TestCase): + def test_normalize_to_lf_no_op(self): + base_content = b"line1\nline2" + base_sha = "f8be7bb828880727816015d21abcbc37d033f233" + + base_blob = Blob() + base_blob.set_raw_string(base_content) + + self.assertEqual(base_blob.as_raw_chunks(), [base_content]) + self.assertEqual(base_blob.sha().hexdigest(), base_sha) + + filtered_blob = normalize_blob( + base_blob, convert_crlf_to_lf, binary_detection=False + ) + + self.assertEqual(filtered_blob.as_raw_chunks(), [base_content]) + self.assertEqual(filtered_blob.sha().hexdigest(), base_sha) + + def test_normalize_to_lf(self): + base_content = b"line1\r\nline2" + base_sha = "3a1bd7a52799fe5cf6411f1d35f4c10bacb1db96" + + base_blob = Blob() + base_blob.set_raw_string(base_content) + + self.assertEqual(base_blob.as_raw_chunks(), [base_content]) + self.assertEqual(base_blob.sha().hexdigest(), base_sha) + + filtered_blob = normalize_blob( + base_blob, convert_crlf_to_lf, binary_detection=False + ) + + normalized_content = b"line1\nline2" + normalized_sha = "f8be7bb828880727816015d21abcbc37d033f233" + + self.assertEqual(filtered_blob.as_raw_chunks(), [normalized_content]) + self.assertEqual(filtered_blob.sha().hexdigest(), normalized_sha) + + def test_normalize_to_lf_binary(self): + base_content = b"line1\r\nline2\0" + base_sha = "b44504193b765f7cd79673812de8afb55b372ab2" + + base_blob = Blob() + base_blob.set_raw_string(base_content) + 
+        self.assertEqual(base_blob.as_raw_chunks(), [base_content])
+        self.assertEqual(base_blob.sha().hexdigest(), base_sha)
+
+        filtered_blob = normalize_blob(
+            base_blob, convert_crlf_to_lf, binary_detection=True
+        )
+
+        self.assertEqual(filtered_blob.as_raw_chunks(), [base_content])
+        self.assertEqual(filtered_blob.sha().hexdigest(), base_sha)
+
+    def test_normalize_to_crlf_no_op(self):
+        base_content = b"line1\r\nline2"
+        base_sha = "3a1bd7a52799fe5cf6411f1d35f4c10bacb1db96"
+
+        base_blob = Blob()
+        base_blob.set_raw_string(base_content)
+
+        self.assertEqual(base_blob.as_raw_chunks(), [base_content])
+        self.assertEqual(base_blob.sha().hexdigest(), base_sha)
+
+        filtered_blob = normalize_blob(
+            base_blob, convert_lf_to_crlf, binary_detection=False
+        )
+
+        self.assertEqual(filtered_blob.as_raw_chunks(), [base_content])
+        self.assertEqual(filtered_blob.sha().hexdigest(), base_sha)
+
+    def test_normalize_to_crlf(self):
+        base_content = b"line1\nline2"
+        base_sha = "f8be7bb828880727816015d21abcbc37d033f233"
+
+        base_blob = Blob()
+        base_blob.set_raw_string(base_content)
+
+        self.assertEqual(base_blob.as_raw_chunks(), [base_content])
+        self.assertEqual(base_blob.sha().hexdigest(), base_sha)
+
+        filtered_blob = normalize_blob(
+            base_blob, convert_lf_to_crlf, binary_detection=False
+        )
+
+        normalized_content = b"line1\r\nline2"
+        normalized_sha = "3a1bd7a52799fe5cf6411f1d35f4c10bacb1db96"
+
+        self.assertEqual(filtered_blob.as_raw_chunks(), [normalized_content])
+        self.assertEqual(filtered_blob.sha().hexdigest(), normalized_sha)
+
+    def test_normalize_to_crlf_binary(self):
+        base_content = b"line1\r\nline2\0"
+        base_sha = "b44504193b765f7cd79673812de8afb55b372ab2"
+
+        base_blob = Blob()
+        base_blob.set_raw_string(base_content)
+
+        self.assertEqual(base_blob.as_raw_chunks(), [base_content])
+        self.assertEqual(base_blob.sha().hexdigest(), base_sha)
+
+        filtered_blob = normalize_blob(
+            base_blob, convert_lf_to_crlf, binary_detection=True
+        )
+
+        self.assertEqual(filtered_blob.as_raw_chunks(), [base_content])
+        self.assertEqual(filtered_blob.sha().hexdigest(), base_sha)
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_lru_cache.py b/env/lib/python3.8/site-packages/dulwich/tests/test_lru_cache.py
new file mode 100644
index 00000000..74fb8482
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/test_lru_cache.py
@@ -0,0 +1,455 @@
+# Copyright (C) 2006, 2008 Canonical Ltd
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+# + +"""Tests for the lru_cache module.""" + +from dulwich import ( + lru_cache, +) +from dulwich.tests import ( + TestCase, +) + + +class TestLRUCache(TestCase): + """Test that LRU cache properly keeps track of entries.""" + + def test_cache_size(self): + cache = lru_cache.LRUCache(max_cache=10) + self.assertEqual(10, cache.cache_size()) + + cache = lru_cache.LRUCache(max_cache=256) + self.assertEqual(256, cache.cache_size()) + + cache.resize(512) + self.assertEqual(512, cache.cache_size()) + + def test_missing(self): + cache = lru_cache.LRUCache(max_cache=10) + + self.assertNotIn("foo", cache) + self.assertRaises(KeyError, cache.__getitem__, "foo") + + cache["foo"] = "bar" + self.assertEqual("bar", cache["foo"]) + self.assertIn("foo", cache) + self.assertNotIn("bar", cache) + + def test_map_None(self): + # Make sure that we can properly map None as a key. + cache = lru_cache.LRUCache(max_cache=10) + self.assertNotIn(None, cache) + cache[None] = 1 + self.assertEqual(1, cache[None]) + cache[None] = 2 + self.assertEqual(2, cache[None]) + # Test the various code paths of __getitem__, to make sure that we can + # handle when None is the key for the LRU and the MRU + cache[1] = 3 + cache[None] = 1 + cache[None] + cache[1] + cache[None] + self.assertEqual([None, 1], [n.key for n in cache._walk_lru()]) + + def test_add__null_key(self): + cache = lru_cache.LRUCache(max_cache=10) + self.assertRaises(ValueError, cache.add, lru_cache._null_key, 1) + + def test_overflow(self): + """Adding extra entries will pop out old ones.""" + cache = lru_cache.LRUCache(max_cache=1, after_cleanup_count=1) + + cache["foo"] = "bar" + # With a max cache of 1, adding 'baz' should pop out 'foo' + cache["baz"] = "biz" + + self.assertNotIn("foo", cache) + self.assertIn("baz", cache) + + self.assertEqual("biz", cache["baz"]) + + def test_by_usage(self): + """Accessing entries bumps them up in priority.""" + cache = lru_cache.LRUCache(max_cache=2) + + cache["baz"] = "biz" + cache["foo"] = "bar" + + self.assertEqual("biz", cache["baz"]) + + # This must kick out 'foo' because it was the last accessed + cache["nub"] = "in" + + self.assertNotIn("foo", cache) + + def test_cleanup(self): + """Test that we can use a cleanup function.""" + cleanup_called = [] + + def cleanup_func(key, val): + cleanup_called.append((key, val)) + + cache = lru_cache.LRUCache(max_cache=2, after_cleanup_count=2) + + cache.add("baz", "1", cleanup=cleanup_func) + cache.add("foo", "2", cleanup=cleanup_func) + cache.add("biz", "3", cleanup=cleanup_func) + + self.assertEqual([("baz", "1")], cleanup_called) + + # 'foo' is now most recent, so final cleanup will call it last + cache["foo"] + cache.clear() + self.assertEqual([("baz", "1"), ("biz", "3"), ("foo", "2")], cleanup_called) + + def test_cleanup_on_replace(self): + """Replacing an object should cleanup the old value.""" + cleanup_called = [] + + def cleanup_func(key, val): + cleanup_called.append((key, val)) + + cache = lru_cache.LRUCache(max_cache=2) + cache.add(1, 10, cleanup=cleanup_func) + cache.add(2, 20, cleanup=cleanup_func) + cache.add(2, 25, cleanup=cleanup_func) + + self.assertEqual([(2, 20)], cleanup_called) + self.assertEqual(25, cache[2]) + + # Even __setitem__ should make sure cleanup() is called + cache[2] = 26 + self.assertEqual([(2, 20), (2, 25)], cleanup_called) + + def test_len(self): + cache = lru_cache.LRUCache(max_cache=10, after_cleanup_count=10) + + cache[1] = 10 + cache[2] = 20 + cache[3] = 30 + cache[4] = 40 + + self.assertEqual(4, len(cache)) + + cache[5] = 50 + cache[6] = 
60 + cache[7] = 70 + cache[8] = 80 + + self.assertEqual(8, len(cache)) + + cache[1] = 15 # replacement + + self.assertEqual(8, len(cache)) + + cache[9] = 90 + cache[10] = 100 + cache[11] = 110 + + # We hit the max + self.assertEqual(10, len(cache)) + self.assertEqual( + [11, 10, 9, 1, 8, 7, 6, 5, 4, 3], + [n.key for n in cache._walk_lru()], + ) + + def test_cleanup_shrinks_to_after_clean_count(self): + cache = lru_cache.LRUCache(max_cache=5, after_cleanup_count=3) + + cache.add(1, 10) + cache.add(2, 20) + cache.add(3, 25) + cache.add(4, 30) + cache.add(5, 35) + + self.assertEqual(5, len(cache)) + # This will bump us over the max, which causes us to shrink down to + # after_cleanup_cache size + cache.add(6, 40) + self.assertEqual(3, len(cache)) + + def test_after_cleanup_larger_than_max(self): + cache = lru_cache.LRUCache(max_cache=5, after_cleanup_count=10) + self.assertEqual(5, cache._after_cleanup_count) + + def test_after_cleanup_none(self): + cache = lru_cache.LRUCache(max_cache=5, after_cleanup_count=None) + # By default _after_cleanup_size is 80% of the normal size + self.assertEqual(4, cache._after_cleanup_count) + + def test_cleanup_2(self): + cache = lru_cache.LRUCache(max_cache=5, after_cleanup_count=2) + + # Add these in order + cache.add(1, 10) + cache.add(2, 20) + cache.add(3, 25) + cache.add(4, 30) + cache.add(5, 35) + + self.assertEqual(5, len(cache)) + # Force a compaction + cache.cleanup() + self.assertEqual(2, len(cache)) + + def test_preserve_last_access_order(self): + cache = lru_cache.LRUCache(max_cache=5) + + # Add these in order + cache.add(1, 10) + cache.add(2, 20) + cache.add(3, 25) + cache.add(4, 30) + cache.add(5, 35) + + self.assertEqual([5, 4, 3, 2, 1], [n.key for n in cache._walk_lru()]) + + # Now access some randomly + cache[2] + cache[5] + cache[3] + cache[2] + self.assertEqual([2, 3, 5, 4, 1], [n.key for n in cache._walk_lru()]) + + def test_get(self): + cache = lru_cache.LRUCache(max_cache=5) + + cache.add(1, 10) + cache.add(2, 20) + self.assertEqual(20, cache.get(2)) + self.assertEqual(None, cache.get(3)) + obj = object() + self.assertIs(obj, cache.get(3, obj)) + self.assertEqual([2, 1], [n.key for n in cache._walk_lru()]) + self.assertEqual(10, cache.get(1)) + self.assertEqual([1, 2], [n.key for n in cache._walk_lru()]) + + def test_keys(self): + cache = lru_cache.LRUCache(max_cache=5, after_cleanup_count=5) + + cache[1] = 2 + cache[2] = 3 + cache[3] = 4 + self.assertEqual([1, 2, 3], sorted(cache.keys())) + cache[4] = 5 + cache[5] = 6 + cache[6] = 7 + self.assertEqual([2, 3, 4, 5, 6], sorted(cache.keys())) + + def test_resize_smaller(self): + cache = lru_cache.LRUCache(max_cache=5, after_cleanup_count=4) + cache[1] = 2 + cache[2] = 3 + cache[3] = 4 + cache[4] = 5 + cache[5] = 6 + self.assertEqual([1, 2, 3, 4, 5], sorted(cache.keys())) + cache[6] = 7 + self.assertEqual([3, 4, 5, 6], sorted(cache.keys())) + # Now resize to something smaller, which triggers a cleanup + cache.resize(max_cache=3, after_cleanup_count=2) + self.assertEqual([5, 6], sorted(cache.keys())) + # Adding something will use the new size + cache[7] = 8 + self.assertEqual([5, 6, 7], sorted(cache.keys())) + cache[8] = 9 + self.assertEqual([7, 8], sorted(cache.keys())) + + def test_resize_larger(self): + cache = lru_cache.LRUCache(max_cache=5, after_cleanup_count=4) + cache[1] = 2 + cache[2] = 3 + cache[3] = 4 + cache[4] = 5 + cache[5] = 6 + self.assertEqual([1, 2, 3, 4, 5], sorted(cache.keys())) + cache[6] = 7 + self.assertEqual([3, 4, 5, 6], sorted(cache.keys())) + 
cache.resize(max_cache=8, after_cleanup_count=6) + self.assertEqual([3, 4, 5, 6], sorted(cache.keys())) + cache[7] = 8 + cache[8] = 9 + cache[9] = 10 + cache[10] = 11 + self.assertEqual([3, 4, 5, 6, 7, 8, 9, 10], sorted(cache.keys())) + cache[11] = 12 # triggers cleanup back to new after_cleanup_count + self.assertEqual([6, 7, 8, 9, 10, 11], sorted(cache.keys())) + + +class TestLRUSizeCache(TestCase): + def test_basic_init(self): + cache = lru_cache.LRUSizeCache() + self.assertEqual(2048, cache._max_cache) + self.assertEqual(int(cache._max_size * 0.8), cache._after_cleanup_size) + self.assertEqual(0, cache._value_size) + + def test_add__null_key(self): + cache = lru_cache.LRUSizeCache() + self.assertRaises(ValueError, cache.add, lru_cache._null_key, 1) + + def test_add_tracks_size(self): + cache = lru_cache.LRUSizeCache() + self.assertEqual(0, cache._value_size) + cache.add("my key", "my value text") + self.assertEqual(13, cache._value_size) + + def test_remove_tracks_size(self): + cache = lru_cache.LRUSizeCache() + self.assertEqual(0, cache._value_size) + cache.add("my key", "my value text") + self.assertEqual(13, cache._value_size) + node = cache._cache["my key"] + cache._remove_node(node) + self.assertEqual(0, cache._value_size) + + def test_no_add_over_size(self): + """Adding a large value may not be cached at all.""" + cache = lru_cache.LRUSizeCache(max_size=10, after_cleanup_size=5) + self.assertEqual(0, cache._value_size) + self.assertEqual({}, cache.items()) + cache.add("test", "key") + self.assertEqual(3, cache._value_size) + self.assertEqual({"test": "key"}, cache.items()) + cache.add("test2", "key that is too big") + self.assertEqual(3, cache._value_size) + self.assertEqual({"test": "key"}, cache.items()) + # If we would add a key, only to cleanup and remove all cached entries, + # then obviously that value should not be stored + cache.add("test3", "bigkey") + self.assertEqual(3, cache._value_size) + self.assertEqual({"test": "key"}, cache.items()) + + cache.add("test4", "bikey") + self.assertEqual(3, cache._value_size) + self.assertEqual({"test": "key"}, cache.items()) + + def test_no_add_over_size_cleanup(self): + """If a large value is not cached, we will call cleanup right away.""" + cleanup_calls = [] + + def cleanup(key, value): + cleanup_calls.append((key, value)) + + cache = lru_cache.LRUSizeCache(max_size=10, after_cleanup_size=5) + self.assertEqual(0, cache._value_size) + self.assertEqual({}, cache.items()) + cache.add("test", "key that is too big", cleanup=cleanup) + # key was not added + self.assertEqual(0, cache._value_size) + self.assertEqual({}, cache.items()) + # and cleanup was called + self.assertEqual([("test", "key that is too big")], cleanup_calls) + + def test_adding_clears_cache_based_on_size(self): + """The cache is cleared in LRU order until small enough""" + cache = lru_cache.LRUSizeCache(max_size=20) + cache.add("key1", "value") # 5 chars + cache.add("key2", "value2") # 6 chars + cache.add("key3", "value23") # 7 chars + self.assertEqual(5 + 6 + 7, cache._value_size) + cache["key2"] # reference key2 so it gets a newer reference time + cache.add("key4", "value234") # 8 chars, over limit + # We have to remove 2 keys to get back under limit + self.assertEqual(6 + 8, cache._value_size) + self.assertEqual({"key2": "value2", "key4": "value234"}, cache.items()) + + def test_adding_clears_to_after_cleanup_size(self): + cache = lru_cache.LRUSizeCache(max_size=20, after_cleanup_size=10) + cache.add("key1", "value") # 5 chars + cache.add("key2", "value2") # 6 
chars
+        cache.add("key3", "value23")  # 7 chars
+        self.assertEqual(5 + 6 + 7, cache._value_size)
+        cache["key2"]  # reference key2 so it gets a newer reference time
+        cache.add("key4", "value234")  # 8 chars, over limit
+        # We have to remove 3 keys to get back under limit
+        self.assertEqual(8, cache._value_size)
+        self.assertEqual({"key4": "value234"}, cache.items())
+
+    def test_custom_sizes(self):
+        def size_of_list(lst):
+            return sum(len(x) for x in lst)
+
+        cache = lru_cache.LRUSizeCache(
+            max_size=20, after_cleanup_size=10, compute_size=size_of_list
+        )
+
+        cache.add("key1", ["val", "ue"])  # 5 chars
+        cache.add("key2", ["val", "ue2"])  # 6 chars
+        cache.add("key3", ["val", "ue23"])  # 7 chars
+        self.assertEqual(5 + 6 + 7, cache._value_size)
+        cache["key2"]  # reference key2 so it gets a newer reference time
+        cache.add("key4", ["value", "234"])  # 8 chars, over limit
+        # We have to remove 3 keys to get back under limit
+        self.assertEqual(8, cache._value_size)
+        self.assertEqual({"key4": ["value", "234"]}, cache.items())
+
+    def test_cleanup(self):
+        cache = lru_cache.LRUSizeCache(max_size=20, after_cleanup_size=10)
+
+        # Add these in order
+        cache.add("key1", "value")  # 5 chars
+        cache.add("key2", "value2")  # 6 chars
+        cache.add("key3", "value23")  # 7 chars
+        self.assertEqual(5 + 6 + 7, cache._value_size)
+
+        cache.cleanup()
+        # Only the most recent fits after cleaning up
+        self.assertEqual(7, cache._value_size)
+
+    def test_keys(self):
+        cache = lru_cache.LRUSizeCache(max_size=10)
+
+        cache[1] = "a"
+        cache[2] = "b"
+        cache[3] = "cdef"
+        self.assertEqual([1, 2, 3], sorted(cache.keys()))
+
+    def test_resize_smaller(self):
+        cache = lru_cache.LRUSizeCache(max_size=10, after_cleanup_size=9)
+        cache[1] = "abc"
+        cache[2] = "def"
+        cache[3] = "ghi"
+        cache[4] = "jkl"
+        # Triggers a cleanup
+        self.assertEqual([2, 3, 4], sorted(cache.keys()))
+        # Resize should also cleanup again
+        cache.resize(max_size=6, after_cleanup_size=4)
+        self.assertEqual([4], sorted(cache.keys()))
+        # Adding should use the new max size
+        cache[5] = "mno"
+        self.assertEqual([4, 5], sorted(cache.keys()))
+        cache[6] = "pqr"
+        self.assertEqual([6], sorted(cache.keys()))
+
+    def test_resize_larger(self):
+        cache = lru_cache.LRUSizeCache(max_size=10, after_cleanup_size=9)
+        cache[1] = "abc"
+        cache[2] = "def"
+        cache[3] = "ghi"
+        cache[4] = "jkl"
+        # Triggers a cleanup
+        self.assertEqual([2, 3, 4], sorted(cache.keys()))
+        cache.resize(max_size=15, after_cleanup_size=12)
+        self.assertEqual([2, 3, 4], sorted(cache.keys()))
+        cache[5] = "mno"
+        cache[6] = "pqr"
+        self.assertEqual([2, 3, 4, 5, 6], sorted(cache.keys()))
+        cache[7] = "stu"
+        self.assertEqual([4, 5, 6, 7], sorted(cache.keys()))
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_mailmap.py b/env/lib/python3.8/site-packages/dulwich/tests/test_mailmap.py
new file mode 100644
index 00000000..bd44689b
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/test_mailmap.py
@@ -0,0 +1,101 @@
+# test_mailmap.py -- Tests for dulwich.mailmap
+# Copyright (C) 2018 Jelmer Vernooij <jelmer@jelmer.uk>
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+#
+
+"""Tests for dulwich.mailmap."""
+
+from io import BytesIO
+
+from unittest import TestCase
+
+from dulwich.mailmap import Mailmap, read_mailmap
+
+
+class ReadMailmapTests(TestCase):
+    def test_read(self):
+        b = BytesIO(
+            b"""\
+Jane Doe         <jane@desktop.(none)>
+Joe R. Developer <joe@example.com>
+# A comment
+<cto@company.xx>                       <cto@coompany.xx> # Comment
+Some Dude <some@dude.xx>         nick1 <bugs@company.xx>
+Other Author <other@author.xx>   nick2 <bugs@company.xx>
+Other Author <other@author.xx>         <nick2@company.xx>
+Santa Claus <santa.claus@northpole.xx> <me@company.xx>
+"""
+        )
+        self.assertEqual(
+            [
+                ((b"Jane Doe", b"jane@desktop.(none)"), None),
+                ((b"Joe R. Developer", b"joe@example.com"), None),
+                ((None, b"cto@company.xx"), (None, b"cto@coompany.xx")),
+                (
+                    (b"Some Dude", b"some@dude.xx"),
+                    (b"nick1", b"bugs@company.xx"),
+                ),
+                (
+                    (b"Other Author", b"other@author.xx"),
+                    (b"nick2", b"bugs@company.xx"),
+                ),
+                (
+                    (b"Other Author", b"other@author.xx"),
+                    (None, b"nick2@company.xx"),
+                ),
+                (
+                    (b"Santa Claus", b"santa.claus@northpole.xx"),
+                    (None, b"me@company.xx"),
+                ),
+            ],
+            list(read_mailmap(b)),
+        )
+
+
+class MailmapTests(TestCase):
+    def test_lookup(self):
+        m = Mailmap()
+        m.add_entry((b"Jane Doe", b"jane@desktop.(none)"), (None, None))
+        m.add_entry((b"Joe R. Developer", b"joe@example.com"), None)
+        m.add_entry((None, b"cto@company.xx"), (None, b"cto@coompany.xx"))
+        m.add_entry((b"Some Dude", b"some@dude.xx"), (b"nick1", b"bugs@company.xx"))
+        m.add_entry(
+            (b"Other Author", b"other@author.xx"),
+            (b"nick2", b"bugs@company.xx"),
+        )
+        m.add_entry((b"Other Author", b"other@author.xx"), (None, b"nick2@company.xx"))
+        m.add_entry(
+            (b"Santa Claus", b"santa.claus@northpole.xx"),
+            (None, b"me@company.xx"),
+        )
+        self.assertEqual(
+            b"Jane Doe <jane@desktop.(none)>",
+            m.lookup(b"Jane Doe <jane@desktop.(none)>"),
+        )
+        self.assertEqual(
+            b"Jane Doe <jane@desktop.(none)>",
+            m.lookup(b"Jane Doe <jane@example.com>"),
+        )
+        self.assertEqual(
+            b"Jane Doe <jane@desktop.(none)>",
+            m.lookup(b"Jane D. <jane@desktop.(none)>"),
+        )
+        self.assertEqual(
+            b"Some Dude <some@dude.xx>", m.lookup(b"nick1 <bugs@company.xx>")
+        )
+        self.assertEqual(b"CTO <cto@company.xx>", m.lookup(b"CTO <cto@coompany.xx>"))
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_missing_obj_finder.py b/env/lib/python3.8/site-packages/dulwich/tests/test_missing_obj_finder.py
new file mode 100644
index 00000000..817f4d15
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/test_missing_obj_finder.py
@@ -0,0 +1,319 @@
+# test_missing_obj_finder.py -- tests for MissingObjectFinder
+# Copyright (C) 2012 syntevo GmbH
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+# + +from dulwich.object_store import ( + MemoryObjectStore, +) +from dulwich.objects import ( + Blob, +) +from dulwich.tests import TestCase +from dulwich.tests.utils import ( + make_object, + make_tag, + build_commit_graph, +) + + +class MissingObjectFinderTest(TestCase): + def setUp(self): + super(MissingObjectFinderTest, self).setUp() + self.store = MemoryObjectStore() + self.commits = [] + + def cmt(self, n): + return self.commits[n - 1] + + def assertMissingMatch(self, haves, wants, expected): + for sha, path in self.store.find_missing_objects(haves, wants, set()): + self.assertIn( + sha, + expected, + "(%s,%s) erroneously reported as missing" % (sha, path) + ) + expected.remove(sha) + + self.assertEqual( + len(expected), + 0, + "some objects are not reported as missing: %s" % (expected,), + ) + + +class MOFLinearRepoTest(MissingObjectFinderTest): + def setUp(self): + super(MOFLinearRepoTest, self).setUp() + # present in 1, removed in 3 + f1_1 = make_object(Blob, data=b"f1") + # present in all revisions, changed in 2 and 3 + f2_1 = make_object(Blob, data=b"f2") + f2_2 = make_object(Blob, data=b"f2-changed") + f2_3 = make_object(Blob, data=b"f2-changed-again") + # added in 2, left unmodified in 3 + f3_2 = make_object(Blob, data=b"f3") + + commit_spec = [[1], [2, 1], [3, 2]] + trees = { + 1: [(b"f1", f1_1), (b"f2", f2_1)], + 2: [(b"f1", f1_1), (b"f2", f2_2), (b"f3", f3_2)], + 3: [(b"f2", f2_3), (b"f3", f3_2)], + } + # commit 1: f1 and f2 + # commit 2: f3 added, f2 changed. Missing shall report commit id and a + # tree referenced by commit + # commit 3: f1 removed, f2 changed. Commit sha and root tree sha shall + # be reported as modified + self.commits = build_commit_graph(self.store, commit_spec, trees) + self.missing_1_2 = [self.cmt(2).id, self.cmt(2).tree, f2_2.id, f3_2.id] + self.missing_2_3 = [self.cmt(3).id, self.cmt(3).tree, f2_3.id] + self.missing_1_3 = [ + self.cmt(2).id, + self.cmt(3).id, + self.cmt(2).tree, + self.cmt(3).tree, + f2_2.id, + f3_2.id, + f2_3.id, + ] + + def test_1_to_2(self): + self.assertMissingMatch([self.cmt(1).id], [self.cmt(2).id], self.missing_1_2) + + def test_2_to_3(self): + self.assertMissingMatch([self.cmt(2).id], [self.cmt(3).id], self.missing_2_3) + + def test_1_to_3(self): + self.assertMissingMatch([self.cmt(1).id], [self.cmt(3).id], self.missing_1_3) + + def test_bogus_haves(self): + """Ensure non-existent SHA in haves are tolerated""" + bogus_sha = self.cmt(2).id[::-1] + haves = [self.cmt(1).id, bogus_sha] + wants = [self.cmt(3).id] + self.assertMissingMatch(haves, wants, self.missing_1_3) + + def test_bogus_wants_failure(self): + """Ensure non-existent SHA in wants are not tolerated""" + bogus_sha = self.cmt(2).id[::-1] + haves = [self.cmt(1).id] + wants = [self.cmt(3).id, bogus_sha] + self.assertRaises( + KeyError, self.store.find_missing_objects, haves, wants, set() + ) + + def test_no_changes(self): + self.assertMissingMatch([self.cmt(3).id], [self.cmt(3).id], []) + + +class MOFMergeForkRepoTest(MissingObjectFinderTest): + # 1 --- 2 --- 4 --- 6 --- 7 + # \ / + # 3 --- + # \ + # 5 + + def setUp(self): + super(MOFMergeForkRepoTest, self).setUp() + f1_1 = make_object(Blob, data=b"f1") + f1_2 = make_object(Blob, data=b"f1-2") + f1_4 = make_object(Blob, data=b"f1-4") + f1_7 = make_object(Blob, data=b"f1-2") # same data as in rev 2 + f2_1 = make_object(Blob, data=b"f2") + f2_3 = make_object(Blob, data=b"f2-3") + f3_3 = make_object(Blob, data=b"f3") + f3_5 = make_object(Blob, data=b"f3-5") + commit_spec = [[1], [2, 1], [3, 2], [4, 2], [5, 3], [6, 
3, 4], [7, 6]] + trees = { + 1: [(b"f1", f1_1), (b"f2", f2_1)], + 2: [(b"f1", f1_2), (b"f2", f2_1)], # f1 changed + # f3 added, f2 changed + 3: [(b"f1", f1_2), (b"f2", f2_3), (b"f3", f3_3)], + 4: [(b"f1", f1_4), (b"f2", f2_1)], # f1 changed + 5: [(b"f1", f1_2), (b"f3", f3_5)], # f2 removed, f3 changed + # merged 3 and 4 + 6: [(b"f1", f1_4), (b"f2", f2_3), (b"f3", f3_3)], + # f1 changed to match rev2. f3 removed + 7: [(b"f1", f1_7), (b"f2", f2_3)], + } + self.commits = build_commit_graph(self.store, commit_spec, trees) + + self.f1_2_id = f1_2.id + self.f1_4_id = f1_4.id + self.f1_7_id = f1_7.id + self.f2_3_id = f2_3.id + self.f3_3_id = f3_3.id + + self.assertEqual(f1_2.id, f1_7.id, "[sanity]") + + def test_have6_want7(self): + # have 6, want 7. Ideally, shall not report f1_7 as it's the same as + # f1_2, however, to do so, MissingObjectFinder shall not record trees + # of common commits only, but also all parent trees and tree items, + # which is an overkill (i.e. in sha_done it records f1_4 as known, and + # doesn't record f1_2 was known prior to that, hence can't detect f1_7 + # is in fact f1_2 and shall not be reported) + self.assertMissingMatch( + [self.cmt(6).id], + [self.cmt(7).id], + [self.cmt(7).id, self.cmt(7).tree, self.f1_7_id], + ) + + def test_have4_want7(self): + # have 4, want 7. Shall not include rev5 as it is not in the tree + # between 4 and 7 (well, it is, but its SHA's are irrelevant for 4..7 + # commit hierarchy) + self.assertMissingMatch( + [self.cmt(4).id], + [self.cmt(7).id], + [ + self.cmt(7).id, + self.cmt(6).id, + self.cmt(3).id, + self.cmt(7).tree, + self.cmt(6).tree, + self.cmt(3).tree, + self.f2_3_id, + self.f3_3_id, + ], + ) + + def test_have1_want6(self): + # have 1, want 6. Shall not include rev5 + self.assertMissingMatch( + [self.cmt(1).id], + [self.cmt(6).id], + [ + self.cmt(6).id, + self.cmt(4).id, + self.cmt(3).id, + self.cmt(2).id, + self.cmt(6).tree, + self.cmt(4).tree, + self.cmt(3).tree, + self.cmt(2).tree, + self.f1_2_id, + self.f1_4_id, + self.f2_3_id, + self.f3_3_id, + ], + ) + + def test_have3_want6(self): + # have 3, want 7. Shall not report rev2 and its tree, because + # haves(3) means has parents, i.e. rev2, too + # BUT shall report any changes descending rev2 (excluding rev3) + # Shall NOT report f1_7 as it's technically == f1_2 + self.assertMissingMatch( + [self.cmt(3).id], + [self.cmt(7).id], + [ + self.cmt(7).id, + self.cmt(6).id, + self.cmt(4).id, + self.cmt(7).tree, + self.cmt(6).tree, + self.cmt(4).tree, + self.f1_4_id, + ], + ) + + def test_have5_want7(self): + # have 5, want 7. Common parent is rev2, hence children of rev2 from + # a descent line other than rev5 shall be reported + # expects f1_4 from rev6. 
f3_5 is known in rev5;
+        # f1_7 shall be the same as f1_2 (known, too)
+        self.assertMissingMatch(
+            [self.cmt(5).id],
+            [self.cmt(7).id],
+            [
+                self.cmt(7).id,
+                self.cmt(6).id,
+                self.cmt(4).id,
+                self.cmt(7).tree,
+                self.cmt(6).tree,
+                self.cmt(4).tree,
+                self.f1_4_id,
+            ],
+        )
+
+
+class MOFTagsTest(MissingObjectFinderTest):
+    def setUp(self):
+        super(MOFTagsTest, self).setUp()
+        f1_1 = make_object(Blob, data=b"f1")
+        commit_spec = [[1]]
+        trees = {1: [(b"f1", f1_1)]}
+        self.commits = build_commit_graph(self.store, commit_spec, trees)
+
+        self._normal_tag = make_tag(self.cmt(1))
+        self.store.add_object(self._normal_tag)
+
+        self._tag_of_tag = make_tag(self._normal_tag)
+        self.store.add_object(self._tag_of_tag)
+
+        self._tag_of_tree = make_tag(self.store[self.cmt(1).tree])
+        self.store.add_object(self._tag_of_tree)
+
+        self._tag_of_blob = make_tag(f1_1)
+        self.store.add_object(self._tag_of_blob)
+
+        self._tag_of_tag_of_blob = make_tag(self._tag_of_blob)
+        self.store.add_object(self._tag_of_tag_of_blob)
+
+        self.f1_1_id = f1_1.id
+
+    def test_tagged_commit(self):
+        # The user already has the tagged commit, all they want is the tag,
+        # so send them only the tag object.
+        self.assertMissingMatch(
+            [self.cmt(1).id], [self._normal_tag.id], [self._normal_tag.id]
+        )
+
+    # The remaining cases are unusual, but do happen in the wild.
+    def test_tagged_tag(self):
+        # User already has tagged tag, send only tag of tag
+        self.assertMissingMatch(
+            [self._normal_tag.id], [self._tag_of_tag.id], [self._tag_of_tag.id]
+        )
+        # User needs both tags, but already has commit
+        self.assertMissingMatch(
+            [self.cmt(1).id],
+            [self._tag_of_tag.id],
+            [self._normal_tag.id, self._tag_of_tag.id],
+        )
+
+    def test_tagged_tree(self):
+        self.assertMissingMatch(
+            [],
+            [self._tag_of_tree.id],
+            [self._tag_of_tree.id, self.cmt(1).tree, self.f1_1_id],
+        )
+
+    def test_tagged_blob(self):
+        self.assertMissingMatch(
+            [], [self._tag_of_blob.id], [self._tag_of_blob.id, self.f1_1_id]
+        )
+
+    def test_tagged_tagged_blob(self):
+        self.assertMissingMatch(
+            [],
+            [self._tag_of_tag_of_blob.id],
+            [self._tag_of_tag_of_blob.id, self._tag_of_blob.id, self.f1_1_id],
+        )
diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_object_store.py b/env/lib/python3.8/site-packages/dulwich/tests/test_object_store.py
new file mode 100644
index 00000000..b8498f0c
--- /dev/null
+++ b/env/lib/python3.8/site-packages/dulwich/tests/test_object_store.py
@@ -0,0 +1,805 @@
+# test_object_store.py -- tests for object_store.py
+# Copyright (C) 2008 Jelmer Vernooij <jelmer@samba.org>
+#
+# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
+# General Public License as public by the Free Software Foundation; version 2.0
+# or (at your option) any later version. You can redistribute it and/or
+# modify it under the terms of either of these two licenses.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# You should have received a copy of the licenses; if not, see
+# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
+# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
+# License, Version 2.0.
+# + +"""Tests for the object store interface.""" + + +from contextlib import closing +from io import BytesIO +from unittest import skipUnless +import os +import shutil +import stat +import sys +import tempfile + +from dulwich.index import ( + commit_tree, +) +from dulwich.errors import ( + NotTreeError, +) +from dulwich.objects import ( + sha_to_hex, + Blob, + Tree, + TreeEntry, + EmptyFileException, + SubmoduleEncountered, + S_IFGITLINK, +) +from dulwich.object_store import ( + DiskObjectStore, + MemoryObjectStore, + OverlayObjectStore, + ObjectStoreGraphWalker, + commit_tree_changes, + read_packs_file, + tree_lookup_path, +) +from dulwich.pack import ( + REF_DELTA, + write_pack_objects, +) +from dulwich.protocol import DEPTH_INFINITE +from dulwich.tests import ( + TestCase, +) +from dulwich.tests.utils import ( + make_object, + make_tag, + build_pack, +) + +try: + from unittest.mock import patch +except ImportError: + patch = None # type: ignore + + +testobject = make_object(Blob, data=b"yummy data") + + +class ObjectStoreTests(object): + def test_determine_wants_all(self): + self.assertEqual( + [b"1" * 40], + self.store.determine_wants_all({b"refs/heads/foo": b"1" * 40}), + ) + + def test_determine_wants_all_zero(self): + self.assertEqual( + [], self.store.determine_wants_all({b"refs/heads/foo": b"0" * 40}) + ) + + @skipUnless(patch, "Required mock.patch") + def test_determine_wants_all_depth(self): + self.store.add_object(testobject) + refs = {b"refs/heads/foo": testobject.id} + with patch.object(self.store, "_get_depth", return_value=1) as m: + self.assertEqual( + [], self.store.determine_wants_all(refs, depth=0) + ) + self.assertEqual( + [testobject.id], + self.store.determine_wants_all(refs, depth=DEPTH_INFINITE), + ) + m.assert_not_called() + + self.assertEqual( + [], self.store.determine_wants_all(refs, depth=1) + ) + m.assert_called_with(testobject.id) + self.assertEqual( + [testobject.id], self.store.determine_wants_all(refs, depth=2) + ) + + def test_get_depth(self): + self.assertEqual( + 0, self.store._get_depth(testobject.id) + ) + + self.store.add_object(testobject) + self.assertEqual( + 1, self.store._get_depth(testobject.id, get_parents=lambda x: []) + ) + + parent = make_object(Blob, data=b"parent data") + self.store.add_object(parent) + self.assertEqual( + 2, + self.store._get_depth( + testobject.id, + get_parents=lambda x: [parent.id] if x == testobject else [], + ), + ) + + def test_iter(self): + self.assertEqual([], list(self.store)) + + def test_get_nonexistant(self): + self.assertRaises(KeyError, lambda: self.store[b"a" * 40]) + + def test_contains_nonexistant(self): + self.assertNotIn(b"a" * 40, self.store) + + def test_add_objects_empty(self): + self.store.add_objects([]) + + def test_add_commit(self): + # TODO: Argh, no way to construct Git commit objects without + # access to a serialized form. + self.store.add_objects([]) + + def test_store_resilience(self): + """Test if updating an existing stored object doesn't erase the + object from the store. 
+ """ + test_object = make_object(Blob, data=b"data") + + self.store.add_object(test_object) + test_object_id = test_object.id + test_object.data = test_object.data + b"update" + stored_test_object = self.store[test_object_id] + + self.assertNotEqual(test_object.id, stored_test_object.id) + self.assertEqual(stored_test_object.id, test_object_id) + + def test_add_object(self): + self.store.add_object(testobject) + self.assertEqual(set([testobject.id]), set(self.store)) + self.assertIn(testobject.id, self.store) + r = self.store[testobject.id] + self.assertEqual(r, testobject) + + def test_add_objects(self): + data = [(testobject, "mypath")] + self.store.add_objects(data) + self.assertEqual(set([testobject.id]), set(self.store)) + self.assertIn(testobject.id, self.store) + r = self.store[testobject.id] + self.assertEqual(r, testobject) + + def test_tree_changes(self): + blob_a1 = make_object(Blob, data=b"a1") + blob_a2 = make_object(Blob, data=b"a2") + blob_b = make_object(Blob, data=b"b") + for blob in [blob_a1, blob_a2, blob_b]: + self.store.add_object(blob) + + blobs_1 = [(b"a", blob_a1.id, 0o100644), (b"b", blob_b.id, 0o100644)] + tree1_id = commit_tree(self.store, blobs_1) + blobs_2 = [(b"a", blob_a2.id, 0o100644), (b"b", blob_b.id, 0o100644)] + tree2_id = commit_tree(self.store, blobs_2) + change_a = ( + (b"a", b"a"), + (0o100644, 0o100644), + (blob_a1.id, blob_a2.id), + ) + self.assertEqual([change_a], list(self.store.tree_changes(tree1_id, tree2_id))) + self.assertEqual( + [ + change_a, + ((b"b", b"b"), (0o100644, 0o100644), (blob_b.id, blob_b.id)), + ], + list(self.store.tree_changes(tree1_id, tree2_id, want_unchanged=True)), + ) + + def test_iter_tree_contents(self): + blob_a = make_object(Blob, data=b"a") + blob_b = make_object(Blob, data=b"b") + blob_c = make_object(Blob, data=b"c") + for blob in [blob_a, blob_b, blob_c]: + self.store.add_object(blob) + + blobs = [ + (b"a", blob_a.id, 0o100644), + (b"ad/b", blob_b.id, 0o100644), + (b"ad/bd/c", blob_c.id, 0o100755), + (b"ad/c", blob_c.id, 0o100644), + (b"c", blob_c.id, 0o100644), + ] + tree_id = commit_tree(self.store, blobs) + self.assertEqual( + [TreeEntry(p, m, h) for (p, h, m) in blobs], + list(self.store.iter_tree_contents(tree_id)), + ) + + def test_iter_tree_contents_include_trees(self): + blob_a = make_object(Blob, data=b"a") + blob_b = make_object(Blob, data=b"b") + blob_c = make_object(Blob, data=b"c") + for blob in [blob_a, blob_b, blob_c]: + self.store.add_object(blob) + + blobs = [ + (b"a", blob_a.id, 0o100644), + (b"ad/b", blob_b.id, 0o100644), + (b"ad/bd/c", blob_c.id, 0o100755), + ] + tree_id = commit_tree(self.store, blobs) + tree = self.store[tree_id] + tree_ad = self.store[tree[b"ad"][1]] + tree_bd = self.store[tree_ad[b"bd"][1]] + + expected = [ + TreeEntry(b"", 0o040000, tree_id), + TreeEntry(b"a", 0o100644, blob_a.id), + TreeEntry(b"ad", 0o040000, tree_ad.id), + TreeEntry(b"ad/b", 0o100644, blob_b.id), + TreeEntry(b"ad/bd", 0o040000, tree_bd.id), + TreeEntry(b"ad/bd/c", 0o100755, blob_c.id), + ] + actual = self.store.iter_tree_contents(tree_id, include_trees=True) + self.assertEqual(expected, list(actual)) + + def make_tag(self, name, obj): + tag = make_tag(obj, name=name) + self.store.add_object(tag) + return tag + + def test_peel_sha(self): + self.store.add_object(testobject) + tag1 = self.make_tag(b"1", testobject) + tag2 = self.make_tag(b"2", testobject) + tag3 = self.make_tag(b"3", testobject) + for obj in [testobject, tag1, tag2, tag3]: + self.assertEqual(testobject, self.store.peel_sha(obj.id)) + + def 
test_get_raw(self): + self.store.add_object(testobject) + self.assertEqual( + (Blob.type_num, b"yummy data"), self.store.get_raw(testobject.id) + ) + + def test_close(self): + # For now, just check that close doesn't barf. + self.store.add_object(testobject) + self.store.close() + + +class OverlayObjectStoreTests(ObjectStoreTests, TestCase): + def setUp(self): + TestCase.setUp(self) + self.bases = [MemoryObjectStore(), MemoryObjectStore()] + self.store = OverlayObjectStore(self.bases, self.bases[0]) + + +class MemoryObjectStoreTests(ObjectStoreTests, TestCase): + def setUp(self): + TestCase.setUp(self) + self.store = MemoryObjectStore() + + def test_add_pack(self): + o = MemoryObjectStore() + f, commit, abort = o.add_pack() + try: + b = make_object(Blob, data=b"more yummy data") + write_pack_objects(f.write, [(b, None)]) + except BaseException: + abort() + raise + else: + commit() + + def test_add_pack_emtpy(self): + o = MemoryObjectStore() + f, commit, abort = o.add_pack() + commit() + + def test_add_thin_pack(self): + o = MemoryObjectStore() + blob = make_object(Blob, data=b"yummy data") + o.add_object(blob) + + f = BytesIO() + entries = build_pack( + f, + [ + (REF_DELTA, (blob.id, b"more yummy data")), + ], + store=o, + ) + o.add_thin_pack(f.read, None) + packed_blob_sha = sha_to_hex(entries[0][3]) + self.assertEqual( + (Blob.type_num, b"more yummy data"), o.get_raw(packed_blob_sha) + ) + + def test_add_thin_pack_empty(self): + o = MemoryObjectStore() + + f = BytesIO() + entries = build_pack(f, [], store=o) + self.assertEqual([], entries) + o.add_thin_pack(f.read, None) + + +class PackBasedObjectStoreTests(ObjectStoreTests): + def tearDown(self): + for pack in self.store.packs: + pack.close() + + def test_empty_packs(self): + self.assertEqual([], list(self.store.packs)) + + def test_pack_loose_objects(self): + b1 = make_object(Blob, data=b"yummy data") + self.store.add_object(b1) + b2 = make_object(Blob, data=b"more yummy data") + self.store.add_object(b2) + b3 = make_object(Blob, data=b"even more yummy data") + b4 = make_object(Blob, data=b"and more yummy data") + self.store.add_objects([(b3, None), (b4, None)]) + self.assertEqual({b1.id, b2.id, b3.id, b4.id}, set(self.store)) + self.assertEqual(1, len(self.store.packs)) + self.assertEqual(2, self.store.pack_loose_objects()) + self.assertNotEqual([], list(self.store.packs)) + self.assertEqual(0, self.store.pack_loose_objects()) + + def test_repack(self): + b1 = make_object(Blob, data=b"yummy data") + self.store.add_object(b1) + b2 = make_object(Blob, data=b"more yummy data") + self.store.add_object(b2) + b3 = make_object(Blob, data=b"even more yummy data") + b4 = make_object(Blob, data=b"and more yummy data") + self.store.add_objects([(b3, None), (b4, None)]) + b5 = make_object(Blob, data=b"and more data") + b6 = make_object(Blob, data=b"and some more data") + self.store.add_objects([(b5, None), (b6, None)]) + self.assertEqual({b1.id, b2.id, b3.id, b4.id, b5.id, b6.id}, set(self.store)) + self.assertEqual(2, len(self.store.packs)) + self.assertEqual(6, self.store.repack()) + self.assertEqual(1, len(self.store.packs)) + self.assertEqual(0, self.store.pack_loose_objects()) + + def test_repack_existing(self): + b1 = make_object(Blob, data=b"yummy data") + self.store.add_object(b1) + b2 = make_object(Blob, data=b"more yummy data") + self.store.add_object(b2) + self.store.add_objects([(b1, None), (b2, None)]) + self.store.add_objects([(b2, None)]) + self.assertEqual({b1.id, b2.id}, set(self.store)) + self.assertEqual(2, 
len(self.store.packs)) + self.assertEqual(2, self.store.repack()) + self.assertEqual(1, len(self.store.packs)) + self.assertEqual(0, self.store.pack_loose_objects()) + + self.assertEqual({b1.id, b2.id}, set(self.store)) + self.assertEqual(1, len(self.store.packs)) + self.assertEqual(2, self.store.repack()) + self.assertEqual(1, len(self.store.packs)) + self.assertEqual(0, self.store.pack_loose_objects()) + + +class DiskObjectStoreTests(PackBasedObjectStoreTests, TestCase): + def setUp(self): + TestCase.setUp(self) + self.store_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, self.store_dir) + self.store = DiskObjectStore.init(self.store_dir) + + def tearDown(self): + TestCase.tearDown(self) + PackBasedObjectStoreTests.tearDown(self) + + def test_loose_compression_level(self): + alternate_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, alternate_dir) + alternate_store = DiskObjectStore(alternate_dir, loose_compression_level=6) + b2 = make_object(Blob, data=b"yummy data") + alternate_store.add_object(b2) + + def test_alternates(self): + alternate_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, alternate_dir) + alternate_store = DiskObjectStore(alternate_dir) + b2 = make_object(Blob, data=b"yummy data") + alternate_store.add_object(b2) + store = DiskObjectStore(self.store_dir) + self.assertRaises(KeyError, store.__getitem__, b2.id) + store.add_alternate_path(alternate_dir) + self.assertIn(b2.id, store) + self.assertEqual(b2, store[b2.id]) + + def test_read_alternate_paths(self): + store = DiskObjectStore(self.store_dir) + + abs_path = os.path.abspath(os.path.normpath("/abspath")) + # ensures in particular existence of the alternates file + store.add_alternate_path(abs_path) + self.assertEqual(set(store._read_alternate_paths()), {abs_path}) + + store.add_alternate_path("relative-path") + self.assertIn( + os.path.join(store.path, "relative-path"), + set(store._read_alternate_paths()), + ) + + # arguably, add_alternate_path() could strip comments. 
+ # Meanwhile it's more convenient to use it than to import INFODIR + store.add_alternate_path("# comment") + for alt_path in store._read_alternate_paths(): + self.assertNotIn("#", alt_path) + + def test_file_modes(self): + self.store.add_object(testobject) + path = self.store._get_shafile_path(testobject.id) + mode = os.stat(path).st_mode + + packmode = "0o100444" if sys.platform != "win32" else "0o100666" + self.assertEqual(oct(mode), packmode) + + def test_corrupted_object_raise_exception(self): + """Corrupted sha1 disk file should raise specific exception""" + self.store.add_object(testobject) + self.assertEqual( + (Blob.type_num, b"yummy data"), self.store.get_raw(testobject.id) + ) + self.assertTrue(self.store.contains_loose(testobject.id)) + self.assertIsNotNone(self.store._get_loose_object(testobject.id)) + + path = self.store._get_shafile_path(testobject.id) + old_mode = os.stat(path).st_mode + os.chmod(path, 0o600) + with open(path, "wb") as f: # corrupt the file + f.write(b"") + os.chmod(path, old_mode) + + expected_error_msg = "Corrupted empty file detected" + try: + self.store.contains_loose(testobject.id) + except EmptyFileException as e: + self.assertEqual(str(e), expected_error_msg) + + try: + self.store._get_loose_object(testobject.id) + except EmptyFileException as e: + self.assertEqual(str(e), expected_error_msg) + + # this does not change iteration on loose objects though + self.assertEqual([testobject.id], list(self.store._iter_loose_objects())) + + def test_tempfile_in_loose_store(self): + self.store.add_object(testobject) + self.assertEqual([testobject.id], list(self.store._iter_loose_objects())) + + # add temporary files to the loose store + for i in range(256): + dirname = os.path.join(self.store_dir, "%02x" % i) + if not os.path.isdir(dirname): + os.makedirs(dirname) + fd, n = tempfile.mkstemp(prefix="tmp_obj_", dir=dirname) + os.close(fd) + + self.assertEqual([testobject.id], list(self.store._iter_loose_objects())) + + def test_add_alternate_path(self): + store = DiskObjectStore(self.store_dir) + self.assertEqual([], list(store._read_alternate_paths())) + store.add_alternate_path("/foo/path") + self.assertEqual(["/foo/path"], list(store._read_alternate_paths())) + store.add_alternate_path("/bar/path") + self.assertEqual( + ["/foo/path", "/bar/path"], list(store._read_alternate_paths()) + ) + + def test_rel_alternative_path(self): + alternate_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, alternate_dir) + alternate_store = DiskObjectStore(alternate_dir) + b2 = make_object(Blob, data=b"yummy data") + alternate_store.add_object(b2) + store = DiskObjectStore(self.store_dir) + self.assertRaises(KeyError, store.__getitem__, b2.id) + store.add_alternate_path(os.path.relpath(alternate_dir, self.store_dir)) + self.assertEqual(list(alternate_store), list(store.alternates[0])) + self.assertIn(b2.id, store) + self.assertEqual(b2, store[b2.id]) + + def test_pack_dir(self): + o = DiskObjectStore(self.store_dir) + self.assertEqual(os.path.join(self.store_dir, "pack"), o.pack_dir) + + def test_add_pack(self): + o = DiskObjectStore(self.store_dir) + f, commit, abort = o.add_pack() + try: + b = make_object(Blob, data=b"more yummy data") + write_pack_objects(f.write, [(b, None)]) + except BaseException: + abort() + raise + else: + commit() + + def test_add_thin_pack(self): + o = DiskObjectStore(self.store_dir) + try: + blob = make_object(Blob, data=b"yummy data") + o.add_object(blob) + + f = BytesIO() + entries = build_pack( + f, + [ + (REF_DELTA, (blob.id, b"more yummy 
data")), + ], + store=o, + ) + + with o.add_thin_pack(f.read, None) as pack: + packed_blob_sha = sha_to_hex(entries[0][3]) + pack.check_length_and_checksum() + self.assertEqual(sorted([blob.id, packed_blob_sha]), list(pack)) + self.assertTrue(o.contains_packed(packed_blob_sha)) + self.assertTrue(o.contains_packed(blob.id)) + self.assertEqual( + (Blob.type_num, b"more yummy data"), + o.get_raw(packed_blob_sha), + ) + finally: + o.close() + + def test_add_thin_pack_empty(self): + with closing(DiskObjectStore(self.store_dir)) as o: + f = BytesIO() + entries = build_pack(f, [], store=o) + self.assertEqual([], entries) + o.add_thin_pack(f.read, None) + + +class TreeLookupPathTests(TestCase): + def setUp(self): + TestCase.setUp(self) + self.store = MemoryObjectStore() + blob_a = make_object(Blob, data=b"a") + blob_b = make_object(Blob, data=b"b") + blob_c = make_object(Blob, data=b"c") + for blob in [blob_a, blob_b, blob_c]: + self.store.add_object(blob) + + blobs = [ + (b"a", blob_a.id, 0o100644), + (b"ad/b", blob_b.id, 0o100644), + (b"ad/bd/c", blob_c.id, 0o100755), + (b"ad/c", blob_c.id, 0o100644), + (b"c", blob_c.id, 0o100644), + (b"d", blob_c.id, S_IFGITLINK), + ] + self.tree_id = commit_tree(self.store, blobs) + + def get_object(self, sha): + return self.store[sha] + + def test_lookup_blob(self): + o_id = tree_lookup_path(self.get_object, self.tree_id, b"a")[1] + self.assertIsInstance(self.store[o_id], Blob) + + def test_lookup_tree(self): + o_id = tree_lookup_path(self.get_object, self.tree_id, b"ad")[1] + self.assertIsInstance(self.store[o_id], Tree) + o_id = tree_lookup_path(self.get_object, self.tree_id, b"ad/bd")[1] + self.assertIsInstance(self.store[o_id], Tree) + o_id = tree_lookup_path(self.get_object, self.tree_id, b"ad/bd/")[1] + self.assertIsInstance(self.store[o_id], Tree) + + def test_lookup_submodule(self): + tree_lookup_path(self.get_object, self.tree_id, b"d")[1] + self.assertRaises( + SubmoduleEncountered, tree_lookup_path, self.get_object, + self.tree_id, b"d/a") + + def test_lookup_nonexistent(self): + self.assertRaises( + KeyError, tree_lookup_path, self.get_object, self.tree_id, b"j" + ) + + def test_lookup_not_tree(self): + self.assertRaises( + NotTreeError, + tree_lookup_path, + self.get_object, + self.tree_id, + b"ad/b/j", + ) + + +class ObjectStoreGraphWalkerTests(TestCase): + def get_walker(self, heads, parent_map): + new_parent_map = dict( + [(k * 40, [(p * 40) for p in ps]) for (k, ps) in parent_map.items()] + ) + return ObjectStoreGraphWalker( + [x * 40 for x in heads], new_parent_map.__getitem__ + ) + + def test_ack_invalid_value(self): + gw = self.get_walker([], {}) + self.assertRaises(ValueError, gw.ack, "tooshort") + + def test_empty(self): + gw = self.get_walker([], {}) + self.assertIs(None, next(gw)) + gw.ack(b"a" * 40) + self.assertIs(None, next(gw)) + + def test_descends(self): + gw = self.get_walker([b"a"], {b"a": [b"b"], b"b": []}) + self.assertEqual(b"a" * 40, next(gw)) + self.assertEqual(b"b" * 40, next(gw)) + + def test_present(self): + gw = self.get_walker([b"a"], {b"a": [b"b"], b"b": []}) + gw.ack(b"a" * 40) + self.assertIs(None, next(gw)) + + def test_parent_present(self): + gw = self.get_walker([b"a"], {b"a": [b"b"], b"b": []}) + self.assertEqual(b"a" * 40, next(gw)) + gw.ack(b"a" * 40) + self.assertIs(None, next(gw)) + + def test_child_ack_later(self): + gw = self.get_walker([b"a"], {b"a": [b"b"], b"b": [b"c"], b"c": []}) + self.assertEqual(b"a" * 40, next(gw)) + self.assertEqual(b"b" * 40, next(gw)) + gw.ack(b"a" * 40) + self.assertIs(None, 
next(gw)) + + def test_only_once(self): + # a b + # | | + # c d + # \ / + # e + gw = self.get_walker( + [b"a", b"b"], + { + b"a": [b"c"], + b"b": [b"d"], + b"c": [b"e"], + b"d": [b"e"], + b"e": [], + }, + ) + walk = [] + acked = False + walk.append(next(gw)) + walk.append(next(gw)) + # A branch (a, c) or (b, d) may be done after 2 steps or 3 depending on + # the order walked: 3-step walks include (a, b, c) and (b, a, d), etc. + if walk == [b"a" * 40, b"c" * 40] or walk == [b"b" * 40, b"d" * 40]: + gw.ack(walk[0]) + acked = True + + walk.append(next(gw)) + if not acked and walk[2] == b"c" * 40: + gw.ack(b"a" * 40) + elif not acked and walk[2] == b"d" * 40: + gw.ack(b"b" * 40) + walk.append(next(gw)) + self.assertIs(None, next(gw)) + + self.assertEqual([b"a" * 40, b"b" * 40, b"c" * 40, b"d" * 40], sorted(walk)) + self.assertLess(walk.index(b"a" * 40), walk.index(b"c" * 40)) + self.assertLess(walk.index(b"b" * 40), walk.index(b"d" * 40)) + + +class CommitTreeChangesTests(TestCase): + def setUp(self): + super(CommitTreeChangesTests, self).setUp() + self.store = MemoryObjectStore() + self.blob_a = make_object(Blob, data=b"a") + self.blob_b = make_object(Blob, data=b"b") + self.blob_c = make_object(Blob, data=b"c") + for blob in [self.blob_a, self.blob_b, self.blob_c]: + self.store.add_object(blob) + + blobs = [ + (b"a", self.blob_a.id, 0o100644), + (b"ad/b", self.blob_b.id, 0o100644), + (b"ad/bd/c", self.blob_c.id, 0o100755), + (b"ad/c", self.blob_c.id, 0o100644), + (b"c", self.blob_c.id, 0o100644), + ] + self.tree_id = commit_tree(self.store, blobs) + + def test_no_changes(self): + self.assertEqual( + self.store[self.tree_id], + commit_tree_changes(self.store, self.store[self.tree_id], []), + ) + + def test_add_blob(self): + blob_d = make_object(Blob, data=b"d") + new_tree = commit_tree_changes( + self.store, self.store[self.tree_id], [(b"d", 0o100644, blob_d.id)] + ) + self.assertEqual( + new_tree[b"d"], + (33188, b"c59d9b6344f1af00e504ba698129f07a34bbed8d"), + ) + + def test_add_blob_in_dir(self): + blob_d = make_object(Blob, data=b"d") + new_tree = commit_tree_changes( + self.store, + self.store[self.tree_id], + [(b"e/f/d", 0o100644, blob_d.id)], + ) + self.assertEqual( + new_tree.items(), + [ + TreeEntry(path=b"a", mode=stat.S_IFREG | 0o100644, sha=self.blob_a.id), + TreeEntry( + path=b"ad", + mode=stat.S_IFDIR, + sha=b"0e2ce2cd7725ff4817791be31ccd6e627e801f4a", + ), + TreeEntry(path=b"c", mode=stat.S_IFREG | 0o100644, sha=self.blob_c.id), + TreeEntry( + path=b"e", + mode=stat.S_IFDIR, + sha=b"6ab344e288724ac2fb38704728b8896e367ed108", + ), + ], + ) + e_tree = self.store[new_tree[b"e"][1]] + self.assertEqual( + e_tree.items(), + [ + TreeEntry( + path=b"f", + mode=stat.S_IFDIR, + sha=b"24d2c94d8af232b15a0978c006bf61ef4479a0a5", + ) + ], + ) + f_tree = self.store[e_tree[b"f"][1]] + self.assertEqual( + f_tree.items(), + [TreeEntry(path=b"d", mode=stat.S_IFREG | 0o100644, sha=blob_d.id)], + ) + + def test_delete_blob(self): + new_tree = commit_tree_changes( + self.store, self.store[self.tree_id], [(b"ad/bd/c", None, None)] + ) + self.assertEqual(set(new_tree), {b"a", b"ad", b"c"}) + ad_tree = self.store[new_tree[b"ad"][1]] + self.assertEqual(set(ad_tree), {b"b", b"c"}) + + +class TestReadPacksFile(TestCase): + def test_read_packs(self): + self.assertEqual( + ["pack-1.pack"], + list( + read_packs_file( + BytesIO( + b"""P pack-1.pack +""" + ) + ) + ), + ) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_objects.py b/env/lib/python3.8/site-packages/dulwich/tests/test_objects.py new 
file mode 100644 index 00000000..607e1848 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_objects.py @@ -0,0 +1,1453 @@ +# test_objects.py -- tests for objects.py +# Copyright (C) 2007 James Westby +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for git base objects.""" + +# TODO: Round-trip parse-serialize-parse and serialize-parse-serialize tests. + + +from io import BytesIO +import datetime +from itertools import ( + permutations, +) +import os +import stat +from contextlib import contextmanager + +from dulwich.errors import ( + ObjectFormatException, +) +from dulwich.objects import ( + Blob, + Tree, + Commit, + ShaFile, + Tag, + TreeEntry, + format_timezone, + hex_to_sha, + sha_to_hex, + hex_to_filename, + check_hexsha, + check_identity, + object_class, + parse_timezone, + pretty_format_tree_entry, + parse_tree, + _parse_tree_py, + sorted_tree_items, + _sorted_tree_items_py, + MAX_TIME, +) +from dulwich.tests import ( + TestCase, +) +from dulwich.tests.utils import ( + make_commit, + make_object, + functest_builder, + ext_functest_builder, +) + +a_sha = b"6f670c0fb53f9463760b7295fbb814e965fb20c8" +b_sha = b"2969be3e8ee1c0222396a5611407e4769f14e54b" +c_sha = b"954a536f7819d40e6f637f849ee187dd10066349" +tree_sha = b"70c190eb48fa8bbb50ddc692a17b44cb781af7f6" +tag_sha = b"71033db03a03c6a36721efcf1968dd8f8e0cf023" + + +class TestHexToSha(TestCase): + def test_simple(self): + self.assertEqual(b"\xab\xcd" * 10, hex_to_sha(b"abcd" * 10)) + + def test_reverse(self): + self.assertEqual(b"abcd" * 10, sha_to_hex(b"\xab\xcd" * 10)) + + +class BlobReadTests(TestCase): + """Test decompression of blobs""" + + def get_sha_file(self, cls, base, sha): + dir = os.path.join(os.path.dirname(__file__), "..", "..", "testdata", base) + return cls.from_path(hex_to_filename(dir, sha)) + + def get_blob(self, sha): + """Return the blob named sha from the test data dir""" + return self.get_sha_file(Blob, "blobs", sha) + + def get_tree(self, sha): + return self.get_sha_file(Tree, "trees", sha) + + def get_tag(self, sha): + return self.get_sha_file(Tag, "tags", sha) + + def commit(self, sha): + return self.get_sha_file(Commit, "commits", sha) + + def test_decompress_simple_blob(self): + b = self.get_blob(a_sha) + self.assertEqual(b.data, b"test 1\n") + self.assertEqual(b.sha().hexdigest().encode("ascii"), a_sha) + + def test_hash(self): + b = self.get_blob(a_sha) + self.assertEqual(hash(b.id), hash(b)) + + def test_parse_empty_blob_object(self): + sha = b"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391" + b = self.get_blob(sha) + self.assertEqual(b.data, b"") + self.assertEqual(b.id, sha) + self.assertEqual(b.sha().hexdigest().encode("ascii"), sha) + + def test_create_blob_from_string(self): + string = b"test 2\n" + b = Blob.from_string(string) + 
self.assertEqual(b.data, string) + self.assertEqual(b.sha().hexdigest().encode("ascii"), b_sha) + + def test_legacy_from_file(self): + b1 = Blob.from_string(b"foo") + b_raw = b1.as_legacy_object() + b2 = b1.from_file(BytesIO(b_raw)) + self.assertEqual(b1, b2) + + def test_legacy_from_file_compression_level(self): + b1 = Blob.from_string(b"foo") + b_raw = b1.as_legacy_object(compression_level=6) + b2 = b1.from_file(BytesIO(b_raw)) + self.assertEqual(b1, b2) + + def test_chunks(self): + string = b"test 5\n" + b = Blob.from_string(string) + self.assertEqual([string], b.chunked) + + def test_splitlines(self): + for case in [ + [], + [b"foo\nbar\n"], + [b"bl\na", b"blie"], + [b"bl\na", b"blie", b"bloe\n"], + [b"", b"bl\na", b"blie", b"bloe\n"], + [b"", b"", b"", b"bla\n"], + [b"", b"", b"", b"bla\n", b""], + [b"bl", b"", b"a\naaa"], + [b"a\naaa", b"a"], + ]: + b = Blob() + b.chunked = case + self.assertEqual(b.data.splitlines(True), b.splitlines()) + + def test_set_chunks(self): + b = Blob() + b.chunked = [b"te", b"st", b" 5\n"] + self.assertEqual(b"test 5\n", b.data) + b.chunked = [b"te", b"st", b" 6\n"] + self.assertEqual(b"test 6\n", b.as_raw_string()) + self.assertEqual(b"test 6\n", bytes(b)) + + def test_parse_legacy_blob(self): + string = b"test 3\n" + b = self.get_blob(c_sha) + self.assertEqual(b.data, string) + self.assertEqual(b.sha().hexdigest().encode("ascii"), c_sha) + + def test_eq(self): + blob1 = self.get_blob(a_sha) + blob2 = self.get_blob(a_sha) + self.assertEqual(blob1, blob2) + + def test_read_tree_from_file(self): + t = self.get_tree(tree_sha) + self.assertEqual(t.items()[0], (b"a", 33188, a_sha)) + self.assertEqual(t.items()[1], (b"b", 33188, b_sha)) + + def test_read_tree_from_file_parse_count(self): + old_deserialize = Tree._deserialize + + def reset_deserialize(): + Tree._deserialize = old_deserialize + + self.addCleanup(reset_deserialize) + self.deserialize_count = 0 + + def counting_deserialize(*args, **kwargs): + self.deserialize_count += 1 + return old_deserialize(*args, **kwargs) + + Tree._deserialize = counting_deserialize + t = self.get_tree(tree_sha) + self.assertEqual(t.items()[0], (b"a", 33188, a_sha)) + self.assertEqual(t.items()[1], (b"b", 33188, b_sha)) + self.assertEqual(self.deserialize_count, 1) + + def test_read_tag_from_file(self): + t = self.get_tag(tag_sha) + self.assertEqual( + t.object, (Commit, b"51b668fd5bf7061b7d6fa525f88803e6cfadaa51") + ) + self.assertEqual(t.name, b"signed") + self.assertEqual(t.tagger, b"Ali Sabil ") + self.assertEqual(t.tag_time, 1231203091) + self.assertEqual(t.message, b"This is a signed tag\n") + self.assertEqual( + t.signature, + b"-----BEGIN PGP SIGNATURE-----\n" + b"Version: GnuPG v1.4.9 (GNU/Linux)\n" + b"\n" + b"iEYEABECAAYFAkliqx8ACgkQqSMmLy9u/" + b"kcx5ACfakZ9NnPl02tOyYP6pkBoEkU1\n" + b"5EcAn0UFgokaSvS371Ym/4W9iJj6vh3h\n" + b"=ql7y\n" + b"-----END PGP SIGNATURE-----\n", + ) + + def test_read_commit_from_file(self): + sha = b"60dacdc733de308bb77bb76ce0fb0f9b44c9769e" + c = self.commit(sha) + self.assertEqual(c.tree, tree_sha) + self.assertEqual(c.parents, [b"0d89f20333fbb1d2f3a94da77f4981373d8f4310"]) + self.assertEqual(c.author, b"James Westby ") + self.assertEqual(c.committer, b"James Westby ") + self.assertEqual(c.commit_time, 1174759230) + self.assertEqual(c.commit_timezone, 0) + self.assertEqual(c.author_timezone, 0) + self.assertEqual(c.message, b"Test commit\n") + + def test_read_commit_no_parents(self): + sha = b"0d89f20333fbb1d2f3a94da77f4981373d8f4310" + c = self.commit(sha) + self.assertEqual(c.tree, 
b"90182552c4a85a45ec2a835cadc3451bebdfe870") + self.assertEqual(c.parents, []) + self.assertEqual(c.author, b"James Westby ") + self.assertEqual(c.committer, b"James Westby ") + self.assertEqual(c.commit_time, 1174758034) + self.assertEqual(c.commit_timezone, 0) + self.assertEqual(c.author_timezone, 0) + self.assertEqual(c.message, b"Test commit\n") + + def test_read_commit_two_parents(self): + sha = b"5dac377bdded4c9aeb8dff595f0faeebcc8498cc" + c = self.commit(sha) + self.assertEqual(c.tree, b"d80c186a03f423a81b39df39dc87fd269736ca86") + self.assertEqual( + c.parents, + [ + b"ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd", + b"4cffe90e0a41ad3f5190079d7c8f036bde29cbe6", + ], + ) + self.assertEqual(c.author, b"James Westby ") + self.assertEqual(c.committer, b"James Westby ") + self.assertEqual(c.commit_time, 1174773719) + self.assertEqual(c.commit_timezone, 0) + self.assertEqual(c.author_timezone, 0) + self.assertEqual(c.message, b"Merge ../b\n") + + def test_stub_sha(self): + sha = b"5" * 40 + c = make_commit(id=sha, message=b"foo") + self.assertIsInstance(c, Commit) + self.assertEqual(sha, c.id) + self.assertNotEqual(sha, c.sha()) + + +class ShaFileCheckTests(TestCase): + def assertCheckFails(self, cls, data): + obj = cls() + + def do_check(): + obj.set_raw_string(data) + obj.check() + + self.assertRaises(ObjectFormatException, do_check) + + def assertCheckSucceeds(self, cls, data): + obj = cls() + obj.set_raw_string(data) + self.assertEqual(None, obj.check()) + + +small_buffer_zlib_object = ( + b"\x48\x89\x15\xcc\x31\x0e\xc2\x30\x0c\x40\x51\xe6" + b"\x9c\xc2\x3b\xaa\x64\x37\xc4\xc1\x12\x42\x5c\xc5" + b"\x49\xac\x52\xd4\x92\xaa\x78\xe1\xf6\x94\xed\xeb" + b"\x0d\xdf\x75\x02\xa2\x7c\xea\xe5\x65\xd5\x81\x8b" + b"\x9a\x61\xba\xa0\xa9\x08\x36\xc9\x4c\x1a\xad\x88" + b"\x16\xba\x46\xc4\xa8\x99\x6a\x64\xe1\xe0\xdf\xcd" + b"\xa0\xf6\x75\x9d\x3d\xf8\xf1\xd0\x77\xdb\xfb\xdc" + b"\x86\xa3\x87\xf1\x2f\x93\xed\x00\xb7\xc7\xd2\xab" + b"\x2e\xcf\xfe\xf1\x3b\x50\xa4\x91\x53\x12\x24\x38" + b"\x23\x21\x86\xf0\x03\x2f\x91\x24\x52" +) + + +class ShaFileTests(TestCase): + def test_deflated_smaller_window_buffer(self): + # zlib on some systems uses smaller buffers, + # resulting in a different header. 
+ # See https://github.com/libgit2/libgit2/pull/464 + sf = ShaFile.from_file(BytesIO(small_buffer_zlib_object)) + self.assertEqual(sf.type_name, b"tag") + self.assertEqual(sf.tagger, b" <@localhost>") + + +class CommitSerializationTests(TestCase): + def make_commit(self, **kwargs): + attrs = { + "tree": b"d80c186a03f423a81b39df39dc87fd269736ca86", + "parents": [ + b"ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd", + b"4cffe90e0a41ad3f5190079d7c8f036bde29cbe6", + ], + "author": b"James Westby ", + "committer": b"James Westby ", + "commit_time": 1174773719, + "author_time": 1174773719, + "commit_timezone": 0, + "author_timezone": 0, + "message": b"Merge ../b\n", + } + attrs.update(kwargs) + return make_commit(**attrs) + + def test_encoding(self): + c = self.make_commit(encoding=b"iso8859-1") + self.assertIn(b"encoding iso8859-1\n", c.as_raw_string()) + + def test_short_timestamp(self): + c = self.make_commit(commit_time=30) + c1 = Commit() + c1.set_raw_string(c.as_raw_string()) + self.assertEqual(30, c1.commit_time) + + def test_full_tree(self): + c = self.make_commit(commit_time=30) + t = Tree() + t.add(b"data-x", 0o644, Blob().id) + c.tree = t + c1 = Commit() + c1.set_raw_string(c.as_raw_string()) + self.assertEqual(t.id, c1.tree) + self.assertEqual(c.as_raw_string(), c1.as_raw_string()) + + def test_raw_length(self): + c = self.make_commit() + self.assertEqual(len(c.as_raw_string()), c.raw_length()) + + def test_simple(self): + c = self.make_commit() + self.assertEqual(c.id, b"5dac377bdded4c9aeb8dff595f0faeebcc8498cc") + self.assertEqual( + b"tree d80c186a03f423a81b39df39dc87fd269736ca86\n" + b"parent ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd\n" + b"parent 4cffe90e0a41ad3f5190079d7c8f036bde29cbe6\n" + b"author James Westby " + b"1174773719 +0000\n" + b"committer James Westby " + b"1174773719 +0000\n" + b"\n" + b"Merge ../b\n", + c.as_raw_string(), + ) + + def test_timezone(self): + c = self.make_commit(commit_timezone=(5 * 60)) + self.assertIn(b" +0005\n", c.as_raw_string()) + + def test_neg_timezone(self): + c = self.make_commit(commit_timezone=(-1 * 3600)) + self.assertIn(b" -0100\n", c.as_raw_string()) + + def test_deserialize(self): + c = self.make_commit() + d = Commit() + d._deserialize(c.as_raw_chunks()) + self.assertEqual(c, d) + + def test_serialize_gpgsig(self): + commit = self.make_commit( + gpgsig=b"""-----BEGIN PGP SIGNATURE----- +Version: GnuPG v1 + +iQIcBAABCgAGBQJULCdfAAoJEACAbyvXKaRXuKwP/RyP9PA49uAvu8tQVCC/uBa8 +vi975+xvO14R8Pp8k2nps7lSxCdtCd+xVT1VRHs0wNhOZo2YCVoU1HATkPejqSeV +NScTHcxnk4/+bxyfk14xvJkNp7FlQ3npmBkA+lbV0Ubr33rvtIE5jiJPyz+SgWAg +xdBG2TojV0squj00GoH/euK6aX7GgZtwdtpTv44haCQdSuPGDcI4TORqR6YSqvy3 +GPE+3ZqXPFFb+KILtimkxitdwB7CpwmNse2vE3rONSwTvi8nq3ZoQYNY73CQGkUy +qoFU0pDtw87U3niFin1ZccDgH0bB6624sLViqrjcbYJeg815Htsu4rmzVaZADEVC +XhIO4MThebusdk0AcNGjgpf3HRHk0DPMDDlIjm+Oao0cqovvF6VyYmcb0C+RmhJj +dodLXMNmbqErwTk3zEkW0yZvNIYXH7m9SokPCZa4eeIM7be62X6h1mbt0/IU6Th+ +v18fS0iTMP/Viug5und+05C/v04kgDo0CPphAbXwWMnkE4B6Tl9sdyUYXtvQsL7x +0+WP1gL27ANqNZiI07Kz/BhbBAQI/+2TFT7oGr0AnFPQ5jHp+3GpUf6OKuT1wT3H +ND189UFuRuubxb42vZhpcXRbqJVWnbECTKVUPsGZqat3enQUB63uM4i6/RdONDZA +fDeF1m4qYs+cUXKNUZ03 +=X6RT +-----END PGP SIGNATURE-----""" + ) + self.maxDiff = None + self.assertEqual( + b"""\ +tree d80c186a03f423a81b39df39dc87fd269736ca86 +parent ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd +parent 4cffe90e0a41ad3f5190079d7c8f036bde29cbe6 +author James Westby 1174773719 +0000 +committer James Westby 1174773719 +0000 +gpgsig -----BEGIN PGP SIGNATURE----- + Version: GnuPG v1 + + 
iQIcBAABCgAGBQJULCdfAAoJEACAbyvXKaRXuKwP/RyP9PA49uAvu8tQVCC/uBa8 + vi975+xvO14R8Pp8k2nps7lSxCdtCd+xVT1VRHs0wNhOZo2YCVoU1HATkPejqSeV + NScTHcxnk4/+bxyfk14xvJkNp7FlQ3npmBkA+lbV0Ubr33rvtIE5jiJPyz+SgWAg + xdBG2TojV0squj00GoH/euK6aX7GgZtwdtpTv44haCQdSuPGDcI4TORqR6YSqvy3 + GPE+3ZqXPFFb+KILtimkxitdwB7CpwmNse2vE3rONSwTvi8nq3ZoQYNY73CQGkUy + qoFU0pDtw87U3niFin1ZccDgH0bB6624sLViqrjcbYJeg815Htsu4rmzVaZADEVC + XhIO4MThebusdk0AcNGjgpf3HRHk0DPMDDlIjm+Oao0cqovvF6VyYmcb0C+RmhJj + dodLXMNmbqErwTk3zEkW0yZvNIYXH7m9SokPCZa4eeIM7be62X6h1mbt0/IU6Th+ + v18fS0iTMP/Viug5und+05C/v04kgDo0CPphAbXwWMnkE4B6Tl9sdyUYXtvQsL7x + 0+WP1gL27ANqNZiI07Kz/BhbBAQI/+2TFT7oGr0AnFPQ5jHp+3GpUf6OKuT1wT3H + ND189UFuRuubxb42vZhpcXRbqJVWnbECTKVUPsGZqat3enQUB63uM4i6/RdONDZA + fDeF1m4qYs+cUXKNUZ03 + =X6RT + -----END PGP SIGNATURE----- + +Merge ../b +""", + commit.as_raw_string(), + ) + + def test_serialize_mergetag(self): + tag = make_object( + Tag, + object=(Commit, b"a38d6181ff27824c79fc7df825164a212eff6a3f"), + object_type_name=b"commit", + name=b"v2.6.22-rc7", + tag_time=1183319674, + tag_timezone=0, + tagger=b"Linus Torvalds ", + message=default_message, + ) + commit = self.make_commit(mergetag=[tag]) + + self.assertEqual( + b"""tree d80c186a03f423a81b39df39dc87fd269736ca86 +parent ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd +parent 4cffe90e0a41ad3f5190079d7c8f036bde29cbe6 +author James Westby 1174773719 +0000 +committer James Westby 1174773719 +0000 +mergetag object a38d6181ff27824c79fc7df825164a212eff6a3f + type commit + tag v2.6.22-rc7 + tagger Linus Torvalds 1183319674 +0000 + + Linux 2.6.22-rc7 + -----BEGIN PGP SIGNATURE----- + Version: GnuPG v1.4.7 (GNU/Linux) + + iD8DBQBGiAaAF3YsRnbiHLsRAitMAKCiLboJkQECM/jpYsY3WPfvUgLXkACgg3ql + OK2XeQOiEeXtT76rV4t2WR4= + =ivrA + -----END PGP SIGNATURE----- + +Merge ../b +""", + commit.as_raw_string(), + ) + + def test_serialize_mergetags(self): + tag = make_object( + Tag, + object=(Commit, b"a38d6181ff27824c79fc7df825164a212eff6a3f"), + object_type_name=b"commit", + name=b"v2.6.22-rc7", + tag_time=1183319674, + tag_timezone=0, + tagger=b"Linus Torvalds ", + message=default_message, + ) + commit = self.make_commit(mergetag=[tag, tag]) + + self.assertEqual( + b"""tree d80c186a03f423a81b39df39dc87fd269736ca86 +parent ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd +parent 4cffe90e0a41ad3f5190079d7c8f036bde29cbe6 +author James Westby 1174773719 +0000 +committer James Westby 1174773719 +0000 +mergetag object a38d6181ff27824c79fc7df825164a212eff6a3f + type commit + tag v2.6.22-rc7 + tagger Linus Torvalds 1183319674 +0000 + + Linux 2.6.22-rc7 + -----BEGIN PGP SIGNATURE----- + Version: GnuPG v1.4.7 (GNU/Linux) + + iD8DBQBGiAaAF3YsRnbiHLsRAitMAKCiLboJkQECM/jpYsY3WPfvUgLXkACgg3ql + OK2XeQOiEeXtT76rV4t2WR4= + =ivrA + -----END PGP SIGNATURE----- +mergetag object a38d6181ff27824c79fc7df825164a212eff6a3f + type commit + tag v2.6.22-rc7 + tagger Linus Torvalds 1183319674 +0000 + + Linux 2.6.22-rc7 + -----BEGIN PGP SIGNATURE----- + Version: GnuPG v1.4.7 (GNU/Linux) + + iD8DBQBGiAaAF3YsRnbiHLsRAitMAKCiLboJkQECM/jpYsY3WPfvUgLXkACgg3ql + OK2XeQOiEeXtT76rV4t2WR4= + =ivrA + -----END PGP SIGNATURE----- + +Merge ../b +""", + commit.as_raw_string(), + ) + + def test_deserialize_mergetag(self): + tag = make_object( + Tag, + object=(Commit, b"a38d6181ff27824c79fc7df825164a212eff6a3f"), + object_type_name=b"commit", + name=b"v2.6.22-rc7", + tag_time=1183319674, + tag_timezone=0, + tagger=b"Linus Torvalds ", + message=default_message, + ) + commit = self.make_commit(mergetag=[tag]) + + d = Commit() + 
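+        # Round-trip sketch (added for exposition): as_raw_chunks() is just
+        # the chunked form of as_raw_string(), so re-parsing it below must
+        # reproduce every field, including the nested mergetag; Commit
+        # equality then compares the two objects by content.
+        assert b"".join(commit.as_raw_chunks()) == commit.as_raw_string()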
d._deserialize(commit.as_raw_chunks()) + self.assertEqual(commit, d) + + def test_deserialize_mergetags(self): + tag = make_object( + Tag, + object=(Commit, b"a38d6181ff27824c79fc7df825164a212eff6a3f"), + object_type_name=b"commit", + name=b"v2.6.22-rc7", + tag_time=1183319674, + tag_timezone=0, + tagger=b"Linus Torvalds ", + message=default_message, + ) + commit = self.make_commit(mergetag=[tag, tag]) + + d = Commit() + d._deserialize(commit.as_raw_chunks()) + self.assertEqual(commit, d) + + +default_committer = b"James Westby 1174773719 +0000" + + +class CommitParseTests(ShaFileCheckTests): + def make_commit_lines( + self, + tree=b"d80c186a03f423a81b39df39dc87fd269736ca86", + parents=[ + b"ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd", + b"4cffe90e0a41ad3f5190079d7c8f036bde29cbe6", + ], + author=default_committer, + committer=default_committer, + encoding=None, + message=b"Merge ../b\n", + extra=None, + ): + lines = [] + if tree is not None: + lines.append(b"tree " + tree) + if parents is not None: + lines.extend(b"parent " + p for p in parents) + if author is not None: + lines.append(b"author " + author) + if committer is not None: + lines.append(b"committer " + committer) + if encoding is not None: + lines.append(b"encoding " + encoding) + if extra is not None: + for name, value in sorted(extra.items()): + lines.append(name + b" " + value) + lines.append(b"") + if message is not None: + lines.append(message) + return lines + + def make_commit_text(self, **kwargs): + return b"\n".join(self.make_commit_lines(**kwargs)) + + def test_simple(self): + c = Commit.from_string(self.make_commit_text()) + self.assertEqual(b"Merge ../b\n", c.message) + self.assertEqual(b"James Westby ", c.author) + self.assertEqual(b"James Westby ", c.committer) + self.assertEqual(b"d80c186a03f423a81b39df39dc87fd269736ca86", c.tree) + self.assertEqual( + [ + b"ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd", + b"4cffe90e0a41ad3f5190079d7c8f036bde29cbe6", + ], + c.parents, + ) + expected_time = datetime.datetime(2007, 3, 24, 22, 1, 59) + self.assertEqual( + expected_time, datetime.datetime.utcfromtimestamp(c.commit_time) + ) + self.assertEqual(0, c.commit_timezone) + self.assertEqual( + expected_time, datetime.datetime.utcfromtimestamp(c.author_time) + ) + self.assertEqual(0, c.author_timezone) + self.assertEqual(None, c.encoding) + + def test_custom(self): + c = Commit.from_string(self.make_commit_text(extra={b"extra-field": b"data"})) + self.assertEqual([(b"extra-field", b"data")], c.extra) + + def test_encoding(self): + c = Commit.from_string(self.make_commit_text(encoding=b"UTF-8")) + self.assertEqual(b"UTF-8", c.encoding) + + def test_check(self): + self.assertCheckSucceeds(Commit, self.make_commit_text()) + self.assertCheckSucceeds(Commit, self.make_commit_text(parents=None)) + self.assertCheckSucceeds(Commit, self.make_commit_text(encoding=b"UTF-8")) + + self.assertCheckFails(Commit, self.make_commit_text(tree=b"xxx")) + self.assertCheckFails(Commit, self.make_commit_text(parents=[a_sha, b"xxx"])) + bad_committer = b"some guy without an email address 1174773719 +0000" + self.assertCheckFails(Commit, self.make_commit_text(committer=bad_committer)) + self.assertCheckFails(Commit, self.make_commit_text(author=bad_committer)) + self.assertCheckFails(Commit, self.make_commit_text(author=None)) + self.assertCheckFails(Commit, self.make_commit_text(committer=None)) + self.assertCheckFails( + Commit, self.make_commit_text(author=None, committer=None) + ) + + def test_check_duplicates(self): + # duplicate each of the header 
fields + for i in range(5): + lines = self.make_commit_lines(parents=[a_sha], encoding=b"UTF-8") + lines.insert(i, lines[i]) + text = b"\n".join(lines) + if lines[i].startswith(b"parent"): + # duplicate parents are ok for now + self.assertCheckSucceeds(Commit, text) + else: + self.assertCheckFails(Commit, text) + + def test_check_order(self): + lines = self.make_commit_lines(parents=[a_sha], encoding=b"UTF-8") + headers = lines[:5] + rest = lines[5:] + # of all possible permutations, ensure only the original succeeds + for perm in permutations(headers): + perm = list(perm) + text = b"\n".join(perm + rest) + if perm == headers: + self.assertCheckSucceeds(Commit, text) + else: + self.assertCheckFails(Commit, text) + + def test_check_commit_with_unparseable_time(self): + identity_with_wrong_time = ( + b"Igor Sysoev 18446743887488505614+42707004" + ) + + # Those fail at reading time + self.assertCheckFails( + Commit, + self.make_commit_text( + author=default_committer, committer=identity_with_wrong_time + ), + ) + self.assertCheckFails( + Commit, + self.make_commit_text( + author=identity_with_wrong_time, committer=default_committer + ), + ) + + def test_check_commit_with_overflow_date(self): + """Date with overflow should raise an ObjectFormatException when checked""" + identity_with_wrong_time = ( + b"Igor Sysoev 18446743887488505614 +42707004" + ) + commit0 = Commit.from_string( + self.make_commit_text( + author=identity_with_wrong_time, committer=default_committer + ) + ) + commit1 = Commit.from_string( + self.make_commit_text( + author=default_committer, committer=identity_with_wrong_time + ) + ) + + # Those fails when triggering the check() method + for commit in [commit0, commit1]: + with self.assertRaises(ObjectFormatException): + commit.check() + + def test_mangled_author_line(self): + """Mangled author line should successfully parse""" + author_line = ( + b'Karl MacMillan <"Karl MacMillan ' + b'"> 1197475547 -0500' + ) + expected_identity = ( + b'Karl MacMillan <"Karl MacMillan ' + b'">' + ) + commit = Commit.from_string(self.make_commit_text(author=author_line)) + + # The commit parses properly + self.assertEqual(commit.author, expected_identity) + + # But the check fails because the author identity is bogus + with self.assertRaises(ObjectFormatException): + commit.check() + + def test_parse_gpgsig(self): + c = Commit.from_string( + b"""tree aaff74984cccd156a469afa7d9ab10e4777beb24 +author Jelmer Vernooij 1412179807 +0200 +committer Jelmer Vernooij 1412179807 +0200 +gpgsig -----BEGIN PGP SIGNATURE----- + Version: GnuPG v1 + + iQIcBAABCgAGBQJULCdfAAoJEACAbyvXKaRXuKwP/RyP9PA49uAvu8tQVCC/uBa8 + vi975+xvO14R8Pp8k2nps7lSxCdtCd+xVT1VRHs0wNhOZo2YCVoU1HATkPejqSeV + NScTHcxnk4/+bxyfk14xvJkNp7FlQ3npmBkA+lbV0Ubr33rvtIE5jiJPyz+SgWAg + xdBG2TojV0squj00GoH/euK6aX7GgZtwdtpTv44haCQdSuPGDcI4TORqR6YSqvy3 + GPE+3ZqXPFFb+KILtimkxitdwB7CpwmNse2vE3rONSwTvi8nq3ZoQYNY73CQGkUy + qoFU0pDtw87U3niFin1ZccDgH0bB6624sLViqrjcbYJeg815Htsu4rmzVaZADEVC + XhIO4MThebusdk0AcNGjgpf3HRHk0DPMDDlIjm+Oao0cqovvF6VyYmcb0C+RmhJj + dodLXMNmbqErwTk3zEkW0yZvNIYXH7m9SokPCZa4eeIM7be62X6h1mbt0/IU6Th+ + v18fS0iTMP/Viug5und+05C/v04kgDo0CPphAbXwWMnkE4B6Tl9sdyUYXtvQsL7x + 0+WP1gL27ANqNZiI07Kz/BhbBAQI/+2TFT7oGr0AnFPQ5jHp+3GpUf6OKuT1wT3H + ND189UFuRuubxb42vZhpcXRbqJVWnbECTKVUPsGZqat3enQUB63uM4i6/RdONDZA + fDeF1m4qYs+cUXKNUZ03 + =X6RT + -----END PGP SIGNATURE----- + +foo +""" + ) + self.assertEqual(b"foo\n", c.message) + self.assertEqual([], c.extra) + self.assertEqual( + b"""-----BEGIN PGP SIGNATURE----- +Version: GnuPG v1 + 
+iQIcBAABCgAGBQJULCdfAAoJEACAbyvXKaRXuKwP/RyP9PA49uAvu8tQVCC/uBa8 +vi975+xvO14R8Pp8k2nps7lSxCdtCd+xVT1VRHs0wNhOZo2YCVoU1HATkPejqSeV +NScTHcxnk4/+bxyfk14xvJkNp7FlQ3npmBkA+lbV0Ubr33rvtIE5jiJPyz+SgWAg +xdBG2TojV0squj00GoH/euK6aX7GgZtwdtpTv44haCQdSuPGDcI4TORqR6YSqvy3 +GPE+3ZqXPFFb+KILtimkxitdwB7CpwmNse2vE3rONSwTvi8nq3ZoQYNY73CQGkUy +qoFU0pDtw87U3niFin1ZccDgH0bB6624sLViqrjcbYJeg815Htsu4rmzVaZADEVC +XhIO4MThebusdk0AcNGjgpf3HRHk0DPMDDlIjm+Oao0cqovvF6VyYmcb0C+RmhJj +dodLXMNmbqErwTk3zEkW0yZvNIYXH7m9SokPCZa4eeIM7be62X6h1mbt0/IU6Th+ +v18fS0iTMP/Viug5und+05C/v04kgDo0CPphAbXwWMnkE4B6Tl9sdyUYXtvQsL7x +0+WP1gL27ANqNZiI07Kz/BhbBAQI/+2TFT7oGr0AnFPQ5jHp+3GpUf6OKuT1wT3H +ND189UFuRuubxb42vZhpcXRbqJVWnbECTKVUPsGZqat3enQUB63uM4i6/RdONDZA +fDeF1m4qYs+cUXKNUZ03 +=X6RT +-----END PGP SIGNATURE-----""", + c.gpgsig, + ) + + def test_parse_header_trailing_newline(self): + c = Commit.from_string( + b"""\ +tree a7d6277f78d3ecd0230a1a5df6db00b1d9c521ac +parent c09b6dec7a73760fbdb478383a3c926b18db8bbe +author Neil Matatall 1461964057 -1000 +committer Neil Matatall 1461964057 -1000 +gpgsig -----BEGIN PGP SIGNATURE----- + + wsBcBAABCAAQBQJXI80ZCRA6pcNDcVZ70gAAarcIABs72xRX3FWeox349nh6ucJK + CtwmBTusez2Zwmq895fQEbZK7jpaGO5TRO4OvjFxlRo0E08UFx3pxZHSpj6bsFeL + hHsDXnCaotphLkbgKKRdGZo7tDqM84wuEDlh4MwNe7qlFC7bYLDyysc81ZX5lpMm + 2MFF1TvjLAzSvkT7H1LPkuR3hSvfCYhikbPOUNnKOo0sYjeJeAJ/JdAVQ4mdJIM0 + gl3REp9+A+qBEpNQI7z94Pg5Bc5xenwuDh3SJgHvJV6zBWupWcdB3fAkVd4TPnEZ + nHxksHfeNln9RKseIDcy4b2ATjhDNIJZARHNfr6oy4u3XPW4svRqtBsLoMiIeuI= + =ms6q + -----END PGP SIGNATURE----- + + +3.3.0 version bump and docs +""" + ) + self.assertEqual([], c.extra) + self.assertEqual( + b"""\ +-----BEGIN PGP SIGNATURE----- + +wsBcBAABCAAQBQJXI80ZCRA6pcNDcVZ70gAAarcIABs72xRX3FWeox349nh6ucJK +CtwmBTusez2Zwmq895fQEbZK7jpaGO5TRO4OvjFxlRo0E08UFx3pxZHSpj6bsFeL +hHsDXnCaotphLkbgKKRdGZo7tDqM84wuEDlh4MwNe7qlFC7bYLDyysc81ZX5lpMm +2MFF1TvjLAzSvkT7H1LPkuR3hSvfCYhikbPOUNnKOo0sYjeJeAJ/JdAVQ4mdJIM0 +gl3REp9+A+qBEpNQI7z94Pg5Bc5xenwuDh3SJgHvJV6zBWupWcdB3fAkVd4TPnEZ +nHxksHfeNln9RKseIDcy4b2ATjhDNIJZARHNfr6oy4u3XPW4svRqtBsLoMiIeuI= +=ms6q +-----END PGP SIGNATURE-----\n""", + c.gpgsig, + ) + + +_TREE_ITEMS = { + b"a.c": (0o100755, b"d80c186a03f423a81b39df39dc87fd269736ca86"), + b"a": (stat.S_IFDIR, b"d80c186a03f423a81b39df39dc87fd269736ca86"), + b"a/c": (stat.S_IFDIR, b"d80c186a03f423a81b39df39dc87fd269736ca86"), +} + +_SORTED_TREE_ITEMS = [ + TreeEntry(b"a.c", 0o100755, b"d80c186a03f423a81b39df39dc87fd269736ca86"), + TreeEntry(b"a", stat.S_IFDIR, b"d80c186a03f423a81b39df39dc87fd269736ca86"), + TreeEntry(b"a/c", stat.S_IFDIR, b"d80c186a03f423a81b39df39dc87fd269736ca86"), +] + + +class TreeTests(ShaFileCheckTests): + def test_add(self): + myhexsha = b"d80c186a03f423a81b39df39dc87fd269736ca86" + x = Tree() + x.add(b"myname", 0o100755, myhexsha) + self.assertEqual(x[b"myname"], (0o100755, myhexsha)) + self.assertEqual(b"100755 myname\0" + hex_to_sha(myhexsha), x.as_raw_string()) + + def test_simple(self): + myhexsha = b"d80c186a03f423a81b39df39dc87fd269736ca86" + x = Tree() + x[b"myname"] = (0o100755, myhexsha) + self.assertEqual(b"100755 myname\0" + hex_to_sha(myhexsha), x.as_raw_string()) + self.assertEqual(b"100755 myname\0" + hex_to_sha(myhexsha), bytes(x)) + + def test_tree_update_id(self): + x = Tree() + x[b"a.c"] = (0o100755, b"d80c186a03f423a81b39df39dc87fd269736ca86") + self.assertEqual(b"0c5c6bc2c081accfbc250331b19e43b904ab9cdd", x.id) + x[b"a.b"] = (stat.S_IFDIR, b"d80c186a03f423a81b39df39dc87fd269736ca86") + self.assertEqual(b"07bfcb5f3ada15bbebdfa3bbb8fd858a363925c8", 
x.id) + + def test_tree_iteritems_dir_sort(self): + x = Tree() + for name, item in _TREE_ITEMS.items(): + x[name] = item + self.assertEqual(_SORTED_TREE_ITEMS, x.items()) + + def test_tree_items_dir_sort(self): + x = Tree() + for name, item in _TREE_ITEMS.items(): + x[name] = item + self.assertEqual(_SORTED_TREE_ITEMS, x.items()) + + def _do_test_parse_tree(self, parse_tree): + dir = os.path.join(os.path.dirname(__file__), "..", "..", "testdata", "trees") + o = Tree.from_path(hex_to_filename(dir, tree_sha)) + self.assertEqual( + [(b"a", 0o100644, a_sha), (b"b", 0o100644, b_sha)], + list(parse_tree(o.as_raw_string())), + ) + # test a broken tree that has a leading 0 on the file mode + broken_tree = b"0100644 foo\0" + hex_to_sha(a_sha) + + def eval_parse_tree(*args, **kwargs): + return list(parse_tree(*args, **kwargs)) + + self.assertEqual([(b"foo", 0o100644, a_sha)], eval_parse_tree(broken_tree)) + self.assertRaises( + ObjectFormatException, eval_parse_tree, broken_tree, strict=True + ) + + test_parse_tree = functest_builder(_do_test_parse_tree, _parse_tree_py) + test_parse_tree_extension = ext_functest_builder(_do_test_parse_tree, parse_tree) + + def _do_test_sorted_tree_items(self, sorted_tree_items): + def do_sort(entries): + return list(sorted_tree_items(entries, False)) + + actual = do_sort(_TREE_ITEMS) + self.assertEqual(_SORTED_TREE_ITEMS, actual) + self.assertIsInstance(actual[0], TreeEntry) + + # C/Python implementations may differ in specific error types, but + # should all error on invalid inputs. + # For example, the C implementation has stricter type checks, so may + # raise TypeError where the Python implementation raises + # AttributeError. + errors = (TypeError, ValueError, AttributeError) + self.assertRaises(errors, do_sort, b"foo") + self.assertRaises(errors, do_sort, {b"foo": (1, 2, 3)}) + + myhexsha = b"d80c186a03f423a81b39df39dc87fd269736ca86" + self.assertRaises(errors, do_sort, {b"foo": (b"xxx", myhexsha)}) + self.assertRaises(errors, do_sort, {b"foo": (0o100755, 12345)}) + + test_sorted_tree_items = functest_builder( + _do_test_sorted_tree_items, _sorted_tree_items_py + ) + test_sorted_tree_items_extension = ext_functest_builder( + _do_test_sorted_tree_items, sorted_tree_items + ) + + def _do_test_sorted_tree_items_name_order(self, sorted_tree_items): + self.assertEqual( + [ + TreeEntry( + b"a", + stat.S_IFDIR, + b"d80c186a03f423a81b39df39dc87fd269736ca86", + ), + TreeEntry( + b"a.c", + 0o100755, + b"d80c186a03f423a81b39df39dc87fd269736ca86", + ), + TreeEntry( + b"a/c", + stat.S_IFDIR, + b"d80c186a03f423a81b39df39dc87fd269736ca86", + ), + ], + list(sorted_tree_items(_TREE_ITEMS, True)), + ) + + test_sorted_tree_items_name_order = functest_builder( + _do_test_sorted_tree_items_name_order, _sorted_tree_items_py + ) + test_sorted_tree_items_name_order_extension = ext_functest_builder( + _do_test_sorted_tree_items_name_order, sorted_tree_items + ) + + def test_check(self): + t = Tree + sha = hex_to_sha(a_sha) + + # filenames + self.assertCheckSucceeds(t, b"100644 .a\0" + sha) + self.assertCheckFails(t, b"100644 \0" + sha) + self.assertCheckFails(t, b"100644 .\0" + sha) + self.assertCheckFails(t, b"100644 a/a\0" + sha) + self.assertCheckFails(t, b"100644 ..\0" + sha) + self.assertCheckFails(t, b"100644 .git\0" + sha) + + # modes + self.assertCheckSucceeds(t, b"100644 a\0" + sha) + self.assertCheckSucceeds(t, b"100755 a\0" + sha) + self.assertCheckSucceeds(t, b"160000 a\0" + sha) + # TODO more whitelisted modes + self.assertCheckFails(t, b"123456 a\0" + sha) + 
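+        # Note (added for exposition): git whitelists only a handful of modes
+        # (regular 100644/100755, symlink 120000, gitlink 160000, tree 40000),
+        # so a merely well-formed octal number is not enough -- whereas a
+        # whitelisted symlink mode should pass:
+        self.assertCheckSucceeds(t, b"120000 a\0" + sha)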
self.assertCheckFails(t, b"123abc a\0" + sha) + # should fail check, but parses ok + self.assertCheckFails(t, b"0100644 foo\0" + sha) + + # shas + self.assertCheckFails(t, b"100644 a\0" + (b"x" * 5)) + self.assertCheckFails(t, b"100644 a\0" + (b"x" * 18) + b"\0") + self.assertCheckFails(t, b"100644 a\0" + (b"x" * 21) + b"\n100644 b\0" + sha) + + # ordering + sha2 = hex_to_sha(b_sha) + self.assertCheckSucceeds(t, b"100644 a\0" + sha + b"\n100644 b\0" + sha) + self.assertCheckSucceeds(t, b"100644 a\0" + sha + b"\n100644 b\0" + sha2) + self.assertCheckFails(t, b"100644 a\0" + sha + b"\n100755 a\0" + sha2) + self.assertCheckFails(t, b"100644 b\0" + sha2 + b"\n100644 a\0" + sha) + + def test_iter(self): + t = Tree() + t[b"foo"] = (0o100644, a_sha) + self.assertEqual(set([b"foo"]), set(t)) + + +class TagSerializeTests(TestCase): + def test_serialize_simple(self): + x = make_object( + Tag, + tagger=b"Jelmer Vernooij ", + name=b"0.1", + message=b"Tag 0.1", + object=(Blob, b"d80c186a03f423a81b39df39dc87fd269736ca86"), + tag_time=423423423, + tag_timezone=0, + ) + self.assertEqual( + ( + b"object d80c186a03f423a81b39df39dc87fd269736ca86\n" + b"type blob\n" + b"tag 0.1\n" + b"tagger Jelmer Vernooij " + b"423423423 +0000\n" + b"\n" + b"Tag 0.1" + ), + x.as_raw_string(), + ) + + def test_serialize_none_message(self): + x = make_object( + Tag, + tagger=b"Jelmer Vernooij ", + name=b"0.1", + message=None, + object=(Blob, b"d80c186a03f423a81b39df39dc87fd269736ca86"), + tag_time=423423423, + tag_timezone=0, + ) + self.assertEqual( + ( + b"object d80c186a03f423a81b39df39dc87fd269736ca86\n" + b"type blob\n" + b"tag 0.1\n" + b"tagger Jelmer Vernooij " + b"423423423 +0000\n" + ), + x.as_raw_string(), + ) + + +default_tagger = ( + b"Linus Torvalds " b"1183319674 -0700" +) +default_message = b"""Linux 2.6.22-rc7 +-----BEGIN PGP SIGNATURE----- +Version: GnuPG v1.4.7 (GNU/Linux) + +iD8DBQBGiAaAF3YsRnbiHLsRAitMAKCiLboJkQECM/jpYsY3WPfvUgLXkACgg3ql +OK2XeQOiEeXtT76rV4t2WR4= +=ivrA +-----END PGP SIGNATURE----- +""" + + +class TagParseTests(ShaFileCheckTests): + def make_tag_lines( + self, + object_sha=b"a38d6181ff27824c79fc7df825164a212eff6a3f", + object_type_name=b"commit", + name=b"v2.6.22-rc7", + tagger=default_tagger, + message=default_message, + ): + lines = [] + if object_sha is not None: + lines.append(b"object " + object_sha) + if object_type_name is not None: + lines.append(b"type " + object_type_name) + if name is not None: + lines.append(b"tag " + name) + if tagger is not None: + lines.append(b"tagger " + tagger) + if message is not None: + lines.append(b"") + lines.append(message) + return lines + + def make_tag_text(self, **kwargs): + return b"\n".join(self.make_tag_lines(**kwargs)) + + def test_parse(self): + x = Tag() + x.set_raw_string(self.make_tag_text()) + self.assertEqual( + b"Linus Torvalds ", x.tagger + ) + self.assertEqual(b"v2.6.22-rc7", x.name) + object_type, object_sha = x.object + self.assertEqual(b"a38d6181ff27824c79fc7df825164a212eff6a3f", object_sha) + self.assertEqual(Commit, object_type) + self.assertEqual( + datetime.datetime.utcfromtimestamp(x.tag_time), + datetime.datetime(2007, 7, 1, 19, 54, 34), + ) + self.assertEqual(-25200, x.tag_timezone) + + def test_parse_no_tagger(self): + x = Tag() + x.set_raw_string(self.make_tag_text(tagger=None)) + self.assertEqual(None, x.tagger) + self.assertEqual(b"v2.6.22-rc7", x.name) + self.assertEqual(None, x.tag_time) + + def test_parse_no_message(self): + x = Tag() + x.set_raw_string(self.make_tag_text(message=None)) + self.assertEqual(None, 
x.message) + self.assertEqual( + b"Linus Torvalds ", x.tagger + ) + self.assertEqual( + datetime.datetime.utcfromtimestamp(x.tag_time), + datetime.datetime(2007, 7, 1, 19, 54, 34), + ) + self.assertEqual(-25200, x.tag_timezone) + self.assertEqual(b"v2.6.22-rc7", x.name) + + def test_check(self): + self.assertCheckSucceeds(Tag, self.make_tag_text()) + self.assertCheckFails(Tag, self.make_tag_text(object_sha=None)) + self.assertCheckFails(Tag, self.make_tag_text(object_type_name=None)) + self.assertCheckFails(Tag, self.make_tag_text(name=None)) + self.assertCheckFails(Tag, self.make_tag_text(name=b"")) + self.assertCheckFails(Tag, self.make_tag_text(object_type_name=b"foobar")) + self.assertCheckFails( + Tag, + self.make_tag_text( + tagger=b"some guy without an email address 1183319674 -0700" + ), + ) + self.assertCheckFails( + Tag, + self.make_tag_text( + tagger=( + b"Linus Torvalds " + b"Sun 7 Jul 2007 12:54:34 +0700" + ) + ), + ) + self.assertCheckFails(Tag, self.make_tag_text(object_sha=b"xxx")) + + def test_check_tag_with_unparseable_field(self): + self.assertCheckFails( + Tag, + self.make_tag_text( + tagger=( + b"Linus Torvalds " + b"423423+0000" + ) + ), + ) + + def test_check_tag_with_overflow_time(self): + """Date with overflow should raise an ObjectFormatException when checked""" + author = "Some Dude %s +0000" % (MAX_TIME + 1,) + tag = Tag.from_string(self.make_tag_text(tagger=(author.encode()))) + with self.assertRaises(ObjectFormatException): + tag.check() + + def test_check_duplicates(self): + # duplicate each of the header fields + for i in range(4): + lines = self.make_tag_lines() + lines.insert(i, lines[i]) + self.assertCheckFails(Tag, b"\n".join(lines)) + + def test_check_order(self): + lines = self.make_tag_lines() + headers = lines[:4] + rest = lines[4:] + # of all possible permutations, ensure only the original succeeds + for perm in permutations(headers): + perm = list(perm) + text = b"\n".join(perm + rest) + if perm == headers: + self.assertCheckSucceeds(Tag, text) + else: + self.assertCheckFails(Tag, text) + + def test_tree_copy_after_update(self): + """Check Tree.id is correctly updated when the tree is copied after updated.""" + shas = [] + tree = Tree() + shas.append(tree.id) + tree.add(b"data", 0o644, Blob().id) + copied = tree.copy() + shas.append(tree.id) + shas.append(copied.id) + + self.assertNotIn(shas[0], shas[1:]) + self.assertEqual(shas[1], shas[2]) + + +class CheckTests(TestCase): + def test_check_hexsha(self): + check_hexsha(a_sha, "failed to check good sha") + self.assertRaises( + ObjectFormatException, check_hexsha, b"1" * 39, "sha too short" + ) + self.assertRaises( + ObjectFormatException, check_hexsha, b"1" * 41, "sha too long" + ) + self.assertRaises( + ObjectFormatException, + check_hexsha, + b"x" * 40, + "invalid characters", + ) + + def test_check_identity(self): + check_identity( + b"Dave Borowitz ", + "failed to check good identity", + ) + check_identity(b" ", "failed to check good identity") + self.assertRaises( + ObjectFormatException, check_identity, b'', 'no space before email' + ) + self.assertRaises( + ObjectFormatException, check_identity, b"Dave Borowitz", "no email" + ) + self.assertRaises( + ObjectFormatException, + check_identity, + b"Dave Borowitz ", + "incomplete email", + ) + self.assertRaises( + ObjectFormatException, + check_identity, + b"Dave Borowitz <", + "typo", + ) + self.assertRaises( + ObjectFormatException, + check_identity, + b"Dave Borowitz >", + "typo", + ) + self.assertRaises( + ObjectFormatException, + 
check_identity, + b"Dave Borowitz xxx", + "trailing characters", + ) + self.assertRaises( + ObjectFormatException, + check_identity, + b"Dave Borowitz xxx", + "trailing characters", + ) + self.assertRaises( + ObjectFormatException, + check_identity, + b'Dave', + 'reserved byte in name', + ) + self.assertRaises( + ObjectFormatException, + check_identity, + b'Dave>Borowitz ', + 'reserved byte in name', + ) + self.assertRaises( + ObjectFormatException, + check_identity, + b'Dave\0Borowitz ', + 'null byte', + ) + self.assertRaises( + ObjectFormatException, + check_identity, + b'Dave\nBorowitz ', + 'newline byte', + ) + + +class TimezoneTests(TestCase): + def test_parse_timezone_utc(self): + self.assertEqual((0, False), parse_timezone(b"+0000")) + + def test_parse_timezone_utc_negative(self): + self.assertEqual((0, True), parse_timezone(b"-0000")) + + def test_generate_timezone_utc(self): + self.assertEqual(b"+0000", format_timezone(0)) + + def test_generate_timezone_utc_negative(self): + self.assertEqual(b"-0000", format_timezone(0, True)) + + def test_parse_timezone_cet(self): + self.assertEqual((60 * 60, False), parse_timezone(b"+0100")) + + def test_format_timezone_cet(self): + self.assertEqual(b"+0100", format_timezone(60 * 60)) + + def test_format_timezone_pdt(self): + self.assertEqual(b"-0400", format_timezone(-4 * 60 * 60)) + + def test_parse_timezone_pdt(self): + self.assertEqual((-4 * 60 * 60, False), parse_timezone(b"-0400")) + + def test_format_timezone_pdt_half(self): + self.assertEqual(b"-0440", format_timezone(int(((-4 * 60) - 40) * 60))) + + def test_format_timezone_double_negative(self): + self.assertEqual(b"--700", format_timezone(int(((7 * 60)) * 60), True)) + + def test_parse_timezone_pdt_half(self): + self.assertEqual((((-4 * 60) - 40) * 60, False), parse_timezone(b"-0440")) + + def test_parse_timezone_double_negative(self): + self.assertEqual((int(((7 * 60)) * 60), False), parse_timezone(b"+700")) + self.assertEqual((int(((7 * 60)) * 60), True), parse_timezone(b"--700")) + + +class ShaFileCopyTests(TestCase): + def assert_copy(self, orig): + oclass = object_class(orig.type_num) + + copy = orig.copy() + self.assertIsInstance(copy, oclass) + self.assertEqual(copy, orig) + self.assertIsNot(copy, orig) + + def test_commit_copy(self): + attrs = { + "tree": b"d80c186a03f423a81b39df39dc87fd269736ca86", + "parents": [ + b"ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd", + b"4cffe90e0a41ad3f5190079d7c8f036bde29cbe6", + ], + "author": b"James Westby ", + "committer": b"James Westby ", + "commit_time": 1174773719, + "author_time": 1174773719, + "commit_timezone": 0, + "author_timezone": 0, + "message": b"Merge ../b\n", + } + commit = make_commit(**attrs) + self.assert_copy(commit) + + def test_blob_copy(self): + blob = make_object(Blob, data=b"i am a blob") + self.assert_copy(blob) + + def test_tree_copy(self): + blob = make_object(Blob, data=b"i am a blob") + tree = Tree() + tree[b"blob"] = (stat.S_IFREG, blob.id) + self.assert_copy(tree) + + def test_tag_copy(self): + tag = make_object( + Tag, + name=b"tag", + message=b"", + tagger=b"Tagger ", + tag_time=12345, + tag_timezone=0, + object=(Commit, b"0" * 40), + ) + self.assert_copy(tag) + + +class ShaFileSerializeTests(TestCase): + """`ShaFile` objects only gets serialized once if they haven't changed.""" + + @contextmanager + def assert_serialization_on_change( + self, obj, needs_serialization_after_change=True + ): + old_id = obj.id + self.assertFalse(obj._needs_serialization) + + yield obj + + if needs_serialization_after_change: + 
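+            # Note (added for exposition): a ShaFile caches its serialized
+            # bytes and recomputes its id lazily; mutating a tracked field
+            # flips _needs_serialization, except for Blob below, whose raw
+            # data *is* its serialization, so no re-serialization is needed.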
self.assertTrue(obj._needs_serialization) + else: + self.assertFalse(obj._needs_serialization) + new_id = obj.id + self.assertFalse(obj._needs_serialization) + self.assertNotEqual(old_id, new_id) + + def test_commit_serialize(self): + attrs = { + "tree": b"d80c186a03f423a81b39df39dc87fd269736ca86", + "parents": [ + b"ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd", + b"4cffe90e0a41ad3f5190079d7c8f036bde29cbe6", + ], + "author": b"James Westby ", + "committer": b"James Westby ", + "commit_time": 1174773719, + "author_time": 1174773719, + "commit_timezone": 0, + "author_timezone": 0, + "message": b"Merge ../b\n", + } + commit = make_commit(**attrs) + + with self.assert_serialization_on_change(commit): + commit.parents = [b"ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd"] + + def test_blob_serialize(self): + blob = make_object(Blob, data=b"i am a blob") + + with self.assert_serialization_on_change( + blob, needs_serialization_after_change=False + ): + blob.data = b"i am another blob" + + def test_tree_serialize(self): + blob = make_object(Blob, data=b"i am a blob") + tree = Tree() + tree[b"blob"] = (stat.S_IFREG, blob.id) + + with self.assert_serialization_on_change(tree): + tree[b"blob2"] = (stat.S_IFREG, blob.id) + + def test_tag_serialize(self): + tag = make_object( + Tag, + name=b"tag", + message=b"", + tagger=b"Tagger ", + tag_time=12345, + tag_timezone=0, + object=(Commit, b"0" * 40), + ) + + with self.assert_serialization_on_change(tag): + tag.message = b"new message" + + def test_tag_serialize_time_error(self): + with self.assertRaises(ObjectFormatException): + tag = make_object( + Tag, + name=b"tag", + message=b"some message", + tagger=b"Tagger 1174773719+0000", + object=(Commit, b"0" * 40), + ) + tag._deserialize(tag._serialize()) + + +class PrettyFormatTreeEntryTests(TestCase): + def test_format(self): + self.assertEqual( + "40000 tree 40820c38cfb182ce6c8b261555410d8382a5918b\tfoo\n", + pretty_format_tree_entry( + b"foo", 0o40000, b"40820c38cfb182ce6c8b261555410d8382a5918b" + ), + ) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_objectspec.py b/env/lib/python3.8/site-packages/dulwich/tests/test_objectspec.py new file mode 100644 index 00000000..22340eb6 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_objectspec.py @@ -0,0 +1,266 @@ +# test_objectspec.py -- tests for objectspec.py +# Copyright (C) 2014 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for revision spec parsing.""" + +# TODO: Round-trip parse-serialize-parse and serialize-parse-serialize tests. 
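+
+# Orientation sketch (added for exposition; the helper name is hypothetical
+# and unused by the tests below): every parse_* helper takes a repo or ref
+# container plus a text spec, resolves it, and raises KeyError on a miss.
+def _example_resolve_commit(repo, spec):
+    from dulwich.objectspec import parse_commit  # full/short SHA or ref name
+    try:
+        return parse_commit(repo, spec)
+    except KeyError:
+        return None
+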
+ + +from dulwich.objects import ( + Blob, +) +from dulwich.objectspec import ( + parse_object, + parse_commit, + parse_commit_range, + parse_ref, + parse_refs, + parse_reftuple, + parse_reftuples, + parse_tree, +) +from dulwich.repo import MemoryRepo +from dulwich.tests import ( + TestCase, +) +from dulwich.tests.utils import ( + build_commit_graph, +) + + +class ParseObjectTests(TestCase): + """Test parse_object.""" + + def test_nonexistent(self): + r = MemoryRepo() + self.assertRaises(KeyError, parse_object, r, "thisdoesnotexist") + + def test_blob_by_sha(self): + r = MemoryRepo() + b = Blob.from_string(b"Blah") + r.object_store.add_object(b) + self.assertEqual(b, parse_object(r, b.id)) + + +class ParseCommitRangeTests(TestCase): + """Test parse_commit_range.""" + + def test_nonexistent(self): + r = MemoryRepo() + self.assertRaises(KeyError, parse_commit_range, r, "thisdoesnotexist") + + def test_commit_by_sha(self): + r = MemoryRepo() + c1, c2, c3 = build_commit_graph(r.object_store, [[1], [2, 1], [3, 1, 2]]) + self.assertEqual([c1], list(parse_commit_range(r, c1.id))) + + +class ParseCommitTests(TestCase): + """Test parse_commit.""" + + def test_nonexistent(self): + r = MemoryRepo() + self.assertRaises(KeyError, parse_commit, r, "thisdoesnotexist") + + def test_commit_by_sha(self): + r = MemoryRepo() + [c1] = build_commit_graph(r.object_store, [[1]]) + self.assertEqual(c1, parse_commit(r, c1.id)) + + def test_commit_by_short_sha(self): + r = MemoryRepo() + [c1] = build_commit_graph(r.object_store, [[1]]) + self.assertEqual(c1, parse_commit(r, c1.id[:10])) + + +class ParseRefTests(TestCase): + def test_nonexistent(self): + r = {} + self.assertRaises(KeyError, parse_ref, r, b"thisdoesnotexist") + + def test_ambiguous_ref(self): + r = { + b"ambig1": "bla", + b"refs/ambig1": "bla", + b"refs/tags/ambig1": "bla", + b"refs/heads/ambig1": "bla", + b"refs/remotes/ambig1": "bla", + b"refs/remotes/ambig1/HEAD": "bla", + } + self.assertEqual(b"ambig1", parse_ref(r, b"ambig1")) + + def test_ambiguous_ref2(self): + r = { + b"refs/ambig2": "bla", + b"refs/tags/ambig2": "bla", + b"refs/heads/ambig2": "bla", + b"refs/remotes/ambig2": "bla", + b"refs/remotes/ambig2/HEAD": "bla", + } + self.assertEqual(b"refs/ambig2", parse_ref(r, b"ambig2")) + + def test_ambiguous_tag(self): + r = { + b"refs/tags/ambig3": "bla", + b"refs/heads/ambig3": "bla", + b"refs/remotes/ambig3": "bla", + b"refs/remotes/ambig3/HEAD": "bla", + } + self.assertEqual(b"refs/tags/ambig3", parse_ref(r, b"ambig3")) + + def test_ambiguous_head(self): + r = { + b"refs/heads/ambig4": "bla", + b"refs/remotes/ambig4": "bla", + b"refs/remotes/ambig4/HEAD": "bla", + } + self.assertEqual(b"refs/heads/ambig4", parse_ref(r, b"ambig4")) + + def test_ambiguous_remote(self): + r = {b"refs/remotes/ambig5": "bla", b"refs/remotes/ambig5/HEAD": "bla"} + self.assertEqual(b"refs/remotes/ambig5", parse_ref(r, b"ambig5")) + + def test_ambiguous_remote_head(self): + r = {b"refs/remotes/ambig6/HEAD": "bla"} + self.assertEqual(b"refs/remotes/ambig6/HEAD", parse_ref(r, b"ambig6")) + + def test_heads_full(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual(b"refs/heads/foo", parse_ref(r, b"refs/heads/foo")) + + def test_heads_partial(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual(b"refs/heads/foo", parse_ref(r, b"heads/foo")) + + def test_tags_partial(self): + r = {b"refs/tags/foo": "bla"} + self.assertEqual(b"refs/tags/foo", parse_ref(r, b"tags/foo")) + + +class ParseRefsTests(TestCase): + def test_nonexistent(self): + r = {} + 
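+        # Note (added for exposition): parse_refs applies parse_ref per name,
+        # and parse_ref walks the same candidate order git uses -- the literal
+        # name, then refs/<name>, refs/tags/<name>, refs/heads/<name>,
+        # refs/remotes/<name>, refs/remotes/<name>/HEAD -- so an empty
+        # container can only raise KeyError.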
self.assertRaises(KeyError, parse_refs, r, [b"thisdoesnotexist"]) + + def test_head(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual([b"refs/heads/foo"], parse_refs(r, [b"foo"])) + + def test_full(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual([b"refs/heads/foo"], parse_refs(r, b"refs/heads/foo")) + + +class ParseReftupleTests(TestCase): + def test_nonexistent(self): + r = {} + self.assertRaises(KeyError, parse_reftuple, r, r, b"thisdoesnotexist") + + def test_head(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual( + (b"refs/heads/foo", b"refs/heads/foo", False), + parse_reftuple(r, r, b"foo"), + ) + self.assertEqual( + (b"refs/heads/foo", b"refs/heads/foo", True), + parse_reftuple(r, r, b"+foo"), + ) + self.assertEqual( + (b"refs/heads/foo", b"refs/heads/foo", True), + parse_reftuple(r, {}, b"+foo"), + ) + self.assertEqual( + (b"refs/heads/foo", b"refs/heads/foo", True), + parse_reftuple(r, {}, b"foo", True), + ) + + def test_full(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual( + (b"refs/heads/foo", b"refs/heads/foo", False), + parse_reftuple(r, r, b"refs/heads/foo"), + ) + + def test_no_left_ref(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual( + (None, b"refs/heads/foo", False), + parse_reftuple(r, r, b":refs/heads/foo"), + ) + + def test_no_right_ref(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual( + (b"refs/heads/foo", None, False), + parse_reftuple(r, r, b"refs/heads/foo:"), + ) + + def test_default_with_string(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual( + (b"refs/heads/foo", b"refs/heads/foo", False), + parse_reftuple(r, r, "foo"), + ) + + +class ParseReftuplesTests(TestCase): + def test_nonexistent(self): + r = {} + self.assertRaises(KeyError, parse_reftuples, r, r, [b"thisdoesnotexist"]) + + def test_head(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual( + [(b"refs/heads/foo", b"refs/heads/foo", False)], + parse_reftuples(r, r, [b"foo"]), + ) + + def test_full(self): + r = {b"refs/heads/foo": "bla"} + self.assertEqual( + [(b"refs/heads/foo", b"refs/heads/foo", False)], + parse_reftuples(r, r, b"refs/heads/foo"), + ) + r = {b"refs/heads/foo": "bla"} + self.assertEqual( + [(b"refs/heads/foo", b"refs/heads/foo", True)], + parse_reftuples(r, r, b"refs/heads/foo", True), + ) + + +class ParseTreeTests(TestCase): + """Test parse_tree.""" + + def test_nonexistent(self): + r = MemoryRepo() + self.assertRaises(KeyError, parse_tree, r, "thisdoesnotexist") + + def test_from_commit(self): + r = MemoryRepo() + c1, c2, c3 = build_commit_graph(r.object_store, [[1], [2, 1], [3, 1, 2]]) + self.assertEqual(r[c1.tree], parse_tree(r, c1.id)) + self.assertEqual(r[c1.tree], parse_tree(r, c1.tree)) + + def test_from_ref(self): + r = MemoryRepo() + c1, c2, c3 = build_commit_graph(r.object_store, [[1], [2, 1], [3, 1, 2]]) + r.refs[b'refs/heads/foo'] = c1.id + self.assertEqual(r[c1.tree], parse_tree(r, b'foo')) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_pack.py b/env/lib/python3.8/site-packages/dulwich/tests/test_pack.py new file mode 100644 index 00000000..d3b2af0b --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_pack.py @@ -0,0 +1,1248 @@ +# test_pack.py -- Tests for the handling of git packs. +# Copyright (C) 2007 James Westby +# Copyright (C) 2008 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. 
You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for Dulwich packs.""" + + +from io import BytesIO +from hashlib import sha1 +import os +import shutil +import sys +import tempfile +import zlib + +from dulwich.errors import ( + ApplyDeltaError, + ChecksumMismatch, +) +from dulwich.file import ( + GitFile, +) +from dulwich.object_store import ( + MemoryObjectStore, +) +from dulwich.objects import ( + hex_to_sha, + sha_to_hex, + Commit, + Tree, + Blob, +) +from dulwich.pack import ( + OFS_DELTA, + REF_DELTA, + MemoryPackIndex, + Pack, + PackData, + apply_delta, + create_delta, + deltify_pack_objects, + load_pack_index, + UnpackedObject, + read_zlib_chunks, + write_pack_header, + write_pack_index_v1, + write_pack_index_v2, + write_pack_object, + write_pack, + unpack_object, + compute_file_sha, + PackStreamReader, + DeltaChainIterator, + _delta_encode_size, + _encode_copy_operation, +) +from dulwich.tests import ( + TestCase, +) +from dulwich.tests.utils import ( + make_object, + build_pack, +) + +pack1_sha = b"bc63ddad95e7321ee734ea11a7a62d314e0d7481" + +a_sha = b"6f670c0fb53f9463760b7295fbb814e965fb20c8" +tree_sha = b"b2a2766a2879c209ab1176e7e778b81ae422eeaa" +commit_sha = b"f18faa16531ac570a3fdc8c7ca16682548dafd12" +indexmode = "0o100644" if sys.platform != "win32" else "0o100666" + + +class PackTests(TestCase): + """Base class for testing packs""" + + def setUp(self): + super(PackTests, self).setUp() + self.tempdir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, self.tempdir) + + datadir = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../testdata/packs")) + + def get_pack_index(self, sha): + """Returns a PackIndex from the datadir with the given sha""" + return load_pack_index( + os.path.join(self.datadir, "pack-%s.idx" % sha.decode("ascii")) + ) + + def get_pack_data(self, sha): + """Returns a PackData object from the datadir with the given sha""" + return PackData( + os.path.join(self.datadir, "pack-%s.pack" % sha.decode("ascii")) + ) + + def get_pack(self, sha): + return Pack(os.path.join(self.datadir, "pack-%s" % sha.decode("ascii"))) + + def assertSucceeds(self, func, *args, **kwargs): + try: + func(*args, **kwargs) + except ChecksumMismatch as e: + self.fail(e) + + +class PackIndexTests(PackTests): + """Class that tests the index of packfiles""" + + def test_object_index(self): + """Tests that the correct object offset is returned from the index.""" + p = self.get_pack_index(pack1_sha) + self.assertRaises(KeyError, p.object_index, pack1_sha) + self.assertEqual(p.object_index(a_sha), 178) + self.assertEqual(p.object_index(tree_sha), 138) + self.assertEqual(p.object_index(commit_sha), 12) + + def test_object_sha1(self): + """Tests that the correct object offset is returned from the index.""" + p = self.get_pack_index(pack1_sha) + self.assertRaises(KeyError, p.object_sha1, 876) + self.assertEqual(p.object_sha1(178), hex_to_sha(a_sha)) + self.assertEqual(p.object_sha1(138), hex_to_sha(tree_sha)) + 
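+        # Note (added for exposition): the .idx file is one sorted
+        # (sha, offset, crc32) table, so object_index and object_sha1 are the
+        # two lookup directions over the same entries; offset 12 is the first
+        # object after the 12-byte header (b"PACK", version, object count).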
self.assertEqual(p.object_sha1(12), hex_to_sha(commit_sha)) + + def test_index_len(self): + p = self.get_pack_index(pack1_sha) + self.assertEqual(3, len(p)) + + def test_get_stored_checksum(self): + p = self.get_pack_index(pack1_sha) + self.assertEqual( + b"f2848e2ad16f329ae1c92e3b95e91888daa5bd01", + sha_to_hex(p.get_stored_checksum()), + ) + self.assertEqual( + b"721980e866af9a5f93ad674144e1459b8ba3e7b7", + sha_to_hex(p.get_pack_checksum()), + ) + + def test_index_check(self): + p = self.get_pack_index(pack1_sha) + self.assertSucceeds(p.check) + + def test_iterentries(self): + p = self.get_pack_index(pack1_sha) + entries = [(sha_to_hex(s), o, c) for s, o, c in p.iterentries()] + self.assertEqual( + [ + (b"6f670c0fb53f9463760b7295fbb814e965fb20c8", 178, None), + (b"b2a2766a2879c209ab1176e7e778b81ae422eeaa", 138, None), + (b"f18faa16531ac570a3fdc8c7ca16682548dafd12", 12, None), + ], + entries, + ) + + def test_iter(self): + p = self.get_pack_index(pack1_sha) + self.assertEqual(set([tree_sha, commit_sha, a_sha]), set(p)) + + +class TestPackDeltas(TestCase): + + test_string1 = b"The answer was flailing in the wind" + test_string2 = b"The answer was falling down the pipe" + test_string3 = b"zzzzz" + + test_string_empty = b"" + test_string_big = b"Z" * 8192 + test_string_huge = b"Z" * 100000 + + def _test_roundtrip(self, base, target): + self.assertEqual( + target, + b"".join(apply_delta(base, list(create_delta(base, target)))) + ) + + def test_nochange(self): + self._test_roundtrip(self.test_string1, self.test_string1) + + def test_nochange_huge(self): + self._test_roundtrip(self.test_string_huge, self.test_string_huge) + + def test_change(self): + self._test_roundtrip(self.test_string1, self.test_string2) + + def test_rewrite(self): + self._test_roundtrip(self.test_string1, self.test_string3) + + def test_empty_to_big(self): + self._test_roundtrip(self.test_string_empty, self.test_string_big) + + def test_empty_to_huge(self): + self._test_roundtrip(self.test_string_empty, self.test_string_huge) + + def test_huge_copy(self): + self._test_roundtrip( + self.test_string_huge + self.test_string1, + self.test_string_huge + self.test_string2, + ) + + def test_dest_overflow(self): + self.assertRaises( + ApplyDeltaError, + apply_delta, + b"a" * 0x10000, + b"\x80\x80\x04\x80\x80\x04\x80" + b"a" * 0x10000, + ) + self.assertRaises( + ApplyDeltaError, apply_delta, b"", b"\x00\x80\x02\xb0\x11\x11" + ) + + def test_pypy_issue(self): + # Test for https://github.com/jelmer/dulwich/issues/509 / + # https://bitbucket.org/pypy/pypy/issues/2499/cpyext-pystring_asstring-doesnt-work + chunks = [ + b"tree 03207ccf58880a748188836155ceed72f03d65d6\n" + b"parent 408fbab530fd4abe49249a636a10f10f44d07a21\n" + b"author Victor Stinner " + b"1421355207 +0100\n" + b"committer Victor Stinner " + b"1421355207 +0100\n" + b"\n" + b"Backout changeset 3a06020af8cf\n" + b"\nStreamWriter: close() now clears the reference to the " + b"transport\n" + b"\nStreamWriter now raises an exception if it is closed: " + b"write(), writelines(),\n" + b"write_eof(), can_write_eof(), get_extra_info(), drain().\n" + ] + delta = [ + b"\xcd\x03\xad\x03]tree ff3c181a393d5a7270cddc01ea863818a8621ca8\n" + b"parent 20a103cc90135494162e819f98d0edfc1f1fba6b\x91]7\x0510738" + b"\x91\x99@\x0b10738 +0100\x93\x04\x01\xc9" + ] + res = apply_delta(chunks, delta) + expected = [ + b"tree ff3c181a393d5a7270cddc01ea863818a8621ca8\n" + b"parent 20a103cc90135494162e819f98d0edfc1f1fba6b", + b"\nauthor Victor Stinner 14213", + b"10738", + b" +0100\ncommitter Victor 
Stinner " b"14213", + b"10738 +0100", + b"\n\nStreamWriter: close() now clears the reference to the " + b"transport\n\n" + b"StreamWriter now raises an exception if it is closed: " + b"write(), writelines(),\n" + b"write_eof(), can_write_eof(), get_extra_info(), drain().\n", + ] + self.assertEqual(b"".join(expected), b"".join(res)) + + +class TestPackData(PackTests): + """Tests getting the data from the packfile.""" + + def test_create_pack(self): + self.get_pack_data(pack1_sha).close() + + def test_from_file(self): + path = os.path.join(self.datadir, "pack-%s.pack" % pack1_sha.decode("ascii")) + with open(path, "rb") as f: + PackData.from_file(f, os.path.getsize(path)) + + def test_pack_len(self): + with self.get_pack_data(pack1_sha) as p: + self.assertEqual(3, len(p)) + + def test_index_check(self): + with self.get_pack_data(pack1_sha) as p: + self.assertSucceeds(p.check) + + def test_iterobjects(self): + with self.get_pack_data(pack1_sha) as p: + commit_data = ( + b"tree b2a2766a2879c209ab1176e7e778b81ae422eeaa\n" + b"author James Westby " + b"1174945067 +0100\n" + b"committer James Westby " + b"1174945067 +0100\n" + b"\n" + b"Test commit\n" + ) + blob_sha = b"6f670c0fb53f9463760b7295fbb814e965fb20c8" + tree_data = b"100644 a\0" + hex_to_sha(blob_sha) + actual = [] + for offset, type_num, chunks, crc32 in p.iterobjects(): + actual.append((offset, type_num, b"".join(chunks), crc32)) + self.assertEqual( + [ + (12, 1, commit_data, 3775879613), + (138, 2, tree_data, 912998690), + (178, 3, b"test 1\n", 1373561701), + ], + actual, + ) + + def test_iterentries(self): + with self.get_pack_data(pack1_sha) as p: + entries = {(sha_to_hex(s), o, c) for s, o, c in p.iterentries()} + self.assertEqual( + set( + [ + ( + b"6f670c0fb53f9463760b7295fbb814e965fb20c8", + 178, + 1373561701, + ), + ( + b"b2a2766a2879c209ab1176e7e778b81ae422eeaa", + 138, + 912998690, + ), + ( + b"f18faa16531ac570a3fdc8c7ca16682548dafd12", + 12, + 3775879613, + ), + ] + ), + entries, + ) + + def test_create_index_v1(self): + with self.get_pack_data(pack1_sha) as p: + filename = os.path.join(self.tempdir, "v1test.idx") + p.create_index_v1(filename) + idx1 = load_pack_index(filename) + idx2 = self.get_pack_index(pack1_sha) + self.assertEqual(oct(os.stat(filename).st_mode), indexmode) + self.assertEqual(idx1, idx2) + + def test_create_index_v2(self): + with self.get_pack_data(pack1_sha) as p: + filename = os.path.join(self.tempdir, "v2test.idx") + p.create_index_v2(filename) + idx1 = load_pack_index(filename) + idx2 = self.get_pack_index(pack1_sha) + self.assertEqual(oct(os.stat(filename).st_mode), indexmode) + self.assertEqual(idx1, idx2) + + def test_compute_file_sha(self): + f = BytesIO(b"abcd1234wxyz") + self.assertEqual( + sha1(b"abcd1234wxyz").hexdigest(), compute_file_sha(f).hexdigest() + ) + self.assertEqual( + sha1(b"abcd1234wxyz").hexdigest(), + compute_file_sha(f, buffer_size=5).hexdigest(), + ) + self.assertEqual( + sha1(b"abcd1234").hexdigest(), + compute_file_sha(f, end_ofs=-4).hexdigest(), + ) + self.assertEqual( + sha1(b"1234wxyz").hexdigest(), + compute_file_sha(f, start_ofs=4).hexdigest(), + ) + self.assertEqual( + sha1(b"1234").hexdigest(), + compute_file_sha(f, start_ofs=4, end_ofs=-4).hexdigest(), + ) + + def test_compute_file_sha_short_file(self): + f = BytesIO(b"abcd1234wxyz") + self.assertRaises(AssertionError, compute_file_sha, f, end_ofs=-20) + self.assertRaises(AssertionError, compute_file_sha, f, end_ofs=20) + self.assertRaises( + AssertionError, compute_file_sha, f, start_ofs=10, end_ofs=-12 + ) + + 
+class TestPack(PackTests): + def test_len(self): + with self.get_pack(pack1_sha) as p: + self.assertEqual(3, len(p)) + + def test_contains(self): + with self.get_pack(pack1_sha) as p: + self.assertIn(tree_sha, p) + + def test_get(self): + with self.get_pack(pack1_sha) as p: + self.assertEqual(type(p[tree_sha]), Tree) + + def test_iter(self): + with self.get_pack(pack1_sha) as p: + self.assertEqual(set([tree_sha, commit_sha, a_sha]), set(p)) + + def test_iterobjects(self): + with self.get_pack(pack1_sha) as p: + expected = set([p[s] for s in [commit_sha, tree_sha, a_sha]]) + self.assertEqual(expected, set(list(p.iterobjects()))) + + def test_pack_tuples(self): + with self.get_pack(pack1_sha) as p: + tuples = p.pack_tuples() + expected = set([(p[s], None) for s in [commit_sha, tree_sha, a_sha]]) + self.assertEqual(expected, set(list(tuples))) + self.assertEqual(expected, set(list(tuples))) + self.assertEqual(3, len(tuples)) + + def test_get_object_at(self): + """Tests random access for non-delta objects""" + with self.get_pack(pack1_sha) as p: + obj = p[a_sha] + self.assertEqual(obj.type_name, b"blob") + self.assertEqual(obj.sha().hexdigest().encode("ascii"), a_sha) + obj = p[tree_sha] + self.assertEqual(obj.type_name, b"tree") + self.assertEqual(obj.sha().hexdigest().encode("ascii"), tree_sha) + obj = p[commit_sha] + self.assertEqual(obj.type_name, b"commit") + self.assertEqual(obj.sha().hexdigest().encode("ascii"), commit_sha) + + def test_copy(self): + with self.get_pack(pack1_sha) as origpack: + self.assertSucceeds(origpack.index.check) + basename = os.path.join(self.tempdir, "Elch") + write_pack(basename, origpack.pack_tuples()) + + with Pack(basename) as newpack: + self.assertEqual(origpack, newpack) + self.assertSucceeds(newpack.index.check) + self.assertEqual(origpack.name(), newpack.name()) + self.assertEqual( + origpack.index.get_pack_checksum(), + newpack.index.get_pack_checksum(), + ) + + wrong_version = origpack.index.version != newpack.index.version + orig_checksum = origpack.index.get_stored_checksum() + new_checksum = newpack.index.get_stored_checksum() + self.assertTrue(wrong_version or orig_checksum == new_checksum) + + def test_commit_obj(self): + with self.get_pack(pack1_sha) as p: + commit = p[commit_sha] + self.assertEqual(b"James Westby ", commit.author) + self.assertEqual([], commit.parents) + + def _copy_pack(self, origpack): + basename = os.path.join(self.tempdir, "somepack") + write_pack(basename, origpack.pack_tuples()) + return Pack(basename) + + def test_keep_no_message(self): + with self.get_pack(pack1_sha) as p: + p = self._copy_pack(p) + + with p: + keepfile_name = p.keep() + + # file should exist + self.assertTrue(os.path.exists(keepfile_name)) + + with open(keepfile_name, "r") as f: + buf = f.read() + self.assertEqual("", buf) + + def test_keep_message(self): + with self.get_pack(pack1_sha) as p: + p = self._copy_pack(p) + + msg = b"some message" + with p: + keepfile_name = p.keep(msg) + + # file should exist + self.assertTrue(os.path.exists(keepfile_name)) + + # and contain the right message, with a linefeed + with open(keepfile_name, "rb") as f: + buf = f.read() + self.assertEqual(msg + b"\n", buf) + + def test_name(self): + with self.get_pack(pack1_sha) as p: + self.assertEqual(pack1_sha, p.name()) + + def test_length_mismatch(self): + with self.get_pack_data(pack1_sha) as data: + index = self.get_pack_index(pack1_sha) + Pack.from_objects(data, index).check_length_and_checksum() + + data._file.seek(12) + bad_file = BytesIO() + 
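+            # Note (added for exposition): rebuild the stream behind a lying
+            # header; write_pack_header emits b"PACK", version 2, and the
+            # (deliberately wrong) object count, so the length/checksum
+            # re-verification below must notice the mismatch.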
write_pack_header(bad_file.write, 9999)
+            bad_file.write(data._file.read())
+            bad_file = BytesIO(bad_file.getvalue())
+            bad_data = PackData("", file=bad_file)
+            bad_pack = Pack.from_lazy_objects(lambda: bad_data, lambda: index)
+            self.assertRaises(AssertionError, lambda: bad_pack.data)
+            self.assertRaises(
+                AssertionError, bad_pack.check_length_and_checksum
+            )
+
+    def test_checksum_mismatch(self):
+        with self.get_pack_data(pack1_sha) as data:
+            index = self.get_pack_index(pack1_sha)
+            Pack.from_objects(data, index).check_length_and_checksum()
+
+            data._file.seek(0)
+            bad_file = BytesIO(data._file.read()[:-20] + (b"\xff" * 20))
+            bad_data = PackData("", file=bad_file)
+            bad_pack = Pack.from_lazy_objects(lambda: bad_data, lambda: index)
+            self.assertRaises(ChecksumMismatch, lambda: bad_pack.data)
+            self.assertRaises(
+                ChecksumMismatch, bad_pack.check_length_and_checksum
+            )
+
+    def test_iterobjects_2(self):
+        with self.get_pack(pack1_sha) as p:
+            objs = {o.id: o for o in p.iterobjects()}
+            self.assertEqual(3, len(objs))
+            self.assertEqual(sorted(objs), sorted(p.index))
+            self.assertIsInstance(objs[a_sha], Blob)
+            self.assertIsInstance(objs[tree_sha], Tree)
+            self.assertIsInstance(objs[commit_sha], Commit)
+
+
+class TestThinPack(PackTests):
+    def setUp(self):
+        super(TestThinPack, self).setUp()
+        self.store = MemoryObjectStore()
+        self.blobs = {}
+        for blob in (b"foo", b"bar", b"foo1234", b"bar2468"):
+            self.blobs[blob] = make_object(Blob, data=blob)
+        self.store.add_object(self.blobs[b"foo"])
+        self.store.add_object(self.blobs[b"bar"])
+
+        # Build a thin pack. 'foo' is an external reference, 'bar' an
+        # internal reference.
+        self.pack_dir = tempfile.mkdtemp()
+        self.addCleanup(shutil.rmtree, self.pack_dir)
+        self.pack_prefix = os.path.join(self.pack_dir, "pack")
+
+        with open(self.pack_prefix + ".pack", "wb") as f:
+            build_pack(
+                f,
+                [
+                    (REF_DELTA, (self.blobs[b"foo"].id, b"foo1234")),
+                    (Blob.type_num, b"bar"),
+                    (REF_DELTA, (self.blobs[b"bar"].id, b"bar2468")),
+                ],
+                store=self.store,
+            )
+
+        # Index the new pack.
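+        # The index cannot be built from the thin pack alone: the REF_DELTA
+        # against 'foo' refers to an object outside the pack, so
+        # resolve_ext_ref is passed through to fetch that base from the store.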
+ with self.make_pack(True) as pack: + with PackData(pack._data_path) as data: + data.create_index( + self.pack_prefix + ".idx", resolve_ext_ref=pack.resolve_ext_ref) + + del self.store[self.blobs[b"bar"].id] + + def make_pack(self, resolve_ext_ref): + return Pack( + self.pack_prefix, + resolve_ext_ref=self.store.get_raw if resolve_ext_ref else None, + ) + + def test_get_raw(self): + with self.make_pack(False) as p: + self.assertRaises(KeyError, p.get_raw, self.blobs[b"foo1234"].id) + with self.make_pack(True) as p: + self.assertEqual((3, b"foo1234"), p.get_raw(self.blobs[b"foo1234"].id)) + + def test_get_raw_unresolved(self): + with self.make_pack(False) as p: + self.assertEqual( + ( + 7, + b"\x19\x10(\x15f=#\xf8\xb7ZG\xe7\xa0\x19e\xdc\xdc\x96F\x8c", + [b"x\x9ccf\x9f\xc0\xccbhdl\x02\x00\x06f\x01l"], + ), + p.get_raw_unresolved(self.blobs[b"foo1234"].id), + ) + with self.make_pack(True) as p: + self.assertEqual( + ( + 7, + b"\x19\x10(\x15f=#\xf8\xb7ZG\xe7\xa0\x19e\xdc\xdc\x96F\x8c", + [b"x\x9ccf\x9f\xc0\xccbhdl\x02\x00\x06f\x01l"], + ), + p.get_raw_unresolved(self.blobs[b"foo1234"].id), + ) + + def test_iterobjects(self): + with self.make_pack(False) as p: + self.assertRaises(KeyError, list, p.iterobjects()) + with self.make_pack(True) as p: + self.assertEqual( + sorted( + [ + self.blobs[b"foo1234"].id, + self.blobs[b"bar"].id, + self.blobs[b"bar2468"].id, + ] + ), + sorted(o.id for o in p.iterobjects()), + ) + + +class WritePackTests(TestCase): + def test_write_pack_header(self): + f = BytesIO() + write_pack_header(f.write, 42) + self.assertEqual(b"PACK\x00\x00\x00\x02\x00\x00\x00*", f.getvalue()) + + def test_write_pack_object(self): + f = BytesIO() + f.write(b"header") + offset = f.tell() + crc32 = write_pack_object(f.write, Blob.type_num, b"blob") + self.assertEqual(crc32, zlib.crc32(f.getvalue()[6:]) & 0xFFFFFFFF) + + f.write(b"x") # unpack_object needs extra trailing data. 
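+        # unpack_object may read past the end of the zlib stream, returning
+        # any surplus bytes; the b"x" sentinel is asserted as `unused` below.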
+ f.seek(offset) + unpacked, unused = unpack_object(f.read, compute_crc32=True) + self.assertEqual(Blob.type_num, unpacked.pack_type_num) + self.assertEqual(Blob.type_num, unpacked.obj_type_num) + self.assertEqual([b"blob"], unpacked.decomp_chunks) + self.assertEqual(crc32, unpacked.crc32) + self.assertEqual(b"x", unused) + + def test_write_pack_object_sha(self): + f = BytesIO() + f.write(b"header") + offset = f.tell() + sha_a = sha1(b"foo") + sha_b = sha_a.copy() + write_pack_object(f.write, Blob.type_num, b"blob", sha=sha_a) + self.assertNotEqual(sha_a.digest(), sha_b.digest()) + sha_b.update(f.getvalue()[offset:]) + self.assertEqual(sha_a.digest(), sha_b.digest()) + + def test_write_pack_object_compression_level(self): + f = BytesIO() + f.write(b"header") + offset = f.tell() + sha_a = sha1(b"foo") + sha_b = sha_a.copy() + write_pack_object(f.write, Blob.type_num, b"blob", sha=sha_a, compression_level=6) + self.assertNotEqual(sha_a.digest(), sha_b.digest()) + sha_b.update(f.getvalue()[offset:]) + self.assertEqual(sha_a.digest(), sha_b.digest()) + + +pack_checksum = hex_to_sha("721980e866af9a5f93ad674144e1459b8ba3e7b7") + + +class BaseTestPackIndexWriting(object): + def assertSucceeds(self, func, *args, **kwargs): + try: + func(*args, **kwargs) + except ChecksumMismatch as e: + self.fail(e) + + def index(self, filename, entries, pack_checksum): + raise NotImplementedError(self.index) + + def test_empty(self): + idx = self.index("empty.idx", [], pack_checksum) + self.assertEqual(idx.get_pack_checksum(), pack_checksum) + self.assertEqual(0, len(idx)) + + def test_large(self): + entry1_sha = hex_to_sha("4e6388232ec39792661e2e75db8fb117fc869ce6") + entry2_sha = hex_to_sha("e98f071751bd77f59967bfa671cd2caebdccc9a2") + entries = [ + (entry1_sha, 0xF2972D0830529B87, 24), + (entry2_sha, (~0xF2972D0830529B87) & (2 ** 64 - 1), 92), + ] + if not self._supports_large: + self.assertRaises( + TypeError, self.index, "single.idx", entries, pack_checksum + ) + return + idx = self.index("single.idx", entries, pack_checksum) + self.assertEqual(idx.get_pack_checksum(), pack_checksum) + self.assertEqual(2, len(idx)) + actual_entries = list(idx.iterentries()) + self.assertEqual(len(entries), len(actual_entries)) + for mine, actual in zip(entries, actual_entries): + my_sha, my_offset, my_crc = mine + actual_sha, actual_offset, actual_crc = actual + self.assertEqual(my_sha, actual_sha) + self.assertEqual(my_offset, actual_offset) + if self._has_crc32_checksum: + self.assertEqual(my_crc, actual_crc) + else: + self.assertIsNone(actual_crc) + + def test_single(self): + entry_sha = hex_to_sha("6f670c0fb53f9463760b7295fbb814e965fb20c8") + my_entries = [(entry_sha, 178, 42)] + idx = self.index("single.idx", my_entries, pack_checksum) + self.assertEqual(idx.get_pack_checksum(), pack_checksum) + self.assertEqual(1, len(idx)) + actual_entries = list(idx.iterentries()) + self.assertEqual(len(my_entries), len(actual_entries)) + for mine, actual in zip(my_entries, actual_entries): + my_sha, my_offset, my_crc = mine + actual_sha, actual_offset, actual_crc = actual + self.assertEqual(my_sha, actual_sha) + self.assertEqual(my_offset, actual_offset) + if self._has_crc32_checksum: + self.assertEqual(my_crc, actual_crc) + else: + self.assertIsNone(actual_crc) + + +class BaseTestFilePackIndexWriting(BaseTestPackIndexWriting): + def setUp(self): + self.tempdir = tempfile.mkdtemp() + + def tearDown(self): + shutil.rmtree(self.tempdir) + + def index(self, filename, entries, pack_checksum): + path = os.path.join(self.tempdir, 
filename)
+        self.writeIndex(path, entries, pack_checksum)
+        idx = load_pack_index(path)
+        self.assertSucceeds(idx.check)
+        self.assertEqual(idx.version, self._expected_version)
+        return idx
+
+    def writeIndex(self, filename, entries, pack_checksum):
+        # FIXME: Write to a BytesIO instead of hitting disk?
+        with GitFile(filename, "wb") as f:
+            self._write_fn(f, entries, pack_checksum)
+
+
+class TestMemoryIndexWriting(TestCase, BaseTestPackIndexWriting):
+    def setUp(self):
+        TestCase.setUp(self)
+        self._has_crc32_checksum = True
+        self._supports_large = True
+
+    def index(self, filename, entries, pack_checksum):
+        return MemoryPackIndex(entries, pack_checksum)
+
+    def tearDown(self):
+        TestCase.tearDown(self)
+
+
+class TestPackIndexWritingv1(TestCase, BaseTestFilePackIndexWriting):
+    def setUp(self):
+        TestCase.setUp(self)
+        BaseTestFilePackIndexWriting.setUp(self)
+        self._has_crc32_checksum = False
+        self._expected_version = 1
+        self._supports_large = False
+        self._write_fn = write_pack_index_v1
+
+    def tearDown(self):
+        TestCase.tearDown(self)
+        BaseTestFilePackIndexWriting.tearDown(self)
+
+
+class TestPackIndexWritingv2(TestCase, BaseTestFilePackIndexWriting):
+    def setUp(self):
+        TestCase.setUp(self)
+        BaseTestFilePackIndexWriting.setUp(self)
+        self._has_crc32_checksum = True
+        self._supports_large = True
+        self._expected_version = 2
+        self._write_fn = write_pack_index_v2
+
+    def tearDown(self):
+        TestCase.tearDown(self)
+        BaseTestFilePackIndexWriting.tearDown(self)
+
+
+class ReadZlibTests(TestCase):
+
+    decomp = (
+        b"tree 4ada885c9196b6b6fa08744b5862bf92896fc002\n"
+        b"parent None\n"
+        b"author Jelmer Vernooij <jelmer@samba.org> 1228980214 +0000\n"
+        b"committer Jelmer Vernooij <jelmer@samba.org> 1228980214 +0000\n"
+        b"\n"
+        b"Provide replacement for mmap()'s offset argument."
+ ) + comp = zlib.compress(decomp) + extra = b"nextobject" + + def setUp(self): + super(ReadZlibTests, self).setUp() + self.read = BytesIO(self.comp + self.extra).read + self.unpacked = UnpackedObject(Tree.type_num, None, len(self.decomp), 0) + + def test_decompress_size(self): + good_decomp_len = len(self.decomp) + self.unpacked.decomp_len = -1 + self.assertRaises(ValueError, read_zlib_chunks, self.read, self.unpacked) + self.unpacked.decomp_len = good_decomp_len - 1 + self.assertRaises(zlib.error, read_zlib_chunks, self.read, self.unpacked) + self.unpacked.decomp_len = good_decomp_len + 1 + self.assertRaises(zlib.error, read_zlib_chunks, self.read, self.unpacked) + + def test_decompress_truncated(self): + read = BytesIO(self.comp[:10]).read + self.assertRaises(zlib.error, read_zlib_chunks, read, self.unpacked) + + read = BytesIO(self.comp).read + self.assertRaises(zlib.error, read_zlib_chunks, read, self.unpacked) + + def test_decompress_empty(self): + unpacked = UnpackedObject(Tree.type_num, None, 0, None) + comp = zlib.compress(b"") + read = BytesIO(comp + self.extra).read + unused = read_zlib_chunks(read, unpacked) + self.assertEqual(b"", b"".join(unpacked.decomp_chunks)) + self.assertNotEqual(b"", unused) + self.assertEqual(self.extra, unused + read()) + + def test_decompress_no_crc32(self): + self.unpacked.crc32 = None + read_zlib_chunks(self.read, self.unpacked) + self.assertEqual(None, self.unpacked.crc32) + + def _do_decompress_test(self, buffer_size, **kwargs): + unused = read_zlib_chunks( + self.read, self.unpacked, buffer_size=buffer_size, **kwargs + ) + self.assertEqual(self.decomp, b"".join(self.unpacked.decomp_chunks)) + self.assertEqual(zlib.crc32(self.comp), self.unpacked.crc32) + self.assertNotEqual(b"", unused) + self.assertEqual(self.extra, unused + self.read()) + + def test_simple_decompress(self): + self._do_decompress_test(4096) + self.assertEqual(None, self.unpacked.comp_chunks) + + # These buffer sizes are not intended to be realistic, but rather simulate + # larger buffer sizes that may end at various places. 
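+    # Sizes 1-4 make each read() end before, exactly at, and after the end of
+    # the compressed stream, exercising read_zlib_chunks boundary handling.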
+    def test_decompress_buffer_size_1(self):
+        self._do_decompress_test(1)
+
+    def test_decompress_buffer_size_2(self):
+        self._do_decompress_test(2)
+
+    def test_decompress_buffer_size_3(self):
+        self._do_decompress_test(3)
+
+    def test_decompress_buffer_size_4(self):
+        self._do_decompress_test(4)
+
+    def test_decompress_include_comp(self):
+        self._do_decompress_test(4096, include_comp=True)
+        self.assertEqual(self.comp, b"".join(self.unpacked.comp_chunks))
+
+
+class DeltifyTests(TestCase):
+    def test_empty(self):
+        self.assertEqual([], list(deltify_pack_objects([])))
+
+    def test_single(self):
+        b = Blob.from_string(b"foo")
+        self.assertEqual(
+            [(b.type_num, b.sha().digest(), None, b.as_raw_chunks())],
+            list(deltify_pack_objects([(b, b"")])),
+        )
+
+    def test_simple_delta(self):
+        b1 = Blob.from_string(b"a" * 101)
+        b2 = Blob.from_string(b"a" * 100)
+        delta = list(create_delta(b1.as_raw_chunks(), b2.as_raw_chunks()))
+        self.assertEqual(
+            [
+                (b1.type_num, b1.sha().digest(), None, b1.as_raw_chunks()),
+                (b2.type_num, b2.sha().digest(), b1.sha().digest(), delta),
+            ],
+            list(deltify_pack_objects([(b1, b""), (b2, b"")])),
+        )
+
+
+class TestPackStreamReader(TestCase):
+    def test_read_objects_empty_pack(self):
+        f = BytesIO()
+        build_pack(f, [])
+        reader = PackStreamReader(f.read)
+        self.assertEqual(0, len(list(reader.read_objects())))
+
+    def test_read_objects(self):
+        f = BytesIO()
+        entries = build_pack(
+            f,
+            [
+                (Blob.type_num, b"blob"),
+                (OFS_DELTA, (0, b"blob1")),
+            ],
+        )
+        reader = PackStreamReader(f.read)
+        objects = list(reader.read_objects(compute_crc32=True))
+        self.assertEqual(2, len(objects))
+
+        unpacked_blob, unpacked_delta = objects
+
+        self.assertEqual(entries[0][0], unpacked_blob.offset)
+        self.assertEqual(Blob.type_num, unpacked_blob.pack_type_num)
+        self.assertEqual(Blob.type_num, unpacked_blob.obj_type_num)
+        self.assertEqual(None, unpacked_blob.delta_base)
+        self.assertEqual(b"blob", b"".join(unpacked_blob.decomp_chunks))
+        self.assertEqual(entries[0][4], unpacked_blob.crc32)
+
+        self.assertEqual(entries[1][0], unpacked_delta.offset)
+        self.assertEqual(OFS_DELTA, unpacked_delta.pack_type_num)
+        self.assertEqual(None, unpacked_delta.obj_type_num)
+        self.assertEqual(
+            unpacked_delta.offset - unpacked_blob.offset,
+            unpacked_delta.delta_base,
+        )
+        delta = create_delta(b"blob", b"blob1")
+        self.assertEqual(b"".join(delta), b"".join(unpacked_delta.decomp_chunks))
+        self.assertEqual(entries[1][4], unpacked_delta.crc32)
+
+    def test_read_objects_buffered(self):
+        f = BytesIO()
+        build_pack(
+            f,
+            [
+                (Blob.type_num, b"blob"),
+                (OFS_DELTA, (0, b"blob1")),
+            ],
+        )
+        reader = PackStreamReader(f.read, zlib_bufsize=4)
+        self.assertEqual(2, len(list(reader.read_objects())))
+
+    def test_read_objects_empty(self):
+        reader = PackStreamReader(BytesIO().read)
+        self.assertEqual([], list(reader.read_objects()))
+
+
+class TestPackIterator(DeltaChainIterator):
+
+    _compute_crc32 = True
+
+    def __init__(self, *args, **kwargs):
+        super(TestPackIterator, self).__init__(*args, **kwargs)
+        self._unpacked_offsets = set()
+
+    def _result(self, unpacked):
+        """Return entries in the same format as build_pack."""
+        return (
+            unpacked.offset,
+            unpacked.obj_type_num,
+            b"".join(unpacked.obj_chunks),
+            unpacked.sha(),
+            unpacked.crc32,
+        )
+
+    def _resolve_object(self, offset, pack_type_num, base_chunks):
+        assert offset not in self._unpacked_offsets, (
+            "Attempted to re-inflate offset %i" % offset
+        )
+        self._unpacked_offsets.add(offset)
+        return super(TestPackIterator,
self)._resolve_object( + offset, pack_type_num, base_chunks + ) + + +class DeltaChainIteratorTests(TestCase): + def setUp(self): + super(DeltaChainIteratorTests, self).setUp() + self.store = MemoryObjectStore() + self.fetched = set() + + def store_blobs(self, blobs_data): + blobs = [] + for data in blobs_data: + blob = make_object(Blob, data=data) + blobs.append(blob) + self.store.add_object(blob) + return blobs + + def get_raw_no_repeat(self, bin_sha): + """Wrapper around store.get_raw that doesn't allow repeat lookups.""" + hex_sha = sha_to_hex(bin_sha) + self.assertNotIn( + hex_sha, + self.fetched, + "Attempted to re-fetch object %s" % hex_sha + ) + self.fetched.add(hex_sha) + return self.store.get_raw(hex_sha) + + def make_pack_iter(self, f, thin=None): + if thin is None: + thin = bool(list(self.store)) + resolve_ext_ref = thin and self.get_raw_no_repeat or None + data = PackData("test.pack", file=f) + return TestPackIterator.for_pack_data(data, resolve_ext_ref=resolve_ext_ref) + + def assertEntriesMatch(self, expected_indexes, entries, pack_iter): + expected = [entries[i] for i in expected_indexes] + self.assertEqual(expected, list(pack_iter._walk_all_chains())) + + def test_no_deltas(self): + f = BytesIO() + entries = build_pack( + f, + [ + (Commit.type_num, b"commit"), + (Blob.type_num, b"blob"), + (Tree.type_num, b"tree"), + ], + ) + self.assertEntriesMatch([0, 1, 2], entries, self.make_pack_iter(f)) + + def test_ofs_deltas(self): + f = BytesIO() + entries = build_pack( + f, + [ + (Blob.type_num, b"blob"), + (OFS_DELTA, (0, b"blob1")), + (OFS_DELTA, (0, b"blob2")), + ], + ) + # Delta resolution changed to DFS + self.assertEntriesMatch([0, 2, 1], entries, self.make_pack_iter(f)) + + def test_ofs_deltas_chain(self): + f = BytesIO() + entries = build_pack( + f, + [ + (Blob.type_num, b"blob"), + (OFS_DELTA, (0, b"blob1")), + (OFS_DELTA, (1, b"blob2")), + ], + ) + self.assertEntriesMatch([0, 1, 2], entries, self.make_pack_iter(f)) + + def test_ref_deltas(self): + f = BytesIO() + entries = build_pack( + f, + [ + (REF_DELTA, (1, b"blob1")), + (Blob.type_num, (b"blob")), + (REF_DELTA, (1, b"blob2")), + ], + ) + # Delta resolution changed to DFS + self.assertEntriesMatch([1, 2, 0], entries, self.make_pack_iter(f)) + + def test_ref_deltas_chain(self): + f = BytesIO() + entries = build_pack( + f, + [ + (REF_DELTA, (2, b"blob1")), + (Blob.type_num, (b"blob")), + (REF_DELTA, (1, b"blob2")), + ], + ) + self.assertEntriesMatch([1, 2, 0], entries, self.make_pack_iter(f)) + + def test_ofs_and_ref_deltas(self): + # Deltas pending on this offset are popped before deltas depending on + # this ref. 
+ f = BytesIO() + entries = build_pack( + f, + [ + (REF_DELTA, (1, b"blob1")), + (Blob.type_num, (b"blob")), + (OFS_DELTA, (1, b"blob2")), + ], + ) + + # Delta resolution changed to DFS + self.assertEntriesMatch([1, 0, 2], entries, self.make_pack_iter(f)) + + def test_mixed_chain(self): + f = BytesIO() + entries = build_pack( + f, + [ + (Blob.type_num, b"blob"), + (REF_DELTA, (2, b"blob2")), + (OFS_DELTA, (0, b"blob1")), + (OFS_DELTA, (1, b"blob3")), + (OFS_DELTA, (0, b"bob")), + ]) + # Delta resolution changed to DFS + self.assertEntriesMatch([0, 4, 2, 1, 3], entries, self.make_pack_iter(f)) + + def test_long_chain(self): + n = 100 + objects_spec = [(Blob.type_num, b"blob")] + for i in range(n): + objects_spec.append((OFS_DELTA, (i, b"blob" + str(i).encode("ascii")))) + f = BytesIO() + entries = build_pack(f, objects_spec) + self.assertEntriesMatch(range(n + 1), entries, self.make_pack_iter(f)) + + def test_branchy_chain(self): + n = 100 + objects_spec = [(Blob.type_num, b"blob")] + for i in range(n): + objects_spec.append((OFS_DELTA, (0, b"blob" + str(i).encode("ascii")))) + f = BytesIO() + entries = build_pack(f, objects_spec) + # Delta resolution changed to DFS + indices = [0] + list(range(100, 0, -1)) + self.assertEntriesMatch(indices, entries, self.make_pack_iter(f)) + + def test_ext_ref(self): + (blob,) = self.store_blobs([b"blob"]) + f = BytesIO() + entries = build_pack(f, [(REF_DELTA, (blob.id, b"blob1"))], store=self.store) + pack_iter = self.make_pack_iter(f) + self.assertEntriesMatch([0], entries, pack_iter) + self.assertEqual([hex_to_sha(blob.id)], pack_iter.ext_refs()) + + def test_ext_ref_chain(self): + (blob,) = self.store_blobs([b"blob"]) + f = BytesIO() + entries = build_pack( + f, + [ + (REF_DELTA, (1, b"blob2")), + (REF_DELTA, (blob.id, b"blob1")), + ], + store=self.store, + ) + pack_iter = self.make_pack_iter(f) + self.assertEntriesMatch([1, 0], entries, pack_iter) + self.assertEqual([hex_to_sha(blob.id)], pack_iter.ext_refs()) + + def test_ext_ref_chain_degenerate(self): + # Test a degenerate case where the sender is sending a REF_DELTA + # object that expands to an object already in the repository. 
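+        # Here the REF_DELTA against 'blob' inflates to the same content as
+        # the pre-existing 'blob2', so the second delta's base is found inside
+        # the pack and only 'blob' counts as an external ref.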
+ (blob,) = self.store_blobs([b"blob"]) + (blob2,) = self.store_blobs([b"blob2"]) + assert blob.id < blob2.id + + f = BytesIO() + entries = build_pack( + f, + [ + (REF_DELTA, (blob.id, b"blob2")), + (REF_DELTA, (0, b"blob3")), + ], + store=self.store, + ) + pack_iter = self.make_pack_iter(f) + self.assertEntriesMatch([0, 1], entries, pack_iter) + self.assertEqual([hex_to_sha(blob.id)], pack_iter.ext_refs()) + + def test_ext_ref_multiple_times(self): + (blob,) = self.store_blobs([b"blob"]) + f = BytesIO() + entries = build_pack( + f, + [ + (REF_DELTA, (blob.id, b"blob1")), + (REF_DELTA, (blob.id, b"blob2")), + ], + store=self.store, + ) + pack_iter = self.make_pack_iter(f) + self.assertEntriesMatch([0, 1], entries, pack_iter) + self.assertEqual([hex_to_sha(blob.id)], pack_iter.ext_refs()) + + def test_multiple_ext_refs(self): + b1, b2 = self.store_blobs([b"foo", b"bar"]) + f = BytesIO() + entries = build_pack( + f, + [ + (REF_DELTA, (b1.id, b"foo1")), + (REF_DELTA, (b2.id, b"bar2")), + ], + store=self.store, + ) + pack_iter = self.make_pack_iter(f) + self.assertEntriesMatch([0, 1], entries, pack_iter) + self.assertEqual([hex_to_sha(b1.id), hex_to_sha(b2.id)], pack_iter.ext_refs()) + + def test_bad_ext_ref_non_thin_pack(self): + (blob,) = self.store_blobs([b"blob"]) + f = BytesIO() + build_pack(f, [(REF_DELTA, (blob.id, b"blob1"))], store=self.store) + pack_iter = self.make_pack_iter(f, thin=False) + try: + list(pack_iter._walk_all_chains()) + self.fail() + except KeyError as e: + self.assertEqual(([blob.id],), e.args) + + def test_bad_ext_ref_thin_pack(self): + b1, b2, b3 = self.store_blobs([b"foo", b"bar", b"baz"]) + f = BytesIO() + build_pack( + f, + [ + (REF_DELTA, (1, b"foo99")), + (REF_DELTA, (b1.id, b"foo1")), + (REF_DELTA, (b2.id, b"bar2")), + (REF_DELTA, (b3.id, b"baz3")), + ], + store=self.store, + ) + del self.store[b2.id] + del self.store[b3.id] + pack_iter = self.make_pack_iter(f) + try: + list(pack_iter._walk_all_chains()) + self.fail() + except KeyError as e: + self.assertEqual((sorted([b2.id, b3.id]),), (sorted(e.args[0]),)) + + +class DeltaEncodeSizeTests(TestCase): + def test_basic(self): + self.assertEqual(b"\x00", _delta_encode_size(0)) + self.assertEqual(b"\x01", _delta_encode_size(1)) + self.assertEqual(b"\xfa\x01", _delta_encode_size(250)) + self.assertEqual(b"\xe8\x07", _delta_encode_size(1000)) + self.assertEqual(b"\xa0\x8d\x06", _delta_encode_size(100000)) + + +class EncodeCopyOperationTests(TestCase): + def test_basic(self): + self.assertEqual(b"\x80", _encode_copy_operation(0, 0)) + self.assertEqual(b"\x91\x01\x0a", _encode_copy_operation(1, 10)) + self.assertEqual(b"\xb1\x64\xe8\x03", _encode_copy_operation(100, 1000)) + self.assertEqual(b"\x93\xe8\x03\x01", _encode_copy_operation(1000, 1)) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_patch.py b/env/lib/python3.8/site-packages/dulwich/tests/test_patch.py new file mode 100644 index 00000000..38aac88f --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_patch.py @@ -0,0 +1,647 @@ +# test_patch.py -- tests for patch.py +# Copyright (C) 2010 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. 
+# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for patch.py.""" + +from io import BytesIO, StringIO + +from dulwich.objects import ( + Blob, + Commit, + S_IFGITLINK, + Tree, +) +from dulwich.object_store import ( + MemoryObjectStore, +) +from dulwich.patch import ( + get_summary, + git_am_patch_split, + write_blob_diff, + write_commit_patch, + write_object_diff, + write_tree_diff, +) +from dulwich.tests import ( + SkipTest, + TestCase, +) + + +class WriteCommitPatchTests(TestCase): + def test_simple_bytesio(self): + f = BytesIO() + c = Commit() + c.committer = c.author = b"Jelmer " + c.commit_time = c.author_time = 1271350201 + c.commit_timezone = c.author_timezone = 0 + c.message = b"This is the first line\nAnd this is the second line.\n" + c.tree = Tree().id + write_commit_patch(f, c, b"CONTENTS", (1, 1), version="custom") + f.seek(0) + lines = f.readlines() + self.assertTrue( + lines[0].startswith(b"From 0b0d34d1b5b596c928adc9a727a4b9e03d025298") + ) + self.assertEqual(lines[1], b"From: Jelmer \n") + self.assertTrue(lines[2].startswith(b"Date: ")) + self.assertEqual( + [ + b"Subject: [PATCH 1/1] This is the first line\n", + b"And this is the second line.\n", + b"\n", + b"\n", + b"---\n", + ], + lines[3:8], + ) + self.assertEqual([b"CONTENTS-- \n", b"custom\n"], lines[-2:]) + if len(lines) >= 12: + # diffstat may not be present + self.assertEqual(lines[8], b" 0 files changed\n") + + +class ReadGitAmPatch(TestCase): + def test_extract_string(self): + text = b"""\ +From ff643aae102d8870cac88e8f007e70f58f3a7363 Mon Sep 17 00:00:00 2001 +From: Jelmer Vernooij +Date: Thu, 15 Apr 2010 15:40:28 +0200 +Subject: [PATCH 1/2] Remove executable bit from prey.ico (triggers a warning). + +--- + pixmaps/prey.ico | Bin 9662 -> 9662 bytes + 1 files changed, 0 insertions(+), 0 deletions(-) + mode change 100755 => 100644 pixmaps/prey.ico + +-- +1.7.0.4 +""" + c, diff, version = git_am_patch_split(StringIO(text.decode("utf-8")), "utf-8") + self.assertEqual(b"Jelmer Vernooij ", c.committer) + self.assertEqual(b"Jelmer Vernooij ", c.author) + self.assertEqual( + b"Remove executable bit from prey.ico " b"(triggers a warning).\n", + c.message, + ) + self.assertEqual( + b""" pixmaps/prey.ico | Bin 9662 -> 9662 bytes + 1 files changed, 0 insertions(+), 0 deletions(-) + mode change 100755 => 100644 pixmaps/prey.ico + +""", + diff, + ) + self.assertEqual(b"1.7.0.4", version) + + def test_extract_bytes(self): + text = b"""\ +From ff643aae102d8870cac88e8f007e70f58f3a7363 Mon Sep 17 00:00:00 2001 +From: Jelmer Vernooij +Date: Thu, 15 Apr 2010 15:40:28 +0200 +Subject: [PATCH 1/2] Remove executable bit from prey.ico (triggers a warning). 
+ +--- + pixmaps/prey.ico | Bin 9662 -> 9662 bytes + 1 files changed, 0 insertions(+), 0 deletions(-) + mode change 100755 => 100644 pixmaps/prey.ico + +-- +1.7.0.4 +""" + c, diff, version = git_am_patch_split(BytesIO(text)) + self.assertEqual(b"Jelmer Vernooij ", c.committer) + self.assertEqual(b"Jelmer Vernooij ", c.author) + self.assertEqual( + b"Remove executable bit from prey.ico " b"(triggers a warning).\n", + c.message, + ) + self.assertEqual( + b""" pixmaps/prey.ico | Bin 9662 -> 9662 bytes + 1 files changed, 0 insertions(+), 0 deletions(-) + mode change 100755 => 100644 pixmaps/prey.ico + +""", + diff, + ) + self.assertEqual(b"1.7.0.4", version) + + def test_extract_spaces(self): + text = b"""From ff643aae102d8870cac88e8f007e70f58f3a7363 Mon Sep 17 00:00:00 2001 +From: Jelmer Vernooij +Date: Thu, 15 Apr 2010 15:40:28 +0200 +Subject: [Dulwich-users] [PATCH] Added unit tests for + dulwich.object_store.tree_lookup_path. + +* dulwich/tests/test_object_store.py + (TreeLookupPathTests): This test case contains a few tests that ensure the + tree_lookup_path function works as expected. +--- + pixmaps/prey.ico | Bin 9662 -> 9662 bytes + 1 files changed, 0 insertions(+), 0 deletions(-) + mode change 100755 => 100644 pixmaps/prey.ico + +-- +1.7.0.4 +""" + c, diff, version = git_am_patch_split(BytesIO(text), "utf-8") + self.assertEqual( + b"""\ +Added unit tests for dulwich.object_store.tree_lookup_path. + +* dulwich/tests/test_object_store.py + (TreeLookupPathTests): This test case contains a few tests that ensure the + tree_lookup_path function works as expected. +""", + c.message, + ) + + def test_extract_pseudo_from_header(self): + text = b"""From ff643aae102d8870cac88e8f007e70f58f3a7363 Mon Sep 17 00:00:00 2001 +From: Jelmer Vernooij +Date: Thu, 15 Apr 2010 15:40:28 +0200 +Subject: [Dulwich-users] [PATCH] Added unit tests for + dulwich.object_store.tree_lookup_path. + +From: Jelmer Vernooij + +* dulwich/tests/test_object_store.py + (TreeLookupPathTests): This test case contains a few tests that ensure the + tree_lookup_path function works as expected. +--- + pixmaps/prey.ico | Bin 9662 -> 9662 bytes + 1 files changed, 0 insertions(+), 0 deletions(-) + mode change 100755 => 100644 pixmaps/prey.ico + +-- +1.7.0.4 +""" + c, diff, version = git_am_patch_split(BytesIO(text), "utf-8") + self.assertEqual(b"Jelmer Vernooij ", c.author) + self.assertEqual( + b"""\ +Added unit tests for dulwich.object_store.tree_lookup_path. + +* dulwich/tests/test_object_store.py + (TreeLookupPathTests): This test case contains a few tests that ensure the + tree_lookup_path function works as expected. +""", + c.message, + ) + + def test_extract_no_version_tail(self): + text = b"""\ +From ff643aae102d8870cac88e8f007e70f58f3a7363 Mon Sep 17 00:00:00 2001 +From: Jelmer Vernooij +Date: Thu, 15 Apr 2010 15:40:28 +0200 +Subject: [Dulwich-users] [PATCH] Added unit tests for + dulwich.object_store.tree_lookup_path. 
+ +From: Jelmer Vernooij + +--- + pixmaps/prey.ico | Bin 9662 -> 9662 bytes + 1 files changed, 0 insertions(+), 0 deletions(-) + mode change 100755 => 100644 pixmaps/prey.ico + +""" + c, diff, version = git_am_patch_split(BytesIO(text), "utf-8") + self.assertEqual(None, version) + + def test_extract_mercurial(self): + raise SkipTest( + "git_am_patch_split doesn't handle Mercurial patches " "properly yet" + ) + expected_diff = """\ +diff --git a/dulwich/tests/test_patch.py b/dulwich/tests/test_patch.py +--- a/dulwich/tests/test_patch.py ++++ b/dulwich/tests/test_patch.py +@@ -158,7 +158,7 @@ + + ''' + c, diff, version = git_am_patch_split(BytesIO(text)) +- self.assertIs(None, version) ++ self.assertEqual(None, version) + + + class DiffTests(TestCase): +""" + text = ( + """\ +From dulwich-users-bounces+jelmer=samba.org@lists.launchpad.net \ +Mon Nov 29 00:58:18 2010 +Date: Sun, 28 Nov 2010 17:57:27 -0600 +From: Augie Fackler +To: dulwich-users +Subject: [Dulwich-users] [PATCH] test_patch: fix tests on Python 2.6 +Content-Transfer-Encoding: 8bit + +Change-Id: I5e51313d4ae3a65c3f00c665002a7489121bb0d6 + +%s + +_______________________________________________ +Mailing list: https://launchpad.net/~dulwich-users +Post to : dulwich-users@lists.launchpad.net +Unsubscribe : https://launchpad.net/~dulwich-users +More help : https://help.launchpad.net/ListHelp + +""" + % expected_diff + ) + c, diff, version = git_am_patch_split(BytesIO(text)) + self.assertEqual(expected_diff, diff) + self.assertEqual(None, version) + + +class DiffTests(TestCase): + """Tests for write_blob_diff and write_tree_diff.""" + + def test_blob_diff(self): + f = BytesIO() + write_blob_diff( + f, + (b"foo.txt", 0o644, Blob.from_string(b"old\nsame\n")), + (b"bar.txt", 0o644, Blob.from_string(b"new\nsame\n")), + ) + self.assertEqual( + [ + b"diff --git a/foo.txt b/bar.txt", + b"index 3b0f961..a116b51 644", + b"--- a/foo.txt", + b"+++ b/bar.txt", + b"@@ -1,2 +1,2 @@", + b"-old", + b"+new", + b" same", + ], + f.getvalue().splitlines(), + ) + + def test_blob_add(self): + f = BytesIO() + write_blob_diff( + f, + (None, None, None), + (b"bar.txt", 0o644, Blob.from_string(b"new\nsame\n")), + ) + self.assertEqual( + [ + b"diff --git a/bar.txt b/bar.txt", + b"new file mode 644", + b"index 0000000..a116b51", + b"--- /dev/null", + b"+++ b/bar.txt", + b"@@ -0,0 +1,2 @@", + b"+new", + b"+same", + ], + f.getvalue().splitlines(), + ) + + def test_blob_remove(self): + f = BytesIO() + write_blob_diff( + f, + (b"bar.txt", 0o644, Blob.from_string(b"new\nsame\n")), + (None, None, None), + ) + self.assertEqual( + [ + b"diff --git a/bar.txt b/bar.txt", + b"deleted file mode 644", + b"index a116b51..0000000", + b"--- a/bar.txt", + b"+++ /dev/null", + b"@@ -1,2 +0,0 @@", + b"-new", + b"-same", + ], + f.getvalue().splitlines(), + ) + + def test_tree_diff(self): + f = BytesIO() + store = MemoryObjectStore() + added = Blob.from_string(b"add\n") + removed = Blob.from_string(b"removed\n") + changed1 = Blob.from_string(b"unchanged\nremoved\n") + changed2 = Blob.from_string(b"unchanged\nadded\n") + unchanged = Blob.from_string(b"unchanged\n") + tree1 = Tree() + tree1.add(b"removed.txt", 0o644, removed.id) + tree1.add(b"changed.txt", 0o644, changed1.id) + tree1.add(b"unchanged.txt", 0o644, changed1.id) + tree2 = Tree() + tree2.add(b"added.txt", 0o644, added.id) + tree2.add(b"changed.txt", 0o644, changed2.id) + tree2.add(b"unchanged.txt", 0o644, changed1.id) + store.add_objects( + [ + (o, None) + for o in [ + tree1, + tree2, + added, + removed, + changed1, + 
changed2, + unchanged, + ] + ] + ) + write_tree_diff(f, store, tree1.id, tree2.id) + self.assertEqual( + [ + b"diff --git a/added.txt b/added.txt", + b"new file mode 644", + b"index 0000000..76d4bb8", + b"--- /dev/null", + b"+++ b/added.txt", + b"@@ -0,0 +1 @@", + b"+add", + b"diff --git a/changed.txt b/changed.txt", + b"index bf84e48..1be2436 644", + b"--- a/changed.txt", + b"+++ b/changed.txt", + b"@@ -1,2 +1,2 @@", + b" unchanged", + b"-removed", + b"+added", + b"diff --git a/removed.txt b/removed.txt", + b"deleted file mode 644", + b"index 2c3f0b3..0000000", + b"--- a/removed.txt", + b"+++ /dev/null", + b"@@ -1 +0,0 @@", + b"-removed", + ], + f.getvalue().splitlines(), + ) + + def test_tree_diff_submodule(self): + f = BytesIO() + store = MemoryObjectStore() + tree1 = Tree() + tree1.add( + b"asubmodule", + S_IFGITLINK, + b"06d0bdd9e2e20377b3180e4986b14c8549b393e4", + ) + tree2 = Tree() + tree2.add( + b"asubmodule", + S_IFGITLINK, + b"cc975646af69f279396d4d5e1379ac6af80ee637", + ) + store.add_objects([(o, None) for o in [tree1, tree2]]) + write_tree_diff(f, store, tree1.id, tree2.id) + self.assertEqual( + [ + b"diff --git a/asubmodule b/asubmodule", + b"index 06d0bdd..cc97564 160000", + b"--- a/asubmodule", + b"+++ b/asubmodule", + b"@@ -1 +1 @@", + b"-Subproject commit 06d0bdd9e2e20377b3180e4986b14c8549b393e4", + b"+Subproject commit cc975646af69f279396d4d5e1379ac6af80ee637", + ], + f.getvalue().splitlines(), + ) + + def test_object_diff_blob(self): + f = BytesIO() + b1 = Blob.from_string(b"old\nsame\n") + b2 = Blob.from_string(b"new\nsame\n") + store = MemoryObjectStore() + store.add_objects([(b1, None), (b2, None)]) + write_object_diff( + f, store, (b"foo.txt", 0o644, b1.id), (b"bar.txt", 0o644, b2.id) + ) + self.assertEqual( + [ + b"diff --git a/foo.txt b/bar.txt", + b"index 3b0f961..a116b51 644", + b"--- a/foo.txt", + b"+++ b/bar.txt", + b"@@ -1,2 +1,2 @@", + b"-old", + b"+new", + b" same", + ], + f.getvalue().splitlines(), + ) + + def test_object_diff_add_blob(self): + f = BytesIO() + store = MemoryObjectStore() + b2 = Blob.from_string(b"new\nsame\n") + store.add_object(b2) + write_object_diff(f, store, (None, None, None), (b"bar.txt", 0o644, b2.id)) + self.assertEqual( + [ + b"diff --git a/bar.txt b/bar.txt", + b"new file mode 644", + b"index 0000000..a116b51", + b"--- /dev/null", + b"+++ b/bar.txt", + b"@@ -0,0 +1,2 @@", + b"+new", + b"+same", + ], + f.getvalue().splitlines(), + ) + + def test_object_diff_remove_blob(self): + f = BytesIO() + b1 = Blob.from_string(b"new\nsame\n") + store = MemoryObjectStore() + store.add_object(b1) + write_object_diff(f, store, (b"bar.txt", 0o644, b1.id), (None, None, None)) + self.assertEqual( + [ + b"diff --git a/bar.txt b/bar.txt", + b"deleted file mode 644", + b"index a116b51..0000000", + b"--- a/bar.txt", + b"+++ /dev/null", + b"@@ -1,2 +0,0 @@", + b"-new", + b"-same", + ], + f.getvalue().splitlines(), + ) + + def test_object_diff_bin_blob_force(self): + f = BytesIO() + # Prepare two slightly different PNG headers + b1 = Blob.from_string( + b"\x89\x50\x4e\x47\x0d\x0a\x1a\x0a" + b"\x00\x00\x00\x0d\x49\x48\x44\x52" + b"\x00\x00\x01\xd5\x00\x00\x00\x9f" + b"\x08\x04\x00\x00\x00\x05\x04\x8b" + ) + b2 = Blob.from_string( + b"\x89\x50\x4e\x47\x0d\x0a\x1a\x0a" + b"\x00\x00\x00\x0d\x49\x48\x44\x52" + b"\x00\x00\x01\xd5\x00\x00\x00\x9f" + b"\x08\x03\x00\x00\x00\x98\xd3\xb3" + ) + store = MemoryObjectStore() + store.add_objects([(b1, None), (b2, None)]) + write_object_diff( + f, + store, + (b"foo.png", 0o644, b1.id), + (b"bar.png", 0o644, b2.id), + 
diff_binary=True, + ) + self.assertEqual( + [ + b"diff --git a/foo.png b/bar.png", + b"index f73e47d..06364b7 644", + b"--- a/foo.png", + b"+++ b/bar.png", + b"@@ -1,4 +1,4 @@", + b" \x89PNG", + b" \x1a", + b" \x00\x00\x00", + b"-IHDR\x00\x00\x01\xd5\x00\x00\x00" + b"\x9f\x08\x04\x00\x00\x00\x05\x04\x8b", + b"\\ No newline at end of file", + b"+IHDR\x00\x00\x01\xd5\x00\x00\x00\x9f" + b"\x08\x03\x00\x00\x00\x98\xd3\xb3", + b"\\ No newline at end of file", + ], + f.getvalue().splitlines(), + ) + + def test_object_diff_bin_blob(self): + f = BytesIO() + # Prepare two slightly different PNG headers + b1 = Blob.from_string( + b"\x89\x50\x4e\x47\x0d\x0a\x1a\x0a" + b"\x00\x00\x00\x0d\x49\x48\x44\x52" + b"\x00\x00\x01\xd5\x00\x00\x00\x9f" + b"\x08\x04\x00\x00\x00\x05\x04\x8b" + ) + b2 = Blob.from_string( + b"\x89\x50\x4e\x47\x0d\x0a\x1a\x0a" + b"\x00\x00\x00\x0d\x49\x48\x44\x52" + b"\x00\x00\x01\xd5\x00\x00\x00\x9f" + b"\x08\x03\x00\x00\x00\x98\xd3\xb3" + ) + store = MemoryObjectStore() + store.add_objects([(b1, None), (b2, None)]) + write_object_diff( + f, store, (b"foo.png", 0o644, b1.id), (b"bar.png", 0o644, b2.id) + ) + self.assertEqual( + [ + b"diff --git a/foo.png b/bar.png", + b"index f73e47d..06364b7 644", + b"Binary files a/foo.png and b/bar.png differ", + ], + f.getvalue().splitlines(), + ) + + def test_object_diff_add_bin_blob(self): + f = BytesIO() + b2 = Blob.from_string( + b"\x89\x50\x4e\x47\x0d\x0a\x1a\x0a" + b"\x00\x00\x00\x0d\x49\x48\x44\x52" + b"\x00\x00\x01\xd5\x00\x00\x00\x9f" + b"\x08\x03\x00\x00\x00\x98\xd3\xb3" + ) + store = MemoryObjectStore() + store.add_object(b2) + write_object_diff(f, store, (None, None, None), (b"bar.png", 0o644, b2.id)) + self.assertEqual( + [ + b"diff --git a/bar.png b/bar.png", + b"new file mode 644", + b"index 0000000..06364b7", + b"Binary files /dev/null and b/bar.png differ", + ], + f.getvalue().splitlines(), + ) + + def test_object_diff_remove_bin_blob(self): + f = BytesIO() + b1 = Blob.from_string( + b"\x89\x50\x4e\x47\x0d\x0a\x1a\x0a" + b"\x00\x00\x00\x0d\x49\x48\x44\x52" + b"\x00\x00\x01\xd5\x00\x00\x00\x9f" + b"\x08\x04\x00\x00\x00\x05\x04\x8b" + ) + store = MemoryObjectStore() + store.add_object(b1) + write_object_diff(f, store, (b"foo.png", 0o644, b1.id), (None, None, None)) + self.assertEqual( + [ + b"diff --git a/foo.png b/foo.png", + b"deleted file mode 644", + b"index f73e47d..0000000", + b"Binary files a/foo.png and /dev/null differ", + ], + f.getvalue().splitlines(), + ) + + def test_object_diff_kind_change(self): + f = BytesIO() + b1 = Blob.from_string(b"new\nsame\n") + store = MemoryObjectStore() + store.add_object(b1) + write_object_diff( + f, + store, + (b"bar.txt", 0o644, b1.id), + ( + b"bar.txt", + 0o160000, + b"06d0bdd9e2e20377b3180e4986b14c8549b393e4", + ), + ) + self.assertEqual( + [ + b"diff --git a/bar.txt b/bar.txt", + b"old file mode 644", + b"new file mode 160000", + b"index a116b51..06d0bdd 160000", + b"--- a/bar.txt", + b"+++ b/bar.txt", + b"@@ -1,2 +1 @@", + b"-new", + b"-same", + b"+Subproject commit 06d0bdd9e2e20377b3180e4986b14c8549b393e4", + ], + f.getvalue().splitlines(), + ) + + +class GetSummaryTests(TestCase): + def test_simple(self): + c = Commit() + c.committer = c.author = b"Jelmer " + c.commit_time = c.author_time = 1271350201 + c.commit_timezone = c.author_timezone = 0 + c.message = b"This is the first line\nAnd this is the second line.\n" + c.tree = Tree().id + self.assertEqual("This-is-the-first-line", get_summary(c)) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_porcelain.py 
b/env/lib/python3.8/site-packages/dulwich/tests/test_porcelain.py new file mode 100644 index 00000000..6c38a4ba --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_porcelain.py @@ -0,0 +1,3149 @@ +# test_porcelain.py -- porcelain tests +# Copyright (C) 2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for dulwich.porcelain.""" + +import contextlib +from io import BytesIO, StringIO +import os +import platform +import re +import shutil +import stat +import subprocess +import sys +import tarfile +import tempfile +import threading +import time +from unittest import skipIf + +from dulwich import porcelain +from dulwich.diff_tree import tree_changes +from dulwich.errors import ( + CommitError, +) +from dulwich.objects import ( + Blob, + Tag, + Tree, + ZERO_SHA, +) +from dulwich.repo import ( + NoIndexPresent, + Repo, +) +from dulwich.server import ( + DictBackend, +) +from dulwich.tests import ( + TestCase, +) +from dulwich.tests.utils import ( + build_commit_graph, + make_commit, + make_object, +) +from dulwich.web import ( + make_server, + make_wsgi_chain, +) + +try: + import gpg +except ImportError: + gpg = None + + +def flat_walk_dir(dir_to_walk): + for dirpath, _, filenames in os.walk(dir_to_walk): + rel_dirpath = os.path.relpath(dirpath, dir_to_walk) + if not dirpath == dir_to_walk: + yield rel_dirpath + for filename in filenames: + if dirpath == dir_to_walk: + yield filename + else: + yield os.path.join(rel_dirpath, filename) + + +class PorcelainTestCase(TestCase): + def setUp(self): + super(PorcelainTestCase, self).setUp() + self.test_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, self.test_dir) + self.repo_path = os.path.join(self.test_dir, "repo") + self.repo = Repo.init(self.repo_path, mkdir=True) + self.addCleanup(self.repo.close) + + def assertRecentTimestamp(self, ts): + # On some slow CIs it does actually take more than 5 seconds to go from + # creating the tag to here. 
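+        # Hence the generous 50-second bound below.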
+ self.assertLess(time.time() - ts, 50) + + +@skipIf(gpg is None, "gpg is not available") +class PorcelainGpgTestCase(PorcelainTestCase): + DEFAULT_KEY = """ +-----BEGIN PGP PRIVATE KEY BLOCK----- + +lQVYBGBjIyIBDADAwydvMPQqeEiK54FG1DHwT5sQejAaJOb+PsOhVa4fLcKsrO3F +g5CxO+/9BHCXAr8xQAtp/gOhDN05fyK3MFyGlL9s+Cd8xf34S3R4rN/qbF0oZmaa +FW0MuGnniq54HINs8KshadVn1Dhi/GYSJ588qNFRl/qxFTYAk+zaGsgX/QgFfy0f +djWXJLypZXu9D6DlyJ0cPSzUlfBkI2Ytx6grzIquRjY0FbkjK3l+iGsQ+ebRMdcP +Sqd5iTN9XuzIUVoBFAZBRjibKV3N2wxlnCbfLlzCyDp7rktzSThzjJ2pVDuLrMAx +6/L9hIhwmFwdtY4FBFGvMR0b0Ugh3kCsRWr8sgj9I7dUoLHid6ObYhJFhnD3GzRc +U+xX1uy3iTCqJDsG334aQIhC5Giuxln4SUZna2MNbq65ksh38N1aM/t3+Dc/TKVB +rb5KWicRPCQ4DIQkHMDCSPyj+dvRLCPzIaPvHD7IrCfHYHOWuvvPGCpwjo0As3iP +IecoMeguPLVaqgcAEQEAAQAL/i5/pQaUd4G7LDydpbixPS6r9UrfPrU/y5zvBP/p +DCynPDutJ1oq539pZvXQ2VwEJJy7x0UVKkjyMndJLNWly9wHC7o8jkHx/NalVP47 +LXR+GWbCdOOcYYbdAWcCNB3zOtzPnWhdAEagkc2G9xRQDIB0dLHLCIUpCbLP/CWM +qlHnDsVMrVTWjgzcpsnyGgw8NeLYJtYGB8dsN+XgCCjo7a9LEvUBKNgdmWBbf14/ +iBw7PCugazFcH9QYfZwzhsi3nqRRagTXHbxFRG0LD9Ro9qCEutHYGP2PJ59Nj8+M +zaVkJj/OxWxVOGvn2q16mQBCjKpbWfqXZVVl+G5DGOmiSTZqXy+3j6JCKdOMy6Qd +JBHOHhFZXYmWYaaPzoc33T/C3QhMfY5sOtUDLJmV05Wi4dyBeNBEslYgUuTk/jXb +5ZAie25eDdrsoqkcnSs2ZguMF7AXhe6il2zVhUUMs/6UZgd6I7I4Is0HXT/pnxEp +uiTRFu4v8E+u+5a8O3pffe5boQYA3TsIxceen20qY+kRaTOkURHMZLn/y6KLW8bZ +rNJyXWS9hBAcbbSGhfOwYfzbDCM17yPQO3E2zo8lcGdRklUdIIaCxQwtu36N5dfx +OLCCQc5LmYdl/EAm91iAhrr7dNntZ18MU09gdzUu+ONZwu4CP3cJT83+qYZULso8 +4Fvd/X8IEfGZ7kM+ylrdqBwtlrn8yYXtom+ows2M2UuNR53B+BUOd73kVLTkTCjE +JH63+nE8BqG7tDLCMws+23SAA3xxBgDfDrr0x7zCozQKVQEqBzQr9Uoo/c/ZjAfi +syzNSrDz+g5gqJYtuL9XpPJVWf6V1GXVyJlSbxR9CjTkBxmlPxpvV25IsbVSsh0o +aqkf2eWpbCL6Qb2E0jd1rvf8sGeTTohzYfiSVVsC2t9ngRO/CmetizwQBvRzLGMZ +4mtAPiy7ZEDc2dFrPp7zlKISYmJZUx/DJVuZWuOrVMpBP+bSgJXoMTlICxZUqUnE +2VKVStb/L+Tl8XCwIWdrZb9BaDnHqfcGAM2B4HNPxP88Yj1tEDly/vqeb3vVMhj+ +S1lunnLdgxp46YyuTMYAzj88eCGurRtzBsdxxlGAsioEnZGebEqAHQbieKq/DO6I +MOMZHMSVBDqyyIx3assGlxSX8BSFW0lhKyT7i0XqnAgCJ9f/5oq0SbFGq+01VQb7 +jIx9PbcYJORxsE0JG/CXXPv27bRtQXsudkWGSYvC0NLOgk4z8+kQpQtyFh16lujq +WRwMeriu0qNDjCa1/eHIKDovhAZ3GyO5/9m1tBlUZXN0IFVzZXIgPHRlc3RAdGVz +dC5jb20+iQHOBBMBCAA4AhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAFiEEjrR8 +MQ4fJK44PYMvfN2AClLmXiYFAmDcEZEACgkQfN2AClLmXibZzgv/ZfeTpTuqQE1W +C1jT5KpQExnt0BizTX0U7BvSn8Fr6VXTyol6kYc3u71GLUuJyawCLtIzOXqOXJvz +bjcZqymcMADuftKcfMy513FhbF6MhdVd6QoeBP6+7/xXOFJCi+QVYF7SQ2h7K1Qm ++yXOiAMgSxhCZQGPBNJLlDUOd47nSIMANvlumFtmLY/1FD7RpG7WQWjeX1mnxNTw +hUU+Yv7GuFc/JprXCIYqHbhWfvXyVtae2ZK4xuVi5eqwA2RfggOVM7drb+CgPhG0 ++9aEDDLOZqVi65wK7J73Puo3rFTbPQMljxw5s27rWqF+vB6hhVdJOPNomWy3naPi +k5MW0mhsacASz1WYndpZz+XaQTq/wJF5HUyyeUWJ0vlOEdwx021PHcqSTyfNnkjD +KncrE21t2sxWRsgGDETxIwkd2b2HNGAvveUD0ffFK/oJHGSXjAERFGc3wuiDj3mQ +BvKm4wt4QF9ZMrCdhMAA6ax5kfEUqQR4ntmrJk/khp/mV7TILaI4nQVYBGBjIyIB +DADghIo9wXnRxzfdDTvwnP8dHpLAIaPokgdpyLswqUCixJWiW2xcV6weUjEWwH6n +eN/t1uZYVehbrotxVPla+MPvzhxp6/cmG+2lhzEBOp6zRwnL1wIB6HoKJfpREhyM +c8rLR0zMso1L1bJTyydvnu07a7BWo3VWKjilb0rEZZUSD/2hidx5HxMOJSoidLWe +d/PPuv6yht3NtA4UThlcfldm9G6PbqCdm1kMEKAkq0wVJvhPJ6gEFRNJimgygfUw +MDFXEIhQtxjgdV5Uoz3O5452VLoRsDlgpi3E0WDGj7WXDaO5uSU0T5aJgVgHCP/f +xZhHuQFk2YYIl5nCBpOZyWWI0IKmscTuEwzpkhICQDQFvcMZ5ibsl7wA2P7YTrQf +FDMjjzuaK80GYPfxDFlyKUyLqFt8w/QzsZLDLX7+jxIEpbRAaMw/JsWqm5BMxxbS +3CIQiS5S3oSKDsNINelqWFfwvLhvlQra8gIxyNTlek25OdgG66BiiX+seH8A/ql+ +F+MAEQEAAQAL/1jrNSLjMt9pwo6qFKClVQZP2vf7+sH7v7LeHIDXr3EnYUnVYnOq +B1FU5PspTp/+J9W25DB9CZLx7Gj8qeslFdiuLSOoIBB4RCToB3kAoeTH0DHqW/Gs +hFTrmJkuDp9zpo/ek6SIXJx5rHAyR9KVw0fizQprH2f6PcgLbTWeM61dJuqowmg3 
+7eCOyIKv7VQvFqEhYokLD+JNmrvg+Htg0DXGvdjRjAwPf/NezEXpj67a6cHTp1/C +hwp7pevG+3fTxaCJFesl5/TxxtnaBLE8m2uo/S6Hxgn9l0edonroe1QlTjEqGLy2 +7qi2z5Rem+v6GWNDRgvAWur13v8FNdyduHlioG/NgRsU9mE2MYeFsfi3cfNpJQp/ +wC9PSCIXrb/45mkS8KyjZpCrIPB9RV/m0MREq01TPom7rstZc4A1pD0Ot7AtUYS3 +e95zLyEmeLziPJ9fV4fgPmEudDr1uItnmV0LOskKlpg5sc0hhdrwYoobfkKt2dx6 +DqfMlcM1ZkUbLQYA4jwfpFJG4HmYvjL2xCJxM0ycjvMbqFN+4UjgYWVlRfOrm1V4 +Op86FjbRbV6OOCNhznotAg7mul4xtzrrTkK8o3YLBeJseDgl4AWuzXtNa9hE0XpK +9gJoEHUuBOOsamVh2HpXESFyE5CclOV7JSh541TlZKfnqfZYCg4JSbp0UijkawCL +5bJJUiGGMD9rZUxIAKQO1DvUEzptS7Jl6S3y5sbIIhilp4KfYWbSk3PPu9CnZD5b +LhEQp0elxnb/IL8PBgD+DpTeC8unkGKXUpbe9x0ISI6V1D6FmJq/FxNg7fMa3QCh +fGiAyoTm80ZETynj+blRaDO3gY4lTLa3Opubof1EqK2QmwXmpyvXEZNYcQfQ2CCS +GOWUCK8jEQamUPf1PWndZXJUmROI1WukhlL71V/ir6zQeVCv1wcwPwclJPnAe87u +pEklnCYpvsEldwHUX9u0BWzoULIEsi+ddtHmT0KTeF/DHRy0W15jIHbjFqhqckj1 +/6fmr7l7kIi/kN4vWe0F/0Q8IXX+cVMgbl3aIuaGcvENLGcoAsAtPGx88SfRgmfu +HK64Y7hx1m+Bo215rxJzZRjqHTBPp0BmCi+JKkaavIBrYRbsx20gveI4dzhLcUhB +kiT4Q7oz0/VbGHS1CEf9KFeS/YOGj57s4yHauSVI0XdP9kBRTWmXvBkzsooB2cKH +hwhUN7iiT1k717CiTNUT6Q/pcPFCyNuMoBBGQTU206JEgIjQvI3f8xMUMGmGVVQz +9/k716ycnhb2JZ/Q/AyQIeHJiQG2BBgBCAAgAhsMFiEEjrR8MQ4fJK44PYMvfN2A +ClLmXiYFAmDcEa4ACgkQfN2AClLmXiZxxQv/XaMN0hPCygtrQMbCsTNb34JbvJzh +hngPuUAfTbRHrR3YeATyQofNbL0DD3fvfzeFF8qESqvzCSZxS6dYsXPd4MCJTzlp +zYBZ2X0sOrgDqZvqCZKN72RKgdk0KvthdzAxsIm2dfcQOxxowXMxhJEXZmsFpusx +jKJxOcrfVRjXJnh9isY0NpCoqMQ+3k3wDJ3VGEHV7G+A+vFkWfbLJF5huQ96uaH9 +Uc+jUsREUH9G82ZBqpoioEN8Ith4VXpYnKdTMonK/+ZcyeraJZhXrvbjnEomKdzU +0pu4bt1HlLR3dcnpjN7b009MBf2xLgEfQk2nPZ4zzY+tDkxygtPllaB4dldFjBpT +j7Q+t49sWMjmlJUbLlHfuJ7nUUK5+cGjBsWVObAEcyfemHWCTVFnEa2BJslGC08X +rFcjRRcMEr9ct4551QFBHsv3O/Wp3/wqczYgE9itSnGT05w+4vLt4smG+dnEHjRJ +brMb2upTHa+kjktjdO96/BgSnKYqmNmPB/qB +=ivA/ +-----END PGP PRIVATE KEY BLOCK----- + """ + + DEFAULT_KEY_ID = "8EB47C310E1F24AE383D832F7CDD800A52E65E26" + + NON_DEFAULT_KEY = """ +-----BEGIN PGP PRIVATE KEY BLOCK----- + +lQVYBGBjI0ABDADGWBRp+t02emfzUlhrc1psqIhhecFm6Em0Kv33cfDpnfoMF1tK +Yy/4eLYIR7FmpdbFPcDThFNHbXJzBi00L1mp0XQE2l50h/2bDAAgREdZ+NVo5a7/ +RSZjauNU1PxW6pnXMehEh1tyIQmV78jAukaakwaicrpIenMiFUN3fAKHnLuFffA6 +t0f3LqJvTDhUw/o2vPgw5e6UDQhA1C+KTv1KXVrhJNo88a3hZqCZ76z3drKR411Q +zYgT4DUb8lfnbN+z2wfqT9oM5cegh2k86/mxAA3BYOeQrhmQo/7uhezcgbxtdGZr +YlbuaNDTSBrn10ZoaxLPo2dJe2zWxgD6MpvsGU1w3tcRW508qo/+xoWp2/pDzmok ++uhOh1NAj9zB05VWBz1r7oBgCOIKpkD/LD4VKq59etsZ/UnrYDwKdXWZp7uhshkU +M7N35lUJcR76a852dlMdrgpmY18+BP7+o7M+5ElHTiqQbMuE1nHTg8RgVpdV+tUx +dg6GWY/XHf5asm8AEQEAAQAL/A85epOp+GnymmEQfI3+5D178D//Lwu9n86vECB6 +xAHCqQtdjZnXpDp/1YUsL59P8nzgYRk7SoMskQDoQ/cB/XFuDOhEdMSgHaTVlnrj +ktCCq6rqGnUosyolbb64vIfVaSqd/5SnCStpAsnaBoBYrAu4ZmV4xfjDQWwn0q5s +u+r56mD0SkjPgbwk/b3qTVagVmf2OFzUgWwm1e/X+bA1oPag1NV8VS4hZPXswT4f +qhiyqUFOgP6vUBcqehkjkIDIl/54xII7/P5tp3LIZawvIXqHKNTqYPCqaCqCj+SL +vMYDIb6acjescfZoM71eAeHAANeFZzr/rwfBT+dEP6qKmPXNcvgE11X44ZCr04nT +zOV/uDUifEvKT5qgtyJpSFEVr7EXubJPKoNNhoYqq9z1pYU7IedX5BloiVXKOKTY +0pk7JkLqf3g5fYtXh/wol1owemITJy5V5PgaqZvk491LkI6S+kWC7ANYUg+TDPIW +afxW3E5N1CYV6XDAl0ZihbLcoQYAy0Ky/p/wayWKePyuPBLwx9O89GSONK2pQljZ +yaAgxPQ5/i1vx6LIMg7k/722bXR9W3zOjWOin4eatPM3d2hkG96HFvnBqXSmXOPV +03Xqy1/B5Tj8E9naLKUHE/OBQEc363DgLLG9db5HfPlpAngeppYPdyWkhzXyzkgS +PylaE5eW3zkdjEbYJ6RBTecTZEgBaMvJNPdWbn//frpP7kGvyiCg5Es+WjLInUZ6 +0sdifcNTCewzLXK80v/y5mVOdJhPBgD5zs9cYdyiQJayqAuOr+He1eMHMVUbm9as +qBmPrst398eBW9ZYF7eBfTSlUf6B+WnvyLKEGsUf/7IK0EWDlzoBuWzWiHjUAY1g +m9eTV2MnvCCCefqCErWwfFo2nWOasAZA9sKD+ICIBY4tbtvSl4yfLBzTMwSvs9ZS +K1ocPSYUnhm2miSWZ8RLZPH7roHQasNHpyq/AX7DahFf2S/bJ+46ZGZ8Pigr7hA+ 
+MjmpQ4qVdb5SaViPmZhAKO+PjuCHm+EF/2H0Y3Sl4eXgxZWoQVOUeXdWg9eMfYrj +XDtUMIFppV/QxbeztZKvJdfk64vt/crvLsOp0hOky9cKwY89r4QaHfexU3qR+qDq +UlMvR1rHk7dS5HZAtw0xKsFJNkuDxvBkMqv8Los8zp3nUl+U99dfZOArzNkW38wx +FPa0ixkC9za2BkDrWEA8vTnxw0A2upIFegDUhwOByrSyfPPnG3tKGeqt3Izb/kDk +Q9vmo+HgxBOguMIvlzbBfQZwtbd/gXzlvPqCtCJBbm90aGVyIFRlc3QgVXNlciA8 +dGVzdDJAdGVzdC5jb20+iQHOBBMBCAA4AhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4B +AheAFiEEapM5P1DF5qzT1vtFuTYhLttOFMAFAmDcEeEACgkQuTYhLttOFMDe0Qv/ +Qx/bzXztJ3BCc+CYAVDx7Kr37S68etwwLgcWzhG+CDeMB5F/QE+upKgxy2iaqQFR +mxfOMgf/TIQkUfkbaASzK1LpnesYO85pk7XYjoN1bYEHiXTkeW+bgB6aJIxrRmO2 +SrWasdBC/DsI3Mrya8YMt/TiHC6VpRJVxCe5vv7/kZC4CXrgTBnZocXx/YXimbke +poPMVdbvhYh6N0aGeS38jRKgyN10KXmhDTAQDwseVFavBWAjVfx3DEwjtK2Z2GbA +aL8JvAwRtqiPFkDMIKPL4UwxtXFws8SpMt6juroUkNyf6+BxNWYqmwXHPy8zCJAb +xkxIJMlEc+s7qQsP3fILOo8Xn+dVzJ5sa5AoARoXm1GMjsdqaKAzq99Dic/dHnaQ +Civev1PQsdwlYW2C2wNXNeIrxMndbDMFfNuZ6BnGHWJ/wjcp/pFs4YkyyZN8JH7L +hP2FO4Jgham3AuP13kC3Ivea7V6hR8QNcDZRwFPOMIX4tXwQv1T72+7DZGaA25O7 +nQVXBGBjI0ABDADJMBYIcG0Yil9YxFs7aYzNbd7alUAr89VbY8eIGPHP3INFPM1w +lBQCu+4j6xdEbhMpppLBZ9A5TEylP4C6qLtPa+oLtPeuSw8gHDE10XE4lbgPs376 +rL60XdImSOHhiduACUefYjqpcmFH9Bim1CC+koArYrSQJQx1Jri+OpnTaL/8UID0 +KzD/kEgMVGlHIVj9oJmb4+j9pW8I/g0wDSnIaEKFMxqu6SIVJ1GWj+MUMvZigjLC +sNCZd7PnbOC5VeU3SsXj6he74Jx0AmGMPWIHi9M0DjHO5d1cCbXTnud8xxM1bOh4 +7aCTnMK5cVyIr+adihgJpVVhrndSM8aklBPRgtozrGNCgF2CkYU2P1blxfloNr/8 +UZpM83o+s1aObBszzRNLxnpNORqoLqjfPtLEPQnagxE+4EapCq0NZ/x6yO5VTwwp +NljdFAEk40uGuKyn1QA3uNMHy5DlpLl+tU7t1KEovdZ+OVYsYKZhVzw0MTpKogk9 +JI7AN0q62ronPskAEQEAAQAL+O8BUSt1ZCVjPSIXIsrR+ZOSkszZwgJ1CWIoh0IH +YD2vmcMHGIhFYgBdgerpvhptKhaw7GcXDScEnYkyh5s4GE2hxclik1tbj/x1gYCN +8BNoyeDdPFxQG73qN12D99QYEctpOsz9xPLIDwmL0j1ehAfhwqHIAPm9Ca+i8JYM +x/F+35S/jnKDXRI+NVlwbiEyXKXxxIqNlpy9i8sDBGexO5H5Sg0zSN/B1duLekGD +biDw6gLc6bCgnS+0JOUpU07Z2fccMOY9ncjKGD2uIb/ePPUaek92GCQyq0eorCIV +brcQsRc5sSsNtnRKQTQtxioROeDg7kf2oWySeHTswlXW/219ihrSXgteHJd+rPm7 +DYLEeGLRny8bRKv8rQdAtApHaJE4dAATXeY4RYo4NlXHYaztGYtU6kiM/3zCfWAe +9Nn+Wh9jMTZrjefUCagS5r6ZqAh7veNo/vgIGaCLh0a1Ypa0Yk9KFrn3LYEM3zgk +3m3bn+7qgy5cUYXoJ3DGJJEhBgDPonpW0WElqLs5ZMem1ha85SC38F0IkAaSuzuz +v3eORiKWuyJGF32Q2XHa1RHQs1JtUKd8rxFer3b8Oq71zLz6JtVc9dmRudvgcJYX +0PC11F6WGjZFSSp39dajFp0A5DKUs39F3w7J1yuDM56TDIN810ywufGAHARY1pZb +UJAy/dTqjFnCbNjpAakor3hVzqxcmUG+7Y2X9c2AGncT1MqAQC3M8JZcuZvkK8A9 +cMk8B914ryYE7VsZMdMhyTwHmykGAPgNLLa3RDETeGeGCKWI+ZPOoU0ib5JtJZ1d +P3tNwfZKuZBZXKW9gqYqyBa/qhMip84SP30pr/TvulcdAFC759HK8sQZyJ6Vw24P +c+5ssRxrQUEw1rvJPWhmQCmCOZHBMQl5T6eaTOpR5u3aUKTMlxPKhK9eC1dCSTnI +/nyL8An3VKnLy+K/LI42YGphBVLLJmBewuTVDIJviWRdntiG8dElyEJMOywUltk3 +2CEmqgsD9tPO8rXZjnMrMn3gfsiaoQYA6/6/e2utkHr7gAoWBgrBBdqVHsvqh5Ro +2DjLAOpZItO/EdCJfDAmbTYOa04535sBDP2tcH/vipPOPpbr1Y9Y/mNsKCulNxed +yqAmEkKOcerLUP5UHju0AB6VBjHJFdU2mqT+UjPyBk7WeKXgFomyoYMv3KpNOFWR +xi0Xji4kKHbttA6Hy3UcGPr9acyUAlDYeKmxbSUYIPhw32bbGrX9+F5YriTufRsG +3jftQVo9zqdcQSD/5pUTMn3EYbEcohYB2YWJAbYEGAEIACACGwwWIQRqkzk/UMXm +rNPW+0W5NiEu204UwAUCYNwR6wAKCRC5NiEu204UwOPnC/92PgB1c3h9FBXH1maz +g29fndHIHH65VLgqMiQ7HAMojwRlT5Xnj5tdkCBmszRkv5vMvdJRa3ZY8Ed/Inqr +hxBFNzpjqX4oj/RYIQLKXWWfkTKYVLJFZFPCSo00jesw2gieu3Ke/Yy4gwhtNodA +v+s6QNMvffTW/K3XNrWDB0E7/LXbdidzhm+MBu8ov2tuC3tp9liLICiE1jv/2xT4 +CNSO6yphmk1/1zEYHS/mN9qJ2csBmte2cdmGyOcuVEHk3pyINNMDOamaURBJGRwF +XB5V7gTKUFU4jCp3chywKrBHJHxGGDUmPBmZtDtfWAOgL32drK7/KUyzZL/WO7Fj +akOI0hRDFOcqTYWL20H7+hAiX3oHMP7eou3L5C7wJ9+JMcACklN/WMjG9a536DFJ +4UgZ6HyKPP+wy837Hbe8b25kNMBwFgiaLR0lcgzxj7NyQWjVCMOEN+M55tRCjvL6 +ya6JVZCRbMXfdCy8lVPgtNQ6VlHaj8Wvnn2FLbWWO2n2r3s= +=9zU5 +-----END PGP PRIVATE KEY BLOCK----- +""" + + NON_DEFAULT_KEY_ID 
= "6A93393F50C5E6ACD3D6FB45B936212EDB4E14C0" + + def setUp(self): + super(PorcelainGpgTestCase, self).setUp() + self.gpg_dir = os.path.join(self.test_dir, "gpg") + os.mkdir(self.gpg_dir, mode=0o700) + # Ignore errors when deleting GNUPGHOME, because of race conditions + # (e.g. the gpg-agent socket having been deleted). See + # https://github.com/jelmer/dulwich/issues/1000 + self.addCleanup(shutil.rmtree, self.gpg_dir, ignore_errors=True) + self._old_gnupghome = os.environ.get("GNUPGHOME") + os.environ["GNUPGHOME"] = self.gpg_dir + if self._old_gnupghome is None: + self.addCleanup(os.environ.__delitem__, "GNUPGHOME") + else: + self.addCleanup(os.environ.__setitem__, "GNUPGHOME", self._old_gnupghome) + + def import_default_key(self): + subprocess.run( + ["gpg", "--import"], + stdout=subprocess.DEVNULL, + stderr=subprocess.DEVNULL, + input=PorcelainGpgTestCase.DEFAULT_KEY, + universal_newlines=True, + ) + + def import_non_default_key(self): + subprocess.run( + ["gpg", "--import"], + stdout=subprocess.DEVNULL, + stderr=subprocess.DEVNULL, + input=PorcelainGpgTestCase.NON_DEFAULT_KEY, + universal_newlines=True, + ) + + +class ArchiveTests(PorcelainTestCase): + """Tests for the archive command.""" + + def test_simple(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"refs/heads/master"] = c3.id + out = BytesIO() + err = BytesIO() + porcelain.archive( + self.repo.path, b"refs/heads/master", outstream=out, errstream=err + ) + self.assertEqual(b"", err.getvalue()) + tf = tarfile.TarFile(fileobj=out) + self.addCleanup(tf.close) + self.assertEqual([], tf.getnames()) + + +class UpdateServerInfoTests(PorcelainTestCase): + def test_simple(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"refs/heads/foo"] = c3.id + porcelain.update_server_info(self.repo.path) + self.assertTrue( + os.path.exists(os.path.join(self.repo.controldir(), "info", "refs")) + ) + + +class CommitTests(PorcelainTestCase): + def test_custom_author(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"refs/heads/foo"] = c3.id + sha = porcelain.commit( + self.repo.path, + message=b"Some message", + author=b"Joe ", + committer=b"Bob ", + ) + self.assertIsInstance(sha, bytes) + self.assertEqual(len(sha), 40) + + def test_unicode(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"refs/heads/foo"] = c3.id + sha = porcelain.commit( + self.repo.path, + message="Some message", + author="Joe ", + committer="Bob ", + ) + self.assertIsInstance(sha, bytes) + self.assertEqual(len(sha), 40) + + def test_no_verify(self): + if os.name != "posix": + self.skipTest("shell hook tests requires POSIX shell") + self.assertTrue(os.path.exists("/bin/sh")) + + hooks_dir = os.path.join(self.repo.controldir(), "hooks") + os.makedirs(hooks_dir, exist_ok=True) + self.addCleanup(shutil.rmtree, hooks_dir) + + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + + hook_fail = "#!/bin/sh\nexit 1" + + # hooks are executed in pre-commit, commit-msg order + # test commit-msg failure first, then pre-commit failure, then + # no_verify to skip both hooks + commit_msg = os.path.join(hooks_dir, "commit-msg") + with open(commit_msg, "w") as f: + f.write(hook_fail) + os.chmod(commit_msg, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + with self.assertRaises(CommitError): + 
porcelain.commit( + self.repo.path, + message="Some message", + author="Joe ", + committer="Bob ", + ) + + pre_commit = os.path.join(hooks_dir, "pre-commit") + with open(pre_commit, "w") as f: + f.write(hook_fail) + os.chmod(pre_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + with self.assertRaises(CommitError): + porcelain.commit( + self.repo.path, + message="Some message", + author="Joe ", + committer="Bob ", + ) + + sha = porcelain.commit( + self.repo.path, + message="Some message", + author="Joe ", + committer="Bob ", + no_verify=True, + ) + self.assertIsInstance(sha, bytes) + self.assertEqual(len(sha), 40) + + def test_timezone(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"refs/heads/foo"] = c3.id + sha = porcelain.commit( + self.repo.path, + message="Some message", + author="Joe ", + author_timezone=18000, + committer="Bob ", + commit_timezone=18000, + ) + self.assertIsInstance(sha, bytes) + self.assertEqual(len(sha), 40) + + commit = self.repo.get_object(sha) + self.assertEqual(commit._author_timezone, 18000) + self.assertEqual(commit._commit_timezone, 18000) + + os.environ["GIT_AUTHOR_DATE"] = os.environ["GIT_COMMITTER_DATE"] = "1995-11-20T19:12:08-0501" + + sha = porcelain.commit( + self.repo.path, + message="Some message", + author="Joe ", + committer="Bob ", + ) + self.assertIsInstance(sha, bytes) + self.assertEqual(len(sha), 40) + + commit = self.repo.get_object(sha) + self.assertEqual(commit._author_timezone, -18060) + self.assertEqual(commit._commit_timezone, -18060) + + del os.environ["GIT_AUTHOR_DATE"] + del os.environ["GIT_COMMITTER_DATE"] + local_timezone = time.localtime().tm_gmtoff + + sha = porcelain.commit( + self.repo.path, + message="Some message", + author="Joe ", + committer="Bob ", + ) + self.assertIsInstance(sha, bytes) + self.assertEqual(len(sha), 40) + + commit = self.repo.get_object(sha) + self.assertEqual(commit._author_timezone, local_timezone) + self.assertEqual(commit._commit_timezone, local_timezone) + + +@skipIf(platform.python_implementation() == "PyPy" or sys.platform == "win32", "gpgme not easily available or supported on Windows and PyPy") +class CommitSignTests(PorcelainGpgTestCase): + + def test_default_key(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + cfg = self.repo.get_config() + cfg.set(("user",), "signingKey", PorcelainGpgTestCase.DEFAULT_KEY_ID) + self.import_default_key() + + sha = porcelain.commit( + self.repo.path, + message="Some message", + author="Joe ", + committer="Bob ", + signoff=True, + ) + self.assertIsInstance(sha, bytes) + self.assertEqual(len(sha), 40) + + commit = self.repo.get_object(sha) + # GPG Signatures aren't deterministic, so we can't do a static assertion. 
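+        # verify() raises (e.g. gpg.errors.BadSignatures) on failure, so a
+        # plain call doubles as the assertion.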
+ commit.verify() + commit.verify(keyids=[PorcelainGpgTestCase.DEFAULT_KEY_ID]) + + self.import_non_default_key() + self.assertRaises( + gpg.errors.MissingSignatures, + commit.verify, + keyids=[PorcelainGpgTestCase.NON_DEFAULT_KEY_ID], + ) + + commit.committer = b"Alice " + self.assertRaises( + gpg.errors.BadSignatures, + commit.verify, + ) + + def test_non_default_key(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + cfg = self.repo.get_config() + cfg.set(("user",), "signingKey", PorcelainGpgTestCase.DEFAULT_KEY_ID) + self.import_non_default_key() + + sha = porcelain.commit( + self.repo.path, + message="Some message", + author="Joe ", + committer="Bob ", + signoff=PorcelainGpgTestCase.NON_DEFAULT_KEY_ID, + ) + self.assertIsInstance(sha, bytes) + self.assertEqual(len(sha), 40) + + commit = self.repo.get_object(sha) + # GPG Signatures aren't deterministic, so we can't do a static assertion. + commit.verify() + + +class TimezoneTests(PorcelainTestCase): + + def put_envs(self, value): + os.environ["GIT_AUTHOR_DATE"] = os.environ["GIT_COMMITTER_DATE"] = value + + def fallback(self, value): + self.put_envs(value) + self.assertRaises(porcelain.TimezoneFormatError, porcelain.get_user_timezones) + + def test_internal_format(self): + self.put_envs("0 +0500") + self.assertTupleEqual((18000, 18000), porcelain.get_user_timezones()) + + def test_rfc_2822(self): + self.put_envs("Mon, 20 Nov 1995 19:12:08 -0500") + self.assertTupleEqual((-18000, -18000), porcelain.get_user_timezones()) + + self.put_envs("Mon, 20 Nov 1995 19:12:08") + self.assertTupleEqual((0, 0), porcelain.get_user_timezones()) + + def test_iso8601(self): + self.put_envs("1995-11-20T19:12:08-0501") + self.assertTupleEqual((-18060, -18060), porcelain.get_user_timezones()) + + self.put_envs("1995-11-20T19:12:08+0501") + self.assertTupleEqual((18060, 18060), porcelain.get_user_timezones()) + + self.put_envs("1995-11-20T19:12:08-05:01") + self.assertTupleEqual((-18060, -18060), porcelain.get_user_timezones()) + + self.put_envs("1995-11-20 19:12:08-05") + self.assertTupleEqual((-18000, -18000), porcelain.get_user_timezones()) + + # https://github.com/git/git/blob/96b2d4fa927c5055adc5b1d08f10a5d7352e2989/t/t6300-for-each-ref.sh#L128 + self.put_envs("2006-07-03 17:18:44 +0200") + self.assertTupleEqual((7200, 7200), porcelain.get_user_timezones()) + + def test_missing_or_malformed(self): + # TODO: add more here + self.fallback("0 + 0500") + self.fallback("a +0500") + + self.fallback("1995-11-20T19:12:08") + self.fallback("1995-11-20T19:12:08-05:") + + self.fallback("1995.11.20") + self.fallback("11/20/1995") + self.fallback("20.11.1995") + + def test_different_envs(self): + os.environ["GIT_AUTHOR_DATE"] = "0 +0500" + os.environ["GIT_COMMITTER_DATE"] = "0 +0501" + self.assertTupleEqual((18000, 18060), porcelain.get_user_timezones()) + + def test_no_envs(self): + local_timezone = time.localtime().tm_gmtoff + + self.put_envs("0 +0500") + self.assertTupleEqual((18000, 18000), porcelain.get_user_timezones()) + + del os.environ["GIT_COMMITTER_DATE"] + self.assertTupleEqual((18000, local_timezone), porcelain.get_user_timezones()) + + self.put_envs("0 +0500") + del os.environ["GIT_AUTHOR_DATE"] + self.assertTupleEqual((local_timezone, 18000), porcelain.get_user_timezones()) + + self.put_envs("0 +0500") + del os.environ["GIT_AUTHOR_DATE"] + del os.environ["GIT_COMMITTER_DATE"] + self.assertTupleEqual((local_timezone, local_timezone), porcelain.get_user_timezones()) + + 
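+# Illustrative sketch (the helper name is ours and is not used by the tests):
+# get_user_timezones() parses GIT_AUTHOR_DATE / GIT_COMMITTER_DATE and returns
+# the (author, committer) UTC offsets in seconds, falling back to the local
+# zone for whichever variable is unset, as exercised by TimezoneTests above.
+def _example_user_timezones():
+    os.environ["GIT_AUTHOR_DATE"] = "2006-07-03 17:18:44 +0200"  # ISO 8601 -> 7200
+    os.environ["GIT_COMMITTER_DATE"] = "0 +0500"  # internal format -> 18000
+    try:
+        return porcelain.get_user_timezones()  # (7200, 18000)
+    finally:
+        del os.environ["GIT_AUTHOR_DATE"]
+        del os.environ["GIT_COMMITTER_DATE"]
+
+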
+class CleanTests(PorcelainTestCase):
+    def put_files(self, tracked, ignored, untracked, empty_dirs):
+        """Put the described files in the working directory"""
+        all_files = tracked | ignored | untracked
+        for file_path in all_files:
+            abs_path = os.path.join(self.repo.path, file_path)
+            # File may need to be written in a dir that doesn't exist yet, so
+            # create the parent dir(s) as necessary
+            parent_dir = os.path.dirname(abs_path)
+            try:
+                os.makedirs(parent_dir)
+            except FileExistsError:
+                pass
+            with open(abs_path, "w") as f:
+                f.write("")
+
+        with open(os.path.join(self.repo.path, ".gitignore"), "w") as f:
+            f.write("\n".join(ignored))
+
+        for dir_path in empty_dirs:
+            os.mkdir(os.path.join(self.repo.path, dir_path))
+
+        files_to_add = [os.path.join(self.repo.path, t) for t in tracked]
+        porcelain.add(repo=self.repo.path, paths=files_to_add)
+        porcelain.commit(repo=self.repo.path, message="init commit")
+
+    def assert_wd(self, expected_paths):
+        """Assert that files and dirs in the working directory match expected_paths"""
+        control_dir_rel = os.path.relpath(self.repo._controldir, self.repo.path)
+
+        # normalize paths to simplify comparison across platforms
+        found_paths = {
+            os.path.normpath(p)
+            for p in flat_walk_dir(self.repo.path)
+            if p.split(os.sep)[0] != control_dir_rel
+        }
+        norm_expected_paths = {os.path.normpath(p) for p in expected_paths}
+        self.assertEqual(found_paths, norm_expected_paths)
+
+    def test_from_root(self):
+        self.put_files(
+            tracked={"tracked_file", "tracked_dir/tracked_file", ".gitignore"},
+            ignored={"ignored_file"},
+            untracked={
+                "untracked_file",
+                "tracked_dir/untracked_dir/untracked_file",
+                "untracked_dir/untracked_dir/untracked_file",
+            },
+            empty_dirs={"empty_dir"},
+        )
+
+        porcelain.clean(repo=self.repo.path, target_dir=self.repo.path)
+
+        self.assert_wd(
+            {
+                "tracked_file",
+                "tracked_dir/tracked_file",
+                ".gitignore",
+                "ignored_file",
+                "tracked_dir",
+            }
+        )
+
+    def test_from_subdir(self):
+        self.put_files(
+            tracked={"tracked_file", "tracked_dir/tracked_file", ".gitignore"},
+            ignored={"ignored_file"},
+            untracked={
+                "untracked_file",
+                "tracked_dir/untracked_dir/untracked_file",
+                "untracked_dir/untracked_dir/untracked_file",
+            },
+            empty_dirs={"empty_dir"},
+        )
+
+        porcelain.clean(
+            repo=self.repo,
+            target_dir=os.path.join(self.repo.path, "untracked_dir"),
+        )
+
+        self.assert_wd(
+            {
+                "tracked_file",
+                "tracked_dir/tracked_file",
+                ".gitignore",
+                "ignored_file",
+                "untracked_file",
+                "tracked_dir/untracked_dir/untracked_file",
+                "empty_dir",
+                "untracked_dir",
+                "tracked_dir",
+                "tracked_dir/untracked_dir",
+            }
+        )
+
+
+class CloneTests(PorcelainTestCase):
+    def test_simple_local(self):
+        f1_1 = make_object(Blob, data=b"f1")
+        commit_spec = [[1], [2, 1], [3, 1, 2]]
+        trees = {
+            1: [(b"f1", f1_1), (b"f2", f1_1)],
+            2: [(b"f1", f1_1), (b"f2", f1_1)],
+            3: [(b"f1", f1_1), (b"f2", f1_1)],
+        }
+
+        c1, c2, c3 = build_commit_graph(self.repo.object_store, commit_spec, trees)
+        self.repo.refs[b"refs/heads/master"] = c3.id
+        self.repo.refs[b"refs/tags/foo"] = c3.id
+        target_path = tempfile.mkdtemp()
+        errstream = BytesIO()
+        self.addCleanup(shutil.rmtree, target_path)
+        r = porcelain.clone(
+            self.repo.path, target_path, checkout=False, errstream=errstream
+        )
+        self.addCleanup(r.close)
+        self.assertEqual(r.path, target_path)
+        target_repo = Repo(target_path)
+        self.assertEqual(0, len(target_repo.open_index()))
+        self.assertEqual(c3.id, target_repo.refs[b"refs/tags/foo"])
+        self.assertNotIn(b"f1", os.listdir(target_path))
+        self.assertNotIn(b"f2",
os.listdir(target_path)) + c = r.get_config() + encoded_path = self.repo.path + if not isinstance(encoded_path, bytes): + encoded_path = encoded_path.encode("utf-8") + self.assertEqual(encoded_path, c.get((b"remote", b"origin"), b"url")) + self.assertEqual( + b"+refs/heads/*:refs/remotes/origin/*", + c.get((b"remote", b"origin"), b"fetch"), + ) + + def test_simple_local_with_checkout(self): + f1_1 = make_object(Blob, data=b"f1") + commit_spec = [[1], [2, 1], [3, 1, 2]] + trees = { + 1: [(b"f1", f1_1), (b"f2", f1_1)], + 2: [(b"f1", f1_1), (b"f2", f1_1)], + 3: [(b"f1", f1_1), (b"f2", f1_1)], + } + + c1, c2, c3 = build_commit_graph(self.repo.object_store, commit_spec, trees) + self.repo.refs[b"refs/heads/master"] = c3.id + target_path = tempfile.mkdtemp() + errstream = BytesIO() + self.addCleanup(shutil.rmtree, target_path) + with porcelain.clone( + self.repo.path, target_path, checkout=True, errstream=errstream + ) as r: + self.assertEqual(r.path, target_path) + with Repo(target_path) as r: + self.assertEqual(r.head(), c3.id) + self.assertIn("f1", os.listdir(target_path)) + self.assertIn("f2", os.listdir(target_path)) + + def test_bare_local_with_checkout(self): + f1_1 = make_object(Blob, data=b"f1") + commit_spec = [[1], [2, 1], [3, 1, 2]] + trees = { + 1: [(b"f1", f1_1), (b"f2", f1_1)], + 2: [(b"f1", f1_1), (b"f2", f1_1)], + 3: [(b"f1", f1_1), (b"f2", f1_1)], + } + + c1, c2, c3 = build_commit_graph(self.repo.object_store, commit_spec, trees) + self.repo.refs[b"refs/heads/master"] = c3.id + target_path = tempfile.mkdtemp() + errstream = BytesIO() + self.addCleanup(shutil.rmtree, target_path) + with porcelain.clone( + self.repo.path, target_path, bare=True, errstream=errstream + ) as r: + self.assertEqual(r.path, target_path) + with Repo(target_path) as r: + r.head() + self.assertRaises(NoIndexPresent, r.open_index) + self.assertNotIn(b"f1", os.listdir(target_path)) + self.assertNotIn(b"f2", os.listdir(target_path)) + + def test_no_checkout_with_bare(self): + f1_1 = make_object(Blob, data=b"f1") + commit_spec = [[1]] + trees = {1: [(b"f1", f1_1), (b"f2", f1_1)]} + + (c1,) = build_commit_graph(self.repo.object_store, commit_spec, trees) + self.repo.refs[b"refs/heads/master"] = c1.id + self.repo.refs[b"HEAD"] = c1.id + target_path = tempfile.mkdtemp() + errstream = BytesIO() + self.addCleanup(shutil.rmtree, target_path) + self.assertRaises( + porcelain.Error, + porcelain.clone, + self.repo.path, + target_path, + checkout=True, + bare=True, + errstream=errstream, + ) + + def test_no_head_no_checkout(self): + f1_1 = make_object(Blob, data=b"f1") + commit_spec = [[1]] + trees = {1: [(b"f1", f1_1), (b"f2", f1_1)]} + + (c1,) = build_commit_graph(self.repo.object_store, commit_spec, trees) + self.repo.refs[b"refs/heads/master"] = c1.id + target_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, target_path) + errstream = BytesIO() + r = porcelain.clone( + self.repo.path, target_path, checkout=True, errstream=errstream + ) + r.close() + + def test_no_head_no_checkout_outstream_errstream_autofallback(self): + f1_1 = make_object(Blob, data=b"f1") + commit_spec = [[1]] + trees = {1: [(b"f1", f1_1), (b"f2", f1_1)]} + + (c1,) = build_commit_graph(self.repo.object_store, commit_spec, trees) + self.repo.refs[b"refs/heads/master"] = c1.id + target_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, target_path) + errstream = porcelain.NoneStream() + r = porcelain.clone( + self.repo.path, target_path, checkout=True, errstream=errstream + ) + r.close() + + def test_source_broken(self): + with 
tempfile.TemporaryDirectory() as parent: + target_path = os.path.join(parent, "target") + self.assertRaises( + Exception, porcelain.clone, "/nonexistent/repo", target_path + ) + self.assertFalse(os.path.exists(target_path)) + + def test_fetch_symref(self): + f1_1 = make_object(Blob, data=b"f1") + trees = {1: [(b"f1", f1_1), (b"f2", f1_1)]} + [c1] = build_commit_graph(self.repo.object_store, [[1]], trees) + self.repo.refs.set_symbolic_ref(b"HEAD", b"refs/heads/else") + self.repo.refs[b"refs/heads/else"] = c1.id + target_path = tempfile.mkdtemp() + errstream = BytesIO() + self.addCleanup(shutil.rmtree, target_path) + r = porcelain.clone( + self.repo.path, target_path, checkout=False, errstream=errstream + ) + self.addCleanup(r.close) + self.assertEqual(r.path, target_path) + target_repo = Repo(target_path) + self.assertEqual(0, len(target_repo.open_index())) + self.assertEqual(c1.id, target_repo.refs[b"refs/heads/else"]) + self.assertEqual(c1.id, target_repo.refs[b"HEAD"]) + self.assertEqual( + {b"HEAD": b"refs/heads/else", b"refs/remotes/origin/HEAD": b"refs/remotes/origin/else"}, + target_repo.refs.get_symrefs(), + ) + + +class InitTests(TestCase): + def test_non_bare(self): + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + porcelain.init(repo_dir) + + def test_bare(self): + repo_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + porcelain.init(repo_dir, bare=True) + + +class AddTests(PorcelainTestCase): + def test_add_default_paths(self): + # create a file for initial commit + fullpath = os.path.join(self.repo.path, "blah") + with open(fullpath, "w") as f: + f.write("\n") + porcelain.add(repo=self.repo.path, paths=[fullpath]) + porcelain.commit( + repo=self.repo.path, + message=b"test", + author=b"test ", + committer=b"test ", + ) + + # Add a second test file and a file in a directory + with open(os.path.join(self.repo.path, "foo"), "w") as f: + f.write("\n") + os.mkdir(os.path.join(self.repo.path, "adir")) + with open(os.path.join(self.repo.path, "adir", "afile"), "w") as f: + f.write("\n") + cwd = os.getcwd() + try: + os.chdir(self.repo.path) + self.assertEqual(set(["foo", "blah", "adir", ".git"]), set(os.listdir("."))) + self.assertEqual( + (["foo", os.path.join("adir", "afile")], set()), + porcelain.add(self.repo.path), + ) + finally: + os.chdir(cwd) + + # Check that foo was added and nothing in .git was modified + index = self.repo.open_index() + self.assertEqual(sorted(index), [b"adir/afile", b"blah", b"foo"]) + + def test_add_default_paths_subdir(self): + os.mkdir(os.path.join(self.repo.path, "foo")) + with open(os.path.join(self.repo.path, "blah"), "w") as f: + f.write("\n") + with open(os.path.join(self.repo.path, "foo", "blie"), "w") as f: + f.write("\n") + + cwd = os.getcwd() + try: + os.chdir(os.path.join(self.repo.path, "foo")) + porcelain.add(repo=self.repo.path) + porcelain.commit( + repo=self.repo.path, + message=b"test", + author=b"test ", + committer=b"test ", + ) + finally: + os.chdir(cwd) + + index = self.repo.open_index() + self.assertEqual(sorted(index), [b"foo/blie"]) + + def test_add_file(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(self.repo.path, paths=[fullpath]) + self.assertIn(b"foo", self.repo.open_index()) + + def test_add_ignored(self): + with open(os.path.join(self.repo.path, ".gitignore"), "w") as f: + f.write("foo\nsubdir/") + with open(os.path.join(self.repo.path, "foo"), "w") as f: + f.write("BAR") + with 
open(os.path.join(self.repo.path, "bar"), "w") as f: + f.write("BAR") + os.mkdir(os.path.join(self.repo.path, "subdir")) + with open(os.path.join(self.repo.path, "subdir", "baz"), "w") as f: + f.write("BAZ") + (added, ignored) = porcelain.add( + self.repo.path, + paths=[ + os.path.join(self.repo.path, "foo"), + os.path.join(self.repo.path, "bar"), + os.path.join(self.repo.path, "subdir"), + ], + ) + self.assertIn(b"bar", self.repo.open_index()) + self.assertEqual(set(["bar"]), set(added)) + self.assertEqual(set(["foo", os.path.join("subdir", "")]), ignored) + + def test_add_file_absolute_path(self): + # Absolute paths are (not yet) supported + with open(os.path.join(self.repo.path, "foo"), "w") as f: + f.write("BAR") + porcelain.add(self.repo, paths=[os.path.join(self.repo.path, "foo")]) + self.assertIn(b"foo", self.repo.open_index()) + + def test_add_not_in_repo(self): + with open(os.path.join(self.test_dir, "foo"), "w") as f: + f.write("BAR") + self.assertRaises( + ValueError, + porcelain.add, + self.repo, + paths=[os.path.join(self.test_dir, "foo")], + ) + self.assertRaises( + (ValueError, FileNotFoundError), + porcelain.add, + self.repo, + paths=["../foo"], + ) + self.assertEqual([], list(self.repo.open_index())) + + def test_add_file_clrf_conversion(self): + # Set the right configuration to the repo + c = self.repo.get_config() + c.set("core", "autocrlf", "input") + c.write_to_path() + + # Add a file with CRLF line-ending + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "wb") as f: + f.write(b"line1\r\nline2") + porcelain.add(self.repo.path, paths=[fullpath]) + + # The line-endings should have been converted to LF + index = self.repo.open_index() + self.assertIn(b"foo", index) + + entry = index[b"foo"] + blob = self.repo[entry.sha] + self.assertEqual(blob.data, b"line1\nline2") + + +class RemoveTests(PorcelainTestCase): + def test_remove_file(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(self.repo.path, paths=[fullpath]) + porcelain.commit( + repo=self.repo, + message=b"test", + author=b"test ", + committer=b"test ", + ) + self.assertTrue(os.path.exists(os.path.join(self.repo.path, "foo"))) + cwd = os.getcwd() + try: + os.chdir(self.repo.path) + porcelain.remove(self.repo.path, paths=["foo"]) + finally: + os.chdir(cwd) + self.assertFalse(os.path.exists(os.path.join(self.repo.path, "foo"))) + + def test_remove_file_staged(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + cwd = os.getcwd() + try: + os.chdir(self.repo.path) + porcelain.add(self.repo.path, paths=[fullpath]) + self.assertRaises(Exception, porcelain.rm, self.repo.path, paths=["foo"]) + finally: + os.chdir(cwd) + + def test_remove_file_removed_on_disk(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(self.repo.path, paths=[fullpath]) + cwd = os.getcwd() + try: + os.chdir(self.repo.path) + os.remove(fullpath) + porcelain.remove(self.repo.path, paths=["foo"]) + finally: + os.chdir(cwd) + self.assertFalse(os.path.exists(os.path.join(self.repo.path, "foo"))) + + +class LogTests(PorcelainTestCase): + def test_simple(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + outstream = StringIO() + porcelain.log(self.repo.path, outstream=outstream) + self.assertEqual(3, outstream.getvalue().count("-" * 50)) + + def 
test_max_entries(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + outstream = StringIO() + porcelain.log(self.repo.path, outstream=outstream, max_entries=1) + self.assertEqual(1, outstream.getvalue().count("-" * 50)) + + +class ShowTests(PorcelainTestCase): + def test_nolist(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + outstream = StringIO() + porcelain.show(self.repo.path, objects=c3.id, outstream=outstream) + self.assertTrue(outstream.getvalue().startswith("-" * 50)) + + def test_simple(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + outstream = StringIO() + porcelain.show(self.repo.path, objects=[c3.id], outstream=outstream) + self.assertTrue(outstream.getvalue().startswith("-" * 50)) + + def test_blob(self): + b = Blob.from_string(b"The Foo\n") + self.repo.object_store.add_object(b) + outstream = StringIO() + porcelain.show(self.repo.path, objects=[b.id], outstream=outstream) + self.assertEqual(outstream.getvalue(), "The Foo\n") + + def test_commit_no_parent(self): + a = Blob.from_string(b"The Foo\n") + ta = Tree() + ta.add(b"somename", 0o100644, a.id) + ca = make_commit(tree=ta.id) + self.repo.object_store.add_objects([(a, None), (ta, None), (ca, None)]) + outstream = StringIO() + porcelain.show(self.repo.path, objects=[ca.id], outstream=outstream) + self.assertMultiLineEqual( + outstream.getvalue(), + """\ +-------------------------------------------------- +commit: 344da06c1bb85901270b3e8875c988a027ec087d +Author: Test Author +Committer: Test Committer +Date: Fri Jan 01 2010 00:00:00 +0000 + +Test message. + +diff --git a/somename b/somename +new file mode 100644 +index 0000000..ea5c7bf +--- /dev/null ++++ b/somename +@@ -0,0 +1 @@ ++The Foo +""", + ) + + def test_tag(self): + a = Blob.from_string(b"The Foo\n") + ta = Tree() + ta.add(b"somename", 0o100644, a.id) + ca = make_commit(tree=ta.id) + self.repo.object_store.add_objects([(a, None), (ta, None), (ca, None)]) + porcelain.tag_create( + self.repo.path, + b"tryme", + b"foo ", + b"bar", + annotated=True, + objectish=ca.id, + tag_time=1552854211, + tag_timezone=0, + ) + outstream = StringIO() + porcelain.show(self.repo, objects=[b"refs/tags/tryme"], outstream=outstream) + self.maxDiff = None + self.assertMultiLineEqual( + outstream.getvalue(), + """\ +Tagger: foo +Date: Sun Mar 17 2019 20:23:31 +0000 + +bar + +-------------------------------------------------- +commit: 344da06c1bb85901270b3e8875c988a027ec087d +Author: Test Author +Committer: Test Committer +Date: Fri Jan 01 2010 00:00:00 +0000 + +Test message. 
+ +diff --git a/somename b/somename +new file mode 100644 +index 0000000..ea5c7bf +--- /dev/null ++++ b/somename +@@ -0,0 +1 @@ ++The Foo +""", + ) + + def test_commit_with_change(self): + a = Blob.from_string(b"The Foo\n") + ta = Tree() + ta.add(b"somename", 0o100644, a.id) + ca = make_commit(tree=ta.id) + b = Blob.from_string(b"The Bar\n") + tb = Tree() + tb.add(b"somename", 0o100644, b.id) + cb = make_commit(tree=tb.id, parents=[ca.id]) + self.repo.object_store.add_objects( + [ + (a, None), + (b, None), + (ta, None), + (tb, None), + (ca, None), + (cb, None), + ] + ) + outstream = StringIO() + porcelain.show(self.repo.path, objects=[cb.id], outstream=outstream) + self.assertMultiLineEqual( + outstream.getvalue(), + """\ +-------------------------------------------------- +commit: 2c6b6c9cb72c130956657e1fdae58e5b103744fa +Author: Test Author +Committer: Test Committer +Date: Fri Jan 01 2010 00:00:00 +0000 + +Test message. + +diff --git a/somename b/somename +index ea5c7bf..fd38bcb 100644 +--- a/somename ++++ b/somename +@@ -1 +1 @@ +-The Foo ++The Bar +""", + ) + + +class SymbolicRefTests(PorcelainTestCase): + def test_set_wrong_symbolic_ref(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + + self.assertRaises( + porcelain.Error, porcelain.symbolic_ref, self.repo.path, b"foobar" + ) + + def test_set_force_wrong_symbolic_ref(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + + porcelain.symbolic_ref(self.repo.path, b"force_foobar", force=True) + + # test if we actually changed the file + with self.repo.get_named_file("HEAD") as f: + new_ref = f.read() + self.assertEqual(new_ref, b"ref: refs/heads/force_foobar\n") + + def test_set_symbolic_ref(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + + porcelain.symbolic_ref(self.repo.path, b"master") + + def test_set_symbolic_ref_other_than_master(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, + [[1], [2, 1], [3, 1, 2]], + attrs=dict(refs="develop"), + ) + self.repo.refs[b"HEAD"] = c3.id + self.repo.refs[b"refs/heads/develop"] = c3.id + + porcelain.symbolic_ref(self.repo.path, b"develop") + + # test if we actually changed the file + with self.repo.get_named_file("HEAD") as f: + new_ref = f.read() + self.assertEqual(new_ref, b"ref: refs/heads/develop\n") + + +class DiffTreeTests(PorcelainTestCase): + def test_empty(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + outstream = BytesIO() + porcelain.diff_tree(self.repo.path, c2.tree, c3.tree, outstream=outstream) + self.assertEqual(outstream.getvalue(), b"") + + +class CommitTreeTests(PorcelainTestCase): + def test_simple(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + b = Blob() + b.data = b"foo the bar" + t = Tree() + t.add(b"somename", 0o100644, b.id) + self.repo.object_store.add_object(t) + self.repo.object_store.add_object(b) + sha = porcelain.commit_tree( + self.repo.path, + t.id, + message=b"Withcommit.", + author=b"Joe ", + committer=b"Jane ", + ) + self.assertIsInstance(sha, bytes) + self.assertEqual(len(sha), 40) + + +class RevListTests(PorcelainTestCase): + def test_simple(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + outstream = BytesIO() + 
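+        # rev_list writes the commits reachable from the given ids to
+        # outstream, newest first, one hex sha per line, matching
+        # `git rev-list`; the assertion below pins the c3, c2, c1 ordering.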
porcelain.rev_list(self.repo.path, [c3.id], outstream=outstream) + self.assertEqual( + c3.id + b"\n" + c2.id + b"\n" + c1.id + b"\n", outstream.getvalue() + ) + + +@skipIf(platform.python_implementation() == "PyPy" or sys.platform == "win32", "gpgme not easily available or supported on Windows and PyPy") +class TagCreateSignTests(PorcelainGpgTestCase): + + def test_default_key(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + cfg = self.repo.get_config() + cfg.set(("user",), "signingKey", PorcelainGpgTestCase.DEFAULT_KEY_ID) + self.import_default_key() + + porcelain.tag_create( + self.repo.path, + b"tryme", + b"foo ", + b"bar", + annotated=True, + sign=True, + ) + + tags = self.repo.refs.as_dict(b"refs/tags") + self.assertEqual(list(tags.keys()), [b"tryme"]) + tag = self.repo[b"refs/tags/tryme"] + self.assertIsInstance(tag, Tag) + self.assertEqual(b"foo ", tag.tagger) + self.assertEqual(b"bar\n", tag.message) + self.assertRecentTimestamp(tag.tag_time) + tag = self.repo[b'refs/tags/tryme'] + # GPG Signatures aren't deterministic, so we can't do a static assertion. + tag.verify() + tag.verify(keyids=[PorcelainGpgTestCase.DEFAULT_KEY_ID]) + + self.import_non_default_key() + self.assertRaises( + gpg.errors.MissingSignatures, + tag.verify, + keyids=[PorcelainGpgTestCase.NON_DEFAULT_KEY_ID], + ) + + tag._chunked_text = [b"bad data", tag._signature] + self.assertRaises( + gpg.errors.BadSignatures, + tag.verify, + ) + + def test_non_default_key(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + cfg = self.repo.get_config() + cfg.set(("user",), "signingKey", PorcelainGpgTestCase.DEFAULT_KEY_ID) + self.import_non_default_key() + + porcelain.tag_create( + self.repo.path, + b"tryme", + b"foo ", + b"bar", + annotated=True, + sign=PorcelainGpgTestCase.NON_DEFAULT_KEY_ID, + ) + + tags = self.repo.refs.as_dict(b"refs/tags") + self.assertEqual(list(tags.keys()), [b"tryme"]) + tag = self.repo[b"refs/tags/tryme"] + self.assertIsInstance(tag, Tag) + self.assertEqual(b"foo ", tag.tagger) + self.assertEqual(b"bar\n", tag.message) + self.assertRecentTimestamp(tag.tag_time) + tag = self.repo[b'refs/tags/tryme'] + # GPG Signatures aren't deterministic, so we can't do a static assertion. 
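+        # sign= accepts True (use user.signingKey from the config, as in
+        # test_default_key above) or an explicit key id, as here; verify()
+        # then checks the tag's signature against the imported keyring.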
+ tag.verify() + + +class TagCreateTests(PorcelainTestCase): + + def test_annotated(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + + porcelain.tag_create( + self.repo.path, + b"tryme", + b"foo ", + b"bar", + annotated=True, + ) + + tags = self.repo.refs.as_dict(b"refs/tags") + self.assertEqual(list(tags.keys()), [b"tryme"]) + tag = self.repo[b"refs/tags/tryme"] + self.assertIsInstance(tag, Tag) + self.assertEqual(b"foo ", tag.tagger) + self.assertEqual(b"bar\n", tag.message) + self.assertRecentTimestamp(tag.tag_time) + + def test_unannotated(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + + porcelain.tag_create(self.repo.path, b"tryme", annotated=False) + + tags = self.repo.refs.as_dict(b"refs/tags") + self.assertEqual(list(tags.keys()), [b"tryme"]) + self.repo[b"refs/tags/tryme"] + self.assertEqual(list(tags.values()), [self.repo.head()]) + + def test_unannotated_unicode(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + + porcelain.tag_create(self.repo.path, "tryme", annotated=False) + + tags = self.repo.refs.as_dict(b"refs/tags") + self.assertEqual(list(tags.keys()), [b"tryme"]) + self.repo[b"refs/tags/tryme"] + self.assertEqual(list(tags.values()), [self.repo.head()]) + + +class TagListTests(PorcelainTestCase): + def test_empty(self): + tags = porcelain.tag_list(self.repo.path) + self.assertEqual([], tags) + + def test_simple(self): + self.repo.refs[b"refs/tags/foo"] = b"aa" * 20 + self.repo.refs[b"refs/tags/bar/bla"] = b"bb" * 20 + tags = porcelain.tag_list(self.repo.path) + + self.assertEqual([b"bar/bla", b"foo"], tags) + + +class TagDeleteTests(PorcelainTestCase): + def test_simple(self): + [c1] = build_commit_graph(self.repo.object_store, [[1]]) + self.repo[b"HEAD"] = c1.id + porcelain.tag_create(self.repo, b"foo") + self.assertIn(b"foo", porcelain.tag_list(self.repo)) + porcelain.tag_delete(self.repo, b"foo") + self.assertNotIn(b"foo", porcelain.tag_list(self.repo)) + + +class ResetTests(PorcelainTestCase): + def test_hard_head(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(self.repo.path, paths=[fullpath]) + porcelain.commit( + self.repo.path, + message=b"Some message", + committer=b"Jane ", + author=b"John ", + ) + + with open(os.path.join(self.repo.path, "foo"), "wb") as f: + f.write(b"OOH") + + porcelain.reset(self.repo, "hard", b"HEAD") + + index = self.repo.open_index() + changes = list( + tree_changes( + self.repo, + index.commit(self.repo.object_store), + self.repo[b"HEAD"].tree, + ) + ) + + self.assertEqual([], changes) + + def test_hard_commit(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(self.repo.path, paths=[fullpath]) + sha = porcelain.commit( + self.repo.path, + message=b"Some message", + committer=b"Jane ", + author=b"John ", + ) + + with open(fullpath, "wb") as f: + f.write(b"BAZ") + porcelain.add(self.repo.path, paths=[fullpath]) + porcelain.commit( + self.repo.path, + message=b"Some other message", + committer=b"Jane ", + author=b"John ", + ) + + porcelain.reset(self.repo, "hard", sha) + + index = self.repo.open_index() + changes = list( + tree_changes( + self.repo, + index.commit(self.repo.object_store), + self.repo[sha].tree, + ) + ) + + self.assertEqual([], 
changes) + + +class ResetFileTests(PorcelainTestCase): + + def test_reset_modify_file_to_commit(self): + file = 'foo' + full_path = os.path.join(self.repo.path, file) + + with open(full_path, 'w') as f: + f.write('hello') + porcelain.add(self.repo, paths=[full_path]) + sha = porcelain.commit( + self.repo, + message=b"unitest", + committer=b"Jane ", + author=b"John ", + ) + with open(full_path, 'a') as f: + f.write('something new') + porcelain.reset_file(self.repo, file, target=sha) + + with open(full_path, 'r') as f: + self.assertEqual('hello', f.read()) + + def test_reset_remove_file_to_commit(self): + file = 'foo' + full_path = os.path.join(self.repo.path, file) + + with open(full_path, 'w') as f: + f.write('hello') + porcelain.add(self.repo, paths=[full_path]) + sha = porcelain.commit( + self.repo, + message=b"unitest", + committer=b"Jane ", + author=b"John ", + ) + os.remove(full_path) + porcelain.reset_file(self.repo, file, target=sha) + + with open(full_path, 'r') as f: + self.assertEqual('hello', f.read()) + + def test_resetfile_with_dir(self): + os.mkdir(os.path.join(self.repo.path, 'new_dir')) + full_path = os.path.join(self.repo.path, 'new_dir', 'foo') + + with open(full_path, 'w') as f: + f.write('hello') + porcelain.add(self.repo, paths=[full_path]) + sha = porcelain.commit( + self.repo, + message=b"unitest", + committer=b"Jane ", + author=b"John ", + ) + with open(full_path, 'a') as f: + f.write('something new') + porcelain.commit( + self.repo, + message=b"unitest 2", + committer=b"Jane ", + author=b"John ", + ) + porcelain.reset_file(self.repo, os.path.join('new_dir', 'foo'), target=sha) + with open(full_path, 'r') as f: + self.assertEqual('hello', f.read()) + + +class SubmoduleTests(PorcelainTestCase): + + def test_empty(self): + porcelain.commit( + repo=self.repo.path, + message=b"init", + author=b"author ", + committer=b"committer ", + ) + + self.assertEqual([], list(porcelain.submodule_list(self.repo))) + + def test_add(self): + porcelain.submodule_add(self.repo, "../bar.git", "bar") + with open('%s/.gitmodules' % self.repo.path, 'r') as f: + self.assertEqual("""\ +[submodule "bar"] +\turl = ../bar.git +\tpath = bar +""", f.read()) + + def test_init(self): + porcelain.submodule_add(self.repo, "../bar.git", "bar") + porcelain.submodule_init(self.repo) + + +class PushTests(PorcelainTestCase): + def test_simple(self): + """ + Basic test of porcelain push where self.repo is the remote. First + clone the remote, commit a file to the clone, then push the changes + back to the remote. 
+ """ + outstream = BytesIO() + errstream = BytesIO() + + porcelain.commit( + repo=self.repo.path, + message=b"init", + author=b"author ", + committer=b"committer ", + ) + + # Setup target repo cloned from temp test repo + clone_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, clone_path) + target_repo = porcelain.clone( + self.repo.path, target=clone_path, errstream=errstream + ) + try: + self.assertEqual(target_repo[b"HEAD"], self.repo[b"HEAD"]) + finally: + target_repo.close() + + # create a second file to be pushed back to origin + handle, fullpath = tempfile.mkstemp(dir=clone_path) + os.close(handle) + porcelain.add(repo=clone_path, paths=[fullpath]) + porcelain.commit( + repo=clone_path, + message=b"push", + author=b"author ", + committer=b"committer ", + ) + + # Setup a non-checked out branch in the remote + refs_path = b"refs/heads/foo" + new_id = self.repo[b"HEAD"].id + self.assertNotEqual(new_id, ZERO_SHA) + self.repo.refs[refs_path] = new_id + + # Push to the remote + porcelain.push( + clone_path, + "origin", + b"HEAD:" + refs_path, + outstream=outstream, + errstream=errstream, + ) + + self.assertEqual( + target_repo.refs[b"refs/remotes/origin/foo"], + target_repo.refs[b"HEAD"], + ) + + # Check that the target and source + with Repo(clone_path) as r_clone: + self.assertEqual( + { + b"HEAD": new_id, + b"refs/heads/foo": r_clone[b"HEAD"].id, + b"refs/heads/master": new_id, + }, + self.repo.get_refs(), + ) + self.assertEqual(r_clone[b"HEAD"].id, self.repo[refs_path].id) + + # Get the change in the target repo corresponding to the add + # this will be in the foo branch. + change = list( + tree_changes( + self.repo, + self.repo[b"HEAD"].tree, + self.repo[b"refs/heads/foo"].tree, + ) + )[0] + self.assertEqual( + os.path.basename(fullpath), change.new.path.decode("ascii") + ) + + def test_local_missing(self): + """Pushing a new branch.""" + outstream = BytesIO() + errstream = BytesIO() + + # Setup target repo cloned from temp test repo + clone_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, clone_path) + target_repo = porcelain.init(clone_path) + target_repo.close() + + self.assertRaises( + porcelain.Error, + porcelain.push, + self.repo, + clone_path, + b"HEAD:refs/heads/master", + outstream=outstream, + errstream=errstream, + ) + + def test_new(self): + """Pushing a new branch.""" + outstream = BytesIO() + errstream = BytesIO() + + # Setup target repo cloned from temp test repo + clone_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, clone_path) + target_repo = porcelain.init(clone_path) + target_repo.close() + + # create a second file to be pushed back to origin + handle, fullpath = tempfile.mkstemp(dir=clone_path) + os.close(handle) + porcelain.add(repo=clone_path, paths=[fullpath]) + new_id = porcelain.commit( + repo=self.repo, + message=b"push", + author=b"author ", + committer=b"committer ", + ) + + # Push to the remote + porcelain.push( + self.repo, + clone_path, + b"HEAD:refs/heads/master", + outstream=outstream, + errstream=errstream, + ) + + with Repo(clone_path) as r_clone: + self.assertEqual( + { + b"HEAD": new_id, + b"refs/heads/master": new_id, + }, + r_clone.get_refs(), + ) + + def test_delete(self): + """Basic test of porcelain push, removing a branch.""" + outstream = BytesIO() + errstream = BytesIO() + + porcelain.commit( + repo=self.repo.path, + message=b"init", + author=b"author ", + committer=b"committer ", + ) + + # Setup target repo cloned from temp test repo + clone_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, 
clone_path) + target_repo = porcelain.clone( + self.repo.path, target=clone_path, errstream=errstream + ) + target_repo.close() + + # Setup a non-checked out branch in the remote + refs_path = b"refs/heads/foo" + new_id = self.repo[b"HEAD"].id + self.assertNotEqual(new_id, ZERO_SHA) + self.repo.refs[refs_path] = new_id + + # Push to the remote + porcelain.push( + clone_path, + self.repo.path, + b":" + refs_path, + outstream=outstream, + errstream=errstream, + ) + + self.assertEqual( + { + b"HEAD": new_id, + b"refs/heads/master": new_id, + }, + self.repo.get_refs(), + ) + + def test_diverged(self): + outstream = BytesIO() + errstream = BytesIO() + + porcelain.commit( + repo=self.repo.path, + message=b"init", + author=b"author ", + committer=b"committer ", + ) + + # Setup target repo cloned from temp test repo + clone_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, clone_path) + target_repo = porcelain.clone( + self.repo.path, target=clone_path, errstream=errstream + ) + target_repo.close() + + remote_id = porcelain.commit( + repo=self.repo.path, + message=b"remote change", + author=b"author ", + committer=b"committer ", + ) + + local_id = porcelain.commit( + repo=clone_path, + message=b"local change", + author=b"author ", + committer=b"committer ", + ) + + outstream = BytesIO() + errstream = BytesIO() + + # Push to the remote + self.assertRaises( + porcelain.DivergedBranches, + porcelain.push, + clone_path, + self.repo.path, + b"refs/heads/master", + outstream=outstream, + errstream=errstream, + ) + + self.assertEqual( + { + b"HEAD": remote_id, + b"refs/heads/master": remote_id, + }, + self.repo.get_refs(), + ) + + self.assertEqual(b"", outstream.getvalue()) + self.assertEqual(b"", errstream.getvalue()) + + outstream = BytesIO() + errstream = BytesIO() + + # Push to the remote with --force + porcelain.push( + clone_path, + self.repo.path, + b"refs/heads/master", + outstream=outstream, + errstream=errstream, + force=True, + ) + + self.assertEqual( + { + b"HEAD": local_id, + b"refs/heads/master": local_id, + }, + self.repo.get_refs(), + ) + + self.assertEqual(b"", outstream.getvalue()) + self.assertTrue(re.match(b"Push to .* successful.\n", errstream.getvalue())) + + +class PullTests(PorcelainTestCase): + def setUp(self): + super(PullTests, self).setUp() + # create a file for initial commit + handle, fullpath = tempfile.mkstemp(dir=self.repo.path) + os.close(handle) + porcelain.add(repo=self.repo.path, paths=fullpath) + porcelain.commit( + repo=self.repo.path, + message=b"test", + author=b"test ", + committer=b"test ", + ) + + # Setup target repo + self.target_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, self.target_path) + target_repo = porcelain.clone( + self.repo.path, target=self.target_path, errstream=BytesIO() + ) + target_repo.close() + + # create a second file to be pushed + handle, fullpath = tempfile.mkstemp(dir=self.repo.path) + os.close(handle) + porcelain.add(repo=self.repo.path, paths=fullpath) + porcelain.commit( + repo=self.repo.path, + message=b"test2", + author=b"test2 ", + committer=b"test2 ", + ) + + self.assertIn(b"refs/heads/master", self.repo.refs) + self.assertIn(b"refs/heads/master", target_repo.refs) + + def test_simple(self): + outstream = BytesIO() + errstream = BytesIO() + + # Pull changes into the cloned repo + porcelain.pull( + self.target_path, + self.repo.path, + b"refs/heads/master", + outstream=outstream, + errstream=errstream, + ) + + # Check the target repo for pushed changes + with Repo(self.target_path) as r: + 
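+            # After the pull the clone has fast-forwarded to the source's
+            # HEAD; test_diverged below shows the non-fast-forward case
+            # raising DivergedBranches, with fast_forward=False not
+            # implemented.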
self.assertEqual(r[b"HEAD"].id, self.repo[b"HEAD"].id) + + def test_diverged(self): + outstream = BytesIO() + errstream = BytesIO() + + c3a = porcelain.commit( + repo=self.target_path, + message=b"test3a", + author=b"test2 ", + committer=b"test2 ", + ) + + porcelain.commit( + repo=self.repo.path, + message=b"test3b", + author=b"test2 ", + committer=b"test2 ", + ) + + # Pull changes into the cloned repo + self.assertRaises( + porcelain.DivergedBranches, + porcelain.pull, + self.target_path, + self.repo.path, + b"refs/heads/master", + outstream=outstream, + errstream=errstream, + ) + + # Check the target repo for pushed changes + with Repo(self.target_path) as r: + self.assertEqual(r[b"refs/heads/master"].id, c3a) + + self.assertRaises( + NotImplementedError, + porcelain.pull, + self.target_path, + self.repo.path, + b"refs/heads/master", + outstream=outstream, + errstream=errstream, + fast_forward=False, + ) + + # Check the target repo for pushed changes + with Repo(self.target_path) as r: + self.assertEqual(r[b"refs/heads/master"].id, c3a) + + def test_no_refspec(self): + outstream = BytesIO() + errstream = BytesIO() + + # Pull changes into the cloned repo + porcelain.pull( + self.target_path, + self.repo.path, + outstream=outstream, + errstream=errstream, + ) + + # Check the target repo for pushed changes + with Repo(self.target_path) as r: + self.assertEqual(r[b"HEAD"].id, self.repo[b"HEAD"].id) + + def test_no_remote_location(self): + outstream = BytesIO() + errstream = BytesIO() + + # Pull changes into the cloned repo + porcelain.pull( + self.target_path, + refspecs=b"refs/heads/master", + outstream=outstream, + errstream=errstream, + ) + + # Check the target repo for pushed changes + with Repo(self.target_path) as r: + self.assertEqual(r[b"HEAD"].id, self.repo[b"HEAD"].id) + + +class StatusTests(PorcelainTestCase): + def test_empty(self): + results = porcelain.status(self.repo) + self.assertEqual({"add": [], "delete": [], "modify": []}, results.staged) + self.assertEqual([], results.unstaged) + + def test_status_base(self): + """Integration test for `status` functionality.""" + + # Commit a dummy file then modify it + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("origstuff") + + porcelain.add(repo=self.repo.path, paths=[fullpath]) + porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + + # modify access and modify time of path + os.utime(fullpath, (0, 0)) + + with open(fullpath, "wb") as f: + f.write(b"stuff") + + # Make a dummy file and stage it + filename_add = "bar" + fullpath = os.path.join(self.repo.path, filename_add) + with open(fullpath, "w") as f: + f.write("stuff") + porcelain.add(repo=self.repo.path, paths=fullpath) + + results = porcelain.status(self.repo) + + self.assertEqual(results.staged["add"][0], filename_add.encode("ascii")) + self.assertEqual(results.unstaged, [b"foo"]) + + def test_status_all(self): + del_path = os.path.join(self.repo.path, "foo") + mod_path = os.path.join(self.repo.path, "bar") + add_path = os.path.join(self.repo.path, "baz") + us_path = os.path.join(self.repo.path, "blye") + ut_path = os.path.join(self.repo.path, "blyat") + with open(del_path, "w") as f: + f.write("origstuff") + with open(mod_path, "w") as f: + f.write("origstuff") + with open(us_path, "w") as f: + f.write("origstuff") + porcelain.add(repo=self.repo.path, paths=[del_path, mod_path, us_path]) + porcelain.commit( + repo=self.repo.path, + message=b"test status", + 
author=b"author ", + committer=b"committer ", + ) + porcelain.remove(self.repo.path, [del_path]) + with open(add_path, "w") as f: + f.write("origstuff") + with open(mod_path, "w") as f: + f.write("more_origstuff") + with open(us_path, "w") as f: + f.write("more_origstuff") + porcelain.add(repo=self.repo.path, paths=[add_path, mod_path]) + with open(us_path, "w") as f: + f.write("\norigstuff") + with open(ut_path, "w") as f: + f.write("origstuff") + results = porcelain.status(self.repo.path) + self.assertDictEqual( + {"add": [b"baz"], "delete": [b"foo"], "modify": [b"bar"]}, + results.staged, + ) + self.assertListEqual(results.unstaged, [b"blye"]) + results_no_untracked = porcelain.status(self.repo.path, untracked_files="no") + self.assertListEqual(results_no_untracked.untracked, []) + + def test_status_wrong_untracked_files_value(self): + with self.assertRaises(ValueError): + porcelain.status(self.repo.path, untracked_files="antani") + + def test_status_untracked_path(self): + untracked_dir = os.path.join(self.repo_path, "untracked_dir") + os.mkdir(untracked_dir) + untracked_file = os.path.join(untracked_dir, "untracked_file") + with open(untracked_file, "w") as fh: + fh.write("untracked") + + _, _, untracked = porcelain.status(self.repo.path, untracked_files="all") + self.assertEqual(untracked, ["untracked_dir/untracked_file"]) + + def test_status_crlf_mismatch(self): + # First make a commit as if the file has been added on a Linux system + # or with core.autocrlf=True + file_path = os.path.join(self.repo.path, "crlf") + with open(file_path, "wb") as f: + f.write(b"line1\nline2") + porcelain.add(repo=self.repo.path, paths=[file_path]) + porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + + # Then update the file as if it was created by CGit on a Windows + # system with core.autocrlf=true + with open(file_path, "wb") as f: + f.write(b"line1\r\nline2") + + results = porcelain.status(self.repo) + self.assertDictEqual({"add": [], "delete": [], "modify": []}, results.staged) + self.assertListEqual(results.unstaged, [b"crlf"]) + self.assertListEqual(results.untracked, []) + + def test_status_autocrlf_true(self): + # First make a commit as if the file has been added on a Linux system + # or with core.autocrlf=True + file_path = os.path.join(self.repo.path, "crlf") + with open(file_path, "wb") as f: + f.write(b"line1\nline2") + porcelain.add(repo=self.repo.path, paths=[file_path]) + porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + + # Then update the file as if it was created by CGit on a Windows + # system with core.autocrlf=true + with open(file_path, "wb") as f: + f.write(b"line1\r\nline2") + + # TODO: It should be set automatically by looking at the configuration + c = self.repo.get_config() + c.set("core", "autocrlf", True) + c.write_to_path() + + results = porcelain.status(self.repo) + self.assertDictEqual({"add": [], "delete": [], "modify": []}, results.staged) + self.assertListEqual(results.unstaged, []) + self.assertListEqual(results.untracked, []) + + def test_status_autocrlf_input(self): + # Commit existing file with CRLF + file_path = os.path.join(self.repo.path, "crlf-exists") + with open(file_path, "wb") as f: + f.write(b"line1\r\nline2") + porcelain.add(repo=self.repo.path, paths=[file_path]) + porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + + c = self.repo.get_config() + 
c.set("core", "autocrlf", "input") + c.write_to_path() + + # Add new (untracked) file + file_path = os.path.join(self.repo.path, "crlf-new") + with open(file_path, "wb") as f: + f.write(b"line1\r\nline2") + porcelain.add(repo=self.repo.path, paths=[file_path]) + + results = porcelain.status(self.repo) + self.assertDictEqual({"add": [b"crlf-new"], "delete": [], "modify": []}, results.staged) + self.assertListEqual(results.unstaged, []) + self.assertListEqual(results.untracked, []) + + def test_get_tree_changes_add(self): + """Unit test for get_tree_changes add.""" + + # Make a dummy file, stage + filename = "bar" + fullpath = os.path.join(self.repo.path, filename) + with open(fullpath, "w") as f: + f.write("stuff") + porcelain.add(repo=self.repo.path, paths=fullpath) + porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + + filename = "foo" + fullpath = os.path.join(self.repo.path, filename) + with open(fullpath, "w") as f: + f.write("stuff") + porcelain.add(repo=self.repo.path, paths=fullpath) + changes = porcelain.get_tree_changes(self.repo.path) + + self.assertEqual(changes["add"][0], filename.encode("ascii")) + self.assertEqual(len(changes["add"]), 1) + self.assertEqual(len(changes["modify"]), 0) + self.assertEqual(len(changes["delete"]), 0) + + def test_get_tree_changes_modify(self): + """Unit test for get_tree_changes modify.""" + + # Make a dummy file, stage, commit, modify + filename = "foo" + fullpath = os.path.join(self.repo.path, filename) + with open(fullpath, "w") as f: + f.write("stuff") + porcelain.add(repo=self.repo.path, paths=fullpath) + porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + with open(fullpath, "w") as f: + f.write("otherstuff") + porcelain.add(repo=self.repo.path, paths=fullpath) + changes = porcelain.get_tree_changes(self.repo.path) + + self.assertEqual(changes["modify"][0], filename.encode("ascii")) + self.assertEqual(len(changes["add"]), 0) + self.assertEqual(len(changes["modify"]), 1) + self.assertEqual(len(changes["delete"]), 0) + + def test_get_tree_changes_delete(self): + """Unit test for get_tree_changes delete.""" + + # Make a dummy file, stage, commit, remove + filename = "foo" + fullpath = os.path.join(self.repo.path, filename) + with open(fullpath, "w") as f: + f.write("stuff") + porcelain.add(repo=self.repo.path, paths=fullpath) + porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + cwd = os.getcwd() + try: + os.chdir(self.repo.path) + porcelain.remove(repo=self.repo.path, paths=[filename]) + finally: + os.chdir(cwd) + changes = porcelain.get_tree_changes(self.repo.path) + + self.assertEqual(changes["delete"][0], filename.encode("ascii")) + self.assertEqual(len(changes["add"]), 0) + self.assertEqual(len(changes["modify"]), 0) + self.assertEqual(len(changes["delete"]), 1) + + def test_get_untracked_paths(self): + with open(os.path.join(self.repo.path, ".gitignore"), "w") as f: + f.write("ignored\n") + with open(os.path.join(self.repo.path, "ignored"), "w") as f: + f.write("blah\n") + with open(os.path.join(self.repo.path, "notignored"), "w") as f: + f.write("blah\n") + os.symlink( + os.path.join(self.repo.path, os.pardir, "external_target"), + os.path.join(self.repo.path, "link"), + ) + self.assertEqual( + set(["ignored", "notignored", ".gitignore", "link"]), + set( + porcelain.get_untracked_paths( + self.repo.path, self.repo.path, 
self.repo.open_index() + ) + ), + ) + self.assertEqual( + set([".gitignore", "notignored", "link"]), + set(porcelain.status(self.repo).untracked), + ) + self.assertEqual( + set([".gitignore", "notignored", "ignored", "link"]), + set(porcelain.status(self.repo, ignored=True).untracked), + ) + + def test_get_untracked_paths_subrepo(self): + with open(os.path.join(self.repo.path, ".gitignore"), "w") as f: + f.write("nested/\n") + with open(os.path.join(self.repo.path, "notignored"), "w") as f: + f.write("blah\n") + + subrepo = Repo.init(os.path.join(self.repo.path, "nested"), mkdir=True) + with open(os.path.join(subrepo.path, "ignored"), "w") as f: + f.write("bleep\n") + with open(os.path.join(subrepo.path, "with"), "w") as f: + f.write("bloop\n") + with open(os.path.join(subrepo.path, "manager"), "w") as f: + f.write("blop\n") + + self.assertEqual( + set([".gitignore", "notignored", os.path.join("nested", "")]), + set( + porcelain.get_untracked_paths( + self.repo.path, self.repo.path, self.repo.open_index() + ) + ), + ) + self.assertEqual( + set([".gitignore", "notignored"]), + set( + porcelain.get_untracked_paths( + self.repo.path, + self.repo.path, + self.repo.open_index(), + exclude_ignored=True, + ) + ), + ) + self.assertEqual( + set(["ignored", "with", "manager"]), + set( + porcelain.get_untracked_paths( + subrepo.path, subrepo.path, subrepo.open_index() + ) + ), + ) + self.assertEqual( + set(), + set( + porcelain.get_untracked_paths( + subrepo.path, + self.repo.path, + self.repo.open_index(), + ) + ), + ) + self.assertEqual( + set([os.path.join('nested', 'ignored'), + os.path.join('nested', 'with'), + os.path.join('nested', 'manager')]), + set( + porcelain.get_untracked_paths( + self.repo.path, + subrepo.path, + self.repo.open_index(), + ) + ), + ) + + def test_get_untracked_paths_subdir(self): + with open(os.path.join(self.repo.path, ".gitignore"), "w") as f: + f.write("subdir/\nignored") + with open(os.path.join(self.repo.path, "notignored"), "w") as f: + f.write("blah\n") + os.mkdir(os.path.join(self.repo.path, "subdir")) + with open(os.path.join(self.repo.path, "ignored"), "w") as f: + f.write("foo") + with open(os.path.join(self.repo.path, "subdir", "ignored"), "w") as f: + f.write("foo") + + self.assertEqual( + set( + [ + ".gitignore", + "notignored", + "ignored", + os.path.join("subdir", ""), + ] + ), + set( + porcelain.get_untracked_paths( + self.repo.path, + self.repo.path, + self.repo.open_index(), + ) + ) + ) + self.assertEqual( + set([".gitignore", "notignored"]), + set( + porcelain.get_untracked_paths( + self.repo.path, + self.repo.path, + self.repo.open_index(), + exclude_ignored=True, + ) + ) + ) + + def test_get_untracked_paths_invalid_untracked_files(self): + with self.assertRaises(ValueError): + list( + porcelain.get_untracked_paths( + self.repo.path, + self.repo.path, + self.repo.open_index(), + untracked_files="invalid_value", + ) + ) + + def test_get_untracked_paths_normal(self): + with self.assertRaises(NotImplementedError): + _, _, _ = porcelain.status( + repo=self.repo.path, untracked_files="normal" + ) + +# TODO(jelmer): Add test for dulwich.porcelain.daemon + + +class UploadPackTests(PorcelainTestCase): + """Tests for upload_pack.""" + + def test_upload_pack(self): + outf = BytesIO() + exitcode = porcelain.upload_pack(self.repo.path, BytesIO(b"0000"), outf) + outlines = outf.getvalue().splitlines() + self.assertEqual([b"0000"], outlines) + self.assertEqual(0, exitcode) + + +class ReceivePackTests(PorcelainTestCase): + """Tests for receive_pack.""" + + def 
test_receive_pack(self): + filename = "foo" + fullpath = os.path.join(self.repo.path, filename) + with open(fullpath, "w") as f: + f.write("stuff") + porcelain.add(repo=self.repo.path, paths=fullpath) + self.repo.do_commit( + message=b"test status", + author=b"author ", + committer=b"committer ", + author_timestamp=1402354300, + commit_timestamp=1402354300, + author_timezone=0, + commit_timezone=0, + ) + outf = BytesIO() + exitcode = porcelain.receive_pack(self.repo.path, BytesIO(b"0000"), outf) + outlines = outf.getvalue().splitlines() + self.assertEqual( + [ + b"0091319b56ce3aee2d489f759736a79cc552c9bb86d9 HEAD\x00 report-status " + b"delete-refs quiet ofs-delta side-band-64k " + b"no-done symref=HEAD:refs/heads/master", + b"003f319b56ce3aee2d489f759736a79cc552c9bb86d9 refs/heads/master", + b"0000", + ], + outlines, + ) + self.assertEqual(0, exitcode) + + +class BranchListTests(PorcelainTestCase): + def test_standard(self): + self.assertEqual(set([]), set(porcelain.branch_list(self.repo))) + + def test_new_branch(self): + [c1] = build_commit_graph(self.repo.object_store, [[1]]) + self.repo[b"HEAD"] = c1.id + porcelain.branch_create(self.repo, b"foo") + self.assertEqual( + set([b"master", b"foo"]), set(porcelain.branch_list(self.repo)) + ) + + +class BranchCreateTests(PorcelainTestCase): + def test_branch_exists(self): + [c1] = build_commit_graph(self.repo.object_store, [[1]]) + self.repo[b"HEAD"] = c1.id + porcelain.branch_create(self.repo, b"foo") + self.assertRaises(porcelain.Error, porcelain.branch_create, self.repo, b"foo") + porcelain.branch_create(self.repo, b"foo", force=True) + + def test_new_branch(self): + [c1] = build_commit_graph(self.repo.object_store, [[1]]) + self.repo[b"HEAD"] = c1.id + porcelain.branch_create(self.repo, b"foo") + self.assertEqual( + set([b"master", b"foo"]), set(porcelain.branch_list(self.repo)) + ) + + +class BranchDeleteTests(PorcelainTestCase): + def test_simple(self): + [c1] = build_commit_graph(self.repo.object_store, [[1]]) + self.repo[b"HEAD"] = c1.id + porcelain.branch_create(self.repo, b"foo") + self.assertIn(b"foo", porcelain.branch_list(self.repo)) + porcelain.branch_delete(self.repo, b"foo") + self.assertNotIn(b"foo", porcelain.branch_list(self.repo)) + + def test_simple_unicode(self): + [c1] = build_commit_graph(self.repo.object_store, [[1]]) + self.repo[b"HEAD"] = c1.id + porcelain.branch_create(self.repo, "foo") + self.assertIn(b"foo", porcelain.branch_list(self.repo)) + porcelain.branch_delete(self.repo, "foo") + self.assertNotIn(b"foo", porcelain.branch_list(self.repo)) + + +class FetchTests(PorcelainTestCase): + def test_simple(self): + outstream = BytesIO() + errstream = BytesIO() + + # create a file for initial commit + handle, fullpath = tempfile.mkstemp(dir=self.repo.path) + os.close(handle) + porcelain.add(repo=self.repo.path, paths=fullpath) + porcelain.commit( + repo=self.repo.path, + message=b"test", + author=b"test ", + committer=b"test ", + ) + + # Setup target repo + target_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, target_path) + target_repo = porcelain.clone( + self.repo.path, target=target_path, errstream=errstream + ) + + # create a second file to be pushed + handle, fullpath = tempfile.mkstemp(dir=self.repo.path) + os.close(handle) + porcelain.add(repo=self.repo.path, paths=fullpath) + porcelain.commit( + repo=self.repo.path, + message=b"test2", + author=b"test2 ", + committer=b"test2 ", + ) + + self.assertNotIn(self.repo[b"HEAD"].id, target_repo) + target_repo.close() + + # Fetch changes into the cloned 
repo + porcelain.fetch(target_path, "origin", outstream=outstream, errstream=errstream) + + # Assert that fetch updated the local image of the remote + self.assert_correct_remote_refs(target_repo.get_refs(), self.repo.get_refs()) + + # Check the target repo for pushed changes + with Repo(target_path) as r: + self.assertIn(self.repo[b"HEAD"].id, r) + + def test_with_remote_name(self): + remote_name = "origin" + outstream = BytesIO() + errstream = BytesIO() + + # create a file for initial commit + handle, fullpath = tempfile.mkstemp(dir=self.repo.path) + os.close(handle) + porcelain.add(repo=self.repo.path, paths=fullpath) + porcelain.commit( + repo=self.repo.path, + message=b"test", + author=b"test ", + committer=b"test ", + ) + + # Setup target repo + target_path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, target_path) + target_repo = porcelain.clone( + self.repo.path, target=target_path, errstream=errstream + ) + + # Capture current refs + target_refs = target_repo.get_refs() + + # create a second file to be pushed + handle, fullpath = tempfile.mkstemp(dir=self.repo.path) + os.close(handle) + porcelain.add(repo=self.repo.path, paths=fullpath) + porcelain.commit( + repo=self.repo.path, + message=b"test2", + author=b"test2 ", + committer=b"test2 ", + ) + + self.assertNotIn(self.repo[b"HEAD"].id, target_repo) + + target_config = target_repo.get_config() + target_config.set( + (b"remote", remote_name.encode()), b"url", self.repo.path.encode() + ) + target_repo.close() + + # Fetch changes into the cloned repo + porcelain.fetch( + target_path, remote_name, outstream=outstream, errstream=errstream + ) + + # Assert that fetch updated the local image of the remote + self.assert_correct_remote_refs(target_repo.get_refs(), self.repo.get_refs()) + + # Check the target repo for pushed changes, as well as updates + # for the refs + with Repo(target_path) as r: + self.assertIn(self.repo[b"HEAD"].id, r) + self.assertNotEqual(self.repo.get_refs(), target_refs) + + def assert_correct_remote_refs( + self, local_refs, remote_refs, remote_name=b"origin" + ): + """Assert that known remote refs corresponds to actual remote refs.""" + local_ref_prefix = b"refs/heads" + remote_ref_prefix = b"refs/remotes/" + remote_name + + locally_known_remote_refs = { + k[len(remote_ref_prefix) + 1 :]: v + for k, v in local_refs.items() + if k.startswith(remote_ref_prefix) + } + + normalized_remote_refs = { + k[len(local_ref_prefix) + 1 :]: v + for k, v in remote_refs.items() + if k.startswith(local_ref_prefix) + } + if b"HEAD" in locally_known_remote_refs and b"HEAD" in remote_refs: + normalized_remote_refs[b"HEAD"] = remote_refs[b"HEAD"] + + self.assertEqual(locally_known_remote_refs, normalized_remote_refs) + + +class RepackTests(PorcelainTestCase): + def test_empty(self): + porcelain.repack(self.repo) + + def test_simple(self): + handle, fullpath = tempfile.mkstemp(dir=self.repo.path) + os.close(handle) + porcelain.add(repo=self.repo.path, paths=fullpath) + porcelain.repack(self.repo) + + +class LsTreeTests(PorcelainTestCase): + def test_empty(self): + porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + + f = StringIO() + porcelain.ls_tree(self.repo, b"HEAD", outstream=f) + self.assertEqual(f.getvalue(), "") + + def test_simple(self): + # Commit a dummy file then modify it + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("origstuff") + + porcelain.add(repo=self.repo.path, paths=[fullpath]) + 
porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + + f = StringIO() + porcelain.ls_tree(self.repo, b"HEAD", outstream=f) + self.assertEqual( + f.getvalue(), + "100644 blob 8b82634d7eae019850bb883f06abf428c58bc9aa\tfoo\n", + ) + + def test_recursive(self): + # Create a directory then write a dummy file in it + dirpath = os.path.join(self.repo.path, "adir") + filepath = os.path.join(dirpath, "afile") + os.mkdir(dirpath) + with open(filepath, "w") as f: + f.write("origstuff") + porcelain.add(repo=self.repo.path, paths=[filepath]) + porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + f = StringIO() + porcelain.ls_tree(self.repo, b"HEAD", outstream=f) + self.assertEqual( + f.getvalue(), + "40000 tree b145cc69a5e17693e24d8a7be0016ed8075de66d\tadir\n", + ) + f = StringIO() + porcelain.ls_tree(self.repo, b"HEAD", outstream=f, recursive=True) + self.assertEqual( + f.getvalue(), + "40000 tree b145cc69a5e17693e24d8a7be0016ed8075de66d\tadir\n" + "100644 blob 8b82634d7eae019850bb883f06abf428c58bc9aa\tadir" + "/afile\n", + ) + + +class LsRemoteTests(PorcelainTestCase): + def test_empty(self): + self.assertEqual({}, porcelain.ls_remote(self.repo.path)) + + def test_some(self): + cid = porcelain.commit( + repo=self.repo.path, + message=b"test status", + author=b"author ", + committer=b"committer ", + ) + + self.assertEqual( + {b"refs/heads/master": cid, b"HEAD": cid}, + porcelain.ls_remote(self.repo.path), + ) + + +class LsFilesTests(PorcelainTestCase): + def test_empty(self): + self.assertEqual([], list(porcelain.ls_files(self.repo))) + + def test_simple(self): + # Commit a dummy file then modify it + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("origstuff") + + porcelain.add(repo=self.repo.path, paths=[fullpath]) + self.assertEqual([b"foo"], list(porcelain.ls_files(self.repo))) + + +class RemoteAddTests(PorcelainTestCase): + def test_new(self): + porcelain.remote_add(self.repo, "jelmer", "git://jelmer.uk/code/dulwich") + c = self.repo.get_config() + self.assertEqual( + c.get((b"remote", b"jelmer"), b"url"), + b"git://jelmer.uk/code/dulwich", + ) + + def test_exists(self): + porcelain.remote_add(self.repo, "jelmer", "git://jelmer.uk/code/dulwich") + self.assertRaises( + porcelain.RemoteExists, + porcelain.remote_add, + self.repo, + "jelmer", + "git://jelmer.uk/code/dulwich", + ) + + +class RemoteRemoveTests(PorcelainTestCase): + def test_remove(self): + porcelain.remote_add(self.repo, "jelmer", "git://jelmer.uk/code/dulwich") + c = self.repo.get_config() + self.assertEqual( + c.get((b"remote", b"jelmer"), b"url"), + b"git://jelmer.uk/code/dulwich", + ) + porcelain.remote_remove(self.repo, "jelmer") + self.assertRaises(KeyError, porcelain.remote_remove, self.repo, "jelmer") + c = self.repo.get_config() + self.assertRaises(KeyError, c.get, (b"remote", b"jelmer"), b"url") + + +class CheckIgnoreTests(PorcelainTestCase): + def test_check_ignored(self): + with open(os.path.join(self.repo.path, ".gitignore"), "w") as f: + f.write("foo") + foo_path = os.path.join(self.repo.path, "foo") + with open(foo_path, "w") as f: + f.write("BAR") + bar_path = os.path.join(self.repo.path, "bar") + with open(bar_path, "w") as f: + f.write("BAR") + self.assertEqual(["foo"], list(porcelain.check_ignore(self.repo, [foo_path]))) + self.assertEqual([], list(porcelain.check_ignore(self.repo, [bar_path]))) + + def test_check_added_abs(self): + path = 
os.path.join(self.repo.path, "foo") + with open(path, "w") as f: + f.write("BAR") + self.repo.stage(["foo"]) + with open(os.path.join(self.repo.path, ".gitignore"), "w") as f: + f.write("foo\n") + self.assertEqual([], list(porcelain.check_ignore(self.repo, [path]))) + self.assertEqual( + ["foo"], + list(porcelain.check_ignore(self.repo, [path], no_index=True)), + ) + + def test_check_added_rel(self): + with open(os.path.join(self.repo.path, "foo"), "w") as f: + f.write("BAR") + self.repo.stage(["foo"]) + with open(os.path.join(self.repo.path, ".gitignore"), "w") as f: + f.write("foo\n") + cwd = os.getcwd() + os.mkdir(os.path.join(self.repo.path, "bar")) + os.chdir(os.path.join(self.repo.path, "bar")) + try: + self.assertEqual(list(porcelain.check_ignore(self.repo, ["../foo"])), []) + self.assertEqual( + ["../foo"], + list(porcelain.check_ignore(self.repo, ["../foo"], no_index=True)), + ) + finally: + os.chdir(cwd) + + +class UpdateHeadTests(PorcelainTestCase): + def test_set_to_branch(self): + [c1] = build_commit_graph(self.repo.object_store, [[1]]) + self.repo.refs[b"refs/heads/blah"] = c1.id + porcelain.update_head(self.repo, "blah") + self.assertEqual(c1.id, self.repo.head()) + self.assertEqual(b"ref: refs/heads/blah", self.repo.refs.read_ref(b"HEAD")) + + def test_set_to_branch_detached(self): + [c1] = build_commit_graph(self.repo.object_store, [[1]]) + self.repo.refs[b"refs/heads/blah"] = c1.id + porcelain.update_head(self.repo, "blah", detached=True) + self.assertEqual(c1.id, self.repo.head()) + self.assertEqual(c1.id, self.repo.refs.read_ref(b"HEAD")) + + def test_set_to_commit_detached(self): + [c1] = build_commit_graph(self.repo.object_store, [[1]]) + self.repo.refs[b"refs/heads/blah"] = c1.id + porcelain.update_head(self.repo, c1.id, detached=True) + self.assertEqual(c1.id, self.repo.head()) + self.assertEqual(c1.id, self.repo.refs.read_ref(b"HEAD")) + + def test_set_new_branch(self): + [c1] = build_commit_graph(self.repo.object_store, [[1]]) + self.repo.refs[b"refs/heads/blah"] = c1.id + porcelain.update_head(self.repo, "blah", new_branch="bar") + self.assertEqual(c1.id, self.repo.head()) + self.assertEqual(b"ref: refs/heads/bar", self.repo.refs.read_ref(b"HEAD")) + + +class MailmapTests(PorcelainTestCase): + def test_no_mailmap(self): + self.assertEqual( + b"Jelmer Vernooij ", + porcelain.check_mailmap(self.repo, b"Jelmer Vernooij "), + ) + + def test_mailmap_lookup(self): + with open(os.path.join(self.repo.path, ".mailmap"), "wb") as f: + f.write( + b"""\ +Jelmer Vernooij +""" + ) + self.assertEqual( + b"Jelmer Vernooij ", + porcelain.check_mailmap(self.repo, b"Jelmer Vernooij "), + ) + + +class FsckTests(PorcelainTestCase): + def test_none(self): + self.assertEqual([], list(porcelain.fsck(self.repo))) + + def test_git_dir(self): + obj = Tree() + a = Blob() + a.data = b"foo" + obj.add(b".git", 0o100644, a.id) + self.repo.object_store.add_objects([(a, None), (obj, None)]) + self.assertEqual( + [(obj.id, "invalid name .git")], + [(sha, str(e)) for (sha, e) in porcelain.fsck(self.repo)], + ) + + +class DescribeTests(PorcelainTestCase): + def test_no_commits(self): + self.assertRaises(KeyError, porcelain.describe, self.repo.path) + + def test_single_commit(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(repo=self.repo.path, paths=[fullpath]) + sha = porcelain.commit( + self.repo.path, + message=b"Some message", + author=b"Joe ", + committer=b"Bob ", + ) + self.assertEqual( + 
"g{}".format(sha[:7].decode("ascii")), + porcelain.describe(self.repo.path), + ) + + def test_tag(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(repo=self.repo.path, paths=[fullpath]) + porcelain.commit( + self.repo.path, + message=b"Some message", + author=b"Joe ", + committer=b"Bob ", + ) + porcelain.tag_create( + self.repo.path, + b"tryme", + b"foo ", + b"bar", + annotated=True, + ) + self.assertEqual("tryme", porcelain.describe(self.repo.path)) + + def test_tag_and_commit(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(repo=self.repo.path, paths=[fullpath]) + porcelain.commit( + self.repo.path, + message=b"Some message", + author=b"Joe ", + committer=b"Bob ", + ) + porcelain.tag_create( + self.repo.path, + b"tryme", + b"foo ", + b"bar", + annotated=True, + ) + with open(fullpath, "w") as f: + f.write("BAR2") + porcelain.add(repo=self.repo.path, paths=[fullpath]) + sha = porcelain.commit( + self.repo.path, + message=b"Some message", + author=b"Joe ", + committer=b"Bob ", + ) + self.assertEqual( + "tryme-1-g{}".format(sha[:7].decode("ascii")), + porcelain.describe(self.repo.path), + ) + + +class PathToTreeTests(PorcelainTestCase): + def setUp(self): + super(PathToTreeTests, self).setUp() + self.fp = os.path.join(self.test_dir, "bar") + with open(self.fp, "w") as f: + f.write("something") + oldcwd = os.getcwd() + self.addCleanup(os.chdir, oldcwd) + os.chdir(self.test_dir) + + def test_path_to_tree_path_base(self): + self.assertEqual(b"bar", porcelain.path_to_tree_path(self.test_dir, self.fp)) + self.assertEqual(b"bar", porcelain.path_to_tree_path(".", "./bar")) + self.assertEqual(b"bar", porcelain.path_to_tree_path(".", "bar")) + cwd = os.getcwd() + self.assertEqual( + b"bar", porcelain.path_to_tree_path(".", os.path.join(cwd, "bar")) + ) + self.assertEqual(b"bar", porcelain.path_to_tree_path(cwd, "bar")) + + def test_path_to_tree_path_syntax(self): + self.assertEqual(b"bar", porcelain.path_to_tree_path(".", "./bar")) + + def test_path_to_tree_path_error(self): + with self.assertRaises(ValueError): + with tempfile.TemporaryDirectory() as od: + porcelain.path_to_tree_path(od, self.fp) + + def test_path_to_tree_path_rel(self): + cwd = os.getcwd() + os.mkdir(os.path.join(self.repo.path, "foo")) + os.mkdir(os.path.join(self.repo.path, "foo/bar")) + try: + os.chdir(os.path.join(self.repo.path, "foo/bar")) + with open("baz", "w") as f: + f.write("contents") + self.assertEqual(b"bar/baz", porcelain.path_to_tree_path("..", "baz")) + self.assertEqual( + b"bar/baz", + porcelain.path_to_tree_path( + os.path.join(os.getcwd(), ".."), + os.path.join(os.getcwd(), "baz"), + ), + ) + self.assertEqual( + b"bar/baz", + porcelain.path_to_tree_path("..", os.path.join(os.getcwd(), "baz")), + ) + self.assertEqual( + b"bar/baz", + porcelain.path_to_tree_path(os.path.join(os.getcwd(), ".."), "baz"), + ) + finally: + os.chdir(cwd) + + +class GetObjectByPathTests(PorcelainTestCase): + def test_simple(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(repo=self.repo.path, paths=[fullpath]) + porcelain.commit( + self.repo.path, + message=b"Some message", + author=b"Joe ", + committer=b"Bob ", + ) + self.assertEqual(b"BAR", porcelain.get_object_by_path(self.repo, "foo").data) + self.assertEqual(b"BAR", porcelain.get_object_by_path(self.repo, b"foo").data) + + def test_encoding(self): + fullpath = 
os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(repo=self.repo.path, paths=[fullpath]) + porcelain.commit( + self.repo.path, + message=b"Some message", + author=b"Joe ", + committer=b"Bob ", + encoding=b"utf-8", + ) + self.assertEqual(b"BAR", porcelain.get_object_by_path(self.repo, "foo").data) + self.assertEqual(b"BAR", porcelain.get_object_by_path(self.repo, b"foo").data) + + def test_missing(self): + self.assertRaises(KeyError, porcelain.get_object_by_path, self.repo, "foo") + + +class WriteTreeTests(PorcelainTestCase): + def test_simple(self): + fullpath = os.path.join(self.repo.path, "foo") + with open(fullpath, "w") as f: + f.write("BAR") + porcelain.add(repo=self.repo.path, paths=[fullpath]) + self.assertEqual( + b"d2092c8a9f311f0311083bf8d177f2ca0ab5b241", + porcelain.write_tree(self.repo), + ) + + +class ActiveBranchTests(PorcelainTestCase): + def test_simple(self): + self.assertEqual(b"master", porcelain.active_branch(self.repo)) + + +class FindUniqueAbbrevTests(PorcelainTestCase): + + def test_simple(self): + c1, c2, c3 = build_commit_graph( + self.repo.object_store, [[1], [2, 1], [3, 1, 2]] + ) + self.repo.refs[b"HEAD"] = c3.id + self.assertEqual( + c1.id.decode('ascii')[:7], + porcelain.find_unique_abbrev(self.repo.object_store, c1.id)) + + +class ServerTests(PorcelainTestCase): + @contextlib.contextmanager + def _serving(self): + with make_server('localhost', 0, self.app) as server: + thread = threading.Thread(target=server.serve_forever, daemon=True) + thread.start() + + try: + yield f"http://localhost:{server.server_port}" + + finally: + server.shutdown() + thread.join(10) + + def setUp(self): + super().setUp() + + self.served_repo_path = os.path.join(self.test_dir, "served_repo.git") + self.served_repo = Repo.init_bare(self.served_repo_path, mkdir=True) + self.addCleanup(self.served_repo.close) + + backend = DictBackend({"/": self.served_repo}) + self.app = make_wsgi_chain(backend) + + def test_pull(self): + c1, = build_commit_graph(self.served_repo.object_store, [[1]]) + self.served_repo.refs[b"refs/heads/master"] = c1.id + + with self._serving() as url: + porcelain.pull(self.repo, url, "master") + + def test_push(self): + c1, = build_commit_graph(self.repo.object_store, [[1]]) + self.repo.refs[b"refs/heads/master"] = c1.id + + with self._serving() as url: + porcelain.push(self.repo, url, "master") diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_protocol.py b/env/lib/python3.8/site-packages/dulwich/tests/test_protocol.py new file mode 100644 index 00000000..9985fa4a --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_protocol.py @@ -0,0 +1,323 @@ +# test_protocol.py -- Tests for the git protocol +# Copyright (C) 2009 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for the smart protocol utility functions.""" + + +from io import BytesIO + +from dulwich.errors import ( + HangupException, +) +from dulwich.protocol import ( + GitProtocolError, + PktLineParser, + Protocol, + ReceivableProtocol, + extract_capabilities, + extract_want_line_capabilities, + ack_type, + SINGLE_ACK, + MULTI_ACK, + MULTI_ACK_DETAILED, + BufferedPktLineWriter, +) +from dulwich.tests import TestCase + + +class BaseProtocolTests(object): + def test_write_pkt_line_none(self): + self.proto.write_pkt_line(None) + self.assertEqual(self.rout.getvalue(), b"0000") + + def test_write_pkt_line(self): + self.proto.write_pkt_line(b"bla") + self.assertEqual(self.rout.getvalue(), b"0007bla") + + def test_read_pkt_line(self): + self.rin.write(b"0008cmd ") + self.rin.seek(0) + self.assertEqual(b"cmd ", self.proto.read_pkt_line()) + + def test_eof(self): + self.rin.write(b"0000") + self.rin.seek(0) + self.assertFalse(self.proto.eof()) + self.assertEqual(None, self.proto.read_pkt_line()) + self.assertTrue(self.proto.eof()) + self.assertRaises(HangupException, self.proto.read_pkt_line) + + def test_unread_pkt_line(self): + self.rin.write(b"0007foo0000") + self.rin.seek(0) + self.assertEqual(b"foo", self.proto.read_pkt_line()) + self.proto.unread_pkt_line(b"bar") + self.assertEqual(b"bar", self.proto.read_pkt_line()) + self.assertEqual(None, self.proto.read_pkt_line()) + self.proto.unread_pkt_line(b"baz1") + self.assertRaises(ValueError, self.proto.unread_pkt_line, b"baz2") + + def test_read_pkt_seq(self): + self.rin.write(b"0008cmd 0005l0000") + self.rin.seek(0) + self.assertEqual([b"cmd ", b"l"], list(self.proto.read_pkt_seq())) + + def test_read_pkt_line_none(self): + self.rin.write(b"0000") + self.rin.seek(0) + self.assertEqual(None, self.proto.read_pkt_line()) + + def test_read_pkt_line_wrong_size(self): + self.rin.write(b"0100too short") + self.rin.seek(0) + self.assertRaises(GitProtocolError, self.proto.read_pkt_line) + + def test_write_sideband(self): + self.proto.write_sideband(3, b"bloe") + self.assertEqual(self.rout.getvalue(), b"0009\x03bloe") + + def test_send_cmd(self): + self.proto.send_cmd(b"fetch", b"a", b"b") + self.assertEqual(self.rout.getvalue(), b"000efetch a\x00b\x00") + + def test_read_cmd(self): + self.rin.write(b"0012cmd arg1\x00arg2\x00") + self.rin.seek(0) + self.assertEqual((b"cmd", [b"arg1", b"arg2"]), self.proto.read_cmd()) + + def test_read_cmd_noend0(self): + self.rin.write(b"0011cmd arg1\x00arg2") + self.rin.seek(0) + self.assertRaises(AssertionError, self.proto.read_cmd) + + +class ProtocolTests(BaseProtocolTests, TestCase): + def setUp(self): + TestCase.setUp(self) + self.rout = BytesIO() + self.rin = BytesIO() + self.proto = Protocol(self.rin.read, self.rout.write) + + +class ReceivableBytesIO(BytesIO): + """BytesIO with socket-like recv semantics for testing.""" + + def __init__(self): + BytesIO.__init__(self) + self.allow_read_past_eof = False + + def recv(self, size): + # fail fast if no bytes are available; in a real socket, this would + # block forever + if self.tell() == len(self.getvalue()) and not self.allow_read_past_eof: + raise GitProtocolError("Blocking read past end of socket") + if size == 1: + return self.read(1) + # calls shouldn't return quite as much as asked for + return self.read(size - 1) + + +class ReceivableProtocolTests(BaseProtocolTests, TestCase): + def 
setUp(self): + TestCase.setUp(self) + self.rout = BytesIO() + self.rin = ReceivableBytesIO() + self.proto = ReceivableProtocol(self.rin.recv, self.rout.write) + self.proto._rbufsize = 8 + + def test_eof(self): + # Allow blocking reads past EOF just for this test. The only parts of + # the protocol that might check for EOF do not depend on the recv() + # semantics anyway. + self.rin.allow_read_past_eof = True + BaseProtocolTests.test_eof(self) + + def test_recv(self): + all_data = b"1234567" * 10 # not a multiple of bufsize + self.rin.write(all_data) + self.rin.seek(0) + data = b"" + # We ask for 8 bytes each time and actually read 7, so it should take + # exactly 10 iterations. + for _ in range(10): + data += self.proto.recv(10) + # any more reads would block + self.assertRaises(GitProtocolError, self.proto.recv, 10) + self.assertEqual(all_data, data) + + def test_recv_read(self): + all_data = b"1234567" # recv exactly in one call + self.rin.write(all_data) + self.rin.seek(0) + self.assertEqual(b"1234", self.proto.recv(4)) + self.assertEqual(b"567", self.proto.read(3)) + self.assertRaises(GitProtocolError, self.proto.recv, 10) + + def test_read_recv(self): + all_data = b"12345678abcdefg" + self.rin.write(all_data) + self.rin.seek(0) + self.assertEqual(b"1234", self.proto.read(4)) + self.assertEqual(b"5678abc", self.proto.recv(8)) + self.assertEqual(b"defg", self.proto.read(4)) + self.assertRaises(GitProtocolError, self.proto.recv, 10) + + def test_mixed(self): + # arbitrary non-repeating string + all_data = b",".join(str(i).encode("ascii") for i in range(100)) + self.rin.write(all_data) + self.rin.seek(0) + data = b"" + + for i in range(1, 100): + data += self.proto.recv(i) + # if we get to the end, do a non-blocking read instead of blocking + if len(data) + i > len(all_data): + data += self.proto.recv(i) + # ReceivableBytesIO leaves off the last byte unless we ask + # nicely + data += self.proto.recv(1) + break + else: + data += self.proto.read(i) + else: + # didn't break, something must have gone wrong + self.fail() + + self.assertEqual(all_data, data) + + +class CapabilitiesTestCase(TestCase): + def test_plain(self): + self.assertEqual((b"bla", []), extract_capabilities(b"bla")) + + def test_caps(self): + self.assertEqual((b"bla", [b"la"]), extract_capabilities(b"bla\0la")) + self.assertEqual((b"bla", [b"la"]), extract_capabilities(b"bla\0la\n")) + self.assertEqual((b"bla", [b"la", b"la"]), extract_capabilities(b"bla\0la la")) + + def test_plain_want_line(self): + self.assertEqual((b"want bla", []), extract_want_line_capabilities(b"want bla")) + + def test_caps_want_line(self): + self.assertEqual( + (b"want bla", [b"la"]), + extract_want_line_capabilities(b"want bla la"), + ) + self.assertEqual( + (b"want bla", [b"la"]), + extract_want_line_capabilities(b"want bla la\n"), + ) + self.assertEqual( + (b"want bla", [b"la", b"la"]), + extract_want_line_capabilities(b"want bla la la"), + ) + + def test_ack_type(self): + self.assertEqual(SINGLE_ACK, ack_type([b"foo", b"bar"])) + self.assertEqual(MULTI_ACK, ack_type([b"foo", b"bar", b"multi_ack"])) + self.assertEqual( + MULTI_ACK_DETAILED, + ack_type([b"foo", b"bar", b"multi_ack_detailed"]), + ) + # choose detailed when both present + self.assertEqual( + MULTI_ACK_DETAILED, + ack_type([b"foo", b"bar", b"multi_ack", b"multi_ack_detailed"]), + ) + + +class BufferedPktLineWriterTests(TestCase): + def setUp(self): + TestCase.setUp(self) + self._output = BytesIO() + self._writer = BufferedPktLineWriter(self._output.write, bufsize=16) + + def 
assertOutputEquals(self, expected): + self.assertEqual(expected, self._output.getvalue()) + + def _truncate(self): + self._output.seek(0) + self._output.truncate() + + def test_write(self): + self._writer.write(b"foo") + self.assertOutputEquals(b"") + self._writer.flush() + self.assertOutputEquals(b"0007foo") + + def test_write_none(self): + self._writer.write(None) + self.assertOutputEquals(b"") + self._writer.flush() + self.assertOutputEquals(b"0000") + + def test_flush_empty(self): + self._writer.flush() + self.assertOutputEquals(b"") + + def test_write_multiple(self): + self._writer.write(b"foo") + self._writer.write(b"bar") + self.assertOutputEquals(b"") + self._writer.flush() + self.assertOutputEquals(b"0007foo0007bar") + + def test_write_across_boundary(self): + self._writer.write(b"foo") + self._writer.write(b"barbaz") + self.assertOutputEquals(b"0007foo000abarba") + self._truncate() + self._writer.flush() + self.assertOutputEquals(b"z") + + def test_write_to_boundary(self): + self._writer.write(b"foo") + self._writer.write(b"barba") + self.assertOutputEquals(b"0007foo0009barba") + self._truncate() + self._writer.write(b"z") + self._writer.flush() + self.assertOutputEquals(b"0005z") + + +class PktLineParserTests(TestCase): + def test_none(self): + pktlines = [] + parser = PktLineParser(pktlines.append) + parser.parse(b"0000") + self.assertEqual(pktlines, [None]) + self.assertEqual(b"", parser.get_tail()) + + def test_small_fragments(self): + pktlines = [] + parser = PktLineParser(pktlines.append) + parser.parse(b"00") + parser.parse(b"05") + parser.parse(b"z0000") + self.assertEqual(pktlines, [b"z", None]) + self.assertEqual(b"", parser.get_tail()) + + def test_multiple_packets(self): + pktlines = [] + parser = PktLineParser(pktlines.append) + parser.parse(b"0005z0006aba") + self.assertEqual(pktlines, [b"z", b"ab"]) + self.assertEqual(b"a", parser.get_tail()) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_reflog.py b/env/lib/python3.8/site-packages/dulwich/tests/test_reflog.py new file mode 100644 index 00000000..c8bfcaf5 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_reflog.py @@ -0,0 +1,146 @@ +# test_reflog.py -- tests for reflog.py +# encoding: utf-8 +# Copyright (C) 2015 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +"""Tests for dulwich.reflog.""" + +from io import BytesIO + +from dulwich.objects import ZERO_SHA +from dulwich.reflog import ( + drop_reflog_entry, + format_reflog_line, + parse_reflog_line, + read_reflog, +) + +from dulwich.tests import ( + TestCase, +) + + +class ReflogLineTests(TestCase): + def test_format(self): + self.assertEqual( + b"0000000000000000000000000000000000000000 " + b"49030649db3dfec5a9bc03e5dde4255a14499f16 Jelmer Vernooij " + b" 1446552482 +0000 " + b"clone: from git://jelmer.uk/samba", + format_reflog_line( + b"0000000000000000000000000000000000000000", + b"49030649db3dfec5a9bc03e5dde4255a14499f16", + b"Jelmer Vernooij ", + 1446552482, + 0, + b"clone: from git://jelmer.uk/samba", + ), + ) + + self.assertEqual( + b"0000000000000000000000000000000000000000 " + b"49030649db3dfec5a9bc03e5dde4255a14499f16 Jelmer Vernooij " + b" 1446552482 +0000 " + b"clone: from git://jelmer.uk/samba", + format_reflog_line( + None, + b"49030649db3dfec5a9bc03e5dde4255a14499f16", + b"Jelmer Vernooij ", + 1446552482, + 0, + b"clone: from git://jelmer.uk/samba", + ), + ) + + def test_parse(self): + reflog_line = ( + b"0000000000000000000000000000000000000000 " + b"49030649db3dfec5a9bc03e5dde4255a14499f16 Jelmer Vernooij " + b" 1446552482 +0000 " + b"clone: from git://jelmer.uk/samba" + ) + self.assertEqual( + ( + b"0000000000000000000000000000000000000000", + b"49030649db3dfec5a9bc03e5dde4255a14499f16", + b"Jelmer Vernooij ", + 1446552482, + 0, + b"clone: from git://jelmer.uk/samba", + ), + parse_reflog_line(reflog_line), + ) + + +_TEST_REFLOG = ( + b"0000000000000000000000000000000000000000 " + b"49030649db3dfec5a9bc03e5dde4255a14499f16 Jelmer Vernooij " + b" 1446552482 +0000 " + b"clone: from git://jelmer.uk/samba\n" + b"49030649db3dfec5a9bc03e5dde4255a14499f16 " + b"42d06bd4b77fed026b154d16493e5deab78f02ec Jelmer Vernooij " + b" 1446552483 +0000 " + b"clone: from git://jelmer.uk/samba\n" + b"42d06bd4b77fed026b154d16493e5deab78f02ec " + b"df6800012397fb85c56e7418dd4eb9405dee075c Jelmer Vernooij " + b" 1446552484 +0000 " + b"clone: from git://jelmer.uk/samba\n" +) + + +class ReflogDropTests(TestCase): + def setUp(self): + TestCase.setUp(self) + self.f = BytesIO(_TEST_REFLOG) + self.original_log = list(read_reflog(self.f)) + self.f.seek(0) + + def _read_log(self): + self.f.seek(0) + return list(read_reflog(self.f)) + + def test_invalid(self): + self.assertRaises(ValueError, drop_reflog_entry, self.f, -1) + + def test_drop_entry(self): + drop_reflog_entry(self.f, 0) + log = self._read_log() + self.assertEqual(len(log), 2) + self.assertEqual(self.original_log[0:2], log) + + self.f.seek(0) + drop_reflog_entry(self.f, 1) + log = self._read_log() + self.assertEqual(len(log), 1) + self.assertEqual(self.original_log[1], log[0]) + + def test_drop_entry_with_rewrite(self): + drop_reflog_entry(self.f, 1, True) + log = self._read_log() + self.assertEqual(len(log), 2) + self.assertEqual(self.original_log[0], log[0]) + self.assertEqual(self.original_log[0].new_sha, log[1].old_sha) + self.assertEqual(self.original_log[2].new_sha, log[1].new_sha) + + self.f.seek(0) + drop_reflog_entry(self.f, 1, True) + log = self._read_log() + self.assertEqual(len(log), 1) + self.assertEqual(ZERO_SHA, log[0].old_sha) + self.assertEqual(self.original_log[2].new_sha, log[0].new_sha) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_refs.py b/env/lib/python3.8/site-packages/dulwich/tests/test_refs.py new file mode 100644 index 00000000..4389d600 --- /dev/null +++ 
b/env/lib/python3.8/site-packages/dulwich/tests/test_refs.py @@ -0,0 +1,766 @@ +# test_refs.py -- tests for refs.py +# encoding: utf-8 +# Copyright (C) 2013 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for dulwich.refs.""" + +from io import BytesIO +import os +import sys +import tempfile + +from dulwich import errors +from dulwich.file import ( + GitFile, +) +from dulwich.objects import ZERO_SHA +from dulwich.refs import ( + DictRefsContainer, + InfoRefsContainer, + SymrefLoop, + check_ref_format, + _split_ref_line, + parse_symref_value, + read_packed_refs_with_peeled, + read_packed_refs, + strip_peeled_refs, + write_packed_refs, +) +from dulwich.repo import Repo + +from dulwich.tests import ( + SkipTest, + TestCase, +) + +from dulwich.tests.utils import ( + open_repo, + tear_down_repo, +) + + +class CheckRefFormatTests(TestCase): + """Tests for the check_ref_format function. + + These are the same tests as in the git test suite. + """ + + def test_valid(self): + self.assertTrue(check_ref_format(b"heads/foo")) + self.assertTrue(check_ref_format(b"foo/bar/baz")) + self.assertTrue(check_ref_format(b"refs///heads/foo")) + self.assertTrue(check_ref_format(b"foo./bar")) + self.assertTrue(check_ref_format(b"heads/foo@bar")) + self.assertTrue(check_ref_format(b"heads/fix.lock.error")) + + def test_invalid(self): + self.assertFalse(check_ref_format(b"foo")) + self.assertFalse(check_ref_format(b"heads/foo/")) + self.assertFalse(check_ref_format(b"./foo")) + self.assertFalse(check_ref_format(b".refs/foo")) + self.assertFalse(check_ref_format(b"heads/foo..bar")) + self.assertFalse(check_ref_format(b"heads/foo?bar")) + self.assertFalse(check_ref_format(b"heads/foo.lock")) + self.assertFalse(check_ref_format(b"heads/v@{ation")) + self.assertFalse(check_ref_format(b"heads/foo\bar")) + + +ONES = b"1" * 40 +TWOS = b"2" * 40 +THREES = b"3" * 40 +FOURS = b"4" * 40 + + +class PackedRefsFileTests(TestCase): + def test_split_ref_line_errors(self): + self.assertRaises(errors.PackedRefsException, _split_ref_line, b"singlefield") + self.assertRaises(errors.PackedRefsException, _split_ref_line, b"badsha name") + self.assertRaises( + errors.PackedRefsException, + _split_ref_line, + ONES + b" bad/../refname", + ) + + def test_read_without_peeled(self): + f = BytesIO(b"\n".join([b"# comment", ONES + b" ref/1", TWOS + b" ref/2"])) + self.assertEqual( + [(ONES, b"ref/1"), (TWOS, b"ref/2")], list(read_packed_refs(f)) + ) + + def test_read_without_peeled_errors(self): + f = BytesIO(b"\n".join([ONES + b" ref/1", b"^" + TWOS])) + self.assertRaises(errors.PackedRefsException, list, read_packed_refs(f)) + + def test_read_with_peeled(self): + f = BytesIO( + b"\n".join( + [ + ONES + b" ref/1", + TWOS + b" ref/2", + b"^" + THREES, + FOURS + b" ref/4", + ] + ) + ) 
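+ # A minimal sketch of the packed-refs convention exercised here (an
+ # illustration, not dulwich's actual parser): a line starting with "^"
+ # carries the peeled (fully dereferenced tag) object id for the ref named
+ # on the immediately preceding line, so ref/2 peels to THREES while ref/1
+ # and ref/4 get no peeled entry:
+ #
+ # >>> peeled, last = {}, None
+ # >>> for line in [TWOS + b" ref/2", b"^" + THREES]:
+ # ...     if line.startswith(b"^"):
+ # ...         peeled[last] = line[1:]
+ # ...     else:
+ # ...         last = line.split(b" ", 1)[1]
+ # >>> peeled == {b"ref/2": THREES}
+ # True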
+ self.assertEqual( + [ + (ONES, b"ref/1", None), + (TWOS, b"ref/2", THREES), + (FOURS, b"ref/4", None), + ], + list(read_packed_refs_with_peeled(f)), + ) + + def test_read_with_peeled_errors(self): + f = BytesIO(b"\n".join([b"^" + TWOS, ONES + b" ref/1"])) + self.assertRaises(errors.PackedRefsException, list, read_packed_refs(f)) + + f = BytesIO(b"\n".join([ONES + b" ref/1", b"^" + TWOS, b"^" + THREES])) + self.assertRaises(errors.PackedRefsException, list, read_packed_refs(f)) + + def test_write_with_peeled(self): + f = BytesIO() + write_packed_refs(f, {b"ref/1": ONES, b"ref/2": TWOS}, {b"ref/1": THREES}) + self.assertEqual( + b"\n".join( + [ + b"# pack-refs with: peeled", + ONES + b" ref/1", + b"^" + THREES, + TWOS + b" ref/2", + ] + ) + + b"\n", + f.getvalue(), + ) + + def test_write_without_peeled(self): + f = BytesIO() + write_packed_refs(f, {b"ref/1": ONES, b"ref/2": TWOS}) + self.assertEqual( + b"\n".join([ONES + b" ref/1", TWOS + b" ref/2"]) + b"\n", + f.getvalue(), + ) + + +# Dict of refs that we expect all RefsContainerTests subclasses to define. +_TEST_REFS = { + b"HEAD": b"42d06bd4b77fed026b154d16493e5deab78f02ec", + b"refs/heads/40-char-ref-aaaaaaaaaaaaaaaaaa": b"42d06bd4b77fed026b154d16493e5deab78f02ec", + b"refs/heads/master": b"42d06bd4b77fed026b154d16493e5deab78f02ec", + b"refs/heads/packed": b"42d06bd4b77fed026b154d16493e5deab78f02ec", + b"refs/tags/refs-0.1": b"df6800012397fb85c56e7418dd4eb9405dee075c", + b"refs/tags/refs-0.2": b"3ec9c43c84ff242e3ef4a9fc5bc111fd780a76a8", + b"refs/heads/loop": b"ref: refs/heads/loop", +} + + +class RefsContainerTests(object): + def test_keys(self): + actual_keys = set(self._refs.keys()) + self.assertEqual(set(self._refs.allkeys()), actual_keys) + self.assertEqual(set(_TEST_REFS.keys()), actual_keys) + + actual_keys = self._refs.keys(b"refs/heads") + actual_keys.discard(b"loop") + self.assertEqual( + [b"40-char-ref-aaaaaaaaaaaaaaaaaa", b"master", b"packed"], + sorted(actual_keys), + ) + self.assertEqual( + [b"refs-0.1", b"refs-0.2"], sorted(self._refs.keys(b"refs/tags")) + ) + + def test_iter(self): + actual_keys = set(self._refs.keys()) + self.assertEqual(set(self._refs), actual_keys) + self.assertEqual(set(_TEST_REFS.keys()), actual_keys) + + def test_as_dict(self): + # refs/heads/loop does not show up even if it exists + expected_refs = dict(_TEST_REFS) + del expected_refs[b"refs/heads/loop"] + self.assertEqual(expected_refs, self._refs.as_dict()) + + def test_get_symrefs(self): + self._refs.set_symbolic_ref(b"refs/heads/src", b"refs/heads/dst") + symrefs = self._refs.get_symrefs() + if b"HEAD" in symrefs: + symrefs.pop(b"HEAD") + self.assertEqual( + { + b"refs/heads/src": b"refs/heads/dst", + b"refs/heads/loop": b"refs/heads/loop", + }, + symrefs, + ) + + def test_setitem(self): + self._refs[b"refs/some/ref"] = b"42d06bd4b77fed026b154d16493e5deab78f02ec" + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + self._refs[b"refs/some/ref"], + ) + self.assertRaises( + errors.RefFormatError, + self._refs.__setitem__, + b"notrefs/foo", + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + ) + + def test_set_if_equals(self): + nines = b"9" * 40 + self.assertFalse(self._refs.set_if_equals(b"HEAD", b"c0ffee", nines)) + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", self._refs[b"HEAD"] + ) + + self.assertTrue( + self._refs.set_if_equals( + b"HEAD", b"42d06bd4b77fed026b154d16493e5deab78f02ec", nines + ) + ) + self.assertEqual(nines, self._refs[b"HEAD"]) + + # Setting the ref again is a no-op, but will return True. 
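+ # More generally (a summary inferred from the assertions in this method,
+ # not from upstream documentation): set_if_equals() behaves like a
+ # compare-and-swap on the ref -- the write succeeds only when the current
+ # value matches the given old value, with None meaning "set
+ # unconditionally" and ZERO_SHA meaning "the ref must not exist yet":
+ #
+ # set_if_equals(name, b"c0ffee", new)   # refused unless ref == c0ffee
+ # set_if_equals(name, None, new)        # unconditional update
+ # set_if_equals(name, ZERO_SHA, new)    # only if the ref is absent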
+ self.assertTrue(self._refs.set_if_equals(b"HEAD", nines, nines)) + self.assertEqual(nines, self._refs[b"HEAD"]) + + self.assertTrue(self._refs.set_if_equals(b"refs/heads/master", None, nines)) + self.assertEqual(nines, self._refs[b"refs/heads/master"]) + + self.assertTrue( + self._refs.set_if_equals(b"refs/heads/nonexistent", ZERO_SHA, nines) + ) + self.assertEqual(nines, self._refs[b"refs/heads/nonexistent"]) + + def test_add_if_new(self): + nines = b"9" * 40 + self.assertFalse(self._refs.add_if_new(b"refs/heads/master", nines)) + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + self._refs[b"refs/heads/master"], + ) + + self.assertTrue(self._refs.add_if_new(b"refs/some/ref", nines)) + self.assertEqual(nines, self._refs[b"refs/some/ref"]) + + def test_set_symbolic_ref(self): + self._refs.set_symbolic_ref(b"refs/heads/symbolic", b"refs/heads/master") + self.assertEqual( + b"ref: refs/heads/master", + self._refs.read_loose_ref(b"refs/heads/symbolic"), + ) + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + self._refs[b"refs/heads/symbolic"], + ) + + def test_set_symbolic_ref_overwrite(self): + nines = b"9" * 40 + self.assertNotIn(b"refs/heads/symbolic", self._refs) + self._refs[b"refs/heads/symbolic"] = nines + self.assertEqual(nines, self._refs.read_loose_ref(b"refs/heads/symbolic")) + self._refs.set_symbolic_ref(b"refs/heads/symbolic", b"refs/heads/master") + self.assertEqual( + b"ref: refs/heads/master", + self._refs.read_loose_ref(b"refs/heads/symbolic"), + ) + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + self._refs[b"refs/heads/symbolic"], + ) + + def test_check_refname(self): + self._refs._check_refname(b"HEAD") + self._refs._check_refname(b"refs/stash") + self._refs._check_refname(b"refs/heads/foo") + + self.assertRaises(errors.RefFormatError, self._refs._check_refname, b"refs") + self.assertRaises( + errors.RefFormatError, self._refs._check_refname, b"notrefs/foo" + ) + + def test_contains(self): + self.assertIn(b"refs/heads/master", self._refs) + self.assertNotIn(b"refs/heads/bar", self._refs) + + def test_delitem(self): + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + self._refs[b"refs/heads/master"], + ) + del self._refs[b"refs/heads/master"] + self.assertRaises(KeyError, lambda: self._refs[b"refs/heads/master"]) + + def test_remove_if_equals(self): + self.assertFalse(self._refs.remove_if_equals(b"HEAD", b"c0ffee")) + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", self._refs[b"HEAD"] + ) + self.assertTrue( + self._refs.remove_if_equals( + b"refs/tags/refs-0.2", + b"3ec9c43c84ff242e3ef4a9fc5bc111fd780a76a8", + ) + ) + self.assertTrue(self._refs.remove_if_equals(b"refs/tags/refs-0.2", ZERO_SHA)) + self.assertNotIn(b"refs/tags/refs-0.2", self._refs) + + def test_import_refs_name(self): + self._refs[ + b"refs/remotes/origin/other" + ] = b"48d01bd4b77fed026b154d16493e5deab78f02ec" + self._refs.import_refs( + b"refs/remotes/origin", + {b"master": b"42d06bd4b77fed026b154d16493e5deab78f02ec"}, + ) + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + self._refs[b"refs/remotes/origin/master"], + ) + self.assertEqual( + b"48d01bd4b77fed026b154d16493e5deab78f02ec", + self._refs[b"refs/remotes/origin/other"], + ) + + def test_import_refs_name_prune(self): + self._refs[ + b"refs/remotes/origin/other" + ] = b"48d01bd4b77fed026b154d16493e5deab78f02ec" + self._refs.import_refs( + b"refs/remotes/origin", + {b"master": b"42d06bd4b77fed026b154d16493e5deab78f02ec"}, + prune=True, + ) + 
self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + self._refs[b"refs/remotes/origin/master"], + ) + self.assertNotIn(b"refs/remotes/origin/other", self._refs) + + +class DictRefsContainerTests(RefsContainerTests, TestCase): + def setUp(self): + TestCase.setUp(self) + self._refs = DictRefsContainer(dict(_TEST_REFS)) + + def test_invalid_refname(self): + # FIXME: Move this test into RefsContainerTests, but requires + # some way of injecting invalid refs. + self._refs._refs[b"refs/stash"] = b"00" * 20 + expected_refs = dict(_TEST_REFS) + del expected_refs[b"refs/heads/loop"] + expected_refs[b"refs/stash"] = b"00" * 20 + self.assertEqual(expected_refs, self._refs.as_dict()) + + +class DiskRefsContainerTests(RefsContainerTests, TestCase): + def setUp(self): + TestCase.setUp(self) + self._repo = open_repo("refs.git") + self.addCleanup(tear_down_repo, self._repo) + self._refs = self._repo.refs + + def test_get_packed_refs(self): + self.assertEqual( + { + b"refs/heads/packed": b"42d06bd4b77fed026b154d16493e5deab78f02ec", + b"refs/tags/refs-0.1": b"df6800012397fb85c56e7418dd4eb9405dee075c", + }, + self._refs.get_packed_refs(), + ) + + def test_get_peeled_not_packed(self): + # not packed + self.assertEqual(None, self._refs.get_peeled(b"refs/tags/refs-0.2")) + self.assertEqual( + b"3ec9c43c84ff242e3ef4a9fc5bc111fd780a76a8", + self._refs[b"refs/tags/refs-0.2"], + ) + + # packed, known not peelable + self.assertEqual( + self._refs[b"refs/heads/packed"], + self._refs.get_peeled(b"refs/heads/packed"), + ) + + # packed, peeled + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + self._refs.get_peeled(b"refs/tags/refs-0.1"), + ) + + def test_setitem(self): + RefsContainerTests.test_setitem(self) + path = os.path.join(self._refs.path, b"refs", b"some", b"ref") + with open(path, "rb") as f: + self.assertEqual(b"42d06bd4b77fed026b154d16493e5deab78f02ec", f.read()[:40]) + + self.assertRaises( + OSError, + self._refs.__setitem__, + b"refs/some/ref/sub", + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + ) + + def test_delete_refs_container(self): + # We shouldn't delete the refs directory + self._refs[b'refs/heads/blah'] = b"42d06bd4b77fed026b154d16493e5deab78f02ec" + for ref in self._refs.allkeys(): + del self._refs[ref] + self.assertTrue(os.path.exists(os.path.join(self._refs.path, b'refs'))) + + def test_setitem_packed(self): + with open(os.path.join(self._refs.path, b"packed-refs"), "w") as f: + f.write("# pack-refs with: peeled fully-peeled sorted \n") + f.write("42d06bd4b77fed026b154d16493e5deab78f02ec refs/heads/packed\n") + + # It's allowed to set a new ref on a packed ref, the new ref will be + # placed outside on refs/ + self._refs[b"refs/heads/packed"] = b"3ec9c43c84ff242e3ef4a9fc5bc111fd780a76a8" + packed_ref_path = os.path.join(self._refs.path, b"refs", b"heads", b"packed") + with open(packed_ref_path, "rb") as f: + self.assertEqual(b"3ec9c43c84ff242e3ef4a9fc5bc111fd780a76a8", f.read()[:40]) + + self.assertRaises( + OSError, + self._refs.__setitem__, + b"refs/heads/packed/sub", + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + ) + + def test_setitem_symbolic(self): + ones = b"1" * 40 + self._refs[b"HEAD"] = ones + self.assertEqual(ones, self._refs[b"HEAD"]) + + # ensure HEAD was not modified + f = open(os.path.join(self._refs.path, b"HEAD"), "rb") + v = next(iter(f)).rstrip(b"\n\r") + f.close() + self.assertEqual(b"ref: refs/heads/master", v) + + # ensure the symbolic link was written through + f = open(os.path.join(self._refs.path, b"refs", b"heads", b"master"), "rb") 
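+ # On-disk detail (an inference from this check, not from upstream docs):
+ # a loose ref file stores the 40-character hex object id followed by a
+ # newline, hence the f.read()[:40] slice below; the assignment to HEAD
+ # above was routed through the symref, so the bytes land in
+ # refs/heads/master while HEAD itself keeps its "ref: refs/heads/master"
+ # line.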
+ self.assertEqual(ones, f.read()[:40]) + f.close() + + def test_set_if_equals(self): + RefsContainerTests.test_set_if_equals(self) + + # ensure symref was followed + self.assertEqual(b"9" * 40, self._refs[b"refs/heads/master"]) + + # ensure lockfile was deleted + self.assertFalse( + os.path.exists( + os.path.join(self._refs.path, b"refs", b"heads", b"master.lock") + ) + ) + self.assertFalse(os.path.exists(os.path.join(self._refs.path, b"HEAD.lock"))) + + def test_add_if_new_packed(self): + # don't overwrite packed ref + self.assertFalse(self._refs.add_if_new(b"refs/tags/refs-0.1", b"9" * 40)) + self.assertEqual( + b"df6800012397fb85c56e7418dd4eb9405dee075c", + self._refs[b"refs/tags/refs-0.1"], + ) + + def test_add_if_new_symbolic(self): + # Use an empty repo instead of the default. + repo_dir = os.path.join(tempfile.mkdtemp(), "test") + os.makedirs(repo_dir) + repo = Repo.init(repo_dir) + self.addCleanup(tear_down_repo, repo) + refs = repo.refs + + nines = b"9" * 40 + self.assertEqual(b"ref: refs/heads/master", refs.read_ref(b"HEAD")) + self.assertNotIn(b"refs/heads/master", refs) + self.assertTrue(refs.add_if_new(b"HEAD", nines)) + self.assertEqual(b"ref: refs/heads/master", refs.read_ref(b"HEAD")) + self.assertEqual(nines, refs[b"HEAD"]) + self.assertEqual(nines, refs[b"refs/heads/master"]) + self.assertFalse(refs.add_if_new(b"HEAD", b"1" * 40)) + self.assertEqual(nines, refs[b"HEAD"]) + self.assertEqual(nines, refs[b"refs/heads/master"]) + + def test_follow(self): + self.assertEqual( + ( + [b"HEAD", b"refs/heads/master"], + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + ), + self._refs.follow(b"HEAD"), + ) + self.assertEqual( + ( + [b"refs/heads/master"], + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + ), + self._refs.follow(b"refs/heads/master"), + ) + self.assertRaises(SymrefLoop, self._refs.follow, b"refs/heads/loop") + + def test_set_overwrite_loop(self): + self.assertRaises(SymrefLoop, self._refs.follow, b"refs/heads/loop") + self._refs[b'refs/heads/loop'] = ( + b"42d06bd4b77fed026b154d16493e5deab78f02ec") + self.assertEqual( + ([b'refs/heads/loop'], b'42d06bd4b77fed026b154d16493e5deab78f02ec'), + self._refs.follow(b"refs/heads/loop")) + + def test_delitem(self): + RefsContainerTests.test_delitem(self) + ref_file = os.path.join(self._refs.path, b"refs", b"heads", b"master") + self.assertFalse(os.path.exists(ref_file)) + self.assertNotIn(b"refs/heads/master", self._refs.get_packed_refs()) + + def test_delitem_symbolic(self): + self.assertEqual(b"ref: refs/heads/master", self._refs.read_loose_ref(b"HEAD")) + del self._refs[b"HEAD"] + self.assertRaises(KeyError, lambda: self._refs[b"HEAD"]) + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + self._refs[b"refs/heads/master"], + ) + self.assertFalse(os.path.exists(os.path.join(self._refs.path, b"HEAD"))) + + def test_remove_if_equals_symref(self): + # HEAD is a symref, so shouldn't equal its dereferenced value + self.assertFalse( + self._refs.remove_if_equals( + b"HEAD", b"42d06bd4b77fed026b154d16493e5deab78f02ec" + ) + ) + self.assertTrue( + self._refs.remove_if_equals( + b"refs/heads/master", + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + ) + ) + self.assertRaises(KeyError, lambda: self._refs[b"refs/heads/master"]) + + # HEAD is now a broken symref + self.assertRaises(KeyError, lambda: self._refs[b"HEAD"]) + self.assertEqual(b"ref: refs/heads/master", self._refs.read_loose_ref(b"HEAD")) + + self.assertFalse( + os.path.exists( + os.path.join(self._refs.path, b"refs", b"heads", b"master.lock") + ) + ) + 
self.assertFalse(os.path.exists(os.path.join(self._refs.path, b"HEAD.lock"))) + + def test_remove_packed_without_peeled(self): + refs_file = os.path.join(self._repo.path, "packed-refs") + f = GitFile(refs_file) + refs_data = f.read() + f.close() + f = GitFile(refs_file, "wb") + f.write( + b"\n".join( + line + for line in refs_data.split(b"\n") + if not line or line[0] not in b"#^" + ) + ) + f.close() + self._repo = Repo(self._repo.path) + refs = self._repo.refs + self.assertTrue( + refs.remove_if_equals( + b"refs/heads/packed", + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + ) + ) + + def test_remove_if_equals_packed(self): + # test removing ref that is only packed + self.assertEqual( + b"df6800012397fb85c56e7418dd4eb9405dee075c", + self._refs[b"refs/tags/refs-0.1"], + ) + self.assertTrue( + self._refs.remove_if_equals( + b"refs/tags/refs-0.1", + b"df6800012397fb85c56e7418dd4eb9405dee075c", + ) + ) + self.assertRaises(KeyError, lambda: self._refs[b"refs/tags/refs-0.1"]) + + def test_remove_parent(self): + self._refs[b"refs/heads/foo/bar"] = b"df6800012397fb85c56e7418dd4eb9405dee075c" + del self._refs[b"refs/heads/foo/bar"] + ref_file = os.path.join( + self._refs.path, + b"refs", + b"heads", + b"foo", + b"bar", + ) + self.assertFalse(os.path.exists(ref_file)) + ref_file = os.path.join(self._refs.path, b"refs", b"heads", b"foo") + self.assertFalse(os.path.exists(ref_file)) + ref_file = os.path.join(self._refs.path, b"refs", b"heads") + self.assertTrue(os.path.exists(ref_file)) + self._refs[b"refs/heads/foo"] = b"df6800012397fb85c56e7418dd4eb9405dee075c" + + def test_read_ref(self): + self.assertEqual(b"ref: refs/heads/master", self._refs.read_ref(b"HEAD")) + self.assertEqual( + b"42d06bd4b77fed026b154d16493e5deab78f02ec", + self._refs.read_ref(b"refs/heads/packed"), + ) + self.assertEqual(None, self._refs.read_ref(b"nonexistent")) + + def test_read_loose_ref(self): + self._refs[b"refs/heads/foo"] = b"df6800012397fb85c56e7418dd4eb9405dee075c" + + self.assertEqual(None, self._refs.read_ref(b"refs/heads/foo/bar")) + + def test_non_ascii(self): + try: + encoded_ref = os.fsencode(u"refs/tags/schön") + except UnicodeEncodeError as exc: + raise SkipTest( + "filesystem encoding doesn't support special character" + ) from exc + p = os.path.join(os.fsencode(self._repo.path), encoded_ref) + with open(p, "w") as f: + f.write("00" * 20) + + expected_refs = dict(_TEST_REFS) + expected_refs[encoded_ref] = b"00" * 20 + del expected_refs[b"refs/heads/loop"] + + self.assertEqual(expected_refs, self._repo.get_refs()) + + def test_cyrillic(self): + if sys.platform in ("darwin", "win32"): + raise SkipTest("filesystem encoding doesn't support arbitrary bytes") + # reported in https://github.com/dulwich/dulwich/issues/608 + name = b"\xcd\xee\xe2\xe0\xff\xe2\xe5\xf2\xea\xe01" + encoded_ref = b"refs/heads/" + name + with open(os.path.join(os.fsencode(self._repo.path), encoded_ref), "w") as f: + f.write("00" * 20) + + expected_refs = set(_TEST_REFS.keys()) + expected_refs.add(encoded_ref) + + self.assertEqual(expected_refs, set(self._repo.refs.allkeys())) + self.assertEqual( + {r[len(b"refs/") :] for r in expected_refs if r.startswith(b"refs/")}, + set(self._repo.refs.subkeys(b"refs/")), + ) + expected_refs.remove(b"refs/heads/loop") + expected_refs.add(b"HEAD") + self.assertEqual(expected_refs, set(self._repo.get_refs().keys())) + + +_TEST_REFS_SERIALIZED = ( + b"42d06bd4b77fed026b154d16493e5deab78f02ec\t" + b"refs/heads/40-char-ref-aaaaaaaaaaaaaaaaaa\n" + 
b"42d06bd4b77fed026b154d16493e5deab78f02ec\trefs/heads/master\n" + b"42d06bd4b77fed026b154d16493e5deab78f02ec\trefs/heads/packed\n" + b"df6800012397fb85c56e7418dd4eb9405dee075c\trefs/tags/refs-0.1\n" + b"3ec9c43c84ff242e3ef4a9fc5bc111fd780a76a8\trefs/tags/refs-0.2\n" +) + + +class InfoRefsContainerTests(TestCase): + def test_invalid_refname(self): + text = _TEST_REFS_SERIALIZED + b"00" * 20 + b"\trefs/stash\n" + refs = InfoRefsContainer(BytesIO(text)) + expected_refs = dict(_TEST_REFS) + del expected_refs[b"HEAD"] + expected_refs[b"refs/stash"] = b"00" * 20 + del expected_refs[b"refs/heads/loop"] + self.assertEqual(expected_refs, refs.as_dict()) + + def test_keys(self): + refs = InfoRefsContainer(BytesIO(_TEST_REFS_SERIALIZED)) + actual_keys = set(refs.keys()) + self.assertEqual(set(refs.allkeys()), actual_keys) + expected_refs = dict(_TEST_REFS) + del expected_refs[b"HEAD"] + del expected_refs[b"refs/heads/loop"] + self.assertEqual(set(expected_refs.keys()), actual_keys) + + actual_keys = refs.keys(b"refs/heads") + actual_keys.discard(b"loop") + self.assertEqual( + [b"40-char-ref-aaaaaaaaaaaaaaaaaa", b"master", b"packed"], + sorted(actual_keys), + ) + self.assertEqual([b"refs-0.1", b"refs-0.2"], sorted(refs.keys(b"refs/tags"))) + + def test_as_dict(self): + refs = InfoRefsContainer(BytesIO(_TEST_REFS_SERIALIZED)) + # refs/heads/loop does not show up even if it exists + expected_refs = dict(_TEST_REFS) + del expected_refs[b"HEAD"] + del expected_refs[b"refs/heads/loop"] + self.assertEqual(expected_refs, refs.as_dict()) + + def test_contains(self): + refs = InfoRefsContainer(BytesIO(_TEST_REFS_SERIALIZED)) + self.assertIn(b"refs/heads/master", refs) + self.assertNotIn(b"refs/heads/bar", refs) + + def test_get_peeled(self): + refs = InfoRefsContainer(BytesIO(_TEST_REFS_SERIALIZED)) + # refs/heads/loop does not show up even if it exists + self.assertEqual( + _TEST_REFS[b"refs/heads/master"], + refs.get_peeled(b"refs/heads/master"), + ) + + +class ParseSymrefValueTests(TestCase): + def test_valid(self): + self.assertEqual(b"refs/heads/foo", parse_symref_value(b"ref: refs/heads/foo")) + + def test_invalid(self): + self.assertRaises(ValueError, parse_symref_value, b"foobar") + + +class StripPeeledRefsTests(TestCase): + + all_refs = { + b"refs/heads/master": b"8843d7f92416211de9ebb963ff4ce28125932878", + b"refs/heads/testing": b"186a005b134d8639a58b6731c7c1ea821a6eedba", + b"refs/tags/1.0.0": b"a93db4b0360cc635a2b93675010bac8d101f73f0", + b"refs/tags/1.0.0^{}": b"a93db4b0360cc635a2b93675010bac8d101f73f0", + b"refs/tags/2.0.0": b"0749936d0956c661ac8f8d3483774509c165f89e", + b"refs/tags/2.0.0^{}": b"0749936d0956c661ac8f8d3483774509c165f89e", + } + non_peeled_refs = { + b"refs/heads/master": b"8843d7f92416211de9ebb963ff4ce28125932878", + b"refs/heads/testing": b"186a005b134d8639a58b6731c7c1ea821a6eedba", + b"refs/tags/1.0.0": b"a93db4b0360cc635a2b93675010bac8d101f73f0", + b"refs/tags/2.0.0": b"0749936d0956c661ac8f8d3483774509c165f89e", + } + + def test_strip_peeled_refs(self): + # Simple check of two dicts + self.assertEqual(strip_peeled_refs(self.all_refs), self.non_peeled_refs) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_repository.py b/env/lib/python3.8/site-packages/dulwich/tests/test_repository.py new file mode 100644 index 00000000..ad48c89d --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_repository.py @@ -0,0 +1,1509 @@ +# -*- coding: utf-8 -*- +# test_repository.py -- tests for repository.py +# Copyright (C) 2007 James Westby +# +# Dulwich is 
dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for the repository.""" + +import glob +import locale +import os +import shutil +import stat +import sys +import tempfile +import warnings + +from dulwich import errors +from dulwich import porcelain +from dulwich.object_store import ( + tree_lookup_path, +) +from dulwich import objects +from dulwich.config import Config +from dulwich.errors import NotGitRepository +from dulwich.repo import ( + InvalidUserIdentity, + Repo, + MemoryRepo, + check_user_identity, + UnsupportedVersion, + UnsupportedExtension, +) +from dulwich.tests import ( + TestCase, + skipIf, +) +from dulwich.tests.utils import ( + open_repo, + tear_down_repo, + setup_warning_catcher, +) + +missing_sha = b"b91fa4d900e17e99b433218e988c4eb4a3e9a097" + + +class CreateRepositoryTests(TestCase): + def assertFileContentsEqual(self, expected, repo, path): + f = repo.get_named_file(path) + if not f: + self.assertEqual(expected, None) + else: + with f: + self.assertEqual(expected, f.read()) + + def _check_repo_contents(self, repo, expect_bare): + self.assertEqual(expect_bare, repo.bare) + self.assertFileContentsEqual(b"Unnamed repository", repo, "description") + self.assertFileContentsEqual(b"", repo, os.path.join("info", "exclude")) + self.assertFileContentsEqual(None, repo, "nonexistent file") + barestr = b"bare = " + str(expect_bare).lower().encode("ascii") + with repo.get_named_file("config") as f: + config_text = f.read() + self.assertIn(barestr, config_text, "%r" % config_text) + expect_filemode = sys.platform != "win32" + barestr = b"filemode = " + str(expect_filemode).lower().encode("ascii") + with repo.get_named_file("config") as f: + config_text = f.read() + self.assertIn(barestr, config_text, "%r" % config_text) + + if isinstance(repo, Repo): + expected_mode = '0o100644' if expect_filemode else '0o100666' + expected = { + 'HEAD': expected_mode, + 'config': expected_mode, + 'description': expected_mode, + } + actual = { + f[len(repo._controldir) + 1:]: oct(os.stat(f).st_mode) + for f in glob.glob(os.path.join(repo._controldir, '*')) + if os.path.isfile(f) + } + + self.assertEqual(expected, actual) + + def test_create_memory(self): + repo = MemoryRepo.init_bare([], {}) + self._check_repo_contents(repo, True) + + def test_create_disk_bare(self): + tmp_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + repo = Repo.init_bare(tmp_dir) + self.assertEqual(tmp_dir, repo._controldir) + self._check_repo_contents(repo, True) + + def test_create_disk_non_bare(self): + tmp_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + repo = Repo.init(tmp_dir) + self.assertEqual(os.path.join(tmp_dir, ".git"), repo._controldir) + self._check_repo_contents(repo, False) + + def test_create_disk_non_bare_mkdir(self): + tmp_dir = 
tempfile.mkdtemp() + target_dir = os.path.join(tmp_dir, "target") + self.addCleanup(shutil.rmtree, tmp_dir) + repo = Repo.init(target_dir, mkdir=True) + self.assertEqual(os.path.join(target_dir, ".git"), repo._controldir) + self._check_repo_contents(repo, False) + + def test_create_disk_bare_mkdir(self): + tmp_dir = tempfile.mkdtemp() + target_dir = os.path.join(tmp_dir, "target") + self.addCleanup(shutil.rmtree, tmp_dir) + repo = Repo.init_bare(target_dir, mkdir=True) + self.assertEqual(target_dir, repo._controldir) + self._check_repo_contents(repo, True) + + +class MemoryRepoTests(TestCase): + def test_set_description(self): + r = MemoryRepo.init_bare([], {}) + description = b"Some description" + r.set_description(description) + self.assertEqual(description, r.get_description()) + + +class RepositoryRootTests(TestCase): + def mkdtemp(self): + return tempfile.mkdtemp() + + def open_repo(self, name): + temp_dir = self.mkdtemp() + repo = open_repo(name, temp_dir) + self.addCleanup(tear_down_repo, repo) + return repo + + def test_simple_props(self): + r = self.open_repo("a.git") + self.assertEqual(r.controldir(), r.path) + + def test_setitem(self): + r = self.open_repo("a.git") + r[b"refs/tags/foo"] = b"a90fa2d900a17e99b433217e988c4eb4a2e9a097" + self.assertEqual( + b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", r[b"refs/tags/foo"].id + ) + + def test_getitem_unicode(self): + r = self.open_repo("a.git") + + test_keys = [ + (b"refs/heads/master", True), + (b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", True), + (b"11" * 19 + b"--", False), + ] + + for k, contained in test_keys: + self.assertEqual(k in r, contained) + + # Avoid deprecation warning under Py3.2+ + if getattr(self, "assertRaisesRegex", None): + assertRaisesRegexp = self.assertRaisesRegex + else: + assertRaisesRegexp = self.assertRaisesRegexp + for k, _ in test_keys: + assertRaisesRegexp( + TypeError, + "'name' must be bytestring, not int", + r.__getitem__, + 12, + ) + + def test_delitem(self): + r = self.open_repo("a.git") + + del r[b"refs/heads/master"] + self.assertRaises(KeyError, lambda: r[b"refs/heads/master"]) + + del r[b"HEAD"] + self.assertRaises(KeyError, lambda: r[b"HEAD"]) + + self.assertRaises(ValueError, r.__delitem__, b"notrefs/foo") + + def test_get_refs(self): + r = self.open_repo("a.git") + self.assertEqual( + { + b"HEAD": b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + b"refs/heads/master": b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + b"refs/tags/mytag": b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a", + b"refs/tags/mytag-packed": b"b0931cadc54336e78a1d980420e3268903b57a50", + }, + r.get_refs(), + ) + + def test_head(self): + r = self.open_repo("a.git") + self.assertEqual(r.head(), b"a90fa2d900a17e99b433217e988c4eb4a2e9a097") + + def test_get_object(self): + r = self.open_repo("a.git") + obj = r.get_object(r.head()) + self.assertEqual(obj.type_name, b"commit") + + def test_get_object_non_existant(self): + r = self.open_repo("a.git") + self.assertRaises(KeyError, r.get_object, missing_sha) + + def test_contains_object(self): + r = self.open_repo("a.git") + self.assertIn(r.head(), r) + self.assertNotIn(b"z" * 40, r) + + def test_contains_ref(self): + r = self.open_repo("a.git") + self.assertIn(b"HEAD", r) + + def test_get_no_description(self): + r = self.open_repo("a.git") + self.assertIs(None, r.get_description()) + + def test_get_description(self): + r = self.open_repo("a.git") + with open(os.path.join(r.path, "description"), "wb") as f: + f.write(b"Some description") + self.assertEqual(b"Some description", 
r.get_description()) + + def test_set_description(self): + r = self.open_repo("a.git") + description = b"Some description" + r.set_description(description) + self.assertEqual(description, r.get_description()) + + def test_contains_missing(self): + r = self.open_repo("a.git") + self.assertNotIn(b"bar", r) + + def test_get_peeled(self): + # unpacked ref + r = self.open_repo("a.git") + tag_sha = b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a" + self.assertNotEqual(r[tag_sha].sha().hexdigest(), r.head()) + self.assertEqual(r.get_peeled(b"refs/tags/mytag"), r.head()) + + # packed ref with cached peeled value + packed_tag_sha = b"b0931cadc54336e78a1d980420e3268903b57a50" + parent_sha = r[r.head()].parents[0] + self.assertNotEqual(r[packed_tag_sha].sha().hexdigest(), parent_sha) + self.assertEqual(r.get_peeled(b"refs/tags/mytag-packed"), parent_sha) + + # TODO: add more corner cases to test repo + + def test_get_peeled_not_tag(self): + r = self.open_repo("a.git") + self.assertEqual(r.get_peeled(b"HEAD"), r.head()) + + def test_get_parents(self): + r = self.open_repo("a.git") + self.assertEqual( + [b"2a72d929692c41d8554c07f6301757ba18a65d91"], + r.get_parents(b"a90fa2d900a17e99b433217e988c4eb4a2e9a097"), + ) + r.update_shallow([b"a90fa2d900a17e99b433217e988c4eb4a2e9a097"], None) + self.assertEqual([], r.get_parents(b"a90fa2d900a17e99b433217e988c4eb4a2e9a097")) + + def test_get_walker(self): + r = self.open_repo("a.git") + # include defaults to [r.head()] + self.assertEqual( + [e.commit.id for e in r.get_walker()], + [r.head(), b"2a72d929692c41d8554c07f6301757ba18a65d91"], + ) + self.assertEqual( + [ + e.commit.id + for e in r.get_walker([b"2a72d929692c41d8554c07f6301757ba18a65d91"]) + ], + [b"2a72d929692c41d8554c07f6301757ba18a65d91"], + ) + self.assertEqual( + [ + e.commit.id + for e in r.get_walker(b"2a72d929692c41d8554c07f6301757ba18a65d91") + ], + [b"2a72d929692c41d8554c07f6301757ba18a65d91"], + ) + + def assertFilesystemHidden(self, path): + if sys.platform != "win32": + return + import ctypes + from ctypes.wintypes import DWORD, LPCWSTR + + GetFileAttributesW = ctypes.WINFUNCTYPE(DWORD, LPCWSTR)( + ("GetFileAttributesW", ctypes.windll.kernel32) + ) + self.assertTrue(2 & GetFileAttributesW(path)) + + def test_init_existing(self): + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + t = Repo.init(tmp_dir) + self.addCleanup(t.close) + self.assertEqual(os.listdir(tmp_dir), [".git"]) + self.assertFilesystemHidden(os.path.join(tmp_dir, ".git")) + + def test_init_mkdir(self): + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + repo_dir = os.path.join(tmp_dir, "a-repo") + + t = Repo.init(repo_dir, mkdir=True) + self.addCleanup(t.close) + self.assertEqual(os.listdir(repo_dir), [".git"]) + self.assertFilesystemHidden(os.path.join(repo_dir, ".git")) + + def test_init_mkdir_unicode(self): + repo_name = u"\xa7" + try: + os.fsencode(repo_name) + except UnicodeEncodeError: + self.skipTest("filesystem lacks unicode support") + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + repo_dir = os.path.join(tmp_dir, repo_name) + + t = Repo.init(repo_dir, mkdir=True) + self.addCleanup(t.close) + self.assertEqual(os.listdir(repo_dir), [".git"]) + self.assertFilesystemHidden(os.path.join(repo_dir, ".git")) + + @skipIf(sys.platform == "win32", "fails on Windows") + def test_fetch(self): + r = self.open_repo("a.git") + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + t = Repo.init(tmp_dir) + self.addCleanup(t.close) + r.fetch(t) + 
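+        # Repo.fetch(target) copies the objects reachable from this repo's
+        # refs into the target's object store; the target's refs are left
+        # untouched, which is why the checks below assert object presence only.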
self.assertIn(b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", t) + self.assertIn(b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", t) + self.assertIn(b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", t) + self.assertIn(b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a", t) + self.assertIn(b"b0931cadc54336e78a1d980420e3268903b57a50", t) + + @skipIf(sys.platform == "win32", "fails on Windows") + def test_fetch_ignores_missing_refs(self): + r = self.open_repo("a.git") + missing = b"1234566789123456789123567891234657373833" + r.refs[b"refs/heads/blah"] = missing + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + t = Repo.init(tmp_dir) + self.addCleanup(t.close) + r.fetch(t) + self.assertIn(b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", t) + self.assertIn(b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", t) + self.assertIn(b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", t) + self.assertIn(b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a", t) + self.assertIn(b"b0931cadc54336e78a1d980420e3268903b57a50", t) + self.assertNotIn(missing, t) + + def test_clone(self): + r = self.open_repo("a.git") + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + with r.clone(tmp_dir, mkdir=False) as t: + self.assertEqual( + { + b"HEAD": b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + b"refs/remotes/origin/master": b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + b"refs/remotes/origin/HEAD": b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + b"refs/heads/master": b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + b"refs/tags/mytag": b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a", + b"refs/tags/mytag-packed": b"b0931cadc54336e78a1d980420e3268903b57a50", + }, + t.refs.as_dict(), + ) + shas = [e.commit.id for e in r.get_walker()] + self.assertEqual( + shas, [t.head(), b"2a72d929692c41d8554c07f6301757ba18a65d91"] + ) + c = t.get_config() + encoded_path = r.path + if not isinstance(encoded_path, bytes): + encoded_path = os.fsencode(encoded_path) + self.assertEqual(encoded_path, c.get((b"remote", b"origin"), b"url")) + self.assertEqual( + b"+refs/heads/*:refs/remotes/origin/*", + c.get((b"remote", b"origin"), b"fetch"), + ) + + def test_clone_no_head(self): + temp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, temp_dir) + repo_dir = os.path.join(os.path.dirname(__file__), "..", "..", "testdata", "repos") + dest_dir = os.path.join(temp_dir, "a.git") + shutil.copytree(os.path.join(repo_dir, "a.git"), dest_dir, symlinks=True) + r = Repo(dest_dir) + del r.refs[b"refs/heads/master"] + del r.refs[b"HEAD"] + t = r.clone(os.path.join(temp_dir, "b.git"), mkdir=True) + self.assertEqual( + { + b"refs/tags/mytag": b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a", + b"refs/tags/mytag-packed": b"b0931cadc54336e78a1d980420e3268903b57a50", + }, + t.refs.as_dict(), + ) + + def test_clone_empty(self): + """Test clone() doesn't crash if HEAD points to a non-existing ref. + + This simulates cloning server-side bare repository either when it is + still empty or if user renames master branch and pushes private repo + to the server. + Non-bare repo HEAD always points to an existing ref. 
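+        A bare clone of such a repository must therefore tolerate a
+        dangling HEAD symref rather than crash.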
+ """ + r = self.open_repo("empty.git") + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + r.clone(tmp_dir, mkdir=False, bare=True) + + def test_clone_bare(self): + r = self.open_repo("a.git") + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + t = r.clone(tmp_dir, mkdir=False) + t.close() + + def test_clone_checkout_and_bare(self): + r = self.open_repo("a.git") + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + self.assertRaises( + ValueError, r.clone, tmp_dir, mkdir=False, checkout=True, bare=True + ) + + def test_clone_branch(self): + r = self.open_repo("a.git") + r.refs[b"refs/heads/mybranch"] = b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a" + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + with r.clone(tmp_dir, mkdir=False, branch=b"mybranch") as t: + # HEAD should point to specified branch and not origin HEAD + chain, sha = t.refs.follow(b"HEAD") + self.assertEqual(chain[-1], b"refs/heads/mybranch") + self.assertEqual(sha, b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a") + self.assertEqual( + t.refs[b"refs/remotes/origin/HEAD"], + b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + ) + + def test_clone_tag(self): + r = self.open_repo("a.git") + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + with r.clone(tmp_dir, mkdir=False, branch=b"mytag") as t: + # HEAD should be detached (and not a symbolic ref) at tag + self.assertEqual( + t.refs.read_ref(b"HEAD"), + b"28237f4dc30d0d462658d6b937b08a0f0b6ef55a", + ) + self.assertEqual( + t.refs[b"refs/remotes/origin/HEAD"], + b"a90fa2d900a17e99b433217e988c4eb4a2e9a097", + ) + + def test_clone_invalid_branch(self): + r = self.open_repo("a.git") + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + self.assertRaises( + ValueError, + r.clone, + tmp_dir, + mkdir=False, + branch=b"mybranch", + ) + + def test_merge_history(self): + r = self.open_repo("simple_merge.git") + shas = [e.commit.id for e in r.get_walker()] + self.assertEqual( + shas, + [ + b"5dac377bdded4c9aeb8dff595f0faeebcc8498cc", + b"ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd", + b"4cffe90e0a41ad3f5190079d7c8f036bde29cbe6", + b"60dacdc733de308bb77bb76ce0fb0f9b44c9769e", + b"0d89f20333fbb1d2f3a94da77f4981373d8f4310", + ], + ) + + def test_out_of_order_merge(self): + """Test that revision history is ordered by date, not parent order.""" + r = self.open_repo("ooo_merge.git") + shas = [e.commit.id for e in r.get_walker()] + self.assertEqual( + shas, + [ + b"7601d7f6231db6a57f7bbb79ee52e4d462fd44d1", + b"f507291b64138b875c28e03469025b1ea20bc614", + b"fb5b0425c7ce46959bec94d54b9a157645e114f5", + b"f9e39b120c68182a4ba35349f832d0e4e61f485c", + ], + ) + + def test_get_tags_empty(self): + r = self.open_repo("ooo_merge.git") + self.assertEqual({}, r.refs.as_dict(b"refs/tags")) + + def test_get_config(self): + r = self.open_repo("ooo_merge.git") + self.assertIsInstance(r.get_config(), Config) + + def test_get_config_stack(self): + r = self.open_repo("ooo_merge.git") + self.assertIsInstance(r.get_config_stack(), Config) + + def test_common_revisions(self): + """ + This test demonstrates that ``find_common_revisions()`` actually + returns common heads, not revisions; dulwich already uses + ``find_common_revisions()`` in such a manner (see + ``Repo.fetch_objects()``). + """ + + expected_shas = set([b"60dacdc733de308bb77bb76ce0fb0f9b44c9769e"]) + + # Source for objects. + r_base = self.open_repo("simple_merge.git") + + # Re-create each-side of the merge in simple_merge.git. 
+ # + # Since the trees and blobs are missing, the repository created is + # corrupted, but we're only checking for commits for the purpose of + # this test, so it's immaterial. + r1_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, r1_dir) + r1_commits = [ + b"ab64bbdcc51b170d21588e5c5d391ee5c0c96dfd", # HEAD + b"60dacdc733de308bb77bb76ce0fb0f9b44c9769e", + b"0d89f20333fbb1d2f3a94da77f4981373d8f4310", + ] + + r2_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, r2_dir) + r2_commits = [ + b"4cffe90e0a41ad3f5190079d7c8f036bde29cbe6", # HEAD + b"60dacdc733de308bb77bb76ce0fb0f9b44c9769e", + b"0d89f20333fbb1d2f3a94da77f4981373d8f4310", + ] + + r1 = Repo.init_bare(r1_dir) + for c in r1_commits: + r1.object_store.add_object(r_base.get_object(c)) + r1.refs[b"HEAD"] = r1_commits[0] + + r2 = Repo.init_bare(r2_dir) + for c in r2_commits: + r2.object_store.add_object(r_base.get_object(c)) + r2.refs[b"HEAD"] = r2_commits[0] + + # Finally, the 'real' testing! + shas = r2.object_store.find_common_revisions(r1.get_graph_walker()) + self.assertEqual(set(shas), expected_shas) + + shas = r1.object_store.find_common_revisions(r2.get_graph_walker()) + self.assertEqual(set(shas), expected_shas) + + def test_shell_hook_pre_commit(self): + if os.name != "posix": + self.skipTest("shell hook tests requires POSIX shell") + + pre_commit_fail = """#!/bin/sh +exit 1 +""" + + pre_commit_success = """#!/bin/sh +exit 0 +""" + + repo_dir = os.path.join(self.mkdtemp()) + self.addCleanup(shutil.rmtree, repo_dir) + r = Repo.init(repo_dir) + self.addCleanup(r.close) + + pre_commit = os.path.join(r.controldir(), "hooks", "pre-commit") + + with open(pre_commit, "w") as f: + f.write(pre_commit_fail) + os.chmod(pre_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + self.assertRaises( + errors.CommitError, + r.do_commit, + b"failed commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12345, + commit_timezone=0, + author_timestamp=12345, + author_timezone=0, + ) + + with open(pre_commit, "w") as f: + f.write(pre_commit_success) + os.chmod(pre_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + commit_sha = r.do_commit( + b"empty commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ) + self.assertEqual([], r[commit_sha].parents) + + def test_shell_hook_commit_msg(self): + if os.name != "posix": + self.skipTest("shell hook tests requires POSIX shell") + + commit_msg_fail = """#!/bin/sh +exit 1 +""" + + commit_msg_success = """#!/bin/sh +exit 0 +""" + + repo_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + r = Repo.init(repo_dir) + self.addCleanup(r.close) + + commit_msg = os.path.join(r.controldir(), "hooks", "commit-msg") + + with open(commit_msg, "w") as f: + f.write(commit_msg_fail) + os.chmod(commit_msg, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + self.assertRaises( + errors.CommitError, + r.do_commit, + b"failed commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12345, + commit_timezone=0, + author_timestamp=12345, + author_timezone=0, + ) + + with open(commit_msg, "w") as f: + f.write(commit_msg_success) + os.chmod(commit_msg, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + commit_sha = r.do_commit( + b"empty commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ) + self.assertEqual([], 
r[commit_sha].parents) + + def test_shell_hook_pre_commit_add_files(self): + if os.name != "posix": + self.skipTest("shell hook tests requires POSIX shell") + + pre_commit_contents = """#!%(executable)s +import sys +sys.path.extend(%(path)r) +from dulwich.repo import Repo + +with open('foo', 'w') as f: + f.write('newfile') + +r = Repo('.') +r.stage(['foo']) +""" % { + 'executable': sys.executable, + 'path': [os.path.join(os.path.dirname(__file__), '..', '..')] + sys.path} + + repo_dir = os.path.join(self.mkdtemp()) + self.addCleanup(shutil.rmtree, repo_dir) + r = Repo.init(repo_dir) + self.addCleanup(r.close) + + with open(os.path.join(repo_dir, 'blah'), 'w') as f: + f.write('blah') + + r.stage(['blah']) + + pre_commit = os.path.join(r.controldir(), "hooks", "pre-commit") + + with open(pre_commit, "w") as f: + f.write(pre_commit_contents) + os.chmod(pre_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + commit_sha = r.do_commit( + b"new commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ) + self.assertEqual([], r[commit_sha].parents) + + tree = r[r[commit_sha].tree] + self.assertEqual(set([b'blah', b'foo']), set(tree)) + + def test_shell_hook_post_commit(self): + if os.name != "posix": + self.skipTest("shell hook tests requires POSIX shell") + + repo_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, repo_dir) + + r = Repo.init(repo_dir) + self.addCleanup(r.close) + + (fd, path) = tempfile.mkstemp(dir=repo_dir) + os.close(fd) + post_commit_msg = ( + """#!/bin/sh +rm """ + + path + + """ +""" + ) + + root_sha = r.do_commit( + b"empty commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12345, + commit_timezone=0, + author_timestamp=12345, + author_timezone=0, + ) + self.assertEqual([], r[root_sha].parents) + + post_commit = os.path.join(r.controldir(), "hooks", "post-commit") + + with open(post_commit, "wb") as f: + f.write(post_commit_msg.encode(locale.getpreferredencoding())) + os.chmod(post_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + commit_sha = r.do_commit( + b"empty commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12345, + commit_timezone=0, + author_timestamp=12345, + author_timezone=0, + ) + self.assertEqual([root_sha], r[commit_sha].parents) + + self.assertFalse(os.path.exists(path)) + + post_commit_msg_fail = """#!/bin/sh +exit 1 +""" + with open(post_commit, "w") as f: + f.write(post_commit_msg_fail) + os.chmod(post_commit, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) + + warnings.simplefilter("always", UserWarning) + self.addCleanup(warnings.resetwarnings) + warnings_list, restore_warnings = setup_warning_catcher() + self.addCleanup(restore_warnings) + + commit_sha2 = r.do_commit( + b"empty commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12345, + commit_timezone=0, + author_timestamp=12345, + author_timezone=0, + ) + expected_warning = UserWarning( + "post-commit hook failed: Hook post-commit exited with " + "non-zero status 1", + ) + for w in warnings_list: + if type(w) == type(expected_warning) and w.args == expected_warning.args: + break + else: + raise AssertionError( + "Expected warning %r not in %r" % (expected_warning, warnings_list) + ) + self.assertEqual([commit_sha], r[commit_sha2].parents) + + def test_as_dict(self): + def check(repo): + self.assertEqual( + repo.refs.subkeys(b"refs/tags"), + repo.refs.subkeys(b"refs/tags/"), + ) + 
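+            # Both spellings, with and without the trailing slash, must
+            # address the same subtree of the refs namespace: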
self.assertEqual( + repo.refs.as_dict(b"refs/tags"), + repo.refs.as_dict(b"refs/tags/"), + ) + self.assertEqual( + repo.refs.as_dict(b"refs/heads"), + repo.refs.as_dict(b"refs/heads/"), + ) + + bare = self.open_repo("a.git") + tmp_dir = self.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + with bare.clone(tmp_dir, mkdir=False) as nonbare: + check(nonbare) + check(bare) + + def test_working_tree(self): + temp_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, temp_dir) + worktree_temp_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, worktree_temp_dir) + r = Repo.init(temp_dir) + self.addCleanup(r.close) + root_sha = r.do_commit( + b"empty commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12345, + commit_timezone=0, + author_timestamp=12345, + author_timezone=0, + ) + r.refs[b"refs/heads/master"] = root_sha + w = Repo._init_new_working_directory(worktree_temp_dir, r) + self.addCleanup(w.close) + new_sha = w.do_commit( + b"new commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12345, + commit_timezone=0, + author_timestamp=12345, + author_timezone=0, + ) + w.refs[b"HEAD"] = new_sha + self.assertEqual( + os.path.abspath(r.controldir()), os.path.abspath(w.commondir()) + ) + self.assertEqual(r.refs.keys(), w.refs.keys()) + self.assertNotEqual(r.head(), w.head()) + + +class BuildRepoRootTests(TestCase): + """Tests that build on-disk repos from scratch. + + Repos live in a temp dir and are torn down after each test. They start with + a single commit in master having single file named 'a'. + """ + + def get_repo_dir(self): + return os.path.join(tempfile.mkdtemp(), "test") + + def setUp(self): + super(BuildRepoRootTests, self).setUp() + self._repo_dir = self.get_repo_dir() + os.makedirs(self._repo_dir) + r = self._repo = Repo.init(self._repo_dir) + self.addCleanup(tear_down_repo, r) + self.assertFalse(r.bare) + self.assertEqual(b"ref: refs/heads/master", r.refs.read_ref(b"HEAD")) + self.assertRaises(KeyError, lambda: r.refs[b"refs/heads/master"]) + + with open(os.path.join(r.path, "a"), "wb") as f: + f.write(b"file contents") + r.stage(["a"]) + commit_sha = r.do_commit( + b"msg", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12345, + commit_timezone=0, + author_timestamp=12345, + author_timezone=0, + ) + self.assertEqual([], r[commit_sha].parents) + self._root_commit = commit_sha + + def test_get_shallow(self): + self.assertEqual(set(), self._repo.get_shallow()) + with open(os.path.join(self._repo.path, ".git", "shallow"), "wb") as f: + f.write(b"a90fa2d900a17e99b433217e988c4eb4a2e9a097\n") + self.assertEqual( + {b"a90fa2d900a17e99b433217e988c4eb4a2e9a097"}, + self._repo.get_shallow(), + ) + + def test_update_shallow(self): + self._repo.update_shallow(None, None) # no op + self.assertEqual(set(), self._repo.get_shallow()) + self._repo.update_shallow([b"a90fa2d900a17e99b433217e988c4eb4a2e9a097"], None) + self.assertEqual( + {b"a90fa2d900a17e99b433217e988c4eb4a2e9a097"}, + self._repo.get_shallow(), + ) + self._repo.update_shallow( + [b"a90fa2d900a17e99b433217e988c4eb4a2e9a097"], + [b"f9e39b120c68182a4ba35349f832d0e4e61f485c"], + ) + self.assertEqual( + {b"a90fa2d900a17e99b433217e988c4eb4a2e9a097"}, + self._repo.get_shallow(), + ) + self._repo.update_shallow( + None, [b"a90fa2d900a17e99b433217e988c4eb4a2e9a097"] + ) + self.assertEqual(set(), self._repo.get_shallow()) + self.assertEqual( + False, + os.path.exists(os.path.join(self._repo.controldir(), "shallow")), + ) + + def 
test_build_repo(self): + r = self._repo + self.assertEqual(b"ref: refs/heads/master", r.refs.read_ref(b"HEAD")) + self.assertEqual(self._root_commit, r.refs[b"refs/heads/master"]) + expected_blob = objects.Blob.from_string(b"file contents") + self.assertEqual(expected_blob.data, r[expected_blob.id].data) + actual_commit = r[self._root_commit] + self.assertEqual(b"msg", actual_commit.message) + + def test_commit_modified(self): + r = self._repo + with open(os.path.join(r.path, "a"), "wb") as f: + f.write(b"new contents") + r.stage(["a"]) + commit_sha = r.do_commit( + b"modified a", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ) + self.assertEqual([self._root_commit], r[commit_sha].parents) + a_mode, a_id = tree_lookup_path(r.get_object, r[commit_sha].tree, b"a") + self.assertEqual(stat.S_IFREG | 0o644, a_mode) + self.assertEqual(b"new contents", r[a_id].data) + + @skipIf(not getattr(os, "symlink", None), "Requires symlink support") + def test_commit_symlink(self): + r = self._repo + os.symlink("a", os.path.join(r.path, "b")) + r.stage(["a", "b"]) + commit_sha = r.do_commit( + b"Symlink b", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ) + self.assertEqual([self._root_commit], r[commit_sha].parents) + b_mode, b_id = tree_lookup_path(r.get_object, r[commit_sha].tree, b"b") + self.assertTrue(stat.S_ISLNK(b_mode)) + self.assertEqual(b"a", r[b_id].data) + + def test_commit_merge_heads_file(self): + tmp_dir = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, tmp_dir) + r = Repo.init(tmp_dir) + with open(os.path.join(r.path, "a"), "w") as f: + f.write("initial text") + c1 = r.do_commit( + b"initial commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ) + with open(os.path.join(r.path, "a"), "w") as f: + f.write("merged text") + with open(os.path.join(r.path, ".git", "MERGE_HEAD"), "w") as f: + f.write("c27a2d21dd136312d7fa9e8baabb82561a1727d0\n") + r.stage(["a"]) + commit_sha = r.do_commit( + b"deleted a", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ) + self.assertEqual( + [c1, b"c27a2d21dd136312d7fa9e8baabb82561a1727d0"], + r[commit_sha].parents, + ) + + def test_commit_deleted(self): + r = self._repo + os.remove(os.path.join(r.path, "a")) + r.stage(["a"]) + commit_sha = r.do_commit( + b"deleted a", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ) + self.assertEqual([self._root_commit], r[commit_sha].parents) + self.assertEqual([], list(r.open_index())) + tree = r[r[commit_sha].tree] + self.assertEqual([], list(tree.iteritems())) + + def test_commit_follows(self): + r = self._repo + r.refs.set_symbolic_ref(b"HEAD", b"refs/heads/bla") + commit_sha = r.do_commit( + b"commit with strange character", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ref=b"HEAD", + ) + self.assertEqual(commit_sha, r[b"refs/heads/bla"].id) + + def test_commit_encoding(self): + r = self._repo + commit_sha = r.do_commit( + b"commit with strange character \xee", + committer=b"Test 
Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + encoding=b"iso8859-1", + ) + self.assertEqual(b"iso8859-1", r[commit_sha].encoding) + + def test_compression_level(self): + r = self._repo + c = r.get_config() + c.set(("core",), "compression", "3") + c.set(("core",), "looseCompression", "4") + c.write_to_path() + r = Repo(self._repo_dir) + self.assertEqual(r.object_store.loose_compression_level, 4) + + def test_repositoryformatversion_unsupported(self): + r = self._repo + c = r.get_config() + c.set(("core",), "repositoryformatversion", "2") + c.write_to_path() + self.assertRaises(UnsupportedVersion, Repo, self._repo_dir) + + def test_repositoryformatversion_1(self): + r = self._repo + c = r.get_config() + c.set(("core",), "repositoryformatversion", "1") + c.write_to_path() + Repo(self._repo_dir) + + def test_repositoryformatversion_1_extension(self): + r = self._repo + c = r.get_config() + c.set(("core",), "repositoryformatversion", "1") + c.set(("extensions", ), "worktreeconfig", True) + c.write_to_path() + self.assertRaises(UnsupportedExtension, Repo, self._repo_dir) + + def test_commit_encoding_from_config(self): + r = self._repo + c = r.get_config() + c.set(("i18n",), "commitEncoding", "iso8859-1") + c.write_to_path() + commit_sha = r.do_commit( + b"commit with strange character \xee", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ) + self.assertEqual(b"iso8859-1", r[commit_sha].encoding) + + def test_commit_config_identity(self): + # commit falls back to the users' identity if it wasn't specified + r = self._repo + c = r.get_config() + c.set((b"user",), b"name", b"Jelmer") + c.set((b"user",), b"email", b"jelmer@apache.org") + c.write_to_path() + commit_sha = r.do_commit(b"message") + self.assertEqual(b"Jelmer ", r[commit_sha].author) + self.assertEqual(b"Jelmer ", r[commit_sha].committer) + + def test_commit_config_identity_strips_than(self): + # commit falls back to the users' identity if it wasn't specified, + # and strips superfluous <> + r = self._repo + c = r.get_config() + c.set((b"user",), b"name", b"Jelmer") + c.set((b"user",), b"email", b"") + c.write_to_path() + commit_sha = r.do_commit(b"message") + self.assertEqual(b"Jelmer ", r[commit_sha].author) + self.assertEqual(b"Jelmer ", r[commit_sha].committer) + + def test_commit_config_identity_in_memoryrepo(self): + # commit falls back to the users' identity if it wasn't specified + r = MemoryRepo.init_bare([], {}) + c = r.get_config() + c.set((b"user",), b"name", b"Jelmer") + c.set((b"user",), b"email", b"jelmer@apache.org") + + commit_sha = r.do_commit(b"message", tree=objects.Tree().id) + self.assertEqual(b"Jelmer ", r[commit_sha].author) + self.assertEqual(b"Jelmer ", r[commit_sha].committer) + + def overrideEnv(self, name, value): + def restore(): + if oldval is not None: + os.environ[name] = oldval + else: + del os.environ[name] + + oldval = os.environ.get(name) + os.environ[name] = value + self.addCleanup(restore) + + def test_commit_config_identity_from_env(self): + # commit falls back to the users' identity if it wasn't specified + self.overrideEnv("GIT_COMMITTER_NAME", "joe") + self.overrideEnv("GIT_COMMITTER_EMAIL", "joe@example.com") + r = self._repo + c = r.get_config() + c.set((b"user",), b"name", b"Jelmer") + c.set((b"user",), b"email", b"jelmer@apache.org") + c.write_to_path() + commit_sha = r.do_commit(b"message") + 
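+        # The GIT_COMMITTER_* environment variables take precedence over the
+        # [user] config section for the committer identity only; the author
+        # identity still falls back to the configured values: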
self.assertEqual(b"Jelmer ", r[commit_sha].author) + self.assertEqual(b"joe ", r[commit_sha].committer) + + def test_commit_fail_ref(self): + r = self._repo + + def set_if_equals(name, old_ref, new_ref, **kwargs): + return False + + r.refs.set_if_equals = set_if_equals + + def add_if_new(name, new_ref, **kwargs): + self.fail("Unexpected call to add_if_new") + + r.refs.add_if_new = add_if_new + + old_shas = set(r.object_store) + self.assertRaises( + errors.CommitError, + r.do_commit, + b"failed commit", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12345, + commit_timezone=0, + author_timestamp=12345, + author_timezone=0, + ) + new_shas = set(r.object_store) - old_shas + self.assertEqual(1, len(new_shas)) + # Check that the new commit (now garbage) was added. + new_commit = r[new_shas.pop()] + self.assertEqual(r[self._root_commit].tree, new_commit.tree) + self.assertEqual(b"failed commit", new_commit.message) + + def test_commit_branch(self): + r = self._repo + + commit_sha = r.do_commit( + b"commit to branch", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ref=b"refs/heads/new_branch", + ) + self.assertEqual(self._root_commit, r[b"HEAD"].id) + self.assertEqual(commit_sha, r[b"refs/heads/new_branch"].id) + self.assertEqual([], r[commit_sha].parents) + self.assertIn(b"refs/heads/new_branch", r) + + new_branch_head = commit_sha + + commit_sha = r.do_commit( + b"commit to branch 2", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ref=b"refs/heads/new_branch", + ) + self.assertEqual(self._root_commit, r[b"HEAD"].id) + self.assertEqual(commit_sha, r[b"refs/heads/new_branch"].id) + self.assertEqual([new_branch_head], r[commit_sha].parents) + + def test_commit_merge_heads(self): + r = self._repo + merge_1 = r.do_commit( + b"commit to branch 2", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ref=b"refs/heads/new_branch", + ) + commit_sha = r.do_commit( + b"commit with merge", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + merge_heads=[merge_1], + ) + self.assertEqual([self._root_commit, merge_1], r[commit_sha].parents) + + def test_commit_dangling_commit(self): + r = self._repo + + old_shas = set(r.object_store) + old_refs = r.get_refs() + commit_sha = r.do_commit( + b"commit with no ref", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ref=None, + ) + new_shas = set(r.object_store) - old_shas + + # New sha is added, but no new refs + self.assertEqual(1, len(new_shas)) + new_commit = r[new_shas.pop()] + self.assertEqual(r[self._root_commit].tree, new_commit.tree) + self.assertEqual([], r[commit_sha].parents) + self.assertEqual(old_refs, r.get_refs()) + + def test_commit_dangling_commit_with_parents(self): + r = self._repo + + old_shas = set(r.object_store) + old_refs = r.get_refs() + commit_sha = r.do_commit( + b"commit with no ref", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ref=None, + merge_heads=[self._root_commit], + ) + new_shas = 
set(r.object_store) - old_shas + + # New sha is added, but no new refs + self.assertEqual(1, len(new_shas)) + new_commit = r[new_shas.pop()] + self.assertEqual(r[self._root_commit].tree, new_commit.tree) + self.assertEqual([self._root_commit], r[commit_sha].parents) + self.assertEqual(old_refs, r.get_refs()) + + def test_stage_absolute(self): + r = self._repo + os.remove(os.path.join(r.path, "a")) + self.assertRaises(ValueError, r.stage, [os.path.join(r.path, "a")]) + + def test_stage_deleted(self): + r = self._repo + os.remove(os.path.join(r.path, "a")) + r.stage(["a"]) + r.stage(["a"]) # double-stage a deleted path + self.assertEqual([], list(r.open_index())) + + def test_stage_directory(self): + r = self._repo + os.mkdir(os.path.join(r.path, "c")) + r.stage(["c"]) + self.assertEqual([b"a"], list(r.open_index())) + + def test_stage_submodule(self): + r = self._repo + s = Repo.init(os.path.join(r.path, "sub"), mkdir=True) + s.do_commit(b'message') + r.stage(["sub"]) + self.assertEqual([b"a", b"sub"], list(r.open_index())) + + def test_unstage_midify_file_with_dir(self): + os.mkdir(os.path.join(self._repo.path, 'new_dir')) + full_path = os.path.join(self._repo.path, 'new_dir', 'foo') + + with open(full_path, 'w') as f: + f.write('hello') + porcelain.add(self._repo, paths=[full_path]) + porcelain.commit( + self._repo, + message=b"unitest", + committer=b"Jane ", + author=b"John ", + ) + with open(full_path, 'a') as f: + f.write('something new') + self._repo.unstage(['new_dir/foo']) + status = list(porcelain.status(self._repo)) + self.assertEqual([{'add': [], 'delete': [], 'modify': []}, [b'new_dir/foo'], []], status) + + def test_unstage_while_no_commit(self): + file = 'foo' + full_path = os.path.join(self._repo.path, file) + with open(full_path, 'w') as f: + f.write('hello') + porcelain.add(self._repo, paths=[full_path]) + self._repo.unstage([file]) + status = list(porcelain.status(self._repo)) + self.assertEqual([{'add': [], 'delete': [], 'modify': []}, [], ['foo']], status) + + def test_unstage_add_file(self): + file = 'foo' + full_path = os.path.join(self._repo.path, file) + porcelain.commit( + self._repo, + message=b"unitest", + committer=b"Jane ", + author=b"John ", + ) + with open(full_path, 'w') as f: + f.write('hello') + porcelain.add(self._repo, paths=[full_path]) + self._repo.unstage([file]) + status = list(porcelain.status(self._repo)) + self.assertEqual([{'add': [], 'delete': [], 'modify': []}, [], ['foo']], status) + + def test_unstage_modify_file(self): + file = 'foo' + full_path = os.path.join(self._repo.path, file) + with open(full_path, 'w') as f: + f.write('hello') + porcelain.add(self._repo, paths=[full_path]) + porcelain.commit( + self._repo, + message=b"unitest", + committer=b"Jane ", + author=b"John ", + ) + with open(full_path, 'a') as f: + f.write('broken') + porcelain.add(self._repo, paths=[full_path]) + self._repo.unstage([file]) + status = list(porcelain.status(self._repo)) + + self.assertEqual([{'add': [], 'delete': [], 'modify': []}, [b'foo'], []], status) + + def test_unstage_remove_file(self): + file = 'foo' + full_path = os.path.join(self._repo.path, file) + with open(full_path, 'w') as f: + f.write('hello') + porcelain.add(self._repo, paths=[full_path]) + porcelain.commit( + self._repo, + message=b"unitest", + committer=b"Jane ", + author=b"John ", + ) + os.remove(full_path) + self._repo.unstage([file]) + status = list(porcelain.status(self._repo)) + self.assertEqual([{'add': [], 'delete': [], 'modify': []}, [b'foo'], []], status) + + def 
test_reset_index(self): + r = self._repo + with open(os.path.join(r.path, 'a'), 'wb') as f: + f.write(b'changed') + with open(os.path.join(r.path, 'b'), 'wb') as f: + f.write(b'added') + r.stage(['a', 'b']) + status = list(porcelain.status(self._repo)) + self.assertEqual([{'add': [b'b'], 'delete': [], 'modify': [b'a']}, [], []], status) + r.reset_index() + status = list(porcelain.status(self._repo)) + self.assertEqual([{'add': [], 'delete': [], 'modify': []}, [], ['b']], status) + + @skipIf( + sys.platform in ("win32", "darwin"), + "tries to implicitly decode as utf8", + ) + def test_commit_no_encode_decode(self): + r = self._repo + repo_path_bytes = os.fsencode(r.path) + encodings = ("utf8", "latin1") + names = [u"À".encode(encoding) for encoding in encodings] + for name, encoding in zip(names, encodings): + full_path = os.path.join(repo_path_bytes, name) + with open(full_path, "wb") as f: + f.write(encoding.encode("ascii")) + # These files are break tear_down_repo, so cleanup these files + # ourselves. + self.addCleanup(os.remove, full_path) + + r.stage(names) + commit_sha = r.do_commit( + b"Files with different encodings", + committer=b"Test Committer ", + author=b"Test Author ", + commit_timestamp=12395, + commit_timezone=0, + author_timestamp=12395, + author_timezone=0, + ref=None, + merge_heads=[self._root_commit], + ) + + for name, encoding in zip(names, encodings): + mode, id = tree_lookup_path(r.get_object, r[commit_sha].tree, name) + self.assertEqual(stat.S_IFREG | 0o644, mode) + self.assertEqual(encoding.encode("ascii"), r[id].data) + + def test_discover_intended(self): + path = os.path.join(self._repo_dir, "b/c") + r = Repo.discover(path) + self.assertEqual(r.head(), self._repo.head()) + + def test_discover_isrepo(self): + r = Repo.discover(self._repo_dir) + self.assertEqual(r.head(), self._repo.head()) + + def test_discover_notrepo(self): + with self.assertRaises(NotGitRepository): + Repo.discover("/") + + +class CheckUserIdentityTests(TestCase): + def test_valid(self): + check_user_identity(b"Me ") + + def test_invalid(self): + self.assertRaises(InvalidUserIdentity, check_user_identity, b"No Email") + self.assertRaises( + InvalidUserIdentity, check_user_identity, b"Fullname " + ) + self.assertRaises( + InvalidUserIdentity, check_user_identity, b"Fullname >order<>" + ) + self.assertRaises( + InvalidUserIdentity, check_user_identity, b'Contains\0null byte <>' + ) + self.assertRaises( + InvalidUserIdentity, check_user_identity, b'Contains\nnewline byte <>' + ) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_server.py b/env/lib/python3.8/site-packages/dulwich/tests/test_server.py new file mode 100644 index 00000000..e6d4076d --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_server.py @@ -0,0 +1,1186 @@ +# test_server.py -- Tests for the git server +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for the smart protocol server.""" + +from io import BytesIO +import os +import shutil +import tempfile + +import sys + +from dulwich.errors import ( + GitProtocolError, + NotGitRepository, + UnexpectedCommandError, + HangupException, +) +from dulwich.objects import Tree +from dulwich.object_store import ( + MemoryObjectStore, +) +from dulwich.repo import ( + MemoryRepo, + Repo, +) +from dulwich.server import ( + Backend, + DictBackend, + FileSystemBackend, + MultiAckGraphWalkerImpl, + MultiAckDetailedGraphWalkerImpl, + PackHandler, + _split_proto_line, + serve_command, + _find_shallow, + _ProtocolGraphWalker, + ReceivePackHandler, + SingleAckGraphWalkerImpl, + UploadPackHandler, + update_server_info, +) +from dulwich.tests import TestCase +from dulwich.tests.utils import ( + make_commit, + make_tag, +) +from dulwich.protocol import ( + ZERO_SHA, + format_capability_line, +) + +ONE = b"1" * 40 +TWO = b"2" * 40 +THREE = b"3" * 40 +FOUR = b"4" * 40 +FIVE = b"5" * 40 +SIX = b"6" * 40 + + +class TestProto(object): + def __init__(self): + self._output = [] + self._received = {0: [], 1: [], 2: [], 3: []} + + def set_output(self, output_lines): + self._output = output_lines + + def read_pkt_line(self): + if self._output: + data = self._output.pop(0) + if data is not None: + return data.rstrip() + b"\n" + else: + # flush-pkt ('0000'). + return None + else: + raise HangupException() + + def write_sideband(self, band, data): + self._received[band].append(data) + + def write_pkt_line(self, data): + self._received[0].append(data) + + def get_received_line(self, band=0): + lines = self._received[band] + return lines.pop(0) + + +class TestGenericPackHandler(PackHandler): + def __init__(self): + PackHandler.__init__(self, Backend(), None) + + @classmethod + def capabilities(cls): + return [b"cap1", b"cap2", b"cap3"] + + @classmethod + def required_capabilities(cls): + return [b"cap2"] + + +class HandlerTestCase(TestCase): + def setUp(self): + super(HandlerTestCase, self).setUp() + self._handler = TestGenericPackHandler() + + def assertSucceeds(self, func, *args, **kwargs): + try: + func(*args, **kwargs) + except GitProtocolError as e: + self.fail(e) + + def test_capability_line(self): + self.assertEqual( + b" cap1 cap2 cap3", + format_capability_line([b"cap1", b"cap2", b"cap3"]), + ) + + def test_set_client_capabilities(self): + set_caps = self._handler.set_client_capabilities + self.assertSucceeds(set_caps, [b"cap2"]) + self.assertSucceeds(set_caps, [b"cap1", b"cap2"]) + + # different order + self.assertSucceeds(set_caps, [b"cap3", b"cap1", b"cap2"]) + + # error cases + self.assertRaises(GitProtocolError, set_caps, [b"capxxx", b"cap2"]) + self.assertRaises(GitProtocolError, set_caps, [b"cap1", b"cap3"]) + + # ignore innocuous but unknown capabilities + self.assertRaises(GitProtocolError, set_caps, [b"cap2", b"ignoreme"]) + self.assertNotIn(b"ignoreme", self._handler.capabilities()) + self._handler.innocuous_capabilities = lambda: (b"ignoreme",) + self.assertSucceeds(set_caps, [b"cap2", b"ignoreme"]) + + def test_has_capability(self): + self.assertRaises(GitProtocolError, self._handler.has_capability, b"cap") + caps = self._handler.capabilities() + self._handler.set_client_capabilities(caps) + for cap in caps: + self.assertTrue(self._handler.has_capability(cap)) + self.assertFalse(self._handler.has_capability(b"capxxx")) 
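+
+# A minimal sketch of the capability handshake that HandlerTestCase exercises
+# above (hypothetical helper, not used by the tests): every required
+# capability must appear in the client's list, and unknown capabilities are
+# rejected unless declared innocuous.
+def _example_capability_handshake():
+    handler = TestGenericPackHandler()
+    # b"cap2" is required (see required_capabilities above); b"cap1" is
+    # optional. Including an unknown name would raise GitProtocolError.
+    handler.set_client_capabilities([b"cap1", b"cap2"])
+    return handler.has_capability(b"cap1")  # True once negotiated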
+ + +class UploadPackHandlerTestCase(TestCase): + def setUp(self): + super(UploadPackHandlerTestCase, self).setUp() + self._repo = MemoryRepo.init_bare([], {}) + backend = DictBackend({b"/": self._repo}) + self._handler = UploadPackHandler( + backend, [b"/", b"host=lolcathost"], TestProto() + ) + + def test_progress(self): + caps = self._handler.required_capabilities() + self._handler.set_client_capabilities(caps) + self._handler.progress(b"first message") + self._handler.progress(b"second message") + self.assertEqual(b"first message", self._handler.proto.get_received_line(2)) + self.assertEqual(b"second message", self._handler.proto.get_received_line(2)) + self.assertRaises(IndexError, self._handler.proto.get_received_line, 2) + + def test_no_progress(self): + caps = list(self._handler.required_capabilities()) + [b"no-progress"] + self._handler.set_client_capabilities(caps) + self._handler.progress(b"first message") + self._handler.progress(b"second message") + self.assertRaises(IndexError, self._handler.proto.get_received_line, 2) + + def test_get_tagged(self): + refs = { + b"refs/tags/tag1": ONE, + b"refs/tags/tag2": TWO, + b"refs/heads/master": FOUR, # not a tag, no peeled value + } + # repo needs to peel this object + self._repo.object_store.add_object(make_commit(id=FOUR)) + self._repo.refs._update(refs) + peeled = { + b"refs/tags/tag1": b"1234" * 10, + b"refs/tags/tag2": b"5678" * 10, + } + self._repo.refs._update_peeled(peeled) + + caps = list(self._handler.required_capabilities()) + [b"include-tag"] + self._handler.set_client_capabilities(caps) + self.assertEqual( + {b"1234" * 10: ONE, b"5678" * 10: TWO}, + self._handler.get_tagged(refs, repo=self._repo), + ) + + # non-include-tag case + caps = self._handler.required_capabilities() + self._handler.set_client_capabilities(caps) + self.assertEqual({}, self._handler.get_tagged(refs, repo=self._repo)) + + def test_nothing_to_do_but_wants(self): + # Just the fact that the client claims to want an object is enough + # for sending a pack. Even if there turns out to be nothing. + refs = {b"refs/tags/tag1": ONE} + tree = Tree() + self._repo.object_store.add_object(tree) + self._repo.object_store.add_object(make_commit(id=ONE, tree=tree)) + self._repo.refs._update(refs) + self._handler.proto.set_output( + [ + b"want " + ONE + b" side-band-64k thin-pack ofs-delta", + None, + b"have " + ONE, + b"done", + None, + ] + ) + self._handler.handle() + # The server should always send a pack, even if it's empty. + self.assertTrue(self._handler.proto.get_received_line(1).startswith(b"PACK")) + + def test_nothing_to_do_no_wants(self): + # Don't send a pack if the client didn't ask for anything. + refs = {b"refs/tags/tag1": ONE} + tree = Tree() + self._repo.object_store.add_object(tree) + self._repo.object_store.add_object(make_commit(id=ONE, tree=tree)) + self._repo.refs._update(refs) + self._handler.proto.set_output([None]) + self._handler.handle() + # The server should not send a pack, since the client didn't ask for + # anything. 
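+        # (An empty request is just a flush-pkt with no want lines; compare
+        # test_nothing_to_do_but_wants above, where a single want is enough
+        # to make the server send an empty pack.)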
+ self.assertEqual([], self._handler.proto._received[1]) + + +class FindShallowTests(TestCase): + def setUp(self): + super(FindShallowTests, self).setUp() + self._store = MemoryObjectStore() + + def make_commit(self, **attrs): + commit = make_commit(**attrs) + self._store.add_object(commit) + return commit + + def make_linear_commits(self, n, message=b""): + commits = [] + parents = [] + for _ in range(n): + commits.append(self.make_commit(parents=parents, message=message)) + parents = [commits[-1].id] + return commits + + def assertSameElements(self, expected, actual): + self.assertEqual(set(expected), set(actual)) + + def test_linear(self): + c1, c2, c3 = self.make_linear_commits(3) + + self.assertEqual( + (set([c3.id]), set([])), _find_shallow(self._store, [c3.id], 1) + ) + self.assertEqual( + (set([c2.id]), set([c3.id])), + _find_shallow(self._store, [c3.id], 2), + ) + self.assertEqual( + (set([c1.id]), set([c2.id, c3.id])), + _find_shallow(self._store, [c3.id], 3), + ) + self.assertEqual( + (set([]), set([c1.id, c2.id, c3.id])), + _find_shallow(self._store, [c3.id], 4), + ) + + def test_multiple_independent(self): + a = self.make_linear_commits(2, message=b"a") + b = self.make_linear_commits(2, message=b"b") + c = self.make_linear_commits(2, message=b"c") + heads = [a[1].id, b[1].id, c[1].id] + + self.assertEqual( + (set([a[0].id, b[0].id, c[0].id]), set(heads)), + _find_shallow(self._store, heads, 2), + ) + + def test_multiple_overlapping(self): + # Create the following commit tree: + # 1--2 + # \ + # 3--4 + c1, c2 = self.make_linear_commits(2) + c3 = self.make_commit(parents=[c1.id]) + c4 = self.make_commit(parents=[c3.id]) + + # 1 is shallow along the path from 4, but not along the path from 2. + self.assertEqual( + (set([c1.id]), set([c1.id, c2.id, c3.id, c4.id])), + _find_shallow(self._store, [c2.id, c4.id], 3), + ) + + def test_merge(self): + c1 = self.make_commit() + c2 = self.make_commit() + c3 = self.make_commit(parents=[c1.id, c2.id]) + + self.assertEqual( + (set([c1.id, c2.id]), set([c3.id])), + _find_shallow(self._store, [c3.id], 2), + ) + + def test_tag(self): + c1, c2 = self.make_linear_commits(2) + tag = make_tag(c2, name=b"tag") + self._store.add_object(tag) + + self.assertEqual( + (set([c1.id]), set([c2.id])), + _find_shallow(self._store, [tag.id], 2), + ) + + +class TestUploadPackHandler(UploadPackHandler): + @classmethod + def required_capabilities(self): + return [] + + +class ReceivePackHandlerTestCase(TestCase): + def setUp(self): + super(ReceivePackHandlerTestCase, self).setUp() + self._repo = MemoryRepo.init_bare([], {}) + backend = DictBackend({b"/": self._repo}) + self._handler = ReceivePackHandler( + backend, [b"/", b"host=lolcathost"], TestProto() + ) + + def test_apply_pack_del_ref(self): + refs = {b"refs/heads/master": TWO, b"refs/heads/fake-branch": ONE} + self._repo.refs._update(refs) + update_refs = [ + [ONE, ZERO_SHA, b"refs/heads/fake-branch"], + ] + self._handler.set_client_capabilities([b"delete-refs"]) + status = self._handler._apply_pack(update_refs) + self.assertEqual(status[0][0], b"unpack") + self.assertEqual(status[0][1], b"ok") + self.assertEqual(status[1][0], b"refs/heads/fake-branch") + self.assertEqual(status[1][1], b"ok") + + +class ProtocolGraphWalkerEmptyTestCase(TestCase): + def setUp(self): + super(ProtocolGraphWalkerEmptyTestCase, self).setUp() + self._repo = MemoryRepo.init_bare([], {}) + backend = DictBackend({b"/": self._repo}) + self._walker = _ProtocolGraphWalker( + TestUploadPackHandler(backend, [b"/", b"host=lolcats"], 
TestProto()), + self._repo.object_store, + self._repo.get_peeled, + self._repo.refs.get_symrefs, + ) + + def test_empty_repository(self): + # The server should wait for a flush packet. + self._walker.proto.set_output([]) + self.assertRaises(HangupException, self._walker.determine_wants, {}) + self.assertEqual(None, self._walker.proto.get_received_line()) + + self._walker.proto.set_output([None]) + self.assertEqual([], self._walker.determine_wants({})) + self.assertEqual(None, self._walker.proto.get_received_line()) + + +class ProtocolGraphWalkerTestCase(TestCase): + def setUp(self): + super(ProtocolGraphWalkerTestCase, self).setUp() + # Create the following commit tree: + # 3---5 + # / + # 1---2---4 + commits = [ + make_commit(id=ONE, parents=[], commit_time=111), + make_commit(id=TWO, parents=[ONE], commit_time=222), + make_commit(id=THREE, parents=[ONE], commit_time=333), + make_commit(id=FOUR, parents=[TWO], commit_time=444), + make_commit(id=FIVE, parents=[THREE], commit_time=555), + ] + self._repo = MemoryRepo.init_bare(commits, {}) + backend = DictBackend({b"/": self._repo}) + self._walker = _ProtocolGraphWalker( + TestUploadPackHandler(backend, [b"/", b"host=lolcats"], TestProto()), + self._repo.object_store, + self._repo.get_peeled, + self._repo.refs.get_symrefs, + ) + + def test_all_wants_satisfied_no_haves(self): + self._walker.set_wants([ONE]) + self.assertFalse(self._walker.all_wants_satisfied([])) + self._walker.set_wants([TWO]) + self.assertFalse(self._walker.all_wants_satisfied([])) + self._walker.set_wants([THREE]) + self.assertFalse(self._walker.all_wants_satisfied([])) + + def test_all_wants_satisfied_have_root(self): + self._walker.set_wants([ONE]) + self.assertTrue(self._walker.all_wants_satisfied([ONE])) + self._walker.set_wants([TWO]) + self.assertTrue(self._walker.all_wants_satisfied([ONE])) + self._walker.set_wants([THREE]) + self.assertTrue(self._walker.all_wants_satisfied([ONE])) + + def test_all_wants_satisfied_have_branch(self): + self._walker.set_wants([TWO]) + self.assertTrue(self._walker.all_wants_satisfied([TWO])) + # wrong branch + self._walker.set_wants([THREE]) + self.assertFalse(self._walker.all_wants_satisfied([TWO])) + + def test_all_wants_satisfied(self): + self._walker.set_wants([FOUR, FIVE]) + # trivial case: wants == haves + self.assertTrue(self._walker.all_wants_satisfied([FOUR, FIVE])) + # cases that require walking the commit tree + self.assertTrue(self._walker.all_wants_satisfied([ONE])) + self.assertFalse(self._walker.all_wants_satisfied([TWO])) + self.assertFalse(self._walker.all_wants_satisfied([THREE])) + self.assertTrue(self._walker.all_wants_satisfied([TWO, THREE])) + + def test_split_proto_line(self): + allowed = (b"want", b"done", None) + self.assertEqual( + (b"want", ONE), _split_proto_line(b"want " + ONE + b"\n", allowed) + ) + self.assertEqual( + (b"want", TWO), _split_proto_line(b"want " + TWO + b"\n", allowed) + ) + self.assertRaises(GitProtocolError, _split_proto_line, b"want xxxx\n", allowed) + self.assertRaises( + UnexpectedCommandError, + _split_proto_line, + b"have " + THREE + b"\n", + allowed, + ) + self.assertRaises( + GitProtocolError, + _split_proto_line, + b"foo " + FOUR + b"\n", + allowed, + ) + self.assertRaises(GitProtocolError, _split_proto_line, b"bar", allowed) + self.assertEqual((b"done", None), _split_proto_line(b"done\n", allowed)) + self.assertEqual((None, None), _split_proto_line(b"", allowed)) + + def test_determine_wants(self): + self._walker.proto.set_output([None]) + self.assertEqual([], 
self._walker.determine_wants({})) + self.assertEqual(None, self._walker.proto.get_received_line()) + + self._walker.proto.set_output( + [ + b"want " + ONE + b" multi_ack", + b"want " + TWO, + None, + ] + ) + heads = { + b"refs/heads/ref1": ONE, + b"refs/heads/ref2": TWO, + b"refs/heads/ref3": THREE, + } + self._repo.refs._update(heads) + self.assertEqual([ONE, TWO], self._walker.determine_wants(heads)) + + self._walker.advertise_refs = True + self.assertEqual([], self._walker.determine_wants(heads)) + self._walker.advertise_refs = False + + self._walker.proto.set_output([b"want " + FOUR + b" multi_ack", None]) + self.assertRaises(GitProtocolError, self._walker.determine_wants, heads) + + self._walker.proto.set_output([None]) + self.assertEqual([], self._walker.determine_wants(heads)) + + self._walker.proto.set_output([b"want " + ONE + b" multi_ack", b"foo", None]) + self.assertRaises(GitProtocolError, self._walker.determine_wants, heads) + + self._walker.proto.set_output([b"want " + FOUR + b" multi_ack", None]) + self.assertRaises(GitProtocolError, self._walker.determine_wants, heads) + + def test_determine_wants_advertisement(self): + self._walker.proto.set_output([None]) + # advertise branch tips plus tag + heads = { + b"refs/heads/ref4": FOUR, + b"refs/heads/ref5": FIVE, + b"refs/heads/tag6": SIX, + } + self._repo.refs._update(heads) + self._repo.refs._update_peeled(heads) + self._repo.refs._update_peeled({b"refs/heads/tag6": FIVE}) + self._walker.determine_wants(heads) + lines = [] + while True: + line = self._walker.proto.get_received_line() + if line is None: + break + # strip capabilities list if present + if b"\x00" in line: + line = line[: line.index(b"\x00")] + lines.append(line.rstrip()) + + self.assertEqual( + [ + FOUR + b" refs/heads/ref4", + FIVE + b" refs/heads/ref5", + FIVE + b" refs/heads/tag6^{}", + SIX + b" refs/heads/tag6", + ], + sorted(lines), + ) + + # ensure peeled tag was advertised immediately following tag + for i, line in enumerate(lines): + if line.endswith(b" refs/heads/tag6"): + self.assertEqual(FIVE + b" refs/heads/tag6^{}", lines[i + 1]) + + # TODO: test commit time cutoff + + def _handle_shallow_request(self, lines, heads): + self._walker.proto.set_output(lines + [None]) + self._walker._handle_shallow_request(heads) + + def assertReceived(self, expected): + self.assertEqual( + expected, list(iter(self._walker.proto.get_received_line, None)) + ) + + def test_handle_shallow_request_no_client_shallows(self): + self._handle_shallow_request([b"deepen 2\n"], [FOUR, FIVE]) + self.assertEqual(set([TWO, THREE]), self._walker.shallow) + self.assertReceived( + [ + b"shallow " + TWO, + b"shallow " + THREE, + ] + ) + + def test_handle_shallow_request_no_new_shallows(self): + lines = [ + b"shallow " + TWO + b"\n", + b"shallow " + THREE + b"\n", + b"deepen 2\n", + ] + self._handle_shallow_request(lines, [FOUR, FIVE]) + self.assertEqual(set([TWO, THREE]), self._walker.shallow) + self.assertReceived([]) + + def test_handle_shallow_request_unshallows(self): + lines = [ + b"shallow " + TWO + b"\n", + b"deepen 3\n", + ] + self._handle_shallow_request(lines, [FOUR, FIVE]) + self.assertEqual(set([ONE]), self._walker.shallow) + self.assertReceived( + [ + b"shallow " + ONE, + b"unshallow " + TWO, + # THREE is unshallow but was is not shallow in the client + ] + ) + + +class TestProtocolGraphWalker(object): + def __init__(self): + self.acks = [] + self.lines = [] + self.wants_satisified = False + self.stateless_rpc = None + self.advertise_refs = False + self._impl = None + 
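+        # done_required models the no-done extension: when the client
+        # advertises no-done, the server may send the final ACK and the pack
+        # without waiting for an explicit "done" line.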
self.done_required = True + self.done_received = False + self._empty = False + self.pack_sent = False + + def read_proto_line(self, allowed): + command, sha = self.lines.pop(0) + if allowed is not None: + assert command in allowed + return command, sha + + def send_ack(self, sha, ack_type=b""): + self.acks.append((sha, ack_type)) + + def send_nak(self): + self.acks.append((None, b"nak")) + + def all_wants_satisfied(self, haves): + if haves: + return self.wants_satisified + + def pop_ack(self): + if not self.acks: + return None + return self.acks.pop(0) + + def handle_done(self): + if not self._impl: + return + # Whether or not PACK is sent after is determined by this, so + # record this value. + self.pack_sent = self._impl.handle_done(self.done_required, self.done_received) + return self.pack_sent + + def notify_done(self): + self.done_received = True + + +class AckGraphWalkerImplTestCase(TestCase): + """Base setup and asserts for AckGraphWalker tests.""" + + def setUp(self): + super(AckGraphWalkerImplTestCase, self).setUp() + self._walker = TestProtocolGraphWalker() + self._walker.lines = [ + (b"have", TWO), + (b"have", ONE), + (b"have", THREE), + (b"done", None), + ] + self._impl = self.impl_cls(self._walker) + self._walker._impl = self._impl + + def assertNoAck(self): + self.assertEqual(None, self._walker.pop_ack()) + + def assertAcks(self, acks): + for sha, ack_type in acks: + self.assertEqual((sha, ack_type), self._walker.pop_ack()) + self.assertNoAck() + + def assertAck(self, sha, ack_type=b""): + self.assertAcks([(sha, ack_type)]) + + def assertNak(self): + self.assertAck(None, b"nak") + + def assertNextEquals(self, sha): + self.assertEqual(sha, next(self._impl)) + + def assertNextEmpty(self): + # This is necessary because of no-done - the assumption that it + # it safe to immediately send out the final ACK is no longer + # true but the test is still needed for it. TestProtocolWalker + # does implement the handle_done which will determine whether + # the final confirmation can be sent. 
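
The send_ack/send_nak hooks recorded by this stub correspond, in the real server, to pkt-lines on the wire. For reference, a hedged sketch of that framing (pkt_line and format_ack are hypothetical helpers; the format itself is the standard Git pack protocol):

def pkt_line(payload):
    # A pkt-line is a 4-hex-digit length, counting the 4 header bytes,
    # followed by the payload; None encodes the b"0000" flush-pkt.
    if payload is None:
        return b"0000"
    return ("%04x" % (len(payload) + 4)).encode("ascii") + payload

def format_ack(sha, ack_type=b""):
    # During negotiation ack_type is b"continue", b"common" or b"ready",
    # depending on the multi_ack capability; the final ACK has no type,
    # and a NAK means no common commit was found.
    if sha is None:
        return pkt_line(b"NAK\n")
    line = b"ACK " + sha + ((b" " + ack_type) if ack_type else b"") + b"\n"
    return pkt_line(line)

assert format_ack(b"1" * 40, b"continue") == b"003aACK " + b"1" * 40 + b" continue\n"
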
+ self.assertRaises(IndexError, next, self._impl) + self._walker.handle_done() + + +class SingleAckGraphWalkerImplTestCase(AckGraphWalkerImplTestCase): + + impl_cls = SingleAckGraphWalkerImpl + + def test_single_ack(self): + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self._impl.ack(ONE) + self.assertAck(ONE) + + self.assertNextEquals(THREE) + self._impl.ack(THREE) + self.assertNoAck() + + self.assertNextEquals(None) + self.assertNoAck() + + def test_single_ack_flush(self): + # same as ack test but ends with a flush-pkt instead of done + self._walker.lines[-1] = (None, None) + + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self._impl.ack(ONE) + self.assertAck(ONE) + + self.assertNextEquals(THREE) + self.assertNoAck() + + self.assertNextEquals(None) + self.assertNoAck() + + def test_single_ack_nak(self): + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self.assertNoAck() + + self.assertNextEquals(THREE) + self.assertNoAck() + + self.assertNextEquals(None) + self.assertNextEmpty() + self.assertNak() + + def test_single_ack_nak_flush(self): + # same as nak test but ends with a flush-pkt instead of done + self._walker.lines[-1] = (None, None) + + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self.assertNoAck() + + self.assertNextEquals(THREE) + self.assertNoAck() + + self.assertNextEquals(None) + self.assertNextEmpty() + self.assertNak() + + +class MultiAckGraphWalkerImplTestCase(AckGraphWalkerImplTestCase): + + impl_cls = MultiAckGraphWalkerImpl + + def test_multi_ack(self): + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self._impl.ack(ONE) + self.assertAck(ONE, b"continue") + + self.assertNextEquals(THREE) + self._impl.ack(THREE) + self.assertAck(THREE, b"continue") + + self.assertNextEquals(None) + self.assertNextEmpty() + self.assertAck(THREE) + + def test_multi_ack_partial(self): + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self._impl.ack(ONE) + self.assertAck(ONE, b"continue") + + self.assertNextEquals(THREE) + self.assertNoAck() + + self.assertNextEquals(None) + self.assertNextEmpty() + self.assertAck(ONE) + + def test_multi_ack_flush(self): + self._walker.lines = [ + (b"have", TWO), + (None, None), + (b"have", ONE), + (b"have", THREE), + (b"done", None), + ] + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self.assertNak() # nak the flush-pkt + + self._impl.ack(ONE) + self.assertAck(ONE, b"continue") + + self.assertNextEquals(THREE) + self._impl.ack(THREE) + self.assertAck(THREE, b"continue") + + self.assertNextEquals(None) + self.assertNextEmpty() + self.assertAck(THREE) + + def test_multi_ack_nak(self): + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self.assertNoAck() + + self.assertNextEquals(THREE) + self.assertNoAck() + + self.assertNextEquals(None) + self.assertNextEmpty() + self.assertNak() + + +class MultiAckDetailedGraphWalkerImplTestCase(AckGraphWalkerImplTestCase): + + impl_cls = MultiAckDetailedGraphWalkerImpl + + def test_multi_ack(self): + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self._impl.ack(ONE) + self.assertAck(ONE, b"common") + + self.assertNextEquals(THREE) + self._impl.ack(THREE) + self.assertAck(THREE, b"common") + + # done is read. 
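
Unrolled, the single-ack exchange that test_single_ack above compresses into assert helpers looks like the following usage sketch; it reuses only the stub defined in this file, the module-level ONE/TWO/THREE sha constants, and SingleAckGraphWalkerImpl, assumed importable from dulwich.server as elsewhere in this module:

from dulwich.server import SingleAckGraphWalkerImpl

walker = TestProtocolGraphWalker()
walker.lines = [(b"have", TWO), (b"have", ONE), (b"have", THREE), (b"done", None)]
impl = SingleAckGraphWalkerImpl(walker)
walker._impl = impl

assert next(impl) == TWO               # each next() consumes one "have" line
assert walker.pop_ack() is None        # nothing acknowledged yet
assert next(impl) == ONE
impl.ack(ONE)                          # the server has ONE in its store...
assert walker.pop_ack() == (ONE, b"")  # ...and single-ack sends one bare ACK
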
+ self._walker.wants_satisified = True + self.assertNextEquals(None) + self._walker.lines.append((None, None)) + self.assertNextEmpty() + self.assertAcks([(THREE, b"ready"), (None, b"nak"), (THREE, b"")]) + # PACK is sent + self.assertTrue(self._walker.pack_sent) + + def test_multi_ack_nodone(self): + self._walker.done_required = False + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self._impl.ack(ONE) + self.assertAck(ONE, b"common") + + self.assertNextEquals(THREE) + self._impl.ack(THREE) + self.assertAck(THREE, b"common") + + # done is read. + self._walker.wants_satisified = True + self.assertNextEquals(None) + self._walker.lines.append((None, None)) + self.assertNextEmpty() + self.assertAcks([(THREE, b"ready"), (None, b"nak"), (THREE, b"")]) + # PACK is sent + self.assertTrue(self._walker.pack_sent) + + def test_multi_ack_flush_end(self): + # transmission ends with a flush-pkt without a done but no-done is + # assumed. + self._walker.lines[-1] = (None, None) + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self._impl.ack(ONE) + self.assertAck(ONE, b"common") + + self.assertNextEquals(THREE) + self._impl.ack(THREE) + self.assertAck(THREE, b"common") + + # no done is read + self._walker.wants_satisified = True + self.assertNextEmpty() + self.assertAcks([(THREE, b"ready"), (None, b"nak")]) + # PACK is NOT sent + self.assertFalse(self._walker.pack_sent) + + def test_multi_ack_flush_end_nodone(self): + # transmission ends with a flush-pkt without a done but no-done is + # assumed. + self._walker.lines[-1] = (None, None) + self._walker.done_required = False + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self._impl.ack(ONE) + self.assertAck(ONE, b"common") + + self.assertNextEquals(THREE) + self._impl.ack(THREE) + self.assertAck(THREE, b"common") + + # no done is read, but pretend it is (last 'ACK 'commit_id' '') + self._walker.wants_satisified = True + self.assertNextEmpty() + self.assertAcks([(THREE, b"ready"), (None, b"nak"), (THREE, b"")]) + # PACK is sent + self.assertTrue(self._walker.pack_sent) + + def test_multi_ack_partial(self): + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self._impl.ack(ONE) + self.assertAck(ONE, b"common") + + self.assertNextEquals(THREE) + self.assertNoAck() + + self.assertNextEquals(None) + self.assertNextEmpty() + self.assertAck(ONE) + + def test_multi_ack_flush(self): + # same as ack test but contains a flush-pkt in the middle + self._walker.lines = [ + (b"have", TWO), + (None, None), + (b"have", ONE), + (b"have", THREE), + (b"done", None), + (None, None), + ] + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self.assertNak() # nak the flush-pkt + + self._impl.ack(ONE) + self.assertAck(ONE, b"common") + + self.assertNextEquals(THREE) + self._impl.ack(THREE) + self.assertAck(THREE, b"common") + + self._walker.wants_satisified = True + self.assertNextEquals(None) + self.assertNextEmpty() + self.assertAcks([(THREE, b"ready"), (None, b"nak"), (THREE, b"")]) + + def test_multi_ack_nak(self): + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self.assertNoAck() + + self.assertNextEquals(THREE) + self.assertNoAck() + + # Done is sent here. 
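
All of the done/no-done variants in this class turn on a single question: when may the server start streaming the pack? A hedged condensation of the behaviour these tests pin down (a summary of the observable contract, not dulwich's literal handle_done code):

def may_send_pack(done_required, done_received, all_wants_satisfied):
    if done_received:
        return True   # the client said "done"; negotiation is over
    if done_required:
        return False  # no "done" and no no-done capability: keep waiting
    # no-done was negotiated: proceed once every want is already reachable
    # from the acknowledged common commits.
    return bool(all_wants_satisfied)
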
+ self.assertNextEquals(None) + self.assertNextEmpty() + self.assertNak() + self.assertNextEmpty() + self.assertTrue(self._walker.pack_sent) + + def test_multi_ack_nak_nodone(self): + self._walker.done_required = False + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self.assertNoAck() + + self.assertNextEquals(THREE) + self.assertNoAck() + + # Done is sent here. + self.assertFalse(self._walker.pack_sent) + self.assertNextEquals(None) + self.assertNextEmpty() + self.assertTrue(self._walker.pack_sent) + self.assertNak() + self.assertNextEmpty() + + def test_multi_ack_nak_flush(self): + # same as nak test but contains a flush-pkt in the middle + self._walker.lines = [ + (b"have", TWO), + (None, None), + (b"have", ONE), + (b"have", THREE), + (b"done", None), + ] + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self.assertNak() + + self.assertNextEquals(THREE) + self.assertNoAck() + + self.assertNextEquals(None) + self.assertNextEmpty() + self.assertNak() + + def test_multi_ack_stateless(self): + # transmission ends with a flush-pkt + self._walker.lines[-1] = (None, None) + self._walker.stateless_rpc = True + + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self.assertNoAck() + + self.assertNextEquals(THREE) + self.assertNoAck() + + self.assertFalse(self._walker.pack_sent) + self.assertNextEquals(None) + self.assertNak() + + self.assertNextEmpty() + self.assertNoAck() + self.assertFalse(self._walker.pack_sent) + + def test_multi_ack_stateless_nodone(self): + self._walker.done_required = False + # transmission ends with a flush-pkt + self._walker.lines[-1] = (None, None) + self._walker.stateless_rpc = True + + self.assertNextEquals(TWO) + self.assertNoAck() + + self.assertNextEquals(ONE) + self.assertNoAck() + + self.assertNextEquals(THREE) + self.assertNoAck() + + self.assertFalse(self._walker.pack_sent) + self.assertNextEquals(None) + self.assertNak() + + self.assertNextEmpty() + self.assertNoAck() + # PACK will still not be sent. 
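
The stateless variants model smart HTTP, where each negotiation round is an independent request and the server keeps no session state between rounds. A loose illustration of what that forces the client to do, using the same (command, sha) line shape as the stub above (build_round is hypothetical, not dulwich API):

def build_round(new_haves, common_so_far):
    # Every request must replay the commits already known to be common,
    # then add this round's new "have" lines, and end with a flush-pkt
    # (None here) rather than "done" until the client decides to stop.
    lines = [(b"have", sha) for sha in sorted(common_so_far)]
    lines += [(b"have", sha) for sha in new_haves]
    lines.append((None, None))
    return lines
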
+ self.assertFalse(self._walker.pack_sent) + + +class FileSystemBackendTests(TestCase): + """Tests for FileSystemBackend.""" + + def setUp(self): + super(FileSystemBackendTests, self).setUp() + self.path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, self.path) + self.repo = Repo.init(self.path) + if sys.platform == "win32": + self.backend = FileSystemBackend(self.path[0] + ":" + os.sep) + else: + self.backend = FileSystemBackend() + + def test_nonexistant(self): + self.assertRaises( + NotGitRepository, + self.backend.open_repository, + "/does/not/exist/unless/foo", + ) + + def test_absolute(self): + repo = self.backend.open_repository(self.path) + self.assertTrue( + os.path.samefile( + os.path.abspath(repo.path), os.path.abspath(self.repo.path) + ) + ) + + def test_child(self): + self.assertRaises( + NotGitRepository, + self.backend.open_repository, + os.path.join(self.path, "foo"), + ) + + def test_bad_repo_path(self): + backend = FileSystemBackend() + + self.assertRaises(NotGitRepository, lambda: backend.open_repository("/ups")) + + +class DictBackendTests(TestCase): + """Tests for DictBackend.""" + + def test_nonexistant(self): + repo = MemoryRepo.init_bare([], {}) + backend = DictBackend({b"/": repo}) + self.assertRaises( + NotGitRepository, + backend.open_repository, + "/does/not/exist/unless/foo", + ) + + def test_bad_repo_path(self): + repo = MemoryRepo.init_bare([], {}) + backend = DictBackend({b"/": repo}) + + self.assertRaises(NotGitRepository, lambda: backend.open_repository("/ups")) + + +class ServeCommandTests(TestCase): + """Tests for serve_command.""" + + def setUp(self): + super(ServeCommandTests, self).setUp() + self.backend = DictBackend({}) + + def serve_command(self, handler_cls, args, inf, outf): + return serve_command( + handler_cls, + [b"test"] + args, + backend=self.backend, + inf=inf, + outf=outf, + ) + + def test_receive_pack(self): + commit = make_commit(id=ONE, parents=[], commit_time=111) + self.backend.repos[b"/"] = MemoryRepo.init_bare( + [commit], {b"refs/heads/master": commit.id} + ) + outf = BytesIO() + exitcode = self.serve_command( + ReceivePackHandler, [b"/"], BytesIO(b"0000"), outf + ) + outlines = outf.getvalue().splitlines() + self.assertEqual(2, len(outlines)) + self.assertEqual( + b"1111111111111111111111111111111111111111 refs/heads/master", + outlines[0][4:].split(b"\x00")[0], + ) + self.assertEqual(b"0000", outlines[-1]) + self.assertEqual(0, exitcode) + + +class UpdateServerInfoTests(TestCase): + """Tests for update_server_info.""" + + def setUp(self): + super(UpdateServerInfoTests, self).setUp() + self.path = tempfile.mkdtemp() + self.addCleanup(shutil.rmtree, self.path) + self.repo = Repo.init(self.path) + + def test_empty(self): + update_server_info(self.repo) + with open(os.path.join(self.path, ".git", "info", "refs"), "rb") as f: + self.assertEqual(b"", f.read()) + p = os.path.join(self.path, ".git", "objects", "info", "packs") + with open(p, "rb") as f: + self.assertEqual(b"", f.read()) + + def test_simple(self): + commit_id = self.repo.do_commit( + message=b"foo", + committer=b"Joe Example ", + ref=b"refs/heads/foo", + ) + update_server_info(self.repo) + with open(os.path.join(self.path, ".git", "info", "refs"), "rb") as f: + self.assertEqual(f.read(), commit_id + b"\trefs/heads/foo\n") + p = os.path.join(self.path, ".git", "objects", "info", "packs") + with open(p, "rb") as f: + self.assertEqual(f.read(), b"") diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_stash.py 
b/env/lib/python3.8/site-packages/dulwich/tests/test_stash.py new file mode 100644 index 00000000..ff16dd52 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_stash.py @@ -0,0 +1,35 @@ +# test_stash.py -- tests for stash +# Copyright (C) 2018 Jelmer Vernooij <jelmer@jelmer.uk> +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License +# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for stashes.""" + +from . import TestCase + +from ..repo import MemoryRepo +from ..stash import Stash + + +class StashTests(TestCase): + """Tests for stash.""" + + def test_obtain(self): + repo = MemoryRepo() + stash = Stash.from_repo(repo) + self.assertIsInstance(stash, Stash) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_utils.py b/env/lib/python3.8/site-packages/dulwich/tests/test_utils.py new file mode 100644 index 00000000..847ef5e2 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_utils.py @@ -0,0 +1,92 @@ +# test_utils.py -- Tests for git test utilities. +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License +# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache +# License, Version 2.0.
+# + +"""Tests for git test utilities.""" + +from dulwich.object_store import ( + MemoryObjectStore, +) +from dulwich.objects import ( + Blob, +) +from dulwich.tests import ( + TestCase, +) +from dulwich.tests.utils import ( + make_object, + build_commit_graph, +) + + +class BuildCommitGraphTest(TestCase): + def setUp(self): + super(BuildCommitGraphTest, self).setUp() + self.store = MemoryObjectStore() + + def test_linear(self): + c1, c2 = build_commit_graph(self.store, [[1], [2, 1]]) + for obj_id in [c1.id, c2.id, c1.tree, c2.tree]: + self.assertIn(obj_id, self.store) + self.assertEqual([], c1.parents) + self.assertEqual([c1.id], c2.parents) + self.assertEqual(c1.tree, c2.tree) + self.assertEqual([], self.store[c1.tree].items()) + self.assertGreater(c2.commit_time, c1.commit_time) + + def test_merge(self): + c1, c2, c3, c4 = build_commit_graph( + self.store, [[1], [2, 1], [3, 1], [4, 2, 3]] + ) + self.assertEqual([c2.id, c3.id], c4.parents) + self.assertGreater(c4.commit_time, c2.commit_time) + self.assertGreater(c4.commit_time, c3.commit_time) + + def test_missing_parent(self): + self.assertRaises( + ValueError, build_commit_graph, self.store, [[1], [3, 2], [2, 1]] + ) + + def test_trees(self): + a1 = make_object(Blob, data=b"aaa1") + a2 = make_object(Blob, data=b"aaa2") + c1, c2 = build_commit_graph( + self.store, + [[1], [2, 1]], + trees={1: [(b"a", a1)], 2: [(b"a", a2, 0o100644)]}, + ) + self.assertEqual((0o100644, a1.id), self.store[c1.tree][b"a"]) + self.assertEqual((0o100644, a2.id), self.store[c2.tree][b"a"]) + + def test_attrs(self): + c1, c2 = build_commit_graph( + self.store, [[1], [2, 1]], attrs={1: {"message": b"Hooray!"}} + ) + self.assertEqual(b"Hooray!", c1.message) + self.assertEqual(b"Commit 2", c2.message) + + def test_commit_time(self): + c1, c2, c3 = build_commit_graph( + self.store, + [[1], [2, 1], [3, 2]], + attrs={1: {"commit_time": 124}, 2: {"commit_time": 123}}, + ) + self.assertEqual(124, c1.commit_time) + self.assertEqual(123, c2.commit_time) + self.assertTrue(c2.commit_time < c1.commit_time < c3.commit_time) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_walk.py b/env/lib/python3.8/site-packages/dulwich/tests/test_walk.py new file mode 100644 index 00000000..d77ede05 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_walk.py @@ -0,0 +1,632 @@ +# test_walk.py -- Tests for commit walking functionality. +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+# + +"""Tests for commit walking functionality.""" + +from itertools import ( + permutations, +) +from unittest import expectedFailure + +from dulwich.diff_tree import ( + CHANGE_MODIFY, + CHANGE_RENAME, + TreeChange, + RenameDetector, +) +from dulwich.errors import ( + MissingCommitError, +) +from dulwich.object_store import ( + MemoryObjectStore, +) +from dulwich.objects import ( + Commit, + Blob, +) +from dulwich.walk import ORDER_TOPO, WalkEntry, Walker, _topo_reorder +from dulwich.tests import TestCase +from dulwich.tests.utils import ( + F, + make_object, + make_tag, + build_commit_graph, +) + + +class TestWalkEntry(object): + def __init__(self, commit, changes): + self.commit = commit + self.changes = changes + + def __repr__(self): + return "" % ( + self.commit.id, + self.changes, + ) + + def __eq__(self, other): + if not isinstance(other, WalkEntry) or self.commit != other.commit: + return False + if self.changes is None: + return True + return self.changes == other.changes() + + +class WalkerTest(TestCase): + def setUp(self): + super(WalkerTest, self).setUp() + self.store = MemoryObjectStore() + + def make_commits(self, commit_spec, **kwargs): + times = kwargs.pop("times", []) + attrs = kwargs.pop("attrs", {}) + for i, t in enumerate(times): + attrs.setdefault(i + 1, {})["commit_time"] = t + return build_commit_graph(self.store, commit_spec, attrs=attrs, **kwargs) + + def make_linear_commits(self, num_commits, **kwargs): + commit_spec = [] + for i in range(1, num_commits + 1): + c = [i] + if i > 1: + c.append(i - 1) + commit_spec.append(c) + return self.make_commits(commit_spec, **kwargs) + + def assertWalkYields(self, expected, *args, **kwargs): + walker = Walker(self.store, *args, **kwargs) + expected = list(expected) + for i, entry in enumerate(expected): + if isinstance(entry, Commit): + expected[i] = TestWalkEntry(entry, None) + actual = list(walker) + self.assertEqual(expected, actual) + + def test_tag(self): + c1, c2, c3 = self.make_linear_commits(3) + t2 = make_tag(target=c2) + self.store.add_object(t2) + self.assertWalkYields([c2, c1], [t2.id]) + + def test_linear(self): + c1, c2, c3 = self.make_linear_commits(3) + self.assertWalkYields([c1], [c1.id]) + self.assertWalkYields([c2, c1], [c2.id]) + self.assertWalkYields([c3, c2, c1], [c3.id]) + self.assertWalkYields([c3, c2, c1], [c3.id, c1.id]) + self.assertWalkYields([c3, c2], [c3.id], exclude=[c1.id]) + self.assertWalkYields([c3, c2], [c3.id, c1.id], exclude=[c1.id]) + self.assertWalkYields([c3], [c3.id, c1.id], exclude=[c2.id]) + + def test_missing(self): + cs = list(reversed(self.make_linear_commits(20))) + self.assertWalkYields(cs, [cs[0].id]) + + # Exactly how close we can get to a missing commit depends on our + # implementation (in particular the choice of _MAX_EXTRA_COMMITS), but + # we should at least be able to walk some history in a broken repo. 
+ del self.store[cs[-1].id] + for i in range(1, 11): + self.assertWalkYields(cs[:i], [cs[0].id], max_entries=i) + self.assertRaises(MissingCommitError, Walker, self.store, [cs[-1].id]) + + def test_branch(self): + c1, x2, x3, y4 = self.make_commits([[1], [2, 1], [3, 2], [4, 1]]) + self.assertWalkYields([x3, x2, c1], [x3.id]) + self.assertWalkYields([y4, c1], [y4.id]) + self.assertWalkYields([y4, x2, c1], [y4.id, x2.id]) + self.assertWalkYields([y4, x2], [y4.id, x2.id], exclude=[c1.id]) + self.assertWalkYields([y4, x3], [y4.id, x3.id], exclude=[x2.id]) + self.assertWalkYields([y4], [y4.id], exclude=[x3.id]) + self.assertWalkYields([x3, x2], [x3.id], exclude=[y4.id]) + + def test_merge(self): + c1, c2, c3, c4 = self.make_commits([[1], [2, 1], [3, 1], [4, 2, 3]]) + self.assertWalkYields([c4, c3, c2, c1], [c4.id]) + self.assertWalkYields([c3, c1], [c3.id]) + self.assertWalkYields([c2, c1], [c2.id]) + self.assertWalkYields([c4, c3], [c4.id], exclude=[c2.id]) + self.assertWalkYields([c4, c2], [c4.id], exclude=[c3.id]) + + def test_merge_of_new_branch_from_old_base(self): + # The commit on the branch was made at a time after any of the + # commits on master, but the branch was from an older commit. + # See also test_merge_of_old_branch + self.maxDiff = None + c1, c2, c3, c4, c5 = self.make_commits( + [[1], [2, 1], [3, 2], [4, 1], [5, 3, 4]], + times=[1, 2, 3, 4, 5], + ) + self.assertWalkYields([c5, c4, c3, c2, c1], [c5.id]) + self.assertWalkYields([c3, c2, c1], [c3.id]) + self.assertWalkYields([c2, c1], [c2.id]) + + @expectedFailure + def test_merge_of_old_branch(self): + # The commit on the branch was made at a time before any of + # the commits on master, but it was merged into master after + # those commits. + # See also test_merge_of_new_branch_from_old_base + self.maxDiff = None + c1, c2, c3, c4, c5 = self.make_commits( + [[1], [2, 1], [3, 2], [4, 1], [5, 3, 4]], + times=[1, 3, 4, 2, 5], + ) + self.assertWalkYields([c5, c4, c3, c2, c1], [c5.id]) + self.assertWalkYields([c3, c2, c1], [c3.id]) + self.assertWalkYields([c2, c1], [c2.id]) + + def test_reverse(self): + c1, c2, c3 = self.make_linear_commits(3) + self.assertWalkYields([c1, c2, c3], [c3.id], reverse=True) + + def test_max_entries(self): + c1, c2, c3 = self.make_linear_commits(3) + self.assertWalkYields([c3, c2, c1], [c3.id], max_entries=3) + self.assertWalkYields([c3, c2], [c3.id], max_entries=2) + self.assertWalkYields([c3], [c3.id], max_entries=1) + + def test_reverse_after_max_entries(self): + c1, c2, c3 = self.make_linear_commits(3) + self.assertWalkYields([c1, c2, c3], [c3.id], max_entries=3, reverse=True) + self.assertWalkYields([c2, c3], [c3.id], max_entries=2, reverse=True) + self.assertWalkYields([c3], [c3.id], max_entries=1, reverse=True) + + def test_changes_one_parent(self): + blob_a1 = make_object(Blob, data=b"a1") + blob_a2 = make_object(Blob, data=b"a2") + blob_b2 = make_object(Blob, data=b"b2") + c1, c2 = self.make_linear_commits( + 2, + trees={ + 1: [(b"a", blob_a1)], + 2: [(b"a", blob_a2), (b"b", blob_b2)], + }, + ) + e1 = TestWalkEntry(c1, [TreeChange.add((b"a", F, blob_a1.id))]) + e2 = TestWalkEntry( + c2, + [ + TreeChange(CHANGE_MODIFY, (b"a", F, blob_a1.id), (b"a", F, blob_a2.id)), + TreeChange.add((b"b", F, blob_b2.id)), + ], + ) + self.assertWalkYields([e2, e1], [c2.id]) + + def test_changes_multiple_parents(self): + blob_a1 = make_object(Blob, data=b"a1") + blob_b2 = make_object(Blob, data=b"b2") + blob_a3 = make_object(Blob, data=b"a3") + c1, c2, c3 = self.make_commits( + [[1], [2], [3, 1, 2]], + trees={ + 
1: [(b"a", blob_a1)], + 2: [(b"b", blob_b2)], + 3: [(b"a", blob_a3), (b"b", blob_b2)], + }, + ) + # a is a modify/add conflict and b is not conflicted. + changes = [ + [ + TreeChange(CHANGE_MODIFY, (b"a", F, blob_a1.id), (b"a", F, blob_a3.id)), + TreeChange.add((b"a", F, blob_a3.id)), + ] + ] + self.assertWalkYields( + [TestWalkEntry(c3, changes)], [c3.id], exclude=[c1.id, c2.id] + ) + + def test_path_matches(self): + walker = Walker(None, [], paths=[b"foo", b"bar", b"baz/quux"]) + self.assertTrue(walker._path_matches(b"foo")) + self.assertTrue(walker._path_matches(b"foo/a")) + self.assertTrue(walker._path_matches(b"foo/a/b")) + self.assertTrue(walker._path_matches(b"bar")) + self.assertTrue(walker._path_matches(b"baz/quux")) + self.assertTrue(walker._path_matches(b"baz/quux/a")) + + self.assertFalse(walker._path_matches(None)) + self.assertFalse(walker._path_matches(b"oops")) + self.assertFalse(walker._path_matches(b"fool")) + self.assertFalse(walker._path_matches(b"baz")) + self.assertFalse(walker._path_matches(b"baz/quu")) + + def test_paths(self): + blob_a1 = make_object(Blob, data=b"a1") + blob_b2 = make_object(Blob, data=b"b2") + blob_a3 = make_object(Blob, data=b"a3") + blob_b3 = make_object(Blob, data=b"b3") + c1, c2, c3 = self.make_linear_commits( + 3, + trees={ + 1: [(b"a", blob_a1)], + 2: [(b"a", blob_a1), (b"x/b", blob_b2)], + 3: [(b"a", blob_a3), (b"x/b", blob_b3)], + }, + ) + + self.assertWalkYields([c3, c2, c1], [c3.id]) + self.assertWalkYields([c3, c1], [c3.id], paths=[b"a"]) + self.assertWalkYields([c3, c2], [c3.id], paths=[b"x/b"]) + + # All changes are included, not just for requested paths. + changes = [ + TreeChange(CHANGE_MODIFY, (b"a", F, blob_a1.id), (b"a", F, blob_a3.id)), + TreeChange(CHANGE_MODIFY, (b"x/b", F, blob_b2.id), (b"x/b", F, blob_b3.id)), + ] + self.assertWalkYields( + [TestWalkEntry(c3, changes)], [c3.id], max_entries=1, paths=[b"a"] + ) + + def test_paths_subtree(self): + blob_a = make_object(Blob, data=b"a") + blob_b = make_object(Blob, data=b"b") + c1, c2, c3 = self.make_linear_commits( + 3, + trees={ + 1: [(b"x/a", blob_a)], + 2: [(b"b", blob_b), (b"x/a", blob_a)], + 3: [(b"b", blob_b), (b"x/a", blob_a), (b"x/b", blob_b)], + }, + ) + self.assertWalkYields([c2], [c3.id], paths=[b"b"]) + self.assertWalkYields([c3, c1], [c3.id], paths=[b"x"]) + + def test_paths_max_entries(self): + blob_a = make_object(Blob, data=b"a") + blob_b = make_object(Blob, data=b"b") + c1, c2 = self.make_linear_commits( + 2, trees={1: [(b"a", blob_a)], 2: [(b"a", blob_a), (b"b", blob_b)]} + ) + self.assertWalkYields([c2], [c2.id], paths=[b"b"], max_entries=1) + self.assertWalkYields([c1], [c1.id], paths=[b"a"], max_entries=1) + + def test_paths_merge(self): + blob_a1 = make_object(Blob, data=b"a1") + blob_a2 = make_object(Blob, data=b"a2") + blob_a3 = make_object(Blob, data=b"a3") + x1, y2, m3, m4 = self.make_commits( + [[1], [2], [3, 1, 2], [4, 1, 2]], + trees={ + 1: [(b"a", blob_a1)], + 2: [(b"a", blob_a2)], + 3: [(b"a", blob_a3)], + 4: [(b"a", blob_a1)], + }, + ) # Non-conflicting + self.assertWalkYields([m3, y2, x1], [m3.id], paths=[b"a"]) + self.assertWalkYields([y2, x1], [m4.id], paths=[b"a"]) + + def test_changes_with_renames(self): + blob = make_object(Blob, data=b"blob") + c1, c2 = self.make_linear_commits( + 2, trees={1: [(b"a", blob)], 2: [(b"b", blob)]} + ) + entry_a = (b"a", F, blob.id) + entry_b = (b"b", F, blob.id) + changes_without_renames = [ + TreeChange.delete(entry_a), + TreeChange.add(entry_b), + ] + changes_with_renames = [TreeChange(CHANGE_RENAME, 
entry_a, entry_b)] + self.assertWalkYields( + [TestWalkEntry(c2, changes_without_renames)], + [c2.id], + max_entries=1, + ) + detector = RenameDetector(self.store) + self.assertWalkYields( + [TestWalkEntry(c2, changes_with_renames)], + [c2.id], + max_entries=1, + rename_detector=detector, + ) + + def test_follow_rename(self): + blob = make_object(Blob, data=b"blob") + names = [b"a", b"a", b"b", b"b", b"c", b"c"] + + trees = {i + 1: [(n, blob, F)] for i, n in enumerate(names)} + c1, c2, c3, c4, c5, c6 = self.make_linear_commits(6, trees=trees) + self.assertWalkYields([c5], [c6.id], paths=[b"c"]) + + def e(n): + return (n, F, blob.id) + + self.assertWalkYields( + [ + TestWalkEntry(c5, [TreeChange(CHANGE_RENAME, e(b"b"), e(b"c"))]), + TestWalkEntry(c3, [TreeChange(CHANGE_RENAME, e(b"a"), e(b"b"))]), + TestWalkEntry(c1, [TreeChange.add(e(b"a"))]), + ], + [c6.id], + paths=[b"c"], + follow=True, + ) + + def test_follow_rename_remove_path(self): + blob = make_object(Blob, data=b"blob") + _, _, _, c4, c5, c6 = self.make_linear_commits( + 6, + trees={ + 1: [(b"a", blob), (b"c", blob)], + 2: [], + 3: [], + 4: [(b"b", blob)], + 5: [(b"a", blob)], + 6: [(b"c", blob)], + }, + ) + + def e(n): + return (n, F, blob.id) + + # Once the path changes to b, we aren't interested in a or c anymore. + self.assertWalkYields( + [ + TestWalkEntry(c6, [TreeChange(CHANGE_RENAME, e(b"a"), e(b"c"))]), + TestWalkEntry(c5, [TreeChange(CHANGE_RENAME, e(b"b"), e(b"a"))]), + TestWalkEntry(c4, [TreeChange.add(e(b"b"))]), + ], + [c6.id], + paths=[b"c"], + follow=True, + ) + + def test_since(self): + c1, c2, c3 = self.make_linear_commits(3) + self.assertWalkYields([c3, c2, c1], [c3.id], since=-1) + self.assertWalkYields([c3, c2, c1], [c3.id], since=0) + self.assertWalkYields([c3, c2], [c3.id], since=1) + self.assertWalkYields([c3, c2], [c3.id], since=99) + self.assertWalkYields([c3, c2], [c3.id], since=100) + self.assertWalkYields([c3], [c3.id], since=101) + self.assertWalkYields([c3], [c3.id], since=199) + self.assertWalkYields([c3], [c3.id], since=200) + self.assertWalkYields([], [c3.id], since=201) + self.assertWalkYields([], [c3.id], since=300) + + def test_until(self): + c1, c2, c3 = self.make_linear_commits(3) + self.assertWalkYields([], [c3.id], until=-1) + self.assertWalkYields([c1], [c3.id], until=0) + self.assertWalkYields([c1], [c3.id], until=1) + self.assertWalkYields([c1], [c3.id], until=99) + self.assertWalkYields([c2, c1], [c3.id], until=100) + self.assertWalkYields([c2, c1], [c3.id], until=101) + self.assertWalkYields([c2, c1], [c3.id], until=199) + self.assertWalkYields([c3, c2, c1], [c3.id], until=200) + self.assertWalkYields([c3, c2, c1], [c3.id], until=201) + self.assertWalkYields([c3, c2, c1], [c3.id], until=300) + + def test_since_until(self): + c1, c2, c3 = self.make_linear_commits(3) + self.assertWalkYields([], [c3.id], since=100, until=99) + self.assertWalkYields([c3, c2, c1], [c3.id], since=-1, until=201) + self.assertWalkYields([c2], [c3.id], since=100, until=100) + self.assertWalkYields([c2], [c3.id], since=50, until=150) + + def test_since_over_scan(self): + commits = self.make_linear_commits(11, times=[9, 0, 1, 2, 3, 4, 5, 8, 6, 7, 9]) + c8, _, c10, c11 = commits[-4:] + del self.store[commits[0].id] + # c9 is older than we want to walk, but is out of order with its + # parent, so we need to walk past it to get to c8. + # c1 would also match, but we've deleted it, and it should get pruned + # even with over-scanning. 
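
The rename-aware walks above hinge on passing a RenameDetector through to the diff machinery. The same setup as test_changes_with_renames, rearranged as a standalone usage sketch:

from dulwich.diff_tree import RenameDetector
from dulwich.object_store import MemoryObjectStore
from dulwich.objects import Blob
from dulwich.tests.utils import build_commit_graph, make_object
from dulwich.walk import Walker

store = MemoryObjectStore()
blob = make_object(Blob, data=b"blob")
c1, c2 = build_commit_graph(
    store, [[1], [2, 1]], trees={1: [(b"a", blob)], 2: [(b"b", blob)]}
)
walker = Walker(store, [c2.id], max_entries=1,
                rename_detector=RenameDetector(store))
(entry,) = list(walker)
# With the detector, the delete of b"a" plus add of b"b" collapses into a
# single CHANGE_RENAME entry, as asserted in test_changes_with_renames.
changes = entry.changes()
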
+ self.assertWalkYields([c11, c10, c8], [c11.id], since=7) + + def assertTopoOrderEqual(self, expected_commits, commits): + entries = [TestWalkEntry(c, None) for c in commits] + actual_ids = [e.commit.id for e in list(_topo_reorder(entries))] + self.assertEqual([c.id for c in expected_commits], actual_ids) + + def test_topo_reorder_linear(self): + commits = self.make_linear_commits(5) + commits.reverse() + for perm in permutations(commits): + self.assertTopoOrderEqual(commits, perm) + + def test_topo_reorder_multiple_parents(self): + c1, c2, c3 = self.make_commits([[1], [2], [3, 1, 2]]) + # Already sorted, so totally FIFO. + self.assertTopoOrderEqual([c3, c2, c1], [c3, c2, c1]) + self.assertTopoOrderEqual([c3, c1, c2], [c3, c1, c2]) + + # c3 causes one parent to be yielded. + self.assertTopoOrderEqual([c3, c2, c1], [c2, c3, c1]) + self.assertTopoOrderEqual([c3, c1, c2], [c1, c3, c2]) + + # c3 causes both parents to be yielded. + self.assertTopoOrderEqual([c3, c2, c1], [c1, c2, c3]) + self.assertTopoOrderEqual([c3, c2, c1], [c2, c1, c3]) + + def test_topo_reorder_multiple_children(self): + c1, c2, c3 = self.make_commits([[1], [2, 1], [3, 1]]) + + # c2 and c3 are FIFO but c1 moves to the end. + self.assertTopoOrderEqual([c3, c2, c1], [c3, c2, c1]) + self.assertTopoOrderEqual([c3, c2, c1], [c3, c1, c2]) + self.assertTopoOrderEqual([c3, c2, c1], [c1, c3, c2]) + + self.assertTopoOrderEqual([c2, c3, c1], [c2, c3, c1]) + self.assertTopoOrderEqual([c2, c3, c1], [c2, c1, c3]) + self.assertTopoOrderEqual([c2, c3, c1], [c1, c2, c3]) + + def test_out_of_order_children(self): + c1, c2, c3, c4, c5 = self.make_commits( + [[1], [2, 1], [3, 2], [4, 1], [5, 3, 4]], times=[2, 1, 3, 4, 5] + ) + self.assertWalkYields([c5, c4, c3, c1, c2], [c5.id]) + self.assertWalkYields([c5, c4, c3, c2, c1], [c5.id], order=ORDER_TOPO) + + def test_out_of_order_with_exclude(self): + # Create the following graph: + # c1-------x2---m6 + # \ / + # \-y3--y4-/--y5 + # Due to skew, y5 is the oldest commit. + c1, x2, y3, y4, y5, m6 = self.make_commits( + [[1], [2, 1], [3, 1], [4, 3], [5, 4], [6, 2, 4]], + times=[2, 3, 4, 5, 1, 6], + ) + self.assertWalkYields([m6, y4, y3, x2, c1], [m6.id]) + # Ensure that c1..y4 get excluded even though they're popped from the + # priority queue long before y5. + self.assertWalkYields([m6, x2], [m6.id], exclude=[y5.id]) + + def test_empty_walk(self): + c1, c2, c3 = self.make_linear_commits(3) + self.assertWalkYields([], [c3.id], exclude=[c3.id]) + + +class WalkEntryTest(TestCase): + def setUp(self): + super(WalkEntryTest, self).setUp() + self.store = MemoryObjectStore() + + def make_commits(self, commit_spec, **kwargs): + times = kwargs.pop("times", []) + attrs = kwargs.pop("attrs", {}) + for i, t in enumerate(times): + attrs.setdefault(i + 1, {})["commit_time"] = t + return build_commit_graph(self.store, commit_spec, attrs=attrs, **kwargs) + + def make_linear_commits(self, num_commits, **kwargs): + commit_spec = [] + for i in range(1, num_commits + 1): + c = [i] + if i > 1: + c.append(i - 1) + commit_spec.append(c) + return self.make_commits(commit_spec, **kwargs) + + def test_all_changes(self): + # Construct a commit with 2 files in different subdirectories. + blob_a = make_object(Blob, data=b"a") + blob_b = make_object(Blob, data=b"b") + c1 = self.make_linear_commits( + 1, + trees={1: [(b"x/a", blob_a), (b"y/b", blob_b)]}, + )[0] + + # Get the WalkEntry for the commit. 
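
For readers skimming the ordering tests above: ORDER_TOPO guarantees children are yielded before their parents even when commit timestamps are skewed, while the default date order follows timestamps. Replaying test_out_of_order_children as a standalone sketch:

from dulwich.object_store import MemoryObjectStore
from dulwich.tests.utils import build_commit_graph
from dulwich.walk import ORDER_TOPO, Walker

store = MemoryObjectStore()
c1, c2, c3, c4, c5 = build_commit_graph(
    store,
    [[1], [2, 1], [3, 2], [4, 1], [5, 3, 4]],
    attrs={i + 1: {"commit_time": t} for i, t in enumerate([2, 1, 3, 4, 5])},
)
date_ids = [e.commit.id for e in Walker(store, [c5.id])]
topo_ids = [e.commit.id for e in Walker(store, [c5.id], order=ORDER_TOPO)]
assert date_ids == [c.id for c in (c5, c4, c3, c1, c2)]  # c2's time is skewed
assert topo_ids == [c.id for c in (c5, c4, c3, c2, c1)]  # children first
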
+ walker = Walker(self.store, c1.id) + walker_entry = list(walker)[0] + changes = walker_entry.changes() + + # Compare the changes with the expected values. + entry_a = (b"x/a", F, blob_a.id) + entry_b = (b"y/b", F, blob_b.id) + self.assertEqual( + [TreeChange.add(entry_a), TreeChange.add(entry_b)], + changes, + ) + + def test_all_with_merge(self): + blob_a = make_object(Blob, data=b"a") + blob_a2 = make_object(Blob, data=b"a2") + blob_b = make_object(Blob, data=b"b") + blob_b2 = make_object(Blob, data=b"b2") + x1, y2, m3 = self.make_commits( + [[1], [2], [3, 1, 2]], + trees={ + 1: [(b"x/a", blob_a)], + 2: [(b"y/b", blob_b)], + 3: [(b"x/a", blob_a2), (b"y/b", blob_b2)], + }, + ) + + # Get the WalkEntry for the merge commit. + walker = Walker(self.store, m3.id) + entries = list(walker) + walker_entry = entries[0] + self.assertEqual(walker_entry.commit.id, m3.id) + changes = walker_entry.changes() + self.assertEqual(2, len(changes)) + + entry_a = (b"x/a", F, blob_a.id) + entry_a2 = (b"x/a", F, blob_a2.id) + entry_b = (b"y/b", F, blob_b.id) + entry_b2 = (b"y/b", F, blob_b2.id) + self.assertEqual( + [ + [ + TreeChange(CHANGE_MODIFY, entry_a, entry_a2), + TreeChange.add(entry_a2), + ], + [ + TreeChange.add(entry_b2), + TreeChange(CHANGE_MODIFY, entry_b, entry_b2), + ], + ], + changes, + ) + + def test_filter_changes(self): + # Construct a commit with 2 files in different subdirectories. + blob_a = make_object(Blob, data=b"a") + blob_b = make_object(Blob, data=b"b") + c1 = self.make_linear_commits( + 1, + trees={1: [(b"x/a", blob_a), (b"y/b", blob_b)]}, + )[0] + + # Get the WalkEntry for the commit. + walker = Walker(self.store, c1.id) + walker_entry = list(walker)[0] + changes = walker_entry.changes(path_prefix=b"x") + + # Compare the changes with the expected values. + entry_a = (b"a", F, blob_a.id) + self.assertEqual( + [TreeChange.add(entry_a)], + changes, + ) + + def test_filter_with_merge(self): + blob_a = make_object(Blob, data=b"a") + blob_a2 = make_object(Blob, data=b"a2") + blob_b = make_object(Blob, data=b"b") + blob_b2 = make_object(Blob, data=b"b2") + x1, y2, m3 = self.make_commits( + [[1], [2], [3, 1, 2]], + trees={ + 1: [(b"x/a", blob_a)], + 2: [(b"y/b", blob_b)], + 3: [(b"x/a", blob_a2), (b"y/b", blob_b2)], + }, + ) + + # Get the WalkEntry for the merge commit. + walker = Walker(self.store, m3.id) + entries = list(walker) + walker_entry = entries[0] + self.assertEqual(walker_entry.commit.id, m3.id) + changes = walker_entry.changes(b"x") + self.assertEqual(1, len(changes)) + + entry_a = (b"a", F, blob_a.id) + entry_a2 = (b"a", F, blob_a2.id) + self.assertEqual( + [[TreeChange(CHANGE_MODIFY, entry_a, entry_a2)]], + changes, + ) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/test_web.py b/env/lib/python3.8/site-packages/dulwich/tests/test_web.py new file mode 100644 index 00000000..34fcc545 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/test_web.py @@ -0,0 +1,590 @@ +# test_web.py -- Tests for the git HTTP server +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. 
+# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Tests for the Git HTTP server.""" + +from io import BytesIO +import gzip +import re +import os +from typing import Type + +from dulwich.object_store import ( + MemoryObjectStore, +) +from dulwich.objects import ( + Blob, +) +from dulwich.repo import ( + BaseRepo, + MemoryRepo, +) +from dulwich.server import ( + DictBackend, +) +from dulwich.tests import ( + TestCase, +) +from dulwich.web import ( + HTTP_OK, + HTTP_NOT_FOUND, + HTTP_FORBIDDEN, + HTTP_ERROR, + GunzipFilter, + send_file, + get_text_file, + get_loose_object, + get_pack_file, + get_idx_file, + get_info_refs, + get_info_packs, + handle_service_request, + _LengthLimitedFile, + HTTPGitRequest, + HTTPGitApplication, +) + +from dulwich.tests.utils import ( + make_object, + make_tag, +) + + +class MinimalistWSGIInputStream(object): + """WSGI input stream with no 'seek()' and 'tell()' methods.""" + + def __init__(self, data): + self.data = data + self.pos = 0 + + def read(self, howmuch): + start = self.pos + end = self.pos + howmuch + if start >= len(self.data): + return "" + self.pos = end + return self.data[start:end] + + +class MinimalistWSGIInputStream2(MinimalistWSGIInputStream): + """WSGI input stream with no *working* 'seek()' and 'tell()' methods.""" + + def seek(self, pos): + raise NotImplementedError + + def tell(self): + raise NotImplementedError + + +class TestHTTPGitRequest(HTTPGitRequest): + """HTTPGitRequest with overridden methods to help test caching.""" + + def __init__(self, *args, **kwargs): + HTTPGitRequest.__init__(self, *args, **kwargs) + self.cached = None + + def nocache(self): + self.cached = False + + def cache_forever(self): + self.cached = True + + +class WebTestCase(TestCase): + """Base TestCase with useful instance vars and utility functions.""" + + _req_class = TestHTTPGitRequest # type: Type[HTTPGitRequest] + + def setUp(self): + super(WebTestCase, self).setUp() + self._environ = {} + self._req = self._req_class( + self._environ, self._start_response, handlers=self._handlers() + ) + self._status = None + self._headers = [] + self._output = BytesIO() + + def _start_response(self, status, headers): + self._status = status + self._headers = list(headers) + return self._output.write + + def _handlers(self): + return None + + def assertContentTypeEquals(self, expected): + self.assertIn(("Content-Type", expected), self._headers) + + +def _test_backend(objects, refs=None, named_files=None): + if not refs: + refs = {} + if not named_files: + named_files = {} + repo = MemoryRepo.init_bare(objects, refs) + for path, contents in named_files.items(): + repo._put_named_file(path, contents) + return DictBackend({"/": repo}) + + +class DumbHandlersTestCase(WebTestCase): + def test_send_file_not_found(self): + list(send_file(self._req, None, "text/plain")) + self.assertEqual(HTTP_NOT_FOUND, self._status) + + def test_send_file(self): + f = BytesIO(b"foobar") + output = b"".join(send_file(self._req, f, "some/thing")) + self.assertEqual(b"foobar", output) + self.assertEqual(HTTP_OK, self._status) + 
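
These fixtures capture responses through the plain WSGI calling convention; for readers unfamiliar with it, a minimal self-contained illustration of what _start_response above records (demo_app is a throwaway example, unrelated to dulwich):

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

captured = {}

def start_response(status, headers):
    captured["status"], captured["headers"] = status, list(headers)
    return lambda data: None  # the optional write() callable

body = b"".join(demo_app({}, start_response))
assert (captured["status"], body) == ("200 OK", b"hello")
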
self.assertContentTypeEquals("some/thing") + self.assertTrue(f.closed) + + def test_send_file_buffered(self): + bufsize = 10240 + xs = b"x" * bufsize + f = BytesIO(2 * xs) + self.assertEqual([xs, xs], list(send_file(self._req, f, "some/thing"))) + self.assertEqual(HTTP_OK, self._status) + self.assertContentTypeEquals("some/thing") + self.assertTrue(f.closed) + + def test_send_file_error(self): + class TestFile(object): + def __init__(self, exc_class): + self.closed = False + self._exc_class = exc_class + + def read(self, size=-1): + raise self._exc_class() + + def close(self): + self.closed = True + + f = TestFile(IOError) + list(send_file(self._req, f, "some/thing")) + self.assertEqual(HTTP_ERROR, self._status) + self.assertTrue(f.closed) + self.assertFalse(self._req.cached) + + # non-IOErrors are reraised + f = TestFile(AttributeError) + self.assertRaises(AttributeError, list, send_file(self._req, f, "some/thing")) + self.assertTrue(f.closed) + self.assertFalse(self._req.cached) + + def test_get_text_file(self): + backend = _test_backend([], named_files={"description": b"foo"}) + mat = re.search(".*", "description") + output = b"".join(get_text_file(self._req, backend, mat)) + self.assertEqual(b"foo", output) + self.assertEqual(HTTP_OK, self._status) + self.assertContentTypeEquals("text/plain") + self.assertFalse(self._req.cached) + + def test_get_loose_object(self): + blob = make_object(Blob, data=b"foo") + backend = _test_backend([blob]) + mat = re.search("^(..)(.{38})$", blob.id.decode("ascii")) + output = b"".join(get_loose_object(self._req, backend, mat)) + self.assertEqual(blob.as_legacy_object(), output) + self.assertEqual(HTTP_OK, self._status) + self.assertContentTypeEquals("application/x-git-loose-object") + self.assertTrue(self._req.cached) + + def test_get_loose_object_missing(self): + mat = re.search("^(..)(.{38})$", "1" * 40) + list(get_loose_object(self._req, _test_backend([]), mat)) + self.assertEqual(HTTP_NOT_FOUND, self._status) + + def test_get_loose_object_error(self): + blob = make_object(Blob, data=b"foo") + backend = _test_backend([blob]) + mat = re.search("^(..)(.{38})$", blob.id.decode("ascii")) + + def as_legacy_object_error(self): + raise IOError + + self.addCleanup(setattr, Blob, "as_legacy_object", Blob.as_legacy_object) + Blob.as_legacy_object = as_legacy_object_error + list(get_loose_object(self._req, backend, mat)) + self.assertEqual(HTTP_ERROR, self._status) + + def test_get_pack_file(self): + pack_name = os.path.join("objects", "pack", "pack-%s.pack" % ("1" * 40)) + backend = _test_backend([], named_files={pack_name: b"pack contents"}) + mat = re.search(".*", pack_name) + output = b"".join(get_pack_file(self._req, backend, mat)) + self.assertEqual(b"pack contents", output) + self.assertEqual(HTTP_OK, self._status) + self.assertContentTypeEquals("application/x-git-packed-objects") + self.assertTrue(self._req.cached) + + def test_get_idx_file(self): + idx_name = os.path.join("objects", "pack", "pack-%s.idx" % ("1" * 40)) + backend = _test_backend([], named_files={idx_name: b"idx contents"}) + mat = re.search(".*", idx_name) + output = b"".join(get_idx_file(self._req, backend, mat)) + self.assertEqual(b"idx contents", output) + self.assertEqual(HTTP_OK, self._status) + self.assertContentTypeEquals("application/x-git-packed-objects-toc") + self.assertTrue(self._req.cached) + + def test_get_info_refs(self): + self._environ["QUERY_STRING"] = "" + + blob1 = make_object(Blob, data=b"1") + blob2 = make_object(Blob, data=b"2") + blob3 = make_object(Blob, 
data=b"3") + + tag1 = make_tag(blob2, name=b"tag-tag") + + objects = [blob1, blob2, blob3, tag1] + refs = { + b"HEAD": b"000", + b"refs/heads/master": blob1.id, + b"refs/tags/tag-tag": tag1.id, + b"refs/tags/blob-tag": blob3.id, + } + backend = _test_backend(objects, refs=refs) + + mat = re.search(".*", "//info/refs") + self.assertEqual( + [ + blob1.id + b"\trefs/heads/master\n", + blob3.id + b"\trefs/tags/blob-tag\n", + tag1.id + b"\trefs/tags/tag-tag\n", + blob2.id + b"\trefs/tags/tag-tag^{}\n", + ], + list(get_info_refs(self._req, backend, mat)), + ) + self.assertEqual(HTTP_OK, self._status) + self.assertContentTypeEquals("text/plain") + self.assertFalse(self._req.cached) + + def test_get_info_refs_not_found(self): + self._environ["QUERY_STRING"] = "" + + objects = [] + refs = {} + backend = _test_backend(objects, refs=refs) + + mat = re.search("info/refs", "/foo/info/refs") + self.assertEqual( + [b"No git repository was found at /foo"], + list(get_info_refs(self._req, backend, mat)), + ) + self.assertEqual(HTTP_NOT_FOUND, self._status) + self.assertContentTypeEquals("text/plain") + + def test_get_info_packs(self): + class TestPackData(object): + def __init__(self, sha): + self.filename = "pack-%s.pack" % sha + + class TestPack(object): + def __init__(self, sha): + self.data = TestPackData(sha) + + packs = [TestPack(str(i) * 40) for i in range(1, 4)] + + class TestObjectStore(MemoryObjectStore): + # property must be overridden, can't be assigned + @property + def packs(self): + return packs + + store = TestObjectStore() + repo = BaseRepo(store, None) + backend = DictBackend({"/": repo}) + mat = re.search(".*", "//info/packs") + output = b"".join(get_info_packs(self._req, backend, mat)) + expected = b"".join( + [(b"P pack-" + s + b".pack\n") for s in [b"1" * 40, b"2" * 40, b"3" * 40]] + ) + self.assertEqual(expected, output) + self.assertEqual(HTTP_OK, self._status) + self.assertContentTypeEquals("text/plain") + self.assertFalse(self._req.cached) + + +class SmartHandlersTestCase(WebTestCase): + class _TestUploadPackHandler(object): + def __init__( + self, + backend, + args, + proto, + stateless_rpc=None, + advertise_refs=False, + ): + self.args = args + self.proto = proto + self.stateless_rpc = stateless_rpc + self.advertise_refs = advertise_refs + + def handle(self): + self.proto.write(b"handled input: " + self.proto.recv(1024)) + + def _make_handler(self, *args, **kwargs): + self._handler = self._TestUploadPackHandler(*args, **kwargs) + return self._handler + + def _handlers(self): + return {b"git-upload-pack": self._make_handler} + + def test_handle_service_request_unknown(self): + mat = re.search(".*", "/git-evil-handler") + content = list(handle_service_request(self._req, "backend", mat)) + self.assertEqual(HTTP_FORBIDDEN, self._status) + self.assertNotIn(b"git-evil-handler", b"".join(content)) + self.assertFalse(self._req.cached) + + def _run_handle_service_request(self, content_length=None): + self._environ["wsgi.input"] = BytesIO(b"foo") + if content_length is not None: + self._environ["CONTENT_LENGTH"] = content_length + mat = re.search(".*", "/git-upload-pack") + + class Backend(object): + def open_repository(self, path): + return None + + handler_output = b"".join(handle_service_request(self._req, Backend(), mat)) + write_output = self._output.getvalue() + # Ensure all output was written via the write callback. 
+ self.assertEqual(b"", handler_output) + self.assertEqual(b"handled input: foo", write_output) + self.assertContentTypeEquals("application/x-git-upload-pack-result") + self.assertFalse(self._handler.advertise_refs) + self.assertTrue(self._handler.stateless_rpc) + self.assertFalse(self._req.cached) + + def test_handle_service_request(self): + self._run_handle_service_request() + + def test_handle_service_request_with_length(self): + self._run_handle_service_request(content_length="3") + + def test_handle_service_request_empty_length(self): + self._run_handle_service_request(content_length="") + + def test_get_info_refs_unknown(self): + self._environ["QUERY_STRING"] = "service=git-evil-handler" + + class Backend(object): + def open_repository(self, url): + return None + + mat = re.search(".*", "/git-evil-pack") + content = list(get_info_refs(self._req, Backend(), mat)) + self.assertNotIn(b"git-evil-handler", b"".join(content)) + self.assertEqual(HTTP_FORBIDDEN, self._status) + self.assertFalse(self._req.cached) + + def test_get_info_refs(self): + self._environ["wsgi.input"] = BytesIO(b"foo") + self._environ["QUERY_STRING"] = "service=git-upload-pack" + + class Backend(object): + def open_repository(self, url): + return None + + mat = re.search(".*", "/git-upload-pack") + handler_output = b"".join(get_info_refs(self._req, Backend(), mat)) + write_output = self._output.getvalue() + self.assertEqual( + ( + b"001e# service=git-upload-pack\n" + b"0000" + # input is ignored by the handler + b"handled input: " + ), + write_output, + ) + # Ensure all output was written via the write callback. + self.assertEqual(b"", handler_output) + self.assertTrue(self._handler.advertise_refs) + self.assertTrue(self._handler.stateless_rpc) + self.assertFalse(self._req.cached) + + +class LengthLimitedFileTestCase(TestCase): + def test_no_cutoff(self): + f = _LengthLimitedFile(BytesIO(b"foobar"), 1024) + self.assertEqual(b"foobar", f.read()) + + def test_cutoff(self): + f = _LengthLimitedFile(BytesIO(b"foobar"), 3) + self.assertEqual(b"foo", f.read()) + self.assertEqual(b"", f.read()) + + def test_multiple_reads(self): + f = _LengthLimitedFile(BytesIO(b"foobar"), 3) + self.assertEqual(b"fo", f.read(2)) + self.assertEqual(b"o", f.read(2)) + self.assertEqual(b"", f.read()) + + +class HTTPGitRequestTestCase(WebTestCase): + + # This class tests the contents of the actual cache headers + _req_class = HTTPGitRequest + + def test_not_found(self): + self._req.cache_forever() # cache headers should be discarded + message = "Something not found" + self.assertEqual(message.encode("ascii"), self._req.not_found(message)) + self.assertEqual(HTTP_NOT_FOUND, self._status) + self.assertEqual(set([("Content-Type", "text/plain")]), set(self._headers)) + + def test_forbidden(self): + self._req.cache_forever() # cache headers should be discarded + message = "Something not found" + self.assertEqual(message.encode("ascii"), self._req.forbidden(message)) + self.assertEqual(HTTP_FORBIDDEN, self._status) + self.assertEqual(set([("Content-Type", "text/plain")]), set(self._headers)) + + def test_respond_ok(self): + self._req.respond() + self.assertEqual([], self._headers) + self.assertEqual(HTTP_OK, self._status) + + def test_respond(self): + self._req.nocache() + self._req.respond( + status=402, + content_type="some/type", + headers=[("X-Foo", "foo"), ("X-Bar", "bar")], + ) + self.assertEqual( + set( + [ + ("X-Foo", "foo"), + ("X-Bar", "bar"), + ("Content-Type", "some/type"), + ("Expires", "Fri, 01 Jan 1980 00:00:00 GMT"), + ("Pragma", 
"no-cache"), + ("Cache-Control", "no-cache, max-age=0, must-revalidate"), + ] + ), + set(self._headers), + ) + self.assertEqual(402, self._status) + + +class HTTPGitApplicationTestCase(TestCase): + def setUp(self): + super(HTTPGitApplicationTestCase, self).setUp() + self._app = HTTPGitApplication("backend") + + self._environ = { + "PATH_INFO": "/foo", + "REQUEST_METHOD": "GET", + } + + def _test_handler(self, req, backend, mat): + # tests interface used by all handlers + self.assertEqual(self._environ, req.environ) + self.assertEqual("backend", backend) + self.assertEqual("/foo", mat.group(0)) + return "output" + + def _add_handler(self, app): + req = self._environ["REQUEST_METHOD"] + app.services = { + (req, re.compile("/foo$")): self._test_handler, + } + + def test_call(self): + self._add_handler(self._app) + self.assertEqual("output", self._app(self._environ, None)) + + def test_fallback_app(self): + def test_app(environ, start_response): + return "output" + + app = HTTPGitApplication("backend", fallback_app=test_app) + self.assertEqual("output", app(self._environ, None)) + + +class GunzipTestCase(HTTPGitApplicationTestCase): + __doc__ = """TestCase for testing the GunzipFilter, ensuring the wsgi.input + is correctly decompressed and headers are corrected. + """ + example_text = __doc__.encode("ascii") + + def setUp(self): + super(GunzipTestCase, self).setUp() + self._app = GunzipFilter(self._app) + self._environ["HTTP_CONTENT_ENCODING"] = "gzip" + self._environ["REQUEST_METHOD"] = "POST" + + def _get_zstream(self, text): + zstream = BytesIO() + zfile = gzip.GzipFile(fileobj=zstream, mode="w") + zfile.write(text) + zfile.close() + zlength = zstream.tell() + zstream.seek(0) + return zstream, zlength + + def _test_call(self, orig, zstream, zlength): + self._add_handler(self._app.app) + self.assertLess(zlength, len(orig)) + self.assertEqual(self._environ["HTTP_CONTENT_ENCODING"], "gzip") + self._environ["CONTENT_LENGTH"] = zlength + self._environ["wsgi.input"] = zstream + self._app(self._environ, None) + buf = self._environ["wsgi.input"] + self.assertIsNot(buf, zstream) + buf.seek(0) + self.assertEqual(orig, buf.read()) + self.assertIs(None, self._environ.get("CONTENT_LENGTH")) + self.assertNotIn("HTTP_CONTENT_ENCODING", self._environ) + + def test_call(self): + self._test_call(self.example_text, *self._get_zstream(self.example_text)) + + def test_call_no_seek(self): + """ + This ensures that the gunzipping code doesn't require any methods on + 'wsgi.input' except for '.read()'. (In particular, it shouldn't + require '.seek()'. See https://github.com/jelmer/dulwich/issues/140.) + """ + zstream, zlength = self._get_zstream(self.example_text) + self._test_call( + self.example_text, + MinimalistWSGIInputStream(zstream.read()), + zlength, + ) + + def test_call_no_working_seek(self): + """ + Similar to 'test_call_no_seek', but this time the methods are available + (but defunct). See https://github.com/jonashaag/klaus/issues/154. + """ + zstream, zlength = self._get_zstream(self.example_text) + self._test_call( + self.example_text, + MinimalistWSGIInputStream2(zstream.read()), + zlength, + ) diff --git a/env/lib/python3.8/site-packages/dulwich/tests/utils.py b/env/lib/python3.8/site-packages/dulwich/tests/utils.py new file mode 100644 index 00000000..110046c8 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/tests/utils.py @@ -0,0 +1,378 @@ +# utils.py -- Test utilities for Dulwich. +# Copyright (C) 2010 Google, Inc. 
+# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""Utility functions common to Dulwich tests.""" + + +import datetime +import os +import shutil +import tempfile +import time +import types + +import warnings + +from dulwich.index import ( + commit_tree, +) +from dulwich.objects import ( + FixedSha, + Commit, + Tag, + object_class, +) +from dulwich.pack import ( + OFS_DELTA, + REF_DELTA, + DELTA_TYPES, + obj_sha, + SHA1Writer, + write_pack_header, + write_pack_object, + create_delta, +) +from dulwich.repo import Repo +from dulwich.tests import ( # noqa: F401 + skipIf, + SkipTest, +) + + +# Plain files are very frequently used in tests, so let the mode be very short. +F = 0o100644 # Shorthand mode for Files. + + +def open_repo(name, temp_dir=None): + """Open a copy of a repo in a temporary directory. + + Use this function for accessing repos in dulwich/tests/data/repos to avoid + accidentally or intentionally modifying those repos in place. Use + tear_down_repo to delete any temp files created. + + Args: + name: The name of the repository, relative to + dulwich/tests/data/repos + temp_dir: temporary directory to initialize to. If not provided, a + temporary directory will be created. + Returns: An initialized Repo object that lives in a temporary directory. + """ + if temp_dir is None: + temp_dir = tempfile.mkdtemp() + repo_dir = os.path.join(os.path.dirname(__file__), "..", "..", "testdata", "repos", name) + temp_repo_dir = os.path.join(temp_dir, name) + shutil.copytree(repo_dir, temp_repo_dir, symlinks=True) + return Repo(temp_repo_dir) + + +def tear_down_repo(repo): + """Tear down a test repository.""" + repo.close() + temp_dir = os.path.dirname(repo.path.rstrip(os.sep)) + shutil.rmtree(temp_dir) + + +def make_object(cls, **attrs): + """Make an object for testing and assign some members. + + This method creates a new subclass to allow arbitrary attribute + reassignment, which is not otherwise possible with objects having + __slots__. + + Args: + attrs: dict of attributes to set on the new object. + Returns: A newly initialized object of type cls. + """ + + class TestObject(cls): + """Class that inherits from the given class, but without __slots__. + + Note that classes with __slots__ can't have arbitrary attributes + monkey-patched in, so this is a class that is exactly the same only + with a __dict__ instead of __slots__. + """ + + pass + + TestObject.__name__ = "TestObject_" + cls.__name__ + + obj = TestObject() + for name, value in attrs.items(): + if name == "id": + # id property is read-only, so we overwrite sha instead. + sha = FixedSha(value) + obj.sha = lambda: sha + else: + setattr(obj, name, value) + return obj + + +def make_commit(**attrs): + """Make a Commit object with a default set of members. 
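+
+    Sample usage (the attribute shown is illustrative):
+
+      commit = make_commit(message=b"Fix a bug.")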
+
+    Args:
+      attrs: dict of attributes to overwrite from the default values.
+    Returns: A newly initialized Commit object.
+    """
+    default_time = 1262304000  # 2010-01-01 00:00:00
+    all_attrs = {
+        "author": b"Test Author <test@nodomain.com>",
+        "author_time": default_time,
+        "author_timezone": 0,
+        "committer": b"Test Committer <test@nodomain.com>",
+        "commit_time": default_time,
+        "commit_timezone": 0,
+        "message": b"Test message.",
+        "parents": [],
+        "tree": b"0" * 40,
+    }
+    all_attrs.update(attrs)
+    return make_object(Commit, **all_attrs)
+
+
+def make_tag(target, **attrs):
+    """Make a Tag object with a default set of values.
+
+    Args:
+      target: object to be tagged (Commit, Blob, Tree, etc)
+      attrs: dict of attributes to overwrite from the default values.
+    Returns: A newly initialized Tag object.
+    """
+    target_id = target.id
+    target_type = object_class(target.type_name)
+    default_time = int(time.mktime(datetime.datetime(2010, 1, 1).timetuple()))
+    all_attrs = {
+        "tagger": b"Test Author <test@nodomain.com>",
+        "tag_time": default_time,
+        "tag_timezone": 0,
+        "message": b"Test message.",
+        "object": (target_type, target_id),
+        "name": b"Test Tag",
+    }
+    all_attrs.update(attrs)
+    return make_object(Tag, **all_attrs)
+
+
+def functest_builder(method, func):
+    """Generate a test method that tests the given function."""
+
+    def do_test(self):
+        method(self, func)
+
+    return do_test
+
+
+def ext_functest_builder(method, func):
+    """Generate a test method that tests the given extension function.
+
+    This is intended to generate test methods that test both a pure-Python
+    version and an extension version using common test code. The extension test
+    will raise SkipTest if the extension is not found.
+
+    Sample usage:
+
+      class MyTest(TestCase):
+          def _do_some_test(self, func_impl):
+              self.assertEqual('foo', func_impl())
+
+          test_foo = functest_builder(_do_some_test, foo_py)
+          test_foo_extension = ext_functest_builder(_do_some_test, _foo_c)
+
+    Args:
+      method: The method to run. It must take two parameters, self and the
+        function implementation to test.
+      func: The function implementation to pass to method.
+    """
+
+    def do_test(self):
+        if not isinstance(func, types.BuiltinFunctionType):
+            raise SkipTest("%s extension not found" % func)
+        method(self, func)
+
+    return do_test
+
+
+def build_pack(f, objects_spec, store=None):
+    """Write test pack data from a concise spec.
+
+    Args:
+      f: A file-like object to write the pack to.
+      objects_spec: A list of (type_num, obj). For non-delta types, obj
+        is the string of that object's data.
+        For delta types, obj is a tuple of (base, data), where:
+
+        * base can be either an index in objects_spec of the base for that
+          delta; or for a ref delta, a SHA, in which case the resulting pack
+          will be thin and the base will be an external ref.
+        * data is a string of the full, non-deltified data for that object.
+
+        Note that offsets/refs and deltas are computed within this function.
+      store: An optional ObjectStore for looking up external refs.
+ Returns: A list of tuples in the order specified by objects_spec: + (offset, type num, data, sha, CRC32) + """ + sf = SHA1Writer(f) + num_objects = len(objects_spec) + write_pack_header(sf.write, num_objects) + + full_objects = {} + offsets = {} + crc32s = {} + + while len(full_objects) < num_objects: + for i, (type_num, data) in enumerate(objects_spec): + if type_num not in DELTA_TYPES: + full_objects[i] = (type_num, data, obj_sha(type_num, [data])) + continue + base, data = data + if isinstance(base, int): + if base not in full_objects: + continue + base_type_num, _, _ = full_objects[base] + else: + base_type_num, _ = store.get_raw(base) + full_objects[i] = ( + base_type_num, + data, + obj_sha(base_type_num, [data]), + ) + + for i, (type_num, obj) in enumerate(objects_spec): + offset = f.tell() + if type_num == OFS_DELTA: + base_index, data = obj + base = offset - offsets[base_index] + _, base_data, _ = full_objects[base_index] + obj = (base, list(create_delta(base_data, data))) + elif type_num == REF_DELTA: + base_ref, data = obj + if isinstance(base_ref, int): + _, base_data, base = full_objects[base_ref] + else: + base_type_num, base_data = store.get_raw(base_ref) + base = obj_sha(base_type_num, base_data) + obj = (base, list(create_delta(base_data, data))) + + crc32 = write_pack_object(sf.write, type_num, obj) + offsets[i] = offset + crc32s[i] = crc32 + + expected = [] + for i in range(num_objects): + type_num, data, sha = full_objects[i] + assert len(sha) == 20 + expected.append((offsets[i], type_num, data, sha, crc32s[i])) + + sf.write_sha() + f.seek(0) + return expected + + +def build_commit_graph(object_store, commit_spec, trees=None, attrs=None): + """Build a commit graph from a concise specification. + + Sample usage: + >>> c1, c2, c3 = build_commit_graph(store, [[1], [2, 1], [3, 1, 2]]) + >>> store[store[c3].parents[0]] == c1 + True + >>> store[store[c3].parents[1]] == c2 + True + + If not otherwise specified, commits will refer to the empty tree and have + commit times increasing in the same order as the commit spec. + + Args: + object_store: An ObjectStore to commit objects to. + commit_spec: An iterable of iterables of ints defining the commit + graph. Each entry defines one commit, and entries must be in + topological order. The first element of each entry is a commit number, + and the remaining elements are its parents. The commit numbers are only + meaningful for the call to make_commits; since real commit objects are + created, they will get created with real, opaque SHAs. + trees: An optional dict of commit number -> tree spec for building + trees for commits. The tree spec is an iterable of (path, blob, mode) + or (path, blob) entries; if mode is omitted, it defaults to the normal + file mode (0100644). + attrs: A dict of commit number -> (dict of attribute -> value) for + assigning additional values to the commits. + Returns: The list of commit objects created. + Raises: + ValueError: If an undefined commit identifier is listed as a parent. 
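+
+    A hypothetical trees/attrs call (values are illustrative):
+
+      build_commit_graph(store, [[1], [2, 1]],
+                         trees={1: [(b"a", blob)]},
+                         attrs={2: {"commit_time": 50}})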
+ """ + if trees is None: + trees = {} + if attrs is None: + attrs = {} + commit_time = 0 + nums = {} + commits = [] + + for commit in commit_spec: + commit_num = commit[0] + try: + parent_ids = [nums[pn] for pn in commit[1:]] + except KeyError as exc: + (missing_parent,) = exc.args + raise ValueError("Unknown parent %i" % missing_parent) from exc + + blobs = [] + for entry in trees.get(commit_num, []): + if len(entry) == 2: + path, blob = entry + entry = (path, blob, F) + path, blob, mode = entry + blobs.append((path, blob.id, mode)) + object_store.add_object(blob) + tree_id = commit_tree(object_store, blobs) + + commit_attrs = { + "message": ("Commit %i" % commit_num).encode("ascii"), + "parents": parent_ids, + "tree": tree_id, + "commit_time": commit_time, + } + commit_attrs.update(attrs.get(commit_num, {})) + commit_obj = make_commit(**commit_attrs) + + # By default, increment the time by a lot. Out-of-order commits should + # be closer together than this because their main cause is clock skew. + commit_time = commit_attrs["commit_time"] + 100 + nums[commit_num] = commit_obj.id + object_store.add_object(commit_obj) + commits.append(commit_obj) + + return commits + + +def setup_warning_catcher(): + """Wrap warnings.showwarning with code that records warnings.""" + + caught_warnings = [] + original_showwarning = warnings.showwarning + + def custom_showwarning(*args, **kwargs): + caught_warnings.append(args[0]) + + warnings.showwarning = custom_showwarning + + def restore_showwarning(): + warnings.showwarning = original_showwarning + + return caught_warnings, restore_showwarning diff --git a/env/lib/python3.8/site-packages/dulwich/walk.py b/env/lib/python3.8/site-packages/dulwich/walk.py new file mode 100644 index 00000000..5a38f717 --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/walk.py @@ -0,0 +1,439 @@ +# walk.py -- General implementation of walking commits and their contents. +# Copyright (C) 2010 Google, Inc. +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. +# + +"""General implementation of walking commits and their contents.""" + + +import collections +import heapq +from itertools import chain +from typing import List, Tuple, Set + +from dulwich.diff_tree import ( + RENAME_CHANGE_TYPES, + tree_changes, + tree_changes_for_merge, + RenameDetector, +) +from dulwich.errors import ( + MissingCommitError, +) +from dulwich.objects import ( + Commit, + Tag, + ObjectID, +) + +ORDER_DATE = "date" +ORDER_TOPO = "topo" + +ALL_ORDERS = (ORDER_DATE, ORDER_TOPO) + +# Maximum number of commits to walk past a commit time boundary. 
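+# Commits near the boundary can appear out of order relative to their
+# parents (mainly because of clock skew), so the walk examines a few extra
+# commits before stopping.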
+_MAX_EXTRA_COMMITS = 5
+
+
+class WalkEntry(object):
+    """Object encapsulating a single result from a walk."""
+
+    def __init__(self, walker, commit):
+        self.commit = commit
+        self._store = walker.store
+        self._get_parents = walker.get_parents
+        self._changes = {}
+        self._rename_detector = walker.rename_detector
+
+    def changes(self, path_prefix=None):
+        """Get the tree changes for this entry.
+
+        Args:
+          path_prefix: Portion of the path in the repository to
+            use to filter changes. Must be a directory name. Must be
+            a full, valid, path reference (no partial names or wildcards).
+        Returns: For commits with up to one parent, a list of TreeChange
+            objects; if the commit has no parents, these will be relative to
+            the empty tree. For merge commits, a list of lists of TreeChange
+            objects; see dulwich.diff.tree_changes_for_merge.
+        """
+        cached = self._changes.get(path_prefix)
+        if cached is None:
+            commit = self.commit
+            if not self._get_parents(commit):
+                changes_func = tree_changes
+                parent = None
+            elif len(self._get_parents(commit)) == 1:
+                changes_func = tree_changes
+                parent = self._store[self._get_parents(commit)[0]].tree
+                if path_prefix:
+                    mode, subtree_sha = parent.lookup_path(
+                        self._store.__getitem__,
+                        path_prefix,
+                    )
+                    parent = self._store[subtree_sha]
+            else:
+                changes_func = tree_changes_for_merge
+                parent = [self._store[p].tree for p in self._get_parents(commit)]
+                if path_prefix:
+                    parent_trees = [self._store[p] for p in parent]
+                    parent = []
+                    for p in parent_trees:
+                        try:
+                            mode, st = p.lookup_path(
+                                self._store.__getitem__,
+                                path_prefix,
+                            )
+                        except KeyError:
+                            pass
+                        else:
+                            parent.append(st)
+            commit_tree_sha = commit.tree
+            if path_prefix:
+                commit_tree = self._store[commit_tree_sha]
+                mode, commit_tree_sha = commit_tree.lookup_path(
+                    self._store.__getitem__,
+                    path_prefix,
+                )
+            cached = list(
+                changes_func(
+                    self._store,
+                    parent,
+                    commit_tree_sha,
+                    rename_detector=self._rename_detector,
+                )
+            )
+            self._changes[path_prefix] = cached
+        return self._changes[path_prefix]
+
+    def __repr__(self):
+        return "<WalkEntry commit=%s, changes=%r>" % (
+            self.commit.id,
+            self.changes(),
+        )
+
+
+class _CommitTimeQueue(object):
+    """Priority queue of WalkEntry objects by commit time."""
+
+    def __init__(self, walker: "Walker"):
+        self._walker = walker
+        self._store = walker.store
+        self._get_parents = walker.get_parents
+        self._excluded = walker.excluded
+        self._pq: List[Tuple[int, Commit]] = []
+        self._pq_set: Set[ObjectID] = set()
+        self._seen: Set[ObjectID] = set()
+        self._done: Set[ObjectID] = set()
+        self._min_time = walker.since
+        self._last = None
+        self._extra_commits_left = _MAX_EXTRA_COMMITS
+        self._is_finished = False
+
+        for commit_id in chain(walker.include, walker.excluded):
+            self._push(commit_id)
+
+    def _push(self, object_id: bytes):
+        try:
+            obj = self._store[object_id]
+        except KeyError as exc:
+            raise MissingCommitError(object_id) from exc
+        if isinstance(obj, Tag):
+            self._push(obj.object[1])
+            return
+        # TODO(jelmer): What to do about non-Commit and non-Tag objects?
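+        # Anything else is assumed to be a Commit here: tags were peeled
+        # above, and other object types would fail below on .commit_time.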
+ commit = obj + if commit.id not in self._pq_set and commit.id not in self._done: + heapq.heappush(self._pq, (-commit.commit_time, commit)) + self._pq_set.add(commit.id) + self._seen.add(commit.id) + + def _exclude_parents(self, commit): + excluded = self._excluded + seen = self._seen + todo = [commit] + while todo: + commit = todo.pop() + for parent in self._get_parents(commit): + if parent not in excluded and parent in seen: + # TODO: This is inefficient unless the object store does + # some caching (which DiskObjectStore currently does not). + # We could either add caching in this class or pass around + # parsed queue entry objects instead of commits. + todo.append(self._store[parent]) + excluded.add(parent) + + def next(self): + if self._is_finished: + return None + while self._pq: + _, commit = heapq.heappop(self._pq) + sha = commit.id + self._pq_set.remove(sha) + if sha in self._done: + continue + self._done.add(sha) + + for parent_id in self._get_parents(commit): + self._push(parent_id) + + reset_extra_commits = True + is_excluded = sha in self._excluded + if is_excluded: + self._exclude_parents(commit) + if self._pq and all(c.id in self._excluded for _, c in self._pq): + _, n = self._pq[0] + if self._last and n.commit_time >= self._last.commit_time: + # If the next commit is newer than the last one, we + # need to keep walking in case its parents (which we + # may not have seen yet) are excluded. This gives the + # excluded set a chance to "catch up" while the commit + # is still in the Walker's output queue. + reset_extra_commits = True + else: + reset_extra_commits = False + + if self._min_time is not None and commit.commit_time < self._min_time: + # We want to stop walking at min_time, but commits at the + # boundary may be out of order with respect to their parents. + # So we walk _MAX_EXTRA_COMMITS more commits once we hit this + # boundary. + reset_extra_commits = False + + if reset_extra_commits: + # We're not at a boundary, so reset the counter. + self._extra_commits_left = _MAX_EXTRA_COMMITS + else: + self._extra_commits_left -= 1 + if not self._extra_commits_left: + break + + if not is_excluded: + self._last = commit + return WalkEntry(self._walker, commit) + self._is_finished = True + return None + + __next__ = next + + +class Walker(object): + """Object for performing a walk of commits in a store. + + Walker objects are initialized with a store and other options and can then + be treated as iterators of Commit objects. + """ + + def __init__( + self, + store, + include, + exclude=None, + order=ORDER_DATE, + reverse=False, + max_entries=None, + paths=None, + rename_detector=None, + follow=False, + since=None, + until=None, + get_parents=lambda commit: commit.parents, + queue_cls=_CommitTimeQueue, + ): + """Constructor. + + Args: + store: ObjectStore instance for looking up objects. + include: Iterable of SHAs of commits to include along with their + ancestors. + exclude: Iterable of SHAs of commits to exclude along with their + ancestors, overriding includes. + order: ORDER_* constant specifying the order of results. + Anything other than ORDER_DATE may result in O(n) memory usage. + reverse: If True, reverse the order of output, requiring O(n) + memory. + max_entries: The maximum number of entries to yield, or None for + no limit. + paths: Iterable of file or subtree paths to show entries for. + rename_detector: diff.RenameDetector object for detecting + renames. + follow: If True, follow path across renames/copies. Forces a + default rename_detector. 
+ since: Timestamp to list commits after. + until: Timestamp to list commits before. + get_parents: Method to retrieve the parents of a commit + queue_cls: A class to use for a queue of commits, supporting the + iterator protocol. The constructor takes a single argument, the + Walker. + """ + # Note: when adding arguments to this method, please also update + # dulwich.repo.BaseRepo.get_walker + if order not in ALL_ORDERS: + raise ValueError("Unknown walk order %s" % order) + self.store = store + if isinstance(include, bytes): + # TODO(jelmer): Really, this should require a single type. + # Print deprecation warning here? + include = [include] + self.include = include + self.excluded = set(exclude or []) + self.order = order + self.reverse = reverse + self.max_entries = max_entries + self.paths = paths and set(paths) or None + if follow and not rename_detector: + rename_detector = RenameDetector(store) + self.rename_detector = rename_detector + self.get_parents = get_parents + self.follow = follow + self.since = since + self.until = until + + self._num_entries = 0 + self._queue = queue_cls(self) + self._out_queue = collections.deque() + + def _path_matches(self, changed_path): + if changed_path is None: + return False + for followed_path in self.paths: + if changed_path == followed_path: + return True + if ( + changed_path.startswith(followed_path) + and changed_path[len(followed_path)] == b"/"[0] + ): + return True + return False + + def _change_matches(self, change): + if not change: + return False + + old_path = change.old.path + new_path = change.new.path + if self._path_matches(new_path): + if self.follow and change.type in RENAME_CHANGE_TYPES: + self.paths.add(old_path) + self.paths.remove(new_path) + return True + elif self._path_matches(old_path): + return True + return False + + def _should_return(self, entry): + """Determine if a walk entry should be returned.. + + Args: + entry: The WalkEntry to consider. + Returns: True if the WalkEntry should be returned by this walk, or + False otherwise (e.g. if it doesn't match any requested paths). + """ + commit = entry.commit + if self.since is not None and commit.commit_time < self.since: + return False + if self.until is not None and commit.commit_time > self.until: + return False + if commit.id in self.excluded: + return False + + if self.paths is None: + return True + + if len(self.get_parents(commit)) > 1: + for path_changes in entry.changes(): + # For merge commits, only include changes with conflicts for + # this path. Since a rename conflict may include different + # old.paths, we have to check all of them. + for change in path_changes: + if self._change_matches(change): + return True + else: + for change in entry.changes(): + if self._change_matches(change): + return True + return None + + def _next(self): + max_entries = self.max_entries + while max_entries is None or self._num_entries < max_entries: + entry = next(self._queue) + if entry is not None: + self._out_queue.append(entry) + if entry is None or len(self._out_queue) > _MAX_EXTRA_COMMITS: + if not self._out_queue: + return None + entry = self._out_queue.popleft() + if self._should_return(entry): + self._num_entries += 1 + return entry + return None + + def _reorder(self, results): + """Possibly reorder a results iterator. + + Args: + results: An iterator of WalkEntry objects, in the order returned + from the queue_cls. + Returns: An iterator or list of WalkEntry objects, in the order + required by the Walker. 
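+
+    For example, with order=ORDER_TOPO and reverse=True, the entries are
+    first reordered topologically and the resulting sequence is then
+    reversed as a whole.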
+ """ + if self.order == ORDER_TOPO: + results = _topo_reorder(results, self.get_parents) + if self.reverse: + results = reversed(list(results)) + return results + + def __iter__(self): + return iter(self._reorder(iter(self._next, None))) + + +def _topo_reorder(entries, get_parents=lambda commit: commit.parents): + """Reorder an iterable of entries topologically. + + This works best assuming the entries are already in almost-topological + order, e.g. in commit time order. + + Args: + entries: An iterable of WalkEntry objects. + get_parents: Optional function for getting the parents of a commit. + Returns: iterator over WalkEntry objects from entries in FIFO order, except + where a parent would be yielded before any of its children. + """ + todo = collections.deque() + pending = {} + num_children = collections.defaultdict(int) + for entry in entries: + todo.append(entry) + for p in get_parents(entry.commit): + num_children[p] += 1 + + while todo: + entry = todo.popleft() + commit = entry.commit + commit_id = commit.id + if num_children[commit_id]: + pending[commit_id] = entry + continue + for parent_id in get_parents(commit): + num_children[parent_id] -= 1 + if not num_children[parent_id]: + parent_entry = pending.pop(parent_id, None) + if parent_entry: + todo.appendleft(parent_entry) + yield entry diff --git a/env/lib/python3.8/site-packages/dulwich/web.py b/env/lib/python3.8/site-packages/dulwich/web.py new file mode 100644 index 00000000..2eef721a --- /dev/null +++ b/env/lib/python3.8/site-packages/dulwich/web.py @@ -0,0 +1,623 @@ +# web.py -- WSGI smart-http server +# Copyright (C) 2010 Google, Inc. +# Copyright (C) 2012 Jelmer Vernooij +# +# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU +# General Public License as public by the Free Software Foundation; version 2.0 +# or (at your option) any later version. You can redistribute it and/or +# modify it under the terms of either of these two licenses. +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# You should have received a copy of the licenses; if not, see +# for a copy of the GNU General Public License +# and for a copy of the Apache +# License, Version 2.0. 
+#
+
+"""HTTP server for dulwich that implements the git smart HTTP protocol."""
+
+from io import BytesIO
+import shutil
+import tempfile
+import gzip
+import os
+import re
+import sys
+import time
+from typing import List, Tuple, Optional
+from wsgiref.simple_server import (
+    WSGIRequestHandler,
+    ServerHandler,
+    WSGIServer,
+    make_server,
+)
+
+from urllib.parse import parse_qs
+
+
+from dulwich import log_utils
+from dulwich.protocol import (
+    ReceivableProtocol,
+)
+from dulwich.repo import (
+    BaseRepo,
+    NotGitRepository,
+    Repo,
+)
+from dulwich.server import (
+    DictBackend,
+    DEFAULT_HANDLERS,
+    generate_info_refs,
+    generate_objects_info_packs,
+)
+
+
+logger = log_utils.getLogger(__name__)
+
+
+# HTTP error strings
+HTTP_OK = "200 OK"
+HTTP_NOT_FOUND = "404 Not Found"
+HTTP_FORBIDDEN = "403 Forbidden"
+HTTP_ERROR = "500 Internal Server Error"
+
+
+NO_CACHE_HEADERS = [
+    ("Expires", "Fri, 01 Jan 1980 00:00:00 GMT"),
+    ("Pragma", "no-cache"),
+    ("Cache-Control", "no-cache, max-age=0, must-revalidate"),
+]
+
+
+def cache_forever_headers(now=None):
+    if now is None:
+        now = time.time()
+    return [
+        ("Date", date_time_string(now)),
+        ("Expires", date_time_string(now + 31536000)),
+        ("Cache-Control", "public, max-age=31536000"),
+    ]
+
+
+def date_time_string(timestamp: Optional[float] = None) -> str:
+    # From BaseHTTPRequestHandler.date_time_string in BaseHTTPServer.py in the
+    # Python 2.6.5 standard library, with the following modifications:
+    # - Made a global rather than an instance method.
+    # - weekdayname and monthname are renamed and locals rather than class
+    #   variables.
+    # Copyright (c) 2001-2010 Python Software Foundation; All Rights Reserved
+    weekdays = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
+    months = [
+        None,
+        "Jan",
+        "Feb",
+        "Mar",
+        "Apr",
+        "May",
+        "Jun",
+        "Jul",
+        "Aug",
+        "Sep",
+        "Oct",
+        "Nov",
+        "Dec",
+    ]
+    if timestamp is None:
+        timestamp = time.time()
+    year, month, day, hh, mm, ss, wd = time.gmtime(timestamp)[:7]
+    return "%s, %02d %3s %4d %02d:%02d:%02d GMT" % (
+        weekdays[wd],
+        day,
+        months[month],
+        year,
+        hh,
+        mm,
+        ss,
+    )
+
+
+def url_prefix(mat) -> str:
+    """Extract the URL prefix from a regex match.
+
+    Args:
+      mat: A regex match object.
+    Returns: The URL prefix, defined as the text before the match in the
+        original string. Normalized to start with one leading slash and end
+        with zero.
+    """
+    return "/" + mat.string[: mat.start()].strip("/")
+
+
+def get_repo(backend, mat) -> BaseRepo:
+    """Get a Repo instance for the given backend and URL regex match."""
+    return backend.open_repository(url_prefix(mat))
+
+
+def send_file(req, f, content_type):
+    """Send a file-like object to the request output.
+
+    Args:
+      req: The HTTPGitRequest object to send output to.
+      f: An open file-like object to send; will be closed.
+      content_type: The MIME type for the file.
+    Returns: Iterator over the contents of the file, as chunks.
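+
+    Example (mirroring get_text_file below):
+        send_file(req, get_repo(backend, mat).get_named_file(path), "text/plain")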
+ """ + if f is None: + yield req.not_found("File not found") + return + try: + req.respond(HTTP_OK, content_type) + while True: + data = f.read(10240) + if not data: + break + yield data + except IOError: + yield req.error("Error reading file") + finally: + f.close() + + +def _url_to_path(url): + return url.replace("/", os.path.sep) + + +def get_text_file(req, backend, mat): + req.nocache() + path = _url_to_path(mat.group()) + logger.info("Sending plain text file %s", path) + return send_file(req, get_repo(backend, mat).get_named_file(path), "text/plain") + + +def get_loose_object(req, backend, mat): + sha = (mat.group(1) + mat.group(2)).encode("ascii") + logger.info("Sending loose object %s", sha) + object_store = get_repo(backend, mat).object_store + if not object_store.contains_loose(sha): + yield req.not_found("Object not found") + return + try: + data = object_store[sha].as_legacy_object() + except IOError: + yield req.error("Error reading object") + return + req.cache_forever() + req.respond(HTTP_OK, "application/x-git-loose-object") + yield data + + +def get_pack_file(req, backend, mat): + req.cache_forever() + path = _url_to_path(mat.group()) + logger.info("Sending pack file %s", path) + return send_file( + req, + get_repo(backend, mat).get_named_file(path), + "application/x-git-packed-objects", + ) + + +def get_idx_file(req, backend, mat): + req.cache_forever() + path = _url_to_path(mat.group()) + logger.info("Sending pack file %s", path) + return send_file( + req, + get_repo(backend, mat).get_named_file(path), + "application/x-git-packed-objects-toc", + ) + + +def get_info_refs(req, backend, mat): + params = parse_qs(req.environ["QUERY_STRING"]) + service = params.get("service", [None])[0] + try: + repo = get_repo(backend, mat) + except NotGitRepository as e: + yield req.not_found(str(e)) + return + if service and not req.dumb: + handler_cls = req.handlers.get(service.encode("ascii"), None) + if handler_cls is None: + yield req.forbidden("Unsupported service") + return + req.nocache() + write = req.respond(HTTP_OK, "application/x-%s-advertisement" % service) + proto = ReceivableProtocol(BytesIO().read, write) + handler = handler_cls( + backend, + [url_prefix(mat)], + proto, + stateless_rpc=True, + advertise_refs=True, + ) + handler.proto.write_pkt_line(b"# service=" + service.encode("ascii") + b"\n") + handler.proto.write_pkt_line(None) + handler.handle() + else: + # non-smart fallback + # TODO: select_getanyfile() (see http-backend.c) + req.nocache() + req.respond(HTTP_OK, "text/plain") + logger.info("Emulating dumb info/refs") + for text in generate_info_refs(repo): + yield text + + +def get_info_packs(req, backend, mat): + req.nocache() + req.respond(HTTP_OK, "text/plain") + logger.info("Emulating dumb info/packs") + return generate_objects_info_packs(get_repo(backend, mat)) + + +def _chunk_iter(f): + while True: + line = f.readline() + length = int(line.rstrip(), 16) + chunk = f.read(length + 2) + if length == 0: + break + yield chunk[:-2] + + +class ChunkReader(object): + + def __init__(self, f): + self._iter = _chunk_iter(f) + self._buffer = [] + + def read(self, n): + while sum(map(len, self._buffer)) < n: + try: + self._buffer.append(next(self._iter)) + except StopIteration: + break + f = b''.join(self._buffer) + ret = f[:n] + self._buffer = [f[n:]] + return ret + + +class _LengthLimitedFile(object): + """Wrapper class to limit the length of reads from a file-like object. 
+ + This is used to ensure EOF is read from the wsgi.input object once + Content-Length bytes are read. This behavior is required by the WSGI spec + but not implemented in wsgiref as of 2.5. + """ + + def __init__(self, input, max_bytes): + self._input = input + self._bytes_avail = max_bytes + + def read(self, size=-1): + if self._bytes_avail <= 0: + return b"" + if size == -1 or size > self._bytes_avail: + size = self._bytes_avail + self._bytes_avail -= size + return self._input.read(size) + + # TODO: support more methods as necessary + + +def handle_service_request(req, backend, mat): + service = mat.group().lstrip("/") + logger.info("Handling service request for %s", service) + handler_cls = req.handlers.get(service.encode("ascii"), None) + if handler_cls is None: + yield req.forbidden("Unsupported service") + return + try: + get_repo(backend, mat) + except NotGitRepository as e: + yield req.not_found(str(e)) + return + req.nocache() + write = req.respond(HTTP_OK, "application/x-%s-result" % service) + if req.environ.get('HTTP_TRANSFER_ENCODING') == 'chunked': + read = ChunkReader(req.environ["wsgi.input"]).read + else: + read = req.environ["wsgi.input"].read + proto = ReceivableProtocol(read, write) + # TODO(jelmer): Find a way to pass in repo, rather than having handler_cls + # reopen. + handler = handler_cls(backend, [url_prefix(mat)], proto, stateless_rpc=True) + handler.handle() + + +class HTTPGitRequest(object): + """Class encapsulating the state of a single git HTTP request. + + Attributes: + environ: the WSGI environment for the request. + """ + + def __init__(self, environ, start_response, dumb: bool = False, handlers=None): + self.environ = environ + self.dumb = dumb + self.handlers = handlers + self._start_response = start_response + self._cache_headers = [] # type: List[Tuple[str, str]] + self._headers = [] # type: List[Tuple[str, str]] + + def add_header(self, name, value): + """Add a header to the response.""" + self._headers.append((name, value)) + + def respond( + self, + status: str = HTTP_OK, + content_type: Optional[str] = None, + headers: Optional[List[Tuple[str, str]]] = None, + ): + """Begin a response with the given status and other headers.""" + if headers: + self._headers.extend(headers) + if content_type: + self._headers.append(("Content-Type", content_type)) + self._headers.extend(self._cache_headers) + + return self._start_response(status, self._headers) + + def not_found(self, message: str) -> bytes: + """Begin a HTTP 404 response and return the text of a message.""" + self._cache_headers = [] + logger.info("Not found: %s", message) + self.respond(HTTP_NOT_FOUND, "text/plain") + return message.encode("ascii") + + def forbidden(self, message: str) -> bytes: + """Begin a HTTP 403 response and return the text of a message.""" + self._cache_headers = [] + logger.info("Forbidden: %s", message) + self.respond(HTTP_FORBIDDEN, "text/plain") + return message.encode("ascii") + + def error(self, message: str) -> bytes: + """Begin a HTTP 500 response and return the text of a message.""" + self._cache_headers = [] + logger.error("Error: %s", message) + self.respond(HTTP_ERROR, "text/plain") + return message.encode("ascii") + + def nocache(self) -> None: + """Set the response to never be cached by the client.""" + self._cache_headers = NO_CACHE_HEADERS + + def cache_forever(self) -> None: + """Set the response to be cached forever by the client.""" + self._cache_headers = cache_forever_headers() + + +class HTTPGitApplication(object): + """Class encapsulating the state of 
a git WSGI application. + + Attributes: + backend: the Backend object backing this application + """ + + services = { + ("GET", re.compile("/HEAD$")): get_text_file, + ("GET", re.compile("/info/refs$")): get_info_refs, + ("GET", re.compile("/objects/info/alternates$")): get_text_file, + ("GET", re.compile("/objects/info/http-alternates$")): get_text_file, + ("GET", re.compile("/objects/info/packs$")): get_info_packs, + ( + "GET", + re.compile("/objects/([0-9a-f]{2})/([0-9a-f]{38})$"), + ): get_loose_object, + ( + "GET", + re.compile("/objects/pack/pack-([0-9a-f]{40})\\.pack$"), + ): get_pack_file, + ( + "GET", + re.compile("/objects/pack/pack-([0-9a-f]{40})\\.idx$"), + ): get_idx_file, + ("POST", re.compile("/git-upload-pack$")): handle_service_request, + ("POST", re.compile("/git-receive-pack$")): handle_service_request, + } + + def __init__(self, backend, dumb: bool = False, handlers=None, fallback_app=None): + self.backend = backend + self.dumb = dumb + self.handlers = dict(DEFAULT_HANDLERS) + self.fallback_app = fallback_app + if handlers is not None: + self.handlers.update(handlers) + + def __call__(self, environ, start_response): + path = environ["PATH_INFO"] + method = environ["REQUEST_METHOD"] + req = HTTPGitRequest( + environ, start_response, dumb=self.dumb, handlers=self.handlers + ) + # environ['QUERY_STRING'] has qs args + handler = None + for smethod, spath in self.services.keys(): + if smethod != method: + continue + mat = spath.search(path) + if mat: + handler = self.services[smethod, spath] + break + + if handler is None: + if self.fallback_app is not None: + return self.fallback_app(environ, start_response) + else: + return [req.not_found("Sorry, that method is not supported")] + + return handler(req, self.backend, mat) + + +class GunzipFilter(object): + """WSGI middleware that unzips gzip-encoded requests before + passing on to the underlying application. + """ + + def __init__(self, application): + self.app = application + + def __call__(self, environ, start_response): + if environ.get("HTTP_CONTENT_ENCODING", "") == "gzip": + try: + environ["wsgi.input"].tell() + wsgi_input = environ["wsgi.input"] + except (AttributeError, IOError, NotImplementedError): + # The gzip implementation in the standard library of Python 2.x + # requires working '.seek()' and '.tell()' methods on the input + # stream. Read the data into a temporary file to work around + # this limitation. + wsgi_input = tempfile.SpooledTemporaryFile(16 * 1024 * 1024) + shutil.copyfileobj(environ["wsgi.input"], wsgi_input) + wsgi_input.seek(0) + + environ["wsgi.input"] = gzip.GzipFile( + filename=None, fileobj=wsgi_input, mode="r" + ) + del environ["HTTP_CONTENT_ENCODING"] + if "CONTENT_LENGTH" in environ: + del environ["CONTENT_LENGTH"] + + return self.app(environ, start_response) + + +class LimitedInputFilter(object): + """WSGI middleware that limits the input length of a request to that + specified in Content-Length. + """ + + def __init__(self, application): + self.app = application + + def __call__(self, environ, start_response): + # This is not necessary if this app is run from a conforming WSGI + # server. Unfortunately, there's no way to tell that at this point. 
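+        # (The WSGI spec requires reads past Content-Length to return EOF,
+        # but wsgiref does not enforce that; see _LengthLimitedFile above.)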
+ # TODO: git may used HTTP/1.1 chunked encoding instead of specifying + # content-length + content_length = environ.get("CONTENT_LENGTH", "") + if content_length: + environ["wsgi.input"] = _LengthLimitedFile( + environ["wsgi.input"], int(content_length) + ) + return self.app(environ, start_response) + + +def make_wsgi_chain(*args, **kwargs): + """Factory function to create an instance of HTTPGitApplication, + correctly wrapped with needed middleware. + """ + app = HTTPGitApplication(*args, **kwargs) + wrapped_app = LimitedInputFilter(GunzipFilter(app)) + return wrapped_app + + +class ServerHandlerLogger(ServerHandler): + """ServerHandler that uses dulwich's logger for logging exceptions.""" + + def log_exception(self, exc_info): + logger.exception( + "Exception happened during processing of request", + exc_info=exc_info, + ) + + def log_message(self, format, *args): + logger.info(format, *args) + + def log_error(self, *args): + logger.error(*args) + + +class WSGIRequestHandlerLogger(WSGIRequestHandler): + """WSGIRequestHandler that uses dulwich's logger for logging exceptions.""" + + def log_exception(self, exc_info): + logger.exception( + "Exception happened during processing of request", + exc_info=exc_info, + ) + + def log_message(self, format, *args): + logger.info(format, *args) + + def log_error(self, *args): + logger.error(*args) + + def handle(self): + """Handle a single HTTP request""" + + self.raw_requestline = self.rfile.readline() + if not self.parse_request(): # An error code has been sent, just exit + return + + handler = ServerHandlerLogger( + self.rfile, self.wfile, self.get_stderr(), self.get_environ() + ) + handler.request_handler = self # backpointer for logging + handler.run(self.server.get_app()) + + +class WSGIServerLogger(WSGIServer): + def handle_error(self, request, client_address): + """Handle an error. 
""" + logger.exception( + "Exception happened during processing of request from %s" + % str(client_address) + ) + + +def main(argv=sys.argv): + """Entry point for starting an HTTP git server.""" + import optparse + + parser = optparse.OptionParser() + parser.add_option( + "-l", + "--listen_address", + dest="listen_address", + default="localhost", + help="Binding IP address.", + ) + parser.add_option( + "-p", + "--port", + dest="port", + type=int, + default=8000, + help="Port to listen on.", + ) + options, args = parser.parse_args(argv) + + if len(args) > 1: + gitdir = args[1] + else: + gitdir = os.getcwd() + + log_utils.default_logging_config() + backend = DictBackend({"/": Repo(gitdir)}) + app = make_wsgi_chain(backend) + server = make_server( + options.listen_address, + options.port, + app, + handler_class=WSGIRequestHandlerLogger, + server_class=WSGIServerLogger, + ) + logger.info( + "Listening for HTTP connections on %s:%d", + options.listen_address, + options.port, + ) + server.serve_forever() + + +if __name__ == "__main__": + main() diff --git a/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/LICENSE new file mode 100644 index 00000000..474479a2 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/LICENSE @@ -0,0 +1,24 @@ +"python-ecdsa" Copyright (c) 2010 Brian Warner + +Portions written in 2005 by Peter Pearson and placed in the public domain. + +Permission is hereby granted, free of charge, to any person +obtaining a copy of this software and associated documentation +files (the "Software"), to deal in the Software without +restriction, including without limitation the rights to use, +copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the +Software is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES +OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT +HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING +FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR +OTHER DEALINGS IN THE SOFTWARE. 
diff --git a/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/METADATA b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/METADATA new file mode 100644 index 00000000..fa1c5fee --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/METADATA @@ -0,0 +1,675 @@ +Metadata-Version: 2.1 +Name: ecdsa +Version: 0.18.0 +Summary: ECDSA cryptographic signature library (pure python) +Home-page: http://github.com/tlsfuzzer/python-ecdsa +Author: Brian Warner +Author-email: warner@lothar.com +License: MIT +Platform: UNKNOWN +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.6 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Requires-Python: >=2.6, !=3.0.*, !=3.1.*, !=3.2.* +Description-Content-Type: text/markdown +License-File: LICENSE +Requires-Dist: six (>=1.9.0) +Provides-Extra: gmpy +Requires-Dist: gmpy ; extra == 'gmpy' +Provides-Extra: gmpy2 +Requires-Dist: gmpy2 ; extra == 'gmpy2' + +# Pure-Python ECDSA and ECDH + +[![Build Status](https://github.com/tlsfuzzer/python-ecdsa/workflows/GitHub%20CI/badge.svg?branch=master)](https://github.com/tlsfuzzer/python-ecdsa/actions?query=workflow%3A%22GitHub+CI%22+branch%3Amaster) +[![Documentation Status](https://readthedocs.org/projects/ecdsa/badge/?version=latest)](https://ecdsa.readthedocs.io/en/latest/?badge=latest) +[![Coverage Status](https://coveralls.io/repos/github/tlsfuzzer/python-ecdsa/badge.svg?branch=master)](https://coveralls.io/github/tlsfuzzer/python-ecdsa?branch=master) +![condition coverage](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/tomato42/9b6ca1f3410207fbeca785a178781651/raw/python-ecdsa-condition-coverage.json) +[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/tlsfuzzer/python-ecdsa.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/tlsfuzzer/python-ecdsa/context:python) +[![Total alerts](https://img.shields.io/lgtm/alerts/g/tlsfuzzer/python-ecdsa.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/tlsfuzzer/python-ecdsa/alerts/) +[![Latest Version](https://img.shields.io/pypi/v/ecdsa.svg?style=flat)](https://pypi.python.org/pypi/ecdsa/) +![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg?style=flat) + + +This is an easy-to-use implementation of ECC (Elliptic Curve Cryptography) +with support for ECDSA (Elliptic Curve Digital Signature Algorithm), +EdDSA (Edwards-curve Digital Signature Algorithm) and ECDH +(Elliptic Curve Diffie-Hellman), implemented purely in Python, released under +the MIT license. With this library, you can quickly create key pairs (signing +key and verifying key), sign messages, and verify the signatures. You can +also agree on a shared secret key based on exchanged public keys. +The keys and signatures are very short, making them easy to handle and +incorporate into other protocols. 
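+
+A minimal shared-secret sketch (both parties are simulated locally here; in
+practice only the public keys are exchanged):
+
+```python
+from ecdsa import ECDH, NIST256p
+
+alice = ECDH(curve=NIST256p)
+alice.generate_private_key()
+bob = ECDH(curve=NIST256p)
+bob.generate_private_key()
+
+# Exchange public keys, then derive the same secret on both sides.
+alice.load_received_public_key(bob.get_public_key())
+bob.load_received_public_key(alice.get_public_key())
+assert alice.generate_sharedsecret_bytes() == bob.generate_sharedsecret_bytes()
+```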
+
+**NOTE: This library should not be used in production settings, see [Security](#Security) for more details.**
+
+## Features
+
+This library provides key generation, signing, verifying, and shared secret
+derivation for five
+popular NIST "Suite B" GF(p) (_prime field_) curves, with key lengths of 192,
+224, 256, 384, and 521 bits. The "short names" for these curves, as known by
+the OpenSSL tool (`openssl ecparam -list_curves`), are: `prime192v1`,
+`secp224r1`, `prime256v1`, `secp384r1`, and `secp521r1`. It includes the
+256-bit curve `secp256k1` used by Bitcoin. There is also support for the
+regular (non-twisted) variants of Brainpool curves from 160 to 512 bits. The
+"short names" of those curves are: `brainpoolP160r1`, `brainpoolP192r1`,
+`brainpoolP224r1`, `brainpoolP256r1`, `brainpoolP320r1`, `brainpoolP384r1`,
+`brainpoolP512r1`. A few of the small curves from the SEC standard are also
+included (mainly to speed up testing of the library); those are:
+`secp112r1`, `secp112r2`, `secp128r1`, and `secp160r1`.
+Key generation, signing, and verifying are also supported for the Ed25519 and
+Ed448 curves.
+No other curves are included, but it is not too hard to add support for more
+curves over prime fields.
+
+## Dependencies
+
+This library uses only Python and the 'six' package. It is compatible with
+Python 2.6, 2.7, and 3.3+. It also supports execution on alternative
+implementations like pypy and pypy3.
+
+If `gmpy2` or `gmpy` is installed, they will be used for faster arithmetic.
+Either of them can be installed after this library is installed;
+`python-ecdsa` will detect their presence on start-up and use them
+automatically.
+You should prefer `gmpy2` on Python 3 for optimal performance.
+
+To run the OpenSSL compatibility tests, the 'openssl' tool must be in your
+`PATH`. This release has been tested successfully against OpenSSL 0.9.8o,
+1.0.0a, 1.0.2f, 1.1.1d and 3.0.1 (among others).
+
+
+## Installation
+
+This library is available on PyPI; it's recommended to install it using `pip`:
+
+```
+pip install ecdsa
+```
+
+In case higher performance is wanted and using native code is not a problem,
+it's possible to specify installation together with `gmpy2`:
+
+```
+pip install ecdsa[gmpy2]
+```
+
+or (slower, legacy option):
+```
+pip install ecdsa[gmpy]
+```
+
+## Speed
+
+The following table shows how long this library takes to generate key pairs
+(`keygen`), to sign data (`sign`), to verify those signatures (`verify`),
+to derive a shared secret (`ecdh`), and
+to verify the signatures with no key-specific precomputation (`no PC verify`).
+All those values are in seconds.
+For convenience, the inverses of those values are also provided:
+how many keys per second can be generated (`keygen/s`), how many signatures
+can be made per second (`sign/s`), how many signatures can be verified
+per second (`verify/s`), how many shared secrets can be derived per second
+(`ecdh/s`), and how many signatures with no key-specific
+precomputation can be verified per second (`no PC verify/s`). The size of a
+raw signature (generally the smallest way a signature can be encoded) is also
+provided in the `siglen` column.
+Use `tox -e speed` to generate this table on your own computer.
+On an Intel Core i7 4790K @ 4.0GHz I'm getting the following performance: + +``` + siglen keygen keygen/s sign sign/s verify verify/s no PC verify no PC verify/s + NIST192p: 48 0.00032s 3134.06 0.00033s 2985.53 0.00063s 1598.36 0.00129s 774.43 + NIST224p: 56 0.00040s 2469.24 0.00042s 2367.88 0.00081s 1233.41 0.00170s 586.66 + NIST256p: 64 0.00051s 1952.73 0.00054s 1867.80 0.00098s 1021.86 0.00212s 471.27 + NIST384p: 96 0.00107s 935.92 0.00111s 904.23 0.00203s 491.77 0.00446s 224.00 + NIST521p: 132 0.00210s 475.52 0.00215s 464.16 0.00398s 251.28 0.00874s 114.39 + SECP256k1: 64 0.00052s 1921.54 0.00054s 1847.49 0.00105s 948.68 0.00210s 477.01 + BRAINPOOLP160r1: 40 0.00025s 4003.88 0.00026s 3845.12 0.00053s 1893.93 0.00105s 949.92 + BRAINPOOLP192r1: 48 0.00033s 3043.97 0.00034s 2975.98 0.00063s 1581.50 0.00135s 742.29 + BRAINPOOLP224r1: 56 0.00041s 2436.44 0.00043s 2315.51 0.00078s 1278.49 0.00180s 556.16 + BRAINPOOLP256r1: 64 0.00053s 1892.49 0.00054s 1846.24 0.00114s 875.64 0.00229s 437.25 + BRAINPOOLP320r1: 80 0.00073s 1361.26 0.00076s 1309.25 0.00143s 699.29 0.00322s 310.49 + BRAINPOOLP384r1: 96 0.00107s 931.29 0.00111s 901.80 0.00230s 434.19 0.00476s 210.20 + BRAINPOOLP512r1: 128 0.00207s 483.41 0.00212s 471.42 0.00425s 235.43 0.00912s 109.61 + SECP112r1: 28 0.00015s 6672.53 0.00016s 6440.34 0.00031s 3265.41 0.00056s 1774.20 + SECP112r2: 28 0.00015s 6697.11 0.00015s 6479.98 0.00028s 3524.72 0.00058s 1716.16 + SECP128r1: 32 0.00018s 5497.65 0.00019s 5272.89 0.00036s 2747.39 0.00072s 1396.16 + SECP160r1: 42 0.00025s 3949.32 0.00026s 3894.45 0.00046s 2153.85 0.00102s 985.07 + Ed25519: 64 0.00076s 1324.48 0.00042s 2405.01 0.00109s 918.05 0.00344s 290.50 + Ed448: 114 0.00176s 569.53 0.00115s 870.94 0.00282s 355.04 0.01024s 97.69 + + ecdh ecdh/s + NIST192p: 0.00104s 964.89 + NIST224p: 0.00134s 748.63 + NIST256p: 0.00170s 587.08 + NIST384p: 0.00352s 283.90 + NIST521p: 0.00717s 139.51 + SECP256k1: 0.00154s 648.40 + BRAINPOOLP160r1: 0.00082s 1220.70 + BRAINPOOLP192r1: 0.00105s 956.75 + BRAINPOOLP224r1: 0.00136s 734.52 + BRAINPOOLP256r1: 0.00178s 563.32 + BRAINPOOLP320r1: 0.00252s 397.23 + BRAINPOOLP384r1: 0.00376s 266.27 + BRAINPOOLP512r1: 0.00733s 136.35 + SECP112r1: 0.00046s 2180.40 + SECP112r2: 0.00045s 2229.14 + SECP128r1: 0.00054s 1868.15 + SECP160r1: 0.00080s 1243.98 +``` + +To test performance with `gmpy2` loaded, use `tox -e speedgmpy2`. 
+On the same machine I'm getting the following performance with `gmpy2`: +``` + siglen keygen keygen/s sign sign/s verify verify/s no PC verify no PC verify/s + NIST192p: 48 0.00017s 5933.40 0.00017s 5751.70 0.00032s 3125.28 0.00067s 1502.41 + NIST224p: 56 0.00021s 4782.87 0.00022s 4610.05 0.00040s 2487.04 0.00089s 1126.90 + NIST256p: 64 0.00023s 4263.98 0.00024s 4125.16 0.00045s 2200.88 0.00098s 1016.82 + NIST384p: 96 0.00041s 2449.54 0.00042s 2399.96 0.00083s 1210.57 0.00172s 581.43 + NIST521p: 132 0.00071s 1416.07 0.00072s 1389.81 0.00144s 692.93 0.00312s 320.40 + SECP256k1: 64 0.00024s 4245.05 0.00024s 4122.09 0.00045s 2206.40 0.00094s 1068.32 + BRAINPOOLP160r1: 40 0.00014s 6939.17 0.00015s 6681.55 0.00029s 3452.43 0.00057s 1769.81 + BRAINPOOLP192r1: 48 0.00017s 5920.05 0.00017s 5774.36 0.00034s 2979.00 0.00069s 1453.19 + BRAINPOOLP224r1: 56 0.00021s 4732.12 0.00022s 4622.65 0.00041s 2422.47 0.00087s 1149.87 + BRAINPOOLP256r1: 64 0.00024s 4233.02 0.00024s 4115.20 0.00047s 2143.27 0.00098s 1015.60 + BRAINPOOLP320r1: 80 0.00032s 3162.38 0.00032s 3077.62 0.00063s 1598.83 0.00136s 737.34 + BRAINPOOLP384r1: 96 0.00041s 2436.88 0.00042s 2395.62 0.00083s 1202.68 0.00178s 562.85 + BRAINPOOLP512r1: 128 0.00063s 1587.60 0.00064s 1558.83 0.00125s 799.96 0.00281s 355.83 + SECP112r1: 28 0.00009s 11118.66 0.00009s 10775.48 0.00018s 5456.00 0.00033s 3020.83 + SECP112r2: 28 0.00009s 11322.97 0.00009s 10857.71 0.00017s 5748.77 0.00032s 3094.28 + SECP128r1: 32 0.00010s 10078.39 0.00010s 9665.27 0.00019s 5200.58 0.00036s 2760.88 + SECP160r1: 42 0.00015s 6875.51 0.00015s 6647.35 0.00029s 3422.41 0.00057s 1768.35 + Ed25519: 64 0.00030s 3322.56 0.00018s 5568.63 0.00046s 2165.35 0.00153s 654.02 + Ed448: 114 0.00060s 1680.53 0.00039s 2567.40 0.00096s 1036.67 0.00350s 285.62 + + ecdh ecdh/s + NIST192p: 0.00050s 1985.70 + NIST224p: 0.00066s 1524.16 + NIST256p: 0.00071s 1413.07 + NIST384p: 0.00127s 788.89 + NIST521p: 0.00230s 434.85 + SECP256k1: 0.00071s 1409.95 + BRAINPOOLP160r1: 0.00042s 2374.65 + BRAINPOOLP192r1: 0.00051s 1960.01 + BRAINPOOLP224r1: 0.00066s 1518.37 + BRAINPOOLP256r1: 0.00071s 1399.90 + BRAINPOOLP320r1: 0.00100s 997.21 + BRAINPOOLP384r1: 0.00129s 777.51 + BRAINPOOLP512r1: 0.00210s 475.99 + SECP112r1: 0.00022s 4457.70 + SECP112r2: 0.00024s 4252.33 + SECP128r1: 0.00028s 3589.31 + SECP160r1: 0.00043s 2305.02 +``` + +(there's also `gmpy` version, execute it using `tox -e speedgmpy`) + +For comparison, a highly optimised implementation (including curve-specific +assembly for some curves), like the one in OpenSSL 1.1.1d, provides the +following performance numbers on the same machine. 
+Run `openssl speed ecdsa` and `openssl speed ecdh` to reproduce it: +``` + sign verify sign/s verify/s + 192 bits ecdsa (nistp192) 0.0002s 0.0002s 4785.6 5380.7 + 224 bits ecdsa (nistp224) 0.0000s 0.0001s 22475.6 9822.0 + 256 bits ecdsa (nistp256) 0.0000s 0.0001s 45069.6 14166.6 + 384 bits ecdsa (nistp384) 0.0008s 0.0006s 1265.6 1648.1 + 521 bits ecdsa (nistp521) 0.0003s 0.0005s 3753.1 1819.5 + 256 bits ecdsa (brainpoolP256r1) 0.0003s 0.0003s 2983.5 3333.2 + 384 bits ecdsa (brainpoolP384r1) 0.0008s 0.0007s 1258.8 1528.1 + 512 bits ecdsa (brainpoolP512r1) 0.0015s 0.0012s 675.1 860.1 + + sign verify sign/s verify/s + 253 bits EdDSA (Ed25519) 0.0000s 0.0001s 28217.9 10897.7 + 456 bits EdDSA (Ed448) 0.0003s 0.0005s 3926.5 2147.7 + + op op/s + 192 bits ecdh (nistp192) 0.0002s 4853.4 + 224 bits ecdh (nistp224) 0.0001s 15252.1 + 256 bits ecdh (nistp256) 0.0001s 18436.3 + 384 bits ecdh (nistp384) 0.0008s 1292.7 + 521 bits ecdh (nistp521) 0.0003s 2884.7 + 256 bits ecdh (brainpoolP256r1) 0.0003s 3066.5 + 384 bits ecdh (brainpoolP384r1) 0.0008s 1298.0 + 512 bits ecdh (brainpoolP512r1) 0.0014s 694.8 +``` + +Keys and signature can be serialized in different ways (see Usage, below). +For a NIST192p key, the three basic representations require strings of the +following lengths (in bytes): + + to_string: signkey= 24, verifykey= 48, signature=48 + compressed: signkey=n/a, verifykey= 25, signature=n/a + DER: signkey=106, verifykey= 80, signature=55 + PEM: signkey=278, verifykey=162, (no support for PEM signatures) + +## History + +In 2006, Peter Pearson announced his pure-python implementation of ECDSA in a +[message to sci.crypt][1], available from his [download site][2]. In 2010, +Brian Warner wrote a wrapper around this code, to make it a bit easier and +safer to use. In 2020, Hubert Kario included an implementation of elliptic +curve cryptography that uses Jacobian coordinates internally, improving +performance about 20-fold. You are looking at the README for this wrapper. + +[1]: http://www.derkeiler.com/Newsgroups/sci.crypt/2006-01/msg00651.html +[2]: http://webpages.charter.net/curryfans/peter/downloads.html + +## Testing + +To run the full test suite, do this: + + tox -e coverage + +On an Intel Core i7 4790K @ 4.0GHz, the tests take about 18 seconds to execute. +The test suite uses +[`hypothesis`](https://github.com/HypothesisWorks/hypothesis) so there is some +inherent variability in the test suite execution time. + +One part of `test_pyecdsa.py` and `test_ecdh.py` checks compatibility with +OpenSSL, by running the "openssl" CLI tool, make sure it's in your `PATH` if +you want to test compatibility with it (if OpenSSL is missing, too old, or +doesn't support all the curves supported in upstream releases you will see +skipped tests in the above `coverage` run). + +## Security + +This library was not designed with security in mind. If you are processing +data that needs to be protected we suggest you use a quality wrapper around +OpenSSL. [pyca/cryptography](https://cryptography.io) is one example of such +a wrapper. The primary use-case of this library is as a portable library for +interoperability testing and as a teaching tool. + +**This library does not protect against side-channel attacks.** + +Do not allow attackers to measure how long it takes you to generate a key pair +or sign a message. Do not allow attackers to run code on the same physical +machine when key pair generation or signing is taking place (this includes +virtual machines). 
+
+## History
+
+In 2006, Peter Pearson announced his pure-python implementation of ECDSA in a
+[message to sci.crypt][1], available from his [download site][2]. In 2010,
+Brian Warner wrote a wrapper around this code, to make it a bit easier and
+safer to use. In 2020, Hubert Kario included an implementation of elliptic
+curve cryptography that uses Jacobian coordinates internally, improving
+performance about 20-fold. You are looking at the README for this wrapper.
+
+[1]: http://www.derkeiler.com/Newsgroups/sci.crypt/2006-01/msg00651.html
+[2]: http://webpages.charter.net/curryfans/peter/downloads.html
+
+## Testing
+
+To run the full test suite, do this:
+
+    tox -e coverage
+
+On an Intel Core i7 4790K @ 4.0GHz, the tests take about 18 seconds to execute.
+The test suite uses
+[`hypothesis`](https://github.com/HypothesisWorks/hypothesis) so there is some
+inherent variability in the test suite execution time.
+
+One part of `test_pyecdsa.py` and `test_ecdh.py` checks compatibility with
+OpenSSL by running the `openssl` CLI tool; make sure it's in your `PATH` if
+you want to test compatibility with it. (If OpenSSL is missing, too old, or
+doesn't support all the curves supported in upstream releases, you will see
+skipped tests in the above `coverage` run.)
+
+## Security
+
+This library was not designed with security in mind. If you are processing
+data that needs to be protected we suggest you use a quality wrapper around
+OpenSSL. [pyca/cryptography](https://cryptography.io) is one example of such
+a wrapper. The primary use-case of this library is as a portable library for
+interoperability testing and as a teaching tool.
+
+**This library does not protect against side-channel attacks.**
+
+Do not allow attackers to measure how long it takes you to generate a key pair
+or sign a message. Do not allow attackers to run code on the same physical
+machine when key pair generation or signing is taking place (this includes
+virtual machines). Do not allow attackers to measure how much power your
+computer uses while generating the key pair or signing a message. Do not allow
+attackers to measure RF interference coming from your computer while generating
+a key pair or signing a message. Note: just loading the private key will cause
+key pair generation. Other operations or attack vectors may also be
+vulnerable to attacks. **For a sophisticated attacker, observing just one
+operation with a private key will be sufficient to completely
+reconstruct the private key.**
+
+Please also note that any pure-Python cryptographic library will be vulnerable
+to the same side-channel attacks. This is because Python does not provide
+side-channel secure primitives (with the exception of
+[`hmac.compare_digest()`][3]), making side-channel secure programming
+impossible.
+
+This library depends upon a strong source of random numbers. Do not use it on
+a system where `os.urandom()` does not provide cryptographically secure
+random numbers.
+
+[3]: https://docs.python.org/3/library/hmac.html#hmac.compare_digest
+
+## Usage
+
+You start by creating a `SigningKey`. You can use this to sign data, by passing
+in data as a byte string and getting back the signature (also a byte string).
+You can also ask a `SigningKey` to give you the corresponding `VerifyingKey`.
+The `VerifyingKey` can be used to verify a signature, by passing it both the
+data string and the signature byte string: it either returns `True` or raises
+`BadSignatureError`.
+
+```python
+from ecdsa import SigningKey
+sk = SigningKey.generate() # uses NIST192p
+vk = sk.verifying_key
+signature = sk.sign(b"message")
+assert vk.verify(signature, b"message")
+```
+
+Each `SigningKey`/`VerifyingKey` is associated with a specific curve, like
+NIST192p (the default one). Longer curves are more secure, but take longer to
+use, and result in longer keys and signatures.
+
+```python
+from ecdsa import SigningKey, NIST384p
+sk = SigningKey.generate(curve=NIST384p)
+vk = sk.verifying_key
+signature = sk.sign(b"message")
+assert vk.verify(signature, b"message")
+```
+
+The `SigningKey` can be serialized into several different formats: the shortest
+is to call `s=sk.to_string()`, and then re-create it with
+`SigningKey.from_string(s, curve)`. This short form does not record the
+curve, so you must be sure to pass to `from_string()` the same curve you used
+for the original key. The short form of a NIST192p-based signing key is just 24
+bytes long. If a point encoding is invalid or it does not lie on the specified
+curve, `from_string()` will raise `MalformedPointError`.
+
+```python
+from ecdsa import SigningKey, NIST384p
+sk = SigningKey.generate(curve=NIST384p)
+sk_string = sk.to_string()
+sk2 = SigningKey.from_string(sk_string, curve=NIST384p)
+print(sk_string.hex())
+print(sk2.to_string().hex())
+```
+
+Note: while the methods are called `to_string()`, the type they return is
+actually `bytes`; the "string" part is a leftover from Python 2.
+
+`sk.to_pem()` and `sk.to_der()` will serialize the signing key into the same
+formats that OpenSSL uses. The PEM file looks like the familiar ASCII-armored
+`"-----BEGIN EC PRIVATE KEY-----"` base64-encoded format, and the DER format
+is a shorter binary form of the same data.
+`SigningKey.from_pem()/.from_der()` will undo this serialization. These
+formats include the curve name, so you do not need to pass in a curve
+identifier to the deserializer.
+In case the file is malformed, `from_der()` and `from_pem()` will raise
+`UnexpectedDER` or `MalformedPointError`.
+
+```python
+from ecdsa import SigningKey, NIST384p
+sk = SigningKey.generate(curve=NIST384p)
+sk_pem = sk.to_pem()
+sk2 = SigningKey.from_pem(sk_pem)
+# sk and sk2 are the same key
+```
+
+Likewise, the `VerifyingKey` can be serialized in the same way:
+`vk.to_string()/VerifyingKey.from_string()`, `to_pem()/from_pem()`, and
+`to_der()/from_der()`. The same `curve=` argument is needed for
+`VerifyingKey.from_string()`.
+
+```python
+from ecdsa import SigningKey, VerifyingKey, NIST384p
+sk = SigningKey.generate(curve=NIST384p)
+vk = sk.verifying_key
+vk_string = vk.to_string()
+vk2 = VerifyingKey.from_string(vk_string, curve=NIST384p)
+# vk and vk2 are the same key
+
+from ecdsa import SigningKey, VerifyingKey, NIST384p
+sk = SigningKey.generate(curve=NIST384p)
+vk = sk.verifying_key
+vk_pem = vk.to_pem()
+vk2 = VerifyingKey.from_pem(vk_pem)
+# vk and vk2 are the same key
+```
+
+There are a couple of different ways to compute a signature. Fundamentally,
+ECDSA takes a number that represents the data being signed, and returns a
+pair of numbers that represent the signature. The `hashfunc=` argument to
+`sk.sign()` and `vk.verify()` is used to turn an arbitrary string into a
+fixed-length digest, which is then turned into a number that ECDSA can sign,
+and both sign and verify must use the same approach. The default value is
+`hashlib.sha1`, but if you use NIST256p or a longer curve, you can use
+`hashlib.sha256` instead.
+
+There are also multiple ways to represent a signature. The default
+`sk.sign()` and `vk.verify()` methods present it as a short string, for
+simplicity and minimal overhead. To use a different scheme, use the
+`sk.sign(sigencode=)` and `vk.verify(sigdecode=)` arguments. There are helper
+functions in the `ecdsa.util` module that can be useful here (a short example
+combining both arguments follows the seed-derivation snippet below).
+
+It is also possible to create a `SigningKey` from a "seed", which is
+deterministic. This can be used in protocols where you want to derive
+consistent signing keys from some other secret, for example when you want
+three separate keys and only want to store a single master secret. You should
+start with a uniformly-distributed unguessable seed with about `curve.baselen`
+bytes of entropy, and then use one of the helper functions in `ecdsa.util` to
+convert it into an integer in the correct range, and then finally pass it
+into `SigningKey.from_secret_exponent()`, like this:
+
+```python
+import os
+from ecdsa import NIST384p, SigningKey
+from ecdsa.util import randrange_from_seed__trytryagain
+
+def make_key(seed):
+    secexp = randrange_from_seed__trytryagain(seed, NIST384p.order)
+    return SigningKey.from_secret_exponent(secexp, curve=NIST384p)
+
+seed = os.urandom(NIST384p.baselen) # or other starting point
+sk1a = make_key(seed)
+sk1b = make_key(seed)
+# note: sk1a and sk1b are the same key
+assert sk1a.to_string() == sk1b.to_string()
+sk2 = make_key(b"2-" + seed)  # different key
+assert sk1a.to_string() != sk2.to_string()
+```
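+
+As mentioned above, a minimal sketch combining the `hashfunc=` argument with
+the `sigencode=`/`sigdecode=` arguments (the pairing of SHA-256 with the DER
+helpers, which also appear in the OpenSSL Compatibility section below, is
+illustrative, not required):
+
+```python
+import hashlib
+from ecdsa import SigningKey, NIST256p
+from ecdsa.util import sigencode_der, sigdecode_der
+
+sk = SigningKey.generate(curve=NIST256p)
+vk = sk.verifying_key
+# sign with SHA-256 and emit a DER-encoded signature
+sig = sk.sign(b"message", hashfunc=hashlib.sha256, sigencode=sigencode_der)
+# verification must use the same hash function and the matching decoder
+assert vk.verify(sig, b"message", hashfunc=hashlib.sha256,
+                 sigdecode=sigdecode_der)
+```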
+
+In case the application will verify a lot of signatures made with a single
+key, it's possible to precompute some of the internal values to make
+signature verification significantly faster. The break-even point occurs at
+about 100 signatures verified.
+
+To perform precomputation, you can call the `precompute()` method
+on a `VerifyingKey` instance:
+```python
+from ecdsa import SigningKey, NIST384p
+sk = SigningKey.generate(curve=NIST384p)
+vk = sk.verifying_key
+vk.precompute()
+signature = sk.sign(b"message")
+assert vk.verify(signature, b"message")
+```
+
+Once `precompute()` has been called, all signature verifications with this
+key will be faster to execute.
+
+## OpenSSL Compatibility
+
+To produce signatures that can be verified by OpenSSL tools, or to verify
+signatures that were produced by those tools, use:
+
+```python
+# openssl ecparam -name prime256v1 -genkey -out sk.pem
+# openssl ec -in sk.pem -pubout -out vk.pem
+# echo "data for signing" > data
+# openssl dgst -sha256 -sign sk.pem -out data.sig data
+# openssl dgst -sha256 -verify vk.pem -signature data.sig data
+# openssl dgst -sha256 -prverify sk.pem -signature data.sig data
+
+import hashlib
+from ecdsa import SigningKey, VerifyingKey
+from ecdsa.util import sigencode_der, sigdecode_der
+
+with open("vk.pem") as f:
+    vk = VerifyingKey.from_pem(f.read())
+
+with open("data", "rb") as f:
+    data = f.read()
+
+with open("data.sig", "rb") as f:
+    signature = f.read()
+
+assert vk.verify(signature, data, hashlib.sha256, sigdecode=sigdecode_der)
+
+with open("sk.pem") as f:
+    sk = SigningKey.from_pem(f.read(), hashlib.sha256)
+
+new_signature = sk.sign_deterministic(data, sigencode=sigencode_der)
+
+with open("data.sig2", "wb") as f:
+    f.write(new_signature)
+
+# openssl dgst -sha256 -verify vk.pem -signature data.sig2 data
+```
+
+Note: if compatibility with OpenSSL 1.0.0 or earlier is necessary, the
+`sigencode_string` and `sigdecode_string` functions from `ecdsa.util` can be
+used for writing and reading the signatures, respectively.
+
+The keys can also be written in a format that OpenSSL can handle:
+
+```python
+from ecdsa import SigningKey, VerifyingKey
+
+with open("sk.pem") as f:
+    sk = SigningKey.from_pem(f.read())
+with open("sk.pem", "wb") as f:
+    f.write(sk.to_pem())
+
+with open("vk.pem") as f:
+    vk = VerifyingKey.from_pem(f.read())
+with open("vk.pem", "wb") as f:
+    f.write(vk.to_pem())
+```
+
+## Entropy
+
+Creating a signing key with `SigningKey.generate()` requires some form of
+entropy (as opposed to
+`from_secret_exponent`/`from_string`/`from_der`/`from_pem`,
+which are deterministic and do not require an entropy source). The default
+source is `os.urandom()`, but you can pass any other function that behaves
+like `os.urandom` as the `entropy=` argument to do something different. This
+may be useful in unit tests, where you want to achieve repeatable results. The
+`ecdsa.util.PRNG` utility is handy here: it takes a seed and produces a strong
+pseudo-random stream from it:
+
+```python
+from ecdsa.util import PRNG
+from ecdsa import SigningKey
+rng1 = PRNG(b"seed")
+sk1 = SigningKey.generate(entropy=rng1)
+rng2 = PRNG(b"seed")
+sk2 = SigningKey.generate(entropy=rng2)
+# sk1 and sk2 are the same key
+```
+
+Likewise, ECDSA signature generation requires a random number, and each
+signature must use a different one (using the same number twice will
+immediately reveal the private signing key). The `sk.sign()` method takes an
+`entropy=` argument which behaves the same as `SigningKey.generate(entropy=)`.
+
+## Deterministic Signatures
+
+If you call `SigningKey.sign_deterministic(data)` instead of `.sign(data)`,
+the code will generate a deterministic signature instead of a random one.
+This uses the algorithm from RFC6979 to safely generate a unique `k` value,
+derived from the private key and the message being signed. Each time you sign
+the same message with the same key, you will get the same signature (using
+the same `k`).
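+
+A minimal sketch of the behaviour described above (SHA-256 is chosen here
+just for the example; `sign_deterministic()` accepts the same `hashfunc=`
+argument as `sign()`):
+
+```python
+import hashlib
+from ecdsa import SigningKey, NIST256p
+
+sk = SigningKey.generate(curve=NIST256p)
+# two deterministic signatures over the same message with the same key...
+sig1 = sk.sign_deterministic(b"message", hashfunc=hashlib.sha256)
+sig2 = sk.sign_deterministic(b"message", hashfunc=hashlib.sha256)
+# ...are byte-for-byte identical
+assert sig1 == sig2
+```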
+
+This may become the default in a future version, as it is not vulnerable to
+failures of the entropy source.
+
+## Examples
+
+Create a NIST192p key pair and immediately save both to disk:
+
+```python
+from ecdsa import SigningKey
+sk = SigningKey.generate()
+vk = sk.verifying_key
+with open("private.pem", "wb") as f:
+    f.write(sk.to_pem())
+with open("public.pem", "wb") as f:
+    f.write(vk.to_pem())
+```
+
+Load a signing key from disk, use it to sign a message (using SHA-1), and write
+the signature to disk:
+
+```python
+from ecdsa import SigningKey
+with open("private.pem") as f:
+    sk = SigningKey.from_pem(f.read())
+with open("message", "rb") as f:
+    message = f.read()
+sig = sk.sign(message)
+with open("signature", "wb") as f:
+    f.write(sig)
+```
+
+Load the verifying key, message, and signature from disk, and verify the
+signature (assuming SHA-1 hash):
+
+```python
+from ecdsa import VerifyingKey, BadSignatureError
+with open("public.pem") as f:
+    vk = VerifyingKey.from_pem(f.read())
+with open("message", "rb") as f:
+    message = f.read()
+with open("signature", "rb") as f:
+    sig = f.read()
+try:
+    vk.verify(sig, message)
+    print("good signature")
+except BadSignatureError:
+    print("BAD SIGNATURE")
+```
+
+Create a NIST521p key pair:
+
+```python
+from ecdsa import SigningKey, NIST521p
+sk = SigningKey.generate(curve=NIST521p)
+vk = sk.verifying_key
+```
+
+Create three independent signing keys from a master seed:
+
+```python
+import os
+from ecdsa import NIST192p, SigningKey
+from ecdsa.util import randrange_from_seed__trytryagain
+
+def make_key_from_seed(seed, curve=NIST192p):
+    secexp = randrange_from_seed__trytryagain(seed, curve.order)
+    return SigningKey.from_secret_exponent(secexp, curve)
+
+seed = os.urandom(NIST192p.baselen)  # or other starting point
+sk1 = make_key_from_seed(b"1:" + seed)
+sk2 = make_key_from_seed(b"2:" + seed)
+sk3 = make_key_from_seed(b"3:" + seed)
+```
+
+Load a verifying key from disk and print it using hex encoding in
+uncompressed and compressed formats (defined in the X9.62 and SEC1 standards):
+
+```python
+from ecdsa import VerifyingKey
+
+with open("public.pem") as f:
+    vk = VerifyingKey.from_pem(f.read())
+
+print("uncompressed: {0}".format(vk.to_string("uncompressed").hex()))
+print("compressed: {0}".format(vk.to_string("compressed").hex()))
+```
+
+Load a verifying key from a hex string in compressed format and output it
+uncompressed:
+
+```python
+from ecdsa import VerifyingKey, NIST256p
+
+comp_str = '022799c0d0ee09772fdd337d4f28dc155581951d07082fb19a38aa396b67e77759'
+vk = VerifyingKey.from_string(bytearray.fromhex(comp_str), curve=NIST256p)
+print(vk.to_string("uncompressed").hex())
+```
+
+ECDH key exchange with a remote party:
+
+```python
+from ecdsa import ECDH, NIST256p
+
+ecdh = ECDH(curve=NIST256p)
+ecdh.generate_private_key()
+local_public_key = ecdh.get_public_key()
+# send `local_public_key` to the remote party and receive
+# `remote_public_key` from the remote party
+with open("remote_public_key.pem") as e:
+    remote_public_key = e.read()
+ecdh.load_received_public_key_pem(remote_public_key)
+secret = ecdh.generate_sharedsecret_bytes()
+```
+
+
diff --git a/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/RECORD b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/RECORD
new file mode 100644
index 00000000..82289797
--- /dev/null
+++ 
b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/RECORD @@ -0,0 +1,64 @@ +ecdsa-0.18.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +ecdsa-0.18.0.dist-info/LICENSE,sha256=PsqYRXc9LluMydjBGdNF8ApIBuS9Zg1KPWzfnA6di7I,1147 +ecdsa-0.18.0.dist-info/METADATA,sha256=vesFVMWT6uSeOuNwGGxtBm8nm6GROqNNRO28jr8wWqM,29750 +ecdsa-0.18.0.dist-info/RECORD,, +ecdsa-0.18.0.dist-info/WHEEL,sha256=Z-nyYpwrcSqxfdux5Mbn_DQ525iP7J2DG3JgGvOYyTQ,110 +ecdsa-0.18.0.dist-info/top_level.txt,sha256=7ovPHfAPyTou19f8gOSbHm6B9dGjTibWolcCB7Zjovs,6 +ecdsa/__init__.py,sha256=m8jWFcZ9E-iiDqVaTy7ABtvEyTJlF58tZ45UAKj3UWo,1637 +ecdsa/__pycache__/__init__.cpython-38.pyc,, +ecdsa/__pycache__/_compat.cpython-38.pyc,, +ecdsa/__pycache__/_rwlock.cpython-38.pyc,, +ecdsa/__pycache__/_sha3.cpython-38.pyc,, +ecdsa/__pycache__/_version.cpython-38.pyc,, +ecdsa/__pycache__/curves.cpython-38.pyc,, +ecdsa/__pycache__/der.cpython-38.pyc,, +ecdsa/__pycache__/ecdh.cpython-38.pyc,, +ecdsa/__pycache__/ecdsa.cpython-38.pyc,, +ecdsa/__pycache__/eddsa.cpython-38.pyc,, +ecdsa/__pycache__/ellipticcurve.cpython-38.pyc,, +ecdsa/__pycache__/errors.cpython-38.pyc,, +ecdsa/__pycache__/keys.cpython-38.pyc,, +ecdsa/__pycache__/numbertheory.cpython-38.pyc,, +ecdsa/__pycache__/rfc6979.cpython-38.pyc,, +ecdsa/__pycache__/test_curves.cpython-38.pyc,, +ecdsa/__pycache__/test_der.cpython-38.pyc,, +ecdsa/__pycache__/test_ecdh.cpython-38.pyc,, +ecdsa/__pycache__/test_ecdsa.cpython-38.pyc,, +ecdsa/__pycache__/test_eddsa.cpython-38.pyc,, +ecdsa/__pycache__/test_ellipticcurve.cpython-38.pyc,, +ecdsa/__pycache__/test_jacobi.cpython-38.pyc,, +ecdsa/__pycache__/test_keys.cpython-38.pyc,, +ecdsa/__pycache__/test_malformed_sigs.cpython-38.pyc,, +ecdsa/__pycache__/test_numbertheory.cpython-38.pyc,, +ecdsa/__pycache__/test_pyecdsa.cpython-38.pyc,, +ecdsa/__pycache__/test_rw_lock.cpython-38.pyc,, +ecdsa/__pycache__/test_sha3.cpython-38.pyc,, +ecdsa/__pycache__/util.cpython-38.pyc,, +ecdsa/_compat.py,sha256=EhUF8-sFu1dKKGDibkmItbYm_nKoklSIBgkIburUoAg,4619 +ecdsa/_rwlock.py,sha256=CAwHp2V65ksI8B1UqY7EccK9LaUToiv6pDLVzm44eag,2849 +ecdsa/_sha3.py,sha256=DJs7QLmdkQMU35llyD8HQeAXNvf5sMcujO6oFdScIqI,4747 +ecdsa/_version.py,sha256=YZ3BGOHr1Ltse4LfX7F80J6qmKFA-NS-G2eYUuw2WnU,498 +ecdsa/curves.py,sha256=Na5rpnuADNvWkCTlUGbs9xwVogY6Vl2_3uNzpVGgxtE,14390 +ecdsa/der.py,sha256=OmfH8fojeqfJnasAt7I1P8j_qfwcwl-W4gDx1-cO8M0,14109 +ecdsa/ecdh.py,sha256=Tiirawt5xegVDrY9eS-ATvvfmTIznUyv5fy2k7VnzTk,11011 +ecdsa/ecdsa.py,sha256=LPRHHXNvGyZ67lyM6cWqUuhQceCwoktfPij20UTsdJo,24955 +ecdsa/eddsa.py,sha256=IzsGzoGAefcoYjF7DVjFkX5ZJqiK2LTOmAMe6wyf4UU,7170 +ecdsa/ellipticcurve.py,sha256=HwlFqrihf7Q2GQTLYQ0PeCwsgMhk_GlkZz3I2Xj4-eI,53625 +ecdsa/errors.py,sha256=b4mhnmIpRnEdHzbectHAA5F7O9MtSaI-fYoc13_vBxQ,130 +ecdsa/keys.py,sha256=plsomRHYrN3Z3iigUqKM5An8_6TDk1nJNpFv_cPGAvM,65124 +ecdsa/numbertheory.py,sha256=XWugBG59BxrpvjZm7Ytsnmkv770vK8IkClObThpbvAM,17479 +ecdsa/rfc6979.py,sha256=zwzo33lsZJA9r2dSf7HCliI_yIbw5cJ0Ek9tLdRRO40,2850 +ecdsa/test_curves.py,sha256=l5N-m4Yo5IAy4a8aJMsBamaSlLAfSoYjYqCj1HDEVpU,13081 +ecdsa/test_der.py,sha256=q2mr4HS_JyUxojWTSLJu-MQZiTwaCE7W_VG3rwwPEas,14956 +ecdsa/test_ecdh.py,sha256=hUJXTo_Cr9ji9-EpPvpQf7-TgB97WUj2tWcwU-LqCXc,15238 +ecdsa/test_ecdsa.py,sha256=1clBfDtA0zdF-13BoKXsJffL5K-iFutlMgov0kmGro0,23923 +ecdsa/test_eddsa.py,sha256=Vlv5J0C4zNJu5zzr756qpm0AJmARpzBKCWrhHaF_bR4,32615 +ecdsa/test_ellipticcurve.py,sha256=K6W_EQunOfE-RVSux6d1O7LXzSuAsVk9F1elwyY-rYA,6085 +ecdsa/test_jacobi.py,sha256=JDQeM_JKwPfwWBd4IgqtOp1rboeQNUIPPA334b-nmLQ,18388 
+ecdsa/test_keys.py,sha256=n_IYLxG4JwD84dYLDmRjV2A-NqsSrrR1P0XBJOCZsEI,32833 +ecdsa/test_malformed_sigs.py,sha256=hiV2vwzFrIdNIC-inYUJIKboyAAw2TKAIVXtFadyojg,10857 +ecdsa/test_numbertheory.py,sha256=g6hi7NZFKuMSAxJSAYW5sWM7ivSCiw8g5-PeoDyowgY,11619 +ecdsa/test_pyecdsa.py,sha256=WeRujEKpkZzHvEeXNnnhM1QdMH0Lxou7Bl4u8RXY-jM,82757 +ecdsa/test_rw_lock.py,sha256=byv0_FTM90cbuHPCI6__LeQJkHL_zYEeVYIBO8e2LLc,7021 +ecdsa/test_sha3.py,sha256=oKULy5KOTaXjpLXSyuHrB1wjPiQDxB6INp7Tf1EU8Ko,3022 +ecdsa/util.py,sha256=cOEN3_c8p79Dc8a-LcUQP2ctIsYky35jhSWc9hLP1qc,14618 diff --git a/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/WHEEL new file mode 100644 index 00000000..01b8fc7d --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.36.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/top_level.txt b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/top_level.txt new file mode 100644 index 00000000..aa5efdb5 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa-0.18.0.dist-info/top_level.txt @@ -0,0 +1 @@ +ecdsa diff --git a/env/lib/python3.8/site-packages/ecdsa/__init__.py b/env/lib/python3.8/site-packages/ecdsa/__init__.py new file mode 100644 index 00000000..ce8749aa --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/__init__.py @@ -0,0 +1,90 @@ +# while we don't use six in this file, we did bundle it for a long time, so +# keep as part of module in a virtual way (through __all__) +import six +from .keys import ( + SigningKey, + VerifyingKey, + BadSignatureError, + BadDigestError, + MalformedPointError, +) +from .curves import ( + NIST192p, + NIST224p, + NIST256p, + NIST384p, + NIST521p, + SECP256k1, + BRAINPOOLP160r1, + BRAINPOOLP192r1, + BRAINPOOLP224r1, + BRAINPOOLP256r1, + BRAINPOOLP320r1, + BRAINPOOLP384r1, + BRAINPOOLP512r1, + SECP112r1, + SECP112r2, + SECP128r1, + SECP160r1, + Ed25519, + Ed448, +) +from .ecdh import ( + ECDH, + NoKeyError, + NoCurveError, + InvalidCurveError, + InvalidSharedSecretError, +) +from .der import UnexpectedDER +from . import _version + +# This code comes from http://github.com/tlsfuzzer/python-ecdsa +__all__ = [ + "curves", + "der", + "ecdsa", + "ellipticcurve", + "keys", + "numbertheory", + "test_pyecdsa", + "util", + "six", +] + +_hush_pyflakes = [ + SigningKey, + VerifyingKey, + BadSignatureError, + BadDigestError, + MalformedPointError, + UnexpectedDER, + InvalidCurveError, + NoKeyError, + InvalidSharedSecretError, + ECDH, + NoCurveError, + NIST192p, + NIST224p, + NIST256p, + NIST384p, + NIST521p, + SECP256k1, + BRAINPOOLP160r1, + BRAINPOOLP192r1, + BRAINPOOLP224r1, + BRAINPOOLP256r1, + BRAINPOOLP320r1, + BRAINPOOLP384r1, + BRAINPOOLP512r1, + SECP112r1, + SECP112r2, + SECP128r1, + SECP160r1, + Ed25519, + Ed448, + six.b(""), +] +del _hush_pyflakes + +__version__ = _version.get_versions()["version"] diff --git a/env/lib/python3.8/site-packages/ecdsa/_compat.py b/env/lib/python3.8/site-packages/ecdsa/_compat.py new file mode 100644 index 00000000..83d41a5f --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/_compat.py @@ -0,0 +1,153 @@ +""" +Common functions for providing cross-python version compatibility. 
+""" +import sys +import re +import binascii +from six import integer_types + + +def str_idx_as_int(string, index): + """Take index'th byte from string, return as integer""" + val = string[index] + if isinstance(val, integer_types): + return val + return ord(val) + + +if sys.version_info < (3, 0): # pragma: no branch + import platform + + def normalise_bytes(buffer_object): + """Cast the input into array of bytes.""" + # flake8 runs on py3 where `buffer` indeed doesn't exist... + return buffer(buffer_object) # noqa: F821 + + def hmac_compat(ret): + return ret + + if ( + sys.version_info < (2, 7) + or sys.version_info < (2, 7, 4) + or platform.system() == "Java" + ): # pragma: no branch + + def remove_whitespace(text): + """Removes all whitespace from passed in string""" + return re.sub(r"\s+", "", text) + + def compat26_str(val): + return str(val) + + def bit_length(val): + if val == 0: + return 0 + return len(bin(val)) - 2 + + else: + + def remove_whitespace(text): + """Removes all whitespace from passed in string""" + return re.sub(r"\s+", "", text, flags=re.UNICODE) + + def compat26_str(val): + return val + + def bit_length(val): + """Return number of bits necessary to represent an integer.""" + return val.bit_length() + + def b2a_hex(val): + return binascii.b2a_hex(compat26_str(val)) + + def a2b_hex(val): + try: + return bytearray(binascii.a2b_hex(val)) + except Exception as e: + raise ValueError("base16 error: %s" % e) + + def bytes_to_int(val, byteorder): + """Convert bytes to an int.""" + if not val: + return 0 + if byteorder == "big": + return int(b2a_hex(val), 16) + if byteorder == "little": + return int(b2a_hex(val[::-1]), 16) + raise ValueError("Only 'big' and 'little' endian supported") + + def int_to_bytes(val, length=None, byteorder="big"): + """Return number converted to bytes""" + if length is None: + length = byte_length(val) + if byteorder == "big": + return bytearray( + (val >> i) & 0xFF for i in reversed(range(0, length * 8, 8)) + ) + if byteorder == "little": + return bytearray( + (val >> i) & 0xFF for i in range(0, length * 8, 8) + ) + raise ValueError("Only 'big' or 'little' endian supported") + +else: + if sys.version_info < (3, 4): # pragma: no branch + # on python 3.3 hmac.hmac.update() accepts only bytes, on newer + # versions it does accept memoryview() also + def hmac_compat(data): + if not isinstance(data, bytes): # pragma: no branch + return bytes(data) + return data + + def normalise_bytes(buffer_object): + """Cast the input into array of bytes.""" + if not buffer_object: + return b"" + return memoryview(buffer_object).cast("B") + + else: + + def hmac_compat(data): + return data + + def normalise_bytes(buffer_object): + """Cast the input into array of bytes.""" + return memoryview(buffer_object).cast("B") + + def compat26_str(val): + return val + + def remove_whitespace(text): + """Removes all whitespace from passed in string""" + return re.sub(r"\s+", "", text, flags=re.UNICODE) + + def a2b_hex(val): + try: + return bytearray(binascii.a2b_hex(bytearray(val, "ascii"))) + except Exception as e: + raise ValueError("base16 error: %s" % e) + + # pylint: disable=invalid-name + # pylint is stupid here and doesn't notice it's a function, not + # constant + bytes_to_int = int.from_bytes + # pylint: enable=invalid-name + + def bit_length(val): + """Return number of bits necessary to represent an integer.""" + return val.bit_length() + + def int_to_bytes(val, length=None, byteorder="big"): + """Convert integer to bytes.""" + if length is None: + length = 
byte_length(val) + # for gmpy we need to convert back to native int + if type(val) != int: + val = int(val) + return bytearray(val.to_bytes(length=length, byteorder=byteorder)) + + +def byte_length(val): + """Return number of bytes necessary to represent an integer.""" + length = bit_length(val) + return (length + 7) // 8 diff --git a/env/lib/python3.8/site-packages/ecdsa/_rwlock.py b/env/lib/python3.8/site-packages/ecdsa/_rwlock.py new file mode 100644 index 00000000..010e4981 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/_rwlock.py @@ -0,0 +1,86 @@ +# Copyright Mateusz Kobos, (c) 2011 +# https://code.activestate.com/recipes/577803-reader-writer-lock-with-priority-for-writers/ +# released under the MIT licence + +import threading + + +__author__ = "Mateusz Kobos" + + +class RWLock: + """ + Read-Write locking primitive + + Synchronization object used in a solution of so-called second + readers-writers problem. In this problem, many readers can simultaneously + access a share, and a writer has an exclusive access to this share. + Additionally, the following constraints should be met: + 1) no reader should be kept waiting if the share is currently opened for + reading unless a writer is also waiting for the share, + 2) no writer should be kept waiting for the share longer than absolutely + necessary. + + The implementation is based on [1, secs. 4.2.2, 4.2.6, 4.2.7] + with a modification -- adding an additional lock (C{self.__readers_queue}) + -- in accordance with [2]. + + Sources: + [1] A.B. Downey: "The little book of semaphores", Version 2.1.5, 2008 + [2] P.J. Courtois, F. Heymans, D.L. Parnas: + "Concurrent Control with 'Readers' and 'Writers'", + Communications of the ACM, 1971 (via [3]) + [3] http://en.wikipedia.org/wiki/Readers-writers_problem + """ + + def __init__(self): + """ + A lock giving an even higher priority to the writer in certain + cases (see [2] for a discussion). + """ + self.__read_switch = _LightSwitch() + self.__write_switch = _LightSwitch() + self.__no_readers = threading.Lock() + self.__no_writers = threading.Lock() + self.__readers_queue = threading.Lock() + + def reader_acquire(self): + self.__readers_queue.acquire() + self.__no_readers.acquire() + self.__read_switch.acquire(self.__no_writers) + self.__no_readers.release() + self.__readers_queue.release() + + def reader_release(self): + self.__read_switch.release(self.__no_writers) + + def writer_acquire(self): + self.__write_switch.acquire(self.__no_readers) + self.__no_writers.acquire() + + def writer_release(self): + self.__no_writers.release() + self.__write_switch.release(self.__no_readers) + + +class _LightSwitch: + """An auxiliary "light switch"-like object. The first thread turns on the + "switch", the last one turns it off (see [1, sec. 
4.2.2] for details).""" + + def __init__(self): + self.__counter = 0 + self.__mutex = threading.Lock() + + def acquire(self, lock): + self.__mutex.acquire() + self.__counter += 1 + if self.__counter == 1: + lock.acquire() + self.__mutex.release() + + def release(self, lock): + self.__mutex.acquire() + self.__counter -= 1 + if self.__counter == 0: + lock.release() + self.__mutex.release() diff --git a/env/lib/python3.8/site-packages/ecdsa/_sha3.py b/env/lib/python3.8/site-packages/ecdsa/_sha3.py new file mode 100644 index 00000000..2db00586 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/_sha3.py @@ -0,0 +1,181 @@ +""" +Implementation of the SHAKE-256 algorithm for Ed448 +""" + +try: + import hashlib + + hashlib.new("shake256").digest(64) + + def shake_256(msg, outlen): + return hashlib.new("shake256", msg).digest(outlen) + +except (TypeError, ValueError): + + from ._compat import bytes_to_int, int_to_bytes + + # From little endian. + def _from_le(s): + return bytes_to_int(s, byteorder="little") + + # Rotate a word x by b places to the left. + def _rol(x, b): + return ((x << b) | (x >> (64 - b))) & (2**64 - 1) + + # Do the SHA-3 state transform on state s. + def _sha3_transform(s): + ROTATIONS = [ + 0, + 1, + 62, + 28, + 27, + 36, + 44, + 6, + 55, + 20, + 3, + 10, + 43, + 25, + 39, + 41, + 45, + 15, + 21, + 8, + 18, + 2, + 61, + 56, + 14, + ] + PERMUTATION = [ + 1, + 6, + 9, + 22, + 14, + 20, + 2, + 12, + 13, + 19, + 23, + 15, + 4, + 24, + 21, + 8, + 16, + 5, + 3, + 18, + 17, + 11, + 7, + 10, + ] + RC = [ + 0x0000000000000001, + 0x0000000000008082, + 0x800000000000808A, + 0x8000000080008000, + 0x000000000000808B, + 0x0000000080000001, + 0x8000000080008081, + 0x8000000000008009, + 0x000000000000008A, + 0x0000000000000088, + 0x0000000080008009, + 0x000000008000000A, + 0x000000008000808B, + 0x800000000000008B, + 0x8000000000008089, + 0x8000000000008003, + 0x8000000000008002, + 0x8000000000000080, + 0x000000000000800A, + 0x800000008000000A, + 0x8000000080008081, + 0x8000000000008080, + 0x0000000080000001, + 0x8000000080008008, + ] + + for rnd in range(0, 24): + # AddColumnParity (Theta) + c = [0] * 5 + d = [0] * 5 + for i in range(0, 25): + c[i % 5] ^= s[i] + for i in range(0, 5): + d[i] = c[(i + 4) % 5] ^ _rol(c[(i + 1) % 5], 1) + for i in range(0, 25): + s[i] ^= d[i % 5] + # RotateWords (Rho) + for i in range(0, 25): + s[i] = _rol(s[i], ROTATIONS[i]) + # PermuteWords (Pi) + t = s[PERMUTATION[0]] + for i in range(0, len(PERMUTATION) - 1): + s[PERMUTATION[i]] = s[PERMUTATION[i + 1]] + s[PERMUTATION[-1]] = t + # NonlinearMixRows (Chi) + for i in range(0, 25, 5): + t = [ + s[i], + s[i + 1], + s[i + 2], + s[i + 3], + s[i + 4], + s[i], + s[i + 1], + ] + for j in range(0, 5): + s[i + j] = t[j] ^ ((~t[j + 1]) & (t[j + 2])) + # AddRoundConstant (Iota) + s[0] ^= RC[rnd] + + # Reinterpret octet array b to word array and XOR it to state s. + def _reinterpret_to_words_and_xor(s, b): + for j in range(0, len(b) // 8): + s[j] ^= _from_le(b[8 * j : 8 * j + 8]) + + # Reinterpret word array w to octet array and return it. + def _reinterpret_to_octets(w): + mp = bytearray() + for j in range(0, len(w)): + mp += int_to_bytes(w[j], 8, byteorder="little") + return mp + + def _sha3_raw(msg, r_w, o_p, e_b): + """Semi-generic SHA-3 implementation""" + r_b = 8 * r_w + s = [0] * 25 + # Handle whole blocks. + idx = 0 + blocks = len(msg) // r_b + for i in range(0, blocks): + _reinterpret_to_words_and_xor(s, msg[idx : idx + r_b]) + idx += r_b + _sha3_transform(s) + # Handle last block padding. 
+ m = bytearray(msg[idx:]) + m.append(o_p) + while len(m) < r_b: + m.append(0) + m[len(m) - 1] |= 128 + # Handle padded last block. + _reinterpret_to_words_and_xor(s, m) + _sha3_transform(s) + # Output. + out = bytearray() + while len(out) < e_b: + out += _reinterpret_to_octets(s[:r_w]) + _sha3_transform(s) + return out[:e_b] + + def shake_256(msg, outlen): + return _sha3_raw(msg, 17, 31, outlen) diff --git a/env/lib/python3.8/site-packages/ecdsa/_version.py b/env/lib/python3.8/site-packages/ecdsa/_version.py new file mode 100644 index 00000000..96aae17b --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/_version.py @@ -0,0 +1,21 @@ + +# This file was generated by 'versioneer.py' (0.21) from +# revision-control system data, or from the parent directory name of an +# unpacked source archive. Distribution tarballs contain a pre-generated copy +# of this file. + +import json + +version_json = ''' +{ + "date": "2022-07-09T14:49:17+0200", + "dirty": false, + "error": null, + "full-revisionid": "341e0d8be9fedf66fbc9a95630b4ed2138343380", + "version": "0.18.0" +} +''' # END VERSION_JSON + + +def get_versions(): + return json.loads(version_json) diff --git a/env/lib/python3.8/site-packages/ecdsa/curves.py b/env/lib/python3.8/site-packages/ecdsa/curves.py new file mode 100644 index 00000000..1119ee5a --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/curves.py @@ -0,0 +1,513 @@ +from __future__ import division + +from six import PY2 +from . import der, ecdsa, ellipticcurve, eddsa +from .util import orderlen, number_to_string, string_to_number +from ._compat import normalise_bytes, bit_length + + +# orderlen was defined in this module previously, so keep it in __all__, +# will need to mark it as deprecated later +__all__ = [ + "UnknownCurveError", + "orderlen", + "Curve", + "SECP112r1", + "SECP112r2", + "SECP128r1", + "SECP160r1", + "NIST192p", + "NIST224p", + "NIST256p", + "NIST384p", + "NIST521p", + "curves", + "find_curve", + "curve_by_name", + "SECP256k1", + "BRAINPOOLP160r1", + "BRAINPOOLP192r1", + "BRAINPOOLP224r1", + "BRAINPOOLP256r1", + "BRAINPOOLP320r1", + "BRAINPOOLP384r1", + "BRAINPOOLP512r1", + "PRIME_FIELD_OID", + "CHARACTERISTIC_TWO_FIELD_OID", + "Ed25519", + "Ed448", +] + + +PRIME_FIELD_OID = (1, 2, 840, 10045, 1, 1) +CHARACTERISTIC_TWO_FIELD_OID = (1, 2, 840, 10045, 1, 2) + + +class UnknownCurveError(Exception): + pass + + +class Curve: + def __init__(self, name, curve, generator, oid, openssl_name=None): + self.name = name + self.openssl_name = openssl_name # maybe None + self.curve = curve + self.generator = generator + self.order = generator.order() + if isinstance(curve, ellipticcurve.CurveEdTw): + # EdDSA keys are special in that both private and public + # are the same size (as it's defined only with compressed points) + + # +1 for the sign bit and then round up + self.baselen = (bit_length(curve.p()) + 1 + 7) // 8 + self.verifying_key_length = self.baselen + else: + self.baselen = orderlen(self.order) + self.verifying_key_length = 2 * orderlen(curve.p()) + self.signature_length = 2 * self.baselen + self.oid = oid + if oid: + self.encoded_oid = der.encode_oid(*oid) + + def __eq__(self, other): + if isinstance(other, Curve): + return ( + self.curve == other.curve and self.generator == other.generator + ) + return NotImplemented + + def __ne__(self, other): + return not self == other + + def __repr__(self): + return self.name + + def to_der(self, encoding=None, point_encoding="uncompressed"): + """Serialise the curve parameters to binary string. 
+ + :param str encoding: the format to save the curve parameters in. + Default is ``named_curve``, with fallback being the ``explicit`` + if the OID is not set for the curve. + :param str point_encoding: the point encoding of the generator when + explicit curve encoding is used. Ignored for ``named_curve`` + format. + + :return: DER encoded ECParameters structure + :rtype: bytes + """ + if encoding is None: + if self.oid: + encoding = "named_curve" + else: + encoding = "explicit" + + if encoding not in ("named_curve", "explicit"): + raise ValueError( + "Only 'named_curve' and 'explicit' encodings supported" + ) + + if encoding == "named_curve": + if not self.oid: + raise UnknownCurveError( + "Can't encode curve using named_curve encoding without " + "associated curve OID" + ) + return der.encode_oid(*self.oid) + elif isinstance(self.curve, ellipticcurve.CurveEdTw): + assert encoding == "explicit" + raise UnknownCurveError( + "Twisted Edwards curves don't support explicit encoding" + ) + + # encode the ECParameters sequence + curve_p = self.curve.p() + version = der.encode_integer(1) + field_id = der.encode_sequence( + der.encode_oid(*PRIME_FIELD_OID), der.encode_integer(curve_p) + ) + curve = der.encode_sequence( + der.encode_octet_string( + number_to_string(self.curve.a() % curve_p, curve_p) + ), + der.encode_octet_string( + number_to_string(self.curve.b() % curve_p, curve_p) + ), + ) + base = der.encode_octet_string(self.generator.to_bytes(point_encoding)) + order = der.encode_integer(self.generator.order()) + seq_elements = [version, field_id, curve, base, order] + if self.curve.cofactor(): + cofactor = der.encode_integer(self.curve.cofactor()) + seq_elements.append(cofactor) + + return der.encode_sequence(*seq_elements) + + def to_pem(self, encoding=None, point_encoding="uncompressed"): + """ + Serialise the curve parameters to the :term:`PEM` format. + + :param str encoding: the format to save the curve parameters in. + Default is ``named_curve``, with fallback being the ``explicit`` + if the OID is not set for the curve. + :param str point_encoding: the point encoding of the generator when + explicit curve encoding is used. Ignored for ``named_curve`` + format. + + :return: PEM encoded ECParameters structure + :rtype: str + """ + return der.topem( + self.to_der(encoding, point_encoding), "EC PARAMETERS" + ) + + @staticmethod + def from_der(data, valid_encodings=None): + """Decode the curve parameters from DER file. 
+ + :param data: the binary string to decode the parameters from + :type data: :term:`bytes-like object` + :param valid_encodings: set of names of allowed encodings, by default + all (set by passing ``None``), supported ones are ``named_curve`` + and ``explicit`` + :type valid_encodings: :term:`set-like object` + """ + if not valid_encodings: + valid_encodings = set(("named_curve", "explicit")) + if not all(i in ["named_curve", "explicit"] for i in valid_encodings): + raise ValueError( + "Only named_curve and explicit encodings supported" + ) + data = normalise_bytes(data) + if not der.is_sequence(data): + if "named_curve" not in valid_encodings: + raise der.UnexpectedDER( + "named_curve curve parameters not allowed" + ) + oid, empty = der.remove_object(data) + if empty: + raise der.UnexpectedDER("Unexpected data after OID") + return find_curve(oid) + + if "explicit" not in valid_encodings: + raise der.UnexpectedDER("explicit curve parameters not allowed") + + seq, empty = der.remove_sequence(data) + if empty: + raise der.UnexpectedDER( + "Unexpected data after ECParameters structure" + ) + # decode the ECParameters sequence + version, rest = der.remove_integer(seq) + if version != 1: + raise der.UnexpectedDER("Unknown parameter encoding format") + field_id, rest = der.remove_sequence(rest) + curve, rest = der.remove_sequence(rest) + base_bytes, rest = der.remove_octet_string(rest) + order, rest = der.remove_integer(rest) + cofactor = None + if rest: + # the ASN.1 specification of ECParameters allows for future + # extensions of the sequence, so ignore the remaining bytes + cofactor, _ = der.remove_integer(rest) + + # decode the ECParameters.fieldID sequence + field_type, rest = der.remove_object(field_id) + if field_type == CHARACTERISTIC_TWO_FIELD_OID: + raise UnknownCurveError("Characteristic 2 curves unsupported") + if field_type != PRIME_FIELD_OID: + raise UnknownCurveError( + "Unknown field type: {0}".format(field_type) + ) + prime, empty = der.remove_integer(rest) + if empty: + raise der.UnexpectedDER( + "Unexpected data after ECParameters.fieldID.Prime-p element" + ) + + # decode the ECParameters.curve sequence + curve_a_bytes, rest = der.remove_octet_string(curve) + curve_b_bytes, rest = der.remove_octet_string(rest) + # seed can be defined here, but we don't parse it, so ignore `rest` + + curve_a = string_to_number(curve_a_bytes) + curve_b = string_to_number(curve_b_bytes) + + curve_fp = ellipticcurve.CurveFp(prime, curve_a, curve_b, cofactor) + + # decode the ECParameters.base point + + base = ellipticcurve.PointJacobi.from_bytes( + curve_fp, + base_bytes, + valid_encodings=("uncompressed", "compressed", "hybrid"), + order=order, + generator=True, + ) + tmp_curve = Curve("unknown", curve_fp, base, None) + + # if the curve matches one of the well-known ones, use the well-known + # one in preference, as it will have the OID and name associated + for i in curves: + if tmp_curve == i: + return i + return tmp_curve + + @classmethod + def from_pem(cls, string, valid_encodings=None): + """Decode the curve parameters from PEM file. 
+ + :param str string: the text string to decode the parameters from + :param valid_encodings: set of names of allowed encodings, by default + all (set by passing ``None``), supported ones are ``named_curve`` + and ``explicit`` + :type valid_encodings: :term:`set-like object` + """ + if not PY2 and isinstance(string, str): # pragma: no branch + string = string.encode() + + ec_param_index = string.find(b"-----BEGIN EC PARAMETERS-----") + if ec_param_index == -1: + raise der.UnexpectedDER("EC PARAMETERS PEM header not found") + + return cls.from_der( + der.unpem(string[ec_param_index:]), valid_encodings + ) + + +# the SEC curves +SECP112r1 = Curve( + "SECP112r1", + ecdsa.curve_112r1, + ecdsa.generator_112r1, + (1, 3, 132, 0, 6), + "secp112r1", +) + + +SECP112r2 = Curve( + "SECP112r2", + ecdsa.curve_112r2, + ecdsa.generator_112r2, + (1, 3, 132, 0, 7), + "secp112r2", +) + + +SECP128r1 = Curve( + "SECP128r1", + ecdsa.curve_128r1, + ecdsa.generator_128r1, + (1, 3, 132, 0, 28), + "secp128r1", +) + + +SECP160r1 = Curve( + "SECP160r1", + ecdsa.curve_160r1, + ecdsa.generator_160r1, + (1, 3, 132, 0, 8), + "secp160r1", +) + + +# the NIST curves +NIST192p = Curve( + "NIST192p", + ecdsa.curve_192, + ecdsa.generator_192, + (1, 2, 840, 10045, 3, 1, 1), + "prime192v1", +) + + +NIST224p = Curve( + "NIST224p", + ecdsa.curve_224, + ecdsa.generator_224, + (1, 3, 132, 0, 33), + "secp224r1", +) + + +NIST256p = Curve( + "NIST256p", + ecdsa.curve_256, + ecdsa.generator_256, + (1, 2, 840, 10045, 3, 1, 7), + "prime256v1", +) + + +NIST384p = Curve( + "NIST384p", + ecdsa.curve_384, + ecdsa.generator_384, + (1, 3, 132, 0, 34), + "secp384r1", +) + + +NIST521p = Curve( + "NIST521p", + ecdsa.curve_521, + ecdsa.generator_521, + (1, 3, 132, 0, 35), + "secp521r1", +) + + +SECP256k1 = Curve( + "SECP256k1", + ecdsa.curve_secp256k1, + ecdsa.generator_secp256k1, + (1, 3, 132, 0, 10), + "secp256k1", +) + + +BRAINPOOLP160r1 = Curve( + "BRAINPOOLP160r1", + ecdsa.curve_brainpoolp160r1, + ecdsa.generator_brainpoolp160r1, + (1, 3, 36, 3, 3, 2, 8, 1, 1, 1), + "brainpoolP160r1", +) + + +BRAINPOOLP192r1 = Curve( + "BRAINPOOLP192r1", + ecdsa.curve_brainpoolp192r1, + ecdsa.generator_brainpoolp192r1, + (1, 3, 36, 3, 3, 2, 8, 1, 1, 3), + "brainpoolP192r1", +) + + +BRAINPOOLP224r1 = Curve( + "BRAINPOOLP224r1", + ecdsa.curve_brainpoolp224r1, + ecdsa.generator_brainpoolp224r1, + (1, 3, 36, 3, 3, 2, 8, 1, 1, 5), + "brainpoolP224r1", +) + + +BRAINPOOLP256r1 = Curve( + "BRAINPOOLP256r1", + ecdsa.curve_brainpoolp256r1, + ecdsa.generator_brainpoolp256r1, + (1, 3, 36, 3, 3, 2, 8, 1, 1, 7), + "brainpoolP256r1", +) + + +BRAINPOOLP320r1 = Curve( + "BRAINPOOLP320r1", + ecdsa.curve_brainpoolp320r1, + ecdsa.generator_brainpoolp320r1, + (1, 3, 36, 3, 3, 2, 8, 1, 1, 9), + "brainpoolP320r1", +) + + +BRAINPOOLP384r1 = Curve( + "BRAINPOOLP384r1", + ecdsa.curve_brainpoolp384r1, + ecdsa.generator_brainpoolp384r1, + (1, 3, 36, 3, 3, 2, 8, 1, 1, 11), + "brainpoolP384r1", +) + + +BRAINPOOLP512r1 = Curve( + "BRAINPOOLP512r1", + ecdsa.curve_brainpoolp512r1, + ecdsa.generator_brainpoolp512r1, + (1, 3, 36, 3, 3, 2, 8, 1, 1, 13), + "brainpoolP512r1", +) + + +Ed25519 = Curve( + "Ed25519", + eddsa.curve_ed25519, + eddsa.generator_ed25519, + (1, 3, 101, 112), +) + + +Ed448 = Curve( + "Ed448", + eddsa.curve_ed448, + eddsa.generator_ed448, + (1, 3, 101, 113), +) + + +# no order in particular, but keep previously added curves first +curves = [ + NIST192p, + NIST224p, + NIST256p, + NIST384p, + NIST521p, + SECP256k1, + BRAINPOOLP160r1, + BRAINPOOLP192r1, + BRAINPOOLP224r1, + 
BRAINPOOLP256r1, + BRAINPOOLP320r1, + BRAINPOOLP384r1, + BRAINPOOLP512r1, + SECP112r1, + SECP112r2, + SECP128r1, + SECP160r1, + Ed25519, + Ed448, +] + + +def find_curve(oid_curve): + """Select a curve based on its OID + + :param tuple[int,...] oid_curve: ASN.1 Object Identifier of the + curve to return, like ``(1, 2, 840, 10045, 3, 1, 7)`` for ``NIST256p``. + + :raises UnknownCurveError: When the oid doesn't match any of the supported + curves + + :rtype: ~ecdsa.curves.Curve + """ + for c in curves: + if c.oid == oid_curve: + return c + raise UnknownCurveError( + "I don't know about the curve with oid %s." + "I only know about these: %s" % (oid_curve, [c.name for c in curves]) + ) + + +def curve_by_name(name): + """Select a curve based on its name. + + Returns a :py:class:`~ecdsa.curves.Curve` object with a ``name`` name. + Note that ``name`` is case-sensitve. + + :param str name: Name of the curve to return, like ``NIST256p`` or + ``prime256v1`` + + :raises UnknownCurveError: When the name doesn't match any of the supported + curves + + :rtype: ~ecdsa.curves.Curve + """ + for c in curves: + if name == c.name or (c.openssl_name and name == c.openssl_name): + return c + raise UnknownCurveError( + "Curve with name {0!r} unknown, only curves supported: {1}".format( + name, [c.name for c in curves] + ) + ) diff --git a/env/lib/python3.8/site-packages/ecdsa/der.py b/env/lib/python3.8/site-packages/ecdsa/der.py new file mode 100644 index 00000000..8b27941c --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/der.py @@ -0,0 +1,409 @@ +from __future__ import division + +import binascii +import base64 +import warnings +from itertools import chain +from six import int2byte, b, text_type +from ._compat import str_idx_as_int + + +class UnexpectedDER(Exception): + pass + + +def encode_constructed(tag, value): + return int2byte(0xA0 + tag) + encode_length(len(value)) + value + + +def encode_integer(r): + assert r >= 0 # can't support negative numbers yet + h = ("%x" % r).encode() + if len(h) % 2: + h = b("0") + h + s = binascii.unhexlify(h) + num = str_idx_as_int(s, 0) + if num <= 0x7F: + return b("\x02") + encode_length(len(s)) + s + else: + # DER integers are two's complement, so if the first byte is + # 0x80-0xff then we need an extra 0x00 byte to prevent it from + # looking negative. + return b("\x02") + encode_length(len(s) + 1) + b("\x00") + s + + +# sentry object to check if an argument was specified (used to detect +# deprecated calling convention) +_sentry = object() + + +def encode_bitstring(s, unused=_sentry): + """ + Encode a binary string as a BIT STRING using :term:`DER` encoding. + + Note, because there is no native Python object that can encode an actual + bit string, this function only accepts byte strings as the `s` argument. + The byte string is the actual bit string that will be encoded, padded + on the right (least significant bits, looking from big endian perspective) + to the first full byte. If the bit string has a bit length that is multiple + of 8, then the padding should not be included. For correct DER encoding + the padding bits MUST be set to 0. + + Number of bits of padding need to be provided as the `unused` parameter. + In case they are specified as None, it means the number of unused bits + is already encoded in the string as the first byte. + + The deprecated call convention specifies just the `s` parameters and + encodes the number of unused bits as first parameter (same convention + as with None). + + Empty string must be encoded with `unused` specified as 0. 
+ + Future version of python-ecdsa will make specifying the `unused` argument + mandatory. + + :param s: bytes to encode + :type s: bytes like object + :param unused: number of bits at the end of `s` that are unused, must be + between 0 and 7 (inclusive) + :type unused: int or None + + :raises ValueError: when `unused` is too large or too small + + :return: `s` encoded using DER + :rtype: bytes + """ + encoded_unused = b"" + len_extra = 0 + if unused is _sentry: + warnings.warn( + "Legacy call convention used, unused= needs to be specified", + DeprecationWarning, + ) + elif unused is not None: + if not 0 <= unused <= 7: + raise ValueError("unused must be integer between 0 and 7") + if unused: + if not s: + raise ValueError("unused is non-zero but s is empty") + last = str_idx_as_int(s, -1) + if last & (2**unused - 1): + raise ValueError("unused bits must be zeros in DER") + encoded_unused = int2byte(unused) + len_extra = 1 + return b("\x03") + encode_length(len(s) + len_extra) + encoded_unused + s + + +def encode_octet_string(s): + return b("\x04") + encode_length(len(s)) + s + + +def encode_oid(first, second, *pieces): + assert 0 <= first < 2 and 0 <= second <= 39 or first == 2 and 0 <= second + body = b"".join( + chain( + [encode_number(40 * first + second)], + (encode_number(p) for p in pieces), + ) + ) + return b"\x06" + encode_length(len(body)) + body + + +def encode_sequence(*encoded_pieces): + total_len = sum([len(p) for p in encoded_pieces]) + return b("\x30") + encode_length(total_len) + b("").join(encoded_pieces) + + +def encode_number(n): + b128_digits = [] + while n: + b128_digits.insert(0, (n & 0x7F) | 0x80) + n = n >> 7 + if not b128_digits: + b128_digits.append(0) + b128_digits[-1] &= 0x7F + return b("").join([int2byte(d) for d in b128_digits]) + + +def is_sequence(string): + return string and string[:1] == b"\x30" + + +def remove_constructed(string): + s0 = str_idx_as_int(string, 0) + if (s0 & 0xE0) != 0xA0: + raise UnexpectedDER( + "wanted type 'constructed tag' (0xa0-0xbf), got 0x%02x" % s0 + ) + tag = s0 & 0x1F + length, llen = read_length(string[1:]) + body = string[1 + llen : 1 + llen + length] + rest = string[1 + llen + length :] + return tag, body, rest + + +def remove_sequence(string): + if not string: + raise UnexpectedDER("Empty string does not encode a sequence") + if string[:1] != b"\x30": + n = str_idx_as_int(string, 0) + raise UnexpectedDER("wanted type 'sequence' (0x30), got 0x%02x" % n) + length, lengthlength = read_length(string[1:]) + if length > len(string) - 1 - lengthlength: + raise UnexpectedDER("Length longer than the provided buffer") + endseq = 1 + lengthlength + length + return string[1 + lengthlength : endseq], string[endseq:] + + +def remove_octet_string(string): + if string[:1] != b"\x04": + n = str_idx_as_int(string, 0) + raise UnexpectedDER("wanted type 'octetstring' (0x04), got 0x%02x" % n) + length, llen = read_length(string[1:]) + body = string[1 + llen : 1 + llen + length] + rest = string[1 + llen + length :] + return body, rest + + +def remove_object(string): + if not string: + raise UnexpectedDER( + "Empty string does not encode an object identifier" + ) + if string[:1] != b"\x06": + n = str_idx_as_int(string, 0) + raise UnexpectedDER("wanted type 'object' (0x06), got 0x%02x" % n) + length, lengthlength = read_length(string[1:]) + body = string[1 + lengthlength : 1 + lengthlength + length] + rest = string[1 + lengthlength + length :] + if not body: + raise UnexpectedDER("Empty object identifier") + if len(body) != length: + raise 
UnexpectedDER( + "Length of object identifier longer than the provided buffer" + ) + numbers = [] + while body: + n, ll = read_number(body) + numbers.append(n) + body = body[ll:] + n0 = numbers.pop(0) + if n0 < 80: + first = n0 // 40 + else: + first = 2 + second = n0 - (40 * first) + numbers.insert(0, first) + numbers.insert(1, second) + return tuple(numbers), rest + + +def remove_integer(string): + if not string: + raise UnexpectedDER( + "Empty string is an invalid encoding of an integer" + ) + if string[:1] != b"\x02": + n = str_idx_as_int(string, 0) + raise UnexpectedDER("wanted type 'integer' (0x02), got 0x%02x" % n) + length, llen = read_length(string[1:]) + if length > len(string) - 1 - llen: + raise UnexpectedDER("Length longer than provided buffer") + if length == 0: + raise UnexpectedDER("0-byte long encoding of integer") + numberbytes = string[1 + llen : 1 + llen + length] + rest = string[1 + llen + length :] + msb = str_idx_as_int(numberbytes, 0) + if not msb < 0x80: + raise UnexpectedDER("Negative integers are not supported") + # check if the encoding is the minimal one (DER requirement) + if length > 1 and not msb: + # leading zero byte is allowed if the integer would have been + # considered a negative number otherwise + smsb = str_idx_as_int(numberbytes, 1) + if smsb < 0x80: + raise UnexpectedDER( + "Invalid encoding of integer, unnecessary " + "zero padding bytes" + ) + return int(binascii.hexlify(numberbytes), 16), rest + + +def read_number(string): + number = 0 + llen = 0 + if str_idx_as_int(string, 0) == 0x80: + raise UnexpectedDER("Non minimal encoding of OID subidentifier") + # base-128 big endian, with most significant bit set in all but the last + # byte + while True: + if llen >= len(string): + raise UnexpectedDER("ran out of length bytes") + number = number << 7 + d = str_idx_as_int(string, llen) + number += d & 0x7F + llen += 1 + if not d & 0x80: + break + return number, llen + + +def encode_length(l): + assert l >= 0 + if l < 0x80: + return int2byte(l) + s = ("%x" % l).encode() + if len(s) % 2: + s = b("0") + s + s = binascii.unhexlify(s) + llen = len(s) + return int2byte(0x80 | llen) + s + + +def read_length(string): + if not string: + raise UnexpectedDER("Empty string can't encode valid length value") + num = str_idx_as_int(string, 0) + if not (num & 0x80): + # short form + return (num & 0x7F), 1 + # else long-form: b0&0x7f is number of additional base256 length bytes, + # big-endian + llen = num & 0x7F + if not llen: + raise UnexpectedDER("Invalid length encoding, length of length is 0") + if llen > len(string) - 1: + raise UnexpectedDER("Length of length longer than provided buffer") + # verify that the encoding is minimal possible (DER requirement) + msb = str_idx_as_int(string, 1) + if not msb or llen == 1 and msb < 0x80: + raise UnexpectedDER("Not minimal encoding of length") + return int(binascii.hexlify(string[1 : 1 + llen]), 16), 1 + llen + + +def remove_bitstring(string, expect_unused=_sentry): + """ + Remove a BIT STRING object from `string` following :term:`DER`. + + The `expect_unused` can be used to specify if the bit string should + have the amount of unused bits decoded or not. If it's an integer, any + read BIT STRING that has number of unused bits different from specified + value will cause UnexpectedDER exception to be raised (this is especially + useful when decoding BIT STRINGS that have DER encoded object in them; + DER encoding is byte oriented, so the unused bits will always equal 0). 
+ + If the `expect_unused` is specified as None, the first element returned + will be a tuple, with the first value being the extracted bit string + while the second value will be the decoded number of unused bits. + + If the `expect_unused` is unspecified, the decoding of byte with + number of unused bits will not be attempted and the bit string will be + returned as-is, the callee will be required to decode it and verify its + correctness. + + Future version of python will require the `expected_unused` parameter + to be specified. + + :param string: string of bytes to extract the BIT STRING from + :type string: bytes like object + :param expect_unused: number of bits that should be unused in the BIT + STRING, or None, to return it to caller + :type expect_unused: int or None + + :raises UnexpectedDER: when the encoding does not follow DER. + + :return: a tuple with first element being the extracted bit string and + the second being the remaining bytes in the string (if any); if the + `expect_unused` is specified as None, the first element of the returned + tuple will be a tuple itself, with first element being the bit string + as bytes and the second element being the number of unused bits at the + end of the byte array as an integer + :rtype: tuple + """ + if not string: + raise UnexpectedDER("Empty string does not encode a bitstring") + if expect_unused is _sentry: + warnings.warn( + "Legacy call convention used, expect_unused= needs to be" + " specified", + DeprecationWarning, + ) + num = str_idx_as_int(string, 0) + if string[:1] != b"\x03": + raise UnexpectedDER("wanted bitstring (0x03), got 0x%02x" % num) + length, llen = read_length(string[1:]) + if not length: + raise UnexpectedDER("Invalid length of bit string, can't be 0") + body = string[1 + llen : 1 + llen + length] + rest = string[1 + llen + length :] + if expect_unused is not _sentry: + unused = str_idx_as_int(body, 0) + if not 0 <= unused <= 7: + raise UnexpectedDER("Invalid encoding of unused bits") + if expect_unused is not None and expect_unused != unused: + raise UnexpectedDER("Unexpected number of unused bits") + body = body[1:] + if unused: + if not body: + raise UnexpectedDER("Invalid encoding of empty bit string") + last = str_idx_as_int(body, -1) + # verify that all the unused bits are set to zero (DER requirement) + if last & (2**unused - 1): + raise UnexpectedDER("Non zero padding bits in bit string") + if expect_unused is None: + body = (body, unused) + return body, rest + + +# SEQUENCE([1, STRING(secexp), cont[0], OBJECT(curvename), cont[1], BINTSTRING) + + +# signatures: (from RFC3279) +# ansi-X9-62 OBJECT IDENTIFIER ::= { +# iso(1) member-body(2) us(840) 10045 } +# +# id-ecSigType OBJECT IDENTIFIER ::= { +# ansi-X9-62 signatures(4) } +# ecdsa-with-SHA1 OBJECT IDENTIFIER ::= { +# id-ecSigType 1 } +# so 1,2,840,10045,4,1 +# so 0x42, .. .. 
+ +# Ecdsa-Sig-Value ::= SEQUENCE { +# r INTEGER, +# s INTEGER } + +# id-public-key-type OBJECT IDENTIFIER ::= { ansi-X9.62 2 } +# +# id-ecPublicKey OBJECT IDENTIFIER ::= { id-publicKeyType 1 } + +# I think the secp224r1 identifier is (t=06,l=05,v=2b81040021) +# secp224r1 OBJECT IDENTIFIER ::= { +# iso(1) identified-organization(3) certicom(132) curve(0) 33 } +# and the secp384r1 is (t=06,l=05,v=2b81040022) +# secp384r1 OBJECT IDENTIFIER ::= { +# iso(1) identified-organization(3) certicom(132) curve(0) 34 } + + +def unpem(pem): + if isinstance(pem, text_type): # pragma: no branch + pem = pem.encode() + + d = b("").join( + [ + l.strip() + for l in pem.split(b("\n")) + if l and not l.startswith(b("-----")) + ] + ) + return base64.b64decode(d) + + +def topem(der, name): + b64 = base64.b64encode(der) + lines = [("-----BEGIN %s-----\n" % name).encode()] + lines.extend( + [b64[start : start + 64] + b("\n") for start in range(0, len(b64), 64)] + ) + lines.append(("-----END %s-----\n" % name).encode()) + return b("").join(lines) diff --git a/env/lib/python3.8/site-packages/ecdsa/ecdh.py b/env/lib/python3.8/site-packages/ecdsa/ecdh.py new file mode 100644 index 00000000..7f697d9a --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/ecdh.py @@ -0,0 +1,336 @@ +""" +Class for performing Elliptic-curve Diffie-Hellman (ECDH) operations. +""" + +from .util import number_to_string +from .ellipticcurve import INFINITY +from .keys import SigningKey, VerifyingKey + + +__all__ = [ + "ECDH", + "NoKeyError", + "NoCurveError", + "InvalidCurveError", + "InvalidSharedSecretError", +] + + +class NoKeyError(Exception): + """ECDH. Key not found but it is needed for operation.""" + + pass + + +class NoCurveError(Exception): + """ECDH. Curve not set but it is needed for operation.""" + + pass + + +class InvalidCurveError(Exception): + """ + ECDH. Raised in case the public and private keys use different curves. + """ + + pass + + +class InvalidSharedSecretError(Exception): + """ECDH. Raised in case the shared secret we obtained is an INFINITY.""" + + pass + + +class ECDH(object): + """ + Elliptic-curve Diffie-Hellman (ECDH). A key agreement protocol. + + Allows two parties, each having an elliptic-curve public-private key + pair, to establish a shared secret over an insecure channel + """ + + def __init__(self, curve=None, private_key=None, public_key=None): + """ + ECDH init. + + Call can be initialised without parameters, then the first operation + (loading either key) will set the used curve. + All parameters must be ultimately set before shared secret + calculation will be allowed. + + :param curve: curve for operations + :type curve: Curve + :param private_key: `my` private key for ECDH + :type private_key: SigningKey + :param public_key: `their` public key for ECDH + :type public_key: VerifyingKey + """ + self.curve = curve + self.private_key = None + self.public_key = None + if private_key: + self.load_private_key(private_key) + if public_key: + self.load_received_public_key(public_key) + + def _get_shared_secret(self, remote_public_key): + if not self.private_key: + raise NoKeyError( + "Private key needs to be set to create shared secret" + ) + if not self.public_key: + raise NoKeyError( + "Public key needs to be set to create shared secret" + ) + if not ( + self.private_key.curve == self.curve == remote_public_key.curve + ): + raise InvalidCurveError( + "Curves for public key and private key is not equal." 
+ ) + + # shared secret = PUBKEYtheirs * PRIVATEKEYours + result = ( + remote_public_key.pubkey.point + * self.private_key.privkey.secret_multiplier + ) + if result == INFINITY: + raise InvalidSharedSecretError("Invalid shared secret (INFINITY).") + + return result.x() + + def set_curve(self, key_curve): + """ + Set the working curve for ecdh operations. + + :param key_curve: curve from `curves` module + :type key_curve: Curve + """ + self.curve = key_curve + + def generate_private_key(self): + """ + Generate local private key for ecdh operation with curve that was set. + + :raises NoCurveError: Curve must be set before key generation. + + :return: public (verifying) key from this private key. + :rtype: VerifyingKey + """ + if not self.curve: + raise NoCurveError("Curve must be set prior to key generation.") + return self.load_private_key(SigningKey.generate(curve=self.curve)) + + def load_private_key(self, private_key): + """ + Load private key from SigningKey (keys.py) object. + + Needs to have the same curve as was set with set_curve method. + If curve is not set - it sets from this SigningKey + + :param private_key: Initialised SigningKey class + :type private_key: SigningKey + + :raises InvalidCurveError: private_key curve not the same as self.curve + + :return: public (verifying) key from this private key. + :rtype: VerifyingKey + """ + if not self.curve: + self.curve = private_key.curve + if self.curve != private_key.curve: + raise InvalidCurveError("Curve mismatch.") + self.private_key = private_key + return self.private_key.get_verifying_key() + + def load_private_key_bytes(self, private_key): + """ + Load private key from byte string. + + Uses current curve and checks if the provided key matches + the curve of ECDH key agreement. + Key loads via from_string method of SigningKey class + + :param private_key: private key in bytes string format + :type private_key: :term:`bytes-like object` + + :raises NoCurveError: Curve must be set before loading. + + :return: public (verifying) key from this private key. + :rtype: VerifyingKey + """ + if not self.curve: + raise NoCurveError("Curve must be set prior to key load.") + return self.load_private_key( + SigningKey.from_string(private_key, curve=self.curve) + ) + + def load_private_key_der(self, private_key_der): + """ + Load private key from DER byte string. + + Compares the curve of the DER-encoded key with the ECDH set curve, + uses the former if unset. + + Note, the only DER format supported is the RFC5915 + Look at keys.py:SigningKey.from_der() + + :param private_key_der: string with the DER encoding of private ECDSA + key + :type private_key_der: string + + :raises InvalidCurveError: private_key curve not the same as self.curve + + :return: public (verifying) key from this private key. + :rtype: VerifyingKey + """ + return self.load_private_key(SigningKey.from_der(private_key_der)) + + def load_private_key_pem(self, private_key_pem): + """ + Load private key from PEM string. + + Compares the curve of the DER-encoded key with the ECDH set curve, + uses the former if unset. + + Note, the only PEM format supported is the RFC5915 + Look at keys.py:SigningKey.from_pem() + it needs to have `EC PRIVATE KEY` section + + :param private_key_pem: string with PEM-encoded private ECDSA key + :type private_key_pem: string + + :raises InvalidCurveError: private_key curve not the same as self.curve + + :return: public (verifying) key from this private key. 
+ :rtype: VerifyingKey + """ + return self.load_private_key(SigningKey.from_pem(private_key_pem)) + + def get_public_key(self): + """ + Provides a public key that matches the local private key. + + Needs to be sent to the remote party. + + :return: public (verifying) key from local private key. + :rtype: VerifyingKey + """ + return self.private_key.get_verifying_key() + + def load_received_public_key(self, public_key): + """ + Load public key from VerifyingKey (keys.py) object. + + Needs to have the same curve as set as current for ecdh operation. + If curve is not set - it sets it from VerifyingKey. + + :param public_key: Initialised VerifyingKey class + :type public_key: VerifyingKey + + :raises InvalidCurveError: public_key curve not the same as self.curve + """ + if not self.curve: + self.curve = public_key.curve + if self.curve != public_key.curve: + raise InvalidCurveError("Curve mismatch.") + self.public_key = public_key + + def load_received_public_key_bytes( + self, public_key_str, valid_encodings=None + ): + """ + Load public key from byte string. + + Uses current curve and checks if key length corresponds to + the current curve. + Key loads via from_string method of VerifyingKey class + + :param public_key_str: public key in bytes string format + :type public_key_str: :term:`bytes-like object` + :param valid_encodings: list of acceptable point encoding formats, + supported ones are: :term:`uncompressed`, :term:`compressed`, + :term:`hybrid`, and :term:`raw encoding` (specified with ``raw`` + name). All formats by default (specified with ``None``). + :type valid_encodings: :term:`set-like object` + """ + return self.load_received_public_key( + VerifyingKey.from_string( + public_key_str, self.curve, valid_encodings + ) + ) + + def load_received_public_key_der(self, public_key_der): + """ + Load public key from DER byte string. + + Compares the curve of the DER-encoded key with the ECDH set curve, + uses the former if unset. + + Note, the only DER format supported is the RFC5912 + Look at keys.py:VerifyingKey.from_der() + + :param public_key_der: string with the DER encoding of public ECDSA key + :type public_key_der: string + + :raises InvalidCurveError: public_key curve not the same as self.curve + """ + return self.load_received_public_key( + VerifyingKey.from_der(public_key_der) + ) + + def load_received_public_key_pem(self, public_key_pem): + """ + Load public key from PEM string. + + Compares the curve of the PEM-encoded key with the ECDH set curve, + uses the former if unset. + + Note, the only PEM format supported is the RFC5912 + Look at keys.py:VerifyingKey.from_pem() + + :param public_key_pem: string with PEM-encoded public ECDSA key + :type public_key_pem: string + + :raises InvalidCurveError: public_key curve not the same as self.curve + """ + return self.load_received_public_key( + VerifyingKey.from_pem(public_key_pem) + ) + + def generate_sharedsecret_bytes(self): + """ + Generate shared secret from local private key and remote public key. + + The objects needs to have both private key and received public key + before generation is allowed. + + :raises InvalidCurveError: public_key curve not the same as self.curve + :raises NoKeyError: public_key or private_key is not set + + :return: shared secret + :rtype: bytes + """ + return number_to_string( + self.generate_sharedsecret(), self.private_key.curve.curve.p() + ) + + def generate_sharedsecret(self): + """ + Generate shared secret from local private key and remote public key. 
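+
+        Example of a full key agreement (an illustrative sketch; assumes
+        ``NIST256p`` imported from ``ecdsa.curves`` and ``remote_key``, a
+        ``VerifyingKey`` received from the peer)::
+
+            ecdh = ECDH(curve=NIST256p)
+            local_public = ecdh.generate_private_key()
+            # send local_public to the peer, receive remote_key back
+            ecdh.load_received_public_key(remote_key)
+            shared = ecdh.generate_sharedsecret()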
+ + The objects needs to have both private key and received public key + before generation is allowed. + + It's the same for local and remote party, + shared secret(local private key, remote public key) == + shared secret(local public key, remote private key) + + :raises InvalidCurveError: public_key curve not the same as self.curve + :raises NoKeyError: public_key or private_key is not set + + :return: shared secret + :rtype: int + """ + return self._get_shared_secret(self.public_key) diff --git a/env/lib/python3.8/site-packages/ecdsa/ecdsa.py b/env/lib/python3.8/site-packages/ecdsa/ecdsa.py new file mode 100644 index 00000000..33282816 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/ecdsa.py @@ -0,0 +1,859 @@ +#! /usr/bin/env python + +""" +Low level implementation of Elliptic-Curve Digital Signatures. + +.. note :: + You're most likely looking for the :py:class:`~ecdsa.keys` module. + This is a low-level implementation of the ECDSA that operates on + integers, not byte strings. + +NOTE: This a low level implementation of ECDSA, for normal applications +you should be looking at the keys.py module. + +Classes and methods for elliptic-curve signatures: +private keys, public keys, signatures, +and definitions of prime-modulus curves. + +Example: + +.. code-block:: python + + # (In real-life applications, you would probably want to + # protect against defects in SystemRandom.) + from random import SystemRandom + randrange = SystemRandom().randrange + + # Generate a public/private key pair using the NIST Curve P-192: + + g = generator_192 + n = g.order() + secret = randrange( 1, n ) + pubkey = Public_key( g, g * secret ) + privkey = Private_key( pubkey, secret ) + + # Signing a hash value: + + hash = randrange( 1, n ) + signature = privkey.sign( hash, randrange( 1, n ) ) + + # Verifying a signature for a hash value: + + if pubkey.verifies( hash, signature ): + print_("Demo verification succeeded.") + else: + print_("*** Demo verification failed.") + + # Verification fails if the hash value is modified: + + if pubkey.verifies( hash-1, signature ): + print_("**** Demo verification failed to reject tampered hash.") + else: + print_("Demo verification correctly rejected tampered hash.") + +Revision history: + 2005.12.31 - Initial version. + + 2008.11.25 - Substantial revisions introducing new classes. + + 2009.05.16 - Warn against using random.randrange in real applications. + + 2009.05.17 - Use random.SystemRandom by default. + +Originally written in 2005 by Peter Pearson and placed in the public domain, +modified as part of the python-ecdsa package. +""" + +from six import int2byte, b +from . import ellipticcurve +from . import numbertheory +from .util import bit_length +from ._compat import remove_whitespace + + +class RSZeroError(RuntimeError): + pass + + +class InvalidPointError(RuntimeError): + pass + + +class Signature(object): + """ + ECDSA signature. 
+ + :ivar int r: the ``r`` element of the ECDSA signature + :ivar int s: the ``s`` element of the ECDSA signature + """ + + def __init__(self, r, s): + self.r = r + self.s = s + + def recover_public_keys(self, hash, generator): + """ + Returns two public keys for which the signature is valid + + :param int hash: signed hash + :param AbstractPoint generator: is the generator used in creation + of the signature + :rtype: tuple(Public_key, Public_key) + :return: a pair of public keys that can validate the signature + """ + curve = generator.curve() + n = generator.order() + r = self.r + s = self.s + e = hash + x = r + + # Compute the curve point with x as x-coordinate + alpha = ( + pow(x, 3, curve.p()) + (curve.a() * x) + curve.b() + ) % curve.p() + beta = numbertheory.square_root_mod_prime(alpha, curve.p()) + y = beta if beta % 2 == 0 else curve.p() - beta + + # Compute the public key + R1 = ellipticcurve.PointJacobi(curve, x, y, 1, n) + Q1 = numbertheory.inverse_mod(r, n) * (s * R1 + (-e % n) * generator) + Pk1 = Public_key(generator, Q1) + + # And the second solution + R2 = ellipticcurve.PointJacobi(curve, x, -y, 1, n) + Q2 = numbertheory.inverse_mod(r, n) * (s * R2 + (-e % n) * generator) + Pk2 = Public_key(generator, Q2) + + return [Pk1, Pk2] + + +class Public_key(object): + """Public key for ECDSA.""" + + def __init__(self, generator, point, verify=True): + """Low level ECDSA public key object. + + :param generator: the Point that generates the group (the base point) + :param point: the Point that defines the public key + :param bool verify: if True check if point is valid point on curve + + :raises InvalidPointError: if the point parameters are invalid or + point does not lay on the curve + """ + + self.curve = generator.curve() + self.generator = generator + self.point = point + n = generator.order() + p = self.curve.p() + if not (0 <= point.x() < p) or not (0 <= point.y() < p): + raise InvalidPointError( + "The public point has x or y out of range." + ) + if verify and not self.curve.contains_point(point.x(), point.y()): + raise InvalidPointError("Point does not lay on the curve") + if not n: + raise InvalidPointError("Generator point must have order.") + # for curve parameters with base point with cofactor 1, all points + # that are on the curve are scalar multiples of the base point, so + # verifying that is not necessary. See Section 3.2.2.1 of SEC 1 v2 + if ( + verify + and self.curve.cofactor() != 1 + and not n * point == ellipticcurve.INFINITY + ): + raise InvalidPointError("Generator point order is bad.") + + def __eq__(self, other): + """Return True if the keys are identical, False otherwise. + + Note: for comparison, only placement on the same curve and point + equality is considered, use of the same generator point is not + considered. + """ + if isinstance(other, Public_key): + return self.curve == other.curve and self.point == other.point + return NotImplemented + + def __ne__(self, other): + """Return False if the keys are identical, True otherwise.""" + return not self == other + + def verifies(self, hash, signature): + """Verify that signature is a valid signature of hash. + Return True if the signature is valid. + """ + + # From X9.62 J.3.1. 
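+        # With c = s^-1 mod n, the code below computes the point
+        # (x, y) = u1*G + u2*Q, where u1 = hash*c mod n and u2 = r*c mod n;
+        # the signature is valid if and only if x mod n == r.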
+ + G = self.generator + n = G.order() + r = signature.r + s = signature.s + if r < 1 or r > n - 1: + return False + if s < 1 or s > n - 1: + return False + c = numbertheory.inverse_mod(s, n) + u1 = (hash * c) % n + u2 = (r * c) % n + if hasattr(G, "mul_add"): + xy = G.mul_add(u1, self.point, u2) + else: + xy = u1 * G + u2 * self.point + v = xy.x() % n + return v == r + + +class Private_key(object): + """Private key for ECDSA.""" + + def __init__(self, public_key, secret_multiplier): + """public_key is of class Public_key; + secret_multiplier is a large integer. + """ + + self.public_key = public_key + self.secret_multiplier = secret_multiplier + + def __eq__(self, other): + """Return True if the points are identical, False otherwise.""" + if isinstance(other, Private_key): + return ( + self.public_key == other.public_key + and self.secret_multiplier == other.secret_multiplier + ) + return NotImplemented + + def __ne__(self, other): + """Return False if the points are identical, True otherwise.""" + return not self == other + + def sign(self, hash, random_k): + """Return a signature for the provided hash, using the provided + random nonce. It is absolutely vital that random_k be an unpredictable + number in the range [1, self.public_key.point.order()-1]. If + an attacker can guess random_k, he can compute our private key from a + single signature. Also, if an attacker knows a few high-order + bits (or a few low-order bits) of random_k, he can compute our private + key from many signatures. The generation of nonces with adequate + cryptographic strength is very difficult and far beyond the scope + of this comment. + + May raise RuntimeError, in which case retrying with a new + random value k is in order. + """ + + G = self.public_key.generator + n = G.order() + k = random_k % n + # Fix the bit-length of the random nonce, + # so that it doesn't leak via timing. + # This does not change that ks = k mod n + ks = k + n + kt = ks + n + if bit_length(ks) == bit_length(n): + p1 = kt * G + else: + p1 = ks * G + r = p1.x() % n + if r == 0: + raise RSZeroError("amazingly unlucky random number r") + s = ( + numbertheory.inverse_mod(k, n) + * (hash + (self.secret_multiplier * r) % n) + ) % n + if s == 0: + raise RSZeroError("amazingly unlucky random number s") + return Signature(r, s) + + +def int_to_string(x): + """Convert integer x into a string of bytes, as per X9.62.""" + assert x >= 0 + if x == 0: + return b("\0") + result = [] + while x: + ordinal = x & 0xFF + result.append(int2byte(ordinal)) + x >>= 8 + + result.reverse() + return b("").join(result) + + +def string_to_int(s): + """Convert a string of bytes into an integer, as per X9.62.""" + result = 0 + for c in s: + if not isinstance(c, int): + c = ord(c) + result = 256 * result + c + return result + + +def digest_integer(m): + """Convert an integer into a string of bytes, compute + its SHA-1 hash, and convert the result to an integer.""" + # + # I don't expect this function to be used much. I wrote + # it in order to be able to duplicate the examples + # in ECDSAVS. + # + from hashlib import sha1 + + return string_to_int(sha1(int_to_string(m)).digest()) + + +def point_is_valid(generator, x, y): + """Is (x,y) a valid public key based on the specified generator?""" + + # These are the tests specified in X9.62. 
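+    # A point (x, y) is accepted as a valid public key iff:
+    # - x and y are both in the range [0, p-1],
+    # - (x, y) satisfies the curve equation,
+    # - for curves with cofactor != 1, the point lies in the subgroup
+    #   generated by the base point, i.e. n*(x, y) is the point at infinity.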
+ + n = generator.order() + curve = generator.curve() + p = curve.p() + if not (0 <= x < p) or not (0 <= y < p): + return False + if not curve.contains_point(x, y): + return False + if ( + curve.cofactor() != 1 + and not n * ellipticcurve.PointJacobi(curve, x, y, 1) + == ellipticcurve.INFINITY + ): + return False + return True + + +# secp112r1 curve +_p = int(remove_whitespace("DB7C 2ABF62E3 5E668076 BEAD208B"), 16) +# s = 00F50B02 8E4D696E 67687561 51752904 72783FB1 +_a = int(remove_whitespace("DB7C 2ABF62E3 5E668076 BEAD2088"), 16) +_b = int(remove_whitespace("659E F8BA0439 16EEDE89 11702B22"), 16) +_Gx = int(remove_whitespace("09487239 995A5EE7 6B55F9C2 F098"), 16) +_Gy = int(remove_whitespace("A89C E5AF8724 C0A23E0E 0FF77500"), 16) +_r = int(remove_whitespace("DB7C 2ABF62E3 5E7628DF AC6561C5"), 16) +_h = 1 +curve_112r1 = ellipticcurve.CurveFp(_p, _a, _b, _h) +generator_112r1 = ellipticcurve.PointJacobi( + curve_112r1, _Gx, _Gy, 1, _r, generator=True +) + + +# secp112r2 curve +_p = int(remove_whitespace("DB7C 2ABF62E3 5E668076 BEAD208B"), 16) +# s = 022757A1 114D69E 67687561 51755316 C05E0BD4 +_a = int(remove_whitespace("6127 C24C05F3 8A0AAAF6 5C0EF02C"), 16) +_b = int(remove_whitespace("51DE F1815DB5 ED74FCC3 4C85D709"), 16) +_Gx = int(remove_whitespace("4BA30AB5 E892B4E1 649DD092 8643"), 16) +_Gy = int(remove_whitespace("ADCD 46F5882E 3747DEF3 6E956E97"), 16) +_r = int(remove_whitespace("36DF 0AAFD8B8 D7597CA1 0520D04B"), 16) +_h = 4 +curve_112r2 = ellipticcurve.CurveFp(_p, _a, _b, _h) +generator_112r2 = ellipticcurve.PointJacobi( + curve_112r2, _Gx, _Gy, 1, _r, generator=True +) + + +# secp128r1 curve +_p = int(remove_whitespace("FFFFFFFD FFFFFFFF FFFFFFFF FFFFFFFF"), 16) +# S = 000E0D4D 69E6768 75615175 0CC03A44 73D03679 +# a and b are mod p, so a is equal to p-3, or simply -3 +# _a = -3 +_b = int(remove_whitespace("E87579C1 1079F43D D824993C 2CEE5ED3"), 16) +_Gx = int(remove_whitespace("161FF752 8B899B2D 0C28607C A52C5B86"), 16) +_Gy = int(remove_whitespace("CF5AC839 5BAFEB13 C02DA292 DDED7A83"), 16) +_r = int(remove_whitespace("FFFFFFFE 00000000 75A30D1B 9038A115"), 16) +_h = 1 +curve_128r1 = ellipticcurve.CurveFp(_p, -3, _b, _h) +generator_128r1 = ellipticcurve.PointJacobi( + curve_128r1, _Gx, _Gy, 1, _r, generator=True +) + + +# secp160r1 +_p = int(remove_whitespace("FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF 7FFFFFFF"), 16) +# S = 1053CDE4 2C14D696 E6768756 1517533B F3F83345 +# a and b are mod p, so a is equal to p-3, or simply -3 +# _a = -3 +_b = int(remove_whitespace("1C97BEFC 54BD7A8B 65ACF89F 81D4D4AD C565FA45"), 16) +_Gx = int( + remove_whitespace("4A96B568 8EF57328 46646989 68C38BB9 13CBFC82"), + 16, +) +_Gy = int( + remove_whitespace("23A62855 3168947D 59DCC912 04235137 7AC5FB32"), + 16, +) +_r = int( + remove_whitespace("01 00000000 00000000 0001F4C8 F927AED3 CA752257"), + 16, +) +_h = 1 +curve_160r1 = ellipticcurve.CurveFp(_p, -3, _b, _h) +generator_160r1 = ellipticcurve.PointJacobi( + curve_160r1, _Gx, _Gy, 1, _r, generator=True +) + + +# NIST Curve P-192: +_p = 6277101735386680763835789423207666416083908700390324961279 +_r = 6277101735386680763835789423176059013767194773182842284081 +# s = 0x3045ae6fc8422f64ed579528d38120eae12196d5L +# c = 0x3099d2bbbfcb2538542dcd5fb078b6ef5f3d6fe2c745de65L +_b = int( + remove_whitespace( + """ + 64210519 E59C80E7 0FA7E9AB 72243049 FEB8DEEC C146B9B1""" + ), + 16, +) +_Gx = int( + remove_whitespace( + """ + 188DA80E B03090F6 7CBF20EB 43A18800 F4FF0AFD 82FF1012""" + ), + 16, +) +_Gy = int( + remove_whitespace( + """ + 07192B95 FFC8DA78 
631011ED 6B24CDD5 73F977A1 1E794811""" + ), + 16, +) + +curve_192 = ellipticcurve.CurveFp(_p, -3, _b, 1) +generator_192 = ellipticcurve.PointJacobi( + curve_192, _Gx, _Gy, 1, _r, generator=True +) + + +# NIST Curve P-224: +_p = int( + remove_whitespace( + """ + 2695994666715063979466701508701963067355791626002630814351 + 0066298881""" + ) +) +_r = int( + remove_whitespace( + """ + 2695994666715063979466701508701962594045780771442439172168 + 2722368061""" + ) +) +# s = 0xbd71344799d5c7fcdc45b59fa3b9ab8f6a948bc5L +# c = 0x5b056c7e11dd68f40469ee7f3c7a7d74f7d121116506d031218291fbL +_b = int( + remove_whitespace( + """ + B4050A85 0C04B3AB F5413256 5044B0B7 D7BFD8BA 270B3943 + 2355FFB4""" + ), + 16, +) +_Gx = int( + remove_whitespace( + """ + B70E0CBD 6BB4BF7F 321390B9 4A03C1D3 56C21122 343280D6 + 115C1D21""" + ), + 16, +) +_Gy = int( + remove_whitespace( + """ + BD376388 B5F723FB 4C22DFE6 CD4375A0 5A074764 44D58199 + 85007E34""" + ), + 16, +) + +curve_224 = ellipticcurve.CurveFp(_p, -3, _b, 1) +generator_224 = ellipticcurve.PointJacobi( + curve_224, _Gx, _Gy, 1, _r, generator=True +) + +# NIST Curve P-256: +_p = int( + remove_whitespace( + """ + 1157920892103562487626974469494075735300861434152903141955 + 33631308867097853951""" + ) +) +_r = int( + remove_whitespace( + """ + 115792089210356248762697446949407573529996955224135760342 + 422259061068512044369""" + ) +) +# s = 0xc49d360886e704936a6678e1139d26b7819f7e90L +# c = 0x7efba1662985be9403cb055c75d4f7e0ce8d84a9c5114abcaf3177680104fa0dL +_b = int( + remove_whitespace( + """ + 5AC635D8 AA3A93E7 B3EBBD55 769886BC 651D06B0 CC53B0F6 + 3BCE3C3E 27D2604B""" + ), + 16, +) +_Gx = int( + remove_whitespace( + """ + 6B17D1F2 E12C4247 F8BCE6E5 63A440F2 77037D81 2DEB33A0 + F4A13945 D898C296""" + ), + 16, +) +_Gy = int( + remove_whitespace( + """ + 4FE342E2 FE1A7F9B 8EE7EB4A 7C0F9E16 2BCE3357 6B315ECE + CBB64068 37BF51F5""" + ), + 16, +) + +curve_256 = ellipticcurve.CurveFp(_p, -3, _b, 1) +generator_256 = ellipticcurve.PointJacobi( + curve_256, _Gx, _Gy, 1, _r, generator=True +) + +# NIST Curve P-384: +_p = int( + remove_whitespace( + """ + 3940200619639447921227904010014361380507973927046544666794 + 8293404245721771496870329047266088258938001861606973112319""" + ) +) +_r = int( + remove_whitespace( + """ + 3940200619639447921227904010014361380507973927046544666794 + 6905279627659399113263569398956308152294913554433653942643""" + ) +) +# s = 0xa335926aa319a27a1d00896a6773a4827acdac73L +# c = int(remove_whitespace( +# """ +# 79d1e655 f868f02f ff48dcde e14151dd b80643c1 406d0ca1 +# 0dfe6fc5 2009540a 495e8042 ea5f744f 6e184667 cc722483""" +# ), 16) +_b = int( + remove_whitespace( + """ + B3312FA7 E23EE7E4 988E056B E3F82D19 181D9C6E FE814112 + 0314088F 5013875A C656398D 8A2ED19D 2A85C8ED D3EC2AEF""" + ), + 16, +) +_Gx = int( + remove_whitespace( + """ + AA87CA22 BE8B0537 8EB1C71E F320AD74 6E1D3B62 8BA79B98 + 59F741E0 82542A38 5502F25D BF55296C 3A545E38 72760AB7""" + ), + 16, +) +_Gy = int( + remove_whitespace( + """ + 3617DE4A 96262C6F 5D9E98BF 9292DC29 F8F41DBD 289A147C + E9DA3113 B5F0B8C0 0A60B1CE 1D7E819D 7A431D7C 90EA0E5F""" + ), + 16, +) + +curve_384 = ellipticcurve.CurveFp(_p, -3, _b, 1) +generator_384 = ellipticcurve.PointJacobi( + curve_384, _Gx, _Gy, 1, _r, generator=True +) + +# NIST Curve P-521: +_p = int( + "686479766013060971498190079908139321726943530014330540939" + "446345918554318339765605212255964066145455497729631139148" + "0858037121987999716643812574028291115057151" +) +_r = int( + "686479766013060971498190079908139321726943530014330540939" 
+ "446345918554318339765539424505774633321719753296399637136" + "3321113864768612440380340372808892707005449" +) +# s = 0xd09e8800291cb85396cc6717393284aaa0da64baL +# c = int(remove_whitespace( +# """ +# 0b4 8bfa5f42 0a349495 39d2bdfc 264eeeeb 077688e4 +# 4fbf0ad8 f6d0edb3 7bd6b533 28100051 8e19f1b9 ffbe0fe9 +# ed8a3c22 00b8f875 e523868c 70c1e5bf 55bad637""" +# ), 16) +_b = int( + remove_whitespace( + """ + 051 953EB961 8E1C9A1F 929A21A0 B68540EE A2DA725B + 99B315F3 B8B48991 8EF109E1 56193951 EC7E937B 1652C0BD + 3BB1BF07 3573DF88 3D2C34F1 EF451FD4 6B503F00""" + ), + 16, +) +_Gx = int( + remove_whitespace( + """ + C6 858E06B7 0404E9CD 9E3ECB66 2395B442 9C648139 + 053FB521 F828AF60 6B4D3DBA A14B5E77 EFE75928 FE1DC127 + A2FFA8DE 3348B3C1 856A429B F97E7E31 C2E5BD66""" + ), + 16, +) +_Gy = int( + remove_whitespace( + """ + 118 39296A78 9A3BC004 5C8A5FB4 2C7D1BD9 98F54449 + 579B4468 17AFBD17 273E662C 97EE7299 5EF42640 C550B901 + 3FAD0761 353C7086 A272C240 88BE9476 9FD16650""" + ), + 16, +) + +curve_521 = ellipticcurve.CurveFp(_p, -3, _b, 1) +generator_521 = ellipticcurve.PointJacobi( + curve_521, _Gx, _Gy, 1, _r, generator=True +) + +# Certicom secp256-k1 +_a = 0x0000000000000000000000000000000000000000000000000000000000000000 +_b = 0x0000000000000000000000000000000000000000000000000000000000000007 +_p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F +_Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798 +_Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8 +_r = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141 + +curve_secp256k1 = ellipticcurve.CurveFp(_p, _a, _b, 1) +generator_secp256k1 = ellipticcurve.PointJacobi( + curve_secp256k1, _Gx, _Gy, 1, _r, generator=True +) + +# Brainpool P-160-r1 +_a = 0x340E7BE2A280EB74E2BE61BADA745D97E8F7C300 +_b = 0x1E589A8595423412134FAA2DBDEC95C8D8675E58 +_p = 0xE95E4A5F737059DC60DFC7AD95B3D8139515620F +_Gx = 0xBED5AF16EA3F6A4F62938C4631EB5AF7BDBCDBC3 +_Gy = 0x1667CB477A1A8EC338F94741669C976316DA6321 +_q = 0xE95E4A5F737059DC60DF5991D45029409E60FC09 + +curve_brainpoolp160r1 = ellipticcurve.CurveFp(_p, _a, _b, 1) +generator_brainpoolp160r1 = ellipticcurve.PointJacobi( + curve_brainpoolp160r1, _Gx, _Gy, 1, _q, generator=True +) + +# Brainpool P-192-r1 +_a = 0x6A91174076B1E0E19C39C031FE8685C1CAE040E5C69A28EF +_b = 0x469A28EF7C28CCA3DC721D044F4496BCCA7EF4146FBF25C9 +_p = 0xC302F41D932A36CDA7A3463093D18DB78FCE476DE1A86297 +_Gx = 0xC0A0647EAAB6A48753B033C56CB0F0900A2F5C4853375FD6 +_Gy = 0x14B690866ABD5BB88B5F4828C1490002E6773FA2FA299B8F +_q = 0xC302F41D932A36CDA7A3462F9E9E916B5BE8F1029AC4ACC1 + +curve_brainpoolp192r1 = ellipticcurve.CurveFp(_p, _a, _b, 1) +generator_brainpoolp192r1 = ellipticcurve.PointJacobi( + curve_brainpoolp192r1, _Gx, _Gy, 1, _q, generator=True +) + +# Brainpool P-224-r1 +_a = 0x68A5E62CA9CE6C1C299803A6C1530B514E182AD8B0042A59CAD29F43 +_b = 0x2580F63CCFE44138870713B1A92369E33E2135D266DBB372386C400B +_p = 0xD7C134AA264366862A18302575D1D787B09F075797DA89F57EC8C0FF +_Gx = 0x0D9029AD2C7E5CF4340823B2A87DC68C9E4CE3174C1E6EFDEE12C07D +_Gy = 0x58AA56F772C0726F24C6B89E4ECDAC24354B9E99CAA3F6D3761402CD +_q = 0xD7C134AA264366862A18302575D0FB98D116BC4B6DDEBCA3A5A7939F + +curve_brainpoolp224r1 = ellipticcurve.CurveFp(_p, _a, _b, 1) +generator_brainpoolp224r1 = ellipticcurve.PointJacobi( + curve_brainpoolp224r1, _Gx, _Gy, 1, _q, generator=True +) + +# Brainpool P-256-r1 +_a = 0x7D5A0975FC2C3057EEF67530417AFFE7FB8055C126DC5C6CE94A4B44F330B5D9 +_b = 
0x26DC5C6CE94A4B44F330B5D9BBD77CBF958416295CF7E1CE6BCCDC18FF8C07B6 +_p = 0xA9FB57DBA1EEA9BC3E660A909D838D726E3BF623D52620282013481D1F6E5377 +_Gx = 0x8BD2AEB9CB7E57CB2C4B482FFC81B7AFB9DE27E1E3BD23C23A4453BD9ACE3262 +_Gy = 0x547EF835C3DAC4FD97F8461A14611DC9C27745132DED8E545C1D54C72F046997 +_q = 0xA9FB57DBA1EEA9BC3E660A909D838D718C397AA3B561A6F7901E0E82974856A7 + +curve_brainpoolp256r1 = ellipticcurve.CurveFp(_p, _a, _b, 1) +generator_brainpoolp256r1 = ellipticcurve.PointJacobi( + curve_brainpoolp256r1, _Gx, _Gy, 1, _q, generator=True +) + +# Brainpool P-320-r1 +_a = int( + remove_whitespace( + """ + 3EE30B568FBAB0F883CCEBD46D3F3BB8A2A73513F5EB79DA66190EB085FFA9 + F492F375A97D860EB4""" + ), + 16, +) +_b = int( + remove_whitespace( + """ + 520883949DFDBC42D3AD198640688A6FE13F41349554B49ACC31DCCD884539 + 816F5EB4AC8FB1F1A6""" + ), + 16, +) +_p = int( + remove_whitespace( + """ + D35E472036BC4FB7E13C785ED201E065F98FCFA6F6F40DEF4F92B9EC7893EC + 28FCD412B1F1B32E27""" + ), + 16, +) +_Gx = int( + remove_whitespace( + """ + 43BD7E9AFB53D8B85289BCC48EE5BFE6F20137D10A087EB6E7871E2A10A599 + C710AF8D0D39E20611""" + ), + 16, +) +_Gy = int( + remove_whitespace( + """ + 14FDD05545EC1CC8AB4093247F77275E0743FFED117182EAA9C77877AAAC6A + C7D35245D1692E8EE1""" + ), + 16, +) +_q = int( + remove_whitespace( + """ + D35E472036BC4FB7E13C785ED201E065F98FCFA5B68F12A32D482EC7EE8658 + E98691555B44C59311""" + ), + 16, +) + +curve_brainpoolp320r1 = ellipticcurve.CurveFp(_p, _a, _b, 1) +generator_brainpoolp320r1 = ellipticcurve.PointJacobi( + curve_brainpoolp320r1, _Gx, _Gy, 1, _q, generator=True +) + +# Brainpool P-384-r1 +_a = int( + remove_whitespace( + """ + 7BC382C63D8C150C3C72080ACE05AFA0C2BEA28E4FB22787139165EFBA91F9 + 0F8AA5814A503AD4EB04A8C7DD22CE2826""" + ), + 16, +) +_b = int( + remove_whitespace( + """ + 04A8C7DD22CE28268B39B55416F0447C2FB77DE107DCD2A62E880EA53EEB62 + D57CB4390295DBC9943AB78696FA504C11""" + ), + 16, +) +_p = int( + remove_whitespace( + """ + 8CB91E82A3386D280F5D6F7E50E641DF152F7109ED5456B412B1DA197FB711 + 23ACD3A729901D1A71874700133107EC53""" + ), + 16, +) +_Gx = int( + remove_whitespace( + """ + 1D1C64F068CF45FFA2A63A81B7C13F6B8847A3E77EF14FE3DB7FCAFE0CBD10 + E8E826E03436D646AAEF87B2E247D4AF1E""" + ), + 16, +) +_Gy = int( + remove_whitespace( + """ + 8ABE1D7520F9C2A45CB1EB8E95CFD55262B70B29FEEC5864E19C054FF991292 + 80E4646217791811142820341263C5315""" + ), + 16, +) +_q = int( + remove_whitespace( + """ + 8CB91E82A3386D280F5D6F7E50E641DF152F7109ED5456B31F166E6CAC0425 + A7CF3AB6AF6B7FC3103B883202E9046565""" + ), + 16, +) + +curve_brainpoolp384r1 = ellipticcurve.CurveFp(_p, _a, _b, 1) +generator_brainpoolp384r1 = ellipticcurve.PointJacobi( + curve_brainpoolp384r1, _Gx, _Gy, 1, _q, generator=True +) + +# Brainpool P-512-r1 +_a = int( + remove_whitespace( + """ + 7830A3318B603B89E2327145AC234CC594CBDD8D3DF91610A83441CAEA9863 + BC2DED5D5AA8253AA10A2EF1C98B9AC8B57F1117A72BF2C7B9E7C1AC4D77FC94CA""" + ), + 16, +) +_b = int( + remove_whitespace( + """ + 3DF91610A83441CAEA9863BC2DED5D5AA8253AA10A2EF1C98B9AC8B57F1117 + A72BF2C7B9E7C1AC4D77FC94CADC083E67984050B75EBAE5DD2809BD638016F723""" + ), + 16, +) +_p = int( + remove_whitespace( + """ + AADD9DB8DBE9C48B3FD4E6AE33C9FC07CB308DB3B3C9D20ED6639CCA703308 + 717D4D9B009BC66842AECDA12AE6A380E62881FF2F2D82C68528AA6056583A48F3""" + ), + 16, +) +_Gx = int( + remove_whitespace( + """ + 81AEE4BDD82ED9645A21322E9C4C6A9385ED9F70B5D916C1B43B62EEF4D009 + 8EFF3B1F78E2D0D48D50D1687B93B97D5F7C6D5047406A5E688B352209BCB9F822""" + ), + 16, +) +_Gy = int( + remove_whitespace( 
+ """ + 7DDE385D566332ECC0EABFA9CF7822FDF209F70024A57B1AA000C55B881F81 + 11B2DCDE494A5F485E5BCA4BD88A2763AED1CA2B2FA8F0540678CD1E0F3AD80892""" + ), + 16, +) +_q = int( + remove_whitespace( + """ + AADD9DB8DBE9C48B3FD4E6AE33C9FC07CB308DB3B3C9D20ED6639CCA703308 + 70553E5C414CA92619418661197FAC10471DB1D381085DDADDB58796829CA90069""" + ), + 16, +) + +curve_brainpoolp512r1 = ellipticcurve.CurveFp(_p, _a, _b, 1) +generator_brainpoolp512r1 = ellipticcurve.PointJacobi( + curve_brainpoolp512r1, _Gx, _Gy, 1, _q, generator=True +) diff --git a/env/lib/python3.8/site-packages/ecdsa/eddsa.py b/env/lib/python3.8/site-packages/ecdsa/eddsa.py new file mode 100644 index 00000000..9769cfd8 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/eddsa.py @@ -0,0 +1,252 @@ +"""Implementation of Edwards Digital Signature Algorithm.""" + +import hashlib +from ._sha3 import shake_256 +from . import ellipticcurve +from ._compat import ( + remove_whitespace, + bit_length, + bytes_to_int, + int_to_bytes, + compat26_str, +) + +# edwards25519, defined in RFC7748 +_p = 2**255 - 19 +_a = -1 +_d = int( + remove_whitespace( + "370957059346694393431380835087545651895421138798432190163887855330" + "85940283555" + ) +) +_h = 8 + +_Gx = int( + remove_whitespace( + "151122213495354007725011514095885315114540126930418572060461132" + "83949847762202" + ) +) +_Gy = int( + remove_whitespace( + "463168356949264781694283940034751631413079938662562256157830336" + "03165251855960" + ) +) +_r = 2**252 + 0x14DEF9DEA2F79CD65812631A5CF5D3ED + + +def _sha512(data): + return hashlib.new("sha512", compat26_str(data)).digest() + + +curve_ed25519 = ellipticcurve.CurveEdTw(_p, _a, _d, _h, _sha512) +generator_ed25519 = ellipticcurve.PointEdwards( + curve_ed25519, _Gx, _Gy, 1, _Gx * _Gy % _p, _r, generator=True +) + + +# edwards448, defined in RFC7748 +_p = 2**448 - 2**224 - 1 +_a = 1 +_d = -39081 % _p +_h = 4 + +_Gx = int( + remove_whitespace( + "224580040295924300187604334099896036246789641632564134246125461" + "686950415467406032909029192869357953282578032075146446173674602635" + "247710" + ) +) +_Gy = int( + remove_whitespace( + "298819210078481492676017930443930673437544040154080242095928241" + "372331506189835876003536878655418784733982303233503462500531545062" + "832660" + ) +) +_r = 2**446 - 0x8335DC163BB124B65129C96FDE933D8D723A70AADC873D6D54A7BB0D + + +def _shake256(data): + return shake_256(data, 114) + + +curve_ed448 = ellipticcurve.CurveEdTw(_p, _a, _d, _h, _shake256) +generator_ed448 = ellipticcurve.PointEdwards( + curve_ed448, _Gx, _Gy, 1, _Gx * _Gy % _p, _r, generator=True +) + + +class PublicKey(object): + """Public key for the Edwards Digital Signature Algorithm.""" + + def __init__(self, generator, public_key, public_point=None): + self.generator = generator + self.curve = generator.curve() + self.__encoded = public_key + # plus one for the sign bit and round up + self.baselen = (bit_length(self.curve.p()) + 1 + 7) // 8 + if len(public_key) != self.baselen: + raise ValueError( + "Incorrect size of the public key, expected: {0} bytes".format( + self.baselen + ) + ) + if public_point: + self.__point = public_point + else: + self.__point = ellipticcurve.PointEdwards.from_bytes( + self.curve, public_key + ) + + def __eq__(self, other): + if isinstance(other, PublicKey): + return ( + self.curve == other.curve and self.__encoded == other.__encoded + ) + return NotImplemented + + def __ne__(self, other): + return not self == other + + @property + def point(self): + return self.__point + + @point.setter + def point(self, other): + 
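+        # The point is derived from the encoded key, so it is effectively
+        # immutable: assignment is accepted only when the new value equals
+        # the current one, anything else is rejected below.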
if self.__point != other: + raise ValueError("Can't change the coordinates of the point") + self.__point = other + + def public_point(self): + return self.__point + + def public_key(self): + return self.__encoded + + def verify(self, data, signature): + """Verify a Pure EdDSA signature over data.""" + data = compat26_str(data) + if len(signature) != 2 * self.baselen: + raise ValueError( + "Invalid signature length, expected: {0} bytes".format( + 2 * self.baselen + ) + ) + R = ellipticcurve.PointEdwards.from_bytes( + self.curve, signature[: self.baselen] + ) + S = bytes_to_int(signature[self.baselen :], "little") + if S >= self.generator.order(): + raise ValueError("Invalid signature") + + dom = bytearray() + if self.curve == curve_ed448: + dom = bytearray(b"SigEd448" + b"\x00\x00") + + k = bytes_to_int( + self.curve.hash_func(dom + R.to_bytes() + self.__encoded + data), + "little", + ) + + if self.generator * S != self.__point * k + R: + raise ValueError("Invalid signature") + + return True + + +class PrivateKey(object): + """Private key for the Edwards Digital Signature Algorithm.""" + + def __init__(self, generator, private_key): + self.generator = generator + self.curve = generator.curve() + # plus one for the sign bit and round up + self.baselen = (bit_length(self.curve.p()) + 1 + 7) // 8 + if len(private_key) != self.baselen: + raise ValueError( + "Incorrect size of private key, expected: {0} bytes".format( + self.baselen + ) + ) + self.__private_key = bytes(private_key) + self.__h = bytearray(self.curve.hash_func(private_key)) + self.__public_key = None + + a = self.__h[: self.baselen] + a = self._key_prune(a) + scalar = bytes_to_int(a, "little") + self.__s = scalar + + @property + def private_key(self): + return self.__private_key + + def __eq__(self, other): + if isinstance(other, PrivateKey): + return ( + self.curve == other.curve + and self.__private_key == other.__private_key + ) + return NotImplemented + + def __ne__(self, other): + return not self == other + + def _key_prune(self, key): + # make sure the key is not in a small subgroup + h = self.curve.cofactor() + if h == 4: + h_log = 2 + elif h == 8: + h_log = 3 + else: + raise ValueError("Only cofactor 4 and 8 curves supported") + key[0] &= ~((1 << h_log) - 1) + + # ensure the highest bit is set but no higher + l = bit_length(self.curve.p()) + if l % 8 == 0: + key[-1] = 0 + key[-2] |= 0x80 + else: + key[-1] = key[-1] & (1 << (l % 8)) - 1 | 1 << (l % 8) - 1 + return key + + def public_key(self): + """Generate the public key based on the included private key""" + if self.__public_key: + return self.__public_key + + public_point = self.generator * self.__s + + self.__public_key = PublicKey( + self.generator, public_point.to_bytes(), public_point + ) + + return self.__public_key + + def sign(self, data): + """Perform a Pure EdDSA signature over data.""" + data = compat26_str(data) + A = self.public_key().public_key() + + prefix = self.__h[self.baselen :] + + dom = bytearray() + if self.curve == curve_ed448: + dom = bytearray(b"SigEd448" + b"\x00\x00") + + r = bytes_to_int(self.curve.hash_func(dom + prefix + data), "little") + R = (self.generator * r).to_bytes() + + k = bytes_to_int(self.curve.hash_func(dom + R + A + data), "little") + k %= self.generator.order() + + S = (r + k * self.__s) % self.generator.order() + + return R + int_to_bytes(S, self.baselen, "little") diff --git a/env/lib/python3.8/site-packages/ecdsa/ellipticcurve.py b/env/lib/python3.8/site-packages/ecdsa/ellipticcurve.py new file mode 100644 index 
00000000..d6f71463 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/ellipticcurve.py @@ -0,0 +1,1584 @@ +#! /usr/bin/env python +# -*- coding: utf-8 -*- +# +# Implementation of elliptic curves, for cryptographic applications. +# +# This module doesn't provide any way to choose a random elliptic +# curve, nor to verify that an elliptic curve was chosen randomly, +# because one can simply use NIST's standard curves. +# +# Notes from X9.62-1998 (draft): +# Nomenclature: +# - Q is a public key. +# The "Elliptic Curve Domain Parameters" include: +# - q is the "field size", which in our case equals p. +# - p is a big prime. +# - G is a point of prime order (5.1.1.1). +# - n is the order of G (5.1.1.1). +# Public-key validation (5.2.2): +# - Verify that Q is not the point at infinity. +# - Verify that X_Q and Y_Q are in [0,p-1]. +# - Verify that Q is on the curve. +# - Verify that nQ is the point at infinity. +# Signature generation (5.3): +# - Pick random k from [1,n-1]. +# Signature checking (5.4.2): +# - Verify that r and s are in [1,n-1]. +# +# Revision history: +# 2005.12.31 - Initial version. +# 2008.11.25 - Change CurveFp.is_on to contains_point. +# +# Written in 2005 by Peter Pearson and placed in the public domain. +# Modified extensively as part of python-ecdsa. + +from __future__ import division + +try: + from gmpy2 import mpz + + GMPY = True +except ImportError: # pragma: no branch + try: + from gmpy import mpz + + GMPY = True + except ImportError: + GMPY = False + + +from six import python_2_unicode_compatible +from . import numbertheory +from ._compat import normalise_bytes, int_to_bytes, bit_length, bytes_to_int +from .errors import MalformedPointError +from .util import orderlen, string_to_number, number_to_string + + +@python_2_unicode_compatible +class CurveFp(object): + """ + :term:`Short Weierstrass Elliptic Curve ` over a + prime field. + """ + + if GMPY: # pragma: no branch + + def __init__(self, p, a, b, h=None): + """ + The curve of points satisfying y^2 = x^3 + a*x + b (mod p). + + h is an integer that is the cofactor of the elliptic curve domain + parameters; it is the number of points satisfying the elliptic + curve equation divided by the order of the base point. It is used + for selection of efficient algorithm for public point verification. + """ + self.__p = mpz(p) + self.__a = mpz(a) + self.__b = mpz(b) + # h is not used in calculations and it can be None, so don't use + # gmpy with it + self.__h = h + + else: # pragma: no branch + + def __init__(self, p, a, b, h=None): + """ + The curve of points satisfying y^2 = x^3 + a*x + b (mod p). + + h is an integer that is the cofactor of the elliptic curve domain + parameters; it is the number of points satisfying the elliptic + curve equation divided by the order of the base point. It is used + for selection of efficient algorithm for public point verification. + """ + self.__p = p + self.__a = a + self.__b = b + self.__h = h + + def __eq__(self, other): + """Return True if other is an identical curve, False otherwise. + + Note: the value of the cofactor of the curve is not taken into account + when comparing curves, as it's derived from the base point and + intrinsic curve characteristic (but it's complex to compute), + only the prime and curve parameters are considered. 
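+
+        Illustrative examples (small values chosen for brevity, not real
+        curve parameters)::
+
+            CurveFp(23, 1, 1) == CurveFp(23, 24, 1)       # True: 24 % 23 == 1
+            CurveFp(23, 1, 1, 1) == CurveFp(23, 1, 1, 4)  # True: h is ignored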
+ """ + if isinstance(other, CurveFp): + p = self.__p + return ( + self.__p == other.__p + and self.__a % p == other.__a % p + and self.__b % p == other.__b % p + ) + return NotImplemented + + def __ne__(self, other): + """Return False if other is an identical curve, True otherwise.""" + return not self == other + + def __hash__(self): + return hash((self.__p, self.__a, self.__b)) + + def p(self): + return self.__p + + def a(self): + return self.__a + + def b(self): + return self.__b + + def cofactor(self): + return self.__h + + def contains_point(self, x, y): + """Is the point (x,y) on this curve?""" + return (y * y - ((x * x + self.__a) * x + self.__b)) % self.__p == 0 + + def __str__(self): + return "CurveFp(p=%d, a=%d, b=%d, h=%d)" % ( + self.__p, + self.__a, + self.__b, + self.__h, + ) + + +class CurveEdTw(object): + """Parameters for a Twisted Edwards Elliptic Curve""" + + if GMPY: # pragma: no branch + + def __init__(self, p, a, d, h=None, hash_func=None): + """ + The curve of points satisfying a*x^2 + y^2 = 1 + d*x^2*y^2 (mod p). + + h is the cofactor of the curve. + hash_func is the hash function associated with the curve + (like SHA-512 for Ed25519) + """ + self.__p = mpz(p) + self.__a = mpz(a) + self.__d = mpz(d) + self.__h = h + self.__hash_func = hash_func + + else: + + def __init__(self, p, a, d, h=None, hash_func=None): + """ + The curve of points satisfying a*x^2 + y^2 = 1 + d*x^2*y^2 (mod p). + + h is the cofactor of the curve. + hash_func is the hash function associated with the curve + (like SHA-512 for Ed25519) + """ + self.__p = p + self.__a = a + self.__d = d + self.__h = h + self.__hash_func = hash_func + + def __eq__(self, other): + """Returns True if other is an identical curve.""" + if isinstance(other, CurveEdTw): + p = self.__p + return ( + self.__p == other.__p + and self.__a % p == other.__a % p + and self.__d % p == other.__d % p + ) + return NotImplemented + + def __ne__(self, other): + """Return False if the other is an identical curve, True otherwise.""" + return not self == other + + def __hash__(self): + return hash((self.__p, self.__a, self.__d)) + + def contains_point(self, x, y): + """Is the point (x, y) on this curve?""" + return ( + self.__a * x * x + y * y - 1 - self.__d * x * x * y * y + ) % self.__p == 0 + + def p(self): + return self.__p + + def a(self): + return self.__a + + def d(self): + return self.__d + + def hash_func(self, data): + return self.__hash_func(data) + + def cofactor(self): + return self.__h + + def __str__(self): + return "CurveEdTw(p={0}, a={1}, d={2}, h={3})".format( + self.__p, + self.__a, + self.__d, + self.__h, + ) + + +class AbstractPoint(object): + """Class for common methods of elliptic curve points.""" + + @staticmethod + def _from_raw_encoding(data, raw_encoding_length): + """ + Decode public point from :term:`raw encoding`. + + :term:`raw encoding` is the same as the :term:`uncompressed` encoding, + but without the 0x04 byte at the beginning. 
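+
+        For example, on a curve with a 192-bit prime the raw encoding is
+        48 bytes long: the 24-byte big-endian X coordinate followed by the
+        24-byte big-endian Y coordinate.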
+ """ + # real assert, from_bytes() should not call us with different length + assert len(data) == raw_encoding_length + xs = data[: raw_encoding_length // 2] + ys = data[raw_encoding_length // 2 :] + # real assert, raw_encoding_length is calculated by multiplying an + # integer by two so it will always be even + assert len(xs) == raw_encoding_length // 2 + assert len(ys) == raw_encoding_length // 2 + coord_x = string_to_number(xs) + coord_y = string_to_number(ys) + + return coord_x, coord_y + + @staticmethod + def _from_compressed(data, curve): + """Decode public point from compressed encoding.""" + if data[:1] not in (b"\x02", b"\x03"): + raise MalformedPointError("Malformed compressed point encoding") + + is_even = data[:1] == b"\x02" + x = string_to_number(data[1:]) + p = curve.p() + alpha = (pow(x, 3, p) + (curve.a() * x) + curve.b()) % p + try: + beta = numbertheory.square_root_mod_prime(alpha, p) + except numbertheory.Error as e: + raise MalformedPointError( + "Encoding does not correspond to a point on curve", e + ) + if is_even == bool(beta & 1): + y = p - beta + else: + y = beta + return x, y + + @classmethod + def _from_hybrid(cls, data, raw_encoding_length, validate_encoding): + """Decode public point from hybrid encoding.""" + # real assert, from_bytes() should not call us with different types + assert data[:1] in (b"\x06", b"\x07") + + # primarily use the uncompressed as it's easiest to handle + x, y = cls._from_raw_encoding(data[1:], raw_encoding_length) + + # but validate if it's self-consistent if we're asked to do that + if validate_encoding and ( + y & 1 + and data[:1] != b"\x07" + or (not y & 1) + and data[:1] != b"\x06" + ): + raise MalformedPointError("Inconsistent hybrid point encoding") + + return x, y + + @classmethod + def _from_edwards(cls, curve, data): + """Decode a point on an Edwards curve.""" + data = bytearray(data) + p = curve.p() + # add 1 for the sign bit and then round up + exp_len = (bit_length(p) + 1 + 7) // 8 + if len(data) != exp_len: + raise MalformedPointError("Point length doesn't match the curve.") + x_0 = (data[-1] & 0x80) >> 7 + + data[-1] &= 0x80 - 1 + + y = bytes_to_int(data, "little") + if GMPY: + y = mpz(y) + + x2 = ( + (y * y - 1) + * numbertheory.inverse_mod(curve.d() * y * y - curve.a(), p) + % p + ) + + try: + x = numbertheory.square_root_mod_prime(x2, p) + except numbertheory.Error as e: + raise MalformedPointError( + "Encoding does not correspond to a point on curve", e + ) + + if x % 2 != x_0: + x = -x % p + + return x, y + + @classmethod + def from_bytes( + cls, curve, data, validate_encoding=True, valid_encodings=None + ): + """ + Initialise the object from byte encoding of a point. + + The method does accept and automatically detect the type of point + encoding used. It supports the :term:`raw encoding`, + :term:`uncompressed`, :term:`compressed`, and :term:`hybrid` encodings. + + Note: generally you will want to call the ``from_bytes()`` method of + either a child class, PointJacobi or Point. 
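+
+        Example (an illustrative sketch; assumes ``curve`` is a CurveFp
+        instance and ``point`` is an existing point on it)::
+
+            # round-trip through the X9.62 uncompressed encoding
+            encoded = point.to_bytes("uncompressed")
+            x, y = AbstractPoint.from_bytes(curve, encoded)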
+ + :param data: single point encoding of the public key + :type data: :term:`bytes-like object` + :param curve: the curve on which the public key is expected to lay + :type curve: ~ecdsa.ellipticcurve.CurveFp + :param validate_encoding: whether to verify that the encoding of the + point is self-consistent, defaults to True, has effect only + on ``hybrid`` encoding + :type validate_encoding: bool + :param valid_encodings: list of acceptable point encoding formats, + supported ones are: :term:`uncompressed`, :term:`compressed`, + :term:`hybrid`, and :term:`raw encoding` (specified with ``raw`` + name). All formats by default (specified with ``None``). + :type valid_encodings: :term:`set-like object` + + :raises `~ecdsa.errors.MalformedPointError`: if the public point does + not lay on the curve or the encoding is invalid + + :return: x and y coordinates of the encoded point + :rtype: tuple(int, int) + """ + if not valid_encodings: + valid_encodings = set( + ["uncompressed", "compressed", "hybrid", "raw"] + ) + if not all( + i in set(("uncompressed", "compressed", "hybrid", "raw")) + for i in valid_encodings + ): + raise ValueError( + "Only uncompressed, compressed, hybrid or raw encoding " + "supported." + ) + data = normalise_bytes(data) + + if isinstance(curve, CurveEdTw): + return cls._from_edwards(curve, data) + + key_len = len(data) + raw_encoding_length = 2 * orderlen(curve.p()) + if key_len == raw_encoding_length and "raw" in valid_encodings: + coord_x, coord_y = cls._from_raw_encoding( + data, raw_encoding_length + ) + elif key_len == raw_encoding_length + 1 and ( + "hybrid" in valid_encodings or "uncompressed" in valid_encodings + ): + if data[:1] in (b"\x06", b"\x07") and "hybrid" in valid_encodings: + coord_x, coord_y = cls._from_hybrid( + data, raw_encoding_length, validate_encoding + ) + elif data[:1] == b"\x04" and "uncompressed" in valid_encodings: + coord_x, coord_y = cls._from_raw_encoding( + data[1:], raw_encoding_length + ) + else: + raise MalformedPointError( + "Invalid X9.62 encoding of the public point" + ) + elif ( + key_len == raw_encoding_length // 2 + 1 + and "compressed" in valid_encodings + ): + coord_x, coord_y = cls._from_compressed(data, curve) + else: + raise MalformedPointError( + "Length of string does not match lengths of " + "any of the enabled ({0}) encodings of the " + "curve.".format(", ".join(valid_encodings)) + ) + return coord_x, coord_y + + def _raw_encode(self): + """Convert the point to the :term:`raw encoding`.""" + prime = self.curve().p() + x_str = number_to_string(self.x(), prime) + y_str = number_to_string(self.y(), prime) + return x_str + y_str + + def _compressed_encode(self): + """Encode the point into the compressed form.""" + prime = self.curve().p() + x_str = number_to_string(self.x(), prime) + if self.y() & 1: + return b"\x03" + x_str + return b"\x02" + x_str + + def _hybrid_encode(self): + """Encode the point into the hybrid form.""" + raw_enc = self._raw_encode() + if self.y() & 1: + return b"\x07" + raw_enc + return b"\x06" + raw_enc + + def _edwards_encode(self): + """Encode the point according to RFC8032 encoding.""" + self.scale() + x, y, p = self.x(), self.y(), self.curve().p() + + # add 1 for the sign bit and then round up + enc_len = (bit_length(p) + 1 + 7) // 8 + y_str = int_to_bytes(y, enc_len, "little") + if x % 2: + y_str[-1] |= 0x80 + return y_str + + def to_bytes(self, encoding="raw"): + """ + Convert the point to a byte string. + + The method by default uses the :term:`raw encoding` (specified + by `encoding="raw"`. 
It can also output points in :term:`uncompressed`, + :term:`compressed`, and :term:`hybrid` formats. + + For points on Edwards curves `encoding` is ignored and only the + encoding defined in RFC 8032 is supported. + + :return: :term:`raw encoding` of a public on the curve + :rtype: bytes + """ + assert encoding in ("raw", "uncompressed", "compressed", "hybrid") + curve = self.curve() + if isinstance(curve, CurveEdTw): + return self._edwards_encode() + elif encoding == "raw": + return self._raw_encode() + elif encoding == "uncompressed": + return b"\x04" + self._raw_encode() + elif encoding == "hybrid": + return self._hybrid_encode() + else: + return self._compressed_encode() + + @staticmethod + def _naf(mult): + """Calculate non-adjacent form of number.""" + ret = [] + while mult: + if mult % 2: + nd = mult % 4 + if nd >= 2: + nd -= 4 + ret.append(nd) + mult -= nd + else: + ret.append(0) + mult //= 2 + return ret + + +class PointJacobi(AbstractPoint): + """ + Point on a short Weierstrass elliptic curve. Uses Jacobi coordinates. + + In Jacobian coordinates, there are three parameters, X, Y and Z. + They correspond to affine parameters 'x' and 'y' like so: + + x = X / Z² + y = Y / Z³ + """ + + def __init__(self, curve, x, y, z, order=None, generator=False): + """ + Initialise a point that uses Jacobi representation internally. + + :param CurveFp curve: curve on which the point resides + :param int x: the X parameter of Jacobi representation (equal to x when + converting from affine coordinates + :param int y: the Y parameter of Jacobi representation (equal to y when + converting from affine coordinates + :param int z: the Z parameter of Jacobi representation (equal to 1 when + converting from affine coordinates + :param int order: the point order, must be non zero when using + generator=True + :param bool generator: the point provided is a curve generator, as + such, it will be commonly used with scalar multiplication. This will + cause to precompute multiplication table generation for it + """ + super(PointJacobi, self).__init__() + self.__curve = curve + if GMPY: # pragma: no branch + self.__coords = (mpz(x), mpz(y), mpz(z)) + self.__order = order and mpz(order) + else: # pragma: no branch + self.__coords = (x, y, z) + self.__order = order + self.__generator = generator + self.__precompute = [] + + @classmethod + def from_bytes( + cls, + curve, + data, + validate_encoding=True, + valid_encodings=None, + order=None, + generator=False, + ): + """ + Initialise the object from byte encoding of a point. + + The method does accept and automatically detect the type of point + encoding used. It supports the :term:`raw encoding`, + :term:`uncompressed`, :term:`compressed`, and :term:`hybrid` encodings. + + :param data: single point encoding of the public key + :type data: :term:`bytes-like object` + :param curve: the curve on which the public key is expected to lay + :type curve: ~ecdsa.ellipticcurve.CurveFp + :param validate_encoding: whether to verify that the encoding of the + point is self-consistent, defaults to True, has effect only + on ``hybrid`` encoding + :type validate_encoding: bool + :param valid_encodings: list of acceptable point encoding formats, + supported ones are: :term:`uncompressed`, :term:`compressed`, + :term:`hybrid`, and :term:`raw encoding` (specified with ``raw`` + name). All formats by default (specified with ``None``). 
+ :type valid_encodings: :term:`set-like object` + :param int order: the point order, must be non zero when using + generator=True + :param bool generator: the point provided is a curve generator, as + such, it will be commonly used with scalar multiplication. This + will cause to precompute multiplication table generation for it + + :raises `~ecdsa.errors.MalformedPointError`: if the public point does + not lay on the curve or the encoding is invalid + + :return: Point on curve + :rtype: PointJacobi + """ + coord_x, coord_y = super(PointJacobi, cls).from_bytes( + curve, data, validate_encoding, valid_encodings + ) + return PointJacobi(curve, coord_x, coord_y, 1, order, generator) + + def _maybe_precompute(self): + if not self.__generator or self.__precompute: + return + + # since this code will execute just once, and it's fully deterministic, + # depend on atomicity of the last assignment to switch from empty + # self.__precompute to filled one and just ignore the unlikely + # situation when two threads execute it at the same time (as it won't + # lead to inconsistent __precompute) + order = self.__order + assert order + precompute = [] + i = 1 + order *= 2 + coord_x, coord_y, coord_z = self.__coords + doubler = PointJacobi(self.__curve, coord_x, coord_y, coord_z, order) + order *= 2 + precompute.append((doubler.x(), doubler.y())) + + while i < order: + i *= 2 + doubler = doubler.double().scale() + precompute.append((doubler.x(), doubler.y())) + + self.__precompute = precompute + + def __getstate__(self): + # while this code can execute at the same time as _maybe_precompute() + # is updating the __precompute or scale() is updating the __coords, + # there is no requirement for consistency between __coords and + # __precompute + state = self.__dict__.copy() + return state + + def __setstate__(self, state): + self.__dict__.update(state) + + def __eq__(self, other): + """Compare for equality two points with each-other. + + Note: only points that lay on the same curve can be equal. + """ + x1, y1, z1 = self.__coords + if other is INFINITY: + return not y1 or not z1 + if isinstance(other, Point): + x2, y2, z2 = other.x(), other.y(), 1 + elif isinstance(other, PointJacobi): + x2, y2, z2 = other.__coords + else: + return NotImplemented + if self.__curve != other.curve(): + return False + p = self.__curve.p() + + zz1 = z1 * z1 % p + zz2 = z2 * z2 % p + + # compare the fractions by bringing them to the same denominator + # depend on short-circuit to save 4 multiplications in case of + # inequality + return (x1 * zz2 - x2 * zz1) % p == 0 and ( + y1 * zz2 * z2 - y2 * zz1 * z1 + ) % p == 0 + + def __ne__(self, other): + """Compare for inequality two points with each-other.""" + return not self == other + + def order(self): + """Return the order of the point. + + None if it is undefined. + """ + return self.__order + + def curve(self): + """Return curve over which the point is defined.""" + return self.__curve + + def x(self): + """ + Return affine x coordinate. + + This method should be used only when the 'y' coordinate is not needed. + It's computationally more efficient to use `to_affine()` and then + call x() and y() on the returned instance. Or call `scale()` + and then x() and y() on the returned instance. + """ + x, _, z = self.__coords + if z == 1: + return x + p = self.__curve.p() + z = numbertheory.inverse_mod(z, p) + return x * z**2 % p + + def y(self): + """ + Return affine y coordinate. + + This method should be used only when the 'x' coordinate is not needed. 
+ It's computationally more efficient to use `to_affine()` and then + call x() and y() on the returned instance. Or call `scale()` + and then x() and y() on the returned instance. + """ + _, y, z = self.__coords + if z == 1: + return y + p = self.__curve.p() + z = numbertheory.inverse_mod(z, p) + return y * z**3 % p + + def scale(self): + """ + Return point scaled so that z == 1. + + Modifies point in place, returns self. + """ + x, y, z = self.__coords + if z == 1: + return self + + # scaling is deterministic, so even if two threads execute the below + # code at the same time, they will set __coords to the same value + p = self.__curve.p() + z_inv = numbertheory.inverse_mod(z, p) + zz_inv = z_inv * z_inv % p + x = x * zz_inv % p + y = y * zz_inv * z_inv % p + self.__coords = (x, y, 1) + return self + + def to_affine(self): + """Return point in affine form.""" + _, y, z = self.__coords + if not y or not z: + return INFINITY + self.scale() + x, y, z = self.__coords + return Point(self.__curve, x, y, self.__order) + + @staticmethod + def from_affine(point, generator=False): + """Create from an affine point. + + :param bool generator: set to True to make the point to precalculate + multiplication table - useful for public point when verifying many + signatures (around 100 or so) or for generator points of a curve. + """ + return PointJacobi( + point.curve(), point.x(), point.y(), 1, point.order(), generator + ) + + # please note that all the methods that use the equations from + # hyperelliptic + # are formatted in a way to maximise performance. + # Things that make code faster: multiplying instead of taking to the power + # (`xx = x * x; xxxx = xx * xx % p` is faster than `xxxx = x**4 % p` and + # `pow(x, 4, p)`), + # multiple assignments at the same time (`x1, x2 = self.x1, self.x2` is + # faster than `x1 = self.x1; x2 = self.x2`), + # similarly, sometimes the `% p` is skipped if it makes the calculation + # faster and the result of calculation is later reduced modulo `p` + + def _double_with_z_1(self, X1, Y1, p, a): + """Add a point to itself with z == 1.""" + # after: + # http://hyperelliptic.org/EFD/g1p/auto-shortw-jacobian.html#doubling-mdbl-2007-bl + XX, YY = X1 * X1 % p, Y1 * Y1 % p + if not YY: + return 0, 0, 1 + YYYY = YY * YY % p + S = 2 * ((X1 + YY) ** 2 - XX - YYYY) % p + M = 3 * XX + a + T = (M * M - 2 * S) % p + # X3 = T + Y3 = (M * (S - T) - 8 * YYYY) % p + Z3 = 2 * Y1 % p + return T, Y3, Z3 + + def _double(self, X1, Y1, Z1, p, a): + """Add a point to itself, arbitrary z.""" + if Z1 == 1: + return self._double_with_z_1(X1, Y1, p, a) + if not Y1 or not Z1: + return 0, 0, 1 + # after: + # http://hyperelliptic.org/EFD/g1p/auto-shortw-jacobian.html#doubling-dbl-2007-bl + XX, YY = X1 * X1 % p, Y1 * Y1 % p + if not YY: + return 0, 0, 1 + YYYY = YY * YY % p + ZZ = Z1 * Z1 % p + S = 2 * ((X1 + YY) ** 2 - XX - YYYY) % p + M = (3 * XX + a * ZZ * ZZ) % p + T = (M * M - 2 * S) % p + # X3 = T + Y3 = (M * (S - T) - 8 * YYYY) % p + Z3 = ((Y1 + Z1) ** 2 - YY - ZZ) % p + + return T, Y3, Z3 + + def double(self): + """Add a point to itself.""" + X1, Y1, Z1 = self.__coords + + if not Y1: + return INFINITY + + p, a = self.__curve.p(), self.__curve.a() + + X3, Y3, Z3 = self._double(X1, Y1, Z1, p, a) + + if not Y3 or not Z3: + return INFINITY + return PointJacobi(self.__curve, X3, Y3, Z3, self.__order) + + def _add_with_z_1(self, X1, Y1, X2, Y2, p): + """add points when both Z1 and Z2 equal 1""" + # after: + # http://hyperelliptic.org/EFD/g1p/auto-shortw-jacobian.html#addition-mmadd-2007-bl + H = X2 
- X1 + HH = H * H + I = 4 * HH % p + J = H * I + r = 2 * (Y2 - Y1) + if not H and not r: + return self._double_with_z_1(X1, Y1, p, self.__curve.a()) + V = X1 * I + X3 = (r**2 - J - 2 * V) % p + Y3 = (r * (V - X3) - 2 * Y1 * J) % p + Z3 = 2 * H % p + return X3, Y3, Z3 + + def _add_with_z_eq(self, X1, Y1, Z1, X2, Y2, p): + """add points when Z1 == Z2""" + # after: + # http://hyperelliptic.org/EFD/g1p/auto-shortw-jacobian.html#addition-zadd-2007-m + A = (X2 - X1) ** 2 % p + B = X1 * A % p + C = X2 * A + D = (Y2 - Y1) ** 2 % p + if not A and not D: + return self._double(X1, Y1, Z1, p, self.__curve.a()) + X3 = (D - B - C) % p + Y3 = ((Y2 - Y1) * (B - X3) - Y1 * (C - B)) % p + Z3 = Z1 * (X2 - X1) % p + return X3, Y3, Z3 + + def _add_with_z2_1(self, X1, Y1, Z1, X2, Y2, p): + """add points when Z2 == 1""" + # after: + # http://hyperelliptic.org/EFD/g1p/auto-shortw-jacobian.html#addition-madd-2007-bl + Z1Z1 = Z1 * Z1 % p + U2, S2 = X2 * Z1Z1 % p, Y2 * Z1 * Z1Z1 % p + H = (U2 - X1) % p + HH = H * H % p + I = 4 * HH % p + J = H * I + r = 2 * (S2 - Y1) % p + if not r and not H: + return self._double_with_z_1(X2, Y2, p, self.__curve.a()) + V = X1 * I + X3 = (r * r - J - 2 * V) % p + Y3 = (r * (V - X3) - 2 * Y1 * J) % p + Z3 = ((Z1 + H) ** 2 - Z1Z1 - HH) % p + return X3, Y3, Z3 + + def _add_with_z_ne(self, X1, Y1, Z1, X2, Y2, Z2, p): + """add points with arbitrary z""" + # after: + # http://hyperelliptic.org/EFD/g1p/auto-shortw-jacobian.html#addition-add-2007-bl + Z1Z1 = Z1 * Z1 % p + Z2Z2 = Z2 * Z2 % p + U1 = X1 * Z2Z2 % p + U2 = X2 * Z1Z1 % p + S1 = Y1 * Z2 * Z2Z2 % p + S2 = Y2 * Z1 * Z1Z1 % p + H = U2 - U1 + I = 4 * H * H % p + J = H * I % p + r = 2 * (S2 - S1) % p + if not H and not r: + return self._double(X1, Y1, Z1, p, self.__curve.a()) + V = U1 * I + X3 = (r * r - J - 2 * V) % p + Y3 = (r * (V - X3) - 2 * S1 * J) % p + Z3 = ((Z1 + Z2) ** 2 - Z1Z1 - Z2Z2) * H % p + + return X3, Y3, Z3 + + def __radd__(self, other): + """Add other to self.""" + return self + other + + def _add(self, X1, Y1, Z1, X2, Y2, Z2, p): + """add two points, select fastest method.""" + if not Y1 or not Z1: + return X2, Y2, Z2 + if not Y2 or not Z2: + return X1, Y1, Z1 + if Z1 == Z2: + if Z1 == 1: + return self._add_with_z_1(X1, Y1, X2, Y2, p) + return self._add_with_z_eq(X1, Y1, Z1, X2, Y2, p) + if Z1 == 1: + return self._add_with_z2_1(X2, Y2, Z2, X1, Y1, p) + if Z2 == 1: + return self._add_with_z2_1(X1, Y1, Z1, X2, Y2, p) + return self._add_with_z_ne(X1, Y1, Z1, X2, Y2, Z2, p) + + def __add__(self, other): + """Add two points on elliptic curve.""" + if self == INFINITY: + return other + if other == INFINITY: + return self + if isinstance(other, Point): + other = PointJacobi.from_affine(other) + if self.__curve != other.__curve: + raise ValueError("The other point is on different curve") + + p = self.__curve.p() + X1, Y1, Z1 = self.__coords + X2, Y2, Z2 = other.__coords + + X3, Y3, Z3 = self._add(X1, Y1, Z1, X2, Y2, Z2, p) + + if not Y3 or not Z3: + return INFINITY + return PointJacobi(self.__curve, X3, Y3, Z3, self.__order) + + def __rmul__(self, other): + """Multiply point by an integer.""" + return self * other + + def _mul_precompute(self, other): + """Multiply point by integer with precomputation table.""" + X3, Y3, Z3, p = 0, 0, 1, self.__curve.p() + _add = self._add + for X2, Y2 in self.__precompute: + if other % 2: + if other % 4 >= 2: + other = (other + 1) // 2 + X3, Y3, Z3 = _add(X3, Y3, Z3, X2, -Y2, 1, p) + else: + other = (other - 1) // 2 + X3, Y3, Z3 = _add(X3, Y3, Z3, X2, Y2, 1, p) + else: + other //= 2 + + if 
not Y3 or not Z3: + return INFINITY + return PointJacobi(self.__curve, X3, Y3, Z3, self.__order) + + def __mul__(self, other): + """Multiply point by an integer.""" + if not self.__coords[1] or not other: + return INFINITY + if other == 1: + return self + if self.__order: + # order*2 as a protection for Minerva + other = other % (self.__order * 2) + self._maybe_precompute() + if self.__precompute: + return self._mul_precompute(other) + + self = self.scale() + X2, Y2, _ = self.__coords + X3, Y3, Z3 = 0, 0, 1 + p, a = self.__curve.p(), self.__curve.a() + _double = self._double + _add = self._add + # since adding points when at least one of them is scaled + # is quicker, reverse the NAF order + for i in reversed(self._naf(other)): + X3, Y3, Z3 = _double(X3, Y3, Z3, p, a) + if i < 0: + X3, Y3, Z3 = _add(X3, Y3, Z3, X2, -Y2, 1, p) + elif i > 0: + X3, Y3, Z3 = _add(X3, Y3, Z3, X2, Y2, 1, p) + + if not Y3 or not Z3: + return INFINITY + + return PointJacobi(self.__curve, X3, Y3, Z3, self.__order) + + def mul_add(self, self_mul, other, other_mul): + """ + Do two multiplications at the same time, add results. + + calculates self*self_mul + other*other_mul + """ + if other == INFINITY or other_mul == 0: + return self * self_mul + if self_mul == 0: + return other * other_mul + if not isinstance(other, PointJacobi): + other = PointJacobi.from_affine(other) + # when the points have precomputed answers, then multiplying them alone + # is faster (as it uses NAF and no point doublings) + self._maybe_precompute() + other._maybe_precompute() + if self.__precompute and other.__precompute: + return self * self_mul + other * other_mul + + if self.__order: + self_mul = self_mul % self.__order + other_mul = other_mul % self.__order + + # (X3, Y3, Z3) is the accumulator + X3, Y3, Z3 = 0, 0, 1 + p, a = self.__curve.p(), self.__curve.a() + + # as we have 6 unique points to work with, we can't scale all of them, + # but do scale the ones that are used most often + self.scale() + X1, Y1, Z1 = self.__coords + other.scale() + X2, Y2, Z2 = other.__coords + + _double = self._double + _add = self._add + + # with NAF we have 3 options: no add, subtract, add + # so with 2 points, we have 9 combinations: + # 0, -A, +A, -B, -A-B, +A-B, +B, -A+B, +A+B + # so we need 4 combined points: + mAmB_X, mAmB_Y, mAmB_Z = _add(X1, -Y1, Z1, X2, -Y2, Z2, p) + pAmB_X, pAmB_Y, pAmB_Z = _add(X1, Y1, Z1, X2, -Y2, Z2, p) + mApB_X, mApB_Y, mApB_Z = _add(X1, -Y1, Z1, X2, Y2, Z2, p) + pApB_X, pApB_Y, pApB_Z = _add(X1, Y1, Z1, X2, Y2, Z2, p) + # when the self and other sum to infinity, we need to add them + # one by one to get correct result but as that's very unlikely to + # happen in regular operation, we don't need to optimise this case + if not pApB_Y or not pApB_Z: + return self * self_mul + other * other_mul + + # gmp object creation has cumulatively higher overhead than the + # speedup we get from calculating the NAF using gmp so ensure use + # of int() + self_naf = list(reversed(self._naf(int(self_mul)))) + other_naf = list(reversed(self._naf(int(other_mul)))) + # ensure that the lists are the same length (zip() will truncate + # longer one otherwise) + if len(self_naf) < len(other_naf): + self_naf = [0] * (len(other_naf) - len(self_naf)) + self_naf + elif len(self_naf) > len(other_naf): + other_naf = [0] * (len(self_naf) - len(other_naf)) + other_naf + + for A, B in zip(self_naf, other_naf): + X3, Y3, Z3 = _double(X3, Y3, Z3, p, a) + + # conditions ordered from most to least likely + if A == 0: + if B == 0: + pass + elif B < 0: + X3, Y3, Z3 
= _add(X3, Y3, Z3, X2, -Y2, Z2, p) + else: + assert B > 0 + X3, Y3, Z3 = _add(X3, Y3, Z3, X2, Y2, Z2, p) + elif A < 0: + if B == 0: + X3, Y3, Z3 = _add(X3, Y3, Z3, X1, -Y1, Z1, p) + elif B < 0: + X3, Y3, Z3 = _add(X3, Y3, Z3, mAmB_X, mAmB_Y, mAmB_Z, p) + else: + assert B > 0 + X3, Y3, Z3 = _add(X3, Y3, Z3, mApB_X, mApB_Y, mApB_Z, p) + else: + assert A > 0 + if B == 0: + X3, Y3, Z3 = _add(X3, Y3, Z3, X1, Y1, Z1, p) + elif B < 0: + X3, Y3, Z3 = _add(X3, Y3, Z3, pAmB_X, pAmB_Y, pAmB_Z, p) + else: + assert B > 0 + X3, Y3, Z3 = _add(X3, Y3, Z3, pApB_X, pApB_Y, pApB_Z, p) + + if not Y3 or not Z3: + return INFINITY + + return PointJacobi(self.__curve, X3, Y3, Z3, self.__order) + + def __neg__(self): + """Return negated point.""" + x, y, z = self.__coords + return PointJacobi(self.__curve, x, -y, z, self.__order) + + +class Point(AbstractPoint): + """A point on a short Weierstrass elliptic curve. Altering x and y is + forbidden, but they can be read by the x() and y() methods.""" + + def __init__(self, curve, x, y, order=None): + """curve, x, y, order; order (optional) is the order of this point.""" + super(Point, self).__init__() + self.__curve = curve + if GMPY: + self.__x = x and mpz(x) + self.__y = y and mpz(y) + self.__order = order and mpz(order) + else: + self.__x = x + self.__y = y + self.__order = order + # self.curve is allowed to be None only for INFINITY: + if self.__curve: + assert self.__curve.contains_point(x, y) + # for curves with cofactor 1, all points that are on the curve are + # scalar multiples of the base point, so performing multiplication is + # not necessary to verify that. See Section 3.2.2.1 of SEC 1 v2 + if curve and curve.cofactor() != 1 and order: + assert self * order == INFINITY + + @classmethod + def from_bytes( + cls, + curve, + data, + validate_encoding=True, + valid_encodings=None, + order=None, + ): + """ + Initialise the object from byte encoding of a point. + + The method does accept and automatically detect the type of point + encoding used. It supports the :term:`raw encoding`, + :term:`uncompressed`, :term:`compressed`, and :term:`hybrid` encodings. + + :param data: single point encoding of the public key + :type data: :term:`bytes-like object` + :param curve: the curve on which the public key is expected to lay + :type curve: ~ecdsa.ellipticcurve.CurveFp + :param validate_encoding: whether to verify that the encoding of the + point is self-consistent, defaults to True, has effect only + on ``hybrid`` encoding + :type validate_encoding: bool + :param valid_encodings: list of acceptable point encoding formats, + supported ones are: :term:`uncompressed`, :term:`compressed`, + :term:`hybrid`, and :term:`raw encoding` (specified with ``raw`` + name). All formats by default (specified with ``None``). + :type valid_encodings: :term:`set-like object` + :param int order: the point order, must be non zero when using + generator=True + + :raises `~ecdsa.errors.MalformedPointError`: if the public point does + not lay on the curve or the encoding is invalid + + :return: Point on curve + :rtype: Point + """ + coord_x, coord_y = super(Point, cls).from_bytes( + curve, data, validate_encoding, valid_encodings + ) + return Point(curve, coord_x, coord_y, order) + + def __eq__(self, other): + """Return True if the points are identical, False otherwise. + + Note: only points that lay on the same curve can be equal. 
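+
+ A hedged, illustrative comparison on a toy 23-element curve (far too
+ small for real cryptography)::
+
+ from ecdsa.ellipticcurve import CurveFp, Point, INFINITY
+ c = CurveFp(23, 1, 1) # y**2 = x**3 + x + 1 mod 23
+ assert Point(c, 3, 10) == Point(c, 3, 10) # same curve, same coords
+ assert Point(c, 3, 10) != INFINITY # finite point != infinity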
+ """ + if isinstance(other, Point): + return ( + self.__curve == other.__curve + and self.__x == other.__x + and self.__y == other.__y + ) + return NotImplemented + + def __ne__(self, other): + """Returns False if points are identical, True otherwise.""" + return not self == other + + def __neg__(self): + return Point(self.__curve, self.__x, self.__curve.p() - self.__y) + + def __add__(self, other): + """Add one point to another point.""" + + # X9.62 B.3: + + if not isinstance(other, Point): + return NotImplemented + if other == INFINITY: + return self + if self == INFINITY: + return other + assert self.__curve == other.__curve + if self.__x == other.__x: + if (self.__y + other.__y) % self.__curve.p() == 0: + return INFINITY + else: + return self.double() + + p = self.__curve.p() + + l = ( + (other.__y - self.__y) + * numbertheory.inverse_mod(other.__x - self.__x, p) + ) % p + + x3 = (l * l - self.__x - other.__x) % p + y3 = (l * (self.__x - x3) - self.__y) % p + + return Point(self.__curve, x3, y3) + + def __mul__(self, other): + """Multiply a point by an integer.""" + + def leftmost_bit(x): + assert x > 0 + result = 1 + while result <= x: + result = 2 * result + return result // 2 + + e = other + if e == 0 or (self.__order and e % self.__order == 0): + return INFINITY + if self == INFINITY: + return INFINITY + if e < 0: + return (-self) * (-e) + + # From X9.62 D.3.2: + + e3 = 3 * e + negative_self = Point(self.__curve, self.__x, -self.__y, self.__order) + i = leftmost_bit(e3) // 2 + result = self + # print_("Multiplying %s by %d (e3 = %d):" % (self, other, e3)) + while i > 1: + result = result.double() + if (e3 & i) != 0 and (e & i) == 0: + result = result + self + if (e3 & i) == 0 and (e & i) != 0: + result = result + negative_self + # print_(". . . i = %d, result = %s" % ( i, result )) + i = i // 2 + + return result + + def __rmul__(self, other): + """Multiply a point by an integer.""" + + return self * other + + def __str__(self): + if self == INFINITY: + return "infinity" + return "(%d,%d)" % (self.__x, self.__y) + + def double(self): + """Return a new point that is twice the old.""" + + if self == INFINITY: + return INFINITY + + # X9.62 B.3: + + p = self.__curve.p() + a = self.__curve.a() + + l = ( + (3 * self.__x * self.__x + a) + * numbertheory.inverse_mod(2 * self.__y, p) + ) % p + + x3 = (l * l - 2 * self.__x) % p + y3 = (l * (self.__x - x3) - self.__y) % p + + return Point(self.__curve, x3, y3) + + def x(self): + return self.__x + + def y(self): + return self.__y + + def curve(self): + return self.__curve + + def order(self): + return self.__order + + +class PointEdwards(AbstractPoint): + """Point on Twisted Edwards curve. + + Internally represents the coordinates on the curve using four parameters, + X, Y, Z, T. They correspond to affine parameters 'x' and 'y' like so: + + x = X / Z + y = Y / Z + x*y = T / Z + """ + + def __init__(self, curve, x, y, z, t, order=None, generator=False): + """ + Initialise a point that uses the extended coordinates internally. 
+ """ + super(PointEdwards, self).__init__() + self.__curve = curve + if GMPY: # pragma: no branch + self.__coords = (mpz(x), mpz(y), mpz(z), mpz(t)) + self.__order = order and mpz(order) + else: # pragma: no branch + self.__coords = (x, y, z, t) + self.__order = order + self.__generator = generator + self.__precompute = [] + + @classmethod + def from_bytes( + cls, + curve, + data, + validate_encoding=None, + valid_encodings=None, + order=None, + generator=False, + ): + """ + Initialise the object from byte encoding of a point. + + `validate_encoding` and `valid_encodings` are provided for + compatibility with Weierstrass curves, they are ignored for Edwards + points. + + :param data: single point encoding of the public key + :type data: :term:`bytes-like object` + :param curve: the curve on which the public key is expected to lay + :type curve: ecdsa.ellipticcurve.CurveEdTw + :param None validate_encoding: Ignored, encoding is always validated + :param None valid_encodings: Ignored, there is just one encoding + supported + :param int order: the point order, must be non zero when using + generator=True + :param bool generator: Flag to mark the point as a curve generator, + this will cause the library to pre-compute some values to + make repeated usages of the point much faster + + :raises `~ecdsa.errors.MalformedPointError`: if the public point does + not lay on the curve or the encoding is invalid + + :return: Initialised point on an Edwards curve + :rtype: PointEdwards + """ + coord_x, coord_y = super(PointEdwards, cls).from_bytes( + curve, data, validate_encoding, valid_encodings + ) + return PointEdwards( + curve, coord_x, coord_y, 1, coord_x * coord_y, order, generator + ) + + def _maybe_precompute(self): + if not self.__generator or self.__precompute: + return self.__precompute + + # since this code will execute just once, and it's fully deterministic, + # depend on atomicity of the last assignment to switch from empty + # self.__precompute to filled one and just ignore the unlikely + # situation when two threads execute it at the same time (as it won't + # lead to inconsistent __precompute) + order = self.__order + assert order + precompute = [] + i = 1 + order *= 2 + coord_x, coord_y, coord_z, coord_t = self.__coords + prime = self.__curve.p() + + doubler = PointEdwards( + self.__curve, coord_x, coord_y, coord_z, coord_t, order + ) + # for "protection" against Minerva we need 1 or 2 more bits depending + # on order bit size, but it's easier to just calculate one + # point more always + order *= 4 + + while i < order: + doubler = doubler.scale() + coord_x, coord_y = doubler.x(), doubler.y() + coord_t = coord_x * coord_y % prime + precompute.append((coord_x, coord_y, coord_t)) + + i *= 2 + doubler = doubler.double() + + self.__precompute = precompute + return self.__precompute + + def x(self): + """Return affine x coordinate.""" + X1, _, Z1, _ = self.__coords + if Z1 == 1: + return X1 + p = self.__curve.p() + z_inv = numbertheory.inverse_mod(Z1, p) + return X1 * z_inv % p + + def y(self): + """Return affine y coordinate.""" + _, Y1, Z1, _ = self.__coords + if Z1 == 1: + return Y1 + p = self.__curve.p() + z_inv = numbertheory.inverse_mod(Z1, p) + return Y1 * z_inv % p + + def curve(self): + """Return the curve of the point.""" + return self.__curve + + def order(self): + return self.__order + + def scale(self): + """ + Return point scaled so that z == 1. + + Modifies point in place, returns self. 
+ """ + X1, Y1, Z1, _ = self.__coords + if Z1 == 1: + return self + + p = self.__curve.p() + z_inv = numbertheory.inverse_mod(Z1, p) + x = X1 * z_inv % p + y = Y1 * z_inv % p + t = x * y % p + self.__coords = (x, y, 1, t) + return self + + def __eq__(self, other): + """Compare for equality two points with each-other. + + Note: only points on the same curve can be equal. + """ + x1, y1, z1, t1 = self.__coords + if other is INFINITY: + return not x1 or not t1 + if isinstance(other, PointEdwards): + x2, y2, z2, t2 = other.__coords + else: + return NotImplemented + if self.__curve != other.curve(): + return False + p = self.__curve.p() + + # cross multiply to eliminate divisions + xn1 = x1 * z2 % p + xn2 = x2 * z1 % p + yn1 = y1 * z2 % p + yn2 = y2 * z1 % p + return xn1 == xn2 and yn1 == yn2 + + def __ne__(self, other): + """Compare for inequality two points with each-other.""" + return not self == other + + def _add(self, X1, Y1, Z1, T1, X2, Y2, Z2, T2, p, a): + """add two points, assume sane parameters.""" + # after add-2008-hwcd-2 + # from https://hyperelliptic.org/EFD/g1p/auto-twisted-extended.html + # NOTE: there are more efficient formulas for Z1 or Z2 == 1 + A = X1 * X2 % p + B = Y1 * Y2 % p + C = Z1 * T2 % p + D = T1 * Z2 % p + E = D + C + F = ((X1 - Y1) * (X2 + Y2) + B - A) % p + G = B + a * A + H = D - C + if not H: + return self._double(X1, Y1, Z1, T1, p, a) + X3 = E * F % p + Y3 = G * H % p + T3 = E * H % p + Z3 = F * G % p + + return X3, Y3, Z3, T3 + + def __add__(self, other): + """Add point to another.""" + if other == INFINITY: + return self + if ( + not isinstance(other, PointEdwards) + or self.__curve != other.__curve + ): + raise ValueError("The other point is on a different curve.") + + p, a = self.__curve.p(), self.__curve.a() + X1, Y1, Z1, T1 = self.__coords + X2, Y2, Z2, T2 = other.__coords + + X3, Y3, Z3, T3 = self._add(X1, Y1, Z1, T1, X2, Y2, Z2, T2, p, a) + + if not X3 or not T3: + return INFINITY + return PointEdwards(self.__curve, X3, Y3, Z3, T3, self.__order) + + def __radd__(self, other): + """Add other to self.""" + return self + other + + def _double(self, X1, Y1, Z1, T1, p, a): + """Double the point, assume sane parameters.""" + # after "dbl-2008-hwcd" + # from https://hyperelliptic.org/EFD/g1p/auto-twisted-extended.html + # NOTE: there are more efficient formulas for Z1 == 1 + A = X1 * X1 % p + B = Y1 * Y1 % p + C = 2 * Z1 * Z1 % p + D = a * A % p + E = ((X1 + Y1) * (X1 + Y1) - A - B) % p + G = D + B + F = G - C + H = D - B + X3 = E * F % p + Y3 = G * H % p + T3 = E * H % p + Z3 = F * G % p + + return X3, Y3, Z3, T3 + + def double(self): + """Return point added to itself.""" + X1, Y1, Z1, T1 = self.__coords + + if not X1 or not T1: + return INFINITY + + p, a = self.__curve.p(), self.__curve.a() + + X3, Y3, Z3, T3 = self._double(X1, Y1, Z1, T1, p, a) + + if not X3 or not T3: + return INFINITY + return PointEdwards(self.__curve, X3, Y3, Z3, T3, self.__order) + + def __rmul__(self, other): + """Multiply point by an integer.""" + return self * other + + def _mul_precompute(self, other): + """Multiply point by integer with precomputation table.""" + X3, Y3, Z3, T3, p, a = 0, 1, 1, 0, self.__curve.p(), self.__curve.a() + _add = self._add + for X2, Y2, T2 in self.__precompute: + rem = other % 4 + if rem == 0 or rem == 2: + other //= 2 + elif rem == 3: + other = (other + 1) // 2 + X3, Y3, Z3, T3 = _add(X3, Y3, Z3, T3, -X2, Y2, 1, -T2, p, a) + else: + assert rem == 1 + other = (other - 1) // 2 + X3, Y3, Z3, T3 = _add(X3, Y3, Z3, T3, X2, Y2, 1, T2, p, a) + + if not X3 
or not T3: + return INFINITY + + return PointEdwards(self.__curve, X3, Y3, Z3, T3, self.__order) + + def __mul__(self, other): + """Multiply point by an integer.""" + X2, Y2, Z2, T2 = self.__coords + if not X2 or not T2 or not other: + return INFINITY + if other == 1: + return self + if self.__order: + # order*2 as a "protection" for Minerva + other = other % (self.__order * 2) + if self._maybe_precompute(): + return self._mul_precompute(other) + + X3, Y3, Z3, T3 = 0, 1, 1, 0 # INFINITY in extended coordinates + p, a = self.__curve.p(), self.__curve.a() + _double = self._double + _add = self._add + + for i in reversed(self._naf(other)): + X3, Y3, Z3, T3 = _double(X3, Y3, Z3, T3, p, a) + if i < 0: + X3, Y3, Z3, T3 = _add(X3, Y3, Z3, T3, -X2, Y2, Z2, -T2, p, a) + elif i > 0: + X3, Y3, Z3, T3 = _add(X3, Y3, Z3, T3, X2, Y2, Z2, T2, p, a) + + if not X3 or not T3: + return INFINITY + + return PointEdwards(self.__curve, X3, Y3, Z3, T3, self.__order) + + +# This one point is the Point At Infinity for all purposes: +INFINITY = Point(None, None, None) diff --git a/env/lib/python3.8/site-packages/ecdsa/errors.py b/env/lib/python3.8/site-packages/ecdsa/errors.py new file mode 100644 index 00000000..0184c05b --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/errors.py @@ -0,0 +1,4 @@ +class MalformedPointError(AssertionError): + """Raised in case the encoding of private or public key is malformed.""" + + pass diff --git a/env/lib/python3.8/site-packages/ecdsa/keys.py b/env/lib/python3.8/site-packages/ecdsa/keys.py new file mode 100644 index 00000000..2b7d3168 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/keys.py @@ -0,0 +1,1612 @@ +""" +Primary classes for performing signing and verification operations. +""" + +import binascii +from hashlib import sha1 +import os +from six import PY2, b +from . import ecdsa, eddsa +from . import der +from . import rfc6979 +from . import ellipticcurve +from .curves import NIST192p, Curve, Ed25519, Ed448 +from .ecdsa import RSZeroError +from .util import string_to_number, number_to_string, randrange +from .util import sigencode_string, sigdecode_string, bit_length +from .util import ( + oid_ecPublicKey, + encoded_oid_ecPublicKey, + oid_ecDH, + oid_ecMQV, + MalformedSignature, +) +from ._compat import normalise_bytes +from .errors import MalformedPointError +from .ellipticcurve import PointJacobi, CurveEdTw + + +__all__ = [ + "BadSignatureError", + "BadDigestError", + "VerifyingKey", + "SigningKey", + "MalformedPointError", +] + + +class BadSignatureError(Exception): + """ + Raised when verification of signature failed. 
+ + Will be raised irrespective of reason of the failure: + + * the calculated or provided hash does not match the signature + * the signature does not match the curve/public key + * the encoding of the signature is malformed + * the size of the signature does not match the curve of the VerifyingKey + """ + + pass + + +class BadDigestError(Exception): + """Raised in case the selected hash is too large for the curve.""" + + pass + + +def _truncate_and_convert_digest(digest, curve, allow_truncate): + """Truncates and converts digest to an integer.""" + if not allow_truncate: + if len(digest) > curve.baselen: + raise BadDigestError( + "this curve ({0}) is too short " + "for the length of your digest ({1})".format( + curve.name, 8 * len(digest) + ) + ) + else: + digest = digest[: curve.baselen] + number = string_to_number(digest) + if allow_truncate: + max_length = bit_length(curve.order) + # we don't use bit_length(number) as that truncates leading zeros + length = len(digest) * 8 + + # See NIST FIPS 186-4: + # + # When the length of the output of the hash function is greater + # than N (i.e., the bit length of q), then the leftmost N bits of + # the hash function output block shall be used in any calculation + # using the hash function output during the generation or + # verification of a digital signature. + # + # as such, we need to shift-out the low-order bits: + number >>= max(0, length - max_length) + + return number + + +class VerifyingKey(object): + """ + Class for handling keys that can verify signatures (public keys). + + :ivar `~ecdsa.curves.Curve` ~.curve: The Curve over which all the + cryptographic operations will take place + :ivar default_hashfunc: the function that will be used for hashing the + data. Should implement the same API as hashlib.sha1 + :vartype default_hashfunc: callable + :ivar pubkey: the actual public key + :vartype pubkey: ~ecdsa.ecdsa.Public_key + """ + + def __init__(self, _error__please_use_generate=None): + """Unsupported, please use one of the classmethods to initialise.""" + if not _error__please_use_generate: + raise TypeError( + "Please use VerifyingKey.generate() to construct me" + ) + self.curve = None + self.default_hashfunc = None + self.pubkey = None + + def __repr__(self): + pub_key = self.to_string("compressed") + if self.default_hashfunc: + hash_name = self.default_hashfunc().name + else: + hash_name = "None" + return "VerifyingKey.from_string({0!r}, {1!r}, {2})".format( + pub_key, self.curve, hash_name + ) + + def __eq__(self, other): + """Return True if the points are identical, False otherwise.""" + if isinstance(other, VerifyingKey): + return self.curve == other.curve and self.pubkey == other.pubkey + return NotImplemented + + def __ne__(self, other): + """Return False if the points are identical, True otherwise.""" + return not self == other + + @classmethod + def from_public_point( + cls, point, curve=NIST192p, hashfunc=sha1, validate_point=True + ): + """ + Initialise the object from a Point object. + + This is a low-level method, generally you will not want to use it. 
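+
+ A hedged sketch of direct use (assumes ``pt`` is a point already
+ known to lie on the chosen curve)::
+
+ from ecdsa.curves import NIST256p
+ vk = VerifyingKey.from_public_point(pt, curve=NIST256p)
+
+ In most cases :func:`~VerifyingKey.from_string` or
+ :func:`~VerifyingKey.from_pem` are the more convenient entry points.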
+ + :param point: The point to wrap around, the actual public key + :type point: ~ecdsa.ellipticcurve.AbstractPoint + :param curve: The curve on which the point needs to reside, defaults + to NIST192p + :type curve: ~ecdsa.curves.Curve + :param hashfunc: The default hash function that will be used for + verification, needs to implement the same interface + as :py:class:`hashlib.sha1` + :type hashfunc: callable + :type bool validate_point: whether to check if the point lays on curve + should always be used if the public point is not a result + of our own calculation + + :raises MalformedPointError: if the public point does not lay on the + curve + + :return: Initialised VerifyingKey object + :rtype: VerifyingKey + """ + self = cls(_error__please_use_generate=True) + if isinstance(curve.curve, CurveEdTw): + raise ValueError("Method incompatible with Edwards curves") + if not isinstance(point, ellipticcurve.PointJacobi): + point = ellipticcurve.PointJacobi.from_affine(point) + self.curve = curve + self.default_hashfunc = hashfunc + try: + self.pubkey = ecdsa.Public_key( + curve.generator, point, validate_point + ) + except ecdsa.InvalidPointError: + raise MalformedPointError("Point does not lay on the curve") + self.pubkey.order = curve.order + return self + + def precompute(self, lazy=False): + """ + Precompute multiplication tables for faster signature verification. + + Calling this method will cause the library to precompute the + scalar multiplication tables, used in signature verification. + While it's an expensive operation (comparable to performing + as many signatures as the bit size of the curve, i.e. 256 for NIST256p) + it speeds up verification 2 times. You should call this method + if you expect to verify hundreds of signatures (or more) using the same + VerifyingKey object. + + Note: You should call this method only once, this method generates a + new precomputation table every time it's called. + + :param bool lazy: whether to calculate the precomputation table now + (if set to False) or if it should be delayed to the time of first + use (when set to True) + """ + if isinstance(self.curve.curve, CurveEdTw): + pt = self.pubkey.point + self.pubkey.point = ellipticcurve.PointEdwards( + pt.curve(), + pt.x(), + pt.y(), + 1, + pt.x() * pt.y(), + self.curve.order, + generator=True, + ) + else: + self.pubkey.point = ellipticcurve.PointJacobi.from_affine( + self.pubkey.point, True + ) + # as precomputation in now delayed to the time of first use of the + # point and we were asked specifically to precompute now, make + # sure the precomputation is performed now to preserve the behaviour + if not lazy: + self.pubkey.point * 2 + + @classmethod + def from_string( + cls, + string, + curve=NIST192p, + hashfunc=sha1, + validate_point=True, + valid_encodings=None, + ): + """ + Initialise the object from byte encoding of public key. + + The method does accept and automatically detect the type of point + encoding used. It supports the :term:`raw encoding`, + :term:`uncompressed`, :term:`compressed`, and :term:`hybrid` encodings. + It also works with the native encoding of Ed25519 and Ed448 public + keys (technically those are compressed, but encoded differently than + in other signature systems). + + Note, while the method is named "from_string" it's a misnomer from + Python 2 days when there were no binary strings. In Python 3 the + input needs to be a bytes-like object. 
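+
+ Illustrative sketch (assumes ``pub_bytes`` holds an uncompressed,
+ ``0x04``-prefixed encoding of a NIST256p public point)::
+
+ from ecdsa.curves import NIST256p
+ vk = VerifyingKey.from_string(
+ pub_bytes,
+ curve=NIST256p,
+ valid_encodings=("uncompressed",),
+ )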
+ + :param string: single point encoding of the public key + :type string: :term:`bytes-like object` + :param curve: the curve on which the public key is expected to lay + :type curve: ~ecdsa.curves.Curve + :param hashfunc: The default hash function that will be used for + verification, needs to implement the same interface as + hashlib.sha1. Ignored for EdDSA. + :type hashfunc: callable + :param validate_point: whether to verify that the point lays on the + provided curve or not, defaults to True. Ignored for EdDSA. + :type validate_point: bool + :param valid_encodings: list of acceptable point encoding formats, + supported ones are: :term:`uncompressed`, :term:`compressed`, + :term:`hybrid`, and :term:`raw encoding` (specified with ``raw`` + name). All formats by default (specified with ``None``). + Ignored for EdDSA. + :type valid_encodings: :term:`set-like object` + + :raises MalformedPointError: if the public point does not lay on the + curve or the encoding is invalid + + :return: Initialised VerifyingKey object + :rtype: VerifyingKey + """ + if isinstance(curve.curve, CurveEdTw): + self = cls(_error__please_use_generate=True) + self.curve = curve + self.default_hashfunc = None # ignored for EdDSA + try: + self.pubkey = eddsa.PublicKey(curve.generator, string) + except ValueError: + raise MalformedPointError("Malformed point for the curve") + return self + + point = PointJacobi.from_bytes( + curve.curve, + string, + validate_encoding=validate_point, + valid_encodings=valid_encodings, + ) + return cls.from_public_point(point, curve, hashfunc, validate_point) + + @classmethod + def from_pem( + cls, + string, + hashfunc=sha1, + valid_encodings=None, + valid_curve_encodings=None, + ): + """ + Initialise from public key stored in :term:`PEM` format. + + The PEM header of the key should be ``BEGIN PUBLIC KEY``. + + See the :func:`~VerifyingKey.from_der()` method for details of the + format supported. + + Note: only a single PEM object decoding is supported in provided + string. + + :param string: text with PEM-encoded public ECDSA key + :type string: str + :param valid_encodings: list of allowed point encodings. + By default :term:`uncompressed`, :term:`compressed`, and + :term:`hybrid`. To read malformed files, include + :term:`raw encoding` with ``raw`` in the list. + :type valid_encodings: :term:`set-like object` + :param valid_curve_encodings: list of allowed encoding formats + for curve parameters. By default (``None``) all are supported: + ``named_curve`` and ``explicit``. + :type valid_curve_encodings: :term:`set-like object` + + + :return: Initialised VerifyingKey object + :rtype: VerifyingKey + """ + return cls.from_der( + der.unpem(string), + hashfunc=hashfunc, + valid_encodings=valid_encodings, + valid_curve_encodings=valid_curve_encodings, + ) + + @classmethod + def from_der( + cls, + string, + hashfunc=sha1, + valid_encodings=None, + valid_curve_encodings=None, + ): + """ + Initialise the key stored in :term:`DER` format. + + The expected format of the key is the SubjectPublicKeyInfo structure + from RFC5912 (for RSA keys, it's known as the PKCS#1 format):: + + SubjectPublicKeyInfo {PUBLIC-KEY: IOSet} ::= SEQUENCE { + algorithm AlgorithmIdentifier {PUBLIC-KEY, {IOSet}}, + subjectPublicKey BIT STRING + } + + Note: only public EC keys are supported by this method. The + SubjectPublicKeyInfo.algorithm.algorithm field must specify + id-ecPublicKey (see RFC3279). 
+ + Only the named curve encoding is supported, thus the + SubjectPublicKeyInfo.algorithm.parameters field needs to be an + object identifier. A sequence in that field indicates an explicit + parameter curve encoding, this format is not supported. A NULL object + in that field indicates an "implicitlyCA" encoding, where the curve + parameters come from CA certificate, those, again, are not supported. + + :param string: binary string with the DER encoding of public ECDSA key + :type string: bytes-like object + :param valid_encodings: list of allowed point encodings. + By default :term:`uncompressed`, :term:`compressed`, and + :term:`hybrid`. To read malformed files, include + :term:`raw encoding` with ``raw`` in the list. + :type valid_encodings: :term:`set-like object` + :param valid_curve_encodings: list of allowed encoding formats + for curve parameters. By default (``None``) all are supported: + ``named_curve`` and ``explicit``. + :type valid_curve_encodings: :term:`set-like object` + + :return: Initialised VerifyingKey object + :rtype: VerifyingKey + """ + if valid_encodings is None: + valid_encodings = set(["uncompressed", "compressed", "hybrid"]) + string = normalise_bytes(string) + # [[oid_ecPublicKey,oid_curve], point_str_bitstring] + s1, empty = der.remove_sequence(string) + if empty != b"": + raise der.UnexpectedDER( + "trailing junk after DER pubkey: %s" % binascii.hexlify(empty) + ) + s2, point_str_bitstring = der.remove_sequence(s1) + # s2 = oid_ecPublicKey,oid_curve + oid_pk, rest = der.remove_object(s2) + if oid_pk in (Ed25519.oid, Ed448.oid): + if oid_pk == Ed25519.oid: + curve = Ed25519 + else: + assert oid_pk == Ed448.oid + curve = Ed448 + point_str, empty = der.remove_bitstring(point_str_bitstring, 0) + if empty: + raise der.UnexpectedDER("trailing junk after public key") + return cls.from_string(point_str, curve, None) + if not oid_pk == oid_ecPublicKey: + raise der.UnexpectedDER( + "Unexpected object identifier in DER " + "encoding: {0!r}".format(oid_pk) + ) + curve = Curve.from_der(rest, valid_curve_encodings) + point_str, empty = der.remove_bitstring(point_str_bitstring, 0) + if empty != b"": + raise der.UnexpectedDER( + "trailing junk after pubkey pointstring: %s" + % binascii.hexlify(empty) + ) + # raw encoding of point is invalid in DER files + if len(point_str) == curve.verifying_key_length: + raise der.UnexpectedDER("Malformed encoding of public point") + return cls.from_string( + point_str, + curve, + hashfunc=hashfunc, + valid_encodings=valid_encodings, + ) + + @classmethod + def from_public_key_recovery( + cls, + signature, + data, + curve, + hashfunc=sha1, + sigdecode=sigdecode_string, + allow_truncate=True, + ): + """ + Return keys that can be used as verifiers of the provided signature. + + Tries to recover the public key that can be used to verify the + signature, usually returns two keys like that. 
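+
+ A hedged usage sketch (``sig`` and ``msg`` stand for a raw-encoded
+ signature and the signed message, both bytes-like, over NIST256p)::
+
+ from hashlib import sha256
+ from ecdsa.curves import NIST256p
+ candidates = VerifyingKey.from_public_key_recovery(
+ sig, msg, NIST256p, hashfunc=sha256
+ )
+ # typically two candidates are returned; pick the one matching
+ # out-of-band knowledge, e.g. a known key fingerprint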
+ + :param signature: the byte string with the encoded signature + :type signature: bytes-like object + :param data: the data to be hashed for signature verification + :type data: bytes-like object + :param curve: the curve over which the signature was performed + :type curve: ~ecdsa.curves.Curve + :param hashfunc: The default hash function that will be used for + verification, needs to implement the same interface as hashlib.sha1 + :type hashfunc: callable + :param sigdecode: Callable to define the way the signature needs to + be decoded to an object, needs to handle `signature` as the + first parameter, the curve order (an int) as the second and return + a tuple with two integers, "r" as the first one and "s" as the + second one. See :func:`ecdsa.util.sigdecode_string` and + :func:`ecdsa.util.sigdecode_der` for examples. + :param bool allow_truncate: if True, the provided hashfunc can generate + values larger than the bit size of the order of the curve, the + extra bits (at the end of the digest) will be truncated. + :type sigdecode: callable + + :return: Initialised VerifyingKey objects + :rtype: list of VerifyingKey + """ + if isinstance(curve.curve, CurveEdTw): + raise ValueError("Method unsupported for Edwards curves") + data = normalise_bytes(data) + digest = hashfunc(data).digest() + return cls.from_public_key_recovery_with_digest( + signature, + digest, + curve, + hashfunc=hashfunc, + sigdecode=sigdecode, + allow_truncate=allow_truncate, + ) + + @classmethod + def from_public_key_recovery_with_digest( + cls, + signature, + digest, + curve, + hashfunc=sha1, + sigdecode=sigdecode_string, + allow_truncate=False, + ): + """ + Return keys that can be used as verifiers of the provided signature. + + Tries to recover the public key that can be used to verify the + signature, usually returns two keys like that. + + :param signature: the byte string with the encoded signature + :type signature: bytes-like object + :param digest: the hash value of the message signed by the signature + :type digest: bytes-like object + :param curve: the curve over which the signature was performed + :type curve: ~ecdsa.curves.Curve + :param hashfunc: The default hash function that will be used for + verification, needs to implement the same interface as hashlib.sha1 + :type hashfunc: callable + :param sigdecode: Callable to define the way the signature needs to + be decoded to an object, needs to handle `signature` as the + first parameter, the curve order (an int) as the second and return + a tuple with two integers, "r" as the first one and "s" as the + second one. See :func:`ecdsa.util.sigdecode_string` and + :func:`ecdsa.util.sigdecode_der` for examples. + :type sigdecode: callable + :param bool allow_truncate: if True, the provided hashfunc can generate + values larger than the bit size of the order of the curve (and + the length of provided `digest`), the extra bits (at the end of the + digest) will be truncated. 
+ + :return: Initialised VerifyingKey object + :rtype: VerifyingKey + """ + if isinstance(curve.curve, CurveEdTw): + raise ValueError("Method unsupported for Edwards curves") + generator = curve.generator + r, s = sigdecode(signature, generator.order()) + sig = ecdsa.Signature(r, s) + + digest = normalise_bytes(digest) + digest_as_number = _truncate_and_convert_digest( + digest, curve, allow_truncate + ) + pks = sig.recover_public_keys(digest_as_number, generator) + + # Transforms the ecdsa.Public_key object into a VerifyingKey + verifying_keys = [ + cls.from_public_point(pk.point, curve, hashfunc) for pk in pks + ] + return verifying_keys + + def to_string(self, encoding="raw"): + """ + Convert the public key to a byte string. + + The method by default uses the :term:`raw encoding` (specified + by `encoding="raw"`. It can also output keys in :term:`uncompressed`, + :term:`compressed` and :term:`hybrid` formats. + + Remember that the curve identification is not part of the encoding + so to decode the point using :func:`~VerifyingKey.from_string`, curve + needs to be specified. + + Note: while the method is called "to_string", it's a misnomer from + Python 2 days when character strings and byte strings shared type. + On Python 3 the returned type will be `bytes`. + + :return: :term:`raw encoding` of the public key (public point) on the + curve + :rtype: bytes + """ + assert encoding in ("raw", "uncompressed", "compressed", "hybrid") + return self.pubkey.point.to_bytes(encoding) + + def to_pem( + self, point_encoding="uncompressed", curve_parameters_encoding=None + ): + """ + Convert the public key to the :term:`PEM` format. + + The PEM header of the key will be ``BEGIN PUBLIC KEY``. + + The format of the key is described in the + :func:`~VerifyingKey.from_der()` method. + This method supports only "named curve" encoding of keys. + + :param str point_encoding: specification of the encoding format + of public keys. "uncompressed" is most portable, "compressed" is + smallest. "hybrid" is uncommon and unsupported by most + implementations, it is as big as "uncompressed". + :param str curve_parameters_encoding: the encoding for curve parameters + to use, by default tries to use ``named_curve`` encoding, + if that is not possible, falls back to ``explicit`` encoding. + + :return: portable encoding of the public key + :rtype: bytes + + .. warning:: The PEM is encoded to US-ASCII, it needs to be + re-encoded if the system is incompatible (e.g. uses UTF-16) + """ + return der.topem( + self.to_der(point_encoding, curve_parameters_encoding), + "PUBLIC KEY", + ) + + def to_der( + self, point_encoding="uncompressed", curve_parameters_encoding=None + ): + """ + Convert the public key to the :term:`DER` format. + + The format of the key is described in the + :func:`~VerifyingKey.from_der()` method. + This method supports only "named curve" encoding of keys. + + :param str point_encoding: specification of the encoding format + of public keys. "uncompressed" is most portable, "compressed" is + smallest. "hybrid" is uncommon and unsupported by most + implementations, it is as big as "uncompressed". + :param str curve_parameters_encoding: the encoding for curve parameters + to use, by default tries to use ``named_curve`` encoding, + if that is not possible, falls back to ``explicit`` encoding. 
+ + :return: DER encoding of the public key + :rtype: bytes + """ + if point_encoding == "raw": + raise ValueError("raw point_encoding not allowed in DER") + point_str = self.to_string(point_encoding) + if isinstance(self.curve.curve, CurveEdTw): + return der.encode_sequence( + der.encode_sequence(der.encode_oid(*self.curve.oid)), + der.encode_bitstring(bytes(point_str), 0), + ) + return der.encode_sequence( + der.encode_sequence( + encoded_oid_ecPublicKey, + self.curve.to_der(curve_parameters_encoding, point_encoding), + ), + # 0 is the number of unused bits in the + # bit string + der.encode_bitstring(point_str, 0), + ) + + def verify( + self, + signature, + data, + hashfunc=None, + sigdecode=sigdecode_string, + allow_truncate=True, + ): + """ + Verify a signature made over provided data. + + Will hash `data` to verify the signature. + + By default expects signature in :term:`raw encoding`. Can also be used + to verify signatures in ASN.1 DER encoding by using + :func:`ecdsa.util.sigdecode_der` + as the `sigdecode` parameter. + + :param signature: encoding of the signature + :type signature: sigdecode method dependent + :param data: data signed by the `signature`, will be hashed using + `hashfunc`, if specified, or default hash function + :type data: :term:`bytes-like object` + :param hashfunc: The default hash function that will be used for + verification, needs to implement the same interface as hashlib.sha1 + :type hashfunc: callable + :param sigdecode: Callable to define the way the signature needs to + be decoded to an object, needs to handle `signature` as the + first parameter, the curve order (an int) as the second and return + a tuple with two integers, "r" as the first one and "s" as the + second one. See :func:`ecdsa.util.sigdecode_string` and + :func:`ecdsa.util.sigdecode_der` for examples. + :type sigdecode: callable + :param bool allow_truncate: if True, the provided digest can have + bigger bit-size than the order of the curve, the extra bits (at + the end of the digest) will be truncated. Use it when verifying + SHA-384 output using NIST256p or in similar situations. Defaults to + True. + + :raises BadSignatureError: if the signature is invalid or malformed + + :return: True if the verification was successful + :rtype: bool + """ + # signature doesn't have to be a bytes-like-object so don't normalise + # it, the decoders will do that + data = normalise_bytes(data) + if isinstance(self.curve.curve, CurveEdTw): + signature = normalise_bytes(signature) + try: + return self.pubkey.verify(data, signature) + except (ValueError, MalformedPointError) as e: + raise BadSignatureError("Signature verification failed", e) + + hashfunc = hashfunc or self.default_hashfunc + digest = hashfunc(data).digest() + return self.verify_digest(signature, digest, sigdecode, allow_truncate) + + def verify_digest( + self, + signature, + digest, + sigdecode=sigdecode_string, + allow_truncate=False, + ): + """ + Verify a signature made over provided hash value. + + By default expects signature in :term:`raw encoding`. Can also be used + to verify signatures in ASN.1 DER encoding by using + :func:`ecdsa.util.sigdecode_der` + as the `sigdecode` parameter. + + :param signature: encoding of the signature + :type signature: sigdecode method dependent + :param digest: raw hash value that the signature authenticates. 
+ :type digest: :term:`bytes-like object` + :param sigdecode: Callable to define the way the signature needs to + be decoded to an object, needs to handle `signature` as the + first parameter, the curve order (an int) as the second and return + a tuple with two integers, "r" as the first one and "s" as the + second one. See :func:`ecdsa.util.sigdecode_string` and + :func:`ecdsa.util.sigdecode_der` for examples. + :type sigdecode: callable + :param bool allow_truncate: if True, the provided digest can have + bigger bit-size than the order of the curve, the extra bits (at + the end of the digest) will be truncated. Use it when verifying + SHA-384 output using NIST256p or in similar situations. + + :raises BadSignatureError: if the signature is invalid or malformed + :raises BadDigestError: if the provided digest is too big for the curve + associated with this VerifyingKey and allow_truncate was not set + + :return: True if the verification was successful + :rtype: bool + """ + # signature doesn't have to be a bytes-like-object so don't normalise + # it, the decoders will do that + digest = normalise_bytes(digest) + number = _truncate_and_convert_digest( + digest, + self.curve, + allow_truncate, + ) + + try: + r, s = sigdecode(signature, self.pubkey.order) + except (der.UnexpectedDER, MalformedSignature) as e: + raise BadSignatureError("Malformed formatting of signature", e) + sig = ecdsa.Signature(r, s) + if self.pubkey.verifies(number, sig): + return True + raise BadSignatureError("Signature verification failed") + + +class SigningKey(object): + """ + Class for handling keys that can create signatures (private keys). + + :ivar `~ecdsa.curves.Curve` curve: The Curve over which all the + cryptographic operations will take place + :ivar default_hashfunc: the function that will be used for hashing the + data. 
Should implement the same API as :py:class:`hashlib.sha1` + :ivar int baselen: the length of a :term:`raw encoding` of private key + :ivar `~ecdsa.keys.VerifyingKey` verifying_key: the public key + associated with this private key + :ivar `~ecdsa.ecdsa.Private_key` privkey: the actual private key + """ + + def __init__(self, _error__please_use_generate=None): + """Unsupported, please use one of the classmethods to initialise.""" + if not _error__please_use_generate: + raise TypeError("Please use SigningKey.generate() to construct me") + self.curve = None + self.default_hashfunc = None + self.baselen = None + self.verifying_key = None + self.privkey = None + + def __eq__(self, other): + """Return True if the points are identical, False otherwise.""" + if isinstance(other, SigningKey): + return ( + self.curve == other.curve + and self.verifying_key == other.verifying_key + and self.privkey == other.privkey + ) + return NotImplemented + + def __ne__(self, other): + """Return False if the points are identical, True otherwise.""" + return not self == other + + @classmethod + def _twisted_edwards_keygen(cls, curve, entropy): + """Generate a private key on a Twisted Edwards curve.""" + if not entropy: + entropy = os.urandom + random = entropy(curve.baselen) + private_key = eddsa.PrivateKey(curve.generator, random) + public_key = private_key.public_key() + + verifying_key = VerifyingKey.from_string( + public_key.public_key(), curve + ) + + self = cls(_error__please_use_generate=True) + self.curve = curve + self.default_hashfunc = None + self.baselen = curve.baselen + self.privkey = private_key + self.verifying_key = verifying_key + return self + + @classmethod + def _weierstrass_keygen(cls, curve, entropy, hashfunc): + """Generate a private key on a Weierstrass curve.""" + secexp = randrange(curve.order, entropy) + return cls.from_secret_exponent(secexp, curve, hashfunc) + + @classmethod + def generate(cls, curve=NIST192p, entropy=None, hashfunc=sha1): + """ + Generate a random private key. + + :param curve: The curve on which the point needs to reside, defaults + to NIST192p + :type curve: ~ecdsa.curves.Curve + :param entropy: Source of randomness for generating the private keys, + should provide cryptographically secure random numbers if the keys + need to be secure. Uses os.urandom() by default. + :type entropy: callable + :param hashfunc: The default hash function that will be used for + signing, needs to implement the same interface + as hashlib.sha1 + :type hashfunc: callable + + :return: Initialised SigningKey object + :rtype: SigningKey + """ + if isinstance(curve.curve, CurveEdTw): + return cls._twisted_edwards_keygen(curve, entropy) + return cls._weierstrass_keygen(curve, entropy, hashfunc) + + @classmethod + def from_secret_exponent(cls, secexp, curve=NIST192p, hashfunc=sha1): + """ + Create a private key from a random integer. + + Note: it's a low level method, it's recommended to use the + :func:`~SigningKey.generate` method to create private keys. + + :param int secexp: secret multiplier (the actual private key in ECDSA). + Needs to be an integer between 1 and the curve order. 
+ :param curve: The curve on which the point needs to reside + :type curve: ~ecdsa.curves.Curve + :param hashfunc: The default hash function that will be used for + signing, needs to implement the same interface + as hashlib.sha1 + :type hashfunc: callable + + :raises MalformedPointError: when the provided secexp is too large + or too small for the curve selected + :raises RuntimeError: if the generation of public key from private + key failed + + :return: Initialised SigningKey object + :rtype: SigningKey + """ + if isinstance(curve.curve, CurveEdTw): + raise ValueError( + "Edwards keys don't support setting the secret scalar " + "(exponent) directly" + ) + self = cls(_error__please_use_generate=True) + self.curve = curve + self.default_hashfunc = hashfunc + self.baselen = curve.baselen + n = curve.order + if not 1 <= secexp < n: + raise MalformedPointError( + "Invalid value for secexp, expected integer " + "between 1 and {0}".format(n) + ) + pubkey_point = curve.generator * secexp + if hasattr(pubkey_point, "scale"): + pubkey_point = pubkey_point.scale() + self.verifying_key = VerifyingKey.from_public_point( + pubkey_point, curve, hashfunc, False + ) + pubkey = self.verifying_key.pubkey + self.privkey = ecdsa.Private_key(pubkey, secexp) + self.privkey.order = n + return self + + @classmethod + def from_string(cls, string, curve=NIST192p, hashfunc=sha1): + """ + Decode the private key from :term:`raw encoding`. + + Note: the name of this method is a misnomer coming from days of + Python 2, when binary strings and character strings shared a type. + In Python 3, the expected type is `bytes`. + + :param string: the raw encoding of the private key + :type string: :term:`bytes-like object` + :param curve: The curve on which the point needs to reside + :type curve: ~ecdsa.curves.Curve + :param hashfunc: The default hash function that will be used for + signing, needs to implement the same interface + as hashlib.sha1 + :type hashfunc: callable + + :raises MalformedPointError: if the length of encoding doesn't match + the provided curve or the encoded values is too large + :raises RuntimeError: if the generation of public key from private + key failed + + :return: Initialised SigningKey object + :rtype: SigningKey + """ + string = normalise_bytes(string) + + if len(string) != curve.baselen: + raise MalformedPointError( + "Invalid length of private key, received {0}, " + "expected {1}".format(len(string), curve.baselen) + ) + if isinstance(curve.curve, CurveEdTw): + self = cls(_error__please_use_generate=True) + self.curve = curve + self.default_hashfunc = None # Ignored for EdDSA + self.baselen = curve.baselen + self.privkey = eddsa.PrivateKey(curve.generator, string) + self.verifying_key = VerifyingKey.from_string( + self.privkey.public_key().public_key(), curve + ) + return self + secexp = string_to_number(string) + return cls.from_secret_exponent(secexp, curve, hashfunc) + + @classmethod + def from_pem(cls, string, hashfunc=sha1, valid_curve_encodings=None): + """ + Initialise from key stored in :term:`PEM` format. + + The PEM formats supported are the un-encrypted RFC5915 + (the ssleay format) supported by OpenSSL, and the more common + un-encrypted RFC5958 (the PKCS #8 format). + + The legacy format files have the header with the string + ``BEGIN EC PRIVATE KEY``. + PKCS#8 files have the header ``BEGIN PRIVATE KEY``. + Encrypted files (ones that include the string + ``Proc-Type: 4,ENCRYPTED`` + right after the PEM header) are not supported. 
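+
+ Minimal sketch, assuming ``key.pem`` stores a key in one of the
+ supported, unencrypted formats::
+
+ with open("key.pem") as f:
+ sk = SigningKey.from_pem(f.read())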
+
+        See :func:`~SigningKey.from_der` for ASN.1 syntax of the objects in
+        these files.
+
+        :param string: text with PEM-encoded private ECDSA key
+        :type string: str
+        :param valid_curve_encodings: list of allowed encoding formats
+            for curve parameters. By default (``None``) all are supported:
+            ``named_curve`` and ``explicit``.
+        :type valid_curve_encodings: :term:`set-like object`
+
+        :raises MalformedPointError: if the length of encoding doesn't match
+            the provided curve or the encoded values are too large
+        :raises RuntimeError: if the generation of public key from private
+            key failed
+        :raises UnexpectedDER: if the encoding of the PEM file is incorrect
+
+        :return: Initialised SigningKey object
+        :rtype: SigningKey
+        """
+        if not PY2 and isinstance(string, str):  # pragma: no branch
+            string = string.encode()
+
+        # The privkey PEM may have multiple sections; commonly it also has
+        # "EC PARAMETERS", but we need just "EC PRIVATE KEY". PKCS#8 should
+        # not have the "EC PARAMETERS" section; it's just "PRIVATE KEY".
+        private_key_index = string.find(b"-----BEGIN EC PRIVATE KEY-----")
+        if private_key_index == -1:
+            private_key_index = string.index(b"-----BEGIN PRIVATE KEY-----")
+
+        return cls.from_der(
+            der.unpem(string[private_key_index:]),
+            hashfunc,
+            valid_curve_encodings,
+        )
+
+    @classmethod
+    def from_der(cls, string, hashfunc=sha1, valid_curve_encodings=None):
+        """
+        Initialise from key stored in :term:`DER` format.
+
+        The DER formats supported are the un-encrypted RFC5915
+        (the ssleay format) supported by OpenSSL, and the more common
+        un-encrypted RFC5958 (the PKCS #8 format).
+
+        Both formats contain an ASN.1 object following the syntax specified
+        in RFC5915::
+
+            ECPrivateKey ::= SEQUENCE {
+                version        INTEGER { ecPrivkeyVer1(1) } (ecPrivkeyVer1),
+                privateKey     OCTET STRING,
+                parameters [0] ECParameters {{ NamedCurve }} OPTIONAL,
+                publicKey  [1] BIT STRING OPTIONAL
+            }
+
+        The `publicKey` field is ignored completely (errors, if any, in it
+        will go undetected).
+
+        Two formats are supported for the `parameters` field: the named
+        curve and the explicit encoding of curve parameters.
+        In the legacy ssleay format, this implementation requires the optional
+        `parameters` field to get the curve name. In PKCS #8 format, the curve
+        is part of the PrivateKeyAlgorithmIdentifier.
+
+        The PKCS #8 format includes an ECPrivateKey object as the `privateKey`
+        field within a larger structure::
+
+            OneAsymmetricKey ::= SEQUENCE {
+                version                   Version,
+                privateKeyAlgorithm       PrivateKeyAlgorithmIdentifier,
+                privateKey                PrivateKey,
+                attributes            [0] Attributes OPTIONAL,
+                ...,
+                [[2: publicKey        [1] PublicKey OPTIONAL ]],
+                ...
+            }
+
+        The `attributes` and `publicKey` fields are completely ignored; errors
+        in them will not be detected.
+
+        :param string: binary string with DER-encoded private ECDSA key
+        :type string: :term:`bytes-like object`
+        :param valid_curve_encodings: list of allowed encoding formats
+            for curve parameters. By default (``None``) all are supported:
+            ``named_curve`` and ``explicit``.
+            Ignored for EdDSA.
+        :type valid_curve_encodings: :term:`set-like object`
+
+        :raises MalformedPointError: if the length of encoding doesn't match
+            the provided curve or the encoded values are too large
+        :raises RuntimeError: if the generation of public key from private
+            key failed
+        :raises UnexpectedDER: if the encoding of the DER file is incorrect
+
+        :return: Initialised SigningKey object
+        :rtype: SigningKey
+        """
+        s = normalise_bytes(string)
+        curve = None
+
+        s, empty = der.remove_sequence(s)
+        if empty != b(""):
+            raise der.UnexpectedDER(
+                "trailing junk after DER privkey: %s" % binascii.hexlify(empty)
+            )
+
+        version, s = der.remove_integer(s)
+
+        # At this point, PKCS #8 has a sequence containing the algorithm
+        # identifier and the curve identifier. The ssleay format instead has
+        # an octet string containing the key data, so this is how we can
+        # distinguish the two formats.
+        if der.is_sequence(s):
+            if version not in (0, 1):
+                raise der.UnexpectedDER(
+                    "expected version '0' or '1' at start of privkey, got %d"
+                    % version
+                )
+
+            sequence, s = der.remove_sequence(s)
+            algorithm_oid, algorithm_identifier = der.remove_object(sequence)
+
+            if algorithm_oid in (Ed25519.oid, Ed448.oid):
+                if algorithm_identifier:
+                    raise der.UnexpectedDER(
+                        "non-NULL parameters for an EdDSA key"
+                    )
+                key_str_der, s = der.remove_octet_string(s)
+
+                # As RFC 5958 describes, there may be optional Attributes
+                # and PublicKey fields, so don't raise an error if something
+                # follows the PrivateKey.
+
+                # TODO: parse attributes or validate publickey
+                # if s:
+                #     raise der.UnexpectedDER(
+                #         "trailing junk inside the privateKey"
+                #     )
+                key_str, s = der.remove_octet_string(key_str_der)
+                if s:
+                    raise der.UnexpectedDER(
+                        "trailing junk after the encoded private key"
+                    )
+
+                if algorithm_oid == Ed25519.oid:
+                    curve = Ed25519
+                else:
+                    assert algorithm_oid == Ed448.oid
+                    curve = Ed448
+
+                return cls.from_string(key_str, curve, None)
+
+            if algorithm_oid not in (oid_ecPublicKey, oid_ecDH, oid_ecMQV):
+                raise der.UnexpectedDER(
+                    "unexpected algorithm identifier '%s'" % (algorithm_oid,)
+                )
+
+            curve = Curve.from_der(algorithm_identifier, valid_curve_encodings)
+
+            if empty != b"":
+                raise der.UnexpectedDER(
+                    "unexpected data after algorithm identifier: %s"
+                    % binascii.hexlify(empty)
+                )
+
+            # Up next is an octet string containing an ECPrivateKey. Ignore
+            # the optional "attributes" and "publicKey" fields that come
+            # after.
+            s, _ = der.remove_octet_string(s)
+
+            # Unpack the ECPrivateKey to get to the key data octet string,
+            # and rejoin the ssleay parsing path.
+            s, empty = der.remove_sequence(s)
+            if empty != b(""):
+                raise der.UnexpectedDER(
+                    "trailing junk after DER privkey: %s"
+                    % binascii.hexlify(empty)
+                )
+
+            version, s = der.remove_integer(s)
+
+        # The version of the ECPrivateKey must be 1.
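+        # (RFC 5915 defines ecPrivkeyVer1(1) as the only valid value, so
+        # anything else means the structure is malformed.)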
+        if version != 1:
+            raise der.UnexpectedDER(
+                "expected version '1' at start of DER privkey, got %d"
+                % version
+            )
+
+        privkey_str, s = der.remove_octet_string(s)
+
+        if not curve:
+            tag, curve_oid_str, s = der.remove_constructed(s)
+            if tag != 0:
+                raise der.UnexpectedDER(
+                    "expected tag 0 in DER privkey, got %d" % tag
+                )
+            curve = Curve.from_der(curve_oid_str, valid_curve_encodings)
+
+        # we don't actually care about the following fields
+        #
+        # tag, pubkey_bitstring, s = der.remove_constructed(s)
+        # if tag != 1:
+        #     raise der.UnexpectedDER("expected tag 1 in DER privkey, got %d"
+        #                             % tag)
+        # pubkey_str = der.remove_bitstring(pubkey_bitstring, 0)
+        # if empty != "":
+        #     raise der.UnexpectedDER("trailing junk after DER privkey "
+        #                             "pubkeystr: %s"
+        #                             % binascii.hexlify(empty))
+
+        # our from_string method likes fixed-length privkey strings
+        if len(privkey_str) < curve.baselen:
+            privkey_str = (
+                b("\x00") * (curve.baselen - len(privkey_str)) + privkey_str
+            )
+        return cls.from_string(privkey_str, curve, hashfunc)
+
+    def to_string(self):
+        """
+        Convert the private key to :term:`raw encoding`.
+
+        Note: while the method is named "to_string", its name comes from
+        Python 2 days, when binary and character strings used the same type.
+        The type used in Python 3 is `bytes`.
+
+        :return: raw encoding of private key
+        :rtype: bytes
+        """
+        if isinstance(self.curve.curve, CurveEdTw):
+            return bytes(self.privkey.private_key)
+        secexp = self.privkey.secret_multiplier
+        s = number_to_string(secexp, self.privkey.order)
+        return s
+
+    def to_pem(
+        self,
+        point_encoding="uncompressed",
+        format="ssleay",
+        curve_parameters_encoding=None,
+    ):
+        """
+        Convert the private key to the :term:`PEM` format.
+
+        See :func:`~SigningKey.from_pem` method for format description.
+
+        Only the named curve format is supported.
+        The public key will be included in the generated string.
+
+        The PEM header will specify ``BEGIN EC PRIVATE KEY`` or
+        ``BEGIN PRIVATE KEY``, depending on the desired format.
+
+        :param str point_encoding: format to use for encoding public point
+        :param str format: either ``ssleay`` (default) or ``pkcs8``
+        :param str curve_parameters_encoding: format of encoded curve
+            parameters, default depends on the curve, if the curve has
+            an associated OID, ``named_curve`` format will be used,
+            if no OID is associated with the curve, the fallback of
+            ``explicit`` parameters will be used.
+
+        :return: PEM encoded private key
+        :rtype: bytes
+
+        .. warning:: The PEM is encoded to US-ASCII; it needs to be
+            re-encoded if the system is incompatible (e.g. uses UTF-16).
+        """
+        # TODO: "BEGIN ECPARAMETERS"
+        assert format in ("ssleay", "pkcs8")
+        header = "EC PRIVATE KEY" if format == "ssleay" else "PRIVATE KEY"
+        return der.topem(
+            self.to_der(point_encoding, format, curve_parameters_encoding),
+            header,
+        )
+
+    def _encode_eddsa(self):
+        """Create a PKCS#8 encoding of EdDSA keys."""
+        ec_private_key = der.encode_octet_string(self.to_string())
+        return der.encode_sequence(
+            der.encode_integer(0),
+            der.encode_sequence(der.encode_oid(*self.curve.oid)),
+            der.encode_octet_string(ec_private_key),
+        )
+
+    def to_der(
+        self,
+        point_encoding="uncompressed",
+        format="ssleay",
+        curve_parameters_encoding=None,
+    ):
+        """
+        Convert the private key to the :term:`DER` format.
+
+        See :func:`~SigningKey.from_der` method for format specification.
+
+        Only the named curve format is supported.
+        The public key will be included in the generated string.
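+
+        Example round-trip (an illustrative sketch, not part of the
+        original documentation; ``sk`` stands for an existing
+        :class:`SigningKey`)::
+
+            der_bytes = sk.to_der(format="pkcs8")
+            restored = SigningKey.from_der(der_bytes)
+            assert restored.to_string() == sk.to_string()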
+ + :param str point_encoding: format to use for encoding public point + Ignored for EdDSA + :param str format: either ``ssleay`` (default) or ``pkcs8``. + EdDSA keys require ``pkcs8``. + :param str curve_parameters_encoding: format of encoded curve + parameters, default depends on the curve, if the curve has + an associated OID, ``named_curve`` format will be used, + if no OID is associated with the curve, the fallback of + ``explicit`` parameters will be used. + Ignored for EdDSA. + + :return: DER encoded private key + :rtype: bytes + """ + # SEQ([int(1), octetstring(privkey),cont[0], oid(secp224r1), + # cont[1],bitstring]) + if point_encoding == "raw": + raise ValueError("raw encoding not allowed in DER") + assert format in ("ssleay", "pkcs8") + if isinstance(self.curve.curve, CurveEdTw): + if format != "pkcs8": + raise ValueError("Only PKCS#8 format supported for EdDSA keys") + return self._encode_eddsa() + encoded_vk = self.get_verifying_key().to_string(point_encoding) + priv_key_elems = [ + der.encode_integer(1), + der.encode_octet_string(self.to_string()), + ] + if format == "ssleay": + priv_key_elems.append( + der.encode_constructed( + 0, self.curve.to_der(curve_parameters_encoding) + ) + ) + # the 0 in encode_bitstring specifies the number of unused bits + # in the `encoded_vk` string + priv_key_elems.append( + der.encode_constructed(1, der.encode_bitstring(encoded_vk, 0)) + ) + ec_private_key = der.encode_sequence(*priv_key_elems) + + if format == "ssleay": + return ec_private_key + else: + return der.encode_sequence( + # version = 1 means the public key is not present in the + # top-level structure. + der.encode_integer(1), + der.encode_sequence( + der.encode_oid(*oid_ecPublicKey), + self.curve.to_der(curve_parameters_encoding), + ), + der.encode_octet_string(ec_private_key), + ) + + def get_verifying_key(self): + """ + Return the VerifyingKey associated with this private key. + + Equivalent to reading the `verifying_key` field of an instance. + + :return: a public key that can be used to verify the signatures made + with this SigningKey + :rtype: VerifyingKey + """ + return self.verifying_key + + def sign_deterministic( + self, + data, + hashfunc=None, + sigencode=sigencode_string, + extra_entropy=b"", + ): + """ + Create signature over data. + + For Weierstrass curves it uses the deterministic RFC6979 algorithm. + For Edwards curves it uses the standard EdDSA algorithm. + + For ECDSA the data will be hashed using the `hashfunc` function before + signing. + For EdDSA the data will be hashed with the hash associated with the + curve (SHA-512 for Ed25519 and SHAKE-256 for Ed448). + + This is the recommended method for performing signatures when hashing + of data is necessary. + + :param data: data to be hashed and computed signature over + :type data: :term:`bytes-like object` + :param hashfunc: hash function to use for computing the signature, + if unspecified, the default hash function selected during + object initialisation will be used (see + `VerifyingKey.default_hashfunc`). The object needs to implement + the same interface as hashlib.sha1. + Ignored with EdDSA. + :type hashfunc: callable + :param sigencode: function used to encode the signature. + The function needs to accept three parameters: the two integers + that are the signature and the order of the curve over which the + signature was computed. It needs to return an encoded signature. + See `ecdsa.util.sigencode_string` and `ecdsa.util.sigencode_der` + as examples of such functions. + Ignored with EdDSA. 
+ :type sigencode: callable + :param extra_entropy: additional data that will be fed into the random + number generator used in the RFC6979 process. Entirely optional. + Ignored with EdDSA. + :type extra_entropy: :term:`bytes-like object` + + :return: encoded signature over `data` + :rtype: bytes or sigencode function dependent type + """ + hashfunc = hashfunc or self.default_hashfunc + data = normalise_bytes(data) + + if isinstance(self.curve.curve, CurveEdTw): + return self.privkey.sign(data) + + extra_entropy = normalise_bytes(extra_entropy) + digest = hashfunc(data).digest() + + return self.sign_digest_deterministic( + digest, + hashfunc=hashfunc, + sigencode=sigencode, + extra_entropy=extra_entropy, + allow_truncate=True, + ) + + def sign_digest_deterministic( + self, + digest, + hashfunc=None, + sigencode=sigencode_string, + extra_entropy=b"", + allow_truncate=False, + ): + """ + Create signature for digest using the deterministic RFC6979 algorithm. + + `digest` should be the output of cryptographically secure hash function + like SHA256 or SHA-3-256. + + This is the recommended method for performing signatures when no + hashing of data is necessary. + + :param digest: hash of data that will be signed + :type digest: :term:`bytes-like object` + :param hashfunc: hash function to use for computing the random "k" + value from RFC6979 process, + if unspecified, the default hash function selected during + object initialisation will be used (see + :attr:`.VerifyingKey.default_hashfunc`). The object needs to + implement + the same interface as :func:`~hashlib.sha1` from :py:mod:`hashlib`. + :type hashfunc: callable + :param sigencode: function used to encode the signature. + The function needs to accept three parameters: the two integers + that are the signature and the order of the curve over which the + signature was computed. It needs to return an encoded signature. + See :func:`~ecdsa.util.sigencode_string` and + :func:`~ecdsa.util.sigencode_der` + as examples of such functions. + :type sigencode: callable + :param extra_entropy: additional data that will be fed into the random + number generator used in the RFC6979 process. Entirely optional. + :type extra_entropy: :term:`bytes-like object` + :param bool allow_truncate: if True, the provided digest can have + bigger bit-size than the order of the curve, the extra bits (at + the end of the digest) will be truncated. Use it when signing + SHA-384 output using NIST256p or in similar situations. + + :return: encoded signature for the `digest` hash + :rtype: bytes or sigencode function dependent type + """ + if isinstance(self.curve.curve, CurveEdTw): + raise ValueError("Method unsupported for Edwards curves") + secexp = self.privkey.secret_multiplier + hashfunc = hashfunc or self.default_hashfunc + digest = normalise_bytes(digest) + extra_entropy = normalise_bytes(extra_entropy) + + def simple_r_s(r, s, order): + return r, s, order + + retry_gen = 0 + while True: + k = rfc6979.generate_k( + self.curve.generator.order(), + secexp, + hashfunc, + digest, + retry_gen=retry_gen, + extra_entropy=extra_entropy, + ) + try: + r, s, order = self.sign_digest( + digest, + sigencode=simple_r_s, + k=k, + allow_truncate=allow_truncate, + ) + break + except RSZeroError: + retry_gen += 1 + + return sigencode(r, s, order) + + def sign( + self, + data, + entropy=None, + hashfunc=None, + sigencode=sigencode_string, + k=None, + allow_truncate=True, + ): + """ + Create signature over data. 
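+
+        Example (an illustrative sketch, not part of the original
+        documentation; assumes ``from ecdsa import SigningKey, NIST256p``)::
+
+            sk = SigningKey.generate(curve=NIST256p)
+            signature = sk.sign(b"message")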
+ + Uses the probabilistic ECDSA algorithm for Weierstrass curves + (NIST256p, etc.) and the deterministic EdDSA algorithm for the + Edwards curves (Ed25519, Ed448). + + This method uses the standard ECDSA algorithm that requires a + cryptographically secure random number generator. + + It's recommended to use the :func:`~SigningKey.sign_deterministic` + method instead of this one. + + :param data: data that will be hashed for signing + :type data: :term:`bytes-like object` + :param callable entropy: randomness source, :func:`os.urandom` by + default. Ignored with EdDSA. + :param hashfunc: hash function to use for hashing the provided + ``data``. + If unspecified the default hash function selected during + object initialisation will be used (see + :attr:`.VerifyingKey.default_hashfunc`). + Should behave like :func:`~hashlib.sha1` from :py:mod:`hashlib`. + The output length of the + hash (in bytes) must not be longer than the length of the curve + order (rounded up to the nearest byte), so using SHA256 with + NIST256p is ok, but SHA256 with NIST192p is not. (In the 2**-96ish + unlikely event of a hash output larger than the curve order, the + hash will effectively be wrapped mod n). + If you want to explicitly allow use of large hashes with small + curves set the ``allow_truncate`` to ``True``. + Use ``hashfunc=hashlib.sha1`` to match openssl's + ``-ecdsa-with-SHA1`` mode, + or ``hashfunc=hashlib.sha256`` for openssl-1.0.0's + ``-ecdsa-with-SHA256``. + Ignored for EdDSA + :type hashfunc: callable + :param sigencode: function used to encode the signature. + The function needs to accept three parameters: the two integers + that are the signature and the order of the curve over which the + signature was computed. It needs to return an encoded signature. + See :func:`~ecdsa.util.sigencode_string` and + :func:`~ecdsa.util.sigencode_der` + as examples of such functions. + Ignored for EdDSA + :type sigencode: callable + :param int k: a pre-selected nonce for calculating the signature. + In typical use cases, it should be set to None (the default) to + allow its generation from an entropy source. + Ignored for EdDSA. + :param bool allow_truncate: if ``True``, the provided digest can have + bigger bit-size than the order of the curve, the extra bits (at + the end of the digest) will be truncated. Use it when signing + SHA-384 output using NIST256p or in similar situations. True by + default. + Ignored for EdDSA. + + :raises RSZeroError: in the unlikely event when *r* parameter or + *s* parameter of the created signature is equal 0, as that would + leak the key. Caller should try a better entropy source, retry with + different ``k``, or use the + :func:`~SigningKey.sign_deterministic` in such case. + + :return: encoded signature of the hash of `data` + :rtype: bytes or sigencode function dependent type + """ + hashfunc = hashfunc or self.default_hashfunc + data = normalise_bytes(data) + if isinstance(self.curve.curve, CurveEdTw): + return self.sign_deterministic(data) + h = hashfunc(data).digest() + return self.sign_digest(h, entropy, sigencode, k, allow_truncate) + + def sign_digest( + self, + digest, + entropy=None, + sigencode=sigencode_string, + k=None, + allow_truncate=False, + ): + """ + Create signature over digest using the probabilistic ECDSA algorithm. + + This method uses the standard ECDSA algorithm that requires a + cryptographically secure random number generator. + + This method does not hash the input. 
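+
+        Example (an illustrative sketch; ``sk`` is assumed to be an
+        existing NIST256p :class:`SigningKey`, so a SHA-256 digest
+        matches the length of the curve order)::
+
+            import hashlib
+
+            digest = hashlib.sha256(b"message").digest()
+            signature = sk.sign_digest(digest)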
+ + It's recommended to use the + :func:`~SigningKey.sign_digest_deterministic` method + instead of this one. + + :param digest: hash value that will be signed + :type digest: :term:`bytes-like object` + :param callable entropy: randomness source, os.urandom by default + :param sigencode: function used to encode the signature. + The function needs to accept three parameters: the two integers + that are the signature and the order of the curve over which the + signature was computed. It needs to return an encoded signature. + See `ecdsa.util.sigencode_string` and `ecdsa.util.sigencode_der` + as examples of such functions. + :type sigencode: callable + :param int k: a pre-selected nonce for calculating the signature. + In typical use cases, it should be set to None (the default) to + allow its generation from an entropy source. + :param bool allow_truncate: if True, the provided digest can have + bigger bit-size than the order of the curve, the extra bits (at + the end of the digest) will be truncated. Use it when signing + SHA-384 output using NIST256p or in similar situations. + + :raises RSZeroError: in the unlikely event when "r" parameter or + "s" parameter of the created signature is equal 0, as that would + leak the key. Caller should try a better entropy source, retry with + different 'k', or use the + :func:`~SigningKey.sign_digest_deterministic` in such case. + + :return: encoded signature for the `digest` hash + :rtype: bytes or sigencode function dependent type + """ + if isinstance(self.curve.curve, CurveEdTw): + raise ValueError("Method unsupported for Edwards curves") + digest = normalise_bytes(digest) + number = _truncate_and_convert_digest( + digest, + self.curve, + allow_truncate, + ) + r, s = self.sign_number(number, entropy, k) + return sigencode(r, s, self.privkey.order) + + def sign_number(self, number, entropy=None, k=None): + """ + Sign an integer directly. + + Note, this is a low level method, usually you will want to use + :func:`~SigningKey.sign_deterministic` or + :func:`~SigningKey.sign_digest_deterministic`. + + :param int number: number to sign using the probabilistic ECDSA + algorithm. + :param callable entropy: entropy source, os.urandom by default + :param int k: pre-selected nonce for signature operation. If unset + it will be selected at random using the entropy source. + + :raises RSZeroError: in the unlikely event when "r" parameter or + "s" parameter of the created signature is equal 0, as that would + leak the key. Caller should try a better entropy source, retry with + different 'k', or use the + :func:`~SigningKey.sign_digest_deterministic` in such case. + + :return: the "r" and "s" parameters of the signature + :rtype: tuple of ints + """ + if isinstance(self.curve.curve, CurveEdTw): + raise ValueError("Method unsupported for Edwards curves") + order = self.privkey.order + + if k is not None: + _k = k + else: + _k = randrange(order, entropy) + + assert 1 <= _k < order + sig = self.privkey.sign(number, _k) + return sig.r, sig.s diff --git a/env/lib/python3.8/site-packages/ecdsa/numbertheory.py b/env/lib/python3.8/site-packages/ecdsa/numbertheory.py new file mode 100644 index 00000000..d3500c70 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/numbertheory.py @@ -0,0 +1,825 @@ +#! /usr/bin/env python +# +# Provide some simple capabilities from number theory. +# +# Version of 2008.11.14. +# +# Written in 2005 and 2006 by Peter Pearson and placed in the public domain. 
+# Revision history: +# 2008.11.14: Use pow(base, exponent, modulus) for modular_exp. +# Make gcd and lcm accept arbitrarily many arguments. + +from __future__ import division + +import sys +from six import integer_types, PY2 +from six.moves import reduce + +try: + xrange +except NameError: + xrange = range +try: + from gmpy2 import powmod + + GMPY2 = True + GMPY = False +except ImportError: + GMPY2 = False + try: + from gmpy import mpz + + GMPY = True + except ImportError: + GMPY = False + +import math +import warnings + + +class Error(Exception): + """Base class for exceptions in this module.""" + + pass + + +class JacobiError(Error): + pass + + +class SquareRootError(Error): + pass + + +class NegativeExponentError(Error): + pass + + +def modular_exp(base, exponent, modulus): # pragma: no cover + """Raise base to exponent, reducing by modulus""" + # deprecated in 0.14 + warnings.warn( + "Function is unused in library code. If you use this code, " + "change to pow() builtin.", + DeprecationWarning, + ) + if exponent < 0: + raise NegativeExponentError( + "Negative exponents (%d) not allowed" % exponent + ) + return pow(base, exponent, modulus) + + +def polynomial_reduce_mod(poly, polymod, p): + """Reduce poly by polymod, integer arithmetic modulo p. + + Polynomials are represented as lists of coefficients + of increasing powers of x.""" + + # This module has been tested only by extensive use + # in calculating modular square roots. + + # Just to make this easy, require a monic polynomial: + assert polymod[-1] == 1 + + assert len(polymod) > 1 + + while len(poly) >= len(polymod): + if poly[-1] != 0: + for i in xrange(2, len(polymod) + 1): + poly[-i] = (poly[-i] - poly[-1] * polymod[-i]) % p + poly = poly[0:-1] + + return poly + + +def polynomial_multiply_mod(m1, m2, polymod, p): + """Polynomial multiplication modulo a polynomial over ints mod p. + + Polynomials are represented as lists of coefficients + of increasing powers of x.""" + + # This is just a seat-of-the-pants implementation. + + # This module has been tested only by extensive use + # in calculating modular square roots. + + # Initialize the product to zero: + + prod = (len(m1) + len(m2) - 1) * [0] + + # Add together all the cross-terms: + + for i in xrange(len(m1)): + for j in xrange(len(m2)): + prod[i + j] = (prod[i + j] + m1[i] * m2[j]) % p + + return polynomial_reduce_mod(prod, polymod, p) + + +def polynomial_exp_mod(base, exponent, polymod, p): + """Polynomial exponentiation modulo a polynomial over ints mod p. + + Polynomials are represented as lists of coefficients + of increasing powers of x.""" + + # Based on the Handbook of Applied Cryptography, algorithm 2.227. + + # This module has been tested only by extensive use + # in calculating modular square roots. + + assert exponent < p + + if exponent == 0: + return [1] + + G = base + k = exponent + if k % 2 == 1: + s = G + else: + s = [1] + + while k > 1: + k = k // 2 + G = polynomial_multiply_mod(G, G, polymod, p) + if k % 2 == 1: + s = polynomial_multiply_mod(G, s, polymod, p) + + return s + + +def jacobi(a, n): + """Jacobi symbol""" + + # Based on the Handbook of Applied Cryptography (HAC), algorithm 2.149. + + # This function has been tested by comparison with a small + # table printed in HAC, and by extensive use in calculating + # modular square roots. 
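+
+    # Outline: factor the powers of two out of ``a`` and account for them
+    # with the supplementary law for (2|n); then apply quadratic
+    # reciprocity and recurse on the reduced pair (n mod a1, a1).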
+ + if not n >= 3: + raise JacobiError("n must be larger than 2") + if not n % 2 == 1: + raise JacobiError("n must be odd") + a = a % n + if a == 0: + return 0 + if a == 1: + return 1 + a1, e = a, 0 + while a1 % 2 == 0: + a1, e = a1 // 2, e + 1 + if e % 2 == 0 or n % 8 == 1 or n % 8 == 7: + s = 1 + else: + s = -1 + if a1 == 1: + return s + if n % 4 == 3 and a1 % 4 == 3: + s = -s + return s * jacobi(n % a1, a1) + + +def square_root_mod_prime(a, p): + """Modular square root of a, mod p, p prime.""" + + # Based on the Handbook of Applied Cryptography, algorithms 3.34 to 3.39. + + # This module has been tested for all values in [0,p-1] for + # every prime p from 3 to 1229. + + assert 0 <= a < p + assert 1 < p + + if a == 0: + return 0 + if p == 2: + return a + + jac = jacobi(a, p) + if jac == -1: + raise SquareRootError("%d has no square root modulo %d" % (a, p)) + + if p % 4 == 3: + return pow(a, (p + 1) // 4, p) + + if p % 8 == 5: + d = pow(a, (p - 1) // 4, p) + if d == 1: + return pow(a, (p + 3) // 8, p) + assert d == p - 1 + return (2 * a * pow(4 * a, (p - 5) // 8, p)) % p + + if PY2: + # xrange on python2 can take integers representable as C long only + range_top = min(0x7FFFFFFF, p) + else: + range_top = p + for b in xrange(2, range_top): + if jacobi(b * b - 4 * a, p) == -1: + f = (a, -b, 1) + ff = polynomial_exp_mod((0, 1), (p + 1) // 2, f, p) + if ff[1]: + raise SquareRootError("p is not prime") + return ff[0] + raise RuntimeError("No b found.") + + +# because all the inverse_mod code is arch/environment specific, and coveralls +# expects it to execute equal number of times, we need to waive it by +# adding the "no branch" pragma to all branches +if GMPY2: # pragma: no branch + + def inverse_mod(a, m): + """Inverse of a mod m.""" + if a == 0: # pragma: no branch + return 0 + return powmod(a, -1, m) + +elif GMPY: # pragma: no branch + + def inverse_mod(a, m): + """Inverse of a mod m.""" + # while libgmp does support inverses modulo, it is accessible + # only using the native `pow()` function, and `pow()` in gmpy sanity + # checks the parameters before passing them on to underlying + # implementation + if a == 0: # pragma: no branch + return 0 + a = mpz(a) + m = mpz(m) + + lm, hm = mpz(1), mpz(0) + low, high = a % m, m + while low > 1: # pragma: no branch + r = high // low + lm, low, hm, high = hm - lm * r, high - low * r, lm, low + + return lm % m + +elif sys.version_info >= (3, 8): # pragma: no branch + + def inverse_mod(a, m): + """Inverse of a mod m.""" + if a == 0: # pragma: no branch + return 0 + return pow(a, -1, m) + +else: # pragma: no branch + + def inverse_mod(a, m): + """Inverse of a mod m.""" + + if a == 0: # pragma: no branch + return 0 + + lm, hm = 1, 0 + low, high = a % m, m + while low > 1: # pragma: no branch + r = high // low + lm, low, hm, high = hm - lm * r, high - low * r, lm, low + + return lm % m + + +try: + gcd2 = math.gcd +except AttributeError: + + def gcd2(a, b): + """Greatest common divisor using Euclid's algorithm.""" + while a: + a, b = b % a, a + return b + + +def gcd(*a): + """Greatest common divisor. + + Usage: gcd([ 2, 4, 6 ]) + or: gcd(2, 4, 6) + """ + + if len(a) > 1: + return reduce(gcd2, a) + if hasattr(a[0], "__iter__"): + return reduce(gcd2, a[0]) + return a[0] + + +def lcm2(a, b): + """Least common multiple of two integers.""" + + return (a * b) // gcd(a, b) + + +def lcm(*a): + """Least common multiple. 
+ + Usage: lcm([ 3, 4, 5 ]) + or: lcm(3, 4, 5) + """ + + if len(a) > 1: + return reduce(lcm2, a) + if hasattr(a[0], "__iter__"): + return reduce(lcm2, a[0]) + return a[0] + + +def factorization(n): + """Decompose n into a list of (prime,exponent) pairs.""" + + assert isinstance(n, integer_types) + + if n < 2: + return [] + + result = [] + + # Test the small primes: + + for d in smallprimes: + if d > n: + break + q, r = divmod(n, d) + if r == 0: + count = 1 + while d <= n: + n = q + q, r = divmod(n, d) + if r != 0: + break + count = count + 1 + result.append((d, count)) + + # If n is still greater than the last of our small primes, + # it may require further work: + + if n > smallprimes[-1]: + if is_prime(n): # If what's left is prime, it's easy: + result.append((n, 1)) + else: # Ugh. Search stupidly for a divisor: + d = smallprimes[-1] + while 1: + d = d + 2 # Try the next divisor. + q, r = divmod(n, d) + if q < d: # n < d*d means we're done, n = 1 or prime. + break + if r == 0: # d divides n. How many times? + count = 1 + n = q + while d <= n: # As long as d might still divide n, + q, r = divmod(n, d) # see if it does. + if r != 0: + break + n = q # It does. Reduce n, increase count. + count = count + 1 + result.append((d, count)) + if n > 1: + result.append((n, 1)) + + return result + + +def phi(n): # pragma: no cover + """Return the Euler totient function of n.""" + # deprecated in 0.14 + warnings.warn( + "Function is unused by library code. If you use this code, " + "please open an issue in " + "https://github.com/tlsfuzzer/python-ecdsa", + DeprecationWarning, + ) + + assert isinstance(n, integer_types) + + if n < 3: + return 1 + + result = 1 + ff = factorization(n) + for f in ff: + e = f[1] + if e > 1: + result = result * f[0] ** (e - 1) * (f[0] - 1) + else: + result = result * (f[0] - 1) + return result + + +def carmichael(n): # pragma: no cover + """Return Carmichael function of n. + + Carmichael(n) is the smallest integer x such that + m**x = 1 mod n for all m relatively prime to n. + """ + # deprecated in 0.14 + warnings.warn( + "Function is unused by library code. If you use this code, " + "please open an issue in " + "https://github.com/tlsfuzzer/python-ecdsa", + DeprecationWarning, + ) + + return carmichael_of_factorized(factorization(n)) + + +def carmichael_of_factorized(f_list): # pragma: no cover + """Return the Carmichael function of a number that is + represented as a list of (prime,exponent) pairs. + """ + # deprecated in 0.14 + warnings.warn( + "Function is unused by library code. If you use this code, " + "please open an issue in " + "https://github.com/tlsfuzzer/python-ecdsa", + DeprecationWarning, + ) + + if len(f_list) < 1: + return 1 + + result = carmichael_of_ppower(f_list[0]) + for i in xrange(1, len(f_list)): + result = lcm(result, carmichael_of_ppower(f_list[i])) + + return result + + +def carmichael_of_ppower(pp): # pragma: no cover + """Carmichael function of the given power of the given prime.""" + # deprecated in 0.14 + warnings.warn( + "Function is unused by library code. If you use this code, " + "please open an issue in " + "https://github.com/tlsfuzzer/python-ecdsa", + DeprecationWarning, + ) + + p, a = pp + if p == 2 and a > 2: + return 2 ** (a - 2) + else: + return (p - 1) * p ** (a - 1) + + +def order_mod(x, m): # pragma: no cover + """Return the order of x in the multiplicative group mod m.""" + # deprecated in 0.14 + warnings.warn( + "Function is unused by library code. 
If you use this code, " + "please open an issue in " + "https://github.com/tlsfuzzer/python-ecdsa", + DeprecationWarning, + ) + + # Warning: this implementation is not very clever, and will + # take a long time if m is very large. + + if m <= 1: + return 0 + + assert gcd(x, m) == 1 + + z = x + result = 1 + while z != 1: + z = (z * x) % m + result = result + 1 + return result + + +def largest_factor_relatively_prime(a, b): # pragma: no cover + """Return the largest factor of a relatively prime to b.""" + # deprecated in 0.14 + warnings.warn( + "Function is unused by library code. If you use this code, " + "please open an issue in " + "https://github.com/tlsfuzzer/python-ecdsa", + DeprecationWarning, + ) + + while 1: + d = gcd(a, b) + if d <= 1: + break + b = d + while 1: + q, r = divmod(a, d) + if r > 0: + break + a = q + return a + + +def kinda_order_mod(x, m): # pragma: no cover + """Return the order of x in the multiplicative group mod m', + where m' is the largest factor of m relatively prime to x. + """ + # deprecated in 0.14 + warnings.warn( + "Function is unused by library code. If you use this code, " + "please open an issue in " + "https://github.com/tlsfuzzer/python-ecdsa", + DeprecationWarning, + ) + + return order_mod(x, largest_factor_relatively_prime(m, x)) + + +def is_prime(n): + """Return True if x is prime, False otherwise. + + We use the Miller-Rabin test, as given in Menezes et al. p. 138. + This test is not exact: there are composite values n for which + it returns True. + + In testing the odd numbers from 10000001 to 19999999, + about 66 composites got past the first test, + 5 got past the second test, and none got past the third. + Since factors of 2, 3, 5, 7, and 11 were detected during + preliminary screening, the number of numbers tested by + Miller-Rabin was (19999999 - 10000001)*(2/3)*(4/5)*(6/7) + = 4.57 million. + """ + + # (This is used to study the risk of false positives:) + global miller_rabin_test_count + + miller_rabin_test_count = 0 + + if n <= smallprimes[-1]: + if n in smallprimes: + return True + else: + return False + + if gcd(n, 2 * 3 * 5 * 7 * 11) != 1: + return False + + # Choose a number of iterations sufficient to reduce the + # probability of accepting a composite below 2**-80 + # (from Menezes et al. 
Table 4.4): + + t = 40 + n_bits = 1 + int(math.log(n, 2)) + for k, tt in ( + (100, 27), + (150, 18), + (200, 15), + (250, 12), + (300, 9), + (350, 8), + (400, 7), + (450, 6), + (550, 5), + (650, 4), + (850, 3), + (1300, 2), + ): + if n_bits < k: + break + t = tt + + # Run the test t times: + + s = 0 + r = n - 1 + while (r % 2) == 0: + s = s + 1 + r = r // 2 + for i in xrange(t): + a = smallprimes[i] + y = pow(a, r, n) + if y != 1 and y != n - 1: + j = 1 + while j <= s - 1 and y != n - 1: + y = pow(y, 2, n) + if y == 1: + miller_rabin_test_count = i + 1 + return False + j = j + 1 + if y != n - 1: + miller_rabin_test_count = i + 1 + return False + return True + + +def next_prime(starting_value): + """Return the smallest prime larger than the starting value.""" + + if starting_value < 2: + return 2 + result = (starting_value + 1) | 1 + while not is_prime(result): + result = result + 2 + return result + + +smallprimes = [ + 2, + 3, + 5, + 7, + 11, + 13, + 17, + 19, + 23, + 29, + 31, + 37, + 41, + 43, + 47, + 53, + 59, + 61, + 67, + 71, + 73, + 79, + 83, + 89, + 97, + 101, + 103, + 107, + 109, + 113, + 127, + 131, + 137, + 139, + 149, + 151, + 157, + 163, + 167, + 173, + 179, + 181, + 191, + 193, + 197, + 199, + 211, + 223, + 227, + 229, + 233, + 239, + 241, + 251, + 257, + 263, + 269, + 271, + 277, + 281, + 283, + 293, + 307, + 311, + 313, + 317, + 331, + 337, + 347, + 349, + 353, + 359, + 367, + 373, + 379, + 383, + 389, + 397, + 401, + 409, + 419, + 421, + 431, + 433, + 439, + 443, + 449, + 457, + 461, + 463, + 467, + 479, + 487, + 491, + 499, + 503, + 509, + 521, + 523, + 541, + 547, + 557, + 563, + 569, + 571, + 577, + 587, + 593, + 599, + 601, + 607, + 613, + 617, + 619, + 631, + 641, + 643, + 647, + 653, + 659, + 661, + 673, + 677, + 683, + 691, + 701, + 709, + 719, + 727, + 733, + 739, + 743, + 751, + 757, + 761, + 769, + 773, + 787, + 797, + 809, + 811, + 821, + 823, + 827, + 829, + 839, + 853, + 857, + 859, + 863, + 877, + 881, + 883, + 887, + 907, + 911, + 919, + 929, + 937, + 941, + 947, + 953, + 967, + 971, + 977, + 983, + 991, + 997, + 1009, + 1013, + 1019, + 1021, + 1031, + 1033, + 1039, + 1049, + 1051, + 1061, + 1063, + 1069, + 1087, + 1091, + 1093, + 1097, + 1103, + 1109, + 1117, + 1123, + 1129, + 1151, + 1153, + 1163, + 1171, + 1181, + 1187, + 1193, + 1201, + 1213, + 1217, + 1223, + 1229, +] + +miller_rabin_test_count = 0 diff --git a/env/lib/python3.8/site-packages/ecdsa/rfc6979.py b/env/lib/python3.8/site-packages/ecdsa/rfc6979.py new file mode 100644 index 00000000..0728b5a4 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/rfc6979.py @@ -0,0 +1,113 @@ +""" +RFC 6979: + Deterministic Usage of the Digital Signature Algorithm (DSA) and + Elliptic Curve Digital Signature Algorithm (ECDSA) + + http://tools.ietf.org/html/rfc6979 + +Many thanks to Coda Hale for his implementation in Go language: + https://github.com/codahale/rfc6979 +""" + +import hmac +from binascii import hexlify +from .util import number_to_string, number_to_string_crop, bit_length +from ._compat import hmac_compat + + +# bit_length was defined in this module previously so keep it for backwards +# compatibility, will need to deprecate and remove it later +__all__ = ["bit_length", "bits2int", "bits2octets", "generate_k"] + + +def bits2int(data, qlen): + x = int(hexlify(data), 16) + l = len(data) * 8 + + if l > qlen: + return x >> (l - qlen) + return x + + +def bits2octets(data, order): + z1 = bits2int(data, bit_length(order)) + z2 = z1 - order + + if z2 < 0: + z2 = z1 + + return 
number_to_string_crop(z2, order) + + +# https://tools.ietf.org/html/rfc6979#section-3.2 +def generate_k(order, secexp, hash_func, data, retry_gen=0, extra_entropy=b""): + """ + Generate the ``k`` value - the nonce for DSA. + + :param int order: order of the DSA generator used in the signature + :param int secexp: secure exponent (private key) in numeric form + :param hash_func: reference to the same hash function used for generating + hash, like :py:class:`hashlib.sha1` + :param bytes data: hash in binary form of the signing data + :param int retry_gen: how many good 'k' values to skip before returning + :param bytes extra_entropy: additional added data in binary form as per + section-3.6 of rfc6979 + :rtype: int + """ + + qlen = bit_length(order) + holen = hash_func().digest_size + rolen = (qlen + 7) // 8 + bx = ( + hmac_compat(number_to_string(secexp, order)), + hmac_compat(bits2octets(data, order)), + hmac_compat(extra_entropy), + ) + + # Step B + v = b"\x01" * holen + + # Step C + k = b"\x00" * holen + + # Step D + + k = hmac.new(k, digestmod=hash_func) + k.update(v + b"\x00") + for i in bx: + k.update(i) + k = k.digest() + + # Step E + v = hmac.new(k, v, hash_func).digest() + + # Step F + k = hmac.new(k, digestmod=hash_func) + k.update(v + b"\x01") + for i in bx: + k.update(i) + k = k.digest() + + # Step G + v = hmac.new(k, v, hash_func).digest() + + # Step H + while True: + # Step H1 + t = b"" + + # Step H2 + while len(t) < rolen: + v = hmac.new(k, v, hash_func).digest() + t += v + + # Step H3 + secret = bits2int(t, qlen) + + if 1 <= secret < order: + if retry_gen <= 0: + return secret + retry_gen -= 1 + + k = hmac.new(k, v + b"\x00", hash_func).digest() + v = hmac.new(k, v, hash_func).digest() diff --git a/env/lib/python3.8/site-packages/ecdsa/test_curves.py b/env/lib/python3.8/site-packages/ecdsa/test_curves.py new file mode 100644 index 00000000..93b6c9bd --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/test_curves.py @@ -0,0 +1,361 @@ +try: + import unittest2 as unittest +except ImportError: + import unittest + +import base64 +import pytest +from .curves import ( + Curve, + NIST256p, + curves, + UnknownCurveError, + PRIME_FIELD_OID, + curve_by_name, +) +from .ellipticcurve import CurveFp, PointJacobi, CurveEdTw +from . 
import der +from .util import number_to_string + + +class TestParameterEncoding(unittest.TestCase): + @classmethod + def setUpClass(cls): + # minimal, but with cofactor (excludes seed when compared to + # OpenSSL output) + cls.base64_params = ( + "MIHgAgEBMCwGByqGSM49AQECIQD/////AAAAAQAAAAAAAAAAAAAAAP/////////" + "//////zBEBCD/////AAAAAQAAAAAAAAAAAAAAAP///////////////AQgWsY12K" + "o6k+ez671VdpiGvGUdBrDMU7D2O848PifSYEsEQQRrF9Hy4SxCR/i85uVjpEDyd" + "wN9gS3rM6D0oTlF2JjClk/jQuL+Gn+bjufrSnwPnhYrzjNXazFezsu2QGg3v1H1" + "AiEA/////wAAAAD//////////7zm+q2nF56E87nKwvxjJVECAQE=" + ) + + def test_from_pem(self): + pem_params = ( + "-----BEGIN EC PARAMETERS-----\n" + "MIHgAgEBMCwGByqGSM49AQECIQD/////AAAAAQAAAAAAAAAAAAAAAP/////////\n" + "//////zBEBCD/////AAAAAQAAAAAAAAAAAAAAAP///////////////AQgWsY12K\n" + "o6k+ez671VdpiGvGUdBrDMU7D2O848PifSYEsEQQRrF9Hy4SxCR/i85uVjpEDyd\n" + "wN9gS3rM6D0oTlF2JjClk/jQuL+Gn+bjufrSnwPnhYrzjNXazFezsu2QGg3v1H1\n" + "AiEA/////wAAAAD//////////7zm+q2nF56E87nKwvxjJVECAQE=\n" + "-----END EC PARAMETERS-----\n" + ) + curve = Curve.from_pem(pem_params) + + self.assertIs(curve, NIST256p) + + def test_from_pem_with_explicit_when_explicit_disabled(self): + pem_params = ( + "-----BEGIN EC PARAMETERS-----\n" + "MIHgAgEBMCwGByqGSM49AQECIQD/////AAAAAQAAAAAAAAAAAAAAAP/////////\n" + "//////zBEBCD/////AAAAAQAAAAAAAAAAAAAAAP///////////////AQgWsY12K\n" + "o6k+ez671VdpiGvGUdBrDMU7D2O848PifSYEsEQQRrF9Hy4SxCR/i85uVjpEDyd\n" + "wN9gS3rM6D0oTlF2JjClk/jQuL+Gn+bjufrSnwPnhYrzjNXazFezsu2QGg3v1H1\n" + "AiEA/////wAAAAD//////////7zm+q2nF56E87nKwvxjJVECAQE=\n" + "-----END EC PARAMETERS-----\n" + ) + with self.assertRaises(der.UnexpectedDER) as e: + Curve.from_pem(pem_params, ["named_curve"]) + + self.assertIn("explicit curve parameters not", str(e.exception)) + + def test_from_pem_with_named_curve_with_named_curve_disabled(self): + pem_params = ( + "-----BEGIN EC PARAMETERS-----\n" + "BggqhkjOPQMBBw==\n" + "-----END EC PARAMETERS-----\n" + ) + with self.assertRaises(der.UnexpectedDER) as e: + Curve.from_pem(pem_params, ["explicit"]) + + self.assertIn("named_curve curve parameters not", str(e.exception)) + + def test_from_pem_with_wrong_header(self): + pem_params = ( + "-----BEGIN PARAMETERS-----\n" + "MIHgAgEBMCwGByqGSM49AQECIQD/////AAAAAQAAAAAAAAAAAAAAAP/////////\n" + "//////zBEBCD/////AAAAAQAAAAAAAAAAAAAAAP///////////////AQgWsY12K\n" + "o6k+ez671VdpiGvGUdBrDMU7D2O848PifSYEsEQQRrF9Hy4SxCR/i85uVjpEDyd\n" + "wN9gS3rM6D0oTlF2JjClk/jQuL+Gn+bjufrSnwPnhYrzjNXazFezsu2QGg3v1H1\n" + "AiEA/////wAAAAD//////////7zm+q2nF56E87nKwvxjJVECAQE=\n" + "-----END PARAMETERS-----\n" + ) + with self.assertRaises(der.UnexpectedDER) as e: + Curve.from_pem(pem_params) + + self.assertIn("PARAMETERS PEM header", str(e.exception)) + + def test_to_pem(self): + pem_params = ( + b"-----BEGIN EC PARAMETERS-----\n" + b"BggqhkjOPQMBBw==\n" + b"-----END EC PARAMETERS-----\n" + ) + encoding = NIST256p.to_pem() + + self.assertEqual(pem_params, encoding) + + def test_compare_with_different_object(self): + self.assertNotEqual(NIST256p, 256) + + def test_named_curve_params_der(self): + encoded = NIST256p.to_der() + + # just the encoding of the NIST256p OID (prime256v1) + self.assertEqual(b"\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07", encoded) + + def test_verify_that_default_is_named_curve_der(self): + encoded_default = NIST256p.to_der() + encoded_named = NIST256p.to_der("named_curve") + + self.assertEqual(encoded_default, encoded_named) + + def test_encoding_to_explicit_params(self): + encoded = NIST256p.to_der("explicit") + + 
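+        # the explicit form should match the reference encoding captured
+        # in setUpClass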
self.assertEqual(encoded, bytes(base64.b64decode(self.base64_params))) + + def test_encoding_to_unsupported_type(self): + with self.assertRaises(ValueError) as e: + NIST256p.to_der("unsupported") + + self.assertIn("Only 'named_curve'", str(e.exception)) + + def test_encoding_to_explicit_compressed_params(self): + encoded = NIST256p.to_der("explicit", "compressed") + + compressed_base_point = ( + "MIHAAgEBMCwGByqGSM49AQECIQD/////AAAAAQAAAAAAAAAAAAAAAP//////////" + "/////zBEBCD/////AAAAAQAAAAAAAAAAAAAAAP///////////////AQgWsY12Ko6" + "k+ez671VdpiGvGUdBrDMU7D2O848PifSYEsEIQNrF9Hy4SxCR/i85uVjpEDydwN9" + "gS3rM6D0oTlF2JjClgIhAP////8AAAAA//////////+85vqtpxeehPO5ysL8YyVR" + "AgEB" + ) + + self.assertEqual( + encoded, bytes(base64.b64decode(compressed_base_point)) + ) + + def test_decoding_explicit_from_openssl(self): + # generated with openssl 1.1.1k using + # openssl ecparam -name P-256 -param_enc explicit -out /tmp/file.pem + p256_explicit = ( + "MIH3AgEBMCwGByqGSM49AQECIQD/////AAAAAQAAAAAAAAAAAAAAAP//////////" + "/////zBbBCD/////AAAAAQAAAAAAAAAAAAAAAP///////////////AQgWsY12Ko6" + "k+ez671VdpiGvGUdBrDMU7D2O848PifSYEsDFQDEnTYIhucEk2pmeOETnSa3gZ9+" + "kARBBGsX0fLhLEJH+Lzm5WOkQPJ3A32BLeszoPShOUXYmMKWT+NC4v4af5uO5+tK" + "fA+eFivOM1drMV7Oy7ZAaDe/UfUCIQD/////AAAAAP//////////vOb6racXnoTz" + "ucrC/GMlUQIBAQ==" + ) + + decoded = Curve.from_der(bytes(base64.b64decode(p256_explicit))) + + self.assertEqual(NIST256p, decoded) + + def test_decoding_well_known_from_explicit_params(self): + curve = Curve.from_der(bytes(base64.b64decode(self.base64_params))) + + self.assertIs(curve, NIST256p) + + def test_decoding_with_incorrect_valid_encodings(self): + with self.assertRaises(ValueError) as e: + Curve.from_der(b"", ["explicitCA"]) + + self.assertIn("Only named_curve", str(e.exception)) + + def test_compare_curves_with_different_generators(self): + curve_fp = CurveFp(23, 1, 7) + base_a = PointJacobi(curve_fp, 13, 3, 1, 9, generator=True) + base_b = PointJacobi(curve_fp, 1, 20, 1, 9, generator=True) + + curve_a = Curve("unknown", curve_fp, base_a, None) + curve_b = Curve("unknown", curve_fp, base_b, None) + + self.assertNotEqual(curve_a, curve_b) + + def test_default_encode_for_custom_curve(self): + curve_fp = CurveFp(23, 1, 7) + base_point = PointJacobi(curve_fp, 13, 3, 1, 9, generator=True) + + curve = Curve("unknown", curve_fp, base_point, None) + + encoded = curve.to_der() + + decoded = Curve.from_der(encoded) + + self.assertEqual(curve, decoded) + + expected = "MCECAQEwDAYHKoZIzj0BAQIBFzAGBAEBBAEHBAMEDQMCAQk=" + + self.assertEqual(encoded, bytes(base64.b64decode(expected))) + + def test_named_curve_encode_for_custom_curve(self): + curve_fp = CurveFp(23, 1, 7) + base_point = PointJacobi(curve_fp, 13, 3, 1, 9, generator=True) + + curve = Curve("unknown", curve_fp, base_point, None) + + with self.assertRaises(UnknownCurveError) as e: + curve.to_der("named_curve") + + self.assertIn("Can't encode curve", str(e.exception)) + + def test_try_decoding_binary_explicit(self): + sect113r1_explicit = ( + "MIGRAgEBMBwGByqGSM49AQIwEQIBcQYJKoZIzj0BAgMCAgEJMDkEDwAwiCUMpufH" + "/mSc6Fgg9wQPAOi+5NPiJgdEGIvg6ccjAxUAEOcjqxTWluZ2h1YVF1b+v4/LSakE" + "HwQAnXNhbzX0qxQH1zViwQ8ApSgwJ3lY7oTRMV7TGIYCDwEAAAAAAAAA2czsijnl" + "bwIBAg==" + ) + + with self.assertRaises(UnknownCurveError) as e: + Curve.from_der(base64.b64decode(sect113r1_explicit)) + + self.assertIn("Characteristic 2 curves unsupported", str(e.exception)) + + def test_decode_malformed_named_curve(self): + bad_der = der.encode_oid(*NIST256p.oid) + der.encode_integer(1) + + with 
self.assertRaises(der.UnexpectedDER) as e: + Curve.from_der(bad_der) + + self.assertIn("Unexpected data after OID", str(e.exception)) + + def test_decode_malformed_explicit_garbage_after_ECParam(self): + bad_der = bytes( + base64.b64decode(self.base64_params) + ) + der.encode_integer(1) + + with self.assertRaises(der.UnexpectedDER) as e: + Curve.from_der(bad_der) + + self.assertIn("Unexpected data after ECParameters", str(e.exception)) + + def test_decode_malformed_unknown_version_number(self): + bad_der = der.encode_sequence(der.encode_integer(2)) + + with self.assertRaises(der.UnexpectedDER) as e: + Curve.from_der(bad_der) + + self.assertIn("Unknown parameter encoding format", str(e.exception)) + + def test_decode_malformed_unknown_field_type(self): + curve_p = NIST256p.curve.p() + bad_der = der.encode_sequence( + der.encode_integer(1), + der.encode_sequence( + der.encode_oid(1, 2, 3), der.encode_integer(curve_p) + ), + der.encode_sequence( + der.encode_octet_string( + number_to_string(NIST256p.curve.a() % curve_p, curve_p) + ), + der.encode_octet_string( + number_to_string(NIST256p.curve.b(), curve_p) + ), + ), + der.encode_octet_string( + NIST256p.generator.to_bytes("uncompressed") + ), + der.encode_integer(NIST256p.generator.order()), + ) + + with self.assertRaises(UnknownCurveError) as e: + Curve.from_der(bad_der) + + self.assertIn("Unknown field type: (1, 2, 3)", str(e.exception)) + + def test_decode_malformed_garbage_after_prime(self): + curve_p = NIST256p.curve.p() + bad_der = der.encode_sequence( + der.encode_integer(1), + der.encode_sequence( + der.encode_oid(*PRIME_FIELD_OID), + der.encode_integer(curve_p), + der.encode_integer(1), + ), + der.encode_sequence( + der.encode_octet_string( + number_to_string(NIST256p.curve.a() % curve_p, curve_p) + ), + der.encode_octet_string( + number_to_string(NIST256p.curve.b(), curve_p) + ), + ), + der.encode_octet_string( + NIST256p.generator.to_bytes("uncompressed") + ), + der.encode_integer(NIST256p.generator.order()), + ) + + with self.assertRaises(der.UnexpectedDER) as e: + Curve.from_der(bad_der) + + self.assertIn("Prime-p element", str(e.exception)) + + +class TestCurveSearching(unittest.TestCase): + def test_correct_name(self): + c = curve_by_name("NIST256p") + self.assertIs(c, NIST256p) + + def test_openssl_name(self): + c = curve_by_name("prime256v1") + self.assertIs(c, NIST256p) + + def test_unknown_curve(self): + with self.assertRaises(UnknownCurveError) as e: + curve_by_name("foo bar") + + self.assertIn( + "name 'foo bar' unknown, only curves supported: " + "['NIST192p', 'NIST224p'", + str(e.exception), + ) + + def test_with_None_as_parameter(self): + with self.assertRaises(UnknownCurveError) as e: + curve_by_name(None) + + self.assertIn( + "name None unknown, only curves supported: " + "['NIST192p', 'NIST224p'", + str(e.exception), + ) + + +@pytest.mark.parametrize("curve", curves, ids=[i.name for i in curves]) +def test_curve_params_encode_decode_named(curve): + ret = Curve.from_der(curve.to_der("named_curve")) + + assert curve == ret + + +@pytest.mark.parametrize("curve", curves, ids=[i.name for i in curves]) +def test_curve_params_encode_decode_explicit(curve): + if isinstance(curve.curve, CurveEdTw): + with pytest.raises(UnknownCurveError): + curve.to_der("explicit") + else: + ret = Curve.from_der(curve.to_der("explicit")) + + assert curve == ret + + +@pytest.mark.parametrize("curve", curves, ids=[i.name for i in curves]) +def test_curve_params_encode_decode_default(curve): + ret = Curve.from_der(curve.to_der()) + + assert 
curve == ret + + +@pytest.mark.parametrize("curve", curves, ids=[i.name for i in curves]) +def test_curve_params_encode_decode_explicit_compressed(curve): + if isinstance(curve.curve, CurveEdTw): + with pytest.raises(UnknownCurveError): + curve.to_der("explicit", "compressed") + else: + ret = Curve.from_der(curve.to_der("explicit", "compressed")) + + assert curve == ret diff --git a/env/lib/python3.8/site-packages/ecdsa/test_der.py b/env/lib/python3.8/site-packages/ecdsa/test_der.py new file mode 100644 index 00000000..0ca5bd7f --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/test_der.py @@ -0,0 +1,476 @@ +# compatibility with Python 2.6, for that we need unittest2 package, +# which is not available on 3.3 or 3.4 +import warnings +from binascii import hexlify + +try: + import unittest2 as unittest +except ImportError: + import unittest +from six import b +import hypothesis.strategies as st +from hypothesis import given +import pytest +from ._compat import str_idx_as_int +from .curves import NIST256p, NIST224p +from .der import ( + remove_integer, + UnexpectedDER, + read_length, + encode_bitstring, + remove_bitstring, + remove_object, + encode_oid, + remove_constructed, + remove_octet_string, + remove_sequence, +) + + +class TestRemoveInteger(unittest.TestCase): + # DER requires the integers to be 0-padded only if they would be + # interpreted as negative, check if those errors are detected + def test_non_minimal_encoding(self): + with self.assertRaises(UnexpectedDER): + remove_integer(b("\x02\x02\x00\x01")) + + def test_negative_with_high_bit_set(self): + with self.assertRaises(UnexpectedDER): + remove_integer(b("\x02\x01\x80")) + + def test_minimal_with_high_bit_set(self): + val, rem = remove_integer(b("\x02\x02\x00\x80")) + + self.assertEqual(val, 0x80) + self.assertEqual(rem, b"") + + def test_two_zero_bytes_with_high_bit_set(self): + with self.assertRaises(UnexpectedDER): + remove_integer(b("\x02\x03\x00\x00\xff")) + + def test_zero_length_integer(self): + with self.assertRaises(UnexpectedDER): + remove_integer(b("\x02\x00")) + + def test_empty_string(self): + with self.assertRaises(UnexpectedDER): + remove_integer(b("")) + + def test_encoding_of_zero(self): + val, rem = remove_integer(b("\x02\x01\x00")) + + self.assertEqual(val, 0) + self.assertEqual(rem, b"") + + def test_encoding_of_127(self): + val, rem = remove_integer(b("\x02\x01\x7f")) + + self.assertEqual(val, 127) + self.assertEqual(rem, b"") + + def test_encoding_of_128(self): + val, rem = remove_integer(b("\x02\x02\x00\x80")) + + self.assertEqual(val, 128) + self.assertEqual(rem, b"") + + def test_wrong_tag(self): + with self.assertRaises(UnexpectedDER) as e: + remove_integer(b"\x01\x02\x00\x80") + + self.assertIn("wanted type 'integer'", str(e.exception)) + + def test_wrong_length(self): + with self.assertRaises(UnexpectedDER) as e: + remove_integer(b"\x02\x03\x00\x80") + + self.assertIn("Length longer", str(e.exception)) + + +class TestReadLength(unittest.TestCase): + # DER requires the lengths between 0 and 127 to be encoded using the short + # form and lengths above that encoded with minimal number of bytes + # necessary + def test_zero_length(self): + self.assertEqual((0, 1), read_length(b("\x00"))) + + def test_two_byte_zero_length(self): + with self.assertRaises(UnexpectedDER): + read_length(b("\x81\x00")) + + def test_two_byte_small_length(self): + with self.assertRaises(UnexpectedDER): + read_length(b("\x81\x7f")) + + def test_long_form_with_zero_length(self): + with self.assertRaises(UnexpectedDER): + 
read_length(b("\x80")) + + def test_smallest_two_byte_length(self): + self.assertEqual((128, 2), read_length(b("\x81\x80"))) + + def test_zero_padded_length(self): + with self.assertRaises(UnexpectedDER): + read_length(b("\x82\x00\x80")) + + def test_two_three_byte_length(self): + self.assertEqual((256, 3), read_length(b"\x82\x01\x00")) + + def test_empty_string(self): + with self.assertRaises(UnexpectedDER): + read_length(b("")) + + def test_length_overflow(self): + with self.assertRaises(UnexpectedDER): + read_length(b("\x83\x01\x00")) + + +class TestEncodeBitstring(unittest.TestCase): + # DER requires BIT STRINGS to include a number of padding bits in the + # encoded byte string, that padding must be between 0 and 7 + + def test_old_call_convention(self): + """This is the old way to use the function.""" + warnings.simplefilter("always") + with pytest.warns(DeprecationWarning) as warns: + der = encode_bitstring(b"\x00\xff") + + self.assertEqual(len(warns), 1) + self.assertIn( + "unused= needs to be specified", warns[0].message.args[0] + ) + + self.assertEqual(der, b"\x03\x02\x00\xff") + + def test_new_call_convention(self): + """This is how it should be called now.""" + warnings.simplefilter("always") + with pytest.warns(None) as warns: + der = encode_bitstring(b"\xff", 0) + + # verify that new call convention doesn't raise Warnings + self.assertEqual(len(warns), 0) + + self.assertEqual(der, b"\x03\x02\x00\xff") + + def test_implicit_unused_bits(self): + """ + Writing bit string with already included the number of unused bits. + """ + warnings.simplefilter("always") + with pytest.warns(None) as warns: + der = encode_bitstring(b"\x00\xff", None) + + # verify that new call convention doesn't raise Warnings + self.assertEqual(len(warns), 0) + + self.assertEqual(der, b"\x03\x02\x00\xff") + + def test_explicit_unused_bits(self): + der = encode_bitstring(b"\xff\xf0", 4) + + self.assertEqual(der, b"\x03\x03\x04\xff\xf0") + + def test_empty_string(self): + self.assertEqual(encode_bitstring(b"", 0), b"\x03\x01\x00") + + def test_invalid_unused_count(self): + with self.assertRaises(ValueError): + encode_bitstring(b"\xff\x00", 8) + + def test_invalid_unused_with_empty_string(self): + with self.assertRaises(ValueError): + encode_bitstring(b"", 1) + + def test_non_zero_padding_bits(self): + with self.assertRaises(ValueError): + encode_bitstring(b"\xff", 2) + + +class TestRemoveBitstring(unittest.TestCase): + def test_old_call_convention(self): + """This is the old way to call the function.""" + warnings.simplefilter("always") + with pytest.warns(DeprecationWarning) as warns: + bits, rest = remove_bitstring(b"\x03\x02\x00\xff") + + self.assertEqual(len(warns), 1) + self.assertIn( + "expect_unused= needs to be specified", warns[0].message.args[0] + ) + + self.assertEqual(bits, b"\x00\xff") + self.assertEqual(rest, b"") + + def test_new_call_convention(self): + warnings.simplefilter("always") + with pytest.warns(None) as warns: + bits, rest = remove_bitstring(b"\x03\x02\x00\xff", 0) + + self.assertEqual(len(warns), 0) + + self.assertEqual(bits, b"\xff") + self.assertEqual(rest, b"") + + def test_implicit_unexpected_unused(self): + warnings.simplefilter("always") + with pytest.warns(None) as warns: + bits, rest = remove_bitstring(b"\x03\x02\x00\xff", None) + + self.assertEqual(len(warns), 0) + + self.assertEqual(bits, (b"\xff", 0)) + self.assertEqual(rest, b"") + + def test_with_padding(self): + ret, rest = remove_bitstring(b"\x03\x02\x04\xf0", None) + + self.assertEqual(ret, (b"\xf0", 4)) + 
self.assertEqual(rest, b"") + + def test_not_a_bitstring(self): + with self.assertRaises(UnexpectedDER): + remove_bitstring(b"\x02\x02\x00\xff", None) + + def test_empty_encoding(self): + with self.assertRaises(UnexpectedDER): + remove_bitstring(b"\x03\x00", None) + + def test_empty_string(self): + with self.assertRaises(UnexpectedDER): + remove_bitstring(b"", None) + + def test_no_length(self): + with self.assertRaises(UnexpectedDER): + remove_bitstring(b"\x03", None) + + def test_unexpected_number_of_unused_bits(self): + with self.assertRaises(UnexpectedDER): + remove_bitstring(b"\x03\x02\x00\xff", 1) + + def test_invalid_encoding_of_unused_bits(self): + with self.assertRaises(UnexpectedDER): + remove_bitstring(b"\x03\x03\x08\xff\x00", None) + + def test_invalid_encoding_of_empty_string(self): + with self.assertRaises(UnexpectedDER): + remove_bitstring(b"\x03\x01\x01", None) + + def test_invalid_padding_bits(self): + with self.assertRaises(UnexpectedDER): + remove_bitstring(b"\x03\x02\x01\xff", None) + + +class TestStrIdxAsInt(unittest.TestCase): + def test_str(self): + self.assertEqual(115, str_idx_as_int("str", 0)) + + def test_bytes(self): + self.assertEqual(115, str_idx_as_int(b"str", 0)) + + def test_bytearray(self): + self.assertEqual(115, str_idx_as_int(bytearray(b"str"), 0)) + + +class TestEncodeOid(unittest.TestCase): + def test_pub_key_oid(self): + oid_ecPublicKey = encode_oid(1, 2, 840, 10045, 2, 1) + self.assertEqual(hexlify(oid_ecPublicKey), b("06072a8648ce3d0201")) + + def test_nist224p_oid(self): + self.assertEqual(hexlify(NIST224p.encoded_oid), b("06052b81040021")) + + def test_nist256p_oid(self): + self.assertEqual( + hexlify(NIST256p.encoded_oid), b"06082a8648ce3d030107" + ) + + def test_large_second_subid(self): + # from X.690, section 8.19.5 + oid = encode_oid(2, 999, 3) + self.assertEqual(oid, b"\x06\x03\x88\x37\x03") + + def test_with_two_subids(self): + oid = encode_oid(2, 999) + self.assertEqual(oid, b"\x06\x02\x88\x37") + + def test_zero_zero(self): + oid = encode_oid(0, 0) + self.assertEqual(oid, b"\x06\x01\x00") + + def test_with_wrong_types(self): + with self.assertRaises((TypeError, AssertionError)): + encode_oid(0, None) + + def test_with_small_first_large_second(self): + with self.assertRaises(AssertionError): + encode_oid(1, 40) + + def test_small_first_max_second(self): + oid = encode_oid(1, 39) + self.assertEqual(oid, b"\x06\x01\x4f") + + def test_with_invalid_first(self): + with self.assertRaises(AssertionError): + encode_oid(3, 39) + + +class TestRemoveObject(unittest.TestCase): + @classmethod + def setUpClass(cls): + cls.oid_ecPublicKey = encode_oid(1, 2, 840, 10045, 2, 1) + + def test_pub_key_oid(self): + oid, rest = remove_object(self.oid_ecPublicKey) + self.assertEqual(rest, b"") + self.assertEqual(oid, (1, 2, 840, 10045, 2, 1)) + + def test_with_extra_bytes(self): + oid, rest = remove_object(self.oid_ecPublicKey + b"more") + self.assertEqual(rest, b"more") + self.assertEqual(oid, (1, 2, 840, 10045, 2, 1)) + + def test_with_large_second_subid(self): + # from X.690, section 8.19.5 + oid, rest = remove_object(b"\x06\x03\x88\x37\x03") + self.assertEqual(rest, b"") + self.assertEqual(oid, (2, 999, 3)) + + def test_with_padded_first_subid(self): + with self.assertRaises(UnexpectedDER): + remove_object(b"\x06\x02\x80\x00") + + def test_with_padded_second_subid(self): + with self.assertRaises(UnexpectedDER): + remove_object(b"\x06\x04\x88\x37\x80\x01") + + def test_with_missing_last_byte_of_multi_byte(self): + with self.assertRaises(UnexpectedDER): + 
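+            # 0x83 has the continuation (high) bit set, promising another
+            # sub-identifier byte that the encoding never delivers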
remove_object(b"\x06\x03\x88\x37\x83") + + def test_with_two_subids(self): + oid, rest = remove_object(b"\x06\x02\x88\x37") + self.assertEqual(rest, b"") + self.assertEqual(oid, (2, 999)) + + def test_zero_zero(self): + oid, rest = remove_object(b"\x06\x01\x00") + self.assertEqual(rest, b"") + self.assertEqual(oid, (0, 0)) + + def test_empty_string(self): + with self.assertRaises(UnexpectedDER): + remove_object(b"") + + def test_missing_length(self): + with self.assertRaises(UnexpectedDER): + remove_object(b"\x06") + + def test_empty_oid(self): + with self.assertRaises(UnexpectedDER): + remove_object(b"\x06\x00") + + def test_empty_oid_overflow(self): + with self.assertRaises(UnexpectedDER): + remove_object(b"\x06\x01") + + def test_with_wrong_type(self): + with self.assertRaises(UnexpectedDER): + remove_object(b"\x04\x02\x88\x37") + + def test_with_too_long_length(self): + with self.assertRaises(UnexpectedDER): + remove_object(b"\x06\x03\x88\x37") + + +class TestRemoveConstructed(unittest.TestCase): + def test_simple(self): + data = b"\xa1\x02\xff\xaa" + + tag, body, rest = remove_constructed(data) + + self.assertEqual(tag, 0x01) + self.assertEqual(body, b"\xff\xaa") + self.assertEqual(rest, b"") + + def test_with_malformed_tag(self): + data = b"\x01\x02\xff\xaa" + + with self.assertRaises(UnexpectedDER) as e: + remove_constructed(data) + + self.assertIn("constructed tag", str(e.exception)) + + +class TestRemoveOctetString(unittest.TestCase): + def test_simple(self): + data = b"\x04\x03\xaa\xbb\xcc" + body, rest = remove_octet_string(data) + self.assertEqual(body, b"\xaa\xbb\xcc") + self.assertEqual(rest, b"") + + def test_with_malformed_tag(self): + data = b"\x03\x03\xaa\xbb\xcc" + with self.assertRaises(UnexpectedDER) as e: + remove_octet_string(data) + + self.assertIn("octetstring", str(e.exception)) + + +class TestRemoveSequence(unittest.TestCase): + def test_simple(self): + data = b"\x30\x02\xff\xaa" + body, rest = remove_sequence(data) + self.assertEqual(body, b"\xff\xaa") + self.assertEqual(rest, b"") + + def test_with_empty_string(self): + with self.assertRaises(UnexpectedDER) as e: + remove_sequence(b"") + + self.assertIn("Empty string", str(e.exception)) + + def test_with_wrong_tag(self): + data = b"\x20\x02\xff\xaa" + + with self.assertRaises(UnexpectedDER) as e: + remove_sequence(data) + + self.assertIn("wanted type 'sequence'", str(e.exception)) + + def test_with_wrong_length(self): + data = b"\x30\x03\xff\xaa" + + with self.assertRaises(UnexpectedDER) as e: + remove_sequence(data) + + self.assertIn("Length longer", str(e.exception)) + + +@st.composite +def st_oid(draw, max_value=2**512, max_size=50): + """ + Hypothesis strategy that returns valid OBJECT IDENTIFIERs as tuples + + :param max_value: maximum value of any single sub-identifier + :param max_size: maximum length of the generated OID + """ + first = draw(st.integers(min_value=0, max_value=2)) + if first < 2: + second = draw(st.integers(min_value=0, max_value=39)) + else: + second = draw(st.integers(min_value=0, max_value=max_value)) + rest = draw( + st.lists( + st.integers(min_value=0, max_value=max_value), max_size=max_size + ) + ) + return (first, second) + tuple(rest) + + +@given(st_oid()) +def test_oids(ids): + encoded_oid = encode_oid(*ids) + decoded_oid, rest = remove_object(encoded_oid) + assert rest == b"" + assert decoded_oid == ids diff --git a/env/lib/python3.8/site-packages/ecdsa/test_ecdh.py b/env/lib/python3.8/site-packages/ecdsa/test_ecdh.py new file mode 100644 index 00000000..872d4d14 --- /dev/null 
+++ b/env/lib/python3.8/site-packages/ecdsa/test_ecdh.py @@ -0,0 +1,441 @@ +import os +import shutil +import subprocess +import pytest +from binascii import unhexlify + +try: + import unittest2 as unittest +except ImportError: + import unittest + +from .curves import ( + NIST192p, + NIST224p, + NIST256p, + NIST384p, + NIST521p, + BRAINPOOLP160r1, +) +from .curves import curves +from .ecdh import ( + ECDH, + InvalidCurveError, + InvalidSharedSecretError, + NoKeyError, + NoCurveError, +) +from .keys import SigningKey, VerifyingKey +from .ellipticcurve import CurveEdTw + + +@pytest.mark.parametrize( + "vcurve", + curves, + ids=[curve.name for curve in curves], +) +def test_ecdh_each(vcurve): + if isinstance(vcurve.curve, CurveEdTw): + pytest.skip("ECDH is not supported for Edwards curves") + ecdh1 = ECDH(curve=vcurve) + ecdh2 = ECDH(curve=vcurve) + + ecdh2.generate_private_key() + ecdh1.load_received_public_key(ecdh2.get_public_key()) + ecdh2.load_received_public_key(ecdh1.generate_private_key()) + + secret1 = ecdh1.generate_sharedsecret_bytes() + secret2 = ecdh2.generate_sharedsecret_bytes() + assert secret1 == secret2 + + +def test_ecdh_both_keys_present(): + key1 = SigningKey.generate(BRAINPOOLP160r1) + key2 = SigningKey.generate(BRAINPOOLP160r1) + + ecdh1 = ECDH(BRAINPOOLP160r1, key1, key2.verifying_key) + ecdh2 = ECDH(private_key=key2, public_key=key1.verifying_key) + + secret1 = ecdh1.generate_sharedsecret_bytes() + secret2 = ecdh2.generate_sharedsecret_bytes() + + assert secret1 == secret2 + + +def test_ecdh_no_public_key(): + ecdh1 = ECDH(curve=NIST192p) + + with pytest.raises(NoKeyError): + ecdh1.generate_sharedsecret_bytes() + + ecdh1.generate_private_key() + + with pytest.raises(NoKeyError): + ecdh1.generate_sharedsecret_bytes() + + +class TestECDH(unittest.TestCase): + def test_load_key_from_wrong_curve(self): + ecdh1 = ECDH() + ecdh1.set_curve(NIST192p) + + key1 = SigningKey.generate(BRAINPOOLP160r1) + + with self.assertRaises(InvalidCurveError) as e: + ecdh1.load_private_key(key1) + + self.assertIn("Curve mismatch", str(e.exception)) + + def test_generate_without_curve(self): + ecdh1 = ECDH() + + with self.assertRaises(NoCurveError) as e: + ecdh1.generate_private_key() + + self.assertIn("Curve must be set", str(e.exception)) + + def test_load_bytes_without_curve_set(self): + ecdh1 = ECDH() + + with self.assertRaises(NoCurveError) as e: + ecdh1.load_private_key_bytes(b"\x01" * 32) + + self.assertIn("Curve must be set", str(e.exception)) + + def test_set_curve_from_received_public_key(self): + ecdh1 = ECDH() + + key1 = SigningKey.generate(BRAINPOOLP160r1) + + ecdh1.load_received_public_key(key1.verifying_key) + + self.assertEqual(ecdh1.curve, BRAINPOOLP160r1) + + +def test_ecdh_wrong_public_key_curve(): + ecdh1 = ECDH(curve=NIST192p) + ecdh1.generate_private_key() + ecdh2 = ECDH(curve=NIST256p) + ecdh2.generate_private_key() + + with pytest.raises(InvalidCurveError): + ecdh1.load_received_public_key(ecdh2.get_public_key()) + + with pytest.raises(InvalidCurveError): + ecdh2.load_received_public_key(ecdh1.get_public_key()) + + ecdh1.public_key = ecdh2.get_public_key() + ecdh2.public_key = ecdh1.get_public_key() + + with pytest.raises(InvalidCurveError): + ecdh1.generate_sharedsecret_bytes() + + with pytest.raises(InvalidCurveError): + ecdh2.generate_sharedsecret_bytes() + + +def test_ecdh_invalid_shared_secret_curve(): + ecdh1 = ECDH(curve=NIST256p) + ecdh1.generate_private_key() + + ecdh1.load_received_public_key( + SigningKey.generate(NIST256p).get_verifying_key() + ) + + 
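+    # force the private scalar to the group order: order * point gives the
+    # point at infinity, which has no x-coordinate to derive a secret from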
ecdh1.private_key.privkey.secret_multiplier = ecdh1.private_key.curve.order + + with pytest.raises(InvalidSharedSecretError): + ecdh1.generate_sharedsecret_bytes() + + +# https://github.com/scogliani/ecc-test-vectors/blob/master/ecdh_kat/secp192r1.txt +# https://github.com/scogliani/ecc-test-vectors/blob/master/ecdh_kat/secp256r1.txt +# https://github.com/coruus/nist-testvectors/blob/master/csrc.nist.gov/groups/STM/cavp/documents/components/ecccdhtestvectors/KAS_ECC_CDH_PrimitiveTest.txt +@pytest.mark.parametrize( + "curve,privatekey,pubkey,secret", + [ + pytest.param( + NIST192p, + "f17d3fea367b74d340851ca4270dcb24c271f445bed9d527", + "42ea6dd9969dd2a61fea1aac7f8e98edcc896c6e55857cc0" + "dfbe5d7c61fac88b11811bde328e8a0d12bf01a9d204b523", + "803d8ab2e5b6e6fca715737c3a82f7ce3c783124f6d51cd0", + id="NIST192p-1", + ), + pytest.param( + NIST192p, + "56e853349d96fe4c442448dacb7cf92bb7a95dcf574a9bd5", + "deb5712fa027ac8d2f22c455ccb73a91e17b6512b5e030e7" + "7e2690a02cc9b28708431a29fb54b87b1f0c14e011ac2125", + "c208847568b98835d7312cef1f97f7aa298283152313c29d", + id="NIST192p-2", + ), + pytest.param( + NIST192p, + "c6ef61fe12e80bf56f2d3f7d0bb757394519906d55500949", + "4edaa8efc5a0f40f843663ec5815e7762dddc008e663c20f" + "0a9f8dc67a3e60ef6d64b522185d03df1fc0adfd42478279", + "87229107047a3b611920d6e3b2c0c89bea4f49412260b8dd", + id="NIST192p-3", + ), + pytest.param( + NIST192p, + "e6747b9c23ba7044f38ff7e62c35e4038920f5a0163d3cda", + "8887c276edeed3e9e866b46d58d895c73fbd80b63e382e88" + "04c5097ba6645e16206cfb70f7052655947dd44a17f1f9d5", + "eec0bed8fc55e1feddc82158fd6dc0d48a4d796aaf47d46c", + id="NIST192p-4", + ), + pytest.param( + NIST192p, + "beabedd0154a1afcfc85d52181c10f5eb47adc51f655047d", + "0d045f30254adc1fcefa8a5b1f31bf4e739dd327cd18d594" + "542c314e41427c08278a08ce8d7305f3b5b849c72d8aff73", + "716e743b1b37a2cd8479f0a3d5a74c10ba2599be18d7e2f4", + id="NIST192p-5", + ), + pytest.param( + NIST192p, + "cf70354226667321d6e2baf40999e2fd74c7a0f793fa8699", + "fb35ca20d2e96665c51b98e8f6eb3d79113508d8bccd4516" + "368eec0d5bfb847721df6aaff0e5d48c444f74bf9cd8a5a7", + "f67053b934459985a315cb017bf0302891798d45d0e19508", + id="NIST192p-6", + ), + pytest.param( + NIST224p, + "8346a60fc6f293ca5a0d2af68ba71d1dd389e5e40837942df3e43cbd", + "af33cd0629bc7e996320a3f40368f74de8704fa37b8fab69abaae280" + "882092ccbba7930f419a8a4f9bb16978bbc3838729992559a6f2e2d7", + "7d96f9a3bd3c05cf5cc37feb8b9d5209d5c2597464dec3e9983743e8", + id="NIST224p", + ), + pytest.param( + NIST256p, + "7d7dc5f71eb29ddaf80d6214632eeae03d9058af1fb6d22ed80badb62bc1a534", + "700c48f77f56584c5cc632ca65640db91b6bacce3a4df6b42ce7cc838833d287" + "db71e509e3fd9b060ddb20ba5c51dcc5948d46fbf640dfe0441782cab85fa4ac", + "46fc62106420ff012e54a434fbdd2d25ccc5852060561e68040dd7778997bd7b", + id="NIST256p-1", + ), + pytest.param( + NIST256p, + "38f65d6dce47676044d58ce5139582d568f64bb16098d179dbab07741dd5caf5", + "809f04289c64348c01515eb03d5ce7ac1a8cb9498f5caa50197e58d43a86a7ae" + "b29d84e811197f25eba8f5194092cb6ff440e26d4421011372461f579271cda3", + "057d636096cb80b67a8c038c890e887d1adfa4195e9b3ce241c8a778c59cda67", + id="NIST256p-2", + ), + pytest.param( + NIST256p, + "1accfaf1b97712b85a6f54b148985a1bdc4c9bec0bd258cad4b3d603f49f32c8", + "a2339c12d4a03c33546de533268b4ad667debf458b464d77443636440ee7fec3" + "ef48a3ab26e20220bcda2c1851076839dae88eae962869a497bf73cb66faf536", + "2d457b78b4614132477618a5b077965ec90730a8c81a1c75d6d4ec68005d67ec", + id="NIST256p-3", + ), + pytest.param( + NIST256p, + "207c43a79bfee03db6f4b944f53d2fb76cc49ef1c9c4d34d51b6c65c4db6932d", + 
"df3989b9fa55495719b3cf46dccd28b5153f7808191dd518eff0c3cff2b705ed" + "422294ff46003429d739a33206c8752552c8ba54a270defc06e221e0feaf6ac4", + "96441259534b80f6aee3d287a6bb17b5094dd4277d9e294f8fe73e48bf2a0024", + id="NIST256p-4", + ), + pytest.param( + NIST256p, + "59137e38152350b195c9718d39673d519838055ad908dd4757152fd8255c09bf", + "41192d2813e79561e6a1d6f53c8bc1a433a199c835e141b05a74a97b0faeb922" + "1af98cc45e98a7e041b01cf35f462b7562281351c8ebf3ffa02e33a0722a1328", + "19d44c8d63e8e8dd12c22a87b8cd4ece27acdde04dbf47f7f27537a6999a8e62", + id="NIST256p-5", + ), + pytest.param( + NIST256p, + "f5f8e0174610a661277979b58ce5c90fee6c9b3bb346a90a7196255e40b132ef", + "33e82092a0f1fb38f5649d5867fba28b503172b7035574bf8e5b7100a3052792" + "f2cf6b601e0a05945e335550bf648d782f46186c772c0f20d3cd0d6b8ca14b2f", + "664e45d5bba4ac931cd65d52017e4be9b19a515f669bea4703542a2c525cd3d3", + id="NIST256p-6", + ), + pytest.param( + NIST384p, + "3cc3122a68f0d95027ad38c067916ba0eb8c38894d22e1b1" + "5618b6818a661774ad463b205da88cf699ab4d43c9cf98a1", + "a7c76b970c3b5fe8b05d2838ae04ab47697b9eaf52e76459" + "2efda27fe7513272734466b400091adbf2d68c58e0c50066" + "ac68f19f2e1cb879aed43a9969b91a0839c4c38a49749b66" + "1efedf243451915ed0905a32b060992b468c64766fc8437a", + "5f9d29dc5e31a163060356213669c8ce132e22f57c9a04f4" + "0ba7fcead493b457e5621e766c40a2e3d4d6a04b25e533f1", + id="NIST384p", + ), + pytest.param( + NIST521p, + "017eecc07ab4b329068fba65e56a1f8890aa935e57134ae0ffcce802735151f4ea" + "c6564f6ee9974c5e6887a1fefee5743ae2241bfeb95d5ce31ddcb6f9edb4d6fc47", + "00685a48e86c79f0f0875f7bc18d25eb5fc8c0b07e5da4f4370f3a949034085433" + "4b1e1b87fa395464c60626124a4e70d0f785601d37c09870ebf176666877a2046d" + "01ba52c56fc8776d9e8f5db4f0cc27636d0b741bbe05400697942e80b739884a83" + "bde99e0f6716939e632bc8986fa18dccd443a348b6c3e522497955a4f3c302f676", + "005fc70477c3e63bc3954bd0df3ea0d1f41ee21746ed95fc5e1fdf90930d5e1366" + "72d72cc770742d1711c3c3a4c334a0ad9759436a4d3c5bf6e74b9578fac148c831", + id="NIST521p", + ), + ], +) +def test_ecdh_NIST(curve, privatekey, pubkey, secret): + ecdh = ECDH(curve=curve) + ecdh.load_private_key_bytes(unhexlify(privatekey)) + ecdh.load_received_public_key_bytes(unhexlify(pubkey)) + + sharedsecret = ecdh.generate_sharedsecret_bytes() + + assert sharedsecret == unhexlify(secret) + + +pem_local_private_key = ( + "-----BEGIN EC PRIVATE KEY-----\n" + "MF8CAQEEGF7IQgvW75JSqULpiQQ8op9WH6Uldw6xxaAKBggqhkjOPQMBAaE0AzIA\n" + "BLiBd9CE7xf15FY5QIAoNg+fWbSk1yZOYtoGUdzkejWkxbRc9RWTQjqLVXucIJnz\n" + "bA==\n" + "-----END EC PRIVATE KEY-----\n" +) +der_local_private_key = ( + "305f02010104185ec8420bd6ef9252a942e989043ca29f561fa525770eb1c5a00a06082a864" + "8ce3d030101a13403320004b88177d084ef17f5e45639408028360f9f59b4a4d7264e62da06" + "51dce47a35a4c5b45cf51593423a8b557b9c2099f36c" +) +pem_remote_public_key = ( + "-----BEGIN PUBLIC KEY-----\n" + "MEkwEwYHKoZIzj0CAQYIKoZIzj0DAQEDMgAEuIF30ITvF/XkVjlAgCg2D59ZtKTX\n" + "Jk5i2gZR3OR6NaTFtFz1FZNCOotVe5wgmfNs\n" + "-----END PUBLIC KEY-----\n" +) +der_remote_public_key = ( + "3049301306072a8648ce3d020106082a8648ce3d03010103320004b88177d084ef17f5e4563" + "9408028360f9f59b4a4d7264e62da0651dce47a35a4c5b45cf51593423a8b557b9c2099f36c" +) +gshared_secret = "8f457e34982478d1c34b9cd2d0c15911b72dd60d869e2cea" + + +def test_ecdh_pem(): + ecdh = ECDH() + ecdh.load_private_key_pem(pem_local_private_key) + ecdh.load_received_public_key_pem(pem_remote_public_key) + + sharedsecret = ecdh.generate_sharedsecret_bytes() + + assert sharedsecret == unhexlify(gshared_secret) + + +def test_ecdh_der(): + ecdh = 
ECDH()
+    ecdh.load_private_key_der(unhexlify(der_local_private_key))
+    ecdh.load_received_public_key_der(unhexlify(der_remote_public_key))
+
+    sharedsecret = ecdh.generate_sharedsecret_bytes()
+
+    assert sharedsecret == unhexlify(gshared_secret)
+
+
+# Exception classes used by run_openssl.
+class RunOpenSslError(Exception):
+    pass
+
+
+def run_openssl(cmd):
+    OPENSSL = "openssl"
+    p = subprocess.Popen(
+        [OPENSSL] + cmd.split(),
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+    )
+    stdout, ignored = p.communicate()
+    if p.returncode != 0:
+        raise RunOpenSslError(
+            "cmd '%s %s' failed: rc=%s, stdout/err was %s"
+            % (OPENSSL, cmd, p.returncode, stdout)
+        )
+    return stdout.decode()
+
+
+# curve names advertised by the local openssl binary
+OPENSSL_SUPPORTED_CURVES = set(
+    c.split(":")[0].strip()
+    for c in run_openssl("ecparam -list_curves").split("\n")
+)
+
+
+@pytest.mark.parametrize(
+    "vcurve",
+    curves,
+    ids=[curve.name for curve in curves],
+)
+def test_ecdh_with_openssl(vcurve):
+    if isinstance(vcurve.curve, CurveEdTw):
+        pytest.skip("Edwards curves are not supported for ECDH")
+
+    assert vcurve.openssl_name
+
+    if vcurve.openssl_name not in OPENSSL_SUPPORTED_CURVES:
+        pytest.skip("system openssl does not support " + vcurve.openssl_name)
+
+    try:
+        hlp = run_openssl("pkeyutl -help")
+        if hlp.find("-derive") == -1:  # pragma: no cover
+            pytest.skip("system openssl does not support `pkeyutl -derive`")
+    except RunOpenSslError:  # pragma: no cover
+        pytest.skip("system openssl could not be executed")
+
+    if os.path.isdir("t"):  # pragma: no branch
+        shutil.rmtree("t")
+    os.mkdir("t")
+    run_openssl(
+        "ecparam -name %s -genkey -out t/privkey1.pem" % vcurve.openssl_name
+    )
+    run_openssl(
+        "ecparam -name %s -genkey -out t/privkey2.pem" % vcurve.openssl_name
+    )
+    run_openssl("ec -in t/privkey1.pem -pubout -out t/pubkey1.pem")
+
+    ecdh1 = ECDH(curve=vcurve)
+    ecdh2 = ECDH(curve=vcurve)
+    with open("t/privkey1.pem") as e:
+        key = e.read()
+    ecdh1.load_private_key_pem(key)
+    with open("t/privkey2.pem") as e:
+        key = e.read()
+    ecdh2.load_private_key_pem(key)
+
+    with open("t/pubkey1.pem") as e:
+        key = e.read()
+    vk1 = VerifyingKey.from_pem(key)
+    assert vk1.to_string() == ecdh1.get_public_key().to_string()
+    vk2 = ecdh2.get_public_key()
+    with open("t/pubkey2.pem", "wb") as e:
+        e.write(vk2.to_pem())
+
+    ecdh1.load_received_public_key(vk2)
+    ecdh2.load_received_public_key(vk1)
+    secret1 = ecdh1.generate_sharedsecret_bytes()
+    secret2 = ecdh2.generate_sharedsecret_bytes()
+
+    assert secret1 == secret2
+
+    run_openssl(
+        "pkeyutl -derive -inkey t/privkey1.pem -peerkey t/pubkey2.pem -out t/secret1"
+    )
+    run_openssl(
+        "pkeyutl -derive -inkey t/privkey2.pem -peerkey t/pubkey1.pem -out t/secret2"
+    )
+
+    # the two openssl-derived secrets must match each other and ours
+    with open("t/secret1", "rb") as e:
+        ssl_secret1 = e.read()
+    with open("t/secret2", "rb") as e:
+        ssl_secret2 = e.read()
+
+    assert len(ssl_secret1) == vk1.curve.verifying_key_length // 2
+    assert len(secret1) == vk1.curve.verifying_key_length // 2
+
+    assert ssl_secret1 == ssl_secret2
+    assert secret1 == ssl_secret1
diff --git a/env/lib/python3.8/site-packages/ecdsa/test_ecdsa.py b/env/lib/python3.8/site-packages/ecdsa/test_ecdsa.py
new file mode 100644
index 00000000..dbc4a6eb
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ecdsa/test_ecdsa.py
@@ -0,0 +1,661 @@
+from __future__ import print_function
+import sys
+import hypothesis.strategies as st
+from hypothesis import given, settings, note, example
+
+try:
+    import unittest2 as unittest
+except ImportError:
+    import unittest
+import pytest
+from .ecdsa import (
+    Private_key,
+
Public_key, + Signature, + generator_192, + digest_integer, + ellipticcurve, + point_is_valid, + generator_224, + generator_256, + generator_384, + generator_521, + generator_secp256k1, + curve_192, + InvalidPointError, + curve_112r2, + generator_112r2, + int_to_string, +) + + +HYP_SETTINGS = {} +# old hypothesis doesn't have the "deadline" setting +if sys.version_info > (2, 7): # pragma: no branch + # SEC521p is slow, allow long execution for it + HYP_SETTINGS["deadline"] = 5000 + + +class TestP192FromX9_62(unittest.TestCase): + """Check test vectors from X9.62""" + + @classmethod + def setUpClass(cls): + cls.d = 651056770906015076056810763456358567190100156695615665659 + cls.Q = cls.d * generator_192 + cls.k = 6140507067065001063065065565667405560006161556565665656654 + cls.R = cls.k * generator_192 + + cls.msg = 968236873715988614170569073515315707566766479517 + cls.pubk = Public_key(generator_192, generator_192 * cls.d) + cls.privk = Private_key(cls.pubk, cls.d) + cls.sig = cls.privk.sign(cls.msg, cls.k) + + def test_point_multiplication(self): + assert self.Q.x() == 0x62B12D60690CDCF330BABAB6E69763B471F994DD702D16A5 + + def test_point_multiplication_2(self): + assert self.R.x() == 0x885052380FF147B734C330C43D39B2C4A89F29B0F749FEAD + assert self.R.y() == 0x9CF9FA1CBEFEFB917747A3BB29C072B9289C2547884FD835 + + def test_mult_and_addition(self): + u1 = 2563697409189434185194736134579731015366492496392189760599 + u2 = 6266643813348617967186477710235785849136406323338782220568 + temp = u1 * generator_192 + u2 * self.Q + assert temp.x() == 0x885052380FF147B734C330C43D39B2C4A89F29B0F749FEAD + assert temp.y() == 0x9CF9FA1CBEFEFB917747A3BB29C072B9289C2547884FD835 + + def test_signature(self): + r, s = self.sig.r, self.sig.s + assert r == 3342403536405981729393488334694600415596881826869351677613 + assert s == 5735822328888155254683894997897571951568553642892029982342 + + def test_verification(self): + assert self.pubk.verifies(self.msg, self.sig) + + def test_rejection(self): + assert not self.pubk.verifies(self.msg - 1, self.sig) + + +class TestPublicKey(unittest.TestCase): + def test_equality_public_keys(self): + gen = generator_192 + x = 0xC58D61F88D905293BCD4CD0080BCB1B7F811F2FFA41979F6 + y = 0x8804DC7A7C4C7F8B5D437F5156F3312CA7D6DE8A0E11867F + point = ellipticcurve.Point(gen.curve(), x, y) + pub_key1 = Public_key(gen, point) + pub_key2 = Public_key(gen, point) + self.assertEqual(pub_key1, pub_key2) + + def test_inequality_public_key(self): + gen = generator_192 + x1 = 0xC58D61F88D905293BCD4CD0080BCB1B7F811F2FFA41979F6 + y1 = 0x8804DC7A7C4C7F8B5D437F5156F3312CA7D6DE8A0E11867F + point1 = ellipticcurve.Point(gen.curve(), x1, y1) + + x2 = 0x6A223D00BD22C52833409A163E057E5B5DA1DEF2A197DD15 + y2 = 0x7B482604199367F1F303F9EF627F922F97023E90EAE08ABF + point2 = ellipticcurve.Point(gen.curve(), x2, y2) + + pub_key1 = Public_key(gen, point1) + pub_key2 = Public_key(gen, point2) + self.assertNotEqual(pub_key1, pub_key2) + + def test_inequality_different_curves(self): + gen = generator_192 + x1 = 0xC58D61F88D905293BCD4CD0080BCB1B7F811F2FFA41979F6 + y1 = 0x8804DC7A7C4C7F8B5D437F5156F3312CA7D6DE8A0E11867F + point1 = ellipticcurve.Point(gen.curve(), x1, y1) + + x2 = 0x722BA0FB6B8FC8898A4C6AB49E66 + y2 = 0x2B7344BB57A7ABC8CA0F1A398C7D + point2 = ellipticcurve.Point(generator_112r2.curve(), x2, y2) + + pub_key1 = Public_key(gen, point1) + pub_key2 = Public_key(generator_112r2, point2) + self.assertNotEqual(pub_key1, pub_key2) + + def test_inequality_public_key_not_implemented(self): + gen = generator_192 + x 
= 0xC58D61F88D905293BCD4CD0080BCB1B7F811F2FFA41979F6 + y = 0x8804DC7A7C4C7F8B5D437F5156F3312CA7D6DE8A0E11867F + point = ellipticcurve.Point(gen.curve(), x, y) + pub_key = Public_key(gen, point) + self.assertNotEqual(pub_key, None) + + def test_public_key_with_generator_without_order(self): + gen = ellipticcurve.PointJacobi( + generator_192.curve(), generator_192.x(), generator_192.y(), 1 + ) + + x = 0xC58D61F88D905293BCD4CD0080BCB1B7F811F2FFA41979F6 + y = 0x8804DC7A7C4C7F8B5D437F5156F3312CA7D6DE8A0E11867F + point = ellipticcurve.Point(gen.curve(), x, y) + + with self.assertRaises(InvalidPointError) as e: + Public_key(gen, point) + + self.assertIn("Generator point must have order", str(e.exception)) + + def test_public_point_on_curve_not_scalar_multiple_of_base_point(self): + x = 2 + y = 0xBE6AA4938EF7CFE6FE29595B6B00 + # we need a curve with cofactor != 1 + point = ellipticcurve.PointJacobi(curve_112r2, x, y, 1) + + self.assertTrue(curve_112r2.contains_point(x, y)) + + with self.assertRaises(InvalidPointError) as e: + Public_key(generator_112r2, point) + + self.assertIn("Generator point order", str(e.exception)) + + def test_point_is_valid_with_not_scalar_multiple_of_base_point(self): + x = 2 + y = 0xBE6AA4938EF7CFE6FE29595B6B00 + + self.assertFalse(point_is_valid(generator_112r2, x, y)) + + # the tests to verify the extensiveness of tests in ecdsa.ecdsa + # if PointJacobi gets modified to calculate the x and y mod p the tests + # below will need to use a fake/mock object + def test_invalid_point_x_negative(self): + pt = ellipticcurve.PointJacobi(curve_192, -1, 0, 1) + + with self.assertRaises(InvalidPointError) as e: + Public_key(generator_192, pt) + + self.assertIn("The public point has x or y", str(e.exception)) + + def test_invalid_point_x_equal_p(self): + pt = ellipticcurve.PointJacobi(curve_192, curve_192.p(), 0, 1) + + with self.assertRaises(InvalidPointError) as e: + Public_key(generator_192, pt) + + self.assertIn("The public point has x or y", str(e.exception)) + + def test_invalid_point_y_negative(self): + pt = ellipticcurve.PointJacobi(curve_192, 0, -1, 1) + + with self.assertRaises(InvalidPointError) as e: + Public_key(generator_192, pt) + + self.assertIn("The public point has x or y", str(e.exception)) + + def test_invalid_point_y_equal_p(self): + pt = ellipticcurve.PointJacobi(curve_192, 0, curve_192.p(), 1) + + with self.assertRaises(InvalidPointError) as e: + Public_key(generator_192, pt) + + self.assertIn("The public point has x or y", str(e.exception)) + + +class TestPublicKeyVerifies(unittest.TestCase): + # test all the different ways that a signature can be publicly invalid + @classmethod + def setUpClass(cls): + gen = generator_192 + x = 0xC58D61F88D905293BCD4CD0080BCB1B7F811F2FFA41979F6 + y = 0x8804DC7A7C4C7F8B5D437F5156F3312CA7D6DE8A0E11867F + point = ellipticcurve.Point(gen.curve(), x, y) + + cls.pub_key = Public_key(gen, point) + + def test_sig_with_r_zero(self): + sig = Signature(0, 1) + + self.assertFalse(self.pub_key.verifies(1, sig)) + + def test_sig_with_r_order(self): + sig = Signature(generator_192.order(), 1) + + self.assertFalse(self.pub_key.verifies(1, sig)) + + def test_sig_with_s_zero(self): + sig = Signature(1, 0) + + self.assertFalse(self.pub_key.verifies(1, sig)) + + def test_sig_with_s_order(self): + sig = Signature(1, generator_192.order()) + + self.assertFalse(self.pub_key.verifies(1, sig)) + + +class TestPrivateKey(unittest.TestCase): + @classmethod + def setUpClass(cls): + gen = generator_192 + x = 
0xC58D61F88D905293BCD4CD0080BCB1B7F811F2FFA41979F6 + y = 0x8804DC7A7C4C7F8B5D437F5156F3312CA7D6DE8A0E11867F + point = ellipticcurve.Point(gen.curve(), x, y) + cls.pub_key = Public_key(gen, point) + + def test_equality_private_keys(self): + pr_key1 = Private_key(self.pub_key, 100) + pr_key2 = Private_key(self.pub_key, 100) + self.assertEqual(pr_key1, pr_key2) + + def test_inequality_private_keys(self): + pr_key1 = Private_key(self.pub_key, 100) + pr_key2 = Private_key(self.pub_key, 200) + self.assertNotEqual(pr_key1, pr_key2) + + def test_inequality_private_keys_not_implemented(self): + pr_key = Private_key(self.pub_key, 100) + self.assertNotEqual(pr_key, None) + + +# Testing point validity, as per ECDSAVS.pdf B.2.2: +P192_POINTS = [ + ( + generator_192, + 0xCD6D0F029A023E9AACA429615B8F577ABEE685D8257CC83A, + 0x00019C410987680E9FB6C0B6ECC01D9A2647C8BAE27721BACDFC, + False, + ), + ( + generator_192, + 0x00017F2FCE203639E9EAF9FB50B81FC32776B30E3B02AF16C73B, + 0x95DA95C5E72DD48E229D4748D4EEE658A9A54111B23B2ADB, + False, + ), + ( + generator_192, + 0x4F77F8BC7FCCBADD5760F4938746D5F253EE2168C1CF2792, + 0x000147156FF824D131629739817EDB197717C41AAB5C2A70F0F6, + False, + ), + ( + generator_192, + 0xC58D61F88D905293BCD4CD0080BCB1B7F811F2FFA41979F6, + 0x8804DC7A7C4C7F8B5D437F5156F3312CA7D6DE8A0E11867F, + True, + ), + ( + generator_192, + 0xCDF56C1AA3D8AFC53C521ADF3FFB96734A6A630A4A5B5A70, + 0x97C1C44A5FB229007B5EC5D25F7413D170068FFD023CAA4E, + True, + ), + ( + generator_192, + 0x89009C0DC361C81E99280C8E91DF578DF88CDF4B0CDEDCED, + 0x27BE44A529B7513E727251F128B34262A0FD4D8EC82377B9, + True, + ), + ( + generator_192, + 0x6A223D00BD22C52833409A163E057E5B5DA1DEF2A197DD15, + 0x7B482604199367F1F303F9EF627F922F97023E90EAE08ABF, + True, + ), + ( + generator_192, + 0x6DCCBDE75C0948C98DAB32EA0BC59FE125CF0FB1A3798EDA, + 0x0001171A3E0FA60CF3096F4E116B556198DE430E1FBD330C8835, + False, + ), + ( + generator_192, + 0xD266B39E1F491FC4ACBBBC7D098430931CFA66D55015AF12, + 0x193782EB909E391A3148B7764E6B234AA94E48D30A16DBB2, + False, + ), + ( + generator_192, + 0x9D6DDBCD439BAA0C6B80A654091680E462A7D1D3F1FFEB43, + 0x6AD8EFC4D133CCF167C44EB4691C80ABFFB9F82B932B8CAA, + False, + ), + ( + generator_192, + 0x146479D944E6BDA87E5B35818AA666A4C998A71F4E95EDBC, + 0xA86D6FE62BC8FBD88139693F842635F687F132255858E7F6, + False, + ), + ( + generator_192, + 0xE594D4A598046F3598243F50FD2C7BD7D380EDB055802253, + 0x509014C0C4D6B536E3CA750EC09066AF39B4C8616A53A923, + False, + ), +] + + +@pytest.mark.parametrize("generator,x,y,expected", P192_POINTS) +def test_point_validity(generator, x, y, expected): + """ + `generator` defines the curve; is `(x, y)` a point on + this curve? `expected` is True if the right answer is Yes. 
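+    The rejected points either use coordinates wider than the 192-bit
+    field of the curve or do not satisfy the curve equation.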
+ """ + assert point_is_valid(generator, x, y) == expected + + +# Trying signature-verification tests from ECDSAVS.pdf B.2.4: +CURVE_192_KATS = [ + ( + generator_192, + int( + "0x84ce72aa8699df436059f052ac51b6398d2511e49631bcb7e71f89c499b9ee" + "425dfbc13a5f6d408471b054f2655617cbbaf7937b7c80cd8865cf02c8487d30" + "d2b0fbd8b2c4e102e16d828374bbc47b93852f212d5043c3ea720f086178ff79" + "8cc4f63f787b9c2e419efa033e7644ea7936f54462dc21a6c4580725f7f0e7d1" + "58", + 16, + ), + 0xD9DBFB332AA8E5FF091E8CE535857C37C73F6250FFB2E7AC, + 0x282102E364FEDED3AD15DDF968F88D8321AA268DD483EBC4, + 0x64DCA58A20787C488D11D6DD96313F1B766F2D8EFE122916, + 0x1ECBA28141E84AB4ECAD92F56720E2CC83EB3D22DEC72479, + True, + ), + ( + generator_192, + int( + "0x94bb5bacd5f8ea765810024db87f4224ad71362a3c28284b2b9f39fab86db1" + "2e8beb94aae899768229be8fdb6c4f12f28912bb604703a79ccff769c1607f5a" + "91450f30ba0460d359d9126cbd6296be6d9c4bb96c0ee74cbb44197c207f6db3" + "26ab6f5a659113a9034e54be7b041ced9dcf6458d7fb9cbfb2744d999f7dfd63" + "f4", + 16, + ), + 0x3E53EF8D3112AF3285C0E74842090712CD324832D4277AE7, + 0xCC75F8952D30AEC2CBB719FC6AA9934590B5D0FF5A83ADB7, + 0x8285261607283BA18F335026130BAB31840DCFD9C3E555AF, + 0x356D89E1B04541AFC9704A45E9C535CE4A50929E33D7E06C, + True, + ), + ( + generator_192, + int( + "0xf6227a8eeb34afed1621dcc89a91d72ea212cb2f476839d9b4243c66877911" + "b37b4ad6f4448792a7bbba76c63bdd63414b6facab7dc71c3396a73bd7ee14cd" + "d41a659c61c99b779cecf07bc51ab391aa3252386242b9853ea7da67fd768d30" + "3f1b9b513d401565b6f1eb722dfdb96b519fe4f9bd5de67ae131e64b40e78c42" + "dd", + 16, + ), + 0x16335DBE95F8E8254A4E04575D736BEFB258B8657F773CB7, + 0x421B13379C59BC9DCE38A1099CA79BBD06D647C7F6242336, + 0x4141BD5D64EA36C5B0BD21EF28C02DA216ED9D04522B1E91, + 0x159A6AA852BCC579E821B7BB0994C0861FB08280C38DAA09, + False, + ), + ( + generator_192, + int( + "0x16b5f93afd0d02246f662761ed8e0dd9504681ed02a253006eb36736b56309" + "7ba39f81c8e1bce7a16c1339e345efabbc6baa3efb0612948ae51103382a8ee8" + "bc448e3ef71e9f6f7a9676694831d7f5dd0db5446f179bcb737d4a526367a447" + "bfe2c857521c7f40b6d7d7e01a180d92431fb0bbd29c04a0c420a57b3ed26ccd" + "8a", + 16, + ), + 0xFD14CDF1607F5EFB7B1793037B15BDF4BAA6F7C16341AB0B, + 0x83FA0795CC6C4795B9016DAC928FD6BAC32F3229A96312C4, + 0x8DFDB832951E0167C5D762A473C0416C5C15BC1195667DC1, + 0x1720288A2DC13FA1EC78F763F8FE2FF7354A7E6FDDE44520, + False, + ), + ( + generator_192, + int( + "0x08a2024b61b79d260e3bb43ef15659aec89e5b560199bc82cf7c65c77d3919" + "2e03b9a895d766655105edd9188242b91fbde4167f7862d4ddd61e5d4ab55196" + "683d4f13ceb90d87aea6e07eb50a874e33086c4a7cb0273a8e1c4408f4b846bc" + "eae1ebaac1b2b2ea851a9b09de322efe34cebe601653efd6ddc876ce8c2f2072" + "fb", + 16, + ), + 0x674F941DC1A1F8B763C9334D726172D527B90CA324DB8828, + 0x65ADFA32E8B236CB33A3E84CF59BFB9417AE7E8EDE57A7FF, + 0x9508B9FDD7DAF0D8126F9E2BC5A35E4C6D800B5B804D7796, + 0x36F2BF6B21B987C77B53BB801B3435A577E3D493744BFAB0, + False, + ), + ( + generator_192, + int( + "0x1843aba74b0789d4ac6b0b8923848023a644a7b70afa23b1191829bbe4397c" + "e15b629bf21a8838298653ed0c19222b95fa4f7390d1b4c844d96e645537e0aa" + "e98afb5c0ac3bd0e4c37f8daaff25556c64e98c319c52687c904c4de7240a1cc" + "55cd9756b7edaef184e6e23b385726e9ffcba8001b8f574987c1a3fedaaa83ca" + "6d", + 16, + ), + 0x10ECCA1AAD7220B56A62008B35170BFD5E35885C4014A19F, + 0x04EB61984C6C12ADE3BC47F3C629ECE7AA0A033B9948D686, + 0x82BFA4E82C0DFE9274169B86694E76CE993FD83B5C60F325, + 0xA97685676C59A65DBDE002FE9D613431FB183E8006D05633, + False, + ), + ( + generator_192, + int( + 
"0x5a478f4084ddd1a7fea038aa9732a822106385797d02311aeef4d0264f824f" + "698df7a48cfb6b578cf3da416bc0799425bb491be5b5ecc37995b85b03420a98" + "f2c4dc5c31a69a379e9e322fbe706bbcaf0f77175e05cbb4fa162e0da82010a2" + "78461e3e974d137bc746d1880d6eb02aa95216014b37480d84b87f717bb13f76" + "e1", + 16, + ), + 0x6636653CB5B894CA65C448277B29DA3AD101C4C2300F7C04, + 0xFDF1CBB3FC3FD6A4F890B59E554544175FA77DBDBEB656C1, + 0xEAC2DDECDDFB79931A9C3D49C08DE0645C783A24CB365E1C, + 0x3549FEE3CFA7E5F93BC47D92D8BA100E881A2A93C22F8D50, + False, + ), + ( + generator_192, + int( + "0xc598774259a058fa65212ac57eaa4f52240e629ef4c310722088292d1d4af6" + "c39b49ce06ba77e4247b20637174d0bd67c9723feb57b5ead232b47ea452d5d7" + "a089f17c00b8b6767e434a5e16c231ba0efa718a340bf41d67ea2d295812ff1b" + "9277daacb8bc27b50ea5e6443bcf95ef4e9f5468fe78485236313d53d1c68f6b" + "a2", + 16, + ), + 0xA82BD718D01D354001148CD5F69B9EBF38FF6F21898F8AAA, + 0xE67CEEDE07FC2EBFAFD62462A51E4B6C6B3D5B537B7CAF3E, + 0x4D292486C620C3DE20856E57D3BB72FCDE4A73AD26376955, + 0xA85289591A6081D5728825520E62FF1C64F94235C04C7F95, + False, + ), + ( + generator_192, + int( + "0xca98ed9db081a07b7557f24ced6c7b9891269a95d2026747add9e9eb80638a" + "961cf9c71a1b9f2c29744180bd4c3d3db60f2243c5c0b7cc8a8d40a3f9a7fc91" + "0250f2187136ee6413ffc67f1a25e1c4c204fa9635312252ac0e0481d89b6d53" + "808f0c496ba87631803f6c572c1f61fa049737fdacce4adff757afed4f05beb6" + "58", + 16, + ), + 0x7D3B016B57758B160C4FCA73D48DF07AE3B6B30225126C2F, + 0x4AF3790D9775742BDE46F8DA876711BE1B65244B2B39E7EC, + 0x95F778F5F656511A5AB49A5D69DDD0929563C29CBC3A9E62, + 0x75C87FC358C251B4C83D2DD979FAAD496B539F9F2EE7A289, + False, + ), + ( + generator_192, + int( + "0x31dd9a54c8338bea06b87eca813d555ad1850fac9742ef0bbe40dad400e102" + "88acc9c11ea7dac79eb16378ebea9490e09536099f1b993e2653cd50240014c9" + "0a9c987f64545abc6a536b9bd2435eb5e911fdfde2f13be96ea36ad38df4ae9e" + "a387b29cced599af777338af2794820c9cce43b51d2112380a35802ab7e396c9" + "7a", + 16, + ), + 0x9362F28C4EF96453D8A2F849F21E881CD7566887DA8BEB4A, + 0xE64D26D8D74C48A024AE85D982EE74CD16046F4EE5333905, + 0xF3923476A296C88287E8DE914B0B324AD5A963319A4FE73B, + 0xF0BAEED7624ED00D15244D8BA2AEDE085517DBDEC8AC65F5, + True, + ), + ( + generator_192, + int( + "0xb2b94e4432267c92f9fdb9dc6040c95ffa477652761290d3c7de312283f645" + "0d89cc4aabe748554dfb6056b2d8e99c7aeaad9cdddebdee9dbc099839562d90" + "64e68e7bb5f3a6bba0749ca9a538181fc785553a4000785d73cc207922f63e8c" + "e1112768cb1de7b673aed83a1e4a74592f1268d8e2a4e9e63d414b5d442bd045" + "6d", + 16, + ), + 0xCC6FC032A846AAAC25533EB033522824F94E670FA997ECEF, + 0xE25463EF77A029ECCDA8B294FD63DD694E38D223D30862F1, + 0x066B1D07F3A40E679B620EDA7F550842A35C18B80C5EBE06, + 0xA0B0FB201E8F2DF65E2C4508EF303BDC90D934016F16B2DC, + False, + ), + ( + generator_192, + int( + "0x4366fcadf10d30d086911de30143da6f579527036937007b337f7282460eae" + "5678b15cccda853193ea5fc4bc0a6b9d7a31128f27e1214988592827520b214e" + "ed5052f7775b750b0c6b15f145453ba3fee24a085d65287e10509eb5d5f602c4" + "40341376b95c24e5c4727d4b859bfe1483d20538acdd92c7997fa9c614f0f839" + "d7", + 16, + ), + 0x955C908FE900A996F7E2089BEE2F6376830F76A19135E753, + 0xBA0C42A91D3847DE4A592A46DC3FDAF45A7CC709B90DE520, + 0x1F58AD77FC04C782815A1405B0925E72095D906CBF52A668, + 0xF2E93758B3AF75EDF784F05A6761C9B9A6043C66B845B599, + False, + ), + ( + generator_192, + int( + "0x543f8af57d750e33aa8565e0cae92bfa7a1ff78833093421c2942cadf99866" + "70a5ff3244c02a8225e790fbf30ea84c74720abf99cfd10d02d34377c3d3b412" + "69bea763384f372bb786b5846f58932defa68023136cd571863b304886e95e52" + 
"e7877f445b9364b3f06f3c28da12707673fecb4b8071de06b6e0a3c87da160ce" + "f3", + 16, + ), + 0x31F7FA05576D78A949B24812D4383107A9A45BB5FCCDD835, + 0x8DC0EB65994A90F02B5E19BD18B32D61150746C09107E76B, + 0xBE26D59E4E883DDE7C286614A767B31E49AD88789D3A78FF, + 0x8762CA831C1CE42DF77893C9B03119428E7A9B819B619068, + False, + ), + ( + generator_192, + int( + "0xd2e8454143ce281e609a9d748014dcebb9d0bc53adb02443a6aac2ffe6cb009f" + "387c346ecb051791404f79e902ee333ad65e5c8cb38dc0d1d39a8dc90add502357" + "2720e5b94b190d43dd0d7873397504c0c7aef2727e628eb6a74411f2e400c65670" + "716cb4a815dc91cbbfeb7cfe8c929e93184c938af2c078584da045e8f8d1", + 16, + ), + 0x66AA8EDBBDB5CF8E28CEB51B5BDA891CAE2DF84819FE25C0, + 0x0C6BC2F69030A7CE58D4A00E3B3349844784A13B8936F8DA, + 0xA4661E69B1734F4A71B788410A464B71E7FFE42334484F23, + 0x738421CF5E049159D69C57A915143E226CAC8355E149AFE9, + False, + ), + ( + generator_192, + int( + "0x6660717144040f3e2f95a4e25b08a7079c702a8b29babad5a19a87654bc5c5af" + "a261512a11b998a4fb36b5d8fe8bd942792ff0324b108120de86d63f65855e5461" + "184fc96a0a8ffd2ce6d5dfb0230cbbdd98f8543e361b3205f5da3d500fdc8bac6d" + "b377d75ebef3cb8f4d1ff738071ad0938917889250b41dd1d98896ca06fb", + 16, + ), + 0xBCFACF45139B6F5F690A4C35A5FFFA498794136A2353FC77, + 0x6F4A6C906316A6AFC6D98FE1F0399D056F128FE0270B0F22, + 0x9DB679A3DAFE48F7CCAD122933ACFE9DA0970B71C94C21C1, + 0x984C2DB99827576C0A41A5DA41E07D8CC768BC82F18C9DA9, + False, + ), +] + + +@pytest.mark.parametrize("gen,msg,qx,qy,r,s,expected", CURVE_192_KATS) +def test_signature_validity(gen, msg, qx, qy, r, s, expected): + """ + `msg` = message, `qx` and `qy` represent the base point on + elliptic curve of `gen`, `r` and `s` are the signature, and + `expected` is True iff the signature is expected to be valid.""" + pubk = Public_key(gen, ellipticcurve.Point(gen.curve(), qx, qy)) + assert expected == pubk.verifies(digest_integer(msg), Signature(r, s)) + + +@pytest.mark.parametrize( + "gen,msg,qx,qy,r,s,expected", [x for x in CURVE_192_KATS if x[6]] +) +def test_pk_recovery(gen, msg, r, s, qx, qy, expected): + del expected + sign = Signature(r, s) + pks = sign.recover_public_keys(digest_integer(msg), gen) + + assert pks + + # Test if the signature is valid for all found public keys + for pk in pks: + q = pk.point + test_signature_validity(gen, msg, q.x(), q.y(), r, s, True) + + # Test if the original public key is in the set of found keys + original_q = ellipticcurve.Point(gen.curve(), qx, qy) + points = [pk.point for pk in pks] + assert original_q in points + + +@st.composite +def st_random_gen_key_msg_nonce(draw): + """Hypothesis strategy for test_sig_verify().""" + name_gen = { + "generator_192": generator_192, + "generator_224": generator_224, + "generator_256": generator_256, + "generator_secp256k1": generator_secp256k1, + "generator_384": generator_384, + "generator_521": generator_521, + } + name = draw(st.sampled_from(sorted(name_gen.keys()))) + note("Generator used: {0}".format(name)) + generator = name_gen[name] + order = int(generator.order()) + + key = draw(st.integers(min_value=1, max_value=order)) + msg = draw(st.integers(min_value=1, max_value=order)) + nonce = draw( + st.integers(min_value=1, max_value=order + 1) + | st.integers(min_value=order >> 1, max_value=order) + ) + return generator, key, msg, nonce + + +SIG_VER_SETTINGS = dict(HYP_SETTINGS) +SIG_VER_SETTINGS["max_examples"] = 10 + + +@settings(**SIG_VER_SETTINGS) +@example((generator_224, 4, 1, 1)) +@given(st_random_gen_key_msg_nonce()) +def test_sig_verify(args): + """ + Check if signing and verification 
works for arbitrary messages and + that signatures for other messages are rejected. + """ + generator, sec_mult, msg, nonce = args + + pubkey = Public_key(generator, generator * sec_mult) + privkey = Private_key(pubkey, sec_mult) + + signature = privkey.sign(msg, nonce) + + assert pubkey.verifies(msg, signature) + + assert not pubkey.verifies(msg - 1, signature) + + +def test_int_to_string_with_zero(): + assert int_to_string(0) == b"\x00" diff --git a/env/lib/python3.8/site-packages/ecdsa/test_eddsa.py b/env/lib/python3.8/site-packages/ecdsa/test_eddsa.py new file mode 100644 index 00000000..7a09ad7f --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/test_eddsa.py @@ -0,0 +1,1079 @@ +import pickle +import hashlib +import pytest + +try: + import unittest2 as unittest +except ImportError: + import unittest +from hypothesis import given, settings, example +import hypothesis.strategies as st +from .ellipticcurve import PointEdwards, INFINITY, CurveEdTw +from .eddsa import ( + generator_ed25519, + curve_ed25519, + generator_ed448, + curve_ed448, + PrivateKey, + PublicKey, +) +from .ecdsa import generator_256, curve_256 +from .errors import MalformedPointError +from ._compat import a2b_hex, compat26_str + + +class TestA2B_Hex(unittest.TestCase): + def test_invalid_input(self): + with self.assertRaises(ValueError): + a2b_hex("abcdefghi") + + +def test_ed25519_curve_compare(): + assert curve_ed25519 != curve_256 + + +def test_ed25519_and_ed448_compare(): + assert curve_ed448 != curve_ed25519 + + +def test_ed25519_and_custom_curve_compare(): + a = CurveEdTw(curve_ed25519.p(), -curve_ed25519.a(), 1) + + assert curve_ed25519 != a + + +def test_ed25519_and_almost_exact_curve_compare(): + a = CurveEdTw(curve_ed25519.p(), curve_ed25519.a(), 1) + + assert curve_ed25519 != a + + +def test_ed25519_and_same_curve_params(): + a = CurveEdTw(curve_ed25519.p(), curve_ed25519.a(), curve_ed25519.d()) + + assert curve_ed25519 == a + assert not (curve_ed25519 != a) + + +def test_ed25519_contains_point(): + g = generator_ed25519 + assert curve_ed25519.contains_point(g.x(), g.y()) + + +def test_ed25519_contains_point_bad(): + assert not curve_ed25519.contains_point(1, 1) + + +def test_ed25519_double(): + a = generator_ed25519 + + z = a.double() + + assert isinstance(z, PointEdwards) + + x2 = int( + "24727413235106541002554574571675588834622768167397638456726423" + "682521233608206" + ) + y2 = int( + "15549675580280190176352668710449542251549572066445060580507079" + "593062643049417" + ) + + b = PointEdwards(curve_ed25519, x2, y2, 1, x2 * y2) + + assert z == b + assert a != b + + +def test_ed25519_add_as_double(): + a = generator_ed25519 + + z = a + a + + assert isinstance(z, PointEdwards) + + b = generator_ed25519.double() + + assert z == b + + +def test_ed25519_double_infinity(): + a = PointEdwards(curve_ed25519, 0, 1, 1, 0) + + z = a.double() + + assert z is INFINITY + + +def test_ed25519_double_badly_encoded_infinity(): + # invalid point, mostly to make instrumental happy + a = PointEdwards(curve_ed25519, 1, 1, 1, 0) + + z = a.double() + + assert z is INFINITY + + +def test_ed25519_eq_with_different_z(): + x = generator_ed25519.x() + y = generator_ed25519.y() + p = curve_ed25519.p() + + a = PointEdwards(curve_ed25519, x * 2 % p, y * 2 % p, 2, x * y * 2 % p) + b = PointEdwards(curve_ed25519, x * 3 % p, y * 3 % p, 3, x * y * 3 % p) + + assert a == b + + assert not (a != b) + + +def test_ed25519_eq_against_infinity(): + assert generator_ed25519 != INFINITY + + +def 
test_ed25519_eq_encoded_infinity_against_infinity(): + a = PointEdwards(curve_ed25519, 0, 1, 1, 0) + assert a == INFINITY + + +def test_ed25519_eq_bad_encode_of_infinity_against_infinity(): + # technically incorrect encoding of the point at infinity, but we check + # both X and T, so verify that just T==0 works + a = PointEdwards(curve_ed25519, 1, 1, 1, 0) + assert a == INFINITY + + +def test_ed25519_eq_against_non_Edwards_point(): + assert generator_ed25519 != generator_256 + + +def test_ed25519_eq_against_negated_point(): + g = generator_ed25519 + neg = PointEdwards(curve_ed25519, -g.x(), g.y(), 1, -g.x() * g.y()) + assert g != neg + + +def test_ed25519_eq_x_different_y(): + # not points on the curve, but __eq__ doesn't care + a = PointEdwards(curve_ed25519, 1, 1, 1, 1) + b = PointEdwards(curve_ed25519, 1, 2, 1, 2) + + assert a != b + + +def test_ed25519_test_normalisation_and_scaling(): + x = generator_ed25519.x() + y = generator_ed25519.y() + p = curve_ed25519.p() + + a = PointEdwards(curve_ed25519, x * 11 % p, y * 11 % p, 11, x * y * 11 % p) + + assert a.x() == x + assert a.y() == y + + a.scale() + + assert a.x() == x + assert a.y() == y + + a.scale() # second execution should be a noop + + assert a.x() == x + assert a.y() == y + + +def test_ed25519_add_three_times(): + a = generator_ed25519 + + z = a + a + a + + x3 = int( + "468967334644549386571235445953867877890461982801326656862413" + "21779790909858396" + ) + y3 = int( + "832484377853344397649037712036920113830141722629755531674120" + "2210403726505172" + ) + + b = PointEdwards(curve_ed25519, x3, y3, 1, x3 * y3) + + assert z == b + + +def test_ed25519_add_to_infinity(): + # generator * (order-1) + x1 = int( + "427838232691226969392843410947554224151809796397784248136826" + "78720006717057747" + ) + y1 = int( + "463168356949264781694283940034751631413079938662562256157830" + "33603165251855960" + ) + inf_m_1 = PointEdwards(curve_ed25519, x1, y1, 1, x1 * y1) + + inf = inf_m_1 + generator_ed25519 + + assert inf is INFINITY + + +def test_ed25519_add_and_mul_equivalence(): + g = generator_ed25519 + + assert g + g == g * 2 + assert g + g + g == g * 3 + + +def test_ed25519_add_literal_infinity(): + g = generator_ed25519 + z = g + INFINITY + + assert z == g + + +def test_ed25519_add_infinity(): + inf = PointEdwards(curve_ed25519, 0, 1, 1, 0) + g = generator_ed25519 + z = g + inf + + assert z == g + + z = inf + g + + assert z == g + + +class TestEd25519(unittest.TestCase): + def test_add_wrong_curves(self): + with self.assertRaises(ValueError) as e: + generator_ed25519 + generator_ed448 + + self.assertIn("different curve", str(e.exception)) + + def test_add_wrong_point_type(self): + with self.assertRaises(ValueError) as e: + generator_ed25519 + generator_256 + + self.assertIn("different curve", str(e.exception)) + + +def test_ed25519_mul_to_order_min_1(): + x1 = int( + "427838232691226969392843410947554224151809796397784248136826" + "78720006717057747" + ) + y1 = int( + "463168356949264781694283940034751631413079938662562256157830" + "33603165251855960" + ) + inf_m_1 = PointEdwards(curve_ed25519, x1, y1, 1, x1 * y1) + + assert generator_ed25519 * (generator_ed25519.order() - 1) == inf_m_1 + + +def test_ed25519_mul_to_infinity(): + assert generator_ed25519 * generator_ed25519.order() == INFINITY + + +def test_ed25519_mul_to_infinity_plus_1(): + g = generator_ed25519 + assert g * (g.order() + 1) == g + + +def test_ed25519_mul_and_add(): + g = generator_ed25519 + a = g * 128 + b = g * 64 + g * 64 + + assert a == b + + +def 
test_ed25519_mul_and_add_2(): + g = generator_ed25519 + + a = g * 123 + b = g * 120 + g * 3 + + assert a == b + + +def test_ed25519_mul_infinity(): + inf = PointEdwards(curve_ed25519, 0, 1, 1, 0) + + z = inf * 11 + + assert z == INFINITY + + +def test_ed25519_mul_by_zero(): + z = generator_ed25519 * 0 + + assert z == INFINITY + + +def test_ed25519_mul_by_one(): + z = generator_ed25519 * 1 + + assert z == generator_ed25519 + + +def test_ed25519_mul_custom_point(): + # verify that multiplication without order set works + + g = generator_ed25519 + + a = PointEdwards(curve_ed25519, g.x(), g.y(), 1, g.x() * g.y()) + + z = a * 11 + + assert z == g * 11 + + +def test_ed25519_pickle(): + g = generator_ed25519 + assert pickle.loads(pickle.dumps(g)) == g + + +def test_ed448_eq_against_different_curve(): + assert generator_ed25519 != generator_ed448 + + +def test_ed448_double(): + g = generator_ed448 + z = g.double() + + assert isinstance(z, PointEdwards) + + x2 = int( + "4845591495304045936995492052586696895690942404582120401876" + "6013278705691214670908136440114445572635086627683154494739" + "7859048262938744149" + ) + y2 = int( + "4940887598674337276743026725267350893505445523037277237461" + "2648447308771911703729389009346215770388834286503647778745" + "3078312060500281069" + ) + + b = PointEdwards(curve_ed448, x2, y2, 1, x2 * y2) + + assert z == b + assert g != b + + +def test_ed448_add_as_double(): + g = generator_ed448 + z = g + g + + b = g.double() + + assert z == b + + +def test_ed448_mul_as_double(): + g = generator_ed448 + z = g * 2 + b = g.double() + + assert z == b + + +def test_ed448_add_to_infinity(): + # generator * (order - 1) + x1 = int( + "5022586839996825903617194737881084981068517190547539260353" + "6473749366191269932473977736719082931859264751085238669719" + "1187378895383117729" + ) + y1 = int( + "2988192100784814926760179304439306734375440401540802420959" + "2824137233150618983587600353687865541878473398230323350346" + "2500531545062832660" + ) + inf_m_1 = PointEdwards(curve_ed448, x1, y1, 1, x1 * y1) + + inf = inf_m_1 + generator_ed448 + + assert inf is INFINITY + + +def test_ed448_mul_to_infinity(): + g = generator_ed448 + inf = g * g.order() + + assert inf is INFINITY + + +def test_ed448_mul_to_infinity_plus_1(): + g = generator_ed448 + + z = g * (g.order() + 1) + + assert z == g + + +def test_ed448_add_and_mul_equivalence(): + g = generator_ed448 + + assert g + g == g * 2 + assert g + g + g == g * 3 + + +def test_ed25519_encode(): + g = generator_ed25519 + g_bytes = g.to_bytes() + assert len(g_bytes) == 32 + exp_bytes = ( + b"\x58\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66" + b"\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66" + ) + assert g_bytes == exp_bytes + + +def test_ed25519_decode(): + exp_bytes = ( + b"\x58\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66" + b"\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66" + ) + a = PointEdwards.from_bytes(curve_ed25519, exp_bytes) + + assert a == generator_ed25519 + + +class TestEdwardsMalformed(unittest.TestCase): + def test_invalid_point(self): + exp_bytes = ( + b"\x78\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66" + b"\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66" + ) + with self.assertRaises(MalformedPointError): + PointEdwards.from_bytes(curve_ed25519, exp_bytes) + + def test_invalid_length(self): + exp_bytes = ( + b"\x58\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66" + 
b"\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66\x66" + b"\x66" + ) + with self.assertRaises(MalformedPointError) as e: + PointEdwards.from_bytes(curve_ed25519, exp_bytes) + + self.assertIn("length", str(e.exception)) + + def test_ed448_invalid(self): + exp_bytes = b"\xff" * 57 + with self.assertRaises(MalformedPointError): + PointEdwards.from_bytes(curve_ed448, exp_bytes) + + +def test_ed448_encode(): + g = generator_ed448 + g_bytes = g.to_bytes() + assert len(g_bytes) == 57 + exp_bytes = ( + b"\x14\xfa\x30\xf2\x5b\x79\x08\x98\xad\xc8\xd7\x4e\x2c\x13\xbd" + b"\xfd\xc4\x39\x7c\xe6\x1c\xff\xd3\x3a\xd7\xc2\xa0\x05\x1e\x9c" + b"\x78\x87\x40\x98\xa3\x6c\x73\x73\xea\x4b\x62\xc7\xc9\x56\x37" + b"\x20\x76\x88\x24\xbc\xb6\x6e\x71\x46\x3f\x69\x00" + ) + assert g_bytes == exp_bytes + + +def test_ed448_decode(): + exp_bytes = ( + b"\x14\xfa\x30\xf2\x5b\x79\x08\x98\xad\xc8\xd7\x4e\x2c\x13\xbd" + b"\xfd\xc4\x39\x7c\xe6\x1c\xff\xd3\x3a\xd7\xc2\xa0\x05\x1e\x9c" + b"\x78\x87\x40\x98\xa3\x6c\x73\x73\xea\x4b\x62\xc7\xc9\x56\x37" + b"\x20\x76\x88\x24\xbc\xb6\x6e\x71\x46\x3f\x69\x00" + ) + + a = PointEdwards.from_bytes(curve_ed448, exp_bytes) + + assert a == generator_ed448 + + +class TestEdDSAEquality(unittest.TestCase): + def test_equal_public_points(self): + key1 = PublicKey(generator_ed25519, b"\x01" * 32) + key2 = PublicKey(generator_ed25519, b"\x01" * 32) + + self.assertEqual(key1, key2) + self.assertFalse(key1 != key2) + + def test_unequal_public_points(self): + key1 = PublicKey(generator_ed25519, b"\x01" * 32) + key2 = PublicKey(generator_ed25519, b"\x03" * 32) + + self.assertNotEqual(key1, key2) + + def test_unequal_to_string(self): + key1 = PublicKey(generator_ed25519, b"\x01" * 32) + key2 = b"\x01" * 32 + + self.assertNotEqual(key1, key2) + + def test_unequal_publickey_curves(self): + key1 = PublicKey(generator_ed25519, b"\x01" * 32) + key2 = PublicKey(generator_ed448, b"\x03" * 56 + b"\x00") + + self.assertNotEqual(key1, key2) + self.assertTrue(key1 != key2) + + def test_equal_private_keys(self): + key1 = PrivateKey(generator_ed25519, b"\x01" * 32) + key2 = PrivateKey(generator_ed25519, b"\x01" * 32) + + self.assertEqual(key1, key2) + self.assertFalse(key1 != key2) + + def test_unequal_private_keys(self): + key1 = PrivateKey(generator_ed25519, b"\x01" * 32) + key2 = PrivateKey(generator_ed25519, b"\x02" * 32) + + self.assertNotEqual(key1, key2) + self.assertTrue(key1 != key2) + + def test_unequal_privatekey_to_string(self): + key1 = PrivateKey(generator_ed25519, b"\x01" * 32) + key2 = b"\x01" * 32 + + self.assertNotEqual(key1, key2) + + def test_unequal_privatekey_curves(self): + key1 = PrivateKey(generator_ed25519, b"\x01" * 32) + key2 = PrivateKey(generator_ed448, b"\x01" * 57) + + self.assertNotEqual(key1, key2) + + +class TestInvalidEdDSAInputs(unittest.TestCase): + def test_wrong_length_of_private_key(self): + with self.assertRaises(ValueError): + PrivateKey(generator_ed25519, b"\x01" * 31) + + def test_wrong_length_of_public_key(self): + with self.assertRaises(ValueError): + PublicKey(generator_ed25519, b"\x01" * 33) + + def test_wrong_cofactor_curve(self): + ed_c = curve_ed25519 + + def _hash(data): + return hashlib.new("sha512", compat26_str(data)).digest() + + curve = CurveEdTw(ed_c.p(), ed_c.a(), ed_c.d(), 1, _hash) + g = generator_ed25519 + fake_gen = PointEdwards(curve, g.x(), g.y(), 1, g.x() * g.y()) + + with self.assertRaises(ValueError) as e: + PrivateKey(fake_gen, g.to_bytes()) + + self.assertIn("cofactor", str(e.exception)) + + def 
test_invalid_signature_length(self): + key = PublicKey(generator_ed25519, b"\x01" * 32) + + with self.assertRaises(ValueError) as e: + key.verify(b"", b"\x01" * 65) + + self.assertIn("length", str(e.exception)) + + def test_changing_public_key(self): + key = PublicKey(generator_ed25519, b"\x01" * 32) + + g = key.point + + new_g = PointEdwards(curve_ed25519, g.x(), g.y(), 1, g.x() * g.y()) + + key.point = new_g + + self.assertEqual(g, key.point) + + def test_changing_public_key_to_different_point(self): + key = PublicKey(generator_ed25519, b"\x01" * 32) + + with self.assertRaises(ValueError) as e: + key.point = generator_ed25519 + + self.assertIn("coordinates", str(e.exception)) + + def test_invalid_s_value(self): + key = PublicKey( + generator_ed25519, + b"\xd7\x5a\x98\x01\x82\xb1\x0a\xb7\xd5\x4b\xfe\xd3\xc9\x64\x07\x3a" + b"\x0e\xe1\x72\xf3\xda\xa6\x23\x25\xaf\x02\x1a\x68\xf7\x07\x51\x1a", + ) + sig_valid = bytearray( + b"\xe5\x56\x43\x00\xc3\x60\xac\x72\x90\x86\xe2\xcc\x80\x6e\x82\x8a" + b"\x84\x87\x7f\x1e\xb8\xe5\xd9\x74\xd8\x73\xe0\x65\x22\x49\x01\x55" + b"\x5f\xb8\x82\x15\x90\xa3\x3b\xac\xc6\x1e\x39\x70\x1c\xf9\xb4\x6b" + b"\xd2\x5b\xf5\xf0\x59\x5b\xbe\x24\x65\x51\x41\x43\x8e\x7a\x10\x0b" + ) + + self.assertTrue(key.verify(b"", sig_valid)) + + sig_invalid = bytearray(sig_valid) + sig_invalid[-1] = 0xFF + + with self.assertRaises(ValueError): + key.verify(b"", sig_invalid) + + def test_invalid_r_value(self): + key = PublicKey( + generator_ed25519, + b"\xd7\x5a\x98\x01\x82\xb1\x0a\xb7\xd5\x4b\xfe\xd3\xc9\x64\x07\x3a" + b"\x0e\xe1\x72\xf3\xda\xa6\x23\x25\xaf\x02\x1a\x68\xf7\x07\x51\x1a", + ) + sig_valid = bytearray( + b"\xe5\x56\x43\x00\xc3\x60\xac\x72\x90\x86\xe2\xcc\x80\x6e\x82\x8a" + b"\x84\x87\x7f\x1e\xb8\xe5\xd9\x74\xd8\x73\xe0\x65\x22\x49\x01\x55" + b"\x5f\xb8\x82\x15\x90\xa3\x3b\xac\xc6\x1e\x39\x70\x1c\xf9\xb4\x6b" + b"\xd2\x5b\xf5\xf0\x59\x5b\xbe\x24\x65\x51\x41\x43\x8e\x7a\x10\x0b" + ) + + self.assertTrue(key.verify(b"", sig_valid)) + + sig_invalid = bytearray(sig_valid) + sig_invalid[0] = 0xE0 + + with self.assertRaises(ValueError): + key.verify(b"", sig_invalid) + + +HYP_SETTINGS = dict() +HYP_SETTINGS["max_examples"] = 10 + + +@settings(**HYP_SETTINGS) +@example(1) +@example(5) # smallest multiple that requires changing sign of x +@given(st.integers(min_value=1, max_value=int(generator_ed25519.order() - 1))) +def test_ed25519_encode_decode(multiple): + a = generator_ed25519 * multiple + + b = PointEdwards.from_bytes(curve_ed25519, a.to_bytes()) + + assert a == b + + +@settings(**HYP_SETTINGS) +@example(1) +@example(2) # smallest multiple that requires changing the sign of x +@given(st.integers(min_value=1, max_value=int(generator_ed448.order() - 1))) +def test_ed448_encode_decode(multiple): + a = generator_ed448 * multiple + + b = PointEdwards.from_bytes(curve_ed448, a.to_bytes()) + + assert a == b + + +@settings(**HYP_SETTINGS) +@example(1) +@example(2) +@given(st.integers(min_value=1, max_value=int(generator_ed25519.order()) - 1)) +def test_ed25519_mul_precompute_vs_naf(multiple): + """Compare multiplication with and without precomputation.""" + g = generator_ed25519 + new_g = PointEdwards(curve_ed25519, g.x(), g.y(), 1, g.x() * g.y()) + + assert g * multiple == multiple * new_g + + +# Test vectors from RFC 8032 +TEST_VECTORS = [ + # TEST 1 + ( + generator_ed25519, + "9d61b19deffd5a60ba844af492ec2cc4" "4449c5697b326919703bac031cae7f60", + "d75a980182b10ab7d54bfed3c964073a" "0ee172f3daa62325af021a68f707511a", + "", + "e5564300c360ac729086e2cc806e828a" + 
"84877f1eb8e5d974d873e06522490155" + "5fb8821590a33bacc61e39701cf9b46b" + "d25bf5f0595bbe24655141438e7a100b", + ), + # TEST 2 + ( + generator_ed25519, + "4ccd089b28ff96da9db6c346ec114e0f" "5b8a319f35aba624da8cf6ed4fb8a6fb", + "3d4017c3e843895a92b70aa74d1b7ebc" "9c982ccf2ec4968cc0cd55f12af4660c", + "72", + "92a009a9f0d4cab8720e820b5f642540" + "a2b27b5416503f8fb3762223ebdb69da" + "085ac1e43e15996e458f3613d0f11d8c" + "387b2eaeb4302aeeb00d291612bb0c00", + ), + # TEST 3 + ( + generator_ed25519, + "c5aa8df43f9f837bedb7442f31dcb7b1" "66d38535076f094b85ce3a2e0b4458f7", + "fc51cd8e6218a1a38da47ed00230f058" "0816ed13ba3303ac5deb911548908025", + "af82", + "6291d657deec24024827e69c3abe01a3" + "0ce548a284743a445e3680d7db5ac3ac" + "18ff9b538d16f290ae67f760984dc659" + "4a7c15e9716ed28dc027beceea1ec40a", + ), + # TEST 1024 + ( + generator_ed25519, + "f5e5767cf153319517630f226876b86c" "8160cc583bc013744c6bf255f5cc0ee5", + "278117fc144c72340f67d0f2316e8386" "ceffbf2b2428c9c51fef7c597f1d426e", + "08b8b2b733424243760fe426a4b54908" + "632110a66c2f6591eabd3345e3e4eb98" + "fa6e264bf09efe12ee50f8f54e9f77b1" + "e355f6c50544e23fb1433ddf73be84d8" + "79de7c0046dc4996d9e773f4bc9efe57" + "38829adb26c81b37c93a1b270b20329d" + "658675fc6ea534e0810a4432826bf58c" + "941efb65d57a338bbd2e26640f89ffbc" + "1a858efcb8550ee3a5e1998bd177e93a" + "7363c344fe6b199ee5d02e82d522c4fe" + "ba15452f80288a821a579116ec6dad2b" + "3b310da903401aa62100ab5d1a36553e" + "06203b33890cc9b832f79ef80560ccb9" + "a39ce767967ed628c6ad573cb116dbef" + "efd75499da96bd68a8a97b928a8bbc10" + "3b6621fcde2beca1231d206be6cd9ec7" + "aff6f6c94fcd7204ed3455c68c83f4a4" + "1da4af2b74ef5c53f1d8ac70bdcb7ed1" + "85ce81bd84359d44254d95629e9855a9" + "4a7c1958d1f8ada5d0532ed8a5aa3fb2" + "d17ba70eb6248e594e1a2297acbbb39d" + "502f1a8c6eb6f1ce22b3de1a1f40cc24" + "554119a831a9aad6079cad88425de6bd" + "e1a9187ebb6092cf67bf2b13fd65f270" + "88d78b7e883c8759d2c4f5c65adb7553" + "878ad575f9fad878e80a0c9ba63bcbcc" + "2732e69485bbc9c90bfbd62481d9089b" + "eccf80cfe2df16a2cf65bd92dd597b07" + "07e0917af48bbb75fed413d238f5555a" + "7a569d80c3414a8d0859dc65a46128ba" + "b27af87a71314f318c782b23ebfe808b" + "82b0ce26401d2e22f04d83d1255dc51a" + "ddd3b75a2b1ae0784504df543af8969b" + "e3ea7082ff7fc9888c144da2af58429e" + "c96031dbcad3dad9af0dcbaaaf268cb8" + "fcffead94f3c7ca495e056a9b47acdb7" + "51fb73e666c6c655ade8297297d07ad1" + "ba5e43f1bca32301651339e22904cc8c" + "42f58c30c04aafdb038dda0847dd988d" + "cda6f3bfd15c4b4c4525004aa06eeff8" + "ca61783aacec57fb3d1f92b0fe2fd1a8" + "5f6724517b65e614ad6808d6f6ee34df" + "f7310fdc82aebfd904b01e1dc54b2927" + "094b2db68d6f903b68401adebf5a7e08" + "d78ff4ef5d63653a65040cf9bfd4aca7" + "984a74d37145986780fc0b16ac451649" + "de6188a7dbdf191f64b5fc5e2ab47b57" + "f7f7276cd419c17a3ca8e1b939ae49e4" + "88acba6b965610b5480109c8b17b80e1" + "b7b750dfc7598d5d5011fd2dcc5600a3" + "2ef5b52a1ecc820e308aa342721aac09" + "43bf6686b64b2579376504ccc493d97e" + "6aed3fb0f9cd71a43dd497f01f17c0e2" + "cb3797aa2a2f256656168e6c496afc5f" + "b93246f6b1116398a346f1a641f3b041" + "e989f7914f90cc2c7fff357876e506b5" + "0d334ba77c225bc307ba537152f3f161" + "0e4eafe595f6d9d90d11faa933a15ef1" + "369546868a7f3a45a96768d40fd9d034" + "12c091c6315cf4fde7cb68606937380d" + "b2eaaa707b4c4185c32eddcdd306705e" + "4dc1ffc872eeee475a64dfac86aba41c" + "0618983f8741c5ef68d3a101e8a3b8ca" + "c60c905c15fc910840b94c00a0b9d0", + "0aab4c900501b3e24d7cdf4663326a3a" + "87df5e4843b2cbdb67cbf6e460fec350" + "aa5371b1508f9f4528ecea23c436d94b" + "5e8fcd4f681e30a6ac00a9704a188a03", + ), + # TEST SHA(abc) + ( + generator_ed25519, 
+ "833fe62409237b9d62ec77587520911e" "9a759cec1d19755b7da901b96dca3d42", + "ec172b93ad5e563bf4932c70e1245034" "c35467ef2efd4d64ebf819683467e2bf", + "ddaf35a193617abacc417349ae204131" + "12e6fa4e89a97ea20a9eeee64b55d39a" + "2192992a274fc1a836ba3c23a3feebbd" + "454d4423643ce80e2a9ac94fa54ca49f", + "dc2a4459e7369633a52b1bf277839a00" + "201009a3efbf3ecb69bea2186c26b589" + "09351fc9ac90b3ecfdfbc7c66431e030" + "3dca179c138ac17ad9bef1177331a704", + ), + # Blank + ( + generator_ed448, + "6c82a562cb808d10d632be89c8513ebf" + "6c929f34ddfa8c9f63c9960ef6e348a3" + "528c8a3fcc2f044e39a3fc5b94492f8f" + "032e7549a20098f95b", + "5fd7449b59b461fd2ce787ec616ad46a" + "1da1342485a70e1f8a0ea75d80e96778" + "edf124769b46c7061bd6783df1e50f6c" + "d1fa1abeafe8256180", + "", + "533a37f6bbe457251f023c0d88f976ae" + "2dfb504a843e34d2074fd823d41a591f" + "2b233f034f628281f2fd7a22ddd47d78" + "28c59bd0a21bfd3980ff0d2028d4b18a" + "9df63e006c5d1c2d345b925d8dc00b41" + "04852db99ac5c7cdda8530a113a0f4db" + "b61149f05a7363268c71d95808ff2e65" + "2600", + ), + # 1 octet + ( + generator_ed448, + "c4eab05d357007c632f3dbb48489924d" + "552b08fe0c353a0d4a1f00acda2c463a" + "fbea67c5e8d2877c5e3bc397a659949e" + "f8021e954e0a12274e", + "43ba28f430cdff456ae531545f7ecd0a" + "c834a55d9358c0372bfa0c6c6798c086" + "6aea01eb00742802b8438ea4cb82169c" + "235160627b4c3a9480", + "03", + "26b8f91727bd62897af15e41eb43c377" + "efb9c610d48f2335cb0bd0087810f435" + "2541b143c4b981b7e18f62de8ccdf633" + "fc1bf037ab7cd779805e0dbcc0aae1cb" + "cee1afb2e027df36bc04dcecbf154336" + "c19f0af7e0a6472905e799f1953d2a0f" + "f3348ab21aa4adafd1d234441cf807c0" + "3a00", + ), + # 11 octets + ( + generator_ed448, + "cd23d24f714274e744343237b93290f5" + "11f6425f98e64459ff203e8985083ffd" + "f60500553abc0e05cd02184bdb89c4cc" + "d67e187951267eb328", + "dcea9e78f35a1bf3499a831b10b86c90" + "aac01cd84b67a0109b55a36e9328b1e3" + "65fce161d71ce7131a543ea4cb5f7e9f" + "1d8b00696447001400", + "0c3e544074ec63b0265e0c", + "1f0a8888ce25e8d458a21130879b840a" + "9089d999aaba039eaf3e3afa090a09d3" + "89dba82c4ff2ae8ac5cdfb7c55e94d5d" + "961a29fe0109941e00b8dbdeea6d3b05" + "1068df7254c0cdc129cbe62db2dc957d" + "bb47b51fd3f213fb8698f064774250a5" + "028961c9bf8ffd973fe5d5c206492b14" + "0e00", + ), + # 12 octets + ( + generator_ed448, + "258cdd4ada32ed9c9ff54e63756ae582" + "fb8fab2ac721f2c8e676a72768513d93" + "9f63dddb55609133f29adf86ec9929dc" + "cb52c1c5fd2ff7e21b", + "3ba16da0c6f2cc1f30187740756f5e79" + "8d6bc5fc015d7c63cc9510ee3fd44adc" + "24d8e968b6e46e6f94d19b945361726b" + "d75e149ef09817f580", + "64a65f3cdedcdd66811e2915", + "7eeeab7c4e50fb799b418ee5e3197ff6" + "bf15d43a14c34389b59dd1a7b1b85b4a" + "e90438aca634bea45e3a2695f1270f07" + "fdcdf7c62b8efeaf00b45c2c96ba457e" + "b1a8bf075a3db28e5c24f6b923ed4ad7" + "47c3c9e03c7079efb87cb110d3a99861" + "e72003cbae6d6b8b827e4e6c143064ff" + "3c00", + ), + # 13 octets + ( + generator_ed448, + "7ef4e84544236752fbb56b8f31a23a10" + "e42814f5f55ca037cdcc11c64c9a3b29" + "49c1bb60700314611732a6c2fea98eeb" + "c0266a11a93970100e", + "b3da079b0aa493a5772029f0467baebe" + "e5a8112d9d3a22532361da294f7bb381" + "5c5dc59e176b4d9f381ca0938e13c6c0" + "7b174be65dfa578e80", + "64a65f3cdedcdd66811e2915e7", + "6a12066f55331b6c22acd5d5bfc5d712" + "28fbda80ae8dec26bdd306743c5027cb" + "4890810c162c027468675ecf645a8317" + "6c0d7323a2ccde2d80efe5a1268e8aca" + "1d6fbc194d3f77c44986eb4ab4177919" + "ad8bec33eb47bbb5fc6e28196fd1caf5" + "6b4e7e0ba5519234d047155ac727a105" + "3100", + ), + # 64 octets + ( + generator_ed448, + "d65df341ad13e008567688baedda8e9d" + 
"cdc17dc024974ea5b4227b6530e339bf" + "f21f99e68ca6968f3cca6dfe0fb9f4fa" + "b4fa135d5542ea3f01", + "df9705f58edbab802c7f8363cfe5560a" + "b1c6132c20a9f1dd163483a26f8ac53a" + "39d6808bf4a1dfbd261b099bb03b3fb5" + "0906cb28bd8a081f00", + "bd0f6a3747cd561bdddf4640a332461a" + "4a30a12a434cd0bf40d766d9c6d458e5" + "512204a30c17d1f50b5079631f64eb31" + "12182da3005835461113718d1a5ef944", + "554bc2480860b49eab8532d2a533b7d5" + "78ef473eeb58c98bb2d0e1ce488a98b1" + "8dfde9b9b90775e67f47d4a1c3482058" + "efc9f40d2ca033a0801b63d45b3b722e" + "f552bad3b4ccb667da350192b61c508c" + "f7b6b5adadc2c8d9a446ef003fb05cba" + "5f30e88e36ec2703b349ca229c267083" + "3900", + ), + # 256 octets + ( + generator_ed448, + "2ec5fe3c17045abdb136a5e6a913e32a" + "b75ae68b53d2fc149b77e504132d3756" + "9b7e766ba74a19bd6162343a21c8590a" + "a9cebca9014c636df5", + "79756f014dcfe2079f5dd9e718be4171" + "e2ef2486a08f25186f6bff43a9936b9b" + "fe12402b08ae65798a3d81e22e9ec80e" + "7690862ef3d4ed3a00", + "15777532b0bdd0d1389f636c5f6b9ba7" + "34c90af572877e2d272dd078aa1e567c" + "fa80e12928bb542330e8409f31745041" + "07ecd5efac61ae7504dabe2a602ede89" + "e5cca6257a7c77e27a702b3ae39fc769" + "fc54f2395ae6a1178cab4738e543072f" + "c1c177fe71e92e25bf03e4ecb72f47b6" + "4d0465aaea4c7fad372536c8ba516a60" + "39c3c2a39f0e4d832be432dfa9a706a6" + "e5c7e19f397964ca4258002f7c0541b5" + "90316dbc5622b6b2a6fe7a4abffd9610" + "5eca76ea7b98816af0748c10df048ce0" + "12d901015a51f189f3888145c03650aa" + "23ce894c3bd889e030d565071c59f409" + "a9981b51878fd6fc110624dcbcde0bf7" + "a69ccce38fabdf86f3bef6044819de11", + "c650ddbb0601c19ca11439e1640dd931" + "f43c518ea5bea70d3dcde5f4191fe53f" + "00cf966546b72bcc7d58be2b9badef28" + "743954e3a44a23f880e8d4f1cfce2d7a" + "61452d26da05896f0a50da66a239a8a1" + "88b6d825b3305ad77b73fbac0836ecc6" + "0987fd08527c1a8e80d5823e65cafe2a" + "3d00", + ), + # 1023 octets + ( + generator_ed448, + "872d093780f5d3730df7c212664b37b8" + "a0f24f56810daa8382cd4fa3f77634ec" + "44dc54f1c2ed9bea86fafb7632d8be19" + "9ea165f5ad55dd9ce8", + "a81b2e8a70a5ac94ffdbcc9badfc3feb" + "0801f258578bb114ad44ece1ec0e799d" + "a08effb81c5d685c0c56f64eecaef8cd" + "f11cc38737838cf400", + "6ddf802e1aae4986935f7f981ba3f035" + "1d6273c0a0c22c9c0e8339168e675412" + "a3debfaf435ed651558007db4384b650" + "fcc07e3b586a27a4f7a00ac8a6fec2cd" + "86ae4bf1570c41e6a40c931db27b2faa" + "15a8cedd52cff7362c4e6e23daec0fbc" + "3a79b6806e316efcc7b68119bf46bc76" + "a26067a53f296dafdbdc11c77f7777e9" + "72660cf4b6a9b369a6665f02e0cc9b6e" + "dfad136b4fabe723d2813db3136cfde9" + "b6d044322fee2947952e031b73ab5c60" + "3349b307bdc27bc6cb8b8bbd7bd32321" + "9b8033a581b59eadebb09b3c4f3d2277" + "d4f0343624acc817804728b25ab79717" + "2b4c5c21a22f9c7839d64300232eb66e" + "53f31c723fa37fe387c7d3e50bdf9813" + "a30e5bb12cf4cd930c40cfb4e1fc6225" + "92a49588794494d56d24ea4b40c89fc0" + "596cc9ebb961c8cb10adde976a5d602b" + "1c3f85b9b9a001ed3c6a4d3b1437f520" + "96cd1956d042a597d561a596ecd3d173" + "5a8d570ea0ec27225a2c4aaff26306d1" + "526c1af3ca6d9cf5a2c98f47e1c46db9" + "a33234cfd4d81f2c98538a09ebe76998" + "d0d8fd25997c7d255c6d66ece6fa56f1" + "1144950f027795e653008f4bd7ca2dee" + "85d8e90f3dc315130ce2a00375a318c7" + "c3d97be2c8ce5b6db41a6254ff264fa6" + "155baee3b0773c0f497c573f19bb4f42" + "40281f0b1f4f7be857a4e59d416c06b4" + "c50fa09e1810ddc6b1467baeac5a3668" + "d11b6ecaa901440016f389f80acc4db9" + "77025e7f5924388c7e340a732e554440" + "e76570f8dd71b7d640b3450d1fd5f041" + "0a18f9a3494f707c717b79b4bf75c984" + "00b096b21653b5d217cf3565c9597456" + "f70703497a078763829bc01bb1cbc8fa" + "04eadc9a6e3f6699587a9e75c94e5bab" + 
"0036e0b2e711392cff0047d0d6b05bd2" + "a588bc109718954259f1d86678a579a3" + "120f19cfb2963f177aeb70f2d4844826" + "262e51b80271272068ef5b3856fa8535" + "aa2a88b2d41f2a0e2fda7624c2850272" + "ac4a2f561f8f2f7a318bfd5caf969614" + "9e4ac824ad3460538fdc25421beec2cc" + "6818162d06bbed0c40a387192349db67" + "a118bada6cd5ab0140ee273204f628aa" + "d1c135f770279a651e24d8c14d75a605" + "9d76b96a6fd857def5e0b354b27ab937" + "a5815d16b5fae407ff18222c6d1ed263" + "be68c95f32d908bd895cd76207ae7264" + "87567f9a67dad79abec316f683b17f2d" + "02bf07e0ac8b5bc6162cf94697b3c27c" + "d1fea49b27f23ba2901871962506520c" + "392da8b6ad0d99f7013fbc06c2c17a56" + "9500c8a7696481c1cd33e9b14e40b82e" + "79a5f5db82571ba97bae3ad3e0479515" + "bb0e2b0f3bfcd1fd33034efc6245eddd" + "7ee2086ddae2600d8ca73e214e8c2b0b" + "db2b047c6a464a562ed77b73d2d841c4" + "b34973551257713b753632efba348169" + "abc90a68f42611a40126d7cb21b58695" + "568186f7e569d2ff0f9e745d0487dd2e" + "b997cafc5abf9dd102e62ff66cba87", + "e301345a41a39a4d72fff8df69c98075" + "a0cc082b802fc9b2b6bc503f926b65bd" + "df7f4c8f1cb49f6396afc8a70abe6d8a" + "ef0db478d4c6b2970076c6a0484fe76d" + "76b3a97625d79f1ce240e7c576750d29" + "5528286f719b413de9ada3e8eb78ed57" + "3603ce30d8bb761785dc30dbc320869e" + "1a00", + ), +] + + +@pytest.mark.parametrize( + "generator,private_key,public_key,message,signature", + TEST_VECTORS, +) +def test_vectors(generator, private_key, public_key, message, signature): + private_key = a2b_hex(private_key) + public_key = a2b_hex(public_key) + message = a2b_hex(message) + signature = a2b_hex(signature) + + sig_key = PrivateKey(generator, private_key) + ver_key = PublicKey(generator, public_key) + + assert sig_key.public_key().public_key() == ver_key.public_key() + + gen_sig = sig_key.sign(message) + + assert gen_sig == signature + + assert ver_key.verify(message, signature) diff --git a/env/lib/python3.8/site-packages/ecdsa/test_ellipticcurve.py b/env/lib/python3.8/site-packages/ecdsa/test_ellipticcurve.py new file mode 100644 index 00000000..85faef4d --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/test_ellipticcurve.py @@ -0,0 +1,199 @@ +import pytest + +try: + import unittest2 as unittest +except ImportError: + import unittest +from hypothesis import given, settings +import hypothesis.strategies as st + +try: + from hypothesis import HealthCheck + + HC_PRESENT = True +except ImportError: # pragma: no cover + HC_PRESENT = False +from .numbertheory import inverse_mod +from .ellipticcurve import CurveFp, INFINITY, Point + + +HYP_SETTINGS = {} +if HC_PRESENT: # pragma: no branch + HYP_SETTINGS["suppress_health_check"] = [HealthCheck.too_slow] + HYP_SETTINGS["deadline"] = 5000 + + +# NIST Curve P-192: +p = 6277101735386680763835789423207666416083908700390324961279 +r = 6277101735386680763835789423176059013767194773182842284081 +# s = 0x3045ae6fc8422f64ed579528d38120eae12196d5 +# c = 0x3099d2bbbfcb2538542dcd5fb078b6ef5f3d6fe2c745de65 +b = 0x64210519E59C80E70FA7E9AB72243049FEB8DEECC146B9B1 +Gx = 0x188DA80EB03090F67CBF20EB43A18800F4FF0AFD82FF1012 +Gy = 0x07192B95FFC8DA78631011ED6B24CDD573F977A11E794811 + +c192 = CurveFp(p, -3, b) +p192 = Point(c192, Gx, Gy, r) + +c_23 = CurveFp(23, 1, 1) +g_23 = Point(c_23, 13, 7, 7) + + +HYP_SLOW_SETTINGS = dict(HYP_SETTINGS) +HYP_SLOW_SETTINGS["max_examples"] = 10 + + +@settings(**HYP_SLOW_SETTINGS) +@given(st.integers(min_value=1, max_value=r + 1)) +def test_p192_mult_tests(multiple): + inv_m = inverse_mod(multiple, r) + + p1 = p192 * multiple + assert p1 * inv_m == p192 + + +def add_n_times(point, n): + ret = INFINITY + i = 0 
+ while i <= n: + yield ret + ret = ret + point + i += 1 + + +# From X9.62 I.1 (p. 96): +@pytest.mark.parametrize( + "p, m, check", + [(g_23, n, exp) for n, exp in enumerate(add_n_times(g_23, 8))], + ids=["g_23 test with mult {0}".format(i) for i in range(9)], +) +def test_add_and_mult_equivalence(p, m, check): + assert p * m == check + + +class TestCurve(unittest.TestCase): + @classmethod + def setUpClass(cls): + cls.c_23 = CurveFp(23, 1, 1) + + def test_equality_curves(self): + self.assertEqual(self.c_23, CurveFp(23, 1, 1)) + + def test_inequality_curves(self): + c192 = CurveFp(p, -3, b) + self.assertNotEqual(self.c_23, c192) + + def test_usability_in_a_hashed_collection_curves(self): + {self.c_23: None} + + def test_hashability_curves(self): + hash(self.c_23) + + def test_conflation_curves(self): + ne1, ne2, ne3 = CurveFp(24, 1, 1), CurveFp(23, 2, 1), CurveFp(23, 1, 2) + eq1, eq2, eq3 = CurveFp(23, 1, 1), CurveFp(23, 1, 1), self.c_23 + self.assertEqual(len(set((c_23, eq1, eq2, eq3))), 1) + self.assertEqual(len(set((c_23, ne1, ne2, ne3))), 4) + self.assertDictEqual({c_23: None}, {eq1: None}) + self.assertIn(eq2, {eq3: None}) + + +class TestPoint(unittest.TestCase): + @classmethod + def setUpClass(cls): + cls.c_23 = CurveFp(23, 1, 1) + cls.g_23 = Point(cls.c_23, 13, 7, 7) + + p = 6277101735386680763835789423207666416083908700390324961279 + r = 6277101735386680763835789423176059013767194773182842284081 + # s = 0x3045ae6fc8422f64ed579528d38120eae12196d5 + # c = 0x3099d2bbbfcb2538542dcd5fb078b6ef5f3d6fe2c745de65 + b = 0x64210519E59C80E70FA7E9AB72243049FEB8DEECC146B9B1 + Gx = 0x188DA80EB03090F67CBF20EB43A18800F4FF0AFD82FF1012 + Gy = 0x07192B95FFC8DA78631011ED6B24CDD573F977A11E794811 + + cls.c192 = CurveFp(p, -3, b) + cls.p192 = Point(cls.c192, Gx, Gy, r) + + def test_p192(self): + # Checking against some sample computations presented + # in X9.62: + d = 651056770906015076056810763456358567190100156695615665659 + Q = d * self.p192 + self.assertEqual( + Q.x(), 0x62B12D60690CDCF330BABAB6E69763B471F994DD702D16A5 + ) + + k = 6140507067065001063065065565667405560006161556565665656654 + R = k * self.p192 + self.assertEqual( + R.x(), 0x885052380FF147B734C330C43D39B2C4A89F29B0F749FEAD + ) + self.assertEqual( + R.y(), 0x9CF9FA1CBEFEFB917747A3BB29C072B9289C2547884FD835 + ) + + u1 = 2563697409189434185194736134579731015366492496392189760599 + u2 = 6266643813348617967186477710235785849136406323338782220568 + temp = u1 * self.p192 + u2 * Q + self.assertEqual( + temp.x(), 0x885052380FF147B734C330C43D39B2C4A89F29B0F749FEAD + ) + self.assertEqual( + temp.y(), 0x9CF9FA1CBEFEFB917747A3BB29C072B9289C2547884FD835 + ) + + def test_double_infinity(self): + p1 = INFINITY + p3 = p1.double() + self.assertEqual(p1, p3) + self.assertEqual(p3.x(), p1.x()) + self.assertEqual(p3.y(), p3.y()) + + def test_double(self): + x1, y1, x3, y3 = (3, 10, 7, 12) + + p1 = Point(self.c_23, x1, y1) + p3 = p1.double() + self.assertEqual(p3.x(), x3) + self.assertEqual(p3.y(), y3) + + def test_multiply(self): + x1, y1, m, x3, y3 = (3, 10, 2, 7, 12) + p1 = Point(self.c_23, x1, y1) + p3 = p1 * m + self.assertEqual(p3.x(), x3) + self.assertEqual(p3.y(), y3) + + # Trivial tests from X9.62 B.3: + def test_add(self): + """We expect that on curve c, (x1,y1) + (x2, y2 ) = (x3, y3).""" + + x1, y1, x2, y2, x3, y3 = (3, 10, 9, 7, 17, 20) + p1 = Point(self.c_23, x1, y1) + p2 = Point(self.c_23, x2, y2) + p3 = p1 + p2 + self.assertEqual(p3.x(), x3) + self.assertEqual(p3.y(), y3) + + def test_add_as_double(self): + """We expect that on curve c, (x1,y1) + 
(x2, y2 ) = (x3, y3).""" + + x1, y1, x2, y2, x3, y3 = (3, 10, 3, 10, 7, 12) + p1 = Point(self.c_23, x1, y1) + p2 = Point(self.c_23, x2, y2) + p3 = p1 + p2 + self.assertEqual(p3.x(), x3) + self.assertEqual(p3.y(), y3) + + def test_equality_points(self): + self.assertEqual(self.g_23, Point(self.c_23, 13, 7, 7)) + + def test_inequality_points(self): + c = CurveFp(100, -3, 100) + p = Point(c, 100, 100, 100) + self.assertNotEqual(self.g_23, p) + + def test_inequality_points_diff_types(self): + c = CurveFp(100, -3, 100) + self.assertNotEqual(self.g_23, c) diff --git a/env/lib/python3.8/site-packages/ecdsa/test_jacobi.py b/env/lib/python3.8/site-packages/ecdsa/test_jacobi.py new file mode 100644 index 00000000..1f528048 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/test_jacobi.py @@ -0,0 +1,657 @@ +import pickle + +try: + import unittest2 as unittest +except ImportError: + import unittest + +import os +import sys +import signal +import pytest +import threading +import platform +import hypothesis.strategies as st +from hypothesis import given, assume, settings, example + +from .ellipticcurve import CurveFp, PointJacobi, INFINITY +from .ecdsa import ( + generator_256, + curve_256, + generator_224, + generator_brainpoolp160r1, + curve_brainpoolp160r1, + generator_112r2, +) +from .numbertheory import inverse_mod +from .util import randrange + + +NO_OLD_SETTINGS = {} +if sys.version_info > (2, 7): # pragma: no branch + NO_OLD_SETTINGS["deadline"] = 5000 + + +class TestJacobi(unittest.TestCase): + def test___init__(self): + curve = object() + x = 2 + y = 3 + z = 1 + order = 4 + pj = PointJacobi(curve, x, y, z, order) + + self.assertEqual(pj.order(), order) + self.assertIs(pj.curve(), curve) + self.assertEqual(pj.x(), x) + self.assertEqual(pj.y(), y) + + def test_add_with_different_curves(self): + p_a = PointJacobi.from_affine(generator_256) + p_b = PointJacobi.from_affine(generator_224) + + with self.assertRaises(ValueError): + p_a + p_b + + def test_compare_different_curves(self): + self.assertNotEqual(generator_256, generator_224) + + def test_equality_with_non_point(self): + pj = PointJacobi.from_affine(generator_256) + + self.assertNotEqual(pj, "value") + + def test_conversion(self): + pj = PointJacobi.from_affine(generator_256) + pw = pj.to_affine() + + self.assertEqual(generator_256, pw) + + def test_single_double(self): + pj = PointJacobi.from_affine(generator_256) + pw = generator_256.double() + + pj = pj.double() + + self.assertEqual(pj.x(), pw.x()) + self.assertEqual(pj.y(), pw.y()) + + def test_double_with_zero_point(self): + pj = PointJacobi(curve_256, 0, 0, 1) + + pj = pj.double() + + self.assertIs(pj, INFINITY) + + def test_double_with_zero_equivalent_point(self): + pj = PointJacobi(curve_256, 0, curve_256.p(), 1) + + pj = pj.double() + + self.assertIs(pj, INFINITY) + + def test_double_with_zero_equivalent_point_non_1_z(self): + pj = PointJacobi(curve_256, 0, curve_256.p(), 2) + + pj = pj.double() + + self.assertIs(pj, INFINITY) + + def test_compare_with_affine_point(self): + pj = PointJacobi.from_affine(generator_256) + pa = pj.to_affine() + + self.assertEqual(pj, pa) + self.assertEqual(pa, pj) + + def test_to_affine_with_zero_point(self): + pj = PointJacobi(curve_256, 0, 0, 1) + + pa = pj.to_affine() + + self.assertIs(pa, INFINITY) + + def test_add_with_affine_point(self): + pj = PointJacobi.from_affine(generator_256) + pa = pj.to_affine() + + s = pj + pa + + self.assertEqual(s, pj.double()) + + def test_radd_with_affine_point(self): + pj = 
PointJacobi.from_affine(generator_256) + pa = pj.to_affine() + + s = pa + pj + + self.assertEqual(s, pj.double()) + + def test_add_with_infinity(self): + pj = PointJacobi.from_affine(generator_256) + + s = pj + INFINITY + + self.assertEqual(s, pj) + + def test_add_zero_point_to_affine(self): + pa = PointJacobi.from_affine(generator_256).to_affine() + pj = PointJacobi(curve_256, 0, 0, 1) + + s = pj + pa + + self.assertIs(s, pa) + + def test_multiply_by_zero(self): + pj = PointJacobi.from_affine(generator_256) + + pj = pj * 0 + + self.assertIs(pj, INFINITY) + + def test_zero_point_multiply_by_one(self): + pj = PointJacobi(curve_256, 0, 0, 1) + + pj = pj * 1 + + self.assertIs(pj, INFINITY) + + def test_multiply_by_one(self): + pj = PointJacobi.from_affine(generator_256) + pw = generator_256 * 1 + + pj = pj * 1 + + self.assertEqual(pj.x(), pw.x()) + self.assertEqual(pj.y(), pw.y()) + + def test_multiply_by_two(self): + pj = PointJacobi.from_affine(generator_256) + pw = generator_256 * 2 + + pj = pj * 2 + + self.assertEqual(pj.x(), pw.x()) + self.assertEqual(pj.y(), pw.y()) + + def test_rmul_by_two(self): + pj = PointJacobi.from_affine(generator_256) + pw = generator_256 * 2 + + pj = 2 * pj + + self.assertEqual(pj, pw) + + def test_compare_non_zero_with_infinity(self): + pj = PointJacobi.from_affine(generator_256) + + self.assertNotEqual(pj, INFINITY) + + def test_compare_zero_point_with_infinity(self): + pj = PointJacobi(curve_256, 0, 0, 1) + + self.assertEqual(pj, INFINITY) + + def test_compare_double_with_multiply(self): + pj = PointJacobi.from_affine(generator_256) + dbl = pj.double() + mlpl = pj * 2 + + self.assertEqual(dbl, mlpl) + + @settings(max_examples=10) + @given( + st.integers( + min_value=0, max_value=int(generator_brainpoolp160r1.order()) + ) + ) + def test_multiplications(self, mul): + pj = PointJacobi.from_affine(generator_brainpoolp160r1) + pw = pj.to_affine() * mul + + pj = pj * mul + + self.assertEqual((pj.x(), pj.y()), (pw.x(), pw.y())) + self.assertEqual(pj, pw) + + @settings(max_examples=10) + @given( + st.integers( + min_value=0, max_value=int(generator_brainpoolp160r1.order()) + ) + ) + @example(0) + @example(int(generator_brainpoolp160r1.order())) + def test_precompute(self, mul): + precomp = generator_brainpoolp160r1 + self.assertTrue(precomp._PointJacobi__precompute) + pj = PointJacobi.from_affine(generator_brainpoolp160r1) + + a = precomp * mul + b = pj * mul + + self.assertEqual(a, b) + + @settings(max_examples=10) + @given( + st.integers( + min_value=1, max_value=int(generator_brainpoolp160r1.order()) + ), + st.integers( + min_value=1, max_value=int(generator_brainpoolp160r1.order()) + ), + ) + @example(3, 3) + def test_add_scaled_points(self, a_mul, b_mul): + j_g = PointJacobi.from_affine(generator_brainpoolp160r1) + a = PointJacobi.from_affine(j_g * a_mul) + b = PointJacobi.from_affine(j_g * b_mul) + + c = a + b + + self.assertEqual(c, j_g * (a_mul + b_mul)) + + @settings(max_examples=10) + @given( + st.integers( + min_value=1, max_value=int(generator_brainpoolp160r1.order()) + ), + st.integers( + min_value=1, max_value=int(generator_brainpoolp160r1.order()) + ), + st.integers(min_value=1, max_value=int(curve_brainpoolp160r1.p() - 1)), + ) + def test_add_one_scaled_point(self, a_mul, b_mul, new_z): + j_g = PointJacobi.from_affine(generator_brainpoolp160r1) + a = PointJacobi.from_affine(j_g * a_mul) + b = PointJacobi.from_affine(j_g * b_mul) + + p = curve_brainpoolp160r1.p() + + assume(inverse_mod(new_z, p)) + + new_zz = new_z * new_z % p + + b = PointJacobi( + 
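+            # Jacobian coordinates: (X, Y, Z) encodes the affine point
+            # (X/Z**2, Y/Z**3), so scaling X by new_z**2 and Y by new_z**3
+            # while setting Z = new_z re-encodes the same point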
curve_brainpoolp160r1, + b.x() * new_zz % p, + b.y() * new_zz * new_z % p, + new_z, + ) + + c = a + b + + self.assertEqual(c, j_g * (a_mul + b_mul)) + + @settings(max_examples=10) + @given( + st.integers( + min_value=1, max_value=int(generator_brainpoolp160r1.order()) + ), + st.integers( + min_value=1, max_value=int(generator_brainpoolp160r1.order()) + ), + st.integers(min_value=1, max_value=int(curve_brainpoolp160r1.p() - 1)), + ) + @example(1, 1, 1) + @example(3, 3, 3) + @example(2, int(generator_brainpoolp160r1.order() - 2), 1) + @example(2, int(generator_brainpoolp160r1.order() - 2), 3) + def test_add_same_scale_points(self, a_mul, b_mul, new_z): + j_g = PointJacobi.from_affine(generator_brainpoolp160r1) + a = PointJacobi.from_affine(j_g * a_mul) + b = PointJacobi.from_affine(j_g * b_mul) + + p = curve_brainpoolp160r1.p() + + assume(inverse_mod(new_z, p)) + + new_zz = new_z * new_z % p + + a = PointJacobi( + curve_brainpoolp160r1, + a.x() * new_zz % p, + a.y() * new_zz * new_z % p, + new_z, + ) + b = PointJacobi( + curve_brainpoolp160r1, + b.x() * new_zz % p, + b.y() * new_zz * new_z % p, + new_z, + ) + + c = a + b + + self.assertEqual(c, j_g * (a_mul + b_mul)) + + def test_add_same_scale_points_static(self): + j_g = generator_brainpoolp160r1 + p = curve_brainpoolp160r1.p() + a = j_g * 11 + a.scale() + z1 = 13 + x = PointJacobi( + curve_brainpoolp160r1, + a.x() * z1**2 % p, + a.y() * z1**3 % p, + z1, + ) + y = PointJacobi( + curve_brainpoolp160r1, + a.x() * z1**2 % p, + a.y() * z1**3 % p, + z1, + ) + + c = a + a + + self.assertEqual(c, x + y) + + @settings(max_examples=14) + @given( + st.integers( + min_value=1, max_value=int(generator_brainpoolp160r1.order()) + ), + st.integers( + min_value=1, max_value=int(generator_brainpoolp160r1.order()) + ), + st.lists( + st.integers( + min_value=1, max_value=int(curve_brainpoolp160r1.p() - 1) + ), + min_size=2, + max_size=2, + unique=True, + ), + ) + @example(2, 2, [2, 1]) + @example(2, 2, [2, 3]) + @example(2, int(generator_brainpoolp160r1.order() - 2), [2, 3]) + @example(2, int(generator_brainpoolp160r1.order() - 2), [2, 1]) + def test_add_different_scale_points(self, a_mul, b_mul, new_z): + j_g = PointJacobi.from_affine(generator_brainpoolp160r1) + a = PointJacobi.from_affine(j_g * a_mul) + b = PointJacobi.from_affine(j_g * b_mul) + + p = curve_brainpoolp160r1.p() + + assume(inverse_mod(new_z[0], p)) + assume(inverse_mod(new_z[1], p)) + + new_zz0 = new_z[0] * new_z[0] % p + new_zz1 = new_z[1] * new_z[1] % p + + a = PointJacobi( + curve_brainpoolp160r1, + a.x() * new_zz0 % p, + a.y() * new_zz0 * new_z[0] % p, + new_z[0], + ) + b = PointJacobi( + curve_brainpoolp160r1, + b.x() * new_zz1 % p, + b.y() * new_zz1 * new_z[1] % p, + new_z[1], + ) + + c = a + b + + self.assertEqual(c, j_g * (a_mul + b_mul)) + + def test_add_different_scale_points_static(self): + j_g = generator_brainpoolp160r1 + p = curve_brainpoolp160r1.p() + a = j_g * 11 + a.scale() + z1 = 13 + x = PointJacobi( + curve_brainpoolp160r1, + a.x() * z1**2 % p, + a.y() * z1**3 % p, + z1, + ) + z2 = 29 + y = PointJacobi( + curve_brainpoolp160r1, + a.x() * z2**2 % p, + a.y() * z2**3 % p, + z2, + ) + + c = a + a + + self.assertEqual(c, x + y) + + def test_add_point_3_times(self): + j_g = PointJacobi.from_affine(generator_256) + + self.assertEqual(j_g * 3, j_g + j_g + j_g) + + def test_mul_without_order(self): + j_g = PointJacobi(curve_256, generator_256.x(), generator_256.y(), 1) + + self.assertEqual(j_g * generator_256.order(), INFINITY) + + def test_mul_add_inf(self): + j_g = 
PointJacobi.from_affine(generator_256) + + self.assertEqual(j_g, j_g.mul_add(1, INFINITY, 1)) + + def test_mul_add_same(self): + j_g = PointJacobi.from_affine(generator_256) + + self.assertEqual(j_g * 2, j_g.mul_add(1, j_g, 1)) + + def test_mul_add_precompute(self): + j_g = PointJacobi.from_affine(generator_brainpoolp160r1, True) + b = PointJacobi.from_affine(j_g * 255, True) + + self.assertEqual(j_g * 256, j_g + b) + self.assertEqual(j_g * (5 + 255 * 7), j_g * 5 + b * 7) + self.assertEqual(j_g * (5 + 255 * 7), j_g.mul_add(5, b, 7)) + + def test_mul_add_precompute_large(self): + j_g = PointJacobi.from_affine(generator_brainpoolp160r1, True) + b = PointJacobi.from_affine(j_g * 255, True) + + self.assertEqual(j_g * 256, j_g + b) + self.assertEqual( + j_g * (0xFF00 + 255 * 0xF0F0), j_g * 0xFF00 + b * 0xF0F0 + ) + self.assertEqual( + j_g * (0xFF00 + 255 * 0xF0F0), j_g.mul_add(0xFF00, b, 0xF0F0) + ) + + def test_mul_add_to_mul(self): + j_g = PointJacobi.from_affine(generator_256) + + a = j_g * 3 + b = j_g.mul_add(2, j_g, 1) + + self.assertEqual(a, b) + + def test_mul_add_differnt(self): + j_g = PointJacobi.from_affine(generator_256) + + w_a = j_g * 2 + + self.assertEqual(j_g.mul_add(1, w_a, 1), j_g * 3) + + def test_mul_add_slightly_different(self): + j_g = PointJacobi.from_affine(generator_256) + + w_a = j_g * 2 + w_b = j_g * 3 + + self.assertEqual(w_a.mul_add(1, w_b, 3), w_a * 1 + w_b * 3) + + def test_mul_add(self): + j_g = PointJacobi.from_affine(generator_256) + + w_a = generator_256 * 255 + w_b = generator_256 * (0xA8 * 0xF0) + j_b = j_g * 0xA8 + + ret = j_g.mul_add(255, j_b, 0xF0) + + self.assertEqual(ret.to_affine(), w_a + w_b) + + def test_mul_add_large(self): + j_g = PointJacobi.from_affine(generator_256) + b = PointJacobi.from_affine(j_g * 255) + + self.assertEqual(j_g * 256, j_g + b) + self.assertEqual( + j_g * (0xFF00 + 255 * 0xF0F0), j_g * 0xFF00 + b * 0xF0F0 + ) + self.assertEqual( + j_g * (0xFF00 + 255 * 0xF0F0), j_g.mul_add(0xFF00, b, 0xF0F0) + ) + + def test_mul_add_with_infinity_as_result(self): + j_g = PointJacobi.from_affine(generator_256) + + order = generator_256.order() + + b = PointJacobi.from_affine(generator_256 * 256) + + self.assertEqual(j_g.mul_add(order % 256, b, order // 256), INFINITY) + + def test_mul_add_without_order(self): + j_g = PointJacobi(curve_256, generator_256.x(), generator_256.y(), 1) + + order = generator_256.order() + + w_b = generator_256 * 34 + w_b.scale() + + b = PointJacobi(curve_256, w_b.x(), w_b.y(), 1) + + self.assertEqual(j_g.mul_add(order % 34, b, order // 34), INFINITY) + + def test_mul_add_with_doubled_negation_of_itself(self): + j_g = PointJacobi.from_affine(generator_256 * 17) + + dbl_neg = 2 * (-j_g) + + self.assertEqual(j_g.mul_add(4, dbl_neg, 2), INFINITY) + + def test_equality(self): + pj1 = PointJacobi(curve=CurveFp(23, 1, 1, 1), x=2, y=3, z=1, order=1) + pj2 = PointJacobi(curve=CurveFp(23, 1, 1, 1), x=2, y=3, z=1, order=1) + self.assertEqual(pj1, pj2) + + def test_equality_with_invalid_object(self): + j_g = PointJacobi.from_affine(generator_256) + + self.assertNotEqual(j_g, 12) + + def test_equality_with_wrong_curves(self): + p_a = PointJacobi.from_affine(generator_256) + p_b = PointJacobi.from_affine(generator_224) + + self.assertNotEqual(p_a, p_b) + + def test_pickle(self): + pj = PointJacobi(curve=CurveFp(23, 1, 1, 1), x=2, y=3, z=1, order=1) + self.assertEqual(pickle.loads(pickle.dumps(pj)), pj) + + @settings(**NO_OLD_SETTINGS) + @given(st.integers(min_value=1, max_value=10)) + def test_multithreading(self, thread_num): + # 
ensure that generator's precomputation table is filled + generator_112r2 * 2 + + # create a fresh point that doesn't have a filled precomputation table + gen = generator_112r2 + gen = PointJacobi(gen.curve(), gen.x(), gen.y(), 1, gen.order(), True) + + self.assertEqual(gen._PointJacobi__precompute, []) + + def runner(generator): + order = generator.order() + for _ in range(10): + generator * randrange(order) + + threads = [] + for _ in range(thread_num): + threads.append(threading.Thread(target=runner, args=(gen,))) + + for t in threads: + t.start() + + runner(gen) + + for t in threads: + t.join() + + self.assertEqual( + gen._PointJacobi__precompute, + generator_112r2._PointJacobi__precompute, + ) + + @pytest.mark.skipif( + platform.system() == "Windows", + reason="there are no signals on Windows", + ) + def test_multithreading_with_interrupts(self): + thread_num = 10 + # ensure that generator's precomputation table is filled + generator_112r2 * 2 + + # create a fresh point that doesn't have a filled precomputation table + gen = generator_112r2 + gen = PointJacobi(gen.curve(), gen.x(), gen.y(), 1, gen.order(), True) + + self.assertEqual(gen._PointJacobi__precompute, []) + + def runner(generator): + order = generator.order() + for _ in range(50): + generator * randrange(order) + + def interrupter(barrier_start, barrier_end, lock_exit): + # wait until MainThread can handle KeyboardInterrupt + barrier_start.release() + barrier_end.acquire() + os.kill(os.getpid(), signal.SIGINT) + lock_exit.release() + + threads = [] + for _ in range(thread_num): + threads.append(threading.Thread(target=runner, args=(gen,))) + + barrier_start = threading.Lock() + barrier_start.acquire() + barrier_end = threading.Lock() + barrier_end.acquire() + lock_exit = threading.Lock() + lock_exit.acquire() + + threads.append( + threading.Thread( + target=interrupter, + args=(barrier_start, barrier_end, lock_exit), + ) + ) + + for t in threads: + t.start() + + with self.assertRaises(KeyboardInterrupt): + # signal to interrupter that we can now handle the signal + barrier_start.acquire() + barrier_end.release() + runner(gen) + # use the lock to ensure we never go past the scope of + # assertRaises before the os.kill is called + lock_exit.acquire() + + for t in threads: + t.join() + + self.assertEqual( + gen._PointJacobi__precompute, + generator_112r2._PointJacobi__precompute, + ) diff --git a/env/lib/python3.8/site-packages/ecdsa/test_keys.py b/env/lib/python3.8/site-packages/ecdsa/test_keys.py new file mode 100644 index 00000000..25386b17 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/test_keys.py @@ -0,0 +1,959 @@ +try: + import unittest2 as unittest +except ImportError: + import unittest + +try: + buffer +except NameError: + buffer = memoryview + +import os +import array +import pytest +import hashlib + +from .keys import VerifyingKey, SigningKey, MalformedPointError +from .der import ( + unpem, + UnexpectedDER, + encode_sequence, + encode_oid, + encode_bitstring, +) +from .util import ( + sigencode_string, + sigencode_der, + sigencode_strings, + sigdecode_string, + sigdecode_der, + sigdecode_strings, +) +from .curves import NIST256p, Curve, BRAINPOOLP160r1, Ed25519, Ed448 +from .ellipticcurve import Point, PointJacobi, CurveFp, INFINITY +from .ecdsa import generator_brainpoolp160r1 + + +class TestVerifyingKeyFromString(unittest.TestCase): + """ + Verify that ecdsa.keys.VerifyingKey.from_string() can be used with + bytes-like objects + """ + + @classmethod + def setUpClass(cls): + cls.key_bytes = ( + 
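+            # 48 bytes of raw x || y coordinates for a NIST192p public
+            # point; tests below also pass it with an 0x04 prefix
+            # (uncompressed) or its first half with an 0x02 prefix
+            # (compressed)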
b"\x04L\xa2\x95\xdb\xc7Z\xd7\x1f\x93\nz\xcf\x97\xcf" + b"\xd7\xc2\xd9o\xfe8}X!\xae\xd4\xfah\xfa^\rpI\xba\xd1" + b"Y\xfb\x92xa\xebo+\x9cG\xfav\xca" + ) + cls.vk = VerifyingKey.from_string(cls.key_bytes) + + def test_bytes(self): + self.assertIsNotNone(self.vk) + self.assertIsInstance(self.vk, VerifyingKey) + self.assertEqual( + self.vk.pubkey.point.x(), + 105419898848891948935835657980914000059957975659675736097, + ) + self.assertEqual( + self.vk.pubkey.point.y(), + 4286866841217412202667522375431381222214611213481632495306, + ) + + def test_bytes_memoryview(self): + vk = VerifyingKey.from_string(buffer(self.key_bytes)) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_bytearray(self): + vk = VerifyingKey.from_string(bytearray(self.key_bytes)) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_bytesarray_memoryview(self): + vk = VerifyingKey.from_string(buffer(bytearray(self.key_bytes))) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_array_array_of_bytes(self): + arr = array.array("B", self.key_bytes) + vk = VerifyingKey.from_string(arr) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_array_array_of_bytes_memoryview(self): + arr = array.array("B", self.key_bytes) + vk = VerifyingKey.from_string(buffer(arr)) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_array_array_of_ints(self): + arr = array.array("I", self.key_bytes) + vk = VerifyingKey.from_string(arr) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_array_array_of_ints_memoryview(self): + arr = array.array("I", self.key_bytes) + vk = VerifyingKey.from_string(buffer(arr)) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_bytes_uncompressed(self): + vk = VerifyingKey.from_string(b"\x04" + self.key_bytes) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_bytearray_uncompressed(self): + vk = VerifyingKey.from_string(bytearray(b"\x04" + self.key_bytes)) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_bytes_compressed(self): + vk = VerifyingKey.from_string(b"\x02" + self.key_bytes[:24]) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_bytearray_compressed(self): + vk = VerifyingKey.from_string(bytearray(b"\x02" + self.key_bytes[:24])) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + +class TestVerifyingKeyFromDer(unittest.TestCase): + """ + Verify that ecdsa.keys.VerifyingKey.from_der() can be used with + bytes-like objects. 
+ """ + + @classmethod + def setUpClass(cls): + prv_key_str = ( + "-----BEGIN EC PRIVATE KEY-----\n" + "MF8CAQEEGF7IQgvW75JSqULpiQQ8op9WH6Uldw6xxaAKBggqhkjOPQMBAaE0AzIA\n" + "BLiBd9CE7xf15FY5QIAoNg+fWbSk1yZOYtoGUdzkejWkxbRc9RWTQjqLVXucIJnz\n" + "bA==\n" + "-----END EC PRIVATE KEY-----\n" + ) + key_str = ( + "-----BEGIN PUBLIC KEY-----\n" + "MEkwEwYHKoZIzj0CAQYIKoZIzj0DAQEDMgAEuIF30ITvF/XkVjlAgCg2D59ZtKTX\n" + "Jk5i2gZR3OR6NaTFtFz1FZNCOotVe5wgmfNs\n" + "-----END PUBLIC KEY-----\n" + ) + cls.key_pem = key_str + + cls.key_bytes = unpem(key_str) + assert isinstance(cls.key_bytes, bytes) + cls.vk = VerifyingKey.from_pem(key_str) + cls.sk = SigningKey.from_pem(prv_key_str) + + key_str = ( + "-----BEGIN PUBLIC KEY-----\n" + "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE4H3iRbG4TSrsSRb/gusPQB/4YcN8\n" + "Poqzgjau4kfxBPyZimeRfuY/9g/wMmPuhGl4BUve51DsnKJFRr8psk0ieA==\n" + "-----END PUBLIC KEY-----\n" + ) + cls.vk2 = VerifyingKey.from_pem(key_str) + + cls.sk2 = SigningKey.generate(vk.curve) + + def test_load_key_with_explicit_parameters(self): + pub_key_str = ( + "-----BEGIN PUBLIC KEY-----\n" + "MIIBSzCCAQMGByqGSM49AgEwgfcCAQEwLAYHKoZIzj0BAQIhAP////8AAAABAAAA\n" + "AAAAAAAAAAAA////////////////MFsEIP////8AAAABAAAAAAAAAAAAAAAA////\n" + "///////////8BCBaxjXYqjqT57PrvVV2mIa8ZR0GsMxTsPY7zjw+J9JgSwMVAMSd\n" + "NgiG5wSTamZ44ROdJreBn36QBEEEaxfR8uEsQkf4vOblY6RA8ncDfYEt6zOg9KE5\n" + "RdiYwpZP40Li/hp/m47n60p8D54WK84zV2sxXs7LtkBoN79R9QIhAP////8AAAAA\n" + "//////////+85vqtpxeehPO5ysL8YyVRAgEBA0IABIr1UkgYs5jmbFc7it1/YI2X\n" + "T//IlaEjMNZft1owjqpBYH2ErJHk4U5Pp4WvWq1xmHwIZlsH7Ig4KmefCfR6SmU=\n" + "-----END PUBLIC KEY-----" + ) + pk = VerifyingKey.from_pem(pub_key_str) + + pk_exp = VerifyingKey.from_string( + b"\x04\x8a\xf5\x52\x48\x18\xb3\x98\xe6\x6c\x57\x3b\x8a\xdd\x7f" + b"\x60\x8d\x97\x4f\xff\xc8\x95\xa1\x23\x30\xd6\x5f\xb7\x5a\x30" + b"\x8e\xaa\x41\x60\x7d\x84\xac\x91\xe4\xe1\x4e\x4f\xa7\x85\xaf" + b"\x5a\xad\x71\x98\x7c\x08\x66\x5b\x07\xec\x88\x38\x2a\x67\x9f" + b"\x09\xf4\x7a\x4a\x65", + curve=NIST256p, + ) + self.assertEqual(pk, pk_exp) + + def test_load_key_with_explicit_with_explicit_disabled(self): + pub_key_str = ( + "-----BEGIN PUBLIC KEY-----\n" + "MIIBSzCCAQMGByqGSM49AgEwgfcCAQEwLAYHKoZIzj0BAQIhAP////8AAAABAAAA\n" + "AAAAAAAAAAAA////////////////MFsEIP////8AAAABAAAAAAAAAAAAAAAA////\n" + "///////////8BCBaxjXYqjqT57PrvVV2mIa8ZR0GsMxTsPY7zjw+J9JgSwMVAMSd\n" + "NgiG5wSTamZ44ROdJreBn36QBEEEaxfR8uEsQkf4vOblY6RA8ncDfYEt6zOg9KE5\n" + "RdiYwpZP40Li/hp/m47n60p8D54WK84zV2sxXs7LtkBoN79R9QIhAP////8AAAAA\n" + "//////////+85vqtpxeehPO5ysL8YyVRAgEBA0IABIr1UkgYs5jmbFc7it1/YI2X\n" + "T//IlaEjMNZft1owjqpBYH2ErJHk4U5Pp4WvWq1xmHwIZlsH7Ig4KmefCfR6SmU=\n" + "-----END PUBLIC KEY-----" + ) + with self.assertRaises(UnexpectedDER): + VerifyingKey.from_pem( + pub_key_str, valid_curve_encodings=["named_curve"] + ) + + def test_load_key_with_disabled_format(self): + with self.assertRaises(MalformedPointError) as e: + VerifyingKey.from_der(self.key_bytes, valid_encodings=["raw"]) + + self.assertIn("enabled (raw) encodings", str(e.exception)) + + def test_custom_hashfunc(self): + vk = VerifyingKey.from_der(self.key_bytes, hashlib.sha256) + + self.assertIs(vk.default_hashfunc, hashlib.sha256) + + def test_from_pem_with_custom_hashfunc(self): + vk = VerifyingKey.from_pem(self.key_pem, hashlib.sha256) + + self.assertIs(vk.default_hashfunc, hashlib.sha256) + + def test_bytes(self): + vk = VerifyingKey.from_der(self.key_bytes) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_bytes_memoryview(self): + vk = 
VerifyingKey.from_der(buffer(self.key_bytes)) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_bytearray(self): + vk = VerifyingKey.from_der(bytearray(self.key_bytes)) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_bytesarray_memoryview(self): + vk = VerifyingKey.from_der(buffer(bytearray(self.key_bytes))) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_array_array_of_bytes(self): + arr = array.array("B", self.key_bytes) + vk = VerifyingKey.from_der(arr) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_array_array_of_bytes_memoryview(self): + arr = array.array("B", self.key_bytes) + vk = VerifyingKey.from_der(buffer(arr)) + + self.assertEqual(self.vk.to_string(), vk.to_string()) + + def test_equality_on_verifying_keys(self): + self.assertEqual(self.vk, self.sk.get_verifying_key()) + + def test_inequality_on_verifying_keys(self): + self.assertNotEqual(self.vk, self.vk2) + + def test_inequality_on_verifying_keys_not_implemented(self): + self.assertNotEqual(self.vk, None) + + def test_VerifyingKey_inequality_on_same_curve(self): + self.assertNotEqual(self.vk, self.sk2.verifying_key) + + def test_SigningKey_inequality_on_same_curve(self): + self.assertNotEqual(self.sk, self.sk2) + + def test_inequality_on_wrong_types(self): + self.assertNotEqual(self.vk, self.sk) + + def test_from_public_point_old(self): + pj = self.vk.pubkey.point + point = Point(pj.curve(), pj.x(), pj.y()) + + vk = VerifyingKey.from_public_point(point, self.vk.curve) + + self.assertEqual(vk, self.vk) + + def test_ed25519_VerifyingKey_repr__(self): + sk = SigningKey.from_string(Ed25519.generator.to_bytes(), Ed25519) + string = repr(sk.verifying_key) + + self.assertEqual( + "VerifyingKey.from_string(" + "bytearray(b'K\\x0c\\xfbZH\\x8e\\x8c\\x8c\\x07\\xee\\xda\\xfb" + "\\xe1\\x97\\xcd\\x90\\x18\\x02\\x15h]\\xfe\\xbe\\xcbB\\xba\\xe6r" + "\\x10\\xae\\xf1P'), Ed25519, None)", + string, + ) + + def test_edwards_from_public_point(self): + point = Ed25519.generator + with self.assertRaises(ValueError) as e: + VerifyingKey.from_public_point(point, Ed25519) + + self.assertIn("incompatible with Edwards", str(e.exception)) + + def test_edwards_precompute_no_side_effect(self): + sk = SigningKey.from_string(Ed25519.generator.to_bytes(), Ed25519) + vk = sk.verifying_key + vk2 = VerifyingKey.from_string(vk.to_string(), Ed25519) + vk.precompute() + + self.assertEqual(vk, vk2) + + def test_parse_malfomed_eddsa_der_pubkey(self): + der_str = encode_sequence( + encode_sequence(encode_oid(*Ed25519.oid)), + encode_bitstring(bytes(Ed25519.generator.to_bytes()), 0), + encode_bitstring(b"\x00", 0), + ) + + with self.assertRaises(UnexpectedDER) as e: + VerifyingKey.from_der(der_str) + + self.assertIn("trailing junk after public key", str(e.exception)) + + def test_edwards_from_public_key_recovery(self): + with self.assertRaises(ValueError) as e: + VerifyingKey.from_public_key_recovery(b"", b"", Ed25519) + + self.assertIn("unsupported for Edwards", str(e.exception)) + + def test_edwards_from_public_key_recovery_with_digest(self): + with self.assertRaises(ValueError) as e: + VerifyingKey.from_public_key_recovery_with_digest( + b"", b"", Ed25519 + ) + + self.assertIn("unsupported for Edwards", str(e.exception)) + + def test_load_ed25519_from_pem(self): + vk_pem = ( + "-----BEGIN PUBLIC KEY-----\n" + "MCowBQYDK2VwAyEAIwBQ0NZkIiiO41WJfm5BV42u3kQm7lYnvIXmCy8qy2U=\n" + "-----END PUBLIC KEY-----\n" + ) + + vk = VerifyingKey.from_pem(vk_pem) + + 
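+        # illustrative sanity check (editor's sketch, not upstream ecdsa
+        # code): an Ed25519 public key serializes to exactly 32 bytes,
+        # the same raw string that vk_2 is built from below
+        self.assertEqual(len(vk.to_string()), 32)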
self.assertIsInstance(vk.curve, Curve) + self.assertIs(vk.curve, Ed25519) + + vk_str = ( + b"\x23\x00\x50\xd0\xd6\x64\x22\x28\x8e\xe3\x55\x89\x7e\x6e\x41\x57" + b"\x8d\xae\xde\x44\x26\xee\x56\x27\xbc\x85\xe6\x0b\x2f\x2a\xcb\x65" + ) + + vk_2 = VerifyingKey.from_string(vk_str, Ed25519) + + self.assertEqual(vk, vk_2) + + def test_export_ed255_to_pem(self): + vk_str = ( + b"\x23\x00\x50\xd0\xd6\x64\x22\x28\x8e\xe3\x55\x89\x7e\x6e\x41\x57" + b"\x8d\xae\xde\x44\x26\xee\x56\x27\xbc\x85\xe6\x0b\x2f\x2a\xcb\x65" + ) + + vk = VerifyingKey.from_string(vk_str, Ed25519) + + vk_pem = ( + b"-----BEGIN PUBLIC KEY-----\n" + b"MCowBQYDK2VwAyEAIwBQ0NZkIiiO41WJfm5BV42u3kQm7lYnvIXmCy8qy2U=\n" + b"-----END PUBLIC KEY-----\n" + ) + + self.assertEqual(vk_pem, vk.to_pem()) + + def test_ed25519_export_import(self): + sk = SigningKey.generate(Ed25519) + vk = sk.verifying_key + + vk2 = VerifyingKey.from_pem(vk.to_pem()) + + self.assertEqual(vk, vk2) + + def test_ed25519_sig_verify(self): + vk_pem = ( + "-----BEGIN PUBLIC KEY-----\n" + "MCowBQYDK2VwAyEAIwBQ0NZkIiiO41WJfm5BV42u3kQm7lYnvIXmCy8qy2U=\n" + "-----END PUBLIC KEY-----\n" + ) + + vk = VerifyingKey.from_pem(vk_pem) + + data = b"data\n" + + # signature created by OpenSSL 3.0.0 beta1 + sig = ( + b"\x64\x47\xab\x6a\x33\xcd\x79\x45\xad\x98\x11\x6c\xb9\xf2\x20\xeb" + b"\x90\xd6\x50\xe3\xc7\x8f\x9f\x60\x10\xec\x75\xe0\x2f\x27\xd3\x96" + b"\xda\xe8\x58\x7f\xe0\xfe\x46\x5c\x81\xef\x50\xec\x29\x9f\xae\xd5" + b"\xad\x46\x3c\x91\x68\x83\x4d\xea\x8d\xa8\x19\x04\x04\x79\x03\x0b" + ) + + self.assertTrue(vk.verify(sig, data)) + + def test_ed448_from_pem(self): + pem_str = ( + "-----BEGIN PUBLIC KEY-----\n" + "MEMwBQYDK2VxAzoAeQtetSu7CMEzE+XWB10Bg47LCA0giNikOxHzdp+tZ/eK/En0\n" + "dTdYD2ll94g58MhSnBiBQB9A1MMA\n" + "-----END PUBLIC KEY-----\n" + ) + + vk = VerifyingKey.from_pem(pem_str) + + self.assertIsInstance(vk.curve, Curve) + self.assertIs(vk.curve, Ed448) + + vk_str = ( + b"\x79\x0b\x5e\xb5\x2b\xbb\x08\xc1\x33\x13\xe5\xd6\x07\x5d\x01\x83" + b"\x8e\xcb\x08\x0d\x20\x88\xd8\xa4\x3b\x11\xf3\x76\x9f\xad\x67\xf7" + b"\x8a\xfc\x49\xf4\x75\x37\x58\x0f\x69\x65\xf7\x88\x39\xf0\xc8\x52" + b"\x9c\x18\x81\x40\x1f\x40\xd4\xc3\x00" + ) + + vk2 = VerifyingKey.from_string(vk_str, Ed448) + + self.assertEqual(vk, vk2) + + def test_ed448_to_pem(self): + vk_str = ( + b"\x79\x0b\x5e\xb5\x2b\xbb\x08\xc1\x33\x13\xe5\xd6\x07\x5d\x01\x83" + b"\x8e\xcb\x08\x0d\x20\x88\xd8\xa4\x3b\x11\xf3\x76\x9f\xad\x67\xf7" + b"\x8a\xfc\x49\xf4\x75\x37\x58\x0f\x69\x65\xf7\x88\x39\xf0\xc8\x52" + b"\x9c\x18\x81\x40\x1f\x40\xd4\xc3\x00" + ) + vk = VerifyingKey.from_string(vk_str, Ed448) + + vk_pem = ( + b"-----BEGIN PUBLIC KEY-----\n" + b"MEMwBQYDK2VxAzoAeQtetSu7CMEzE+XWB10Bg47LCA0giNikOxHzdp+tZ/eK/En0\n" + b"dTdYD2ll94g58MhSnBiBQB9A1MMA\n" + b"-----END PUBLIC KEY-----\n" + ) + + self.assertEqual(vk_pem, vk.to_pem()) + + def test_ed448_export_import(self): + sk = SigningKey.generate(Ed448) + vk = sk.verifying_key + + vk2 = VerifyingKey.from_pem(vk.to_pem()) + + self.assertEqual(vk, vk2) + + def test_ed448_sig_verify(self): + pem_str = ( + "-----BEGIN PUBLIC KEY-----\n" + "MEMwBQYDK2VxAzoAeQtetSu7CMEzE+XWB10Bg47LCA0giNikOxHzdp+tZ/eK/En0\n" + "dTdYD2ll94g58MhSnBiBQB9A1MMA\n" + "-----END PUBLIC KEY-----\n" + ) + + vk = VerifyingKey.from_pem(pem_str) + + data = b"data\n" + + # signature created by OpenSSL 3.0.0 beta1 + sig = ( + b"\x68\xed\x2c\x70\x35\x22\xca\x1c\x35\x03\xf3\xaa\x51\x33\x3d\x00" + b"\xc0\xae\xb0\x54\xc5\xdc\x7f\x6f\x30\x57\xb4\x1d\xcb\xe9\xec\xfa" + 
b"\xc8\x45\x3e\x51\xc1\xcb\x60\x02\x6a\xd0\x43\x11\x0b\x5f\x9b\xfa" + b"\x32\x88\xb2\x38\x6b\xed\xac\x09\x00\x78\xb1\x7b\x5d\x7e\xf8\x16" + b"\x31\xdd\x1b\x3f\x98\xa0\xce\x19\xe7\xd8\x1c\x9f\x30\xac\x2f\xd4" + b"\x1e\x55\xbf\x21\x98\xf6\x4c\x8c\xbe\x81\xa5\x2d\x80\x4c\x62\x53" + b"\x91\xd5\xee\x03\x30\xc6\x17\x66\x4b\x9e\x0c\x8d\x40\xd0\xad\xae" + b"\x0a\x00" + ) + + self.assertTrue(vk.verify(sig, data)) + + +class TestSigningKey(unittest.TestCase): + """ + Verify that ecdsa.keys.SigningKey.from_der() can be used with + bytes-like objects. + """ + + @classmethod + def setUpClass(cls): + prv_key_str = ( + "-----BEGIN EC PRIVATE KEY-----\n" + "MF8CAQEEGF7IQgvW75JSqULpiQQ8op9WH6Uldw6xxaAKBggqhkjOPQMBAaE0AzIA\n" + "BLiBd9CE7xf15FY5QIAoNg+fWbSk1yZOYtoGUdzkejWkxbRc9RWTQjqLVXucIJnz\n" + "bA==\n" + "-----END EC PRIVATE KEY-----\n" + ) + cls.sk1 = SigningKey.from_pem(prv_key_str) + + prv_key_str = ( + "-----BEGIN PRIVATE KEY-----\n" + "MG8CAQAwEwYHKoZIzj0CAQYIKoZIzj0DAQEEVTBTAgEBBBheyEIL1u+SUqlC6YkE\n" + "PKKfVh+lJXcOscWhNAMyAAS4gXfQhO8X9eRWOUCAKDYPn1m0pNcmTmLaBlHc5Ho1\n" + "pMW0XPUVk0I6i1V7nCCZ82w=\n" + "-----END PRIVATE KEY-----\n" + ) + cls.sk1_pkcs8 = SigningKey.from_pem(prv_key_str) + + prv_key_str = ( + "-----BEGIN EC PRIVATE KEY-----\n" + "MHcCAQEEIKlL2EAm5NPPZuXwxRf4nXMk0A80y6UUbiQ17be/qFhRoAoGCCqGSM49\n" + "AwEHoUQDQgAE4H3iRbG4TSrsSRb/gusPQB/4YcN8Poqzgjau4kfxBPyZimeRfuY/\n" + "9g/wMmPuhGl4BUve51DsnKJFRr8psk0ieA==\n" + "-----END EC PRIVATE KEY-----\n" + ) + cls.sk2 = SigningKey.from_pem(prv_key_str) + + def test_decoding_explicit_curve_parameters(self): + prv_key_str = ( + "-----BEGIN PRIVATE KEY-----\n" + "MIIBeQIBADCCAQMGByqGSM49AgEwgfcCAQEwLAYHKoZIzj0BAQIhAP////8AAAAB\n" + "AAAAAAAAAAAAAAAA////////////////MFsEIP////8AAAABAAAAAAAAAAAAAAAA\n" + "///////////////8BCBaxjXYqjqT57PrvVV2mIa8ZR0GsMxTsPY7zjw+J9JgSwMV\n" + "AMSdNgiG5wSTamZ44ROdJreBn36QBEEEaxfR8uEsQkf4vOblY6RA8ncDfYEt6zOg\n" + "9KE5RdiYwpZP40Li/hp/m47n60p8D54WK84zV2sxXs7LtkBoN79R9QIhAP////8A\n" + "AAAA//////////+85vqtpxeehPO5ysL8YyVRAgEBBG0wawIBAQQgIXtREfUmR16r\n" + "ZbmvDGD2lAEFPZa2DLPyz0czSja58yChRANCAASK9VJIGLOY5mxXO4rdf2CNl0//\n" + "yJWhIzDWX7daMI6qQWB9hKyR5OFOT6eFr1qtcZh8CGZbB+yIOCpnnwn0ekpl\n" + "-----END PRIVATE KEY-----\n" + ) + + sk = SigningKey.from_pem(prv_key_str) + + sk2 = SigningKey.from_string( + b"\x21\x7b\x51\x11\xf5\x26\x47\x5e\xab\x65\xb9\xaf\x0c\x60\xf6" + b"\x94\x01\x05\x3d\x96\xb6\x0c\xb3\xf2\xcf\x47\x33\x4a\x36\xb9" + b"\xf3\x20", + curve=NIST256p, + ) + + self.assertEqual(sk, sk2) + + def test_decoding_explicit_curve_parameters_with_explicit_disabled(self): + prv_key_str = ( + "-----BEGIN PRIVATE KEY-----\n" + "MIIBeQIBADCCAQMGByqGSM49AgEwgfcCAQEwLAYHKoZIzj0BAQIhAP////8AAAAB\n" + "AAAAAAAAAAAAAAAA////////////////MFsEIP////8AAAABAAAAAAAAAAAAAAAA\n" + "///////////////8BCBaxjXYqjqT57PrvVV2mIa8ZR0GsMxTsPY7zjw+J9JgSwMV\n" + "AMSdNgiG5wSTamZ44ROdJreBn36QBEEEaxfR8uEsQkf4vOblY6RA8ncDfYEt6zOg\n" + "9KE5RdiYwpZP40Li/hp/m47n60p8D54WK84zV2sxXs7LtkBoN79R9QIhAP////8A\n" + "AAAA//////////+85vqtpxeehPO5ysL8YyVRAgEBBG0wawIBAQQgIXtREfUmR16r\n" + "ZbmvDGD2lAEFPZa2DLPyz0czSja58yChRANCAASK9VJIGLOY5mxXO4rdf2CNl0//\n" + "yJWhIzDWX7daMI6qQWB9hKyR5OFOT6eFr1qtcZh8CGZbB+yIOCpnnwn0ekpl\n" + "-----END PRIVATE KEY-----\n" + ) + + with self.assertRaises(UnexpectedDER): + SigningKey.from_pem( + prv_key_str, valid_curve_encodings=["named_curve"] + ) + + def test_equality_on_signing_keys(self): + sk = SigningKey.from_secret_exponent( + self.sk1.privkey.secret_multiplier, self.sk1.curve + ) + self.assertEqual(self.sk1, sk) + 
self.assertEqual(self.sk1_pkcs8, sk) + + def test_verify_with_empty_message(self): + sig = self.sk1.sign(b"") + + self.assertTrue(sig) + + vk = self.sk1.verifying_key + + self.assertTrue(vk.verify(sig, b"")) + + def test_verify_with_precompute(self): + sig = self.sk1.sign(b"message") + + vk = self.sk1.verifying_key + + vk.precompute() + + self.assertTrue(vk.verify(sig, b"message")) + + def test_compare_verifying_key_with_precompute(self): + vk1 = self.sk1.verifying_key + vk1.precompute() + + vk2 = self.sk1_pkcs8.verifying_key + + self.assertEqual(vk1, vk2) + + def test_verify_with_lazy_precompute(self): + sig = self.sk2.sign(b"other message") + + vk = self.sk2.verifying_key + + vk.precompute(lazy=True) + + self.assertTrue(vk.verify(sig, b"other message")) + + def test_inequality_on_signing_keys(self): + self.assertNotEqual(self.sk1, self.sk2) + + def test_inequality_on_signing_keys_not_implemented(self): + self.assertNotEqual(self.sk1, None) + + def test_ed25519_from_pem(self): + pem_str = ( + "-----BEGIN PRIVATE KEY-----\n" + "MC4CAQAwBQYDK2VwBCIEIDS6x9FO1PG8T4xIPg8Zd0z8uL6sVGZFEZrX17gHC/XU\n" + "-----END PRIVATE KEY-----\n" + ) + + sk = SigningKey.from_pem(pem_str) + + sk_str = SigningKey.from_string( + b"\x34\xBA\xC7\xD1\x4E\xD4\xF1\xBC\x4F\x8C\x48\x3E\x0F\x19\x77\x4C" + b"\xFC\xB8\xBE\xAC\x54\x66\x45\x11\x9A\xD7\xD7\xB8\x07\x0B\xF5\xD4", + Ed25519, + ) + + self.assertEqual(sk, sk_str) + + def test_ed25519_to_pem(self): + sk = SigningKey.from_string( + b"\x34\xBA\xC7\xD1\x4E\xD4\xF1\xBC\x4F\x8C\x48\x3E\x0F\x19\x77\x4C" + b"\xFC\xB8\xBE\xAC\x54\x66\x45\x11\x9A\xD7\xD7\xB8\x07\x0B\xF5\xD4", + Ed25519, + ) + + pem_str = ( + b"-----BEGIN PRIVATE KEY-----\n" + b"MC4CAQAwBQYDK2VwBCIEIDS6x9FO1PG8T4xIPg8Zd0z8uL6sVGZFEZrX17gHC/XU\n" + b"-----END PRIVATE KEY-----\n" + ) + + self.assertEqual(sk.to_pem(format="pkcs8"), pem_str) + + def test_ed25519_to_and_from_pem(self): + sk = SigningKey.generate(Ed25519) + + decoded = SigningKey.from_pem(sk.to_pem(format="pkcs8")) + + self.assertEqual(sk, decoded) + + def test_ed448_from_pem(self): + pem_str = ( + "-----BEGIN PRIVATE KEY-----\n" + "MEcCAQAwBQYDK2VxBDsEOTyFuXqFLXgJlV8uDqcOw9nG4IqzLiZ/i5NfBDoHPzmP\n" + "OP0JMYaLGlTzwovmvCDJ2zLaezu9NLz9aQ==\n" + "-----END PRIVATE KEY-----\n" + ) + sk = SigningKey.from_pem(pem_str) + + sk_str = SigningKey.from_string( + b"\x3C\x85\xB9\x7A\x85\x2D\x78\x09\x95\x5F\x2E\x0E\xA7\x0E\xC3\xD9" + b"\xC6\xE0\x8A\xB3\x2E\x26\x7F\x8B\x93\x5F\x04\x3A\x07\x3F\x39\x8F" + b"\x38\xFD\x09\x31\x86\x8B\x1A\x54\xF3\xC2\x8B\xE6\xBC\x20\xC9\xDB" + b"\x32\xDA\x7B\x3B\xBD\x34\xBC\xFD\x69", + Ed448, + ) + + self.assertEqual(sk, sk_str) + + def test_ed448_to_pem(self): + sk = SigningKey.from_string( + b"\x3C\x85\xB9\x7A\x85\x2D\x78\x09\x95\x5F\x2E\x0E\xA7\x0E\xC3\xD9" + b"\xC6\xE0\x8A\xB3\x2E\x26\x7F\x8B\x93\x5F\x04\x3A\x07\x3F\x39\x8F" + b"\x38\xFD\x09\x31\x86\x8B\x1A\x54\xF3\xC2\x8B\xE6\xBC\x20\xC9\xDB" + b"\x32\xDA\x7B\x3B\xBD\x34\xBC\xFD\x69", + Ed448, + ) + pem_str = ( + b"-----BEGIN PRIVATE KEY-----\n" + b"MEcCAQAwBQYDK2VxBDsEOTyFuXqFLXgJlV8uDqcOw9nG4IqzLiZ/i5NfBDoHPzmP\n" + b"OP0JMYaLGlTzwovmvCDJ2zLaezu9NLz9aQ==\n" + b"-----END PRIVATE KEY-----\n" + ) + + self.assertEqual(sk.to_pem(format="pkcs8"), pem_str) + + def test_ed448_encode_decode(self): + sk = SigningKey.generate(Ed448) + + decoded = SigningKey.from_pem(sk.to_pem(format="pkcs8")) + + self.assertEqual(decoded, sk) + + +class TestTrivialCurve(unittest.TestCase): + @classmethod + def setUpClass(cls): + # To test what happens with r or s in signing happens to be zero we + # need 
to find a scalar that creates one of the points on a curve that + # has x coordinate equal to zero. + # Even for secp112r2 curve that's non trivial so use this toy + # curve, for which we can iterate over all points quickly + curve = CurveFp(163, 84, 58) + gen = PointJacobi(curve, 2, 87, 1, 167, generator=True) + + cls.toy_curve = Curve("toy_p8", curve, gen, (1, 2, 0)) + + cls.sk = SigningKey.from_secret_exponent( + 140, + cls.toy_curve, + hashfunc=hashlib.sha1, + ) + + def test_generator_sanity(self): + gen = self.toy_curve.generator + + self.assertEqual(gen * gen.order(), INFINITY) + + def test_public_key_sanity(self): + self.assertEqual(self.sk.verifying_key.to_string(), b"\x98\x1e") + + def test_deterministic_sign(self): + sig = self.sk.sign_deterministic(b"message") + + self.assertEqual(sig, b"-.") + + self.assertTrue(self.sk.verifying_key.verify(sig, b"message")) + + def test_deterministic_sign_random_message(self): + msg = os.urandom(32) + sig = self.sk.sign_deterministic(msg) + self.assertEqual(len(sig), 2) + self.assertTrue(self.sk.verifying_key.verify(sig, msg)) + + def test_deterministic_sign_that_rises_R_zero_error(self): + # the raised RSZeroError is caught and handled internally by + # sign_deterministic methods + msg = b"\x00\x4f" + sig = self.sk.sign_deterministic(msg) + self.assertEqual(sig, b"\x36\x9e") + self.assertTrue(self.sk.verifying_key.verify(sig, msg)) + + def test_deterministic_sign_that_rises_S_zero_error(self): + msg = b"\x01\x6d" + sig = self.sk.sign_deterministic(msg) + self.assertEqual(sig, b"\x49\x6c") + self.assertTrue(self.sk.verifying_key.verify(sig, msg)) + + +# test VerifyingKey.verify() +prv_key_str = ( + "-----BEGIN EC PRIVATE KEY-----\n" + "MF8CAQEEGF7IQgvW75JSqULpiQQ8op9WH6Uldw6xxaAKBggqhkjOPQMBAaE0AzIA\n" + "BLiBd9CE7xf15FY5QIAoNg+fWbSk1yZOYtoGUdzkejWkxbRc9RWTQjqLVXucIJnz\n" + "bA==\n" + "-----END EC PRIVATE KEY-----\n" +) +key_bytes = unpem(prv_key_str) +assert isinstance(key_bytes, bytes) +sk = SigningKey.from_der(key_bytes) +vk = sk.verifying_key + +data = ( + b"some string for signing" + b"contents don't really matter" + b"but do include also some crazy values: " + b"\x00\x01\t\r\n\x00\x00\x00\xff\xf0" +) +assert len(data) % 4 == 0 +sha1 = hashlib.sha1() +sha1.update(data) +data_hash = sha1.digest() +assert isinstance(data_hash, bytes) +sig_raw = sk.sign(data, sigencode=sigencode_string) +assert isinstance(sig_raw, bytes) +sig_der = sk.sign(data, sigencode=sigencode_der) +assert isinstance(sig_der, bytes) +sig_strings = sk.sign(data, sigencode=sigencode_strings) +assert isinstance(sig_strings[0], bytes) + +verifiers = [] +for modifier, fun in [ + ("bytes", lambda x: x), + ("bytes memoryview", lambda x: buffer(x)), + ("bytearray", lambda x: bytearray(x)), + ("bytearray memoryview", lambda x: buffer(bytearray(x))), + ("array.array of bytes", lambda x: array.array("B", x)), + ("array.array of bytes memoryview", lambda x: buffer(array.array("B", x))), + ("array.array of ints", lambda x: array.array("I", x)), + ("array.array of ints memoryview", lambda x: buffer(array.array("I", x))), +]: + if "ints" in modifier: + conv = lambda x: x + else: + conv = fun + for sig_format, signature, decoder, mod_apply in [ + ("raw", sig_raw, sigdecode_string, lambda x: conv(x)), + ("der", sig_der, sigdecode_der, lambda x: conv(x)), + ( + "strings", + sig_strings, + sigdecode_strings, + lambda x: tuple(conv(i) for i in x), + ), + ]: + for method_name, vrf_mthd, vrf_data in [ + ("verify", vk.verify, data), + ("verify_digest", vk.verify_digest, data_hash), + ]: + 
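+            # build one pytest param per combination of bytes-like type,
+            # signature encoding (raw, DER, two-strings) and verification
+            # method, so test_VerifyingKey_verify exercises the full
+            # cross product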
verifiers.append( + pytest.param( + signature, + decoder, + mod_apply, + fun, + vrf_mthd, + vrf_data, + id="{2}-{0}-{1}".format(modifier, sig_format, method_name), + ) + ) + + +@pytest.mark.parametrize( + "signature,decoder,mod_apply,fun,vrf_mthd,vrf_data", verifiers +) +def test_VerifyingKey_verify( + signature, decoder, mod_apply, fun, vrf_mthd, vrf_data +): + sig = mod_apply(signature) + + assert vrf_mthd(sig, fun(vrf_data), sigdecode=decoder) + + +# test SigningKey.from_string() +prv_key_bytes = ( + b"^\xc8B\x0b\xd6\xef\x92R\xa9B\xe9\x89\x04<\xa2" + b"\x9fV\x1f\xa5%w\x0e\xb1\xc5" +) +assert len(prv_key_bytes) == 24 +converters = [] +for modifier, convert in [ + ("bytes", lambda x: x), + ("bytes memoryview", buffer), + ("bytearray", bytearray), + ("bytearray memoryview", lambda x: buffer(bytearray(x))), + ("array.array of bytes", lambda x: array.array("B", x)), + ("array.array of bytes memoryview", lambda x: buffer(array.array("B", x))), + ("array.array of ints", lambda x: array.array("I", x)), + ("array.array of ints memoryview", lambda x: buffer(array.array("I", x))), +]: + converters.append(pytest.param(convert, id=modifier)) + + +@pytest.mark.parametrize("convert", converters) +def test_SigningKey_from_string(convert): + key = convert(prv_key_bytes) + sk = SigningKey.from_string(key) + + assert sk.to_string() == prv_key_bytes + + +# test SigningKey.from_der() +prv_key_str = ( + "-----BEGIN EC PRIVATE KEY-----\n" + "MF8CAQEEGF7IQgvW75JSqULpiQQ8op9WH6Uldw6xxaAKBggqhkjOPQMBAaE0AzIA\n" + "BLiBd9CE7xf15FY5QIAoNg+fWbSk1yZOYtoGUdzkejWkxbRc9RWTQjqLVXucIJnz\n" + "bA==\n" + "-----END EC PRIVATE KEY-----\n" +) +key_bytes = unpem(prv_key_str) +assert isinstance(key_bytes, bytes) + +# last two converters are for array.array of ints, those require input +# that's multiple of 4, which no curve we support produces +@pytest.mark.parametrize("convert", converters[:-2]) +def test_SigningKey_from_der(convert): + key = convert(key_bytes) + sk = SigningKey.from_der(key) + + assert sk.to_string() == prv_key_bytes + + +# test SigningKey.sign_deterministic() +extra_entropy = b"\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11" + + +@pytest.mark.parametrize("convert", converters) +def test_SigningKey_sign_deterministic(convert): + sig = sk.sign_deterministic( + convert(data), extra_entropy=convert(extra_entropy) + ) + + vk.verify(sig, data) + + +# test SigningKey.sign_digest_deterministic() +@pytest.mark.parametrize("convert", converters) +def test_SigningKey_sign_digest_deterministic(convert): + sig = sk.sign_digest_deterministic( + convert(data_hash), extra_entropy=convert(extra_entropy) + ) + + vk.verify(sig, data) + + +@pytest.mark.parametrize("convert", converters) +def test_SigningKey_sign(convert): + sig = sk.sign(convert(data)) + + vk.verify(sig, data) + + +@pytest.mark.parametrize("convert", converters) +def test_SigningKey_sign_digest(convert): + sig = sk.sign_digest(convert(data_hash)) + + vk.verify(sig, data) + + +def test_SigningKey_with_unlikely_value(): + sk = SigningKey.from_secret_exponent(NIST256p.order - 1, curve=NIST256p) + vk = sk.verifying_key + sig = sk.sign(b"hello") + assert vk.verify(sig, b"hello") + + +def test_SigningKey_with_custom_curve_old_point(): + generator = generator_brainpoolp160r1 + generator = Point( + generator.curve(), + generator.x(), + generator.y(), + generator.order(), + ) + + curve = Curve( + "BRAINPOOLP160r1", + generator.curve(), + generator, + (1, 3, 36, 3, 3, 2, 8, 1, 1, 1), + ) + + sk = SigningKey.from_secret_exponent(12, curve) + + sk2 = SigningKey.from_secret_exponent(12, 
BRAINPOOLP160r1) + + assert sk.privkey == sk2.privkey + + +def test_VerifyingKey_inequality_with_different_curves(): + sk1 = SigningKey.from_secret_exponent(2, BRAINPOOLP160r1) + sk2 = SigningKey.from_secret_exponent(2, NIST256p) + + assert sk1.verifying_key != sk2.verifying_key + + +def test_VerifyingKey_inequality_with_different_secret_points(): + sk1 = SigningKey.from_secret_exponent(2, BRAINPOOLP160r1) + sk2 = SigningKey.from_secret_exponent(3, BRAINPOOLP160r1) + + assert sk1.verifying_key != sk2.verifying_key + + +def test_SigningKey_from_pem_pkcs8v2_EdDSA(): + pem = """-----BEGIN PRIVATE KEY----- + MFMCAQEwBQYDK2VwBCIEICc2F2ag1n1QP0jY+g9qWx5sDkx0s/HdNi3cSRHw+zsI + oSMDIQA+HQ2xCif8a/LMWR2m5HaCm5I2pKe/cc8OiRANMHxjKQ== + -----END PRIVATE KEY-----""" + + sk = SigningKey.from_pem(pem) + assert sk.curve == Ed25519 diff --git a/env/lib/python3.8/site-packages/ecdsa/test_malformed_sigs.py b/env/lib/python3.8/site-packages/ecdsa/test_malformed_sigs.py new file mode 100644 index 00000000..8e1b611d --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/test_malformed_sigs.py @@ -0,0 +1,370 @@ +from __future__ import with_statement, division + +import hashlib + +try: + from hashlib import algorithms_available +except ImportError: # pragma: no cover + algorithms_available = [ + "md5", + "sha1", + "sha224", + "sha256", + "sha384", + "sha512", + ] +# skip algorithms broken by change to OpenSSL 3.0 and early versions +# of hashlib that list algorithms that require the legacy provider to work +# https://bugs.python.org/issue38820 +algorithms_available = [ + i + for i in algorithms_available + if i not in ("mdc2", "md2", "md4", "whirlpool", "ripemd160") +] +from functools import partial +import pytest +import sys +import hypothesis.strategies as st +from hypothesis import note, assume, given, settings, example + +from .keys import SigningKey +from .keys import BadSignatureError +from .util import sigencode_der, sigencode_string +from .util import sigdecode_der, sigdecode_string +from .curves import curves +from .der import ( + encode_integer, + encode_bitstring, + encode_octet_string, + encode_oid, + encode_sequence, + encode_constructed, +) +from .ellipticcurve import CurveEdTw + + +example_data = b"some data to sign" +"""Since the data is hashed for processing, really any string will do.""" + + +hash_and_size = [ + (name, hashlib.new(name).digest_size) for name in algorithms_available +] +"""Pairs of hash names and their output sizes. 
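+For example, ("sha256", 32), since hashlib.new("sha256").digest_size == 32.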
+Needed for pairing with curves as we don't support hashes +bigger than order sizes of curves.""" + + +keys_and_sigs = [] +"""Name of the curve+hash combination, VerifyingKey and DER signature.""" + + +# for hypothesis strategy shrinking we want smallest curves and hashes first +for curve in sorted(curves, key=lambda x: x.baselen): + for hash_alg in [ + name + for name, size in sorted(hash_and_size, key=lambda x: x[1]) + if 0 < size <= curve.baselen + ]: + sk = SigningKey.generate( + curve, hashfunc=partial(hashlib.new, hash_alg) + ) + + keys_and_sigs.append( + ( + "{0} {1}".format(curve, hash_alg), + sk.verifying_key, + sk.sign(example_data, sigencode=sigencode_der), + ) + ) + + +# first make sure that the signatures can be verified +@pytest.mark.parametrize( + "verifying_key,signature", + [pytest.param(vk, sig, id=name) for name, vk, sig in keys_and_sigs], +) +def test_signatures(verifying_key, signature): + assert verifying_key.verify( + signature, example_data, sigdecode=sigdecode_der + ) + + +@st.composite +def st_fuzzed_sig(draw, keys_and_sigs): + """ + Hypothesis strategy that generates pairs of VerifyingKey and malformed + signatures created by fuzzing of a valid signature. + """ + name, verifying_key, old_sig = draw(st.sampled_from(keys_and_sigs)) + note("Configuration: {0}".format(name)) + + sig = bytearray(old_sig) + + # decide which bytes should be removed + to_remove = draw( + st.lists(st.integers(min_value=0, max_value=len(sig) - 1), unique=True) + ) + to_remove.sort() + for i in reversed(to_remove): + del sig[i] + note("Remove bytes: {0}".format(to_remove)) + + # decide which bytes of the original signature should be changed + if sig: # pragma: no branch + xors = draw( + st.dictionaries( + st.integers(min_value=0, max_value=len(sig) - 1), + st.integers(min_value=1, max_value=255), + ) + ) + for i, val in xors.items(): + sig[i] ^= val + note("xors: {0}".format(xors)) + + # decide where new data should be inserted + insert_pos = draw(st.integers(min_value=0, max_value=len(sig))) + # NIST521p signature is about 140 bytes long, test slightly longer + insert_data = draw(st.binary(max_size=256)) + + sig = sig[:insert_pos] + insert_data + sig[insert_pos:] + note( + "Inserted at position {0} bytes: {1!r}".format(insert_pos, insert_data) + ) + + sig = bytes(sig) + # make sure that there was performed at least one mutation on the data + assume(to_remove or xors or insert_data) + # and that the mutations didn't cancel each-other out + assume(sig != old_sig) + + return verifying_key, sig + + +params = {} +# not supported in hypothesis 2.0.0 +if sys.version_info >= (2, 7): # pragma: no branch + from hypothesis import HealthCheck + + # deadline=5s because NIST521p are slow to verify + params["deadline"] = 5000 + params["suppress_health_check"] = [ + HealthCheck.data_too_large, + HealthCheck.filter_too_much, + HealthCheck.too_slow, + ] + +slow_params = dict(params) +slow_params["max_examples"] = 10 + + +@settings(**params) +@given(st_fuzzed_sig(keys_and_sigs)) +def test_fuzzed_der_signatures(args): + verifying_key, sig = args + + with pytest.raises(BadSignatureError): + verifying_key.verify(sig, example_data, sigdecode=sigdecode_der) + + +@st.composite +def st_random_der_ecdsa_sig_value(draw): + """ + Hypothesis strategy for selecting random values and encoding them + to ECDSA-Sig-Value object:: + + ECDSA-Sig-Value ::= SEQUENCE { + r INTEGER, + s INTEGER + } + """ + name, verifying_key, _ = draw(st.sampled_from(keys_and_sigs)) + note("Configuration: {0}".format(name)) + order = 
int(verifying_key.curve.order) + + # the encode_integer doesn't support negative numbers, would be nice + # to generate them too, but we have coverage for remove_integer() + # verifying that it doesn't accept them, so meh. + # Test all numbers around the ones that can show up (around order) + # way smaller and slightly bigger + r = draw( + st.integers(min_value=0, max_value=order << 4) + | st.integers(min_value=order >> 2, max_value=order + 1) + ) + s = draw( + st.integers(min_value=0, max_value=order << 4) + | st.integers(min_value=order >> 2, max_value=order + 1) + ) + + sig = encode_sequence(encode_integer(r), encode_integer(s)) + + return verifying_key, sig + + +@settings(**slow_params) +@given(st_random_der_ecdsa_sig_value()) +def test_random_der_ecdsa_sig_value(params): + """ + Check if random values encoded in ECDSA-Sig-Value structure are rejected + as signature. + """ + verifying_key, sig = params + + with pytest.raises(BadSignatureError): + verifying_key.verify(sig, example_data, sigdecode=sigdecode_der) + + +def st_der_integer(*args, **kwargs): + """ + Hypothesis strategy that returns a random positive integer as DER + INTEGER. + Parameters are passed to hypothesis.strategy.integer. + """ + if "min_value" not in kwargs: # pragma: no branch + kwargs["min_value"] = 0 + return st.builds(encode_integer, st.integers(*args, **kwargs)) + + +@st.composite +def st_der_bit_string(draw, *args, **kwargs): + """ + Hypothesis strategy that returns a random DER BIT STRING. + Parameters are passed to hypothesis.strategy.binary. + """ + data = draw(st.binary(*args, **kwargs)) + if data: + unused = draw(st.integers(min_value=0, max_value=7)) + data = bytearray(data) + data[-1] &= -(2**unused) + data = bytes(data) + else: + unused = 0 + return encode_bitstring(data, unused) + + +def st_der_octet_string(*args, **kwargs): + """ + Hypothesis strategy that returns a random DER OCTET STRING object. + Parameters are passed to hypothesis.strategy.binary + """ + return st.builds(encode_octet_string, st.binary(*args, **kwargs)) + + +def st_der_null(): + """ + Hypothesis strategy that returns DER NULL object. + """ + return st.just(b"\x05\x00") + + +@st.composite +def st_der_oid(draw): + """ + Hypothesis strategy that returns DER OBJECT IDENTIFIER objects. + """ + first = draw(st.integers(min_value=0, max_value=2)) + if first < 2: + second = draw(st.integers(min_value=0, max_value=39)) + else: + second = draw(st.integers(min_value=0, max_value=2**512)) + rest = draw( + st.lists(st.integers(min_value=0, max_value=2**512), max_size=50) + ) + return encode_oid(first, second, *rest) + + +def st_der(): + """ + Hypothesis strategy that returns random DER structures. + + A valid DER structure is any primitive object, an octet encoding + of a valid DER structure, sequence of valid DER objects or a constructed + encoding of any of the above. 
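+
+    For example, encode_sequence(encode_integer(1),
+    encode_octet_string(b"\x00\xff")) is one such structure; it encodes
+    to the bytes 30 07 02 01 01 04 02 00 ff.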
+    """
+    return st.recursive(
+        st.just(b"")
+        | st_der_integer(max_value=2**4096)
+        | st_der_bit_string(max_size=1024**2)
+        | st_der_octet_string(max_size=1024**2)
+        | st_der_null()
+        | st_der_oid(),
+        lambda children: st.builds(
+            lambda x: encode_octet_string(x), st.one_of(children)
+        )
+        | st.builds(lambda x: encode_bitstring(x, 0), st.one_of(children))
+        | st.builds(
+            lambda x: encode_sequence(*x), st.lists(children, max_size=200)
+        )
+        | st.builds(
+            lambda tag, x: encode_constructed(tag, x),
+            st.integers(min_value=0, max_value=0x3F),
+            st.one_of(children),
+        ),
+        max_leaves=40,
+    )
+
+
+@settings(**params)
+@given(st.sampled_from(keys_and_sigs), st_der())
+def test_random_der_as_signature(params, der):
+    """Check if random DER structures are rejected as a signature"""
+    name, verifying_key, _ = params
+
+    with pytest.raises(BadSignatureError):
+        verifying_key.verify(der, example_data, sigdecode=sigdecode_der)
+
+
+@settings(**params)
+@given(st.sampled_from(keys_and_sigs), st.binary(max_size=1024**2))
+@example(
+    keys_and_sigs[0], encode_sequence(encode_integer(0), encode_integer(0))
+)
+@example(
+    keys_and_sigs[0],
+    encode_sequence(encode_integer(1), encode_integer(1)) + b"\x00",
+)
+@example(keys_and_sigs[0], encode_sequence(*[encode_integer(1)] * 3))
+def test_random_bytes_as_signature(params, der):
+    """Check if random bytes are rejected as a signature"""
+    name, verifying_key, _ = params
+
+    with pytest.raises(BadSignatureError):
+        verifying_key.verify(der, example_data, sigdecode=sigdecode_der)
+
+
+keys_and_string_sigs = [
+    (
+        name,
+        verifying_key,
+        sigencode_string(
+            *sigdecode_der(sig, verifying_key.curve.order),
+            order=verifying_key.curve.order
+        ),
+    )
+    for name, verifying_key, sig in keys_and_sigs
+    if not isinstance(verifying_key.curve.curve, CurveEdTw)
+]
+"""
+Name of the curve+hash combination, VerifyingKey and signature as a
+byte string.
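+The signature here is the raw concatenation of r and s, as produced by
+sigencode_string, rather than DER.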
+""" + + +keys_and_string_sigs += [ + ( + name, + verifying_key, + sig, + ) + for name, verifying_key, sig in keys_and_sigs + if isinstance(verifying_key.curve.curve, CurveEdTw) +] + + +@settings(**params) +@given(st_fuzzed_sig(keys_and_string_sigs)) +def test_fuzzed_string_signatures(params): + verifying_key, sig = params + + with pytest.raises(BadSignatureError): + verifying_key.verify(sig, example_data, sigdecode=sigdecode_string) diff --git a/env/lib/python3.8/site-packages/ecdsa/test_numbertheory.py b/env/lib/python3.8/site-packages/ecdsa/test_numbertheory.py new file mode 100644 index 00000000..8bc787f1 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/test_numbertheory.py @@ -0,0 +1,433 @@ +import operator +from functools import reduce + +try: + import unittest2 as unittest +except ImportError: + import unittest +import hypothesis.strategies as st +import pytest +from hypothesis import given, settings, example + +try: + from hypothesis import HealthCheck + + HC_PRESENT = True +except ImportError: # pragma: no cover + HC_PRESENT = False +from .numbertheory import ( + SquareRootError, + JacobiError, + factorization, + gcd, + lcm, + jacobi, + inverse_mod, + is_prime, + next_prime, + smallprimes, + square_root_mod_prime, +) + + +BIGPRIMES = ( + 999671, + 999683, + 999721, + 999727, + 999749, + 999763, + 999769, + 999773, + 999809, + 999853, + 999863, + 999883, + 999907, + 999917, + 999931, + 999953, + 999959, + 999961, + 999979, + 999983, +) + + +@pytest.mark.parametrize( + "prime, next_p", [(p, q) for p, q in zip(BIGPRIMES[:-1], BIGPRIMES[1:])] +) +def test_next_prime(prime, next_p): + assert next_prime(prime) == next_p + + +@pytest.mark.parametrize("val", [-1, 0, 1]) +def test_next_prime_with_nums_less_2(val): + assert next_prime(val) == 2 + + +@pytest.mark.parametrize("prime", smallprimes) +def test_square_root_mod_prime_for_small_primes(prime): + squares = set() + for num in range(0, 1 + prime // 2): + sq = num * num % prime + squares.add(sq) + root = square_root_mod_prime(sq, prime) + # tested for real with TestNumbertheory.test_square_root_mod_prime + assert root * root % prime == sq + + for nonsquare in range(0, prime): + if nonsquare in squares: + continue + with pytest.raises(SquareRootError): + square_root_mod_prime(nonsquare, prime) + + +def test_square_root_mod_prime_for_2(): + a = square_root_mod_prime(1, 2) + assert a == 1 + + +def test_square_root_mod_prime_for_small_prime(): + root = square_root_mod_prime(98**2 % 101, 101) + assert root * root % 101 == 9 + + +def test_square_root_mod_prime_for_p_congruent_5(): + p = 13 + assert p % 8 == 5 + + root = square_root_mod_prime(3, p) + assert root * root % p == 3 + + +def test_square_root_mod_prime_for_p_congruent_5_large_d(): + p = 29 + assert p % 8 == 5 + + root = square_root_mod_prime(4, p) + assert root * root % p == 4 + + +class TestSquareRootModPrime(unittest.TestCase): + def test_power_of_2_p(self): + with self.assertRaises(JacobiError): + square_root_mod_prime(12, 32) + + def test_no_square(self): + with self.assertRaises(SquareRootError) as e: + square_root_mod_prime(12, 31) + + self.assertIn("no square root", str(e.exception)) + + def test_non_prime(self): + with self.assertRaises(SquareRootError) as e: + square_root_mod_prime(12, 33) + + self.assertIn("p is not prime", str(e.exception)) + + def test_non_prime_with_negative(self): + with self.assertRaises(SquareRootError) as e: + square_root_mod_prime(697 - 1, 697) + + self.assertIn("p is not prime", str(e.exception)) + + +@st.composite +def 
st_two_nums_rel_prime(draw): + # 521-bit is the biggest curve we operate on, use 1024 for a bit + # of breathing space + mod = draw(st.integers(min_value=2, max_value=2**1024)) + num = draw( + st.integers(min_value=1, max_value=mod - 1).filter( + lambda x: gcd(x, mod) == 1 + ) + ) + return num, mod + + +@st.composite +def st_primes(draw, *args, **kwargs): + if "min_value" not in kwargs: # pragma: no branch + kwargs["min_value"] = 1 + prime = draw( + st.sampled_from(smallprimes) + | st.integers(*args, **kwargs).filter(is_prime) + ) + return prime + + +@st.composite +def st_num_square_prime(draw): + prime = draw(st_primes(max_value=2**1024)) + num = draw(st.integers(min_value=0, max_value=1 + prime // 2)) + sq = num * num % prime + return sq, prime + + +@st.composite +def st_comp_with_com_fac(draw): + """ + Strategy that returns lists of numbers, all having a common factor. + """ + primes = draw( + st.lists(st_primes(max_value=2**512), min_size=1, max_size=10) + ) + # select random prime(s) that will make the common factor of composites + com_fac_primes = draw( + st.lists(st.sampled_from(primes), min_size=1, max_size=20) + ) + com_fac = reduce(operator.mul, com_fac_primes, 1) + + # select at most 20 lists (returned numbers), + # each having at most 30 primes (factors) including none (then the number + # will be 1) + comp_primes = draw( + st.integers(min_value=1, max_value=20).flatmap( + lambda n: st.lists( + st.lists(st.sampled_from(primes), max_size=30), + min_size=1, + max_size=n, + ) + ) + ) + + return [reduce(operator.mul, nums, 1) * com_fac for nums in comp_primes] + + +@st.composite +def st_comp_no_com_fac(draw): + """ + Strategy that returns lists of numbers that don't have a common factor. + """ + primes = draw( + st.lists( + st_primes(max_value=2**512), min_size=2, max_size=10, unique=True + ) + ) + # first select the primes that will create the uncommon factor + # between returned numbers + uncom_fac_primes = draw( + st.lists( + st.sampled_from(primes), + min_size=1, + max_size=len(primes) - 1, + unique=True, + ) + ) + uncom_fac = reduce(operator.mul, uncom_fac_primes, 1) + + # then build composites from leftover primes + leftover_primes = [i for i in primes if i not in uncom_fac_primes] + + assert leftover_primes + assert uncom_fac_primes + + # select at most 20 lists, each having at most 30 primes + # selected from the leftover_primes list + number_primes = draw( + st.integers(min_value=1, max_value=20).flatmap( + lambda n: st.lists( + st.lists(st.sampled_from(leftover_primes), max_size=30), + min_size=1, + max_size=n, + ) + ) + ) + + numbers = [reduce(operator.mul, nums, 1) for nums in number_primes] + + insert_at = draw(st.integers(min_value=0, max_value=len(numbers))) + numbers.insert(insert_at, uncom_fac) + return numbers + + +HYP_SETTINGS = {} +if HC_PRESENT: # pragma: no branch + HYP_SETTINGS["suppress_health_check"] = [ + HealthCheck.filter_too_much, + HealthCheck.too_slow, + ] + # the factorization() sometimes takes a long time to finish + HYP_SETTINGS["deadline"] = 5000 + + +HYP_SLOW_SETTINGS = dict(HYP_SETTINGS) +HYP_SLOW_SETTINGS["max_examples"] = 10 + + +class TestIsPrime(unittest.TestCase): + def test_very_small_prime(self): + assert is_prime(23) + + def test_very_small_composite(self): + assert not is_prime(22) + + def test_small_prime(self): + assert is_prime(123456791) + + def test_special_composite(self): + assert not is_prime(10261) + + def test_medium_prime_1(self): + # nextPrime[2^256] + assert is_prime(2**256 + 0x129) + + def test_medium_prime_2(self): + # 
nextPrime(2^256+0x129) + assert is_prime(2**256 + 0x12D) + + def test_medium_trivial_composite(self): + assert not is_prime(2**256 + 0x130) + + def test_medium_non_trivial_composite(self): + assert not is_prime(2**256 + 0x12F) + + def test_large_prime(self): + # nextPrime[2^2048] + assert is_prime(2**2048 + 0x3D5) + + +class TestNumbertheory(unittest.TestCase): + def test_gcd(self): + assert gcd(3 * 5 * 7, 3 * 5 * 11, 3 * 5 * 13) == 3 * 5 + assert gcd([3 * 5 * 7, 3 * 5 * 11, 3 * 5 * 13]) == 3 * 5 + assert gcd(3) == 3 + + @unittest.skipUnless( + HC_PRESENT, + "Hypothesis 2.0.0 can't be made tolerant of hard to " + "meet requirements (like `is_prime()`), the test " + "case times-out on it", + ) + @settings(**HYP_SLOW_SETTINGS) + @given(st_comp_with_com_fac()) + def test_gcd_with_com_factor(self, numbers): + n = gcd(numbers) + assert 1 in numbers or n != 1 + for i in numbers: + assert i % n == 0 + + @unittest.skipUnless( + HC_PRESENT, + "Hypothesis 2.0.0 can't be made tolerant of hard to " + "meet requirements (like `is_prime()`), the test " + "case times-out on it", + ) + @settings(**HYP_SLOW_SETTINGS) + @given(st_comp_no_com_fac()) + def test_gcd_with_uncom_factor(self, numbers): + n = gcd(numbers) + assert n == 1 + + @given( + st.lists( + st.integers(min_value=1, max_value=2**8192), + min_size=1, + max_size=20, + ) + ) + def test_gcd_with_random_numbers(self, numbers): + n = gcd(numbers) + for i in numbers: + # check that at least it's a divider + assert i % n == 0 + + def test_lcm(self): + assert lcm(3, 5 * 3, 7 * 3) == 3 * 5 * 7 + assert lcm([3, 5 * 3, 7 * 3]) == 3 * 5 * 7 + assert lcm(3) == 3 + + @given( + st.lists( + st.integers(min_value=1, max_value=2**8192), + min_size=1, + max_size=20, + ) + ) + def test_lcm_with_random_numbers(self, numbers): + n = lcm(numbers) + for i in numbers: + assert n % i == 0 + + @unittest.skipUnless( + HC_PRESENT, + "Hypothesis 2.0.0 can't be made tolerant of hard to " + "meet requirements (like `is_prime()`), the test " + "case times-out on it", + ) + @settings(**HYP_SETTINGS) + @given(st_num_square_prime()) + def test_square_root_mod_prime(self, vals): + square, prime = vals + + calc = square_root_mod_prime(square, prime) + assert calc * calc % prime == square + + @settings(**HYP_SETTINGS) + @given(st.integers(min_value=1, max_value=10**12)) + @example(265399 * 1526929) + @example(373297**2 * 553991) + def test_factorization(self, num): + factors = factorization(num) + mult = 1 + for i in factors: + mult *= i[0] ** i[1] + assert mult == num + + def test_factorisation_smallprimes(self): + exp = 101 * 103 + assert 101 in smallprimes + assert 103 in smallprimes + factors = factorization(exp) + mult = 1 + for i in factors: + mult *= i[0] ** i[1] + assert mult == exp + + def test_factorisation_not_smallprimes(self): + exp = 1231 * 1237 + assert 1231 not in smallprimes + assert 1237 not in smallprimes + factors = factorization(exp) + mult = 1 + for i in factors: + mult *= i[0] ** i[1] + assert mult == exp + + def test_jacobi_with_zero(self): + assert jacobi(0, 3) == 0 + + def test_jacobi_with_one(self): + assert jacobi(1, 3) == 1 + + @settings(**HYP_SETTINGS) + @given(st.integers(min_value=3, max_value=1000).filter(lambda x: x % 2)) + def test_jacobi(self, mod): + if is_prime(mod): + squares = set() + for root in range(1, mod): + assert jacobi(root * root, mod) == 1 + squares.add(root * root % mod) + for i in range(1, mod): + if i not in squares: + assert jacobi(i, mod) == -1 + else: + factors = factorization(mod) + for a in range(1, mod): + c = 1 + for i in 
factors: + c *= jacobi(a, i[0]) ** i[1] + assert c == jacobi(a, mod) + + @given(st_two_nums_rel_prime()) + def test_inverse_mod(self, nums): + num, mod = nums + + inv = inverse_mod(num, mod) + + assert 0 < inv < mod + assert num * inv % mod == 1 + + def test_inverse_mod_with_zero(self): + assert 0 == inverse_mod(0, 11) diff --git a/env/lib/python3.8/site-packages/ecdsa/test_pyecdsa.py b/env/lib/python3.8/site-packages/ecdsa/test_pyecdsa.py new file mode 100644 index 00000000..d61f5083 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/test_pyecdsa.py @@ -0,0 +1,2267 @@ +from __future__ import with_statement, division + +try: + import unittest2 as unittest +except ImportError: + import unittest +import os +import sys +import shutil +import subprocess +import pytest +from binascii import hexlify, unhexlify +from hashlib import sha1, sha256, sha384, sha512 +import hashlib +from functools import partial + +from hypothesis import given +import hypothesis.strategies as st + +from six import b, print_, binary_type +from .keys import SigningKey, VerifyingKey +from .keys import BadSignatureError, MalformedPointError, BadDigestError +from . import util +from .util import sigencode_der, sigencode_strings +from .util import sigdecode_der, sigdecode_strings +from .util import number_to_string, encoded_oid_ecPublicKey, MalformedSignature +from .curves import Curve, UnknownCurveError +from .curves import ( + SECP112r1, + SECP112r2, + SECP128r1, + SECP160r1, + NIST192p, + NIST224p, + NIST256p, + NIST384p, + NIST521p, + SECP256k1, + BRAINPOOLP160r1, + BRAINPOOLP192r1, + BRAINPOOLP224r1, + BRAINPOOLP256r1, + BRAINPOOLP320r1, + BRAINPOOLP384r1, + BRAINPOOLP512r1, + Ed25519, + Ed448, + curves, +) +from .ecdsa import ( + curve_brainpoolp224r1, + curve_brainpoolp256r1, + curve_brainpoolp384r1, + curve_brainpoolp512r1, +) +from .ellipticcurve import Point +from . import der +from . import rfc6979 +from . 
import ecdsa + + +class SubprocessError(Exception): + pass + + +def run_openssl(cmd): + OPENSSL = "openssl" + p = subprocess.Popen( + [OPENSSL] + cmd.split(), + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + ) + stdout, ignored = p.communicate() + if p.returncode != 0: + raise SubprocessError( + "cmd '%s %s' failed: rc=%s, stdout/err was %s" + % (OPENSSL, cmd, p.returncode, stdout) + ) + return stdout.decode() + + +class ECDSA(unittest.TestCase): + def test_basic(self): + priv = SigningKey.generate() + pub = priv.get_verifying_key() + + data = b("blahblah") + sig = priv.sign(data) + + self.assertTrue(pub.verify(sig, data)) + self.assertRaises(BadSignatureError, pub.verify, sig, data + b("bad")) + + pub2 = VerifyingKey.from_string(pub.to_string()) + self.assertTrue(pub2.verify(sig, data)) + + def test_deterministic(self): + data = b("blahblah") + secexp = int("9d0219792467d7d37b4d43298a7d0c05", 16) + + priv = SigningKey.from_secret_exponent(secexp, SECP256k1, sha256) + pub = priv.get_verifying_key() + + k = rfc6979.generate_k( + SECP256k1.generator.order(), secexp, sha256, sha256(data).digest() + ) + + sig1 = priv.sign(data, k=k) + self.assertTrue(pub.verify(sig1, data)) + + sig2 = priv.sign(data, k=k) + self.assertTrue(pub.verify(sig2, data)) + + sig3 = priv.sign_deterministic(data, sha256) + self.assertTrue(pub.verify(sig3, data)) + + self.assertEqual(sig1, sig2) + self.assertEqual(sig1, sig3) + + def test_bad_usage(self): + # sk=SigningKey() is wrong + self.assertRaises(TypeError, SigningKey) + self.assertRaises(TypeError, VerifyingKey) + + def test_lengths(self): + default = NIST192p + priv = SigningKey.generate() + pub = priv.get_verifying_key() + self.assertEqual(len(pub.to_string()), default.verifying_key_length) + sig = priv.sign(b("data")) + self.assertEqual(len(sig), default.signature_length) + for curve in ( + NIST192p, + NIST224p, + NIST256p, + NIST384p, + NIST521p, + BRAINPOOLP160r1, + BRAINPOOLP192r1, + BRAINPOOLP224r1, + BRAINPOOLP256r1, + BRAINPOOLP320r1, + BRAINPOOLP384r1, + BRAINPOOLP512r1, + ): + priv = SigningKey.generate(curve=curve) + pub1 = priv.get_verifying_key() + pub2 = VerifyingKey.from_string(pub1.to_string(), curve) + self.assertEqual(pub1.to_string(), pub2.to_string()) + self.assertEqual(len(pub1.to_string()), curve.verifying_key_length) + sig = priv.sign(b("data")) + self.assertEqual(len(sig), curve.signature_length) + + def test_serialize(self): + seed = b("secret") + curve = NIST192p + secexp1 = util.randrange_from_seed__trytryagain(seed, curve.order) + secexp2 = util.randrange_from_seed__trytryagain(seed, curve.order) + self.assertEqual(secexp1, secexp2) + priv1 = SigningKey.from_secret_exponent(secexp1, curve) + priv2 = SigningKey.from_secret_exponent(secexp2, curve) + self.assertEqual( + hexlify(priv1.to_string()), hexlify(priv2.to_string()) + ) + self.assertEqual(priv1.to_pem(), priv2.to_pem()) + pub1 = priv1.get_verifying_key() + pub2 = priv2.get_verifying_key() + data = b("data") + sig1 = priv1.sign(data) + sig2 = priv2.sign(data) + self.assertTrue(pub1.verify(sig1, data)) + self.assertTrue(pub2.verify(sig1, data)) + self.assertTrue(pub1.verify(sig2, data)) + self.assertTrue(pub2.verify(sig2, data)) + self.assertEqual(hexlify(pub1.to_string()), hexlify(pub2.to_string())) + + def test_nonrandom(self): + s = b("all the entropy in the entire world, compressed into one line") + + def not_much_entropy(numbytes): + return s[:numbytes] + + # we control the entropy source, these two keys should be identical: + priv1 = 
SigningKey.generate(entropy=not_much_entropy) + priv2 = SigningKey.generate(entropy=not_much_entropy) + self.assertEqual( + hexlify(priv1.get_verifying_key().to_string()), + hexlify(priv2.get_verifying_key().to_string()), + ) + # likewise, signatures should be identical. Obviously you'd never + # want to do this with keys you care about, because the secrecy of + # the private key depends upon using different random numbers for + # each signature + sig1 = priv1.sign(b("data"), entropy=not_much_entropy) + sig2 = priv2.sign(b("data"), entropy=not_much_entropy) + self.assertEqual(hexlify(sig1), hexlify(sig2)) + + def assertTruePrivkeysEqual(self, priv1, priv2): + self.assertEqual( + priv1.privkey.secret_multiplier, priv2.privkey.secret_multiplier + ) + self.assertEqual( + priv1.privkey.public_key.generator, + priv2.privkey.public_key.generator, + ) + + def test_privkey_creation(self): + s = b("all the entropy in the entire world, compressed into one line") + + def not_much_entropy(numbytes): + return s[:numbytes] + + priv1 = SigningKey.generate() + self.assertEqual(priv1.baselen, NIST192p.baselen) + + priv1 = SigningKey.generate(curve=NIST224p) + self.assertEqual(priv1.baselen, NIST224p.baselen) + + priv1 = SigningKey.generate(entropy=not_much_entropy) + self.assertEqual(priv1.baselen, NIST192p.baselen) + priv2 = SigningKey.generate(entropy=not_much_entropy) + self.assertEqual(priv2.baselen, NIST192p.baselen) + self.assertTruePrivkeysEqual(priv1, priv2) + + priv1 = SigningKey.from_secret_exponent(secexp=3) + self.assertEqual(priv1.baselen, NIST192p.baselen) + priv2 = SigningKey.from_secret_exponent(secexp=3) + self.assertTruePrivkeysEqual(priv1, priv2) + + priv1 = SigningKey.from_secret_exponent(secexp=4, curve=NIST224p) + self.assertEqual(priv1.baselen, NIST224p.baselen) + + def test_privkey_strings(self): + priv1 = SigningKey.generate() + s1 = priv1.to_string() + self.assertEqual(type(s1), binary_type) + self.assertEqual(len(s1), NIST192p.baselen) + priv2 = SigningKey.from_string(s1) + self.assertTruePrivkeysEqual(priv1, priv2) + + s1 = priv1.to_pem() + self.assertEqual(type(s1), binary_type) + self.assertTrue(s1.startswith(b("-----BEGIN EC PRIVATE KEY-----"))) + self.assertTrue(s1.strip().endswith(b("-----END EC PRIVATE KEY-----"))) + priv2 = SigningKey.from_pem(s1) + self.assertTruePrivkeysEqual(priv1, priv2) + + s1 = priv1.to_der() + self.assertEqual(type(s1), binary_type) + priv2 = SigningKey.from_der(s1) + self.assertTruePrivkeysEqual(priv1, priv2) + + priv1 = SigningKey.generate(curve=NIST256p) + s1 = priv1.to_pem() + self.assertEqual(type(s1), binary_type) + self.assertTrue(s1.startswith(b("-----BEGIN EC PRIVATE KEY-----"))) + self.assertTrue(s1.strip().endswith(b("-----END EC PRIVATE KEY-----"))) + priv2 = SigningKey.from_pem(s1) + self.assertTruePrivkeysEqual(priv1, priv2) + + s1 = priv1.to_der() + self.assertEqual(type(s1), binary_type) + priv2 = SigningKey.from_der(s1) + self.assertTruePrivkeysEqual(priv1, priv2) + + def test_privkey_strings_brainpool(self): + priv1 = SigningKey.generate(curve=BRAINPOOLP512r1) + s1 = priv1.to_pem() + self.assertEqual(type(s1), binary_type) + self.assertTrue(s1.startswith(b("-----BEGIN EC PRIVATE KEY-----"))) + self.assertTrue(s1.strip().endswith(b("-----END EC PRIVATE KEY-----"))) + priv2 = SigningKey.from_pem(s1) + self.assertTruePrivkeysEqual(priv1, priv2) + + s1 = priv1.to_der() + self.assertEqual(type(s1), binary_type) + priv2 = SigningKey.from_der(s1) + self.assertTruePrivkeysEqual(priv1, priv2) + + def assertTruePubkeysEqual(self, pub1, 
pub2): + self.assertEqual(pub1.pubkey.point, pub2.pubkey.point) + self.assertEqual(pub1.pubkey.generator, pub2.pubkey.generator) + self.assertEqual(pub1.curve, pub2.curve) + + def test_pubkey_strings(self): + priv1 = SigningKey.generate() + pub1 = priv1.get_verifying_key() + s1 = pub1.to_string() + self.assertEqual(type(s1), binary_type) + self.assertEqual(len(s1), NIST192p.verifying_key_length) + pub2 = VerifyingKey.from_string(s1) + self.assertTruePubkeysEqual(pub1, pub2) + + priv1 = SigningKey.generate(curve=NIST256p) + pub1 = priv1.get_verifying_key() + s1 = pub1.to_string() + self.assertEqual(type(s1), binary_type) + self.assertEqual(len(s1), NIST256p.verifying_key_length) + pub2 = VerifyingKey.from_string(s1, curve=NIST256p) + self.assertTruePubkeysEqual(pub1, pub2) + + pub1_der = pub1.to_der() + self.assertEqual(type(pub1_der), binary_type) + pub2 = VerifyingKey.from_der(pub1_der) + self.assertTruePubkeysEqual(pub1, pub2) + + self.assertRaises( + der.UnexpectedDER, VerifyingKey.from_der, pub1_der + b("junk") + ) + badpub = VerifyingKey.from_der(pub1_der) + + class FakeGenerator: + def order(self): + return 123456789 + + class FakeCurveFp: + def p(self): + return int( + "6525534529039240705020950546962731340" + "4541085228058844382513856749047873406763" + ) + + badcurve = Curve( + "unknown", FakeCurveFp(), FakeGenerator(), (1, 2, 3, 4, 5, 6), None + ) + badpub.curve = badcurve + badder = badpub.to_der() + self.assertRaises(UnknownCurveError, VerifyingKey.from_der, badder) + + pem = pub1.to_pem() + self.assertEqual(type(pem), binary_type) + self.assertTrue(pem.startswith(b("-----BEGIN PUBLIC KEY-----")), pem) + self.assertTrue( + pem.strip().endswith(b("-----END PUBLIC KEY-----")), pem + ) + pub2 = VerifyingKey.from_pem(pem) + self.assertTruePubkeysEqual(pub1, pub2) + + def test_pubkey_strings_brainpool(self): + priv1 = SigningKey.generate(curve=BRAINPOOLP512r1) + pub1 = priv1.get_verifying_key() + s1 = pub1.to_string() + self.assertEqual(type(s1), binary_type) + self.assertEqual(len(s1), BRAINPOOLP512r1.verifying_key_length) + pub2 = VerifyingKey.from_string(s1, curve=BRAINPOOLP512r1) + self.assertTruePubkeysEqual(pub1, pub2) + + pub1_der = pub1.to_der() + self.assertEqual(type(pub1_der), binary_type) + pub2 = VerifyingKey.from_der(pub1_der) + self.assertTruePubkeysEqual(pub1, pub2) + + def test_vk_to_der_with_invalid_point_encoding(self): + sk = SigningKey.generate() + vk = sk.verifying_key + + with self.assertRaises(ValueError): + vk.to_der("raw") + + def test_sk_to_der_with_invalid_point_encoding(self): + sk = SigningKey.generate() + + with self.assertRaises(ValueError): + sk.to_der("raw") + + def test_vk_from_der_garbage_after_curve_oid(self): + type_oid_der = encoded_oid_ecPublicKey + curve_oid_der = der.encode_oid(*(1, 2, 840, 10045, 3, 1, 1)) + b( + "garbage" + ) + enc_type_der = der.encode_sequence(type_oid_der, curve_oid_der) + point_der = der.encode_bitstring(b"\x00\xff", None) + to_decode = der.encode_sequence(enc_type_der, point_der) + + with self.assertRaises(der.UnexpectedDER): + VerifyingKey.from_der(to_decode) + + def test_vk_from_der_invalid_key_type(self): + type_oid_der = der.encode_oid(*(1, 2, 3)) + curve_oid_der = der.encode_oid(*(1, 2, 840, 10045, 3, 1, 1)) + enc_type_der = der.encode_sequence(type_oid_der, curve_oid_der) + point_der = der.encode_bitstring(b"\x00\xff", None) + to_decode = der.encode_sequence(enc_type_der, point_der) + + with self.assertRaises(der.UnexpectedDER): + VerifyingKey.from_der(to_decode) + + def 
test_vk_from_der_garbage_after_point_string(self): + type_oid_der = encoded_oid_ecPublicKey + curve_oid_der = der.encode_oid(*(1, 2, 840, 10045, 3, 1, 1)) + enc_type_der = der.encode_sequence(type_oid_der, curve_oid_der) + point_der = der.encode_bitstring(b"\x00\xff", None) + b("garbage") + to_decode = der.encode_sequence(enc_type_der, point_der) + + with self.assertRaises(der.UnexpectedDER): + VerifyingKey.from_der(to_decode) + + def test_vk_from_der_invalid_bitstring(self): + type_oid_der = encoded_oid_ecPublicKey + curve_oid_der = der.encode_oid(*(1, 2, 840, 10045, 3, 1, 1)) + enc_type_der = der.encode_sequence(type_oid_der, curve_oid_der) + point_der = der.encode_bitstring(b"\x08\xff", None) + to_decode = der.encode_sequence(enc_type_der, point_der) + + with self.assertRaises(der.UnexpectedDER): + VerifyingKey.from_der(to_decode) + + def test_vk_from_der_with_invalid_length_of_encoding(self): + type_oid_der = encoded_oid_ecPublicKey + curve_oid_der = der.encode_oid(*(1, 2, 840, 10045, 3, 1, 1)) + enc_type_der = der.encode_sequence(type_oid_der, curve_oid_der) + point_der = der.encode_bitstring(b"\xff" * 64, 0) + to_decode = der.encode_sequence(enc_type_der, point_der) + + with self.assertRaises(MalformedPointError): + VerifyingKey.from_der(to_decode) + + def test_vk_from_der_with_raw_encoding(self): + type_oid_der = encoded_oid_ecPublicKey + curve_oid_der = der.encode_oid(*(1, 2, 840, 10045, 3, 1, 1)) + enc_type_der = der.encode_sequence(type_oid_der, curve_oid_der) + point_der = der.encode_bitstring(b"\xff" * 48, 0) + to_decode = der.encode_sequence(enc_type_der, point_der) + + with self.assertRaises(der.UnexpectedDER): + VerifyingKey.from_der(to_decode) + + def test_signature_strings(self): + priv1 = SigningKey.generate() + pub1 = priv1.get_verifying_key() + data = b("data") + + sig = priv1.sign(data) + self.assertEqual(type(sig), binary_type) + self.assertEqual(len(sig), NIST192p.signature_length) + self.assertTrue(pub1.verify(sig, data)) + + sig = priv1.sign(data, sigencode=sigencode_strings) + self.assertEqual(type(sig), tuple) + self.assertEqual(len(sig), 2) + self.assertEqual(type(sig[0]), binary_type) + self.assertEqual(type(sig[1]), binary_type) + self.assertEqual(len(sig[0]), NIST192p.baselen) + self.assertEqual(len(sig[1]), NIST192p.baselen) + self.assertTrue(pub1.verify(sig, data, sigdecode=sigdecode_strings)) + + sig_der = priv1.sign(data, sigencode=sigencode_der) + self.assertEqual(type(sig_der), binary_type) + self.assertTrue(pub1.verify(sig_der, data, sigdecode=sigdecode_der)) + + def test_sig_decode_strings_with_invalid_count(self): + with self.assertRaises(MalformedSignature): + sigdecode_strings([b("one"), b("two"), b("three")], 0xFF) + + def test_sig_decode_strings_with_wrong_r_len(self): + with self.assertRaises(MalformedSignature): + sigdecode_strings([b("one"), b("two")], 0xFF) + + def test_sig_decode_strings_with_wrong_s_len(self): + with self.assertRaises(MalformedSignature): + sigdecode_strings([b("\xa0"), b("\xb0\xff")], 0xFF) + + def test_verify_with_too_long_input(self): + sk = SigningKey.generate() + vk = sk.verifying_key + + with self.assertRaises(BadDigestError): + vk.verify_digest(None, b("\x00") * 128) + + def test_sk_from_secret_exponent_with_wrong_sec_exponent(self): + with self.assertRaises(MalformedPointError): + SigningKey.from_secret_exponent(0) + + def test_sk_from_string_with_wrong_len_string(self): + with self.assertRaises(MalformedPointError): + SigningKey.from_string(b("\x01")) + + def test_sk_from_der_with_junk_after_sequence(self): + 
ver_der = der.encode_integer(1) + to_decode = der.encode_sequence(ver_der) + b("garbage") + + with self.assertRaises(der.UnexpectedDER): + SigningKey.from_der(to_decode) + + def test_sk_from_der_with_wrong_version(self): + ver_der = der.encode_integer(0) + to_decode = der.encode_sequence(ver_der) + + with self.assertRaises(der.UnexpectedDER): + SigningKey.from_der(to_decode) + + def test_sk_from_der_invalid_const_tag(self): + ver_der = der.encode_integer(1) + privkey_der = der.encode_octet_string(b("\x00\xff")) + curve_oid_der = der.encode_oid(*(1, 2, 3)) + const_der = der.encode_constructed(1, curve_oid_der) + to_decode = der.encode_sequence( + ver_der, privkey_der, const_der, curve_oid_der + ) + + with self.assertRaises(der.UnexpectedDER): + SigningKey.from_der(to_decode) + + def test_sk_from_der_garbage_after_privkey_oid(self): + ver_der = der.encode_integer(1) + privkey_der = der.encode_octet_string(b("\x00\xff")) + curve_oid_der = der.encode_oid(*(1, 2, 3)) + b("garbage") + const_der = der.encode_constructed(0, curve_oid_der) + to_decode = der.encode_sequence( + ver_der, privkey_der, const_der, curve_oid_der + ) + + with self.assertRaises(der.UnexpectedDER): + SigningKey.from_der(to_decode) + + def test_sk_from_der_with_short_privkey(self): + ver_der = der.encode_integer(1) + privkey_der = der.encode_octet_string(b("\x00\xff")) + curve_oid_der = der.encode_oid(*(1, 2, 840, 10045, 3, 1, 1)) + const_der = der.encode_constructed(0, curve_oid_der) + to_decode = der.encode_sequence( + ver_der, privkey_der, const_der, curve_oid_der + ) + + sk = SigningKey.from_der(to_decode) + self.assertEqual(sk.privkey.secret_multiplier, 255) + + def test_sk_from_p8_der_with_wrong_version(self): + ver_der = der.encode_integer(2) + algorithm_der = der.encode_sequence( + der.encode_oid(1, 2, 840, 10045, 2, 1), + der.encode_oid(1, 2, 840, 10045, 3, 1, 1), + ) + privkey_der = der.encode_octet_string( + der.encode_sequence( + der.encode_integer(1), der.encode_octet_string(b"\x00\xff") + ) + ) + to_decode = der.encode_sequence(ver_der, algorithm_der, privkey_der) + + with self.assertRaises(der.UnexpectedDER): + SigningKey.from_der(to_decode) + + def test_sk_from_p8_der_with_wrong_algorithm(self): + ver_der = der.encode_integer(1) + algorithm_der = der.encode_sequence( + der.encode_oid(1, 2, 3), der.encode_oid(1, 2, 840, 10045, 3, 1, 1) + ) + privkey_der = der.encode_octet_string( + der.encode_sequence( + der.encode_integer(1), der.encode_octet_string(b"\x00\xff") + ) + ) + to_decode = der.encode_sequence(ver_der, algorithm_der, privkey_der) + + with self.assertRaises(der.UnexpectedDER): + SigningKey.from_der(to_decode) + + def test_sk_from_p8_der_with_trailing_junk_after_algorithm(self): + ver_der = der.encode_integer(1) + algorithm_der = der.encode_sequence( + der.encode_oid(1, 2, 840, 10045, 2, 1), + der.encode_oid(1, 2, 840, 10045, 3, 1, 1), + der.encode_octet_string(b"junk"), + ) + privkey_der = der.encode_octet_string( + der.encode_sequence( + der.encode_integer(1), der.encode_octet_string(b"\x00\xff") + ) + ) + to_decode = der.encode_sequence(ver_der, algorithm_der, privkey_der) + + with self.assertRaises(der.UnexpectedDER): + SigningKey.from_der(to_decode) + + def test_sk_from_p8_der_with_trailing_junk_after_key(self): + ver_der = der.encode_integer(1) + algorithm_der = der.encode_sequence( + der.encode_oid(1, 2, 840, 10045, 2, 1), + der.encode_oid(1, 2, 840, 10045, 3, 1, 1), + ) + privkey_der = der.encode_octet_string( + der.encode_sequence( + der.encode_integer(1), der.encode_octet_string(b"\x00\xff") 
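+                # the INTEGER concatenated below adds trailing junk after
+                # the inner key structure, inside the private-key OCTET STRING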
+ ) + + der.encode_integer(999) + ) + to_decode = der.encode_sequence( + ver_der, + algorithm_der, + privkey_der, + der.encode_octet_string(b"junk"), + ) + + with self.assertRaises(der.UnexpectedDER): + SigningKey.from_der(to_decode) + + def test_sign_with_too_long_hash(self): + sk = SigningKey.from_secret_exponent(12) + + with self.assertRaises(BadDigestError): + sk.sign_digest(b("\xff") * 64) + + def test_hashfunc(self): + sk = SigningKey.generate(curve=NIST256p, hashfunc=sha256) + data = b("security level is 128 bits") + sig = sk.sign(data) + vk = VerifyingKey.from_string( + sk.get_verifying_key().to_string(), curve=NIST256p, hashfunc=sha256 + ) + self.assertTrue(vk.verify(sig, data)) + + sk2 = SigningKey.generate(curve=NIST256p) + sig2 = sk2.sign(data, hashfunc=sha256) + vk2 = VerifyingKey.from_string( + sk2.get_verifying_key().to_string(), + curve=NIST256p, + hashfunc=sha256, + ) + self.assertTrue(vk2.verify(sig2, data)) + + vk3 = VerifyingKey.from_string( + sk.get_verifying_key().to_string(), curve=NIST256p + ) + self.assertTrue(vk3.verify(sig, data, hashfunc=sha256)) + + def test_public_key_recovery(self): + # Create keys + curve = BRAINPOOLP160r1 + + sk = SigningKey.generate(curve=curve) + vk = sk.get_verifying_key() + + # Sign a message + data = b("blahblah") + signature = sk.sign(data) + + # Recover verifying keys + recovered_vks = VerifyingKey.from_public_key_recovery( + signature, data, curve + ) + + # Test if each pk is valid + for recovered_vk in recovered_vks: + # Test if recovered vk is valid for the data + self.assertTrue(recovered_vk.verify(signature, data)) + + # Test if properties are equal + self.assertEqual(vk.curve, recovered_vk.curve) + self.assertEqual( + vk.default_hashfunc, recovered_vk.default_hashfunc + ) + + # Test if original vk is the list of recovered keys + self.assertIn( + vk.pubkey.point, + [recovered_vk.pubkey.point for recovered_vk in recovered_vks], + ) + + def test_public_key_recovery_with_custom_hash(self): + # Create keys + curve = BRAINPOOLP160r1 + + sk = SigningKey.generate(curve=curve, hashfunc=sha256) + vk = sk.get_verifying_key() + + # Sign a message + data = b("blahblah") + signature = sk.sign(data) + + # Recover verifying keys + recovered_vks = VerifyingKey.from_public_key_recovery( + signature, data, curve, hashfunc=sha256, allow_truncate=True + ) + + # Test if each pk is valid + for recovered_vk in recovered_vks: + # Test if recovered vk is valid for the data + self.assertTrue(recovered_vk.verify(signature, data)) + + # Test if properties are equal + self.assertEqual(vk.curve, recovered_vk.curve) + self.assertEqual(sha256, recovered_vk.default_hashfunc) + + # Test if original vk is the list of recovered keys + self.assertIn( + vk.pubkey.point, + [recovered_vk.pubkey.point for recovered_vk in recovered_vks], + ) + + def test_encoding(self): + sk = SigningKey.from_secret_exponent(123456789) + vk = sk.verifying_key + + exp = b( + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x !\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + ) + self.assertEqual(vk.to_string(), exp) + self.assertEqual(vk.to_string("raw"), exp) + self.assertEqual(vk.to_string("uncompressed"), b("\x04") + exp) + self.assertEqual(vk.to_string("compressed"), b("\x02") + exp[:24]) + self.assertEqual(vk.to_string("hybrid"), b("\x06") + exp) + + def test_decoding(self): + sk = SigningKey.from_secret_exponent(123456789) + vk = sk.verifying_key + + enc = b( + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x 
!\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + ) + + from_raw = VerifyingKey.from_string(enc) + self.assertEqual(from_raw.pubkey.point, vk.pubkey.point) + + from_uncompressed = VerifyingKey.from_string(b("\x04") + enc) + self.assertEqual(from_uncompressed.pubkey.point, vk.pubkey.point) + + from_compressed = VerifyingKey.from_string(b("\x02") + enc[:24]) + self.assertEqual(from_compressed.pubkey.point, vk.pubkey.point) + + from_uncompressed = VerifyingKey.from_string(b("\x06") + enc) + self.assertEqual(from_uncompressed.pubkey.point, vk.pubkey.point) + + def test_uncompressed_decoding_as_only_alowed(self): + enc = b( + "\x04" + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x !\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + ) + vk = VerifyingKey.from_string(enc, valid_encodings=("uncompressed",)) + sk = SigningKey.from_secret_exponent(123456789) + + self.assertEqual(vk, sk.verifying_key) + + def test_raw_decoding_with_blocked_format(self): + enc = b( + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x !\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + ) + with self.assertRaises(MalformedPointError) as exp: + VerifyingKey.from_string(enc, valid_encodings=("hybrid",)) + + self.assertIn("hybrid", str(exp.exception)) + + def test_decoding_with_unknown_format(self): + with self.assertRaises(ValueError) as e: + VerifyingKey.from_string(b"", valid_encodings=("raw", "foobar")) + + self.assertIn("Only uncompressed, compressed", str(e.exception)) + + def test_uncompressed_decoding_with_blocked_format(self): + enc = b( + "\x04" + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x !\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + ) + with self.assertRaises(MalformedPointError) as exp: + VerifyingKey.from_string(enc, valid_encodings=("hybrid",)) + + self.assertIn("Invalid X9.62 encoding", str(exp.exception)) + + def test_hybrid_decoding_with_blocked_format(self): + enc = b( + "\x06" + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x !\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + ) + with self.assertRaises(MalformedPointError) as exp: + VerifyingKey.from_string(enc, valid_encodings=("uncompressed",)) + + self.assertIn("Invalid X9.62 encoding", str(exp.exception)) + + def test_compressed_decoding_with_blocked_format(self): + enc = b( + "\x02" + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x !\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + )[:25] + with self.assertRaises(MalformedPointError) as exp: + VerifyingKey.from_string(enc, valid_encodings=("hybrid", "raw")) + + self.assertIn("(hybrid, raw)", str(exp.exception)) + + def test_decoding_with_malformed_uncompressed(self): + enc = b( + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x !\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + ) + + with self.assertRaises(MalformedPointError): + VerifyingKey.from_string(b("\x02") + enc) + + def test_decoding_with_malformed_compressed(self): + enc = b( + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x !\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + ) + + with 
self.assertRaises(MalformedPointError): + VerifyingKey.from_string(b("\x01") + enc[:24]) + + def test_decoding_with_inconsistent_hybrid(self): + enc = b( + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x !\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + ) + + with self.assertRaises(MalformedPointError): + VerifyingKey.from_string(b("\x07") + enc) + + def test_decoding_with_point_not_on_curve(self): + enc = b( + "\x0c\xe0\x1d\xe0d\x1c\x8eS\x8a\xc0\x9eK\xa8x !\xd5\xc2\xc3" + "\xfd\xc8\xa0c\xff\xfb\x02\xb9\xc4\x84)\x1a\x0f\x8b\x87\xa4" + "z\x8a#\xb5\x97\xecO\xb6\xa0HQ\x89*" + ) + + with self.assertRaises(MalformedPointError): + VerifyingKey.from_string(enc[:47] + b("\x00")) + + def test_decoding_with_point_at_infinity(self): + # decoding it is unsupported, as it's not necessary to encode it + with self.assertRaises(MalformedPointError): + VerifyingKey.from_string(b("\x00")) + + def test_not_lying_on_curve(self): + enc = number_to_string(NIST192p.curve.p(), NIST192p.curve.p() + 1) + + with self.assertRaises(MalformedPointError): + VerifyingKey.from_string(b("\x02") + enc) + + def test_from_string_with_invalid_curve_too_short_ver_key_len(self): + # both verifying_key_length and baselen are calculated internally + # by the Curve constructor, but since we depend on them verify + # that inconsistent values are detected + curve = Curve("test", ecdsa.curve_192, ecdsa.generator_192, (1, 2)) + curve.verifying_key_length = 16 + curve.baselen = 32 + + with self.assertRaises(MalformedPointError): + VerifyingKey.from_string(b("\x00") * 16, curve) + + def test_from_string_with_invalid_curve_too_long_ver_key_len(self): + # both verifying_key_length and baselen are calculated internally + # by the Curve constructor, but since we depend on them verify + # that inconsistent values are detected + curve = Curve("test", ecdsa.curve_192, ecdsa.generator_192, (1, 2)) + curve.verifying_key_length = 16 + curve.baselen = 16 + + with self.assertRaises(MalformedPointError): + VerifyingKey.from_string(b("\x00") * 16, curve) + + +@pytest.mark.parametrize( + "val,even", [(i, j) for i in range(256) for j in [True, False]] +) +def test_VerifyingKey_decode_with_small_values(val, even): + enc = number_to_string(val, NIST192p.order) + + if even: + enc = b("\x02") + enc + else: + enc = b("\x03") + enc + + # small values can both be actual valid public keys and not, verify that + # only expected exceptions are raised if they are not + try: + vk = VerifyingKey.from_string(enc) + assert isinstance(vk, VerifyingKey) + except MalformedPointError: + assert True + + +params = [] +for curve in curves: + for enc in ["raw", "uncompressed", "compressed", "hybrid"]: + params.append( + pytest.param(curve, enc, id="{0}-{1}".format(curve.name, enc)) + ) + + +@pytest.mark.parametrize("curve,encoding", params) +def test_VerifyingKey_encode_decode(curve, encoding): + sk = SigningKey.generate(curve=curve) + vk = sk.verifying_key + + encoded = vk.to_string(encoding) + + from_enc = VerifyingKey.from_string(encoded, curve=curve) + + assert vk.pubkey.point == from_enc.pubkey.point + + +class OpenSSL(unittest.TestCase): + # test interoperability with OpenSSL tools. 
Note that openssl's ECDSA + # sign/verify arguments changed between 0.9.8 and 1.0.0: the early + # versions require "-ecdsa-with-SHA1", the later versions want just + # "-SHA1" (or to leave out that argument entirely, which means the + # signature will use some default digest algorithm, probably determined + # by the key, probably always SHA1). + # + # openssl ecparam -name secp224r1 -genkey -out privkey.pem + # openssl ec -in privkey.pem -text -noout # get the priv/pub keys + # openssl dgst -ecdsa-with-SHA1 -sign privkey.pem -out data.sig data.txt + # openssl asn1parse -in data.sig -inform DER + # data.sig is 64 bytes, probably 56b plus ASN1 overhead + # openssl dgst -ecdsa-with-SHA1 -prverify privkey.pem -signature data.sig data.txt ; echo $? + # openssl ec -in privkey.pem -pubout -out pubkey.pem + # openssl ec -in privkey.pem -pubout -outform DER -out pubkey.der + + OPENSSL_SUPPORTED_CURVES = set( + c.split(":")[0].strip() + for c in run_openssl("ecparam -list_curves").split("\n") + ) + + def get_openssl_messagedigest_arg(self, hash_name): + v = run_openssl("version") + # e.g. "OpenSSL 1.0.0 29 Mar 2010", or "OpenSSL 1.0.0a 1 Jun 2010", + # or "OpenSSL 0.9.8o 01 Jun 2010" + vs = v.split()[1].split(".") + if vs >= ["1", "0", "0"]: # pragma: no cover + return "-{0}".format(hash_name) + else: # pragma: no cover + return "-ecdsa-with-{0}".format(hash_name) + + # sk: 1:OpenSSL->python 2:python->OpenSSL + # vk: 3:OpenSSL->python 4:python->OpenSSL + # sig: 5:OpenSSL->python 6:python->OpenSSL + + @pytest.mark.skipif( + "secp112r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp112r1", + ) + def test_from_openssl_secp112r1(self): + return self.do_test_from_openssl(SECP112r1) + + @pytest.mark.skipif( + "secp112r2" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp112r2", + ) + def test_from_openssl_secp112r2(self): + return self.do_test_from_openssl(SECP112r2) + + @pytest.mark.skipif( + "secp128r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp128r1", + ) + def test_from_openssl_secp128r1(self): + return self.do_test_from_openssl(SECP128r1) + + @pytest.mark.skipif( + "secp160r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp160r1", + ) + def test_from_openssl_secp160r1(self): + return self.do_test_from_openssl(SECP160r1) + + @pytest.mark.skipif( + "prime192v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime192v1", + ) + def test_from_openssl_nist192p(self): + return self.do_test_from_openssl(NIST192p) + + @pytest.mark.skipif( + "prime192v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime192v1", + ) + def test_from_openssl_nist192p_sha256(self): + return self.do_test_from_openssl(NIST192p, "SHA256") + + @pytest.mark.skipif( + "secp224r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp224r1", + ) + def test_from_openssl_nist224p(self): + return self.do_test_from_openssl(NIST224p) + + @pytest.mark.skipif( + "prime256v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime256v1", + ) + def test_from_openssl_nist256p(self): + return self.do_test_from_openssl(NIST256p) + + @pytest.mark.skipif( + "prime256v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime256v1", + ) + def test_from_openssl_nist256p_sha384(self): + return self.do_test_from_openssl(NIST256p, "SHA384") + + @pytest.mark.skipif( + "prime256v1" not in 
OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime256v1", + ) + def test_from_openssl_nist256p_sha512(self): + return self.do_test_from_openssl(NIST256p, "SHA512") + + @pytest.mark.skipif( + "secp384r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp384r1", + ) + def test_from_openssl_nist384p(self): + return self.do_test_from_openssl(NIST384p) + + @pytest.mark.skipif( + "secp521r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp521r1", + ) + def test_from_openssl_nist521p(self): + return self.do_test_from_openssl(NIST521p) + + @pytest.mark.skipif( + "secp256k1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp256k1", + ) + def test_from_openssl_secp256k1(self): + return self.do_test_from_openssl(SECP256k1) + + @pytest.mark.skipif( + "brainpoolP160r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP160r1", + ) + def test_from_openssl_brainpoolp160r1(self): + return self.do_test_from_openssl(BRAINPOOLP160r1) + + @pytest.mark.skipif( + "brainpoolP192r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP192r1", + ) + def test_from_openssl_brainpoolp192r1(self): + return self.do_test_from_openssl(BRAINPOOLP192r1) + + @pytest.mark.skipif( + "brainpoolP224r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP224r1", + ) + def test_from_openssl_brainpoolp224r1(self): + return self.do_test_from_openssl(BRAINPOOLP224r1) + + @pytest.mark.skipif( + "brainpoolP256r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP256r1", + ) + def test_from_openssl_brainpoolp256r1(self): + return self.do_test_from_openssl(BRAINPOOLP256r1) + + @pytest.mark.skipif( + "brainpoolP320r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP320r1", + ) + def test_from_openssl_brainpoolp320r1(self): + return self.do_test_from_openssl(BRAINPOOLP320r1) + + @pytest.mark.skipif( + "brainpoolP384r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP384r1", + ) + def test_from_openssl_brainpoolp384r1(self): + return self.do_test_from_openssl(BRAINPOOLP384r1) + + @pytest.mark.skipif( + "brainpoolP512r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP512r1", + ) + def test_from_openssl_brainpoolp512r1(self): + return self.do_test_from_openssl(BRAINPOOLP512r1) + + def do_test_from_openssl(self, curve, hash_name="SHA1"): + curvename = curve.openssl_name + assert curvename + # OpenSSL: create sk, vk, sign. 
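+        # (the numbered steps below refer to the sk/vk/sig interop
+        # matrix in the comment above the per-curve skipif tests)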
+ # Python: read vk(3), checksig(5), read sk(1), sign, check + mdarg = self.get_openssl_messagedigest_arg(hash_name) + if os.path.isdir("t"): # pragma: no cover + shutil.rmtree("t") + os.mkdir("t") + run_openssl("ecparam -name %s -genkey -out t/privkey.pem" % curvename) + run_openssl("ec -in t/privkey.pem -pubout -out t/pubkey.pem") + data = b("data") + with open("t/data.txt", "wb") as e: + e.write(data) + run_openssl( + "dgst %s -sign t/privkey.pem -out t/data.sig t/data.txt" % mdarg + ) + run_openssl( + "dgst %s -verify t/pubkey.pem -signature t/data.sig t/data.txt" + % mdarg + ) + with open("t/pubkey.pem", "rb") as e: + pubkey_pem = e.read() + vk = VerifyingKey.from_pem(pubkey_pem) # 3 + with open("t/data.sig", "rb") as e: + sig_der = e.read() + self.assertTrue( + vk.verify( + sig_der, + data, # 5 + hashfunc=partial(hashlib.new, hash_name), + sigdecode=sigdecode_der, + ) + ) + + with open("t/privkey.pem") as e: + fp = e.read() + sk = SigningKey.from_pem(fp) # 1 + sig = sk.sign(data, hashfunc=partial(hashlib.new, hash_name)) + self.assertTrue( + vk.verify(sig, data, hashfunc=partial(hashlib.new, hash_name)) + ) + + run_openssl( + "pkcs8 -topk8 -nocrypt " + "-in t/privkey.pem -outform pem -out t/privkey-p8.pem" + ) + with open("t/privkey-p8.pem", "rb") as e: + privkey_p8_pem = e.read() + sk_from_p8 = SigningKey.from_pem(privkey_p8_pem) + self.assertEqual(sk, sk_from_p8) + + @pytest.mark.skipif( + "secp112r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp112r1", + ) + def test_to_openssl_secp112r1(self): + self.do_test_to_openssl(SECP112r1) + + @pytest.mark.skipif( + "secp112r2" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp112r2", + ) + def test_to_openssl_secp112r2(self): + self.do_test_to_openssl(SECP112r2) + + @pytest.mark.skipif( + "secp128r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp128r1", + ) + def test_to_openssl_secp128r1(self): + self.do_test_to_openssl(SECP128r1) + + @pytest.mark.skipif( + "secp160r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp160r1", + ) + def test_to_openssl_secp160r1(self): + self.do_test_to_openssl(SECP160r1) + + @pytest.mark.skipif( + "prime192v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime192v1", + ) + def test_to_openssl_nist192p(self): + self.do_test_to_openssl(NIST192p) + + @pytest.mark.skipif( + "prime192v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime192v1", + ) + def test_to_openssl_nist192p_sha256(self): + self.do_test_to_openssl(NIST192p, "SHA256") + + @pytest.mark.skipif( + "secp224r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp224r1", + ) + def test_to_openssl_nist224p(self): + self.do_test_to_openssl(NIST224p) + + @pytest.mark.skipif( + "prime256v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime256v1", + ) + def test_to_openssl_nist256p(self): + self.do_test_to_openssl(NIST256p) + + @pytest.mark.skipif( + "prime256v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime256v1", + ) + def test_to_openssl_nist256p_sha384(self): + self.do_test_to_openssl(NIST256p, "SHA384") + + @pytest.mark.skipif( + "prime256v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime256v1", + ) + def test_to_openssl_nist256p_sha512(self): + self.do_test_to_openssl(NIST256p, "SHA512") + + @pytest.mark.skipif( + "secp384r1" not in 
OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp384r1", + ) + def test_to_openssl_nist384p(self): + self.do_test_to_openssl(NIST384p) + + @pytest.mark.skipif( + "secp521r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp521r1", + ) + def test_to_openssl_nist521p(self): + self.do_test_to_openssl(NIST521p) + + @pytest.mark.skipif( + "secp256k1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support secp256k1", + ) + def test_to_openssl_secp256k1(self): + self.do_test_to_openssl(SECP256k1) + + @pytest.mark.skipif( + "brainpoolP160r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP160r1", + ) + def test_to_openssl_brainpoolp160r1(self): + self.do_test_to_openssl(BRAINPOOLP160r1) + + @pytest.mark.skipif( + "brainpoolP192r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP192r1", + ) + def test_to_openssl_brainpoolp192r1(self): + self.do_test_to_openssl(BRAINPOOLP192r1) + + @pytest.mark.skipif( + "brainpoolP224r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP224r1", + ) + def test_to_openssl_brainpoolp224r1(self): + self.do_test_to_openssl(BRAINPOOLP224r1) + + @pytest.mark.skipif( + "brainpoolP256r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP256r1", + ) + def test_to_openssl_brainpoolp256r1(self): + self.do_test_to_openssl(BRAINPOOLP256r1) + + @pytest.mark.skipif( + "brainpoolP320r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP320r1", + ) + def test_to_openssl_brainpoolp320r1(self): + self.do_test_to_openssl(BRAINPOOLP320r1) + + @pytest.mark.skipif( + "brainpoolP384r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP384r1", + ) + def test_to_openssl_brainpoolp384r1(self): + self.do_test_to_openssl(BRAINPOOLP384r1) + + @pytest.mark.skipif( + "brainpoolP512r1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support brainpoolP512r1", + ) + def test_to_openssl_brainpoolp512r1(self): + self.do_test_to_openssl(BRAINPOOLP512r1) + + def do_test_to_openssl(self, curve, hash_name="SHA1"): + curvename = curve.openssl_name + assert curvename + # Python: create sk, vk, sign. 
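+        # (numbers again refer to the sk/vk/sig interop matrix above;
+        # this direction also feeds explicit-parameters and PKCS#8
+        # encodings of the private key through openssl below)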
+ # OpenSSL: read vk(4), checksig(6), read sk(2), sign, check + mdarg = self.get_openssl_messagedigest_arg(hash_name) + if os.path.isdir("t"): # pragma: no cover + shutil.rmtree("t") + os.mkdir("t") + sk = SigningKey.generate(curve=curve) + vk = sk.get_verifying_key() + data = b("data") + with open("t/pubkey.der", "wb") as e: + e.write(vk.to_der()) # 4 + with open("t/pubkey.pem", "wb") as e: + e.write(vk.to_pem()) # 4 + sig_der = sk.sign( + data, + hashfunc=partial(hashlib.new, hash_name), + sigencode=sigencode_der, + ) + + with open("t/data.sig", "wb") as e: + e.write(sig_der) # 6 + with open("t/data.txt", "wb") as e: + e.write(data) + with open("t/baddata.txt", "wb") as e: + e.write(data + b("corrupt")) + + self.assertRaises( + SubprocessError, + run_openssl, + "dgst %s -verify t/pubkey.der -keyform DER -signature t/data.sig t/baddata.txt" + % mdarg, + ) + run_openssl( + "dgst %s -verify t/pubkey.der -keyform DER -signature t/data.sig t/data.txt" + % mdarg + ) + + with open("t/privkey.pem", "wb") as e: + e.write(sk.to_pem()) # 2 + run_openssl( + "dgst %s -sign t/privkey.pem -out t/data.sig2 t/data.txt" % mdarg + ) + run_openssl( + "dgst %s -verify t/pubkey.pem -signature t/data.sig2 t/data.txt" + % mdarg + ) + + with open("t/privkey-explicit.pem", "wb") as e: + e.write(sk.to_pem(curve_parameters_encoding="explicit")) + run_openssl( + "dgst %s -sign t/privkey-explicit.pem -out t/data.sig2 t/data.txt" + % mdarg + ) + run_openssl( + "dgst %s -verify t/pubkey.pem -signature t/data.sig2 t/data.txt" + % mdarg + ) + + with open("t/privkey-p8.pem", "wb") as e: + e.write(sk.to_pem(format="pkcs8")) + run_openssl( + "dgst %s -sign t/privkey-p8.pem -out t/data.sig3 t/data.txt" + % mdarg + ) + run_openssl( + "dgst %s -verify t/pubkey.pem -signature t/data.sig3 t/data.txt" + % mdarg + ) + + with open("t/privkey-p8-explicit.pem", "wb") as e: + e.write( + sk.to_pem(format="pkcs8", curve_parameters_encoding="explicit") + ) + run_openssl( + "dgst %s -sign t/privkey-p8-explicit.pem -out t/data.sig3 t/data.txt" + % mdarg + ) + run_openssl( + "dgst %s -verify t/pubkey.pem -signature t/data.sig3 t/data.txt" + % mdarg + ) + + OPENSSL_SUPPORTED_TYPES = set() + try: + if "-rawin" in run_openssl("pkeyutl -help"): + OPENSSL_SUPPORTED_TYPES = set( + c.lower() + for c in ("ED25519", "ED448") + if c in run_openssl("list -public-key-methods") + ) + except SubprocessError: + pass + + def do_eddsa_test_to_openssl(self, curve): + curvename = curve.name.upper() + + if os.path.isdir("t"): + shutil.rmtree("t") + os.mkdir("t") + + sk = SigningKey.generate(curve=curve) + vk = sk.get_verifying_key() + + data = b"data" + with open("t/pubkey.der", "wb") as e: + e.write(vk.to_der()) + with open("t/pubkey.pem", "wb") as e: + e.write(vk.to_pem()) + + sig = sk.sign(data) + + with open("t/data.sig", "wb") as e: + e.write(sig) + with open("t/data.txt", "wb") as e: + e.write(data) + with open("t/baddata.txt", "wb") as e: + e.write(data + b"corrupt") + + with self.assertRaises(SubprocessError): + run_openssl( + "pkeyutl -verify -pubin -inkey t/pubkey.pem -rawin " + "-in t/baddata.txt -sigfile t/data.sig" + ) + run_openssl( + "pkeyutl -verify -pubin -inkey t/pubkey.pem -rawin " + "-in t/data.txt -sigfile t/data.sig" + ) + + shutil.rmtree("t") + + # in practice at least OpenSSL 3.0.0 is needed to make EdDSA signatures + # earlier versions support EdDSA only in X.509 certificates + @pytest.mark.skipif( + "ed25519" not in OPENSSL_SUPPORTED_TYPES, + reason="system openssl does not support signing with Ed25519", + ) + def 
test_to_openssl_ed25519(self): + return self.do_eddsa_test_to_openssl(Ed25519) + + @pytest.mark.skipif( + "ed448" not in OPENSSL_SUPPORTED_TYPES, + reason="system openssl does not support signing with Ed448", + ) + def test_to_openssl_ed448(self): + return self.do_eddsa_test_to_openssl(Ed448) + + def do_eddsa_test_from_openssl(self, curve): + curvename = curve.name + + if os.path.isdir("t"): + shutil.rmtree("t") + os.mkdir("t") + + data = b"data" + + run_openssl( + "genpkey -algorithm {0} -outform PEM -out t/privkey.pem".format( + curvename + ) + ) + run_openssl( + "pkey -outform PEM -pubout -in t/privkey.pem -out t/pubkey.pem" + ) + + with open("t/data.txt", "wb") as e: + e.write(data) + run_openssl( + "pkeyutl -sign -inkey t/privkey.pem " + "-rawin -in t/data.txt -out t/data.sig" + ) + + with open("t/data.sig", "rb") as e: + sig = e.read() + with open("t/pubkey.pem", "rb") as e: + vk = VerifyingKey.from_pem(e.read()) + + self.assertIs(vk.curve, curve) + + vk.verify(sig, data) + + shutil.rmtree("t") + + @pytest.mark.skipif( + "ed25519" not in OPENSSL_SUPPORTED_TYPES, + reason="system openssl does not support signing with Ed25519", + ) + def test_from_openssl_ed25519(self): + return self.do_eddsa_test_from_openssl(Ed25519) + + @pytest.mark.skipif( + "ed448" not in OPENSSL_SUPPORTED_TYPES, + reason="system openssl does not support signing with Ed448", + ) + def test_from_openssl_ed448(self): + return self.do_eddsa_test_from_openssl(Ed448) + + +class TooSmallCurve(unittest.TestCase): + OPENSSL_SUPPORTED_CURVES = set( + c.split(":")[0].strip() + for c in run_openssl("ecparam -list_curves").split("\n") + ) + + @pytest.mark.skipif( + "prime192v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime192v1", + ) + def test_sign_too_small_curve_dont_allow_truncate_raises(self): + sk = SigningKey.generate(curve=NIST192p) + data = b("data") + with self.assertRaises(BadDigestError): + sk.sign( + data, + hashfunc=partial(hashlib.new, "SHA256"), + sigencode=sigencode_der, + allow_truncate=False, + ) + + @pytest.mark.skipif( + "prime192v1" not in OPENSSL_SUPPORTED_CURVES, + reason="system openssl does not support prime192v1", + ) + def test_verify_too_small_curve_dont_allow_truncate_raises(self): + sk = SigningKey.generate(curve=NIST192p) + vk = sk.get_verifying_key() + data = b("data") + sig_der = sk.sign( + data, + hashfunc=partial(hashlib.new, "SHA256"), + sigencode=sigencode_der, + allow_truncate=True, + ) + with self.assertRaises(BadDigestError): + vk.verify( + sig_der, + data, + hashfunc=partial(hashlib.new, "SHA256"), + sigdecode=sigdecode_der, + allow_truncate=False, + ) + + +class DER(unittest.TestCase): + def test_integer(self): + self.assertEqual(der.encode_integer(0), b("\x02\x01\x00")) + self.assertEqual(der.encode_integer(1), b("\x02\x01\x01")) + self.assertEqual(der.encode_integer(127), b("\x02\x01\x7f")) + self.assertEqual(der.encode_integer(128), b("\x02\x02\x00\x80")) + self.assertEqual(der.encode_integer(256), b("\x02\x02\x01\x00")) + # self.assertEqual(der.encode_integer(-1), b("\x02\x01\xff")) + + def s(n): + return der.remove_integer(der.encode_integer(n) + b("junk")) + + self.assertEqual(s(0), (0, b("junk"))) + self.assertEqual(s(1), (1, b("junk"))) + self.assertEqual(s(127), (127, b("junk"))) + self.assertEqual(s(128), (128, b("junk"))) + self.assertEqual(s(256), (256, b("junk"))) + self.assertEqual( + s(1234567890123456789012345678901234567890), + (1234567890123456789012345678901234567890, b("junk")), + ) + + def test_number(self): + 
self.assertEqual(der.encode_number(0), b("\x00")) + self.assertEqual(der.encode_number(127), b("\x7f")) + self.assertEqual(der.encode_number(128), b("\x81\x00")) + self.assertEqual(der.encode_number(3 * 128 + 7), b("\x83\x07")) + # self.assertEqual(der.read_number("\x81\x9b" + "more"), (155, 2)) + # self.assertEqual(der.encode_number(155), b("\x81\x9b")) + for n in (0, 1, 2, 127, 128, 3 * 128 + 7, 840, 10045): # , 155): + x = der.encode_number(n) + b("more") + n1, llen = der.read_number(x) + self.assertEqual(n1, n) + self.assertEqual(x[llen:], b("more")) + + def test_length(self): + self.assertEqual(der.encode_length(0), b("\x00")) + self.assertEqual(der.encode_length(127), b("\x7f")) + self.assertEqual(der.encode_length(128), b("\x81\x80")) + self.assertEqual(der.encode_length(255), b("\x81\xff")) + self.assertEqual(der.encode_length(256), b("\x82\x01\x00")) + self.assertEqual(der.encode_length(3 * 256 + 7), b("\x82\x03\x07")) + self.assertEqual(der.read_length(b("\x81\x9b") + b("more")), (155, 2)) + self.assertEqual(der.encode_length(155), b("\x81\x9b")) + for n in (0, 1, 2, 127, 128, 255, 256, 3 * 256 + 7, 155): + x = der.encode_length(n) + b("more") + n1, llen = der.read_length(x) + self.assertEqual(n1, n) + self.assertEqual(x[llen:], b("more")) + + def test_sequence(self): + x = der.encode_sequence(b("ABC"), b("DEF")) + b("GHI") + self.assertEqual(x, b("\x30\x06ABCDEFGHI")) + x1, rest = der.remove_sequence(x) + self.assertEqual(x1, b("ABCDEF")) + self.assertEqual(rest, b("GHI")) + + def test_constructed(self): + x = der.encode_constructed(0, NIST224p.encoded_oid) + self.assertEqual(hexlify(x), b("a007") + b("06052b81040021")) + x = der.encode_constructed(1, unhexlify(b("0102030a0b0c"))) + self.assertEqual(hexlify(x), b("a106") + b("0102030a0b0c")) + + +class Util(unittest.TestCase): + def test_trytryagain(self): + tta = util.randrange_from_seed__trytryagain + for i in range(1000): + seed = "seed-%d" % i + for order in ( + 2**8 - 2, + 2**8 - 1, + 2**8, + 2**8 + 1, + 2**8 + 2, + 2**16 - 1, + 2**16 + 1, + ): + n = tta(seed, order) + self.assertTrue(1 <= n < order, (1, n, order)) + # this trytryagain *does* provide long-term stability + self.assertEqual( + ("%x" % (tta("seed", NIST224p.order))).encode(), + b("6fa59d73bf0446ae8743cf748fc5ac11d5585a90356417e97155c3bc"), + ) + + def test_trytryagain_single(self): + tta = util.randrange_from_seed__trytryagain + order = 2**8 - 2 + seed = b"text" + n = tta(seed, order) + # known issue: https://github.com/warner/python-ecdsa/issues/221 + if sys.version_info < (3, 0): # pragma: no branch + self.assertEqual(n, 228) + else: + self.assertEqual(n, 18) + + @given(st.integers(min_value=0, max_value=10**200)) + def test_randrange(self, i): + # util.randrange does not provide long-term stability: we might + # change the algorithm in the future. 
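+        # util.PRNG is deterministic for a given seed, so every generated
+        # case remains reproducible even though randrange itself makes no
+        # long-term stability promise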
+ entropy = util.PRNG("seed-%d" % i) + for order in ( + 2**8 - 2, + 2**8 - 1, + 2**8, + 2**16 - 1, + 2**16 + 1, + ): + # that oddball 2**16+1 takes half our runtime + n = util.randrange(order, entropy=entropy) + self.assertTrue(1 <= n < order, (1, n, order)) + + def OFF_test_prove_uniformity(self): # pragma: no cover + order = 2**8 - 2 + counts = dict([(i, 0) for i in range(1, order)]) + assert 0 not in counts + assert order not in counts + for i in range(1000000): + seed = "seed-%d" % i + n = util.randrange_from_seed__trytryagain(seed, order) + counts[n] += 1 + # this technique should use the full range + self.assertTrue(counts[order - 1]) + for i in range(1, order): + print_("%3d: %s" % (i, "*" * (counts[i] // 100))) + + +class RFC6979(unittest.TestCase): + # https://tools.ietf.org/html/rfc6979#appendix-A.1 + def _do(self, generator, secexp, hsh, hash_func, expected): + actual = rfc6979.generate_k(generator.order(), secexp, hash_func, hsh) + self.assertEqual(expected, actual) + + def test_SECP256k1(self): + """RFC doesn't contain test vectors for SECP256k1 used in bitcoin. + This vector has been computed by Golang reference implementation instead.""" + self._do( + generator=SECP256k1.generator, + secexp=int("9d0219792467d7d37b4d43298a7d0c05", 16), + hsh=sha256(b("sample")).digest(), + hash_func=sha256, + expected=int( + "8fa1f95d514760e498f28957b824ee6ec39ed64826ff4fecc2b5739ec45b91cd", + 16, + ), + ) + + def test_SECP256k1_2(self): + self._do( + generator=SECP256k1.generator, + secexp=int( + "cca9fbcc1b41e5a95d369eaa6ddcff73b61a4efaa279cfc6567e8daa39cbaf50", + 16, + ), + hsh=sha256(b("sample")).digest(), + hash_func=sha256, + expected=int( + "2df40ca70e639d89528a6b670d9d48d9165fdc0febc0974056bdce192b8e16a3", + 16, + ), + ) + + def test_SECP256k1_3(self): + self._do( + generator=SECP256k1.generator, + secexp=0x1, + hsh=sha256(b("Satoshi Nakamoto")).digest(), + hash_func=sha256, + expected=0x8F8A276C19F4149656B280621E358CCE24F5F52542772691EE69063B74F15D15, + ) + + def test_SECP256k1_4(self): + self._do( + generator=SECP256k1.generator, + secexp=0x1, + hsh=sha256( + b( + "All those moments will be lost in time, like tears in rain. Time to die..." 
+ ) + ).digest(), + hash_func=sha256, + expected=0x38AA22D72376B4DBC472E06C3BA403EE0A394DA63FC58D88686C611ABA98D6B3, + ) + + def test_SECP256k1_5(self): + self._do( + generator=SECP256k1.generator, + secexp=0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140, + hsh=sha256(b("Satoshi Nakamoto")).digest(), + hash_func=sha256, + expected=0x33A19B60E25FB6F4435AF53A3D42D493644827367E6453928554F43E49AA6F90, + ) + + def test_SECP256k1_6(self): + self._do( + generator=SECP256k1.generator, + secexp=0xF8B8AF8CE3C7CCA5E300D33939540C10D45CE001B8F252BFBC57BA0342904181, + hsh=sha256(b("Alan Turing")).digest(), + hash_func=sha256, + expected=0x525A82B70E67874398067543FD84C83D30C175FDC45FDEEE082FE13B1D7CFDF1, + ) + + def test_1(self): + # Basic example of the RFC, it also tests 'try-try-again' from Step H of rfc6979 + self._do( + generator=Point( + None, + 0, + 0, + int("4000000000000000000020108A2E0CC0D99F8A5EF", 16), + ), + secexp=int("09A4D6792295A7F730FC3F2B49CBC0F62E862272F", 16), + hsh=unhexlify( + b( + "AF2BDBE1AA9B6EC1E2ADE1D694F41FC71A831D0268E9891562113D8A62ADD1BF" + ) + ), + hash_func=sha256, + expected=int("23AF4074C90A02B3FE61D286D5C87F425E6BDD81B", 16), + ) + + def test_2(self): + self._do( + generator=NIST192p.generator, + secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16), + hsh=sha1(b("sample")).digest(), + hash_func=sha1, + expected=int( + "37D7CA00D2C7B0E5E412AC03BD44BA837FDD5B28CD3B0021", 16 + ), + ) + + def test_3(self): + self._do( + generator=NIST192p.generator, + secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16), + hsh=sha256(b("sample")).digest(), + hash_func=sha256, + expected=int( + "32B1B6D7D42A05CB449065727A84804FB1A3E34D8F261496", 16 + ), + ) + + def test_4(self): + self._do( + generator=NIST192p.generator, + secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16), + hsh=sha512(b("sample")).digest(), + hash_func=sha512, + expected=int( + "A2AC7AB055E4F20692D49209544C203A7D1F2C0BFBC75DB1", 16 + ), + ) + + def test_5(self): + self._do( + generator=NIST192p.generator, + secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16), + hsh=sha1(b("test")).digest(), + hash_func=sha1, + expected=int( + "D9CF9C3D3297D3260773A1DA7418DB5537AB8DD93DE7FA25", 16 + ), + ) + + def test_6(self): + self._do( + generator=NIST192p.generator, + secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16), + hsh=sha256(b("test")).digest(), + hash_func=sha256, + expected=int( + "5C4CE89CF56D9E7C77C8585339B006B97B5F0680B4306C6C", 16 + ), + ) + + def test_7(self): + self._do( + generator=NIST192p.generator, + secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16), + hsh=sha512(b("test")).digest(), + hash_func=sha512, + expected=int( + "0758753A5254759C7CFBAD2E2D9B0792EEE44136C9480527", 16 + ), + ) + + def test_8(self): + self._do( + generator=NIST521p.generator, + secexp=int( + "0FAD06DAA62BA3B25D2FB40133DA757205DE67F5BB0018FEE8C86E1B68C7E75CAA896EB32F1F47C70855836A6D16FCC1466F6D8FBEC67DB89EC0C08B0E996B83538", + 16, + ), + hsh=sha1(b("sample")).digest(), + hash_func=sha1, + expected=int( + "089C071B419E1C2820962321787258469511958E80582E95D8378E0C2CCDB3CB42BEDE42F50E3FA3C71F5A76724281D31D9C89F0F91FC1BE4918DB1C03A5838D0F9", + 16, + ), + ) + + def test_9(self): + self._do( + generator=NIST521p.generator, + secexp=int( + "0FAD06DAA62BA3B25D2FB40133DA757205DE67F5BB0018FEE8C86E1B68C7E75CAA896EB32F1F47C70855836A6D16FCC1466F6D8FBEC67DB89EC0C08B0E996B83538", + 16, + ), + hsh=sha256(b("sample")).digest(), + hash_func=sha256, + 
expected=int( + "0EDF38AFCAAECAB4383358B34D67C9F2216C8382AAEA44A3DAD5FDC9C32575761793FEF24EB0FC276DFC4F6E3EC476752F043CF01415387470BCBD8678ED2C7E1A0", + 16, + ), + ) + + def test_10(self): + self._do( + generator=NIST521p.generator, + secexp=int( + "0FAD06DAA62BA3B25D2FB40133DA757205DE67F5BB0018FEE8C86E1B68C7E75CAA896EB32F1F47C70855836A6D16FCC1466F6D8FBEC67DB89EC0C08B0E996B83538", + 16, + ), + hsh=sha512(b("test")).digest(), + hash_func=sha512, + expected=int( + "16200813020EC986863BEDFC1B121F605C1215645018AEA1A7B215A564DE9EB1B38A67AA1128B80CE391C4FB71187654AAA3431027BFC7F395766CA988C964DC56D", + 16, + ), + ) + + +class ECDH(unittest.TestCase): + def _do(self, curve, generator, dA, x_qA, y_qA, dB, x_qB, y_qB, x_Z, y_Z): + qA = dA * generator + qB = dB * generator + Z = dA * qB + self.assertEqual(Point(curve, x_qA, y_qA), qA) + self.assertEqual(Point(curve, x_qB, y_qB), qB) + self.assertTrue( + (dA * qB) + == (dA * dB * generator) + == (dB * dA * generator) + == (dB * qA) + ) + self.assertEqual(Point(curve, x_Z, y_Z), Z) + + +class RFC6932(ECDH): + # https://tools.ietf.org/html/rfc6932#appendix-A.1 + + def test_brainpoolP224r1(self): + self._do( + curve=curve_brainpoolp224r1, + generator=BRAINPOOLP224r1.generator, + dA=int( + "7C4B7A2C8A4BAD1FBB7D79CC0955DB7C6A4660CA64CC4778159B495E", 16 + ), + x_qA=int( + "B104A67A6F6E85E14EC1825E1539E8ECDBBF584922367DD88C6BDCF2", 16 + ), + y_qA=int( + "46D782E7FDB5F60CD8404301AC5949C58EDB26BC68BA07695B750A94", 16 + ), + dB=int( + "63976D4AAE6CD0F6DD18DEFEF55D96569D0507C03E74D6486FFA28FB", 16 + ), + x_qB=int( + "2A97089A9296147B71B21A4B574E1278245B536F14D8C2B9D07A874E", 16 + ), + y_qB=int( + "9B900D7C77A709A797276B8CA1BA61BB95B546FC29F862E44D59D25B", 16 + ), + x_Z=int( + "312DFD98783F9FB77B9704945A73BEB6DCCBE3B65D0F967DCAB574EB", 16 + ), + y_Z=int( + "6F800811D64114B1C48C621AB3357CF93F496E4238696A2A012B3C98", 16 + ), + ) + + def test_brainpoolP256r1(self): + self._do( + curve=curve_brainpoolp256r1, + generator=BRAINPOOLP256r1.generator, + dA=int( + "041EB8B1E2BC681BCE8E39963B2E9FC415B05283313DD1A8BCC055F11AE" + "49699", + 16, + ), + x_qA=int( + "78028496B5ECAAB3C8B6C12E45DB1E02C9E4D26B4113BC4F015F60C5C" + "CC0D206", + 16, + ), + y_qA=int( + "A2AE1762A3831C1D20F03F8D1E3C0C39AFE6F09B4D44BBE80CD100987" + "B05F92B", + 16, + ), + dB=int( + "06F5240EACDB9837BC96D48274C8AA834B6C87BA9CC3EEDD81F99A16B8D" + "804D3", + 16, + ), + x_qB=int( + "8E07E219BA588916C5B06AA30A2F464C2F2ACFC1610A3BE2FB240B635" + "341F0DB", + 16, + ), + y_qB=int( + "148EA1D7D1E7E54B9555B6C9AC90629C18B63BEE5D7AA6949EBBF47B2" + "4FDE40D", + 16, + ), + x_Z=int( + "05E940915549E9F6A4A75693716E37466ABA79B4BF2919877A16DD2CC2" + "E23708", + 16, + ), + y_Z=int( + "6BC23B6702BC5A019438CEEA107DAAD8B94232FFBBC350F3B137628FE6" + "FD134C", + 16, + ), + ) + + def test_brainpoolP384r1(self): + self._do( + curve=curve_brainpoolp384r1, + generator=BRAINPOOLP384r1.generator, + dA=int( + "014EC0755B78594BA47FB0A56F6173045B4331E74BA1A6F47322E70D79D" + "828D97E095884CA72B73FDABD5910DF0FA76A", + 16, + ), + x_qA=int( + "45CB26E4384DAF6FB776885307B9A38B7AD1B5C692E0C32F012533277" + "8F3B8D3F50CA358099B30DEB5EE69A95C058B4E", + 16, + ), + y_qA=int( + "8173A1C54AFFA7E781D0E1E1D12C0DC2B74F4DF58E4A4E3AF7026C5D3" + "2DC530A2CD89C859BB4B4B768497F49AB8CC859", + 16, + ), + dB=int( + "6B461CB79BD0EA519A87D6828815D8CE7CD9B3CAA0B5A8262CBCD550A01" + "5C90095B976F3529957506E1224A861711D54", + 16, + ), + x_qB=int( + "01BF92A92EE4BE8DED1A911125C209B03F99E3161CFCC986DC7711383" + "FC30AF9CE28CA3386D59E2C8D72CE1E7B4666E8", + 16, + ), + 
y_qB=int( + "3289C4A3A4FEE035E39BDB885D509D224A142FF9FBCC5CFE5CCBB3026" + "8EE47487ED8044858D31D848F7A95C635A347AC", + 16, + ), + x_Z=int( + "04CC4FF3DCCCB07AF24E0ACC529955B36D7C807772B92FCBE48F3AFE9A" + "2F370A1F98D3FA73FD0C0747C632E12F1423EC", + 16, + ), + y_Z=int( + "7F465F90BD69AFB8F828A214EB9716D66ABC59F17AF7C75EE7F1DE22AB" + "5D05085F5A01A9382D05BF72D96698FE3FF64E", + 16, + ), + ) + + def test_brainpoolP512r1(self): + self._do( + curve=curve_brainpoolp512r1, + generator=BRAINPOOLP512r1.generator, + dA=int( + "636B6BE0482A6C1C41AA7AE7B245E983392DB94CECEA2660A379CFE1595" + "59E357581825391175FC195D28BAC0CF03A7841A383B95C262B98378287" + "4CCE6FE333", + 16, + ), + x_qA=int( + "0562E68B9AF7CBFD5565C6B16883B777FF11C199161ECC427A39D17EC" + "2166499389571D6A994977C56AD8252658BA8A1B72AE42F4FB7532151" + "AFC3EF0971CCDA", + 16, + ), + y_qA=int( + "A7CA2D8191E21776A89860AFBC1F582FAA308D551C1DC6133AF9F9C3C" + "AD59998D70079548140B90B1F311AFB378AA81F51B275B2BE6B7DEE97" + "8EFC7343EA642E", + 16, + ), + dB=int( + "0AF4E7F6D52EDD52907BB8DBAB3992A0BB696EC10DF11892FF205B66D38" + "1ECE72314E6A6EA079CEA06961DBA5AE6422EF2E9EE803A1F236FB96A17" + "99B86E5C8B", + 16, + ), + x_qB=int( + "5A7954E32663DFF11AE24712D87419F26B708AC2B92877D6BFEE2BFC4" + "3714D89BBDB6D24D807BBD3AEB7F0C325F862E8BADE4F74636B97EAAC" + "E739E11720D323", + 16, + ), + y_qB=int( + "96D14621A9283A1BED84DE8DD64836B2C0758B11441179DC0C54C0D49" + "A47C03807D171DD544B72CAAEF7B7CE01C7753E2CAD1A861ECA55A719" + "54EE1BA35E04BE", + 16, + ), + x_Z=int( + "1EE8321A4BBF93B9CF8921AB209850EC9B7066D1984EF08C2BB7232362" + "08AC8F1A483E79461A00E0D5F6921CE9D360502F85C812BEDEE23AC5B2" + "10E5811B191E", + 16, + ), + y_Z=int( + "2632095B7B936174B41FD2FAF369B1D18DCADEED7E410A7E251F083109" + "7C50D02CFED02607B6A2D5ADB4C0006008562208631875B58B54ECDA5A" + "4F9FE9EAABA6", + 16, + ), + ) + + +class RFC7027(ECDH): + # https://tools.ietf.org/html/rfc7027#appendix-A + + def test_brainpoolP256r1(self): + self._do( + curve=curve_brainpoolp256r1, + generator=BRAINPOOLP256r1.generator, + dA=int( + "81DB1EE100150FF2EA338D708271BE38300CB54241D79950F77B0630398" + "04F1D", + 16, + ), + x_qA=int( + "44106E913F92BC02A1705D9953A8414DB95E1AAA49E81D9E85F929A8E" + "3100BE5", + 16, + ), + y_qA=int( + "8AB4846F11CACCB73CE49CBDD120F5A900A69FD32C272223F789EF10E" + "B089BDC", + 16, + ), + dB=int( + "55E40BC41E37E3E2AD25C3C6654511FFA8474A91A0032087593852D3E7D" + "76BD3", + 16, + ), + x_qB=int( + "8D2D688C6CF93E1160AD04CC4429117DC2C41825E1E9FCA0ADDD34E6F" + "1B39F7B", + 16, + ), + y_qB=int( + "990C57520812BE512641E47034832106BC7D3E8DD0E4C7F1136D70065" + "47CEC6A", + 16, + ), + x_Z=int( + "89AFC39D41D3B327814B80940B042590F96556EC91E6AE7939BCE31F3A" + "18BF2B", + 16, + ), + y_Z=int( + "49C27868F4ECA2179BFD7D59B1E3BF34C1DBDE61AE12931648F43E5963" + "2504DE", + 16, + ), + ) + + def test_brainpoolP384r1(self): + self._do( + curve=curve_brainpoolp384r1, + generator=BRAINPOOLP384r1.generator, + dA=int( + "1E20F5E048A5886F1F157C74E91BDE2B98C8B52D58E5003D57053FC4B0B" + "D65D6F15EB5D1EE1610DF870795143627D042", + 16, + ), + x_qA=int( + "68B665DD91C195800650CDD363C625F4E742E8134667B767B1B476793" + "588F885AB698C852D4A6E77A252D6380FCAF068", + 16, + ), + y_qA=int( + "55BC91A39C9EC01DEE36017B7D673A931236D2F1F5C83942D049E3FA2" + "0607493E0D038FF2FD30C2AB67D15C85F7FAA59", + 16, + ), + dB=int( + "032640BC6003C59260F7250C3DB58CE647F98E1260ACCE4ACDA3DD869F7" + "4E01F8BA5E0324309DB6A9831497ABAC96670", + 16, + ), + x_qB=int( + "4D44326F269A597A5B58BBA565DA5556ED7FD9A8A9EB76C25F46DB69D" + 
"19DC8CE6AD18E404B15738B2086DF37E71D1EB4", + 16, + ), + y_qB=int( + "62D692136DE56CBE93BF5FA3188EF58BC8A3A0EC6C1E151A21038A42E" + "9185329B5B275903D192F8D4E1F32FE9CC78C48", + 16, + ), + x_Z=int( + "0BD9D3A7EA0B3D519D09D8E48D0785FB744A6B355E6304BC51C229FBBC" + "E239BBADF6403715C35D4FB2A5444F575D4F42", + 16, + ), + y_Z=int( + "0DF213417EBE4D8E40A5F76F66C56470C489A3478D146DECF6DF0D94BA" + "E9E598157290F8756066975F1DB34B2324B7BD", + 16, + ), + ) + + def test_brainpoolP512r1(self): + self._do( + curve=curve_brainpoolp512r1, + generator=BRAINPOOLP512r1.generator, + dA=int( + "16302FF0DBBB5A8D733DAB7141C1B45ACBC8715939677F6A56850A38BD8" + "7BD59B09E80279609FF333EB9D4C061231FB26F92EEB04982A5F1D1764C" + "AD57665422", + 16, + ), + x_qA=int( + "0A420517E406AAC0ACDCE90FCD71487718D3B953EFD7FBEC5F7F27E28" + "C6149999397E91E029E06457DB2D3E640668B392C2A7E737A7F0BF044" + "36D11640FD09FD", + 16, + ), + y_qA=int( + "72E6882E8DB28AAD36237CD25D580DB23783961C8DC52DFA2EC138AD4" + "72A0FCEF3887CF62B623B2A87DE5C588301EA3E5FC269B373B60724F5" + "E82A6AD147FDE7", + 16, + ), + dB=int( + "230E18E1BCC88A362FA54E4EA3902009292F7F8033624FD471B5D8ACE49" + "D12CFABBC19963DAB8E2F1EBA00BFFB29E4D72D13F2224562F405CB8050" + "3666B25429", + 16, + ), + x_qB=int( + "9D45F66DE5D67E2E6DB6E93A59CE0BB48106097FF78A081DE781CDB31" + "FCE8CCBAAEA8DD4320C4119F1E9CD437A2EAB3731FA9668AB268D871D" + "EDA55A5473199F", + 16, + ), + y_qB=int( + "2FDC313095BCDD5FB3A91636F07A959C8E86B5636A1E930E8396049CB" + "481961D365CC11453A06C719835475B12CB52FC3C383BCE35E27EF194" + "512B71876285FA", + 16, + ), + x_Z=int( + "A7927098655F1F9976FA50A9D566865DC530331846381C87256BAF3226" + "244B76D36403C024D7BBF0AA0803EAFF405D3D24F11A9B5C0BEF679FE1" + "454B21C4CD1F", + 16, + ), + y_Z=int( + "7DB71C3DEF63212841C463E881BDCF055523BD368240E6C3143BD8DEF8" + "B3B3223B95E0F53082FF5E412F4222537A43DF1C6D25729DDB51620A83" + "2BE6A26680A2", + 16, + ), + ) + + +# https://tools.ietf.org/html/rfc4754#page-5 +@pytest.mark.parametrize( + "w, gwx, gwy, k, msg, md, r, s, curve", + [ + pytest.param( + "DC51D3866A15BACDE33D96F992FCA99DA7E6EF0934E7097559C27F1614C88A7F", + "2442A5CC0ECD015FA3CA31DC8E2BBC70BF42D60CBCA20085E0822CB04235E970", + "6FC98BD7E50211A4A27102FA3549DF79EBCB4BF246B80945CDDFE7D509BBFD7D", + "9E56F509196784D963D1C0A401510EE7ADA3DCC5DEE04B154BF61AF1D5A6DECE", + b"abc", + sha256, + "CB28E0999B9C7715FD0A80D8E47A77079716CBBF917DD72E97566EA1C066957C", + "86FA3BB4E26CAD5BF90B7F81899256CE7594BB1EA0C89212748BFF3B3D5B0315", + NIST256p, + id="ECDSA-256", + ), + pytest.param( + "0BEB646634BA87735D77AE4809A0EBEA865535DE4C1E1DCB692E84708E81A5AF" + "62E528C38B2A81B35309668D73524D9F", + "96281BF8DD5E0525CA049C048D345D3082968D10FEDF5C5ACA0C64E6465A97EA" + "5CE10C9DFEC21797415710721F437922", + "447688BA94708EB6E2E4D59F6AB6D7EDFF9301D249FE49C33096655F5D502FAD" + "3D383B91C5E7EDAA2B714CC99D5743CA", + "B4B74E44D71A13D568003D7489908D564C7761E229C58CBFA18950096EB7463B" + "854D7FA992F934D927376285E63414FA", + b"abc", + sha384, + "FB017B914E29149432D8BAC29A514640B46F53DDAB2C69948084E2930F1C8F7E" + "08E07C9C63F2D21A07DCB56A6AF56EB3", + "B263A1305E057F984D38726A1B46874109F417BCA112674C528262A40A629AF1" + "CBB9F516CE0FA7D2FF630863A00E8B9F", + NIST384p, + id="ECDSA-384", + ), + pytest.param( + "0065FDA3409451DCAB0A0EAD45495112A3D813C17BFD34BDF8C1209D7DF58491" + "20597779060A7FF9D704ADF78B570FFAD6F062E95C7E0C5D5481C5B153B48B37" + "5FA1", + "0151518F1AF0F563517EDD5485190DF95A4BF57B5CBA4CF2A9A3F6474725A35F" + "7AFE0A6DDEB8BEDBCD6A197E592D40188901CECD650699C9B5E456AEA5ADD190" + "52A8", + 
"006F3B142EA1BFFF7E2837AD44C9E4FF6D2D34C73184BBAD90026DD5E6E85317" + "D9DF45CAD7803C6C20035B2F3FF63AFF4E1BA64D1C077577DA3F4286C58F0AEA" + "E643", + "00C1C2B305419F5A41344D7E4359933D734096F556197A9B244342B8B62F46F9" + "373778F9DE6B6497B1EF825FF24F42F9B4A4BD7382CFC3378A540B1B7F0C1B95" + "6C2F", + b"abc", + sha512, + "0154FD3836AF92D0DCA57DD5341D3053988534FDE8318FC6AAAAB68E2E6F4339" + "B19F2F281A7E0B22C269D93CF8794A9278880ED7DBB8D9362CAEACEE54432055" + "2251", + "017705A7030290D1CEB605A9A1BB03FF9CDD521E87A696EC926C8C10C8362DF4" + "975367101F67D1CF9BCCBF2F3D239534FA509E70AAC851AE01AAC68D62F86647" + "2660", + NIST521p, + id="ECDSA-521", + ), + ], +) +def test_RFC4754_vectors(w, gwx, gwy, k, msg, md, r, s, curve): + sk = SigningKey.from_string(unhexlify(w), curve) + vk = VerifyingKey.from_string(unhexlify(gwx + gwy), curve) + assert sk.verifying_key == vk + sig = sk.sign(msg, hashfunc=md, sigencode=sigencode_strings, k=int(k, 16)) + + assert sig == (unhexlify(r), unhexlify(s)) + + assert vk.verify(sig, msg, md, sigdecode_strings) diff --git a/env/lib/python3.8/site-packages/ecdsa/test_rw_lock.py b/env/lib/python3.8/site-packages/ecdsa/test_rw_lock.py new file mode 100644 index 00000000..0a84b9c7 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/test_rw_lock.py @@ -0,0 +1,180 @@ +# Copyright Mateusz Kobos, (c) 2011 +# https://code.activestate.com/recipes/577803-reader-writer-lock-with-priority-for-writers/ +# released under the MIT licence + +try: + import unittest2 as unittest +except ImportError: + import unittest +import threading +import time +import copy +from ._rwlock import RWLock + + +class Writer(threading.Thread): + def __init__( + self, buffer_, rw_lock, init_sleep_time, sleep_time, to_write + ): + """ + @param buffer_: common buffer_ shared by the readers and writers + @type buffer_: list + @type rw_lock: L{RWLock} + @param init_sleep_time: sleep time before doing any action + @type init_sleep_time: C{float} + @param sleep_time: sleep time while in critical section + @type sleep_time: C{float} + @param to_write: data that will be appended to the buffer + """ + threading.Thread.__init__(self) + self.__buffer = buffer_ + self.__rw_lock = rw_lock + self.__init_sleep_time = init_sleep_time + self.__sleep_time = sleep_time + self.__to_write = to_write + self.entry_time = None + """Time of entry to the critical section""" + self.exit_time = None + """Time of exit from the critical section""" + + def run(self): + time.sleep(self.__init_sleep_time) + self.__rw_lock.writer_acquire() + self.entry_time = time.time() + time.sleep(self.__sleep_time) + self.__buffer.append(self.__to_write) + self.exit_time = time.time() + self.__rw_lock.writer_release() + + +class Reader(threading.Thread): + def __init__(self, buffer_, rw_lock, init_sleep_time, sleep_time): + """ + @param buffer_: common buffer shared by the readers and writers + @type buffer_: list + @type rw_lock: L{RWLock} + @param init_sleep_time: sleep time before doing any action + @type init_sleep_time: C{float} + @param sleep_time: sleep time while in critical section + @type sleep_time: C{float} + """ + threading.Thread.__init__(self) + self.__buffer = buffer_ + self.__rw_lock = rw_lock + self.__init_sleep_time = init_sleep_time + self.__sleep_time = sleep_time + self.buffer_read = None + """a copy of a the buffer read while in critical section""" + self.entry_time = None + """Time of entry to the critical section""" + self.exit_time = None + """Time of exit from the critical section""" + + def run(self): + 
time.sleep(self.__init_sleep_time) + self.__rw_lock.reader_acquire() + self.entry_time = time.time() + time.sleep(self.__sleep_time) + self.buffer_read = copy.deepcopy(self.__buffer) + self.exit_time = time.time() + self.__rw_lock.reader_release() + + +class RWLockTestCase(unittest.TestCase): + def test_readers_nonexclusive_access(self): + (buffer_, rw_lock, threads) = self.__init_variables() + + threads.append(Reader(buffer_, rw_lock, 0, 0)) + threads.append(Writer(buffer_, rw_lock, 0.2, 0.4, 1)) + threads.append(Reader(buffer_, rw_lock, 0.3, 0.3)) + threads.append(Reader(buffer_, rw_lock, 0.5, 0)) + + self.__start_and_join_threads(threads) + + ## The third reader should enter after the second one but it should + ## exit before the second one exits + ## (i.e. the readers should be in the critical section + ## at the same time) + + self.assertEqual([], threads[0].buffer_read) + self.assertEqual([1], threads[2].buffer_read) + self.assertEqual([1], threads[3].buffer_read) + self.assertTrue(threads[1].exit_time <= threads[2].entry_time) + self.assertTrue(threads[2].entry_time <= threads[3].entry_time) + self.assertTrue(threads[3].exit_time < threads[2].exit_time) + + def test_writers_exclusive_access(self): + (buffer_, rw_lock, threads) = self.__init_variables() + + threads.append(Writer(buffer_, rw_lock, 0, 0.4, 1)) + threads.append(Writer(buffer_, rw_lock, 0.1, 0, 2)) + threads.append(Reader(buffer_, rw_lock, 0.2, 0)) + + self.__start_and_join_threads(threads) + + ## The second writer should wait for the first one to exit + + self.assertEqual([1, 2], threads[2].buffer_read) + self.assertTrue(threads[0].exit_time <= threads[1].entry_time) + self.assertTrue(threads[1].exit_time <= threads[2].exit_time) + + def test_writer_priority(self): + (buffer_, rw_lock, threads) = self.__init_variables() + + threads.append(Writer(buffer_, rw_lock, 0, 0, 1)) + threads.append(Reader(buffer_, rw_lock, 0.1, 0.4)) + threads.append(Writer(buffer_, rw_lock, 0.2, 0, 2)) + threads.append(Reader(buffer_, rw_lock, 0.3, 0)) + threads.append(Reader(buffer_, rw_lock, 0.3, 0)) + + self.__start_and_join_threads(threads) + + ## The second writer should go before the second and the third reader + + self.assertEqual([1], threads[1].buffer_read) + self.assertEqual([1, 2], threads[3].buffer_read) + self.assertEqual([1, 2], threads[4].buffer_read) + self.assertTrue(threads[0].exit_time < threads[1].entry_time) + self.assertTrue(threads[1].exit_time <= threads[2].entry_time) + self.assertTrue(threads[2].exit_time <= threads[3].entry_time) + self.assertTrue(threads[2].exit_time <= threads[4].entry_time) + + def test_many_writers_priority(self): + (buffer_, rw_lock, threads) = self.__init_variables() + + threads.append(Writer(buffer_, rw_lock, 0, 0, 1)) + threads.append(Reader(buffer_, rw_lock, 0.1, 0.6)) + threads.append(Writer(buffer_, rw_lock, 0.2, 0.1, 2)) + threads.append(Reader(buffer_, rw_lock, 0.3, 0)) + threads.append(Reader(buffer_, rw_lock, 0.4, 0)) + threads.append(Writer(buffer_, rw_lock, 0.5, 0.1, 3)) + + self.__start_and_join_threads(threads) + + ## The two last writers should go first -- after the first reader and + ## before the second and the third reader + + self.assertEqual([1], threads[1].buffer_read) + self.assertEqual([1, 2, 3], threads[3].buffer_read) + self.assertEqual([1, 2, 3], threads[4].buffer_read) + self.assertTrue(threads[0].exit_time < threads[1].entry_time) + self.assertTrue(threads[1].exit_time <= threads[2].entry_time) + self.assertTrue(threads[1].exit_time <= threads[5].entry_time) + 
self.assertTrue(threads[2].exit_time <= threads[3].entry_time)
+        self.assertTrue(threads[2].exit_time <= threads[4].entry_time)
+        self.assertTrue(threads[5].exit_time <= threads[3].entry_time)
+        self.assertTrue(threads[5].exit_time <= threads[4].entry_time)
+
+    @staticmethod
+    def __init_variables():
+        buffer_ = []
+        rw_lock = RWLock()
+        threads = []
+        return (buffer_, rw_lock, threads)
+
+    @staticmethod
+    def __start_and_join_threads(threads):
+        for t in threads:
+            t.start()
+        for t in threads:
+            t.join()
diff --git a/env/lib/python3.8/site-packages/ecdsa/test_sha3.py b/env/lib/python3.8/site-packages/ecdsa/test_sha3.py
new file mode 100644
index 00000000..2c6bd15c
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ecdsa/test_sha3.py
@@ -0,0 +1,111 @@
+try:
+    import unittest2 as unittest
+except ImportError:
+    import unittest
+import pytest
+
+try:
+    from gmpy2 import mpz
+
+    GMPY = True
+except ImportError:
+    try:
+        from gmpy import mpz
+
+        GMPY = True
+    except ImportError:
+        GMPY = False
+
+from ._sha3 import shake_256
+from ._compat import bytes_to_int, int_to_bytes
+
+B2I_VECTORS = [
+    (b"\x00\x01", "big", 1),
+    (b"\x00\x01", "little", 0x0100),
+    (b"", "big", 0),
+    (b"\x00", "little", 0),
+]
+
+
+@pytest.mark.parametrize("bytes_in,endian,int_out", B2I_VECTORS)
+def test_bytes_to_int(bytes_in, endian, int_out):
+    out = bytes_to_int(bytes_in, endian)
+    assert out == int_out
+
+
+class TestBytesToInt(unittest.TestCase):
+    def test_bytes_to_int_wrong_endian(self):
+        with self.assertRaises(ValueError):
+            bytes_to_int(b"\x00", "middle")
+
+    def test_int_to_bytes_wrong_endian(self):
+        with self.assertRaises(ValueError):
+            int_to_bytes(0, byteorder="middle")
+
+
+@pytest.mark.skipif(GMPY == False, reason="requires gmpy or gmpy2")
+def test_int_to_bytes_with_gmpy():
+    assert int_to_bytes(mpz(1)) == b"\x01"
+
+
+I2B_VECTORS = [
+    (0, None, "big", b""),
+    (0, 1, "big", b"\x00"),
+    (1, None, "big", b"\x01"),
+    (0x0100, None, "little", b"\x00\x01"),
+    (0x0100, 4, "little", b"\x00\x01\x00\x00"),
+    (1, 4, "big", b"\x00\x00\x00\x01"),
+]
+
+
+@pytest.mark.parametrize("int_in,length,endian,bytes_out", I2B_VECTORS)
+def test_int_to_bytes(int_in, length, endian, bytes_out):
+    out = int_to_bytes(int_in, length, endian)
+    assert out == bytes_out
+
+
+SHAKE_256_VECTORS = [
+    (
+        b"Message.",
+        32,
+        b"\x78\xa1\x37\xbb\x33\xae\xe2\x72\xb1\x02\x4f\x39\x43\xe5\xcf\x0c"
+        b"\x4e\x9c\x72\x76\x2e\x34\x4c\xf8\xf9\xc3\x25\x9d\x4f\x91\x2c\x3a",
+    ),
+    (
+        b"",
+        32,
+        b"\x46\xb9\xdd\x2b\x0b\xa8\x8d\x13\x23\x3b\x3f\xeb\x74\x3e\xeb\x24"
+        b"\x3f\xcd\x52\xea\x62\xb8\x1b\x82\xb5\x0c\x27\x64\x6e\xd5\x76\x2f",
+    ),
+    (
+        b"message",
+        32,
+        b"\x86\x16\xe1\xe4\xcf\xd8\xb5\xf7\xd9\x2d\x43\xd8\x6e\x1b\x14\x51"
+        b"\xa2\xa6\x5a\xf8\x64\xfc\xb1\x26\xc2\x66\x0a\xb3\x46\x51\xb1\x75",
+    ),
+    (
+        b"message",
+        16,
+        b"\x86\x16\xe1\xe4\xcf\xd8\xb5\xf7\xd9\x2d\x43\xd8\x6e\x1b\x14\x51",
+    ),
+    (
+        b"message",
+        64,
+        b"\x86\x16\xe1\xe4\xcf\xd8\xb5\xf7\xd9\x2d\x43\xd8\x6e\x1b\x14\x51"
+        b"\xa2\xa6\x5a\xf8\x64\xfc\xb1\x26\xc2\x66\x0a\xb3\x46\x51\xb1\x75"
+        b"\x30\xd6\xba\x2a\x46\x65\xf1\x9d\xf0\x62\x25\xb1\x26\xd1\x3e\xed"
+        b"\x91\xd5\x0d\xe7\xb9\xcb\x65\xf3\x3a\x46\xae\xd3\x6c\x7d\xc5\xe8",
+    ),
+    (
+        b"A" * 1024,
+        32,
+        b"\xa5\xef\x7e\x30\x8b\xe8\x33\x64\xe5\x9c\xf3\xb5\xf3\xba\x20\xa3"
+        b"\x5a\xe7\x30\xfd\xbc\x33\x11\xbf\x83\x89\x50\x82\xb4\x41\xe9\xb3",
+    ),
+]
+
+
+@pytest.mark.parametrize("msg,olen,ohash", SHAKE_256_VECTORS)
+def test_shake_256(msg, olen, ohash):
+    out = shake_256(msg, olen)
+    assert out == bytearray(ohash)
diff --git a/env/lib/python3.8/site-packages/ecdsa/util.py b/env/lib/python3.8/site-packages/ecdsa/util.py new file mode 100644 index 00000000..9a561105 --- /dev/null +++ b/env/lib/python3.8/site-packages/ecdsa/util.py @@ -0,0 +1,433 @@ +from __future__ import division + +import os +import math +import binascii +import sys +from hashlib import sha256 +from six import PY2, int2byte, b, next +from . import der +from ._compat import normalise_bytes + +# RFC5480: +# The "unrestricted" algorithm identifier is: +# id-ecPublicKey OBJECT IDENTIFIER ::= { +# iso(1) member-body(2) us(840) ansi-X9-62(10045) keyType(2) 1 } + +oid_ecPublicKey = (1, 2, 840, 10045, 2, 1) +encoded_oid_ecPublicKey = der.encode_oid(*oid_ecPublicKey) + +# RFC5480: +# The ECDH algorithm uses the following object identifier: +# id-ecDH OBJECT IDENTIFIER ::= { +# iso(1) identified-organization(3) certicom(132) schemes(1) +# ecdh(12) } + +oid_ecDH = (1, 3, 132, 1, 12) + +# RFC5480: +# The ECMQV algorithm uses the following object identifier: +# id-ecMQV OBJECT IDENTIFIER ::= { +# iso(1) identified-organization(3) certicom(132) schemes(1) +# ecmqv(13) } + +oid_ecMQV = (1, 3, 132, 1, 13) + +if sys.version_info >= (3,): # pragma: no branch + + def entropy_to_bits(ent_256): + """Convert a bytestring to string of 0's and 1's""" + return bin(int.from_bytes(ent_256, "big"))[2:].zfill(len(ent_256) * 8) + +else: + + def entropy_to_bits(ent_256): + """Convert a bytestring to string of 0's and 1's""" + return "".join(bin(ord(x))[2:].zfill(8) for x in ent_256) + + +if sys.version_info < (2, 7): # pragma: no branch + # Can't add a method to a built-in type so we are stuck with this + def bit_length(x): + return len(bin(x)) - 2 + +else: + + def bit_length(x): + return x.bit_length() or 1 + + +def orderlen(order): + return (1 + len("%x" % order)) // 2 # bytes + + +def randrange(order, entropy=None): + """Return a random integer k such that 1 <= k < order, uniformly + distributed across that range. Worst case should be a mean of 2 loops at + (2**k)+2. + + Note that this function is not declared to be forwards-compatible: we may + change the behavior in future releases. The entropy= argument (which + should get a callable that behaves like os.urandom) can be used to + achieve stability within a given release (for repeatable unit tests), but + should not be used as a long-term-compatible key generation algorithm. + """ + assert order > 1 + if entropy is None: + entropy = os.urandom + upper_2 = bit_length(order - 2) + upper_256 = upper_2 // 8 + 1 + while True: # I don't think this needs a counter with bit-wise randrange + ent_256 = entropy(upper_256) + ent_2 = entropy_to_bits(ent_256) + rand_num = int(ent_2[:upper_2], base=2) + 1 + if 0 < rand_num < order: + return rand_num + + +class PRNG: + # this returns a callable which, when invoked with an integer N, will + # return N pseudorandom bytes. Note: this is a short-term PRNG, meant + # primarily for the needs of randrange_from_seed__trytryagain(), which + # only needs to run it a few times per seed. It does not provide + # protection against state compromise (forward security). 
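+    # illustrative use (follows the interface defined just below):
+    #   prng = PRNG("seed-0")
+    #   prng(4)  # -> first 4 bytes of sha256(b"prng-0-seed-0").digest()
+    #   prng(4)  # -> the next 4 bytes of the same stream, deterministically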
+    def __init__(self, seed):
+        self.generator = self.block_generator(seed)
+
+    def __call__(self, numbytes):
+        a = [next(self.generator) for i in range(numbytes)]
+
+        if PY2:  # pragma: no branch
+            return "".join(a)
+        else:
+            return bytes(a)
+
+    def block_generator(self, seed):
+        counter = 0
+        while True:
+            for byte in sha256(
+                ("prng-%d-%s" % (counter, seed)).encode()
+            ).digest():
+                yield byte
+            counter += 1
+
+
+def randrange_from_seed__overshoot_modulo(seed, order):
+    # hash the data, then turn the digest into a number in [1,order).
+    #
+    # We use David-Sarah Hopwood's suggestion: turn it into a number that's
+    # sufficiently larger than the group order, then modulo it down to fit.
+    # This should give adequate (but not perfect) uniformity, and simple
+    # code. There are other choices: try-try-again is the main one.
+    base = PRNG(seed)(2 * orderlen(order))
+    number = (int(binascii.hexlify(base), 16) % (order - 1)) + 1
+    assert 1 <= number < order, (1, number, order)
+    return number
+
+
+def lsb_of_ones(numbits):
+    return (1 << numbits) - 1
+
+
+def bits_and_bytes(order):
+    bits = int(math.log(order - 1, 2) + 1)
+    bytes = bits // 8
+    extrabits = bits % 8
+    return bits, bytes, extrabits
+
+
+# the following randrange_from_seed__METHOD() functions take an
+# arbitrarily-sized secret seed and turn it into a number that obeys the same
+# range limits as randrange() above. They are meant for deriving consistent
+# signing keys from a secret rather than generating them randomly, for
+# example a protocol in which three signing keys are derived from a master
+# secret. You should use a uniformly-distributed unguessable seed with about
+# curve.baselen bytes of entropy. To use one, do this:
+#   seed = os.urandom(curve.baselen) # or other starting point
+#   secexp = ecdsa.util.randrange_from_seed__trytryagain(seed, curve.order)
+#   sk = SigningKey.from_secret_exponent(secexp, curve)
+
+
+def randrange_from_seed__truncate_bytes(seed, order, hashmod=sha256):
+    # hash the seed, then turn the digest into a number in [1,order), but
+    # don't worry about trying to uniformly fill the range. This will lose,
+    # on average, four bits of entropy.
+    bits, _bytes, extrabits = bits_and_bytes(order)
+    if extrabits:
+        _bytes += 1
+    base = hashmod(seed).digest()[:_bytes]
+    base = "\x00" * (_bytes - len(base)) + base
+    number = 1 + int(binascii.hexlify(base), 16)
+    assert 1 <= number < order
+    return number
+
+
+def randrange_from_seed__truncate_bits(seed, order, hashmod=sha256):
+    # like string_to_randrange_truncate_bytes, but only lose an average of
+    # half a bit
+    bits = int(math.log(order - 1, 2) + 1)
+    maxbytes = (bits + 7) // 8
+    base = hashmod(seed).digest()[:maxbytes]
+    base = "\x00" * (maxbytes - len(base)) + base
+    topbits = 8 * maxbytes - bits
+    if topbits:
+        base = int2byte(ord(base[0]) & lsb_of_ones(topbits)) + base[1:]
+    number = 1 + int(binascii.hexlify(base), 16)
+    assert 1 <= number < order
+    return number
+
+
+def randrange_from_seed__trytryagain(seed, order):
+    # figure out exactly how many bits we need (rounded up to the nearest
+    # bit), so we can reduce the chance of looping to less than 0.5. This is
+    # specified to feed from a byte-oriented PRNG, and discards the
+    # high-order bits of the first byte as necessary to get the right number
+    # of bits. The average number of loops will range from 1.0 (when
+    # order=2**k-1) to 2.0 (when order=2**k+1).
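+    # worked example (illustrative): for order = 2**8 - 2, bits_and_bytes()
+    # gives bits=8, bytes=1, extrabits=0, so each loop draws exactly one
+    # byte and the guess lands in [1, order) with probability 253/256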
+ assert order > 1 + bits, bytes, extrabits = bits_and_bytes(order) + generate = PRNG(seed) + while True: + extrabyte = b("") + if extrabits: + extrabyte = int2byte(ord(generate(1)) & lsb_of_ones(extrabits)) + guess = string_to_number(extrabyte + generate(bytes)) + 1 + if 1 <= guess < order: + return guess + + +def number_to_string(num, order): + l = orderlen(order) + fmt_str = "%0" + str(2 * l) + "x" + string = binascii.unhexlify((fmt_str % num).encode()) + assert len(string) == l, (len(string), l) + return string + + +def number_to_string_crop(num, order): + l = orderlen(order) + fmt_str = "%0" + str(2 * l) + "x" + string = binascii.unhexlify((fmt_str % num).encode()) + return string[:l] + + +def string_to_number(string): + return int(binascii.hexlify(string), 16) + + +def string_to_number_fixedlen(string, order): + l = orderlen(order) + assert len(string) == l, (len(string), l) + return int(binascii.hexlify(string), 16) + + +# these methods are useful for the sigencode= argument to SK.sign() and the +# sigdecode= argument to VK.verify(), and control how the signature is packed +# or unpacked. + + +def sigencode_strings(r, s, order): + r_str = number_to_string(r, order) + s_str = number_to_string(s, order) + return (r_str, s_str) + + +def sigencode_string(r, s, order): + """ + Encode the signature to raw format (:term:`raw encoding`) + + It's expected that this function will be used as a `sigencode=` parameter + in :func:`ecdsa.keys.SigningKey.sign` method. + + :param int r: first parameter of the signature + :param int s: second parameter of the signature + :param int order: the order of the curve over which the signature was + computed + + :return: raw encoding of ECDSA signature + :rtype: bytes + """ + # for any given curve, the size of the signature numbers is + # fixed, so just use simple concatenation + r_str, s_str = sigencode_strings(r, s, order) + return r_str + s_str + + +def sigencode_der(r, s, order): + """ + Encode the signature into the ECDSA-Sig-Value structure using :term:`DER`. + + Encodes the signature to the following :term:`ASN.1` structure:: + + Ecdsa-Sig-Value ::= SEQUENCE { + r INTEGER, + s INTEGER + } + + It's expected that this function will be used as a `sigencode=` parameter + in :func:`ecdsa.keys.SigningKey.sign` method. + + :param int r: first parameter of the signature + :param int s: second parameter of the signature + :param int order: the order of the curve over which the signature was + computed + + :return: DER encoding of ECDSA signature + :rtype: bytes + """ + return der.encode_sequence(der.encode_integer(r), der.encode_integer(s)) + + +# canonical versions of sigencode methods +# these enforce low S values, by negating the value (modulo the order) if +# above order/2 see CECKey::Sign() +# https://github.com/bitcoin/bitcoin/blob/master/src/key.cpp#L214 +def sigencode_strings_canonize(r, s, order): + if s > order / 2: + s = order - s + return sigencode_strings(r, s, order) + + +def sigencode_string_canonize(r, s, order): + if s > order / 2: + s = order - s + return sigencode_string(r, s, order) + + +def sigencode_der_canonize(r, s, order): + if s > order / 2: + s = order - s + return sigencode_der(r, s, order) + + +class MalformedSignature(Exception): + """ + Raised by decoding functions when the signature is malformed. + + Malformed in this context means that the relevant strings or integers + do not match what a signature over provided curve would create. 
Either
+    because the byte strings have incorrect lengths or because the encoded
+    values are too large.
+    """
+
+    pass
+
+
+def sigdecode_string(signature, order):
+    """
+    Decoder for :term:`raw encoding` of ECDSA signatures.
+
+    raw encoding is a simple concatenation of the two integers that comprise
+    the signature, with each encoded using the same amount of bytes depending
+    on curve size/order.
+
+    It's expected that this function will be used as the `sigdecode=`
+    parameter to the :func:`ecdsa.keys.VerifyingKey.verify` method.
+
+    :param signature: encoded signature
+    :type signature: bytes like object
+    :param order: order of the curve over which the signature was computed
+    :type order: int
+
+    :raises MalformedSignature: when the encoding of the signature is invalid
+
+    :return: tuple with decoded 'r' and 's' values of signature
+    :rtype: tuple of ints
+    """
+    signature = normalise_bytes(signature)
+    l = orderlen(order)
+    if not len(signature) == 2 * l:
+        raise MalformedSignature(
+            "Invalid length of signature, expected {0} bytes long, "
+            "provided string is {1} bytes long".format(2 * l, len(signature))
+        )
+    r = string_to_number_fixedlen(signature[:l], order)
+    s = string_to_number_fixedlen(signature[l:], order)
+    return r, s
+
+
+def sigdecode_strings(rs_strings, order):
+    """
+    Decode the signature from two strings.
+
+    First string needs to be a big endian encoding of 'r', second needs to
+    be a big endian encoding of the 's' parameter of an ECDSA signature.
+
+    It's expected that this function will be used as the `sigdecode=`
+    parameter to the :func:`ecdsa.keys.VerifyingKey.verify` method.
+
+    :param list rs_strings: list of two bytes-like objects, each encoding one
+        parameter of signature
+    :param int order: order of the curve over which the signature was computed
+
+    :raises MalformedSignature: when the encoding of the signature is invalid
+
+    :return: tuple with decoded 'r' and 's' values of signature
+    :rtype: tuple of ints
+    """
+    if not len(rs_strings) == 2:
+        raise MalformedSignature(
+            "Invalid number of strings provided: {0}, expected 2".format(
+                len(rs_strings)
+            )
+        )
+    (r_str, s_str) = rs_strings
+    r_str = normalise_bytes(r_str)
+    s_str = normalise_bytes(s_str)
+    l = orderlen(order)
+    if not len(r_str) == l:
+        raise MalformedSignature(
+            "Invalid length of first string ('r' parameter), "
+            "expected {0} bytes long, provided string is {1} "
+            "bytes long".format(l, len(r_str))
+        )
+    if not len(s_str) == l:
+        raise MalformedSignature(
+            "Invalid length of second string ('s' parameter), "
+            "expected {0} bytes long, provided string is {1} "
+            "bytes long".format(l, len(s_str))
+        )
+    r = string_to_number_fixedlen(r_str, order)
+    s = string_to_number_fixedlen(s_str, order)
+    return r, s
+
+
+def sigdecode_der(sig_der, order):
+    """
+    Decoder for DER format of ECDSA signatures.
+
+    DER format of signature is one that uses the :term:`ASN.1` :term:`DER`
+    rules to encode it as a sequence of two integers::
+
+        Ecdsa-Sig-Value ::= SEQUENCE {
+            r INTEGER,
+            s INTEGER
+        }
+
+    It's expected that this function will be used as the `sigdecode=`
+    parameter to the :func:`ecdsa.keys.VerifyingKey.verify` method.
+ + :param sig_der: encoded signature + :type sig_der: bytes like object + :param order: order of the curve over which the signature was computed + :type order: int + + :raises UnexpectedDER: when the encoding of signature is invalid + + :return: tuple with decoded 'r' and 's' values of signature + :rtype: tuple of ints + """ + sig_der = normalise_bytes(sig_der) + # return der.encode_sequence(der.encode_integer(r), der.encode_integer(s)) + rs_strings, empty = der.remove_sequence(sig_der) + if empty != b"": + raise der.UnexpectedDER( + "trailing junk after DER sig: %s" % binascii.hexlify(empty) + ) + r, rest = der.remove_integer(rs_strings) + s, empty = der.remove_integer(rest) + if empty != b"": + raise der.UnexpectedDER( + "trailing junk after DER numbers: %s" % binascii.hexlify(empty) + ) + return r, s diff --git a/env/lib/python3.8/site-packages/ens/__init__.py b/env/lib/python3.8/site-packages/ens/__init__.py new file mode 100644 index 00000000..c3a463a5 --- /dev/null +++ b/env/lib/python3.8/site-packages/ens/__init__.py @@ -0,0 +1,15 @@ +# flake8: noqa + +from .main import ( + ENS, +) + +from .exceptions import ( + AddressMismatch, + BidTooLow, + InvalidLabel, + InvalidName, + UnauthorizedError, + UnderfundedBid, + UnownedName, +) diff --git a/env/lib/python3.8/site-packages/ens/abis.py b/env/lib/python3.8/site-packages/ens/abis.py new file mode 100644 index 00000000..d29b4d89 --- /dev/null +++ b/env/lib/python3.8/site-packages/ens/abis.py @@ -0,0 +1,1394 @@ +# flake8: noqa + +ENS = [ + { + "constant": True, + "inputs": [ + { + "name": "node", + "type": "bytes32" + } + ], + "name": "resolver", + "outputs": [ + { + "name": "", + "type": "address" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "node", + "type": "bytes32" + } + ], + "name": "owner", + "outputs": [ + { + "name": "", + "type": "address" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "node", + "type": "bytes32" + }, + { + "name": "label", + "type": "bytes32" + }, + { + "name": "owner", + "type": "address" + } + ], + "name": "setSubnodeOwner", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "node", + "type": "bytes32" + }, + { + "name": "ttl", + "type": "uint64" + } + ], + "name": "setTTL", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "node", + "type": "bytes32" + } + ], + "name": "ttl", + "outputs": [ + { + "name": "", + "type": "uint64" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "node", + "type": "bytes32" + }, + { + "name": "resolver", + "type": "address" + } + ], + "name": "setResolver", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "node", + "type": "bytes32" + }, + { + "name": "owner", + "type": "address" + } + ], + "name": "setOwner", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "node", + "type": "bytes32" + }, + { + "indexed": False, + "name": "owner", + "type": "address" + } + ], + "name": "Transfer", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "node", + "type": "bytes32" + }, + { + "indexed": True, + "name": "label", + "type": "bytes32" + }, + { + "indexed": False, + "name": "owner", + 
"type": "address" + } + ], + "name": "NewOwner", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "node", + "type": "bytes32" + }, + { + "indexed": False, + "name": "resolver", + "type": "address" + } + ], + "name": "NewResolver", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "node", + "type": "bytes32" + }, + { + "indexed": False, + "name": "ttl", + "type": "uint64" + } + ], + "name": "NewTTL", + "type": "event" + } +] + +AUCTION_REGISTRAR = [ + { + "constant": False, + "inputs": [ + { + "name": "_hash", + "type": "bytes32" + } + ], + "name": "releaseDeed", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "_hash", + "type": "bytes32" + } + ], + "name": "getAllowedTime", + "outputs": [ + { + "name": "timestamp", + "type": "uint256" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "unhashedName", + "type": "string" + } + ], + "name": "invalidateName", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "hash", + "type": "bytes32" + }, + { + "name": "owner", + "type": "address" + }, + { + "name": "value", + "type": "uint256" + }, + { + "name": "salt", + "type": "bytes32" + } + ], + "name": "shaBid", + "outputs": [ + { + "name": "sealedBid", + "type": "bytes32" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "bidder", + "type": "address" + }, + { + "name": "seal", + "type": "bytes32" + } + ], + "name": "cancelBid", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "_hash", + "type": "bytes32" + } + ], + "name": "entries", + "outputs": [ + { + "name": "", + "type": "uint8" + }, + { + "name": "", + "type": "address" + }, + { + "name": "", + "type": "uint256" + }, + { + "name": "", + "type": "uint256" + }, + { + "name": "", + "type": "uint256" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [], + "name": "ens", + "outputs": [ + { + "name": "", + "type": "address" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "_hash", + "type": "bytes32" + }, + { + "name": "_value", + "type": "uint256" + }, + { + "name": "_salt", + "type": "bytes32" + } + ], + "name": "unsealBid", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "_hash", + "type": "bytes32" + } + ], + "name": "transferRegistrars", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "", + "type": "address" + }, + { + "name": "", + "type": "bytes32" + } + ], + "name": "sealedBids", + "outputs": [ + { + "name": "", + "type": "address" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "_hash", + "type": "bytes32" + } + ], + "name": "state", + "outputs": [ + { + "name": "", + "type": "uint8" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "_hash", + "type": "bytes32" + }, + { + "name": "newOwner", + "type": "address" + } + ], + "name": "transfer", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "_hash", + "type": "bytes32" + }, + { + 
"name": "_timestamp", + "type": "uint256" + } + ], + "name": "isAllowed", + "outputs": [ + { + "name": "allowed", + "type": "bool" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "_hash", + "type": "bytes32" + } + ], + "name": "finalizeAuction", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [], + "name": "registryStarted", + "outputs": [ + { + "name": "", + "type": "uint256" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [], + "name": "launchLength", + "outputs": [ + { + "name": "", + "type": "uint32" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "sealedBid", + "type": "bytes32" + } + ], + "name": "newBid", + "outputs": [], + "payable": True, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "labels", + "type": "bytes32[]" + } + ], + "name": "eraseNode", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "_hashes", + "type": "bytes32[]" + } + ], + "name": "startAuctions", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "hash", + "type": "bytes32" + }, + { + "name": "deed", + "type": "address" + }, + { + "name": "registrationDate", + "type": "uint256" + } + ], + "name": "acceptRegistrarTransfer", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "_hash", + "type": "bytes32" + } + ], + "name": "startAuction", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [], + "name": "rootNode", + "outputs": [ + { + "name": "", + "type": "bytes32" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "hashes", + "type": "bytes32[]" + }, + { + "name": "sealedBid", + "type": "bytes32" + } + ], + "name": "startAuctionsAndBid", + "outputs": [], + "payable": True, + "type": "function" + }, + { + "inputs": [ + { + "name": "_ens", + "type": "address" + }, + { + "name": "_rootNode", + "type": "bytes32" + }, + { + "name": "_startDate", + "type": "uint256" + } + ], + "payable": False, + "type": "constructor" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "hash", + "type": "bytes32" + }, + { + "indexed": False, + "name": "registrationDate", + "type": "uint256" + } + ], + "name": "AuctionStarted", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "hash", + "type": "bytes32" + }, + { + "indexed": True, + "name": "bidder", + "type": "address" + }, + { + "indexed": False, + "name": "deposit", + "type": "uint256" + } + ], + "name": "NewBid", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "hash", + "type": "bytes32" + }, + { + "indexed": True, + "name": "owner", + "type": "address" + }, + { + "indexed": False, + "name": "value", + "type": "uint256" + }, + { + "indexed": False, + "name": "status", + "type": "uint8" + } + ], + "name": "BidRevealed", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "hash", + "type": "bytes32" + }, + { + "indexed": True, + "name": "owner", + "type": "address" + }, + { + "indexed": False, + "name": "value", + "type": "uint256" + }, + { + "indexed": False, + "name": 
"registrationDate", + "type": "uint256" + } + ], + "name": "HashRegistered", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "hash", + "type": "bytes32" + }, + { + "indexed": False, + "name": "value", + "type": "uint256" + } + ], + "name": "HashReleased", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "hash", + "type": "bytes32" + }, + { + "indexed": True, + "name": "name", + "type": "string" + }, + { + "indexed": False, + "name": "value", + "type": "uint256" + }, + { + "indexed": False, + "name": "registrationDate", + "type": "uint256" + } + ], + "name": "HashInvalidated", + "type": "event" + } +] + +DEED = [ + { + "constant": True, + "inputs": [], + "name": "creationDate", + "outputs": [ + { + "name": "", + "type": "uint256" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [], + "name": "destroyDeed", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "newOwner", + "type": "address" + } + ], + "name": "setOwner", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [], + "name": "registrar", + "outputs": [ + { + "name": "", + "type": "address" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [], + "name": "owner", + "outputs": [ + { + "name": "", + "type": "address" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "refundRatio", + "type": "uint256" + } + ], + "name": "closeDeed", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "newRegistrar", + "type": "address" + } + ], + "name": "setRegistrar", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "newValue", + "type": "uint256" + } + ], + "name": "setBalance", + "outputs": [], + "payable": True, + "type": "function" + }, + { + "inputs": [], + "type": "constructor" + }, + { + "payable": True, + "type": "fallback" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": False, + "name": "newOwner", + "type": "address" + } + ], + "name": "OwnerChanged", + "type": "event" + }, + { + "anonymous": False, + "inputs": [], + "name": "DeedClosed", + "type": "event" + } +] + +FIFS_REGISTRAR = [ + { + "constant": True, + "inputs": [], + "name": "ens", + "outputs": [ + { + "name": "", + "type": "address" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "", + "type": "bytes32" + } + ], + "name": "expiryTimes", + "outputs": [ + { + "name": "", + "type": "uint256" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "subnode", + "type": "bytes32" + }, + { + "name": "owner", + "type": "address" + } + ], + "name": "register", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [], + "name": "rootNode", + "outputs": [ + { + "name": "", + "type": "bytes32" + } + ], + "payable": False, + "type": "function" + }, + { + "inputs": [ + { + "name": "ensAddr", + "type": "address" + }, + { + "name": "node", + "type": "bytes32" + } + ], + "type": "constructor" + } +] + +RESOLVER = [ + { + "constant": True, + "inputs": [ + { + "name": "interfaceID", + "type": "bytes4" + } + ], + "name": "supportsInterface", + "outputs": [ + { + 
"name": "", + "type": "bool" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "node", + "type": "bytes32" + }, + { + "name": "contentTypes", + "type": "uint256" + } + ], + "name": "ABI", + "outputs": [ + { + "name": "contentType", + "type": "uint256" + }, + { + "name": "data", + "type": "bytes" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "node", + "type": "bytes32" + }, + { + "name": "x", + "type": "bytes32" + }, + { + "name": "y", + "type": "bytes32" + } + ], + "name": "setPubkey", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "node", + "type": "bytes32" + } + ], + "name": "content", + "outputs": [ + { + "name": "ret", + "type": "bytes32" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "node", + "type": "bytes32" + } + ], + "name": "addr", + "outputs": [ + { + "name": "ret", + "type": "address" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "node", + "type": "bytes32" + }, + { + "name": "contentType", + "type": "uint256" + }, + { + "name": "data", + "type": "bytes" + } + ], + "name": "setABI", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "node", + "type": "bytes32" + } + ], + "name": "name", + "outputs": [ + { + "name": "ret", + "type": "string" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "node", + "type": "bytes32" + }, + { + "name": "name", + "type": "string" + } + ], + "name": "setName", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "node", + "type": "bytes32" + }, + { + "name": "hash", + "type": "bytes32" + } + ], + "name": "setContent", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "node", + "type": "bytes32" + } + ], + "name": "pubkey", + "outputs": [ + { + "name": "x", + "type": "bytes32" + }, + { + "name": "y", + "type": "bytes32" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "node", + "type": "bytes32" + }, + { + "name": "addr", + "type": "address" + } + ], + "name": "setAddr", + "outputs": [], + "payable": False, + "type": "function" + }, + { + "inputs": [ + { + "name": "ensAddr", + "type": "address" + } + ], + "payable": False, + "type": "constructor" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "node", + "type": "bytes32" + }, + { + "indexed": False, + "name": "a", + "type": "address" + } + ], + "name": "AddrChanged", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "node", + "type": "bytes32" + }, + { + "indexed": False, + "name": "hash", + "type": "bytes32" + } + ], + "name": "ContentChanged", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "node", + "type": "bytes32" + }, + { + "indexed": False, + "name": "name", + "type": "string" + } + ], + "name": "NameChanged", + "type": "event" + }, + { + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "node", + "type": "bytes32" + }, + { + "indexed": True, + "name": "contentType", + "type": "uint256" + } + ], + "name": "ABIChanged", + "type": "event" + }, + 
{ + "anonymous": False, + "inputs": [ + { + "indexed": True, + "name": "node", + "type": "bytes32" + }, + { + "indexed": False, + "name": "x", + "type": "bytes32" + }, + { + "indexed": False, + "name": "y", + "type": "bytes32" + } + ], + "name": "PubkeyChanged", + "type": "event" + } +] + +REVERSE_REGISTRAR = [ + { + "constant": False, + "inputs": [ + { + "name": "owner", + "type": "address" + }, + { + "name": "resolver", + "type": "address" + } + ], + "name": "claimWithResolver", + "outputs": [ + { + "name": "node", + "type": "bytes32" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "owner", + "type": "address" + } + ], + "name": "claim", + "outputs": [ + { + "name": "node", + "type": "bytes32" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [], + "name": "ens", + "outputs": [ + { + "name": "", + "type": "address" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [], + "name": "defaultResolver", + "outputs": [ + { + "name": "", + "type": "address" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": True, + "inputs": [ + { + "name": "addr", + "type": "address" + } + ], + "name": "node", + "outputs": [ + { + "name": "ret", + "type": "bytes32" + } + ], + "payable": False, + "type": "function" + }, + { + "constant": False, + "inputs": [ + { + "name": "name", + "type": "string" + } + ], + "name": "setName", + "outputs": [ + { + "name": "node", + "type": "bytes32" + } + ], + "payable": False, + "type": "function" + }, + { + "inputs": [ + { + "name": "ensAddr", + "type": "address" + }, + { + "name": "resolverAddr", + "type": "address" + } + ], + "payable": False, + "type": "constructor" + } +] diff --git a/env/lib/python3.8/site-packages/ens/auto.py b/env/lib/python3.8/site-packages/ens/auto.py new file mode 100644 index 00000000..690f637d --- /dev/null +++ b/env/lib/python3.8/site-packages/ens/auto.py @@ -0,0 +1,3 @@ +from ens import ENS + +ns = ENS() diff --git a/env/lib/python3.8/site-packages/ens/constants.py b/env/lib/python3.8/site-packages/ens/constants.py new file mode 100644 index 00000000..78455f4e --- /dev/null +++ b/env/lib/python3.8/site-packages/ens/constants.py @@ -0,0 +1,20 @@ +from eth_typing import ( + ChecksumAddress, + HexAddress, + HexStr, +) +from hexbytes import ( + HexBytes, +) + +ACCEPTABLE_STALE_HOURS = 48 + +AUCTION_START_GAS_CONSTANT = 25000 +AUCTION_START_GAS_MARGINAL = 39000 + +EMPTY_SHA3_BYTES = HexBytes(b'\0' * 32) +EMPTY_ADDR_HEX = HexAddress(HexStr('0x' + '00' * 20)) + +REVERSE_REGISTRAR_DOMAIN = 'addr.reverse' + +ENS_MAINNET_ADDR = ChecksumAddress(HexAddress(HexStr('0x00000000000C2E074eC69A0dFb2997BA6C7d2e1e'))) diff --git a/env/lib/python3.8/site-packages/ens/contract_data.py b/env/lib/python3.8/site-packages/ens/contract_data.py new file mode 100644 index 00000000..76e5a409 --- /dev/null +++ b/env/lib/python3.8/site-packages/ens/contract_data.py @@ -0,0 +1,19 @@ +# flake8: noqa +import json + +registrar_abi = 
json.loads('[{"constant":false,"inputs":[{"name":"_hash","type":"bytes32"}],"name":"releaseDeed","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"name":"_hash","type":"bytes32"}],"name":"getAllowedTime","outputs":[{"name":"timestamp","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"unhashedName","type":"string"}],"name":"invalidateName","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"name":"hash","type":"bytes32"},{"name":"owner","type":"address"},{"name":"value","type":"uint256"},{"name":"salt","type":"bytes32"}],"name":"shaBid","outputs":[{"name":"sealedBid","type":"bytes32"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"bidder","type":"address"},{"name":"seal","type":"bytes32"}],"name":"cancelBid","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"name":"_hash","type":"bytes32"}],"name":"entries","outputs":[{"name":"","type":"uint8"},{"name":"","type":"address"},{"name":"","type":"uint256"},{"name":"","type":"uint256"},{"name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"ens","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"_hash","type":"bytes32"},{"name":"_value","type":"uint256"},{"name":"_salt","type":"bytes32"}],"name":"unsealBid","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"_hash","type":"bytes32"}],"name":"transferRegistrars","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"name":"","type":"address"},{"name":"","type":"bytes32"}],"name":"sealedBids","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"name":"_hash","type":"bytes32"}],"name":"state","outputs":[{"name":"","type":"uint8"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"_hash","type":"bytes32"},{"name":"newOwner","type":"address"}],"name":"transfer","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"name":"_hash","type":"bytes32"},{"name":"_timestamp","type":"uint256"}],"name":"isAllowed","outputs":[{"name":"allowed","type":"bool"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"_hash","type":"bytes32"}],"name":"finalizeAuction","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"registryStarted","outputs":[{"name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"launchLength","outputs":[{"name":"","type":"uint32"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"sealedBid","type":"bytes32"}],"name":"newBid","outputs":[],"payable":true,"stateMutability":"payable","type":"function"},{"constant":false,"inputs":[{"name":"labels","type":"bytes32[]"}],"name":"eraseNode","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"_hashes","type":"bytes32[]"}],"name":"startAuctions","
outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"hash","type":"bytes32"},{"name":"deed","type":"address"},{"name":"registrationDate","type":"uint256"}],"name":"acceptRegistrarTransfer","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"_hash","type":"bytes32"}],"name":"startAuction","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"rootNode","outputs":[{"name":"","type":"bytes32"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"hashes","type":"bytes32[]"},{"name":"sealedBid","type":"bytes32"}],"name":"startAuctionsAndBid","outputs":[],"payable":true,"stateMutability":"payable","type":"function"},{"inputs":[{"name":"_ens","type":"address"},{"name":"_rootNode","type":"bytes32"},{"name":"_startDate","type":"uint256"}],"payable":false,"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"name":"hash","type":"bytes32"},{"indexed":false,"name":"registrationDate","type":"uint256"}],"name":"AuctionStarted","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"hash","type":"bytes32"},{"indexed":true,"name":"bidder","type":"address"},{"indexed":false,"name":"deposit","type":"uint256"}],"name":"NewBid","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"hash","type":"bytes32"},{"indexed":true,"name":"owner","type":"address"},{"indexed":false,"name":"value","type":"uint256"},{"indexed":false,"name":"status","type":"uint8"}],"name":"BidRevealed","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"hash","type":"bytes32"},{"indexed":true,"name":"owner","type":"address"},{"indexed":false,"name":"value","type":"uint256"},{"indexed":false,"name":"registrationDate","type":"uint256"}],"name":"HashRegistered","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"hash","type":"bytes32"},{"indexed":false,"name":"value","type":"uint256"}],"name":"HashReleased","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"hash","type":"bytes32"},{"indexed":true,"name":"name","type":"string"},{"indexed":false,"name":"value","type":"uint256"},{"indexed":false,"name":"registrationDate","type":"uint256"}],"name":"HashInvalidated","type":"event"}]') +registrar_bytecode = 
"6060604052341561000f57600080fd5b60405160608061275783398101604052808051919060200180519190602001805160008054600160a060020a031916600160a060020a0387161781556001859055909250821190506100615742610063565b805b6004555050506126df806100786000396000f300606060405236156101175763ffffffff60e060020a6000350416630230a07c811461011c57806313c89a8f1461013457806315f733311461015c57806322ec1244146101ad5780632525f5c1146101d5578063267b6922146101f75780633f15457f1461025f57806347872b421461028e5780635ddae283146102aa5780635e431709146102c057806361d585da146102e257806379ce9fac1461031c578063935033371461033e578063983b94fb1461036b5780639c67f06f14610381578063ae1a0b0c14610394578063ce92dced146103c0578063de10f04b146103cb578063e27fe50f1461041a578063ea9e107a14610469578063ede8acdb1461048e578063faff50a8146104a4578063febefd61146104b7575b600080fd5b341561012757600080fd5b6101326004356104fd565b005b341561013f57600080fd5b61014a60043561072a565b60405190815260200160405180910390f35b341561016757600080fd5b61013260046024813581810190830135806020601f8201819004810201604051908101604052818152929190602084018383808284375094965061074e95505050505050565b34156101b857600080fd5b61014a600435600160a060020a0360243516604435606435610a7a565b34156101e057600080fd5b610132600160a060020a0360043516602435610ac5565b341561020257600080fd5b61020d600435610c8c565b6040518086600581111561021d57fe5b60ff16815260200185600160a060020a0316600160a060020a031681526020018481526020018381526020018281526020019550505050505060405180910390f35b341561026a57600080fd5b610272610cd8565b604051600160a060020a03909116815260200160405180910390f35b341561029957600080fd5b610132600435602435604435610ce7565b34156102b557600080fd5b610132600435611254565b34156102cb57600080fd5b610272600160a060020a03600435166024356114b4565b34156102ed57600080fd5b6102f86004356114da565b6040518082600581111561030857fe5b60ff16815260200191505060405180910390f35b341561032757600080fd5b610132600435600160a060020a0360243516611550565b341561034957600080fd5b61035760043560243561169b565b604051901515815260200160405180910390f35b341561037657600080fd5b6101326004356116b1565b341561038c57600080fd5b61014a611919565b341561039f57600080fd5b6103a761191f565b60405163ffffffff909116815260200160405180910390f35b610132600435611926565b34156103d657600080fd5b6101326004602481358181019083013580602081810201604051908101604052809392919081815260200183836020028082843750949650611a1a95505050505050565b341561042557600080fd5b6101326004602481358181019083013580602081810201604051908101604052809392919081815260200183836020028082843750949650611a7595505050505050565b341561047457600080fd5b610132600435600160a060020a0360243516604435611aab565b341561049957600080fd5b610132600435611ab0565b34156104af57600080fd5b61014a611bfc565b61013260046024813581810190830135806020818102016040519081016040528093929190818152602001838360200280828437509496505093359350611c0292505050565b60008082600261050c826114da565b600581111561051757fe5b1415806105a3575060008181526002602052604080822054600160a060020a031691638da5cb5b9151602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b151561057257600080fd5b6102c65a03f1151561058357600080fd5b50505060405180519050600160a060020a031633600160a060020a031614155b156105ad57600080fd5b600084815260026020526040902080546001820154919450600160a060020a031692506301e13380014210801561065f575060008054600154600160a060020a03308116939216916302571be391906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b151561063957600080fd5b6102c65a03f1151561064a57600080fd5b50505060405180519050600160a060020a0316145b1561066957600080fd5b600060028401819055600384015582
54600160a060020a031916835561068e84611c14565b81600160a060020a031663bbe427716103e860405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b15156106d657600080fd5b6102c65a03f115156106e757600080fd5b505050600283015484907f292b79b9246fa2c8e77d3fe195b251f9cb839d7d038e667c069ee7708c631e169060405190815260200160405180910390a250505050565b6004547001000000000000000000000000000000006249d400818404020401919050565b600080826040518082805190602001908083835b602083106107815780518252601f199092019160209182019101610762565b6001836020036101000a03801982511681845116179092525050509190910192506040915050519081900390206002806107ba836114da565b60058111156107c557fe5b146107cf57600080fd5b60066107da86611e12565b11156107e557600080fd5b846040518082805190602001908083835b602083106108155780518252601f1990920191602091820191016107f6565b6001836020036101000a03801982511681845116179092525050509190910192506040915050519081900390206000818152600260205260409020909450925061085e84611c14565b8254600160a060020a0316156109b5576108838360020154662386f26fc10000611ec3565b60028085018290558454600160a060020a03169163b0c80972919004600060405160e060020a63ffffffff8516028152600481019290925215156024820152604401600060405180830381600087803b15156108de57600080fd5b6102c65a03f115156108ef57600080fd5b50508354600160a060020a031690506313af40353360405160e060020a63ffffffff8416028152600160a060020a039091166004820152602401600060405180830381600087803b151561094257600080fd5b6102c65a03f1151561095357600080fd5b50508354600160a060020a0316905063bbe427716103e860405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b15156109a057600080fd5b6102c65a03f115156109b157600080fd5b5050505b846040518082805190602001908083835b602083106109e55780518252601f1990920191602091820191016109c6565b6001836020036101000a0380198251168184511617909252505050919091019250604091505051809103902084600019167f1f9c649fe47e58bb60f4e52f0d90e4c47a526c9f90c5113df842c025970b66ad8560020154866001015460405191825260208201526040908101905180910390a3505060006002820181905560038201558054600160a060020a03191690555050565b600084848484604051938452600160a060020a03929092166c010000000000000000000000000260208401526034830152605482015260740160405180910390209050949350505050565b600160a060020a03808316600090815260036020908152604080832085845290915290205416801580610b61575062069780600160a060020a0382166305b344106000604051602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b1515610b3d57600080fd5b6102c65a03f11515610b4e57600080fd5b5050506040518051905001621275000142105b15610b6b57600080fd5b80600160a060020a03166313af40353360405160e060020a63ffffffff8416028152600160a060020a039091166004820152602401600060405180830381600087803b1515610bb957600080fd5b6102c65a03f11515610bca57600080fd5b50505080600160a060020a031663bbe42771600560405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b1515610c1457600080fd5b6102c65a03f11515610c2557600080fd5b505050600160a060020a03831660008181526003602090815260408083208684529091528082208054600160a060020a03191690558491600080516020612694833981519152916005905191825260ff1660208201526040908101905180910390a3505050565b60008181526002602052604081208190819081908190610cab876114da565b815460018301546002840154600390940154929a600160a060020a03909216995097509195509350915050565b600054600160a060020a031681565b600080600080600080610cfc89338a8a610a7a565b600160a060020a033381166000908152600360209081526040808320858452909152902054919750169450841515610d3357600080fd5b600160a060020a0333811660009081526003602090815260408083208a845282528083208054600160a060020a03191690558c8352600
29091528082209650610dd6928b9290891691633fa4f245919051602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b1515610db657600080fd5b6102c65a03f11515610dc757600080fd5b50505060405180519050611edb565b925084600160a060020a031663b0c8097284600160405160e060020a63ffffffff8516028152600481019290925215156024820152604401600060405180830381600087803b1515610e2757600080fd5b6102c65a03f11515610e3857600080fd5b505050610e44896114da565b91506002826005811115610e5457fe5b1415610ef25784600160a060020a031663bbe42771600560405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b1515610ea157600080fd5b6102c65a03f11515610eb257600080fd5b5050600160a060020a03331690508960008051602061269483398151915285600160405191825260ff1660208201526040908101905180910390a3611249565b6004826005811115610f0057fe5b14610f0a57600080fd5b662386f26fc10000831080610f88575060018401546202a2ff1901600160a060020a0386166305b344106000604051602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b1515610f6b57600080fd5b6102c65a03f11515610f7c57600080fd5b50505060405180519050115b156110265784600160a060020a031663bbe427716103e360405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b1515610fd557600080fd5b6102c65a03f11515610fe657600080fd5b5050600160a060020a03331690508960008051602061269483398151915285600060405191825260ff1660208201526040908101905180910390a3611249565b8360030154831115611108578354600160a060020a0316156110a257508254600160a060020a03168063bbe427716103e360405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b151561108d57600080fd5b6102c65a03f1151561109e57600080fd5b5050505b600384018054600280870191909155908490558454600160a060020a031916600160a060020a038781169190911786553316908a9060008051602061269483398151915290869060405191825260ff1660208201526040908101905180910390a3611249565b83600201548311156111b45760028401839055600160a060020a03851663bbe427716103e360405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b151561116357600080fd5b6102c65a03f1151561117457600080fd5b5050600160a060020a03331690508960008051602061269483398151915285600360405191825260ff1660208201526040908101905180910390a3611249565b84600160a060020a031663bbe427716103e360405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b15156111fc57600080fd5b6102c65a03f1151561120d57600080fd5b5050600160a060020a03331690508960008051602061269483398151915285600460405191825260ff1660208201526040908101905180910390a35b505050505050505050565b600080826002611263826114da565b600581111561126e57fe5b1415806112fa575060008181526002602052604080822054600160a060020a031691638da5cb5b9151602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b15156112c957600080fd5b6102c65a03f115156112da57600080fd5b50505060405180519050600160a060020a031633600160a060020a031614155b1561130457600080fd5b60008054600154600160a060020a03909116916302571be391906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b151561135b57600080fd5b6102c65a03f1151561136c57600080fd5b50505060405180519050925030600160a060020a031683600160a060020a0316141561139757600080fd5b600084815260026020526040908190208054909350600160a060020a03169063faab9d399085905160e060020a63ffffffff8416028152600160a060020a039091166004820152602401600060405180830381600087803b15156113fa57600080fd5b6102c65a03f1151561140b57600080fd5b505082546001840154600160a060020a03808716935063ea9e107a92889291169060405160e060020a63ffffffff86160281526004810193909352600160a060020a0390911660248301526044820152
606401600060405180830381600087803b151561147757600080fd5b6102c65a03f1151561148857600080fd5b50508254600160a060020a03191683555050600060018201819055600282018190556003909101555050565b6003602090815260009283526040808420909152908252902054600160a060020a031681565b60008181526002602052604081206114f2834261169b565b1515611501576005915061154a565b80600101544210156115315760018101546202a2ff1901421015611528576001915061154a565b6004915061154a565b60038101541515611545576000915061154a565b600291505b50919050565b600082600261155e826114da565b600581111561156957fe5b1415806115f5575060008181526002602052604080822054600160a060020a031691638da5cb5b9151602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b15156115c457600080fd5b6102c65a03f115156115d557600080fd5b50505060405180519050600160a060020a031633600160a060020a031614155b156115ff57600080fd5b600160a060020a038316151561161457600080fd5b600084815260026020526040908190208054909350600160a060020a0316906313af40359085905160e060020a63ffffffff8416028152600160a060020a039091166004820152602401600060405180830381600087803b151561167757600080fd5b6102c65a03f1151561168857600080fd5b5050506116958484611eec565b50505050565b60006116a68361072a565b821190505b92915050565b60008160026116bf826114da565b60058111156116ca57fe5b141580611756575060008181526002602052604080822054600160a060020a031691638da5cb5b9151602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b151561172557600080fd5b6102c65a03f1151561173657600080fd5b50505060405180519050600160a060020a031633600160a060020a031614155b1561176057600080fd5b60008381526002602081905260409091209081015490925061178990662386f26fc10000611ec3565b600283018190558254600160a060020a03169063b0c8097290600160405160e060020a63ffffffff8516028152600481019290925215156024820152604401600060405180830381600087803b15156117e157600080fd5b6102c65a03f115156117f257600080fd5b5050825461186291508490600160a060020a0316638da5cb5b6000604051602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b151561184257600080fd5b6102c65a03f1151561185357600080fd5b50505060405180519050611eec565b8154600160a060020a0316638da5cb5b6000604051602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b15156118a957600080fd5b6102c65a03f115156118ba57600080fd5b50505060405180519050600160a060020a031683600019167f0f0c27adfd84b60b6f456b0e87cdccb1e5fb9603991588d87fa99f5b6b61e6708460020154856001015460405191825260208201526040908101905180910390a3505050565b60045481565b6249d40081565b600160a060020a0333811660009081526003602090815260408083208584529091528120549091168190111561195b57600080fd5b662386f26fc1000034101561196f57600080fd5b3433611979612187565b600160a060020a0390911681526020016040518091039082f080151561199e57600080fd5b33600160a060020a039081166000818152600360209081526040808320898452909152908190208054600160a060020a0319169385169390931790925591935090915083907fb556ff269c1b6714f432c36431e2041d28436a73b6c3f19c021827bbdc6bfc299034905190815260200160405180910390a35050565b80511515611a2757600080fd5b6002611a4b82600184510381518110611a3c57fe5b906020019060200201516114da565b6005811115611a5657fe5b1415611a6157600080fd5b611a72600182510382600154611fd6565b50565b60005b8151811015611aa757611a9f828281518110611a9057fe5b90602001906020020151611ab0565b600101611a78565b5050565b505050565b600080600454421080611aca5750600454630784ce000142115b80611b51575060008054600154600160a060020a03308116939216916302571be391906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515611b2a57600080fd5b6102c65a03f11515611b3b57600080fd5b50505060405180519050600160a060020a0
31614155b15611b5b57600080fd5b611b64836114da565b91506001826005811115611b7457fe5b1415611b7f57611aab565b6000826005811115611b8d57fe5b14611b9757600080fd5b50600082815260026020819052604080832042620697800160018201819055928101849055600381019390935584917f87e97e825a1d1fa0c54e1d36c7506c1dea8b1efd451fe68b000cf96f7cf40003915190815260200160405180910390a2505050565b60015481565b611c0b82611a75565b611aa781611926565b60008054600154600160a060020a033081169216906302571be390846040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515611c6d57600080fd5b6102c65a03f11515611c7e57600080fd5b50505060405180519050600160a060020a03161415611aa757600054600154600160a060020a03909116906306ab592390843060405160e060020a63ffffffff861602815260048101939093526024830191909152600160a060020a03166044820152606401600060405180830381600087803b1515611cfd57600080fd5b6102c65a03f11515611d0e57600080fd5b5050506001548260405191825260208201526040908101905190819003902060008054919250600160a060020a0390911690631896f70a90839060405160e060020a63ffffffff85160281526004810192909252600160a060020a03166024820152604401600060405180830381600087803b1515611d8c57600080fd5b6102c65a03f11515611d9d57600080fd5b505060008054600160a060020a03169150635b0fc9c390839060405160e060020a63ffffffff85160281526004810192909252600160a060020a03166024820152604401600060405180830381600087803b1515611dfa57600080fd5b6102c65a03f11515611e0b57600080fd5b5050505050565b600060018201818080838651019250600091505b82841015611eba5760ff845116905060808160ff161015611e4c57600184019350611eaf565b60e08160ff161015611e6357600284019350611eaf565b60f08160ff161015611e7a57600384019350611eaf565b60f88160ff161015611e9157600484019350611eaf565b60fc8160ff161015611ea857600584019350611eaf565b6006840193505b600190910190611e26565b50949350505050565b600081831115611ed45750816116ab565b50806116ab565b600081831015611ed45750816116ab565b60008054600154600160a060020a03308116939216916302571be391906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515611f4657600080fd5b6102c65a03f11515611f5757600080fd5b50505060405180519050600160a060020a03161415611aa757600054600154600160a060020a03909116906306ab592390848460405160e060020a63ffffffff861602815260048101939093526024830191909152600160a060020a03166044820152606401600060405180830381600087803b1515611dfa57600080fd5b600054600160a060020a03166306ab592382848681518110611ff457fe5b906020019060200201513060405160e060020a63ffffffff861602815260048101939093526024830191909152600160a060020a03166044820152606401600060405180830381600087803b151561204b57600080fd5b6102c65a03f1151561205c57600080fd5b5050508082848151811061206c57fe5b906020019060200201516040519182526020820152604090810190518091039020905060008311156120a6576120a6600184038383611fd6565b60008054600160a060020a031690631896f70a90839060405160e060020a63ffffffff85160281526004810192909252600160a060020a03166024820152604401600060405180830381600087803b151561210057600080fd5b6102c65a03f1151561211157600080fd5b505060008054600160a060020a03169150635b0fc9c390839060405160e060020a63ffffffff85160281526004810192909252600160a060020a03166024820152604401600060405180830381600087803b151561216e57600080fd5b6102c65a03f1151561217f57600080fd5b505050505050565b6040516104fc8061219883390190560060606040526040516020806104fc8339810160405280805160028054600160a060020a03928316600160a060020a03199182161790915560008054339093169290911691909117905550504260019081556005805460ff191690911790553460045561048c806100706000396000f300606060405236156100a15763ffffffff7c010000000000000000000000000000000000000000000000000000000060003504166305b344
1081146100a65780630b5ab3d5146100cb57806313af4035146100e05780632b20e397146100ff5780633fa4f2451461012e578063674f220f146101415780638da5cb5b14610154578063b0c8097214610167578063bbe4277114610182578063faab9d3914610198575b600080fd5b34156100b157600080fd5b6100b96101b7565b60405190815260200160405180910390f35b34156100d657600080fd5b6100de6101bd565b005b34156100eb57600080fd5b6100de600160a060020a0360043516610207565b341561010a57600080fd5b6101126102b0565b604051600160a060020a03909116815260200160405180910390f35b341561013957600080fd5b6100b96102bf565b341561014c57600080fd5b6101126102c5565b341561015f57600080fd5b6101126102d4565b341561017257600080fd5b6100de60043560243515156102e3565b341561018d57600080fd5b6100de60043561036c565b34156101a357600080fd5b6100de600160a060020a0360043516610416565b60015481565b60055460ff16156101cd57600080fd5b600254600160a060020a039081169030163180156108fc0290604051600060405180830381858888f19350505050156102055761deadff5b565b60005433600160a060020a0390811691161461022257600080fd5b600160a060020a038116151561023757600080fd5b6002805460038054600160a060020a0380841673ffffffffffffffffffffffffffffffffffffffff19928316179092559091169083161790557fa2ea9883a321a3e97b8266c2b078bfeec6d50c711ed71f874a90d500ae2eaf3681604051600160a060020a03909116815260200160405180910390a150565b600054600160a060020a031681565b60045481565b600354600160a060020a031681565b600254600160a060020a031681565b60005433600160a060020a039081169116146102fe57600080fd5b60055460ff16151561030f57600080fd5b81600454101561031e57600080fd5b6004829055600254600160a060020a039081169030163183900380156108fc0290604051600060405180830381858888f1935050505015801561035e5750805b1561036857600080fd5b5050565b60005433600160a060020a0390811691161461038757600080fd5b60055460ff16151561039857600080fd5b6005805460ff1916905561dead6103e8600160a060020a03301631838203020480156108fc0290604051600060405180830381858888f1935050505015156103df57600080fd5b7fbb2ce2f51803bba16bc85282b47deeea9a5c6223eabea1077be696b3f265cf1360405160405180910390a16104136101bd565b50565b60005433600160a060020a0390811691161461043157600080fd5b6000805473ffffffffffffffffffffffffffffffffffffffff1916600160a060020a03929092169190911790555600a165627a7a7230582023849fab751378a237489f526a289ede7796b7f6b7dac5a973b8c3ca25f368a800297b6c4b278d165a6b33958f8ea5dfb00c8c9d4d0acf1985bef5d10786898bc3e7a165627a7a72305820e6807b87ab11a69864cefed52eef7f9c4635fd0e26312e944bbbcbff5cd26d920029" +registrar_bytecode_runtime = 
"606060405236156101175763ffffffff60e060020a6000350416630230a07c811461011c57806313c89a8f1461013457806315f733311461015c57806322ec1244146101ad5780632525f5c1146101d5578063267b6922146101f75780633f15457f1461025f57806347872b421461028e5780635ddae283146102aa5780635e431709146102c057806361d585da146102e257806379ce9fac1461031c578063935033371461033e578063983b94fb1461036b5780639c67f06f14610381578063ae1a0b0c14610394578063ce92dced146103c0578063de10f04b146103cb578063e27fe50f1461041a578063ea9e107a14610469578063ede8acdb1461048e578063faff50a8146104a4578063febefd61146104b7575b600080fd5b341561012757600080fd5b6101326004356104fd565b005b341561013f57600080fd5b61014a60043561072a565b60405190815260200160405180910390f35b341561016757600080fd5b61013260046024813581810190830135806020601f8201819004810201604051908101604052818152929190602084018383808284375094965061074e95505050505050565b34156101b857600080fd5b61014a600435600160a060020a0360243516604435606435610a7a565b34156101e057600080fd5b610132600160a060020a0360043516602435610ac5565b341561020257600080fd5b61020d600435610c8c565b6040518086600581111561021d57fe5b60ff16815260200185600160a060020a0316600160a060020a031681526020018481526020018381526020018281526020019550505050505060405180910390f35b341561026a57600080fd5b610272610cd8565b604051600160a060020a03909116815260200160405180910390f35b341561029957600080fd5b610132600435602435604435610ce7565b34156102b557600080fd5b610132600435611254565b34156102cb57600080fd5b610272600160a060020a03600435166024356114b4565b34156102ed57600080fd5b6102f86004356114da565b6040518082600581111561030857fe5b60ff16815260200191505060405180910390f35b341561032757600080fd5b610132600435600160a060020a0360243516611550565b341561034957600080fd5b61035760043560243561169b565b604051901515815260200160405180910390f35b341561037657600080fd5b6101326004356116b1565b341561038c57600080fd5b61014a611919565b341561039f57600080fd5b6103a761191f565b60405163ffffffff909116815260200160405180910390f35b610132600435611926565b34156103d657600080fd5b6101326004602481358181019083013580602081810201604051908101604052809392919081815260200183836020028082843750949650611a1a95505050505050565b341561042557600080fd5b6101326004602481358181019083013580602081810201604051908101604052809392919081815260200183836020028082843750949650611a7595505050505050565b341561047457600080fd5b610132600435600160a060020a0360243516604435611aab565b341561049957600080fd5b610132600435611ab0565b34156104af57600080fd5b61014a611bfc565b61013260046024813581810190830135806020818102016040519081016040528093929190818152602001838360200280828437509496505093359350611c0292505050565b60008082600261050c826114da565b600581111561051757fe5b1415806105a3575060008181526002602052604080822054600160a060020a031691638da5cb5b9151602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b151561057257600080fd5b6102c65a03f1151561058357600080fd5b50505060405180519050600160a060020a031633600160a060020a031614155b156105ad57600080fd5b600084815260026020526040902080546001820154919450600160a060020a031692506301e13380014210801561065f575060008054600154600160a060020a03308116939216916302571be391906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b151561063957600080fd5b6102c65a03f1151561064a57600080fd5b50505060405180519050600160a060020a0316145b1561066957600080fd5b60006002840181905560038401558254600160a060020a031916835561068e84611c14565b81600160a060020a031663bbe427716103e860405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b15156106d657600080fd5b6102c65a03f115156106e757600080fd5b505050600283015484
907f292b79b9246fa2c8e77d3fe195b251f9cb839d7d038e667c069ee7708c631e169060405190815260200160405180910390a250505050565b6004547001000000000000000000000000000000006249d400818404020401919050565b600080826040518082805190602001908083835b602083106107815780518252601f199092019160209182019101610762565b6001836020036101000a03801982511681845116179092525050509190910192506040915050519081900390206002806107ba836114da565b60058111156107c557fe5b146107cf57600080fd5b60066107da86611e12565b11156107e557600080fd5b846040518082805190602001908083835b602083106108155780518252601f1990920191602091820191016107f6565b6001836020036101000a03801982511681845116179092525050509190910192506040915050519081900390206000818152600260205260409020909450925061085e84611c14565b8254600160a060020a0316156109b5576108838360020154662386f26fc10000611ec3565b60028085018290558454600160a060020a03169163b0c80972919004600060405160e060020a63ffffffff8516028152600481019290925215156024820152604401600060405180830381600087803b15156108de57600080fd5b6102c65a03f115156108ef57600080fd5b50508354600160a060020a031690506313af40353360405160e060020a63ffffffff8416028152600160a060020a039091166004820152602401600060405180830381600087803b151561094257600080fd5b6102c65a03f1151561095357600080fd5b50508354600160a060020a0316905063bbe427716103e860405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b15156109a057600080fd5b6102c65a03f115156109b157600080fd5b5050505b846040518082805190602001908083835b602083106109e55780518252601f1990920191602091820191016109c6565b6001836020036101000a0380198251168184511617909252505050919091019250604091505051809103902084600019167f1f9c649fe47e58bb60f4e52f0d90e4c47a526c9f90c5113df842c025970b66ad8560020154866001015460405191825260208201526040908101905180910390a3505060006002820181905560038201558054600160a060020a03191690555050565b600084848484604051938452600160a060020a03929092166c010000000000000000000000000260208401526034830152605482015260740160405180910390209050949350505050565b600160a060020a03808316600090815260036020908152604080832085845290915290205416801580610b61575062069780600160a060020a0382166305b344106000604051602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b1515610b3d57600080fd5b6102c65a03f11515610b4e57600080fd5b5050506040518051905001621275000142105b15610b6b57600080fd5b80600160a060020a03166313af40353360405160e060020a63ffffffff8416028152600160a060020a039091166004820152602401600060405180830381600087803b1515610bb957600080fd5b6102c65a03f11515610bca57600080fd5b50505080600160a060020a031663bbe42771600560405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b1515610c1457600080fd5b6102c65a03f11515610c2557600080fd5b505050600160a060020a03831660008181526003602090815260408083208684529091528082208054600160a060020a03191690558491600080516020612694833981519152916005905191825260ff1660208201526040908101905180910390a3505050565b60008181526002602052604081208190819081908190610cab876114da565b815460018301546002840154600390940154929a600160a060020a03909216995097509195509350915050565b600054600160a060020a031681565b600080600080600080610cfc89338a8a610a7a565b600160a060020a033381166000908152600360209081526040808320858452909152902054919750169450841515610d3357600080fd5b600160a060020a0333811660009081526003602090815260408083208a845282528083208054600160a060020a03191690558c835260029091528082209650610dd6928b9290891691633fa4f245919051602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b1515610db657600080fd5b6102c65a03f11515610dc757600080fd5b50505060405180519050611edb565b925084600160a060020a031
663b0c8097284600160405160e060020a63ffffffff8516028152600481019290925215156024820152604401600060405180830381600087803b1515610e2757600080fd5b6102c65a03f11515610e3857600080fd5b505050610e44896114da565b91506002826005811115610e5457fe5b1415610ef25784600160a060020a031663bbe42771600560405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b1515610ea157600080fd5b6102c65a03f11515610eb257600080fd5b5050600160a060020a03331690508960008051602061269483398151915285600160405191825260ff1660208201526040908101905180910390a3611249565b6004826005811115610f0057fe5b14610f0a57600080fd5b662386f26fc10000831080610f88575060018401546202a2ff1901600160a060020a0386166305b344106000604051602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b1515610f6b57600080fd5b6102c65a03f11515610f7c57600080fd5b50505060405180519050115b156110265784600160a060020a031663bbe427716103e360405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b1515610fd557600080fd5b6102c65a03f11515610fe657600080fd5b5050600160a060020a03331690508960008051602061269483398151915285600060405191825260ff1660208201526040908101905180910390a3611249565b8360030154831115611108578354600160a060020a0316156110a257508254600160a060020a03168063bbe427716103e360405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b151561108d57600080fd5b6102c65a03f1151561109e57600080fd5b5050505b600384018054600280870191909155908490558454600160a060020a031916600160a060020a038781169190911786553316908a9060008051602061269483398151915290869060405191825260ff1660208201526040908101905180910390a3611249565b83600201548311156111b45760028401839055600160a060020a03851663bbe427716103e360405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b151561116357600080fd5b6102c65a03f1151561117457600080fd5b5050600160a060020a03331690508960008051602061269483398151915285600360405191825260ff1660208201526040908101905180910390a3611249565b84600160a060020a031663bbe427716103e360405160e060020a63ffffffff84160281526004810191909152602401600060405180830381600087803b15156111fc57600080fd5b6102c65a03f1151561120d57600080fd5b5050600160a060020a03331690508960008051602061269483398151915285600460405191825260ff1660208201526040908101905180910390a35b505050505050505050565b600080826002611263826114da565b600581111561126e57fe5b1415806112fa575060008181526002602052604080822054600160a060020a031691638da5cb5b9151602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b15156112c957600080fd5b6102c65a03f115156112da57600080fd5b50505060405180519050600160a060020a031633600160a060020a031614155b1561130457600080fd5b60008054600154600160a060020a03909116916302571be391906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b151561135b57600080fd5b6102c65a03f1151561136c57600080fd5b50505060405180519050925030600160a060020a031683600160a060020a0316141561139757600080fd5b600084815260026020526040908190208054909350600160a060020a03169063faab9d399085905160e060020a63ffffffff8416028152600160a060020a039091166004820152602401600060405180830381600087803b15156113fa57600080fd5b6102c65a03f1151561140b57600080fd5b505082546001840154600160a060020a03808716935063ea9e107a92889291169060405160e060020a63ffffffff86160281526004810193909352600160a060020a0390911660248301526044820152606401600060405180830381600087803b151561147757600080fd5b6102c65a03f1151561148857600080fd5b50508254600160a060020a03191683555050600060018201819055600282018190556003909101555050565b6003602090815260009283526040808420909152908252902054600160a060
020a031681565b60008181526002602052604081206114f2834261169b565b1515611501576005915061154a565b80600101544210156115315760018101546202a2ff1901421015611528576001915061154a565b6004915061154a565b60038101541515611545576000915061154a565b600291505b50919050565b600082600261155e826114da565b600581111561156957fe5b1415806115f5575060008181526002602052604080822054600160a060020a031691638da5cb5b9151602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b15156115c457600080fd5b6102c65a03f115156115d557600080fd5b50505060405180519050600160a060020a031633600160a060020a031614155b156115ff57600080fd5b600160a060020a038316151561161457600080fd5b600084815260026020526040908190208054909350600160a060020a0316906313af40359085905160e060020a63ffffffff8416028152600160a060020a039091166004820152602401600060405180830381600087803b151561167757600080fd5b6102c65a03f1151561168857600080fd5b5050506116958484611eec565b50505050565b60006116a68361072a565b821190505b92915050565b60008160026116bf826114da565b60058111156116ca57fe5b141580611756575060008181526002602052604080822054600160a060020a031691638da5cb5b9151602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b151561172557600080fd5b6102c65a03f1151561173657600080fd5b50505060405180519050600160a060020a031633600160a060020a031614155b1561176057600080fd5b60008381526002602081905260409091209081015490925061178990662386f26fc10000611ec3565b600283018190558254600160a060020a03169063b0c8097290600160405160e060020a63ffffffff8516028152600481019290925215156024820152604401600060405180830381600087803b15156117e157600080fd5b6102c65a03f115156117f257600080fd5b5050825461186291508490600160a060020a0316638da5cb5b6000604051602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b151561184257600080fd5b6102c65a03f1151561185357600080fd5b50505060405180519050611eec565b8154600160a060020a0316638da5cb5b6000604051602001526040518163ffffffff1660e060020a028152600401602060405180830381600087803b15156118a957600080fd5b6102c65a03f115156118ba57600080fd5b50505060405180519050600160a060020a031683600019167f0f0c27adfd84b60b6f456b0e87cdccb1e5fb9603991588d87fa99f5b6b61e6708460020154856001015460405191825260208201526040908101905180910390a3505050565b60045481565b6249d40081565b600160a060020a0333811660009081526003602090815260408083208584529091528120549091168190111561195b57600080fd5b662386f26fc1000034101561196f57600080fd5b3433611979612187565b600160a060020a0390911681526020016040518091039082f080151561199e57600080fd5b33600160a060020a039081166000818152600360209081526040808320898452909152908190208054600160a060020a0319169385169390931790925591935090915083907fb556ff269c1b6714f432c36431e2041d28436a73b6c3f19c021827bbdc6bfc299034905190815260200160405180910390a35050565b80511515611a2757600080fd5b6002611a4b82600184510381518110611a3c57fe5b906020019060200201516114da565b6005811115611a5657fe5b1415611a6157600080fd5b611a72600182510382600154611fd6565b50565b60005b8151811015611aa757611a9f828281518110611a9057fe5b90602001906020020151611ab0565b600101611a78565b5050565b505050565b600080600454421080611aca5750600454630784ce000142115b80611b51575060008054600154600160a060020a03308116939216916302571be391906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515611b2a57600080fd5b6102c65a03f11515611b3b57600080fd5b50505060405180519050600160a060020a031614155b15611b5b57600080fd5b611b64836114da565b91506001826005811115611b7457fe5b1415611b7f57611aab565b6000826005811115611b8d57fe5b14611b9757600080fd5b5060008281526002602081905260408083204262069780016001820181905592810184905560038101939093558
4917f87e97e825a1d1fa0c54e1d36c7506c1dea8b1efd451fe68b000cf96f7cf40003915190815260200160405180910390a2505050565b60015481565b611c0b82611a75565b611aa781611926565b60008054600154600160a060020a033081169216906302571be390846040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515611c6d57600080fd5b6102c65a03f11515611c7e57600080fd5b50505060405180519050600160a060020a03161415611aa757600054600154600160a060020a03909116906306ab592390843060405160e060020a63ffffffff861602815260048101939093526024830191909152600160a060020a03166044820152606401600060405180830381600087803b1515611cfd57600080fd5b6102c65a03f11515611d0e57600080fd5b5050506001548260405191825260208201526040908101905190819003902060008054919250600160a060020a0390911690631896f70a90839060405160e060020a63ffffffff85160281526004810192909252600160a060020a03166024820152604401600060405180830381600087803b1515611d8c57600080fd5b6102c65a03f11515611d9d57600080fd5b505060008054600160a060020a03169150635b0fc9c390839060405160e060020a63ffffffff85160281526004810192909252600160a060020a03166024820152604401600060405180830381600087803b1515611dfa57600080fd5b6102c65a03f11515611e0b57600080fd5b5050505050565b600060018201818080838651019250600091505b82841015611eba5760ff845116905060808160ff161015611e4c57600184019350611eaf565b60e08160ff161015611e6357600284019350611eaf565b60f08160ff161015611e7a57600384019350611eaf565b60f88160ff161015611e9157600484019350611eaf565b60fc8160ff161015611ea857600584019350611eaf565b6006840193505b600190910190611e26565b50949350505050565b600081831115611ed45750816116ab565b50806116ab565b600081831015611ed45750816116ab565b60008054600154600160a060020a03308116939216916302571be391906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515611f4657600080fd5b6102c65a03f11515611f5757600080fd5b50505060405180519050600160a060020a03161415611aa757600054600154600160a060020a03909116906306ab592390848460405160e060020a63ffffffff861602815260048101939093526024830191909152600160a060020a03166044820152606401600060405180830381600087803b1515611dfa57600080fd5b600054600160a060020a03166306ab592382848681518110611ff457fe5b906020019060200201513060405160e060020a63ffffffff861602815260048101939093526024830191909152600160a060020a03166044820152606401600060405180830381600087803b151561204b57600080fd5b6102c65a03f1151561205c57600080fd5b5050508082848151811061206c57fe5b906020019060200201516040519182526020820152604090810190518091039020905060008311156120a6576120a6600184038383611fd6565b60008054600160a060020a031690631896f70a90839060405160e060020a63ffffffff85160281526004810192909252600160a060020a03166024820152604401600060405180830381600087803b151561210057600080fd5b6102c65a03f1151561211157600080fd5b505060008054600160a060020a03169150635b0fc9c390839060405160e060020a63ffffffff85160281526004810192909252600160a060020a03166024820152604401600060405180830381600087803b151561216e57600080fd5b6102c65a03f1151561217f57600080fd5b505050505050565b6040516104fc8061219883390190560060606040526040516020806104fc8339810160405280805160028054600160a060020a03928316600160a060020a03199182161790915560008054339093169290911691909117905550504260019081556005805460ff191690911790553460045561048c806100706000396000f300606060405236156100a15763ffffffff7c010000000000000000000000000000000000000000000000000000000060003504166305b3441081146100a65780630b5ab3d5146100cb57806313af4035146100e05780632b20e397146100ff5780633fa4f2451461012e578063674f220f146101415780638da5cb5b14610154578063b0c8097214610167578063bbe4277114610182578063faab9d3914610198575b600080fd5b34156100b1576000
80fd5b6100b96101b7565b60405190815260200160405180910390f35b34156100d657600080fd5b6100de6101bd565b005b34156100eb57600080fd5b6100de600160a060020a0360043516610207565b341561010a57600080fd5b6101126102b0565b604051600160a060020a03909116815260200160405180910390f35b341561013957600080fd5b6100b96102bf565b341561014c57600080fd5b6101126102c5565b341561015f57600080fd5b6101126102d4565b341561017257600080fd5b6100de60043560243515156102e3565b341561018d57600080fd5b6100de60043561036c565b34156101a357600080fd5b6100de600160a060020a0360043516610416565b60015481565b60055460ff16156101cd57600080fd5b600254600160a060020a039081169030163180156108fc0290604051600060405180830381858888f19350505050156102055761deadff5b565b60005433600160a060020a0390811691161461022257600080fd5b600160a060020a038116151561023757600080fd5b6002805460038054600160a060020a0380841673ffffffffffffffffffffffffffffffffffffffff19928316179092559091169083161790557fa2ea9883a321a3e97b8266c2b078bfeec6d50c711ed71f874a90d500ae2eaf3681604051600160a060020a03909116815260200160405180910390a150565b600054600160a060020a031681565b60045481565b600354600160a060020a031681565b600254600160a060020a031681565b60005433600160a060020a039081169116146102fe57600080fd5b60055460ff16151561030f57600080fd5b81600454101561031e57600080fd5b6004829055600254600160a060020a039081169030163183900380156108fc0290604051600060405180830381858888f1935050505015801561035e5750805b1561036857600080fd5b5050565b60005433600160a060020a0390811691161461038757600080fd5b60055460ff16151561039857600080fd5b6005805460ff1916905561dead6103e8600160a060020a03301631838203020480156108fc0290604051600060405180830381858888f1935050505015156103df57600080fd5b7fbb2ce2f51803bba16bc85282b47deeea9a5c6223eabea1077be696b3f265cf1360405160405180910390a16104136101bd565b50565b60005433600160a060020a0390811691161461043157600080fd5b6000805473ffffffffffffffffffffffffffffffffffffffff1916600160a060020a03929092169190911790555600a165627a7a7230582023849fab751378a237489f526a289ede7796b7f6b7dac5a973b8c3ca25f368a800297b6c4b278d165a6b33958f8ea5dfb00c8c9d4d0acf1985bef5d10786898bc3e7a165627a7a72305820e6807b87ab11a69864cefed52eef7f9c4635fd0e26312e944bbbcbff5cd26d920029" + +resolver_abi = 
json.loads('[{"constant":true,"inputs":[{"name":"interfaceID","type":"bytes4"}],"name":"supportsInterface","outputs":[{"name":"","type":"bool"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"node","type":"bytes32"},{"name":"key","type":"string"},{"name":"value","type":"string"}],"name":"setText","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"name":"node","type":"bytes32"},{"name":"contentTypes","type":"uint256"}],"name":"ABI","outputs":[{"name":"contentType","type":"uint256"},{"name":"data","type":"bytes"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"node","type":"bytes32"},{"name":"x","type":"bytes32"},{"name":"y","type":"bytes32"}],"name":"setPubkey","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"name":"node","type":"bytes32"}],"name":"content","outputs":[{"name":"ret","type":"bytes32"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"name":"node","type":"bytes32"}],"name":"addr","outputs":[{"name":"ret","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"name":"node","type":"bytes32"},{"name":"key","type":"string"}],"name":"text","outputs":[{"name":"ret","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"node","type":"bytes32"},{"name":"contentType","type":"uint256"},{"name":"data","type":"bytes"}],"name":"setABI","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"name":"node","type":"bytes32"}],"name":"name","outputs":[{"name":"ret","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"node","type":"bytes32"},{"name":"name","type":"string"}],"name":"setName","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"node","type":"bytes32"},{"name":"hash","type":"bytes32"}],"name":"setContent","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"name":"node","type":"bytes32"}],"name":"pubkey","outputs":[{"name":"x","type":"bytes32"},{"name":"y","type":"bytes32"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"node","type":"bytes32"},{"name":"addr","type":"address"}],"name":"setAddr","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"inputs":[{"name":"ensAddr","type":"address"}],"payable":false,"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"name":"node","type":"bytes32"},{"indexed":false,"name":"a","type":"address"}],"name":"AddrChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"node","type":"bytes32"},{"indexed":false,"name":"hash","type":"bytes32"}],"name":"ContentChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"node","type":"bytes32"},{"indexed":false,"name":"name","type":"string"}],"name":"NameChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"node","type":"bytes32"},{"indexed":true,"name":"contentType","type":"uint256"}],"name":"ABIChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"node","type":"bytes32"},{"indexed":false,"name":"x","type":"
bytes32"},{"indexed":false,"name":"y","type":"bytes32"}],"name":"PubkeyChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"node","type":"bytes32"},{"indexed":true,"name":"indexedKey","type":"string"},{"indexed":false,"name":"key","type":"string"}],"name":"TextChanged","type":"event"}]') +resolver_bytecode = "6060604052341561000f57600080fd5b6040516020806111b08339810160405280805160008054600160a060020a03909216600160a060020a0319909216919091179055505061115c806100546000396000f300606060405236156100a95763ffffffff60e060020a60003504166301ffc9a781146100ae57806310f13a8c146100e25780632203ab561461017c57806329cd62ea146102135780632dff69411461022f5780633b3b57de1461025757806359d1d43c14610289578063623195b014610356578063691f3431146103b257806377372213146103c8578063c3d014d61461041e578063c869023314610437578063d5fa2b0014610465575b600080fd5b34156100b957600080fd5b6100ce600160e060020a031960043516610487565b604051901515815260200160405180910390f35b34156100ed57600080fd5b61017a600480359060446024803590810190830135806020601f8201819004810201604051908101604052818152929190602084018383808284378201915050505050509190803590602001908201803590602001908080601f0160208091040260200160405190810160405281815292919060208401838380828437509496506105f495505050505050565b005b341561018757600080fd5b610195600435602435610805565b60405182815260406020820181815290820183818151815260200191508051906020019080838360005b838110156101d75780820151838201526020016101bf565b50505050905090810190601f1680156102045780820380516001836020036101000a031916815260200191505b50935050505060405180910390f35b341561021e57600080fd5b61017a60043560243560443561092f565b341561023a57600080fd5b610245600435610a2e565b60405190815260200160405180910390f35b341561026257600080fd5b61026d600435610a44565b604051600160a060020a03909116815260200160405180910390f35b341561029457600080fd5b6102df600480359060446024803590810190830135806020601f82018190048102016040519081016040528181529291906020840183838082843750949650610a5f95505050505050565b60405160208082528190810183818151815260200191508051906020019080838360005b8381101561031b578082015183820152602001610303565b50505050905090810190601f1680156103485780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b341561036157600080fd5b61017a600480359060248035919060649060443590810190830135806020601f82018190048102016040519081016040528181529291906020840183838082843750949650610b7e95505050505050565b34156103bd57600080fd5b6102df600435610c7a565b34156103d357600080fd5b61017a600480359060446024803590810190830135806020601f82018190048102016040519081016040528181529291906020840183838082843750949650610d4095505050505050565b341561042957600080fd5b61017a600435602435610e8a565b341561044257600080fd5b61044d600435610f63565b60405191825260208201526040908101905180910390f35b341561047057600080fd5b61017a600435600160a060020a0360243516610f80565b6000600160e060020a031982167f3b3b57de0000000000000000000000000000000000000000000000000000000014806104ea5750600160e060020a031982167fd8389dc500000000000000000000000000000000000000000000000000000000145b8061051e5750600160e060020a031982167f691f343100000000000000000000000000000000000000000000000000000000145b806105525750600160e060020a031982167f2203ab5600000000000000000000000000000000000000000000000000000000145b806105865750600160e060020a031982167fc869023300000000000000000000000000000000000000000000000000000000145b806105ba5750600160e060020a031982167f59d1d43c00000000000000000000000000000000000000000000000000000000145b806105ee5750600160e060020a031982167f01ffc9a700000000000000000000000000000000000000000000000000000000145b92915050565b6
00080548491600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b151561064d57600080fd5b6102c65a03f1151561065e57600080fd5b50505060405180519050600160a060020a031614151561067d57600080fd5b6000848152600160205260409081902083916005909101908590518082805190602001908083835b602083106106c45780518252601f1990920191602091820191016106a5565b6001836020036101000a0380198251168184511680821785525050505050509050019150509081526020016040518091039020908051610708929160200190611083565b50826040518082805190602001908083835b602083106107395780518252601f19909201916020918201910161071a565b6001836020036101000a0380198251168184511617909252505050919091019250604091505051908190039020847fd8c9334b1a9c2f9da342a0a2b32629c1a229b6445dad78947f674b44444a75508560405160208082528190810183818151815260200191508051906020019080838360005b838110156107c55780820151838201526020016107ad565b50505050905090810190601f1680156107f25780820380516001836020036101000a031916815260200191505b509250505060405180910390a350505050565b600061080f611101565b60008481526001602081905260409091209092505b838311610922578284161580159061085d5750600083815260068201602052604081205460026000196101006001841615020190911604115b15610917578060060160008481526020019081526020016000208054600181600116156101000203166002900480601f01602080910402602001604051908101604052809291908181526020018280546001816001161561010002031660029004801561090b5780601f106108e05761010080835404028352916020019161090b565b820191906000526020600020905b8154815290600101906020018083116108ee57829003601f168201915b50505050509150610927565b600290920291610824565b600092505b509250929050565b600080548491600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b151561098857600080fd5b6102c65a03f1151561099957600080fd5b50505060405180519050600160a060020a03161415156109b857600080fd5b6040805190810160409081528482526020808301859052600087815260019091522060030181518155602082015160019091015550837f1d6f5e03d3f63eb58751986629a5439baee5079ff04f345becb66e23eb154e46848460405191825260208201526040908101905180910390a250505050565b6000908152600160208190526040909120015490565b600090815260016020526040902054600160a060020a031690565b610a67611101565b60008381526001602052604090819020600501908390518082805190602001908083835b60208310610aaa5780518252601f199092019160209182019101610a8b565b6001836020036101000a03801982511681845116808217855250505050505090500191505090815260200160405180910390208054600181600116156101000203166002900480601f016020809104026020016040519081016040528092919081815260200182805460018160011615610100020316600290048015610b715780601f10610b4657610100808354040283529160200191610b71565b820191906000526020600020905b815481529060010190602001808311610b5457829003601f168201915b5050505050905092915050565b600080548491600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515610bd757600080fd5b6102c65a03f11515610be857600080fd5b50505060405180519050600160a060020a0316141515610c0757600080fd5b6000198301831615610c1857600080fd5b60008481526001602090815260408083208684526006019091529020828051610c45929160200190611083565b5082847faa121bbeef5f32f5961a2a28966e769023910fc9479059ee3495d4c1a696efe360405160405180910390a350505050565b610c82611101565b6001600083600019166000191681526020019081526020016000206002018054600181600116156101000203166002900480601f016020809104026020016040519081016040528092919081815260200182805460018160011615610100020316600290
048015610d345780601f10610d0957610100808354040283529160200191610d34565b820191906000526020600020905b815481529060010190602001808311610d1757829003601f168201915b50505050509050919050565b600080548391600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515610d9957600080fd5b6102c65a03f11515610daa57600080fd5b50505060405180519050600160a060020a0316141515610dc957600080fd5b6000838152600160205260409020600201828051610deb929160200190611083565b50827fb7d29e911041e8d9b843369e890bcb72c9388692ba48b65ac54e7214c4c348f78360405160208082528190810183818151815260200191508051906020019080838360005b83811015610e4b578082015183820152602001610e33565b50505050905090810190601f168015610e785780820380516001836020036101000a031916815260200191505b509250505060405180910390a2505050565b600080548391600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515610ee357600080fd5b6102c65a03f11515610ef457600080fd5b50505060405180519050600160a060020a0316141515610f1357600080fd5b6000838152600160208190526040918290200183905583907f0424b6fe0d9c3bdbece0e7879dc241bb0c22e900be8b6c168b4ee08bd9bf83bc9084905190815260200160405180910390a2505050565b600090815260016020526040902060038101546004909101549091565b600080548391600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515610fd957600080fd5b6102c65a03f11515610fea57600080fd5b50505060405180519050600160a060020a031614151561100957600080fd5b60008381526001602052604090819020805473ffffffffffffffffffffffffffffffffffffffff1916600160a060020a03851617905583907f52d7d861f09ab3d26239d492e8968629f95e9e318cf0b73bfddc441522a15fd290849051600160a060020a03909116815260200160405180910390a2505050565b828054600181600116156101000203166002900490600052602060002090601f016020900481019282601f106110c457805160ff19168380011785556110f1565b828001600101855582156110f1579182015b828111156110f15782518255916020019190600101906110d6565b506110fd929150611113565b5090565b60206040519081016040526000815290565b61112d91905b808211156110fd5760008155600101611119565b905600a165627a7a723058206016a807d9d5f6060e9c0d1c808e52d4ca30a4cab0140adcc6587bfc13bedf100029" +resolver_bytecode_runtime = 
"606060405236156100a95763ffffffff60e060020a60003504166301ffc9a781146100ae57806310f13a8c146100e25780632203ab561461017c57806329cd62ea146102135780632dff69411461022f5780633b3b57de1461025757806359d1d43c14610289578063623195b014610356578063691f3431146103b257806377372213146103c8578063c3d014d61461041e578063c869023314610437578063d5fa2b0014610465575b600080fd5b34156100b957600080fd5b6100ce600160e060020a031960043516610487565b604051901515815260200160405180910390f35b34156100ed57600080fd5b61017a600480359060446024803590810190830135806020601f8201819004810201604051908101604052818152929190602084018383808284378201915050505050509190803590602001908201803590602001908080601f0160208091040260200160405190810160405281815292919060208401838380828437509496506105f495505050505050565b005b341561018757600080fd5b610195600435602435610805565b60405182815260406020820181815290820183818151815260200191508051906020019080838360005b838110156101d75780820151838201526020016101bf565b50505050905090810190601f1680156102045780820380516001836020036101000a031916815260200191505b50935050505060405180910390f35b341561021e57600080fd5b61017a60043560243560443561092f565b341561023a57600080fd5b610245600435610a2e565b60405190815260200160405180910390f35b341561026257600080fd5b61026d600435610a44565b604051600160a060020a03909116815260200160405180910390f35b341561029457600080fd5b6102df600480359060446024803590810190830135806020601f82018190048102016040519081016040528181529291906020840183838082843750949650610a5f95505050505050565b60405160208082528190810183818151815260200191508051906020019080838360005b8381101561031b578082015183820152602001610303565b50505050905090810190601f1680156103485780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b341561036157600080fd5b61017a600480359060248035919060649060443590810190830135806020601f82018190048102016040519081016040528181529291906020840183838082843750949650610b7e95505050505050565b34156103bd57600080fd5b6102df600435610c7a565b34156103d357600080fd5b61017a600480359060446024803590810190830135806020601f82018190048102016040519081016040528181529291906020840183838082843750949650610d4095505050505050565b341561042957600080fd5b61017a600435602435610e8a565b341561044257600080fd5b61044d600435610f63565b60405191825260208201526040908101905180910390f35b341561047057600080fd5b61017a600435600160a060020a0360243516610f80565b6000600160e060020a031982167f3b3b57de0000000000000000000000000000000000000000000000000000000014806104ea5750600160e060020a031982167fd8389dc500000000000000000000000000000000000000000000000000000000145b8061051e5750600160e060020a031982167f691f343100000000000000000000000000000000000000000000000000000000145b806105525750600160e060020a031982167f2203ab5600000000000000000000000000000000000000000000000000000000145b806105865750600160e060020a031982167fc869023300000000000000000000000000000000000000000000000000000000145b806105ba5750600160e060020a031982167f59d1d43c00000000000000000000000000000000000000000000000000000000145b806105ee5750600160e060020a031982167f01ffc9a700000000000000000000000000000000000000000000000000000000145b92915050565b600080548491600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b151561064d57600080fd5b6102c65a03f1151561065e57600080fd5b50505060405180519050600160a060020a031614151561067d57600080fd5b6000848152600160205260409081902083916005909101908590518082805190602001908083835b602083106106c45780518252601f1990920191602091820191016106a5565b6001836020036101000a03801982511681845116808217855250505050505090500191505090815260200160
40518091039020908051610708929160200190611083565b50826040518082805190602001908083835b602083106107395780518252601f19909201916020918201910161071a565b6001836020036101000a0380198251168184511617909252505050919091019250604091505051908190039020847fd8c9334b1a9c2f9da342a0a2b32629c1a229b6445dad78947f674b44444a75508560405160208082528190810183818151815260200191508051906020019080838360005b838110156107c55780820151838201526020016107ad565b50505050905090810190601f1680156107f25780820380516001836020036101000a031916815260200191505b509250505060405180910390a350505050565b600061080f611101565b60008481526001602081905260409091209092505b838311610922578284161580159061085d5750600083815260068201602052604081205460026000196101006001841615020190911604115b15610917578060060160008481526020019081526020016000208054600181600116156101000203166002900480601f01602080910402602001604051908101604052809291908181526020018280546001816001161561010002031660029004801561090b5780601f106108e05761010080835404028352916020019161090b565b820191906000526020600020905b8154815290600101906020018083116108ee57829003601f168201915b50505050509150610927565b600290920291610824565b600092505b509250929050565b600080548491600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b151561098857600080fd5b6102c65a03f1151561099957600080fd5b50505060405180519050600160a060020a03161415156109b857600080fd5b6040805190810160409081528482526020808301859052600087815260019091522060030181518155602082015160019091015550837f1d6f5e03d3f63eb58751986629a5439baee5079ff04f345becb66e23eb154e46848460405191825260208201526040908101905180910390a250505050565b6000908152600160208190526040909120015490565b600090815260016020526040902054600160a060020a031690565b610a67611101565b60008381526001602052604090819020600501908390518082805190602001908083835b60208310610aaa5780518252601f199092019160209182019101610a8b565b6001836020036101000a03801982511681845116808217855250505050505090500191505090815260200160405180910390208054600181600116156101000203166002900480601f016020809104026020016040519081016040528092919081815260200182805460018160011615610100020316600290048015610b715780601f10610b4657610100808354040283529160200191610b71565b820191906000526020600020905b815481529060010190602001808311610b5457829003601f168201915b5050505050905092915050565b600080548491600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515610bd757600080fd5b6102c65a03f11515610be857600080fd5b50505060405180519050600160a060020a0316141515610c0757600080fd5b6000198301831615610c1857600080fd5b60008481526001602090815260408083208684526006019091529020828051610c45929160200190611083565b5082847faa121bbeef5f32f5961a2a28966e769023910fc9479059ee3495d4c1a696efe360405160405180910390a350505050565b610c82611101565b6001600083600019166000191681526020019081526020016000206002018054600181600116156101000203166002900480601f016020809104026020016040519081016040528092919081815260200182805460018160011615610100020316600290048015610d345780601f10610d0957610100808354040283529160200191610d34565b820191906000526020600020905b815481529060010190602001808311610d1757829003601f168201915b50505050509050919050565b600080548391600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515610d9957600080fd5b6102c65a03f11515610daa57600080fd5b50505060405180519050600160a060020a0316141515610dc957600080fd5b6000838152600160205260409020600201828051610deb929
160200190611083565b50827fb7d29e911041e8d9b843369e890bcb72c9388692ba48b65ac54e7214c4c348f78360405160208082528190810183818151815260200191508051906020019080838360005b83811015610e4b578082015183820152602001610e33565b50505050905090810190601f168015610e785780820380516001836020036101000a031916815260200191505b509250505060405180910390a2505050565b600080548391600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515610ee357600080fd5b6102c65a03f11515610ef457600080fd5b50505060405180519050600160a060020a0316141515610f1357600080fd5b6000838152600160208190526040918290200183905583907f0424b6fe0d9c3bdbece0e7879dc241bb0c22e900be8b6c168b4ee08bd9bf83bc9084905190815260200160405180910390a2505050565b600090815260016020526040902060038101546004909101549091565b600080548391600160a060020a033381169216906302571be39084906040516020015260405160e060020a63ffffffff84160281526004810191909152602401602060405180830381600087803b1515610fd957600080fd5b6102c65a03f11515610fea57600080fd5b50505060405180519050600160a060020a031614151561100957600080fd5b60008381526001602052604090819020805473ffffffffffffffffffffffffffffffffffffffff1916600160a060020a03851617905583907f52d7d861f09ab3d26239d492e8968629f95e9e318cf0b73bfddc441522a15fd290849051600160a060020a03909116815260200160405180910390a2505050565b828054600181600116156101000203166002900490600052602060002090601f016020900481019282601f106110c457805160ff19168380011785556110f1565b828001600101855582156110f1579182015b828111156110f15782518255916020019190600101906110d6565b506110fd929150611113565b5090565b60206040519081016040526000815290565b61112d91905b808211156110fd5760008155600101611119565b905600a165627a7a723058206016a807d9d5f6060e9c0d1c808e52d4ca30a4cab0140adcc6587bfc13bedf100029" + + +reverse_registrar_abi = json.loads('[{"constant":false,"inputs":[{"name":"owner","type":"address"},{"name":"resolver","type":"address"}],"name":"claimWithResolver","outputs":[{"name":"node","type":"bytes32"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"owner","type":"address"}],"name":"claim","outputs":[{"name":"node","type":"bytes32"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"ens","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"defaultResolver","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"name":"addr","type":"address"}],"name":"node","outputs":[{"name":"ret","type":"bytes32"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"name","type":"string"}],"name":"setName","outputs":[{"name":"node","type":"bytes32"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"inputs":[{"name":"ensAddr","type":"address"},{"name":"resolverAddr","type":"address"}],"payable":false,"stateMutability":"nonpayable","type":"constructor"}]') +reverse_registrar_bytecode = 
"6060604052341561000f57600080fd5b604051604080610d96833981016040528080519060200190919080519060200190919050506000826000806101000a81548173ffffffffffffffffffffffffffffffffffffffff021916908373ffffffffffffffffffffffffffffffffffffffff16021790555081600160006101000a81548173ffffffffffffffffffffffffffffffffffffffff021916908373ffffffffffffffffffffffffffffffffffffffff1602179055506000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166302571be37f91d1777781884d03a6757a803996e38de2a42967fb37eeaca72729271025a9e26001026000604051602001526040518263ffffffff167c0100000000000000000000000000000000000000000000000000000000028152600401808260001916600019168152602001915050602060405180830381600087803b151561017a57600080fd5b6102c65a03f1151561018b57600080fd5b50505060405180519050905060008173ffffffffffffffffffffffffffffffffffffffff16141515610277578073ffffffffffffffffffffffffffffffffffffffff16631e83409a336000604051602001526040518263ffffffff167c0100000000000000000000000000000000000000000000000000000000028152600401808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001915050602060405180830381600087803b151561025a57600080fd5b6102c65a03f1151561026b57600080fd5b50505060405180519050505b505050610b0d806102896000396000f300606060405260043610610078576000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680630f5a54661461007d5780631e83409a146100f15780633f15457f14610146578063828eab0e1461019b578063bffbe61c146101f0578063c47f002714610245575b600080fd5b341561008857600080fd5b6100d3600480803573ffffffffffffffffffffffffffffffffffffffff1690602001909190803573ffffffffffffffffffffffffffffffffffffffff169060200190919050506102be565b60405180826000191660001916815260200191505060405180910390f35b34156100fc57600080fd5b610128600480803573ffffffffffffffffffffffffffffffffffffffff1690602001909190505061086e565b60405180826000191660001916815260200191505060405180910390f35b341561015157600080fd5b610159610882565b604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390f35b34156101a657600080fd5b6101ae6108a7565b604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390f35b34156101fb57600080fd5b610227600480803573ffffffffffffffffffffffffffffffffffffffff169060200190919050506108cd565b60405180826000191660001916815260200191505060405180910390f35b341561025057600080fd5b6102a0600480803590602001908201803590602001908080601f0160208091040260200160405190810160405280939291908181526020018383808284378201915050505050509190505061092f565b60405180826000191660001916815260200191505060405180910390f35b60008060006102cc33610a80565b91507f91d1777781884d03a6757a803996e38de2a42967fb37eeaca72729271025a9e260010282604051808360001916600019168152602001826000191660001916815260200192505050604051809103902092506000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166302571be3846000604051602001526040518263ffffffff167c0100000000000000000000000000000000000000000000000000000000028152600401808260001916600019168152602001915050602060405180830381600087803b15156103c157600080fd5b6102c65a03f115156103d257600080fd5b50505060405180519050905060008473ffffffffffffffffffffffffffffffffffffffff16141580156104eb57506000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16630178b8bf846000604051602001526040518263ffffffff167c010000000000000000000000000000000000000000
0000000000000000028152600401808260001916600019168152602001915050602060405180830381600087803b15156104a057600080fd5b6102c65a03f115156104b157600080fd5b5050506040518051905073ffffffffffffffffffffffffffffffffffffffff168473ffffffffffffffffffffffffffffffffffffffff1614155b1561071b573073ffffffffffffffffffffffffffffffffffffffff168173ffffffffffffffffffffffffffffffffffffffff1614151561063b576000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166306ab59237f91d1777781884d03a6757a803996e38de2a42967fb37eeaca72729271025a9e260010284306040518463ffffffff167c010000000000000000000000000000000000000000000000000000000002815260040180846000191660001916815260200183600019166000191681526020018273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff1681526020019350505050600060405180830381600087803b151561062357600080fd5b6102c65a03f1151561063457600080fd5b5050503090505b6000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16631896f70a84866040518363ffffffff167c01000000000000000000000000000000000000000000000000000000000281526004018083600019166000191681526020018273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200192505050600060405180830381600087803b151561070657600080fd5b6102c65a03f1151561071757600080fd5b5050505b8473ffffffffffffffffffffffffffffffffffffffff168173ffffffffffffffffffffffffffffffffffffffff16141515610863576000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166306ab59237f91d1777781884d03a6757a803996e38de2a42967fb37eeaca72729271025a9e260010284886040518463ffffffff167c010000000000000000000000000000000000000000000000000000000002815260040180846000191660001916815260200183600019166000191681526020018273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff1681526020019350505050600060405180830381600087803b151561084e57600080fd5b6102c65a03f1151561085f57600080fd5b5050505b829250505092915050565b600061087b8260006102be565b9050919050565b6000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1681565b600160009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1681565b60007f91d1777781884d03a6757a803996e38de2a42967fb37eeaca72729271025a9e26001026108fc83610a80565b60405180836000191660001916815260200182600019166000191681526020019250505060405180910390209050919050565b600061095d30600160009054906101000a900473ffffffffffffffffffffffffffffffffffffffff166102be565b9050600160009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16637737221382846040518363ffffffff167c010000000000000000000000000000000000000000000000000000000002815260040180836000191660001916815260200180602001828103825283818151815260200191508051906020019080838360005b83811015610a185780820151818401526020810190506109fd565b50505050905090810190601f168015610a455780820380516001836020036101000a031916815260200191505b509350505050600060405180830381600087803b1515610a6457600080fd5b6102c65a03f11515610a7557600080fd5b505050809050919050565b60007f303132333435363738396162636465660000000000000000000000000000000060285b60018103905081600f85161a815360108404935060018103905081600f85161a815360108404935080610aa6576028600020925050509190505600a165627a7a72305820a8513240f040cd9ded89ca4d0c5bda58536850e642e1d933ad64158ef4c820660029" +reverse_registrar_bytecode_runtime = 
"606060405260043610610078576000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680630f5a54661461007d5780631e83409a146100f15780633f15457f14610146578063828eab0e1461019b578063bffbe61c146101f0578063c47f002714610245575b600080fd5b341561008857600080fd5b6100d3600480803573ffffffffffffffffffffffffffffffffffffffff1690602001909190803573ffffffffffffffffffffffffffffffffffffffff169060200190919050506102be565b60405180826000191660001916815260200191505060405180910390f35b34156100fc57600080fd5b610128600480803573ffffffffffffffffffffffffffffffffffffffff1690602001909190505061086e565b60405180826000191660001916815260200191505060405180910390f35b341561015157600080fd5b610159610882565b604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390f35b34156101a657600080fd5b6101ae6108a7565b604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390f35b34156101fb57600080fd5b610227600480803573ffffffffffffffffffffffffffffffffffffffff169060200190919050506108cd565b60405180826000191660001916815260200191505060405180910390f35b341561025057600080fd5b6102a0600480803590602001908201803590602001908080601f0160208091040260200160405190810160405280939291908181526020018383808284378201915050505050509190505061092f565b60405180826000191660001916815260200191505060405180910390f35b60008060006102cc33610a80565b91507f91d1777781884d03a6757a803996e38de2a42967fb37eeaca72729271025a9e260010282604051808360001916600019168152602001826000191660001916815260200192505050604051809103902092506000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166302571be3846000604051602001526040518263ffffffff167c0100000000000000000000000000000000000000000000000000000000028152600401808260001916600019168152602001915050602060405180830381600087803b15156103c157600080fd5b6102c65a03f115156103d257600080fd5b50505060405180519050905060008473ffffffffffffffffffffffffffffffffffffffff16141580156104eb57506000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16630178b8bf846000604051602001526040518263ffffffff167c0100000000000000000000000000000000000000000000000000000000028152600401808260001916600019168152602001915050602060405180830381600087803b15156104a057600080fd5b6102c65a03f115156104b157600080fd5b5050506040518051905073ffffffffffffffffffffffffffffffffffffffff168473ffffffffffffffffffffffffffffffffffffffff1614155b1561071b573073ffffffffffffffffffffffffffffffffffffffff168173ffffffffffffffffffffffffffffffffffffffff1614151561063b576000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166306ab59237f91d1777781884d03a6757a803996e38de2a42967fb37eeaca72729271025a9e260010284306040518463ffffffff167c010000000000000000000000000000000000000000000000000000000002815260040180846000191660001916815260200183600019166000191681526020018273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff1681526020019350505050600060405180830381600087803b151561062357600080fd5b6102c65a03f1151561063457600080fd5b5050503090505b6000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16631896f70a84866040518363ffffffff167c01000000000000000000000000000000000000000000000000000000000281526004018083600019166000191681526020018273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200192505050600060
405180830381600087803b151561070657600080fd5b6102c65a03f1151561071757600080fd5b5050505b8473ffffffffffffffffffffffffffffffffffffffff168173ffffffffffffffffffffffffffffffffffffffff16141515610863576000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166306ab59237f91d1777781884d03a6757a803996e38de2a42967fb37eeaca72729271025a9e260010284886040518463ffffffff167c010000000000000000000000000000000000000000000000000000000002815260040180846000191660001916815260200183600019166000191681526020018273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff1681526020019350505050600060405180830381600087803b151561084e57600080fd5b6102c65a03f1151561085f57600080fd5b5050505b829250505092915050565b600061087b8260006102be565b9050919050565b6000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1681565b600160009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1681565b60007f91d1777781884d03a6757a803996e38de2a42967fb37eeaca72729271025a9e26001026108fc83610a80565b60405180836000191660001916815260200182600019166000191681526020019250505060405180910390209050919050565b600061095d30600160009054906101000a900473ffffffffffffffffffffffffffffffffffffffff166102be565b9050600160009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16637737221382846040518363ffffffff167c010000000000000000000000000000000000000000000000000000000002815260040180836000191660001916815260200180602001828103825283818151815260200191508051906020019080838360005b83811015610a185780820151818401526020810190506109fd565b50505050905090810190601f168015610a455780820380516001836020036101000a031916815260200191505b509350505050600060405180830381600087803b1515610a6457600080fd5b6102c65a03f11515610a7557600080fd5b505050809050919050565b60007f303132333435363738396162636465660000000000000000000000000000000060285b60018103905081600f85161a815360108404935060018103905081600f85161a815360108404935080610aa6576028600020925050509190505600a165627a7a72305820a8513240f040cd9ded89ca4d0c5bda58536850e642e1d933ad64158ef4c820660029" + +reverse_resolver_abi = json.loads('[{"constant":true,"inputs":[],"name":"ens","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"name":"","type":"bytes32"}],"name":"name","outputs":[{"name":"","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"node","type":"bytes32"},{"name":"_name","type":"string"}],"name":"setName","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"inputs":[{"name":"ensAddr","type":"address"}],"payable":false,"stateMutability":"nonpayable","type":"constructor"}]') +reverse_resolver_bytecode = 
"6060604052341561000f57600080fd5b6040516020806106c9833981016040528080519060200190919050506000816000806101000a81548173ffffffffffffffffffffffffffffffffffffffff021916908373ffffffffffffffffffffffffffffffffffffffff1602179055506000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166302571be37f91d1777781884d03a6757a803996e38de2a42967fb37eeaca72729271025a9e26001026000604051602001526040518263ffffffff167c0100000000000000000000000000000000000000000000000000000000028152600401808260001916600019168152602001915050602060405180830381600087803b151561013057600080fd5b6102c65a03f1151561014157600080fd5b50505060405180519050905060008173ffffffffffffffffffffffffffffffffffffffff1614151561022d578073ffffffffffffffffffffffffffffffffffffffff16631e83409a336000604051602001526040518263ffffffff167c0100000000000000000000000000000000000000000000000000000000028152600401808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001915050602060405180830381600087803b151561021057600080fd5b6102c65a03f1151561022157600080fd5b50505060405180519050505b505061048b8061023e6000396000f300606060405260043610610057576000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680633f15457f1461005c578063691f3431146100b15780637737221314610151575b600080fd5b341561006757600080fd5b61006f6101bb565b604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390f35b34156100bc57600080fd5b6100d66004808035600019169060200190919050506101e0565b6040518080602001828103825283818151815260200191508051906020019080838360005b838110156101165780820151818401526020810190506100fb565b50505050905090810190601f1680156101435780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b341561015c57600080fd5b6101b960048080356000191690602001909190803590602001908201803590602001908080601f01602080910402602001604051908101604052809392919081815260200183838082843782019150505050505091905050610290565b005b6000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1681565b60016020528060005260406000206000915090508054600181600116156101000203166002900480601f0160208091040260200160405190810160405280929190818152602001828054600181600116156101000203166002900480156102885780601f1061025d57610100808354040283529160200191610288565b820191906000526020600020905b81548152906001019060200180831161026b57829003601f168201915b505050505081565b816000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166302571be3826000604051602001526040518263ffffffff167c0100000000000000000000000000000000000000000000000000000000028152600401808260001916600019168152602001915050602060405180830381600087803b151561033157600080fd5b6102c65a03f1151561034257600080fd5b5050506040518051905073ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff1614151561038557600080fd5b8160016000856000191660001916815260200190815260200160002090805190602001906103b49291906103ba565b50505050565b828054600181600116156101000203166002900490600052602060002090601f016020900481019282601f106103fb57805160ff1916838001178555610429565b82800160010185558215610429579182015b8281111561042857825182559160200191906001019061040d565b5b509050610436919061043a565b5090565b61045c91905b80821115610458576000816000905550600101610440565b5090565b905600a165627a7a72305820f4c4cb4d191f31a62c4de12f7aecf9b258ba17f3a6ee294191171f7d30c04ab00029" +reverse_resolver_bytecode_runtime = 
"606060405260043610610057576000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680633f15457f1461005c578063691f3431146100b15780637737221314610151575b600080fd5b341561006757600080fd5b61006f6101bb565b604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390f35b34156100bc57600080fd5b6100d66004808035600019169060200190919050506101e0565b6040518080602001828103825283818151815260200191508051906020019080838360005b838110156101165780820151818401526020810190506100fb565b50505050905090810190601f1680156101435780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b341561015c57600080fd5b6101b960048080356000191690602001909190803590602001908201803590602001908080601f01602080910402602001604051908101604052809392919081815260200183838082843782019150505050505091905050610290565b005b6000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1681565b60016020528060005260406000206000915090508054600181600116156101000203166002900480601f0160208091040260200160405190810160405280929190818152602001828054600181600116156101000203166002900480156102885780601f1061025d57610100808354040283529160200191610288565b820191906000526020600020905b81548152906001019060200180831161026b57829003601f168201915b505050505081565b816000809054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166302571be3826000604051602001526040518263ffffffff167c0100000000000000000000000000000000000000000000000000000000028152600401808260001916600019168152602001915050602060405180830381600087803b151561033157600080fd5b6102c65a03f1151561034257600080fd5b5050506040518051905073ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff1614151561038557600080fd5b8160016000856000191660001916815260200190815260200160002090805190602001906103b49291906103ba565b50505050565b828054600181600116156101000203166002900490600052602060002090601f016020900481019282601f106103fb57805160ff1916838001178555610429565b82800160010185558215610429579182015b8281111561042857825182559160200191906001019061040d565b5b509050610436919061043a565b5090565b61045c91905b80821115610458576000816000905550600101610440565b5090565b905600a165627a7a72305820f4c4cb4d191f31a62c4de12f7aecf9b258ba17f3a6ee294191171f7d30c04ab00029" diff --git a/env/lib/python3.8/site-packages/ens/exceptions.py b/env/lib/python3.8/site-packages/ens/exceptions.py new file mode 100644 index 00000000..4048c5d2 --- /dev/null +++ b/env/lib/python3.8/site-packages/ens/exceptions.py @@ -0,0 +1,79 @@ +import idna + + +class AddressMismatch(ValueError): + """ + In order to set up reverse resolution correctly, the ENS name should first + point to the address. This exception is raised if the name does + not currently point to the address. + """ + pass + + +class InvalidName(idna.IDNAError): + """ + This exception is raised if the provided name does not meet + the syntax standards specified in `EIP 137 name syntax + `_. + + For example: names may not start with a dot, or include a space. + """ + pass + + +class UnauthorizedError(Exception): + """ + Raised if the sending account is not the owner of the name + you are trying to modify. Make sure to set ``from`` in the + ``transact`` keyword argument to the owner of the name. + """ + pass + + +class UnownedName(Exception): + """ + Raised if you are trying to modify a name that no one owns. + + If working on a subdomain, make sure the subdomain gets created + first with :meth:`~ens.main.ENS.setup_address`. 
+ """ + pass + + +class BidTooLow(ValueError): + """ + Raised if you bid less than the minimum amount + """ + pass + + +class InvalidBidHash(ValueError): + """ + Raised if you supply incorrect data to generate the bid hash. + """ + pass + + +class InvalidLabel(ValueError): + """ + Raised if you supply an invalid label + """ + pass + + +class OversizeTransaction(ValueError): + """ + Raised if a transaction you are trying to create would cost so + much gas that it could not fit in a block. + + For example: when you try to start too many auctions at once. + """ + pass + + +class UnderfundedBid(ValueError): + """ + Raised if you send less wei with your bid than you declared + as your intent to bid. + """ + pass diff --git a/env/lib/python3.8/site-packages/ens/main.py b/env/lib/python3.8/site-packages/ens/main.py new file mode 100644 index 00000000..22ab9e7e --- /dev/null +++ b/env/lib/python3.8/site-packages/ens/main.py @@ -0,0 +1,365 @@ +from typing import ( + TYPE_CHECKING, + Optional, + Sequence, + Tuple, + Union, + cast, +) + +from eth_typing import ( + Address, + ChecksumAddress, + HexAddress, +) +from eth_utils import ( + is_binary_address, + is_checksum_address, + to_checksum_address, +) +from hexbytes import ( + HexBytes, +) + +from ens import abis +from ens.constants import ( + EMPTY_ADDR_HEX, + ENS_MAINNET_ADDR, + REVERSE_REGISTRAR_DOMAIN, +) +from ens.exceptions import ( + AddressMismatch, + UnauthorizedError, + UnownedName, +) +from ens.utils import ( + address_in, + address_to_reverse_domain, + default, + dict_copy, + init_web3, + is_none_or_zero_address, + is_valid_name, + label_to_hash, + normal_name_to_hash, + normalize_name, + raw_name_to_hash, +) + +if TYPE_CHECKING: + from web3 import Web3 # noqa: F401 + from web3.contract import ( # noqa: F401 + Contract, + ) + from web3.providers import ( # noqa: F401 + BaseProvider, + ) + from web3.types import ( # noqa: F401 + TxParams, + ) + + +class ENS: + """ + Quick access to common Ethereum Name Service functions, + like getting the address for a name. + + Unless otherwise specified, all addresses are assumed to be a `str` in + `checksum format `_, + like: ``"0x314159265dD8dbb310642f98f50C066173C1259b"`` + """ + + labelhash = staticmethod(label_to_hash) + namehash = staticmethod(raw_name_to_hash) + nameprep = staticmethod(normalize_name) + is_valid_name = staticmethod(is_valid_name) + reverse_domain = staticmethod(address_to_reverse_domain) + + def __init__( + self, provider: 'BaseProvider'=cast('BaseProvider', default), addr: ChecksumAddress=None + ) -> None: + """ + :param provider: a single provider used to connect to Ethereum + :type provider: instance of `web3.providers.base.BaseProvider` + :param hex-string addr: the address of the ENS registry on-chain. If not provided, + ENS.py will default to the mainnet ENS registry address. + """ + self.web3 = init_web3(provider) + + ens_addr = addr if addr else ENS_MAINNET_ADDR + self.ens = self.web3.eth.contract(abi=abis.ENS, address=ens_addr) + self._resolverContract = self.web3.eth.contract(abi=abis.RESOLVER) + + @classmethod + def fromWeb3(cls, web3: 'Web3', addr: ChecksumAddress=None) -> 'ENS': + """ + Generate an ENS instance with web3 + + :param `web3.Web3` web3: to infer connection information + :param hex-string addr: the address of the ENS registry on-chain. If not provided, + ENS.py will default to the mainnet ENS registry address. 
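+
+        A minimal usage sketch (assumes a reachable node; the provider
+        URL and the ENS name are placeholders, not part of this API)::
+
+            from web3 import Web3
+            w3 = Web3(Web3.HTTPProvider('http://localhost:8545'))
+            ns = ENS.fromWeb3(w3)
+            ns.address('example.eth')  # forward-resolve; may return None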
+ """ + return cls(web3.manager.provider, addr=addr) + + def address(self, name: str) -> Optional[ChecksumAddress]: + """ + Look up the Ethereum address that `name` currently points to. + + :param str name: an ENS name to look up + :raises InvalidName: if `name` has invalid syntax + """ + return cast(ChecksumAddress, self.resolve(name, 'addr')) + + def name(self, address: ChecksumAddress) -> Optional[str]: + """ + Look up the name that the address points to, using a + reverse lookup. Reverse lookup is opt-in for name owners. + + :param address: + :type address: hex-string + """ + reversed_domain = address_to_reverse_domain(address) + return self.resolve(reversed_domain, get='name') + + @dict_copy + def setup_address( + self, + name: str, + address: Union[Address, ChecksumAddress, HexAddress]=cast(ChecksumAddress, default), + transact: "TxParams"={} + ) -> HexBytes: + """ + Set up the name to point to the supplied address. + The sender of the transaction must own the name, or + its parent name. + + Example: If the caller owns ``parentname.eth`` with no subdomains + and calls this method with ``sub.parentname.eth``, + then ``sub`` will be created as part of this call. + + :param str name: ENS name to set up + :param str address: name will point to this address, in checksum format. If ``None``, + erase the record. If not specified, name will point to the owner's address. + :param dict transact: the transaction configuration, like in + :meth:`~web3.eth.Eth.sendTransaction` + :raises InvalidName: if ``name`` has invalid syntax + :raises UnauthorizedError: if ``'from'`` in `transact` does not own `name` + """ + owner = self.setup_owner(name, transact=transact) + self._assert_control(owner, name) + if is_none_or_zero_address(address): + address = None + elif address is default: + address = owner + elif is_binary_address(address): + address = to_checksum_address(cast(str, address)) + elif not is_checksum_address(address): + raise ValueError("You must supply the address in checksum format") + if self.address(name) == address: + return None + if address is None: + address = EMPTY_ADDR_HEX + transact['from'] = owner + resolver: 'Contract' = self._set_resolver(name, transact=transact) + return resolver.functions.setAddr(raw_name_to_hash(name), address).transact(transact) + + @dict_copy + def setup_name( + self, name: str, address: ChecksumAddress=None, transact: "TxParams"={} + ) -> HexBytes: + """ + Set up the address for reverse lookup, aka "caller ID". + After successful setup, the method :meth:`~ens.main.ENS.name` will return + `name` when supplied with `address`. + + :param str name: ENS name that address will point to + :param str address: to set up, in checksum format + :param dict transact: the transaction configuration, like in + :meth:`~web3.eth.sendTransaction` + :raises AddressMismatch: if the name does not already point to the address + :raises InvalidName: if `name` has invalid syntax + :raises UnauthorizedError: if ``'from'`` in `transact` does not own `name` + :raises UnownedName: if no one owns `name` + """ + if not name: + self._assert_control(address, 'the reverse record') + return self._setup_reverse(None, address, transact=transact) + else: + resolved = self.address(name) + if is_none_or_zero_address(address): + address = resolved + elif resolved and address != resolved and resolved != EMPTY_ADDR_HEX: + raise AddressMismatch( + "Could not set address %r to point to name, because the name resolves to %r. " + "To change the name for an existing address, call setup_address() first." 
% ( + address, resolved + ) + ) + if is_none_or_zero_address(address): + address = self.owner(name) + if is_none_or_zero_address(address): + raise UnownedName("claim subdomain using setup_address() first") + if is_binary_address(address): + address = to_checksum_address(address) + if not is_checksum_address(address): + raise ValueError("You must supply the address in checksum format") + self._assert_control(address, name) + if not resolved: + self.setup_address(name, address, transact=transact) + return self._setup_reverse(name, address, transact=transact) + + def resolve(self, name: str, get: str='addr') -> Optional[Union[ChecksumAddress, str]]: + normal_name = normalize_name(name) + resolver = self.resolver(normal_name) + if resolver: + lookup_function = getattr(resolver.functions, get) + namehash = normal_name_to_hash(normal_name) + address = lookup_function(namehash).call() + if is_none_or_zero_address(address): + return None + return lookup_function(namehash).call() + else: + return None + + def resolver(self, normal_name: str) -> Optional['Contract']: + resolver_addr = self.ens.caller.resolver(normal_name_to_hash(normal_name)) + if is_none_or_zero_address(resolver_addr): + return None + return self._resolverContract(address=resolver_addr) + + def reverser(self, target_address: ChecksumAddress) -> Optional['Contract']: + reversed_domain = address_to_reverse_domain(target_address) + return self.resolver(reversed_domain) + + def owner(self, name: str) -> ChecksumAddress: + """ + Get the owner of a name. Note that this may be different from the + deed holder in the '.eth' registrar. Learn more about the difference + between deed and name ownership in the ENS `Managing Ownership docs + `_ + + :param str name: ENS name to look up + :return: owner address + :rtype: str + """ + node = raw_name_to_hash(name) + return self.ens.caller.owner(node) + + @dict_copy + def setup_owner( + self, + name: str, + new_owner: ChecksumAddress=cast(ChecksumAddress, default), + transact: "TxParams"={} + ) -> ChecksumAddress: + """ + Set the owner of the supplied name to `new_owner`. + + For typical scenarios, you'll never need to call this method directly, + simply call :meth:`setup_name` or :meth:`setup_address`. This method does *not* + set up the name to point to an address. + + If `new_owner` is not supplied, then this will assume you + want the same owner as the parent domain. + + If the caller owns ``parentname.eth`` with no subdomains + and calls this method with ``sub.parentname.eth``, + then ``sub`` will be created as part of this call. + + :param str name: ENS name to set up + :param new_owner: account that will own `name`. If ``None``, set owner to empty addr. + If not specified, name will point to the parent domain owner's address. 
+ :param dict transact: the transaction configuration, like in + :meth:`~web3.eth.Eth.sendTransaction` + :raises InvalidName: if `name` has invalid syntax + :raises UnauthorizedError: if ``'from'`` in `transact` does not own `name` + :returns: the new owner's address + """ + (super_owner, unowned, owned) = self._first_owner(name) + if new_owner is default: + new_owner = super_owner + elif not new_owner: + new_owner = ChecksumAddress(EMPTY_ADDR_HEX) + else: + new_owner = to_checksum_address(new_owner) + current_owner = self.owner(name) + if new_owner == EMPTY_ADDR_HEX and not current_owner: + return None + elif current_owner == new_owner: + return current_owner + else: + self._assert_control(super_owner, name, owned) + self._claim_ownership(new_owner, unowned, owned, super_owner, transact=transact) + return new_owner + + def _assert_control(self, account: ChecksumAddress, name: str, + parent_owned: Optional[str] = None) -> None: + if not address_in(account, self.web3.eth.accounts): + raise UnauthorizedError( + "in order to modify %r, you must control account %r, which owns %r" % ( + name, account, parent_owned or name + ) + ) + + def _first_owner(self, name: str) -> Tuple[Optional[ChecksumAddress], Sequence[str], str]: + """ + Takes a name, and returns the owner of the deepest subdomain that has an owner + + :returns: (owner or None, list(unowned_subdomain_labels), first_owned_domain) + """ + owner = None + unowned = [] + pieces = normalize_name(name).split('.') + while pieces and is_none_or_zero_address(owner): + name = '.'.join(pieces) + owner = self.owner(name) + if is_none_or_zero_address(owner): + unowned.append(pieces.pop(0)) + return (owner, unowned, name) + + @dict_copy + def _claim_ownership( + self, + owner: ChecksumAddress, + unowned: Sequence[str], + owned: str, + old_owner: ChecksumAddress=None, + transact: "TxParams"={} + ) -> None: + transact['from'] = old_owner or owner + for label in reversed(unowned): + self.ens.functions.setSubnodeOwner( + raw_name_to_hash(owned), + label_to_hash(label), + owner + ).transact(transact) + owned = "%s.%s" % (label, owned) + + @dict_copy + def _set_resolver( + self, name: str, resolver_addr: ChecksumAddress=None, transact: "TxParams"={} + ) -> 'Contract': + if is_none_or_zero_address(resolver_addr): + resolver_addr = self.address('resolver.eth') + namehash = raw_name_to_hash(name) + if self.ens.caller.resolver(namehash) != resolver_addr: + self.ens.functions.setResolver( + namehash, + resolver_addr + ).transact(transact) + return self._resolverContract(address=resolver_addr) + + @dict_copy + def _setup_reverse( + self, name: str, address: ChecksumAddress, transact: "TxParams"={} + ) -> HexBytes: + if name: + name = normalize_name(name) + else: + name = '' + transact['from'] = address + return self._reverse_registrar().functions.setName(name).transact(transact) + + def _reverse_registrar(self) -> 'Contract': + addr = self.ens.caller.owner(normal_name_to_hash(REVERSE_REGISTRAR_DOMAIN)) + return self.web3.eth.contract(address=addr, abi=abis.REVERSE_REGISTRAR) diff --git a/env/lib/python3.8/site-packages/ens/utils.py b/env/lib/python3.8/site-packages/ens/utils.py new file mode 100644 index 00000000..652e4788 --- /dev/null +++ b/env/lib/python3.8/site-packages/ens/utils.py @@ -0,0 +1,224 @@ +import copy +import datetime +import functools +from typing import ( + TYPE_CHECKING, + Any, + Callable, + Collection, + Optional, + Type, + TypeVar, + Union, + cast, +) + +from eth_typing import ( + Address, + ChecksumAddress, + HexAddress, + HexStr, +) +from 
eth_utils import ( + is_same_address, + remove_0x_prefix, + to_normalized_address, +) +from hexbytes import ( + HexBytes, +) +import idna + +from ens.constants import ( + ACCEPTABLE_STALE_HOURS, + AUCTION_START_GAS_CONSTANT, + AUCTION_START_GAS_MARGINAL, + EMPTY_ADDR_HEX, + EMPTY_SHA3_BYTES, + REVERSE_REGISTRAR_DOMAIN, +) +from ens.exceptions import ( + InvalidName, +) + +default = object() + + +if TYPE_CHECKING: + from web3 import Web3 as _Web3 # noqa: F401 + from web3.providers import ( # noqa: F401 + BaseProvider, + ) + + +def Web3() -> Type['_Web3']: + from web3 import Web3 as Web3Main + return Web3Main + + +TFunc = TypeVar("TFunc", bound=Callable[..., Any]) + + +def dict_copy(func: TFunc) -> TFunc: + "copy dict keyword args, to avoid modifying caller's copy" + @functools.wraps(func) + def wrapper(*args: Any, **kwargs: Any) -> TFunc: + copied_kwargs = copy.deepcopy(kwargs) + return func(*args, **copied_kwargs) + return cast(TFunc, wrapper) + + +def ensure_hex(data: HexBytes) -> HexBytes: + if not isinstance(data, str): + return Web3().toHex(data) + return data + + +def init_web3(provider: 'BaseProvider'=cast('BaseProvider', default)) -> '_Web3': + from web3 import Web3 as Web3Main + + if provider is default: + w3 = Web3Main(ens=None) + else: + w3 = Web3Main(provider, ens=None) + + return customize_web3(w3) + + +def customize_web3(w3: '_Web3') -> '_Web3': + from web3.middleware import make_stalecheck_middleware + + w3.middleware_onion.remove('name_to_address') + w3.middleware_onion.add( + make_stalecheck_middleware(ACCEPTABLE_STALE_HOURS * 3600), + name='stalecheck', + ) + return w3 + + +def normalize_name(name: str) -> str: + """ + Clean the fully qualified name, as defined in ENS `EIP-137 + `_ + + This does *not* enforce whether ``name`` is a label or fully qualified domain. + + :param str name: the dot-separated ENS name + :raises InvalidName: if ``name`` has invalid syntax + """ + if not name: + return name + elif isinstance(name, (bytes, bytearray)): + name = name.decode('utf-8') + + try: + return idna.uts46_remap(name, std3_rules=True) + except idna.IDNAError as exc: + raise InvalidName(f"{name} is an invalid name, because {exc}") from exc + + +def is_valid_name(name: str) -> bool: + """ + Validate whether the fully qualified name is valid, as defined in ENS `EIP-137 + `_ + + :param str name: the dot-separated ENS name + :returns: True if ``name`` is set, and :meth:`~ens.main.ENS.nameprep` will not raise InvalidName + """ + if not name: + return False + try: + normalize_name(name) + return True + except InvalidName: + return False + + +def to_utc_datetime(timestamp: float) -> Optional[datetime.datetime]: + if timestamp: + return datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc) + else: + return None + + +def sha3_text(val: Union[str, bytes]) -> HexBytes: + if isinstance(val, str): + val = val.encode('utf-8') + return Web3().keccak(val) + + +def label_to_hash(label: str) -> HexBytes: + label = normalize_name(label) + if '.' in label: + raise ValueError("Cannot generate hash for label %r with a '.'" % label) + return Web3().keccak(text=label) + + +def normal_name_to_hash(name: str) -> HexBytes: + node = EMPTY_SHA3_BYTES + if name: + labels = name.split(".") + for label in reversed(labels): + labelhash = label_to_hash(label) + assert isinstance(labelhash, bytes) + assert isinstance(node, bytes) + node = Web3().keccak(node + labelhash) + return node + + +def raw_name_to_hash(name: str) -> HexBytes: + """ + Generate the namehash. 
This is also known as the ``node`` in ENS contracts. + + In normal operation, generating the namehash is handled + behind the scenes. For advanced usage, it is a helpful utility. + + This normalizes the name with `nameprep + `_ + before hashing. + + :param str name: ENS name to hash + :return: the namehash + :rtype: bytes + :raises InvalidName: if ``name`` has invalid syntax + """ + normalized_name = normalize_name(name) + return normal_name_to_hash(normalized_name) + + +def address_in(address: ChecksumAddress, addresses: Collection[ChecksumAddress]) -> bool: + return any(is_same_address(address, item) for item in addresses) + + +def address_to_reverse_domain(address: ChecksumAddress) -> str: + lower_unprefixed_address = remove_0x_prefix(HexStr(to_normalized_address(address))) + return lower_unprefixed_address + '.' + REVERSE_REGISTRAR_DOMAIN + + +def estimate_auction_start_gas(labels: Collection[str]) -> int: + return AUCTION_START_GAS_CONSTANT + AUCTION_START_GAS_MARGINAL * len(labels) + + +def assert_signer_in_modifier_kwargs(modifier_kwargs: Any) -> ChecksumAddress: + ERR_MSG = "You must specify the sending account" + assert len(modifier_kwargs) == 1, ERR_MSG + + _modifier_type, modifier_dict = dict(modifier_kwargs).popitem() + if 'from' not in modifier_dict: + raise TypeError(ERR_MSG) + + return modifier_dict['from'] + + +def is_none_or_zero_address(addr: Union[Address, ChecksumAddress, HexAddress]) -> bool: + return not addr or addr == EMPTY_ADDR_HEX + + +def is_valid_ens_name(ens_name: str) -> bool: + split_domain = ens_name.split('.') + if len(split_domain) == 1: + return False + for name in split_domain: + if not is_valid_name(name): + return False + return True diff --git a/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/LICENSE new file mode 100644 index 00000000..8c67f6d0 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2016 Piper Merriam + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
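For reference, the hashing helpers in `ens/utils.py` above (`label_to_hash`, `normal_name_to_hash`, `raw_name_to_hash`) implement the EIP-137 namehash. A minimal standalone sketch, assuming only `eth_utils.keccak` (already a dependency of this package) and labels that are already normalized via `normalize_name`; the `namehash` name here is ours, not part of the vendored module:

```python
from eth_utils import keccak

def namehash(name: str) -> bytes:
    # EIP-137: fold keccak over the labels right-to-left,
    # starting from 32 zero bytes (EMPTY_SHA3_BYTES above).
    # Labels are assumed already normalized (see normalize_name above).
    node = b'\x00' * 32
    if name:
        for label in reversed(name.split('.')):
            node = keccak(node + keccak(text=label))
    return node

assert namehash('') == b'\x00' * 32
# Reverse records reuse the same scheme on a synthetic name:
# address_to_reverse_domain() maps an address to
# '<lowercase hex, no 0x>.' + REVERSE_REGISTRAR_DOMAIN.
```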
diff --git a/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/METADATA b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/METADATA new file mode 100644 index 00000000..b697400d --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/METADATA @@ -0,0 +1,162 @@ +Metadata-Version: 2.1 +Name: eth-abi +Version: 2.2.0 +Summary: eth_abi: Python utilities for working with Ethereum ABI definitions, especially encoding and decoding +Home-page: https://github.com/ethereum/eth-abi +Author: The Ethereum Foundation +Author-email: snakecharmers@ethereum.org +License: MIT +Keywords: ethereum +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Natural Language :: English +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Requires-Python: >=3.6, <4 +Description-Content-Type: text/markdown +License-File: LICENSE +Requires-Dist: eth-utils (<2.0.0,>=1.2.0) +Requires-Dist: eth-typing (<3.0.0,>=2.0.0) +Requires-Dist: parsimonious (<0.9.0,>=0.8.0) +Provides-Extra: dev +Requires-Dist: bumpversion (<1,>=0.5.3) ; extra == 'dev' +Requires-Dist: pytest-watch (<5,>=4.1.0) ; extra == 'dev' +Requires-Dist: wheel ; extra == 'dev' +Requires-Dist: twine ; extra == 'dev' +Requires-Dist: ipython ; extra == 'dev' +Requires-Dist: pytest (==4.4.1) ; extra == 'dev' +Requires-Dist: pytest-pythonpath (>=0.7.1) ; extra == 'dev' +Requires-Dist: pytest-xdist (==1.22.3) ; extra == 'dev' +Requires-Dist: tox (<3,>=2.9.1) ; extra == 'dev' +Requires-Dist: eth-hash[pycryptodome] ; extra == 'dev' +Requires-Dist: hypothesis (<5.0.0,>=4.18.2) ; extra == 'dev' +Requires-Dist: flake8 (==4.0.1) ; extra == 'dev' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'dev' +Requires-Dist: mypy (==0.910) ; extra == 'dev' +Requires-Dist: pydocstyle (<4,>=3.0.0) ; extra == 'dev' +Requires-Dist: Sphinx (<2,>=1.6.5) ; extra == 'dev' +Requires-Dist: jinja2 (<3.1.0,>=3.0.0) ; extra == 'dev' +Requires-Dist: sphinx-rtd-theme (>=0.1.9) ; extra == 'dev' +Requires-Dist: towncrier (<22,>=21) ; extra == 'dev' +Provides-Extra: doc +Requires-Dist: Sphinx (<2,>=1.6.5) ; extra == 'doc' +Requires-Dist: jinja2 (<3.1.0,>=3.0.0) ; extra == 'doc' +Requires-Dist: sphinx-rtd-theme (>=0.1.9) ; extra == 'doc' +Requires-Dist: towncrier (<22,>=21) ; extra == 'doc' +Provides-Extra: lint +Requires-Dist: flake8 (==4.0.1) ; extra == 'lint' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'lint' +Requires-Dist: mypy (==0.910) ; extra == 'lint' +Requires-Dist: pydocstyle (<4,>=3.0.0) ; extra == 'lint' +Provides-Extra: test +Requires-Dist: pytest (==4.4.1) ; extra == 'test' +Requires-Dist: pytest-pythonpath (>=0.7.1) ; extra == 'test' +Requires-Dist: pytest-xdist (==1.22.3) ; extra == 'test' +Requires-Dist: tox (<3,>=2.9.1) ; extra == 'test' +Requires-Dist: eth-hash[pycryptodome] ; extra == 'test' +Requires-Dist: hypothesis (<5.0.0,>=4.18.2) ; extra == 'test' +Provides-Extra: tools +Requires-Dist: hypothesis (<5.0.0,>=4.18.2) ; extra == 'tools' + +# Ethereum Contract Interface (ABI) Utility + +[![Join the chat at https://gitter.im/ethereum/eth-abi](https://badges.gitter.im/ethereum/eth-abi.svg)](https://gitter.im/ethereum/eth-abi?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +[![Build 
Status](https://circleci.com/gh/ethereum/eth-abi.svg?style=shield)](https://circleci.com/gh/ethereum/eth-abi) +[![PyPI version](https://badge.fury.io/py/eth-abi.svg)](https://badge.fury.io/py/eth-abi) +[![Python versions](https://img.shields.io/pypi/pyversions/eth-abi.svg)](https://pypi.python.org/pypi/eth-abi) +[![Docs build](https://readthedocs.org/projects/eth-abi/badge/?version=latest)](http://eth-abi.readthedocs.io/en/latest/?badge=latest) + + +Python utilities for working with Ethereum ABI definitions, especially encoding and decoding + +Read more in the [documentation on ReadTheDocs](https://eth-abi.readthedocs.io/). [View the change log](https://eth-abi.readthedocs.io/en/latest/releases.html). + +## Quickstart + +```sh +pip install eth_abi +``` + +## Developer Setup + +If you would like to hack on eth-abi, please check out the [Snake Charmers +Tactical Manual](https://github.com/ethereum/snake-charmers-tactical-manual) +for information on how we do: + +- Testing +- Pull Requests +- Code Style +- Documentation + +### Development Environment Setup + +You can set up your dev environment with: + +```sh +git clone git@github.com:ethereum/eth-abi.git +cd eth-abi +virtualenv -p python3 venv +. venv/bin/activate +pip install -e .[dev] +``` + +### Testing Setup + +During development, you might like to have tests run on every file save. + +Show flake8 errors on file change: + +```sh +# Test flake8 +when-changed -v -s -r -1 eth_abi/ tests/ -c "clear; flake8 eth_abi tests && echo 'flake8 success' || echo 'error'" +``` + +Run multi-process tests in one command, but without color: + +```sh +# in the project root: +pytest --numprocesses=4 --looponfail --maxfail=1 +# the same thing, succinctly: +pytest -n 4 -f --maxfail=1 +``` + +Run in one thread, with color and desktop notifications: + +```sh +cd venv +ptw --onfail "notify-send -t 5000 'Test failure ⚠⚠⚠⚠⚠' 'python 3 test on eth-abi failed'" ../tests ../eth_abi +``` + +### Release setup + +For Debian-like systems: +``` +apt install pandoc +``` + +To release a new version: + +```sh +make release bump=$$VERSION_PART_TO_BUMP$$ +``` + +#### How to bumpversion + +The version format for this repo is `{major}.{minor}.{patch}` for stable, and +`{major}.{minor}.{patch}-{stage}.{devnum}` for unstable (`stage` can be alpha or beta). + +To issue the next version in line, specify which part to bump, +like `make release bump=minor` or `make release bump=devnum`. This is typically done from the +master branch, except when releasing a beta (in which case the beta is released from master, +and the previous stable branch is released from said branch). + +If you are in a beta version, `make release bump=stage` will switch to a stable. 
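As an illustration of the scheme above, a hypothetical progression (the version numbers are ours, not from the changelog, and assume a `stage` order of alpha, then beta, then stable):

```sh
make release bump=devnum   # 4.0.0-alpha.1 -> 4.0.0-alpha.2
make release bump=stage    # 4.0.0-alpha.2 -> 4.0.0-beta.1
make release bump=stage    # 4.0.0-beta.1  -> 4.0.0
```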
+ +To issue an unstable version when the current version is stable, specify the +new version explicitly, like `make release bump="--new-version 4.0.0-alpha.1 devnum"` + + diff --git a/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/RECORD b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/RECORD new file mode 100644 index 00000000..cb0961c3 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/RECORD @@ -0,0 +1,41 @@ +eth_abi-2.2.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +eth_abi-2.2.0.dist-info/LICENSE,sha256=yhpAhLbx-OsdiXGFDq9iIveM3X98KaZSs0nQcB73M68,1080 +eth_abi-2.2.0.dist-info/METADATA,sha256=dGQ721VWs6J-Tlw2U6Rl992bsHH4ENq7tLAaXy3tHeM,5923 +eth_abi-2.2.0.dist-info/RECORD,, +eth_abi-2.2.0.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +eth_abi-2.2.0.dist-info/top_level.txt,sha256=Bi9Lg7_2qcq7aP0hKlK6zn2npfHy8LoLc60I_TTWysI,8 +eth_abi/__init__.py,sha256=LnI7P7hnhwjVYNzhzAZj1F1WhkdG9tSh-_2-qIP67-g,510 +eth_abi/__pycache__/__init__.cpython-38.pyc,, +eth_abi/__pycache__/abi.cpython-38.pyc,, +eth_abi/__pycache__/base.cpython-38.pyc,, +eth_abi/__pycache__/codec.cpython-38.pyc,, +eth_abi/__pycache__/constants.cpython-38.pyc,, +eth_abi/__pycache__/decoding.cpython-38.pyc,, +eth_abi/__pycache__/encoding.cpython-38.pyc,, +eth_abi/__pycache__/exceptions.cpython-38.pyc,, +eth_abi/__pycache__/grammar.cpython-38.pyc,, +eth_abi/__pycache__/packed.cpython-38.pyc,, +eth_abi/__pycache__/registry.cpython-38.pyc,, +eth_abi/abi.py,sha256=_8X1PWxRARrGT07Hj2S_RpRcc3S9MfbW_llmPvwUTKI,504 +eth_abi/base.py,sha256=jAYh-9-TGQHoBVP2TP7MAA0xV-6hV5yMzPdWK85Co0k,4846 +eth_abi/codec.py,sha256=5AqTj2qadl_fHGjMncgSbTTkIq5RzXcanDvikFg5wFw,6884 +eth_abi/constants.py,sha256=CxUmQBvEGdjlMAOZSU5vhEbdu_OOMO1rVPOgoRHEPp8,57 +eth_abi/decoding.py,sha256=DzwbBuTXs4wySud3f5vmTfz1Vw0vhF_2fCzwQqh5aAI,16855 +eth_abi/encoding.py,sha256=QjjqldwuVsZn3MtySoIuM3zHZWfVGx8l2iPUqrjBn5c,20279 +eth_abi/exceptions.py,sha256=P-0SbBuyD73PtkJmtfKovDkKqPD5RBQ9yaEN2d4FI0o,2911 +eth_abi/grammar.py,sha256=vdcCHmFWU4owkkwUIjWSRWW4pWfPvJJCAQOOHJW7nMc,12172 +eth_abi/packed.py,sha256=I2eDuCdp1kXs2sIzJGbklDnb3ULx8EbKTa0uQJ-pLF0,387 +eth_abi/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +eth_abi/registry.py,sha256=3IWQLDBxpJbPIO7RSxAtPeoHUByP1gevvJsC9-ZOihg,19104 +eth_abi/tools/__init__.py,sha256=qyxY82bT0HM8m9bqpo0IMFY_y4OM9C0YA4gUACnUWQg,65 +eth_abi/tools/__pycache__/__init__.cpython-38.pyc,, +eth_abi/tools/__pycache__/_strategies.cpython-38.pyc,, +eth_abi/tools/_strategies.py,sha256=81qSUpFoMBlDM64mqo2gB0J0Z9nQxjL_Ljx9GXD_7OQ,5785 +eth_abi/utils/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +eth_abi/utils/__pycache__/__init__.cpython-38.pyc,, +eth_abi/utils/__pycache__/numeric.cpython-38.pyc,, +eth_abi/utils/__pycache__/padding.cpython-38.pyc,, +eth_abi/utils/__pycache__/string.cpython-38.pyc,, +eth_abi/utils/numeric.py,sha256=qhGPsBiUY2kpDU65YZkE4fLwd5Jd1L0YvUI0qVTsUk8,2089 +eth_abi/utils/padding.py,sha256=fetFK9WEq5V3CbhXRyOUZSDFZs8sfAcAo_FUkMOw2Nw,427 +eth_abi/utils/string.py,sha256=XgmBL5SfZ2OOiFqhnQy4DOebO_tyzOy3qcPK0oZVLD4,435 diff --git a/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git 
a/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/top_level.txt b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/top_level.txt new file mode 100644 index 00000000..4f78b858 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi-2.2.0.dist-info/top_level.txt @@ -0,0 +1 @@ +eth_abi diff --git a/env/lib/python3.8/site-packages/eth_abi/__init__.py b/env/lib/python3.8/site-packages/eth_abi/__init__.py new file mode 100644 index 00000000..3c69adb6 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/__init__.py @@ -0,0 +1,25 @@ +import sys +import warnings + +import pkg_resources + +from eth_abi.abi import ( # NOQA + decode, + decode_abi, + decode_single, + encode, + encode_abi, + encode_single, + is_encodable, + is_encodable_type, +) + +if sys.version_info[1] == 6: + # reference setup.py -> python_requires + warnings.warn( + "eth-abi version 3.0.0 will no longer support Python 3.6", + category=DeprecationWarning, + stacklevel=2 + ) + +__version__ = pkg_resources.get_distribution('eth-abi').version diff --git a/env/lib/python3.8/site-packages/eth_abi/abi.py b/env/lib/python3.8/site-packages/eth_abi/abi.py new file mode 100644 index 00000000..c0b35875 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/abi.py @@ -0,0 +1,19 @@ +from eth_abi.codec import ( + ABICodec, +) +from eth_abi.registry import ( + registry, +) + +default_codec = ABICodec(registry) + +encode = default_codec.encode +encode_abi = default_codec.encode_abi # deprecated +encode_single = default_codec.encode_single # deprecated + +decode = default_codec.decode +decode_abi = default_codec.decode_abi # deprecated +decode_single = default_codec.decode_single # deprecated + +is_encodable = default_codec.is_encodable +is_encodable_type = default_codec.is_encodable_type diff --git a/env/lib/python3.8/site-packages/eth_abi/base.py b/env/lib/python3.8/site-packages/eth_abi/base.py new file mode 100644 index 00000000..6ec0a058 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/base.py @@ -0,0 +1,147 @@ +import functools + +from eth_typing.abi import ( + TypeStr, +) + +from .grammar import ( + BasicType, + TupleType, + normalize, + parse, +) + + +def parse_type_str(expected_base=None, with_arrlist=False): + """ + Used by BaseCoder subclasses as a convenience for implementing the + ``from_type_str`` method required by ``ABIRegistry``. Useful if normalizing + then parsing a type string with an (optional) expected base is required in + that method. 
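Concretely, the basic coders later in this diff consume this decorator as shown below; the snippet mirrors `UnsignedIntegerDecoder.from_type_str` as defined in `decoding.py` further down:

```python
class UnsignedIntegerDecoder(Fixed32ByteSizeDecoder):
    decoder_fn = staticmethod(big_endian_to_int)
    is_big_endian = True

    @parse_type_str('uint')
    def from_type_str(cls, abi_type, registry):
        # abi_type arrives already normalized, parsed, and validated;
        # for the type string 'uint256', abi_type.sub == 256.
        return cls(value_bit_size=abi_type.sub)
```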
+ """ + def decorator(old_from_type_str): + @functools.wraps(old_from_type_str) + def new_from_type_str(cls, type_str, registry): + normalized_type_str = normalize(type_str) + abi_type = parse(normalized_type_str) + + type_str_repr = repr(type_str) + if type_str != normalized_type_str: + type_str_repr = '{} (normalized to {})'.format( + type_str_repr, + repr(normalized_type_str), + ) + + if expected_base is not None: + if not isinstance(abi_type, BasicType): + raise ValueError( + 'Cannot create {} for non-basic type {}'.format( + cls.__name__, + type_str_repr, + ) + ) + if abi_type.base != expected_base: + raise ValueError( + 'Cannot create {} for type {}: expected type with ' + "base '{}'".format( + cls.__name__, + type_str_repr, + expected_base, + ) + ) + + if not with_arrlist and abi_type.arrlist is not None: + raise ValueError( + 'Cannot create {} for type {}: expected type with ' + 'no array dimension list'.format( + cls.__name__, + type_str_repr, + ) + ) + if with_arrlist and abi_type.arrlist is None: + raise ValueError( + 'Cannot create {} for type {}: expected type with ' + 'array dimension list'.format( + cls.__name__, + type_str_repr, + ) + ) + + # Perform general validation of default solidity types + abi_type.validate() + + return old_from_type_str(cls, abi_type, registry) + + return classmethod(new_from_type_str) + + return decorator + + +def parse_tuple_type_str(old_from_type_str): + """ + Used by BaseCoder subclasses as a convenience for implementing the + ``from_type_str`` method required by ``ABIRegistry``. Useful if normalizing + then parsing a tuple type string is required in that method. + """ + @functools.wraps(old_from_type_str) + def new_from_type_str(cls, type_str, registry): + normalized_type_str = normalize(type_str) + abi_type = parse(normalized_type_str) + + type_str_repr = repr(type_str) + if type_str != normalized_type_str: + type_str_repr = '{} (normalized to {})'.format( + type_str_repr, + repr(normalized_type_str), + ) + + if not isinstance(abi_type, TupleType): + raise ValueError( + 'Cannot create {} for non-tuple type {}'.format( + cls.__name__, + type_str_repr, + ) + ) + + abi_type.validate() + + return old_from_type_str(cls, abi_type, registry) + + return classmethod(new_from_type_str) + + +class BaseCoder: + """ + Base class for all encoder and decoder classes. + """ + is_dynamic = False + + def __init__(self, **kwargs): + cls = type(self) + + # Ensure no unrecognized kwargs were given + for key, value in kwargs.items(): + if not hasattr(cls, key): + raise AttributeError( + 'Property {key} not found on {cls_name} class. ' + '`{cls_name}.__init__` only accepts keyword arguments which are ' + 'present on the {cls_name} class.'.format( + key=key, + cls_name=cls.__name__, + ) + ) + setattr(self, key, value) + + # Validate given combination of kwargs + self.validate() + + def validate(self): + pass + + @classmethod + def from_type_str(cls, type_str: TypeStr, registry) -> 'BaseCoder': # pragma: no cover + """ + Used by :any:`ABIRegistry` to get an appropriate encoder or decoder + instance for the given type string and type registry. 
+ """ + raise NotImplementedError('Must implement `from_type_str`') diff --git a/env/lib/python3.8/site-packages/eth_abi/codec.py b/env/lib/python3.8/site-packages/eth_abi/codec.py new file mode 100644 index 00000000..d1acc545 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/codec.py @@ -0,0 +1,214 @@ +from typing import ( + Any, + Iterable, + Tuple, +) +import warnings + +from eth_typing.abi import ( + Decodable, + TypeStr, +) +from eth_utils import ( + is_bytes, +) + +from eth_abi.decoding import ( + ContextFramesBytesIO, + TupleDecoder, +) +from eth_abi.encoding import ( + TupleEncoder, +) +from eth_abi.exceptions import ( + EncodingError, +) +from eth_abi.registry import ( + ABIRegistry, +) + + +class BaseABICoder: + """ + Base class for porcelain coding APIs. These are classes which wrap + instances of :class:`~eth_abi.registry.ABIRegistry` to provide last-mile + coding functionality. + """ + def __init__(self, registry: ABIRegistry): + """ + Constructor. + + :param registry: The registry providing the encoders to be used when + encoding values. + """ + self._registry = registry + + +class ABIEncoder(BaseABICoder): + """ + Wraps a registry to provide last-mile encoding functionality. + """ + def encode_single(self, typ: TypeStr, arg: Any) -> bytes: + """ + Encodes the python value ``arg`` as a binary value of the ABI type + ``typ``. + + :param typ: The string representation of the ABI type that will be used + for encoding e.g. ``'uint256'``, ``'bytes[]'``, ``'(int,int)'``, + etc. + :param arg: The python value to be encoded. + + :returns: The binary representation of the python value ``arg`` as a + value of the ABI type ``typ``. + """ + warnings.warn( + "abi.encode_single() and abi.encode_single_packed() are deprecated and will be removed " + "in version 4.0.0 in favor of abi.encode() and abi.encode_packed(), respectively", + category=DeprecationWarning, + ) + + encoder = self._registry.get_encoder(typ) + + return encoder(arg) + + def encode_abi(self, types: Iterable[TypeStr], args: Iterable[Any]) -> bytes: + """ + Encodes the python values in ``args`` as a sequence of binary values of + the ABI types in ``types`` via the head-tail mechanism. + + :param types: An iterable of string representations of the ABI types + that will be used for encoding e.g. ``('uint256', 'bytes[]', + '(int,int)')`` + :param args: An iterable of python values to be encoded. + + :returns: The head-tail encoded binary representation of the python + values in ``args`` as values of the ABI types in ``types``. + """ + warnings.warn( + "abi.encode_abi() and abi.encode_abi_packed() are deprecated and will be removed in " + "version 4.0.0 in favor of abi.encode() and abi.encode_packed(), respectively", + category=DeprecationWarning, + ) + return self.encode(types, args) + + def encode(self, types, args): + encoders = [ + self._registry.get_encoder(type_str) + for type_str in types + ] + + encoder = TupleEncoder(encoders=encoders) + + return encoder(args) + + def is_encodable(self, typ: TypeStr, arg: Any) -> bool: + """ + Determines if the python value ``arg`` is encodable as a value of the + ABI type ``typ``. + + :param typ: A string representation for the ABI type against which the + python value ``arg`` will be checked e.g. ``'uint256'``, + ``'bytes[]'``, ``'(int,int)'``, etc. + :param arg: The python value whose encodability should be checked. + + :returns: ``True`` if ``arg`` is encodable as a value of the ABI type + ``typ``. Otherwise, ``False``. 
+ """ + encoder = self._registry.get_encoder(typ) + + try: + encoder.validate_value(arg) + except EncodingError: + return False + except AttributeError: + try: + encoder(arg) + except EncodingError: + return False + + return True + + def is_encodable_type(self, typ: TypeStr) -> bool: + """ + Returns ``True`` if values for the ABI type ``typ`` can be encoded by + this codec. + + :param typ: A string representation for the ABI type that will be + checked for encodability e.g. ``'uint256'``, ``'bytes[]'``, + ``'(int,int)'``, etc. + + :returns: ``True`` if values for ``typ`` can be encoded by this codec. + Otherwise, ``False``. + """ + return self._registry.has_encoder(typ) + + +class ABIDecoder(BaseABICoder): + """ + Wraps a registry to provide last-mile decoding functionality. + """ + stream_class = ContextFramesBytesIO + + def decode_single(self, typ: TypeStr, data: Decodable) -> Any: + """ + Decodes the binary value ``data`` of the ABI type ``typ`` into its + equivalent python value. + + :param typ: The string representation of the ABI type that will be used for + decoding e.g. ``'uint256'``, ``'bytes[]'``, ``'(int,int)'``, etc. + :param data: The binary value to be decoded. + + :returns: The equivalent python value of the ABI value represented in + ``data``. + """ + warnings.warn( + "abi.decode_single() is deprecated and will be removed in version 4.0.0 in favor of " + "abi.decode()", + category=DeprecationWarning, + ) + + if not is_bytes(data): + raise TypeError("The `data` value must be of bytes type. Got {0}".format(type(data))) + + decoder = self._registry.get_decoder(typ) + stream = self.stream_class(data) + + return decoder(stream) + + def decode_abi(self, types: Iterable[TypeStr], data: Decodable) -> Tuple[Any, ...]: + """ + Decodes the binary value ``data`` as a sequence of values of the ABI types + in ``types`` via the head-tail mechanism into a tuple of equivalent python + values. + + :param types: An iterable of string representations of the ABI types that + will be used for decoding e.g. ``('uint256', 'bytes[]', '(int,int)')`` + :param data: The binary value to be decoded. + + :returns: A tuple of equivalent python values for the ABI values + represented in ``data``. + """ + warnings.warn( + "abi.decode_abi() is deprecated and will be removed in version 4.0.0 in favor of " + "abi.decode()", + category=DeprecationWarning + ) + return self.decode(types, data) + + def decode(self, types, data): + if not is_bytes(data): + raise TypeError(f"The `data` value must be of bytes type. 
Got {type(data)}") + + decoders = [ + self._registry.get_decoder(type_str) + for type_str in types + ] + + decoder = TupleDecoder(decoders=decoders) + stream = self.stream_class(data) + + return decoder(stream) + + +class ABICodec(ABIEncoder, ABIDecoder): + pass diff --git a/env/lib/python3.8/site-packages/eth_abi/constants.py b/env/lib/python3.8/site-packages/eth_abi/constants.py new file mode 100644 index 00000000..45b1b363 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/constants.py @@ -0,0 +1,3 @@ +TT256 = 2 ** 256 +TT256M1 = 2 ** 256 - 1 +TT255 = 2 ** 255 diff --git a/env/lib/python3.8/site-packages/eth_abi/decoding.py b/env/lib/python3.8/site-packages/eth_abi/decoding.py new file mode 100644 index 00000000..f706a94a --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/decoding.py @@ -0,0 +1,563 @@ +import abc +import decimal +import io +from typing import ( + Any, +) + +from eth_utils import ( + big_endian_to_int, + to_normalized_address, + to_tuple, +) + +from eth_abi.base import ( + BaseCoder, + parse_tuple_type_str, + parse_type_str, +) +from eth_abi.exceptions import ( + DecodingError, + InsufficientDataBytes, + NonEmptyPaddingBytes, +) +from eth_abi.utils.numeric import ( + TEN, + abi_decimal_context, + ceil32, +) + + +class ContextFramesBytesIO(io.BytesIO): + """ + A byte stream which can track a series of contextual frames in a stack. This + data structure is necessary to perform nested decodings using the + :py:class:``HeadTailDecoder`` since offsets present in head sections are + relative only to a particular encoded object. These offsets can only be + used to locate a position in a decoding stream if they are paired with a + contextual offset that establishes the position of the object in which they + are found. + + For example, consider the encoding of a value for the following type:: + + type: (int,(int,int[])) + value: (1,(2,[3,3])) + + There are two tuples in this type: one inner and one outer. The inner tuple + type contains a dynamic type ``int[]`` and, therefore, is itself dynamic. + This means that its value encoding will be placed in the tail section of the + outer tuple's encoding. Furthermore, the inner tuple's encoding will, + itself, contain a tail section with the encoding for ``[3,3]``. All + together, the encoded value of ``(1,(2,[3,3]))`` would look like this (the + data values are normally 32 bytes wide but have been truncated to remove the + redundant zeros at the beginnings of their encodings):: + + offset data + -------------------------- + ^ 0 0x01 + | 32 0x40 <-- Offset of object A in global frame (64) + -----|-------------------- + Global frame ^ 64 0x02 <-- Beginning of object A (64 w/offset 0 = 64) + | | 96 0x40 <-- Offset of object B in frame of object A (64) + -----|-Object A's frame--- + | | 128 0x02 <-- Beginning of object B (64 w/offset 64 = 128) + | | 160 0x03 + v v 192 0x03 + -------------------------- + + Note that the offset of object B is encoded as 64 which only specifies the + beginning of its encoded value relative to the beginning of object A's + encoding. Globally, object B is located at offset 128. In order to make + sense out of object B's offset, it needs to be positioned in the context of + its enclosing object's frame (object A). + """ + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + + self._frames = [] + self._total_offset = 0 + + def seek_in_frame(self, pos, *args, **kwargs): + """ + Seeks relative to the total offset of the current contextual frames. 
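The frame bookkeeping is easier to see against a concrete encoding. A small sketch using the public codec (example values are ours):

```python
from eth_abi import encode_abi

blob = encode_abi(['uint256', 'uint256[]'], [1, [2, 3]])
words = [blob[i:i + 32] for i in range(0, len(blob), 32)]
# words[0] -> 1   static head slot for the uint256
# words[1] -> 64  offset of the dynamic tail within this frame
# words[2] -> 2   array length, i.e. the start of the uint256[] frame
# words[3] -> 2, words[4] -> 3
assert int.from_bytes(words[1], 'big') == 64
```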
+ """ + self.seek(self._total_offset + pos, *args, **kwargs) + + def push_frame(self, offset): + """ + Pushes a new contextual frame onto the stack with the given offset and a + return position at the current cursor position then seeks to the new + total offset. + """ + self._frames.append((offset, self.tell())) + self._total_offset += offset + + self.seek_in_frame(0) + + def pop_frame(self): + """ + Pops the current contextual frame off of the stack and returns the + cursor to the frame's return position. + """ + try: + offset, return_pos = self._frames.pop() + except IndexError: + raise IndexError('no frames to pop') + self._total_offset -= offset + + self.seek(return_pos) + + +class BaseDecoder(BaseCoder, metaclass=abc.ABCMeta): + """ + Base class for all decoder classes. Subclass this if you want to define a + custom decoder class. Subclasses must also implement + :any:`BaseCoder.from_type_str`. + """ + @abc.abstractmethod + def decode(self, stream: ContextFramesBytesIO) -> Any: # pragma: no cover + """ + Decodes the given stream of bytes into a python value. Should raise + :any:`exceptions.DecodingError` if a python value cannot be decoded + from the given byte stream. + """ + pass + + def __call__(self, stream: ContextFramesBytesIO) -> Any: + return self.decode(stream) + + +class HeadTailDecoder(BaseDecoder): + is_dynamic = True + + tail_decoder = None + + def validate(self): + super().validate() + + if self.tail_decoder is None: + raise ValueError("No `tail_decoder` set") + + def decode(self, stream): + start_pos = decode_uint_256(stream) + + stream.push_frame(start_pos) + value = self.tail_decoder(stream) + stream.pop_frame() + + return value + + +class TupleDecoder(BaseDecoder): + decoders = None + + def __init__(self, **kwargs): + super().__init__(**kwargs) + + self.decoders = tuple( + HeadTailDecoder(tail_decoder=d) if getattr(d, 'is_dynamic', False) else d + for d in self.decoders + ) + + self.is_dynamic = any(getattr(d, 'is_dynamic', False) for d in self.decoders) + + def validate(self): + super().validate() + + if self.decoders is None: + raise ValueError("No `decoders` set") + + @to_tuple + def decode(self, stream): + for decoder in self.decoders: + yield decoder(stream) + + @parse_tuple_type_str + def from_type_str(cls, abi_type, registry): + decoders = tuple( + registry.get_decoder(c.to_type_str()) + for c in abi_type.components + ) + + return cls(decoders=decoders) + + +class SingleDecoder(BaseDecoder): + decoder_fn = None + + def validate(self): + super().validate() + + if self.decoder_fn is None: + raise ValueError("No `decoder_fn` set") + + def validate_padding_bytes(self, value, padding_bytes): + raise NotImplementedError("Must be implemented by subclasses") + + def decode(self, stream): + raw_data = self.read_data_from_stream(stream) + data, padding_bytes = self.split_data_and_padding(raw_data) + value = self.decoder_fn(data) + self.validate_padding_bytes(value, padding_bytes) + + return value + + def read_data_from_stream(self, stream): + raise NotImplementedError("Must be implemented by subclasses") + + def split_data_and_padding(self, raw_data): + return raw_data, b'' + + +class BaseArrayDecoder(BaseDecoder): + item_decoder = None + + def __init__(self, **kwargs): + super().__init__(**kwargs) + + # Use a head-tail decoder to decode dynamic elements + if self.item_decoder.is_dynamic: + self.item_decoder = HeadTailDecoder( + tail_decoder=self.item_decoder, + ) + + def validate(self): + super().validate() + + if self.item_decoder is None: + raise ValueError("No 
`item_decoder` set") + + @parse_type_str(with_arrlist=True) + def from_type_str(cls, abi_type, registry): + item_decoder = registry.get_decoder(abi_type.item_type.to_type_str()) + + array_spec = abi_type.arrlist[-1] + if len(array_spec) == 1: + # If array dimension is fixed + return SizedArrayDecoder( + array_size=array_spec[0], + item_decoder=item_decoder, + ) + else: + # If array dimension is dynamic + return DynamicArrayDecoder(item_decoder=item_decoder) + + +class SizedArrayDecoder(BaseArrayDecoder): + array_size = None + + def __init__(self, **kwargs): + super().__init__(**kwargs) + + self.is_dynamic = self.item_decoder.is_dynamic + + @to_tuple + def decode(self, stream): + for _ in range(self.array_size): + yield self.item_decoder(stream) + + +class DynamicArrayDecoder(BaseArrayDecoder): + # Dynamic arrays are always dynamic, regardless of their elements + is_dynamic = True + + @to_tuple + def decode(self, stream): + array_size = decode_uint_256(stream) + stream.push_frame(32) + for _ in range(array_size): + yield self.item_decoder(stream) + stream.pop_frame() + + +class FixedByteSizeDecoder(SingleDecoder): + decoder_fn = None + value_bit_size = None + data_byte_size = None + is_big_endian = None + + def validate(self): + super().validate() + + if self.value_bit_size is None: + raise ValueError("`value_bit_size` may not be None") + if self.data_byte_size is None: + raise ValueError("`data_byte_size` may not be None") + if self.decoder_fn is None: + raise ValueError("`decoder_fn` may not be None") + if self.is_big_endian is None: + raise ValueError("`is_big_endian` may not be None") + + if self.value_bit_size % 8 != 0: + raise ValueError( + "Invalid value bit size: {0}. Must be a multiple of 8".format( + self.value_bit_size, + ) + ) + + if self.value_bit_size > self.data_byte_size * 8: + raise ValueError("Value byte size exceeds data size") + + def read_data_from_stream(self, stream): + data = stream.read(self.data_byte_size) + + if len(data) != self.data_byte_size: + raise InsufficientDataBytes( + "Tried to read {0} bytes. Only got {1} bytes".format( + self.data_byte_size, + len(data), + ) + ) + + return data + + def split_data_and_padding(self, raw_data): + value_byte_size = self._get_value_byte_size() + padding_size = self.data_byte_size - value_byte_size + + if self.is_big_endian: + padding_bytes = raw_data[:padding_size] + data = raw_data[padding_size:] + else: + data = raw_data[:value_byte_size] + padding_bytes = raw_data[value_byte_size:] + + return data, padding_bytes + + def validate_padding_bytes(self, value, padding_bytes): + value_byte_size = self._get_value_byte_size() + padding_size = self.data_byte_size - value_byte_size + + if padding_bytes != b'\x00' * padding_size: + raise NonEmptyPaddingBytes( + "Padding bytes were not empty: {0}".format(repr(padding_bytes)) + ) + + def _get_value_byte_size(self): + value_byte_size = self.value_bit_size // 8 + return value_byte_size + + +class Fixed32ByteSizeDecoder(FixedByteSizeDecoder): + data_byte_size = 32 + + +class BooleanDecoder(Fixed32ByteSizeDecoder): + value_bit_size = 8 + is_big_endian = True + + @staticmethod + def decoder_fn(data): + if data == b'\x00': + return False + elif data == b'\x01': + return True + else: + raise NonEmptyPaddingBytes( + "Boolean must be either 0x0 or 0x1. 
Got: {0}".format(repr(data)) + ) + + @parse_type_str('bool') + def from_type_str(cls, abi_type, registry): + return cls() + + +class AddressDecoder(Fixed32ByteSizeDecoder): + value_bit_size = 20 * 8 + is_big_endian = True + decoder_fn = staticmethod(to_normalized_address) + + @parse_type_str('address') + def from_type_str(cls, abi_type, registry): + return cls() + + +# +# Unsigned Integer Decoders +# +class UnsignedIntegerDecoder(Fixed32ByteSizeDecoder): + decoder_fn = staticmethod(big_endian_to_int) + is_big_endian = True + + @parse_type_str('uint') + def from_type_str(cls, abi_type, registry): + return cls(value_bit_size=abi_type.sub) + + +decode_uint_256 = UnsignedIntegerDecoder(value_bit_size=256) + + +# +# Signed Integer Decoders +# +class SignedIntegerDecoder(Fixed32ByteSizeDecoder): + is_big_endian = True + + def decoder_fn(self, data): + value = big_endian_to_int(data) + if value >= 2 ** (self.value_bit_size - 1): + return value - 2 ** self.value_bit_size + else: + return value + + def validate_padding_bytes(self, value, padding_bytes): + value_byte_size = self._get_value_byte_size() + padding_size = self.data_byte_size - value_byte_size + + if value >= 0: + expected_padding_bytes = b'\x00' * padding_size + else: + expected_padding_bytes = b'\xff' * padding_size + + if padding_bytes != expected_padding_bytes: + raise NonEmptyPaddingBytes( + "Padding bytes were not empty: {0}".format(repr(padding_bytes)) + ) + + @parse_type_str('int') + def from_type_str(cls, abi_type, registry): + return cls(value_bit_size=abi_type.sub) + + +# +# Bytes1..32 +# +class BytesDecoder(Fixed32ByteSizeDecoder): + is_big_endian = False + + @staticmethod + def decoder_fn(data): + return data + + @parse_type_str('bytes') + def from_type_str(cls, abi_type, registry): + return cls(value_bit_size=abi_type.sub * 8) + + +class BaseFixedDecoder(Fixed32ByteSizeDecoder): + frac_places = None + is_big_endian = True + + def validate(self): + super().validate() + + if self.frac_places is None: + raise ValueError("must specify `frac_places`") + + if self.frac_places <= 0 or self.frac_places > 80: + raise ValueError("`frac_places` must be in range (0, 80]") + + +class UnsignedFixedDecoder(BaseFixedDecoder): + def decoder_fn(self, data): + value = big_endian_to_int(data) + + with decimal.localcontext(abi_decimal_context): + decimal_value = decimal.Decimal(value) / TEN ** self.frac_places + + return decimal_value + + @parse_type_str('ufixed') + def from_type_str(cls, abi_type, registry): + value_bit_size, frac_places = abi_type.sub + + return cls(value_bit_size=value_bit_size, frac_places=frac_places) + + +class SignedFixedDecoder(BaseFixedDecoder): + def decoder_fn(self, data): + value = big_endian_to_int(data) + if value >= 2 ** (self.value_bit_size - 1): + signed_value = value - 2 ** self.value_bit_size + else: + signed_value = value + + with decimal.localcontext(abi_decimal_context): + decimal_value = decimal.Decimal(signed_value) / TEN ** self.frac_places + + return decimal_value + + def validate_padding_bytes(self, value, padding_bytes): + value_byte_size = self._get_value_byte_size() + padding_size = self.data_byte_size - value_byte_size + + if value >= 0: + expected_padding_bytes = b'\x00' * padding_size + else: + expected_padding_bytes = b'\xff' * padding_size + + if padding_bytes != expected_padding_bytes: + raise NonEmptyPaddingBytes( + "Padding bytes were not empty: {0}".format(repr(padding_bytes)) + ) + + @parse_type_str('fixed') + def from_type_str(cls, abi_type, registry): + value_bit_size, frac_places = 
abi_type.sub + + return cls(value_bit_size=value_bit_size, frac_places=frac_places) + + +# +# String and Bytes +# +class ByteStringDecoder(SingleDecoder): + is_dynamic = True + + @staticmethod + def decoder_fn(data): + return data + + @staticmethod + def read_data_from_stream(stream): + data_length = decode_uint_256(stream) + padded_length = ceil32(data_length) + + data = stream.read(padded_length) + + if len(data) < padded_length: + raise InsufficientDataBytes( + "Tried to read {0} bytes. Only got {1} bytes".format( + padded_length, + len(data), + ) + ) + + padding_bytes = data[data_length:] + + if padding_bytes != b'\x00' * (padded_length - data_length): + raise NonEmptyPaddingBytes( + "Padding bytes were not empty: {0}".format(repr(padding_bytes)) + ) + + return data[:data_length] + + def validate_padding_bytes(self, value, padding_bytes): + pass + + @parse_type_str('bytes') + def from_type_str(cls, abi_type, registry): + return cls() + + +class StringDecoder(ByteStringDecoder): + @parse_type_str('string') + def from_type_str(cls, abi_type, registry): + return cls() + + @staticmethod + def decoder_fn(data): + try: + value = data.decode("utf-8") + except UnicodeDecodeError as e: + raise DecodingError( + e.encoding, + e.object, + e.start, + e.end, + "The returned type for this function is string which is " + "expected to be a UTF8 encoded string of text. The returned " + "value could not be decoded as valid UTF8. This is indicative " + "of a broken application which is using incorrect return types for " + "binary data.") from e + return value diff --git a/env/lib/python3.8/site-packages/eth_abi/encoding.py b/env/lib/python3.8/site-packages/eth_abi/encoding.py new file mode 100644 index 00000000..c6b8e3fe --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/encoding.py @@ -0,0 +1,725 @@ +import abc +import codecs +import decimal +from itertools import ( + accumulate, +) +from typing import ( + Any, + Optional, + Type, +) + +from eth_utils import ( + int_to_big_endian, + is_address, + is_boolean, + is_bytes, + is_integer, + is_list_like, + is_number, + is_text, + to_canonical_address, +) + +from eth_abi.base import ( + BaseCoder, + parse_tuple_type_str, + parse_type_str, +) +from eth_abi.exceptions import ( + EncodingTypeError, + IllegalValue, + ValueOutOfBounds, +) +from eth_abi.utils.numeric import ( + TEN, + abi_decimal_context, + ceil32, + compute_signed_fixed_bounds, + compute_signed_integer_bounds, + compute_unsigned_fixed_bounds, + compute_unsigned_integer_bounds, +) +from eth_abi.utils.padding import ( + fpad, + zpad, + zpad_right, +) +from eth_abi.utils.string import ( + abbr, +) + + +class BaseEncoder(BaseCoder, metaclass=abc.ABCMeta): + """ + Base class for all encoder classes. Subclass this if you want to define a + custom encoder class. Subclasses must also implement + :any:`BaseCoder.from_type_str`. + """ + @abc.abstractmethod + def encode(self, value: Any) -> bytes: # pragma: no cover + """ + Encodes the given value as a sequence of bytes. Should raise + :any:`exceptions.EncodingError` if ``value`` cannot be encoded. + """ + pass + + @abc.abstractmethod + def validate_value(self, value: Any) -> None: # pragma: no cover + """ + Checks whether or not the given value can be encoded by this encoder. + If the given value cannot be encoded, must raise + :any:`exceptions.EncodingError`. 
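The `ufixed`/`fixed` coders defined further below scale a `decimal.Decimal` by `10**frac_places` before writing it as a big-endian integer. A round-trip sketch using the deprecated-but-still-present single-value helpers (the type string and value are ours):

```python
from decimal import Decimal
from eth_abi import encode_single, decode_single

blob = encode_single('ufixed128x18', Decimal('1.5'))
# stored as 1.5 * 10**18 == 1500000000000000000
assert decode_single('ufixed128x18', blob) == Decimal('1.5')
```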
+ """ + pass + + @classmethod + def invalidate_value( + cls, + value: Any, + exc: Type[Exception] = EncodingTypeError, + msg: Optional[str] = None, + ) -> None: + """ + Throws a standard exception for when a value is not encodable by an + encoder. + """ + raise exc( + "Value `{rep}` of type {typ} cannot be encoded by {cls}{msg}".format( + rep=abbr(value), + typ=type(value), + cls=cls.__name__, + msg="" if msg is None else (": " + msg), + ) + ) + + def __call__(self, value: Any) -> bytes: + return self.encode(value) + + +class TupleEncoder(BaseEncoder): + encoders = None + + def __init__(self, **kwargs): + super().__init__(**kwargs) + + self.is_dynamic = any(getattr(e, 'is_dynamic', False) for e in self.encoders) + + def validate(self): + super().validate() + + if self.encoders is None: + raise ValueError("`encoders` may not be none") + + def validate_value(self, value): + if not is_list_like(value): + self.invalidate_value( + value, + msg="must be list-like object such as array or tuple", + ) + + if len(value) != len(self.encoders): + self.invalidate_value( + value, + exc=ValueOutOfBounds, + msg="value has {} items when {} were expected".format( + len(value), + len(self.encoders), + ), + ) + + for item, encoder in zip(value, self.encoders): + try: + encoder.validate_value(item) + except AttributeError: + encoder(item) + + def encode(self, values): + self.validate_value(values) + + raw_head_chunks = [] + tail_chunks = [] + for value, encoder in zip(values, self.encoders): + if getattr(encoder, 'is_dynamic', False): + raw_head_chunks.append(None) + tail_chunks.append(encoder(value)) + else: + raw_head_chunks.append(encoder(value)) + tail_chunks.append(b'') + + head_length = sum( + 32 if item is None else len(item) + for item in raw_head_chunks + ) + tail_offsets = (0,) + tuple(accumulate(map(len, tail_chunks[:-1]))) + head_chunks = tuple( + encode_uint_256(head_length + offset) if chunk is None else chunk + for chunk, offset in zip(raw_head_chunks, tail_offsets) + ) + + encoded_value = b''.join(head_chunks + tuple(tail_chunks)) + + return encoded_value + + @parse_tuple_type_str + def from_type_str(cls, abi_type, registry): + encoders = tuple( + registry.get_encoder(c.to_type_str()) + for c in abi_type.components + ) + + return cls(encoders=encoders) + + +class FixedSizeEncoder(BaseEncoder): + value_bit_size = None + data_byte_size = None + encode_fn = None + type_check_fn = None + is_big_endian = None + + def validate(self): + super().validate() + + if self.value_bit_size is None: + raise ValueError("`value_bit_size` may not be none") + if self.data_byte_size is None: + raise ValueError("`data_byte_size` may not be none") + if self.encode_fn is None: + raise ValueError("`encode_fn` may not be none") + if self.is_big_endian is None: + raise ValueError("`is_big_endian` may not be none") + + if self.value_bit_size % 8 != 0: + raise ValueError( + "Invalid value bit size: {0}. 
Must be a multiple of 8".format( + self.value_bit_size, + ) + ) + + if self.value_bit_size > self.data_byte_size * 8: + raise ValueError("Value byte size exceeds data size") + + def validate_value(self, value): + raise NotImplementedError("Must be implemented by subclasses") + + def encode(self, value): + self.validate_value(value) + base_encoded_value = self.encode_fn(value) + + if self.is_big_endian: + padded_encoded_value = zpad(base_encoded_value, self.data_byte_size) + else: + padded_encoded_value = zpad_right(base_encoded_value, self.data_byte_size) + + return padded_encoded_value + + +class Fixed32ByteSizeEncoder(FixedSizeEncoder): + data_byte_size = 32 + + +class BooleanEncoder(Fixed32ByteSizeEncoder): + value_bit_size = 8 + is_big_endian = True + + @classmethod + def validate_value(cls, value): + if not is_boolean(value): + cls.invalidate_value(value) + + @classmethod + def encode_fn(cls, value): + if value is True: + return b'\x01' + elif value is False: + return b'\x00' + else: + raise ValueError("Invariant") + + @parse_type_str('bool') + def from_type_str(cls, abi_type, registry): + return cls() + + +class PackedBooleanEncoder(BooleanEncoder): + data_byte_size = 1 + + +class NumberEncoder(Fixed32ByteSizeEncoder): + is_big_endian = True + bounds_fn = None + illegal_value_fn = None + type_check_fn = None + + def validate(self): + super().validate() + + if self.bounds_fn is None: + raise ValueError("`bounds_fn` cannot be null") + if self.type_check_fn is None: + raise ValueError("`type_check_fn` cannot be null") + + def validate_value(self, value): + if not self.type_check_fn(value): + self.invalidate_value(value) + + illegal_value = ( + self.illegal_value_fn is not None and + self.illegal_value_fn(value) + ) + if illegal_value: + self.invalidate_value(value, exc=IllegalValue) + + lower_bound, upper_bound = self.bounds_fn(self.value_bit_size) + if value < lower_bound or value > upper_bound: + self.invalidate_value( + value, + exc=ValueOutOfBounds, + msg=( + "Cannot be encoded in {} bits. 
Must be bounded " + "between [{}, {}].".format( + self.value_bit_size, + lower_bound, + upper_bound, + ) + ) + ) + + +class UnsignedIntegerEncoder(NumberEncoder): + encode_fn = staticmethod(int_to_big_endian) + bounds_fn = staticmethod(compute_unsigned_integer_bounds) + type_check_fn = staticmethod(is_integer) + + @parse_type_str('uint') + def from_type_str(cls, abi_type, registry): + return cls(value_bit_size=abi_type.sub) + + +encode_uint_256 = UnsignedIntegerEncoder(value_bit_size=256, data_byte_size=32) + + +class PackedUnsignedIntegerEncoder(UnsignedIntegerEncoder): + @parse_type_str('uint') + def from_type_str(cls, abi_type, registry): + return cls( + value_bit_size=abi_type.sub, + data_byte_size=abi_type.sub // 8, + ) + + +class SignedIntegerEncoder(NumberEncoder): + bounds_fn = staticmethod(compute_signed_integer_bounds) + type_check_fn = staticmethod(is_integer) + + def encode_fn(self, value): + return int_to_big_endian(value % (2 ** self.value_bit_size)) + + def encode(self, value): + self.validate_value(value) + base_encoded_value = self.encode_fn(value) + + if value >= 0: + padded_encoded_value = zpad(base_encoded_value, self.data_byte_size) + else: + padded_encoded_value = fpad(base_encoded_value, self.data_byte_size) + + return padded_encoded_value + + @parse_type_str('int') + def from_type_str(cls, abi_type, registry): + return cls(value_bit_size=abi_type.sub) + + +class PackedSignedIntegerEncoder(SignedIntegerEncoder): + @parse_type_str('int') + def from_type_str(cls, abi_type, registry): + return cls( + value_bit_size=abi_type.sub, + data_byte_size=abi_type.sub // 8, + ) + + +class BaseFixedEncoder(NumberEncoder): + frac_places = None + + @staticmethod + def type_check_fn(value): + return is_number(value) and not isinstance(value, float) + + @staticmethod + def illegal_value_fn(value): + if isinstance(value, decimal.Decimal): + return value.is_nan() or value.is_infinite() + + return False + + def validate_value(self, value): + super().validate_value(value) + + with decimal.localcontext(abi_decimal_context): + residue = value % (TEN ** -self.frac_places) + + if residue > 0: + self.invalidate_value( + value, + exc=IllegalValue, + msg='residue {} outside allowed fractional precision of {}'.format( + repr(residue), + self.frac_places, + ), + ) + + def validate(self): + super().validate() + + if self.frac_places is None: + raise ValueError("must specify `frac_places`") + + if self.frac_places <= 0 or self.frac_places > 80: + raise ValueError("`frac_places` must be in range (0, 80]") + + +class UnsignedFixedEncoder(BaseFixedEncoder): + def bounds_fn(self, value_bit_size): + return compute_unsigned_fixed_bounds(self.value_bit_size, self.frac_places) + + def encode_fn(self, value): + with decimal.localcontext(abi_decimal_context): + scaled_value = value * TEN ** self.frac_places + integer_value = int(scaled_value) + + return int_to_big_endian(integer_value) + + @parse_type_str('ufixed') + def from_type_str(cls, abi_type, registry): + value_bit_size, frac_places = abi_type.sub + + return cls( + value_bit_size=value_bit_size, + frac_places=frac_places, + ) + + +class PackedUnsignedFixedEncoder(UnsignedFixedEncoder): + @parse_type_str('ufixed') + def from_type_str(cls, abi_type, registry): + value_bit_size, frac_places = abi_type.sub + + return cls( + value_bit_size=value_bit_size, + data_byte_size=value_bit_size // 8, + frac_places=frac_places, + ) + + +class SignedFixedEncoder(BaseFixedEncoder): + def bounds_fn(self, value_bit_size): + return 
compute_signed_fixed_bounds(self.value_bit_size, self.frac_places) + + def encode_fn(self, value): + with decimal.localcontext(abi_decimal_context): + scaled_value = value * TEN ** self.frac_places + integer_value = int(scaled_value) + + unsigned_integer_value = integer_value % (2 ** self.value_bit_size) + + return int_to_big_endian(unsigned_integer_value) + + def encode(self, value): + self.validate_value(value) + base_encoded_value = self.encode_fn(value) + + if value >= 0: + padded_encoded_value = zpad(base_encoded_value, self.data_byte_size) + else: + padded_encoded_value = fpad(base_encoded_value, self.data_byte_size) + + return padded_encoded_value + + @parse_type_str('fixed') + def from_type_str(cls, abi_type, registry): + value_bit_size, frac_places = abi_type.sub + + return cls( + value_bit_size=value_bit_size, + frac_places=frac_places, + ) + + +class PackedSignedFixedEncoder(SignedFixedEncoder): + @parse_type_str('fixed') + def from_type_str(cls, abi_type, registry): + value_bit_size, frac_places = abi_type.sub + + return cls( + value_bit_size=value_bit_size, + data_byte_size=value_bit_size // 8, + frac_places=frac_places, + ) + + +class AddressEncoder(Fixed32ByteSizeEncoder): + value_bit_size = 20 * 8 + encode_fn = staticmethod(to_canonical_address) + is_big_endian = True + + @classmethod + def validate_value(cls, value): + if not is_address(value): + cls.invalidate_value(value) + + def validate(self): + super().validate() + + if self.value_bit_size != 20 * 8: + raise ValueError('Addresses must be 160 bits in length') + + @parse_type_str('address') + def from_type_str(cls, abi_type, registry): + return cls() + + +class PackedAddressEncoder(AddressEncoder): + data_byte_size = 20 + + +class BytesEncoder(Fixed32ByteSizeEncoder): + is_big_endian = False + + def validate_value(self, value): + if not is_bytes(value): + self.invalidate_value(value) + + byte_size = self.value_bit_size // 8 + if len(value) > byte_size: + self.invalidate_value( + value, + exc=ValueOutOfBounds, + msg="exceeds total byte size for bytes{} encoding".format(byte_size), + ) + + @staticmethod + def encode_fn(value): + return value + + @parse_type_str('bytes') + def from_type_str(cls, abi_type, registry): + return cls(value_bit_size=abi_type.sub * 8) + + +class PackedBytesEncoder(BytesEncoder): + @parse_type_str('bytes') + def from_type_str(cls, abi_type, registry): + return cls( + value_bit_size=abi_type.sub * 8, + data_byte_size=abi_type.sub, + ) + + +class ByteStringEncoder(BaseEncoder): + is_dynamic = True + + @classmethod + def validate_value(cls, value): + if not is_bytes(value): + cls.invalidate_value(value) + + @classmethod + def encode(cls, value): + cls.validate_value(value) + + if not value: + padded_value = b'\x00' * 32 + else: + padded_value = zpad_right(value, ceil32(len(value))) + + encoded_size = encode_uint_256(len(value)) + encoded_value = encoded_size + padded_value + + return encoded_value + + @parse_type_str('bytes') + def from_type_str(cls, abi_type, registry): + return cls() + + +class PackedByteStringEncoder(ByteStringEncoder): + is_dynamic = False + + @classmethod + def encode(cls, value): + cls.validate_value(value) + return value + + +class TextStringEncoder(BaseEncoder): + is_dynamic = True + + @classmethod + def validate_value(cls, value): + if not is_text(value): + cls.invalidate_value(value) + + @classmethod + def encode(cls, value): + cls.validate_value(value) + + value_as_bytes = codecs.encode(value, 'utf8') + + if not value_as_bytes: + padded_value = b'\x00' * 32 + else: + 
padded_value = zpad_right(value_as_bytes, ceil32(len(value_as_bytes))) + + encoded_size = encode_uint_256(len(value_as_bytes)) + encoded_value = encoded_size + padded_value + + return encoded_value + + @parse_type_str('string') + def from_type_str(cls, abi_type, registry): + return cls() + + +class PackedTextStringEncoder(TextStringEncoder): + is_dynamic = False + + @classmethod + def encode(cls, value): + cls.validate_value(value) + return codecs.encode(value, 'utf8') + + +class BaseArrayEncoder(BaseEncoder): + item_encoder = None + + def validate(self): + super().validate() + + if self.item_encoder is None: + raise ValueError("`item_encoder` may not be none") + + def validate_value(self, value): + if not is_list_like(value): + self.invalidate_value( + value, + msg="must be list-like such as array or tuple", + ) + + for item in value: + self.item_encoder.validate_value(item) + + def encode_elements(self, value): + self.validate_value(value) + + item_encoder = self.item_encoder + tail_chunks = tuple(item_encoder(i) for i in value) + + items_are_dynamic = getattr(item_encoder, 'is_dynamic', False) + if not items_are_dynamic: + return b''.join(tail_chunks) + + head_length = 32 * len(value) + tail_offsets = (0,) + tuple(accumulate(map(len, tail_chunks[:-1]))) + head_chunks = tuple( + encode_uint_256(head_length + offset) + for offset in tail_offsets + ) + return b''.join(head_chunks + tail_chunks) + + @parse_type_str(with_arrlist=True) + def from_type_str(cls, abi_type, registry): + item_encoder = registry.get_encoder(abi_type.item_type.to_type_str()) + + array_spec = abi_type.arrlist[-1] + if len(array_spec) == 1: + # If array dimension is fixed + return SizedArrayEncoder( + array_size=array_spec[0], + item_encoder=item_encoder, + ) + else: + # If array dimension is dynamic + return DynamicArrayEncoder(item_encoder=item_encoder) + + +class PackedArrayEncoder(BaseArrayEncoder): + array_size = None + + def validate_value(self, value): + super().validate_value(value) + + if self.array_size is not None and len(value) != self.array_size: + self.invalidate_value( + value, + exc=ValueOutOfBounds, + msg="value has {} items when {} were expected".format( + len(value), + self.array_size, + ), + ) + + def encode(self, value): + encoded_elements = self.encode_elements(value) + + return encoded_elements + + @parse_type_str(with_arrlist=True) + def from_type_str(cls, abi_type, registry): + item_encoder = registry.get_encoder(abi_type.item_type.to_type_str()) + + array_spec = abi_type.arrlist[-1] + if len(array_spec) == 1: + return cls( + array_size=array_spec[0], + item_encoder=item_encoder, + ) + else: + return cls(item_encoder=item_encoder) + + +class SizedArrayEncoder(BaseArrayEncoder): + array_size = None + + def __init__(self, **kwargs): + super().__init__(**kwargs) + + self.is_dynamic = self.item_encoder.is_dynamic + + def validate(self): + super().validate() + + if self.array_size is None: + raise ValueError("`array_size` may not be none") + + def validate_value(self, value): + super().validate_value(value) + + if len(value) != self.array_size: + self.invalidate_value( + value, + exc=ValueOutOfBounds, + msg="value has {} items when {} were expected".format( + len(value), + self.array_size, + ), + ) + + def encode(self, value): + encoded_elements = self.encode_elements(value) + + return encoded_elements + + +class DynamicArrayEncoder(BaseArrayEncoder): + is_dynamic = True + + def encode(self, value): + encoded_size = encode_uint_256(len(value)) + encoded_elements = self.encode_elements(value) + 
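+        # Per the ABI spec, a dynamic array encodes as its uint256 element
+        # count followed by the element encodings produced above (including
+        # offset heads when the items themselves are dynamic).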
encoded_value = encoded_size + encoded_elements + + return encoded_value diff --git a/env/lib/python3.8/site-packages/eth_abi/exceptions.py b/env/lib/python3.8/site-packages/eth_abi/exceptions.py new file mode 100644 index 00000000..f0ec058b --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/exceptions.py @@ -0,0 +1,123 @@ +import parsimonious + + +class EncodingError(Exception): + """ + Base exception for any error that occurs during encoding. + """ + pass + + +class EncodingTypeError(EncodingError): + """ + Raised when trying to encode a python value whose type is not supported for + the output ABI type. + """ + pass + + +class IllegalValue(EncodingError): + """ + Raised when trying to encode a python value with the correct type but with + a value that is not considered legal for the output ABI type. + + Example: + + .. code-block:: python + + fixed128x19_encoder(Decimal('NaN')) # cannot encode NaN + """ + pass + + +class ValueOutOfBounds(IllegalValue): + """ + Raised when trying to encode a python value with the correct type but with + a value that appears outside the range of valid values for the output ABI + type. + + Example: + + .. code-block:: python + + ufixed8x1_encoder(Decimal('25.6')) # out of bounds + """ + pass + + +class DecodingError(Exception): + """ + Base exception for any error that occurs during decoding. + """ + pass + + +class InsufficientDataBytes(DecodingError): + """ + Raised when there are insufficient data to decode a value for a given ABI + type. + """ + pass + + +class NonEmptyPaddingBytes(DecodingError): + """ + Raised when the padding bytes of an ABI value are malformed. + """ + pass + + +class ParseError(parsimonious.ParseError): + """ + Raised when an ABI type string cannot be parsed. + """ + def __str__(self): + return "Parse error at '{}' (column {}) in type string '{}'".format( + self.text[self.pos:self.pos + 5], + self.column(), + self.text, + ) + + +class ABITypeError(ValueError): + """ + Raised when a parsed ABI type has inconsistent properties; for example, + when trying to parse the type string ``'uint7'`` (which has a bit-width + that is not congruent with zero modulo eight). + """ + pass + + +class PredicateMappingError(Exception): + """ + Raised when an error occurs in a registry's internal mapping. + """ + pass + + +class NoEntriesFound(ValueError, PredicateMappingError): + """ + Raised when no registration is found for a type string in a registry's + internal mapping. + + .. warning:: + + In a future version of ``eth-abi``, this error class will no longer + inherit from ``ValueError``. + """ + pass + + +class MultipleEntriesFound(ValueError, PredicateMappingError): + """ + Raised when multiple registrations are found for a type string in a + registry's internal mapping. This error is non-recoverable and indicates + that a registry was configured incorrectly. Registrations are expected to + cover completely distinct ranges of type strings. + + .. warning:: + + In a future version of ``eth-abi``, this error class will no longer + inherit from ``ValueError``. 
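+
+    A minimal sketch of such a collision (hypothetical registrations; both
+    entries match the query string ``'uint256'``):
+
+    .. code-block:: python
+
+        registry.register_encoder('uint256', encoder_a)
+        registry.register_encoder(lambda ts: ts.startswith('uint'), encoder_b, label='b')
+
+        registry.get_encoder('uint256')  # raises MultipleEntriesFound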
+ """ + pass diff --git a/env/lib/python3.8/site-packages/eth_abi/grammar.py b/env/lib/python3.8/site-packages/eth_abi/grammar.py new file mode 100644 index 00000000..1bf45e1f --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/grammar.py @@ -0,0 +1,430 @@ +import functools +import re + +import parsimonious +from parsimonious import ( + expressions, +) + +from eth_abi.exceptions import ( + ABITypeError, + ParseError, +) + +grammar = parsimonious.Grammar(r""" +type = tuple_type / basic_type + +tuple_type = components arrlist? +components = non_zero_tuple / zero_tuple + +non_zero_tuple = "(" type next_type* ")" +next_type = "," type + +zero_tuple = "()" + +basic_type = base sub? arrlist? + +base = alphas + +sub = two_size / digits +two_size = (digits "x" digits) + +arrlist = (const_arr / dynam_arr)+ +const_arr = "[" digits "]" +dynam_arr = "[]" + +alphas = ~"[A-Za-z]+" +digits = ~"[1-9][0-9]*" +""") + + +class NodeVisitor(parsimonious.NodeVisitor): + """ + Parsimonious node visitor which performs both parsing of type strings and + post-processing of parse trees. Parsing operations are cached. + """ + grammar = grammar + + def visit_non_zero_tuple(self, node, visited_children): + # Ignore left and right parens + _, first, rest, _ = visited_children + + return (first,) + rest + + def visit_tuple_type(self, node, visited_children): + components, arrlist = visited_children + + return TupleType(components, arrlist, node=node) + + def visit_next_type(self, node, visited_children): + # Ignore comma + _, abi_type = visited_children + + return abi_type + + def visit_zero_tuple(self, node, visited_children): + return tuple() + + def visit_basic_type(self, node, visited_children): + base, sub, arrlist = visited_children + + return BasicType(base, sub, arrlist, node=node) + + def visit_two_size(self, node, visited_children): + # Ignore "x" + first, _, second = visited_children + + return first, second + + def visit_const_arr(self, node, visited_children): + # Ignore left and right brackets + _, int_value, _ = visited_children + + return (int_value,) + + def visit_dynam_arr(self, node, visited_children): + return tuple() + + def visit_alphas(self, node, visited_children): + return node.text + + def visit_digits(self, node, visited_children): + return int(node.text) + + def generic_visit(self, node, visited_children): + if isinstance(node.expr, expressions.OneOf): + # Unwrap value chosen from alternatives + return visited_children[0] + + if isinstance(node.expr, expressions.Optional): + # Unwrap optional value or return `None` + if len(visited_children) != 0: + return visited_children[0] + + return None + + return tuple(visited_children) + + @functools.lru_cache(maxsize=None) + def parse(self, type_str): + """ + Parses a type string into an appropriate instance of + :class:`~eth_abi.grammar.ABIType`. If a type string cannot be parsed, + throws :class:`~eth_abi.exceptions.ParseError`. + + :param type_str: The type string to be parsed. + :returns: An instance of :class:`~eth_abi.grammar.ABIType` containing + information about the parsed type string. + """ + if not isinstance(type_str, str): + raise TypeError('Can only parse string values: got {}'.format(type(type_str))) + + try: + return super().parse(type_str) + except parsimonious.ParseError as e: + raise ParseError(e.text, e.pos, e.expr) + + +visitor = NodeVisitor() + + +class ABIType: + """ + Base class for results of type string parsing operations. 
+ """ + __slots__ = ('arrlist', 'node') + + def __init__(self, arrlist=None, node=None): + self.arrlist = arrlist + """ + The list of array dimensions for a parsed type. Equal to ``None`` if + type string has no array dimensions. + """ + + self.node = node + """ + The parsimonious ``Node`` instance associated with this parsed type. + Used to generate error messages for invalid types. + """ + + def __repr__(self): # pragma: no cover + return '<{} {}>'.format( + type(self).__qualname__, + repr(self.to_type_str()), + ) + + def __eq__(self, other): + # Two ABI types are equal if their string representations are equal + return ( + type(self) is type(other) and + self.to_type_str() == other.to_type_str() + ) + + def to_type_str(self): # pragma: no cover + """ + Returns the string representation of an ABI type. This will be equal to + the type string from which it was created. + """ + raise NotImplementedError('Must implement `to_type_str`') + + @property + def item_type(self): + """ + If this type is an array type, equal to an appropriate + :class:`~eth_abi.grammar.ABIType` instance for the array's items. + """ + raise NotImplementedError('Must implement `item_type`') + + def validate(self): # pragma: no cover + """ + Validates the properties of an ABI type against the solidity ABI spec: + + https://solidity.readthedocs.io/en/develop/abi-spec.html + + Raises :class:`~eth_abi.exceptions.ABITypeError` if validation fails. + """ + raise NotImplementedError('Must implement `validate`') + + def invalidate(self, error_msg): + # Invalidates an ABI type with the given error message. Expects that a + # parsimonious node was provided from the original parsing operation + # that yielded this type. + node = self.node + + raise ABITypeError( + "For '{comp_str}' type at column {col} " + "in '{type_str}': {error_msg}".format( + comp_str=node.text, + col=node.start + 1, + type_str=node.full_text, + error_msg=error_msg, + ), + ) + + @property + def is_array(self): + """ + Equal to ``True`` if a type is an array type (i.e. if it has an array + dimension list). Otherwise, equal to ``False``. + """ + return self.arrlist is not None + + @property + def is_dynamic(self): + """ + Equal to ``True`` if a type has a dynamically sized encoding. + Otherwise, equal to ``False``. + """ + raise NotImplementedError('Must implement `is_dynamic`') + + @property + def _has_dynamic_arrlist(self): + return self.is_array and any(len(dim) == 0 for dim in self.arrlist) + + +class TupleType(ABIType): + """ + Represents the result of parsing a tuple type string e.g. "(int,bool)". + """ + __slots__ = ('components',) + + def __init__(self, components, arrlist=None, *, node=None): + super().__init__(arrlist, node) + + self.components = components + """ + A tuple of :class:`~eth_abi.grammar.ABIType` instances for each of the + tuple type's components. 
+ """ + + def to_type_str(self): + arrlist = self.arrlist + + if isinstance(arrlist, tuple): + arrlist = ''.join(repr(list(a)) for a in arrlist) + else: + arrlist = '' + + return '({}){}'.format( + ','.join(c.to_type_str() for c in self.components), + arrlist, + ) + + @property + def item_type(self): + if not self.is_array: + raise ValueError("Cannot determine item type for non-array type '{}'".format( + self.to_type_str(), + )) + + return type(self)( + self.components, + self.arrlist[:-1] or None, + node=self.node, + ) + + def validate(self): + for c in self.components: + c.validate() + + @property + def is_dynamic(self): + if self._has_dynamic_arrlist: + return True + + return any(c.is_dynamic for c in self.components) + + +class BasicType(ABIType): + """ + Represents the result of parsing a basic type string e.g. "uint", "address", + "ufixed128x19[][2]". + """ + __slots__ = ('base', 'sub') + + def __init__(self, base, sub=None, arrlist=None, *, node=None): + super().__init__(arrlist, node) + + self.base = base + """The base of a basic type e.g. "uint" for "uint256" etc.""" + + self.sub = sub + """ + The sub type of a basic type e.g. ``256`` for "uint256" or ``(128, 18)`` + for "ufixed128x18" etc. Equal to ``None`` if type string has no sub + type. + """ + + def to_type_str(self): + sub, arrlist = self.sub, self.arrlist + + if isinstance(sub, int): + sub = str(sub) + elif isinstance(sub, tuple): + sub = 'x'.join(str(s) for s in sub) + else: + sub = '' + + if isinstance(arrlist, tuple): + arrlist = ''.join(repr(list(a)) for a in arrlist) + else: + arrlist = '' + + return self.base + sub + arrlist + + @property + def item_type(self): + if not self.is_array: + raise ValueError("Cannot determine item type for non-array type '{}'".format( + self.to_type_str(), + )) + + return type(self)( + self.base, + self.sub, + self.arrlist[:-1] or None, + node=self.node, + ) + + @property + def is_dynamic(self): + if self._has_dynamic_arrlist: + return True + + if self.base == 'string': + return True + + if self.base == 'bytes' and self.sub is None: + return True + + return False + + def validate(self): + base, sub = self.base, self.sub + + # Check validity of string type + if base == 'string': + if sub is not None: + self.invalidate('string type cannot have suffix') + + # Check validity of bytes type + elif base == 'bytes': + if not (sub is None or isinstance(sub, int)): + self.invalidate('bytes type must have either no suffix or a numerical suffix') + + if isinstance(sub, int) and sub > 32: + self.invalidate('maximum 32 bytes for fixed-length bytes') + + # Check validity of integer type + elif base in ('int', 'uint'): + if not isinstance(sub, int): + self.invalidate('integer type must have numerical suffix') + + if sub < 8 or 256 < sub: + self.invalidate('integer size out of bounds (max 256 bits)') + + if sub % 8 != 0: + self.invalidate('integer size must be multiple of 8') + + # Check validity of fixed type + elif base in ('fixed', 'ufixed'): + if not isinstance(sub, tuple): + self.invalidate( + 'fixed type must have suffix of form x, e.g. 
128x19', + ) + + bits, minus_e = sub + + if bits < 8 or 256 < bits: + self.invalidate('fixed size out of bounds (max 256 bits)') + + if bits % 8 != 0: + self.invalidate('fixed size must be multiple of 8') + + if minus_e < 1 or 80 < minus_e: + self.invalidate( + 'fixed exponent size out of bounds, {} must be in 1-80'.format( + minus_e, + ), + ) + + # Check validity of hash type + elif base == 'hash': + if not isinstance(sub, int): + self.invalidate('hash type must have numerical suffix') + + # Check validity of address type + elif base == 'address': + if sub is not None: + self.invalidate('address cannot have suffix') + + +TYPE_ALIASES = { + 'int': 'int256', + 'uint': 'uint256', + 'fixed': 'fixed128x18', + 'ufixed': 'ufixed128x18', + 'function': 'bytes24', + 'byte': 'bytes1', +} + +TYPE_ALIAS_RE = re.compile(r'\b({})\b'.format( + '|'.join(re.escape(a) for a in TYPE_ALIASES.keys()) +)) + + +def normalize(type_str): + """ + Normalizes a type string into its canonical version e.g. the type string + 'int' becomes 'int256', etc. + + :param type_str: The type string to be normalized. + :returns: The canonical version of the input type string. + """ + return TYPE_ALIAS_RE.sub( + lambda match: TYPE_ALIASES[match.group(0)], + type_str, + ) + + +parse = visitor.parse diff --git a/env/lib/python3.8/site-packages/eth_abi/packed.py b/env/lib/python3.8/site-packages/eth_abi/packed.py new file mode 100644 index 00000000..fcab1d4a --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/packed.py @@ -0,0 +1,13 @@ +from .codec import ( + ABIEncoder, +) +from .registry import ( + registry_packed, +) + +default_encoder_packed = ABIEncoder(registry_packed) + +encode_packed = default_encoder_packed.encode +is_encodable_packed = default_encoder_packed.is_encodable +encode_single_packed = default_encoder_packed.encode_single # deprecated +encode_abi_packed = default_encoder_packed.encode_abi # deprecated diff --git a/env/lib/python3.8/site-packages/eth_abi/py.typed b/env/lib/python3.8/site-packages/eth_abi/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_abi/registry.py b/env/lib/python3.8/site-packages/eth_abi/registry.py new file mode 100644 index 00000000..6073a852 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/registry.py @@ -0,0 +1,616 @@ +import abc +import copy +import functools +from typing import ( + Any, + Callable, + Type, + Union, +) + +from eth_typing import ( + abi, +) + +from . import ( + decoding, + encoding, + exceptions, + grammar, +) +from .base import ( + BaseCoder, +) +from .exceptions import ( + MultipleEntriesFound, + NoEntriesFound, +) + +Lookup = Union[abi.TypeStr, Callable[[abi.TypeStr], bool]] + +EncoderCallable = Callable[[Any], bytes] +DecoderCallable = Callable[[decoding.ContextFramesBytesIO], Any] + +Encoder = Union[EncoderCallable, Type[encoding.BaseEncoder]] +Decoder = Union[DecoderCallable, Type[decoding.BaseDecoder]] + + +class Copyable(abc.ABC): + @abc.abstractmethod + def copy(self): + pass + + def __copy__(self): + return self.copy() + + def __deepcopy__(self, *args): + return self.copy() + + +class PredicateMapping(Copyable): + """ + Acts as a mapping from predicate functions to values. Values are retrieved + when their corresponding predicate matches a given input. Predicates can + also be labeled to facilitate removal from the mapping. 
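+
+    A minimal usage sketch (the predicate and value here are hypothetical):
+
+    .. code-block:: python
+
+        mapping = PredicateMapping('example registry')
+        mapping.add(lambda type_str: type_str == 'uint256', 'value', label='uint256')
+        mapping.find('uint256')           # returns 'value'
+        mapping.remove_by_label('uint256')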
+ """ + def __init__(self, name): + self._name = name + self._values = {} + self._labeled_predicates = {} + + def add(self, predicate, value, label=None): + if predicate in self._values: + raise ValueError('Matcher {} already exists in {}'.format( + repr(predicate), + self._name, + )) + + if label is not None: + if label in self._labeled_predicates: + raise ValueError( + "Matcher {} with label '{}' already exists in {}".format( + repr(predicate), + label, + self._name, + ), + ) + + self._labeled_predicates[label] = predicate + + self._values[predicate] = value + + def find(self, type_str): + results = tuple( + (predicate, value) for predicate, value in self._values.items() + if predicate(type_str) + ) + + if len(results) == 0: + raise NoEntriesFound("No matching entries for '{}' in {}".format( + type_str, + self._name, + )) + + predicates, values = tuple(zip(*results)) + + if len(results) > 1: + predicate_reprs = ', '.join(map(repr, predicates)) + raise MultipleEntriesFound( + f"Multiple matching entries for '{type_str}' in {self._name}: " + f"{predicate_reprs}. This occurs when two registrations match the " + "same type string. You may need to delete one of the " + "registrations or modify its matching behavior to ensure it " + "doesn't collide with other registrations. See the \"Registry\" " + "documentation for more information." + ) + + return values[0] + + def remove_by_equality(self, predicate): + # Delete the predicate mapping to the previously stored value + try: + del self._values[predicate] + except KeyError: + raise KeyError('Matcher {} not found in {}'.format( + repr(predicate), + self._name, + )) + + # Delete any label which refers to this predicate + try: + label = self._label_for_predicate(predicate) + except ValueError: + pass + else: + del self._labeled_predicates[label] + + def _label_for_predicate(self, predicate): + # Both keys and values in `_labeled_predicates` are unique since the + # `add` method enforces this + for key, value in self._labeled_predicates.items(): + if value is predicate: + return key + + raise ValueError('Matcher {} not referred to by any label in {}'.format( + repr(predicate), + self._name, + )) + + def remove_by_label(self, label): + try: + predicate = self._labeled_predicates[label] + except KeyError: + raise KeyError("Label '{}' not found in {}".format(label, self._name)) + + del self._labeled_predicates[label] + del self._values[predicate] + + def remove(self, predicate_or_label): + if callable(predicate_or_label): + self.remove_by_equality(predicate_or_label) + elif isinstance(predicate_or_label, str): + self.remove_by_label(predicate_or_label) + else: + raise TypeError('Key to be removed must be callable or string: got {}'.format( + type(predicate_or_label), + )) + + def copy(self): + cpy = type(self)(self._name) + + cpy._values = copy.copy(self._values) + cpy._labeled_predicates = copy.copy(self._labeled_predicates) + + return cpy + + +class Predicate: + """ + Represents a predicate function to be used for type matching in + ``ABIRegistry``. 
+ """ + __slots__ = tuple() + + def __call__(self, *args, **kwargs): # pragma: no cover + raise NotImplementedError('Must implement `__call__`') + + def __str__(self): # pragma: no cover + raise NotImplementedError('Must implement `__str__`') + + def __repr__(self): + return '<{} {}>'.format(type(self).__name__, self) + + def __iter__(self): + for attr in self.__slots__: + yield getattr(self, attr) + + def __hash__(self): + return hash(tuple(self)) + + def __eq__(self, other): + return ( + type(self) is type(other) and + tuple(self) == tuple(other) + ) + + +class Equals(Predicate): + """ + A predicate that matches any input equal to `value`. + """ + __slots__ = ('value',) + + def __init__(self, value): + self.value = value + + def __call__(self, other): + return self.value == other + + def __str__(self): + return '(== {})'.format(repr(self.value)) + + +class BaseEquals(Predicate): + """ + A predicate that matches a basic type string with a base component equal to + `value` and no array component. If `with_sub` is `True`, the type string + must have a sub component to match. If `with_sub` is `False`, the type + string must *not* have a sub component to match. If `with_sub` is None, + the type string's sub component is ignored. + """ + __slots__ = ('base', 'with_sub') + + def __init__(self, base, *, with_sub=None): + self.base = base + self.with_sub = with_sub + + def __call__(self, type_str): + try: + abi_type = grammar.parse(type_str) + except exceptions.ParseError: + return False + + if isinstance(abi_type, grammar.BasicType): + if abi_type.arrlist is not None: + return False + + if self.with_sub is not None: + if self.with_sub and abi_type.sub is None: + return False + if not self.with_sub and abi_type.sub is not None: + return False + + return abi_type.base == self.base + + # We'd reach this point if `type_str` did not contain a basic type + # e.g. if it contained a tuple type + return False + + def __str__(self): + return '(base == {}{})'.format( + repr(self.base), + '' if self.with_sub is None else ( + ' and sub is not None' if self.with_sub else ' and sub is None' + ), + ) + + +def has_arrlist(type_str): + """ + A predicate that matches a type string with an array dimension list. + """ + try: + abi_type = grammar.parse(type_str) + except exceptions.ParseError: + return False + + return abi_type.arrlist is not None + + +def is_base_tuple(type_str): + """ + A predicate that matches a tuple type with no array dimension list. 
+ """ + try: + abi_type = grammar.parse(type_str) + except exceptions.ParseError: + return False + + return isinstance(abi_type, grammar.TupleType) and abi_type.arrlist is None + + +def _clear_encoder_cache(old_method): + @functools.wraps(old_method) + def new_method(self, *args, **kwargs): + self.get_encoder.cache_clear() + return old_method(self, *args, **kwargs) + + return new_method + + +def _clear_decoder_cache(old_method): + @functools.wraps(old_method) + def new_method(self, *args, **kwargs): + self.get_decoder.cache_clear() + return old_method(self, *args, **kwargs) + + return new_method + + +class BaseRegistry: + @staticmethod + def _register(mapping, lookup, value, label=None): + if callable(lookup): + mapping.add(lookup, value, label) + return + + if isinstance(lookup, str): + mapping.add(Equals(lookup), value, lookup) + return + + raise TypeError( + 'Lookup must be a callable or a value of type `str`: got {}'.format( + repr(lookup), + ) + ) + + @staticmethod + def _unregister(mapping, lookup_or_label): + if callable(lookup_or_label): + mapping.remove_by_equality(lookup_or_label) + return + + if isinstance(lookup_or_label, str): + mapping.remove_by_label(lookup_or_label) + return + + raise TypeError( + 'Lookup/label must be a callable or a value of type `str`: got {}'.format( + repr(lookup_or_label), + ) + ) + + @staticmethod + def _get_registration(mapping, type_str): + try: + value = mapping.find(type_str) + except ValueError as e: + if 'No matching' in e.args[0]: + # If no matches found, attempt to parse in case lack of matches + # was due to unparsability + grammar.parse(type_str) + + raise + + return value + + +class ABIRegistry(Copyable, BaseRegistry): + def __init__(self): + self._encoders = PredicateMapping('encoder registry') + self._decoders = PredicateMapping('decoder registry') + + def _get_registration(self, mapping, type_str): + coder = super()._get_registration(mapping, type_str) + + if isinstance(coder, type) and issubclass(coder, BaseCoder): + return coder.from_type_str(type_str, self) + + return coder + + @_clear_encoder_cache + def register_encoder(self, lookup: Lookup, encoder: Encoder, label: str = None) -> None: + """ + Registers the given ``encoder`` under the given ``lookup``. A unique + string label may be optionally provided that can be used to refer to + the registration by name. For more information about arguments, refer + to :any:`register`. + """ + self._register(self._encoders, lookup, encoder, label=label) + + @_clear_encoder_cache + def unregister_encoder(self, lookup_or_label: Lookup) -> None: + """ + Unregisters an encoder in the registry with the given lookup or label. + If ``lookup_or_label`` is a string, the encoder with the label + ``lookup_or_label`` will be unregistered. If it is an function, the + encoder with the lookup function ``lookup_or_label`` will be + unregistered. + """ + self._unregister(self._encoders, lookup_or_label) + + @_clear_decoder_cache + def register_decoder(self, lookup: Lookup, decoder: Decoder, label: str = None) -> None: + """ + Registers the given ``decoder`` under the given ``lookup``. A unique + string label may be optionally provided that can be used to refer to + the registration by name. For more information about arguments, refer + to :any:`register`. + """ + self._register(self._decoders, lookup, decoder, label=label) + + @_clear_decoder_cache + def unregister_decoder(self, lookup_or_label: Lookup) -> None: + """ + Unregisters a decoder in the registry with the given lookup or label. 
+ If ``lookup_or_label`` is a string, the decoder with the label + ``lookup_or_label`` will be unregistered. If it is an function, the + decoder with the lookup function ``lookup_or_label`` will be + unregistered. + """ + self._unregister(self._decoders, lookup_or_label) + + def register(self, + lookup: Lookup, + encoder: Encoder, + decoder: Decoder, + label: str = None) -> None: + """ + Registers the given ``encoder`` and ``decoder`` under the given + ``lookup``. A unique string label may be optionally provided that can + be used to refer to the registration by name. + + :param lookup: A type string or type string matcher function + (predicate). When the registry is queried with a type string + ``query`` to determine which encoder or decoder to use, ``query`` + will be checked against every registration in the registry. If a + registration was created with a type string for ``lookup``, it will + be considered a match if ``lookup == query``. If a registration + was created with a matcher function for ``lookup``, it will be + considered a match if ``lookup(query) is True``. If more than one + registration is found to be a match, then an exception is raised. + + :param encoder: An encoder callable or class to use if ``lookup`` + matches a query. If ``encoder`` is a callable, it must accept a + python value and return a ``bytes`` value. If ``encoder`` is a + class, it must be a valid subclass of :any:`encoding.BaseEncoder` + and must also implement the :any:`from_type_str` method on + :any:`base.BaseCoder`. + + :param decoder: A decoder callable or class to use if ``lookup`` + matches a query. If ``decoder`` is a callable, it must accept a + stream-like object of bytes and return a python value. If + ``decoder`` is a class, it must be a valid subclass of + :any:`decoding.BaseDecoder` and must also implement the + :any:`from_type_str` method on :any:`base.BaseCoder`. + + :param label: An optional label that can be used to refer to this + registration by name. This label can be used to unregister an + entry in the registry via the :any:`unregister` method and its + variants. + """ + self.register_encoder(lookup, encoder, label=label) + self.register_decoder(lookup, decoder, label=label) + + def unregister(self, label: str) -> None: + """ + Unregisters the entries in the encoder and decoder registries which + have the label ``label``. + """ + self.unregister_encoder(label) + self.unregister_decoder(label) + + @functools.lru_cache(maxsize=None) + def get_encoder(self, type_str): + return self._get_registration(self._encoders, type_str) + + def has_encoder(self, type_str: abi.TypeStr) -> bool: + """ + Returns ``True`` if an encoder is found for the given type string + ``type_str``. Otherwise, returns ``False``. Raises + :class:`~eth_abi.exceptions.MultipleEntriesFound` if multiple encoders + are found. + """ + try: + self.get_encoder(type_str) + except NoEntriesFound: + return False + else: + return True + + @functools.lru_cache(maxsize=None) + def get_decoder(self, type_str): + return self._get_registration(self._decoders, type_str) + + def copy(self): + """ + Copies a registry such that new registrations can be made or existing + registrations can be unregistered without affecting any instance from + which a copy was obtained. This is useful if an existing registry + fulfills most of a user's needs but requires one or two modifications. + In that case, a copy of that registry can be obtained and the necessary + changes made without affecting the original registry. 
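+
+        A minimal sketch (assuming the default module-level ``registry``):
+
+        .. code-block:: python
+
+            custom = registry.copy()
+            custom.unregister('string')
+            custom.has_encoder('string')    # False
+            registry.has_encoder('string')  # still True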
+ """ + cpy = type(self)() + + cpy._encoders = copy.copy(self._encoders) + cpy._decoders = copy.copy(self._decoders) + + return cpy + + +registry = ABIRegistry() + +registry.register( + BaseEquals('uint'), + encoding.UnsignedIntegerEncoder, decoding.UnsignedIntegerDecoder, + label='uint', +) +registry.register( + BaseEquals('int'), + encoding.SignedIntegerEncoder, decoding.SignedIntegerDecoder, + label='int', +) +registry.register( + BaseEquals('address'), + encoding.AddressEncoder, decoding.AddressDecoder, + label='address', +) +registry.register( + BaseEquals('bool'), + encoding.BooleanEncoder, decoding.BooleanDecoder, + label='bool', +) +registry.register( + BaseEquals('ufixed'), + encoding.UnsignedFixedEncoder, decoding.UnsignedFixedDecoder, + label='ufixed', +) +registry.register( + BaseEquals('fixed'), + encoding.SignedFixedEncoder, decoding.SignedFixedDecoder, + label='fixed', +) +registry.register( + BaseEquals('bytes', with_sub=True), + encoding.BytesEncoder, decoding.BytesDecoder, + label='bytes', +) +registry.register( + BaseEquals('bytes', with_sub=False), + encoding.ByteStringEncoder, decoding.ByteStringDecoder, + label='bytes', +) +registry.register( + BaseEquals('function'), + encoding.BytesEncoder, decoding.BytesDecoder, + label='function', +) +registry.register( + BaseEquals('string'), + encoding.TextStringEncoder, decoding.StringDecoder, + label='string', +) +registry.register( + has_arrlist, + encoding.BaseArrayEncoder, decoding.BaseArrayDecoder, + label='has_arrlist', +) +registry.register( + is_base_tuple, + encoding.TupleEncoder, decoding.TupleDecoder, + label='is_base_tuple', +) + +registry_packed = ABIRegistry() + +registry_packed.register_encoder( + BaseEquals('uint'), + encoding.PackedUnsignedIntegerEncoder, + label='uint', +) +registry_packed.register_encoder( + BaseEquals('int'), + encoding.PackedSignedIntegerEncoder, + label='int', +) +registry_packed.register_encoder( + BaseEquals('address'), + encoding.PackedAddressEncoder, + label='address', +) +registry_packed.register_encoder( + BaseEquals('bool'), + encoding.PackedBooleanEncoder, + label='bool', +) +registry_packed.register_encoder( + BaseEquals('ufixed'), + encoding.PackedUnsignedFixedEncoder, + label='ufixed', +) +registry_packed.register_encoder( + BaseEquals('fixed'), + encoding.PackedSignedFixedEncoder, + label='fixed', +) +registry_packed.register_encoder( + BaseEquals('bytes', with_sub=True), + encoding.PackedBytesEncoder, + label='bytes', +) +registry_packed.register_encoder( + BaseEquals('bytes', with_sub=False), + encoding.PackedByteStringEncoder, + label='bytes', +) +registry_packed.register_encoder( + BaseEquals('function'), + encoding.PackedBytesEncoder, + label='function', +) +registry_packed.register_encoder( + BaseEquals('string'), + encoding.PackedTextStringEncoder, + label='string', +) +registry_packed.register_encoder( + has_arrlist, + encoding.PackedArrayEncoder, + label='has_arrlist', +) +registry_packed.register_encoder( + is_base_tuple, + encoding.TupleEncoder, + label='is_base_tuple', +) diff --git a/env/lib/python3.8/site-packages/eth_abi/tools/__init__.py b/env/lib/python3.8/site-packages/eth_abi/tools/__init__.py new file mode 100644 index 00000000..5123dfa2 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/tools/__init__.py @@ -0,0 +1,3 @@ +from ._strategies import ( # noqa: F401 + get_abi_strategy, +) diff --git a/env/lib/python3.8/site-packages/eth_abi/tools/_strategies.py b/env/lib/python3.8/site-packages/eth_abi/tools/_strategies.py new file mode 100644 index 
00000000..090c708b --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/tools/_strategies.py @@ -0,0 +1,217 @@ +from typing import ( + Callable, + Union, +) + +from eth_typing.abi import ( + TypeStr, +) +from eth_utils import ( + to_checksum_address, +) +from hypothesis import ( + strategies as st, +) + +from eth_abi.grammar import ( + ABIType, + normalize, + parse, +) +from eth_abi.registry import ( + BaseEquals, + BaseRegistry, + Lookup, + PredicateMapping, + has_arrlist, + is_base_tuple, +) +from eth_abi.utils.numeric import ( + scale_places, +) + +StrategyFactory = Callable[[ABIType, 'StrategyRegistry'], st.SearchStrategy] +StrategyRegistration = Union[st.SearchStrategy, StrategyFactory] + + +class StrategyRegistry(BaseRegistry): + def __init__(self): + self._strategies = PredicateMapping('strategy registry') + + def register_strategy(self, + lookup: Lookup, + registration: StrategyRegistration, + label: str = None) -> None: + self._register(self._strategies, lookup, registration, label=label) + + def unregister_strategy(self, lookup_or_label: Lookup) -> None: + self._unregister(self._strategies, lookup_or_label) + + def get_strategy(self, type_str: TypeStr) -> st.SearchStrategy: + """ + Returns a hypothesis strategy for the given ABI type. + + :param type_str: The canonical string representation of the ABI type + for which a hypothesis strategy should be returned. + + :returns: A hypothesis strategy for generating Python values that are + encodable as values of the given ABI type. + """ + registration = self._get_registration(self._strategies, type_str) + + if isinstance(registration, st.SearchStrategy): + # If a hypothesis strategy was registered, just return it + return registration + else: + # Otherwise, assume the factory is a callable. Call it with the abi + # type to get an appropriate hypothesis strategy. 
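+            # (A factory here is assumed to have the `StrategyFactory`
+            # signature declared above: it takes the parsed `ABIType` and
+            # this registry, and returns a hypothesis `SearchStrategy`;
+            # see `get_uint_strategy` below for a concrete instance.)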
+ normalized_type_str = normalize(type_str) + abi_type = parse(normalized_type_str) + strategy = registration(abi_type, self) + + return strategy + + +def get_uint_strategy(abi_type: ABIType, registry: StrategyRegistry) -> st.SearchStrategy: + bits = abi_type.sub + + return st.integers( + min_value=0, + max_value=2 ** bits - 1, + ) + + +def get_int_strategy(abi_type: ABIType, registry: StrategyRegistry) -> st.SearchStrategy: + bits = abi_type.sub + + return st.integers( + min_value=-2 ** (bits - 1), + max_value=2 ** (bits - 1) - 1, + ) + + +address_strategy = st.binary(min_size=20, max_size=20).map(to_checksum_address) +bool_strategy = st.booleans() + + +def get_ufixed_strategy(abi_type: ABIType, registry: StrategyRegistry) -> st.SearchStrategy: + bits, places = abi_type.sub + + return st.decimals( + min_value=0, + max_value=2 ** bits - 1, + places=0, + ).map(scale_places(places)) + + +def get_fixed_strategy(abi_type: ABIType, registry: StrategyRegistry) -> st.SearchStrategy: + bits, places = abi_type.sub + + return st.decimals( + min_value=-2 ** (bits - 1), + max_value=2 ** (bits - 1) - 1, + places=0, + ).map(scale_places(places)) + + +def get_bytes_strategy(abi_type: ABIType, registry: StrategyRegistry) -> st.SearchStrategy: + num_bytes = abi_type.sub + + return st.binary( + min_size=num_bytes, + max_size=num_bytes, + ) + + +bytes_strategy = st.binary(min_size=0, max_size=4096) +string_strategy = st.text() + + +def get_array_strategy(abi_type: ABIType, registry: StrategyRegistry) -> st.SearchStrategy: + item_type = abi_type.item_type + item_type_str = item_type.to_type_str() + item_strategy = registry.get_strategy(item_type_str) + + last_dim = abi_type.arrlist[-1] + if len(last_dim) == 0: + # Is dynamic list. Don't restrict length. + return st.lists(item_strategy) + else: + # Is static list. Restrict length. 
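+        # A fixed dimension parses as a one-tuple (e.g. 'uint8[3]' yields an
+        # arrlist of ((3,),)), so last_dim[0] is the required list length.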
+ dim_size = last_dim[0] + return st.lists(item_strategy, min_size=dim_size, max_size=dim_size) + + +def get_tuple_strategy(abi_type: ABIType, registry: StrategyRegistry) -> st.SearchStrategy: + component_strategies = [ + registry.get_strategy(comp_abi_type.to_type_str()) + for comp_abi_type in abi_type.components + ] + + return st.tuples(*component_strategies) + + +strategy_registry = StrategyRegistry() + +strategy_registry.register_strategy( + BaseEquals('uint'), + get_uint_strategy, + label='uint', +) +strategy_registry.register_strategy( + BaseEquals('int'), + get_int_strategy, + label='int', +) +strategy_registry.register_strategy( + BaseEquals('address', with_sub=False), + address_strategy, + label='address', +) +strategy_registry.register_strategy( + BaseEquals('bool', with_sub=False), + bool_strategy, + label='bool', +) +strategy_registry.register_strategy( + BaseEquals('ufixed'), + get_ufixed_strategy, + label='ufixed', +) +strategy_registry.register_strategy( + BaseEquals('fixed'), + get_fixed_strategy, + label='fixed', +) +strategy_registry.register_strategy( + BaseEquals('bytes', with_sub=True), + get_bytes_strategy, + label='bytes', +) +strategy_registry.register_strategy( + BaseEquals('bytes', with_sub=False), + bytes_strategy, + label='bytes', +) +strategy_registry.register_strategy( + BaseEquals('function', with_sub=False), + get_bytes_strategy, + label='function', +) +strategy_registry.register_strategy( + BaseEquals('string', with_sub=False), + string_strategy, + label='string', +) +strategy_registry.register_strategy( + has_arrlist, + get_array_strategy, + label='has_arrlist', +) +strategy_registry.register_strategy( + is_base_tuple, + get_tuple_strategy, + label='is_base_tuple', +) + +get_abi_strategy = strategy_registry.get_strategy diff --git a/env/lib/python3.8/site-packages/eth_abi/utils/__init__.py b/env/lib/python3.8/site-packages/eth_abi/utils/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_abi/utils/numeric.py b/env/lib/python3.8/site-packages/eth_abi/utils/numeric.py new file mode 100644 index 00000000..e79e7b81 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/utils/numeric.py @@ -0,0 +1,82 @@ +import decimal +from typing import ( + Callable, + Tuple, +) + +ABI_DECIMAL_PREC = 999 + +abi_decimal_context = decimal.Context(prec=ABI_DECIMAL_PREC) + +ZERO = decimal.Decimal(0) +TEN = decimal.Decimal(10) + + +def ceil32(x: int) -> int: + return x if x % 32 == 0 else x + 32 - (x % 32) + + +def compute_unsigned_integer_bounds(num_bits: int) -> Tuple[int, int]: + return ( + 0, + 2 ** num_bits - 1, + ) + + +def compute_signed_integer_bounds(num_bits: int) -> Tuple[int, int]: + return ( + -1 * 2 ** (num_bits - 1), + 2 ** (num_bits - 1) - 1, + ) + + +def compute_unsigned_fixed_bounds( + num_bits: int, + frac_places: int, +) -> Tuple[decimal.Decimal, decimal.Decimal]: + int_upper = compute_unsigned_integer_bounds(num_bits)[1] + + with decimal.localcontext(abi_decimal_context): + upper = decimal.Decimal(int_upper) * TEN ** -frac_places + + return ZERO, upper + + +def compute_signed_fixed_bounds( + num_bits: int, + frac_places: int, +) -> Tuple[decimal.Decimal, decimal.Decimal]: + int_lower, int_upper = compute_signed_integer_bounds(num_bits) + + with decimal.localcontext(abi_decimal_context): + exp = TEN ** -frac_places + lower = decimal.Decimal(int_lower) * exp + upper = decimal.Decimal(int_upper) * exp + + return lower, upper + + +def scale_places(places: int) -> Callable[[decimal.Decimal], 
decimal.Decimal]: + """ + Returns a function that shifts the decimal point of decimal values to the + right by ``places`` places. + """ + if not isinstance(places, int): + raise ValueError( + f'Argument `places` must be int. Got value {places} of type {type(places)}.', + ) + + with decimal.localcontext(abi_decimal_context): + scaling_factor = TEN ** -places + + def f(x: decimal.Decimal) -> decimal.Decimal: + with decimal.localcontext(abi_decimal_context): + return x * scaling_factor + + places_repr = f'Eneg{places}' if places > 0 else f'Epos{-places}' + func_name = f'scale_by_{places_repr}' + + f.__name__ = func_name + f.__qualname__ = func_name + + return f diff --git a/env/lib/python3.8/site-packages/eth_abi/utils/padding.py b/env/lib/python3.8/site-packages/eth_abi/utils/padding.py new file mode 100644 index 00000000..a6bd852c --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/utils/padding.py @@ -0,0 +1,27 @@ +from eth_utils.toolz import ( + curry, +) + + +@curry +def zpad(value: bytes, length: int) -> bytes: + return value.rjust(length, b'\x00') + + +zpad32 = zpad(length=32) + + +@curry +def zpad_right(value: bytes, length: int) -> bytes: + return value.ljust(length, b'\x00') + + +zpad32_right = zpad_right(length=32) + + +@curry +def fpad(value: bytes, length: int) -> bytes: + return value.rjust(length, b'\xff') + + +fpad32 = fpad(length=32) diff --git a/env/lib/python3.8/site-packages/eth_abi/utils/string.py b/env/lib/python3.8/site-packages/eth_abi/utils/string.py new file mode 100644 index 00000000..d218d00d --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_abi/utils/string.py @@ -0,0 +1,19 @@ +from typing import ( + Any, +) + + +def abbr(value: Any, limit: int = 79) -> str: + """ + Converts a value into its string representation and abbreviates that + representation based on the given length `limit` if necessary. + """ + rep = repr(value) + + if len(rep) > limit: + if limit < 3: + raise ValueError('Abbreviation limit may not be less than 3') + + rep = rep[:limit - 3] + '...' + + return rep diff --git a/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/INSTALLER b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/LICENSE b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/LICENSE new file mode 100644 index 00000000..17bc694e --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2020 The Ethereum Foundation + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/METADATA b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/METADATA new file mode 100644 index 00000000..22578fbc --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/METADATA @@ -0,0 +1,172 @@ +Metadata-Version: 2.1 +Name: eth-account +Version: 0.5.9 +Summary: eth-account: Sign Ethereum transactions and messages with local private keys +Home-page: https://github.com/ethereum/eth-account +Author: The Ethereum Foundation +Author-email: snakecharmers@ethereum.org +License: MIT +Keywords: ethereum +Platform: UNKNOWN +Classifier: Development Status :: 3 - Alpha +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Natural Language :: English +Classifier: Operating System :: MacOS +Classifier: Operating System :: POSIX +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Requires-Python: >=3.6, <4 +Description-Content-Type: text/markdown +License-File: LICENSE +Requires-Dist: bitarray (<3,>=1.2.1) +Requires-Dist: eth-abi (<3,>=2.0.0b7) +Requires-Dist: eth-keyfile (<0.6.0,>=0.5.0) +Requires-Dist: eth-keys (<0.4.0,>=0.3.4) +Requires-Dist: eth-rlp (<2,>=0.1.2) +Requires-Dist: eth-utils (<2,>=1.3.0) +Requires-Dist: hexbytes (<1,>=0.1.0) +Requires-Dist: rlp (<3,>=1.0.0) +Provides-Extra: dev +Requires-Dist: bumpversion (<1,>=0.5.3) ; extra == 'dev' +Requires-Dist: pytest-watch (<5,>=4.1.0) ; extra == 'dev' +Requires-Dist: wheel ; extra == 'dev' +Requires-Dist: twine ; extra == 'dev' +Requires-Dist: ipython ; extra == 'dev' +Requires-Dist: hypothesis (<5,>=4.18.0) ; extra == 'dev' +Requires-Dist: pytest (<7,>=6.2.5) ; extra == 'dev' +Requires-Dist: pytest-xdist ; extra == 'dev' +Requires-Dist: tox (==3.14.6) ; extra == 'dev' +Requires-Dist: flake8 (==3.7.9) ; extra == 'dev' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'dev' +Requires-Dist: mypy (==0.910) ; extra == 'dev' +Requires-Dist: pydocstyle (<6,>=5.0.0) ; extra == 'dev' +Requires-Dist: Sphinx (<2,>=1.6.5) ; extra == 'dev' +Requires-Dist: sphinx-rtd-theme (<1,>=0.1.9) ; extra == 'dev' +Requires-Dist: towncrier (>=21.9.0) ; extra == 'dev' +Provides-Extra: doc +Requires-Dist: Sphinx (<2,>=1.6.5) ; extra == 'doc' +Requires-Dist: sphinx-rtd-theme (<1,>=0.1.9) ; extra == 'doc' +Requires-Dist: towncrier (>=21.9.0) ; extra == 'doc' +Provides-Extra: lint +Requires-Dist: flake8 (==3.7.9) ; extra == 'lint' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'lint' +Requires-Dist: mypy (==0.910) ; extra == 'lint' +Requires-Dist: pydocstyle (<6,>=5.0.0) ; extra == 'lint' +Provides-Extra: test +Requires-Dist: hypothesis (<5,>=4.18.0) ; extra == 'test' +Requires-Dist: pytest (<7,>=6.2.5) ; extra == 'test' +Requires-Dist: pytest-xdist ; extra == 'test' +Requires-Dist: tox (==3.14.6) ; extra == 'test' + +# eth-account + +[![Join the chat at 
https://gitter.im/ethereum/eth-account](https://badges.gitter.im/ethereum/eth-account.svg)](https://gitter.im/ethereum/eth-account?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +[![Build Status](https://circleci.com/gh/ethereum/eth-account.svg?style=shield)](https://circleci.com/gh/ethereum/eth-account) +[![PyPI version](https://badge.fury.io/py/eth-account.svg)](https://badge.fury.io/py/eth-account) +[![Python versions](https://img.shields.io/pypi/pyversions/eth-account.svg)](https://pypi.python.org/pypi/eth-account) +[![Docs build](https://readthedocs.org/projects/eth-account/badge/?version=latest)](http://eth-account.readthedocs.io/en/latest/?badge=latest) + + +Sign Ethereum transactions and messages with local private keys + +Read more in the [documentation on ReadTheDocs](https://eth-account.readthedocs.io/). [View the change log](https://eth-account.readthedocs.io/en/latest/release_notes.html). + +## Quickstart + +```sh +pip install eth-account +``` + +## Developer Setup + +If you would like to hack on eth-account, please check out the [Snake Charmers +Tactical Manual](https://github.com/ethereum/snake-charmers-tactical-manual) +for information on how we do: + +- Testing +- Pull Requests +- Code Style +- Documentation + +### Development Environment Setup + +You can set up your dev environment with: + +```sh +git clone git@github.com:ethereum/eth-account.git +cd eth-account +virtualenv -p python3 venv +. venv/bin/activate +pip install -e .[dev] +``` + +To run the integration test cases, you need to install node and the custom cli tool as follows: + +```sh +apt-get install -y nodejs # As sudo +./tests/integration/ethers-cli/setup_node_v12.sh # As sudo +cd tests/integration/ethers-cli +npm install -g . # As sudo +``` + +### Testing Setup + +During development, you might like to have tests run on every file save. + +Show flake8 errors on file change: + +```sh +# Test flake8 +when-changed -v -s -r -1 eth_account/ tests/ -c "clear; flake8 eth_account tests && echo 'flake8 success' || echo 'error'" +``` + +Run multi-process tests in one command, but without color: + +```sh +# in the project root: +pytest --numprocesses=4 --looponfail --maxfail=1 +# the same thing, succinctly: +pytest -n 4 -f --maxfail=1 +``` + +Run in one thread, with color and desktop notifications: + +```sh +cd venv +ptw --onfail "notify-send -t 5000 'Test failure ⚠⚠⚠⚠⚠' 'python 3 test on eth-account failed'" ../tests ../eth_account +``` + +### Release setup + +For Debian-like systems: +``` +apt install pandoc +``` + +To release a new version: + +```sh +make release bump=$$VERSION_PART_TO_BUMP$$ +``` + +#### How to bumpversion + +The version format for this repo is `{major}.{minor}.{patch}` for stable, and +`{major}.{minor}.{patch}-{stage}.{devnum}` for unstable (`stage` can be alpha or beta). + +To issue the next version in line, specify which part to bump, +like `make release bump=minor` or `make release bump=devnum`. This is typically done from the +master branch, except when releasing a beta (in which case the beta is released from master, +and the previous stable branch is released from said branch). + +If you are in a beta version, `make release bump=stage` will switch to a stable. 
+ +To issue an unstable version when the current version is stable, specify the +new version explicitly, like `make release bump="--new-version 4.0.0-alpha.1 devnum"` + + diff --git a/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/RECORD b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/RECORD new file mode 100644 index 00000000..c67a5a6b --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/RECORD @@ -0,0 +1,56 @@ +eth_account-0.5.9.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +eth_account-0.5.9.dist-info/LICENSE,sha256=390jwXuDReXaeTef0eT63FrFKogAMVgYBUWl1oGk4ec,1090 +eth_account-0.5.9.dist-info/METADATA,sha256=IegutOzVbla1fELLTweDO3a-3GHm2B4oWBWZcpA80cw,6088 +eth_account-0.5.9.dist-info/RECORD,, +eth_account-0.5.9.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +eth_account-0.5.9.dist-info/top_level.txt,sha256=Pmzzr7KpoIw_6r3KGb4ALApo-lPk8bXBYtrpdPyVedc,12 +eth_account/__init__.py,sha256=opUm2QHHC5lYFN1desT2iuktpD2pQjLFE-cIfLohU4A,72 +eth_account/__pycache__/__init__.cpython-38.pyc,, +eth_account/__pycache__/account.cpython-38.pyc,, +eth_account/__pycache__/datastructures.cpython-38.pyc,, +eth_account/__pycache__/messages.cpython-38.pyc,, +eth_account/_utils/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +eth_account/_utils/__pycache__/__init__.cpython-38.pyc,, +eth_account/_utils/__pycache__/legacy_transactions.cpython-38.pyc,, +eth_account/_utils/__pycache__/signing.cpython-38.pyc,, +eth_account/_utils/__pycache__/transaction_utils.cpython-38.pyc,, +eth_account/_utils/__pycache__/typed_transactions.cpython-38.pyc,, +eth_account/_utils/__pycache__/validation.cpython-38.pyc,, +eth_account/_utils/legacy_transactions.py,sha256=WXGgdH_gijoi17_djQvZDTplAnpvNYxRAzFSIS9OmlQ,4400 +eth_account/_utils/signing.py,sha256=hibsJ5rgwACSwqdY1uL_8riQbeaPFLMcGtP8u4zXvFE,4632 +eth_account/_utils/structured_data/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +eth_account/_utils/structured_data/__pycache__/__init__.cpython-38.pyc,, +eth_account/_utils/structured_data/__pycache__/hashing.cpython-38.pyc,, +eth_account/_utils/structured_data/__pycache__/validation.cpython-38.pyc,, +eth_account/_utils/structured_data/hashing.py,sha256=ph34NdMXiWtVNjHHW2-2pbvnMU3quJBkeS4vAu60JGU,9829 +eth_account/_utils/structured_data/validation.py,sha256=DtNcQeqgKCR_uaKukvLOxST2zwtcr1t3hZBtaJrtFXo,5126 +eth_account/_utils/transaction_utils.py,sha256=IeqmnhycZN6hztZ0--1Hm3b2vRL1FV04Ks4QgepzRfs,2971 +eth_account/_utils/typed_transactions.py,sha256=bIuvsIlj6kfu6Fop17tFYIFamwPNtIPVj74JgQxhWoI,20235 +eth_account/_utils/validation.py,sha256=G7IdclZZp6QaEFgEd0fXu3EEpvSi_sPXHRLX-XO_6BA,3044 +eth_account/account.py,sha256=DJgNCf1nVlM9fzR5xnLyJpbboa8dtY50MXMPuAkArJo,34756 +eth_account/datastructures.py,sha256=tFJV5LG9Kk00JuQEcTCczpG8QyKYwh_1LXl8jQYlcFM,612 +eth_account/hdaccount/__init__.py,sha256=9fKFkwIqgOJuYm7dR2FYjVWkA_hY8uEMmuuFybqJvDs,783 +eth_account/hdaccount/__pycache__/__init__.cpython-38.pyc,, +eth_account/hdaccount/__pycache__/_utils.cpython-38.pyc,, +eth_account/hdaccount/__pycache__/deterministic.cpython-38.pyc,, +eth_account/hdaccount/__pycache__/mnemonic.cpython-38.pyc,, +eth_account/hdaccount/_utils.py,sha256=lsk6YNkrEqdh8FfvjjlIbYau0k2Jpo9lBMqnW2lxh_A,1375 +eth_account/hdaccount/deterministic.py,sha256=-5l6mg-v8LxcwAypjRnYN9FMW2jDMTv1FThLKcIHz4g,9186 +eth_account/hdaccount/mnemonic.py,sha256=4NzzC824nYjCAH2sO5uXyRh4MbdMQkMXG9VDcUwpq3s,8021 
+eth_account/hdaccount/wordlist/chinese_simplified.txt,sha256=XFlCeSvYNAy4snzVkvEBXt9WqMWyYnbuGKSCQo58VyY,8192 +eth_account/hdaccount/wordlist/chinese_traditional.txt,sha256=QXsms9hQCkrj1ZcX1wEZUttvwvuEuAfz-UrHNOicG18,8192 +eth_account/hdaccount/wordlist/czech.txt,sha256=foDhYcPpPZVUwu-3jU486_j8cn6cUuA7g7lEBr3Mlfw,14945 +eth_account/hdaccount/wordlist/english.txt,sha256=L17tU6Rye0v4iA2PPxme_JDlhQNkbZ_47_Oi7Tsk29o,13116 +eth_account/hdaccount/wordlist/french.txt,sha256=68OVmreAGh32usT6fZcGUvHfdraDzS9AA8lBxj1Rflk,16777 +eth_account/hdaccount/wordlist/italian.txt,sha256=05LEn9twCiTNH86yN8H2XcwSj2s0qKrLWLWThLXGSMI,16033 +eth_account/hdaccount/wordlist/japanese.txt,sha256=Lu0K70kikeBhYz162BF_GisD64CinQ5OMResJSjQX_0,26423 +eth_account/hdaccount/wordlist/korean.txt,sha256=npX4bBZ96I9FDwqvieh_ZiSlf5c8Z7UW4zjo6LiJf2A,37832 +eth_account/hdaccount/wordlist/spanish.txt,sha256=RoRqWgE50ePLdyk-UhwoZfe824LETo0KBqLNDsukjAs,13996 +eth_account/messages.py,sha256=CKlnJnlKgcieNp9ulQ6MHCZSMgn2jwrE6gA6qCyaUFo,8719 +eth_account/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +eth_account/signers/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +eth_account/signers/__pycache__/__init__.cpython-38.pyc,, +eth_account/signers/__pycache__/base.cpython-38.pyc,, +eth_account/signers/__pycache__/local.cpython-38.pyc,, +eth_account/signers/base.py,sha256=2sCvjCZi3_7yHqN2azCMqCpQgaNUSPNw7-ePqwa-Drs,2698 +eth_account/signers/local.py,sha256=_lCDxtUUXTK8pVxwrWU66hY35QS9g1CmutXdFrwGPGs,2986 diff --git a/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/WHEEL b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/top_level.txt b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/top_level.txt new file mode 100644 index 00000000..d8beafa4 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account-0.5.9.dist-info/top_level.txt @@ -0,0 +1 @@ +eth_account diff --git a/env/lib/python3.8/site-packages/eth_account/__init__.py b/env/lib/python3.8/site-packages/eth_account/__init__.py new file mode 100644 index 00000000..7bd8a219 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/__init__.py @@ -0,0 +1,5 @@ +from eth_account.account import ( + Account, +) + +__all__ = ["Account"] diff --git a/env/lib/python3.8/site-packages/eth_account/_utils/__init__.py b/env/lib/python3.8/site-packages/eth_account/_utils/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_account/_utils/legacy_transactions.py b/env/lib/python3.8/site-packages/eth_account/_utils/legacy_transactions.py new file mode 100644 index 00000000..aa60cb49 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/_utils/legacy_transactions.py @@ -0,0 +1,157 @@ +import itertools +from typing import ( + Dict, +) + +from cytoolz import ( + curry, + dissoc, + merge, + partial, + pipe, +) +from eth_rlp import ( + HashableRLP, +) +from eth_utils.curried import ( + apply_formatters_to_dict, +) +import rlp +from rlp.sedes import ( + Binary, + big_endian_int, + binary, +) + +from .transaction_utils import ( + set_transaction_type_if_needed, +) +from .typed_transactions import ( + TypedTransaction, +) +from .validation import ( + 
LEGACY_TRANSACTION_FORMATTERS, + LEGACY_TRANSACTION_VALID_VALUES, +) + + +def serializable_unsigned_transaction_from_dict(transaction_dict): + transaction_dict = set_transaction_type_if_needed(transaction_dict) + if 'type' in transaction_dict: + # We delegate to TypedTransaction, which will carry out validation & formatting. + return TypedTransaction.from_dict(transaction_dict) + + assert_valid_fields(transaction_dict) + filled_transaction = pipe( + transaction_dict, + dict, + partial(merge, TRANSACTION_DEFAULTS), + chain_id_to_v, + apply_formatters_to_dict(LEGACY_TRANSACTION_FORMATTERS), + ) + if 'v' in filled_transaction: + serializer = Transaction + else: + serializer = UnsignedTransaction + return serializer.from_dict(filled_transaction) + + +def encode_transaction(unsigned_transaction, vrs): + (v, r, s) = vrs + chain_naive_transaction = dissoc(unsigned_transaction.as_dict(), 'v', 'r', 's') + if isinstance(unsigned_transaction, TypedTransaction): + # Typed transaction have their own encoding format, so we must delegate the encoding. + chain_naive_transaction['v'] = v + chain_naive_transaction['r'] = r + chain_naive_transaction['s'] = s + signed_typed_transaction = TypedTransaction.from_dict(chain_naive_transaction) + return signed_typed_transaction.encode() + signed_transaction = Transaction(v=v, r=r, s=s, **chain_naive_transaction) + return rlp.encode(signed_transaction) + + +TRANSACTION_DEFAULTS = { + 'to': b'', + 'value': 0, + 'data': b'', + 'chainId': None, +} + +ALLOWED_TRANSACTION_KEYS = { + 'nonce', + 'gasPrice', + 'gas', + 'to', + 'value', + 'data', + 'chainId', # set chainId to None if you want a transaction that can be replayed across networks +} + +REQUIRED_TRANSACTION_KEYS = ALLOWED_TRANSACTION_KEYS.difference(TRANSACTION_DEFAULTS.keys()) + + +def assert_valid_fields(transaction_dict): + # check if any keys are missing + missing_keys = REQUIRED_TRANSACTION_KEYS.difference(transaction_dict.keys()) + if missing_keys: + raise TypeError("Transaction must include these fields: %r" % missing_keys) + + # check if any extra keys were specified + superfluous_keys = set(transaction_dict.keys()).difference(ALLOWED_TRANSACTION_KEYS) + if superfluous_keys: + raise TypeError("Transaction must not include unrecognized fields: %r" % superfluous_keys) + + # check for valid types in each field + valid_fields: Dict[str, bool] + valid_fields = apply_formatters_to_dict(LEGACY_TRANSACTION_VALID_VALUES, transaction_dict) + if not all(valid_fields.values()): + invalid = {key: transaction_dict[key] for key, valid in valid_fields.items() if not valid} + raise TypeError("Transaction had invalid fields: %r" % invalid) + + +def chain_id_to_v(transaction_dict): + # See EIP 155 + chain_id = transaction_dict.pop('chainId') + if chain_id is None: + return transaction_dict + else: + return dict(transaction_dict, v=chain_id, r=0, s=0) + + +@curry +def fill_transaction_defaults(transaction): + return merge(TRANSACTION_DEFAULTS, transaction) + + +UNSIGNED_TRANSACTION_FIELDS = ( + ('nonce', big_endian_int), + ('gasPrice', big_endian_int), + ('gas', big_endian_int), + ('to', Binary.fixed_length(20, allow_empty=True)), + ('value', big_endian_int), + ('data', binary), +) + + +class Transaction(HashableRLP): + fields = UNSIGNED_TRANSACTION_FIELDS + ( + ('v', big_endian_int), + ('r', big_endian_int), + ('s', big_endian_int), + ) + + +class UnsignedTransaction(HashableRLP): + fields = UNSIGNED_TRANSACTION_FIELDS + + +ChainAwareUnsignedTransaction = Transaction + + +def strip_signature(txn): + unsigned_parts = 
itertools.islice(txn, len(UNSIGNED_TRANSACTION_FIELDS)) + return list(unsigned_parts) + + +def vrs_from(transaction): + return (getattr(transaction, part) for part in 'vrs') diff --git a/env/lib/python3.8/site-packages/eth_account/_utils/signing.py b/env/lib/python3.8/site-packages/eth_account/_utils/signing.py new file mode 100644 index 00000000..1078aa49 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/_utils/signing.py @@ -0,0 +1,147 @@ +from cytoolz import ( + pipe, +) +from eth_utils import ( + to_bytes, + to_int, +) + +from eth_account._utils.legacy_transactions import ( + ChainAwareUnsignedTransaction, + Transaction, + UnsignedTransaction, + encode_transaction, + serializable_unsigned_transaction_from_dict, + strip_signature, +) +from eth_account._utils.typed_transactions import ( + TypedTransaction, +) + +CHAIN_ID_OFFSET = 35 +V_OFFSET = 27 + +# signature versions +PERSONAL_SIGN_VERSION = b'E' # Hex value 0x45 +INTENDED_VALIDATOR_SIGN_VERSION = b'\x00' # Hex value 0x00 +STRUCTURED_DATA_SIGN_VERSION = b'\x01' # Hex value 0x01 + + +def sign_transaction_dict(eth_key, transaction_dict): + # generate RLP-serializable transaction, with defaults filled + unsigned_transaction = serializable_unsigned_transaction_from_dict(transaction_dict) + + transaction_hash = unsigned_transaction.hash() + + # detect chain + if isinstance(unsigned_transaction, UnsignedTransaction): + chain_id = None + (v, r, s) = sign_transaction_hash(eth_key, transaction_hash, chain_id) + elif isinstance(unsigned_transaction, Transaction): + chain_id = unsigned_transaction.v + (v, r, s) = sign_transaction_hash(eth_key, transaction_hash, chain_id) + elif isinstance(unsigned_transaction, TypedTransaction): + # Each transaction type dictates its payload, and consequently, + # all the funky logic around the `v` signature field is both obsolete && incorrect. + # We want to obtain the raw `v` and delegate to the transaction type itself. + (v, r, s) = eth_key.sign_msg_hash(transaction_hash).vrs + else: + # Cannot happen, but better for code to be defensive + self-documenting. + raise TypeError("unknown Transaction object: %s" % type(unsigned_transaction)) + + # serialize transaction with rlp + encoded_transaction = encode_transaction(unsigned_transaction, vrs=(v, r, s)) + + return (v, r, s, encoded_transaction) + + +def hash_of_signed_transaction(txn_obj): + """ + Regenerate the hash of the signed transaction object. + + 1. Infer the chain ID from the signature + 2. Strip out signature from transaction + 3. Annotate the transaction with that ID, if available + 4. Take the hash of the serialized, unsigned, chain-aware transaction + + Chain ID inference and annotation is according to EIP-155 + See details at https://github.com/ethereum/EIPs/blob/master/EIPS/eip-155.md + + :return: the hash of the provided transaction, to be signed + """ + (chain_id, _v) = extract_chain_id(txn_obj.v) + unsigned_parts = strip_signature(txn_obj) + if chain_id is None: + signable_transaction = UnsignedTransaction(*unsigned_parts) + else: + extended_transaction = unsigned_parts + [chain_id, 0, 0] + signable_transaction = ChainAwareUnsignedTransaction(*extended_transaction) + return signable_transaction.hash() + + +def extract_chain_id(raw_v): + """ + Extracts chain ID, according to EIP-155. 
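+
+    Per EIP-155, a replay-protected signature sets v = chain_id * 2 + 35 + y_parity,
+    while pre-EIP-155 signatures use v in {0, 1, 27, 28}. An illustrative doctest:
+
+        >>> extract_chain_id(37)  # mainnet: 1 * 2 + 35 + 0
+        (1, 27)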
+ + @return (chain_id, v) + """ + above_id_offset = raw_v - CHAIN_ID_OFFSET + if above_id_offset < 0: + if raw_v in {0, 1}: + return (None, raw_v + V_OFFSET) + elif raw_v in {27, 28}: + return (None, raw_v) + else: + raise ValueError("v %r is invalid, must be one of: 0, 1, 27, 28, 35+") + else: + (chain_id, v_bit) = divmod(above_id_offset, 2) + return (chain_id, v_bit + V_OFFSET) + + +def to_standard_signature_bytes(ethereum_signature_bytes): + rs = ethereum_signature_bytes[:-1] + v = to_int(ethereum_signature_bytes[-1]) + standard_v = to_standard_v(v) + return rs + to_bytes(standard_v) + + +def to_standard_v(enhanced_v): + (_chain, chain_naive_v) = extract_chain_id(enhanced_v) + v_standard = chain_naive_v - V_OFFSET + assert v_standard in {0, 1} + return v_standard + + +def to_eth_v(v_raw, chain_id=None): + if chain_id is None: + v = v_raw + V_OFFSET + else: + v = v_raw + CHAIN_ID_OFFSET + 2 * chain_id + return v + + +def sign_transaction_hash(account, transaction_hash, chain_id): + signature = account.sign_msg_hash(transaction_hash) + (v_raw, r, s) = signature.vrs + v = to_eth_v(v_raw, chain_id) + return (v, r, s) + + +def _pad_to_eth_word(bytes_val): + return bytes_val.rjust(32, b'\0') + + +def to_bytes32(val): + return pipe( + val, + to_bytes, + _pad_to_eth_word, + ) + + +def sign_message_hash(key, msg_hash): + signature = key.sign_msg_hash(msg_hash) + (v_raw, r, s) = signature.vrs + v = to_eth_v(v_raw) + eth_signature_bytes = to_bytes32(r) + to_bytes32(s) + to_bytes(v) + return (v, r, s, eth_signature_bytes) diff --git a/env/lib/python3.8/site-packages/eth_account/_utils/structured_data/__init__.py b/env/lib/python3.8/site-packages/eth_account/_utils/structured_data/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_account/_utils/structured_data/hashing.py b/env/lib/python3.8/site-packages/eth_account/_utils/structured_data/hashing.py new file mode 100644 index 00000000..2074f866 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/_utils/structured_data/hashing.py @@ -0,0 +1,296 @@ +from itertools import ( + groupby, +) +import json +from operator import ( + itemgetter, +) + +from eth_abi import ( + encode_abi, + is_encodable, + is_encodable_type, +) +from eth_abi.grammar import ( + parse, +) +from eth_utils import ( + ValidationError, + keccak, + to_tuple, + toolz, +) + +from .validation import ( + validate_structured_data, +) + + +def get_dependencies(primary_type, types): + """ + Perform DFS to get all the dependencies of the primary_type. + """ + deps = set() + struct_names_yet_to_be_expanded = [primary_type] + + while len(struct_names_yet_to_be_expanded) > 0: + struct_name = struct_names_yet_to_be_expanded.pop() + + deps.add(struct_name) + fields = types[struct_name] + for field in fields: + if field["type"] not in types: + # We don't need to expand types that are not user defined (customized) + continue + elif field["type"] in deps: + # skip types that we have already encountered + continue + else: + # Custom Struct Type + struct_names_yet_to_be_expanded.append(field["type"]) + + # Don't need to make a struct as dependency of itself + deps.remove(primary_type) + + return tuple(deps) + + +def field_identifier(field): + """ + Convert a field dict into a typed-name string. 
+ + Given a ``field`` of the format {'name': NAME, 'type': TYPE}, + this function converts it to ``TYPE NAME`` + """ + return "{0} {1}".format(field["type"], field["name"]) + + +def encode_struct(struct_name, struct_field_types): + return "{0}({1})".format( + struct_name, + ','.join(map(field_identifier, struct_field_types)), + ) + + +def encode_type(primary_type, types): + """ + Serialize types into an encoded string. + + The type of a struct is encoded as name ‖ "(" ‖ member₁ ‖ "," ‖ member₂ ‖ "," ‖ … ‖ memberₙ ")" + where each member is written as type ‖ " " ‖ name. + """ + # Getting the dependencies and sorting them alphabetically as per EIP712 + deps = get_dependencies(primary_type, types) + sorted_deps = (primary_type,) + tuple(sorted(deps)) + + result = ''.join( + [ + encode_struct(struct_name, types[struct_name]) + for struct_name in sorted_deps + ] + ) + return result + + +def hash_struct_type(primary_type, types): + return keccak(text=encode_type(primary_type, types)) + + +def is_array_type(type): + # Identify if type such as "person[]" or "person[2]" is an array + abi_type = parse(type) + return abi_type.is_array + + +@to_tuple +def get_depths_and_dimensions(data, depth): + """ + Yields 2-length tuples of depth and dimension of each element at that depth. + """ + if not isinstance(data, (list, tuple)): + # Not checking for Iterable instance, because even Dictionaries and strings + # are considered as iterables, but that's not what we want the condition to be. + return () + + yield depth, len(data) + + for item in data: + # iterating over all 1 dimension less sub-data items + yield from get_depths_and_dimensions(item, depth + 1) + + +def get_array_dimensions(data): + """ + Given an array type data item, check that it is an array and return the dimensions as a tuple. + + Ex: get_array_dimensions([[1, 2, 3], [4, 5, 6]]) returns (2, 3) + """ + depths_and_dimensions = get_depths_and_dimensions(data, 0) + # re-form as a dictionary with `depth` as key, and all of the dimensions found at that depth. + grouped_by_depth = { + depth: tuple(dimension for depth, dimension in group) + for depth, group in groupby(depths_and_dimensions, itemgetter(0)) + } + + # validate that there is only one dimension for any given depth. + invalid_depths_dimensions = tuple( + (depth, dimensions) + for depth, dimensions in grouped_by_depth.items() + if len(set(dimensions)) != 1 + ) + if invalid_depths_dimensions: + raise ValidationError( + '\n'.join( + [ + "Depth {0} of array data has more than one dimensions: {1}". + format(depth, dimensions) + for depth, dimensions in invalid_depths_dimensions + ] + ) + ) + + dimensions = tuple( + toolz.first(set(dimensions)) + for depth, dimensions in sorted(grouped_by_depth.items()) + ) + + return dimensions + + +@to_tuple +def flatten_multidimensional_array(array): + for item in array: + if isinstance(item, (list, tuple)): + # Not checking for Iterable instance, because even Dictionaries and strings + # are considered as iterables, but that's not what we want the condition to be. 
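+            # e.g. an illustrative nested value [[1, 2], [3, 4]] is recursively
+            # flattened into the tuple (1, 2, 3, 4) by the @to_tuple decorator.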
+ yield from flatten_multidimensional_array(item) + else: + yield item + + +@to_tuple +def _encode_data(primary_type, types, data): + # Add typehash + yield "bytes32", hash_struct_type(primary_type, types) + + # Add field contents + for field in types[primary_type]: + value = data[field["name"]] + if field["type"] == "string": + if not isinstance(value, str): + raise TypeError( + "Value of `{0}` ({2}) in the struct `{1}` is of the type `{3}`, but expected " + "string value".format( + field["name"], + primary_type, + value, + type(value), + ) + ) + # Special case where the values need to be keccak hashed before they are encoded + hashed_value = keccak(text=value) + yield "bytes32", hashed_value + elif field["type"] == "bytes": + if not isinstance(value, bytes): + raise TypeError( + "Value of `{0}` ({2}) in the struct `{1}` is of the type `{3}`, but expected " + "bytes value".format( + field["name"], + primary_type, + value, + type(value), + ) + ) + # Special case where the values need to be keccak hashed before they are encoded + hashed_value = keccak(primitive=value) + yield "bytes32", hashed_value + elif field["type"] in types: + # This means that this type is a user defined type + hashed_value = keccak(primitive=encode_data(field["type"], types, value)) + yield "bytes32", hashed_value + elif is_array_type(field["type"]): + # Get the dimensions from the value + array_dimensions = get_array_dimensions(value) + # Get the dimensions from what was declared in the schema + parsed_type = parse(field["type"]) + for i in range(len(array_dimensions)): + if len(parsed_type.arrlist[i]) == 0: + # Skip empty or dynamically declared dimensions + continue + if array_dimensions[i] != parsed_type.arrlist[i][0]: + # Dimensions should match with declared schema + raise TypeError( + "Array data `{0}` has dimensions `{1}` whereas the " + "schema has dimensions `{2}`".format( + value, + array_dimensions, + tuple(map(lambda x: x[0], parsed_type.arrlist)), + ) + ) + + array_items = flatten_multidimensional_array(value) + array_items_encoding = [ + encode_data(parsed_type.base, types, array_item) + for array_item in array_items + ] + concatenated_array_encodings = b''.join(array_items_encoding) + hashed_value = keccak(concatenated_array_encodings) + yield "bytes32", hashed_value + else: + # First checking to see if type is valid as per abi + if not is_encodable_type(field["type"]): + raise TypeError( + "Received Invalid type `{0}` in the struct `{1}`".format( + field["type"], + primary_type, + ) + ) + + # Next see if the data fits the specified encoding type + if is_encodable(field["type"], value): + # field["type"] is a valid type and this value corresponds to that type. 
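+                # e.g. an atomic pair such as ('uint256', 42) is yielded unhashed;
+                # encode_data() later packs all such pairs with encode_abi().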
+ yield field["type"], value + else: + raise TypeError( + "Value of `{0}` ({2}) in the struct `{1}` is of the type `{3}`, but expected " + "{4} value".format( + field["name"], + primary_type, + value, + type(value), + field["type"], + ) + ) + + +def encode_data(primaryType, types, data): + data_types_and_hashes = _encode_data(primaryType, types, data) + data_types, data_hashes = zip(*data_types_and_hashes) + return encode_abi(data_types, data_hashes) + + +def load_and_validate_structured_message(structured_json_string_data): + structured_data = json.loads(structured_json_string_data) + validate_structured_data(structured_data) + + return structured_data + + +def hash_domain(structured_data): + return keccak( + encode_data( + "EIP712Domain", + structured_data["types"], + structured_data["domain"] + ) + ) + + +def hash_message(structured_data): + return keccak( + encode_data( + structured_data["primaryType"], + structured_data["types"], + structured_data["message"] + ) + ) diff --git a/env/lib/python3.8/site-packages/eth_account/_utils/structured_data/validation.py b/env/lib/python3.8/site-packages/eth_account/_utils/structured_data/validation.py new file mode 100644 index 00000000..fcb9197d --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/_utils/structured_data/validation.py @@ -0,0 +1,120 @@ +import re + +from eth_utils import ( + ValidationError, +) + +# Regexes +IDENTIFIER_REGEX = r"^[a-zA-Z_$][a-zA-Z_$0-9]*$" +TYPE_REGEX = r"^[a-zA-Z_$][a-zA-Z_$0-9]*(\[([1-9]\d*\b)*\])*$" + + +def validate_has_attribute(attr_name, dict_data): + if attr_name not in dict_data: + raise ValidationError( + "Attribute `{0}` not found in the JSON string". + format(attr_name) + ) + + +def validate_types_attribute(structured_data): + # Check that the data has `types` attribute + validate_has_attribute("types", structured_data) + + # Check if all the `name` and the `type` attributes in each field of all the + # `types` attribute are valid (Regex Check) + for struct_name in structured_data["types"]: + # Check that `struct_name` is of the type string + if not isinstance(struct_name, str): + raise ValidationError( + "Struct Name of `types` attribute should be a string, but got type `{0}`". + format(type(struct_name)) + ) + for field in structured_data["types"][struct_name]: + # Check that `field["name"]` is of the type string + if not isinstance(field["name"], str): + raise ValidationError( + "Field Name `{0}` of struct `{1}` should be a string, but got type `{2}`". + format(field["name"], struct_name, type(field["name"])) + ) + # Check that `field["type"]` is of the type string + if not isinstance(field["type"], str): + raise ValidationError( + "Field Type `{0}` of struct `{1}` should be a string, but got type `{2}`". + format(field["type"], struct_name, type(field["type"])) + ) + # Check that field["name"] matches with IDENTIFIER_REGEX + if not re.match(IDENTIFIER_REGEX, field["name"]): + raise ValidationError( + "Invalid Identifier `{0}` in `{1}`".format(field["name"], struct_name) + ) + # Check that field["type"] matches with TYPE_REGEX + if not re.match(TYPE_REGEX, field["type"]): + raise ValidationError( + "Invalid Type `{0}` in `{1}`".format(field["type"], struct_name) + ) + + +def validate_field_declared_only_once_in_struct(field_name, struct_data, struct_name): + if len([field for field in struct_data if field["name"] == field_name]) != 1: + raise ValidationError( + "Attribute `{0}` not declared or declared more than once in {1}". 
+ format(field_name, struct_name) + ) + + +EIP712_DOMAIN_FIELDS = [ + "name", + "version", + "chainId", + "verifyingContract", +] + + +def used_header_fields(EIP712Domain_data): + return [field["name"] for field in EIP712Domain_data if field["name"] in EIP712_DOMAIN_FIELDS] + + +def validate_EIP712Domain_schema(structured_data): + # Check that the `types` attribute contains `EIP712Domain` schema declaration + if "EIP712Domain" not in structured_data["types"]: + raise ValidationError("`EIP712Domain struct` not found in types attribute") + # Check that the names and types in `EIP712Domain` are what are mentioned in the EIP-712 + # and they are declared only once (if defined at all) + EIP712Domain_data = structured_data["types"]["EIP712Domain"] + header_fields = used_header_fields(EIP712Domain_data) + if len(header_fields) == 0: + raise ValidationError(f"One of {EIP712_DOMAIN_FIELDS} must be defined in {structured_data}") + for field in header_fields: + validate_field_declared_only_once_in_struct(field, EIP712Domain_data, "EIP712Domain") + + +def validate_primaryType_attribute(structured_data): + # Check that `primaryType` attribute is present + if "primaryType" not in structured_data: + raise ValidationError("The Structured Data needs to have a `primaryType` attribute") + # Check that `primaryType` value is a string + if not isinstance(structured_data["primaryType"], str): + raise ValidationError( + "Value of attribute `primaryType` should be `string`, but got type `{0}`". + format(type(structured_data["primaryType"])) + ) + # Check that the value of `primaryType` is present in the `types` attribute + if not structured_data["primaryType"] in structured_data["types"]: + raise ValidationError( + "The Primary Type `{0}` is not present in the `types` attribute". 
+ format(structured_data["primaryType"]) + ) + + +def validate_structured_data(structured_data): + # validate the `types` attribute + validate_types_attribute(structured_data) + # validate the `EIP712Domain` struct of `types` attribute + validate_EIP712Domain_schema(structured_data) + # validate the `primaryType` attribute + validate_primaryType_attribute(structured_data) + # Check that there is a `domain` attribute in the structured data + validate_has_attribute("domain", structured_data) + # Check that there is a `message` attribute in the structured data + validate_has_attribute("message", structured_data) diff --git a/env/lib/python3.8/site-packages/eth_account/_utils/transaction_utils.py b/env/lib/python3.8/site-packages/eth_account/_utils/transaction_utils.py new file mode 100644 index 00000000..86e9ead5 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/_utils/transaction_utils.py @@ -0,0 +1,82 @@ +from typing import ( + Any, + Dict, + Sequence, +) + +from toolz import ( + assoc, + dissoc, +) + +from eth_account._utils.validation import ( + is_rlp_structured_access_list, + is_rpc_structured_access_list, +) + + +def set_transaction_type_if_needed(transaction_dict: Dict[str, Any]) -> Dict[str, Any]: + if 'type' not in transaction_dict: + if 'gasPrice' in transaction_dict and 'accessList' in transaction_dict: + # access list txn - type 1 + transaction_dict = assoc(transaction_dict, 'type', '0x1') + elif 'maxFeePerGas' in transaction_dict and 'maxPriorityFeePerGas' in transaction_dict: + # dynamic fee txn - type 2 + transaction_dict = assoc(transaction_dict, 'type', '0x2') + return transaction_dict + + +# JSON-RPC to rlp transaction structure +def transaction_rpc_to_rlp_structure(dictionary: Dict[str, Any]) -> Dict[str, Any]: + """ + Convert a JSON-RPC-structured transaction to an rlp-structured transaction. + """ + access_list = dictionary.get('accessList') + if access_list: + dictionary = dissoc(dictionary, 'accessList') + rlp_structured_access_list = _access_list_rpc_to_rlp_structure(access_list) + dictionary = assoc(dictionary, 'accessList', rlp_structured_access_list) + return dictionary + + +def _access_list_rpc_to_rlp_structure(access_list: Sequence) -> Sequence: + if not is_rpc_structured_access_list(access_list): + raise ValueError("provided object not formatted as JSON-RPC-structured access list") + rlp_structured_access_list = [] + for d in access_list: + # flatten each dict into a tuple of its values + rlp_structured_access_list.append( + ( + d['address'], # value of address + tuple(_ for _ in d['storageKeys']) # tuple of storage key values + ) + ) + return tuple(rlp_structured_access_list) + + +# rlp to JSON-RPC transaction structure +def transaction_rlp_to_rpc_structure(dictionary: Dict[str, Any]) -> Dict[str, Any]: + """ + Convert an rlp-structured transaction to a JSON-RPC-structured transaction. 
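+
+    Only the 'accessList' entry changes shape: each rlp-style
+    (address, (storage_key, ...)) tuple becomes a JSON-RPC-style
+    {'address': ..., 'storageKeys': (...)} dict.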
+ """ + access_list = dictionary.get('accessList') + if access_list: + dictionary = dissoc(dictionary, 'accessList') + rpc_structured_access_list = _access_list_rlp_to_rpc_structure(access_list) + dictionary = assoc(dictionary, 'accessList', rpc_structured_access_list) + return dictionary + + +def _access_list_rlp_to_rpc_structure(access_list: Sequence) -> Sequence: + if not is_rlp_structured_access_list(access_list): + raise ValueError("provided object not formatted as rlp-structured access list") + rpc_structured_access_list = [] + for t in access_list: + # build a dictionary with appropriate keys for each tuple + rpc_structured_access_list.append( + { + 'address': t[0], + 'storageKeys': t[1] + } + ) + return tuple(rpc_structured_access_list) diff --git a/env/lib/python3.8/site-packages/eth_account/_utils/typed_transactions.py b/env/lib/python3.8/site-packages/eth_account/_utils/typed_transactions.py new file mode 100644 index 00000000..c79577ec --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/_utils/typed_transactions.py @@ -0,0 +1,519 @@ +from abc import ( + ABC, + abstractmethod, +) +from typing import ( + Any, + Dict, + Tuple, + cast, +) + +from cytoolz import ( + dissoc, + identity, + merge, + partial, + pipe, +) +from eth_rlp import ( + HashableRLP, +) +from eth_utils import ( + keccak, +) +from eth_utils.curried import ( + apply_formatter_to_array, + apply_formatters_to_dict, + apply_one_of_formatters, + hexstr_if_str, + is_bytes, + is_string, + to_bytes, + to_int, +) +from hexbytes import ( + HexBytes, +) +import rlp +from rlp.sedes import ( + BigEndianInt, + Binary, + CountableList, + List, + big_endian_int, + binary, +) + +from .transaction_utils import ( + set_transaction_type_if_needed, + transaction_rlp_to_rpc_structure, + transaction_rpc_to_rlp_structure, +) +from .validation import ( + LEGACY_TRANSACTION_FORMATTERS, + LEGACY_TRANSACTION_VALID_VALUES, + is_int_or_prefixed_hexstr, + is_rpc_structured_access_list, +) + +TYPED_TRANSACTION_FORMATTERS = merge( + LEGACY_TRANSACTION_FORMATTERS, { + 'chainId': hexstr_if_str(to_int), + 'type': hexstr_if_str(to_int), + 'accessList': apply_formatter_to_array( + apply_formatters_to_dict( + { + "address": apply_one_of_formatters(( + (is_string, hexstr_if_str(to_bytes)), + (is_bytes, identity), + )), + "storageKeys": apply_formatter_to_array(hexstr_if_str(to_int)) + } + ), + ), + 'maxPriorityFeePerGas': hexstr_if_str(to_int), + 'maxFeePerGas': hexstr_if_str(to_int), + }, +) + +# Define typed transaction common sedes. +# [[{20 bytes}, [{32 bytes}...]]...], where ... means “zero or more of the thing to the left”. +access_list_sede_type = CountableList( + List([ + Binary.fixed_length(20, allow_empty=False), + CountableList(BigEndianInt(32)), + ]), +) + + +class _TypedTransactionImplementation(ABC): + """ + Abstract class that every typed transaction must implement. + Should not be imported or used by clients of the library. + """ + @abstractmethod + def hash(self) -> bytes: + pass + + @abstractmethod + def payload(self) -> bytes: + pass + + @abstractmethod + def as_dict(self) -> Dict[str, Any]: + pass + + @abstractmethod + def vrs(self) -> Tuple[int, int, int]: + pass + + +class TypedTransaction: + """ + Represents a Typed Transaction as per EIP-2718. + The currently supported Transaction Types are: + * EIP-2930's AccessListTransaction + * EIP-1559's DynamicFeeTransaction + """ + def __init__(self, transaction_type: int, transaction: _TypedTransactionImplementation): + """Should not be called directly. 
Use instead the 'from_dict' method.""" + if not isinstance(transaction, _TypedTransactionImplementation): + raise TypeError("expected _TypedTransactionImplementation, got %s" % type(transaction)) + if not isinstance(transaction_type, int): + raise TypeError("expected int, got %s" % type(transaction_type)) + self.transaction_type = transaction_type + self.transaction = transaction + + @classmethod + def from_dict(cls, dictionary: Dict[str, Any]): + """Builds a TypedTransaction from a dictionary. Verifies the dictionary is well formed.""" + dictionary = set_transaction_type_if_needed(dictionary) + if not ('type' in dictionary and is_int_or_prefixed_hexstr(dictionary['type'])): + raise ValueError("missing or incorrect transaction type") + # Switch on the transaction type to choose the correct constructor. + transaction_type = pipe(dictionary['type'], hexstr_if_str(to_int)) + transaction: Any + if transaction_type == AccessListTransaction.transaction_type: + transaction = AccessListTransaction + elif transaction_type == DynamicFeeTransaction.transaction_type: + transaction = DynamicFeeTransaction + else: + raise TypeError("Unknown Transaction type: %s" % transaction_type) + return cls( + transaction_type=transaction_type, + transaction=transaction.from_dict(dictionary), + ) + + @classmethod + def from_bytes(cls, encoded_transaction: HexBytes): + """Builds a TypedTransaction from a signed encoded transaction.""" + if not isinstance(encoded_transaction, HexBytes): + raise TypeError("expected Hexbytes, got %s" % type(encoded_transaction)) + if not (len(encoded_transaction) > 0 and encoded_transaction[0] <= 0x7f): + raise ValueError("unexpected input") + if encoded_transaction[0] == AccessListTransaction.transaction_type: + transaction_type = AccessListTransaction.transaction_type + transaction = AccessListTransaction.from_bytes(encoded_transaction) + elif encoded_transaction[0] == DynamicFeeTransaction.transaction_type: + transaction_type = DynamicFeeTransaction.transaction_type + transaction = DynamicFeeTransaction.from_bytes(encoded_transaction) + else: + # The only known transaction types should be explicit if/elif branches. + raise TypeError("typed transaction has unknown type: %s" % encoded_transaction[0]) + return cls( + transaction_type=transaction_type, + transaction=transaction, + ) + + def hash(self) -> bytes: + """ + Hashes this TypedTransaction to prepare it for signing. + + As per the EIP-2718 specifications, + the hashing format is dictated by the transaction type itself, and so we delegate the call. + Note that the return type will be bytes. + """ + return self.transaction.hash() + + def encode(self) -> bytes: + """ + Encodes this TypedTransaction and returns it as bytes. + + The transaction format follows + EIP-2718's typed transaction format (TransactionType || TransactionPayload). + Note that we delegate to a transaction type's payload() method as the EIP-2718 does not + prescribe a TransactionPayload format, leaving types free to implement their own encoding. + """ + return bytes([self.transaction_type]) + self.transaction.payload() + + def as_dict(self) -> Dict[str, Any]: + """Returns this transaction as a dictionary.""" + return self.transaction.as_dict() + + def vrs(self) -> Tuple[int, int, int]: + """Returns (v, r, s) if they exist.""" + return self.transaction.vrs() + + +class AccessListTransaction(_TypedTransactionImplementation): + """ + Represents an access list transaction per EIP-2930. 
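+
+    Per EIP-2930, the signed transaction is serialized as
+    0x01 || rlp([chainId, nonce, gasPrice, gasLimit, to, value, data, accessList,
+    signatureYParity, signatureR, signatureS]).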
+ """ + # This is the first transaction to implement the EIP-2718 typed transaction. + transaction_type = 1 # '0x01' + + unsigned_transaction_fields = ( + ('chainId', big_endian_int), + ('nonce', big_endian_int), + ('gasPrice', big_endian_int), + ('gas', big_endian_int), + ('to', Binary.fixed_length(20, allow_empty=True)), + ('value', big_endian_int), + ('data', binary), + ('accessList', access_list_sede_type), + ) + + signature_fields = ( + ('v', big_endian_int), + ('r', big_endian_int), + ('s', big_endian_int), + ) + + transaction_field_defaults = { + 'type': b'0x1', + 'chainId': 0, + 'to': b'', + 'value': 0, + 'data': b'', + 'accessList': [], + } + + _unsigned_transaction_serializer = type( + "_unsigned_transaction_serializer", (HashableRLP, ), { + "fields": unsigned_transaction_fields, + }, + ) + + _signed_transaction_serializer = type( + "_signed_transaction_serializer", (HashableRLP, ), { + "fields": unsigned_transaction_fields + signature_fields, + }, + ) + + def __init__(self, dictionary: Dict[str, Any]): + self.dictionary = dictionary + + @classmethod + def assert_valid_fields(cls, dictionary: Dict[str, Any]): + transaction_valid_values = merge(LEGACY_TRANSACTION_VALID_VALUES, { + 'type': is_int_or_prefixed_hexstr, + 'accessList': is_rpc_structured_access_list, + }) + + if 'v' in dictionary and dictionary['v'] == 0: + # This is insane logic that is required because the way we evaluate + # correct types is in the `if not all()` branch below, and 0 obviously + # maps to the int(0), which maps to False... This was not an issue in non-typed + # transaction because v=0, couldn't exist with the chain offset. + dictionary['v'] = '0x0' + valid_fields = apply_formatters_to_dict( + transaction_valid_values, dictionary, + ) # type: Dict[str, Any] + if not all(valid_fields.values()): + invalid = {key: dictionary[key] for key, valid in valid_fields.items() if not valid} + raise TypeError("Transaction had invalid fields: %r" % invalid) + + @classmethod + def from_dict(cls, dictionary: Dict[str, Any]): + """ + Builds an AccessListTransaction from a dictionary. + Verifies that the dictionary is well formed. + """ + # Validate fields. + cls.assert_valid_fields(dictionary) + sanitized_dictionary = pipe( + dictionary, + dict, + partial(merge, cls.transaction_field_defaults), + apply_formatters_to_dict(TYPED_TRANSACTION_FORMATTERS), + ) + + # We have verified the type, we can safely remove it from the dictionary, + # given that it is not to be included within the RLP payload. + transaction_type = sanitized_dictionary.pop('type') + if transaction_type != cls.transaction_type: + raise ValueError( + "expected transaction type %s, got %s" % (cls.transaction_type, transaction_type), + ) + return cls( + dictionary=sanitized_dictionary, + ) + + @classmethod + def from_bytes(cls, encoded_transaction: HexBytes): + """Builds an AccesslistTransaction from a signed encoded transaction.""" + if not isinstance(encoded_transaction, HexBytes): + raise TypeError("expected Hexbytes, got type: %s" % type(encoded_transaction)) + if not (len(encoded_transaction) > 0 and encoded_transaction[0] == cls.transaction_type): + raise ValueError("unexpected input") + # Format is (0x01 || TransactionPayload) + # We strip the prefix, and RLP unmarshal the payload into our signed transaction serializer. 
+ transaction_payload = encoded_transaction[1:] + rlp_serializer = cls._signed_transaction_serializer + dictionary = rlp_serializer.from_bytes(transaction_payload).as_dict() # type: ignore + rpc_structured_dict = transaction_rlp_to_rpc_structure(dictionary) + rpc_structured_dict['type'] = cls.transaction_type + return cls.from_dict(rpc_structured_dict) + + def as_dict(self) -> Dict[str, Any]: + """Returns this transaction as a dictionary.""" + dictionary = self.dictionary.copy() + dictionary['type'] = self.__class__.transaction_type + return dictionary + + def hash(self) -> bytes: + """ + Hashes this AccessListTransaction to prepare it for signing. + As per the EIP-2930 specifications, the signature is a secp256k1 signature over + keccak256(0x01 || rlp([chainId, nonce, gasPrice, gasLimit, to, value, data, accessList])). + Here, we compute the keccak256(...) hash. + """ + # Remove signature fields. + transaction_without_signature_fields = dissoc(self.dictionary, 'v', 'r', 's') + # RPC-structured transaction to rlp-structured transaction + rlp_structured_txn_without_sig_fields = transaction_rpc_to_rlp_structure( + transaction_without_signature_fields + ) + rlp_serializer = self.__class__._unsigned_transaction_serializer + hash = pipe( + rlp_serializer.from_dict(rlp_structured_txn_without_sig_fields), # type: ignore + lambda val: rlp.encode(val), # rlp([...]) + lambda val: bytes([self.__class__.transaction_type]) + val, # (0x01 || rlp([...])) + keccak, # keccak256(0x01 || rlp([...])) + ) + return cast(bytes, hash) + + def payload(self) -> bytes: + """ + Returns this transaction's payload as bytes. + + Here, the TransactionPayload = rlp([chainId, + nonce, gasPrice, gasLimit, to, value, data, accessList, signatureYParity, signatureR, + signatureS]) + """ + if not all(k in self.dictionary for k in 'vrs'): + raise ValueError("attempting to encode an unsigned transaction") + rlp_serializer = self.__class__._signed_transaction_serializer + rlp_structured_dict = transaction_rpc_to_rlp_structure(self.dictionary) + payload = rlp.encode(rlp_serializer.from_dict(rlp_structured_dict)) # type: ignore + return cast(bytes, payload) + + def vrs(self) -> Tuple[int, int, int]: + """Returns (v, r, s) if they exist.""" + if not all(k in self.dictionary for k in 'vrs'): + raise ValueError("attempting to encode an unsigned transaction") + return (self.dictionary['v'], self.dictionary['r'], self.dictionary['s']) + + +class DynamicFeeTransaction(_TypedTransactionImplementation): + """ + Represents a dynamic fee transaction access per EIP-1559. + """ + # This is the second transaction to implement the EIP-2718 typed transaction. 
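+    # Unlike the type-1 transaction above, it replaces gasPrice with the EIP-1559
+    # maxPriorityFeePerGas and maxFeePerGas fields (see unsigned_transaction_fields).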
+ transaction_type = 2 # '0x02' + + unsigned_transaction_fields = ( + ('chainId', big_endian_int), + ('nonce', big_endian_int), + ('maxPriorityFeePerGas', big_endian_int), + ('maxFeePerGas', big_endian_int), + ('gas', big_endian_int), + ('to', Binary.fixed_length(20, allow_empty=True)), + ('value', big_endian_int), + ('data', binary), + ('accessList', access_list_sede_type), + ) + + signature_fields = ( + ('v', big_endian_int), + ('r', big_endian_int), + ('s', big_endian_int), + ) + + transaction_field_defaults = { + 'type': b'0x2', + 'chainId': 0, + 'to': b'', + 'value': 0, + 'data': b'', + 'accessList': [], + } + + _unsigned_transaction_serializer = type( + "_unsigned_transaction_serializer", (HashableRLP, ), { + "fields": unsigned_transaction_fields, + }, + ) + + _signed_transaction_serializer = type( + "_signed_transaction_serializer", (HashableRLP, ), { + "fields": unsigned_transaction_fields + signature_fields, + }, + ) + + def __init__(self, dictionary: Dict[str, Any]): + self.dictionary = dictionary + + @classmethod + def assert_valid_fields(cls, dictionary: Dict[str, Any]): + transaction_valid_values = merge(LEGACY_TRANSACTION_VALID_VALUES, { + 'type': is_int_or_prefixed_hexstr, + 'maxPriorityFeePerGas': is_int_or_prefixed_hexstr, + 'maxFeePerGas': is_int_or_prefixed_hexstr, + 'accessList': is_rpc_structured_access_list, + }) + + if 'v' in dictionary and dictionary['v'] == 0: + # This is insane logic that is required because the way we evaluate + # correct types is in the `if not all()` branch below, and 0 obviously + # maps to the int(0), which maps to False... This was not an issue in non-typed + # transaction because v=0, couldn't exist with the chain offset. + dictionary['v'] = '0x0' + valid_fields = apply_formatters_to_dict( + transaction_valid_values, dictionary, + ) # type: Dict[str, Any] + if not all(valid_fields.values()): + invalid = {key: dictionary[key] for key, valid in valid_fields.items() if not valid} + raise TypeError("Transaction had invalid fields: %r" % invalid) + + @classmethod + def from_dict(cls, dictionary: Dict[str, Any]): + """ + Builds a DynamicFeeTransaction from a dictionary. + Verifies that the dictionary is well formed. + """ + # Validate fields. + cls.assert_valid_fields(dictionary) + sanitized_dictionary = pipe( + dictionary, + dict, + partial(merge, cls.transaction_field_defaults), + apply_formatters_to_dict(TYPED_TRANSACTION_FORMATTERS), + ) + + # We have verified the type, we can safely remove it from the dictionary, + # given that it is not to be included within the RLP payload. + transaction_type = sanitized_dictionary.pop('type') + if transaction_type != cls.transaction_type: + raise ValueError( + "expected transaction type %s, got %s" % (cls.transaction_type, transaction_type), + ) + return cls( + dictionary=sanitized_dictionary, + ) + + @classmethod + def from_bytes(cls, encoded_transaction: HexBytes): + """Builds a DynamicFeeTransaction from a signed encoded transaction.""" + if not isinstance(encoded_transaction, HexBytes): + raise TypeError("expected Hexbytes, got type: %s" % type(encoded_transaction)) + if not (len(encoded_transaction) > 0 and encoded_transaction[0] == cls.transaction_type): + raise ValueError("unexpected input") + # Format is (0x02 || TransactionPayload) + # We strip the prefix, and RLP unmarshal the payload into our signed transaction serializer. 
+ transaction_payload = encoded_transaction[1:] + rlp_serializer = cls._signed_transaction_serializer + dictionary = rlp_serializer.from_bytes(transaction_payload).as_dict() # type: ignore + rpc_structured_dict = transaction_rlp_to_rpc_structure(dictionary) + rpc_structured_dict['type'] = cls.transaction_type + return cls.from_dict(rpc_structured_dict) + + def as_dict(self) -> Dict[str, Any]: + """Returns this transaction as a dictionary.""" + dictionary = self.dictionary.copy() + dictionary['type'] = self.__class__.transaction_type + return dictionary + + def hash(self) -> bytes: + """ + Hashes this DynamicFeeTransaction to prepare it for signing. + As per the EIP-1559 specifications, the signature is a secp256k1 signature over + keccak256(0x02 || rlp([chainId, nonce, maxPriorityFeePerGas, maxFeePerGas, gasLimit, to, + value, data, accessList])). Here, we compute the keccak256(...) hash. + """ + # Remove signature fields. + transaction_without_signature_fields = dissoc(self.dictionary, 'v', 'r', 's') + # RPC-structured transaction to rlp-structured transaction + rlp_structured_txn_without_sig_fields = transaction_rpc_to_rlp_structure( + transaction_without_signature_fields + ) + rlp_serializer = self.__class__._unsigned_transaction_serializer + hash = pipe( + rlp_serializer.from_dict(rlp_structured_txn_without_sig_fields), # type: ignore + lambda val: rlp.encode(val), # rlp([...]) + lambda val: bytes([self.__class__.transaction_type]) + val, # (0x02 || rlp([...])) + keccak, # keccak256(0x02 || rlp([...])) + ) + return cast(bytes, hash) + + def payload(self) -> bytes: + """ + Returns this transaction's payload as bytes. + + Here, the TransactionPayload = rlp([chainId, + nonce, maxPriorityFeePerGas, maxFeePerGas, gasLimit, to, value, data, accessList, + signatureYParity, signatureR, signatureS]) + """ + if not all(k in self.dictionary for k in 'vrs'): + raise ValueError("attempting to encode an unsigned transaction") + rlp_serializer = self.__class__._signed_transaction_serializer + rlp_structured_dict = transaction_rpc_to_rlp_structure(self.dictionary) + payload = rlp.encode(rlp_serializer.from_dict(rlp_structured_dict)) # type: ignore + return cast(bytes, payload) + + def vrs(self) -> Tuple[int, int, int]: + """Returns (v, r, s) if they exist.""" + if not all(k in self.dictionary for k in 'vrs'): + raise ValueError("attempting to encode an unsigned transaction") + return (self.dictionary['v'], self.dictionary['r'], self.dictionary['s']) diff --git a/env/lib/python3.8/site-packages/eth_account/_utils/validation.py b/env/lib/python3.8/site-packages/eth_account/_utils/validation.py new file mode 100644 index 00000000..dcb27c8d --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/_utils/validation.py @@ -0,0 +1,119 @@ +from cytoolz import ( + identity, +) +from eth_utils import ( + is_binary_address, + is_checksum_address, + is_dict, +) +from eth_utils.curried import ( + apply_one_of_formatters, + hexstr_if_str, + is_0x_prefixed, + is_address, + is_bytes, + is_integer, + is_list_like, + is_string, + to_bytes, + to_int, +) + +VALID_EMPTY_ADDRESSES = {None, b'', ''} + + +def is_none(val): + return val is None + + +def is_valid_address(value): + if is_binary_address(value): + return True + elif is_checksum_address(value): + return True + else: + return False + + +def is_int_or_prefixed_hexstr(val): + if is_integer(val): + return True + elif isinstance(val, str) and is_0x_prefixed(val): + return True + else: + return False + + +def is_empty_or_checksum_address(val): + if val in 
VALID_EMPTY_ADDRESSES: + return True + else: + return is_valid_address(val) + + +def is_rpc_structured_access_list(val): + """Returns true if 'val' is a valid JSON-RPC structured access list.""" + if not is_list_like(val): + return False + for d in val: + if not is_dict(d): + return False + if len(d) != 2: + return False + address = d.get('address') + storage_keys = d.get('storageKeys') + if any(_ is None for _ in (address, storage_keys)): + return False + if not is_address(address): + return False + if not is_list_like(storage_keys): + return False + for storage_key in storage_keys: + if not is_int_or_prefixed_hexstr(storage_key): + return False + return True + + +def is_rlp_structured_access_list(val): + """Returns true if 'val' is a valid rlp-structured access list.""" + if not is_list_like(val): + return False + for item in val: + if not is_list_like(item): + return False + if len(item) != 2: + return False + address, storage_keys = item + if not is_address(address): + return False + for storage_key in storage_keys: + if not is_int_or_prefixed_hexstr(storage_key): + return False + return True + + +LEGACY_TRANSACTION_FORMATTERS = { + 'nonce': hexstr_if_str(to_int), + 'gasPrice': hexstr_if_str(to_int), + 'gas': hexstr_if_str(to_int), + 'to': apply_one_of_formatters(( + (is_string, hexstr_if_str(to_bytes)), + (is_bytes, identity), + (is_none, lambda val: b''), + )), + 'value': hexstr_if_str(to_int), + 'data': hexstr_if_str(to_bytes), + 'v': hexstr_if_str(to_int), + 'r': hexstr_if_str(to_int), + 's': hexstr_if_str(to_int), +} + +LEGACY_TRANSACTION_VALID_VALUES = { + 'nonce': is_int_or_prefixed_hexstr, + 'gasPrice': is_int_or_prefixed_hexstr, + 'gas': is_int_or_prefixed_hexstr, + 'to': is_empty_or_checksum_address, + 'value': is_int_or_prefixed_hexstr, + 'data': lambda val: isinstance(val, (int, str, bytes, bytearray)), + 'chainId': lambda val: val is None or is_int_or_prefixed_hexstr(val), +} diff --git a/env/lib/python3.8/site-packages/eth_account/account.py b/env/lib/python3.8/site-packages/eth_account/account.py new file mode 100644 index 00000000..3f0a37df --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/account.py @@ -0,0 +1,780 @@ +from collections.abc import ( + Mapping, +) +import json +import os +import warnings + +from cytoolz import ( + dissoc, +) +from eth_keyfile import ( + create_keyfile_json, + decode_keyfile_json, +) +from eth_keys import ( + KeyAPI, + keys, +) +from eth_keys.exceptions import ( + ValidationError, +) +from eth_utils.curried import ( + combomethod, + hexstr_if_str, + is_dict, + keccak, + text_if_str, + to_bytes, + to_int, +) +from hexbytes import ( + HexBytes, +) + +from eth_account._utils.legacy_transactions import ( + Transaction, + vrs_from, +) +from eth_account._utils.signing import ( + hash_of_signed_transaction, + sign_message_hash, + sign_transaction_dict, + to_standard_signature_bytes, + to_standard_v, +) +from eth_account._utils.typed_transactions import ( + TypedTransaction, +) +from eth_account.datastructures import ( + SignedMessage, + SignedTransaction, +) +from eth_account.hdaccount import ( + ETHEREUM_DEFAULT_PATH, + generate_mnemonic, + key_from_seed, + seed_from_mnemonic, +) +from eth_account.messages import ( + SignableMessage, + _hash_eip191_message, +) +from eth_account.signers.local import ( + LocalAccount, +) + + +class Account(object): + """ + The primary entry point for working with Ethereum private keys. + + It does **not** require a connection to an Ethereum node. 
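+
+    Typical entry points are :meth:`~eth_account.account.Account.create`,
+    :meth:`~eth_account.account.Account.from_key`, and
+    :meth:`~eth_account.account.Account.decrypt`; each is documented below.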
+ """ + _keys = keys + + _default_kdf = os.getenv('ETH_ACCOUNT_KDF', 'scrypt') + + # Enable unaudited features (off by default) + _use_unaudited_hdwallet_features = False + + @classmethod + def enable_unaudited_hdwallet_features(cls): + """ + Use this flag to enable unaudited HD Wallet features. + """ + cls._use_unaudited_hdwallet_features = True + + @combomethod + def create(self, extra_entropy=''): + r""" + Creates a new private key, and returns it as a :class:`~eth_account.local.LocalAccount`. + + :param extra_entropy: Add extra randomness to whatever randomness your OS can provide + :type extra_entropy: str or bytes or int + :returns: an object with private key and convenience methods + + .. code-block:: python + + >>> from eth_account import Account + >>> acct = Account.create('KEYSMASH FJAFJKLDSKF7JKFDJ 1530') + >>> acct.address + '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E' + >>> acct.key + HexBytes('0x8676e9a8c86c8921e922e61e0bb6e9e9689aad4c99082620610b00140e5f21b8') + + # These methods are also available: sign_message(), sign_transaction(), encrypt() + # They correspond to the same-named methods in Account.* + # but without the private key argument + """ + extra_key_bytes = text_if_str(to_bytes, extra_entropy) + key_bytes = keccak(os.urandom(32) + extra_key_bytes) + return self.from_key(key_bytes) + + @staticmethod + def decrypt(keyfile_json, password): + """ + Decrypts a private key. + + The key may have been encrypted using an Ethereum client or :meth:`~Account.encrypt`. + + :param keyfile_json: The encrypted key + :type keyfile_json: dict or str + :param str password: The password that was used to encrypt the key + :returns: the raw private key + :rtype: ~hexbytes.main.HexBytes + + .. doctest:: python + + >>> encrypted = { + ... 'address': '5ce9454909639d2d17a3f753ce7d93fa0b9ab12e', + ... 'crypto': {'cipher': 'aes-128-ctr', + ... 'cipherparams': {'iv': '482ef54775b0cc59f25717711286f5c8'}, + ... 'ciphertext': 'cb636716a9fd46adbb31832d964df2082536edd5399a3393327dc89b0193a2be', + ... 'kdf': 'scrypt', + ... 'kdfparams': {}, + ... 'kdfparams': {'dklen': 32, + ... 'n': 262144, + ... 'p': 8, + ... 'r': 1, + ... 'salt': 'd3c9a9945000fcb6c9df0f854266d573'}, + ... 'mac': '4f626ec5e7fea391b2229348a65bfef532c2a4e8372c0a6a814505a350a7689d'}, + ... 'id': 'b812f3f9-78cc-462a-9e89-74418aa27cb0', + ... 'version': 3} + >>> Account.decrypt(encrypted, 'password') + HexBytes('0xb25c7db31feed9122727bf0939dc769a96564b2de4c4726d035b36ecf1e5b364') + + """ + if isinstance(keyfile_json, str): + keyfile = json.loads(keyfile_json) + elif is_dict(keyfile_json): + keyfile = keyfile_json + else: + raise TypeError("The keyfile should be supplied as a JSON string, or a dictionary.") + password_bytes = text_if_str(to_bytes, password) + return HexBytes(decode_keyfile_json(keyfile, password_bytes)) + + @classmethod + def encrypt(cls, private_key, password, kdf=None, iterations=None): + """ + Creates a dictionary with an encrypted version of your private key. + To import this keyfile into Ethereum clients like geth and parity: + encode this dictionary with :func:`json.dumps` and save it to disk where your + client keeps key files. 
+ + :param private_key: The raw private key + :type private_key: hex str, bytes, int or :class:`eth_keys.datatypes.PrivateKey` + :param str password: The password which you will need to unlock the account in your client + :param str kdf: The key derivation function to use when encrypting your private key + :param int iterations: The work factor for the key derivation function + :returns: The data to use in your encrypted file + :rtype: dict + + If kdf is not set, the default key derivation function falls back to the + environment variable :envvar:`ETH_ACCOUNT_KDF`. If that is not set, then + 'scrypt' will be used as the default. + + .. doctest:: python + + >>> from pprint import pprint + >>> encrypted = Account.encrypt( + ... 0xb25c7db31feed9122727bf0939dc769a96564b2de4c4726d035b36ecf1e5b364, + ... 'password' + ... ) + >>> pprint(encrypted) + {'address': '5ce9454909639d2d17a3f753ce7d93fa0b9ab12e', + 'crypto': {'cipher': 'aes-128-ctr', + 'cipherparams': {'iv': '...'}, + 'ciphertext': '...', + 'kdf': 'scrypt', + 'kdfparams': {'dklen': 32, + 'n': 262144, + 'p': 8, + 'r': 1, + 'salt': '...'}, + 'mac': '...'}, + 'id': '...', + 'version': 3} + + >>> with open('my-keyfile', 'w') as f: # doctest: +SKIP + ... f.write(json.dumps(encrypted)) + """ + if isinstance(private_key, keys.PrivateKey): + key_bytes = private_key.to_bytes() + else: + key_bytes = HexBytes(private_key) + + if kdf is None: + kdf = cls._default_kdf + + password_bytes = text_if_str(to_bytes, password) + assert len(key_bytes) == 32 + + return create_keyfile_json(key_bytes, password_bytes, kdf=kdf, iterations=iterations) + + @combomethod + def privateKeyToAccount(self, private_key): + """ + .. CAUTION:: Deprecated for :meth:`~eth_account.account.Account.from_key`. + This method will be removed in v0.5 + """ + warnings.warn( + "privateKeyToAccount is deprecated in favor of from_key", + category=DeprecationWarning, + ) + return self.from_key(private_key) + + @combomethod + def from_key(self, private_key): + r""" + Returns a convenient object for working with the given private key. + + :param private_key: The raw private key + :type private_key: hex str, bytes, int or :class:`eth_keys.datatypes.PrivateKey` + :return: object with methods for signing and encrypting + :rtype: LocalAccount + + .. doctest:: python + + >>> acct = Account.from_key( + ... 0xb25c7db31feed9122727bf0939dc769a96564b2de4c4726d035b36ecf1e5b364) + >>> acct.address + '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E' + >>> acct.key + HexBytes('0xb25c7db31feed9122727bf0939dc769a96564b2de4c4726d035b36ecf1e5b364') + + # These methods are also available: sign_message(), sign_transaction(), encrypt() + # They correspond to the same-named methods in Account.* + # but without the private key argument + """ + key = self._parsePrivateKey(private_key) + return LocalAccount(key, self) + + @combomethod + def from_mnemonic(self, + mnemonic: str, + passphrase: str = "", + account_path: str = ETHEREUM_DEFAULT_PATH): + """ + Generate an account from a mnemonic. + + .. CAUTION:: This feature is experimental, unaudited, and likely to change soon + + :param str mnemonic: space-separated list of BIP39 mnemonic seed words + :param str passphrase: Optional passphrase used to encrypt the mnemonic + :param str account_path: Specify an alternate HD path for deriving the seed using + BIP32 HD wallet key derivation. + :return: object with methods for signing and encrypting + :rtype: LocalAccount + + .. 
doctest:: python
+
+            >>> from eth_account import Account
+            >>> Account.enable_unaudited_hdwallet_features()
+            >>> acct = Account.from_mnemonic(
+            ...  "coral allow abandon recipe top tray caught video climb similar prepare bracket "
+            ...  "antenna rubber announce gauge volume hub hood burden skill immense add acid")
+            >>> acct.address
+            '0x9AdA5dAD14d925f4df1378409731a9B71Bc8569d'
+
+            # These methods are also available: sign_message(), sign_transaction(), encrypt()
+            # They correspond to the same-named methods in Account.*
+            # but without the private key argument
+        """
+        if not self._use_unaudited_hdwallet_features:
+            raise AttributeError(
+                "The use of the Mnemonic features of Account is disabled by default until "
+                "its API stabilizes. To use these features, please enable them by running "
+                "`Account.enable_unaudited_hdwallet_features()` and try again."
+            )
+        seed = seed_from_mnemonic(mnemonic, passphrase)
+        private_key = key_from_seed(seed, account_path)
+        key = self._parsePrivateKey(private_key)
+        return LocalAccount(key, self)
+
+    @combomethod
+    def create_with_mnemonic(self,
+                             passphrase: str = "",
+                             num_words: int = 12,
+                             language: str = "english",
+                             account_path: str = ETHEREUM_DEFAULT_PATH):
+        r"""
+        Create a new private key and related mnemonic.
+
+        .. CAUTION:: This feature is experimental, unaudited, and likely to change soon
+
+        Creates a new private key, and returns it as a :class:`~eth_account.local.LocalAccount`,
+        alongside the mnemonic that can be used to regenerate it using any BIP39-compatible wallet.
+
+        :param str passphrase: Extra passphrase to encrypt the seed phrase
+        :param int num_words: Number of words to use with seed phrase. Default is 12 words.
+                              Must be one of [12, 15, 18, 21, 24].
+        :param str language: Language to use for BIP39 mnemonic seed phrase.
+        :param str account_path: Specify an alternate HD path for deriving the seed using
+            BIP32 HD wallet key derivation.
+        :returns: A tuple consisting of an object with private key and convenience methods,
+                  and the mnemonic seed phrase that can be used to restore the account.
+        :rtype: (LocalAccount, str)
+
+        .. doctest:: python
+
+            >>> from eth_account import Account
+            >>> Account.enable_unaudited_hdwallet_features()
+            >>> acct, mnemonic = Account.create_with_mnemonic()
+            >>> acct.address # doctest: +SKIP
+            '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E'
+            >>> acct == Account.from_mnemonic(mnemonic)
+            True
+
+            # These methods are also available: sign_message(), sign_transaction(), encrypt()
+            # They correspond to the same-named methods in Account.*
+            # but without the private key argument
+        """
+        if not self._use_unaudited_hdwallet_features:
+            raise AttributeError(
+                "The use of the Mnemonic features of Account is disabled by default until "
+                "its API stabilizes. To use these features, please enable them by running "
+                "`Account.enable_unaudited_hdwallet_features()` and try again."
+            )
+        mnemonic = generate_mnemonic(num_words, language)
+        return self.from_mnemonic(mnemonic, passphrase, account_path), mnemonic
+
+    @combomethod
+    def recover_message(self, signable_message: SignableMessage, vrs=None, signature=None):
+        r"""
+        Get the address of the account that signed the given message.
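+
+        The 65-byte ``signature`` form is simply ``r || s || v``; either input form
+        is normalized internally (see ``_recover_hash`` below) before public-key
+        recovery.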
+ You must specify exactly one of: vrs or signature + + :param signable_message: the message that was signed + :param vrs: the three pieces generated by an elliptic curve signature + :type vrs: tuple(v, r, s), each element is hex str, bytes or int + :param signature: signature bytes concatenated as r+s+v + :type signature: hex str or bytes or int + :returns: address of signer, hex-encoded & checksummed + :rtype: str + :raises: ``eth_keys.exceptions.BadSignature`` if signature is invalid + + .. doctest:: python + + >>> from eth_account.messages import encode_defunct + >>> from eth_account import Account + >>> message = encode_defunct(text="I♥SF") + >>> vrs = ( + ... 28, + ... '0xe6ca9bba58c88611fad66a6ce8f996908195593807c4b38bd528d2cff09d4eb3', + ... '0x3e5bfbbf4d3e39b1a2fd816a7680c19ebebaf3a141b239934ad43cb33fcec8ce') + >>> Account.recover_message(message, vrs=vrs) + '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E' + + + # All of these recover calls are equivalent: + + # variations on vrs + >>> vrs = ( + ... '0x1c', + ... '0xe6ca9bba58c88611fad66a6ce8f996908195593807c4b38bd528d2cff09d4eb3', + ... '0x3e5bfbbf4d3e39b1a2fd816a7680c19ebebaf3a141b239934ad43cb33fcec8ce') + >>> Account.recover_message(message, vrs=vrs) + '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E' + + >>> # Caution about this approach: likely problems if there are leading 0s + >>> vrs = ( + ... 0x1c, + ... 0xe6ca9bba58c88611fad66a6ce8f996908195593807c4b38bd528d2cff09d4eb3, + ... 0x3e5bfbbf4d3e39b1a2fd816a7680c19ebebaf3a141b239934ad43cb33fcec8ce) + >>> Account.recover_message(message, vrs=vrs) + '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E' + + >>> vrs = ( + ... b'\x1c', + ... b'\xe6\xca\x9b\xbaX\xc8\x86\x11\xfa\xd6jl\xe8\xf9\x96\x90\x81\x95Y8\x07\xc4\xb3\x8b\xd5(\xd2\xcf\xf0\x9dN\xb3', # noqa: E501 + ... b'>[\xfb\xbfM>9\xb1\xa2\xfd\x81jv\x80\xc1\x9e\xbe\xba\xf3\xa1A\xb29\x93J\xd4<\xb3?\xce\xc8\xce') # noqa: E501 + >>> Account.recover_message(message, vrs=vrs) + '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E' + + # variations on signature + >>> signature = '0xe6ca9bba58c88611fad66a6ce8f996908195593807c4b38bd528d2cff09d4eb33e5bfbbf4d3e39b1a2fd816a7680c19ebebaf3a141b239934ad43cb33fcec8ce1c' # noqa: E501 + >>> Account.recover_message(message, signature=signature) + '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E' + >>> signature = b'\xe6\xca\x9b\xbaX\xc8\x86\x11\xfa\xd6jl\xe8\xf9\x96\x90\x81\x95Y8\x07\xc4\xb3\x8b\xd5(\xd2\xcf\xf0\x9dN\xb3>[\xfb\xbfM>9\xb1\xa2\xfd\x81jv\x80\xc1\x9e\xbe\xba\xf3\xa1A\xb29\x93J\xd4<\xb3?\xce\xc8\xce\x1c' # noqa: E501 + >>> Account.recover_message(message, signature=signature) + '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E' + >>> # Caution about this approach: likely problems if there are leading 0s + >>> signature = 0xe6ca9bba58c88611fad66a6ce8f996908195593807c4b38bd528d2cff09d4eb33e5bfbbf4d3e39b1a2fd816a7680c19ebebaf3a141b239934ad43cb33fcec8ce1c # noqa: E501 + >>> Account.recover_message(message, signature=signature) + '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E' + """ + message_hash = _hash_eip191_message(signable_message) + return self._recover_hash(message_hash, vrs, signature) + + @combomethod + def recoverHash(self, message_hash, vrs=None, signature=None): + """ + Get the address of the account that signed the message with the given hash. + You must specify exactly one of: vrs or signature + + .. CAUTION:: Deprecated for :meth:`~eth_account.account.Account.recover_message`. 
+ This method might be removed as early as v0.5 + + :param message_hash: the hash of the message that you want to verify + :type message_hash: hex str or bytes or int + :param vrs: the three pieces generated by an elliptic curve signature + :type vrs: tuple(v, r, s), each element is hex str, bytes or int + :param signature: signature bytes concatenated as r+s+v + :type signature: hex str or bytes or int + :returns: address of signer, hex-encoded & checksummed + :rtype: str + """ + warnings.warn( + "recoverHash is deprecated in favor of recover_message", + category=DeprecationWarning, + ) + return self._recover_hash(message_hash, vrs, signature) + + @combomethod + def _recover_hash(self, message_hash, vrs=None, signature=None): + hash_bytes = HexBytes(message_hash) + if len(hash_bytes) != 32: + raise ValueError("The message hash must be exactly 32-bytes") + if vrs is not None: + v, r, s = map(hexstr_if_str(to_int), vrs) + v_standard = to_standard_v(v) + signature_obj = self._keys.Signature(vrs=(v_standard, r, s)) + elif signature is not None: + signature_bytes = HexBytes(signature) + signature_bytes_standard = to_standard_signature_bytes(signature_bytes) + signature_obj = self._keys.Signature(signature_bytes=signature_bytes_standard) + else: + raise TypeError("You must supply the vrs tuple or the signature bytes") + pubkey = signature_obj.recover_public_key_from_msg_hash(hash_bytes) + return pubkey.to_checksum_address() + + @combomethod + def recoverTransaction(self, serialized_transaction): + """ + .. CAUTION:: Deprecated for :meth:`~eth_account.account.Account.recover_transaction`. + This method will be removed in v0.5 + """ + warnings.warn( + "recoverTransaction is deprecated in favor of recover_transaction", + category=DeprecationWarning, + ) + return self.recover_transaction(serialized_transaction) + + @combomethod + def recover_transaction(self, serialized_transaction): + """ + Get the address of the account that signed this transaction. + + :param serialized_transaction: the complete signed transaction + :type serialized_transaction: hex str, bytes or int + :returns: address of signer, hex-encoded & checksummed + :rtype: str + + .. doctest:: python + + >>> raw_transaction = '0xf86a8086d55698372431831e848094f0109fc8df283027b6285cc889f5aa624eac1f55843b9aca008025a009ebb6ca057a0535d6186462bc0b465b561c94a295bdb0621fc19208ab149a9ca0440ffd775ce91a833ab410777204d5341a6f9fa91216a6f3ee2c051fea6a0428' # noqa: E501 + >>> Account.recover_transaction(raw_transaction) + '0x2c7536E3605D9C16a7a3D7b1898e529396a65c23' + """ + txn_bytes = HexBytes(serialized_transaction) + if len(txn_bytes) > 0 and txn_bytes[0] <= 0x7f: + # We are dealing with a typed transaction. + typed_transaction = TypedTransaction.from_bytes(txn_bytes) + msg_hash = typed_transaction.hash() + vrs = typed_transaction.vrs() + return self._recover_hash(msg_hash, vrs=vrs) + + txn = Transaction.from_bytes(txn_bytes) + msg_hash = hash_of_signed_transaction(txn) + return self._recover_hash(msg_hash, vrs=vrs_from(txn)) + + def setKeyBackend(self, backend): + """ + .. CAUTION:: Deprecated for :meth:`~eth_account.account.Account.set_key_backend`. + This method will be removed in v0.5 + """ + warnings.warn( + "setKeyBackend is deprecated in favor of set_key_backend", + category=DeprecationWarning, + ) + self.set_key_backend(backend) + + def set_key_backend(self, backend): + """ + Change the backend used by the underlying eth-keys library. 
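+
+        ``KeyAPI`` typically accepts a backend instance, a backend class, or a
+        dotted-path string naming one; passing ``None`` selects eth-keys' default
+        backend (mirroring the ``KeyAPI(backend)`` call in the body below).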
+
+        *(The default is fine for most users)*
+
+        :param backend: any backend that works in ``eth_keys.KeyAPI(backend)``
+
+        """
+        self._keys = KeyAPI(backend)
+
+    @combomethod
+    def sign_message(self, signable_message: SignableMessage, private_key):
+        r"""
+        Sign the provided message.
+
+        This API supports any messaging format that will encode to EIP-191_ messages.
+
+        If you would like historical compatibility with
+        :meth:`w3.eth.sign()`
+        you can use :meth:`~eth_account.messages.encode_defunct`.
+
+        Other options are the "validator", or "structured data" standards. (Both of these
+        are in *DRAFT* status currently, so be aware that the implementation is not
+        guaranteed to be stable). You can import all supported message encoders in
+        ``eth_account.messages``.
+
+        :param signable_message: the encoded message for signing
+        :param private_key: the key to sign the message with
+        :type private_key: hex str, bytes, int or :class:`eth_keys.datatypes.PrivateKey`
+        :returns: Various details about the signature - most importantly the fields: v, r, and s
+        :rtype: ~eth_account.datastructures.SignedMessage
+
+        .. doctest:: python
+
+            >>> msg = "I♥SF"
+            >>> from eth_account.messages import encode_defunct
+            >>> msghash = encode_defunct(text=msg)
+            >>> msghash
+            SignableMessage(version=b'E',
+             header=b'thereum Signed Message:\n6',
+             body=b'I\xe2\x99\xa5SF')
+            >>> # If you're curious about the internal fields of SignableMessage, take a look at EIP-191, linked above  # noqa: E501
+            >>> key = "0xb25c7db31feed9122727bf0939dc769a96564b2de4c4726d035b36ecf1e5b364"
+            >>> Account.sign_message(msghash, key)
+            SignedMessage(messageHash=HexBytes('0x1476abb745d423bf09273f1afd887d951181d25adc66c4834a70491911b7f750'),
+             r=104389933075820307925104709181714897380569894203213074526835978196648170704563,
+             s=28205917190874851400050446352651915501321657673772411533993420917949420456142,
+             v=28,
+             signature=HexBytes('0xe6ca9bba58c88611fad66a6ce8f996908195593807c4b38bd528d2cff09d4eb33e5bfbbf4d3e39b1a2fd816a7680c19ebebaf3a141b239934ad43cb33fcec8ce1c'))
+
+
+
+        .. _EIP-191: https://eips.ethereum.org/EIPS/eip-191
+        """
+        message_hash = _hash_eip191_message(signable_message)
+        return self._sign_hash(message_hash, private_key)
+
+    @combomethod
+    def signHash(self, message_hash, private_key):
+        """
+        Sign the provided hash.
+
+        .. WARNING:: *Never* sign a hash that you didn't generate,
+            it can be an arbitrary transaction. For example, it might
+            send all of your account's ether to an attacker.
+            Instead, prefer :meth:`~eth_account.account.Account.sign_message`,
+            which cannot accidentally sign a transaction.
+
+        .. CAUTION:: Deprecated for :meth:`~eth_account.account.Account.sign_message`.
+            This method will be removed in v0.6
+
+        :param message_hash: the 32-byte message hash to be signed
+        :type message_hash: hex str, bytes or int
+        :param private_key: the key to sign the message with
+        :type private_key: hex str, bytes, int or :class:`eth_keys.datatypes.PrivateKey`
+        :returns: Various details about the signature - most
+            importantly the fields: v, r, and s
+        :rtype: ~eth_account.datastructures.SignedMessage
+        """
+        warnings.warn(
+            "signHash is deprecated in favor of sign_message",
+            category=DeprecationWarning,
+        )
+        return self._sign_hash(message_hash, private_key)
+
+    @combomethod
+    def _sign_hash(self, message_hash, private_key):
+        msg_hash_bytes = HexBytes(message_hash)
+        if len(msg_hash_bytes) != 32:
+            raise ValueError("The message hash must be exactly 32-bytes")
+
+        key = self._parsePrivateKey(private_key)
+
+        (v, r, s, eth_signature_bytes) = sign_message_hash(key, msg_hash_bytes)
+        return SignedMessage(
+            messageHash=msg_hash_bytes,
+            r=r,
+            s=s,
+            v=v,
+            signature=HexBytes(eth_signature_bytes),
+        )
+
+    @combomethod
+    def signTransaction(self, transaction_dict, private_key):
+        """
+        .. CAUTION:: Deprecated for :meth:`~eth_account.account.Account.sign_transaction`.
+            This method will be removed in v0.5
+        """
+        warnings.warn(
+            "signTransaction is deprecated in favor of sign_transaction",
+            category=DeprecationWarning,
+        )
+        return self.sign_transaction(transaction_dict, private_key)
+
+    @combomethod
+    def sign_transaction(self, transaction_dict, private_key):
+        """
+        Sign a transaction using a local private key.
+
+        It produces signature details and the hex-encoded transaction suitable for broadcast using
+        :meth:`w3.eth.sendRawTransaction()`.
+
+        To create the transaction dict that calls a contract, use the contract object:
+        ``my_contract.functions.my_function().buildTransaction()``
+
+        Note: For non-legacy (typed) transactions, if the transaction type is not explicitly
+        provided, it may be determined from the transaction parameters of a well-formed
+        transaction. See below for examples on how to sign with different transaction types.
+
+        :param dict transaction_dict: the transaction with available keys, depending on the type of
+            transaction: nonce, chainId, to, data, value, gas, gasPrice, type, accessList,
+            maxFeePerGas, and maxPriorityFeePerGas
+        :param private_key: the private key to sign the data with
+        :type private_key: hex str, bytes, int or :class:`eth_keys.datatypes.PrivateKey`
+        :returns: Various details about the signature - most
+            importantly the fields: v, r, and s
+        :rtype: AttributeDict
+
+        ..
code-block:: python + + >>> # EIP-1559 dynamic fee transaction (more efficient and preferred over legacy txn) + >>> dynamic_fee_transaction = { + "type": 2, # optional - can be implicitly determined based on max fee params # noqa: E501 + "gas": 100000, + "maxFeePerGas": 2000000000, + "maxPriorityFeePerGas": 2000000000, + "data": "0x616263646566", + "nonce": 34, + "to": "0x09616C3d61b3331fc4109a9E41a8BDB7d9776609", + "value": "0x5af3107a4000", + "accessList": ( # optional + { + "address": "0x0000000000000000000000000000000000000001", + "storageKeys": ( + "0x0100000000000000000000000000000000000000000000000000000000000000", # noqa: E501 + ) + }, + ), + "chainId": 1900, + } + >>> key = '0x4c0883a69102937d6231471b5dbb6204fe5129617082792ae468d01a3f362318' + >>> signed = Account.sign_transaction(dynamic_fee_transaction, key) + {'hash': HexBytes('0x126431f2a7fda003aada7c2ce52b0ce3cbdbb1896230d3333b9eea24f42d15b0'), + 'r': 110093478023675319011132687961420618950720745285952062287904334878381994888509, + 'rawTransaction': HexBytes('0x02f8b282076c2284773594008477359400830186a09409616c3d61b3331fc4109a9e41a8bdb7d9776609865af3107a400086616263646566f838f7940000000000000000000000000000000000000001e1a0010000000000000000000000000000000000000000000000000000000000000080a0f366b34a5c206859b9778b4c909207e53443cca9e0b82e0b94bc4b47e6434d3da04a731eda413a944d4ea2d2236671e586e57388d0e9d40db53044ae4089f2aec8'), # noqa: E501 + 's': 33674551144139401179914073499472892825822542092106065756005379322302694600392, + 'v': 0} + >>> w3.eth.sendRawTransaction(signed.rawTransaction) + + .. code-block:: python + + >>> # legacy transaction (less efficient than EIP-1559 dynamic fee txn) + >>> legacy_transaction = { + # Note that the address must be in checksum format or native bytes: + 'to': '0xF0109fC8DF283027b6285cc889F5aA624EaC1F55', + 'value': 1000000000, + 'gas': 2000000, + 'gasPrice': 234567897654321, + 'nonce': 0, + 'chainId': 1 + } + >>> key = '0x4c0883a69102937d6231471b5dbb6204fe5129617082792ae468d01a3f362318' + >>> signed = Account.sign_transaction(legacy_transaction, key) + {'hash': HexBytes('0x6893a6ee8df79b0f5d64a180cd1ef35d030f3e296a5361cf04d02ce720d32ec5'), + 'r': 4487286261793418179817841024889747115779324305375823110249149479905075174044, + 'rawTransaction': HexBytes('0xf86a8086d55698372431831e848094f0109fc8df283027b6285cc889f5aa624eac1f55843b9aca008025a009ebb6ca057a0535d6186462bc0b465b561c94a295bdb0621fc19208ab149a9ca0440ffd775ce91a833ab410777204d5341a6f9fa91216a6f3ee2c051fea6a0428'), # noqa: E501 + 's': 30785525769477805655994251009256770582792548537338581640010273753578382951464, + 'v': 37} + >>> w3.eth.sendRawTransaction(signed.rawTransaction) + + .. 
code-block:: python + + >>> access_list_transaction = { + "type": 1, # optional - can be implicitly determined based on 'accessList' and 'gasPrice' params # noqa: E501 + "gas": 100000, + "gasPrice": 1000000000, + "data": "0x616263646566", + "nonce": 34, + "to": "0x09616C3d61b3331fc4109a9E41a8BDB7d9776609", + "value": "0x5af3107a4000", + "accessList": ( + { + "address": "0x0000000000000000000000000000000000000001", + "storageKeys": ( + "0x0100000000000000000000000000000000000000000000000000000000000000", # noqa: E501 + ) + }, + ), + "chainId": 1900, + } + >>> key = '0x4c0883a69102937d6231471b5dbb6204fe5129617082792ae468d01a3f362318' + >>> signed = Account.sign_transaction(access_list_transaction, key) + {'hash': HexBytes('0x2864ca20a74ca5e044067ad4139a22ff5a0853434f5f1dc00108f24ef5f1f783'), + 'r': 105940705063391628472351883894091935317142890114440570831409400676736873197702, + 'rawTransaction': HexBytes('0x01f8ad82076c22843b9aca00830186a09409616c3d61b3331fc4109a9e41a8bdb7d9776609865af3107a400086616263646566f838f7940000000000000000000000000000000000000001e1a0010000000000000000000000000000000000000000000000000000000000000080a0ea38506c4afe4bb402e030877fbe1011fa1da47aabcf215db8da8fee5d3af086a051e9af653b8eb98e74e894a766cf88904dbdb10b0bc1fbd12f18f661fa2797a4'), # noqa: E501 + 's': 37050226636175381535892585331727388340134760347943439553552848647212419749796, + 'v': 0} + >>> w3.eth.sendRawTransaction(signed.rawTransaction) + """ + if not isinstance(transaction_dict, Mapping): + raise TypeError("transaction_dict must be dict-like, got %r" % transaction_dict) + + account = self.from_key(private_key) + + # allow from field, *only* if it matches the private key + if 'from' in transaction_dict: + if transaction_dict['from'] == account.address: + sanitized_transaction = dissoc(transaction_dict, 'from') + else: + raise TypeError("from field must match key's %s, but it was %s" % ( + account.address, + transaction_dict['from'], + )) + else: + sanitized_transaction = transaction_dict + + # sign transaction + ( + v, + r, + s, + encoded_transaction, + ) = sign_transaction_dict(account._key_obj, sanitized_transaction) + transaction_hash = keccak(encoded_transaction) + + return SignedTransaction( + rawTransaction=HexBytes(encoded_transaction), + hash=HexBytes(transaction_hash), + r=r, + s=s, + v=v, + ) + + @combomethod + def _parsePrivateKey(self, key): + """ + Generate a :class:`eth_keys.datatypes.PrivateKey` from the provided key. + + If the key is already of type :class:`eth_keys.datatypes.PrivateKey`, return the key. + + :param key: the private key from which a :class:`eth_keys.datatypes.PrivateKey` + will be generated + :type key: hex str, bytes, int or :class:`eth_keys.datatypes.PrivateKey` + :returns: the provided key represented as a :class:`eth_keys.datatypes.PrivateKey` + """ + if isinstance(key, self._keys.PrivateKey): + return key + + try: + return self._keys.PrivateKey(HexBytes(key)) + except ValidationError as original_exception: + raise ValueError( + "The private key must be exactly 32 bytes long, instead of " + "%d bytes." 
% len(key) + ) from original_exception diff --git a/env/lib/python3.8/site-packages/eth_account/datastructures.py b/env/lib/python3.8/site-packages/eth_account/datastructures.py new file mode 100644 index 00000000..14d6f9de --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/datastructures.py @@ -0,0 +1,36 @@ +from typing import ( + NamedTuple, +) + +from hexbytes import ( + HexBytes, +) + + +def __getitem__(self, index): + try: + return tuple.__getitem__(self, index) + except TypeError: + return getattr(self, index) + + +class SignedTransaction(NamedTuple): + rawTransaction: HexBytes + hash: HexBytes + r: int + s: int + v: int + + def __getitem__(self, index): + return __getitem__(self, index) + + +class SignedMessage(NamedTuple): + messageHash: HexBytes + r: int + s: int + v: int + signature: HexBytes + + def __getitem__(self, index): + return __getitem__(self, index) diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/__init__.py b/env/lib/python3.8/site-packages/eth_account/hdaccount/__init__.py new file mode 100644 index 00000000..94af87c3 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/__init__.py @@ -0,0 +1,30 @@ +from eth_utils import ( + ValidationError, +) + +from .deterministic import ( + HDPath, +) +from .mnemonic import ( + Mnemonic, +) + +ETHEREUM_DEFAULT_PATH = "m/44'/60'/0'/0/0" + + +def generate_mnemonic(num_words: int, lang: str) -> str: + return Mnemonic(lang).generate(num_words) + + +def seed_from_mnemonic(words: str, passphrase: str) -> bytes: + lang = Mnemonic.detect_language(words) + expanded_words = Mnemonic(lang).expand(words) + if not Mnemonic(lang).is_mnemonic_valid(expanded_words): + raise ValidationError( + f"Provided words: '{expanded_words}', are not a valid BIP39 mnemonic phrase!" + ) + return Mnemonic.to_seed(expanded_words, passphrase) + + +def key_from_seed(seed: bytes, account_path: str): + return HDPath(account_path).derive(seed) diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/_utils.py b/env/lib/python3.8/site-packages/eth_account/hdaccount/_utils.py new file mode 100644 index 00000000..7179078c --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/_utils.py @@ -0,0 +1,59 @@ +import hashlib +import hmac +from typing import ( + Union, +) +import unicodedata + +from eth_keys import ( + keys, +) +from eth_utils import ( + ValidationError, +) +from hexbytes import ( + HexBytes, +) + +PBKDF2_ROUNDS = 2048 +SECP256K1_N = int("FFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFE_BAAEDCE6_AF48A03B_BFD25E8C_D0364141", 16) + + +def normalize_string(txt: Union[str, bytes]) -> str: + if isinstance(txt, bytes): + utxt = txt.decode("utf8") + elif isinstance(txt, str): + utxt = txt + else: + raise ValidationError("String value expected") + + return unicodedata.normalize("NFKD", utxt) + + +def sha256(data: bytes) -> bytes: + return hashlib.sha256(data).digest() + + +def hmac_sha512(chain_code: bytes, data: bytes) -> bytes: + """ + As specified by RFC4231 - https://tools.ietf.org/html/rfc4231 . + """ + return hmac.new(chain_code, data, hashlib.sha512).digest() + + +def pbkdf2_hmac_sha512(passcode: str, salt: str) -> bytes: + return hashlib.pbkdf2_hmac( + "sha512", + passcode.encode("utf-8"), + salt.encode("utf-8"), + PBKDF2_ROUNDS, + ) + + +def ec_point(pkey: bytes) -> bytes: + """ + Compute `point(p)`, where `point` is ecdsa point multiplication. 
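+
+    Here ``pkey`` is the 32-byte private key; the result is the 33-byte SEC1
+    compressed public key (``derive_child_key`` relies on that length for soft
+    derivation).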
+
+    Note: Result is ecdsa public key serialized to compressed form
+    """
+    return keys.PrivateKey(HexBytes(pkey)).public_key.to_compressed_bytes()  # type: ignore
diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/deterministic.py b/env/lib/python3.8/site-packages/eth_account/hdaccount/deterministic.py
new file mode 100644
index 00000000..a427d897
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/deterministic.py
@@ -0,0 +1,246 @@
+"""
+Generate Hierarchical Deterministic Wallets (HDWallet).
+
+Partially implements the BIP-0032, BIP-0043, and BIP-0044 specifications:
+BIP-0032: https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki
+BIP-0043: https://github.com/bitcoin/bips/blob/master/bip-0043.mediawiki
+BIP-0044: https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki
+
+Skips serialization and public key derivation as unnecessary for this library's purposes.
+
+Notes
+-----
+
+* Integers are modulo the order of the curve (referred to as n).
+* Addition (+) of two coordinate pairs is defined as application of the EC group operation.
+* Concatenation (||) is the operation of appending one byte sequence onto another.
+
+
+Definitions
+-----------
+
+* point(p): returns the coordinate pair resulting from EC point multiplication
+  (repeated application of the EC group operation) of the secp256k1 base point
+  with the integer p.
+* ser_32(i): serialize a 32-bit unsigned integer i as a 4-byte sequence,
+  most significant byte first.
+* ser_256(p): serializes the integer p as a 32-byte sequence, most significant byte first.
+* ser_P(P): serializes the coordinate pair P = (x,y) as a byte sequence using SEC1's compressed
+  form: (0x02 or 0x03) || ser_256(x), where the header byte depends on the parity of the
+  omitted y coordinate.
+* parse_256(p): interprets a 32-byte sequence as a 256-bit number, most significant byte first.
+
+"""
+# Additional notes:
+# - This module currently only implements private parent key => private child key CKD function,
+#   as it is not necessary to the HD key derivation functions used in this library to implement
+#   the other functions yet (as this module is only used for derivation of private keys). That
+#   could change, but wasn't deemed necessary at the time this module was introduced.
+# - Unlike other libraries, this library does not use Bitcoin key serialization, because it is
+#   not intended to be ultimately used for Bitcoin key derivations. This presents a simplified
+#   API, and no expectation is given for `xpub/xpriv` key derivation.
+from typing import (
+    Tuple,
+    Type,
+    Union,
+)
+
+from eth_utils import (
+    ValidationError,
+    to_int,
+)
+
+from ._utils import (
+    SECP256K1_N,
+    ec_point,
+    hmac_sha512,
+)
+
+BASE_NODE_IDENTIFIERS = {"m", "M"}
+HARD_NODE_SUFFIXES = {"'", "H"}
+
+
+class Node(int):
+    """
+    A base node class.
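+
+    Subclasses map a path index to its BIP32 child number via ``OFFSET``:
+    ``SoftNode`` stores the index as-is, while ``HardNode`` adds 2**31 (the
+    BIP32 hardening constant), as defined below.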
+ """ + + TAG = "" # No tag + OFFSET = 0x0 # No offset + index: int + + def __new__(cls, index): + if 0 > index or index > 2**31: + raise ValidationError(f"{cls} cannot be initialized with value {index}") + + obj = int.__new__(cls, index + cls.OFFSET) + obj.index = index + return obj + + def __repr__(self): + return f"{self.__class__.__name__}({self.index})" + + def __add__(self, other: int): + return self.__class__(self.index + other) + + def serialize(self) -> bytes: + return self.to_bytes(4, byteorder="big") + + def encode(self) -> str: + return str(self.index) + self.TAG + + @staticmethod + def decode(node: str) -> Union["SoftNode", "HardNode"]: + if len(node) < 1: + raise ValidationError("Cannot use empty string") + + node_class: Union[Type["SoftNode"], Type["HardNode"]] + if node[-1] in HARD_NODE_SUFFIXES: + node_class = HardNode + node_index = node[:-1] + else: + node_class = SoftNode + node_index = node + + try: + node_value = int(node_index) + except ValueError as err: + raise ValidationError(f"'{node_index}' is not a valid node index.") from err + + return node_class(node_value) + + +class SoftNode(Node): + """ + Soft node (unhardened), where value = index . + """ + + TAG = "" # No tag + OFFSET = 0x0 # No offset + + +class HardNode(Node): + """ + Hard node, where value = index + BIP32_HARDENED_CONSTANT . + """ + + TAG = "H" # "H" (or "'") means hard node (but use "H" for clarity) + OFFSET = 0x80000000 # 2**31, BIP32 "Hardening constant" + + +def derive_child_key( + parent_key: bytes, + parent_chain_code: bytes, + node: Node, +) -> Tuple[bytes, bytes]: + """ + Compute a derivitive key from the parent key. + + From BIP32: + + The function CKDpriv((k_par, c_par), i) → (k_i, c_i) computes a child extended + private key from the parent extended private key: + + 1. Check whether the child is a hardened key (i ≥ 2**31). If the child is a hardened key, + let I = HMAC-SHA512(Key = c_par, Data = 0x00 || ser_256(k_par) || ser_32(i)). + (Note: The 0x00 pads the private key to make it 33 bytes long.) + If it is not a hardened key, then + let I = HMAC-SHA512(Key = c_par, Data = ser_P(point(k_par)) || ser_32(i)). + 2. Split I into two 32-byte sequences, I_L and I_R. + 3. The returned child key k_i is parse_256(I_L) + k_par (mod n). + 4. The returned chain code c_i is I_R. + 5. In case parse_256(I_L) ≥ n or k_i = 0, the resulting key is invalid, + and one should proceed with the next value for i. + (Note: this has probability lower than 1 in 2**127.) 
+
+    """
+    assert len(parent_chain_code) == 32
+    if isinstance(node, HardNode):
+        # NOTE Empty byte is added to align to SoftNode case
+        assert len(parent_key) == 32  # Should be guaranteed here in return statement
+        child = hmac_sha512(parent_chain_code, b"\x00" + parent_key + node.serialize())
+
+    elif isinstance(node, SoftNode):
+        assert len(ec_point(parent_key)) == 33  # Should be guaranteed by Account class
+        child = hmac_sha512(parent_chain_code, ec_point(parent_key) + node.serialize())
+
+    else:
+        raise ValidationError(f"Cannot process: {node}")
+
+    assert len(child) == 64
+
+    if to_int(child[:32]) >= SECP256K1_N:
+        # Invalid key, compute using next node (< 2**-127 probability)
+        return derive_child_key(parent_key, parent_chain_code, node + 1)
+
+    child_key = (to_int(child[:32]) + to_int(parent_key)) % SECP256K1_N
+    if child_key == 0:
+        # Invalid key, compute using next node (< 2**-127 probability)
+        return derive_child_key(parent_key, parent_chain_code, node + 1)
+
+    child_key_bytes = child_key.to_bytes(32, byteorder="big")
+    child_chain_code = child[32:]
+    return child_key_bytes, child_chain_code
+
+
+class HDPath:
+    def __init__(self, path: str):
+        """
+        Create a new Hierarchical Deterministic path by decoding the given path.
+
+        Initializes an HD account generator using the
+        given path string (from BIP-0032). The path is decoded into nodes of the
+        derivation key tree, which define a pathway from a given master seed to
+        the child key that is used for a given purpose. Please also reference
+        BIP-0043 (which defines the first level as the "purpose" field of an HD path)
+        and BIP-0044 (which defines a commonly-used, 5-level scheme for BIP32 paths)
+        for examples of how this object may be used. Please note however that this
+        object makes no such assumptions about the use of BIP43 or BIP44, or later BIPs.
+
+        :param path: BIP32-compatible derivation path
+        :type path: str as "m/idx_0/.../idx_n" or "M/idx_0/.../idx_n"
+            where idx_* is either an integer value (soft node)
+            or an integer value followed by either the "'" char
+            or the "H" char (hardened node)
+        """
+        if len(path) < 1:
+            raise ValidationError("Cannot parse path from empty string.")
+
+        nodes = path.split("/")  # Should at least make 1 entry in resulting list
+        if nodes[0] not in BASE_NODE_IDENTIFIERS:
+            raise ValidationError(f'Path is not valid: "{path}". Must start with "m"')
+
+        decoded_path = []
+        for idx, node in enumerate(nodes[1:]):  # We don't need the root node 'm'
+            try:
+                decoded_path.append(Node.decode(node))
+            except ValidationError as err:
+                raise ValidationError(
+                    f'Path "{path}" is not valid. Issue with node "{node}": {err}'
+                ) from err
+
+        self._path = decoded_path
+
+    def __repr__(self) -> str:
+        return f'{self.__class__.__name__}(path="{self.encode()}")'
+
+    def encode(self) -> str:
+        """
+        Encodes this class to a string (reversing the decoding in the constructor).
+        """
+        encoded_path = ("m",) + tuple(node.encode() for node in self._path)
+        return "/".join(encoded_path)
+
+    def derive(self, seed: bytes) -> bytes:
+        """
+        Perform the BIP32 Hierarchical Derivation recursive loop with the given Path.
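+
+        For example, ``HDPath("m/44'/60'/0'/0/0").derive(seed)`` yields the private
+        key bytes for the default Ethereum account path (this is how ``key_from_seed``
+        in ``__init__.py`` uses this class).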
+ + Note that the key and chain_code are initialized with the master seed, and that + the key that is returned is the child key at the end of derivation process (and + the chain code is discarded) + """ + master_node = hmac_sha512(b"Bitcoin seed", seed) + key = master_node[:32] + chain_code = master_node[32:] + for node in self._path: + key, chain_code = derive_child_key(key, chain_code, node) + return key diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/mnemonic.py b/env/lib/python3.8/site-packages/eth_account/hdaccount/mnemonic.py new file mode 100644 index 00000000..fc4c8c0f --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/mnemonic.py @@ -0,0 +1,211 @@ +# Originally from: https://github.com/trezor/python-mnemonic +# +# Copyright (c) 2013 Pavol Rusnak +# Copyright (c) 2017 mruddy +# +# Permission is hereby granted, free of charge, to any person obtaining a copy of +# this software and associated documentation files (the "Software"), to deal in +# the Software without restriction, including without limitation the rights to +# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies +# of the Software, and to permit persons to whom the Software is furnished to do +# so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +# + +import os +from pathlib import ( + Path, +) +import secrets +from typing import ( + Dict, + List, +) + +from bitarray import ( + bitarray, +) +from bitarray.util import ( + ba2int, + int2ba, +) +from eth_utils import ( + ValidationError, +) + +from ._utils import ( + normalize_string, + pbkdf2_hmac_sha512, + sha256, +) + +VALID_ENTROPY_SIZES = [16, 20, 24, 28, 32] +VALID_WORD_COUNTS = [12, 15, 18, 21, 24] +WORDLIST_DIR = Path(__file__).parent / "wordlist" +WORDLIST_LEN = 2048 + +_cached_wordlists: Dict[str, List[str]] = dict() + + +def get_wordlist(language): + if language in _cached_wordlists.keys(): + return _cached_wordlists[language] + with open(WORDLIST_DIR / f"{language}.txt", "r", encoding="utf-8") as f: + wordlist = [w.strip() for w in f.readlines()] + if len(wordlist) != WORDLIST_LEN: + raise ValidationError( + f"Wordlist should contain {WORDLIST_LEN} words, " + f"but it contains {len(wordlist)} words." 
+        )
+    _cached_wordlists[language] = wordlist
+    return wordlist
+
+
+class Mnemonic:
+    def __init__(self, raw_language="english"):
+        language = raw_language.lower().replace(' ', '_')
+        languages = Mnemonic.list_languages()
+        if language not in languages:
+            raise ValidationError(
+                f'Invalid language choice "{language}", must be one of {languages}'
+            )
+        self.language = language
+        self.wordlist = get_wordlist(language)
+
+    @staticmethod
+    def list_languages():
+        return sorted(Path(f).stem for f in WORDLIST_DIR.rglob("*.txt"))
+
+    @classmethod
+    def detect_language(cls, raw_mnemonic):
+        mnemonic = normalize_string(raw_mnemonic)
+
+        words = set(mnemonic.split(" "))
+        matching_languages = {
+            lang
+            for lang in Mnemonic.list_languages()
+            if len(words.intersection(cls(lang).wordlist)) == len(words)
+        }
+
+        # No language had all words match it, so the language can't be fully determined
+        if len(matching_languages) < 1:
+            raise ValidationError(f"Language not detected for word(s): {raw_mnemonic}")
+
+        # If both chinese simplified and chinese traditional match (because one is a subset of the
+        # other) then return simplified. This doesn't hold for other languages.
+        if len(matching_languages) == 2 and all("chinese" in lang for lang in matching_languages):
+            return "chinese_simplified"
+
+        # Because certain wordlists share some similar words, if we detect multiple languages
+        # that the provided mnemonic word(s) could be valid in, we have to throw
+        if len(matching_languages) > 1:
+            raise ValidationError(f"Word(s) are valid in multiple languages: {raw_mnemonic}")
+
+        (language,) = matching_languages
+        return language
+
+    def generate(self, num_words=12) -> str:
+        if num_words not in VALID_WORD_COUNTS:
+            raise ValidationError(
+                f"Invalid choice for number of words: {num_words}, should be one of "
+                f"{VALID_WORD_COUNTS}"
+            )
+        return self.to_mnemonic(os.urandom(4 * num_words // 3))  # 4/3 bytes per word
+
+    def to_mnemonic(self, entropy) -> str:
+        entropy_size = len(entropy)
+        if entropy_size not in VALID_ENTROPY_SIZES:
+            raise ValidationError(
+                f"Invalid data length {len(entropy)}, should be one of "
+                f"{VALID_ENTROPY_SIZES}"
+            )
+
+        bits = bitarray()
+        bits.frombytes(entropy)
+
+        checksum = bitarray()
+        checksum.frombytes(sha256(entropy))
+
+        # Append one checksum bit per 32 bits of entropy, making the total length a
+        # multiple of 11 (each 11-bit group indexes one of the 2**11 = 2048 words)
+        bits.extend(checksum[:entropy_size // 4])
+        indices = tuple(ba2int(bits[i * 11: (i + 1) * 11]) for i in range(len(bits) // 11))
+        words = tuple(self.wordlist[idx] for idx in indices)
+
+        if self.language == "japanese":  # Japanese must be joined by ideographic space.
+            phrase = u"\u3000".join(words)
+        else:
+            phrase = " ".join(words)
+        return phrase
+
+    def is_mnemonic_valid(self, mnemonic):
+        words = normalize_string(mnemonic).split(" ")
+        num_words = len(words)
+
+        if num_words not in VALID_WORD_COUNTS:
+            return False
+
+        try:
+            indices = tuple(self.wordlist.index(w) for w in words)
+        except ValueError:
+            return False
+
+        encoded_seed = bitarray()
+        for idx in indices:
+            # Build bitarray from tightly packing indices (which are 11-bit integers)
+            encoded_seed.extend(int2ba(idx, length=11))
+
+        entropy_size = 4 * num_words // 3
+
+        # Checksum the raw entropy bits
+        checksum = bitarray()
+        checksum.frombytes(sha256(encoded_seed[:entropy_size * 8].tobytes()))
+        computed_checksum = checksum[:len(encoded_seed) - entropy_size * 8].tobytes()
+
+        # Extract the stored checksum bits
+        stored_checksum = encoded_seed[entropy_size * 8:].tobytes()
+
+        # Check that the stored matches the relevant slice of the actual checksum
+        # NOTE: Use secrets.compare_digest for protection against timing attacks
+        return secrets.compare_digest(stored_checksum, computed_checksum)
+
+    def expand_word(self, prefix):
+        if prefix in self.wordlist:
+            return prefix
+        else:
+            matches = [word for word in self.wordlist if word.startswith(prefix)]
+            if len(matches) == 1:  # matched exactly one word in the wordlist
+                return matches[0]
+            else:
+                # exact match not found.
+                # this is not a validation routine, just return the input
+                return prefix
+
+    def expand(self, mnemonic):
+        return " ".join(map(self.expand_word, mnemonic.split(" ")))
+
+    @classmethod
+    def to_seed(cls, checked_mnemonic: str, passphrase: str = "") -> bytes:
+        """
+        :param str checked_mnemonic: Must be a correct, fully-expanded BIP39 seed phrase.
+        :param str passphrase: Encryption passphrase used to secure the mnemonic.
+        :returns bytes: 64 bytes of raw seed material from PRNG
+        """
+        mnemonic = normalize_string(checked_mnemonic)
+        # NOTE: This domain separator ("mnemonic") is added per BIP39 spec to the passphrase
+        # https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki#from-mnemonic-to-seed
+        salt = "mnemonic" + normalize_string(passphrase)
+        # From BIP39:
+        # To create a binary seed from the mnemonic, we use the PBKDF2 function with a
+        # mnemonic sentence (in UTF-8 NFKD) used as the password and the string "mnemonic"
+        # and passphrase (again in UTF-8 NFKD) used as the salt.
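+        # pbkdf2_hmac_sha512 (see _utils.py) runs PBKDF2_ROUNDS (2048) iterations of
+        # HMAC-SHA512 and returns a 64-byte digest, so the [:64] slice below keeps
+        # the whole seed.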
+ stretched = pbkdf2_hmac_sha512(mnemonic, salt) + return stretched[:64] diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/chinese_simplified.txt b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/chinese_simplified.txt new file mode 100644 index 00000000..b90f1ed8 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/chinese_simplified.txt @@ -0,0 +1,2048 @@ +的 +一 +是 +在 +不 +了 +有 +和 +人 +这 +中 +大 +为 +上 +个 +国 +我 +以 +要 +他 +时 +来 +用 +们 +生 +到 +作 +地 +于 +出 +就 +分 +对 +成 +会 +可 +主 +发 +年 +动 +同 +工 +也 +能 +下 +过 +子 +说 +产 +种 +面 +而 +方 +后 +多 +定 +行 +学 +法 +所 +民 +得 +经 +十 +三 +之 +进 +着 +等 +部 +度 +家 +电 +力 +里 +如 +水 +化 +高 +自 +二 +理 +起 +小 +物 +现 +实 +加 +量 +都 +两 +体 +制 +机 +当 +使 +点 +从 +业 +本 +去 +把 +性 +好 +应 +开 +它 +合 +还 +因 +由 +其 +些 +然 +前 +外 +天 +政 +四 +日 +那 +社 +义 +事 +平 +形 +相 +全 +表 +间 +样 +与 +关 +各 +重 +新 +线 +内 +数 +正 +心 +反 +你 +明 +看 +原 +又 +么 +利 +比 +或 +但 +质 +气 +第 +向 +道 +命 +此 +变 +条 +只 +没 +结 +解 +问 +意 +建 +月 +公 +无 +系 +军 +很 +情 +者 +最 +立 +代 +想 +已 +通 +并 +提 +直 +题 +党 +程 +展 +五 +果 +料 +象 +员 +革 +位 +入 +常 +文 +总 +次 +品 +式 +活 +设 +及 +管 +特 +件 +长 +求 +老 +头 +基 +资 +边 +流 +路 +级 +少 +图 +山 +统 +接 +知 +较 +将 +组 +见 +计 +别 +她 +手 +角 +期 +根 +论 +运 +农 +指 +几 +九 +区 +强 +放 +决 +西 +被 +干 +做 +必 +战 +先 +回 +则 +任 +取 +据 +处 +队 +南 +给 +色 +光 +门 +即 +保 +治 +北 +造 +百 +规 +热 +领 +七 +海 +口 +东 +导 +器 +压 +志 +世 +金 +增 +争 +济 +阶 +油 +思 +术 +极 +交 +受 +联 +什 +认 +六 +共 +权 +收 +证 +改 +清 +美 +再 +采 +转 +更 +单 +风 +切 +打 +白 +教 +速 +花 +带 +安 +场 +身 +车 +例 +真 +务 +具 +万 +每 +目 +至 +达 +走 +积 +示 +议 +声 +报 +斗 +完 +类 +八 +离 +华 +名 +确 +才 +科 +张 +信 +马 +节 +话 +米 +整 +空 +元 +况 +今 +集 +温 +传 +土 +许 +步 +群 +广 +石 +记 +需 +段 +研 +界 +拉 +林 +律 +叫 +且 +究 +观 +越 +织 +装 +影 +算 +低 +持 +音 +众 +书 +布 +复 +容 +儿 +须 +际 +商 +非 +验 +连 +断 +深 +难 +近 +矿 +千 +周 +委 +素 +技 +备 +半 +办 +青 +省 +列 +习 +响 +约 +支 +般 +史 +感 +劳 +便 +团 +往 +酸 +历 +市 +克 +何 +除 +消 +构 +府 +称 +太 +准 +精 +值 +号 +率 +族 +维 +划 +选 +标 +写 +存 +候 +毛 +亲 +快 +效 +斯 +院 +查 +江 +型 +眼 +王 +按 +格 +养 +易 +置 +派 +层 +片 +始 +却 +专 +状 +育 +厂 +京 +识 +适 +属 +圆 +包 +火 +住 +调 +满 +县 +局 +照 +参 +红 +细 +引 +听 +该 +铁 +价 +严 +首 +底 +液 +官 +德 +随 +病 +苏 +失 +尔 +死 +讲 +配 +女 +黄 +推 +显 +谈 +罪 +神 +艺 +呢 +席 +含 +企 +望 +密 +批 +营 +项 +防 +举 +球 +英 +氧 +势 +告 +李 +台 +落 +木 +帮 +轮 +破 +亚 +师 +围 +注 +远 +字 +材 +排 +供 +河 +态 +封 +另 +施 +减 +树 +溶 +怎 +止 +案 +言 +士 +均 +武 +固 +叶 +鱼 +波 +视 +仅 +费 +紧 +爱 +左 +章 +早 +朝 +害 +续 +轻 +服 +试 +食 +充 +兵 +源 +判 +护 +司 +足 +某 +练 +差 +致 +板 +田 +降 +黑 +犯 +负 +击 +范 +继 +兴 +似 +余 +坚 +曲 +输 +修 +故 +城 +夫 +够 +送 +笔 +船 +占 +右 +财 +吃 +富 +春 +职 +觉 +汉 +画 +功 +巴 +跟 +虽 +杂 +飞 +检 +吸 +助 +升 +阳 +互 +初 +创 +抗 +考 +投 +坏 +策 +古 +径 +换 +未 +跑 +留 +钢 +曾 +端 +责 +站 +简 +述 +钱 +副 +尽 +帝 +射 +草 +冲 +承 +独 +令 +限 +阿 +宣 +环 +双 +请 +超 +微 +让 +控 +州 +良 +轴 +找 +否 +纪 +益 +依 +优 +顶 +础 +载 +倒 +房 +突 +坐 +粉 +敌 +略 +客 +袁 +冷 +胜 +绝 +析 +块 +剂 +测 +丝 +协 +诉 +念 +陈 +仍 +罗 +盐 +友 +洋 +错 +苦 +夜 +刑 +移 +频 +逐 +靠 +混 +母 +短 +皮 +终 +聚 +汽 +村 +云 +哪 +既 +距 +卫 +停 +烈 +央 +察 +烧 +迅 +境 +若 +印 +洲 +刻 +括 +激 +孔 +搞 +甚 +室 +待 +核 +校 +散 +侵 +吧 +甲 +游 +久 +菜 +味 +旧 +模 +湖 +货 +损 +预 +阻 +毫 +普 +稳 +乙 +妈 +植 +息 +扩 +银 +语 +挥 +酒 +守 +拿 +序 +纸 +医 +缺 +雨 +吗 +针 +刘 +啊 +急 +唱 +误 +训 +愿 +审 +附 +获 +茶 +鲜 +粮 +斤 +孩 +脱 +硫 +肥 +善 +龙 +演 +父 +渐 +血 +欢 +械 +掌 +歌 +沙 +刚 +攻 +谓 +盾 +讨 +晚 +粒 +乱 +燃 +矛 +乎 +杀 +药 +宁 +鲁 +贵 +钟 +煤 +读 +班 +伯 +香 +介 +迫 +句 +丰 +培 +握 +兰 +担 +弦 +蛋 +沉 +假 +穿 +执 +答 +乐 +谁 +顺 +烟 +缩 +征 +脸 +喜 +松 +脚 +困 +异 +免 +背 +星 +福 +买 +染 +井 +概 +慢 +怕 +磁 +倍 +祖 +皇 +促 +静 +补 +评 +翻 +肉 +践 +尼 +衣 +宽 +扬 +棉 +希 +伤 +操 +垂 +秋 +宜 +氢 +套 +督 +振 +架 +亮 +末 +宪 +庆 +编 +牛 +触 +映 +雷 +销 +诗 +座 +居 +抓 +裂 +胞 +呼 +娘 +景 +威 +绿 +晶 +厚 +盟 +衡 +鸡 +孙 +延 +危 +胶 +屋 +乡 +临 +陆 +顾 +掉 +呀 +灯 +岁 +措 +束 +耐 +剧 +玉 +赵 +跳 +哥 +季 +课 +凯 +胡 +额 +款 +绍 +卷 +齐 +伟 +蒸 +殖 +永 +宗 +苗 +川 +炉 +岩 +弱 +零 +杨 +奏 +沿 +露 +杆 +探 +滑 +镇 +饭 +浓 +航 +怀 +赶 +库 +夺 +伊 +灵 +税 +途 +灭 +赛 +归 +召 +鼓 +播 +盘 +裁 +险 +康 +唯 +录 +菌 +纯 +借 +糖 +盖 +横 +符 +私 +努 +堂 +域 +枪 +润 +幅 +哈 +竟 +熟 +虫 +泽 +脑 +壤 +碳 +欧 +遍 +侧 +寨 +敢 +彻 +虑 +斜 +薄 +庭 +纳 
+弹 +饲 +伸 +折 +麦 +湿 +暗 +荷 +瓦 +塞 +床 +筑 +恶 +户 +访 +塔 +奇 +透 +梁 +刀 +旋 +迹 +卡 +氯 +遇 +份 +毒 +泥 +退 +洗 +摆 +灰 +彩 +卖 +耗 +夏 +择 +忙 +铜 +献 +硬 +予 +繁 +圈 +雪 +函 +亦 +抽 +篇 +阵 +阴 +丁 +尺 +追 +堆 +雄 +迎 +泛 +爸 +楼 +避 +谋 +吨 +野 +猪 +旗 +累 +偏 +典 +馆 +索 +秦 +脂 +潮 +爷 +豆 +忽 +托 +惊 +塑 +遗 +愈 +朱 +替 +纤 +粗 +倾 +尚 +痛 +楚 +谢 +奋 +购 +磨 +君 +池 +旁 +碎 +骨 +监 +捕 +弟 +暴 +割 +贯 +殊 +释 +词 +亡 +壁 +顿 +宝 +午 +尘 +闻 +揭 +炮 +残 +冬 +桥 +妇 +警 +综 +招 +吴 +付 +浮 +遭 +徐 +您 +摇 +谷 +赞 +箱 +隔 +订 +男 +吹 +园 +纷 +唐 +败 +宋 +玻 +巨 +耕 +坦 +荣 +闭 +湾 +键 +凡 +驻 +锅 +救 +恩 +剥 +凝 +碱 +齿 +截 +炼 +麻 +纺 +禁 +废 +盛 +版 +缓 +净 +睛 +昌 +婚 +涉 +筒 +嘴 +插 +岸 +朗 +庄 +街 +藏 +姑 +贸 +腐 +奴 +啦 +惯 +乘 +伙 +恢 +匀 +纱 +扎 +辩 +耳 +彪 +臣 +亿 +璃 +抵 +脉 +秀 +萨 +俄 +网 +舞 +店 +喷 +纵 +寸 +汗 +挂 +洪 +贺 +闪 +柬 +爆 +烯 +津 +稻 +墙 +软 +勇 +像 +滚 +厘 +蒙 +芳 +肯 +坡 +柱 +荡 +腿 +仪 +旅 +尾 +轧 +冰 +贡 +登 +黎 +削 +钻 +勒 +逃 +障 +氨 +郭 +峰 +币 +港 +伏 +轨 +亩 +毕 +擦 +莫 +刺 +浪 +秘 +援 +株 +健 +售 +股 +岛 +甘 +泡 +睡 +童 +铸 +汤 +阀 +休 +汇 +舍 +牧 +绕 +炸 +哲 +磷 +绩 +朋 +淡 +尖 +启 +陷 +柴 +呈 +徒 +颜 +泪 +稍 +忘 +泵 +蓝 +拖 +洞 +授 +镜 +辛 +壮 +锋 +贫 +虚 +弯 +摩 +泰 +幼 +廷 +尊 +窗 +纲 +弄 +隶 +疑 +氏 +宫 +姐 +震 +瑞 +怪 +尤 +琴 +循 +描 +膜 +违 +夹 +腰 +缘 +珠 +穷 +森 +枝 +竹 +沟 +催 +绳 +忆 +邦 +剩 +幸 +浆 +栏 +拥 +牙 +贮 +礼 +滤 +钠 +纹 +罢 +拍 +咱 +喊 +袖 +埃 +勤 +罚 +焦 +潜 +伍 +墨 +欲 +缝 +姓 +刊 +饱 +仿 +奖 +铝 +鬼 +丽 +跨 +默 +挖 +链 +扫 +喝 +袋 +炭 +污 +幕 +诸 +弧 +励 +梅 +奶 +洁 +灾 +舟 +鉴 +苯 +讼 +抱 +毁 +懂 +寒 +智 +埔 +寄 +届 +跃 +渡 +挑 +丹 +艰 +贝 +碰 +拔 +爹 +戴 +码 +梦 +芽 +熔 +赤 +渔 +哭 +敬 +颗 +奔 +铅 +仲 +虎 +稀 +妹 +乏 +珍 +申 +桌 +遵 +允 +隆 +螺 +仓 +魏 +锐 +晓 +氮 +兼 +隐 +碍 +赫 +拨 +忠 +肃 +缸 +牵 +抢 +博 +巧 +壳 +兄 +杜 +讯 +诚 +碧 +祥 +柯 +页 +巡 +矩 +悲 +灌 +龄 +伦 +票 +寻 +桂 +铺 +圣 +恐 +恰 +郑 +趣 +抬 +荒 +腾 +贴 +柔 +滴 +猛 +阔 +辆 +妻 +填 +撤 +储 +签 +闹 +扰 +紫 +砂 +递 +戏 +吊 +陶 +伐 +喂 +疗 +瓶 +婆 +抚 +臂 +摸 +忍 +虾 +蜡 +邻 +胸 +巩 +挤 +偶 +弃 +槽 +劲 +乳 +邓 +吉 +仁 +烂 +砖 +租 +乌 +舰 +伴 +瓜 +浅 +丙 +暂 +燥 +橡 +柳 +迷 +暖 +牌 +秧 +胆 +详 +簧 +踏 +瓷 +谱 +呆 +宾 +糊 +洛 +辉 +愤 +竞 +隙 +怒 +粘 +乃 +绪 +肩 +籍 +敏 +涂 +熙 +皆 +侦 +悬 +掘 +享 +纠 +醒 +狂 +锁 +淀 +恨 +牲 +霸 +爬 +赏 +逆 +玩 +陵 +祝 +秒 +浙 +貌 +役 +彼 +悉 +鸭 +趋 +凤 +晨 +畜 +辈 +秩 +卵 +署 +梯 +炎 +滩 +棋 +驱 +筛 +峡 +冒 +啥 +寿 +译 +浸 +泉 +帽 +迟 +硅 +疆 +贷 +漏 +稿 +冠 +嫩 +胁 +芯 +牢 +叛 +蚀 +奥 +鸣 +岭 +羊 +凭 +串 +塘 +绘 +酵 +融 +盆 +锡 +庙 +筹 +冻 +辅 +摄 +袭 +筋 +拒 +僚 +旱 +钾 +鸟 +漆 +沈 +眉 +疏 +添 +棒 +穗 +硝 +韩 +逼 +扭 +侨 +凉 +挺 +碗 +栽 +炒 +杯 +患 +馏 +劝 +豪 +辽 +勃 +鸿 +旦 +吏 +拜 +狗 +埋 +辊 +掩 +饮 +搬 +骂 +辞 +勾 +扣 +估 +蒋 +绒 +雾 +丈 +朵 +姆 +拟 +宇 +辑 +陕 +雕 +偿 +蓄 +崇 +剪 +倡 +厅 +咬 +驶 +薯 +刷 +斥 +番 +赋 +奉 +佛 +浇 +漫 +曼 +扇 +钙 +桃 +扶 +仔 +返 +俗 +亏 +腔 +鞋 +棱 +覆 +框 +悄 +叔 +撞 +骗 +勘 +旺 +沸 +孤 +吐 +孟 +渠 +屈 +疾 +妙 +惜 +仰 +狠 +胀 +谐 +抛 +霉 +桑 +岗 +嘛 +衰 +盗 +渗 +脏 +赖 +涌 +甜 +曹 +阅 +肌 +哩 +厉 +烃 +纬 +毅 +昨 +伪 +症 +煮 +叹 +钉 +搭 +茎 +笼 +酷 +偷 +弓 +锥 +恒 +杰 +坑 +鼻 +翼 +纶 +叙 +狱 +逮 +罐 +络 +棚 +抑 +膨 +蔬 +寺 +骤 +穆 +冶 +枯 +册 +尸 +凸 +绅 +坯 +牺 +焰 +轰 +欣 +晋 +瘦 +御 +锭 +锦 +丧 +旬 +锻 +垄 +搜 +扑 +邀 +亭 +酯 +迈 +舒 +脆 +酶 +闲 +忧 +酚 +顽 +羽 +涨 +卸 +仗 +陪 +辟 +惩 +杭 +姚 +肚 +捉 +飘 +漂 +昆 +欺 +吾 +郎 +烷 +汁 +呵 +饰 +萧 +雅 +邮 +迁 +燕 +撒 +姻 +赴 +宴 +烦 +债 +帐 +斑 +铃 +旨 +醇 +董 +饼 +雏 +姿 +拌 +傅 +腹 +妥 +揉 +贤 +拆 +歪 +葡 +胺 +丢 +浩 +徽 +昂 +垫 +挡 +览 +贪 +慰 +缴 +汪 +慌 +冯 +诺 +姜 +谊 +凶 +劣 +诬 +耀 +昏 +躺 +盈 +骑 +乔 +溪 +丛 +卢 +抹 +闷 +咨 +刮 +驾 +缆 +悟 +摘 +铒 +掷 +颇 +幻 +柄 +惠 +惨 +佳 +仇 +腊 +窝 +涤 +剑 +瞧 +堡 +泼 +葱 +罩 +霍 +捞 +胎 +苍 +滨 +俩 +捅 +湘 +砍 +霞 +邵 +萄 +疯 +淮 +遂 +熊 +粪 +烘 +宿 +档 +戈 +驳 +嫂 +裕 +徙 +箭 +捐 +肠 +撑 +晒 +辨 +殿 +莲 +摊 +搅 +酱 +屏 +疫 +哀 +蔡 +堵 +沫 +皱 +畅 +叠 +阁 +莱 +敲 +辖 +钩 +痕 +坝 +巷 +饿 +祸 +丘 +玄 +溜 +曰 +逻 +彭 +尝 +卿 +妨 +艇 +吞 +韦 +怨 +矮 +歇 diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/chinese_traditional.txt b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/chinese_traditional.txt new file mode 100644 index 00000000..9b020479 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/chinese_traditional.txt @@ -0,0 +1,2048 @@ +的 +一 +是 +在 +不 +了 +有 +和 +人 +這 +中 +大 +為 +上 +個 +國 +我 +以 +要 +他 +時 +來 +用 +們 +生 +到 +作 +地 +於 +出 +就 +分 +對 +成 +會 +可 +主 +發 +年 +動 +同 +工 +也 +能 +下 +過 +子 +說 +產 +種 +面 +而 +方 +後 +多 +定 +行 
+學 +法 +所 +民 +得 +經 +十 +三 +之 +進 +著 +等 +部 +度 +家 +電 +力 +裡 +如 +水 +化 +高 +自 +二 +理 +起 +小 +物 +現 +實 +加 +量 +都 +兩 +體 +制 +機 +當 +使 +點 +從 +業 +本 +去 +把 +性 +好 +應 +開 +它 +合 +還 +因 +由 +其 +些 +然 +前 +外 +天 +政 +四 +日 +那 +社 +義 +事 +平 +形 +相 +全 +表 +間 +樣 +與 +關 +各 +重 +新 +線 +內 +數 +正 +心 +反 +你 +明 +看 +原 +又 +麼 +利 +比 +或 +但 +質 +氣 +第 +向 +道 +命 +此 +變 +條 +只 +沒 +結 +解 +問 +意 +建 +月 +公 +無 +系 +軍 +很 +情 +者 +最 +立 +代 +想 +已 +通 +並 +提 +直 +題 +黨 +程 +展 +五 +果 +料 +象 +員 +革 +位 +入 +常 +文 +總 +次 +品 +式 +活 +設 +及 +管 +特 +件 +長 +求 +老 +頭 +基 +資 +邊 +流 +路 +級 +少 +圖 +山 +統 +接 +知 +較 +將 +組 +見 +計 +別 +她 +手 +角 +期 +根 +論 +運 +農 +指 +幾 +九 +區 +強 +放 +決 +西 +被 +幹 +做 +必 +戰 +先 +回 +則 +任 +取 +據 +處 +隊 +南 +給 +色 +光 +門 +即 +保 +治 +北 +造 +百 +規 +熱 +領 +七 +海 +口 +東 +導 +器 +壓 +志 +世 +金 +增 +爭 +濟 +階 +油 +思 +術 +極 +交 +受 +聯 +什 +認 +六 +共 +權 +收 +證 +改 +清 +美 +再 +採 +轉 +更 +單 +風 +切 +打 +白 +教 +速 +花 +帶 +安 +場 +身 +車 +例 +真 +務 +具 +萬 +每 +目 +至 +達 +走 +積 +示 +議 +聲 +報 +鬥 +完 +類 +八 +離 +華 +名 +確 +才 +科 +張 +信 +馬 +節 +話 +米 +整 +空 +元 +況 +今 +集 +溫 +傳 +土 +許 +步 +群 +廣 +石 +記 +需 +段 +研 +界 +拉 +林 +律 +叫 +且 +究 +觀 +越 +織 +裝 +影 +算 +低 +持 +音 +眾 +書 +布 +复 +容 +兒 +須 +際 +商 +非 +驗 +連 +斷 +深 +難 +近 +礦 +千 +週 +委 +素 +技 +備 +半 +辦 +青 +省 +列 +習 +響 +約 +支 +般 +史 +感 +勞 +便 +團 +往 +酸 +歷 +市 +克 +何 +除 +消 +構 +府 +稱 +太 +準 +精 +值 +號 +率 +族 +維 +劃 +選 +標 +寫 +存 +候 +毛 +親 +快 +效 +斯 +院 +查 +江 +型 +眼 +王 +按 +格 +養 +易 +置 +派 +層 +片 +始 +卻 +專 +狀 +育 +廠 +京 +識 +適 +屬 +圓 +包 +火 +住 +調 +滿 +縣 +局 +照 +參 +紅 +細 +引 +聽 +該 +鐵 +價 +嚴 +首 +底 +液 +官 +德 +隨 +病 +蘇 +失 +爾 +死 +講 +配 +女 +黃 +推 +顯 +談 +罪 +神 +藝 +呢 +席 +含 +企 +望 +密 +批 +營 +項 +防 +舉 +球 +英 +氧 +勢 +告 +李 +台 +落 +木 +幫 +輪 +破 +亞 +師 +圍 +注 +遠 +字 +材 +排 +供 +河 +態 +封 +另 +施 +減 +樹 +溶 +怎 +止 +案 +言 +士 +均 +武 +固 +葉 +魚 +波 +視 +僅 +費 +緊 +愛 +左 +章 +早 +朝 +害 +續 +輕 +服 +試 +食 +充 +兵 +源 +判 +護 +司 +足 +某 +練 +差 +致 +板 +田 +降 +黑 +犯 +負 +擊 +范 +繼 +興 +似 +餘 +堅 +曲 +輸 +修 +故 +城 +夫 +夠 +送 +筆 +船 +佔 +右 +財 +吃 +富 +春 +職 +覺 +漢 +畫 +功 +巴 +跟 +雖 +雜 +飛 +檢 +吸 +助 +昇 +陽 +互 +初 +創 +抗 +考 +投 +壞 +策 +古 +徑 +換 +未 +跑 +留 +鋼 +曾 +端 +責 +站 +簡 +述 +錢 +副 +盡 +帝 +射 +草 +衝 +承 +獨 +令 +限 +阿 +宣 +環 +雙 +請 +超 +微 +讓 +控 +州 +良 +軸 +找 +否 +紀 +益 +依 +優 +頂 +礎 +載 +倒 +房 +突 +坐 +粉 +敵 +略 +客 +袁 +冷 +勝 +絕 +析 +塊 +劑 +測 +絲 +協 +訴 +念 +陳 +仍 +羅 +鹽 +友 +洋 +錯 +苦 +夜 +刑 +移 +頻 +逐 +靠 +混 +母 +短 +皮 +終 +聚 +汽 +村 +雲 +哪 +既 +距 +衛 +停 +烈 +央 +察 +燒 +迅 +境 +若 +印 +洲 +刻 +括 +激 +孔 +搞 +甚 +室 +待 +核 +校 +散 +侵 +吧 +甲 +遊 +久 +菜 +味 +舊 +模 +湖 +貨 +損 +預 +阻 +毫 +普 +穩 +乙 +媽 +植 +息 +擴 +銀 +語 +揮 +酒 +守 +拿 +序 +紙 +醫 +缺 +雨 +嗎 +針 +劉 +啊 +急 +唱 +誤 +訓 +願 +審 +附 +獲 +茶 +鮮 +糧 +斤 +孩 +脫 +硫 +肥 +善 +龍 +演 +父 +漸 +血 +歡 +械 +掌 +歌 +沙 +剛 +攻 +謂 +盾 +討 +晚 +粒 +亂 +燃 +矛 +乎 +殺 +藥 +寧 +魯 +貴 +鐘 +煤 +讀 +班 +伯 +香 +介 +迫 +句 +豐 +培 +握 +蘭 +擔 +弦 +蛋 +沉 +假 +穿 +執 +答 +樂 +誰 +順 +煙 +縮 +徵 +臉 +喜 +松 +腳 +困 +異 +免 +背 +星 +福 +買 +染 +井 +概 +慢 +怕 +磁 +倍 +祖 +皇 +促 +靜 +補 +評 +翻 +肉 +踐 +尼 +衣 +寬 +揚 +棉 +希 +傷 +操 +垂 +秋 +宜 +氫 +套 +督 +振 +架 +亮 +末 +憲 +慶 +編 +牛 +觸 +映 +雷 +銷 +詩 +座 +居 +抓 +裂 +胞 +呼 +娘 +景 +威 +綠 +晶 +厚 +盟 +衡 +雞 +孫 +延 +危 +膠 +屋 +鄉 +臨 +陸 +顧 +掉 +呀 +燈 +歲 +措 +束 +耐 +劇 +玉 +趙 +跳 +哥 +季 +課 +凱 +胡 +額 +款 +紹 +卷 +齊 +偉 +蒸 +殖 +永 +宗 +苗 +川 +爐 +岩 +弱 +零 +楊 +奏 +沿 +露 +桿 +探 +滑 +鎮 +飯 +濃 +航 +懷 +趕 +庫 +奪 +伊 +靈 +稅 +途 +滅 +賽 +歸 +召 +鼓 +播 +盤 +裁 +險 +康 +唯 +錄 +菌 +純 +借 +糖 +蓋 +橫 +符 +私 +努 +堂 +域 +槍 +潤 +幅 +哈 +竟 +熟 +蟲 +澤 +腦 +壤 +碳 +歐 +遍 +側 +寨 +敢 +徹 +慮 +斜 +薄 +庭 +納 +彈 +飼 +伸 +折 +麥 +濕 +暗 +荷 +瓦 +塞 +床 +築 +惡 +戶 +訪 +塔 +奇 +透 +梁 +刀 +旋 +跡 +卡 +氯 +遇 +份 +毒 +泥 +退 +洗 +擺 +灰 +彩 +賣 +耗 +夏 +擇 +忙 +銅 +獻 +硬 +予 +繁 +圈 +雪 +函 +亦 +抽 +篇 +陣 +陰 +丁 +尺 +追 +堆 +雄 +迎 +泛 +爸 +樓 +避 +謀 +噸 +野 +豬 +旗 +累 +偏 +典 +館 +索 +秦 +脂 +潮 +爺 +豆 +忽 +托 +驚 +塑 +遺 +愈 +朱 +替 +纖 +粗 +傾 +尚 +痛 +楚 +謝 +奮 +購 +磨 +君 +池 +旁 +碎 +骨 +監 +捕 +弟 +暴 +割 +貫 +殊 +釋 +詞 +亡 +壁 +頓 +寶 +午 +塵 +聞 +揭 +炮 +殘 +冬 +橋 +婦 +警 +綜 +招 +吳 +付 +浮 +遭 +徐 +您 +搖 +谷 +贊 +箱 +隔 +訂 +男 +吹 +園 +紛 +唐 +敗 +宋 +玻 +巨 +耕 +坦 +榮 +閉 +灣 +鍵 +凡 +駐 +鍋 +救 +恩 +剝 +凝 +鹼 +齒 +截 +煉 +麻 +紡 +禁 +廢 +盛 +版 +緩 +淨 +睛 +昌 +婚 +涉 +筒 +嘴 +插 +岸 +朗 +莊 +街 +藏 +姑 +貿 +腐 +奴 +啦 +慣 +乘 +夥 +恢 +勻 +紗 +扎 +辯 +耳 +彪 +臣 +億 +璃 +抵 
+脈 +秀 +薩 +俄 +網 +舞 +店 +噴 +縱 +寸 +汗 +掛 +洪 +賀 +閃 +柬 +爆 +烯 +津 +稻 +牆 +軟 +勇 +像 +滾 +厘 +蒙 +芳 +肯 +坡 +柱 +盪 +腿 +儀 +旅 +尾 +軋 +冰 +貢 +登 +黎 +削 +鑽 +勒 +逃 +障 +氨 +郭 +峰 +幣 +港 +伏 +軌 +畝 +畢 +擦 +莫 +刺 +浪 +秘 +援 +株 +健 +售 +股 +島 +甘 +泡 +睡 +童 +鑄 +湯 +閥 +休 +匯 +舍 +牧 +繞 +炸 +哲 +磷 +績 +朋 +淡 +尖 +啟 +陷 +柴 +呈 +徒 +顏 +淚 +稍 +忘 +泵 +藍 +拖 +洞 +授 +鏡 +辛 +壯 +鋒 +貧 +虛 +彎 +摩 +泰 +幼 +廷 +尊 +窗 +綱 +弄 +隸 +疑 +氏 +宮 +姐 +震 +瑞 +怪 +尤 +琴 +循 +描 +膜 +違 +夾 +腰 +緣 +珠 +窮 +森 +枝 +竹 +溝 +催 +繩 +憶 +邦 +剩 +幸 +漿 +欄 +擁 +牙 +貯 +禮 +濾 +鈉 +紋 +罷 +拍 +咱 +喊 +袖 +埃 +勤 +罰 +焦 +潛 +伍 +墨 +欲 +縫 +姓 +刊 +飽 +仿 +獎 +鋁 +鬼 +麗 +跨 +默 +挖 +鏈 +掃 +喝 +袋 +炭 +污 +幕 +諸 +弧 +勵 +梅 +奶 +潔 +災 +舟 +鑑 +苯 +訟 +抱 +毀 +懂 +寒 +智 +埔 +寄 +屆 +躍 +渡 +挑 +丹 +艱 +貝 +碰 +拔 +爹 +戴 +碼 +夢 +芽 +熔 +赤 +漁 +哭 +敬 +顆 +奔 +鉛 +仲 +虎 +稀 +妹 +乏 +珍 +申 +桌 +遵 +允 +隆 +螺 +倉 +魏 +銳 +曉 +氮 +兼 +隱 +礙 +赫 +撥 +忠 +肅 +缸 +牽 +搶 +博 +巧 +殼 +兄 +杜 +訊 +誠 +碧 +祥 +柯 +頁 +巡 +矩 +悲 +灌 +齡 +倫 +票 +尋 +桂 +鋪 +聖 +恐 +恰 +鄭 +趣 +抬 +荒 +騰 +貼 +柔 +滴 +猛 +闊 +輛 +妻 +填 +撤 +儲 +簽 +鬧 +擾 +紫 +砂 +遞 +戲 +吊 +陶 +伐 +餵 +療 +瓶 +婆 +撫 +臂 +摸 +忍 +蝦 +蠟 +鄰 +胸 +鞏 +擠 +偶 +棄 +槽 +勁 +乳 +鄧 +吉 +仁 +爛 +磚 +租 +烏 +艦 +伴 +瓜 +淺 +丙 +暫 +燥 +橡 +柳 +迷 +暖 +牌 +秧 +膽 +詳 +簧 +踏 +瓷 +譜 +呆 +賓 +糊 +洛 +輝 +憤 +競 +隙 +怒 +粘 +乃 +緒 +肩 +籍 +敏 +塗 +熙 +皆 +偵 +懸 +掘 +享 +糾 +醒 +狂 +鎖 +淀 +恨 +牲 +霸 +爬 +賞 +逆 +玩 +陵 +祝 +秒 +浙 +貌 +役 +彼 +悉 +鴨 +趨 +鳳 +晨 +畜 +輩 +秩 +卵 +署 +梯 +炎 +灘 +棋 +驅 +篩 +峽 +冒 +啥 +壽 +譯 +浸 +泉 +帽 +遲 +矽 +疆 +貸 +漏 +稿 +冠 +嫩 +脅 +芯 +牢 +叛 +蝕 +奧 +鳴 +嶺 +羊 +憑 +串 +塘 +繪 +酵 +融 +盆 +錫 +廟 +籌 +凍 +輔 +攝 +襲 +筋 +拒 +僚 +旱 +鉀 +鳥 +漆 +沈 +眉 +疏 +添 +棒 +穗 +硝 +韓 +逼 +扭 +僑 +涼 +挺 +碗 +栽 +炒 +杯 +患 +餾 +勸 +豪 +遼 +勃 +鴻 +旦 +吏 +拜 +狗 +埋 +輥 +掩 +飲 +搬 +罵 +辭 +勾 +扣 +估 +蔣 +絨 +霧 +丈 +朵 +姆 +擬 +宇 +輯 +陝 +雕 +償 +蓄 +崇 +剪 +倡 +廳 +咬 +駛 +薯 +刷 +斥 +番 +賦 +奉 +佛 +澆 +漫 +曼 +扇 +鈣 +桃 +扶 +仔 +返 +俗 +虧 +腔 +鞋 +棱 +覆 +框 +悄 +叔 +撞 +騙 +勘 +旺 +沸 +孤 +吐 +孟 +渠 +屈 +疾 +妙 +惜 +仰 +狠 +脹 +諧 +拋 +黴 +桑 +崗 +嘛 +衰 +盜 +滲 +臟 +賴 +湧 +甜 +曹 +閱 +肌 +哩 +厲 +烴 +緯 +毅 +昨 +偽 +症 +煮 +嘆 +釘 +搭 +莖 +籠 +酷 +偷 +弓 +錐 +恆 +傑 +坑 +鼻 +翼 +綸 +敘 +獄 +逮 +罐 +絡 +棚 +抑 +膨 +蔬 +寺 +驟 +穆 +冶 +枯 +冊 +屍 +凸 +紳 +坯 +犧 +焰 +轟 +欣 +晉 +瘦 +禦 +錠 +錦 +喪 +旬 +鍛 +壟 +搜 +撲 +邀 +亭 +酯 +邁 +舒 +脆 +酶 +閒 +憂 +酚 +頑 +羽 +漲 +卸 +仗 +陪 +闢 +懲 +杭 +姚 +肚 +捉 +飄 +漂 +昆 +欺 +吾 +郎 +烷 +汁 +呵 +飾 +蕭 +雅 +郵 +遷 +燕 +撒 +姻 +赴 +宴 +煩 +債 +帳 +斑 +鈴 +旨 +醇 +董 +餅 +雛 +姿 +拌 +傅 +腹 +妥 +揉 +賢 +拆 +歪 +葡 +胺 +丟 +浩 +徽 +昂 +墊 +擋 +覽 +貪 +慰 +繳 +汪 +慌 +馮 +諾 +姜 +誼 +兇 +劣 +誣 +耀 +昏 +躺 +盈 +騎 +喬 +溪 +叢 +盧 +抹 +悶 +諮 +刮 +駕 +纜 +悟 +摘 +鉺 +擲 +頗 +幻 +柄 +惠 +慘 +佳 +仇 +臘 +窩 +滌 +劍 +瞧 +堡 +潑 +蔥 +罩 +霍 +撈 +胎 +蒼 +濱 +倆 +捅 +湘 +砍 +霞 +邵 +萄 +瘋 +淮 +遂 +熊 +糞 +烘 +宿 +檔 +戈 +駁 +嫂 +裕 +徙 +箭 +捐 +腸 +撐 +曬 +辨 +殿 +蓮 +攤 +攪 +醬 +屏 +疫 +哀 +蔡 +堵 +沫 +皺 +暢 +疊 +閣 +萊 +敲 +轄 +鉤 +痕 +壩 +巷 +餓 +禍 +丘 +玄 +溜 +曰 +邏 +彭 +嘗 +卿 +妨 +艇 +吞 +韋 +怨 +矮 +歇 diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/czech.txt b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/czech.txt new file mode 100644 index 00000000..fdab4a2a --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/czech.txt @@ -0,0 +1,2048 @@ +abdikace +abeceda +adresa +agrese +akce +aktovka +alej +alkohol +amputace +ananas +andulka +anekdota +anketa +antika +anulovat +archa +arogance +asfalt +asistent +aspirace +astma +astronom +atlas +atletika +atol +autobus +azyl +babka +bachor +bacil +baculka +badatel +bageta +bagr +bahno +bakterie +balada +baletka +balkon +balonek +balvan +balza +bambus +bankomat +barbar +baret +barman +baroko +barva +baterka +batoh +bavlna +bazalka +bazilika +bazuka +bedna +beran +beseda +bestie +beton +bezinka +bezmoc +beztak +bicykl +bidlo +biftek +bikiny +bilance +biograf +biolog +bitva +bizon +blahobyt +blatouch +blecha +bledule +blesk +blikat +blizna +blokovat +bloudit +blud +bobek +bobr +bodlina +bodnout +bohatost +bojkot +bojovat +bokorys +bolest +borec +borovice +bota +boubel +bouchat +bouda +boule +bourat +boxer 
+bradavka +brambora +branka +bratr +brepta +briketa +brko +brloh +bronz +broskev +brunetka +brusinka +brzda +brzy +bublina +bubnovat +buchta +buditel +budka +budova +bufet +bujarost +bukvice +buldok +bulva +bunda +bunkr +burza +butik +buvol +buzola +bydlet +bylina +bytovka +bzukot +capart +carevna +cedr +cedule +cejch +cejn +cela +celer +celkem +celnice +cenina +cennost +cenovka +centrum +cenzor +cestopis +cetka +chalupa +chapadlo +charita +chata +chechtat +chemie +chichot +chirurg +chlad +chleba +chlubit +chmel +chmura +chobot +chochol +chodba +cholera +chomout +chopit +choroba +chov +chrapot +chrlit +chrt +chrup +chtivost +chudina +chutnat +chvat +chvilka +chvost +chyba +chystat +chytit +cibule +cigareta +cihelna +cihla +cinkot +cirkus +cisterna +citace +citrus +cizinec +cizost +clona +cokoliv +couvat +ctitel +ctnost +cudnost +cuketa +cukr +cupot +cvaknout +cval +cvik +cvrkot +cyklista +daleko +dareba +datel +datum +dcera +debata +dechovka +decibel +deficit +deflace +dekl +dekret +demokrat +deprese +derby +deska +detektiv +dikobraz +diktovat +dioda +diplom +disk +displej +divadlo +divoch +dlaha +dlouho +dluhopis +dnes +dobro +dobytek +docent +dochutit +dodnes +dohled +dohoda +dohra +dojem +dojnice +doklad +dokola +doktor +dokument +dolar +doleva +dolina +doma +dominant +domluvit +domov +donutit +dopad +dopis +doplnit +doposud +doprovod +dopustit +dorazit +dorost +dort +dosah +doslov +dostatek +dosud +dosyta +dotaz +dotek +dotknout +doufat +doutnat +dovozce +dozadu +doznat +dozorce +drahota +drak +dramatik +dravec +draze +drdol +drobnost +drogerie +drozd +drsnost +drtit +drzost +duben +duchovno +dudek +duha +duhovka +dusit +dusno +dutost +dvojice +dvorec +dynamit +ekolog +ekonomie +elektron +elipsa +email +emise +emoce +empatie +epizoda +epocha +epopej +epos +esej +esence +eskorta +eskymo +etiketa +euforie +evoluce +exekuce +exkurze +expedice +exploze +export +extrakt +facka +fajfka +fakulta +fanatik +fantazie +farmacie +favorit +fazole +federace +fejeton +fenka +fialka +figurant +filozof +filtr +finance +finta +fixace +fjord +flanel +flirt +flotila +fond +fosfor +fotbal +fotka +foton +frakce +freska +fronta +fukar +funkce +fyzika +galeje +garant +genetika +geolog +gilotina +glazura +glejt +golem +golfista +gotika +graf +gramofon +granule +grep +gril +grog +groteska +guma +hadice +hadr +hala +halenka +hanba +hanopis +harfa +harpuna +havran +hebkost +hejkal +hejno +hejtman +hektar +helma +hematom +herec +herna +heslo +hezky +historik +hladovka +hlasivky +hlava +hledat +hlen +hlodavec +hloh +hloupost +hltat +hlubina +hluchota +hmat +hmota +hmyz +hnis +hnojivo +hnout +hoblina +hoboj +hoch +hodiny +hodlat +hodnota +hodovat +hojnost +hokej +holinka +holka +holub +homole +honitba +honorace +horal +horda +horizont +horko +horlivec +hormon +hornina +horoskop +horstvo +hospoda +hostina +hotovost +houba +houf +houpat +houska +hovor +hradba +hranice +hravost +hrazda +hrbolek +hrdina +hrdlo +hrdost +hrnek +hrobka +hromada +hrot +hrouda +hrozen +hrstka +hrubost +hryzat +hubenost +hubnout +hudba +hukot +humr +husita +hustota +hvozd +hybnost +hydrant +hygiena +hymna +hysterik +idylka +ihned +ikona +iluze +imunita +infekce +inflace +inkaso +inovace +inspekce +internet +invalida +investor +inzerce +ironie +jablko +jachta +jahoda +jakmile +jakost +jalovec +jantar +jarmark +jaro +jasan +jasno +jatka +javor +jazyk +jedinec +jedle +jednatel +jehlan +jekot +jelen +jelito +jemnost +jenom +jepice +jeseter +jevit +jezdec +jezero +jinak +jindy +jinoch +jiskra +jistota +jitrnice +jizva +jmenovat +jogurt +jurta 
+kabaret +kabel +kabinet +kachna +kadet +kadidlo +kahan +kajak +kajuta +kakao +kaktus +kalamita +kalhoty +kalibr +kalnost +kamera +kamkoliv +kamna +kanibal +kanoe +kantor +kapalina +kapela +kapitola +kapka +kaple +kapota +kapr +kapusta +kapybara +karamel +karotka +karton +kasa +katalog +katedra +kauce +kauza +kavalec +kazajka +kazeta +kazivost +kdekoliv +kdesi +kedluben +kemp +keramika +kino +klacek +kladivo +klam +klapot +klasika +klaun +klec +klenba +klepat +klesnout +klid +klima +klisna +klobouk +klokan +klopa +kloub +klubovna +klusat +kluzkost +kmen +kmitat +kmotr +kniha +knot +koalice +koberec +kobka +kobliha +kobyla +kocour +kohout +kojenec +kokos +koktejl +kolaps +koleda +kolize +kolo +komando +kometa +komik +komnata +komora +kompas +komunita +konat +koncept +kondice +konec +konfese +kongres +konina +konkurs +kontakt +konzerva +kopanec +kopie +kopnout +koprovka +korbel +korektor +kormidlo +koroptev +korpus +koruna +koryto +korzet +kosatec +kostka +kotel +kotleta +kotoul +koukat +koupelna +kousek +kouzlo +kovboj +koza +kozoroh +krabice +krach +krajina +kralovat +krasopis +kravata +kredit +krejcar +kresba +kreveta +kriket +kritik +krize +krkavec +krmelec +krmivo +krocan +krok +kronika +kropit +kroupa +krovka +krtek +kruhadlo +krupice +krutost +krvinka +krychle +krypta +krystal +kryt +kudlanka +kufr +kujnost +kukla +kulajda +kulich +kulka +kulomet +kultura +kuna +kupodivu +kurt +kurzor +kutil +kvalita +kvasinka +kvestor +kynolog +kyselina +kytara +kytice +kytka +kytovec +kyvadlo +labrador +lachtan +ladnost +laik +lakomec +lamela +lampa +lanovka +lasice +laso +lastura +latinka +lavina +lebka +leckdy +leden +lednice +ledovka +ledvina +legenda +legie +legrace +lehce +lehkost +lehnout +lektvar +lenochod +lentilka +lepenka +lepidlo +letadlo +letec +letmo +letokruh +levhart +levitace +levobok +libra +lichotka +lidojed +lidskost +lihovina +lijavec +lilek +limetka +linie +linka +linoleum +listopad +litina +litovat +lobista +lodivod +logika +logoped +lokalita +loket +lomcovat +lopata +lopuch +lord +losos +lotr +loudal +louh +louka +louskat +lovec +lstivost +lucerna +lucifer +lump +lusk +lustrace +lvice +lyra +lyrika +lysina +madam +madlo +magistr +mahagon +majetek +majitel +majorita +makak +makovice +makrela +malba +malina +malovat +malvice +maminka +mandle +manko +marnost +masakr +maskot +masopust +matice +matrika +maturita +mazanec +mazivo +mazlit +mazurka +mdloba +mechanik +meditace +medovina +melasa +meloun +mentolka +metla +metoda +metr +mezera +migrace +mihnout +mihule +mikina +mikrofon +milenec +milimetr +milost +mimika +mincovna +minibar +minomet +minulost +miska +mistr +mixovat +mladost +mlha +mlhovina +mlok +mlsat +mluvit +mnich +mnohem +mobil +mocnost +modelka +modlitba +mohyla +mokro +molekula +momentka +monarcha +monokl +monstrum +montovat +monzun +mosaz +moskyt +most +motivace +motorka +motyka +moucha +moudrost +mozaika +mozek +mozol +mramor +mravenec +mrkev +mrtvola +mrzet +mrzutost +mstitel +mudrc +muflon +mulat +mumie +munice +muset +mutace +muzeum +muzikant +myslivec +mzda +nabourat +nachytat +nadace +nadbytek +nadhoz +nadobro +nadpis +nahlas +nahnat +nahodile +nahradit +naivita +najednou +najisto +najmout +naklonit +nakonec +nakrmit +nalevo +namazat +namluvit +nanometr +naoko +naopak +naostro +napadat +napevno +naplnit +napnout +naposled +naprosto +narodit +naruby +narychlo +nasadit +nasekat +naslepo +nastat +natolik +navenek +navrch +navzdory +nazvat +nebe +nechat +necky +nedaleko +nedbat +neduh +negace +nehet +nehoda +nejen +nejprve +neklid +nelibost +nemilost +nemoc 
+neochota +neonka +nepokoj +nerost +nerv +nesmysl +nesoulad +netvor +neuron +nevina +nezvykle +nicota +nijak +nikam +nikdy +nikl +nikterak +nitro +nocleh +nohavice +nominace +nora +norek +nositel +nosnost +nouze +noviny +novota +nozdra +nuda +nudle +nuget +nutit +nutnost +nutrie +nymfa +obal +obarvit +obava +obdiv +obec +obehnat +obejmout +obezita +obhajoba +obilnice +objasnit +objekt +obklopit +oblast +oblek +obliba +obloha +obluda +obnos +obohatit +obojek +obout +obrazec +obrna +obruba +obrys +obsah +obsluha +obstarat +obuv +obvaz +obvinit +obvod +obvykle +obyvatel +obzor +ocas +ocel +ocenit +ochladit +ochota +ochrana +ocitnout +odboj +odbyt +odchod +odcizit +odebrat +odeslat +odevzdat +odezva +odhadce +odhodit +odjet +odjinud +odkaz +odkoupit +odliv +odluka +odmlka +odolnost +odpad +odpis +odplout +odpor +odpustit +odpykat +odrazka +odsoudit +odstup +odsun +odtok +odtud +odvaha +odveta +odvolat +odvracet +odznak +ofina +ofsajd +ohlas +ohnisko +ohrada +ohrozit +ohryzek +okap +okenice +oklika +okno +okouzlit +okovy +okrasa +okres +okrsek +okruh +okupant +okurka +okusit +olejnina +olizovat +omak +omeleta +omezit +omladina +omlouvat +omluva +omyl +onehdy +opakovat +opasek +operace +opice +opilost +opisovat +opora +opozice +opravdu +oproti +orbital +orchestr +orgie +orlice +orloj +ortel +osada +oschnout +osika +osivo +oslava +oslepit +oslnit +oslovit +osnova +osoba +osolit +ospalec +osten +ostraha +ostuda +ostych +osvojit +oteplit +otisk +otop +otrhat +otrlost +otrok +otruby +otvor +ovanout +ovar +oves +ovlivnit +ovoce +oxid +ozdoba +pachatel +pacient +padouch +pahorek +pakt +palanda +palec +palivo +paluba +pamflet +pamlsek +panenka +panika +panna +panovat +panstvo +pantofle +paprika +parketa +parodie +parta +paruka +paryba +paseka +pasivita +pastelka +patent +patrona +pavouk +pazneht +pazourek +pecka +pedagog +pejsek +peklo +peloton +penalta +pendrek +penze +periskop +pero +pestrost +petarda +petice +petrolej +pevnina +pexeso +pianista +piha +pijavice +pikle +piknik +pilina +pilnost +pilulka +pinzeta +pipeta +pisatel +pistole +pitevna +pivnice +pivovar +placenta +plakat +plamen +planeta +plastika +platit +plavidlo +plaz +plech +plemeno +plenta +ples +pletivo +plevel +plivat +plnit +plno +plocha +plodina +plomba +plout +pluk +plyn +pobavit +pobyt +pochod +pocit +poctivec +podat +podcenit +podepsat +podhled +podivit +podklad +podmanit +podnik +podoba +podpora +podraz +podstata +podvod +podzim +poezie +pohanka +pohnutka +pohovor +pohroma +pohyb +pointa +pojistka +pojmout +pokazit +pokles +pokoj +pokrok +pokuta +pokyn +poledne +polibek +polknout +poloha +polynom +pomalu +pominout +pomlka +pomoc +pomsta +pomyslet +ponechat +ponorka +ponurost +popadat +popel +popisek +poplach +poprosit +popsat +popud +poradce +porce +porod +porucha +poryv +posadit +posed +posila +poskok +poslanec +posoudit +pospolu +postava +posudek +posyp +potah +potkan +potlesk +potomek +potrava +potupa +potvora +poukaz +pouto +pouzdro +povaha +povidla +povlak +povoz +povrch +povstat +povyk +povzdech +pozdrav +pozemek +poznatek +pozor +pozvat +pracovat +prahory +praktika +prales +praotec +praporek +prase +pravda +princip +prkno +probudit +procento +prodej +profese +prohra +projekt +prolomit +promile +pronikat +propad +prorok +prosba +proton +proutek +provaz +prskavka +prsten +prudkost +prut +prvek +prvohory +psanec +psovod +pstruh +ptactvo +puberta +puch +pudl +pukavec +puklina +pukrle +pult +pumpa +punc +pupen +pusa +pusinka +pustina +putovat +putyka +pyramida +pysk +pytel +racek +rachot +radiace +radnice +radon +raft +ragby 
+raketa +rakovina +rameno +rampouch +rande +rarach +rarita +rasovna +rastr +ratolest +razance +razidlo +reagovat +reakce +recept +redaktor +referent +reflex +rejnok +reklama +rekord +rekrut +rektor +reputace +revize +revma +revolver +rezerva +riskovat +riziko +robotika +rodokmen +rohovka +rokle +rokoko +romaneto +ropovod +ropucha +rorejs +rosol +rostlina +rotmistr +rotoped +rotunda +roubenka +roucho +roup +roura +rovina +rovnice +rozbor +rozchod +rozdat +rozeznat +rozhodce +rozinka +rozjezd +rozkaz +rozloha +rozmar +rozpad +rozruch +rozsah +roztok +rozum +rozvod +rubrika +ruchadlo +rukavice +rukopis +ryba +rybolov +rychlost +rydlo +rypadlo +rytina +ryzost +sadista +sahat +sako +samec +samizdat +samota +sanitka +sardinka +sasanka +satelit +sazba +sazenice +sbor +schovat +sebranka +secese +sedadlo +sediment +sedlo +sehnat +sejmout +sekera +sekta +sekunda +sekvoje +semeno +seno +servis +sesadit +seshora +seskok +seslat +sestra +sesuv +sesypat +setba +setina +setkat +setnout +setrvat +sever +seznam +shoda +shrnout +sifon +silnice +sirka +sirotek +sirup +situace +skafandr +skalisko +skanzen +skaut +skeptik +skica +skladba +sklenice +sklo +skluz +skoba +skokan +skoro +skripta +skrz +skupina +skvost +skvrna +slabika +sladidlo +slanina +slast +slavnost +sledovat +slepec +sleva +slezina +slib +slina +sliznice +slon +sloupek +slovo +sluch +sluha +slunce +slupka +slza +smaragd +smetana +smilstvo +smlouva +smog +smrad +smrk +smrtka +smutek +smysl +snad +snaha +snob +sobota +socha +sodovka +sokol +sopka +sotva +souboj +soucit +soudce +souhlas +soulad +soumrak +souprava +soused +soutok +souviset +spalovna +spasitel +spis +splav +spodek +spojenec +spolu +sponzor +spornost +spousta +sprcha +spustit +sranda +sraz +srdce +srna +srnec +srovnat +srpen +srst +srub +stanice +starosta +statika +stavba +stehno +stezka +stodola +stolek +stopa +storno +stoupat +strach +stres +strhnout +strom +struna +studna +stupnice +stvol +styk +subjekt +subtropy +suchar +sudost +sukno +sundat +sunout +surikata +surovina +svah +svalstvo +svetr +svatba +svazek +svisle +svitek +svoboda +svodidlo +svorka +svrab +sykavka +sykot +synek +synovec +sypat +sypkost +syrovost +sysel +sytost +tabletka +tabule +tahoun +tajemno +tajfun +tajga +tajit +tajnost +taktika +tamhle +tampon +tancovat +tanec +tanker +tapeta +tavenina +tazatel +technika +tehdy +tekutina +telefon +temnota +tendence +tenista +tenor +teplota +tepna +teprve +terapie +termoska +textil +ticho +tiskopis +titulek +tkadlec +tkanina +tlapka +tleskat +tlukot +tlupa +tmel +toaleta +topinka +topol +torzo +touha +toulec +tradice +traktor +tramp +trasa +traverza +trefit +trest +trezor +trhavina +trhlina +trochu +trojice +troska +trouba +trpce +trpitel +trpkost +trubec +truchlit +truhlice +trus +trvat +tudy +tuhnout +tuhost +tundra +turista +turnaj +tuzemsko +tvaroh +tvorba +tvrdost +tvrz +tygr +tykev +ubohost +uboze +ubrat +ubrousek +ubrus +ubytovna +ucho +uctivost +udivit +uhradit +ujednat +ujistit +ujmout +ukazatel +uklidnit +uklonit +ukotvit +ukrojit +ulice +ulita +ulovit +umyvadlo +unavit +uniforma +uniknout +upadnout +uplatnit +uplynout +upoutat +upravit +uran +urazit +usednout +usilovat +usmrtit +usnadnit +usnout +usoudit +ustlat +ustrnout +utahovat +utkat +utlumit +utonout +utopenec +utrousit +uvalit +uvolnit +uvozovka +uzdravit +uzel +uzenina +uzlina +uznat +vagon +valcha +valoun +vana +vandal +vanilka +varan +varhany +varovat +vcelku +vchod +vdova +vedro +vegetace +vejce +velbloud +veletrh +velitel +velmoc +velryba +venkov +veranda +verze +veselka +veskrze +vesnice +vespodu 
+vesta +veterina +veverka +vibrace +vichr +videohra +vidina +vidle +vila +vinice +viset +vitalita +vize +vizitka +vjezd +vklad +vkus +vlajka +vlak +vlasec +vlevo +vlhkost +vliv +vlnovka +vloupat +vnucovat +vnuk +voda +vodivost +vodoznak +vodstvo +vojensky +vojna +vojsko +volant +volba +volit +volno +voskovka +vozidlo +vozovna +vpravo +vrabec +vracet +vrah +vrata +vrba +vrcholek +vrhat +vrstva +vrtule +vsadit +vstoupit +vstup +vtip +vybavit +vybrat +vychovat +vydat +vydra +vyfotit +vyhledat +vyhnout +vyhodit +vyhradit +vyhubit +vyjasnit +vyjet +vyjmout +vyklopit +vykonat +vylekat +vymazat +vymezit +vymizet +vymyslet +vynechat +vynikat +vynutit +vypadat +vyplatit +vypravit +vypustit +vyrazit +vyrovnat +vyrvat +vyslovit +vysoko +vystavit +vysunout +vysypat +vytasit +vytesat +vytratit +vyvinout +vyvolat +vyvrhel +vyzdobit +vyznat +vzadu +vzbudit +vzchopit +vzdor +vzduch +vzdychat +vzestup +vzhledem +vzkaz +vzlykat +vznik +vzorek +vzpoura +vztah +vztek +xylofon +zabrat +zabydlet +zachovat +zadarmo +zadusit +zafoukat +zahltit +zahodit +zahrada +zahynout +zajatec +zajet +zajistit +zaklepat +zakoupit +zalepit +zamezit +zamotat +zamyslet +zanechat +zanikat +zaplatit +zapojit +zapsat +zarazit +zastavit +zasunout +zatajit +zatemnit +zatknout +zaujmout +zavalit +zavelet +zavinit +zavolat +zavrtat +zazvonit +zbavit +zbrusu +zbudovat +zbytek +zdaleka +zdarma +zdatnost +zdivo +zdobit +zdroj +zdvih +zdymadlo +zelenina +zeman +zemina +zeptat +zezadu +zezdola +zhatit +zhltnout +zhluboka +zhotovit +zhruba +zima +zimnice +zjemnit +zklamat +zkoumat +zkratka +zkumavka +zlato +zlehka +zloba +zlom +zlost +zlozvyk +zmapovat +zmar +zmatek +zmije +zmizet +zmocnit +zmodrat +zmrzlina +zmutovat +znak +znalost +znamenat +znovu +zobrazit +zotavit +zoubek +zoufale +zplodit +zpomalit +zprava +zprostit +zprudka +zprvu +zrada +zranit +zrcadlo +zrnitost +zrno +zrovna +zrychlit +zrzavost +zticha +ztratit +zubovina +zubr +zvednout +zvenku +zvesela +zvon +zvrat +zvukovod +zvyk diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/english.txt b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/english.txt new file mode 100644 index 00000000..942040ed --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/english.txt @@ -0,0 +1,2048 @@ +abandon +ability +able +about +above +absent +absorb +abstract +absurd +abuse +access +accident +account +accuse +achieve +acid +acoustic +acquire +across +act +action +actor +actress +actual +adapt +add +addict +address +adjust +admit +adult +advance +advice +aerobic +affair +afford +afraid +again +age +agent +agree +ahead +aim +air +airport +aisle +alarm +album +alcohol +alert +alien +all +alley +allow +almost +alone +alpha +already +also +alter +always +amateur +amazing +among +amount +amused +analyst +anchor +ancient +anger +angle +angry +animal +ankle +announce +annual +another +answer +antenna +antique +anxiety +any +apart +apology +appear +apple +approve +april +arch +arctic +area +arena +argue +arm +armed +armor +army +around +arrange +arrest +arrive +arrow +art +artefact +artist +artwork +ask +aspect +assault +asset +assist +assume +asthma +athlete +atom +attack +attend +attitude +attract +auction +audit +august +aunt +author +auto +autumn +average +avocado +avoid +awake +aware +away +awesome +awful +awkward +axis +baby +bachelor +bacon +badge +bag +balance +balcony +ball +bamboo +banana +banner +bar +barely +bargain +barrel +base +basic +basket +battle +beach +bean +beauty +because +become +beef +before +begin +behave +behind 
+believe +below +belt +bench +benefit +best +betray +better +between +beyond +bicycle +bid +bike +bind +biology +bird +birth +bitter +black +blade +blame +blanket +blast +bleak +bless +blind +blood +blossom +blouse +blue +blur +blush +board +boat +body +boil +bomb +bone +bonus +book +boost +border +boring +borrow +boss +bottom +bounce +box +boy +bracket +brain +brand +brass +brave +bread +breeze +brick +bridge +brief +bright +bring +brisk +broccoli +broken +bronze +broom +brother +brown +brush +bubble +buddy +budget +buffalo +build +bulb +bulk +bullet +bundle +bunker +burden +burger +burst +bus +business +busy +butter +buyer +buzz +cabbage +cabin +cable +cactus +cage +cake +call +calm +camera +camp +can +canal +cancel +candy +cannon +canoe +canvas +canyon +capable +capital +captain +car +carbon +card +cargo +carpet +carry +cart +case +cash +casino +castle +casual +cat +catalog +catch +category +cattle +caught +cause +caution +cave +ceiling +celery +cement +census +century +cereal +certain +chair +chalk +champion +change +chaos +chapter +charge +chase +chat +cheap +check +cheese +chef +cherry +chest +chicken +chief +child +chimney +choice +choose +chronic +chuckle +chunk +churn +cigar +cinnamon +circle +citizen +city +civil +claim +clap +clarify +claw +clay +clean +clerk +clever +click +client +cliff +climb +clinic +clip +clock +clog +close +cloth +cloud +clown +club +clump +cluster +clutch +coach +coast +coconut +code +coffee +coil +coin +collect +color +column +combine +come +comfort +comic +common +company +concert +conduct +confirm +congress +connect +consider +control +convince +cook +cool +copper +copy +coral +core +corn +correct +cost +cotton +couch +country +couple +course +cousin +cover +coyote +crack +cradle +craft +cram +crane +crash +crater +crawl +crazy +cream +credit +creek +crew +cricket +crime +crisp +critic +crop +cross +crouch +crowd +crucial +cruel +cruise +crumble +crunch +crush +cry +crystal +cube +culture +cup +cupboard +curious +current +curtain +curve +cushion +custom +cute +cycle +dad +damage +damp +dance +danger +daring +dash +daughter +dawn +day +deal +debate +debris +decade +december +decide +decline +decorate +decrease +deer +defense +define +defy +degree +delay +deliver +demand +demise +denial +dentist +deny +depart +depend +deposit +depth +deputy +derive +describe +desert +design +desk +despair +destroy +detail +detect +develop +device +devote +diagram +dial +diamond +diary +dice +diesel +diet +differ +digital +dignity +dilemma +dinner +dinosaur +direct +dirt +disagree +discover +disease +dish +dismiss +disorder +display +distance +divert +divide +divorce +dizzy +doctor +document +dog +doll +dolphin +domain +donate +donkey +donor +door +dose +double +dove +draft +dragon +drama +drastic +draw +dream +dress +drift +drill +drink +drip +drive +drop +drum +dry +duck +dumb +dune +during +dust +dutch +duty +dwarf +dynamic +eager +eagle +early +earn +earth +easily +east +easy +echo +ecology +economy +edge +edit +educate +effort +egg +eight +either +elbow +elder +electric +elegant +element +elephant +elevator +elite +else +embark +embody +embrace +emerge +emotion +employ +empower +empty +enable +enact +end +endless +endorse +enemy +energy +enforce +engage +engine +enhance +enjoy +enlist +enough +enrich +enroll +ensure +enter +entire +entry +envelope +episode +equal +equip +era +erase +erode +erosion +error +erupt +escape +essay +essence +estate +eternal +ethics +evidence +evil +evoke +evolve +exact +example +excess +exchange +excite +exclude +excuse +execute +exercise 
+exhaust +exhibit +exile +exist +exit +exotic +expand +expect +expire +explain +expose +express +extend +extra +eye +eyebrow +fabric +face +faculty +fade +faint +faith +fall +false +fame +family +famous +fan +fancy +fantasy +farm +fashion +fat +fatal +father +fatigue +fault +favorite +feature +february +federal +fee +feed +feel +female +fence +festival +fetch +fever +few +fiber +fiction +field +figure +file +film +filter +final +find +fine +finger +finish +fire +firm +first +fiscal +fish +fit +fitness +fix +flag +flame +flash +flat +flavor +flee +flight +flip +float +flock +floor +flower +fluid +flush +fly +foam +focus +fog +foil +fold +follow +food +foot +force +forest +forget +fork +fortune +forum +forward +fossil +foster +found +fox +fragile +frame +frequent +fresh +friend +fringe +frog +front +frost +frown +frozen +fruit +fuel +fun +funny +furnace +fury +future +gadget +gain +galaxy +gallery +game +gap +garage +garbage +garden +garlic +garment +gas +gasp +gate +gather +gauge +gaze +general +genius +genre +gentle +genuine +gesture +ghost +giant +gift +giggle +ginger +giraffe +girl +give +glad +glance +glare +glass +glide +glimpse +globe +gloom +glory +glove +glow +glue +goat +goddess +gold +good +goose +gorilla +gospel +gossip +govern +gown +grab +grace +grain +grant +grape +grass +gravity +great +green +grid +grief +grit +grocery +group +grow +grunt +guard +guess +guide +guilt +guitar +gun +gym +habit +hair +half +hammer +hamster +hand +happy +harbor +hard +harsh +harvest +hat +have +hawk +hazard +head +health +heart +heavy +hedgehog +height +hello +helmet +help +hen +hero +hidden +high +hill +hint +hip +hire +history +hobby +hockey +hold +hole +holiday +hollow +home +honey +hood +hope +horn +horror +horse +hospital +host +hotel +hour +hover +hub +huge +human +humble +humor +hundred +hungry +hunt +hurdle +hurry +hurt +husband +hybrid +ice +icon +idea +identify +idle +ignore +ill +illegal +illness +image +imitate +immense +immune +impact +impose +improve +impulse +inch +include +income +increase +index +indicate +indoor +industry +infant +inflict +inform +inhale +inherit +initial +inject +injury +inmate +inner +innocent +input +inquiry +insane +insect +inside +inspire +install +intact +interest +into +invest +invite +involve +iron +island +isolate +issue +item +ivory +jacket +jaguar +jar +jazz +jealous +jeans +jelly +jewel +job +join +joke +journey +joy +judge +juice +jump +jungle +junior +junk +just +kangaroo +keen +keep +ketchup +key +kick +kid +kidney +kind +kingdom +kiss +kit +kitchen +kite +kitten +kiwi +knee +knife +knock +know +lab +label +labor +ladder +lady +lake +lamp +language +laptop +large +later +latin +laugh +laundry +lava +law +lawn +lawsuit +layer +lazy +leader +leaf +learn +leave +lecture +left +leg +legal +legend +leisure +lemon +lend +length +lens +leopard +lesson +letter +level +liar +liberty +library +license +life +lift +light +like +limb +limit +link +lion +liquid +list +little +live +lizard +load +loan +lobster +local +lock +logic +lonely +long +loop +lottery +loud +lounge +love +loyal +lucky +luggage +lumber +lunar +lunch +luxury +lyrics +machine +mad +magic +magnet +maid +mail +main +major +make +mammal +man +manage +mandate +mango +mansion +manual +maple +marble +march +margin +marine +market +marriage +mask +mass +master +match +material +math +matrix +matter +maximum +maze +meadow +mean +measure +meat +mechanic +medal +media +melody +melt +member +memory +mention +menu +mercy +merge +merit +merry +mesh +message +metal +method +middle +midnight +milk 
+million +mimic +mind +minimum +minor +minute +miracle +mirror +misery +miss +mistake +mix +mixed +mixture +mobile +model +modify +mom +moment +monitor +monkey +monster +month +moon +moral +more +morning +mosquito +mother +motion +motor +mountain +mouse +move +movie +much +muffin +mule +multiply +muscle +museum +mushroom +music +must +mutual +myself +mystery +myth +naive +name +napkin +narrow +nasty +nation +nature +near +neck +need +negative +neglect +neither +nephew +nerve +nest +net +network +neutral +never +news +next +nice +night +noble +noise +nominee +noodle +normal +north +nose +notable +note +nothing +notice +novel +now +nuclear +number +nurse +nut +oak +obey +object +oblige +obscure +observe +obtain +obvious +occur +ocean +october +odor +off +offer +office +often +oil +okay +old +olive +olympic +omit +once +one +onion +online +only +open +opera +opinion +oppose +option +orange +orbit +orchard +order +ordinary +organ +orient +original +orphan +ostrich +other +outdoor +outer +output +outside +oval +oven +over +own +owner +oxygen +oyster +ozone +pact +paddle +page +pair +palace +palm +panda +panel +panic +panther +paper +parade +parent +park +parrot +party +pass +patch +path +patient +patrol +pattern +pause +pave +payment +peace +peanut +pear +peasant +pelican +pen +penalty +pencil +people +pepper +perfect +permit +person +pet +phone +photo +phrase +physical +piano +picnic +picture +piece +pig +pigeon +pill +pilot +pink +pioneer +pipe +pistol +pitch +pizza +place +planet +plastic +plate +play +please +pledge +pluck +plug +plunge +poem +poet +point +polar +pole +police +pond +pony +pool +popular +portion +position +possible +post +potato +pottery +poverty +powder +power +practice +praise +predict +prefer +prepare +present +pretty +prevent +price +pride +primary +print +priority +prison +private +prize +problem +process +produce +profit +program +project +promote +proof +property +prosper +protect +proud +provide +public +pudding +pull +pulp +pulse +pumpkin +punch +pupil +puppy +purchase +purity +purpose +purse +push +put +puzzle +pyramid +quality +quantum +quarter +question +quick +quit +quiz +quote +rabbit +raccoon +race +rack +radar +radio +rail +rain +raise +rally +ramp +ranch +random +range +rapid +rare +rate +rather +raven +raw +razor +ready +real +reason +rebel +rebuild +recall +receive +recipe +record +recycle +reduce +reflect +reform +refuse +region +regret +regular +reject +relax +release +relief +rely +remain +remember +remind +remove +render +renew +rent +reopen +repair +repeat +replace +report +require +rescue +resemble +resist +resource +response +result +retire +retreat +return +reunion +reveal +review +reward +rhythm +rib +ribbon +rice +rich +ride +ridge +rifle +right +rigid +ring +riot +ripple +risk +ritual +rival +river +road +roast +robot +robust +rocket +romance +roof +rookie +room +rose +rotate +rough +round +route +royal +rubber +rude +rug +rule +run +runway +rural +sad +saddle +sadness +safe +sail +salad +salmon +salon +salt +salute +same +sample +sand +satisfy +satoshi +sauce +sausage +save +say +scale +scan +scare +scatter +scene +scheme +school +science +scissors +scorpion +scout +scrap +screen +script +scrub +sea +search +season +seat +second +secret +section +security +seed +seek +segment +select +sell +seminar +senior +sense +sentence +series +service +session +settle +setup +seven +shadow +shaft +shallow +share +shed +shell +sheriff +shield +shift +shine +ship +shiver +shock +shoe +shoot +shop +short +shoulder +shove +shrimp +shrug +shuffle +shy +sibling 
+sick +side +siege +sight +sign +silent +silk +silly +silver +similar +simple +since +sing +siren +sister +situate +six +size +skate +sketch +ski +skill +skin +skirt +skull +slab +slam +sleep +slender +slice +slide +slight +slim +slogan +slot +slow +slush +small +smart +smile +smoke +smooth +snack +snake +snap +sniff +snow +soap +soccer +social +sock +soda +soft +solar +soldier +solid +solution +solve +someone +song +soon +sorry +sort +soul +sound +soup +source +south +space +spare +spatial +spawn +speak +special +speed +spell +spend +sphere +spice +spider +spike +spin +spirit +split +spoil +sponsor +spoon +sport +spot +spray +spread +spring +spy +square +squeeze +squirrel +stable +stadium +staff +stage +stairs +stamp +stand +start +state +stay +steak +steel +stem +step +stereo +stick +still +sting +stock +stomach +stone +stool +story +stove +strategy +street +strike +strong +struggle +student +stuff +stumble +style +subject +submit +subway +success +such +sudden +suffer +sugar +suggest +suit +summer +sun +sunny +sunset +super +supply +supreme +sure +surface +surge +surprise +surround +survey +suspect +sustain +swallow +swamp +swap +swarm +swear +sweet +swift +swim +swing +switch +sword +symbol +symptom +syrup +system +table +tackle +tag +tail +talent +talk +tank +tape +target +task +taste +tattoo +taxi +teach +team +tell +ten +tenant +tennis +tent +term +test +text +thank +that +theme +then +theory +there +they +thing +this +thought +three +thrive +throw +thumb +thunder +ticket +tide +tiger +tilt +timber +time +tiny +tip +tired +tissue +title +toast +tobacco +today +toddler +toe +together +toilet +token +tomato +tomorrow +tone +tongue +tonight +tool +tooth +top +topic +topple +torch +tornado +tortoise +toss +total +tourist +toward +tower +town +toy +track +trade +traffic +tragic +train +transfer +trap +trash +travel +tray +treat +tree +trend +trial +tribe +trick +trigger +trim +trip +trophy +trouble +truck +true +truly +trumpet +trust +truth +try +tube +tuition +tumble +tuna +tunnel +turkey +turn +turtle +twelve +twenty +twice +twin +twist +two +type +typical +ugly +umbrella +unable +unaware +uncle +uncover +under +undo +unfair +unfold +unhappy +uniform +unique +unit +universe +unknown +unlock +until +unusual +unveil +update +upgrade +uphold +upon +upper +upset +urban +urge +usage +use +used +useful +useless +usual +utility +vacant +vacuum +vague +valid +valley +valve +van +vanish +vapor +various +vast +vault +vehicle +velvet +vendor +venture +venue +verb +verify +version +very +vessel +veteran +viable +vibrant +vicious +victory +video +view +village +vintage +violin +virtual +virus +visa +visit +visual +vital +vivid +vocal +voice +void +volcano +volume +vote +voyage +wage +wagon +wait +walk +wall +walnut +want +warfare +warm +warrior +wash +wasp +waste +water +wave +way +wealth +weapon +wear +weasel +weather +web +wedding +weekend +weird +welcome +west +wet +whale +what +wheat +wheel +when +where +whip +whisper +wide +width +wife +wild +will +win +window +wine +wing +wink +winner +winter +wire +wisdom +wise +wish +witness +wolf +woman +wonder +wood +wool +word +work +world +worry +worth +wrap +wreck +wrestle +wrist +write +wrong +yard +year +yellow +you +young +youth +zebra +zero +zone +zoo diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/french.txt b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/french.txt new file mode 100644 index 00000000..1d749904 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/french.txt 
@@ -0,0 +1,2048 @@ +abaisser +abandon +abdiquer +abeille +abolir +aborder +aboutir +aboyer +abrasif +abreuver +abriter +abroger +abrupt +absence +absolu +absurde +abusif +abyssal +académie +acajou +acarien +accabler +accepter +acclamer +accolade +accroche +accuser +acerbe +achat +acheter +aciduler +acier +acompte +acquérir +acronyme +acteur +actif +actuel +adepte +adéquat +adhésif +adjectif +adjuger +admettre +admirer +adopter +adorer +adoucir +adresse +adroit +adulte +adverbe +aérer +aéronef +affaire +affecter +affiche +affreux +affubler +agacer +agencer +agile +agiter +agrafer +agréable +agrume +aider +aiguille +ailier +aimable +aisance +ajouter +ajuster +alarmer +alchimie +alerte +algèbre +algue +aliéner +aliment +alléger +alliage +allouer +allumer +alourdir +alpaga +altesse +alvéole +amateur +ambigu +ambre +aménager +amertume +amidon +amiral +amorcer +amour +amovible +amphibie +ampleur +amusant +analyse +anaphore +anarchie +anatomie +ancien +anéantir +angle +angoisse +anguleux +animal +annexer +annonce +annuel +anodin +anomalie +anonyme +anormal +antenne +antidote +anxieux +apaiser +apéritif +aplanir +apologie +appareil +appeler +apporter +appuyer +aquarium +aqueduc +arbitre +arbuste +ardeur +ardoise +argent +arlequin +armature +armement +armoire +armure +arpenter +arracher +arriver +arroser +arsenic +artériel +article +aspect +asphalte +aspirer +assaut +asservir +assiette +associer +assurer +asticot +astre +astuce +atelier +atome +atrium +atroce +attaque +attentif +attirer +attraper +aubaine +auberge +audace +audible +augurer +aurore +automne +autruche +avaler +avancer +avarice +avenir +averse +aveugle +aviateur +avide +avion +aviser +avoine +avouer +avril +axial +axiome +badge +bafouer +bagage +baguette +baignade +balancer +balcon +baleine +balisage +bambin +bancaire +bandage +banlieue +bannière +banquier +barbier +baril +baron +barque +barrage +bassin +bastion +bataille +bateau +batterie +baudrier +bavarder +belette +bélier +belote +bénéfice +berceau +berger +berline +bermuda +besace +besogne +bétail +beurre +biberon +bicycle +bidule +bijou +bilan +bilingue +billard +binaire +biologie +biopsie +biotype +biscuit +bison +bistouri +bitume +bizarre +blafard +blague +blanchir +blessant +blinder +blond +bloquer +blouson +bobard +bobine +boire +boiser +bolide +bonbon +bondir +bonheur +bonifier +bonus +bordure +borne +botte +boucle +boueux +bougie +boulon +bouquin +bourse +boussole +boutique +boxeur +branche +brasier +brave +brebis +brèche +breuvage +bricoler +brigade +brillant +brioche +brique +brochure +broder +bronzer +brousse +broyeur +brume +brusque +brutal +bruyant +buffle +buisson +bulletin +bureau +burin +bustier +butiner +butoir +buvable +buvette +cabanon +cabine +cachette +cadeau +cadre +caféine +caillou +caisson +calculer +calepin +calibre +calmer +calomnie +calvaire +camarade +caméra +camion +campagne +canal +caneton +canon +cantine +canular +capable +caporal +caprice +capsule +capter +capuche +carabine +carbone +caresser +caribou +carnage +carotte +carreau +carton +cascade +casier +casque +cassure +causer +caution +cavalier +caverne +caviar +cédille +ceinture +céleste +cellule +cendrier +censurer +central +cercle +cérébral +cerise +cerner +cerveau +cesser +chagrin +chaise +chaleur +chambre +chance +chapitre +charbon +chasseur +chaton +chausson +chavirer +chemise +chenille +chéquier +chercher +cheval +chien +chiffre +chignon +chimère +chiot +chlorure +chocolat +choisir +chose +chouette +chrome +chute +cigare +cigogne +cimenter +cinéma +cintrer +circuler +cirer +cirque +citerne 
+citoyen +citron +civil +clairon +clameur +claquer +classe +clavier +client +cligner +climat +clivage +cloche +clonage +cloporte +cobalt +cobra +cocasse +cocotier +coder +codifier +coffre +cogner +cohésion +coiffer +coincer +colère +colibri +colline +colmater +colonel +combat +comédie +commande +compact +concert +conduire +confier +congeler +connoter +consonne +contact +convexe +copain +copie +corail +corbeau +cordage +corniche +corpus +correct +cortège +cosmique +costume +coton +coude +coupure +courage +couteau +couvrir +coyote +crabe +crainte +cravate +crayon +créature +créditer +crémeux +creuser +crevette +cribler +crier +cristal +critère +croire +croquer +crotale +crucial +cruel +crypter +cubique +cueillir +cuillère +cuisine +cuivre +culminer +cultiver +cumuler +cupide +curatif +curseur +cyanure +cycle +cylindre +cynique +daigner +damier +danger +danseur +dauphin +débattre +débiter +déborder +débrider +débutant +décaler +décembre +déchirer +décider +déclarer +décorer +décrire +décupler +dédale +déductif +déesse +défensif +défiler +défrayer +dégager +dégivrer +déglutir +dégrafer +déjeuner +délice +déloger +demander +demeurer +démolir +dénicher +dénouer +dentelle +dénuder +départ +dépenser +déphaser +déplacer +déposer +déranger +dérober +désastre +descente +désert +désigner +désobéir +dessiner +destrier +détacher +détester +détourer +détresse +devancer +devenir +deviner +devoir +diable +dialogue +diamant +dicter +différer +digérer +digital +digne +diluer +dimanche +diminuer +dioxyde +directif +diriger +discuter +disposer +dissiper +distance +divertir +diviser +docile +docteur +dogme +doigt +domaine +domicile +dompter +donateur +donjon +donner +dopamine +dortoir +dorure +dosage +doseur +dossier +dotation +douanier +double +douceur +douter +doyen +dragon +draper +dresser +dribbler +droiture +duperie +duplexe +durable +durcir +dynastie +éblouir +écarter +écharpe +échelle +éclairer +éclipse +éclore +écluse +école +économie +écorce +écouter +écraser +écrémer +écrivain +écrou +écume +écureuil +édifier +éduquer +effacer +effectif +effigie +effort +effrayer +effusion +égaliser +égarer +éjecter +élaborer +élargir +électron +élégant +éléphant +élève +éligible +élitisme +éloge +élucider +éluder +emballer +embellir +embryon +émeraude +émission +emmener +émotion +émouvoir +empereur +employer +emporter +emprise +émulsion +encadrer +enchère +enclave +encoche +endiguer +endosser +endroit +enduire +énergie +enfance +enfermer +enfouir +engager +engin +englober +énigme +enjamber +enjeu +enlever +ennemi +ennuyeux +enrichir +enrobage +enseigne +entasser +entendre +entier +entourer +entraver +énumérer +envahir +enviable +envoyer +enzyme +éolien +épaissir +épargne +épatant +épaule +épicerie +épidémie +épier +épilogue +épine +épisode +épitaphe +époque +épreuve +éprouver +épuisant +équerre +équipe +ériger +érosion +erreur +éruption +escalier +espadon +espèce +espiègle +espoir +esprit +esquiver +essayer +essence +essieu +essorer +estime +estomac +estrade +étagère +étaler +étanche +étatique +éteindre +étendoir +éternel +éthanol +éthique +ethnie +étirer +étoffer +étoile +étonnant +étourdir +étrange +étroit +étude +euphorie +évaluer +évasion +éventail +évidence +éviter +évolutif +évoquer +exact +exagérer +exaucer +exceller +excitant +exclusif +excuse +exécuter +exemple +exercer +exhaler +exhorter +exigence +exiler +exister +exotique +expédier +explorer +exposer +exprimer +exquis +extensif +extraire +exulter +fable +fabuleux +facette +facile +facture +faiblir +falaise +fameux +famille +farceur +farfelu +farine 
+farouche +fasciner +fatal +fatigue +faucon +fautif +faveur +favori +fébrile +féconder +fédérer +félin +femme +fémur +fendoir +féodal +fermer +féroce +ferveur +festival +feuille +feutre +février +fiasco +ficeler +fictif +fidèle +figure +filature +filetage +filière +filleul +filmer +filou +filtrer +financer +finir +fiole +firme +fissure +fixer +flairer +flamme +flasque +flatteur +fléau +flèche +fleur +flexion +flocon +flore +fluctuer +fluide +fluvial +folie +fonderie +fongible +fontaine +forcer +forgeron +formuler +fortune +fossile +foudre +fougère +fouiller +foulure +fourmi +fragile +fraise +franchir +frapper +frayeur +frégate +freiner +frelon +frémir +frénésie +frère +friable +friction +frisson +frivole +froid +fromage +frontal +frotter +fruit +fugitif +fuite +fureur +furieux +furtif +fusion +futur +gagner +galaxie +galerie +gambader +garantir +gardien +garnir +garrigue +gazelle +gazon +géant +gélatine +gélule +gendarme +général +génie +genou +gentil +géologie +géomètre +géranium +germe +gestuel +geyser +gibier +gicler +girafe +givre +glace +glaive +glisser +globe +gloire +glorieux +golfeur +gomme +gonfler +gorge +gorille +goudron +gouffre +goulot +goupille +gourmand +goutte +graduel +graffiti +graine +grand +grappin +gratuit +gravir +grenat +griffure +griller +grimper +grogner +gronder +grotte +groupe +gruger +grutier +gruyère +guépard +guerrier +guide +guimauve +guitare +gustatif +gymnaste +gyrostat +habitude +hachoir +halte +hameau +hangar +hanneton +haricot +harmonie +harpon +hasard +hélium +hématome +herbe +hérisson +hermine +héron +hésiter +heureux +hiberner +hibou +hilarant +histoire +hiver +homard +hommage +homogène +honneur +honorer +honteux +horde +horizon +horloge +hormone +horrible +houleux +housse +hublot +huileux +humain +humble +humide +humour +hurler +hydromel +hygiène +hymne +hypnose +idylle +ignorer +iguane +illicite +illusion +image +imbiber +imiter +immense +immobile +immuable +impact +impérial +implorer +imposer +imprimer +imputer +incarner +incendie +incident +incliner +incolore +indexer +indice +inductif +inédit +ineptie +inexact +infini +infliger +informer +infusion +ingérer +inhaler +inhiber +injecter +injure +innocent +inoculer +inonder +inscrire +insecte +insigne +insolite +inspirer +instinct +insulter +intact +intense +intime +intrigue +intuitif +inutile +invasion +inventer +inviter +invoquer +ironique +irradier +irréel +irriter +isoler +ivoire +ivresse +jaguar +jaillir +jambe +janvier +jardin +jauger +jaune +javelot +jetable +jeton +jeudi +jeunesse +joindre +joncher +jongler +joueur +jouissif +journal +jovial +joyau +joyeux +jubiler +jugement +junior +jupon +juriste +justice +juteux +juvénile +kayak +kimono +kiosque +label +labial +labourer +lacérer +lactose +lagune +laine +laisser +laitier +lambeau +lamelle +lampe +lanceur +langage +lanterne +lapin +largeur +larme +laurier +lavabo +lavoir +lecture +légal +léger +légume +lessive +lettre +levier +lexique +lézard +liasse +libérer +libre +licence +licorne +liège +lièvre +ligature +ligoter +ligue +limer +limite +limonade +limpide +linéaire +lingot +lionceau +liquide +lisière +lister +lithium +litige +littoral +livreur +logique +lointain +loisir +lombric +loterie +louer +lourd +loutre +louve +loyal +lubie +lucide +lucratif +lueur +lugubre +luisant +lumière +lunaire +lundi +luron +lutter +luxueux +machine +magasin +magenta +magique +maigre +maillon +maintien +mairie +maison +majorer +malaxer +maléfice +malheur +malice +mallette +mammouth +mandater +maniable +manquant +manteau +manuel +marathon +marbre +marchand 
+mardi +maritime +marqueur +marron +marteler +mascotte +massif +matériel +matière +matraque +maudire +maussade +mauve +maximal +méchant +méconnu +médaille +médecin +méditer +méduse +meilleur +mélange +mélodie +membre +mémoire +menacer +mener +menhir +mensonge +mentor +mercredi +mérite +merle +messager +mesure +métal +météore +méthode +métier +meuble +miauler +microbe +miette +mignon +migrer +milieu +million +mimique +mince +minéral +minimal +minorer +minute +miracle +miroiter +missile +mixte +mobile +moderne +moelleux +mondial +moniteur +monnaie +monotone +monstre +montagne +monument +moqueur +morceau +morsure +mortier +moteur +motif +mouche +moufle +moulin +mousson +mouton +mouvant +multiple +munition +muraille +murène +murmure +muscle +muséum +musicien +mutation +muter +mutuel +myriade +myrtille +mystère +mythique +nageur +nappe +narquois +narrer +natation +nation +nature +naufrage +nautique +navire +nébuleux +nectar +néfaste +négation +négliger +négocier +neige +nerveux +nettoyer +neurone +neutron +neveu +niche +nickel +nitrate +niveau +noble +nocif +nocturne +noirceur +noisette +nomade +nombreux +nommer +normatif +notable +notifier +notoire +nourrir +nouveau +novateur +novembre +novice +nuage +nuancer +nuire +nuisible +numéro +nuptial +nuque +nutritif +obéir +objectif +obliger +obscur +observer +obstacle +obtenir +obturer +occasion +occuper +océan +octobre +octroyer +octupler +oculaire +odeur +odorant +offenser +officier +offrir +ogive +oiseau +oisillon +olfactif +olivier +ombrage +omettre +onctueux +onduler +onéreux +onirique +opale +opaque +opérer +opinion +opportun +opprimer +opter +optique +orageux +orange +orbite +ordonner +oreille +organe +orgueil +orifice +ornement +orque +ortie +osciller +osmose +ossature +otarie +ouragan +ourson +outil +outrager +ouvrage +ovation +oxyde +oxygène +ozone +paisible +palace +palmarès +palourde +palper +panache +panda +pangolin +paniquer +panneau +panorama +pantalon +papaye +papier +papoter +papyrus +paradoxe +parcelle +paresse +parfumer +parler +parole +parrain +parsemer +partager +parure +parvenir +passion +pastèque +paternel +patience +patron +pavillon +pavoiser +payer +paysage +peigne +peintre +pelage +pélican +pelle +pelouse +peluche +pendule +pénétrer +pénible +pensif +pénurie +pépite +péplum +perdrix +perforer +période +permuter +perplexe +persil +perte +peser +pétale +petit +pétrir +peuple +pharaon +phobie +phoque +photon +phrase +physique +piano +pictural +pièce +pierre +pieuvre +pilote +pinceau +pipette +piquer +pirogue +piscine +piston +pivoter +pixel +pizza +placard +plafond +plaisir +planer +plaque +plastron +plateau +pleurer +plexus +pliage +plomb +plonger +pluie +plumage +pochette +poésie +poète +pointe +poirier +poisson +poivre +polaire +policier +pollen +polygone +pommade +pompier +ponctuel +pondérer +poney +portique +position +posséder +posture +potager +poteau +potion +pouce +poulain +poumon +pourpre +poussin +pouvoir +prairie +pratique +précieux +prédire +préfixe +prélude +prénom +présence +prétexte +prévoir +primitif +prince +prison +priver +problème +procéder +prodige +profond +progrès +proie +projeter +prologue +promener +propre +prospère +protéger +prouesse +proverbe +prudence +pruneau +psychose +public +puceron +puiser +pulpe +pulsar +punaise +punitif +pupitre +purifier +puzzle +pyramide +quasar +querelle +question +quiétude +quitter +quotient +racine +raconter +radieux +ragondin +raideur +raisin +ralentir +rallonge +ramasser +rapide +rasage +ratisser +ravager +ravin +rayonner +réactif +réagir +réaliser +réanimer +recevoir 
+réciter +réclamer +récolter +recruter +reculer +recycler +rédiger +redouter +refaire +réflexe +réformer +refrain +refuge +régalien +région +réglage +régulier +réitérer +rejeter +rejouer +relatif +relever +relief +remarque +remède +remise +remonter +remplir +remuer +renard +renfort +renifler +renoncer +rentrer +renvoi +replier +reporter +reprise +reptile +requin +réserve +résineux +résoudre +respect +rester +résultat +rétablir +retenir +réticule +retomber +retracer +réunion +réussir +revanche +revivre +révolte +révulsif +richesse +rideau +rieur +rigide +rigoler +rincer +riposter +risible +risque +rituel +rival +rivière +rocheux +romance +rompre +ronce +rondin +roseau +rosier +rotatif +rotor +rotule +rouge +rouille +rouleau +routine +royaume +ruban +rubis +ruche +ruelle +rugueux +ruiner +ruisseau +ruser +rustique +rythme +sabler +saboter +sabre +sacoche +safari +sagesse +saisir +salade +salive +salon +saluer +samedi +sanction +sanglier +sarcasme +sardine +saturer +saugrenu +saumon +sauter +sauvage +savant +savonner +scalpel +scandale +scélérat +scénario +sceptre +schéma +science +scinder +score +scrutin +sculpter +séance +sécable +sécher +secouer +sécréter +sédatif +séduire +seigneur +séjour +sélectif +semaine +sembler +semence +séminal +sénateur +sensible +sentence +séparer +séquence +serein +sergent +sérieux +serrure +sérum +service +sésame +sévir +sevrage +sextuple +sidéral +siècle +siéger +siffler +sigle +signal +silence +silicium +simple +sincère +sinistre +siphon +sirop +sismique +situer +skier +social +socle +sodium +soigneux +soldat +soleil +solitude +soluble +sombre +sommeil +somnoler +sonde +songeur +sonnette +sonore +sorcier +sortir +sosie +sottise +soucieux +soudure +souffle +soulever +soupape +source +soutirer +souvenir +spacieux +spatial +spécial +sphère +spiral +stable +station +sternum +stimulus +stipuler +strict +studieux +stupeur +styliste +sublime +substrat +subtil +subvenir +succès +sucre +suffixe +suggérer +suiveur +sulfate +superbe +supplier +surface +suricate +surmener +surprise +sursaut +survie +suspect +syllabe +symbole +symétrie +synapse +syntaxe +système +tabac +tablier +tactile +tailler +talent +talisman +talonner +tambour +tamiser +tangible +tapis +taquiner +tarder +tarif +tartine +tasse +tatami +tatouage +taupe +taureau +taxer +témoin +temporel +tenaille +tendre +teneur +tenir +tension +terminer +terne +terrible +tétine +texte +thème +théorie +thérapie +thorax +tibia +tiède +timide +tirelire +tiroir +tissu +titane +titre +tituber +toboggan +tolérant +tomate +tonique +tonneau +toponyme +torche +tordre +tornade +torpille +torrent +torse +tortue +totem +toucher +tournage +tousser +toxine +traction +trafic +tragique +trahir +train +trancher +travail +trèfle +tremper +trésor +treuil +triage +tribunal +tricoter +trilogie +triomphe +tripler +triturer +trivial +trombone +tronc +tropical +troupeau +tuile +tulipe +tumulte +tunnel +turbine +tuteur +tutoyer +tuyau +tympan +typhon +typique +tyran +ubuesque +ultime +ultrason +unanime +unifier +union +unique +unitaire +univers +uranium +urbain +urticant +usage +usine +usuel +usure +utile +utopie +vacarme +vaccin +vagabond +vague +vaillant +vaincre +vaisseau +valable +valise +vallon +valve +vampire +vanille +vapeur +varier +vaseux +vassal +vaste +vecteur +vedette +végétal +véhicule +veinard +véloce +vendredi +vénérer +venger +venimeux +ventouse +verdure +vérin +vernir +verrou +verser +vertu +veston +vétéran +vétuste +vexant +vexer +viaduc +viande +victoire +vidange +vidéo +vignette +vigueur +vilain +village +vinaigre +violon 
+vipère +virement +virtuose +virus +visage +viseur +vision +visqueux +visuel +vital +vitesse +viticole +vitrine +vivace +vivipare +vocation +voguer +voile +voisin +voiture +volaille +volcan +voltiger +volume +vorace +vortex +voter +vouloir +voyage +voyelle +wagon +xénon +yacht +zèbre +zénith +zeste +zoologie diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/italian.txt b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/italian.txt new file mode 100644 index 00000000..c47370f4 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/italian.txt @@ -0,0 +1,2048 @@ +abaco +abbaglio +abbinato +abete +abisso +abolire +abrasivo +abrogato +accadere +accenno +accusato +acetone +achille +acido +acqua +acre +acrilico +acrobata +acuto +adagio +addebito +addome +adeguato +aderire +adipe +adottare +adulare +affabile +affetto +affisso +affranto +aforisma +afoso +africano +agave +agente +agevole +aggancio +agire +agitare +agonismo +agricolo +agrumeto +aguzzo +alabarda +alato +albatro +alberato +albo +albume +alce +alcolico +alettone +alfa +algebra +aliante +alibi +alimento +allagato +allegro +allievo +allodola +allusivo +almeno +alogeno +alpaca +alpestre +altalena +alterno +alticcio +altrove +alunno +alveolo +alzare +amalgama +amanita +amarena +ambito +ambrato +ameba +america +ametista +amico +ammasso +ammenda +ammirare +ammonito +amore +ampio +ampliare +amuleto +anacardo +anagrafe +analista +anarchia +anatra +anca +ancella +ancora +andare +andrea +anello +angelo +angolare +angusto +anima +annegare +annidato +anno +annuncio +anonimo +anticipo +anzi +apatico +apertura +apode +apparire +appetito +appoggio +approdo +appunto +aprile +arabica +arachide +aragosta +araldica +arancio +aratura +arazzo +arbitro +archivio +ardito +arenile +argento +argine +arguto +aria +armonia +arnese +arredato +arringa +arrosto +arsenico +arso +artefice +arzillo +asciutto +ascolto +asepsi +asettico +asfalto +asino +asola +aspirato +aspro +assaggio +asse +assoluto +assurdo +asta +astenuto +astice +astratto +atavico +ateismo +atomico +atono +attesa +attivare +attorno +attrito +attuale +ausilio +austria +autista +autonomo +autunno +avanzato +avere +avvenire +avviso +avvolgere +azione +azoto +azzimo +azzurro +babele +baccano +bacino +baco +badessa +badilata +bagnato +baita +balcone +baldo +balena +ballata +balzano +bambino +bandire +baraonda +barbaro +barca +baritono +barlume +barocco +basilico +basso +batosta +battuto +baule +bava +bavosa +becco +beffa +belgio +belva +benda +benevole +benigno +benzina +bere +berlina +beta +bibita +bici +bidone +bifido +biga +bilancia +bimbo +binocolo +biologo +bipede +bipolare +birbante +birra +biscotto +bisesto +bisnonno +bisonte +bisturi +bizzarro +blando +blatta +bollito +bonifico +bordo +bosco +botanico +bottino +bozzolo +braccio +bradipo +brama +branca +bravura +bretella +brevetto +brezza +briglia +brillante +brindare +broccolo +brodo +bronzina +brullo +bruno +bubbone +buca +budino +buffone +buio +bulbo +buono +burlone +burrasca +bussola +busta +cadetto +caduco +calamaro +calcolo +calesse +calibro +calmo +caloria +cambusa +camerata +camicia +cammino +camola +campale +canapa +candela +cane +canino +canotto +cantina +capace +capello +capitolo +capogiro +cappero +capra +capsula +carapace +carcassa +cardo +carisma +carovana +carretto +cartolina +casaccio +cascata +caserma +caso +cassone +castello +casuale +catasta +catena +catrame +cauto +cavillo +cedibile +cedrata +cefalo +celebre +cellulare +cena +cenone +centesimo +ceramica 
+cercare +certo +cerume +cervello +cesoia +cespo +ceto +chela +chiaro +chicca +chiedere +chimera +china +chirurgo +chitarra +ciao +ciclismo +cifrare +cigno +cilindro +ciottolo +circa +cirrosi +citrico +cittadino +ciuffo +civetta +civile +classico +clinica +cloro +cocco +codardo +codice +coerente +cognome +collare +colmato +colore +colposo +coltivato +colza +coma +cometa +commando +comodo +computer +comune +conciso +condurre +conferma +congelare +coniuge +connesso +conoscere +consumo +continuo +convegno +coperto +copione +coppia +copricapo +corazza +cordata +coricato +cornice +corolla +corpo +corredo +corsia +cortese +cosmico +costante +cottura +covato +cratere +cravatta +creato +credere +cremoso +crescita +creta +criceto +crinale +crisi +critico +croce +cronaca +crostata +cruciale +crusca +cucire +cuculo +cugino +cullato +cupola +curatore +cursore +curvo +cuscino +custode +dado +daino +dalmata +damerino +daniela +dannoso +danzare +datato +davanti +davvero +debutto +decennio +deciso +declino +decollo +decreto +dedicato +definito +deforme +degno +delegare +delfino +delirio +delta +demenza +denotato +dentro +deposito +derapata +derivare +deroga +descritto +deserto +desiderio +desumere +detersivo +devoto +diametro +dicembre +diedro +difeso +diffuso +digerire +digitale +diluvio +dinamico +dinnanzi +dipinto +diploma +dipolo +diradare +dire +dirotto +dirupo +disagio +discreto +disfare +disgelo +disposto +distanza +disumano +dito +divano +divelto +dividere +divorato +doblone +docente +doganale +dogma +dolce +domato +domenica +dominare +dondolo +dono +dormire +dote +dottore +dovuto +dozzina +drago +druido +dubbio +dubitare +ducale +duna +duomo +duplice +duraturo +ebano +eccesso +ecco +eclissi +economia +edera +edicola +edile +editoria +educare +egemonia +egli +egoismo +egregio +elaborato +elargire +elegante +elencato +eletto +elevare +elfico +elica +elmo +elsa +eluso +emanato +emblema +emesso +emiro +emotivo +emozione +empirico +emulo +endemico +enduro +energia +enfasi +enoteca +entrare +enzima +epatite +epilogo +episodio +epocale +eppure +equatore +erario +erba +erboso +erede +eremita +erigere +ermetico +eroe +erosivo +errante +esagono +esame +esanime +esaudire +esca +esempio +esercito +esibito +esigente +esistere +esito +esofago +esortato +esoso +espanso +espresso +essenza +esso +esteso +estimare +estonia +estroso +esultare +etilico +etnico +etrusco +etto +euclideo +europa +evaso +evidenza +evitato +evoluto +evviva +fabbrica +faccenda +fachiro +falco +famiglia +fanale +fanfara +fango +fantasma +fare +farfalla +farinoso +farmaco +fascia +fastoso +fasullo +faticare +fato +favoloso +febbre +fecola +fede +fegato +felpa +feltro +femmina +fendere +fenomeno +fermento +ferro +fertile +fessura +festivo +fetta +feudo +fiaba +fiducia +fifa +figurato +filo +finanza +finestra +finire +fiore +fiscale +fisico +fiume +flacone +flamenco +flebo +flemma +florido +fluente +fluoro +fobico +focaccia +focoso +foderato +foglio +folata +folclore +folgore +fondente +fonetico +fonia +fontana +forbito +forchetta +foresta +formica +fornaio +foro +fortezza +forzare +fosfato +fosso +fracasso +frana +frassino +fratello +freccetta +frenata +fresco +frigo +frollino +fronde +frugale +frutta +fucilata +fucsia +fuggente +fulmine +fulvo +fumante +fumetto +fumoso +fune +funzione +fuoco +furbo +furgone +furore +fuso +futile +gabbiano +gaffe +galateo +gallina +galoppo +gambero +gamma +garanzia +garbo +garofano +garzone +gasdotto +gasolio +gastrico +gatto +gaudio +gazebo +gazzella +geco +gelatina +gelso +gemello +gemmato +gene +genitore 
+gennaio +genotipo +gergo +ghepardo +ghiaccio +ghisa +giallo +gilda +ginepro +giocare +gioiello +giorno +giove +girato +girone +gittata +giudizio +giurato +giusto +globulo +glutine +gnomo +gobba +golf +gomito +gommone +gonfio +gonna +governo +gracile +grado +grafico +grammo +grande +grattare +gravoso +grazia +greca +gregge +grifone +grigio +grinza +grotta +gruppo +guadagno +guaio +guanto +guardare +gufo +guidare +ibernato +icona +identico +idillio +idolo +idra +idrico +idrogeno +igiene +ignaro +ignorato +ilare +illeso +illogico +illudere +imballo +imbevuto +imbocco +imbuto +immane +immerso +immolato +impacco +impeto +impiego +importo +impronta +inalare +inarcare +inattivo +incanto +incendio +inchino +incisivo +incluso +incontro +incrocio +incubo +indagine +india +indole +inedito +infatti +infilare +inflitto +ingaggio +ingegno +inglese +ingordo +ingrosso +innesco +inodore +inoltrare +inondato +insano +insetto +insieme +insonnia +insulina +intasato +intero +intonaco +intuito +inumidire +invalido +invece +invito +iperbole +ipnotico +ipotesi +ippica +iride +irlanda +ironico +irrigato +irrorare +isolato +isotopo +isterico +istituto +istrice +italia +iterare +labbro +labirinto +lacca +lacerato +lacrima +lacuna +laddove +lago +lampo +lancetta +lanterna +lardoso +larga +laringe +lastra +latenza +latino +lattuga +lavagna +lavoro +legale +leggero +lembo +lentezza +lenza +leone +lepre +lesivo +lessato +lesto +letterale +leva +levigato +libero +lido +lievito +lilla +limatura +limitare +limpido +lineare +lingua +liquido +lira +lirica +lisca +lite +litigio +livrea +locanda +lode +logica +lombare +londra +longevo +loquace +lorenzo +loto +lotteria +luce +lucidato +lumaca +luminoso +lungo +lupo +luppolo +lusinga +lusso +lutto +macabro +macchina +macero +macinato +madama +magico +maglia +magnete +magro +maiolica +malafede +malgrado +malinteso +malsano +malto +malumore +mana +mancia +mandorla +mangiare +manifesto +mannaro +manovra +mansarda +mantide +manubrio +mappa +maratona +marcire +maretta +marmo +marsupio +maschera +massaia +mastino +materasso +matricola +mattone +maturo +mazurca +meandro +meccanico +mecenate +medesimo +meditare +mega +melassa +melis +melodia +meninge +meno +mensola +mercurio +merenda +merlo +meschino +mese +messere +mestolo +metallo +metodo +mettere +miagolare +mica +micelio +michele +microbo +midollo +miele +migliore +milano +milite +mimosa +minerale +mini +minore +mirino +mirtillo +miscela +missiva +misto +misurare +mitezza +mitigare +mitra +mittente +mnemonico +modello +modifica +modulo +mogano +mogio +mole +molosso +monastero +monco +mondina +monetario +monile +monotono +monsone +montato +monviso +mora +mordere +morsicato +mostro +motivato +motosega +motto +movenza +movimento +mozzo +mucca +mucosa +muffa +mughetto +mugnaio +mulatto +mulinello +multiplo +mummia +munto +muovere +murale +musa +muscolo +musica +mutevole +muto +nababbo +nafta +nanometro +narciso +narice +narrato +nascere +nastrare +naturale +nautica +naviglio +nebulosa +necrosi +negativo +negozio +nemmeno +neofita +neretto +nervo +nessuno +nettuno +neutrale +neve +nevrotico +nicchia +ninfa +nitido +nobile +nocivo +nodo +nome +nomina +nordico +normale +norvegese +nostrano +notare +notizia +notturno +novella +nucleo +nulla +numero +nuovo +nutrire +nuvola +nuziale +oasi +obbedire +obbligo +obelisco +oblio +obolo +obsoleto +occasione +occhio +occidente +occorrere +occultare +ocra +oculato +odierno +odorare +offerta +offrire +offuscato +oggetto +oggi +ognuno +olandese +olfatto +oliato +oliva +ologramma +oltre +omaggio 
+ombelico +ombra +omega +omissione +ondoso +onere +onice +onnivoro +onorevole +onta +operato +opinione +opposto +oracolo +orafo +ordine +orecchino +orefice +orfano +organico +origine +orizzonte +orma +ormeggio +ornativo +orologio +orrendo +orribile +ortensia +ortica +orzata +orzo +osare +oscurare +osmosi +ospedale +ospite +ossa +ossidare +ostacolo +oste +otite +otre +ottagono +ottimo +ottobre +ovale +ovest +ovino +oviparo +ovocito +ovunque +ovviare +ozio +pacchetto +pace +pacifico +padella +padrone +paese +paga +pagina +palazzina +palesare +pallido +palo +palude +pandoro +pannello +paolo +paonazzo +paprica +parabola +parcella +parere +pargolo +pari +parlato +parola +partire +parvenza +parziale +passivo +pasticca +patacca +patologia +pattume +pavone +peccato +pedalare +pedonale +peggio +peloso +penare +pendice +penisola +pennuto +penombra +pensare +pentola +pepe +pepita +perbene +percorso +perdonato +perforare +pergamena +periodo +permesso +perno +perplesso +persuaso +pertugio +pervaso +pesatore +pesista +peso +pestifero +petalo +pettine +petulante +pezzo +piacere +pianta +piattino +piccino +picozza +piega +pietra +piffero +pigiama +pigolio +pigro +pila +pilifero +pillola +pilota +pimpante +pineta +pinna +pinolo +pioggia +piombo +piramide +piretico +pirite +pirolisi +pitone +pizzico +placebo +planare +plasma +platano +plenario +pochezza +poderoso +podismo +poesia +poggiare +polenta +poligono +pollice +polmonite +polpetta +polso +poltrona +polvere +pomice +pomodoro +ponte +popoloso +porfido +poroso +porpora +porre +portata +posa +positivo +possesso +postulato +potassio +potere +pranzo +prassi +pratica +precluso +predica +prefisso +pregiato +prelievo +premere +prenotare +preparato +presenza +pretesto +prevalso +prima +principe +privato +problema +procura +produrre +profumo +progetto +prolunga +promessa +pronome +proposta +proroga +proteso +prova +prudente +prugna +prurito +psiche +pubblico +pudica +pugilato +pugno +pulce +pulito +pulsante +puntare +pupazzo +pupilla +puro +quadro +qualcosa +quasi +querela +quota +raccolto +raddoppio +radicale +radunato +raffica +ragazzo +ragione +ragno +ramarro +ramingo +ramo +randagio +rantolare +rapato +rapina +rappreso +rasatura +raschiato +rasente +rassegna +rastrello +rata +ravveduto +reale +recepire +recinto +recluta +recondito +recupero +reddito +redimere +regalato +registro +regola +regresso +relazione +remare +remoto +renna +replica +reprimere +reputare +resa +residente +responso +restauro +rete +retina +retorica +rettifica +revocato +riassunto +ribadire +ribelle +ribrezzo +ricarica +ricco +ricevere +riciclato +ricordo +ricreduto +ridicolo +ridurre +rifasare +riflesso +riforma +rifugio +rigare +rigettato +righello +rilassato +rilevato +rimanere +rimbalzo +rimedio +rimorchio +rinascita +rincaro +rinforzo +rinnovo +rinomato +rinsavito +rintocco +rinuncia +rinvenire +riparato +ripetuto +ripieno +riportare +ripresa +ripulire +risata +rischio +riserva +risibile +riso +rispetto +ristoro +risultato +risvolto +ritardo +ritegno +ritmico +ritrovo +riunione +riva +riverso +rivincita +rivolto +rizoma +roba +robotico +robusto +roccia +roco +rodaggio +rodere +roditore +rogito +rollio +romantico +rompere +ronzio +rosolare +rospo +rotante +rotondo +rotula +rovescio +rubizzo +rubrica +ruga +rullino +rumine +rumoroso +ruolo +rupe +russare +rustico +sabato +sabbiare +sabotato +sagoma +salasso +saldatura +salgemma +salivare +salmone +salone +saltare +saluto +salvo +sapere +sapido +saporito +saraceno +sarcasmo +sarto +sassoso +satellite +satira +satollo +saturno +savana 
+savio +saziato +sbadiglio +sbalzo +sbancato +sbarra +sbattere +sbavare +sbendare +sbirciare +sbloccato +sbocciato +sbrinare +sbruffone +sbuffare +scabroso +scadenza +scala +scambiare +scandalo +scapola +scarso +scatenare +scavato +scelto +scenico +scettro +scheda +schiena +sciarpa +scienza +scindere +scippo +sciroppo +scivolo +sclerare +scodella +scolpito +scomparto +sconforto +scoprire +scorta +scossone +scozzese +scriba +scrollare +scrutinio +scuderia +scultore +scuola +scuro +scusare +sdebitare +sdoganare +seccatura +secondo +sedano +seggiola +segnalato +segregato +seguito +selciato +selettivo +sella +selvaggio +semaforo +sembrare +seme +seminato +sempre +senso +sentire +sepolto +sequenza +serata +serbato +sereno +serio +serpente +serraglio +servire +sestina +setola +settimana +sfacelo +sfaldare +sfamato +sfarzoso +sfaticato +sfera +sfida +sfilato +sfinge +sfocato +sfoderare +sfogo +sfoltire +sforzato +sfratto +sfruttato +sfuggito +sfumare +sfuso +sgabello +sgarbato +sgonfiare +sgorbio +sgrassato +sguardo +sibilo +siccome +sierra +sigla +signore +silenzio +sillaba +simbolo +simpatico +simulato +sinfonia +singolo +sinistro +sino +sintesi +sinusoide +sipario +sisma +sistole +situato +slitta +slogatura +sloveno +smarrito +smemorato +smentito +smeraldo +smilzo +smontare +smottato +smussato +snellire +snervato +snodo +sobbalzo +sobrio +soccorso +sociale +sodale +soffitto +sogno +soldato +solenne +solido +sollazzo +solo +solubile +solvente +somatico +somma +sonda +sonetto +sonnifero +sopire +soppeso +sopra +sorgere +sorpasso +sorriso +sorso +sorteggio +sorvolato +sospiro +sosta +sottile +spada +spalla +spargere +spatola +spavento +spazzola +specie +spedire +spegnere +spelatura +speranza +spessore +spettrale +spezzato +spia +spigoloso +spillato +spinoso +spirale +splendido +sportivo +sposo +spranga +sprecare +spronato +spruzzo +spuntino +squillo +sradicare +srotolato +stabile +stacco +staffa +stagnare +stampato +stantio +starnuto +stasera +statuto +stelo +steppa +sterzo +stiletto +stima +stirpe +stivale +stizzoso +stonato +storico +strappo +stregato +stridulo +strozzare +strutto +stuccare +stufo +stupendo +subentro +succoso +sudore +suggerito +sugo +sultano +suonare +superbo +supporto +surgelato +surrogato +sussurro +sutura +svagare +svedese +sveglio +svelare +svenuto +svezia +sviluppo +svista +svizzera +svolta +svuotare +tabacco +tabulato +tacciare +taciturno +tale +talismano +tampone +tannino +tara +tardivo +targato +tariffa +tarpare +tartaruga +tasto +tattico +taverna +tavolata +tazza +teca +tecnico +telefono +temerario +tempo +temuto +tendone +tenero +tensione +tentacolo +teorema +terme +terrazzo +terzetto +tesi +tesserato +testato +tetro +tettoia +tifare +tigella +timbro +tinto +tipico +tipografo +tiraggio +tiro +titanio +titolo +titubante +tizio +tizzone +toccare +tollerare +tolto +tombola +tomo +tonfo +tonsilla +topazio +topologia +toppa +torba +tornare +torrone +tortora +toscano +tossire +tostatura +totano +trabocco +trachea +trafila +tragedia +tralcio +tramonto +transito +trapano +trarre +trasloco +trattato +trave +treccia +tremolio +trespolo +tributo +tricheco +trifoglio +trillo +trincea +trio +tristezza +triturato +trivella +tromba +trono +troppo +trottola +trovare +truccato +tubatura +tuffato +tulipano +tumulto +tunisia +turbare +turchino +tuta +tutela +ubicato +uccello +uccisore +udire +uditivo +uffa +ufficio +uguale +ulisse +ultimato +umano +umile +umorismo +uncinetto +ungere +ungherese +unicorno +unificato +unisono +unitario +unte +uovo +upupa +uragano +urgenza +urlo +usanza 
+usato +uscito +usignolo +usuraio +utensile +utilizzo +utopia +vacante +vaccinato +vagabondo +vagliato +valanga +valgo +valico +valletta +valoroso +valutare +valvola +vampata +vangare +vanitoso +vano +vantaggio +vanvera +vapore +varano +varcato +variante +vasca +vedetta +vedova +veduto +vegetale +veicolo +velcro +velina +velluto +veloce +venato +vendemmia +vento +verace +verbale +vergogna +verifica +vero +verruca +verticale +vescica +vessillo +vestale +veterano +vetrina +vetusto +viandante +vibrante +vicenda +vichingo +vicinanza +vidimare +vigilia +vigneto +vigore +vile +villano +vimini +vincitore +viola +vipera +virgola +virologo +virulento +viscoso +visione +vispo +vissuto +visura +vita +vitello +vittima +vivanda +vivido +viziare +voce +voga +volatile +volere +volpe +voragine +vulcano +zampogna +zanna +zappato +zattera +zavorra +zefiro +zelante +zelo +zenzero +zerbino +zibetto +zinco +zircone +zitto +zolla +zotico +zucchero +zufolo +zulu +zuppa diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/japanese.txt b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/japanese.txt new file mode 100644 index 00000000..fb8501a6 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/japanese.txt @@ -0,0 +1,2048 @@ +あいこくしん +あいさつ +あいだ +あおぞら +あかちゃん +あきる +あけがた +あける +あこがれる +あさい +あさひ +あしあと +あじわう +あずかる +あずき +あそぶ +あたえる +あたためる +あたりまえ +あたる +あつい +あつかう +あっしゅく +あつまり +あつめる +あてな +あてはまる +あひる +あぶら +あぶる +あふれる +あまい +あまど +あまやかす +あまり +あみもの +あめりか +あやまる +あゆむ +あらいぐま +あらし +あらすじ +あらためる +あらゆる +あらわす +ありがとう +あわせる +あわてる +あんい +あんがい +あんこ +あんぜん +あんてい +あんない +あんまり +いいだす +いおん +いがい +いがく +いきおい +いきなり +いきもの +いきる +いくじ +いくぶん +いけばな +いけん +いこう +いこく +いこつ +いさましい +いさん +いしき +いじゅう +いじょう +いじわる +いずみ +いずれ +いせい +いせえび +いせかい +いせき +いぜん +いそうろう +いそがしい +いだい +いだく +いたずら +いたみ +いたりあ +いちおう +いちじ +いちど +いちば +いちぶ +いちりゅう +いつか +いっしゅん +いっせい +いっそう +いったん +いっち +いってい +いっぽう +いてざ +いてん +いどう +いとこ +いない +いなか +いねむり +いのち +いのる +いはつ +いばる +いはん +いびき +いひん +いふく +いへん +いほう +いみん +いもうと +いもたれ +いもり +いやがる +いやす +いよかん +いよく +いらい +いらすと +いりぐち +いりょう +いれい +いれもの +いれる +いろえんぴつ +いわい +いわう +いわかん +いわば +いわゆる +いんげんまめ +いんさつ +いんしょう +いんよう +うえき +うえる +うおざ +うがい +うかぶ +うかべる +うきわ +うくらいな +うくれれ +うけたまわる +うけつけ +うけとる +うけもつ +うける +うごかす +うごく +うこん +うさぎ +うしなう +うしろがみ +うすい +うすぎ +うすぐらい +うすめる +うせつ +うちあわせ +うちがわ +うちき +うちゅう +うっかり +うつくしい +うったえる +うつる +うどん +うなぎ +うなじ +うなずく +うなる +うねる +うのう +うぶげ +うぶごえ +うまれる +うめる +うもう +うやまう +うよく +うらがえす +うらぐち +うらない +うりあげ +うりきれ +うるさい +うれしい +うれゆき +うれる +うろこ +うわき +うわさ +うんこう +うんちん +うんてん +うんどう +えいえん +えいが +えいきょう +えいご +えいせい +えいぶん +えいよう +えいわ +えおり +えがお +えがく +えきたい +えくせる +えしゃく +えすて +えつらん +えのぐ +えほうまき +えほん +えまき +えもじ +えもの +えらい +えらぶ +えりあ +えんえん +えんかい +えんぎ +えんげき +えんしゅう +えんぜつ +えんそく +えんちょう +えんとつ +おいかける +おいこす +おいしい +おいつく +おうえん +おうさま +おうじ +おうせつ +おうたい +おうふく +おうべい +おうよう +おえる +おおい +おおう +おおどおり +おおや +おおよそ +おかえり +おかず +おがむ +おかわり +おぎなう +おきる +おくさま +おくじょう +おくりがな +おくる +おくれる +おこす +おこなう +おこる +おさえる +おさない +おさめる +おしいれ +おしえる +おじぎ +おじさん +おしゃれ +おそらく +おそわる +おたがい +おたく +おだやか +おちつく +おっと +おつり +おでかけ +おとしもの +おとなしい +おどり +おどろかす +おばさん +おまいり +おめでとう +おもいで +おもう +おもたい +おもちゃ +おやつ +おやゆび +およぼす +おらんだ +おろす +おんがく +おんけい +おんしゃ +おんせん +おんだん +おんちゅう +おんどけい +かあつ +かいが +がいき +がいけん +がいこう +かいさつ +かいしゃ +かいすいよく +かいぜん +かいぞうど +かいつう +かいてん +かいとう +かいふく +がいへき +かいほう +かいよう +がいらい +かいわ +かえる +かおり +かかえる +かがく +かがし +かがみ +かくご +かくとく +かざる +がぞう +かたい +かたち +がちょう +がっきゅう +がっこう +がっさん +がっしょう +かなざわし +かのう +がはく +かぶか +かほう +かほご +かまう +かまぼこ +かめれおん +かゆい +かようび +からい +かるい +かろう +かわく +かわら +がんか +かんけい +かんこう +かんしゃ +かんそう +かんたん +かんち +がんばる +きあい +きあつ +きいろ +ぎいん +きうい +きうん +きえる +きおう +きおく +きおち +きおん +きかい +きかく +きかんしゃ +ききて +きくばり +きくらげ +きけんせい +きこう +きこえる +きこく +きさい +きさく +きさま 
+きさらぎ +ぎじかがく +ぎしき +ぎじたいけん +ぎじにってい +ぎじゅつしゃ +きすう +きせい +きせき +きせつ +きそう +きぞく +きぞん +きたえる +きちょう +きつえん +ぎっちり +きつつき +きつね +きてい +きどう +きどく +きない +きなが +きなこ +きぬごし +きねん +きのう +きのした +きはく +きびしい +きひん +きふく +きぶん +きぼう +きほん +きまる +きみつ +きむずかしい +きめる +きもだめし +きもち +きもの +きゃく +きやく +ぎゅうにく +きよう +きょうりゅう +きらい +きらく +きりん +きれい +きれつ +きろく +ぎろん +きわめる +ぎんいろ +きんかくじ +きんじょ +きんようび +ぐあい +くいず +くうかん +くうき +くうぐん +くうこう +ぐうせい +くうそう +ぐうたら +くうふく +くうぼ +くかん +くきょう +くげん +ぐこう +くさい +くさき +くさばな +くさる +くしゃみ +くしょう +くすのき +くすりゆび +くせげ +くせん +ぐたいてき +くださる +くたびれる +くちこみ +くちさき +くつした +ぐっすり +くつろぐ +くとうてん +くどく +くなん +くねくね +くのう +くふう +くみあわせ +くみたてる +くめる +くやくしょ +くらす +くらべる +くるま +くれる +くろう +くわしい +ぐんかん +ぐんしょく +ぐんたい +ぐんて +けあな +けいかく +けいけん +けいこ +けいさつ +げいじゅつ +けいたい +げいのうじん +けいれき +けいろ +けおとす +けおりもの +げきか +げきげん +げきだん +げきちん +げきとつ +げきは +げきやく +げこう +げこくじょう +げざい +けさき +げざん +けしき +けしごむ +けしょう +げすと +けたば +けちゃっぷ +けちらす +けつあつ +けつい +けつえき +けっこん +けつじょ +けっせき +けってい +けつまつ +げつようび +げつれい +けつろん +げどく +けとばす +けとる +けなげ +けなす +けなみ +けぬき +げねつ +けねん +けはい +げひん +けぶかい +げぼく +けまり +けみかる +けむし +けむり +けもの +けらい +けろけろ +けわしい +けんい +けんえつ +けんお +けんか +げんき +けんげん +けんこう +けんさく +けんしゅう +けんすう +げんそう +けんちく +けんてい +けんとう +けんない +けんにん +げんぶつ +けんま +けんみん +けんめい +けんらん +けんり +こあくま +こいぬ +こいびと +ごうい +こうえん +こうおん +こうかん +ごうきゅう +ごうけい +こうこう +こうさい +こうじ +こうすい +ごうせい +こうそく +こうたい +こうちゃ +こうつう +こうてい +こうどう +こうない +こうはい +ごうほう +ごうまん +こうもく +こうりつ +こえる +こおり +ごかい +ごがつ +ごかん +こくご +こくさい +こくとう +こくない +こくはく +こぐま +こけい +こける +ここのか +こころ +こさめ +こしつ +こすう +こせい +こせき +こぜん +こそだて +こたい +こたえる +こたつ +こちょう +こっか +こつこつ +こつばん +こつぶ +こてい +こてん +ことがら +ことし +ことば +ことり +こなごな +こねこね +このまま +このみ +このよ +ごはん +こひつじ +こふう +こふん +こぼれる +ごまあぶら +こまかい +ごますり +こまつな +こまる +こむぎこ +こもじ +こもち +こもの +こもん +こやく +こやま +こゆう +こゆび +こよい +こよう +こりる +これくしょん +ころっけ +こわもて +こわれる +こんいん +こんかい +こんき +こんしゅう +こんすい +こんだて +こんとん +こんなん +こんびに +こんぽん +こんまけ +こんや +こんれい +こんわく +ざいえき +さいかい +さいきん +ざいげん +ざいこ +さいしょ +さいせい +ざいたく +ざいちゅう +さいてき +ざいりょう +さうな +さかいし +さがす +さかな +さかみち +さがる +さぎょう +さくし +さくひん +さくら +さこく +さこつ +さずかる +ざせき +さたん +さつえい +ざつおん +ざっか +ざつがく +さっきょく +ざっし +さつじん +ざっそう +さつたば +さつまいも +さてい +さといも +さとう +さとおや +さとし +さとる +さのう +さばく +さびしい +さべつ +さほう +さほど +さます +さみしい +さみだれ +さむけ +さめる +さやえんどう +さゆう +さよう +さよく +さらだ +ざるそば +さわやか +さわる +さんいん +さんか +さんきゃく +さんこう +さんさい +ざんしょ +さんすう +さんせい +さんそ +さんち +さんま +さんみ +さんらん +しあい +しあげ +しあさって +しあわせ +しいく +しいん +しうち +しえい +しおけ +しかい +しかく +じかん +しごと +しすう +じだい +したうけ +したぎ +したて +したみ +しちょう +しちりん +しっかり +しつじ +しつもん +してい +してき +してつ +じてん +じどう +しなぎれ +しなもの +しなん +しねま +しねん +しのぐ +しのぶ +しはい +しばかり +しはつ +しはらい +しはん +しひょう +しふく +じぶん +しへい +しほう +しほん +しまう +しまる +しみん +しむける +じむしょ +しめい +しめる +しもん +しゃいん +しゃうん +しゃおん +じゃがいも +しやくしょ +しゃくほう +しゃけん +しゃこ +しゃざい +しゃしん +しゃせん +しゃそう +しゃたい +しゃちょう +しゃっきん +じゃま +しゃりん +しゃれい +じゆう +じゅうしょ +しゅくはく +じゅしん +しゅっせき +しゅみ +しゅらば +じゅんばん +しょうかい +しょくたく +しょっけん +しょどう +しょもつ +しらせる +しらべる +しんか +しんこう +じんじゃ +しんせいじ +しんちく +しんりん +すあげ +すあし +すあな +ずあん +すいえい +すいか +すいとう +ずいぶん +すいようび +すうがく +すうじつ +すうせん +すおどり +すきま +すくう +すくない +すける +すごい +すこし +ずさん +すずしい +すすむ +すすめる +すっかり +ずっしり +ずっと +すてき +すてる +すねる +すのこ +すはだ +すばらしい +ずひょう +ずぶぬれ +すぶり +すふれ +すべて +すべる +ずほう +すぼん +すまい +すめし +すもう +すやき +すらすら +するめ +すれちがう +すろっと +すわる +すんぜん +すんぽう +せあぶら +せいかつ +せいげん +せいじ +せいよう +せおう +せかいかん +せきにん +せきむ +せきゆ +せきらんうん +せけん +せこう +せすじ +せたい +せたけ +せっかく +せっきゃく +ぜっく +せっけん +せっこつ +せっさたくま +せつぞく +せつだん +せつでん +せっぱん +せつび +せつぶん +せつめい +せつりつ +せなか +せのび +せはば +せびろ +せぼね +せまい +せまる +せめる +せもたれ +せりふ +ぜんあく +せんい +せんえい +せんか +せんきょ +せんく +せんげん +ぜんご +せんさい +せんしゅ +せんすい +せんせい +せんぞ +せんたく +せんちょう +せんてい +せんとう +せんぬき +せんねん +せんぱい +ぜんぶ +ぜんぽう +せんむ +せんめんじょ +せんもん +せんやく +せんゆう +せんよう +ぜんら +ぜんりゃく +せんれい +せんろ +そあく +そいとげる +そいね +そうがんきょう +そうき +そうご +そうしん +そうだん +そうなん +そうび +そうめん +そうり +そえもの +そえん +そがい +そげき +そこう +そこそこ +そざい +そしな +そせい +そせん +そそぐ +そだてる +そつう +そつえん +そっかん +そつぎょう +そっけつ +そっこう +そっせん +そっと +そとがわ 
+そとづら +そなえる +そなた +そふぼ +そぼく +そぼろ +そまつ +そまる +そむく +そむりえ +そめる +そもそも +そよかぜ +そらまめ +そろう +そんかい +そんけい +そんざい +そんしつ +そんぞく +そんちょう +ぞんび +ぞんぶん +そんみん +たあい +たいいん +たいうん +たいえき +たいおう +だいがく +たいき +たいぐう +たいけん +たいこ +たいざい +だいじょうぶ +だいすき +たいせつ +たいそう +だいたい +たいちょう +たいてい +だいどころ +たいない +たいねつ +たいのう +たいはん +だいひょう +たいふう +たいへん +たいほ +たいまつばな +たいみんぐ +たいむ +たいめん +たいやき +たいよう +たいら +たいりょく +たいる +たいわん +たうえ +たえる +たおす +たおる +たおれる +たかい +たかね +たきび +たくさん +たこく +たこやき +たさい +たしざん +だじゃれ +たすける +たずさわる +たそがれ +たたかう +たたく +ただしい +たたみ +たちばな +だっかい +だっきゃく +だっこ +だっしゅつ +だったい +たてる +たとえる +たなばた +たにん +たぬき +たのしみ +たはつ +たぶん +たべる +たぼう +たまご +たまる +だむる +ためいき +ためす +ためる +たもつ +たやすい +たよる +たらす +たりきほんがん +たりょう +たりる +たると +たれる +たれんと +たろっと +たわむれる +だんあつ +たんい +たんおん +たんか +たんき +たんけん +たんご +たんさん +たんじょうび +だんせい +たんそく +たんたい +だんち +たんてい +たんとう +だんな +たんにん +だんねつ +たんのう +たんぴん +だんぼう +たんまつ +たんめい +だんれつ +だんろ +だんわ +ちあい +ちあん +ちいき +ちいさい +ちえん +ちかい +ちから +ちきゅう +ちきん +ちけいず +ちけん +ちこく +ちさい +ちしき +ちしりょう +ちせい +ちそう +ちたい +ちたん +ちちおや +ちつじょ +ちてき +ちてん +ちぬき +ちぬり +ちのう +ちひょう +ちへいせん +ちほう +ちまた +ちみつ +ちみどろ +ちめいど +ちゃんこなべ +ちゅうい +ちゆりょく +ちょうし +ちょさくけん +ちらし +ちらみ +ちりがみ +ちりょう +ちるど +ちわわ +ちんたい +ちんもく +ついか +ついたち +つうか +つうじょう +つうはん +つうわ +つかう +つかれる +つくね +つくる +つけね +つける +つごう +つたえる +つづく +つつじ +つつむ +つとめる +つながる +つなみ +つねづね +つのる +つぶす +つまらない +つまる +つみき +つめたい +つもり +つもる +つよい +つるぼ +つるみく +つわもの +つわり +てあし +てあて +てあみ +ていおん +ていか +ていき +ていけい +ていこく +ていさつ +ていし +ていせい +ていたい +ていど +ていねい +ていひょう +ていへん +ていぼう +てうち +ておくれ +てきとう +てくび +でこぼこ +てさぎょう +てさげ +てすり +てそう +てちがい +てちょう +てつがく +てつづき +でっぱ +てつぼう +てつや +でぬかえ +てぬき +てぬぐい +てのひら +てはい +てぶくろ +てふだ +てほどき +てほん +てまえ +てまきずし +てみじか +てみやげ +てらす +てれび +てわけ +てわたし +でんあつ +てんいん +てんかい +てんき +てんぐ +てんけん +てんごく +てんさい +てんし +てんすう +でんち +てんてき +てんとう +てんない +てんぷら +てんぼうだい +てんめつ +てんらんかい +でんりょく +でんわ +どあい +といれ +どうかん +とうきゅう +どうぐ +とうし +とうむぎ +とおい +とおか +とおく +とおす +とおる +とかい +とかす +ときおり +ときどき +とくい +とくしゅう +とくてん +とくに +とくべつ +とけい +とける +とこや +とさか +としょかん +とそう +とたん +とちゅう +とっきゅう +とっくん +とつぜん +とつにゅう +とどける +ととのえる +とない +となえる +となり +とのさま +とばす +どぶがわ +とほう +とまる +とめる +ともだち +ともる +どようび +とらえる +とんかつ +どんぶり +ないかく +ないこう +ないしょ +ないす +ないせん +ないそう +なおす +ながい +なくす +なげる +なこうど +なさけ +なたでここ +なっとう +なつやすみ +ななおし +なにごと +なにもの +なにわ +なのか +なふだ +なまいき +なまえ +なまみ +なみだ +なめらか +なめる +なやむ +ならう +ならび +ならぶ +なれる +なわとび +なわばり +にあう +にいがた +にうけ +におい +にかい +にがて +にきび +にくしみ +にくまん +にげる +にさんかたんそ +にしき +にせもの +にちじょう +にちようび +にっか +にっき +にっけい +にっこう +にっさん +にっしょく +にっすう +にっせき +にってい +になう +にほん +にまめ +にもつ +にやり +にゅういん +にりんしゃ +にわとり +にんい +にんか +にんき +にんげん +にんしき +にんずう +にんそう +にんたい +にんち +にんてい +にんにく +にんぷ +にんまり +にんむ +にんめい +にんよう +ぬいくぎ +ぬかす +ぬぐいとる +ぬぐう +ぬくもり +ぬすむ +ぬまえび +ぬめり +ぬらす +ぬんちゃく +ねあげ +ねいき +ねいる +ねいろ +ねぐせ +ねくたい +ねくら +ねこぜ +ねこむ +ねさげ +ねすごす +ねそべる +ねだん +ねつい +ねっしん +ねつぞう +ねったいぎょ +ねぶそく +ねふだ +ねぼう +ねほりはほり +ねまき +ねまわし +ねみみ +ねむい +ねむたい +ねもと +ねらう +ねわざ +ねんいり +ねんおし +ねんかん +ねんきん +ねんぐ +ねんざ +ねんし +ねんちゃく +ねんど +ねんぴ +ねんぶつ +ねんまつ +ねんりょう +ねんれい +のいず +のおづま +のがす +のきなみ +のこぎり +のこす +のこる +のせる +のぞく +のぞむ +のたまう +のちほど +のっく +のばす +のはら +のべる +のぼる +のみもの +のやま +のらいぬ +のらねこ +のりもの +のりゆき +のれん +のんき +ばあい +はあく +ばあさん +ばいか +ばいく +はいけん +はいご +はいしん +はいすい +はいせん +はいそう +はいち +ばいばい +はいれつ +はえる +はおる +はかい +ばかり +はかる +はくしゅ +はけん +はこぶ +はさみ +はさん +はしご +ばしょ +はしる +はせる +ぱそこん +はそん +はたん +はちみつ +はつおん +はっかく +はづき +はっきり +はっくつ +はっけん +はっこう +はっさん +はっしん +はったつ +はっちゅう +はってん +はっぴょう +はっぽう +はなす +はなび +はにかむ +はぶらし +はみがき +はむかう +はめつ +はやい +はやし +はらう +はろうぃん +はわい +はんい +はんえい +はんおん +はんかく +はんきょう +ばんぐみ +はんこ +はんしゃ +はんすう +はんだん +ぱんち +ぱんつ +はんてい +はんとし +はんのう +はんぱ +はんぶん +はんぺん +はんぼうき +はんめい +はんらん +はんろん +ひいき +ひうん +ひえる +ひかく +ひかり +ひかる +ひかん +ひくい +ひけつ +ひこうき +ひこく +ひさい +ひさしぶり +ひさん +びじゅつかん +ひしょ +ひそか +ひそむ +ひたむき +ひだり +ひたる +ひつぎ +ひっこし +ひっし +ひつじゅひん +ひっす +ひつぜん +ぴったり +ぴっちり +ひつよう +ひてい +ひとごみ +ひなまつり +ひなん +ひねる +ひはん +ひびく +ひひょう +ひほう +ひまわり +ひまん +ひみつ +ひめい +ひめじし +ひやけ +ひやす +ひよう +びょうき +ひらがな 
+ひらく +ひりつ +ひりょう +ひるま +ひるやすみ +ひれい +ひろい +ひろう +ひろき +ひろゆき +ひんかく +ひんけつ +ひんこん +ひんしゅ +ひんそう +ぴんち +ひんぱん +びんぼう +ふあん +ふいうち +ふうけい +ふうせん +ぷうたろう +ふうとう +ふうふ +ふえる +ふおん +ふかい +ふきん +ふくざつ +ふくぶくろ +ふこう +ふさい +ふしぎ +ふじみ +ふすま +ふせい +ふせぐ +ふそく +ぶたにく +ふたん +ふちょう +ふつう +ふつか +ふっかつ +ふっき +ふっこく +ぶどう +ふとる +ふとん +ふのう +ふはい +ふひょう +ふへん +ふまん +ふみん +ふめつ +ふめん +ふよう +ふりこ +ふりる +ふるい +ふんいき +ぶんがく +ぶんぐ +ふんしつ +ぶんせき +ふんそう +ぶんぽう +へいあん +へいおん +へいがい +へいき +へいげん +へいこう +へいさ +へいしゃ +へいせつ +へいそ +へいたく +へいてん +へいねつ +へいわ +へきが +へこむ +べにいろ +べにしょうが +へらす +へんかん +べんきょう +べんごし +へんさい +へんたい +べんり +ほあん +ほいく +ぼうぎょ +ほうこく +ほうそう +ほうほう +ほうもん +ほうりつ +ほえる +ほおん +ほかん +ほきょう +ぼきん +ほくろ +ほけつ +ほけん +ほこう +ほこる +ほしい +ほしつ +ほしゅ +ほしょう +ほせい +ほそい +ほそく +ほたて +ほたる +ぽちぶくろ +ほっきょく +ほっさ +ほったん +ほとんど +ほめる +ほんい +ほんき +ほんけ +ほんしつ +ほんやく +まいにち +まかい +まかせる +まがる +まける +まこと +まさつ +まじめ +ますく +まぜる +まつり +まとめ +まなぶ +まぬけ +まねく +まほう +まもる +まゆげ +まよう +まろやか +まわす +まわり +まわる +まんが +まんきつ +まんぞく +まんなか +みいら +みうち +みえる +みがく +みかた +みかん +みけん +みこん +みじかい +みすい +みすえる +みせる +みっか +みつかる +みつける +みてい +みとめる +みなと +みなみかさい +みねらる +みのう +みのがす +みほん +みもと +みやげ +みらい +みりょく +みわく +みんか +みんぞく +むいか +むえき +むえん +むかい +むかう +むかえ +むかし +むぎちゃ +むける +むげん +むさぼる +むしあつい +むしば +むじゅん +むしろ +むすう +むすこ +むすぶ +むすめ +むせる +むせん +むちゅう +むなしい +むのう +むやみ +むよう +むらさき +むりょう +むろん +めいあん +めいうん +めいえん +めいかく +めいきょく +めいさい +めいし +めいそう +めいぶつ +めいれい +めいわく +めぐまれる +めざす +めした +めずらしい +めだつ +めまい +めやす +めんきょ +めんせき +めんどう +もうしあげる +もうどうけん +もえる +もくし +もくてき +もくようび +もちろん +もどる +もらう +もんく +もんだい +やおや +やける +やさい +やさしい +やすい +やすたろう +やすみ +やせる +やそう +やたい +やちん +やっと +やっぱり +やぶる +やめる +ややこしい +やよい +やわらかい +ゆうき +ゆうびんきょく +ゆうべ +ゆうめい +ゆけつ +ゆしゅつ +ゆせん +ゆそう +ゆたか +ゆちゃく +ゆでる +ゆにゅう +ゆびわ +ゆらい +ゆれる +ようい +ようか +ようきゅう +ようじ +ようす +ようちえん +よかぜ +よかん +よきん +よくせい +よくぼう +よけい +よごれる +よさん +よしゅう +よそう +よそく +よっか +よてい +よどがわく +よねつ +よやく +よゆう +よろこぶ +よろしい +らいう +らくがき +らくご +らくさつ +らくだ +らしんばん +らせん +らぞく +らたい +らっか +られつ +りえき +りかい +りきさく +りきせつ +りくぐん +りくつ +りけん +りこう +りせい +りそう +りそく +りてん +りねん +りゆう +りゅうがく +りよう +りょうり +りょかん +りょくちゃ +りょこう +りりく +りれき +りろん +りんご +るいけい +るいさい +るいじ +るいせき +るすばん +るりがわら +れいかん +れいぎ +れいせい +れいぞうこ +れいとう +れいぼう +れきし +れきだい +れんあい +れんけい +れんこん +れんさい +れんしゅう +れんぞく +れんらく +ろうか +ろうご +ろうじん +ろうそく +ろくが +ろこつ +ろじうら +ろしゅつ +ろせん +ろてん +ろめん +ろれつ +ろんぎ +ろんぱ +ろんぶん +ろんり +わかす +わかめ +わかやま +わかれる +わしつ +わじまし +わすれもの +わらう +われる diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/korean.txt b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/korean.txt new file mode 100644 index 00000000..1acebf75 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/korean.txt @@ -0,0 +1,2048 @@ +가격 +가끔 +가난 +가능 +가득 +가르침 +가뭄 +가방 +가상 +가슴 +가운데 +가을 +가이드 +가입 +가장 +가정 +가족 +가죽 +각오 +각자 +간격 +간부 +간섭 +간장 +간접 +간판 +갈등 +갈비 +갈색 +갈증 +감각 +감기 +감소 +감수성 +감자 +감정 +갑자기 +강남 +강당 +강도 +강력히 +강변 +강북 +강사 +강수량 +강아지 +강원도 +강의 +강제 +강조 +같이 +개구리 +개나리 +개방 +개별 +개선 +개성 +개인 +객관적 +거실 +거액 +거울 +거짓 +거품 +걱정 +건강 +건물 +건설 +건조 +건축 +걸음 +검사 +검토 +게시판 +게임 +겨울 +견해 +결과 +결국 +결론 +결석 +결승 +결심 +결정 +결혼 +경계 +경고 +경기 +경력 +경복궁 +경비 +경상도 +경영 +경우 +경쟁 +경제 +경주 +경찰 +경치 +경향 +경험 +계곡 +계단 +계란 +계산 +계속 +계약 +계절 +계층 +계획 +고객 +고구려 +고궁 +고급 +고등학생 +고무신 +고민 +고양이 +고장 +고전 +고집 +고춧가루 +고통 +고향 +곡식 +골목 +골짜기 +골프 +공간 +공개 +공격 +공군 +공급 +공기 +공동 +공무원 +공부 +공사 +공식 +공업 +공연 +공원 +공장 +공짜 +공책 +공통 +공포 +공항 +공휴일 +과목 +과일 +과장 +과정 +과학 +관객 +관계 +관광 +관념 +관람 +관련 +관리 +관습 +관심 +관점 +관찰 +광경 +광고 +광장 +광주 +괴로움 +굉장히 +교과서 +교문 +교복 +교실 +교양 +교육 +교장 +교직 +교통 +교환 +교훈 +구경 +구름 +구멍 +구별 +구분 +구석 +구성 +구속 +구역 +구입 +구청 +구체적 +국가 +국기 +국내 +국립 +국물 +국민 +국수 +국어 +국왕 +국적 +국제 +국회 +군대 +군사 +군인 +궁극적 +권리 +권위 +권투 +귀국 +귀신 +규정 +규칙 +균형 +그날 +그냥 +그늘 +그러나 +그룹 +그릇 +그림 +그제서야 +그토록 +극복 +극히 +근거 +근교 +근래 +근로 +근무 +근본 +근원 +근육 +근처 +글씨 +글자 +금강산 +금고 +금년 +금메달 +금액 +금연 +금요일 +금지 +긍정적 +기간 +기관 +기념 +기능 +기독교 +기둥 +기록 +기름 +기법 +기본 +기분 +기쁨 +기숙사 +기술 
+기억 +기업 +기온 +기운 +기원 +기적 +기준 +기침 +기혼 +기획 +긴급 +긴장 +길이 +김밥 +김치 +김포공항 +깍두기 +깜빡 +깨달음 +깨소금 +껍질 +꼭대기 +꽃잎 +나들이 +나란히 +나머지 +나물 +나침반 +나흘 +낙엽 +난방 +날개 +날씨 +날짜 +남녀 +남대문 +남매 +남산 +남자 +남편 +남학생 +낭비 +낱말 +내년 +내용 +내일 +냄비 +냄새 +냇물 +냉동 +냉면 +냉방 +냉장고 +넥타이 +넷째 +노동 +노란색 +노력 +노인 +녹음 +녹차 +녹화 +논리 +논문 +논쟁 +놀이 +농구 +농담 +농민 +농부 +농업 +농장 +농촌 +높이 +눈동자 +눈물 +눈썹 +뉴욕 +느낌 +늑대 +능동적 +능력 +다방 +다양성 +다음 +다이어트 +다행 +단계 +단골 +단독 +단맛 +단순 +단어 +단위 +단점 +단체 +단추 +단편 +단풍 +달걀 +달러 +달력 +달리 +닭고기 +담당 +담배 +담요 +담임 +답변 +답장 +당근 +당분간 +당연히 +당장 +대규모 +대낮 +대단히 +대답 +대도시 +대략 +대량 +대륙 +대문 +대부분 +대신 +대응 +대장 +대전 +대접 +대중 +대책 +대출 +대충 +대통령 +대학 +대한민국 +대합실 +대형 +덩어리 +데이트 +도대체 +도덕 +도둑 +도망 +도서관 +도심 +도움 +도입 +도자기 +도저히 +도전 +도중 +도착 +독감 +독립 +독서 +독일 +독창적 +동화책 +뒷모습 +뒷산 +딸아이 +마누라 +마늘 +마당 +마라톤 +마련 +마무리 +마사지 +마약 +마요네즈 +마을 +마음 +마이크 +마중 +마지막 +마찬가지 +마찰 +마흔 +막걸리 +막내 +막상 +만남 +만두 +만세 +만약 +만일 +만점 +만족 +만화 +많이 +말기 +말씀 +말투 +맘대로 +망원경 +매년 +매달 +매력 +매번 +매스컴 +매일 +매장 +맥주 +먹이 +먼저 +먼지 +멀리 +메일 +며느리 +며칠 +면담 +멸치 +명단 +명령 +명예 +명의 +명절 +명칭 +명함 +모금 +모니터 +모델 +모든 +모범 +모습 +모양 +모임 +모조리 +모집 +모퉁이 +목걸이 +목록 +목사 +목소리 +목숨 +목적 +목표 +몰래 +몸매 +몸무게 +몸살 +몸속 +몸짓 +몸통 +몹시 +무관심 +무궁화 +무더위 +무덤 +무릎 +무슨 +무엇 +무역 +무용 +무조건 +무지개 +무척 +문구 +문득 +문법 +문서 +문제 +문학 +문화 +물가 +물건 +물결 +물고기 +물론 +물리학 +물음 +물질 +물체 +미국 +미디어 +미사일 +미술 +미역 +미용실 +미움 +미인 +미팅 +미혼 +민간 +민족 +민주 +믿음 +밀가루 +밀리미터 +밑바닥 +바가지 +바구니 +바나나 +바늘 +바닥 +바닷가 +바람 +바이러스 +바탕 +박물관 +박사 +박수 +반대 +반드시 +반말 +반발 +반성 +반응 +반장 +반죽 +반지 +반찬 +받침 +발가락 +발걸음 +발견 +발달 +발레 +발목 +발바닥 +발생 +발음 +발자국 +발전 +발톱 +발표 +밤하늘 +밥그릇 +밥맛 +밥상 +밥솥 +방금 +방면 +방문 +방바닥 +방법 +방송 +방식 +방안 +방울 +방지 +방학 +방해 +방향 +배경 +배꼽 +배달 +배드민턴 +백두산 +백색 +백성 +백인 +백제 +백화점 +버릇 +버섯 +버튼 +번개 +번역 +번지 +번호 +벌금 +벌레 +벌써 +범위 +범인 +범죄 +법률 +법원 +법적 +법칙 +베이징 +벨트 +변경 +변동 +변명 +변신 +변호사 +변화 +별도 +별명 +별일 +병실 +병아리 +병원 +보관 +보너스 +보라색 +보람 +보름 +보상 +보안 +보자기 +보장 +보전 +보존 +보통 +보편적 +보험 +복도 +복사 +복숭아 +복습 +볶음 +본격적 +본래 +본부 +본사 +본성 +본인 +본질 +볼펜 +봉사 +봉지 +봉투 +부근 +부끄러움 +부담 +부동산 +부문 +부분 +부산 +부상 +부엌 +부인 +부작용 +부장 +부정 +부족 +부지런히 +부친 +부탁 +부품 +부회장 +북부 +북한 +분노 +분량 +분리 +분명 +분석 +분야 +분위기 +분필 +분홍색 +불고기 +불과 +불교 +불꽃 +불만 +불법 +불빛 +불안 +불이익 +불행 +브랜드 +비극 +비난 +비닐 +비둘기 +비디오 +비로소 +비만 +비명 +비밀 +비바람 +비빔밥 +비상 +비용 +비율 +비중 +비타민 +비판 +빌딩 +빗물 +빗방울 +빗줄기 +빛깔 +빨간색 +빨래 +빨리 +사건 +사계절 +사나이 +사냥 +사람 +사랑 +사립 +사모님 +사물 +사방 +사상 +사생활 +사설 +사슴 +사실 +사업 +사용 +사월 +사장 +사전 +사진 +사촌 +사춘기 +사탕 +사투리 +사흘 +산길 +산부인과 +산업 +산책 +살림 +살인 +살짝 +삼계탕 +삼국 +삼십 +삼월 +삼촌 +상관 +상금 +상대 +상류 +상반기 +상상 +상식 +상업 +상인 +상자 +상점 +상처 +상추 +상태 +상표 +상품 +상황 +새벽 +색깔 +색연필 +생각 +생명 +생물 +생방송 +생산 +생선 +생신 +생일 +생활 +서랍 +서른 +서명 +서민 +서비스 +서양 +서울 +서적 +서점 +서쪽 +서클 +석사 +석유 +선거 +선물 +선배 +선생 +선수 +선원 +선장 +선전 +선택 +선풍기 +설거지 +설날 +설렁탕 +설명 +설문 +설사 +설악산 +설치 +설탕 +섭씨 +성공 +성당 +성명 +성별 +성인 +성장 +성적 +성질 +성함 +세금 +세미나 +세상 +세월 +세종대왕 +세탁 +센터 +센티미터 +셋째 +소규모 +소극적 +소금 +소나기 +소년 +소득 +소망 +소문 +소설 +소속 +소아과 +소용 +소원 +소음 +소중히 +소지품 +소질 +소풍 +소형 +속담 +속도 +속옷 +손가락 +손길 +손녀 +손님 +손등 +손목 +손뼉 +손실 +손질 +손톱 +손해 +솔직히 +솜씨 +송아지 +송이 +송편 +쇠고기 +쇼핑 +수건 +수년 +수단 +수돗물 +수동적 +수면 +수명 +수박 +수상 +수석 +수술 +수시로 +수업 +수염 +수영 +수입 +수준 +수집 +수출 +수컷 +수필 +수학 +수험생 +수화기 +숙녀 +숙소 +숙제 +순간 +순서 +순수 +순식간 +순위 +숟가락 +술병 +술집 +숫자 +스님 +스물 +스스로 +스승 +스웨터 +스위치 +스케이트 +스튜디오 +스트레스 +스포츠 +슬쩍 +슬픔 +습관 +습기 +승객 +승리 +승부 +승용차 +승진 +시각 +시간 +시골 +시금치 +시나리오 +시댁 +시리즈 +시멘트 +시민 +시부모 +시선 +시설 +시스템 +시아버지 +시어머니 +시월 +시인 +시일 +시작 +시장 +시절 +시점 +시중 +시즌 +시집 +시청 +시합 +시험 +식구 +식기 +식당 +식량 +식료품 +식물 +식빵 +식사 +식생활 +식초 +식탁 +식품 +신고 +신규 +신념 +신문 +신발 +신비 +신사 +신세 +신용 +신제품 +신청 +신체 +신화 +실감 +실내 +실력 +실례 +실망 +실수 +실습 +실시 +실장 +실정 +실질적 +실천 +실체 +실컷 +실태 +실패 +실험 +실현 +심리 +심부름 +심사 +심장 +심정 +심판 +쌍둥이 +씨름 +씨앗 +아가씨 +아나운서 +아드님 +아들 +아쉬움 +아스팔트 +아시아 +아울러 +아저씨 +아줌마 +아직 +아침 +아파트 +아프리카 +아픔 +아홉 +아흔 +악기 +악몽 +악수 +안개 +안경 +안과 +안내 +안녕 +안동 +안방 +안부 +안주 +알루미늄 +알코올 +암시 +암컷 +압력 +앞날 +앞문 +애인 +애정 +액수 +앨범 +야간 +야단 +야옹 +약간 +약국 +약속 +약수 +약점 +약품 +약혼녀 +양념 +양력 +양말 +양배추 +양주 +양파 +어둠 +어려움 +어른 +어젯밤 
+어쨌든 +어쩌다가 +어쩐지 +언니 +언덕 +언론 +언어 +얼굴 +얼른 +얼음 +얼핏 +엄마 +업무 +업종 +업체 +엉덩이 +엉망 +엉터리 +엊그제 +에너지 +에어컨 +엔진 +여건 +여고생 +여관 +여군 +여권 +여대생 +여덟 +여동생 +여든 +여론 +여름 +여섯 +여성 +여왕 +여인 +여전히 +여직원 +여학생 +여행 +역사 +역시 +역할 +연결 +연구 +연극 +연기 +연락 +연설 +연세 +연속 +연습 +연애 +연예인 +연인 +연장 +연주 +연출 +연필 +연합 +연휴 +열기 +열매 +열쇠 +열심히 +열정 +열차 +열흘 +염려 +엽서 +영국 +영남 +영상 +영양 +영역 +영웅 +영원히 +영하 +영향 +영혼 +영화 +옆구리 +옆방 +옆집 +예감 +예금 +예방 +예산 +예상 +예선 +예술 +예습 +예식장 +예약 +예전 +예절 +예정 +예컨대 +옛날 +오늘 +오락 +오랫동안 +오렌지 +오로지 +오른발 +오븐 +오십 +오염 +오월 +오전 +오직 +오징어 +오페라 +오피스텔 +오히려 +옥상 +옥수수 +온갖 +온라인 +온몸 +온종일 +온통 +올가을 +올림픽 +올해 +옷차림 +와이셔츠 +와인 +완성 +완전 +왕비 +왕자 +왜냐하면 +왠지 +외갓집 +외국 +외로움 +외삼촌 +외출 +외침 +외할머니 +왼발 +왼손 +왼쪽 +요금 +요일 +요즘 +요청 +용기 +용서 +용어 +우산 +우선 +우승 +우연히 +우정 +우체국 +우편 +운동 +운명 +운반 +운전 +운행 +울산 +울음 +움직임 +웃어른 +웃음 +워낙 +원고 +원래 +원서 +원숭이 +원인 +원장 +원피스 +월급 +월드컵 +월세 +월요일 +웨이터 +위반 +위법 +위성 +위원 +위험 +위협 +윗사람 +유난히 +유럽 +유명 +유물 +유산 +유적 +유치원 +유학 +유행 +유형 +육군 +육상 +육십 +육체 +은행 +음력 +음료 +음반 +음성 +음식 +음악 +음주 +의견 +의논 +의문 +의복 +의식 +의심 +의외로 +의욕 +의원 +의학 +이것 +이곳 +이념 +이놈 +이달 +이대로 +이동 +이렇게 +이력서 +이론적 +이름 +이민 +이발소 +이별 +이불 +이빨 +이상 +이성 +이슬 +이야기 +이용 +이웃 +이월 +이윽고 +이익 +이전 +이중 +이튿날 +이틀 +이혼 +인간 +인격 +인공 +인구 +인근 +인기 +인도 +인류 +인물 +인생 +인쇄 +인연 +인원 +인재 +인종 +인천 +인체 +인터넷 +인하 +인형 +일곱 +일기 +일단 +일대 +일등 +일반 +일본 +일부 +일상 +일생 +일손 +일요일 +일월 +일정 +일종 +일주일 +일찍 +일체 +일치 +일행 +일회용 +임금 +임무 +입대 +입력 +입맛 +입사 +입술 +입시 +입원 +입장 +입학 +자가용 +자격 +자극 +자동 +자랑 +자부심 +자식 +자신 +자연 +자원 +자율 +자전거 +자정 +자존심 +자판 +작가 +작년 +작성 +작업 +작용 +작은딸 +작품 +잔디 +잔뜩 +잔치 +잘못 +잠깐 +잠수함 +잠시 +잠옷 +잠자리 +잡지 +장관 +장군 +장기간 +장래 +장례 +장르 +장마 +장면 +장모 +장미 +장비 +장사 +장소 +장식 +장애인 +장인 +장점 +장차 +장학금 +재능 +재빨리 +재산 +재생 +재작년 +재정 +재채기 +재판 +재학 +재활용 +저것 +저고리 +저곳 +저녁 +저런 +저렇게 +저번 +저울 +저절로 +저축 +적극 +적당히 +적성 +적용 +적응 +전개 +전공 +전기 +전달 +전라도 +전망 +전문 +전반 +전부 +전세 +전시 +전용 +전자 +전쟁 +전주 +전철 +전체 +전통 +전혀 +전후 +절대 +절망 +절반 +절약 +절차 +점검 +점수 +점심 +점원 +점점 +점차 +접근 +접시 +접촉 +젓가락 +정거장 +정도 +정류장 +정리 +정말 +정면 +정문 +정반대 +정보 +정부 +정비 +정상 +정성 +정오 +정원 +정장 +정지 +정치 +정확히 +제공 +제과점 +제대로 +제목 +제발 +제법 +제삿날 +제안 +제일 +제작 +제주도 +제출 +제품 +제한 +조각 +조건 +조금 +조깅 +조명 +조미료 +조상 +조선 +조용히 +조절 +조정 +조직 +존댓말 +존재 +졸업 +졸음 +종교 +종로 +종류 +종소리 +종업원 +종종 +종합 +좌석 +죄인 +주관적 +주름 +주말 +주머니 +주먹 +주문 +주민 +주방 +주변 +주식 +주인 +주일 +주장 +주전자 +주택 +준비 +줄거리 +줄기 +줄무늬 +중간 +중계방송 +중국 +중년 +중단 +중독 +중반 +중부 +중세 +중소기업 +중순 +중앙 +중요 +중학교 +즉석 +즉시 +즐거움 +증가 +증거 +증권 +증상 +증세 +지각 +지갑 +지경 +지극히 +지금 +지급 +지능 +지름길 +지리산 +지방 +지붕 +지식 +지역 +지우개 +지원 +지적 +지점 +지진 +지출 +직선 +직업 +직원 +직장 +진급 +진동 +진로 +진료 +진리 +진짜 +진찰 +진출 +진통 +진행 +질문 +질병 +질서 +짐작 +집단 +집안 +집중 +짜증 +찌꺼기 +차남 +차라리 +차량 +차림 +차별 +차선 +차츰 +착각 +찬물 +찬성 +참가 +참기름 +참새 +참석 +참여 +참외 +참조 +찻잔 +창가 +창고 +창구 +창문 +창밖 +창작 +창조 +채널 +채점 +책가방 +책방 +책상 +책임 +챔피언 +처벌 +처음 +천국 +천둥 +천장 +천재 +천천히 +철도 +철저히 +철학 +첫날 +첫째 +청년 +청바지 +청소 +청춘 +체계 +체력 +체온 +체육 +체중 +체험 +초등학생 +초반 +초밥 +초상화 +초순 +초여름 +초원 +초저녁 +초점 +초청 +초콜릿 +촛불 +총각 +총리 +총장 +촬영 +최근 +최상 +최선 +최신 +최악 +최종 +추석 +추억 +추진 +추천 +추측 +축구 +축소 +축제 +축하 +출근 +출발 +출산 +출신 +출연 +출입 +출장 +출판 +충격 +충고 +충돌 +충분히 +충청도 +취업 +취직 +취향 +치약 +친구 +친척 +칠십 +칠월 +칠판 +침대 +침묵 +침실 +칫솔 +칭찬 +카메라 +카운터 +칼국수 +캐릭터 +캠퍼스 +캠페인 +커튼 +컨디션 +컬러 +컴퓨터 +코끼리 +코미디 +콘서트 +콜라 +콤플렉스 +콩나물 +쾌감 +쿠데타 +크림 +큰길 +큰딸 +큰소리 +큰아들 +큰어머니 +큰일 +큰절 +클래식 +클럽 +킬로 +타입 +타자기 +탁구 +탁자 +탄생 +태권도 +태양 +태풍 +택시 +탤런트 +터널 +터미널 +테니스 +테스트 +테이블 +텔레비전 +토론 +토마토 +토요일 +통계 +통과 +통로 +통신 +통역 +통일 +통장 +통제 +통증 +통합 +통화 +퇴근 +퇴원 +퇴직금 +튀김 +트럭 +특급 +특별 +특성 +특수 +특징 +특히 +튼튼히 +티셔츠 +파란색 +파일 +파출소 +판결 +판단 +판매 +판사 +팔십 +팔월 +팝송 +패션 +팩스 +팩시밀리 +팬티 +퍼센트 +페인트 +편견 +편의 +편지 +편히 +평가 +평균 +평생 +평소 +평양 +평일 +평화 +포스터 +포인트 +포장 +포함 +표면 +표정 +표준 +표현 +품목 +품질 +풍경 +풍속 +풍습 +프랑스 +프린터 +플라스틱 +피곤 +피망 +피아노 +필름 +필수 +필요 +필자 +필통 +핑계 +하느님 +하늘 +하드웨어 +하룻밤 +하반기 +하숙집 +하순 +하여튼 +하지만 +하천 +하품 +하필 +학과 +학교 +학급 +학기 +학년 +학력 +학번 +학부모 +학비 +학생 +학술 +학습 +학용품 +학원 +학위 +학자 +학점 +한계 +한글 +한꺼번에 +한낮 +한눈 +한동안 +한때 +한라산 +한마디 +한문 +한번 +한복 +한식 +한여름 +한쪽 +할머니 +할아버지 +할인 
+함께 +함부로 +합격 +합리적 +항공 +항구 +항상 +항의 +해결 +해군 +해답 +해당 +해물 +해석 +해설 +해수욕장 +해안 +핵심 +핸드백 +햄버거 +햇볕 +햇살 +행동 +행복 +행사 +행운 +행위 +향기 +향상 +향수 +허락 +허용 +헬기 +현관 +현금 +현대 +현상 +현실 +현장 +현재 +현지 +혈액 +협력 +형부 +형사 +형수 +형식 +형제 +형태 +형편 +혜택 +호기심 +호남 +호랑이 +호박 +호텔 +호흡 +혹시 +홀로 +홈페이지 +홍보 +홍수 +홍차 +화면 +화분 +화살 +화요일 +화장 +화학 +확보 +확인 +확장 +확정 +환갑 +환경 +환영 +환율 +환자 +활기 +활동 +활발히 +활용 +활짝 +회견 +회관 +회복 +회색 +회원 +회장 +회전 +횟수 +횡단보도 +효율적 +후반 +후춧가루 +훈련 +훨씬 +휴식 +휴일 +흉내 +흐름 +흑백 +흑인 +흔적 +흔히 +흥미 +흥분 +희곡 +희망 +희생 +흰색 +힘껏 diff --git a/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/spanish.txt b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/spanish.txt new file mode 100644 index 00000000..fdbc23c7 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/hdaccount/wordlist/spanish.txt @@ -0,0 +1,2048 @@ +ábaco +abdomen +abeja +abierto +abogado +abono +aborto +abrazo +abrir +abuelo +abuso +acabar +academia +acceso +acción +aceite +acelga +acento +aceptar +ácido +aclarar +acné +acoger +acoso +activo +acto +actriz +actuar +acudir +acuerdo +acusar +adicto +admitir +adoptar +adorno +aduana +adulto +aéreo +afectar +afición +afinar +afirmar +ágil +agitar +agonía +agosto +agotar +agregar +agrio +agua +agudo +águila +aguja +ahogo +ahorro +aire +aislar +ajedrez +ajeno +ajuste +alacrán +alambre +alarma +alba +álbum +alcalde +aldea +alegre +alejar +alerta +aleta +alfiler +alga +algodón +aliado +aliento +alivio +alma +almeja +almíbar +altar +alteza +altivo +alto +altura +alumno +alzar +amable +amante +amapola +amargo +amasar +ámbar +ámbito +ameno +amigo +amistad +amor +amparo +amplio +ancho +anciano +ancla +andar +andén +anemia +ángulo +anillo +ánimo +anís +anotar +antena +antiguo +antojo +anual +anular +anuncio +añadir +añejo +año +apagar +aparato +apetito +apio +aplicar +apodo +aporte +apoyo +aprender +aprobar +apuesta +apuro +arado +araña +arar +árbitro +árbol +arbusto +archivo +arco +arder +ardilla +arduo +área +árido +aries +armonía +arnés +aroma +arpa +arpón +arreglo +arroz +arruga +arte +artista +asa +asado +asalto +ascenso +asegurar +aseo +asesor +asiento +asilo +asistir +asno +asombro +áspero +astilla +astro +astuto +asumir +asunto +atajo +ataque +atar +atento +ateo +ático +atleta +átomo +atraer +atroz +atún +audaz +audio +auge +aula +aumento +ausente +autor +aval +avance +avaro +ave +avellana +avena +avestruz +avión +aviso +ayer +ayuda +ayuno +azafrán +azar +azote +azúcar +azufre +azul +baba +babor +bache +bahía +baile +bajar +balanza +balcón +balde +bambú +banco +banda +baño +barba +barco +barniz +barro +báscula +bastón +basura +batalla +batería +batir +batuta +baúl +bazar +bebé +bebida +bello +besar +beso +bestia +bicho +bien +bingo +blanco +bloque +blusa +boa +bobina +bobo +boca +bocina +boda +bodega +boina +bola +bolero +bolsa +bomba +bondad +bonito +bono +bonsái +borde +borrar +bosque +bote +botín +bóveda +bozal +bravo +brazo +brecha +breve +brillo +brinco +brisa +broca +broma +bronce +brote +bruja +brusco +bruto +buceo +bucle +bueno +buey +bufanda +bufón +búho +buitre +bulto +burbuja +burla +burro +buscar +butaca +buzón +caballo +cabeza +cabina +cabra +cacao +cadáver +cadena +caer +café +caída +caimán +caja +cajón +cal +calamar +calcio +caldo +calidad +calle +calma +calor +calvo +cama +cambio +camello +camino +campo +cáncer +candil +canela +canguro +canica +canto +caña +cañón +caoba +caos +capaz +capitán +capote +captar +capucha +cara +carbón +cárcel +careta +carga +cariño +carne +carpeta +carro +carta +casa +casco +casero +caspa +castor +catorce +catre +caudal +causa +cazo +cebolla +ceder +cedro +celda 
+célebre +celoso +célula +cemento +ceniza +centro +cerca +cerdo +cereza +cero +cerrar +certeza +césped +cetro +chacal +chaleco +champú +chancla +chapa +charla +chico +chiste +chivo +choque +choza +chuleta +chupar +ciclón +ciego +cielo +cien +cierto +cifra +cigarro +cima +cinco +cine +cinta +ciprés +circo +ciruela +cisne +cita +ciudad +clamor +clan +claro +clase +clave +cliente +clima +clínica +cobre +cocción +cochino +cocina +coco +código +codo +cofre +coger +cohete +cojín +cojo +cola +colcha +colegio +colgar +colina +collar +colmo +columna +combate +comer +comida +cómodo +compra +conde +conejo +conga +conocer +consejo +contar +copa +copia +corazón +corbata +corcho +cordón +corona +correr +coser +cosmos +costa +cráneo +cráter +crear +crecer +creído +crema +cría +crimen +cripta +crisis +cromo +crónica +croqueta +crudo +cruz +cuadro +cuarto +cuatro +cubo +cubrir +cuchara +cuello +cuento +cuerda +cuesta +cueva +cuidar +culebra +culpa +culto +cumbre +cumplir +cuna +cuneta +cuota +cupón +cúpula +curar +curioso +curso +curva +cutis +dama +danza +dar +dardo +dátil +deber +débil +década +decir +dedo +defensa +definir +dejar +delfín +delgado +delito +demora +denso +dental +deporte +derecho +derrota +desayuno +deseo +desfile +desnudo +destino +desvío +detalle +detener +deuda +día +diablo +diadema +diamante +diana +diario +dibujo +dictar +diente +dieta +diez +difícil +digno +dilema +diluir +dinero +directo +dirigir +disco +diseño +disfraz +diva +divino +doble +doce +dolor +domingo +don +donar +dorado +dormir +dorso +dos +dosis +dragón +droga +ducha +duda +duelo +dueño +dulce +dúo +duque +durar +dureza +duro +ébano +ebrio +echar +eco +ecuador +edad +edición +edificio +editor +educar +efecto +eficaz +eje +ejemplo +elefante +elegir +elemento +elevar +elipse +élite +elixir +elogio +eludir +embudo +emitir +emoción +empate +empeño +empleo +empresa +enano +encargo +enchufe +encía +enemigo +enero +enfado +enfermo +engaño +enigma +enlace +enorme +enredo +ensayo +enseñar +entero +entrar +envase +envío +época +equipo +erizo +escala +escena +escolar +escribir +escudo +esencia +esfera +esfuerzo +espada +espejo +espía +esposa +espuma +esquí +estar +este +estilo +estufa +etapa +eterno +ética +etnia +evadir +evaluar +evento +evitar +exacto +examen +exceso +excusa +exento +exigir +exilio +existir +éxito +experto +explicar +exponer +extremo +fábrica +fábula +fachada +fácil +factor +faena +faja +falda +fallo +falso +faltar +fama +familia +famoso +faraón +farmacia +farol +farsa +fase +fatiga +fauna +favor +fax +febrero +fecha +feliz +feo +feria +feroz +fértil +fervor +festín +fiable +fianza +fiar +fibra +ficción +ficha +fideo +fiebre +fiel +fiera +fiesta +figura +fijar +fijo +fila +filete +filial +filtro +fin +finca +fingir +finito +firma +flaco +flauta +flecha +flor +flota +fluir +flujo +flúor +fobia +foca +fogata +fogón +folio +folleto +fondo +forma +forro +fortuna +forzar +fosa +foto +fracaso +frágil +franja +frase +fraude +freír +freno +fresa +frío +frito +fruta +fuego +fuente +fuerza +fuga +fumar +función +funda +furgón +furia +fusil +fútbol +futuro +gacela +gafas +gaita +gajo +gala +galería +gallo +gamba +ganar +gancho +ganga +ganso +garaje +garza +gasolina +gastar +gato +gavilán +gemelo +gemir +gen +género +genio +gente +geranio +gerente +germen +gesto +gigante +gimnasio +girar +giro +glaciar +globo +gloria +gol +golfo +goloso +golpe +goma +gordo +gorila +gorra +gota +goteo +gozar +grada +gráfico +grano +grasa +gratis +grave +grieta +grillo +gripe +gris +grito +grosor +grúa +grueso +grumo +grupo +guante +guapo 
+guardia +guerra +guía +guiño +guion +guiso +guitarra +gusano +gustar +haber +hábil +hablar +hacer +hacha +hada +hallar +hamaca +harina +haz +hazaña +hebilla +hebra +hecho +helado +helio +hembra +herir +hermano +héroe +hervir +hielo +hierro +hígado +higiene +hijo +himno +historia +hocico +hogar +hoguera +hoja +hombre +hongo +honor +honra +hora +hormiga +horno +hostil +hoyo +hueco +huelga +huerta +hueso +huevo +huida +huir +humano +húmedo +humilde +humo +hundir +huracán +hurto +icono +ideal +idioma +ídolo +iglesia +iglú +igual +ilegal +ilusión +imagen +imán +imitar +impar +imperio +imponer +impulso +incapaz +índice +inerte +infiel +informe +ingenio +inicio +inmenso +inmune +innato +insecto +instante +interés +íntimo +intuir +inútil +invierno +ira +iris +ironía +isla +islote +jabalí +jabón +jamón +jarabe +jardín +jarra +jaula +jazmín +jefe +jeringa +jinete +jornada +joroba +joven +joya +juerga +jueves +juez +jugador +jugo +juguete +juicio +junco +jungla +junio +juntar +júpiter +jurar +justo +juvenil +juzgar +kilo +koala +labio +lacio +lacra +lado +ladrón +lagarto +lágrima +laguna +laico +lamer +lámina +lámpara +lana +lancha +langosta +lanza +lápiz +largo +larva +lástima +lata +látex +latir +laurel +lavar +lazo +leal +lección +leche +lector +leer +legión +legumbre +lejano +lengua +lento +leña +león +leopardo +lesión +letal +letra +leve +leyenda +libertad +libro +licor +líder +lidiar +lienzo +liga +ligero +lima +límite +limón +limpio +lince +lindo +línea +lingote +lino +linterna +líquido +liso +lista +litera +litio +litro +llaga +llama +llanto +llave +llegar +llenar +llevar +llorar +llover +lluvia +lobo +loción +loco +locura +lógica +logro +lombriz +lomo +lonja +lote +lucha +lucir +lugar +lujo +luna +lunes +lupa +lustro +luto +luz +maceta +macho +madera +madre +maduro +maestro +mafia +magia +mago +maíz +maldad +maleta +malla +malo +mamá +mambo +mamut +manco +mando +manejar +manga +maniquí +manjar +mano +manso +manta +mañana +mapa +máquina +mar +marco +marea +marfil +margen +marido +mármol +marrón +martes +marzo +masa +máscara +masivo +matar +materia +matiz +matriz +máximo +mayor +mazorca +mecha +medalla +medio +médula +mejilla +mejor +melena +melón +memoria +menor +mensaje +mente +menú +mercado +merengue +mérito +mes +mesón +meta +meter +método +metro +mezcla +miedo +miel +miembro +miga +mil +milagro +militar +millón +mimo +mina +minero +mínimo +minuto +miope +mirar +misa +miseria +misil +mismo +mitad +mito +mochila +moción +moda +modelo +moho +mojar +molde +moler +molino +momento +momia +monarca +moneda +monja +monto +moño +morada +morder +moreno +morir +morro +morsa +mortal +mosca +mostrar +motivo +mover +móvil +mozo +mucho +mudar +mueble +muela +muerte +muestra +mugre +mujer +mula +muleta +multa +mundo +muñeca +mural +muro +músculo +museo +musgo +música +muslo +nácar +nación +nadar +naipe +naranja +nariz +narrar +nasal +natal +nativo +natural +náusea +naval +nave +navidad +necio +néctar +negar +negocio +negro +neón +nervio +neto +neutro +nevar +nevera +nicho +nido +niebla +nieto +niñez +niño +nítido +nivel +nobleza +noche +nómina +noria +norma +norte +nota +noticia +novato +novela +novio +nube +nuca +núcleo +nudillo +nudo +nuera +nueve +nuez +nulo +número +nutria +oasis +obeso +obispo +objeto +obra +obrero +observar +obtener +obvio +oca +ocaso +océano +ochenta +ocho +ocio +ocre +octavo +octubre +oculto +ocupar +ocurrir +odiar +odio +odisea +oeste +ofensa +oferta +oficio +ofrecer +ogro +oído +oír +ojo +ola +oleada +olfato +olivo +olla +olmo +olor +olvido +ombligo +onda +onza +opaco +opción 
+ópera +opinar +oponer +optar +óptica +opuesto +oración +orador +oral +órbita +orca +orden +oreja +órgano +orgía +orgullo +oriente +origen +orilla +oro +orquesta +oruga +osadía +oscuro +osezno +oso +ostra +otoño +otro +oveja +óvulo +óxido +oxígeno +oyente +ozono +pacto +padre +paella +página +pago +país +pájaro +palabra +palco +paleta +pálido +palma +paloma +palpar +pan +panal +pánico +pantera +pañuelo +papá +papel +papilla +paquete +parar +parcela +pared +parir +paro +párpado +parque +párrafo +parte +pasar +paseo +pasión +paso +pasta +pata +patio +patria +pausa +pauta +pavo +payaso +peatón +pecado +pecera +pecho +pedal +pedir +pegar +peine +pelar +peldaño +pelea +peligro +pellejo +pelo +peluca +pena +pensar +peñón +peón +peor +pepino +pequeño +pera +percha +perder +pereza +perfil +perico +perla +permiso +perro +persona +pesa +pesca +pésimo +pestaña +pétalo +petróleo +pez +pezuña +picar +pichón +pie +piedra +pierna +pieza +pijama +pilar +piloto +pimienta +pino +pintor +pinza +piña +piojo +pipa +pirata +pisar +piscina +piso +pista +pitón +pizca +placa +plan +plata +playa +plaza +pleito +pleno +plomo +pluma +plural +pobre +poco +poder +podio +poema +poesía +poeta +polen +policía +pollo +polvo +pomada +pomelo +pomo +pompa +poner +porción +portal +posada +poseer +posible +poste +potencia +potro +pozo +prado +precoz +pregunta +premio +prensa +preso +previo +primo +príncipe +prisión +privar +proa +probar +proceso +producto +proeza +profesor +programa +prole +promesa +pronto +propio +próximo +prueba +público +puchero +pudor +pueblo +puerta +puesto +pulga +pulir +pulmón +pulpo +pulso +puma +punto +puñal +puño +pupa +pupila +puré +quedar +queja +quemar +querer +queso +quieto +química +quince +quitar +rábano +rabia +rabo +ración +radical +raíz +rama +rampa +rancho +rango +rapaz +rápido +rapto +rasgo +raspa +rato +rayo +raza +razón +reacción +realidad +rebaño +rebote +recaer +receta +rechazo +recoger +recreo +recto +recurso +red +redondo +reducir +reflejo +reforma +refrán +refugio +regalo +regir +regla +regreso +rehén +reino +reír +reja +relato +relevo +relieve +relleno +reloj +remar +remedio +remo +rencor +rendir +renta +reparto +repetir +reposo +reptil +res +rescate +resina +respeto +resto +resumen +retiro +retorno +retrato +reunir +revés +revista +rey +rezar +rico +riego +rienda +riesgo +rifa +rígido +rigor +rincón +riñón +río +riqueza +risa +ritmo +rito +rizo +roble +roce +rociar +rodar +rodeo +rodilla +roer +rojizo +rojo +romero +romper +ron +ronco +ronda +ropa +ropero +rosa +rosca +rostro +rotar +rubí +rubor +rudo +rueda +rugir +ruido +ruina +ruleta +rulo +rumbo +rumor +ruptura +ruta +rutina +sábado +saber +sabio +sable +sacar +sagaz +sagrado +sala +saldo +salero +salir +salmón +salón +salsa +salto +salud +salvar +samba +sanción +sandía +sanear +sangre +sanidad +sano +santo +sapo +saque +sardina +sartén +sastre +satán +sauna +saxofón +sección +seco +secreto +secta +sed +seguir +seis +sello +selva +semana +semilla +senda +sensor +señal +señor +separar +sepia +sequía +ser +serie +sermón +servir +sesenta +sesión +seta +setenta +severo +sexo +sexto +sidra +siesta +siete +siglo +signo +sílaba +silbar +silencio +silla +símbolo +simio +sirena +sistema +sitio +situar +sobre +socio +sodio +sol +solapa +soldado +soledad +sólido +soltar +solución +sombra +sondeo +sonido +sonoro +sonrisa +sopa +soplar +soporte +sordo +sorpresa +sorteo +sostén +sótano +suave +subir +suceso +sudor +suegra +suelo +sueño +suerte +sufrir +sujeto +sultán +sumar +superar +suplir +suponer +supremo +sur +surco +sureño +surgir 
+susto +sutil +tabaco +tabique +tabla +tabú +taco +tacto +tajo +talar +talco +talento +talla +talón +tamaño +tambor +tango +tanque +tapa +tapete +tapia +tapón +taquilla +tarde +tarea +tarifa +tarjeta +tarot +tarro +tarta +tatuaje +tauro +taza +tazón +teatro +techo +tecla +técnica +tejado +tejer +tejido +tela +teléfono +tema +temor +templo +tenaz +tender +tener +tenis +tenso +teoría +terapia +terco +término +ternura +terror +tesis +tesoro +testigo +tetera +texto +tez +tibio +tiburón +tiempo +tienda +tierra +tieso +tigre +tijera +tilde +timbre +tímido +timo +tinta +tío +típico +tipo +tira +tirón +titán +títere +título +tiza +toalla +tobillo +tocar +tocino +todo +toga +toldo +tomar +tono +tonto +topar +tope +toque +tórax +torero +tormenta +torneo +toro +torpedo +torre +torso +tortuga +tos +tosco +toser +tóxico +trabajo +tractor +traer +tráfico +trago +traje +tramo +trance +trato +trauma +trazar +trébol +tregua +treinta +tren +trepar +tres +tribu +trigo +tripa +triste +triunfo +trofeo +trompa +tronco +tropa +trote +trozo +truco +trueno +trufa +tubería +tubo +tuerto +tumba +tumor +túnel +túnica +turbina +turismo +turno +tutor +ubicar +úlcera +umbral +unidad +unir +universo +uno +untar +uña +urbano +urbe +urgente +urna +usar +usuario +útil +utopía +uva +vaca +vacío +vacuna +vagar +vago +vaina +vajilla +vale +válido +valle +valor +válvula +vampiro +vara +variar +varón +vaso +vecino +vector +vehículo +veinte +vejez +vela +velero +veloz +vena +vencer +venda +veneno +vengar +venir +venta +venus +ver +verano +verbo +verde +vereda +verja +verso +verter +vía +viaje +vibrar +vicio +víctima +vida +vídeo +vidrio +viejo +viernes +vigor +vil +villa +vinagre +vino +viñedo +violín +viral +virgo +virtud +visor +víspera +vista +vitamina +viudo +vivaz +vivero +vivir +vivo +volcán +volumen +volver +voraz +votar +voto +voz +vuelo +vulgar +yacer +yate +yegua +yema +yerno +yeso +yodo +yoga +yogur +zafiro +zanja +zapato +zarza +zona +zorro +zumo +zurdo diff --git a/env/lib/python3.8/site-packages/eth_account/messages.py b/env/lib/python3.8/site-packages/eth_account/messages.py new file mode 100644 index 00000000..6f9d368f --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/messages.py @@ -0,0 +1,239 @@ +from collections.abc import ( + Mapping, +) +import json +from typing import ( + NamedTuple, + Union, +) + +from eth_typing import ( + Address, + Hash32, +) +from eth_utils.curried import ( + ValidationError, + keccak, + text_if_str, + to_bytes, + to_canonical_address, + to_text, +) +from hexbytes import ( + HexBytes, +) + +from eth_account._utils.structured_data.hashing import ( + hash_domain, + hash_message as hash_eip712_message, + load_and_validate_structured_message, +) +from eth_account._utils.validation import ( + is_valid_address, +) + +text_to_bytes = text_if_str(to_bytes) + + +# watch for updates to signature format +class SignableMessage(NamedTuple): + """ + A message compatible with EIP-191_ that is ready to be signed. + + The properties are components of an EIP-191_ signable message. Other message formats + can be encoded into this format for easy signing. This data structure doesn't need to + know about the original message format. For example, you can think of + EIP-712 as compiling down to an EIP-191 message. + + In typical usage, you should never need to create these by hand. Instead, use + one of the available encode_* methods in this module, like: + + - :meth:`encode_structured_data` + - :meth:`encode_intended_validator` + - :meth:`encode_defunct` + + .. 
_EIP-191: https://eips.ethereum.org/EIPS/eip-191 + """ + version: bytes # must be length 1 + header: bytes # aka "version specific data" + body: bytes # aka "data to sign" + + +def _hash_eip191_message(signable_message: SignableMessage) -> Hash32: + version = signable_message.version + if len(version) != 1: + raise ValidationError( + f"The supplied message version is {version!r}. " + "The EIP-191 signable message standard only supports one-byte versions." + ) + + joined = b'\x19' + version + signable_message.header + signable_message.body + return Hash32(keccak(joined)) + + +# watch for updates to signature format +def encode_intended_validator( + validator_address: Union[Address, str], + primitive: bytes = None, + *, + hexstr: str = None, + text: str = None) -> SignableMessage: + """ + Encode a message using the "intended validator" approach (ie~ version 0) defined in EIP-191_. + + Supply the message as exactly one of these three arguments: + bytes as a primitive, a hex string, or a unicode string. + + .. WARNING:: Note that this code has not gone through an external audit. + Also, watch for updates to the format, as the EIP is still in DRAFT. + + :param validator_address: which on-chain contract is capable of validating this message, + provided as a checksummed address or in native bytes. + :param primitive: the binary message to be signed + :type primitive: bytes or int + :param str hexstr: the message encoded as hex + :param str text: the message as a series of unicode characters (a normal Py3 str) + :returns: The EIP-191 encoded message, ready for signing + + .. _EIP-191: https://eips.ethereum.org/EIPS/eip-191 + """ + if not is_valid_address(validator_address): + raise ValidationError( + f"Cannot encode message with 'Validator Address': {validator_address!r}. " + "It must be a checksum address, or an address converted to bytes." + ) + # The validator_address is a str or Address (which is a subtype of bytes). Both of + # these are AnyStr, which includes str and bytes. Not sure why mypy complains here... + canonical_address = to_canonical_address(validator_address) + message_bytes = to_bytes(primitive, hexstr=hexstr, text=text) + return SignableMessage( + HexBytes(b'\x00'), # version 0, as defined in EIP-191 + canonical_address, + message_bytes, + ) + + +def encode_structured_data( + primitive: Union[bytes, int, Mapping] = None, + *, + hexstr: str = None, + text: str = None) -> SignableMessage: + """ + Encode an EIP-712_ message. + + EIP-712 is the "structured data" approach (ie~ version 1 of an EIP-191 message). + + Supply the message as exactly one of the three arguments: + + - primitive, as a dict that defines the structured data + - primitive, as bytes + - text, as a json-encoded string + - hexstr, as a hex-encoded (json-encoded) string + + .. WARNING:: Note that this code has not gone through an external audit, and + the test cases are incomplete. + Also, watch for updates to the format, as the EIP is still in DRAFT. + + :param primitive: the binary message to be signed + :type primitive: bytes or int or Mapping (eg~ dict ) + :param hexstr: the message encoded as hex + :param text: the message as a series of unicode characters (a normal Py3 str) + :returns: The EIP-191 encoded message, ready for signing + + .. 
_EIP-712: https://eips.ethereum.org/EIPS/eip-712 + """ + if isinstance(primitive, Mapping): + message_string = json.dumps(primitive) + else: + message_string = to_text(primitive, hexstr=hexstr, text=text) + structured_data = load_and_validate_structured_message(message_string) + return SignableMessage( + HexBytes(b'\x01'), + hash_domain(structured_data), + hash_eip712_message(structured_data), + ) + + +def encode_defunct( + primitive: bytes = None, + *, + hexstr: str = None, + text: str = None) -> SignableMessage: + r""" + Encode a message for signing, using an old, unrecommended approach. + + Only use this method if you must have compatibility with + :meth:`w3.eth.sign()`. + + EIP-191 defines this as "version ``E``". + + .. NOTE: This standard includes the number of bytes in the message as a part of the header. + Awkwardly, the number of bytes in the message is encoded in decimal ascii. + So if the message is 'abcde', then the length is encoded as the ascii + character '5'. This is one of the reasons that this message format is not preferred. + There is ambiguity when the message '00' is encoded, for example. + + Supply exactly one of the three arguments: bytes, a hex string, or a unicode string. + + :param primitive: the binary message to be signed + :type primitive: bytes or int + :param str hexstr: the message encoded as hex + :param str text: the message as a series of unicode characters (a normal Py3 str) + :returns: The EIP-191 encoded message, ready for signing + + .. doctest:: python + + >>> from eth_account.messages import encode_defunct + >>> from eth_utils.curried import to_hex, to_bytes + + >>> message_text = "I♥SF" + >>> encode_defunct(text=message_text) + SignableMessage(version=b'E', header=b'thereum Signed Message:\n6', body=b'I\xe2\x99\xa5SF') + + These four also produce the same hash: + >>> encode_defunct(to_bytes(text=message_text)) + SignableMessage(version=b'E', header=b'thereum Signed Message:\n6', body=b'I\xe2\x99\xa5SF') + + >>> encode_defunct(bytes(message_text, encoding='utf-8')) + SignableMessage(version=b'E', header=b'thereum Signed Message:\n6', body=b'I\xe2\x99\xa5SF') + + >>> to_hex(text=message_text) + '0x49e299a55346' + >>> encode_defunct(hexstr='0x49e299a55346') + SignableMessage(version=b'E', header=b'thereum Signed Message:\n6', body=b'I\xe2\x99\xa5SF') + + >>> encode_defunct(0x49e299a55346) + SignableMessage(version=b'E', header=b'thereum Signed Message:\n6', body=b'I\xe2\x99\xa5SF') + """ + message_bytes = to_bytes(primitive, hexstr=hexstr, text=text) + msg_length = str(len(message_bytes)).encode('utf-8') + + # Encoding version E defined by EIP-191 + return SignableMessage( + b'E', + b'thereum Signed Message:\n' + msg_length, + message_bytes, + ) + + +def defunct_hash_message( + primitive: bytes = None, + *, + hexstr: str = None, + text: str = None) -> HexBytes: + """ + Convert the provided message into a message hash, to be signed. + + .. CAUTION:: Intended for use with the deprecated :meth:`eth_account.account.Account.signHash`. + This is for backwards compatibility only. All new implementations + should use :meth:`encode_defunct` instead. 
+ + :param primitive: the binary message to be signed + :type primitive: bytes or int + :param str hexstr: the message encoded as hex + :param str text: the message as a series of unicode characters (a normal Py3 str) + :returns: The hash of the message, after adding the prefix + """ + signable = encode_defunct(primitive, hexstr=hexstr, text=text) + hashed = _hash_eip191_message(signable) + return HexBytes(hashed) diff --git a/env/lib/python3.8/site-packages/eth_account/py.typed b/env/lib/python3.8/site-packages/eth_account/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_account/signers/__init__.py b/env/lib/python3.8/site-packages/eth_account/signers/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_account/signers/base.py b/env/lib/python3.8/site-packages/eth_account/signers/base.py new file mode 100644 index 00000000..58d3ade9 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/signers/base.py @@ -0,0 +1,100 @@ +from abc import ( + ABC, + abstractmethod, +) + +from eth_account.messages import ( + SignableMessage, +) + + +class BaseAccount(ABC): + """ + Specify convenience methods to sign transactions and message hashes. + """ + + @property + @abstractmethod + def address(self): + """ + The checksummed public address for this account. + + .. code-block:: python + + >>> my_account.address # doctest: +SKIP + "0xF0109fC8DF283027b6285cc889F5aA624EaC1F55" + + """ + pass + + @abstractmethod + def sign_message(self, signable_message: SignableMessage): + """ + Sign the EIP-191_ message. + + This uses the same structure + as in :meth:`~eth_account.account.Account.sign_message` + but without specifying the private key. + + :param signable_message: The encoded message, ready for signing + + .. _EIP-191: https://eips.ethereum.org/EIPS/eip-191 + """ + pass + + @abstractmethod + def signHash(self, message_hash): + """ + Sign the hash of a message. + + This uses the same structure + as in :meth:`~eth_account.account.Account.signHash` + but without specifying the private key. + + .. CAUTION:: Deprecated for :meth:`~eth_account.signers.base.BaseAccount.sign_message`. + To be removed in v0.6 + + :param bytes message_hash: 32 byte hash of the message to sign + """ + pass + + @abstractmethod + def signTransaction(self, transaction_dict): + """ + Sign a transaction dict. + + This uses the same structure as in + :meth:`~eth_account.account.Account.sign_transaction` + but without specifying the private key. + + .. CAUTION:: Deprecated for :meth:`~eth_account.signers.local.LocalAccount.sign_transaction`. + This method will be removed in v0.6 + + :param dict transaction_dict: transaction with all fields specified + """ + pass + + @abstractmethod + def sign_transaction(self, transaction_dict): + """ + Sign a transaction dict. + + This uses the same structure as in + :meth:`~eth_account.account.Account.sign_transaction` + but without specifying the private key. + + :param dict transaction_dict: transaction with all fields specified + """ + pass + + def __eq__(self, other): + """ + Equality test between two accounts. + + Two accounts are considered the same if they are exactly the same type, + and can sign for the same address. 
+ """ + return type(self) == type(other) and self.address == other.address + + def __hash__(self): + return hash((type(self), self.address)) diff --git a/env/lib/python3.8/site-packages/eth_account/signers/local.py b/env/lib/python3.8/site-packages/eth_account/signers/local.py new file mode 100644 index 00000000..ce78b156 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_account/signers/local.py @@ -0,0 +1,102 @@ +import warnings + +from eth_account.signers.base import ( + BaseAccount, +) + + +class LocalAccount(BaseAccount): + r""" + A collection of convenience methods to sign and encrypt, with an embedded private key. + + :var bytes key: the 32-byte private key data + + .. code-block:: python + + >>> my_local_account.address # doctest: +SKIP + "0xF0109fC8DF283027b6285cc889F5aA624EaC1F55" + >>> my_local_account.key # doctest: +SKIP + b"\x01\x23..." + + You can also get the private key by casting the account to :class:`bytes`: + + .. code-block:: python + + >>> bytes(my_local_account) # doctest: +SKIP + b"\\x01\\x23..." + """ + def __init__(self, key, account): + """ + Initialize a new account with the the given private key. + + :param eth_keys.PrivateKey key: to prefill in private key execution + :param ~eth_account.account.Account account: the key-unaware management API + """ + self._publicapi = account + + self._address = key.public_key.to_checksum_address() + + key_raw = key.to_bytes() + self._private_key = key_raw + + self._key_obj = key + + @property + def address(self): + return self._address + + @property + def privateKey(self): + """ + .. CAUTION:: Deprecated for :meth:`~eth_account.signers.local.LocalAccount.key`. + This attribute will be removed in v0.5 + """ + warnings.warn( + "privateKey is deprecated in favor of key", + category=DeprecationWarning, + ) + return self._private_key + + @property + def key(self): + """ + Get the private key. + """ + return self._private_key + + def encrypt(self, password, kdf=None, iterations=None): + """ + Generate a string with the encrypted key. + + This uses the same structure as in + :meth:`~eth_account.account.Account.encrypt`, but without a private key argument. + """ + return self._publicapi.encrypt(self.key, password, kdf=kdf, iterations=iterations) + + def signHash(self, message_hash): + return self._publicapi.signHash( + message_hash, + private_key=self.key, + ) + + def sign_message(self, signable_message): + """ + Generate a string with the encrypted key. + + This uses the same structure as in + :meth:`~eth_account.account.Account.sign_message`, but without a private key argument. 
+ """ + return self._publicapi.sign_message(signable_message, private_key=self.key) + + def signTransaction(self, transaction_dict): + warnings.warn( + "signTransaction is deprecated in favor of sign_transaction", + category=DeprecationWarning, + ) + return self.sign_transaction(transaction_dict) + + def sign_transaction(self, transaction_dict): + return self._publicapi.sign_transaction(transaction_dict, self.key) + + def __bytes__(self): + return self.key diff --git a/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/INSTALLER b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/LICENSE b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/LICENSE new file mode 100644 index 00000000..17bc694e --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2020 The Ethereum Foundation + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
diff --git a/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/METADATA b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/METADATA new file mode 100644 index 00000000..94604e72 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/METADATA @@ -0,0 +1,178 @@ +Metadata-Version: 2.1 +Name: eth-hash +Version: 0.5.1 +Summary: eth-hash: The Ethereum hashing function, keccak256, sometimes (erroneously) called sha3 +Home-page: https://github.com/ethereum/eth-hash +Author: The Ethereum Foundation +Author-email: snakecharmers@ethereum.org +License: MIT +Keywords: ethereum +Classifier: Development Status :: 3 - Alpha +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Natural Language :: English +Classifier: Operating System :: MacOS +Classifier: Operating System :: POSIX +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: Implementation :: PyPy +Requires-Python: >=3.7, <4 +Description-Content-Type: text/markdown +License-File: LICENSE +Provides-Extra: dev +Requires-Dist: bumpversion (<1,>=0.5.3) ; extra == 'dev' +Requires-Dist: pytest-watch (<5,>=4.1.0) ; extra == 'dev' +Requires-Dist: wheel ; extra == 'dev' +Requires-Dist: twine ; extra == 'dev' +Requires-Dist: ipython ; extra == 'dev' +Requires-Dist: pytest (<7,>=6.2.5) ; extra == 'dev' +Requires-Dist: pytest-xdist (<3,>=2.4.0) ; extra == 'dev' +Requires-Dist: tox (<4,>=3.14.6) ; extra == 'dev' +Requires-Dist: flake8 (==3.7.9) ; extra == 'dev' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'dev' +Requires-Dist: mypy (==0.961) ; extra == 'dev' +Requires-Dist: pydocstyle (<6,>=5.0.0) ; extra == 'dev' +Requires-Dist: black (<23,>=22.0) ; extra == 'dev' +Requires-Dist: Sphinx (<6,>=5.0.0) ; extra == 'dev' +Requires-Dist: sphinx-rtd-theme (<1,>=0.1.9) ; extra == 'dev' +Requires-Dist: towncrier (<22,>=21) ; extra == 'dev' +Requires-Dist: jinja2 (<3.1.0,>=3.0.0) ; extra == 'dev' +Provides-Extra: doc +Requires-Dist: Sphinx (<6,>=5.0.0) ; extra == 'doc' +Requires-Dist: sphinx-rtd-theme (<1,>=0.1.9) ; extra == 'doc' +Requires-Dist: towncrier (<22,>=21) ; extra == 'doc' +Requires-Dist: jinja2 (<3.1.0,>=3.0.0) ; extra == 'doc' +Provides-Extra: lint +Requires-Dist: flake8 (==3.7.9) ; extra == 'lint' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'lint' +Requires-Dist: mypy (==0.961) ; extra == 'lint' +Requires-Dist: pydocstyle (<6,>=5.0.0) ; extra == 'lint' +Requires-Dist: black (<23,>=22.0) ; extra == 'lint' +Provides-Extra: pycryptodome +Requires-Dist: pycryptodome (<4,>=3.6.6) ; extra == 'pycryptodome' +Provides-Extra: pysha3 +Requires-Dist: pysha3 (<2.0.0,>=1.0.0) ; (python_version < "3.9") and extra == 'pysha3' +Requires-Dist: safe-pysha3 (>=1.0.0) ; (python_version >= "3.9") and extra == 'pysha3' +Provides-Extra: test +Requires-Dist: pytest (<7,>=6.2.5) ; extra == 'test' +Requires-Dist: pytest-xdist (<3,>=2.4.0) ; extra == 'test' +Requires-Dist: tox (<4,>=3.14.6) ; extra == 'test' + +# eth-hash + +[![Join the chat at https://gitter.im/ethereum/web3.py](https://badges.gitter.im/ethereum/web3.py.svg)](https://gitter.im/ethereum/web3.py?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +[![Build 
Status](https://circleci.com/gh/ethereum/eth-hash.svg?style=shield)](https://circleci.com/gh/ethereum/eth-hash) +[![PyPI version](https://badge.fury.io/py/eth-hash.svg)](https://badge.fury.io/py/eth-hash) +[![Python versions](https://img.shields.io/pypi/pyversions/eth-hash.svg)](https://pypi.python.org/pypi/eth-hash) +[![Docs build](https://readthedocs.org/projects/eth-hash/badge/?version=latest)](http://eth-hash.readthedocs.io/en/latest/?badge=latest) + + +The Ethereum hashing function, keccak256, sometimes (erroneously) called sha3 + +Note: the similarly named [pyethash](https://github.com/ethereum/ethash) +has a completely different use: it generates proofs of work. + +This is a low-level library, intended to be used internally by other Ethereum tools. +If you're looking for a convenient hashing tool, check out +[`eth_utils.keccak()`](https://eth-utils.readthedocs.io/en/stable/utilities.html#keccak-bytes-int-bool-text-str-hexstr-str-bytes) +which will be a little friendlier, and provide access to other helpful utilities. + +Read more in the [documentation on ReadTheDocs](https://eth-hash.readthedocs.io/). [View the change log](https://eth-hash.readthedocs.io/en/latest/release_notes.html). + + +## Quickstart + +```sh +pip install eth-hash[pycryptodome] +``` + +```py +>>> from eth_hash.auto import keccak +>>> keccak(b'') +b"\xc5\xd2F\x01\x86\xf7#<\x92~}\xb2\xdc\xc7\x03\xc0\xe5\x00\xb6S\xca\x82';{\xfa\xd8\x04]\x85\xa4p" +``` + +See the [docs](http://eth-hash.readthedocs.io/en/latest/quickstart.html#quickstart) +for more about choosing and installing backends. + +## Developer Setup + +If you would like to hack on eth-hash, please check out the [Snake Charmers +Tactical Manual](https://github.com/ethereum/snake-charmers-tactical-manual) +for information on how we do: + +- Testing +- Pull Requests +- Code Style +- Documentation + +### Development Environment Setup + +You can set up your dev environment with: + +```sh +git clone git@github.com:ethereum/eth-hash.git +cd eth-hash +virtualenv -p python3 venv +. venv/bin/activate +pip install -e .[dev] +``` + +### Testing Setup + +During development, you might like to have tests run on every file save. + +Show flake8 errors on file change: + +```sh +# Test flake8 +when-changed -v -s -r -1 eth_hash/ tests/ -c "clear; flake8 eth_hash tests && echo 'flake8 success' || echo 'error'" +``` + +Run multi-process tests in one command, but without color: + +```sh +# in the project root: +pytest --numprocesses=4 --looponfail --maxfail=1 +# the same thing, succinctly: +pytest -n 4 -f --maxfail=1 +``` + +Run in one thread, with color and desktop notifications: + +```sh +cd venv +ptw --onfail "notify-send -t 5000 'Test failure ⚠⚠⚠⚠⚠' 'python 3 test on eth-hash failed'" ../tests ../eth_hash +``` + +### Release setup + +For Debian-like systems: +``` +apt install pandoc +``` + +To release a new version: + +```sh +make release bump=$$VERSION_PART_TO_BUMP$$ +``` + +#### How to bumpversion + +The version format for this repo is `{major}.{minor}.{patch}` for stable, and +`{major}.{minor}.{patch}-{stage}.{devnum}` for unstable (`stage` can be alpha or beta). + +To issue the next version in line, specify which part to bump, +like `make release bump=minor` or `make release bump=devnum`. This is typically done from the +master branch, except when releasing a beta (in which case the beta is released from master, +and the previous stable branch is released from said branch). + +If you are in a beta version, `make release bump=stage` will switch to a stable. 
+ +To issue an unstable version when the current version is stable, specify the +new version explicitly, like `make release bump="--new-version 4.0.0-alpha.1 devnum"` diff --git a/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/RECORD b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/RECORD new file mode 100644 index 00000000..dba8c03e --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/RECORD @@ -0,0 +1,25 @@ +eth_hash-0.5.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +eth_hash-0.5.1.dist-info/LICENSE,sha256=390jwXuDReXaeTef0eT63FrFKogAMVgYBUWl1oGk4ec,1090 +eth_hash-0.5.1.dist-info/METADATA,sha256=ZKQPLaUWessBnudYJDYvtSXfWkPXabVzhMF-VJNJJVY,6750 +eth_hash-0.5.1.dist-info/RECORD,, +eth_hash-0.5.1.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +eth_hash-0.5.1.dist-info/top_level.txt,sha256=bqykoQYaTTUqN3y1-QNtlzC6GXEn6j0Qc7b2sYx-1Gk,9 +eth_hash/__init__.py,sha256=h_c2lJ07Qbk2BSaRq1RPt7xzMA72p_7sfL8bmyCzycU,51 +eth_hash/__pycache__/__init__.cpython-38.pyc,, +eth_hash/__pycache__/abc.cpython-38.pyc,, +eth_hash/__pycache__/auto.cpython-38.pyc,, +eth_hash/__pycache__/main.cpython-38.pyc,, +eth_hash/__pycache__/utils.cpython-38.pyc,, +eth_hash/abc.py,sha256=qtBc9kGrrB1YLyN_8JvELhZnpnD4aExH-mcly_oIHTw,629 +eth_hash/auto.py,sha256=eMuwC5zz5VfORSHBf521ryX-U4-nhmHbWIYoomdYCuY,136 +eth_hash/backends/__init__.py,sha256=4IqwM3EtKs_hkPrya67s4_TGCJVdBfAzRmKF37P0SHw,352 +eth_hash/backends/__pycache__/__init__.cpython-38.pyc,, +eth_hash/backends/__pycache__/auto.cpython-38.pyc,, +eth_hash/backends/__pycache__/pycryptodome.cpython-38.pyc,, +eth_hash/backends/__pycache__/pysha3.cpython-38.pyc,, +eth_hash/backends/auto.py,sha256=Jki3oad_dbW_ZMP9-DxcNSZYIvhcxri3GpKsDP2ozbk,753 +eth_hash/backends/pycryptodome.py,sha256=fTOXR0PrnIAKCk284HbD5ZIMxNh1qA-ndDNTWNjp6IE,1212 +eth_hash/backends/pysha3.py,sha256=lD0Vh84Y5z0INsQHm84Bg-QMV0MGrvEQJZa6cVrzA2k,960 +eth_hash/main.py,sha256=ZomEuQT9Owgo5vG0cs-HE3_u4RZJYaNJUUb4y-M0I2I,2018 +eth_hash/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +eth_hash/utils.py,sha256=Iww3qanmOHXVaAiq11VKa2A6mqh5ulGmmCUuDoffXVQ,2293 diff --git a/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/WHEEL b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/top_level.txt b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/top_level.txt new file mode 100644 index 00000000..7544003d --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash-0.5.1.dist-info/top_level.txt @@ -0,0 +1 @@ +eth_hash diff --git a/env/lib/python3.8/site-packages/eth_hash/__init__.py b/env/lib/python3.8/site-packages/eth_hash/__init__.py new file mode 100644 index 00000000..5e1a1ecd --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash/__init__.py @@ -0,0 +1,3 @@ +from .main import ( # noqa: F401 + Keccak256, +) diff --git a/env/lib/python3.8/site-packages/eth_hash/abc.py b/env/lib/python3.8/site-packages/eth_hash/abc.py new file mode 100644 index 00000000..856601e5 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash/abc.py @@ -0,0 +1,35 @@ +from abc import ( + ABC, + abstractmethod, +) +from typing import ( + Union, +) + + +class PreImageAPI(ABC): + @abstractmethod + 
def __init__(self, value: bytes) -> None:
+        ...
+
+    @abstractmethod
+    def update(self, value: bytes) -> None:
+        ...
+
+    @abstractmethod
+    def digest(self) -> bytes:
+        ...
+
+    @abstractmethod
+    def copy(self) -> "PreImageAPI":
+        ...
+
+
+class BackendAPI(ABC):
+    @abstractmethod
+    def keccak256(self, in_data: Union[bytearray, bytes]) -> bytes:
+        ...
+
+    @abstractmethod
+    def preimage(self, in_data: Union[bytearray, bytes]) -> PreImageAPI:
+        ...
diff --git a/env/lib/python3.8/site-packages/eth_hash/auto.py b/env/lib/python3.8/site-packages/eth_hash/auto.py
new file mode 100644
index 00000000..73c6d387
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_hash/auto.py
@@ -0,0 +1,8 @@
+from eth_hash.backends.auto import (
+    AutoBackend,
+)
+from eth_hash.main import (
+    Keccak256,
+)
+
+keccak = Keccak256(AutoBackend())
diff --git a/env/lib/python3.8/site-packages/eth_hash/backends/__init__.py b/env/lib/python3.8/site-packages/eth_hash/backends/__init__.py
new file mode 100644
index 00000000..fbe00fe3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_hash/backends/__init__.py
@@ -0,0 +1,14 @@
+"""
+A collection of optional backends that implement hashing.
+
+You must manually select and install the backend you want. If the backend is
+not installed, then trying to import the module for that backend will cause an
+:class:`ImportError`.
+
+See :ref:`Choose a hashing backend` for more.
+"""
+
+SUPPORTED_BACKENDS = [
+    "pycryptodome",
+    "pysha3",
+]
diff --git a/env/lib/python3.8/site-packages/eth_hash/backends/auto.py b/env/lib/python3.8/site-packages/eth_hash/backends/auto.py
new file mode 100644
index 00000000..f0e2c341
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_hash/backends/auto.py
@@ -0,0 +1,28 @@
+from typing import (
+    Union,
+)
+
+from eth_hash.abc import (
+    BackendAPI,
+    PreImageAPI,
+)
+from eth_hash.utils import (
+    auto_choose_backend,
+)
+
+
+class AutoBackend(BackendAPI):
+    def _initialize(self) -> None:
+        backend = auto_choose_backend()
+        # Use setattr to circumvent mypy's confusion, see:
+        # https://github.com/python/mypy/issues/2427
+        setattr(self, "keccak256", backend.keccak256)
+        setattr(self, "preimage", backend.preimage)
+
+    def keccak256(self, in_data: Union[bytearray, bytes]) -> bytes:
+        self._initialize()
+        return self.keccak256(in_data)
+
+    def preimage(self, in_data: Union[bytearray, bytes]) -> PreImageAPI:
+        self._initialize()
+        return self.preimage(in_data)
diff --git a/env/lib/python3.8/site-packages/eth_hash/backends/pycryptodome.py b/env/lib/python3.8/site-packages/eth_hash/backends/pycryptodome.py
new file mode 100644
index 00000000..75eba0a8
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_hash/backends/pycryptodome.py
@@ -0,0 +1,45 @@
+from typing import (
+    Union,
+)
+
+from Crypto.Hash import (
+    keccak,
+)
+
+from eth_hash.abc import (
+    BackendAPI,
+    PreImageAPI,
+)
+
+
+class CryptodomePreimage(PreImageAPI):
+    def __init__(self, prehash: bytes) -> None:
+        self._hash = keccak.new(data=prehash, digest_bits=256, update_after_digest=True)
+        # pycryptodome doesn't expose a `copy` mechanism for its hash objects
+        # so we keep a record of all of the parts for when/if we need to copy
+        # them.
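+        # (`copy` below replays the concatenated parts through a fresh
+        # hasher, which reproduces the same internal state.)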
+ self._parts = [prehash] + + def update(self, prehash: bytes) -> None: + self._hash.update(prehash) + self._parts.append(prehash) + + def digest(self) -> bytes: + return self._hash.digest() + + def copy(self) -> "CryptodomePreimage": + return CryptodomePreimage(b"".join(self._parts)) + + +class CryptodomeBackend(BackendAPI): + def keccak256(self, prehash: Union[bytearray, bytes]) -> bytes: + hasher = keccak.new(data=prehash, digest_bits=256) + return hasher.digest() + + def preimage(self, prehash: Union[bytearray, bytes]) -> PreImageAPI: + return CryptodomePreimage(prehash) + + +backend = CryptodomeBackend() +keccak256 = backend.keccak256 +preimage = backend.preimage diff --git a/env/lib/python3.8/site-packages/eth_hash/backends/pysha3.py b/env/lib/python3.8/site-packages/eth_hash/backends/pysha3.py new file mode 100644 index 00000000..758afe7d --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash/backends/pysha3.py @@ -0,0 +1,41 @@ +from typing import ( + Union, +) + +from sha3 import ( + keccak_256 as _keccak_256, +) + +from eth_hash.abc import ( + BackendAPI, + PreImageAPI, +) + + +class Pysha3Preimage(PreImageAPI): + def __init__(self, prehash: bytes) -> None: + self._hash = _keccak_256(prehash) + + def update(self, prehash: bytes) -> None: + return self._hash.update(prehash) # type: ignore + + def digest(self) -> bytes: + return self._hash.digest() # type: ignore + + def copy(self) -> "Pysha3Preimage": + dup = Pysha3Preimage(b"") + dup._hash = self._hash.copy() + return dup + + +class PySha3Backend(BackendAPI): + def keccak256(self, prehash: Union[bytearray, bytes]) -> bytes: + return _keccak_256(prehash).digest() # type: ignore + + def preimage(self, prehash: Union[bytearray, bytes]) -> PreImageAPI: + return Pysha3Preimage(prehash) + + +backend = PySha3Backend() +keccak256 = backend.keccak256 +preimage = backend.preimage diff --git a/env/lib/python3.8/site-packages/eth_hash/main.py b/env/lib/python3.8/site-packages/eth_hash/main.py new file mode 100644 index 00000000..88fc3625 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash/main.py @@ -0,0 +1,58 @@ +from typing import ( + Union, +) + +from .abc import ( + BackendAPI, + PreImageAPI, +) + + +class Keccak256: + def __init__(self, backend: BackendAPI) -> None: + self._backend = backend + self.hasher = self._hasher_first_run + self.preimage = self._preimage_first_run + + def _hasher_first_run(self, in_data: Union[bytearray, bytes]) -> bytes: + """ + Validate, on first-run, that the hasher backend is valid. + + After first run, replace this with the new hasher method. + This is a bit of a hacky way to minimize overhead on hash calls after + this first one. 
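+
+        The assertion below sanity-checks the chosen backend against the
+        known keccak256 hash of the empty byte string.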
+ """ + # Execute directly before saving method, + # to let any side-effects settle (see AutoBackend) + result = self._backend.keccak256(in_data) + new_hasher = self._backend.keccak256 + assert ( + new_hasher(b"") + == b"\xc5\xd2F\x01\x86\xf7#<\x92~}\xb2\xdc\xc7\x03\xc0\xe5\x00\xb6S\xca\x82';\x7b\xfa\xd8\x04]\x85\xa4p" # noqa: E501 + ) + self.hasher = new_hasher + return result + + def _preimage_first_run(self, in_data: Union[bytearray, bytes]) -> PreImageAPI: + # Execute directly before saving method, + # to let any side-effects settle (see AutoBackend) + result = self._backend.preimage(in_data) + self.preimage = self._backend.preimage + return result + + def __call__(self, preimage: Union[bytearray, bytes]) -> bytes: + if not isinstance(preimage, (bytearray, bytes)): + raise TypeError( + "Can only compute the hash of `bytes` or `bytearray` values, not %r" + % preimage + ) + + return self.hasher(preimage) + + def new(self, preimage: Union[bytearray, bytes]) -> PreImageAPI: + if not isinstance(preimage, (bytearray, bytes)): + raise TypeError( + "Can only compute the hash of `bytes` or `bytearray` values, not %r" + % preimage + ) + return self.preimage(preimage) diff --git a/env/lib/python3.8/site-packages/eth_hash/py.typed b/env/lib/python3.8/site-packages/eth_hash/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_hash/utils.py b/env/lib/python3.8/site-packages/eth_hash/utils.py new file mode 100644 index 00000000..57c7c09f --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_hash/utils.py @@ -0,0 +1,84 @@ +import importlib +import logging +import os + +from eth_hash.abc import ( + BackendAPI, +) +from eth_hash.backends import ( + SUPPORTED_BACKENDS, +) + + +def auto_choose_backend() -> BackendAPI: + env_backend = get_backend_in_environment() + + if env_backend: + return load_environment_backend(env_backend) + else: + return choose_available_backend() + + +def get_backend_in_environment() -> str: + return os.environ.get("ETH_HASH_BACKEND", "") + + +def load_backend(backend_name: str) -> BackendAPI: + import_path = "eth_hash.backends.%s" % backend_name + module = importlib.import_module(import_path) + + try: + backend = module.backend + except AttributeError as e: + raise ValueError( + "Import of %s failed, because %r does not have 'backend' attribute" + % ( + import_path, + module, + ) + ) from e + + if isinstance(backend, BackendAPI): + return backend + else: + raise ValueError( + "Import of %s failed, because %r is an invalid back end" + % ( + import_path, + backend, + ) + ) + + +def load_environment_backend(env_backend: str) -> BackendAPI: + if env_backend in SUPPORTED_BACKENDS: + try: + return load_backend(env_backend) + except ImportError as e: + raise ImportError( + "The backend specified in ETH_HASH_BACKEND, '{0}', is not installed. " + "Install with `pip install eth-hash[{0}]`.".format(env_backend) + ) from e + else: + raise ValueError( + "The backend specified in ETH_HASH_BACKEND, %r, is not supported. " + "Choose one of: %r" % (env_backend, SUPPORTED_BACKENDS) + ) + + +def choose_available_backend() -> BackendAPI: + for backend in SUPPORTED_BACKENDS: + try: + return load_backend(backend) + except ImportError: + logging.getLogger("eth_hash").debug( + "Failed to import %s", backend, exc_info=True + ) + raise ImportError( + "None of these hashing backends are installed: %r.\n" + "Install with `pip install eth-hash[%s]`." 
+        % (
+            SUPPORTED_BACKENDS,
+            SUPPORTED_BACKENDS[0],
+        )
+    )
diff --git a/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/DESCRIPTION.rst b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/DESCRIPTION.rst
new file mode 100644
index 00000000..6b6d685b
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/DESCRIPTION.rst
@@ -0,0 +1,194 @@
+Ethereum Keyfile
+================
+
+A library for handling the encrypted keyfiles used to store ethereum
+private keys
+
+    This library and repository were previously located at
+    https://github.com/pipermerriam/ethereum-keyfile. It was transferred
+    to the Ethereum foundation github in November 2017 and renamed to
+    ``eth-keyfile``. The PyPi package was also renamed from
+    ``ethereum-keyfile`` to ``eth-keyfile``.
+
+Installation
+------------
+
+.. code:: sh
+
+    pip install eth-keyfile
+
+Development
+-----------
+
+.. code:: sh
+
+    pip install -e . -r requirements-dev.txt
+
+Running the tests
+~~~~~~~~~~~~~~~~~
+
+You can run the tests with:
+
+.. code:: sh
+
+    py.test tests
+
+Or you can install ``tox`` to run the full test suite.
+
+Releasing
+~~~~~~~~~
+
+Pandoc is required for transforming the markdown README to the proper
+format to render correctly on pypi.
+
+For Debian-like systems:
+
+::
+
+    apt install pandoc
+
+Or on OSX:
+
+.. code:: sh
+
+    brew install pandoc
+
+To release a new version:
+
+.. code:: sh
+
+    make release bump=$$VERSION_PART_TO_BUMP$$
+
+How to bumpversion
+^^^^^^^^^^^^^^^^^^
+
+The version format for this repo is ``{major}.{minor}.{patch}`` for
+stable, and ``{major}.{minor}.{patch}-{stage}.{devnum}`` for unstable
+(``stage`` can be alpha or beta).
+
+To issue the next version in line, specify which part to bump, like
+``make release bump=minor`` or ``make release bump=devnum``.
+
+If you are in a beta version, ``make release bump=stage`` will switch to
+a stable.
+
+To issue an unstable version when the current version is stable, specify
+the new version explicitly, like
+``make release bump="--new-version 2.0.0-alpha.1 devnum"``
+
+Documentation
+-------------
+
+``eth_keyfile.load_keyfile(path_or_file_obj) --> keyfile_json``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Takes either a filesystem path represented as a string or a file object
+and returns the parsed keyfile json as a python dictionary.
+
+.. code:: python
+
+    >>> from eth_keyfile import load_keyfile
+    >>> load_keyfile('path/to-my-keystore/keystore.json')
+    {
+        "crypto" : {
+            "cipher" : "aes-128-ctr",
+            "cipherparams" : {
+                "iv" : "6087dab2f9fdbbfaddc31a909735c1e6"
+            },
+            "ciphertext" : "5318b4d5bcd28de64ee5559e671353e16f075ecae9f99c7a79a38af5f869aa46",
+            "kdf" : "pbkdf2",
+            "kdfparams" : {
+                "c" : 262144,
+                "dklen" : 32,
+                "prf" : "hmac-sha256",
+                "salt" : "ae3cd4e7013836a3df6bd7241b12db061dbe2c6785853cce422d148a624ce0bd"
+            },
+            "mac" : "517ead924a9d0dc3124507e3393d175ce3ff7c1e96529c6c555ce9e51205e9b2"
+        },
+        "id" : "3198bc9c-6672-5ab3-d995-4942343ae5b6",
+        "version" : 3
+    }
+
+``eth_keyfile.create_keyfile_json(private_key, password, kdf="pbkdf2", work_factor=None) --> keyfile_json``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Takes the following parameters:
+
+-  ``private_key``: A bytestring of length 32
+-  ``password``: A bytestring which will be the password that can be
+   used to decrypt the resulting keyfile.
+-  ``kdf``: The key derivation function. Allowed values are ``pbkdf2``
+   and ``scrypt``.
By default, ``pbkdf2`` will be used. +- ``work_factor``: The work factor which will be used for the given key + derivation function. By default ``1000000`` will be used for + ``pbkdf2`` and ``262144`` for ``scrypt``. + +Returns the keyfile json as a python dictionary. + +.. code:: python + + >>> private_key = b'\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01' + >>> create_keyfile_json(private_key, b'foo') + { + "address" : "1a642f0e3c3af545e7acbd38b07251b3990914f1", + "crypto" : { + "cipher" : "aes-128-ctr", + "cipherparams" : { + "iv" : "6087dab2f9fdbbfaddc31a909735c1e6" + }, + "ciphertext" : "5318b4d5bcd28de64ee5559e671353e16f075ecae9f99c7a79a38af5f869aa46", + "kdf" : "pbkdf2", + "kdfparams" : { + "c" : 262144, + "dklen" : 32, + "prf" : "hmac-sha256", + "salt" : "ae3cd4e7013836a3df6bd7241b12db061dbe2c6785853cce422d148a624ce0bd" + }, + "mac" : "517ead924a9d0dc3124507e3393d175ce3ff7c1e96529c6c555ce9e51205e9b2" + }, + "id" : "3198bc9c-6672-5ab3-d995-4942343ae5b6", + "version" : 3 + } + +``eth_keyfile.decode_keyfile_json(keyfile_json, password) --> private_key`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Takes the keyfile json as a python dictionary and the password for the +keyfile, returning the decoded private key. + +.. code:: python + + >>> keyfile_json = { + ... "crypto" : { + ... "cipher" : "aes-128-ctr", + ... "cipherparams" : { + ... "iv" : "6087dab2f9fdbbfaddc31a909735c1e6" + ... }, + ... "ciphertext" : "5318b4d5bcd28de64ee5559e671353e16f075ecae9f99c7a79a38af5f869aa46", + ... "kdf" : "pbkdf2", + ... "kdfparams" : { + ... "c" : 262144, + ... "dklen" : 32, + ... "prf" : "hmac-sha256", + ... "salt" : "ae3cd4e7013836a3df6bd7241b12db061dbe2c6785853cce422d148a624ce0bd" + ... }, + ... "mac" : "517ead924a9d0dc3124507e3393d175ce3ff7c1e96529c6c555ce9e51205e9b2" + ... }, + ... "id" : "3198bc9c-6672-5ab3-d995-4942343ae5b6", + ... "version" : 3 + ... } + >>> decode_keyfile_json(keyfile_json, b'foo') + b'\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01' + +``eth_keyfile.extract_key_from_keyfile(path_or_file_obj, password) --> private_key`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Takes a filesystem path represented by a string or a file object and the +password for the keyfile. Returns the private key as a bytestring. + +.. code:: python + + >>> extract_key_from_keyfile('path/to-my-keystore/keyfile.json', b'foo') + b'\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01' + + diff --git a/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/INSTALLER b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/METADATA b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/METADATA new file mode 100644 index 00000000..c5d93960 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/METADATA @@ -0,0 +1,219 @@ +Metadata-Version: 2.0 +Name: eth-keyfile +Version: 0.5.1 +Summary: A library for handling the encrypted keyfiles used to store ethereum private keys. 
+Home-page: https://github.com/ethereum/eth-keyfile
+Author: Piper Merriam
+Author-email: pipermerriam@gmail.com
+License: MIT
+Description-Content-Type: UNKNOWN
+Keywords: ethereum
+Platform: UNKNOWN
+Classifier: Development Status :: 2 - Pre-Alpha
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Natural Language :: English
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.4
+Classifier: Programming Language :: Python :: 3.5
+Requires-Dist: eth-utils (<2.0.0,>=1.0.0-beta.1)
+Requires-Dist: eth-keys (<1.0.0,>=0.1.0-beta.4)
+Requires-Dist: cytoolz (<1.0.0,>=0.9.0)
+Requires-Dist: pycryptodome (<4.0.0,>=3.4.7)
+
+Ethereum Keyfile
+================
+
+A library for handling the encrypted keyfiles used to store ethereum
+private keys
+
+    This library and repository were previously located at
+    https://github.com/pipermerriam/ethereum-keyfile. It was transferred
+    to the Ethereum foundation github in November 2017 and renamed to
+    ``eth-keyfile``. The PyPi package was also renamed from
+    ``ethereum-keyfile`` to ``eth-keyfile``.
+
+Installation
+------------
+
+.. code:: sh
+
+    pip install eth-keyfile
+
+Development
+-----------
+
+.. code:: sh
+
+    pip install -e . -r requirements-dev.txt
+
+Running the tests
+~~~~~~~~~~~~~~~~~
+
+You can run the tests with:
+
+.. code:: sh
+
+    py.test tests
+
+Or you can install ``tox`` to run the full test suite.
+
+Releasing
+~~~~~~~~~
+
+Pandoc is required for transforming the markdown README to the proper
+format to render correctly on pypi.
+
+For Debian-like systems:
+
+::
+
+    apt install pandoc
+
+Or on OSX:
+
+.. code:: sh
+
+    brew install pandoc
+
+To release a new version:
+
+.. code:: sh
+
+    make release bump=$$VERSION_PART_TO_BUMP$$
+
+How to bumpversion
+^^^^^^^^^^^^^^^^^^
+
+The version format for this repo is ``{major}.{minor}.{patch}`` for
+stable, and ``{major}.{minor}.{patch}-{stage}.{devnum}`` for unstable
+(``stage`` can be alpha or beta).
+
+To issue the next version in line, specify which part to bump, like
+``make release bump=minor`` or ``make release bump=devnum``.
+
+If you are in a beta version, ``make release bump=stage`` will switch to
+a stable.
+
+To issue an unstable version when the current version is stable, specify
+the new version explicitly, like
+``make release bump="--new-version 2.0.0-alpha.1 devnum"``
+
+Documentation
+-------------
+
+``eth_keyfile.load_keyfile(path_or_file_obj) --> keyfile_json``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Takes either a filesystem path represented as a string or a file object
+and returns the parsed keyfile json as a python dictionary.
+
+..
code:: python + + >>> from eth_keyfile import load_keyfile + >>> load_keyfile('path/to-my-keystore/keystore.json') + { + "crypto" : { + "cipher" : "aes-128-ctr", + "cipherparams" : { + "iv" : "6087dab2f9fdbbfaddc31a909735c1e6" + }, + "ciphertext" : "5318b4d5bcd28de64ee5559e671353e16f075ecae9f99c7a79a38af5f869aa46", + "kdf" : "pbkdf2", + "kdfparams" : { + "c" : 262144, + "dklen" : 32, + "prf" : "hmac-sha256", + "salt" : "ae3cd4e7013836a3df6bd7241b12db061dbe2c6785853cce422d148a624ce0bd" + }, + "mac" : "517ead924a9d0dc3124507e3393d175ce3ff7c1e96529c6c555ce9e51205e9b2" + }, + "id" : "3198bc9c-6672-5ab3-d995-4942343ae5b6", + "version" : 3 + } + +``eth_keyfile.create_keyfile_json(private_key, password, kdf="pbkdf2", work_factor=None) --> keyfile_json`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Takes the following parameters: + +- ``private_key``: A bytestring of length 32 +- ``password``: A bytestring which will be the password that can be + used to decrypt the resulting keyfile. +- ``kdf``: The key derivation function. Allowed values are ``pbkdf2`` + and ``scrypt``. By default, ``pbkdf2`` will be used. +- ``work_factor``: The work factor which will be used for the given key + derivation function. By default ``1000000`` will be used for + ``pbkdf2`` and ``262144`` for ``scrypt``. + +Returns the keyfile json as a python dictionary. + +.. code:: python + + >>> private_key = b'\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01' + >>> create_keyfile_json(private_key, b'foo') + { + "address" : "1a642f0e3c3af545e7acbd38b07251b3990914f1", + "crypto" : { + "cipher" : "aes-128-ctr", + "cipherparams" : { + "iv" : "6087dab2f9fdbbfaddc31a909735c1e6" + }, + "ciphertext" : "5318b4d5bcd28de64ee5559e671353e16f075ecae9f99c7a79a38af5f869aa46", + "kdf" : "pbkdf2", + "kdfparams" : { + "c" : 262144, + "dklen" : 32, + "prf" : "hmac-sha256", + "salt" : "ae3cd4e7013836a3df6bd7241b12db061dbe2c6785853cce422d148a624ce0bd" + }, + "mac" : "517ead924a9d0dc3124507e3393d175ce3ff7c1e96529c6c555ce9e51205e9b2" + }, + "id" : "3198bc9c-6672-5ab3-d995-4942343ae5b6", + "version" : 3 + } + +``eth_keyfile.decode_keyfile_json(keyfile_json, password) --> private_key`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Takes the keyfile json as a python dictionary and the password for the +keyfile, returning the decoded private key. + +.. code:: python + + >>> keyfile_json = { + ... "crypto" : { + ... "cipher" : "aes-128-ctr", + ... "cipherparams" : { + ... "iv" : "6087dab2f9fdbbfaddc31a909735c1e6" + ... }, + ... "ciphertext" : "5318b4d5bcd28de64ee5559e671353e16f075ecae9f99c7a79a38af5f869aa46", + ... "kdf" : "pbkdf2", + ... "kdfparams" : { + ... "c" : 262144, + ... "dklen" : 32, + ... "prf" : "hmac-sha256", + ... "salt" : "ae3cd4e7013836a3df6bd7241b12db061dbe2c6785853cce422d148a624ce0bd" + ... }, + ... "mac" : "517ead924a9d0dc3124507e3393d175ce3ff7c1e96529c6c555ce9e51205e9b2" + ... }, + ... "id" : "3198bc9c-6672-5ab3-d995-4942343ae5b6", + ... "version" : 3 + ... 
} + >>> decode_keyfile_json(keyfile_json, b'foo') + b'\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01' + +``eth_keyfile.extract_key_from_keyfile(path_or_file_obj, password) --> private_key`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Takes a filesystem path represented by a string or a file object and the +password for the keyfile. Returns the private key as a bytestring. + +.. code:: python + + >>> extract_key_from_keyfile('path/to-my-keystore/keyfile.json', b'foo') + b'\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01' + + diff --git a/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/RECORD b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/RECORD new file mode 100644 index 00000000..163a3bec --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/RECORD @@ -0,0 +1,11 @@ +eth_keyfile-0.5.1.dist-info/DESCRIPTION.rst,sha256=6l_dpqYGrDtPxZsjyPlB4yGaxq8_eHBy89jv_wHeO64,6350 +eth_keyfile-0.5.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +eth_keyfile-0.5.1.dist-info/METADATA,sha256=VOcrguV76W22yCRWF4CP_0YUNuiLS62iNZuIO_icWrI,7303 +eth_keyfile-0.5.1.dist-info/RECORD,, +eth_keyfile-0.5.1.dist-info/WHEEL,sha256=8Lm45v9gcYRm70DrgFGVe4WsUtUMi1_0Tso1hqPGMjA,92 +eth_keyfile-0.5.1.dist-info/metadata.json,sha256=pOffklRIKgYEuyNBqzyhGQ20lSrXME4vuaRNDkcNyuY,1087 +eth_keyfile-0.5.1.dist-info/top_level.txt,sha256=SLopX9FIYcgXwQ8fFsEHlp9LtZSd-IJ8RVKuv_r5ZDQ,12 +eth_keyfile/__init__.py,sha256=Y9n5_9dGG8UkWF8jP6dq8szg6b8Q5lH84wgReVwOfb0,562 +eth_keyfile/__pycache__/__init__.cpython-38.pyc,, +eth_keyfile/__pycache__/keyfile.cpython-38.pyc,, +eth_keyfile/keyfile.py,sha256=7eCQYN92bzRYMqNP_U3bh-SZoYdHqDsoQXPGjiS3P3w,6771 diff --git a/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/WHEEL b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/WHEEL new file mode 100644 index 00000000..6261a26e --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.30.0) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/top_level.txt b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/top_level.txt new file mode 100644 index 00000000..84b890b7 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keyfile-0.5.1.dist-info/top_level.txt @@ -0,0 +1 @@ +eth_keyfile diff --git a/env/lib/python3.8/site-packages/eth_keyfile/__init__.py b/env/lib/python3.8/site-packages/eth_keyfile/__init__.py new file mode 100644 index 00000000..13b154f2 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keyfile/__init__.py @@ -0,0 +1,23 @@ +from __future__ import absolute_import + +import pkg_resources +import warnings +import sys + +from eth_keyfile.keyfile import ( # noqa: F401 + load_keyfile, + create_keyfile_json, + decode_keyfile_json, + extract_key_from_keyfile, +) + + +if sys.version_info.major < 3: + warnings.simplefilter('always', DeprecationWarning) + warnings.warn(DeprecationWarning( + "The `eth-keyfile` library is dropping support for Python 2. Upgrade to Python 3." 
+    ))
+    warnings.resetwarnings()
+
+
+__version__ = pkg_resources.get_distribution("eth-keyfile").version
diff --git a/env/lib/python3.8/site-packages/eth_keyfile/keyfile.py b/env/lib/python3.8/site-packages/eth_keyfile/keyfile.py
new file mode 100644
index 00000000..fed0501a
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keyfile/keyfile.py
@@ -0,0 +1,268 @@
+import hashlib
+import json
+import uuid
+
+from Crypto import Random
+from Crypto.Cipher import AES
+from Crypto.Protocol.KDF import scrypt
+from Crypto.Util import Counter
+
+from eth_keys import keys
+
+from eth_utils import (
+    big_endian_to_int,
+    decode_hex,
+    encode_hex,
+    int_to_big_endian,
+    is_dict,
+    is_string,
+    keccak,
+    remove_0x_prefix,
+    to_dict,
+)
+
+
+def encode_hex_no_prefix(value):
+    return remove_0x_prefix(encode_hex(value))
+
+
+def load_keyfile(path_or_file_obj):
+    if is_string(path_or_file_obj):
+        with open(path_or_file_obj) as keyfile_file:
+            return json.load(keyfile_file)
+    else:
+        return json.load(path_or_file_obj)
+
+
+def create_keyfile_json(private_key, password, version=3, kdf="pbkdf2", iterations=None):
+    if version == 3:
+        return _create_v3_keyfile_json(private_key, password, kdf, iterations)
+    else:
+        raise NotImplementedError("Not yet implemented")
+
+
+def decode_keyfile_json(raw_keyfile_json, password):
+    keyfile_json = normalize_keys(raw_keyfile_json)
+    version = keyfile_json['version']
+
+    if version == 3:
+        return _decode_keyfile_json_v3(keyfile_json, password)
+    else:
+        raise NotImplementedError("Not yet implemented")
+
+
+def extract_key_from_keyfile(path_or_file_obj, password):
+    keyfile_json = load_keyfile(path_or_file_obj)
+    private_key = decode_keyfile_json(keyfile_json, password)
+    return private_key
+
+
+@to_dict
+def normalize_keys(keyfile_json):
+    for key, value in keyfile_json.items():
+        if is_string(key):
+            norm_key = key.lower()
+        else:
+            norm_key = key
+
+        if is_dict(value):
+            norm_value = normalize_keys(value)
+        else:
+            norm_value = value
+
+        yield norm_key, norm_value
+
+
+#
+# Version 3 creators
+#
+DKLEN = 32
+SCRYPT_R = 1
+SCRYPT_P = 8
+
+
+def _create_v3_keyfile_json(private_key, password, kdf, work_factor=None):
+    salt = Random.get_random_bytes(16)
+
+    if work_factor is None:
+        work_factor = get_default_work_factor_for_kdf(kdf)
+
+    if kdf == 'pbkdf2':
+        derived_key = _pbkdf2_hash(
+            password,
+            hash_name='sha256',
+            salt=salt,
+            iterations=work_factor,
+            dklen=DKLEN,
+        )
+        kdfparams = {
+            'c': work_factor,
+            'dklen': DKLEN,
+            'prf': 'hmac-sha256',
+            'salt': encode_hex_no_prefix(salt),
+        }
+    elif kdf == 'scrypt':
+        derived_key = _scrypt_hash(
+            password,
+            salt=salt,
+            buflen=DKLEN,
+            r=SCRYPT_R,
+            p=SCRYPT_P,
+            n=work_factor,
+        )
+        kdfparams = {
+            'dklen': DKLEN,
+            'n': work_factor,
+            'r': SCRYPT_R,
+            'p': SCRYPT_P,
+            'salt': encode_hex_no_prefix(salt),
+        }
+    else:
+        raise NotImplementedError("KDF not implemented: {0}".format(kdf))
+
+    iv = big_endian_to_int(Random.get_random_bytes(16))
+    encrypt_key = derived_key[:16]
+    ciphertext = encrypt_aes_ctr(private_key, encrypt_key, iv)
+    mac = keccak(derived_key[16:32] + ciphertext)
+
+    address = keys.PrivateKey(private_key).public_key.to_address()
+
+    return {
+        'address': remove_0x_prefix(address),
+        'crypto': {
+            'cipher': 'aes-128-ctr',
+            'cipherparams': {
+                'iv': encode_hex_no_prefix(int_to_big_endian(iv)),
+            },
+            'ciphertext': encode_hex_no_prefix(ciphertext),
+            'kdf': kdf,
+            'kdfparams': kdfparams,
+            'mac': encode_hex_no_prefix(mac),
+        },
+        'id': str(uuid.uuid4()),
+        'version': 3,
+    }
+
+
+#
+# Version 3 decoder
+#
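+# Note: the MAC in a v3 keystore is keccak(derived_key[16:32] + ciphertext),
+# which is what the decoder below recomputes and checks before decrypting.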
+def _decode_keyfile_json_v3(keyfile_json, password):
+    crypto = keyfile_json['crypto']
+    kdf = crypto['kdf']
+
+    # Derive the encryption key from the password using the key derivation
+    # function.
+    if kdf == 'pbkdf2':
+        derived_key = _derive_pbkdf_key(crypto, password)
+    elif kdf == 'scrypt':
+        derived_key = _derive_scrypt_key(crypto, password)
+    else:
+        raise TypeError("Unsupported key derivation function: {0}".format(kdf))
+
+    # Validate that the derived key matches the provided MAC
+    ciphertext = decode_hex(crypto['ciphertext'])
+    mac = keccak(derived_key[16:32] + ciphertext)
+
+    expected_mac = decode_hex(crypto['mac'])
+
+    if mac != expected_mac:
+        raise ValueError("MAC mismatch")
+
+    # Decrypt the ciphertext using the derived encryption key to get the
+    # private key.
+    encrypt_key = derived_key[:16]
+    cipherparams = crypto['cipherparams']
+    iv = big_endian_to_int(decode_hex(cipherparams['iv']))
+
+    private_key = decrypt_aes_ctr(ciphertext, encrypt_key, iv)
+
+    return private_key
+
+
+#
+# Key derivation
+#
+def _derive_pbkdf_key(crypto, password):
+    kdf_params = crypto['kdfparams']
+    salt = decode_hex(kdf_params['salt'])
+    dklen = kdf_params['dklen']
+    should_be_hmac, _, hash_name = kdf_params['prf'].partition('-')
+    assert should_be_hmac == 'hmac'
+    iterations = kdf_params['c']
+
+    derive_pbkdf_key = _pbkdf2_hash(password, hash_name, salt, iterations, dklen)
+
+    return derive_pbkdf_key
+
+
+def _derive_scrypt_key(crypto, password):
+    kdf_params = crypto['kdfparams']
+    salt = decode_hex(kdf_params['salt'])
+    p = kdf_params['p']
+    r = kdf_params['r']
+    n = kdf_params['n']
+    buflen = kdf_params['dklen']
+
+    derived_scrypt_key = _scrypt_hash(
+        password,
+        salt=salt,
+        n=n,
+        r=r,
+        p=p,
+        buflen=buflen,
+    )
+    return derived_scrypt_key
+
+
+def _scrypt_hash(password, salt, n, r, p, buflen):
+    derived_key = scrypt(
+        password,
+        salt=salt,
+        key_len=buflen,
+        N=n,
+        r=r,
+        p=p,
+        num_keys=1,
+    )
+    return derived_key
+
+
+def _pbkdf2_hash(password, hash_name, salt, iterations, dklen):
+    derived_key = hashlib.pbkdf2_hmac(
+        hash_name=hash_name,
+        password=password,
+        salt=salt,
+        iterations=iterations,
+        dklen=dklen,
+    )
+
+    return derived_key
+
+
+#
+# Encryption and Decryption
+#
+def decrypt_aes_ctr(ciphertext, key, iv):
+    ctr = Counter.new(128, initial_value=iv, allow_wraparound=True)
+    encryptor = AES.new(key, AES.MODE_CTR, counter=ctr)
+    return encryptor.decrypt(ciphertext)
+
+
+def encrypt_aes_ctr(value, key, iv):
+    ctr = Counter.new(128, initial_value=iv, allow_wraparound=True)
+    encryptor = AES.new(key, AES.MODE_CTR, counter=ctr)
+    ciphertext = encryptor.encrypt(value)
+    return ciphertext
+
+
+#
+# Utility
+#
+def get_default_work_factor_for_kdf(kdf):
+    if kdf == 'pbkdf2':
+        return 1000000
+    elif kdf == 'scrypt':
+        return 262144
+    else:
+        raise ValueError("Unsupported key derivation function: {0}".format(kdf))
diff --git a/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/INSTALLER b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/INSTALLER
new file mode 100644
index 00000000..a1b589e3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/LICENSE b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/LICENSE
new file mode 100644
index 00000000..5e8bed12
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2017 Piper Merriam
+
+Permission is hereby
granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/METADATA b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/METADATA new file mode 100644 index 00000000..4b1b15bd --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/METADATA @@ -0,0 +1,417 @@ +Metadata-Version: 2.1 +Name: eth-keys +Version: 0.3.4 +Summary: Common API for Ethereum key operations. +Home-page: https://github.com/ethereum/eth-keys +Author: Piper Merriam +Author-email: pipermerriam@gmail.com +License: MIT +Keywords: ethereum +Platform: UNKNOWN +Classifier: Development Status :: 4 - Beta +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Natural Language :: English +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Description-Content-Type: text/markdown +Requires-Dist: eth-utils (<2.0.0,>=1.8.2) +Requires-Dist: eth-typing (<3.0.0,>=2.2.1) +Provides-Extra: coincurve +Requires-Dist: coincurve (<13.0.0,>=7.0.0) ; extra == 'coincurve' +Provides-Extra: dev +Requires-Dist: tox (==3.20.0) ; extra == 'dev' +Requires-Dist: bumpversion (==0.5.3) ; extra == 'dev' +Requires-Dist: twine ; extra == 'dev' +Requires-Dist: eth-utils (<2.0.0,>=1.8.2) ; extra == 'dev' +Requires-Dist: eth-typing (<3.0.0,>=2.2.1) ; extra == 'dev' +Requires-Dist: flake8 (==3.0.4) ; extra == 'dev' +Requires-Dist: mypy (==0.782) ; extra == 'dev' +Requires-Dist: asn1tools (<0.147,>=0.146.2) ; extra == 'dev' +Requires-Dist: factory-boy (<3.1,>=3.0.1) ; extra == 'dev' +Requires-Dist: pyasn1 (<0.5,>=0.4.5) ; extra == 'dev' +Requires-Dist: pytest (==5.4.1) ; extra == 'dev' +Requires-Dist: hypothesis (<6.0.0,>=5.10.3) ; extra == 'dev' +Requires-Dist: eth-hash[pysha3] ; (implementation_name == "cpython") and extra == 'dev' +Requires-Dist: eth-hash[pycryptodome] ; (implementation_name == "pypy") and extra == 'dev' +Provides-Extra: eth-keys +Requires-Dist: eth-utils (<2.0.0,>=1.8.2) ; extra == 'eth-keys' +Requires-Dist: eth-typing (<3.0.0,>=2.2.1) ; extra == 'eth-keys' +Provides-Extra: lint +Requires-Dist: flake8 (==3.0.4) ; extra == 'lint' +Requires-Dist: mypy (==0.782) ; extra == 'lint' +Provides-Extra: test +Requires-Dist: asn1tools (<0.147,>=0.146.2) ; extra == 'test' +Requires-Dist: factory-boy (<3.1,>=3.0.1) ; extra == 'test' +Requires-Dist: pyasn1 (<0.5,>=0.4.5) 
; extra == 'test'
+Requires-Dist: pytest (==5.4.1) ; extra == 'test'
+Requires-Dist: hypothesis (<6.0.0,>=5.10.3) ; extra == 'test'
+Requires-Dist: eth-hash[pysha3] ; (implementation_name == "cpython") and extra == 'test'
+Requires-Dist: eth-hash[pycryptodome] ; (implementation_name == "pypy") and extra == 'test'
+
+# Ethereum Keys
+
+
+A common API for Ethereum key operations with pluggable backends.
+
+
+> This library and repository were previously located at https://github.com/pipermerriam/ethereum-keys. It was transferred to the Ethereum foundation github in November 2017 and renamed to `eth-keys`. The PyPi package was also renamed from `ethereum-keys` to `eth-keys`.
+
+## Installation
+
+```sh
+pip install eth-keys
+```
+
+## Development
+
+```sh
+pip install -e .[dev]
+```
+
+
+### Running the tests
+
+You can run the tests with:
+
+```sh
+py.test tests
+```
+
+Or you can install `tox` to run the full test suite.
+
+
+### Releasing
+
+Pandoc is required for transforming the markdown README to the proper format to
+render correctly on pypi.
+
+For Debian-like systems:
+
+```
+apt install pandoc
+```
+
+Or on OSX:
+
+```sh
+brew install pandoc
+```
+
+To release a new version:
+
+```sh
+make release bump=$$VERSION_PART_TO_BUMP$$
+```
+
+
+#### How to bumpversion
+
+The version format for this repo is `{major}.{minor}.{patch}` for stable, and
+`{major}.{minor}.{patch}-{stage}.{devnum}` for unstable (`stage` can be alpha or beta).
+
+To issue the next version in line, specify which part to bump,
+like `make release bump=minor` or `make release bump=devnum`.
+
+If you are in a beta version, `make release bump=stage` will switch to a stable.
+
+To issue an unstable version when the current version is stable, specify the
+new version explicitly, like `make release bump="--new-version 2.0.0-alpha.1 devnum"`
+
+
+
+## QuickStart
+
+```python
+>>> from eth_keys import keys
+>>> pk = keys.PrivateKey(b'\x01' * 32)
+>>> signature = pk.sign_msg(b'a message')
+>>> pk
+'0x0101010101010101010101010101010101010101010101010101010101010101'
+>>> pk.public_key
+'0x1b84c5567b126440995d3ed5aaba0565d71e1834604819ff9c17f5e9d5dd078f70beaf8f588b541507fed6a642c5ab42dfdf8120a7f639de5122d47a69a8e8d1'
+>>> signature
+'0xccda990dba7864b79dc49158fea269338a1cf5747bc4c4bf1b96823e31a0997e7d1e65c06c5bf128b7109e1b4b9ba8d1305dc33f32f624695b2fa8e02c12c1e000'
+>>> pk.public_key.to_checksum_address()
+'0x1a642f0E3c3aF545E7AcBD38b07251B3990914F1'
+>>> signature.verify_msg(b'a message', pk.public_key)
+True
+>>> signature.recover_public_key_from_msg(b'a message') == pk.public_key
+True
+```
+
+
+## Documentation
+
+### `KeyAPI(backend=None)`
+
+The `KeyAPI` object is the primary API for interacting with the `eth-keys`
+library. The object takes a single optional argument in its constructor which
+designates what backend will be used for elliptic curve cryptography
+operations. The built-in backends are:
+
+* `eth_keys.backends.NativeECCBackend`: A pure python implementation of the ECC operations.
+* `eth_keys.backends.CoinCurveECCBackend`: Uses the [`coincurve`](https://github.com/ofek/coincurve) library for ECC operations.
+
+By default, `eth-keys` will *try* to use the `CoinCurveECCBackend`,
+falling back to the `NativeECCBackend` if the `coincurve` library is not
+available.
+
+> Note: The `coincurve` library is not automatically installed with `eth-keys` and must be installed separately.
+
+The `backend` argument can be given in any of the following forms.
+
+* Instance of the backend class
+* The backend class
+* String with the dot-separated import path for the backend class.
+
+```python
+>>> from eth_keys import KeyAPI
+>>> from eth_keys.backends import NativeECCBackend
+# These are all the same
+>>> keys = KeyAPI(NativeECCBackend)
+>>> keys = KeyAPI(NativeECCBackend())
+>>> keys = KeyAPI('eth_keys.backends.NativeECCBackend')
+# Or for the coincurve-based backend
+>>> keys = KeyAPI('eth_keys.backends.CoinCurveECCBackend')
+```
+
+The backend can also be configured using the environment variable
+`ECC_BACKEND_CLASS` which should be set to the dot-separated python import path
+to the desired backend.
+
+```python
+>>> import os
+>>> os.environ['ECC_BACKEND_CLASS'] = 'eth_keys.backends.CoinCurveECCBackend'
+```
+
+
+### `KeyAPI.ecdsa_sign(message_hash, private_key) -> Signature`
+
+This method returns a signature for the given `message_hash`, signed by the
+provided `private_key`.
+
+* `message_hash`: **must** be a byte string of length 32
+* `private_key`: **must** be an instance of `PrivateKey`
+
+
+### `KeyAPI.ecdsa_verify(message_hash, signature, public_key) -> bool`
+
+Returns `True` or `False` based on whether the provided `signature` is a valid
+signature for the provided `message_hash` and `public_key`.
+
+* `message_hash`: **must** be a byte string of length 32
+* `signature`: **must** be an instance of `Signature`
+* `public_key`: **must** be an instance of `PublicKey`
+
+
+### `KeyAPI.ecdsa_recover(message_hash, signature) -> PublicKey`
+
+Returns the `PublicKey` instance recovered from the given `signature` and
+`message_hash`.
+
+* `message_hash`: **must** be a byte string of length 32
+* `signature`: **must** be an instance of `Signature`
+
+
+### `KeyAPI.private_key_to_public_key(private_key) -> PublicKey`
+
+Returns the `PublicKey` instance computed from the given `private_key`
+instance.
+
+* `private_key`: **must** be an instance of `PrivateKey`
+
+
+### Common APIs for `PublicKey`, `PrivateKey` and `Signature`
+
+There is a common API for the following objects.
+
+* `PublicKey`
+* `PrivateKey`
+* `Signature`
+
+Each of these objects has all of the following APIs.
+
+* `obj.to_bytes()`: Returns the object in its canonical `bytes` serialization.
+* `obj.to_hex()`: Returns a text string of the hex encoded canonical representation.
+
+
+### `KeyAPI.PublicKey(public_key_bytes)`
+
+The `PublicKey` class takes a single argument which must be a bytes string with length 64.
+
+> Note that there are two other common formats for public keys: 65 bytes with a leading `\x04` byte
+> and 33 bytes starting with either `\x02` or `\x03`. To use the former with the `PublicKey` object,
+> remove the first byte. For the latter, refer to `PublicKey.from_compressed_bytes`.
+
+The following methods are available:
+
+
+#### `PublicKey.from_compressed_bytes(compressed_bytes) -> PublicKey`
+
+This `classmethod` returns a new `PublicKey` instance computed from its compressed representation.
+
+* `compressed_bytes` **must** be a byte string of length 33 starting with `\x02` or `\x03`.
+
+
+#### `PublicKey.from_private(private_key) -> PublicKey`
+
+This `classmethod` returns a new `PublicKey` instance computed from the
+given `private_key`.
+
+* `private_key` may either be a byte string of length 32 or an instance of the `KeyAPI.PrivateKey` class.
+
+
+#### `PublicKey.recover_from_msg(message, signature) -> PublicKey`
+
+This `classmethod` returns a new `PublicKey` instance computed from the
+provided `message` and `signature`.
+
+* `message` **must** be a byte string
+* `signature` **must** be an instance of `KeyAPI.Signature`
+
+
+#### `PublicKey.recover_from_msg_hash(message_hash, signature) -> PublicKey`
+
+Same as `PublicKey.recover_from_msg` except that `message_hash` should be the Keccak
+hash of the `message`.
+
+
+#### `PublicKey.verify_msg(message, signature) -> bool`
+
+This method returns `True` or `False` based on whether the signature is valid
+for the given message.
+
+
+#### `PublicKey.verify_msg_hash(message_hash, signature) -> bool`
+
+Same as `PublicKey.verify_msg` except that `message_hash` should be the Keccak
+hash of the `message`.
+
+
+#### `PublicKey.to_compressed_bytes() -> bytes`
+
+Returns the compressed representation of this public key.
+
+
+#### `PublicKey.to_address() -> text`
+
+Returns the hex encoded ethereum address for this public key.
+
+
+#### `PublicKey.to_checksum_address() -> text`
+
+Returns the ERC55 checksum formatted ethereum address for this public key.
+
+
+#### `PublicKey.to_canonical_address() -> bytes`
+
+Returns the 20-byte representation of the ethereum address for this public key.
+
+
+### `KeyAPI.PrivateKey(private_key_bytes)`
+
+The `PrivateKey` class takes a single argument which must be a bytes string with length 32.
+
+The following methods and properties are available:
+
+
+#### `PrivateKey.public_key`
+
+This *property* holds the `PublicKey` instance corresponding to this private key.
+
+
+#### `PrivateKey.sign_msg(message) -> Signature`
+
+This method returns a signature for the given `message` in the form of a
+`Signature` instance.
+
+* `message` **must** be a byte string.
+
+
+#### `PrivateKey.sign_msg_hash(message_hash) -> Signature`
+
+Same as `PrivateKey.sign_msg` except that `message_hash` should be the Keccak
+hash of the `message`.
+
+
+### `KeyAPI.Signature(signature_bytes=None, vrs=None)`
+
+The `Signature` class can be instantiated in one of two ways.
+
+* `signature_bytes`: a bytes string with length 65.
+* `vrs`: a 3-tuple composed of the integers `v`, `r`, and `s`.
+
+> Note: If using the `signature_bytes` to instantiate, the byte string should be encoded as `r_bytes | s_bytes | v_bytes` where `|` represents concatenation. `r_bytes` and `s_bytes` should be 32 bytes in length. `v_bytes` should be a single byte `\x00` or `\x01`.
+
+Signatures are expected to use `1` or `0` for their `v` value.
+
+The following methods and properties are available:
+
+
+#### `Signature.v`
+
+This property returns the `v` value from the signature as an integer.
+
+
+#### `Signature.r`
+
+This property returns the `r` value from the signature as an integer.
+
+
+#### `Signature.s`
+
+This property returns the `s` value from the signature as an integer.
+
+
+#### `Signature.vrs`
+
+This property returns a 3-tuple of `(v, r, s)`.
+
+
+#### `Signature.verify_msg(message, public_key) -> bool`
+
+This method returns `True` or `False` based on whether the signature is valid
+for the given public key.
+
+* `message`: **must** be a byte string.
+* `public_key`: **must** be an instance of `PublicKey`
+
+
+#### `Signature.verify_msg_hash(message_hash, public_key) -> bool`
+
+Same as `Signature.verify_msg` except that `message_hash` should be the Keccak
+hash of the `message`.
+
+
+#### `Signature.recover_public_key_from_msg(message) -> PublicKey`
+
+This method returns a `PublicKey` instance recovered from the signature.
+
+* `message`: **must** be a byte string.
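+
+For illustration, here is the recovery round trip using the same toy key as in
+the QuickStart above:
+
+```python
+>>> from eth_keys import keys
+>>> pk = keys.PrivateKey(b'\x01' * 32)
+>>> signature = pk.sign_msg(b'a message')
+>>> signature.recover_public_key_from_msg(b'a message') == pk.public_key
+True
+```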
+


#### `Signature.recover_public_key_from_msg_hash(message_hash) -> PublicKey`

Same as `Signature.recover_public_key_from_msg` except that `message_hash`
should be the Keccak hash of the `message`.


### Exceptions

#### `eth_keys.exceptions.ValidationError`

This error is raised during instantiation of any of the `PublicKey`,
`PrivateKey` or `Signature` classes if their constructor parameters are
invalid.


#### `eth_keys.exceptions.BadSignature`

This error is raised from any of the `recover` or `verify` methods involving
signatures if the signature is invalid.


diff --git a/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/RECORD b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/RECORD
new file mode 100644
index 00000000..a23f1595
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/RECORD
@@ -0,0 +1,48 @@
+eth_keys-0.3.4.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+eth_keys-0.3.4.dist-info/LICENSE,sha256=MoCQbeU4kpbrnL8X5yIsS0_2HrCKEDJImUJ_ZjlTfo0,1080
+eth_keys-0.3.4.dist-info/METADATA,sha256=QE8vdU5j__qxEvyGev0qd3XSkXhaC9ki644YgOf-CY4,12887
+eth_keys-0.3.4.dist-info/RECORD,,
+eth_keys-0.3.4.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92
+eth_keys-0.3.4.dist-info/top_level.txt,sha256=aHvz-StkdboE2DuBpQa0D_WExOioUqGIu66WeRG1IIc,9
+eth_keys/__init__.py,sha256=gxQWam2iDSPiyLlvT1JbhTQ2hh3IRT2FMJtXRrteEdQ,395
+eth_keys/__pycache__/__init__.cpython-38.pyc,,
+eth_keys/__pycache__/constants.cpython-38.pyc,,
+eth_keys/__pycache__/datatypes.cpython-38.pyc,,
+eth_keys/__pycache__/exceptions.cpython-38.pyc,,
+eth_keys/__pycache__/main.cpython-38.pyc,,
+eth_keys/__pycache__/validation.cpython-38.pyc,,
+eth_keys/backends/__init__.py,sha256=5tLU1C--9zQsOTZnwrXZUA3-jbCgA98l942X_Y3bcn8,927
+eth_keys/backends/__pycache__/__init__.cpython-38.pyc,,
+eth_keys/backends/__pycache__/base.cpython-38.pyc,,
+eth_keys/backends/__pycache__/coincurve.cpython-38.pyc,,
+eth_keys/backends/base.py,sha256=zGtdgWtbtfwnwMKN1bt0c77koL4B2p68YwD59i77E48,1406
+eth_keys/backends/coincurve.py,sha256=ACpSm00Ev2H3cPHj7VMFlgPxmGeUI_rmD1SedbplkzA,4349
+eth_keys/backends/native/__init__.py,sha256=MgEqPXLxxfKO-Oynn5kb6PYJDfm-JSVGLFKyzcK_VMY,98
+eth_keys/backends/native/__pycache__/__init__.cpython-38.pyc,,
+eth_keys/backends/native/__pycache__/ecdsa.cpython-38.pyc,,
+eth_keys/backends/native/__pycache__/jacobian.cpython-38.pyc,,
+eth_keys/backends/native/__pycache__/main.cpython-38.pyc,,
+eth_keys/backends/native/ecdsa.py,sha256=s-pLCQkOj5YGIsgm_U6iA_1K64A3dzTQFNYe3UsFO8M,4775
+eth_keys/backends/native/jacobian.py,sha256=A6KsVkomwikEdRedpG49EOS6d_n3IVyGF9TJh8igcfo,2532
+eth_keys/backends/native/main.py,sha256=-IsfHLC4fmM9GQxY4x40Imn6qnA0_k-_yV6ZNBjB654,2309
+eth_keys/constants.py,sha256=BFf-_q4dKUEXl5dhyFrAyT_cGhJjMOT-nuRh4ADkZbs,692
+eth_keys/datatypes.py,sha256=me-iFnnG1lWa-5pWwp2wvjmGagErJP3-zuf5iml2R1c,13685
+eth_keys/exceptions.py,sha256=_jt4mc9H2hSlK-YtNJxQvgbLpPoYvVrRKL-ebAEyBqg,229
+eth_keys/main.py,sha256=0LTh_J-IaRoe2ILgB8q5AMN6wxQG5guayYH7YChMfRE,4595
+eth_keys/tools/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+eth_keys/tools/__pycache__/__init__.cpython-38.pyc,,
+eth_keys/tools/__pycache__/factories.cpython-38.pyc,,
+eth_keys/tools/factories.py,sha256=_Uiju3rJ877Gkk-S2zh8s96VStZdZjxE_CVYvbMOM2E,847
+eth_keys/utils/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+eth_keys/utils/__pycache__/__init__.cpython-38.pyc,,
+eth_keys/utils/__pycache__/address.cpython-38.pyc,,
+eth_keys/utils/__pycache__/der.cpython-38.pyc,, +eth_keys/utils/__pycache__/module_loading.cpython-38.pyc,, +eth_keys/utils/__pycache__/numeric.cpython-38.pyc,, +eth_keys/utils/__pycache__/padding.cpython-38.pyc,, +eth_keys/utils/address.py,sha256=DATFbVBnAZshUp8eYLCV8BCvJEZV6CHj1PcKyQJZ96c,149 +eth_keys/utils/der.py,sha256=NBZ2Jggfh8VH9vycSQMdnIr09Hu39aZqHiR8ciKsj4k,3401 +eth_keys/utils/module_loading.py,sha256=EY3A9DDzfAiL3iNV-MtfKFodxHaXGDqGGtbr-fFu5ow,1553 +eth_keys/utils/numeric.py,sha256=mitpI-LijTNIdZst_3RMVNwqcR8DO6QvoeOsZYbKJFs,472 +eth_keys/utils/padding.py,sha256=83D_RHOM0-UN_QB60oLH6y-BBVTk40Iw1dK9p24xnYU,70 +eth_keys/validation.py,sha256=8N0CdNracC320cxPphvOTh43q2IPEZnGNXIG074VL_s,3081 diff --git a/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/WHEEL b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/top_level.txt b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/top_level.txt new file mode 100644 index 00000000..372240bd --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys-0.3.4.dist-info/top_level.txt @@ -0,0 +1 @@ +eth_keys diff --git a/env/lib/python3.8/site-packages/eth_keys/__init__.py b/env/lib/python3.8/site-packages/eth_keys/__init__.py new file mode 100644 index 00000000..e02dc188 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/__init__.py @@ -0,0 +1,18 @@ +from __future__ import absolute_import + +import sys +import warnings + + +if sys.version_info.major < 3: + warnings.simplefilter('always', DeprecationWarning) + warnings.warn(DeprecationWarning( + "The `eth-keys` library is dropping support for Python 2. Upgrade to Python 3." 
+ )) + warnings.resetwarnings() + + +from .main import ( # noqa: F401 + KeyAPI, + lazy_key_api as keys, +) diff --git a/env/lib/python3.8/site-packages/eth_keys/backends/__init__.py b/env/lib/python3.8/site-packages/eth_keys/backends/__init__.py new file mode 100644 index 00000000..aaa3683d --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/backends/__init__.py @@ -0,0 +1,36 @@ +from __future__ import absolute_import + +import os +from typing import Type + +from eth_keys.utils.module_loading import ( + import_string, +) + +from .base import BaseECCBackend # noqa: F401 +from .coincurve import ( # noqa: F401 + CoinCurveECCBackend, + is_coincurve_available, +) +from .native import NativeECCBackend # noqa: F401 + + +def get_default_backend_class() -> str: + if is_coincurve_available(): + return 'eth_keys.backends.CoinCurveECCBackend' + else: + return 'eth_keys.backends.NativeECCBackend' + + +def get_backend_class(import_path: str = None) -> Type[BaseECCBackend]: + if import_path is None: + import_path = os.environ.get( + 'ECC_BACKEND_CLASS', + get_default_backend_class(), + ) + return import_string(import_path) + + +def get_backend(import_path: str = None) -> BaseECCBackend: + backend_class = get_backend_class(import_path) + return backend_class() diff --git a/env/lib/python3.8/site-packages/eth_keys/backends/base.py b/env/lib/python3.8/site-packages/eth_keys/backends/base.py new file mode 100644 index 00000000..8991f959 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/backends/base.py @@ -0,0 +1,44 @@ +from typing import Any # noqa: F401 + +from eth_keys.datatypes import ( + BaseSignature, + NonRecoverableSignature, + PrivateKey, + PublicKey, + Signature, +) + + +class BaseECCBackend(object): + def ecdsa_sign(self, + msg_hash: bytes, + private_key: PrivateKey) -> Signature: + raise NotImplementedError() + + def ecdsa_sign_non_recoverable(self, + msg_hash: bytes, + private_key: PrivateKey) -> NonRecoverableSignature: + raise NotImplementedError() + + def ecdsa_verify(self, + msg_hash: bytes, + signature: BaseSignature, + public_key: PublicKey) -> bool: + raise NotImplementedError() + + def ecdsa_recover(self, + msg_hash: bytes, + signature: Signature) -> PublicKey: + raise NotImplementedError() + + def private_key_to_public_key(self, + private_key: PrivateKey) -> PublicKey: + raise NotImplementedError() + + def decompress_public_key_bytes(self, + compressed_public_key_bytes: bytes) -> bytes: + raise NotImplementedError() + + def compress_public_key_bytes(self, + uncompressed_public_key_bytes: bytes) -> bytes: + raise NotImplementedError() diff --git a/env/lib/python3.8/site-packages/eth_keys/backends/coincurve.py b/env/lib/python3.8/site-packages/eth_keys/backends/coincurve.py new file mode 100644 index 00000000..8b659681 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/backends/coincurve.py @@ -0,0 +1,127 @@ +from __future__ import absolute_import + +from typing import Optional # noqa: F401 + +from eth_utils import ( + big_endian_to_int, +) + +from eth_keys.datatypes import ( # noqa: F401 + BaseSignature, + NonRecoverableSignature, + PrivateKey, + PublicKey, + Signature, +) +from eth_keys.exceptions import ( + BadSignature, +) +from eth_keys.validation import ( + validate_uncompressed_public_key_bytes, +) +from eth_keys.utils import ( + der, +) +from eth_keys.utils.numeric import ( + coerce_low_s, +) + +from .base import BaseECCBackend + + +def is_coincurve_available() -> bool: + try: + import coincurve # noqa: F401 + except ImportError: + return False + 
else:
+        return True
+
+
+class CoinCurveECCBackend(BaseECCBackend):
+    def __init__(self) -> None:
+        try:
+            import coincurve
+        except ImportError:
+            raise ImportError("The CoinCurveECCBackend requires the coincurve "
+                              "library which is not available for import.")
+        self.keys = coincurve.keys
+        self.ecdsa = coincurve.ecdsa
+        super(CoinCurveECCBackend, self).__init__()
+
+    def ecdsa_sign(self,
+                   msg_hash: bytes,
+                   private_key: PrivateKey) -> Signature:
+        private_key_bytes = private_key.to_bytes()
+        signature_bytes = self.keys.PrivateKey(private_key_bytes).sign_recoverable(
+            msg_hash,
+            hasher=None,
+        )
+        signature = Signature(signature_bytes, backend=self)
+        return signature
+
+    def ecdsa_sign_non_recoverable(self,
+                                   msg_hash: bytes,
+                                   private_key: PrivateKey) -> NonRecoverableSignature:
+        private_key_bytes = private_key.to_bytes()
+
+        der_encoded_signature = self.keys.PrivateKey(private_key_bytes).sign(
+            msg_hash,
+            hasher=None,
+        )
+        rs = der.two_int_sequence_decoder(der_encoded_signature)
+
+        signature = NonRecoverableSignature(rs=rs, backend=self)
+        return signature
+
+    def ecdsa_verify(self,
+                     msg_hash: bytes,
+                     signature: BaseSignature,
+                     public_key: PublicKey) -> bool:
+        # coincurve rejects signatures with a high s, so convert to the equivalent low s form
+        low_s = coerce_low_s(signature.s)
+        der_encoded_signature = der.two_int_sequence_encoder(signature.r, low_s)
+        coincurve_public_key = self.keys.PublicKey(b"\x04" + public_key.to_bytes())
+        return coincurve_public_key.verify(
+            der_encoded_signature,
+            msg_hash,
+            hasher=None,
+        )
+
+    def ecdsa_recover(self,
+                      msg_hash: bytes,
+                      signature: Signature) -> PublicKey:
+        signature_bytes = signature.to_bytes()
+        try:
+            public_key_bytes = self.keys.PublicKey.from_signature_and_message(
+                signature_bytes,
+                msg_hash,
+                hasher=None,
+            ).format(compressed=False)[1:]
+        except (ValueError, Exception) as err:
+            # `coincurve` can raise `ValueError` or `Exception` depending on
+            # how the signature is invalid.
+ raise BadSignature(str(err)) + public_key = PublicKey(public_key_bytes, backend=self) + return public_key + + def private_key_to_public_key(self, private_key: PrivateKey) -> PublicKey: + public_key_bytes = self.keys.PrivateKey(private_key.to_bytes()).public_key.format( + compressed=False, + )[1:] + return PublicKey(public_key_bytes, backend=self) + + def decompress_public_key_bytes(self, + compressed_public_key_bytes: bytes) -> bytes: + public_key = self.keys.PublicKey(compressed_public_key_bytes) + return public_key.format(compressed=False)[1:] + + def compress_public_key_bytes(self, + uncompressed_public_key_bytes: bytes) -> bytes: + validate_uncompressed_public_key_bytes(uncompressed_public_key_bytes) + point = ( + big_endian_to_int(uncompressed_public_key_bytes[:32]), + big_endian_to_int(uncompressed_public_key_bytes[32:]), + ) + public_key = self.keys.PublicKey.from_point(*point) + return public_key.format(compressed=True) diff --git a/env/lib/python3.8/site-packages/eth_keys/backends/native/__init__.py b/env/lib/python3.8/site-packages/eth_keys/backends/native/__init__.py new file mode 100644 index 00000000..d7a2a732 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/backends/native/__init__.py @@ -0,0 +1,5 @@ +from __future__ import absolute_import + +from .main import ( # noqa: F401 + NativeECCBackend, +) diff --git a/env/lib/python3.8/site-packages/eth_keys/backends/native/ecdsa.py b/env/lib/python3.8/site-packages/eth_keys/backends/native/ecdsa.py new file mode 100644 index 00000000..6719b7f4 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/backends/native/ecdsa.py @@ -0,0 +1,169 @@ +""" +Functions lifted from https://github.com/vbuterin/pybitcointools +""" +import hashlib +import hmac +from typing import (Any, Callable, Optional, Tuple) # noqa: F401 + +from eth_utils import ( + int_to_big_endian, + big_endian_to_int, +) + +from eth_keys.constants import ( + SECPK1_N as N, + SECPK1_G as G, + SECPK1_Gx as Gx, + SECPK1_Gy as Gy, + SECPK1_P as P, + SECPK1_A as A, + SECPK1_B as B, +) +from eth_keys.exceptions import ( + BadSignature, +) + +from eth_keys.utils.padding import pad32 + +from .jacobian import ( + inv, + fast_multiply, + fast_add, + is_identity, + jacobian_add, + jacobian_multiply, + from_jacobian, +) + + +def decode_public_key(public_key_bytes: bytes) -> Tuple[int, int]: + left = big_endian_to_int(public_key_bytes[0:32]) + right = big_endian_to_int(public_key_bytes[32:64]) + return left, right + + +def encode_raw_public_key(raw_public_key: Tuple[int, int]) -> bytes: + left, right = raw_public_key + return b''.join(( + pad32(int_to_big_endian(left)), + pad32(int_to_big_endian(right)), + )) + + +def private_key_to_public_key(private_key_bytes: bytes) -> bytes: + private_key_as_num = big_endian_to_int(private_key_bytes) + + if private_key_as_num >= N: + raise Exception("Invalid privkey") + + raw_public_key = fast_multiply(G, private_key_as_num) + public_key_bytes = encode_raw_public_key(raw_public_key) + return public_key_bytes + + +def compress_public_key(uncompressed_public_key_bytes: bytes) -> bytes: + x, y = decode_public_key(uncompressed_public_key_bytes) + if y % 2 == 0: + prefix = b"\x02" + else: + prefix = b"\x03" + return prefix + pad32(int_to_big_endian(x)) + + +def decompress_public_key(compressed_public_key_bytes: bytes) -> bytes: + if len(compressed_public_key_bytes) != 33: + raise ValueError("Invalid compressed public key") + + prefix = compressed_public_key_bytes[0] + if prefix not in (2, 3): + raise ValueError("Invalid compressed 
public key")
+
+    x = big_endian_to_int(compressed_public_key_bytes[1:])
+    y_squared = (x**3 + A * x + B) % P
+    y_abs = pow(y_squared, ((P + 1) // 4), P)
+
+    if (prefix == 2 and y_abs & 1 == 1) or (prefix == 3 and y_abs & 1 == 0):
+        y = (-y_abs) % P
+    else:
+        y = y_abs
+
+    return encode_raw_public_key((x, y))
+
+
+def deterministic_generate_k(msg_hash: bytes,
+                             private_key_bytes: bytes,
+                             digest_fn: Callable[[], Any] = hashlib.sha256) -> int:
+    v_0 = b'\x01' * 32
+    k_0 = b'\x00' * 32
+
+    k_1 = hmac.new(k_0, v_0 + b'\x00' + private_key_bytes + msg_hash, digest_fn).digest()
+    v_1 = hmac.new(k_1, v_0, digest_fn).digest()
+    k_2 = hmac.new(k_1, v_1 + b'\x01' + private_key_bytes + msg_hash, digest_fn).digest()
+    v_2 = hmac.new(k_2, v_1, digest_fn).digest()
+
+    kb = hmac.new(k_2, v_2, digest_fn).digest()
+    k = big_endian_to_int(kb)
+    return k
+
+
+def ecdsa_raw_sign(msg_hash: bytes,
+                   private_key_bytes: bytes) -> Tuple[int, int, int]:
+    z = big_endian_to_int(msg_hash)
+    k = deterministic_generate_k(msg_hash, private_key_bytes)
+
+    r, y = fast_multiply(G, k)
+    s_raw = inv(k, N) * (z + r * big_endian_to_int(private_key_bytes)) % N
+
+    v = 27 + ((y % 2) ^ (0 if s_raw * 2 < N else 1))
+    s = s_raw if s_raw * 2 < N else N - s_raw
+
+    return v - 27, r, s
+
+
+def ecdsa_raw_verify(msg_hash: bytes,
+                     rs: Tuple[int, int],
+                     public_key_bytes: bytes) -> bool:
+    raw_public_key = decode_public_key(public_key_bytes)
+
+    r, s = rs
+
+    w = inv(s, N)
+    z = big_endian_to_int(msg_hash)
+
+    u1, u2 = z * w % N, r * w % N
+    x, y = fast_add(
+        fast_multiply(G, u1),
+        fast_multiply(raw_public_key, u2),
+    )
+    return bool(r == x and (r % N) and (s % N))
+
+
+def ecdsa_raw_recover(msg_hash: bytes,
+                      vrs: Tuple[int, int, int]) -> bytes:
+    v, r, s = vrs
+    v += 27
+
+    if not (27 <= v <= 34):
+        raise BadSignature("%d must be in range 27-34" % v)
+
+    x = r
+
+    xcubedaxb = (x * x * x + A * x + B) % P
+    beta = pow(xcubedaxb, (P + 1) // 4, P)
+    y = beta if v % 2 ^ beta % 2 else (P - beta)
+    # If xcubedaxb is not a quadratic residue, then r cannot be the x coord
+    # for a point on the curve, and so the sig is invalid
+    if (xcubedaxb - y * y) % P != 0 or not (r % N) or not (s % N):
+        raise BadSignature("Invalid signature")
+    z = big_endian_to_int(msg_hash)
+    Gz = jacobian_multiply((Gx, Gy, 1), (N - z) % N)
+    XY = jacobian_multiply((x, y, 1), s)
+    Qr = jacobian_add(Gz, XY)
+    Q = jacobian_multiply(Qr, inv(r, N))
+
+    if is_identity(Q):
+        raise BadSignature("Invalid signature")
+
+    raw_public_key = from_jacobian(Q)
+
+    return encode_raw_public_key(raw_public_key)
diff --git a/env/lib/python3.8/site-packages/eth_keys/backends/native/jacobian.py b/env/lib/python3.8/site-packages/eth_keys/backends/native/jacobian.py
new file mode 100644
index 00000000..1ea09fe5
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keys/backends/native/jacobian.py
@@ -0,0 +1,97 @@
+from typing import Tuple  # noqa: F401
+
+from eth_keys.constants import (
+    IDENTITY_POINTS,
+    SECPK1_P as P,
+    SECPK1_N as N,
+    SECPK1_A as A,
+)
+
+
+def inv(a: int, n: int) -> int:
+    if a == 0:
+        return 0
+    lm, hm = 1, 0
+    low, high = a % n, n
+    while low > 1:
+        r = high // low
+        nm, new = hm - lm * r, high - low * r
+        lm, low, hm, high = nm, new, lm, low
+    return lm % n
+
+
+def to_jacobian(p: Tuple[int, int]) -> Tuple[int, int, int]:
+    o = (p[0], p[1], 1)
+    return o
+
+
+def jacobian_double(p: Tuple[int, int, int]) -> Tuple[int, int, int]:
+    if not p[1]:
+        return (0, 0, 0)
+    ysq = (p[1] ** 2) % P
+    S = (4 * p[0] * ysq) % P
+    M = (3 * p[0] ** 2 + A * p[2] ** 4) % P
+    nx = (M**2 -
2 * S) % P + ny = (M * (S - nx) - 8 * ysq ** 2) % P + nz = (2 * p[1] * p[2]) % P + return (nx, ny, nz) + + +def jacobian_add(p: Tuple[int, int, int], + q: Tuple[int, int, int]) -> Tuple[int, int, int]: + if not p[1]: + return q + if not q[1]: + return p + U1 = (p[0] * q[2] ** 2) % P + U2 = (q[0] * p[2] ** 2) % P + S1 = (p[1] * q[2] ** 3) % P + S2 = (q[1] * p[2] ** 3) % P + if U1 == U2: + if S1 != S2: + return (0, 0, 1) + return jacobian_double(p) + H = U2 - U1 + R = S2 - S1 + H2 = (H * H) % P + H3 = (H * H2) % P + U1H2 = (U1 * H2) % P + nx = (R ** 2 - H3 - 2 * U1H2) % P + ny = (R * (U1H2 - nx) - S1 * H3) % P + nz = (H * p[2] * q[2]) % P + return (nx, ny, nz) + + +def from_jacobian(p: Tuple[int, int, int]) -> Tuple[int, int]: + z = inv(p[2], P) + return ((p[0] * z**2) % P, (p[1] * z**3) % P) + + +def jacobian_multiply(a: Tuple[int, int, int], + n: int) -> Tuple[int, int, int]: + if a[1] == 0 or n == 0: + return (0, 0, 1) + if n == 1: + return a + if n < 0 or n >= N: + return jacobian_multiply(a, n % N) + if (n % 2) == 0: + return jacobian_double(jacobian_multiply(a, n // 2)) + elif (n % 2) == 1: + return jacobian_add(jacobian_double(jacobian_multiply(a, n // 2)), a) + else: + raise Exception("Invariant: Unreachable code path") + + +def fast_multiply(a: Tuple[int, int], + n: int) -> Tuple[int, int]: + return from_jacobian(jacobian_multiply(to_jacobian(a), n)) + + +def fast_add(a: Tuple[int, int], + b: Tuple[int, int]) -> Tuple[int, int]: + return from_jacobian(jacobian_add(to_jacobian(a), to_jacobian(b))) + + +def is_identity(p: Tuple[int, int, int]) -> bool: + return p in IDENTITY_POINTS diff --git a/env/lib/python3.8/site-packages/eth_keys/backends/native/main.py b/env/lib/python3.8/site-packages/eth_keys/backends/native/main.py new file mode 100644 index 00000000..ca7094f5 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/backends/native/main.py @@ -0,0 +1,63 @@ +from __future__ import absolute_import + +from typing import Optional # noqa: F401 + +from .ecdsa import ( + ecdsa_raw_recover, + ecdsa_raw_sign, + ecdsa_raw_verify, + private_key_to_public_key, + compress_public_key, + decompress_public_key, +) + +from eth_keys.backends.base import BaseECCBackend +from eth_keys.datatypes import ( # noqa: F401 + BaseSignature, + NonRecoverableSignature, + PrivateKey, + PublicKey, + Signature, +) + + +class NativeECCBackend(BaseECCBackend): + def ecdsa_sign(self, + msg_hash: bytes, + private_key: PrivateKey) -> Signature: + signature_vrs = ecdsa_raw_sign(msg_hash, private_key.to_bytes()) + signature = Signature(vrs=signature_vrs, backend=self) + return signature + + def ecdsa_sign_non_recoverable(self, + msg_hash: bytes, + private_key: PrivateKey) -> NonRecoverableSignature: + _, signature_r, signature_s = ecdsa_raw_sign(msg_hash, private_key.to_bytes()) + signature = NonRecoverableSignature(rs=(signature_r, signature_s), backend=self) + return signature + + def ecdsa_verify(self, + msg_hash: bytes, + signature: BaseSignature, + public_key: PublicKey) -> bool: + return ecdsa_raw_verify(msg_hash, signature.rs, public_key.to_bytes()) + + def ecdsa_recover(self, + msg_hash: bytes, + signature: Signature) -> PublicKey: + public_key_bytes = ecdsa_raw_recover(msg_hash, signature.vrs) + public_key = PublicKey(public_key_bytes, backend=self) + return public_key + + def private_key_to_public_key(self, private_key: PrivateKey) -> PublicKey: + public_key_bytes = private_key_to_public_key(private_key.to_bytes()) + public_key = PublicKey(public_key_bytes, backend=self) + return public_key + + def 
decompress_public_key_bytes(self, + compressed_public_key_bytes: bytes) -> bytes: + return decompress_public_key(compressed_public_key_bytes) + + def compress_public_key_bytes(self, + uncompressed_public_key_bytes: bytes) -> bytes: + return compress_public_key(uncompressed_public_key_bytes) diff --git a/env/lib/python3.8/site-packages/eth_keys/constants.py b/env/lib/python3.8/site-packages/eth_keys/constants.py new file mode 100644 index 00000000..b7757ce4 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/constants.py @@ -0,0 +1,19 @@ +from typing import Tuple # noqa: F401 + + +# +# SECPK1N +# +SECPK1_P = 2**256 - 2**32 - 977 # type: int +SECPK1_N = 115792089237316195423570985008687907852837564279074904382605163141518161494337 # type: int # noqa: E501 +SECPK1_A = 0 # type: int # noqa: E501 +SECPK1_B = 7 # type: int # noqa: E501 +SECPK1_Gx = 55066263022277343669578718895168534326250603453777594175500187360389116729240 # type: int # noqa: E501 +SECPK1_Gy = 32670510020758816978083085130507043184471273380659243275938904335757337482424 # type: int # noqa: E501 +SECPK1_G = (SECPK1_Gx, SECPK1_Gy) # type: Tuple[int, int] + + +# +# Internal representations of Jacobian identity points +# +IDENTITY_POINTS = ((0, 0, 0), (0, 0, 1)) diff --git a/env/lib/python3.8/site-packages/eth_keys/datatypes.py b/env/lib/python3.8/site-packages/eth_keys/datatypes.py new file mode 100644 index 00000000..386d173b --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/datatypes.py @@ -0,0 +1,451 @@ +from __future__ import absolute_import + +from abc import ( + ABC, + abstractmethod, +) +import codecs +import collections +import sys +from typing import ( # noqa: F401 + Any, + Tuple, + Union, + Type, + TYPE_CHECKING, +) + +from eth_typing import ( + ChecksumAddress, +) +from eth_utils import ( + big_endian_to_int, + encode_hex, + int_to_big_endian, + is_bytes, + is_string, + keccak, + to_checksum_address, + to_normalized_address, + ValidationError, + +) + +from eth_keys.utils.address import ( + public_key_bytes_to_address, +) +from eth_keys.utils.numeric import ( + int_to_byte, +) +from eth_keys.utils.padding import ( + pad32, +) + +from eth_keys.exceptions import ( + BadSignature, +) +from eth_keys.validation import ( + validate_private_key_bytes, + validate_compressed_public_key_bytes, + validate_uncompressed_public_key_bytes, + validate_recoverable_signature_bytes, + validate_non_recoverable_signature_bytes, + validate_signature_v, + validate_signature_r_or_s, +) + +if TYPE_CHECKING: + from eth_keys.backends.base import BaseECCBackend # noqa: F401 + + +# Must compare against version_info[0] and not version_info.major to please mypy. +if sys.version_info[0] == 2: + ByteString = type( + b'BaseString', + (collections.abc.Sequence, basestring), # noqa: F821 + {}, + ) # type: Any +else: + ByteString = collections.abc.ByteString + + +class LazyBackend: + def __init__(self, + backend: 'Union[BaseECCBackend, Type[BaseECCBackend], str, None]' = None, + ) -> None: + from eth_keys.backends.base import ( # noqa: F811 + BaseECCBackend, + ) + + if backend is None: + pass + elif isinstance(backend, BaseECCBackend): + pass + elif isinstance(backend, type) and issubclass(backend, BaseECCBackend): + backend = backend() + elif is_string(backend): + backend = self.get_backend(backend) + else: + raise ValueError( + "Unsupported format for ECC backend. 
Must be an instance or " + "subclass of `eth_keys.backends.BaseECCBackend` or a string of " + "the dot-separated import path for the desired backend class" + ) + + self.backend = backend + + _backend = None # type: BaseECCBackend + + @property + def backend(self) -> 'BaseECCBackend': + if self._backend is None: + return self.get_backend() + else: + return self._backend + + @backend.setter + def backend(self, value: 'BaseECCBackend') -> None: + self._backend = value + + @classmethod + def get_backend(cls, *args: Any, **kwargs: Any) -> 'BaseECCBackend': + from eth_keys.backends import get_backend + return get_backend(*args, **kwargs) + + +class BaseKey(ByteString, collections.abc.Hashable): + _raw_key = None # type: bytes + + def to_hex(self) -> str: + return '0x' + codecs.decode(codecs.encode(self._raw_key, 'hex'), 'ascii') + + def to_bytes(self) -> bytes: + return self._raw_key + + def __hash__(self) -> int: + return big_endian_to_int(keccak(self.to_bytes())) + + def __str__(self) -> str: + return self.to_hex() + + def __int__(self) -> int: + return big_endian_to_int(self._raw_key) + + def __len__(self) -> int: + # TODO: this seems wrong. + return 64 + + # Must be typed with `ignore` due to + # https://github.com/python/mypy/issues/1237 + def __getitem__(self, index: int) -> int: # type: ignore + return self._raw_key[index] + + def __eq__(self, other: Any) -> bool: + if hasattr(other, 'to_bytes'): + return self.to_bytes() == other.to_bytes() + elif is_bytes(other): + return self.to_bytes() == other + else: + return False + + def __repr__(self) -> str: + return "'{0}'".format(self.to_hex()) + + def __index__(self) -> int: + return self.__int__() + + def __hex__(self) -> str: + if sys.version_info[0] == 2: + return codecs.encode(self.to_hex(), 'ascii') + else: + return self.to_hex() + + +class PublicKey(BaseKey, LazyBackend): + def __init__(self, + public_key_bytes: bytes, + backend: 'Union[BaseECCBackend, Type[BaseECCBackend], str, None]' = None, + ) -> None: + validate_uncompressed_public_key_bytes(public_key_bytes) + + self._raw_key = public_key_bytes + super().__init__(backend=backend) + + @classmethod + def from_compressed_bytes(cls, + compressed_public_key_bytes: bytes, + backend: 'BaseECCBackend' = None, + ) -> 'PublicKey': + validate_compressed_public_key_bytes(compressed_public_key_bytes) + + if backend is None: + backend = cls.get_backend() + uncompressed_key = backend.decompress_public_key_bytes(compressed_public_key_bytes) + + return cls(uncompressed_key, backend) + + @classmethod + def from_private(cls, + private_key: 'PrivateKey', + backend: 'BaseECCBackend' = None, + ) -> 'PublicKey': + if backend is None: + backend = cls.get_backend() + return backend.private_key_to_public_key(private_key) + + @classmethod + def recover_from_msg(cls, + message: bytes, + signature: 'Signature', + backend: 'BaseECCBackend' = None, + ) -> 'PublicKey': + message_hash = keccak(message) + return cls.recover_from_msg_hash(message_hash, signature, backend) + + @classmethod + def recover_from_msg_hash(cls, + message_hash: bytes, + signature: 'Signature', + backend: 'BaseECCBackend' = None, + ) -> 'PublicKey': + if backend is None: + backend = cls.get_backend() + return backend.ecdsa_recover(message_hash, signature) + + def verify_msg(self, + message: bytes, + signature: 'Signature', + ) -> bool: + message_hash = keccak(message) + return self.verify_msg_hash(message_hash, signature) + + def verify_msg_hash(self, + message_hash: bytes, + signature: 'Signature', + ) -> bool: + return 
self.backend.ecdsa_verify(message_hash, signature, self) + + def to_compressed_bytes(self) -> bytes: + return self.backend.compress_public_key_bytes(self.to_bytes()) + + # + # Ethereum address conversions + # + def to_checksum_address(self) -> ChecksumAddress: + return to_checksum_address(public_key_bytes_to_address(self.to_bytes())) + + def to_address(self) -> str: + return to_normalized_address(public_key_bytes_to_address(self.to_bytes())) + + def to_canonical_address(self) -> bytes: + return public_key_bytes_to_address(self.to_bytes()) + + +class PrivateKey(BaseKey, LazyBackend): + public_key = None # type: PublicKey + + def __init__(self, + private_key_bytes: bytes, + backend: 'Union[BaseECCBackend, Type[BaseECCBackend], str, None]' = None, + ) -> None: + validate_private_key_bytes(private_key_bytes) + + self._raw_key = private_key_bytes + + self.public_key = self.backend.private_key_to_public_key(self) + super().__init__(backend=backend) + + def sign_msg(self, message: bytes) -> 'Signature': + message_hash = keccak(message) + return self.sign_msg_hash(message_hash) + + def sign_msg_hash(self, message_hash: bytes) -> 'Signature': + return self.backend.ecdsa_sign(message_hash, self) + + def sign_msg_non_recoverable(self, message: bytes) -> 'NonRecoverableSignature': + message_hash = keccak(message) + return self.sign_msg_hash_non_recoverable(message_hash) + + def sign_msg_hash_non_recoverable(self, message_hash: bytes) -> 'NonRecoverableSignature': + return self.backend.ecdsa_sign_non_recoverable(message_hash, self) + + +class BaseSignature(ByteString, LazyBackend, ABC): + _r = None # type: int + _s = None # type: int + + def __init__(self, + rs: Tuple[int, int], + backend: 'Union[BaseECCBackend, Type[BaseECCBackend], str, None]' = None + ) -> None: + for value in rs: + try: + validate_signature_r_or_s(value) + except ValidationError as error: + raise BadSignature(error) from error + + self._r, self._s = rs + super().__init__(backend=backend) + + @property + def r(self) -> int: + return self._r + + @property + def s(self) -> int: + return self._s + + @property + def rs(self) -> Tuple[int, int]: + return (self.r, self.s) + + @abstractmethod + def to_bytes(self) -> bytes: + pass + + def __bytes__(self) -> bytes: + return self.to_bytes() + + def to_hex(self) -> str: + return encode_hex(self.to_bytes()) + + def __hash__(self) -> int: + return big_endian_to_int(keccak(self.to_bytes())) + + def __str__(self) -> str: + return self.to_hex() + + def __len__(self) -> int: + return len(bytes(self)) + + def __eq__(self, other: Any) -> bool: + if hasattr(other, 'to_bytes'): + return self.to_bytes() == other.to_bytes() + elif is_bytes(other): + return self.to_bytes() == other + else: + return False + + # Must be typed with `ignore` due to + # https://github.com/python/mypy/issues/1237 + def __getitem__(self, index: int) -> int: # type: ignore + return self.to_bytes()[index] + + def __repr__(self) -> str: + return "'{0}'".format(self.to_hex()) + + def __index__(self) -> int: + return self.__int__() + + def __hex__(self) -> str: + return self.to_hex() + + def __int__(self) -> int: + return big_endian_to_int(self.to_bytes()) + + def verify_msg(self, + message: bytes, + public_key: PublicKey) -> bool: + message_hash = keccak(message) + return self.verify_msg_hash(message_hash, public_key) + + def verify_msg_hash(self, + message_hash: bytes, + public_key: PublicKey) -> bool: + return self.backend.ecdsa_verify(message_hash, self, public_key) + + +class Signature(BaseSignature): + _v = None # type: int + + 
def __init__(self,
+                 signature_bytes: bytes = None,
+                 vrs: Tuple[int, int, int] = None,
+                 backend: 'Union[BaseECCBackend, Type[BaseECCBackend], str, None]' = None,
+                 ) -> None:
+        if bool(signature_bytes) is bool(vrs):
+            raise TypeError("You must provide one of `signature_bytes` or `vrs`")
+        elif signature_bytes:
+            validate_recoverable_signature_bytes(signature_bytes)
+            r = big_endian_to_int(signature_bytes[0:32])
+            s = big_endian_to_int(signature_bytes[32:64])
+            v = ord(signature_bytes[64:65])
+        elif vrs:
+            v, r, s, = vrs
+        else:
+            raise TypeError("Invariant: unreachable code path")
+
+        super().__init__(rs=(r, s), backend=backend)
+        try:
+            self.v = v
+        except ValidationError as error:
+            raise BadSignature(str(error)) from error
+
+    #
+    # v
+    #
+    @property
+    def v(self) -> int:
+        return self._v
+
+    @v.setter
+    def v(self, value: int) -> None:
+        validate_signature_v(value)
+        self._v = value
+
+    @BaseSignature.r.setter  # type: ignore
+    def r(self, value: int) -> None:
+        validate_signature_r_or_s(value)
+        self._r = value
+
+    @BaseSignature.s.setter  # type: ignore
+    def s(self, value: int) -> None:
+        validate_signature_r_or_s(value)
+        self._s = value
+
+    @property
+    def vrs(self) -> Tuple[int, int, int]:
+        return (self.v, self.r, self.s)
+
+    def to_bytes(self) -> bytes:
+        vb = int_to_byte(self.v)
+        rb = pad32(int_to_big_endian(self.r))
+        sb = pad32(int_to_big_endian(self.s))
+        return b''.join((rb, sb, vb))
+
+    def recover_public_key_from_msg(self, message: bytes) -> PublicKey:
+        message_hash = keccak(message)
+        return self.recover_public_key_from_msg_hash(message_hash)
+
+    def recover_public_key_from_msg_hash(self, message_hash: bytes) -> PublicKey:
+        return self.backend.ecdsa_recover(message_hash, self)
+
+    def to_non_recoverable_signature(self) -> 'NonRecoverableSignature':
+        return NonRecoverableSignature(rs=self.rs)
+
+
+class NonRecoverableSignature(BaseSignature):
+
+    def __init__(self,
+                 signature_bytes: bytes = None,
+                 rs: Tuple[int, int] = None,
+                 backend: 'Union[BaseECCBackend, Type[BaseECCBackend], str, None]' = None,
+                 ) -> None:
+        if signature_bytes is None and rs is None:
+            raise TypeError("You must provide one of `signature_bytes` or `rs`")
+        elif signature_bytes:
+            validate_non_recoverable_signature_bytes(signature_bytes)
+            r = big_endian_to_int(signature_bytes[0:32])
+            s = big_endian_to_int(signature_bytes[32:64])
+        elif rs:
+            r, s = rs
+        else:
+            raise Exception("Invariant: unreachable code path")
+
+        super().__init__(rs=(r, s), backend=backend)
+
+    def to_bytes(self) -> bytes:
+        return b''.join(
+            pad32(int_to_big_endian(value))
+            for value in self.rs
+        )
diff --git a/env/lib/python3.8/site-packages/eth_keys/exceptions.py b/env/lib/python3.8/site-packages/eth_keys/exceptions.py
new file mode 100644
index 00000000..fb8c75be
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keys/exceptions.py
@@ -0,0 +1,7 @@
+# This exposes the eth-utils exception for backwards compatibility,
+# for any library that catches eth_keys.exceptions.ValidationError
+from eth_utils import ValidationError  # noqa: F401
+
+
+class BadSignature(Exception):
+    pass
diff --git a/env/lib/python3.8/site-packages/eth_keys/main.py b/env/lib/python3.8/site-packages/eth_keys/main.py
new file mode 100644
index 00000000..678a7b87
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keys/main.py
@@ -0,0 +1,119 @@
+from typing import (Any, Union, Type)  # noqa: F401
+
+from eth_utils import (
+    ValidationError,
+)
+
+from eth_keys.datatypes import (
+    BaseSignature,
+    LazyBackend,
+    NonRecoverableSignature,
+    PublicKey,
+    PrivateKey,
+    Signature,
+)
+from eth_keys.validation import (
+    validate_message_hash,
+)
+
+
+# These must be aliased due to a scoping issue in mypy
+# https://github.com/python/mypy/issues/1775
+_PublicKey = PublicKey
+_PrivateKey = PrivateKey
+_Signature = Signature
+_NonRecoverableSignature = NonRecoverableSignature
+
+
+class KeyAPI(LazyBackend):
+    #
+    # datatype shortcuts
+    #
+    PublicKey = PublicKey  # type: Type[_PublicKey]
+    PrivateKey = PrivateKey  # type: Type[_PrivateKey]
+    Signature = Signature  # type: Type[_Signature]
+    NonRecoverableSignature = NonRecoverableSignature  # type: Type[_NonRecoverableSignature]
+
+    #
+    # Proxy method calls to the backends
+    #
+    def ecdsa_sign(self,
+                   message_hash: bytes,
+                   private_key: _PrivateKey) -> _Signature:
+        validate_message_hash(message_hash)
+        if not isinstance(private_key, PrivateKey):
+            raise ValidationError(
+                "The `private_key` must be an instance of `eth_keys.datatypes.PrivateKey`"
+            )
+        signature = self.backend.ecdsa_sign(message_hash, private_key)
+        if not isinstance(signature, Signature):
+            raise ValidationError(
+                "Backend returned an invalid signature. Return value must be "
+                "an instance of `eth_keys.datatypes.Signature`"
+            )
+        return signature
+
+    def ecdsa_sign_non_recoverable(self,
+                                   message_hash: bytes,
+                                   private_key: _PrivateKey) -> _NonRecoverableSignature:
+        validate_message_hash(message_hash)
+        if not isinstance(private_key, PrivateKey):
+            raise ValidationError(
+                "The `private_key` must be an instance of `eth_keys.datatypes.PrivateKey`"
+            )
+        signature = self.backend.ecdsa_sign_non_recoverable(message_hash, private_key)
+        if not isinstance(signature, NonRecoverableSignature):
+            raise ValidationError(
+                "Backend returned an invalid signature. Return value must be "
+                "an instance of `eth_keys.datatypes.NonRecoverableSignature`"
+            )
+        return signature
+
+    def ecdsa_verify(self,
+                     message_hash: bytes,
+                     signature: BaseSignature,
+                     public_key: _PublicKey) -> bool:
+        validate_message_hash(message_hash)
+        if not isinstance(public_key, PublicKey):
+            raise ValidationError(
+                "The `public_key` must be an instance of `eth_keys.datatypes.PublicKey`"
+            )
+        if not isinstance(signature, BaseSignature):
+            raise ValidationError(
+                "The `signature` must be an instance of `eth_keys.datatypes.BaseSignature`"
+            )
+        return self.backend.ecdsa_verify(message_hash, signature, public_key)
+
+    def ecdsa_recover(self,
+                      message_hash: bytes,
+                      signature: _Signature) -> _PublicKey:
+        validate_message_hash(message_hash)
+        if not isinstance(signature, Signature):
+            raise ValidationError(
+                "The `signature` must be an instance of `eth_keys.datatypes.Signature`"
+            )
+        public_key = self.backend.ecdsa_recover(message_hash, signature)
+        if not isinstance(public_key, _PublicKey):
+            raise ValidationError(
+                "Backend returned an invalid public_key. Return value must be "
+                "an instance of `eth_keys.datatypes.PublicKey`"
+            )
+        return public_key
+
+    def private_key_to_public_key(self, private_key: _PrivateKey) -> _PublicKey:
+        if not isinstance(private_key, PrivateKey):
+            raise ValidationError(
+                "The `private_key` must be an instance of `eth_keys.datatypes.PrivateKey`"
+            )
+        public_key = self.backend.private_key_to_public_key(private_key)
+        if not isinstance(public_key, PublicKey):
+            raise ValidationError(
+                "Backend returned an invalid public_key.
Return value must be " + "an instance of `eth_keys.datatypes.PublicKey`" + ) + return public_key + + +# This creates an easy to import backend which will lazily fetch whatever +# backend has been configured at runtime (as opposed to import or instantiation time). +lazy_key_api = KeyAPI(backend=None) diff --git a/env/lib/python3.8/site-packages/eth_keys/tools/__init__.py b/env/lib/python3.8/site-packages/eth_keys/tools/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_keys/tools/factories.py b/env/lib/python3.8/site-packages/eth_keys/tools/factories.py new file mode 100644 index 00000000..ffc01bb8 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/tools/factories.py @@ -0,0 +1,33 @@ +try: + import factory +except ImportError as err: + raise ImportError( + "Use of `eth_keys.tools.factories` requires the `factory-boy` package " + "which does not appear to be installed." + ) from err + +from eth_keys import keys + + +def _mk_random_bytes(num_bytes: int) -> bytes: + try: + import secrets + except ImportError: + import os + return os.urandom(num_bytes) + else: + return secrets.token_bytes(num_bytes) + + +class PrivateKeyFactory(factory.Factory): # type: ignore + class Meta: + model = keys.PrivateKey + + private_key_bytes = factory.LazyFunction(lambda: _mk_random_bytes(32)) + + +class PublicKeyFactory(factory.Factory): # type: ignore + class Meta: + model = keys.PublicKey + + public_key_bytes = factory.LazyFunction(lambda: PrivateKeyFactory().public_key.to_bytes()) diff --git a/env/lib/python3.8/site-packages/eth_keys/utils/__init__.py b/env/lib/python3.8/site-packages/eth_keys/utils/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_keys/utils/address.py b/env/lib/python3.8/site-packages/eth_keys/utils/address.py new file mode 100644 index 00000000..bb8e7e1c --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/utils/address.py @@ -0,0 +1,7 @@ +from eth_utils import ( + keccak, +) + + +def public_key_bytes_to_address(public_key_bytes: bytes) -> bytes: + return keccak(public_key_bytes)[-20:] diff --git a/env/lib/python3.8/site-packages/eth_keys/utils/der.py b/env/lib/python3.8/site-packages/eth_keys/utils/der.py new file mode 100644 index 00000000..ddeee83e --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_keys/utils/der.py @@ -0,0 +1,118 @@ +# Non-recoverable signatures are encoded using a DER sequence of two integers +# We locally implement serialization and deserialization for this specific spec +# with constrained inputs. +# This is done locally to avoid importing a 3rd-party library, in this very sensitive project. +# asn1tools and pyasn1 were used as reference APIs, see how in tests/core/test_utils_asn1.py +# +# See more about DER encodings, and ASN.1 in general, here: +# http://luca.ntop.org/Teaching/Appunti/asn1.html +# +# These methods are NOT intended for external use outside of this project. They do not +# fully validate inputs and make assumptions that are not *generally* true. 
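+#
+# For example, r = 1 and s = 2 encode to the bytes 30 06 02 01 01 02 01 02:
+# a SEQUENCE tag (0x30) with content length 6, holding two INTEGERs (tag 0x02),
+# each of length 1. (Illustrative byte layout only; the helpers below target
+# the 32-byte integers used by real signatures.)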
+
+from typing import (
+    Iterator,
+    Tuple,
+)
+
+from eth_utils import (
+    apply_to_return_value,
+    big_endian_to_int,
+    int_to_big_endian,
+)
+
+
+@apply_to_return_value(bytes)
+def two_int_sequence_encoder(signature_r: int, signature_s: int) -> Iterator[int]:
+    """
+    Encode two integers using DER, defined as:
+
+    ::
+
+        ECDSASpec DEFINITIONS ::= BEGIN
+            ECDSASignature ::= SEQUENCE {
+                r INTEGER,
+                s INTEGER
+            }
+        END
+
+    Only a subset of integers are supported: positive, 32-byte ints.
+
+    See: https://docs.microsoft.com/en-us/windows/desktop/seccertenroll/about-sequence
+    """
+    # Sequence tag
+    yield 0x30
+
+    encoded1 = _encode_int(signature_r)
+    encoded2 = _encode_int(signature_s)
+
+    # Sequence length
+    yield len(encoded1) + len(encoded2)
+
+    yield from encoded1
+    yield from encoded2
+
+
+def two_int_sequence_decoder(encoded: bytes) -> Tuple[int, int]:
+    """
+    Decode bytes to two integers using DER, defined as:
+
+    ::
+
+        ECDSASpec DEFINITIONS ::= BEGIN
+            ECDSASignature ::= SEQUENCE {
+                r INTEGER,
+                s INTEGER
+            }
+        END
+
+    Only a subset of integers are supported: positive, 32-byte ints.
+
+    r is returned first, and s is returned second
+
+    See: https://docs.microsoft.com/en-us/windows/desktop/seccertenroll/about-sequence
+    """
+    if encoded[0] != 0x30:
+        raise ValueError("Encoded sequence must start with a 0x30 byte, but got %s" % encoded[0])
+
+    # skip sequence length
+    int1, rest = _decode_int(encoded[2:])
+    int2, empty = _decode_int(rest)
+
+    if len(empty) != 0:
+        raise ValueError("Encoded sequence must not contain any trailing data, but had %r" % empty)
+
+    return int1, int2
+
+
+@apply_to_return_value(bytes)
+def _encode_int(primitive: int) -> Iterator[int]:
+    # See: https://docs.microsoft.com/en-us/windows/desktop/seccertenroll/about-integer
+
+    # Integer tag
+    yield 0x02
+
+    encoded = int_to_big_endian(primitive)
+    if encoded[0] >= 128:
+        # Indicate that integer is positive (it always is, but doesn't always need the flag)
+        yield len(encoded) + 1
+        yield 0x00
+    else:
+        yield len(encoded)
+
+    yield from encoded
+
+
+def _decode_int(encoded: bytes) -> Tuple[int, bytes]:
+    # See: https://docs.microsoft.com/en-us/windows/desktop/seccertenroll/about-integer
+
+    if encoded[0] != 0x02:
+        raise ValueError(
+            "Encoded value must be an integer, starting with a 0x02 byte, but got %s" % encoded[0]
+        )
+
+    length = encoded[1]
+    # to_int can handle leading zeros
+    decoded_int = big_endian_to_int(encoded[2:2 + length])
+
+    return decoded_int, encoded[2 + length:]
diff --git a/env/lib/python3.8/site-packages/eth_keys/utils/module_loading.py b/env/lib/python3.8/site-packages/eth_keys/utils/module_loading.py
new file mode 100644
index 00000000..03b4a784
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keys/utils/module_loading.py
@@ -0,0 +1,52 @@
+import operator
+from typing import (Any, Tuple)
+from importlib import import_module
+
+
+def import_string(dotted_path: str) -> Any:
+    """
+    Source: django.utils.module_loading
+    Import a dotted module path and return the attribute/class designated by the
+    last name in the path. Raise ImportError if the import failed.
+    """
+    try:
+        module_path, class_name = dotted_path.rsplit('.', 1)
+    except ValueError:
+        msg = "%s doesn't look like a module path" % dotted_path
+        raise ImportError(msg)
+
+    module = import_module(module_path)
+
+    try:
+        return getattr(module, class_name)
+    except AttributeError:
+        msg = 'Module "%s" does not define a "%s" attribute/class' % (
+            module_path, class_name)
+        raise ImportError(msg)
+
+
+def split_at_longest_importable_path(dotted_path: str) -> Tuple[str, str]:
+    num_path_parts = len(dotted_path.split('.'))
+
+    for i in range(1, num_path_parts):
+        path_parts = dotted_path.rsplit('.', i)
+        import_part = path_parts[0]
+        remainder = '.'.join(path_parts[1:])
+
+        try:
+            module = import_module(import_part)
+        except ImportError:
+            continue
+
+        try:
+            operator.attrgetter(remainder)(module)
+        except AttributeError:
+            raise ImportError(
+                "Unable to derive appropriate import path for {0}".format(
+                    dotted_path,
+                )
+            )
+        else:
+            return import_part, remainder
+    else:
+        return '', dotted_path
diff --git a/env/lib/python3.8/site-packages/eth_keys/utils/numeric.py b/env/lib/python3.8/site-packages/eth_keys/utils/numeric.py
new file mode 100644
index 00000000..9041e0ae
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keys/utils/numeric.py
@@ -0,0 +1,16 @@
+from eth_keys.constants import (
+    SECPK1_N,
+)
+
+
+def int_to_byte(value: int) -> bytes:
+    return bytes([value])
+
+
+def coerce_low_s(value: int) -> int:
+    """Coerce the s component of an ECDSA signature into its low-s form.
+
+    See https://bitcoin.stackexchange.com/questions/83408/in-ecdsa-why-is-r-%E2%88%92s-mod-n-complementary-to-r-s  # noqa: E501
+    or https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2.md.
+    """
+    return min(value, -value % SECPK1_N)
diff --git a/env/lib/python3.8/site-packages/eth_keys/utils/padding.py b/env/lib/python3.8/site-packages/eth_keys/utils/padding.py
new file mode 100644
index 00000000..58665bfb
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keys/utils/padding.py
@@ -0,0 +1,2 @@
+def pad32(value: bytes) -> bytes:
+    return value.rjust(32, b'\x00')
diff --git a/env/lib/python3.8/site-packages/eth_keys/validation.py b/env/lib/python3.8/site-packages/eth_keys/validation.py
new file mode 100644
index 00000000..96ae759d
--- /dev/null
+++ b/env/lib/python3.8/site-packages/eth_keys/validation.py
@@ -0,0 +1,111 @@
+from typing import Any
+
+from eth_utils import (
+    encode_hex,
+    is_bytes,
+    is_integer,
+    ValidationError,
+)
+from eth_utils.toolz import curry
+
+from eth_keys.constants import (
+    SECPK1_N,
+)
+
+
+def validate_integer(value: Any) -> None:
+    if not is_integer(value) or isinstance(value, bool):
+        raise ValidationError("Value must be an integer. Got: {0}".format(type(value)))
+
+
+def validate_bytes(value: Any) -> None:
+    if not is_bytes(value):
+        raise ValidationError("Value must be a byte string.
Got: {0}".format(type(value))) + + +@curry +def validate_gte(value: Any, minimum: int) -> None: + validate_integer(value) + if value < minimum: + raise ValidationError( + "Value {0} is not greater than or equal to {1}".format( + value, minimum, + ) + ) + + +@curry +def validate_lte(value: Any, maximum: int) -> None: + validate_integer(value) + if value > maximum: + raise ValidationError( + "Value {0} is not less than or equal to {1}".format( + value, maximum, + ) + ) + + +validate_lt_secpk1n = validate_lte(maximum=SECPK1_N - 1) + + +def validate_bytes_length(value: bytes, expected_length: int, name: str) -> None: + actual_length = len(value) + if actual_length != expected_length: + raise ValidationError( + "Unexpected {name} length: Expected {expected_length}, but got {actual_length} " + "bytes".format( + name=name, + expected_length=expected_length, + actual_length=actual_length, + ) + ) + + +def validate_message_hash(value: Any) -> None: + validate_bytes(value) + validate_bytes_length(value, 32, "message hash") + + +def validate_uncompressed_public_key_bytes(value: Any) -> None: + validate_bytes(value) + validate_bytes_length(value, 64, "uncompressed public key") + + +def validate_compressed_public_key_bytes(value: Any) -> None: + validate_bytes(value) + validate_bytes_length(value, 33, "compressed public key") + first_byte = value[0:1] + if first_byte not in (b"\x02", b"\x03"): + raise ValidationError( + "Unexpected compressed public key format: Must start with 0x02 or 0x03, but starts " + "with {first_byte}".format( + first_byte=encode_hex(first_byte), + ) + ) + + +def validate_private_key_bytes(value: Any) -> None: + validate_bytes(value) + validate_bytes_length(value, 32, "private key") + + +def validate_recoverable_signature_bytes(value: Any) -> None: + validate_bytes(value) + validate_bytes_length(value, 65, "recoverable signature") + + +def validate_non_recoverable_signature_bytes(value: Any) -> None: + validate_bytes(value) + validate_bytes_length(value, 64, "non recoverable signature") + + +def validate_signature_v(value: int) -> None: + validate_integer(value) + validate_gte(value, minimum=0) + validate_lte(value, maximum=1) + + +def validate_signature_r_or_s(value: int) -> None: + validate_integer(value) + validate_gte(value, 0) + validate_lt_secpk1n(value) diff --git a/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/INSTALLER b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/LICENSE b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/LICENSE new file mode 100644 index 00000000..17bc694e --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2020 The Ethereum Foundation + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/METADATA b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/METADATA new file mode 100644 index 00000000..12d4ad6f --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/METADATA @@ -0,0 +1,153 @@ +Metadata-Version: 2.1 +Name: eth-rlp +Version: 0.2.1 +Summary: eth-rlp: RLP definitions for common Ethereum objects in Python +Home-page: https://github.com/ethereum/eth-rlp +Author: The Ethereum Foundation +Author-email: snakecharmers@ethereum.org +License: MIT +Keywords: ethereum +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Natural Language :: English +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: Implementation :: PyPy +Requires-Python: >=3.6, <4 +Description-Content-Type: text/markdown +Requires-Dist: eth-utils (<2,>=1.0.1) +Requires-Dist: hexbytes (<1,>=0.1.0) +Requires-Dist: rlp (<3,>=0.6.0) +Provides-Extra: dev +Requires-Dist: Sphinx (<2,>=1.6.5) ; extra == 'dev' +Requires-Dist: bumpversion (<1,>=0.5.3) ; extra == 'dev' +Requires-Dist: eth-hash[pycryptodome] ; extra == 'dev' +Requires-Dist: flake8 (==3.7.9) ; extra == 'dev' +Requires-Dist: ipython ; extra == 'dev' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'dev' +Requires-Dist: mypy (==0.770) ; extra == 'dev' +Requires-Dist: pydocstyle (<4,>=3.0.0) ; extra == 'dev' +Requires-Dist: pytest-watch (<5,>=4.1.0) ; extra == 'dev' +Requires-Dist: pytest-xdist ; extra == 'dev' +Requires-Dist: pytest (==5.4.1) ; extra == 'dev' +Requires-Dist: sphinx-rtd-theme (>=0.1.9) ; extra == 'dev' +Requires-Dist: towncrier (<20,>=19.2.0) ; extra == 'dev' +Requires-Dist: tox (==3.14.6) ; extra == 'dev' +Requires-Dist: twine ; extra == 'dev' +Requires-Dist: wheel ; extra == 'dev' +Provides-Extra: doc +Requires-Dist: Sphinx (<2,>=1.6.5) ; extra == 'doc' +Requires-Dist: sphinx-rtd-theme (>=0.1.9) ; extra == 'doc' +Requires-Dist: towncrier (<20,>=19.2.0) ; extra == 'doc' +Provides-Extra: lint +Requires-Dist: flake8 (==3.7.9) ; extra == 'lint' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'lint' +Requires-Dist: mypy (==0.770) ; extra == 'lint' +Requires-Dist: pydocstyle (<4,>=3.0.0) ; extra == 'lint' +Provides-Extra: test +Requires-Dist: eth-hash[pycryptodome] ; extra == 'test' +Requires-Dist: pytest-xdist ; extra == 'test' +Requires-Dist: pytest (==5.4.1) ; extra == 'test' +Requires-Dist: tox (==3.14.6) ; extra == 'test' + +# eth-rlp + +[![Join the chat at https://gitter.im/ethereum/eth-rlp](https://badges.gitter.im/ethereum/eth-rlp.svg)](https://gitter.im/ethereum/eth-rlp?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +[![Build Status](https://circleci.com/gh/ethereum/eth-rlp.svg?style=shield)](https://circleci.com/gh/ethereum/eth-rlp) +[![PyPI 
version](https://badge.fury.io/py/eth-rlp.svg)](https://badge.fury.io/py/eth-rlp) +[![Python versions](https://img.shields.io/pypi/pyversions/eth-rlp.svg)](https://pypi.python.org/pypi/eth-rlp) +[![Docs build](https://readthedocs.org/projects/eth-rlp/badge/?version=latest)](http://eth-rlp.readthedocs.io/en/latest/?badge=latest) + + +RLP definitions for common Ethereum objects in Python + +Read more in the [documentation on ReadTheDocs](http://eth-rlp.readthedocs.io/). [View the change log](http://eth-rlp.readthedocs.io/en/latest/release_notes.html). + +## Quickstart + +```sh +pip install eth-rlp +``` + +## Developer Setup + +If you would like to hack on eth-rlp, please check out the [Snake Charmers +Tactical Manual](https://github.com/ethereum/snake-charmers-tactical-manual) +for information on how we do: + +- Testing +- Pull Requests +- Code Style +- Documentation + +### Development Environment Setup + +You can set up your dev environment with: + +```sh +git clone git@github.com:ethereum/eth-rlp.git +cd eth-rlp +virtualenv -p python3 venv +. venv/bin/activate +pip install -e .[dev] +``` + +### Testing Setup + +During development, you might like to have tests run on every file save. + +Show flake8 errors on file change: + +```sh +# Test flake8 +when-changed -v -s -r -1 eth_rlp/ tests/ -c "clear; flake8 eth_rlp tests && echo 'flake8 success' || echo 'error'" +``` + +Run multi-process tests in one command, but without color: + +```sh +# in the project root: +pytest --numprocesses=4 --looponfail --maxfail=1 +# the same thing, succinctly: +pytest -n 4 -f --maxfail=1 +``` + +Run in one thread, with color and desktop notifications: + +```sh +cd venv +ptw --onfail "notify-send -t 5000 'Test failure ⚠⚠⚠⚠⚠' 'python 3 test on eth-rlp failed'" ../tests ../eth_rlp +``` + +### Release setup + +For Debian-like systems: +``` +apt install pandoc +``` + +To release a new version: + +```sh +make release bump=$$VERSION_PART_TO_BUMP$$ +``` + +#### How to bumpversion + +The version format for this repo is `{major}.{minor}.{patch}` for stable, and +`{major}.{minor}.{patch}-{stage}.{devnum}` for unstable (`stage` can be alpha or beta). + +To issue the next version in line, specify which part to bump, +like `make release bump=minor` or `make release bump=devnum`. This is typically done from the +master branch, except when releasing a beta (in which case the beta is released from master, +and the previous stable branch is released from said branch). + +If you are in a beta version, `make release bump=stage` will switch to a stable. 
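
For example (hypothetical version numbers; the exact sequence depends on the repo's `bumpversion` configuration), a full walk from alpha to stable might look like:

```sh
# 4.0.0-alpha.1 -> 4.0.0-alpha.2
make release bump=devnum
# 4.0.0-alpha.2 -> 4.0.0-beta.1
make release bump=stage
# 4.0.0-beta.1 -> 4.0.0 (stable release)
make release bump=stage
```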
+ +To issue an unstable version when the current version is stable, specify the +new version explicitly, like `make release bump="--new-version 4.0.0-alpha.1 devnum"` + + diff --git a/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/RECORD b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/RECORD new file mode 100644 index 00000000..50290a92 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/RECORD @@ -0,0 +1,10 @@ +eth_rlp-0.2.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +eth_rlp-0.2.1.dist-info/LICENSE,sha256=390jwXuDReXaeTef0eT63FrFKogAMVgYBUWl1oGk4ec,1090 +eth_rlp-0.2.1.dist-info/METADATA,sha256=K94bSiKDJfO6LyLcTbhKkLzDs7tit6zc80Vrk0JU28c,5326 +eth_rlp-0.2.1.dist-info/RECORD,, +eth_rlp-0.2.1.dist-info/WHEEL,sha256=g4nMs7d-Xl9-xC9XovUrsDHGXt-FT0E17Yqo92DEfvY,92 +eth_rlp-0.2.1.dist-info/top_level.txt,sha256=VrC4OXwXcQ3WO7yKr9f6LJQJl0YGtzLE_FFLbCuo3-U,8 +eth_rlp/__init__.py,sha256=s_WNiImN_Xk2o5Aor-Rc817cTLipMXKz2f6iQUXYdIY,53 +eth_rlp/__pycache__/__init__.cpython-38.pyc,, +eth_rlp/__pycache__/main.cpython-38.pyc,, +eth_rlp/main.py,sha256=VJI-yEy-XU7hyeySemO9Gje_9thmLvb6m39Xuh7SvX4,2571 diff --git a/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/WHEEL b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/WHEEL new file mode 100644 index 00000000..b552003f --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/top_level.txt b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/top_level.txt new file mode 100644 index 00000000..4373e7f4 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_rlp-0.2.1.dist-info/top_level.txt @@ -0,0 +1 @@ +eth_rlp diff --git a/env/lib/python3.8/site-packages/eth_rlp/__init__.py b/env/lib/python3.8/site-packages/eth_rlp/__init__.py new file mode 100644 index 00000000..5df61be4 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_rlp/__init__.py @@ -0,0 +1,3 @@ +from .main import ( # noqa: F401 + HashableRLP, +) diff --git a/env/lib/python3.8/site-packages/eth_rlp/main.py b/env/lib/python3.8/site-packages/eth_rlp/main.py new file mode 100644 index 00000000..eaea48f1 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_rlp/main.py @@ -0,0 +1,107 @@ +from eth_utils import ( + keccak, +) +from eth_utils.toolz import ( + pipe, +) +from hexbytes import ( + HexBytes, +) +import rlp + + +class HashableRLP(rlp.Serializable): + r""" + An extension of :class:`rlp.Serializable`. In addition to the below + functions, the class is iterable. + + Use like: + + :: + + class MyRLP(HashableRLP): + fields = ( + ('name1', rlp.sedes.big_endian_int), + ('name2', rlp.sedes.binary), + etc... + ) + + my_obj = MyRLP(name2=b'\xff', name1=1) + list(my_obj) == [1, b'\xff'] + # note that the iteration order is always in RLP-defined order + """ + + @classmethod + def from_dict(cls, field_dict): + r""" + In addition to the standard initialization of: + + :: + + my_obj = MyRLP(name1=1, name2=b'\xff') + + this method enables initialization with: + + :: + + my_obj = MyRLP.from_dict({'name1': 1, 'name2': b'\xff'}) + + In general, the standard initialization is preferred, but + some approaches might favor this API, like when using + :meth:`toolz.functoolz.pipe`.
+ + :: + + return eth_utils.toolz.pipe( + my_dict, + normalize, + validate, + MyRLP.from_dict, + ) + + :param dict field_dict: the dictionary of values to initialize with + :returns: the new rlp object + :rtype: HashableRLP + """ + return cls(**field_dict) + + @classmethod + def from_bytes(cls, serialized_bytes): + """ + Shorthand invocation for :meth:`rlp.decode` using this class. + + :param bytes serialized_bytes: the byte string to decode + :return: the decoded object + :rtype: HashableRLP + """ + return rlp.decode(serialized_bytes, cls) + + def hash(self): + """ + :returns: the hash of the encoded bytestring + :rtype: ~hexbytes.main.HexBytes + """ + return pipe( + self, + rlp.encode, + keccak, + HexBytes, + ) + + def __iter__(self): + if hasattr(self, 'fields'): + return iter(getattr(self, field) for field, _ in self.fields) + else: + return super().__iter__() + + def as_dict(self): + """ + Convert rlp object to a dict + + :returns: mapping of RLP field names to field values + :rtype: dict + """ + try: + return super().as_dict() + except AttributeError: + return vars(self) diff --git a/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/LICENSE new file mode 100644 index 00000000..d93175ab --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2019 The Ethereum Foundation + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
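Editor's note on the `eth_rlp/main.py` module above: a minimal usage sketch of `HashableRLP`, assuming only the `rlp` sedes the module already relies on. The `Transfer` class and its field names are hypothetical, chosen purely for illustration.

```python
# Hypothetical subclass exercising the HashableRLP API from eth_rlp/main.py.
import rlp

from eth_rlp import HashableRLP


class Transfer(HashableRLP):
    fields = (
        ('nonce', rlp.sedes.big_endian_int),   # big-endian unsigned int
        ('payload', rlp.sedes.binary),         # raw bytes
    )


t = Transfer.from_dict({'nonce': 1, 'payload': b'\xff'})
assert list(t) == [1, b'\xff']                 # iteration follows RLP field order
assert t.as_dict() == {'nonce': 1, 'payload': b'\xff'}

decoded = Transfer.from_bytes(rlp.encode(t))   # round-trip through RLP encoding
print(decoded.hash())                          # keccak of the encoding, as HexBytes
```

The `.hash()` method is what makes the class convenient for content-addressed objects: two instances with the same field values encode, and therefore hash, identically.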
diff --git a/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/METADATA b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/METADATA new file mode 100644 index 00000000..b4f60ac5 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/METADATA @@ -0,0 +1,149 @@ +Metadata-Version: 2.1 +Name: eth-typing +Version: 2.3.0 +Summary: eth-typing: Common type annotations for ethereum python packages +Home-page: https://github.com/ethereum/eth-typing +Author: The Ethereum Foundation +Author-email: snakecharmers@ethereum.org +License: MIT +Keywords: ethereum +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Natural Language :: English +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: Implementation :: PyPy +Requires-Python: >=3.5, <4 +Description-Content-Type: text/markdown +Provides-Extra: dev +Requires-Dist: bumpversion (<1,>=0.5.3) ; extra == 'dev' +Requires-Dist: pytest-watch (<5,>=4.1.0) ; extra == 'dev' +Requires-Dist: wheel ; extra == 'dev' +Requires-Dist: twine ; extra == 'dev' +Requires-Dist: ipython ; extra == 'dev' +Requires-Dist: pytest (<4.5,>=4.4) ; extra == 'dev' +Requires-Dist: pytest-xdist ; extra == 'dev' +Requires-Dist: tox (<3,>=2.9.1) ; extra == 'dev' +Requires-Dist: flake8 (==3.8.3) ; extra == 'dev' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'dev' +Requires-Dist: mypy (==0.782) ; extra == 'dev' +Requires-Dist: pydocstyle (<4,>=3.0.0) ; extra == 'dev' +Requires-Dist: Sphinx (<2,>=1.6.5) ; extra == 'dev' +Requires-Dist: sphinx-rtd-theme (>=0.1.9) ; extra == 'dev' +Provides-Extra: doc +Requires-Dist: Sphinx (<2,>=1.6.5) ; extra == 'doc' +Requires-Dist: sphinx-rtd-theme (>=0.1.9) ; extra == 'doc' +Provides-Extra: lint +Requires-Dist: flake8 (==3.8.3) ; extra == 'lint' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'lint' +Requires-Dist: mypy (==0.782) ; extra == 'lint' +Requires-Dist: pydocstyle (<4,>=3.0.0) ; extra == 'lint' +Provides-Extra: test +Requires-Dist: pytest (<4.5,>=4.4) ; extra == 'test' +Requires-Dist: pytest-xdist ; extra == 'test' +Requires-Dist: tox (<3,>=2.9.1) ; extra == 'test' + +# eth-typing + +[![Join the chat at https://gitter.im/ethereum/eth-typing](https://badges.gitter.im/ethereum/eth-typing.svg)](https://gitter.im/ethereum/eth-typing?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +[![Build Status](https://circleci.com/gh/ethereum/eth-typing.svg?style=shield)](https://circleci.com/gh/ethereum/eth-typing) +[![PyPI version](https://badge.fury.io/py/eth-typing.svg)](https://badge.fury.io/py/eth-typing) +[![Python versions](https://img.shields.io/pypi/pyversions/eth-typing.svg)](https://pypi.python.org/pypi/eth-typing) +[![Docs build](https://readthedocs.org/projects/eth-typing/badge/?version=latest)](http://eth-typing.readthedocs.io/en/latest/?badge=latest) + + +Common type annotations for ethereum python packages. + +Read more in the [documentation on ReadTheDocs](https://eth-typing.readthedocs.io/). [View the change log](https://eth-typing.readthedocs.io/en/latest/releases.html). 
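Since the package is essentially a collection of type aliases, one hedged sketch shows most of it. The function below is hypothetical; `Address`, `BlockNumber`, and `HexStr` are the `NewType` aliases defined in the `eth_typing` modules vendored later in this diff.

```python
# Hypothetical function annotated with eth_typing's NewType aliases.
from eth_typing import Address, BlockNumber, HexStr


def storage_slot(address: Address, block: BlockNumber) -> HexStr:
    # NewType is free at runtime: Address(b) simply returns b, but mypy
    # will flag callers that pass plain bytes or int where an Address or
    # BlockNumber is expected.
    return HexStr('0x' + address.hex() + hex(block)[2:])


slot = storage_slot(Address(b'\x00' * 20), BlockNumber(14_000_000))
```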
+ +## Quickstart + +```sh +pip install eth-typing +``` + +## Developer Setup + +If you would like to hack on eth-typing, please check out the [Snake Charmers +Tactical Manual](https://github.com/ethereum/snake-charmers-tactical-manual) +for information on how we do: + +- Testing +- Pull Requests +- Code Style +- Documentation + +### Development Environment Setup + +You can set up your dev environment with: + +```sh +git clone git@github.com:ethereum/eth-typing.git +cd eth-typing +virtualenv -p python3 venv +. venv/bin/activate +pip install -e .[dev] +``` + +### Testing Setup + +During development, you might like to have tests run on every file save. + +Show flake8 errors on file change: + +```sh +# Test flake8 +when-changed -v -s -r -1 eth_typing/ tests/ -c "clear; flake8 eth_typing tests && echo 'flake8 success' || echo 'error'" +``` + +Run multi-process tests in one command, but without color: + +```sh +# in the project root: +pytest --numprocesses=4 --looponfail --maxfail=1 +# the same thing, succinctly: +pytest -n 4 -f --maxfail=1 +``` + +Run in one thread, with color and desktop notifications: + +```sh +cd venv +ptw --onfail "notify-send -t 5000 'Test failure ⚠⚠⚠⚠⚠' 'python 3 test on eth-typing failed'" ../tests ../eth_typing +``` + +### Release setup + +For Debian-like systems: +``` +apt install pandoc +``` + +To release a new version: + +```sh +make release bump=$$VERSION_PART_TO_BUMP$$ +``` + +#### How to bumpversion + +The version format for this repo is `{major}.{minor}.{patch}` for stable, and +`{major}.{minor}.{patch}-{stage}.{devnum}` for unstable (`stage` can be alpha or beta). + +To issue the next version in line, specify which part to bump, +like `make release bump=minor` or `make release bump=devnum`. This is typically done from the +master branch, except when releasing a beta (in which case the beta is released from master, +and the previous stable branch is released from said branch). To include changes made with each +release, update "docs/releases.rst" with the changes, and apply commit directly to master +before release. + +If you are in a beta version, `make release bump=stage` will switch to a stable. 
+ +To issue an unstable version when the current version is stable, specify the +new version explicitly, like `make release bump="--new-version 4.0.0-alpha.1 devnum"` + + diff --git a/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/RECORD b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/RECORD new file mode 100644 index 00000000..892d67ef --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/RECORD @@ -0,0 +1,23 @@ +eth_typing-2.3.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +eth_typing-2.3.0.dist-info/LICENSE,sha256=cM36Yl4EdfyKjk6yN6mo39G3XAYuucKNjjwoeh6tKuI,1090 +eth_typing-2.3.0.dist-info/METADATA,sha256=yudFxHfJKHH5FKH2BJWbbDahCxK2uKgPaOBePZfk5Rk,5266 +eth_typing-2.3.0.dist-info/RECORD,, +eth_typing-2.3.0.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +eth_typing-2.3.0.dist-info/top_level.txt,sha256=jlkaPMCqCyRUeU0WmgKjMDfYO72vgxOnhVV1SocbovU,11 +eth_typing/__init__.py,sha256=kUMkF7pKGA9QKLkUNchcif_ghQPfVrEnYwvN5vX-HDw,532 +eth_typing/__pycache__/__init__.cpython-38.pyc,, +eth_typing/__pycache__/abi.cpython-38.pyc,, +eth_typing/__pycache__/bls.cpython-38.pyc,, +eth_typing/__pycache__/discovery.cpython-38.pyc,, +eth_typing/__pycache__/encoding.cpython-38.pyc,, +eth_typing/__pycache__/enums.cpython-38.pyc,, +eth_typing/__pycache__/ethpm.cpython-38.pyc,, +eth_typing/__pycache__/evm.cpython-38.pyc,, +eth_typing/abi.py,sha256=kGqws8LwEauRbdgxonXq1xhw13Cr_nucn2msTPXfgk4,85 +eth_typing/bls.py,sha256=oXtZl6X977aN0vN-eAh02JrpfEaV1-YmswqWyMOc_J0,145 +eth_typing/discovery.py,sha256=0H-tbsb-8B-hjwuv0rTRzlpkcpPvqPsyvOaH2IfLLgg,71 +eth_typing/encoding.py,sha256=7s3sw9o1K2pi9M5cPr84d3SamAxrCgQLXhRL0fX-wBM,117 +eth_typing/enums.py,sha256=3_io0eOv8vCbI8XXyCY7zHf0CgSQiomV62_ke8Pc5i0,358 +eth_typing/ethpm.py,sha256=yrXmpthtYkT87o0ggpgoJJh_pskF7qbE5HwRvmLAGpc,173 +eth_typing/evm.py,sha256=_jQlVy6SNgNhP1UwxQV6BYQfv2-T_gBrOEOVWBCOMGw,431 +eth_typing/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 diff --git a/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/top_level.txt b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/top_level.txt new file mode 100644 index 00000000..d82f98e4 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing-2.3.0.dist-info/top_level.txt @@ -0,0 +1 @@ +eth_typing diff --git a/env/lib/python3.8/site-packages/eth_typing/__init__.py b/env/lib/python3.8/site-packages/eth_typing/__init__.py new file mode 100644 index 00000000..29d13fcc --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing/__init__.py @@ -0,0 +1,32 @@ +from .abi import ( # noqa: F401 + Decodable, + TypeStr, +) +from .bls import ( # noqa: F401 + BLSPubkey, + BLSSignature, +) +from .discovery import ( # noqa: F401 + NodeID, +) +from .encoding import ( # noqa: F401 + HexStr, + Primitives, +) +from .enums import ( # noqa: F401 + ForkName, +) +from .ethpm import ( # noqa: F401 + URI, + ContractName, + Manifest, +) +from .evm import ( # noqa: F401 + Address, + AnyAddress, + BlockIdentifier, + BlockNumber, + ChecksumAddress, + Hash32, + HexAddress, +) diff --git 
a/env/lib/python3.8/site-packages/eth_typing/abi.py b/env/lib/python3.8/site-packages/eth_typing/abi.py new file mode 100644 index 00000000..0ac42518 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing/abi.py @@ -0,0 +1,6 @@ +from typing import ( + Union, +) + +TypeStr = str +Decodable = Union[bytes, bytearray] diff --git a/env/lib/python3.8/site-packages/eth_typing/bls.py b/env/lib/python3.8/site-packages/eth_typing/bls.py new file mode 100644 index 00000000..f2ef4134 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing/bls.py @@ -0,0 +1,6 @@ +from typing import ( + NewType, +) + +BLSPubkey = NewType('BLSPubkey', bytes) # bytes48 +BLSSignature = NewType('BLSSignature', bytes) # bytes96 diff --git a/env/lib/python3.8/site-packages/eth_typing/discovery.py b/env/lib/python3.8/site-packages/eth_typing/discovery.py new file mode 100644 index 00000000..80bfe215 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing/discovery.py @@ -0,0 +1,5 @@ +from typing import ( + NewType, +) + +NodeID = NewType("NodeID", bytes) diff --git a/env/lib/python3.8/site-packages/eth_typing/encoding.py b/env/lib/python3.8/site-packages/eth_typing/encoding.py new file mode 100644 index 00000000..393617e0 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing/encoding.py @@ -0,0 +1,7 @@ +from typing import ( + NewType, + Union, +) + +HexStr = NewType('HexStr', str) +Primitives = Union[bytes, int, bool] diff --git a/env/lib/python3.8/site-packages/eth_typing/enums.py b/env/lib/python3.8/site-packages/eth_typing/enums.py new file mode 100644 index 00000000..86a8388e --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing/enums.py @@ -0,0 +1,13 @@ +class ForkName: + Frontier = 'Frontier' + Homestead = 'Homestead' + EIP150 = 'EIP150' + EIP158 = 'EIP158' + Byzantium = 'Byzantium' + Constantinople = 'Constantinople' + Metropolis = 'Metropolis' + ConstantinopleFix = 'ConstantinopleFix' + Istanbul = 'Istanbul' + Berlin = 'Berlin' + London = 'London' + ArrowGlacier = 'ArrowGlacier' diff --git a/env/lib/python3.8/site-packages/eth_typing/ethpm.py b/env/lib/python3.8/site-packages/eth_typing/ethpm.py new file mode 100644 index 00000000..30189865 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing/ethpm.py @@ -0,0 +1,9 @@ +from typing import ( + Any, + Dict, + NewType, +) + +ContractName = NewType('ContractName', str) +Manifest = NewType('Manifest', Dict[str, Any]) +URI = NewType('URI', str) diff --git a/env/lib/python3.8/site-packages/eth_typing/evm.py b/env/lib/python3.8/site-packages/eth_typing/evm.py new file mode 100644 index 00000000..55f4391a --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_typing/evm.py @@ -0,0 +1,18 @@ +from typing import ( + NewType, + TypeVar, + Union, +) + +from .encoding import ( + HexStr, +) + +Hash32 = NewType('Hash32', bytes) +BlockNumber = NewType('BlockNumber', int) +BlockIdentifier = Union[BlockNumber, Hash32] + +Address = NewType('Address', bytes) +HexAddress = NewType('HexAddress', HexStr) +ChecksumAddress = NewType('ChecksumAddress', HexAddress) +AnyAddress = TypeVar('AnyAddress', Address, HexAddress, ChecksumAddress) diff --git a/env/lib/python3.8/site-packages/eth_typing/py.typed b/env/lib/python3.8/site-packages/eth_typing/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/INSTALLER b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ 
b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/LICENSE b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/LICENSE new file mode 100644 index 00000000..5e8bed12 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2017 Piper Merriam + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/METADATA b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/METADATA new file mode 100644 index 00000000..7360a2ba --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/METADATA @@ -0,0 +1,154 @@ +Metadata-Version: 2.1 +Name: eth-utils +Version: 1.9.5 +Summary: Common utility functions for ethereum codebases. 
+Home-page: https://github.com/ethereum/eth_utils +Author: Piper Merriam +Author-email: pipermerriam@gmail.com +License: MIT +Keywords: ethereum +Platform: UNKNOWN +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Natural Language :: English +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: Implementation :: PyPy +Requires-Python: >=3.5,!=3.5.2,<4 +Description-Content-Type: text/markdown +Requires-Dist: eth-hash (<1.0.0,>=0.1.0) +Requires-Dist: eth-typing (<3.0.0,>=2.2.1) +Requires-Dist: cytoolz (<1.0.0,>=0.10.1) ; implementation_name == "cpython" +Requires-Dist: toolz (<1,>0.8.2) ; implementation_name == "pypy" +Provides-Extra: deploy +Requires-Dist: bumpversion (<1.0.0,>=0.5.3) ; extra == 'deploy' +Requires-Dist: tox (<3.0.0,>=2.9.1) ; extra == 'deploy' +Requires-Dist: wheel (<1.0.0,>=0.30.0) ; extra == 'deploy' +Provides-Extra: dev +Requires-Dist: twine (<2,>=1.13) ; extra == 'dev' +Requires-Dist: Sphinx (<2,>=1.5.5) ; extra == 'dev' +Requires-Dist: sphinx-rtd-theme (<2,>=0.1.9) ; extra == 'dev' +Requires-Dist: towncrier (<20,>=19.2.0) ; extra == 'dev' +Requires-Dist: black (<19,>=18.6b4) ; extra == 'dev' +Requires-Dist: flake8 (<4.0.0,>=3.7.0) ; extra == 'dev' +Requires-Dist: isort (==4.3.18) ; extra == 'dev' +Requires-Dist: mypy (==0.720) ; extra == 'dev' +Requires-Dist: pytest (<4.0.0,>=3.4.1) ; extra == 'dev' +Requires-Dist: hypothesis (<5.0.0,>=4.43.0) ; extra == 'dev' +Requires-Dist: pytest-pythonpath (<1.0,>=0.3) ; extra == 'dev' +Requires-Dist: bumpversion (<1.0.0,>=0.5.3) ; extra == 'dev' +Requires-Dist: tox (<3.0.0,>=2.9.1) ; extra == 'dev' +Requires-Dist: wheel (<1.0.0,>=0.30.0) ; extra == 'dev' +Provides-Extra: doc +Requires-Dist: Sphinx (<2,>=1.5.5) ; extra == 'doc' +Requires-Dist: sphinx-rtd-theme (<2,>=0.1.9) ; extra == 'doc' +Requires-Dist: towncrier (<20,>=19.2.0) ; extra == 'doc' +Provides-Extra: lint +Requires-Dist: black (<19,>=18.6b4) ; extra == 'lint' +Requires-Dist: flake8 (<4.0.0,>=3.7.0) ; extra == 'lint' +Requires-Dist: isort (==4.3.18) ; extra == 'lint' +Requires-Dist: mypy (==0.720) ; extra == 'lint' +Requires-Dist: pytest (<4.0.0,>=3.4.1) ; extra == 'lint' +Provides-Extra: test +Requires-Dist: hypothesis (<5.0.0,>=4.43.0) ; extra == 'test' +Requires-Dist: pytest (<4.0.0,>=3.4.1) ; extra == 'test' +Requires-Dist: pytest-pythonpath (<1.0,>=0.3) ; extra == 'test' + +# Ethereum Utils + +[![Join the chat at https://gitter.im/ethereum/eth-utils](https://badges.gitter.im/ethereum/eth-utils.svg)](https://gitter.im/ethereum/eth-utils) + +[![Build Status](https://circleci.com/gh/ethereum/eth-utils.svg?style=shield)](https://circleci.com/gh/ethereum/eth-utils) + +[Documentation hosted by ReadTheDocs](https://eth-utils.readthedocs.io/en/latest/) + +Common utility functions for codebases which interact with ethereum. + +> This library and repository was previously located at https://github.com/pipermerriam/ethereum-utils. It was transferred to the Ethereum foundation github in November 2017 and renamed to `eth-utils`. The PyPi package was also renamed from `ethereum-utils` to `eth-utils`. 
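Before the vendored sources below, a quick taste of the API (an editor's sketch, not from the upstream README); every function used here is defined in the `eth_utils` modules that follow in this diff.

```python
# A small sample of eth_utils helpers vendored below.
from eth_utils import (
    from_wei,
    is_checksum_address,
    keccak,
    to_checksum_address,
    to_wei,
)

assert len(keccak(text='hello')) == 32        # keccak-256 digest
assert to_wei(1, 'gwei') == 10 ** 9           # unit conversion to wei
assert from_wei(10 ** 18, 'ether') == 1       # and back (returns a Decimal)

checksummed = to_checksum_address('0x' + 'ab' * 20)  # EIP-55 mixed-case form
assert is_checksum_address(checksummed)
```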
+ + +## Installation + +```sh +pip install eth-utils +``` + +## Development + +Clone the repository and then run: + +```sh +pip install -e .[dev] eth-hash[pycryptodome] +``` + +## Documentation + +Building Sphinx docs locally: + +```sh +pip install -e .[doc] +cd docs +make html +``` + +Docs are written in [reStructuredText](http://docutils.sourceforge.net/rst.html) and built using the [Sphinx](http://www.sphinx-doc.org/) documentation generator. + + +### Running the tests + +You can run the tests with: + +```sh +py.test tests +``` + +Or you can install `tox` to run the full test suite. + + +### Releasing + +Pandoc is required for transforming the markdown README to the proper format to +render correctly on pypi. + +For Debian-like systems: + +``` +apt install pandoc +``` + +Or on OSX: + +```sh +brew install pandoc +``` + +To release a new version: + +```sh +make release bump=$$VERSION_PART_TO_BUMP$$ +``` +To preview the upcoming release notes: + +```sh +towncrier --draft +``` + + + +#### How to bumpversion + +The version format for this repo is `{major}.{minor}.{patch}` for stable, and +`{major}.{minor}.{patch}-{stage}.{devnum}` for unstable (`stage` can be alpha or beta). + +To issue the next version in line, specify which part to bump, +like `make release bump=minor` or `make release bump=devnum`. + +If you are in a beta version, `make release bump=stage` will switch to a stable. + +To issue an unstable version when the current version is stable, specify the +new version explicitly, like `make release bump="--new-version 4.0.0-alpha.1 devnum"` + + diff --git a/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/RECORD b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/RECORD new file mode 100644 index 00000000..0047974b --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/RECORD @@ -0,0 +1,55 @@ +eth_utils-1.9.5.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +eth_utils-1.9.5.dist-info/LICENSE,sha256=MoCQbeU4kpbrnL8X5yIsS0_2HrCKEDJImUJ_ZjlTfo0,1080 +eth_utils-1.9.5.dist-info/METADATA,sha256=jiwiyZloD8brnBXLRFOoeBLg06kSXs_bWbyOfhXwlTo,4912 +eth_utils-1.9.5.dist-info/RECORD,, +eth_utils-1.9.5.dist-info/WHEEL,sha256=EVRjI69F5qVjm_YgqcTXPnTAv3BfSUr0WVAHuSP3Xoo,92 +eth_utils-1.9.5.dist-info/top_level.txt,sha256=LEemVHaEdiFC-weYl-xj_8RIiLlOXYYbBXoRYI7iSTU,10 +eth_utils/__init__.py,sha256=hOcDYpFqlWCgoa-Xe-nGGEs0NbSzJvM9W14HtOx_x_8,2642 +eth_utils/__main__.py,sha256=fhPVOBu0ScDlTC5eaBLLAoKTfdqYxpWt9ColEdosM_E,77 +eth_utils/__pycache__/__init__.cpython-38.pyc,, +eth_utils/__pycache__/__main__.cpython-38.pyc,, +eth_utils/__pycache__/abi.cpython-38.pyc,, +eth_utils/__pycache__/address.cpython-38.pyc,, +eth_utils/__pycache__/applicators.cpython-38.pyc,, +eth_utils/__pycache__/conversions.cpython-38.pyc,, +eth_utils/__pycache__/crypto.cpython-38.pyc,, +eth_utils/__pycache__/currency.cpython-38.pyc,, +eth_utils/__pycache__/debug.cpython-38.pyc,, +eth_utils/__pycache__/decorators.cpython-38.pyc,, +eth_utils/__pycache__/encoding.cpython-38.pyc,, +eth_utils/__pycache__/exceptions.cpython-38.pyc,, +eth_utils/__pycache__/functional.cpython-38.pyc,, +eth_utils/__pycache__/hexadecimal.cpython-38.pyc,, +eth_utils/__pycache__/humanize.cpython-38.pyc,, +eth_utils/__pycache__/logging.cpython-38.pyc,, +eth_utils/__pycache__/module_loading.cpython-38.pyc,, +eth_utils/__pycache__/numeric.cpython-38.pyc,, +eth_utils/__pycache__/toolz.cpython-38.pyc,, +eth_utils/__pycache__/types.cpython-38.pyc,, +eth_utils/__pycache__/units.cpython-38.pyc,, 
+eth_utils/abi.py,sha256=V0OQeVCb5pUPOR__gSVsQ5DpuJrrECg2kfBlLlVQ7rs,1929 +eth_utils/address.py,sha256=ZXZ6yfzkL7-aA7vj_zMsveKojs8D_33MRt2NPycMkaw,4077 +eth_utils/applicators.py,sha256=BOf1dgfAZdnG7gwlb4diBdzPVyqTH-6UDPX2TIM3IFM,4035 +eth_utils/conversions.py,sha256=H67hGv4FHs1_oVMmhsgtlapcYeBPXxcVexP1HFKR2bA,5355 +eth_utils/crypto.py,sha256=VkdcQG45nXg3GsQtJ2E56RsztlqXuoyXo2ZOO1oz99o,275 +eth_utils/currency.py,sha256=45pUtrUZy70QNdYTxV0nQ527hiQiUaoNgS8mmUdQ54k,3052 +eth_utils/curried/__init__.py,sha256=FmWIrbXU4m8rP2xEPK2kamoPt-IAJ3BD3pfI1GmZLkk,6168 +eth_utils/curried/__pycache__/__init__.cpython-38.pyc,, +eth_utils/debug.py,sha256=wH9EEVlk5NA312b9QV0t0SISSBpJGkPYKhhbd0RpVQ4,485 +eth_utils/decorators.py,sha256=qQa3XiHfkFKzgypSDEwBhJ64pRRZ4zQ6fCRgcxCH6FU,4070 +eth_utils/encoding.py,sha256=1qfDeuinLZ01XjYgknpm_p9LuWwaYvicYkYI8mS1iMc,199 +eth_utils/exceptions.py,sha256=WCzTv4HjMI3vauJoKAXsGSHvvhB24QLE4aNptiVVz8s,120 +eth_utils/functional.py,sha256=2meYMkFKoFk2HnPi5WltATfMrVtMENhhWQwI0SjvA6c,2091 +eth_utils/hexadecimal.py,sha256=5djjPD5HglneamyuRsBawLf1gpPRQyymY0xtJn1HKB4,3304 +eth_utils/humanize.py,sha256=Vll5XmoJ3mEqqgs5XN2u4YUOHko5FfvTnT0fIyRSBM0,3909 +eth_utils/logging.py,sha256=3qKLYHTv1yxWj-GpRNH6fNjzK4quz20I-6MkpaDcze0,5495 +eth_utils/module_loading.py,sha256=iK5d6rmb7dgnKtPd-8E5Hqo06WPdO0fWZXw-JCWGJL8,780 +eth_utils/numeric.py,sha256=KR40RA2lfoF83nyIq9UcJw7M9NEggvMsEzlq12GN5ag,1162 +eth_utils/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +eth_utils/toolz.py,sha256=aFaH9KtqrYM14gFFY7UPA-FF18Utly8eX3o1Cotm2kQ,2663 +eth_utils/types.py,sha256=MfIb8sW5bt303wIznDAK6JJRuJ_g9vK3s8eq6W6ir9c,1065 +eth_utils/typing/__init__.py,sha256=NpFHzOs3B3YpviD6mRljTFohoc2koBfFPBgbZwJpP84,339 +eth_utils/typing/__pycache__/__init__.cpython-38.pyc,, +eth_utils/typing/__pycache__/misc.cpython-38.pyc,, +eth_utils/typing/misc.py,sha256=V9XVxw1OYHlBFtKxA29jq8rwmiyLGTbL1SwIIDjY3w0,181 +eth_utils/units.py,sha256=jRo8p6trxwuISBnT8kfxTNVyd_TSd5vVY5aiKDefB1U,1757 diff --git a/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/WHEEL b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/WHEEL new file mode 100644 index 00000000..83ff02e9 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.35.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/top_level.txt b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/top_level.txt new file mode 100644 index 00000000..0e9ce328 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils-1.9.5.dist-info/top_level.txt @@ -0,0 +1 @@ +eth_utils diff --git a/env/lib/python3.8/site-packages/eth_utils/__init__.py b/env/lib/python3.8/site-packages/eth_utils/__init__.py new file mode 100644 index 00000000..876f8165 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/__init__.py @@ -0,0 +1,112 @@ +import sys +import warnings + +import pkg_resources + +from .abi import ( # noqa: F401 + event_abi_to_log_topic, + event_signature_to_log_topic, + function_abi_to_4byte_selector, + function_signature_to_4byte_selector, +) +from .address import ( # noqa: F401 + is_address, + is_binary_address, + is_canonical_address, + is_checksum_address, + is_checksum_formatted_address, + is_hex_address, + is_normalized_address, + is_same_address, + to_canonical_address, + to_checksum_address, + to_normalized_address, +) +from .applicators import ( # noqa: F401 + apply_formatter_at_index, + 
apply_formatter_if, + apply_formatter_to_array, + apply_formatters_to_dict, + apply_formatters_to_sequence, + apply_key_map, + apply_one_of_formatters, + combine_argument_formatters, +) +from .conversions import ( # noqa: F401 + hexstr_if_str, + text_if_str, + to_bytes, + to_hex, + to_int, + to_text, +) +from .crypto import keccak # noqa: F401 +from .currency import denoms, from_wei, to_wei # noqa: F401 +from .decorators import combomethod, replace_exceptions # noqa: F401 +from .encoding import big_endian_to_int, int_to_big_endian # noqa: F401 +from .exceptions import ValidationError # noqa: F401 +from .functional import ( # noqa: F401 + apply_to_return_value, + flatten_return, + reversed_return, + sort_return, + to_dict, + to_list, + to_ordered_dict, + to_set, + to_tuple, +) +from .hexadecimal import ( # noqa: F401 + add_0x_prefix, + decode_hex, + encode_hex, + is_0x_prefixed, + is_hex, + is_hexstr, + remove_0x_prefix, +) +from .humanize import ( # noqa: F401 + humanize_bytes, + humanize_hash, + humanize_integer_sequence, + humanize_ipfs_uri, + humanize_seconds, +) +from .logging import ( # noqa: F401 + DEBUG2_LEVEL_NUM, + ExtendedDebugLogger, + HasExtendedDebugLogger, + HasExtendedDebugLoggerMeta, + HasLogger, + HasLoggerMeta, + get_extended_debug_logger, + get_logger, + setup_DEBUG2_logging, +) +from .module_loading import import_string # noqa: F401 +from .numeric import clamp # noqa: F401 +from .types import ( # noqa: F401 + is_boolean, + is_bytes, + is_dict, + is_integer, + is_list, + is_list_like, + is_null, + is_number, + is_string, + is_text, + is_tuple, +) + +if sys.version_info.major < 3: + warnings.simplefilter("always", DeprecationWarning) + warnings.warn( + DeprecationWarning( + "The `eth-utils` library has dropped support for Python 2. Upgrade to Python 3." + ) + ) + warnings.resetwarnings() + + +__version__ = pkg_resources.get_distribution("eth-utils").version diff --git a/env/lib/python3.8/site-packages/eth_utils/__main__.py b/env/lib/python3.8/site-packages/eth_utils/__main__.py new file mode 100644 index 00000000..cb9bc0b6 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/__main__.py @@ -0,0 +1,3 @@ +from .debug import get_environment_summary + +print(get_environment_summary()) diff --git a/env/lib/python3.8/site-packages/eth_utils/abi.py b/env/lib/python3.8/site-packages/eth_utils/abi.py new file mode 100644 index 00000000..e1c2181b --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/abi.py @@ -0,0 +1,60 @@ +from typing import Any, Dict + +from .crypto import keccak + + +def collapse_if_tuple(abi: Dict[str, Any]) -> str: + """Converts a tuple from a dict to a parenthesized list of its types. + + >>> from eth_utils.abi import collapse_if_tuple + >>> collapse_if_tuple( + ... { + ... 'components': [ + ... {'name': 'anAddress', 'type': 'address'}, + ... {'name': 'anInt', 'type': 'uint256'}, + ... {'name': 'someBytes', 'type': 'bytes'}, + ... ], + ... 'type': 'tuple', + ... } + ... ) + '(address,uint256,bytes)' + """ + typ = abi["type"] + if not typ.startswith("tuple"): + return typ + + delimited = ",".join(collapse_if_tuple(c) for c in abi["components"]) + # Whatever comes after "tuple" is the array dims. The ABI spec states that + # this will have the form "", "[]", or "[k]". 
+ array_dim = typ[5:] + collapsed = "({}){}".format(delimited, array_dim) + + return collapsed + + +def _abi_to_signature(abi: Dict[str, Any]) -> str: + function_signature = "{fn_name}({fn_input_types})".format( + fn_name=abi["name"], + fn_input_types=",".join( + [collapse_if_tuple(abi_input) for abi_input in abi.get("inputs", [])] + ), + ) + return function_signature + + +def function_signature_to_4byte_selector(event_signature: str) -> bytes: + return keccak(text=event_signature.replace(" ", ""))[:4] + + +def function_abi_to_4byte_selector(function_abi: Dict[str, Any]) -> bytes: + function_signature = _abi_to_signature(function_abi) + return function_signature_to_4byte_selector(function_signature) + + +def event_signature_to_log_topic(event_signature: str) -> bytes: + return keccak(text=event_signature.replace(" ", "")) + + +def event_abi_to_log_topic(event_abi: Dict[str, Any]) -> bytes: + event_signature = _abi_to_signature(event_abi) + return event_signature_to_log_topic(event_signature) diff --git a/env/lib/python3.8/site-packages/eth_utils/address.py b/env/lib/python3.8/site-packages/eth_utils/address.py new file mode 100644 index 00000000..59e834c2 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/address.py @@ -0,0 +1,150 @@ +from typing import Any, AnyStr + +from eth_typing import Address, AnyAddress, ChecksumAddress, HexAddress, HexStr + +from .conversions import hexstr_if_str, to_hex +from .crypto import keccak +from .hexadecimal import ( + add_0x_prefix, + decode_hex, + encode_hex, + is_hexstr, + remove_0x_prefix, +) +from .types import is_bytes, is_text + + +def is_hex_address(value: Any) -> bool: + """ + Checks if the given string of text type is an address in hexadecimal encoded form. + """ + if not is_hexstr(value): + return False + else: + unprefixed = remove_0x_prefix(value) + return len(unprefixed) == 40 + + +def is_binary_address(value: Any) -> bool: + """ + Checks if the given string is an address in raw bytes form. + """ + if not is_bytes(value): + return False + elif len(value) != 20: + return False + else: + return True + + +def is_address(value: Any) -> bool: + """ + Checks if the given string in a supported value + is an address in any of the known formats. + """ + if is_checksum_formatted_address(value): + return is_checksum_address(value) + elif is_hex_address(value): + return True + elif is_binary_address(value): + return True + else: + return False + + +def to_normalized_address(value: AnyStr) -> HexAddress: + """ + Converts an address to its normalized hexadecimal representation. + """ + try: + hex_address = hexstr_if_str(to_hex, value).lower() + except AttributeError: + raise TypeError( + "Value must be any string, instead got type {}".format(type(value)) + ) + if is_address(hex_address): + return HexAddress(HexStr(hex_address)) + else: + raise ValueError( + "Unknown format {}, attempted to normalize to {}".format(value, hex_address) + ) + + +def is_normalized_address(value: Any) -> bool: + """ + Returns whether the provided value is an address in its normalized form. + """ + if not is_address(value): + return False + else: + return value == to_normalized_address(value) + + +def to_canonical_address(address: AnyStr) -> Address: + """ + Given any supported representation of an address + returns its canonical form (20 byte long string). + """ + return Address(decode_hex(to_normalized_address(address))) + + +def is_canonical_address(address: Any) -> bool: + """ + Returns `True` if the `value` is an address in its canonical form. 
+ """ + if not is_bytes(address) or len(address) != 20: + return False + return address == to_canonical_address(address) + + +def is_same_address(left: AnyAddress, right: AnyAddress) -> bool: + """ + Checks if both addresses are same or not. + """ + if not is_address(left) or not is_address(right): + raise ValueError("Both values must be valid addresses") + else: + return to_normalized_address(left) == to_normalized_address(right) + + +def to_checksum_address(value: AnyStr) -> ChecksumAddress: + """ + Makes a checksum address given a supported format. + """ + norm_address = to_normalized_address(value) + address_hash = encode_hex(keccak(text=remove_0x_prefix(HexStr(norm_address)))) + + checksum_address = add_0x_prefix( + HexStr( + "".join( + ( + norm_address[i].upper() + if int(address_hash[i], 16) > 7 + else norm_address[i] + ) + for i in range(2, 42) + ) + ) + ) + return ChecksumAddress(HexAddress(checksum_address)) + + +def is_checksum_address(value: Any) -> bool: + if not is_text(value): + return False + + if not is_hex_address(value): + return False + return value == to_checksum_address(value) + + +def is_checksum_formatted_address(value: Any) -> bool: + + if not is_hex_address(value): + return False + elif remove_0x_prefix(value) == remove_0x_prefix(value).lower(): + return False + elif remove_0x_prefix(value) == remove_0x_prefix(value).upper(): + return False + else: + return True diff --git a/env/lib/python3.8/site-packages/eth_utils/applicators.py b/env/lib/python3.8/site-packages/eth_utils/applicators.py new file mode 100644 index 00000000..bd72e9f7 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/applicators.py @@ -0,0 +1,134 @@ +from typing import Any, Callable, Dict, Generator, List, Tuple +import warnings + +from .decorators import return_arg_type +from .functional import to_dict +from .toolz import compose, curry + +Formatters = Callable[[List[Any]], List[Any]] + + +@return_arg_type(2) +def apply_formatter_at_index( + formatter: Callable[..., Any], at_index: int, value: List[Any] +) -> Generator[List[Any], None, None]: + if at_index + 1 > len(value): + raise IndexError( + "Not enough values in iterable to apply formatter. Got: {0}. " + "Need: {1}".format(len(value), at_index + 1) + ) + for index, item in enumerate(value): + if index == at_index: + yield formatter(item) + else: + yield item + + +def combine_argument_formatters(*formatters: List[Callable[..., Any]]) -> Formatters: + warnings.warn( + DeprecationWarning( + "combine_argument_formatters(formatter1, formatter2)([item1, item2])" + "has been deprecated and will be removed in a subsequent major version " + "release of the eth-utils library. Update your calls to use " + "apply_formatters_to_sequence([formatter1, formatter2], [item1, item2]) " + "instead." 
+ ) + ) + + _formatter_at_index = curry(apply_formatter_at_index) + return compose( + *( + _formatter_at_index(formatter, index) + for index, formatter in enumerate(formatters) + ) + ) + + +@return_arg_type(1) +def apply_formatters_to_sequence( + formatters: List[Any], sequence: List[Any] +) -> Generator[List[Any], None, None]: + if len(formatters) > len(sequence): + raise IndexError( + "Too many formatters for sequence: {} formatters for {!r}".format( + len(formatters), sequence + ) + ) + elif len(formatters) < len(sequence): + raise IndexError( + "Too few formatters for sequence: {} formatters for {!r}".format( + len(formatters), sequence + ) + ) + else: + for formatter, item in zip(formatters, sequence): + yield formatter(item) + + +def apply_formatter_if( + condition: Callable[..., bool], formatter: Callable[..., Any], value: Any +) -> Any: + if condition(value): + return formatter(value) + else: + return value + + +@to_dict +def apply_formatters_to_dict( + formatters: Dict[Any, Any], value: Dict[Any, Any] +) -> Generator[Tuple[Any, Any], None, None]: + for key, item in value.items(): + if key in formatters: + try: + yield key, formatters[key](item) + except (TypeError, ValueError) as exc: + raise type(exc)( + "Could not format value %r as field %r" % (item, key) + ) from exc + else: + yield key, item + + +@return_arg_type(1) +def apply_formatter_to_array( + formatter: Callable[..., Any], value: List[Any] +) -> Generator[List[Any], None, None]: + for item in value: + yield formatter(item) + + +def apply_one_of_formatters( + formatter_condition_pairs: Tuple[Tuple[Callable[..., Any], Callable[..., Any]]], + value: Any, +) -> Any: + for condition, formatter in formatter_condition_pairs: + if condition(value): + return formatter(value) + else: + raise ValueError( + "The provided value did not satisfy any of the formatter conditions" + ) + + +@to_dict +def apply_key_map( + key_mappings: Dict[Any, Any], value: Dict[Any, Any] +) -> Generator[Tuple[Any, Any], None, None]: + key_conflicts = ( + set(value.keys()) + .difference(key_mappings.keys()) + .intersection(v for k, v in key_mappings.items() if v in value) + ) + if key_conflicts: + raise KeyError( + "Could not apply key map due to conflicting key(s): {}".format( + key_conflicts + ) + ) + + for key, item in value.items(): + if key in key_mappings: + yield key_mappings[key], item + else: + yield key, item diff --git a/env/lib/python3.8/site-packages/eth_utils/conversions.py b/env/lib/python3.8/site-packages/eth_utils/conversions.py new file mode 100644 index 00000000..7b81ea14 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/conversions.py @@ -0,0 +1,165 @@ +from typing import Callable, TypeVar, Union + +from eth_typing import HexStr, Primitives + +from .decorators import validate_conversion_arguments +from .encoding import big_endian_to_int, int_to_big_endian +from .hexadecimal import ( + add_0x_prefix, + decode_hex, + encode_hex, + is_hexstr, + remove_0x_prefix, +) +from .types import is_boolean, is_integer, is_string + +T = TypeVar("T") + + +@validate_conversion_arguments +def to_hex( + primitive: Primitives = None, hexstr: HexStr = None, text: str = None +) -> HexStr: + """ + Auto converts any supported value into its hex representation. 
+ Trims leading zeros, as defined in: + https://github.com/ethereum/wiki/wiki/JSON-RPC#hex-value-encoding + """ + if hexstr is not None: + return add_0x_prefix(HexStr(hexstr.lower())) + + if text is not None: + return encode_hex(text.encode("utf-8")) + + if is_boolean(primitive): + return HexStr("0x1") if primitive else HexStr("0x0") + + if isinstance(primitive, (bytes, bytearray)): + return encode_hex(primitive) + elif is_string(primitive): + raise TypeError( + "Unsupported type: The primitive argument must be one of: bytes," + "bytearray, int or bool and not str" + ) + + if is_integer(primitive): + return HexStr(hex(primitive)) + + raise TypeError( + "Unsupported type: '{0}'. Must be one of: bool, str, bytes, bytearray" + "or int.".format(repr(type(primitive))) + ) + + +@validate_conversion_arguments +def to_int( + primitive: Primitives = None, hexstr: HexStr = None, text: str = None +) -> int: + """ + Converts value to its integer representation. + Values are converted this way: + + * primitive: + + * bytes, bytearrays: big-endian integer + * bool: True => 1, False => 0 + * hexstr: interpret hex as integer + * text: interpret as string of digits, like '12' => 12 + """ + if hexstr is not None: + return int(hexstr, 16) + elif text is not None: + return int(text) + elif isinstance(primitive, (bytes, bytearray)): + return big_endian_to_int(primitive) + elif isinstance(primitive, str): + raise TypeError("Pass in strings with keyword hexstr or text") + elif isinstance(primitive, (int, bool)): + return int(primitive) + else: + raise TypeError( + "Invalid type. Expected one of int/bool/str/bytes/bytearray. Got " + "{0}".format(type(primitive)) + ) + + +@validate_conversion_arguments +def to_bytes( + primitive: Primitives = None, hexstr: HexStr = None, text: str = None +) -> bytes: + if is_boolean(primitive): + return b"\x01" if primitive else b"\x00" + elif isinstance(primitive, bytearray): + return bytes(primitive) + elif isinstance(primitive, bytes): + return primitive + elif is_integer(primitive): + return to_bytes(hexstr=to_hex(primitive)) + elif hexstr is not None: + if len(hexstr) % 2: + # type check ignored here because casting an Optional arg to str is not possible + hexstr = "0x0" + remove_0x_prefix(hexstr) # type: ignore + return decode_hex(hexstr) + elif text is not None: + return text.encode("utf-8") + raise TypeError( + "expected a bool, int, byte or bytearray in first arg, or keyword of hexstr or text" + ) + + +@validate_conversion_arguments +def to_text( + primitive: Primitives = None, hexstr: HexStr = None, text: str = None +) -> str: + if hexstr is not None: + return to_bytes(hexstr=hexstr).decode("utf-8") + elif text is not None: + return text + elif isinstance(primitive, str): + return to_text(hexstr=primitive) + elif isinstance(primitive, (bytes, bytearray)): + return primitive.decode("utf-8") + elif is_integer(primitive): + byte_encoding = int_to_big_endian(primitive) + return to_text(byte_encoding) + raise TypeError("Expected an int, bytes, bytearray or hexstr.") + + +def text_if_str( + to_type: Callable[..., T], text_or_primitive: Union[bytes, int, str] +) -> T: + """ + Convert to a type, assuming that strings can be only unicode text (not a hexstr) + + :param to_type function: takes the arguments (primitive, hexstr=hexstr, text=text), + eg~ to_bytes, to_text, to_hex, to_int, etc + :param text_or_primitive bytes, str, int: value to convert + """ + if isinstance(text_or_primitive, str): + return to_type(text=text_or_primitive) + else: + return to_type(text_or_primitive) + 
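An editor's sketch (not part of `conversions.py`) of the round-trips the converters above support; each call uses only the `to_hex`/`to_int`/`to_bytes`/`to_text` signatures defined in this module.

```python
# Round-trips through the conversion helpers defined above.
from eth_utils import to_bytes, to_hex, to_int, to_text

assert to_hex(255) == '0xff'                      # int -> hex string
assert to_int(hexstr='0xff') == 255               # hex string -> int
assert to_bytes(hexstr='0xff') == b'\xff'         # hex string -> bytes
assert to_hex(text='hello') == '0x68656c6c6f'     # text -> hex of its UTF-8
assert to_text(hexstr='0x68656c6c6f') == 'hello'  # and back
assert to_int(text='12') == 12                    # decimal digits -> int
```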
+ +def hexstr_if_str( + to_type: Callable[..., T], hexstr_or_primitive: Union[bytes, int, str] +) -> T: + """ + Convert to a type, assuming that strings can be only hexstr (not unicode text) + + :param to_type function: takes the arguments (primitive, hexstr=hexstr, text=text), + eg~ to_bytes, to_text, to_hex, to_int, etc + :param hexstr_or_primitive bytes, str, int: value to convert + """ + if isinstance(hexstr_or_primitive, str): + if remove_0x_prefix(HexStr(hexstr_or_primitive)) and not is_hexstr( + hexstr_or_primitive + ): + raise ValueError( + "when sending a str, it must be a hex string. Got: {0!r}".format( + hexstr_or_primitive + ) + ) + return to_type(hexstr=hexstr_or_primitive) + else: + return to_type(hexstr_or_primitive) diff --git a/env/lib/python3.8/site-packages/eth_utils/crypto.py b/env/lib/python3.8/site-packages/eth_utils/crypto.py new file mode 100644 index 00000000..b8308bc2 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/crypto.py @@ -0,0 +1,11 @@ +from typing import Union + +from eth_hash.auto import keccak as keccak_256 + +from .conversions import to_bytes + + +def keccak( + primitive: Union[bytes, int, bool] = None, hexstr: str = None, text: str = None +) -> bytes: + return keccak_256(to_bytes(primitive, hexstr, text)) diff --git a/env/lib/python3.8/site-packages/eth_utils/currency.py b/env/lib/python3.8/site-packages/eth_utils/currency.py new file mode 100644 index 00000000..30bc3129 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/currency.py @@ -0,0 +1,102 @@ +import decimal +from decimal import localcontext +from typing import Union + +from .types import is_integer, is_string +from .units import units + + +class denoms: + wei = int(units["wei"]) + kwei = int(units["kwei"]) + babbage = int(units["babbage"]) + femtoether = int(units["femtoether"]) + mwei = int(units["mwei"]) + lovelace = int(units["lovelace"]) + picoether = int(units["picoether"]) + gwei = int(units["gwei"]) + shannon = int(units["shannon"]) + nanoether = int(units["nanoether"]) + nano = int(units["nano"]) + szabo = int(units["szabo"]) + microether = int(units["microether"]) + micro = int(units["micro"]) + finney = int(units["finney"]) + milliether = int(units["milliether"]) + milli = int(units["milli"]) + ether = int(units["ether"]) + kether = int(units["kether"]) + grand = int(units["grand"]) + mether = int(units["mether"]) + gether = int(units["gether"]) + tether = int(units["tether"]) + + +MIN_WEI = 0 +MAX_WEI = 2 ** 256 - 1 + + +def from_wei(number: int, unit: str) -> Union[int, decimal.Decimal]: + """ + Takes a number of wei and converts it to any other ether unit. + """ + if unit.lower() not in units: + raise ValueError( + "Unknown unit. Must be one of {0}".format("/".join(units.keys())) + ) + + if number == 0: + return 0 + + if number < MIN_WEI or number > MAX_WEI: + raise ValueError("value must be between 1 and 2**256 - 1") + + unit_value = units[unit.lower()] + + with localcontext() as ctx: + ctx.prec = 999 + d_number = decimal.Decimal(value=number, context=ctx) + result_value = d_number / unit_value + + return result_value + + +def to_wei(number: Union[int, float, str, decimal.Decimal], unit: str) -> int: + """ + Takes a number of a unit and converts it to wei. + """ + if unit.lower() not in units: + raise ValueError( + "Unknown unit. 
Must be one of {0}".format("/".join(units.keys())) + ) + + if is_integer(number) or is_string(number): + d_number = decimal.Decimal(value=number) + elif isinstance(number, float): + d_number = decimal.Decimal(value=str(number)) + elif isinstance(number, decimal.Decimal): + d_number = number + else: + raise TypeError("Unsupported type. Must be one of integer, float, or string") + + s_number = str(number) + unit_value = units[unit.lower()] + + if d_number == decimal.Decimal(0): + return 0 + + if d_number < 1 and "." in s_number: + with localcontext() as ctx: + multiplier = len(s_number) - s_number.index(".") - 1 + ctx.prec = multiplier + d_number = decimal.Decimal(value=number, context=ctx) * 10 ** multiplier + unit_value /= 10 ** multiplier + + with localcontext() as ctx: + ctx.prec = 999 + result_value = decimal.Decimal(value=d_number, context=ctx) * unit_value + + if result_value < MIN_WEI or result_value > MAX_WEI: + raise ValueError("Resulting wei value must be between 1 and 2**256 - 1") + + return int(result_value) diff --git a/env/lib/python3.8/site-packages/eth_utils/curried/__init__.py b/env/lib/python3.8/site-packages/eth_utils/curried/__init__.py new file mode 100644 index 00000000..65a1fd1b --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/curried/__init__.py @@ -0,0 +1,265 @@ +# flake8: noqa +from eth_utils import ( + ExtendedDebugLogger, + HasExtendedDebugLogger, + HasExtendedDebugLoggerMeta, + HasLogger, + HasLoggerMeta, + ValidationError, + add_0x_prefix, + apply_formatter_at_index, + apply_formatter_if as non_curried_apply_formatter_if, + apply_formatter_to_array, + apply_formatters_to_dict as non_curried_apply_formatters_to_dict, + apply_formatters_to_sequence, + apply_key_map, + apply_one_of_formatters as non_curried_apply_one_of_formatters, + apply_to_return_value, + big_endian_to_int, + clamp, + combine_argument_formatters, + combomethod, + decode_hex, + denoms, + encode_hex, + event_abi_to_log_topic, + event_signature_to_log_topic, + flatten_return, + from_wei, + function_abi_to_4byte_selector, + function_signature_to_4byte_selector, + get_extended_debug_logger, + get_logger, + hexstr_if_str as non_curried_hexstr_if_str, + humanize_bytes, + humanize_hash, + humanize_integer_sequence, + humanize_ipfs_uri, + humanize_seconds, + import_string, + int_to_big_endian, + is_0x_prefixed, + is_address, + is_binary_address, + is_boolean, + is_bytes, + is_canonical_address, + is_checksum_address, + is_checksum_formatted_address, + is_dict, + is_hex, + is_hex_address, + is_hexstr, + is_integer, + is_list, + is_list_like, + is_normalized_address, + is_null, + is_number, + is_same_address, + is_string, + is_text, + is_tuple, + keccak, + remove_0x_prefix, + replace_exceptions, + reversed_return, + setup_DEBUG2_logging, + sort_return, + text_if_str as non_curried_text_if_str, + to_bytes, + to_canonical_address, + to_checksum_address, + to_dict, + to_hex, + to_int, + to_list, + to_normalized_address, + to_ordered_dict, + to_set, + to_text, + to_tuple, + to_wei, +) +from eth_utils.toolz import curry +from typing import ( + Any, + Callable, + Dict, + Generator, + Optional, + Sequence, + Tuple, + TypeVar, + Union, + overload, +) + +TReturn = TypeVar("TReturn") +TValue = TypeVar("TValue") + + +@overload +def apply_formatter_if( + condition: Callable[..., bool] +) -> Callable[[Callable[..., TReturn]], Callable[[TValue], Union[TReturn, TValue]]]: + ... 
+ + +@overload +def apply_formatter_if( + condition: Callable[..., bool], formatter: Callable[..., TReturn] +) -> Callable[[TValue], Union[TReturn, TValue]]: + ... + + +@overload +def apply_formatter_if( + condition: Callable[..., bool], formatter: Callable[..., TReturn], value: TValue +) -> Union[TReturn, TValue]: + ... + + +# This is just a stub to appease mypy, it gets overwritten later +def apply_formatter_if( + condition: Callable[..., bool], + formatter: Callable[..., TReturn] = None, + value: TValue = None, +) -> Union[ + Callable[[Callable[..., TReturn]], Callable[[TValue], Union[TReturn, TValue]]], + Callable[[TValue], Union[TReturn, TValue]], + TReturn, + TValue, +]: + ... + + +@overload +def apply_one_of_formatters( + formatter_condition_pairs: Sequence[ + Tuple[Callable[..., bool], Callable[..., TReturn]] + ] +) -> Callable[[TValue], TReturn]: + ... + + +@overload +def apply_one_of_formatters( + formatter_condition_pairs: Sequence[ + Tuple[Callable[..., bool], Callable[..., TReturn]] + ], + value: TValue, +) -> TReturn: + ... + + +# This is just a stub to appease mypy, it gets overwritten later +def apply_one_of_formatters( + formatter_condition_pairs: Sequence[ + Tuple[Callable[..., bool], Callable[..., TReturn]] + ], + value: TValue = None, +) -> TReturn: + ... + + +@overload +def hexstr_if_str( + to_type: Callable[..., TReturn] +) -> Callable[[Union[bytes, int, str]], TReturn]: + ... + + +@overload +def hexstr_if_str( + to_type: Callable[..., TReturn], to_format: Union[bytes, int, str] +) -> TReturn: + ... + + +# This is just a stub to appease mypy, it gets overwritten later +def hexstr_if_str( + to_type: Callable[..., TReturn], to_format: Union[bytes, int, str] = None +) -> TReturn: + ... + + +@overload +def text_if_str( + to_type: Callable[..., TReturn] +) -> Callable[[Union[bytes, int, str]], TReturn]: + ... + + +@overload +def text_if_str( + to_type: Callable[..., TReturn], text_or_primitive: Union[bytes, int, str] +) -> TReturn: + ... + + +# This is just a stub to appease mypy, it gets overwritten later +def text_if_str( + to_type: Callable[..., TReturn], text_or_primitive: Union[bytes, int, str] = None +) -> TReturn: + ... + + +@overload +def apply_formatters_to_dict( + formatters: Dict[Any, Any] +) -> Callable[[Dict[Any, Any]], TReturn]: + ... + + +@overload +def apply_formatters_to_dict( + formatters: Dict[Any, Any], value: Dict[Any, Any] +) -> TReturn: + ... + + +# This is just a stub to appease mypy, it gets overwritten later +def apply_formatters_to_dict( + formatters: Dict[Any, Any], value: Optional[Dict[Any, Any]] = None +) -> TReturn: + ... 
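As the comments above say, these overloads are stubs to appease mypy; the assignments just below rebind each name to a `curry`-wrapped implementation. An editor's sketch of what that currying buys callers:

```python
# Curried variants can be partially applied, then reused as formatters.
from eth_utils.curried import apply_formatter_if, is_string

upper_if_str = apply_formatter_if(is_string, str.upper)  # value not given yet

assert upper_if_str('abc') == 'ABC'   # condition holds: formatter applied
assert upper_if_str(42) == 42         # condition fails: value passes through
```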
+ + +apply_formatter_at_index = curry(apply_formatter_at_index) +apply_formatter_if = curry(non_curried_apply_formatter_if) +apply_formatter_to_array = curry(apply_formatter_to_array) +apply_formatters_to_dict = curry(non_curried_apply_formatters_to_dict) +apply_formatters_to_sequence = curry(apply_formatters_to_sequence) +apply_key_map = curry(apply_key_map) +apply_one_of_formatters = curry(non_curried_apply_one_of_formatters) +from_wei = curry(from_wei) +get_logger = curry(get_logger) +hexstr_if_str = curry(non_curried_hexstr_if_str) +is_same_address = curry(is_same_address) +text_if_str = curry(non_curried_text_if_str) +to_wei = curry(to_wei) +clamp = curry(clamp) + +# Delete any methods and classes that are not intended to be importable from +# eth_utils.curried +# We do this approach instead of __all__ because this approach actually prevents +# importing the wrong thing, while __all__ only affects `from eth_utils.curried import *` +del Any +del Callable +del Dict +del Generator +del Optional +del Sequence +del TReturn +del TValue +del Tuple +del TypeVar +del Union +del curry +del non_curried_apply_formatter_if +del non_curried_apply_one_of_formatters +del non_curried_apply_formatters_to_dict +del non_curried_hexstr_if_str +del non_curried_text_if_str +del overload diff --git a/env/lib/python3.8/site-packages/eth_utils/debug.py b/env/lib/python3.8/site-packages/eth_utils/debug.py new file mode 100644 index 00000000..d77e84cb --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/debug.py @@ -0,0 +1,20 @@ +import platform +import subprocess +import sys + + +def pip_freeze() -> str: + result = subprocess.run("pip freeze".split(), stdout=subprocess.PIPE) + return "pip freeze result:\n%s" % result.stdout.decode() + + +def python_version() -> str: + return "Python version:\n%s" % sys.version + + +def platform_info() -> str: + return "Operating System: %s" % platform.platform() + + +def get_environment_summary() -> str: + return "\n\n".join([python_version(), platform_info(), pip_freeze()]) diff --git a/env/lib/python3.8/site-packages/eth_utils/decorators.py b/env/lib/python3.8/site-packages/eth_utils/decorators.py new file mode 100644 index 00000000..a20f4a43 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/decorators.py @@ -0,0 +1,125 @@ +import functools +import itertools +from typing import Any, Callable, Dict, Iterable, Type, TypeVar + +from .types import is_text + +T = TypeVar("T") + + +class combomethod(object): + def __init__(self, method: Callable[..., Any]) -> None: + self.method = method + + def __get__(self, obj: T = None, objtype: Type[T] = None) -> Callable[..., Any]: + @functools.wraps(self.method) + def _wrapper(*args: Any, **kwargs: Any) -> Any: + if obj is not None: + return self.method(obj, *args, **kwargs) + else: + return self.method(objtype, *args, **kwargs) + + return _wrapper + + +def _has_one_val(*args: T, **kwargs: T) -> bool: + vals = itertools.chain(args, kwargs.values()) + not_nones = list(filter(lambda val: val is not None, vals)) + return len(not_nones) == 1 + + +def _assert_one_val(*args: T, **kwargs: T) -> None: + if not _has_one_val(*args, **kwargs): + raise TypeError( + "Exactly one of the passed values can be specified. 
" + "Instead, values were: %r, %r" % (args, kwargs) + ) + + +def _hexstr_or_text_kwarg_is_text_type(**kwargs: T) -> bool: + value = kwargs["hexstr"] if "hexstr" in kwargs else kwargs["text"] + return is_text(value) + + +def _assert_hexstr_or_text_kwarg_is_text_type(**kwargs: T) -> None: + if not _hexstr_or_text_kwarg_is_text_type(**kwargs): + raise TypeError( + "Arguments passed as hexstr or text must be of text type. " + "Instead, value was: %r" % (repr(next(iter(list(kwargs.values()))))) + ) + + +def _validate_supported_kwarg(kwargs: Any) -> None: + if next(iter(kwargs)) not in ["primitive", "hexstr", "text"]: + raise TypeError( + "Kwarg must be 'primitive', 'hexstr', or 'text'. " + "Instead, kwarg was: %r" % (next(iter(kwargs))) + ) + + +def validate_conversion_arguments(to_wrap: Callable[..., T]) -> Callable[..., T]: + """ + Validates arguments for conversion functions. + - Only a single argument is present + - Kwarg must be 'primitive' 'hexstr' or 'text' + - If it is 'hexstr' or 'text' that it is a text type + """ + + @functools.wraps(to_wrap) + def wrapper(*args: Any, **kwargs: Any) -> T: + _assert_one_val(*args, **kwargs) + if kwargs: + _validate_supported_kwarg(kwargs) + + if len(args) == 0 and "primitive" not in kwargs: + _assert_hexstr_or_text_kwarg_is_text_type(**kwargs) + return to_wrap(*args, **kwargs) + + return wrapper + + +def return_arg_type(at_position: int) -> Callable[..., Callable[..., T]]: + """ + Wrap the return value with the result of `type(args[at_position])` + """ + + def decorator(to_wrap: Callable[..., Any]) -> Callable[..., T]: + @functools.wraps(to_wrap) + def wrapper(*args: Any, **kwargs: Any) -> T: + result = to_wrap(*args, **kwargs) + ReturnType = type(args[at_position]) + return ReturnType(result) + + return wrapper + + return decorator + + +def replace_exceptions( + old_to_new_exceptions: Dict[Type[BaseException], Type[BaseException]] +) -> Callable[..., Any]: + """ + Replaces old exceptions with new exceptions to be raised in their place. + """ + old_exceptions = tuple(old_to_new_exceptions.keys()) + + def decorator(to_wrap: Callable[..., Any]) -> Callable[..., Any]: + @functools.wraps(to_wrap) + # String type b/c pypy3 throws SegmentationFault with Iterable as arg on nested fn + # Ignore so we don't have to import `Iterable` + def wrapper( + *args: Iterable[Any], **kwargs: Dict[str, Any] + ) -> Callable[..., Any]: + try: + return to_wrap(*args, **kwargs) + except old_exceptions as err: + try: + raise old_to_new_exceptions[type(err)](err) from err + except KeyError: + raise TypeError( + "could not look up new exception to use for %r" % err + ) from err + + return wrapper + + return decorator diff --git a/env/lib/python3.8/site-packages/eth_utils/encoding.py b/env/lib/python3.8/site-packages/eth_utils/encoding.py new file mode 100644 index 00000000..44a9fe76 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/encoding.py @@ -0,0 +1,6 @@ +def int_to_big_endian(value: int) -> bytes: + return value.to_bytes((value.bit_length() + 7) // 8 or 1, "big") + + +def big_endian_to_int(value: bytes) -> int: + return int.from_bytes(value, "big") diff --git a/env/lib/python3.8/site-packages/eth_utils/exceptions.py b/env/lib/python3.8/site-packages/eth_utils/exceptions.py new file mode 100644 index 00000000..8d8af447 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/exceptions.py @@ -0,0 +1,6 @@ +class ValidationError(Exception): + """ + Raised when something does not pass a validation check. 
+ """ + + pass diff --git a/env/lib/python3.8/site-packages/eth_utils/functional.py b/env/lib/python3.8/site-packages/eth_utils/functional.py new file mode 100644 index 00000000..ae49a7cb --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/functional.py @@ -0,0 +1,73 @@ +import collections +import functools +import itertools +from typing import ( # noqa: F401 + Any, + Callable, + Dict, + Iterable, + List, + Mapping, + Set, + Tuple, + TypeVar, + Union, +) + +from .toolz import compose as _compose + +T = TypeVar("T") + + +def identity(value: T) -> T: + return value + + +TGIn = TypeVar("TGIn") +TGOut = TypeVar("TGOut") +TFOut = TypeVar("TFOut") + + +def combine( + f: Callable[[TGOut], TFOut], g: Callable[[TGIn], TGOut] +) -> Callable[[TGIn], TFOut]: + return lambda x: f(g(x)) + + +def apply_to_return_value( + callback: Callable[..., T] +) -> Callable[..., Callable[..., T]]: + def outer(fn: Callable[..., T]) -> Callable[..., T]: + # We would need to type annotate *args and **kwargs but doing so segfaults + # the PyPy builds. We ignore instead. + @functools.wraps(fn) + def inner(*args, **kwargs) -> T: # type: ignore + return callback(fn(*args, **kwargs)) + + return inner + + return outer + + +TVal = TypeVar("TVal") +TKey = TypeVar("TKey") +to_tuple = apply_to_return_value( + tuple +) # type: Callable[[Callable[..., Iterable[TVal]]], Callable[..., Tuple[TVal, ...]]] # noqa: E501 +to_list = apply_to_return_value( + list +) # type: Callable[[Callable[..., Iterable[TVal]]], Callable[..., List[TVal]]] # noqa: E501 +to_set = apply_to_return_value( + set +) # type: Callable[[Callable[..., Iterable[TVal]]], Callable[..., Set[TVal]]] # noqa: E501 +to_dict = apply_to_return_value( + dict +) # type: Callable[[Callable[..., Iterable[Union[Mapping[TKey, TVal], Tuple[TKey, TVal]]]]], Callable[..., Dict[TKey, TVal]]] # noqa: E501 +to_ordered_dict = apply_to_return_value( + collections.OrderedDict +) # type: Callable[[Callable[..., Iterable[Union[Mapping[TKey, TVal], Tuple[TKey, TVal]]]]], Callable[..., collections.OrderedDict[TKey, TVal]]] # noqa: E501 +sort_return = _compose(to_tuple, apply_to_return_value(sorted)) +flatten_return = _compose( + to_tuple, apply_to_return_value(itertools.chain.from_iterable) +) +reversed_return = _compose(to_tuple, apply_to_return_value(reversed), to_tuple) diff --git a/env/lib/python3.8/site-packages/eth_utils/hexadecimal.py b/env/lib/python3.8/site-packages/eth_utils/hexadecimal.py new file mode 100644 index 00000000..3732a2b4 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/hexadecimal.py @@ -0,0 +1,121 @@ +# String encodings and numeric representations + +import binascii +import re +from typing import Any, AnyStr + +from eth_typing import HexStr + +from .types import is_string, is_text + +_HEX_REGEXP = re.compile("[0-9a-fA-F]*") + + +def decode_hex(value: str) -> bytes: + if not is_text(value): + raise TypeError("Value must be an instance of str") + non_prefixed = remove_0x_prefix(HexStr(value)) + # unhexlify will only accept bytes type someday + ascii_hex = non_prefixed.encode("ascii") + return binascii.unhexlify(ascii_hex) + + +def encode_hex(value: AnyStr) -> HexStr: + if not is_string(value): + raise TypeError("Value must be an instance of str or unicode") + elif isinstance(value, (bytes, bytearray)): + ascii_bytes = value + else: + ascii_bytes = value.encode("ascii") + + binary_hex = binascii.hexlify(ascii_bytes) + return add_0x_prefix(HexStr(binary_hex.decode("ascii"))) + + +def is_0x_prefixed(value: Any) -> bool: + if not is_text(value): + 
raise TypeError( + "is_0x_prefixed requires text typed arguments. Got: {0}".format(repr(value)) + ) + return value.startswith("0x") or value.startswith("0X") + + +def remove_0x_prefix(value: HexStr) -> HexStr: + if is_0x_prefixed(value): + return HexStr(value[2:]) + return value + + +def add_0x_prefix(value: HexStr) -> HexStr: + if is_0x_prefixed(value): + return value + return HexStr("0x" + value) + + +def is_hexstr(value: Any) -> bool: + if not is_text(value): + return False + + elif value.lower() == "0x": + return True + + unprefixed_value = remove_0x_prefix(value) + if len(unprefixed_value) % 2 != 0: + value_to_decode = "0" + unprefixed_value + else: + value_to_decode = unprefixed_value + + if not _HEX_REGEXP.fullmatch(value_to_decode): + return False + + try: + # convert from a value like '09af' to b'09af' + ascii_hex = value_to_decode.encode("ascii") + except UnicodeDecodeError: + # Should have already been caught by regex above, but just in case... + return False + + try: + # convert to a value like b'\x09\xaf' + value_as_bytes = binascii.unhexlify(ascii_hex) + except binascii.Error: + return False + except TypeError: + return False + else: + return bool(value_as_bytes) + + +def is_hex(value: Any) -> bool: + if not is_text(value): + raise TypeError( + "is_hex requires text typed arguments. Got: {0}".format(repr(value)) + ) + elif value.lower() == "0x": + return True + + unprefixed_value = remove_0x_prefix(value) + if len(unprefixed_value) % 2 != 0: + value_to_decode = "0" + unprefixed_value + else: + value_to_decode = unprefixed_value + + if not _HEX_REGEXP.fullmatch(value_to_decode): + return False + + try: + # convert from a value like '09af' to b'09af' + ascii_hex = value_to_decode.encode("ascii") + except UnicodeDecodeError: + # Should have already been caught by regex above, but just in case... 
+ return False + + try: + # convert to a value like b'\x09\xaf' + value_as_bytes = binascii.unhexlify(ascii_hex) + except binascii.Error: + return False + except TypeError: + return False + else: + return bool(value_as_bytes) diff --git a/env/lib/python3.8/site-packages/eth_utils/humanize.py b/env/lib/python3.8/site-packages/eth_utils/humanize.py new file mode 100644 index 00000000..03dd95c4 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/humanize.py @@ -0,0 +1,159 @@ +from typing import Any, Iterable, Iterator, Tuple, Union +from urllib import parse + +from eth_typing import URI, Hash32 + +from .toolz import sliding_window, take + + +def humanize_seconds(seconds: Union[float, int]) -> str: + if int(seconds) == 0: + return "0s" + + unit_values = _consume_leading_zero_units(_humanize_seconds(int(seconds))) + + return "".join( + ("{0}{1}".format(amount, unit) for amount, unit in take(3, unit_values)) + ) + + +SECOND = 1 +MINUTE = 60 +HOUR = 60 * 60 +DAY = 24 * HOUR +YEAR = 365 * DAY +MONTH = YEAR // 12 +WEEK = 7 * DAY + + +UNITS = ( + (YEAR, "y"), + (MONTH, "m"), + (WEEK, "w"), + (DAY, "d"), + (HOUR, "h"), + (MINUTE, "m"), + (SECOND, "s"), +) + + +def _consume_leading_zero_units( + units_iter: Iterator[Tuple[int, str]] +) -> Iterator[Tuple[int, str]]: + for amount, unit in units_iter: + if amount == 0: + continue + else: + yield (amount, unit) + break + + yield from units_iter + + +def _humanize_seconds(seconds: int) -> Iterator[Tuple[int, str]]: + remainder = seconds + + for duration, unit in UNITS: + if not remainder: + break + + num = remainder // duration + yield num, unit + + remainder %= duration + + +DISPLAY_HASH_CHARS = 4 + + +def humanize_bytes(value: bytes) -> str: + if len(value) <= DISPLAY_HASH_CHARS + 1: + return value.hex() + value_as_hex = value.hex() + head = value_as_hex[:DISPLAY_HASH_CHARS] + tail = value_as_hex[-1 * DISPLAY_HASH_CHARS :] + return "{0}..{1}".format(head, tail) + + +def humanize_hash(value: Hash32) -> str: + return humanize_bytes(value) + + +def humanize_ipfs_uri(uri: URI) -> str: + if not is_ipfs_uri(uri): + raise TypeError( + "%s does not look like a valid IPFS uri. Currently, " + "only CIDv0 hash schemes are supported." % uri + ) + + parsed = parse.urlparse(uri) + ipfs_hash = parsed.netloc + head = ipfs_hash[:DISPLAY_HASH_CHARS] + tail = ipfs_hash[-1 * DISPLAY_HASH_CHARS :] + return "ipfs://{0}..{1}".format(head, tail) + + +def is_ipfs_uri(value: Any) -> bool: + if not isinstance(value, str): + return False + + parsed = parse.urlparse(value) + if parsed.scheme != "ipfs" or not parsed.netloc: + return False + + return _is_CIDv0_ipfs_hash(parsed.netloc) + + +def _is_CIDv0_ipfs_hash(ipfs_hash: str) -> bool: + if ipfs_hash.startswith("Qm") and len(ipfs_hash) == 46: + return True + return False + + +def _find_breakpoints(*values: int) -> Iterator[int]: + yield 0 + for index, (left, right) in enumerate(sliding_window(2, values), 1): + if left + 1 == right: + continue + else: + yield index + yield len(values) + + +def _extract_integer_ranges(*values: int) -> Iterator[Tuple[int, int]]: + """ + Take a sequence of integers which is expected to be ordered and return the + most concise definition of the sequence in terms of integer ranges. 
+ + - fn(1, 2, 3) -> ((1, 3),) + - fn(1, 2, 3, 7, 8, 9) -> ((1, 3), (7, 9)) + - fn(1, 7, 8, 9) -> ((1, 1), (7, 9)) + """ + for left, right in sliding_window(2, _find_breakpoints(*values)): + chunk = values[left:right] + yield chunk[0], chunk[-1] + + +def _humanize_range(bounds: Tuple[int, int]) -> str: + left, right = bounds + if left == right: + return str(left) + else: + return "{left}-{right}".format(left=left, right=right) + + +def humanize_integer_sequence(values_iter: Iterable[int]) -> str: + """ + Return a human readable string that attempts to concisely define a sequence + of integers. + + - fn((1, 2, 3)) -> '1-3' + - fn((1, 2, 3, 7, 8, 9)) -> '1-3|7-9' + - fn((1, 2, 3, 5, 7, 8, 9)) -> '1-3|5|7-9' + - fn((1, 7, 8, 9)) -> '1|7-9' + """ + values = tuple(values_iter) + if not values: + return "(empty)" + else: + return "|".join(map(_humanize_range, _extract_integer_ranges(*values))) diff --git a/env/lib/python3.8/site-packages/eth_utils/logging.py b/env/lib/python3.8/site-packages/eth_utils/logging.py new file mode 100644 index 00000000..dc68c4a8 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/logging.py @@ -0,0 +1,158 @@ +import contextlib +import functools +import logging +from typing import Any, Callable, Dict, Iterator, Optional, Tuple, Type, TypeVar, cast + +from .toolz import assoc + +DEBUG2_LEVEL_NUM = 8 + +TLogger = TypeVar("TLogger", bound=logging.Logger) + + +class cached_show_debug2_property: + def __init__(self, func: Callable[[TLogger], bool]): + # type ignored b/c arg1 expects Callable[..., Any] + functools.update_wrapper(self, func) # type: ignore + self._func = func + + def __get__(self, obj: Optional[TLogger], cls: Type[logging.Logger]) -> Any: + if obj is None: + return self + + result = self._func(obj) + obj.__dict__[self._func.__name__] = result + return result + + +class ExtendedDebugLogger(logging.Logger): + """ + Logging class that can be used for lower level debug logging. + """ + + @cached_show_debug2_property + def show_debug2(self) -> bool: + return self.isEnabledFor(DEBUG2_LEVEL_NUM) + + def debug2(self, message: str, *args: Any, **kwargs: Any) -> None: + if self.show_debug2: + self.log(DEBUG2_LEVEL_NUM, message, *args, **kwargs) + else: + # When we find that `DEBUG2` isn't enabled we completely replace + # the `debug2` function in this instance of the logger with a noop + # lambda to further speed up + self.__dict__["debug2"] = lambda message, *args, **kwargs: None + + def __reduce__(self) -> Tuple[Any, ...]: + # This is needed because our parent's implementation could cause us to become a regular + # Logger on unpickling. + return get_extended_debug_logger, (self.name,) + + +def setup_DEBUG2_logging() -> None: + """ + Installs the `DEBUG2` level logging levels to the main logging module. + """ + if not hasattr(logging, "DEBUG2"): + logging.addLevelName(DEBUG2_LEVEL_NUM, "DEBUG2") + setattr(logging, "DEBUG2", DEBUG2_LEVEL_NUM) # typing: ignore + + +@contextlib.contextmanager +def _use_logger_class(logger_class: Type[logging.Logger]) -> Iterator[None]: + original_logger_class = logging.getLoggerClass() + logging.setLoggerClass(logger_class) + try: + yield + finally: + logging.setLoggerClass(original_logger_class) + + +def get_logger(name: str, logger_class: Type[TLogger] = None) -> TLogger: + if logger_class is None: + return cast(TLogger, logging.getLogger(name)) + else: + with _use_logger_class(logger_class): + # The logging module caches logger instances. 
The following code + # ensures that if there is a cached instance that we don't + # accidentally return the incorrect logger type because the logging + # module does not *update* the cached instance in the event that + # the global logging class changes. + # + # types ignored b/c mypy doesn't identify presence of manager on logging.Logger + manager = logging.Logger.manager # type: ignore + if name in manager.loggerDict: + if type(manager.loggerDict[name]) is not logger_class: + del manager.loggerDict[name] + return cast(TLogger, logging.getLogger(name)) + + +def get_extended_debug_logger(name: str) -> ExtendedDebugLogger: + return get_logger(name, ExtendedDebugLogger) + + +THasLoggerMeta = TypeVar("THasLoggerMeta", bound="HasLoggerMeta") + + +class HasLoggerMeta(type): + """ + Metaclass which using the `__qualname__` to assign a logger instance to the + class who's name corresponds to the import path for the class. + """ + + logger_class = logging.Logger + + def __new__( + mcls: Type[THasLoggerMeta], + name: str, + bases: Tuple[Type[Any]], + namespace: Dict[str, Any], + ) -> type: + if "logger" in namespace: + # If a logger was explicitly declared we shouldn't do anything to + # replace it. + return super().__new__(mcls, name, bases, namespace) + if "__qualname__" not in namespace: + raise AttributeError("Missing __qualname__") + + with _use_logger_class(mcls.logger_class): + logger = logging.getLogger(namespace["__qualname__"]) + + return super().__new__(mcls, name, bases, assoc(namespace, "logger", logger)) + + @classmethod + def replace_logger_class( + mcls: Type[THasLoggerMeta], value: Type[logging.Logger] + ) -> Type[THasLoggerMeta]: + return type(mcls.__name__, (mcls,), {"logger_class": value}) + + @classmethod + def meta_compat( + mcls: Type[THasLoggerMeta], other: Type[type] + ) -> Type[THasLoggerMeta]: + return type(mcls.__name__, (mcls, other), {}) + + +class _BaseHasLogger(metaclass=HasLoggerMeta): + # This class exists to a allow us to define the type of the logger. Once + # python3.5 is deprecated this can be removed in favor of a simple type + # annotation on the main class. + logger = logging.Logger("") # type: logging.Logger + + +class HasLogger(_BaseHasLogger): + pass + + +HasExtendedDebugLoggerMeta = HasLoggerMeta.replace_logger_class(ExtendedDebugLogger) + + +class _BaseHasExtendedDebugLogger(metaclass=HasExtendedDebugLoggerMeta): # type: ignore + # This class exists to a allow us to define the type of the logger. Once + # python3.5 is deprecated this can be removed in favor of a simple type + # annotation on the main class. + logger = ExtendedDebugLogger("") # type: ExtendedDebugLogger + + +class HasExtendedDebugLogger(_BaseHasExtendedDebugLogger): + pass diff --git a/env/lib/python3.8/site-packages/eth_utils/module_loading.py b/env/lib/python3.8/site-packages/eth_utils/module_loading.py new file mode 100644 index 00000000..33822dbf --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/module_loading.py @@ -0,0 +1,26 @@ +from importlib import import_module +from typing import Any + + +def import_string(dotted_path: str) -> Any: + """ + Source: django.utils.module_loading + Import a dotted module path and return the attribute/class designated by the + last name in the path. Raise ImportError if the import failed. 
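+    For example, import_string("collections.OrderedDict") returns the
+    collections.OrderedDict class.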
+ """ + try: + module_path, class_name = dotted_path.rsplit(".", 1) + except ValueError: + msg = "%s doesn't look like a module path" % dotted_path + raise ImportError(msg) + + module = import_module(module_path) + + try: + return getattr(module, class_name) + except AttributeError: + msg = 'Module "%s" does not define a "%s" attribute/class' % ( + module_path, + class_name, + ) + raise ImportError(msg) diff --git a/env/lib/python3.8/site-packages/eth_utils/numeric.py b/env/lib/python3.8/site-packages/eth_utils/numeric.py new file mode 100644 index 00000000..526a68ce --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/numeric.py @@ -0,0 +1,36 @@ +from abc import ABC, abstractmethod +import decimal +import numbers +from typing import Any, TypeVar, Union + + +class Comparable(ABC): + @abstractmethod + def __lt__(self, other: Any) -> bool: + ... + + @abstractmethod + def __gt__(self, other: Any) -> bool: + ... + + +TComparable = Union[Comparable, numbers.Real, int, float, decimal.Decimal] + + +TValue = TypeVar("TValue", bound=TComparable) + + +def clamp(lower_bound: TValue, upper_bound: TValue, value: TValue) -> TValue: + # The `mypy` ignore statements here are due to doing a comparison of + # `Union` types which isn't allowed. (per cburgdorf). This approach was + # chosen over using `typing.overload` to define multiple signatures for + # each comparison type here since the added value of "proper" typing + # doesn't seem to justify the complexity of having a bunch of different + # signatures defined. The external library perspective on this function + # should still be adequate under this approach + if value < lower_bound: # type: ignore + return lower_bound + elif value > upper_bound: # type: ignore + return upper_bound + else: + return value diff --git a/env/lib/python3.8/site-packages/eth_utils/py.typed b/env/lib/python3.8/site-packages/eth_utils/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/eth_utils/toolz.py b/env/lib/python3.8/site-packages/eth_utils/toolz.py new file mode 100644 index 00000000..683f1b61 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/toolz.py @@ -0,0 +1,156 @@ +try: + from cytoolz import ( + accumulate, + assoc, + assoc_in, + comp, + compatibility, + complement, + compose, + concat, + concatv, + cons, + count, + countby, + curried, + curry, + dicttoolz, + diff, + dissoc, + do, + drop, + excepts, + filter, + first, + flip, + frequencies, + functoolz, + get, + get_in, + groupby, + identity, + interleave, + interpose, + isdistinct, + isiterable, + itemfilter, + itemmap, + iterate, + itertoolz, + join, + juxt, + keyfilter, + keymap, + last, + map, + mapcat, + memoize, + merge, + merge_sorted, + merge_with, + nth, + partial, + partition, + partition_all, + partitionby, + peek, + pipe, + pluck, + random_sample, + recipes, + reduce, + reduceby, + remove, + second, + sliding_window, + sorted, + tail, + take, + take_nth, + thread_first, + thread_last, + topk, + unique, + update_in, + utils, + valfilter, + valmap, + ) +except ImportError: + from toolz import ( # noqa: F401 + accumulate, + assoc, + assoc_in, + comp, + compatibility, + complement, + compose, + concat, + concatv, + cons, + count, + countby, + curried, + curry, + dicttoolz, + diff, + dissoc, + do, + drop, + excepts, + filter, + first, + flip, + frequencies, + functoolz, + get, + get_in, + groupby, + identity, + interleave, + interpose, + isdistinct, + isiterable, + itemfilter, + itemmap, + iterate, + itertoolz, + join, + juxt, + keyfilter, 
+ keymap, + last, + map, + mapcat, + memoize, + merge, + merge_sorted, + merge_with, + nth, + partial, + partition, + partition_all, + partitionby, + peek, + pipe, + pluck, + random_sample, + recipes, + reduce, + reduceby, + remove, + second, + sliding_window, + sorted, + tail, + take, + take_nth, + thread_first, + thread_last, + topk, + unique, + update_in, + utils, + valfilter, + valmap, + ) diff --git a/env/lib/python3.8/site-packages/eth_utils/types.py b/env/lib/python3.8/site-packages/eth_utils/types.py new file mode 100644 index 00000000..6babbef3 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/types.py @@ -0,0 +1,52 @@ +import collections.abc +import numbers +from typing import Any + +bytes_types = (bytes, bytearray) +integer_types = (int,) +text_types = (str,) +string_types = (bytes, str, bytearray) + + +def is_integer(value: Any) -> bool: + return isinstance(value, integer_types) and not isinstance(value, bool) + + +def is_bytes(value: Any) -> bool: + return isinstance(value, bytes_types) + + +def is_text(value: Any) -> bool: + return isinstance(value, text_types) + + +def is_string(value: Any) -> bool: + return isinstance(value, string_types) + + +def is_boolean(value: Any) -> bool: + return isinstance(value, bool) + + +def is_dict(obj: Any) -> bool: + return isinstance(obj, collections.abc.Mapping) + + +def is_list_like(obj: Any) -> bool: + return not is_string(obj) and isinstance(obj, collections.abc.Sequence) + + +def is_list(obj: Any) -> bool: + return isinstance(obj, list) + + +def is_tuple(obj: Any) -> bool: + return isinstance(obj, tuple) + + +def is_null(obj: Any) -> bool: + return obj is None + + +def is_number(obj: Any) -> bool: + return isinstance(obj, numbers.Number) diff --git a/env/lib/python3.8/site-packages/eth_utils/typing/__init__.py b/env/lib/python3.8/site-packages/eth_utils/typing/__init__.py new file mode 100644 index 00000000..60f0e838 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/typing/__init__.py @@ -0,0 +1,18 @@ +import warnings + +from .misc import ( # noqa: F401 + Address, + AnyAddress, + ChecksumAddress, + HexAddress, + HexStr, + Primitives, + T, +) + +warnings.warn( + "The eth_utils.typing module will be deprecated in favor " + "of eth-typing in the next major version bump.", + category=DeprecationWarning, + stacklevel=2, +) diff --git a/env/lib/python3.8/site-packages/eth_utils/typing/misc.py b/env/lib/python3.8/site-packages/eth_utils/typing/misc.py new file mode 100644 index 00000000..45cfc91f --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/typing/misc.py @@ -0,0 +1,12 @@ +from typing import TypeVar + +from eth_typing import ( # noqa: F401 + Address, + AnyAddress, + ChecksumAddress, + HexAddress, + HexStr, + Primitives, +) + +T = TypeVar("T") diff --git a/env/lib/python3.8/site-packages/eth_utils/units.py b/env/lib/python3.8/site-packages/eth_utils/units.py new file mode 100644 index 00000000..8848b892 --- /dev/null +++ b/env/lib/python3.8/site-packages/eth_utils/units.py @@ -0,0 +1,31 @@ +import decimal + +# Units are in their own module here, so that they can keep this +# formatting, as this module is excluded from black in pyproject.toml +# fmt: off +units = { + 'wei': decimal.Decimal('1'), # noqa: E241 + 'kwei': decimal.Decimal('1000'), # noqa: E241 + 'babbage': decimal.Decimal('1000'), # noqa: E241 + 'femtoether': decimal.Decimal('1000'), # noqa: E241 + 'mwei': decimal.Decimal('1000000'), # noqa: E241 + 'lovelace': decimal.Decimal('1000000'), # noqa: E241 + 'picoether': decimal.Decimal('1000000'), 
# noqa: E241 + 'gwei': decimal.Decimal('1000000000'), # noqa: E241 + 'shannon': decimal.Decimal('1000000000'), # noqa: E241 + 'nanoether': decimal.Decimal('1000000000'), # noqa: E241 + 'nano': decimal.Decimal('1000000000'), # noqa: E241 + 'szabo': decimal.Decimal('1000000000000'), # noqa: E241 + 'microether': decimal.Decimal('1000000000000'), # noqa: E241 + 'micro': decimal.Decimal('1000000000000'), # noqa: E241 + 'finney': decimal.Decimal('1000000000000000'), # noqa: E241 + 'milliether': decimal.Decimal('1000000000000000'), # noqa: E241 + 'milli': decimal.Decimal('1000000000000000'), # noqa: E241 + 'ether': decimal.Decimal('1000000000000000000'), # noqa: E241 + 'kether': decimal.Decimal('1000000000000000000000'), # noqa: E241 + 'grand': decimal.Decimal('1000000000000000000000'), # noqa: E241 + 'mether': decimal.Decimal('1000000000000000000000000'), # noqa: E241 + 'gether': decimal.Decimal('1000000000000000000000000000'), # noqa: E241 + 'tether': decimal.Decimal('1000000000000000000000000000000'), # noqa: E241 +} +# fmt: on diff --git a/env/lib/python3.8/site-packages/ethpm/__init__.py b/env/lib/python3.8/site-packages/ethpm/__init__.py new file mode 100644 index 00000000..14c435af --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/__init__.py @@ -0,0 +1,20 @@ +from pathlib import Path + + +ETHPM_DIR = Path(__file__).parent +ASSETS_DIR = ETHPM_DIR / "assets" + + +def get_ethpm_spec_dir() -> Path: + ethpm_spec_dir = ETHPM_DIR / "ethpm-spec" + v3_spec = ethpm_spec_dir / "spec" / "v3.spec.json" + if not v3_spec.is_file(): + raise FileNotFoundError( + "The ethpm-spec submodule is not available. " + "Please import the submodule with `git submodule update --init`" + ) + return ethpm_spec_dir + + +from .package import Package # noqa: F401 +from .backends.registry import RegistryURI # noqa: F401 diff --git a/env/lib/python3.8/site-packages/ethpm/_utils/__init__.py b/env/lib/python3.8/site-packages/ethpm/_utils/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/ethpm/_utils/backend.py b/env/lib/python3.8/site-packages/ethpm/_utils/backend.py new file mode 100644 index 00000000..ae9fdbdd --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/_utils/backend.py @@ -0,0 +1,77 @@ +import logging +from typing import ( + Generator, + Type, +) + +from eth_typing import ( + URI, +) +from eth_utils import ( + to_tuple, +) +from ipfshttpclient.exceptions import ( + ConnectionError, +) + +from ethpm.backends.base import ( + BaseURIBackend, +) +from ethpm.backends.http import ( + GithubOverHTTPSBackend, +) +from ethpm.backends.ipfs import ( + DummyIPFSBackend, + InfuraIPFSBackend, + LocalIPFSBackend, + get_ipfs_backend_class, +) +from ethpm.backends.registry import ( + RegistryURIBackend, +) + +logger = logging.getLogger("ethpm.utils.backend") + +ALL_URI_BACKENDS = [ + InfuraIPFSBackend, + DummyIPFSBackend, + LocalIPFSBackend, + GithubOverHTTPSBackend, + RegistryURIBackend, +] + + +@to_tuple +def get_translatable_backends_for_uri( + uri: URI +) -> Generator[Type[BaseURIBackend], None, None]: + # type ignored because of conflict with instantiating BaseURIBackend + for backend in ALL_URI_BACKENDS: + try: + if backend().can_translate_uri(uri): # type: ignore + yield backend + except ConnectionError: + logger.debug("No local IPFS node available on port 5001.", exc_info=True) + + +@to_tuple +def get_resolvable_backends_for_uri( + uri: URI +) -> Generator[Type[BaseURIBackend], None, None]: + # special case the default IPFS backend to the first slot. 
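+    # If the configured default backend can resolve the URI, it is yielded on
+    # its own; otherwise every remaining backend is probed in turn.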
+ default_ipfs = get_ipfs_backend_class() + if default_ipfs in ALL_URI_BACKENDS and default_ipfs().can_resolve_uri(uri): + yield default_ipfs + else: + for backend_class in ALL_URI_BACKENDS: + if backend_class is default_ipfs: + continue + # type ignored because of conflict with instantiating BaseURIBackend + else: + try: + if backend_class().can_resolve_uri(uri): # type: ignore + yield backend_class + except ConnectionError: + logger.debug( + "No local IPFS node available on port 5001.", exc_info=True + ) diff --git a/env/lib/python3.8/site-packages/ethpm/_utils/cache.py b/env/lib/python3.8/site-packages/ethpm/_utils/cache.py new file mode 100644 index 00000000..eb8ff29a --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/_utils/cache.py @@ -0,0 +1,32 @@ +import functools +from typing import ( + Any, + Callable, +) + + +class cached_property: + """ + Decorator that converts a method with a single self argument into a + property cached on the instance. + + Optional ``name`` argument allows you to make cached properties of other + methods. (e.g. url = cached_property(get_absolute_url, name='url') ) + """ + + def __init__(self, func: Callable[..., Any], name: str = None) -> None: + self.func = func + self.name = name or func.__name__ + # type ignored b/c self is 'cached_property' but expects 'Callable[..., Any]' + functools.update_wrapper(self, func) # type: ignore + + def __get__(self, instance: Any, cls: Any = None) -> Any: + """ + Call the function and put the return value in instance.__dict__ so that + subsequent attribute access on the instance returns the cached value + instead of calling cached_property.__get__(). + """ + if instance is None: + return self + res = instance.__dict__[self.name] = self.func(instance) + return res diff --git a/env/lib/python3.8/site-packages/ethpm/_utils/chains.py b/env/lib/python3.8/site-packages/ethpm/_utils/chains.py new file mode 100644 index 00000000..51b8658d --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/_utils/chains.py @@ -0,0 +1,112 @@ +import re +from typing import ( + Any, + Tuple, +) +from urllib import ( + parse, +) + +from eth_typing import ( + URI, + BlockNumber, + HexStr, +) +from eth_utils import ( + add_0x_prefix, + is_integer, + remove_0x_prefix, +) +from hexbytes import ( + HexBytes, +) + +from ethpm.constants import ( + SUPPORTED_CHAIN_IDS, +) +from web3 import Web3 + + +def get_genesis_block_hash(web3: Web3) -> HexBytes: + return web3.eth.getBlock(BlockNumber(0))["hash"] + + +BLOCK = "block" + +BIP122_URL_REGEX = ( + "^" + "blockchain://" + "(?P[a-zA-Z0-9]{64})" + "/" + "(?Pblock|transaction)" + "/" + "(?P[a-zA-Z0-9]{64})" + "$" +) + + +def is_BIP122_uri(value: URI) -> bool: + return bool(re.match(BIP122_URL_REGEX, value)) + + +def parse_BIP122_uri(blockchain_uri: URI) -> Tuple[HexStr, str, HexStr]: + match = re.match(BIP122_URL_REGEX, blockchain_uri) + if match is None: + raise ValueError(f"Invalid URI format: '{blockchain_uri}'") + chain_id, resource_type, resource_hash = match.groups() + return (add_0x_prefix(HexStr(chain_id)), resource_type, add_0x_prefix(HexStr(resource_hash))) + + +def is_BIP122_block_uri(value: URI) -> bool: + if not is_BIP122_uri(value): + return False + _, resource_type, _ = parse_BIP122_uri(value) + return resource_type == BLOCK + + +BLOCK_OR_TRANSACTION_HASH_REGEX = "^(?:0x)?[a-zA-Z0-9]{64}$" + + +def is_block_or_transaction_hash(value: str) -> bool: + return bool(re.match(BLOCK_OR_TRANSACTION_HASH_REGEX, value)) + + +def create_BIP122_uri( + chain_id: HexStr, resource_type: str, 
resource_identifier: HexStr +) -> URI: + """ + See: https://github.com/bitcoin/bips/blob/master/bip-0122.mediawiki + """ + if resource_type != BLOCK: + raise ValueError("Invalid resource_type. Must be one of 'block'") + elif not is_block_or_transaction_hash(resource_identifier): + raise ValueError( + "Invalid resource_identifier. Must be a hex encoded 32 byte value" + ) + elif not is_block_or_transaction_hash(chain_id): + raise ValueError("Invalid chain_id. Must be a hex encoded 32 byte value") + + return URI( + parse.urlunsplit( + [ + "blockchain", + remove_0x_prefix(chain_id), + f"{resource_type}/{remove_0x_prefix(resource_identifier)}", + "", + "", + ] + ) + ) + + +def create_block_uri(chain_id: HexStr, block_identifier: HexStr) -> URI: + return create_BIP122_uri(chain_id, "block", remove_0x_prefix(block_identifier)) + + +def is_supported_chain_id(chain_id: Any) -> bool: + if not is_integer(chain_id): + return False + + if chain_id not in SUPPORTED_CHAIN_IDS.keys(): + return False + return True diff --git a/env/lib/python3.8/site-packages/ethpm/_utils/contract.py b/env/lib/python3.8/site-packages/ethpm/_utils/contract.py new file mode 100644 index 00000000..5e8d2ccf --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/_utils/contract.py @@ -0,0 +1,35 @@ +from typing import ( + Any, + Dict, + Generator, + Tuple, +) + +from eth_utils import ( + to_dict, +) + + +@to_dict +def generate_contract_factory_kwargs( + contract_data: Dict[str, Any] +) -> Generator[Tuple[str, Any], None, None]: + """ + Build a dictionary of kwargs to be passed into contract factory. + """ + if "abi" in contract_data: + yield "abi", contract_data["abi"] + + if "deploymentBytecode" in contract_data: + yield "bytecode", contract_data["deploymentBytecode"]["bytecode"] + if "linkReferences" in contract_data["deploymentBytecode"]: + yield "unlinked_references", tuple( + contract_data["deploymentBytecode"]["linkReferences"] + ) + + if "runtimeBytecode" in contract_data: + yield "bytecode_runtime", contract_data["runtimeBytecode"]["bytecode"] + if "linkReferences" in contract_data["runtimeBytecode"]: + yield "linked_references", tuple( + contract_data["runtimeBytecode"]["linkReferences"] + ) diff --git a/env/lib/python3.8/site-packages/ethpm/_utils/deployments.py b/env/lib/python3.8/site-packages/ethpm/_utils/deployments.py new file mode 100644 index 00000000..3749d54e --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/_utils/deployments.py @@ -0,0 +1,136 @@ +from typing import ( + Any, + Dict, + Generator, + List, + Tuple, +) + +from eth_utils import ( + is_same_address, + to_bytes, + to_tuple, +) +from eth_utils.toolz import ( + get_in, +) +from hexbytes import ( + HexBytes, +) + +from ethpm.exceptions import ( + BytecodeLinkingError, + EthPMValidationError, +) +from web3 import Web3 + + +def get_linked_deployments(deployments: Dict[str, Any]) -> Dict[str, Any]: + """ + Returns all deployments found in a chain URI's deployment data that contain link dependencies. 
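+    Raises BytecodeLinkingError if a deployment's link dependencies reference
+    the deployment's own contract instance.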
+ """ + linked_deployments = { + dep: data + for dep, data in deployments.items() + if get_in(("runtimeBytecode", "linkDependencies"), data) + } + for deployment, data in linked_deployments.items(): + if any( + link_dep["value"] == deployment + for link_dep in data["runtimeBytecode"]["linkDependencies"] + ): + raise BytecodeLinkingError( + f"Link dependency found in {deployment} deployment that references its " + "own contract instance, which is disallowed" + ) + return linked_deployments + + +def validate_linked_references( + link_deps: Tuple[Tuple[int, bytes], ...], bytecode: HexBytes +) -> None: + """ + Validates that normalized linked_references (offset, expected_bytes) + match the corresponding bytecode. + """ + offsets, values = zip(*link_deps) + for idx, offset in enumerate(offsets): + value = values[idx] + # https://github.com/python/mypy/issues/4975 + offset_value = int(offset) + dep_length = len(value) + end_of_bytes = offset_value + dep_length + # Ignore b/c whitespace around ':' conflict b/w black & flake8 + actual_bytes = bytecode[offset_value:end_of_bytes] # noqa: E203 + if actual_bytes != values[idx]: + raise EthPMValidationError( + "Error validating linked reference. " + f"Offset: {offset} " + f"Value: {values[idx]} " + f"Bytecode: {bytecode} ." + ) + + +@to_tuple +def normalize_linked_references( + data: List[Dict[str, Any]] +) -> Generator[Tuple[int, str, str], None, None]: + """ + Return a tuple of information representing all insertions of a linked reference. + (offset, type, value) + """ + for deployment in data: + for offset in deployment["offsets"]: + yield offset, deployment["type"], deployment["value"] + + +def validate_deployments_tx_receipt( + deployments: Dict[str, Any], w3: Web3, allow_missing_data: bool = False +) -> None: + """ + Validate that address and block hash found in deployment data match what is found on-chain. + :allow_missing_data: by default, enforces validation of address and blockHash. + """ + # todo: provide hook to lazily look up tx receipt via binary search if missing data + for name, data in deployments.items(): + if "transaction" in data: + tx_hash = data["transaction"] + tx_receipt = w3.eth.getTransactionReceipt(tx_hash) + # tx_address will be None if contract created via contract factory + tx_address = tx_receipt["contractAddress"] + + if tx_address is None and allow_missing_data is False: + raise EthPMValidationError( + "No contract address found in tx receipt. Unable to verify " + "address found in tx receipt matches address in manifest's deployment data. " + "If this validation is not necessary, please enable `allow_missing_data` arg. " + ) + + if tx_address is not None and not is_same_address( + tx_address, data["address"] + ): + raise EthPMValidationError( + f"Error validating tx_receipt for {name} deployment. " + f"Address found in manifest's deployment data: {data['address']} " + f"Does not match address found on tx_receipt: {tx_address}." + ) + + if "block" in data: + if tx_receipt["blockHash"] != to_bytes(hexstr=data["block"]): + raise EthPMValidationError( + f"Error validating tx_receipt for {name} deployment. " + f"Block found in manifest's deployment data: {data['block']} does not " + f"Does not match block found on tx_receipt: {tx_receipt['blockHash']}." + ) + elif allow_missing_data is False: + raise EthPMValidationError( + "No block hash found in deployment data. " + "Unable to verify block hash on tx receipt. " + "If this validation is not necessary, please enable `allow_missing_data` arg." 
+ ) + elif allow_missing_data is False: + raise EthPMValidationError( + "No transaction hash found in deployment data. " + "Unable to validate tx_receipt. " + "If this validation is not necessary, please enable `allow_missing_data` arg." + ) diff --git a/env/lib/python3.8/site-packages/ethpm/_utils/ipfs.py b/env/lib/python3.8/site-packages/ethpm/_utils/ipfs.py new file mode 100644 index 00000000..b838bce4 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/_utils/ipfs.py @@ -0,0 +1,111 @@ +import hashlib +from pathlib import ( + Path, +) +from typing import ( + Dict, +) +from urllib import ( + parse, +) + +from base58 import ( + b58encode, +) +from eth_utils import ( + to_text, +) +from google.protobuf.descriptor import ( + Descriptor, +) + +from ethpm._utils.protobuf.ipfs_file_pb2 import ( + Data, + PBNode, +) + + +def dummy_ipfs_pin(path: Path) -> Dict[str, str]: + """ + Return IPFS data as if file was pinned to an actual node. + """ + ipfs_return = { + "Hash": generate_file_hash(path.read_bytes()), + "Name": path.name, + "Size": str(path.stat().st_size), + } + return ipfs_return + + +def create_ipfs_uri(ipfs_hash: str) -> str: + return f"ipfs://{ipfs_hash}" + + +def extract_ipfs_path_from_uri(value: str) -> str: + """ + Return the path from an IPFS URI. + Path = IPFS hash & following path. + """ + parse_result = parse.urlparse(value) + + if parse_result.netloc: + if parse_result.path: + return "".join((parse_result.netloc, parse_result.path.rstrip("/"))) + else: + return parse_result.netloc + else: + return parse_result.path.strip("/") + + +def is_ipfs_uri(value: str) -> bool: + """ + Return a bool indicating whether or not the value is a valid IPFS URI. + """ + parse_result = parse.urlparse(value) + if parse_result.scheme != "ipfs": + return False + if not parse_result.netloc and not parse_result.path: + return False + + return True + + +# +# Generate IPFS hash +# Lifted from https://github.com/ethereum/populus/blob/feat%2Fv2/populus/utils/ipfs.py +# + + +SHA2_256 = b"\x12" +LENGTH_32 = b"\x20" + + +def multihash(value: bytes) -> bytes: + data_hash = hashlib.sha256(value).digest() + + multihash_bytes = SHA2_256 + LENGTH_32 + data_hash + return multihash_bytes + + +def serialize_bytes(file_bytes: bytes) -> Descriptor: + file_size = len(file_bytes) + + data_protobuf = Data( + # type ignored b/c DataType is manually attached in ipfs_file_pb2.py + Type=Data.DataType.Value("File"), # type: ignore + Data=file_bytes, + filesize=file_size, + ) + data_protobuf_bytes = data_protobuf.SerializeToString() + + file_protobuf = PBNode(Links=[], Data=data_protobuf_bytes) + + return file_protobuf + + +def generate_file_hash(content_bytes: bytes) -> str: + file_protobuf: Descriptor = serialize_bytes(content_bytes) + # type ignored b/c SerializeToString is manually attached in ipfs_file_pb2.py + file_protobuf_bytes = file_protobuf.SerializeToString() # type: ignore + file_multihash = multihash(file_protobuf_bytes) + return to_text(b58encode(file_multihash)) diff --git a/env/lib/python3.8/site-packages/ethpm/_utils/protobuf/__init__.py b/env/lib/python3.8/site-packages/ethpm/_utils/protobuf/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/ethpm/_utils/protobuf/ipfs_file_pb2.py b/env/lib/python3.8/site-packages/ethpm/_utils/protobuf/ipfs_file_pb2.py new file mode 100644 index 00000000..344dc150 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/_utils/protobuf/ipfs_file_pb2.py @@ -0,0 +1,318 @@ +# flake8: noqa +# Generated by the protocol 
buffer compiler. DO NOT EDIT! +# source: file.proto + +import sys + +from google.protobuf import ( + descriptor as _descriptor, + descriptor_pb2, + message as _message, + reflection as _reflection, + symbol_database as _symbol_database, +) + +_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +DESCRIPTOR = _descriptor.FileDescriptor( + name="file.proto", + package="", + syntax="proto2", + serialized_pb=_b( + '\n\nfile.proto"\xa1\x01\n\x04\x44\x61ta\x12\x1c\n\x04Type\x18\x01 \x02(\x0e\x32\x0e.Data.DataType\x12\x0c\n\x04\x44\x61ta\x18\x02 \x01(\x0c\x12\x10\n\x08\x66ilesize\x18\x03 \x01(\x04\x12\x12\n\nblocksizes\x18\x04 \x03(\x04"G\n\x08\x44\x61taType\x12\x07\n\x03Raw\x10\x00\x12\r\n\tDirectory\x10\x01\x12\x08\n\x04\x46ile\x10\x02\x12\x0c\n\x08Metadata\x10\x03\x12\x0b\n\x07Symlink\x10\x04"3\n\x06PBLink\x12\x0c\n\x04Hash\x18\x01 \x01(\x0c\x12\x0c\n\x04Name\x18\x02 \x01(\t\x12\r\n\x05Tsize\x18\x03 \x01(\x04".\n\x06PBNode\x12\x16\n\x05Links\x18\x02 \x03(\x0b\x32\x07.PBLink\x12\x0c\n\x04\x44\x61ta\x18\x01 \x01(\x0c' + ), +) +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + + +_DATA_DATATYPE = _descriptor.EnumDescriptor( + name="DataType", + full_name="Data.DataType", + filename=None, + file=DESCRIPTOR, + values=[ + _descriptor.EnumValueDescriptor( + name="Raw", index=0, number=0, options=None, type=None + ), + _descriptor.EnumValueDescriptor( + name="Directory", index=1, number=1, options=None, type=None + ), + _descriptor.EnumValueDescriptor( + name="File", index=2, number=2, options=None, type=None + ), + _descriptor.EnumValueDescriptor( + name="Metadata", index=3, number=3, options=None, type=None + ), + _descriptor.EnumValueDescriptor( + name="Symlink", index=4, number=4, options=None, type=None + ), + ], + containing_type=None, + options=None, + serialized_start=105, + serialized_end=176, +) +_sym_db.RegisterEnumDescriptor(_DATA_DATATYPE) + + +_DATA = _descriptor.Descriptor( + name="Data", + full_name="Data", + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name="Type", + full_name="Data.Type", + index=0, + number=1, + type=14, + cpp_type=8, + label=2, + has_default_value=False, + default_value=0, + message_type=None, + enum_type=None, + containing_type=None, + is_extension=False, + extension_scope=None, + options=None, + ), + _descriptor.FieldDescriptor( + name="Data", + full_name="Data.Data", + index=1, + number=2, + type=12, + cpp_type=9, + label=1, + has_default_value=False, + default_value=_b(""), + message_type=None, + enum_type=None, + containing_type=None, + is_extension=False, + extension_scope=None, + options=None, + ), + _descriptor.FieldDescriptor( + name="filesize", + full_name="Data.filesize", + index=2, + number=3, + type=4, + cpp_type=4, + label=1, + has_default_value=False, + default_value=0, + message_type=None, + enum_type=None, + containing_type=None, + is_extension=False, + extension_scope=None, + options=None, + ), + _descriptor.FieldDescriptor( + name="blocksizes", + full_name="Data.blocksizes", + index=3, + number=4, + type=4, + cpp_type=4, + label=3, + has_default_value=False, + default_value=[], + message_type=None, + enum_type=None, + containing_type=None, + is_extension=False, + extension_scope=None, + options=None, + ), + ], + extensions=[], + nested_types=[], + enum_types=[_DATA_DATATYPE], + options=None, + is_extendable=False, + syntax="proto2", + extension_ranges=[], + oneofs=[], + 
serialized_start=15, + serialized_end=176, +) + + +_PBLINK = _descriptor.Descriptor( + name="PBLink", + full_name="PBLink", + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name="Hash", + full_name="PBLink.Hash", + index=0, + number=1, + type=12, + cpp_type=9, + label=1, + has_default_value=False, + default_value=_b(""), + message_type=None, + enum_type=None, + containing_type=None, + is_extension=False, + extension_scope=None, + options=None, + ), + _descriptor.FieldDescriptor( + name="Name", + full_name="PBLink.Name", + index=1, + number=2, + type=9, + cpp_type=9, + label=1, + has_default_value=False, + default_value=_b("").decode("utf-8"), + message_type=None, + enum_type=None, + containing_type=None, + is_extension=False, + extension_scope=None, + options=None, + ), + _descriptor.FieldDescriptor( + name="Tsize", + full_name="PBLink.Tsize", + index=2, + number=3, + type=4, + cpp_type=4, + label=1, + has_default_value=False, + default_value=0, + message_type=None, + enum_type=None, + containing_type=None, + is_extension=False, + extension_scope=None, + options=None, + ), + ], + extensions=[], + nested_types=[], + enum_types=[], + options=None, + is_extendable=False, + syntax="proto2", + extension_ranges=[], + oneofs=[], + serialized_start=178, + serialized_end=229, +) + + +_PBNODE = _descriptor.Descriptor( + name="PBNode", + full_name="PBNode", + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name="Links", + full_name="PBNode.Links", + index=0, + number=2, + type=11, + cpp_type=10, + label=3, + has_default_value=False, + default_value=[], + message_type=None, + enum_type=None, + containing_type=None, + is_extension=False, + extension_scope=None, + options=None, + ), + _descriptor.FieldDescriptor( + name="Data", + full_name="PBNode.Data", + index=1, + number=1, + type=12, + cpp_type=9, + label=1, + has_default_value=False, + default_value=_b(""), + message_type=None, + enum_type=None, + containing_type=None, + is_extension=False, + extension_scope=None, + options=None, + ), + ], + extensions=[], + nested_types=[], + enum_types=[], + options=None, + is_extendable=False, + syntax="proto2", + extension_ranges=[], + oneofs=[], + serialized_start=231, + serialized_end=277, +) + +_DATA.fields_by_name["Type"].enum_type = _DATA_DATATYPE +_DATA_DATATYPE.containing_type = _DATA +_PBNODE.fields_by_name["Links"].message_type = _PBLINK +DESCRIPTOR.message_types_by_name["Data"] = _DATA +DESCRIPTOR.message_types_by_name["PBLink"] = _PBLINK +DESCRIPTOR.message_types_by_name["PBNode"] = _PBNODE + +Data = _reflection.GeneratedProtocolMessageType( + "Data", + (_message.Message,), + dict( + DESCRIPTOR=_DATA, + __module__="file_pb2" + # @@protoc_insertion_point(class_scope:Data) + ), +) +_sym_db.RegisterMessage(Data) + +PBLink = _reflection.GeneratedProtocolMessageType( + "PBLink", + (_message.Message,), + dict( + DESCRIPTOR=_PBLINK, + __module__="file_pb2" + # @@protoc_insertion_point(class_scope:PBLink) + ), +) +_sym_db.RegisterMessage(PBLink) + +PBNode = _reflection.GeneratedProtocolMessageType( + "PBNode", + (_message.Message,), + dict( + DESCRIPTOR=_PBNODE, + __module__="file_pb2" + # @@protoc_insertion_point(class_scope:PBNode) + ), +) +_sym_db.RegisterMessage(PBNode) + + +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/ethpm/_utils/registry.py b/env/lib/python3.8/site-packages/ethpm/_utils/registry.py new file mode 100644 index 00000000..b9212a71 --- 
/dev/null +++ b/env/lib/python3.8/site-packages/ethpm/_utils/registry.py @@ -0,0 +1,29 @@ +import json +from typing import ( + Any, + Dict, +) + + +def is_ens_domain(authority: str) -> bool: + """ + Return false if authority is not a valid ENS domain. + """ + # check that authority ends with the tld '.eth' + # check that there are either 2 or 3 subdomains in the authority + # i.e. zeppelinos.eth or packages.zeppelinos.eth + if authority[-4:] != ".eth" or len(authority.split(".")) not in [2, 3]: + return False + return True + + +def fetch_standard_registry_abi() -> Dict[str, Any]: + """ + Return the standard Registry ABI to interact with a deployed Registry. + TODO: Update once the standard is finalized via ERC process. + """ + # In-lining abi here since it needs to be updated to a registry conforming to + # https://github.com/ethereum/EIPs/issues/1319 + return json.loads( + '[{"constant":true,"inputs":[{"name":"_name","type":"bytes32"}],"name":"lookupPackage","outputs":[{"name":"","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"name":"","type":"bytes32"}],"name":"index","outputs":[{"name":"uri","type":"string"},{"name":"version","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"_name","type":"bytes32"},{"name":"_version","type":"string"},{"name":"_uri","type":"string"}],"name":"registerPackage","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"}]' # noqa: E501 + ) diff --git a/env/lib/python3.8/site-packages/ethpm/assets/__init__.py b/env/lib/python3.8/site-packages/ethpm/assets/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/Authority.sol b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/Authority.sol new file mode 100644 index 00000000..c64d5e2e --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/Authority.sol @@ -0,0 +1,156 @@ +pragma solidity ^0.4.24; +pragma experimental "v0.5.0"; + + +contract AuthorityInterface { + function canCall( + address callerAddress, + address codeAddress, + bytes4 sig + ) + public + view + returns (bool); +} + + +contract AuthorizedInterface { + address public owner; + AuthorityInterface public authority; + + modifier auth { + require(isAuthorized(),"escape:Authority:caller-not-authorized"); + _; + } + + event OwnerUpdate(address indexed oldOwner, address indexed newOwner); + event AuthorityUpdate(address indexed oldAuthority, address indexed newAuthority); + + function setOwner(address newOwner) public returns (bool); + + function setAuthority(AuthorityInterface newAuthority) public returns (bool); + + function isAuthorized() internal returns (bool); +} + + +contract Authorized is AuthorizedInterface { + constructor() public { + owner = msg.sender; + emit OwnerUpdate(0x0, owner); + } + + function setOwner(address newOwner) + public + auth + returns (bool) + { + emit OwnerUpdate(owner, newOwner); + owner = newOwner; + return true; + } + + function setAuthority(AuthorityInterface newAuthority) + public + auth + returns (bool) + { + emit AuthorityUpdate(authority, newAuthority); + authority = newAuthority; + return true; + } + + function isAuthorized() internal returns (bool) { + if (msg.sender == owner) { + return true; + } else if (address(authority) == (0)) { + return false; + } else { + return authority.canCall(msg.sender, this, msg.sig); + } + } +} + + +contract 
WhitelistAuthorityInterface is AuthorityInterface, AuthorizedInterface { + event SetCanCall( + address indexed callerAddress, + address indexed codeAddress, + bytes4 indexed sig, + bool can + ); + + event SetAnyoneCanCall( + address indexed codeAddress, + bytes4 indexed sig, + bool can + ); + + function setCanCall( + address callerAddress, + address codeAddress, + bytes4 sig, + bool can + ) + public + returns (bool); + + function setAnyoneCanCall( + address codeAddress, + bytes4 sig, + bool can + ) + public + returns (bool); +} + + +contract WhitelistAuthority is WhitelistAuthorityInterface, Authorized { + mapping (address => mapping (address => mapping (bytes4 => bool))) _canCall; + mapping (address => mapping (bytes4 => bool)) _anyoneCanCall; + + function canCall( + address callerAddress, + address codeAddress, + bytes4 sig + ) + public + view + returns (bool) + { + if (_anyoneCanCall[codeAddress][sig]) { + return true; + } else { + return _canCall[callerAddress][codeAddress][sig]; + } + } + + function setCanCall( + address callerAddress, + address codeAddress, + bytes4 sig, + bool can + ) + public + auth + returns (bool) + { + _canCall[callerAddress][codeAddress][sig] = can; + emit SetCanCall(callerAddress, codeAddress, sig, can); + return true; + } + + function setAnyoneCanCall( + address codeAddress, + bytes4 sig, + bool can + ) + public + auth + returns (bool) + { + _anyoneCanCall[codeAddress][sig] = can; + emit SetAnyoneCanCall(codeAddress, sig, can); + return true; + } +} diff --git a/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/IndexedOrderedSetLib.sol b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/IndexedOrderedSetLib.sol new file mode 100644 index 00000000..a7fb2067 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/IndexedOrderedSetLib.sol @@ -0,0 +1,106 @@ +pragma solidity ^0.4.24; +pragma experimental "v0.5.0"; + +/// @title Library implementing an array type which allows O(1) lookups on values. +/// @author Piper Merriam +library IndexedOrderedSetLib { + struct IndexedOrderedSet { + bytes32[] _values; + mapping (bytes32 => uint) _valueIndices; + mapping (bytes32 => bool) _exists; + } + + modifier requireValue(IndexedOrderedSet storage self, bytes32 value) { + require(contains(self, value), "escape:IndexedOrderedSetLib:value-not-found"); + _; + } + + /// @dev Returns the size of the set + /// @param self The set + function size(IndexedOrderedSet storage self) + public + view + returns (uint) + { + return self._values.length; + } + + /// @dev Returns boolean if the key is in the set + /// @param self The set + /// @param value The value to check + function contains(IndexedOrderedSet storage self, bytes32 value) + public + view + returns (bool) + { + return self._exists[value]; + } + + /// @dev Returns the index of the value in the set. + /// @param self The set + /// @param value The value to look up the index for. + function indexOf(IndexedOrderedSet storage self, bytes32 value) + public + view + requireValue(self, value) + returns (uint) + { + return self._valueIndices[value]; + } + + /// @dev Removes the element at index idx from the set and returns it. + /// @param self The set + /// @param idx The index to remove and return. 
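+ // How pop() below deletes in O(1) ("swap and pop"): rather than shifting
+ // every later element down, it copies the *last* element into the vacated
+ // slot, repoints that element's recorded index, and shrinks the array.
+ // Illustrative walk-through (values assumed, not from the source): with
+ // _values = [A, B, C], pop(self, 0) reads value A, copies C into slot 0,
+ // sets _valueIndices[C] = 0, truncates _values to [C, B], and finally
+ // clears A's entries in _valueIndices and _exists. Insertion order is not
+ // preserved across removals, so callers must re-query indexOf() rather
+ // than cache indices.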
+ function pop(IndexedOrderedSet storage self, uint idx) public returns (bytes32) { + bytes32 value = get(self, idx); + + if (idx != self._values.length - 1) { + bytes32 movedValue = self._values[self._values.length - 1]; + self._values[idx] = movedValue; + self._valueIndices[movedValue] = idx; + } + self._values.length -= 1; + + delete self._valueIndices[value]; + delete self._exists[value]; + + return value; + } + + /// @dev Removes the given value from the set. + /// @param self The set + /// @param value The value to remove from the set. + function remove(IndexedOrderedSet storage self, bytes32 value) + public + requireValue(self, value) + returns (bool) + { + uint idx = indexOf(self, value); + pop(self, idx); + return true; + } + + /// @dev Retrieves the element at the provided index. + /// @param self The set + /// @param idx The index to retrieve. + function get(IndexedOrderedSet storage self, uint idx) + public + view + returns (bytes32) + { + return self._values[idx]; + } + + /// @dev Pushes the new value onto the set + /// @param self The set + /// @param value The value to push. + function add(IndexedOrderedSet storage self, bytes32 value) public returns (bool) { + if (contains(self, value)) return true; + + self._valueIndices[value] = self._values.length; + self._values.push(value); + self._exists[value] = true; + + return true; + } +} diff --git a/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/PackageDB.sol b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/PackageDB.sol new file mode 100644 index 00000000..93a74cad --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/PackageDB.sol @@ -0,0 +1,225 @@ +pragma solidity ^0.4.24; +pragma experimental "v0.5.0"; + +import {IndexedOrderedSetLib} from "./IndexedOrderedSetLib.sol"; +import {Authorized} from "./Authority.sol"; + + +/// @title Database contract for a package index's package data. +/// @author Tim Coulter, Piper Merriam +contract PackageDB is Authorized { + using IndexedOrderedSetLib for IndexedOrderedSetLib.IndexedOrderedSet; + + struct Package { + bool exists; + uint createdAt; + uint updatedAt; + string name; + address owner; + } + + // Package Data: (nameHash => value) + mapping (bytes32 => Package) _recordedPackages; + IndexedOrderedSetLib.IndexedOrderedSet _allPackageNameHashes; + + // Events + event PackageReleaseAdd(bytes32 indexed nameHash, bytes32 indexed releaseHash); + event PackageReleaseRemove(bytes32 indexed nameHash, bytes32 indexed releaseHash); + event PackageCreate(bytes32 indexed nameHash); + event PackageDelete(bytes32 indexed nameHash, string reason); + event PackageOwnerUpdate(bytes32 indexed nameHash, address indexed oldOwner, address indexed newOwner); + + /* + * Modifiers + */ + modifier onlyIfPackageExists(bytes32 nameHash) { + require(packageExists(nameHash), "escape:PackageDB:package-not-found"); + _; + } + + // + // +-------------+ + // | Write API | + // +-------------+ + // + + /// @dev Creates or updates a package record. Returns success. + /// @param name Package name + function setPackage(string name) + public + auth + returns (bool) + { + // Hash the name for storing data + bytes32 nameHash = hashName(name); + + Package storage package = _recordedPackages[nameHash]; + + // Mark the package as existing if it isn't already tracked. 
+ if (!packageExists(nameHash)) { + + // Set package data + package.exists = true; + package.createdAt = block.timestamp; // solium-disable-line security/no-block-members + package.name = name; + + // Add the nameHash to the list of all package nameHashes. + _allPackageNameHashes.add(nameHash); + + emit PackageCreate(nameHash); + } + + package.updatedAt = block.timestamp; // solium-disable-line security/no-block-members + + return true; + } + + /// @dev Removes a package from the package db. Packages with existing releases may not be removed. Returns success. + /// @param nameHash The name hash of a package. + /// @param reason Explanation for why the removal happened. + function removePackage(bytes32 nameHash, string reason) + public + auth + onlyIfPackageExists(nameHash) + returns (bool) + { + emit PackageDelete(nameHash, reason); + + delete _recordedPackages[nameHash]; + _allPackageNameHashes.remove(nameHash); + + return true; + } + + /// @dev Sets the owner of a package to the provided address. Returns success. + /// @param nameHash The name hash of a package. + /// @param newPackageOwner The address of the new owner. + function setPackageOwner(bytes32 nameHash, address newPackageOwner) + public + auth + onlyIfPackageExists(nameHash) + returns (bool) + { + emit PackageOwnerUpdate(nameHash, _recordedPackages[nameHash].owner, newPackageOwner); + + _recordedPackages[nameHash].owner = newPackageOwner; + _recordedPackages[nameHash].updatedAt = block.timestamp; // solium-disable-line security/no-block-members + + return true; + } + + // + // +------------+ + // | Read API | + // +------------+ + // + + /// @dev Query the existence of a package with the given name. Returns boolean indicating whether the package exists. + /// @param nameHash The name hash of a package. + function packageExists(bytes32 nameHash) + public + view + returns (bool) + { + return _recordedPackages[nameHash].exists; + } + + /// @dev Return the total number of packages + function getNumPackages() + public + view + returns (uint) + { + return _allPackageNameHashes.size(); + } + + /// @dev Returns package namehash at the provided index from the set of all known name hashes. + /// @param idx The index of the package name hash to retrieve. + function getPackageNameHash(uint idx) + public + view + returns (bytes32) + { + return _allPackageNameHashes.get(idx); + } + + /// @dev Returns information about the package. + /// @param nameHash The name hash to look up. + function getPackageData(bytes32 nameHash) + public + view + onlyIfPackageExists(nameHash) + returns ( + address packageOwner, + uint createdAt, + uint updatedAt + ) + { + Package storage package = _recordedPackages[nameHash]; + return (package.owner, package.createdAt, package.updatedAt); + } + + /// @dev Returns the package name for the given namehash + /// @param nameHash The name hash to look up. + function getPackageName(bytes32 nameHash) + public + view + onlyIfPackageExists(nameHash) + returns (string) + { + return _recordedPackages[nameHash].name; + } + + /// @dev Returns a slice of the array of all package name hashes in the registry. + /// @param offset The starting index for the slice. 
+ /// @param limit The length of the slice + function getAllPackageIds(uint _offset, uint limit) + public + view + returns ( + bytes32[] packageIds, + uint offset + ) + { + bytes32[] memory hashes; // Array of package ids to return + uint cursor = _offset; // Index counter to traverse DB array + uint remaining; // Counter to collect `limit` packages + uint totalPackages = getNumPackages(); // Total number of packages in registry + + // Is request within range? + if (cursor < totalPackages){ + + // Get total remaining records + remaining = totalPackages - cursor; + + // Number of records to collect is the lesser of `remaining` and `limit` + if (remaining > limit) { + remaining = limit; + } + + // Allocate return array + hashes = new bytes32[](remaining); + + // Collect records. (IndexedOrderedSet manages deletions.) + while(remaining > 0){ + bytes32 hash = getPackageNameHash(cursor); + hashes[remaining - 1] = hash; + remaining--; + cursor++; + } + } + return (hashes, cursor); + } + + /* + * Hash Functions + */ + /// @dev Returns name hash for a given package name. + /// @param name Package name + function hashName(string name) + public + pure + returns (bytes32) + { + return keccak256(abi.encodePacked(name)); + } +} diff --git a/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/PackageRegistry.sol b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/PackageRegistry.sol new file mode 100644 index 00000000..f825c67d --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/PackageRegistry.sol @@ -0,0 +1,361 @@ +pragma solidity ^0.4.24; +pragma experimental "v0.5.0"; + + +import {PackageDB} from "./PackageDB.sol"; +import {ReleaseDB} from "./ReleaseDB.sol"; +import {ReleaseValidator} from "./ReleaseValidator.sol"; +import {PackageRegistryInterface} from "./PackageRegistryInterface.sol"; +import {Authorized} from "./Authority.sol"; + + +/// @title Database contract for a package index. +/// @author Tim Coulter, Piper Merriam +contract PackageRegistry is Authorized, PackageRegistryInterface { + PackageDB private packageDb; + ReleaseDB private releaseDb; + ReleaseValidator private releaseValidator; + + // Events + event VersionRelease(string packageName, string version, string manifestURI); + event PackageTransfer(address indexed oldOwner, address indexed newOwner); + + // + // Administrative API + // + /// @dev Sets the address of the PackageDb contract. + /// @param newPackageDb The address to set for the PackageDb. + function setPackageDb(address newPackageDb) + public + auth + returns (bool) + { + packageDb = PackageDB(newPackageDb); + return true; + } + + /// @dev Sets the address of the ReleaseDb contract. + /// @param newReleaseDb The address to set for the ReleaseDb. + function setReleaseDb(address newReleaseDb) + public + auth + returns (bool) + { + releaseDb = ReleaseDB(newReleaseDb); + return true; + } + + /// @dev Sets the address of the ReleaseValidator contract. + /// @param newReleaseValidator The address to set for the ReleaseValidator. + function setReleaseValidator(address newReleaseValidator) + public + auth + returns (bool) + { + releaseValidator = ReleaseValidator(newReleaseValidator); + return true; + } + + // + // +-------------+ + // | Write API | + // +-------------+ + // + /// @dev Creates a new release for the named package. If this is the first release for the given package then this will also assign msg.sender as the owner of the package. Returns the release id. 
+ /// @notice Will create a new release of the given package with the given release information. + /// @param packageName Package name + /// @param version Version string (ex: '1.0.0') + /// @param manifestURI The URI for the release manifest for this release. + function release( + string packageName, + string version, + string manifestURI + ) + public + auth + returns (bytes32 releaseId) + { + require(address(packageDb) != 0x0, "escape:PackageIndex:package-db-not-set"); + require(address(releaseDb) != 0x0, "escape:PackageIndex:release-db-not-set"); + require(address(releaseValidator) != 0x0, "escape:PackageIndex:release-validator-not-set"); + + bytes32 versionHash = releaseDb.hashVersion(version); + + // If the version for this release is not in the version database, populate + // it. This must happen prior to validation to ensure that the version is + // present in the releaseDb. + if (!releaseDb.versionExists(versionHash)) { + releaseDb.setVersion(version); + } + + // Run release validator. This method reverts with an error message string + // on failure. + releaseValidator.validateRelease( + packageDb, + releaseDb, + msg.sender, + packageName, + version, + manifestURI + ); + + // Record whether the package already exists before it is (re)created. + bool _packageExists = packageExists(packageName); + + // Creates the package if it is new and updates the updatedAt + // timestamp on the package. + packageDb.setPackage(packageName); + + bytes32 nameHash = packageDb.hashName(packageName); + + // If the package does not yet exist create it and set the owner + if (!_packageExists) { + packageDb.setPackageOwner(nameHash, msg.sender); + } + + // Create the release and add it to the list of package release hashes. + releaseDb.setRelease(nameHash, versionHash, manifestURI); + + // Log the release. + releaseId = releaseDb.hashRelease(nameHash, versionHash); + emit VersionRelease(packageName, version, manifestURI); + + return releaseId; + } + + /// @dev Transfers package ownership to the provided new owner address. + /// @notice Will transfer ownership of this package to the provided new owner address. + /// @param name Package name + /// @param newPackageOwner The address of the new owner. + function transferPackageOwner(string name, address newPackageOwner) + public + auth + returns (bool) + { + // isPackageOwner() returns true when the caller is NOT the current owner + // (see its definition below), so only the package owner gets past this check. + if (isPackageOwner(name, msg.sender)) { + return false; + } + + // Lookup the current owner + address packageOwner; + (packageOwner,,,) = getPackageData(name); + + // Log the transfer + emit PackageTransfer(packageOwner, newPackageOwner); + + // Update the owner. + packageDb.setPackageOwner(packageDb.hashName(name), newPackageOwner); + + return true; + } + + // + // +------------+ + // | Read API | + // +------------+ + // + + /// @dev Returns the address of the packageDb + function getPackageDb() + public + view + returns (address) + { + return address(packageDb); + } + + /// @dev Returns the address of the releaseDb + function getReleaseDb() + public + view + returns (address) + { + return address(releaseDb); + } + + /// @dev Returns the address of the releaseValidator + function getReleaseValidator() + public + view + returns (address) + { + return address(releaseValidator); + } + + /// @dev Query the existence of a package with the given name. Returns boolean indicating whether the package exists. 
+ /// @param name Package name + function packageExists(string name) + public + view + returns (bool) + { + return packageDb.packageExists(packageDb.hashName(name)); + } + + /// @dev Query the existence of a release at the provided version for the named package. Returns boolean indicating whether such a release exists. + /// @param name Package name + /// @param version Version string (ex: '1.0.0') + function releaseExists( + string name, + string version + ) + public + view + returns (bool) + { + bytes32 nameHash = packageDb.hashName(name); + bytes32 versionHash = releaseDb.hashVersion(version); + return releaseDb.releaseExists(releaseDb.hashRelease(nameHash, versionHash)); + } + + /// @dev Returns a slice of the array of all package ids in the registry. + /// @param offset The starting index for the slice. + /// @param limit The length of the slice + function getAllPackageIds(uint offset, uint limit) + public + view + returns( + bytes32[] packageIds, + uint pointer + ) + { + return packageDb.getAllPackageIds(offset, limit); + } + + /// @dev Retrieves the name for the given name hash. + /// @param packageId The name hash of package to lookup the name for. + function getPackageName(bytes32 packageId) + public + view + returns (string packageName) + { + return packageDb.getPackageName(packageId); + } + + /// @dev Returns the package data. + /// @param name Package name + function getPackageData(string name) + public + view + returns ( + address packageOwner, + uint createdAt, + uint numReleases, + uint updatedAt + ) + { + bytes32 nameHash = packageDb.hashName(name); + (packageOwner, createdAt, updatedAt) = packageDb.getPackageData(nameHash); + numReleases = releaseDb.getNumReleasesForNameHash(nameHash); + return (packageOwner, createdAt, numReleases, updatedAt); + } + + /// @dev Returns the release data for the release associated with the given release hash. + /// @param releaseId The release hash. + function getReleaseData(bytes32 releaseId) + public + view + returns ( + string packageName, + string version, + string manifestURI + ) + { + bytes32 versionHash; + bytes32 nameHash; + (nameHash,versionHash, ,) = releaseDb.getReleaseData(releaseId); + + packageName = packageDb.getPackageName(nameHash); + version = releaseDb.getVersion(versionHash); + manifestURI = releaseDb.getManifestURI(releaseId); + + return (packageName, version, manifestURI); + } + + /// @dev Returns a slice of the array of all release ids for the named package. + /// @param packageName Package name + /// @param offset The starting index for the slice. + /// @param limit The length of the slice + function getAllReleaseIds(string packageName, uint offset, uint limit) + public + view + returns ( + bytes32[] releaseIds, + uint pointer + ) + { + bytes32 nameHash = packageDb.hashName(packageName); + return releaseDb.getAllReleaseIds(nameHash, offset, limit); + } + + /// @dev Returns release id that *would* be generated for a name and version pair on `release`. + /// @param packageName Package name + /// @param version Version string (ex: '1.0.0') + function generateReleaseId(string packageName, string version) + public + view + returns (bytes32 releaseId) + { + bytes32 nameHash = packageDb.hashName(packageName); + bytes32 versionHash = releaseDb.hashVersion(version); + return keccak256(abi.encodePacked(nameHash, versionHash)); + } + + /// @dev Returns the release id for a given name and version pair if present on registry. 
+ /// @param packageName Package name + /// @param version Version string (ex: '1.0.0') + function getReleaseId(string packageName, string version) + public + view + returns (bytes32 releaseId) + { + releaseId = generateReleaseId(packageName, version); + bool _releaseExists = releaseDb.releaseExists(releaseId); + if (!_releaseExists) { + return 0; + } + return releaseId; + } + + /// @dev Returns the number of packages stored on the registry + function numPackageIds() + public + view + returns (uint totalCount) + { + return packageDb.getNumPackages(); + } + + /// @dev Returns the number of releases for a given package name on the registry + /// @param packageName Package name + function numReleaseIds(string packageName) + public + view + returns (uint totalCount) + { + bool _packageExists = packageExists(packageName); + if (!_packageExists) { + return 0; + } + bytes32 nameHash = packageDb.hashName(packageName); + return releaseDb.getNumReleasesForNameHash(nameHash); + } + + // + // +----------------+ + // | Internal API | + // +----------------+ + // + /// @dev Returns true if the provided address is NOT the package owner. Note the inverted semantics, which transferPackageOwner relies on. + /// @param name The name of the package + /// @param _address The address to check + function isPackageOwner(string name, address _address) + internal + view + returns (bool) + { + address packageOwner; + (packageOwner,,,) = getPackageData(name); + return (packageOwner != _address); + } +} diff --git a/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/PackageRegistryInterface.sol b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/PackageRegistryInterface.sol new file mode 100644 index 00000000..f70535ba --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/PackageRegistryInterface.sol @@ -0,0 +1,97 @@ +pragma solidity ^0.4.24; +pragma experimental "v0.5.0"; + + +/// @title EIP 1319 Smart Contract Package Registry Interface +/// @author Piper Merriam, Christopher Gewecke +contract PackageRegistryInterface { + + // + // +-------------+ + // | Write API | + // +-------------+ + // + + /// @dev Creates a new release for the named package. + /// @notice Will create a new release of the given package with the given release information. + /// @param packageName Package name + /// @param version Version string (ex: 1.0.0) + /// @param manifestURI The URI for the release manifest for this release. + function release( + string packageName, + string version, + string manifestURI + ) + public + returns (bytes32 releaseId); + + // + // +------------+ + // | Read API | + // +------------+ + // + + /// @dev Returns the string name of the package associated with a package id + /// @param packageId The package id to look up + function getPackageName(bytes32 packageId) + public + view + returns (string packageName); + + /// @dev Returns a slice of the array of all package ids in the registry. + /// @param offset The starting index for the slice. + /// @param limit The length of the slice + function getAllPackageIds(uint offset, uint limit) + public + view + returns ( + bytes32[] packageIds, + uint pointer + ); + + /// @dev Returns a slice of the array of all release hashes for the named package. + /// @param packageName Package name + /// @param offset The starting index for the slice. + /// @param limit The length of the slice + function getAllReleaseIds(string packageName, uint offset, uint limit) + public + view + returns ( + bytes32[] releaseIds, + uint pointer + ); + + /// @dev Returns the package data for a release. 
+ /// @param releaseId Release id + function getReleaseData(bytes32 releaseId) + public + view + returns ( + string packageName, + string version, + string manifestURI + ); + + // @dev Returns release id that *would* be generated for a name and version pair on `release`. + // @param packageName Package name + // @param version Version string (ex: '1.0.0') + function generateReleaseId(string packageName, string version) + public + view + returns (bytes32 releaseId); + + /// @dev Returns the release id for a given name and version pair if present on registry. + /// @param packageName Package name + /// @param version Version string(ex: '1.0.0') + function getReleaseId(string packageName, string version) + public + view + returns (bytes32 releaseId); + + /// @dev Returns the number of packages stored on the registry + function numPackageIds() public view returns (uint totalCount); + + /// @dev Returns the number of releases for a given package name on the registry + /// @param packageName Package name + function numReleaseIds(string packageName) public view returns (uint totalCount); +} diff --git a/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/ReleaseDB.sol b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/ReleaseDB.sol new file mode 100644 index 00000000..a1d66c73 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/ReleaseDB.sol @@ -0,0 +1,309 @@ +pragma solidity ^0.4.24; +pragma experimental "v0.5.0"; + + +import {IndexedOrderedSetLib} from "./IndexedOrderedSetLib.sol"; +import {Authorized} from "./Authority.sol"; + + +/// @title Database contract for a package index. +/// @author Tim Coulter , Piper Merriam +contract ReleaseDB is Authorized { + using IndexedOrderedSetLib for IndexedOrderedSetLib.IndexedOrderedSet; + + struct Release { + bool exists; + uint createdAt; + uint updatedAt; + bytes32 nameHash; + bytes32 versionHash; + string manifestURI; + } + + // Release Data: (releaseId => value) + mapping (bytes32 => Release) _recordedReleases; + mapping (bytes32 => bool) _removedReleases; + IndexedOrderedSetLib.IndexedOrderedSet _allReleaseIds; + mapping (bytes32 => IndexedOrderedSetLib.IndexedOrderedSet) _releaseIdsByNameHash; + + // Version Data: (versionHash => value) + mapping (bytes32 => string) _recordedVersions; + mapping (bytes32 => bool) _versionExists; + + // Events + event ReleaseCreate(bytes32 indexed releaseId); + event ReleaseUpdate(bytes32 indexed releaseId); + event ReleaseDelete(bytes32 indexed releaseId, string reason); + + /* + * Modifiers + */ + modifier onlyIfVersionExists(bytes32 versionHash) { + require(versionExists(versionHash), "escape:ReleaseDB:version-not-found"); + _; + } + + modifier onlyIfReleaseExists(bytes32 releaseId) { + require(releaseExists(releaseId), "escape:ReleaseDB:release-not-found"); + _; + } + + // + // +-------------+ + // | Write API | + // +-------------+ + // + + /// @dev Creates or updates a release for a package. Returns success. + /// @param nameHash The name hash of the package. + /// @param versionHash The version hash for the release version. + /// @param manifestURI The URI for the release manifest for this release. + function setRelease( + bytes32 nameHash, + bytes32 versionHash, + string manifestURI + ) + public + auth + returns (bool) + { + bytes32 releaseId = hashRelease(nameHash, versionHash); + + Release storage release = _recordedReleases[releaseId]; + + // If this is a new version push it onto the array of version hashes for + // this package. 
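+ // The branch below makes setRelease() an upsert: for an already-recorded
+ // release only ReleaseUpdate is emitted, while a brand-new release is
+ // stamped with createdAt, linked to its name and version hashes, and
+ // indexed in both the global _allReleaseIds set and the per-package set
+ // keyed by nameHash. In either case updatedAt and manifestURI are
+ // refreshed after the branch.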
+ if (release.exists) { + emit ReleaseUpdate(releaseId); + } else { + // Populate the basic release data. + release.exists = true; + release.createdAt = block.timestamp; // solium-disable-line security/no-block-members + release.nameHash = nameHash; + release.versionHash = versionHash; + + // Push the release hash into the array of all release hashes. + _allReleaseIds.add(releaseId); + _releaseIdsByNameHash[nameHash].add(releaseId); + + emit ReleaseCreate(releaseId); + } + + // Record the last time the release was updated. + release.updatedAt = block.timestamp; // solium-disable-line security/no-block-members + + // Save the release manifest URI + release.manifestURI = manifestURI; + + return true; + } + + /// @dev Removes a release from a package. Returns success. + /// @param releaseId The release hash to be removed + /// @param reason Explanation for why the removal happened. + function removeRelease(bytes32 releaseId, string reason) + public + auth + onlyIfReleaseExists(releaseId) + returns (bool) + { + bytes32 nameHash; + bytes32 versionHash; + + (nameHash, versionHash,,) = getReleaseData(releaseId); + + // Zero out the release data. + delete _recordedReleases[releaseId]; + delete _recordedVersions[versionHash]; + + // Remove the release hash from the list of all release hashes + _allReleaseIds.remove(releaseId); + _releaseIdsByNameHash[nameHash].remove(releaseId); + + // Add the release hash to the map of removed releases + _removedReleases[releaseId] = true; + + // Log the removal. + emit ReleaseDelete(releaseId, reason); + + return true; + } + + + /// @dev Adds the given version to the local version database. Returns the versionHash for the provided version. + /// @param version Version string (ex: '1.0.0') + function setVersion(string version) + public + auth + returns (bytes32) + { + bytes32 versionHash = hashVersion(version); + + if (!_versionExists[versionHash]) { + _recordedVersions[versionHash] = version; + _versionExists[versionHash] = true; + } + return versionHash; + } + + // + // +------------+ + // | Read API | + // +------------+ + // + + /// @dev Returns a slice of the array of all releases hashes for the named package. + /// @param offset The starting index for the slice. + /// @param limit The length of the slice + function getAllReleaseIds(bytes32 nameHash, uint _offset, uint limit) + public + view + returns ( + bytes32[] releaseIds, + uint offset + ) + { + bytes32[] memory hashes; // Release ids to return + uint cursor = _offset; // Index counter to traverse DB array + uint remaining; // Counter to collect `limit` packages + uint totalReleases = getNumReleasesForNameHash(nameHash); // Total number of packages in registry + + // Is request within range? + if (cursor < totalReleases){ + + // Get total remaining records + remaining = totalReleases - cursor; + + // Number of records to collect is lesser of `remaining` and `limit` + if (remaining > limit ){ + remaining = limit; + } + + // Allocate return array + hashes = new bytes32[](remaining); + + // Collect records. (IndexedOrderedSet manages deletions.) + while(remaining > 0){ + bytes32 hash = getReleaseIdForNameHash(nameHash, cursor); + hashes[remaining - 1] = hash; + remaining--; + cursor++; + } + } + return (hashes, cursor); + } + + /// @dev Get the total number of releases + /// @param nameHash the name hash to lookup. 
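+ // A note on the paging loop in getAllReleaseIds() above (PackageDB's
+ // getAllPackageIds() uses the same pattern): each call returns at most
+ // `limit` ids plus an advanced cursor, and the caller feeds the cursor
+ // back in as the next offset until it reaches the total count. Because
+ // the loop writes results back-to-front (hashes[remaining - 1] receives
+ // the id read at the current cursor first), each page comes out in
+ // reverse storage order. Illustrative walk-through: with 7 releases,
+ // offset 0 and limit 5 return [id4, id3, id2, id1, id0] and cursor 5;
+ // a second call at offset 5 returns [id6, id5] and cursor 7, which
+ // equals the total, so the caller stops.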
+ function getNumReleasesForNameHash(bytes32 nameHash) + public + view + returns (uint) + { + return _releaseIdsByNameHash[nameHash].size(); + } + + /// @dev Release hash for a Package at a given index + /// @param nameHash the name hash to lookup. + /// @param idx The index of the release hash to retrieve. + function getReleaseIdForNameHash(bytes32 nameHash, uint idx) + public + view + returns (bytes32) + { + return _releaseIdsByNameHash[nameHash].get(idx); + } + + /// @dev Query the existence of a release at the provided version for a package. Returns boolean indicating whether such a release exists. + /// @param releaseId The release hash to query. + function releaseExists(bytes32 releaseId) + public + view + returns (bool) + { + return _recordedReleases[releaseId].exists; + } + + /// @dev Query the past existence of a release at the provided version for a package. Returns boolean indicating whether such a release ever existed. + /// @param releaseHash The release hash to query. + function releaseExisted(bytes32 releaseHash) + public + view + returns (bool) + { + return _removedReleases[releaseHash]; + } + + /// @dev Query the existence of the provided version in the recorded versions. Returns boolean indicating whether such a version exists. + /// @param versionHash the version hash to check. + function versionExists(bytes32 versionHash) + public + view + returns (bool) + { + return _versionExists[versionHash]; + } + + /// @dev Returns the releaseData for the given release hash of a package. + /// @param releaseId The release hash. + function getReleaseData(bytes32 releaseId) + public + view + onlyIfReleaseExists(releaseId) + returns ( + bytes32 nameHash, + bytes32 versionHash, + uint createdAt, + uint updatedAt + ) + { + Release storage release = _recordedReleases[releaseId]; + return (release.nameHash, release.versionHash, release.createdAt, release.updatedAt); + } + + /// @dev Returns string version identifier from the version of the given release hash. + /// @param versionHash the version hash + function getVersion(bytes32 versionHash) + public + view + onlyIfVersionExists(versionHash) + returns (string) + { + return _recordedVersions[versionHash]; + } + + /// @dev Returns the URI of the release manifest for the given release hash. + /// @param releaseId Release hash + function getManifestURI(bytes32 releaseId) + public + view + onlyIfReleaseExists(releaseId) + returns (string) + { + return _recordedReleases[releaseId].manifestURI; + } + + /* + * Hash Functions + */ + /// @dev Returns version hash for the given semver version. + /// @param version Version string + function hashVersion(string version) + public + pure + returns (bytes32) + { + return keccak256(abi.encodePacked(version)); + } + + /// @dev Returns release hash for the given release + /// @param nameHash The name hash of the package name. + /// @param versionHash The version hash for the release version. 
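+ // hashRelease() below is the canonical release-id derivation shared by
+ // these contracts: releaseId = keccak256(abi.encodePacked(nameHash,
+ // versionHash)), with nameHash = keccak256(name) (PackageDB.hashName)
+ // and versionHash = keccak256(version) (hashVersion above). Because the
+ // id is a pure function of the name/version pair, a client can compute
+ // it off-chain and look a release up directly instead of enumerating;
+ // PackageRegistry.generateReleaseId() exposes exactly this computation.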
+ function hashRelease(bytes32 nameHash, bytes32 versionHash) + public + pure + returns (bytes32) + { + return keccak256(abi.encodePacked(nameHash, versionHash)); + } +} diff --git a/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/ReleaseValidator.sol b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/ReleaseValidator.sol new file mode 100644 index 00000000..ed13ff1b --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/registry/contracts/ReleaseValidator.sol @@ -0,0 +1,152 @@ +pragma solidity ^0.4.24; +pragma experimental "v0.5.0"; + +import {PackageDB} from "./PackageDB.sol"; +import {ReleaseDB} from "./ReleaseDB.sol"; + +/// @title Database contract for a package index. +/// @author Piper Merriam +contract ReleaseValidator { + /// @dev Runs validation on all of the data needed for releasing a package. Returns success. + /// @param packageDb The address of the PackageDB + /// @param releaseDb The address of the ReleaseDB + /// @param callerAddress The address which is attempting to create the release. + /// @param name The name of the package. + /// @param version The version string of the package (ex: `1.0.0`) + /// @param manifestURI The URI of the release manifest. + function validateRelease( + PackageDB packageDb, + ReleaseDB releaseDb, + address callerAddress, + string name, + string version, + string manifestURI + ) + public + view + returns (bool) + { + if (address(packageDb) == 0x0){ + // packageDb address is null + revert("escape:ReleaseValidator:package-db-not-set"); + } else if (address(releaseDb) == 0x0){ + // releaseDb address is null + revert("escape:ReleaseValidator:release-db-not-set"); + } else if (!validateAuthorization(packageDb, callerAddress, name)) { + // package exists and msg.sender is not the owner not the package owner. + revert("escape:ReleaseValidator:caller-not-authorized"); + } else if (!validateIsNewRelease(packageDb, releaseDb, name, version)) { + // this version has already been released. + revert("escape:ReleaseValidator:version-previously-published"); + } else if (!validatePackageName(packageDb, name)) { + // invalid package name. + revert("escape:ReleaseValidator:invalid-package-name"); + } else if (!validateStringIdentifier(manifestURI)) { + // disallow empty release manifest URI + revert("escape:ReleaseValidator:invalid-manifest-uri"); + } else if (!validateStringIdentifier(version)) { + // disallow version 0.0.0 + revert("escape:ReleaseValidator:invalid-release-version"); + } + return true; + } + + /// @dev Validate whether the callerAddress is authorized to make this release. + /// @param packageDb The address of the PackageDB + /// @param callerAddress The address which is attempting to create the release. + /// @param name The name of the package. + function validateAuthorization( + PackageDB packageDb, + address callerAddress, + string name + ) + public + view + returns (bool) + { + bytes32 nameHash = packageDb.hashName(name); + if (!packageDb.packageExists(nameHash)) { + return true; + } + address packageOwner; + + (packageOwner,,) = packageDb.getPackageData(nameHash); + + if (packageOwner == callerAddress) { + return true; + } + return false; + } + + /// @dev Validate that the version being released has not already been released. + /// @param packageDb The address of the PackageDB + /// @param releaseDb The address of the ReleaseDB + /// @param name The name of the package. 
+ /// @param version The version string for the release + function validateIsNewRelease( + PackageDB packageDb, + ReleaseDB releaseDb, + string name, + string version + ) + public + view + returns (bool) + { + bytes32 nameHash = packageDb.hashName(name); + bytes32 versionHash = releaseDb.hashVersion(version); + bytes32 releaseHash = releaseDb.hashRelease(nameHash, versionHash); + return !releaseDb.releaseExists(releaseHash) && !releaseDb.releaseExisted(releaseHash); + } + + uint constant DIGIT_0 = uint(bytes1("0")); + uint constant DIGIT_9 = uint(bytes1("9")); + uint constant LETTER_a = uint(bytes1("a")); + uint constant LETTER_z = uint(bytes1("z")); + bytes1 constant DASH = bytes1("-"); + + /// @dev Returns boolean whether the provided package name is valid. + /// @param packageDb The address of the PackageDB + /// @param name The name of the package. + function validatePackageName(PackageDB packageDb, string name) + public + view + returns (bool) + { + bytes32 nameHash = packageDb.hashName(name); + + if (packageDb.packageExists(nameHash)) { + // existing names are always valid. + return true; + } + + if (bytes(name).length < 2 || bytes(name).length > 255) { + return false; + } + for (uint i = 0; i < bytes(name).length; i++) { + if (bytes(name)[i] == DASH && i > 0) { + continue; + } else if (i > 0 && uint(bytes(name)[i]) >= DIGIT_0 && uint(bytes(name)[i]) <= DIGIT_9) { + continue; + } else if (uint(bytes(name)[i]) >= LETTER_a && uint(bytes(name)[i]) <= LETTER_z) { + continue; + } else { + return false; + } + } + return true; + } + + /// @dev Returns boolean whether the input string has a length + /// @param value The string to validate. + function validateStringIdentifier(string value) + public + pure + returns (bool) + { + if (bytes(value).length == 0) { + return false; + } + return true; + } +} diff --git a/env/lib/python3.8/site-packages/ethpm/assets/simple-registry/contracts/Ownable.sol b/env/lib/python3.8/site-packages/ethpm/assets/simple-registry/contracts/Ownable.sol new file mode 100644 index 00000000..b2bd6f66 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/simple-registry/contracts/Ownable.sol @@ -0,0 +1,63 @@ +pragma solidity ^0.5.0; + +/** + * @dev Contract module which provides a basic access control mechanism, where + * there is an account (an owner) that can be granted exclusive access to + * specific functions. + * + * This module is used through inheritance. It will make available the modifier + * `onlyOwner`, which can be applied to your functions to restrict their use to + * the owner. + */ +contract Ownable { + address private _owner; + + event OwnershipTransferred(address indexed previousOwner, address indexed newOwner); + + /** + * @dev Initializes the contract setting the deployer as the initial owner. + */ + constructor () internal { + _owner = msg.sender; + emit OwnershipTransferred(address(0), _owner); + } + + /** + * @dev Returns the address of the current owner. + */ + function owner() public view returns (address) { + return _owner; + } + + /** + * @dev Throws if called by any account other than the owner. + */ + modifier onlyOwner() { + require(isOwner(), "Ownable: caller is not the owner"); + _; + } + + /** + * @dev Returns true if the caller is the current owner. + */ + function isOwner() public view returns (bool) { + return msg.sender == _owner; + } + + /** + * @dev Transfers ownership of the contract to a new account (`newOwner`). + * Can only be called by the current owner. 
+ */ + function transferOwnership(address newOwner) public onlyOwner { + _transferOwnership(newOwner); + } + + /** + * @dev Transfers ownership of the contract to a new account (`newOwner`). + */ + function _transferOwnership(address newOwner) internal { + require(newOwner != address(0), "Ownable: new owner is the zero address"); + emit OwnershipTransferred(_owner, newOwner); + _owner = newOwner; + } +} diff --git a/env/lib/python3.8/site-packages/ethpm/assets/simple-registry/contracts/PackageRegistry.sol b/env/lib/python3.8/site-packages/ethpm/assets/simple-registry/contracts/PackageRegistry.sol new file mode 100644 index 00000000..e1d478c0 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/simple-registry/contracts/PackageRegistry.sol @@ -0,0 +1,373 @@ +pragma solidity >=0.5.10; + +import {PackageRegistryInterface} from "./PackageRegistryInterface.sol"; +import {Ownable} from "./Ownable.sol"; + +/// @title Contract for an ERC1319 Registry, adapted from ethpm/escape-truffle +/// @author Nick Gheorghita +contract PackageRegistry is PackageRegistryInterface, Ownable { + struct Package { + bool exists; + uint createdAt; + uint updatedAt; + uint releaseCount; + string name; + } + + struct Release { + bool exists; + uint createdAt; + bytes32 packageId; + string version; + string manifestURI; + } + + mapping (bytes32 => Package) public packages; + mapping (bytes32 => Release) public releases; + + // package_id#release_count => release_id + mapping (bytes32 => bytes32) packageReleaseIndex; + // Total package number (int128) => package_id (bytes32) + mapping (uint => bytes32) allPackageIds; + // Total release number (int128) => release_id (bytes32) + mapping (uint => bytes32) allReleaseIds; + // Total number of packages in registry + uint public packageCount; + // Total number of releases in registry + uint public releaseCount; + + // Events + event VersionRelease(string packageName, string version, string manifestURI); + event PackageTransfer(address indexed oldOwner, address indexed newOwner); + + // Modifiers + modifier onlyIfPackageExists(string memory packageName) { + require(packageExists(packageName), "package-does-not-exist"); + _; + } + + modifier onlyIfReleaseExists(string memory packageName, string memory version) { + require (releaseExists(packageName, version), "release-does-not-exist"); + _; + } + + // + // =============== + // | Write API | + // =============== + // + + /// @dev Creates a new release for the named package. If this is the first release for the given + /// package then this will also create and store the package. Returns releaseID if successful. + /// @notice Will create a new release the given package with the given release information. + /// @param packageName Package name + /// @param version Version string (ex: '1.0.0') + /// @param manifestURI The URI for the release manifest for this release. 
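+ // Sketch of the control flow in release() below: the validate* calls
+ // revert on bad input; packageId = keccak256(packageName) and
+ // releaseId = keccak256(abi.encodePacked(packageName, version)) are
+ // derived; a first release creates the Package record and appends
+ // packageId to allPackageIds, while later releases only refresh
+ // updatedAt; finally cutRelease() rejects duplicate release ids and
+ // indexes the new release under keccak256(packageId, releaseCount),
+ // which is how releases are enumerated per package without a dynamic
+ // storage array.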
+ function release( + string memory packageName, + string memory version, + string memory manifestURI + ) + public + onlyOwner + returns (bytes32) + { + validatePackageName(packageName); + validateStringIdentifier(version); + validateStringIdentifier(manifestURI); + + // Compute hashes + bytes32 packageId = generatePackageId(packageName); + bytes32 releaseId = generateReleaseId(packageName, version); + Package storage package = packages[packageId]; + + // If the package does not yet exist create it + if (package.exists == false) { + package.exists = true; + package.createdAt = block.timestamp; + package.updatedAt = block.timestamp; + package.name = packageName; + package.releaseCount = 0; + allPackageIds[packageCount] = packageId; + packageCount++; + } else { + package.updatedAt = block.timestamp; + } + cutRelease(packageId, releaseId, packageName, version, manifestURI); + return releaseId; + } + + function cutRelease( + bytes32 packageId, + bytes32 releaseId, + string memory packageName, + string memory version, + string memory manifestURI + ) + private + { + Release storage newRelease = releases[releaseId]; + require(newRelease.exists == false, "release-already-exists"); + + // Store new release data + newRelease.exists = true; + newRelease.createdAt = block.timestamp; + newRelease.packageId = packageId; + newRelease.version = version; + newRelease.manifestURI = manifestURI; + + releases[releaseId] = newRelease; + allReleaseIds[releaseCount] = releaseId; + releaseCount++; + + // Update package's release count + Package storage package = packages[packageId]; + bytes32 packageReleaseId = generatePackageReleaseId(packageId, package.releaseCount); + packageReleaseIndex[packageReleaseId] = releaseId; + package.releaseCount++; + + // Log the release. + emit VersionRelease(packageName, version, manifestURI); + } + + // + // ============== + // | Read API | + // ============== + // + + /// @dev Returns the string name of the package associated with a package id + /// @param packageId The package id to look up + function getPackageName(bytes32 packageId) + public + view + returns (string memory packageName) + { + Package memory targetPackage = packages[packageId]; + require (targetPackage.exists == true, "package-does-not-exist"); + return targetPackage.name; + } + + /// @dev Returns a slice of the array of all package ids for the named package. + /// @param offset The starting index for the slice. + /// @param limit The length of the slice + function getAllPackageIds(uint offset, uint limit) + public + view + returns ( + bytes32[] memory packageIds, + uint pointer + ) + { + bytes32[] memory hashes; // Array of package ids to return + uint cursor = offset; // Index counter to traverse DB array + uint remaining; // Counter to collect `limit` packages + + // Is request within range? + if (cursor < packageCount){ + + // Get total remaining records + remaining = packageCount - cursor; + + // Number of records to collect is lesser of `remaining` and `limit` + if (remaining > limit ){ + remaining = limit; + } + + // Allocate return array + hashes = new bytes32[](remaining); + + // Collect records. + while(remaining > 0){ + bytes32 hash = allPackageIds[cursor]; + hashes[remaining - 1] = hash; + remaining--; + cursor++; + } + } + return (hashes, cursor); + } + + /// @dev Returns a slice of the array of all release hashes for the named package. + /// @param packageName Package name + /// @param offset The starting index for the slice. 
+ /// @param limit The length of the slice + function getAllReleaseIds(string memory packageName, uint offset, uint limit) + public + view + onlyIfPackageExists(packageName) + returns ( + bytes32[] memory releaseIds, + uint pointer + ) + { + bytes32 packageId = generatePackageId(packageName); + Package storage package = packages[packageId]; + bytes32[] memory hashes; // Release ids to return + uint cursor = offset; // Index counter to traverse DB array + uint remaining; // Counter to collect `limit` packages + uint numPackageReleases = package.releaseCount; // Total number of packages in registry + + // Is request within range? + if (cursor < numPackageReleases){ + + // Get total remaining records + remaining = numPackageReleases - cursor; + + // Number of records to collect is lesser of `remaining` and `limit` + if (remaining > limit ){ + remaining = limit; + } + + // Allocate return array + hashes = new bytes32[](remaining); + + // Collect records. + while(remaining > 0){ + bytes32 packageReleaseId = generatePackageReleaseId(packageId, cursor); + bytes32 hash = packageReleaseIndex[packageReleaseId]; + hashes[remaining - 1] = hash; + remaining--; + cursor++; + } + } + return (hashes, cursor); + } + + + /// @dev Returns the package data for a release. + /// @param releaseId Release id + function getReleaseData(bytes32 releaseId) + public + view + returns ( + string memory packageName, string memory version, + string memory manifestURI + ) + { + Release memory targetRelease = releases[releaseId]; + Package memory targetPackage = packages[targetRelease.packageId]; + return (targetPackage.name, targetRelease.version, targetRelease.manifestURI); + } + + /// @dev Returns the release id for a given name and version pair if present on registry. + /// @param packageName Package name + /// @param version Version string(ex: '1.0.0') + function getReleaseId(string memory packageName, string memory version) + public + view + onlyIfPackageExists(packageName) + onlyIfReleaseExists(packageName, version) + returns (bytes32 releaseId) + { + return generateReleaseId(packageName, version); + } + + /// @dev Returns the number of packages stored on the registry + function numPackageIds() public view returns (uint totalCount) + { + return packageCount; + } + + /// @dev Returns the number of releases for a given package name on the registry + /// @param packageName Package name + function numReleaseIds(string memory packageName) + public + view + onlyIfPackageExists(packageName) + returns (uint totalCount) + { + bytes32 packageId = generatePackageId(packageName); + Package storage package = packages[packageId]; + return package.releaseCount; + } + + /// @dev Returns a bool indicating whether the given release exists in this registry. + /// @param packageName Package Name + /// @param version version + function releaseExists(string memory packageName, string memory version) + public + view + onlyIfPackageExists(packageName) + returns (bool) + { + bytes32 releaseId = generateReleaseId(packageName, version); + Release storage targetRelease = releases[releaseId]; + return targetRelease.exists; + } + + /// @dev Returns a bool indicating whether the given package exists in this registry. 
+ /// @param packageName Package name + function packageExists(string memory packageName) public view returns (bool) { + bytes32 packageId = generatePackageId(packageName); + return packages[packageId].exists; + } + + // + // ==================== + // | Hash Functions | + // ==================== + // + + /// @dev Returns name hash for a given package name. + /// @param name Package name + function generatePackageId(string memory name) + public + pure + returns (bytes32) + { + return keccak256(abi.encodePacked(name)); + } + + // @dev Returns release id that *would* be generated for a name and version pair on `release`. + // @param packageName Package name + // @param version Version string (ex: '1.0.0') + function generateReleaseId( + string memory packageName, + string memory version + ) + public + view + returns (bytes32) + { + return keccak256(abi.encodePacked(packageName, version)); + } + + function generatePackageReleaseId( + bytes32 packageId, + uint packageReleaseCount + ) + private + pure + returns (bytes32) + { + return keccak256(abi.encodePacked(packageId, packageReleaseCount)); + } + + + // + // ================ + // | Validation | + // ================ + // + + /// @dev Returns true if the provided package name is valid; reverts otherwise. + /// @param name The name of the package. + function validatePackageName(string memory name) + public + pure + returns (bool) + { + require (bytes(name).length > 2 && bytes(name).length < 255, "invalid-package-name"); + // The require above reverts on an invalid name, so reaching this point means the name is valid. + return true; + } + + /// @dev Returns true if the input string is non-empty; reverts otherwise. + /// @param value The string to validate. + function validateStringIdentifier(string memory value) + public + pure + returns (bool) + { + require (bytes(value).length != 0, "invalid-string-identifier"); + // The require above reverts on an empty string, so reaching this point means the value is valid. + return true; + } +} diff --git a/env/lib/python3.8/site-packages/ethpm/assets/simple-registry/contracts/PackageRegistryInterface.sol b/env/lib/python3.8/site-packages/ethpm/assets/simple-registry/contracts/PackageRegistryInterface.sol new file mode 100644 index 00000000..59cc2726 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/simple-registry/contracts/PackageRegistryInterface.sol @@ -0,0 +1,96 @@ +pragma solidity >=0.5.10; + + +/// @title EIP 1319 Smart Contract Package Registry Interface +/// @author Piper Merriam, Christopher Gewecke +contract PackageRegistryInterface { + + // + // +-------------+ + // | Write API | + // +-------------+ + // + + /// @dev Creates a new release for the named package. + /// @notice Will create a new release of the given package with the given release information. + /// @param packageName Package name + /// @param version Version string (ex: 1.0.0) + /// @param manifestURI The URI for the release manifest for this release. + function release( + string memory packageName, + string memory version, + string memory manifestURI + ) + public + returns (bytes32 releaseId); + + // + // +------------+ + // | Read API | + // +------------+ + // + + /// @dev Returns the string name of the package associated with a package id + /// @param packageId The package id to look up + function getPackageName(bytes32 packageId) + public + view + returns (string memory packageName); + + /// @dev Returns a slice of the array of all package ids in the registry. + /// @param offset The starting index for the slice. 
+ /// @param limit The length of the slice + function getAllPackageIds(uint offset, uint limit) + public + view + returns ( + bytes32[] memory packageIds, + uint pointer + ); + + /// @dev Returns a slice of the array of all release hashes for the named package. + /// @param packageName Package name + /// @param offset The starting index for the slice. + /// @param limit The length of the slice + function getAllReleaseIds(string memory packageName, uint offset, uint limit) + public + view + returns ( + bytes32[] memory releaseIds, + uint pointer + ); + + /// @dev Returns the package data for a release. + /// @param releaseId Release id + function getReleaseData(bytes32 releaseId) + public + view + returns ( + string memory packageName, + string memory version, + string memory manifestURI + ); + + // @dev Returns release id that *would* be generated for a name and version pair on `release`. + // @param packageName Package name + // @param version Version string (ex: '1.0.0') + function generateReleaseId(string memory packageName, string memory version) + public + view + returns (bytes32 releaseId); + + /// @dev Returns the release id for a given name and version pair if present on registry. + /// @param packageName Package name + /// @param version Version string(ex: '1.0.0') + function getReleaseId(string memory packageName, string memory version) + public + view + returns (bytes32 releaseId); + + /// @dev Returns the number of packages stored on the registry + function numPackageIds() public view returns (uint totalCount); + + /// @dev Returns the number of releases for a given package name on the registry + /// @param packageName Package name + function numReleaseIds(string memory packageName) public view returns (uint totalCount); +} diff --git a/env/lib/python3.8/site-packages/ethpm/assets/vyper_registry/registry.vy b/env/lib/python3.8/site-packages/ethpm/assets/vyper_registry/registry.vy new file mode 100644 index 00000000..28ca4d92 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/vyper_registry/registry.vy @@ -0,0 +1,216 @@ +# Vyper Reference Implementation of ERC1319 + +# Structs +struct Package: + exists: bool + createdAt: timestamp + updatedAt: timestamp + name: bytes32 + releaseCount: int128 + +struct Release: + exists: bool + createdAt: timestamp + packageId: bytes32 + version: bytes32 + uri: bytes[1000] + + +# Events +VersionRelease: event({_package: indexed(bytes32), _version: bytes32, _uri: bytes[1000]}) + +owner: public(address) + +# Package Data: (package_id => value) +packages: public(map(bytes32, Package)) + +# Release Data: (release_id => value) +releases: public(map(bytes32, Release)) + + +# package_id#release_count => release_id +packageReleaseIndex: map(bytes32, bytes32) +# Total number of packages in registry +packageCount: public(int128) +# Total number of releases in registry +releaseCount: public(int128) +# Total package number (int128) => package_id (bytes32) +packageIds: map(int128, bytes32) +# Total release number (int128) => release_id (bytes32) +releaseIds: map(int128, bytes32) + +EMPTY_BYTES: bytes32 + + +@public +def __init__(): + self.owner = msg.sender + + +@public +def transferOwner(newOwner: address): + assert self.owner == msg.sender + self.owner = newOwner + + +@public +def getReleaseId(packageName: bytes32, version: bytes32) -> bytes32: + releaseConcat: bytes[64] = concat(packageName, version) + releaseId: bytes32 = sha3(releaseConcat) + assert self.releases[releaseId].exists + return releaseId + + +@public +def 
generateReleaseId(packageName: bytes32, version: bytes32) -> bytes32: + releaseConcat: bytes[64] = concat(packageName, version) + releaseId: bytes32 = sha3(releaseConcat) + return releaseId + + +@public +def getPackageName(packageId: bytes32) -> bytes32: + assert self.packages[packageId].exists + return self.packages[packageId].name + + +@public +def getPackageData(packageName: bytes32) -> (bytes32, bytes32, int128): + packageId: bytes32 = sha3(packageName) + assert self.packages[packageId].exists + return ( + self.packages[packageId].name, + packageId, + self.packages[packageId].releaseCount, + ) + + +@public +def numPackageIds() -> int128: + return self.packageCount + + +@public +def numReleaseIds(packageName: bytes32) -> int128: + packageId: bytes32 = sha3(packageName) + assert self.packages[packageId].exists + return self.packages[packageId].releaseCount + + +@public +def getAllPackageIds( + offset: uint256, length: uint256 +) -> (bytes32, bytes32, bytes32, bytes32, bytes32): + offset_int: int128 = convert(offset, int128) + length_int: int128 = convert(length, int128) + assert length_int == 5 + assert offset_int <= self.packageCount + ids: bytes32[5] + for idx in range(offset_int, offset_int + 4): + if idx <= self.packageCount: + packageId: bytes32 = self.packageIds[idx] + ids[(idx - offset_int)] = packageId + else: + ids[(idx - offset_int)] = self.EMPTY_BYTES + return (ids[0], ids[1], ids[2], ids[3], ids[4]) + + +@private +def generatePackageReleaseId(packageId: bytes32, count: int128) -> bytes32: + countBytes: bytes32 = convert(count, bytes32) + packageReleaseTag: bytes[64] = concat(packageId, countBytes) + packageReleaseId: bytes32 = sha3(packageReleaseTag) + return packageReleaseId + + +@public +def getAllReleaseIds( + packageName: bytes32, offset: uint256, length: uint256 +) -> (bytes32, bytes32, bytes32, bytes32, bytes32): + offset_int: int128 = convert(offset, int128) + length_int: int128 = convert(length, int128) + assert length_int == 5 + packageId: bytes32 = sha3(packageName) + assert self.packages[packageId].exists + assert offset_int <= self.packages[packageId].releaseCount + ids: bytes32[5] + for idx in range(offset_int, offset_int + 4): + if idx <= self.packages[packageId].releaseCount: + packageReleaseId: bytes32 = self.generatePackageReleaseId( + packageId, (idx + 1) + ) + releaseId: bytes32 = self.packageReleaseIndex[packageReleaseId] + ids[(idx - offset_int)] = releaseId + else: + ids[(idx - offset_int)] = self.EMPTY_BYTES + return (ids[0], ids[1], ids[2], ids[3], ids[4]) + + +@public +def getReleaseData(releaseId: bytes32) -> (bytes32, bytes32, bytes[1000]): + assert self.releases[releaseId].exists + packageId: bytes32 = self.releases[releaseId].packageId + return ( + self.packages[packageId].name, + self.releases[releaseId].version, + self.releases[releaseId].uri, + ) + + +@private +def cutRelease( + releaseId: bytes32, + packageId: bytes32, + version: bytes32, + uri: bytes[1000], + name: bytes32, +): + self.releases[releaseId] = Release({ + exists: True, + createdAt: block.timestamp, + packageId: packageId, + version: version, + uri: uri, + }) + self.packages[packageId].releaseCount += 1 + self.releaseIds[self.releaseCount] = releaseId + self.releaseCount += 1 + packageReleaseId: bytes32 = self.generatePackageReleaseId( + packageId, self.packages[packageId].releaseCount + ) + self.packageReleaseIndex[packageReleaseId] = releaseId + log.VersionRelease(name, version, uri) + + +@public +def release(packageName: bytes32, version: bytes32, manifestURI: bytes[1000]): + 
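+    # What the body below enforces, step by step: only the registry owner may
+    # release, and empty name / version / manifest values are rejected up
+    # front. The Package struct is then upserted (createdAt is preserved when
+    # an existing package is re-released), the releaseId is asserted unused,
+    # and cutRelease() stores the Release, bumps the package's releaseCount,
+    # and indexes the release id under sha3(concat(packageId, count)) for
+    # later enumeration. Unlike the Solidity registries above, names and
+    # versions here are fixed-width bytes32 values rather than strings.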
assert packageName != self.EMPTY_BYTES + assert version != self.EMPTY_BYTES + assert len(manifestURI) > 0 + assert self.owner == msg.sender + + + packageId: bytes32 = sha3(packageName) + releaseId: bytes32 = self.generateReleaseId(packageName, version) + + if self.packages[packageId].exists == True: + self.packages[packageId] = Package({ + exists: True, + createdAt: self.packages[packageId].createdAt, + updatedAt: block.timestamp, + name: packageName, + releaseCount: self.packages[packageId].releaseCount, + }) + assert self.releases[releaseId].exists == False + self.cutRelease(releaseId, packageId, version, manifestURI, packageName) + else: + self.packages[packageId] = Package({ + exists: True, + createdAt: block.timestamp, + updatedAt: block.timestamp, + name: packageName, + releaseCount: 0, + }) + self.packageIds[self.packageCount] = packageId + self.packageCount += 1 + self.cutRelease(releaseId, packageId, version, manifestURI, packageName) diff --git a/env/lib/python3.8/site-packages/ethpm/assets/vyper_registry/registry_with_delete.vy b/env/lib/python3.8/site-packages/ethpm/assets/vyper_registry/registry_with_delete.vy new file mode 100644 index 00000000..31385af3 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/assets/vyper_registry/registry_with_delete.vy @@ -0,0 +1,244 @@ +# Vyper Reference Implementation of ERC1319 - with delete +# Once all the releaseIds of a package are deleted - package namespace is permanently unavailable + +# Structs +struct Package: + exists: bool + createdAt: timestamp + updatedAt: timestamp + name: bytes32 + releaseCount: int128 + +struct Release: + exists: bool + createdAt: timestamp + packageId: bytes32 + version: bytes32 + uri: bytes[1000] + +# Events +VersionRelease: event({_package: indexed(bytes32), _version: bytes32, _uri: bytes[1000]}) + +owner: public(address) + +# Package Data: (package_id => value) +packages: public(map(bytes32, Package)) + +# Release Data: (release_id => value) +releases: public(map(bytes32, Release)) + +# package_id#release_count => release_id +packageReleaseIndex: map(bytes32, bytes32) +# Total number of packages in registry +totalPackageCount: public(int128) +activePackageCount: public(int128) +# Total number of releases in registry +totalReleaseCount: public(int128) +activeReleaseCount: public(int128) +# Total package number (int128) => package_id (bytes32) +packageIds: map(int128, bytes32) +# Total release number (int128) => release_id (bytes32) +releaseIds: map(int128, bytes32) + +EMPTY_BYTES: bytes32 + + +@public +def __init__(): + self.owner = msg.sender + + +@public +def transferOwner(newOwner: address): + assert self.owner == msg.sender + self.owner = newOwner + + +@public +def getReleaseId(packageName: bytes32, version: bytes32) -> bytes32: + releaseConcat: bytes[64] = concat(packageName, version) + releaseId: bytes32 = sha3(releaseConcat) + assert self.releases[releaseId].exists + return releaseId + + +@public +def generateReleaseId(packageName: bytes32, version: bytes32) -> bytes32: + releaseConcat: bytes[64] = concat(packageName, version) + releaseId: bytes32 = sha3(releaseConcat) + return releaseId + + +@public +def getPackageName(packageId: bytes32) -> bytes32: + assert self.packages[packageId].exists + return self.packages[packageId].name + + +@public +def getPackageData(packageName: bytes32) -> (bytes32, bytes32, int128): + packageId: bytes32 = sha3(packageName) + assert self.packages[packageId].exists + return ( + self.packages[packageId].name, + packageId, + self.packages[packageId].releaseCount, + ) 
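+
+# Counters: total*Count values only ever grow and index the append-only
+# packageIds / releaseIds maps; active*Count values are decremented by
+# deleteReleaseId, and numPackageIds() below reports the active count.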
+ + +@public +def numPackageIds() -> int128: + return self.activePackageCount + + +@public +def numReleaseIds(packageName: bytes32) -> int128: + packageId: bytes32 = sha3(packageName) + assert self.packages[packageId].exists + return self.packages[packageId].releaseCount + + +@public +def getAllPackageIds( + offset: uint256, length: uint256 +) -> (bytes32, bytes32, bytes32, bytes32, bytes32): + offset_int: int128 = convert(offset, int128) + length_int: int128 = convert(length, int128) + assert length_int == 5 + assert offset_int <= self.activePackageCount + ids: bytes32[5] + for idx in range(offset_int, offset_int + 4): + if idx <= self.activePackageCount: + packageId: bytes32 = self.packageIds[idx] + if self.packages[packageId].exists: + ids[(idx - offset_int)] = packageId + else: + ids[(idx - offset_int)] = self.EMPTY_BYTES + else: + ids[(idx - offset_int)] = self.EMPTY_BYTES + return (ids[0], ids[1], ids[2], ids[3], ids[4]) + + +@private +def generatePackageReleaseId(packageId: bytes32, count: int128) -> bytes32: + countBytes: bytes32 = convert(count, bytes32) + packageReleaseTag: bytes[64] = concat(packageId, countBytes) + packageReleaseId: bytes32 = sha3(packageReleaseTag) + return packageReleaseId + + +@public +def getAllReleaseIds( + packageName: bytes32, offset: uint256, length: uint256 +) -> (bytes32, bytes32, bytes32, bytes32, bytes32): + offset_int: int128 = convert(offset, int128) + length_int: int128 = convert(length, int128) + assert length_int == 5 + packageId: bytes32 = sha3(packageName) + assert self.packages[packageId].exists + assert offset_int <= self.packages[packageId].releaseCount + ids: bytes32[5] + for idx in range(offset_int, offset_int + 4): + if idx <= self.packages[packageId].releaseCount: + packageReleaseId: bytes32 = self.generatePackageReleaseId( + packageId, (idx + 1) + ) + releaseId: bytes32 = self.packageReleaseIndex[packageReleaseId] + if self.releases[releaseId].exists: + ids[(idx - offset_int)] = releaseId + else: + ids[(idx - offset_int)] = self.EMPTY_BYTES + else: + ids[(idx - offset_int)] = self.EMPTY_BYTES + return (ids[0], ids[1], ids[2], ids[3], ids[4]) + + +@public +def getReleaseData(releaseId: bytes32) -> (bytes32, bytes32, bytes[1000]): + assert self.releases[releaseId].exists + packageId: bytes32 = self.releases[releaseId].packageId + return ( + self.packages[packageId].name, + self.releases[releaseId].version, + self.releases[releaseId].uri, + ) + + +@private +def cutRelease( + releaseId: bytes32, + packageId: bytes32, + version: bytes32, + uri: bytes[1000], + name: bytes32, +): + assert self.releases[releaseId].createdAt == 0 + self.releases[releaseId] = Release({ + exists: True, + createdAt: block.timestamp, + packageId: packageId, + version: version, + uri: uri, + }) + self.packages[packageId].releaseCount += 1 + self.releaseIds[self.totalReleaseCount] = releaseId + self.totalReleaseCount += 1 + self.activeReleaseCount += 1 + packageReleaseId: bytes32 = self.generatePackageReleaseId( + packageId, self.packages[packageId].releaseCount + ) + self.packageReleaseIndex[packageReleaseId] = releaseId + log.VersionRelease(name, version, uri) + + +@public +def release(packageName: bytes32, version: bytes32, manifestURI: bytes[1000]): + assert packageName != self.EMPTY_BYTES + assert version != self.EMPTY_BYTES + assert len(manifestURI) > 0 + assert self.owner == msg.sender + + + packageId: bytes32 = sha3(packageName) + releaseId: bytes32 = self.generateReleaseId(packageName, version) + + if self.packages[packageId].exists == True: + 
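+        # Existing package: keep createdAt and releaseCount, refresh updatedAt.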
self.packages[packageId] = Package({
+            exists: True,
+            createdAt: self.packages[packageId].createdAt,
+            updatedAt: block.timestamp,
+            name: packageName,
+            releaseCount: self.packages[packageId].releaseCount,
+        })
+        self.cutRelease(releaseId, packageId, version, manifestURI, packageName)
+    else:
+        assert self.packages[packageId].createdAt == 0
+        self.packages[packageId] = Package({
+            exists: True,
+            createdAt: block.timestamp,
+            updatedAt: block.timestamp,
+            name: packageName,
+            releaseCount: 0,
+        })
+        self.packageIds[self.totalPackageCount] = packageId
+        self.totalPackageCount += 1
+        self.activePackageCount += 1
+        self.cutRelease(releaseId, packageId, version, manifestURI, packageName)
+
+
+@public
+def deleteReleaseId(releaseId: bytes32):
+    assert self.owner == msg.sender
+    assert self.releases[releaseId].exists
+    packageId: bytes32 = self.releases[releaseId].packageId
+    assert self.packages[packageId].exists
+    assert self.packages[packageId].releaseCount > 0
+    if self.packages[packageId].releaseCount == 1:
+        self.packages[packageId].releaseCount = 0
+        self.packages[packageId].exists = False
+        self.activePackageCount -= 1
+        self.activeReleaseCount -= 1
+    else:
+        self.packages[packageId].releaseCount -= 1
+        self.activeReleaseCount -= 1
+    self.releases[releaseId].exists = False
diff --git a/env/lib/python3.8/site-packages/ethpm/backends/__init__.py b/env/lib/python3.8/site-packages/ethpm/backends/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/env/lib/python3.8/site-packages/ethpm/backends/base.py b/env/lib/python3.8/site-packages/ethpm/backends/base.py
new file mode 100644
index 00000000..d607058f
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/backends/base.py
@@ -0,0 +1,43 @@
+from abc import (
+    ABC,
+    abstractmethod,
+)
+from typing import (
+    Union,
+)
+
+from eth_typing import (
+    URI,
+)
+
+
+class BaseURIBackend(ABC):
+    """
+    Generic backend that all URI backends are subclassed from.
+
+    All subclasses must implement:
+    can_resolve_uri, can_translate_uri, fetch_uri_contents
+    """
+
+    @abstractmethod
+    def can_resolve_uri(self, uri: URI) -> bool:
+        """
+        Return a bool indicating whether this backend class can
+        resolve the given URI to its contents.
+        """
+        pass
+
+    @abstractmethod
+    def can_translate_uri(self, uri: URI) -> bool:
+        """
+        Return a bool indicating whether this backend class can
+        translate the given URI to a corresponding content-addressed URI.
+        """
+        pass
+
+    @abstractmethod
+    def fetch_uri_contents(self, uri: URI) -> Union[bytes, URI]:
+        """
+        Fetch the contents stored at a URI.
+        """
+        pass
diff --git a/env/lib/python3.8/site-packages/ethpm/backends/http.py b/env/lib/python3.8/site-packages/ethpm/backends/http.py
new file mode 100644
index 00000000..e5fce69c
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/backends/http.py
@@ -0,0 +1,108 @@
+import base64
+import json
+from typing import (
+    Tuple,
+)
+from urllib import (
+    parse,
+)
+
+from eth_typing import (
+    URI,
+)
+from eth_utils import (
+    is_text,
+)
+import requests
+
+from ethpm.backends.base import (
+    BaseURIBackend,
+)
+from ethpm.constants import (
+    GITHUB_API_AUTHORITY,
+)
+from ethpm.exceptions import (
+    CannotHandleURI,
+)
+from ethpm.validation.uri import (
+    validate_blob_uri_contents,
+)
+
+
+class GithubOverHTTPSBackend(BaseURIBackend):
+    """
+    Base class for all URIs pointing to a content-addressed Github URI.
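+    Expected form: https://api.github.com/repos/:owner/:repo/git/blobs/:file_sha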
+    """
+
+    def can_resolve_uri(self, uri: URI) -> bool:
+        return is_valid_content_addressed_github_uri(uri)
+
+    def can_translate_uri(self, uri: URI) -> bool:
+        """
+        GithubOverHTTPSBackend URIs must resolve to a valid manifest,
+        and cannot translate to another content-addressed URI.
+        """
+        return False
+
+    def fetch_uri_contents(self, uri: URI) -> bytes:
+        if not self.can_resolve_uri(uri):
+            raise CannotHandleURI(f"GithubOverHTTPSBackend cannot resolve {uri}.")
+
+        response = requests.get(uri)
+        response.raise_for_status()
+        contents = json.loads(response.content)
+        if contents["encoding"] != "base64":
+            raise CannotHandleURI(
+                "Expected contents returned from Github to be base64 encoded, "
+                f"instead received {contents['encoding']}."
+            )
+        decoded_contents = base64.b64decode(contents["content"])
+        validate_blob_uri_contents(decoded_contents, uri)
+        return decoded_contents
+
+    @property
+    def base_uri(self) -> str:
+        return GITHUB_API_AUTHORITY
+
+
+def is_valid_content_addressed_github_uri(uri: URI) -> bool:
+    """
+    Returns a bool indicating whether the given uri conforms to this scheme.
+    https://api.github.com/repos/:owner/:repo/git/blobs/:file_sha
+    """
+    return is_valid_github_uri(uri, ("/repos/", "/git/", "/blobs/"))
+
+
+def is_valid_api_github_uri(uri: URI) -> bool:
+    """
+    Returns a bool indicating whether the given uri conforms to this scheme.
+    https://api.github.com/repos/:owner/:repo/contents/:path/:to/:file
+    """
+    return is_valid_github_uri(uri, ("/repos/", "/contents/"))
+
+
+def is_valid_github_uri(uri: URI, expected_path_terms: Tuple[str, ...]) -> bool:
+    """
+    Return a bool indicating whether or not the URI fulfills the following specs:
+    Valid Github URIs *must*:
+    - Have 'https' scheme
+    - Have 'api.github.com' authority
+    - Have a path that contains all "expected_path_terms"
+    """
+    if not is_text(uri):
+        return False
+
+    parsed = parse.urlparse(uri)
+    path, scheme, authority = parsed.path, parsed.scheme, parsed.netloc
+    if not all((path, scheme, authority)):
+        return False
+
+    if any(term for term in expected_path_terms if term not in path):
+        return False
+
+    if scheme != "https":
+        return False
+
+    if authority != GITHUB_API_AUTHORITY:
+        return False
+    return True
diff --git a/env/lib/python3.8/site-packages/ethpm/backends/ipfs.py b/env/lib/python3.8/site-packages/ethpm/backends/ipfs.py
new file mode 100644
index 00000000..1b9f0bcc
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/backends/ipfs.py
@@ -0,0 +1,211 @@
+from abc import (
+    abstractmethod,
+)
+import os
+from pathlib import (
+    Path,
+)
+from typing import (
+    Dict,
+    List,
+    Type,
+)
+
+from eth_utils import (
+    import_string,
+    to_bytes,
+)
+import ipfshttpclient
+
+from ethpm import (
+    get_ethpm_spec_dir,
+)
+from ethpm._utils.ipfs import (
+    dummy_ipfs_pin,
+    extract_ipfs_path_from_uri,
+    generate_file_hash,
+    is_ipfs_uri,
+)
+from ethpm.backends.base import (
+    BaseURIBackend,
+)
+from ethpm.constants import (
+    DEFAULT_IPFS_BACKEND,
+    INFURA_GATEWAY_MULTIADDR,
+    IPFS_GATEWAY_PREFIX,
+)
+from ethpm.exceptions import (
+    CannotHandleURI,
+    EthPMValidationError,
+)
+
+
+class BaseIPFSBackend(BaseURIBackend):
+    """
+    Base class for all URIs with an IPFS scheme.
+    """
+
+    def can_resolve_uri(self, uri: str) -> bool:
+        """
+        Return a bool indicating whether or not this backend
+        is capable of serving the content located at the URI.
+        """
+        return is_ipfs_uri(uri)
+
+    def can_translate_uri(self, uri: str) -> bool:
+        """
+        Return False. 
IPFS URIs cannot be used to point
+        to another content-addressed URI.
+        """
+        return False
+
+    @abstractmethod
+    def pin_assets(self, file_or_dir_path: Path) -> List[Dict[str, str]]:
+        """
+        Pin assets found at `file_or_dir_path` and return a
+        list containing pinned asset data.
+        """
+        pass
+
+
+class IPFSOverHTTPBackend(BaseIPFSBackend):
+    """
+    Base class for all IPFS URIs served over an HTTP connection.
+    All subclasses must implement: base_uri
+    """
+
+    def __init__(self) -> None:
+        self.client = ipfshttpclient.connect(self.base_uri)
+
+    def fetch_uri_contents(self, uri: str) -> bytes:
+        ipfs_hash = extract_ipfs_path_from_uri(uri)
+        contents = self.client.cat(ipfs_hash)
+        # Local validation of hashed contents only works for non-chunked files ~< 256kb
+        # Improved validation WIP @ https://github.com/ethpm/py-ethpm/pull/165
+        if len(contents) <= 262144:
+            validation_hash = generate_file_hash(contents)
+            if validation_hash != ipfs_hash:
+                raise EthPMValidationError(
+                    f"Hashed IPFS contents retrieved from uri: {uri} do not match its content hash."
+                )
+        return contents
+
+    @property
+    @abstractmethod
+    def base_uri(self) -> str:
+        pass
+
+    def pin_assets(self, file_or_dir_path: Path) -> List[Dict[str, str]]:
+        if file_or_dir_path.is_dir():
+            dir_data = self.client.add(str(file_or_dir_path), recursive=True)
+            return dir_data
+        elif file_or_dir_path.is_file():
+            file_data = self.client.add(str(file_or_dir_path), recursive=False)
+            return [file_data]
+        else:
+            raise TypeError(
+                f"{file_or_dir_path} is not a valid file or directory path."
+            )
+
+
+class IPFSGatewayBackend(IPFSOverHTTPBackend):
+    """
+    Backend class for all IPFS URIs served over the IPFS gateway.
+    """
+
+    # todo update this gateway to work r&w
+    # https://discuss.ipfs.io/t/writeable-http-gateways/210
+    @property
+    def base_uri(self) -> str:
+        return IPFS_GATEWAY_PREFIX
+
+    def pin_assets(self, file_or_dir_path: Path) -> List[Dict[str, str]]:
+        raise CannotHandleURI(
+            "IPFS gateway is currently disabled, please use a different IPFS backend."
+        )
+
+    def fetch_uri_contents(self, uri: str) -> bytes:
+        raise CannotHandleURI(
+            "IPFS gateway is currently disabled, please use a different IPFS backend."
+        )
+
+
+class InfuraIPFSBackend(IPFSOverHTTPBackend):
+    """
+    Backend class for all IPFS URIs served over the Infura IPFS gateway.
+    """
+
+    @property
+    def base_uri(self) -> str:
+        return INFURA_GATEWAY_MULTIADDR
+
+
+class LocalIPFSBackend(IPFSOverHTTPBackend):
+    """
+    Backend class for all IPFS URIs served through a direct connection to an IPFS node.
+    Default IPFS port = 5001
+    """
+
+    @property
+    def base_uri(self) -> str:
+        return "/ip4/127.0.0.1/tcp/5001"
+
+
+MANIFEST_URIS = {
+    "ipfs://QmQNffBrmbB3TuBCtYfYsJWJVLssatWXa3H6CkGeyNUySA": "standard-token",
+    "ipfs://QmWnPsiS3Xb8GvCDEBFnnKs8Yk4HaAX6rCqJAaQXGbCoPk": "safe-math-lib",
+    "ipfs://QmcxvhkJJVpbxEAa6cgW3B6XwPJb79w9GpNUv2P2THUzZR": "owned",
+}
+
+
+class DummyIPFSBackend(BaseIPFSBackend):
+    """
+    Backend class to serve IPFS URIs without having to make an HTTP request.
+    Used primarily for testing purposes, returns a locally stored manifest or contract.
+ --- + `ipfs_uri` can either be: + - Valid IPFS URI -> safe-math-lib manifest (ALWAYS) + - Path to manifest/contract in ethpm_spec_dir -> defined manifest/contract + """ + + def fetch_uri_contents(self, ipfs_uri: str) -> bytes: + pkg_name = MANIFEST_URIS[ipfs_uri] + ethpm_spec_dir = get_ethpm_spec_dir() + pkg_contents = (ethpm_spec_dir / "examples" / pkg_name / "v3.json").read_text() + return to_bytes(text=pkg_contents) + + def can_resolve_uri(self, uri: str) -> bool: + return uri in MANIFEST_URIS + + def pin_assets(self, file_or_dir_path: Path) -> List[Dict[str, str]]: + """ + Return a dict containing the IPFS hash, file name, and size of a file. + """ + if file_or_dir_path.is_dir(): + asset_data = [dummy_ipfs_pin(path) for path in file_or_dir_path.glob("*")] + elif file_or_dir_path.is_file(): + asset_data = [dummy_ipfs_pin(file_or_dir_path)] + else: + raise FileNotFoundError( + f"{file_or_dir_path} is not a valid file or directory path." + ) + return asset_data + + +def get_ipfs_backend(import_path: str = None) -> BaseIPFSBackend: + """ + Return the `BaseIPFSBackend` class specified by import_path, default, or env variable. + """ + backend_class = get_ipfs_backend_class(import_path) + return backend_class() + + +def get_ipfs_backend_class(import_path: str = None) -> Type[BaseIPFSBackend]: + if import_path is None: + import_path = os.environ.get("ETHPM_IPFS_BACKEND_CLASS", DEFAULT_IPFS_BACKEND) + if not import_path: + raise CannotHandleURI( + "Please provide an import class or set " + "`ETHPM_IPFS_BACKEND_CLASS` environment variable." + ) + return import_string(import_path) diff --git a/env/lib/python3.8/site-packages/ethpm/backends/registry.py b/env/lib/python3.8/site-packages/ethpm/backends/registry.py new file mode 100644 index 00000000..fe9e8828 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/backends/registry.py @@ -0,0 +1,142 @@ +from collections import ( + namedtuple, +) +from typing import ( + Optional, + Tuple, +) +from urllib import ( + parse, +) + +from eth_typing import ( + URI, +) +from eth_utils import ( + is_address, +) + +from ens import ENS +from ethpm._utils.registry import ( + fetch_standard_registry_abi, +) +from ethpm.backends.base import ( + BaseURIBackend, +) +from ethpm.exceptions import ( + CannotHandleURI, + EthPMValidationError, +) +from ethpm.validation.uri import ( + validate_registry_uri, +) + +# TODO: Update registry ABI once ERC is finalized. +REGISTRY_ABI = fetch_standard_registry_abi() +RegistryURI = namedtuple( + "RegistryURI", + ["address", "chain_id", "name", "version", "namespaced_asset", "ens"] +) + + +class RegistryURIBackend(BaseURIBackend): + """ + Backend class to handle Registry URIs. + + A Registry URI must resolve to a resolvable content-addressed URI. + """ + + def __init__(self) -> None: + from web3.auto.infura import w3 + self.w3 = w3 + + def can_translate_uri(self, uri: str) -> bool: + return is_valid_registry_uri(uri) + + def can_resolve_uri(self, uri: str) -> bool: + return False + + def fetch_uri_contents(self, uri: str) -> URI: + """ + Return content-addressed URI stored at registry URI. + """ + address, chain_id, pkg_name, pkg_version, _, _ = parse_registry_uri(uri) + if chain_id != '1': + # todo: support all testnets + raise CannotHandleURI( + "Currently only mainnet registry uris are supported." 
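+                # Note: parse_registry_uri (below) treats a URI with no explicit
+                # chain id in its authority as chain "1" (mainnet).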
+ ) + self.w3.enable_unstable_package_management_api() + self.w3.pm.set_registry(address) + _, _, manifest_uri = self.w3.pm.get_release_data(pkg_name, pkg_version) + return URI(manifest_uri) + + +def is_valid_registry_uri(uri: str) -> bool: + """ + Return a boolean indicating whether `uri` argument + conforms to the Registry URI scheme. + """ + try: + validate_registry_uri(uri) + except EthPMValidationError: + return False + else: + return True + + +def parse_registry_uri(uri: str) -> RegistryURI: + """ + Validate and return (authority, chain_id, pkg_name, version) from a valid registry URI. + """ + from web3.auto.infura import w3 + validate_registry_uri(uri) + parsed_uri = parse.urlparse(uri) + if ":" in parsed_uri.netloc: + address_or_ens, chain_id = parsed_uri.netloc.split(":") + else: + address_or_ens, chain_id = parsed_uri.netloc, "1" + ns = ENS.fromWeb3(w3) + if is_address(address_or_ens): + address = address_or_ens + ens = None + elif ns.address(address_or_ens): + address = ns.address(address_or_ens) + ens = address_or_ens + else: + raise CannotHandleURI( + f"Invalid address or ENS domain found in uri: {uri}." + ) + pkg_name, pkg_version, namespaced_asset = _process_pkg_path(parsed_uri.path) + return RegistryURI(address, chain_id, pkg_name, pkg_version, namespaced_asset, ens) + + +def _process_pkg_path(raw_pkg_path: str) -> Tuple[Optional[str], Optional[str], Optional[str]]: + pkg_path = raw_pkg_path.strip("/") + if not pkg_path: + return None, None, None + + pkg_id, namespaced_asset = _parse_pkg_path(pkg_path) + pkg_name, pkg_version = _parse_pkg_id(pkg_id) + if not pkg_version and namespaced_asset: + raise EthPMValidationError( + "Invalid registry URI, missing package version." + "Version is required if namespaced assets are defined." + ) + return pkg_name, pkg_version, namespaced_asset + + +def _parse_pkg_path(pkg_path: str) -> Tuple[str, Optional[str]]: + if "/" in pkg_path: + pkg_id, _, namespaced_asset = pkg_path.partition("/") + return pkg_id, namespaced_asset + else: + return pkg_path, None + + +def _parse_pkg_id(pkg_id: str) -> Tuple[str, Optional[str]]: + if "@" not in pkg_id: + return pkg_id, None + pkg_name, _, safe_pkg_version = pkg_id.partition("@") + pkg_version = parse.unquote(safe_pkg_version) + return pkg_name, pkg_version diff --git a/env/lib/python3.8/site-packages/ethpm/constants.py b/env/lib/python3.8/site-packages/ethpm/constants.py new file mode 100644 index 00000000..2ca5ca09 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/constants.py @@ -0,0 +1,19 @@ +REGISTRY_URI_SCHEMES = ("erc1319", "ethpm") + +PACKAGE_NAME_REGEX = "^[a-zA-Z][-_a-zA-Z0-9]{0,255}$" + +DEFAULT_IPFS_BACKEND = "ethpm.backends.ipfs.InfuraIPFSBackend" + +IPFS_GATEWAY_PREFIX = "https://ipfs.io/ipfs/" + +INFURA_GATEWAY_MULTIADDR = "/dns4/ipfs.infura.io/tcp/5001/https/" + +GITHUB_API_AUTHORITY = "api.github.com" + +SUPPORTED_CHAIN_IDS = { + 1: "mainnet", + 3: "ropsten", + 4: "rinkeby", + 5: "goerli", + 42: "kovan", +} diff --git a/env/lib/python3.8/site-packages/ethpm/contract.py b/env/lib/python3.8/site-packages/ethpm/contract.py new file mode 100644 index 00000000..4cedcd64 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/contract.py @@ -0,0 +1,178 @@ +from typing import ( # noqa: F401 + Any, + Dict, + List, + Optional, + Tuple, + Type, +) + +from eth_utils import ( + combomethod, + is_canonical_address, + to_bytes, + to_canonical_address, +) +from eth_utils.toolz import ( + assoc, + curry, + pipe, +) + +from ethpm.exceptions import ( + BytecodeLinkingError, + 
EthPMValidationError, +) +from ethpm.validation.misc import ( + validate_empty_bytes, +) +from web3 import Web3 +from web3._utils.validation import ( + validate_address, +) +from web3.contract import ( + Contract, + ContractConstructor, +) + + +class LinkableContract(Contract): + """ + A subclass of web3.contract.Contract that is capable of handling + contract factories with link references in their package's manifest. + """ + + unlinked_references: Optional[Tuple[Dict[str, Any]]] = None + linked_references: Optional[Tuple[Dict[str, Any]]] = None + needs_bytecode_linking = None + + def __init__(self, address: bytes, **kwargs: Any) -> None: + if self.needs_bytecode_linking: + raise BytecodeLinkingError( + "Contract cannot be instantiated until its bytecode is linked." + ) + validate_address(address) + # type ignored to allow for undefined **kwargs on `Contract` base class __init__ + super(LinkableContract, self).__init__(address=address, **kwargs) # type: ignore + + @classmethod + def factory( + cls, web3: Web3, class_name: str = None, **kwargs: Any + ) -> Contract: + dep_link_refs = kwargs.get("unlinked_references") + bytecode = kwargs.get("bytecode") + needs_bytecode_linking = False + if dep_link_refs and bytecode: + if not is_prelinked_bytecode(to_bytes(hexstr=bytecode), dep_link_refs): + needs_bytecode_linking = True + kwargs = assoc(kwargs, "needs_bytecode_linking", needs_bytecode_linking) + return super(LinkableContract, cls).factory(web3, class_name, **kwargs) + + @classmethod + def constructor(cls, *args: Any, **kwargs: Any) -> ContractConstructor: + if cls.needs_bytecode_linking: + raise BytecodeLinkingError( + "Contract cannot be deployed until its bytecode is linked." + ) + return super(LinkableContract, cls).constructor(*args, **kwargs) + + @classmethod + def link_bytecode(cls, attr_dict: Dict[str, str]) -> Type["LinkableContract"]: + """ + Return a cloned contract factory with the deployment / runtime bytecode linked. + + :attr_dict: Dict[`ContractType`: `Address`] for all deployment and runtime link references. + """ + if not cls.unlinked_references and not cls.linked_references: + raise BytecodeLinkingError("Contract factory has no linkable bytecode.") + if not cls.needs_bytecode_linking: + raise BytecodeLinkingError( + "Bytecode for this contract factory does not require bytecode linking." + ) + cls.validate_attr_dict(attr_dict) + bytecode = apply_all_link_refs(cls.bytecode, cls.unlinked_references, attr_dict) + runtime = apply_all_link_refs( + cls.bytecode_runtime, cls.linked_references, attr_dict + ) + linked_class = cls.factory( + cls.web3, bytecode_runtime=runtime, bytecode=bytecode + ) + if linked_class.needs_bytecode_linking: + raise BytecodeLinkingError( + "Expected class to be fully linked, but class still needs bytecode linking." + ) + return linked_class + + @combomethod + def validate_attr_dict(self, attr_dict: Dict[str, str]) -> None: + """ + Validates that ContractType keys in attr_dict reference existing manifest ContractTypes. + """ + attr_dict_names = list(attr_dict.keys()) + + if not self.unlinked_references and not self.linked_references: + raise BytecodeLinkingError( + "Unable to validate attr dict, this contract has no linked/unlinked references." 
+ ) + + unlinked_refs = self.unlinked_references or ({},) + linked_refs = self.linked_references or ({},) + all_link_refs = unlinked_refs + linked_refs + + all_link_names = [ref["name"] for ref in all_link_refs if ref] + if set(attr_dict_names) != set(all_link_names): + raise BytecodeLinkingError( + "All link references must be defined when calling " + "`link_bytecode` on a contract factory." + ) + for address in attr_dict.values(): + validate_address(address) + + +def is_prelinked_bytecode(bytecode: bytes, link_refs: List[Dict[str, Any]]) -> bool: + """ + Returns False if all expected link_refs are unlinked, otherwise returns True. + todo support partially pre-linked bytecode (currently all or nothing) + """ + for link_ref in link_refs: + for offset in link_ref["offsets"]: + try: + validate_empty_bytes(offset, link_ref["length"], bytecode) + except EthPMValidationError: + return True + return False + + +def apply_all_link_refs( + bytecode: bytes, link_refs: List[Dict[str, Any]], attr_dict: Dict[str, str] +) -> bytes: + """ + Applies all link references corresponding to a valid attr_dict to the bytecode. + """ + if link_refs is None: + return bytecode + link_fns = ( + apply_link_ref(offset, ref["length"], attr_dict[ref["name"]]) + for ref in link_refs + for offset in ref["offsets"] + ) + linked_bytecode = pipe(bytecode, *link_fns) + return linked_bytecode + + +@curry +def apply_link_ref(offset: int, length: int, value: bytes, bytecode: bytes) -> bytes: + """ + Returns the new bytecode with `value` put into the location indicated by `offset` and `length`. + """ + try: + validate_empty_bytes(offset, length, bytecode) + except EthPMValidationError: + raise BytecodeLinkingError("Link references cannot be applied to bytecode") + + address = value if is_canonical_address(value) else to_canonical_address(value) + new_bytes = ( + # Ignore linting error b/c conflict b/w black & flake8 + bytecode[:offset] + address + bytecode[offset + length:] # noqa: E201, E203 + ) + return new_bytes diff --git a/env/lib/python3.8/site-packages/ethpm/dependencies.py b/env/lib/python3.8/site-packages/ethpm/dependencies.py new file mode 100644 index 00000000..f1f2eac5 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/dependencies.py @@ -0,0 +1,58 @@ +from typing import ( + Dict, + List, + Tuple, +) + +from ethpm.validation.package import ( + validate_package_name, +) + + +class Dependencies: + """ + Class to manage the `Package` instances of a Package's `buildDependencies`. + """ + + # ignoring Package type here and below to avoid a circular dependency + def __init__( + self, build_dependencies: Dict[str, "Package"] # type: ignore + ) -> None: + self.build_dependencies = build_dependencies + + def __getitem__(self, key: str) -> "Package": # type: ignore # noqa: F821 + return self.build_dependencies.get(key) + + def __contains__(self, key: str) -> bool: + return key in self.build_dependencies + + def _validate_name(self, name: str) -> None: + validate_package_name(name) + if name not in self.build_dependencies: + raise KeyError(f"Package name: {name} not found in build dependencies.") + + def items(self) -> Tuple[Tuple[str, "Package"], ...]: # type: ignore + """ + Return an iterable containing package name and + corresponding `Package` instance that are available. 
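+
+        For example (illustrative, reusing the ``owned_package`` fixture from
+        the ``Package`` docs):
+
+        .. code:: python
+
+            for name, dependency_package in owned_package.build_dependencies.items():
+                print(name, dependency_package.version)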
+ """ + item_dict = { + name: self.build_dependencies.get(name) for name in self.build_dependencies + } + return tuple(item_dict.items()) + + def values(self) -> List["Package"]: # type: ignore + """ + Return an iterable of the available `Package` instances. + """ + values = [self.build_dependencies.get(name) for name in self.build_dependencies] + return values + + def get_dependency_package( + self, package_name: str + ) -> "Package": # type: ignore # noqa: F821 + """ + Return the dependency Package for a given package name. + """ + self._validate_name(package_name) + return self.build_dependencies.get(package_name) diff --git a/env/lib/python3.8/site-packages/ethpm/deployments.py b/env/lib/python3.8/site-packages/ethpm/deployments.py new file mode 100644 index 00000000..7ee501ad --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/deployments.py @@ -0,0 +1,79 @@ +from typing import ( + Any, + Dict, + ItemsView, + List, +) + +from eth_typing import ( + Address, + HexStr, +) + +from ethpm.validation.package import ( + validate_contract_name, +) +from web3._utils.compat import ( + TypedDict, +) +from web3.eth import ( + Contract, +) + + +class Deployments: + """ + Deployment object to access instances of + deployed contracts belonging to a package. + """ + + def __init__( + self, + deployment_data: Dict[str, Dict[str, str]], + contract_instances: Dict[str, Contract], + ) -> None: + self.deployment_data = deployment_data + self.contract_instances = contract_instances + + def __getitem__(self, key: str) -> Dict[str, str]: + return self.get(key) + + def __contains__(self, key: str) -> bool: + return key in self.deployment_data + + def get(self, key: str) -> Dict[str, str]: + self._validate_name_and_references(key) + return self.deployment_data[key] + + def items(self) -> ItemsView[str, Dict[str, str]]: + item_dict = {name: self.get(name) for name in self.deployment_data} + return item_dict.items() + + def values(self) -> List[Dict[str, str]]: + values = [self.get(name) for name in self.deployment_data] + return values + + def get_instance(self, contract_name: str) -> Contract: + """ + Fetches a contract instance belonging to deployment + after validating contract name. + """ + self._validate_name_and_references(contract_name) + return self.contract_instances[contract_name] + + def _validate_name_and_references(self, name: str) -> None: + validate_contract_name(name) + if name not in self.deployment_data: + raise KeyError( + f"Contract deployment: {name} not found in deployment data. " + f"Available deployments include: {list(sorted(self.deployment_data.keys()))}." + ) + + +class DeploymentData(TypedDict): + address: Address + transaction: HexStr + block: HexStr + runtime_bytecode: Dict[str, Any] + compiler: Dict[str, str] + contractType: str diff --git a/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/escrow/contracts/Escrow.sol b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/escrow/contracts/Escrow.sol new file mode 100644 index 00000000..a5a0c25e --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/escrow/contracts/Escrow.sol @@ -0,0 +1,32 @@ +// SPDX-License-Identifier: MIT +pragma solidity ^0.6.8; + + +import {SafeSendLib} from "./SafeSendLib.sol"; + + +/// @title Contract for holding funds in escrow between two semi trusted parties. 
+/// @author Piper Merriam
+contract Escrow {
+    using SafeSendLib for address;
+
+    address public sender;
+    address public recipient;
+
+    constructor(address _recipient) public {
+        sender = msg.sender;
+        recipient = _recipient;
+    }
+
+    /// @dev Releases the escrowed funds to the other party.
+    /// @notice This will release the escrowed funds to the other party.
+    function releaseFunds() public {
+        if (msg.sender == sender) {
+            recipient.sendOrThrow(address(this).balance);
+        } else if (msg.sender == recipient) {
+            sender.sendOrThrow(address(this).balance);
+        } else {
+            revert();
+        }
+    }
+}
diff --git a/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/escrow/contracts/SafeSendLib.sol b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/escrow/contracts/SafeSendLib.sol
new file mode 100644
index 00000000..0e5a23d1
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/escrow/contracts/SafeSendLib.sol
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.6.8;
+
+
+/// @title Library for safe sending of ether.
+/// @author Piper Merriam
+library SafeSendLib {
+    /// @dev Attempts to send the specified amount to the recipient, throwing an error if it fails
+    /// @param recipient The address that the funds should be sent to.
+    /// @param value The amount in wei that should be sent.
+    function sendOrThrow(address recipient, uint value) public returns (bool) {
+        if (value > address(this).balance)
+            revert();
+
+        if (!payable(recipient).send(value))
+            revert();
+
+        return true;
+    }
+}
diff --git a/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/owned/contracts/Owned.sol b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/owned/contracts/Owned.sol
new file mode 100644
index 00000000..4152f93d
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/owned/contracts/Owned.sol
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.6.8;
+
+contract Owned {
+    address owner;
+
+    modifier onlyOwner { require(msg.sender == owner); _; }
+
+    constructor() public {
+        owner = msg.sender;
+    }
+}
diff --git a/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/safe-math-lib/contracts/SafeMathLib.sol b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/safe-math-lib/contracts/SafeMathLib.sol
new file mode 100644
index 00000000..575f2a54
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/safe-math-lib/contracts/SafeMathLib.sol
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.6.8;
+
+
+/// @title Safe Math Library
+/// @author Piper Merriam
+library SafeMathLib {
+    /// @dev Adds a and b, throwing an error if the operation would cause an overflow.
+    /// @param a The first number to add
+    /// @param b The second number to add
+    function safeAdd(uint256 a, uint256 b) public pure returns (uint256 c) {
+        c = a + b;
+        assert(c >= a);
+        return c;
+    }
+
+    /// @dev Subtracts b from a, throwing an error if the operation would cause an underflow.
+ /// @param a The number to be subtracted from + /// @param b The amount that should be subtracted + function safeSub(uint256 a, uint256 b) public pure returns (uint256) { + assert(b <= a); + return a - b; + } +} diff --git a/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/standard-token/contracts/AbstractToken.sol b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/standard-token/contracts/AbstractToken.sol new file mode 100644 index 00000000..5775c510 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/standard-token/contracts/AbstractToken.sol @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: MIT +pragma solidity ^0.6.8; + + +/// Implements ERC 20 Token standard: https://github.com/ethereum/EIPs/issues/20 + +/// @title Abstract token contract - Functions to be implemented by token contracts. +/// @author Stefan George - +abstract contract Token { + // This is not an abstract function, because solc won't recognize generated getter functions for public variables as functions + function totalSupply() external view virtual returns (uint256 supply) {} + function balanceOf(address owner) public view virtual returns (uint256 balance); + function transfer(address to, uint256 value) public virtual returns (bool success); + function transferFrom(address from, address to, uint256 value) public virtual returns (bool success); + function approve(address spender, uint256 value) public virtual returns (bool success); + function allowance(address owner, address spender) public view virtual returns (uint256 remaining); + + event Transfer(address indexed from, address indexed to, uint256 value); + event Approval(address indexed owner, address indexed spender, uint256 value); +} diff --git a/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/standard-token/contracts/StandardToken.sol b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/standard-token/contracts/StandardToken.sol new file mode 100644 index 00000000..677b443d --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/standard-token/contracts/StandardToken.sol @@ -0,0 +1,84 @@ +// SPDX-License-Identifier: MIT +pragma solidity ^0.6.8; + + +import "./AbstractToken.sol"; + + +/// @title Standard token contract +/// @author Stefan George - +contract StandardToken is Token { + /* + * Data structures + */ + mapping (address => uint256) balances; + mapping (address => mapping (address => uint256)) allowed; + uint256 override public totalSupply; + + constructor (uint _totalSupply) public { + totalSupply = _totalSupply; + balances[msg.sender] = _totalSupply; + emit Transfer(address(0), msg.sender, _totalSupply); + } + + /* + * Read and write storage functions + */ + /// @dev Transfers sender's tokens to a given address. Returns success. + /// @param _to Address of token receiver. + /// @param _value Number of tokens to transfer. + function transfer(address _to, uint256 _value) public override returns (bool success) { + if (balances[msg.sender] >= _value && _value > 0) { + balances[msg.sender] -= _value; + balances[_to] += _value; + emit Transfer(msg.sender, _to, _value); + return true; + } + else { + return false; + } + } + + /// @dev Allows allowed third party to transfer tokens from one address to another. Returns success. + /// @param _from Address from where tokens are withdrawn. + /// @param _to Address to where tokens are sent. + /// @param _value Number of tokens to transfer. 
+    function transferFrom(address _from, address _to, uint256 _value) public override returns (bool success) {
+        if (balances[_from] >= _value && allowed[_from][msg.sender] >= _value && _value > 0) {
+            balances[_to] += _value;
+            balances[_from] -= _value;
+            allowed[_from][msg.sender] -= _value;
+            emit Transfer(_from, _to, _value);
+            return true;
+        }
+        else {
+            return false;
+        }
+    }
+
+    /// @dev Returns number of tokens owned by given address.
+    /// @param _owner Address of token owner.
+    function balanceOf(address _owner) public override view returns (uint256 balance) {
+        return balances[_owner];
+    }
+
+    /// @dev Sets approved amount of tokens for spender. Returns success.
+    /// @param _spender Address of allowed account.
+    /// @param _value Number of approved tokens.
+    function approve(address _spender, uint256 _value) public override returns (bool success) {
+        allowed[msg.sender][_spender] = _value;
+        emit Approval(msg.sender, _spender, _value);
+        return true;
+    }
+
+    /*
+     * Read storage functions
+     */
+    /// @dev Returns number of allowed tokens for given address.
+    /// @param _owner Address of token owner.
+    /// @param _spender Address of token spender.
+    function allowance(address _owner, address _spender) public override view returns (uint256 remaining) {
+        return allowed[_owner][_spender];
+    }
+
+}
diff --git a/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/transferable/contracts/Transferable.sol b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/transferable/contracts/Transferable.sol
new file mode 100644
index 00000000..e597ad67
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/transferable/contracts/Transferable.sol
@@ -0,0 +1,14 @@
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.6.8;
+
+import {Owned} from "owned/contracts/Owned.sol";
+
+contract Transferable is Owned {
+    event OwnerChanged(address indexed prevOwner, address indexed newOwner);
+
+    function transferOwner(address newOwner) public onlyOwner returns (bool) {
+        emit OwnerChanged(owner, newOwner);
+        owner = newOwner;
+        return true;
+    }
+}
diff --git a/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/wallet-with-send/contracts/WalletWithSend.sol b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/wallet-with-send/contracts/WalletWithSend.sol
new file mode 100644
index 00000000..00cecd6f
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/wallet-with-send/contracts/WalletWithSend.sol
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.6.8;
+
+
+import {Wallet} from "./wallet/contracts/Wallet.sol";
+
+
+/// @title Wallet contract with simple send and approval spending functionality
+/// @author Piper Merriam
+contract WalletWithSend is Wallet {
+    /// @dev Sends funds that have been approved to the specified address
+    /// @notice This will send the recipient the specified amount.
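+    /// @param value The amount in wei that should be sent.
+    /// @param to The address that the approved funds will be sent to.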
+    function approvedSend(uint value, address payable to) public {
+        allowances[msg.sender] = allowances[msg.sender].safeSub(value);
+        if (!to.send(value))
+            revert();
+    }
+}
diff --git a/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/wallet/contracts/Wallet.sol b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/wallet/contracts/Wallet.sol
new file mode 100644
index 00000000..35e0ea2f
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/ethpm-spec/examples/wallet/contracts/Wallet.sol
@@ -0,0 +1,41 @@
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.6.8;
+
+
+import {SafeMathLib} from "./safe-math-lib/contracts/SafeMathLib.sol";
+import {Owned} from "./owned/contracts/Owned.sol";
+
+
+/// @title Wallet contract for holding funds, with owner-initiated sends and approval-based withdrawals.
+/// @author Piper Merriam
+contract Wallet is Owned {
+    using SafeMathLib for uint;
+
+    mapping (address => uint) allowances;
+
+    /// @dev Fallback function for depositing funds
+    fallback() external {
+    }
+
+    /// @dev Sends the recipient the specified amount
+    /// @notice This will send the recipient the specified amount.
+    function send(address payable recipient, uint value) public onlyOwner returns (bool) {
+        return recipient.send(value);
+    }
+
+    /// @dev Sets recipient to be approved to withdraw the specified amount
+    /// @notice This will set the recipient to be approved to withdraw the specified amount.
+    function approve(address recipient, uint value) public onlyOwner returns (bool) {
+        allowances[recipient] = value;
+        return true;
+    }
+
+    /// @dev Lets caller withdraw up to their approved amount
+    /// @notice This will withdraw provided value, deducting it from your total allowance.
+    function withdraw(uint value) public returns (bool) {
+        allowances[msg.sender] = allowances[msg.sender].safeSub(value);
+        if (!msg.sender.send(value))
+            revert();
+        return true;
+    }
+}
diff --git a/env/lib/python3.8/site-packages/ethpm/exceptions.py b/env/lib/python3.8/site-packages/ethpm/exceptions.py
new file mode 100644
index 00000000..af727ae5
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/exceptions.py
@@ -0,0 +1,54 @@
+class PyEthPMError(Exception):
+    """
+    Base class for all Py-EthPM errors.
+    """
+
+    pass
+
+
+class InsufficientAssetsError(PyEthPMError):
+    """
+    Raised when a Manifest or Package does not contain the required assets to do something.
+    """
+
+    pass
+
+
+class EthPMValidationError(PyEthPMError):
+    """
+    Raised when something does not pass a validation check.
+    """
+
+    pass
+
+
+class CannotHandleURI(PyEthPMError):
+    """
+    Raised when the given URI cannot be served by any of the available backends.
+    """
+
+    pass
+
+
+class FailureToFetchIPFSAssetsError(PyEthPMError):
+    """
+    Raised when an attempt to fetch a Package's assets via IPFS failed.
+    """
+
+    pass
+
+
+class BytecodeLinkingError(PyEthPMError):
+    """
+    Raised when an attempt to link a contract factory's bytecode failed.
+    """
+
+    pass
+
+
+class ManifestBuildingError(PyEthPMError):
+    """
+    Raised when an attempt to build a manifest failed.
+ """ + + pass diff --git a/env/lib/python3.8/site-packages/ethpm/package.py b/env/lib/python3.8/site-packages/ethpm/package.py new file mode 100644 index 00000000..e0645740 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/package.py @@ -0,0 +1,434 @@ +import json +from pathlib import ( + Path, +) +from typing import ( + Any, + Dict, + Generator, + Iterable, + List, + Optional, + Tuple, + Type, + Union, + cast, +) + +from eth_typing import ( + URI, + Address, + ContractName, + Manifest, +) +from eth_utils import ( + to_canonical_address, + to_dict, + to_text, + to_tuple, +) + +from ethpm._utils.cache import ( + cached_property, +) +from ethpm._utils.contract import ( + generate_contract_factory_kwargs, +) +from ethpm._utils.deployments import ( + get_linked_deployments, + normalize_linked_references, + validate_deployments_tx_receipt, + validate_linked_references, +) +from ethpm.contract import ( + LinkableContract, +) +from ethpm.dependencies import ( + Dependencies, +) +from ethpm.deployments import ( + DeploymentData, + Deployments, +) +from ethpm.exceptions import ( + BytecodeLinkingError, + EthPMValidationError, + FailureToFetchIPFSAssetsError, + InsufficientAssetsError, + PyEthPMError, +) +from ethpm.uri import ( + resolve_uri_contents, +) +from ethpm.validation.manifest import ( + check_for_deployments, + validate_build_dependencies_are_present, + validate_manifest_against_schema, + validate_manifest_deployments, + validate_raw_manifest_format, +) +from ethpm.validation.misc import ( + validate_w3_instance, +) +from ethpm.validation.package import ( + validate_build_dependency, + validate_contract_name, + validate_minimal_contract_factory_data, +) +from ethpm.validation.uri import ( + validate_single_matching_uri, +) +from web3 import Web3 +from web3._utils.validation import ( + validate_address, +) +from web3.eth import ( + Contract, +) + + +class Package(object): + def __init__( + self, manifest: Dict[str, Any], w3: Web3, uri: Optional[str] = None + ) -> None: + """ + A package should be created using one of the available + classmethods and a valid w3 instance. + """ + if not isinstance(manifest, dict): + raise TypeError( + "Package object must be initialized with a dictionary. " + f"Got {type(manifest)}" + ) + + if "manifest" not in manifest or manifest["manifest"] != "ethpm/3": + raise EthPMValidationError( + "Py-Ethpm currently only supports v3 ethpm manifests. " + "Please use the CLI to update or re-generate a v3 manifest. " + ) + + validate_manifest_against_schema(manifest) + validate_manifest_deployments(manifest) + validate_w3_instance(w3) + + self.w3 = w3 + self.w3.eth.defaultContractFactory = cast(Type[Contract], LinkableContract) + self.manifest = manifest + self._uri = uri + + def update_w3(self, w3: Web3) -> "Package": + """ + Returns a new instance of `Package` containing the same manifest, + but connected to a different web3 instance. + + .. doctest:: + + >>> new_w3 = Web3(Web3.EthereumTesterProvider()) + >>> NewPackage = OwnedPackage.update_w3(new_w3) + >>> assert NewPackage.w3 == new_w3 + >>> assert OwnedPackage.manifest == NewPackage.manifest + """ + validate_w3_instance(w3) + return Package(self.manifest, w3, self.uri) + + def __repr__(self) -> str: + """ + String readable representation of the Package. + + .. doctest:: + + >>> OwnedPackage.__repr__() + '' + """ + name = self.name + version = self.version + return f"" + + @property + def name(self) -> str: + """ + The name of this ``Package``. + + .. 
doctest::
+
+            >>> OwnedPackage.name
+            'owned'
+        """
+        return self.manifest["name"]
+
+    @property
+    def version(self) -> str:
+        """
+        The package version of a ``Package``.
+
+        .. doctest::
+
+            >>> OwnedPackage.version
+            '1.0.0'
+        """
+        return self.manifest["version"]
+
+    @property
+    def manifest_version(self) -> str:
+        """
+        The manifest version of a ``Package``.
+
+        .. doctest::
+
+            >>> OwnedPackage.manifest_version
+            'ethpm/3'
+        """
+        return self.manifest["manifest"]
+
+    @property
+    def uri(self) -> Optional[str]:
+        """
+        The uri (local file_path / content-addressed URI) of a ``Package``'s manifest.
+        """
+        return self._uri
+
+    @property
+    def contract_types(self) -> List[str]:
+        """
+        All contract types included in this package.
+        """
+        if 'contractTypes' in self.manifest:
+            return sorted(self.manifest['contractTypes'].keys())
+        else:
+            raise ValueError(f"No contract types found in manifest; {self.__repr__()}.")
+
+    @classmethod
+    def from_file(cls, file_path: Path, w3: Web3) -> "Package":
+        """
+        Returns a ``Package`` instantiated by a manifest located at the provided Path.
+        ``file_path`` arg must be a ``pathlib.Path`` instance.
+        A valid ``Web3`` instance is required to instantiate a ``Package``.
+        """
+        if isinstance(file_path, Path):
+            raw_manifest = file_path.read_text()
+            validate_raw_manifest_format(raw_manifest)
+            manifest = json.loads(raw_manifest)
+        else:
+            raise TypeError(
+                "The Package.from_file method expects a pathlib.Path instance. "
+                f"Got {type(file_path)} instead."
+            )
+        return cls(manifest, w3, file_path.as_uri())
+
+    @classmethod
+    def from_uri(cls, uri: URI, w3: Web3) -> "Package":
+        """
+        Returns a Package object instantiated by a manifest located at a content-addressed URI.
+        A valid ``Web3`` instance is also required.
+        URI schemes supported:
+
+        - IPFS: `ipfs://Qm...`
+
+        - HTTP: `https://api.github.com/repos/:owner/:repo/git/blobs/:file_sha`
+
+        - Registry: `erc1319://registry.eth:1/greeter?version=1.0.0`
+
+        .. code:: python
+
+            OwnedPackage = Package.from_uri('ipfs://QmbeVyFLSuEUxiXKwSsEjef7icpdTdA4kGG9BcrJXKNKUW', w3)  # noqa: E501
+        """
+        contents = to_text(resolve_uri_contents(uri))
+        validate_raw_manifest_format(contents)
+        manifest = json.loads(contents)
+        return cls(manifest, w3, uri)
+
+    #
+    # Contracts
+    #
+
+    def get_contract_factory(self, name: ContractName) -> LinkableContract:
+        """
+        Return the contract factory for a given contract type, generated from the data available
+        in ``Package.manifest``. Contract factories are accessible from the package class.
+
+        .. code:: python
+
+            Owned = OwnedPackage.get_contract_factory('owned')
+
+        In cases where a contract uses a library, the contract factory will have
+        unlinked bytecode. The ``ethpm`` package ships with its own subclass of
+        ``web3.contract.Contract``, ``ethpm.contract.LinkableContract`` with a few extra
+        methods and properties related to bytecode linking.
+
+        .. code:: python
+
+            >>> math = owned_package.contract_factories.math
+            >>> math.needs_bytecode_linking
+            True
+            >>> linked_math = math.link_bytecode({'MathLib': '0x1234...'})
+            >>> linked_math.needs_bytecode_linking
+            False
+        """
+        validate_contract_name(name)
+
+        if "contractTypes" not in self.manifest:
+            raise InsufficientAssetsError(
+                "This package does not contain any contract type data."
+            )
+
+        try:
+            contract_data = self.manifest["contractTypes"][name]
+        except KeyError:
+            raise InsufficientAssetsError(
+                "This package does not contain any package data to generate "
+                f"a contract factory for contract type: {name}. 
Available contract types include: " + f"{self.contract_types}." + ) + + validate_minimal_contract_factory_data(contract_data) + contract_kwargs = generate_contract_factory_kwargs(contract_data) + contract_factory = self.w3.eth.contract(**contract_kwargs) + return contract_factory + + def get_contract_instance(self, name: ContractName, address: Address) -> Contract: + """ + Will return a ``Web3.contract`` instance generated from the contract type data available + in ``Package.manifest`` and the provided ``address``. The provided ``address`` must be + valid on the connected chain available through ``Package.w3``. + """ + validate_address(address) + validate_contract_name(name) + try: + self.manifest["contractTypes"][name]["abi"] + except KeyError: + raise InsufficientAssetsError( + "Package does not have the ABI required to generate a contract instance " + f"for contract: {name} at address: {address}." + ) + contract_kwargs = generate_contract_factory_kwargs( + self.manifest["contractTypes"][name] + ) + contract_instance = self.w3.eth.contract( + address=address, **contract_kwargs + ) + return contract_instance + + # + # Build Dependencies + # + + @cached_property + def build_dependencies(self) -> "Dependencies": + """ + Return `Dependencies` instance containing the build dependencies available on this Package. + The ``Package`` class should provide access to the full dependency tree. + + .. code:: python + + >>> owned_package.build_dependencies['zeppelin'] + + """ + validate_build_dependencies_are_present(self.manifest) + + dependencies = self.manifest["buildDependencies"] + dependency_packages = {} + for name, uri in dependencies.items(): + try: + validate_build_dependency(name, uri) + dependency_package = Package.from_uri(uri, self.w3) + except PyEthPMError as e: + raise FailureToFetchIPFSAssetsError( + f"Failed to retrieve build dependency: {name} from URI: {uri}.\n" + f"Got error: {e}." + ) + else: + dependency_packages[name] = dependency_package + + return Dependencies(dependency_packages) + + # + # Deployments + # + + @cached_property + def deployments(self) -> Union["Deployments", Dict[None, None]]: + """ + Returns a ``Deployments`` object containing all the deployment data and contract + instances of a ``Package``'s `contract_types`. Automatically filters deployments + to only expose those available on the current ``Package.w3`` instance. + + .. 
code:: python + + package.deployments.get_instance("ContractType") + """ + if not check_for_deployments(self.manifest): + return {} + + all_blockchain_uris = self.manifest["deployments"].keys() + matching_uri = validate_single_matching_uri(all_blockchain_uris, self.w3) + + deployments = self.manifest["deployments"][matching_uri] + all_contract_instances = self._get_all_contract_instances(deployments) + validate_deployments_tx_receipt(deployments, self.w3, allow_missing_data=True) + linked_deployments = get_linked_deployments(deployments) + if linked_deployments: + for deployment_data in linked_deployments.values(): + on_chain_bytecode = self.w3.eth.getCode( + deployment_data["address"] + ) + unresolved_linked_refs = normalize_linked_references( + deployment_data["runtimeBytecode"]["linkDependencies"] + ) + resolved_linked_refs = tuple( + self._resolve_linked_references(link_ref, deployments) + for link_ref in unresolved_linked_refs + ) + for linked_ref in resolved_linked_refs: + validate_linked_references(linked_ref, on_chain_bytecode) + + return Deployments(deployments, all_contract_instances) + + @to_dict + def _get_all_contract_instances( + self, deployments: Dict[str, DeploymentData] + ) -> Iterable[Tuple[str, Contract]]: + for deployment_name, deployment_data in deployments.items(): + if deployment_data['contractType'] not in self.contract_types: + raise EthPMValidationError( + f"Contract type: {deployment_data['contractType']} for alias: " + f"{deployment_name} not found. Available contract types include: " + f"{self.contract_types}." + ) + contract_instance = self.get_contract_instance( + ContractName(deployment_data['contractType']), + deployment_data['address'], + ) + yield deployment_name, contract_instance + + @to_tuple + def _resolve_linked_references( + self, link_ref: Tuple[int, str, str], deployments: Dict[str, Any] + ) -> Generator[Tuple[int, bytes], None, None]: + # No nested deployment: i.e. 'Owned' + offset, link_type, value = link_ref + if link_type == "literal": + yield offset, to_canonical_address(value) + elif value in deployments: + yield offset, to_canonical_address(deployments[value]["address"]) + # No nested deployment, but invalid ref + elif ":" not in value: + raise BytecodeLinkingError( + f"Contract instance reference: {value} not found in package's deployment data." + ) + # Expects child pkg in build_dependencies + elif value.split(":")[0] not in self.build_dependencies: + raise BytecodeLinkingError( + f"Expected build dependency: {value.split(':')[0]} not found " + "in package's build dependencies." 
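+                # Nested references take the form "<dependency-package>:<contract-instance>"
+                # and are resolved against build_dependencies in the else branch below.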
+ ) + # Find and return resolved, nested ref + else: + unresolved_linked_ref = value.split(":", 1)[-1] + build_dependency = self.build_dependencies[value.split(":")[0]] + yield build_dependency._resolve_link_dependencies(unresolved_linked_ref) + + +def format_manifest(manifest: Manifest, *, prettify: bool = None) -> str: + if prettify: + return json.dumps(manifest, sort_keys=True, indent=4) + return json.dumps(manifest, sort_keys=True, separators=(",", ":")) diff --git a/env/lib/python3.8/site-packages/ethpm/tools/__init__.py b/env/lib/python3.8/site-packages/ethpm/tools/__init__.py new file mode 100644 index 00000000..c92e4485 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/tools/__init__.py @@ -0,0 +1 @@ +from .get_manifest import get_ethpm_local_manifest, get_ethpm_spec_manifest # noqa: F401 diff --git a/env/lib/python3.8/site-packages/ethpm/tools/builder.py b/env/lib/python3.8/site-packages/ethpm/tools/builder.py new file mode 100644 index 00000000..a9b1466c --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/tools/builder.py @@ -0,0 +1,916 @@ +import functools +import json +from pathlib import ( + Path, +) +import tempfile +from typing import ( + Any, + Callable, + Dict, + Iterable, + List, + Optional, + Set, + Tuple, +) + +from eth_typing import ( + URI, + HexStr, + Manifest, +) +from eth_utils import ( + add_0x_prefix, + is_hex, + is_string, + to_bytes, + to_checksum_address, + to_dict, + to_list, +) +from eth_utils.toolz import ( + assoc, + assoc_in, + concat, + curry, + pipe, +) + +from ethpm import ( + Package, +) +from ethpm._utils.chains import ( + is_BIP122_block_uri, +) +from ethpm.backends.ipfs import ( + BaseIPFSBackend, +) +from ethpm.exceptions import ( + EthPMValidationError, + ManifestBuildingError, +) +from ethpm.package import ( + format_manifest, +) +from ethpm.uri import ( + is_supported_content_addressed_uri, +) +from ethpm.validation.manifest import ( + validate_manifest_against_schema, +) +from ethpm.validation.package import ( + validate_package_name, +) +from web3 import Web3 +from web3._utils.validation import ( + validate_address, +) + + +def build(obj: Dict[str, Any], *fns: Callable[..., Any]) -> Dict[str, Any]: + """ + Wrapper function to pipe manifest through build functions. + Does not validate the manifest by default. + """ + return pipe(obj, *fns) + + +# +# Required Fields +# + + +@curry +def package_name(name: str, manifest: Manifest) -> Manifest: + """ + Return a copy of manifest with `name` set to "name". + """ + return assoc(manifest, "name", name) + + +@curry +def manifest_version(manifest_version: str, manifest: Manifest) -> Manifest: + """ + Return a copy of manifest with `manifest_version` set to "manifest". + """ + return assoc(manifest, "manifest", manifest_version) + + +@curry +def version(version: str, manifest: Manifest) -> Manifest: + """ + Return a copy of manifest with `version` set to "version". 
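+
+    A minimal usage sketch (the name and version values here are illustrative):
+
+    .. code:: python
+
+        manifest = build(
+            {},
+            package_name("owned"),
+            manifest_version("ethpm/3"),
+            version("1.0.0"),
+        )
+        assert manifest == {"name": "owned", "manifest": "ethpm/3", "version": "1.0.0"}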
+ """ + return assoc(manifest, "version", version) + + +# +# Meta +# + + +def authors(*author_list: str) -> Manifest: + """ + Return a copy of manifest with a list of author posargs set to "meta": {"authors": author_list} + """ + return _authors(author_list) + + +@curry +@functools.wraps(authors) +def _authors(authors: Set[str], manifest: Manifest) -> Manifest: + return assoc_in(manifest, ("meta", "authors"), list(authors)) + + +@curry +def license(license: str, manifest: Manifest) -> Manifest: + """ + Return a copy of manifest with `license` set to "meta": {"license": `license`} + """ + return assoc_in(manifest, ("meta", "license"), license) + + +@curry +def description(description: str, manifest: Manifest) -> Manifest: + """ + Return a copy of manifest with `description` set to "meta": {"descriptions": `description`} + """ + return assoc_in(manifest, ("meta", "description"), description) + + +def keywords(*keyword_list: str) -> Manifest: + """ + Return a copy of manifest with a list of keyword posargs set to + "meta": {"keywords": keyword_list} + """ + return _keywords(keyword_list) + + +@curry +@functools.wraps(keywords) +def _keywords(keywords: Set[str], manifest: Manifest) -> Manifest: + return assoc_in(manifest, ("meta", "keywords"), list(keywords)) + + +def links(**link_dict: str) -> Manifest: + """ + Return a copy of manifest with a dict of link kwargs set to "meta": {"links": link_dict} + """ + return _links(link_dict) + + +@curry +def _links(link_dict: Dict[str, str], manifest: Manifest) -> Manifest: + return assoc_in(manifest, ("meta", "links"), link_dict) + + +# +# Sources +# + + +def get_names_and_paths(compiler_output: Dict[str, Any]) -> Dict[str, str]: + """ + Return a mapping of contract name to relative path as defined in compiler output. + """ + return { + contract_name: make_path_relative(path) + for path in compiler_output + for contract_name in compiler_output[path].keys() + } + + +def make_path_relative(path: str) -> str: + """ + Returns the given path prefixed with "./" if the path + is not already relative in the compiler output. + """ + if "../" in path: + raise ManifestBuildingError( + f"Path: {path} appears to be outside of the virtual source tree. " + "Please make sure all sources are within the virtual source tree root directory." + ) + + if path[:2] != "./": + return f"./{path}" + return path + + +def source_inliner( + compiler_output: Dict[str, Any], package_root_dir: Optional[Path] = None +) -> Manifest: + return _inline_sources(compiler_output, package_root_dir) + + +@curry +def _inline_sources( + compiler_output: Dict[str, Any], package_root_dir: Optional[Path], name: str +) -> Manifest: + return _inline_source(name, compiler_output, package_root_dir) + + +def inline_source( + name: str, compiler_output: Dict[str, Any], package_root_dir: Optional[Path] = None +) -> Manifest: + """ + Return a copy of manifest with added field to + "sources": {relative_source_path: contract_source_data}. + + If `package_root_dir` is not provided, cwd is expected to resolve the relative + path to the source as defined in the compiler output. + """ + return _inline_source(name, compiler_output, package_root_dir) + + +@curry +def _inline_source( + name: str, + compiler_output: Dict[str, Any], + package_root_dir: Optional[Path], + manifest: Manifest, +) -> Manifest: + names_and_paths = get_names_and_paths(compiler_output) + cwd = Path.cwd() + try: + source_path = names_and_paths[name] + except KeyError: + raise ManifestBuildingError( + f"Unable to inline source: {name}. 
" + f"Available sources include: {list(sorted(names_and_paths.keys()))}." + ) + + if package_root_dir: + if (package_root_dir / source_path).is_file(): + source_data = (package_root_dir / source_path).read_text() + else: + raise ManifestBuildingError( + f"Contract source: {source_path} cannot be found in " + f"provided package_root_dir: {package_root_dir}." + ) + elif (cwd / source_path).is_file(): + source_data = (cwd / source_path).read_text() + else: + raise ManifestBuildingError( + "Contract source cannot be resolved, please make sure that the working " + "directory is set to the correct directory or provide `package_root_dir`." + ) + + # rstrip used here since Path.read_text() adds a newline to returned contents + source_data_object = { + "content": source_data.rstrip("\n"), + "installPath": source_path, + "type": "solidity", + } + return assoc_in(manifest, ["sources", source_path], source_data_object) + + +def source_pinner( + compiler_output: Dict[str, Any], + ipfs_backend: BaseIPFSBackend, + package_root_dir: Optional[Path] = None, +) -> Manifest: + return _pin_sources(compiler_output, ipfs_backend, package_root_dir) + + +@curry +def _pin_sources( + compiler_output: Dict[str, Any], + ipfs_backend: BaseIPFSBackend, + package_root_dir: Optional[Path], + name: str, +) -> Manifest: + return _pin_source(name, compiler_output, ipfs_backend, package_root_dir) + + +def pin_source( + name: str, + compiler_output: Dict[str, Any], + ipfs_backend: BaseIPFSBackend, + package_root_dir: Optional[Path] = None, +) -> Manifest: + """ + Pins source to IPFS and returns a copy of manifest with added field to + "sources": {relative_source_path: IFPS URI}. + + If `package_root_dir` is not provided, cwd is expected to resolve the relative path + to the source as defined in the compiler output. + """ + return _pin_source(name, compiler_output, ipfs_backend, package_root_dir) + + +@curry +def _pin_source( + name: str, + compiler_output: Dict[str, Any], + ipfs_backend: BaseIPFSBackend, + package_root_dir: Optional[Path], + manifest: Manifest, +) -> Manifest: + names_and_paths = get_names_and_paths(compiler_output) + try: + source_path = names_and_paths[name] + except KeyError: + raise ManifestBuildingError( + f"Unable to pin source: {name}. " + f"Available sources include: {list(sorted(names_and_paths.keys()))}." + ) + if package_root_dir: + if not (package_root_dir / source_path).is_file(): + raise ManifestBuildingError( + f"Unable to find and pin contract source: {source_path} " + f"under specified package_root_dir: {package_root_dir}." + ) + (ipfs_data,) = ipfs_backend.pin_assets(package_root_dir / source_path) + else: + cwd = Path.cwd() + if not (cwd / source_path).is_file(): + raise ManifestBuildingError( + f"Unable to find and pin contract source: {source_path} " + f"current working directory: {cwd}." 
+            )
+        (ipfs_data,) = ipfs_backend.pin_assets(cwd / source_path)
+
+    source_data_object = {
+        "urls": [f"ipfs://{ipfs_data['Hash']}"],
+        "type": "solidity",
+        "installPath": source_path,
+    }
+    return assoc_in(manifest, ["sources", source_path], source_data_object)
+
+
+#
+# Contract Types
+#
+
+
+def contract_type(
+    name: str,
+    compiler_output: Dict[str, Any],
+    alias: Optional[str] = None,
+    abi: Optional[bool] = False,
+    compiler: Optional[bool] = False,
+    contract_type: Optional[bool] = False,
+    deployment_bytecode: Optional[bool] = False,
+    devdoc: Optional[bool] = False,
+    userdoc: Optional[bool] = False,
+    source_id: Optional[bool] = False,
+    runtime_bytecode: Optional[bool] = False,
+) -> Manifest:
+    """
+    Returns a copy of manifest with added contract_data field as specified by kwargs.
+    If no kwargs are present, all available contract_data found in the compiler output
+    will be included.
+
+    To include specific contract_data fields, add a kwarg set to True (e.g. `abi=True`).
+    To alias a contract_type, include a kwarg `alias` (e.g. `alias="OwnedAlias"`).
+    If only an alias kwarg is provided, all available contract data will be included.
+    Kwargs must match fields as defined in the EthPM Spec (except "alias") if the user
+    wants to include them in a custom contract_type.
+    """
+    contract_type_fields = {
+        "contractType": contract_type,
+        "deploymentBytecode": deployment_bytecode,
+        "runtimeBytecode": runtime_bytecode,
+        "abi": abi,
+        "compiler": compiler,
+        "userdoc": userdoc,
+        "devdoc": devdoc,
+        "sourceId": source_id,
+    }
+    selected_fields = [k for k, v in contract_type_fields.items() if v]
+    return _contract_type(name, compiler_output, alias, selected_fields)
+
+
+@curry
+def _contract_type(
+    name: str,
+    compiler_output: Dict[str, Any],
+    alias: Optional[str],
+    selected_fields: Optional[List[str]],
+    manifest: Manifest,
+) -> Manifest:
+    contracts_by_name = normalize_compiler_output(compiler_output)
+    try:
+        all_type_data = contracts_by_name[name]
+    except KeyError:
+        raise ManifestBuildingError(
+            f"Contract name: {name} not found in the provided compiler output."
+        )
+    if selected_fields:
+        contract_type_data = filter_all_data_by_selected_fields(
+            all_type_data, selected_fields
+        )
+    else:
+        contract_type_data = all_type_data
+
+    if "compiler" in contract_type_data:
+        compiler_info = contract_type_data.pop('compiler')
+        contract_type_ref = alias if alias else name
+        manifest_with_compilers = add_compilers_to_manifest(
+            compiler_info, contract_type_ref, manifest
+        )
+    else:
+        manifest_with_compilers = manifest
+
+    if alias:
+        return assoc_in(
+            manifest_with_compilers,
+            ["contractTypes", alias],
+            assoc(contract_type_data, "contractType", name),
+        )
+    return assoc_in(manifest_with_compilers, ["contractTypes", name], contract_type_data)
+
+
+def add_compilers_to_manifest(
+    compiler_info: Dict[str, Any], contract_type: str, manifest: Manifest
+) -> Manifest:
+    """
+    Adds a compiler information object to a manifest's top-level `compilers`.
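+
+    For orientation, a single entry in `compilers` looks roughly like this
+    (the version string and contract type name are illustrative):
+
+    .. code:: python
+
+        {"name": "solc", "version": "0.8.0+commit.c7dfd78e",
+         "settings": {"optimize": True}, "contractTypes": ["Owned"]}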
+ """ + if "compilers" not in manifest: + compiler_info['contractTypes'] = [contract_type] + return assoc_in(manifest, ["compilers"], [compiler_info]) + + updated_compiler_info = update_compilers_object( + compiler_info, contract_type, manifest["compilers"] + ) + return assoc_in(manifest, ["compilers"], updated_compiler_info) + + +@to_list +def update_compilers_object( + new_compiler: Dict[str, Any], contract_type: str, previous_compilers: List[Dict[str, Any]] +) -> Iterable[Dict[str, Any]]: + """ + Updates a manifest's top-level `compilers` with a new compiler information object. + - If compiler version already exists, we just update the compiler's `contractTypes` + """ + recorded_new_contract_type = False + for compiler in previous_compilers: + contract_types = compiler.pop("contractTypes") + if contract_type in contract_types: + raise ManifestBuildingError( + f"Contract type: {contract_type} already referenced in `compilers`." + ) + if compiler == new_compiler: + contract_types.append(contract_type) + recorded_new_contract_type = True + compiler["contractTypes"] = contract_types + yield compiler + + if not recorded_new_contract_type: + new_compiler["contractTypes"] = [contract_type] + yield new_compiler + + +@to_dict +def filter_all_data_by_selected_fields( + all_type_data: Dict[str, Any], selected_fields: List[str] +) -> Iterable[Tuple[str, Any]]: + """ + Raises exception if selected field data is not available in the contract type data + automatically gathered by normalize_compiler_output. Otherwise, returns the data. + """ + for field in selected_fields: + if field in all_type_data: + yield field, all_type_data[field] + else: + raise ManifestBuildingError( + f"Selected field: {field} not available in data collected from solc output: " + f"{list(sorted(all_type_data.keys()))}. Please make sure the relevant data " + "is present in your solc output." + ) + + +def normalize_compiler_output(compiler_output: Dict[str, Any]) -> Dict[str, Any]: + """ + Return compiler output with normalized fields for each contract type, + as specified in `normalize_contract_type`. + """ + paths_and_names = [ + (path, contract_name) + for path in compiler_output + for contract_name in compiler_output[path].keys() + ] + paths, names = zip(*paths_and_names) + if len(names) != len(set(names)): + duplicates = set([name for name in names if names.count(name) > 1]) + raise ManifestBuildingError( + f"Duplicate contract types: {duplicates} were found in the compiler output." + ) + return { + name: normalize_contract_type(compiler_output[path][name], path) + for path, name in paths_and_names + } + + +@to_dict +def normalize_contract_type( + contract_type_data: Dict[str, Any], + source_id: str, +) -> Iterable[Tuple[str, Any]]: + """ + Serialize contract_data found in compiler output to the defined fields. 
+ """ + yield "abi", contract_type_data["abi"] + yield "sourceId", source_id + if "evm" in contract_type_data: + if "bytecode" in contract_type_data["evm"]: + yield "deploymentBytecode", normalize_bytecode_object( + contract_type_data["evm"]["bytecode"] + ) + if "deployedBytecode" in contract_type_data["evm"]: + yield "runtimeBytecode", normalize_bytecode_object( + contract_type_data["evm"]["deployedBytecode"] + ) + if "devdoc" in contract_type_data: + yield "devdoc", contract_type_data['devdoc'] + if "userdoc" in contract_type_data: + yield "userdoc", contract_type_data['userdoc'] + # make sure metadata isn't an empty string in solc output + if "metadata" in contract_type_data and contract_type_data["metadata"]: + yield "compiler", normalize_compiler_object( + json.loads(contract_type_data["metadata"]) + ) + + +@to_dict +def normalize_compiler_object(obj: Dict[str, Any]) -> Iterable[Tuple[str, Any]]: + yield "name", "solc" + yield "version", obj["compiler"]["version"] + yield "settings", {"optimize": obj["settings"]["optimizer"]["enabled"]} + + +@to_dict +def normalize_bytecode_object(obj: Dict[str, Any]) -> Iterable[Tuple[str, Any]]: + try: + link_references = obj["linkReferences"] + except KeyError: + link_references = None + try: + bytecode = obj["object"] + except KeyError: + raise ManifestBuildingError( + "'object' key not found in bytecode data from compiler output. " + "Please make sure your solidity compiler output is valid." + ) + if link_references: + yield "linkReferences", process_link_references(link_references, bytecode) + yield "bytecode", process_bytecode(link_references, bytecode) + else: + yield "bytecode", add_0x_prefix(bytecode) + + +def process_bytecode(link_refs: Dict[str, Any], bytecode: bytes) -> HexStr: + """ + Replace link_refs in bytecode with 0's. + """ + all_offsets = [y for x in link_refs.values() for y in x.values()] + # Link ref validation. + validate_link_ref_fns = ( + validate_link_ref(ref["start"] * 2, ref["length"] * 2) + for ref in concat(all_offsets) + ) + pipe(bytecode, *validate_link_ref_fns) + # Convert link_refs in bytecode to 0's + link_fns = ( + replace_link_ref_in_bytecode(ref["start"] * 2, ref["length"] * 2) + for ref in concat(all_offsets) + ) + processed_bytecode = pipe(bytecode, *link_fns) + return add_0x_prefix(processed_bytecode) + + +@curry +def replace_link_ref_in_bytecode(offset: int, length: int, bytecode: str) -> str: + new_bytes = ( + bytecode[:offset] + "0" * length + bytecode[offset + length :] # noqa: E203 + ) + return new_bytes + + +# todo pull all bytecode linking/validating across py-ethpm into shared utils +@to_list +def process_link_references( + link_refs: Dict[str, Any], bytecode: str +) -> Iterable[Dict[str, Any]]: + for link_ref in link_refs.values(): + yield normalize_link_ref(link_ref, bytecode) + + +def normalize_link_ref(link_ref: Dict[str, Any], bytecode: str) -> Dict[str, Any]: + name = list(link_ref.keys())[0] + return { + "name": name, + "length": 20, + "offsets": normalize_offsets(link_ref, bytecode), + } + + +@to_list +def normalize_offsets(data: Dict[str, Any], bytecode: str) -> Iterable[List[int]]: + for link_ref in data.values(): + for ref in link_ref: + yield ref["start"] + + +@curry +def validate_link_ref(offset: int, length: int, bytecode: str) -> str: + slot_length = offset + length + slot = bytecode[offset:slot_length] + if slot[:2] != "__" and slot[-2:] != "__": + raise EthPMValidationError( + f"Slot: {slot}, at offset: {offset} of length: {length} is not a valid " + "link_ref that can be replaced." 
+ ) + return bytecode + + +# +# Deployments +# + + +def deployment_type( + *, + contract_instance: str, + contract_type: str, + deployment_bytecode: Dict[str, Any] = None, + runtime_bytecode: Dict[str, Any] = None, + compiler: Dict[str, Any] = None, +) -> Manifest: + """ + Returns a callable that allows the user to add deployments of the same type + across multiple chains. + """ + return _deployment_type( + contract_instance, + contract_type, + deployment_bytecode, + runtime_bytecode, + compiler, + ) + + +def deployment( + *, + block_uri: URI, + contract_instance: str, + contract_type: str, + address: HexStr, + transaction: HexStr = None, + block: HexStr = None, + deployment_bytecode: Dict[str, Any] = None, + runtime_bytecode: Dict[str, Any] = None, + compiler: Dict[str, Any] = None, +) -> Manifest: + """ + Returns a manifest, with the newly included deployment. Requires a valid blockchain URI, + however no validation is provided that this URI is unique amongst the other deployment + URIs, so the user must take care that each blockchain URI represents a unique blockchain. + """ + return _deployment( + contract_instance, + contract_type, + deployment_bytecode, + runtime_bytecode, + compiler, + block_uri, + address, + transaction, + block, + ) + + +@curry +def _deployment_type( + contract_instance: str, + contract_type: str, + deployment_bytecode: Dict[str, Any], + runtime_bytecode: Dict[str, Any], + compiler: Dict[str, Any], + block_uri: URI, + address: HexStr, + tx: HexStr = None, + block: HexStr = None, + manifest: Manifest = None, +) -> Manifest: + return _deployment( + contract_instance, + contract_type, + deployment_bytecode, + runtime_bytecode, + compiler, + block_uri, + address, + tx, + block, + ) + + +@curry +def _deployment( + contract_instance: str, + contract_type: str, + deployment_bytecode: Dict[str, Any], + runtime_bytecode: Dict[str, Any], + compiler: Dict[str, Any], + block_uri: URI, + address: HexStr, + tx: HexStr, + block: HexStr, + manifest: Manifest, +) -> Manifest: + validate_address(address) + if not is_BIP122_block_uri(block_uri): + raise ManifestBuildingError(f"{block_uri} is not a valid BIP122 URI.") + + if tx: + if not is_string(tx) and not is_hex(tx): + raise ManifestBuildingError( + f"Transaction hash: {tx} is not a valid hexstring" + ) + + if block: + if not is_string(block) and not is_hex(block): + raise ManifestBuildingError(f"Block hash: {block} is not a valid hexstring") + # todo: validate db, rb and compiler are properly formatted dicts + deployment_data = _build_deployments_object( + contract_type, + deployment_bytecode, + runtime_bytecode, + compiler, + address, + tx, + block, + manifest, + ) + return assoc_in( + manifest, ["deployments", block_uri, contract_instance], deployment_data + ) + + +@to_dict +def _build_deployments_object( + contract_type: str, + deployment_bytecode: Dict[str, Any], + runtime_bytecode: Dict[str, Any], + compiler: Dict[str, Any], + address: HexStr, + tx: HexStr, + block: HexStr, + manifest: Dict[str, Any], +) -> Iterable[Tuple[str, Any]]: + """ + Returns a dict with properly formatted deployment data. 
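+
+    For example (the address and transaction hash are illustrative), a deployment
+    with only a transaction hash provided would yield:
+
+    .. code:: python
+
+        {"contractType": "Owned",
+         "address": "0xd3CdA913deB6f67967B99D67aCDFa1712C293601",
+         "transaction": "0x123..."}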
+ """ + yield "contractType", contract_type + yield "address", to_checksum_address(address) + if deployment_bytecode: + yield "deploymentBytecode", deployment_bytecode + if compiler: + yield "compiler", compiler + if tx: + yield "transaction", tx + if block: + yield "block", block + if runtime_bytecode: + yield "runtimeBytecode", runtime_bytecode + + +# +# Build Dependencies +# + + +def build_dependency(package_name: str, uri: URI) -> Manifest: + """ + Returns the manifest with injected build dependency. + """ + return _build_dependency(package_name, uri) + + +@curry +def _build_dependency(package_name: str, uri: URI, manifest: Manifest) -> Manifest: + validate_package_name(package_name) + if not is_supported_content_addressed_uri(uri): + raise EthPMValidationError( + f"{uri} is not a supported content-addressed URI. " + "Currently only IPFS and Github blob uris are supported." + ) + return assoc_in(manifest, ("buildDependencies", package_name), uri) + + +# +# Helpers +# + + +@curry +def init_manifest( + package_name: str, version: str, manifest_version: Optional[str] = "ethpm/3" +) -> Dict[str, Any]: + """ + Returns an initial dict with the minimal requried fields for a valid manifest. + Should only be used as the first fn to be piped into a `build()` pipeline. + """ + return { + "name": package_name, + "version": version, + "manifest": manifest_version, + } + + +# +# Formatting +# + + +@curry +def validate(manifest: Manifest) -> Manifest: + """ + Return a validated manifest against the V2-specification schema. + """ + validate_manifest_against_schema(manifest) + return manifest + + +@curry +def as_package(w3: Web3, manifest: Manifest) -> Package: + """ + Return a Package object instantiated with the provided manifest and web3 instance. + """ + return Package(manifest, w3) + + +def write_to_disk( + manifest_root_dir: Optional[Path] = None, + manifest_name: Optional[str] = None, + prettify: Optional[bool] = False, +) -> Manifest: + """ + Write the active manifest to disk + Defaults + - Writes manifest to cwd unless Path is provided as manifest_root_dir. + - Writes manifest with a filename of Manifest[version].json unless a desired + manifest name (which must end in json) is provided as manifest_name. + - Writes the minified manifest version to disk unless prettify is set to True. + """ + return _write_to_disk(manifest_root_dir, manifest_name, prettify) + + +@curry +def _write_to_disk( + manifest_root_dir: Optional[Path], + manifest_name: Optional[str], + prettify: Optional[bool], + manifest: Manifest, +) -> Manifest: + if manifest_root_dir: + if manifest_root_dir.is_dir(): + cwd = manifest_root_dir + else: + raise ManifestBuildingError( + f"Manifest root directory: {manifest_root_dir} cannot be found, please " + "provide a valid directory for writing the manifest to disk. " + "(Path obj // leave manifest_root_dir blank to default to cwd)" + ) + else: + cwd = Path.cwd() + + if manifest_name: + if not manifest_name.lower().endswith(".json"): + raise ManifestBuildingError( + f"Invalid manifest name: {manifest_name}. 
All manifest names must end in .json" + ) + disk_manifest_name = manifest_name + else: + disk_manifest_name = manifest["version"] + ".json" + + contents = format_manifest(manifest, prettify=prettify) + + if (cwd / disk_manifest_name).is_file(): + raise ManifestBuildingError( + f"Manifest: {disk_manifest_name} already exists in cwd: {cwd}" + ) + (cwd / disk_manifest_name).write_text(contents) + return manifest + + +@curry +def pin_to_ipfs( + manifest: Manifest, *, backend: BaseIPFSBackend, prettify: Optional[bool] = False +) -> List[Dict[str, str]]: + """ + Returns the IPFS pin data after pinning the manifest to the provided IPFS Backend. + + `pin_to_ipfs()` Should *always* be the last argument in a builder, as it will return the pin + data and not the manifest. + """ + contents = format_manifest(manifest, prettify=prettify) + + with tempfile.NamedTemporaryFile() as temp: + temp.write(to_bytes(text=contents)) + temp.seek(0) + return backend.pin_assets(Path(temp.name)) diff --git a/env/lib/python3.8/site-packages/ethpm/tools/checker.py b/env/lib/python3.8/site-packages/ethpm/tools/checker.py new file mode 100644 index 00000000..27050cd8 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/tools/checker.py @@ -0,0 +1,307 @@ +import re +from typing import ( + Any, + Dict, +) + +from eth_typing import ( + Manifest, +) +from eth_utils.toolz import ( + assoc, + assoc_in, + curry, +) + +from ethpm.constants import ( + PACKAGE_NAME_REGEX, +) +from ethpm.tools.builder import ( + build, +) + +# todo: validate no duplicate blockchain uris in deployments, if web3 is available + +WARNINGS = { + "manifest_missing": "Manifest missing a required 'manifest' field.", + "manifest_invalid": "'manifest' is invalid. The only supported version is 'ethpm/3'.", + "name_missing": "Manifest missing a suggested 'name' field", + "name_invalid": "'name' is invalid. " + f"Doesn't match the regex: {PACKAGE_NAME_REGEX}", + "version_missing": "Manifest missing a suggested 'version' field.", + "meta_missing": "Manifest missing a suggested 'meta' field.", + "authors_missing": "'meta' field missing suggested 'authors' field.", + "description_missing": "'meta' field missing suggested 'description' field.", + "links_missing": "'meta' field missing suggested 'links' field.", + "license_missing": "'meta' field missing suggested 'license' field.", + "keywords_missing": "'meta' field missing suggested 'keywords' field.", + "sources_missing": """Manifest is missing a sources field, which defines a source tree """ + """that should comprise the full source tree necessary to recompile the contracts """ + """contained in this release.""", + "contract_type_missing": """Manifest does not contain any 'contractTypes'. Packages """ + """should only include contract types that can be found in the source files for this """ + """package. Packages should not include contract types from dependencies. 
Packages """ + """should not include abstract contracts in the contract types section of a release.""", + "abi_missing": """Contract type: {0} is missing an abi field, which is essential for using """ + """this package.""", + "deployment_bytecode_missing": """Contract type: {0} is missing a `deploymentBytecode` field,""" + """ which is essential for using this package.""", + "contract_type_subfield_missing": """Contract type: {0} is missing a `contractType` field, """ + """which is essential if an alias is being used to namespace this contract type.""", + "runtime_bytecode_missing": "Contract type: {0} is missing a `runtimeBytecode` field.", + "bytecode_subfield_missing": """Contract type: {0} is missing a required bytecode subfield """ + """in its {1} bytecode object.""", + "devdoc_missing": "Contract type: {0} is missing a devdoc field.", + "userdoc_missing": "Contract type: {0} is missing a userdoc field.", + "compilers_missing": "Manifest is missing a suggested `compilers` field.", +} + + +# +# Validation +# + + +def check_manifest(manifest: Manifest) -> Dict[str, str]: + generate_warnings = ( + check_manifest_version(manifest), + check_package_name(manifest), + check_version(manifest), + check_meta(manifest), + check_sources(manifest), + check_contract_types(manifest), + check_compilers(manifest), + ) + return build({}, *generate_warnings) + + +# +# Required fields +# + + +@curry +def check_manifest_version( + manifest: Manifest, warnings: Dict[str, str] +) -> Dict[str, str]: + if "manifest" not in manifest or not manifest["manifest"]: + return assoc(warnings, "manifest", WARNINGS["manifest_missing"]) + if manifest["manifest"] != "ethpm/3": + return assoc(warnings, "manifest", WARNINGS["manifest_invalid"]) + return warnings + + +@curry +def check_package_name(manifest: Manifest, warnings: Dict[str, str]) -> Dict[str, str]: + if "name" not in manifest or not manifest["name"]: + return assoc(warnings, "name", WARNINGS["name_missing"]) + if not bool(re.match(PACKAGE_NAME_REGEX, manifest["name"])): + return assoc(warnings, "name", WARNINGS["name_invalid"]) + return warnings + + +@curry +def check_version(manifest: Manifest, warnings: Dict[str, str]) -> Dict[str, str]: + if "version" not in manifest or not manifest["version"]: + return assoc(warnings, "version", WARNINGS["version_missing"]) + return warnings + + +# +# Meta fields +# + + +@curry +def check_meta(manifest: Manifest, warnings: Dict[str, str]) -> Dict[str, str]: + if "meta" not in manifest or not manifest["meta"]: + return assoc(warnings, "meta", WARNINGS["meta_missing"]) + meta_validation = ( + check_authors(manifest["meta"]), + check_license(manifest["meta"]), + check_description(manifest["meta"]), + check_keywords(manifest["meta"]), + check_links(manifest["meta"]), + ) + return build(warnings, *meta_validation) + + +@curry +def check_authors(meta: Dict[str, Any], warnings: Dict[str, str]) -> Dict[str, str]: + if "authors" not in meta: + return assoc(warnings, "meta.authors", WARNINGS["authors_missing"]) + return warnings + + +@curry +def check_license(meta: Dict[str, Any], warnings: Dict[str, str]) -> Dict[str, str]: + if "license" not in meta or not meta["license"]: + return assoc(warnings, "meta.license", WARNINGS["license_missing"]) + return warnings + + +@curry +def check_description(meta: Dict[str, Any], warnings: Dict[str, str]) -> Dict[str, str]: + if "description" not in meta or not meta["description"]: + return assoc(warnings, "meta.description", WARNINGS["description_missing"]) + return warnings + + +@curry +def 
check_keywords(meta: Dict[str, Any], warnings: Dict[str, str]) -> Dict[str, str]: + if "keywords" not in meta or not meta["keywords"]: + return assoc(warnings, "meta.keywords", WARNINGS["keywords_missing"]) + return warnings + + +@curry +def check_links(meta: Dict[str, Any], warnings: Dict[str, str]) -> Dict[str, str]: + if "links" not in meta or not meta["links"]: + return assoc(warnings, "meta.links", WARNINGS["links_missing"]) + return warnings + + +# +# Sources +# + + +@curry +def check_sources(manifest: Manifest, warnings: Dict[str, str]) -> Dict[str, str]: + if "sources" not in manifest or not manifest["sources"]: + return assoc(warnings, "sources", WARNINGS["sources_missing"]) + return warnings + + +# +# Contract Types +# + +# todo: validate a contract type matches source +@curry +def check_contract_types( + manifest: Manifest, warnings: Dict[str, str] +) -> Dict[str, str]: + if "contractTypes" not in manifest or not manifest["contractTypes"]: + return assoc(warnings, "contractTypes", WARNINGS["contract_type_missing"]) + + all_contract_type_validations = ( + ( + check_abi(contract_name, data), + check_contract_type(contract_name, data), + check_deployment_bytecode(contract_name, data), + check_runtime_bytecode(contract_name, data), + check_devdoc(contract_name, data), + check_userdoc(contract_name, data), + ) + for contract_name, data in manifest["contractTypes"].items() + ) + return build(warnings, *sum(all_contract_type_validations, ())) + + +@curry +def check_abi( + contract_name: str, data: Dict[str, Any], warnings: Dict[str, str] +) -> Dict[str, str]: + if "abi" not in data or not data["abi"]: + return assoc_in( + warnings, + ["contractTypes", contract_name, "abi"], + WARNINGS["abi_missing"].format(contract_name), + ) + return warnings + + +@curry +def check_contract_type( + contract_name: str, data: Dict[str, Any], warnings: Dict[str, str] +) -> Dict[str, str]: + if "contractType" not in data or not data["contractType"]: + return assoc_in( + warnings, + ["contractTypes", contract_name, "contractType"], + WARNINGS["contract_type_subfield_missing"].format(contract_name), + ) + return warnings + + +@curry +def check_deployment_bytecode( + contract_name: str, data: Dict[str, Any], warnings: Dict[str, str] +) -> Dict[str, str]: + if "deploymentBytecode" not in data or not data["deploymentBytecode"]: + return assoc_in( + warnings, + ["contractTypes", contract_name, "deploymentBytecode"], + WARNINGS["deployment_bytecode_missing"].format(contract_name), + ) + return build( + warnings, + check_bytecode_object(contract_name, "deployment", data["deploymentBytecode"]), + ) + + +@curry +def check_runtime_bytecode( + contract_name: str, data: Dict[str, Any], warnings: Dict[str, str] +) -> Dict[str, str]: + if "runtimeBytecode" not in data or not data["runtimeBytecode"]: + return assoc_in( + warnings, + ["contractTypes", contract_name, "runtimeBytecode"], + WARNINGS["runtime_bytecode_missing"].format(contract_name), + ) + return build( + warnings, + check_bytecode_object(contract_name, "runtime", data["runtimeBytecode"]), + ) + + +@curry +def check_bytecode_object( + contract_name: str, + bytecode_type: str, + bytecode_data: Dict[str, Any], + warnings: Dict[str, str], +) -> Dict[str, str]: + # todo: check if bytecode has link_refs & validate link_refs present in object + if "bytecode" not in bytecode_data or not bytecode_data["bytecode"]: + return assoc_in( + warnings, + ["contractTypes", contract_name, f"{bytecode_type}Bytecode"], + 
WARNINGS["bytecode_subfield_missing"].format(contract_name, bytecode_type), + ) + return warnings + + +@curry +def check_devdoc( + contract_name: str, data: Dict[str, Any], warnings: Dict[str, str] +) -> Dict[str, str]: + if "devdoc" not in data or not data["devdoc"]: + return assoc_in( + warnings, + ["contractTypes", contract_name, "devdoc"], + WARNINGS["devdoc_missing"].format(contract_name), + ) + return warnings + + +@curry +def check_userdoc( + contract_name: str, data: Dict[str, Any], warnings: Dict[str, str] +) -> Dict[str, str]: + if "userdoc" not in data or not data["userdoc"]: + return assoc_in( + warnings, + ["contractTypes", contract_name, "userdoc"], + WARNINGS["userdoc_missing"].format(contract_name), + ) + return warnings + + +@curry +def check_compilers(manifest: Manifest, warnings: Dict[str, str]) -> Dict[str, str]: + if "compilers" not in manifest or not manifest["compilers"]: + return assoc(warnings, "compilers", WARNINGS["compilers_missing"]) + return warnings diff --git a/env/lib/python3.8/site-packages/ethpm/tools/get_manifest.py b/env/lib/python3.8/site-packages/ethpm/tools/get_manifest.py new file mode 100644 index 00000000..4633f486 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/tools/get_manifest.py @@ -0,0 +1,19 @@ +import json +from typing import ( + Any, + Dict, +) + +from ethpm import ( + ASSETS_DIR, + get_ethpm_spec_dir, +) + + +def get_ethpm_spec_manifest(use_case: str, filename: str) -> Dict[str, Any]: + ethpm_spec_dir = get_ethpm_spec_dir() + return json.loads((ethpm_spec_dir / 'examples' / use_case / filename).read_text()) + + +def get_ethpm_local_manifest(use_case: str, filename: str) -> Dict[str, Any]: + return json.loads((ASSETS_DIR / use_case / filename).read_text()) diff --git a/env/lib/python3.8/site-packages/ethpm/uri.py b/env/lib/python3.8/site-packages/ethpm/uri.py new file mode 100644 index 00000000..286f2d49 --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/uri.py @@ -0,0 +1,133 @@ +import json + +from eth_typing import ( + URI, +) +from eth_utils import ( + encode_hex, + to_hex, +) +from eth_utils.toolz import ( + curry, +) +import requests + +from ethpm._utils.backend import ( + get_resolvable_backends_for_uri, + get_translatable_backends_for_uri, +) +from ethpm._utils.chains import ( + BLOCK, + create_block_uri, + get_genesis_block_hash, + parse_BIP122_uri, +) +from ethpm._utils.ipfs import ( + is_ipfs_uri, +) +from ethpm.backends.http import ( + is_valid_api_github_uri, + is_valid_content_addressed_github_uri, +) +from ethpm.backends.registry import ( + RegistryURIBackend, +) +from ethpm.exceptions import ( + CannotHandleURI, +) +from web3 import Web3 +from web3.types import ( + BlockNumber, +) + + +def resolve_uri_contents(uri: URI, fingerprint: bool = None) -> bytes: + resolvable_backends = get_resolvable_backends_for_uri(uri) + if resolvable_backends: + for backend in resolvable_backends: + try: + # type ignored to handle case if URI is returned + contents: bytes = backend().fetch_uri_contents(uri) # type: ignore + except CannotHandleURI: + continue + return contents + + translatable_backends = get_translatable_backends_for_uri(uri) + if translatable_backends: + if fingerprint: + raise CannotHandleURI( + "Registry URIs must point to a resolvable content-addressed URI." + ) + package_id = RegistryURIBackend().fetch_uri_contents(uri) + return resolve_uri_contents(package_id, True) + + raise CannotHandleURI( + f"URI: {uri} cannot be resolved by any of the available backends." 
+    )
+
+
+def create_content_addressed_github_uri(uri: URI) -> URI:
+    """
+    Returns a content-addressed Github "git_url" that conforms to this scheme.
+    https://api.github.com/repos/:owner/:repo/git/blobs/:file_sha
+
+    Accepts Github-defined "url" that conforms to this scheme
+    https://api.github.com/repos/:owner/:repo/contents/:path/:to/manifest.json
+    """
+    if not is_valid_api_github_uri(uri):
+        raise CannotHandleURI(f"{uri} does not conform to Github's API 'url' scheme.")
+    response = requests.get(uri)
+    response.raise_for_status()
+    contents = json.loads(response.content)
+    if contents["type"] != "file":
+        raise CannotHandleURI(
+            f"Expected url to point to a 'file' type, instead received {contents['type']}."
+        )
+    return contents["git_url"]
+
+
+def is_supported_content_addressed_uri(uri: URI) -> bool:
+    """
+    Returns a bool indicating whether the provided uri is currently supported.
+    Currently Py-EthPM only supports IPFS and Github blob content-addressed uris.
+    """
+    if not is_ipfs_uri(uri) and not is_valid_content_addressed_github_uri(uri):
+        return False
+    return True
+
+
+def create_latest_block_uri(w3: Web3, from_blocks_ago: int = 3) -> URI:
+    """
+    Creates a block uri for the given w3 instance.
+    Defaults to 3 blocks prior to the "latest" block to accommodate block reorgs.
+    If using a testnet with fewer than 3 mined blocks, adjust :from_blocks_ago:.
+    """
+    chain_id = to_hex(get_genesis_block_hash(w3))
+    latest_block_tx_receipt = w3.eth.getBlock("latest")
+    target_block_number = BlockNumber(latest_block_tx_receipt["number"] - from_blocks_ago)
+    if target_block_number < 0:
+        raise Exception(
+            f"Only {latest_block_tx_receipt['number']} blocks available on provided w3, "
+            f"cannot create latest block uri for {from_blocks_ago} blocks ago."
+        )
+    recent_block = to_hex(w3.eth.getBlock(target_block_number)["hash"])
+    return create_block_uri(chain_id, recent_block)
+
+
+@curry
+def check_if_chain_matches_chain_uri(web3: Web3, blockchain_uri: URI) -> bool:
+    chain_id, resource_type, resource_hash = parse_BIP122_uri(blockchain_uri)
+    genesis_block = web3.eth.getBlock("earliest")
+
+    if encode_hex(genesis_block["hash"]) != chain_id:
+        return False
+
+    if resource_type == BLOCK:
+        resource = web3.eth.getBlock(resource_hash)
+    else:
+        raise ValueError(f"Unsupported resource type: {resource_type}")
+
+    if encode_hex(resource["hash"]) == resource_hash:
+        return True
+    else:
+        return False
diff --git a/env/lib/python3.8/site-packages/ethpm/validation/__init__.py b/env/lib/python3.8/site-packages/ethpm/validation/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/env/lib/python3.8/site-packages/ethpm/validation/manifest.py b/env/lib/python3.8/site-packages/ethpm/validation/manifest.py
new file mode 100644
index 00000000..6aba1b10
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/validation/manifest.py
@@ -0,0 +1,147 @@
+from abc import (
+    ABCMeta,
+)
+import json
+from typing import (
+    Any,
+    Dict,
+    List,
+    Set,
+    cast,
+)
+
+from jsonschema import (
+    ValidationError as jsonValidationError,
+    validate,
+)
+from jsonschema.validators import (
+    Draft7Validator,
+    validator_for,
+)
+
+from ethpm import (
+    get_ethpm_spec_dir,
+)
+from ethpm.exceptions import (
+    EthPMValidationError,
+)
+
+META_FIELDS = {
+    "license": str,
+    "authors": list,
+    "description": str,
+    "keywords": list,
+    "links": dict,
+}
+
+
+def validate_meta_object(meta: Dict[str, Any], allow_extra_meta_fields: bool) -> None:
+    """
+    Validates that every key is one of `META_FIELDS` and has a value
+    of the expected type.
+    """
+    for key, value in meta.items():
+        if key in META_FIELDS:
+            if cast(ABCMeta, type(value)) is not META_FIELDS[key]:
+                raise EthPMValidationError(
+                    f"Values for {key} are expected to have the type {META_FIELDS[key]}, "
+                    f"instead got {type(value)}."
+                )
+        elif allow_extra_meta_fields:
+            if key[:2] != "x-":
+                raise EthPMValidationError(
+                    "Undefined meta fields need to begin with 'x-', "
+                    f"{key} is not a valid undefined meta field."
+                )
+        else:
+            raise EthPMValidationError(
+                f"{key} is not a permitted meta field. To allow undefined fields, "
+                "set `allow_extra_meta_fields` to True."
+            )
+
+
+def _load_schema_data() -> Dict[str, Any]:
+    ethpm_spec_dir = get_ethpm_spec_dir()
+    v3_schema_path = ethpm_spec_dir / "spec" / "v3.spec.json"
+    return json.loads(v3_schema_path.read_text())
+
+
+def extract_contract_types_from_deployments(deployment_data: List[Any]) -> Set[str]:
+    contract_types = set(
+        deployment["contractType"]
+        for chain_deployments in deployment_data
+        for deployment in chain_deployments.values()
+    )
+    return contract_types
+
+
+def validate_manifest_against_schema(manifest: Dict[str, Any]) -> None:
+    """
+    Load and validate manifest against schema
+    located at v3_schema_path.
+    """
+    schema_data = _load_schema_data()
+    try:
+        validate(manifest, schema_data, cls=validator_for(schema_data, Draft7Validator))
+    except jsonValidationError as e:
+        raise EthPMValidationError(
+            f"Manifest invalid for schema version {schema_data['version']}. "
+            f"Reason: {e.message}"
+        )
+
+
+def check_for_deployments(manifest: Dict[str, Any]) -> bool:
+    if "deployments" not in manifest or not manifest["deployments"]:
+        return False
+    return True
+
+
+def validate_build_dependencies_are_present(manifest: Dict[str, Any]) -> None:
+    if "buildDependencies" not in manifest:
+        raise EthPMValidationError("Manifest doesn't have any build dependencies.")
+
+    if not manifest["buildDependencies"]:
+        raise EthPMValidationError("Manifest's build dependencies key is empty.")
+
+
+def validate_manifest_deployments(manifest: Dict[str, Any]) -> None:
+    """
+    Validate that a manifest's deployments contracts reference existing contractTypes.
+    """
+    if set(("contractTypes", "deployments")).issubset(manifest):
+        all_contract_types = list(manifest["contractTypes"].keys())
+        all_deployments = list(manifest["deployments"].values())
+        all_deployment_names = extract_contract_types_from_deployments(all_deployments)
+        missing_contract_types = set(all_deployment_names).difference(
+            all_contract_types
+        )
+        if missing_contract_types:
+            raise EthPMValidationError(
+                f"Manifest missing references to contracts: {missing_contract_types}."
+            )
+
+
+def validate_raw_manifest_format(raw_manifest: str) -> None:
+    """
+    Raise an EthPMValidationError if a manifest ...
+    - is not tightly packed (i.e. no linebreaks or extra whitespace)
+    - does not have alphabetically sorted keys
+    - has duplicate keys
+    - is not UTF-8 encoded
+    - has a trailing newline
+    """
+    try:
+        manifest_dict = json.loads(raw_manifest)
+    except json.JSONDecodeError as err:
+        raise json.JSONDecodeError(
+            "Failed to load package data. File is not a valid JSON document.",
+            err.doc,
+            err.pos,
+        )
+    compact_manifest = json.dumps(manifest_dict, sort_keys=True, separators=(",", ":"))
+    if raw_manifest != compact_manifest:
+        raise EthPMValidationError(
+            "The manifest appears to be malformed. Please ensure that it conforms to the "
+            "EthPM-Spec for document format. "
+            "http://ethpm.github.io/ethpm-spec/package-spec.html#document-format "
+        )
diff --git a/env/lib/python3.8/site-packages/ethpm/validation/misc.py b/env/lib/python3.8/site-packages/ethpm/validation/misc.py
new file mode 100644
index 00000000..52848900
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/validation/misc.py
@@ -0,0 +1,36 @@
+from urllib import (
+    parse,
+)
+
+from ethpm.exceptions import (
+    EthPMValidationError,
+)
+from web3 import Web3
+
+
+def validate_w3_instance(w3: Web3) -> None:
+    if w3 is None or not isinstance(w3, Web3):
+        raise ValueError("Package does not have valid web3 instance.")
+
+
+def validate_empty_bytes(offset: int, length: int, bytecode: bytes) -> None:
+    """
+    Validates that segment [`offset`:`offset`+`length`] of
+    `bytecode` is comprised of empty bytes (b'\00').
+    """
+    slot_length = offset + length
+    slot = bytecode[offset:slot_length]
+    if slot != bytearray(length):
+        raise EthPMValidationError(
+            f"Bytecode segment: [{offset}:{slot_length}] is not comprised of empty bytes, "
+            f"rather: {slot}."
+        )
+
+
+def validate_escaped_string(string: str) -> None:
+    unsafe = parse.unquote(string)
+    safe = parse.quote(unsafe)
+    if string != safe:
+        raise EthPMValidationError(
+            f"String: {string} is not properly escaped, and contains url unsafe characters."
+        )
diff --git a/env/lib/python3.8/site-packages/ethpm/validation/package.py b/env/lib/python3.8/site-packages/ethpm/validation/package.py
new file mode 100644
index 00000000..e3052e4f
--- /dev/null
+++ b/env/lib/python3.8/site-packages/ethpm/validation/package.py
@@ -0,0 +1,80 @@
+import re
+from typing import (
+    Any,
+    Dict,
+)
+
+from eth_utils import (
+    is_text,
+)
+
+from ethpm._utils.ipfs import (
+    is_ipfs_uri,
+)
+from ethpm.constants import (
+    PACKAGE_NAME_REGEX,
+)
+from ethpm.exceptions import (
+    EthPMValidationError,
+    InsufficientAssetsError,
+)
+
+
+def validate_minimal_contract_factory_data(contract_data: Dict[str, str]) -> None:
+    """
+    Validate that contract data in a package contains at least an "abi" and
+    "deploymentBytecode" necessary to generate a deployable contract factory.
+    """
+    if not all(key in contract_data.keys() for key in ("abi", "deploymentBytecode")):
+        raise InsufficientAssetsError(
+            "Minimum required contract data to generate a deployable "
+            "contract factory (abi & deploymentBytecode) not found."
+        )
+
+
+def validate_package_version(version: Any) -> None:
+    """
+    Validates that a package version is of text type.
+    """
+    if not is_text(version):
+        raise EthPMValidationError(
+            f"Expected a version of text type, instead received {type(version)}."
+        )
+
+
+def validate_package_name(pkg_name: str) -> None:
+    """
+    Raise an exception if the value is not a valid package name
+    as defined in the EthPM-Spec.
+    """
+    if not bool(re.match(PACKAGE_NAME_REGEX, pkg_name)):
+        raise EthPMValidationError(f"{pkg_name} is not a valid package name.")
+
+
+def validate_manifest_version(version: str) -> None:
+    """
+    Raise an exception if the version is not "ethpm/3".
+    """
+    if not version == "ethpm/3":
+        raise EthPMValidationError(
+            f"Py-EthPM does not support the provided specification version: {version}"
+        )
+
+
+def validate_build_dependency(key: str, uri: str) -> None:
+    """
+    Raise an exception if the key in dependencies is not a valid package name,
+    or if the value is not a valid IPFS URI.
+ """ + validate_package_name(key) + # validate is supported content-addressed uri + if not is_ipfs_uri(uri): + raise EthPMValidationError(f"URI: {uri} is not a valid IPFS URI.") + + +CONTRACT_NAME_REGEX = re.compile("^[a-zA-Z][-a-zA-Z0-9_]{0,255}$") + + +def validate_contract_name(name: str) -> None: + if not CONTRACT_NAME_REGEX.match(name): + raise EthPMValidationError(f"Contract name: {name} is not valid.") diff --git a/env/lib/python3.8/site-packages/ethpm/validation/uri.py b/env/lib/python3.8/site-packages/ethpm/validation/uri.py new file mode 100644 index 00000000..a733276f --- /dev/null +++ b/env/lib/python3.8/site-packages/ethpm/validation/uri.py @@ -0,0 +1,154 @@ +import hashlib +from typing import ( + List, +) +from urllib import ( + parse, +) + +from eth_utils import ( + is_checksum_address, + to_bytes, + to_int, + to_text, +) + +from ethpm._utils.chains import ( + is_supported_chain_id, +) +from ethpm._utils.ipfs import ( + is_ipfs_uri, +) +from ethpm._utils.registry import ( + is_ens_domain, +) +from ethpm.constants import ( + REGISTRY_URI_SCHEMES, +) +from ethpm.exceptions import ( + EthPMValidationError, +) +from ethpm.validation.misc import ( + validate_escaped_string, +) +from ethpm.validation.package import ( + validate_package_name, +) +from web3 import Web3 + + +def validate_ipfs_uri(uri: str) -> None: + """ + Raise an exception if the provided URI is not a valid IPFS URI. + """ + if not is_ipfs_uri(uri): + raise EthPMValidationError(f"URI: {uri} is not a valid IPFS URI.") + + +def validate_registry_uri(uri: str) -> None: + """ + Raise an exception if the URI does not conform to the registry URI scheme. + """ + parsed = parse.urlparse(uri) + scheme, authority, pkg_path = ( + parsed.scheme, + parsed.netloc, + parsed.path, + ) + pkg_id = pkg_path.strip("/") + + if "@" in pkg_id: + if len(pkg_id.split("@")) != 2: + raise EthPMValidationError("Registry URI: {pkg_id} is not properly escaped") + pkg_name, pkg_version = pkg_id.split("@") + else: + pkg_name, pkg_version = (pkg_id, None) + + validate_registry_uri_scheme(scheme) + validate_registry_uri_authority(authority) + if pkg_name: + validate_package_name(pkg_name) + if not pkg_name and pkg_version: + raise EthPMValidationError("Registry URIs cannot provide a version without a package name.") + if pkg_version: + validate_escaped_string(pkg_version) + + +def validate_registry_uri_authority(auth: str) -> None: + """ + Raise an exception if the authority is not a valid ENS domain + or a valid checksummed contract address. + """ + if ":" in auth: + if len(auth.split(":")) != 2: + raise EthPMValidationError( + f"{auth} is not a valid registry URI authority. " + "Please try again with a valid registry URI." + ) + address, chain_id = auth.split(':') + else: + address, chain_id = auth, '1' + + if is_ens_domain(address) is False and not is_checksum_address(address): + raise EthPMValidationError( + f"{address} is not a valid registry address. " + "Please try again with a valid registry URI." + ) + + if not is_supported_chain_id(to_int(text=chain_id)): + raise EthPMValidationError( + f"Chain ID: {chain_id} is not supported. Supported chain ids include: " + "1 (mainnet), 3 (ropsten), 4 (rinkeby), 5 (goerli) and 42 (kovan). " + "Please try again with a valid registry URI." 
+        )
+
+
+def validate_registry_uri_scheme(scheme: str) -> None:
+    """
+    Raise an exception if the scheme is not a valid registry URI scheme:
+    - 'erc1319'
+    - 'ethpm'
+    """
+    if scheme not in REGISTRY_URI_SCHEMES:
+        raise EthPMValidationError(
+            f"{scheme} is not a valid registry URI scheme. "
+            f"Valid schemes include: {REGISTRY_URI_SCHEMES}"
+        )
+
+
+def validate_single_matching_uri(all_blockchain_uris: List[str], w3: Web3) -> str:
+    """
+    Return a single block URI after validating that it is the *only* URI in
+    all_blockchain_uris that matches the w3 instance.
+    """
+    from ethpm.uri import check_if_chain_matches_chain_uri
+
+    matching_uris = [
+        uri for uri in all_blockchain_uris if check_if_chain_matches_chain_uri(w3, uri)
+    ]
+
+    if not matching_uris:
+        raise EthPMValidationError("Package has no matching URIs on chain.")
+    elif len(matching_uris) != 1:
+        raise EthPMValidationError(
+            f"Package has too many ({len(matching_uris)}) matching URIs: {matching_uris}."
+        )
+    return matching_uris[0]
+
+
+def validate_blob_uri_contents(contents: bytes, blob_uri: str) -> None:
+    """
+    Raises an exception if the sha1 hash of the contents does not match the hash found in the
+    blob_uri. Formula for how git calculates the hash found here:
+    http://alblue.bandlem.com/2011/08/git-tip-of-week-objects.html
+    """
+    blob_path = parse.urlparse(blob_uri).path
+    blob_hash = blob_path.split("/")[-1]
+    contents_str = to_text(contents)
+    content_length = len(contents_str)
+    hashable_contents = "blob " + str(content_length) + "\0" + contents_str
+    hash_object = hashlib.sha1(to_bytes(text=hashable_contents))
+    if hash_object.hexdigest() != blob_hash:
+        raise EthPMValidationError(
+            f"Hash of contents fetched from {blob_uri} does not match its hash: {blob_hash}."
+        )
diff --git a/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/INSTALLER b/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/INSTALLER
new file mode 100644
index 00000000..a1b589e3
--- /dev/null
+++ b/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/LICENSE b/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/LICENSE
new file mode 100644
index 00000000..50d4fa5e
--- /dev/null
+++ b/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/LICENSE
@@ -0,0 +1,73 @@
+The MIT License (MIT)
+
+Copyright (c) 2022 Alex Grönholm
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
+the Software, and to permit persons to whom the Software is furnished to do so,
+subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
+FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
+COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
+IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ + +This project contains code copied from the Python standard library. +The following is the required license notice for those parts. + +PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 +-------------------------------------------- + +1. This LICENSE AGREEMENT is between the Python Software Foundation +("PSF"), and the Individual or Organization ("Licensee") accessing and +otherwise using this software ("Python") in source or binary form and +its associated documentation. + +2. Subject to the terms and conditions of this License Agreement, PSF hereby +grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, +analyze, test, perform and/or display publicly, prepare derivative works, +distribute, and otherwise use Python alone or in any derivative version, +provided, however, that PSF's License Agreement and PSF's notice of copyright, +i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, +2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022 Python Software Foundation; +All Rights Reserved" are retained in Python alone or in any derivative version +prepared by Licensee. + +3. In the event Licensee prepares a derivative work that is based on +or incorporates Python or any part thereof, and wants to make +the derivative work available to others as provided herein, then +Licensee hereby agrees to include in any such work a brief summary of +the changes made to Python. + +4. PSF is making Python available to Licensee on an "AS IS" +basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR +IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND +DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS +FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT +INFRINGE ANY THIRD PARTY RIGHTS. + +5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON +FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS +A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, +OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +6. This License Agreement will automatically terminate upon a material +breach of its terms and conditions. + +7. Nothing in this License Agreement shall be deemed to create any +relationship of agency, partnership, or joint venture between PSF and +Licensee. This License Agreement does not grant permission to use PSF +trademarks or trade name in a trademark sense to endorse or promote +products or services of Licensee, or any third party. + +8. By copying, installing or otherwise using Python, Licensee +agrees to be bound by the terms and conditions of this License +Agreement. 
diff --git a/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/METADATA b/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/METADATA new file mode 100644 index 00000000..be4d0937 --- /dev/null +++ b/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/METADATA @@ -0,0 +1,141 @@ +Metadata-Version: 2.1 +Name: exceptiongroup +Version: 1.0.4 +Summary: Backport of PEP 654 (exception groups) +Author-email: Alex Grönholm +Requires-Python: >=3.7 +Description-Content-Type: text/x-rst +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Typing :: Typed +Requires-Dist: pytest >= 6 ; extra == "test" +Project-URL: Changelog, https://github.com/agronholm/exceptiongroup/blob/main/CHANGES.rst +Project-URL: Issue Tracker, https://github.com/agronholm/exceptiongroup/issues +Project-URL: Source code, https://github.com/agronholm/exceptiongroup +Provides-Extra: test + +.. image:: https://github.com/agronholm/exceptiongroup/actions/workflows/test.yml/badge.svg + :target: https://github.com/agronholm/exceptiongroup/actions/workflows/test.yml + :alt: Build Status +.. image:: https://coveralls.io/repos/github/agronholm/exceptiongroup/badge.svg?branch=main + :target: https://coveralls.io/github/agronholm/exceptiongroup?branch=main + :alt: Code Coverage + +This is a backport of the ``BaseExceptionGroup`` and ``ExceptionGroup`` classes from +Python 3.11. + +It contains the following: + +* The ``exceptiongroup.BaseExceptionGroup`` and ``exceptiongroup.ExceptionGroup`` + classes +* A utility function (``exceptiongroup.catch()``) for catching exceptions possibly + nested in an exception group +* Patches to the ``TracebackException`` class that properly formats exception groups + (installed on import) +* An exception hook that handles formatting of exception groups through + ``TracebackException`` (installed on import) +* Special versions of some of the functions from the ``traceback`` module, modified to + correctly handle exception groups even when monkey patching is disabled, or blocked by + another custom exception hook: + + * ``traceback.format_exception()`` + * ``traceback.format_exception_only()`` + * ``traceback.print_exception()`` + * ``traceback.print_exc()`` + +If this package is imported on Python 3.11 or later, the built-in implementations of the +exception group classes are used instead, ``TracebackException`` is not monkey patched +and the exception hook won't be installed. + +See the `standard library documentation`_ for more information on exception groups. + +.. _standard library documentation: https://docs.python.org/3/library/exceptions.html + +Catching exceptions +=================== + +Due to the lack of the ``except*`` syntax introduced by `PEP 654`_ in earlier Python +versions, you need to use ``exceptiongroup.catch()`` to catch exceptions that are +potentially nested inside an exception group. This function returns a context manager +that calls the given handler for any exceptions matching the sole argument. + +The argument to ``catch()`` must be a dict (or any ``Mapping``) where each key is either +an exception class or an iterable of exception classes. Each value must be a callable +that takes a single positional argument. 
The handler will be called at most once, with
+an exception group as an argument which will contain all the exceptions that are any
+of the given types, or their subclasses. The exception group may contain nested groups
+containing more matching exceptions.
+
+Thus, the following Python 3.11+ code:
+
+.. code-block:: python3
+
+    try:
+        ...
+    except* (ValueError, KeyError) as excgroup:
+        for exc in excgroup.exceptions:
+            print('Caught exception:', type(exc))
+    except* RuntimeError:
+        print('Caught runtime error')
+
+would be written with this backport like this:
+
+.. code-block:: python3
+
+    from exceptiongroup import ExceptionGroup, catch
+
+    def value_key_err_handler(excgroup: ExceptionGroup) -> None:
+        for exc in excgroup.exceptions:
+            print('Caught exception:', type(exc))
+
+    def runtime_err_handler(exc: ExceptionGroup) -> None:
+        print('Caught runtime error')
+
+    with catch({
+        (ValueError, KeyError): value_key_err_handler,
+        RuntimeError: runtime_err_handler
+    }):
+        ...
+
+**NOTE**: Just like with ``except*``, you cannot handle ``BaseExceptionGroup`` or
+``ExceptionGroup`` with ``catch()``.
+
+Notes on monkey patching
+========================
+
+To make exception groups render properly when an unhandled exception group is being
+printed out, this package does two things when it is imported on any Python version
+earlier than 3.11:
+
+#. The ``traceback.TracebackException`` class is monkey patched to store extra
+   information about exception groups (in ``__init__()``) and properly format them (in
+   ``format()``)
+#. An exception hook is installed at ``sys.excepthook``, provided that no other hook is
+   already present. This hook causes the exception to be formatted using
+   ``traceback.TracebackException`` rather than the built-in renderer.
+
+If ``sys.excepthook`` is found to be set to something other than the default when
+``exceptiongroup`` is imported, no monkey patching is done at all.
+
+To prevent the exception hook and patches from being installed, set the environment
+variable ``EXCEPTIONGROUP_NO_PATCH`` to ``1``.
+
+Formatting exception groups
+---------------------------
+
+Normally, the monkey patching applied by this library on import will cause exception
+groups to be printed properly in tracebacks. But in cases when the monkey patching is
+blocked by a third party exception hook, or monkey patching is explicitly disabled,
+you can still manually format exceptions using the special versions of the ``traceback``
+functions, like ``format_exception()``, listed at the top of this page. They work just
+like their counterparts in the ``traceback`` module, except that they use a separately
+patched subclass of ``TracebackException`` to perform the rendering.
+
+Particularly in cases where a library installs its own exception hook, it is recommended
+to use these special versions to do the actual formatting of exceptions/tracebacks.
+
+..
_PEP 654: https://www.python.org/dev/peps/pep-0654/ + diff --git a/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/RECORD b/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/RECORD new file mode 100644 index 00000000..0b9fdeba --- /dev/null +++ b/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/RECORD @@ -0,0 +1,16 @@ +exceptiongroup-1.0.4.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +exceptiongroup-1.0.4.dist-info/LICENSE,sha256=blBw12UDHgrUA6HL-Qrm0ZoCKPgC4yC3rP9GCqcu1Hw,3704 +exceptiongroup-1.0.4.dist-info/METADATA,sha256=OC66zYYnqbzPyOjVzWq00upl5X9pe1Jz1s3OPwkQWdA,6083 +exceptiongroup-1.0.4.dist-info/RECORD,, +exceptiongroup-1.0.4.dist-info/WHEEL,sha256=rSgq_JpHF9fHR1lx53qwg_1-2LypZE_qmcuXbVUq948,81 +exceptiongroup/__init__.py,sha256=Zr-MMWDXFMdrRBC8nS8UFqATRQL0Z-Lb-SANqs-uu0I,920 +exceptiongroup/__pycache__/__init__.cpython-38.pyc,, +exceptiongroup/__pycache__/_catch.cpython-38.pyc,, +exceptiongroup/__pycache__/_exceptions.cpython-38.pyc,, +exceptiongroup/__pycache__/_formatting.cpython-38.pyc,, +exceptiongroup/__pycache__/_version.cpython-38.pyc,, +exceptiongroup/_catch.py,sha256=m7BHRSU_kKy_pqMISPArf99zW1RgMG00i8623f7_sEo,3578 +exceptiongroup/_exceptions.py,sha256=cd6Awi65bB94Qr_jPznaMe8BRWPLtwBisu7Ppk8_MLM,9687 +exceptiongroup/_formatting.py,sha256=LBv947qWeg2foEXPwgJl9S9lp2bH4OfRlWqxqXYXOZg,19035 +exceptiongroup/_version.py,sha256=jBXjl_hABxpdVyNAw5ycdFE6qZsjSMVOiQTI0Wr25wE,176 +exceptiongroup/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 diff --git a/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/WHEEL b/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/WHEEL new file mode 100644 index 00000000..db4a255f --- /dev/null +++ b/env/lib/python3.8/site-packages/exceptiongroup-1.0.4.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: flit 3.8.0 +Root-Is-Purelib: true +Tag: py3-none-any diff --git a/env/lib/python3.8/site-packages/exceptiongroup/__init__.py b/env/lib/python3.8/site-packages/exceptiongroup/__init__.py new file mode 100644 index 00000000..0e7e02bc --- /dev/null +++ b/env/lib/python3.8/site-packages/exceptiongroup/__init__.py @@ -0,0 +1,40 @@ +__all__ = [ + "BaseExceptionGroup", + "ExceptionGroup", + "catch", + "format_exception", + "format_exception_only", + "print_exception", + "print_exc", +] + +import os +import sys + +from ._catch import catch +from ._version import version as __version__ # noqa: F401 + +if sys.version_info < (3, 11): + from ._exceptions import BaseExceptionGroup, ExceptionGroup + from ._formatting import ( + format_exception, + format_exception_only, + print_exc, + print_exception, + ) + + if os.getenv("EXCEPTIONGROUP_NO_PATCH") != "1": + from . 
import _formatting # noqa: F401 + + BaseExceptionGroup.__module__ = __name__ + ExceptionGroup.__module__ = __name__ +else: + from traceback import ( + format_exception, + format_exception_only, + print_exc, + print_exception, + ) + + BaseExceptionGroup = BaseExceptionGroup + ExceptionGroup = ExceptionGroup diff --git a/env/lib/python3.8/site-packages/exceptiongroup/_catch.py b/env/lib/python3.8/site-packages/exceptiongroup/_catch.py new file mode 100644 index 00000000..aa16d16b --- /dev/null +++ b/env/lib/python3.8/site-packages/exceptiongroup/_catch.py @@ -0,0 +1,114 @@ +from __future__ import annotations + +import sys +from collections.abc import Callable, Iterable, Mapping +from contextlib import AbstractContextManager +from types import TracebackType +from typing import TYPE_CHECKING, Any + +if sys.version_info < (3, 11): + from ._exceptions import BaseExceptionGroup + +if TYPE_CHECKING: + _Handler = Callable[[BaseException], Any] + + +class _Catcher: + def __init__(self, handler_map: Mapping[tuple[type[BaseException], ...], _Handler]): + self._handler_map = handler_map + + def __enter__(self) -> None: + pass + + def __exit__( + self, + etype: type[BaseException] | None, + exc: BaseException | None, + tb: TracebackType | None, + ) -> bool: + if exc is not None: + unhandled = self.handle_exception(exc) + if unhandled is exc: + return False + elif unhandled is None: + return True + else: + raise unhandled from None + + return False + + def handle_exception(self, exc: BaseException) -> BaseExceptionGroup | None: + excgroup: BaseExceptionGroup | None + if isinstance(exc, BaseExceptionGroup): + excgroup = exc + else: + excgroup = BaseExceptionGroup("", [exc]) + + new_exceptions: list[BaseException] = [] + for exc_types, handler in self._handler_map.items(): + matched, excgroup = excgroup.split(exc_types) + if matched: + try: + handler(matched) + except BaseException as new_exc: + new_exceptions.append(new_exc) + + if not excgroup: + break + + if new_exceptions: + if excgroup: + new_exceptions.append(excgroup) + + return BaseExceptionGroup("", new_exceptions) + elif ( + excgroup and len(excgroup.exceptions) == 1 and excgroup.exceptions[0] is exc + ): + return exc + else: + return excgroup + + +def catch( + __handlers: Mapping[type[BaseException] | Iterable[type[BaseException]], _Handler] +) -> AbstractContextManager[None]: + if not isinstance(__handlers, Mapping): + raise TypeError("the argument must be a mapping") + + handler_map: dict[ + tuple[type[BaseException], ...], Callable[[BaseExceptionGroup]] + ] = {} + for type_or_iterable, handler in __handlers.items(): + iterable: tuple[type[BaseException]] + if isinstance(type_or_iterable, type) and issubclass( + type_or_iterable, BaseException + ): + iterable = (type_or_iterable,) + elif isinstance(type_or_iterable, Iterable): + iterable = tuple(type_or_iterable) + else: + raise TypeError( + "each key must be either an exception classes or an iterable thereof" + ) + + if not callable(handler): + raise TypeError("handlers must be callable") + + for exc_type in iterable: + if not isinstance(exc_type, type) or not issubclass( + exc_type, BaseException + ): + raise TypeError( + "each key must be either an exception classes or an iterable " + "thereof" + ) + + if issubclass(exc_type, BaseExceptionGroup): + raise TypeError( + "catching ExceptionGroup with catch() is not allowed. " + "Use except instead." 
+ ) + + handler_map[iterable] = handler + + return _Catcher(handler_map) diff --git a/env/lib/python3.8/site-packages/exceptiongroup/_exceptions.py b/env/lib/python3.8/site-packages/exceptiongroup/_exceptions.py new file mode 100644 index 00000000..ebdd1720 --- /dev/null +++ b/env/lib/python3.8/site-packages/exceptiongroup/_exceptions.py @@ -0,0 +1,280 @@ +from __future__ import annotations + +from collections.abc import Callable, Sequence +from functools import partial +from inspect import getmro, isclass +from typing import TYPE_CHECKING, Any, Generic, Type, TypeVar, cast, overload + +if TYPE_CHECKING: + from typing import Self + +_BaseExceptionT_co = TypeVar("_BaseExceptionT_co", bound=BaseException, covariant=True) +_BaseExceptionT = TypeVar("_BaseExceptionT", bound=BaseException) +_ExceptionT_co = TypeVar("_ExceptionT_co", bound=Exception, covariant=True) +_ExceptionT = TypeVar("_ExceptionT", bound=Exception) + + +def check_direct_subclass( + exc: BaseException, parents: tuple[type[BaseException]] +) -> bool: + for cls in getmro(exc.__class__)[:-1]: + if cls in parents: + return True + + return False + + +def get_condition_filter( + condition: type[_BaseExceptionT] + | tuple[type[_BaseExceptionT], ...] + | Callable[[_BaseExceptionT_co], bool] +) -> Callable[[_BaseExceptionT_co], bool]: + if isclass(condition) and issubclass( + cast(Type[BaseException], condition), BaseException + ): + return partial(check_direct_subclass, parents=(condition,)) + elif isinstance(condition, tuple): + if all(isclass(x) and issubclass(x, BaseException) for x in condition): + return partial(check_direct_subclass, parents=condition) + elif callable(condition): + return cast("Callable[[BaseException], bool]", condition) + + raise TypeError("expected a function, exception type or tuple of exception types") + + +class BaseExceptionGroup(BaseException, Generic[_BaseExceptionT_co]): + """A combination of multiple unrelated exceptions.""" + + def __new__( + cls, __message: str, __exceptions: Sequence[_BaseExceptionT_co] + ) -> Self: + if not isinstance(__message, str): + raise TypeError(f"argument 1 must be str, not {type(__message)}") + if not isinstance(__exceptions, Sequence): + raise TypeError("second argument (exceptions) must be a sequence") + if not __exceptions: + raise ValueError( + "second argument (exceptions) must be a non-empty sequence" + ) + + for i, exc in enumerate(__exceptions): + if not isinstance(exc, BaseException): + raise ValueError( + f"Item {i} of second argument (exceptions) is not an " f"exception" + ) + + if cls is BaseExceptionGroup: + if all(isinstance(exc, Exception) for exc in __exceptions): + cls = ExceptionGroup + + return super().__new__(cls, __message, __exceptions) + + def __init__( + self, __message: str, __exceptions: Sequence[_BaseExceptionT_co], *args: Any + ): + super().__init__(__message, __exceptions, *args) + self._message = __message + self._exceptions = __exceptions + + def add_note(self, note: str) -> None: + if not isinstance(note, str): + raise TypeError( + f"Expected a string, got note={note!r} (type {type(note).__name__})" + ) + + if not hasattr(self, "__notes__"): + self.__notes__: list[str] = [] + + self.__notes__.append(note) + + @property + def message(self) -> str: + return self._message + + @property + def exceptions( + self, + ) -> tuple[_BaseExceptionT_co | BaseExceptionGroup[_BaseExceptionT_co], ...]: + return tuple(self._exceptions) + + @overload + def subgroup( + self, __condition: type[_BaseExceptionT] | tuple[type[_BaseExceptionT], ...] 
+ ) -> BaseExceptionGroup[_BaseExceptionT] | None: + ... + + @overload + def subgroup( + self: Self, __condition: Callable[[_BaseExceptionT_co], bool] + ) -> Self | None: + ... + + def subgroup( + self: Self, + __condition: type[_BaseExceptionT] + | tuple[type[_BaseExceptionT], ...] + | Callable[[_BaseExceptionT_co], bool], + ) -> BaseExceptionGroup[_BaseExceptionT] | Self | None: + condition = get_condition_filter(__condition) + modified = False + if condition(self): + return self + + exceptions: list[BaseException] = [] + for exc in self.exceptions: + if isinstance(exc, BaseExceptionGroup): + subgroup = exc.subgroup(__condition) + if subgroup is not None: + exceptions.append(subgroup) + + if subgroup is not exc: + modified = True + elif condition(exc): + exceptions.append(exc) + else: + modified = True + + if not modified: + return self + elif exceptions: + group = self.derive(exceptions) + group.__cause__ = self.__cause__ + group.__context__ = self.__context__ + group.__traceback__ = self.__traceback__ + return group + else: + return None + + @overload + def split( + self: Self, + __condition: type[_BaseExceptionT] | tuple[type[_BaseExceptionT], ...], + ) -> tuple[BaseExceptionGroup[_BaseExceptionT] | None, Self | None]: + ... + + @overload + def split( + self: Self, __condition: Callable[[_BaseExceptionT_co], bool] + ) -> tuple[Self | None, Self | None]: + ... + + def split( + self: Self, + __condition: type[_BaseExceptionT] + | tuple[type[_BaseExceptionT], ...] + | Callable[[_BaseExceptionT_co], bool], + ) -> tuple[BaseExceptionGroup[_BaseExceptionT] | None, Self | None] | tuple[ + Self | None, Self | None + ]: + condition = get_condition_filter(__condition) + if condition(self): + return self, None + + matching_exceptions: list[BaseException] = [] + nonmatching_exceptions: list[BaseException] = [] + for exc in self.exceptions: + if isinstance(exc, BaseExceptionGroup): + matching, nonmatching = exc.split(condition) + if matching is not None: + matching_exceptions.append(matching) + + if nonmatching is not None: + nonmatching_exceptions.append(nonmatching) + elif condition(exc): + matching_exceptions.append(exc) + else: + nonmatching_exceptions.append(exc) + + matching_group: Self | None = None + if matching_exceptions: + matching_group = self.derive(matching_exceptions) + matching_group.__cause__ = self.__cause__ + matching_group.__context__ = self.__context__ + matching_group.__traceback__ = self.__traceback__ + + nonmatching_group: Self | None = None + if nonmatching_exceptions: + nonmatching_group = self.derive(nonmatching_exceptions) + nonmatching_group.__cause__ = self.__cause__ + nonmatching_group.__context__ = self.__context__ + nonmatching_group.__traceback__ = self.__traceback__ + + return matching_group, nonmatching_group + + def derive(self: Self, __excs: Sequence[_BaseExceptionT_co]) -> Self: + eg = BaseExceptionGroup(self.message, __excs) + if hasattr(self, "__notes__"): + # Create a new list so that add_note() only affects one exceptiongroup + eg.__notes__ = list(self.__notes__) + + return eg + + def __str__(self) -> str: + suffix = "" if len(self._exceptions) == 1 else "s" + return f"{self.message} ({len(self._exceptions)} sub-exception{suffix})" + + def __repr__(self) -> str: + return f"{self.__class__.__name__}({self.message!r}, {self._exceptions!r})" + + +class ExceptionGroup(BaseExceptionGroup[_ExceptionT_co], Exception): + def __new__(cls, __message: str, __exceptions: Sequence[_ExceptionT_co]) -> Self: + instance: ExceptionGroup[_ExceptionT_co] = super().__new__( 
+ cls, __message, __exceptions + ) + if cls is ExceptionGroup: + for exc in __exceptions: + if not isinstance(exc, Exception): + raise TypeError("Cannot nest BaseExceptions in an ExceptionGroup") + + return instance + + if TYPE_CHECKING: + + @property + def exceptions( + self, + ) -> tuple[_ExceptionT_co | ExceptionGroup[_ExceptionT_co], ...]: + ... + + @overload # type: ignore[override] + def subgroup( + self, __condition: type[_ExceptionT] | tuple[type[_ExceptionT], ...] + ) -> ExceptionGroup[_ExceptionT] | None: + ... + + @overload + def subgroup( + self: Self, __condition: Callable[[_ExceptionT_co], bool] + ) -> Self | None: + ... + + def subgroup( + self: Self, + __condition: type[_ExceptionT] + | tuple[type[_ExceptionT], ...] + | Callable[[_ExceptionT_co], bool], + ) -> ExceptionGroup[_ExceptionT] | Self | None: + return super().subgroup(__condition) + + @overload # type: ignore[override] + def split( + self: Self, __condition: type[_ExceptionT] | tuple[type[_ExceptionT], ...] + ) -> tuple[ExceptionGroup[_ExceptionT] | None, Self | None]: + ... + + @overload + def split( + self: Self, __condition: Callable[[_ExceptionT_co], bool] + ) -> tuple[Self | None, Self | None]: + ... + + def split( + self: Self, + __condition: type[_ExceptionT] + | tuple[type[_ExceptionT], ...] + | Callable[[_ExceptionT_co], bool], + ) -> tuple[ExceptionGroup[_ExceptionT] | None, Self | None] | tuple[ + Self | None, Self | None + ]: + return super().split(__condition) diff --git a/env/lib/python3.8/site-packages/exceptiongroup/_formatting.py b/env/lib/python3.8/site-packages/exceptiongroup/_formatting.py new file mode 100644 index 00000000..4ff269bb --- /dev/null +++ b/env/lib/python3.8/site-packages/exceptiongroup/_formatting.py @@ -0,0 +1,554 @@ +# traceback_exception_init() adapted from trio +# +# _ExceptionPrintContext and traceback_exception_format() copied from the standard +# library +from __future__ import annotations + +import collections.abc +import sys +import textwrap +import traceback +from functools import singledispatch +from types import TracebackType +from typing import Any, List, Optional + +from ._exceptions import BaseExceptionGroup + +max_group_width = 15 +max_group_depth = 10 +_cause_message = ( + "\nThe above exception was the direct cause of the following exception:\n\n" +) + +_context_message = ( + "\nDuring handling of the above exception, another exception occurred:\n\n" +) + + +def _format_final_exc_line(etype, value): + valuestr = _safe_string(value, "exception") + if value is None or not valuestr: + line = f"{etype}\n" + else: + line = f"{etype}: {valuestr}\n" + + return line + + +def _safe_string(value, what, func=str): + try: + return func(value) + except BaseException: + return f"<{what} {func.__name__}() failed>" + + +class _ExceptionPrintContext: + def __init__(self): + self.seen = set() + self.exception_group_depth = 0 + self.need_close = False + + def indent(self): + return " " * (2 * self.exception_group_depth) + + def emit(self, text_gen, margin_char=None): + if margin_char is None: + margin_char = "|" + indent_str = self.indent() + if self.exception_group_depth: + indent_str += margin_char + " " + + if isinstance(text_gen, str): + yield textwrap.indent(text_gen, indent_str, lambda line: True) + else: + for text in text_gen: + yield textwrap.indent(text, indent_str, lambda line: True) + + +def exceptiongroup_excepthook( + etype: type[BaseException], value: BaseException, tb: TracebackType | None +) -> None: + sys.stderr.write("".join(traceback.format_exception(etype, 
value, tb))) + + +class PatchedTracebackException(traceback.TracebackException): + def __init__( + self, + exc_type: type[BaseException], + exc_value: BaseException, + exc_traceback: TracebackType | None, + *, + limit: int | None = None, + lookup_lines: bool = True, + capture_locals: bool = False, + compact: bool = False, + _seen: set[int] | None = None, + ) -> None: + kwargs: dict[str, Any] = {} + if sys.version_info >= (3, 10): + kwargs["compact"] = compact + + is_recursive_call = _seen is not None + if _seen is None: + _seen = set() + _seen.add(id(exc_value)) + + self.stack = traceback.StackSummary.extract( + traceback.walk_tb(exc_traceback), + limit=limit, + lookup_lines=lookup_lines, + capture_locals=capture_locals, + ) + self.exc_type = exc_type + # Capture now to permit freeing resources: only complication is in the + # unofficial API _format_final_exc_line + self._str = _safe_string(exc_value, "exception") + self.__notes__ = getattr(exc_value, "__notes__", None) + + if exc_type and issubclass(exc_type, SyntaxError): + # Handle SyntaxError's specially + self.filename = exc_value.filename + lno = exc_value.lineno + self.lineno = str(lno) if lno is not None else None + self.text = exc_value.text + self.offset = exc_value.offset + self.msg = exc_value.msg + if sys.version_info >= (3, 10): + end_lno = exc_value.end_lineno + self.end_lineno = str(end_lno) if end_lno is not None else None + self.end_offset = exc_value.end_offset + elif ( + exc_type + and issubclass(exc_type, (NameError, AttributeError)) + and getattr(exc_value, "name", None) is not None + ): + suggestion = _compute_suggestion_error(exc_value, exc_traceback) + if suggestion: + self._str += f". Did you mean: '{suggestion}'?" + + if lookup_lines: + # Force all lines in the stack to be loaded + for frame in self.stack: + frame.line + + self.__suppress_context__ = ( + exc_value.__suppress_context__ if exc_value is not None else False + ) + + # Convert __cause__ and __context__ to `TracebackExceptions`s, use a + # queue to avoid recursion (only the top-level call gets _seen == None) + if not is_recursive_call: + queue = [(self, exc_value)] + while queue: + te, e = queue.pop() + + if e and e.__cause__ is not None and id(e.__cause__) not in _seen: + cause = PatchedTracebackException( + type(e.__cause__), + e.__cause__, + e.__cause__.__traceback__, + limit=limit, + lookup_lines=lookup_lines, + capture_locals=capture_locals, + _seen=_seen, + ) + else: + cause = None + + if compact: + need_context = ( + cause is None and e is not None and not e.__suppress_context__ + ) + else: + need_context = True + if ( + e + and e.__context__ is not None + and need_context + and id(e.__context__) not in _seen + ): + context = PatchedTracebackException( + type(e.__context__), + e.__context__, + e.__context__.__traceback__, + limit=limit, + lookup_lines=lookup_lines, + capture_locals=capture_locals, + _seen=_seen, + ) + else: + context = None + + # Capture each of the exceptions in the ExceptionGroup along with each + # of their causes and contexts + if e and isinstance(e, BaseExceptionGroup): + exceptions = [] + for exc in e.exceptions: + texc = PatchedTracebackException( + type(exc), + exc, + exc.__traceback__, + lookup_lines=lookup_lines, + capture_locals=capture_locals, + _seen=_seen, + ) + exceptions.append(texc) + else: + exceptions = None + + te.__cause__ = cause + te.__context__ = context + te.exceptions = exceptions + if cause: + queue.append((te.__cause__, e.__cause__)) + if context: + queue.append((te.__context__, e.__context__)) + if 
exceptions: + queue.extend(zip(te.exceptions, e.exceptions)) + + def format(self, *, chain=True, _ctx=None): + if _ctx is None: + _ctx = _ExceptionPrintContext() + + output = [] + exc = self + if chain: + while exc: + if exc.__cause__ is not None: + chained_msg = _cause_message + chained_exc = exc.__cause__ + elif exc.__context__ is not None and not exc.__suppress_context__: + chained_msg = _context_message + chained_exc = exc.__context__ + else: + chained_msg = None + chained_exc = None + + output.append((chained_msg, exc)) + exc = chained_exc + else: + output.append((None, exc)) + + for msg, exc in reversed(output): + if msg is not None: + yield from _ctx.emit(msg) + if exc.exceptions is None: + if exc.stack: + yield from _ctx.emit("Traceback (most recent call last):\n") + yield from _ctx.emit(exc.stack.format()) + yield from _ctx.emit(exc.format_exception_only()) + elif _ctx.exception_group_depth > max_group_depth: + # exception group, but depth exceeds limit + yield from _ctx.emit(f"... (max_group_depth is {max_group_depth})\n") + else: + # format exception group + is_toplevel = _ctx.exception_group_depth == 0 + if is_toplevel: + _ctx.exception_group_depth += 1 + + if exc.stack: + yield from _ctx.emit( + "Exception Group Traceback (most recent call last):\n", + margin_char="+" if is_toplevel else None, + ) + yield from _ctx.emit(exc.stack.format()) + + yield from _ctx.emit(exc.format_exception_only()) + num_excs = len(exc.exceptions) + if num_excs <= max_group_width: + n = num_excs + else: + n = max_group_width + 1 + _ctx.need_close = False + for i in range(n): + last_exc = i == n - 1 + if last_exc: + # The closing frame may be added by a recursive call + _ctx.need_close = True + + if max_group_width is not None: + truncated = i >= max_group_width + else: + truncated = False + title = f"{i + 1}" if not truncated else "..." + yield ( + _ctx.indent() + + ("+-" if i == 0 else " ") + + f"+---------------- {title} ----------------\n" + ) + _ctx.exception_group_depth += 1 + if not truncated: + yield from exc.exceptions[i].format(chain=chain, _ctx=_ctx) + else: + remaining = num_excs - max_group_width + plural = "s" if remaining > 1 else "" + yield from _ctx.emit( + f"and {remaining} more exception{plural}\n" + ) + + if last_exc and _ctx.need_close: + yield _ctx.indent() + "+------------------------------------\n" + _ctx.need_close = False + _ctx.exception_group_depth -= 1 + + if is_toplevel: + assert _ctx.exception_group_depth == 1 + _ctx.exception_group_depth = 0 + + def format_exception_only(self): + """Format the exception part of the traceback. + The return value is a generator of strings, each ending in a newline. + Normally, the generator emits a single string; however, for + SyntaxError exceptions, it emits several lines that (when + printed) display detailed information about where the syntax + error occurred. + The message indicating which exception occurred is always the last + string in the output. + """ + if self.exc_type is None: + yield traceback._format_final_exc_line(None, self._str) + return + + stype = self.exc_type.__qualname__ + smod = self.exc_type.__module__ + if smod not in ("__main__", "builtins"): + if not isinstance(smod, str): + smod = "" + stype = smod + "." 
+ stype + + if not issubclass(self.exc_type, SyntaxError): + yield _format_final_exc_line(stype, self._str) + elif traceback_exception_format_syntax_error is not None: + yield from traceback_exception_format_syntax_error(self, stype) + else: + yield from traceback_exception_original_format_exception_only(self) + + if isinstance(self.__notes__, collections.abc.Sequence): + for note in self.__notes__: + note = _safe_string(note, "note") + yield from [line + "\n" for line in note.split("\n")] + elif self.__notes__ is not None: + yield _safe_string(self.__notes__, "__notes__", func=repr) + + +traceback_exception_original_format = traceback.TracebackException.format +traceback_exception_original_format_exception_only = ( + traceback.TracebackException.format_exception_only +) +traceback_exception_format_syntax_error = getattr( + traceback.TracebackException, "_format_syntax_error", None +) +if sys.excepthook is sys.__excepthook__: + traceback.TracebackException.__init__ = ( # type: ignore[assignment] + PatchedTracebackException.__init__ + ) + traceback.TracebackException.format = ( # type: ignore[assignment] + PatchedTracebackException.format + ) + traceback.TracebackException.format_exception_only = ( # type: ignore[assignment] + PatchedTracebackException.format_exception_only + ) + sys.excepthook = exceptiongroup_excepthook + + +@singledispatch +def format_exception_only(__exc: BaseException) -> List[str]: + return list( + PatchedTracebackException( + type(__exc), __exc, None, compact=True + ).format_exception_only() + ) + + +@format_exception_only.register +def _(__exc: type, value: BaseException) -> List[str]: + return format_exception_only(value) + + +@singledispatch +def format_exception( + __exc: BaseException, + limit: Optional[int] = None, + chain: bool = True, +) -> List[str]: + return list( + PatchedTracebackException( + type(__exc), __exc, __exc.__traceback__, limit=limit, compact=True + ).format(chain=chain) + ) + + +@format_exception.register +def _( + __exc: type, + value: BaseException, + tb: TracebackType, + limit: Optional[int] = None, + chain: bool = True, +) -> List[str]: + return format_exception(value, limit, chain) + + +@singledispatch +def print_exception( + __exc: BaseException, + limit: Optional[int] = None, + file: Any = None, + chain: bool = True, +) -> None: + if file is None: + file = sys.stderr + + for line in PatchedTracebackException( + type(__exc), __exc, __exc.__traceback__, limit=limit + ).format(chain=chain): + print(line, file=file, end="") + + +@print_exception.register +def _( + __exc: type, + value: BaseException, + tb: TracebackType, + limit: Optional[int] = None, + file: Any = None, + chain: bool = True, +) -> None: + print_exception(value, limit, file, chain) + + +def print_exc( + limit: Optional[int] = None, + file: Any | None = None, + chain: bool = True, +) -> None: + value = sys.exc_info()[1] + print_exception(value, limit, file, chain) + + +# Python levenshtein edit distance code for NameError/AttributeError +# suggestions, backported from 3.12 + +_MAX_CANDIDATE_ITEMS = 750 +_MAX_STRING_SIZE = 40 +_MOVE_COST = 2 +_CASE_COST = 1 +_SENTINEL = object() + + +def _substitution_cost(ch_a, ch_b): + if ch_a == ch_b: + return 0 + if ch_a.lower() == ch_b.lower(): + return _CASE_COST + return _MOVE_COST + + +def _compute_suggestion_error(exc_value, tb): + wrong_name = getattr(exc_value, "name", None) + if wrong_name is None or not isinstance(wrong_name, str): + return None + if isinstance(exc_value, AttributeError): + obj = getattr(exc_value, "obj", 
_SENTINEL) + if obj is _SENTINEL: + return None + obj = exc_value.obj + try: + d = dir(obj) + except Exception: + return None + else: + assert isinstance(exc_value, NameError) + # find most recent frame + if tb is None: + return None + while tb.tb_next is not None: + tb = tb.tb_next + frame = tb.tb_frame + + d = list(frame.f_locals) + list(frame.f_globals) + list(frame.f_builtins) + if len(d) > _MAX_CANDIDATE_ITEMS: + return None + wrong_name_len = len(wrong_name) + if wrong_name_len > _MAX_STRING_SIZE: + return None + best_distance = wrong_name_len + suggestion = None + for possible_name in d: + if possible_name == wrong_name: + # A missing attribute is "found". Don't suggest it (see GH-88821). + continue + # No more than 1/3 of the involved characters should need changed. + max_distance = (len(possible_name) + wrong_name_len + 3) * _MOVE_COST // 6 + # Don't take matches we've already beaten. + max_distance = min(max_distance, best_distance - 1) + current_distance = _levenshtein_distance( + wrong_name, possible_name, max_distance + ) + if current_distance > max_distance: + continue + if not suggestion or current_distance < best_distance: + suggestion = possible_name + best_distance = current_distance + return suggestion + + +def _levenshtein_distance(a, b, max_cost): + # A Python implementation of Python/suggestions.c:levenshtein_distance. + + # Both strings are the same + if a == b: + return 0 + + # Trim away common affixes + pre = 0 + while a[pre:] and b[pre:] and a[pre] == b[pre]: + pre += 1 + a = a[pre:] + b = b[pre:] + post = 0 + while a[: post or None] and b[: post or None] and a[post - 1] == b[post - 1]: + post -= 1 + a = a[: post or None] + b = b[: post or None] + if not a or not b: + return _MOVE_COST * (len(a) + len(b)) + if len(a) > _MAX_STRING_SIZE or len(b) > _MAX_STRING_SIZE: + return max_cost + 1 + + # Prefer shorter buffer + if len(b) < len(a): + a, b = b, a + + # Quick fail when a match is impossible + if (len(b) - len(a)) * _MOVE_COST > max_cost: + return max_cost + 1 + + # Instead of producing the whole traditional len(a)-by-len(b) + # matrix, we can update just one row in place. + # Initialize the buffer row + row = list(range(_MOVE_COST, _MOVE_COST * (len(a) + 1), _MOVE_COST)) + + result = 0 + for bindex in range(len(b)): + bchar = b[bindex] + distance = result = bindex * _MOVE_COST + minimum = sys.maxsize + for index in range(len(a)): + # 1) Previous distance in this row is cost(b[:b_index], a[:index]) + substitute = distance + _substitution_cost(bchar, a[index]) + # 2) cost(b[:b_index], a[:index+1]) from previous row + distance = row[index] + # 3) existing result is cost(b[:b_index+1], a[index]) + + insert_delete = min(result, distance) + _MOVE_COST + result = min(insert_delete, substitute) + + # cost(b[:b_index+1], a[:index+1]) + row[index] = result + if result < minimum: + minimum = result + if minimum > max_cost: + # Everything in this row is too big, so bail early. 
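+            # (row minima never decrease from one row to the next, so once even
+            # the best cell exceeds max_cost the final distance must exceed it too)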
+ return max_cost + 1 + return result diff --git a/env/lib/python3.8/site-packages/exceptiongroup/_version.py b/env/lib/python3.8/site-packages/exceptiongroup/_version.py new file mode 100644 index 00000000..a33069f0 --- /dev/null +++ b/env/lib/python3.8/site-packages/exceptiongroup/_version.py @@ -0,0 +1,5 @@ +# coding: utf-8 +# file generated by setuptools_scm +# don't change, don't track in version control +__version__ = version = '1.0.4' +__version_tuple__ = version_tuple = (1, 0, 4) diff --git a/env/lib/python3.8/site-packages/exceptiongroup/py.typed b/env/lib/python3.8/site-packages/exceptiongroup/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/INSTALLER b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/LICENSE b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/LICENSE new file mode 100644 index 00000000..68a49daa --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/LICENSE @@ -0,0 +1,24 @@ +This is free and unencumbered software released into the public domain. + +Anyone is free to copy, modify, publish, use, compile, sell, or +distribute this software, either in source code form or as a compiled +binary, for any purpose, commercial or non-commercial, and by any +means. + +In jurisdictions that recognize copyright laws, the author or authors +of this software dedicate any and all copyright interest in the +software to the public domain. We make this dedication for the benefit +of the public at large and to the detriment of our heirs and +successors. We intend this dedication to be an overt act of +relinquishment in perpetuity of all present and future rights to this +software under copyright law. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR +OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, +ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR +OTHER DEALINGS IN THE SOFTWARE. 
+
+For more information, please refer to <https://unlicense.org>
diff --git a/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/METADATA b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/METADATA
new file mode 100644
index 00000000..3e999112
--- /dev/null
+++ b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/METADATA
@@ -0,0 +1,382 @@
+Metadata-Version: 2.1
+Name: fastecdsa
+Version: 2.2.3
+Summary: Fast elliptic curve digital signatures
+Home-page: https://github.com/AntonKueltz/fastecdsa
+Author: Anton Kueltz
+Author-email: kueltz.anton@gmail.com
+License: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication
+Keywords: elliptic curve cryptography ecdsa ecc
+Classifier: Development Status :: 4 - Beta
+Classifier: Intended Audience :: Developers
+Classifier: Topic :: Security :: Cryptography
+Classifier: License :: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Operating System :: MacOS :: MacOS X
+Classifier: Operating System :: POSIX :: Linux
+License-File: LICENSE
+
+fastecdsa
+=========
+.. image:: https://img.shields.io/pypi/v/fastecdsa.svg
+    :target: https://pypi.org/project/fastecdsa/
+    :alt: PyPI
+
+.. image:: https://travis-ci.com/AntonKueltz/fastecdsa.svg?branch=master
+    :target: https://travis-ci.com/AntonKueltz/fastecdsa
+    :alt: Travis CI
+
+.. image:: https://readthedocs.org/projects/fastecdsa/badge/?version=stable
+    :target: https://fastecdsa.readthedocs.io/en/stable/?badge=stable
+    :alt: Documentation Status
+
+.. contents::
+
+About
+-----
+This is a python package for doing fast elliptic curve cryptography, specifically
+digital signatures.
+
+Security
+--------
+There is no nonce reuse, no branching on secret material, and all points are validated
+before any operations are performed on them. Timing side channels are mitigated via
+Montgomery point multiplication. Nonces are generated per RFC6979_. The default curve
+used throughout the package is P256, which provides 128 bits of security. If you require
+a higher level of security, you can specify the curve parameter in a method to use a
+curve over a bigger field, e.g. P384. All that being said, crypto is tricky and I'm not
+beyond making mistakes. Please use a more established and reviewed library for security
+critical applications. Open an issue or email me if you see any security issue or risk
+with this library.
+
+Python Versions Supported
+-------------------------
+The initial release of this package was targeted at python2.7. Earlier versions may work
+but have no guarantee of correctness or stability. As of release 1.2.1+ python3 is
+supported as well. Due to python2's EOL on January 1st 2020, release 2.x of this package
+only supports python3.5+.
+
+Operating Systems Supported
+---------------------------
+This package is targeted at the Linux and MacOS operating systems. Due to the dependency
+on the GMP C library, building this package on Windows is difficult and no official
+support or distributions are provided for Windows OSes. See issue11_ for what users have
+done to get things building.
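+
+Before diving into the details below, here is a minimal sketch of the security claims
+above (deterministic RFC6979_ nonces, selectable curves), using only the :code:`keys`
+and :code:`ecdsa` APIs covered in the Usage section. Signing the same message twice with
+the same key yields the same signature, because the nonce is deterministic:
+
+.. code:: python
+
+    from fastecdsa import curve, ecdsa, keys
+
+    # opt into a curve over a bigger field than the default P256
+    priv_key, pub_key = keys.gen_keypair(curve.P384)
+
+    sig1 = ecdsa.sign('same message', priv_key, curve=curve.P384)
+    sig2 = ecdsa.sign('same message', priv_key, curve=curve.P384)
+    assert sig1 == sig2  # RFC6979: same key + same message => same nonce and signature
+    assert ecdsa.verify(sig1, 'same message', pub_key, curve=curve.P384)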
+ +Supported Primitives +-------------------- +Curves over Prime Fields +~~~~~~~~~~~~~~~~~~~~~~~~ + ++---------------------------+-----------------------------------------+-------------+ +| Name | Class | Proposed By | ++===========================+=========================================+=============+ +| P192 / secp192r1 | :code:`fastecdsa.curve.P192` | NIST / NSA | ++---------------------------+-----------------------------------------+-------------+ +| P224 / secp224r1 | :code:`fastecdsa.curve.P224` | NIST / NSA | ++---------------------------+-----------------------------------------+-------------+ +| P256 / secp256r1 | :code:`fastecdsa.curve.P256` | NIST / NSA | ++---------------------------+-----------------------------------------+-------------+ +| P384 / secp384r1 | :code:`fastecdsa.curve.P384` | NIST / NSA | ++---------------------------+-----------------------------------------+-------------+ +| P521 / secp521r1 | :code:`fastecdsa.curve.P521` | NIST / NSA | ++---------------------------+-----------------------------------------+-------------+ +| secp192k1 | :code:`fastecdsa.curve.secp192k1` | Certicom | ++---------------------------+-----------------------------------------+-------------+ +| secp224k1 | :code:`fastecdsa.curve.secp224k1` | Certicom | ++---------------------------+-----------------------------------------+-------------+ +| secp256k1 (bitcoin curve) | :code:`fastecdsa.curve.secp256k1` | Certicom | ++---------------------------+-----------------------------------------+-------------+ +| brainpoolP160r1 | :code:`fastecdsa.curve.brainpoolP160r1` | BSI | ++---------------------------+-----------------------------------------+-------------+ +| brainpoolP192r1 | :code:`fastecdsa.curve.brainpoolP192r1` | BSI | ++---------------------------+-----------------------------------------+-------------+ +| brainpoolP224r1 | :code:`fastecdsa.curve.brainpoolP224r1` | BSI | ++---------------------------+-----------------------------------------+-------------+ +| brainpoolP256r1 | :code:`fastecdsa.curve.brainpoolP256r1` | BSI | ++---------------------------+-----------------------------------------+-------------+ +| brainpoolP320r1 | :code:`fastecdsa.curve.brainpoolP320r1` | BSI | ++---------------------------+-----------------------------------------+-------------+ +| brainpoolP384r1 | :code:`fastecdsa.curve.brainpoolP384r1` | BSI | ++---------------------------+-----------------------------------------+-------------+ +| brainpoolP512r1 | :code:`fastecdsa.curve.brainpoolP512r1` | BSI | ++---------------------------+-----------------------------------------+-------------+ + +Arbitrary Curves +~~~~~~~~~~~~~~~~ +As of version 1.5.1 construction of arbitrary curves in Weierstrass form +(:code:`y^2 = x^3 + ax + b (mod p)`) is supported. I advise against using custom curves for any +security critical applications. It's up to you to make sure that the parameters you pass here are +correct, no validation of the base point is done, and in general no sanity checks are done. Use +at your own risk. + +.. code:: python + + from fastecdsa.curve import Curve + curve = Curve( + name, # (str): The name of the curve + p, # (long): The value of p in the curve equation. + a, # (long): The value of a in the curve equation. + b, # (long): The value of b in the curve equation. + q, # (long): The order of the base point of the curve. + gx, # (long): The x coordinate of the base point of the curve. + gy, # (long): The y coordinate of the base point of the curve. 
+        oid  # (str): The object identifier of the curve (optional).
+    )
+
+Hash Functions
+~~~~~~~~~~~~~~
+Any hash function in the :code:`hashlib` module (:code:`md5, sha1, sha224, sha256, sha384, sha512`)
+will work, as will any hash function that implements the same interface / core functionality as
+those in :code:`hashlib`. For instance, if you wish to use SHA3 as the hash function, the
+:code:`pysha3` package will work with this library as long as it is at version >=1.0b1 (as previous
+versions didn't work with the :code:`hmac` module, which is used in nonce generation). Note
+that :code:`sha3_224, sha3_256, sha3_384, sha3_512` are all in :code:`hashlib` as of python3.6.
+
+Performance
+-----------
+
+Curves over Prime Fields
+~~~~~~~~~~~~~~~~~~~~~~~~
+Currently it does elliptic curve arithmetic significantly faster than the :code:`ecdsa`
+package. You can see the times for 1,000 signature and verification operations over
+various curves below. These were run on an early 2014 MacBook Air with a 1.4 GHz Intel
+Core i5.
+
++-----------+------------------------+--------------------+---------+
+| Curve     | :code:`fastecdsa` time | :code:`ecdsa` time | Speedup |
++-----------+------------------------+--------------------+---------+
+| P192      | 3.62s                  | 1m35.49s           | ~26x    |
++-----------+------------------------+--------------------+---------+
+| P224      | 4.50s                  | 2m13.42s           | ~29x    |
++-----------+------------------------+--------------------+---------+
+| P256      | 6.15s                  | 2m52.43s           | ~28x    |
++-----------+------------------------+--------------------+---------+
+| P384      | 12.11s                 | 6m21.01s           | ~31x    |
++-----------+------------------------+--------------------+---------+
+| P521      | 22.21s                 | 11m39.53s          | ~31x    |
++-----------+------------------------+--------------------+---------+
+| secp256k1 | 5.92s                  | 2m57.19s           | ~30x    |
++-----------+------------------------+--------------------+---------+
+
+Benchmarking
+~~~~~~~~~~~~
+If you'd like to benchmark performance on your machine you can do so using the command:
+
+.. code:: bash
+
+    $ python setup.py benchmark
+
+This will use the :code:`timeit` module to benchmark 1000 signature and verification operations
+for each curve supported by this package. Alternatively, if you have not cloned the repo but
+have installed the package via e.g. :code:`pip` you can use the following command:
+
+.. code:: bash
+
+    $ python -m fastecdsa.benchmark
+
+Installing
+----------
+You can use pip: :code:`$ pip install fastecdsa` or clone the repo and use
+:code:`$ python setup.py install`. Note that you need to have a C compiler.
+You also need to have GMP_ on your system as the underlying
+C code in this package includes the :code:`gmp.h` header (and links against gmp
+via the :code:`-lgmp` flag). You can install all dependencies as follows:
+
+apt
+~~~
+
+.. code:: bash
+
+    $ sudo apt-get install python-dev libgmp3-dev
+
+yum
+~~~
+
+.. code:: bash
+
+    $ sudo yum install python-devel gmp-devel
+
+Usage
+-----
+Generating Keys
+~~~~~~~~~~~~~~~
+You can use this package to generate keys if you like. Recall that private keys on elliptic curves
+are integers, and public keys are points i.e. integer pairs.
+
+.. code:: python
+
+    from fastecdsa import keys, curve
+
+    """The reason there are two ways to generate a keypair is that generating the public key requires
+    a point multiplication, which can be expensive. That means sometimes you may want to delay
+    generating the public key until it is actually needed."""
+
+    # generate a keypair (i.e.
both keys) for curve P256 + priv_key, pub_key = keys.gen_keypair(curve.P256) + + # generate a private key for curve P256 + priv_key = keys.gen_private_key(curve.P256) + + # get the public key corresponding to the private key we just generated + pub_key = keys.get_public_key(priv_key, curve.P256) + + +Signing and Verifying +~~~~~~~~~~~~~~~~~~~~~ +Some basic usage is shown below: + +.. code:: python + + from fastecdsa import curve, ecdsa, keys + from hashlib import sha384 + + m = "a message to sign via ECDSA" # some message + + ''' use default curve and hash function (P256 and SHA2) ''' + private_key = keys.gen_private_key(curve.P256) + public_key = keys.get_public_key(private_key, curve.P256) + # standard signature, returns two integers + r, s = ecdsa.sign(m, private_key) + # should return True as the signature we just generated is valid. + valid = ecdsa.verify((r, s), m, public_key) + + ''' specify a different hash function to use with ECDSA ''' + r, s = ecdsa.sign(m, private_key, hashfunc=sha384) + valid = ecdsa.verify((r, s), m, public_key, hashfunc=sha384) + + ''' specify a different curve to use with ECDSA ''' + private_key = keys.gen_private_key(curve.P224) + public_key = keys.get_public_key(private_key, curve.P224) + r, s = ecdsa.sign(m, private_key, curve=curve.P224) + valid = ecdsa.verify((r, s), m, public_key, curve=curve.P224) + + ''' using SHA3 via pysha3>=1.0b1 package ''' + import sha3 # pip install [--user] pysha3==1.0b1 + from hashlib import sha3_256 + private_key, public_key = keys.gen_keypair(curve.P256) + r, s = ecdsa.sign(m, private_key, hashfunc=sha3_256) + valid = ecdsa.verify((r, s), m, public_key, hashfunc=sha3_256) + +Arbitrary Elliptic Curve Arithmetic +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +The :code:`Point` class allows arbitrary arithmetic to be performed over curves. The two main +operations are point addition and point multiplication (by a scalar) which can be done via the +standard python operators (:code:`+` and :code:`*` respectively): + +.. code:: python + + # example taken from the document below (section 4.3.2): + # https://koclab.cs.ucsb.edu/teaching/cren/docs/w02/nist-routines.pdf + + from fastecdsa.curve import P256 + from fastecdsa.point import Point + + xs = 0xde2444bebc8d36e682edd27e0f271508617519b3221a8fa0b77cab3989da97c9 + ys = 0xc093ae7ff36e5380fc01a5aad1e66659702de80f53cec576b6350b243042a256 + S = Point(xs, ys, curve=P256) + + xt = 0x55a8b00f8da1d44e62f6b3b25316212e39540dc861c89575bb8cf92e35e0986b + yt = 0x5421c3209c2d6c704835d82ac4c3dd90f61a8a52598b9e7ab656e9d8c8b24316 + T = Point(xt, yt, curve=P256) + + # Point Addition + R = S + T + + # Point Subtraction: (xs, ys) - (xt, yt) = (xs, ys) + (xt, -yt) + R = S - T + + # Point Doubling + R = S + S # produces the same value as the operation below + R = 2 * S # S * 2 works fine too i.e. order doesn't matter + + d = 0xc51e4753afdec1e6b6c6a5b992f43f8dd0c7a8933072708b6522468b2ffb06fd + + # Scalar Multiplication + R = d * S # S * d works fine too i.e. order doesn't matter + + e = 0xd37f628ece72a462f0145cbefe3f0b355ee8332d37acdd83a358016aea029db7 + + # Joint Scalar Multiplication + R = d * S + e * T + +Importing and Exporting Keys +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +You can also export keys as files, ASN.1 encoded and formatted per RFC5480_ and RFC5915_. Both +private keys and public keys can be exported as follows: + +.. 
code:: python + + from fastecdsa.curve import P256 + from fastecdsa.keys import export_key, gen_keypair + + d, Q = gen_keypair(P256) + # save the private key to disk + export_key(d, curve=P256, filepath='/path/to/exported/p256.key') + # save the public key to disk + export_key(Q, curve=P256, filepath='/path/to/exported/p256.pub') + +Keys stored in this format can also be imported. The import function will figure out if the key +is a public or private key and parse it accordingly: + +.. code:: python + + from fastecdsa.keys import import_key + + # if the file is a private key then parsed_d is a long and parsed_Q is a Point object + # if the file is a public key then parsed_d will be None + parsed_d, parsed_Q = import_key('/path/to/file.key') + +Other encoding formats can also be specified, such as SEC1_ for public keys. This is done using +classes found in the :code:`fastecdsa.encoding` package, and passing them as keyword args to +the key functions: + +.. code:: python + + from fastecdsa.curve import P256 + from fastecdsa.encoding.sec1 import SEC1Encoder + from fastecdsa.keys import export_key, gen_keypair, import_key + + _, Q = gen_keypair(P256) + export_key(Q, curve=P256, filepath='/path/to/p256.key', encoder=SEC1Encoder) + parsed_Q = import_key('/path/to/p256.key', curve=P256, public=True, decoder=SEC1Encoder) + +Encoding Signatures +~~~~~~~~~~~~~~~~~~~ +DER encoding of ECDSA signatures as defined in RFC2459_ is also supported. The +:code:`fastecdsa.encoding.der` provides the :code:`DEREncoder` class which encodes signatures: + +.. code:: python + + from fastecdsa.encoding.der import DEREncoder + + r, s = 0xdeadc0de, 0xbadc0de + encoded = DEREncoder.encode_signature(r, s) + decoded_r, decoded_s = DEREncoder.decode_signature(encoded) + +Acknowledgements +---------------- +Thanks to those below for contributing improvements: + +- boneyard93501 +- clouds56 +- m-kus +- sirk390 +- targon +- NotStatilko +- bbbrumley +- luinxz +- JJChiDguez +- J08nY +- trevor-crypto +- sylvainpelissier + +.. _issue11: https://github.com/AntonKueltz/fastecdsa/issues/11 +.. _GMP: https://gmplib.org/ +.. _RFC2459: https://tools.ietf.org/html/rfc2459 +.. _RFC5480: https://tools.ietf.org/html/rfc5480 +.. _RFC5915: https://tools.ietf.org/html/rfc5915 +.. _RFC6979: https://tools.ietf.org/html/rfc6979 +.. 
_SEC1: http://www.secg.org/sec1-v2.pdf diff --git a/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/RECORD b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/RECORD new file mode 100644 index 00000000..74509c69 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/RECORD @@ -0,0 +1,66 @@ +fastecdsa-2.2.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +fastecdsa-2.2.3.dist-info/LICENSE,sha256=fhLl30uuEsshWBuhV87SDhmGoFCN0Q0Oikq5pM-U6Fw,1211 +fastecdsa-2.2.3.dist-info/METADATA,sha256=JG23qWcsr1XxNOQfT3kJjVKcUBMz0aTXucviluX1jCM,16213 +fastecdsa-2.2.3.dist-info/RECORD,, +fastecdsa-2.2.3.dist-info/WHEEL,sha256=CAuuk_9zz9TklvSaKKuyDpVMPSzU9xWV6t1vK7KC7-A,103 +fastecdsa-2.2.3.dist-info/top_level.txt,sha256=91d9rf4AlxLbofgjdaaKNsbXiKa-NNvcAexuWHXyzJE,10 +fastecdsa/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +fastecdsa/__pycache__/__init__.cpython-38.pyc,, +fastecdsa/__pycache__/benchmark.cpython-38.pyc,, +fastecdsa/__pycache__/curve.cpython-38.pyc,, +fastecdsa/__pycache__/ecdsa.cpython-38.pyc,, +fastecdsa/__pycache__/keys.cpython-38.pyc,, +fastecdsa/__pycache__/point.cpython-38.pyc,, +fastecdsa/__pycache__/util.cpython-38.pyc,, +fastecdsa/_ecdsa.cpython-38-x86_64-linux-gnu.so,sha256=b3giw7JTnsEBFsFUoLHmrSf8a-TmSNhPqmuTnqb2nWY,80056 +fastecdsa/benchmark.py,sha256=TmoK4Rf7aScKzYnozp2JbSns-S-_Sl6oOnByJxRUjv8,1135 +fastecdsa/curve.py,sha256=QLcPmbBSXP1jkN3tUFFuONdvlRls5SzMJXgB0Xn8l38,13916 +fastecdsa/curvemath.cpython-38-x86_64-linux-gnu.so,sha256=tFNMGvSAVcVPYta17KDfxA5Cjc4i1bPJSVEbAmTkmmQ,58472 +fastecdsa/ecdsa.py,sha256=xVSMEnHuDWCilqCToYx_MbdV1QGmJaQA5pXMFnF01tE,4182 +fastecdsa/encoding/__init__.py,sha256=bWiYNckwVnn0Erprec3GF4HSeoILI0FLvZqQdbETxOo,1188 +fastecdsa/encoding/__pycache__/__init__.cpython-38.pyc,, +fastecdsa/encoding/__pycache__/asn1.cpython-38.pyc,, +fastecdsa/encoding/__pycache__/der.cpython-38.pyc,, +fastecdsa/encoding/__pycache__/pem.cpython-38.pyc,, +fastecdsa/encoding/__pycache__/sec1.cpython-38.pyc,, +fastecdsa/encoding/__pycache__/util.cpython-38.pyc,, +fastecdsa/encoding/asn1.py,sha256=RqGgKVxJDNuOh79-_QCNpigeu8vF06En2Sv0qJ2RONI,4136 +fastecdsa/encoding/der.py,sha256=MPR1uHPW8i2Lo_mjV2KSEtL-tpLRdFjWL8FtjvYPrxc,2651 +fastecdsa/encoding/pem.py,sha256=UpVcnWa4nBFtY6SJkSy8ZNB3aT4H9GPI59EVav4eZ9U,5210 +fastecdsa/encoding/sec1.py,sha256=qBFC2mrFUwEG7mA-JV4JUoDSgSl5_i1wSEFigbL44QM,3103 +fastecdsa/encoding/util.py,sha256=Y-xqQPotu5T0IEej8lVFuTK2EyuqvLtLWNL-LSMdEpY,482 +fastecdsa/keys.py,sha256=LUPEaiApnFujC0z6q7BV9TRis0VnsNI9adx8Ur2hL4g,6737 +fastecdsa/point.py,sha256=A6sPGW6p4UYUiZlJHOw1tFJXyJ6zfiiGz5o-oo50FLc,6240 +fastecdsa/tests/__init__.py,sha256=qBNRzlTB-sHkpC_ApudBt2uoQoyyxr1QdBk0nDRQ0Ak,416 +fastecdsa/tests/__pycache__/__init__.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_brainpool_ecdh.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_key_export_import.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_key_recovery.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_keygen.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_nonce_generation.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_p256_ecdsa.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_point.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_prehashed.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_prime_field_curve_math.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_rfc6979_ecdsa.cpython-38.pyc,, +fastecdsa/tests/__pycache__/test_whitespace_parsing.cpython-38.pyc,, 
+fastecdsa/tests/__pycache__/test_whycheproof_vectors.cpython-38.pyc,, +fastecdsa/tests/encoding/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +fastecdsa/tests/encoding/__pycache__/__init__.cpython-38.pyc,, +fastecdsa/tests/encoding/__pycache__/test_der.cpython-38.pyc,, +fastecdsa/tests/encoding/__pycache__/test_sec1.cpython-38.pyc,, +fastecdsa/tests/encoding/test_der.py,sha256=pQLmmrkB2kOQZspsf_CLLNxSAQmwoV5cPV5OXu7BgBw,5961 +fastecdsa/tests/encoding/test_sec1.py,sha256=NrySF-cxOFsBssBZt_wP-u43f0oX9LorqXNF6pFy0vI,5544 +fastecdsa/tests/test_brainpool_ecdh.py,sha256=MPf769i9V59odAidxVDlyKFR9ZpkPDzBBl79lXhqOco,4878 +fastecdsa/tests/test_key_export_import.py,sha256=z5idCKr1E5MdO5hPZw1JBp8czfo3TJNB0ooTFKjsUWQ,1121 +fastecdsa/tests/test_key_recovery.py,sha256=ffNPg2cM1jpEKOrpJ3zS6PCPB65ktdqTB3kJj89CAFI,615 +fastecdsa/tests/test_keygen.py,sha256=wYCwS-ohob6zq7OYEcLJBpH3OEWFWBBvqZ8WgsMAG6E,1695 +fastecdsa/tests/test_nonce_generation.py,sha256=jw37HWPIGikAh1DHcysw2miK3XdY5o4-v3qo2mdxlAM,1125 +fastecdsa/tests/test_p256_ecdsa.py,sha256=PI06ZW2Zn6v5k31gH7uCz0oo4PVTBtba8MrGdbFeNb0,4081 +fastecdsa/tests/test_point.py,sha256=txQhgCmql6lhPEE4iLLeP2KCc9OdMvgqlrTx5Wm0W4M,1521 +fastecdsa/tests/test_prehashed.py,sha256=zy8afvLAPH3RjwHAlBliDvXN8tHOxb2VCDeUEjo56LM,436 +fastecdsa/tests/test_prime_field_curve_math.py,sha256=3N648lR2xKkpmte8DPopItQrXSAhCJ9tlpqzJBjBKqU,13150 +fastecdsa/tests/test_rfc6979_ecdsa.py,sha256=VBVJjlPpX-pr9niHQlWJPI0M7h7CU_kNHiLMQfZ9Exc,5389 +fastecdsa/tests/test_whitespace_parsing.py,sha256=PhVGJU1aDNoQJo0OdQuIQppw0cGinL5JaO418xgVWsk,1477 +fastecdsa/tests/test_whycheproof_vectors.py,sha256=Fbvkh1MFRRekPt_eNFg7VrdxSkPSzYIPw9Vx1T_mNV0,8044 +fastecdsa/util.py,sha256=qxabzSccRnUlXGd9bsMMgze6TQuZgFBpdItxYywc0y4,5508 diff --git a/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/WHEEL b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/WHEEL new file mode 100644 index 00000000..ae84cbde --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.38.4) +Root-Is-Purelib: false +Tag: cp38-cp38-linux_x86_64 + diff --git a/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/top_level.txt b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/top_level.txt new file mode 100644 index 00000000..f55dc160 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa-2.2.3.dist-info/top_level.txt @@ -0,0 +1 @@ +fastecdsa diff --git a/env/lib/python3.8/site-packages/fastecdsa/__init__.py b/env/lib/python3.8/site-packages/fastecdsa/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/fastecdsa/_ecdsa.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/fastecdsa/_ecdsa.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..021b4488 Binary files /dev/null and b/env/lib/python3.8/site-packages/fastecdsa/_ecdsa.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/fastecdsa/benchmark.py b/env/lib/python3.8/site-packages/fastecdsa/benchmark.py new file mode 100644 index 00000000..21655a20 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/benchmark.py @@ -0,0 +1,33 @@ +from binascii import b2a_uu +from os import urandom +from timeit import timeit + +from .curve import ( + Curve, P192, P224, P256, P384, P521, W25519, W448, secp192k1, secp224k1, secp256k1, + brainpoolP160r1, brainpoolP192r1, brainpoolP224r1, brainpoolP256r1, brainpoolP320r1, + brainpoolP384r1, 
brainpoolP512r1 +) +from .ecdsa import sign, verify +from .keys import gen_keypair +from .point import Point + + +def sign_and_verify(d: int, Q: Point, curve: Curve): + msg = b2a_uu(urandom(32)) + sig = sign(msg, d, curve=curve) + assert verify(sig, msg, Q, curve=curve) + + +if __name__ == '__main__': + iterations = 1000 + curves = ( + P192, P224, P256, P384, P521, W25519, W448, secp192k1, secp224k1, secp256k1, + brainpoolP160r1, brainpoolP192r1, brainpoolP224r1, brainpoolP256r1, brainpoolP320r1, + brainpoolP384r1, brainpoolP512r1 + ) + + for curve in curves: + d, Q = gen_keypair(curve) + time = timeit(stmt=lambda: sign_and_verify(d, Q, curve), number=iterations) + print('{} signatures and verifications with curve {} took {:.2f} seconds'.format( + iterations, curve.name, time)) diff --git a/env/lib/python3.8/site-packages/fastecdsa/curve.py b/env/lib/python3.8/site-packages/fastecdsa/curve.py new file mode 100644 index 00000000..3439fe37 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/curve.py @@ -0,0 +1,309 @@ +class Curve: + """Representation of an elliptic curve. + + Defines a group for the arithmetic operations of point addition and scalar multiplication. + Currently only curves defined via the equation :math:`y^2 \equiv x^3 + ax + b \pmod{p}` are + supported. + + Attributes: + | name (str): The name of the curve + | p (int): The value of :math:`p` in the curve equation. + | a (int): The value of :math:`a` in the curve equation. + | b (int): The value of :math:`b` in the curve equation. + | q (int): The order of the base point of the curve. + | oid (bytes): The object identifier of the curve. + """ + _oid_lookup = {} # a lookup table for getting curve instances by their object identifier + + def __init__(self, name: str, p: int, a: int, b: int, q: int, gx: int, gy: int, oid: bytes = None): + """Initialize the parameters of an elliptic curve. + + WARNING: Do not generate your own parameters unless you know what you are doing or you could + generate a curve severely less secure than you think. Even then, consider using a + standardized curve for the sake of interoperability. + + Currently only curves defined via the equation :math:`y^2 \equiv x^3 + ax + b \pmod{p}` are + supported. + + Args: + | name (string): The name of the curve + | p (int): The value of :math:`p` in the curve equation. + | a (int): The value of :math:`a` in the curve equation. + | b (int): The value of :math:`b` in the curve equation. + | q (int): The order of the base point of the curve. + | gx (int): The x coordinate of the base point of the curve. + | gy (int): The y coordinate of the base point of the curve. + | oid (bytes): The object identifier of the curve. + """ + self.name = name + self.p = p + self.a = a + self.b = b + self.q = q + self.gx = gx + self.gy = gy + self.oid = oid + + if oid is not None: + self._oid_lookup[oid] = self + + def __repr__(self) -> str: + return self.name + + @classmethod + def get_curve_by_oid(cls, oid: bytes): + """Get a curve via it's object identifier.""" + return cls._oid_lookup.get(oid, None) + + def is_point_on_curve(self, point: (int, int)) -> bool: + """ Check if a point lies on this curve. + + The check is done by evaluating the curve equation :math:`y^2 \equiv x^3 + ax + b \pmod{p}` + at the given point :math:`(x,y)` with this curve's domain parameters :math:`(a, b, p)`. If + the congruence holds, then the point lies on this curve. + + Args: + point (long, long): A tuple representing the point :math:`P` as an :math:`(x, y)` coordinate + pair. 
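For instance (a minimal sketch, using the generator coordinates each bundled curve defines):

.. code:: python

    from fastecdsa.curve import P256

    # a standardized curve's own base point must satisfy the curve equation
    assert P256.is_point_on_curve((P256.gx, P256.gy))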
+ + Returns: + bool: :code:`True` if the point lies on this curve, otherwise :code:`False`. + """ + x, y, = point + left = y * y + right = (x * x * x) + (self.a * x) + self.b + return (left - right) % self.p == 0 + + def evaluate(self, x: int) -> int: + """ Evaluate the elliptic curve polynomial at 'x' + + Args: + x (int): The position to evaluate the polynomial at + + Returns: + int: the value of :math:`(x^3 + ax + b) \bmod{p}` + """ + return (x ** 3 + self.a * x + self.b) % self.p + + @property + def G(self): + """The base point of the curve. + + For the purposes of ECDSA this point is multiplied by a private key to obtain the + corresponding public key. Make a property to avoid cyclic dependency of Point on Curve + (a point lies on a curve) and Curve on Point (curves have a base point). + """ + from .point import Point + return Point(self.gx, self.gy, self) + + +# see https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-186-draft.pdf +# Setion 4.2.1 "Weierstrass Curves" for params +P192 = Curve( + 'P192', + 6277101735386680763835789423207666416083908700390324961279, + -3, + 2455155546008943817740293915197451784769108058161191238065, + 6277101735386680763835789423176059013767194773182842284081, + 602046282375688656758213480587526111916698976636884684818, + 174050332293622031404857552280219410364023488927386650641, + b'\x2A\x86\x48\xCE\x3D\x03\x01\x01' +) +P224 = Curve( + 'P224', + 26959946667150639794667015087019630673557916260026308143510066298881, + -3, + 18958286285566608000408668544493926415504680968679321075787234672564, + 26959946667150639794667015087019625940457807714424391721682722368061, + 19277929113566293071110308034699488026831934219452440156649784352033, + 19926808758034470970197974370888749184205991990603949537637343198772, + b'\x2B\x81\x04\x00\x21' +) +P256 = Curve( + 'P256', + 115792089210356248762697446949407573530086143415290314195533631308867097853951, + -3, + 41058363725152142129326129780047268409114441015993725554835256314039467401291, + 115792089210356248762697446949407573529996955224135760342422259061068512044369, + 48439561293906451759052585252797914202762949526041747995844080717082404635286, + 36134250956749795798585127919587881956611106672985015071877198253568414405109, + b'\x2A\x86\x48\xCE\x3D\x03\x01\x07' +) +P384 = Curve( + 'P384', + int('39402006196394479212279040100143613805079739270465446667948293404245721771496870329047266' + '088258938001861606973112319'), + -3, + int('27580193559959705877849011840389048093056905856361568521428707301988689241309860865136260' + '764883745107765439761230575'), + int('39402006196394479212279040100143613805079739270465446667946905279627659399113263569398956' + '308152294913554433653942643'), + int('26247035095799689268623156744566981891852923491109213387815615900925518854738050089022388' + '053975719786650872476732087'), + int('83257109614890299855467512895201081792878530488613155947092059024805031998844192244386437' + '60392947333078086511627871'), + b'\x2B\x81\x04\x00\x22' +) +P521 = Curve( + 'P521', + int('68647976601306097149819007990813932172694353001433054093944634591855431833976560521225596' + '40661454554977296311391480858037121987999716643812574028291115057151'), + -3, + int('10938490380737342745111123907668055699362075989516837489945863944959531161507350160137087' + '37573759623248592132296706313309438452531591012912142327488478985984'), + int('68647976601306097149819007990813932172694353001433054093944634591855431833976553942450577' + 
'46333217197532963996371363321113864768612440380340372808892707005449'), + int('26617408020502170632287687167233609607298591687569731477066713684188029449964278084915450' + '80627771902352094241225065558662157113545570916814161637315895999846'), + int('37571800257700204635455072244911836035944551347697624866945677796155444774405563166912344' + '05012945539562144444537289428522585666729196580810124344277578376784'), + b'\x2B\x81\x04\x00\x23' +) +W25519 = Curve( + 'W25519', + 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed, + 0x2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa984914a144, + 0x7b425ed097b425ed097b425ed097b425ed097b425ed097b4260b5e9c7710c864, + 0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed, + 0x2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaad245a, + 0x5f51e65e475f794b1fe122d388b72eb36dc2b28192839e4dd6163a5d81312c14, +) +W448 = Curve( + 'W448', + int('72683872429560689054932380788800453435364136068731806028149019918061232816673077268639638' + '3698676545930088884461843637361053498018365439'), + int('48455914953040459369954920525866968956909424045821204018766013278707488544448718179093092' + '2465784363953392589641229091574035657199637535'), + int('26919952751689144094419400292148316087171902247678446677092229599281938080249287877273940' + '1369880202196329216467349495319191685664513904'), + int('18170968107390172263733095197200113358841034017182951507037254979514600396153958571619575' + '5291692375963310293709091662304773755859649779'), + int('48455914953040459369954920525866968956909424045821204018766013278707488544448718179093092' + '2465784363953392589641229091574035665345629073'), + int('35529392678556817526412750206378333480897639938771427183188089843516908878696741000293267' + '3765864550910142774147268105838985595290606362'), +) + +# see http://www.secg.org/sec2-v2.pdf for params +secp192k1 = Curve( + 'secp192k1', + 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFEE37, + 0x0, + 0x3, + 0xFFFFFFFFFFFFFFFFFFFFFFFE26F2FC170F69466A74DEFD8D, + 0xDB4FF10EC057E9AE26B07D0280B7F4341DA5D1B1EAE06C7D, + 0x9B2F2F6D9C5628A7844163D015BE86344082AA88D95E2F9D, + b'\x2B\x81\x04\x00\x1F' +) + +secp224k1 = Curve( + 'secp224k1', + 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFE56D, + 0x0, + 0x5, + 0x10000000000000000000000000001DCE8D2EC6184CAF0A971769FB1F7, + 0xA1455B334DF099DF30FC28A169A467E9E47075A90F7E650EB6B7A45C, + 0x7E089FED7FBA344282CAFBD6F7E319F7C0B0BD59E2CA4BDB556D61A5, + b'\x2B\x81\x04\x00\x20' +) + +secp256k1 = Curve( + 'secp256k1', + 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F, + 0x0, + 0x7, + 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141, + 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798, + 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8, + b'\x2B\x81\x04\x00\x0A' +) + +# see https://tools.ietf.org/html/rfc5639#section-3.1 for params +brainpoolP160r1 = Curve( + 'brainpoolP160r1', + 0xE95E4A5F737059DC60DFC7AD95B3D8139515620F, + 0x340E7BE2A280EB74E2BE61BADA745D97E8F7C300, + 0x1E589A8595423412134FAA2DBDEC95C8D8675E58, + 0xE95E4A5F737059DC60DF5991D45029409E60FC09, + 0xBED5AF16EA3F6A4F62938C4631EB5AF7BDBCDBC3, + 0x1667CB477A1A8EC338F94741669C976316DA6321, + b'\x2B\x24\x03\x03\x02\x08\x01\x01\x01' +) + +brainpoolP192r1 = Curve( + 'brainpoolP192r1', + 0xC302F41D932A36CDA7A3463093D18DB78FCE476DE1A86297, + 0x6A91174076B1E0E19C39C031FE8685C1CAE040E5C69A28EF, + 0x469A28EF7C28CCA3DC721D044F4496BCCA7EF4146FBF25C9, + 
0xC302F41D932A36CDA7A3462F9E9E916B5BE8F1029AC4ACC1, + 0xC0A0647EAAB6A48753B033C56CB0F0900A2F5C4853375FD6, + 0x14B690866ABD5BB88B5F4828C1490002E6773FA2FA299B8F, + b'\x2B\x24\x03\x03\x02\x08\x01\x01\x03' +) + +brainpoolP224r1 = Curve( + 'brainpoolP224r1', + 0xD7C134AA264366862A18302575D1D787B09F075797DA89F57EC8C0FF, + 0x68A5E62CA9CE6C1C299803A6C1530B514E182AD8B0042A59CAD29F43, + 0x2580F63CCFE44138870713B1A92369E33E2135D266DBB372386C400B, + 0xD7C134AA264366862A18302575D0FB98D116BC4B6DDEBCA3A5A7939F, + 0x0D9029AD2C7E5CF4340823B2A87DC68C9E4CE3174C1E6EFDEE12C07D, + 0x58AA56F772C0726F24C6B89E4ECDAC24354B9E99CAA3F6D3761402CD, + b'\x2B\x24\x03\x03\x02\x08\x01\x01\x05' +) + +brainpoolP256r1 = Curve( + 'brainpoolP256r1', + 0xA9FB57DBA1EEA9BC3E660A909D838D726E3BF623D52620282013481D1F6E5377, + 0x7D5A0975FC2C3057EEF67530417AFFE7FB8055C126DC5C6CE94A4B44F330B5D9, + 0x26DC5C6CE94A4B44F330B5D9BBD77CBF958416295CF7E1CE6BCCDC18FF8C07B6, + 0xA9FB57DBA1EEA9BC3E660A909D838D718C397AA3B561A6F7901E0E82974856A7, + 0x8BD2AEB9CB7E57CB2C4B482FFC81B7AFB9DE27E1E3BD23C23A4453BD9ACE3262, + 0x547EF835C3DAC4FD97F8461A14611DC9C27745132DED8E545C1D54C72F046997, + b'\x2B\x24\x03\x03\x02\x08\x01\x01\x07' +) + +brainpoolP320r1 = Curve( + 'brainpoolP320r1', + 0xD35E472036BC4FB7E13C785ED201E065F98FCFA6F6F40DEF4F92B9EC7893EC28FCD412B1F1B32E27, + 0x3EE30B568FBAB0F883CCEBD46D3F3BB8A2A73513F5EB79DA66190EB085FFA9F492F375A97D860EB4, + 0x520883949DFDBC42D3AD198640688A6FE13F41349554B49ACC31DCCD884539816F5EB4AC8FB1F1A6, + 0xD35E472036BC4FB7E13C785ED201E065F98FCFA5B68F12A32D482EC7EE8658E98691555B44C59311, + 0x43BD7E9AFB53D8B85289BCC48EE5BFE6F20137D10A087EB6E7871E2A10A599C710AF8D0D39E20611, + 0x14FDD05545EC1CC8AB4093247F77275E0743FFED117182EAA9C77877AAAC6AC7D35245D1692E8EE1, + b'\x2B\x24\x03\x03\x02\x08\x01\x01\x09' +) + +brainpoolP384r1 = Curve( + 'brainpoolP384r1', + int('8CB91E82A3386D280F5D6F7E50E641DF152F7109ED5456B412B1DA197FB71123ACD3A729901D1A718747001331' + '07EC53', 16), + int('7BC382C63D8C150C3C72080ACE05AFA0C2BEA28E4FB22787139165EFBA91F90F8AA5814A503AD4EB04A8C7DD22' + 'CE2826', 16), + int('04A8C7DD22CE28268B39B55416F0447C2FB77DE107DCD2A62E880EA53EEB62D57CB4390295DBC9943AB78696FA' + '504C11', 16), + int('8CB91E82A3386D280F5D6F7E50E641DF152F7109ED5456B31F166E6CAC0425A7CF3AB6AF6B7FC3103B883202E9' + '046565', 16), + int('1D1C64F068CF45FFA2A63A81B7C13F6B8847A3E77EF14FE3DB7FCAFE0CBD10E8E826E03436D646AAEF87B2E247' + 'D4AF1E', 16), + int('8ABE1D7520F9C2A45CB1EB8E95CFD55262B70B29FEEC5864E19C054FF99129280E464621779181114282034126' + '3C5315', 16), + b'\x2B\x24\x03\x03\x02\x08\x01\x01\x0b' +) + +brainpoolP512r1 = Curve( + 'brainpoolP512r1', + int('AADD9DB8DBE9C48B3FD4E6AE33C9FC07CB308DB3B3C9D20ED6639CCA703308717D4D9B009BC66842AECDA12AE6' + 'A380E62881FF2F2D82C68528AA6056583A48F3', 16), + int('7830A3318B603B89E2327145AC234CC594CBDD8D3DF91610A83441CAEA9863BC2DED5D5AA8253AA10A2EF1C98B' + '9AC8B57F1117A72BF2C7B9E7C1AC4D77FC94CA', 16), + int('3DF91610A83441CAEA9863BC2DED5D5AA8253AA10A2EF1C98B9AC8B57F1117A72BF2C7B9E7C1AC4D77FC94CADC' + '083E67984050B75EBAE5DD2809BD638016F723', 16), + int('AADD9DB8DBE9C48B3FD4E6AE33C9FC07CB308DB3B3C9D20ED6639CCA70330870553E5C414CA92619418661197F' + 'AC10471DB1D381085DDADDB58796829CA90069', 16), + int('81AEE4BDD82ED9645A21322E9C4C6A9385ED9F70B5D916C1B43B62EEF4D0098EFF3B1F78E2D0D48D50D1687B93' + 'B97D5F7C6D5047406A5E688B352209BCB9F822', 16), + int('7DDE385D566332ECC0EABFA9CF7822FDF209F70024A57B1AA000C55B881F8111B2DCDE494A5F485E5BCA4BD88A' + '2763AED1CA2B2FA8F0540678CD1E0F3AD80892', 16), + 
b'\x2B\x24\x03\x03\x02\x08\x01\x01\x0d' +) diff --git a/env/lib/python3.8/site-packages/fastecdsa/curvemath.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/fastecdsa/curvemath.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..dc5e04c9 Binary files /dev/null and b/env/lib/python3.8/site-packages/fastecdsa/curvemath.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/fastecdsa/ecdsa.py b/env/lib/python3.8/site-packages/fastecdsa/ecdsa.py new file mode 100644 index 00000000..8cacef9f --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/ecdsa.py @@ -0,0 +1,121 @@ +from binascii import hexlify +from hashlib import sha256 +from typing import Tuple, TypeVar + +from fastecdsa import _ecdsa +from .curve import Curve, P256 +from .point import Point +from .util import RFC6979, msg_bytes + +MsgTypes = TypeVar('MsgTypes', str, bytes, bytearray) +SigType = Tuple[int, int] + + +class EcdsaError(Exception): + def __init__(self, msg): + self.msg = msg + + +def sign(msg: MsgTypes, d: int, curve: Curve = P256, hashfunc=sha256, prehashed: bool = False): + """Sign a message using the elliptic curve digital signature algorithm. + + The elliptic curve signature algorithm is described in full in FIPS 186-4 Section 6. Please + refer to http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf for more information. + + Args: + | msg (str|bytes|bytearray): A message to be signed. + | d (int): The ECDSA private key of the signer. + | curve (fastecdsa.curve.Curve): The curve to be used to sign the message. + | hashfunc (_hashlib.HASH): The hash function used to compress the message. + | prehashed (bool): The message being passed has already been hashed by :code:`hashfunc`. + + Returns: + (int, int): The signature (r, s) as a tuple. + """ + # generate a deterministic nonce per RFC6979 + rfc6979 = RFC6979(msg, d, curve.q, hashfunc, prehashed=prehashed) + k = rfc6979.gen_nonce() + + # Fix the bit-length of the random nonce, + # so that it doesn't leak via timing. + # This does not change that ks (mod n) = kt (mod n) = k (mod n) + ks = k + curve.q + kt = ks + curve.q + if ks.bit_length() == curve.q.bit_length(): + k = kt + else: + k = ks + + if prehashed: + hex_digest = hexlify(msg).decode() + else: + hex_digest = hashfunc(msg_bytes(msg)).hexdigest() + + r, s = _ecdsa.sign( + hex_digest, + str(d), + str(k), + str(curve.p), + str(curve.a), + str(curve.b), + str(curve.q), + str(curve.gx), + str(curve.gy) + ) + return int(r), int(s) + + +def verify( + sig: SigType, msg: MsgTypes, Q: Point, curve: Curve = P256, hashfunc=sha256, prehashed: bool = False) -> bool: + """Verify a message signature using the elliptic curve digital signature algorithm. + + The elliptic curve signature algorithm is described in full in FIPS 186-4 Section 6. Please + refer to http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf for more information. + + Args: + | sig (int, int): The signature for the message. + | msg (str|bytes|bytearray): A message to be signed. + | Q (fastecdsa.point.Point): The ECDSA public key of the signer. + | curve (fastecdsa.curve.Curve): The curve to be used to sign the message. + | hashfunc (_hashlib.HASH): The hash function used to compress the message. + | prehashed (bool): The message being passed has already been hashed by :code:`hashfunc`. + + Returns: + bool: True if the signature is valid, False otherwise. + + Raises: + fastecdsa.ecdsa.EcdsaError: If the signature or public key are invalid. 
Invalid signature + in this case means that it has values less than 1 or greater than the curve order. + """ + if isinstance(Q, tuple): + Q = Point(Q[0], Q[1], curve) + r, s = sig + + # validate Q, r, s (Q should be validated in constructor of Point already but double check) + if not curve.is_point_on_curve((Q.x, Q.y)): + raise EcdsaError('Invalid public key, point is not on curve {}'.format(curve.name)) + elif r > curve.q or r < 1: + raise EcdsaError( + 'Invalid Signature: r is not a positive integer smaller than the curve order') + elif s > curve.q or s < 1: + raise EcdsaError( + 'Invalid Signature: s is not a positive integer smaller than the curve order') + + if prehashed: + hashed = hexlify(msg).decode() + else: + hashed = hashfunc(msg_bytes(msg)).hexdigest() + + return _ecdsa.verify( + str(r), + str(s), + hashed, + str(Q.x), + str(Q.y), + str(curve.p), + str(curve.a), + str(curve.b), + str(curve.q), + str(curve.gx), + str(curve.gy) + ) diff --git a/env/lib/python3.8/site-packages/fastecdsa/encoding/__init__.py b/env/lib/python3.8/site-packages/fastecdsa/encoding/__init__.py new file mode 100644 index 00000000..dadef956 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/encoding/__init__.py @@ -0,0 +1,51 @@ +from abc import ABCMeta, abstractmethod + +from ..point import Point + + +class KeyEncoder: + """Base class that any encoding class for EC keys should derive from. + + All overriding methods should be static. If your key encoder writes binary + data you must have a field named :code:`binary_data` set to :code:`True` in + order for keys to correctly read from and write to disk. + """ + __metaclass__ = ABCMeta + + @staticmethod + @abstractmethod + def encode_public_key(Q: Point): + pass + + @staticmethod + @abstractmethod + def encode_private_key(d: int): + pass + + @staticmethod + @abstractmethod + def decode_public_key(data) -> Point: + pass + + @staticmethod + @abstractmethod + def decode_private_key(data) -> (int, Point): + pass + + +class SigEncoder: + """Base class that any encoding class for EC signatures should derive from. + + All overriding methods should be static. 
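A minimal sketch of a conforming subclass (a hypothetical fixed-width encoding, shown only to
illustrate the interface; it is not part of the package):

.. code:: python

    from fastecdsa.encoding import SigEncoder

    class RawEncoder(SigEncoder):
        # hypothetical encoder: r and s as fixed-width big-endian bytes,
        # assuming a 256-bit curve order such as P256 or secp256k1

        @staticmethod
        def encode_signature(r: int, s: int) -> bytes:
            return r.to_bytes(32, 'big') + s.to_bytes(32, 'big')

        @staticmethod
        def decode_signature(binary_data: bytes) -> (int, int):
            half = len(binary_data) // 2
            return (int.from_bytes(binary_data[:half], 'big'),
                    int.from_bytes(binary_data[half:], 'big'))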
+ """ + __metaclass__ = ABCMeta + + @staticmethod + @abstractmethod + def encode_signature(r: int, s: int) -> bytes: + pass + + @staticmethod + @abstractmethod + def decode_signature(binary_data: bytes) -> (int, int): + pass diff --git a/env/lib/python3.8/site-packages/fastecdsa/encoding/asn1.py b/env/lib/python3.8/site-packages/fastecdsa/encoding/asn1.py new file mode 100644 index 00000000..f02ef03d --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/encoding/asn1.py @@ -0,0 +1,134 @@ +from struct import pack, unpack + +from ..curve import Curve +from ..point import Point +from .util import int_bytelen, int_to_bytes + +INTEGER = b'\x02' +BIT_STRING = b'\x03' +OCTET_STRING = b'\x04' +OBJECT_IDENTIFIER = b'\x06' +SEQUENCE = b'\x30' +PARAMETERS = b'\xa0' +PUBLIC_KEY = b'\xa1' + + +class ASN1EncodingError(Exception): + pass + + +def _asn1_len(data: bytes) -> bytes: + # https://www.itu.int/ITU-T/studygroups/com17/languages/X.690-0207.pdf + # section 8.1.3.3 + dlen = len(data) + + if dlen < 0x80: + return pack('=B', dlen) + else: + encoded = b'' + + while dlen: + len_byte = pack('=B', dlen & 0xff) + encoded = len_byte + encoded + dlen >>= 8 + + return pack('=B', 0x80 | len(encoded)) + encoded + + +def asn1_structure(data_type: bytes, data: bytes) -> bytes: + return data_type + _asn1_len(data) + data + + +def asn1_private_key(d: int, curve: Curve) -> bytes: + d_bytes = int_to_bytes(d) + padding = b'\x00' * (int_bytelen(curve.q) - len(d_bytes)) + return asn1_structure(OCTET_STRING, padding + d_bytes) + + +def asn1_ecversion(version=1) -> bytes: + version_bytes = int_to_bytes(version) + return asn1_structure(INTEGER, version_bytes) + + +def asn1_ecpublickey() -> bytes: + # via RFC 5480 - The "unrestricted" algorithm identifier is: + # id-ecPublicKey OBJECT IDENTIFIER ::= { + # iso(1) member-body(2) us(840) ansi-X9-62(10045) keyType(2) 1 } + return asn1_structure(OBJECT_IDENTIFIER, b'\x2A\x86\x48\xCE\x3D\x02\x01') + + +def asn1_oid(curve: Curve) -> bytes: + oid_bytes = curve.oid + return asn1_structure(OBJECT_IDENTIFIER, oid_bytes) + + +def asn1_public_key(Q: Point) -> bytes: + p_len = int_bytelen(Q.curve.p) + + x_bytes = int_to_bytes(Q.x) + x_padding = b'\x00' * (p_len - len(x_bytes)) + + y_bytes = int_to_bytes(Q.y) + y_padding = b'\x00' * (p_len - len(y_bytes)) + + key_bytes = b'\x00\x04' + x_padding + x_bytes + y_padding + y_bytes + return asn1_structure(BIT_STRING, key_bytes) + + +def parse_asn1_length(data: bytes) -> (int, bytes, bytes): + """ + Parse an ASN.1 encoded structure. + + Args: + data (bytes): A sequence of bytes representing an ASN.1 encoded structure + + Returns: + (int, bytes, bytes): A tuple of the integer length in bytes, the byte representation of the integer, + and the remaining bytes after the integer bytes in the sequence + """ + (initial_byte,) = unpack('=B', data[:1]) + data = data[1:] + + if not (initial_byte & 0x80): + length = initial_byte + else: + count = initial_byte & 0x7f + fmt = {1: '=B', 2: '=H', 3: '=L', 4: '=L', 5: '=Q', 6: '=Q', 7: '=Q', 8: '=Q'} + zero_padding = b'\x00' * ((1 << (count.bit_length() - 1)) - count) + + (length,) = unpack(fmt[count], zero_padding + data[:count]) + data = data[count:] + + if length > len(data): + raise ASN1EncodingError("Parsed length of ASN.1 structure to be {} bytes but only {} bytes" + "remain in the provided data".format(length, len(data))) + + return length, data[:length], data[length:] + + +def parse_asn1_int(data: bytes) -> (int, bytes, bytes): + """ + Parse an ASN.1 encoded integer. 
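For example (a minimal sketch of the tag-length-value layout described above), a one-byte
INTEGER parses as:

.. code:: python

    from fastecdsa.encoding.asn1 import parse_asn1_int

    # tag 0x02 (INTEGER), length 0x01, content 0x2a (decimal 42)
    length, value, remaining = parse_asn1_int(b'\x02\x01\x2a')
    assert (length, value, remaining) == (1, b'\x2a', b'')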
+ + Args: + data (bytes): A sequence of bytes whose start is an ASN.1 integer encoding + + Returns: + (int, bytes, bytes): A tuple of the integer length in bytes, the byte representation of the integer, + and the remaining bytes after the integer bytes in the sequence + """ + + # encoding needs at least the type, length and data + if len(data) < 3: + raise ASN1EncodingError("ASN.1 encoded integer must be at least 3 bytes long") + # integer should be identified as ASN.1 integer + if data[0] != ord(INTEGER): + raise ASN1EncodingError("Value should be a ASN.1 INTEGER") + + length, data, remaining = parse_asn1_length(data[1:]) + + # integer length should match length indicated + if length != len(data): + raise ASN1EncodingError("Expected ASN.1 INTEGER to be {} bytes, got {} bytes".format(length, len(data))) + + return length, data, remaining diff --git a/env/lib/python3.8/site-packages/fastecdsa/encoding/der.py b/env/lib/python3.8/site-packages/fastecdsa/encoding/der.py new file mode 100644 index 00000000..a0344337 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/encoding/der.py @@ -0,0 +1,76 @@ +from struct import pack + +from . import SigEncoder +from .asn1 import ASN1EncodingError, INTEGER, SEQUENCE, asn1_structure, parse_asn1_int, parse_asn1_length +from .util import bytes_to_int, int_to_bytes + + +class InvalidDerSignature(Exception): + pass + + +class DEREncoder(SigEncoder): + @staticmethod + def encode_signature(r: int, s: int) -> bytes: + """Encode an EC signature in serialized DER format as described in + https://tools.ietf.org/html/rfc2459 (section 7.2.2) and as detailed by + bip-0066 + + Args: + r, s + + Returns: + bytes: The DER encoded signature + + """ + r_bytes = int_to_bytes(r) + if r_bytes[0] & 0x80: + r_bytes = b"\x00" + r_bytes + s_bytes = int_to_bytes(s) + if s_bytes[0] & 0x80: + s_bytes = b"\x00" + s_bytes + + r_asn1 = asn1_structure(INTEGER, r_bytes) + s_asn1 = asn1_structure(INTEGER, s_bytes) + return asn1_structure(SEQUENCE, r_asn1 + s_asn1) + + @staticmethod + def decode_signature(sig: bytes) -> (int, int): + """Decode an EC signature from serialized DER format as described in + https://tools.ietf.org/html/rfc2459 (section 7.2.2) and as detailed by + bip-0066 + + Returns (r,s) + """ + def _validate_int_bytes(data: bytes): + # check for negative values, indicated by leading 1 bit + if data[0] & 0x80: + raise InvalidDerSignature("Signature contains a negative value") + + # check for leading 0x00s that aren't there to disambiguate possible negative values + if data[0] == 0x00 and not data[1] & 0x80: + raise InvalidDerSignature("Invalid leading 0x00 byte in ASN.1 integer") + + # overarching structure must be a sequence + if not sig or sig[0] != ord(SEQUENCE): + raise InvalidDerSignature("First byte should be ASN.1 SEQUENCE") + + try: + seqlen, sequence, leftover = parse_asn1_length(sig[1:]) + except ASN1EncodingError as asn1_error: + raise InvalidDerSignature(asn1_error) + + # sequence should be entirety remaining data + if leftover: + raise InvalidDerSignature("Expected a sequence of {} bytes, got {}".format( + seqlen, len(sequence + leftover))) + + try: + rlen, r, sdata = parse_asn1_int(sequence) + slen, s, _ = parse_asn1_int(sdata) + except ASN1EncodingError as asn1_error: + raise InvalidDerSignature(asn1_error) + + _validate_int_bytes(r) + _validate_int_bytes(s) + return bytes_to_int(r), bytes_to_int(s) diff --git a/env/lib/python3.8/site-packages/fastecdsa/encoding/pem.py b/env/lib/python3.8/site-packages/fastecdsa/encoding/pem.py new file mode 100644 
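A quick round trip through the DER encoder defined above (a minimal sketch; the expected bytes
match the package's own test vectors further below):

.. code:: python

    from fastecdsa.encoding.der import DEREncoder

    # SEQUENCE(INTEGER 1, INTEGER 2) encodes to 8 bytes
    encoded = DEREncoder.encode_signature(r=1, s=2)
    assert encoded == b'\x30\x06\x02\x01\x01\x02\x01\x02'
    assert DEREncoder.decode_signature(encoded) == (1, 2)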
index 00000000..10d9b4d3 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/encoding/pem.py @@ -0,0 +1,135 @@ +from binascii import a2b_base64, b2a_base64, hexlify +from textwrap import wrap + +from . import KeyEncoder +from .asn1 import ( + BIT_STRING, OBJECT_IDENTIFIER, OCTET_STRING, PARAMETERS, PUBLIC_KEY, SEQUENCE, + asn1_ecpublickey, asn1_ecversion, asn1_oid, asn1_private_key, asn1_public_key, asn1_structure, parse_asn1_length +) +from ..curve import Curve +from ..point import Point + +EC_PRIVATE_HEADER = '-----BEGIN EC PRIVATE KEY-----' +EC_PRIVATE_FOOTER = '-----END EC PRIVATE KEY-----' + +EC_PUBLIC_HEADER = '-----BEGIN PUBLIC KEY-----' +EC_PUBLIC_FOOTER = '-----END PUBLIC KEY-----' + + +class PEMEncoder(KeyEncoder): + ASN1_PARSED_DATA = [] + binary_data = False + + @staticmethod + def _parse_ascii_armored_base64(data: str) -> bytes: + """Convert an ASCII armored key to raw binary data""" + data = data.strip() + lines = (line for line in data.split('\n')) + header = next(lines).rstrip() + + base64_data = '' + line = next(lines).rstrip() + + while line and (line != EC_PRIVATE_FOOTER) and (line != EC_PUBLIC_FOOTER): + base64_data += line + line = next(lines).rstrip() + + return a2b_base64(base64_data) + + @staticmethod + def _parse_asn1_structure(data: bytes): + """Recursively parse ASN.1 data""" + data_type = data[:1] + length, data, remaining = parse_asn1_length(data[1:]) + + if data_type in [OCTET_STRING, BIT_STRING, OBJECT_IDENTIFIER]: + PEMEncoder.ASN1_PARSED_DATA.append((data_type, data)) + elif data_type in [SEQUENCE, PUBLIC_KEY, PARAMETERS]: + PEMEncoder._parse_asn1_structure(data) + + if remaining: + PEMEncoder._parse_asn1_structure(remaining) + + @staticmethod + def encode_public_key(Q: Point) -> str: + """Encode an EC public key as described in `RFC 5480 `_. + + Returns: + str: The ASCII armored encoded EC public key. + """ + algorithm = asn1_ecpublickey() + oid = asn1_oid(Q.curve) + parameters = asn1_structure(SEQUENCE, algorithm + oid) + public_key = asn1_public_key(Q) + + sequence = parameters + public_key + ec_public_key = asn1_structure(SEQUENCE, sequence) + b64_data = '\n'.join(wrap(b2a_base64(ec_public_key).decode(), 64)) + + return EC_PUBLIC_HEADER + '\n' + b64_data + '\n' + EC_PUBLIC_FOOTER + + @staticmethod + def encode_private_key(d: int, Q: Point = None, curve: Curve = None) -> str: + """Encode an EC keypair as described in `RFC 5915 `_. + + Args: + | d (long): An ECDSA private key. + | Q (fastecdsa.point.Point): The ECDSA public key. + | curve (fastecdsa.curve.Curve): The curve that the private key is for. + + Returns: + str: The ASCII armored encoded EC keypair. 
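A usage sketch (assuming a freshly generated P256 keypair):

.. code:: python

    from fastecdsa.curve import P256
    from fastecdsa.encoding.pem import PEMEncoder
    from fastecdsa.keys import gen_keypair

    d, Q = gen_keypair(P256)
    pem = PEMEncoder.encode_private_key(d, Q=Q)
    assert pem.startswith('-----BEGIN EC PRIVATE KEY-----')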
+ """ + if Q is None and curve is None: + raise ValueError('Ambiguous encoding, public key or curve must passed as an argument') + elif Q is None: + Q = d * curve.G + + version = asn1_ecversion() + private_key = asn1_private_key(d, Q.curve) + oid = asn1_oid(Q.curve) + parameters = asn1_structure(PARAMETERS, oid) + public_key_bitstring = asn1_public_key(Q) + public_key = asn1_structure(PUBLIC_KEY, public_key_bitstring) + + sequence = version + private_key + parameters + public_key + ec_private_key = asn1_structure(SEQUENCE, sequence) + b64_data = '\n'.join(wrap(b2a_base64(ec_private_key).decode(), 64)) + + return EC_PRIVATE_HEADER + '\n' + b64_data + '\n' + EC_PRIVATE_FOOTER + + @staticmethod + def decode_public_key(pemdata: str, curve: Curve = None) -> Point: + """Delegate to private key decoding but return only the public key""" + _, Q = PEMEncoder.decode_private_key(pemdata) + return Q + + @staticmethod + def decode_private_key(pemdata: str) -> (int, Point): + """Decode an EC key as described in `RFC 5915 `_ and + `RFC 5480 `_. + + Args: + pemdata (bytes): A sequence of bytes representing an encoded EC key. + + Returns: + (long, fastecdsa.point.Point): A private key, public key tuple. If the encoded key was a + public key the first entry in the tuple is None. + """ + pemdata = PEMEncoder._parse_ascii_armored_base64(pemdata) + PEMEncoder._parse_asn1_structure(pemdata) + + d, x, y, curve = None, None, None, None + for (value_type, value) in PEMEncoder.ASN1_PARSED_DATA: + if value_type == OCTET_STRING: + d = int(hexlify(value), 16) + elif value_type == OBJECT_IDENTIFIER and curve is None: + curve = Curve.get_curve_by_oid(value) + elif value_type == BIT_STRING: + value = value[2:] # strip off b'\x00\x04' + x = int(hexlify(value[:len(value) // 2]), 16) + y = int(hexlify(value[len(value) // 2:]), 16) + + PEMEncoder.ASN1_PARSED_DATA = [] + Q = None if (x is None) or (y is None) else Point(x, y, curve) + return d, Q diff --git a/env/lib/python3.8/site-packages/fastecdsa/encoding/sec1.py b/env/lib/python3.8/site-packages/fastecdsa/encoding/sec1.py new file mode 100644 index 00000000..a5dad8e5 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/encoding/sec1.py @@ -0,0 +1,78 @@ +from . 
import KeyEncoder +from .util import bytes_to_int, int_bytelen, int_to_bytes +from ..curve import Curve +from ..point import Point +from ..util import mod_sqrt + + +class InvalidSEC1PublicKey(Exception): + pass + + +class SEC1Encoder(KeyEncoder): + binary_data = True + + @staticmethod + def encode_public_key(point: Point, compressed: bool = True) -> bytes: + """ Encode a public key as described in http://www.secg.org/SEC1-Ver-1.0.pdf + in sections 2.3.3/2.3.4 + uncompressed: 04 + x_bytes + y_bytes + compressed: 02 or 03 + x_bytes + Args: + point (fastecdsa.point.Point): Public key to encode + compressed (bool): Set to False if you want an uncompressed format + + Returns: + bytes: The SEC1 encoded public key + """ + bytelen = int_bytelen(point.curve.q) + if compressed: + if point.y & 1: # odd root + return b'\x03' + int_to_bytes(point.x).rjust(bytelen, b'\x00') + else: # even root + return b'\x02' + int_to_bytes(point.x).rjust(bytelen, b'\x00') + return b'\x04' + int_to_bytes(point.x).rjust(bytelen, b'\x00') + int_to_bytes(point.y).rjust(bytelen, b'\x00') + + @staticmethod + def decode_public_key(key: bytes, curve: Curve) -> Point: + """ Decode a public key as described in http://www.secg.org/SEC1-Ver-1.0.pdf + in sections 2.3.3/2.3.4 + + uncompressed: 04 + x_bytes + y_bytes + compressed: 02 or 03 + x_bytes + + Args: + key (bytes): public key encoded using the SEC1 format + curve (fastecdsa.curve.Curve): Curve to use when decoding the public key + + Returns: + Point: The decoded public key + + Raises: + InvalidSEC1PublicKey + """ + bytelen = int_bytelen(curve.q) + if key.startswith(b'\x04'): # uncompressed key + if len(key) != bytelen * 2 + 1: + raise InvalidSEC1PublicKey('An uncompressed public key must be %d bytes long' % (bytelen * 2 + 1)) + x, y = bytes_to_int(key[1:bytelen + 1]), bytes_to_int(key[bytelen + 1:]) + else: # compressed key + if len(key) != bytelen + 1: + raise InvalidSEC1PublicKey('A compressed public key must be %d bytes long' % (bytelen + 1)) + x = bytes_to_int(key[1:]) + root = mod_sqrt(curve.evaluate(x), curve.p)[0] + if key.startswith(b'\x03'): # odd root + y = root if root % 2 == 1 else -root % curve.p + elif key.startswith(b'\x02'): # even root + y = root if root % 2 == 0 else -root % curve.p + else: + raise InvalidSEC1PublicKey('Wrong key format') + return Point(x, y, curve=curve) + + @staticmethod + def encode_private_key(d): + raise NotImplementedError('SEC1Encoder only encodes public keys') + + @staticmethod + def decode_private_key(data): + raise NotImplementedError('SEC1Encoder only decodes public keys') diff --git a/env/lib/python3.8/site-packages/fastecdsa/encoding/util.py b/env/lib/python3.8/site-packages/fastecdsa/encoding/util.py new file mode 100644 index 00000000..daa1db71 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/encoding/util.py @@ -0,0 +1,29 @@ +from struct import pack + + +def int_bytelen(x: int) -> int: + length = 0 + + while x: + length += 1 + x >>= 8 + + return length + + +def int_to_bytes(x: int) -> bytes: + bs = b'' + + while x: + bs = pack('=B', (0xff & x)) + bs + x >>= 8 + + return bs + + +def bytes_to_int(bytestr: bytes) -> int: + """Make an integer from a big endian bytestring.""" + value = 0 + for byte_intval in bytestr: + value = value * 256 + byte_intval + return value diff --git a/env/lib/python3.8/site-packages/fastecdsa/keys.py b/env/lib/python3.8/site-packages/fastecdsa/keys.py new file mode 100644 index 00000000..f9994abe --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/keys.py @@ -0,0 +1,173 @@ 
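The module below is organized around the relation :math:`Q = dG`; a minimal sketch of that
invariant using the functions it defines:

.. code:: python

    from fastecdsa.curve import P256
    from fastecdsa.keys import gen_keypair, get_public_key

    d, Q = gen_keypair(P256)
    # the public key is the private scalar times the curve's base point
    assert Q == get_public_key(d, P256) == d * P256.G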
+from binascii import hexlify +from hashlib import sha256 +from os import urandom +from typing import Optional, Tuple + +from .curve import Curve, P256 +from .ecdsa import verify +from .encoding.pem import PEMEncoder +from .point import Point +from .util import mod_sqrt, msg_bytes + + +def gen_keypair(curve: Curve) -> Tuple[int, Point]: + """Generate a keypair that consists of a private key and a public key. + + The private key :math:`d` is an integer generated via a cryptographically secure random number + generator that lies in the range :math:`[1,n)`, where :math:`n` is the curve order. The public + key :math:`Q` is a point on the curve calculated as :math:`Q = dG`, where :math:`G` is the + curve's base point. + + Args: + curve (fastecdsa.curve.Curve): The curve over which the keypair will be calulated. + + Returns: + int, fastecdsa.point.Point: Returns a tuple with the private key first and public key + second. + """ + private_key = gen_private_key(curve) + public_key = get_public_key(private_key, curve) + return private_key, public_key + + +def gen_private_key(curve: Curve, randfunc=urandom) -> int: + """Generate a private key to sign data with. + + The private key :math:`d` is an integer generated via a cryptographically secure random number + generator that lies in the range :math:`[1,n)`, where :math:`n` is the curve order. The default + random number generator used is /dev/urandom. + + Args: + | curve (fastecdsa.curve.Curve): The curve over which the key will be calulated. + | randfunc (function): A function taking one argument 'n' and returning a bytestring + of n random bytes suitable for cryptographic use. + The default is "os.urandom" + Returns: + int: Returns a positive integer smaller than the curve order. + """ + order_bits = 0 + order = curve.q + + while order > 0: + order >>= 1 + order_bits += 1 + + order_bytes = (order_bits + 7) // 8 # randfunc only takes bytes + extra_bits = order_bytes * 8 - order_bits # bits to shave off after getting bytes + + rand = int(hexlify(randfunc(order_bytes)), 16) + rand >>= extra_bits + + # no modding by group order or we'll introduce biases + while rand >= curve.q: + rand = int(hexlify(randfunc(order_bytes)), 16) + rand >>= extra_bits + + return rand + + +def get_public_key(d: int, curve: Curve) -> Point: + """Generate a public key from a private key. + + The public key :math:`Q` is a point on the curve calculated as :math:`Q = dG`, where :math:`d` + is the private key and :math:`G` is the curve's base point. + + Args: + | d (long): An integer representing the private key. + | curve (fastecdsa.curve.Curve): The curve over which the key will be calulated. + + Returns: + fastecdsa.point.Point: The public key, a point on the given curve. + """ + return d * curve.G + + +def get_public_keys_from_sig(sig: (int, int), msg, curve: Curve = P256, hashfunc=sha256) -> Tuple[Point, Point]: + """Recover the public keys that can verify a signature / message pair. + + Args: + | sig (int, int): A ECDSA signature. + | msg (str|bytes|bytearray): The message corresponding to the signature. + | curve (fastecdsa.curve.Curve): The curve used to sign the message. + | hashfunc (_hashlib.HASH): The hash function used to compress the message. + + Returns: + (fastecdsa.point.Point, fastecdsa.point.Point): The public keys that can verify the + signature for the message. 
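A recovery round trip (a minimal sketch; a genuine signature always yields the signer's key as
one of the two candidates):

.. code:: python

    from fastecdsa.curve import P256
    from fastecdsa.ecdsa import sign
    from fastecdsa.keys import gen_keypair, get_public_keys_from_sig

    d, Q = gen_keypair(P256)
    sig = sign('recover me', d, curve=P256)
    # two candidate keys verify the signature; the signer's key is among them
    assert Q in get_public_keys_from_sig(sig, 'recover me', curve=P256)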
+ """ + r, s = sig + rinv = pow(r, curve.q - 2, curve.q) + + z = int(hashfunc(msg_bytes(msg)).hexdigest(), 16) + hash_bit_length = hashfunc().digest_size * 8 + if curve.q.bit_length() < hash_bit_length: + z >>= (hash_bit_length - curve.q.bit_length()) + + y_squared = (r * r * r + curve.a * r + curve.b) % curve.p + y1, y2 = mod_sqrt(y_squared, curve.p) + R1, R2 = Point(r, y1, curve=curve), Point(r, y2, curve=curve) + + Qs = rinv * (s * R1 - z * curve.G), rinv * (s * R2 - z * curve.G) + for Q in Qs: + if not verify(sig, msg, Q, curve=curve, hashfunc=hashfunc): + raise ValueError('Could not recover public key, is the signature ({}) a valid ' + 'signature for the message ({}) over the given curve ({}) using the ' + 'given hash function ({})?'.format(sig, msg, curve, hashfunc)) + return Qs + + +def export_key(key, curve: Curve = None, filepath: str = None, encoder=PEMEncoder): + """Export a public or private EC key in PEM format. + + Args: + | key (fastecdsa.point.Point | int): A public or private EC key + | curve (fastecdsa.curve.Curve): The curve corresponding to the key (required if the + key is a private key) + | filepath (str): Where to save the exported key. If None the key is simply printed. + | encoder (type): The class used to encode the key + """ + # encode a public key + if isinstance(key, Point): + encoded = encoder.encode_public_key(key) + + # throw error for ambiguous private keys + elif curve is None: + raise ValueError('curve parameter cannot be \'None\' when exporting a private key') + + # encode a private key + else: + pubkey = key * curve.G + encoded = encoder.encode_private_key(key, Q=pubkey) + + # return binary data or write to file + if filepath is None: + return encoded + else: + # some encoder output strings, others bytes, need to determine what mode to write in + write_mode = 'w' + ('b' if getattr(encoder, 'binary_data', False) else '') + with open(filepath, write_mode) as f: + f.write(encoded) + + +def import_key(filepath: str, curve: Curve = None, public: bool = False, decoder=PEMEncoder + ) -> Tuple[Optional[int], Point]: + """Import a public or private EC key in PEM format. + + Args: + | filepath (str): The location of the key file + | public (bool): Indicates if the key file is a public key + | decoder (fastecdsa.encoding.KeyEncoder): The class used to parse the key + + Returns: + (long, fastecdsa.point.Point): A (private key, public key) tuple. If a public key was + imported then the first value will be None. + """ + # some decoders read strings, others bytes, need to determine what mode to write in + read_mode = 'r' + ('b' if getattr(decoder, 'binary_data', False) else '') + with open(filepath, read_mode) as f: + data = f.read() + + if public: + return None, decoder.decode_public_key(data, curve) + else: + return decoder.decode_private_key(data) diff --git a/env/lib/python3.8/site-packages/fastecdsa/point.py b/env/lib/python3.8/site-packages/fastecdsa/point.py new file mode 100644 index 00000000..ffe79c00 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/point.py @@ -0,0 +1,191 @@ +from fastecdsa import curvemath +from .curve import P256 +from .util import validate_type + + +class CurveMismatchError(Exception): + def __init__(self, curve1, curve2): + self.msg = 'Tried to add points on two different curves <{}> & <{}>'.format( + curve1.name, curve2.name) + + +class Point: + """Representation of a point on an elliptic curve. + + Attributes: + | x (int): The x coordinate of the point. + | y (int): The y coordinate of the point. 
+ | curve (:class:`Curve`): The curve that the point lies on. + """ + + def __init__(self, x: int, y: int, curve=P256): + """Initialize a point on an elliptic curve. + + The x and y parameters must satisfy the equation :math:`y^2 \equiv x^3 + ax + b \pmod{p}`, + where a, b, and p are attributes of the curve parameter. + + Args: + | x (int): The x coordinate of the point. + | y (int): The y coordinate of the point. + | curve (:class:`Curve`): The curve that the point lies on. + """ + + # Reduce numbers before computation to avoid errors and limit computations. + if curve is not None: + x = x % curve.p + y = y % curve.p + + if not (x == 0 and y == 0 and curve is None) and not curve.is_point_on_curve((x, y)): + raise ValueError( + 'coordinates are not on curve <{}>\n\tx={:x}\n\ty={:x}'.format(curve.name, x, y)) + else: + self.x = x + self.y = y + self.curve = curve + + def __str__(self): + if self == self.IDENTITY_ELEMENT: + return '' + else: + return 'X: 0x{:x}\nY: 0x{:x}\n(On curve <{}>)'.format(self.x, self.y, self.curve.name) + + def __unicode__(self) -> str: + return self.__str__() + + def __repr__(self) -> str: + return self.__str__() + + def __eq__(self, other) -> bool: + validate_type(other, Point) + return self.x == other.x and self.y == other.y and self.curve is other.curve + + def __add__(self, other): + """Add two points on the same elliptic curve. + + Args: + | self (:class:`Point`): a point :math:`P` on the curve + | other (:class:`Point`): a point :math:`Q` on the curve + + Returns: + :class:`Point`: A point :math:`R` such that :math:`R = P + Q` + """ + validate_type(other, Point) + + if self == self.IDENTITY_ELEMENT: + return other + elif other == self.IDENTITY_ELEMENT: + return self + elif self.curve is not other.curve: + raise CurveMismatchError(self.curve, other.curve) + elif self == -other: + return self.IDENTITY_ELEMENT + else: + x, y = curvemath.add( + str(self.x), + str(self.y), + str(other.x), + str(other.y), + str(self.curve.p), + str(self.curve.a), + str(self.curve.b), + str(self.curve.q), + str(self.curve.gx), + str(self.curve.gy) + ) + return Point(int(x), int(y), self.curve) + + def __radd__(self, other): + """Add two points on the same elliptic curve. + + Args: + | self (:class:`Point`): a point :math:`P` on the curve + | other (:class:`Point`): a point :math:`Q` on the curve + + Returns: + :class:`Point`: A point :math:`R` such that :math:`R = R + Q` + """ + validate_type(other, Point) + return self.__add__(other) + + def __sub__(self, other): + """Subtract two points on the same elliptic curve. + + Args: + | self (:class:`Point`): a point :math:`P` on the curve + | other (:class:`Point`): a point :math:`Q` on the curve + + Returns: + :class:`Point`: A point :math:`R` such that :math:`R = P - Q` + """ + validate_type(other, Point) + + if self == other: + return self.IDENTITY_ELEMENT + elif other == self.IDENTITY_ELEMENT: + return self + + negative = Point(other.x, -other.y % other.curve.p, other.curve) + return self.__add__(negative) + + def __mul__(self, scalar: int): + """Multiply a :class:`Point` on an elliptic curve by an integer. 
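For instance (a minimal sketch), scalar multiplication agrees with repeated point addition:

.. code:: python

    from fastecdsa.curve import P256

    G = P256.G
    # multiplying by a small scalar is the same as adding the point to itself
    assert 3 * G == G + G + G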
+ + Args: + | self (:class:`Point`): a point :math:`P` on the curve + | other (int): an integer :math:`d \in \mathbb{Z_q}` where :math:`q` is the order of + the curve that :math:`P` is on + + Returns: + :class:`Point`: A point :math:`R` such that :math:`R = P * d` + """ + validate_type(scalar, int) + + if scalar == 0: + return self.IDENTITY_ELEMENT + + x, y = curvemath.mul( + str(self.x), + str(self.y), + str(abs(scalar)), + str(self.curve.p), + str(self.curve.a), + str(self.curve.b), + str(self.curve.q), + str(self.curve.gx), + str(self.curve.gy) + ) + x = int(x) + y = int(y) + if x == 0 and y == 0: + return self.IDENTITY_ELEMENT + return Point(x, y, self.curve) if scalar > 0 else -Point(x, y, self.curve) + + def __rmul__(self, scalar: int): + """Multiply a :class:`Point` on an elliptic curve by an integer. + + Args: + | self (:class:`Point`): a point :math:`P` on the curve + | other (long): an integer :math:`d \in \mathbb{Z_q}` where :math:`q` is the order of + the curve that :math:`P` is on + + Returns: + :class:`Point`: A point :math:`R` such that :math:`R = d * P` + """ + return self.__mul__(scalar) + + def __neg__(self): + """Return the negation of a :class:`Point` i.e. the points reflection over the x-axis. + + Args: + | self (:class:`Point`): a point :math:`P` on the curve + + Returns: + :class:`Point`: A point :math:`R = (P_x, -P_y)` + """ + if self == self.IDENTITY_ELEMENT: + return self + + return Point(self.x, -self.y % self.curve.p, self.curve) + + +Point.IDENTITY_ELEMENT = Point(0, 0, curve=None) # also known as the point at infinity diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/__init__.py b/env/lib/python3.8/site-packages/fastecdsa/tests/__init__.py new file mode 100644 index 00000000..474f3552 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/__init__.py @@ -0,0 +1,9 @@ +from ..curve import ( + P192, P224, P256, P384, P521, secp192k1, secp224k1, secp256k1, brainpoolP160r1, brainpoolP192r1, + brainpoolP224r1, brainpoolP256r1, brainpoolP320r1, brainpoolP384r1, brainpoolP512r1 +) + +CURVES = [ + P192, P224, P256, P384, P521, secp192k1, secp224k1, secp256k1, brainpoolP160r1, brainpoolP192r1, + brainpoolP224r1, brainpoolP256r1, brainpoolP320r1, brainpoolP384r1, brainpoolP512r1 +] diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/encoding/__init__.py b/env/lib/python3.8/site-packages/fastecdsa/tests/encoding/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/encoding/test_der.py b/env/lib/python3.8/site-packages/fastecdsa/tests/encoding/test_der.py new file mode 100644 index 00000000..98665f11 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/encoding/test_der.py @@ -0,0 +1,111 @@ +from binascii import unhexlify +from random import randint +from unittest import TestCase + +from .. 
import CURVES +from fastecdsa.curve import secp256k1 +from fastecdsa.ecdsa import sign +from fastecdsa.encoding.der import DEREncoder, InvalidDerSignature + + +class TestDEREncoder(TestCase): + def test_encode_signature(self): + self.assertEqual( + DEREncoder.encode_signature(r=1, s=2), + b"\x30" # SEQUENCE + b"\x06" # Length of Sequence + b"\x02" # INTEGER + b"\x01" # Length of r + b"\x01" # r + b"\x02" # INTEGER + b"\x01" # Length of s + b"\x02", # s + ) + + # Check that we add a zero byte when the number's highest bit is set + self.assertEqual(DEREncoder.encode_signature(r=128, s=128), b"0\x08\x02\x02\x00\x80\x02\x02\x00\x80") + + # Check a value on a standard curve like secp256k1 works + # see https://github.com/btccom/secp256k1-go/blob/master/secp256k1/sign_vectors.yaml + secp256k1_vectors = [ + ( + 0x31a84594060e103f5a63eb742bd46cf5f5900d8406e2726dedfc61c7cf43ebad, + unhexlify("9e5755ec2f328cc8635a55415d0e9a09c2b6f2c9b0343c945fbbfe08247a4cbe"), + unhexlify("30440220132382ca59240c2e14ee7ff61d90fc63276325f4cbe8169fc53ade4a407c2fc802204d86fbe3bde69" + "75dd5a91fdc95ad6544dcdf0dab206f02224ce7e2b151bd82ab"), + ), + ( + 0x7177f0d04c79fa0b8c91fe90c1cf1d44772d1fba6e5eb9b281a22cd3aafb51fe, + unhexlify("2d46a712699bae19a634563d74d04cc2da497b841456da270dccb75ac2f7c4e7"), + unhexlify("3045022100d80cf7abc9ab601373780cee3733d2cb5ff69ba1452ec2d2a058adf9645c13be0220011d1213b7d" + "152f72fd8759b45276ba32d9c909602e5ec89550baf3aaa8ed950"), + ), + ( + 0x989e500d6b1397f2c5dcdf43c58ac2f14df753eb6089654e07ff946b3f84f3d5, + unhexlify("c94f4ec84be928017cbbb447d2ab5b5d4d69e5e5fd03da7eae4378a1b1c9c402"), + unhexlify("3045022100d0f5b740cbe3ee5b098d3c5afdefa61bb0797cb4e7b596afbd38174e1c653bb602200329e9f1a09" + "632de477664814791ac31544e04715db68f4b02657ba35863e711"), + ), + ( + 0x39dfc615f2b718397f6903b0c46c47c5687e97d3d2a5e1f2b200f459f7b1219b, + unhexlify("dfeb2092955572ce0695aa038f58df5499949e18f58785553c3e83343cd5eb93"), + unhexlify("30440220692c01edf8aeab271df3ed4e8d57a170f014f8f9d65031aac28b5e1840acfb5602205075f9d1fdbf5" + "079ee052e5f3572d518b3594ef49582899ec44d065f71a55192"), + ), + ] + + for private_key, digest, expected in secp256k1_vectors: + r, s = sign(digest, private_key, curve=secp256k1, prehashed=True) + encoded = DEREncoder.encode_signature(r, s) + self.assertEqual(encoded, expected) + + def test_decode_signature(self): + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"") # length too short + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x31\x06\x02\x01\x01\x02\x01\x02") # invalid SEQUENCE marker + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x07\x02\x01\x01\x02\x01\x02") # invalid length (too short) + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x05\x02\x01\x01\x02\x01\x02") # invalid length (too long) + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x06\x02\x03\x01\x02\x01\x02") # invalid length of r + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x06\x02\x01\x01\x02\x03\x02") # invalid length of s + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x06\x03\x01\x01\x02\x01\x02") # invalid INTEGER marker for r + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x06\x02\x00\x02\x01\x02") # length of r is 0 + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x06\x02\x01\x81\x02\x01\x02") # value of r 
is negative + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x07\x02\x02\x00\x01\x02\x01\x02") # value of r starts with a zero byte + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x06\x02\x01\x01\x03\x01\x02") # invalid INTEGER marker for s + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x06\x02\x01\x01\x02\x00") # value of s is 0 + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x06\x02\x01\x01\x02\x01\x81") # value of s is negative + with self.assertRaises(InvalidDerSignature): + DEREncoder.decode_signature(b"\x30\x07\x02\x01\x01\x02\x02\x00\x02") # value of s starts with a zero byte + + self.assertEqual(DEREncoder.decode_signature(b"\x30\x06\x02\x01\x01\x02\x01\x02"), (1, 2)) + self.assertEqual( + DEREncoder.decode_signature(b"0\x08\x02\x02\x00\x80\x02\x02\x00\x80"), (128, 128) + ) # verify zero bytes + self.assertEqual( + DEREncoder.decode_signature(b"0\x08\x02\x02\x03\xE8\x02\x02\x03\xE8"), (1000, 1000) + ) # verify byte order + + def test_encode_decode_all_curves(self): + for curve in CURVES: + d = randint(1, curve.q) + Q = d * curve.G + r, s = sign("sign me", d, curve=curve) + + encoded = DEREncoder.encode_signature(r, s) + decoded_r, decoded_s = DEREncoder.decode_signature(encoded) + + self.assertEqual(decoded_r, r) + self.assertEqual(decoded_s, s) + diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/encoding/test_sec1.py b/env/lib/python3.8/site-packages/fastecdsa/tests/encoding/test_sec1.py new file mode 100644 index 00000000..df3934bd --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/encoding/test_sec1.py @@ -0,0 +1,100 @@ +from binascii import hexlify, unhexlify +from unittest import TestCase + +from fastecdsa.curve import P256, secp192k1, secp256k1 +from fastecdsa.encoding.sec1 import InvalidSEC1PublicKey, SEC1Encoder +from fastecdsa.point import Point + + +class TestSEC1Encoder(TestCase): + def test_encode_public_key(self): + # 1/ PrivateKey generated using openssl "openssl ecparam -name secp256k1 -genkey -out ec-priv.pem" + # 2/ Printed using "openssl ec -in ec-priv.pem -text -noout" and converted to numeric using "asn1._bytes_to_int" + priv_key = 7002880736699640265110069622773736733141182416793484574964618597954446769264 + pubkey_compressed = hexlify(SEC1Encoder.encode_public_key(secp256k1.G * priv_key)) + pubkey_uncompressed = hexlify(SEC1Encoder.encode_public_key(secp256k1.G * priv_key, compressed=False)) + + # 3/ PublicKey extracted using "openssl ec -in ec-priv.pem -pubout -out ec-pub.pem" + # 4/ Encoding verified using openssl "openssl ec -in ec-pub.pem -pubin -text -noout -conv_form compressed" + self.assertEqual(pubkey_compressed, b"02e5e2c01985aafb6e2c3ad49f3db5ccc54b2e63343af405b521303d0f35835062") + self.assertEqual( + pubkey_uncompressed, + b"04e5e2c01985aafb6e2c3ad49f3db5ccc54b2e63343af405b521303d0f3583506" + b"23dad76df888abde5ed0cc5af1b83968edffcae5d70bedb24fdc18bb5f79499d0", + ) + + # Same with P256 Curve + priv_P256 = 807015861248675637760562792774171551137308512372870683367415858378856470633 + pubkey_compressed = hexlify(SEC1Encoder.encode_public_key(P256.G * priv_P256)) + pubkey_uncompressed = hexlify(SEC1Encoder.encode_public_key(P256.G * priv_P256, compressed=False)) + self.assertEqual(pubkey_compressed, b"0212c9ddf64b0d1f1d91d9bd729abfb880079fa889d66604cc0b78c9cbc271824c") + self.assertEqual( + pubkey_uncompressed, + b"0412c9ddf64b0d1f1d91d9bd729abfb880079fa889d66604cc0b78c9cbc271824" + 
b"c9a7d581bcf2aba680b53cedbade03be62fe95869da04a168a458f369ac6a823e", + ) + + # And secp192k1 Curve + priv_secp192k1 = 5345863567856687638748079156318679969014620278806295592453 + pubkey_compressed = hexlify(SEC1Encoder.encode_public_key(secp192k1.G * priv_secp192k1)) + pubkey_uncompressed = hexlify(SEC1Encoder.encode_public_key(secp192k1.G * priv_secp192k1, compressed=False)) + self.assertEqual(pubkey_compressed, b"03a3bec5fba6d13e51fb55bd88dd097cb9b04f827bc151d22d") + self.assertEqual( + pubkey_uncompressed, + b"04a3bec5fba6d13e51fb55bd88dd097cb9b04f827bc151d22df07a73819149e8d903aa983e52ab1cff38f0d381f940d361", + ) + + def test_decode_public_key(self): + expected_public = Point( + x=0xE5E2C01985AAFB6E2C3AD49F3DB5CCC54B2E63343AF405B521303D0F35835062, + y=0x3DAD76DF888ABDE5ED0CC5AF1B83968EDFFCAE5D70BEDB24FDC18BB5F79499D0, + curve=secp256k1, + ) + public_from_compressed = SEC1Encoder.decode_public_key( + unhexlify(b"02e5e2c01985aafb6e2c3ad49f3db5ccc54b2e63343af405b521303d0f35835062"), secp256k1 + ) + public_from_uncompressed = SEC1Encoder.decode_public_key( + unhexlify( + b"04e5e2c01985aafb6e2c3ad49f3db5ccc54b2e63343af405b521303d0f3583506" + b"23dad76df888abde5ed0cc5af1b83968edffcae5d70bedb24fdc18bb5f79499d0" + ), + secp256k1, + ) + + # Same values as in `test_encode_public_key`, verified using openssl + self.assertEqual(public_from_compressed, expected_public) + self.assertEqual(public_from_uncompressed, expected_public) + with self.assertRaises(InvalidSEC1PublicKey) as e: + SEC1Encoder.decode_public_key(b"\x02", secp256k1) # invalid compressed length + self.assertEqual(e.exception.args[0], "A compressed public key must be 33 bytes long") + with self.assertRaises(InvalidSEC1PublicKey) as e: + SEC1Encoder.decode_public_key(b"\x04", secp256k1) # invalid uncompressed length + self.assertEqual(e.exception.args[0], "An uncompressed public key must be 65 bytes long") + with self.assertRaises(InvalidSEC1PublicKey) as e: + # invalid prefix value + SEC1Encoder.decode_public_key( + unhexlify(b"05e5e2c01985aafb6e2c3ad49f3db5ccc54b2e63343af405b521303d0f35835062"), secp256k1 + ) + self.assertEqual(e.exception.args[0], "Wrong key format") + + # With P256, same values as in `test_encode_public_key`, verified using openssl + expected_P256 = Point( + x=0x12C9DDF64B0D1F1D91D9BD729ABFB880079FA889D66604CC0B78C9CBC271824C, + y=0x9A7D581BCF2ABA680B53CEDBADE03BE62FE95869DA04A168A458F369AC6A823E, + curve=P256, + ) + public_from_compressed = SEC1Encoder.decode_public_key( + unhexlify(b"0212c9ddf64b0d1f1d91d9bd729abfb880079fa889d66604cc0b78c9cbc271824c"), P256 + ) + self.assertEqual(public_from_compressed, expected_P256) + + # With secp192k1, same values as in `test_encode_public_key`, verified using openssl + expected_secp192k1 = Point( + x=0xA3BEC5FBA6D13E51FB55BD88DD097CB9B04F827BC151D22D, + y=0xF07A73819149E8D903AA983E52AB1CFF38F0D381F940D361, + curve=secp192k1, + ) + public_from_compressed = SEC1Encoder.decode_public_key( + unhexlify(b"03a3bec5fba6d13e51fb55bd88dd097cb9b04f827bc151d22d"), secp192k1 + ) + self.assertEqual(public_from_compressed, expected_secp192k1) diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_brainpool_ecdh.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_brainpool_ecdh.py new file mode 100644 index 00000000..c89baee2 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_brainpool_ecdh.py @@ -0,0 +1,120 @@ +from unittest import TestCase + +from ..curve import brainpoolP256r1, brainpoolP384r1, brainpoolP512r1 + + +class 
TestBrainpoolECDH(TestCase): + def test_256bit(self): + # https://tools.ietf.org/html/rfc7027#appendix-A.1 + dA = 0x81DB1EE100150FF2EA338D708271BE38300CB54241D79950F77B063039804F1D + qA = dA * brainpoolP256r1.G + + self.assertEqual(qA.x, 0x44106E913F92BC02A1705D9953A8414DB95E1AAA49E81D9E85F929A8E3100BE5) + self.assertEqual(qA.y, 0x8AB4846F11CACCB73CE49CBDD120F5A900A69FD32C272223F789EF10EB089BDC) + + dB = 0x55E40BC41E37E3E2AD25C3C6654511FFA8474A91A0032087593852D3E7D76BD3 + qB = dB * brainpoolP256r1.G + + self.assertEqual(qB.x, 0x8D2D688C6CF93E1160AD04CC4429117DC2C41825E1E9FCA0ADDD34E6F1B39F7B) + self.assertEqual(qB.y, 0x990C57520812BE512641E47034832106BC7D3E8DD0E4C7F1136D7006547CEC6A) + + self.assertEqual((dA * qB).x, (dB * qA).x) + self.assertEqual((dA * qB).y, (dB * qA).y) + + Z = dA * qB + self.assertEqual(Z.x, 0x89AFC39D41D3B327814B80940B042590F96556EC91E6AE7939BCE31F3A18BF2B) + self.assertEqual(Z.y, 0x49C27868F4ECA2179BFD7D59B1E3BF34C1DBDE61AE12931648F43E59632504DE) + + def test_384bit(self): + # https://tools.ietf.org/html/rfc7027#appendix-A.2 + dA = int('1E20F5E048A5886F1F157C74E91BDE2B98C8B52D58E5003D57053FC4B0BD6' + '5D6F15EB5D1EE1610DF870795143627D042', 16) + qA = dA * brainpoolP384r1.G + + self.assertEqual( + qA.x, + int('68B665DD91C195800650CDD363C625F4E742E8134667B767B1B47679358' + '8F885AB698C852D4A6E77A252D6380FCAF068', 16) + ) + self.assertEqual( + qA.y, + int('55BC91A39C9EC01DEE36017B7D673A931236D2F1F5C83942D049E3FA206' + '07493E0D038FF2FD30C2AB67D15C85F7FAA59', 16) + ) + + dB = int('032640BC6003C59260F7250C3DB58CE647F98E1260ACCE4ACDA3DD869F74E' + '01F8BA5E0324309DB6A9831497ABAC96670', 16) + qB = dB * brainpoolP384r1.G + + self.assertEqual( + qB.x, + int('4D44326F269A597A5B58BBA565DA5556ED7FD9A8A9EB76C25F46DB69D19' + 'DC8CE6AD18E404B15738B2086DF37E71D1EB4', 16) + ) + self.assertEqual( + qB.y, + int('62D692136DE56CBE93BF5FA3188EF58BC8A3A0EC6C1E151A21038A42E91' + '85329B5B275903D192F8D4E1F32FE9CC78C48', 16) + ) + + self.assertEqual((dA * qB).x, (dB * qA).x) + self.assertEqual((dA * qB).y, (dB * qA).y) + + Z = dA * qB + self.assertEqual( + Z.x, + int('0BD9D3A7EA0B3D519D09D8E48D0785FB744A6B355E6304BC51C229FBBCE2' + '39BBADF6403715C35D4FB2A5444F575D4F42', 16) + ) + self.assertEqual( + Z.y, + int('0DF213417EBE4D8E40A5F76F66C56470C489A3478D146DECF6DF0D94BAE9' + 'E598157290F8756066975F1DB34B2324B7BD', 16) + ) + + def test_512bit(self): + # https://tools.ietf.org/html/rfc7027#appendix-A.3 + dA = int('16302FF0DBBB5A8D733DAB7141C1B45ACBC8715939677F6A56850A38BD87B' + 'D59B09E80279609FF333EB9D4C061231FB26F92EEB04982A5F1D1764CAD57665422', 16) + qA = dA * brainpoolP512r1.G + + self.assertEqual( + qA.x, + int('0A420517E406AAC0ACDCE90FCD71487718D3B953EFD7FBEC5F7F27E28C6' + '149999397E91E029E06457DB2D3E640668B392C2A7E737A7F0BF04436D11640FD09FD', 16) + ) + self.assertEqual( + qA.y, + int('72E6882E8DB28AAD36237CD25D580DB23783961C8DC52DFA2EC138AD472' + 'A0FCEF3887CF62B623B2A87DE5C588301EA3E5FC269B373B60724F5E82A6AD147FDE7', 16) + ) + + dB = int('230E18E1BCC88A362FA54E4EA3902009292F7F8033624FD471B5D8ACE49D1' + '2CFABBC19963DAB8E2F1EBA00BFFB29E4D72D13F2224562F405CB80503666B25429', 16) + qB = dB * brainpoolP512r1.G + + self.assertEqual( + qB.x, + int('9D45F66DE5D67E2E6DB6E93A59CE0BB48106097FF78A081DE781CDB31FC' + 'E8CCBAAEA8DD4320C4119F1E9CD437A2EAB3731FA9668AB268D871DEDA55A5473199F', 16) + ) + self.assertEqual( + qB.y, + int('2FDC313095BCDD5FB3A91636F07A959C8E86B5636A1E930E8396049CB48' + '1961D365CC11453A06C719835475B12CB52FC3C383BCE35E27EF194512B71876285FA', 16) + ) + + self.assertEqual((dA * 
qB).x, (dB * qA).x) + self.assertEqual((dA * qB).y, (dB * qA).y) + + Z = dA * qB + self.assertEqual( + Z.x, + int('A7927098655F1F9976FA50A9D566865DC530331846381C87256BAF322624' + '4B76D36403C024D7BBF0AA0803EAFF405D3D24F11A9B5C0BEF679FE1454B21C4CD1F', 16) + ) + self.assertEqual( + Z.y, + int('7DB71C3DEF63212841C463E881BDCF055523BD368240E6C3143BD8DEF8B3' + 'B3223B95E0F53082FF5E412F4222537A43DF1C6D25729DDB51620A832BE6A26680A2', 16) + ) \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_key_export_import.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_key_export_import.py new file mode 100644 index 00000000..efdf23cb --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_key_export_import.py @@ -0,0 +1,34 @@ +from os import remove +from unittest import TestCase + +from . import CURVES +from ..encoding.pem import PEMEncoder +from ..encoding.sec1 import SEC1Encoder +from ..keys import export_key, gen_keypair, import_key + +TEST_FILE_PATH = 'fastecdsa_test_key.pem' + + +class TestExportImport(TestCase): + def test_export_import_private_key(self): + for curve in CURVES: + d, Q = gen_keypair(curve) + export_key(d, curve=curve, filepath=TEST_FILE_PATH) + d_, Q_ = import_key(filepath=TEST_FILE_PATH, curve=curve) + + self.assertEqual(d, d_) + self.assertEqual(Q, Q_) + + remove(TEST_FILE_PATH) + + def test_export_import_public_key(self): + for curve in CURVES: + for encoder in (PEMEncoder, SEC1Encoder): + d, Q = gen_keypair(curve) + export_key(Q, curve=curve, filepath=TEST_FILE_PATH, encoder=encoder) + d_, Q_ = import_key(filepath=TEST_FILE_PATH, curve=curve, public=True, decoder=encoder) + + self.assertIsNone(d_) + self.assertEqual(Q, Q_) + + remove(TEST_FILE_PATH) diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_key_recovery.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_key_recovery.py new file mode 100644 index 00000000..59f80051 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_key_recovery.py @@ -0,0 +1,18 @@ +from hashlib import sha256 +from unittest import TestCase + +from . 
import CURVES +from ..ecdsa import sign +from ..keys import gen_keypair, get_public_keys_from_sig + + +class TestKeyRecovery(TestCase): + def test_key_recovery(self): + for curve in CURVES: + d, Q = gen_keypair(curve) + msg = 'https://crypto.stackexchange.com/questions/18105/how-does-recovering-the-' \ + 'public-key-from-an-ecdsa-signature-work' + sig = sign(msg, d, curve=curve) + + Qs = get_public_keys_from_sig(sig, msg, curve=curve, hashfunc=sha256) + self.assertTrue(Q in Qs) \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_keygen.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_keygen.py new file mode 100644 index 00000000..34b3aca3 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_keygen.py @@ -0,0 +1,40 @@ +from unittest import TestCase + +from ..keys import gen_private_key + + +class TestKeygen(TestCase): + def test_gen_private_key(self): + class FakeCurve(): + def __init__(self, q): + self.q = q + + class FakeRandom(): + def __init__(self, values=b"\x00"): + self.values = values + self.pos = 0 + + def __call__(self, nb): + result = self.values[self.pos:self.pos + nb] + self.pos += nb + return result + + self.assertEqual(gen_private_key(FakeCurve(2), randfunc=FakeRandom(b"\x00")), 0) + + # 1 byte drawn / 6 excess bits shaved off, and the first try is lower than the order + self.assertEqual(gen_private_key(FakeCurve(2), randfunc=FakeRandom(b"\x40")), 1) + + # 1 byte drawn / 6 excess bits shaved off, and the first try is higher than the order + self.assertEqual(gen_private_key(FakeCurve(2), randfunc=FakeRandom(b"\xc0\x40")), 1) + self.assertEqual(gen_private_key(FakeCurve(2), randfunc=FakeRandom(b"\xc0\x00")), 0) + + # 2 bytes drawn / 3 excess bits shaved off, and the first try is lower than the order. + self.assertEqual(gen_private_key(FakeCurve(8191), randfunc=FakeRandom(b"\xff\xf0")), 8190) + + # 2 bytes drawn / 3 excess bits shaved off + # first try : _bytes_to_int("\xff\xf8") >> 3 == 8191 (too high for order 8191) + # second try : _bytes_to_int("\xff\xf0") >> 3 == 8190 (ok for order 8191) + self.assertEqual(gen_private_key(FakeCurve(8191), randfunc=FakeRandom(b"\xff\xf8\xff\xf0")), 8190) + + # Same but with a different second try value + self.assertEqual(gen_private_key(FakeCurve(8191), randfunc=FakeRandom(b"\xff\xf8\xff\xef")), 8189) \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_nonce_generation.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_nonce_generation.py new file mode 100644 index 00000000..1cd04292 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_nonce_generation.py @@ -0,0 +1,31 @@ +from hashlib import sha1, sha224, sha256, sha384, sha512 +from unittest import TestCase + +from ..util import RFC6979 + + +class TestNonceGeneration(TestCase): + def test_rfc_6979(self): + msg = 'sample' + x = 0x09A4D6792295A7F730FC3F2B49CBC0F62E862272F + q = 0x4000000000000000000020108A2E0CC0D99F8A5EF + + expected = 0x09744429FA741D12DE2BE8316E35E84DB9E5DF1CD + nonce = RFC6979(msg, x, q, sha1).gen_nonce() + self.assertTrue(nonce == expected) + + expected = 0x323E7B28BFD64E6082F5B12110AA87BC0D6A6E159 + nonce = RFC6979(msg, x, q, sha224).gen_nonce() + self.assertTrue(nonce == expected) + + expected = 0x23AF4074C90A02B3FE61D286D5C87F425E6BDD81B + nonce = RFC6979(msg, x, q, sha256).gen_nonce() + self.assertTrue(nonce == expected) + + expected = 0x2132ABE0ED518487D3E4FA7FD24F8BED1F29CCFCE + nonce = RFC6979(msg, x, q, sha384).gen_nonce() + self.assertTrue(nonce == expected) + + expected = 
0x00BBCC2F39939388FDFE841892537EC7B1FF33AA3 + nonce = RFC6979(msg, x, q, sha512).gen_nonce() + self.assertTrue(nonce == expected) \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_p256_ecdsa.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_p256_ecdsa.py new file mode 100644 index 00000000..44a051e9 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_p256_ecdsa.py @@ -0,0 +1,99 @@ +from hashlib import sha1, sha224, sha256, sha384, sha512 +from unittest import TestCase + +from ..curve import P256 +from ..ecdsa import sign, verify +from ..point import Point + + +class TestP256ECDSA(TestCase): + """ case taken from http://tools.ietf.org/html/rfc6979#appendix-A.2.5 """ + + def test_ecdsa_P256_SHA1_sign(self): + d = 0xC9AFA9D845BA75166B5C215767B1D6934E50C3DB36E89B127B8A622B120F6721 + expected = ( + 0x61340C88C3AAEBEB4F6D667F672CA9759A6CCAA9FA8811313039EE4A35471D32, + 0x6D7F147DAC089441BB2E2FE8F7A3FA264B9C475098FDCF6E00D7C996E1B8B7EB, + ) + sig = sign('sample', d, curve=P256, hashfunc=sha1) + self.assertEqual(sig, expected) + + Q = d * P256.G + self.assertTrue(verify(sig, 'sample', Q, curve=P256, hashfunc=sha1)) + + """ case taken from http://tools.ietf.org/html/rfc6979#appendix-A.2.5 """ + + def test_ecdsa_P256_SHA224_sign(self): + d = 0xC9AFA9D845BA75166B5C215767B1D6934E50C3DB36E89B127B8A622B120F6721 + expected = ( + 0x53B2FFF5D1752B2C689DF257C04C40A587FABABB3F6FC2702F1343AF7CA9AA3F, + 0xB9AFB64FDC03DC1A131C7D2386D11E349F070AA432A4ACC918BEA988BF75C74C + ) + sig = sign('sample', d, curve=P256, hashfunc=sha224) + self.assertEqual(sig, expected) + + Q = d * P256.G + self.assertTrue(verify(sig, 'sample', Q, curve=P256, hashfunc=sha224)) + + """ case taken from http://tools.ietf.org/html/rfc6979#appendix-A.2.5 """ + + def test_ecdsa_P256_SHA2_sign(self): + d = 0xC9AFA9D845BA75166B5C215767B1D6934E50C3DB36E89B127B8A622B120F6721 + expected = ( + 0xEFD48B2AACB6A8FD1140DD9CD45E81D69D2C877B56AAF991C34D0EA84EAF3716, + 0xF7CB1C942D657C41D436C7A1B6E29F65F3E900DBB9AFF4064DC4AB2F843ACDA8 + ) + sig = sign('sample', d, curve=P256, hashfunc=sha256) + self.assertEqual(sig, expected) + + Q = d * P256.G + self.assertTrue(verify(sig, 'sample', Q, curve=P256, hashfunc=sha256)) + + """ case taken from http://tools.ietf.org/html/rfc6979#appendix-A.2.5 """ + + def test_ecdsa_P256_SHA384_sign(self): + d = 0xC9AFA9D845BA75166B5C215767B1D6934E50C3DB36E89B127B8A622B120F6721 + expected = ( + 0x0EAFEA039B20E9B42309FB1D89E213057CBF973DC0CFC8F129EDDDC800EF7719, + 0x4861F0491E6998B9455193E34E7B0D284DDD7149A74B95B9261F13ABDE940954 + ) + sig = sign('sample', d, curve=P256, hashfunc=sha384) + self.assertEqual(sig, expected) + + Q = d * P256.G + self.assertTrue(verify(sig, 'sample', Q, curve=P256, hashfunc=sha384)) + + """ case taken from http://tools.ietf.org/html/rfc6979#appendix-A.2.5 """ + + def test_ecdsa_P256_SHA512_sign(self): + d = 0xC9AFA9D845BA75166B5C215767B1D6934E50C3DB36E89B127B8A622B120F6721 + expected = ( + 0x8496A60B5E9B47C825488827E0495B0E3FA109EC4568FD3F8D1097678EB97F00, + 0x2362AB1ADBE2B8ADF9CB9EDAB740EA6049C028114F2460F96554F61FAE3302FE + ) + sig = sign('sample', d, curve=P256, hashfunc=sha512) + self.assertEqual(sig, expected) + + Q = d * P256.G + self.assertTrue(verify(sig, 'sample', Q, curve=P256, hashfunc=sha512)) + + """ case taken from https://www.nsa.gov/ia/_files/ecdsa.pdf """ + + def test_ecdsa_P256_verify(self): + Q = Point( + 0x8101ece47464a6ead70cf69a6e2bd3d88691a3262d22cba4f7635eaff26680a8, + 
0xd8a12ba61d599235f67d9cb4d58f1783d3ca43e78f0a5abaa624079936c0c3a9, + curve=P256 + ) + msg = 'This is only a test message. It is 48 bytes long' + sig = ( + 0x7214bc9647160bbd39ff2f80533f5dc6ddd70ddf86bb815661e805d5d4e6f27c, + 0x7d1ff961980f961bdaa3233b6209f4013317d3e3f9e1493592dbeaa1af2bc367 + ) + self.assertTrue(verify(sig, msg, Q, curve=P256, hashfunc=sha256)) + + sig = ( + 0x7214bc9647160bbd39ff2f80533f5dc6ddd70ddf86bb815661e805d5d4e6fbad, + 0x7d1ff961980f961bdaa3233b6209f4013317d3e3f9e1493592dbeaa1af2bc367 + ) + self.assertFalse(verify(sig, msg, Q, curve=P256, hashfunc=sha256)) \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_point.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_point.py new file mode 100644 index 00000000..6021532a --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_point.py @@ -0,0 +1,51 @@ +from unittest import TestCase + +from ..curve import W25519 +from ..point import Point + + +class TestTypeValidation(TestCase): + def test_type_validation_add(self): + with self.assertRaises(TypeError): + _ = Point.IDENTITY_ELEMENT + 2 + + with self.assertRaises(TypeError): + _ = W25519.G + 2 + + with self.assertRaises(TypeError): + _ = 2 + Point.IDENTITY_ELEMENT + + with self.assertRaises(TypeError): + _ = 2 + W25519.G + + def test_type_validation_sub(self): + with self.assertRaises(TypeError): + _ = Point.IDENTITY_ELEMENT - 2 + + with self.assertRaises(TypeError): + _ = W25519.G - 2 + + with self.assertRaises(TypeError): + _ = 2 - Point.IDENTITY_ELEMENT + + with self.assertRaises(TypeError): + _ = 2 - W25519.G + + def test_type_validation_mul(self): + with self.assertRaises(TypeError): + _ = Point.IDENTITY_ELEMENT * 1.5 + + with self.assertRaises(TypeError): + _ = W25519.G * 1.5 + + with self.assertRaises(TypeError): + _ = 1.5 * Point.IDENTITY_ELEMENT + + with self.assertRaises(TypeError): + _ = 1.5 * W25519.G + + def test_point_at_infinity(self): + self.assertEqual(W25519.G * 0, Point.IDENTITY_ELEMENT) + self.assertEqual(W25519.G * W25519.q, Point.IDENTITY_ELEMENT) + self.assertEqual(W25519.G + Point.IDENTITY_ELEMENT, W25519.G) + self.assertEqual(W25519.G - Point.IDENTITY_ELEMENT, W25519.G) diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_prehashed.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_prehashed.py new file mode 100644 index 00000000..1857dd1b --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_prehashed.py @@ -0,0 +1,16 @@ + +from hashlib import sha256 +from unittest import TestCase + +from ..curve import P256 +from ..ecdsa import sign, verify +from ..keys import gen_keypair + + +class TestPrehashed(TestCase): + def test_prehashed_bytes(self): + d, Q = gen_keypair(P256) + prehashed_message = sha256(b"").digest() + + r, s = sign(prehashed_message, d, prehashed=True) + self.assertTrue(verify((r, s), prehashed_message, Q, prehashed=True)) diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_prime_field_curve_math.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_prime_field_curve_math.py new file mode 100644 index 00000000..bbbc200f --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_prime_field_curve_math.py @@ -0,0 +1,284 @@ +from random import choice, randint +from unittest import TestCase + +from . 
import CURVES +from ..curve import P192, P224, P256, P384, P521, secp256k1, W25519, W448 +from ..point import Point + + +class TestPrimeFieldCurve(TestCase): + """ NIST P curve tests taken from https://www.nsa.gov/ia/_files/nist-routines.pdf """ + + def test_P192_arith(self): + S = Point( + 0xd458e7d127ae671b0c330266d246769353a012073e97acf8, + 0x325930500d851f336bddc050cf7fb11b5673a1645086df3b, + curve=P192 + ) + d = 0xa78a236d60baec0c5dd41b33a542463a8255391af64c74ee + expected = Point( + 0x1faee4205a4f669d2d0a8f25e3bcec9a62a6952965bf6d31, + 0x5ff2cdfa508a2581892367087c696f179e7a4d7e8260fb06, + curve=P192 + ) + R = d * S + self.assertEqual(R, expected) + + def test_P224_arith(self): + S = Point( + 0x6eca814ba59a930843dc814edd6c97da95518df3c6fdf16e9a10bb5b, + 0xef4b497f0963bc8b6aec0ca0f259b89cd80994147e05dc6b64d7bf22, + curve=P224 + ) + d = 0xa78ccc30eaca0fcc8e36b2dd6fbb03df06d37f52711e6363aaf1d73b + expected = Point( + 0x96a7625e92a8d72bff1113abdb95777e736a14c6fdaacc392702bca4, + 0x0f8e5702942a3c5e13cd2fd5801915258b43dfadc70d15dbada3ed10, + curve=P224 + ) + R = d * S + self.assertEqual(R, expected) + + def test_P256_arith(self): + S = Point( + 0xde2444bebc8d36e682edd27e0f271508617519b3221a8fa0b77cab3989da97c9, + 0xc093ae7ff36e5380fc01a5aad1e66659702de80f53cec576b6350b243042a256, + curve=P256 + ) + d = 0xc51e4753afdec1e6b6c6a5b992f43f8dd0c7a8933072708b6522468b2ffb06fd + expected = Point( + 0x51d08d5f2d4278882946d88d83c97d11e62becc3cfc18bedacc89ba34eeca03f, + 0x75ee68eb8bf626aa5b673ab51f6e744e06f8fcf8a6c0cf3035beca956a7b41d5, + curve=P256 + ) + R = d * S + self.assertEqual(R, expected) + + def test_P384_arith(self): + S = Point( + int('fba203b81bbd23f2b3be971cc23997e1ae4d89e69cb6f92385dda82768ada415ebab4167459da98e6' + '2b1332d1e73cb0e', 16), + int('5ffedbaefdeba603e7923e06cdb5d0c65b22301429293376d5c6944e3fa6259f162b4788de6987fd5' + '9aed5e4b5285e45', 16), + curve=P384 + ) + d = int('a4ebcae5a665983493ab3e626085a24c104311a761b5a8fdac052ed1f111a5c44f76f45659d2d111a' + '61b5fdd97583480', 16) + expected = Point( + int('e4f77e7ffeb7f0958910e3a680d677a477191df166160ff7ef6bb5261f791aa7b45e3e653d151b95d' + 'ad3d93ca0290ef2', 16), + int('ac7dee41d8c5f4a7d5836960a773cfc1376289d3373f8cf7417b0c6207ac32e913856612fc9ff2e35' + '7eb2ee05cf9667f', 16), + curve=P384 + ) + R = d * S + self.assertEqual(R, expected) + + def test_P521_arith(self): + S = Point( + int('000001d5c693f66c08ed03ad0f031f937443458f601fd098d3d0227b4bf62873af50740b0bb84aa15' + '7fc847bcf8dc16a8b2b8bfd8e2d0a7d39af04b089930ef6dad5c1b4', 16), + int('00000144b7770963c63a39248865ff36b074151eac33549b224af5c8664c54012b818ed037b2b7c1a' + '63ac89ebaa11e07db89fcee5b556e49764ee3fa66ea7ae61ac01823', 16), + curve=P521 + ) + d = int('000001eb7f81785c9629f136a7e8f8c674957109735554111a2a866fa5a166699419bfa9936c78b62' + '653964df0d6da940a695c7294d41b2d6600de6dfcf0edcfc89fdcb1', 16) + expected = Point( + int('00000091b15d09d0ca0353f8f96b93cdb13497b0a4bb582ae9ebefa35eee61bf7b7d041b8ec34c6c0' + '0c0c0671c4ae063318fb75be87af4fe859608c95f0ab4774f8c95bb', 16), + int('00000130f8f8b5e1abb4dd94f6baaf654a2d5810411e77b7423965e0c7fd79ec1ae563c207bd255ee' + '9828eb7a03fed565240d2cc80ddd2cecbb2eb50f0951f75ad87977f', 16), + curve=P521 + ) + R = d * S + self.assertEqual(R, expected) + + def test_secp256k1_arith(self): + # http://crypto.stackexchange.com/a/787/17884 + m = 0xAA5E28D6A97A2479A65527F7290311A3624D4CC0FA1578598EE3C2613BF99522 + expected = Point( + 0x34F9460F0E4F08393D192B3C5133A6BA099AA0AD9FD54EBCCFACDFA239FF49C6, + 
0x0B71EA9BD730FD8923F6D25A7A91E7DD7728A960686CB5A901BB419E0F2CA232, + curve=secp256k1 + ) + R = m * secp256k1.G + self.assertEqual(R, expected) + + m = 0x7E2B897B8CEBC6361663AD410835639826D590F393D90A9538881735256DFAE3 + expected = Point( + 0xD74BF844B0862475103D96A611CF2D898447E288D34B360BC885CB8CE7C00575, + 0x131C670D414C4546B88AC3FF664611B1C38CEB1C21D76369D7A7A0969D61D97D, + curve=secp256k1 + ) + R = m * secp256k1.G + self.assertEqual(R, expected) + + m = 0x6461E6DF0FE7DFD05329F41BF771B86578143D4DD1F7866FB4CA7E97C5FA945D + expected = Point( + 0xE8AECC370AEDD953483719A116711963CE201AC3EB21D3F3257BB48668C6A72F, + 0xC25CAF2F0EBA1DDB2F0F3F47866299EF907867B7D27E95B3873BF98397B24EE1, + curve=secp256k1 + ) + R = m * secp256k1.G + self.assertEqual(R, expected) + + m = 0x376A3A2CDCD12581EFFF13EE4AD44C4044B8A0524C42422A7E1E181E4DEECCEC + expected = Point( + 0x14890E61FCD4B0BD92E5B36C81372CA6FED471EF3AA60A3E415EE4FE987DABA1, + 0x297B858D9F752AB42D3BCA67EE0EB6DCD1C2B7B0DBE23397E66ADC272263F982, + curve=secp256k1 + ) + R = m * secp256k1.G + self.assertEqual(R, expected) + + m = 0x1B22644A7BE026548810C378D0B2994EEFA6D2B9881803CB02CEFF865287D1B9 + expected = Point( + 0xF73C65EAD01C5126F28F442D087689BFA08E12763E0CEC1D35B01751FD735ED3, + 0xF449A8376906482A84ED01479BD18882B919C140D638307F0C0934BA12590BDE, + curve=secp256k1 + ) + R = m * secp256k1.G + self.assertEqual(R, expected) + + def test_secp256k1_add_large(self): + xs = 1 + xt = secp256k1.p + 1 + ys = 0x4218f20ae6c646b363db68605822fb14264ca8d2587fdd6fbc750d587e76a7ee + S = Point(xs, ys, curve=secp256k1) + T = Point(xt, ys, curve=secp256k1) + expected = Point( + 0xc7ffffffffffffffffffffffffffffffffffffffffffffffffffffff37fffd03, + 0x4298c557a7ddcc570e8bf054c4cad9e99f396b3ce19d50f1b91c9df4bb00d333, + curve=secp256k1 + ) + R = S+T + self.assertEqual(R, expected) + + def test_W25519_arith(self): + subgroup = [ + (0, 0), + (19624287790469256669057814461137428606839005560001276145469620721820115669041, + 25869741026945134960544184956460972567356779614910045322022475500191642319642), + (19298681539552699237261830834781317975544997444273427339909597334652188435538, + 9094040566125962849133224048217411091405536248825867518642941381412595940312), + (784994156384216107199399111990385161439916830893843497063691184659069321411, + 47389623381099381275796781267935282128369626952426857267179501978497824351671), + (19298681539552699237261830834781317975544997444273427339909597334652188435537, + 0), + (784994156384216107199399111990385161439916830893843497063691184659069321411, + 10506421237558716435988711236408671798265365380393424752549290025458740468278), + (19298681539552699237261830834781317975544997444273427339909597334652188435538, + 48802004052532134862652268456126542835229456083994414501085850622543968879637), + (19624287790469256669057814461137428606839005560001276145469620721820115669041, + 32026303591712962751241307547882981359278212717910236697706316503764922500307)] + + # subgroup[1] generates an order-8 subgroup + S = Point(subgroup[1][0], subgroup[1][1], curve=W25519) + m = randint(1, S.curve.q - 1) + + # test subgroup operation + for i in range(2 * len(subgroup)): + R = (m + i) * S + idx = (m + i) % len(subgroup) + if idx == 0: + expected = Point.IDENTITY_ELEMENT + else: + expected = Point(subgroup[idx][0], subgroup[idx][1], curve=S.curve) + self.assertEqual(R, expected) + + # test 2Q = inf when ord(Q) = 2; subgroup[4] is such a point + S = Point(subgroup[4][0], subgroup[4][1], curve=S.curve) + R = S + S + self.assertEqual(R, Point.IDENTITY_ELEMENT) 
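    # Illustrative sketch (not part of the upstream test suite): the walk above works because, for a
    # point S of order n, (m + i) * S depends only on (m + i) mod n. The order-8 claim for subgroup[1]
    # can be checked directly with the fastecdsa API already imported in this module:
    #
    #     S = Point(subgroup[1][0], subgroup[1][1], curve=W25519)
    #     assert 8 * S == Point.IDENTITY_ELEMENT   # ord(S) divides 8
    #     assert 4 * S != Point.IDENTITY_ELEMENT   # and is not 1, 2, or 4, so ord(S) == 8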
+ + def test_W448_arith(self): + subgroup = [ + (0, 0), + (484559149530404593699549205258669689569094240458212040187660132787074885444487181790930922465784363953392589641229091574035665345629067, + 197888467295464439538354009753858038256835152591059802148199779196087404232002515713604263127793030747855424464185691766453844835192428), + (484559149530404593699549205258669689569094240458212040187660132787074885444487181790930922465784363953392589641229091574035665345629068, + 0), + (484559149530404593699549205258669689569094240458212040187660132787074885444487181790930922465784363953392589641229091574035665345629067, + 528950257000142451010969798134146496096806208096258258133290419984524923934728256972792120570883515182233459997657945594599653183173011)] + + # subgroup[1] generates an order-4 subgroup + S = Point(subgroup[1][0], subgroup[1][1], curve=W448) + m = randint(1, S.curve.q - 1) + + # test subgroup operation + for i in range(2 * len(subgroup)): + R = (m + i) * S + idx = (m + i) % len(subgroup) + if idx == 0: + expected = Point.IDENTITY_ELEMENT + else: + expected = Point(subgroup[idx][0], subgroup[idx][1], curve=S.curve) + self.assertEqual(R, expected) + + # test 2Q = inf when ord(Q) = 2; subgroup[2] is such a point + S = Point(subgroup[2][0], subgroup[2][1], curve=S.curve) + R = S + S + self.assertEqual(R, Point.IDENTITY_ELEMENT) + + def test_arbitrary_arithmetic(self): + for _ in range(100): + curve = choice(CURVES) + a, b = randint(0, curve.q), randint(0, curve.q) + c = (a + b) % curve.q + P, Q = a * curve.G, b * curve.G + R = c * curve.G + pq_sum, qp_sum = P + Q, Q + P + self.assertEqual(pq_sum, qp_sum) + self.assertEqual(qp_sum, R) + + def test_point_at_infinity_arithmetic(self): + for curve in CURVES: + a = randint(0, curve.q) + b = curve.q - a + P, Q = a * curve.G, b * curve.G + + self.assertEqual(P + Q, Point.IDENTITY_ELEMENT) + self.assertEqual((P + Q) + P, P) + + def test_mul_by_negative(self): + for curve in CURVES: + P = -1 * curve.G + self.assertEqual(P.x, (-curve.G).x) + self.assertEqual(P.y, (-curve.G).y) + + a = randint(0, curve.q) + P = -a * curve.G + Q = a * -curve.G + + self.assertEqual(P.x, Q.x) + self.assertEqual(P.y, Q.y) + + def test_mul_by_large(self): + # W25519 test + # this is an order h * q point, where ord(G) = q and cofactor h satisfies card(E) = h * q + P_25519 = (35096664872667797862139202932949049580901987963171775123990915594423742128033, + 22480573877477866270168076032006671893585325258603943218058327520008883368696) + # scalar larger than q but smaller than h * q + k_25519 = 29372000253601831461196252357151931556312986177449743872632027037220559545805 + # KAT: Q = k * P + Q_25519 = (49706689516442043243945288648738077613576846608357551203998149787039218825283, + 39031159953669648926560068238390400180132971507115734045330364888380270413913) + P_25519 = Point(P_25519[0], P_25519[1], curve=W25519) + R = k_25519 * P_25519 + expected = Point(Q_25519[0], Q_25519[1], curve=W25519) + self.assertEqual(R, expected) + + # W448 test + # this is an order h * q point, where ord(G) = q and cofactor h satisfies card(E) = h * q + P_448 = (231480979532797508943026590383724571663349496894338686929676625369384864922735088561874582237479033305968563866115522624182905190510908, + 386642878578520576902801772187402730328995571185781418728101620741813344331894790606494081334128109534545128357659367878296954820089456) + # scalar larger than q but smaller than h * q + k_448 = 
402277635739946816237642095194112559838556500677282525912537756673080201343619608058123416303707654128973619279630717054942423338738673 + # KAT: Q = k * P + Q_448 = (594031426065062897452711501095164269186787326947787097339623446049485959406429079799671040253813345976305183137288555654745835823926292, + 397805400082591898298129134663530334000872412871159149907636409950148029590985263340056678772524289857573459442468777694902850559368658) + P_448 = Point(P_448[0], P_448[1], curve=W448) + R = k_448 * P_448 + expected = Point(Q_448[0], Q_448[1], curve=W448) + self.assertEqual(R, expected) diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_rfc6979_ecdsa.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_rfc6979_ecdsa.py new file mode 100644 index 00000000..bee3a278 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_rfc6979_ecdsa.py @@ -0,0 +1,130 @@ +from hashlib import sha1, sha224, sha256, sha384, sha512 +from re import findall, DOTALL +from urllib.request import urlopen +from unittest import TestCase + +from ..curve import P192, P224, P256, P384, P521 +from ..ecdsa import sign +from ..util import RFC6979 + + +class TestRFC6979ECDSA(TestCase): + @classmethod + def setUpClass(cls): + cls.rfc6979_text = urlopen('https://tools.ietf.org/rfc/rfc6979.txt').read().decode() + cls.hash_lookup = { + '1': sha1, + '224': sha224, + '256': sha256, + '384': sha384, + '512': sha512 + } + + def test_p192_rfc6979_ecdsa(self): + curve_tests = findall(r'curve: NIST P-192(.*)curve: NIST P-224', self.rfc6979_text, flags=DOTALL)[0] + + q = int(findall(r'q = ([0-9A-F]*)', curve_tests)[0], 16) + x = int(findall(r'x = ([0-9A-F]*)', curve_tests)[0], 16) + + test_regex = r'With SHA-(\d+), message = "([a-zA-Z]*)":\n' \ + r'\s*k = ([0-9A-F]*)\n' \ + r'\s*r = ([0-9A-F]*)\n' \ + r'\s*s = ([0-9A-F]*)\n' + + for test in findall(test_regex, curve_tests): + h = self.hash_lookup[test[0]] + msg = test[1] + k = int(test[2], 16) + r = int(test[3], 16) + s = int(test[4], 16) + + self.assertEqual(k, RFC6979(msg, x, q, h).gen_nonce()) + self.assertEqual((r, s), sign(msg, x, curve=P192, hashfunc=h)) + + def test_p224_rfc6979_ecdsa(self): + curve_tests = findall(r'curve: NIST P-224(.*)curve: NIST P-256', self.rfc6979_text, flags=DOTALL)[0] + + q = int(findall(r'q = ([0-9A-F]*)', curve_tests)[0], 16) + x = int(findall(r'x = ([0-9A-F]*)', curve_tests)[0], 16) + + test_regex = r'With SHA-(\d+), message = "([a-zA-Z]*)":\n' \ + r'\s*k = ([0-9A-F]*)\n' \ + r'\s*r = ([0-9A-F]*)\n' \ + r'\s*s = ([0-9A-F]*)\n' + + for test in findall(test_regex, curve_tests): + h = self.hash_lookup[test[0]] + msg = test[1] + k = int(test[2], 16) + r = int(test[3], 16) + s = int(test[4], 16) + + self.assertEqual(k, RFC6979(msg, x, q, h).gen_nonce()) + self.assertEqual((r, s), sign(msg, x, curve=P224, hashfunc=h)) + + def test_p256_rfc6979_ecdsa(self): + curve_tests = findall(r'curve: NIST P-256(.*)curve: NIST P-384', self.rfc6979_text, flags=DOTALL)[0] + + q = int(findall(r'q = ([0-9A-F]*)', curve_tests)[0], 16) + x = int(findall(r'x = ([0-9A-F]*)', curve_tests)[0], 16) + + test_regex = r'With SHA-(\d+), message = "([a-zA-Z]*)":\n' \ + r'\s*k = ([0-9A-F]*)\n' \ + r'\s*r = ([0-9A-F]*)\n' \ + r'\s*s = ([0-9A-F]*)\n' + + for test in findall(test_regex, curve_tests): + h = self.hash_lookup[test[0]] + msg = test[1] + k = int(test[2], 16) + r = int(test[3], 16) + s = int(test[4], 16) + + self.assertEqual(k, RFC6979(msg, x, q, h).gen_nonce()) + self.assertEqual((r, s), sign(msg, x, curve=P256, hashfunc=h)) + + def 
test_p384_rfc6979_ecdsa(self): + curve_tests = findall(r'curve: NIST P-384(.*)curve: NIST P-521', self.rfc6979_text, flags=DOTALL)[0] + + q_parts = findall(r'q = ([0-9A-F]*)\n\s*([0-9A-F]*)', curve_tests)[0] + q = int(q_parts[0] + q_parts[1], 16) + x_parts = findall(r'x = ([0-9A-F]*)\n\s*([0-9A-F]*)', curve_tests)[0] + x = int(x_parts[0] + x_parts[1], 16) + + test_regex = r'With SHA-(\d+), message = "([a-zA-Z]*)":\n' \ + r'\s*k = ([0-9A-F]*)\n\s*([0-9A-F]*)\n' \ + r'\s*r = ([0-9A-F]*)\n\s*([0-9A-F]*)\n' \ + r'\s*s = ([0-9A-F]*)\n\s*([0-9A-F]*)\n' + + for test in findall(test_regex, curve_tests): + h = self.hash_lookup[test[0]] + msg = test[1] + k = int(test[2] + test[3], 16) + r = int(test[4] + test[5], 16) + s = int(test[6] + test[7], 16) + + self.assertEqual(k, RFC6979(msg, x, q, h).gen_nonce()) + self.assertEqual((r, s), sign(msg, x, curve=P384, hashfunc=h)) + + def test_p521_rfc6979_ecdsa(self): + curve_tests = findall(r'curve: NIST P-521(.*)curve: NIST K-163', self.rfc6979_text, flags=DOTALL)[0] + + q_parts = findall(r'q = ([0-9A-F]*)\n\s*([0-9A-F]*)\n\s*([0-9A-F]*)', curve_tests)[0] + q = int(q_parts[0] + q_parts[1] + q_parts[2], 16) + x_parts = findall(r'x = ([0-9A-F]*)\n\s*([0-9A-F]*)\n\s*([0-9A-F]*)', curve_tests)[0] + x = int(x_parts[0] + x_parts[1] + x_parts[2], 16) + + test_regex = r'With SHA-(\d+), message = "([a-zA-Z]*)":\n' \ + r'\s*k = ([0-9A-F]*)\n\s*([0-9A-F]*)\n\s*([0-9A-F]*)\n' \ + r'\s*r = ([0-9A-F]*)\n\s*([0-9A-F]*)\n\s*([0-9A-F]*)\n' \ + r'\s*s = ([0-9A-F]*)\n\s*([0-9A-F]*)\n\s*([0-9A-F]*)\n' + + for test in findall(test_regex, curve_tests): + h = self.hash_lookup[test[0]] + msg = test[1] + k = int(test[2] + test[3] + test[4], 16) + r = int(test[5] + test[6] + test[7], 16) + s = int(test[8] + test[9] + test[10], 16) + + self.assertEqual(k, RFC6979(msg, x, q, h).gen_nonce()) + self.assertEqual((r, s), sign(msg, x, curve=P521, hashfunc=h)) diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_whitespace_parsing.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_whitespace_parsing.py new file mode 100644 index 00000000..4b1cf2b4 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_whitespace_parsing.py @@ -0,0 +1,32 @@ +from unittest import TestCase + +from ..curve import secp256k1 +from ..encoding.pem import PEMEncoder +from ..point import Point + + +class TestWhitespaceParsing(TestCase): + d = 0x3007df8e8bed6c3592a10f9d0495173dcecbc99a40e8c88b47d7d590f8b608dd + x = 0x2c071354794f38ff439eff52f8b475b0def34a210aa3b88a63f2a7295d35f41f + y = 0xb0f35d4f45d95657fa37d6a5431be9441a95638fc710884f43fff7938428056e + Q = Point(x, y, curve=secp256k1) + + def test_leading_newline(self): + keypem = "\n-----BEGIN EC PRIVATE KEY-----\n" \ + "MHQCAQEEIDAH346L7Ww1kqEPnQSVFz3Oy8maQOjIi0fX1ZD4tgjdoAcGBSuBBAAK\n" \ + "oUQDQgAELAcTVHlPOP9Dnv9S+LR1sN7zSiEKo7iKY/KnKV019B+w811PRdlWV/o3\n" \ + "1qVDG+lEGpVjj8cQiE9D//eThCgFbg==\n" \ + "-----END EC PRIVATE KEY-----\n" + key, pubkey = PEMEncoder.decode_private_key(keypem) + self.assertEqual(key, self.d) + self.assertEqual(pubkey, self.Q) + + def test_leading_trailing_newlines(self): + keypem = "\n\n\n\n-----BEGIN EC PRIVATE KEY-----\n" \ + "MHQCAQEEIDAH346L7Ww1kqEPnQSVFz3Oy8maQOjIi0fX1ZD4tgjdoAcGBSuBBAAK\n" \ + "oUQDQgAELAcTVHlPOP9Dnv9S+LR1sN7zSiEKo7iKY/KnKV019B+w811PRdlWV/o3\n" \ + "1qVDG+lEGpVjj8cQiE9D//eThCgFbg==\n" \ + "-----END EC PRIVATE KEY-----\n\n\n\n" + key, pubkey = PEMEncoder.decode_private_key(keypem) + self.assertEqual(key, self.d) + self.assertEqual(pubkey, self.Q) \ No newline at end of file 
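Aside: the PEM handling exercised above composes into a short round trip. A minimal sketch (illustrative only, not part of the package; the file name my_key.pem is arbitrary), using only the fastecdsa calls already shown verbatim in these tests:

from fastecdsa.curve import P256
from fastecdsa.keys import export_key, gen_keypair, import_key

d, Q = gen_keypair(P256)                          # private scalar d, public point Q = d * G
export_key(d, curve=P256, filepath='my_key.pem')  # write the private key as PEM
d2, Q2 = import_key(filepath='my_key.pem', curve=P256)
assert (d, Q) == (d2, Q2)                         # the PEM round trip is lossless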
diff --git a/env/lib/python3.8/site-packages/fastecdsa/tests/test_whycheproof_vectors.py b/env/lib/python3.8/site-packages/fastecdsa/tests/test_whycheproof_vectors.py new file mode 100644 index 00000000..f78bf5ec --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/tests/test_whycheproof_vectors.py @@ -0,0 +1,174 @@ +from binascii import unhexlify +from json import JSONDecodeError, loads +from sys import version_info +from unittest import SkipTest, TestCase, skipIf +from urllib.error import URLError +from urllib.request import urlopen + +from fastecdsa.curve import ( + P224, P256, P384, P521, + brainpoolP224r1, brainpoolP256r1, brainpoolP320r1, brainpoolP384r1, brainpoolP512r1, + secp256k1 +) +from fastecdsa.ecdsa import verify +from fastecdsa.encoding.der import DEREncoder +from fastecdsa.encoding.sec1 import SEC1Encoder + +try: + from hashlib import sha224, sha256, sha384, sha3_224, sha3_256, sha3_384, sha3_512, sha512 +except ImportError: + pass + + +@skipIf(version_info.minor < 6, "Test requires sha3 (added in python3.6) to run") +class TestWycheproofEcdsaVerify(TestCase): + @staticmethod + def _get_tests(url): + try: + test_raw = urlopen(url).read() + test_json = loads(test_raw) + return test_json["testGroups"] + except (JSONDecodeError, URLError) as error: + # raising SkipTest marks the calling test as skipped instead of returning None + raise SkipTest("Skipping tests, could not download / parse data from {}\nError: {}".format(url, error)) + + def _test_runner(self, tests, curve, hashfunc): + for test_group in tests: + keybytes = unhexlify(test_group["key"]["uncompressed"]) + public_key = SEC1Encoder.decode_public_key(keybytes, curve) + + for test in test_group["tests"]: + try: + message = unhexlify(test["msg"]) + sigbytes = unhexlify(test["sig"]) + signature = DEREncoder.decode_signature(sigbytes) + expected = test["result"] == "valid" + + result = verify(signature, message, public_key, curve, hashfunc) + self.assertEqual(result, expected, test) + except Exception: + self.assertFalse(test["result"] == "valid", test) + + def test_brainpool224r1_sha224(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_brainpoolP224r1_sha224_test.json" + tests = self._get_tests(url) + self._test_runner(tests, brainpoolP224r1, sha224) + + def test_brainpoolP256r1_sha256(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_brainpoolP256r1_sha256_test.json" + tests = self._get_tests(url) + self._test_runner(tests, brainpoolP256r1, sha256) + + def test_brainpoolP320r1_sha384(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_brainpoolP320r1_sha384_test.json" + tests = self._get_tests(url) + self._test_runner(tests, brainpoolP320r1, sha384) + + def test_brainpoolP384r1_sha384(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_brainpoolP384r1_sha384_test.json" + tests = self._get_tests(url) + self._test_runner(tests, brainpoolP384r1, sha384) + + def test_brainpoolP512r1_sha512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_brainpoolP512r1_sha512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, brainpoolP512r1, sha512) + + def test_p224_sha224(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp224r1_sha224_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P224, sha224) + + def test_p224_sha256(self): + url = 
"https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp224r1_sha256_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P224, sha256) + + def test_p224_sha3_224(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp224r1_sha3_224_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P224, sha3_224) + + def test_p224_sha3_256(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp224r1_sha3_256_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P224, sha3_256) + + def test_p224_sha3_512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp224r1_sha3_512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P224, sha3_512) + + def test_p224_sha512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp224r1_sha512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P224, sha512) + + def test_secp256k1_sha256(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp256k1_sha256_test.json" + tests = self._get_tests(url) + self._test_runner(tests, secp256k1, sha256) + + def test_secp256k1_sha3_256(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp256k1_sha3_256_test.json" + tests = self._get_tests(url) + self._test_runner(tests, secp256k1, sha3_256) + + def test_secp256k1_sha3_512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp256k1_sha3_512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, secp256k1, sha3_512) + + def test_secp256k1_sha512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp256k1_sha512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, secp256k1, sha512) + + def test_p256_sha256(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp256r1_sha256_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P256, sha256) + + def test_p256_sha3_256(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp256r1_sha3_256_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P256, sha3_256) + + def test_p256_sha3_512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp256r1_sha3_512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P256, sha3_512) + + def test_p256_sha512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp256r1_sha512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P256, sha512) + + def test_p384_sha384(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp384r1_sha384_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P384, sha384) + + def test_p384_sha3_384(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp384r1_sha3_384_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P384, sha3_384) + + def test_p384_sha3_512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp384r1_sha3_512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P384, sha3_512) + + def 
test_p384_sha512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp384r1_sha512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P384, sha512) + + def test_p521_sha3_512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp521r1_sha3_512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P521, sha3_512) + + def test_p521_sha512(self): + url = "https://raw.githubusercontent.com/google/wycheproof/master/testvectors/ecdsa_secp521r1_sha512_test.json" + tests = self._get_tests(url) + self._test_runner(tests, P521, sha512) diff --git a/env/lib/python3.8/site-packages/fastecdsa/util.py b/env/lib/python3.8/site-packages/fastecdsa/util.py new file mode 100644 index 00000000..9f75b771 --- /dev/null +++ b/env/lib/python3.8/site-packages/fastecdsa/util.py @@ -0,0 +1,170 @@ +from binascii import hexlify +import hmac +from struct import pack +from typing import Callable + + +class RFC6979: + """Generate a nonce per RFC6979. + + In order to avoid reusing a nonce with the same key when signing two different messages (which + leaks the private key) RFC6979 provides a deterministic method for generating nonces. This is + based on using a pseudo-random function (HMAC) to derive a nonce from the message and private + key. More info here: http://tools.ietf.org/html/rfc6979. + + Attributes: + | msg (bytes): A message being signed. + | x (int): An ECDSA private key. + | q (int): The order of the generator point of the curve being used to sign the message. + | hashfunc (_hashlib.HASH): The hash function used to compress the message. + | prehashed (bool): Whether the signature is on a pre-hashed message. + """ + def __init__(self, msg: bytes, x: int, q: int, hashfunc: Callable, prehashed: bool = False): + self.x = x + self.q = q + self.msg = msg_bytes(msg) + self.qlen = len(bin(q)) - 2 # -2 for the leading '0b' + self.rlen = ((self.qlen + 7) // 8) * 8 + self.hashfunc = hashfunc + self.prehashed = prehashed + + def _bits2int(self, b: bytes) -> int: + """ http://tools.ietf.org/html/rfc6979#section-2.3.2 """ + i = int(hexlify(b), 16) + blen = len(b) * 8 + + if blen > self.qlen: + i >>= (blen - self.qlen) + + return i + + def _int2octets(self, x: int) -> bytes: + """ http://tools.ietf.org/html/rfc6979#section-2.3.3 """ + octets = b'' + + while x > 0: + octets = pack('=B', (0xff & x)) + octets + x >>= 8 + + padding = b'\x00' * ((self.rlen // 8) - len(octets)) + return padding + octets + + def _bits2octets(self, b: bytes) -> bytes: + """ http://tools.ietf.org/html/rfc6979#section-2.3.4 """ + z1 = self._bits2int(b) + z2 = z1 % self.q + return self._int2octets(z2) + + def gen_nonce(self) -> int: + """ http://tools.ietf.org/html/rfc6979#section-3.2 """ + hash_size = self.hashfunc().digest_size + if self.prehashed: + h1 = self.msg + else: + h1 = self.hashfunc(self.msg).digest() + key_and_msg = self._int2octets(self.x) + self._bits2octets(h1) + + v = b''.join([b'\x01' for _ in range(hash_size)]) + k = b''.join([b'\x00' for _ in range(hash_size)]) + + k = hmac.new(k, v + b'\x00' + key_and_msg, self.hashfunc).digest() + v = hmac.new(k, v, self.hashfunc).digest() + k = hmac.new(k, v + b'\x01' + key_and_msg, self.hashfunc).digest() + v = hmac.new(k, v, self.hashfunc).digest() + + while True: + t = b'' + + while len(t) * 8 < self.qlen: + v = hmac.new(k, v, self.hashfunc).digest() + t = t + v + + nonce = self._bits2int(t) + if nonce >= 1 and nonce < self.q: + return nonce + + k = 
hmac.new(k, v + b'\x00', self.hashfunc).digest() + v = hmac.new(k, v, self.hashfunc).digest() + + +def _tonelli_shanks(n: int, p: int) -> (int, int): + """A generic algorithm for computing modular square roots.""" + Q, S = p - 1, 0 + while Q % 2 == 0: + Q, S = Q // 2, S + 1 + + z = 2 + while pow(z, (p - 1) // 2, p) != (-1 % p): + z += 1 + + M, c, t, R = S, pow(z, Q, p), pow(n, Q, p), pow(n, (Q + 1) // 2, p) + while t != 1: + for i in range(1, M): + if pow(t, 2**i, p) == 1: + break + + b = pow(c, 2**(M - i - 1), p) + M, c, t, R = i, pow(b, 2, p), (t * b * b) % p, (R * b) % p + + return R, -R % p + + +def mod_sqrt(a: int, p: int) -> (int, int): + r"""Compute the square root of :math:`a \pmod{p}` + + In other words, find a value :math:`x` such that :math:`x^2 \equiv a \pmod{p}`. + + Args: + | a (int): The value whose root to take. + | p (int): The prime whose field to perform the square root in. + + Returns: + (int, int): the two values of :math:`x` satisfying :math:`x^2 \equiv a \pmod{p}`. + """ + if p % 4 == 3: + k = (p - 3) // 4 + x = pow(a, k + 1, p) + return x, (-x % p) + else: + return _tonelli_shanks(a, p) + + +def msg_bytes(msg) -> bytes: + """Return bytes in a consistent way for a given message. + + The message is expected to be either a string, bytes, or an array of bytes. + + Args: + | msg (str|bytes|bytearray): The data to transform. + + Returns: + bytes: The byte encoded data. + + Raises: + ValueError: If the data cannot be encoded as bytes. + """ + if isinstance(msg, bytes): + return msg + elif isinstance(msg, str): + return msg.encode() + elif isinstance(msg, bytearray): + return bytes(msg) + else: + raise ValueError('Msg "{}" of type {} cannot be converted to bytes'.format( + msg, type(msg))) + + +def validate_type(instance: object, expected_type: type): + """Validate that instance is an instance of the expected_type. + + Args: + | instance: The object whose type is being checked + | expected_type: The expected type of instance + + Raises: + TypeError: If instance is not of type expected_type + """ + if not isinstance(instance, expected_type): + raise TypeError('Expected a value of type {}, got a value of type {}'.format( + expected_type, type(instance))) diff --git a/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/LICENSE new file mode 100644 index 00000000..cf1ab25d --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/LICENSE @@ -0,0 +1,24 @@ +This is free and unencumbered software released into the public domain. + +Anyone is free to copy, modify, publish, use, compile, sell, or +distribute this software, either in source code form or as a compiled +binary, for any purpose, commercial or non-commercial, and by any +means. + +In jurisdictions that recognize copyright laws, the author or authors +of this software dedicate any and all copyright interest in the +software to the public domain. We make this dedication for the benefit +of the public at large and to the detriment of our heirs and +successors.
We intend this dedication to be an overt act of +relinquishment in perpetuity of all present and future rights to this +software under copyright law. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR +OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, +ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR +OTHER DEALINGS IN THE SOFTWARE. + +For more information, please refer to <https://unlicense.org> diff --git a/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/METADATA b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/METADATA new file mode 100644 index 00000000..8ed1972c --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/METADATA @@ -0,0 +1,48 @@ +Metadata-Version: 2.1 +Name: filelock +Version: 3.8.0 +Summary: A platform independent file lock. +Home-page: https://github.com/tox-dev/py-filelock +Download-URL: https://github.com/tox-dev/py-filelock/archive/main.zip +Author: Benedikt Schmitt +Author-email: benedikt@benediktschmitt.de +License: Unlicense +Project-URL: Source, https://github.com/tox-dev/py-filelock +Project-URL: Tracker, https://github.com/tox-dev/py-filelock/issues +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: Public Domain +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Topic :: Internet +Classifier: Topic :: Software Development :: Libraries +Classifier: Topic :: System +Requires-Python: >=3.7 +Description-Content-Type: text/markdown +License-File: LICENSE +Provides-Extra: docs +Requires-Dist: furo (>=2022.6.21) ; extra == 'docs' +Requires-Dist: sphinx (>=5.1.1) ; extra == 'docs' +Requires-Dist: sphinx-autodoc-typehints (>=1.19.1) ; extra == 'docs' +Provides-Extra: testing +Requires-Dist: covdefaults (>=2.2) ; extra == 'testing' +Requires-Dist: coverage (>=6.4.2) ; extra == 'testing' +Requires-Dist: pytest (>=7.1.2) ; extra == 'testing' +Requires-Dist: pytest-cov (>=3) ; extra == 'testing' +Requires-Dist: pytest-timeout (>=2.1) ; extra == 'testing' + +# py-filelock + +[![PyPI](https://img.shields.io/pypi/v/filelock)](https://pypi.org/project/filelock/) +[![Supported Python +versions](https://img.shields.io/pypi/pyversions/filelock.svg)](https://pypi.org/project/filelock/) +[![Documentation +status](https://readthedocs.org/projects/py-filelock/badge/?version=latest)](https://py-filelock.readthedocs.io/en/latest/?badge=latest) +[![Code style: +black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) +[![Downloads](https://pepy.tech/badge/filelock/month)](https://pepy.tech/project/filelock/month) +[![check](https://github.com/tox-dev/py-filelock/actions/workflows/check.yml/badge.svg)](https://github.com/tox-dev/py-filelock/actions/workflows/check.yml) + +For more information, check out the [official documentation](https://py-filelock.readthedocs.io/en/latest/index.html).
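Aside: a minimal usage sketch of the package documented above (illustrative only; the file names are arbitrary). FileLock resolves to the platform-appropriate lock class, and Timeout is raised when the lock cannot be acquired within the given time:

from filelock import FileLock, Timeout

lock = FileLock("shared.txt.lock", timeout=1)  # wait at most 1 second
try:
    with lock:  # acquired on enter, released on exit
        with open("shared.txt", "a") as f:
            f.write("exclusive write\n")
except Timeout:
    print("another process currently holds the lock")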
diff --git a/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/RECORD b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/RECORD new file mode 100644 index 00000000..ca8ff3bc --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/RECORD @@ -0,0 +1,24 @@ +filelock-3.8.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +filelock-3.8.0.dist-info/LICENSE,sha256=iNm062BXnBkew5HKBMFhMFctfu3EqG2qWL8oxuFMm80,1210 +filelock-3.8.0.dist-info/METADATA,sha256=CDoR1fv4WeAylV4aZlRFOVJ0O6JAbH0mHauIEVdT_cg,2311 +filelock-3.8.0.dist-info/RECORD,, +filelock-3.8.0.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +filelock-3.8.0.dist-info/top_level.txt,sha256=NDrf9i5BNogz4hEdsr6Hi7Ws3TlSSKY4Q2Y9_-i2GwU,9 +filelock-3.8.0.dist-info/zip-safe,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1 +filelock/__init__.py,sha256=XE-RanOqTbJkFCKJbe3PqzNCkVgLQXqs_Yp1rrUPF58,1246 +filelock/__pycache__/__init__.cpython-38.pyc,, +filelock/__pycache__/_api.cpython-38.pyc,, +filelock/__pycache__/_error.cpython-38.pyc,, +filelock/__pycache__/_soft.cpython-38.pyc,, +filelock/__pycache__/_unix.cpython-38.pyc,, +filelock/__pycache__/_util.cpython-38.pyc,, +filelock/__pycache__/_windows.cpython-38.pyc,, +filelock/__pycache__/version.cpython-38.pyc,, +filelock/_api.py,sha256=74rZeupmXu8PFZGBAprN3fE4rS45d0njANACuPOeI0M,8895 +filelock/_error.py,sha256=Gaxp2TfdmgdvYFkllGCBOE37vYIXnHKKW3RYfKH7DYM,399 +filelock/_soft.py,sha256=rSpmt4Oi0Eb4JeKzyWImeqf5MYCJR0Dyc_kUN3kHj7Y,1650 +filelock/_unix.py,sha256=gM4-5mqDtamGp5qwWkaZNSTuv9p7vwBa4diVmI1ZBwQ,1578 +filelock/_util.py,sha256=BZKOAYTQdmcHmF34-Ft4kMdEvk3a8NGtsAhe6kfxZuU,594 +filelock/_windows.py,sha256=wvg-_SfJEDyxDeDVH8wBqxbPquXaycoYXgXJOBmm-G4,1890 +filelock/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +filelock/version.py,sha256=tu_YI6bXBlf0-atO2EfySopGOlrOwqLUiRyOvOhMUtw,176 diff --git a/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/top_level.txt b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/top_level.txt new file mode 100644 index 00000000..83c2e357 --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/top_level.txt @@ -0,0 +1 @@ +filelock diff --git a/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/zip-safe b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/zip-safe new file mode 100644 index 00000000..8b137891 --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock-3.8.0.dist-info/zip-safe @@ -0,0 +1 @@ + diff --git a/env/lib/python3.8/site-packages/filelock/__init__.py b/env/lib/python3.8/site-packages/filelock/__init__.py new file mode 100644 index 00000000..afcdb706 --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock/__init__.py @@ -0,0 +1,48 @@ +""" +A platform independent file lock that supports the with-statement. + +.. 
autodata:: filelock.__version__ + :no-value: + +""" +from __future__ import annotations + +import sys +import warnings + +from ._api import AcquireReturnProxy, BaseFileLock +from ._error import Timeout +from ._soft import SoftFileLock +from ._unix import UnixFileLock, has_fcntl +from ._windows import WindowsFileLock +from .version import version + +#: version of the project as a string +__version__: str = version + + +if sys.platform == "win32": # pragma: win32 cover + _FileLock: type[BaseFileLock] = WindowsFileLock +else: # pragma: win32 no cover + if has_fcntl: + _FileLock: type[BaseFileLock] = UnixFileLock + else: + _FileLock = SoftFileLock + if warnings is not None: + warnings.warn("only soft file lock is available") + +#: Alias for the lock, which should be used for the current platform. On Windows, this is an alias for +# :class:`WindowsFileLock`, on Unix for :class:`UnixFileLock` and otherwise for :class:`SoftFileLock`. +FileLock: type[BaseFileLock] = _FileLock + + +__all__ = [ + "__version__", + "FileLock", + "SoftFileLock", + "Timeout", + "UnixFileLock", + "WindowsFileLock", + "BaseFileLock", + "AcquireReturnProxy", +] diff --git a/env/lib/python3.8/site-packages/filelock/_api.py b/env/lib/python3.8/site-packages/filelock/_api.py new file mode 100644 index 00000000..9c400039 --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock/_api.py @@ -0,0 +1,246 @@ +from __future__ import annotations + +import contextlib +import logging +import os +import time +import warnings +from abc import ABC, abstractmethod +from threading import Lock +from types import TracebackType +from typing import Any + +from ._error import Timeout + +_LOGGER = logging.getLogger("filelock") + + +# This is a helper class which is returned by :meth:`BaseFileLock.acquire` and wraps the lock to make sure __enter__ +# is not called twice when entering the with statement. If we would simply return *self*, the lock would be acquired +# again in the *__enter__* method of the BaseFileLock, but not released again automatically. issue #37 (memory leak) +class AcquireReturnProxy: + """A context aware object that will release the lock file when exiting.""" + + def __init__(self, lock: BaseFileLock) -> None: + self.lock = lock + + def __enter__(self) -> BaseFileLock: + return self.lock + + def __exit__( + self, + exc_type: type[BaseException] | None, # noqa: U100 + exc_value: BaseException | None, # noqa: U100 + traceback: TracebackType | None, # noqa: U100 + ) -> None: + self.lock.release() + + +class BaseFileLock(ABC, contextlib.ContextDecorator): + """Abstract base class for a file lock object.""" + + def __init__(self, lock_file: str | os.PathLike[Any], timeout: float = -1) -> None: + """ + Create a new lock object. + + :param lock_file: path to the file + :param timeout: default timeout when acquiring the lock, in seconds. It will be used as fallback value in + the acquire method, if no timeout value (``None``) is given. If you want to disable the timeout, set it + to a negative value. A timeout of 0 means, that there is exactly one attempt to acquire the file lock. + """ + # The path to the lock file. + self._lock_file: str = os.fspath(lock_file) + + # The file descriptor for the *_lock_file* as it is returned by the os.open() function. + # This file lock is only NOT None, if the object currently holds the lock. + self._lock_file_fd: int | None = None + + # The default timeout value. + self.timeout: float = timeout + + # We use this lock primarily for the lock counter. 
+ self._thread_lock: Lock = Lock() + + # The lock counter is used for implementing the nested locking mechanism. Whenever the lock is acquired, the + # counter is increased and the lock is only released, when this value is 0 again. + self._lock_counter: int = 0 + + @property + def lock_file(self) -> str: + """:return: path to the lock file""" + return self._lock_file + + @property + def timeout(self) -> float: + """ + :return: the default timeout value, in seconds + + .. versionadded:: 2.0.0 + """ + return self._timeout + + @timeout.setter + def timeout(self, value: float | str) -> None: + """ + Change the default timeout value. + + :param value: the new value, in seconds + """ + self._timeout = float(value) + + @abstractmethod + def _acquire(self) -> None: + """If the file lock could be acquired, self._lock_file_fd holds the file descriptor of the lock file.""" + raise NotImplementedError + + @abstractmethod + def _release(self) -> None: + """Releases the lock and sets self._lock_file_fd to None.""" + raise NotImplementedError + + @property + def is_locked(self) -> bool: + """ + + :return: A boolean indicating if the lock file is holding the lock currently. + + .. versionchanged:: 2.0.0 + + This was previously a method and is now a property. + """ + return self._lock_file_fd is not None + + def acquire( + self, + timeout: float | None = None, + poll_interval: float = 0.05, + *, + poll_intervall: float | None = None, + blocking: bool = True, + ) -> AcquireReturnProxy: + """ + Try to acquire the file lock. + + :param timeout: maximum wait time for acquiring the lock, ``None`` means use the default :attr:`~timeout` is and + if ``timeout < 0``, there is no timeout and this method will block until the lock could be acquired + :param poll_interval: interval of trying to acquire the lock file + :param poll_intervall: deprecated, kept for backwards compatibility, use ``poll_interval`` instead + :param blocking: defaults to True. If False, function will return immediately if it cannot obtain a lock on the + first attempt. Otherwise this method will block until the timeout expires or the lock is acquired. + :raises Timeout: if fails to acquire lock within the timeout period + :return: a context object that will unlock the file when the context is exited + + .. code-block:: python + + # You can use this method in the context manager (recommended) + with lock.acquire(): + pass + + # Or use an equivalent try-finally construct: + lock.acquire() + try: + pass + finally: + lock.release() + + .. versionchanged:: 2.0.0 + + This method returns now a *proxy* object instead of *self*, + so that it can be used in a with statement without side effects. + + """ + # Use the default timeout, if no timeout is provided. + if timeout is None: + timeout = self.timeout + + if poll_intervall is not None: + msg = "use poll_interval instead of poll_intervall" + warnings.warn(msg, DeprecationWarning, stacklevel=2) + poll_interval = poll_intervall + + # Increment the number right at the beginning. We can still undo it, if something fails. 
+ with self._thread_lock: + self._lock_counter += 1 + + lock_id = id(self) + lock_filename = self._lock_file + start_time = time.monotonic() + try: + while True: + with self._thread_lock: + if not self.is_locked: + _LOGGER.debug("Attempting to acquire lock %s on %s", lock_id, lock_filename) + self._acquire() + + if self.is_locked: + _LOGGER.debug("Lock %s acquired on %s", lock_id, lock_filename) + break + elif blocking is False: + _LOGGER.debug("Failed to immediately acquire lock %s on %s", lock_id, lock_filename) + raise Timeout(self._lock_file) + elif 0 <= timeout < time.monotonic() - start_time: + _LOGGER.debug("Timeout on acquiring lock %s on %s", lock_id, lock_filename) + raise Timeout(self._lock_file) + else: + msg = "Lock %s not acquired on %s, waiting %s seconds ..." + _LOGGER.debug(msg, lock_id, lock_filename, poll_interval) + time.sleep(poll_interval) + except BaseException: # Something did go wrong, so decrement the counter. + with self._thread_lock: + self._lock_counter = max(0, self._lock_counter - 1) + raise + return AcquireReturnProxy(lock=self) + + def release(self, force: bool = False) -> None: + """ + Releases the file lock. Please note, that the lock is only completely released, if the lock counter is 0. Also + note, that the lock file itself is not automatically deleted. + + :param force: If true, the lock counter is ignored and the lock is released in every case/ + """ + with self._thread_lock: + + if self.is_locked: + self._lock_counter -= 1 + + if self._lock_counter == 0 or force: + lock_id, lock_filename = id(self), self._lock_file + + _LOGGER.debug("Attempting to release lock %s on %s", lock_id, lock_filename) + self._release() + self._lock_counter = 0 + _LOGGER.debug("Lock %s released on %s", lock_id, lock_filename) + + def __enter__(self) -> BaseFileLock: + """ + Acquire the lock. + + :return: the lock object + """ + self.acquire() + return self + + def __exit__( + self, + exc_type: type[BaseException] | None, # noqa: U100 + exc_value: BaseException | None, # noqa: U100 + traceback: TracebackType | None, # noqa: U100 + ) -> None: + """ + Release the lock. + + :param exc_type: the exception type if raised + :param exc_value: the exception value if raised + :param traceback: the exception traceback if raised + """ + self.release() + + def __del__(self) -> None: + """Called when the lock object is deleted.""" + self.release(force=True) + + +__all__ = [ + "BaseFileLock", + "AcquireReturnProxy", +] diff --git a/env/lib/python3.8/site-packages/filelock/_error.py b/env/lib/python3.8/site-packages/filelock/_error.py new file mode 100644 index 00000000..b3885214 --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock/_error.py @@ -0,0 +1,17 @@ +from __future__ import annotations + + +class Timeout(TimeoutError): + """Raised when the lock could not be acquired in *timeout* seconds.""" + + def __init__(self, lock_file: str) -> None: + #: The path of the file lock. + self.lock_file = lock_file + + def __str__(self) -> str: + return f"The file lock '{self.lock_file}' could not be acquired." 
+ + +__all__ = [ + "Timeout", +] diff --git a/env/lib/python3.8/site-packages/filelock/_soft.py b/env/lib/python3.8/site-packages/filelock/_soft.py new file mode 100644 index 00000000..cb09799a --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock/_soft.py @@ -0,0 +1,47 @@ +from __future__ import annotations + +import os +import sys +from errno import EACCES, EEXIST, ENOENT + +from ._api import BaseFileLock +from ._util import raise_on_exist_ro_file + + +class SoftFileLock(BaseFileLock): + """Simply watches the existence of the lock file.""" + + def _acquire(self) -> None: + raise_on_exist_ro_file(self._lock_file) + # first check for exists and read-only mode as the open will mask this case as EEXIST + mode = ( + os.O_WRONLY # open for writing only + | os.O_CREAT + | os.O_EXCL # together with above raise EEXIST if the file specified by filename exists + | os.O_TRUNC # truncate the file to zero byte + ) + try: + fd = os.open(self._lock_file, mode) + except OSError as exception: + if exception.errno == EEXIST: # expected if cannot lock + pass + elif exception.errno == ENOENT: # No such file or directory - parent directory is missing + raise + elif exception.errno == EACCES and sys.platform != "win32": # pragma: win32 no cover + # Permission denied - parent dir is R/O + raise # note windows does not allow you to make a folder r/o only files + else: + self._lock_file_fd = fd + + def _release(self) -> None: + os.close(self._lock_file_fd) # type: ignore # the lock file is definitely not None + self._lock_file_fd = None + try: + os.remove(self._lock_file) + except OSError: # the file is already deleted and that's what we want + pass + + +__all__ = [ + "SoftFileLock", +] diff --git a/env/lib/python3.8/site-packages/filelock/_unix.py b/env/lib/python3.8/site-packages/filelock/_unix.py new file mode 100644 index 00000000..03b612c9 --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock/_unix.py @@ -0,0 +1,56 @@ +from __future__ import annotations + +import os +import sys +from typing import cast + +from ._api import BaseFileLock + +#: a flag to indicate if the fcntl API is available +has_fcntl = False +if sys.platform == "win32": # pragma: win32 cover + + class UnixFileLock(BaseFileLock): + """Uses the :func:`fcntl.flock` to hard lock the lock file on unix systems.""" + + def _acquire(self) -> None: + raise NotImplementedError + + def _release(self) -> None: + raise NotImplementedError + +else: # pragma: win32 no cover + try: + import fcntl + except ImportError: + pass + else: + has_fcntl = True + + class UnixFileLock(BaseFileLock): + """Uses the :func:`fcntl.flock` to hard lock the lock file on unix systems.""" + + def _acquire(self) -> None: + open_mode = os.O_RDWR | os.O_CREAT | os.O_TRUNC + fd = os.open(self._lock_file, open_mode) + try: + fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) + except OSError: + os.close(fd) + else: + self._lock_file_fd = fd + + def _release(self) -> None: + # Do not remove the lockfile: + # https://github.com/tox-dev/py-filelock/issues/31 + # https://stackoverflow.com/questions/17708885/flock-removing-locked-file-without-race-condition + fd = cast(int, self._lock_file_fd) + self._lock_file_fd = None + fcntl.flock(fd, fcntl.LOCK_UN) + os.close(fd) + + +__all__ = [ + "has_fcntl", + "UnixFileLock", +] diff --git a/env/lib/python3.8/site-packages/filelock/_util.py b/env/lib/python3.8/site-packages/filelock/_util.py new file mode 100644 index 00000000..238b80fa --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock/_util.py @@ -0,0 +1,20 @@ +from 
__future__ import annotations + +import os +import stat + + +def raise_on_exist_ro_file(filename: str) -> None: + try: + file_stat = os.stat(filename) # use stat to do exists + can write to check without race condition + except OSError: + return None # swallow does not exist or other errors + + if file_stat.st_mtime != 0: # if os.stat returns but modification is zero that's an invalid os.stat - ignore it + if not (file_stat.st_mode & stat.S_IWUSR): + raise PermissionError(f"Permission denied: {filename!r}") + + +__all__ = [ + "raise_on_exist_ro_file", +] diff --git a/env/lib/python3.8/site-packages/filelock/_windows.py b/env/lib/python3.8/site-packages/filelock/_windows.py new file mode 100644 index 00000000..60e68cb9 --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock/_windows.py @@ -0,0 +1,63 @@ +from __future__ import annotations + +import os +import sys +from errno import ENOENT +from typing import cast + +from ._api import BaseFileLock +from ._util import raise_on_exist_ro_file + +if sys.platform == "win32": # pragma: win32 cover + import msvcrt + + class WindowsFileLock(BaseFileLock): + """Uses the :func:`msvcrt.locking` function to hard lock the lock file on windows systems.""" + + def _acquire(self) -> None: + raise_on_exist_ro_file(self._lock_file) + mode = ( + os.O_RDWR # open for read and write + | os.O_CREAT # create file if not exists + | os.O_TRUNC # truncate file if not empty + ) + try: + fd = os.open(self._lock_file, mode) + except OSError as exception: + if exception.errno == ENOENT: # No such file or directory + raise + else: + try: + msvcrt.locking(fd, msvcrt.LK_NBLCK, 1) + except OSError: + os.close(fd) + else: + self._lock_file_fd = fd + + def _release(self) -> None: + fd = cast(int, self._lock_file_fd) + self._lock_file_fd = None + msvcrt.locking(fd, msvcrt.LK_UNLCK, 1) + os.close(fd) + + try: + os.remove(self._lock_file) + # Probably another instance of the application had acquired the file lock.
+ except OSError: + pass + +else: # pragma: win32 no cover + + class WindowsFileLock(BaseFileLock): + """Uses the :func:`msvcrt.locking` function to hard lock the lock file on windows systems.""" + + def _acquire(self) -> None: + raise NotImplementedError + + def _release(self) -> None: + raise NotImplementedError + + +__all__ = [ + "WindowsFileLock", +] diff --git a/env/lib/python3.8/site-packages/filelock/py.typed b/env/lib/python3.8/site-packages/filelock/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/filelock/version.py b/env/lib/python3.8/site-packages/filelock/version.py new file mode 100644 index 00000000..c9de29e4 --- /dev/null +++ b/env/lib/python3.8/site-packages/filelock/version.py @@ -0,0 +1,5 @@ +# coding: utf-8 +# file generated by setuptools_scm +# don't change, don't track in version control +__version__ = version = '3.8.0' +__version_tuple__ = version_tuple = (3, 8, 0) diff --git a/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/INSTALLER b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/LICENSE.txt b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/LICENSE.txt new file mode 100644 index 00000000..a51531d9 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/LICENSE.txt @@ -0,0 +1,7 @@ +Copyright (c) 2012 Santiago Lezica + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/METADATA b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/METADATA new file mode 100644 index 00000000..e75c226f --- /dev/null +++ b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/METADATA @@ -0,0 +1,46 @@ +Metadata-Version: 2.1 +Name: frozendict +Version: 1.2 +Summary: An immutable dictionary +Home-page: https://github.com/slezica/python-frozendict +Author: Santiago Lezica +Author-email: slezica89@gmail.com +License: MIT License +License-File: LICENSE.txt + +========== +frozendict +========== + +``frozendict`` is an immutable wrapper around dictionaries that implements the +complete mapping interface. It can be used as a drop-in replacement for +dictionaries where immutability is desired. + +Of course, this is ``python``, and you can still poke around the object's +internals if you want. 
+ +The ``frozendict`` constructor mimics ``dict``, and all of the expected +interfaces (``iter``, ``len``, ``repr``, ``hash``, ``getitem``) are provided. +Note that a ``frozendict`` does not guarantee the immutability of its values, so +the utility of ``hash`` method is restricted by usage. + +The only difference is that the ``copy()`` method of ``frozendict`` takes +variable keyword arguments, which will be present as key/value pairs in the new, +immutable copy. + +Example shell usage: + +.. code-block:: python + + from frozendict import frozendict + + fd = frozendict({ 'hello': 'World' }) + + print fd + # <frozendict {'hello': 'World'}> + + print fd['hello'] + # 'World' + + print fd.copy(another='key/value') + # <frozendict {'hello': 'World', 'another': 'key/value'}> diff --git a/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/RECORD b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/RECORD new file mode 100644 index 00000000..f73cd67a --- /dev/null +++ b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/RECORD @@ -0,0 +1,8 @@ +frozendict-1.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +frozendict-1.2.dist-info/LICENSE.txt,sha256=Fe_LgkkiNbO8jINBI90hEDF_8EMuLw1VJvl-Db55B88,1059 +frozendict-1.2.dist-info/METADATA,sha256=-tZ1F9dr3jalL5JR4clD09dUNojahfzIyWWXj8U4iVU,1352 +frozendict-1.2.dist-info/RECORD,, +frozendict-1.2.dist-info/WHEEL,sha256=2wepM1nk4DS4eFpYrW1TTqPcoGNfHhhO_i5m4cOimbo,92 +frozendict-1.2.dist-info/top_level.txt,sha256=H0x_yOfp00xK1WR_JMHct-wr-7Ht9uC9iQR2aHNL_uw,11 +frozendict/__init__.py,sha256=qE9GbCahp044vHfD3LSNjLOjJmhm3TS95HenRVW8mcA,1503 +frozendict/__pycache__/__init__.cpython-38.pyc,, diff --git a/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/WHEEL b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/WHEEL new file mode 100644 index 00000000..57e3d840 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.38.4) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/top_level.txt b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/top_level.txt new file mode 100644 index 00000000..302328f6 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozendict-1.2.dist-info/top_level.txt @@ -0,0 +1 @@ +frozendict diff --git a/env/lib/python3.8/site-packages/frozendict/__init__.py b/env/lib/python3.8/site-packages/frozendict/__init__.py new file mode 100644 index 00000000..399948a6 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozendict/__init__.py @@ -0,0 +1,64 @@ +import collections +import operator +import functools +import sys + + +try: + from collections import OrderedDict +except ImportError: # python < 2.7 + OrderedDict = NotImplemented + + +iteritems = getattr(dict, 'iteritems', dict.items) # py2-3 compatibility + + +class frozendict(collections.Mapping): + """ + An immutable wrapper around dictionaries that implements the complete :py:class:`collections.Mapping` + interface. It can be used as a drop-in replacement for dictionaries where immutability is desired.
+ """ + + dict_cls = dict + + def __init__(self, *args, **kwargs): + self._dict = self.dict_cls(*args, **kwargs) + self._hash = None + + def __getitem__(self, key): + return self._dict[key] + + def __contains__(self, key): + return key in self._dict + + def copy(self, **add_or_replace): + return self.__class__(self, **add_or_replace) + + def __iter__(self): + return iter(self._dict) + + def __len__(self): + return len(self._dict) + + def __repr__(self): + return '<%s %r>' % (self.__class__.__name__, self._dict) + + def __hash__(self): + if self._hash is None: + h = 0 + for key, value in iteritems(self._dict): + h ^= hash((key, value)) + self._hash = h + return self._hash + + +class FrozenOrderedDict(frozendict): + """ + A frozendict subclass that maintains key order + """ + + dict_cls = OrderedDict + + +if OrderedDict is NotImplemented: + del FrozenOrderedDict diff --git a/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/INSTALLER b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/LICENSE b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/LICENSE new file mode 100644 index 00000000..7082a2d5 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/LICENSE @@ -0,0 +1,201 @@ +Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. 
For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2013-2019 Nikolay Kim and Andrew Svetlov + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/METADATA b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/METADATA new file mode 100644 index 00000000..235aca1f --- /dev/null +++ b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/METADATA @@ -0,0 +1,150 @@ +Metadata-Version: 2.1 +Name: frozenlist +Version: 1.3.3 +Summary: A list-like structure which implements collections.abc.MutableSequence +Home-page: https://github.com/aio-libs/frozenlist +Maintainer: aiohttp team +Maintainer-email: team@aiohttp.org +License: Apache 2 +Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby +Project-URL: CI: Github Actions, https://github.com/aio-libs/frozenlist/actions +Project-URL: Coverage: codecov, https://codecov.io/github/aio-libs/frozenlist +Project-URL: Docs: RTD, https://frozenlist.readthedocs.io +Project-URL: GitHub: issues, https://github.com/aio-libs/frozenlist/issues +Project-URL: GitHub: repo, https://github.com/aio-libs/frozenlist +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Intended Audience :: Developers +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Development Status :: 5 - Production/Stable +Classifier: Operating System :: POSIX +Classifier: Operating System :: MacOS :: MacOS X +Classifier: Operating System :: Microsoft :: Windows +Requires-Python: >=3.7 +Description-Content-Type: text/x-rst +License-File: LICENSE + +========== +frozenlist +========== + +.. image:: https://github.com/aio-libs/frozenlist/workflows/CI/badge.svg + :target: https://github.com/aio-libs/frozenlist/actions + :alt: GitHub status for master branch + +.. image:: https://codecov.io/gh/aio-libs/frozenlist/branch/master/graph/badge.svg + :target: https://codecov.io/gh/aio-libs/frozenlist + :alt: codecov.io status for master branch + +.. image:: https://badge.fury.io/py/frozenlist.svg + :target: https://pypi.org/project/frozenlist + :alt: Latest PyPI package version + +.. image:: https://readthedocs.org/projects/frozenlist/badge/?version=latest + :target: https://frozenlist.readthedocs.io/ + :alt: Latest Read The Docs + +.. image:: https://img.shields.io/discourse/topics?server=https%3A%2F%2Faio-libs.discourse.group%2F + :target: https://aio-libs.discourse.group/ + :alt: Discourse group for io-libs + +.. image:: https://badges.gitter.im/Join%20Chat.svg + :target: https://gitter.im/aio-libs/Lobby + :alt: Chat on Gitter + +Introduction +============ + +``frozenlist.FrozenList`` is a list-like structure which implements +``collections.abc.MutableSequence``. The list is *mutable* until ``FrozenList.freeze`` +is called, after which list modifications raise ``RuntimeError``: + + +>>> from frozenlist import FrozenList +>>> fl = FrozenList([17, 42]) +>>> fl.append('spam') +>>> fl.append('Vikings') +>>> fl +<FrozenList(frozen=False, items=[17, 42, 'spam', 'Vikings'])> +>>> fl.freeze() +>>> fl +<FrozenList(frozen=True, items=[17, 42, 'spam', 'Vikings'])> +>>> fl.frozen +True +>>> fl.append("Monty") +Traceback (most recent call last): + File "<stdin>", line 1, in <module> + File "frozenlist/_frozenlist.pyx", line 97, in frozenlist._frozenlist.FrozenList.append + self._check_frozen() + File "frozenlist/_frozenlist.pyx", line 19, in frozenlist._frozenlist.FrozenList._check_frozen + raise RuntimeError("Cannot modify frozen list.") +RuntimeError: Cannot modify frozen list.
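A freeze-after-build sketch along the same lines (an editor's illustration, not part of the upstream README; ``build_handlers`` is a hypothetical name):

>>> from frozenlist import FrozenList
>>> def build_handlers():
...     handlers = FrozenList()   # mutable while being assembled
...     handlers.append(len)      # any callable; ``len`` is just a stand-in
...     handlers.freeze()         # from here on, mutation raises RuntimeError
...     return handlers
>>> build_handlers().frozen
True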
+ + +FrozenList is also hashable, but only when frozen. Otherwise it also throws a RuntimeError: + + +>>> fl = FrozenList([17, 42, 'spam']) +>>> hash(fl) +Traceback (most recent call last): + File "<stdin>", line 1, in <module> + File "frozenlist/_frozenlist.pyx", line 111, in frozenlist._frozenlist.FrozenList.__hash__ + raise RuntimeError("Cannot hash unfrozen list.") +RuntimeError: Cannot hash unfrozen list. +>>> fl.freeze() +>>> hash(fl) +3713081631934410656 +>>> dictionary = {fl: 'Vikings'} # frozen fl can be a dict key +>>> dictionary +{<FrozenList(frozen=True, items=[17, 42, 'spam'])>: 'Vikings'} + + +Installation +------------ + +:: + + $ pip install frozenlist + +The library requires Python 3.6 or newer. + + +Documentation +============= + +https://frozenlist.readthedocs.io/ + +Communication channels +====================== + +*aio-libs discourse group*: https://aio-libs.discourse.group + +Feel free to post your questions and ideas here. + +*gitter chat* https://gitter.im/aio-libs/Lobby + +Requirements +============ + +- Python >= 3.6 + +License +======= + +``frozenlist`` is offered under the Apache 2 license. + +Source code +=========== + +The project is hosted on GitHub_ + +Please file an issue in the `bug tracker +<https://github.com/aio-libs/frozenlist/issues>`_ if you have found a bug +or have some suggestions to improve the library. + +.. _GitHub: https://github.com/aio-libs/frozenlist diff --git a/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/RECORD b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/RECORD new file mode 100644 index 00000000..d10c1998 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/RECORD @@ -0,0 +1,12 @@ +frozenlist-1.3.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +frozenlist-1.3.3.dist-info/LICENSE,sha256=b9UkPpLdf5jsacesN3co50kFcJ_1J6W_mNbQJjwE9bY,11332 +frozenlist-1.3.3.dist-info/METADATA,sha256=f8FI9mzqEFVV4IHXvWrRk8BuLIAwRWGJuYewaEidVQg,4693 +frozenlist-1.3.3.dist-info/RECORD,, +frozenlist-1.3.3.dist-info/WHEEL,sha256=WBvXYjpU8Xs6LszuEbU6pGw6bPe2DwAoDmxw0w2rxhU,217 +frozenlist-1.3.3.dist-info/top_level.txt,sha256=jivtxsPXA3nK3WBWW2LW5Mtu_GHt8UZA13NeCs2cKuA,11 +frozenlist/__init__.py,sha256=yf1hVJQUEyUwzTJau3i3SQeFDYYROwgNpJLMzHN58tM,2323 +frozenlist/__init__.pyi,sha256=vMEoES1xGegPtVXoCi9XydEeHsyuIq-KdeXwP5PdsaA,1470 +frozenlist/__pycache__/__init__.cpython-38.pyc,, +frozenlist/_frozenlist.cpython-38-x86_64-linux-gnu.so,sha256=v2EonDrCwZhbHauHVeFD850WS1A7lTIsYC2t5EsKwZg,509432 +frozenlist/_frozenlist.pyx,sha256=9V4Z1En6TZwgFD26d-sjxyhUzUm338H1Qiz4-i5ukv0,2983 +frozenlist/py.typed,sha256=sow9soTwP9T_gEAQSVh7Gb8855h04Nwmhs2We-JRgZM,7 diff --git a/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/WHEEL b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/WHEEL new file mode 100644 index 00000000..e5d12f1b --- /dev/null +++ b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/WHEEL @@ -0,0 +1,8 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.38.2) +Root-Is-Purelib: false +Tag: cp38-cp38-manylinux_2_5_x86_64 +Tag: cp38-cp38-manylinux1_x86_64 +Tag: cp38-cp38-manylinux_2_17_x86_64 +Tag: cp38-cp38-manylinux2014_x86_64 + diff --git a/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/top_level.txt b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/top_level.txt new file mode 100644 index 00000000..52f13fc4 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozenlist-1.3.3.dist-info/top_level.txt @@ -0,0 +1 @@ +frozenlist diff --git a/env/lib/python3.8/site-packages/frozenlist/__init__.py
b/env/lib/python3.8/site-packages/frozenlist/__init__.py new file mode 100644 index 00000000..4f172e7c --- /dev/null +++ b/env/lib/python3.8/site-packages/frozenlist/__init__.py @@ -0,0 +1,96 @@ +import os +import sys +import types +from collections.abc import MutableSequence +from functools import total_ordering +from typing import Tuple, Type + +__version__ = "1.3.3" + +__all__ = ("FrozenList", "PyFrozenList") # type: Tuple[str, ...] + + +NO_EXTENSIONS = bool(os.environ.get("FROZENLIST_NO_EXTENSIONS")) # type: bool + + +@total_ordering +class FrozenList(MutableSequence): + + __slots__ = ("_frozen", "_items") + + if sys.version_info >= (3, 9): + __class_getitem__ = classmethod(types.GenericAlias) + else: + + @classmethod + def __class_getitem__(cls: Type["FrozenList"]) -> Type["FrozenList"]: + return cls + + def __init__(self, items=None): + self._frozen = False + if items is not None: + items = list(items) + else: + items = [] + self._items = items + + @property + def frozen(self): + return self._frozen + + def freeze(self): + self._frozen = True + + def __getitem__(self, index): + return self._items[index] + + def __setitem__(self, index, value): + if self._frozen: + raise RuntimeError("Cannot modify frozen list.") + self._items[index] = value + + def __delitem__(self, index): + if self._frozen: + raise RuntimeError("Cannot modify frozen list.") + del self._items[index] + + def __len__(self): + return self._items.__len__() + + def __iter__(self): + return self._items.__iter__() + + def __reversed__(self): + return self._items.__reversed__() + + def __eq__(self, other): + return list(self) == other + + def __le__(self, other): + return list(self) <= other + + def insert(self, pos, item): + if self._frozen: + raise RuntimeError("Cannot modify frozen list.") + self._items.insert(pos, item) + + def __repr__(self): + return f"<FrozenList(frozen={self._frozen}, {self._items!r})>" + + def __hash__(self): + if self._frozen: + return hash(tuple(self)) + else: + raise RuntimeError("Cannot hash unfrozen list.") + + +PyFrozenList = FrozenList + + +try: + from ._frozenlist import FrozenList as CFrozenList # type: ignore + + if not NO_EXTENSIONS: # pragma: no cover + FrozenList = CFrozenList # type: ignore +except ImportError: # pragma: no cover + pass diff --git a/env/lib/python3.8/site-packages/frozenlist/__init__.pyi b/env/lib/python3.8/site-packages/frozenlist/__init__.pyi new file mode 100644 index 00000000..ae803ef6 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozenlist/__init__.pyi @@ -0,0 +1,47 @@ +from typing import ( + Generic, + Iterable, + Iterator, + List, + MutableSequence, + Optional, + TypeVar, + Union, + overload, +) + +_T = TypeVar("_T") +_Arg = Union[List[_T], Iterable[_T]] + +class FrozenList(MutableSequence[_T], Generic[_T]): + def __init__(self, items: Optional[_Arg[_T]] = None) -> None: ... + @property + def frozen(self) -> bool: ... + def freeze(self) -> None: ... + @overload + def __getitem__(self, i: int) -> _T: ... + @overload + def __getitem__(self, s: slice) -> FrozenList[_T]: ... + @overload + def __setitem__(self, i: int, o: _T) -> None: ... + @overload + def __setitem__(self, s: slice, o: Iterable[_T]) -> None: ... + @overload + def __delitem__(self, i: int) -> None: ... + @overload + def __delitem__(self, i: slice) -> None: ... + def __len__(self) -> int: ... + def __iter__(self) -> Iterator[_T]: ... + def __reversed__(self) -> Iterator[_T]: ... + def __eq__(self, other: object) -> bool: ... + def __le__(self, other: FrozenList[_T]) -> bool: ... + def __ne__(self, other: object) -> bool: ...
+ def __lt__(self, other: FrozenList[_T]) -> bool: ... + def __ge__(self, other: FrozenList[_T]) -> bool: ... + def __gt__(self, other: FrozenList[_T]) -> bool: ... + def insert(self, pos: int, item: _T) -> None: ... + def __repr__(self) -> str: ... + def __hash__(self) -> int: ... + +# types for C accelerators are the same +CFrozenList = PyFrozenList = FrozenList diff --git a/env/lib/python3.8/site-packages/frozenlist/_frozenlist.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/frozenlist/_frozenlist.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..d7a1e21a Binary files /dev/null and b/env/lib/python3.8/site-packages/frozenlist/_frozenlist.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/frozenlist/_frozenlist.pyx b/env/lib/python3.8/site-packages/frozenlist/_frozenlist.pyx new file mode 100644 index 00000000..9ee846c1 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozenlist/_frozenlist.pyx @@ -0,0 +1,123 @@ +import sys +import types +from collections.abc import MutableSequence + + +cdef class FrozenList: + + if sys.version_info >= (3, 9): + __class_getitem__ = classmethod(types.GenericAlias) + else: + @classmethod + def __class_getitem__(cls): + return cls + + cdef readonly bint frozen + cdef list _items + + def __init__(self, items=None): + self.frozen = False + if items is not None: + items = list(items) + else: + items = [] + self._items = items + + cdef object _check_frozen(self): + if self.frozen: + raise RuntimeError("Cannot modify frozen list.") + + cdef inline object _fast_len(self): + return len(self._items) + + def freeze(self): + self.frozen = True + + def __getitem__(self, index): + return self._items[index] + + def __setitem__(self, index, value): + self._check_frozen() + self._items[index] = value + + def __delitem__(self, index): + self._check_frozen() + del self._items[index] + + def __len__(self): + return self._fast_len() + + def __iter__(self): + return self._items.__iter__() + + def __reversed__(self): + return self._items.__reversed__() + + def __richcmp__(self, other, op): + if op == 0: # < + return list(self) < other + if op == 1: # <= + return list(self) <= other + if op == 2: # == + return list(self) == other + if op == 3: # != + return list(self) != other + if op == 4: # > + return list(self) > other + if op == 5: # >= + return list(self) >= other + + def insert(self, pos, item): + self._check_frozen() + self._items.insert(pos, item) + + def __contains__(self, item): + return item in self._items + + def __iadd__(self, items): + self._check_frozen() + self._items += list(items) + return self + + def index(self, item): + return self._items.index(item) + + def remove(self, item): + self._check_frozen() + self._items.remove(item) + + def clear(self): + self._check_frozen() + self._items.clear() + + def extend(self, items): + self._check_frozen() + self._items += list(items) + + def reverse(self): + self._check_frozen() + self._items.reverse() + + def pop(self, index=-1): + self._check_frozen() + return self._items.pop(index) + + def append(self, item): + self._check_frozen() + return self._items.append(item) + + def count(self, item): + return self._items.count(item) + + def __repr__(self): + return '<FrozenList(frozen={}, items={})>'.format(self.frozen, + self._items) + + def __hash__(self): + if self.frozen: + return hash(tuple(self._items)) + else: + raise RuntimeError("Cannot hash unfrozen list.") + + +MutableSequence.register(FrozenList) diff --git a/env/lib/python3.8/site-packages/frozenlist/py.typed
b/env/lib/python3.8/site-packages/frozenlist/py.typed new file mode 100644 index 00000000..f5642f79 --- /dev/null +++ b/env/lib/python3.8/site-packages/frozenlist/py.typed @@ -0,0 +1 @@ +Marker diff --git a/env/lib/python3.8/site-packages/google/protobuf/__init__.py b/env/lib/python3.8/site-packages/google/protobuf/__init__.py new file mode 100644 index 00000000..3087605b --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/__init__.py @@ -0,0 +1,33 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# Copyright 2007 Google Inc. All Rights Reserved. + +__version__ = '3.20.3' diff --git a/env/lib/python3.8/site-packages/google/protobuf/any_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/any_pb2.py new file mode 100644 index 00000000..9121193d --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/any_pb2.py @@ -0,0 +1,26 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: google/protobuf/any.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/any.proto\x12\x0fgoogle.protobuf\"&\n\x03\x41ny\x12\x10\n\x08type_url\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x0c\x42v\n\x13\x63om.google.protobufB\x08\x41nyProtoP\x01Z,google.golang.org/protobuf/types/known/anypb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.any_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\010AnyProtoP\001Z,google.golang.org/protobuf/types/known/anypb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' + _ANY._serialized_start=46 + _ANY._serialized_end=84 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/api_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/api_pb2.py new file mode 100644 index 00000000..1721b10a --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/api_pb2.py @@ -0,0 +1,32 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/api.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2 +from google.protobuf import type_pb2 as google_dot_protobuf_dot_type__pb2 + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/api.proto\x12\x0fgoogle.protobuf\x1a$google/protobuf/source_context.proto\x1a\x1agoogle/protobuf/type.proto\"\x81\x02\n\x03\x41pi\x12\x0c\n\x04name\x18\x01 \x01(\t\x12(\n\x07methods\x18\x02 \x03(\x0b\x32\x17.google.protobuf.Method\x12(\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.Option\x12\x0f\n\x07version\x18\x04 \x01(\t\x12\x36\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContext\x12&\n\x06mixins\x18\x06 \x03(\x0b\x32\x16.google.protobuf.Mixin\x12\'\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.Syntax\"\xd5\x01\n\x06Method\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x18\n\x10request_type_url\x18\x02 \x01(\t\x12\x19\n\x11request_streaming\x18\x03 \x01(\x08\x12\x19\n\x11response_type_url\x18\x04 \x01(\t\x12\x1a\n\x12response_streaming\x18\x05 \x01(\x08\x12(\n\x07options\x18\x06 \x03(\x0b\x32\x17.google.protobuf.Option\x12\'\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.Syntax\"#\n\x05Mixin\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04root\x18\x02 \x01(\tBv\n\x13\x63om.google.protobufB\x08\x41piProtoP\x01Z,google.golang.org/protobuf/types/known/apipb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, 
globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.api_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\010ApiProtoP\001Z,google.golang.org/protobuf/types/known/apipb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' + _API._serialized_start=113 + _API._serialized_end=370 + _METHOD._serialized_start=373 + _METHOD._serialized_end=586 + _MIXIN._serialized_start=588 + _MIXIN._serialized_end=623 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/compiler/__init__.py b/env/lib/python3.8/site-packages/google/protobuf/compiler/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/google/protobuf/compiler/plugin_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/compiler/plugin_pb2.py new file mode 100644 index 00000000..715a8913 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/compiler/plugin_pb2.py @@ -0,0 +1,35 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/compiler/plugin.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +from google.protobuf import descriptor_pb2 as google_dot_protobuf_dot_descriptor__pb2 + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n%google/protobuf/compiler/plugin.proto\x12\x18google.protobuf.compiler\x1a google/protobuf/descriptor.proto\"F\n\x07Version\x12\r\n\x05major\x18\x01 \x01(\x05\x12\r\n\x05minor\x18\x02 \x01(\x05\x12\r\n\x05patch\x18\x03 \x01(\x05\x12\x0e\n\x06suffix\x18\x04 \x01(\t\"\xba\x01\n\x14\x43odeGeneratorRequest\x12\x18\n\x10\x66ile_to_generate\x18\x01 \x03(\t\x12\x11\n\tparameter\x18\x02 \x01(\t\x12\x38\n\nproto_file\x18\x0f \x03(\x0b\x32$.google.protobuf.FileDescriptorProto\x12;\n\x10\x63ompiler_version\x18\x03 \x01(\x0b\x32!.google.protobuf.compiler.Version\"\xc1\x02\n\x15\x43odeGeneratorResponse\x12\r\n\x05\x65rror\x18\x01 \x01(\t\x12\x1a\n\x12supported_features\x18\x02 \x01(\x04\x12\x42\n\x04\x66ile\x18\x0f \x03(\x0b\x32\x34.google.protobuf.compiler.CodeGeneratorResponse.File\x1a\x7f\n\x04\x46ile\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x17\n\x0finsertion_point\x18\x02 \x01(\t\x12\x0f\n\x07\x63ontent\x18\x0f \x01(\t\x12?\n\x13generated_code_info\x18\x10 \x01(\x0b\x32\".google.protobuf.GeneratedCodeInfo\"8\n\x07\x46\x65\x61ture\x12\x10\n\x0c\x46\x45\x41TURE_NONE\x10\x00\x12\x1b\n\x17\x46\x45\x41TURE_PROTO3_OPTIONAL\x10\x01\x42W\n\x1c\x63om.google.protobuf.compilerB\x0cPluginProtosZ)google.golang.org/protobuf/types/pluginpb') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.compiler.plugin_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\034com.google.protobuf.compilerB\014PluginProtosZ)google.golang.org/protobuf/types/pluginpb' + _VERSION._serialized_start=101 + _VERSION._serialized_end=171 + _CODEGENERATORREQUEST._serialized_start=174 + _CODEGENERATORREQUEST._serialized_end=360 + 
_CODEGENERATORRESPONSE._serialized_start=363 + _CODEGENERATORRESPONSE._serialized_end=684 + _CODEGENERATORRESPONSE_FILE._serialized_start=499 + _CODEGENERATORRESPONSE_FILE._serialized_end=626 + _CODEGENERATORRESPONSE_FEATURE._serialized_start=628 + _CODEGENERATORRESPONSE_FEATURE._serialized_end=684 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/descriptor.py b/env/lib/python3.8/site-packages/google/protobuf/descriptor.py new file mode 100644 index 00000000..ad70be9a --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/descriptor.py @@ -0,0 +1,1224 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Descriptors essentially contain exactly the information found in a .proto +file, in types that make this information accessible in Python. +""" + +__author__ = 'robinson@google.com (Will Robinson)' + +import threading +import warnings + +from google.protobuf.internal import api_implementation + +_USE_C_DESCRIPTORS = False +if api_implementation.Type() == 'cpp': + # Used by MakeDescriptor in cpp mode + import binascii + import os + from google.protobuf.pyext import _message + _USE_C_DESCRIPTORS = True + + +class Error(Exception): + """Base error for this module.""" + + +class TypeTransformationError(Error): + """Error transforming between python proto type and corresponding C++ type.""" + + +if _USE_C_DESCRIPTORS: + # This metaclass allows to override the behavior of code like + # isinstance(my_descriptor, FieldDescriptor) + # and make it return True when the descriptor is an instance of the extension + # type written in C++. + class DescriptorMetaclass(type): + def __instancecheck__(cls, obj): + if super(DescriptorMetaclass, cls).__instancecheck__(obj): + return True + if isinstance(obj, cls._C_DESCRIPTOR_CLASS): + return True + return False +else: + # The standard metaclass; nothing changes. 
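+  # (Illustrative note: under the pure-Python implementation, isinstance()
+  # checks against the descriptor classes below behave normally; the C++
+  # branch above only exists so that isinstance(obj, FieldDescriptor) also
+  # returns True when obj is the corresponding _message C++ descriptor.)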
+  DescriptorMetaclass = type
+
+
+class _Lock(object):
+  """Wrapper around threading.Lock() that can be used in a 'with' statement."""
+
+  def __new__(cls):
+    self = object.__new__(cls)
+    self._lock = threading.Lock()  # pylint: disable=protected-access
+    return self
+
+  def __enter__(self):
+    self._lock.acquire()
+
+  def __exit__(self, exc_type, exc_value, exc_tb):
+    self._lock.release()
+
+
+_lock = threading.Lock()
+
+
+def _Deprecated(name):
+  if _Deprecated.count > 0:
+    _Deprecated.count -= 1
+    warnings.warn(
+        'Call to deprecated create function %s(). Note: Creating unlinked '
+        'descriptors will go away. Please use get/find descriptors from '
+        'generated code or query the descriptor_pool.'
+        % name,
+        category=DeprecationWarning, stacklevel=3)
+
+
+# Deprecation warnings will print at most 100 times, which should be enough
+# for users to notice without causing a timeout.
+_Deprecated.count = 100
+
+
+_internal_create_key = object()
+
+
+class DescriptorBase(metaclass=DescriptorMetaclass):
+
+  """Descriptors base class.
+
+  This class is the base of all descriptor classes. It provides common
+  options-related functionality.
+
+  Attributes:
+    has_options: True if the descriptor has non-default options. Usually it
+      is not necessary to read this -- just call GetOptions() which will
+      happily return the default instance. However, it's sometimes useful
+      for efficiency, and also useful inside the protobuf implementation to
+      avoid some bootstrapping issues.
+  """
+
+  if _USE_C_DESCRIPTORS:
+    # The class, or tuple of classes, that are considered as "virtual
+    # subclasses" of this descriptor class.
+    _C_DESCRIPTOR_CLASS = ()
+
+  def __init__(self, options, serialized_options, options_class_name):
+    """Initialize the descriptor given its options message and the name of the
+    class of the options message. The name of the class is required in case
+    the options message is None and has to be created.
+    """
+    self._options = options
+    self._options_class_name = options_class_name
+    self._serialized_options = serialized_options
+
+    # Does this descriptor have non-default options?
+    self.has_options = (options is not None) or (serialized_options is not None)
+
+  def _SetOptions(self, options, options_class_name):
+    """Sets the descriptor's options.
+
+    This function is used in generated proto2 files to update descriptor
+    options. It must not be used outside proto2.
+    """
+    self._options = options
+    self._options_class_name = options_class_name
+
+    # Does this descriptor have non-default options?
+    self.has_options = options is not None
+
+  def GetOptions(self):
+    """Retrieves descriptor options.
+
+    This method returns the options set or creates the default options for the
+    descriptor.
+    """
+    if self._options:
+      return self._options
+
+    from google.protobuf import descriptor_pb2
+    try:
+      options_class = getattr(descriptor_pb2,
+                              self._options_class_name)
+    except AttributeError:
+      raise RuntimeError('Unknown options class name %s!' %
+                         (self._options_class_name))
+
+    with _lock:
+      if self._serialized_options is None:
+        self._options = options_class()
+      else:
+        self._options = _ParseOptions(options_class(),
+                                      self._serialized_options)
+
+      return self._options
+
+
+class _NestedDescriptorBase(DescriptorBase):
+  """Common class for descriptors that can be nested."""
+
+  def __init__(self, options, options_class_name, name, full_name,
+               file, containing_type, serialized_start=None,
+               serialized_end=None, serialized_options=None):
+    """Constructor.
+
+    Args:
+      options: Protocol message options or None
+        to use default message options.
+      options_class_name (str): The class name of the above options.
+      name (str): Name of this protocol message type.
+      full_name (str): Fully-qualified name of this protocol message type,
+        which will include protocol "package" name and the name of any
+        enclosing types.
+      file (FileDescriptor): Reference to file info.
+      containing_type: if provided, this is a nested descriptor, with this
+        descriptor as parent, otherwise None.
+      serialized_start: The start index (inclusive) of the block in
+        file.serialized_pb that describes this descriptor.
+      serialized_end: The end index (exclusive) of the block in
+        file.serialized_pb that describes this descriptor.
+      serialized_options: Protocol message serialized options or None.
+    """
+    super(_NestedDescriptorBase, self).__init__(
+        options, serialized_options, options_class_name)
+
+    self.name = name
+    # TODO(falk): Add function to calculate full_name instead of having it in
+    # memory?
+    self.full_name = full_name
+    self.file = file
+    self.containing_type = containing_type
+
+    self._serialized_start = serialized_start
+    self._serialized_end = serialized_end
+
+  def CopyToProto(self, proto):
+    """Copies this to the matching proto in descriptor_pb2.
+
+    Args:
+      proto: An empty proto instance from descriptor_pb2.
+
+    Raises:
+      Error: If self couldn't be serialized, due to too few constructor
+        arguments.
+    """
+    if (self.file is not None and
+        self._serialized_start is not None and
+        self._serialized_end is not None):
+      proto.ParseFromString(self.file.serialized_pb[
+          self._serialized_start:self._serialized_end])
+    else:
+      raise Error('Descriptor does not contain serialization.')
+
+
+class Descriptor(_NestedDescriptorBase):
+
+  """Descriptor for a protocol message type.
+
+  Attributes:
+    name (str): Name of this protocol message type.
+    full_name (str): Fully-qualified name of this protocol message type,
+      which will include protocol "package" name and the name of any
+      enclosing types.
+    containing_type (Descriptor): Reference to the descriptor of the type
+      containing us, or None if this is top-level.
+    fields (list[FieldDescriptor]): Field descriptors for all fields in
+      this type.
+    fields_by_number (dict(int, FieldDescriptor)): Same
+      :class:`FieldDescriptor` objects as in :attr:`fields`, but indexed
+      by "number" attribute in each FieldDescriptor.
+    fields_by_name (dict(str, FieldDescriptor)): Same
+      :class:`FieldDescriptor` objects as in :attr:`fields`, but indexed by
+      "name" attribute in each :class:`FieldDescriptor`.
+    nested_types (list[Descriptor]): Descriptor references
+      for all protocol message types nested within this one.
+    nested_types_by_name (dict(str, Descriptor)): Same Descriptor
+      objects as in :attr:`nested_types`, but indexed by "name" attribute
+      in each Descriptor.
+    enum_types (list[EnumDescriptor]): :class:`EnumDescriptor` references
+      for all enums contained within this type.
+    enum_types_by_name (dict(str, EnumDescriptor)): Same
+      :class:`EnumDescriptor` objects as in :attr:`enum_types`, but
+      indexed by "name" attribute in each EnumDescriptor.
+    enum_values_by_name (dict(str, EnumValueDescriptor)): Dict mapping
+      from enum value name to :class:`EnumValueDescriptor` for that value.
+    extensions (list[FieldDescriptor]): All extensions defined directly
+      within this message type (NOT within a nested type).
+ extensions_by_name (dict(str, FieldDescriptor)): Same FieldDescriptor + objects as :attr:`extensions`, but indexed by "name" attribute of each + FieldDescriptor. + is_extendable (bool): Does this type define any extension ranges? + oneofs (list[OneofDescriptor]): The list of descriptors for oneof fields + in this message. + oneofs_by_name (dict(str, OneofDescriptor)): Same objects as in + :attr:`oneofs`, but indexed by "name" attribute. + file (FileDescriptor): Reference to file descriptor. + + """ + + if _USE_C_DESCRIPTORS: + _C_DESCRIPTOR_CLASS = _message.Descriptor + + def __new__( + cls, + name=None, + full_name=None, + filename=None, + containing_type=None, + fields=None, + nested_types=None, + enum_types=None, + extensions=None, + options=None, + serialized_options=None, + is_extendable=True, + extension_ranges=None, + oneofs=None, + file=None, # pylint: disable=redefined-builtin + serialized_start=None, + serialized_end=None, + syntax=None, + create_key=None): + _message.Message._CheckCalledFromGeneratedFile() + return _message.default_pool.FindMessageTypeByName(full_name) + + # NOTE(tmarek): The file argument redefining a builtin is nothing we can + # fix right now since we don't know how many clients already rely on the + # name of the argument. + def __init__(self, name, full_name, filename, containing_type, fields, + nested_types, enum_types, extensions, options=None, + serialized_options=None, + is_extendable=True, extension_ranges=None, oneofs=None, + file=None, serialized_start=None, serialized_end=None, # pylint: disable=redefined-builtin + syntax=None, create_key=None): + """Arguments to __init__() are as described in the description + of Descriptor fields above. + + Note that filename is an obsolete argument, that is not used anymore. + Please use file.name to access this as an attribute. + """ + if create_key is not _internal_create_key: + _Deprecated('Descriptor') + + super(Descriptor, self).__init__( + options, 'MessageOptions', name, full_name, file, + containing_type, serialized_start=serialized_start, + serialized_end=serialized_end, serialized_options=serialized_options) + + # We have fields in addition to fields_by_name and fields_by_number, + # so that: + # 1. Clients can index fields by "order in which they're listed." + # 2. Clients can easily iterate over all fields with the terse + # syntax: for f in descriptor.fields: ... 
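+    # Illustrative lookups, assuming a hypothetical message descriptor
+    # `desc` with a field 'name' numbered 1:
+    #   desc.fields_by_name['name'].number  -> 1
+    #   desc.fields_by_number[1].name       -> 'name'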
+ self.fields = fields + for field in self.fields: + field.containing_type = self + self.fields_by_number = dict((f.number, f) for f in fields) + self.fields_by_name = dict((f.name, f) for f in fields) + self._fields_by_camelcase_name = None + + self.nested_types = nested_types + for nested_type in nested_types: + nested_type.containing_type = self + self.nested_types_by_name = dict((t.name, t) for t in nested_types) + + self.enum_types = enum_types + for enum_type in self.enum_types: + enum_type.containing_type = self + self.enum_types_by_name = dict((t.name, t) for t in enum_types) + self.enum_values_by_name = dict( + (v.name, v) for t in enum_types for v in t.values) + + self.extensions = extensions + for extension in self.extensions: + extension.extension_scope = self + self.extensions_by_name = dict((f.name, f) for f in extensions) + self.is_extendable = is_extendable + self.extension_ranges = extension_ranges + self.oneofs = oneofs if oneofs is not None else [] + self.oneofs_by_name = dict((o.name, o) for o in self.oneofs) + for oneof in self.oneofs: + oneof.containing_type = self + self.syntax = syntax or "proto2" + + @property + def fields_by_camelcase_name(self): + """Same FieldDescriptor objects as in :attr:`fields`, but indexed by + :attr:`FieldDescriptor.camelcase_name`. + """ + if self._fields_by_camelcase_name is None: + self._fields_by_camelcase_name = dict( + (f.camelcase_name, f) for f in self.fields) + return self._fields_by_camelcase_name + + def EnumValueName(self, enum, value): + """Returns the string name of an enum value. + + This is just a small helper method to simplify a common operation. + + Args: + enum: string name of the Enum. + value: int, value of the enum. + + Returns: + string name of the enum value. + + Raises: + KeyError if either the Enum doesn't exist or the value is not a valid + value for the enum. + """ + return self.enum_types_by_name[enum].values_by_number[value].name + + def CopyToProto(self, proto): + """Copies this to a descriptor_pb2.DescriptorProto. + + Args: + proto: An empty descriptor_pb2.DescriptorProto. + """ + # This function is overridden to give a better doc comment. + super(Descriptor, self).CopyToProto(proto) + + +# TODO(robinson): We should have aggressive checking here, +# for example: +# * If you specify a repeated field, you should not be allowed +# to specify a default value. +# * [Other examples here as needed]. +# +# TODO(robinson): for this and other *Descriptor classes, we +# might also want to lock things down aggressively (e.g., +# prevent clients from setting the attributes). Having +# stronger invariants here in general will reduce the number +# of runtime checks we must do in reflection.py... +class FieldDescriptor(DescriptorBase): + + """Descriptor for a single field in a .proto file. + + Attributes: + name (str): Name of this field, exactly as it appears in .proto. + full_name (str): Name of this field, including containing scope. This is + particularly relevant for extensions. + index (int): Dense, 0-indexed index giving the order that this + field textually appears within its message in the .proto file. + number (int): Tag number declared for this field in the .proto file. + + type (int): (One of the TYPE_* constants below) Declared type. + cpp_type (int): (One of the CPPTYPE_* constants below) C++ type used to + represent this field. + + label (int): (One of the LABEL_* constants below) Tells whether this + field is optional, required, or repeated. 
+ has_default_value (bool): True if this field has a default value defined, + otherwise false. + default_value (Varies): Default value of this field. Only + meaningful for non-repeated scalar fields. Repeated fields + should always set this to [], and non-repeated composite + fields should always set this to None. + + containing_type (Descriptor): Descriptor of the protocol message + type that contains this field. Set by the Descriptor constructor + if we're passed into one. + Somewhat confusingly, for extension fields, this is the + descriptor of the EXTENDED message, not the descriptor + of the message containing this field. (See is_extension and + extension_scope below). + message_type (Descriptor): If a composite field, a descriptor + of the message type contained in this field. Otherwise, this is None. + enum_type (EnumDescriptor): If this field contains an enum, a + descriptor of that enum. Otherwise, this is None. + + is_extension: True iff this describes an extension field. + extension_scope (Descriptor): Only meaningful if is_extension is True. + Gives the message that immediately contains this extension field. + Will be None iff we're a top-level (file-level) extension field. + + options (descriptor_pb2.FieldOptions): Protocol message field options or + None to use default field options. + + containing_oneof (OneofDescriptor): If the field is a member of a oneof + union, contains its descriptor. Otherwise, None. + + file (FileDescriptor): Reference to file descriptor. + """ + + # Must be consistent with C++ FieldDescriptor::Type enum in + # descriptor.h. + # + # TODO(robinson): Find a way to eliminate this repetition. + TYPE_DOUBLE = 1 + TYPE_FLOAT = 2 + TYPE_INT64 = 3 + TYPE_UINT64 = 4 + TYPE_INT32 = 5 + TYPE_FIXED64 = 6 + TYPE_FIXED32 = 7 + TYPE_BOOL = 8 + TYPE_STRING = 9 + TYPE_GROUP = 10 + TYPE_MESSAGE = 11 + TYPE_BYTES = 12 + TYPE_UINT32 = 13 + TYPE_ENUM = 14 + TYPE_SFIXED32 = 15 + TYPE_SFIXED64 = 16 + TYPE_SINT32 = 17 + TYPE_SINT64 = 18 + MAX_TYPE = 18 + + # Must be consistent with C++ FieldDescriptor::CppType enum in + # descriptor.h. + # + # TODO(robinson): Find a way to eliminate this repetition. + CPPTYPE_INT32 = 1 + CPPTYPE_INT64 = 2 + CPPTYPE_UINT32 = 3 + CPPTYPE_UINT64 = 4 + CPPTYPE_DOUBLE = 5 + CPPTYPE_FLOAT = 6 + CPPTYPE_BOOL = 7 + CPPTYPE_ENUM = 8 + CPPTYPE_STRING = 9 + CPPTYPE_MESSAGE = 10 + MAX_CPPTYPE = 10 + + _PYTHON_TO_CPP_PROTO_TYPE_MAP = { + TYPE_DOUBLE: CPPTYPE_DOUBLE, + TYPE_FLOAT: CPPTYPE_FLOAT, + TYPE_ENUM: CPPTYPE_ENUM, + TYPE_INT64: CPPTYPE_INT64, + TYPE_SINT64: CPPTYPE_INT64, + TYPE_SFIXED64: CPPTYPE_INT64, + TYPE_UINT64: CPPTYPE_UINT64, + TYPE_FIXED64: CPPTYPE_UINT64, + TYPE_INT32: CPPTYPE_INT32, + TYPE_SFIXED32: CPPTYPE_INT32, + TYPE_SINT32: CPPTYPE_INT32, + TYPE_UINT32: CPPTYPE_UINT32, + TYPE_FIXED32: CPPTYPE_UINT32, + TYPE_BYTES: CPPTYPE_STRING, + TYPE_STRING: CPPTYPE_STRING, + TYPE_BOOL: CPPTYPE_BOOL, + TYPE_MESSAGE: CPPTYPE_MESSAGE, + TYPE_GROUP: CPPTYPE_MESSAGE + } + + # Must be consistent with C++ FieldDescriptor::Label enum in + # descriptor.h. + # + # TODO(robinson): Find a way to eliminate this repetition. 
+ LABEL_OPTIONAL = 1 + LABEL_REQUIRED = 2 + LABEL_REPEATED = 3 + MAX_LABEL = 3 + + # Must be consistent with C++ constants kMaxNumber, kFirstReservedNumber, + # and kLastReservedNumber in descriptor.h + MAX_FIELD_NUMBER = (1 << 29) - 1 + FIRST_RESERVED_FIELD_NUMBER = 19000 + LAST_RESERVED_FIELD_NUMBER = 19999 + + if _USE_C_DESCRIPTORS: + _C_DESCRIPTOR_CLASS = _message.FieldDescriptor + + def __new__(cls, name, full_name, index, number, type, cpp_type, label, + default_value, message_type, enum_type, containing_type, + is_extension, extension_scope, options=None, + serialized_options=None, + has_default_value=True, containing_oneof=None, json_name=None, + file=None, create_key=None): # pylint: disable=redefined-builtin + _message.Message._CheckCalledFromGeneratedFile() + if is_extension: + return _message.default_pool.FindExtensionByName(full_name) + else: + return _message.default_pool.FindFieldByName(full_name) + + def __init__(self, name, full_name, index, number, type, cpp_type, label, + default_value, message_type, enum_type, containing_type, + is_extension, extension_scope, options=None, + serialized_options=None, + has_default_value=True, containing_oneof=None, json_name=None, + file=None, create_key=None): # pylint: disable=redefined-builtin + """The arguments are as described in the description of FieldDescriptor + attributes above. + + Note that containing_type may be None, and may be set later if necessary + (to deal with circular references between message types, for example). + Likewise for extension_scope. + """ + if create_key is not _internal_create_key: + _Deprecated('FieldDescriptor') + + super(FieldDescriptor, self).__init__( + options, serialized_options, 'FieldOptions') + self.name = name + self.full_name = full_name + self.file = file + self._camelcase_name = None + if json_name is None: + self.json_name = _ToJsonName(name) + else: + self.json_name = json_name + self.index = index + self.number = number + self.type = type + self.cpp_type = cpp_type + self.label = label + self.has_default_value = has_default_value + self.default_value = default_value + self.containing_type = containing_type + self.message_type = message_type + self.enum_type = enum_type + self.is_extension = is_extension + self.extension_scope = extension_scope + self.containing_oneof = containing_oneof + if api_implementation.Type() == 'cpp': + if is_extension: + self._cdescriptor = _message.default_pool.FindExtensionByName(full_name) + else: + self._cdescriptor = _message.default_pool.FindFieldByName(full_name) + else: + self._cdescriptor = None + + @property + def camelcase_name(self): + """Camelcase name of this field. + + Returns: + str: the name in CamelCase. + """ + if self._camelcase_name is None: + self._camelcase_name = _ToCamelCase(self.name) + return self._camelcase_name + + @property + def has_presence(self): + """Whether the field distinguishes between unpopulated and default values. + + Raises: + RuntimeError: singular field that is not linked with message nor file. 
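+
+    For example (illustrative): a plain proto3 scalar field has no
+    presence, while a message-typed field or a member of a oneof does.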
+ """ + if self.label == FieldDescriptor.LABEL_REPEATED: + return False + if (self.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE or + self.containing_oneof): + return True + if hasattr(self.file, 'syntax'): + return self.file.syntax == 'proto2' + if hasattr(self.message_type, 'syntax'): + return self.message_type.syntax == 'proto2' + raise RuntimeError( + 'has_presence is not ready to use because field %s is not' + ' linked with message type nor file' % self.full_name) + + @staticmethod + def ProtoTypeToCppProtoType(proto_type): + """Converts from a Python proto type to a C++ Proto Type. + + The Python ProtocolBuffer classes specify both the 'Python' datatype and the + 'C++' datatype - and they're not the same. This helper method should + translate from one to another. + + Args: + proto_type: the Python proto type (descriptor.FieldDescriptor.TYPE_*) + Returns: + int: descriptor.FieldDescriptor.CPPTYPE_*, the C++ type. + Raises: + TypeTransformationError: when the Python proto type isn't known. + """ + try: + return FieldDescriptor._PYTHON_TO_CPP_PROTO_TYPE_MAP[proto_type] + except KeyError: + raise TypeTransformationError('Unknown proto_type: %s' % proto_type) + + +class EnumDescriptor(_NestedDescriptorBase): + + """Descriptor for an enum defined in a .proto file. + + Attributes: + name (str): Name of the enum type. + full_name (str): Full name of the type, including package name + and any enclosing type(s). + + values (list[EnumValueDescriptor]): List of the values + in this enum. + values_by_name (dict(str, EnumValueDescriptor)): Same as :attr:`values`, + but indexed by the "name" field of each EnumValueDescriptor. + values_by_number (dict(int, EnumValueDescriptor)): Same as :attr:`values`, + but indexed by the "number" field of each EnumValueDescriptor. + containing_type (Descriptor): Descriptor of the immediate containing + type of this enum, or None if this is an enum defined at the + top level in a .proto file. Set by Descriptor's constructor + if we're passed into one. + file (FileDescriptor): Reference to file descriptor. + options (descriptor_pb2.EnumOptions): Enum options message or + None to use default enum options. + """ + + if _USE_C_DESCRIPTORS: + _C_DESCRIPTOR_CLASS = _message.EnumDescriptor + + def __new__(cls, name, full_name, filename, values, + containing_type=None, options=None, + serialized_options=None, file=None, # pylint: disable=redefined-builtin + serialized_start=None, serialized_end=None, create_key=None): + _message.Message._CheckCalledFromGeneratedFile() + return _message.default_pool.FindEnumTypeByName(full_name) + + def __init__(self, name, full_name, filename, values, + containing_type=None, options=None, + serialized_options=None, file=None, # pylint: disable=redefined-builtin + serialized_start=None, serialized_end=None, create_key=None): + """Arguments are as described in the attribute description above. + + Note that filename is an obsolete argument, that is not used anymore. + Please use file.name to access this as an attribute. + """ + if create_key is not _internal_create_key: + _Deprecated('EnumDescriptor') + + super(EnumDescriptor, self).__init__( + options, 'EnumOptions', name, full_name, file, + containing_type, serialized_start=serialized_start, + serialized_end=serialized_end, serialized_options=serialized_options) + + self.values = values + for value in self.values: + value.type = self + self.values_by_name = dict((v.name, v) for v in values) + # Values are reversed to ensure that the first alias is retained. 
+    self.values_by_number = dict((v.number, v) for v in reversed(values))
+
+  def CopyToProto(self, proto):
+    """Copies this to a descriptor_pb2.EnumDescriptorProto.
+
+    Args:
+      proto (descriptor_pb2.EnumDescriptorProto): An empty descriptor proto.
+    """
+    # This function is overridden to give a better doc comment.
+    super(EnumDescriptor, self).CopyToProto(proto)
+
+
+class EnumValueDescriptor(DescriptorBase):
+
+  """Descriptor for a single value within an enum.
+
+  Attributes:
+    name (str): Name of this value.
+    index (int): Dense, 0-indexed index giving the order that this
+      value appears textually within its enum in the .proto file.
+    number (int): Actual number assigned to this enum value.
+    type (EnumDescriptor): :class:`EnumDescriptor` to which this value
+      belongs. Set by :class:`EnumDescriptor`'s constructor if we're
+      passed into one.
+    options (descriptor_pb2.EnumValueOptions): Enum value options message or
+      None to use default enum value options.
+  """
+
+  if _USE_C_DESCRIPTORS:
+    _C_DESCRIPTOR_CLASS = _message.EnumValueDescriptor
+
+  def __new__(cls, name, index, number,
+              type=None,  # pylint: disable=redefined-builtin
+              options=None, serialized_options=None, create_key=None):
+    _message.Message._CheckCalledFromGeneratedFile()
+    # There is no way we can build a complete EnumValueDescriptor with the
+    # given parameters (the name of the Enum is not known, for example).
+    # Fortunately generated files just pass it to the EnumDescriptor()
+    # constructor, which will ignore it, so returning None is good enough.
+    return None
+
+  def __init__(self, name, index, number,
+               type=None,  # pylint: disable=redefined-builtin
+               options=None, serialized_options=None, create_key=None):
+    """Arguments are as described in the attribute description above."""
+    if create_key is not _internal_create_key:
+      _Deprecated('EnumValueDescriptor')
+
+    super(EnumValueDescriptor, self).__init__(
+        options, serialized_options, 'EnumValueOptions')
+    self.name = name
+    self.index = index
+    self.number = number
+    self.type = type
+
+
+class OneofDescriptor(DescriptorBase):
+  """Descriptor for a oneof field.
+
+  Attributes:
+    name (str): Name of the oneof field.
+    full_name (str): Full name of the oneof field, including package name.
+    index (int): 0-based index giving the order of the oneof field inside
+      its containing type.
+    containing_type (Descriptor): :class:`Descriptor` of the protocol message
+      type that contains this field. Set by the :class:`Descriptor` constructor
+      if we're passed into one.
+    fields (list[FieldDescriptor]): The list of field descriptors this
+      oneof can contain.
+  """
+
+  if _USE_C_DESCRIPTORS:
+    _C_DESCRIPTOR_CLASS = _message.OneofDescriptor
+
+  def __new__(
+      cls, name, full_name, index, containing_type, fields, options=None,
+      serialized_options=None, create_key=None):
+    _message.Message._CheckCalledFromGeneratedFile()
+    return _message.default_pool.FindOneofByName(full_name)
+
+  def __init__(
+      self, name, full_name, index, containing_type, fields, options=None,
+      serialized_options=None, create_key=None):
+    """Arguments are as described in the attribute description above."""
+    if create_key is not _internal_create_key:
+      _Deprecated('OneofDescriptor')
+
+    super(OneofDescriptor, self).__init__(
+        options, serialized_options, 'OneofOptions')
+    self.name = name
+    self.full_name = full_name
+    self.index = index
+    self.containing_type = containing_type
+    self.fields = fields
+
+
+class ServiceDescriptor(_NestedDescriptorBase):
+
+  """Descriptor for a service.
+
+  Attributes:
+    name (str): Name of the service.
+    full_name (str): Full name of the service, including package name.
+    index (int): 0-indexed index giving the order in which this service's
+      definition appears within the .proto file.
+    methods (list[MethodDescriptor]): List of methods provided by this
+      service.
+    methods_by_name (dict(str, MethodDescriptor)): Same
+      :class:`MethodDescriptor` objects as in :attr:`methods`, but
+      indexed by "name" attribute in each :class:`MethodDescriptor`.
+    options (descriptor_pb2.ServiceOptions): Service options message or
+      None to use default service options.
+    file (FileDescriptor): Reference to file info.
+  """
+
+  if _USE_C_DESCRIPTORS:
+    _C_DESCRIPTOR_CLASS = _message.ServiceDescriptor
+
+  def __new__(
+      cls,
+      name=None,
+      full_name=None,
+      index=None,
+      methods=None,
+      options=None,
+      serialized_options=None,
+      file=None,  # pylint: disable=redefined-builtin
+      serialized_start=None,
+      serialized_end=None,
+      create_key=None):
+    _message.Message._CheckCalledFromGeneratedFile()  # pylint: disable=protected-access
+    return _message.default_pool.FindServiceByName(full_name)
+
+  def __init__(self, name, full_name, index, methods, options=None,
+               serialized_options=None, file=None,  # pylint: disable=redefined-builtin
+               serialized_start=None, serialized_end=None, create_key=None):
+    if create_key is not _internal_create_key:
+      _Deprecated('ServiceDescriptor')
+
+    super(ServiceDescriptor, self).__init__(
+        options, 'ServiceOptions', name, full_name, file,
+        None, serialized_start=serialized_start,
+        serialized_end=serialized_end, serialized_options=serialized_options)
+    self.index = index
+    self.methods = methods
+    self.methods_by_name = dict((m.name, m) for m in methods)
+    # Set the containing service for each method in this service.
+    for method in self.methods:
+      method.containing_service = self
+
+  def FindMethodByName(self, name):
+    """Searches for the specified method, and returns its descriptor.
+
+    Args:
+      name (str): Name of the method.
+    Returns:
+      MethodDescriptor or None: the descriptor for the requested method, if
+      found.
+    """
+    return self.methods_by_name.get(name, None)
+
+  def CopyToProto(self, proto):
+    """Copies this to a descriptor_pb2.ServiceDescriptorProto.
+
+    Args:
+      proto (descriptor_pb2.ServiceDescriptorProto): An empty descriptor proto.
+    """
+    # This function is overridden to give a better doc comment.
+    super(ServiceDescriptor, self).CopyToProto(proto)
+
+
+class MethodDescriptor(DescriptorBase):
+
+  """Descriptor for a method in a service.
+
+  Attributes:
+    name (str): Name of the method within the service.
+    full_name (str): Full name of method.
+    index (int): 0-indexed index of the method inside the service.
+    containing_service (ServiceDescriptor): The service that contains this
+      method.
+    input_type (Descriptor): The descriptor of the message that this method
+      accepts.
+    output_type (Descriptor): The descriptor of the message that this method
+      returns.
+    client_streaming (bool): Whether this method uses client streaming.
+    server_streaming (bool): Whether this method uses server streaming.
+    options (descriptor_pb2.MethodOptions or None): Method options message, or
+      None to use default method options.
+ """ + + if _USE_C_DESCRIPTORS: + _C_DESCRIPTOR_CLASS = _message.MethodDescriptor + + def __new__(cls, + name, + full_name, + index, + containing_service, + input_type, + output_type, + client_streaming=False, + server_streaming=False, + options=None, + serialized_options=None, + create_key=None): + _message.Message._CheckCalledFromGeneratedFile() # pylint: disable=protected-access + return _message.default_pool.FindMethodByName(full_name) + + def __init__(self, + name, + full_name, + index, + containing_service, + input_type, + output_type, + client_streaming=False, + server_streaming=False, + options=None, + serialized_options=None, + create_key=None): + """The arguments are as described in the description of MethodDescriptor + attributes above. + + Note that containing_service may be None, and may be set later if necessary. + """ + if create_key is not _internal_create_key: + _Deprecated('MethodDescriptor') + + super(MethodDescriptor, self).__init__( + options, serialized_options, 'MethodOptions') + self.name = name + self.full_name = full_name + self.index = index + self.containing_service = containing_service + self.input_type = input_type + self.output_type = output_type + self.client_streaming = client_streaming + self.server_streaming = server_streaming + + def CopyToProto(self, proto): + """Copies this to a descriptor_pb2.MethodDescriptorProto. + + Args: + proto (descriptor_pb2.MethodDescriptorProto): An empty descriptor proto. + + Raises: + Error: If self couldn't be serialized, due to too few constructor + arguments. + """ + if self.containing_service is not None: + from google.protobuf import descriptor_pb2 + service_proto = descriptor_pb2.ServiceDescriptorProto() + self.containing_service.CopyToProto(service_proto) + proto.CopyFrom(service_proto.method[self.index]) + else: + raise Error('Descriptor does not contain a service.') + + +class FileDescriptor(DescriptorBase): + """Descriptor for a file. Mimics the descriptor_pb2.FileDescriptorProto. + + Note that :attr:`enum_types_by_name`, :attr:`extensions_by_name`, and + :attr:`dependencies` fields are only set by the + :py:mod:`google.protobuf.message_factory` module, and not by the generated + proto code. + + Attributes: + name (str): Name of file, relative to root of source tree. + package (str): Name of the package + syntax (str): string indicating syntax of the file (can be "proto2" or + "proto3") + serialized_pb (bytes): Byte string of serialized + :class:`descriptor_pb2.FileDescriptorProto`. + dependencies (list[FileDescriptor]): List of other :class:`FileDescriptor` + objects this :class:`FileDescriptor` depends on. + public_dependencies (list[FileDescriptor]): A subset of + :attr:`dependencies`, which were declared as "public". + message_types_by_name (dict(str, Descriptor)): Mapping from message names + to their :class:`Descriptor`. + enum_types_by_name (dict(str, EnumDescriptor)): Mapping from enum names to + their :class:`EnumDescriptor`. + extensions_by_name (dict(str, FieldDescriptor)): Mapping from extension + names declared at file scope to their :class:`FieldDescriptor`. + services_by_name (dict(str, ServiceDescriptor)): Mapping from services' + names to their :class:`ServiceDescriptor`. + pool (DescriptorPool): The pool this descriptor belongs to. When not + passed to the constructor, the global default pool is used. 
+ """ + + if _USE_C_DESCRIPTORS: + _C_DESCRIPTOR_CLASS = _message.FileDescriptor + + def __new__(cls, name, package, options=None, + serialized_options=None, serialized_pb=None, + dependencies=None, public_dependencies=None, + syntax=None, pool=None, create_key=None): + # FileDescriptor() is called from various places, not only from generated + # files, to register dynamic proto files and messages. + # pylint: disable=g-explicit-bool-comparison + if serialized_pb == b'': + # Cpp generated code must be linked in if serialized_pb is '' + try: + return _message.default_pool.FindFileByName(name) + except KeyError: + raise RuntimeError('Please link in cpp generated lib for %s' % (name)) + elif serialized_pb: + return _message.default_pool.AddSerializedFile(serialized_pb) + else: + return super(FileDescriptor, cls).__new__(cls) + + def __init__(self, name, package, options=None, + serialized_options=None, serialized_pb=None, + dependencies=None, public_dependencies=None, + syntax=None, pool=None, create_key=None): + """Constructor.""" + if create_key is not _internal_create_key: + _Deprecated('FileDescriptor') + + super(FileDescriptor, self).__init__( + options, serialized_options, 'FileOptions') + + if pool is None: + from google.protobuf import descriptor_pool + pool = descriptor_pool.Default() + self.pool = pool + self.message_types_by_name = {} + self.name = name + self.package = package + self.syntax = syntax or "proto2" + self.serialized_pb = serialized_pb + + self.enum_types_by_name = {} + self.extensions_by_name = {} + self.services_by_name = {} + self.dependencies = (dependencies or []) + self.public_dependencies = (public_dependencies or []) + + def CopyToProto(self, proto): + """Copies this to a descriptor_pb2.FileDescriptorProto. + + Args: + proto: An empty descriptor_pb2.FileDescriptorProto. + """ + proto.ParseFromString(self.serialized_pb) + + +def _ParseOptions(message, string): + """Parses serialized options. + + This helper function is used to parse serialized options in generated + proto2 files. It must not be used outside proto2. + """ + message.ParseFromString(string) + return message + + +def _ToCamelCase(name): + """Converts name to camel-case and returns it.""" + capitalize_next = False + result = [] + + for c in name: + if c == '_': + if result: + capitalize_next = True + elif capitalize_next: + result.append(c.upper()) + capitalize_next = False + else: + result += c + + # Lower-case the first letter. + if result and result[0].isupper(): + result[0] = result[0].lower() + return ''.join(result) + + +def _OptionsOrNone(descriptor_proto): + """Returns the value of the field `options`, or None if it is not set.""" + if descriptor_proto.HasField('options'): + return descriptor_proto.options + else: + return None + + +def _ToJsonName(name): + """Converts name to Json name and returns it.""" + capitalize_next = False + result = [] + + for c in name: + if c == '_': + capitalize_next = True + elif capitalize_next: + result.append(c.upper()) + capitalize_next = False + else: + result += c + + return ''.join(result) + + +def MakeDescriptor(desc_proto, package='', build_file_if_cpp=True, + syntax=None): + """Make a protobuf Descriptor given a DescriptorProto protobuf. + + Handles nested descriptors. Note that this is limited to the scope of defining + a message inside of another message. Composite fields can currently only be + resolved if the message is defined in the same scope as the field. + + Args: + desc_proto: The descriptor_pb2.DescriptorProto protobuf message. 
+ package: Optional package name for the new message Descriptor (string). + build_file_if_cpp: Update the C++ descriptor pool if api matches. + Set to False on recursion, so no duplicates are created. + syntax: The syntax/semantics that should be used. Set to "proto3" to get + proto3 field presence semantics. + Returns: + A Descriptor for protobuf messages. + """ + if api_implementation.Type() == 'cpp' and build_file_if_cpp: + # The C++ implementation requires all descriptors to be backed by the same + # definition in the C++ descriptor pool. To do this, we build a + # FileDescriptorProto with the same definition as this descriptor and build + # it into the pool. + from google.protobuf import descriptor_pb2 + file_descriptor_proto = descriptor_pb2.FileDescriptorProto() + file_descriptor_proto.message_type.add().MergeFrom(desc_proto) + + # Generate a random name for this proto file to prevent conflicts with any + # imported ones. We need to specify a file name so the descriptor pool + # accepts our FileDescriptorProto, but it is not important what that file + # name is actually set to. + proto_name = binascii.hexlify(os.urandom(16)).decode('ascii') + + if package: + file_descriptor_proto.name = os.path.join(package.replace('.', '/'), + proto_name + '.proto') + file_descriptor_proto.package = package + else: + file_descriptor_proto.name = proto_name + '.proto' + + _message.default_pool.Add(file_descriptor_proto) + result = _message.default_pool.FindFileByName(file_descriptor_proto.name) + + if _USE_C_DESCRIPTORS: + return result.message_types_by_name[desc_proto.name] + + full_message_name = [desc_proto.name] + if package: full_message_name.insert(0, package) + + # Create Descriptors for enum types + enum_types = {} + for enum_proto in desc_proto.enum_type: + full_name = '.'.join(full_message_name + [enum_proto.name]) + enum_desc = EnumDescriptor( + enum_proto.name, full_name, None, [ + EnumValueDescriptor(enum_val.name, ii, enum_val.number, + create_key=_internal_create_key) + for ii, enum_val in enumerate(enum_proto.value)], + create_key=_internal_create_key) + enum_types[full_name] = enum_desc + + # Create Descriptors for nested types + nested_types = {} + for nested_proto in desc_proto.nested_type: + full_name = '.'.join(full_message_name + [nested_proto.name]) + # Nested types are just those defined inside of the message, not all types + # used by fields in the message, so no loops are possible here. 
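+    # (Illustrative: for `message Outer { message Inner {} Inner inner = 1; }`
+    # this recursion builds the descriptor for 'pkg.Outer.Inner'; the field
+    # loop below then resolves the `inner` field against `nested_types` by
+    # that full name.)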
+ nested_desc = MakeDescriptor(nested_proto, + package='.'.join(full_message_name), + build_file_if_cpp=False, + syntax=syntax) + nested_types[full_name] = nested_desc + + fields = [] + for field_proto in desc_proto.field: + full_name = '.'.join(full_message_name + [field_proto.name]) + enum_desc = None + nested_desc = None + if field_proto.json_name: + json_name = field_proto.json_name + else: + json_name = None + if field_proto.HasField('type_name'): + type_name = field_proto.type_name + full_type_name = '.'.join(full_message_name + + [type_name[type_name.rfind('.')+1:]]) + if full_type_name in nested_types: + nested_desc = nested_types[full_type_name] + elif full_type_name in enum_types: + enum_desc = enum_types[full_type_name] + # Else type_name references a non-local type, which isn't implemented + field = FieldDescriptor( + field_proto.name, full_name, field_proto.number - 1, + field_proto.number, field_proto.type, + FieldDescriptor.ProtoTypeToCppProtoType(field_proto.type), + field_proto.label, None, nested_desc, enum_desc, None, False, None, + options=_OptionsOrNone(field_proto), has_default_value=False, + json_name=json_name, create_key=_internal_create_key) + fields.append(field) + + desc_name = '.'.join(full_message_name) + return Descriptor(desc_proto.name, desc_name, None, None, fields, + list(nested_types.values()), list(enum_types.values()), [], + options=_OptionsOrNone(desc_proto), + create_key=_internal_create_key) diff --git a/env/lib/python3.8/site-packages/google/protobuf/descriptor_database.py b/env/lib/python3.8/site-packages/google/protobuf/descriptor_database.py new file mode 100644 index 00000000..073eddc7 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/descriptor_database.py @@ -0,0 +1,177 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
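+
+# Illustrative usage, assuming a hypothetical `file_proto` -- a
+# FileDescriptorProto named 'my/file.proto' that declares message
+# 'my.pkg.Thing':
+#
+#   db = DescriptorDatabase()
+#   db.Add(file_proto)
+#   db.FindFileByName('my/file.proto')           # -> file_proto
+#   db.FindFileContainingSymbol('my.pkg.Thing')  # -> file_proto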
+ +"""Provides a container for DescriptorProtos.""" + +__author__ = 'matthewtoia@google.com (Matt Toia)' + +import warnings + + +class Error(Exception): + pass + + +class DescriptorDatabaseConflictingDefinitionError(Error): + """Raised when a proto is added with the same name & different descriptor.""" + + +class DescriptorDatabase(object): + """A container accepting FileDescriptorProtos and maps DescriptorProtos.""" + + def __init__(self): + self._file_desc_protos_by_file = {} + self._file_desc_protos_by_symbol = {} + + def Add(self, file_desc_proto): + """Adds the FileDescriptorProto and its types to this database. + + Args: + file_desc_proto: The FileDescriptorProto to add. + Raises: + DescriptorDatabaseConflictingDefinitionError: if an attempt is made to + add a proto with the same name but different definition than an + existing proto in the database. + """ + proto_name = file_desc_proto.name + if proto_name not in self._file_desc_protos_by_file: + self._file_desc_protos_by_file[proto_name] = file_desc_proto + elif self._file_desc_protos_by_file[proto_name] != file_desc_proto: + raise DescriptorDatabaseConflictingDefinitionError( + '%s already added, but with different descriptor.' % proto_name) + else: + return + + # Add all the top-level descriptors to the index. + package = file_desc_proto.package + for message in file_desc_proto.message_type: + for name in _ExtractSymbols(message, package): + self._AddSymbol(name, file_desc_proto) + for enum in file_desc_proto.enum_type: + self._AddSymbol(('.'.join((package, enum.name))), file_desc_proto) + for enum_value in enum.value: + self._file_desc_protos_by_symbol[ + '.'.join((package, enum_value.name))] = file_desc_proto + for extension in file_desc_proto.extension: + self._AddSymbol(('.'.join((package, extension.name))), file_desc_proto) + for service in file_desc_proto.service: + self._AddSymbol(('.'.join((package, service.name))), file_desc_proto) + + def FindFileByName(self, name): + """Finds the file descriptor proto by file name. + + Typically the file name is a relative path ending to a .proto file. The + proto with the given name will have to have been added to this database + using the Add method or else an error will be raised. + + Args: + name: The file name to find. + + Returns: + The file descriptor proto matching the name. + + Raises: + KeyError if no file by the given name was added. + """ + + return self._file_desc_protos_by_file[name] + + def FindFileContainingSymbol(self, symbol): + """Finds the file descriptor proto containing the specified symbol. + + The symbol should be a fully qualified name including the file descriptor's + package and any containing messages. Some examples: + + 'some.package.name.Message' + 'some.package.name.Message.NestedEnum' + 'some.package.name.Message.some_field' + + The file descriptor proto containing the specified symbol must be added to + this database using the Add method or else an error will be raised. + + Args: + symbol: The fully qualified symbol name. + + Returns: + The file descriptor proto containing the symbol. + + Raises: + KeyError if no file contains the specified symbol. + """ + try: + return self._file_desc_protos_by_symbol[symbol] + except KeyError: + # Fields, enum values, and nested extensions are not in + # _file_desc_protos_by_symbol. Try to find the top level + # descriptor. Non-existent nested symbol under a valid top level + # descriptor can also be found. The behavior is the same with + # protobuf C++. 
+      top_level, _, _ = symbol.rpartition('.')
+      try:
+        return self._file_desc_protos_by_symbol[top_level]
+      except KeyError:
+        # Raise the original symbol as a KeyError for better diagnostics.
+        raise KeyError(symbol)
+
+  def FindFileContainingExtension(self, extendee_name, extension_number):
+    # TODO(jieluo): implement this API.
+    return None
+
+  def FindAllExtensionNumbers(self, extendee_name):
+    # TODO(jieluo): implement this API.
+    return []
+
+  def _AddSymbol(self, name, file_desc_proto):
+    if name in self._file_desc_protos_by_symbol:
+      warn_msg = ('Conflicting symbol registration for file "' +
+                  file_desc_proto.name + '": ' + name +
+                  ' is already defined in file "' +
+                  self._file_desc_protos_by_symbol[name].name + '"')
+      warnings.warn(warn_msg, RuntimeWarning)
+    self._file_desc_protos_by_symbol[name] = file_desc_proto
+
+
+def _ExtractSymbols(desc_proto, package):
+  """Pulls out all the symbols from a descriptor proto.
+
+  Args:
+    desc_proto: The proto to extract symbols from.
+    package: The package containing the descriptor type.
+
+  Yields:
+    The fully qualified names found in the descriptor.
+  """
+  message_name = package + '.' + desc_proto.name if package else desc_proto.name
+  yield message_name
+  for nested_type in desc_proto.nested_type:
+    for symbol in _ExtractSymbols(nested_type, message_name):
+      yield symbol
+  for enum_type in desc_proto.enum_type:
+    yield '.'.join((message_name, enum_type.name))
diff --git a/env/lib/python3.8/site-packages/google/protobuf/descriptor_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/descriptor_pb2.py
new file mode 100644
index 00000000..f5703864
--- /dev/null
+++ b/env/lib/python3.8/site-packages/google/protobuf/descriptor_pb2.py
@@ -0,0 +1,1925 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: google/protobuf/descriptor.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +if _descriptor._USE_C_DESCRIPTORS == False: + DESCRIPTOR = _descriptor.FileDescriptor( + name='google/protobuf/descriptor.proto', + package='google.protobuf', + syntax='proto2', + serialized_options=None, + create_key=_descriptor._internal_create_key, + serialized_pb=b'\n google/protobuf/descriptor.proto\x12\x0fgoogle.protobuf\"G\n\x11\x46ileDescriptorSet\x12\x32\n\x04\x66ile\x18\x01 \x03(\x0b\x32$.google.protobuf.FileDescriptorProto\"\xdb\x03\n\x13\x46ileDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07package\x18\x02 \x01(\t\x12\x12\n\ndependency\x18\x03 \x03(\t\x12\x19\n\x11public_dependency\x18\n \x03(\x05\x12\x17\n\x0fweak_dependency\x18\x0b \x03(\x05\x12\x36\n\x0cmessage_type\x18\x04 \x03(\x0b\x32 .google.protobuf.DescriptorProto\x12\x37\n\tenum_type\x18\x05 \x03(\x0b\x32$.google.protobuf.EnumDescriptorProto\x12\x38\n\x07service\x18\x06 \x03(\x0b\x32\'.google.protobuf.ServiceDescriptorProto\x12\x38\n\textension\x18\x07 \x03(\x0b\x32%.google.protobuf.FieldDescriptorProto\x12-\n\x07options\x18\x08 \x01(\x0b\x32\x1c.google.protobuf.FileOptions\x12\x39\n\x10source_code_info\x18\t \x01(\x0b\x32\x1f.google.protobuf.SourceCodeInfo\x12\x0e\n\x06syntax\x18\x0c \x01(\t\"\xa9\x05\n\x0f\x44\x65scriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x34\n\x05\x66ield\x18\x02 \x03(\x0b\x32%.google.protobuf.FieldDescriptorProto\x12\x38\n\textension\x18\x06 \x03(\x0b\x32%.google.protobuf.FieldDescriptorProto\x12\x35\n\x0bnested_type\x18\x03 \x03(\x0b\x32 .google.protobuf.DescriptorProto\x12\x37\n\tenum_type\x18\x04 \x03(\x0b\x32$.google.protobuf.EnumDescriptorProto\x12H\n\x0f\x65xtension_range\x18\x05 \x03(\x0b\x32/.google.protobuf.DescriptorProto.ExtensionRange\x12\x39\n\noneof_decl\x18\x08 \x03(\x0b\x32%.google.protobuf.OneofDescriptorProto\x12\x30\n\x07options\x18\x07 \x01(\x0b\x32\x1f.google.protobuf.MessageOptions\x12\x46\n\x0ereserved_range\x18\t \x03(\x0b\x32..google.protobuf.DescriptorProto.ReservedRange\x12\x15\n\rreserved_name\x18\n \x03(\t\x1a\x65\n\x0e\x45xtensionRange\x12\r\n\x05start\x18\x01 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x05\x12\x37\n\x07options\x18\x03 \x01(\x0b\x32&.google.protobuf.ExtensionRangeOptions\x1a+\n\rReservedRange\x12\r\n\x05start\x18\x01 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x05\"g\n\x15\x45xtensionRangeOptions\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\xd5\x05\n\x14\x46ieldDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0e\n\x06number\x18\x03 \x01(\x05\x12:\n\x05label\x18\x04 \x01(\x0e\x32+.google.protobuf.FieldDescriptorProto.Label\x12\x38\n\x04type\x18\x05 \x01(\x0e\x32*.google.protobuf.FieldDescriptorProto.Type\x12\x11\n\ttype_name\x18\x06 \x01(\t\x12\x10\n\x08\x65xtendee\x18\x02 \x01(\t\x12\x15\n\rdefault_value\x18\x07 \x01(\t\x12\x13\n\x0boneof_index\x18\t \x01(\x05\x12\x11\n\tjson_name\x18\n \x01(\t\x12.\n\x07options\x18\x08 \x01(\x0b\x32\x1d.google.protobuf.FieldOptions\x12\x17\n\x0fproto3_optional\x18\x11 
\x01(\x08\"\xb6\x02\n\x04Type\x12\x0f\n\x0bTYPE_DOUBLE\x10\x01\x12\x0e\n\nTYPE_FLOAT\x10\x02\x12\x0e\n\nTYPE_INT64\x10\x03\x12\x0f\n\x0bTYPE_UINT64\x10\x04\x12\x0e\n\nTYPE_INT32\x10\x05\x12\x10\n\x0cTYPE_FIXED64\x10\x06\x12\x10\n\x0cTYPE_FIXED32\x10\x07\x12\r\n\tTYPE_BOOL\x10\x08\x12\x0f\n\x0bTYPE_STRING\x10\t\x12\x0e\n\nTYPE_GROUP\x10\n\x12\x10\n\x0cTYPE_MESSAGE\x10\x0b\x12\x0e\n\nTYPE_BYTES\x10\x0c\x12\x0f\n\x0bTYPE_UINT32\x10\r\x12\r\n\tTYPE_ENUM\x10\x0e\x12\x11\n\rTYPE_SFIXED32\x10\x0f\x12\x11\n\rTYPE_SFIXED64\x10\x10\x12\x0f\n\x0bTYPE_SINT32\x10\x11\x12\x0f\n\x0bTYPE_SINT64\x10\x12\"C\n\x05Label\x12\x12\n\x0eLABEL_OPTIONAL\x10\x01\x12\x12\n\x0eLABEL_REQUIRED\x10\x02\x12\x12\n\x0eLABEL_REPEATED\x10\x03\"T\n\x14OneofDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12.\n\x07options\x18\x02 \x01(\x0b\x32\x1d.google.protobuf.OneofOptions\"\xa4\x02\n\x13\x45numDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x38\n\x05value\x18\x02 \x03(\x0b\x32).google.protobuf.EnumValueDescriptorProto\x12-\n\x07options\x18\x03 \x01(\x0b\x32\x1c.google.protobuf.EnumOptions\x12N\n\x0ereserved_range\x18\x04 \x03(\x0b\x32\x36.google.protobuf.EnumDescriptorProto.EnumReservedRange\x12\x15\n\rreserved_name\x18\x05 \x03(\t\x1a/\n\x11\x45numReservedRange\x12\r\n\x05start\x18\x01 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x05\"l\n\x18\x45numValueDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0e\n\x06number\x18\x02 \x01(\x05\x12\x32\n\x07options\x18\x03 \x01(\x0b\x32!.google.protobuf.EnumValueOptions\"\x90\x01\n\x16ServiceDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x36\n\x06method\x18\x02 \x03(\x0b\x32&.google.protobuf.MethodDescriptorProto\x12\x30\n\x07options\x18\x03 \x01(\x0b\x32\x1f.google.protobuf.ServiceOptions\"\xc1\x01\n\x15MethodDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x12\n\ninput_type\x18\x02 \x01(\t\x12\x13\n\x0boutput_type\x18\x03 \x01(\t\x12/\n\x07options\x18\x04 \x01(\x0b\x32\x1e.google.protobuf.MethodOptions\x12\x1f\n\x10\x63lient_streaming\x18\x05 \x01(\x08:\x05\x66\x61lse\x12\x1f\n\x10server_streaming\x18\x06 \x01(\x08:\x05\x66\x61lse\"\xa5\x06\n\x0b\x46ileOptions\x12\x14\n\x0cjava_package\x18\x01 \x01(\t\x12\x1c\n\x14java_outer_classname\x18\x08 \x01(\t\x12\"\n\x13java_multiple_files\x18\n \x01(\x08:\x05\x66\x61lse\x12)\n\x1djava_generate_equals_and_hash\x18\x14 \x01(\x08\x42\x02\x18\x01\x12%\n\x16java_string_check_utf8\x18\x1b \x01(\x08:\x05\x66\x61lse\x12\x46\n\x0coptimize_for\x18\t \x01(\x0e\x32).google.protobuf.FileOptions.OptimizeMode:\x05SPEED\x12\x12\n\ngo_package\x18\x0b \x01(\t\x12\"\n\x13\x63\x63_generic_services\x18\x10 \x01(\x08:\x05\x66\x61lse\x12$\n\x15java_generic_services\x18\x11 \x01(\x08:\x05\x66\x61lse\x12\"\n\x13py_generic_services\x18\x12 \x01(\x08:\x05\x66\x61lse\x12#\n\x14php_generic_services\x18* \x01(\x08:\x05\x66\x61lse\x12\x19\n\ndeprecated\x18\x17 \x01(\x08:\x05\x66\x61lse\x12\x1e\n\x10\x63\x63_enable_arenas\x18\x1f \x01(\x08:\x04true\x12\x19\n\x11objc_class_prefix\x18$ \x01(\t\x12\x18\n\x10\x63sharp_namespace\x18% \x01(\t\x12\x14\n\x0cswift_prefix\x18\' \x01(\t\x12\x18\n\x10php_class_prefix\x18( \x01(\t\x12\x15\n\rphp_namespace\x18) \x01(\t\x12\x1e\n\x16php_metadata_namespace\x18, \x01(\t\x12\x14\n\x0cruby_package\x18- \x01(\t\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 
\x03(\x0b\x32$.google.protobuf.UninterpretedOption\":\n\x0cOptimizeMode\x12\t\n\x05SPEED\x10\x01\x12\r\n\tCODE_SIZE\x10\x02\x12\x10\n\x0cLITE_RUNTIME\x10\x03*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08&\x10\'\"\x84\x02\n\x0eMessageOptions\x12&\n\x17message_set_wire_format\x18\x01 \x01(\x08:\x05\x66\x61lse\x12.\n\x1fno_standard_descriptor_accessor\x18\x02 \x01(\x08:\x05\x66\x61lse\x12\x19\n\ndeprecated\x18\x03 \x01(\x08:\x05\x66\x61lse\x12\x11\n\tmap_entry\x18\x07 \x01(\x08\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08\x04\x10\x05J\x04\x08\x05\x10\x06J\x04\x08\x06\x10\x07J\x04\x08\x08\x10\tJ\x04\x08\t\x10\n\"\xbe\x03\n\x0c\x46ieldOptions\x12:\n\x05\x63type\x18\x01 \x01(\x0e\x32#.google.protobuf.FieldOptions.CType:\x06STRING\x12\x0e\n\x06packed\x18\x02 \x01(\x08\x12?\n\x06jstype\x18\x06 \x01(\x0e\x32$.google.protobuf.FieldOptions.JSType:\tJS_NORMAL\x12\x13\n\x04lazy\x18\x05 \x01(\x08:\x05\x66\x61lse\x12\x1e\n\x0funverified_lazy\x18\x0f \x01(\x08:\x05\x66\x61lse\x12\x19\n\ndeprecated\x18\x03 \x01(\x08:\x05\x66\x61lse\x12\x13\n\x04weak\x18\n \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption\"/\n\x05\x43Type\x12\n\n\x06STRING\x10\x00\x12\x08\n\x04\x43ORD\x10\x01\x12\x10\n\x0cSTRING_PIECE\x10\x02\"5\n\x06JSType\x12\r\n\tJS_NORMAL\x10\x00\x12\r\n\tJS_STRING\x10\x01\x12\r\n\tJS_NUMBER\x10\x02*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08\x04\x10\x05\"^\n\x0cOneofOptions\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\x93\x01\n\x0b\x45numOptions\x12\x13\n\x0b\x61llow_alias\x18\x02 \x01(\x08\x12\x19\n\ndeprecated\x18\x03 \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08\x05\x10\x06\"}\n\x10\x45numValueOptions\x12\x19\n\ndeprecated\x18\x01 \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"{\n\x0eServiceOptions\x12\x19\n\ndeprecated\x18! \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\xad\x02\n\rMethodOptions\x12\x19\n\ndeprecated\x18! 
\x01(\x08:\x05\x66\x61lse\x12_\n\x11idempotency_level\x18\" \x01(\x0e\x32/.google.protobuf.MethodOptions.IdempotencyLevel:\x13IDEMPOTENCY_UNKNOWN\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption\"P\n\x10IdempotencyLevel\x12\x17\n\x13IDEMPOTENCY_UNKNOWN\x10\x00\x12\x13\n\x0fNO_SIDE_EFFECTS\x10\x01\x12\x0e\n\nIDEMPOTENT\x10\x02*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\x9e\x02\n\x13UninterpretedOption\x12;\n\x04name\x18\x02 \x03(\x0b\x32-.google.protobuf.UninterpretedOption.NamePart\x12\x18\n\x10identifier_value\x18\x03 \x01(\t\x12\x1a\n\x12positive_int_value\x18\x04 \x01(\x04\x12\x1a\n\x12negative_int_value\x18\x05 \x01(\x03\x12\x14\n\x0c\x64ouble_value\x18\x06 \x01(\x01\x12\x14\n\x0cstring_value\x18\x07 \x01(\x0c\x12\x17\n\x0f\x61ggregate_value\x18\x08 \x01(\t\x1a\x33\n\x08NamePart\x12\x11\n\tname_part\x18\x01 \x02(\t\x12\x14\n\x0cis_extension\x18\x02 \x02(\x08\"\xd5\x01\n\x0eSourceCodeInfo\x12:\n\x08location\x18\x01 \x03(\x0b\x32(.google.protobuf.SourceCodeInfo.Location\x1a\x86\x01\n\x08Location\x12\x10\n\x04path\x18\x01 \x03(\x05\x42\x02\x10\x01\x12\x10\n\x04span\x18\x02 \x03(\x05\x42\x02\x10\x01\x12\x18\n\x10leading_comments\x18\x03 \x01(\t\x12\x19\n\x11trailing_comments\x18\x04 \x01(\t\x12!\n\x19leading_detached_comments\x18\x06 \x03(\t\"\xa7\x01\n\x11GeneratedCodeInfo\x12\x41\n\nannotation\x18\x01 \x03(\x0b\x32-.google.protobuf.GeneratedCodeInfo.Annotation\x1aO\n\nAnnotation\x12\x10\n\x04path\x18\x01 \x03(\x05\x42\x02\x10\x01\x12\x13\n\x0bsource_file\x18\x02 \x01(\t\x12\r\n\x05\x62\x65gin\x18\x03 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x04 \x01(\x05\x42~\n\x13\x63om.google.protobufB\x10\x44\x65scriptorProtosH\x01Z-google.golang.org/protobuf/types/descriptorpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1aGoogle.Protobuf.Reflection' + ) +else: + DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n google/protobuf/descriptor.proto\x12\x0fgoogle.protobuf\"G\n\x11\x46ileDescriptorSet\x12\x32\n\x04\x66ile\x18\x01 \x03(\x0b\x32$.google.protobuf.FileDescriptorProto\"\xdb\x03\n\x13\x46ileDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07package\x18\x02 \x01(\t\x12\x12\n\ndependency\x18\x03 \x03(\t\x12\x19\n\x11public_dependency\x18\n \x03(\x05\x12\x17\n\x0fweak_dependency\x18\x0b \x03(\x05\x12\x36\n\x0cmessage_type\x18\x04 \x03(\x0b\x32 .google.protobuf.DescriptorProto\x12\x37\n\tenum_type\x18\x05 \x03(\x0b\x32$.google.protobuf.EnumDescriptorProto\x12\x38\n\x07service\x18\x06 \x03(\x0b\x32\'.google.protobuf.ServiceDescriptorProto\x12\x38\n\textension\x18\x07 \x03(\x0b\x32%.google.protobuf.FieldDescriptorProto\x12-\n\x07options\x18\x08 \x01(\x0b\x32\x1c.google.protobuf.FileOptions\x12\x39\n\x10source_code_info\x18\t \x01(\x0b\x32\x1f.google.protobuf.SourceCodeInfo\x12\x0e\n\x06syntax\x18\x0c \x01(\t\"\xa9\x05\n\x0f\x44\x65scriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x34\n\x05\x66ield\x18\x02 \x03(\x0b\x32%.google.protobuf.FieldDescriptorProto\x12\x38\n\textension\x18\x06 \x03(\x0b\x32%.google.protobuf.FieldDescriptorProto\x12\x35\n\x0bnested_type\x18\x03 \x03(\x0b\x32 .google.protobuf.DescriptorProto\x12\x37\n\tenum_type\x18\x04 \x03(\x0b\x32$.google.protobuf.EnumDescriptorProto\x12H\n\x0f\x65xtension_range\x18\x05 \x03(\x0b\x32/.google.protobuf.DescriptorProto.ExtensionRange\x12\x39\n\noneof_decl\x18\x08 \x03(\x0b\x32%.google.protobuf.OneofDescriptorProto\x12\x30\n\x07options\x18\x07 \x01(\x0b\x32\x1f.google.protobuf.MessageOptions\x12\x46\n\x0ereserved_range\x18\t 
\x03(\x0b\x32..google.protobuf.DescriptorProto.ReservedRange\x12\x15\n\rreserved_name\x18\n \x03(\t\x1a\x65\n\x0e\x45xtensionRange\x12\r\n\x05start\x18\x01 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x05\x12\x37\n\x07options\x18\x03 \x01(\x0b\x32&.google.protobuf.ExtensionRangeOptions\x1a+\n\rReservedRange\x12\r\n\x05start\x18\x01 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x05\"g\n\x15\x45xtensionRangeOptions\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\xd5\x05\n\x14\x46ieldDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0e\n\x06number\x18\x03 \x01(\x05\x12:\n\x05label\x18\x04 \x01(\x0e\x32+.google.protobuf.FieldDescriptorProto.Label\x12\x38\n\x04type\x18\x05 \x01(\x0e\x32*.google.protobuf.FieldDescriptorProto.Type\x12\x11\n\ttype_name\x18\x06 \x01(\t\x12\x10\n\x08\x65xtendee\x18\x02 \x01(\t\x12\x15\n\rdefault_value\x18\x07 \x01(\t\x12\x13\n\x0boneof_index\x18\t \x01(\x05\x12\x11\n\tjson_name\x18\n \x01(\t\x12.\n\x07options\x18\x08 \x01(\x0b\x32\x1d.google.protobuf.FieldOptions\x12\x17\n\x0fproto3_optional\x18\x11 \x01(\x08\"\xb6\x02\n\x04Type\x12\x0f\n\x0bTYPE_DOUBLE\x10\x01\x12\x0e\n\nTYPE_FLOAT\x10\x02\x12\x0e\n\nTYPE_INT64\x10\x03\x12\x0f\n\x0bTYPE_UINT64\x10\x04\x12\x0e\n\nTYPE_INT32\x10\x05\x12\x10\n\x0cTYPE_FIXED64\x10\x06\x12\x10\n\x0cTYPE_FIXED32\x10\x07\x12\r\n\tTYPE_BOOL\x10\x08\x12\x0f\n\x0bTYPE_STRING\x10\t\x12\x0e\n\nTYPE_GROUP\x10\n\x12\x10\n\x0cTYPE_MESSAGE\x10\x0b\x12\x0e\n\nTYPE_BYTES\x10\x0c\x12\x0f\n\x0bTYPE_UINT32\x10\r\x12\r\n\tTYPE_ENUM\x10\x0e\x12\x11\n\rTYPE_SFIXED32\x10\x0f\x12\x11\n\rTYPE_SFIXED64\x10\x10\x12\x0f\n\x0bTYPE_SINT32\x10\x11\x12\x0f\n\x0bTYPE_SINT64\x10\x12\"C\n\x05Label\x12\x12\n\x0eLABEL_OPTIONAL\x10\x01\x12\x12\n\x0eLABEL_REQUIRED\x10\x02\x12\x12\n\x0eLABEL_REPEATED\x10\x03\"T\n\x14OneofDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12.\n\x07options\x18\x02 \x01(\x0b\x32\x1d.google.protobuf.OneofOptions\"\xa4\x02\n\x13\x45numDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x38\n\x05value\x18\x02 \x03(\x0b\x32).google.protobuf.EnumValueDescriptorProto\x12-\n\x07options\x18\x03 \x01(\x0b\x32\x1c.google.protobuf.EnumOptions\x12N\n\x0ereserved_range\x18\x04 \x03(\x0b\x32\x36.google.protobuf.EnumDescriptorProto.EnumReservedRange\x12\x15\n\rreserved_name\x18\x05 \x03(\t\x1a/\n\x11\x45numReservedRange\x12\r\n\x05start\x18\x01 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x05\"l\n\x18\x45numValueDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0e\n\x06number\x18\x02 \x01(\x05\x12\x32\n\x07options\x18\x03 \x01(\x0b\x32!.google.protobuf.EnumValueOptions\"\x90\x01\n\x16ServiceDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x36\n\x06method\x18\x02 \x03(\x0b\x32&.google.protobuf.MethodDescriptorProto\x12\x30\n\x07options\x18\x03 \x01(\x0b\x32\x1f.google.protobuf.ServiceOptions\"\xc1\x01\n\x15MethodDescriptorProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x12\n\ninput_type\x18\x02 \x01(\t\x12\x13\n\x0boutput_type\x18\x03 \x01(\t\x12/\n\x07options\x18\x04 \x01(\x0b\x32\x1e.google.protobuf.MethodOptions\x12\x1f\n\x10\x63lient_streaming\x18\x05 \x01(\x08:\x05\x66\x61lse\x12\x1f\n\x10server_streaming\x18\x06 \x01(\x08:\x05\x66\x61lse\"\xa5\x06\n\x0b\x46ileOptions\x12\x14\n\x0cjava_package\x18\x01 \x01(\t\x12\x1c\n\x14java_outer_classname\x18\x08 \x01(\t\x12\"\n\x13java_multiple_files\x18\n \x01(\x08:\x05\x66\x61lse\x12)\n\x1djava_generate_equals_and_hash\x18\x14 \x01(\x08\x42\x02\x18\x01\x12%\n\x16java_string_check_utf8\x18\x1b 
\x01(\x08:\x05\x66\x61lse\x12\x46\n\x0coptimize_for\x18\t \x01(\x0e\x32).google.protobuf.FileOptions.OptimizeMode:\x05SPEED\x12\x12\n\ngo_package\x18\x0b \x01(\t\x12\"\n\x13\x63\x63_generic_services\x18\x10 \x01(\x08:\x05\x66\x61lse\x12$\n\x15java_generic_services\x18\x11 \x01(\x08:\x05\x66\x61lse\x12\"\n\x13py_generic_services\x18\x12 \x01(\x08:\x05\x66\x61lse\x12#\n\x14php_generic_services\x18* \x01(\x08:\x05\x66\x61lse\x12\x19\n\ndeprecated\x18\x17 \x01(\x08:\x05\x66\x61lse\x12\x1e\n\x10\x63\x63_enable_arenas\x18\x1f \x01(\x08:\x04true\x12\x19\n\x11objc_class_prefix\x18$ \x01(\t\x12\x18\n\x10\x63sharp_namespace\x18% \x01(\t\x12\x14\n\x0cswift_prefix\x18\' \x01(\t\x12\x18\n\x10php_class_prefix\x18( \x01(\t\x12\x15\n\rphp_namespace\x18) \x01(\t\x12\x1e\n\x16php_metadata_namespace\x18, \x01(\t\x12\x14\n\x0cruby_package\x18- \x01(\t\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption\":\n\x0cOptimizeMode\x12\t\n\x05SPEED\x10\x01\x12\r\n\tCODE_SIZE\x10\x02\x12\x10\n\x0cLITE_RUNTIME\x10\x03*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08&\x10\'\"\x84\x02\n\x0eMessageOptions\x12&\n\x17message_set_wire_format\x18\x01 \x01(\x08:\x05\x66\x61lse\x12.\n\x1fno_standard_descriptor_accessor\x18\x02 \x01(\x08:\x05\x66\x61lse\x12\x19\n\ndeprecated\x18\x03 \x01(\x08:\x05\x66\x61lse\x12\x11\n\tmap_entry\x18\x07 \x01(\x08\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08\x04\x10\x05J\x04\x08\x05\x10\x06J\x04\x08\x06\x10\x07J\x04\x08\x08\x10\tJ\x04\x08\t\x10\n\"\xbe\x03\n\x0c\x46ieldOptions\x12:\n\x05\x63type\x18\x01 \x01(\x0e\x32#.google.protobuf.FieldOptions.CType:\x06STRING\x12\x0e\n\x06packed\x18\x02 \x01(\x08\x12?\n\x06jstype\x18\x06 \x01(\x0e\x32$.google.protobuf.FieldOptions.JSType:\tJS_NORMAL\x12\x13\n\x04lazy\x18\x05 \x01(\x08:\x05\x66\x61lse\x12\x1e\n\x0funverified_lazy\x18\x0f \x01(\x08:\x05\x66\x61lse\x12\x19\n\ndeprecated\x18\x03 \x01(\x08:\x05\x66\x61lse\x12\x13\n\x04weak\x18\n \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption\"/\n\x05\x43Type\x12\n\n\x06STRING\x10\x00\x12\x08\n\x04\x43ORD\x10\x01\x12\x10\n\x0cSTRING_PIECE\x10\x02\"5\n\x06JSType\x12\r\n\tJS_NORMAL\x10\x00\x12\r\n\tJS_STRING\x10\x01\x12\r\n\tJS_NUMBER\x10\x02*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08\x04\x10\x05\"^\n\x0cOneofOptions\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\x93\x01\n\x0b\x45numOptions\x12\x13\n\x0b\x61llow_alias\x18\x02 \x01(\x08\x12\x19\n\ndeprecated\x18\x03 \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02J\x04\x08\x05\x10\x06\"}\n\x10\x45numValueOptions\x12\x19\n\ndeprecated\x18\x01 \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"{\n\x0eServiceOptions\x12\x19\n\ndeprecated\x18! \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\xad\x02\n\rMethodOptions\x12\x19\n\ndeprecated\x18! 
\x01(\x08:\x05\x66\x61lse\x12_\n\x11idempotency_level\x18\" \x01(\x0e\x32/.google.protobuf.MethodOptions.IdempotencyLevel:\x13IDEMPOTENCY_UNKNOWN\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption\"P\n\x10IdempotencyLevel\x12\x17\n\x13IDEMPOTENCY_UNKNOWN\x10\x00\x12\x13\n\x0fNO_SIDE_EFFECTS\x10\x01\x12\x0e\n\nIDEMPOTENT\x10\x02*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\x9e\x02\n\x13UninterpretedOption\x12;\n\x04name\x18\x02 \x03(\x0b\x32-.google.protobuf.UninterpretedOption.NamePart\x12\x18\n\x10identifier_value\x18\x03 \x01(\t\x12\x1a\n\x12positive_int_value\x18\x04 \x01(\x04\x12\x1a\n\x12negative_int_value\x18\x05 \x01(\x03\x12\x14\n\x0c\x64ouble_value\x18\x06 \x01(\x01\x12\x14\n\x0cstring_value\x18\x07 \x01(\x0c\x12\x17\n\x0f\x61ggregate_value\x18\x08 \x01(\t\x1a\x33\n\x08NamePart\x12\x11\n\tname_part\x18\x01 \x02(\t\x12\x14\n\x0cis_extension\x18\x02 \x02(\x08\"\xd5\x01\n\x0eSourceCodeInfo\x12:\n\x08location\x18\x01 \x03(\x0b\x32(.google.protobuf.SourceCodeInfo.Location\x1a\x86\x01\n\x08Location\x12\x10\n\x04path\x18\x01 \x03(\x05\x42\x02\x10\x01\x12\x10\n\x04span\x18\x02 \x03(\x05\x42\x02\x10\x01\x12\x18\n\x10leading_comments\x18\x03 \x01(\t\x12\x19\n\x11trailing_comments\x18\x04 \x01(\t\x12!\n\x19leading_detached_comments\x18\x06 \x03(\t\"\xa7\x01\n\x11GeneratedCodeInfo\x12\x41\n\nannotation\x18\x01 \x03(\x0b\x32-.google.protobuf.GeneratedCodeInfo.Annotation\x1aO\n\nAnnotation\x12\x10\n\x04path\x18\x01 \x03(\x05\x42\x02\x10\x01\x12\x13\n\x0bsource_file\x18\x02 \x01(\t\x12\r\n\x05\x62\x65gin\x18\x03 \x01(\x05\x12\x0b\n\x03\x65nd\x18\x04 \x01(\x05\x42~\n\x13\x63om.google.protobufB\x10\x44\x65scriptorProtosH\x01Z-google.golang.org/protobuf/types/descriptorpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1aGoogle.Protobuf.Reflection') + +if _descriptor._USE_C_DESCRIPTORS == False: + _FIELDDESCRIPTORPROTO_TYPE = _descriptor.EnumDescriptor( + name='Type', + full_name='google.protobuf.FieldDescriptorProto.Type', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='TYPE_DOUBLE', index=0, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_FLOAT', index=1, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_INT64', index=2, number=3, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_UINT64', index=3, number=4, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_INT32', index=4, number=5, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_FIXED64', index=5, number=6, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_FIXED32', index=6, number=7, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_BOOL', index=7, number=8, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_STRING', index=8, number=9, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + 
_descriptor.EnumValueDescriptor( + name='TYPE_GROUP', index=9, number=10, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_MESSAGE', index=10, number=11, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_BYTES', index=11, number=12, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_UINT32', index=12, number=13, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_ENUM', index=13, number=14, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SFIXED32', index=14, number=15, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SFIXED64', index=15, number=16, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SINT32', index=16, number=17, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='TYPE_SINT64', index=17, number=18, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + ) + _sym_db.RegisterEnumDescriptor(_FIELDDESCRIPTORPROTO_TYPE) + + _FIELDDESCRIPTORPROTO_LABEL = _descriptor.EnumDescriptor( + name='Label', + full_name='google.protobuf.FieldDescriptorProto.Label', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='LABEL_OPTIONAL', index=0, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='LABEL_REQUIRED', index=1, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='LABEL_REPEATED', index=2, number=3, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + ) + _sym_db.RegisterEnumDescriptor(_FIELDDESCRIPTORPROTO_LABEL) + + _FILEOPTIONS_OPTIMIZEMODE = _descriptor.EnumDescriptor( + name='OptimizeMode', + full_name='google.protobuf.FileOptions.OptimizeMode', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='SPEED', index=0, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='CODE_SIZE', index=1, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='LITE_RUNTIME', index=2, number=3, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + ) + _sym_db.RegisterEnumDescriptor(_FILEOPTIONS_OPTIMIZEMODE) + + _FIELDOPTIONS_CTYPE = _descriptor.EnumDescriptor( + name='CType', + full_name='google.protobuf.FieldOptions.CType', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + 
name='STRING', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='CORD', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='STRING_PIECE', index=2, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + ) + _sym_db.RegisterEnumDescriptor(_FIELDOPTIONS_CTYPE) + + _FIELDOPTIONS_JSTYPE = _descriptor.EnumDescriptor( + name='JSType', + full_name='google.protobuf.FieldOptions.JSType', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='JS_NORMAL', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='JS_STRING', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='JS_NUMBER', index=2, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + ) + _sym_db.RegisterEnumDescriptor(_FIELDOPTIONS_JSTYPE) + + _METHODOPTIONS_IDEMPOTENCYLEVEL = _descriptor.EnumDescriptor( + name='IdempotencyLevel', + full_name='google.protobuf.MethodOptions.IdempotencyLevel', + filename=None, + file=DESCRIPTOR, + create_key=_descriptor._internal_create_key, + values=[ + _descriptor.EnumValueDescriptor( + name='IDEMPOTENCY_UNKNOWN', index=0, number=0, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='NO_SIDE_EFFECTS', index=1, number=1, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + _descriptor.EnumValueDescriptor( + name='IDEMPOTENT', index=2, number=2, + serialized_options=None, + type=None, + create_key=_descriptor._internal_create_key), + ], + containing_type=None, + serialized_options=None, + ) + _sym_db.RegisterEnumDescriptor(_METHODOPTIONS_IDEMPOTENCYLEVEL) + + + _FILEDESCRIPTORSET = _descriptor.Descriptor( + name='FileDescriptorSet', + full_name='google.protobuf.FileDescriptorSet', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='file', full_name='google.protobuf.FileDescriptorSet.file', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _FILEDESCRIPTORPROTO = _descriptor.Descriptor( + name='FileDescriptorProto', + full_name='google.protobuf.FileDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.FileDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, 
enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='package', full_name='google.protobuf.FileDescriptorProto.package', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='dependency', full_name='google.protobuf.FileDescriptorProto.dependency', index=2, + number=3, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='public_dependency', full_name='google.protobuf.FileDescriptorProto.public_dependency', index=3, + number=10, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='weak_dependency', full_name='google.protobuf.FileDescriptorProto.weak_dependency', index=4, + number=11, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='message_type', full_name='google.protobuf.FileDescriptorProto.message_type', index=5, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='enum_type', full_name='google.protobuf.FileDescriptorProto.enum_type', index=6, + number=5, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='service', full_name='google.protobuf.FileDescriptorProto.service', index=7, + number=6, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='extension', full_name='google.protobuf.FileDescriptorProto.extension', index=8, + number=7, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.FileDescriptorProto.options', index=9, + number=8, type=11, 
cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='source_code_info', full_name='google.protobuf.FileDescriptorProto.source_code_info', index=10, + number=9, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='syntax', full_name='google.protobuf.FileDescriptorProto.syntax', index=11, + number=12, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _DESCRIPTORPROTO_EXTENSIONRANGE = _descriptor.Descriptor( + name='ExtensionRange', + full_name='google.protobuf.DescriptorProto.ExtensionRange', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='start', full_name='google.protobuf.DescriptorProto.ExtensionRange.start', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='end', full_name='google.protobuf.DescriptorProto.ExtensionRange.end', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.DescriptorProto.ExtensionRange.options', index=2, + number=3, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + _DESCRIPTORPROTO_RESERVEDRANGE = _descriptor.Descriptor( + name='ReservedRange', + full_name='google.protobuf.DescriptorProto.ReservedRange', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='start', full_name='google.protobuf.DescriptorProto.ReservedRange.start', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, 
create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='end', full_name='google.protobuf.DescriptorProto.ReservedRange.end', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + _DESCRIPTORPROTO = _descriptor.Descriptor( + name='DescriptorProto', + full_name='google.protobuf.DescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.DescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='field', full_name='google.protobuf.DescriptorProto.field', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='extension', full_name='google.protobuf.DescriptorProto.extension', index=2, + number=6, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='nested_type', full_name='google.protobuf.DescriptorProto.nested_type', index=3, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='enum_type', full_name='google.protobuf.DescriptorProto.enum_type', index=4, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='extension_range', full_name='google.protobuf.DescriptorProto.extension_range', index=5, + number=5, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='oneof_decl', full_name='google.protobuf.DescriptorProto.oneof_decl', index=6, + number=8, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + 
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.DescriptorProto.options', index=7, + number=7, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='reserved_range', full_name='google.protobuf.DescriptorProto.reserved_range', index=8, + number=9, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='reserved_name', full_name='google.protobuf.DescriptorProto.reserved_name', index=9, + number=10, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_DESCRIPTORPROTO_EXTENSIONRANGE, _DESCRIPTORPROTO_RESERVEDRANGE, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _EXTENSIONRANGEOPTIONS = _descriptor.Descriptor( + name='ExtensionRangeOptions', + full_name='google.protobuf.ExtensionRangeOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.ExtensionRangeOptions.uninterpreted_option', index=0, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + ) + + + _FIELDDESCRIPTORPROTO = _descriptor.Descriptor( + name='FieldDescriptorProto', + full_name='google.protobuf.FieldDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.FieldDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='number', full_name='google.protobuf.FieldDescriptorProto.number', index=1, + number=3, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='label', 
full_name='google.protobuf.FieldDescriptorProto.label', index=2, + number=4, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=1, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='type', full_name='google.protobuf.FieldDescriptorProto.type', index=3, + number=5, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=1, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='type_name', full_name='google.protobuf.FieldDescriptorProto.type_name', index=4, + number=6, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='extendee', full_name='google.protobuf.FieldDescriptorProto.extendee', index=5, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='default_value', full_name='google.protobuf.FieldDescriptorProto.default_value', index=6, + number=7, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='oneof_index', full_name='google.protobuf.FieldDescriptorProto.oneof_index', index=7, + number=9, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='json_name', full_name='google.protobuf.FieldDescriptorProto.json_name', index=8, + number=10, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.FieldDescriptorProto.options', index=9, + number=8, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='proto3_optional', full_name='google.protobuf.FieldDescriptorProto.proto3_optional', index=10, + number=17, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + 
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + _FIELDDESCRIPTORPROTO_TYPE, + _FIELDDESCRIPTORPROTO_LABEL, + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _ONEOFDESCRIPTORPROTO = _descriptor.Descriptor( + name='OneofDescriptorProto', + full_name='google.protobuf.OneofDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.OneofDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.OneofDescriptorProto.options', index=1, + number=2, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE = _descriptor.Descriptor( + name='EnumReservedRange', + full_name='google.protobuf.EnumDescriptorProto.EnumReservedRange', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='start', full_name='google.protobuf.EnumDescriptorProto.EnumReservedRange.start', index=0, + number=1, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='end', full_name='google.protobuf.EnumDescriptorProto.EnumReservedRange.end', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + _ENUMDESCRIPTORPROTO = _descriptor.Descriptor( + name='EnumDescriptorProto', + full_name='google.protobuf.EnumDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.EnumDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='value', 
full_name='google.protobuf.EnumDescriptorProto.value', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.EnumDescriptorProto.options', index=2, + number=3, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='reserved_range', full_name='google.protobuf.EnumDescriptorProto.reserved_range', index=3, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='reserved_name', full_name='google.protobuf.EnumDescriptorProto.reserved_name', index=4, + number=5, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _ENUMVALUEDESCRIPTORPROTO = _descriptor.Descriptor( + name='EnumValueDescriptorProto', + full_name='google.protobuf.EnumValueDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.EnumValueDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='number', full_name='google.protobuf.EnumValueDescriptorProto.number', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.EnumValueDescriptorProto.options', index=2, + number=3, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _SERVICEDESCRIPTORPROTO = _descriptor.Descriptor( + name='ServiceDescriptorProto', + 
full_name='google.protobuf.ServiceDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.ServiceDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='method', full_name='google.protobuf.ServiceDescriptorProto.method', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.ServiceDescriptorProto.options', index=2, + number=3, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _METHODDESCRIPTORPROTO = _descriptor.Descriptor( + name='MethodDescriptorProto', + full_name='google.protobuf.MethodDescriptorProto', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.MethodDescriptorProto.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='input_type', full_name='google.protobuf.MethodDescriptorProto.input_type', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='output_type', full_name='google.protobuf.MethodDescriptorProto.output_type', index=2, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='options', full_name='google.protobuf.MethodDescriptorProto.options', index=3, + number=4, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='client_streaming', 
full_name='google.protobuf.MethodDescriptorProto.client_streaming', index=4, + number=5, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='server_streaming', full_name='google.protobuf.MethodDescriptorProto.server_streaming', index=5, + number=6, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _FILEOPTIONS = _descriptor.Descriptor( + name='FileOptions', + full_name='google.protobuf.FileOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='java_package', full_name='google.protobuf.FileOptions.java_package', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='java_outer_classname', full_name='google.protobuf.FileOptions.java_outer_classname', index=1, + number=8, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='java_multiple_files', full_name='google.protobuf.FileOptions.java_multiple_files', index=2, + number=10, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='java_generate_equals_and_hash', full_name='google.protobuf.FileOptions.java_generate_equals_and_hash', index=3, + number=20, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='java_string_check_utf8', full_name='google.protobuf.FileOptions.java_string_check_utf8', index=4, + number=27, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='optimize_for', full_name='google.protobuf.FileOptions.optimize_for', index=5, + number=9, type=14, cpp_type=8, label=1, + has_default_value=True, default_value=1, + message_type=None, enum_type=None, containing_type=None, + 
is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='go_package', full_name='google.protobuf.FileOptions.go_package', index=6, + number=11, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='cc_generic_services', full_name='google.protobuf.FileOptions.cc_generic_services', index=7, + number=16, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='java_generic_services', full_name='google.protobuf.FileOptions.java_generic_services', index=8, + number=17, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='py_generic_services', full_name='google.protobuf.FileOptions.py_generic_services', index=9, + number=18, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='php_generic_services', full_name='google.protobuf.FileOptions.php_generic_services', index=10, + number=42, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.FileOptions.deprecated', index=11, + number=23, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='cc_enable_arenas', full_name='google.protobuf.FileOptions.cc_enable_arenas', index=12, + number=31, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=True, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='objc_class_prefix', full_name='google.protobuf.FileOptions.objc_class_prefix', index=13, + number=36, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='csharp_namespace', full_name='google.protobuf.FileOptions.csharp_namespace', index=14, 
+ number=37, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='swift_prefix', full_name='google.protobuf.FileOptions.swift_prefix', index=15, + number=39, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='php_class_prefix', full_name='google.protobuf.FileOptions.php_class_prefix', index=16, + number=40, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='php_namespace', full_name='google.protobuf.FileOptions.php_namespace', index=17, + number=41, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='php_metadata_namespace', full_name='google.protobuf.FileOptions.php_metadata_namespace', index=18, + number=44, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='ruby_package', full_name='google.protobuf.FileOptions.ruby_package', index=19, + number=45, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.FileOptions.uninterpreted_option', index=20, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + _FILEOPTIONS_OPTIMIZEMODE, + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + ) + + + _MESSAGEOPTIONS = _descriptor.Descriptor( + name='MessageOptions', + full_name='google.protobuf.MessageOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='message_set_wire_format', full_name='google.protobuf.MessageOptions.message_set_wire_format', index=0, + number=1, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, 
containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='no_standard_descriptor_accessor', full_name='google.protobuf.MessageOptions.no_standard_descriptor_accessor', index=1, + number=2, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.MessageOptions.deprecated', index=2, + number=3, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='map_entry', full_name='google.protobuf.MessageOptions.map_entry', index=3, + number=7, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.MessageOptions.uninterpreted_option', index=4, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + ) + + + _FIELDOPTIONS = _descriptor.Descriptor( + name='FieldOptions', + full_name='google.protobuf.FieldOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='ctype', full_name='google.protobuf.FieldOptions.ctype', index=0, + number=1, type=14, cpp_type=8, label=1, + has_default_value=True, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='packed', full_name='google.protobuf.FieldOptions.packed', index=1, + number=2, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='jstype', full_name='google.protobuf.FieldOptions.jstype', index=2, + number=6, type=14, cpp_type=8, label=1, + has_default_value=True, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='lazy', full_name='google.protobuf.FieldOptions.lazy', index=3, + number=5, type=8, cpp_type=7, label=1, + has_default_value=True, 
default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='unverified_lazy', full_name='google.protobuf.FieldOptions.unverified_lazy', index=4, + number=15, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.FieldOptions.deprecated', index=5, + number=3, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='weak', full_name='google.protobuf.FieldOptions.weak', index=6, + number=10, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.FieldOptions.uninterpreted_option', index=7, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + _FIELDOPTIONS_CTYPE, + _FIELDOPTIONS_JSTYPE, + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + ) + + + _ONEOFOPTIONS = _descriptor.Descriptor( + name='OneofOptions', + full_name='google.protobuf.OneofOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.OneofOptions.uninterpreted_option', index=0, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + ) + + + _ENUMOPTIONS = _descriptor.Descriptor( + name='EnumOptions', + full_name='google.protobuf.EnumOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='allow_alias', full_name='google.protobuf.EnumOptions.allow_alias', index=0, + number=2, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( 
+ name='deprecated', full_name='google.protobuf.EnumOptions.deprecated', index=1, + number=3, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.EnumOptions.uninterpreted_option', index=2, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + ) + + + _ENUMVALUEOPTIONS = _descriptor.Descriptor( + name='EnumValueOptions', + full_name='google.protobuf.EnumValueOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.EnumValueOptions.deprecated', index=0, + number=1, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.EnumValueOptions.uninterpreted_option', index=1, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + ) + + + _SERVICEOPTIONS = _descriptor.Descriptor( + name='ServiceOptions', + full_name='google.protobuf.ServiceOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.ServiceOptions.deprecated', index=0, + number=33, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.ServiceOptions.uninterpreted_option', index=1, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + ) + + + _METHODOPTIONS = _descriptor.Descriptor( + name='MethodOptions', + 
full_name='google.protobuf.MethodOptions', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='deprecated', full_name='google.protobuf.MethodOptions.deprecated', index=0, + number=33, type=8, cpp_type=7, label=1, + has_default_value=True, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='idempotency_level', full_name='google.protobuf.MethodOptions.idempotency_level', index=1, + number=34, type=14, cpp_type=8, label=1, + has_default_value=True, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='uninterpreted_option', full_name='google.protobuf.MethodOptions.uninterpreted_option', index=2, + number=999, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + _METHODOPTIONS_IDEMPOTENCYLEVEL, + ], + serialized_options=None, + is_extendable=True, + syntax='proto2', + extension_ranges=[(1000, 536870912), ], + oneofs=[ + ], + ) + + + _UNINTERPRETEDOPTION_NAMEPART = _descriptor.Descriptor( + name='NamePart', + full_name='google.protobuf.UninterpretedOption.NamePart', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name_part', full_name='google.protobuf.UninterpretedOption.NamePart.name_part', index=0, + number=1, type=9, cpp_type=9, label=2, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='is_extension', full_name='google.protobuf.UninterpretedOption.NamePart.is_extension', index=1, + number=2, type=8, cpp_type=7, label=2, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + _UNINTERPRETEDOPTION = _descriptor.Descriptor( + name='UninterpretedOption', + full_name='google.protobuf.UninterpretedOption', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='google.protobuf.UninterpretedOption.name', index=0, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + 
_descriptor.FieldDescriptor( + name='identifier_value', full_name='google.protobuf.UninterpretedOption.identifier_value', index=1, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='positive_int_value', full_name='google.protobuf.UninterpretedOption.positive_int_value', index=2, + number=4, type=4, cpp_type=4, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='negative_int_value', full_name='google.protobuf.UninterpretedOption.negative_int_value', index=3, + number=5, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='double_value', full_name='google.protobuf.UninterpretedOption.double_value', index=4, + number=6, type=1, cpp_type=5, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='string_value', full_name='google.protobuf.UninterpretedOption.string_value', index=5, + number=7, type=12, cpp_type=9, label=1, + has_default_value=False, default_value=b"", + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='aggregate_value', full_name='google.protobuf.UninterpretedOption.aggregate_value', index=6, + number=8, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_UNINTERPRETEDOPTION_NAMEPART, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _SOURCECODEINFO_LOCATION = _descriptor.Descriptor( + name='Location', + full_name='google.protobuf.SourceCodeInfo.Location', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='path', full_name='google.protobuf.SourceCodeInfo.Location.path', index=0, + number=1, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='span', full_name='google.protobuf.SourceCodeInfo.Location.span', index=1, + number=2, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + 
message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='leading_comments', full_name='google.protobuf.SourceCodeInfo.Location.leading_comments', index=2, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='trailing_comments', full_name='google.protobuf.SourceCodeInfo.Location.trailing_comments', index=3, + number=4, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='leading_detached_comments', full_name='google.protobuf.SourceCodeInfo.Location.leading_detached_comments', index=4, + number=6, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + _SOURCECODEINFO = _descriptor.Descriptor( + name='SourceCodeInfo', + full_name='google.protobuf.SourceCodeInfo', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='location', full_name='google.protobuf.SourceCodeInfo.location', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_SOURCECODEINFO_LOCATION, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + + _GENERATEDCODEINFO_ANNOTATION = _descriptor.Descriptor( + name='Annotation', + full_name='google.protobuf.GeneratedCodeInfo.Annotation', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='path', full_name='google.protobuf.GeneratedCodeInfo.Annotation.path', index=0, + number=1, type=5, cpp_type=1, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='source_file', full_name='google.protobuf.GeneratedCodeInfo.Annotation.source_file', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=b"".decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, 
create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='begin', full_name='google.protobuf.GeneratedCodeInfo.Annotation.begin', index=2, + number=3, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + _descriptor.FieldDescriptor( + name='end', full_name='google.protobuf.GeneratedCodeInfo.Annotation.end', index=3, + number=4, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + _GENERATEDCODEINFO = _descriptor.Descriptor( + name='GeneratedCodeInfo', + full_name='google.protobuf.GeneratedCodeInfo', + filename=None, + file=DESCRIPTOR, + containing_type=None, + create_key=_descriptor._internal_create_key, + fields=[ + _descriptor.FieldDescriptor( + name='annotation', full_name='google.protobuf.GeneratedCodeInfo.annotation', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), + ], + extensions=[ + ], + nested_types=[_GENERATEDCODEINFO_ANNOTATION, ], + enum_types=[ + ], + serialized_options=None, + is_extendable=False, + syntax='proto2', + extension_ranges=[], + oneofs=[ + ], + ) + + _FILEDESCRIPTORSET.fields_by_name['file'].message_type = _FILEDESCRIPTORPROTO + _FILEDESCRIPTORPROTO.fields_by_name['message_type'].message_type = _DESCRIPTORPROTO + _FILEDESCRIPTORPROTO.fields_by_name['enum_type'].message_type = _ENUMDESCRIPTORPROTO + _FILEDESCRIPTORPROTO.fields_by_name['service'].message_type = _SERVICEDESCRIPTORPROTO + _FILEDESCRIPTORPROTO.fields_by_name['extension'].message_type = _FIELDDESCRIPTORPROTO + _FILEDESCRIPTORPROTO.fields_by_name['options'].message_type = _FILEOPTIONS + _FILEDESCRIPTORPROTO.fields_by_name['source_code_info'].message_type = _SOURCECODEINFO + _DESCRIPTORPROTO_EXTENSIONRANGE.fields_by_name['options'].message_type = _EXTENSIONRANGEOPTIONS + _DESCRIPTORPROTO_EXTENSIONRANGE.containing_type = _DESCRIPTORPROTO + _DESCRIPTORPROTO_RESERVEDRANGE.containing_type = _DESCRIPTORPROTO + _DESCRIPTORPROTO.fields_by_name['field'].message_type = _FIELDDESCRIPTORPROTO + _DESCRIPTORPROTO.fields_by_name['extension'].message_type = _FIELDDESCRIPTORPROTO + _DESCRIPTORPROTO.fields_by_name['nested_type'].message_type = _DESCRIPTORPROTO + _DESCRIPTORPROTO.fields_by_name['enum_type'].message_type = _ENUMDESCRIPTORPROTO + _DESCRIPTORPROTO.fields_by_name['extension_range'].message_type = _DESCRIPTORPROTO_EXTENSIONRANGE + _DESCRIPTORPROTO.fields_by_name['oneof_decl'].message_type = _ONEOFDESCRIPTORPROTO + _DESCRIPTORPROTO.fields_by_name['options'].message_type = _MESSAGEOPTIONS + _DESCRIPTORPROTO.fields_by_name['reserved_range'].message_type = _DESCRIPTORPROTO_RESERVEDRANGE + _EXTENSIONRANGEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION + _FIELDDESCRIPTORPROTO.fields_by_name['label'].enum_type = 
_FIELDDESCRIPTORPROTO_LABEL + _FIELDDESCRIPTORPROTO.fields_by_name['type'].enum_type = _FIELDDESCRIPTORPROTO_TYPE + _FIELDDESCRIPTORPROTO.fields_by_name['options'].message_type = _FIELDOPTIONS + _FIELDDESCRIPTORPROTO_TYPE.containing_type = _FIELDDESCRIPTORPROTO + _FIELDDESCRIPTORPROTO_LABEL.containing_type = _FIELDDESCRIPTORPROTO + _ONEOFDESCRIPTORPROTO.fields_by_name['options'].message_type = _ONEOFOPTIONS + _ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE.containing_type = _ENUMDESCRIPTORPROTO + _ENUMDESCRIPTORPROTO.fields_by_name['value'].message_type = _ENUMVALUEDESCRIPTORPROTO + _ENUMDESCRIPTORPROTO.fields_by_name['options'].message_type = _ENUMOPTIONS + _ENUMDESCRIPTORPROTO.fields_by_name['reserved_range'].message_type = _ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE + _ENUMVALUEDESCRIPTORPROTO.fields_by_name['options'].message_type = _ENUMVALUEOPTIONS + _SERVICEDESCRIPTORPROTO.fields_by_name['method'].message_type = _METHODDESCRIPTORPROTO + _SERVICEDESCRIPTORPROTO.fields_by_name['options'].message_type = _SERVICEOPTIONS + _METHODDESCRIPTORPROTO.fields_by_name['options'].message_type = _METHODOPTIONS + _FILEOPTIONS.fields_by_name['optimize_for'].enum_type = _FILEOPTIONS_OPTIMIZEMODE + _FILEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION + _FILEOPTIONS_OPTIMIZEMODE.containing_type = _FILEOPTIONS + _MESSAGEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION + _FIELDOPTIONS.fields_by_name['ctype'].enum_type = _FIELDOPTIONS_CTYPE + _FIELDOPTIONS.fields_by_name['jstype'].enum_type = _FIELDOPTIONS_JSTYPE + _FIELDOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION + _FIELDOPTIONS_CTYPE.containing_type = _FIELDOPTIONS + _FIELDOPTIONS_JSTYPE.containing_type = _FIELDOPTIONS + _ONEOFOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION + _ENUMOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION + _ENUMVALUEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION + _SERVICEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION + _METHODOPTIONS.fields_by_name['idempotency_level'].enum_type = _METHODOPTIONS_IDEMPOTENCYLEVEL + _METHODOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION + _METHODOPTIONS_IDEMPOTENCYLEVEL.containing_type = _METHODOPTIONS + _UNINTERPRETEDOPTION_NAMEPART.containing_type = _UNINTERPRETEDOPTION + _UNINTERPRETEDOPTION.fields_by_name['name'].message_type = _UNINTERPRETEDOPTION_NAMEPART + _SOURCECODEINFO_LOCATION.containing_type = _SOURCECODEINFO + _SOURCECODEINFO.fields_by_name['location'].message_type = _SOURCECODEINFO_LOCATION + _GENERATEDCODEINFO_ANNOTATION.containing_type = _GENERATEDCODEINFO + _GENERATEDCODEINFO.fields_by_name['annotation'].message_type = _GENERATEDCODEINFO_ANNOTATION + DESCRIPTOR.message_types_by_name['FileDescriptorSet'] = _FILEDESCRIPTORSET + DESCRIPTOR.message_types_by_name['FileDescriptorProto'] = _FILEDESCRIPTORPROTO + DESCRIPTOR.message_types_by_name['DescriptorProto'] = _DESCRIPTORPROTO + DESCRIPTOR.message_types_by_name['ExtensionRangeOptions'] = _EXTENSIONRANGEOPTIONS + DESCRIPTOR.message_types_by_name['FieldDescriptorProto'] = _FIELDDESCRIPTORPROTO + DESCRIPTOR.message_types_by_name['OneofDescriptorProto'] = _ONEOFDESCRIPTORPROTO + DESCRIPTOR.message_types_by_name['EnumDescriptorProto'] = _ENUMDESCRIPTORPROTO + DESCRIPTOR.message_types_by_name['EnumValueDescriptorProto'] = _ENUMVALUEDESCRIPTORPROTO + 
DESCRIPTOR.message_types_by_name['ServiceDescriptorProto'] = _SERVICEDESCRIPTORPROTO + DESCRIPTOR.message_types_by_name['MethodDescriptorProto'] = _METHODDESCRIPTORPROTO + DESCRIPTOR.message_types_by_name['FileOptions'] = _FILEOPTIONS + DESCRIPTOR.message_types_by_name['MessageOptions'] = _MESSAGEOPTIONS + DESCRIPTOR.message_types_by_name['FieldOptions'] = _FIELDOPTIONS + DESCRIPTOR.message_types_by_name['OneofOptions'] = _ONEOFOPTIONS + DESCRIPTOR.message_types_by_name['EnumOptions'] = _ENUMOPTIONS + DESCRIPTOR.message_types_by_name['EnumValueOptions'] = _ENUMVALUEOPTIONS + DESCRIPTOR.message_types_by_name['ServiceOptions'] = _SERVICEOPTIONS + DESCRIPTOR.message_types_by_name['MethodOptions'] = _METHODOPTIONS + DESCRIPTOR.message_types_by_name['UninterpretedOption'] = _UNINTERPRETEDOPTION + DESCRIPTOR.message_types_by_name['SourceCodeInfo'] = _SOURCECODEINFO + DESCRIPTOR.message_types_by_name['GeneratedCodeInfo'] = _GENERATEDCODEINFO + _sym_db.RegisterFileDescriptor(DESCRIPTOR) + +else: + _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.descriptor_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + _FILEDESCRIPTORSET._serialized_start=53 + _FILEDESCRIPTORSET._serialized_end=124 + _FILEDESCRIPTORPROTO._serialized_start=127 + _FILEDESCRIPTORPROTO._serialized_end=602 + _DESCRIPTORPROTO._serialized_start=605 + _DESCRIPTORPROTO._serialized_end=1286 + _DESCRIPTORPROTO_EXTENSIONRANGE._serialized_start=1140 + _DESCRIPTORPROTO_EXTENSIONRANGE._serialized_end=1241 + _DESCRIPTORPROTO_RESERVEDRANGE._serialized_start=1243 + _DESCRIPTORPROTO_RESERVEDRANGE._serialized_end=1286 + _EXTENSIONRANGEOPTIONS._serialized_start=1288 + _EXTENSIONRANGEOPTIONS._serialized_end=1391 + _FIELDDESCRIPTORPROTO._serialized_start=1394 + _FIELDDESCRIPTORPROTO._serialized_end=2119 + _FIELDDESCRIPTORPROTO_TYPE._serialized_start=1740 + _FIELDDESCRIPTORPROTO_TYPE._serialized_end=2050 + _FIELDDESCRIPTORPROTO_LABEL._serialized_start=2052 + _FIELDDESCRIPTORPROTO_LABEL._serialized_end=2119 + _ONEOFDESCRIPTORPROTO._serialized_start=2121 + _ONEOFDESCRIPTORPROTO._serialized_end=2205 + _ENUMDESCRIPTORPROTO._serialized_start=2208 + _ENUMDESCRIPTORPROTO._serialized_end=2500 + _ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE._serialized_start=2453 + _ENUMDESCRIPTORPROTO_ENUMRESERVEDRANGE._serialized_end=2500 + _ENUMVALUEDESCRIPTORPROTO._serialized_start=2502 + _ENUMVALUEDESCRIPTORPROTO._serialized_end=2610 + _SERVICEDESCRIPTORPROTO._serialized_start=2613 + _SERVICEDESCRIPTORPROTO._serialized_end=2757 + _METHODDESCRIPTORPROTO._serialized_start=2760 + _METHODDESCRIPTORPROTO._serialized_end=2953 + _FILEOPTIONS._serialized_start=2956 + _FILEOPTIONS._serialized_end=3761 + _FILEOPTIONS_OPTIMIZEMODE._serialized_start=3686 + _FILEOPTIONS_OPTIMIZEMODE._serialized_end=3744 + _MESSAGEOPTIONS._serialized_start=3764 + _MESSAGEOPTIONS._serialized_end=4024 + _FIELDOPTIONS._serialized_start=4027 + _FIELDOPTIONS._serialized_end=4473 + _FIELDOPTIONS_CTYPE._serialized_start=4354 + _FIELDOPTIONS_CTYPE._serialized_end=4401 + _FIELDOPTIONS_JSTYPE._serialized_start=4403 + _FIELDOPTIONS_JSTYPE._serialized_end=4456 + _ONEOFOPTIONS._serialized_start=4475 + _ONEOFOPTIONS._serialized_end=4569 + _ENUMOPTIONS._serialized_start=4572 + _ENUMOPTIONS._serialized_end=4719 + _ENUMVALUEOPTIONS._serialized_start=4721 + _ENUMVALUEOPTIONS._serialized_end=4846 + _SERVICEOPTIONS._serialized_start=4848 + _SERVICEOPTIONS._serialized_end=4971 + 
_METHODOPTIONS._serialized_start=4974 + _METHODOPTIONS._serialized_end=5275 + _METHODOPTIONS_IDEMPOTENCYLEVEL._serialized_start=5184 + _METHODOPTIONS_IDEMPOTENCYLEVEL._serialized_end=5264 + _UNINTERPRETEDOPTION._serialized_start=5278 + _UNINTERPRETEDOPTION._serialized_end=5564 + _UNINTERPRETEDOPTION_NAMEPART._serialized_start=5513 + _UNINTERPRETEDOPTION_NAMEPART._serialized_end=5564 + _SOURCECODEINFO._serialized_start=5567 + _SOURCECODEINFO._serialized_end=5780 + _SOURCECODEINFO_LOCATION._serialized_start=5646 + _SOURCECODEINFO_LOCATION._serialized_end=5780 + _GENERATEDCODEINFO._serialized_start=5783 + _GENERATEDCODEINFO._serialized_end=5950 + _GENERATEDCODEINFO_ANNOTATION._serialized_start=5871 + _GENERATEDCODEINFO_ANNOTATION._serialized_end=5950 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/descriptor_pool.py b/env/lib/python3.8/site-packages/google/protobuf/descriptor_pool.py new file mode 100644 index 00000000..911372a8 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/descriptor_pool.py @@ -0,0 +1,1295 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Provides DescriptorPool to use as a container for proto2 descriptors. + +The DescriptorPool is used in conjunction with a DescriptorDatabase to maintain +a collection of protocol buffer descriptors for use when dynamically creating +message types at runtime. + +For most applications protocol buffers should be used via modules generated by +the protocol buffer compiler tool. This should only be used when the type of +protocol buffers used in an application or library cannot be predetermined. + +Below is a straightforward example of how to use this class:: + + pool = DescriptorPool() + file_descriptor_protos = [ ...
] + for file_descriptor_proto in file_descriptor_protos: + pool.Add(file_descriptor_proto) + my_message_descriptor = pool.FindMessageTypeByName('some.package.MessageType') + +The message descriptor can be used in conjunction with the message_factory +module in order to create a protocol buffer class that can be encoded and +decoded. + +If you want to get a Python class for the specified proto, use the +helper functions inside google.protobuf.message_factory +directly instead of this class. +""" + +__author__ = 'matthewtoia@google.com (Matt Toia)' + +import collections +import warnings + +from google.protobuf import descriptor +from google.protobuf import descriptor_database +from google.protobuf import text_encoding + + +_USE_C_DESCRIPTORS = descriptor._USE_C_DESCRIPTORS  # pylint: disable=protected-access + + +def _Deprecated(func): + """Mark functions as deprecated.""" + + def NewFunc(*args, **kwargs): + warnings.warn( + 'Call to deprecated function %s(). Note: Adding unlinked descriptors ' + 'to descriptor_pool is wrong. Use Add() or AddSerializedFile() ' + 'instead.' % func.__name__, + category=DeprecationWarning) + return func(*args, **kwargs) + NewFunc.__name__ = func.__name__ + NewFunc.__doc__ = func.__doc__ + NewFunc.__dict__.update(func.__dict__) + return NewFunc + + +def _NormalizeFullyQualifiedName(name): + """Remove leading period from fully-qualified type name. + + Due to b/13860351 in descriptor_database.py, types in the root namespace are + generated with a leading period. This function removes that prefix. + + Args: + name (str): The fully-qualified symbol name. + + Returns: + str: The normalized fully-qualified symbol name. + """ + return name.lstrip('.') + + +def _OptionsOrNone(descriptor_proto): + """Returns the value of the field `options`, or None if it is not set.""" + if descriptor_proto.HasField('options'): + return descriptor_proto.options + else: + return None + + +def _IsMessageSetExtension(field): + return (field.is_extension and + field.containing_type.has_options and + field.containing_type.GetOptions().message_set_wire_format and + field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and + field.label == descriptor.FieldDescriptor.LABEL_OPTIONAL) + + +class DescriptorPool(object): + """A collection of protobufs dynamically constructed by descriptor protos.""" + + if _USE_C_DESCRIPTORS: + + def __new__(cls, descriptor_db=None): + # pylint: disable=protected-access + return descriptor._message.DescriptorPool(descriptor_db) + + def __init__(self, descriptor_db=None): + """Initializes a Pool of protocol buffer descriptors. + + The descriptor_db argument to the constructor is provided to allow + specialized file descriptor proto lookup code to be triggered on demand. An + example would be an implementation which will read and compile a file + specified in a call to FindFileByName() and not require the call to Add() + at all. Results from this database will be cached internally here as well. + + Args: + descriptor_db: A secondary source of file descriptors. + """ + + self._internal_db = descriptor_database.DescriptorDatabase() + self._descriptor_db = descriptor_db + self._descriptors = {} + self._enum_descriptors = {} + self._service_descriptors = {} + self._file_descriptors = {} + self._toplevel_extensions = {} + # TODO(jieluo): Remove _file_desc_by_toplevel_extension after + # maybe year 2020 for compatibility issue (with 3.4.1 only).
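+    # A minimal usage sketch of the descriptor_db fallback described in the + # constructor docstring above (the file name and file_desc_proto here are + # hypothetical, for illustration only): + #   db = descriptor_database.DescriptorDatabase() + #   db.Add(file_desc_proto) + #   pool = DescriptorPool(descriptor_db=db) + #   pool.FindFileByName('my/file.proto')  # served from db on a cache miss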
+ self._file_desc_by_toplevel_extension = {} + self._top_enum_values = {} + # We store extensions in two two-level mappings: The first key is the + # descriptor of the message being extended, the second key is the extension + # full name or its tag number. + self._extensions_by_name = collections.defaultdict(dict) + self._extensions_by_number = collections.defaultdict(dict) + + def _CheckConflictRegister(self, desc, desc_name, file_name): + """Check if the descriptor name conflicts with another of the same name. + + Args: + desc: Descriptor of a message, enum, service, extension or enum value. + desc_name (str): The full name of desc. + file_name (str): The file name of the descriptor. + """ + for register, descriptor_type in [ + (self._descriptors, descriptor.Descriptor), + (self._enum_descriptors, descriptor.EnumDescriptor), + (self._service_descriptors, descriptor.ServiceDescriptor), + (self._toplevel_extensions, descriptor.FieldDescriptor), + (self._top_enum_values, descriptor.EnumValueDescriptor)]: + if desc_name in register: + old_desc = register[desc_name] + if isinstance(old_desc, descriptor.EnumValueDescriptor): + old_file = old_desc.type.file.name + else: + old_file = old_desc.file.name + + if not isinstance(desc, descriptor_type) or ( + old_file != file_name): + error_msg = ('Conflict register for file "' + file_name + + '": ' + desc_name + + ' is already defined in file "' + + old_file + '". Please fix the conflict by adding a ' 'package name to the proto file, or by using a ' 'different name for the duplicate.') + if isinstance(desc, descriptor.EnumValueDescriptor): + error_msg += ('\nNote: enum values appear as ' + 'siblings of the enum type instead of ' + 'children of it.') + + raise TypeError(error_msg) + + return + + def Add(self, file_desc_proto): + """Adds the FileDescriptorProto and its types to this pool. + + Args: + file_desc_proto (FileDescriptorProto): The file descriptor to add. + """ + + self._internal_db.Add(file_desc_proto) + + def AddSerializedFile(self, serialized_file_desc_proto): + """Adds the FileDescriptorProto and its types to this pool. + + Args: + serialized_file_desc_proto (bytes): A bytes string, serialization of the + :class:`FileDescriptorProto` to add. + + Returns: + FileDescriptor: Descriptor for the added file. + """ + + # pylint: disable=g-import-not-at-top + from google.protobuf import descriptor_pb2 + file_desc_proto = descriptor_pb2.FileDescriptorProto.FromString( + serialized_file_desc_proto) + file_desc = self._ConvertFileProtoToFileDescriptor(file_desc_proto) + file_desc.serialized_pb = serialized_file_desc_proto + return file_desc + + # Adding a Descriptor to the descriptor pool is deprecated. Please use Add() + # or AddSerializedFile() to add a FileDescriptorProto instead. + @_Deprecated + def AddDescriptor(self, desc): + self._AddDescriptor(desc) + + # Never call this method. It is for internal usage only. + def _AddDescriptor(self, desc): + """Adds a Descriptor to the pool, non-recursively. + + If the Descriptor contains nested messages or enums, the caller must + explicitly register them. This method also registers the FileDescriptor + associated with the message. + + Args: + desc: A Descriptor. + """ + if not isinstance(desc, descriptor.Descriptor): + raise TypeError('Expected instance of descriptor.Descriptor.') + + self._CheckConflictRegister(desc, desc.full_name, desc.file.name) + + self._descriptors[desc.full_name] = desc + self._AddFileDescriptor(desc.file) + + # Adding an EnumDescriptor to the descriptor pool is deprecated.
Please use Add() + # or AddSerializedFile() to add a FileDescriptorProto instead. + @_Deprecated + def AddEnumDescriptor(self, enum_desc): + self._AddEnumDescriptor(enum_desc) + + # Never call this method. It is for internal usage only. + def _AddEnumDescriptor(self, enum_desc): + """Adds an EnumDescriptor to the pool. + + This method also registers the FileDescriptor associated with the enum. + + Args: + enum_desc: An EnumDescriptor. + """ + + if not isinstance(enum_desc, descriptor.EnumDescriptor): + raise TypeError('Expected instance of descriptor.EnumDescriptor.') + + file_name = enum_desc.file.name + self._CheckConflictRegister(enum_desc, enum_desc.full_name, file_name) + self._enum_descriptors[enum_desc.full_name] = enum_desc + + # Top enum values need to be indexed. + # Count the number of dots to see whether the enum is toplevel or nested + # in a message. We cannot use enum_desc.containing_type at this stage. + if enum_desc.file.package: + top_level = (enum_desc.full_name.count('.') + - enum_desc.file.package.count('.') == 1) + else: + top_level = enum_desc.full_name.count('.') == 0 + if top_level: + file_name = enum_desc.file.name + package = enum_desc.file.package + for enum_value in enum_desc.values: + full_name = _NormalizeFullyQualifiedName( + '.'.join((package, enum_value.name))) + self._CheckConflictRegister(enum_value, full_name, file_name) + self._top_enum_values[full_name] = enum_value + self._AddFileDescriptor(enum_desc.file) + + # Adding a ServiceDescriptor to the descriptor pool is deprecated. Please use Add() + # or AddSerializedFile() to add a FileDescriptorProto instead. + @_Deprecated + def AddServiceDescriptor(self, service_desc): + self._AddServiceDescriptor(service_desc) + + # Never call this method. It is for internal usage only. + def _AddServiceDescriptor(self, service_desc): + """Adds a ServiceDescriptor to the pool. + + Args: + service_desc: A ServiceDescriptor. + """ + + if not isinstance(service_desc, descriptor.ServiceDescriptor): + raise TypeError('Expected instance of descriptor.ServiceDescriptor.') + + self._CheckConflictRegister(service_desc, service_desc.full_name, + service_desc.file.name) + self._service_descriptors[service_desc.full_name] = service_desc + + # Adding an ExtensionDescriptor to the descriptor pool is deprecated. Please use Add() + # or AddSerializedFile() to add a FileDescriptorProto instead. + @_Deprecated + def AddExtensionDescriptor(self, extension): + self._AddExtensionDescriptor(extension) + + # Never call this method. It is for internal usage only. + def _AddExtensionDescriptor(self, extension): + """Adds a FieldDescriptor describing an extension to the pool. + + Args: + extension: A FieldDescriptor. + + Raises: + AssertionError: when another extension with the same number extends the + same message. + TypeError: when the specified extension is not a + descriptor.FieldDescriptor. + """ + if not (isinstance(extension, descriptor.FieldDescriptor) and + extension.is_extension): + raise TypeError('Expected an extension descriptor.') + + if extension.extension_scope is None: + self._toplevel_extensions[extension.full_name] = extension + + try: + existing_desc = self._extensions_by_number[ + extension.containing_type][extension.number] + except KeyError: + pass + else: + if extension is not existing_desc: + raise AssertionError( + 'Extensions "%s" and "%s" both try to extend message type "%s" ' + 'with field number %d.'
% + (extension.full_name, existing_desc.full_name, + extension.containing_type.full_name, extension.number)) + + self._extensions_by_number[extension.containing_type][ + extension.number] = extension + self._extensions_by_name[extension.containing_type][ + extension.full_name] = extension + + # Also register MessageSet extensions with the type name. + if _IsMessageSetExtension(extension): + self._extensions_by_name[extension.containing_type][ + extension.message_type.full_name] = extension + + @_Deprecated + def AddFileDescriptor(self, file_desc): + self._InternalAddFileDescriptor(file_desc) + + # Never call this method. It is for internal usage only. + def _InternalAddFileDescriptor(self, file_desc): + """Adds a FileDescriptor to the pool, non-recursively. + + If the FileDescriptor contains messages or enums, the caller must explicitly + register them. + + Args: + file_desc: A FileDescriptor. + """ + + self._AddFileDescriptor(file_desc) + # TODO(jieluo): This is a temporary solution for FieldDescriptor.file. + # FieldDescriptor.file is added in code gen. Remove this solution after + # maybe 2020 for compatibility reason (with 3.4.1 only). + for extension in file_desc.extensions_by_name.values(): + self._file_desc_by_toplevel_extension[ + extension.full_name] = file_desc + + def _AddFileDescriptor(self, file_desc): + """Adds a FileDescriptor to the pool, non-recursively. + + If the FileDescriptor contains messages or enums, the caller must explicitly + register them. + + Args: + file_desc: A FileDescriptor. + """ + + if not isinstance(file_desc, descriptor.FileDescriptor): + raise TypeError('Expected instance of descriptor.FileDescriptor.') + self._file_descriptors[file_desc.name] = file_desc + + def FindFileByName(self, file_name): + """Gets a FileDescriptor by file name. + + Args: + file_name (str): The path to the file to get a descriptor for. + + Returns: + FileDescriptor: The descriptor for the named file. + + Raises: + KeyError: if the file cannot be found in the pool. + """ + + try: + return self._file_descriptors[file_name] + except KeyError: + pass + + try: + file_proto = self._internal_db.FindFileByName(file_name) + except KeyError as error: + if self._descriptor_db: + file_proto = self._descriptor_db.FindFileByName(file_name) + else: + raise error + if not file_proto: + raise KeyError('Cannot find a file named %s' % file_name) + return self._ConvertFileProtoToFileDescriptor(file_proto) + + def FindFileContainingSymbol(self, symbol): + """Gets the FileDescriptor for the file containing the specified symbol. + + Args: + symbol (str): The name of the symbol to search for. + + Returns: + FileDescriptor: Descriptor for the file that contains the specified + symbol. + + Raises: + KeyError: if the file cannot be found in the pool. + """ + + symbol = _NormalizeFullyQualifiedName(symbol) + try: + return self._InternalFindFileContainingSymbol(symbol) + except KeyError: + pass + + try: + # Try fallback database. Build and find again if possible. + self._FindFileContainingSymbolInDb(symbol) + return self._InternalFindFileContainingSymbol(symbol) + except KeyError: + raise KeyError('Cannot find a file containing %s' % symbol) + + def _InternalFindFileContainingSymbol(self, symbol): + """Gets the already built FileDescriptor containing the specified symbol. + + Args: + symbol (str): The name of the symbol to search for. + + Returns: + FileDescriptor: Descriptor for the file that contains the specified + symbol. + + Raises: + KeyError: if the file cannot be found in the pool. 
+ """ + try: + return self._descriptors[symbol].file + except KeyError: + pass + + try: + return self._enum_descriptors[symbol].file + except KeyError: + pass + + try: + return self._service_descriptors[symbol].file + except KeyError: + pass + + try: + return self._top_enum_values[symbol].type.file + except KeyError: + pass + + try: + return self._file_desc_by_toplevel_extension[symbol] + except KeyError: + pass + + # Try fields, enum values and nested extensions inside a message. + top_name, _, sub_name = symbol.rpartition('.') + try: + message = self.FindMessageTypeByName(top_name) + assert (sub_name in message.extensions_by_name or + sub_name in message.fields_by_name or + sub_name in message.enum_values_by_name) + return message.file + except (KeyError, AssertionError): + raise KeyError('Cannot find a file containing %s' % symbol) + + def FindMessageTypeByName(self, full_name): + """Loads the named descriptor from the pool. + + Args: + full_name (str): The full name of the descriptor to load. + + Returns: + Descriptor: The descriptor for the named type. + + Raises: + KeyError: if the message cannot be found in the pool. + """ + + full_name = _NormalizeFullyQualifiedName(full_name) + if full_name not in self._descriptors: + self._FindFileContainingSymbolInDb(full_name) + return self._descriptors[full_name] + + def FindEnumTypeByName(self, full_name): + """Loads the named enum descriptor from the pool. + + Args: + full_name (str): The full name of the enum descriptor to load. + + Returns: + EnumDescriptor: The enum descriptor for the named type. + + Raises: + KeyError: if the enum cannot be found in the pool. + """ + + full_name = _NormalizeFullyQualifiedName(full_name) + if full_name not in self._enum_descriptors: + self._FindFileContainingSymbolInDb(full_name) + return self._enum_descriptors[full_name] + + def FindFieldByName(self, full_name): + """Loads the named field descriptor from the pool. + + Args: + full_name (str): The full name of the field descriptor to load. + + Returns: + FieldDescriptor: The field descriptor for the named field. + + Raises: + KeyError: if the field cannot be found in the pool. + """ + full_name = _NormalizeFullyQualifiedName(full_name) + message_name, _, field_name = full_name.rpartition('.') + message_descriptor = self.FindMessageTypeByName(message_name) + return message_descriptor.fields_by_name[field_name] + + def FindOneofByName(self, full_name): + """Loads the named oneof descriptor from the pool. + + Args: + full_name (str): The full name of the oneof descriptor to load. + + Returns: + OneofDescriptor: The oneof descriptor for the named oneof. + + Raises: + KeyError: if the oneof cannot be found in the pool. + """ + full_name = _NormalizeFullyQualifiedName(full_name) + message_name, _, oneof_name = full_name.rpartition('.') + message_descriptor = self.FindMessageTypeByName(message_name) + return message_descriptor.oneofs_by_name[oneof_name] + + def FindExtensionByName(self, full_name): + """Loads the named extension descriptor from the pool. + + Args: + full_name (str): The full name of the extension descriptor to load. + + Returns: + FieldDescriptor: The field descriptor for the named extension. + + Raises: + KeyError: if the extension cannot be found in the pool. + """ + full_name = _NormalizeFullyQualifiedName(full_name) + try: + # The proto compiler does not give any link between the FileDescriptor + # and top-level extensions unless the FileDescriptorProto is added to + # the DescriptorDatabase, but this can impact memory usage. 
+ # So we register these extensions by name explicitly. + return self._toplevel_extensions[full_name] + except KeyError: + pass + message_name, _, extension_name = full_name.rpartition('.') + try: + # Most extensions are nested inside a message. + scope = self.FindMessageTypeByName(message_name) + except KeyError: + # Some extensions are defined at file scope. + scope = self._FindFileContainingSymbolInDb(full_name) + return scope.extensions_by_name[extension_name] + + def FindExtensionByNumber(self, message_descriptor, number): + """Gets the extension of the specified message with the specified number. + + Extensions have to be registered to this pool by calling :func:`Add` or + :func:`AddExtensionDescriptor`. + + Args: + message_descriptor (Descriptor): descriptor of the extended message. + number (int): Number of the extension field. + + Returns: + FieldDescriptor: The descriptor for the extension. + + Raises: + KeyError: when no extension with the given number is known for the + specified message. + """ + try: + return self._extensions_by_number[message_descriptor][number] + except KeyError: + self._TryLoadExtensionFromDB(message_descriptor, number) + return self._extensions_by_number[message_descriptor][number] + + def FindAllExtensions(self, message_descriptor): + """Gets all the known extensions of a given message. + + Extensions have to be registered to this pool by calling :func:`Add` or + :func:`AddExtensionDescriptor`. + + Args: + message_descriptor (Descriptor): Descriptor of the extended message. + + Returns: + list[FieldDescriptor]: Field descriptors describing the extensions. + """ + # Fallback to descriptor db if FindAllExtensionNumbers is provided. + if self._descriptor_db and hasattr( + self._descriptor_db, 'FindAllExtensionNumbers'): + full_name = message_descriptor.full_name + all_numbers = self._descriptor_db.FindAllExtensionNumbers(full_name) + for number in all_numbers: + if number in self._extensions_by_number[message_descriptor]: + continue + self._TryLoadExtensionFromDB(message_descriptor, number) + + return list(self._extensions_by_number[message_descriptor].values()) + + def _TryLoadExtensionFromDB(self, message_descriptor, number): + """Try to load extensions from the descriptor db. + + Args: + message_descriptor: descriptor of the extended message. + number: the extension number that needs to be loaded. + """ + if not self._descriptor_db: + return + # Only supported when FindFileContainingExtension is provided. + if not hasattr( + self._descriptor_db, 'FindFileContainingExtension'): + return + + full_name = message_descriptor.full_name + file_proto = self._descriptor_db.FindFileContainingExtension( + full_name, number) + + if file_proto is None: + return + + try: + self._ConvertFileProtoToFileDescriptor(file_proto) + except Exception: + warn_msg = ('Unable to load proto file %s for extension number %d.' % + (file_proto.name, number)) + warnings.warn(warn_msg, RuntimeWarning) + + def FindServiceByName(self, full_name): + """Loads the named service descriptor from the pool. + + Args: + full_name (str): The full name of the service descriptor to load. + + Returns: + ServiceDescriptor: The service descriptor for the named service. + + Raises: + KeyError: if the service cannot be found in the pool.
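+ + Example (illustrative; `pool` is a DescriptorPool instance and the + service name is hypothetical):: + + service_desc = pool.FindServiceByName('my.package.MyService')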
+ """ + full_name = _NormalizeFullyQualifiedName(full_name) + if full_name not in self._service_descriptors: + self._FindFileContainingSymbolInDb(full_name) + return self._service_descriptors[full_name] + + def FindMethodByName(self, full_name): + """Loads the named service method descriptor from the pool. + + Args: + full_name (str): The full name of the method descriptor to load. + + Returns: + MethodDescriptor: The method descriptor for the service method. + + Raises: + KeyError: if the method cannot be found in the pool. + """ + full_name = _NormalizeFullyQualifiedName(full_name) + service_name, _, method_name = full_name.rpartition('.') + service_descriptor = self.FindServiceByName(service_name) + return service_descriptor.methods_by_name[method_name] + + def _FindFileContainingSymbolInDb(self, symbol): + """Finds the file in descriptor DB containing the specified symbol. + + Args: + symbol (str): The name of the symbol to search for. + + Returns: + FileDescriptor: The file that contains the specified symbol. + + Raises: + KeyError: if the file cannot be found in the descriptor database. + """ + try: + file_proto = self._internal_db.FindFileContainingSymbol(symbol) + except KeyError as error: + if self._descriptor_db: + file_proto = self._descriptor_db.FindFileContainingSymbol(symbol) + else: + raise error + if not file_proto: + raise KeyError('Cannot find a file containing %s' % symbol) + return self._ConvertFileProtoToFileDescriptor(file_proto) + + def _ConvertFileProtoToFileDescriptor(self, file_proto): + """Creates a FileDescriptor from a proto or returns a cached copy. + + This method also has the side effect of loading all the symbols found in + the file into the appropriate dictionaries in the pool. + + Args: + file_proto: The proto to convert. + + Returns: + A FileDescriptor matching the passed in proto. + """ + if file_proto.name not in self._file_descriptors: + built_deps = list(self._GetDeps(file_proto.dependency)) + direct_deps = [self.FindFileByName(n) for n in file_proto.dependency] + public_deps = [direct_deps[i] for i in file_proto.public_dependency] + + file_descriptor = descriptor.FileDescriptor( + pool=self, + name=file_proto.name, + package=file_proto.package, + syntax=file_proto.syntax, + options=_OptionsOrNone(file_proto), + serialized_pb=file_proto.SerializeToString(), + dependencies=direct_deps, + public_dependencies=public_deps, + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + scope = {} + + # This loop extracts all the message and enum types from all the + # dependencies of the file_proto. This is necessary to create the + # scope of available message types when defining the passed in + # file proto. 
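+      # (Illustrative shape of the resulting scope dict; the names are
+      # hypothetical, not taken from any real file:
+      #    {'.foo.Outer': <Descriptor>, '.foo.Color': <EnumDescriptor>} )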
+ for dependency in built_deps: + scope.update(self._ExtractSymbols( + dependency.message_types_by_name.values())) + scope.update((_PrefixWithDot(enum.full_name), enum) + for enum in dependency.enum_types_by_name.values()) + + for message_type in file_proto.message_type: + message_desc = self._ConvertMessageDescriptor( + message_type, file_proto.package, file_descriptor, scope, + file_proto.syntax) + file_descriptor.message_types_by_name[message_desc.name] = ( + message_desc) + + for enum_type in file_proto.enum_type: + file_descriptor.enum_types_by_name[enum_type.name] = ( + self._ConvertEnumDescriptor(enum_type, file_proto.package, + file_descriptor, None, scope, True)) + + for index, extension_proto in enumerate(file_proto.extension): + extension_desc = self._MakeFieldDescriptor( + extension_proto, file_proto.package, index, file_descriptor, + is_extension=True) + extension_desc.containing_type = self._GetTypeFromScope( + file_descriptor.package, extension_proto.extendee, scope) + self._SetFieldType(extension_proto, extension_desc, + file_descriptor.package, scope) + file_descriptor.extensions_by_name[extension_desc.name] = ( + extension_desc) + self._file_desc_by_toplevel_extension[extension_desc.full_name] = ( + file_descriptor) + + for desc_proto in file_proto.message_type: + self._SetAllFieldTypes(file_proto.package, desc_proto, scope) + + if file_proto.package: + desc_proto_prefix = _PrefixWithDot(file_proto.package) + else: + desc_proto_prefix = '' + + for desc_proto in file_proto.message_type: + desc = self._GetTypeFromScope( + desc_proto_prefix, desc_proto.name, scope) + file_descriptor.message_types_by_name[desc_proto.name] = desc + + for index, service_proto in enumerate(file_proto.service): + file_descriptor.services_by_name[service_proto.name] = ( + self._MakeServiceDescriptor(service_proto, index, scope, + file_proto.package, file_descriptor)) + + self._file_descriptors[file_proto.name] = file_descriptor + + # Add extensions to the pool + file_desc = self._file_descriptors[file_proto.name] + for extension in file_desc.extensions_by_name.values(): + self._AddExtensionDescriptor(extension) + for message_type in file_desc.message_types_by_name.values(): + for extension in message_type.extensions: + self._AddExtensionDescriptor(extension) + + return file_desc + + def _ConvertMessageDescriptor(self, desc_proto, package=None, file_desc=None, + scope=None, syntax=None): + """Adds the proto to the pool in the specified package. + + Args: + desc_proto: The descriptor_pb2.DescriptorProto protobuf message. + package: The package the proto should be located in. + file_desc: The file containing this message. + scope: Dict mapping short and full symbols to message and enum types. + syntax: string indicating syntax of the file ("proto2" or "proto3") + + Returns: + The added descriptor. 
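+
+    Example (illustrative): for a file in package `foo` declaring
+    `message Bar { message Baz {} }`, this is first called with the Bar
+    proto and package='foo', and recurses for Baz with the enclosing
+    name 'foo.Bar'.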
+ """ + + if package: + desc_name = '.'.join((package, desc_proto.name)) + else: + desc_name = desc_proto.name + + if file_desc is None: + file_name = None + else: + file_name = file_desc.name + + if scope is None: + scope = {} + + nested = [ + self._ConvertMessageDescriptor( + nested, desc_name, file_desc, scope, syntax) + for nested in desc_proto.nested_type] + enums = [ + self._ConvertEnumDescriptor(enum, desc_name, file_desc, None, + scope, False) + for enum in desc_proto.enum_type] + fields = [self._MakeFieldDescriptor(field, desc_name, index, file_desc) + for index, field in enumerate(desc_proto.field)] + extensions = [ + self._MakeFieldDescriptor(extension, desc_name, index, file_desc, + is_extension=True) + for index, extension in enumerate(desc_proto.extension)] + oneofs = [ + # pylint: disable=g-complex-comprehension + descriptor.OneofDescriptor( + desc.name, + '.'.join((desc_name, desc.name)), + index, + None, + [], + _OptionsOrNone(desc), + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + for index, desc in enumerate(desc_proto.oneof_decl) + ] + extension_ranges = [(r.start, r.end) for r in desc_proto.extension_range] + if extension_ranges: + is_extendable = True + else: + is_extendable = False + desc = descriptor.Descriptor( + name=desc_proto.name, + full_name=desc_name, + filename=file_name, + containing_type=None, + fields=fields, + oneofs=oneofs, + nested_types=nested, + enum_types=enums, + extensions=extensions, + options=_OptionsOrNone(desc_proto), + is_extendable=is_extendable, + extension_ranges=extension_ranges, + file=file_desc, + serialized_start=None, + serialized_end=None, + syntax=syntax, + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + for nested in desc.nested_types: + nested.containing_type = desc + for enum in desc.enum_types: + enum.containing_type = desc + for field_index, field_desc in enumerate(desc_proto.field): + if field_desc.HasField('oneof_index'): + oneof_index = field_desc.oneof_index + oneofs[oneof_index].fields.append(fields[field_index]) + fields[field_index].containing_oneof = oneofs[oneof_index] + + scope[_PrefixWithDot(desc_name)] = desc + self._CheckConflictRegister(desc, desc.full_name, desc.file.name) + self._descriptors[desc_name] = desc + return desc + + def _ConvertEnumDescriptor(self, enum_proto, package=None, file_desc=None, + containing_type=None, scope=None, top_level=False): + """Make a protobuf EnumDescriptor given an EnumDescriptorProto protobuf. + + Args: + enum_proto: The descriptor_pb2.EnumDescriptorProto protobuf message. + package: Optional package name for the new message EnumDescriptor. + file_desc: The file containing the enum descriptor. + containing_type: The type containing this enum. + scope: Scope containing available types. + top_level: If True, the enum is a top level symbol. If False, the enum + is defined inside a message. 
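+        (Illustrative: for a file-scope `enum Color { RED = 0; }` in
+        package foo, the RED value is also registered under the full name
+        'foo.RED', mirroring how protoc scopes top-level enum values.)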
+ + Returns: + The added descriptor + """ + + if package: + enum_name = '.'.join((package, enum_proto.name)) + else: + enum_name = enum_proto.name + + if file_desc is None: + file_name = None + else: + file_name = file_desc.name + + values = [self._MakeEnumValueDescriptor(value, index) + for index, value in enumerate(enum_proto.value)] + desc = descriptor.EnumDescriptor(name=enum_proto.name, + full_name=enum_name, + filename=file_name, + file=file_desc, + values=values, + containing_type=containing_type, + options=_OptionsOrNone(enum_proto), + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + scope['.%s' % enum_name] = desc + self._CheckConflictRegister(desc, desc.full_name, desc.file.name) + self._enum_descriptors[enum_name] = desc + + # Add top level enum values. + if top_level: + for value in values: + full_name = _NormalizeFullyQualifiedName( + '.'.join((package, value.name))) + self._CheckConflictRegister(value, full_name, file_name) + self._top_enum_values[full_name] = value + + return desc + + def _MakeFieldDescriptor(self, field_proto, message_name, index, + file_desc, is_extension=False): + """Creates a field descriptor from a FieldDescriptorProto. + + For message and enum type fields, this method will do a look up + in the pool for the appropriate descriptor for that type. If it + is unavailable, it will fall back to the _source function to + create it. If this type is still unavailable, construction will + fail. + + Args: + field_proto: The proto describing the field. + message_name: The name of the containing message. + index: Index of the field + file_desc: The file containing the field descriptor. + is_extension: Indication that this field is for an extension. + + Returns: + An initialized FieldDescriptor object + """ + + if message_name: + full_name = '.'.join((message_name, field_proto.name)) + else: + full_name = field_proto.name + + if field_proto.json_name: + json_name = field_proto.json_name + else: + json_name = None + + return descriptor.FieldDescriptor( + name=field_proto.name, + full_name=full_name, + index=index, + number=field_proto.number, + type=field_proto.type, + cpp_type=None, + message_type=None, + enum_type=None, + containing_type=None, + label=field_proto.label, + has_default_value=False, + default_value=None, + is_extension=is_extension, + extension_scope=None, + options=_OptionsOrNone(field_proto), + json_name=json_name, + file=file_desc, + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + + def _SetAllFieldTypes(self, package, desc_proto, scope): + """Sets all the descriptor's fields's types. + + This method also sets the containing types on any extensions. + + Args: + package: The current package of desc_proto. + desc_proto: The message descriptor to update. + scope: Enclosing scope of available types. 
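+
+    Example (illustrative): with package='foo' and a message `Bar`, field
+    types are resolved against the nested scope '.foo.Bar', so a field
+    declared as `Baz` resolves to foo.Bar.Baz or, failing that, foo.Baz.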
+ """ + + package = _PrefixWithDot(package) + + main_desc = self._GetTypeFromScope(package, desc_proto.name, scope) + + if package == '.': + nested_package = _PrefixWithDot(desc_proto.name) + else: + nested_package = '.'.join([package, desc_proto.name]) + + for field_proto, field_desc in zip(desc_proto.field, main_desc.fields): + self._SetFieldType(field_proto, field_desc, nested_package, scope) + + for extension_proto, extension_desc in ( + zip(desc_proto.extension, main_desc.extensions)): + extension_desc.containing_type = self._GetTypeFromScope( + nested_package, extension_proto.extendee, scope) + self._SetFieldType(extension_proto, extension_desc, nested_package, scope) + + for nested_type in desc_proto.nested_type: + self._SetAllFieldTypes(nested_package, nested_type, scope) + + def _SetFieldType(self, field_proto, field_desc, package, scope): + """Sets the field's type, cpp_type, message_type and enum_type. + + Args: + field_proto: Data about the field in proto format. + field_desc: The descriptor to modify. + package: The package the field's container is in. + scope: Enclosing scope of available types. + """ + if field_proto.type_name: + desc = self._GetTypeFromScope(package, field_proto.type_name, scope) + else: + desc = None + + if not field_proto.HasField('type'): + if isinstance(desc, descriptor.Descriptor): + field_proto.type = descriptor.FieldDescriptor.TYPE_MESSAGE + else: + field_proto.type = descriptor.FieldDescriptor.TYPE_ENUM + + field_desc.cpp_type = descriptor.FieldDescriptor.ProtoTypeToCppProtoType( + field_proto.type) + + if (field_proto.type == descriptor.FieldDescriptor.TYPE_MESSAGE + or field_proto.type == descriptor.FieldDescriptor.TYPE_GROUP): + field_desc.message_type = desc + + if field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM: + field_desc.enum_type = desc + + if field_proto.label == descriptor.FieldDescriptor.LABEL_REPEATED: + field_desc.has_default_value = False + field_desc.default_value = [] + elif field_proto.HasField('default_value'): + field_desc.has_default_value = True + if (field_proto.type == descriptor.FieldDescriptor.TYPE_DOUBLE or + field_proto.type == descriptor.FieldDescriptor.TYPE_FLOAT): + field_desc.default_value = float(field_proto.default_value) + elif field_proto.type == descriptor.FieldDescriptor.TYPE_STRING: + field_desc.default_value = field_proto.default_value + elif field_proto.type == descriptor.FieldDescriptor.TYPE_BOOL: + field_desc.default_value = field_proto.default_value.lower() == 'true' + elif field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM: + field_desc.default_value = field_desc.enum_type.values_by_name[ + field_proto.default_value].number + elif field_proto.type == descriptor.FieldDescriptor.TYPE_BYTES: + field_desc.default_value = text_encoding.CUnescape( + field_proto.default_value) + elif field_proto.type == descriptor.FieldDescriptor.TYPE_MESSAGE: + field_desc.default_value = None + else: + # All other types are of the "int" type. 
+        field_desc.default_value = int(field_proto.default_value)
+    else:
+      field_desc.has_default_value = False
+      if (field_proto.type == descriptor.FieldDescriptor.TYPE_DOUBLE or
+          field_proto.type == descriptor.FieldDescriptor.TYPE_FLOAT):
+        field_desc.default_value = 0.0
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_STRING:
+        field_desc.default_value = u''
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_BOOL:
+        field_desc.default_value = False
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM:
+        field_desc.default_value = field_desc.enum_type.values[0].number
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_BYTES:
+        field_desc.default_value = b''
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_MESSAGE:
+        field_desc.default_value = None
+      elif field_proto.type == descriptor.FieldDescriptor.TYPE_GROUP:
+        field_desc.default_value = None
+      else:
+        # All other types are of the "int" type.
+        field_desc.default_value = 0
+
+    field_desc.type = field_proto.type
+
+  def _MakeEnumValueDescriptor(self, value_proto, index):
+    """Creates an enum value descriptor object from an enum value proto.
+
+    Args:
+      value_proto: The proto describing the enum value.
+      index: The index of the enum value.
+
+    Returns:
+      An initialized EnumValueDescriptor object.
+    """
+
+    return descriptor.EnumValueDescriptor(
+        name=value_proto.name,
+        index=index,
+        number=value_proto.number,
+        options=_OptionsOrNone(value_proto),
+        type=None,
+        # pylint: disable=protected-access
+        create_key=descriptor._internal_create_key)
+
+  def _MakeServiceDescriptor(self, service_proto, service_index, scope,
+                             package, file_desc):
+    """Make a protobuf ServiceDescriptor given a ServiceDescriptorProto.
+
+    Args:
+      service_proto: The descriptor_pb2.ServiceDescriptorProto protobuf message.
+      service_index: The index of the service in the File.
+      scope: Dict mapping short and full symbols to message and enum types.
+      package: Optional package name for the new ServiceDescriptor.
+      file_desc: The file containing the service descriptor.
+
+    Returns:
+      The added descriptor.
+    """
+
+    if package:
+      service_name = '.'.join((package, service_proto.name))
+    else:
+      service_name = service_proto.name
+
+    methods = [self._MakeMethodDescriptor(method_proto, service_name, package,
+                                          scope, index)
+               for index, method_proto in enumerate(service_proto.method)]
+    desc = descriptor.ServiceDescriptor(
+        name=service_proto.name,
+        full_name=service_name,
+        index=service_index,
+        methods=methods,
+        options=_OptionsOrNone(service_proto),
+        file=file_desc,
+        # pylint: disable=protected-access
+        create_key=descriptor._internal_create_key)
+    self._CheckConflictRegister(desc, desc.full_name, desc.file.name)
+    self._service_descriptors[service_name] = desc
+    return desc
+
+  def _MakeMethodDescriptor(self, method_proto, service_name, package, scope,
+                            index):
+    """Creates a method descriptor from a MethodDescriptorProto.
+
+    Args:
+      method_proto: The proto describing the method.
+      service_name: The name of the containing service.
+      package: Optional package name to look up for types.
+      scope: Scope containing available types.
+      index: Index of the method in the service.
+
+    Returns:
+      An initialized MethodDescriptor object.
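+
+    Example (illustrative): for service_name='foo.Search' and a method
+    proto for `rpc Find(Req) returns (Resp);`, the result has full_name
+    'foo.Search.Find', with Req and Resp resolved via _GetTypeFromScope.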
+ """ + full_name = '.'.join((service_name, method_proto.name)) + input_type = self._GetTypeFromScope( + package, method_proto.input_type, scope) + output_type = self._GetTypeFromScope( + package, method_proto.output_type, scope) + return descriptor.MethodDescriptor( + name=method_proto.name, + full_name=full_name, + index=index, + containing_service=None, + input_type=input_type, + output_type=output_type, + client_streaming=method_proto.client_streaming, + server_streaming=method_proto.server_streaming, + options=_OptionsOrNone(method_proto), + # pylint: disable=protected-access + create_key=descriptor._internal_create_key) + + def _ExtractSymbols(self, descriptors): + """Pulls out all the symbols from descriptor protos. + + Args: + descriptors: The messages to extract descriptors from. + Yields: + A two element tuple of the type name and descriptor object. + """ + + for desc in descriptors: + yield (_PrefixWithDot(desc.full_name), desc) + for symbol in self._ExtractSymbols(desc.nested_types): + yield symbol + for enum in desc.enum_types: + yield (_PrefixWithDot(enum.full_name), enum) + + def _GetDeps(self, dependencies, visited=None): + """Recursively finds dependencies for file protos. + + Args: + dependencies: The names of the files being depended on. + visited: The names of files already found. + + Yields: + Each direct and indirect dependency. + """ + + visited = visited or set() + for dependency in dependencies: + if dependency not in visited: + visited.add(dependency) + dep_desc = self.FindFileByName(dependency) + yield dep_desc + public_files = [d.name for d in dep_desc.public_dependencies] + yield from self._GetDeps(public_files, visited) + + def _GetTypeFromScope(self, package, type_name, scope): + """Finds a given type name in the current scope. + + Args: + package: The package the proto should be located in. + type_name: The name of the type to be found in the scope. + scope: Dict mapping short and full symbols to message and enum types. + + Returns: + The descriptor for the requested type. + """ + if type_name not in scope: + components = _PrefixWithDot(package).split('.') + while components: + possible_match = '.'.join(components + [type_name]) + if possible_match in scope: + type_name = possible_match + break + else: + components.pop(-1) + return scope[type_name] + + +def _PrefixWithDot(name): + return name if name.startswith('.') else '.%s' % name + + +if _USE_C_DESCRIPTORS: + # TODO(amauryfa): This pool could be constructed from Python code, when we + # support a flag like 'use_cpp_generated_pool=True'. + # pylint: disable=protected-access + _DEFAULT = descriptor._message.default_pool +else: + _DEFAULT = DescriptorPool() + + +def Default(): + return _DEFAULT diff --git a/env/lib/python3.8/site-packages/google/protobuf/duration_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/duration_pb2.py new file mode 100644 index 00000000..a8ecc07b --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/duration_pb2.py @@ -0,0 +1,26 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: google/protobuf/duration.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1egoogle/protobuf/duration.proto\x12\x0fgoogle.protobuf\"*\n\x08\x44uration\x12\x0f\n\x07seconds\x18\x01 \x01(\x03\x12\r\n\x05nanos\x18\x02 \x01(\x05\x42\x83\x01\n\x13\x63om.google.protobufB\rDurationProtoP\x01Z1google.golang.org/protobuf/types/known/durationpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.duration_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\rDurationProtoP\001Z1google.golang.org/protobuf/types/known/durationpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' + _DURATION._serialized_start=51 + _DURATION._serialized_end=93 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/empty_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/empty_pb2.py new file mode 100644 index 00000000..0b4d554d --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/empty_pb2.py @@ -0,0 +1,26 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/empty.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1bgoogle/protobuf/empty.proto\x12\x0fgoogle.protobuf\"\x07\n\x05\x45mptyB}\n\x13\x63om.google.protobufB\nEmptyProtoP\x01Z.google.golang.org/protobuf/types/known/emptypb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.empty_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\nEmptyProtoP\001Z.google.golang.org/protobuf/types/known/emptypb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' + _EMPTY._serialized_start=48 + _EMPTY._serialized_end=55 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/field_mask_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/field_mask_pb2.py new file mode 100644 index 00000000..80a4e96e --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/field_mask_pb2.py @@ -0,0 +1,26 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: google/protobuf/field_mask.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n google/protobuf/field_mask.proto\x12\x0fgoogle.protobuf\"\x1a\n\tFieldMask\x12\r\n\x05paths\x18\x01 \x03(\tB\x85\x01\n\x13\x63om.google.protobufB\x0e\x46ieldMaskProtoP\x01Z2google.golang.org/protobuf/types/known/fieldmaskpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.field_mask_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\016FieldMaskProtoP\001Z2google.golang.org/protobuf/types/known/fieldmaskpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' + _FIELDMASK._serialized_start=53 + _FIELDMASK._serialized_end=79 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/__init__.py b/env/lib/python3.8/site-packages/google/protobuf/internal/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/_api_implementation.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/google/protobuf/internal/_api_implementation.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..255ab1a7 Binary files /dev/null and b/env/lib/python3.8/site-packages/google/protobuf/internal/_api_implementation.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/api_implementation.py b/env/lib/python3.8/site-packages/google/protobuf/internal/api_implementation.py new file mode 100644 index 00000000..7fef2376 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/internal/api_implementation.py @@ -0,0 +1,112 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Determine which implementation of the protobuf API is used in this process. +""" + +import os +import sys +import warnings + +try: + # pylint: disable=g-import-not-at-top + from google.protobuf.internal import _api_implementation + # The compile-time constants in the _api_implementation module can be used to + # switch to a certain implementation of the Python API at build time. + _api_version = _api_implementation.api_version +except ImportError: + _api_version = -1 # Unspecified by compiler flags. + +if _api_version == 1: + raise ValueError('api_version=1 is no longer supported.') + + +_default_implementation_type = ('cpp' if _api_version > 0 else 'python') + + +# This environment variable can be used to switch to a certain implementation +# of the Python API, overriding the compile-time constants in the +# _api_implementation module. Right now only 'python' and 'cpp' are valid +# values. Any other value will be ignored. +_implementation_type = os.getenv('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION', + _default_implementation_type) + +if _implementation_type != 'python': + _implementation_type = 'cpp' + +if 'PyPy' in sys.version and _implementation_type == 'cpp': + warnings.warn('PyPy does not work yet with cpp protocol buffers. ' + 'Falling back to the python implementation.') + _implementation_type = 'python' + + +# Detect if serialization should be deterministic by default +try: + # The presence of this module in a build allows the proto implementation to + # be upgraded merely via build deps. + # + # NOTE: Merely importing this automatically enables deterministic proto + # serialization for C++ code, but we still need to export it as a boolean so + # that we can do the same for `_implementation_type == 'python'`. + # + # NOTE2: It is possible for C++ code to enable deterministic serialization by + # default _without_ affecting Python code, if the C++ implementation is not in + # use by this module. That is intended behavior, so we don't actually expose + # this boolean outside of this module. + # + # pylint: disable=g-import-not-at-top,unused-import + from google.protobuf import enable_deterministic_proto_serialization + _python_deterministic_proto_serialization = True +except ImportError: + _python_deterministic_proto_serialization = False + + +# Usage of this function is discouraged. Clients shouldn't care which +# implementation of the API is in use. Note that there is no guarantee +# that differences between APIs will be maintained. +# Please don't use this function if possible. +def Type(): + return _implementation_type + + +def _SetType(implementation_type): + """Never use! Only for protobuf benchmark.""" + global _implementation_type + _implementation_type = implementation_type + + +# See comment on 'Type' above. 
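+# (Illustrative: setting PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python in the
+# environment before the first protobuf import forces the pure-Python API,
+# overriding any compile-time default chosen above.)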
+def Version(): + return 2 + + +# For internal use only +def IsPythonDefaultSerializationDeterministic(): + return _python_deterministic_proto_serialization diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/builder.py b/env/lib/python3.8/site-packages/google/protobuf/internal/builder.py new file mode 100644 index 00000000..64353ee4 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/internal/builder.py @@ -0,0 +1,130 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Builds descriptors, message classes and services for generated _pb2.py. + +This file is only called in python generated _pb2.py files. It builds +descriptors, message classes and services that users can directly use +in generated code. +""" + +__author__ = 'jieluo@google.com (Jie Luo)' + +from google.protobuf.internal import enum_type_wrapper +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database + +_sym_db = _symbol_database.Default() + + +def BuildMessageAndEnumDescriptors(file_des, module): + """Builds message and enum descriptors. + + Args: + file_des: FileDescriptor of the .proto file + module: Generated _pb2 module + """ + + def BuildNestedDescriptors(msg_des, prefix): + for (name, nested_msg) in msg_des.nested_types_by_name.items(): + module_name = prefix + name.upper() + module[module_name] = nested_msg + BuildNestedDescriptors(nested_msg, module_name + '_') + for enum_des in msg_des.enum_types: + module[prefix + enum_des.name.upper()] = enum_des + + for (name, msg_des) in file_des.message_types_by_name.items(): + module_name = '_' + name.upper() + module[module_name] = msg_des + BuildNestedDescriptors(msg_des, module_name + '_') + + +def BuildTopDescriptorsAndMessages(file_des, module_name, module): + """Builds top level descriptors and message classes. 
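+
+  Generated _pb2 modules call this roughly as (illustrative, mirroring the
+  generated files above):
+    _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'my.proto_pb2', globals())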
+ + Args: + file_des: FileDescriptor of the .proto file + module_name: str, the name of generated _pb2 module + module: Generated _pb2 module + """ + + def BuildMessage(msg_des): + create_dict = {} + for (name, nested_msg) in msg_des.nested_types_by_name.items(): + create_dict[name] = BuildMessage(nested_msg) + create_dict['DESCRIPTOR'] = msg_des + create_dict['__module__'] = module_name + message_class = _reflection.GeneratedProtocolMessageType( + msg_des.name, (_message.Message,), create_dict) + _sym_db.RegisterMessage(message_class) + return message_class + + # top level enums + for (name, enum_des) in file_des.enum_types_by_name.items(): + module['_' + name.upper()] = enum_des + module[name] = enum_type_wrapper.EnumTypeWrapper(enum_des) + for enum_value in enum_des.values: + module[enum_value.name] = enum_value.number + + # top level extensions + for (name, extension_des) in file_des.extensions_by_name.items(): + module[name.upper() + '_FIELD_NUMBER'] = extension_des.number + module[name] = extension_des + + # services + for (name, service) in file_des.services_by_name.items(): + module['_' + name.upper()] = service + + # Build messages. + for (name, msg_des) in file_des.message_types_by_name.items(): + module[name] = BuildMessage(msg_des) + + +def BuildServices(file_des, module_name, module): + """Builds services classes and services stub class. + + Args: + file_des: FileDescriptor of the .proto file + module_name: str, the name of generated _pb2 module + module: Generated _pb2 module + """ + # pylint: disable=g-import-not-at-top + from google.protobuf import service as _service + from google.protobuf import service_reflection + # pylint: enable=g-import-not-at-top + for (name, service) in file_des.services_by_name.items(): + module[name] = service_reflection.GeneratedServiceType( + name, (_service.Service,), + dict(DESCRIPTOR=service, __module__=module_name)) + stub_name = name + '_Stub' + module[stub_name] = service_reflection.GeneratedServiceStubType( + stub_name, (module[name],), + dict(DESCRIPTOR=service, __module__=module_name)) diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/containers.py b/env/lib/python3.8/site-packages/google/protobuf/internal/containers.py new file mode 100644 index 00000000..29fbb53d --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/internal/containers.py @@ -0,0 +1,710 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Contains container classes to represent different protocol buffer types. + +This file defines container classes which represent categories of protocol +buffer field types which need extra maintenance. Currently these categories +are: + +- Repeated scalar fields - These are all repeated fields which aren't + composite (e.g. they are of simple types like int32, string, etc). +- Repeated composite fields - Repeated fields which are composite. This + includes groups and nested messages. +""" + +import collections.abc +import copy +import pickle +from typing import ( + Any, + Iterable, + Iterator, + List, + MutableMapping, + MutableSequence, + NoReturn, + Optional, + Sequence, + TypeVar, + Union, + overload, +) + + +_T = TypeVar('_T') +_K = TypeVar('_K') +_V = TypeVar('_V') + + +class BaseContainer(Sequence[_T]): + """Base container class.""" + + # Minimizes memory usage and disallows assignment to other attributes. + __slots__ = ['_message_listener', '_values'] + + def __init__(self, message_listener: Any) -> None: + """ + Args: + message_listener: A MessageListener implementation. + The RepeatedScalarFieldContainer will call this object's + Modified() method when it is modified. + """ + self._message_listener = message_listener + self._values = [] + + @overload + def __getitem__(self, key: int) -> _T: + ... + + @overload + def __getitem__(self, key: slice) -> List[_T]: + ... + + def __getitem__(self, key): + """Retrieves item by the specified key.""" + return self._values[key] + + def __len__(self) -> int: + """Returns the number of elements in the container.""" + return len(self._values) + + def __ne__(self, other: Any) -> bool: + """Checks if another instance isn't equal to this one.""" + # The concrete classes should define __eq__. + return not self == other + + __hash__ = None + + def __repr__(self) -> str: + return repr(self._values) + + def sort(self, *args, **kwargs) -> None: + # Continue to support the old sort_function keyword argument. + # This is expected to be a rare occurrence, so use LBYL to avoid + # the overhead of actually catching KeyError. + if 'sort_function' in kwargs: + kwargs['cmp'] = kwargs.pop('sort_function') + self._values.sort(*args, **kwargs) + + def reverse(self) -> None: + self._values.reverse() + + +# TODO(slebedev): Remove this. BaseContainer does *not* conform to +# MutableSequence, only its subclasses do. +collections.abc.MutableSequence.register(BaseContainer) + + +class RepeatedScalarFieldContainer(BaseContainer[_T], MutableSequence[_T]): + """Simple, type-checked, list-like container for holding repeated scalars.""" + + # Disallows assignment to other attributes. + __slots__ = ['_type_checker'] + + def __init__( + self, + message_listener: Any, + type_checker: Any, + ) -> None: + """Args: + + message_listener: A MessageListener implementation. The + RepeatedScalarFieldContainer will call this object's Modified() method + when it is modified. 
+ type_checker: A type_checkers.ValueChecker instance to run on elements + inserted into this container. + """ + super().__init__(message_listener) + self._type_checker = type_checker + + def append(self, value: _T) -> None: + """Appends an item to the list. Similar to list.append().""" + self._values.append(self._type_checker.CheckValue(value)) + if not self._message_listener.dirty: + self._message_listener.Modified() + + def insert(self, key: int, value: _T) -> None: + """Inserts the item at the specified position. Similar to list.insert().""" + self._values.insert(key, self._type_checker.CheckValue(value)) + if not self._message_listener.dirty: + self._message_listener.Modified() + + def extend(self, elem_seq: Iterable[_T]) -> None: + """Extends by appending the given iterable. Similar to list.extend().""" + if elem_seq is None: + return + try: + elem_seq_iter = iter(elem_seq) + except TypeError: + if not elem_seq: + # silently ignore falsy inputs :-/. + # TODO(ptucker): Deprecate this behavior. b/18413862 + return + raise + + new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter] + if new_values: + self._values.extend(new_values) + self._message_listener.Modified() + + def MergeFrom( + self, + other: Union['RepeatedScalarFieldContainer[_T]', Iterable[_T]], + ) -> None: + """Appends the contents of another repeated field of the same type to this + one. We do not check the types of the individual fields. + """ + self._values.extend(other) + self._message_listener.Modified() + + def remove(self, elem: _T): + """Removes an item from the list. Similar to list.remove().""" + self._values.remove(elem) + self._message_listener.Modified() + + def pop(self, key: Optional[int] = -1) -> _T: + """Removes and returns an item at a given index. Similar to list.pop().""" + value = self._values[key] + self.__delitem__(key) + return value + + @overload + def __setitem__(self, key: int, value: _T) -> None: + ... + + @overload + def __setitem__(self, key: slice, value: Iterable[_T]) -> None: + ... + + def __setitem__(self, key, value) -> None: + """Sets the item on the specified position.""" + if isinstance(key, slice): + if key.step is not None: + raise ValueError('Extended slices not supported') + self._values[key] = map(self._type_checker.CheckValue, value) + self._message_listener.Modified() + else: + self._values[key] = self._type_checker.CheckValue(value) + self._message_listener.Modified() + + def __delitem__(self, key: Union[int, slice]) -> None: + """Deletes the item at the specified position.""" + del self._values[key] + self._message_listener.Modified() + + def __eq__(self, other: Any) -> bool: + """Compares the current instance with another one.""" + if self is other: + return True + # Special case for the same type which should be common and fast. + if isinstance(other, self.__class__): + return other._values == self._values + # We are presumably comparing against some other sequence type. + return other == self._values + + def __deepcopy__( + self, + unused_memo: Any = None, + ) -> 'RepeatedScalarFieldContainer[_T]': + clone = RepeatedScalarFieldContainer( + copy.deepcopy(self._message_listener), self._type_checker) + clone.MergeFrom(self) + return clone + + def __reduce__(self, **kwargs) -> NoReturn: + raise pickle.PickleError( + "Can't pickle repeated scalar fields, convert to list first") + + +# TODO(slebedev): Constrain T to be a subtype of Message. 
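+# Illustrative use of the composite container below (assuming a message type
+# with a field `repeated Item items = 1`):
+#   item = msg.items.add(name='x')   # constructs a new Item in place
+#   msg.items.append(existing_item)  # appends by copying the message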
+class RepeatedCompositeFieldContainer(BaseContainer[_T], MutableSequence[_T]): + """Simple, list-like container for holding repeated composite fields.""" + + # Disallows assignment to other attributes. + __slots__ = ['_message_descriptor'] + + def __init__(self, message_listener: Any, message_descriptor: Any) -> None: + """ + Note that we pass in a descriptor instead of the generated directly, + since at the time we construct a _RepeatedCompositeFieldContainer we + haven't yet necessarily initialized the type that will be contained in the + container. + + Args: + message_listener: A MessageListener implementation. + The RepeatedCompositeFieldContainer will call this object's + Modified() method when it is modified. + message_descriptor: A Descriptor instance describing the protocol type + that should be present in this container. We'll use the + _concrete_class field of this descriptor when the client calls add(). + """ + super().__init__(message_listener) + self._message_descriptor = message_descriptor + + def add(self, **kwargs: Any) -> _T: + """Adds a new element at the end of the list and returns it. Keyword + arguments may be used to initialize the element. + """ + new_element = self._message_descriptor._concrete_class(**kwargs) + new_element._SetListener(self._message_listener) + self._values.append(new_element) + if not self._message_listener.dirty: + self._message_listener.Modified() + return new_element + + def append(self, value: _T) -> None: + """Appends one element by copying the message.""" + new_element = self._message_descriptor._concrete_class() + new_element._SetListener(self._message_listener) + new_element.CopyFrom(value) + self._values.append(new_element) + if not self._message_listener.dirty: + self._message_listener.Modified() + + def insert(self, key: int, value: _T) -> None: + """Inserts the item at the specified position by copying.""" + new_element = self._message_descriptor._concrete_class() + new_element._SetListener(self._message_listener) + new_element.CopyFrom(value) + self._values.insert(key, new_element) + if not self._message_listener.dirty: + self._message_listener.Modified() + + def extend(self, elem_seq: Iterable[_T]) -> None: + """Extends by appending the given sequence of elements of the same type + + as this one, copying each individual message. + """ + message_class = self._message_descriptor._concrete_class + listener = self._message_listener + values = self._values + for message in elem_seq: + new_element = message_class() + new_element._SetListener(listener) + new_element.MergeFrom(message) + values.append(new_element) + listener.Modified() + + def MergeFrom( + self, + other: Union['RepeatedCompositeFieldContainer[_T]', Iterable[_T]], + ) -> None: + """Appends the contents of another repeated field of the same type to this + one, copying each individual message. + """ + self.extend(other) + + def remove(self, elem: _T) -> None: + """Removes an item from the list. Similar to list.remove().""" + self._values.remove(elem) + self._message_listener.Modified() + + def pop(self, key: Optional[int] = -1) -> _T: + """Removes and returns an item at a given index. Similar to list.pop().""" + value = self._values[key] + self.__delitem__(key) + return value + + @overload + def __setitem__(self, key: int, value: _T) -> None: + ... + + @overload + def __setitem__(self, key: slice, value: Iterable[_T]) -> None: + ... 
+ + def __setitem__(self, key, value): + # This method is implemented to make RepeatedCompositeFieldContainer + # structurally compatible with typing.MutableSequence. It is + # otherwise unsupported and will always raise an error. + raise TypeError( + f'{self.__class__.__name__} object does not support item assignment') + + def __delitem__(self, key: Union[int, slice]) -> None: + """Deletes the item at the specified position.""" + del self._values[key] + self._message_listener.Modified() + + def __eq__(self, other: Any) -> bool: + """Compares the current instance with another one.""" + if self is other: + return True + if not isinstance(other, self.__class__): + raise TypeError('Can only compare repeated composite fields against ' + 'other repeated composite fields.') + return self._values == other._values + + +class ScalarMap(MutableMapping[_K, _V]): + """Simple, type-checked, dict-like container for holding repeated scalars.""" + + # Disallows assignment to other attributes. + __slots__ = ['_key_checker', '_value_checker', '_values', '_message_listener', + '_entry_descriptor'] + + def __init__( + self, + message_listener: Any, + key_checker: Any, + value_checker: Any, + entry_descriptor: Any, + ) -> None: + """ + Args: + message_listener: A MessageListener implementation. + The ScalarMap will call this object's Modified() method when it + is modified. + key_checker: A type_checkers.ValueChecker instance to run on keys + inserted into this container. + value_checker: A type_checkers.ValueChecker instance to run on values + inserted into this container. + entry_descriptor: The MessageDescriptor of a map entry: key and value. + """ + self._message_listener = message_listener + self._key_checker = key_checker + self._value_checker = value_checker + self._entry_descriptor = entry_descriptor + self._values = {} + + def __getitem__(self, key: _K) -> _V: + try: + return self._values[key] + except KeyError: + key = self._key_checker.CheckValue(key) + val = self._value_checker.DefaultValue() + self._values[key] = val + return val + + def __contains__(self, item: _K) -> bool: + # We check the key's type to match the strong-typing flavor of the API. + # Also this makes it easier to match the behavior of the C++ implementation. + self._key_checker.CheckValue(item) + return item in self._values + + @overload + def get(self, key: _K) -> Optional[_V]: + ... + + @overload + def get(self, key: _K, default: _T) -> Union[_V, _T]: + ... + + # We need to override this explicitly, because our defaultdict-like behavior + # will make the default implementation (from our base class) always insert + # the key. 
+  def get(self, key, default=None):
+    if key in self:
+      return self[key]
+    else:
+      return default
+
+  def __setitem__(self, key: _K, value: _V) -> None:
+    checked_key = self._key_checker.CheckValue(key)
+    checked_value = self._value_checker.CheckValue(value)
+    self._values[checked_key] = checked_value
+    self._message_listener.Modified()
+
+  def __delitem__(self, key: _K) -> None:
+    del self._values[key]
+    self._message_listener.Modified()
+
+  def __len__(self) -> int:
+    return len(self._values)
+
+  def __iter__(self) -> Iterator[_K]:
+    return iter(self._values)
+
+  def __repr__(self) -> str:
+    return repr(self._values)
+
+  def MergeFrom(self, other: 'ScalarMap[_K, _V]') -> None:
+    self._values.update(other._values)
+    self._message_listener.Modified()
+
+  def InvalidateIterators(self) -> None:
+    # It appears that the only way to reliably invalidate iterators to
+    # self._values is to ensure that its size changes.
+    original = self._values
+    self._values = original.copy()
+    original[None] = None
+
+  # This is defined in the abstract base, but we can do it much more cheaply.
+  def clear(self) -> None:
+    self._values.clear()
+    self._message_listener.Modified()
+
+  def GetEntryClass(self) -> Any:
+    return self._entry_descriptor._concrete_class
+
+
+class MessageMap(MutableMapping[_K, _V]):
+  """Simple, type-checked, dict-like container for holding submessage values."""
+
+  # Disallows assignment to other attributes.
+  __slots__ = ['_key_checker', '_values', '_message_listener',
+               '_message_descriptor', '_entry_descriptor']
+
+  def __init__(
+      self,
+      message_listener: Any,
+      message_descriptor: Any,
+      key_checker: Any,
+      entry_descriptor: Any,
+  ) -> None:
+    """
+    Args:
+      message_listener: A MessageListener implementation.
+        The MessageMap will call this object's Modified() method when it
+        is modified.
+      message_descriptor: The MessageDescriptor of the submessage values
+        held in this container.
+      key_checker: A type_checkers.ValueChecker instance to run on keys
+        inserted into this container.
+      entry_descriptor: The MessageDescriptor of a map entry: key and value.
+    """
+    self._message_listener = message_listener
+    self._message_descriptor = message_descriptor
+    self._key_checker = key_checker
+    self._entry_descriptor = entry_descriptor
+    self._values = {}
+
+  def __getitem__(self, key: _K) -> _V:
+    key = self._key_checker.CheckValue(key)
+    try:
+      return self._values[key]
+    except KeyError:
+      new_element = self._message_descriptor._concrete_class()
+      new_element._SetListener(self._message_listener)
+      self._values[key] = new_element
+      self._message_listener.Modified()
+      return new_element
+
+  def get_or_create(self, key: _K) -> _V:
+    """get_or_create() is an alias for getitem (ie. map[key]).
+
+    Args:
+      key: The key to get or create in the map.
+
+    This is useful in cases where you want to be explicit that the call is
+    mutating the map. This can avoid lint errors for statements like this
+    that otherwise would appear to be pointless statements:
+
+      msg.my_map[key]
+    """
+    return self[key]
+
+  @overload
+  def get(self, key: _K) -> Optional[_V]:
+    ...
+
+  @overload
+  def get(self, key: _K, default: _T) -> Union[_V, _T]:
+    ...
+
+  # We need to override this explicitly, because our defaultdict-like behavior
+  # will make the default implementation (from our base class) always insert
+  # the key.
+ def get(self, key, default=None): + if key in self: + return self[key] + else: + return default + + def __contains__(self, item: _K) -> bool: + item = self._key_checker.CheckValue(item) + return item in self._values + + def __setitem__(self, key: _K, value: _V) -> NoReturn: + raise ValueError('May not set values directly, call my_map[key].foo = 5') + + def __delitem__(self, key: _K) -> None: + key = self._key_checker.CheckValue(key) + del self._values[key] + self._message_listener.Modified() + + def __len__(self) -> int: + return len(self._values) + + def __iter__(self) -> Iterator[_K]: + return iter(self._values) + + def __repr__(self) -> str: + return repr(self._values) + + def MergeFrom(self, other: 'MessageMap[_K, _V]') -> None: + # pylint: disable=protected-access + for key in other._values: + # According to documentation: "When parsing from the wire or when merging, + # if there are duplicate map keys the last key seen is used". + if key in self: + del self[key] + self[key].CopyFrom(other[key]) + # self._message_listener.Modified() not required here, because + # mutations to submessages already propagate. + + def InvalidateIterators(self) -> None: + # It appears that the only way to reliably invalidate iterators to + # self._values is to ensure that its size changes. + original = self._values + self._values = original.copy() + original[None] = None + + # This is defined in the abstract base, but we can do it much more cheaply. + def clear(self) -> None: + self._values.clear() + self._message_listener.Modified() + + def GetEntryClass(self) -> Any: + return self._entry_descriptor._concrete_class + + +class _UnknownField: + """A parsed unknown field.""" + + # Disallows assignment to other attributes. + __slots__ = ['_field_number', '_wire_type', '_data'] + + def __init__(self, field_number, wire_type, data): + self._field_number = field_number + self._wire_type = wire_type + self._data = data + return + + def __lt__(self, other): + # pylint: disable=protected-access + return self._field_number < other._field_number + + def __eq__(self, other): + if self is other: + return True + # pylint: disable=protected-access + return (self._field_number == other._field_number and + self._wire_type == other._wire_type and + self._data == other._data) + + +class UnknownFieldRef: # pylint: disable=missing-class-docstring + + def __init__(self, parent, index): + self._parent = parent + self._index = index + + def _check_valid(self): + if not self._parent: + raise ValueError('UnknownField does not exist. ' + 'The parent message might be cleared.') + if self._index >= len(self._parent): + raise ValueError('UnknownField does not exist. ' + 'The parent message might be cleared.') + + @property + def field_number(self): + self._check_valid() + # pylint: disable=protected-access + return self._parent._internal_get(self._index)._field_number + + @property + def wire_type(self): + self._check_valid() + # pylint: disable=protected-access + return self._parent._internal_get(self._index)._wire_type + + @property + def data(self): + self._check_valid() + # pylint: disable=protected-access + return self._parent._internal_get(self._index)._data + + +class UnknownFieldSet: + """UnknownField container""" + + # Disallows assignment to other attributes. + __slots__ = ['_values'] + + def __init__(self): + self._values = [] + + def __getitem__(self, index): + if self._values is None: + raise ValueError('UnknownFields does not exist. 
'
+                       'The parent message might be cleared.')
+    size = len(self._values)
+    if index < 0:
+      index += size
+    if index < 0 or index >= size:
+      raise IndexError('index %d out of range' % index)
+
+    return UnknownFieldRef(self, index)
+
+  def _internal_get(self, index):
+    return self._values[index]
+
+  def __len__(self):
+    if self._values is None:
+      raise ValueError('UnknownFields does not exist. '
+                       'The parent message might be cleared.')
+    return len(self._values)
+
+  def _add(self, field_number, wire_type, data):
+    unknown_field = _UnknownField(field_number, wire_type, data)
+    self._values.append(unknown_field)
+    return unknown_field
+
+  def __iter__(self):
+    for i in range(len(self)):
+      yield UnknownFieldRef(self, i)
+
+  def _extend(self, other):
+    if other is None:
+      return
+    # pylint: disable=protected-access
+    self._values.extend(other._values)
+
+  def __eq__(self, other):
+    if self is other:
+      return True
+    # Sort unknown fields because their order shouldn't
+    # affect equality test.
+    values = list(self._values)
+    if other is None:
+      return not values
+    values.sort()
+    # pylint: disable=protected-access
+    other_values = sorted(other._values)
+    return values == other_values
+
+  def _clear(self):
+    for value in self._values:
+      # pylint: disable=protected-access
+      if isinstance(value._data, UnknownFieldSet):
+        value._data._clear()  # pylint: disable=protected-access
+    self._values = None
diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/decoder.py b/env/lib/python3.8/site-packages/google/protobuf/internal/decoder.py
new file mode 100644
index 00000000..bc1b7b78
--- /dev/null
+++ b/env/lib/python3.8/site-packages/google/protobuf/internal/decoder.py
@@ -0,0 +1,1029 @@
+# Protocol Buffers - Google's data interchange format
+# Copyright 2008 Google Inc. All rights reserved.
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Code for decoding protocol buffer primitives.
+
+This code is very similar to encoder.py -- read the docs for that module first.
+
+A "decoder" is a function with the signature:
+  Decode(buffer, pos, end, message, field_dict)
+The arguments are:
+  buffer: The string containing the encoded message.
+  pos: The current position in the string.
+  end: The position in the string where the current message ends.  May be
+    less than len(buffer) if we're reading a sub-message.
+  message: The message object into which we're parsing.
+  field_dict: message._fields (avoids a hashtable lookup).
+The decoder reads the field and stores it into field_dict, returning the new
+buffer position.  A decoder for a repeated field may proactively decode all of
+the elements of that field, if they appear consecutively.
+
+Note that decoders may throw any of the following:
+  IndexError: Indicates a truncated message.
+  struct.error: Unpacking of a fixed-width field failed.
+  message.DecodeError: Other errors.
+
+Decoders are expected to raise an exception if they are called with pos > end.
+This allows callers to be lax about bounds checking: it's fine to read past
+"end" as long as you are sure that someone else will notice and throw an
+exception later on.
+
+Something up the call stack is expected to catch IndexError and struct.error
+and convert them to message.DecodeError.
+
+Decoders are constructed using decoder constructors with the signature:
+  MakeDecoder(field_number, is_repeated, is_packed, key, new_default)
+The arguments are:
+  field_number: The field number of the field we want to decode.
+  is_repeated: Is the field a repeated field? (bool)
+  is_packed: Is the field a packed field? (bool)
+  key: The key to use when looking up the field within field_dict.
+    (This is actually the FieldDescriptor but nothing in this
+    file should depend on that.)
+  new_default: A function which takes a message object as a parameter and
+    returns a new instance of the default value for this field.
+    (This is called for repeated fields and sub-messages, when an
+    instance does not already exist.)
+
+As with encoders, we define a decoder constructor for every type of field.
+Then, for every field of every message class we construct an actual decoder.
+That decoder goes into a dict indexed by tag, so when we decode a message
+we repeatedly read a tag, look up the corresponding decoder, and invoke it.
+"""
+
+__author__ = 'kenton@google.com (Kenton Varda)'
+
+import math
+import struct
+
+from google.protobuf.internal import containers
+from google.protobuf.internal import encoder
+from google.protobuf.internal import wire_format
+from google.protobuf import message
+
+
+# This is not for optimization, but rather to avoid conflicts with local
+# variables named "message".
+_DecodeError = message.DecodeError
+
+
+def _VarintDecoder(mask, result_type):
+  """Return a decoder for a basic varint value (does not include tag).
+
+  Decoded values will be bitwise-anded with the given mask before being
+  returned, e.g. to limit them to 32 bits.  The returned decoder does not
+  take the usual "end" parameter -- the caller is expected to do bounds checking
+  after the fact (often the caller can defer such checking until later).  The
+  decoder returns a (value, new_pos) pair.
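+
+  For example, the two bytes b'\xac\x02' decode to 300: the low seven
+  bits of each byte carry data (0x2c and 0x02) and the high bit marks
+  continuation, so the value is 0x2c | (0x02 << 7) == 300.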
+ """ + + def DecodeVarint(buffer, pos): + result = 0 + shift = 0 + while 1: + b = buffer[pos] + result |= ((b & 0x7f) << shift) + pos += 1 + if not (b & 0x80): + result &= mask + result = result_type(result) + return (result, pos) + shift += 7 + if shift >= 64: + raise _DecodeError('Too many bytes when decoding varint.') + return DecodeVarint + + +def _SignedVarintDecoder(bits, result_type): + """Like _VarintDecoder() but decodes signed values.""" + + signbit = 1 << (bits - 1) + mask = (1 << bits) - 1 + + def DecodeVarint(buffer, pos): + result = 0 + shift = 0 + while 1: + b = buffer[pos] + result |= ((b & 0x7f) << shift) + pos += 1 + if not (b & 0x80): + result &= mask + result = (result ^ signbit) - signbit + result = result_type(result) + return (result, pos) + shift += 7 + if shift >= 64: + raise _DecodeError('Too many bytes when decoding varint.') + return DecodeVarint + +# All 32-bit and 64-bit values are represented as int. +_DecodeVarint = _VarintDecoder((1 << 64) - 1, int) +_DecodeSignedVarint = _SignedVarintDecoder(64, int) + +# Use these versions for values which must be limited to 32 bits. +_DecodeVarint32 = _VarintDecoder((1 << 32) - 1, int) +_DecodeSignedVarint32 = _SignedVarintDecoder(32, int) + + +def ReadTag(buffer, pos): + """Read a tag from the memoryview, and return a (tag_bytes, new_pos) tuple. + + We return the raw bytes of the tag rather than decoding them. The raw + bytes can then be used to look up the proper decoder. This effectively allows + us to trade some work that would be done in pure-python (decoding a varint) + for work that is done in C (searching for a byte string in a hash table). + In a low-level language it would be much cheaper to decode the varint and + use that, but not in Python. + + Args: + buffer: memoryview object of the encoded bytes + pos: int of the current position to start from + + Returns: + Tuple[bytes, int] of the tag data and new position. + """ + start = pos + while buffer[pos] & 0x80: + pos += 1 + pos += 1 + + tag_bytes = buffer[start:pos].tobytes() + return tag_bytes, pos + + +# -------------------------------------------------------------------- + + +def _SimpleDecoder(wire_type, decode_value): + """Return a constructor for a decoder for fields of a particular type. + + Args: + wire_type: The field's wire type. + decode_value: A function which decodes an individual value, e.g. + _DecodeVarint() + """ + + def SpecificDecoder(field_number, is_repeated, is_packed, key, new_default, + clear_if_default=False): + if is_packed: + local_DecodeVarint = _DecodeVarint + def DecodePackedField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + (endpoint, pos) = local_DecodeVarint(buffer, pos) + endpoint += pos + if endpoint > end: + raise _DecodeError('Truncated message.') + while pos < endpoint: + (element, pos) = decode_value(buffer, pos) + value.append(element) + if pos > endpoint: + del value[-1] # Discard corrupt value. 
+ raise _DecodeError('Packed element was truncated.') + return pos + return DecodePackedField + elif is_repeated: + tag_bytes = encoder.TagBytes(field_number, wire_type) + tag_len = len(tag_bytes) + def DecodeRepeatedField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + while 1: + (element, new_pos) = decode_value(buffer, pos) + value.append(element) + # Predict that the next tag is another copy of the same repeated + # field. + pos = new_pos + tag_len + if buffer[new_pos:pos] != tag_bytes or new_pos >= end: + # Prediction failed. Return. + if new_pos > end: + raise _DecodeError('Truncated message.') + return new_pos + return DecodeRepeatedField + else: + def DecodeField(buffer, pos, end, message, field_dict): + (new_value, pos) = decode_value(buffer, pos) + if pos > end: + raise _DecodeError('Truncated message.') + if clear_if_default and not new_value: + field_dict.pop(key, None) + else: + field_dict[key] = new_value + return pos + return DecodeField + + return SpecificDecoder + + +def _ModifiedDecoder(wire_type, decode_value, modify_value): + """Like SimpleDecoder but additionally invokes modify_value on every value + before storing it. Usually modify_value is ZigZagDecode. + """ + + # Reusing _SimpleDecoder is slightly slower than copying a bunch of code, but + # not enough to make a significant difference. + + def InnerDecode(buffer, pos): + (result, new_pos) = decode_value(buffer, pos) + return (modify_value(result), new_pos) + return _SimpleDecoder(wire_type, InnerDecode) + + +def _StructPackDecoder(wire_type, format): + """Return a constructor for a decoder for a fixed-width field. + + Args: + wire_type: The field's wire type. + format: The format string to pass to struct.unpack(). + """ + + value_size = struct.calcsize(format) + local_unpack = struct.unpack + + # Reusing _SimpleDecoder is slightly slower than copying a bunch of code, but + # not enough to make a significant difference. + + # Note that we expect someone up-stack to catch struct.error and convert + # it to _DecodeError -- this way we don't have to set up exception- + # handling blocks every time we parse one value. + + def InnerDecode(buffer, pos): + new_pos = pos + value_size + result = local_unpack(format, buffer[pos:new_pos])[0] + return (result, new_pos) + return _SimpleDecoder(wire_type, InnerDecode) + + +def _FloatDecoder(): + """Returns a decoder for a float field. + + This code works around a bug in struct.unpack for non-finite 32-bit + floating-point values. + """ + + local_unpack = struct.unpack + + def InnerDecode(buffer, pos): + """Decode serialized float to a float and new position. + + Args: + buffer: memoryview of the serialized bytes + pos: int, position in the memory view to start at. + + Returns: + Tuple[float, int] of the deserialized float value and new position + in the serialized data. + """ + # We expect a 32-bit value in little-endian byte order. Bit 1 is the sign + # bit, bits 2-9 represent the exponent, and bits 10-32 are the significand. + new_pos = pos + 4 + float_bytes = buffer[pos:new_pos].tobytes() + + # If this value has all its exponent bits set, then it's non-finite. + # In Python 2.4, struct.unpack will convert it to a finite 64-bit value. + # To avoid that, we parse it specially. + if (float_bytes[3:4] in b'\x7F\xFF' and float_bytes[2:3] >= b'\x80'): + # If at least one significand bit is set... 
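+      # (b'\x00\x00\x80' below is the low three bytes of the little-endian
+      # encodings 00 00 80 7F / 00 00 80 FF of +inf / -inf; any other
+      # pattern with these exponent bits means a significand bit is set,
+      # i.e. the value is a NaN.)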
+      if float_bytes[0:3] != b'\x00\x00\x80':
+        return (math.nan, new_pos)
+      # If sign bit is set...
+      if float_bytes[3:4] == b'\xFF':
+        return (-math.inf, new_pos)
+      return (math.inf, new_pos)
+
+    # Note that we expect someone up-stack to catch struct.error and convert
+    # it to _DecodeError -- this way we don't have to set up exception-
+    # handling blocks every time we parse one value.
+    result = local_unpack('<f', float_bytes)[0]
+    return (result, new_pos)
+  return _SimpleDecoder(wire_format.WIRETYPE_FIXED32, InnerDecode)
+
+
+def _DoubleDecoder():
+  """Returns a decoder for a double field.
+
+  This code works around a bug in struct.unpack for not-a-number.
+  """
+
+  local_unpack = struct.unpack
+
+  def InnerDecode(buffer, pos):
+    """Decode serialized double to a double and new position.
+
+    Args:
+      buffer: memoryview of the serialized bytes.
+      pos: int, position in the memory view to start at.
+
+    Returns:
+      Tuple[float, int] of the decoded double value and new position
+      in the serialized data.
+    """
+    # We expect a 64-bit value in little-endian byte order.  Bit 1 is the sign
+    # bit, bits 2-12 represent the exponent, and bits 13-64 are the significand.
+    new_pos = pos + 8
+    double_bytes = buffer[pos:new_pos].tobytes()
+
+    # If this value has all its exponent bits set and at least one significand
+    # bit set, it's not a number.  To avoid mis-parsing it as inf or -inf, we
+    # treat it specially.
+    if ((double_bytes[7:8] in b'\x7F\xFF')
+        and (double_bytes[6:7] >= b'\xF0')
+        and (double_bytes[0:7] != b'\x00\x00\x00\x00\x00\x00\xF0')):
+      return (math.nan, new_pos)
+
+    # Note that we expect someone up-stack to catch struct.error and convert
+    # it to _DecodeError -- this way we don't have to set up exception-
+    # handling blocks every time we parse one value.
+    result = local_unpack('<d', double_bytes)[0]
+    return (result, new_pos)
+  return _SimpleDecoder(wire_format.WIRETYPE_FIXED64, InnerDecode)
+
+
+def EnumDecoder(field_number, is_repeated, is_packed, key, new_default,
+                clear_if_default=False):
+  """Returns a decoder for an enum field."""
+  enum_type = key.enum_type
+  if is_packed:
+    local_DecodeVarint = _DecodeVarint
+    def DecodePackedField(buffer, pos, end, message, field_dict):
+      """Decode serialized packed enum to its value and a new position.
+
+      Args:
+        buffer: memoryview of the serialized bytes.
+        pos: int, position in the memory view to start at.
+        end: int, end position of serialized data
+        message: Message object to store unknown fields in
+        field_dict: Map[Descriptor, Any] to store decoded values in.
+
+      Returns:
+        int, new position in serialized data.
+      """
+      value = field_dict.get(key)
+      if value is None:
+        value = field_dict.setdefault(key, new_default(message))
+      (endpoint, pos) = local_DecodeVarint(buffer, pos)
+      endpoint += pos
+      if endpoint > end:
+        raise _DecodeError('Truncated message.')
+      while pos < endpoint:
+        value_start_pos = pos
+        (element, pos) = _DecodeSignedVarint32(buffer, pos)
+        # pylint: disable=protected-access
+        if element in enum_type.values_by_number:
+          value.append(element)
+        else:
+          if not message._unknown_fields:
+            message._unknown_fields = []
+          tag_bytes = encoder.TagBytes(field_number,
+                                       wire_format.WIRETYPE_VARINT)
+
+          message._unknown_fields.append(
+              (tag_bytes, buffer[value_start_pos:pos].tobytes()))
+          if message._unknown_field_set is None:
+            message._unknown_field_set = containers.UnknownFieldSet()
+          message._unknown_field_set._add(
+              field_number, wire_format.WIRETYPE_VARINT, element)
+        # pylint: enable=protected-access
+      if pos > endpoint:
+        if element in enum_type.values_by_number:
+          del value[-1]   # Discard corrupt value.
+        else:
+          del message._unknown_fields[-1]
+          # pylint: disable=protected-access
+          del message._unknown_field_set._values[-1]
+          # pylint: enable=protected-access
+        raise _DecodeError('Packed element was truncated.')
+      return pos
+    return DecodePackedField
+  elif is_repeated:
+    tag_bytes = encoder.TagBytes(field_number, wire_format.WIRETYPE_VARINT)
+    tag_len = len(tag_bytes)
+    def DecodeRepeatedField(buffer, pos, end, message, field_dict):
+      """Decode serialized repeated enum to its value and a new position.
+
+      Args:
+        buffer: memoryview of the serialized bytes.
+        pos: int, position in the memory view to start at.
+        end: int, end position of serialized data
+        message: Message object to store unknown fields in
+        field_dict: Map[Descriptor, Any] to store decoded values in.
+
+      Returns:
+        int, new position in serialized data.
+      """
+      value = field_dict.get(key)
+      if value is None:
+        value = field_dict.setdefault(key, new_default(message))
+      while 1:
+        (element, new_pos) = _DecodeSignedVarint32(buffer, pos)
+        # pylint: disable=protected-access
+        if element in enum_type.values_by_number:
+          value.append(element)
+        else:
+          if not message._unknown_fields:
+            message._unknown_fields = []
+          message._unknown_fields.append(
+              (tag_bytes, buffer[pos:new_pos].tobytes()))
+          if message._unknown_field_set is None:
+            message._unknown_field_set = containers.UnknownFieldSet()
+          message._unknown_field_set._add(
+              field_number, wire_format.WIRETYPE_VARINT, element)
+        # pylint: enable=protected-access
+        # Predict that the next tag is another copy of the same repeated
+        # field.
+        pos = new_pos + tag_len
+        if buffer[new_pos:pos] != tag_bytes or new_pos >= end:
+          # Prediction failed.  Return.
+          if new_pos > end:
+            raise _DecodeError('Truncated message.')
+          return new_pos
+    return DecodeRepeatedField
+  else:
+    def DecodeField(buffer, pos, end, message, field_dict):
+      """Decode serialized enum to its value and a new position.
+
+      Args:
+        buffer: memoryview of the serialized bytes.
+        pos: int, position in the memory view to start at.
+        end: int, end position of serialized data
+        message: Message object to store unknown fields in
+        field_dict: Map[Descriptor, Any] to store decoded values in.
+
+      Returns:
+        int, new position in serialized data.
+      """
+      value_start_pos = pos
+      (enum_value, pos) = _DecodeSignedVarint32(buffer, pos)
+      if pos > end:
+        raise _DecodeError('Truncated message.')
+      if clear_if_default and not enum_value:
+        field_dict.pop(key, None)
+        return pos
+      # pylint: disable=protected-access
+      if enum_value in enum_type.values_by_number:
+        field_dict[key] = enum_value
+      else:
+        if not message._unknown_fields:
+          message._unknown_fields = []
+        tag_bytes = encoder.TagBytes(field_number,
+                                     wire_format.WIRETYPE_VARINT)
+        message._unknown_fields.append(
+            (tag_bytes, buffer[value_start_pos:pos].tobytes()))
+        if message._unknown_field_set is None:
+          message._unknown_field_set = containers.UnknownFieldSet()
+        message._unknown_field_set._add(
+            field_number, wire_format.WIRETYPE_VARINT, enum_value)
+      # pylint: enable=protected-access
+      return pos
+    return DecodeField
+
+
+# --------------------------------------------------------------------
+
+
+Int32Decoder = _SimpleDecoder(
+    wire_format.WIRETYPE_VARINT, _DecodeSignedVarint32)
+
+Int64Decoder = _SimpleDecoder(
+    wire_format.WIRETYPE_VARINT, _DecodeSignedVarint)
+
+UInt32Decoder = _SimpleDecoder(wire_format.WIRETYPE_VARINT, _DecodeVarint32)
+UInt64Decoder = _SimpleDecoder(wire_format.WIRETYPE_VARINT, _DecodeVarint)
+
+SInt32Decoder = _ModifiedDecoder(
+    wire_format.WIRETYPE_VARINT, _DecodeVarint32, wire_format.ZigZagDecode)
+SInt64Decoder = _ModifiedDecoder(
+    wire_format.WIRETYPE_VARINT, _DecodeVarint, wire_format.ZigZagDecode)
+
+# Note that Python conveniently guarantees that when using the '<' prefix on
+# formats, they will also have the same size across all platforms (as opposed
+# to without the prefix, where their sizes depend on the C compiler's basic
+# type sizes).
+Fixed32Decoder = _StructPackDecoder(wire_format.WIRETYPE_FIXED32, '<I')
+Fixed64Decoder = _StructPackDecoder(wire_format.WIRETYPE_FIXED64, '<Q')
+SFixed32Decoder = _StructPackDecoder(wire_format.WIRETYPE_FIXED32, '<i')
+SFixed64Decoder = _StructPackDecoder(wire_format.WIRETYPE_FIXED64, '<q')
+FloatDecoder = _FloatDecoder()
+DoubleDecoder = _DoubleDecoder()
+
+BoolDecoder = _ModifiedDecoder(
+    wire_format.WIRETYPE_VARINT, _DecodeVarint, bool)
+
+
+def StringDecoder(field_number, is_repeated, is_packed, key, new_default,
+                  clear_if_default=False):
+  """Returns a decoder for a string field."""
+
+  local_DecodeVarint = _DecodeVarint
+
+  def _ConvertToUnicode(memview):
+    """Convert bytes to unicode."""
+    byte_str = memview.tobytes()
+    try:
+      value = str(byte_str, 'utf-8')
+    except UnicodeDecodeError as e:
+      # add more information to the error message and re-raise it.
+      e.reason = '%s in field: %s' % (e, key.full_name)
+      raise
+
+    return value
+
+  assert not is_packed
+  if is_repeated:
+    tag_bytes = encoder.TagBytes(field_number,
+                                 wire_format.WIRETYPE_LENGTH_DELIMITED)
+    tag_len = len(tag_bytes)
+    def DecodeRepeatedField(buffer, pos, end, message, field_dict):
+      value = field_dict.get(key)
+      if value is None:
+        value = field_dict.setdefault(key, new_default(message))
+      while 1:
+        (size, pos) = local_DecodeVarint(buffer, pos)
+        new_pos = pos + size
+        if new_pos > end:
+          raise _DecodeError('Truncated string.')
+        value.append(_ConvertToUnicode(buffer[pos:new_pos]))
+        # Predict that the next tag is another copy of the same repeated field.
+        pos = new_pos + tag_len
+        if buffer[new_pos:pos] != tag_bytes or new_pos == end:
+          # Prediction failed.  Return.
+ return new_pos + return DecodeRepeatedField + else: + def DecodeField(buffer, pos, end, message, field_dict): + (size, pos) = local_DecodeVarint(buffer, pos) + new_pos = pos + size + if new_pos > end: + raise _DecodeError('Truncated string.') + if clear_if_default and not size: + field_dict.pop(key, None) + else: + field_dict[key] = _ConvertToUnicode(buffer[pos:new_pos]) + return new_pos + return DecodeField + + +def BytesDecoder(field_number, is_repeated, is_packed, key, new_default, + clear_if_default=False): + """Returns a decoder for a bytes field.""" + + local_DecodeVarint = _DecodeVarint + + assert not is_packed + if is_repeated: + tag_bytes = encoder.TagBytes(field_number, + wire_format.WIRETYPE_LENGTH_DELIMITED) + tag_len = len(tag_bytes) + def DecodeRepeatedField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + while 1: + (size, pos) = local_DecodeVarint(buffer, pos) + new_pos = pos + size + if new_pos > end: + raise _DecodeError('Truncated string.') + value.append(buffer[pos:new_pos].tobytes()) + # Predict that the next tag is another copy of the same repeated field. + pos = new_pos + tag_len + if buffer[new_pos:pos] != tag_bytes or new_pos == end: + # Prediction failed. Return. + return new_pos + return DecodeRepeatedField + else: + def DecodeField(buffer, pos, end, message, field_dict): + (size, pos) = local_DecodeVarint(buffer, pos) + new_pos = pos + size + if new_pos > end: + raise _DecodeError('Truncated string.') + if clear_if_default and not size: + field_dict.pop(key, None) + else: + field_dict[key] = buffer[pos:new_pos].tobytes() + return new_pos + return DecodeField + + +def GroupDecoder(field_number, is_repeated, is_packed, key, new_default): + """Returns a decoder for a group field.""" + + end_tag_bytes = encoder.TagBytes(field_number, + wire_format.WIRETYPE_END_GROUP) + end_tag_len = len(end_tag_bytes) + + assert not is_packed + if is_repeated: + tag_bytes = encoder.TagBytes(field_number, + wire_format.WIRETYPE_START_GROUP) + tag_len = len(tag_bytes) + def DecodeRepeatedField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + while 1: + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + # Read sub-message. + pos = value.add()._InternalParse(buffer, pos, end) + # Read end tag. + new_pos = pos+end_tag_len + if buffer[pos:new_pos] != end_tag_bytes or new_pos > end: + raise _DecodeError('Missing group end tag.') + # Predict that the next tag is another copy of the same repeated field. + pos = new_pos + tag_len + if buffer[new_pos:pos] != tag_bytes or new_pos == end: + # Prediction failed. Return. + return new_pos + return DecodeRepeatedField + else: + def DecodeField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + # Read sub-message. + pos = value._InternalParse(buffer, pos, end) + # Read end tag. 
+ new_pos = pos+end_tag_len + if buffer[pos:new_pos] != end_tag_bytes or new_pos > end: + raise _DecodeError('Missing group end tag.') + return new_pos + return DecodeField + + +def MessageDecoder(field_number, is_repeated, is_packed, key, new_default): + """Returns a decoder for a message field.""" + + local_DecodeVarint = _DecodeVarint + + assert not is_packed + if is_repeated: + tag_bytes = encoder.TagBytes(field_number, + wire_format.WIRETYPE_LENGTH_DELIMITED) + tag_len = len(tag_bytes) + def DecodeRepeatedField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + while 1: + # Read length. + (size, pos) = local_DecodeVarint(buffer, pos) + new_pos = pos + size + if new_pos > end: + raise _DecodeError('Truncated message.') + # Read sub-message. + if value.add()._InternalParse(buffer, pos, new_pos) != new_pos: + # The only reason _InternalParse would return early is if it + # encountered an end-group tag. + raise _DecodeError('Unexpected end-group tag.') + # Predict that the next tag is another copy of the same repeated field. + pos = new_pos + tag_len + if buffer[new_pos:pos] != tag_bytes or new_pos == end: + # Prediction failed. Return. + return new_pos + return DecodeRepeatedField + else: + def DecodeField(buffer, pos, end, message, field_dict): + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + # Read length. + (size, pos) = local_DecodeVarint(buffer, pos) + new_pos = pos + size + if new_pos > end: + raise _DecodeError('Truncated message.') + # Read sub-message. + if value._InternalParse(buffer, pos, new_pos) != new_pos: + # The only reason _InternalParse would return early is if it encountered + # an end-group tag. + raise _DecodeError('Unexpected end-group tag.') + return new_pos + return DecodeField + + +# -------------------------------------------------------------------- + +MESSAGE_SET_ITEM_TAG = encoder.TagBytes(1, wire_format.WIRETYPE_START_GROUP) + +def MessageSetItemDecoder(descriptor): + """Returns a decoder for a MessageSet item. + + The parameter is the message Descriptor. + + The message set message looks like this: + message MessageSet { + repeated group Item = 1 { + required int32 type_id = 2; + required string message = 3; + } + } + """ + + type_id_tag_bytes = encoder.TagBytes(2, wire_format.WIRETYPE_VARINT) + message_tag_bytes = encoder.TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED) + item_end_tag_bytes = encoder.TagBytes(1, wire_format.WIRETYPE_END_GROUP) + + local_ReadTag = ReadTag + local_DecodeVarint = _DecodeVarint + local_SkipField = SkipField + + def DecodeItem(buffer, pos, end, message, field_dict): + """Decode serialized message set to its value and new position. + + Args: + buffer: memoryview of the serialized bytes. + pos: int, position in the memory view to start at. + end: int, end position of serialized data + message: Message object to store unknown fields in + field_dict: Map[Descriptor, Any] to store decoded values in. + + Returns: + int, new position in serialized data. + """ + message_set_item_start = pos + type_id = -1 + message_start = -1 + message_end = -1 + + # Technically, type_id and message can appear in any order, so we need + # a little loop here. 
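+    # A serialized item is therefore expected to look roughly like
+    # (illustrative sketch):
+    #   [tag 1, start-group] [tag 2, varint type_id]
+    #   [tag 3, length-delimited message bytes] [tag 1, end-group]
+    # although, as noted above, type_id and message may arrive in either
+    # order.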
+ while 1: + (tag_bytes, pos) = local_ReadTag(buffer, pos) + if tag_bytes == type_id_tag_bytes: + (type_id, pos) = local_DecodeVarint(buffer, pos) + elif tag_bytes == message_tag_bytes: + (size, message_start) = local_DecodeVarint(buffer, pos) + pos = message_end = message_start + size + elif tag_bytes == item_end_tag_bytes: + break + else: + pos = SkipField(buffer, pos, end, tag_bytes) + if pos == -1: + raise _DecodeError('Missing group end tag.') + + if pos > end: + raise _DecodeError('Truncated message.') + + if type_id == -1: + raise _DecodeError('MessageSet item missing type_id.') + if message_start == -1: + raise _DecodeError('MessageSet item missing message.') + + extension = message.Extensions._FindExtensionByNumber(type_id) + # pylint: disable=protected-access + if extension is not None: + value = field_dict.get(extension) + if value is None: + message_type = extension.message_type + if not hasattr(message_type, '_concrete_class'): + # pylint: disable=protected-access + message._FACTORY.GetPrototype(message_type) + value = field_dict.setdefault( + extension, message_type._concrete_class()) + if value._InternalParse(buffer, message_start,message_end) != message_end: + # The only reason _InternalParse would return early is if it encountered + # an end-group tag. + raise _DecodeError('Unexpected end-group tag.') + else: + if not message._unknown_fields: + message._unknown_fields = [] + message._unknown_fields.append( + (MESSAGE_SET_ITEM_TAG, buffer[message_set_item_start:pos].tobytes())) + if message._unknown_field_set is None: + message._unknown_field_set = containers.UnknownFieldSet() + message._unknown_field_set._add( + type_id, + wire_format.WIRETYPE_LENGTH_DELIMITED, + buffer[message_start:message_end].tobytes()) + # pylint: enable=protected-access + + return pos + + return DecodeItem + +# -------------------------------------------------------------------- + +def MapDecoder(field_descriptor, new_default, is_message_map): + """Returns a decoder for a map field.""" + + key = field_descriptor + tag_bytes = encoder.TagBytes(field_descriptor.number, + wire_format.WIRETYPE_LENGTH_DELIMITED) + tag_len = len(tag_bytes) + local_DecodeVarint = _DecodeVarint + # Can't read _concrete_class yet; might not be initialized. + message_type = field_descriptor.message_type + + def DecodeMap(buffer, pos, end, message, field_dict): + submsg = message_type._concrete_class() + value = field_dict.get(key) + if value is None: + value = field_dict.setdefault(key, new_default(message)) + while 1: + # Read length. + (size, pos) = local_DecodeVarint(buffer, pos) + new_pos = pos + size + if new_pos > end: + raise _DecodeError('Truncated message.') + # Read sub-message. + submsg.Clear() + if submsg._InternalParse(buffer, pos, new_pos) != new_pos: + # The only reason _InternalParse would return early is if it + # encountered an end-group tag. + raise _DecodeError('Unexpected end-group tag.') + + if is_message_map: + value[submsg.key].CopyFrom(submsg.value) + else: + value[submsg.key] = submsg.value + + # Predict that the next tag is another copy of the same repeated field. + pos = new_pos + tag_len + if buffer[new_pos:pos] != tag_bytes or new_pos == end: + # Prediction failed. Return. + return new_pos + + return DecodeMap + +# -------------------------------------------------------------------- +# Optimization is not as heavy here because calls to SkipField() are rare, +# except for handling end-group tags. + +def _SkipVarint(buffer, pos, end): + """Skip a varint value. 
Returns the new position."""
+  # Previously ord(buffer[pos]) raised IndexError when pos is out of range.
+  # With this code, ord(b'') raises TypeError.  Both are handled in
+  # python_message.py to generate a 'Truncated message' error.
+  while ord(buffer[pos:pos+1].tobytes()) & 0x80:
+    pos += 1
+  pos += 1
+  if pos > end:
+    raise _DecodeError('Truncated message.')
+  return pos
+
+
+def _SkipFixed64(buffer, pos, end):
+  """Skip a fixed64 value.  Returns the new position."""
+
+  pos += 8
+  if pos > end:
+    raise _DecodeError('Truncated message.')
+  return pos
+
+
+def _DecodeFixed64(buffer, pos):
+  """Decode a fixed64."""
+  new_pos = pos + 8
+  return (struct.unpack('<Q', buffer[pos:new_pos])[0], new_pos)
+
+
+def _SkipLengthDelimited(buffer, pos, end):
+  """Skip a length-delimited value.  Returns the new position."""
+  (size, pos) = _DecodeVarint(buffer, pos)
+  pos += size
+  if pos > end:
+    raise _DecodeError('Truncated message.')
+  return pos
+
+
+def _SkipGroup(buffer, pos, end):
+  """Skip sub-group.  Returns the new position."""
+
+  while 1:
+    (tag_bytes, pos) = ReadTag(buffer, pos)
+    new_pos = SkipField(buffer, pos, end, tag_bytes)
+    if new_pos == -1:
+      return pos
+    pos = new_pos
+
+
+def _DecodeUnknownFieldSet(buffer, pos, end_pos=None):
+  """Decode UnknownFieldSet.  Returns the UnknownFieldSet and new position."""
+
+  unknown_field_set = containers.UnknownFieldSet()
+  while end_pos is None or pos < end_pos:
+    (tag_bytes, pos) = ReadTag(buffer, pos)
+    (tag, _) = _DecodeVarint(tag_bytes, 0)
+    field_number, wire_type = wire_format.UnpackTag(tag)
+    if wire_type == wire_format.WIRETYPE_END_GROUP:
+      break
+    (data, pos) = _DecodeUnknownField(buffer, pos, wire_type)
+    # pylint: disable=protected-access
+    unknown_field_set._add(field_number, wire_type, data)
+
+  return (unknown_field_set, pos)
+
+
+def _DecodeUnknownField(buffer, pos, wire_type):
+  """Decode an unknown field.  Returns the UnknownField and new position."""
+
+  if wire_type == wire_format.WIRETYPE_VARINT:
+    (data, pos) = _DecodeVarint(buffer, pos)
+  elif wire_type == wire_format.WIRETYPE_FIXED64:
+    (data, pos) = _DecodeFixed64(buffer, pos)
+  elif wire_type == wire_format.WIRETYPE_FIXED32:
+    (data, pos) = _DecodeFixed32(buffer, pos)
+  elif wire_type == wire_format.WIRETYPE_LENGTH_DELIMITED:
+    (size, pos) = _DecodeVarint(buffer, pos)
+    data = buffer[pos:pos+size].tobytes()
+    pos += size
+  elif wire_type == wire_format.WIRETYPE_START_GROUP:
+    (data, pos) = _DecodeUnknownFieldSet(buffer, pos)
+  elif wire_type == wire_format.WIRETYPE_END_GROUP:
+    return (0, -1)
+  else:
+    raise _DecodeError('Wrong wire type in tag.')
+
+  return (data, pos)
+
+
+def _EndGroup(buffer, pos, end):
+  """Skipping an END_GROUP tag returns -1 to tell the parent loop to break."""
+
+  return -1
+
+
+def _SkipFixed32(buffer, pos, end):
+  """Skip a fixed32 value.  Returns the new position."""
+
+  pos += 4
+  if pos > end:
+    raise _DecodeError('Truncated message.')
+  return pos
+
+
+def _DecodeFixed32(buffer, pos):
+  """Decode a fixed32."""
+
+  new_pos = pos + 4
+  return (struct.unpack('<I', buffer[pos:new_pos])[0], new_pos)
+
+
+def _RaiseInvalidWireType(buffer, pos, end):
+  """Skip function for unknown wire types.  Raises an exception."""
+
+  raise _DecodeError('Tag had invalid wire type.')
+
+
+def _FieldSkipper():
+  """Constructs the SkipField function."""
+
+  WIRETYPE_TO_SKIPPER = [
+      _SkipVarint,
+      _SkipFixed64,
+      _SkipLengthDelimited,
+      _SkipGroup,
+      _EndGroup,
+      _SkipFixed32,
+      _RaiseInvalidWireType,
+      _RaiseInvalidWireType,
+      ]
+
+  wiretype_mask = wire_format.TAG_TYPE_MASK
+
+  def SkipField(buffer, pos, end, tag_bytes):
+    """Skips a field with the specified tag.
+
+    |pos| should point to the byte immediately after the tag.
+
+    Returns:
+      The new position (after the tag value), or -1 if the tag is an
+      end-group tag (in which case the calling loop should break).
+    """
+    # The wire type is always in the first byte since varints are little-endian.
+    wire_type = ord(tag_bytes[0:1]) & wiretype_mask
+    return WIRETYPE_TO_SKIPPER[wire_type](buffer, pos, end)
+
+  return SkipField
+
+
+SkipField = _FieldSkipper()
+
+
+def _VarintEncoder():
+  """Return an encoder for a basic varint value (does not include tag)."""
+
+  local_int2byte = struct.Struct('>B').pack
+
+  def EncodeVarint(write, value, unused_deterministic=None):
+    bits = value & 0x7f
+    value >>= 7
+    while value:
+      write(local_int2byte(0x80|bits))
+      bits = value & 0x7f
+      value >>= 7
+    return write(local_int2byte(bits))
+
+  return EncodeVarint
+
+
+def _SignedVarintEncoder():
+  """Return an encoder for a basic signed varint value (does not include
+  tag)."""
+
+  local_int2byte = struct.Struct('>B').pack
+
+  def EncodeSignedVarint(write, value, unused_deterministic=None):
+    if value < 0:
+      value += (1 << 64)
+    bits = value & 0x7f
+    value >>= 7
+    while value:
+      write(local_int2byte(0x80|bits))
+      bits = value & 0x7f
+      value >>= 7
+    return write(local_int2byte(bits))
+
+  return EncodeSignedVarint
+
+
+_EncodeVarint = _VarintEncoder()
+_EncodeSignedVarint = _SignedVarintEncoder()
+
+
+def _VarintBytes(value):
+  """Encode the given integer as a varint and return the bytes.  This is only
+  called at startup time so it doesn't need to be fast."""
+
+  pieces = []
+  _EncodeVarint(pieces.append, value, True)
+  return b"".join(pieces)
+
+
+def TagBytes(field_number, wire_type):
+  """Encode the given tag and return the bytes.  Only called at startup."""
+
+  return bytes(_VarintBytes(wire_format.PackTag(field_number, wire_type)))
+
+# --------------------------------------------------------------------
+# As with sizers (see above), we have a number of common encoder
+# implementations.
+
+
+def _SimpleEncoder(wire_type, encode_value, compute_value_size):
+  """Return a constructor for an encoder for fields of a particular type.
+
+  Args:
+    wire_type: The field's wire type, for encoding tags.
+    encode_value: A function which encodes an individual value, e.g.
+      _EncodeVarint().
+    compute_value_size: A function which computes the size of an individual
+      value, e.g. _VarintSize().
+  """
+
+  def SpecificEncoder(field_number, is_repeated, is_packed):
+    if is_packed:
+      tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
+      local_EncodeVarint = _EncodeVarint
+      def EncodePackedField(write, value, deterministic):
+        write(tag_bytes)
+        size = 0
+        for element in value:
+          size += compute_value_size(element)
+        local_EncodeVarint(write, size, deterministic)
+        for element in value:
+          encode_value(write, element, deterministic)
+      return EncodePackedField
+    elif is_repeated:
+      tag_bytes = TagBytes(field_number, wire_type)
+      def EncodeRepeatedField(write, value, deterministic):
+        for element in value:
+          write(tag_bytes)
+          encode_value(write, element, deterministic)
+      return EncodeRepeatedField
+    else:
+      tag_bytes = TagBytes(field_number, wire_type)
+      def EncodeField(write, value, deterministic):
+        write(tag_bytes)
+        return encode_value(write, value, deterministic)
+      return EncodeField
+
+  return SpecificEncoder
+
+
+def _ModifiedEncoder(wire_type, encode_value, compute_value_size, modify_value):
+  """Like SimpleEncoder but additionally invokes modify_value on every value
+  before passing it to encode_value. 
Usually modify_value is ZigZagEncode.""" + + def SpecificEncoder(field_number, is_repeated, is_packed): + if is_packed: + tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED) + local_EncodeVarint = _EncodeVarint + def EncodePackedField(write, value, deterministic): + write(tag_bytes) + size = 0 + for element in value: + size += compute_value_size(modify_value(element)) + local_EncodeVarint(write, size, deterministic) + for element in value: + encode_value(write, modify_value(element), deterministic) + return EncodePackedField + elif is_repeated: + tag_bytes = TagBytes(field_number, wire_type) + def EncodeRepeatedField(write, value, deterministic): + for element in value: + write(tag_bytes) + encode_value(write, modify_value(element), deterministic) + return EncodeRepeatedField + else: + tag_bytes = TagBytes(field_number, wire_type) + def EncodeField(write, value, deterministic): + write(tag_bytes) + return encode_value(write, modify_value(value), deterministic) + return EncodeField + + return SpecificEncoder + + +def _StructPackEncoder(wire_type, format): + """Return a constructor for an encoder for a fixed-width field. + + Args: + wire_type: The field's wire type, for encoding tags. + format: The format string to pass to struct.pack(). + """ + + value_size = struct.calcsize(format) + + def SpecificEncoder(field_number, is_repeated, is_packed): + local_struct_pack = struct.pack + if is_packed: + tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED) + local_EncodeVarint = _EncodeVarint + def EncodePackedField(write, value, deterministic): + write(tag_bytes) + local_EncodeVarint(write, len(value) * value_size, deterministic) + for element in value: + write(local_struct_pack(format, element)) + return EncodePackedField + elif is_repeated: + tag_bytes = TagBytes(field_number, wire_type) + def EncodeRepeatedField(write, value, unused_deterministic=None): + for element in value: + write(tag_bytes) + write(local_struct_pack(format, element)) + return EncodeRepeatedField + else: + tag_bytes = TagBytes(field_number, wire_type) + def EncodeField(write, value, unused_deterministic=None): + write(tag_bytes) + return write(local_struct_pack(format, value)) + return EncodeField + + return SpecificEncoder + + +def _FloatingPointEncoder(wire_type, format): + """Return a constructor for an encoder for float fields. + + This is like StructPackEncoder, but catches errors that may be due to + passing non-finite floating-point values to struct.pack, and makes a + second attempt to encode those values. + + Args: + wire_type: The field's wire type, for encoding tags. + format: The format string to pass to struct.pack(). + """ + + value_size = struct.calcsize(format) + if value_size == 4: + def EncodeNonFiniteOrRaise(write, value): + # Remember that the serialized form uses little-endian byte order. 
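+      # (Read back in big-endian order, these are the familiar IEEE-754
+      # bit patterns: 0x7F800000 for +inf, 0xFF800000 for -inf and
+      # 0x7FC00000 for a quiet NaN.)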
+      if value == _POS_INF:
+        write(b'\x00\x00\x80\x7F')
+      elif value == _NEG_INF:
+        write(b'\x00\x00\x80\xFF')
+      elif value != value:  # NaN
+        write(b'\x00\x00\xC0\x7F')
+      else:
+        raise
+  elif value_size == 8:
+    def EncodeNonFiniteOrRaise(write, value):
+      if value == _POS_INF:
+        write(b'\x00\x00\x00\x00\x00\x00\xF0\x7F')
+      elif value == _NEG_INF:
+        write(b'\x00\x00\x00\x00\x00\x00\xF0\xFF')
+      elif value != value:  # NaN
+        write(b'\x00\x00\x00\x00\x00\x00\xF8\x7F')
+      else:
+        raise
+  else:
+    raise ValueError('Can\'t encode floating-point values that are '
+                     '%d bytes long (only 4 or 8)' % value_size)
+
+  def SpecificEncoder(field_number, is_repeated, is_packed):
+    local_struct_pack = struct.pack
+    if is_packed:
+      tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
+      local_EncodeVarint = _EncodeVarint
+      def EncodePackedField(write, value, deterministic):
+        write(tag_bytes)
+        local_EncodeVarint(write, len(value) * value_size, deterministic)
+        for element in value:
+          # This try/except block is going to be faster than any code that
+          # we could write to check whether element is finite.
+          try:
+            write(local_struct_pack(format, element))
+          except SystemError:
+            EncodeNonFiniteOrRaise(write, element)
+      return EncodePackedField
+    elif is_repeated:
+      tag_bytes = TagBytes(field_number, wire_type)
+      def EncodeRepeatedField(write, value, unused_deterministic=None):
+        for element in value:
+          write(tag_bytes)
+          try:
+            write(local_struct_pack(format, element))
+          except SystemError:
+            EncodeNonFiniteOrRaise(write, element)
+      return EncodeRepeatedField
+    else:
+      tag_bytes = TagBytes(field_number, wire_type)
+      def EncodeField(write, value, unused_deterministic=None):
+        write(tag_bytes)
+        try:
+          write(local_struct_pack(format, value))
+        except SystemError:
+          EncodeNonFiniteOrRaise(write, value)
+      return EncodeField
+
+  return SpecificEncoder
+
+
+# ====================================================================
+# Here we declare an encoder constructor for each field type.  These work
+# very similarly to sizer constructors, described earlier.
+
+
+Int32Encoder = Int64Encoder = EnumEncoder = _SimpleEncoder(
+    wire_format.WIRETYPE_VARINT, _EncodeSignedVarint, _SignedVarintSize)
+
+UInt32Encoder = UInt64Encoder = _SimpleEncoder(
+    wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize)
+
+SInt32Encoder = SInt64Encoder = _ModifiedEncoder(
+    wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize,
+    wire_format.ZigZagEncode)
+
+# Note that Python conveniently guarantees that when using the '<' prefix on
+# formats, they will also have the same size across all platforms (as opposed
+# to without the prefix, where their sizes depend on the C compiler's basic
+# type sizes).
+Fixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<I')
+Fixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<Q')
+SFixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<i')
+SFixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<q')
+FloatEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED32, '<f')
+DoubleEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED64, '<d')
+
+class EnumTypeWrapper(object):
+  """A utility for finding the names of enum values."""
+
+  DESCRIPTOR = None
+
+  # This is a type alias, which mypy typing stubs can type as
+  # a genericized parameter constrained to an int, allowing subclasses
+  # to be typed with more constraint in .pyi stubs
+  # Eg.
+  # def MyGeneratedEnum(Message):
+  #   ValueType = NewType('ValueType', int)
+  #   def Name(self, number: MyGeneratedEnum.ValueType) -> str
+  ValueType = int
+
+  def __init__(self, enum_type):
+    """Inits EnumTypeWrapper with an EnumDescriptor."""
+    self._enum_type = enum_type
+    self.DESCRIPTOR = enum_type  # pylint: disable=invalid-name
+
+  def Name(self, number):  # pylint: disable=invalid-name
+    """Returns a string containing the name of an enum value."""
+    try:
+      return self._enum_type.values_by_number[number].name
+    except KeyError:
+      pass  # fall out to break exception chaining
+
+    if not isinstance(number, int):
+      raise TypeError(
+          'Enum value for {} must be an int, but got {} {!r}.'.format(
+              self._enum_type.name, type(number), number))
+    else:
+      # repr here to handle the odd case when you pass in a boolean.
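+      # (For example, Name(True) reports the value as True rather than 1.)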
+ raise ValueError('Enum {} has no name defined for value {!r}'.format( + self._enum_type.name, number)) + + def Value(self, name): # pylint: disable=invalid-name + """Returns the value corresponding to the given enum name.""" + try: + return self._enum_type.values_by_name[name].number + except KeyError: + pass # fall out to break exception chaining + raise ValueError('Enum {} has no value defined for name {!r}'.format( + self._enum_type.name, name)) + + def keys(self): + """Return a list of the string names in the enum. + + Returns: + A list of strs, in the order they were defined in the .proto file. + """ + + return [value_descriptor.name + for value_descriptor in self._enum_type.values] + + def values(self): + """Return a list of the integer values in the enum. + + Returns: + A list of ints, in the order they were defined in the .proto file. + """ + + return [value_descriptor.number + for value_descriptor in self._enum_type.values] + + def items(self): + """Return a list of the (name, value) pairs of the enum. + + Returns: + A list of (str, int) pairs, in the order they were defined + in the .proto file. + """ + return [(value_descriptor.name, value_descriptor.number) + for value_descriptor in self._enum_type.values] + + def __getattr__(self, name): + """Returns the value corresponding to the given enum name.""" + try: + return super( + EnumTypeWrapper, + self).__getattribute__('_enum_type').values_by_name[name].number + except KeyError: + pass # fall out to break exception chaining + raise AttributeError('Enum {} has no value defined for name {!r}'.format( + self._enum_type.name, name)) diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/extension_dict.py b/env/lib/python3.8/site-packages/google/protobuf/internal/extension_dict.py new file mode 100644 index 00000000..b346cf28 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/internal/extension_dict.py @@ -0,0 +1,213 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Contains _ExtensionDict class to represent extensions. +""" + +from google.protobuf.internal import type_checkers +from google.protobuf.descriptor import FieldDescriptor + + +def _VerifyExtensionHandle(message, extension_handle): + """Verify that the given extension handle is valid.""" + + if not isinstance(extension_handle, FieldDescriptor): + raise KeyError('HasExtension() expects an extension handle, got: %s' % + extension_handle) + + if not extension_handle.is_extension: + raise KeyError('"%s" is not an extension.' % extension_handle.full_name) + + if not extension_handle.containing_type: + raise KeyError('"%s" is missing a containing_type.' + % extension_handle.full_name) + + if extension_handle.containing_type is not message.DESCRIPTOR: + raise KeyError('Extension "%s" extends message type "%s", but this ' + 'message is of type "%s".' % + (extension_handle.full_name, + extension_handle.containing_type.full_name, + message.DESCRIPTOR.full_name)) + + +# TODO(robinson): Unify error handling of "unknown extension" crap. +# TODO(robinson): Support iteritems()-style iteration over all +# extensions with the "has" bits turned on? +class _ExtensionDict(object): + + """Dict-like container for Extension fields on proto instances. + + Note that in all cases we expect extension handles to be + FieldDescriptors. + """ + + def __init__(self, extended_message): + """ + Args: + extended_message: Message instance for which we are the Extensions dict. + """ + self._extended_message = extended_message + + def __getitem__(self, extension_handle): + """Returns the current value of the given extension handle.""" + + _VerifyExtensionHandle(self._extended_message, extension_handle) + + result = self._extended_message._fields.get(extension_handle) + if result is not None: + return result + + if extension_handle.label == FieldDescriptor.LABEL_REPEATED: + result = extension_handle._default_constructor(self._extended_message) + elif extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE: + message_type = extension_handle.message_type + if not hasattr(message_type, '_concrete_class'): + # pylint: disable=protected-access + self._extended_message._FACTORY.GetPrototype(message_type) + assert getattr(extension_handle.message_type, '_concrete_class', None), ( + 'Uninitialized concrete class found for field %r (message type %r)' + % (extension_handle.full_name, + extension_handle.message_type.full_name)) + result = extension_handle.message_type._concrete_class() + try: + result._SetListener(self._extended_message._listener_for_children) + except ReferenceError: + pass + else: + # Singular scalar -- just return the default without inserting into the + # dict. + return extension_handle.default_value + + # Atomically check if another thread has preempted us and, if not, swap + # in the new object we just created. If someone has preempted us, we + # take that object and discard ours. + # WARNING: We are relying on setdefault() being atomic. 
This is true + # in CPython but we haven't investigated others. This warning appears + # in several other locations in this file. + result = self._extended_message._fields.setdefault( + extension_handle, result) + + return result + + def __eq__(self, other): + if not isinstance(other, self.__class__): + return False + + my_fields = self._extended_message.ListFields() + other_fields = other._extended_message.ListFields() + + # Get rid of non-extension fields. + my_fields = [field for field in my_fields if field.is_extension] + other_fields = [field for field in other_fields if field.is_extension] + + return my_fields == other_fields + + def __ne__(self, other): + return not self == other + + def __len__(self): + fields = self._extended_message.ListFields() + # Get rid of non-extension fields. + extension_fields = [field for field in fields if field[0].is_extension] + return len(extension_fields) + + def __hash__(self): + raise TypeError('unhashable object') + + # Note that this is only meaningful for non-repeated, scalar extension + # fields. Note also that we may have to call _Modified() when we do + # successfully set a field this way, to set any necessary "has" bits in the + # ancestors of the extended message. + def __setitem__(self, extension_handle, value): + """If extension_handle specifies a non-repeated, scalar extension + field, sets the value of that field. + """ + + _VerifyExtensionHandle(self._extended_message, extension_handle) + + if (extension_handle.label == FieldDescriptor.LABEL_REPEATED or + extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE): + raise TypeError( + 'Cannot assign to extension "%s" because it is a repeated or ' + 'composite type.' % extension_handle.full_name) + + # It's slightly wasteful to lookup the type checker each time, + # but we expect this to be a vanishingly uncommon case anyway. + type_checker = type_checkers.GetTypeChecker(extension_handle) + # pylint: disable=protected-access + self._extended_message._fields[extension_handle] = ( + type_checker.CheckValue(value)) + self._extended_message._Modified() + + def __delitem__(self, extension_handle): + self._extended_message.ClearExtension(extension_handle) + + def _FindExtensionByName(self, name): + """Tries to find a known extension with the specified name. + + Args: + name: Extension full name. + + Returns: + Extension field descriptor. + """ + return self._extended_message._extensions_by_name.get(name, None) + + def _FindExtensionByNumber(self, number): + """Tries to find a known extension with the field number. + + Args: + number: Extension field number. + + Returns: + Extension field descriptor. 
+ """ + return self._extended_message._extensions_by_number.get(number, None) + + def __iter__(self): + # Return a generator over the populated extension fields + return (f[0] for f in self._extended_message.ListFields() + if f[0].is_extension) + + def __contains__(self, extension_handle): + _VerifyExtensionHandle(self._extended_message, extension_handle) + + if extension_handle not in self._extended_message._fields: + return False + + if extension_handle.label == FieldDescriptor.LABEL_REPEATED: + return bool(self._extended_message._fields.get(extension_handle)) + + if extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE: + value = self._extended_message._fields.get(extension_handle) + # pylint: disable=protected-access + return value is not None and value._is_present_in_parent + + return True diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/message_listener.py b/env/lib/python3.8/site-packages/google/protobuf/internal/message_listener.py new file mode 100644 index 00000000..0fc255a7 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/internal/message_listener.py @@ -0,0 +1,78 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Defines a listener interface for observing certain +state transitions on Message objects. + +Also defines a null implementation of this interface. +""" + +__author__ = 'robinson@google.com (Will Robinson)' + + +class MessageListener(object): + + """Listens for modifications made to a message. Meant to be registered via + Message._SetListener(). + + Attributes: + dirty: If True, then calling Modified() would be a no-op. This can be + used to avoid these calls entirely in the common case. + """ + + def Modified(self): + """Called every time the message is modified in such a way that the parent + message may need to be updated. 
This currently means either: + (a) The message was modified for the first time, so the parent message + should henceforth mark the message as present. + (b) The message's cached byte size became dirty -- i.e. the message was + modified for the first time after a previous call to ByteSize(). + Therefore the parent should also mark its byte size as dirty. + Note that (a) implies (b), since new objects start out with a client cached + size (zero). However, we document (a) explicitly because it is important. + + Modified() will *only* be called in response to one of these two events -- + not every time the sub-message is modified. + + Note that if the listener's |dirty| attribute is true, then calling + Modified at the moment would be a no-op, so it can be skipped. Performance- + sensitive callers should check this attribute directly before calling since + it will be true most of the time. + """ + + raise NotImplementedError + + +class NullMessageListener(object): + + """No-op MessageListener implementation.""" + + def Modified(self): + pass diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/python_message.py b/env/lib/python3.8/site-packages/google/protobuf/internal/python_message.py new file mode 100644 index 00000000..2921d5cb --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/internal/python_message.py @@ -0,0 +1,1539 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# This code is meant to work on Python 2.4 and above only. +# +# TODO(robinson): Helpers for verbose, common checks like seeing if a +# descriptor's cpp_type is CPPTYPE_MESSAGE. + +"""Contains a metaclass and helper functions used to create +protocol message classes from Descriptor objects at runtime. + +Recall that a metaclass is the "type" of a class. +(A class is to a metaclass what an instance is to a class.) 
+ +In this case, we use the GeneratedProtocolMessageType metaclass +to inject all the useful functionality into the classes +output by the protocol compiler at compile-time. + +The upshot of all this is that the real implementation +details for ALL pure-Python protocol buffers are *here in +this file*. +""" + +__author__ = 'robinson@google.com (Will Robinson)' + +from io import BytesIO +import struct +import sys +import weakref + +# We use "as" to avoid name collisions with variables. +from google.protobuf.internal import api_implementation +from google.protobuf.internal import containers +from google.protobuf.internal import decoder +from google.protobuf.internal import encoder +from google.protobuf.internal import enum_type_wrapper +from google.protobuf.internal import extension_dict +from google.protobuf.internal import message_listener as message_listener_mod +from google.protobuf.internal import type_checkers +from google.protobuf.internal import well_known_types +from google.protobuf.internal import wire_format +from google.protobuf import descriptor as descriptor_mod +from google.protobuf import message as message_mod +from google.protobuf import text_format + +_FieldDescriptor = descriptor_mod.FieldDescriptor +_AnyFullTypeName = 'google.protobuf.Any' +_ExtensionDict = extension_dict._ExtensionDict + +class GeneratedProtocolMessageType(type): + + """Metaclass for protocol message classes created at runtime from Descriptors. + + We add implementations for all methods described in the Message class. We + also create properties to allow getting/setting all fields in the protocol + message. Finally, we create slots to prevent users from accidentally + "setting" nonexistent fields in the protocol message, which then wouldn't get + serialized / deserialized properly. + + The protocol compiler currently uses this metaclass to create protocol + message classes at runtime. Clients can also manually create their own + classes at runtime, as in this example: + + mydescriptor = Descriptor(.....) + factory = symbol_database.Default() + factory.pool.AddDescriptor(mydescriptor) + MyProtoClass = factory.GetPrototype(mydescriptor) + myproto_instance = MyProtoClass() + myproto.foo_field = 23 + ... + """ + + # Must be consistent with the protocol-compiler code in + # proto2/compiler/internal/generator.*. + _DESCRIPTOR_KEY = 'DESCRIPTOR' + + def __new__(cls, name, bases, dictionary): + """Custom allocation for runtime-generated class types. + + We override __new__ because this is apparently the only place + where we can meaningfully set __slots__ on the class we're creating(?). + (The interplay between metaclasses and slots is not very well-documented). + + Args: + name: Name of the class (ignored, but required by the + metaclass protocol). + bases: Base classes of the class we're constructing. + (Should be message.Message). We ignore this field, but + it's required by the metaclass protocol + dictionary: The class dictionary of the class we're + constructing. dictionary[_DESCRIPTOR_KEY] must contain + a Descriptor object describing this protocol message + type. + + Returns: + Newly-allocated class. + + Raises: + RuntimeError: Generated code only work with python cpp extension. + """ + descriptor = dictionary[GeneratedProtocolMessageType._DESCRIPTOR_KEY] + + if isinstance(descriptor, str): + raise RuntimeError('The generated code only work with python cpp ' + 'extension, but it is using pure python runtime.') + + # If a concrete class already exists for this descriptor, don't try to + # create another. 
Doing so will break any messages that already exist with + # the existing class. + # + # The C++ implementation appears to have its own internal `PyMessageFactory` + # to achieve similar results. + # + # This most commonly happens in `text_format.py` when using descriptors from + # a custom pool; it calls symbol_database.Global().getPrototype() on a + # descriptor which already has an existing concrete class. + new_class = getattr(descriptor, '_concrete_class', None) + if new_class: + return new_class + + if descriptor.full_name in well_known_types.WKTBASES: + bases += (well_known_types.WKTBASES[descriptor.full_name],) + _AddClassAttributesForNestedExtensions(descriptor, dictionary) + _AddSlots(descriptor, dictionary) + + superclass = super(GeneratedProtocolMessageType, cls) + new_class = superclass.__new__(cls, name, bases, dictionary) + return new_class + + def __init__(cls, name, bases, dictionary): + """Here we perform the majority of our work on the class. + We add enum getters, an __init__ method, implementations + of all Message methods, and properties for all fields + in the protocol type. + + Args: + name: Name of the class (ignored, but required by the + metaclass protocol). + bases: Base classes of the class we're constructing. + (Should be message.Message). We ignore this field, but + it's required by the metaclass protocol + dictionary: The class dictionary of the class we're + constructing. dictionary[_DESCRIPTOR_KEY] must contain + a Descriptor object describing this protocol message + type. + """ + descriptor = dictionary[GeneratedProtocolMessageType._DESCRIPTOR_KEY] + + # If this is an _existing_ class looked up via `_concrete_class` in the + # __new__ method above, then we don't need to re-initialize anything. + existing_class = getattr(descriptor, '_concrete_class', None) + if existing_class: + assert existing_class is cls, ( + 'Duplicate `GeneratedProtocolMessageType` created for descriptor %r' + % (descriptor.full_name)) + return + + cls._decoders_by_tag = {} + if (descriptor.has_options and + descriptor.GetOptions().message_set_wire_format): + cls._decoders_by_tag[decoder.MESSAGE_SET_ITEM_TAG] = ( + decoder.MessageSetItemDecoder(descriptor), None) + + # Attach stuff to each FieldDescriptor for quick lookup later on. + for field in descriptor.fields: + _AttachFieldHelpers(cls, field) + + descriptor._concrete_class = cls # pylint: disable=protected-access + _AddEnumValues(descriptor, cls) + _AddInitMethod(descriptor, cls) + _AddPropertiesForFields(descriptor, cls) + _AddPropertiesForExtensions(descriptor, cls) + _AddStaticMethods(cls) + _AddMessageMethods(descriptor, cls) + _AddPrivateHelperMethods(descriptor, cls) + + superclass = super(GeneratedProtocolMessageType, cls) + superclass.__init__(name, bases, dictionary) + + +# Stateless helpers for GeneratedProtocolMessageType below. +# Outside clients should not access these directly. +# +# I opted not to make any of these methods on the metaclass, to make it more +# clear that I'm not really using any state there and to keep clients from +# thinking that they have direct access to these construction helpers. + + +def _PropertyName(proto_field_name): + """Returns the name of the public property attribute which + clients can use to get and (in some cases) set the value + of a protocol message field. + + Args: + proto_field_name: The protocol message field name, exactly + as it appears (or would appear) in a .proto file. + """ + # TODO(robinson): Escape Python keywords (e.g., yield), and test this support. 
+ # nnorwitz makes my day by writing: + # """ + # FYI. See the keyword module in the stdlib. This could be as simple as: + # + # if keyword.iskeyword(proto_field_name): + # return proto_field_name + "_" + # return proto_field_name + # """ + # Kenton says: The above is a BAD IDEA. People rely on being able to use + # getattr() and setattr() to reflectively manipulate field values. If we + # rename the properties, then every such user has to also make sure to apply + # the same transformation. Note that currently if you name a field "yield", + # you can still access it just fine using getattr/setattr -- it's not even + # that cumbersome to do so. + # TODO(kenton): Remove this method entirely if/when everyone agrees with my + # position. + return proto_field_name + + +def _AddSlots(message_descriptor, dictionary): + """Adds a __slots__ entry to dictionary, containing the names of all valid + attributes for this message type. + + Args: + message_descriptor: A Descriptor instance describing this message type. + dictionary: Class dictionary to which we'll add a '__slots__' entry. + """ + dictionary['__slots__'] = ['_cached_byte_size', + '_cached_byte_size_dirty', + '_fields', + '_unknown_fields', + '_unknown_field_set', + '_is_present_in_parent', + '_listener', + '_listener_for_children', + '__weakref__', + '_oneofs'] + + +def _IsMessageSetExtension(field): + return (field.is_extension and + field.containing_type.has_options and + field.containing_type.GetOptions().message_set_wire_format and + field.type == _FieldDescriptor.TYPE_MESSAGE and + field.label == _FieldDescriptor.LABEL_OPTIONAL) + + +def _IsMapField(field): + return (field.type == _FieldDescriptor.TYPE_MESSAGE and + field.message_type.has_options and + field.message_type.GetOptions().map_entry) + + +def _IsMessageMapField(field): + value_type = field.message_type.fields_by_name['value'] + return value_type.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE + + +def _AttachFieldHelpers(cls, field_descriptor): + is_repeated = (field_descriptor.label == _FieldDescriptor.LABEL_REPEATED) + is_packable = (is_repeated and + wire_format.IsTypePackable(field_descriptor.type)) + is_proto3 = field_descriptor.containing_type.syntax == 'proto3' + if not is_packable: + is_packed = False + elif field_descriptor.containing_type.syntax == 'proto2': + is_packed = (field_descriptor.has_options and + field_descriptor.GetOptions().packed) + else: + has_packed_false = (field_descriptor.has_options and + field_descriptor.GetOptions().HasField('packed') and + field_descriptor.GetOptions().packed == False) + is_packed = not has_packed_false + is_map_entry = _IsMapField(field_descriptor) + + if is_map_entry: + field_encoder = encoder.MapEncoder(field_descriptor) + sizer = encoder.MapSizer(field_descriptor, + _IsMessageMapField(field_descriptor)) + elif _IsMessageSetExtension(field_descriptor): + field_encoder = encoder.MessageSetItemEncoder(field_descriptor.number) + sizer = encoder.MessageSetItemSizer(field_descriptor.number) + else: + field_encoder = type_checkers.TYPE_TO_ENCODER[field_descriptor.type]( + field_descriptor.number, is_repeated, is_packed) + sizer = type_checkers.TYPE_TO_SIZER[field_descriptor.type]( + field_descriptor.number, is_repeated, is_packed) + + field_descriptor._encoder = field_encoder + field_descriptor._sizer = sizer + field_descriptor._default_constructor = _DefaultValueConstructorForField( + field_descriptor) + + def AddDecoder(wiretype, is_packed): + tag_bytes = encoder.TagBytes(field_descriptor.number, wiretype) + decode_type = 
field_descriptor.type + if (decode_type == _FieldDescriptor.TYPE_ENUM and + type_checkers.SupportsOpenEnums(field_descriptor)): + decode_type = _FieldDescriptor.TYPE_INT32 + + oneof_descriptor = None + clear_if_default = False + if field_descriptor.containing_oneof is not None: + oneof_descriptor = field_descriptor + elif (is_proto3 and not is_repeated and + field_descriptor.cpp_type != _FieldDescriptor.CPPTYPE_MESSAGE): + clear_if_default = True + + if is_map_entry: + is_message_map = _IsMessageMapField(field_descriptor) + + field_decoder = decoder.MapDecoder( + field_descriptor, _GetInitializeDefaultForMap(field_descriptor), + is_message_map) + elif decode_type == _FieldDescriptor.TYPE_STRING: + field_decoder = decoder.StringDecoder( + field_descriptor.number, is_repeated, is_packed, + field_descriptor, field_descriptor._default_constructor, + clear_if_default) + elif field_descriptor.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + field_decoder = type_checkers.TYPE_TO_DECODER[decode_type]( + field_descriptor.number, is_repeated, is_packed, + field_descriptor, field_descriptor._default_constructor) + else: + field_decoder = type_checkers.TYPE_TO_DECODER[decode_type]( + field_descriptor.number, is_repeated, is_packed, + # pylint: disable=protected-access + field_descriptor, field_descriptor._default_constructor, + clear_if_default) + + cls._decoders_by_tag[tag_bytes] = (field_decoder, oneof_descriptor) + + AddDecoder(type_checkers.FIELD_TYPE_TO_WIRE_TYPE[field_descriptor.type], + False) + + if is_repeated and wire_format.IsTypePackable(field_descriptor.type): + # To support wire compatibility of adding packed = true, add a decoder for + # packed values regardless of the field's options. + AddDecoder(wire_format.WIRETYPE_LENGTH_DELIMITED, True) + + +def _AddClassAttributesForNestedExtensions(descriptor, dictionary): + extensions = descriptor.extensions_by_name + for extension_name, extension_field in extensions.items(): + assert extension_name not in dictionary + dictionary[extension_name] = extension_field + + +def _AddEnumValues(descriptor, cls): + """Sets class-level attributes for all enum fields defined in this message. + + Also exporting a class-level object that can name enum values. + + Args: + descriptor: Descriptor object for this message type. + cls: Class we're constructing for this message type. + """ + for enum_type in descriptor.enum_types: + setattr(cls, enum_type.name, enum_type_wrapper.EnumTypeWrapper(enum_type)) + for enum_value in enum_type.values: + setattr(cls, enum_value.name, enum_value.number) + + +def _GetInitializeDefaultForMap(field): + if field.label != _FieldDescriptor.LABEL_REPEATED: + raise ValueError('map_entry set on non-repeated field %s' % ( + field.name)) + fields_by_name = field.message_type.fields_by_name + key_checker = type_checkers.GetTypeChecker(fields_by_name['key']) + + value_field = fields_by_name['value'] + if _IsMessageMapField(field): + def MakeMessageMapDefault(message): + return containers.MessageMap( + message._listener_for_children, value_field.message_type, key_checker, + field.message_type) + return MakeMessageMapDefault + else: + value_checker = type_checkers.GetTypeChecker(value_field) + def MakePrimitiveMapDefault(message): + return containers.ScalarMap( + message._listener_for_children, key_checker, value_checker, + field.message_type) + return MakePrimitiveMapDefault + +def _DefaultValueConstructorForField(field): + """Returns a function which returns a default value for a field. 
+ + Args: + field: FieldDescriptor object for this field. + + The returned function has one argument: + message: Message instance containing this field, or a weakref proxy + of same. + + That function in turn returns a default value for this field. The default + value may refer back to |message| via a weak reference. + """ + + if _IsMapField(field): + return _GetInitializeDefaultForMap(field) + + if field.label == _FieldDescriptor.LABEL_REPEATED: + if field.has_default_value and field.default_value != []: + raise ValueError('Repeated field default value not empty list: %s' % ( + field.default_value)) + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + # We can't look at _concrete_class yet since it might not have + # been set. (Depends on order in which we initialize the classes). + message_type = field.message_type + def MakeRepeatedMessageDefault(message): + return containers.RepeatedCompositeFieldContainer( + message._listener_for_children, field.message_type) + return MakeRepeatedMessageDefault + else: + type_checker = type_checkers.GetTypeChecker(field) + def MakeRepeatedScalarDefault(message): + return containers.RepeatedScalarFieldContainer( + message._listener_for_children, type_checker) + return MakeRepeatedScalarDefault + + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + # _concrete_class may not yet be initialized. + message_type = field.message_type + def MakeSubMessageDefault(message): + assert getattr(message_type, '_concrete_class', None), ( + 'Uninitialized concrete class found for field %r (message type %r)' + % (field.full_name, message_type.full_name)) + result = message_type._concrete_class() + result._SetListener( + _OneofListener(message, field) + if field.containing_oneof is not None + else message._listener_for_children) + return result + return MakeSubMessageDefault + + def MakeScalarDefault(message): + # TODO(protobuf-team): This may be broken since there may not be + # default_value. Combine with has_default_value somehow. + return field.default_value + return MakeScalarDefault + + +def _ReraiseTypeErrorWithFieldName(message_name, field_name): + """Re-raise the currently-handled TypeError with the field name added.""" + exc = sys.exc_info()[1] + if len(exc.args) == 1 and type(exc) is TypeError: + # simple TypeError; add field name to exception message + exc = TypeError('%s for field %s.%s' % (str(exc), message_name, field_name)) + + # re-raise possibly-amended exception with original traceback: + raise exc.with_traceback(sys.exc_info()[2]) + + +def _AddInitMethod(message_descriptor, cls): + """Adds an __init__ method to cls.""" + + def _GetIntegerEnumValue(enum_type, value): + """Convert a string or integer enum value to an integer. + + If the value is a string, it is converted to the enum value in + enum_type with the same name. If the value is not a string, it's + returned as-is. (No conversion or bounds-checking is done.) + """ + if isinstance(value, str): + try: + return enum_type.values_by_name[value].number + except KeyError: + raise ValueError('Enum type %s: unknown label "%s"' % ( + enum_type.full_name, value)) + return value + + def init(self, **kwargs): + self._cached_byte_size = 0 + self._cached_byte_size_dirty = len(kwargs) > 0 + self._fields = {} + # Contains a mapping from oneof field descriptors to the descriptor + # of the currently set field in that oneof field. + self._oneofs = {} + + # _unknown_fields is () when empty for efficiency, and will be turned into + # a list if fields are added. 
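+    # (Illustrative note, not part of the original source: the shared
+    # immutable () lets every freshly constructed message alias a single
+    # sentinel object; the first unknown field parsed replaces it with a
+    # real list, as _InternalParse does further below.)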
+ self._unknown_fields = () + # _unknown_field_set is None when empty for efficiency, and will be + # turned into UnknownFieldSet struct if fields are added. + self._unknown_field_set = None # pylint: disable=protected-access + self._is_present_in_parent = False + self._listener = message_listener_mod.NullMessageListener() + self._listener_for_children = _Listener(self) + for field_name, field_value in kwargs.items(): + field = _GetFieldByName(message_descriptor, field_name) + if field is None: + raise TypeError('%s() got an unexpected keyword argument "%s"' % + (message_descriptor.name, field_name)) + if field_value is None: + # field=None is the same as no field at all. + continue + if field.label == _FieldDescriptor.LABEL_REPEATED: + copy = field._default_constructor(self) + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: # Composite + if _IsMapField(field): + if _IsMessageMapField(field): + for key in field_value: + copy[key].MergeFrom(field_value[key]) + else: + copy.update(field_value) + else: + for val in field_value: + if isinstance(val, dict): + copy.add(**val) + else: + copy.add().MergeFrom(val) + else: # Scalar + if field.cpp_type == _FieldDescriptor.CPPTYPE_ENUM: + field_value = [_GetIntegerEnumValue(field.enum_type, val) + for val in field_value] + copy.extend(field_value) + self._fields[field] = copy + elif field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + copy = field._default_constructor(self) + new_val = field_value + if isinstance(field_value, dict): + new_val = field.message_type._concrete_class(**field_value) + try: + copy.MergeFrom(new_val) + except TypeError: + _ReraiseTypeErrorWithFieldName(message_descriptor.name, field_name) + self._fields[field] = copy + else: + if field.cpp_type == _FieldDescriptor.CPPTYPE_ENUM: + field_value = _GetIntegerEnumValue(field.enum_type, field_value) + try: + setattr(self, field_name, field_value) + except TypeError: + _ReraiseTypeErrorWithFieldName(message_descriptor.name, field_name) + + init.__module__ = None + init.__doc__ = None + cls.__init__ = init + + +def _GetFieldByName(message_descriptor, field_name): + """Returns a field descriptor by field name. + + Args: + message_descriptor: A Descriptor describing all fields in message. + field_name: The name of the field to retrieve. + Returns: + The field descriptor associated with the field name. + """ + try: + return message_descriptor.fields_by_name[field_name] + except KeyError: + raise ValueError('Protocol message %s has no "%s" field.' % + (message_descriptor.name, field_name)) + + +def _AddPropertiesForFields(descriptor, cls): + """Adds properties for all fields in this protocol message type.""" + for field in descriptor.fields: + _AddPropertiesForField(field, cls) + + if descriptor.is_extendable: + # _ExtensionDict is just an adaptor with no state so we allocate a new one + # every time it is accessed. + cls.Extensions = property(lambda self: _ExtensionDict(self)) + + +def _AddPropertiesForField(field, cls): + """Adds a public property for a protocol message field. + Clients can use this property to get and (in the case + of non-repeated scalar fields) directly set the value + of a protocol message field. + + Args: + field: A FieldDescriptor for this field. + cls: The class we're constructing. + """ + # Catch it if we add other types that we should + # handle specially here. 
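+  # (Illustrative note, not part of the original source: the assertion
+  # below pins the set of known CPPTYPE_* constants at 10; if upstream
+  # ever added an eleventh C++ type, MAX_CPPTYPE would grow and this
+  # would fail loudly instead of silently mishandling the new type.)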
+ assert _FieldDescriptor.MAX_CPPTYPE == 10 + + constant_name = field.name.upper() + '_FIELD_NUMBER' + setattr(cls, constant_name, field.number) + + if field.label == _FieldDescriptor.LABEL_REPEATED: + _AddPropertiesForRepeatedField(field, cls) + elif field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + _AddPropertiesForNonRepeatedCompositeField(field, cls) + else: + _AddPropertiesForNonRepeatedScalarField(field, cls) + + +class _FieldProperty(property): + __slots__ = ('DESCRIPTOR',) + + def __init__(self, descriptor, getter, setter, doc): + property.__init__(self, getter, setter, doc=doc) + self.DESCRIPTOR = descriptor + + +def _AddPropertiesForRepeatedField(field, cls): + """Adds a public property for a "repeated" protocol message field. Clients + can use this property to get the value of the field, which will be either a + RepeatedScalarFieldContainer or RepeatedCompositeFieldContainer (see + below). + + Note that when clients add values to these containers, we perform + type-checking in the case of repeated scalar fields, and we also set any + necessary "has" bits as a side-effect. + + Args: + field: A FieldDescriptor for this field. + cls: The class we're constructing. + """ + proto_field_name = field.name + property_name = _PropertyName(proto_field_name) + + def getter(self): + field_value = self._fields.get(field) + if field_value is None: + # Construct a new object to represent this field. + field_value = field._default_constructor(self) + + # Atomically check if another thread has preempted us and, if not, swap + # in the new object we just created. If someone has preempted us, we + # take that object and discard ours. + # WARNING: We are relying on setdefault() being atomic. This is true + # in CPython but we haven't investigated others. This warning appears + # in several other locations in this file. + field_value = self._fields.setdefault(field, field_value) + return field_value + getter.__module__ = None + getter.__doc__ = 'Getter for %s.' % proto_field_name + + # We define a setter just so we can throw an exception with a more + # helpful error message. + def setter(self, new_value): + raise AttributeError('Assignment not allowed to repeated field ' + '"%s" in protocol message object.' % proto_field_name) + + doc = 'Magic attribute generated for "%s" proto field.' % proto_field_name + setattr(cls, property_name, _FieldProperty(field, getter, setter, doc=doc)) + + +def _AddPropertiesForNonRepeatedScalarField(field, cls): + """Adds a public property for a nonrepeated, scalar protocol message field. + Clients can use this property to get and directly set the value of the field. + Note that when the client sets the value of a field by using this property, + all necessary "has" bits are set as a side-effect, and we also perform + type-checking. + + Args: + field: A FieldDescriptor for this field. + cls: The class we're constructing. + """ + proto_field_name = field.name + property_name = _PropertyName(proto_field_name) + type_checker = type_checkers.GetTypeChecker(field) + default_value = field.default_value + is_proto3 = field.containing_type.syntax == 'proto3' + + def getter(self): + # TODO(protobuf-team): This may be broken since there may not be + # default_value. Combine with has_default_value somehow. + return self._fields.get(field, default_value) + getter.__module__ = None + getter.__doc__ = 'Getter for %s.' 
% proto_field_name + + clear_when_set_to_default = is_proto3 and not field.containing_oneof + + def field_setter(self, new_value): + # pylint: disable=protected-access + # Testing the value for truthiness captures all of the proto3 defaults + # (0, 0.0, enum 0, and False). + try: + new_value = type_checker.CheckValue(new_value) + except TypeError as e: + raise TypeError( + 'Cannot set %s to %.1024r: %s' % (field.full_name, new_value, e)) + if clear_when_set_to_default and not new_value: + self._fields.pop(field, None) + else: + self._fields[field] = new_value + # Check _cached_byte_size_dirty inline to improve performance, since scalar + # setters are called frequently. + if not self._cached_byte_size_dirty: + self._Modified() + + if field.containing_oneof: + def setter(self, new_value): + field_setter(self, new_value) + self._UpdateOneofState(field) + else: + setter = field_setter + + setter.__module__ = None + setter.__doc__ = 'Setter for %s.' % proto_field_name + + # Add a property to encapsulate the getter/setter. + doc = 'Magic attribute generated for "%s" proto field.' % proto_field_name + setattr(cls, property_name, _FieldProperty(field, getter, setter, doc=doc)) + + +def _AddPropertiesForNonRepeatedCompositeField(field, cls): + """Adds a public property for a nonrepeated, composite protocol message field. + A composite field is a "group" or "message" field. + + Clients can use this property to get the value of the field, but cannot + assign to the property directly. + + Args: + field: A FieldDescriptor for this field. + cls: The class we're constructing. + """ + # TODO(robinson): Remove duplication with similar method + # for non-repeated scalars. + proto_field_name = field.name + property_name = _PropertyName(proto_field_name) + + def getter(self): + field_value = self._fields.get(field) + if field_value is None: + # Construct a new object to represent this field. + field_value = field._default_constructor(self) + + # Atomically check if another thread has preempted us and, if not, swap + # in the new object we just created. If someone has preempted us, we + # take that object and discard ours. + # WARNING: We are relying on setdefault() being atomic. This is true + # in CPython but we haven't investigated others. This warning appears + # in several other locations in this file. + field_value = self._fields.setdefault(field, field_value) + return field_value + getter.__module__ = None + getter.__doc__ = 'Getter for %s.' % proto_field_name + + # We define a setter just so we can throw an exception with a more + # helpful error message. + def setter(self, new_value): + raise AttributeError('Assignment not allowed to composite field ' + '"%s" in protocol message object.' % proto_field_name) + + # Add a property to encapsulate the getter. + doc = 'Magic attribute generated for "%s" proto field.' % proto_field_name + setattr(cls, property_name, _FieldProperty(field, getter, setter, doc=doc)) + + +def _AddPropertiesForExtensions(descriptor, cls): + """Adds properties for all fields in this protocol message type.""" + extensions = descriptor.extensions_by_name + for extension_name, extension_field in extensions.items(): + constant_name = extension_name.upper() + '_FIELD_NUMBER' + setattr(cls, constant_name, extension_field.number) + + # TODO(amauryfa): Migrate all users of these attributes to functions like + # pool.FindExtensionByNumber(descriptor). + if descriptor.file is not None: + # TODO(amauryfa): Use cls.MESSAGE_FACTORY.pool when available. 
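+  # (Illustrative note, not part of the original source: the two cached
+  # maps assigned below appear to be what the extension-lookup helper at
+  # the top of extension_dict's diff consults, e.g. resolving a field
+  # number seen on the wire to its extension descriptor without a
+  # descriptor-pool query on every access.)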
+ pool = descriptor.file.pool + cls._extensions_by_number = pool._extensions_by_number[descriptor] + cls._extensions_by_name = pool._extensions_by_name[descriptor] + +def _AddStaticMethods(cls): + # TODO(robinson): This probably needs to be thread-safe(?) + def RegisterExtension(extension_handle): + extension_handle.containing_type = cls.DESCRIPTOR + # TODO(amauryfa): Use cls.MESSAGE_FACTORY.pool when available. + # pylint: disable=protected-access + cls.DESCRIPTOR.file.pool._AddExtensionDescriptor(extension_handle) + _AttachFieldHelpers(cls, extension_handle) + cls.RegisterExtension = staticmethod(RegisterExtension) + + def FromString(s): + message = cls() + message.MergeFromString(s) + return message + cls.FromString = staticmethod(FromString) + + +def _IsPresent(item): + """Given a (FieldDescriptor, value) tuple from _fields, return true if the + value should be included in the list returned by ListFields().""" + + if item[0].label == _FieldDescriptor.LABEL_REPEATED: + return bool(item[1]) + elif item[0].cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + return item[1]._is_present_in_parent + else: + return True + + +def _AddListFieldsMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + def ListFields(self): + all_fields = [item for item in self._fields.items() if _IsPresent(item)] + all_fields.sort(key = lambda item: item[0].number) + return all_fields + + cls.ListFields = ListFields + +_PROTO3_ERROR_TEMPLATE = \ + ('Protocol message %s has no non-repeated submessage field "%s" ' + 'nor marked as optional') +_PROTO2_ERROR_TEMPLATE = 'Protocol message %s has no non-repeated field "%s"' + +def _AddHasFieldMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + is_proto3 = (message_descriptor.syntax == "proto3") + error_msg = _PROTO3_ERROR_TEMPLATE if is_proto3 else _PROTO2_ERROR_TEMPLATE + + hassable_fields = {} + for field in message_descriptor.fields: + if field.label == _FieldDescriptor.LABEL_REPEATED: + continue + # For proto3, only submessages and fields inside a oneof have presence. + if (is_proto3 and field.cpp_type != _FieldDescriptor.CPPTYPE_MESSAGE and + not field.containing_oneof): + continue + hassable_fields[field.name] = field + + # Has methods are supported for oneof descriptors. + for oneof in message_descriptor.oneofs: + hassable_fields[oneof.name] = oneof + + def HasField(self, field_name): + try: + field = hassable_fields[field_name] + except KeyError: + raise ValueError(error_msg % (message_descriptor.full_name, field_name)) + + if isinstance(field, descriptor_mod.OneofDescriptor): + try: + return HasField(self, self._oneofs[field].name) + except KeyError: + return False + else: + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + value = self._fields.get(field) + return value is not None and value._is_present_in_parent + else: + return field in self._fields + + cls.HasField = HasField + + +def _AddClearFieldMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + def ClearField(self, field_name): + try: + field = message_descriptor.fields_by_name[field_name] + except KeyError: + try: + field = message_descriptor.oneofs_by_name[field_name] + if field in self._oneofs: + field = self._oneofs[field] + else: + return + except KeyError: + raise ValueError('Protocol message %s has no "%s" field.' % + (message_descriptor.name, field_name)) + + if field in self._fields: + # To match the C++ implementation, we need to invalidate iterators + # for map fields when ClearField() happens. 
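+      # (Illustrative note, not part of the original source: e.g. for a
+      # hypothetical map field,
+      #   it = iter(msg.my_map_field)
+      #   msg.ClearField('my_map_field')
+      # leaves `it` invalidated, mirroring the C++ runtime, so later use
+      # of the stale iterator raises instead of yielding cleared entries.)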
+ if hasattr(self._fields[field], 'InvalidateIterators'): + self._fields[field].InvalidateIterators() + + # Note: If the field is a sub-message, its listener will still point + # at us. That's fine, because the worst than can happen is that it + # will call _Modified() and invalidate our byte size. Big deal. + del self._fields[field] + + if self._oneofs.get(field.containing_oneof, None) is field: + del self._oneofs[field.containing_oneof] + + # Always call _Modified() -- even if nothing was changed, this is + # a mutating method, and thus calling it should cause the field to become + # present in the parent message. + self._Modified() + + cls.ClearField = ClearField + + +def _AddClearExtensionMethod(cls): + """Helper for _AddMessageMethods().""" + def ClearExtension(self, extension_handle): + extension_dict._VerifyExtensionHandle(self, extension_handle) + + # Similar to ClearField(), above. + if extension_handle in self._fields: + del self._fields[extension_handle] + self._Modified() + cls.ClearExtension = ClearExtension + + +def _AddHasExtensionMethod(cls): + """Helper for _AddMessageMethods().""" + def HasExtension(self, extension_handle): + extension_dict._VerifyExtensionHandle(self, extension_handle) + if extension_handle.label == _FieldDescriptor.LABEL_REPEATED: + raise KeyError('"%s" is repeated.' % extension_handle.full_name) + + if extension_handle.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + value = self._fields.get(extension_handle) + return value is not None and value._is_present_in_parent + else: + return extension_handle in self._fields + cls.HasExtension = HasExtension + +def _InternalUnpackAny(msg): + """Unpacks Any message and returns the unpacked message. + + This internal method is different from public Any Unpack method which takes + the target message as argument. _InternalUnpackAny method does not have + target message type and need to find the message type in descriptor pool. + + Args: + msg: An Any message to be unpacked. + + Returns: + The unpacked message. + """ + # TODO(amauryfa): Don't use the factory of generated messages. + # To make Any work with custom factories, use the message factory of the + # parent message. + # pylint: disable=g-import-not-at-top + from google.protobuf import symbol_database + factory = symbol_database.Default() + + type_url = msg.type_url + + if not type_url: + return None + + # TODO(haberman): For now we just strip the hostname. Better logic will be + # required. + type_name = type_url.split('/')[-1] + descriptor = factory.pool.FindMessageTypeByName(type_name) + + if descriptor is None: + return None + + message_class = factory.GetPrototype(descriptor) + message = message_class() + + message.ParseFromString(msg.value) + return message + + +def _AddEqualsMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + def __eq__(self, other): + if (not isinstance(other, message_mod.Message) or + other.DESCRIPTOR != self.DESCRIPTOR): + return False + + if self is other: + return True + + if self.DESCRIPTOR.full_name == _AnyFullTypeName: + any_a = _InternalUnpackAny(self) + any_b = _InternalUnpackAny(other) + if any_a and any_b: + return any_a == any_b + + if not self.ListFields() == other.ListFields(): + return False + + # TODO(jieluo): Fix UnknownFieldSet to consider MessageSet extensions, + # then use it for the comparison. 
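+    # (Illustrative note, not part of the original source: the raw
+    # (tag_bytes, value_bytes) pairs are sorted before comparison, so two
+    # messages whose unknown fields arrived in a different wire order
+    # still compare equal.)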
+ unknown_fields = list(self._unknown_fields) + unknown_fields.sort() + other_unknown_fields = list(other._unknown_fields) + other_unknown_fields.sort() + return unknown_fields == other_unknown_fields + + cls.__eq__ = __eq__ + + +def _AddStrMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + def __str__(self): + return text_format.MessageToString(self) + cls.__str__ = __str__ + + +def _AddReprMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + def __repr__(self): + return text_format.MessageToString(self) + cls.__repr__ = __repr__ + + +def _AddUnicodeMethod(unused_message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + def __unicode__(self): + return text_format.MessageToString(self, as_utf8=True).decode('utf-8') + cls.__unicode__ = __unicode__ + + +def _BytesForNonRepeatedElement(value, field_number, field_type): + """Returns the number of bytes needed to serialize a non-repeated element. + The returned byte count includes space for tag information and any + other additional space associated with serializing value. + + Args: + value: Value we're serializing. + field_number: Field number of this value. (Since the field number + is stored as part of a varint-encoded tag, this has an impact + on the total bytes required to serialize the value). + field_type: The type of the field. One of the TYPE_* constants + within FieldDescriptor. + """ + try: + fn = type_checkers.TYPE_TO_BYTE_SIZE_FN[field_type] + return fn(field_number, value) + except KeyError: + raise message_mod.EncodeError('Unrecognized field type: %d' % field_type) + + +def _AddByteSizeMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + def ByteSize(self): + if not self._cached_byte_size_dirty: + return self._cached_byte_size + + size = 0 + descriptor = self.DESCRIPTOR + if descriptor.GetOptions().map_entry: + # Fields of map entry should always be serialized. + size = descriptor.fields_by_name['key']._sizer(self.key) + size += descriptor.fields_by_name['value']._sizer(self.value) + else: + for field_descriptor, field_value in self.ListFields(): + size += field_descriptor._sizer(field_value) + for tag_bytes, value_bytes in self._unknown_fields: + size += len(tag_bytes) + len(value_bytes) + + self._cached_byte_size = size + self._cached_byte_size_dirty = False + self._listener_for_children.dirty = False + return size + + cls.ByteSize = ByteSize + + +def _AddSerializeToStringMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + def SerializeToString(self, **kwargs): + # Check if the message has all of its required fields set. 
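+    # Illustrative example (not part of the original source), for a
+    # hypothetical proto2 message Person with `required string name = 1;`:
+    #   p = Person()
+    #   p.SerializeToString()        # raises EncodeError naming 'name'
+    #   p.name = 'Ada'
+    #   data = p.SerializeToString() # succeeds once required fields are set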
+ if not self.IsInitialized(): + raise message_mod.EncodeError( + 'Message %s is missing required fields: %s' % ( + self.DESCRIPTOR.full_name, ','.join(self.FindInitializationErrors()))) + return self.SerializePartialToString(**kwargs) + cls.SerializeToString = SerializeToString + + +def _AddSerializePartialToStringMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + + def SerializePartialToString(self, **kwargs): + out = BytesIO() + self._InternalSerialize(out.write, **kwargs) + return out.getvalue() + cls.SerializePartialToString = SerializePartialToString + + def InternalSerialize(self, write_bytes, deterministic=None): + if deterministic is None: + deterministic = ( + api_implementation.IsPythonDefaultSerializationDeterministic()) + else: + deterministic = bool(deterministic) + + descriptor = self.DESCRIPTOR + if descriptor.GetOptions().map_entry: + # Fields of map entry should always be serialized. + descriptor.fields_by_name['key']._encoder( + write_bytes, self.key, deterministic) + descriptor.fields_by_name['value']._encoder( + write_bytes, self.value, deterministic) + else: + for field_descriptor, field_value in self.ListFields(): + field_descriptor._encoder(write_bytes, field_value, deterministic) + for tag_bytes, value_bytes in self._unknown_fields: + write_bytes(tag_bytes) + write_bytes(value_bytes) + cls._InternalSerialize = InternalSerialize + + +def _AddMergeFromStringMethod(message_descriptor, cls): + """Helper for _AddMessageMethods().""" + def MergeFromString(self, serialized): + serialized = memoryview(serialized) + length = len(serialized) + try: + if self._InternalParse(serialized, 0, length) != length: + # The only reason _InternalParse would return early is if it + # encountered an end-group tag. + raise message_mod.DecodeError('Unexpected end-group tag.') + except (IndexError, TypeError): + # Now ord(buf[p:p+1]) == ord('') gets TypeError. + raise message_mod.DecodeError('Truncated message.') + except struct.error as e: + raise message_mod.DecodeError(e) + return length # Return this for legacy reasons. + cls.MergeFromString = MergeFromString + + local_ReadTag = decoder.ReadTag + local_SkipField = decoder.SkipField + decoders_by_tag = cls._decoders_by_tag + + def InternalParse(self, buffer, pos, end): + """Create a message from serialized bytes. + + Args: + self: Message, instance of the proto message object. + buffer: memoryview of the serialized data. + pos: int, position to start in the serialized data. + end: int, end position of the serialized data. + + Returns: + Message object. + """ + # Guard against internal misuse, since this function is called internally + # quite extensively, and its easy to accidentally pass bytes. 
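+    # (Illustrative note, not part of the original source: callers wrap
+    # the payload exactly once, as MergeFromString above does via
+    #   self._InternalParse(memoryview(serialized), 0, length)
+    # so slices taken while decoding are zero-copy views, not new bytes.)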
+ assert isinstance(buffer, memoryview) + self._Modified() + field_dict = self._fields + # pylint: disable=protected-access + unknown_field_set = self._unknown_field_set + while pos != end: + (tag_bytes, new_pos) = local_ReadTag(buffer, pos) + field_decoder, field_desc = decoders_by_tag.get(tag_bytes, (None, None)) + if field_decoder is None: + if not self._unknown_fields: # pylint: disable=protected-access + self._unknown_fields = [] # pylint: disable=protected-access + if unknown_field_set is None: + # pylint: disable=protected-access + self._unknown_field_set = containers.UnknownFieldSet() + # pylint: disable=protected-access + unknown_field_set = self._unknown_field_set + # pylint: disable=protected-access + (tag, _) = decoder._DecodeVarint(tag_bytes, 0) + field_number, wire_type = wire_format.UnpackTag(tag) + if field_number == 0: + raise message_mod.DecodeError('Field number 0 is illegal.') + # TODO(jieluo): remove old_pos. + old_pos = new_pos + (data, new_pos) = decoder._DecodeUnknownField( + buffer, new_pos, wire_type) # pylint: disable=protected-access + if new_pos == -1: + return pos + # pylint: disable=protected-access + unknown_field_set._add(field_number, wire_type, data) + # TODO(jieluo): remove _unknown_fields. + new_pos = local_SkipField(buffer, old_pos, end, tag_bytes) + if new_pos == -1: + return pos + self._unknown_fields.append( + (tag_bytes, buffer[old_pos:new_pos].tobytes())) + pos = new_pos + else: + pos = field_decoder(buffer, new_pos, end, self, field_dict) + if field_desc: + self._UpdateOneofState(field_desc) + return pos + cls._InternalParse = InternalParse + + +def _AddIsInitializedMethod(message_descriptor, cls): + """Adds the IsInitialized and FindInitializationError methods to the + protocol message class.""" + + required_fields = [field for field in message_descriptor.fields + if field.label == _FieldDescriptor.LABEL_REQUIRED] + + def IsInitialized(self, errors=None): + """Checks if all required fields of a message are set. + + Args: + errors: A list which, if provided, will be populated with the field + paths of all missing required fields. + + Returns: + True iff the specified message has all required fields set. + """ + + # Performance is critical so we avoid HasField() and ListFields(). + + for field in required_fields: + if (field not in self._fields or + (field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE and + not self._fields[field]._is_present_in_parent)): + if errors is not None: + errors.extend(self.FindInitializationErrors()) + return False + + for field, value in list(self._fields.items()): # dict can change size! + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + if field.label == _FieldDescriptor.LABEL_REPEATED: + if (field.message_type.has_options and + field.message_type.GetOptions().map_entry): + continue + for element in value: + if not element.IsInitialized(): + if errors is not None: + errors.extend(self.FindInitializationErrors()) + return False + elif value._is_present_in_parent and not value.IsInitialized(): + if errors is not None: + errors.extend(self.FindInitializationErrors()) + return False + + return True + + cls.IsInitialized = IsInitialized + + def FindInitializationErrors(self): + """Finds required fields which are not initialized. + + Returns: + A list of strings. Each string is a path to an uninitialized field from + the top-level message, e.g. "foo.bar[5].baz". 
+ """ + + errors = [] # simplify things + + for field in required_fields: + if not self.HasField(field.name): + errors.append(field.name) + + for field, value in self.ListFields(): + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + if field.is_extension: + name = '(%s)' % field.full_name + else: + name = field.name + + if _IsMapField(field): + if _IsMessageMapField(field): + for key in value: + element = value[key] + prefix = '%s[%s].' % (name, key) + sub_errors = element.FindInitializationErrors() + errors += [prefix + error for error in sub_errors] + else: + # ScalarMaps can't have any initialization errors. + pass + elif field.label == _FieldDescriptor.LABEL_REPEATED: + for i in range(len(value)): + element = value[i] + prefix = '%s[%d].' % (name, i) + sub_errors = element.FindInitializationErrors() + errors += [prefix + error for error in sub_errors] + else: + prefix = name + '.' + sub_errors = value.FindInitializationErrors() + errors += [prefix + error for error in sub_errors] + + return errors + + cls.FindInitializationErrors = FindInitializationErrors + + +def _FullyQualifiedClassName(klass): + module = klass.__module__ + name = getattr(klass, '__qualname__', klass.__name__) + if module in (None, 'builtins', '__builtin__'): + return name + return module + '.' + name + + +def _AddMergeFromMethod(cls): + LABEL_REPEATED = _FieldDescriptor.LABEL_REPEATED + CPPTYPE_MESSAGE = _FieldDescriptor.CPPTYPE_MESSAGE + + def MergeFrom(self, msg): + if not isinstance(msg, cls): + raise TypeError( + 'Parameter to MergeFrom() must be instance of same class: ' + 'expected %s got %s.' % (_FullyQualifiedClassName(cls), + _FullyQualifiedClassName(msg.__class__))) + + assert msg is not self + self._Modified() + + fields = self._fields + + for field, value in msg._fields.items(): + if field.label == LABEL_REPEATED: + field_value = fields.get(field) + if field_value is None: + # Construct a new object to represent this field. + field_value = field._default_constructor(self) + fields[field] = field_value + field_value.MergeFrom(value) + elif field.cpp_type == CPPTYPE_MESSAGE: + if value._is_present_in_parent: + field_value = fields.get(field) + if field_value is None: + # Construct a new object to represent this field. + field_value = field._default_constructor(self) + fields[field] = field_value + field_value.MergeFrom(value) + else: + self._fields[field] = value + if field.containing_oneof: + self._UpdateOneofState(field) + + if msg._unknown_fields: + if not self._unknown_fields: + self._unknown_fields = [] + self._unknown_fields.extend(msg._unknown_fields) + # pylint: disable=protected-access + if self._unknown_field_set is None: + self._unknown_field_set = containers.UnknownFieldSet() + self._unknown_field_set._extend(msg._unknown_field_set) + + cls.MergeFrom = MergeFrom + + +def _AddWhichOneofMethod(message_descriptor, cls): + def WhichOneof(self, oneof_name): + """Returns the name of the currently set field inside a oneof, or None.""" + try: + field = message_descriptor.oneofs_by_name[oneof_name] + except KeyError: + raise ValueError( + 'Protocol message has no oneof "%s" field.' % oneof_name) + + nested_field = self._oneofs.get(field, None) + if nested_field is not None and self.HasField(nested_field.name): + return nested_field.name + else: + return None + + cls.WhichOneof = WhichOneof + + +def _Clear(self): + # Clear fields. 
+ self._fields = {} + self._unknown_fields = () + # pylint: disable=protected-access + if self._unknown_field_set is not None: + self._unknown_field_set._clear() + self._unknown_field_set = None + + self._oneofs = {} + self._Modified() + + +def _UnknownFields(self): + if self._unknown_field_set is None: # pylint: disable=protected-access + # pylint: disable=protected-access + self._unknown_field_set = containers.UnknownFieldSet() + return self._unknown_field_set # pylint: disable=protected-access + + +def _DiscardUnknownFields(self): + self._unknown_fields = [] + self._unknown_field_set = None # pylint: disable=protected-access + for field, value in self.ListFields(): + if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: + if _IsMapField(field): + if _IsMessageMapField(field): + for key in value: + value[key].DiscardUnknownFields() + elif field.label == _FieldDescriptor.LABEL_REPEATED: + for sub_message in value: + sub_message.DiscardUnknownFields() + else: + value.DiscardUnknownFields() + + +def _SetListener(self, listener): + if listener is None: + self._listener = message_listener_mod.NullMessageListener() + else: + self._listener = listener + + +def _AddMessageMethods(message_descriptor, cls): + """Adds implementations of all Message methods to cls.""" + _AddListFieldsMethod(message_descriptor, cls) + _AddHasFieldMethod(message_descriptor, cls) + _AddClearFieldMethod(message_descriptor, cls) + if message_descriptor.is_extendable: + _AddClearExtensionMethod(cls) + _AddHasExtensionMethod(cls) + _AddEqualsMethod(message_descriptor, cls) + _AddStrMethod(message_descriptor, cls) + _AddReprMethod(message_descriptor, cls) + _AddUnicodeMethod(message_descriptor, cls) + _AddByteSizeMethod(message_descriptor, cls) + _AddSerializeToStringMethod(message_descriptor, cls) + _AddSerializePartialToStringMethod(message_descriptor, cls) + _AddMergeFromStringMethod(message_descriptor, cls) + _AddIsInitializedMethod(message_descriptor, cls) + _AddMergeFromMethod(cls) + _AddWhichOneofMethod(message_descriptor, cls) + # Adds methods which do not depend on cls. + cls.Clear = _Clear + cls.UnknownFields = _UnknownFields + cls.DiscardUnknownFields = _DiscardUnknownFields + cls._SetListener = _SetListener + + +def _AddPrivateHelperMethods(message_descriptor, cls): + """Adds implementation of private helper methods to cls.""" + + def Modified(self): + """Sets the _cached_byte_size_dirty bit to true, + and propagates this to our listener iff this was a state change. + """ + + # Note: Some callers check _cached_byte_size_dirty before calling + # _Modified() as an extra optimization. So, if this method is ever + # changed such that it does stuff even when _cached_byte_size_dirty is + # already true, the callers need to be updated. + if not self._cached_byte_size_dirty: + self._cached_byte_size_dirty = True + self._listener_for_children.dirty = True + self._is_present_in_parent = True + self._listener.Modified() + + def _UpdateOneofState(self, field): + """Sets field as the active field in its containing oneof. + + Will also delete currently active field in the oneof, if it is different + from the argument. Does not mark the message as modified. 
+ """ + other_field = self._oneofs.setdefault(field.containing_oneof, field) + if other_field is not field: + del self._fields[other_field] + self._oneofs[field.containing_oneof] = field + + cls._Modified = Modified + cls.SetInParent = Modified + cls._UpdateOneofState = _UpdateOneofState + + +class _Listener(object): + + """MessageListener implementation that a parent message registers with its + child message. + + In order to support semantics like: + + foo.bar.baz.qux = 23 + assert foo.HasField('bar') + + ...child objects must have back references to their parents. + This helper class is at the heart of this support. + """ + + def __init__(self, parent_message): + """Args: + parent_message: The message whose _Modified() method we should call when + we receive Modified() messages. + """ + # This listener establishes a back reference from a child (contained) object + # to its parent (containing) object. We make this a weak reference to avoid + # creating cyclic garbage when the client finishes with the 'parent' object + # in the tree. + if isinstance(parent_message, weakref.ProxyType): + self._parent_message_weakref = parent_message + else: + self._parent_message_weakref = weakref.proxy(parent_message) + + # As an optimization, we also indicate directly on the listener whether + # or not the parent message is dirty. This way we can avoid traversing + # up the tree in the common case. + self.dirty = False + + def Modified(self): + if self.dirty: + return + try: + # Propagate the signal to our parents iff this is the first field set. + self._parent_message_weakref._Modified() + except ReferenceError: + # We can get here if a client has kept a reference to a child object, + # and is now setting a field on it, but the child's parent has been + # garbage-collected. This is not an error. + pass + + +class _OneofListener(_Listener): + """Special listener implementation for setting composite oneof fields.""" + + def __init__(self, parent_message, field): + """Args: + parent_message: The message whose _Modified() method we should call when + we receive Modified() messages. + field: The descriptor of the field being set in the parent message. + """ + super(_OneofListener, self).__init__(parent_message) + self._field = field + + def Modified(self): + """Also updates the state of the containing oneof in the parent message.""" + try: + self._parent_message_weakref._UpdateOneofState(self._field) + super(_OneofListener, self).Modified() + except ReferenceError: + pass diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/type_checkers.py b/env/lib/python3.8/site-packages/google/protobuf/internal/type_checkers.py new file mode 100644 index 00000000..a53e71fe --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/internal/type_checkers.py @@ -0,0 +1,435 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. 
nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Provides type checking routines. + +This module defines type checking utilities in the forms of dictionaries: + +VALUE_CHECKERS: A dictionary of field types and a value validation object. +TYPE_TO_BYTE_SIZE_FN: A dictionary with field types and a size computing + function. +TYPE_TO_SERIALIZE_METHOD: A dictionary with field types and serialization + function. +FIELD_TYPE_TO_WIRE_TYPE: A dictionary with field typed and their + corresponding wire types. +TYPE_TO_DESERIALIZE_METHOD: A dictionary with field types and deserialization + function. +""" + +__author__ = 'robinson@google.com (Will Robinson)' + +import ctypes +import numbers + +from google.protobuf.internal import decoder +from google.protobuf.internal import encoder +from google.protobuf.internal import wire_format +from google.protobuf import descriptor + +_FieldDescriptor = descriptor.FieldDescriptor + + +def TruncateToFourByteFloat(original): + return ctypes.c_float(original).value + + +def ToShortestFloat(original): + """Returns the shortest float that has same value in wire.""" + # All 4 byte floats have between 6 and 9 significant digits, so we + # start with 6 as the lower bound. + # It has to be iterative because use '.9g' directly can not get rid + # of the noises for most values. For example if set a float_field=0.9 + # use '.9g' will print 0.899999976. + precision = 6 + rounded = float('{0:.{1}g}'.format(original, precision)) + while TruncateToFourByteFloat(rounded) != original: + precision += 1 + rounded = float('{0:.{1}g}'.format(original, precision)) + return rounded + + +def SupportsOpenEnums(field_descriptor): + return field_descriptor.containing_type.syntax == 'proto3' + + +def GetTypeChecker(field): + """Returns a type checker for a message field of the specified types. + + Args: + field: FieldDescriptor object for this field. + + Returns: + An instance of TypeChecker which can be used to verify the types + of values assigned to a field of the specified type. + """ + if (field.cpp_type == _FieldDescriptor.CPPTYPE_STRING and + field.type == _FieldDescriptor.TYPE_STRING): + return UnicodeValueChecker() + if field.cpp_type == _FieldDescriptor.CPPTYPE_ENUM: + if SupportsOpenEnums(field): + # When open enums are supported, any int32 can be assigned. + return _VALUE_CHECKERS[_FieldDescriptor.CPPTYPE_INT32] + else: + return EnumValueChecker(field.enum_type) + return _VALUE_CHECKERS[field.cpp_type] + + +# None of the typecheckers below make any attempt to guard against people +# subclassing builtin types and doing weird things. 
We're not trying to +# protect against malicious clients here, just people accidentally shooting +# themselves in the foot in obvious ways. +class TypeChecker(object): + + """Type checker used to catch type errors as early as possible + when the client is setting scalar fields in protocol messages. + """ + + def __init__(self, *acceptable_types): + self._acceptable_types = acceptable_types + + def CheckValue(self, proposed_value): + """Type check the provided value and return it. + + The returned value might have been normalized to another type. + """ + if not isinstance(proposed_value, self._acceptable_types): + message = ('%.1024r has type %s, but expected one of: %s' % + (proposed_value, type(proposed_value), self._acceptable_types)) + raise TypeError(message) + return proposed_value + + +class TypeCheckerWithDefault(TypeChecker): + + def __init__(self, default_value, *acceptable_types): + TypeChecker.__init__(self, *acceptable_types) + self._default_value = default_value + + def DefaultValue(self): + return self._default_value + + +class BoolValueChecker(object): + """Type checker used for bool fields.""" + + def CheckValue(self, proposed_value): + if not hasattr(proposed_value, '__index__') or ( + type(proposed_value).__module__ == 'numpy' and + type(proposed_value).__name__ == 'ndarray'): + message = ('%.1024r has type %s, but expected one of: %s' % + (proposed_value, type(proposed_value), (bool, int))) + raise TypeError(message) + return bool(proposed_value) + + def DefaultValue(self): + return False + + +# IntValueChecker and its subclasses perform integer type-checks +# and bounds-checks. +class IntValueChecker(object): + + """Checker used for integer fields. Performs type-check and range check.""" + + def CheckValue(self, proposed_value): + if not hasattr(proposed_value, '__index__') or ( + type(proposed_value).__module__ == 'numpy' and + type(proposed_value).__name__ == 'ndarray'): + message = ('%.1024r has type %s, but expected one of: %s' % + (proposed_value, type(proposed_value), (int,))) + raise TypeError(message) + + if not self._MIN <= int(proposed_value) <= self._MAX: + raise ValueError('Value out of range: %d' % proposed_value) + # We force all values to int to make alternate implementations where the + # distinction is more significant (e.g. the C++ implementation) simpler. + proposed_value = int(proposed_value) + return proposed_value + + def DefaultValue(self): + return 0 + + +class EnumValueChecker(object): + + """Checker used for enum fields. Performs type-check and range check.""" + + def __init__(self, enum_type): + self._enum_type = enum_type + + def CheckValue(self, proposed_value): + if not isinstance(proposed_value, numbers.Integral): + message = ('%.1024r has type %s, but expected one of: %s' % + (proposed_value, type(proposed_value), (int,))) + raise TypeError(message) + if int(proposed_value) not in self._enum_type.values_by_number: + raise ValueError('Unknown enum value: %d' % proposed_value) + return proposed_value + + def DefaultValue(self): + return self._enum_type.values[0].number + + +class UnicodeValueChecker(object): + + """Checker used for string fields. + + Always returns a unicode value, even if the input is of type str. + """ + + def CheckValue(self, proposed_value): + if not isinstance(proposed_value, (bytes, str)): + message = ('%.1024r has type %s, but expected one of: %s' % + (proposed_value, type(proposed_value), (bytes, str))) + raise TypeError(message) + + # If the value is of type 'bytes' make sure that it is valid UTF-8 data. 
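+    # For example, b'caf\xc3\xa9' decodes to u'café' and passes, while a
+    # byte string like b'\xff' is not valid UTF-8 and is rejected below.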
+ if isinstance(proposed_value, bytes): + try: + proposed_value = proposed_value.decode('utf-8') + except UnicodeDecodeError: + raise ValueError('%.1024r has type bytes, but isn\'t valid UTF-8 ' + 'encoding. Non-UTF-8 strings must be converted to ' + 'unicode objects before being added.' % + (proposed_value)) + else: + try: + proposed_value.encode('utf8') + except UnicodeEncodeError: + raise ValueError('%.1024r isn\'t a valid unicode string and ' + 'can\'t be encoded in UTF-8.'% + (proposed_value)) + + return proposed_value + + def DefaultValue(self): + return u"" + + +class Int32ValueChecker(IntValueChecker): + # We're sure to use ints instead of longs here since comparison may be more + # efficient. + _MIN = -2147483648 + _MAX = 2147483647 + + +class Uint32ValueChecker(IntValueChecker): + _MIN = 0 + _MAX = (1 << 32) - 1 + + +class Int64ValueChecker(IntValueChecker): + _MIN = -(1 << 63) + _MAX = (1 << 63) - 1 + + +class Uint64ValueChecker(IntValueChecker): + _MIN = 0 + _MAX = (1 << 64) - 1 + + +# The max 4 bytes float is about 3.4028234663852886e+38 +_FLOAT_MAX = float.fromhex('0x1.fffffep+127') +_FLOAT_MIN = -_FLOAT_MAX +_INF = float('inf') +_NEG_INF = float('-inf') + + +class DoubleValueChecker(object): + """Checker used for double fields. + + Performs type-check and range check. + """ + + def CheckValue(self, proposed_value): + """Check and convert proposed_value to float.""" + if (not hasattr(proposed_value, '__float__') and + not hasattr(proposed_value, '__index__')) or ( + type(proposed_value).__module__ == 'numpy' and + type(proposed_value).__name__ == 'ndarray'): + message = ('%.1024r has type %s, but expected one of: int, float' % + (proposed_value, type(proposed_value))) + raise TypeError(message) + return float(proposed_value) + + def DefaultValue(self): + return 0.0 + + +class FloatValueChecker(DoubleValueChecker): + """Checker used for float fields. + + Performs type-check and range check. + + Values exceeding a 32-bit float will be converted to inf/-inf. + """ + + def CheckValue(self, proposed_value): + """Check and convert proposed_value to float.""" + converted_value = super().CheckValue(proposed_value) + # This inf rounding matches the C++ proto SafeDoubleToFloat logic. + if converted_value > _FLOAT_MAX: + return _INF + if converted_value < _FLOAT_MIN: + return _NEG_INF + + return TruncateToFourByteFloat(converted_value) + +# Type-checkers for all scalar CPPTYPEs. +_VALUE_CHECKERS = { + _FieldDescriptor.CPPTYPE_INT32: Int32ValueChecker(), + _FieldDescriptor.CPPTYPE_INT64: Int64ValueChecker(), + _FieldDescriptor.CPPTYPE_UINT32: Uint32ValueChecker(), + _FieldDescriptor.CPPTYPE_UINT64: Uint64ValueChecker(), + _FieldDescriptor.CPPTYPE_DOUBLE: DoubleValueChecker(), + _FieldDescriptor.CPPTYPE_FLOAT: FloatValueChecker(), + _FieldDescriptor.CPPTYPE_BOOL: BoolValueChecker(), + _FieldDescriptor.CPPTYPE_STRING: TypeCheckerWithDefault(b'', bytes), +} + + +# Map from field type to a function F, such that F(field_num, value) +# gives the total byte size for a value of the given type. This +# byte size includes tag information and any other additional space +# associated with serializing "value". 
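+# For example, wire_format.Int64ByteSize(1, 300) is 3: one byte for the
+# varint-encoded tag of field 1 plus two bytes for the varint 300.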
+TYPE_TO_BYTE_SIZE_FN = { + _FieldDescriptor.TYPE_DOUBLE: wire_format.DoubleByteSize, + _FieldDescriptor.TYPE_FLOAT: wire_format.FloatByteSize, + _FieldDescriptor.TYPE_INT64: wire_format.Int64ByteSize, + _FieldDescriptor.TYPE_UINT64: wire_format.UInt64ByteSize, + _FieldDescriptor.TYPE_INT32: wire_format.Int32ByteSize, + _FieldDescriptor.TYPE_FIXED64: wire_format.Fixed64ByteSize, + _FieldDescriptor.TYPE_FIXED32: wire_format.Fixed32ByteSize, + _FieldDescriptor.TYPE_BOOL: wire_format.BoolByteSize, + _FieldDescriptor.TYPE_STRING: wire_format.StringByteSize, + _FieldDescriptor.TYPE_GROUP: wire_format.GroupByteSize, + _FieldDescriptor.TYPE_MESSAGE: wire_format.MessageByteSize, + _FieldDescriptor.TYPE_BYTES: wire_format.BytesByteSize, + _FieldDescriptor.TYPE_UINT32: wire_format.UInt32ByteSize, + _FieldDescriptor.TYPE_ENUM: wire_format.EnumByteSize, + _FieldDescriptor.TYPE_SFIXED32: wire_format.SFixed32ByteSize, + _FieldDescriptor.TYPE_SFIXED64: wire_format.SFixed64ByteSize, + _FieldDescriptor.TYPE_SINT32: wire_format.SInt32ByteSize, + _FieldDescriptor.TYPE_SINT64: wire_format.SInt64ByteSize + } + + +# Maps from field types to encoder constructors. +TYPE_TO_ENCODER = { + _FieldDescriptor.TYPE_DOUBLE: encoder.DoubleEncoder, + _FieldDescriptor.TYPE_FLOAT: encoder.FloatEncoder, + _FieldDescriptor.TYPE_INT64: encoder.Int64Encoder, + _FieldDescriptor.TYPE_UINT64: encoder.UInt64Encoder, + _FieldDescriptor.TYPE_INT32: encoder.Int32Encoder, + _FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Encoder, + _FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Encoder, + _FieldDescriptor.TYPE_BOOL: encoder.BoolEncoder, + _FieldDescriptor.TYPE_STRING: encoder.StringEncoder, + _FieldDescriptor.TYPE_GROUP: encoder.GroupEncoder, + _FieldDescriptor.TYPE_MESSAGE: encoder.MessageEncoder, + _FieldDescriptor.TYPE_BYTES: encoder.BytesEncoder, + _FieldDescriptor.TYPE_UINT32: encoder.UInt32Encoder, + _FieldDescriptor.TYPE_ENUM: encoder.EnumEncoder, + _FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Encoder, + _FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Encoder, + _FieldDescriptor.TYPE_SINT32: encoder.SInt32Encoder, + _FieldDescriptor.TYPE_SINT64: encoder.SInt64Encoder, + } + + +# Maps from field types to sizer constructors. +TYPE_TO_SIZER = { + _FieldDescriptor.TYPE_DOUBLE: encoder.DoubleSizer, + _FieldDescriptor.TYPE_FLOAT: encoder.FloatSizer, + _FieldDescriptor.TYPE_INT64: encoder.Int64Sizer, + _FieldDescriptor.TYPE_UINT64: encoder.UInt64Sizer, + _FieldDescriptor.TYPE_INT32: encoder.Int32Sizer, + _FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Sizer, + _FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Sizer, + _FieldDescriptor.TYPE_BOOL: encoder.BoolSizer, + _FieldDescriptor.TYPE_STRING: encoder.StringSizer, + _FieldDescriptor.TYPE_GROUP: encoder.GroupSizer, + _FieldDescriptor.TYPE_MESSAGE: encoder.MessageSizer, + _FieldDescriptor.TYPE_BYTES: encoder.BytesSizer, + _FieldDescriptor.TYPE_UINT32: encoder.UInt32Sizer, + _FieldDescriptor.TYPE_ENUM: encoder.EnumSizer, + _FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Sizer, + _FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Sizer, + _FieldDescriptor.TYPE_SINT32: encoder.SInt32Sizer, + _FieldDescriptor.TYPE_SINT64: encoder.SInt64Sizer, + } + + +# Maps from field type to a decoder constructor. 
+TYPE_TO_DECODER = { + _FieldDescriptor.TYPE_DOUBLE: decoder.DoubleDecoder, + _FieldDescriptor.TYPE_FLOAT: decoder.FloatDecoder, + _FieldDescriptor.TYPE_INT64: decoder.Int64Decoder, + _FieldDescriptor.TYPE_UINT64: decoder.UInt64Decoder, + _FieldDescriptor.TYPE_INT32: decoder.Int32Decoder, + _FieldDescriptor.TYPE_FIXED64: decoder.Fixed64Decoder, + _FieldDescriptor.TYPE_FIXED32: decoder.Fixed32Decoder, + _FieldDescriptor.TYPE_BOOL: decoder.BoolDecoder, + _FieldDescriptor.TYPE_STRING: decoder.StringDecoder, + _FieldDescriptor.TYPE_GROUP: decoder.GroupDecoder, + _FieldDescriptor.TYPE_MESSAGE: decoder.MessageDecoder, + _FieldDescriptor.TYPE_BYTES: decoder.BytesDecoder, + _FieldDescriptor.TYPE_UINT32: decoder.UInt32Decoder, + _FieldDescriptor.TYPE_ENUM: decoder.EnumDecoder, + _FieldDescriptor.TYPE_SFIXED32: decoder.SFixed32Decoder, + _FieldDescriptor.TYPE_SFIXED64: decoder.SFixed64Decoder, + _FieldDescriptor.TYPE_SINT32: decoder.SInt32Decoder, + _FieldDescriptor.TYPE_SINT64: decoder.SInt64Decoder, + } + +# Maps from field type to expected wiretype. +FIELD_TYPE_TO_WIRE_TYPE = { + _FieldDescriptor.TYPE_DOUBLE: wire_format.WIRETYPE_FIXED64, + _FieldDescriptor.TYPE_FLOAT: wire_format.WIRETYPE_FIXED32, + _FieldDescriptor.TYPE_INT64: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_UINT64: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_INT32: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_FIXED64: wire_format.WIRETYPE_FIXED64, + _FieldDescriptor.TYPE_FIXED32: wire_format.WIRETYPE_FIXED32, + _FieldDescriptor.TYPE_BOOL: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_STRING: + wire_format.WIRETYPE_LENGTH_DELIMITED, + _FieldDescriptor.TYPE_GROUP: wire_format.WIRETYPE_START_GROUP, + _FieldDescriptor.TYPE_MESSAGE: + wire_format.WIRETYPE_LENGTH_DELIMITED, + _FieldDescriptor.TYPE_BYTES: + wire_format.WIRETYPE_LENGTH_DELIMITED, + _FieldDescriptor.TYPE_UINT32: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_ENUM: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_SFIXED32: wire_format.WIRETYPE_FIXED32, + _FieldDescriptor.TYPE_SFIXED64: wire_format.WIRETYPE_FIXED64, + _FieldDescriptor.TYPE_SINT32: wire_format.WIRETYPE_VARINT, + _FieldDescriptor.TYPE_SINT64: wire_format.WIRETYPE_VARINT, + } diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/well_known_types.py b/env/lib/python3.8/site-packages/google/protobuf/internal/well_known_types.py new file mode 100644 index 00000000..b581ab75 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/internal/well_known_types.py @@ -0,0 +1,878 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. 
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Contains well-known classes.
+
+This file defines well-known classes which need extra maintenance, including:
+  - Any
+  - Duration
+  - FieldMask
+  - Struct
+  - Timestamp
+"""
+
+__author__ = 'jieluo@google.com (Jie Luo)'
+
+import calendar
+import collections.abc
+import datetime
+
+from google.protobuf.descriptor import FieldDescriptor
+
+_TIMESTAMPFOMAT = '%Y-%m-%dT%H:%M:%S'
+_NANOS_PER_SECOND = 1000000000
+_NANOS_PER_MILLISECOND = 1000000
+_NANOS_PER_MICROSECOND = 1000
+_MILLIS_PER_SECOND = 1000
+_MICROS_PER_SECOND = 1000000
+_SECONDS_PER_DAY = 24 * 3600
+_DURATION_SECONDS_MAX = 315576000000
+
+
+class Any(object):
+  """Class for Any Message type."""
+
+  __slots__ = ()
+
+  def Pack(self, msg, type_url_prefix='type.googleapis.com/',
+           deterministic=None):
+    """Packs the specified message into current Any message."""
+    if len(type_url_prefix) < 1 or type_url_prefix[-1] != '/':
+      self.type_url = '%s/%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
+    else:
+      self.type_url = '%s%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
+    self.value = msg.SerializeToString(deterministic=deterministic)
+
+  def Unpack(self, msg):
+    """Unpacks the current Any message into specified message."""
+    descriptor = msg.DESCRIPTOR
+    if not self.Is(descriptor):
+      return False
+    msg.ParseFromString(self.value)
+    return True
+
+  def TypeName(self):
+    """Returns the protobuf type name of the inner message."""
+    # Only last part is to be used: b/25630112
+    return self.type_url.split('/')[-1]
+
+  def Is(self, descriptor):
+    """Checks if this Any represents the given protobuf type."""
+    return '/' in self.type_url and self.TypeName() == descriptor.full_name
+
+
+_EPOCH_DATETIME_NAIVE = datetime.datetime.utcfromtimestamp(0)
+_EPOCH_DATETIME_AWARE = datetime.datetime.fromtimestamp(
+    0, tz=datetime.timezone.utc)
+
+
+class Timestamp(object):
+  """Class for Timestamp message type."""
+
+  __slots__ = ()
+
+  def ToJsonString(self):
+    """Converts Timestamp to RFC 3339 date string format.
+
+    Returns:
+      A string converted from timestamp. The string is always Z-normalized
+      and uses 3, 6 or 9 fractional digits as required to represent the
+      exact time. Example of the return format: '1972-01-01T10:00:20.021Z'
+    """
+    nanos = self.nanos % _NANOS_PER_SECOND
+    total_sec = self.seconds + (self.nanos - nanos) // _NANOS_PER_SECOND
+    seconds = total_sec % _SECONDS_PER_DAY
+    days = (total_sec - seconds) // _SECONDS_PER_DAY
+    dt = datetime.datetime(1970, 1, 1) + datetime.timedelta(days, seconds)
+
+    result = dt.isoformat()
+    if (nanos % 1e9) == 0:
+      # If there are 0 fractional digits, the fractional
+      # point '.' should be omitted when serializing.
+      return result + 'Z'
+    if (nanos % 1e6) == 0:
+      # Serialize 3 fractional digits.
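+      # For example, nanos=21000000 yields '.021Z' (21000000 / 1e6 == 21).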
+ return result + '.%03dZ' % (nanos / 1e6) + if (nanos % 1e3) == 0: + # Serialize 6 fractional digits. + return result + '.%06dZ' % (nanos / 1e3) + # Serialize 9 fractional digits. + return result + '.%09dZ' % nanos + + def FromJsonString(self, value): + """Parse a RFC 3339 date string format to Timestamp. + + Args: + value: A date string. Any fractional digits (or none) and any offset are + accepted as long as they fit into nano-seconds precision. + Example of accepted format: '1972-01-01T10:00:20.021-05:00' + + Raises: + ValueError: On parsing problems. + """ + if not isinstance(value, str): + raise ValueError('Timestamp JSON value not a string: {!r}'.format(value)) + timezone_offset = value.find('Z') + if timezone_offset == -1: + timezone_offset = value.find('+') + if timezone_offset == -1: + timezone_offset = value.rfind('-') + if timezone_offset == -1: + raise ValueError( + 'Failed to parse timestamp: missing valid timezone offset.') + time_value = value[0:timezone_offset] + # Parse datetime and nanos. + point_position = time_value.find('.') + if point_position == -1: + second_value = time_value + nano_value = '' + else: + second_value = time_value[:point_position] + nano_value = time_value[point_position + 1:] + if 't' in second_value: + raise ValueError( + 'time data \'{0}\' does not match format \'%Y-%m-%dT%H:%M:%S\', ' + 'lowercase \'t\' is not accepted'.format(second_value)) + date_object = datetime.datetime.strptime(second_value, _TIMESTAMPFOMAT) + td = date_object - datetime.datetime(1970, 1, 1) + seconds = td.seconds + td.days * _SECONDS_PER_DAY + if len(nano_value) > 9: + raise ValueError( + 'Failed to parse Timestamp: nanos {0} more than ' + '9 fractional digits.'.format(nano_value)) + if nano_value: + nanos = round(float('0.' + nano_value) * 1e9) + else: + nanos = 0 + # Parse timezone offsets. 
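+    # For example, for '1972-01-01T10:00:20.021-05:00' the offset part is
+    # '-05:00', so 5 * 3600 seconds are added back to obtain UTC.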
+    if value[timezone_offset] == 'Z':
+      if len(value) != timezone_offset + 1:
+        raise ValueError('Failed to parse timestamp: invalid trailing'
+                         ' data {0}.'.format(value))
+    else:
+      timezone = value[timezone_offset:]
+      pos = timezone.find(':')
+      if pos == -1:
+        raise ValueError(
+            'Invalid timezone offset value: {0}.'.format(timezone))
+      if timezone[0] == '+':
+        seconds -= (int(timezone[1:pos])*60+int(timezone[pos+1:]))*60
+      else:
+        seconds += (int(timezone[1:pos])*60+int(timezone[pos+1:]))*60
+    # Set seconds and nanos.
+    self.seconds = int(seconds)
+    self.nanos = int(nanos)
+
+  def GetCurrentTime(self):
+    """Sets this Timestamp to the current UTC time."""
+    self.FromDatetime(datetime.datetime.utcnow())
+
+  def ToNanoseconds(self):
+    """Converts Timestamp to nanoseconds since epoch."""
+    return self.seconds * _NANOS_PER_SECOND + self.nanos
+
+  def ToMicroseconds(self):
+    """Converts Timestamp to microseconds since epoch."""
+    return (self.seconds * _MICROS_PER_SECOND +
+            self.nanos // _NANOS_PER_MICROSECOND)
+
+  def ToMilliseconds(self):
+    """Converts Timestamp to milliseconds since epoch."""
+    return (self.seconds * _MILLIS_PER_SECOND +
+            self.nanos // _NANOS_PER_MILLISECOND)
+
+  def ToSeconds(self):
+    """Converts Timestamp to seconds since epoch."""
+    return self.seconds
+
+  def FromNanoseconds(self, nanos):
+    """Converts nanoseconds since epoch to Timestamp."""
+    self.seconds = nanos // _NANOS_PER_SECOND
+    self.nanos = nanos % _NANOS_PER_SECOND
+
+  def FromMicroseconds(self, micros):
+    """Converts microseconds since epoch to Timestamp."""
+    self.seconds = micros // _MICROS_PER_SECOND
+    self.nanos = (micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND
+
+  def FromMilliseconds(self, millis):
+    """Converts milliseconds since epoch to Timestamp."""
+    self.seconds = millis // _MILLIS_PER_SECOND
+    self.nanos = (millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND
+
+  def FromSeconds(self, seconds):
+    """Converts seconds since epoch to Timestamp."""
+    self.seconds = seconds
+    self.nanos = 0
+
+  def ToDatetime(self, tzinfo=None):
+    """Converts Timestamp to a datetime.
+
+    Args:
+      tzinfo: A datetime.tzinfo subclass; defaults to None.
+
+    Returns:
+      If tzinfo is None, returns a timezone-naive UTC datetime (with no
+      timezone information, i.e. not aware that it's UTC).
+
+      Otherwise, returns a timezone-aware datetime in the input timezone.
+    """
+    delta = datetime.timedelta(
+        seconds=self.seconds,
+        microseconds=_RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND))
+    if tzinfo is None:
+      return _EPOCH_DATETIME_NAIVE + delta
+    else:
+      return _EPOCH_DATETIME_AWARE.astimezone(tzinfo) + delta
+
+  def FromDatetime(self, dt):
+    """Converts datetime to Timestamp.
+
+    Args:
+      dt: A datetime. If it's timezone-naive, it's assumed to be in UTC.
+    """
+    # Using this guide: http://wiki.python.org/moin/WorkingWithTime
+    # And this conversion guide: http://docs.python.org/library/time.html
+
+    # Turn the date parameter into a tuple (struct_time) that can then be
+    # manipulated into a long value of seconds. During the conversion from
+    # struct_time to long, the source date is in UTC, and so it follows
+    # that the correct transformation is calendar.timegm().
+    self.seconds = calendar.timegm(dt.utctimetuple())
+    self.nanos = dt.microsecond * _NANOS_PER_MICROSECOND
+
+
+class Duration(object):
+  """Class for Duration message type."""
+
+  __slots__ = ()
+
+  def ToJsonString(self):
+    """Converts Duration to string format.
+
+    Returns:
+      A string converted from self. The string format will contain
+      3, 6, or 9 fractional digits depending on the precision required to
+      represent the exact Duration value. For example: "1s", "1.010s",
+      "1.000000100s", "-3.100s"
+    """
+    _CheckDurationValid(self.seconds, self.nanos)
+    if self.seconds < 0 or self.nanos < 0:
+      result = '-'
+      seconds = - self.seconds + int((0 - self.nanos) // 1e9)
+      nanos = (0 - self.nanos) % 1e9
+    else:
+      result = ''
+      seconds = self.seconds + int(self.nanos // 1e9)
+      nanos = self.nanos % 1e9
+    result += '%d' % seconds
+    if (nanos % 1e9) == 0:
+      # If there are 0 fractional digits, the fractional
+      # point '.' should be omitted when serializing.
+      return result + 's'
+    if (nanos % 1e6) == 0:
+      # Serialize 3 fractional digits.
+      return result + '.%03ds' % (nanos / 1e6)
+    if (nanos % 1e3) == 0:
+      # Serialize 6 fractional digits.
+      return result + '.%06ds' % (nanos / 1e3)
+    # Serialize 9 fractional digits.
+    return result + '.%09ds' % nanos
+
+  def FromJsonString(self, value):
+    """Converts a string to Duration.
+
+    Args:
+      value: A string to be converted. The string must end with 's'. Any
+        fractional digits (or none) are accepted as long as they fit into
+        precision. For example: "1s", "1.01s", "1.0000001s", "-3.100s"
+
+    Raises:
+      ValueError: On parsing problems.
+    """
+    if not isinstance(value, str):
+      raise ValueError('Duration JSON value not a string: {!r}'.format(value))
+    if len(value) < 1 or value[-1] != 's':
+      raise ValueError(
+          'Duration must end with letter "s": {0}.'.format(value))
+    try:
+      pos = value.find('.')
+      if pos == -1:
+        seconds = int(value[:-1])
+        nanos = 0
+      else:
+        seconds = int(value[:pos])
+        if value[0] == '-':
+          nanos = int(round(float('-0{0}'.format(value[pos: -1])) * 1e9))
+        else:
+          nanos = int(round(float('0{0}'.format(value[pos: -1])) * 1e9))
+      _CheckDurationValid(seconds, nanos)
+      self.seconds = seconds
+      self.nanos = nanos
+    except ValueError as e:
+      raise ValueError(
+          'Couldn\'t parse duration: {0} : {1}.'.format(value, e))
+
+  def ToNanoseconds(self):
+    """Converts a Duration to nanoseconds."""
+    return self.seconds * _NANOS_PER_SECOND + self.nanos
+
+  def ToMicroseconds(self):
+    """Converts a Duration to microseconds."""
+    micros = _RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND)
+    return self.seconds * _MICROS_PER_SECOND + micros
+
+  def ToMilliseconds(self):
+    """Converts a Duration to milliseconds."""
+    millis = _RoundTowardZero(self.nanos, _NANOS_PER_MILLISECOND)
+    return self.seconds * _MILLIS_PER_SECOND + millis
+
+  def ToSeconds(self):
+    """Converts a Duration to seconds."""
+    return self.seconds
+
+  def FromNanoseconds(self, nanos):
+    """Converts nanoseconds to Duration."""
+    self._NormalizeDuration(nanos // _NANOS_PER_SECOND,
+                            nanos % _NANOS_PER_SECOND)
+
+  def FromMicroseconds(self, micros):
+    """Converts microseconds to Duration."""
+    self._NormalizeDuration(
+        micros // _MICROS_PER_SECOND,
+        (micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND)
+
+  def FromMilliseconds(self, millis):
+    """Converts milliseconds to Duration."""
+    self._NormalizeDuration(
+        millis // _MILLIS_PER_SECOND,
+        (millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND)
+
+  def FromSeconds(self, seconds):
+    """Converts seconds to Duration."""
+    self.seconds = seconds
+    self.nanos = 0
+
+  def ToTimedelta(self):
+    """Converts Duration to timedelta."""
+    return datetime.timedelta(
+        seconds=self.seconds, microseconds=_RoundTowardZero(
+            self.nanos, _NANOS_PER_MICROSECOND))
+
+  def FromTimedelta(self, td):
+    """Converts timedelta to Duration."""
+
self._NormalizeDuration(td.seconds + td.days * _SECONDS_PER_DAY, + td.microseconds * _NANOS_PER_MICROSECOND) + + def _NormalizeDuration(self, seconds, nanos): + """Set Duration by seconds and nanos.""" + # Force nanos to be negative if the duration is negative. + if seconds < 0 and nanos > 0: + seconds += 1 + nanos -= _NANOS_PER_SECOND + self.seconds = seconds + self.nanos = nanos + + +def _CheckDurationValid(seconds, nanos): + if seconds < -_DURATION_SECONDS_MAX or seconds > _DURATION_SECONDS_MAX: + raise ValueError( + 'Duration is not valid: Seconds {0} must be in range ' + '[-315576000000, 315576000000].'.format(seconds)) + if nanos <= -_NANOS_PER_SECOND or nanos >= _NANOS_PER_SECOND: + raise ValueError( + 'Duration is not valid: Nanos {0} must be in range ' + '[-999999999, 999999999].'.format(nanos)) + if (nanos < 0 and seconds > 0) or (nanos > 0 and seconds < 0): + raise ValueError( + 'Duration is not valid: Sign mismatch.') + + +def _RoundTowardZero(value, divider): + """Truncates the remainder part after division.""" + # For some languages, the sign of the remainder is implementation + # dependent if any of the operands is negative. Here we enforce + # "rounded toward zero" semantics. For example, for (-5) / 2 an + # implementation may give -3 as the result with the remainder being + # 1. This function ensures we always return -2 (closer to zero). + result = value // divider + remainder = value % divider + if result < 0 and remainder > 0: + return result + 1 + else: + return result + + +class FieldMask(object): + """Class for FieldMask message type.""" + + __slots__ = () + + def ToJsonString(self): + """Converts FieldMask to string according to proto3 JSON spec.""" + camelcase_paths = [] + for path in self.paths: + camelcase_paths.append(_SnakeCaseToCamelCase(path)) + return ','.join(camelcase_paths) + + def FromJsonString(self, value): + """Converts string to FieldMask according to proto3 JSON spec.""" + if not isinstance(value, str): + raise ValueError('FieldMask JSON value not a string: {!r}'.format(value)) + self.Clear() + if value: + for path in value.split(','): + self.paths.append(_CamelCaseToSnakeCase(path)) + + def IsValidForDescriptor(self, message_descriptor): + """Checks whether the FieldMask is valid for Message Descriptor.""" + for path in self.paths: + if not _IsValidPath(message_descriptor, path): + return False + return True + + def AllFieldsFromDescriptor(self, message_descriptor): + """Gets all direct fields of Message Descriptor to FieldMask.""" + self.Clear() + for field in message_descriptor.fields: + self.paths.append(field.name) + + def CanonicalFormFromMask(self, mask): + """Converts a FieldMask to the canonical form. + + Removes paths that are covered by another path. For example, + "foo.bar" is covered by "foo" and will be removed if "foo" + is also in the FieldMask. Then sorts all paths in alphabetical order. + + Args: + mask: The original FieldMask to be converted. 
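+
+    For example, a mask with paths ['foo.bar', 'baz', 'foo'] comes out
+    as ['baz', 'foo']: 'foo.bar' is dropped and the rest are sorted.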
+ """ + tree = _FieldMaskTree(mask) + tree.ToFieldMask(self) + + def Union(self, mask1, mask2): + """Merges mask1 and mask2 into this FieldMask.""" + _CheckFieldMaskMessage(mask1) + _CheckFieldMaskMessage(mask2) + tree = _FieldMaskTree(mask1) + tree.MergeFromFieldMask(mask2) + tree.ToFieldMask(self) + + def Intersect(self, mask1, mask2): + """Intersects mask1 and mask2 into this FieldMask.""" + _CheckFieldMaskMessage(mask1) + _CheckFieldMaskMessage(mask2) + tree = _FieldMaskTree(mask1) + intersection = _FieldMaskTree() + for path in mask2.paths: + tree.IntersectPath(path, intersection) + intersection.ToFieldMask(self) + + def MergeMessage( + self, source, destination, + replace_message_field=False, replace_repeated_field=False): + """Merges fields specified in FieldMask from source to destination. + + Args: + source: Source message. + destination: The destination message to be merged into. + replace_message_field: Replace message field if True. Merge message + field if False. + replace_repeated_field: Replace repeated field if True. Append + elements of repeated field if False. + """ + tree = _FieldMaskTree(self) + tree.MergeMessage( + source, destination, replace_message_field, replace_repeated_field) + + +def _IsValidPath(message_descriptor, path): + """Checks whether the path is valid for Message Descriptor.""" + parts = path.split('.') + last = parts.pop() + for name in parts: + field = message_descriptor.fields_by_name.get(name) + if (field is None or + field.label == FieldDescriptor.LABEL_REPEATED or + field.type != FieldDescriptor.TYPE_MESSAGE): + return False + message_descriptor = field.message_type + return last in message_descriptor.fields_by_name + + +def _CheckFieldMaskMessage(message): + """Raises ValueError if message is not a FieldMask.""" + message_descriptor = message.DESCRIPTOR + if (message_descriptor.name != 'FieldMask' or + message_descriptor.file.name != 'google/protobuf/field_mask.proto'): + raise ValueError('Message {0} is not a FieldMask.'.format( + message_descriptor.full_name)) + + +def _SnakeCaseToCamelCase(path_name): + """Converts a path name from snake_case to camelCase.""" + result = [] + after_underscore = False + for c in path_name: + if c.isupper(): + raise ValueError( + 'Fail to print FieldMask to Json string: Path name ' + '{0} must not contain uppercase letters.'.format(path_name)) + if after_underscore: + if c.islower(): + result.append(c.upper()) + after_underscore = False + else: + raise ValueError( + 'Fail to print FieldMask to Json string: The ' + 'character after a "_" must be a lowercase letter ' + 'in path name {0}.'.format(path_name)) + elif c == '_': + after_underscore = True + else: + result += c + + if after_underscore: + raise ValueError('Fail to print FieldMask to Json string: Trailing "_" ' + 'in path name {0}.'.format(path_name)) + return ''.join(result) + + +def _CamelCaseToSnakeCase(path_name): + """Converts a field name from camelCase to snake_case.""" + result = [] + for c in path_name: + if c == '_': + raise ValueError('Fail to parse FieldMask: Path name ' + '{0} must not contain "_"s.'.format(path_name)) + if c.isupper(): + result += '_' + result += c.lower() + else: + result += c + return ''.join(result) + + +class _FieldMaskTree(object): + """Represents a FieldMask in a tree structure. + + For example, given a FieldMask "foo.bar,foo.baz,bar.baz", + the FieldMaskTree will be: + [_root] -+- foo -+- bar + | | + | +- baz + | + +- bar --- baz + In the tree, each leaf node represents a field path. 
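+
+  Adding the path 'foo' to the tree above would collapse the 'foo' subtree
+  to a single leaf, since 'foo' covers both 'foo.bar' and 'foo.baz'.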
+ """ + + __slots__ = ('_root',) + + def __init__(self, field_mask=None): + """Initializes the tree by FieldMask.""" + self._root = {} + if field_mask: + self.MergeFromFieldMask(field_mask) + + def MergeFromFieldMask(self, field_mask): + """Merges a FieldMask to the tree.""" + for path in field_mask.paths: + self.AddPath(path) + + def AddPath(self, path): + """Adds a field path into the tree. + + If the field path to add is a sub-path of an existing field path + in the tree (i.e., a leaf node), it means the tree already matches + the given path so nothing will be added to the tree. If the path + matches an existing non-leaf node in the tree, that non-leaf node + will be turned into a leaf node with all its children removed because + the path matches all the node's children. Otherwise, a new path will + be added. + + Args: + path: The field path to add. + """ + node = self._root + for name in path.split('.'): + if name not in node: + node[name] = {} + elif not node[name]: + # Pre-existing empty node implies we already have this entire tree. + return + node = node[name] + # Remove any sub-trees we might have had. + node.clear() + + def ToFieldMask(self, field_mask): + """Converts the tree to a FieldMask.""" + field_mask.Clear() + _AddFieldPaths(self._root, '', field_mask) + + def IntersectPath(self, path, intersection): + """Calculates the intersection part of a field path with this tree. + + Args: + path: The field path to calculates. + intersection: The out tree to record the intersection part. + """ + node = self._root + for name in path.split('.'): + if name not in node: + return + elif not node[name]: + intersection.AddPath(path) + return + node = node[name] + intersection.AddLeafNodes(path, node) + + def AddLeafNodes(self, prefix, node): + """Adds leaf nodes begin with prefix to this tree.""" + if not node: + self.AddPath(prefix) + for name in node: + child_path = prefix + '.' + name + self.AddLeafNodes(child_path, node[name]) + + def MergeMessage( + self, source, destination, + replace_message, replace_repeated): + """Merge all fields specified by this tree from source to destination.""" + _MergeMessage( + self._root, source, destination, replace_message, replace_repeated) + + +def _StrConvert(value): + """Converts value to str if it is not.""" + # This file is imported by c extension and some methods like ClearField + # requires string for the field name. py2/py3 has different text + # type and may use unicode. + if not isinstance(value, str): + return value.encode('utf-8') + return value + + +def _MergeMessage( + node, source, destination, replace_message, replace_repeated): + """Merge all fields specified by a sub-tree from source to destination.""" + source_descriptor = source.DESCRIPTOR + for name in node: + child = node[name] + field = source_descriptor.fields_by_name[name] + if field is None: + raise ValueError('Error: Can\'t find field {0} in message {1}.'.format( + name, source_descriptor.full_name)) + if child: + # Sub-paths are only allowed for singular message fields. 
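+      # For example, a path like 'a.b' only makes sense when 'a' is a
+      # singular message field; repeated and scalar fields have no named
+      # sub-fields to descend into.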
+ if (field.label == FieldDescriptor.LABEL_REPEATED or + field.cpp_type != FieldDescriptor.CPPTYPE_MESSAGE): + raise ValueError('Error: Field {0} in message {1} is not a singular ' + 'message field and cannot have sub-fields.'.format( + name, source_descriptor.full_name)) + if source.HasField(name): + _MergeMessage( + child, getattr(source, name), getattr(destination, name), + replace_message, replace_repeated) + continue + if field.label == FieldDescriptor.LABEL_REPEATED: + if replace_repeated: + destination.ClearField(_StrConvert(name)) + repeated_source = getattr(source, name) + repeated_destination = getattr(destination, name) + repeated_destination.MergeFrom(repeated_source) + else: + if field.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE: + if replace_message: + destination.ClearField(_StrConvert(name)) + if source.HasField(name): + getattr(destination, name).MergeFrom(getattr(source, name)) + else: + setattr(destination, name, getattr(source, name)) + + +def _AddFieldPaths(node, prefix, field_mask): + """Adds the field paths descended from node to field_mask.""" + if not node and prefix: + field_mask.paths.append(prefix) + return + for name in sorted(node): + if prefix: + child_path = prefix + '.' + name + else: + child_path = name + _AddFieldPaths(node[name], child_path, field_mask) + + +def _SetStructValue(struct_value, value): + if value is None: + struct_value.null_value = 0 + elif isinstance(value, bool): + # Note: this check must come before the number check because in Python + # True and False are also considered numbers. + struct_value.bool_value = value + elif isinstance(value, str): + struct_value.string_value = value + elif isinstance(value, (int, float)): + struct_value.number_value = value + elif isinstance(value, (dict, Struct)): + struct_value.struct_value.Clear() + struct_value.struct_value.update(value) + elif isinstance(value, (list, ListValue)): + struct_value.list_value.Clear() + struct_value.list_value.extend(value) + else: + raise ValueError('Unexpected type') + + +def _GetStructValue(struct_value): + which = struct_value.WhichOneof('kind') + if which == 'struct_value': + return struct_value.struct_value + elif which == 'null_value': + return None + elif which == 'number_value': + return struct_value.number_value + elif which == 'string_value': + return struct_value.string_value + elif which == 'bool_value': + return struct_value.bool_value + elif which == 'list_value': + return struct_value.list_value + elif which is None: + raise ValueError('Value not set') + + +class Struct(object): + """Class for Struct message type.""" + + __slots__ = () + + def __getitem__(self, key): + return _GetStructValue(self.fields[key]) + + def __contains__(self, item): + return item in self.fields + + def __setitem__(self, key, value): + _SetStructValue(self.fields[key], value) + + def __delitem__(self, key): + del self.fields[key] + + def __len__(self): + return len(self.fields) + + def __iter__(self): + return iter(self.fields) + + def keys(self): # pylint: disable=invalid-name + return self.fields.keys() + + def values(self): # pylint: disable=invalid-name + return [self[key] for key in self] + + def items(self): # pylint: disable=invalid-name + return [(key, self[key]) for key in self] + + def get_or_create_list(self, key): + """Returns a list for this key, creating if it didn't exist already.""" + if not self.fields[key].HasField('list_value'): + # Clear will mark list_value modified which will indeed create a list. 
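+      # Merely reading list_value returns a default instance without
+      # marking the field as set, so HasField('list_value') would keep
+      # returning False on later calls.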
+ self.fields[key].list_value.Clear() + return self.fields[key].list_value + + def get_or_create_struct(self, key): + """Returns a struct for this key, creating if it didn't exist already.""" + if not self.fields[key].HasField('struct_value'): + # Clear will mark struct_value modified which will indeed create a struct. + self.fields[key].struct_value.Clear() + return self.fields[key].struct_value + + def update(self, dictionary): # pylint: disable=invalid-name + for key, value in dictionary.items(): + _SetStructValue(self.fields[key], value) + +collections.abc.MutableMapping.register(Struct) + + +class ListValue(object): + """Class for ListValue message type.""" + + __slots__ = () + + def __len__(self): + return len(self.values) + + def append(self, value): + _SetStructValue(self.values.add(), value) + + def extend(self, elem_seq): + for value in elem_seq: + self.append(value) + + def __getitem__(self, index): + """Retrieves item by the specified index.""" + return _GetStructValue(self.values.__getitem__(index)) + + def __setitem__(self, index, value): + _SetStructValue(self.values.__getitem__(index), value) + + def __delitem__(self, key): + del self.values[key] + + def items(self): + for i in range(len(self)): + yield self[i] + + def add_struct(self): + """Appends and returns a struct value as the next value in the list.""" + struct_value = self.values.add().struct_value + # Clear will mark struct_value modified which will indeed create a struct. + struct_value.Clear() + return struct_value + + def add_list(self): + """Appends and returns a list value as the next value in the list.""" + list_value = self.values.add().list_value + # Clear will mark list_value modified which will indeed create a list. + list_value.Clear() + return list_value + +collections.abc.MutableSequence.register(ListValue) + + +WKTBASES = { + 'google.protobuf.Any': Any, + 'google.protobuf.Duration': Duration, + 'google.protobuf.FieldMask': FieldMask, + 'google.protobuf.ListValue': ListValue, + 'google.protobuf.Struct': Struct, + 'google.protobuf.Timestamp': Timestamp, +} diff --git a/env/lib/python3.8/site-packages/google/protobuf/internal/wire_format.py b/env/lib/python3.8/site-packages/google/protobuf/internal/wire_format.py new file mode 100644 index 00000000..883f5255 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/internal/wire_format.py @@ -0,0 +1,268 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Constants and static functions to support protocol buffer wire format."""
+
+__author__ = 'robinson@google.com (Will Robinson)'
+
+import struct
+from google.protobuf import descriptor
+from google.protobuf import message
+
+
+TAG_TYPE_BITS = 3  # Number of bits used to hold type info in a proto tag.
+TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1  # 0x7
+
+# These numbers identify the wire type of a protocol buffer value.
+# We use the least-significant TAG_TYPE_BITS bits of the varint-encoded
+# tag-and-type to store one of these WIRETYPE_* constants.
+# These values must match WireType enum in google/protobuf/wire_format.h.
+WIRETYPE_VARINT = 0
+WIRETYPE_FIXED64 = 1
+WIRETYPE_LENGTH_DELIMITED = 2
+WIRETYPE_START_GROUP = 3
+WIRETYPE_END_GROUP = 4
+WIRETYPE_FIXED32 = 5
+_WIRETYPE_MAX = 5
+
+
+# Bounds for various integer types.
+INT32_MAX = int((1 << 31) - 1)
+INT32_MIN = int(-(1 << 31))
+UINT32_MAX = (1 << 32) - 1
+
+INT64_MAX = (1 << 63) - 1
+INT64_MIN = -(1 << 63)
+UINT64_MAX = (1 << 64) - 1
+
+# "struct" format strings that will encode/decode the specified formats.
+FORMAT_UINT32_LITTLE_ENDIAN = '<I'
+FORMAT_UINT64_LITTLE_ENDIAN = '<Q'
+FORMAT_FLOAT_LITTLE_ENDIAN = '<f'
+FORMAT_DOUBLE_LITTLE_ENDIAN = '<d'
+
+
+# We'll have to provide alternate implementations of AppendLittleEndian*() on
+# any architectures where these checks fail.
+if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4:
+  raise AssertionError('Format "I" is not a 32-bit number.')
+if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8:
+  raise AssertionError('Format "Q" is not a 64-bit number.')
+
+
+def PackTag(field_number, wire_type):
+  """Returns an unsigned 32-bit integer that encodes the field number and
+  wire type information in standard protocol message wire format.
+
+  Args:
+    field_number: Expected to be an integer in the range [1, 1 << 29)
+    wire_type: One of the WIRETYPE_* constants.
+  """
+  if not 0 <= wire_type <= _WIRETYPE_MAX:
+    raise message.EncodeError('Unknown wire type: %d' % wire_type)
+  return (field_number << TAG_TYPE_BITS) | wire_type
+
+
+def UnpackTag(tag):
+  """The inverse of PackTag().  Returns (field_number, wire_type)."""
+  return (tag >> TAG_TYPE_BITS), (tag & TAG_TYPE_MASK)
+
+
+def ZigZagEncode(value):
+  """ZigZag Transform: Encodes signed integers so that they can be
+  effectively used with varint encoding. See wire_format.h for
+  more details.
+  """
+  if value >= 0:
+    return value << 1
+  return (value << 1) ^ (~0)
+
+
+def ZigZagDecode(value):
+  """Inverse of ZigZagEncode()."""
+  if not value & 0x1:
+    return value >> 1
+  return (value >> 1) ^ (~0)
+
+
+
+# The *ByteSize() functions below return the number of bytes required to
+# serialize "field number + type" information and then serialize the value.
+
+
+def Int32ByteSize(field_number, int32):
+  return Int64ByteSize(field_number, int32)
+
+
+def Int32ByteSizeNoTag(int32):
+  return _VarUInt64ByteSizeNoTag(0xffffffffffffffff & int32)
+
+
+def Int64ByteSize(field_number, int64):
+  # Have to convert to uint before calling UInt64ByteSize().
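+  # Masking with 2**64 - 1 maps a negative int to its two's-complement
+  # encoding; e.g. -1 becomes 0xffffffffffffffff, a 10-byte varint.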
+ return UInt64ByteSize(field_number, 0xffffffffffffffff & int64) + + +def UInt32ByteSize(field_number, uint32): + return UInt64ByteSize(field_number, uint32) + + +def UInt64ByteSize(field_number, uint64): + return TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64) + + +def SInt32ByteSize(field_number, int32): + return UInt32ByteSize(field_number, ZigZagEncode(int32)) + + +def SInt64ByteSize(field_number, int64): + return UInt64ByteSize(field_number, ZigZagEncode(int64)) + + +def Fixed32ByteSize(field_number, fixed32): + return TagByteSize(field_number) + 4 + + +def Fixed64ByteSize(field_number, fixed64): + return TagByteSize(field_number) + 8 + + +def SFixed32ByteSize(field_number, sfixed32): + return TagByteSize(field_number) + 4 + + +def SFixed64ByteSize(field_number, sfixed64): + return TagByteSize(field_number) + 8 + + +def FloatByteSize(field_number, flt): + return TagByteSize(field_number) + 4 + + +def DoubleByteSize(field_number, double): + return TagByteSize(field_number) + 8 + + +def BoolByteSize(field_number, b): + return TagByteSize(field_number) + 1 + + +def EnumByteSize(field_number, enum): + return UInt32ByteSize(field_number, enum) + + +def StringByteSize(field_number, string): + return BytesByteSize(field_number, string.encode('utf-8')) + + +def BytesByteSize(field_number, b): + return (TagByteSize(field_number) + + _VarUInt64ByteSizeNoTag(len(b)) + + len(b)) + + +def GroupByteSize(field_number, message): + return (2 * TagByteSize(field_number) # START and END group. + + message.ByteSize()) + + +def MessageByteSize(field_number, message): + return (TagByteSize(field_number) + + _VarUInt64ByteSizeNoTag(message.ByteSize()) + + message.ByteSize()) + + +def MessageSetItemByteSize(field_number, msg): + # First compute the sizes of the tags. + # There are 2 tags for the beginning and ending of the repeated group, that + # is field number 1, one with field number 2 (type_id) and one with field + # number 3 (message). + total_size = (2 * TagByteSize(1) + TagByteSize(2) + TagByteSize(3)) + + # Add the number of bytes for type_id. + total_size += _VarUInt64ByteSizeNoTag(field_number) + + message_size = msg.ByteSize() + + # The number of bytes for encoding the length of the message. + total_size += _VarUInt64ByteSizeNoTag(message_size) + + # The size of the message. + total_size += message_size + return total_size + + +def TagByteSize(field_number): + """Returns the bytes required to serialize a tag with this field number.""" + # Just pass in type 0, since the type won't affect the tag+type size. + return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0)) + + +# Private helper function for the *ByteSize() functions above. + +def _VarUInt64ByteSizeNoTag(uint64): + """Returns the number of bytes required to serialize a single varint + using boundary value comparisons. (unrolled loop optimization -WPierce) + uint64 must be unsigned. 
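+
+  For example, 127 fits in one byte, 128 needs two, and values above
+  2**63 - 1 take the full ten bytes.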
+ """ + if uint64 <= 0x7f: return 1 + if uint64 <= 0x3fff: return 2 + if uint64 <= 0x1fffff: return 3 + if uint64 <= 0xfffffff: return 4 + if uint64 <= 0x7ffffffff: return 5 + if uint64 <= 0x3ffffffffff: return 6 + if uint64 <= 0x1ffffffffffff: return 7 + if uint64 <= 0xffffffffffffff: return 8 + if uint64 <= 0x7fffffffffffffff: return 9 + if uint64 > UINT64_MAX: + raise message.EncodeError('Value out of range: %d' % uint64) + return 10 + + +NON_PACKABLE_TYPES = ( + descriptor.FieldDescriptor.TYPE_STRING, + descriptor.FieldDescriptor.TYPE_GROUP, + descriptor.FieldDescriptor.TYPE_MESSAGE, + descriptor.FieldDescriptor.TYPE_BYTES +) + + +def IsTypePackable(field_type): + """Return true iff packable = true is valid for fields of this type. + + Args: + field_type: a FieldDescriptor::Type value. + + Returns: + True iff fields of this type are packable. + """ + return field_type not in NON_PACKABLE_TYPES diff --git a/env/lib/python3.8/site-packages/google/protobuf/json_format.py b/env/lib/python3.8/site-packages/google/protobuf/json_format.py new file mode 100644 index 00000000..5024ed89 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/json_format.py @@ -0,0 +1,912 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Contains routines for printing protocol messages in JSON format. + +Simple usage example: + + # Create a proto object and serialize it to a json format string. + message = my_proto_pb2.MyMessage(foo='bar') + json_string = json_format.MessageToJson(message) + + # Parse a json format string to proto object. 
+  message = json_format.Parse(json_string, my_proto_pb2.MyMessage())
+"""
+
+__author__ = 'jieluo@google.com (Jie Luo)'
+
+
+import base64
+from collections import OrderedDict
+import json
+import math
+from operator import methodcaller
+import re
+import sys
+
+from google.protobuf.internal import type_checkers
+from google.protobuf import descriptor
+from google.protobuf import symbol_database
+
+
+_TIMESTAMPFOMAT = '%Y-%m-%dT%H:%M:%S'
+_INT_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_INT32,
+                        descriptor.FieldDescriptor.CPPTYPE_UINT32,
+                        descriptor.FieldDescriptor.CPPTYPE_INT64,
+                        descriptor.FieldDescriptor.CPPTYPE_UINT64])
+_INT64_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_INT64,
+                          descriptor.FieldDescriptor.CPPTYPE_UINT64])
+_FLOAT_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_FLOAT,
+                          descriptor.FieldDescriptor.CPPTYPE_DOUBLE])
+_INFINITY = 'Infinity'
+_NEG_INFINITY = '-Infinity'
+_NAN = 'NaN'
+
+_UNPAIRED_SURROGATE_PATTERN = re.compile(
+    u'[\ud800-\udbff](?![\udc00-\udfff])|(?<![\ud800-\udbff])[\udc00-\udfff]')
+
+_VALID_EXTENSION_NAME = re.compile(r'\[[a-zA-Z0-9\._]*\]$')
+
+
+class _Parser(object):
+  """JSON format parser for protocol message."""
+
+  def __init__(self, ignore_unknown_fields, descriptor_pool,
+               max_recursion_depth):
+    self.ignore_unknown_fields = ignore_unknown_fields
+    self.descriptor_pool = descriptor_pool
+    self.max_recursion_depth = max_recursion_depth
+    self.recursion_depth = 0
+
+  def ConvertMessage(self, value, message, path):
+    """Convert a JSON object into a message.
+
+    Args:
+      value: A JSON object.
+      message: A WKT or regular message to record the data.
+      path: parent path to log parse error info.
+
+    Raises:
+      ParseError: In case of convert problems.
+    """
+    self.recursion_depth += 1
+    if self.recursion_depth > self.max_recursion_depth:
+      raise ParseError('Message too deep. Max recursion depth is {0}'.format(
+          self.max_recursion_depth))
+    message_descriptor = message.DESCRIPTOR
+    full_name = message_descriptor.full_name
+    if not path:
+      path = message_descriptor.name
+    if _IsWrapperMessage(message_descriptor):
+      self._ConvertWrapperMessage(value, message, path)
+    elif full_name in _WKTJSONMETHODS:
+      methodcaller(_WKTJSONMETHODS[full_name][1], value, message, path)(self)
+    else:
+      self._ConvertFieldValuePair(value, message, path)
+    self.recursion_depth -= 1
+
+  def _ConvertFieldValuePair(self, js, message, path):
+    """Convert field value pairs into regular message.
+
+    Args:
+      js: A JSON object to convert the field value pairs.
+      message: A regular protocol message to record the data.
+      path: parent path to log parse error info.
+
+    Raises:
+      ParseError: In case of problems converting.
+    """
+    names = []
+    message_descriptor = message.DESCRIPTOR
+    fields_by_json_name = dict((f.json_name, f)
+                               for f in message_descriptor.fields)
+    for name in js:
+      try:
+        field = fields_by_json_name.get(name, None)
+        if not field:
+          field = message_descriptor.fields_by_name.get(name, None)
+        if not field and _VALID_EXTENSION_NAME.match(name):
+          if not message_descriptor.is_extendable:
+            raise ParseError(
+                'Message type {0} does not have extensions at {1}'.format(
+                    message_descriptor.full_name, path))
+          identifier = name[1:-1]  # strip [] brackets
+          # pylint: disable=protected-access
+          field = message.Extensions._FindExtensionByName(identifier)
+          # pylint: enable=protected-access
+          if not field:
+            # Try looking for extension by the message type name, dropping the
+            # field name following the final . separator in full_name.
+            identifier = '.'.join(identifier.split('.')[:-1])
+            # pylint: disable=protected-access
+            field = message.Extensions._FindExtensionByName(identifier)
+            # pylint: enable=protected-access
+        if not field:
+          if self.ignore_unknown_fields:
+            continue
+          raise ParseError(
+              ('Message type "{0}" has no field named "{1}" at "{2}".\n'
+               ' Available Fields(except extensions): "{3}"').format(
+                   message_descriptor.full_name, name, path,
+                   [f.json_name for f in message_descriptor.fields]))
+        if name in names:
+          raise ParseError('Message type "{0}" should not have multiple '
+                           '"{1}" fields at "{2}".'.format(
+                               message.DESCRIPTOR.full_name, name, path))
+        names.append(name)
+        value = js[name]
+        # Check no other oneof field is parsed.
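+        # For example, JSON that sets two members of the same oneof, say
+        # {'a': 1, 'b': 2} where 'a' and 'b' share a oneof (names here are
+        # illustrative), must be rejected: setting one clears the other.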
+ if field.containing_oneof is not None and value is not None: + oneof_name = field.containing_oneof.name + if oneof_name in names: + raise ParseError('Message type "{0}" should not have multiple ' + '"{1}" oneof fields at "{2}".'.format( + message.DESCRIPTOR.full_name, oneof_name, + path)) + names.append(oneof_name) + + if value is None: + if (field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE + and field.message_type.full_name == 'google.protobuf.Value'): + sub_message = getattr(message, field.name) + sub_message.null_value = 0 + elif (field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM + and field.enum_type.full_name == 'google.protobuf.NullValue'): + setattr(message, field.name, 0) + else: + message.ClearField(field.name) + continue + + # Parse field value. + if _IsMapEntry(field): + message.ClearField(field.name) + self._ConvertMapFieldValue(value, message, field, + '{0}.{1}'.format(path, name)) + elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED: + message.ClearField(field.name) + if not isinstance(value, list): + raise ParseError('repeated field {0} must be in [] which is ' + '{1} at {2}'.format(name, value, path)) + if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: + # Repeated message field. + for index, item in enumerate(value): + sub_message = getattr(message, field.name).add() + # None is a null_value in Value. + if (item is None and + sub_message.DESCRIPTOR.full_name != 'google.protobuf.Value'): + raise ParseError('null is not allowed to be used as an element' + ' in a repeated field at {0}.{1}[{2}]'.format( + path, name, index)) + self.ConvertMessage(item, sub_message, + '{0}.{1}[{2}]'.format(path, name, index)) + else: + # Repeated scalar field. + for index, item in enumerate(value): + if item is None: + raise ParseError('null is not allowed to be used as an element' + ' in a repeated field at {0}.{1}[{2}]'.format( + path, name, index)) + getattr(message, field.name).append( + _ConvertScalarFieldValue( + item, field, '{0}.{1}[{2}]'.format(path, name, index))) + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: + if field.is_extension: + sub_message = message.Extensions[field] + else: + sub_message = getattr(message, field.name) + sub_message.SetInParent() + self.ConvertMessage(value, sub_message, '{0}.{1}'.format(path, name)) + else: + if field.is_extension: + message.Extensions[field] = _ConvertScalarFieldValue( + value, field, '{0}.{1}'.format(path, name)) + else: + setattr( + message, field.name, + _ConvertScalarFieldValue(value, field, + '{0}.{1}'.format(path, name))) + except ParseError as e: + if field and field.containing_oneof is None: + raise ParseError('Failed to parse {0} field: {1}.'.format(name, e)) + else: + raise ParseError(str(e)) + except ValueError as e: + raise ParseError('Failed to parse {0} field: {1}.'.format(name, e)) + except TypeError as e: + raise ParseError('Failed to parse {0} field: {1}.'.format(name, e)) + + def _ConvertAnyMessage(self, value, message, path): + """Convert a JSON representation into Any message.""" + if isinstance(value, dict) and not value: + return + try: + type_url = value['@type'] + except KeyError: + raise ParseError( + '@type is missing when parsing any message at {0}'.format(path)) + + try: + sub_message = _CreateMessageFromTypeUrl(type_url, self.descriptor_pool) + except TypeError as e: + raise ParseError('{0} at {1}'.format(e, path)) + message_descriptor = sub_message.DESCRIPTOR + full_name = message_descriptor.full_name + if 
_IsWrapperMessage(message_descriptor): + self._ConvertWrapperMessage(value['value'], sub_message, + '{0}.value'.format(path)) + elif full_name in _WKTJSONMETHODS: + methodcaller(_WKTJSONMETHODS[full_name][1], value['value'], sub_message, + '{0}.value'.format(path))( + self) + else: + del value['@type'] + self._ConvertFieldValuePair(value, sub_message, path) + value['@type'] = type_url + # Sets Any message + message.value = sub_message.SerializeToString() + message.type_url = type_url + + def _ConvertGenericMessage(self, value, message, path): + """Convert a JSON representation into message with FromJsonString.""" + # Duration, Timestamp, FieldMask have a FromJsonString method to do the + # conversion. Users can also call the method directly. + try: + message.FromJsonString(value) + except ValueError as e: + raise ParseError('{0} at {1}'.format(e, path)) + + def _ConvertValueMessage(self, value, message, path): + """Convert a JSON representation into Value message.""" + if isinstance(value, dict): + self._ConvertStructMessage(value, message.struct_value, path) + elif isinstance(value, list): + self._ConvertListValueMessage(value, message.list_value, path) + elif value is None: + message.null_value = 0 + elif isinstance(value, bool): + message.bool_value = value + elif isinstance(value, str): + message.string_value = value + elif isinstance(value, _INT_OR_FLOAT): + message.number_value = value + else: + raise ParseError('Value {0} has unexpected type {1} at {2}'.format( + value, type(value), path)) + + def _ConvertListValueMessage(self, value, message, path): + """Convert a JSON representation into ListValue message.""" + if not isinstance(value, list): + raise ParseError('ListValue must be in [] which is {0} at {1}'.format( + value, path)) + message.ClearField('values') + for index, item in enumerate(value): + self._ConvertValueMessage(item, message.values.add(), + '{0}[{1}]'.format(path, index)) + + def _ConvertStructMessage(self, value, message, path): + """Convert a JSON representation into Struct message.""" + if not isinstance(value, dict): + raise ParseError('Struct must be in a dict which is {0} at {1}'.format( + value, path)) + # Clear will mark the struct as modified so it will be created even if + # there are no values. + message.Clear() + for key in value: + self._ConvertValueMessage(value[key], message.fields[key], + '{0}.{1}'.format(path, key)) + return + + def _ConvertWrapperMessage(self, value, message, path): + """Convert a JSON representation into Wrapper message.""" + field = message.DESCRIPTOR.fields_by_name['value'] + setattr( + message, 'value', + _ConvertScalarFieldValue(value, field, path='{0}.value'.format(path))) + + def _ConvertMapFieldValue(self, value, message, field, path): + """Convert map field value for a message map field. + + Args: + value: A JSON object to convert the map field value. + message: A protocol message to record the converted data. + field: The descriptor of the map field to be converted. + path: parent path to log parse error info. + + Raises: + ParseError: In case of convert problems. 
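+
+    Example (illustrative): for a field `map<string, int32> counts = 1;`,
+    the JSON object {"a": 1, "b": 2} is converted entry by entry, parsing
+    each key with the map's key type and each value with its value type.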
+ """ + if not isinstance(value, dict): + raise ParseError( + 'Map field {0} must be in a dict which is {1} at {2}'.format( + field.name, value, path)) + key_field = field.message_type.fields_by_name['key'] + value_field = field.message_type.fields_by_name['value'] + for key in value: + key_value = _ConvertScalarFieldValue(key, key_field, + '{0}.key'.format(path), True) + if value_field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: + self.ConvertMessage(value[key], + getattr(message, field.name)[key_value], + '{0}[{1}]'.format(path, key_value)) + else: + getattr(message, field.name)[key_value] = _ConvertScalarFieldValue( + value[key], value_field, path='{0}[{1}]'.format(path, key_value)) + + +def _ConvertScalarFieldValue(value, field, path, require_str=False): + """Convert a single scalar field value. + + Args: + value: A scalar value to convert the scalar field value. + field: The descriptor of the field to convert. + path: parent path to log parse error info. + require_str: If True, the field value must be a str. + + Returns: + The converted scalar field value + + Raises: + ParseError: In case of convert problems. + """ + try: + if field.cpp_type in _INT_TYPES: + return _ConvertInteger(value) + elif field.cpp_type in _FLOAT_TYPES: + return _ConvertFloat(value, field) + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL: + return _ConvertBool(value, require_str) + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING: + if field.type == descriptor.FieldDescriptor.TYPE_BYTES: + if isinstance(value, str): + encoded = value.encode('utf-8') + else: + encoded = value + # Add extra padding '=' + padded_value = encoded + b'=' * (4 - len(encoded) % 4) + return base64.urlsafe_b64decode(padded_value) + else: + # Checking for unpaired surrogates appears to be unreliable, + # depending on the specific Python version, so we check manually. + if _UNPAIRED_SURROGATE_PATTERN.search(value): + raise ParseError('Unpaired surrogate') + return value + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM: + # Convert an enum value. + enum_value = field.enum_type.values_by_name.get(value, None) + if enum_value is None: + try: + number = int(value) + enum_value = field.enum_type.values_by_number.get(number, None) + except ValueError: + raise ParseError('Invalid enum value {0} for enum type {1}'.format( + value, field.enum_type.full_name)) + if enum_value is None: + if field.file.syntax == 'proto3': + # Proto3 accepts unknown enums. + return number + raise ParseError('Invalid enum value {0} for enum type {1}'.format( + value, field.enum_type.full_name)) + return enum_value.number + except ParseError as e: + raise ParseError('{0} at {1}'.format(e, path)) + + +def _ConvertInteger(value): + """Convert an integer. + + Args: + value: A scalar value to convert. + + Returns: + The integer value. + + Raises: + ParseError: If an integer couldn't be consumed. 
+ """ + if isinstance(value, float) and not value.is_integer(): + raise ParseError('Couldn\'t parse integer: {0}'.format(value)) + + if isinstance(value, str) and value.find(' ') != -1: + raise ParseError('Couldn\'t parse integer: "{0}"'.format(value)) + + if isinstance(value, bool): + raise ParseError('Bool value {0} is not acceptable for ' + 'integer field'.format(value)) + + return int(value) + + +def _ConvertFloat(value, field): + """Convert an floating point number.""" + if isinstance(value, float): + if math.isnan(value): + raise ParseError('Couldn\'t parse NaN, use quoted "NaN" instead') + if math.isinf(value): + if value > 0: + raise ParseError('Couldn\'t parse Infinity or value too large, ' + 'use quoted "Infinity" instead') + else: + raise ParseError('Couldn\'t parse -Infinity or value too small, ' + 'use quoted "-Infinity" instead') + if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_FLOAT: + # pylint: disable=protected-access + if value > type_checkers._FLOAT_MAX: + raise ParseError('Float value too large') + # pylint: disable=protected-access + if value < type_checkers._FLOAT_MIN: + raise ParseError('Float value too small') + if value == 'nan': + raise ParseError('Couldn\'t parse float "nan", use "NaN" instead') + try: + # Assume Python compatible syntax. + return float(value) + except ValueError: + # Check alternative spellings. + if value == _NEG_INFINITY: + return float('-inf') + elif value == _INFINITY: + return float('inf') + elif value == _NAN: + return float('nan') + else: + raise ParseError('Couldn\'t parse float: {0}'.format(value)) + + +def _ConvertBool(value, require_str): + """Convert a boolean value. + + Args: + value: A scalar value to convert. + require_str: If True, value must be a str. + + Returns: + The bool parsed. + + Raises: + ParseError: If a boolean value couldn't be consumed. + """ + if require_str: + if value == 'true': + return True + elif value == 'false': + return False + else: + raise ParseError('Expected "true" or "false", not {0}'.format(value)) + + if not isinstance(value, bool): + raise ParseError('Expected true or false without quotes') + return value + +_WKTJSONMETHODS = { + 'google.protobuf.Any': ['_AnyMessageToJsonObject', + '_ConvertAnyMessage'], + 'google.protobuf.Duration': ['_GenericMessageToJsonObject', + '_ConvertGenericMessage'], + 'google.protobuf.FieldMask': ['_GenericMessageToJsonObject', + '_ConvertGenericMessage'], + 'google.protobuf.ListValue': ['_ListValueMessageToJsonObject', + '_ConvertListValueMessage'], + 'google.protobuf.Struct': ['_StructMessageToJsonObject', + '_ConvertStructMessage'], + 'google.protobuf.Timestamp': ['_GenericMessageToJsonObject', + '_ConvertGenericMessage'], + 'google.protobuf.Value': ['_ValueMessageToJsonObject', + '_ConvertValueMessage'] +} diff --git a/env/lib/python3.8/site-packages/google/protobuf/message.py b/env/lib/python3.8/site-packages/google/protobuf/message.py new file mode 100644 index 00000000..76c6802f --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/message.py @@ -0,0 +1,424 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. 
+# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# TODO(robinson): We should just make these methods all "pure-virtual" and move +# all implementation out, into reflection.py for now. + + +"""Contains an abstract base class for protocol messages.""" + +__author__ = 'robinson@google.com (Will Robinson)' + +class Error(Exception): + """Base error type for this module.""" + pass + + +class DecodeError(Error): + """Exception raised when deserializing messages.""" + pass + + +class EncodeError(Error): + """Exception raised when serializing messages.""" + pass + + +class Message(object): + + """Abstract base class for protocol messages. + + Protocol message classes are almost always generated by the protocol + compiler. These generated types subclass Message and implement the methods + shown below. + """ + + # TODO(robinson): Link to an HTML document here. + + # TODO(robinson): Document that instances of this class will also + # have an Extensions attribute with __getitem__ and __setitem__. + # Again, not sure how to best convey this. + + # TODO(robinson): Document that the class must also have a static + # RegisterExtension(extension_field) method. + # Not sure how to best express at this point. + + # TODO(robinson): Document these fields and methods. + + __slots__ = [] + + #: The :class:`google.protobuf.descriptor.Descriptor` for this message type. + DESCRIPTOR = None + + def __deepcopy__(self, memo=None): + clone = type(self)() + clone.MergeFrom(self) + return clone + + def __eq__(self, other_msg): + """Recursively compares two messages by value and structure.""" + raise NotImplementedError + + def __ne__(self, other_msg): + # Can't just say self != other_msg, since that would infinitely recurse. :) + return not self == other_msg + + def __hash__(self): + raise TypeError('unhashable object') + + def __str__(self): + """Outputs a human-readable representation of the message.""" + raise NotImplementedError + + def __unicode__(self): + """Outputs a human-readable representation of the message.""" + raise NotImplementedError + + def MergeFrom(self, other_msg): + """Merges the contents of the specified message into current message. + + This method merges the contents of the specified message into the current + message. Singular fields that are set in the specified message overwrite + the corresponding fields in the current message. Repeated fields are + appended. 
Singular sub-messages and groups are recursively merged. + + Args: + other_msg (Message): A message to merge into the current message. + """ + raise NotImplementedError + + def CopyFrom(self, other_msg): + """Copies the content of the specified message into the current message. + + The method clears the current message and then merges the specified + message using MergeFrom. + + Args: + other_msg (Message): A message to copy into the current one. + """ + if self is other_msg: + return + self.Clear() + self.MergeFrom(other_msg) + + def Clear(self): + """Clears all data that was set in the message.""" + raise NotImplementedError + + def SetInParent(self): + """Mark this as present in the parent. + + This normally happens automatically when you assign a field of a + sub-message, but sometimes you want to make the sub-message + present while keeping it empty. If you find yourself using this, + you may want to reconsider your design. + """ + raise NotImplementedError + + def IsInitialized(self): + """Checks if the message is initialized. + + Returns: + bool: The method returns True if the message is initialized (i.e. all of + its required fields are set). + """ + raise NotImplementedError + + # TODO(robinson): MergeFromString() should probably return None and be + # implemented in terms of a helper that returns the # of bytes read. Our + # deserialization routines would use the helper when recursively + # deserializing, but the end user would almost always just want the no-return + # MergeFromString(). + + def MergeFromString(self, serialized): + """Merges serialized protocol buffer data into this message. + + When we find a field in `serialized` that is already present + in this message: + + - If it's a "repeated" field, we append to the end of our list. + - Else, if it's a scalar, we overwrite our field. + - Else, (it's a nonrepeated composite), we recursively merge + into the existing composite. + + Args: + serialized (bytes): Any object that allows us to call + ``memoryview(serialized)`` to access a string of bytes using the + buffer interface. + + Returns: + int: The number of bytes read from `serialized`. + For non-group messages, this will always be `len(serialized)`, + but for messages which are actually groups, this will + generally be less than `len(serialized)`, since we must + stop when we reach an ``END_GROUP`` tag. Note that if + we *do* stop because of an ``END_GROUP`` tag, the number + of bytes returned does not include the bytes + for the ``END_GROUP`` tag information. + + Raises: + DecodeError: if the input cannot be parsed. + """ + # TODO(robinson): Document handling of unknown fields. + # TODO(robinson): When we switch to a helper, this will return None. + raise NotImplementedError + + def ParseFromString(self, serialized): + """Parse serialized protocol buffer data into this message. + + Like :func:`MergeFromString()`, except we clear the object first. + + Raises: + message.DecodeError if the input cannot be parsed. + """ + self.Clear() + return self.MergeFromString(serialized) + + def SerializeToString(self, **kwargs): + """Serializes the protocol message to a binary string. + + Keyword Args: + deterministic (bool): If true, requests deterministic serialization + of the protobuf, with predictable ordering of map keys. + + Returns: + A binary string representation of the message if all of the required + fields in the message are set (i.e. the message is initialized). + + Raises: + EncodeError: if the message isn't initialized (see :func:`IsInitialized`). 
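+
+    A typical round trip (illustrative): ``data = msg.SerializeToString()``
+    followed by ``msg2.ParseFromString(data)`` reproduces the contents.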
+ """ + raise NotImplementedError + + def SerializePartialToString(self, **kwargs): + """Serializes the protocol message to a binary string. + + This method is similar to SerializeToString but doesn't check if the + message is initialized. + + Keyword Args: + deterministic (bool): If true, requests deterministic serialization + of the protobuf, with predictable ordering of map keys. + + Returns: + bytes: A serialized representation of the partial message. + """ + raise NotImplementedError + + # TODO(robinson): Decide whether we like these better + # than auto-generated has_foo() and clear_foo() methods + # on the instances themselves. This way is less consistent + # with C++, but it makes reflection-type access easier and + # reduces the number of magically autogenerated things. + # + # TODO(robinson): Be sure to document (and test) exactly + # which field names are accepted here. Are we case-sensitive? + # What do we do with fields that share names with Python keywords + # like 'lambda' and 'yield'? + # + # nnorwitz says: + # """ + # Typically (in python), an underscore is appended to names that are + # keywords. So they would become lambda_ or yield_. + # """ + def ListFields(self): + """Returns a list of (FieldDescriptor, value) tuples for present fields. + + A message field is non-empty if HasField() would return true. A singular + primitive field is non-empty if HasField() would return true in proto2 or it + is non zero in proto3. A repeated field is non-empty if it contains at least + one element. The fields are ordered by field number. + + Returns: + list[tuple(FieldDescriptor, value)]: field descriptors and values + for all fields in the message which are not empty. The values vary by + field type. + """ + raise NotImplementedError + + def HasField(self, field_name): + """Checks if a certain field is set for the message. + + For a oneof group, checks if any field inside is set. Note that if the + field_name is not defined in the message descriptor, :exc:`ValueError` will + be raised. + + Args: + field_name (str): The name of the field to check for presence. + + Returns: + bool: Whether a value has been set for the named field. + + Raises: + ValueError: if the `field_name` is not a member of this message. + """ + raise NotImplementedError + + def ClearField(self, field_name): + """Clears the contents of a given field. + + Inside a oneof group, clears the field set. If the name neither refers to a + defined field or oneof group, :exc:`ValueError` is raised. + + Args: + field_name (str): The name of the field to check for presence. + + Raises: + ValueError: if the `field_name` is not a member of this message. + """ + raise NotImplementedError + + def WhichOneof(self, oneof_group): + """Returns the name of the field that is set inside a oneof group. + + If no field is set, returns None. + + Args: + oneof_group (str): the name of the oneof group to check. + + Returns: + str or None: The name of the group that is set, or None. + + Raises: + ValueError: no group with the given name exists + """ + raise NotImplementedError + + def HasExtension(self, extension_handle): + """Checks if a certain extension is present for this message. + + Extensions are retrieved using the :attr:`Extensions` mapping (if present). + + Args: + extension_handle: The handle for the extension to check. + + Returns: + bool: Whether the extension is present for this message. + + Raises: + KeyError: if the extension is repeated. 
Similar to repeated fields, + there is no separate notion of presence: a "not present" repeated + extension is an empty list. + """ + raise NotImplementedError + + def ClearExtension(self, extension_handle): + """Clears the contents of a given extension. + + Args: + extension_handle: The handle for the extension to clear. + """ + raise NotImplementedError + + def UnknownFields(self): + """Returns the UnknownFieldSet. + + Returns: + UnknownFieldSet: The unknown fields stored in this message. + """ + raise NotImplementedError + + def DiscardUnknownFields(self): + """Clears all fields in the :class:`UnknownFieldSet`. + + This operation is recursive for nested message. + """ + raise NotImplementedError + + def ByteSize(self): + """Returns the serialized size of this message. + + Recursively calls ByteSize() on all contained messages. + + Returns: + int: The number of bytes required to serialize this message. + """ + raise NotImplementedError + + @classmethod + def FromString(cls, s): + raise NotImplementedError + + @staticmethod + def RegisterExtension(extension_handle): + raise NotImplementedError + + def _SetListener(self, message_listener): + """Internal method used by the protocol message implementation. + Clients should not call this directly. + + Sets a listener that this message will call on certain state transitions. + + The purpose of this method is to register back-edges from children to + parents at runtime, for the purpose of setting "has" bits and + byte-size-dirty bits in the parent and ancestor objects whenever a child or + descendant object is modified. + + If the client wants to disconnect this Message from the object tree, she + explicitly sets callback to None. + + If message_listener is None, unregisters any existing listener. Otherwise, + message_listener must implement the MessageListener interface in + internal/message_listener.py, and we discard any listener registered + via a previous _SetListener() call. + """ + raise NotImplementedError + + def __getstate__(self): + """Support the pickle protocol.""" + return dict(serialized=self.SerializePartialToString()) + + def __setstate__(self, state): + """Support the pickle protocol.""" + self.__init__() + serialized = state['serialized'] + # On Python 3, using encoding='latin1' is required for unpickling + # protos pickled by Python 2. + if not isinstance(serialized, bytes): + serialized = serialized.encode('latin1') + self.ParseFromString(serialized) + + def __reduce__(self): + message_descriptor = self.DESCRIPTOR + if message_descriptor.containing_type is None: + return type(self), (), self.__getstate__() + # the message type must be nested. + # Python does not pickle nested classes; use the symbol_database on the + # receiving end. + container = message_descriptor + return (_InternalConstructMessage, (container.full_name,), + self.__getstate__()) + + +def _InternalConstructMessage(full_name): + """Constructs a nested message.""" + from google.protobuf import symbol_database # pylint:disable=g-import-not-at-top + + return symbol_database.Default().GetSymbol(full_name)() diff --git a/env/lib/python3.8/site-packages/google/protobuf/message_factory.py b/env/lib/python3.8/site-packages/google/protobuf/message_factory.py new file mode 100644 index 00000000..3656fa68 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/message_factory.py @@ -0,0 +1,185 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. 
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

+"""Provides a factory class for generating dynamic messages.

+If you have access to the FileDescriptor protos containing the messages you
+want to create, the easiest way to use this class is just the following:

+message_classes = message_factory.GetMessages(iterable_of_file_descriptors)
+my_proto_instance = message_classes['some.proto.package.MessageName']()
+"""

+__author__ = 'matthewtoia@google.com (Matt Toia)'

+from google.protobuf.internal import api_implementation
+from google.protobuf import descriptor_pool
+from google.protobuf import message

+if api_implementation.Type() == 'cpp':
+  from google.protobuf.pyext import cpp_message as message_impl
+else:
+  from google.protobuf.internal import python_message as message_impl


+# The type of all Message classes.
+_GENERATED_PROTOCOL_MESSAGE_TYPE = message_impl.GeneratedProtocolMessageType


+class MessageFactory(object):
+  """Factory for creating Proto2 messages from descriptors in a pool."""

+  def __init__(self, pool=None):
+    """Initializes a new factory."""
+    self.pool = pool or descriptor_pool.DescriptorPool()

+    # local cache of all classes built from protobuf descriptors
+    self._classes = {}

+  def GetPrototype(self, descriptor):
+    """Obtains a proto2 message class based on the passed in descriptor.

+    Passing a descriptor with a fully qualified name matching a previous
+    invocation will cause the same class to be returned.

+    Args:
+      descriptor: The descriptor to build from.

+    Returns:
+      A class describing the passed in descriptor.
+    """
+    if descriptor not in self._classes:
+      result_class = self.CreatePrototype(descriptor)
+      # The assignment to _classes is redundant for the base implementation, but
+      # might avoid confusion in cases where CreatePrototype gets overridden and
+      # does not call the base implementation.
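+      # (Subsequent GetPrototype() calls with the same descriptor then
+      # return this cached class, so every field and extension resolves
+      # to a single Python type.)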
+      self._classes[descriptor] = result_class
+      return result_class
+    return self._classes[descriptor]

+  def CreatePrototype(self, descriptor):
+    """Builds a proto2 message class based on the passed in descriptor.

+    Don't call this function directly; it always creates a new class. Call
+    GetPrototype() instead. This method is meant to be overridden in
+    subclasses to perform additional operations on the newly constructed
+    class.

+    Args:
+      descriptor: The descriptor to build from.

+    Returns:
+      A class describing the passed in descriptor.
+    """
+    descriptor_name = descriptor.name
+    result_class = _GENERATED_PROTOCOL_MESSAGE_TYPE(
+        descriptor_name,
+        (message.Message,),
+        {
+            'DESCRIPTOR': descriptor,
+            # If module not set, it wrongly points to message_factory module.
+            '__module__': None,
+        })
+    result_class._FACTORY = self  # pylint: disable=protected-access
+    # Assign in _classes before doing recursive calls to avoid infinite
+    # recursion.
+    self._classes[descriptor] = result_class
+    for field in descriptor.fields:
+      if field.message_type:
+        self.GetPrototype(field.message_type)
+    for extension in result_class.DESCRIPTOR.extensions:
+      if extension.containing_type not in self._classes:
+        self.GetPrototype(extension.containing_type)
+      extended_class = self._classes[extension.containing_type]
+      extended_class.RegisterExtension(extension)
+    return result_class

+  def GetMessages(self, files):
+    """Gets all the messages from a specified file.

+    This will find and resolve dependencies, failing if the descriptor
+    pool cannot satisfy them.

+    Args:
+      files: The file names to extract messages from.

+    Returns:
+      A dictionary mapping proto names to the message classes. This will
+      include any dependent messages as well as any messages defined in the
+      same file as a specified message.
+    """
+    result = {}
+    for file_name in files:
+      file_desc = self.pool.FindFileByName(file_name)
+      for desc in file_desc.message_types_by_name.values():
+        result[desc.full_name] = self.GetPrototype(desc)

+      # While the extension FieldDescriptors are created by the descriptor
+      # pool, the python classes created in the factory need them to be
+      # registered explicitly, which is done below.
+      #
+      # The call to RegisterExtension will specifically check if the
+      # extension was already registered on the object and either
+      # ignore the registration if the original was the same, or raise
+      # an error if they were different.

+      for extension in file_desc.extensions_by_name.values():
+        if extension.containing_type not in self._classes:
+          self.GetPrototype(extension.containing_type)
+        extended_class = self._classes[extension.containing_type]
+        extended_class.RegisterExtension(extension)
+    return result


+_FACTORY = MessageFactory()


+def GetMessages(file_protos):
+  """Builds a dictionary of all the messages available in a set of files.

+  Args:
+    file_protos: Iterable of FileDescriptorProto to build messages out of.

+  Returns:
+    A dictionary mapping proto names to the message classes. This will
+    include any dependent messages as well as any messages defined in the
+    same file as a specified message.
+  """
+  # The cpp implementation of the protocol buffer library requires adding
+  # the messages in topological order of the dependency graph.
+  file_by_name = {file_proto.name: file_proto for file_proto in file_protos}
+  def _AddFile(file_proto):
+    for dependency in file_proto.dependency:
+      if dependency in file_by_name:
+        # Remove from elements to be visited, in order to cut cycles.
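+        # (The dependency is popped before the recursive call, so a
+        # dependency cycle cannot revisit it, and each file reaches the
+        # pool only after all of its dependencies.)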
+ _AddFile(file_by_name.pop(dependency)) + _FACTORY.pool.Add(file_proto) + while file_by_name: + _AddFile(file_by_name.popitem()[1]) + return _FACTORY.GetMessages([file_proto.name for file_proto in file_protos]) diff --git a/env/lib/python3.8/site-packages/google/protobuf/proto_builder.py b/env/lib/python3.8/site-packages/google/protobuf/proto_builder.py new file mode 100644 index 00000000..a4667ce6 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/proto_builder.py @@ -0,0 +1,134 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Dynamic Protobuf class creator.""" + +from collections import OrderedDict +import hashlib +import os + +from google.protobuf import descriptor_pb2 +from google.protobuf import descriptor +from google.protobuf import message_factory + + +def _GetMessageFromFactory(factory, full_name): + """Get a proto class from the MessageFactory by name. + + Args: + factory: a MessageFactory instance. + full_name: str, the fully qualified name of the proto type. + Returns: + A class, for the type identified by full_name. + Raises: + KeyError, if the proto is not found in the factory's descriptor pool. + """ + proto_descriptor = factory.pool.FindMessageTypeByName(full_name) + proto_cls = factory.GetPrototype(proto_descriptor) + return proto_cls + + +def MakeSimpleProtoClass(fields, full_name=None, pool=None): + """Create a Protobuf class whose fields are basic types. + + Note: this doesn't validate field names! + + Args: + fields: dict of {name: field_type} mappings for each field in the proto. If + this is an OrderedDict the order will be maintained, otherwise the + fields will be sorted by name. + full_name: optional str, the fully-qualified name of the proto type. + pool: optional DescriptorPool instance. + Returns: + a class, the new protobuf class with a FileDescriptor. 
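+
+  Example (illustrative; field types are descriptor_pb2 enum values):
+    fields = OrderedDict([
+        ('foo', descriptor_pb2.FieldDescriptorProto.TYPE_INT64),
+        ('bar', descriptor_pb2.FieldDescriptorProto.TYPE_STRING),
+    ])
+    proto_cls = MakeSimpleProtoClass(fields, full_name='pkg.SampleMessage')
+    msg = proto_cls(foo=1, bar='x')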
+ """ + factory = message_factory.MessageFactory(pool=pool) + + if full_name is not None: + try: + proto_cls = _GetMessageFromFactory(factory, full_name) + return proto_cls + except KeyError: + # The factory's DescriptorPool doesn't know about this class yet. + pass + + # Get a list of (name, field_type) tuples from the fields dict. If fields was + # an OrderedDict we keep the order, but otherwise we sort the field to ensure + # consistent ordering. + field_items = fields.items() + if not isinstance(fields, OrderedDict): + field_items = sorted(field_items) + + # Use a consistent file name that is unlikely to conflict with any imported + # proto files. + fields_hash = hashlib.sha1() + for f_name, f_type in field_items: + fields_hash.update(f_name.encode('utf-8')) + fields_hash.update(str(f_type).encode('utf-8')) + proto_file_name = fields_hash.hexdigest() + '.proto' + + # If the proto is anonymous, use the same hash to name it. + if full_name is None: + full_name = ('net.proto2.python.public.proto_builder.AnonymousProto_' + + fields_hash.hexdigest()) + try: + proto_cls = _GetMessageFromFactory(factory, full_name) + return proto_cls + except KeyError: + # The factory's DescriptorPool doesn't know about this class yet. + pass + + # This is the first time we see this proto: add a new descriptor to the pool. + factory.pool.Add( + _MakeFileDescriptorProto(proto_file_name, full_name, field_items)) + return _GetMessageFromFactory(factory, full_name) + + +def _MakeFileDescriptorProto(proto_file_name, full_name, field_items): + """Populate FileDescriptorProto for MessageFactory's DescriptorPool.""" + package, name = full_name.rsplit('.', 1) + file_proto = descriptor_pb2.FileDescriptorProto() + file_proto.name = os.path.join(package.replace('.', '/'), proto_file_name) + file_proto.package = package + desc_proto = file_proto.message_type.add() + desc_proto.name = name + for f_number, (f_name, f_type) in enumerate(field_items, 1): + field_proto = desc_proto.field.add() + field_proto.name = f_name + # # If the number falls in the reserved range, reassign it to the correct + # # number after the range. + if f_number >= descriptor.FieldDescriptor.FIRST_RESERVED_FIELD_NUMBER: + f_number += ( + descriptor.FieldDescriptor.LAST_RESERVED_FIELD_NUMBER - + descriptor.FieldDescriptor.FIRST_RESERVED_FIELD_NUMBER + 1) + field_proto.number = f_number + field_proto.label = descriptor_pb2.FieldDescriptorProto.LABEL_OPTIONAL + field_proto.type = f_type + return file_proto diff --git a/env/lib/python3.8/site-packages/google/protobuf/pyext/__init__.py b/env/lib/python3.8/site-packages/google/protobuf/pyext/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/google/protobuf/pyext/_message.cpython-38-x86_64-linux-gnu.so b/env/lib/python3.8/site-packages/google/protobuf/pyext/_message.cpython-38-x86_64-linux-gnu.so new file mode 100644 index 00000000..085a26fc Binary files /dev/null and b/env/lib/python3.8/site-packages/google/protobuf/pyext/_message.cpython-38-x86_64-linux-gnu.so differ diff --git a/env/lib/python3.8/site-packages/google/protobuf/pyext/cpp_message.py b/env/lib/python3.8/site-packages/google/protobuf/pyext/cpp_message.py new file mode 100644 index 00000000..fc8eb32d --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/pyext/cpp_message.py @@ -0,0 +1,65 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. 
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

+"""Protocol message implementation hooks for C++ implementation.

+Contains helper functions used to create protocol message classes from
+Descriptor objects at runtime backed by the protocol buffer C++ API.
+"""

+__author__ = 'tibell@google.com (Johan Tibell)'

+from google.protobuf.pyext import _message


+class GeneratedProtocolMessageType(_message.MessageMeta):

+  """Metaclass for protocol message classes created at runtime from Descriptors.

+  The protocol compiler currently uses this metaclass to create protocol
+  message classes at runtime. Clients can also manually create their own
+  classes at runtime, as in this example:

+    mydescriptor = Descriptor(.....)
+    factory = symbol_database.Default()
+    factory.pool.AddDescriptor(mydescriptor)
+    MyProtoClass = factory.GetPrototype(mydescriptor)
+    myproto_instance = MyProtoClass()
+    myproto_instance.foo_field = 23
+    ...

+  The above example will not work for nested types. If you wish to include
+  them, use reflection.MakeClass() instead of manually instantiating the class
+  in order to create the appropriate class structure.
+  """

+  # Must be consistent with the protocol-compiler code in
+  # proto2/compiler/internal/generator.*.
+  _DESCRIPTOR_KEY = 'DESCRIPTOR'
diff --git a/env/lib/python3.8/site-packages/google/protobuf/reflection.py b/env/lib/python3.8/site-packages/google/protobuf/reflection.py
new file mode 100644
index 00000000..81e18859
--- /dev/null
+++ b/env/lib/python3.8/site-packages/google/protobuf/reflection.py
@@ -0,0 +1,95 @@
+# Protocol Buffers - Google's data interchange format
+# Copyright 2008 Google Inc. All rights reserved.
+# https://developers.google.com/protocol-buffers/
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# This code is meant to work on Python 2.4 and above only. + +"""Contains a metaclass and helper functions used to create +protocol message classes from Descriptor objects at runtime. + +Recall that a metaclass is the "type" of a class. +(A class is to a metaclass what an instance is to a class.) + +In this case, we use the GeneratedProtocolMessageType metaclass +to inject all the useful functionality into the classes +output by the protocol compiler at compile-time. + +The upshot of all this is that the real implementation +details for ALL pure-Python protocol buffers are *here in +this file*. +""" + +__author__ = 'robinson@google.com (Will Robinson)' + + +from google.protobuf import message_factory +from google.protobuf import symbol_database + +# The type of all Message classes. +# Part of the public interface, but normally only used by message factories. +GeneratedProtocolMessageType = message_factory._GENERATED_PROTOCOL_MESSAGE_TYPE + +MESSAGE_CLASS_CACHE = {} + + +# Deprecated. Please NEVER use reflection.ParseMessage(). +def ParseMessage(descriptor, byte_str): + """Generate a new Message instance from this Descriptor and a byte string. + + DEPRECATED: ParseMessage is deprecated because it is using MakeClass(). + Please use MessageFactory.GetPrototype() instead. + + Args: + descriptor: Protobuf Descriptor object + byte_str: Serialized protocol buffer byte string + + Returns: + Newly created protobuf Message object. + """ + result_class = MakeClass(descriptor) + new_msg = result_class() + new_msg.ParseFromString(byte_str) + return new_msg + + +# Deprecated. Please NEVER use reflection.MakeClass(). +def MakeClass(descriptor): + """Construct a class object for a protobuf described by descriptor. + + DEPRECATED: use MessageFactory.GetPrototype() instead. + + Args: + descriptor: A descriptor.Descriptor object describing the protobuf. + Returns: + The Message class object described by the descriptor. + """ + # Original implementation leads to duplicate message classes, which won't play + # well with extensions. Message factory info is also missing. + # Redirect to message_factory. 
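+  # (symbol_database.Default() is a process-wide MessageFactory, so
+  # repeated MakeClass() calls with the same descriptor return the same
+  # cached class object.)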
+ return symbol_database.Default().GetPrototype(descriptor) diff --git a/env/lib/python3.8/site-packages/google/protobuf/service.py b/env/lib/python3.8/site-packages/google/protobuf/service.py new file mode 100644 index 00000000..56252463 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/service.py @@ -0,0 +1,228 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""DEPRECATED: Declares the RPC service interfaces. + +This module declares the abstract interfaces underlying proto2 RPC +services. These are intended to be independent of any particular RPC +implementation, so that proto2 services can be used on top of a variety +of implementations. Starting with version 2.3.0, RPC implementations should +not try to build on these, but should instead provide code generator plugins +which generate code specific to the particular RPC implementation. This way +the generated code can be more appropriate for the implementation in use +and can avoid unnecessary layers of indirection. +""" + +__author__ = 'petar@google.com (Petar Petrov)' + + +class RpcException(Exception): + """Exception raised on failed blocking RPC method call.""" + pass + + +class Service(object): + + """Abstract base interface for protocol-buffer-based RPC services. + + Services themselves are abstract classes (implemented either by servers or as + stubs), but they subclass this base interface. The methods of this + interface can be used to call the methods of the service without knowing + its exact type at compile time (analogous to the Message interface). + """ + + def GetDescriptor(): + """Retrieves this service's descriptor.""" + raise NotImplementedError + + def CallMethod(self, method_descriptor, rpc_controller, + request, done): + """Calls a method of the service specified by method_descriptor. + + If "done" is None then the call is blocking and the response + message will be returned directly. 
Otherwise the call is asynchronous
+    and "done" will later be called with the response value.

+    In the blocking case, RpcException will be raised on error.

+    Preconditions:

+    * method_descriptor.service == GetDescriptor
+    * request is of the exact same class as returned by
+      GetRequestClass(method).
+    * After the call has started, the request must not be modified.
+    * "rpc_controller" is of the correct type for the RPC implementation being
+      used by this Service.  For stubs, the "correct type" depends on the
+      RpcChannel which the stub is using.

+    Postconditions:

+    * "done" will be called when the method is complete.  This may be
+      before CallMethod() returns or it may be at some point in the future.
+    * If the RPC failed, the response value passed to "done" will be None.
+      Further details about the failure can be found by querying the
+      RpcController.
+    """
+    raise NotImplementedError

+  def GetRequestClass(self, method_descriptor):
+    """Returns the class of the request message for the specified method.

+    CallMethod() requires that the request is of a particular subclass of
+    Message. GetRequestClass() gets the default instance of this required
+    type.

+    Example:
+      method = service.GetDescriptor().FindMethodByName("Foo")
+      request = stub.GetRequestClass(method)()
+      request.ParseFromString(input)
+      service.CallMethod(method, rpc_controller, request, callback)
+    """
+    raise NotImplementedError

+  def GetResponseClass(self, method_descriptor):
+    """Returns the class of the response message for the specified method.

+    This method isn't really needed, as the RpcChannel's CallMethod constructs
+    the response protocol message. It's provided anyway in case it is useful
+    for the caller to know the response type in advance.
+    """
+    raise NotImplementedError


+class RpcController(object):

+  """An RpcController mediates a single method call.

+  The primary purpose of the controller is to provide a way to manipulate
+  settings specific to the RPC implementation and to find out about RPC-level
+  errors. The methods provided by the RpcController interface are intended
+  to be a "least common denominator" set of features which we expect all
+  implementations to support. Specific implementations may provide more
+  advanced features (e.g. deadline propagation).
+  """

+  # Client-side methods below

+  def Reset(self):
+    """Resets the RpcController to its initial state.

+    After the RpcController has been reset, it may be reused in
+    a new call. Must not be called while an RPC is in progress.
+    """
+    raise NotImplementedError

+  def Failed(self):
+    """Returns true if the call failed.

+    After a call has finished, returns true if the call failed.  The possible
+    reasons for failure depend on the RPC implementation.  Failed() must not
+    be called before a call has finished.  If Failed() returns true, the
+    contents of the response message are undefined.
+    """
+    raise NotImplementedError

+  def ErrorText(self):
+    """If Failed is true, returns a human-readable description of the error."""
+    raise NotImplementedError

+  def StartCancel(self):
+    """Initiate cancellation.

+    Advises the RPC system that the caller desires that the RPC call be
+    canceled.  The RPC system may cancel it immediately, may wait awhile and
+    then cancel it, or may not even cancel the call at all.  If the call is
+    canceled, the "done" callback will still be called and the RpcController
+    will indicate that the call failed at that time.
+ """ + raise NotImplementedError + + # Server-side methods below + + def SetFailed(self, reason): + """Sets a failure reason. + + Causes Failed() to return true on the client side. "reason" will be + incorporated into the message returned by ErrorText(). If you find + you need to return machine-readable information about failures, you + should incorporate it into your response protocol buffer and should + NOT call SetFailed(). + """ + raise NotImplementedError + + def IsCanceled(self): + """Checks if the client cancelled the RPC. + + If true, indicates that the client canceled the RPC, so the server may + as well give up on replying to it. The server should still call the + final "done" callback. + """ + raise NotImplementedError + + def NotifyOnCancel(self, callback): + """Sets a callback to invoke on cancel. + + Asks that the given callback be called when the RPC is canceled. The + callback will always be called exactly once. If the RPC completes without + being canceled, the callback will be called after completion. If the RPC + has already been canceled when NotifyOnCancel() is called, the callback + will be called immediately. + + NotifyOnCancel() must be called no more than once per request. + """ + raise NotImplementedError + + +class RpcChannel(object): + + """Abstract interface for an RPC channel. + + An RpcChannel represents a communication line to a service which can be used + to call that service's methods. The service may be running on another + machine. Normally, you should not use an RpcChannel directly, but instead + construct a stub {@link Service} wrapping it. Example: + + Example: + RpcChannel channel = rpcImpl.Channel("remotehost.example.com:1234") + RpcController controller = rpcImpl.Controller() + MyService service = MyService_Stub(channel) + service.MyMethod(controller, request, callback) + """ + + def CallMethod(self, method_descriptor, rpc_controller, + request, response_class, done): + """Calls the method identified by the descriptor. + + Call the given method of the remote service. The signature of this + procedure looks the same as Service.CallMethod(), but the requirements + are less strict in one important way: the request object doesn't have to + be of any specific class as long as its descriptor is method.input_type. + """ + raise NotImplementedError diff --git a/env/lib/python3.8/site-packages/google/protobuf/service_reflection.py b/env/lib/python3.8/site-packages/google/protobuf/service_reflection.py new file mode 100644 index 00000000..f82ab714 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/service_reflection.py @@ -0,0 +1,295 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Contains metaclasses used to create protocol service and service stub +classes from ServiceDescriptor objects at runtime. + +The GeneratedServiceType and GeneratedServiceStubType metaclasses are used to +inject all useful functionality into the classes output by the protocol +compiler at compile-time. +""" + +__author__ = 'petar@google.com (Petar Petrov)' + + +class GeneratedServiceType(type): + + """Metaclass for service classes created at runtime from ServiceDescriptors. + + Implementations for all methods described in the Service class are added here + by this class. We also create properties to allow getting/setting all fields + in the protocol message. + + The protocol compiler currently uses this metaclass to create protocol service + classes at runtime. Clients can also manually create their own classes at + runtime, as in this example:: + + mydescriptor = ServiceDescriptor(.....) + class MyProtoService(service.Service): + __metaclass__ = GeneratedServiceType + DESCRIPTOR = mydescriptor + myservice_instance = MyProtoService() + # ... + """ + + _DESCRIPTOR_KEY = 'DESCRIPTOR' + + def __init__(cls, name, bases, dictionary): + """Creates a message service class. + + Args: + name: Name of the class (ignored, but required by the metaclass + protocol). + bases: Base classes of the class being constructed. + dictionary: The class dictionary of the class being constructed. + dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object + describing this protocol service type. + """ + # Don't do anything if this class doesn't have a descriptor. This happens + # when a service class is subclassed. + if GeneratedServiceType._DESCRIPTOR_KEY not in dictionary: + return + + descriptor = dictionary[GeneratedServiceType._DESCRIPTOR_KEY] + service_builder = _ServiceBuilder(descriptor) + service_builder.BuildService(cls) + cls.DESCRIPTOR = descriptor + + +class GeneratedServiceStubType(GeneratedServiceType): + + """Metaclass for service stubs created at runtime from ServiceDescriptors. + + This class has similar responsibilities as GeneratedServiceType, except that + it creates the service stub classes. + """ + + _DESCRIPTOR_KEY = 'DESCRIPTOR' + + def __init__(cls, name, bases, dictionary): + """Creates a message service stub class. + + Args: + name: Name of the class (ignored, here). + bases: Base classes of the class being constructed. + dictionary: The class dictionary of the class being constructed. + dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object + describing this protocol service type. + """ + super(GeneratedServiceStubType, cls).__init__(name, bases, dictionary) + # Don't do anything if this class doesn't have a descriptor. This happens + # when a service stub is subclassed. 
+ if GeneratedServiceStubType._DESCRIPTOR_KEY not in dictionary: + return + + descriptor = dictionary[GeneratedServiceStubType._DESCRIPTOR_KEY] + service_stub_builder = _ServiceStubBuilder(descriptor) + service_stub_builder.BuildServiceStub(cls) + + +class _ServiceBuilder(object): + + """This class constructs a protocol service class using a service descriptor. + + Given a service descriptor, this class constructs a class that represents + the specified service descriptor. One service builder instance constructs + exactly one service class. That means all instances of that class share the + same builder. + """ + + def __init__(self, service_descriptor): + """Initializes an instance of the service class builder. + + Args: + service_descriptor: ServiceDescriptor to use when constructing the + service class. + """ + self.descriptor = service_descriptor + + def BuildService(builder, cls): + """Constructs the service class. + + Args: + cls: The class that will be constructed. + """ + + # CallMethod needs to operate with an instance of the Service class. This + # internal wrapper function exists only to be able to pass the service + # instance to the method that does the real CallMethod work. + # Making sure to use exact argument names from the abstract interface in + # service.py to match the type signature + def _WrapCallMethod(self, method_descriptor, rpc_controller, request, done): + return builder._CallMethod(self, method_descriptor, rpc_controller, + request, done) + + def _WrapGetRequestClass(self, method_descriptor): + return builder._GetRequestClass(method_descriptor) + + def _WrapGetResponseClass(self, method_descriptor): + return builder._GetResponseClass(method_descriptor) + + builder.cls = cls + cls.CallMethod = _WrapCallMethod + cls.GetDescriptor = staticmethod(lambda: builder.descriptor) + cls.GetDescriptor.__doc__ = 'Returns the service descriptor.' + cls.GetRequestClass = _WrapGetRequestClass + cls.GetResponseClass = _WrapGetResponseClass + for method in builder.descriptor.methods: + setattr(cls, method.name, builder._GenerateNonImplementedMethod(method)) + + def _CallMethod(self, srvc, method_descriptor, + rpc_controller, request, callback): + """Calls the method described by a given method descriptor. + + Args: + srvc: Instance of the service for which this method is called. + method_descriptor: Descriptor that represent the method to call. + rpc_controller: RPC controller to use for this method's execution. + request: Request protocol message. + callback: A callback to invoke after the method has completed. + """ + if method_descriptor.containing_service != self.descriptor: + raise RuntimeError( + 'CallMethod() given method descriptor for wrong service type.') + method = getattr(srvc, method_descriptor.name) + return method(rpc_controller, request, callback) + + def _GetRequestClass(self, method_descriptor): + """Returns the class of the request protocol message. + + Args: + method_descriptor: Descriptor of the method for which to return the + request protocol message class. + + Returns: + A class that represents the input protocol message of the specified + method. + """ + if method_descriptor.containing_service != self.descriptor: + raise RuntimeError( + 'GetRequestClass() given method descriptor for wrong service type.') + return method_descriptor.input_type._concrete_class + + def _GetResponseClass(self, method_descriptor): + """Returns the class of the response protocol message. 
+ + Args: + method_descriptor: Descriptor of the method for which to return the + response protocol message class. + + Returns: + A class that represents the output protocol message of the specified + method. + """ + if method_descriptor.containing_service != self.descriptor: + raise RuntimeError( + 'GetResponseClass() given method descriptor for wrong service type.') + return method_descriptor.output_type._concrete_class + + def _GenerateNonImplementedMethod(self, method): + """Generates and returns a method that can be set for a service methods. + + Args: + method: Descriptor of the service method for which a method is to be + generated. + + Returns: + A method that can be added to the service class. + """ + return lambda inst, rpc_controller, request, callback: ( + self._NonImplementedMethod(method.name, rpc_controller, callback)) + + def _NonImplementedMethod(self, method_name, rpc_controller, callback): + """The body of all methods in the generated service class. + + Args: + method_name: Name of the method being executed. + rpc_controller: RPC controller used to execute this method. + callback: A callback which will be invoked when the method finishes. + """ + rpc_controller.SetFailed('Method %s not implemented.' % method_name) + callback(None) + + +class _ServiceStubBuilder(object): + + """Constructs a protocol service stub class using a service descriptor. + + Given a service descriptor, this class constructs a suitable stub class. + A stub is just a type-safe wrapper around an RpcChannel which emulates a + local implementation of the service. + + One service stub builder instance constructs exactly one class. It means all + instances of that class share the same service stub builder. + """ + + def __init__(self, service_descriptor): + """Initializes an instance of the service stub class builder. + + Args: + service_descriptor: ServiceDescriptor to use when constructing the + stub class. + """ + self.descriptor = service_descriptor + + def BuildServiceStub(self, cls): + """Constructs the stub class. + + Args: + cls: The class that will be constructed. + """ + + def _ServiceStubInit(stub, rpc_channel): + stub.rpc_channel = rpc_channel + self.cls = cls + cls.__init__ = _ServiceStubInit + for method in self.descriptor.methods: + setattr(cls, method.name, self._GenerateStubMethod(method)) + + def _GenerateStubMethod(self, method): + return (lambda inst, rpc_controller, request, callback=None: + self._StubMethod(inst, method, rpc_controller, request, callback)) + + def _StubMethod(self, stub, method_descriptor, + rpc_controller, request, callback): + """The body of all service methods in the generated stub class. + + Args: + stub: Stub instance. + method_descriptor: Descriptor of the invoked method. + rpc_controller: Rpc controller to execute the method. + request: Request protocol message. + callback: A callback to execute when the method finishes. + Returns: + Response message (in case of blocking call). + """ + return stub.rpc_channel.CallMethod( + method_descriptor, rpc_controller, request, + method_descriptor.output_type._concrete_class, callback) diff --git a/env/lib/python3.8/site-packages/google/protobuf/source_context_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/source_context_pb2.py new file mode 100644 index 00000000..30cca2e0 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/source_context_pb2.py @@ -0,0 +1,26 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
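# A minimal sketch of how the service classes above are wired together,
# following the pattern in the RpcChannel docstring: a generated stub wraps
# a channel, and the channel's CallMethod() forwards each call to a
# transport. Here the "transport" is just a local Service instance, and
# `search_pb2` with its `SearchService` / `SearchService_Stub` classes is a
# hypothetical generated module, used only to illustrate the calling pattern.
from google.protobuf import service


class LoopbackChannel(service.RpcChannel):
  """Toy channel that dispatches directly to a local service object."""

  def __init__(self, impl):
    self._impl = impl

  def CallMethod(self, method_descriptor, rpc_controller, request,
                 response_class, done):
    # Service.CallMethod (installed by _ServiceBuilder above) takes the same
    # arguments minus the response class and dispatches to the method
    # implementation named by the descriptor.
    return self._impl.CallMethod(method_descriptor, rpc_controller,
                                 request, done)


# Hypothetical usage:
#   impl = MySearchService()  # subclasses the generated search_pb2 service
#   stub = search_pb2.SearchService_Stub(LoopbackChannel(impl))
#   stub.Search(rpc_controller, request, callback)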
+# source: google/protobuf/source_context.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n$google/protobuf/source_context.proto\x12\x0fgoogle.protobuf\"\"\n\rSourceContext\x12\x11\n\tfile_name\x18\x01 \x01(\tB\x8a\x01\n\x13\x63om.google.protobufB\x12SourceContextProtoP\x01Z6google.golang.org/protobuf/types/known/sourcecontextpb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.source_context_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\022SourceContextProtoP\001Z6google.golang.org/protobuf/types/known/sourcecontextpb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' + _SOURCECONTEXT._serialized_start=57 + _SOURCECONTEXT._serialized_end=91 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/struct_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/struct_pb2.py new file mode 100644 index 00000000..149728ca --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/struct_pb2.py @@ -0,0 +1,36 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/struct.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1cgoogle/protobuf/struct.proto\x12\x0fgoogle.protobuf\"\x84\x01\n\x06Struct\x12\x33\n\x06\x66ields\x18\x01 \x03(\x0b\x32#.google.protobuf.Struct.FieldsEntry\x1a\x45\n\x0b\x46ieldsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12%\n\x05value\x18\x02 \x01(\x0b\x32\x16.google.protobuf.Value:\x02\x38\x01\"\xea\x01\n\x05Value\x12\x30\n\nnull_value\x18\x01 \x01(\x0e\x32\x1a.google.protobuf.NullValueH\x00\x12\x16\n\x0cnumber_value\x18\x02 \x01(\x01H\x00\x12\x16\n\x0cstring_value\x18\x03 \x01(\tH\x00\x12\x14\n\nbool_value\x18\x04 \x01(\x08H\x00\x12/\n\x0cstruct_value\x18\x05 \x01(\x0b\x32\x17.google.protobuf.StructH\x00\x12\x30\n\nlist_value\x18\x06 \x01(\x0b\x32\x1a.google.protobuf.ListValueH\x00\x42\x06\n\x04kind\"3\n\tListValue\x12&\n\x06values\x18\x01 \x03(\x0b\x32\x16.google.protobuf.Value*\x1b\n\tNullValue\x12\x0e\n\nNULL_VALUE\x10\x00\x42\x7f\n\x13\x63om.google.protobufB\x0bStructProtoP\x01Z/google.golang.org/protobuf/types/known/structpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.struct_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = 
b'\n\023com.google.protobufB\013StructProtoP\001Z/google.golang.org/protobuf/types/known/structpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' + _STRUCT_FIELDSENTRY._options = None + _STRUCT_FIELDSENTRY._serialized_options = b'8\001' + _NULLVALUE._serialized_start=474 + _NULLVALUE._serialized_end=501 + _STRUCT._serialized_start=50 + _STRUCT._serialized_end=182 + _STRUCT_FIELDSENTRY._serialized_start=113 + _STRUCT_FIELDSENTRY._serialized_end=182 + _VALUE._serialized_start=185 + _VALUE._serialized_end=419 + _LISTVALUE._serialized_start=421 + _LISTVALUE._serialized_end=472 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/symbol_database.py b/env/lib/python3.8/site-packages/google/protobuf/symbol_database.py new file mode 100644 index 00000000..fdcf8cf0 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/symbol_database.py @@ -0,0 +1,194 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""A database of Python protocol buffer generated symbols. + +SymbolDatabase is the MessageFactory for messages generated at compile time, +and makes it easy to create new instances of a registered type, given only the +type's protocol buffer symbol name. + +Example usage:: + + db = symbol_database.SymbolDatabase() + + # Register symbols of interest, from one or multiple files. 
+  db.RegisterFileDescriptor(my_proto_pb2.DESCRIPTOR)
+  db.RegisterMessage(my_proto_pb2.MyMessage)
+  db.RegisterEnumDescriptor(my_proto_pb2.MyEnum.DESCRIPTOR)
+
+  # The database can be used as a MessageFactory, to generate types based on
+  # their name:
+  types = db.GetMessages(['my_proto.proto'])
+  my_message_instance = types['MyMessage']()
+
+  # The database's underlying descriptor pool can be queried, so it's not
+  # necessary to know a type's filename to be able to generate it:
+  filename = db.pool.FindFileContainingSymbol('MyMessage')
+  my_message_instance = db.GetMessages([filename])['MyMessage']()
+
+  # This functionality is also provided directly via a convenience method:
+  my_message_instance = db.GetSymbol('MyMessage')()
+"""
+
+
+from google.protobuf.internal import api_implementation
+from google.protobuf import descriptor_pool
+from google.protobuf import message_factory
+
+
+class SymbolDatabase(message_factory.MessageFactory):
+  """A database of Python generated symbols."""
+
+  def RegisterMessage(self, message):
+    """Registers the given message type in the local database.
+
+    Calls to GetSymbol() and GetMessages() will return messages registered here.
+
+    Args:
+      message: A :class:`google.protobuf.message.Message` subclass (or
+        instance); its descriptor will be registered.
+
+    Returns:
+      The provided message.
+    """
+
+    desc = message.DESCRIPTOR
+    self._classes[desc] = message
+    self.RegisterMessageDescriptor(desc)
+    return message
+
+  def RegisterMessageDescriptor(self, message_descriptor):
+    """Registers the given message descriptor in the local database.
+
+    Args:
+      message_descriptor (Descriptor): the message descriptor to add.
+    """
+    if api_implementation.Type() == 'python':
+      # pylint: disable=protected-access
+      self.pool._AddDescriptor(message_descriptor)
+
+  def RegisterEnumDescriptor(self, enum_descriptor):
+    """Registers the given enum descriptor in the local database.
+
+    Args:
+      enum_descriptor (EnumDescriptor): The enum descriptor to register.
+
+    Returns:
+      EnumDescriptor: The provided descriptor.
+    """
+    if api_implementation.Type() == 'python':
+      # pylint: disable=protected-access
+      self.pool._AddEnumDescriptor(enum_descriptor)
+    return enum_descriptor
+
+  def RegisterServiceDescriptor(self, service_descriptor):
+    """Registers the given service descriptor in the local database.
+
+    Args:
+      service_descriptor (ServiceDescriptor): the service descriptor to
+        register.
+    """
+    if api_implementation.Type() == 'python':
+      # pylint: disable=protected-access
+      self.pool._AddServiceDescriptor(service_descriptor)
+
+  def RegisterFileDescriptor(self, file_descriptor):
+    """Registers the given file descriptor in the local database.
+
+    Args:
+      file_descriptor (FileDescriptor): The file descriptor to register.
+    """
+    if api_implementation.Type() == 'python':
+      # pylint: disable=protected-access
+      self.pool._InternalAddFileDescriptor(file_descriptor)
+
+  def GetSymbol(self, symbol):
+    """Tries to find a symbol in the local database.
+
+    Currently, this method only returns message.Message instances; however, it
+    may be extended in the future to support other symbol types.
+
+    Args:
+      symbol (str): a protocol buffer symbol.
+
+    Returns:
+      A Python class corresponding to the symbol.
+
+    Raises:
+      KeyError: if the symbol could not be found.
+    """
+
+    return self._classes[self.pool.FindMessageTypeByName(symbol)]
+
+  def GetMessages(self, files):
+    # TODO(amauryfa): Fix the differences with MessageFactory.
+    """Gets all registered messages from a specified file.
+ + Only messages already created and registered will be returned; (this is the + case for imported _pb2 modules) + But unlike MessageFactory, this version also returns already defined nested + messages, but does not register any message extensions. + + Args: + files (list[str]): The file names to extract messages from. + + Returns: + A dictionary mapping proto names to the message classes. + + Raises: + KeyError: if a file could not be found. + """ + + def _GetAllMessages(desc): + """Walk a message Descriptor and recursively yields all message names.""" + yield desc + for msg_desc in desc.nested_types: + for nested_desc in _GetAllMessages(msg_desc): + yield nested_desc + + result = {} + for file_name in files: + file_desc = self.pool.FindFileByName(file_name) + for msg_desc in file_desc.message_types_by_name.values(): + for desc in _GetAllMessages(msg_desc): + try: + result[desc.full_name] = self._classes[desc] + except KeyError: + # This descriptor has no registered class, skip it. + pass + return result + + +_DEFAULT = SymbolDatabase(pool=descriptor_pool.Default()) + + +def Default(): + """Returns the default SymbolDatabase.""" + return _DEFAULT diff --git a/env/lib/python3.8/site-packages/google/protobuf/text_encoding.py b/env/lib/python3.8/site-packages/google/protobuf/text_encoding.py new file mode 100644 index 00000000..759cf11f --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/text_encoding.py @@ -0,0 +1,110 @@ +# Protocol Buffers - Google's data interchange format +# Copyright 2008 Google Inc. All rights reserved. +# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
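# A short, runnable sketch of the SymbolDatabase lookups documented above.
# It uses the bundled well-known type google.protobuf.Struct so that no
# user-defined .proto is needed, and assumes the generated module registers
# its classes in the default database on import (the pure-Python path shown
# in this file).
from google.protobuf import struct_pb2  # import registers Struct
from google.protobuf import symbol_database

db = symbol_database.Default()

# Resolve a message class by its fully-qualified symbol name...
struct_cls = db.GetSymbol('google.protobuf.Struct')

# ...or fetch every registered message class defined in a .proto file.
messages = db.GetMessages(['google/protobuf/struct.proto'])
assert messages['google.protobuf.Struct'] is struct_cls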
+
+"""Encoding related utilities."""
+import re
+
+_cescape_chr_to_symbol_map = {}
+_cescape_chr_to_symbol_map[9] = r'\t'  # optional escape
+_cescape_chr_to_symbol_map[10] = r'\n'  # optional escape
+_cescape_chr_to_symbol_map[13] = r'\r'  # optional escape
+_cescape_chr_to_symbol_map[34] = r'\"'  # necessary escape
+_cescape_chr_to_symbol_map[39] = r"\'"  # optional escape
+_cescape_chr_to_symbol_map[92] = r'\\'  # necessary escape
+
+# Lookup table for unicode
+_cescape_unicode_to_str = [chr(i) for i in range(0, 256)]
+for byte, string in _cescape_chr_to_symbol_map.items():
+  _cescape_unicode_to_str[byte] = string
+
+# Lookup table for non-utf8, with necessary escapes at (o >= 127 or o < 32)
+_cescape_byte_to_str = ([r'\%03o' % i for i in range(0, 32)] +
+                        [chr(i) for i in range(32, 127)] +
+                        [r'\%03o' % i for i in range(127, 256)])
+for byte, string in _cescape_chr_to_symbol_map.items():
+  _cescape_byte_to_str[byte] = string
+del byte, string
+
+
+def CEscape(text, as_utf8):
+  # type: (...) -> str
+  """Escape a bytes string for use in a text protocol buffer.
+
+  Args:
+    text: A byte string to be escaped.
+    as_utf8: Specifies if result may contain non-ASCII characters.
+        In Python 3 this allows unescaped non-ASCII Unicode characters.
+        In Python 2 the return value will be valid UTF-8 rather than only ASCII.
+  Returns:
+    Escaped string (str).
+  """
+  # Python's text.encode() 'string_escape' or 'unicode_escape' codecs do not
+  # satisfy our needs; they encode unprintable characters using two-digit hex
+  # escapes whereas our C++ unescaping function allows hex escapes to be any
+  # length. So, "\0011".encode('string_escape') ends up being "\\x011", which
+  # will be decoded in C++ as a single-character string with char code 0x11.
+  text_is_unicode = isinstance(text, str)
+  if as_utf8 and text_is_unicode:
+    # We're already unicode, no processing beyond control char escapes.
+    return text.translate(_cescape_chr_to_symbol_map)
+  ord_ = ord if text_is_unicode else lambda x: x  # bytes iterate as ints.
+  if as_utf8:
+    return ''.join(_cescape_unicode_to_str[ord_(c)] for c in text)
+  return ''.join(_cescape_byte_to_str[ord_(c)] for c in text)
+
+
+_CUNESCAPE_HEX = re.compile(r'(\\+)x([0-9a-fA-F])(?![0-9a-fA-F])')
+
+
+def CUnescape(text):
+  # type: (str) -> bytes
+  """Unescape a text string with C-style escape sequences to UTF-8 bytes.
+
+  Args:
+    text: The data to parse in a str.
+  Returns:
+    A byte string.
+  """
+
+  def ReplaceHex(m):
+    # Only replace the match if the number of leading backslashes is odd, i.e.
+    # the slash itself is not escaped.
+    if len(m.group(1)) & 1:
+      return m.group(1) + 'x0' + m.group(2)
+    return m.group(0)
+
+  # This is required because the 'string_escape' encoding doesn't
+  # allow single-digit hex escapes (like '\xf').
+  result = _CUNESCAPE_HEX.sub(ReplaceHex, text)
+
+  return (result.encode('utf-8')  # Make it bytes to allow decode.
+          .decode('unicode_escape')
+          # Make it bytes again to return the proper type.
+          .encode('raw_unicode_escape'))
diff --git a/env/lib/python3.8/site-packages/google/protobuf/text_format.py b/env/lib/python3.8/site-packages/google/protobuf/text_format.py
new file mode 100644
index 00000000..412385c2
--- /dev/null
+++ b/env/lib/python3.8/site-packages/google/protobuf/text_format.py
@@ -0,0 +1,1795 @@
+# Protocol Buffers - Google's data interchange format
+# Copyright 2008 Google Inc. All rights reserved.
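# A quick round trip through the two helpers defined above: CEscape()
# renders raw bytes as C-style escaped text, and CUnescape() recovers the
# original bytes.
from google.protobuf import text_encoding

raw = b'\x00"hi"\n\xfe'
escaped = text_encoding.CEscape(raw, False)  # as_utf8=False: escape all bytes
assert escaped == r'\000\"hi\"\n\376'
assert text_encoding.CUnescape(escaped) == raw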
+# https://developers.google.com/protocol-buffers/ +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Contains routines for printing protocol messages in text format. + +Simple usage example:: + + # Create a proto object and serialize it to a text proto string. + message = my_proto_pb2.MyMessage(foo='bar') + text_proto = text_format.MessageToString(message) + + # Parse a text proto string. + message = text_format.Parse(text_proto, my_proto_pb2.MyMessage()) +""" + +__author__ = 'kenton@google.com (Kenton Varda)' + +# TODO(b/129989314) Import thread contention leads to test failures. 
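# A runnable sketch of the MessageToString()/Parse() round trip shown in the
# module docstring above, plus the Parse()-versus-Merge() overwrite rule.
# The well-known Struct/Value types stand in for a user-defined proto.
from google.protobuf import struct_pb2
from google.protobuf import text_format

msg = struct_pb2.Struct()
msg.fields['tool'].string_value = 'horus'
text = text_format.MessageToString(msg)
assert text_format.Parse(text, struct_pb2.Struct()) == msg

# Merge() lets a later value overwrite a singular field; Parse() refuses.
val = struct_pb2.Value()
text_format.Parse('number_value: 1', val)
text_format.Merge('number_value: 2', val)  # overwrite allowed
assert val.number_value == 2
try:
  text_format.Parse('number_value: 3', val)  # field already set
except text_format.ParseError:
  pass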
+import encodings.raw_unicode_escape # pylint: disable=unused-import +import encodings.unicode_escape # pylint: disable=unused-import +import io +import math +import re + +from google.protobuf.internal import decoder +from google.protobuf.internal import type_checkers +from google.protobuf import descriptor +from google.protobuf import text_encoding + +# pylint: disable=g-import-not-at-top +__all__ = ['MessageToString', 'Parse', 'PrintMessage', 'PrintField', + 'PrintFieldValue', 'Merge', 'MessageToBytes'] + +_INTEGER_CHECKERS = (type_checkers.Uint32ValueChecker(), + type_checkers.Int32ValueChecker(), + type_checkers.Uint64ValueChecker(), + type_checkers.Int64ValueChecker()) +_FLOAT_INFINITY = re.compile('-?inf(?:inity)?f?$', re.IGNORECASE) +_FLOAT_NAN = re.compile('nanf?$', re.IGNORECASE) +_QUOTES = frozenset(("'", '"')) +_ANY_FULL_TYPE_NAME = 'google.protobuf.Any' + + +class Error(Exception): + """Top-level module error for text_format.""" + + +class ParseError(Error): + """Thrown in case of text parsing or tokenizing error.""" + + def __init__(self, message=None, line=None, column=None): + if message is not None and line is not None: + loc = str(line) + if column is not None: + loc += ':{0}'.format(column) + message = '{0} : {1}'.format(loc, message) + if message is not None: + super(ParseError, self).__init__(message) + else: + super(ParseError, self).__init__() + self._line = line + self._column = column + + def GetLine(self): + return self._line + + def GetColumn(self): + return self._column + + +class TextWriter(object): + + def __init__(self, as_utf8): + self._writer = io.StringIO() + + def write(self, val): + return self._writer.write(val) + + def close(self): + return self._writer.close() + + def getvalue(self): + return self._writer.getvalue() + + +def MessageToString( + message, + as_utf8=False, + as_one_line=False, + use_short_repeated_primitives=False, + pointy_brackets=False, + use_index_order=False, + float_format=None, + double_format=None, + use_field_number=False, + descriptor_pool=None, + indent=0, + message_formatter=None, + print_unknown_fields=False, + force_colon=False): + # type: (...) -> str + """Convert protobuf message to text format. + + Double values can be formatted compactly with 15 digits of + precision (which is the most that IEEE 754 "double" can guarantee) + using double_format='.15g'. To ensure that converting to text and back to a + proto will result in an identical value, double_format='.17g' should be used. + + Args: + message: The protocol buffers message. + as_utf8: Return unescaped Unicode for non-ASCII characters. + In Python 3 actual Unicode characters may appear as is in strings. + In Python 2 the return value will be valid UTF-8 rather than only ASCII. + as_one_line: Don't introduce newlines between fields. + use_short_repeated_primitives: Use short repeated format for primitives. + pointy_brackets: If True, use angle brackets instead of curly braces for + nesting. + use_index_order: If True, fields of a proto message will be printed using + the order defined in source code instead of the field number, extensions + will be printed at the end of the message and their relative order is + determined by the extension number. By default, use the field number + order. + float_format (str): If set, use this to specify float field formatting + (per the "Format Specification Mini-Language"); otherwise, shortest float + that has same value in wire will be printed. Also affect double field + if double_format is not set but float_format is set. 
+ double_format (str): If set, use this to specify double field formatting + (per the "Format Specification Mini-Language"); if it is not set but + float_format is set, use float_format. Otherwise, use ``str()`` + use_field_number: If True, print field numbers instead of names. + descriptor_pool (DescriptorPool): Descriptor pool used to resolve Any types. + indent (int): The initial indent level, in terms of spaces, for pretty + print. + message_formatter (function(message, indent, as_one_line) -> unicode|None): + Custom formatter for selected sub-messages (usually based on message + type). Use to pretty print parts of the protobuf for easier diffing. + print_unknown_fields: If True, unknown fields will be printed. + force_colon: If set, a colon will be added after the field name even if the + field is a proto message. + + Returns: + str: A string of the text formatted protocol buffer message. + """ + out = TextWriter(as_utf8) + printer = _Printer( + out, + indent, + as_utf8, + as_one_line, + use_short_repeated_primitives, + pointy_brackets, + use_index_order, + float_format, + double_format, + use_field_number, + descriptor_pool, + message_formatter, + print_unknown_fields=print_unknown_fields, + force_colon=force_colon) + printer.PrintMessage(message) + result = out.getvalue() + out.close() + if as_one_line: + return result.rstrip() + return result + + +def MessageToBytes(message, **kwargs): + # type: (...) -> bytes + """Convert protobuf message to encoded text format. See MessageToString.""" + text = MessageToString(message, **kwargs) + if isinstance(text, bytes): + return text + codec = 'utf-8' if kwargs.get('as_utf8') else 'ascii' + return text.encode(codec) + + +def _IsMapEntry(field): + return (field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and + field.message_type.has_options and + field.message_type.GetOptions().map_entry) + + +def PrintMessage(message, + out, + indent=0, + as_utf8=False, + as_one_line=False, + use_short_repeated_primitives=False, + pointy_brackets=False, + use_index_order=False, + float_format=None, + double_format=None, + use_field_number=False, + descriptor_pool=None, + message_formatter=None, + print_unknown_fields=False, + force_colon=False): + printer = _Printer( + out=out, indent=indent, as_utf8=as_utf8, + as_one_line=as_one_line, + use_short_repeated_primitives=use_short_repeated_primitives, + pointy_brackets=pointy_brackets, + use_index_order=use_index_order, + float_format=float_format, + double_format=double_format, + use_field_number=use_field_number, + descriptor_pool=descriptor_pool, + message_formatter=message_formatter, + print_unknown_fields=print_unknown_fields, + force_colon=force_colon) + printer.PrintMessage(message) + + +def PrintField(field, + value, + out, + indent=0, + as_utf8=False, + as_one_line=False, + use_short_repeated_primitives=False, + pointy_brackets=False, + use_index_order=False, + float_format=None, + double_format=None, + message_formatter=None, + print_unknown_fields=False, + force_colon=False): + """Print a single field name/value pair.""" + printer = _Printer(out, indent, as_utf8, as_one_line, + use_short_repeated_primitives, pointy_brackets, + use_index_order, float_format, double_format, + message_formatter=message_formatter, + print_unknown_fields=print_unknown_fields, + force_colon=force_colon) + printer.PrintField(field, value) + + +def PrintFieldValue(field, + value, + out, + indent=0, + as_utf8=False, + as_one_line=False, + use_short_repeated_primitives=False, + pointy_brackets=False, + use_index_order=False, 
+ float_format=None, + double_format=None, + message_formatter=None, + print_unknown_fields=False, + force_colon=False): + """Print a single field value (not including name).""" + printer = _Printer(out, indent, as_utf8, as_one_line, + use_short_repeated_primitives, pointy_brackets, + use_index_order, float_format, double_format, + message_formatter=message_formatter, + print_unknown_fields=print_unknown_fields, + force_colon=force_colon) + printer.PrintFieldValue(field, value) + + +def _BuildMessageFromTypeName(type_name, descriptor_pool): + """Returns a protobuf message instance. + + Args: + type_name: Fully-qualified protobuf message type name string. + descriptor_pool: DescriptorPool instance. + + Returns: + A Message instance of type matching type_name, or None if the a Descriptor + wasn't found matching type_name. + """ + # pylint: disable=g-import-not-at-top + if descriptor_pool is None: + from google.protobuf import descriptor_pool as pool_mod + descriptor_pool = pool_mod.Default() + from google.protobuf import symbol_database + database = symbol_database.Default() + try: + message_descriptor = descriptor_pool.FindMessageTypeByName(type_name) + except KeyError: + return None + message_type = database.GetPrototype(message_descriptor) + return message_type() + + +# These values must match WireType enum in google/protobuf/wire_format.h. +WIRETYPE_LENGTH_DELIMITED = 2 +WIRETYPE_START_GROUP = 3 + + +class _Printer(object): + """Text format printer for protocol message.""" + + def __init__( + self, + out, + indent=0, + as_utf8=False, + as_one_line=False, + use_short_repeated_primitives=False, + pointy_brackets=False, + use_index_order=False, + float_format=None, + double_format=None, + use_field_number=False, + descriptor_pool=None, + message_formatter=None, + print_unknown_fields=False, + force_colon=False): + """Initialize the Printer. + + Double values can be formatted compactly with 15 digits of precision + (which is the most that IEEE 754 "double" can guarantee) using + double_format='.15g'. To ensure that converting to text and back to a proto + will result in an identical value, double_format='.17g' should be used. + + Args: + out: To record the text format result. + indent: The initial indent level for pretty print. + as_utf8: Return unescaped Unicode for non-ASCII characters. + In Python 3 actual Unicode characters may appear as is in strings. + In Python 2 the return value will be valid UTF-8 rather than ASCII. + as_one_line: Don't introduce newlines between fields. + use_short_repeated_primitives: Use short repeated format for primitives. + pointy_brackets: If True, use angle brackets instead of curly braces for + nesting. + use_index_order: If True, print fields of a proto message using the order + defined in source code instead of the field number. By default, use the + field number order. + float_format: If set, use this to specify float field formatting + (per the "Format Specification Mini-Language"); otherwise, shortest + float that has same value in wire will be printed. Also affect double + field if double_format is not set but float_format is set. + double_format: If set, use this to specify double field formatting + (per the "Format Specification Mini-Language"); if it is not set but + float_format is set, use float_format. Otherwise, str() is used. + use_field_number: If True, print field numbers instead of names. + descriptor_pool: A DescriptorPool used to resolve Any types. 
+ message_formatter: A function(message, indent, as_one_line): unicode|None + to custom format selected sub-messages (usually based on message type). + Use to pretty print parts of the protobuf for easier diffing. + print_unknown_fields: If True, unknown fields will be printed. + force_colon: If set, a colon will be added after the field name even if + the field is a proto message. + """ + self.out = out + self.indent = indent + self.as_utf8 = as_utf8 + self.as_one_line = as_one_line + self.use_short_repeated_primitives = use_short_repeated_primitives + self.pointy_brackets = pointy_brackets + self.use_index_order = use_index_order + self.float_format = float_format + if double_format is not None: + self.double_format = double_format + else: + self.double_format = float_format + self.use_field_number = use_field_number + self.descriptor_pool = descriptor_pool + self.message_formatter = message_formatter + self.print_unknown_fields = print_unknown_fields + self.force_colon = force_colon + + def _TryPrintAsAnyMessage(self, message): + """Serializes if message is a google.protobuf.Any field.""" + if '/' not in message.type_url: + return False + packed_message = _BuildMessageFromTypeName(message.TypeName(), + self.descriptor_pool) + if packed_message: + packed_message.MergeFromString(message.value) + colon = ':' if self.force_colon else '' + self.out.write('%s[%s]%s ' % (self.indent * ' ', message.type_url, colon)) + self._PrintMessageFieldValue(packed_message) + self.out.write(' ' if self.as_one_line else '\n') + return True + else: + return False + + def _TryCustomFormatMessage(self, message): + formatted = self.message_formatter(message, self.indent, self.as_one_line) + if formatted is None: + return False + + out = self.out + out.write(' ' * self.indent) + out.write(formatted) + out.write(' ' if self.as_one_line else '\n') + return True + + def PrintMessage(self, message): + """Convert protobuf message to text format. + + Args: + message: The protocol buffers message. + """ + if self.message_formatter and self._TryCustomFormatMessage(message): + return + if (message.DESCRIPTOR.full_name == _ANY_FULL_TYPE_NAME and + self._TryPrintAsAnyMessage(message)): + return + fields = message.ListFields() + if self.use_index_order: + fields.sort( + key=lambda x: x[0].number if x[0].is_extension else x[0].index) + for field, value in fields: + if _IsMapEntry(field): + for key in sorted(value): + # This is slow for maps with submessage entries because it copies the + # entire tree. Unfortunately this would take significant refactoring + # of this file to work around. + # + # TODO(haberman): refactor and optimize if this becomes an issue. 
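+          # (GetEntryClass() below returns the generated map-entry type, so
+          # each key/value pair is materialized as a standalone submessage
+          # purely so it can be fed through PrintField() like any repeated
+          # message field.)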
+ entry_submsg = value.GetEntryClass()(key=key, value=value[key]) + self.PrintField(field, entry_submsg) + elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED: + if (self.use_short_repeated_primitives + and field.cpp_type != descriptor.FieldDescriptor.CPPTYPE_MESSAGE + and field.cpp_type != descriptor.FieldDescriptor.CPPTYPE_STRING): + self._PrintShortRepeatedPrimitivesValue(field, value) + else: + for element in value: + self.PrintField(field, element) + else: + self.PrintField(field, value) + + if self.print_unknown_fields: + self._PrintUnknownFields(message.UnknownFields()) + + def _PrintUnknownFields(self, unknown_fields): + """Print unknown fields.""" + out = self.out + for field in unknown_fields: + out.write(' ' * self.indent) + out.write(str(field.field_number)) + if field.wire_type == WIRETYPE_START_GROUP: + if self.as_one_line: + out.write(' { ') + else: + out.write(' {\n') + self.indent += 2 + + self._PrintUnknownFields(field.data) + + if self.as_one_line: + out.write('} ') + else: + self.indent -= 2 + out.write(' ' * self.indent + '}\n') + elif field.wire_type == WIRETYPE_LENGTH_DELIMITED: + try: + # If this field is parseable as a Message, it is probably + # an embedded message. + # pylint: disable=protected-access + (embedded_unknown_message, pos) = decoder._DecodeUnknownFieldSet( + memoryview(field.data), 0, len(field.data)) + except Exception: # pylint: disable=broad-except + pos = 0 + + if pos == len(field.data): + if self.as_one_line: + out.write(' { ') + else: + out.write(' {\n') + self.indent += 2 + + self._PrintUnknownFields(embedded_unknown_message) + + if self.as_one_line: + out.write('} ') + else: + self.indent -= 2 + out.write(' ' * self.indent + '}\n') + else: + # A string or bytes field. self.as_utf8 may not work. + out.write(': \"') + out.write(text_encoding.CEscape(field.data, False)) + out.write('\" ' if self.as_one_line else '\"\n') + else: + # varint, fixed32, fixed64 + out.write(': ') + out.write(str(field.data)) + out.write(' ' if self.as_one_line else '\n') + + def _PrintFieldName(self, field): + """Print field name.""" + out = self.out + out.write(' ' * self.indent) + if self.use_field_number: + out.write(str(field.number)) + else: + if field.is_extension: + out.write('[') + if (field.containing_type.GetOptions().message_set_wire_format and + field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and + field.label == descriptor.FieldDescriptor.LABEL_OPTIONAL): + out.write(field.message_type.full_name) + else: + out.write(field.full_name) + out.write(']') + elif field.type == descriptor.FieldDescriptor.TYPE_GROUP: + # For groups, use the capitalized name. + out.write(field.message_type.name) + else: + out.write(field.name) + + if (self.force_colon or + field.cpp_type != descriptor.FieldDescriptor.CPPTYPE_MESSAGE): + # The colon is optional in this case, but our cross-language golden files + # don't include it. Here, the colon is only included if force_colon is + # set to True + out.write(':') + + def PrintField(self, field, value): + """Print a single field name/value pair.""" + self._PrintFieldName(field) + self.out.write(' ') + self.PrintFieldValue(field, value) + self.out.write(' ' if self.as_one_line else '\n') + + def _PrintShortRepeatedPrimitivesValue(self, field, value): + """"Prints short repeated primitives value.""" + # Note: this is called only when value has at least one element. 
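+    # (The unconditional value[-1] write after the loop below relies on that
+    # non-empty guarantee.)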
+ self._PrintFieldName(field) + self.out.write(' [') + for i in range(len(value) - 1): + self.PrintFieldValue(field, value[i]) + self.out.write(', ') + self.PrintFieldValue(field, value[-1]) + self.out.write(']') + self.out.write(' ' if self.as_one_line else '\n') + + def _PrintMessageFieldValue(self, value): + if self.pointy_brackets: + openb = '<' + closeb = '>' + else: + openb = '{' + closeb = '}' + + if self.as_one_line: + self.out.write('%s ' % openb) + self.PrintMessage(value) + self.out.write(closeb) + else: + self.out.write('%s\n' % openb) + self.indent += 2 + self.PrintMessage(value) + self.indent -= 2 + self.out.write(' ' * self.indent + closeb) + + def PrintFieldValue(self, field, value): + """Print a single field value (not including name). + + For repeated fields, the value should be a single element. + + Args: + field: The descriptor of the field to be printed. + value: The value of the field. + """ + out = self.out + if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: + self._PrintMessageFieldValue(value) + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM: + enum_value = field.enum_type.values_by_number.get(value, None) + if enum_value is not None: + out.write(enum_value.name) + else: + out.write(str(value)) + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING: + out.write('\"') + if isinstance(value, str) and not self.as_utf8: + out_value = value.encode('utf-8') + else: + out_value = value + if field.type == descriptor.FieldDescriptor.TYPE_BYTES: + # We always need to escape all binary data in TYPE_BYTES fields. + out_as_utf8 = False + else: + out_as_utf8 = self.as_utf8 + out.write(text_encoding.CEscape(out_value, out_as_utf8)) + out.write('\"') + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL: + if value: + out.write('true') + else: + out.write('false') + elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_FLOAT: + if self.float_format is not None: + out.write('{1:{0}}'.format(self.float_format, value)) + else: + if math.isnan(value): + out.write(str(value)) + else: + out.write(str(type_checkers.ToShortestFloat(value))) + elif (field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_DOUBLE and + self.double_format is not None): + out.write('{1:{0}}'.format(self.double_format, value)) + else: + out.write(str(value)) + + +def Parse(text, + message, + allow_unknown_extension=False, + allow_field_number=False, + descriptor_pool=None, + allow_unknown_field=False): + """Parses a text representation of a protocol message into a message. + + NOTE: for historical reasons this function does not clear the input + message. This is different from what the binary msg.ParseFrom(...) does. + If text contains a field already set in message, the value is appended if the + field is repeated. Otherwise, an error is raised. + + Example:: + + a = MyProto() + a.repeated_field.append('test') + b = MyProto() + + # Repeated fields are combined + text_format.Parse(repr(a), b) + text_format.Parse(repr(a), b) # repeated_field contains ["test", "test"] + + # Non-repeated fields cannot be overwritten + a.singular_field = 1 + b.singular_field = 2 + text_format.Parse(repr(a), b) # ParseError + + # Binary version: + b.ParseFromString(a.SerializeToString()) # repeated_field is now "test" + + Caller is responsible for clearing the message as needed. + + Args: + text (str): Message text representation. + message (Message): A protocol buffer message to merge into. 
+ allow_unknown_extension: if True, skip over missing extensions and keep + parsing + allow_field_number: if True, both field number and field name are allowed. + descriptor_pool (DescriptorPool): Descriptor pool used to resolve Any types. + allow_unknown_field: if True, skip over unknown field and keep + parsing. Avoid to use this option if possible. It may hide some + errors (e.g. spelling error on field name) + + Returns: + Message: The same message passed as argument. + + Raises: + ParseError: On text parsing problems. + """ + return ParseLines(text.split(b'\n' if isinstance(text, bytes) else u'\n'), + message, + allow_unknown_extension, + allow_field_number, + descriptor_pool=descriptor_pool, + allow_unknown_field=allow_unknown_field) + + +def Merge(text, + message, + allow_unknown_extension=False, + allow_field_number=False, + descriptor_pool=None, + allow_unknown_field=False): + """Parses a text representation of a protocol message into a message. + + Like Parse(), but allows repeated values for a non-repeated field, and uses + the last one. This means any non-repeated, top-level fields specified in text + replace those in the message. + + Args: + text (str): Message text representation. + message (Message): A protocol buffer message to merge into. + allow_unknown_extension: if True, skip over missing extensions and keep + parsing + allow_field_number: if True, both field number and field name are allowed. + descriptor_pool (DescriptorPool): Descriptor pool used to resolve Any types. + allow_unknown_field: if True, skip over unknown field and keep + parsing. Avoid to use this option if possible. It may hide some + errors (e.g. spelling error on field name) + + Returns: + Message: The same message passed as argument. + + Raises: + ParseError: On text parsing problems. + """ + return MergeLines( + text.split(b'\n' if isinstance(text, bytes) else u'\n'), + message, + allow_unknown_extension, + allow_field_number, + descriptor_pool=descriptor_pool, + allow_unknown_field=allow_unknown_field) + + +def ParseLines(lines, + message, + allow_unknown_extension=False, + allow_field_number=False, + descriptor_pool=None, + allow_unknown_field=False): + """Parses a text representation of a protocol message into a message. + + See Parse() for caveats. + + Args: + lines: An iterable of lines of a message's text representation. + message: A protocol buffer message to merge into. + allow_unknown_extension: if True, skip over missing extensions and keep + parsing + allow_field_number: if True, both field number and field name are allowed. + descriptor_pool: A DescriptorPool used to resolve Any types. + allow_unknown_field: if True, skip over unknown field and keep + parsing. Avoid to use this option if possible. It may hide some + errors (e.g. spelling error on field name) + + Returns: + The same message passed as argument. + + Raises: + ParseError: On text parsing problems. + """ + parser = _Parser(allow_unknown_extension, + allow_field_number, + descriptor_pool=descriptor_pool, + allow_unknown_field=allow_unknown_field) + return parser.ParseLines(lines, message) + + +def MergeLines(lines, + message, + allow_unknown_extension=False, + allow_field_number=False, + descriptor_pool=None, + allow_unknown_field=False): + """Parses a text representation of a protocol message into a message. + + See Merge() for more details. + + Args: + lines: An iterable of lines of a message's text representation. + message: A protocol buffer message to merge into. 
+ allow_unknown_extension: if True, skip over missing extensions and keep + parsing + allow_field_number: if True, both field number and field name are allowed. + descriptor_pool: A DescriptorPool used to resolve Any types. + allow_unknown_field: if True, skip over unknown field and keep + parsing. Avoid to use this option if possible. It may hide some + errors (e.g. spelling error on field name) + + Returns: + The same message passed as argument. + + Raises: + ParseError: On text parsing problems. + """ + parser = _Parser(allow_unknown_extension, + allow_field_number, + descriptor_pool=descriptor_pool, + allow_unknown_field=allow_unknown_field) + return parser.MergeLines(lines, message) + + +class _Parser(object): + """Text format parser for protocol message.""" + + def __init__(self, + allow_unknown_extension=False, + allow_field_number=False, + descriptor_pool=None, + allow_unknown_field=False): + self.allow_unknown_extension = allow_unknown_extension + self.allow_field_number = allow_field_number + self.descriptor_pool = descriptor_pool + self.allow_unknown_field = allow_unknown_field + + def ParseLines(self, lines, message): + """Parses a text representation of a protocol message into a message.""" + self._allow_multiple_scalars = False + self._ParseOrMerge(lines, message) + return message + + def MergeLines(self, lines, message): + """Merges a text representation of a protocol message into a message.""" + self._allow_multiple_scalars = True + self._ParseOrMerge(lines, message) + return message + + def _ParseOrMerge(self, lines, message): + """Converts a text representation of a protocol message into a message. + + Args: + lines: Lines of a message's text representation. + message: A protocol buffer message to merge into. + + Raises: + ParseError: On text parsing problems. + """ + # Tokenize expects native str lines. + str_lines = ( + line if isinstance(line, str) else line.decode('utf-8') + for line in lines) + tokenizer = Tokenizer(str_lines) + while not tokenizer.AtEnd(): + self._MergeField(tokenizer, message) + + def _MergeField(self, tokenizer, message): + """Merges a single protocol message field into a message. + + Args: + tokenizer: A tokenizer to parse the field name and values. + message: A protocol message to record the data. + + Raises: + ParseError: In case of text parsing problems. + """ + message_descriptor = message.DESCRIPTOR + if (message_descriptor.full_name == _ANY_FULL_TYPE_NAME and + tokenizer.TryConsume('[')): + type_url_prefix, packed_type_name = self._ConsumeAnyTypeUrl(tokenizer) + tokenizer.Consume(']') + tokenizer.TryConsume(':') + if tokenizer.TryConsume('<'): + expanded_any_end_token = '>' + else: + tokenizer.Consume('{') + expanded_any_end_token = '}' + expanded_any_sub_message = _BuildMessageFromTypeName(packed_type_name, + self.descriptor_pool) + if not expanded_any_sub_message: + raise ParseError('Type %s not found in descriptor pool' % + packed_type_name) + while not tokenizer.TryConsume(expanded_any_end_token): + if tokenizer.AtEnd(): + raise tokenizer.ParseErrorPreviousToken('Expected "%s".' 
% + (expanded_any_end_token,)) + self._MergeField(tokenizer, expanded_any_sub_message) + deterministic = False + + message.Pack(expanded_any_sub_message, + type_url_prefix=type_url_prefix, + deterministic=deterministic) + return + + if tokenizer.TryConsume('['): + name = [tokenizer.ConsumeIdentifier()] + while tokenizer.TryConsume('.'): + name.append(tokenizer.ConsumeIdentifier()) + name = '.'.join(name) + + if not message_descriptor.is_extendable: + raise tokenizer.ParseErrorPreviousToken( + 'Message type "%s" does not have extensions.' % + message_descriptor.full_name) + # pylint: disable=protected-access + field = message.Extensions._FindExtensionByName(name) + # pylint: enable=protected-access + + + if not field: + if self.allow_unknown_extension: + field = None + else: + raise tokenizer.ParseErrorPreviousToken( + 'Extension "%s" not registered. ' + 'Did you import the _pb2 module which defines it? ' + 'If you are trying to place the extension in the MessageSet ' + 'field of another message that is in an Any or MessageSet field, ' + 'that message\'s _pb2 module must be imported as well' % name) + elif message_descriptor != field.containing_type: + raise tokenizer.ParseErrorPreviousToken( + 'Extension "%s" does not extend message type "%s".' % + (name, message_descriptor.full_name)) + + tokenizer.Consume(']') + + else: + name = tokenizer.ConsumeIdentifierOrNumber() + if self.allow_field_number and name.isdigit(): + number = ParseInteger(name, True, True) + field = message_descriptor.fields_by_number.get(number, None) + if not field and message_descriptor.is_extendable: + field = message.Extensions._FindExtensionByNumber(number) + else: + field = message_descriptor.fields_by_name.get(name, None) + + # Group names are expected to be capitalized as they appear in the + # .proto file, which actually matches their type names, not their field + # names. + if not field: + field = message_descriptor.fields_by_name.get(name.lower(), None) + if field and field.type != descriptor.FieldDescriptor.TYPE_GROUP: + field = None + + if (field and field.type == descriptor.FieldDescriptor.TYPE_GROUP and + field.message_type.name != name): + field = None + + if not field and not self.allow_unknown_field: + raise tokenizer.ParseErrorPreviousToken( + 'Message type "%s" has no field named "%s".' % + (message_descriptor.full_name, name)) + + if field: + if not self._allow_multiple_scalars and field.containing_oneof: + # Check if there's a different field set in this oneof. + # Note that we ignore the case if the same field was set before, and we + # apply _allow_multiple_scalars to non-scalar fields as well. + which_oneof = message.WhichOneof(field.containing_oneof.name) + if which_oneof is not None and which_oneof != field.name: + raise tokenizer.ParseErrorPreviousToken( + 'Field "%s" is specified along with field "%s", another member ' + 'of oneof "%s" for message type "%s".' % + (field.name, which_oneof, field.containing_oneof.name, + message_descriptor.full_name)) + + if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: + tokenizer.TryConsume(':') + merger = self._MergeMessageField + else: + tokenizer.Consume(':') + merger = self._MergeScalarField + + if (field.label == descriptor.FieldDescriptor.LABEL_REPEATED and + tokenizer.TryConsume('[')): + # Short repeated format, e.g. 
"foo: [1, 2, 3]" + if not tokenizer.TryConsume(']'): + while True: + merger(tokenizer, message, field) + if tokenizer.TryConsume(']'): + break + tokenizer.Consume(',') + + else: + merger(tokenizer, message, field) + + else: # Proto field is unknown. + assert (self.allow_unknown_extension or self.allow_unknown_field) + _SkipFieldContents(tokenizer) + + # For historical reasons, fields may optionally be separated by commas or + # semicolons. + if not tokenizer.TryConsume(','): + tokenizer.TryConsume(';') + + + def _ConsumeAnyTypeUrl(self, tokenizer): + """Consumes a google.protobuf.Any type URL and returns the type name.""" + # Consume "type.googleapis.com/". + prefix = [tokenizer.ConsumeIdentifier()] + tokenizer.Consume('.') + prefix.append(tokenizer.ConsumeIdentifier()) + tokenizer.Consume('.') + prefix.append(tokenizer.ConsumeIdentifier()) + tokenizer.Consume('/') + # Consume the fully-qualified type name. + name = [tokenizer.ConsumeIdentifier()] + while tokenizer.TryConsume('.'): + name.append(tokenizer.ConsumeIdentifier()) + return '.'.join(prefix), '.'.join(name) + + def _MergeMessageField(self, tokenizer, message, field): + """Merges a single scalar field into a message. + + Args: + tokenizer: A tokenizer to parse the field value. + message: The message of which field is a member. + field: The descriptor of the field to be merged. + + Raises: + ParseError: In case of text parsing problems. + """ + is_map_entry = _IsMapEntry(field) + + if tokenizer.TryConsume('<'): + end_token = '>' + else: + tokenizer.Consume('{') + end_token = '}' + + if field.label == descriptor.FieldDescriptor.LABEL_REPEATED: + if field.is_extension: + sub_message = message.Extensions[field].add() + elif is_map_entry: + sub_message = getattr(message, field.name).GetEntryClass()() + else: + sub_message = getattr(message, field.name).add() + else: + if field.is_extension: + if (not self._allow_multiple_scalars and + message.HasExtension(field)): + raise tokenizer.ParseErrorPreviousToken( + 'Message type "%s" should not have multiple "%s" extensions.' % + (message.DESCRIPTOR.full_name, field.full_name)) + sub_message = message.Extensions[field] + else: + # Also apply _allow_multiple_scalars to message field. + # TODO(jieluo): Change to _allow_singular_overwrites. + if (not self._allow_multiple_scalars and + message.HasField(field.name)): + raise tokenizer.ParseErrorPreviousToken( + 'Message type "%s" should not have multiple "%s" fields.' % + (message.DESCRIPTOR.full_name, field.name)) + sub_message = getattr(message, field.name) + sub_message.SetInParent() + + while not tokenizer.TryConsume(end_token): + if tokenizer.AtEnd(): + raise tokenizer.ParseErrorPreviousToken('Expected "%s".' % (end_token,)) + self._MergeField(tokenizer, sub_message) + + if is_map_entry: + value_cpptype = field.message_type.fields_by_name['value'].cpp_type + if value_cpptype == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: + value = getattr(message, field.name)[sub_message.key] + value.CopyFrom(sub_message.value) + else: + getattr(message, field.name)[sub_message.key] = sub_message.value + + @staticmethod + def _IsProto3Syntax(message): + message_descriptor = message.DESCRIPTOR + return (hasattr(message_descriptor, 'syntax') and + message_descriptor.syntax == 'proto3') + + def _MergeScalarField(self, tokenizer, message, field): + """Merges a single scalar field into a message. + + Args: + tokenizer: A tokenizer to parse the field value. + message: A protocol message to record the data. + field: The descriptor of the field to be merged. 
+ + Raises: + ParseError: In case of text parsing problems. + RuntimeError: On runtime errors. + """ + _ = self.allow_unknown_extension + value = None + + if field.type in (descriptor.FieldDescriptor.TYPE_INT32, + descriptor.FieldDescriptor.TYPE_SINT32, + descriptor.FieldDescriptor.TYPE_SFIXED32): + value = _ConsumeInt32(tokenizer) + elif field.type in (descriptor.FieldDescriptor.TYPE_INT64, + descriptor.FieldDescriptor.TYPE_SINT64, + descriptor.FieldDescriptor.TYPE_SFIXED64): + value = _ConsumeInt64(tokenizer) + elif field.type in (descriptor.FieldDescriptor.TYPE_UINT32, + descriptor.FieldDescriptor.TYPE_FIXED32): + value = _ConsumeUint32(tokenizer) + elif field.type in (descriptor.FieldDescriptor.TYPE_UINT64, + descriptor.FieldDescriptor.TYPE_FIXED64): + value = _ConsumeUint64(tokenizer) + elif field.type in (descriptor.FieldDescriptor.TYPE_FLOAT, + descriptor.FieldDescriptor.TYPE_DOUBLE): + value = tokenizer.ConsumeFloat() + elif field.type == descriptor.FieldDescriptor.TYPE_BOOL: + value = tokenizer.ConsumeBool() + elif field.type == descriptor.FieldDescriptor.TYPE_STRING: + value = tokenizer.ConsumeString() + elif field.type == descriptor.FieldDescriptor.TYPE_BYTES: + value = tokenizer.ConsumeByteString() + elif field.type == descriptor.FieldDescriptor.TYPE_ENUM: + value = tokenizer.ConsumeEnum(field) + else: + raise RuntimeError('Unknown field type %d' % field.type) + + if field.label == descriptor.FieldDescriptor.LABEL_REPEATED: + if field.is_extension: + message.Extensions[field].append(value) + else: + getattr(message, field.name).append(value) + else: + if field.is_extension: + if (not self._allow_multiple_scalars and + not self._IsProto3Syntax(message) and + message.HasExtension(field)): + raise tokenizer.ParseErrorPreviousToken( + 'Message type "%s" should not have multiple "%s" extensions.' % + (message.DESCRIPTOR.full_name, field.full_name)) + else: + message.Extensions[field] = value + else: + duplicate_error = False + if not self._allow_multiple_scalars: + if self._IsProto3Syntax(message): + # Proto3 doesn't represent presence so we try best effort to check + # multiple scalars by compare to default values. + duplicate_error = bool(getattr(message, field.name)) + else: + duplicate_error = message.HasField(field.name) + + if duplicate_error: + raise tokenizer.ParseErrorPreviousToken( + 'Message type "%s" should not have multiple "%s" fields.' % + (message.DESCRIPTOR.full_name, field.name)) + else: + setattr(message, field.name, value) + + +def _SkipFieldContents(tokenizer): + """Skips over contents (value or message) of a field. + + Args: + tokenizer: A tokenizer to parse the field name and values. + """ + # Try to guess the type of this field. + # If this field is not a message, there should be a ":" between the + # field name and the field value and also the field value should not + # start with "{" or "<" which indicates the beginning of a message body. + # If there is no ":" or there is a "{" or "<" after ":", this field has + # to be a message or the input is ill-formed. + if tokenizer.TryConsume(':') and not tokenizer.LookingAt( + '{') and not tokenizer.LookingAt('<'): + _SkipFieldValue(tokenizer) + else: + _SkipFieldMessage(tokenizer) + + +def _SkipField(tokenizer): + """Skips over a complete field (name and value/message). + + Args: + tokenizer: A tokenizer to parse the field name and values. + """ + if tokenizer.TryConsume('['): + # Consume extension name. 
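+ # Extension names appear in text format as a bracketed fully-qualified name, + # e.g. [com.example.ext_field] (an illustrative name, not one defined here).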
+ tokenizer.ConsumeIdentifier() + while tokenizer.TryConsume('.'): + tokenizer.ConsumeIdentifier() + tokenizer.Consume(']') + else: + tokenizer.ConsumeIdentifierOrNumber() + + _SkipFieldContents(tokenizer) + + # For historical reasons, fields may optionally be separated by commas or + # semicolons. + if not tokenizer.TryConsume(','): + tokenizer.TryConsume(';') + + +def _SkipFieldMessage(tokenizer): + """Skips over a field message. + + Args: + tokenizer: A tokenizer to parse the field name and values. + """ + + if tokenizer.TryConsume('<'): + delimiter = '>' + else: + tokenizer.Consume('{') + delimiter = '}' + + while not tokenizer.LookingAt('>') and not tokenizer.LookingAt('}'): + _SkipField(tokenizer) + + tokenizer.Consume(delimiter) + + +def _SkipFieldValue(tokenizer): + """Skips over a field value. + + Args: + tokenizer: A tokenizer to parse the field name and values. + + Raises: + ParseError: In case an invalid field value is found. + """ + # String/bytes tokens can come in multiple adjacent string literals. + # If we can consume one, consume as many as we can. + if tokenizer.TryConsumeByteString(): + while tokenizer.TryConsumeByteString(): + pass + return + + if (not tokenizer.TryConsumeIdentifier() and + not _TryConsumeInt64(tokenizer) and not _TryConsumeUint64(tokenizer) and + not tokenizer.TryConsumeFloat()): + raise ParseError('Invalid field value: ' + tokenizer.token) + + +class Tokenizer(object): + """Protocol buffer text representation tokenizer. + + This class handles the lower level string parsing by splitting it into + meaningful tokens. + + It was directly ported from the Java protocol buffer API. + """ + + _WHITESPACE = re.compile(r'\s+') + _COMMENT = re.compile(r'(\s*#.*$)', re.MULTILINE) + _WHITESPACE_OR_COMMENT = re.compile(r'(\s|(#.*$))+', re.MULTILINE) + _TOKEN = re.compile('|'.join([ + r'[a-zA-Z_][0-9a-zA-Z_+-]*', # an identifier + r'([0-9+-]|(\.[0-9]))[0-9a-zA-Z_.+-]*', # a number + ] + [ # quoted str for each quote mark + # Avoid backtracking! https://stackoverflow.com/a/844267 + r'{qt}[^{qt}\n\\]*((\\.)+[^{qt}\n\\]*)*({qt}|\\?$)'.format(qt=mark) + for mark in _QUOTES + ])) + + _IDENTIFIER = re.compile(r'[^\d\W]\w*') + _IDENTIFIER_OR_NUMBER = re.compile(r'\w+') + + def __init__(self, lines, skip_comments=True): + self._position = 0 + self._line = -1 + self._column = 0 + self._token_start = None + self.token = '' + self._lines = iter(lines) + self._current_line = '' + self._previous_line = 0 + self._previous_column = 0 + self._more_lines = True + self._skip_comments = skip_comments + self._whitespace_pattern = (skip_comments and self._WHITESPACE_OR_COMMENT + or self._WHITESPACE) + self._SkipWhitespace() + self.NextToken() + + def LookingAt(self, token): + return self.token == token + + def AtEnd(self): + """Checks the end of the text was reached. + + Returns: + True iff the end was reached. + """ + return not self.token + + def _PopLine(self): + while len(self._current_line) <= self._column: + try: + self._current_line = next(self._lines) + except StopIteration: + self._current_line = '' + self._more_lines = False + return + else: + self._line += 1 + self._column = 0 + + def _SkipWhitespace(self): + while True: + self._PopLine() + match = self._whitespace_pattern.match(self._current_line, self._column) + if not match: + break + length = len(match.group(0)) + self._column += length + + def TryConsume(self, token): + """Tries to consume a given piece of text. + + Args: + token: Text to consume. + + Returns: + True iff the text was consumed. 
+ """ + if self.token == token: + self.NextToken() + return True + return False + + def Consume(self, token): + """Consumes a piece of text. + + Args: + token: Text to consume. + + Raises: + ParseError: If the text couldn't be consumed. + """ + if not self.TryConsume(token): + raise self.ParseError('Expected "%s".' % token) + + def ConsumeComment(self): + result = self.token + if not self._COMMENT.match(result): + raise self.ParseError('Expected comment.') + self.NextToken() + return result + + def ConsumeCommentOrTrailingComment(self): + """Consumes a comment, returns a 2-tuple (trailing bool, comment str).""" + + # Tokenizer initializes _previous_line and _previous_column to 0. As the + # tokenizer starts, it looks like there is a previous token on the line. + just_started = self._line == 0 and self._column == 0 + + before_parsing = self._previous_line + comment = self.ConsumeComment() + + # A trailing comment is a comment on the same line than the previous token. + trailing = (self._previous_line == before_parsing + and not just_started) + + return trailing, comment + + def TryConsumeIdentifier(self): + try: + self.ConsumeIdentifier() + return True + except ParseError: + return False + + def ConsumeIdentifier(self): + """Consumes protocol message field identifier. + + Returns: + Identifier string. + + Raises: + ParseError: If an identifier couldn't be consumed. + """ + result = self.token + if not self._IDENTIFIER.match(result): + raise self.ParseError('Expected identifier.') + self.NextToken() + return result + + def TryConsumeIdentifierOrNumber(self): + try: + self.ConsumeIdentifierOrNumber() + return True + except ParseError: + return False + + def ConsumeIdentifierOrNumber(self): + """Consumes protocol message field identifier. + + Returns: + Identifier string. + + Raises: + ParseError: If an identifier couldn't be consumed. + """ + result = self.token + if not self._IDENTIFIER_OR_NUMBER.match(result): + raise self.ParseError('Expected identifier or number, got %s.' % result) + self.NextToken() + return result + + def TryConsumeInteger(self): + try: + self.ConsumeInteger() + return True + except ParseError: + return False + + def ConsumeInteger(self): + """Consumes an integer number. + + Returns: + The integer parsed. + + Raises: + ParseError: If an integer couldn't be consumed. + """ + try: + result = _ParseAbstractInteger(self.token) + except ValueError as e: + raise self.ParseError(str(e)) + self.NextToken() + return result + + def TryConsumeFloat(self): + try: + self.ConsumeFloat() + return True + except ParseError: + return False + + def ConsumeFloat(self): + """Consumes an floating point number. + + Returns: + The number parsed. + + Raises: + ParseError: If a floating point number couldn't be consumed. + """ + try: + result = ParseFloat(self.token) + except ValueError as e: + raise self.ParseError(str(e)) + self.NextToken() + return result + + def ConsumeBool(self): + """Consumes a boolean value. + + Returns: + The bool parsed. + + Raises: + ParseError: If a boolean value couldn't be consumed. + """ + try: + result = ParseBool(self.token) + except ValueError as e: + raise self.ParseError(str(e)) + self.NextToken() + return result + + def TryConsumeByteString(self): + try: + self.ConsumeByteString() + return True + except ParseError: + return False + + def ConsumeString(self): + """Consumes a string value. + + Returns: + The string parsed. + + Raises: + ParseError: If a string value couldn't be consumed. 
+ """ + the_bytes = self.ConsumeByteString() + try: + return str(the_bytes, 'utf-8') + except UnicodeDecodeError as e: + raise self._StringParseError(e) + + def ConsumeByteString(self): + """Consumes a byte array value. + + Returns: + The array parsed (as a string). + + Raises: + ParseError: If a byte array value couldn't be consumed. + """ + the_list = [self._ConsumeSingleByteString()] + while self.token and self.token[0] in _QUOTES: + the_list.append(self._ConsumeSingleByteString()) + return b''.join(the_list) + + def _ConsumeSingleByteString(self): + """Consume one token of a string literal. + + String literals (whether bytes or text) can come in multiple adjacent + tokens which are automatically concatenated, like in C or Python. This + method only consumes one token. + + Returns: + The token parsed. + Raises: + ParseError: When the wrong format data is found. + """ + text = self.token + if len(text) < 1 or text[0] not in _QUOTES: + raise self.ParseError('Expected string but found: %r' % (text,)) + + if len(text) < 2 or text[-1] != text[0]: + raise self.ParseError('String missing ending quote: %r' % (text,)) + + try: + result = text_encoding.CUnescape(text[1:-1]) + except ValueError as e: + raise self.ParseError(str(e)) + self.NextToken() + return result + + def ConsumeEnum(self, field): + try: + result = ParseEnum(field, self.token) + except ValueError as e: + raise self.ParseError(str(e)) + self.NextToken() + return result + + def ParseErrorPreviousToken(self, message): + """Creates and *returns* a ParseError for the previously read token. + + Args: + message: A message to set for the exception. + + Returns: + A ParseError instance. + """ + return ParseError(message, self._previous_line + 1, + self._previous_column + 1) + + def ParseError(self, message): + """Creates and *returns* a ParseError for the current token.""" + return ParseError('\'' + self._current_line + '\': ' + message, + self._line + 1, self._column + 1) + + def _StringParseError(self, e): + return self.ParseError('Couldn\'t parse string: ' + str(e)) + + def NextToken(self): + """Reads the next meaningful token.""" + self._previous_line = self._line + self._previous_column = self._column + + self._column += len(self.token) + self._SkipWhitespace() + + if not self._more_lines: + self.token = '' + return + + match = self._TOKEN.match(self._current_line, self._column) + if not match and not self._skip_comments: + match = self._COMMENT.match(self._current_line, self._column) + if match: + token = match.group(0) + self.token = token + else: + self.token = self._current_line[self._column] + +# Aliased so it can still be accessed by current visibility violators. +# TODO(dbarnett): Migrate violators to textformat_tokenizer. +_Tokenizer = Tokenizer # pylint: disable=invalid-name + + +def _ConsumeInt32(tokenizer): + """Consumes a signed 32bit integer number from tokenizer. + + Args: + tokenizer: A tokenizer used to parse the number. + + Returns: + The integer parsed. + + Raises: + ParseError: If a signed 32bit integer couldn't be consumed. + """ + return _ConsumeInteger(tokenizer, is_signed=True, is_long=False) + + +def _ConsumeUint32(tokenizer): + """Consumes an unsigned 32bit integer number from tokenizer. + + Args: + tokenizer: A tokenizer used to parse the number. + + Returns: + The integer parsed. + + Raises: + ParseError: If an unsigned 32bit integer couldn't be consumed. 
+ """ + return _ConsumeInteger(tokenizer, is_signed=False, is_long=False) + + +def _TryConsumeInt64(tokenizer): + try: + _ConsumeInt64(tokenizer) + return True + except ParseError: + return False + + +def _ConsumeInt64(tokenizer): + """Consumes a signed 32bit integer number from tokenizer. + + Args: + tokenizer: A tokenizer used to parse the number. + + Returns: + The integer parsed. + + Raises: + ParseError: If a signed 32bit integer couldn't be consumed. + """ + return _ConsumeInteger(tokenizer, is_signed=True, is_long=True) + + +def _TryConsumeUint64(tokenizer): + try: + _ConsumeUint64(tokenizer) + return True + except ParseError: + return False + + +def _ConsumeUint64(tokenizer): + """Consumes an unsigned 64bit integer number from tokenizer. + + Args: + tokenizer: A tokenizer used to parse the number. + + Returns: + The integer parsed. + + Raises: + ParseError: If an unsigned 64bit integer couldn't be consumed. + """ + return _ConsumeInteger(tokenizer, is_signed=False, is_long=True) + + +def _ConsumeInteger(tokenizer, is_signed=False, is_long=False): + """Consumes an integer number from tokenizer. + + Args: + tokenizer: A tokenizer used to parse the number. + is_signed: True if a signed integer must be parsed. + is_long: True if a long integer must be parsed. + + Returns: + The integer parsed. + + Raises: + ParseError: If an integer with given characteristics couldn't be consumed. + """ + try: + result = ParseInteger(tokenizer.token, is_signed=is_signed, is_long=is_long) + except ValueError as e: + raise tokenizer.ParseError(str(e)) + tokenizer.NextToken() + return result + + +def ParseInteger(text, is_signed=False, is_long=False): + """Parses an integer. + + Args: + text: The text to parse. + is_signed: True if a signed integer must be parsed. + is_long: True if a long integer must be parsed. + + Returns: + The integer value. + + Raises: + ValueError: Thrown Iff the text is not a valid integer. + """ + # Do the actual parsing. Exception handling is propagated to caller. + result = _ParseAbstractInteger(text) + + # Check if the integer is sane. Exceptions handled by callers. + checker = _INTEGER_CHECKERS[2 * int(is_long) + int(is_signed)] + checker.CheckValue(result) + return result + + +def _ParseAbstractInteger(text): + """Parses an integer without checking size/signedness. + + Args: + text: The text to parse. + + Returns: + The integer value. + + Raises: + ValueError: Thrown Iff the text is not a valid integer. + """ + # Do the actual parsing. Exception handling is propagated to caller. + orig_text = text + c_octal_match = re.match(r'(-?)0(\d+)$', text) + if c_octal_match: + # Python 3 no longer supports 0755 octal syntax without the 'o', so + # we always use the '0o' prefix for multi-digit numbers starting with 0. + text = c_octal_match.group(1) + '0o' + c_octal_match.group(2) + try: + return int(text, 0) + except ValueError: + raise ValueError('Couldn\'t parse integer: %s' % orig_text) + + +def ParseFloat(text): + """Parse a floating point number. + + Args: + text: Text to parse. + + Returns: + The number parsed. + + Raises: + ValueError: If a floating point number couldn't be parsed. + """ + try: + # Assume Python compatible syntax. + return float(text) + except ValueError: + # Check alternative spellings. 
+ if _FLOAT_INFINITY.match(text): + if text[0] == '-': + return float('-inf') + else: + return float('inf') + elif _FLOAT_NAN.match(text): + return float('nan') + else: + # assume '1.0f' format + try: + return float(text.rstrip('f')) + except ValueError: + raise ValueError('Couldn\'t parse float: %s' % text) + + + def ParseBool(text): + """Parse a boolean value. + + Args: + text: Text to parse. + + Returns: + The boolean value parsed. + + Raises: + ValueError: If text is not a valid boolean. + """ + if text in ('true', 't', '1', 'True'): + return True + elif text in ('false', 'f', '0', 'False'): + return False + else: + raise ValueError('Expected "true" or "false".') + + + def ParseEnum(field, value): + """Parse an enum value. + + The value can be specified by a number (the enum value), or by + a string literal (the enum name). + + Args: + field: Enum field descriptor. + value: String value. + + Returns: + Enum value number. + + Raises: + ValueError: If the enum value could not be parsed. + """ + enum_descriptor = field.enum_type + try: + number = int(value, 0) + except ValueError: + # Identifier. + enum_value = enum_descriptor.values_by_name.get(value, None) + if enum_value is None: + raise ValueError('Enum type "%s" has no value named %s.' % + (enum_descriptor.full_name, value)) + else: + # Numeric value. + if hasattr(field.file, 'syntax'): + # Attribute is checked for compatibility. + if field.file.syntax == 'proto3': + # Proto3 accepts numeric unknown enums. + return number + enum_value = enum_descriptor.values_by_number.get(number, None) + if enum_value is None: + raise ValueError('Enum type "%s" has no value with number %d.' % + (enum_descriptor.full_name, number)) + return enum_value.number diff --git a/env/lib/python3.8/site-packages/google/protobuf/timestamp_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/timestamp_pb2.py new file mode 100644 index 00000000..558d4969 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/timestamp_pb2.py @@ -0,0 +1,26 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: google/protobuf/timestamp.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1fgoogle/protobuf/timestamp.proto\x12\x0fgoogle.protobuf\"+\n\tTimestamp\x12\x0f\n\x07seconds\x18\x01 \x01(\x03\x12\r\n\x05nanos\x18\x02 \x01(\x05\x42\x85\x01\n\x13\x63om.google.protobufB\x0eTimestampProtoP\x01Z2google.golang.org/protobuf/types/known/timestamppb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.timestamp_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\016TimestampProtoP\001Z2google.golang.org/protobuf/types/known/timestamppb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' + _TIMESTAMP._serialized_start=52 + _TIMESTAMP._serialized_end=95 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/type_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/type_pb2.py new file mode 100644 index 00000000..19903fb6 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/type_pb2.py @@ -0,0 +1,42 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/type.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +from google.protobuf import any_pb2 as google_dot_protobuf_dot_any__pb2 +from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2 + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1agoogle/protobuf/type.proto\x12\x0fgoogle.protobuf\x1a\x19google/protobuf/any.proto\x1a$google/protobuf/source_context.proto\"\xd7\x01\n\x04Type\x12\x0c\n\x04name\x18\x01 \x01(\t\x12&\n\x06\x66ields\x18\x02 \x03(\x0b\x32\x16.google.protobuf.Field\x12\x0e\n\x06oneofs\x18\x03 \x03(\t\x12(\n\x07options\x18\x04 \x03(\x0b\x32\x17.google.protobuf.Option\x12\x36\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContext\x12\'\n\x06syntax\x18\x06 \x01(\x0e\x32\x17.google.protobuf.Syntax\"\xd5\x05\n\x05\x46ield\x12)\n\x04kind\x18\x01 \x01(\x0e\x32\x1b.google.protobuf.Field.Kind\x12\x37\n\x0b\x63\x61rdinality\x18\x02 \x01(\x0e\x32\".google.protobuf.Field.Cardinality\x12\x0e\n\x06number\x18\x03 \x01(\x05\x12\x0c\n\x04name\x18\x04 \x01(\t\x12\x10\n\x08type_url\x18\x06 \x01(\t\x12\x13\n\x0boneof_index\x18\x07 \x01(\x05\x12\x0e\n\x06packed\x18\x08 \x01(\x08\x12(\n\x07options\x18\t \x03(\x0b\x32\x17.google.protobuf.Option\x12\x11\n\tjson_name\x18\n \x01(\t\x12\x15\n\rdefault_value\x18\x0b 
\x01(\t\"\xc8\x02\n\x04Kind\x12\x10\n\x0cTYPE_UNKNOWN\x10\x00\x12\x0f\n\x0bTYPE_DOUBLE\x10\x01\x12\x0e\n\nTYPE_FLOAT\x10\x02\x12\x0e\n\nTYPE_INT64\x10\x03\x12\x0f\n\x0bTYPE_UINT64\x10\x04\x12\x0e\n\nTYPE_INT32\x10\x05\x12\x10\n\x0cTYPE_FIXED64\x10\x06\x12\x10\n\x0cTYPE_FIXED32\x10\x07\x12\r\n\tTYPE_BOOL\x10\x08\x12\x0f\n\x0bTYPE_STRING\x10\t\x12\x0e\n\nTYPE_GROUP\x10\n\x12\x10\n\x0cTYPE_MESSAGE\x10\x0b\x12\x0e\n\nTYPE_BYTES\x10\x0c\x12\x0f\n\x0bTYPE_UINT32\x10\r\x12\r\n\tTYPE_ENUM\x10\x0e\x12\x11\n\rTYPE_SFIXED32\x10\x0f\x12\x11\n\rTYPE_SFIXED64\x10\x10\x12\x0f\n\x0bTYPE_SINT32\x10\x11\x12\x0f\n\x0bTYPE_SINT64\x10\x12\"t\n\x0b\x43\x61rdinality\x12\x17\n\x13\x43\x41RDINALITY_UNKNOWN\x10\x00\x12\x18\n\x14\x43\x41RDINALITY_OPTIONAL\x10\x01\x12\x18\n\x14\x43\x41RDINALITY_REQUIRED\x10\x02\x12\x18\n\x14\x43\x41RDINALITY_REPEATED\x10\x03\"\xce\x01\n\x04\x45num\x12\x0c\n\x04name\x18\x01 \x01(\t\x12-\n\tenumvalue\x18\x02 \x03(\x0b\x32\x1a.google.protobuf.EnumValue\x12(\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.Option\x12\x36\n\x0esource_context\x18\x04 \x01(\x0b\x32\x1e.google.protobuf.SourceContext\x12\'\n\x06syntax\x18\x05 \x01(\x0e\x32\x17.google.protobuf.Syntax\"S\n\tEnumValue\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0e\n\x06number\x18\x02 \x01(\x05\x12(\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.Option\";\n\x06Option\x12\x0c\n\x04name\x18\x01 \x01(\t\x12#\n\x05value\x18\x02 \x01(\x0b\x32\x14.google.protobuf.Any*.\n\x06Syntax\x12\x11\n\rSYNTAX_PROTO2\x10\x00\x12\x11\n\rSYNTAX_PROTO3\x10\x01\x42{\n\x13\x63om.google.protobufB\tTypeProtoP\x01Z-google.golang.org/protobuf/types/known/typepb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.type_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\tTypeProtoP\001Z-google.golang.org/protobuf/types/known/typepb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' + _SYNTAX._serialized_start=1413 + _SYNTAX._serialized_end=1459 + _TYPE._serialized_start=113 + _TYPE._serialized_end=328 + _FIELD._serialized_start=331 + _FIELD._serialized_end=1056 + _FIELD_KIND._serialized_start=610 + _FIELD_KIND._serialized_end=938 + _FIELD_CARDINALITY._serialized_start=940 + _FIELD_CARDINALITY._serialized_end=1056 + _ENUM._serialized_start=1059 + _ENUM._serialized_end=1265 + _ENUMVALUE._serialized_start=1267 + _ENUMVALUE._serialized_end=1350 + _OPTION._serialized_start=1352 + _OPTION._serialized_end=1411 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/util/__init__.py b/env/lib/python3.8/site-packages/google/protobuf/util/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/google/protobuf/util/json_format_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/util/json_format_pb2.py new file mode 100644 index 00000000..66a5836c --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/util/json_format_pb2.py @@ -0,0 +1,72 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: google/protobuf/util/json_format.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n&google/protobuf/util/json_format.proto\x12\x11protobuf_unittest\"\x89\x01\n\x13TestFlagsAndStrings\x12\t\n\x01\x41\x18\x01 \x02(\x05\x12K\n\rrepeatedgroup\x18\x02 \x03(\n24.protobuf_unittest.TestFlagsAndStrings.RepeatedGroup\x1a\x1a\n\rRepeatedGroup\x12\t\n\x01\x66\x18\x03 \x02(\t\"!\n\x14TestBase64ByteArrays\x12\t\n\x01\x61\x18\x01 \x02(\x0c\"G\n\x12TestJavaScriptJSON\x12\t\n\x01\x61\x18\x01 \x01(\x05\x12\r\n\x05\x66inal\x18\x02 \x01(\x02\x12\n\n\x02in\x18\x03 \x01(\t\x12\x0b\n\x03Var\x18\x04 \x01(\t\"Q\n\x18TestJavaScriptOrderJSON1\x12\t\n\x01\x64\x18\x01 \x01(\x05\x12\t\n\x01\x63\x18\x02 \x01(\x05\x12\t\n\x01x\x18\x03 \x01(\x08\x12\t\n\x01\x62\x18\x04 \x01(\x05\x12\t\n\x01\x61\x18\x05 \x01(\x05\"\x89\x01\n\x18TestJavaScriptOrderJSON2\x12\t\n\x01\x64\x18\x01 \x01(\x05\x12\t\n\x01\x63\x18\x02 \x01(\x05\x12\t\n\x01x\x18\x03 \x01(\x08\x12\t\n\x01\x62\x18\x04 \x01(\x05\x12\t\n\x01\x61\x18\x05 \x01(\x05\x12\x36\n\x01z\x18\x06 \x03(\x0b\x32+.protobuf_unittest.TestJavaScriptOrderJSON1\"$\n\x0cTestLargeInt\x12\t\n\x01\x61\x18\x01 \x02(\x03\x12\t\n\x01\x62\x18\x02 \x02(\x04\"\xa0\x01\n\x0bTestNumbers\x12\x30\n\x01\x61\x18\x01 \x01(\x0e\x32%.protobuf_unittest.TestNumbers.MyType\x12\t\n\x01\x62\x18\x02 \x01(\x05\x12\t\n\x01\x63\x18\x03 \x01(\x02\x12\t\n\x01\x64\x18\x04 \x01(\x08\x12\t\n\x01\x65\x18\x05 \x01(\x01\x12\t\n\x01\x66\x18\x06 \x01(\r\"(\n\x06MyType\x12\x06\n\x02OK\x10\x00\x12\x0b\n\x07WARNING\x10\x01\x12\t\n\x05\x45RROR\x10\x02\"T\n\rTestCamelCase\x12\x14\n\x0cnormal_field\x18\x01 \x01(\t\x12\x15\n\rCAPITAL_FIELD\x18\x02 \x01(\x05\x12\x16\n\x0e\x43\x61melCaseField\x18\x03 \x01(\x05\"|\n\x0bTestBoolMap\x12=\n\x08\x62ool_map\x18\x01 \x03(\x0b\x32+.protobuf_unittest.TestBoolMap.BoolMapEntry\x1a.\n\x0c\x42oolMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x08\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\"O\n\rTestRecursion\x12\r\n\x05value\x18\x01 \x01(\x05\x12/\n\x05\x63hild\x18\x02 \x01(\x0b\x32 .protobuf_unittest.TestRecursion\"\x86\x01\n\rTestStringMap\x12\x43\n\nstring_map\x18\x01 \x03(\x0b\x32/.protobuf_unittest.TestStringMap.StringMapEntry\x1a\x30\n\x0eStringMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"\xc4\x01\n\x14TestStringSerializer\x12\x15\n\rscalar_string\x18\x01 \x01(\t\x12\x17\n\x0frepeated_string\x18\x02 \x03(\t\x12J\n\nstring_map\x18\x03 \x03(\x0b\x32\x36.protobuf_unittest.TestStringSerializer.StringMapEntry\x1a\x30\n\x0eStringMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"$\n\x18TestMessageWithExtension*\x08\x08\x64\x10\x80\x80\x80\x80\x02\"z\n\rTestExtension\x12\r\n\x05value\x18\x01 \x01(\t2Z\n\x03\x65xt\x12+.protobuf_unittest.TestMessageWithExtension\x18\x64 \x01(\x0b\x32 .protobuf_unittest.TestExtension\"Q\n\x14TestDefaultEnumValue\x12\x39\n\nenum_value\x18\x01 \x01(\x0e\x32\x1c.protobuf_unittest.EnumValue:\x07\x44\x45\x46\x41ULT*2\n\tEnumValue\x12\x0c\n\x08PROTOCOL\x10\x00\x12\n\n\x06\x42UFFER\x10\x01\x12\x0b\n\x07\x44\x45\x46\x41ULT\x10\x02') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) 
+_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.util.json_format_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + TestMessageWithExtension.RegisterExtension(_TESTEXTENSION.extensions_by_name['ext']) + + DESCRIPTOR._options = None + _TESTBOOLMAP_BOOLMAPENTRY._options = None + _TESTBOOLMAP_BOOLMAPENTRY._serialized_options = b'8\001' + _TESTSTRINGMAP_STRINGMAPENTRY._options = None + _TESTSTRINGMAP_STRINGMAPENTRY._serialized_options = b'8\001' + _TESTSTRINGSERIALIZER_STRINGMAPENTRY._options = None + _TESTSTRINGSERIALIZER_STRINGMAPENTRY._serialized_options = b'8\001' + _ENUMVALUE._serialized_start=1607 + _ENUMVALUE._serialized_end=1657 + _TESTFLAGSANDSTRINGS._serialized_start=62 + _TESTFLAGSANDSTRINGS._serialized_end=199 + _TESTFLAGSANDSTRINGS_REPEATEDGROUP._serialized_start=173 + _TESTFLAGSANDSTRINGS_REPEATEDGROUP._serialized_end=199 + _TESTBASE64BYTEARRAYS._serialized_start=201 + _TESTBASE64BYTEARRAYS._serialized_end=234 + _TESTJAVASCRIPTJSON._serialized_start=236 + _TESTJAVASCRIPTJSON._serialized_end=307 + _TESTJAVASCRIPTORDERJSON1._serialized_start=309 + _TESTJAVASCRIPTORDERJSON1._serialized_end=390 + _TESTJAVASCRIPTORDERJSON2._serialized_start=393 + _TESTJAVASCRIPTORDERJSON2._serialized_end=530 + _TESTLARGEINT._serialized_start=532 + _TESTLARGEINT._serialized_end=568 + _TESTNUMBERS._serialized_start=571 + _TESTNUMBERS._serialized_end=731 + _TESTNUMBERS_MYTYPE._serialized_start=691 + _TESTNUMBERS_MYTYPE._serialized_end=731 + _TESTCAMELCASE._serialized_start=733 + _TESTCAMELCASE._serialized_end=817 + _TESTBOOLMAP._serialized_start=819 + _TESTBOOLMAP._serialized_end=943 + _TESTBOOLMAP_BOOLMAPENTRY._serialized_start=897 + _TESTBOOLMAP_BOOLMAPENTRY._serialized_end=943 + _TESTRECURSION._serialized_start=945 + _TESTRECURSION._serialized_end=1024 + _TESTSTRINGMAP._serialized_start=1027 + _TESTSTRINGMAP._serialized_end=1161 + _TESTSTRINGMAP_STRINGMAPENTRY._serialized_start=1113 + _TESTSTRINGMAP_STRINGMAPENTRY._serialized_end=1161 + _TESTSTRINGSERIALIZER._serialized_start=1164 + _TESTSTRINGSERIALIZER._serialized_end=1360 + _TESTSTRINGSERIALIZER_STRINGMAPENTRY._serialized_start=1113 + _TESTSTRINGSERIALIZER_STRINGMAPENTRY._serialized_end=1161 + _TESTMESSAGEWITHEXTENSION._serialized_start=1362 + _TESTMESSAGEWITHEXTENSION._serialized_end=1398 + _TESTEXTENSION._serialized_start=1400 + _TESTEXTENSION._serialized_end=1522 + _TESTDEFAULTENUMVALUE._serialized_start=1524 + _TESTDEFAULTENUMVALUE._serialized_end=1605 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/google/protobuf/util/json_format_proto3_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/util/json_format_proto3_pb2.py new file mode 100644 index 00000000..5498deaf --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/util/json_format_proto3_pb2.py @@ -0,0 +1,129 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: google/protobuf/util/json_format_proto3.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +from google.protobuf import any_pb2 as google_dot_protobuf_dot_any__pb2 +from google.protobuf import duration_pb2 as google_dot_protobuf_dot_duration__pb2 +from google.protobuf import field_mask_pb2 as google_dot_protobuf_dot_field__mask__pb2 +from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2 +from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 +from google.protobuf import wrappers_pb2 as google_dot_protobuf_dot_wrappers__pb2 +from google.protobuf import unittest_pb2 as google_dot_protobuf_dot_unittest__pb2 + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n-google/protobuf/util/json_format_proto3.proto\x12\x06proto3\x1a\x19google/protobuf/any.proto\x1a\x1egoogle/protobuf/duration.proto\x1a google/protobuf/field_mask.proto\x1a\x1cgoogle/protobuf/struct.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1egoogle/protobuf/wrappers.proto\x1a\x1egoogle/protobuf/unittest.proto\"\x1c\n\x0bMessageType\x12\r\n\x05value\x18\x01 \x01(\x05\"\x94\x05\n\x0bTestMessage\x12\x12\n\nbool_value\x18\x01 \x01(\x08\x12\x13\n\x0bint32_value\x18\x02 \x01(\x05\x12\x13\n\x0bint64_value\x18\x03 \x01(\x03\x12\x14\n\x0cuint32_value\x18\x04 \x01(\r\x12\x14\n\x0cuint64_value\x18\x05 \x01(\x04\x12\x13\n\x0b\x66loat_value\x18\x06 \x01(\x02\x12\x14\n\x0c\x64ouble_value\x18\x07 \x01(\x01\x12\x14\n\x0cstring_value\x18\x08 \x01(\t\x12\x13\n\x0b\x62ytes_value\x18\t \x01(\x0c\x12$\n\nenum_value\x18\n \x01(\x0e\x32\x10.proto3.EnumType\x12*\n\rmessage_value\x18\x0b \x01(\x0b\x32\x13.proto3.MessageType\x12\x1b\n\x13repeated_bool_value\x18\x15 \x03(\x08\x12\x1c\n\x14repeated_int32_value\x18\x16 \x03(\x05\x12\x1c\n\x14repeated_int64_value\x18\x17 \x03(\x03\x12\x1d\n\x15repeated_uint32_value\x18\x18 \x03(\r\x12\x1d\n\x15repeated_uint64_value\x18\x19 \x03(\x04\x12\x1c\n\x14repeated_float_value\x18\x1a \x03(\x02\x12\x1d\n\x15repeated_double_value\x18\x1b \x03(\x01\x12\x1d\n\x15repeated_string_value\x18\x1c \x03(\t\x12\x1c\n\x14repeated_bytes_value\x18\x1d \x03(\x0c\x12-\n\x13repeated_enum_value\x18\x1e \x03(\x0e\x32\x10.proto3.EnumType\x12\x33\n\x16repeated_message_value\x18\x1f \x03(\x0b\x32\x13.proto3.MessageType\"\x8c\x02\n\tTestOneof\x12\x1b\n\x11oneof_int32_value\x18\x01 \x01(\x05H\x00\x12\x1c\n\x12oneof_string_value\x18\x02 \x01(\tH\x00\x12\x1b\n\x11oneof_bytes_value\x18\x03 \x01(\x0cH\x00\x12,\n\x10oneof_enum_value\x18\x04 \x01(\x0e\x32\x10.proto3.EnumTypeH\x00\x12\x32\n\x13oneof_message_value\x18\x05 \x01(\x0b\x32\x13.proto3.MessageTypeH\x00\x12\x36\n\x10oneof_null_value\x18\x06 \x01(\x0e\x32\x1a.google.protobuf.NullValueH\x00\x42\r\n\x0boneof_value\"\xe1\x04\n\x07TestMap\x12.\n\x08\x62ool_map\x18\x01 \x03(\x0b\x32\x1c.proto3.TestMap.BoolMapEntry\x12\x30\n\tint32_map\x18\x02 \x03(\x0b\x32\x1d.proto3.TestMap.Int32MapEntry\x12\x30\n\tint64_map\x18\x03 \x03(\x0b\x32\x1d.proto3.TestMap.Int64MapEntry\x12\x32\n\nuint32_map\x18\x04 \x03(\x0b\x32\x1e.proto3.TestMap.Uint32MapEntry\x12\x32\n\nuint64_map\x18\x05 \x03(\x0b\x32\x1e.proto3.TestMap.Uint64MapEntry\x12\x32\n\nstring_map\x18\x06 
\x03(\x0b\x32\x1e.proto3.TestMap.StringMapEntry\x1a.\n\x0c\x42oolMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x08\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a/\n\rInt32MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x05\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a/\n\rInt64MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x03\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eUint32MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\r\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eUint64MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x04\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eStringMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\"\x85\x06\n\rTestNestedMap\x12\x34\n\x08\x62ool_map\x18\x01 \x03(\x0b\x32\".proto3.TestNestedMap.BoolMapEntry\x12\x36\n\tint32_map\x18\x02 \x03(\x0b\x32#.proto3.TestNestedMap.Int32MapEntry\x12\x36\n\tint64_map\x18\x03 \x03(\x0b\x32#.proto3.TestNestedMap.Int64MapEntry\x12\x38\n\nuint32_map\x18\x04 \x03(\x0b\x32$.proto3.TestNestedMap.Uint32MapEntry\x12\x38\n\nuint64_map\x18\x05 \x03(\x0b\x32$.proto3.TestNestedMap.Uint64MapEntry\x12\x38\n\nstring_map\x18\x06 \x03(\x0b\x32$.proto3.TestNestedMap.StringMapEntry\x12\x32\n\x07map_map\x18\x07 \x03(\x0b\x32!.proto3.TestNestedMap.MapMapEntry\x1a.\n\x0c\x42oolMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x08\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a/\n\rInt32MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x05\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a/\n\rInt64MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x03\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eUint32MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\r\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eUint64MapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x04\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x30\n\x0eStringMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a\x44\n\x0bMapMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12$\n\x05value\x18\x02 \x01(\x0b\x32\x15.proto3.TestNestedMap:\x02\x38\x01\"{\n\rTestStringMap\x12\x38\n\nstring_map\x18\x01 \x03(\x0b\x32$.proto3.TestStringMap.StringMapEntry\x1a\x30\n\x0eStringMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"\xee\x07\n\x0bTestWrapper\x12.\n\nbool_value\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.BoolValue\x12\x30\n\x0bint32_value\x18\x02 \x01(\x0b\x32\x1b.google.protobuf.Int32Value\x12\x30\n\x0bint64_value\x18\x03 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x32\n\x0cuint32_value\x18\x04 \x01(\x0b\x32\x1c.google.protobuf.UInt32Value\x12\x32\n\x0cuint64_value\x18\x05 \x01(\x0b\x32\x1c.google.protobuf.UInt64Value\x12\x30\n\x0b\x66loat_value\x18\x06 \x01(\x0b\x32\x1b.google.protobuf.FloatValue\x12\x32\n\x0c\x64ouble_value\x18\x07 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12\x32\n\x0cstring_value\x18\x08 \x01(\x0b\x32\x1c.google.protobuf.StringValue\x12\x30\n\x0b\x62ytes_value\x18\t \x01(\x0b\x32\x1b.google.protobuf.BytesValue\x12\x37\n\x13repeated_bool_value\x18\x0b \x03(\x0b\x32\x1a.google.protobuf.BoolValue\x12\x39\n\x14repeated_int32_value\x18\x0c \x03(\x0b\x32\x1b.google.protobuf.Int32Value\x12\x39\n\x14repeated_int64_value\x18\r \x03(\x0b\x32\x1b.google.protobuf.Int64Value\x12;\n\x15repeated_uint32_value\x18\x0e \x03(\x0b\x32\x1c.google.protobuf.UInt32Value\x12;\n\x15repeated_uint64_value\x18\x0f \x03(\x0b\x32\x1c.google.protobuf.UInt64Value\x12\x39\n\x14repeated_float_value\x18\x10 
\x03(\x0b\x32\x1b.google.protobuf.FloatValue\x12;\n\x15repeated_double_value\x18\x11 \x03(\x0b\x32\x1c.google.protobuf.DoubleValue\x12;\n\x15repeated_string_value\x18\x12 \x03(\x0b\x32\x1c.google.protobuf.StringValue\x12\x39\n\x14repeated_bytes_value\x18\x13 \x03(\x0b\x32\x1b.google.protobuf.BytesValue\"n\n\rTestTimestamp\x12)\n\x05value\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x32\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x1a.google.protobuf.Timestamp\"k\n\x0cTestDuration\x12(\n\x05value\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x31\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x19.google.protobuf.Duration\":\n\rTestFieldMask\x12)\n\x05value\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.FieldMask\"e\n\nTestStruct\x12&\n\x05value\x18\x01 \x01(\x0b\x32\x17.google.protobuf.Struct\x12/\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x17.google.protobuf.Struct\"\\\n\x07TestAny\x12#\n\x05value\x18\x01 \x01(\x0b\x32\x14.google.protobuf.Any\x12,\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x14.google.protobuf.Any\"b\n\tTestValue\x12%\n\x05value\x18\x01 \x01(\x0b\x32\x16.google.protobuf.Value\x12.\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x16.google.protobuf.Value\"n\n\rTestListValue\x12)\n\x05value\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.ListValue\x12\x32\n\x0erepeated_value\x18\x02 \x03(\x0b\x32\x1a.google.protobuf.ListValue\"\x89\x01\n\rTestBoolValue\x12\x12\n\nbool_value\x18\x01 \x01(\x08\x12\x34\n\x08\x62ool_map\x18\x02 \x03(\x0b\x32\".proto3.TestBoolValue.BoolMapEntry\x1a.\n\x0c\x42oolMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\x08\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\"+\n\x12TestCustomJsonName\x12\x15\n\x05value\x18\x01 \x01(\x05R\x06@value\"J\n\x0eTestExtensions\x12\x38\n\nextensions\x18\x01 \x01(\x0b\x32$.protobuf_unittest.TestAllExtensions\"\x84\x01\n\rTestEnumValue\x12%\n\x0b\x65num_value1\x18\x01 \x01(\x0e\x32\x10.proto3.EnumType\x12%\n\x0b\x65num_value2\x18\x02 \x01(\x0e\x32\x10.proto3.EnumType\x12%\n\x0b\x65num_value3\x18\x03 \x01(\x0e\x32\x10.proto3.EnumType*\x1c\n\x08\x45numType\x12\x07\n\x03\x46OO\x10\x00\x12\x07\n\x03\x42\x41R\x10\x01\x42,\n\x18\x63om.google.protobuf.utilB\x10JsonFormatProto3b\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.util.json_format_proto3_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\030com.google.protobuf.utilB\020JsonFormatProto3' + _TESTMAP_BOOLMAPENTRY._options = None + _TESTMAP_BOOLMAPENTRY._serialized_options = b'8\001' + _TESTMAP_INT32MAPENTRY._options = None + _TESTMAP_INT32MAPENTRY._serialized_options = b'8\001' + _TESTMAP_INT64MAPENTRY._options = None + _TESTMAP_INT64MAPENTRY._serialized_options = b'8\001' + _TESTMAP_UINT32MAPENTRY._options = None + _TESTMAP_UINT32MAPENTRY._serialized_options = b'8\001' + _TESTMAP_UINT64MAPENTRY._options = None + _TESTMAP_UINT64MAPENTRY._serialized_options = b'8\001' + _TESTMAP_STRINGMAPENTRY._options = None + _TESTMAP_STRINGMAPENTRY._serialized_options = b'8\001' + _TESTNESTEDMAP_BOOLMAPENTRY._options = None + _TESTNESTEDMAP_BOOLMAPENTRY._serialized_options = b'8\001' + _TESTNESTEDMAP_INT32MAPENTRY._options = None + _TESTNESTEDMAP_INT32MAPENTRY._serialized_options = b'8\001' + _TESTNESTEDMAP_INT64MAPENTRY._options = None + _TESTNESTEDMAP_INT64MAPENTRY._serialized_options = b'8\001' + _TESTNESTEDMAP_UINT32MAPENTRY._options = None + _TESTNESTEDMAP_UINT32MAPENTRY._serialized_options = b'8\001' + 
_TESTNESTEDMAP_UINT64MAPENTRY._options = None + _TESTNESTEDMAP_UINT64MAPENTRY._serialized_options = b'8\001' + _TESTNESTEDMAP_STRINGMAPENTRY._options = None + _TESTNESTEDMAP_STRINGMAPENTRY._serialized_options = b'8\001' + _TESTNESTEDMAP_MAPMAPENTRY._options = None + _TESTNESTEDMAP_MAPMAPENTRY._serialized_options = b'8\001' + _TESTSTRINGMAP_STRINGMAPENTRY._options = None + _TESTSTRINGMAP_STRINGMAPENTRY._serialized_options = b'8\001' + _TESTBOOLVALUE_BOOLMAPENTRY._options = None + _TESTBOOLVALUE_BOOLMAPENTRY._serialized_options = b'8\001' + _ENUMTYPE._serialized_start=4849 + _ENUMTYPE._serialized_end=4877 + _MESSAGETYPE._serialized_start=277 + _MESSAGETYPE._serialized_end=305 + _TESTMESSAGE._serialized_start=308 + _TESTMESSAGE._serialized_end=968 + _TESTONEOF._serialized_start=971 + _TESTONEOF._serialized_end=1239 + _TESTMAP._serialized_start=1242 + _TESTMAP._serialized_end=1851 + _TESTMAP_BOOLMAPENTRY._serialized_start=1557 + _TESTMAP_BOOLMAPENTRY._serialized_end=1603 + _TESTMAP_INT32MAPENTRY._serialized_start=1605 + _TESTMAP_INT32MAPENTRY._serialized_end=1652 + _TESTMAP_INT64MAPENTRY._serialized_start=1654 + _TESTMAP_INT64MAPENTRY._serialized_end=1701 + _TESTMAP_UINT32MAPENTRY._serialized_start=1703 + _TESTMAP_UINT32MAPENTRY._serialized_end=1751 + _TESTMAP_UINT64MAPENTRY._serialized_start=1753 + _TESTMAP_UINT64MAPENTRY._serialized_end=1801 + _TESTMAP_STRINGMAPENTRY._serialized_start=1803 + _TESTMAP_STRINGMAPENTRY._serialized_end=1851 + _TESTNESTEDMAP._serialized_start=1854 + _TESTNESTEDMAP._serialized_end=2627 + _TESTNESTEDMAP_BOOLMAPENTRY._serialized_start=1557 + _TESTNESTEDMAP_BOOLMAPENTRY._serialized_end=1603 + _TESTNESTEDMAP_INT32MAPENTRY._serialized_start=1605 + _TESTNESTEDMAP_INT32MAPENTRY._serialized_end=1652 + _TESTNESTEDMAP_INT64MAPENTRY._serialized_start=1654 + _TESTNESTEDMAP_INT64MAPENTRY._serialized_end=1701 + _TESTNESTEDMAP_UINT32MAPENTRY._serialized_start=1703 + _TESTNESTEDMAP_UINT32MAPENTRY._serialized_end=1751 + _TESTNESTEDMAP_UINT64MAPENTRY._serialized_start=1753 + _TESTNESTEDMAP_UINT64MAPENTRY._serialized_end=1801 + _TESTNESTEDMAP_STRINGMAPENTRY._serialized_start=1803 + _TESTNESTEDMAP_STRINGMAPENTRY._serialized_end=1851 + _TESTNESTEDMAP_MAPMAPENTRY._serialized_start=2559 + _TESTNESTEDMAP_MAPMAPENTRY._serialized_end=2627 + _TESTSTRINGMAP._serialized_start=2629 + _TESTSTRINGMAP._serialized_end=2752 + _TESTSTRINGMAP_STRINGMAPENTRY._serialized_start=2704 + _TESTSTRINGMAP_STRINGMAPENTRY._serialized_end=2752 + _TESTWRAPPER._serialized_start=2755 + _TESTWRAPPER._serialized_end=3761 + _TESTTIMESTAMP._serialized_start=3763 + _TESTTIMESTAMP._serialized_end=3873 + _TESTDURATION._serialized_start=3875 + _TESTDURATION._serialized_end=3982 + _TESTFIELDMASK._serialized_start=3984 + _TESTFIELDMASK._serialized_end=4042 + _TESTSTRUCT._serialized_start=4044 + _TESTSTRUCT._serialized_end=4145 + _TESTANY._serialized_start=4147 + _TESTANY._serialized_end=4239 + _TESTVALUE._serialized_start=4241 + _TESTVALUE._serialized_end=4339 + _TESTLISTVALUE._serialized_start=4341 + _TESTLISTVALUE._serialized_end=4451 + _TESTBOOLVALUE._serialized_start=4454 + _TESTBOOLVALUE._serialized_end=4591 + _TESTBOOLVALUE_BOOLMAPENTRY._serialized_start=1557 + _TESTBOOLVALUE_BOOLMAPENTRY._serialized_end=1603 + _TESTCUSTOMJSONNAME._serialized_start=4593 + _TESTCUSTOMJSONNAME._serialized_end=4636 + _TESTEXTENSIONS._serialized_start=4638 + _TESTEXTENSIONS._serialized_end=4712 + _TESTENUMVALUE._serialized_start=4715 + _TESTENUMVALUE._serialized_end=4847 +# @@protoc_insertion_point(module_scope) diff --git 
a/env/lib/python3.8/site-packages/google/protobuf/wrappers_pb2.py b/env/lib/python3.8/site-packages/google/protobuf/wrappers_pb2.py new file mode 100644 index 00000000..e49eb4c1 --- /dev/null +++ b/env/lib/python3.8/site-packages/google/protobuf/wrappers_pb2.py @@ -0,0 +1,42 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: google/protobuf/wrappers.proto +"""Generated protocol buffer code.""" +from google.protobuf.internal import builder as _builder +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1egoogle/protobuf/wrappers.proto\x12\x0fgoogle.protobuf\"\x1c\n\x0b\x44oubleValue\x12\r\n\x05value\x18\x01 \x01(\x01\"\x1b\n\nFloatValue\x12\r\n\x05value\x18\x01 \x01(\x02\"\x1b\n\nInt64Value\x12\r\n\x05value\x18\x01 \x01(\x03\"\x1c\n\x0bUInt64Value\x12\r\n\x05value\x18\x01 \x01(\x04\"\x1b\n\nInt32Value\x12\r\n\x05value\x18\x01 \x01(\x05\"\x1c\n\x0bUInt32Value\x12\r\n\x05value\x18\x01 \x01(\r\"\x1a\n\tBoolValue\x12\r\n\x05value\x18\x01 \x01(\x08\"\x1c\n\x0bStringValue\x12\r\n\x05value\x18\x01 \x01(\t\"\x1b\n\nBytesValue\x12\r\n\x05value\x18\x01 \x01(\x0c\x42\x83\x01\n\x13\x63om.google.protobufB\rWrappersProtoP\x01Z1google.golang.org/protobuf/types/known/wrapperspb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') + +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.wrappers_pb2', globals()) +if _descriptor._USE_C_DESCRIPTORS == False: + + DESCRIPTOR._options = None + DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\rWrappersProtoP\001Z1google.golang.org/protobuf/types/known/wrapperspb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' + _DOUBLEVALUE._serialized_start=51 + _DOUBLEVALUE._serialized_end=79 + _FLOATVALUE._serialized_start=81 + _FLOATVALUE._serialized_end=108 + _INT64VALUE._serialized_start=110 + _INT64VALUE._serialized_end=137 + _UINT64VALUE._serialized_start=139 + _UINT64VALUE._serialized_end=167 + _INT32VALUE._serialized_start=169 + _INT32VALUE._serialized_end=196 + _UINT32VALUE._serialized_start=198 + _UINT32VALUE._serialized_end=226 + _BOOLVALUE._serialized_start=228 + _BOOLVALUE._serialized_end=254 + _STRINGVALUE._serialized_start=256 + _STRINGVALUE._serialized_end=284 + _BYTESVALUE._serialized_start=286 + _BYTESVALUE._serialized_end=313 +# @@protoc_insertion_point(module_scope) diff --git a/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/INSTALLER b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/LICENSE b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/LICENSE new file mode 100644 index 00000000..17bc694e --- /dev/null +++ b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2020 The Ethereum Foundation + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software 
without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/METADATA b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/METADATA new file mode 100644 index 00000000..ebbff8f1 --- /dev/null +++ b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/METADATA @@ -0,0 +1,180 @@ +Metadata-Version: 2.1 +Name: hexbytes +Version: 0.3.0 +Summary: hexbytes: Python `bytes` subclass that decodes hex, with a readable console output +Home-page: https://github.com/ethereum/hexbytes +Author: The Ethereum Foundation +Author-email: snakecharmers@ethereum.org +License: MIT +Keywords: ethereum +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Natural Language :: English +Classifier: Operating System :: MacOS +Classifier: Operating System :: POSIX +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Requires-Python: >=3.7, <4 +Description-Content-Type: text/markdown +License-File: LICENSE +Provides-Extra: dev +Requires-Dist: bumpversion (<1,>=0.5.3) ; extra == 'dev' +Requires-Dist: pytest-watch (<5,>=4.1.0) ; extra == 'dev' +Requires-Dist: wheel ; extra == 'dev' +Requires-Dist: twine ; extra == 'dev' +Requires-Dist: ipython ; extra == 'dev' +Requires-Dist: pytest (<8,>=7) ; extra == 'dev' +Requires-Dist: pytest-xdist ; extra == 'dev' +Requires-Dist: tox (<4,>=3.25.1) ; extra == 'dev' +Requires-Dist: hypothesis (<=6.31.6,>=3.44.24) ; extra == 'dev' +Requires-Dist: eth-utils (<3,>=1.0.1) ; extra == 'dev' +Requires-Dist: flake8 (==3.7.9) ; extra == 'dev' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'dev' +Requires-Dist: mypy (==0.971) ; extra == 'dev' +Requires-Dist: pydocstyle (<6,>=5.0.0) ; extra == 'dev' +Requires-Dist: black (<23,>=22) ; extra == 'dev' +Requires-Dist: Sphinx (<5,>=4.0.0) ; extra == 'dev' +Requires-Dist: sphinx-rtd-theme (<1,>=0.1.9) ; extra == 'dev' +Requires-Dist: towncrier (<22,>=21) ; extra == 'dev' +Provides-Extra: doc +Requires-Dist: Sphinx (<5,>=4.0.0) ; extra == 'doc' +Requires-Dist: sphinx-rtd-theme (<1,>=0.1.9) ; extra == 'doc' +Requires-Dist: towncrier (<22,>=21) ; extra == 'doc' +Provides-Extra: lint +Requires-Dist: flake8 (==3.7.9) ; extra == 'lint' +Requires-Dist: isort (<5,>=4.2.15) ; extra == 'lint' +Requires-Dist: mypy (==0.971) ; extra == 'lint' +Requires-Dist: pydocstyle (<6,>=5.0.0) ; extra == 'lint' +Requires-Dist: black (<23,>=22) ; extra == 'lint' +Provides-Extra: 
test +Requires-Dist: pytest (<8,>=7) ; extra == 'test' +Requires-Dist: pytest-xdist ; extra == 'test' +Requires-Dist: tox (<4,>=3.25.1) ; extra == 'test' +Requires-Dist: hypothesis (<=6.31.6,>=3.44.24) ; extra == 'test' +Requires-Dist: eth-utils (<3,>=1.0.1) ; extra == 'test' + +# HexBytes + +[![Join the chat at https://gitter.im/ethereum/web3.py](https://badges.gitter.im/ethereum/web3.py.svg)](https://gitter.im/ethereum/web3.py?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +[![Build Status](https://circleci.com/gh/ethereum/hexbytes.svg?style=shield)](https://circleci.com/gh/ethereum/hexbytes) +[![PyPI version](https://badge.fury.io/py/hexbytes.svg)](https://badge.fury.io/py/hexbytes) +[![Python versions](https://img.shields.io/pypi/pyversions/hexbytes.svg)](https://pypi.python.org/pypi/hexbytes) +[![Docs build](https://readthedocs.org/projects/hexbytes/badge/?version=latest)](http://hexbytes.readthedocs.io/en/latest/?badge=latest) + + +Python `bytes` subclass that decodes hex, with a readable console output + +Read more in the [documentation on ReadTheDocs](https://hexbytes.readthedocs.io/). [View the change log](https://hexbytes.readthedocs.io/en/latest/release_notes.html). + +## Quickstart + +```sh +pip install hexbytes +``` + +```py +# convert from bytes to a prettier representation at the console +>>> HexBytes(b"\x03\x08wf\xbfh\xe7\x86q\xd1\xeaCj\xe0\x87\xdat\xa1'a\xda\xc0 \x01\x1a\x9e\xdd\xc4\x90\x0b\xf1;") +HexBytes('0x03087766bf68e78671d1ea436ae087da74a12761dac020011a9eddc4900bf13b') + +# HexBytes accepts the hex string representation as well, ignoring case and 0x prefixes +>>> hb = HexBytes('03087766BF68E78671D1EA436AE087DA74A12761DAC020011A9EDDC4900BF13B') +HexBytes('0x03087766bf68e78671d1ea436ae087da74a12761dac020011a9eddc4900bf13b') + +# get the first byte: +>>> hb[0] +3 + +# show how many bytes are in the value +>>> len(hb) +32 + +# cast back to the basic `bytes` type +>>> bytes(hb) +b"\x03\x08wf\xbfh\xe7\x86q\xd1\xeaCj\xe0\x87\xdat\xa1'a\xda\xc0 \x01\x1a\x9e\xdd\xc4\x90\x0b\xf1;" +``` + +## Developer Setup + +If you would like to hack on hexbytes, please check out the [Snake Charmers +Tactical Manual](https://github.com/ethereum/snake-charmers-tactical-manual) +for information on how we do: + +- Testing +- Pull Requests +- Code Style +- Documentation + +### Development Environment Setup + +You can set up your dev environment with: + +```sh +git clone git@github.com:carver/hexbytes.git +cd hexbytes +virtualenv -p python3 venv +. venv/bin/activate +pip install -e .[dev] +``` + +### Testing Setup + +During development, you might like to have tests run on every file save. 
+ +Show flake8 errors on file change: + +```sh +# Test flake8 +when-changed -v -s -r -1 hexbytes/ tests/ -c "clear; flake8 hexbytes tests && echo 'flake8 success' || echo 'error'" +``` + +Run multi-process tests in one command, but without color: + +```sh +# in the project root: +pytest --numprocesses=4 --looponfail --maxfail=1 +# the same thing, succinctly: +pytest -n 4 -f --maxfail=1 +``` + +Run in one thread, with color and desktop notifications: + +```sh +cd venv +ptw --onfail "notify-send -t 5000 'Test failure ⚠⚠⚠⚠⚠' 'python 3 test on hexbytes failed'" ../tests ../hexbytes +``` + +### Release setup + +For Debian-like systems: +``` +apt install pandoc +``` + +To release a new version: + +```sh +make release bump=$$VERSION_PART_TO_BUMP$$ +``` + +#### How to bumpversion + +The version format for this repo is `{major}.{minor}.{patch}` for stable, and +`{major}.{minor}.{patch}-{stage}.{devnum}` for unstable (`stage` can be alpha or beta). + +To issue the next version in line, specify which part to bump, +like `make release bump=minor` or `make release bump=devnum`. This is typically done from the +master branch, except when releasing a beta (in which case the beta is released from master, +and the previous stable branch is released from said branch). + +If you are in a beta version, `make release bump=stage` will switch to a stable. + +To issue an unstable version when the current version is stable, specify the +new version explicitly, like `make release bump="--new-version 4.0.0-alpha.1 devnum"` + + diff --git a/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/RECORD b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/RECORD new file mode 100644 index 00000000..8a4ffe4d --- /dev/null +++ b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/RECORD @@ -0,0 +1,13 @@ +hexbytes-0.3.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +hexbytes-0.3.0.dist-info/LICENSE,sha256=390jwXuDReXaeTef0eT63FrFKogAMVgYBUWl1oGk4ec,1090 +hexbytes-0.3.0.dist-info/METADATA,sha256=ViRiW6HWDLN7yCqjOl_44MEk5kiuN0jnSciKuqu9igk,6406 +hexbytes-0.3.0.dist-info/RECORD,, +hexbytes-0.3.0.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +hexbytes-0.3.0.dist-info/top_level.txt,sha256=tBdxpEf2kC9G9g9MHsuQ7DtZQKR7mwum2teBZpzi0rA,9 +hexbytes/__init__.py,sha256=CTEC38p8BZiDRds2iANHMTjVspmjXOVzkvF68SPwKjA,60 +hexbytes/__pycache__/__init__.cpython-38.pyc,, +hexbytes/__pycache__/_utils.cpython-38.pyc,, +hexbytes/__pycache__/main.cpython-38.pyc,, +hexbytes/_utils.py,sha256=hUEDsNJ8WJqYBENOML0S1ni8Lnf2veYB0bCmjM1avCI,1687 +hexbytes/main.py,sha256=HinKu9VtCFPxWEh8B58eFBti0F3NH9wWZxzySNye1A4,1954 +hexbytes/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 diff --git a/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/WHEEL b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/top_level.txt b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/top_level.txt new file mode 100644 index 00000000..ee9ebe79 --- /dev/null +++ b/env/lib/python3.8/site-packages/hexbytes-0.3.0.dist-info/top_level.txt @@ -0,0 +1 @@ +hexbytes diff --git a/env/lib/python3.8/site-packages/hexbytes/__init__.py 
b/env/lib/python3.8/site-packages/hexbytes/__init__.py new file mode 100644 index 00000000..a00261e5 --- /dev/null +++ b/env/lib/python3.8/site-packages/hexbytes/__init__.py @@ -0,0 +1,5 @@ +from .main import ( + HexBytes, +) + +__all__ = ["HexBytes"] diff --git a/env/lib/python3.8/site-packages/hexbytes/_utils.py b/env/lib/python3.8/site-packages/hexbytes/_utils.py new file mode 100644 index 00000000..55517282 --- /dev/null +++ b/env/lib/python3.8/site-packages/hexbytes/_utils.py @@ -0,0 +1,54 @@ +import binascii +from typing import ( + Union, +) + + +def to_bytes(val: Union[bool, bytearray, bytes, int, str, memoryview]) -> bytes: + """ + Equivalent to: `eth_utils.hexstr_if_str(eth_utils.to_bytes, val)`. + + Convert a hex string, integer, or bool, to a bytes representation. + Alternatively, pass through bytes or bytearray as a bytes value. + """ + if isinstance(val, bytes): + return val + elif isinstance(val, str): + return hexstr_to_bytes(val) + elif isinstance(val, bytearray): + return bytes(val) + elif isinstance(val, bool): + return b"\x01" if val else b"\x00" + elif isinstance(val, int): + # Note that this int check must come after the bool check, because + # isinstance(True, int) is True + if val < 0: + raise ValueError(f"Cannot convert negative integer {val} to bytes") + else: + return to_bytes(hex(val)) + elif isinstance(val, memoryview): + return bytes(val) + else: + raise TypeError(f"Cannot convert {val!r} of type {type(val)} to bytes") + + +def hexstr_to_bytes(hexstr: str) -> bytes: + if hexstr.startswith(("0x", "0X")): + non_prefixed_hex = hexstr[2:] + else: + non_prefixed_hex = hexstr + + # if the hex string is odd-length, then left-pad it to an even length + if len(hexstr) % 2: + padded_hex = "0" + non_prefixed_hex + else: + padded_hex = non_prefixed_hex + + try: + ascii_hex = padded_hex.encode("ascii") + except UnicodeEncodeError: + raise ValueError( + f"hex string {padded_hex} may only contain [0-9a-fA-F] characters" + ) + else: + return binascii.unhexlify(ascii_hex) diff --git a/env/lib/python3.8/site-packages/hexbytes/main.py b/env/lib/python3.8/site-packages/hexbytes/main.py new file mode 100644 index 00000000..cc90ca29 --- /dev/null +++ b/env/lib/python3.8/site-packages/hexbytes/main.py @@ -0,0 +1,69 @@ +import sys +from typing import ( + TYPE_CHECKING, + Type, + Union, + cast, + overload, +) + +from ._utils import ( + to_bytes, +) + +if TYPE_CHECKING: + # remove this branch once hexbytes requires python>=3.8 + # SupportsIndex was added to typing in 3.8 + if sys.version_info >= (3, 8): + from typing import SupportsIndex + else: + from typing_extensions import SupportsIndex # noqa: F401 + + +BytesLike = Union[bool, bytearray, bytes, int, str, memoryview] + + +class HexBytes(bytes): + """ + HexBytes is a *very* thin wrapper around the python built-in :class:`bytes` class. + + It has these three changes: + 1. Accepts more initializing values, like hex strings, non-negative integers, + and booleans + 2. Returns hex with prefix '0x' from :meth:`HexBytes.hex` + 3. The representation at console is in hex + """ + + def __new__(cls: Type[bytes], val: BytesLike) -> "HexBytes": + bytesval = to_bytes(val) + return cast(HexBytes, super().__new__(cls, bytesval)) # type: ignore # https://github.com/python/typeshed/issues/2630 # noqa: E501 + + def hex( + self, sep: Union[str, bytes] = None, bytes_per_sep: "SupportsIndex" = 1 + ) -> str: + """ + Output hex-encoded bytes, with an "0x" prefix. + + Everything following the "0x" is output exactly like :meth:`bytes.hex`.
+ """ + return "0x" + super().hex() + + @overload + def __getitem__(self, key: "SupportsIndex") -> int: # noqa: F811 + ... + + @overload # noqa: F811 + def __getitem__(self, key: slice) -> "HexBytes": # noqa: F811 + ... + + def __getitem__( # noqa: F811 + self, key: Union["SupportsIndex", slice] + ) -> Union[int, bytes, "HexBytes"]: + result = super().__getitem__(key) + if hasattr(result, "hex"): + return type(self)(result) + else: + return result + + def __repr__(self) -> str: + return f"HexBytes({self.hex()!r})" diff --git a/env/lib/python3.8/site-packages/hexbytes/py.typed b/env/lib/python3.8/site-packages/hexbytes/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/INSTALLER b/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/INSTALLER new file mode 100644 index 00000000..2bd745f6 --- /dev/null +++ b/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/INSTALLER @@ -0,0 +1 @@ +poetry \ No newline at end of file diff --git a/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/METADATA b/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/METADATA new file mode 100644 index 00000000..1a9ed392 --- /dev/null +++ b/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/METADATA @@ -0,0 +1,19 @@ +Metadata-Version: 2.1 +Name: horus-compile +Version: 0.0.6.3 +Summary: Use formally verified annotations in your Cairo code +Author: Nethermind +Author-email: hello@nethermind.io +Requires-Python: >=3.7,<3.10 +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.7 +Requires-Dist: cairo-lang (==0.10.1) +Requires-Dist: eth-utils (>=1.2.0,<2.0.0) +Requires-Dist: lark (>=1.1.4,<2.0.0) +Requires-Dist: marshmallow (>=3.15.0,<4.0.0) +Requires-Dist: marshmallow-dataclass (>=7.1.0,<8.5.4) +Requires-Dist: z3-solver (>=4.8.15,<5.0.0) diff --git a/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/RECORD b/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/RECORD new file mode 100644 index 00000000..22d2666d --- /dev/null +++ b/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/RECORD @@ -0,0 +1,7 @@ +/mnt/c/Users/elija/Documents/projects/horus-compile/env/lib/python3.8/site-packages/horus_compile.pth,sha256=Uv_d0NUMSHfh97W391zMQTlcG6UT1UxIXgLSny5TnhE,56 +/mnt/c/Users/elija/Documents/projects/horus-compile/env/bin/horus-compile,sha256=Z4c8ETPAM9np9a6iW23r3wC0OYwYQuxA-T6Kmz7JWMM,173 +/mnt/c/Users/elija/Documents/projects/horus-compile/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/METADATA,sha256=-60vwv8EzNE9KLwQ5S9C0oyvDL9JckhpTmzaguk7adk,772 +/mnt/c/Users/elija/Documents/projects/horus-compile/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/INSTALLER,sha256=gcMg69IJ3d31kOPdejOXCeJ_7u2OVE9WaZ5A1idCOi0,6 +/mnt/c/Users/elija/Documents/projects/horus-compile/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/entry_points.txt,sha256=is80DyHn1FT3vO_d52klQfdOFgFXOmeCxjX38XYDIBQ,66 +/mnt/c/Users/elija/Documents/projects/horus-compile/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/direct_url.json,sha256=6PZEkpMr7Jv-B88ogI5h7A7t4sUHPzCmYTkxF5uL2F0,101 
+/mnt/c/Users/elija/Documents/projects/horus-compile/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/RECORD,, diff --git a/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/entry_points.txt b/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/entry_points.txt new file mode 100644 index 00000000..1142e045 --- /dev/null +++ b/env/lib/python3.8/site-packages/horus_compile-0.0.6.3.dist-info/entry_points.txt @@ -0,0 +1,3 @@ +[console_scripts] +horus-compile=horus.compiler.horus_compile:run + diff --git a/env/lib/python3.8/site-packages/horus_compile.pth b/env/lib/python3.8/site-packages/horus_compile.pth new file mode 100644 index 00000000..7737b2b1 --- /dev/null +++ b/env/lib/python3.8/site-packages/horus_compile.pth @@ -0,0 +1 @@ +/mnt/c/Users/elija/Documents/projects/horus-compile/src diff --git a/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/AUTHORS.rst b/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/AUTHORS.rst new file mode 100644 index 00000000..90401390 --- /dev/null +++ b/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/AUTHORS.rst @@ -0,0 +1,66 @@ +Credits +======= + +``html5lib`` is written and maintained by: + +- James Graham +- Sam Sneddon +- Łukasz Langa +- Will Kahn-Greene + + +Patches and suggestions +----------------------- +(In chronological order, by first commit:) + +- Anne van Kesteren +- Lachlan Hunt +- lantis63 +- Sam Ruby +- Thomas Broyer +- Tim Fletcher +- Mark Pilgrim +- Ryan King +- Philip Taylor +- Edward Z. Yang +- fantasai +- Philip Jägenstedt +- Ms2ger +- Mohammad Taha Jahangir +- Andy Wingo +- Andreas Madsack +- Karim Valiev +- Juan Carlos Garcia Segovia +- Mike West +- Marc DM +- Simon Sapin +- Michael[tm] Smith +- Ritwik Gupta +- Marc Abramowitz +- Tony Lopes +- lilbludevil +- Kevin +- Drew Hubl +- Austin Kumbera +- Jim Baker +- Jon Dufresne +- Donald Stufft +- Alex Gaynor +- Nik Nyby +- Jakub Wilk +- Sigmund Cherem +- Gabi Davar +- Florian Mounier +- neumond +- Vitalik Verhovodov +- Kovid Goyal +- Adam Chainz +- John Vandenberg +- Eric Amorde +- Benedikt Morbach +- Jonathan Vanasco +- Tom Most +- Ville Skyttä +- Hugo van Kemenade +- Mark Vasilkov + diff --git a/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/INSTALLER b/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/LICENSE b/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/LICENSE new file mode 100644 index 00000000..c87fa7a0 --- /dev/null +++ b/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/LICENSE @@ -0,0 +1,20 @@ +Copyright (c) 2006-2013 James Graham and other contributors + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/METADATA b/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/METADATA new file mode 100644 index 00000000..ee83c1f8 --- /dev/null +++ b/env/lib/python3.8/site-packages/html5lib-1.1.dist-info/METADATA @@ -0,0 +1,550 @@ +Metadata-Version: 2.1 +Name: html5lib +Version: 1.1 +Summary: HTML parser based on the WHATWG HTML specification +Home-page: https://github.com/html5lib/html5lib-python +Maintainer: James Graham +Maintainer-email: james@hoppipolla.co.uk +License: MIT License +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Text Processing :: Markup :: HTML +Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.* +Requires-Dist: six (>=1.9) +Requires-Dist: webencodings +Provides-Extra: all +Requires-Dist: genshi ; extra == 'all' +Requires-Dist: chardet (>=2.2) ; extra == 'all' +Requires-Dist: lxml ; (platform_python_implementation == 'CPython') and extra == 'all' +Provides-Extra: chardet +Requires-Dist: chardet (>=2.2) ; extra == 'chardet' +Provides-Extra: genshi +Requires-Dist: genshi ; extra == 'genshi' +Provides-Extra: lxml +Requires-Dist: lxml ; (platform_python_implementation == 'CPython') and extra == 'lxml' + +html5lib +======== + +.. image:: https://travis-ci.org/html5lib/html5lib-python.svg?branch=master + :target: https://travis-ci.org/html5lib/html5lib-python + + +html5lib is a pure-python library for parsing HTML. It is designed to +conform to the WHATWG HTML specification, as is implemented by all major +web browsers. + + +Usage +----- + +Simple usage follows this pattern: + +.. code-block:: python + + import html5lib + with open("mydocument.html", "rb") as f: + document = html5lib.parse(f) + +or: + +.. code-block:: python + + import html5lib + document = html5lib.parse("<p>Hello World!") + +By default, the ``document`` will be an ``xml.etree`` element instance. +Whenever possible, html5lib chooses the accelerated ``ElementTree`` +implementation (i.e. ``xml.etree.cElementTree`` on Python 2.x). + +Two other tree types are supported: ``xml.dom.minidom`` and +``lxml.etree``. To use an alternative format, specify the name of +a treebuilder: + +.. code-block:: python + + import html5lib + with open("mydocument.html", "rb") as f: + lxml_etree_document = html5lib.parse(f, treebuilder="lxml") + +When using with ``urllib2`` (Python 2), the charset from HTTP should be +passed into html5lib as follows: + +.. code-block:: python + + from contextlib import closing + from urllib2 import urlopen + import html5lib + + with closing(urlopen("http://example.com/")) as f: + document = html5lib.parse(f, transport_encoding=f.info().getparam("charset")) + +When using with ``urllib.request`` (Python 3), the charset from HTTP +should be passed into html5lib as follows: + +.. code-block:: python + + from urllib.request import urlopen + import html5lib + + with urlopen("http://example.com/") as f: + document = html5lib.parse(f, transport_encoding=f.info().get_content_charset()) + +To have more control over the parser, create a parser object explicitly. +For instance, to make the parser raise exceptions on parse errors, use: + +.. code-block:: python + + import html5lib + with open("mydocument.html", "rb") as f: + parser = html5lib.HTMLParser(strict=True) + document = parser.parse(f) + +When you're instantiating parser objects explicitly, pass a treebuilder +class as the ``tree`` keyword argument to use an alternative document +format: + +.. code-block:: python + + import html5lib + parser = html5lib.HTMLParser(tree=html5lib.getTreeBuilder("dom")) + minidom_document = parser.parse("<p>Hello World!") + +More documentation is available at https://html5lib.readthedocs.io/. + + +Installation +------------ + +html5lib works on CPython 2.7+, CPython 3.5+ and PyPy. To install: + +.. code-block:: bash + + $ pip install html5lib + +The goal is to support a (non-strict) superset of the versions that pip +supports. + +Optional Dependencies +--------------------- + +The following third-party libraries may be used for additional +functionality: + +- ``lxml`` is supported as a tree format (for both building and + walking) under CPython (but *not* PyPy where it is known to cause + segfaults); + +- ``genshi`` has a treewalker (but not builder); and + +- ``chardet`` can be used as a fallback when character encoding cannot + be determined. + + +Bugs +---- + +Please report any bugs on the issue tracker. + + +Tests +----- + +Unit tests require the ``pytest`` and ``mock`` libraries and can be +run using the ``py.test`` command in the root directory. + +Test data are contained in a separate html5lib-tests +repository and included +as a submodule, thus for git checkouts they must be initialized:: + + $ git submodule init + $ git submodule update + +If you have all compatible Python implementations available on your +system, you can run tests on all of them using the ``tox`` utility, +which can be found on PyPI. + + +Questions? +---------- + +There's a mailing list available for support on Google Groups, +html5lib-discuss, +though you may get a quicker response asking on IRC in #whatwg on +irc.freenode.net. + +Change Log +---------- + +1.1 +~~~ + +UNRELEASED + +Breaking changes: + +* Drop support for Python 3.3. (#358) +* Drop support for Python 3.4. (#421) + +Deprecations: + +* Deprecate the ``html5lib`` sanitizer (``html5lib.serialize(sanitize=True)`` and + ``html5lib.filters.sanitizer``). We recommend users migrate to Bleach. + Please let us know if Bleach doesn't suffice for your + use. (#443) + +Other changes: + +* Try to import from ``collections.abc`` to remove DeprecationWarning and ensure + ``html5lib`` keeps working in future Python versions. (#403) +* Drop optional ``datrie`` dependency. (#442) + + +1.0.1 +~~~~~ + +Released on December 7, 2017 + +Breaking changes: + +* Drop support for Python 2.6. (#330) (Thank you, Hugo, Will Kahn-Greene!) +* Remove ``utils/spider.py`` (#353) (Thank you, Jon Dufresne!) + +Features: + +* Improve documentation. (#300, #307) (Thank you, Jon Dufresne, Tom Most, + Will Kahn-Greene!) +* Add iframe seamless boolean attribute. (Thank you, Ritwik Gupta!) +* Add itemscope as a boolean attribute. (#194) (Thank you, Jonathan Vanasco!) +* Support Python 3.6. (#333) (Thank you, Jon Dufresne!) +* Add CI support for Windows using AppVeyor. (Thank you, John Vandenberg!) +* Improve testing and CI and add code coverage. (#323, #334) (Thank you, Jon + Dufresne, John Vandenberg, Sam Sneddon, Will Kahn-Greene!) +* Semver-compliant version number. + +Bug fixes: + +* Add support for setuptools < 18.5 to support environment markers. (Thank you, + John Vandenberg!) +* Add explicit dependency for six >= 1.9. (Thank you, Eric Amorde!) +* Fix regexes to work with Python 3.7 regex adjustments. (#318, #379) (Thank + you, Benedikt Morbach, Ville Skyttä, Mark Vasilkov!) +* Fix alphabeticalattributes filter namespace bug. (#324) (Thank you, Will + Kahn-Greene!) +* Include license file in generated wheel package. (#350) (Thank you, Jon + Dufresne!) +* Fix annotation-xml typo. (#339) (Thank you, Will Kahn-Greene!)
+* Allow uppercase hex characters in CSS colour check. (#377) (Thank you, + Komal Dembla, Hugo!) + + +1.0 +~~~ + +Released and unreleased on December 7, 2017. Badly packaged release. + + +0.999999999/1.0b10 +~~~~~~~~~~~~~~~~~~ + +Released on July 15, 2016 + +* Fix attribute order going to the tree builder to be document order + instead of reverse document order(!). + + +0.99999999/1.0b9 +~~~~~~~~~~~~~~~~ + +Released on July 14, 2016 + +* **Added ordereddict as a mandatory dependency on Python 2.6.** + +* Added ``lxml``, ``genshi``, ``datrie``, ``charade``, and ``all`` + extras that will do the right thing based on the specific + interpreter implementation. + +* Now requires the ``mock`` package for the testsuite. + +* Cease supporting DATrie under PyPy. + +* **Remove PullDOM support, as this hasn't ever been properly + tested, doesn't entirely work, and as far as I can tell is + completely unused by anyone.** + +* Move testsuite to ``py.test``. + +* **Fix #124: move to webencodings for decoding the input byte stream; + this makes html5lib compliant with the Encoding Standard, and + introduces a required dependency on webencodings.** + +* **Cease supporting Python 3.2 (in both CPython and PyPy forms).** + +* **Fix comments containing double-dash with lxml 3.5 and above.** + +* **Use scripting disabled by default (as we don't implement + scripting).** + +* **Fix #11, avoiding the XSS bug potentially caused by serializer + allowing attribute values to be escaped out of in old browser versions, + changing the quote_attr_values option on serializer to take one of + three values, "always" (the old True value), "legacy" (the new option, + and the new default), and "spec" (the old False value, and the old + default).** + +* **Fix #72 by rewriting the sanitizer to apply only to treewalkers + (instead of the tokenizer); as such, this will require amending all + callers of it to use it via the treewalker API.** + +* **Drop support of charade, now that chardet is supported once more.** + +* **Replace the charset keyword argument on parse and related methods + with a set of keyword arguments: override_encoding, transport_encoding, + same_origin_parent_encoding, likely_encoding, and default_encoding.** + +* **Move filters._base, treebuilder._base, and treewalkers._base to .base + to clarify their status as public.** + +* **Get rid of the sanitizer package. Merge sanitizer.sanitize into the + sanitizer.htmlsanitizer module and move that to sanitizer. This means + anyone who used sanitizer.sanitize or sanitizer.HTMLSanitizer needs no + code changes.** + +* **Rename treewalkers.lxmletree to .etree_lxml and + treewalkers.genshistream to .genshi to have a consistent API.** + +* Move a whole load of stuff (inputstream, ihatexml, trie, tokenizer, + utils) to be underscore prefixed to clarify their status as private. + + +0.9999999/1.0b8 +~~~~~~~~~~~~~~~ + +Released on September 10, 2015 + +* Fix #195: fix the sanitizer to drop broken URLs (it threw an + exception between 0.9999 and 0.999999). + + +0.999999/1.0b7 +~~~~~~~~~~~~~~ + +Released on July 7, 2015 + +* Fix #189: fix the sanitizer to allow relative URLs again (as it did + prior to 0.9999/1.0b5). + + +0.99999/1.0b6 +~~~~~~~~~~~~~ + +Released on April 30, 2015 + +* Fix #188: fix the sanitizer to not throw an exception when sanitizing + bogus data URLs. + + +0.9999/1.0b5 +~~~~~~~~~~~~ + +Released on April 29, 2015 + +* Fix #153: Sanitizer fails to treat some attributes as URLs. Despite how + this sounds, this has no known security implications.
No known version + of IE (5.5 to current), Firefox (3 to current), Safari (6 to current), + Chrome (1 to current), or Opera (12 to current) will run any script + provided in these attributes. + +* Pass error message to the ParseError exception in strict parsing mode. + +* Allow data URIs in the sanitizer, with a whitelist of content-types. + +* Add support for Python implementations that don't support lone + surrogates (read: Jython). Fixes #2. + +* Remove localization of error messages. This functionality was totally + unused (and untested that everything was localizable), so we may as + well follow numerous browsers in not supporting translating technical + strings. + +* Expose treewalkers.pprint as a public API. + +* Add a documentEncoding property to HTML5Parser, fix #121. + + +0.999 +~~~~~ + +Released on December 23, 2013 + +* Fix #127: add work-around for CPython issue #20007: .read(0) on + http.client.HTTPResponse drops the rest of the content. + +* Fix #115: lxml treewalker can now deal with fragments containing, at + their root level, text nodes with non-ASCII characters on Python 2. + + +0.99 +~~~~ + +Released on September 10, 2013 + +* No library changes from 1.0b3; released as 0.99 as pip has changed + behaviour from 1.4 to avoid installing pre-release versions per + PEP 440. + + +1.0b3 +~~~~~ + +Released on July 24, 2013 + +* Removed ``RecursiveTreeWalker`` from ``treewalkers._base``. Any + implementation using it should be moved to + ``NonRecursiveTreeWalker``, as everything bundled with html5lib has + for years. + +* Fix #67 so that ``BufferedStream`` correctly returns a bytes + object, thereby fixing any case where html5lib is passed a + non-seekable RawIOBase-like object. + + +1.0b2 +~~~~~ + +Released on June 27, 2013 + +* Removed reordering of attributes within the serializer. There is now + an ``alphabetical_attributes`` option which preserves the previous + behaviour through a new filter. This allows attribute order to be + preserved through html5lib if the tree builder preserves order. + +* Removed ``dom2sax`` from DOM treebuilders. It has been replaced by + ``treeadapters.sax.to_sax`` which is generic and supports any + treewalker; it also resolves all known bugs with ``dom2sax``. + +* Fix treewalker assertions on hitting bytes strings on + Python 2. Previous to 1.0b1, treewalkers coped with mixed + bytes/unicode data on Python 2; this reintroduces this prior + behaviour on Python 2. Behaviour is unchanged on Python 3. + + +1.0b1 +~~~~~ + +Released on May 17, 2013 + +* Implementation updated to implement the HTML specification + as of 5th May + 2013 (SVN revision r7867). + +* Python 3.2+ supported in a single codebase using the ``six`` library. + +* Removed support for Python 2.5 and older. + +* Removed the deprecated Beautiful Soup 3 treebuilder. + ``beautifulsoup4`` can use ``html5lib`` as a parser instead. Note that + since it doesn't support namespaces, foreign content like SVG and + MathML is parsed incorrectly. + +* Removed ``simpletree`` from the package. The default tree builder is + now ``etree`` (using the ``xml.etree.cElementTree`` implementation if + available, and ``xml.etree.ElementTree`` otherwise). + +* Removed the ``XHTMLSerializer`` as it never actually guaranteed its + output was well-formed XML, and hence provided little of use. + +* Removed default DOM treebuilder, so ``html5lib.treebuilders.dom`` is no + longer supported.
``html5lib.treebuilders.getTreeBuilder("dom")`` will + return the default DOM treebuilder, which uses ``xml.dom.minidom``. + +* Optional heuristic character encoding detection now based on + ``charade`` for Python 2.6 - 3.3 compatibility. + +* Optional ``Genshi`` treewalker support fixed. + +* Many bugfixes, including: + + * #33: null in attribute value breaks XML AttValue; + + * #4: nested, indirect descendant,